
Signals and Systems
Lecture Notes

Dr. Mahmoud M. Al-Husari

This set of lecture notes is never to be considered a substitute for the textbook recommended by the lecturer.
Contents

1 Introduction
  1.1 Signals and Systems Defined
  1.2 Types of Signals and Systems
    1.2.1 Signals
    1.2.2 Systems

2 Mathematical Description of Signals
  2.1 Classification of CT and DT Signals
    2.1.1 Periodic and Non-periodic Signals
    2.1.2 Deterministic and Random Signals
    2.1.3 Signal Energy and Power
    2.1.4 Even and Odd Functions
  2.2 Useful Signal Operations
    2.2.1 Time Shifting
    2.2.2 Time Scaling
    2.2.3 Time Reflection
    2.2.4 Multiple Transformations
  2.3 Useful Signal Functions
    2.3.1 Complex Exponentials and Sinusoids
    2.3.2 The Unit Step Function
    2.3.3 The Signum Function
    2.3.4 The Unit Ramp Function
    2.3.5 The Rectangle Function
    2.3.6 The Unit Impulse Function
    2.3.7 Some Properties of the Unit Impulse
    2.3.8 The Unit Sinc Function

3 Description of Systems
  3.1 Introduction
  3.2 Systems Characteristics
    3.2.1 Memory
    3.2.2 Invertibility
    3.2.3 Causality
    3.2.4 Stability
    3.2.5 Time Invariance
    3.2.6 Linearity and Superposition
  3.3 Linear Time-invariant Systems
    3.3.1 Time-Domain Analysis of LTI Systems
    3.3.2 The Convolution Sum
  3.4 The Convolution Integral
  3.5 Properties of LTI Systems

4 The Fourier Series
  4.1 Orthogonal Representations of Signals
    4.1.1 Orthogonal Vector Space
    4.1.2 Orthogonal Signal Space
  4.2 Exponential Fourier Series
    4.2.1 The Frequency Spectra (Exponential)
  4.3 Trigonometric Fourier Series
    4.3.1 Compact (Combined) Trigonometric Fourier Series
    4.3.2 The Frequency Spectrum (Trigonometric)
  4.4 Convergence of the Fourier Series
    4.4.1 Dirichlet Conditions
  4.5 Properties of Fourier Series
    4.5.1 Linearity
    4.5.2 Time Shifting
    4.5.3 Frequency Shifting
    4.5.4 Time Reflection
    4.5.5 Time Scaling
    4.5.6 Time Differentiation
    4.5.7 Time Integration
    4.5.8 Multiplication
    4.5.9 Convolution
    4.5.10 Effects of Symmetry
    4.5.11 Parseval's Theorem
  4.6 System Response to Periodic Inputs

5 The Fourier Transform
  5.1 Development of the Fourier Transform
  5.2 Examples of the Fourier Transform
  5.3 Fourier Transform of Periodic Signals
  5.4 Properties of the Fourier Transform
    5.4.1 Linearity
    5.4.2 Time Shifting
    5.4.3 Frequency Shifting (Modulation)
    5.4.4 Time Scaling and Frequency Scaling
    5.4.5 Time Reflection
    5.4.6 Time Differentiation
    5.4.7 Frequency Differentiation
    5.4.8 Time Integration
    5.4.9 Time Convolution
    5.4.10 Frequency Convolution (Multiplication)
    5.4.11 Symmetry - Real and Imaginary Signals
    5.4.12 Symmetry - Even and Odd Signals
    5.4.13 Duality
    5.4.14 Energy of Non-periodic Signals
  5.5 Energy and Power Spectral Density
    5.5.1 The Spectral Density
    5.5.2 Energy Spectral Density
    5.5.3 Power Spectral Density
  5.6 Correlation Functions
    5.6.1 Energy Signals
    5.6.2 Power Signals
    5.6.3 Convolution and Correlation
    5.6.4 Autocorrelation
  5.7 Correlation and the Fourier Transform
    5.7.1 Autocorrelation and the Energy Spectrum
    5.7.2 Autocorrelation and the Power Spectrum

6 Applications of the Fourier Transform
  6.1 Signal Filtering
    6.1.1 Frequency Response
    6.1.2 Ideal Filters
    6.1.3 Bandwidth
    6.1.4 Practical Filters
  6.2 Amplitude Modulation
  6.3 Sampling
    6.3.1 The Sampling Theorem
Chapter 1

Introduction

1.1 Signals and Systems Defined


Course Objective: A mathematical study of signals and systems. Why study signals and systems? The tools taught here are fundamental to all engineering disciplines. Electrical engineering topics such as (but not limited to)
• Speech processing
• Image processing
• Communication
• Control systems
• Advanced circuit analysis
use the tools taught in this course extensively.

What is a signal? Vague definition: A signal is something that contains


information. Many of the examples of signals shown in Figure 1.1 provide us
with information one way or another.

(a) Traffic Signals (b) Human Signals (c) ECG Graph

Figure 1.1: Examples of Signals

Formal Definition: A signal is defined as a function of one or more variables which conveys information on the nature of a physical phenomenon. In other words, any time-varying physical phenomenon which is intended to convey information is a signal.


Signals are processed or operated on by systems. What is a system?


Formal Definition: A system is defined as an entity that manipulates one or
more signals to accomplish a function, thereby yielding new signals. When one
or more excitation signals are applied at one or more system inputs, the system
produces one or more response signals at its outputs. (throughout my lecture
notes I will simply use the terms input and output.)

Figure 1.2: Block diagram of a simple system.

Systems with more than one input and more than one output are called MIMO (Multi-Input Multi-Output) systems. Figures 1.3 through 1.5 show examples of systems.

Figure 1.3: Communication between two people involving signals and signal
processing by systems.

(a) Stock Market (b) Signals & System Course

Figure 1.4: Examples of systems

1.2 Types of Signals and Systems


Signals and systems are classified into two basic types:
• Continuous Time.
• Discrete Time.

Figure 1.5: Electric Circuit

1.2.1 Signals
A continuous-time (CT) signal is one which is defined at every instant of time over some time interval; CT signals are functions of a continuous time variable. We often refer to a CT signal as x(t). The independent variable is time t and can have any real value; the function x(t) is called a CT function because it is defined on a continuum of points in time.

Figure 1.6: Example of CT signal

It is very important to observe here that Figure 1.6(b) illustrates a discontinuous function. At a discontinuity, the limit of the function value as we approach the discontinuity from above is not the same as the limit as we approach the same point from below. Stated mathematically, if the time t = t0 is a point of discontinuity of a function g(t), then

lim_{t→t0+} g(t) ≠ lim_{t→t0−} g(t)

However, the two functions shown in Figure 1.6 are continuous-time


functions because their values are defined on a continuum of times t (t ∈
R), where R is the set of all real values. Therefore, the terms continuous
and continuous time mean different things. A CT function is defined on
a continuum of times, but is not necessarily continuous at every point in
time.

A discrete-time (DT) signal is one which is defined only at discrete points in time and not between them. The independent variable has only a discrete set of values. We often refer to a DT signal as x[n]; here n belongs to the set of all integers Z (n ∈ Z), i.e. n = 0, ±1, ±2, … (Figure 1.7)

Figure 1.7: Example of DT function

1.2.2 Systems
A CT system transforms a continuous time input signal into CT outputs as
shown in Figure 1.8

Figure 1.8: Continuous time system

Similarly a DT system transforms a discrete time input signal to a DT output


signal.

In Engineering disciplines, problems that often arise are of the form

• Analysis problems

• Synthesis problems
In analysis problems one is usually presented with a specific system and is interested in characterizing it in detail to understand how it will respond to various inputs. Synthesis problems, on the other hand, require designing systems to process signals in a particular way to achieve desired outputs. Our main focus in this course is on analysis problems.
Chapter 2

Mathematical Description of Signals

2.1 Classification of CT and DT Signals


2.1.1 Periodic and non-periodic Signals
A periodic function is one which has been repeating an exact pattern for an
infinite period of time and will continue to repeat that exact pattern for an
infinite time. That is, a periodic function x(t) is one for which

x(t) = x(t + nT ) (2.1)

for any integer value of n, where T > 0 is the period of the function and −∞ < t < ∞. The signal repeats itself every T sec. Of course, it also repeats every 2T, 3T and nT; therefore, 2T, 3T and nT are all periods of the function, because the function repeats over any of those intervals. The minimum positive interval over which a function repeats itself is called the fundamental period T0 (Figure 2.1). T0 is the smallest value that satisfies the condition

x(t) = x(t + T0 ) (2.2)

The fundamental frequency f0 of a periodic function is the reciprocal of the fundamental period, f0 = 1/T0. It is measured in Hertz and is the number of cycles (periods) per second. The fundamental angular frequency ω0, measured in radians per second, is

ω0 = 2π/T0 = 2πf0.   (2.3)

Figure 2.1: Example of periodic CT function with fundamental period

Example 2.1: With respect to the signal shown in Figure 2.2, determine the fundamental frequency and the fundamental angular frequency.

Solution: It is clear that the fundamental period T0 = 0.2 sec. Thus,

f0 = 1/0.2 = 5 Hz
ω0 = 2πf0 = 10π rad/sec.

The signal repeats itself 5 times in one second, which can be clearly seen in Figure 2.2.

Figure 2.2: Signal of Example 2.1

Example 2.2: A real-valued sinusoidal signal x(t) can be expressed mathematically by

x(t) = A sin(ω0 t + φ)   (2.4)

Show that x(t) is periodic.

Solution: For x(t) to be periodic it must satisfy the condition x(t) = x(t + T0), thus

x(t + T0) = A sin(ω0(t + T0) + φ) = A sin(ω0 t + φ + ω0 T0)

Recall that sin(α + β) = sin α cos β + cos α sin β, therefore

x(t + T0) = A[sin(ω0 t + φ) cos ω0 T0 + cos(ω0 t + φ) sin ω0 T0]   (2.5)

Substituting the fundamental period T0 = 2π/ω0 in (2.5) yields

x(t + T0) = A[sin(ω0 t + φ) cos 2π + cos(ω0 t + φ) sin 2π]
          = A sin(ω0 t + φ)
          = x(t)
An important question for signal analysis is whether or not the sum of two periodic signals is periodic. Suppose that x1(t) and x2(t) are periodic signals with fundamental periods T1 and T2, respectively. Is the sum x1(t) + x2(t) periodic; that is, is there a positive number T such that

x1(t + T) + x2(t + T) = x1(t) + x2(t)   for all t?   (2.6)

It turns out that (2.6) is satisfied if and only if the ratio T1/T2 can be written as the ratio k/l of two integers k and l. This can be shown by noting that if T1/T2 = k/l, then lT1 = kT2, and since k and l are integers, x1(t) + x2(t) is periodic with period lT1. Thus the expression (2.6) follows with T = lT1. In addition, if k and l are co-prime (i.e. k and l have no common integer factors other than 1), then T = lT1 is the fundamental period of the sum x1(t) + x2(t).

Example 2.3: Let x1(t) = cos(πt/2) and x2(t) = cos(πt/3). Determine whether x1(t) + x2(t) is periodic.

Solution: x1(t) and x2(t) are periodic with the fundamental periods T1 = 4 (since ω1 = π/2 = 2π/T1 ⟹ T1 = 4) and, similarly, T2 = 6. Now

T1/T2 = 4/6 = 2/3

then with k = 2 and l = 3, it follows that the sum x1(t) + x2(t) is periodic with fundamental period T = lT1 = (3)(4) = 12 sec.
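The co-prime ratio test above is easy to check by machine. The following is a minimal Python sketch (not part of the original notes); the periods are those of Example 2.3:

```python
from fractions import Fraction

# Fundamental periods of x1(t) = cos(pi*t/2) and x2(t) = cos(pi*t/3),
# as computed in Example 2.3.
T1 = Fraction(4)
T2 = Fraction(6)

ratio = T1 / T2                            # must be rational for the sum to be periodic
k, l = ratio.numerator, ratio.denominator  # Fraction reduces to lowest terms, so k, l are co-prime
T = l * T1                                 # fundamental period of the sum

print(f"T1/T2 = {k}/{l}, fundamental period of the sum: T = {T} sec")
# prints: T1/T2 = 2/3, fundamental period of the sum: T = 12 sec
```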

2.1.2 Deterministic and Random Signals

Deterministic signals are signals that are completely defined for any instant of time; there is no uncertainty with respect to their value at any point of time. They can also be described mathematically, at least approximately. For example, let a signal x(t) be defined as (Figure 2.3)

x(t) = { 1 − |t|,  −1 < t < 1
       { 0,        otherwise

It is clear that this function is well defined mathematically (Figure 2.3).

Figure 2.3: Example of deterministic signal.

A random signal is one whose values cannot be predicted exactly and cannot be described by any mathematical function. A common name for random signals is noise (Figure 2.4).

Figure 2.4: Examples of Noise


2.1.3 Signal Energy and Power

Size of a Signal

The size of any entity is a number that indicates the largeness or strength of that entity. Generally speaking, the signal amplitude varies with time. How can a signal such as the one shown in Figure 2.5, which exists over a certain time interval with varying amplitude, be measured by one number that indicates the signal size or signal strength? One must consider not only the signal amplitude but also its duration. If, for instance, one wants to measure the size of a human by a single number, one must consider not only his height but also his width. If we make the simplifying assumption that the shape of a person is a cylinder of variable radius r (which varies with height h), then a reasonable measure of the size of a human of height H is his volume, given by

V = π ∫_0^H r²(h) dh

Figure 2.5: What is the size of a signal?

Arguing in this manner, we may consider the area under a signal as a possible measure of its size, because it takes account of not only the amplitude but also the duration. However, this would be a defective measure: a large signal's positive and negative areas could cancel each other, indicating a signal of small size. This difficulty can be corrected by defining the signal size as the area under the square of the signal, which is always positive. We call this measure the Signal Energy E∞, defined for a real signal x(t) as

E∞ = ∫_{−∞}^{∞} x²(t) dt   (2.7)

This can be generalized to a complex-valued signal as

E∞ = ∫_{−∞}^{∞} |x(t)|² dt   (2.8)

(Note that for complex signals |x(t)|² = x(t)x*(t), where x*(t) is the complex conjugate of x(t).) Signal energy for a DT signal is defined in an analogous way as

E∞ = Σ_{n=−∞}^{∞} |x[n]|²   (2.9)

Example 2.4: Find the signal energy of

x(t) = { A,  |t| < T1/2
       { 0,  otherwise

Solution: From the definition in (2.7),

E∞ = ∫_{−∞}^{∞} x²(t) dt = ∫_{−T1/2}^{T1/2} A² dt = [A² t]_{−T1/2}^{T1/2} = A² T1.

(Plotting x(t) is helpful; sometimes you do not need to evaluate the integral, since you can find the area under the square of the signal from the graph instead.)
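This result is easy to verify numerically. A minimal Python sketch using NumPy, with assumed example values A = 2 and T1 = 3:

```python
import numpy as np

# Rectangular pulse of Example 2.4: x(t) = A for |t| < T1/2, zero elsewhere.
A, T1 = 2.0, 3.0                                 # assumed example values
t, dt = np.linspace(-5, 5, 200001, retstep=True)
x = np.where(np.abs(t) < T1 / 2, A, 0.0)

E = np.sum(x**2) * dt                            # Riemann-sum approximation of (2.7)
print(E, A**2 * T1)                              # both approximately 12.0
```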

For many signals encountered in signal and system analysis, neither the integral

E∞ = ∫_{−∞}^{∞} |x(t)|² dt

nor the summation

E∞ = Σ_{n=−∞}^{∞} |x[n]|²

converges, because the signal energy is infinite. The signal energy must be finite for it to be a meaningful measure of the signal size. This usually occurs because the signal is not time-limited (time-limited means the signal is nonzero over only a finite time). An example of a CT signal with infinite energy is the sinusoidal signal

x(t) = A cos(2πf0 t).
For signals of this type, it is usually more convenient to deal with the average signal power instead of the signal energy. The average signal power of a CT signal is defined by

P∞ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt   (2.10)

Some references use the definition

P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt   (2.11)

Note that the integral in (2.10) is the signal energy of the signal over a time T, which is then divided by T, yielding the average signal power over time T. Then, as T approaches infinity, this average signal power becomes the average signal power over all time. Observe also that the signal power P∞ is the time average (mean) of the signal amplitude squared, that is, the mean-squared value of x(t). Indeed, the square root of P∞ is the familiar rms (root mean square) value of x(t): rms = √P∞.

For DT signals the definition of signal power is

P∞ = lim_{N→∞} (1/2N) Σ_{n=−N}^{N−1} |x[n]|²   (2.12)

which is the average signal power over all discrete time.

For periodic signals, the average signal power calculation may be simpler. The average value of any periodic function is the average over any period. Therefore, since the square of a periodic function is also periodic, for periodic CT signals

P∞ = (1/T) ∫_T |x(t)|² dt   (2.13)

where the notation ∫_T means integration over one period (T can be any period, but one usually chooses the fundamental period).

Example 2.5: Find the signal power of

x(t) = A cos(ω0 t + φ)

Solution: From the definition of signal power for a periodic signal in (2.13),

P∞ = (1/T0) ∫_{−T0/2}^{T0/2} |A cos(ω0 t + φ)|² dt = (A²/T0) ∫_{−T0/2}^{T0/2} cos²((2π/T0) t + φ) dt   (2.14)

Using the trigonometric identity

cos(α) cos(β) = ½[cos(α − β) + cos(α + β)]

in (2.14) we get

P∞ = (A²/2T0) ∫_{−T0/2}^{T0/2} [1 + cos((4π/T0) t + 2φ)] dt   (2.15)
   = (A²/2T0) ∫_{−T0/2}^{T0/2} dt + (A²/2T0) ∫_{−T0/2}^{T0/2} cos((4π/T0) t + 2φ) dt   (2.16)

The second integral on the right-hand side of (2.16) is zero because it is the integral of a sinusoid over exactly two fundamental periods. Therefore, the power is P∞ = A²/2. Notice that this result is independent of the phase φ and the angular frequency ω0; it depends only on the amplitude A.
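A quick numerical check of this result follows; this is a sketch, and the values of A, ω0 and φ are assumed (the result should not depend on ω0 or φ):

```python
import numpy as np

# Average power of x(t) = A cos(w0*t + phi) over one fundamental period.
A, w0, phi = 3.0, 2 * np.pi * 5, 0.7        # assumed values
T0 = 2 * np.pi / w0

t, dt = np.linspace(0, T0, 100001, retstep=True)
x = A * np.cos(w0 * t + phi)

P = np.sum(x**2) * dt / T0                  # Riemann-sum approximation of (2.13)
print(P, A**2 / 2)                          # both approximately 4.5
```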

Example 2.6: Find the power of the signal shown in Figure 2.6.

Solution: From the definition of signal power for a periodic signal,

P∞ = (1/T) ∫_T |x(t)|² dt = (1/0.5) [∫_0^{0.25} 1² dt + ∫_{0.25}^{0.5} (−1)² dt] = 1

Figure 2.6: A periodic square pulse

Comment: The signal energy as defined in (2.7) or (2.8) does not indicate the actual energy of the signal, because the signal energy depends not only on the signal but also on the load. To make this point clearer, assume we have a voltage signal v(t) across a resistor R; the actual energy delivered to the resistor by the voltage signal would be

Energy = ∫_{−∞}^{∞} |v(t)|²/R dt = (1/R) ∫_{−∞}^{∞} |v(t)|² dt = E∞/R

The signal energy is proportional to the actual physical energy delivered by the signal, and the proportionality constant, in this case, is 1/R. However, one can always interpret signal energy as the energy dissipated in a normalized load of a 1Ω resistor. Furthermore, the units of the signal energy depend on the units of the signal. For a voltage signal whose unit is the volt (V), the signal energy is expressed in V²·s. Parallel observations apply to the signal power defined in (2.11).

Signals which have finite signal energy are referred to as energy signals, and signals which have infinite signal energy but finite average signal power are referred to as power signals. Observe that power is the time average of energy. Since the averaging is over an infinitely large interval, a signal with finite energy has zero power, and a signal with finite power has infinite energy. Therefore, a signal cannot be both an energy signal and a power signal. On the other hand, there are signals that are neither energy nor power signals; the ramp signal is one such example. Figure 2.7 shows examples of CT and DT energy and power signals.

Figure 2.7: Examples of CT and DT energy and power signals

2.1.4 Even and Odd Functions


A function g(t) is said to be an even function of t if

g(t) = g(−t)

and a function g(t) is said to be an odd function of t if

g(t) = −g(−t)

An even function has the same value at the instants t and -t for all values of
t. Clearly, g(t) in this case is symmetrical about the vertical axis (the vertical
axis acts as a mirror) as shown in Figure 2.8. On the other hand, the value
of an odd function at the instant t is the negative of its value at the instant
-t. Therefore, g(t) in this case is anti-symmetrical about the vertical axis, as
depicted in Figure 2.8.

Any function x(t) can be expressed as a sum of its even and odd components:

x(t) = xe(t) + xo(t)

The even and odd parts of a function x(t) are

xe(t) = [x(t) + x(−t)]/2,   xo(t) = [x(t) − x(−t)]/2   (2.17)

The most important even and odd functions in signal analysis are cosines and sines. Cosines are even, and sines are odd.

Figure 2.8: An even and an odd function of t

Some properties of Even and Odd Functions


• Even function × odd function = odd function.
• Odd function × odd function = even function.
• Even function × even function = even function.
• For even functions, ∫_{−a}^{a} x(t) dt = 2 ∫_0^{a} x(t) dt (Figure 2.9).
• For odd functions, ∫_{−a}^{a} x(t) dt = 0 (Figure 2.9).

Figure 2.9: Integrals of an even and an odd function



• If the odd part of a function is zero, the function is even.


• If the even part of a function is zero, the function is odd.
Figure 2.10 shows some examples of products of even and odd CT functions.

(a) Product of two even functions (b) Product of even and odd functions

(c) Product of even and odd functions (d) Product of two odd functions

Figure 2.10: Examples of products of even and odd CT functions

Example 2.7: Determine the even and odd components of the signal shown in Figure 2.11.

Solution: Using (2.17), the even and odd components of x(t) are found; they are shown in Figure 2.12.

Figure 2.11: Finding even and odd components of x(t)

Figure 2.12: Even and odd components of x(t)
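The decomposition (2.17) translates directly to sampled signals, provided the time grid is symmetric about t = 0 so that x(−t) is simply the sample array reversed. A minimal Python sketch with an assumed one-sided test signal:

```python
import numpy as np

# Even/odd decomposition (2.17) on a grid symmetric about t = 0.
t = np.linspace(-3, 3, 601)
x = np.exp(-t) * (t > 0)           # assumed test signal: one-sided decaying exponential

xe = (x + x[::-1]) / 2             # xe(t) = [x(t) + x(-t)] / 2
xo = (x - x[::-1]) / 2             # xo(t) = [x(t) - x(-t)] / 2

assert np.allclose(x, xe + xo)     # x(t) = xe(t) + xo(t)
assert np.allclose(xe, xe[::-1])   # even part: xe(t) = xe(-t)
assert np.allclose(xo, -xo[::-1])  # odd part:  xo(t) = -xo(-t)
```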



2.2 Useful Signal Operations


Three useful signal operations are discussed here: shifting (also called time
translation), scaling and reflection.

2.2.1 Time Shifting


Time shifting is a transformation of the independent variable. Consider a signal f(t) as shown in Figure 2.13(a), and the same signal delayed by t0 seconds (Figure 2.13(b)), which we shall denote φ(t).

Figure 2.13: Time shifting a signal

Whatever happens in f(t) at some instant t also happens in φ(t), but t0 seconds later, at the instant t + t0. Therefore

φ(t + t0) = f(t)

and

φ(t) = f(t − t0).
Therefore, to time shift a signal by t0 , we replace t with t − t0 . Thus f (t − t0 )
represents f (t) time shifted by t0 seconds. If t0 is positive, the shift is to the
right (delay). If t0 is negative, the shift is to the left (advance).

Example 2.8: Let a signal be defined as follows:

x(t) = { 1,        −1 < t < 0
       { 1 − t/2,  0 < t < 2
       { 0,        otherwise

Sketch x(t − 1).

Solution: First plot the signal x(t) to visualize the signal and the important points of time (Figure 2.14). We can begin to understand how to make this transformation by computing the values of x(t − 1) at a few selected points, as shown in Table 2.1. Next, plot x(t − 1) as a function of t. It should be apparent that replacing t by t − 1 has the effect of shifting the function one unit to the right, as in Figure 2.15.

Table 2.1:
t     t − 1   x(t − 1)
−2    −3      0
−1    −2      0
0     −1      1
1     0       1
2     1       0.5
3     2       0
4     3       0
5     4       0

Figure 2.14: A signal x(t)

Figure 2.15: Selected values of x(t − 1)

2.2.2 Time Scaling

The compression or expansion of a signal in time is known as time scaling. Consider the signal x(t) of Figure 2.14. If x(t) is to be compressed in time by a factor a (a > 1), the resulting signal φ(t) is given by

φ(t) = x(at)

Assume a = 2, then φ(t) = x(2t). Construct a table similar to Table 2.1, paying particular attention to the turning points of the original signal, as shown in Table 2.2. Next, plot x(2t) as a function of t (Figure 2.16).

Table 2.2:
t      2t   x(2t)
−2     −4   0
−1.5   −3   0
−1     −2   0
−0.5   −1   1
0      0    1
0.5    1    0.5
1      2    0
1.5    3    0
2      4    0

Figure 2.16: Selected values of x(2t)

On the other hand, if x(t) is to be expanded (stretched) in time by a factor a (a > 1), the resulting signal φ(t) is given by

φ(t) = x(t/a)

Assume a = 2, then φ(t) = x(t/2); following the same procedure as earlier, the expansion can be seen clearly in Figure 2.17.

Figure 2.17: Stretched version of x(t)

In summary, to time scale a signal by a factor a, we replace t with at. If a > 1, the scaling results in compression, and if a < 1, the scaling results in expansion.

2.2.3 Time Reflection

Also called time reversal: the reflected signal is φ(t) = x(−t). Observe that whatever happens at the time instant t also happens at the instant −t. The mirror image of x(t) about the vertical axis is x(−t). Recall that the mirror image of x(t) about the horizontal axis is −x(t). Figure 2.18 shows a discrete-time example of time reflection.

Figure 2.18: A function g[n] and its reflected version g[−n].

2.2.4 Multiple Transformations

All three transformations (time shifting, time scaling and time reflection) can be applied simultaneously, for example

φ(t) = x((t − t0)/a)   (2.18)

To understand the overall effect, it is usually best to break down a transformation like (2.18) into successive simple transformations:

x(t) −(t → t/a)→ x(t/a) −(t → t − t0)→ x((t − t0)/a)   (2.19)

Observe here that the order of the transformations is important. For example, if we exchange the order of the time-scaling and time-shifting operations in (2.19), we get

x(t) −(t → t − t0)→ x(t − t0) −(t → t/a)→ x(t/a − t0) ≠ x((t − t0)/a)

The result of this sequence of transformations is different from the preceding result. We could have obtained the same preceding result if we first observe that

x((t − t0)/a) = x(t/a − t0/a).

Then we could time-shift first and time-scale second, yielding

x(t) −(t → t − t0/a)→ x(t − t0/a) −(t → t/a)→ x(t/a − t0/a) = x((t − t0)/a).

For a different transformation, a different sequence may be better, for example

x(bt − t0)

In this case the sequence of time shifting and then time scaling is the simplest path to the correct transformation:

x(t) −(t → t − t0)→ x(t − t0) −(t → bt)→ x(bt − t0)

Example 2.9: Let a signal be defined as in Example 2.8. Determine and plot x(−3t − 4).

Solution: Method 1: Construct a table to compute the values of x(−3t − 4) at a few selected points, as shown in Table 2.3. Next, plot x(−3t − 4) as a function of t (Figure 2.19).

Table 2.3:
t      t′ = −3t − 4   x(−3t − 4)
−2/3   −2             0
−1     −1             1
−4/3   0              1
−5/3   1              0.5
−2     2              0
−7/3   3              0

Figure 2.19: Selected values of x(−3t − 4)

Comment: When solving with the method of constructing a table such as Table 2.3, it is much easier to start from the second column, i.e. the time-transformation argument of the function. The time-transformation argument in this example is −3t − 4, which can be labeled t′. Start with a few selected points of t′, find the corresponding t points, and fill the column corresponding to t. This can be done easily by writing an expression for t in terms of t′: t = −(t′ + 4)/3. Finally, plot x(−3t − 4) as a function of t.

Method 2: We do the transformation graphically, paying particular attention to the correct sequence of transformations. We can consider the following sequence:

x(t) −(t → −3t)→ x(−3t) −(t → t + 4/3)→ x(−3(t + 4/3)) = x(−3t − 4)

as shown in Figure 2.20(a). Alternatively,

x(t) −(t → t − 4)→ x(t − 4) −(t → −3t)→ x(−3t − 4)

as shown in Figure 2.20(b).


(a) First reflect and scale, then shift. (b) Alternative sequence of transformation.

Figure 2.20: Method 2 solutions to Example 2.9.
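The tabular method of Method 1 amounts to evaluating the function at the transformed argument. A minimal Python sketch (the piecewise definition is that of Example 2.8):

```python
import numpy as np

def x(t):
    """The signal of Example 2.8, written directly as a function."""
    t = np.asarray(t, dtype=float)
    return np.where((t > -1) & (t < 0), 1.0,
           np.where((t >= 0) & (t < 2), 1 - t / 2, 0.0))

# phi(t) = x(-3t - 4): evaluating x at the transformed argument
# reproduces the table rows without any graphical bookkeeping.
t = np.array([-2.0, -5/3, -4/3])
print(x(-3 * t - 4))               # [0.0, 0.5, 1.0], matching Table 2.3
```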

2.3 Useful Signal Functions

2.3.1 Complex Exponentials and Sinusoids

Some of the most commonly used mathematical functions for describing signals should already be familiar: the CT sinusoids

x(t) = A cos(2πt/T0 + φ) = A cos(ω0 t + φ) = A cos(2πf0 t + φ)

where

• A = amplitude of sinusoid or exponential

• T0 = real fundamental period of sinusoid

• f0 = real fundamental frequency of sinusoid, Hz

• ω0 = real fundamental frequency of sinusoid, radians per second (rad/s)

Another important function in the area of signals and systems is the exponential signal e^{st}, where s is in general complex. An exponential function in its most general form is written as

x(t) = Ce^{st}

where both C and s can be real or complex numbers.



Reminder: It is very useful here to remember the following Euler identities:

e^{jφ} = cos(φ) + j sin(φ)   (2.20)
e^{−jφ} = cos(φ) − j sin(φ)   (2.21)
cos(φ) = (e^{jφ} + e^{−jφ})/2   (2.22)
sin(φ) = (e^{jφ} − e^{−jφ})/(2j)   (2.23)

The exponential signal x(t) = Ce^{st} is studied in more detail by writing

C = A e^{jφ}   (polar form)
s = σ + jω   (rectangular form)

Therefore,

x(t) = A e^{jφ} e^{(σ+jω)t} = A e^{σt} e^{jφ} e^{jωt} = A · e^{σt} · e^{j(ωt+φ)}
       (Term I · Term II · Term III)

where

• Term I, A: the amplitude of the complex exponential.
• Term II, e^{σt}: real exponential function (Figure 2.21).
• Term III, e^{j(ωt+φ)}: complex exponential function. Using Euler identity (2.20),

e^{j(ωt+φ)} = cos(ωt + φ) + j sin(ωt + φ)

Figure 2.21: Real exponential signal e^{σt} (decaying for σ < 0, constant for σ = 0, growing for σ > 0)

Figure 2.22 shows different examples of the real part of the function x(t) = A e^{jφ} e^{(σ+jω)t}.
Figure 2.22: The real part of x(t) = A e^{σt} e^{j(ωt+φ)}, with the e^{σt} envelope shown for σ > 0 and σ < 0

Some Properties of Complex Exponential Functions

The complex exponential function e^{j(ω0 t+φ)} has a number of important properties.

• It is periodic with fundamental period T0 = 2π/|ω0|, since

e^{j(ω0(t + 2πk/ω0) + φ)} = e^{j(ω0 t+φ)} e^{j2πk} = e^{j(ω0 t+φ)}   for any k ∈ Z.

Note that e^{j2πk} = 1 for any k ∈ Z.

• Re{e^{j(ω0 t+φ)}} = cos(ω0 t + φ) and Im{e^{j(ω0 t+φ)}} = sin(ω0 t + φ); these terms are sinusoids of frequency ω0.

• The term φ is often called the phase. Note that we can write

e^{j(ω0 t+φ)} = e^{jω0(t + φ/ω0)}

which implies that the phase has the effect of time shifting the signal.

• Since complex exponential functions are periodic, they have infinite total energy but finite power:

P∞ = (1/T) ∫_T |e^{jω0 t}|² dt = (1/T) ∫_t^{t+T} 1 dτ = 1

• A set of periodic exponentials with fundamental frequencies that are multiples of a single positive frequency ω0 are said to be harmonically related complex exponentials:

Θk(t) = e^{jkω0 t}   for k = 0, ±1, ±2, …   (2.24)

  - k = 0 ⇒ Θk(t) is a constant.
  - k ≠ 0 ⇒ Θk(t) is periodic with fundamental frequency |k|ω0 and fundamental period 2π/(|k|ω0) = T0/|k|. Note that each exponential in (2.24) is also periodic with period T0.

  - Θk(t) is called the k-th harmonic. Harmonic (from music): tones resulting from variations in acoustic pressures that are integer multiples of a fundamental frequency.
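The unit-power claim above is easy to confirm numerically. A minimal Python sketch with an assumed frequency:

```python
import numpy as np

# Average power of exp(j*w0*t) over one fundamental period.
w0 = 2 * np.pi * 3                               # assumed frequency
T0 = 2 * np.pi / w0

t, dt = np.linspace(0, T0, 100001, retstep=True)
x = np.exp(1j * w0 * t)

P = np.sum(np.abs(x)**2) * dt / T0
print(P)           # approximately 1, since |exp(j*w0*t)| = 1 everywhere
```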

2.3.2 The Unit Step Function

A CT unit step function is defined as (Figure 2.23)

u(t) = { 1,  t > 0
       { 0,  t < 0        (2.25)

Figure 2.23: The CT unit step function

This function is called the unit step because the height of the step change in function value is one unit in the system of units used to describe the signal. The function is discontinuous at t = 0, since the function changes instantaneously from 0 to 1 at t = 0. It will be shown later that u(0) = 1/2.

Some authors define the unit step by

u(t) = { 1, t ≥ 0; 0, t < 0 }   or   u(t) = { 1, t > 0; 0, t ≤ 0 }

For most analysis purposes these definitions are all equivalent.

The unit step is defined and used in signal and system analysis because it can mathematically represent a very common action in real physical systems: fast switching from one state to another. For example, in the circuit shown in Figure 2.24 the switch moves from one position to the other at time t = 0. The voltage applied to the RC circuit can be described mathematically by v0(t) = vs(t)u(t).

Figure 2.24: Simple RC circuit

The unit step function is very useful in specifying a function with different mathematical descriptions over different intervals. Consider, for example, the rectangular pulse depicted in Figure 2.25(a). We can express such a pulse in terms of the unit step function by observing that the pulse can be expressed as the sum of the two delayed unit step functions shown in Figure 2.25(b). The unit step function delayed by t0 seconds is u(t − t0). From Figure 2.25(b), it is clear that one way of expressing the pulse of Figure 2.25(a) is u(t − 2) − u(t − 4).

The DT Unit Step Function

The DT counterpart of the CT unit step function u(t) is u[n], also called the unit sequence (Figure 2.26), defined by

u[n] = { 1,  n ≥ 0
       { 0,  n < 0        (2.26)

Figure 2.26: The DT unit step function

For this function there is no disagreement or ambiguity about its value at n = 0; it is one.

Unit Step delayed by 2 units

u(t − 2)
1

0
0 1 2 3 4 5 6

Unit step delayed by 4 units

u(t − 4)
0 0
0 1 2 3 4 5 6 0 1 2 3 4 5 6
Time, seconds Time, seconds

(a) A Rectangular pulse (b) Representing rectangular pulses by step functions

Figure 2.25: A Rectangular pulse and its representation by step functions
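A minimal Python sketch of this pulse construction (the step convention at t = 0 is assumed to be u(0) = 1 on the sample grid, which is immaterial here):

```python
import numpy as np

def u(t):
    """CT unit step sampled on a grid; the value at exactly t = 0 is taken as 1 here."""
    return np.where(np.asarray(t) >= 0, 1.0, 0.0)

t = np.linspace(0, 6, 601)
pulse = u(t - 2) - u(t - 4)      # the rectangular pulse of Figure 2.25(a)

# The pulse is 1 between t = 2 and t = 4 and 0 elsewhere.
print(pulse[t < 2].max(), pulse[(t > 2) & (t < 4)].min(), pulse[t > 4].max())  # 0.0 1.0 0.0
```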

2.3.3 The Signum Function

The signum function (Figure 2.27) is closely related to the unit step function. It is sometimes called the sign function, but the name signum is more common, so as not to confuse the sounds of the two words sign and sine. The signum function is defined as

sgn(t) = { 1,   t > 0
         { −1,  t < 0        (2.27)

Figure 2.27: The CT signum function

2.3.4 The Unit Ramp Function
Another type of signal that occurs in systems is one which is switched on at some time and changes linearly after that time, or one which changes linearly before some time and is switched off at that time. Figure 2.28 illustrates some examples.

Figure 2.28: Functions that change linearly before or after some time

Figure 2.29: The CT unit ramp function

Signals of this kind can be described with the use of the ramp function. The CT unit ramp function (Figure 2.29) is the integral of the unit step function. It is called the unit ramp function because, for positive t, its slope is one:

ramp(t) = { t,  t > 0
          { 0,  t ≤ 0      = ∫_{−∞}^{t} u(λ) dλ = t u(t)        (2.28)

The integral relationship in (2.28) between the CT unit step and CT ramp functions is shown in Figure 2.30.

Figure 2.30: Illustration of the integral relationship between the CT unit step and the CT unit ramp.

Example 2.10: Describe the signal shown in Figure 2.31 in terms of unit step functions.

Solution: The signal x(t) illustrated in Figure 2.31 can be conveniently handled by breaking it up into the two components x1(t) and x2(t) shown in Figure 2.32. Here, x1(t) can be obtained by multiplying the ramp t by the pulse u(t) − u(t − 2), as shown in Figure 2.32. Therefore,

x1(t) = t[u(t) − u(t − 2)].

Similarly,

x2(t) = −2(t − 3)[u(t − 2) − u(t − 3)]

and

x(t) = x1(t) + x2(t)
     = t[u(t) − u(t − 2)] − 2(t − 3)[u(t − 2) − u(t − 3)]
     = t u(t) − 3(t − 2)u(t − 2) + 2(t − 3)u(t − 3).

Figure 2.31: A signal x(t).
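The final simplification can be checked by evaluating both expressions on a grid. A minimal Python sketch (the step convention u(0) = 1 is assumed):

```python
import numpy as np

u = lambda t: np.where(t >= 0, 1.0, 0.0)    # step convention assumed: u(0) = 1
t = np.linspace(-2, 6, 8001)

# Pulse-times-ramp form from the solution of Example 2.10 ...
x_pulses = t * (u(t) - u(t - 2)) - 2 * (t - 3) * (u(t - 2) - u(t - 3))

# ... and the expanded ramp form it simplifies to.
x_ramps = t * u(t) - 3 * (t - 2) * u(t - 2) + 2 * (t - 3) * u(t - 3)

print(np.allclose(x_pulses, x_ramps))       # True
```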

2.3.5 The Rectangle Function

A very common type of signal occurring in systems is one in which a signal is switched on at some time and then back off at a later time. The rectangle function (Figure 2.33) is defined as

rect(t/τ) = { 1,  |t| < τ/2
            { 0,  otherwise        (2.29)

Figure 2.33: The CT rectangle function.

2 t 2 x1 (t)

1 1

0 0

-1 -1
-1 0 1 2 3 4 -1 0 1 2 3 4

2 −2(t − 3) 2 x2 (t)

1 1

0 0

-1 -1
-1 0 1 2 3 4 -1 0 1 2 3 4

Figure 2.32: Description of the signal of example 2.10 in terms of unit step
function.

The notation used here is convenient: τ represents the width of the rectangle function, while the rectangle is centred at zero, so any time transformation can easily be applied to the notation in (2.29) (Figure 2.34). A special case of the rectangle function defined in (2.29) is τ = 1; this is called the unit rectangle function, rect(t) (also called the square pulse). It is a unit rectangle function because its width, height, and area are all one. Note that one can always perform time transformations on the unit rectangle function to arrive at the same notation used in (2.29). The signal in Figure 2.34 could be obtained by scaling and shifting the unit rectangle function; however, the notation used in (2.29) is much easier when handling rectangle functions.

Figure 2.34: A shifted rectangle function, 2 rect((t + 1)/4), with width four seconds and centre at −1.

2.3.6 The Unit Impulse Function

The unit impulse function δ(t), also called the delta function, is one of the most important functions in the study of signals and systems, and yet the strangest. It was first defined by P.A.M. Dirac (and is sometimes called by his name, the Dirac distribution) as

δ(t) = 0,  t ≠ 0   (2.30)

∫_{−∞}^{∞} δ(t) dt = 1   (2.31)

Try to visualise this function: a signal of unit area that equals zero everywhere except at t = 0, where it is undefined! To be able to understand the definition of the delta function, let us consider a unit-area rectangular pulse defined by the function (Figure 2.35)

δa(t) = { 1/a,  |t| < a/2
        { 0,    |t| > a/2        (2.32)

Figure 2.35: A unit-area rectangular pulse of width a

Now imagine taking the limit of the function δa(t) as a approaches zero. Try to visualise what will happen: the width of the rectangular pulse becomes infinitesimally small, its height becomes infinitely large, and the overall area is maintained at unity. Using this approach, the unit impulse is now defined as

δ(t) = lim_{a→0} δa(t)   (2.33)

Other pulses, such as a triangular pulse, may also be used in impulse approximations (Figure 2.36).

Figure 2.36: A unit-area triangular pulse

The area under an impulse is called its strength, or sometimes its weight. An impulse with a strength of one is called a unit impulse. The impulse cannot be graphed in the same way as other functions because its amplitude is undefined when t = 0. For this reason a unit impulse is represented by a vertical arrow, a spear-like symbol. Sometimes the strength of the impulse is written beside it in parentheses, and sometimes the height of the arrow indicates the strength of the impulse. Figure 2.37 illustrates some ways of representing impulses graphically.

2.3.7 Some Properties of the Unit Impulse


Multiplication of a Function by an Impulse
A common mathematical operation that occurs in signals and systems analysis is the multiplication of an impulse with another function g(t) that is known to be continuous and finite at t = 0 (i.e. g(0) exists). We obtain

g(t)δ(t) = g(0)δ(t)   (2.34)

Figure 2.37: Graphical representation of impulses

since the impulse exists only at t = 0. It is useful here to visualise the above product of the two functions. The unit impulse δ(t) is the limit of the pulse δa(t) defined in (2.32). The multiplication is then a pulse whose height at t = 0 is g(0)/a and whose width is a. In the limit as a approaches zero, the pulse becomes an impulse whose strength is g(0) (Figure 2.38). Similarly, if a function g(t) is multiplied by an impulse δ(t − t0) (an impulse located at t = t0), then

g(t)δ(t − t0) = g(t0)δ(t − t0)   (2.35)

provided g(t) is finite and continuous at t = t0.

The Sampling or Sifting Property

Another important property that follows naturally from the multiplication property is the so-called sampling or sifting property. (The word sifting is spelled correctly; it is not to be confused with the word shifting.) Before we state this property, let us first explore an important idea. Consider the unit-area rectangular function δa(t) defined in (2.32). Let this function multiply another function g(t), which is finite and continuous at t = 0, and find the area under the product of the two functions,

A = ∫_{−∞}^{∞} δa(t)g(t) dt

(Figure 2.38). Using the definition of δa(t) we can rewrite the integral as

A = (1/a) ∫_{−a/2}^{a/2} g(t) dt   (2.36)

Figure 2.38: Multiplication of a unit-area rectangular pulse centered at t = 0 and a function g(t), which is continuous and finite at t = 0.

Now imagine taking the limit of this integral as a approaches zero. In the limit, the two limits of the integration approach zero from above and below. Since g(t) is finite and continuous at t = 0, as a approaches zero the value of g(t) inside the integration interval becomes the constant g(0) and can be taken out of the integral. Then

lim_{a→0} A = g(0) lim_{a→0} (1/a) ∫_{−a/2}^{a/2} dt = g(0) lim_{a→0} (1/a)(a) = g(0)   (2.37)

So in the limit as a approaches zero, the function δa(t) has the interesting property of extracting (hence the name sifting) the value of any continuous finite function g(t) at time t = 0, when the multiplication of δa(t) and g(t) is integrated between any two limits which include time t = 0. Thus, in other words,

∫_{−∞}^{∞} g(t)δ(t) dt = lim_{a→0} ∫_{−∞}^{∞} g(t)δa(t) dt = g(0)   (2.38)

The above result follows naturally from (2.34):

∫_{−∞}^{∞} g(t)δ(t) dt = g(0) ∫_{−∞}^{∞} δ(t) dt = g(0)   (2.39)

since the remaining integral is one. This result means that the area under the product of a function with an impulse δ(t) is equal to the value of that function at the instant where the unit impulse is located. From (2.35) it follows that

∫_{−∞}^{∞} g(t)δ(t − t0) dt = g(t0) ∫_{−∞}^{∞} δ(t − t0) dt = g(t0)   (2.40)

The unit impulse δ(t) can be defined by using the sampling property: when it is multiplied by any function g(t), which is finite and continuous at t = 0, and the product is integrated between limits which include t = 0, the result is

g(0) = ∫_{−∞}^{∞} g(t)δ(t) dt

One can argue here: what if one can find a function other than the impulse function that satisfies the sampling property? The answer is that it must be equivalent to the impulse δ(t). The next property explores this argument.
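The limiting argument in (2.37) can be watched numerically by shrinking a. A minimal Python sketch with an assumed test function g(t):

```python
import numpy as np

# Sifting, approximated with the unit-area pulse delta_a(t) of (2.32):
# the integral of g(t)*delta_a(t) approaches g(0) as a -> 0.
g = lambda t: np.cos(t) + t**2              # assumed test function, continuous at 0, g(0) = 1

for a in [1.0, 0.1, 0.01]:
    t, dt = np.linspace(-a / 2, a / 2, 10001, retstep=True)
    approx = np.sum(g(t)) * dt / a          # delta_a = 1/a on |t| < a/2
    print(a, approx)                        # tends to g(0) = 1.0 as a shrinks
```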

The Unit Impulse is the Derivative of the Unit Step

Let us evaluate the integral of (du/dt)x(t), using integration by parts:

∫_{−∞}^{∞} (du(t)/dt) x(t) dt = [u(t)x(t)]_{−∞}^{∞} − ∫_{−∞}^{∞} u(t)ẋ(t) dt
                              = x(∞) − 0 − ∫_0^{∞} ẋ(t) dt
                              = x(∞) − [x(t)]_0^{∞}
                              = x(∞) − (x(∞) − x(0)) = x(0)

The result shows that du/dt satisfies the sampling property of δ(t). Therefore

du/dt = δ(t)   (2.41)

Consequently,

∫_{−∞}^{t} δ(τ) dτ = u(t)   (2.42)

The result in (2.42) can be obtained graphically by observing that the area from −∞ to t is zero if t < 0 and unity if t > 0:

∫_{−∞}^{t} δ(τ) dτ = { 0,  t < 0
                     { 1,  t > 0
                   = u(t)

The same result could have been obtained by considering a function g(t) and its derivative ġ(t), as in Figure 2.39. In the limit as a approaches zero, the function g(t) approaches the unit step function. In that same limit, the width of ġ(t) approaches zero while maintaining unit area, which is the same as the initial definition of δa(t). The limit as a approaches zero of ġ(t) is called the generalised derivative of u(t).

Figure 2.39: Functions which approach the unit step and unit impulse

Time Transformations Applied to the Unit Impulse

The important feature of the unit impulse function is not its shape but the fact that its width approaches zero while the area remains at unity. Therefore, when time transformations are applied to δ(t), in particular scaling, it is the strength that matters and not the shape of δ(t) (Figure 2.40). It is helpful to note that

∫_{−∞}^{∞} δ(αt) dt = ∫_{−∞}^{∞} δ(λ) dλ/|α| = 1/|α|   (2.43)

and so

δ(αt) = δ(t)/|α|   (2.44)

Figure 2.40: Effect of scaling on unit impulse

Summary of Properties of δ(t)

1. δ(t) = 0, t ≠ 0.
2. ∫_{−∞}^{∞} δ(t) dt = 1.
3. δ(t) is an even function, i.e. δ(t) = δ(−t).
4. ∫_{−∞}^{t} δ(τ) dτ = u(t).
5. ∫_0^{∞} δ(t − τ) dτ = u(t).
6. du/dt = δ(t).
7. δ(αt) = δ(t)/|α|. More generally, δ(α(t − τ)) = (1/|α|) δ(t − τ).
8. x(t)δ(t) = x(0)δ(t).
9. x(t)δ(t − τ) = x(τ)δ(t − τ).
10. ∫_{−∞}^{∞} x(t)δ(t) dt = x(0).
11. ∫_{−∞}^{∞} x(t)δ(t − τ) dt = x(τ).

Figure 2.41: The DT unit impulse function.
The DT Unit Impulse Function

The DT unit impulse function δ[n], sometimes referred to as the Kronecker delta function (Figure 2.41), is defined by

δ[n] = { 1,  n = 0
       { 0,  n ≠ 0        (2.45)

The DT delta function δ[n] is referred to as the unit sample that occurs at n = 0, and the shifted function δ[n − k] as the sample that occurs at n = k:

δ[n − k] = { 1,  n = k
           { 0,  n ≠ k

Some Properties of δ[n]

1. δ[n] = 0 for n ≠ 0.

2. Σ_{m=−∞}^{n} δ[m] = u[n]. This can easily be seen by considering two cases for n, namely n < 0 and n ≥ 0 (Figure 2.42).

   - Case 1: Σ_{m=−∞}^{n} δ[m] = 0 for n < 0. This is true since δ[m] has a value of one only when m = 0 and zero anywhere else; the upper limit of the summation is less than zero, so δ[0] is not included in the summation.
   - Case 2: On the other hand, if n ≥ 0, δ[0] is included in the summation, therefore Σ_{m=−∞}^{n} δ[m] = 1.

   In summary,

   Σ_{m=−∞}^{n} δ[m] = { 1,  n ≥ 0
                        { 0,  n < 0
                      = u[n]

Figure 2.42: A DT unit impulse function

3. u[n] − u[n − 1] = δ[n]. This can clearly be seen in Figure 2.43 by subtracting the two signals from each other.

Figure 2.43: δ[n] = u[n] − u[n − 1]

4. Σ_{k=0}^{∞} δ[n − k] = u[n].

5. x[n]δ[n] = x[0]δ[n].

6. x[n]δ[n − k] = x[k]δ[n − k].

7. The DT unit impulse is not affected by scaling, i.e. δ[αn] = δ[n].

8. Some other important properties, in particular the sifting property of the DT unit impulse, are left to a later stage of this course.
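Properties 2 and 3 are one-liners to verify on a finite index range. A minimal Python sketch:

```python
import numpy as np

n = np.arange(-10, 11)
delta = np.where(n == 0, 1, 0)
u = np.where(n >= 0, 1, 0)

# Property 2: the running sum of delta[m] up to n is u[n].
print(np.array_equal(np.cumsum(delta), u))       # True

# Property 3: u[n] - u[n-1] = delta[n].
u_delayed = np.where(n - 1 >= 0, 1, 0)           # u[n-1]
print(np.array_equal(u - u_delayed, delta))      # True
```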

2.3.8 The Unit Sinc Function

The unit sinc function (Figure 2.44) is called a unit function because its height and area are both one. It is defined as

sinc(t) = sin(πt)/(πt)   (2.46)

Figure 2.44: The CT unit sinc function

Some authors define the sinc function as

sinc(t) = sin(t)/t

One can use either of them, as long as one definition is used consistently.

The sinc function is also known as the filtering or interpolating function; some authors name it the sampling function, Sa(πt). Since the denominator grows with |t| while the numerator is bounded (|sin(πt)| ≤ 1), the sinc function is simply a damped sine wave. Figure 2.44 shows that the sinc function is an even function of t having its peak at t = 0. What is sinc(0)? To determine the value of sinc(0), simply apply L'Hôpital's rule to the definition in (2.46). Then

lim_{t→0} sinc(t) = lim_{t→0} sin(πt)/(πt) = lim_{t→0} π cos(πt)/π = 1.
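For reference, NumPy's built-in sinc uses exactly the normalized definition (2.46), including the limiting value at zero. A short check:

```python
import numpy as np

# np.sinc(t) = sin(pi*t) / (pi*t), with the limiting value 1 at t = 0,
# i.e. the normalized definition (2.46).
t = np.array([0.0, 0.5, 1.0, 2.5])
print(np.sinc(t))      # [1.0, 0.63662..., 0.0, 0.12732...]
```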
Chapter 3

Description of Systems

3.1 Introduction
The words signal and system were defined very generally in Chapter 1. Systems can be viewed as any process or interaction of operations that transforms an input signal into an output signal with properties different from those of the input signal. A system may consist of physical components (a hardware realization) or of an algorithm that computes the output signal from the input signal (a software realization).
One way to define a system is as anything that performs a function: it operates on something to produce something else. It can be thought of as a mathematical operator. A CT system operates on a CT input signal to produce a CT output, i.e. y(t) = H{x(t)}. H is the operator denoting the action of a system; it specifies the operation to be performed and also identifies the system. On the other hand, a DT system operates on a DT signal to produce a DT output (Figure 3.1).

Figure 3.1: CT and DT systems block diagrams

I will sometimes use the following notation to describe a system:

x(t) −H→ y(t)

which simply means the input x to system H produces the output y.

By knowing how to mathematically describe and characterize all the components in a system and how the components interact with each other, an engineer can predict, using mathematics, how a system will work without actually building it. Systems may be interconnected in different configurations, mainly in series, parallel and feedback (Figure 3.2).

3.2 Systems Characteristics


Systems may be classified in the following categories:

1. Memoryless (instantaneous) and dynamic (with memory) systems.

2. Invertible and non-invertible systems.

3. Causal and non-causal systems.


Figure 3.2: A system composed of four interconnected components, with two


inputs and two outputs.

4. Stable and non-stable systems.


5. Time-invariant and time-varying systems.
6. Linear and non-linear systems.

3.2.1 Memory
A system's output or response at any instant t generally depends upon the entire past input. However, there are systems for which the output at any instant t depends only on the input at that instant and not on any input at any other time. Such systems are said to have no memory, or are called memoryless: the only input contributing to the output of the system occurs at the same time as the output. The system has no stored information of any past inputs, thus the term memoryless. Such systems are also called static or instantaneous systems. Otherwise, the system is said to be dynamic (or a system with memory). Instantaneous systems are a special case of dynamic systems. Here are some examples:
• y(t) = 2x(t): a memoryless system.
• y(t) = x(2t): a system with memory.
• y(t) = (1/C) ∫_{−∞}^{t} x(τ) dτ: a system with memory.
• A voltage divider circuit is a memoryless system (Figure 3.3).

Figure 3.3: A voltage divider

3.2.2 Invertibility
A system H performs certain operations on input signals. If we can obtain the
input x(t) back from the output y(t) by some operation, the system H is said
to be invertible. Thus, an inverse system H−1 can be created so that when
the output signal is fed into it, the input signal can be recovered (Figure 3.4).
For a non-invertible system, different inputs can result in the same output,
and it is impossible to determine the input for a given output. Therefore, for
an invertible system it is essential that distinct inputs applied to the system
produce distinct outputs so that there is one-to-one mapping between an input
and the corresponding output. An example of a system that is not invertible is
a system that performs the operation of squaring the input signal, y(t) = x²(t).

Figure 3.4: The inverse system

For any given input x(t) it is possible to determine the value of the output y(t). However, if we attempt to find the input, given the output, by rearranging the relationship into x(t) = √y(t), we face a problem: the square root has multiple values, for example √4 = ±2. Therefore, there is no one-to-one mapping between an input and the corresponding output signals. In other words, we have the same output for different inputs. For an example of a system that is invertible, consider an inductor whose input-output relationship is described by

y(t) = (1/L) ∫_{−∞}^{t} x(τ) dτ

The operation representing the inverse system is simply L d/dt.

3.2.3 Causality
A causal system is one for which the output at any instant t0 depends only on
the value of the input x(t) for t ≤ t0 . In other words, the value of the current
output depends only on current and past inputs. This should seem obvious as
how could a system respond to an input signal that has not yet been applied.
Simply, the output cannot start before the input is applied. A system that
violates the condition of causality is called a noncausal system. A noncausal system is also called anticipative, which means the system knows the input in the future and acts on this knowledge before the input is applied. Noncausal
systems do not exist in reality as we do not know yet how to build a system
that can respond to inputs not yet applied. As an example consider the system
specified by y(t) = x(t + 1). Thus, if we apply an input starting at t = 0, the
output would begin at t = −1, as seen in Figure 3.5 hence a noncausal system.

On the other hand a system described by the equation


y(t) = ∫_{−∞}^{t} x(τ) dτ

is clearly a causal system, since the output y(t) depends only on inputs that occur from −∞ up to time t (the upper limit of the integral). If the upper limit is given as t + 1, the system is noncausal.

3.2.4 Stability
A system is stable if a bounded input signal yields a bounded output signal. A signal x(t) is said to be bounded if its absolute value is less than some finite value B for all time,

|x(t)| ≤ B < ∞,  −∞ < t < ∞.

Figure 3.5: A noncausal system

A system for which the output signal is bounded when the input signal is
bounded is called bounded-input-bounded-output (BIBO) stable system.

3.2.5 Time Invariance


A system is time-invariant if the input-output properties do not change with
time. For such a system, if the input is delayed by t0 seconds, the output is the
same as before but delayed by t0 seconds. In other words, a time shift in the
input signal causes the same time shift in the output signal without changing
the functional form of the output signal. If input x(t) yields output y(t), then input x(t − t0) yields output y(t − t0) for all t0 ∈ R, i.e.

x(t) −H→ y(t)  ⟹  x(t − t0) −H→ y(t − t0).
A very simple example of a system which is not time-invariant (i.e. time-varying)
would be one described by
y[n] = x[2n].
Let x1 [n] = g[n] and let x2 [n] = g[n − 1], where g[n] is shown in Figure 3.6, and
let the output signal corresponding to x1 [n] be y1 [n] and the output to x2 [n] be
y2 [n]. For this system to be time-invariant the output y2 [n] must be the same
as y1 [n] but delayed by one unit, but it is not as shown in Figure 3.7.
Figure 3.6: A DT input signal.

Figure 3.7: Outputs of the system described by y[n] = x[2n] to two different
inputs.
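The following is a minimal numerical sketch (not part of the notes) of this argument: it applies y[n] = x[2n] to a pulse g[n] and to its delayed version g[n − 1], then compares the second response with a one-sample delay of the first. The pulse g used here is a hypothetical test signal.

    import numpy as np

    def system(x, n):
        # y[n] = x[2n]; the input x is given as a function of the index n
        return x(2 * n)

    g = lambda n: np.where((n >= 0) & (n <= 3), n + 1.0, 0.0)  # hypothetical pulse
    g_delayed = lambda n: g(n - 1)                             # x2[n] = g[n - 1]

    n = np.arange(-5, 6)
    y2 = system(g_delayed, n)    # response to the delayed input
    y1_delayed = g(2 * (n - 1))  # y1[n - 1], what time invariance would require

    print(np.array_equal(y2, y1_delayed))  # False, so the system is time-varying

Note that y2[n] = g[2n − 1] while y1[n − 1] = g[2n − 2]; the disagreement at, say, n = 1 confirms the conclusion drawn from Figure 3.7.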

3.2.6 Linearity and Superposition


Homogeneity (Scaling) Property
A system is said to be homogeneous if, for an arbitrary real or complex number α, scaling the input signal α-fold scales the output signal α-fold. Thus, if

x(t) −H→ y(t)

then for all real or complex α

αx(t) −H→ αy(t)

Additivity Property
The additivity property of a system implies that if several inputs are acting on
the system, then the total output of the system can be determined by considering
each input separately while assuming all the other inputs to be zero. The total
output is then the sum of all the component outputs. This property may be
expressed as follows: if an input x1 (t) acting alone produces an output y1 (t),
and if another input x2 (t), also acting alone, has an output y2 (t), then, with
both inputs acting together on the system, the total output will be y1 (t) + y2 (t).
Thus, if

x1(t) −H→ y1(t)  and  x2(t) −H→ y2(t)

then

x1(t) + x2(t) −H→ y1(t) + y2(t).
A system is linear if both the homogeneity and the additivity properties are satisfied. Both properties can be combined into one (superposition), which can be expressed as follows: if

x1(t) −H→ y1(t)  and  x2(t) −H→ y2(t)

then for all real or complex α and β,

αx1(t) + βx2(t) −H→ αy1(t) + βy2(t).

Example 3.2  Determine whether the system described by the differential equation

a ÿ(t) + b y²(t) = x(t)

is linear or nonlinear.

 Solution Consider two individual inputs x1(t) and x2(t); the equations describing the system for the two inputs acting alone would be

a ÿ1(t) + b y1²(t) = x1(t)  and  a ÿ2(t) + b y2²(t) = x2(t)

The sum of the two equations is

a[ÿ1(t) + ÿ2(t)] + b[y1²(t) + y2²(t)] = x1(t) + x2(t)

which is not equal to

a[ÿ1(t) + ÿ2(t)] + b[y1(t) + y2(t)]² = x1(t) + x2(t).

Therefore superposition does not apply in this system; hence the system is nonlinear. 

Remark For a system to be linear a zero input signal implies a zero output.
Consider for an example the system

y[n] = 2x[n] + x0

where x0 might be some initial condition. If x[n] = 0, it is clear that y[n] ≠ 0, so the system is not linear unless x0 is zero.
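As a quick numerical illustration (a sketch, not from the notes), the check below tests additivity for y[n] = 2x[n] + x0 with two arbitrary inputs; the constant offset is exactly what breaks superposition.

    import numpy as np

    x0 = 1.0
    H = lambda x: 2 * x + x0     # the system above, with a nonzero offset

    x1 = np.array([1.0, 2.0, 3.0])
    x2 = np.array([0.5, -1.0, 4.0])

    lhs = H(x1 + x2)             # response to the summed input
    rhs = H(x1) + H(x2)          # sum of the individual responses
    print(np.allclose(lhs, rhs)) # False; the two sides differ by x0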

3.3 Linear Time-invariant Systems


From now on we will focus on systems that are linear and time-invariant (LTI). Many engineering systems are well approximated by LTI models, and the analysis of such systems is simple and elegant. We consider two methods of analysis of LTI systems: the time-domain method and the frequency-domain method. The frequency-domain methods are addressed at a later stage in the course.

3.3.1 Time-Domain Analysis of LTI systems


Analysis of DT systems will be introduced first, as it is easier; it will then be extended to CT systems. Recall that by analysis we mean determining the response y[n] of an LTI system to an arbitrary input x[n].

Unit Impulse Response h[n]


The unit impulse function δ[n] is used extensively in determining the response
of a DT LTI system. When the input signal to the system is δ[n] the output is
called the impulse response, h[n]
δ[n] −H→ h[n].

If we know the system response to an impulse input, and if an arbitrary input


x[n] can be expressed as a sum of impulse components, the system response
could be obtained by summing the system response to various impulse compo-
nents. Figure 3.8 shows how a signal x[n] can be expressed as a sum of impulse
components.

The component of x[n] at n = k is x[k]δ[n − k], and x[n] is the sum of all these components summed from k = −∞ to ∞. Therefore

x[n] = x[0]δ[n] + x[1]δ[n − 1] + x[2]δ[n − 2] + ··· + x[−1]δ[n + 1] + x[−2]δ[n + 2] + ···
     = Σ_{k=−∞}^{∞} x[k]δ[n − k]   (3.1)

Figure 3.8: Representation of an arbitrary signal x[n] in terms of impulse components

The expression in (3.1) is the DT version of the sifting property: x[n] is written as a weighted sum of unit impulses.

Example 3.3  Express the signal shown in Figure 3.9 as a weighted sum of impulse components.

 Solution This can be easily shown as in Figure 3.10. Therefore,

x[n] = 2δ[n + 1] + 2δ[n] − δ[n − 1] + δ[n − 2]. 

Figure 3.9: A DT signal x[n]

3.3.2 The Convolution Sum
We are interested in finding the output y[n] of a DT LTI system to an arbitrary input x[n], knowing the system's impulse response h[n]. There is a systematic way of finding how the output responds to an input signal; it is called convolution. The convolution technique is based on a very simple idea: no matter how complicated the input signal is, one can always express it in terms of weighted impulse components. For LTI systems we can find the response of the system to one impulse component at a time and then add all those responses to form

Figure 3.10: x[n] expressed as a sum of individual weighted unit impulses

the total system response. Let h[n] be the system response (output) to the impulse input δ[n]. Thus, if

δ[n] −H→ h[n]

then, because the system is time-invariant,

δ[n − k] −H→ h[n − k]

and, because of linearity, if the input is multiplied by a weight or constant the output is multiplied by the same weight; thus

x[k]δ[n − k] −H→ x[k]h[n − k]

and again because of linearity

Σ_{k=−∞}^{∞} x[k]δ[n − k]  −H→  Σ_{k=−∞}^{∞} x[k]h[n − k]

The left-hand side is simply x[n] [see equation (3.1)], and the right-hand side is the system response y[n] to input x[n]. Therefore

y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]   (3.2)

The summation on the RHS is known as the convolution sum and is denoted by y[n] = x[n] ∗ h[n]. Now, in order to construct the response of a DT LTI system to any input x[n], all we need to know is the system's impulse response h[n].
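Equation (3.2) is exactly what numpy.convolve evaluates for finite-length sequences. The snippet below is a minimal sketch with hypothetical sequences taken to start at n = 0; the output length 3 + 2 − 1 = 4 anticipates the width property listed below.

    import numpy as np

    x = np.array([1.0, 2.0, 0.5])   # x[n] for n = 0, 1, 2
    h = np.array([1.0, -1.0])       # h[n] for n = 0, 1

    # y[n] = sum over k of x[k] h[n - k]
    y = np.convolve(x, h)
    print(y)                        # [ 1.   1.  -1.5 -0.5]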

Properties of the Convolution Sum


1. The Commutative Property

   x[n] ∗ h[n] = h[n] ∗ x[n]   (3.3)

   This property can be easily proven by starting with the definition of convolution

   y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]

   and letting q = n − k. Then we have

   x[n] ∗ h[n] = Σ_{q=−∞}^{∞} x[n − q]h[q] = Σ_{q=−∞}^{∞} h[q]x[n − q] = h[n] ∗ x[n]

2. The Distributive Property

   x[n] ∗ (h[n] + z[n]) = x[n] ∗ h[n] + x[n] ∗ z[n]   (3.4)

   If we convolve x[n] with the sum of h[n] and z[n], we get

   x[n] ∗ (h[n] + z[n]) = Σ_{k=−∞}^{∞} x[k] (h[n − k] + z[n − k])
                        = Σ_{k=−∞}^{∞} x[k]h[n − k] + Σ_{k=−∞}^{∞} x[k]z[n − k]
                        = x[n] ∗ h[n] + x[n] ∗ z[n]

3. The Associative Property

x[n] ∗ (h[n] ∗ z[n]) = (x[n] ∗ h[n]) ∗ z[n] (3.5)

The proof to this property is left as an exercise to the reader.


4. The Shifting property

x[n − m] ∗ h[n − q] = y[n − m − q]   (3.6)

where y[n] = x[n] ∗ h[n]. In words: if the input x is delayed by m samples and the signal h is delayed by q samples, then the convolution of the two signals introduces a total delay of m + q samples in the output signal.

5. Convolution with an Impulse

x[n] ∗ δ[n] = x[n] (3.7)

This property can be easily seen from the definition of convolution

x[n] ∗ δ[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k]   (3.8)

and the RHS in (3.8) is simply x[n] from (3.1).


6. The Width Property

If x[n] and h[n] have lengths of m and n elements respectively, then the
length of y[n] is m + n − 1 elements. In some special cases this property
could be violated. One should be careful to count samples with zero am-
plitudes that exist in between the samples. Furthermore, the appearance
of the first sample in the output will be located at the summation of the
locations of the first appearing samples of each function.

We shall evaluate the convolution sum first by analytical methods and later with graphical aid.

Example 3.4 Determine y[n] = x[n] ∗ h[n] for x[n] and h[n] as shown in Figure 3.11

Figure 3.11: Two DT signals x[n] and h[n]

 Solution Method 1 Express x[n] as weighted sum of impulse components

x[n] = δ[n + 1] − δ[n] + δ[n − 1] + δ[n − 2]

Since the system is an LTI one, the output is simply (Figure 3.12)

y[n] = h[n + 1] − h[n] + h[n − 1] + h[n − 2] (3.9)

Remark It would have been easier to determine y[n] = h[n] ∗ x[n]; try to verify this yourself. The answer would have been the same because of the commutative property. Note also that the output width is 3 + 4 − 1 = 6 elements and the first sample in the output appears at n = 0 + (−1) = −1, which follows from the width property.

Figure 3.12: Impulse response to individual components of x[n]

Method 2 By direct evaluation of the convolution sum

y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]

which can be written as

y[n] = ··· + x[−2]h[n + 2] + x[−1]h[n + 1] + x[0]h[n] + x[1]h[n − 1] + x[2]h[n − 2] + ···

and for x[n] in Figure 3.11 we have

y[n] = x[−1]h[n + 1] + x[0]h[n] + x[1]h[n − 1] + x[2]h[n − 2]
     = (1)h[n + 1] − (1)h[n] + (1)h[n − 1] + (1)h[n − 2]   (3.10)

which is the same as equation (3.9).



Graphical Procedure for the Convolution Sum


The direct analytical method to evaluate the convolution sum is simple and convenient to use as long as the number of samples is small. It is helpful to explore some graphical concepts that help in performing convolution of more complicated signals. If y[n] is the convolution of x[n] and h[n], then

y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]   (3.11)

It is crucial to note that the summation index in (3.11) is k, so that n is just like a constant. With this in mind, h[n − k] should be considered a function of k for purposes of performing the summation in (3.11). This consideration is also important when we sketch the graphical representations of the functions x[k] and h[n − k]. Both of these functions should be sketched as functions of k, not of n. To understand what the function h[n − k] looks like, let us start with the function h[k] and perform the following transformations

h[k] −(k → −k)→ h[−k] −(k → k − n)→ h[−(k − n)] = h[n − k]

The first transformation gives a time-reflected version of h[k]; the second shifts the already reflected function n units to the right for positive n (for negative n, the shift is to the left). The convolution operation can be performed as follows (a code sketch follows the list):

1. Reflect h[k] about the vertical axis (n = 0) to obtain h[−k].

2. Time shift h[−k] by n units to obtain h[n − k]. For n > 0, the shift is to the right; for n < 0, the shift is to the left.

3. Multiply x[k] by h[n − k] and add all the products to obtain y[n]. The procedure is repeated for each value of n over the range −∞ to ∞.
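The procedure can be sketched in code using the signals of Example 3.5 below (x starts at n = −1, h at n = 0). The double loop accumulates x[k]h[n − k] over every overlap, which is equivalent to the reflect-shift-multiply-sum recipe above; the starting index of the output follows the width and shifting properties.

    import numpy as np

    def conv_sum(x, x_start, h, h_start):
        # returns (y, y_start) where y[n] = sum over k of x[k] h[n - k]
        y = np.zeros(len(x) + len(h) - 1)
        for i, xk in enumerate(x):        # k runs over the support of x
            for j, hm in enumerate(h):    # n - k runs over the support of h
                y[i + j] += xk * hm
        return y, x_start + h_start

    x = [1.0, -1.0, 1.0, 1.0]   # x[n] for n = -1 .. 2
    h = [2.0, 1.0, 0.5]         # h[n] for n = 0 .. 2
    y, y_start = conv_sum(x, -1, h, 0)
    print(y_start, y)           # -1 [ 2.  -1.   1.5  2.5  1.5  0.5]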
Example 3.5  Determine y[n] = x[n] ∗ h[n] graphically, where x[n] and h[n] are defined as in Figure 3.11.

 Solution Before starting with the graphical procedure it is a good idea to determine where the first sample in the output will appear (this was found earlier to be at n = −1). Furthermore, the width property implies that the number of elements in y[n] is six. Thus, y[n] = 0 for −∞ < n ≤ −2 and y[n] = 0 for n ≥ 5, hence the only interesting range is −1 ≤ n ≤ 4. Now for n = −1

y[−1] = Σ_{k=−∞}^{∞} x[k]h[−1 − k]

and a negative n (n = −1) implies a time shift to the left for the function h[−1 − k]. Next, multiply x[k] by h[−1 − k] and add all the products to obtain y[−1] = 2. We keep repeating the procedure, incrementing n by one every time; note that incrementing n by one means shifting h[n − k] to the right by one sample. Figures 3.13 and 3.14 illustrate the procedure for n = −1, 0, 1 and 2. Continuing in this manner for n = 3 and 4, we obtain y[n] as illustrated in Figure 3.15. 

Figure 3.13: y[n] for n = −1 and 0 (y[−1] = 2, y[0] = −1)

Figure 3.14: y[n] for n = 1 and 2 (y[1] = 3/2, y[2] = 5/2)

Alternative Form of Graphical Procedure

The alternative procedure is basically the same; the only difference is that instead of presenting the data as graphical plots, we display it as sequences of numbers on tapes. The following example demonstrates the idea.

Figure 3.15: Graph of y[n]

Example 3.6 Determine y[n] = x[n] ∗ h[n], using the sliding tape method where x[n] and
h[n] are defined as in Figure 3.11.

Solution In this procedure we write the sequences x[n] and h[n] in the slots of two tapes. We leave the x tape fixed, corresponding to x[k]. The h[−k] tape is obtained by time-inverting the h[k] tape about the origin (n = 0), as shown in Figure 3.16.

Figure 3.16: Sliding tape procedure for DT convolution

Before going any further we have to align the slots such that the first element in the stationary x[k] tape corresponds to the first element of the already inverted h[−k] tape, as illustrated in Figure 3.17. We now shift the inverted tape by n slots, multiply values on the two tapes in adjacent slots, and add all the products to find y[n]. Figure 3.17 shows the cases for n = −1, 0 and 1. For the case of n = 1, for example,

y[1] = (1 × 0.5) + (−1 × 1) + (1 × 2) = 1.5

Continuing in the same fashion for all n, we obtain the same answer as in Figure 3.15. 
Figure 3.17: y[n] for n = −1, 0 and 1 using the sliding tape procedure



3.4 The Convolution Integral


Let us turn our attention now to CT LTI systems. We shall use the principle of superposition to derive the system's response to some arbitrary input x(t). In this approach, we express x(t) in terms of impulses. Suppose the CT signal x(t) in Figure 3.18 is an arbitrary input to some CT system. We begin by approximating x(t) with narrow rectangular pulses as depicted in Figure 3.19. This procedure gives us a staircase approximation of x(t) that improves as the pulse width is reduced. In the limit as the pulse width approaches zero, this representation becomes exact, and the rectangular pulses become impulses delayed by various amounts. The system response to the input x(t) is then given by the sum of the system's responses to each impulse component of x(t).

Figure 3.18: An arbitrary input

Figure 3.19: Staircase approximation to an arbitrary input

Figure 3.19 shows x(t) as a sum of rectangular pulses, each of width Δτ. In the limit as Δτ → 0, each pulse approaches an impulse having a strength equal to the area under that pulse. For example, the pulse located at t = nΔτ can be expressed as

x(nΔτ) rect((t − nΔτ)/Δτ)

and will approach an impulse at the same location with strength x(nΔτ)Δτ, which can be represented by

[x(nΔτ)Δτ] δ(t − nΔτ)   (3.12)

where the bracketed factor is the strength of the impulse.

If we know the impulse response of the system h(t), the response to the impulse in (3.12) will simply be [x(nΔτ)Δτ]h(t − nΔτ), since

δ(t) −H→ h(t)
δ(t − nΔτ) −H→ h(t − nΔτ)
[x(nΔτ)Δτ]δ(t − nΔτ) −H→ [x(nΔτ)Δτ]h(t − nΔτ)   (3.13)

The response in (3.13) represents the response to only one of the impulse components of x(t). The total response y(t) is obtained by summing all such components (with Δτ → 0):

lim_{Δτ→0} Σ_{n=−∞}^{∞} x(nΔτ)Δτ δ(t − nΔτ)  −H→  lim_{Δτ→0} Σ_{n=−∞}^{∞} x(nΔτ)Δτ h(t − nΔτ)

where the left-hand side is the input x(t) and the right-hand side is the output y(t),

and both sides, by definition, are integrals given by

∫_{−∞}^{∞} x(τ)δ(t − τ) dτ  −H→  ∫_{−∞}^{∞} x(τ)h(t − τ) dτ

with the left-hand side equal to x(t) and the right-hand side equal to y(t). In summary, the response y(t) to the input x(t) is given by

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ   (3.14)

and the integral in (3.14) is known as the convolution integral, denoted by y(t) = x(t) ∗ h(t).
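Numerically, the convolution integral can be approximated by sampling both signals and scaling a discrete convolution by the sample spacing. The sketch below (an approximation, not part of the notes) uses the two unit-height pulses of Example 3.7 further on and reproduces the triangular output derived there.

    import numpy as np

    dt = 0.001
    t = np.arange(0, 2, dt)        # support of each pulse
    x = np.ones_like(t)            # x(t) = 1 on [0, 2)
    h = np.ones_like(t)            # h(t) = 1 on [0, 2)

    y = dt * np.convolve(x, h)     # Riemann-sum approximation of (3.14)
    t_y = np.arange(len(y)) * dt   # output starts at 0 + 0 (width property)

    print(y[t_y.searchsorted(1.0)])  # ~1.0, matching y(t) = t on 0 <= t < 2
    print(y.max())                   # ~2.0, the peak value y(2)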

Properties of the Convolution Integral

The properties of the convolution integral are the same as those of the convolution sum and are stated here for completeness.

1. The Commutative Property

   x(t) ∗ h(t) = h(t) ∗ x(t)   (3.15)

   This property can be easily proven by starting with the definition of convolution

   y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ

   and letting λ = t − τ, so that τ = t − λ and dτ = −dλ. We obtain

   x(t) ∗ h(t) = ∫_{−∞}^{∞} x(t − λ)h(λ) dλ = ∫_{−∞}^{∞} h(λ)x(t − λ) dλ = h(t) ∗ x(t)

2. The Distributive Property


x(t) ∗ (h(t) + z(t)) = x(t) ∗ h(t) + x(t) ∗ z(t) (3.16)

3. The Associative Property


x(t) ∗ (h(t) ∗ z(t)) = (x(t) ∗ h(t)) ∗ z(t) (3.17)

4. The Shifting property


x(t − T1 ) ∗ h(t − T2 ) = y(t − T1 − T2 ) (3.18)

5. Convolution with an Impulse

   x(t) ∗ δ(t) = x(t)   (3.19)

   By the definition of convolution

   x(t) ∗ δ(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ   (3.20)

   Because δ(t − τ) is an impulse located at τ = t, according to the sampling property of the impulse the integral in (3.20) is the value of x(τ) at τ = t, that is, x(t).

6. The Width Property

If x(t) has a duration of T1 and h(t) has a duration of T2, then the duration of y(t) is T1 + T2. Furthermore, the output first appears at the sum of the times at which the two functions first appear.

7. The Scaling Property

   If y(t) = x(t) ∗ h(t), then y(at) = |a| [x(at) ∗ h(at)].

   This property of the convolution integral has no counterpart for the convolution sum.

The Graphical Procedure

The steps in evaluating the convolution integral are parallel to those followed in evaluating the convolution sum. If y(t) is the convolution of x(t) and h(t), then

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ   (3.21)

One of the crucial points to remember here is that the integration is performed with respect to τ, so that t is just like a constant. This consideration is also important when we sketch the graphical representations of the functions x(τ) and h(t − τ). Both of these functions should be sketched as functions of τ, not of t. The convolution operation can be performed as follows:

1. Keep the function x(τ) fixed.

2. Reflect h(τ) about the vertical axis (τ = 0) to obtain h(−τ).

3. Time shift h(−τ) by t0 seconds to obtain h(t0 − τ). For t0 > 0, the shift is to the right; for t0 < 0, the shift is to the left.

4. Find the area under the product of x(τ) and h(t0 − τ) to obtain y(t0), the value of the convolution at t = t0.

5. The procedure is repeated for each value of t over the range −∞ to ∞.

Example 3.7  Determine y(t) = x(t) ∗ h(t) for x(t) and h(t) as shown in Figure 3.20.

Figure 3.20: CT signals to be convolved



 Solution Figure 3.21(a) shows x(τ) and h(−τ) as functions of τ. The function h(t − τ) is now obtained by shifting h(−τ) by t. If t is positive, the shift is to the right; if t is negative, the shift is to the left. Figure 3.21(a) shows that for negative t, h(t − τ) does not overlap x(τ), and the product x(τ)h(t − τ) = 0, so that y(t) = 0 for t < 0. Figure 3.21(b) shows the situation for 0 ≤ t < 2; here x(τ) and h(t − τ) do overlap and the product is nonzero only over the interval 0 ≤ τ ≤ t (shaded area). Therefore,

y(t) = ∫_{0}^{t} x(τ)h(t − τ) dτ,  0 ≤ t < 2

All we need to do now is substitute the correct expressions for x(τ) and h(t − τ) in this integral:

y(t) = ∫_{0}^{t} (1)(1) dτ = t,  0 ≤ t < 2

As we keep right-shifting h(−τ) to obtain h(t − τ), the next interesting range for t is 2 ≤ t ≤ 4 (Figure 3.21(c)). Clearly, x(τ) overlaps with h(t − τ) over the shaded interval t − 2 ≤ τ ≤ 2. Therefore,

y(t) = ∫_{t−2}^{2} x(τ)h(t − τ) dτ = ∫_{t−2}^{2} (1)(1) dτ = 4 − t,  2 ≤ t ≤ 4

Figure 3.21: Convolution of x(t) and h(t)



It is clear that for t > 4, x(τ) will not overlap h(t − τ), which implies y(t) = 0 for t > 4. Therefore the result of the convolution is (Figure 3.22)

y(t) = { 0,      t < 0
         t,      0 ≤ t < 2
         4 − t,  2 ≤ t ≤ 4
         0,      t > 4 }

Figure 3.22: Convolution of x(t) and h(t)

Hint To check your answer, use the property that the area under the convolution integral equals the product of the areas of the two signals entering into the convolution. The area can be computed by integrating equation (3.14) over the interval −∞ < t < ∞, giving

∫_{−∞}^{∞} y(t) dt = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(τ)h(t − τ) dτ dt
                   = ∫_{−∞}^{∞} x(τ) [ ∫_{−∞}^{∞} h(t − τ) dt ] dτ
                   = ∫_{−∞}^{∞} x(τ) [area under h(t)] dτ
                   = area under x(t) × area under h(t)

This check also applies to DT convolution:

Σ_{n=−∞}^{∞} y[n] = Σ_{m=−∞}^{∞} x[m] Σ_{n=−∞}^{∞} h[n − m]
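A quick numerical check of this area property (a sketch) for the pulses of Example 3.7, where each signal has area 2, so the area under y(t) should be 4:

    import numpy as np

    dt = 0.001
    x = np.ones(int(2 / dt))        # unit-height pulse of duration 2
    h = np.ones(int(2 / dt))
    y = dt * np.convolve(x, h)

    print((np.sum(x) * dt) * (np.sum(h) * dt))  # 4.0, product of the areas
    print(np.sum(y) * dt)                       # ~4.0, area under y(t)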

Example 3.8  Determine graphically y(t) = x(t) ∗ h(t) for x(t) = e^{−t}u(t) and h(t) = e^{−2t}u(t).

 Solution Figure 3.23 shows x(t) and h(t), and Figure 3.24(a) shows x(τ) and h(−τ) as functions of τ. The function h(t − τ) is obtained by shifting h(−τ) by t. Clearly, h(t − τ) does not overlap x(τ) for t < 0, and the product x(τ)h(t − τ) = 0, so that y(t) = 0 for t < 0. Figure 3.24(b) shows the situation for t ≥ 0. Here x(τ) and h(t − τ) overlap over the shaded interval (0, t). Therefore,

y(t) = ∫_{0}^{t} e^{−τ} e^{−2(t−τ)} dτ
     = e^{−2t} ∫_{0}^{t} e^{τ} dτ
     = e^{−t} − e^{−2t},  t ≥ 0

Figure 3.23: Signals x(t) and h(t)



Figure 3.24: Convolution of x(t) and h(t)

Therefore the output y(t) is (Figure 3.25)

y(t) = (e^{−t} − e^{−2t})u(t) 

Figure 3.25: y(t)

Example 3.9  Compute the convolution x(t) ∗ h(t), where x(t) and h(t) are as in Figure 3.26.

Figure 3.26: Signals x(t) and h(t)

Here, x(t) has a simpler mathematical expression than that of h(t), so it is preferable to invert x(t). Hence, we shall determine h(t) ∗ x(t) rather than x(t) ∗ h(t). According to Figure 3.26, x(t) and h(t) are

x(t) = { 1,  −1 ≤ t < 0
         0,  otherwise }

h(t) = { t + 1,  −1 ≤ t < 0
         1,      0 ≤ t < 1
         0,      otherwise }

Figure 3.27 demonstrates the overlapping of the two signals h(τ) and x(t − τ). We can see that for t < −2 the product h(τ)x(t − τ) is always zero. For −2 ≤ t < −1,

y(t) = ∫_{−1}^{t+1} (τ + 1)(1) dτ = [τ²/2 + τ]_{−1}^{t+1} = (t + 2)²/2,  −2 ≤ t < −1

Figure 3.27: Graphical interpretation of h(t) ∗ x(t)

For −1 ≤ t < 0, the product is shown in Figure 3.28(a) and

y(t) = ∫_{t}^{0} (τ + 1)(1) dτ + ∫_{0}^{t+1} (1)(1) dτ = 1 − t²/2,  −1 ≤ t < 0

For 0 ≤ t < 1, the product is shown in Figure 3.28(b) and

y(t) = ∫_{t}^{1} (1)(1) dτ = 1 − t,  0 ≤ t < 1

For t ≥ 1 the product is always zero. Summarizing, we have

y(t) = { 0,            t < −2
         (t + 2)²/2,   −2 ≤ t < −1
         1 − t²/2,     −1 ≤ t < 0
         1 − t,        0 ≤ t < 1
         0,            t ≥ 1 }

Figure 3.28: Graphical representation of h(t) ∗ x(t)

3.5 Properties of LTI Systems


The impulse response of an LTI system represents a complete description of the
characteristics of the system.

Memoryless LTI Systems


In section 3.2.1 we defined a system to be memoryless if its output at any instant in time depends only on the value of the input at the same instant in time. There we saw that the input-output relation of a memoryless LTI system is

y(t) = Kx(t)   (3.22)

for some constant K. By setting x(t) = δ(t) in (3.22), we see that this system has the impulse response

h(t) = Kδ(t)

In other words the only memoryless systems are what we call constant gain
systems.

Invertible LTI Systems


Recall that a system is invertible only if there exists an inverse system which enables the reconstruction of the input given the output. If h_inv(t) represents the impulse response of the inverse system, then in terms of the convolution integral we must have

y(t) = x(t) ∗ h(t) ∗ h_inv(t) = x(t)

which is only possible if

h(t) ∗ h_inv(t) = h_inv(t) ∗ h(t) = δ(t)

Causal LTI Systems


Using the convolution integral,

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ

for a CT system to be causal, y(t) must not depend on x(τ) for τ > t. We can see that this will be so if

h(t − τ) = 0  for  τ > t

Letting λ = t − τ, this implies

h(λ) = 0  for  λ < 0

In this case the convolution integral becomes

y(t) = ∫_{−∞}^{t} x(τ)h(t − τ) dτ = ∫_{0}^{∞} h(τ)x(t − τ) dτ

Stable LTI Systems


A CT system is stable if and only if every bounded input produces a bounded output. Consider a bounded input x(t) such that |x(t)| < B for all t. Suppose that this input is applied to an LTI system with impulse response h(t). Then

|y(t)| = | ∫_{−∞}^{∞} h(τ)x(t − τ) dτ |
       ≤ ∫_{−∞}^{∞} |h(τ)| |x(t − τ)| dτ
       ≤ B ∫_{−∞}^{∞} |h(τ)| dτ

Therefore, the system is stable if

∫_{−∞}^{∞} |h(τ)| dτ < ∞
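This condition is easy to probe numerically. The sketch below truncates the integral at a large time T (an approximation): h1(t) = e^{−2t}u(t) from Example 3.8 gives a finite value (1/2), while h2(t) = u(t) keeps growing with T, signalling that it is not BIBO stable.

    import numpy as np

    dt, T = 0.001, 100.0
    t = np.arange(0, T, dt)

    h1 = np.exp(-2 * t)             # integral of |h1| converges to 1/2
    h2 = np.ones_like(t)            # integral of |h2| grows like T

    print(np.sum(np.abs(h1)) * dt)  # ~0.5, so BIBO stable
    print(np.sum(np.abs(h2)) * dt)  # ~100, grows without bound as T increases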
−∞
Chapter 4

The Fourier Series

In Chapter 3 we saw how to obtain the response of a linear time invariant sys-
tem to an arbitrary input represented in terms of the impulse function. The
response was obtained in the form of the convolution integral. In this chapter
we explore other ways of expressing an input signal in terms of other signals.
In particular we are interested in representing periodic signals in terms of com-
plex exponentials, or equivalently, in terms of sine and cosine waveforms. This
representation of signals leads to the Fourier series, named after the French
physicist Jean Baptiste Fourier. Fourier was the first to suggest that peri-
odic signals could be represented by a sum of sinusoids. The concept is really
simple: consider a periodic signal with fundamental period T0 and fundamental frequency ω0 = 2π/T0; this periodic signal can be expressed as a linear combination of harmonically related sinusoids as shown in Figure 4.1. In Fourier

Figure 4.1: The concept of representing a periodic signal as a linear combination of sinusoids


series representation of a signal, the higher-frequency sinusoidal components have frequencies that are integer multiples of the fundamental frequency. The integer multiple is called the harmonic number; for example, the function cos(2πkft) is the kth harmonic cosine, and its frequency is kf. The idea of the Fourier series demonstrated in Figure 4.1 uses a constant, sines and cosines to represent the original function, and is thus called the trigonometric Fourier series. Another form of the Fourier series is the complex form; here the original periodic function is represented as a combination of harmonically related complex exponentials. A set of harmonically related complex exponentials forms an orthogonal basis by which periodic signals can be represented, a concept explored next.

4.1 Orthogonal Representations of Signals


In this section we show a way of representing a signal as a sum of orthogonal
signals, such representation simplifies calculations involving signals. We can
visualize the signal as a vector in an orthogonal coordinate system, with the
orthogonal waveforms being the unit coordinates.

4.1.1 Orthogonal Vector Space


Vectors, functions and matrices can be expressed in more than one set of coordinates, which we usually call a vector space. A three-dimensional Cartesian coordinate system (Figure 4.2) is an example of a vector space, denoted by R³. The three axes of three-dimensional Cartesian coordinates, conventionally denoted the x, y, and z axes, form the basis of the vector space R³. A very natural and simple basis is the set of vectors e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1). Any vector in R³ can be written in terms of this basis. A vector v = (a, b, c) in R³, for example, can be written uniquely as the linear combination v = a e1 + b e2 + c e3, where a, b and c are simply called the coefficients. We can obtain the coefficients with respect to the basis using the inner product, which for vectors is simply the dot product:

⟨v, e1⟩ = vᵀ e1 = a
⟨v, e2⟩ = vᵀ e2 = b
⟨v, e3⟩ = vᵀ e3 = c

Figure 4.2: 3D Cartesian coordinate system

We say the basis is orthogonal if the inner product of any two different vectors of the basis set is zero; to visualize, they are simply at right angles. It is clear that the vectors e1, e2 and e3 form an orthogonal basis, since

⟨e1, e2⟩ = e1ᵀ e2 = 0
⟨e2, e3⟩ = e2ᵀ e3 = 0
⟨e3, e1⟩ = e3ᵀ e1 = 0

4.1.2 Orthogonal Signal Space


In the previous section we saw that a vector can be represented as a sum of orthogonal vectors, which form the coordinate system of a vector space. The problem for signals is analogous, so we start by generalizing the concept of the inner product to signals. The inner product of two real-valued functions f(t) and g(t) over an interval (a, b) is defined as

⟨f, g⟩ = ∫_{a}^{b} f(t)g(t) dt   (4.1)

and for complex-valued functions the inner product is defined as

⟨f, g⟩ = ∫_{a}^{b} f(t)g*(t) dt   (4.2)

where g*(t) stands for the complex conjugate of the signal. For any basis set to be orthogonal, the inner product of every single element of the set with every other element must be zero. A set of signals Φi, i = 0, ±1, ±2, ···, is said to be orthogonal over an interval (a, b) if

∫_{a}^{b} Φm(t)Φn*(t) dt = { En,  m = n
                            0,   m ≠ n }   (4.3)

where En is simply the signal energy over the interval (a, b). If the energies En = 1 for all n, the Φn(t) are said to be orthonormal signals. An orthogonal set can always be normalized by dividing each signal in the set by √En.

Example 4.1  Show that the signals Φm(t) = sin mt, m = 1, 2, 3, ···, form an orthogonal set on the interval −π < t < π.

 Solution We start by showing that the inner product of each single element of the set with every other element is zero:

∫_{−π}^{π} Φm(t)Φn(t) dt = ∫_{−π}^{π} (sin mt)(sin nt) dt
                         = (1/2) ∫_{−π}^{π} cos(m − n)t dt − (1/2) ∫_{−π}^{π} cos(m + n)t dt
                         = { π,  m = n
                             0,  m ≠ n }

Since the energy in each signal equals π, the following set of signals forms an orthonormal set over the interval −π < t < π:

sin t/√π, sin 2t/√π, sin 3t/√π, ··· 

Example 4.2  Show that the signals Φk(t) = e^{jkω0 t}, k = 0, ±1, ±2, ···, form an orthogonal set on the interval (0, T), where T = 2π/ω0.

 Solution It is easy to show that

∫_{0}^{T} Φk(t)Φl*(t) dt = ∫_{0}^{T} e^{jkω0 t} e^{−jlω0 t} dt = ∫_{0}^{T} e^{j(k−l)ω0 t} dt = { T,  k = l
                                                                                               0,  k ≠ l }

and hence the signals (1/√T) e^{jkω0 t} form an orthonormal set over the interval (0, T). Evaluating the integral for the case k ≠ l is not trivial and is shown below:

∫_{0}^{T} e^{j(k−l)ω0 t} dt = [ e^{j(k−l)ω0 t} / (j(k − l)ω0) ]_{0}^{T}
                            = [ e^{j(k−l)ω0 T} − 1 ] / (j(k − l)ω0),  with ω0 = 2π/T,
                            = 0

since e^{j2π(k−l)} = 1 for k ≠ l. 

Now consider expressing a signal x(t) with finite energy over the interval (a, b) by an orthonormal set of signals Φi(t) over the same interval as

x(t) = Σ_{i=−∞}^{∞} ci Φi(t)   (4.4)

The series representation in (4.4) is called a generalized Fourier series of x(t). We can visualize this as being similar to expressing a vector as a linear combination of the orthogonal basis. In an analogous fashion, here we are expressing a signal as a linear combination of an orthogonal or orthonormal set of signals. The question that remains is how to find the coefficients ci for such a linear combination. It turns out that computing ci is really simple: just multiply (4.4) by Φk*(t) and integrate over the range of definition of x(t). Therefore,

∫_{a}^{b} x(t)Φk*(t) dt = ∫_{a}^{b} Σ_{i=−∞}^{∞} ci Φi(t)Φk*(t) dt
                        = Σ_{i=−∞}^{∞} ci ∫_{a}^{b} Φi(t)Φk*(t) dt   (4.5)

From (4.3), and since Φi(t) is an orthonormal set, (4.5) simplifies to

ck = ∫_{a}^{b} x(t)Φk*(t) dt,  k = 0, ±1, ±2, ···   (4.6)

noting that the summation in (4.5) has a value only when i = k and is always zero otherwise. Note also that (4.6) is the inner product of the signal with the orthonormal set. If the set Φi(t) is an orthogonal set, then the coefficients in (4.6) become

ck = (1/Ek) ∫_{a}^{b} x(t)Φk*(t) dt,  k = 0, ±1, ±2, ···   (4.7)
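These coefficient formulas are easy to evaluate numerically as inner products. The sketch below uses the orthonormal set sin(kt)/√π on (−π, π) from Example 4.1; the test signal is hypothetical, chosen so the expected coefficients are √π, 0 and −0.5√π.

    import numpy as np

    dt = 1e-4
    t = np.arange(-np.pi, np.pi, dt)

    phi = lambda k: np.sin(k * t) / np.sqrt(np.pi)   # orthonormal basis signals
    x = np.sin(t) - 0.5 * np.sin(3 * t)              # hypothetical test signal

    c = [np.sum(x * phi(k)) * dt for k in (1, 2, 3)] # equation (4.6)
    print(np.round(c, 4))        # ~[ 1.7725  0.     -0.8862]
    print(np.sqrt(np.pi))        # 1.7725..., for comparison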

Example 4.3  Consider the signal x(t) defined over the interval (0, 3) as shown in Figure 4.3.

Figure 4.3: A signal x(t) defined over the interval (0, 3)

It is possible to represent this signal in terms of an orthogonal set of basis signals defined over the interval (0, 3). Figure 4.4 shows a set of three orthogonal signals used to represent the signal x(t).

Figure 4.4: A set of three orthogonal signals

The coefficients that represent the signal x(t), obtained using equation (4.6), are given by

c1 = ∫_{0}^{3} x(t)Φ1*(t) dt = 2
c2 = ∫_{0}^{3} x(t)Φ2*(t) dt = 0
c3 = ∫_{0}^{3} x(t)Φ3*(t) dt = 1

The signal x(t) can be represented in terms of the Φi by

x(t) = Σ_{i=1}^{3} ci Φi(t) = c1Φ1 + c2Φ2 + c3Φ3 = 2Φ1 + Φ3 

It is worth emphasizing here that the choice of the basis is not unique and many
other possibilities exist.

4.2 Exponential Fourier Series


It was shown in Example 4.2 that the set of exponentials e^{jkω0 t} (k = 0, ±1, ±2, ···) is orthogonal over an interval (0, T), where T = 2π/ω0. If we select such a set as basis functions, then according to (4.4)

x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0 t}   (4.8)

where, from (4.7), the cn are complex constants given by

cn = (1/T) ∫_{0}^{T} x(t) e^{−jnω0 t} dt,  n = 0, ±1, ±2, ···   (4.9)

Each term in the series has period T; hence, if the series converges, its sum is periodic with period T. Such a series is called the complex exponential Fourier series, and the cn are called the Fourier coefficients or spectral coefficients. Furthermore, the interval of integration can be replaced by any other interval of length T. Recall that we denote integration over an interval of length T by ∫_T.

Example 4.4 Find the exponential Fourier series for the signal x(t) in Figure 4.5

Figure 4.5: A periodic signal x(t)

 Solution In this case T = 2 and ω0 = 2π/T = π. The Fourier coefficients are

cn = (1/2) ∫_{−1}^{1} x(t) e^{−jnπt} dt
   = (1/2) ∫_{−1}^{0} −K e^{−jnπt} dt + (1/2) ∫_{0}^{1} K e^{−jnπt} dt
   = (K/2) [ (1 − e^{jnπ})/(jnπ) + (e^{−jnπ} − 1)/(−jnπ) ]
   = (K/jnπ) [ 1 − (1/2)e^{jnπ} − (1/2)e^{−jnπ} ]
   = { 2K/(jnπ),  n odd
       0,         n even } 
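The result can be verified numerically by discretizing (4.9); the sketch below compares the numerical cn with 2K/(jnπ) for odd n and 0 for even n, taking K = 1.

    import numpy as np

    K, T = 1.0, 2.0
    w0 = 2 * np.pi / T
    dt = 1e-5
    t = np.arange(-1, 1, dt)
    x = np.where(t < 0, -K, K)     # one period of the square wave

    def c(n):
        return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

    for n in (1, 2, 3):
        analytic = 2 * K / (1j * n * np.pi) if n % 2 else 0.0
        print(n, np.round(c(n), 4), np.round(analytic, 4))  # the pairs agree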

4.2.1 The Frequency Spectra (Exponential)


Also called the line spectra, these are separate plots of the magnitude of cn versus ω (or harmonic number), the magnitude spectrum, and the phase of cn versus ω (or harmonic number), the phase spectrum. In special cases where we allow the magnitude spectrum to take on negative as well as positive values, the spectrum is called an amplitude spectrum. These two plots together are the frequency or line spectra (since amplitudes and phases are indicated by vertical lines) of the periodic signal x(t). The Fourier coefficient cn is complex in general; this requires that the coefficients cn be expressed in polar form as |cn| e^{j∠cn}. Consider the signal of Example 4.4, for which the Fourier coefficients were found to be

cn = { 2K/(jnπ),  n odd
       0,         n even }

The magnitude spectrum is

|cn| = { 2K/(|n|π),  n odd
         0,          n even }

paying particular attention to the case n = 0, where it can be shown using l'Hôpital's rule that c0 = 0. The phase spectrum of x(t) is given by

∠cn = { −π/2,  n = (2m − 1), m = 1, 2, ···
        0,     n = 2m, m = 0, 1, 2, ···
        π/2,   n = −(2m − 1), m = 1, 2, ··· }

The line spectra of x(t) are shown in Figure 4.6. Note that the magnitude spectrum has even symmetry while the phase spectrum has odd symmetry. The spectrum exists only at n = 0, ±1, ±2, ±3, ···, corresponding to ω = 0, ±ω0, ±2ω0, ±3ω0, ···; i.e., only at discrete values of ω. It is therefore a discrete spectrum, and it is very common to see the spectrum written in terms of the discrete unit impulse function. The magnitude spectrum |cn| in Figure 4.6 could be expressed as

|cn| = ··· + (2K/3π)δ[n + 3] + (2K/π)δ[n + 1] + (2K/π)δ[n − 1] + (2K/3π)δ[n − 3] + ···

Figure 4.6: Line spectra for the periodic signal x(t) of Example 4.4

Example 4.5 Find the exponential Fourier series of the impulse train δT (t) shown in Figure
4.7.

Figure 4.7: Impulse train

 Solution From (4.9), the Fourier coefficients are


Z T /2
1
cn = δT (t)ejnω0 t dt
T −T /2
Z T /2
1 1 jnω0 t 1
= δ(t)ejnω0 t dt = e =
T −T /2 T t=0 T

The result is based on the sifting property of the impulse function


Z ∞
x(t)δ(t − τ )dt = x(τ )
−∞

The exponential form of the Fourier series is given by


X∞
1 jnω0 t
δT (t) = e
n=−∞
T

The amplitude spectrum for this function is given in Figure 4.8. Note that since
the Fourier coefficients are real, there is no need for the phase spectrum. 

Figure 4.8: Amplitude spectrum for an impulse train



Figure 4.9: A periodic signal x(t)

Example 4.6  Find the exponential Fourier series for the square signal x(t) in Figure 4.9.

 Solution In this case ω0 = 2π/T. The Fourier coefficient for the dc component is

c0 = (1/T) ∫_T x(t) dt = (1/T) ∫_{−T/2}^{T/2} x(t) dt = (1/T) ∫_{−T1}^{T1} 1 dt = 2T1/T

The Fourier coefficients are

cn = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jnω0 t} dt
   = (1/T) ∫_{−T1}^{T1} e^{−jnω0 t} dt = (1/(jTnω0)) [ e^{jnω0 T1} − e^{−jnω0 T1} ]
   = (2/(Tnω0)) sin(nω0 T1) = (1/(nπ)) sin(n(2π/T)T1)
   = (2T1/T) sinc(2nT1/T)   (4.10)

Consider the special case of T1 = π/2 and T = 2π in Example 4.6. Hence,

cn = (1/2) sinc(n/2)
The line spectra for this particular case are shown in Figure 4.10. The frequency spectra provide an alternative description of the signal x(t), namely the frequency-domain description. The time-domain description of x(t) is the one depicted in Figure 4.9, while the frequency-domain description is the one shown in Figure 4.10. One can say the signal has a dual identity: the time-domain identity x(t) and the frequency-domain identity (the frequency spectra). Together, the two identities provide a better understanding of the signal. We notice some interesting features of these spectra. First, the spectra exist for positive as well as negative values of ω (the frequency). Second, the magnitude spectrum is an even function of ω and the phase spectrum is an odd function of ω, a property we explain in detail later.

Figure 4.10: The magnitude and phase spectra of x(t) for T1 = π/2 and T = 2π

What is Negative Frequency?

The existence of the spectrum at negative frequencies (i.e. double-sided) is


somewhat disturbing, because, by definition, the frequency is a positive quantity.
How do we interpret a negative frequency? And how do we then interpret
the spectral plots for negative values of ω? First we have to accept the fact
that this situation only arises when dealing with complex exponentials. The
frequency spectra are a graphical representation of coefficients cn as a function
of ω. Existence of the spectrum at ω = −nω0 is simply an indication of the fact
that an exponential component e−jnω0 t exists in the series.

4.3 Trigonometric Fourier Series

In the previous section we represented signals using a set of complex exponentials


which is orthogonal over an interval (0, T ). A question that may well be asked
at this point: If we know that a given function x(t) is real-valued, isn’t there an
equivalent way of expressing a Fourier series representation of x(t) using a set
of real-valued orthogonal functions? In this section we show that an orthogonal
trigonometric signal set can be selected as basis function to express a signal x(t)
over any interval of duration T . Consider a signal set

Φi (t) = {1, cos ω0 t, cos 2ω0 t, ∙ ∙ ∙ , cos ω0 t, ∙ ∙ ∙ ;


sin ω0 t, sin ω0 t, ∙ ∙ ∙ , sin nω0 t, ∙ ∙ ∙ } (4.11)

It can easily be shown that this set is orthogonal over any interval of duration T = 2π/ω0, which is the fundamental period. Therefore,

∫_T cos nω0t cos mω0t dt = { 0,    m ≠ n
                             T/2,  m = n ≠ 0 }   (4.12)

∫_T sin nω0t sin mω0t dt = { 0,    m ≠ n
                             T/2,  m = n ≠ 0 }   (4.13)

∫_T sin nω0t cos mω0t dt = 0  for all n and m   (4.14)

If we select the Φi(t) in (4.11) as basis functions, then we can express a signal x(t) by a trigonometric Fourier series over any interval of duration T as

x(t) = a0 + a1 cos ω0t + a2 cos 2ω0t + ··· + b1 sin ω0t + b2 sin 2ω0t + ···   (4.15)

or

x(t) = a0 + Σ_{n=1}^{∞} [an cos nω0t + bn sin nω0t]   (4.16)

Using

cn = (1/En) ∫_T x(t)Φn(t) dt

we can determine the coefficients a0, an and bn; thus

a0 = c0 = (1/T) ∫_T x(t) dt   (the average value of x(t) over one period)

an = (2/T) ∫_T x(t) cos nω0t dt,  n = 1, 2, 3, ···   (4.17)

bn = (2/T) ∫_T x(t) sin nω0t dt,  n = 1, 2, 3, ···   (4.18)

In order to see the close connection of the trigonometric Fourier series with the exponential Fourier series, we shall re-derive the trigonometric Fourier series from the exponential Fourier series. For a real-valued signal x(t), the complex conjugate of cn is given by

cn* = [ (1/T) ∫_{0}^{T} x(t) e^{−jnω0 t} dt ]*
    = (1/T) ∫_{0}^{T} x(t) e^{jnω0 t} dt   (recall x(t) = x*(t) for a real signal)
    = c−n

Hence,

|c−n| = |cn|  and  ∠c−n = −∠cn   (4.19)

and this explains why the amplitude spectrum has even symmetry while the phase spectrum has odd symmetry. This property can be clearly seen in the examples of section 4.2.1. Furthermore, it allows us to regroup the exponential series in (4.8) as follows:

x(t) = c0 + Σ_{n=−∞}^{−1} cn e^{jnω0 t} + Σ_{n=1}^{∞} cn e^{jnω0 t}
     = c0 + Σ_{n=1}^{∞} c−n e^{−jnω0 t} + Σ_{n=1}^{∞} cn e^{jnω0 t}
     = c0 + Σ_{n=1}^{∞} [ c−n e^{−jnω0 t} + cn e^{jnω0 t} ]   (4.20)

Since cn in general is complex, we can express it in rectangular form as cn = α + jβ (so that c−n = cn* = α − jβ) and substitute into (4.20). Hence,

x(t) = c0 + Σ_{n=1}^{∞} [ (α − jβ) e^{−jnω0 t} + (α + jβ) e^{jnω0 t} ]
     = c0 + Σ_{n=1}^{∞} [ α (e^{jnω0 t} + e^{−jnω0 t}) + jβ (e^{jnω0 t} − e^{−jnω0 t}) ]

Using Euler's identities we have

x(t) = c0 + Σ_{n=1}^{∞} [ α (2 cos nω0t) + jβ (2j sin nω0t) ]
     = c0 + Σ_{n=1}^{∞} [ 2α cos nω0t − 2β sin nω0t ]   (4.21)

Equation (4.21) can be written as

x(t) = a0 + Σ_{n=1}^{∞} [an cos nω0t + bn sin nω0t]

where the coefficients a0, an and bn are given by

a0 = c0 = (1/T) ∫_T x(t) dt
an = 2α = 2 Re{cn} = (2/T) ∫_T x(t) cos nω0t dt
bn = −2β = −2 Im{cn} = (2/T) ∫_T x(t) sin nω0t dt

which is clearly as derived before. The result above becomes clearer when we substitute e^{−jnω0 t} = cos nω0t − j sin nω0t in

cn = (1/T) ∫_{0}^{T} x(t) e^{−jnω0 t} dt

to obtain

cn = (1/T) ∫_T x(t)(cos nω0t − j sin nω0t) dt
   = (1/T) ∫_T x(t) cos nω0t dt − j (1/T) ∫_T x(t) sin nω0t dt
   = α + jβ = (1/2)an − j(1/2)bn   (4.22)

4.3.1 Compact (Combined) Trigonometric Fourier Series


The trigonometric Fourier series in (4.16) can be written in a compact form by combining the sine and cosine terms into a single sinusoid. Since the sinusoidal terms are of the same frequency, we can use the trigonometric identity

an cos nω0t + bn sin nω0t = An cos(nω0t + θn)

where

An = √(an² + bn²)   (4.23)

θn = tan⁻¹(−bn/an)   (4.24)

The trigonometric Fourier series in (4.16) can now be expressed in the compact form of the trigonometric Fourier series as

x(t) = a0 + Σ_{n=1}^{∞} An cos(nω0t + θn)   (4.25)

where the coefficients An and θn are computed using (4.23) and (4.24), with an and bn given by (4.17) and (4.18).
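A small sketch converting (an, bn) into (An, θn) as in (4.23) and (4.24): atan2 is used so the phase lands in the correct quadrant, since an = An cos θn and bn = −An sin θn. The sample values correspond to the n = 3 term of Example 4.7 below.

    import numpy as np

    def compact(a_n, b_n):
        A_n = np.hypot(a_n, b_n)          # sqrt(a_n**2 + b_n**2)
        theta_n = np.arctan2(-b_n, a_n)   # quadrant-aware tan^-1(-b_n/a_n)
        return A_n, theta_n

    print(compact(-2 / (3 * np.pi), 0.0)) # (0.2122, -3.1416): A3 = 2/(3 pi), theta3 = -pi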

Example 4.7  Find the compact trigonometric Fourier series for the periodic square wave x(t) illustrated in Figure 4.9, where T1 = π/2 and T = 2π.

 Solution The period is T = 2π and ω0 = 2π/T = 1. Therefore

x(t) = a0 + Σ_{n=1}^{∞} [an cos nt + bn sin nt]

where

a0 = (1/T) ∫_T x(t) dt = (1/2π) ∫_{−π/2}^{π/2} dt = 1/2

Note that a0 could have easily been deduced by inspecting x(t) in Figure 4.9; it is the average value of x(t) over one period. Next,

an = (2/2π) ∫_{−π/2}^{π/2} cos nt dt = (2/nπ) sin(nπ/2)
   = { 0,      n even
       2/πn,   n = 1, 5, 9, 13, ···
       −2/πn,  n = 3, 7, 11, 15, ··· }

and

bn = (2/2π) ∫_{−π/2}^{π/2} sin nt dt = 0

Therefore

x(t) = 1/2 + Σ_{n odd} an cos nt
     = 1/2 + (2/π) [ cos t − (1/3) cos 3t + (1/5) cos 5t − (1/7) cos 7t + ··· ]   (4.26)

Only the cosine terms appear in the trigonometric series. The series is therefore already in compact form, except that the amplitudes of alternating harmonics are negative. Now, by definition, the amplitudes An are positive. The negative sign can be expressed instead by introducing a phase of π radians, since

−cos α = cos(α ± π)

Using this fact, we can express the series in (4.26) as

x(t) = 1/2 + (2/π) [ cos t + (1/3) cos(3t − π) + (1/5) cos 5t + (1/7) cos(7t − π) + ··· ]

This is now the Fourier series in compact form, where

a0 = 1/2

An = { 0,     n even
       2/πn,  n odd }

θn = { 0,   for all n except n = 3, 7, 11, 15, ···
       −π,  n = 3, 7, 11, 15, ··· } 

4.3.2 The Frequency Spectrum (Trigonometric)


The compact trigonometric Fourier series in (4.25) indicates that a periodic signal x(t) can be expressed as a sum of sinusoids of frequencies 0 (dc), ω0, 2ω0, ···, nω0, ···, whose amplitudes are A0, A1, A2, ···, An, ···, and whose phases are 0, θ1, θ2, ···, θn, ···, respectively. We can plot amplitude An versus ω (the amplitude spectrum) and θn versus ω (the phase spectrum). To see the close connection between the trigonometric spectra (An and θn) and the exponential spectra (|cn| and ∠cn), express cn in (4.22) in polar form cn = |cn| e^{j∠cn} as follows:

cn = √((an/2)² + (bn/2)²) e^{j tan⁻¹(−bn/an)}
   = (1/2) √(an² + bn²) e^{j tan⁻¹(−bn/an)}

Hence,

|cn| = (1/2) √(an² + bn²)  and  ∠cn = tan⁻¹(−bn/an)   (4.27)

Comparing with (4.23) and (4.24), this implies |cn| = (1/2)An for n ≥ 1 and ∠cn = θn for positive n. From (4.19), ∠cn = −θn for negative n. In conclusion, the connection between the trigonometric spectra and the exponential spectra can be summarized as follows. The dc components c0 and a0 are identical in both spectra. Moreover, the exponential amplitude spectrum |cn| is half the trigonometric amplitude spectrum An for n ≥ 1. The exponential angle spectrum ∠cn is identical to the trigonometric phase spectrum θn for positive n and equals −θn for negative n. We can therefore produce the exponential spectra merely by inspection of the trigonometric spectra, and vice versa. The following example demonstrates this feature.

Example 4.8  The trigonometric Fourier spectra of a certain periodic signal x(t) are shown in Figure 4.11(a). By inspecting these spectra, sketch the corresponding exponential Fourier spectra.

Figure 4.11: Spectra for Example 4.8

 Solution The trigonometric frequency components exist at frequencies 0, 3, 6, and 9. The exponential frequency components exist at 0, 3, 6, 9 and −3, −6, −9. Consider the amplitude spectrum first: the dc component remains the same, i.e. c0 = a0 = 16. Now |cn| is an even function of ω with |cn| = An/2. Thus, the remaining spectrum |cn| for positive n is half the trigonometric amplitude spectrum An, and for negative n it is a reflection about the vertical axis of the spectrum for positive n, as shown in Figure 4.11(b). Note that the trigonometric Fourier series for x(t) is written as

x(t) = 16 + 12 cos(3t − π/4) + 8 cos(6t − π/2) + 4 cos(9t − π/4)

The exponential Fourier series is

x(t) = 16 + 6e^{−jπ/4}e^{j3t} + 6e^{jπ/4}e^{−j3t} + 4e^{−jπ/2}e^{j6t} + 4e^{jπ/2}e^{−j6t} + 2e^{−jπ/4}e^{j9t} + 2e^{jπ/4}e^{−j9t}
     = 16 + 6[e^{j(3t−π/4)} + e^{−j(3t−π/4)}] + 4[e^{j(6t−π/2)} + e^{−j(6t−π/2)}] + 2[e^{j(9t−π/4)} + e^{−j(9t−π/4)}]
     = 16 + 12 cos(3t − π/4) + 8 cos(6t − π/2) + 4 cos(9t − π/4)

Clearly both sets of spectra represent the same periodic signal. 

Example 4.9  For the train of impulses in Example 4.5, sketch the trigonometric spectrum and write the trigonometric Fourier series.

 Solution By inspecting the double-sided spectrum in Figure 4.8 we conclude

A0 = c0 = 1/T
An = 2|cn| = 2/T,  n = 1, 2, 3, ···
θn = 0

Figure 4.12 shows the trigonometric Fourier spectrum. From this spectrum we can express the impulse train δT(t) as

δT(t) = (1/T) [1 + 2(cos ω0t + cos 2ω0t + cos 3ω0t + ···)],  ω0 = 2π/T

Figure 4.12: Trigonometric Fourier spectrum of the impulse train

Example 4.10  Find the compact trigonometric Fourier series for the triangular periodic signal x(t) illustrated in Figure 4.13 and sketch the amplitude and phase spectra.

Figure 4.13: A triangular periodic signal

 Solution In this case the period is T = 2. Hence, ω0 = π and

x(t) = a0 + Σ_{n=1}^{∞} [an cos nπt + bn sin nπt]

where

x(t) = { 2t,        |t| ≤ 1/2
         2(1 − t),  1/2 < t ≤ 3/2 }

Note that it is easier to choose the interval of integration from −1/2 to 3/2 rather than 0 to 2. Inspection of Figure 4.13 shows that the average (dc) value of x(t) is zero, so that a0 = 0. Also

an = (2/2) ∫_{−1/2}^{3/2} x(t) cos nπt dt = ∫_{−1/2}^{1/2} 2t cos nπt dt + ∫_{1/2}^{3/2} 2(1 − t) cos nπt dt = 0

Therefore an = 0, and

bn = ∫_{−1/2}^{1/2} 2t sin nπt dt + ∫_{1/2}^{3/2} 2(1 − t) sin nπt dt
   = { 0,        n even
       8/n²π²,   n = 1, 5, 9, 13, ···
       −8/n²π²,  n = 3, 7, 11, 15, ··· }

Therefore

x(t) = (8/π²) [ sin πt − (1/9) sin 3πt + (1/25) sin 5πt − (1/49) sin 7πt + ··· ]

In order to plot the Fourier spectra, the series must be converted into compact trigonometric form as in Equation (4.25). This can be done using the identity

±sin kt = cos(kt ∓ π/2)

Hence, x(t) can be expressed as

x(t) = (8/π²) [ cos(πt − π/2) + (1/9) cos(3πt + π/2) + (1/25) cos(5πt − π/2) + (1/49) cos(7πt + π/2) + ··· ]
Note in this series how all the even harmonics are missing. The phases of the odd harmonics alternate between −π/2 and π/2. Figure 4.14 shows the amplitude and phase spectra for x(t). 
Comment The summation of sinusoids that are harmonically related results in a signal that is periodic; the ratio of any two frequencies is a rational number. The largest positive number of which all the frequencies are integral (integer) multiples is the fundamental frequency. Consider the signal

x(t) = 2 + 7 cos((1/2)t + θ1) + 3 cos((2/3)t + θ2) + 5 cos((7/6)t + θ3)

The frequencies in the spectrum of x(t) are 1/2, 2/3, and 7/6 (we do not consider dc). The ratios of the successive frequencies are 3/4 and 4/7, respectively. Because both these numbers are rational, all three frequencies in the spectrum are harmonically related and the signal x(t) is periodic. The largest number of which 1/2, 2/3, and 7/6 are integral multiples is 1/6. Moreover, 3(1/6) = 1/2, 4(1/6) = 2/3, and 7(1/6) = 7/6. Therefore the fundamental frequency is 1/6. The three frequencies in the spectrum are the third, fourth, and seventh harmonics. The fundamental frequency component itself is absent in this example.

Figure 4.14: Line spectra for the triangular periodic signal x(t) of Example 4.10

4.4 Convergence of the Fourier Series


For the Fourier series to make sense, the Fourier coefficients cn should be finite
XN
N →∞
for all values of n and cn ejnω0 t −−−−→ x(t), i.e. the summation must
n=−N
converge to the signal x(t). Fourier believed that any periodic signal could be
expressed as a sum of sinusoids. However, this turned out not to be the case.

4.4.1 Dirichlet Conditions


For the Fourier series to converge, the signal x(t) must possess, over any period, the following properties, which are known as the Dirichlet conditions.

1. x(t) is absolutely integrable, that is,

   ∫_T |x(t)| dt < ∞

2. x(t) has at most a finite number of discontinuities.

3. x(t) has at most a finite number of maxima and minima.

These conditions are sufficient but not necessary. If a signal x(t) satisfies the Dirichlet conditions, then the corresponding Fourier series is convergent and its sum is x(t), except at any point t0 at which x(t) is discontinuous. At a point of discontinuity, the sum of the series is the average of the left- and right-hand limits of x(t) at t0, that is,

x(t0) = (1/2) [x(t0−) + x(t0+)]
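The convergence behaviour at a discontinuity is easy to observe numerically. The sketch below evaluates partial sums of the series found in Example 4.7 (Equation (4.28) in the next example): at t = π/2 every cosine term vanishes and the sum sits at 1/2, the average of the two one-sided limits, while at t = 0 the partial sums approach x(0) = 1.

    import numpy as np

    def partial_sum(t, N):
        s = 0.5
        for k in range(1, N + 1, 2):               # odd harmonics only
            sign = 1.0 if k % 4 == 1 else -1.0     # the +, -, +, - pattern
            s += (2 / np.pi) * sign * np.cos(k * t) / k
        return s

    for N in (1, 11, 101, 1001):
        print(N, partial_sum(np.pi / 2, N), partial_sum(0.0, N))
    # the first column stays at ~0.5; the second tends to 1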
Example 4.11  Consider the periodic signal in Example 4.7, where x(t) is written as

x(t) = 1/2 + (2/π) [ cos t − (1/3) cos 3t + (1/5) cos 5t − (1/7) cos 7t + ··· ]   (4.28)

We notice that at t = π/2, which is a point of discontinuity of x(t), the sum in Equation (4.28) has a value of 1/2, which is equal to the arithmetic mean of the values one (x(t = π/2−)) and zero (x(t = π/2+)) of our signal x(t). Since x(t) satisfies the Dirichlet conditions, the sum in Equation (4.28) converges. Setting t = 0 in Equation (4.28), we obtain

x(0) = 1/2 + (2/π) [ 1 − 1/3 + 1/5 − 1/7 + ··· ]
     = 1/2 + (2/π)(π/4) = 1   (the sum of this infinite series is π/4) 

4.5 Properties of Fourier Series


In this section several properties of the Fourier series are stated without proof; students can refer to many references for the proofs of these properties. Proofs of the Fourier transform properties, however, will be provided.

4.5.1 Linearity

Suppose that x(t) and y(t) are periodic, both having the same period, with Fourier series expansions

x(t) = Σ_{n=−∞}^{∞} βn e^{jnω0 t}  and  y(t) = Σ_{n=−∞}^{∞} γn e^{jnω0 t}

and let

z(t) = k1 x(t) + k2 y(t)

where k1 and k2 are arbitrary constants. If

x(t) ←FS→ βn  and  y(t) ←FS→ γn

then the Fourier coefficients of z(t) are

k1 x(t) + k2 y(t) ←FS→ k1 βn + k2 γn

4.5.2 Time Shifting


If x(t) has the Fourier series coefficients cn ,
FS
x(t) ←→ cn

Then the signal x(t − τ ) has coefficients dn ,


FS
x(t − τ ) ←→ dn
82 CHAPTER 4. THE FOURIER SERIES

where
Z
1
dn = x(t − τ )e−jnω0 t dt
T T
Z
−jnω0 τ 1
=e x(σ)e−jnω0 σ dσ
T T
= cn e−jnω0 τ

Therefore,
FS
x(t − τ ) ←→ cn e−jnω0 τ

4.5.3 Frequency Shifting


Let z(t) = ejmω0 t x(t), where m is an integer. If x(t) has the Fourier series
coefficients cn ,
FS
x(t) ←→ cn
Then the signal z(t) has coefficients dn ,
FS
ejmω0 t x(t) ←→ dn

where
Z
1
dn = ejmω0 t x(t)e−jnω0 t dt
T T
Z
1
= x(t)e−j(n−m)ω0 t dt
T T
= cn−m

Notice the duality between the time shifting and frequency shifting properties.
Shifting in one domain corresponds to multiplication by a complex exponential
in the other domain.

4.5.4 Time Reflection


If x(t) has the Fourier series coefficients cn ,
FS
x(t) ←→ cn

Then the signal x(−t) has coefficients c−n ,


FS
x(−t) ←→ c−n

4.5.5 Time Scaling


Let z(t) = x(at), with a > 0, and let the fundamental period of x(t) be T
(Figure 4.15). The first thing to realize is that if x(t) is periodic with period T ,
then z(t) is also periodic with fundamental period Ta and fundamental frequency
aω0 . The Fourier series coefficients will be
Z Z
a −jn(aω0 )t a
dn = z(t)e dt = x(at)e−jn(aω0 )t dt
T T T T
a a
4.5. PROPERTIES OF FOURIER SERIES 83

Figure 4.15: A signal x(t) and a time scaled version z(t) of that signal

We can make the change of variable λ = at. Therefore,


Z Z
a1 1
dn = x(λ)e−jn(aω0 )(λ/a) dλ = x(λ)e−jnω0 λ dλ = cn
Ta T T T
We notice that the Fourier series coefficients remain the same. Describing z(t)
over its fundamental period Ta0 is the same as describing x(t) over its funda-
mental period T0 . The only difference is that both have different fundamental
frequencies. The representations are

X ∞
X
x(t) = cn ejnω0 t and x(at) = cn ejn(aω0 )t
n=−∞ n=−∞

The scaling operation simply changes the harmonic spacing from ω0 to aω0 ,
keeping the coefficients identical.

4.5.6 Time Differentiation


The Fourier series representation of x(t) is

X
x(t) = cn ejnω0 t (4.29)
n=−∞

Differentiating both sides of this equation gives


X∞
d
x(t) = jnω0 cn ejnω0 t
dt n=−∞

Thus
d FS
x(t) ←→ jnω0 cn
dt
This operation accentuates the high frequency components of the signal. Note
that differentiation forces the average value, dc component of x(t), of the differ-
entiated signal to be zero. Hence the Fourier series coefficient for n = 0 is zero.
So differentiation of a time function corresponds to a multiplication of it Fourier
series coefficients by an imaginary number whose value is a linear function of
the harmonic number.
84 CHAPTER 4. THE FOURIER SERIES

4.5.7 Time Integration


If a periodic signal contains a nonzero average value (c0 6= 0), then the inte-
gration of this signal produces a component that increases linearly with time
and, therefore, the resultant signal is non-periodic. However, if c0 = 0, then the
integrated signal is periodic as shown in Figure 4.16. Integrating both sides of
Equation (4.29)
Z t Z t X ∞
x(λ)dλ = cn ejnω0 λ dλ
−∞ −∞ n=−∞

X Z t
= cn ejnω0 λ dλ
n=−∞ −∞

X∞
cn jnω0 t
= e
n=−∞
jnω0

Therefore, Z t
cn FS
x(λ)dλ ←→ , n 6= 0
−∞ jnω 0
Note that integration attenuates the magnitude of the high frequency compo-
nents of the signal. Integration may be viewed as an averaging operation and
thus tends to smooth signals in time, it is sometimes called a smoothing opera-
tion.

Figure 4.16: The effect of a nonzero average value on the integral of a periodic
signal x(t).

4.5.8 Multiplication
If x(t) and y(t) are periodic signals with the same period, their product z(t) is
z(t) = x(t)y(t). Let
FS FS FS
x(t) ←→ αn , y(t) ←→ βn and z(t) ←→ γn (4.30)
Then Z Z
1 −jω0 nt 1
γn = z(t)e dt = x(t)y(t)e−jω0 nt dt (4.31)
T T T T
4.5. PROPERTIES OF FOURIER SERIES 85

Then, using

X ∞
X
y(t) = βn ejnω0 t = βm ejmω0 t
n=−∞ m=−∞

in (4.31), we have
Z ∞
!
1 X
γn = x(t) βm e jmω0 t
e−jω0 nt dt
T T m=−∞

Reversing the order of integration and summation



X Z
1
γn = βm x(t)e−j(n−m)ω0 t dt
T T
m=−∞ | {z }
=αn−m

Then

X
γn = βm αn−m
m=−∞

Therefore,

X
FS
x(t)y(t) ←→ βm αn−m = αn ∗ βn
m=−∞

This result is a convolution sum of the two sequences αn and βn . So multiplying


CT periodic signals with the same period corresponds to convolving their Fourier
series coefficients.

4.5.9 Convolution
For periodic signals with the same period, a special form of convolution, known
as periodic or circular convolution, is defined by the integral
Z
1
z(t) = x(t) ~ y(t) = x(τ )y(t − τ )dτ
T T

where the integral is taken over one period. It can be shown that
FS
x(t) ~ y(t) ←→ αn βn

where αn and βn are the Fourier series coefficients of x(t) and y(t) respectively.
So convolution of two CT periodic signals with the same period corresponds to
multiplication of their Fourier series coefficients.

Evaluate the periodic convolution of the sinusoidal signal Example 4.12

y(t) = 2 cos(2πt) + sin(4πt)

with the periodic square wave x(t) depicted in Figure 4.9 of Example 4.5 with
T1 = 14 and T = 1.
86 CHAPTER 4. THE FOURIER SERIES

 Solution Both x(t) and y(t) have fundamental period T = 1. The Fourier
series coefficients for x(t) may be obtained from (4.10) as
  n
2T1 2nT1 1
αn = sinc = sinc
T T 2 2

The Fourier series coefficients of y(t) are




 1 n = ±1

1
2j n=2
βn =


 −1 n = −2
 2j
0 otherwise

FS
Let z(t) = x(t)~y(t). The convolution property indicates x(t)~y(t) ←→ αn βn .
Hence the Fourier coefficients for z(t) are
(
1
n = ±1
α n βn = π
0 otherwise

which implies that


2
z(t) = cos(2πt) 
π

4.5.10 Effects of Symmetry


The Fourier series for the periodic signal in Figure 4.9 (Example 4.6) consists of
cosine terms only, and the series for the signal x(t) in Figure 4.13 (Example 4.8)
consists of sine terms only. This is not coincidental, we can show that the Fourier
series of any even periodic function x(t) consists of cosine terms only and the
series for any odd periodic function x(t) consists of sine terms only. In knowing
the type of symmetry any signal possesses, unnecessary work in determining
the Fourier coefficients can be avoided, thus simplifying the computations of
the Fourier coefficients. The important types of symmetry are

1. even symmetry, x(t) = x(−t),

2. odd symmetry, x(t) = −x(−t),


T
3. half-wave symmetry, x(t − 2 ) = −x(t).

By knowing what type of symmetry and the signal over half period, the
Fourier coefficients can be computed by integrating over only half the period
rather than a complete period. To prove this result, recall that
Z
1
a0 = x(t)dt (4.32)
T T
Z
2
an = x(t) cos nω0 t dt (4.33)
T T
Z
2
bn = x(t) sin nω0 t dt (4.34)
T T
4.5. PROPERTIES OF FOURIER SERIES 87

Recall also from section 2.1.4 that cos nω0 t is an even function and sin nω0 t is
an odd function. If x(t) is an even function, then x(t) cos nωo t is also an even
function and x(t) sin nω0 t is an odd function. The Fourier series of an even
signal x(t) having period T is

X
x(t) = a0 + an cos nω0 t
n=1

with coefficients
Z
1
a0 = x(t)dt
T T
Z Z T /2
2 4
an = x(t) cos nω0 t dt = x(t) cos nω0 t dt (4.35)
T T | {z } T 0
even

since Z Z
T T /2
x(t) dt = 2 x(t)dt
−T |{z} 0
even
and
Z
2
bn = x(t) sin nω0 t dt
T T | {z }
odd
=0
since Z T
x(t) dt = 0
−T |{z}
odd
Similarly, if x(t) is an odd function, then x(t) cos nω0 t is an odd function and
x(t) sin nω0 t is an even function. Therefore
a0 = an = 0
Z
4 T /2
bn = x(t) sin nω0 t dt (4.36)
T 0
Observe that, because of symmetry, the integration required to compute the
coefficients need to be performed over only half the period.
If the two halves of one period of a periodic signal are of identical shape
except that one is the negative of the other, the periodic signal is said to have
a half wave symmetry. The signal in Figure 4.13 is a clear example of such
a symmetry. If a periodic signal x(t) with period T satisfies the half-wave
symmetry condition then
x(t − T2 ) = −x(t)
In this case all even-numbered harmonics vanish (note also a0 = 0) , and the
odd-numbered harmonic coefficients are given by
Z
4 T /2
an = x(t) cos nω0 t dt
T 0
Z
4 T /2
bn = x(t) sin nω0 t dt
T 0
The consequence of these symmetries are summarized in Table 4.1
88 CHAPTER 4. THE FOURIER SERIES

Symmetry a0 an bn Remarks

Even a0 6= 0 an 6= 0 bn = 0 Integrate over T2 only


and multiply the coefficients by 2
Odd a0 = 0 an = 0 bn 6= 0 Integrate over T2 only
and multiply the coefficients by 2
Half-wave a0 = 0 a2n = 0 b2n = 0 Integrate over T2 only
a2n+1 6= 0 b2n+1 6= 0 and multiply the coefficients by 2

Table 4.1: Effects of Symmetry

Effect of Symmetry in Exponential Fourier Series

When x(t) has an even symmetry, bn = 0, and from Equation (4.27), cn = a2n
which is real (positive or negative). Hence ∠cn can only be 0 or ±π. Moreover,
we may compute cn = a2n using Equation (4.35). Similarly, when x(t) has an
odd symmetry, an = 0, and cn = − jb2n is imaginary (positive or negative).
Hence, ∠cn can only be 0 or ± π2 . We can compute cn = − jb2n using Equation
(4.36).

4.5.11 Parseval’s Theorem


In Chapter 1, it was shown that the average power of a periodic signal x(t) is
given by
Z
1 2
P = |x(t)| dt (4.37)
T T

For example the complex exponential signal x(t) = cn ejnω0 t has an average
power of
Z
1
P = cn ejnω0 t c∗n e−jnωo t dt
T T
Z
1
= |cn |2 dt = |cn |2
T T

A question that may well be asked at this point: If x(t) = cn ejnω0 t has an

X
average power |cn |2 will the signal x(t) = cn ejnω0 t has an average power
n=−∞

X
2
|cn | . One can further ask what is the relationship between the average
n=−∞
power of the signal x(t) and its harmonics?

Using the exponential Fourier series and substituting in Equation (4.37)

Z ∞ ∞
!∗
1 X X
P = cm ejmω0 t cn ejnω0 t dt
T T m=−∞ n=−∞
4.6. SYSTEM RESPONSE TO PERIODIC INPUTS 89

Reversing the order of integration and summation



X ∞
X Z
1
P = cm c∗n ej(m−n)ω0 t dt (4.38)
T T
m=−∞ n=−∞ | (
{z }
0 m 6= n
=
T m = n

The integral in (4.38) is zero except for the special case when m = n. For this
specific condition the double summation reduces to a single summation and we
have a new relationship for the average power in terms of the magnitudes of the
coefficients
X∞ X∞
P = cn c∗n = |cn |2 (4.39)
n=−∞ n=−∞

Combining Equations (4.37) and (4.39) we obtain a relationship known as Par-


seval’s theorem for periodic signals
Z ∞
X
1 2
P = |x(t)| dt = |cn |2 (4.40)
T T n=−∞

The result indicates that the total average power of a periodic signal x(t) is
equal to the sum of the powers of its Fourier coefficients. Therefore, if we know
the function x(t), we can find the average power. Alternatively, if we know
the Fourier coefficients, we can find the average power. Interpreting the result
physically simply means writing a signal as a Fourier series does not change its
energy. A graph of |cn |2 versus ω can plotted, such a graph is called the power
spectrum.
We can apply the same argument to the trigonometric Fourier series. It can
be shown that

1X 2
P = a20 + A
2 n=1 n

4.6 System Response to Periodic Inputs


Consider a linear time-invariant CT system with impulse response h(t). From
Chapter 3, we know that the response y(t) resulting from an input x(t) is given
by Z ∞
y(t) = h(τ )x(t − τ )dτ
−∞

For complex exponential inputs of the form x(t) = ejωt , the output of the system
is
Z ∞
y(t) = h(τ )ejω(t−τ ) dτ
−∞
Z ∞
=e jωt
h(τ )e−jωτ dτ
−∞

By defining Z ∞
H(ω) = h(τ )e−jωτ dτ
−∞
90 CHAPTER 4. THE FOURIER SERIES

we can write
y(t) = H(ω)ejωt (4.41)
H(ω) is called the system frequency response and is constant for fixed ω. Equa-
tion (4.41) tells us that the system response to a complex exponential is also a
complex exponential, with the same frequency ω, scaled by the quantity H(ω).
The magnitude |H(ω)| is called the magnitude function of the system, and
∠H(ω) is known as the phase function of the system. In summary, the response
y(t) of a CT LTI system to an input sinusoid of period T is also a sinusoid of
period T . Knowing H(ω), we can determine if the system changes the amplitude
of the input and how much of a phase shift the system adds to the sinusoidal
input.
To determine the response y(t) of a LTI system to a periodic input x(t) we saw
earlier that
ejωt −→ H(ω)ejωt
|{z} | {z }
input output

From the linearity property



X ∞
X
cn ejnω0 t −→ cn H(nω0 )ejnω0 t
n=−∞ n=−∞
| {z } | {z }
input output

The response y(t) of a LTI system to a periodic input with period T is periodic
with the same period.

Example 4.13 Find the output voltage y(t) of the system shown in Figure 4.17 if the in-
put voltage is the periodic signal x(t) = 4 cos t − 2 cos 2t, assume R = 1Ω and
L = 1H.

Figure 4.17: System for Example 4.13

 Solution Applying Kirchoff’s voltage law to the circuit yields


d R R
y(t) + y(t) = x(t)
dt L L
If we set x(t) = ejωt in this equation, the output voltage is y(t) = H(ω)ejωt .
Using the system differential equation, we obtain
R R
jωH(ω)ejωt + H(ω)ejωt = ejωt
L L
Solving for H(ω) yields
R/L
H(ω) =
R/L + jω
4.6. SYSTEM RESPONSE TO PERIODIC INPUTS 91

At any frequency ω = nω0 , the frequency response is

R/L
H(nω0 ) =
R/L + jnω0
R
For our case, ω0 = 1 and L = 1, hence

1
H(n) =
1 + jn
Using Euler identity the input signal is expressed as

x(t) = 2ejt + 2e−jt − ej2t − e−j2t

The output signal is



X
y(t) = cn H(n)ejnt
n=−∞

= c−2 H(−2)ej(−2)t + c−1 H(−1)ej(−1)t + c1 H(1)ej(1)t + c2 H(2)ej(2)t


1 1 −jt 1 jt 1
= (−1) e−j2t + (2) e + (2) e + (−1) ej2t
1 − j2 1−j 1+j 1 + j2
√ 2
= 2 2 cos(t − 45◦ ) − √ cos(2t − 63◦ ) 
5
92 CHAPTER 4. THE FOURIER SERIES
Chapter 5

The Fourier Transform

In Chapter 4 we saw how to represent a periodic signal as a sum of complex


exponentials or trigonometric series. The Fourier series is a powerful analysis
tool, but unfortunately it is limited. It can describe any signal over a finite time
and any periodic signal over all time as a linear combination of sinusoids. But
it cannot describe a non-periodic signal for all time. In this chapter we extend
this representation to non-periodic signals developing the Fourier transform. We
will see that the Fourier series is just a special case of the Fourier transform.

5.1 Development of the Fourier Transform


The difference between a periodic signal and a non-periodic signal is that a
periodic signal repeats in a finite time T , called the fundamental period. It has
been repeating with that fundamental period forever and will continue to do
so forever. On the other hand, a non-periodic signal does not have a period.
It may repeat a pattern many times within some finite time, but not over all
time. In making the transition from the Fourier series to the Fourier transform
our approach will be to consider a periodic signal and represent it in terms of
complex exponentials and then letting the fundamental period approach infinity.
If the fundamental period goes to infinity, the signal cannot repeat in a finite
time and therefore is no longer periodic. Before proceeding any further let us
investigate the effect of changing T on the frequency spectrum of a periodic
signal x(t) as the one shown in Figure 5.1. We saw earlier that the Fourier

Figure 5.1: Rectangular pulse train x(t).

93
94 CHAPTER 5. THE FOURIER TRANSFORM

series coefficients are  


2T1 2nT1
cn = sinc
T T
For fixed T1 , increasing T reduces the amplitude of each harmonic as well as
the fundamental frequency and, hence, the spacing between harmonics. How-
ever, the relative shape of the spectrum does not change as T increases except
for the amplitude factor Figure 5.2.

0.2

0
-15 -10 -5 0 5 10 15 n
(a)
0.1

0
-30 -20 -10 0 10 20 30 n
(b)
0.05

0
-60 -40 -20 0 20 40 60 n
(c)

Figure 5.2: Line spectra for x(t). (a) Magnitude spectrum for T = 10T1 , (b)
Magnitude spectrum for T = 20T1 , (c) Magnitude spectrum for T = 40T1 .

We conclude that as the period increases, the amplitude becomes smaller


and the spectrum becomes denser. In the limit as T → ∞, ω0 → 0 and cn → 0.
This is a fascinating behaviour, this result means the spectrum is so dense that
the spectral components are spaced at zero (infinitesimal) intervals. At the same
time, the amplitude of each component is zero (infinitesimal)!! We have nothing
of everything, yet we have something.
Suppose we are given a non-periodic signal x(t) such as the one shown in
Figure 5.3. To represent this signal as a sum of exponential functions over all
time we construct a new periodic signal xT (t) with large enough period T so that
x(t) = xT (t) for t ∈ (− T2 , T2 ), as illustrated in Figure 5.3. The periodic signal

Figure 5.3: Construction of a periodic signal by periodic extension of x(t).


5.1. DEVELOPMENT OF THE FOURIER TRANSFORM 95

xT (t) can be represented by an exponential Fourier series. The non-periodic


signal x(t) can be obtained back by letting T → ∞, that is
lim xT (t) = x(t)
T →∞

Taking the limit as the period approaches infinity, the pulses in the periodic
signal repeat after an infinite interval. In other words we moved all the pulses
to infinity except the desired pulse located at the origin. The new function xT (t)
is a periodic signal and can be represented by an exponential Fourier series given
by
X∞
xT (t) = cn ejnω0 t (5.1)
n=−∞

where Z T /2
1
cn = xT (t)e−jnω0 t dt (5.2)
T −T /2

and

ω0 =
T
Before taking any limiting operation, we need to do some adjustments so that
the magnitude components of the cn do not all go to zero as the period is
increased. We make the following changes
X(nω0 ) , T cn (5.3)
When we use this definition, (5.1) and (5.2) become
X∞
1
xT (t) = X(nω0 )ejnω0 t (5.4)
n=−∞
T
Z T /2
X(nω0 ) = xT (t)e−jnω0 t dt (5.5)
−T /2

If we were to multiply cn by T before plotting it, the amplitude in Figure 5.2


would not go to zero as T approached infinity but would stay where it is. As
T → ∞, ω0 → 0, i.e., the spacing between adjacent lines in the line spectrum are
spaced at zero (infinitesimal) intervals. Hence, we shall replace ω0 , the spacing
of xT (t), by a more appropriate notation Δω that is

Δω =
T
Using this relation for T in (5.4), we get

X Δω
xT (t) = X(nΔω)ej(nΔωt) (5.6)
n=−∞

In the limit as T → ∞, the discrete lines in the spectrum of xT merge and the
frequency spectrum becomes continuous i.e. Δω → 0, furthermore xT (t) → x(t).
Therefore,

1 X
x(t) = lim xT (t) = lim X(nΔω)ej(nΔωt) Δω (5.7)
T →∞ Δω→0 2π
n=−∞
96 CHAPTER 5. THE FOURIER TRANSFORM

becomes Z ∞
1
x(t) = X(ω)ejωt dw (5.8)
2π −∞

In a similar manner (5.5) becomes


Z ∞
X(ω) = x(t)e−jωt dt (5.9)
−∞

Equations (5.8) and (5.9) are commonly referred to as the Fourier transform
pair. Equation (5.9) is known as the direct Fourier transform of x(t) (more
commonly just the Fourier transform). Equation (5.8) is known as the inverse
Fourier transform. Symbolically, we use the following operator notation:
Z ∞
X(ω) = F [x(t)] = x(t)e−jωt dt
−∞
Z ∞
1
x(t) = F −1 [X(ω)] = X(ω)ejωt dw
2π −∞

It is also useful to note that the complex exponential Fourier series coefficients
can be evaluated in terms of the the Fourier transform by using (5.3) to give

1
cn = X(ω) (5.10)
T ω=nω0

1
This means that the Fourier coefficients cn are T times the samples of X(ω)
uniformly spaced at intervals of ω0 .

Example 5.1 Find the Fourier transform of x(t) = e−at u(t).

 Solution By definition [equation 5.9],


Z ∞ Z ∞ ∞
−at −jωt −(a+jω)t −1 −(a+jω)t
X(ω) = e u(t)e dt = e dt = e
−∞ 0 a + jω 0

But e−jωt = 1. Therefore, as t → ∞, e−(a+jω)t = e−at e−jωt = 0 if a > 0.
Therefore
1
X(ω) = a>0
a + jω
√ −1 ω
Since X(ω) is complex, Expressing a+jω in polar form as a2 + ω 2 ej tan ( a ) ,
we obtain
1 −1 ω
X(ω) = √ e−j tan ( a )
2
a +ω 2

Therefore
1 ω
|X(ω)| = √ and ∠X(ω) = − tan−1
a + ω2
2 a

The amplitude spectrum |X(ω)| and the phase spectrum ∠X(ω) are depicted
in Figure 5.4. Observe that |X(ω)| is an even function of ω, and ∠X(ω) is an
odd function of ω, as expected. 
5.2. EXAMPLES OF THE FOURIER TRANSFORM 97

|X(ω)|

0.5

0
0 ω→
(a)

π/2
∠X(ω)

−π/2

0 ω→
(b)

Figure 5.4: Fourier spectra for x(t) = e−at u(t), a = 1. (a) Amplitude spectrum
|X(ω)|, (b) Phase spectrum ∠X(ω).

Existence of the Fourier Transform


In Example 5.1 we observed that when a < 0, the Fourier transform for e−at u(t)
does not exist. Therefore, not all signals have a Fourier transform. The existence
of the Fourier transform is assured for any x(t) satisfying the Dirichlet conditions
mentioned in section 4.4.1. The first of these conditions is
Z ∞
|x(t)| dt < ∞ (5.11)
−∞

Because e−jωt = 1, then from equation (5.9), we obtain
Z ∞
|X(ω)| ≤ |x(t)| dt < ∞
−∞

This inequality shows that the existence of the Fourier transform is assured
if condition (5.11) is satisfied. Although this condition is sufficient, it is not
necessary for the existence of the Fourier transform of a signal. We shall see
later many signals violates condition (5.11), but does have a Fourier transform.

5.2 Examples of The Fourier Transform


In this section, we compute the transform of some useful time signals.

Find the Fourier transform of the unit impulse δ(t). Example 5.2

 Solution Using the sifting property of the impulse


Z ∞
F [δ(t)] = δ(t)e−jωt
−∞


= e−jωt =1
t=0
98 CHAPTER 5. THE FOURIER TRANSFORM

or
F
δ(t) ←→ 1
If the impulse is time shifted, we have
Z ∞
F [δ(t − τ )] = δ(t − τ )e−jωt
−∞


= e−jωt = e−jωτ 
t=τ

t

Example 5.3 Find the Fourier transform of the rectangular pulse x(t) = rect τ (Figure 5.5a).

 Solution The Fourier transform is


Z ∞  
t
X(ω) = rect e−jωt dt
−∞ τ
Z τ /2
= e−jωt dt
−τ /2
! 
1  −jωτ /2 jωτ /2 2 sin ωτ
2
=− e −e =
jω ω
ωτ
  
sin 2 ωτ
=τ ωτ
 = τ sinc
2

Alternatively,
Z ∞  
t
X(ω) = rect e−jωt dt
−∞ τ
 
Z τ /2 Z τ /2
= e−jωt dt = cos ωt −j sin ωt dt
| {z } | {z }
−τ /2 −τ /2
even odd
Z τ /2  ωτ 
2
=2 cos ωt dt = sin
0 ω 2
ωτ
  
sin 2 ωτ
=τ ωτ
 = τ sinc
2

Therefore,  
t  ωτ 
F
rect ←→ τ sinc (5.12)
τ 2π

Recall, sinc(x) = 0 when x = ±n. Hence, sinc ωτ ωτ
2π = 0 when 2π = ±n; that
2πn
is when ω = ± τ , (n = 1, 2, 3, ∙ ∙ ∙ ), as depicted in Figure 5.5b. The Fourier
transform X(ω) shown in Figure 5.5b is the amplitude spectrum since it exhibits
positive and negative values. Since X(ω) is a real valued function its phase is
zero for all ω. If the magnitude spectrum is required, the negative amplitudes
can be considered as a positive amplitude with a phase of −π or π. The magni-
tude spectrum |X(ω)| and the phase spectrum ∠X(ω) are shown in Figure 5.5c
and d respectively. Note that the phase spectrum, which is required to be an
odd function of ω since x(t) is real, may be drawn in several other ways because
a negative sign can be accounted for by a phase ±nπ where n is any odd integer.
5.2. EXAMPLES OF THE FOURIER TRANSFORM 99

Figure 5.5: A rect function x(t), its Fourier spectrum X(ω), magnitude spec-
trum |X(ω)|, and phase spectrum ∠X(ω)

Find the Fourier transform of a dc or constant signal x(t) = 1, −∞ < t < ∞. Example 5.4

 Solution Clearly this is an example of a problem using the Fourier transform,


the transform is Z ∞
X(ω) = (1)e−jωt dt
−∞

The integral does not converge. Therefore, strictly speaking, the Fourier trans-
form does not exist. But we can avoid this problem by use of the following
trick
Z τ /2
X(ω) = lim (1)e−jωt dt
τ →∞ −τ /2
1  −jωτ /2
= lim − e − ejωτ /2 )
τ →∞ jω
This can be simplified to
2  ωτ 
X(ω) = lim sin
τ →∞ ω
 2ωτ 
= lim τ sinc (5.13)
τ →∞ 2π
In the limit as τ → ∞, X(ω) in Equation (5.13) approaches the impulse function
δ(w) with strength 2π since the area under the sinc function is 2π from Figure
5.6 ( in our case A = τ and B = τ /2). Therefore,
F
1 ←→ 2πδ(ω)

The frequency function 2πδ(ω) is called the generalized Fourier transform of the
signal x(t). This result shows that the spectrum of a constant signal x(t) = 1
is an impulse 2πδ(ω). This situation can be thought of as a dc signal which has
a single frequency ω = 0 (dc). 
100 CHAPTER 5. THE FOURIER TRANSFORM

Z ∞
A sin Bt A
Figure 5.6: Area under a sinc function = dt = π
−∞ Bt B

It is interesting to see the result in Example 5.4 using the inverse Fourier trans-
form as shown in the next example.

Example 5.5 Find the inverse Fourier transform of δ(ω).

 Solution On the basis of Equation (5.8) and the sifting property of the
impulse function,
Z ∞
1 1
F −1 [δ(ω)] = δ(ω)ejωt dw =
2π −∞ 2π

Therefore,
1 F
←→ δ(ω)

or
F
1 ←→ 2πδ(ω) 

If an impulse at ω = 0 is a spectrum of a dc signal, what does an impulse at


ω = ω0 represent? Let us investigate this question in the next example.

Example 5.6 Find the inverse Fourier transform of δ(ω − ω0 ).

 Solution Using the sifting property of the impulse function, we obtain


Z ∞
1 1 jω0 t
F −1 [δ(ω − ω0 )] = δ(ω − ω0 )ejωt dw = e
2π −∞ 2π

Therefore,
1 jω0 t F
e ←→ δ(ω − ω0 )

or
F
ejω0 t ←→ 2πδ(ω − ω0 )

It follows that
F
e−jω0 t ←→ 2πδ(ω + ω0 ) 
5.3. FOURIER TRANSFORM OF PERIODIC SIGNALS 101

5.3 Fourier Transform of Periodic Signals


Periodic signals are power signals and we anticipate that the Fourier transform
contains impulses. In Chapter 4, we examined the spectrum of periodic signals
by computing the Fourier series coefficients. We found that the spectrum con-
sists of a set of lines at ±nω0 . Next we find the Fourier transform of periodic
signals and show that the spectra of periodic signals consists of train of impulses.
First consider the exponential signal x(t) = ejω0 t . The Fourier transform of
this signal (see example 5.6) is
F
ejω0 t ←→ 2πδ(ω − ω0 )
A periodic signal x(t) of period T ; thus ω0 = 2πT has the Fourier series repre-
sentation
X∞
x(t) = cn ejnω0 t
n=−∞
Taking the Fourier transform of both sides

X  
X(ω) = cn F ejnω0 t
n=−∞
X∞
= 2πcn δ(ω − nω0 ) (5.14)
n=−∞

Thus the Fourier transform of a periodic signal is simply an impulse train, with
impulses located at ω = nω0 , each of which has a strength of 2πcn , and all
impulses are separated from each other by ω0 .
A periodic function of considerable importance is that of a periodic sequence
of unit impulse functions shown in Figure 5.7. For convenience, we write such
X∞
a sequence with period T as δT (t) = δ(t − nT ). Because this is a periodic
n=−∞

function, we can express it in terms of a Fourier series by choosing ω0 = T ,
(see example 4.5)
X∞
δT (t) = cn ejnω0 t
n=−∞
1
where cn = T so that

1 X jnω0 t
δT (t) = e
T n=−∞

Figure 5.7: A train of impulses


102 CHAPTER 5. THE FOURIER TRANSFORM

Using (5.14), the Fourier transform of δT (t) is

∞ ∞
2π X X
F [δT (t)] = δ(ω − nω0 ) = ω0 δ(ω − nω0 ) (5.15)
T n=−∞ n=−∞

The Fourier transform of a periodic impulse train in the time domain gives an
impulse train that is periodic in the frequency domain. The frequency spectrum
of δT (t) is shown in Figure 5.8.

Figure 5.8: The frequency spectrum of a train of impulses δT (t)


.

Example 5.7 Find the Fourier transform of the periodic rectangular waveform x(t) shown in
Figure 5.9.

Figure 5.9: A periodic rectangular waveform x(t)

 Solution From (4.10),


 
2T1 2nT1
cn = sinc
T T

Substituting this into (5.14) yields the Fourier transform of the periodic rect-
5.4. PROPERTIES OF THE FOURIER TRANSFORM 103

angular pulses


X  
2nT1
X(ω) = 2T1 ω0 sinc δ(ω − nω0 )
n=−∞
T
X∞  
nω0 T1
= 2T1 ω0 sinc δ(ω − nω0 )
n=−∞
π

The frequency spectrum is sketched in Figure 5.10. The dashed curve indicates
the weights of the impulse functions. Note that in distribution of impulses in
frequency, Figure 5.10 shows the particular case that T = 8T1 

Figure 5.10: A periodic rectangular waveform x(t)

In the previous example it is interesting to note how the Fourier series co-
efficients of the periodic rectangular waveform x(t) is related to the Fourier
transform of the truncated signal xT (t) shown in Figure 5.9. From (5.10),

1
cn = XT (ω)
T ω=nω0

ωT1

The Fourier transform of XT (ω) = 2T1 sinc π (see example 5.3), therefore,
 
2T1 ωT1
cn = sinc
T π
ω=nω0

In words, the Fourier series coefficients of a periodic signal can be obtained from
samples of the Fourier transform of the truncated signal divided by the period
T , provided the periodic signal and the truncated one are equal in one period.
Note that in general the Fourier transform of a periodic function is not periodic.

5.4 Properties of the Fourier Transform


In this section we consider a number of properties of the Fourier Transform.
These properties allow some problems to be solved merely by inspection.
104 CHAPTER 5. THE FOURIER TRANSFORM

5.4.1 Linearity
The Fourier transform is a linear operation based on the properties of integra-
tion. Thus if
F
x1 (t) ←→ X1 (ω)
F
x2 (t) ←→ X2 (ω)

then
F
αx1 (t) + βx2 (t) ←→ αX1 (ω) + βX2 (ω)
where α and β are arbitrary constants. The property is the direct result of the
linear operation of integration.
Proof Let z(t) = αx1 (t) + βx2 (t), the proof is trivial and follows as
Z ∞
Z(ω) = z(t)e−jωt dt
−∞
Z ∞
= [αx1 (t) + βx2 (t)]e−jωt dt
−∞
Z ∞ Z ∞
−jωt
=α x1 (t)e dt + β x2 (t)e−jωt dt
−∞ −∞
= αX1 (ω) + βX2 (ω) 

Example 5.8 Find the Fourier transform of cos(ωo t).

 Solution Using the Euler identities we can write


1 jωo t 1 −jωo t
cos(ω0 t) = e + e
2 2
We saw earlier that
F
ejωo t ←→ 2πδ(ω − ωo )
F
e−jωo t ←→ 2πδ(ω + ωo )

Because of linearity
1 jωo t 1 −jωo t F
e + e ←→ πδ(ω − ω0 ) + πδ(ω + ω0 )
2 2
The Fourier spectrum of cos ω0 t consist of only two components of frequencies
ω and −ω0 . Therefore the spectrum has two impulses at ω and −ω0 . 

5.4.2 Time Shifting


If
F
x(t) ←→ X(ω)
then
F
x(t − t0 ) ←→ e−jωt0 X(ω) (5.16)
5.4. PROPERTIES OF THE FOURIER TRANSFORM 105

Proof By definition
Z ∞
F [x(t − t0 )] = x(t − t0 )e−jωt dt
−∞

Letting u = t − t0 , we have
Z ∞
F [x(t − t0 )] = x(u)e−jω(u+to ) du
−∞
Z ∞
=e −jωt0
x(u)e−jωu du = e−jωt0 X(ω) 
−∞

This result shows that if the signal is delayed in time by t0 , its magnitude spec-
trum remains unchanged. The phase spectrum, however, is changed and a phase
of −ωt0 , which is a linear function of ω, is added to each frequency component.
The slope of this linear phase term is equal to the time shift t0 .
 
Find the Fourier transform of the rectangular pulse x(t) = rect t−ττ /2 . Example 5.9

 Solution The pulse x(t) is the rectangular pulse rect τt in Figure ?? de-
layed by τ /2 seconds. Hence,
 according to (5.16), its Fourier transform is the
τ
Fourier transform of rect τt multiplied by e−jω 2 . Therefore
 ωτ  τ
X(ω) = τ sinc e−jω 2

The amplitude spectrum |X(ω)| is the same as that indicated in Figure 5.5. But
the phase spectrum has an added linear term − ωτ
2 , as shown in Figure 5.11.

Figure 5.11: The Phase spectrum ∠X(ω)

5.4.3 Frequency Shifting (Modulation)


The dual of the time shifting property is the frequency shifting property. If
F
x(t) ←→ X(ω)

then
F
x(t)ejω0 t ←→ X(ω − ω0 ) (5.17)
106 CHAPTER 5. THE FOURIER TRANSFORM

Proof By definition
Z ∞
F [x(t)e jω0 t
]= x(t)ejω0 t e−jωt dt
−∞
Z ∞
= x(t)e−j(ω−ω0 )t dt
−∞
= X(ω − ω0 )

Therefore, the multiplication of a signal by a factor ejω0 t causes the spectrum


of that signal to be shifted by ω0 .

Example 5.10 Find the Fourier transform of the complex sinusoidal pulse x(t) defined as
(
ej10t , |t| ≤ π
x(t) =
0, otherwise

 Solution We may express x(t) as a product of a complex sinusoid, ej10t , and


a rectangular pulse  
j10t t
x(t) = e rect

Using (5.12) we know that
 
t F
rect ←→ 2π sinc (ω)

Therefore  
t F
ej10t rect ←→ 2π sinc (ω − 10) 

Changing ω0 to −ω0 in Equation (5.17) yields
F
x(t)e−jω0 t ←→ X(ω + ω0 )

For real valued x(t), it is now easy to find the Fourier transform of x(t) multiplied
by a sinusoid since for example x(t) cos ω0 t can be expressed as
1
x(t) cos ω0 t = [x(t)ejω0 t + x(t)e−jω0 t ]
2
It follows from the frequency shifting property that

F 1
F [x(t) cos ω0 t] ←→ [X(ω − ω0 ) + X(ω + ω0 )] (5.18)
2
Multiplying a signal by a sinusoid cos ω0 t amounts to modulating the sinusoid
amplitude. Modulating means changing the amplitude of one signal by mul-
tiplying it by the other. This type of modulation is thus known as amplitude
modulation. The sinusoid cos ω0 t is called the carrier, the signal x(t) is the
modulating signal, and the signal x(t) cos ω0 t is the modulated signal. Further
discussion of modulation and demodulation will appear in chapter 6.
5.4. PROPERTIES OF THE FOURIER TRANSFORM 107

t

Find and sketch the Fourier transform of rect 4 cos 10t. Example 5.11
 F 
 Solution From (5.12) we find rect 4t ←→ 4 sinc 2ω
π , which is depicted
in Figure 5.12(a). From (5.18) it follows that

F 1
x(t) cos 10t ←→ [X(ω + 10) + X(ω − 10)]
2
In this case X(ω) = 4 sinc( 2ω
π ). Therefore

F
x(t) cos 10t ←→ 2 sinc [2(ω + 10)] + 2 sinc[2(ω − 10)]

The spectrum of x(t) cos 10t is obtained by shifting X(ω) in Figure 5.12(b) to

Figure 5.12: Frequency shifting by amplitude modulation.

the left by 10 and also to the right by 10, and then multiplying it by one-half,
as depicted in Figure 5.12(d). 

5.4.4 Time Scaling and Frequency Scaling


If
F
x(t) ←→ X(ω)
then, for any real valued scaling constant a
F 1 ω
x(at) ←→ X
|a| a
and  
1 t F
x ←→ X (aω)
|a| a
Proof For a positive real constant a and changing the variable of integration
to u = at, we have
Z
1 ∞ 1 ω 
F [x(at)] = x(u)e−jωu/a du = X for a > 0
a −∞ a a
108 CHAPTER 5. THE FOURIER TRANSFORM

However, if a < 0, the limits on the integral are reversed when the variable of
integration is changed so that
Z −∞
1
F [x(at)] = x(u)e−jωu/a du
a ∞
Z
1 ∞
1 ω 
=− x(u)e−jωu/a du = − X for a < 0
a −∞ a a

We can write the two cases as one because the factor − a1 is always positive when
a < 0; i.e.,
1 ω
F [x(at)] = X  (5.19)
|a| a
The frequency scaling property can be proven in a similar manner and the result
is  
1 t F
x ←→ X (aω)
|a| a
If a is positive and greater than unity, x(at) is a compressed version of x(t) and
clearly the function X ωa represents the function X(ω) expanded in frequency
by the same factor a. The scaling property states that time compression of a
signal results in its spectral expansion, and time expansion of the signal results
in its spectral compression. Figure 5.13 shows two cases where the pulse length
differs by a factor of two. Notice the longer pulse in Figure 5.13a has a narrower
transform shown in Figure 5.13b.

Figure 5.13: The Fourier transform duality property.

t

Example 5.12 What is the Fourier transform of rect τ .

 Solution The Fourier transform of rect t
τ is, by example 5.3,
   ωτ 
t F
rect ←→ τ sinc
τ 2π
5.4. PROPERTIES OF THE FOURIER TRANSFORM 109

By (5.19) the Fourier transform of rect 2t
τ is
   ωτ 
2t F τ
rect ←→ sinc 
τ 2 4π

5.4.5 Time Reflection


If
F
x(t) ←→ X(ω)
then
F
x(−t) ←→ X(−ω)
Proof By letting a = −1 in (5.19) we get
 
1 ω
F [x(−t)] = X = X(−ω) 
| − 1| −1

5.4.6 Time Differentiation


If
F
x(t) ←→ X(ω)
then
dx(t) F
←→ jωX(ω)
dt
Proof Differentiation of both sides of (5.8) yields
 Z ∞  Z ∞
d d 1 1
x(t) = jωt
X(ω) e dω = jωX(ω) ejωt dω 
dt dt 2π −∞ 2π −∞
This result shows that
dx(t) F
←→ jωX(ω)
dt
The differentiation property can be extended to yield the following
dn x(t) F
←→ (jω)n X(ω) (5.20)
dtn
Using the time differentiation
 property, find the Fourier transform of the triangle Example 5.13
pulse x(t) = Δ τt illustrated in Figure 5.14a and defined as
  (
t 1 − 2|t| τ
τ , |t| ≤ 2
Δ =
τ 0, otherwise

 Solution To find the Fourier transform of this pulse we differentiate the pulse
successively as illustrated in Figure 5.14b and c. From the time differentiation
property (5.22)
d2 x(t) F
←→ (jω)2 X(ω) = −ω 2 X(ω)
dt2
d2 x
The dt2 , consists of a sequence of impulses, as depicted in Figure 5.14c; that is

d2 x(t) 2
= [δ(t + τ2 ) − 2δ(t) + δ(t − τ2 )]
dt2 τ
110 CHAPTER 5. THE FOURIER TRANSFORM

Figure 5.14: The Fourier transform using the time differentiation property.

Taking the Fourier transform


 
2
−ω 2 X(ω) = F [δ(t + τ2 ) − 2δ(t) + δ(t − τ2 )]
τ
we obtain

−ω 2 X(ω) = τ2 [ejωτ /2 − 2 + e−jωτ /2 ] = τ4 (cos ωτ 8
2 − 1) = − τ sin
2 ωτ
4

and  2
 τ sin( ωτ  ωτ 
8 4 ) τ
X(ω) = 2
sin 2 ωτ
4 = ωτ = sinc2 
ω τ 2 4 2 4π
We must be careful when using the differentiation property. Note that since
 
dx(t)
F = jωX(ω)
dt
the differentiation property would suggest
h i
F dx(t)
dt
X(ω) = (5.21)

5.4. PROPERTIES OF THE FOURIER TRANSFORM 111

This relationship is indeterminate at ω = 0, note that differentiating x(t) de-


stroys any dc component of x(t) and, consequently, the Fourier transform of the
differentiated signal at ω = 0 is zero. Hence (5.21) applies only to signals with
zero average value, that is, X(0) = 0. Using the differentiation property to find
the Fourier transform of the unit step for example will yield a wrong answer.
The unit step is known to have an average value and differentiating the unit
step would destroy this average value. The derivative of the unit step is

du(t)
= δ(t)
dt
Taking Fourier transform of both sides yields

jωU (ω) = 1

We might be tempted at this stage to claim that the Fourier transform of u(t)
is
1
U (ω) =

This is not true, since U (0) 6= 0 and the above result is indeterminate at ω = 0.
At this point the signum function sgn(t) becomes handy since we know that the
signal being an odd function must have an average value of zero. Furthermore,
we can attempt to find the Fourier transform of the unit step since we can al-
ways express the unit step in terms of the signum function.

Find the Fourier transform of the signum function x(t) = sgn(t). Example 5.14

 Solution First express sgn(t) in terms of the unit step function as

x(t) = sgn(t) = 2u(t) − 1

The time derivative of sgn(t) is given by

dx(t)
= 2δ(t)
dt
Using the differentiation property and taking the Fourier transform of both sides

jωX(ω) = 2

or
jω F [sgn(t)] = 2
Hence
F 2
sgn(t) ←→

We know that X(ω) = 0 because sgn(t) is an odd function and thus has zero
average value. This knowledge removes the indeterminacy at ω = 0 associated
with the differentiation property. 

Find the Fourier transform of the unit step function u(t). Example 5.15
112 CHAPTER 5. THE FOURIER TRANSFORM

 Solution The unit step function can be written as

1 1
u(t) = + sgn(t)
2 2
By linearity of the Fourier transform, we obtain

F 1
u(t) ←→ πδ(ω) +

Therefore, the Fourier transform of the unit step function contains an impulse
at ω = 0 corresponding to the average value of 12 . 

5.4.7 Frequency Differentiation


If
F
x(t) ←→ X(ω)

then
F d
−jtx(t) ←→ X(ω)

Proof Differentiation of both sides of (5.9) yields
Z ∞  Z ∞
d d
X(ω) = x(t) e −jωt
dt = −jtx(t) e−jωt dt 
dω dω −∞ −∞

This result shows that


F d
−jtx(t) ←→ X(ω)

Note that this result can also be written as

F d
tx(t) ←→ j X(ω)

The frequency differentiation property can be extended to yield the following

F dn
tn x(t) ←→ (j)n X(ω) (5.22)
dω n

Example 5.16 Find the Fourier transform of z(t) = te−at u(t)

 Solution Using (5.22) and letting x(t) = e−at u(t)


 
d 1 −j
F [tx(t)] = j =j
dω a + jω (a + jω)2

Hence,
F 1
te−at u(t) ←→ 
(a + jω)2
5.4. PROPERTIES OF THE FOURIER TRANSFORM 113

5.4.8 Time Integration


If
F
x(t) ←→ X(ω)
then Z t
F 1
x(τ )dτ ←→ X(ω) + πX(0)δ(ω) (5.23)
−∞ jω
where Z

X(0) = X(ω) = x(t)dt
ω=0 −∞

If x(t) has a nonzero average (dc value), then X(0) 6= 0.

Proof By definition
Z t  Z ∞ Z t 
F x(τ )dτ = x(τ )dτ e−jωt dt
−∞ −∞ −∞
Z ∞ Z ∞ 
= x(τ )u(t − τ )dτ e−jωt dt
−∞ −∞

Interchanging the order of integration and noting that x(τ ) does not depend on
t, we have
Z t  Z ∞ Z ∞ 
F x(τ )dτ = x(τ ) u(t − τ )e−jωt dt dτ
−∞ −∞ −∞
| {z }
Fourier transform of shifted u(t)
Z ∞  
= x(τ ) U (ω)e−jωτ dτ
−∞
Z ∞
= U (ω) x(τ )e−jωτ dτ
−∞
| {z }
Simply X(ω)

The Fourier transform of u(t) from Example 5.14 is

1
U (ω) = πδ(ω) +

Therefore,
Z t   
1
F x(τ )dτ = πδ(ω) + X(ω)
−∞ jω
which can be written as
Z t 
1
F x(τ )dτ = X(ω) + πX(0)δ(ω) 
−∞ jω

The factor X(0) in the second term on the right follows from the sifting property
of the impulse function. This second term is needed to account for the average
value of x(τ ). Recall that a dc component will show as an impulse at ω = 0. If
114 CHAPTER 5. THE FOURIER TRANSFORM

x(τ ) has no dc component to consider the time integration property will simply
be Z t
F 1
x(τ )dτ ←→ X(ω)
−∞ jω
Example 5.17 Using the integration property derive the Fourier transform of the unit step
function u(t).

 Solution The unit step may be expressed as the integral of the impulse
function Z t
u(t) = δ(τ ) dτ
−∞
F
Since δ(t) ←→ 1, using (5.23) suggests
Z t
F 1
u(t) = δ(τ )dτ ←→ + πδ(ω)
−∞ jω
as found earlier in Example 5.14. 

5.4.9 Time Convolution


If
F
x(t) ←→ X(ω)
F
h(t) ←→ H(ω)
then
F
x(t) ∗ h(t) ←→ X(ω)H(ω)
Proof The proof follows from the definition of the convolution property, hence,
Z ∞ Z ∞ 
F [x(t) ∗ h(t)] = x(τ )h(t − τ )dτ e−jωt dt
−∞ −∞
Z ∞ Z ∞ 
= x(τ ) h(t − τ )e−jωt dt dτ
−∞ −∞
| {z }
Fourier transform of shifted h(t)
Z ∞  
= x(τ ) H(ω)e−jωτ dτ
−∞
Z ∞
= H(ω) x(τ )e−jωτ dτ
−∞
| {z }
= X(ω)

= H(ω)X(ω) = Y (ω) 
Thus convolution in the time domain is equivalent to multiplication in the fre-
quency domain. The use of the convolution property for LTI systems is demon-
strated in Figure 5.15. The amplitude and phase spectrum of the output y(t)
are related to the input x(t) and impulse response h(t) in the following manner:
|Y (ω)| = |X(ω)| |H(ω)|
∠Y (ω) = ∠X(ω) + ∠H(ω)
5.4. PROPERTIES OF THE FOURIER TRANSFORM 115

Thus the amplitude spectrum of the input is modified by |H(ω)| to produce


the amplitude spectrum of the output, and the phase spectrum of the input is
changed by ∠H(ω) to produce the phase spectrum of the output. H(ω), the
Fourier transform of the system impulse response, is generally referred to as the
frequency response of the system.

Figure 5.15: Convolution property of LTI system response.

Using the time convolution property prove the integration property. Example 5.18

 Solution Consider the convolution of x(t) with a unit step function it follows
Z ∞
x(t) ∗ u(t) = x(τ )u(t − τ ) dτ
−∞

The unit step function u(t − τ ) has a value zero for t < τ and a value of 1 for
t > τ , therefore, Z t
x(t) ∗ u(t) = x(τ ) dτ
−∞
Now from the time convolution property, it follows that
Z t  
F 1
x(t) ∗ u(t) = x(τ )dτ ←→X(ω) + πδ(ω)
−∞ jω
1
= X(ω) + πX(0)δ(ω) 

5.4.10 Frequency Convolution (Multiplication)


It will not be surprising that since convolution in the time domain corresponds
to multiplication of the Fourier transform, that multiplication in time domain
corresponds to convolution of the Fourier transforms. Therefore, if
F
x(t) ←→ X(ω)
F
p(t) ←→ P (ω)
then
F 1
x(t)p(t) ←→ [X(ω) ∗ P (ω)]

Proof By definition
Z ∞
F [x(t)p(t)] = x(t)p(t) e−jωt dt (5.24)
−∞

We substitute for p(t) in (5.24) by


Z ∞
1
p(t) = P (σ)ejσt dσ
2π −∞
116 CHAPTER 5. THE FOURIER TRANSFORM

using a different dummy variable since the variable ω is already used in (5.24),
therefore,
Z ∞  Z ∞ 
1
F [x(t)p(t)] = x(t) P (σ)e dσ e−jωt dt
jσt
−∞ 2π −∞
| {z }
p(t)

Interchanging the order of integration, we have


Z ∞ Z ∞ 
1 −jωt jσt
F [x(t)p(t)] = P (σ) x(t)e e dt dσ
2π −∞ −∞
Z ∞ Z ∞ 
1 −j(ω−σ)t
= P (σ) x(t)e dt dσ
2π −∞ −∞
| {z }
Frequency shifting = X(ω − σ)
Z ∞
1
= P (σ)X(ω − σ)dτ
2π −∞
1
= X(ω) ∗ P (ω) 

Figure 5.16 depicts a block diagram representation of the multiplication prop-
erty. The importance of this property is that the spectrum of a signal such
as x(t) cos ω0 t can be easily computed. These type of signals arise in many
communication systems, such as amplitude modulators as we shall see later.
Since
1 1
cos ω0 t = ejω0 t + e−jω0 t
2 2
then
1
F [x(t) cos ω0 t] = X(ω) ∗ [πδ(ω − ω0 ) + πδ(ω + ω0 )]

1 1
= X(ω − ω0 ) + X(ω + ω0 )
2 2
a result we have seen earlier (see the frequency shifting property). Some authors
refer to the multiplication property as modulation property.

Figure 5.16: Block diagram representation of the multiplication property.

5.4.11 Symmetry - Real and Imaginary Signals


First, suppose x(t) is real. This implies that x(t) = x∗ (t), then

X(−ω) = X ∗ (ω)
5.4. PROPERTIES OF THE FOURIER TRANSFORM 117

where ∗ denotes the complex conjugate. This is referred to as conjugate sym-


metry.

Proof The property follows by taking the conjugate of both sides of (5.9)
Z ∞ ∗
X ∗ (ω) = x(t)e−jωt dt
−∞
Z ∞
= x∗ (t)ejωt dt
−∞

Using the fact that x(t) is real we obtain


Z ∞
X ∗ (ω) = x(t)ejωt dt
−∞
= X(−ω)

Now, if we express X(ω) in polar form, we have

X(ω) = |X(ω)|ej∠X(ω) (5.25)

Conjugating both sides of (5.25) yields

X ∗ (ω) = |X(ω)|e−j∠X(ω)

Replacing each ω by −ω in (5.25) results in

X(−ω) = |X(−ω)|ej∠X(−ω)

Since X ∗ (ω) = X(−ω), the last two equations are equal. It then follows that

|X(ω)| = |X(−ω)| (5.26)


∠X(ω) = −∠X(−ω) (5.27)

showing that the magnitude spectrum is an even function of frequency and the
phase spectrum is an odd function of frequency.
Now suppose x(t) is purely imaginary so that x(t) = −x∗ (t). In this case,
we may write
Z ∞ ∗
∗ −jωt
X (ω) = x(t)e dt
−∞
Z ∞
= x∗ (t)ejωt dt
−∞
Z ∞
=− x(t)ejωt dt
−∞
= −X(−ω)

It then follows that

|X(ω)| = −|X(−ω)|
∠X(ω) = ∠X(−ω)
118 CHAPTER 5. THE FOURIER TRANSFORM

i.e., the magnitude spectrum is an odd function of frequency and the phase
spectrum is an even function of frequency.

Example 5.19 Show that if x(t) is real, the expression of the inverse Fourier transform in
(5.8) can be changed to an expression involving real cosinusoidal signals.

 Solution For real x(t)


Z ∞
1
x(t) = X(ω)ejωt dω
2π −∞
Z 0 Z ∞
1 1
= X(ω)ejωt dω + X(ω)ejωt dω
2π −∞ 2π 0
Expressing X(ω) in polar form as in (5.25) and replacing ω by −ω the first
integral term above yields (paying particular attention to how the limits of in-
tegration has changed )
Z ∞ Z ∞
1 1
x(t) = |X(−ω)|ej∠X(−ω) e−jωt dω + |X(ω)|ej∠X(ω) ejωt dω
2π 0 2π 0
Using (5.26) and (5.27) we obtain
Z ∞ Z ∞
1 1
x(t) = |X(ω)|e−j(ωt+∠X(ω)) dω + |X(ω)|ej(ωt+∠X(ω)) dω
2π 0 2π 0
Z ∞ h i
1
= |X(ω)| ej(ωt+∠X(ω)) + e−j(ωt+∠X(ω)) dω
2π 0
Z ∞
1
= 2|X(ω)| cos [ωt + ∠X(ω)] dω 
2π 0

5.4.12 Symmetry - Even and Odd Signals


Assume x(t) is real valued and has even symmetry. These conditions imply
x∗ (t) = x(t) and x(−t) = x(t). Using these relationships we may write
Z ∞
X ∗ (ω) = x∗ (t)ejωt dt
−∞
Z ∞
= x(t)ejωt dt
−∞
Z ∞
= x(−t)ejωt dt
−∞

Now perform a change of variable τ = −t to obtain


Z ∞

X (ω) = x(τ )e−jωτ dt
−∞
= X(ω) 
Therefore, the Fourier transform of an even and real valued time signal is an even
and a real valued signal in the frequency domain. Similarly, we may show that
if x(t) is real and odd, then X ∗ (ω) = −X(ω), i.e., the Fourier transform of an
odd and real valued time signal is an odd and imaginary signal in the frequency
domain. Table 5.1 summarizes the four types of combined symmetries.
5.4. PROPERTIES OF THE FOURIER TRANSFORM 119

x(t) X(ω)

Real and Even Real and Even

Real and Odd Imaginary and Odd

Imaginary and Even Imaginary and Even

Imaginary and Odd Real and Odd

Table 5.1: Symmetries of the Fourier transform.

5.4.13 Duality
A duality exists between the time domain and the frequency domain. Equations
(5.8) and (5.9) show an interesting fact: the direct and the inverse transform
operations are remarkably similar. The property states that if
F
x(t) ←→ X(ω) (5.28)

then
F
X(t) ←→ 2πx(−ω) (5.29)
This property states that if x(t) has the Fourier transform X(ω) and a function
of time exists such that

X(t) = X(ω)
ω=t


then F [X(t)] = 2πx(−ω), where x(−ω) = x(t) .
t=−ω

Proof According to (5.8)


Z ∞
1
x(t) = X(σ)ejσt dσ
2π −∞

Hence Z ∞
2πx(−t) = X(σ)e−jσt dσ
−∞

Changing t to ω yields
Z ∞
2πx(−ω) = X(σ)e−jσω dσ
−∞

Furthermore, changing σ to t yields


Z ∞
2πx(−ω) = X(t)e−jωt dt 
−∞

The duality relationship described by Equations (5.28) and (5.29) is illustrated


in Figure 5.17.
120 CHAPTER 5. THE FOURIER TRANSFORM

Figure 5.17: The Fourier transform duality property.

Example 5.20 Use duality to find the Fourier transform of


2
x(t) =
1 + t2
 Solution First recognize that
F 2
f (t) = e−|t| ←→ F (ω) =
1 + ω2
Using duality

which indicates that

X(ω) = 2πf (−ω)


= 2πe−|ω| 

Example 5.21 Use duality to find the inverse Fourier transform of X(ω) = sgn(ω)

 Solution We recognize that


F 2
f (t) = sgn(t) ←→ F (ω) =

Using duality
5.4. PROPERTIES OF THE FOURIER TRANSFORM 121

which indicates that


2 F
←→ 2πsgn(−ω)
jt
Using the fact that sgn(ω) = −sgn(−ω) we obtain

j
x(t) = 
πt

5.4.14 Energy of Non-periodic Signals


In Section 4.5.11, we related the total average power of a periodic signal to the
average power of each frequency component in its Fourier series. We did this
through Parseval’s theorem. We would like to find analogous relationship for
non-periodic signals. Non-periodic signals are energy signals, next we show that
the energy of these signals can be computed from their transform X(ω). The
signal energy is defined as
Z ∞ Z ∞
E∞ = 2
|x(t)| dt = x(t)x∗ (t)dt
−∞ −∞

Using (5.8) in the above equation


Z ∞  Z ∞ 
1 ∗ −jωt
E∞ = x(t) X (ω)e dω dt
−∞ 2π −∞

Interchanging the order of integration gives


Z ∞ Z ∞ 
1 ∗ −jωt
E∞ = X (ω) x(t)e dt dω
2π −∞ −∞
Z ∞
1
= X ∗ (ω)X(ω) dω
2π −∞
Z ∞
1
= |X(ω)|2 dω
2π −∞

Consequently,
Z ∞ Z ∞
12
E∞ = |x(t)| dt = |X(ω)|2 dω (5.30)
−∞ 2π −∞

Equation (5.30) is known as Parseval’s theorem for energy signals.

Evaluate the following integral Example 5.22


Z ∞
2

−∞ |jω + 2|2
122 CHAPTER 5. THE FOURIER TRANSFORM

 Solution Let
1
X(ω) =
jω + 2
Now the inverse Fourier transform of X(ω) is x(t) = e−2t u(t). Using (5.30)
Z ∞ Z ∞
1 2
dω = 2 |x(t)|2 dt
2π −∞ |jω + 2|2 −∞
Z ∞ Z ∞
2
2
dω = 4π e−4t dt
−∞ |jω + 2| 0
=π 

For convenience, a summary of the basic Fourier transform properties are listed
in Table 5.2.

Property Name x(t) X(ω)

Linearity αx1 (t) + βx2 (t) αX1 (ω) + βX2 (ω)

Time Shifting x(t − t0 ) X(ω)e−jωt0

Frequency Shifting x(t)ejω0 t X(ω − ω0 )

1 ω
Time Scaling x(at) X
|a| a

Time Reflection x(−t) X(−ω)

Conjugation x∗ (t) X ∗ (−ω)

dn x(t)
Time differentiation (jω)n X(ω)
dtn
dn X(ω)
Frequency Differentiation (−jt)n x(t)
dω n
Z t
1
Time Integration x(τ )dτ X(ω) + πX(0)δ(ω)
−∞ jω

Time Convolution x(t) ∗ h(t) X(ω)H(ω)

1
Multiplication x(t)p(t) X(ω) ∗ P (ω)

Duality X(t) 2πx(−ω)


Z ∞ Z ∞
1
Parseval’s Theorem |x(t)|2 dt |X(ω)|2 dω
−∞ 2π −∞

Table 5.2: Basic Fourier transform properties.


5.5. ENERGY AND POWER SPECTRAL DENSITY 123

5.5 Energy and Power Spectral Density


5.5.1 The Spectral Density
The Fourier spectrum (i.e.X(ω)) of a signal indicates the relative amplitudes and
phases of the sinusoids that are required to synthesize that signal. A periodic
signal Fourier spectrum has finite amplitudes and exist at discrete frequencies
(ω0 and its multiples). Such a spectrum is easy to visualize, but the spectrum
of a non-periodic signal is not easy to visualize because it has a continuous
spectrum. The continuous spectrum concept means the spectrum exists for
every value of ω, but the amplitude of each component in the spectrum is
zero (see Section 6.1.1). The meaningful measure here is not the amplitude of
a component of some frequency but the spectral density per unit bandwidth.
Equation (5.8) represents x(t) as a continuous sum of exponential functions with
frequencies lying in the interval (−∞, ∞). If the signal x(t) represents a voltage,
X(ω) has the dimensions of voltage multiplied by time. Because frequency
has the dimensions of inverse time, we can consider X(ω) as a voltage-density
spectrum, known more generally as the spectral density. In other words it is the
area under the spectral density function X(ω) that contributes and not each
point on the X(ω) curve. From (5.7) it is clear that x(t) is synthesized by
adding exponentials of the form ejnΔωt , in which the contribution by any one
exponential component is zero. But the contribution by the exponentials in an
1
infinitesimal band Δω located at ω = nΔω is 2π X(nΔω)Δω and the addition
of all these components yields x(t) in the integral form:
∞ Z ∞
1 X 1
x(t) = lim X(nΔω)ej(nΔωt) Δω = X(ω)ejωt dω
Δω→0 2π 2π −∞
n=−∞

1
The contribution by components within a band dω is 2π X(ω) dω = X(ω) dF ,

where dF = 2π is the bandwidth in hertz. Clearly, X(ω) is the spectral density
per unit bandwidth (in hertz). It also follows that even if the amplitude of
any one component is zero, the relative amount of component of frequency ω
is X(ω). Although X(ω) is a spectral density, in practice it is often called the
spectrum of x(t) rather that the spectral density of x(t). More commonly, X(ω)
is called the Fourier spectrum (or Fourier transform) of x(t).

5.5.2 Energy Spectral Density


We turn our attention now to investigate how any given frequency band con-
tributes to the energy of the signal. Equation (5.30) can be interpreted to
mean that the energy of a signal x(t) results from energies contributed by all
the spectral components of the signal x(t). The total signal energy is the area
under |X(ω)|2 (divided by 2π). If we consider an infinitesimally small band
Δω (Δω → 0) as illustrated in Figure 5.18, the energy ΔE∞ of the spectral
components in this band is the area |X(ω)|2 under this band (divided by 2π):
1
ΔE∞ = |X(ω)|2 Δω = |X(ω)|2 ΔF

Therefore, the energy contributed by the components in this band of ΔF (in
hertz) is |X(ω)|2 ΔF . The total signal energy is the sum of energies of all
124 CHAPTER 5. THE FOURIER TRANSFORM

Figure 5.18: Interpretation of Energy spectral density of a signal

such bands and is indicated by the area under |X(ω)|2 as in (5.30). Therefore,
|X(ω)|2 is called energy spectral density or simply the energy spectrum and is
given the symbol Ψ, that is Ψ(ω) = |X(ω)|2 . Thus the energy spectrum is that
function
• that describes the relative amount of energy of a given signal x(t) versus
frequency.
• whose total area under Ψ(ω) is the energy of the signal.
Note that the quantity Ψ(ω) describes only the relative amount of energy at
various frequencies. For continuous Ψ(ω), the energy at any given frequency is
zero, it is the area under Ψ(ω) that contributes energy. The energy contained
within a band ω1 ≤ ω ≤ ω2 is
Z ω2
1
ΔE∞ = |X(ω)|2 dω (5.31)
2π ω1
For real valued time signals, Ψ(ω) is an even function and (5.30) can be reduced
to Z
1 ∞
E∞ = |X(ω)|2 dω (5.32)
π 0
Note that the energy spectrum of a signal depends on the magnitude of the
spectrum and not the phase.

Example 5.23 Find the energy of the signal x(t) = e−t u(t) in the frequency band −4 < ω < 4

 Solution The total energy of x(t) is


Z ∞
1
E∞ = e−2t dt =
0 2
The energy in the frequency band −4 < ω < 4 is
Z
1 4 1
ΔE∞ = dω
π 0 1 + ω2
4
1
−1
= tan ω = 0.422 
π 0

Thus, approximately 84% of the total energy content of the signal lies in the
frequency band −4 < ω < 4. A result that could not be obtained in the time
domain.
5.5. ENERGY AND POWER SPECTRAL DENSITY 125

5.5.3 Power Spectral Density


Not all signals of interest have finite energy, some signals have infinite energy but
finite average power. A function that describes the distribution of the average
power of the signal as a function of frequency is called the power spectral density
or simply the power spectrum. In the following, we develop an expression for
the power spectrum of power signals. Let x(t) be a power signal (not necessary
periodic) shown in Figure 5.19a and define xτ (t), a truncated version of this
power signal shown in Figure 5.19c, as
(
x(t), |t| < T
xT (t) =
0, otherwise
 
t
= x(t)rect
2T

We also assume that


F
xT (t) ←→ XT (ω)

The average power in signal x(t) is


" Z #  Z 
T ∞
1 2 1
P∞ = lim |x(t)| dt = lim |xT (t)|2 dt (5.33)
T →∞ 2T −T T →∞ 2T −∞

Figure 5.19: The time truncation of a power signal.


126 CHAPTER 5. THE FOURIER TRANSFORM

Using Parseval’s relation, (5.33) can be written as


 Z ∞  Z ∞  
1 1 2 1 |XT (ω)|2
P∞ = lim |X (ω)| dω = lim dω
2π T →∞ 2T −∞ T 2π −∞ T →∞ 2T
Z ∞
1
= S(ω) dω
2π −∞
where  
|XT (ω)|2
S(ω) = lim
T →∞ 2T
S(ω) is referred to as power spectrum of signal x(t) and describes the distribu-
tion of the power of the signal versus frequency.
The above discussion holds for any general power signal. Next we show how
to compute the power spectrum of a periodic signal. Assume x(t) is periodic
and that it is represented by the exponential Fourier series

X
x(t) = cn ejnω0 t
n=−∞

t

Define the truncated signal xT (t) as the product x(t)rect 2T . By using the
modulation property
    
t F 1 ωT
xT (t) = x(t)rect ←→ XT (ω) = X(ω) ∗ 2T sinc
2T 2π π
Z ∞  
1 λT
= 2T sinc X(ω − λ) dλ
2π −∞ π

Substituting (5.14) for X(ω) we have


Z ∞  " X

#
1 λT
XT (ω) = 2T sinc 2πcn δ(ω − λ − nω0 ) dλ
2π −∞ π n=−∞

Interchanging the operations of integration and summation yields


X∞ Z ∞   
λT
Xτ (ω) = 2T cn sinc δ(ω − nω0 − λ) dλ
n=−∞ −∞ π
| {z
}

By sifting = sinc λT
π
λ=ω−nω0

X  
(ω − nω0 )T
= 2T cn sinc
n=−∞
π

|XT (ω)|2
Next we form the function to obtain
2T
X∞ ∞
X    
|XT (ω)|2 (ω − nω0 )T (ω − mω0 )T
= 2T cn c∗m sinc sinc
2T n=−∞ m=−∞
π π

The power spectrum of the periodic signal x(t) is obtained by taking the limit of
the last expression as T → ∞. It has been observed earlier (see Example 5.4)
that as T → ∞ the sinc function approaches an impulse; here each sinc narrows
onto ω = nω0 and carries area π/T. Consequently the cross terms (m ≠ n)
vanish in the limit, because the two sinc functions then peak at different
frequencies, while each m = n term obeys
\[ \lim_{T\to\infty} 2T\,\mathrm{sinc}^2\!\left(\frac{(\omega-n\omega_0)T}{\pi}\right) = 2\pi\,\delta(\omega-n\omega_0) \]
Then the power spectrum of the periodic signal is
\[ S(\omega) = \lim_{T\to\infty}\frac{|X_T(\omega)|^2}{2T} = 2\pi\sum_{n=-\infty}^{\infty} |c_n|^2\,\delta(\omega-n\omega_0) \tag{5.34} \]

Now that we have obtained our result, note that to convert any line power
spectrum (i.e., the set of line powers |c_n|²) to a power spectral density, we
simply change the lines to impulses at the harmonic frequencies nω0; the weights
(areas) of these impulses are equal to the squared magnitudes of the line heights
multiplied by the factor 2π. Integrating the power spectrum S(ω) in (5.34) over
all frequencies yields

\[ P_\infty = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\, d\omega = \int_{-\infty}^{\infty}\left[\sum_{n=-\infty}^{\infty} |c_n|^2\,\delta(\omega-n\omega_0)\right] d\omega = \sum_{n=-\infty}^{\infty} |c_n|^2 \underbrace{\int_{-\infty}^{\infty}\delta(\omega-n\omega_0)\, d\omega}_{=\,1} = \sum_{n=-\infty}^{\infty} |c_n|^2 \]

Example 5.23  Find the power spectral density of the periodic signal x(t) = A cos(ω0t + θ).

Solution  Using Euler's identity,
\[ x(t) = \frac{A}{2}\, e^{j\theta} e^{j\omega_0 t} + \frac{A}{2}\, e^{-j\theta} e^{-j\omega_0 t} \]
Reading off the exponential Fourier series of x(t), we find
\[ c_{-1} = \frac{A}{2}\, e^{-j\theta} \qquad\text{and}\qquad c_{1} = \frac{A}{2}\, e^{j\theta} \]
Using (5.34), we have
\[ S(\omega) = 2\pi\left[\frac{A^2}{4}\,\delta(\omega+\omega_0) + \frac{A^2}{4}\,\delta(\omega-\omega_0)\right] = \frac{1}{2}\pi A^2\,\delta(\omega+\omega_0) + \frac{1}{2}\pi A^2\,\delta(\omega-\omega_0) \]
The power can be found by integrating the power spectrum:
\[ P_\infty = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\, d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\frac{1}{2}\pi A^2\,\delta(\omega+\omega_0) + \frac{1}{2}\pi A^2\,\delta(\omega-\omega_0)\right] d\omega = \frac{A^2}{4} + \frac{A^2}{4} = \frac{A^2}{2} \]
This result can be checked easily in the time domain. Note also that it follows
directly from Parseval's relation:
\[ \sum_{n=-\infty}^{\infty} |c_n|^2 = c_{-1}c_{-1}^* + c_1 c_1^* = \frac{A^2}{4} + \frac{A^2}{4} = \frac{A^2}{2} \]  
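This example is also easy to verify numerically. The sketch below (assumed values A = 2, θ = 0.3, and a 5 Hz tone; the FFT normalization is our choice) estimates the exponential Fourier-series coefficients from one sampled period and checks Parseval's relation:

import numpy as np

A, w0, theta = 2.0, 2*np.pi*5, 0.3       # assumed example values
T0 = 2*np.pi/w0                          # fundamental period
N = 1024
t = np.arange(N) * (T0/N)                # exactly one period, N samples
x = A*np.cos(w0*t + theta)
c = np.fft.fft(x)/N                      # estimates of the exponential FS coefficients
print(np.sum(np.abs(c)**2), A**2/2)      # Parseval: both ~2.0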

5.6 Correlation Functions


The word correlation in general refers to the degree to which things are related;
we say, for example, that drugs and crime are correlated. In the study of signal
and system analysis the characteristics of individual signals are important, but
often the relationships between signals are just as important. Correlation
functions mathematically quantify the similarity between two signals, in both the
time domain and the frequency domain. The mathematical definition of a
correlation function depends on the type of signals being analyzed.

5.6.1 Energy Signals


For two real energy signals x(t) and y(t) the correlation function R_{xy} is defined
by (note that different authors use different definitions of correlation; however,
they establish the same results)
\[ R_{xy}(t) = \int_{-\infty}^{\infty} x(\tau)\, y(\tau-t)\, d\tau = \int_{-\infty}^{\infty} x(t+\tau)\, y(\tau)\, d\tau \]
For complex signals we define
\[ R_{xy}(t) = \int_{-\infty}^{\infty} x(\tau)\, y^*(\tau-t)\, d\tau = \int_{-\infty}^{\infty} x(t+\tau)\, y^*(\tau)\, d\tau \]

5.6.2 Power Signals


The correlation between two power signals x(t) and y(t) is defined by
\[ R_{xy}(t) = \lim_{T\to\infty}\frac{1}{T}\int_{T} x(\tau)\, y(\tau-t)\, d\tau = \lim_{T\to\infty}\frac{1}{T}\int_{T} x(t+\tau)\, y(\tau)\, d\tau \]
where the notation \(\int_T\) indicates integration over any interval of length T. For complex signals we define
\[ R_{xy}(t) = \lim_{T\to\infty}\frac{1}{T}\int_{T} x(\tau)\, y^*(\tau-t)\, d\tau = \lim_{T\to\infty}\frac{1}{T}\int_{T} x(t+\tau)\, y^*(\tau)\, d\tau \]

5.6.3 Convolution and Correlation

Notice the similarity between the correlation function for two energy signals and
the convolution of two signals presented earlier. The convolution of two signals
x(t) and y(t) is
\[ x(t) * y(t) = \int_{-\infty}^{\infty} x(\tau)\, y(t-\tau)\, d\tau \]
There is thus a simple mathematical relationship between correlation and convolution:
\[ R_{xy}(t) = x(t) * y(-t) \]
Using the convolution property of the Fourier transform and the fact that for a
real-valued signal y(t) we have y(−t) ↔ Y*(ω), it follows that
\[ R_{xy}(t) \;\stackrel{\mathcal{F}}{\longleftrightarrow}\; X(\omega)\, Y^*(\omega) \]
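For sampled signals this relationship can be checked directly; in the sketch below (illustrative exponential signals on a uniform grid, not taken from the notes), correlation and convolution with a time-reversed signal agree to machine precision:

import numpy as np

dt = 0.01
tau = np.arange(0, 2, dt)
x = np.exp(-tau)                                # samples of x(t)
y = np.exp(-2*tau)                              # samples of y(t)
R1 = np.correlate(x, y, mode='full') * dt       # discrete R_xy
R2 = np.convolve(x, y[::-1], mode='full') * dt  # x(t) * y(-t)
print(np.allclose(R1, R2))                      # True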

5.6.4 Autocorrelation

A very important special case of the correlation function is the correlation of
a function with itself. This type of correlation is called the autocorrelation
function. If x(t) is a real-valued energy signal, its autocorrelation is
\[ R_{xx}(t) = \int_{-\infty}^{\infty} x(\tau)\, x(\tau-t)\, d\tau = \int_{-\infty}^{\infty} x(t+\tau)\, x(\tau)\, d\tau \]
At a shift of zero, i.e. t = 0, this becomes
\[ R_{xx}(0) = \int_{-\infty}^{\infty} x^2(\tau)\, d\tau \]
which is the total energy of the signal. On the other hand, if x(t) is a power
signal, the autocorrelation is
\[ R_{xx}(t) = \lim_{T\to\infty}\frac{1}{T}\int_{T} x(\tau)\, x(\tau-t)\, d\tau = \lim_{T\to\infty}\frac{1}{T}\int_{T} x(t+\tau)\, x(\tau)\, d\tau \]
If the shift is zero we have
\[ R_{xx}(0) = \lim_{T\to\infty}\frac{1}{T}\int_{T} x^2(\tau)\, d\tau \]
which is the average power of the signal. Note that if x(t) is periodic, the
limiting operation in the determination of R_{xx}(t) can be replaced by a
computation over one period. The subscript (xx) of the autocorrelation function
R_{xx} is often shortened to R_x.

Example 5.24  Determine and sketch the autocorrelation function of the periodic square wave x(t) shown in Figure 5.20.

Solution  Because x(t) is real valued and periodic, the autocorrelation function
is given by
\[ R_x(t) = \frac{1}{T}\int_{T} x(\tau)\, x(\tau-t)\, d\tau \]

Figure 5.20: A periodic square wave signal

For −T/2 < t < 0 (see Figure 5.21),
\[ R_x(t) = \frac{1}{T}\int_{-T/4}^{\,t+T/4} A^2\, d\tau = A^2\left(\frac{1}{2} + \frac{t}{T}\right) \]
For 0 < t < T/2,
\[ R_x(t) = \frac{1}{T}\int_{t-T/4}^{\,T/4} A^2\, d\tau = A^2\left(\frac{1}{2} - \frac{t}{T}\right) \]

Figure 5.21: A periodic square wave signal

A graph of R_x(t) is shown in Figure 5.22. Note that since x(t) is periodic,
the calculation repeats over every period; it follows that the autocorrelation
function of a periodic waveform is itself periodic. It is interesting to notice that
R_x(0) = A²/2, the average power of the signal, a result we should all be
familiar with by now! 

Figure 5.22: Autocorrelation function of periodic square wave signal
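The triangular autocorrelation of Figure 5.22 can be reproduced numerically. This is a sketch under the assumed values A = 1 and T = 1; the circular shift np.roll imposes the periodicity:

import numpy as np

A, T, N = 1.0, 1.0, 1000
tau = np.arange(N) * (T/N) - T/2
x = np.where(np.abs(tau) < T/4, A, 0.0)      # one period of the square wave
# Periodic autocorrelation: Rx(t_k) = (1/T) * integral over one period
Rx = np.array([np.mean(x * np.roll(x, k)) for k in range(N)])
print(Rx[0], A**2/2)                          # both ~0.5, the average power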



5.7 Correlation and the Fourier Transform


In Section 5.5 we saw how signals are characterized using the power spectral
density function S(ω) and the energy spectral density Ψ(ω). Spectral density
functions give us great insight into which frequency band contains more energy
or more power. The question now naturally arises: is there some operation
in the time domain which is equivalent to finding the power spectrum or the
energy spectrum in the frequency domain?

5.7.1 Autocorrelation and The Energy Spectrum


In Section 5.6.4 we saw how autocorrelation functions are related to signal
energy and signal power. A relation between the autocorrelation function and
the energy spectrum can also be established. The autocorrelation function R_x(t)
for energy signals is
\[ R_x(t) = \int_{-\infty}^{\infty} x(\tau)\, x^*(\tau-t)\, d\tau = \int_{-\infty}^{\infty} x(t+\tau)\, x^*(\tau)\, d\tau \tag{5.35} \]
The Fourier transform of (5.35) gives
\[ \mathcal{F}[R_x(t)] = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x(t+\tau)\, x^*(\tau)\, d\tau\right] e^{-j\omega t}\, dt \]
Interchanging the order of integration, we have
\[ \mathcal{F}[R_x(t)] = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x(t+\tau)\, e^{-j\omega t}\, dt\right] x^*(\tau)\, d\tau = \int_{-\infty}^{\infty} X(\omega)\, e^{j\omega\tau}\, x^*(\tau)\, d\tau = |X(\omega)|^2 \]
Therefore,
\[ R_x(t) \;\stackrel{\mathcal{F}}{\longleftrightarrow}\; \Psi(\omega) \]
and we conclude that the energy spectral density Ψ(ω) is the Fourier transform
of the autocorrelation function of an energy signal. It is clear that R_x(t) provides
spectral information about x(t) directly. It is interesting to note that for a
real-valued energy signal x(t) we can write
\[ R_x(t) \;\stackrel{\mathcal{F}}{\longleftrightarrow}\; X(\omega)\, X(-\omega) \]
Recognizing that multiplication in the frequency domain is equivalent to
convolution in the time domain,
\[ R_x(t) = x(t) * x(-t) = \int_{-\infty}^{\infty} x(\tau)\, x(\tau-t)\, d\tau \tag{5.36} \]
which is exactly the definition of the autocorrelation function. From (5.36) it is
clear that
\[ R_x(-t) = x(-t) * x(t) = R_x(t) \]
Therefore, for real x(t), the autocorrelation function R_x(t) is an even function of t.
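This Fourier-transform pair can be verified on sampled data. The sketch below (an illustrative decaying exponential; the discretization choices are ours) computes the energy spectrum both directly and via the autocorrelation:

import numpy as np

N, dt = 4096, 0.01
t = np.arange(N) * dt
x = np.exp(-t) * (t < 5)                      # effectively time-limited energy signal
Psi = np.abs(np.fft.fft(x, 2*N) * dt)**2      # |X(w)|^2 computed directly
Rx = np.correlate(x, x, mode='full') * dt     # autocorrelation over all lags
Psi2 = np.abs(np.fft.fft(Rx, 2*N)) * dt       # |F[Rx]|; abs removes the lag-offset phase
print(np.allclose(Psi, Psi2))                 # True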

5.7.2 Autocorrelation and the Power Spectrum


The autocorrelation function has the same relation to the power spectral density
as it has to the energy spectral density. In particular, we show that the power
spectral density S(ω) is the Fourier transform of the autocorrelation function of
power signals, i.e.,
\[ R_x(t) \;\stackrel{\mathcal{F}}{\longleftrightarrow}\; S(\omega) \]
The autocorrelation function R_x(t) for power signals is
\[ R_x(t) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(\tau)\, x^*(\tau-t)\, d\tau = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t+\tau)\, x^*(\tau)\, d\tau \]
We begin by taking the inverse Fourier transform of S(ω):
\[ \mathcal{F}^{-1}[S(\omega)] = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\lim_{T\to\infty}\frac{|X_T(\omega)|^2}{2T}\right] e^{j\omega t}\, d\omega \]
Interchanging the order of operations yields
\[ \mathcal{F}^{-1}[S(\omega)] = \lim_{T\to\infty}\frac{1}{2T}\int_{-\infty}^{\infty}\frac{1}{2\pi}\, X_T(\omega)\, X_T^*(\omega)\, e^{j\omega t}\, d\omega = \lim_{T\to\infty}\frac{1}{2T}\int_{-\infty}^{\infty}\frac{1}{2\pi}\left[\int_{-T}^{T} x(\tau)\, e^{-j\omega\tau}\, d\tau\right]\left[\int_{-T}^{T} x^*(\mu)\, e^{j\omega\mu}\, d\mu\right] e^{j\omega t}\, d\omega \]
Carrying out the ω-integration first,
\[ \mathcal{F}^{-1}[S(\omega)] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(\tau)\left[\int_{-T}^{T} x^*(\mu)\underbrace{\left(\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega(t-\tau+\mu)}\, d\omega\right)}_{=\,\delta(t-\tau+\mu)}\, d\mu\right] d\tau \]
By the sifting property the μ-integral evaluates to x*(τ − t), so
\[ \mathcal{F}^{-1}[S(\omega)] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(\tau)\, x^*(\tau-t)\, d\tau = R_x(t) \]
We now have another method to find the power spectrum: first determine
the autocorrelation function and then take its Fourier transform.
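A rough numerical illustration of the two routes (a sketch; the random test signal and the normalizations are assumptions, not part of the notes):

import numpy as np

rng = np.random.default_rng(0)
N, dt = 4096, 0.01
x = rng.standard_normal(N)                   # stand-in for a sampled power signal
# Route 1: finite-record estimate |X_T(w)|^2 / 2T, with 2T = N*dt
S1 = np.abs(np.fft.fft(x) * dt)**2 / (N*dt)
# Route 2: Fourier transform of the (biased) autocorrelation estimate
Rx = np.correlate(x, x, mode='full') / N     # lags -(N-1) .. (N-1)
S2 = np.abs(np.fft.fft(Rx, 2*N))[::2] * dt   # evaluated on the same frequency bins
print(np.allclose(S1, S2))                   # True: both routes agree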
Chapter 6

Applications of The Fourier Transform

The continuous time Fourier transform is a very important tool that has numer-
ous applications in communication systems, signal processing, control systems,
and many other engineering disciplines. In this chapter, we discuss some of
these applications, including linear filtering, modulation, and sampling.

6.1 Signal Filtering

Filtering is the process by which the essential and useful part of a signal is
separated from undesirable components. The idea of filtering using LTI systems
is based on the convolution property of the Fourier transform. We will analyze
systems called filters, which are designed to have a certain frequency response,
and we will define the term ideal filter. Since frequency response is so important
in the analysis of systems, we discuss it in more detail next.

6.1.1 Frequency Response


Every LTI system has an impulse response h(t) and, through the Fourier trans-
form, also a frequency response H(ω). Since LTI systems are completely defined
by convolution, the convolution property is essential for understanding LTI sys-
tems, as well as for simplifying their analysis. Figure 6.1 depicts the time-domain
and frequency-domain representations of an LTI system. A cascade connection of
two LTI systems can be combined and simplified into a single equivalent system,
as shown in Figure 6.2. The equivalent frequency response of a cascade of LTI
systems is H(ω) = H1(ω)H2(ω). In a parallel connection of LTI systems the

Figure 6.1: LTI system depicted in both the time and frequency domain.


Figure 6.2: Frequency domain representation of cascade connection of Linear


time invariant systems.

equivalent frequency response is H(ω) = H1 (ω) + H2 (ω) as shown in Figure 6.3.

Figure 6.3: Frequency domain representation of parallel connection of Linear


time invariant systems.
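These combination rules are easy to apply numerically; the sketch below (two assumed first-order responses, not taken from the notes) evaluates the cascade and parallel equivalents on a frequency grid:

import numpy as np

w = np.linspace(-50, 50, 2001)            # frequency grid (rad/s)
H1 = 1/(1 + 1j*w)                         # assumed first-order response
H2 = 1/(2 + 1j*w)                         # assumed first-order response
H_cascade = H1 * H2                       # series connection: product
H_parallel = H1 + H2                      # parallel connection: sum
k0 = len(w)//2                            # index of w = 0
print(abs(H_cascade[k0]), abs(H_parallel[k0]))   # 0.5 and 1.5 at w = 0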

Example 6.1  The output of an LTI system is given by
\[ y(t) = \int_0^{\infty} e^{-\tau}\, x(t-\tau)\, d\tau \]
Find the inverse system.

Solution  First recognize that the impulse response of the system is h(t) = e−t u(t). Therefore,
\[ H(\omega) = \frac{1}{1+j\omega} \qquad\text{and}\qquad H^{-1}(\omega) = 1 + j\omega \]
In other words, X(ω) = (1 + jω)Y(ω), or
\[ x(t) = y(t) + \frac{d}{dt}\, y(t) \]  

Example 6.2  Find the frequency response of the LTI system shown in Figure 6.4.

Figure 6.4: Example of cascade/parallel system and delay.

Solution  First, the impulse response of the parallel part is h1(t) = δ(t) − δ(t − T), so its frequency response is
\[ H_1(\omega) = 1 - e^{-j\omega T} \]
and the frequency response of the integrator is
\[ H_2(\omega) = \pi\delta(\omega) + \frac{1}{j\omega} \]
Therefore, the frequency response of the overall system in Figure 6.4 is the
product
\[ H(\omega) = H_1(\omega)\, H_2(\omega) = \left(1 - e^{-j\omega T}\right)\left(\pi\delta(\omega) + \frac{1}{j\omega}\right) = \underbrace{\pi\left(1 - e^{-j\omega T}\right)\delta(\omega)}_{=\,0} + \left(1 - e^{-j\omega T}\right)\frac{1}{j\omega} = \frac{e^{j\omega T/2} - e^{-j\omega T/2}}{j\omega}\, e^{-j\omega T/2} = \frac{\sin(\omega T/2)}{(\omega/2)}\, e^{-j\omega T/2} \]  

6.1.2 Ideal Filters

An ideal filter is one that passes certain frequencies without any change and
stops the rest. The range of frequencies that pass through is called the passband
of the filter, whereas the range of frequencies that do not pass is referred to as
the stopband. In the ideal case, |H(ω)| = 1 in the passband, while |H(ω)| = 0
in the stopband. The most common types of filters are the following:

1. A lowpass filter is one that has its passband in the range 0 < |ω| < ωc,
where ωc is called the cutoff frequency of the lowpass filter, Figure 6.5a.

2. A highpass filter is one that has its stopband in the range 0 < |ω| < ωc
and a passband that extends from ω = ωc to infinity, Figure 6.5b.

3. A bandpass filter has its passband in the range 0 < ω1 < |ω| < ω2 < ∞;
all other frequencies are stopped, Figure 6.5c.

4. A bandstop filter stops frequencies in the range 0 < ω1 < |ω| < ω2 < ∞ and
passes all other frequencies, Figure 6.5d.

Figure 6.5: Frequency responses of the most common types of ideal filters.

The class of filters described previously is referred to as ideal filters. It is
important to note that ideal filters are impossible to construct physically.
Consider the ideal lowpass filter, for instance: its impulse response corresponds to
the inverse Fourier transform of the frequency response shown in Figure 6.5a
and is given by
\[ h(t) = \frac{\omega_c}{\pi}\,\mathrm{sinc}\!\left(\frac{\omega_c t}{\pi}\right) \]
Clearly the impulse response of this ideal filter is not zero for t < 0, as shown
in Figure 6.6. Systems such as this are noncausal and hence not physically
realizable.

Figure 6.6: The impulse response of an ideal low pass filter.
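A short sketch (with an assumed cutoff ωc = 10 rad/s) makes the noncausality concrete; the sampled impulse response below is clearly nonzero for t < 0:

import numpy as np

wc = 10.0                                  # assumed cutoff frequency (rad/s)
t = np.linspace(-3, 3, 1201)
h = (wc/np.pi) * np.sinc(wc*t/np.pi)       # np.sinc(x) = sin(pi x)/(pi x)
print(h[t < 0].max())                      # > 0: the filter responds before t = 0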

Example 6.3  For the system shown in Figure 6.7, often used to generate communication signals, design an ideal filter assuming that a certain application requires y(t) = 3 cos 1200πt.

Figure 6.7: Generation of communication signals

Solution  The signals x1(t) and x2(t) are multiplied together to give
\[ x_3(t) = x_1(t)\, x_2(t) = 10\cos 200\pi t\,\cos 1000\pi t \]
Using Euler's identity,
\[ x_3(t) = 10\cos 200\pi t\left(\frac{e^{j1000\pi t} + e^{-j1000\pi t}}{2}\right) = 5\cos 200\pi t\; e^{j1000\pi t} + 5\cos 200\pi t\; e^{-j1000\pi t} \]
We can use the frequency-shifting property together with the Fourier transform
of cos ω0t to find the frequency spectrum of x3(t) as
\[ X_3(\omega) = 5\pi[\delta(\omega - 200\pi - 1000\pi) + \delta(\omega + 200\pi - 1000\pi)] + 5\pi[\delta(\omega - 200\pi + 1000\pi) + \delta(\omega + 200\pi + 1000\pi)] = 5\pi[\delta(\omega - 1200\pi) + \delta(\omega - 800\pi) + \delta(\omega + 800\pi) + \delta(\omega + 1200\pi)] \]
The frequency spectra of x1(t), x2(t), and x3(t) are shown in Figure 6.8. The
Fourier transform of the required signal y(t) is
\[ Y(\omega) = 3\pi[\delta(\omega - 1200\pi) + \delta(\omega + 1200\pi)] \]
This can be obtained from X3(ω) by a highpass filter whose frequency response
H(ω) is shown in Figure 6.8; the filtering process can be written as
Y(ω) = X3(ω)H(ω), where
\[ H(\omega) = 0.6\left[1 - \mathrm{rect}\!\left(\frac{\omega}{2\omega_c}\right)\right], \qquad 800\pi < \omega_c < 1200\pi \]  

Figure 6.8: The frequency spectrum of 10 cos 200πt cos 1000πt

Example 6.4  Consider the periodic square wave given in Example 5.7 as the input signal to an ideal lowpass filter. Sketch the output signal in the time domain if the cutoff frequency is ωc = 8ω0.

Solution  Recall that in Example 5.7 we showed that the periodic square wave
has the Fourier transform depicted in Figure 6.9, which consists of impulses at
integer multiples of ω0. If this signal is the input to an ideal lowpass filter with
cutoff frequency ωc = 8ω0, then the Fourier transform of the output contains
only the impulses lying within the filter's bandwidth. The frequency response
of the lowpass filter is drawn on top of the Fourier transform X(ω). The output
signal is plotted at the right side of Figure 6.9. 

It is appropriate here to define a word used in the previous example, namely


the term bandwidth, a word that is very commonly used in signal analysis.

6.1.3 Bandwidth
The term bandwidth is applied to both signals and filters. It generally means a
range of frequencies. This could be the range of frequencies present in a signal
or the range of frequencies a filter allows to pass. Usually, only the positive
frequencies are used to describe the range of frequencies. For example, the ideal

Figure 6.9: Example of ideal low pass filter.

low pass filter in the previous example with cutoff frequencies of ±8ω0 is said
to have a bandwidth of 8ω0 , even though the width of the filter is 16ω0 . The
ideal bandpass filter in Figure 6.5c has a bandwidth of ω2 − ω1 which is the
width of the region in positive frequency in which the filter passes a signal.
There are many different kinds of bandwidths, including absolute bandwidth,
3-dB bandwidth or half-power bandwidth and the null-to-null bandwidth or zero
crossing bandwidth.

Absolute Bandwidth

This definition is used in conjunction with band-limited signals. A signal x(t)
is called band-limited if its Fourier transform satisfies X(ω) = 0 for |ω| ≥ ωB.
Furthermore, with this condition satisfied, it is called a baseband signal if X(ω)
is centered at ω = 0. On the other hand, if |X(ω)| = 0 for |ω − ω0| ≥ ωB and
X(ω) is centered at ω0, it is called a bandpass signal; see Figure 6.10.

Figure 6.10: Amplitude spectra for baseband and bandpass signals.

Figure 6.11: Absolute bandwidth of signals.

If x(t) is a baseband signal and |X(ω)| = 0 outside the interval |ω| < ωB, as
shown in Figure 6.11, then the absolute bandwidth is
\[ B = \omega_B \]
But if x(t) is a bandpass signal and |X(ω)| = 0 outside the interval ω1 < ω < ω2,
then
\[ B = \omega_2 - \omega_1 \]

3-dB (Half-Power) Bandwidth

For baseband signals it is defined as the frequency ω1 (Figure 6.12a) such that
\[ |X(\omega_1)| = \frac{1}{\sqrt{2}}\, |X(0)| \]
Note that inside the band 0 < ω < ω1 the magnitude |X(ω)| falls no lower
than 1/√2 of its value at ω = 0. The term 3-dB bandwidth comes from the
relationship
\[ 20\log_{10}\!\left(\frac{1}{\sqrt{2}}\right) = -3\ \text{dB} \]
For bandpass signals (Figure 6.12b), ω1 and ω2 are the frequencies between
which |X(ω)| falls no lower than 1/√2 of its maximum value, and B = ω2 − ω1.
The 3-dB bandwidth is also known as the half-power bandwidth because if the
magnitude of a voltage or current is divided by √2, the power delivered to
a load by that signal is halved. The bandwidth can also be determined using
\[ |X(\omega_1)|^2 = \frac{1}{2}\, |X(0)|^2 \]

Example 6.5  Determine the 3-dB (half-power) bandwidth of the signal x(t) = e−t/T u(t).

Solution  The signal x(t) is a baseband signal and has the Fourier transform
\[ X(\omega) = \frac{1}{1/T + j\omega} \]

Figure 6.12: 3-dB or half power bandwidth.

The power spectrum, shown in Figure 6.13, is given by
\[ |X(\omega)|^2 = \frac{1}{(1/T)^2 + \omega^2} \]
Clearly, |X(0)|² = T², and the 3-dB bandwidth is given by
\[ B = \frac{1}{T} \]  
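A numerical cross-check of this example (a sketch with the assumed value T = 2; the search grid is our choice) locates the half-power frequency directly:

import numpy as np

T = 2.0                                   # assumed time constant
w = np.linspace(0, 10, 100001)
P = 1/((1/T)**2 + w**2)                   # |X(w)|^2
k = np.argmin(np.abs(P - P[0]/2))         # where the power has dropped by half
print(w[k], 1/T)                          # both ~0.5 rad/s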

Null-to-Null (Zero-Crossing) Bandwidth

The null-to-null or zero-crossing bandwidth is shown in Figure 6.14. It is defined
as the distance between the first null in the frequency spectrum above ωm and
the first null in the spectrum below ωm, where ωm is the frequency at which
the spectrum has its maximum magnitude. For baseband signals, the spectrum
maximum is at ω = 0 and the bandwidth is the distance between the first null
and the origin.

Figure 6.13: Half power bandwidth.

Figure 6.14: Null-to-null and first null bandwidth.

6.1.4 Practical Filters

As we saw earlier, ideal filters do not exist in real life, as they are impossible
to build. In practice, we can realize a variety of filter characteristics which can
only approach the ideal characteristics. An ideal filter makes a sudden transition
from the passband to the stopband; there is no transition band. For practical
filters, on the other hand, the transition from the passband to the stopband
(or vice versa) is gradual, and takes place over a finite band of frequencies, as
shown in Figure 6.15.

Example 6.6  Consider the RC circuit shown below. The impulse response of this circuit is given by
\[ h(t) = \frac{1}{RC}\, e^{-t/RC}\, u(t) \]

Figure 6.15: Practical filters.

and the frequency response is
\[ H(\omega) = \frac{1}{1 + j\omega RC} \]
The magnitude spectrum is shown in Figure 6.16. It is clear that the RC circuit,
with the output taken as the voltage across the capacitor, performs as a lowpass
filter. It is common practice to place the transition between the passband and
the stopband at the 3-dB cutoff frequency. Setting |H(ω)| = 1/√2, we obtain
\[ \omega_c = \frac{1}{RC} \]
as shown in Figure 6.16.

Figure 6.16: Magnitude spectrum of the lowpass RC circuit.
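The cutoff of this circuit is easy to confirm numerically (a sketch with assumed component values):

import numpy as np

R, C = 1e3, 1e-6                          # assumed 1 kOhm and 1 uF, so 1/RC = 1000
w = np.logspace(0, 6, 100000)             # 1 to 10^6 rad/s
H = 1/(1 + 1j*w*R*C)
k = np.argmin(np.abs(np.abs(H) - 1/np.sqrt(2)))
print(w[k], 1/(R*C))                      # both ~1000 rad/s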

6.2 Amplitude Modulation


One of the most important applications of the Fourier transform is in the anal-
ysis and design of communication systems. The goal of all communication systems is to convey information from one point to another. Prior to sending the
information signal through the transmission channel, the information signal is
converted to a useful form through what is known as the modulation process. In
amplitude modulation, the amplitude of a sinusoidal signal is constantly being
modified in proportion to a given signal. This has the effect of simply shifting
the spectrum of the given signal up and down by the sinusoid frequency in the
frequency domain.
The use of amplitude modulation may be advantageous whenever a shift in
the frequency components of a given signal is desired. Consider, for example, the
transmission of a human voice through satellite communications. The maximum
voice frequency is 3 kHz; satellite links, on the other hand, operate at much
higher frequencies (3-30 GHz). For this form of transmission to be feasible, we
clearly need to do two things: shift the essential spectral content of the speech
signal to some higher frequency so that it lies inside the assigned frequency range
for satellite transmission, and shift it back to its original frequency band on
reception. The first operation is simply called modulation, and the second we
call demodulation.
We consider a very simple method of modulation called double-sideband,
suppressed-carrier amplitude modulation (DSB/SC-AM). This type of modula-
tion is accomplished by multiplying the information-carrying signal m(t) (known
as the modulating signal) by a sinusoidal signal called the carrier signal, cos ω0t,
where ω0 is the carrier frequency, as shown in Figure 6.17.

Figure 6.17: Signal multiplier.

The waveforms of Figure 6.18 illustrate the amplitude modulation process in


the time domain for a slowly varying m(t). We will now examine the spec-

Figure 6.18: DSB/SC amplitude modulation.



trum of the output signal (modulated signal). As was indicated earlier by the
frequency shifting property of the Fourier transform, we have

\[ y(t) = m(t)\cos\omega_0 t \;\stackrel{\mathcal{F}}{\longleftrightarrow}\; \frac{1}{2}\left[M(\omega-\omega_0) + M(\omega+\omega_0)\right] = Y(\omega) \tag{6.1} \]
Recall that M(ω − ω0) is M(ω) shifted to the right by ω0 and M(ω + ω0) is M(ω)
shifted to the left by ω0. Thus, the process of modulation shifts the spectrum of
the modulating signal to the left and right by ω0 (Figure 6.19). Note also that
if the bandwidth of m(t) is ωB then, as indicated in Figure 6.19, the bandwidth
of the modulated signal is 2ωB. We also observe that the modulated signal
spectrum centered at ω0 is composed of two parts: a portion that lies
above ω0, known as the upper sideband (USB), and a portion that lies below ω0,
known as the lower sideband (LSB); hence the name double sideband (DSB). The
name suppressed carrier (SC) comes from the fact that the DSB/SC spectrum does
not contain a component at the carrier frequency ω0; in other words, there is no
impulse at the carrier frequency in the spectrum of the modulated signal. The
relationship of ωB to ω0 is of interest: Figure 6.19 shows that we require ω0 ≥ ωB
in order to avoid overlap of the spectra centered at ±ω0. If ω0 < ωB, the spectra
overlap and the information of m(t) is lost in the process of modulation, a loss
which makes it impossible to recover m(t) from the modulated signal m(t) cos ω0t.

Figure 6.19: DSB/SC amplitude modulation in the frequency domain.
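The spectral shift of (6.1) can be seen in a few lines (a sketch; the 100 Hz message and 1 kHz carrier are assumptions for illustration):

import numpy as np

fs = 10000.0
t = np.arange(0, 1.0, 1/fs)
m = np.cos(2*np.pi*100*t)                 # assumed message tone at 100 Hz
y = m * np.cos(2*np.pi*1000*t)            # DSB/SC with a 1 kHz carrier
f = np.fft.rfftfreq(len(t), 1/fs)
M, Y = np.abs(np.fft.rfft(m)), np.abs(np.fft.rfft(y))
print(f[np.argmax(M)], f[np.argmax(Y)])   # 100.0 and 900.0 (the 1000 - 100 sideband)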

To be able to extract the information signal m(t) from the modulated signal, the
modulation process must be reversed at the receiving end. This process is called
demodulation. In effect, demodulation shifts the message spectrum back to its
original low-frequency position. This can be done by multiplying the modulated
signal again by cos ω0t (generated by a so-called local oscillator) and then
filtering the result, as shown in Figure 6.20. The local oscillator is tuned to
produce a sinusoid at the same frequency as the carrier; this demodulation
technique is known as synchronous detection. To see how the

Figure 6.20: Demodulation of DSB/SC.

system of Figure 6.20 works, note that z(t) = y(t) cos ω0t, so that
\[ Z(\omega) = \frac{1}{2}\, Y(\omega-\omega_0) + \frac{1}{2}\, Y(\omega+\omega_0) \tag{6.2} \]
Substituting (6.1) for Y(ω) in (6.2) shows that Z(ω), illustrated in Figure 6.21,
contains three copies of M(ω):
\[ Z(\omega) = \frac{1}{2}\left[\frac{1}{2} M(\omega-\omega_0-\omega_0) + \frac{1}{2} M(\omega+\omega_0-\omega_0)\right] + \frac{1}{2}\left[\frac{1}{2} M(\omega-\omega_0+\omega_0) + \frac{1}{2} M(\omega+\omega_0+\omega_0)\right] = \frac{1}{4} M(\omega-2\omega_0) + \frac{1}{2} M(\omega) + \frac{1}{4} M(\omega+2\omega_0) \]
Note that we have obtained the desired baseband spectrum, (1/2)M(ω), in Z(ω),
in addition to unwanted spectra at ±2ω0.

Figure 6.21: Demodulation process.

If the condition ω0 ≥ ωB is satisfied, it will be possible to extract M(ω) with
an ideal lowpass filter, shown in Figure 6.21, of the form
\[ H(\omega) = \begin{cases} G, & |\omega| < \omega_c \\ 0, & \text{otherwise} \end{cases} \]
where the gain of the lowpass filter should be G = 2 and the cutoff frequency
should satisfy ωB < ωc < 2ω0 − ωB.
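End to end, the whole chain can be sketched as follows (illustrative parameters; an ordinary Butterworth filter stands in for the ideal lowpass filter, so the recovery is approximate):

import numpy as np
from scipy.signal import butter, filtfilt

fs = 50000.0
t = np.arange(0, 0.1, 1/fs)
m = np.cos(2*np.pi*100*t)                    # assumed 100 Hz message
y = m * np.cos(2*np.pi*5000*t)               # DSB/SC modulation, 5 kHz carrier
z = y * np.cos(2*np.pi*5000*t)               # synchronous detection
b, a = butter(4, 500/(fs/2))                 # lowpass, 500 Hz cutoff
m_hat = 2 * filtfilt(b, a, z)                # gain G = 2 restores the amplitude
print(np.max(np.abs(m_hat - m)))             # small, edge effects aside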

6.3 Sampling

Sampling a signal is the process of acquiring its values only at discrete points in
time. The main reason we acquire signals in this way is that most signal
processing and analysis today is done using digital computers. A digital computer
requires that all the information it processes be in the form of numbers;
therefore, the samples are acquired and stored as numbers. Since the memory and
mass storage capacity of a computer are finite, it can only handle a finite number
of numbers, so if a digital computer is to be used to analyze a signal, the signal
can only be sampled for a finite time. The questions that arise here are:
To what extent do the samples accurately describe the signal from which they
are taken? How can all the signal information be stored in a finite number of
samples? How much information is lost, if any, by sampling the signal?

6.3.1 The Sampling Theorem

If we are to obtain a set of samples from a continuous function of time x(t), the
most important question is how to sample the signal so as to retain the informa-
tion it carries. The amazingly simple answer is given by the Shannon sampling
theorem, which states that a bandlimited signal x(t) can be reconstructed ex-
actly from its samples provided the sampling rate ωs = 2π/T is at least as large
as 2ωB, twice the highest frequency present in the bandlimited signal x(t). The
minimum rate 2ωB is known as the Nyquist rate.
In practice, sampling is most commonly done with two devices, the sample-
and-hold (S/H) and the analog-to-digital converter (ADC). The input signal x(t)
is sampled at the sampling instants nT to obtain the samples x(nT), where n is an
integer. The sampled signal may be mathematically represented as the product
of the original continuous-time signal and an impulse train, as shown in Figure
6.22. This representation is commonly termed impulse sampling. The result is
a sampled signal xs(t) that consists of impulses spaced every T seconds (the
sampling interval). The validity of the sampling theorem can be demonstrated
using either the modulation property or the frequency-convolution property of

Figure 6.22: The ideal sampling process.



the Fourier transform. As can be seen from Figure 6.22, the sampled signal is
considered to be the product (modulation) of the continuous-time signal x(t)
and the impulse train δT(t); hence this is usually referred to as the impulse
modulation model of the sampling operation. To obtain a frequency-domain
representation of the sampled signal, we begin by deriving an expression for
the Fourier transform of xs(t), representing the periodic impulse train by its
Fourier transform:
\[ \mathcal{F}[\delta_T(t)] = \frac{2\pi}{T}\sum_{n=-\infty}^{\infty}\delta(\omega - n\omega_s) = \omega_s\sum_{n=-\infty}^{\infty}\delta(\omega - n\omega_s) \tag{6.3} \]
where ωs = 2π/T is the sampling frequency in radians/second. The sampling
frequency in hertz is given by fs = 1/T; therefore, ωs = 2πfs. Now,
\[ X_s(\omega) = \frac{1}{2\pi}\, X(\omega) * \left[\omega_s\sum_{n=-\infty}^{\infty}\delta(\omega - n\omega_s)\right] = \frac{1}{T}\sum_{n=-\infty}^{\infty} X(\omega - n\omega_s) \tag{6.4} \]

Figure 6.23a shows a typical bandlimited Fourier transform X(ω).
From (6.4), we see that the effect of sampling x(t) is to replicate the frequency
spectrum X(ω) about the frequencies nωs, n = ±1, ±2, ±3, .... This result
is illustrated in Figure 6.23b. Note that Figure 6.23b was constructed under the
assumption ωs > 2ωB, so that the shifted copies of X(ω) do not overlap. To
recover the original signal x(t), we have to pass xs(t) through a lowpass filter

Figure 6.23: (a) Magnitude spectrum of a bandlimited signal. (b) The frequency
spectrum of a bandlimited signal which has been sampled at ωs > 2ωB.

Figure 6.24: Impulse modulation followed by signal reconstruction.

with frequency response
\[ H(\omega) = \begin{cases} T, & |\omega| \le \omega_B \\ 0, & \text{otherwise} \end{cases} \]
as shown in Figure 6.24. For there to be no overlap between the different
components, it is clear that the sampling rate should satisfy
\[ \omega_s - \omega_B \ge \omega_B \]
as illustrated in Figure 6.23b. Thus, the signal x(t) can be recovered from its
samples only if
\[ \omega_s \ge 2\omega_B \]
This is the sampling theorem that we stated earlier. The maximum time spacing
between samples is
\[ T = \frac{\pi}{\omega_B} \]
If T does not satisfy this condition, the different components of Xs(ω) overlap
and we will not be able to recover x(t) exactly; this is referred to as aliasing.
If x(t) is not bandlimited, there will always be aliasing irrespective of the chosen
sampling rate. Figure 6.25 shows the magnitude spectrum of a bandlimited
signal which has been impulse-sampled at exactly twice its highest frequency. If
the sampling rate were lower than 2ωB, the components of Xs(ω) would overlap
and no filter could recover the original signal directly from the sampled signal.

Figure 6.25: Magnitude spectrum of a bandlimited signal which has been sam-
pled at twice its highest frequency.

Example 6.7  Consider the signal x(t) given by
\[ x(t) = \frac{1}{3\pi} + \frac{1}{3\pi}\cos\!\left(\pi t + \frac{\pi}{2}\right) \]
Demonstrate how sampling and reconstruction are performed.

Solution  The Fourier transform of x(t) is
\[ X(\omega) = \frac{2}{3}\,\delta(\omega) + \frac{j}{3}\,\delta(\omega - \pi) - \frac{j}{3}\,\delta(\omega + \pi) \]
The highest frequency in this signal is π rad/sec, so any sampling rate such that
ωs ≥ 2π will guarantee that we can exactly reconstruct the signal x(t) from its
samples. Figure 6.26a shows X(ω) and Figure 6.26b shows Xs(ω) for ωs = 6π.
The light coloured box in Figure 6.26b is the frequency response of the ideal
lowpass filter H(ω), whose cutoff frequency is 3π rad/sec. The output of the
ideal lowpass filter will be exactly equal to the original signal, as required. 

Figure 6.26: (a) Fourier transform of the signal x(t) (b) Fourier transform of
the sampled signal xs (t).
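A numerical sketch of this example (the truncated reconstruction sum and grid sizes are illustrative choices) samples x(t) at ωs = 6π and rebuilds it by sinc interpolation, which is what the ideal lowpass filter performs:

import numpy as np

ws = 6*np.pi                                  # sampling rate from the example
T = 2*np.pi/ws                                # sampling interval (1/3 s)
n = np.arange(-60, 61)
xn = 1/(3*np.pi) + np.cos(np.pi*n*T + np.pi/2)/(3*np.pi)   # samples x(nT)
t = np.linspace(-5, 5, 1001)
# Ideal reconstruction: x(t) = sum_n x(nT) sinc((t - nT)/T)
x_rec = np.array([np.sum(xn * np.sinc((ti - n*T)/T)) for ti in t])
x_true = 1/(3*np.pi) + np.cos(np.pi*t + np.pi/2)/(3*np.pi)
print(np.max(np.abs(x_rec - x_true)))         # small over the central interval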
