
48540

Signals and Systems
Lecture Notes
2014

[Cover figures: a block diagram of the maze rover feedback control system — reference R(s), error E(s), controller G(s), signals F(s), M(s) and V(s), integrator 1/s, output C(s), and feedback controller H(s) — and an op-amp circuit with amplifiers A1, A2 and A3, resistors R0–R5 and RF, capacitors C1 and C5, input Vi and output Vo.]

PMcL

Contents
LECTURE 1A SIGNALS
OVERVIEW ..........................................................................................................1A.1
SIGNAL OPERATIONS ...........................................................................................1A.2
CONTINUOUS AND DISCRETE SIGNALS ................................................................1A.3
PERIODIC AND APERIODIC SIGNALS ....................................................................1A.4
DETERMINISTIC AND RANDOM SIGNALS .............................................................1A.6
ENERGY AND POWER SIGNALS ............................................................................1A.7
COMMON CONTINUOUS-TIME SIGNALS.............................................................1A.10
THE CONTINUOUS-TIME STEP FUNCTION .....................................................1A.10
THE RECTANGLE FUNCTION .........................................................................1A.13
THE STRAIGHT LINE .....................................................................................1A.15
THE SINC FUNCTION .....................................................................................1A.20
THE IMPULSE FUNCTION ...............................................................................1A.22
SINUSOIDS .........................................................................................................1A.26
WHY SINUSOIDS? ..............................................................................................1A.26
REPRESENTATION OF SINUSOIDS ..................................................................1A.28
RESOLUTION OF AN ARBITRARY SINUSOID INTO ORTHOGONAL FUNCTIONS 1A.30
REPRESENTATION OF A SINUSOID BY A COMPLEX NUMBER .........................1A.31
FORMALISATION OF THE RELATIONSHIP BETWEEN PHASOR AND SINUSOID ..1A.33
EULER'S COMPLEX EXPONENTIAL RELATIONSHIPS FOR COSINE AND SINE ..1A.34
A NEW DEFINITION OF THE PHASOR .............................................................1A.36
GRAPHICAL ILLUSTRATION OF THE RELATIONSHIP BETWEEN THE TWO TYPES OF
PHASOR AND THEIR CORRESPONDING SINUSOID ..........................................1A.37
NEGATIVE FREQUENCY ................................................................................1A.38
COMMON DISCRETE-TIME SIGNALS ..................................................................1A.39
THE DISCRETE-TIME STEP FUNCTION ..........................................................1A.40
THE UNIT-PULSE FUNCTION .........................................................................1A.41
SUMMARY .........................................................................................................1A.42
QUIZ ..................................................................................................................1A.43
EXERCISES ........................................................................................................1A.44
LEONHARD EULER (1707-1783) ........................................................................1A.47

Signals and Systems 2014

LECTURE 1B SYSTEMS
LINEAR DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS ................. 1B.1
INITIAL CONDITIONS ...................................................................................... 1B.1
FIRST-ORDER CASE ....................................................................................... 1B.2
SYSTEM MODELLING .......................................................................................... 1B.3
ELECTRICAL CIRCUITS ................................................................................... 1B.3
MECHANICAL SYSTEMS ................................................................................. 1B.3
DISCRETE-TIME SYSTEMS ................................................................................... 1B.4
LINEAR DIFFERENCE EQUATIONS WITH CONSTANT COEFFICIENTS .................... 1B.5
SOLUTION BY RECURSION .............................................................................. 1B.5
COMPLETE SOLUTION .................................................................................... 1B.5
FIRST-ORDER CASE ....................................................................................... 1B.5
DISCRETE-TIME BLOCK DIAGRAMS.................................................................... 1B.6
DISCRETIZATION IN TIME OF DIFFERENTIAL EQUATIONS ................................... 1B.7
FIRST-ORDER CASE ....................................................................................... 1B.7
SECOND-ORDER CASE .................................................................................... 1B.8
CONVOLUTION IN LINEAR TIME-INVARIANT DISCRETE-TIME SYSTEMS.............. 1B.9
FIRST-ORDER SYSTEM ................................................................................... 1B.9
UNIT-PULSE RESPONSE OF A FIRST-ORDER SYSTEM ................................... 1B.10
GENERAL SYSTEM ....................................................................................... 1B.10
SYSTEM MEMORY ........................................................................................ 1B.14
SYSTEM STABILITY ...................................................................................... 1B.15
CONVOLUTION IN LINEAR TIME-INVARIANT CONTINUOUS-TIME SYSTEMS ...... 1B.15
GRAPHICAL DESCRIPTION OF CONVOLUTION ................................................... 1B.18
PROPERTIES OF CONVOLUTION ......................................................................... 1B.23
NUMERICAL CONVOLUTION ............................................................................. 1B.23
CONVOLUTION WITH AN IMPULSE..................................................................... 1B.26
SUMMARY ........................................................................................................ 1B.28
EXERCISES ........................................................................................................ 1B.29
GUSTAV ROBERT KIRCHHOFF (1824-1887) ...................................................... 1B.37

LECTURE 2A FOURIER SERIES, SPECTRA
ORTHOGONALITY ................................................................................................2A.1
ORTHOGONALITY IN MATHEMATICS ..............................................................2A.1
THE INNER PRODUCT ......................................................................................2A.4
ORTHOGONALITY IN POWER SIGNALS ............................................................2A.6
ORTHOGONALITY IN ENERGY SIGNALS ..........................................................2A.7
THE TRIGONOMETRIC FOURIER SERIES ...............................................................2A.8
THE COMPACT TRIGONOMETRIC FOURIER SERIES ............................................2A.14
THE SPECTRUM .................................................................................................2A.18
THE COMPLEX EXPONENTIAL FOURIER SERIES .................................................2A.22
HOW TO USE MATLAB TO CHECK FOURIER SERIES COEFFICIENTS...............2A.29
SYMMETRY IN THE TIME DOMAIN .....................................................................2A.31
EVEN SYMMETRY ..............................................................................................2A.31
ODD SYMMETRY ...............................................................................................2A.35
HALF-WAVE SYMMETRY ..................................................................................2A.39
POWER ..............................................................................................................2A.42
FILTERS .............................................................................................................2A.45
RELATIONSHIPS BETWEEN THE THREE FOURIER SERIES REPRESENTATIONS.....2A.49
SUMMARY .........................................................................................................2A.50
QUIZ ..................................................................................................................2A.51
EXERCISES ........................................................................................................2A.52
JOSEPH FOURIER (1768-1830) ..........................................................................2A.55

LECTURE 2B THE FOURIER TRANSFORM


THE FOURIER TRANSFORM .................................................................................. 2B.1
CONTINUOUS SPECTRA........................................................................................ 2B.7
EXISTENCE OF THE FOURIER TRANSFORM ........................................................... 2B.9
FINDING FOURIER TRANSFORMS ....................................................................... 2B.10
SYMMETRY BETWEEN THE TIME-DOMAIN AND FREQUENCY-DOMAIN .............. 2B.13
TIME SHIFTING .................................................................................................. 2B.18
FREQUENCY SHIFTING ....................................................................................... 2B.19
THE FOURIER TRANSFORM OF SINUSOIDS ......................................................... 2B.20
RELATIONSHIP BETWEEN THE FOURIER SERIES AND FOURIER TRANSFORM ...... 2B.21
THE FOURIER TRANSFORM OF A UNIFORM TRAIN OF IMPULSES ........................ 2B.23
STANDARD FOURIER TRANSFORMS ................................................................... 2B.24
FOURIER TRANSFORM PROPERTIES ................................................................... 2B.25
SUMMARY ......................................................................................................... 2B.26
EXERCISES ........................................................................................................ 2B.27
WILLIAM THOMSON (LORD KELVIN) (1824-1907)............................................ 2B.32
THE AGE OF THE EARTH ............................................................................... 2B.34
THE TRANSATLANTIC CABLE ....................................................................... 2B.37
OTHER ACHIEVEMENTS ................................................................................ 2B.40

LECTURE 3A FILTERING AND SAMPLING
INTRODUCTION ................................................................................................... 3A.1
RESPONSE TO A SINUSOIDAL INPUT .................................................................... 3A.1
RESPONSE TO AN ARBITRARY INPUT .................................................................. 3A.7
PERIODIC INPUTS ........................................................................................... 3A.7
APERIODIC INPUTS ......................................................................................... 3A.8
IDEAL FILTERS .................................................................................................. 3A.10
PHASE RESPONSE OF AN IDEAL FILTER ........................................................ 3A.11
WHAT DOES A FILTER DO TO A SIGNAL? ........................................................... 3A.16
SAMPLING......................................................................................................... 3A.19
SAMPLING AND PERIODICITY .......................................................................... 3A.22
RECONSTRUCTION ............................................................................................ 3A.23
ALIASING .......................................................................................................... 3A.25
PRACTICAL SAMPLING AND RECONSTRUCTION ................................................ 3A.27
SUMMARY OF THE SAMPLING AND RECONSTRUCTION PROCESS ....................... 3A.28
FINDING THE FOURIER SERIES OF A PERIODIC FUNCTION FROM THE FOURIER
TRANSFORM OF A SINGLE PERIOD................................................................ 3A.32
WINDOWING IN THE TIME DOMAIN .................................................................. 3A.32
PRACTICAL MULTIPLICATION AND CONVOLUTION ........................................... 3A.37
SUMMARY ........................................................................................................ 3A.39
QUIZ ................................................................................................................. 3A.41
EXERCISES ........................................................................................................ 3A.42
LECTURE 3B AMPLITUDE MODULATION
INTRODUCTION ................................................................................................... 3B.1
THE COMMUNICATION CHANNEL .................................................................. 3B.2
ANALOG AND DIGITAL MESSAGES................................................................. 3B.3
BASEBAND AND CARRIER COMMUNICATION ................................................. 3B.3
MODULATION ..................................................................................................... 3B.3
EASE OF RADIATION ...................................................................................... 3B.5
SIMULTANEOUS TRANSMISSION OF SEVERAL SIGNALS .................................. 3B.5
DOUBLE-SIDEBAND, SUPPRESSED-CARRIER (DSB-SC) MODULATION .............. 3B.6
DEMODULATION ............................................................................................ 3B.9
SUMMARY OF DSB-SC MODULATION AND DEMODULATION ...................... 3B.11
AMPLITUDE MODULATION (AM) ..................................................................... 3B.12
ENVELOPE DETECTION ................................................................................ 3B.13
SINGLE SIDEBAND (SSB) MODULATION ........................................................... 3B.14
QUADRATURE AMPLITUDE MODULATION (QAM) ........................................... 3B.15
COHERENT DEMODULATION ........................................................................ 3B.17
SUMMARY ........................................................................................................ 3B.18
EXERCISES ........................................................................................................ 3B.19

LECTURE 4A THE LAPLACE TRANSFORM
THE LAPLACE TRANSFORM .................................................................................4A.1
REGION OF CONVERGENCE (ROC) .................................................................4A.4
FINDING A FOURIER TRANSFORM USING A LAPLACE TRANSFORM ......................4A.6
FINDING LAPLACE TRANSFORMS ........................................................................4A.9
DIFFERENTIATION PROPERTY .......................................................................4A.11
STANDARD LAPLACE TRANSFORMS ..................................................................4A.12
LAPLACE TRANSFORM PROPERTIES...................................................................4A.13
EVALUATION OF THE INVERSE LAPLACE TRANSFORM ......................................4A.14
RATIONAL LAPLACE TRANSFORMS ..............................................................4A.14
TRANSFORMS OF DIFFERENTIAL EQUATIONS ....................................................4A.20
THE SYSTEM TRANSFER FUNCTION ...................................................................4A.23
BLOCK DIAGRAMS ............................................................................................4A.24
NOTATION ....................................................................................................4A.25
CASCADING BLOCKS .........................................................................................4A.26
STANDARD FORM OF A FEEDBACK CONTROL SYSTEM ......................................4A.28
BLOCK DIAGRAM TRANSFORMATIONS ..............................................................4A.30
SUMMARY .........................................................................................................4A.34
EXERCISES ........................................................................................................4A.35
PIERRE SIMON DE LAPLACE (1749-1827) ..........................................................4A.40
LECTURE 4B TRANSFER FUNCTIONS
OVERVIEW ..........................................................................................................4B.1
STABILITY ...........................................................................................................4B.1
UNIT-STEP RESPONSE .........................................................................................4B.2
FIRST-ORDER SYSTEMS ..................................................................................4B.4
SECOND-ORDER SYSTEMS ..............................................................................4B.5
DISTINCT REAL POLES ( ζ > 1 ) OVERDAMPED ...........................................4B.6
REPEATED REAL POLES ( ζ = 1 ) CRITICALLY DAMPED ..............................4B.7
COMPLEX CONJUGATE POLES ( 0 < ζ < 1 ) UNDERDAMPED ........................4B.8
SECOND-ORDER POLE LOCATIONS .................................................................4B.9
SINUSOIDAL RESPONSE .....................................................................................4B.10
ARBITRARY RESPONSE ......................................................................................4B.11
SUMMARY .........................................................................................................4B.12
EXERCISES ........................................................................................................4B.13
OLIVER HEAVISIDE (1850-1925).......................................................................4B.15

LECTURE 5A FREQUENCY RESPONSE
OVERVIEW .......................................................................................................... 5A.1
THE FREQUENCY RESPONSE FUNCTION .............................................................. 5A.1
DETERMINING THE FREQUENCY RESPONSE FROM A TRANSFER FUNCTION ......... 5A.2
MAGNITUDE RESPONSES .................................................................................... 5A.3
PHASE RESPONSES .............................................................................................. 5A.7
FREQUENCY RESPONSE OF A LOWPASS SECOND-ORDER SYSTEM ...................... 5A.9
VISUALIZATION OF THE FREQUENCY RESPONSE FROM A POLE-ZERO PLOT ...... 5A.11
BODE PLOTS ..................................................................................................... 5A.13
APPROXIMATING BODE PLOTS USING TRANSFER FUNCTION FACTORS ............. 5A.14
TRANSFER FUNCTION SYNTHESIS ..................................................................... 5A.15
DIGITAL FILTERS .............................................................................................. 5A.19
SUMMARY ........................................................................................................ 5A.20
EXERCISES ........................................................................................................ 5A.21
LECTURE 5B TIME-DOMAIN RESPONSE
OVERVIEW .......................................................................................................... 5B.1
STEADY-STATE ERROR ....................................................................................... 5B.1
TRANSIENT RESPONSE ........................................................................................ 5B.3
SECOND-ORDER STEP RESPONSE ........................................................................ 5B.6
SETTLING TIME................................................................................................. 5B.10
PEAK TIME ....................................................................................................... 5B.17
PERCENT OVERSHOOT ...................................................................................... 5B.17
RISE TIME AND DELAY TIME ............................................................................ 5B.17
SUMMARY ........................................................................................................ 5B.18
EXERCISES ........................................................................................................ 5B.19
LECTURE 6A EFFECTS OF FEEDBACK
OVERVIEW .......................................................................................................... 6A.1
TRANSIENT RESPONSE ........................................................................................ 6A.2
CLOSED-LOOP CONTROL .................................................................................... 6A.4
PROPORTIONAL CONTROL (P CONTROLLER) .................................................. 6A.5
INTEGRAL CONTROL (I CONTROLLER) ........................................................... 6A.6
PROPORTIONAL PLUS INTEGRAL CONTROL (PI CONTROLLER) ...................... 6A.7
PROPORTIONAL, INTEGRAL, DERIVATIVE CONTROL (PID CONTROLLER) ...... 6A.9
DISTURBANCE REJECTION .................................................................................. 6A.9
SENSITIVITY ..................................................................................................... 6A.11
SYSTEM SENSITIVITY ................................................................................... 6A.11
SUMMARY ........................................................................................................ 6A.13
EXERCISES ........................................................................................................ 6A.14
LECTURE 6B REVISION

LECTURE 7A THE Z-TRANSFORM
OVERVIEW ..........................................................................................................7A.1
THE Z-TRANSFORM .............................................................................................7A.1
MAPPING BETWEEN S-DOMAIN AND Z-DOMAIN .............................................7A.4
MAPPING THE S-PLANE IMAGINARY AXIS.......................................................7A.5
ALIASING........................................................................................................7A.6
FINDING Z-TRANSFORMS .....................................................................................7A.8
RIGHT SHIFT (DELAY) PROPERTY .................................................................7A.12
STANDARD Z-TRANSFORMS ...............................................................................7A.13
Z-TRANSFORM PROPERTIES ...............................................................................7A.14
EVALUATION OF INVERSE Z-TRANSFORMS ........................................................7A.15
TRANSFORMS OF DIFFERENCE EQUATIONS .......................................................7A.17
THE SYSTEM TRANSFER FUNCTION ...................................................................7A.19
STABILITY ....................................................................................................7A.21
TRANSFER FUNCTION INTERCONNECTIONS ..................................................7A.22
SUMMARY .........................................................................................................7A.24
EXERCISES ........................................................................................................7A.25
LECTURE 7B DISCRETIZATION
OVERVIEW .......................................................................................................... 7B.1
SIGNAL DISCRETIZATION .................................................................................... 7B.1
SIGNAL RECONSTRUCTION .................................................................................. 7B.2
HOLD OPERATION .......................................................................................... 7B.2
SYSTEM DISCRETIZATION ................................................................................... 7B.4
FREQUENCY RESPONSE ....................................................................................... 7B.7
RESPONSE MATCHING ....................................................................................... 7B.10
SUMMARY ......................................................................................................... 7B.13
EXERCISES ........................................................................................................ 7B.14
LECTURE 8A SYSTEM DESIGN
OVERVIEW ..........................................................................................................8A.1
DESIGN CRITERIA FOR CONTINUOUS-TIME SYSTEMS ..........................................8A.2
PERCENT OVERSHOOT ....................................................................................8A.2
PEAK TIME .....................................................................................................8A.3
SETTLING TIME...............................................................................................8A.4
COMBINED SPECIFICATIONS ...........................................................................8A.5
DESIGN CRITERIA FOR DISCRETE-TIME SYSTEMS ...............................................8A.9
MAPPING OF A POINT FROM THE S-PLANE TO THE Z-PLANE ...........................8A.10
PERCENT OVERSHOOT ..................................................................................8A.11
PEAK TIME ...................................................................................................8A.12
SETTLING TIME.............................................................................................8A.13
COMBINED SPECIFICATIONS .........................................................................8A.14
SUMMARY .........................................................................................................8A.15

LECTURE 8B ROOT LOCUS
OVERVIEW .......................................................................................................... 8B.1
ROOT LOCUS ...................................................................................................... 8B.1
ROOT LOCUS RULES ........................................................................................... 8B.5
1. NUMBER OF BRANCHES ............................................................................. 8B.5
2. LOCUS END POINTS .................................................................................... 8B.5
3. REAL AXIS SYMMETRY .............................................................................. 8B.5
4. REAL AXIS SECTIONS ................................................................................. 8B.5
5. ASYMPTOTE ANGLES ................................................................................. 8B.6
6. ASYMPTOTIC INTERCEPT (CENTROID) ........................................................ 8B.6
7. REAL AXIS BREAKAWAY AND BREAK-IN POINTS ...................................... 8B.6
8. IMAGINARY AXIS CROSSING POINTS .......................................................... 8B.9
9. EFFECT OF POLES AND ZEROS .................................................................... 8B.9
10. USE A COMPUTER ..................................................................................... 8B.9
MATLAB'S RLTOOL .................................................................................... 8B.14
ROOT LOCI OF DISCRETE-TIME SYSTEMS ......................................................... 8B.16
TIME RESPONSE OF DISCRETE-TIME SYSTEMS ................................................. 8B.17
SUMMARY ........................................................................................................ 8B.19
EXERCISES ........................................................................................................ 8B.20
JAMES CLERK MAXWELL (1831-1879)............................................................. 8B.24
LECTURE 9A STATE-VARIABLES
OVERVIEW .......................................................................................................... 9A.1
STATE REPRESENTATION .................................................................................... 9A.1
STATES .......................................................................................................... 9A.2
OUTPUT ......................................................................................................... 9A.4
MULTIPLE INPUT-MULTIPLE OUTPUT SYSTEMS............................................. 9A.4
SOLUTION OF THE STATE EQUATIONS ................................................................. 9A.5
TRANSITION MATRIX .......................................................................................... 9A.6
TRANSFER FUNCTION ....................................................................................... 9A.10
IMPULSE RESPONSE .......................................................................................... 9A.10
LINEAR STATE-VARIABLE FEEDBACK .............................................................. 9A.12
SUMMARY ........................................................................................................ 9A.18
EXERCISES ........................................................................................................ 9A.19
LECTURE 9B STATE-VARIABLES 2
OVERVIEW .......................................................................................................... 9B.1
NORMAL FORM ................................................................................................... 9B.1
SIMILARITY TRANSFORM .................................................................................... 9B.4
SOLUTION OF THE STATE EQUATIONS FOR THE ZIR ........................................... 9B.6
POLES AND REPEATED EIGENVALUES............................................................... 9B.11
POLES .......................................................................................................... 9B.11
REPEATED EIGENVALUES ............................................................................ 9B.11
DISCRETE-TIME STATE-VARIABLES.................................................................. 9B.12
DISCRETE-TIME RESPONSE ............................................................................... 9B.16
DISCRETE-TIME TRANSFER FUNCTION .............................................................. 9B.17
SUMMARY ........................................................................................................ 9B.23
EXERCISES ........................................................................................................ 9B.24

APPENDIX A - THE FAST FOURIER TRANSFORM
OVERVIEW ............................................................................................................ F.1
THE DISCRETE-TIME FOURIER TRANSFORM (DTFT) ............................................ F.2
THE DISCRETE FOURIER TRANSFORM (DFT) ........................................................ F.4
THE FAST FOURIER TRANSFORM (FFT) ................................................................ F.8
CREATING FFTS .................................................................................................... F.9
INTERPRETING FFTS ........................................................................................... F.11
CASE 1 IDEALLY SAMPLED ONE-OFF WAVEFORM ......................................... F.11
CASE 2 IDEALLY SAMPLED PERIODIC WAVEFORM ........................................ F.12
CASE 3 CONTINUOUS TIME-LIMITED WAVEFORM ......................................... F.13
CASE 4 CONTINUOUS PERIODIC WAVEFORM ................................................ F.14
APPENDIX B - THE PHASE-LOCKED LOOP
OVERVIEW ............................................................................................................ P.1
DISTORTION IN SYNCHRONOUS AM DEMODULATION ...................................... P.1
CARRIER REGENERATION ................................................................................. P.2
THE PHASE-LOCKED LOOP (PLL) ......................................................................... P.3
VOLTAGE CONTROLLED OSCILLATOR (VCO) .................................................. P.4
PHASE DETECTOR ............................................................................................. P.7
PLL MODEL .......................................................................................................... P.9
LINEAR PLL MODEL ........................................................................................ P.9
LOOP FILTER .................................................................................................. P.10

FFT - QUICK REFERENCE GUIDE


MATLAB - QUICK REFERENCE GUIDE
MATRICES - QUICK REFERENCE GUIDE
ANSWERS

Lecture 1A Signals
Overview. Signal operations. Periodic & aperiodic signals. Deterministic &
random signals. Energy & power signals. Common signals. Sinusoids.

Overview
Electrical engineers should never forget the big picture.
Every day we take for granted the power that comes out of the wall, or the light
that instantly illuminates the darkness at the flick of a switch. We take for
granted the fact that electrical machines are at the heart of every manufacturing
industry. There has been no bigger benefit to humankind than the supply of
electricity to residential, commercial and industrial sites. Behind this magic
is a large infrastructure of generators, transmission lines, transformers,
protection relays, motors and motor drives.
Why we study
systems

We also take for granted the automation of once hazardous or laborious tasks,
we take for granted the ability of electronics to control something as
complicated as a jet aircraft, and we seem not to marvel at the idea of your
car's engine having just the right amount of fuel injected into the cylinder with
just the right amount of air, with a spark igniting the mixture at just the right
time to produce the maximum power and the least amount of noxious gases as
you tell it to accelerate up a hill when the engine is cold!
We forget that we are now living in an age where we can communicate with
anyone (and almost anything), anywhere, at anytime. We have a point-to-point
telephone system, mobile phones, the Internet, radio and TV. We have never
lived in an age so information rich.
Electrical engineers are engaged in the business of designing, improving,
extending, maintaining and operating this amazing array of systems.
You are in the business of becoming an engineer.

One thing that engineers need to do well is to break down a seemingly complex
system into smaller, easier parts. We therefore need a way of describing these
systems mathematically, and a way of describing the inputs and outputs of
these systems - signals.

Signal Operations
There really aren't many things that you can do to signals. Take a simple FM
modulator:
Example of signal
operations
(Block diagram: the left (L) and right (R) audio channels pass through
pre-emphasisers, are combined into L+R and L-R signals, the L-R signal is
multiplied by cos(2πf_c t), and the sum L+R+(L-R)cos(2πf_c t) drives the FM
modulator, whose output goes to the antenna.)
Figure 1A.1
Some of the signals in this system come from natural sources, such as music
or speech, some come from artificial sources, such as a sinusoidal oscillator.
Linear system
operations

We can multiply two signals together, we can add two signals together, we can
amplify, attenuate and filter. We normally treat all the operations as linear,
although in practice some nonlinearity always arises.
We seek a way to analyse, synthesise and process signals that is
mathematically rigorous, yet simple to picture. It turns out that Fourier analysis
of signals and systems is one suitable method to achieve this goal, and the
Laplace Transform is even more powerful. But first, let's characterise
mathematically and pictorially some of the more common signal types.

Continuous and Discrete Signals
A continuous signal can be broadly defined as a quantity or measurement that
varies continuously in relation to another variable, usually time. We say that
the signal is a function of the independent variable, and it is usually described
mathematically as a function with the argument in parentheses, e.g. g(t). The
parentheses indicate that t is a real number.
Common examples of continuous-time signals include temperature, voltage,
audio output from a speaker, and video signals displayed on a TV. An example
of a graph of a continuous-time signal is shown below:

g(t )

Figure 1A.2
A discrete signal is one which exists only at specific instants of the
independent variable, usually time. A discrete signal is usually described
mathematically as a function with the argument in brackets, e.g. g[n]. The
brackets indicate that n is an integer.
Common examples of discrete-time signals include your bank balance,
monthly sales of a corporation, and the data read from a CD. An example of a
graph of a discrete-time signal is shown below:

g [n]

Figure 1A.3

Periodic and Aperiodic Signals


A periodic signal g(t) is defined by the property:

Periodic function
defined

g(t) = g(t + T0),    T0 > 0

(1A.1)

The smallest value of T0 that satisfies Eq. (1A.1) is called the period. The
fundamental frequency of the signal is defined as:
Fundamental
frequency defined

f0 = 1/T0

(1A.2)

A periodic signal remains unchanged by a positive or negative shift of any
integral multiple of T0. This means a periodic signal must begin at t = -∞ and
go on forever until t = ∞. An example of a periodic signal is shown below:
A periodic signal
g(t )

t
T0

Figure 1A.4
We can also have periodic discrete-time signals, in which case:

g[n] = g[n + T0],    T0 > 0

(1A.3)

Aperiodic signals
defined

An aperiodic signal is one for which Eq. (1A.1) or Eq. (1A.3) does not hold.
Any finite duration signal is aperiodic.

Example
Find the period and fundamental frequency (if they exist) of the following
signal:
g(t) = 3cos(2πt) + 4sin(4πt)

(1A.4)

We can graph this function to easily determine its period:

g(t )

-2

-1

t, (s)

T0 = 1

Figure 1A.5
From the graph we find that T0 = 1 s and f0 = 1 Hz. It is difficult to determine
the period mathematically (in this case) until we look at the signal's spectrum
(Lecture 2A).
If we add two periodic signals, then the result may or may not be periodic. The
result will only be periodic if an integral number of periods of the first signal
coincides with an integral number of periods of the second signal:

T0 = qT1 = pT2,    where p/q is rational

(1A.5)

In Eq. (1A.5), the integers p and q must have no common factors.
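Eq. (1A.5) is easy to check numerically. The following sketch is a Python illustration of my own (the function name `common_period` is not from the notes); it uses exact rational arithmetic so that p/q is automatically reduced to lowest terms:

```python
from fractions import Fraction

def common_period(T1, T2):
    # Eq. (1A.5): T0 = q*T1 = p*T2, where T1/T2 = p/q in lowest terms.
    # Fraction reduces p/q automatically, so p and q share no common factors.
    ratio = Fraction(T1) / Fraction(T2)
    p, q = ratio.numerator, ratio.denominator
    return q * Fraction(T1)

# The earlier example: 3cos(2*pi*t) has period 1, 4sin(4*pi*t) has period 1/2
print(common_period(Fraction(1), Fraction(1, 2)))  # -> 1, so f0 = 1 Hz
```

For irrational period ratios no common period exists, which is why the rationality condition in Eq. (1A.5) is essential.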


We know that a sinusoid is a periodic signal, and we shall soon see that any
signal composed of a sum of sinusoids with frequencies that are integral
multiples of a fundamental frequency is also periodic.

Deterministic and Random Signals
A deterministic signal is one in which the past, present and future values are
Deterministic signals
defined

completely specified. For example, g(t) = cos(2πt) and g(t) = e^{-t}u(t) are
obviously signals specified completely for all time.
Random or stochastic signals cannot be specified at precisely one instant in

Random signals
defined

time. This does not necessarily mean that any particular random signal is
unknown - on the contrary they can be deterministic. For example, consider
some outputs of a binary signal generator over 8 bits:

g1(t)
t
g2(t)
t
g3(t)
t
g4(t)
t

Figure 1A.6

Each of the possible 2^8 = 256 waveforms is deterministic - the randomness of
this situation is associated not with the waveform but with the uncertainty as to
which waveform will occur. This is completely analogous to the situation of
tossing a coin. We know the outcome will be a head or a tail - the uncertainty is
the occurrence of a particular outcome in a given trial.
Random signals are
information bearing
signals

Random signals are most often the information bearing signals we are used
to - voice signals, television signals, digital data (computer files), etc. Electrical
noise is also a random signal.

Energy and Power Signals
For electrical engineers, the signals we wish to describe will be predominantly
voltage or current. Accordingly, the instantaneous power developed by these
signals will be either:

p(t) = v²(t)/R

(1A.6)

or:

p(t) = R i²(t)

(1A.7)

In signal analysis it is customary to normalise these equations by setting R = 1.
With a signal g(t), the formula for instantaneous power in both cases becomes:

p(t) = g²(t)

(1A.8)

The dissipated energy, or the total energy of the signal, is then given by:

E = ∫_{-∞}^{∞} g²(t) dt

(1A.9)

The total energy of a
signal

A signal is classified as an energy signal if and only if the total energy of the
signal is finite:

0 < E < ∞

(1A.10)

is finite for an
energy signal
The average power of a signal is correspondingly defined as:

The average power
of a signal

P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} g²(t) dt

(1A.11)

If the signal is periodic with period T0, we then have the special case:

and the average
power of a periodic
signal

P = (1/T0) ∫_{-T0/2}^{T0/2} g²(t) dt

(1A.12)

Example

A sinusoid is a periodic signal and therefore has a finite power. If a sinusoid is
given by g(t) = A cos(2πt/T0 + θ), then what is the average power?

The easiest way to find the average power is by performing the integration in
Eq. (1A.12) graphically. For the arbitrary sinusoid given, we can graph the
integrand g²(t) = A² cos²(2πt/T0 + θ) as:

(g²(t) oscillates between 0 and A², with equal areas above and below the
level A²/2, over a span of T0.)

Figure 1A.7

Note that in drawing the graph we don't really need to know the identity
cos²θ = (1 + cos 2θ)/2 - all we need to know is that if we start off with a
sinusoid uniformly oscillating between -A and A, then after squaring we
obtain a sinusoid that oscillates (at twice the frequency) between A² and 0. We
can also see that the average value of the resulting waveform is A²/2, because
there are equal areas above and below this level. Therefore, if we integrate (i.e.
find the area beneath the curve) over an interval spanning T0, we must have an
area equal to the average value times the span, i.e. A²T0/2. (This is the Mean
Value Theorem for Integrals). So the average power is this area divided by the
period:

The power of any
arbitrary sinusoid of
amplitude A

P = A²/2

(1A.13)

This is a surprising result! The power of any sinusoid, no matter what its
frequency or phase, is dependent only on its amplitude.
Confirm the above result by performing the integration algebraically.
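As a complement to the requested algebraic confirmation, the integration in Eq. (1A.12) can be done numerically. This is an illustrative sketch of my own (the values of A, T0 and the phase are arbitrary choices):

```python
import numpy as np

A, T0, theta = 3.0, 0.25, 0.7           # arbitrary amplitude, period and phase
t = np.linspace(-T0/2, T0/2, 100001)
g = A * np.cos(2 * np.pi * t / T0 + theta)

# Eq. (1A.12) by the rectangle rule (drop the duplicated endpoint of the period)
P = np.mean(g[:-1] ** 2)
print(P, A**2 / 2)                      # both ~4.5, independent of theta
```

Changing `theta` or `T0` leaves the computed power unchanged, confirming that P depends only on the amplitude.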
A signal is classified as a power signal if and only if the average power of the
signal is finite and non-zero:

0 < P < ∞

(1A.14)

is finite and non-zero
for a power signal

We observe that if E, the energy of g(t), is finite, then its power P is zero, and
if P is finite, then E is infinite. It is obvious that a signal may be classified as
one or the other, but not both. On the other hand, there are some signals, for
example:

g(t) = e^{-at}

(1A.15)

that cannot be classified as either energy or power signals, because both E and
P are infinite.
It is interesting to note that periodic signals and most random signals are power
signals, and signals that are both deterministic and aperiodic are energy signals.
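These classifications can be checked by approximating the defining integrals over a truncated time axis. A sketch of my own (the truncation limits are arbitrary):

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 50.0, dt)             # [0, 50) approximates [0, inf)

# Energy signal: g(t) = e^{-t} u(t); E = integral of e^{-2t} = 1/2 (finite)
E = np.sum(np.exp(-t) ** 2) * dt
print(E)                                  # ~0.5

# Power signal: a unit-amplitude square wave has P = 1 but infinite energy
T = 100.0
tp = np.arange(-T/2, T/2, dt)
sq = np.sign(np.cos(2 * np.pi * tp))
P = np.sum(sq ** 2) * dt / T
print(P)                                  # ~1.0
```

Doubling the truncation interval leaves E essentially unchanged (an energy signal) while the square wave's energy doubles but its average power P stays at 1 (a power signal).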

Common Continuous-Time Signals
Earlier we stated that the key to handling complexity is to reduce a system to many
Superposition is the
key to building
complexity out of
simple parts

simple parts. The converse is also true - we can apply superposition to build
complexity out of simple parts. It may come as a pleasant surprise that the
study of only a few signals will enable us to handle almost any amount of
complexity in a deterministic signal.
The Continuous-Time Step Function

We define the continuous-time step function to be:

The continuous-time
step function
defined

        { 0,    t < 0
u(t) =  { 1/2,  t = 0
        { 1,    t > 0

(1A.16)

Graphically:
and graphed

u(t )
1
0

Figure 1A.8

We will now make a very important observation: it is the argument of the


The argument of a
function determines
its position

function which determines the position of the function along the t-axis. Now
consider the delayed step function:

             { 0,    t < t0
u(t - t0) =  { 1/2,  t = t0
             { 1,    t > t0

(1A.17)

We obtain the conditions on the values of the function by the simple
substitution t → t - t0 in Eq. (1A.16).
Graphically, we have:

u(t- t0 )
1
0

t0

Figure 1A.9

We see that the argument t - t0 simply shifts the origin of the original
function to t0. A positive value of t0 shifts the function to the right -
corresponding to a delay in time. A negative value shifts the function to the
left - an advance in time.
as well as width
and orientation

We will now introduce another concept associated with the argument of a
function: if we divide it by a real constant - a scaling factor - then we regulate
the orientation of the function about the point t = t0, and usually change the
width of the function. Consider the scaled and shifted step function:

                  { 0,    (t - t0)/T < 0
u((t - t0)/T) =   { 1/2,  (t - t0)/T = 0
                  { 1,    (t - t0)/T > 0

(1A.18)
In this case it is not meaningful to talk about the width of the step function, and
the only purpose of the constant T is to allow the function to be reflected about
the line t = t0, as shown below:

(Top: u((t - t0)/T) with t0 and T positive - a step rising at t = t0.
Bottom: u((t - 2)/(-1)) - a step reflected about t = 2, equal to 1 for t < 2
and 0 for t > 2.)

Figure 1A.10

Use Eq. (1A.18) to verify the bottom step function in Figure 1A.10.
The utility of the step function is that it can be used as a "switch" to turn
another function on or off at some point. For example, the product given by
u(t - 1)cos(2πt) is as shown below:
The step function as
a switch

(Plot of u(t - 1)cos(2πt): zero before t = 1, a cosine after t = 1.)

Figure 1A.11

The Rectangle Function

One of the most useful functions in signal analysis is the rectangle function,
defined as:

The rectangle
function defined

           { 0,    |t| > 1/2
rect(t) =  { 1/2,  |t| = 1/2
           { 1,    |t| < 1/2

(1A.19)

Graphically, this is a rectangle with a height, width and area of one:


and graphed

rect ( t )
1

-1/2 0

1/2

Figure 1A.12

If we generalise the argument, as we did for the step function, we get:

                     { 0,    |(t - t0)/T| > 1/2
rect((t - t0)/T) =   { 1/2,  |(t - t0)/T| = 1/2
                     { 1,    |(t - t0)/T| < 1/2

(1A.20)
Graphically, it is a rectangle with a height of unity, it is centred on t = t0, and
both its width and area are equal to |T|:

(Top: rect((t - t0)/T) with t0 positive - height 1, width |T|, area |T|,
centred on t0.
Bottom: rect((t + 2)/3) - height 1, width 3, area 3, centred on t = -2.)

Figure 1A.13

In the time-domain the rectangle function can be used to represent a gating


The rectangle
function can be
used to turn another
function on and off

operation in an electrical circuit. Mathematically it provides an easy way to


turn a function on and off.
Notice from the symmetry of the function that the sign of T has no effect on the
function's orientation. However, the magnitude of T still acts as a scaling
factor.
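The step and rectangle functions, with the generalised arguments of Eqs. (1A.18) and (1A.20), translate directly into code. The following is an illustrative sketch of my own; the half-values at the transition points follow the definitions above:

```python
import numpy as np

def u(x):
    # Eq. (1A.16): 0 for x < 0, 1/2 at x = 0, 1 for x > 0
    return np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

def rect(x):
    # Eq. (1A.19): 1 for |x| < 1/2, 1/2 at the edges, 0 for |x| > 1/2
    ax = np.abs(x)
    return np.where(ax < 0.5, 1.0, np.where(ax > 0.5, 0.0, 0.5))

t = np.array([0.0, 1.9, 2.0, 2.1, 3.0])
print(u((t - 2.0) / -1.0))    # reflected step: 1 before t = 2, 0 after
print(rect((t - 2.0) / 4.0))  # width-4 rectangle centred on t0 = 2
```

Note that a negative T reflects the step about t = t0 but leaves the rectangle unchanged apart from its width, exactly as the text describes.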

The Straight Line

It is surprising how something so simple can cause a lot of confusion. We start
with one of the simplest straight lines possible. Let g(t) = t:

g (t ) = t
slope=1

-2

-1

-1

Figure 1A.14

Now shift the straight line along the t-axis in the standard fashion, to make
g(t) = t - t0:

g ( t ) = t- t0
slope=1
0

t0

Figure 1A.15

To change the slope of the line, simply apply the usual scaling factor, to make:

The straight line
defined

g(t) = (t - t0)/T

(1A.21)

This is the equation of a straight line, with slope 1/T and t-axis intercept t0:
and graphed

(Top: g(t) = (t - t0)/T - slope 1/T, crossing the t-axis at t0.
Bottom: g(t) = (t - 1)/(-3) - slope -1/3, crossing the t-axis at t = 1.)
Figure 1A.16

We can now use our knowledge of the straight line and rectangle function to
completely specify piece-wise linear signals.

Example

A function generator produces the following sawtooth waveform:

(Periodic sawtooth g(t): each period of length 4 ramps from 0 up to 1,
shown from t = -8 to t = 16.)
Figure 1A.17

We seek a mathematical description of this waveform. We start by recognising
the fact that the waveform is periodic, with period T0 = 4. First we describe
only one period (say the one beginning at the origin). We recognise that the
ramp part of the sawtooth is a straight line multiplied by a rectangle function:

g0(t) = (t/4) rect((t - 2)/4)

(1A.22)
Graphically, it is:

(g0(t): a single ramp rising from 0 at t = 0 to 1 at t = 4, zero elsewhere.)
Figure 1A.18

The next period is just g0(t) shifted right (delayed in time) by 4. The next
period is therefore described by:

g1(t) = g0(t - 4) = ((t - 4)/4) rect(((t - 4) - 2)/4)

(1A.23)

It is now easy to see the pattern. In general we have:

gn(t) = g0(t - 4n) = ((t - 4n)/4) rect(((t - 4n) - 2)/4)

(1A.24)

where gn(t) is the nth period and n is any integer. Now all we have to do is
add up all the periods to get the complete mathematical expression for the
sawtooth:

g(t) = Σ_{n=-∞}^{∞} ((t - 4n)/4) rect(((t - 4n) - 2)/4)

(1A.25)
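Eq. (1A.25) can be evaluated directly by truncating the infinite sum. A sketch of my own (the helper names `rect` and `sawtooth` are not from the notes):

```python
import numpy as np

def rect(x):
    ax = np.abs(x)
    return np.where(ax < 0.5, 1.0, np.where(ax > 0.5, 0.0, 0.5))

def sawtooth(t, n_max=10):
    # Truncated form of Eq. (1A.25): a sum of shifted ramp-times-rect periods
    g = np.zeros_like(t, dtype=float)
    for n in range(-n_max, n_max + 1):
        g += ((t - 4 * n) / 4.0) * rect(((t - 4 * n) - 2) / 4.0)
    return g

t = np.array([1.0, 3.0, 5.0, -3.0])
print(sawtooth(t))   # -> [0.25 0.75 0.25 0.25], rising 0 to 1 over each period of 4
```

Each rect "switches on" exactly one period of the ramp, so the terms never overlap except at the discontinuities.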

Example

Sketch the following waveform:

g(t) = (t - 1)rect(t - 1.5) + rect(t - 2.5) + (-0.5t + 2.5)rect(0.5t - 2)

(1A.26)

We can start by putting arguments into our standard form:

g(t) = (t - 1)rect(t - 1.5) + rect(t - 2.5) + ((t - 5)/(-2)) rect((t - 4)/2)

(1A.27)

From this, we can compose the waveform out of the three specified parts:

((t - 1)rect(t - 1.5), then 1·rect(t - 2.5), then ((t - 5)/(-2))rect((t - 4)/2),
which join to form a trapezoidal pulse g(t).)

Figure 1A.19

The Sinc Function

The sinc function will show up quite often in studies of linear systems,
particularly in signal spectra, and it is interesting to note that there is a close
relationship between this function and the rectangle function. Its definition is:

The sinc function
defined

sinc(t) = sin(πt)/(πt)

(1A.28)

Graphically, it looks like:


and graphed

(sinc(t): peak of 1 at t = 0, zero at every non-zero integer, with decaying
oscillatory lobes on either side.)
Figure 1A.20

The inclusion of π in the formula for the sinc function gives it certain "nice"
properties. For example, the zeros of sinc(t) occur at non-zero integral values,
Features of the sinc
function

its width between the first two zeros is two, and its area (including positive and
negative lobes) is just one. Notice also that it appears that the sinc function is
undefined for t = 0. In one sense it is, but in another sense we can define a
function's value at a singularity by approaching it from either side and
averaging the limits.
This is not unusual - we did it explicitly in the definition of the step function,
where there is obviously a singularity at the step. We overcame this by
calculating the limits of the function approaching zero from the positive and
negative sides. The limit is 0 (approaching from the negative side) and 1
(approaching from the positive side), and the average of these two is 1 2 . We
then made explicit the use of this value for a zero argument.
The limit of the sinc function as t → 0 can be obtained using l'Hôpital's rule:

lim_{t→0} sin(πt)/(πt) = lim_{t→0} π cos(πt)/π = 1

(1A.29)

Therefore, we say the sinc function has a value of 1 when its argument is zero.
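The sinc function is straightforward to implement; note that NumPy's built-in `np.sinc` uses the same π-convention as Eq. (1A.28). A sketch of my own:

```python
import numpy as np

def sinc(t):
    # Eq. (1A.28): sin(pi*t)/(pi*t), with sinc(0) = 1 from l'Hopital's rule
    t = np.asarray(t, dtype=float)
    den = np.where(t == 0.0, 1.0, np.pi * t)          # avoid division by zero
    return np.where(t == 0.0, 1.0, np.sin(np.pi * t) / den)

print(sinc(np.array([0.0, 0.5, 1.0, 3.0])))  # -> [1, 2/pi, ~0, ~0]
print(np.allclose(sinc(np.linspace(-4, 4, 81)), np.sinc(np.linspace(-4, 4, 81))))
```

The zeros at the non-zero integers and the unit value at the origin match the properties listed above.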
With a generalised argument, the sinc function becomes:

sinc((t - t0)/T) = sin(π(t - t0)/T) / (π(t - t0)/T)

(1A.30)

Its zeros occur at t = t0 + nT, its height is 1, its width between the first two
zeros is 2|T| and its area is |T|:

(Top: sinc((t - t0)/T) - peak of 1 at t = t0, zeros at t0 ± T, t0 ± 2T,
t0 ± 3T, ...
Bottom: sinc((t - 5)/2) - peak of 1 at t = 5, zeros spaced 2 apart.)

Figure 1A.21

The Impulse Function

The need for an
impulse function

The impulse function, or Dirac delta function, is of great importance in the
study of signal theory. It will enable us to represent "densities" at a single
point. We shall employ the widely accepted symbol δ(t) to denote the impulse
function.

An informal
definition of the
impulse function

The impulse function is often described as having an infinite height and zero
width such that its area is equal to unity, but such a description is not
particularly satisfying.
A more formal definition is obtained by first forming a sequence of functions,
such as (1/T)rect(t/T), and then defining δ(t) to be:

A more formal
definition of the
impulse function

δ(t) = lim_{T→0} (1/T) rect(t/T)

(1A.31)

As T gets smaller and smaller the members of the sequence become taller and
narrower, but their area remains constant as shown below:

(Members of the sequence (1/T)rect(t/T) for T = 2, 1 and 1/2 - that is,
½rect(t/2), rect(t) and 2rect(2t) - taller and narrower as T shrinks, each
with unit area.)

Figure 1A.22

This definition, however, is not very satisfying either.

An impulse function
in the lab is just a
very narrow pulse

From a physical point of view, we can consider the delta function to be so
narrow that making it any narrower would in no way influence the results in
which we are interested. As an example, consider the simple RC circuit shown
below, in which a rectangle function is applied as vi(t):

vi

vo

Figure 1A.23

We choose the width of the pulse to be T, and its height to be equal to 1/T,
such that its area is 1 as T is varied. The output voltage vo(t) will vary with
time, and its exact form will depend on the relative values of T and the product
RC.
As the duration of a
rectangular input
pulse gets smaller
and smaller and
smaller, the output
of a linear system
approaches the
impulse response

If T is much larger than RC, as in the top diagram of Figure 1A.24, the
capacitor will be almost completely charged to the voltage 1/T before the
pulse ends, at which time it will begin to discharge back to zero.

If we shorten the pulse so that T ≈ RC, the capacitor will not have a chance to
become fully charged before the pulse ends. Thus, the output voltage behaves
as in the middle diagram of Figure 1A.24, and it can be seen that there is a
considerable difference between this output and the preceding one.

If we now make T still shorter, as in the bottom diagram of Figure 1A.24, we
note very little change in the shape of the output. In fact, as we continue to
make T shorter and shorter, the only noticeable change is in the time it takes
the output to reach a maximum, and this time is just equal to T.

If this interval is too short to be resolved by our measuring device, the input is
effectively behaving as a delta function and decreasing its duration further will
have no observable effect on the output, which now closely resembles the
impulse response of the circuit.
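The limiting behaviour described above can be reproduced numerically using the exact first-order charging and discharging solutions for the RC circuit. This is an illustrative sketch of my own (the value of RC is an arbitrary choice):

```python
import numpy as np

RC = 1e-3                                 # e.g. R = 1 kΩ, C = 1 uF

def pulse_response(T, t):
    # Exact RC output for a unit-area input pulse of width T and height 1/T:
    # charging towards 1/T for 0 <= t < T, then discharging for t >= T
    rise = (1 / T) * (1 - np.exp(-t / RC))
    fall = (1 / T) * (1 - np.exp(-T / RC)) * np.exp(-(t - T) / RC)
    return np.where(t < T, rise, fall)

t = np.linspace(0.0, 5 * RC, 1000)
h = (1 / RC) * np.exp(-t / RC)            # impulse response of the circuit

errs = []
for T in [RC, RC / 10, RC / 1000]:        # ever-narrower unit-area pulses
    tail = t >= T
    errs.append(np.max(np.abs(pulse_response(T, t)[tail] - h[tail])))
print(errs)                                # errors shrink as T -> 0
```

The shrinking error confirms that, once T is small compared with RC, narrowing the pulse further changes nothing observable: the output is the impulse response.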
Graphical derivation
of the impulse
response of an RC
circuit by decreasing
a pulse's width while
maintaining its area

(Three input/output pairs, each input a unit-area pulse of height 1/T:
for T >> RC the output charges fully towards 1/T and then decays; for
T ≈ RC it only reaches (1/T)(1 - e^{-T/RC}) before decaying; for T << RC
the output closely follows (1/RC)e^{-t/RC}, the impulse response.)

Figure 1A.24
With the previous discussion in mind, we shall use the following properties as
the definition of the delta function: given the real constant t0 and the arbitrary
complex-valued function f(t), which is continuous at the point t = t0, then:

δ(t - t0) = 0,    t ≠ t0

(1A.32a)

∫_{t1}^{t2} f(t) δ(t - t0) dt = f(t0),    t1 < t0 < t2

(1A.32b)

The impulse
function defined - as
behaviour upon
integration

If f(t) is discontinuous at the point t = t0, Eq. (1A.32b) may still be used but
the value of f(t0) is taken to be the average of the limiting values as t = t0 is
approached from above and from below. With this definition of the delta
function, we do not specify the value of δ(t - t0) at the point t = t0. Note that
we do not care about the exact form of the delta function itself but only about
its behaviour under integration.
Eq. (1A.32b) is often called the sifting property of the delta function, because it

Sifting property of
the impulse function
defined

sifts out a single value of the function f(t).
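The sifting property can be demonstrated by substituting the unit-area pulse of Eq. (1A.31) for the delta function and integrating numerically. A sketch of my own, using f(t) = cos(t) and t0 = 1:

```python
import numpy as np

def delta_approx(t, T):
    # Unit-area pulse (1/T)rect(t/T) standing in for delta(t), as in Eq. (1A.31)
    return np.where(np.abs(t) < T / 2, 1.0 / T, 0.0)

t0 = 1.0
dt = 5e-6
t = np.arange(-5.0, 5.0, dt)

for T in [1.0, 0.1, 0.001]:
    integral = np.sum(np.cos(t) * delta_approx(t - t0, T)) * dt
    print(T, integral)     # approaches f(t0) = cos(1) = 0.5403... as T shrinks
```

The integral converges on f(t0) regardless of the exact pulse shape used, which is why only the behaviour under integration matters.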


Graphically we will represent δ(t - t0) as a spike of unit height located at the
point t = t0, but observe that the height of the spike corresponds to the area of
the delta function. Such a representation is shown below:

Graphical
representation of an
impulse function

(Left: δ(t - t0) with t0 positive - a spike of area 1 at t = t0.
Right: 2δ(t + 2), a spike of area 2 at t = -2, and ½δ(t - 1), a spike of
area 1/2 at t = 1.)

Figure 1A.25
Sinusoids
Here we start deviating from the previous descriptions of time-domain signals.
Sinusoids can be
described in the
frequency-domain

All the signals described so far are aperiodic. With periodic signals (such as
sinusoids), we can start to introduce the concept of frequency content, i.e. a
shift from describing signals in the time-domain to one that describes them in
the frequency-domain. This new way of describing signals will be fully
exploited later when we look at Fourier series and Fourier transforms.
Why Sinusoids?

The sinusoid with which we are so familiar today appeared first as the solution
to differential equations that 17th century mathematicians had derived to
describe the behaviour of various vibrating systems. They were no doubt
surprised that the functions that had been used for centuries in trigonometry
appeared in this new context. Mathematicians who made important
contributions included Huygens (1596-1687), Bernoulli (1710-1790), Taylor
(1685-1731), Riccati (1676-1754), Euler (1707-1783), D'Alembert
(1717-1783) and Lagrange (1736-1813).
Perhaps the greatest contribution was that of Euler (who invented the symbols
π, e and i = √-1, which we call j). He first identified the fact that is highly
significant for us:
The special
relationship enjoyed
by sinusoids and
linear systems

For systems described by linear differential equations a
sinusoidal input yields a sinusoidal output.

(1A.33)

The output sinusoid has the same frequency as the input, it is however altered
in amplitude and phase.
We are so familiar with this fact that we sometimes overlook its significance.
Only sinusoids have this property with respect to linear systems. For example,
applying a square wave input does not produce a square wave output.

It was Euler who recognised that an arbitrary sinusoid could be resolved into a
pair of complex exponentials:

A cos(ωt + φ) = X e^{jωt} + X* e^{-jωt}

(1A.34)

A sinusoid can be
expressed as the
sum of a forward
and backward
rotating phasor

where X = (A/2)e^{jφ}. Euler found that the input/output form invariance
exhibited by the sinusoid could be mathematically expressed most concisely
using the exponential components. Thus if (A/2)e^{j(ωt+φ)} was the input, then
for a linear system the output would be H(ω)·(A/2)e^{j(ωt+φ)}, where H(ω) is a
complex-valued function of ω.
We know this already from circuit analysis, and the whole topic of system
design concerns the manipulation of H.
The second major reason for the importance of the sinusoid can be attributed to
Fourier, who in 1807 recognised that:
Any periodic function can be represented as the weighted
sum of a family of sinusoids.

(1A.35)

The way now lay open for the analysis of the behaviour of a linear system for
any periodic input by determining the response to the individual sinusoidal (or
exponential) components and adding them up (superposition).
This technique is called frequency analysis. Its first application was probably
that of Newton passing white light through a prism and discovering that red
light was unchanged after passage through a second prism.

Periodic signals are
made up of
sinusoids - Fourier
Series
Representation of Sinusoids

Sinusoids can be resolved into pairs of complex exponentials as in Eq. (1A.34).
In this section, instead of following the historical, mathematical approach, we
relate sinusoids to complex numbers using a utilitarian argument. Consider the
sinusoid:

x(t) = A cos(2πt/T)

(1A.36)

A is the amplitude, ω = 2π/T is the angular frequency, and T is the period.


Graphically, we have:

(One period of cos(2πt/T).)

Figure 1A.26

Delaying the waveform by t0 results in:

(The same waveform shifted right by t0.)

Figure 1A.27

Mathematically, we have:

x(t) = A cos(2π(t - t0)/T) = A cos(2πt/T - 2πt0/T)
     = A cos(2πt/T + φ)

(1A.37)

A sinusoid
expressed in the
most general
fashion

where φ = -2πt0/T. φ is the phase of the sinusoid. Note the negative sign in
the definition of phase, which means a delayed sinusoid has negative phase.
We note that when t0 = T/4 we get:

(The cosine delayed by a quarter period - that is, a sine wave.)

Figure 1A.28

and:

x(t) = A cos(2πt/T - π/2) = A sin(2πt/T)

(1A.38)

We can thus represent an arbitrary sinusoid as either:

x(t) = A sin(ωt + φ) or A cos(ωt + φ - π/2)

(1A.39)

Similarly:

x(t) = A cos(ωt + φ) or A sin(ωt + φ + π/2)

(1A.40)

We shall use the cosine form. If a sinusoid is expressed in the sine form, then
we need to subtract 90° from the phase angle to get the cosine form.

Phase refers to the
angle of a
cosinusoid at t = 0

When the phase of a sinusoid is referred to, it is the phase angle in the cosine
form (in these lecture notes).
Resolution of an Arbitrary Sinusoid into Orthogonal Functions

An arbitrary sinusoid can be expressed as a weighted sum of a cos(ωt) and
a -sin(ωt) term. Mathematically, we can derive the following:

A sinusoid can be
broken down into
two orthogonal
components

A cos(ωt + φ) = A cos φ cos ωt - A sin φ sin ωt
             = [A cos φ] cos ωt + [A sin φ][-sin ωt]

(1A.41)

Example

Let's resolve a simple sinusoid into the above components:

2cos(3t + 15°) = 2cos(15°)cos(3t) - 2sin(15°)sin(3t)
              = 1.93cos(3t) + 0.52[-sin(3t)]

(1A.42)

Graphically, this is:

Any sinusoid can be
decomposed into
two orthogonal
sinusoids of the
same frequency, but
different amplitude

(A vector diagram: 1.93 along the cos(3t) axis and 0.52 along the
-sin(3t) axis.)

Figure 1A.29

The cos(ωt) and -sin(ωt) terms are said to be orthogonal. For our purposes,

A quick definition of
orthogonal

"orthogonal" may be understood as the property that two vectors or functions
mutually have when one cannot be expressed as a sum containing the other.
We will look at basis functions and orthogonality more formally later on.
Representation of a Sinusoid by a Complex Number

There are three distinct pieces of information about a sinusoid - frequency,
amplitude and phase. Provided the frequency is unambiguous, two real

Three real numbers
completely specify a
sinusoid

numbers are required to describe the sinusoid completely. Why not use a
complex number to store these two real numbers?

Suppose the convention is adopted that: the real part of a complex number is

Two real numbers
completely specify a
sinusoid of a given
frequency

the cos(ωt) amplitude, and the imaginary part is the -sin(ωt) amplitude of the
resolution described by Eq. (1A.41).

Suppose we call the complex number created in this way the phasor associated

which we store as a
complex number
called a phasor

with the sinusoid.


Our notation will be:

Time-domain and
frequency-domain
symbols

x(t) = time-domain expression
X̄ = phasor associated with x(t)

The reason for the bar over X will become apparent shortly.
Example

With the previous example, we had x(t) = 2cos(3t + 15°). Therefore the phasor
associated with it, using our new convention, is X̄ = 1.93 + j0.52 = 2∠15°.
A phasor can be
read off a time-domain expression

We can see that the magnitude of the complex number is the amplitude of the
sinusoid and the angle of the complex number is the phase of the sinusoid. In
general, the correspondence between a sinusoid and its phasor is:

The phasor that
corresponds to an
arbitrary sinusoid

x(t) = A cos(ωt + φ)    ↔    X̄ = Ae^{jφ}

(1A.43)
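The correspondence of Eq. (1A.43), together with the subtract-90° rule for sine-form sinusoids, can be mechanised as follows. This is an illustrative sketch of my own (the function name `phasor` is not from the notes):

```python
import cmath, math

def phasor(A, phase_deg, form="cos"):
    # Eq. (1A.43): X = A*e^{j*phi} for x(t) = A*cos(wt + phi).
    # A sine-form sinusoid first has 90 degrees subtracted from its phase.
    if form == "sin":
        phase_deg -= 90.0
    return A * cmath.exp(1j * math.radians(phase_deg))

X = phasor(3, -30, form="sin")   # x(t) = 3sin(wt - 30 deg) = 3cos(wt - 120 deg)
print(abs(X), math.degrees(cmath.phase(X)))   # magnitude ~3, angle ~ -120 deg
```

The magnitude and angle of the resulting complex number are the amplitude and (cosine-form) phase of the sinusoid, as the text describes.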

Example

If x(t) = 3sin(ωt - 30°), then we have to convert to our cos notation:
x(t) = 3cos(ωt - 120°). Therefore X̄ = 3∠-120°.

Note carefully that X̄ ≠ 3cos(ωt - 120°). All we can say is that
x(t) = 3cos(ωt - 120°) is represented by X̄ = 3∠-120°.
The convenience of complex numbers extends beyond their compact
representation of the amplitude and phase. The sum of two phasors corresponds
to the sinusoid which is the sum of the two component sinusoids represented
by the phasors. That is, if x₃(t) = x₁(t) + x₂(t), where x₁(t), x₂(t) and x₃(t) are
sinusoids with the same frequency, then X̄₃ = X̄₁ + X̄₂.
Example

If x₃(t) = cos(t) + 2sin(t), then:

    X̄₃ = 1∠0° + 2∠-90° = 1 - j2 = 2.24∠-63°

which corresponds to x₃(t) = 2.24cos(t - 63°).
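This phasor arithmetic is exactly complex-number arithmetic, so it is easy to check by machine. A minimal sketch in Python (the lecture examples use MATLAB conventions; plain Python is used here purely for illustration):

```python
import cmath
import math

# Phasors: X = A*e^{j*phase}. 2sin(t) = 2cos(t - 90 deg), so its phasor is 2 at -90 deg.
X1 = cmath.rect(1, 0)                     # phasor of cos(t)
X2 = cmath.rect(2, math.radians(-90))     # phasor of 2sin(t)
X3 = X1 + X2                              # phasor of cos(t) + 2sin(t)

A = abs(X3)                               # amplitude of the resulting sinusoid
phase = math.degrees(cmath.phase(X3))     # phase of the resulting sinusoid (degrees)
print(A, phase)                           # approximately 2.236 and -63.43
```

Reading off the magnitude and angle gives the amplitude and phase of the summed sinusoid directly.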
Formalisation of the Relationship between Phasor and Sinusoid

Using Euler's expansion:

    e^{jθ} = cos(θ) + j sin(θ)                                  (1A.44)

we have:

    A e^{jφ} e^{jωt} = A e^{j(ωt+φ)} = A cos(ωt+φ) + jA sin(ωt+φ)    (1A.45)

We can see that the sinusoid A cos(ωt+φ), represented by the phasor
X̄ = A e^{jφ}, is equal to Re{X̄ e^{jωt}}. Therefore:

    x(t) = Re{X̄ e^{jωt}}                                       (1A.46)
This can be visualised as:


[Figure 1A.30 — graphical interpretation of the rotating phasor: X̄e^{jωt} = Ae^{jφ}e^{jωt} rotates in the complex plane with angular velocity ω, and its projection onto the real axis is x(t)]

Euler's Complex Exponential Relationships for Cosine and Sine

Euler's expansion:

    e^{jθ} = cos(θ) + j sin(θ)                                  (1A.47)

can be visualized as:

[Figure 1A.31 — e^{jθ} in the complex plane as a unit vector with real component cos(θ) and imaginary component j sin(θ)]

By mirroring the vectors about the real axis, it is obvious that:

    e^{-jθ} = cos(θ) - j sin(θ)                                 (1A.48)

and:

[Figure 1A.32 — e^{-jθ} in the complex plane, with components cos(θ) and -j sin(θ)]
By adding Eqs. (1A.47) and (1A.48), we can write:

    cos(θ) = (e^{jθ} + e^{-jθ}) / 2                             (1A.49)

and:

[Figure 1A.33 — cos(θ) formed as the sum of the two conjugate vectors e^{jθ}/2 and e^{-jθ}/2 in the complex plane]

By subtracting Eq. (1A.48) from Eq. (1A.47), we can write:

    sin(θ) = (e^{jθ} - e^{-jθ}) / (2j)                          (1A.50)

and:

[Figure 1A.34 — j sin(θ) formed as the sum of the vectors e^{jθ}/2 and -e^{-jθ}/2 in the complex plane]
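Both identities are easy to confirm numerically. A small Python check (θ = 0.7 rad is an arbitrary choice; the identities hold for every θ):

```python
import cmath
import math

theta = 0.7   # an arbitrary angle, in radians

# cos and sin rebuilt from complex exponentials, Eqs. (1A.49) and (1A.50)
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_from_exp = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)

# both results are purely real and match the ordinary cos and sin
print(cos_from_exp.real, math.cos(theta))
print(sin_from_exp.real, math.sin(theta))
```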
A New Definition of the Phasor

To avoid the mathematical "clumsiness" of needing to take the real part,
another definition of the phasor is often adopted. In circuit analysis we use the
definition of the phasor as given before. In communication engineering we will
find it more convenient to use a new definition:

    X = (A/2) e^{jφ},  or  X = X̄/2                             (1A.51)

We realise that this phasor definition, although a unique and sufficient
representation for every sinusoid, is just half of the sinusoid's full
representation. Using Euler's complex exponential expansion of cos, we get:

    A cos(ωt + φ) = (A/2) e^{jφ} e^{jωt} + (A/2) e^{-jφ} e^{-jωt}
                  = X e^{jωt} + X* e^{-jωt}                     (1A.52)

The two terms in the summation represent two counter-rotating phasors with
angular velocities ω and -ω in the complex plane, as shown below:

[Figure 1A.35 — the counter-rotating phasors Xe^{jωt} = (A/2)e^{jφ}e^{jωt} and its conjugate X*e^{-jωt}, each of length A/2, summing to the real quantity x(t)]
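Equation (1A.52) can be verified numerically: the imaginary parts of the two counter-rotating phasors cancel at every instant, leaving the real sinusoid. A Python sketch (A, φ and ω are arbitrary values chosen for the check):

```python
import cmath
import math

A, phi, w = 2.0, math.radians(15), 3.0    # arbitrary amplitude, phase, frequency
X = (A / 2) * cmath.exp(1j * phi)         # phasor by the new definition, Eq. (1A.51)

for t in [0.0, 0.4, 1.3]:
    s = X * cmath.exp(1j * w * t) + X.conjugate() * cmath.exp(-1j * w * t)
    # the sum is purely real and equals A*cos(wt + phi)
    print(s.real, A * math.cos(w * t + phi), abs(s.imag))
```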

Graphical Illustration of the Relationship between the Two Types of Phasor
and their Corresponding Sinusoid

Consider the first representation of a sinusoid: x(t) = Re{X̄ e^{jωt}}. Graphically,
x(t) can be generated by taking the projection of the rotating phasor, formed
by multiplying X̄ by e^{jωt}, onto the real axis:

[Figure 1A.36 — the rotating phasor in the complex plane alongside the resulting time-domain waveform x(t)]

Now consider the second representation of a sinusoid: x(t) = X e^{jωt} + X* e^{-jωt}.
Graphically, x(t) can be generated by simply adding the two complex
conjugate counter-rotating phasors X e^{jωt} and X* e^{-jωt}. The result will always
be a real number:

[Figure 1A.37 — the two conjugate counter-rotating phasors X and X* in the complex plane, and the real time-domain waveform x(t) formed by their sum]
Negative Frequency

The phasor X* e^{-jωt} rotates with a speed ω but in the clockwise direction.
Therefore, we consider this counter-rotating phasor to have a negative
frequency. The concept of negative frequency will be very useful when we start
to manipulate signals in the frequency-domain.

You should become very familiar with all of these signal types, and you should
feel comfortable representing a sinusoid as a complex exponential. Being able
to manipulate signals mathematically, while at the same time imagining what is
happening to them graphically, is the key to readily understanding signals and
systems.

Common Discrete-Time Signals
A lot of discrete-time signals are obtained by sampling continuous-time
signals at regular intervals. In these cases, we can simply form a discrete-time
signal by substituting t = nTs into the mathematical expression for the
continuous-time signal, and then rewriting it using discrete notation.


Example

Consider the sinusoidal function g(t) = cos(2πf₀t). If we sample this signal at
discrete times, we get:

    g(t)|_{t=nTs} = g(nTs) = cos(2πf₀nTs)                       (1A.53)

Since this is valid only at times which are multiples of Ts, it is a discrete-time
signal and can be written as such:

    g[n] = cos(2πf₀nTs) = cos(Ωn)                               (1A.54)

where Ω = 2πf₀Ts is obviously a constant. A graph of this discrete-time signal
is given below:

[Figure 1A.38 — the sampled sinusoid g[n], oscillating between 1 and -1]
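The substitution t = nTs is exactly how a sampled sinusoid is generated in code. A Python sketch (f₀ = 100 Hz and Ts = 1/800 s are arbitrary choices giving Ω = π/4, i.e. 8 samples per cycle):

```python
import math

f0 = 100.0        # sinusoid frequency in Hz (arbitrary choice)
Ts = 1.0 / 800.0  # sampling interval in s (arbitrary choice)
Omega = 2 * math.pi * f0 * Ts   # the constant in g[n] = cos(Omega*n)

# g[n] = cos(2*pi*f0*n*Ts), one full cycle at 8 samples per cycle
g = [math.cos(Omega * n) for n in range(8)]
print(g)
```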

This sampling process only works when the continuous-time signal is


smooth and of finite value. Therefore, the discrete-time versions of the
rectangle and impulse are defined from first principles.

The Discrete-Time Step Function

We define the discrete-time step function to be:

    u[n] = 0,  n < 0
           1,  n ≥ 0                                            (1A.55)

Graphically:

[Figure 1A.39 — u[n]: zero for n = -4, -3, -2, -1 and equal to 1 from n = 0 onwards]

Note that this is not a sampled version of the continuous-time step function,
which has a discontinuity at t = 0. We define the discrete-time step to have a
value of 1 at n = 0 (instead of having a value of 1/2 if it were obtained by
sampling the continuous-time signal).
Example

A discrete-time signal has the following graph:

[graph — g[n] is zero for n < 0 and rises linearly in steps 0, 1, 2, 3, 4, 5, … from n = 0]

We recognise that the signal is increasing linearly after it "turns on" at n = 0.
Therefore, an expression for this signal is g[n] = n u[n].

The Unit-Pulse Function

There is no way to sample an impulse, since its value is undefined. However,
we shall see that the discrete-time unit-pulse, or Kronecker delta function,
plays the same role in discrete-time systems as the impulse does in
continuous-time systems. It is defined as:

    δ[n] = 0,  n ≠ 0
           1,  n = 0                                            (1A.56)

Graphically:

[Figure 1A.40 — δ[n]: a single sample of height 1 at n = 0, zero everywhere else]
Example

An arbitrary-looking discrete-time signal has the following graph:

[graph — g[n] has values 1, 2, 3 at n = -2, -1, 0, values -2, -1 at n = 2, 3, and value 4 at n = 4]

With no obvious formula, we can express the signal as the sum of a series of
delayed and weighted unit-pulses. Working from left to right, we get:

    g[n] = δ[n+2] + 2δ[n+1] + 3δ[n] - 2δ[n-2] - δ[n-3] + 4δ[n-4]
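The sum-of-weighted-unit-pulses idea translates directly into code. A Python sketch using weights of the form read off in the example above (the exact weights here are illustrative, reconstructed from the figure):

```python
def delta(n):
    """Discrete-time unit-pulse (Kronecker delta)."""
    return 1 if n == 0 else 0

def g(n):
    # sum of delayed, weighted unit-pulses, working from left to right
    return (delta(n + 2) + 2 * delta(n + 1) + 3 * delta(n)
            - 2 * delta(n - 2) - delta(n - 3) + 4 * delta(n - 4))

print([g(n) for n in range(-4, 6)])   # [0, 0, 1, 2, 3, 0, -2, -1, 4, 0]
```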

Summary

Signals can be characterized with many attributes - continuous or discrete,
periodic or aperiodic, energy or power, deterministic or random. Each
characterization tells us something useful about the signal.

Signals written mathematically in a standard form will benefit our analysis
in the future. We use arguments of the form (t - t₀)/T.

Sinusoids are special signals. They are the only real signals that retain their
shape when passing through a linear time-invariant system. They are often
represented as the sum of two complex conjugate, counter-rotating phasors.
The phasor corresponding to x(t) = A cos(ωt + φ) is X = (A/2) e^{jφ}.

Most discrete-time signals can be obtained by sampling continuous-time
signals. The discrete-time step function and unit-pulse are defined from
first principles.

References
Kamen, E.W. & Heck, B.S.: Fundamentals of Signals and Systems Using
MATLAB, Prentice-Hall, Inc., 1997.

Quiz
Encircle the correct answer, cross out the wrong answers. [one or none correct]

1.  The signal cos(10πt + π/3) + sin(11πt + π/2) has:

    (a) period = 1 s      (b) period = 2 s      (c) no period

2.  The periodic sawtooth waveform g(t) shown (period 4, with ramps between the ticks at t = -12, -8, -4, …, 12) can be represented by:

    (a) g(t) = Σₙ ((t - 4n)/4) rect((t - 2)/4)
    (b) g(t) = Σₙ ((t - 4n)/4) rect((t - 2n)/4)
    (c) g(t) = Σₙ ((t - 4)/4) rect((t - 2 - 4n)/4)

3.  The sinusoid 5sin(314t + 80°) can be represented by the phasor X̄:

    (a) 5∠80°      (b) 5∠100°      (c) 5∠-10°

4.  The energy of the signal shown is:

    [graph — a waveform with levels between -4 and 4 on the interval -12 ≤ t ≤ 12]

    (a) 16      (b) 64      (c) 0

5.  Using forward and backward rotating phasor notation, X = 3∠30°. If the
    angular frequency is ω = 1 rad/s, x(2) is:

    (a) 5.994      (b) 5.088      (c) -4.890

Answers: 1. b  2. x  3. c  4. x  5. c
Exercises
1.
For each of the periodic signals shown, determine their time-domain
expressions, their periods and average power:

(a) [graph — a periodic waveform of amplitude 1 with features at t = -2, 0, 2]
(b) [graph — a periodic waveform with features at t = -2, -1, 0, 1]
(c) [graph — g(t) = e^{-t/10} repeated periodically, shown over -20 ≤ t ≤ 40]
(d) [graph — g(t) = 2cos(200πt) gated periodically, with features at t = -3, -1]
2.
For the signals shown, determine their time-domain expressions and calculate
their energy:

(a) [graph — a waveform with levels 10 and -5, with features at t = -2, -1]
(b) [graph — g(t) = cos(t) over -π/2 ≤ t ≤ π/2, amplitude 1]
(c) [graph — two identical cycles of a ramp of height 2, starting at t = 0]
(d) [graph — a staircase waveform with levels 1, 2, 3, starting at t = 0]
3.
Using the sifting property of the impulse function, evaluate the following
integrals:

(i)   ∫ δ(t - 3) e^{-t} dt          (iv)  ∫ f(t) δ(t - t₀) dt
(ii)  ∫ (1 - t) δ(t) dt             (v)   ∫ δ(t - 4) dt
(iii) ∫ δ(t - 2) sin(t) dt          (vi)  ∫ e^{jωt} δ(t - 2) dt
4.
Show that:

    δ((t - t₀)/T) = T δ(t - t₀)

Hint: Show that:

    ∫ f(t) δ((t - t₀)/T) dt = T f(t₀)

This is a case where common sense does not prevail. At first glance it might
appear that δ((t - t₀)/T) = δ(t - t₀), because you obviously can't scale
something whose width is zero to begin with. However, it is the impulse
function's behaviour upon integration that is important, and not its shape! Also
notice how it conforms nicely to our notion of the impulse function being a
limit of a narrow rectangle function. If a rectangle function is scaled by T, then
its area is also T.

5.
Complete the following table:

(a) x(t) = 27cos(100t + 15°)                 X̄ =
(b) x(t) = 5sin(2t + 80°)                    X̄ =
(c) x(t) = 100sin(2t + 45°)                  phasor diagram is:
(d) x(t) = 16cos(314t + 30°)                 amplitude of cos(314t) term =
                                             amplitude of sin(314t) term =
(e) X̄ = 27 + j11                             x(t) =
(f) X̄ = 100∠60°                              x(t) =
(g) x(t) = 4cos(2t + 45°) + 2sin(2t)         X̄ =
(h) x(t) = 2cos(t + 30°)                     X* =
(i) X = 3∠30°, ω = 1                         x(2) =
(j) X = 3∠30°                                X* =
Leonhard Euler (1707-1783) (Len-ard Oy-ler)
The work of Euler built upon that of Newton and made mathematics the tool of
analysis. Astronomy, the geometry of surfaces, optics, electricity and
magnetism, artillery and ballistics, and hydrostatics are only some of Euler's
fields. He put Newton's laws, calculus, trigonometry, and algebra into a
recognizably modern form.
Euler was born in Switzerland, and before he was an adolescent it was
recognized that he had a prodigious memory and an obvious mathematical gift.
He received both his bachelor's and his master's degrees at the age of 15, and
at the age of 23 he was appointed professor of physics, and at age 26 professor
of mathematics, at the Academy of Sciences in Russia.
Among the symbols that Euler initiated are the sigma (Σ) for summation
(1755), e to represent the constant 2.71828… (1727), i for the imaginary √-1
(1777), and even a, b, and c for the sides of a triangle and A, B, and C for the
opposite angles. He transformed the trigonometric ratios into functions and
abbreviated them sin, cos and tan, and treated logarithms and exponents as
functions instead of merely aids to calculation. He also standardised the use of
π for 3.14159…
His 1736 treatise, Mechanica, represented the flourishing state of Newtonian
physics under the guidance of mathematical rigor. An introduction to pure
mathematics, Introductio in analysin infinitorum, appeared in 1748 which
treated algebra, the theory of equations, trigonometry and analytical geometry.
In this work Euler gave the formula e^{ix} = cos x + i sin x. It did for calculus what
Euclid had done for geometry. Euler also published the first two complete
works on calculus: Institutiones calculi differentialis, from 1755, and
Institutiones calculi integralis, from 1768.

Euler's work in mathematics is vast. He was the most prolific writer of


mathematics of all time. After his death in 1783 the St Petersburg Academy
continued to publish Euler's unpublished work for nearly 50 more years!
Some of his phenomenal output includes: books on the calculus of variations;
on the calculation of planetary orbits; on artillery and ballistics; on analysis; on
shipbuilding and navigation; on the motion of the moon; lectures on the
differential calculus. He made decisive and formative contributions to
geometry, calculus and number theory. He integrated Leibniz's differential
calculus and Newton's method of fluxions into mathematical analysis. He
introduced beta and gamma functions, and integrating factors for differential
equations. He studied continuum mechanics, lunar theory, the three body
problem, elasticity, acoustics, the wave theory of light, hydraulics, and music.
He laid the foundation of analytical mechanics. He proved many of Fermat's
assertions including Fermat's Last Theorem for the case n = 3. He published a
full theory of logarithms of complex numbers. Analytic functions of a complex
variable were investigated by Euler in a number of different contexts, including
the study of orthogonal trajectories and cartography. He discovered the
Cauchy-Riemann equations used in complex variable theory.
Euler made a thorough investigation of integrals which can be expressed in
terms of elementary functions. He also studied beta and gamma functions. As
well as investigating double integrals, Euler considered ordinary and partial
differential equations. The calculus of variations is another area in which Euler
made fundamental discoveries.
He considered linear equations with constant coefficients, second order
differential equations with variable coefficients, power series solutions of
differential equations, a method of variation of constants, integrating factors, a
method of approximating solutions, and many others. When considering
vibrating membranes, Euler was led to the Bessel equation which he solved by
introducing Bessel functions.
Euler made substantial contributions to differential geometry, investigating the
theory of surfaces and curvature of surfaces. Many unpublished results by
Euler in this area were rediscovered by Gauss.

Euler considered the motion of a point mass both in a vacuum and in a resisting
medium. He analysed the motion of a point mass under a central force and also
considered the motion of a point mass on a surface. In this latter topic he had to
solve various problems of differential geometry and geodesics.
He wrote a two volume work on naval science. He decomposed the motion of a
solid into a rectilinear motion and a rotational motion. He studied rotational
problems which were motivated by the problem of the precession of the
equinoxes.
He set up the main formulas for the topic of fluid mechanics, the continuity
equation, the Laplace velocity potential equation, and the Euler equations for
the motion of an inviscid incompressible fluid.
He did important work in astronomy including: the determination of the orbits
of comets and planets by a few observations; methods of calculation of the
parallax of the sun; the theory of refraction; consideration of the physical
nature of comets.
Euler also published on the theory of music...
Euler did not stop working in old age, despite his eyesight failing. He
eventually went blind and employed his sons to help him write down long
equations which he was able to keep in memory. Euler died of a stroke after a
day spent: giving a mathematics lesson to one of his grandchildren; doing some
calculations on the motion of balloons; and discussing the calculation of the
orbit of the planet Uranus, recently discovered by William Herschel.
His last words, while playing with one of his grandchildren, were: "I die."

Lecture 1B Systems
Differential equations. System modelling. Discrete-time signals and systems.
Difference equations. Discrete-time block diagrams. Discretization in time of
differential equations. Convolution in LTI discrete-time systems. Convolution
in LTI continuous-time systems. Graphical description of convolution.
Properties of convolution. Numerical convolution.

Linear Differential Equations with Constant Coefficients


Modelling of real systems involves approximating the real system to such a
degree that it is tractable to our mathematics. Obviously the more assumptions
we make about a system, the simpler the model, and the more easily solved.
The more accurate we make the model, the harder it is to analyse. We need to
make a trade-off based on some specification or our previous experience.
A lot of the time our modelling ends up describing a continuous-time system
that is linear, time-invariant (LTI) and finite dimensional. In these cases, the
system is described by the following equation:

    y^(N)(t) + Σ_{i=0}^{N-1} a_i y^(i)(t) = Σ_{i=0}^{N-1} b_i x^(i)(t)      (1B.1)

where:

    y^(N)(t) = d^N y(t) / dt^N                                  (1B.2)

Initial Conditions
The above equation needs the N initial conditions:

    y(0⁻), y^(1)(0⁻), …, y^(N-1)(0⁻)                            (1B.3)

We take 0⁻ as the time for initial conditions to take into account the possibility
of an impulse being applied at t = 0, which will change the output
instantaneously.
First-Order Case
For the first-order case we can express the solution to Eq. (1B.1) in a useful
(and familiar) form. A first-order system is given by:

    dy(t)/dt + a y(t) = b x(t)                                  (1B.4)

To solve, first multiply both sides by an integrating factor equal to e^{at}. This
gives:

    e^{at} [dy(t)/dt + a y(t)] = e^{at} b x(t)                  (1B.5)

Thus:

    d/dt [e^{at} y(t)] = e^{at} b x(t)                          (1B.6)

Integrating both sides gives:

    e^{at} y(t) - y(0⁻) = ∫₀ᵗ e^{aτ} b x(τ) dτ,  t ≥ 0          (1B.7)

Finally, dividing both sides by the integrating factor gives:

    y(t) = e^{-at} y(0⁻) + ∫₀ᵗ e^{-a(t-τ)} b x(τ) dτ,  t ≥ 0    (1B.8)

Use this to solve the simple revision problem for the case of the unit step.

The two parts of the response given in Eq. (1B.8) have the obvious names zero-
input response (ZIR) and zero-state response (ZSR). It will be shown later that
the ZSR is given by a convolution between the system's impulse response and
the input signal.
System Modelling
In modelling a system, we are nearly always after the input/output relationship,
which is a differential equation in the case of continuous-time systems. If we're
clever, we can break a system down into a connection of simple components,
each having a relationship between cause and effect.
Electrical Circuits
The three basic linear, time-invariant relationships for the resistor, capacitor
and inductor are respectively:

    v(t) = R i(t)                                               (1B.9a)

    i(t) = C dv(t)/dt                                           (1B.9b)

    v(t) = L di(t)/dt                                           (1B.9c)

Mechanical Systems
In linear translational systems, the three basic linear, time-invariant
relationships for the inertia force, damping force and spring force are
respectively:

    F(t) = M d²x(t)/dt²                                         (1B.10a)

    F(t) = k_d dx(t)/dt                                         (1B.10b)

    F(t) = k_s x(t)                                             (1B.10c)

where x(t) is the position of the object under study.
For rotational motion, the relationships for the inertia torque, damping torque
and spring torque are:

    T(t) = I d²θ(t)/dt²                                         (1B.11a)

    T(t) = k_d dθ(t)/dt                                         (1B.11b)

    T(t) = k_s θ(t)                                             (1B.11c)
Finding an input-output relationship for signals in systems is just a matter of
applying the above relationships to a conservation law: for electrical circuits it
is one of Kirchhoff's laws, in mechanical systems it is D'Alembert's principle.

Discrete-time Systems
A discrete-time signal is one that takes on values only at discrete instants of
time. Discrete-time signals arise naturally in studies of economic systems
amortization (paying off a loan), models of the national income (monthly,
quarterly or yearly), models of the inventory cycle in a factory, etc. They arise
in science, e.g. in studies of population, chemical reactions, the deflection of a
Discrete-time
systems are
important

weighted beam. They arise all the time in electrical engineering, because of
digital control, e.g. radar tracking systems, processing of electrocardiograms,
digital communication (CD, mobile phone, Internet). Their importance is
probably now reaching that of continuous-time systems in terms of analysis
and design - specifically because today signals are processed digitally (digital
signals are a special case of discrete-time signals).
It is now cheaper and easier to perform most signal operations inside a
microprocessor or microcontroller than it is with an equivalent analog
continuous-time system. But since there is a great depth to the analysis and
design techniques of continuous-time systems, and since most physical systems
are continuous-time in nature, it is still beneficial to study systems in the
continuous-time domain.

Linear Difference Equations with Constant Coefficients
Linear, time-invariant, discrete-time systems can be modelled with the
difference equation:
    y[n] = Σ_{i=1}^{N} a_i y[n-i] + Σ_{i=0}^{N} b_i x[n-i]      (1B.12)

Solution by Recursion
We can solve difference equations by a direct numerical procedure.

There is a MATLAB function available for download from the Signals and
Systems web site called recur that solves the above equation.
Complete Solution
By solving Eq. (1B.12) recursively it is possible to generate an expression for
the complete solution y[n] in terms of the initial conditions and the input x[n].
First-Order Case
Consider the first-order linear difference equation:

    y[n] = a y[n-1] + b x[n]                                    (1B.13)

with initial condition y[-1]. By successive substitution, show that:

    y[0] = a y[-1] + b x[0]
    y[1] = a² y[-1] + a b x[0] + b x[1]
    y[2] = a³ y[-1] + a² b x[0] + a b x[1] + b x[2]             (1B.14)

From the pattern, it can be seen that for n ≥ 0:

    y[n] = a^{n+1} y[-1] + Σ_{i=0}^{n} a^{n-i} b x[i]           (1B.15)

This solution is the discrete-time counterpart to Eq. (1B.8).
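The recursion and the closed form can be checked against each other in a few lines. A Python sketch (the constants, input sequence and initial condition are arbitrary; the course's MATLAB recur function performs the same kind of recursion for the general Eq. (1B.12)):

```python
a, b = 0.5, 2.0
y_init = 3.0                      # the initial condition y[-1]
x = [1.0, -1.0, 4.0, 0.0, 2.0]    # an arbitrary input x[0..4]

# direct recursion of y[n] = a*y[n-1] + b*x[n]
y, prev = [], y_init
for xn in x:
    prev = a * prev + b * xn
    y.append(prev)

# closed form, Eq. (1B.15): y[n] = a^{n+1}*y[-1] + sum_{i=0}^{n} a^{n-i}*b*x[i]
closed = [a ** (n + 1) * y_init + sum(a ** (n - i) * b * x[i] for i in range(n + 1))
          for n in range(len(x))]

print(y)
print(closed)   # identical, up to rounding
```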

Discrete-Time Block Diagrams


An LTI discrete-time system can be represented as a block diagram consisting
of adders, gains and delays. The gain element is shown below:
[Figure 1B.1 — a gain element: input x[n], output y[n] = A x[n]]
The unit-delay element is shown below:
[Figure 1B.2 — a unit-delay element: input x[n], output y[n] = x[n-1]]
Such an element is normally implemented by the memory of a computer, or a
digital delay line.

Example
Using these two elements and an adder, we can construct a representation of
the discrete-time system given by y[n] = a y[n-1] + b x[n]. The system is shown
below:

[block diagram — x[n] passes through a gain b into an adder; the output y[n] is fed back through a unit delay (giving y[n-1]) and a gain a into the same adder]

Discretization in Time of Differential Equations


Often we wish to use a computer for the solution of continuous-time
differential equations. We can: if we are careful about interpreting the results.
First-Order Case
Let's see if we can approximate the first-order linear differential equation
given by Eq. (1B.4) with a discrete-time equation. We can approximate the
continuous-time derivative using Euler's approximation, or forward difference:

    dy(t)/dt |_{t=nT} ≈ (y(nT + T) - y(nT)) / T                 (1B.16)
If T is suitably small and y(t) is continuous, the approximation will be
accurate. Substituting this approximation into Eq. (1B.4) results in a
discrete-time approximation given by the difference equation:

    y[n] = (1 - aT) y[n-1] + bT x[n-1]                          (1B.17)

The discrete values y[n] are approximations to the solution y(nT).

Show that y[n] gives approximate values of the solution y(t) at the times
t = nT, with arbitrary initial condition y[-1], for the special case of zero input.
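The zero-input exercise above is easy to check numerically: with x = 0, Eq. (1B.17) steps out y[n] = (1 - aT)^{n+1} y[-1], which should track the exact solution e^{-at} y(0). A Python sketch (a, T and the initial value are arbitrary; the agreement improves as T shrinks):

```python
import math

a = 1.0
T = 0.01          # step size, small compared with the time constant 1/a
y = 1.0           # initial value

# Euler / forward-difference recursion, Eq. (1B.17), with zero input
t = 0.0
for n in range(100):   # advance to t = 1.0
    y = (1 - a * T) * y
    t += T

print(y, math.exp(-a * t))   # 0.99^100 = 0.3660 vs e^{-1} = 0.3679
```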
Second-order Case
We can generalize the discretization process to higher-order differential
equations. In the second-order case the following approximation can be used:

    d²y(t)/dt² |_{t=nT} ≈ (y(nT + 2T) - 2y(nT + T) + y(nT)) / T²    (1B.18)

Now consider the second-order differential equation:

    d²y(t)/dt² + a₁ dy(t)/dt + a₀ y(t) = b₁ dx(t)/dt + b₀ x(t)  (1B.19)

Show that the discrete-time approximation to the solution y(t) is given by:

    y[n] = (2 - a₁T) y[n-1] - (1 - a₁T + a₀T²) y[n-2]
           + b₁T x[n-1] + (b₀T² - b₁T) x[n-2]                   (1B.20)

Convolution in Linear Time-invariant Discrete-time Systems
Although the linear difference equation is the most basic description of a linear
discrete-time system, we can develop an equivalent representation called the
convolution representation. This representation will help us to determine
important system properties that are not readily apparent from observation of
the difference equation.
One advantage of this representation is that the output is written as a linear
combination of past and present input signal elements. It is only valid when the
system's initial conditions are all zero.


First-Order System
We have previously considered the difference equation:

    y[n] = a y[n-1] + b x[n]                                    (1B.21)

and showed by successive substitution that:

    y[n] = a^{n+1} y[-1] + Σ_{i=0}^{n} a^{n-i} b x[i]           (1B.22)

By the definition of the convolution representation, we are after an expression
for the output with all initial conditions zero. We then have:

    y[n] = Σ_{i=0}^{n} a^{n-i} b x[i]                           (1B.23)

In contrast to Eq. (1B.21), we can see that Eq. (1B.23) depends exclusively on
present and past values of the input signal. One advantage of this is that we
may directly observe how each past input affects the present output signal. For
example, an input x[i] contributes an amount a^{n-i} b x[i] to the totality of the
output at the nth period.
Unit-Pulse Response of a First-Order System
The output of a system subjected to a unit-pulse input δ[n] is denoted h[n]
and is called the unit-pulse response, or weighting sequence, of the
discrete-time system. It is very important because it completely characterises a
system's behaviour. It may also provide an experimental or mathematical
means to determine system behaviour.

For the first-order system of Eq. (1B.21), if we let x[n] = δ[n], then the output
of the system to a unit-pulse input can be expressed using Eq. (1B.23) as:

    y[n] = Σ_{i=0}^{n} a^{n-i} b δ[i]                           (1B.24)

which reduces to:

    y[n] = aⁿ b                                                 (1B.25)

The unit-pulse response for this system is therefore given by:

    h[n] = aⁿ b u[n]                                            (1B.26)
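Driving the difference equation with δ[n] reproduces Eq. (1B.26) directly. A Python sketch (a and b are arbitrary):

```python
a, b = 0.8, 2.0

# feed x[n] = delta[n] into y[n] = a*y[n-1] + b*x[n], zero initial conditions
h, prev = [], 0.0
for n in range(6):
    xn = 1.0 if n == 0 else 0.0
    prev = a * prev + b * xn
    h.append(prev)

print(h)                               # b, a*b, a^2*b, ...
print([b * a ** n for n in range(6)])  # Eq. (1B.26): h[n] = a^n * b for n >= 0
```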

General System
For a general linear time-invariant (LTI) system, the response to a delayed
unit-pulse δ[n-i] must be h[n-i].

Since x[n] can be written as:

    x[n] = Σ_{i=0}^{∞} x[i] δ[n-i]                              (1B.27)

and since the system is LTI, the response y_i[n] to x[i] δ[n-i] is given by:

    y_i[n] = x[i] h[n-i]                                        (1B.28)

The response to the sum Eq. (1B.27) must be equal to the sum of the y_i[n]
defined by Eq. (1B.28). Thus the response to x[n] is:

    y[n] = Σ_{i=0}^{∞} x[i] h[n-i]                              (1B.29)

This is the convolution representation of a discrete-time system, also written
as:

    y[n] = h[n] * x[n]                                          (1B.30)

Graphically, we can now represent the system as:

[Figure 1B.3 — a discrete-time system represented by its unit-pulse response: input x[n], block h[n], output y[n]]
It should be pointed out that the convolution representation is not very efficient
in terms of a digital implementation of the output of a system (needs lots more
memory and calculating time) compared with the difference equation.
Convolution is commutative which means that it is also true to write:

    y[n] = Σ_{i=0}^{∞} h[i] x[n-i]                              (1B.31)
Discrete-time convolution can be illustrated as follows. Suppose the unit-pulse
response is that of a filter of finite length k. Then the output of such a filter is:

    y[n] = h[n] * x[n]
         = Σ_{i=0}^{k} h[i] x[n-i]
         = h[0]x[n] + h[1]x[n-1] + … + h[k]x[n-k]               (1B.32)

Graphically, this summation can be viewed as two buffers, or arrays, sliding
past one another. The array locations that overlap are multiplied and summed
to form the output at that instant.
[Figure 1B.4 — a fixed array h₀ … h₅ aligned against a sliding array of the present and past input values xₙ … xₙ₋₅, giving yₙ = h₀xₙ + h₁xₙ₋₁ + h₂xₙ₋₂ + h₃xₙ₋₃ + h₄xₙ₋₄ + h₅xₙ₋₅]
In other words, the output at time n is equal to a linear combination of past and
present values of the input signal, x. The system can be considered to have a
memory because at any particular time, the output is still responding to an
input at a previous time.

Discrete-time convolution can be implemented by a transversal digital filter:

[Figure 1B.5 — a transversal digital filter: x[n] passes through a chain of unit delays D producing x[n-1], x[n-2], …, x[n-k]; each tap is weighted by h[0], h[1], h[2], …, h[k] and the weighted taps are summed to form y[n]]

MATLAB can do convolution for us. Use the conv function.
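NumPy's np.convolve is the direct counterpart of MATLAB's conv. A Python sketch checking it against the term-by-term sum of Eq. (1B.32) (the h and x values are arbitrary example sequences):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])         # a short unit-pulse response
x = np.array([1.0, 2.0, 0.0, -1.0])    # an input sequence starting at n = 0

y = np.convolve(h, x)                  # y[n] = sum_i h[i]*x[n-i]

# the same output computed term by term from Eq. (1B.32)
y_manual = [sum(h[i] * x[n - i] for i in range(len(h)) if 0 <= n - i < len(x))
            for n in range(len(h) + len(x) - 1)]

print(y)
print(y_manual)
```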
System Memory
A system's memory can be roughly interpreted as a measure of how significant
past inputs are on the current output. Consider the two unit-pulse responses
below:

[Figure 1B.6 — two unit-pulse responses: h₁[n] decays to zero within a few samples, while h₂[n] is still significant at n = 18, 19, 20]
System 1 depends strongly on inputs applied five or six iterations ago and less
so on inputs applied more than six iterations ago. The output of system 2
depends strongly on inputs 20 or more iterations ago. System 1 is said to have a
shorter memory than system 2.

It is apparent that a measure of system memory is obtained by noting how
quickly the system unit-pulse response decays to zero: the more quickly a
system's weighting sequence goes to zero, the shorter the memory. Some
applications require a short memory, where the output is more readily
influenced by the most recent behaviour of the input signal. Such systems are
fast responding. A system with long memory does not respond as readily to
changes in the recent behaviour of the input signal and is said to be sluggish.
System Stability
A system is stable if its output signal remains bounded in
response to any bounded input signal.                           (1B.33)

If a bounded input (BI) produces a bounded output (BO), then the system is
termed BIBO stable. This implies that:

    lim_{i→∞} h[i] = 0                                          (1B.34)
This is something not readily apparent from the difference equation. A more
thorough treatment of system stability will be given later.

What can you say about the stability of the system described by Eq. (1B.21)?

Convolution in Linear Time-invariant Continuous-time Systems

The input / output relationship of a continuous-time system can be specified in
terms of a convolution operation between the input and the impulse response of
the system.

Recall that we can consider the impulse as the limit of a rectangle function:

    x_r(t) = (1/T) rect(t/T)                                    (1B.35)

as T → 0. The system response to this input is:

    y(t) = y_r(t)                                               (1B.36)

and since:

    lim_{T→0} x_r(t) = δ(t)                                     (1B.37)

then:

    lim_{T→0} y_r(t) = h(t)                                     (1B.38)

Now expressing the general input signal as the limit of a staircase
approximation as shown in Figure 1B.7:

[Figure 1B.7 shows an arbitrary waveform x(t) and its staircase approximation
x̃(t,T), built from rectangles of width T which get smaller and smaller and
eventually approach the original waveform.]

we have:

    x(t) = lim (T→0) x̃(t,T)    (1B.39)

where:

    x̃(t,T) = Σ (i=−∞ to ∞) x(iT) rect((t − iT)/T)    (1B.40)

The staircase is just a sum of weighted rectangle inputs, so we can rewrite
Eq. (1B.40) using Eq. (1B.35) as:

    x̃(t,T) = Σ (i=−∞ to ∞) x(iT) T xr(t − iT)    (1B.41)
Since the system is time-invariant, the response to xr(t − iT) is yr(t − iT),
and because superposition holds for linear systems, the system response to
x̃(t,T) is:

    ỹ(t,T) = Σ (i=−∞ to ∞) x(iT) T yr(t − iT)    (1B.42)

The system response to x(t) is then just the response in the limit as the
staircase approximation approaches the original input:

    y(t) = lim (T→0) ỹ(t,T) = lim (T→0) Σ (i=−∞ to ∞) x(iT) yr(t − iT) T    (1B.43)

When we perform the limit, x(iT) → x(λ), yr(t − iT) → h(t − λ) and T → dλ.
Hence the output response can be expressed in the form of the convolution
integral for continuous-time systems:

    y(t) = ∫ (−∞ to ∞) x(λ) h(t − λ) dλ    (1B.44)

If the input x(t) = 0 for all t < 0, then:

    y(t) = ∫ (0 to ∞) x(λ) h(t − λ) dλ    (1B.45)

If the system is causal, then h(t) = 0 for negative arguments, i.e. when λ > t.
The upper limit in the integration can then be changed so that:

    y(t) = ∫ (0 to t) x(λ) h(t − λ) dλ    (1B.46)

Once again, it can be shown that convolution is commutative, which means that
it is also true to write (compare with Eq. (1B.31)):

    y(t) = ∫ (0 to t) h(λ) x(t − λ) dλ    (1B.47)
With the convolution operation denoted by an asterisk, *, the input / output
relationship becomes:

    y(t) = h(t) * x(t)    (1B.48)

Graphically, we can represent the system as:

[Figure 1B.8 shows the input x(t) applied to a block labelled h(t), producing
the output y(t) = h(t) * x(t).]

It should be pointed out, once again, that the convolution relationship is only
valid when there is no initial energy stored in the system, i.e. initial
conditions are zero. The output response using convolution is just the ZSR.

Graphical Description of Convolution

Consider the following continuous-time example which has a causal impulse
response function. A causal impulse response implies that there is no response
from the system until an impulse is applied at t = 0. In other words, h(t) = 0
for t < 0. Let the impulse response of the system be a decaying exponential,
and let the input signal be the unit-step:

[Figure 1B.9 shows h(t) = e^(−t) for t ≥ 0, and the unit-step input x(t) = 1
for t ≥ 0.]

Using graphical convolution, the output y(t) can be obtained. First, the input
signal is flipped in time about the origin. Then, as the time parameter t
advances, the input signal slides past the impulse response in much the
same way as the input values slide past the unit-pulse values for discrete-time
convolution. You can think of this graphical technique as the continuous-time
version of a digital transversal filter (you might like to think of it as a
discrete-time system and input signal, with the time delay between successive
values so tiny that the finite summation of Eq. (1B.30) turns into a
continuous-time integration).
When t = 0, there is obviously no overlap between the impulse response and
input signal. The output must be zero since we have assumed the system to be
in the zero-state (all initial conditions zero). Therefore y(0) = 0. This is
illustrated below:

[Figure 1B.10 is a snapshot at t = 0: h(λ), the flipped input x(0 − λ), and
their product h(λ)x(0 − λ), whose area under the curve is y(0) = 0.]

Letting time roll on a bit further, we take a snapshot of the situation when
t = 1. This is shown below:
[Figure 1B.11 is a snapshot at t = 1: h(λ), the flipped and shifted input
x(1 − λ), and their product e^(−λ) over 0 ≤ λ ≤ 1, whose area under the curve
is y(1).]

The output value at t = 1 is now given by:

    y(1) = ∫ (−∞ to ∞) h(λ) x(1 − λ) dλ
         = ∫ (0 to 1) e^(−λ) dλ
         = [−e^(−λ)] from 0 to 1
         = 1 − e^(−1) ≈ 0.63    (1B.49)

Taking a snapshot at t = 2 gives:

[Figure 1B.12 is a snapshot at t = 2: h(λ), x(2 − λ), and their product
e^(−λ) over 0 ≤ λ ≤ 2, whose area under the curve is y(2).]

The output value at t = 2 is now given by:

    y(2) = ∫ (−∞ to ∞) h(λ) x(2 − λ) dλ
         = ∫ (0 to 2) e^(−λ) dλ
         = [−e^(−λ)] from 0 to 2
         = 1 − e^(−2) ≈ 0.86    (1B.50)
If we keep evaluating the output for various values of t, we can build up a
graphical picture of the output for all time:

[Figure 1B.13 shows the complete output y(t) = 1 − e^(−t) for t ≥ 0, passing
through y(0) = 0, y(1) = 0.63 and y(2) = 0.86.]

In this simple case, it is easy to verify the graphical solution using
Eq. (1B.47). The output value at any time t is given by:

    y(t) = ∫ (0 to t) h(λ) x(t − λ) dλ = ∫ (0 to t) e^(−λ) dλ = 1 − e^(−t)    (1B.51)

In more complicated situations, it is often the graphical approach that provides
a quick insight into the form of the output signal, and it can be used to give a
rough sketch of the output without too much work.
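The snapshot values are easy to confirm numerically: approximating the convolution integral for h(t) = e^(−t)u(t) and x(t) = u(t) by a simple Riemann sum reproduces y(1) ≈ 0.63 and y(2) ≈ 0.86. A minimal sketch:

```python
import math

# Approximate y(t) = integral from 0 to t of h(lam) * x(t - lam) d(lam)
# for h(t) = exp(-t) u(t) and x(t) = u(t).  Exact answer: y(t) = 1 - exp(-t).

def convolve_at(t, dlam=1e-4):
    total, lam = 0.0, 0.0
    while lam < t:
        total += math.exp(-lam) * 1.0 * dlam   # x(t - lam) = 1 for 0 <= lam <= t
        lam += dlam
    return total

print(round(convolve_at(1.0), 2), round(convolve_at(2.0), 2))  # 0.63 0.86
```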
Properties of Convolution

In the following list of continuous-time properties, the notation x(t) → y(t)
should be read as "the input x(t) produces the output y(t)". Similar properties
also hold for discrete-time convolution.

Linearity:

    a x(t) → a y(t)    (1B.52a)

    x1(t) + x2(t) → y1(t) + y2(t)    (1B.52b)

    a1 x1(t) + a2 x2(t) → a1 y1(t) + a2 y2(t)    (1B.52c)

Time-invariance:

    x(t − t0) → y(t − t0)    (1B.52d)

Convolution is also associative, commutative and distributive with addition, all
due to the linearity property.

Numerical Convolution

We have already looked at how to discretize a continuous-time system by
discretizing a system's input / output differential equation. The following
procedure provides another method for discretizing a continuous-time system.
It should be noted that the two different methods produce two different
discrete-time representations.

We start by thinking about how to simulate a continuous-time convolution with
a computer, which operates on discrete-time data. The integral in Eq. (1B.47)
can be discretized by setting t = nT:

    y(nT) = ∫ (0 to nT) h(λ) x(nT − λ) dλ    (1B.53)

By effectively reversing the procedure in arriving at Eq. (1B.47), we can break
this integral into regions of width T:

    y(nT) = ∫ (0 to T) h(λ) x(nT − λ) dλ + ∫ (T to 2T) h(λ) x(nT − λ) dλ
            + … + ∫ (iT to (i+1)T) h(λ) x(nT − λ) dλ + …    (1B.54)

which can be rewritten using the summation symbol:

    y(nT) = Σ (i=0 to n) ∫ (iT to iT+T) h(λ) x(nT − λ) dλ    (1B.55)

If T is small enough, h(λ) and x(nT − λ) can be taken to be constant over each
interval:

[Figure 1B.14 shows h(λ) approximated by the constant value h(iT) over the
interval from iT to iT + T.]

That is, apply Euler's approximation:

    h(λ) ≈ h(iT)
    x(nT − λ) ≈ x(nT − iT)    (1B.56)
so that Eq. (1B.55) becomes:

    y(nT) = Σ (i=0 to n) ∫ (iT to iT+T) h(iT) x(nT − iT) dλ    (1B.57)

Since the integrand is constant with respect to λ, it can be moved outside the
integral, which is easily evaluated:

    y(nT) = Σ (i=0 to n) h(iT) x(nT − iT) T    (1B.58)

Writing in the notation for discrete-time signals, we have the following
input / output relationship:

    y[n] = Σ (i=0 to n) h[i] x[n − i] T,   n = 0, 1, 2, …    (1B.59)

This equation can be viewed as the convolution-summation representation of a
linear time-invariant system with unit-pulse response T·h[n], where h[n] is the
sampled version of the impulse response h(t) of the original continuous-time
system. It is a convolution approximation for causal systems with inputs
applied at t = 0.
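Eq. (1B.59) translates directly into code. The sketch below applies the convolution-summation approximation to the earlier worked example (h(t) = e^(−t), unit-step input), where the exact output is 1 − e^(−nT); with T = 0.01 s the approximation agrees to about two decimal places:

```python
import math

def conv_sum(h, x, T):
    """y[n] = T * sum_{i=0}^{n} h[i] x[n-i]  -- Eq. (1B.59)."""
    N = min(len(h), len(x))
    return [T * sum(h[i] * x[n - i] for i in range(n + 1)) for n in range(N)]

T = 0.01
n_max = 300
h = [math.exp(-i * T) for i in range(n_max)]   # sampled impulse response e^{-t}
x = [1.0] * n_max                              # sampled unit step

y = conv_sum(h, x, T)
exact = 1 - math.exp(-100 * T)                 # exact y(t) at t = 1 s (n = 100)
print(round(y[100], 3), round(exact, 3))
```

Halving T roughly halves the error, which is one practical way to decide whether T is small enough.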
Convolution with an Impulse

One very important particular case of convolution that we will use all the time
is that of convolving a function with a delayed impulse. We can tackle the
problem three ways: graphically, algebraically, or by using the concept that a
system performs convolution. Using this last approach, we can surmise what
the solution is by recognising that the convolution of a function h(t) with an
impulse is equivalent to applying an impulse to a system that has an impulse
response given by h(t):

[Figure 1B.15 shows an impulse δ(t) applied to a system with impulse response
h(t), producing the output y(t) = h(t) * δ(t) = h(t).]

The output, by definition, is the impulse response, h(t). We can also arrive at
this result algebraically by performing the convolution integral, and noting that
it is really a sifting integral:

    δ(t) * h(t) = ∫ (−∞ to ∞) δ(λ) h(t − λ) dλ = h(t)    (1B.60)
If we now apply a delayed impulse to the system, then since the system is
time-invariant, we should get out a delayed impulse response:

[Figure 1B.16 shows a delayed impulse δ(t − t0) applied to a system with
impulse response h(t), producing the delayed impulse response
y(t) = h(t) * δ(t − t0) = h(t − t0).]

Again, using the definition of the convolution integral and the sifting property
of the impulse, we can arrive at the result algebraically:

    δ(t − t0) * h(t) = ∫ (−∞ to ∞) δ(λ − t0) h(t − λ) dλ = h(t − t0)    (1B.61)

Therefore, in general, we have:

    f(x) * δ(x − x0) = f(x − x0)    (1B.62)

This can be represented graphically as:

[Figure 1B.17 shows f(x) convolved with an impulse δ(x − x0), giving
f(x − x0): convolving a function with an impulse shifts the original function
to the impulse's location.]
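Eq. (1B.62) has an exact discrete-time counterpart: convolving a sequence with a delayed unit pulse simply delays the sequence. A minimal sketch:

```python
def convolve(f, g):
    """Full discrete convolution: (f * g)[n] = sum_k f[k] g[n-k]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for k, fk in enumerate(f):
        for m, gm in enumerate(g):
            out[k + m] += fk * gm
    return out

f = [1, 2, 3, 4]           # arbitrary sequence
delta_shifted = [0, 0, 1]  # unit pulse delayed by 2 samples

print(convolve(f, delta_shifted))  # f shifted right by 2: [0, 0, 1, 2, 3, 4]
```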
Summary

- Systems are predominantly described by differential or difference equations.
  They are the equations of dynamics, and tell us how outputs and various
  states of the system change with time for a given input.

- Most systems can be derived from simple cause / effect relationships,
  together with a few conservation laws.

- Discrete-time signals occur naturally and frequently. They are signals that
  exist only at discrete points in time. Discrete-time systems are commonly
  implemented using microprocessors.

- We can approximate continuous-time systems with discrete-time systems
  by a process known as discretization: we replace differentials with
  differences.

- Convolution is another (equivalent) way of representing an input / output
  relationship of a system. It shows us features of the system that were
  otherwise hidden when written in terms of a differential or difference
  equation.

- Convolution introduces us to the concept of an impulse response for a
  continuous-time system, and a unit-pulse response for a discrete-time
  system. Knowing this response, we can determine the output for any input,
  if the initial conditions are zero.

- A system is BIBO stable if its impulse response decays to zero in the
  continuous-time case, or if its unit-pulse response decays to zero in the
  discrete-time case.

- Convolving a function with an impulse shifts the original function to the
  impulse's location.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall International, Inc., 1997.
Exercises

1.
The following continuous-time functions are to be uniformly sampled. Plot the
discrete signals which result if the sampling period T is (i) T = 0.1 s,
(ii) T = 0.3 s, (iii) T = 0.5 s, (iv) T = 1 s. How does the sampling time affect
the accuracy of the resulting signal?

(a) x(t) = 1    (b) x(t) = cos(4πt)    (c) x(t) = cos(10πt)

2.
Plot the sequences given by:

(a) y1[n] = 3δ[n+1] − δ[n] + 2δ[n−1] + (1/2)δ[n−2]

(b) y2[n] = 4δ[n] + δ[n−2] − 3δ[n−3]

3.
From your solution in Question 2, find a[n] = y1[n] + y2[n]. Show graphically
that the resulting sequence is equivalent to the sum of the following delayed
unit-step sequences:

    a[n] = 3u[n+1] − u[n−1] − (1/2)u[n−2] − (9/2)u[n−3] + 3u[n−4]

4.
Find yn y1 n y 2 n when:

n 1, -2, -3,
0,
y1 n
n 2 1
n 0, 1, 2,
1 ,
0,
n 1, -2, -3,

y 2 n
n
n 0, 1, 2,
1 2 1 1 ,

5.
The following series of numbers is known as the Fibonacci sequence:

    0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

(a) Find a difference equation which describes this number sequence y[n] for
n ≥ 2, when y[0] = 0 and y[1] = 1.

(b) By evaluating the first few terms, show that the following formula also
describes the numbers in the Fibonacci sequence:

    y[n] = (1/√5)[(0.5 + √1.25)^n − (0.5 − √1.25)^n]

(c) Using your answer in (a), find y[20] and y[25]. Check your results using
the equation in (b). Which approach is easier?

6.
Construct block diagrams for the following difference equations:

(i) y[n] = y[n−2] + x[n] + x[n−1]

(ii) y[n] = 2y[n−1] + y[n−2] + 3x[n−4]

7.
(i) Construct a difference equation from the following block diagram:

[The block diagram has the input x[n], scaled by a gain of 3, feeding a summing
junction that produces y[n]; y[n] passes through a unit delay D and a gain of
−2, which feeds back into the summing junction.]

(ii) From your solution calculate y[n] for n = 0, 1, 2 and 3 given y[−2] = 2,
y[−1] = 1, x[n] = 0 for n < 0 and x[n] = (−1)^n for n = 0, 1, 2.

8.
(a) Find the unit-pulse response of the linear systems given by the following
equations:

(i) y[n] = (T/2)(x[n] + x[n−1]) + y[n−1]

(ii) y[n] = x[n] + 0.75x[n−1] + 0.5y[n−1]

(b) Determine the first five terms of the response of the equation in (ii) to the
input:

    x[n] = 0 for n = −2, −3, −4, …
    x[n] = 1 for n = −1
    x[n] = (−1)^n for n = 0, 1, 2, …

using (i) the basic difference equation, (ii) graphical convolution and
(iii) the convolution summation. (Note y[n] = 0 for n ≤ −2.)

9.
For the single-input single-output continuous- and discrete-time systems
characterized by the following equations, determine which coefficients must be
zero for the systems to be

(a) linear
(b) time invariant

(i) a1 d³y/dt³ + a2 d²y/dt² + a3 (dy/dt)² + a4 y + a5 sin(t) y + a6 y = a7 x

(ii) a1 y²[n−3] + a2 y[n−2] + a3 + a4 y[n] + a5 sin(n) y[n−1] + a6 y[n] = a7 x[n]

10.
To demonstrate that nonlinear systems do not obey the principle of
superposition, determine the first five terms of the response of the system:

    y[n] = 2y[n−1] + x²[n]

to the input:

    x1[n] = 0 for n = −1, −2, −3, …
    x1[n] = 1 for n = 0, 1, 2, …

If y1[n] denotes this response, show that the response of the system to the input
x[n] = 2x1[n] is not 2y1[n].

Can convolution methods be applied to nonlinear systems? Why?

11.
A system has the unit-pulse response:

    h[n] = 2u[n] − u[n−2] − u[n−4]

Find the response of this system when the input is the sequence:

    δ[n] + δ[n−1] + δ[n−2] + δ[n−3]

using (i) graphical convolution and (ii) convolution summation.

12.
For x1[n] and x2[n] as shown below, find

(i) x1[n] * x1[n]    (ii) x1[n] * x2[n]    (iii) x2[n] * x2[n]

using (a) graphical convolution and (b) convolution summation.

[The figure shows x1[n] as a pulse of height 2 over n = 0 to 3, and x2[n] as a
longer pulse over n = 1 to 8.]

13.
Use MATLAB and discretization to produce approximate solutions to the
revision problem.

14.
Use MATLAB to graph the output voltage of the following RLC circuit:

[The circuit has input voltage vi(t) and output voltage vo(t).]

when R = 2, L = C = 1, vo(0) = 1, v′o(0) = −1 and vi(t) = sin(t) u(t).

Compare with the exact solution: vo(t) = 0.5[(3 + t)e^(−t) − cos t], t ≥ 0.
How do you decide what value of T to use?

15.
A feedback control system is used to control a room's temperature with respect
to a preset value. A simple model for this system is represented by the block
diagram shown below:

[The block diagram shows the input x(t) applied to a feedback loop with gain
K, producing the output y(t).]

In the model, the signal x(t) represents the commanded temperature change
from the preset value, y(t) represents the produced temperature change, and t
is measured in minutes. Find:

a) the differential equation relating x(t) and y(t),
b) the impulse response of the system, and
c) the temperature change produced by the system when the gain K is 0.5 and
a step change of 0.75 is commanded at t = 4 min.
d) Plot the temperature change produced.
e) Use MATLAB and numerical convolution to produce approximate
solutions to this problem and compare with the theoretical answer.

16.
Use MATLAB and the numerical convolution method to solve Q14.

17.
Sketch the convolution of the two functions shown below.

[The figure shows x(t), a pulse of height 1 over −1 ≤ t ≤ 0, and y(t), a pulse
over 0 ≤ t ≤ 0.5.]

18.
Quickly changing inputs to an aircraft rudder control are smoothed using a
digital processor. That is, the control signal is converted to a discrete-time
signal by an A/D converter, the discrete-time signal is smoothed with a
discrete-time filter, and the smoothed discrete-time signal is converted to a
continuous-time, smoothed, control signal by a D/A converter. The smoothing
filter has the unit-pulse response:

    h[nT] = [(0.5)^n − (0.25)^n] u[nT],   T = 0.25 s

Find the zero-state response of the discrete-time filter when the input signal
samples are:

    x[nT] = 1, 1, 1,   T = 0.25 s

Plot the input, unit-pulse response, and output for −0.75 ≤ t ≤ 1.5 s.

19.
A wave staff measures ocean wave height in meters as a function of time. The
height signal is sampled at a rate of 5 samples per second. These samples form
the discrete-time signal:

    s[nT] = cos(2π(0.2)nT + 1.1) + 0.5 cos(2π(0.3)nT + 1.5)

The signal is transmitted to a central wave-monitoring station. The
transmission system corrupts the signal with additive noise given by the
MATLAB function:

    function no = drn(n)
    N = size(n,2);
    rand('seed', 0);
    no(1) = rand - 0.5;
    for i = 2:N
        no(i) = 0.2*no(i-1) + (rand - 0.5);
    end

The received signal plus noise, x[nT], is processed with a low-pass filter to
reduce the noise. The filter unit-pulse response is:

    h[nT] = [0.182(0.76)^n + 0.144(0.87)^n cos(0.41n) + 0.194(0.87)^n sin(0.41n)] u[nT]

Plot the sampled height signal, s[nT], the filter input signal, x[nT], the
unit-pulse response of the filter, h[nT], and the filter output signal y[nT], for
0 ≤ t ≤ 6 s.

Gustav Robert Kirchhoff (1824-1887)

Kirchhoff was born in Russia, and showed an early interest in mathematics. He
studied at the University of Königsberg, and in 1845, while still a student, he
pronounced Kirchhoff's Laws, which allow the calculation of current and
voltage for any circuit. They are the Laws electrical engineers apply on a
routine basis; they even apply to non-linear circuits such as those containing
semiconductors, or distributed parameter circuits such as microwave striplines.

He graduated from university in 1847 and received a scholarship to study in
Paris, but the revolutions of 1848 intervened. Instead, he moved to Berlin
where he met and formed a close friendship with Robert Bunsen, the inorganic
chemist and physicist who popularized use of the Bunsen burner.

In 1857 Kirchhoff extended the work done by the German physicist Georg
Simon Ohm, by describing charge flow in three dimensions. He also analysed
circuits using topology. In further studies, he offered a general theory of how
electricity is conducted. He based his calculations on experimental results
which determine a constant for the speed of the propagation of electric charge.
Kirchhoff noted that this constant is approximately the speed of light, but the
greater implications of this fact escaped him. It remained for James Clerk
Maxwell to propose that light belongs to the electromagnetic spectrum.

Kirchhoff's most significant work, from 1859 to 1862, involved his close
collaboration with Bunsen. Bunsen was in his laboratory, analysing various
salts that impart specific colours to a flame when burned. Bunsen was using
coloured glasses to view the flame. When Kirchhoff visited the laboratory, he
suggested that a better analysis might be achieved by passing the light from the
flame through a prism. The value of spectroscopy became immediately clear.
Each element and compound showed a spectrum as unique as any fingerprint,
which could be viewed, measured, recorded and compared.

"Spectral analysis," Kirchhoff and Bunsen wrote not long afterward, "promises
the chemical exploration of a domain which up till now has been completely
closed." They not only analysed the known elements, they discovered new
ones. Analyzing salts from evaporated mineral water, Kirchhoff and Bunsen
detected a blue spectral line; it belonged to an element they christened
caesium (from the Latin caesius, sky blue). Studying lepidolite (a lithium-based
mica) in 1861, Bunsen found an alkali metal he called rubidium (from
the Latin rubidius, deepest red). Both of these elements are used today in
atomic clocks. Using spectroscopy, ten more new elements were discovered
before the end of the century, and the field had expanded enormously;
between 1900 and 1912 a handbook of spectroscopy was published by
Kayser in six volumes comprising five thousand pages!

"[Kirchhoff is] a perfect example of the true German investigator. To search
after truth in its purest shape and to give utterance with almost an abstract
self-forgetfulness, was the religion and purpose of his life." - Robert von
Helmholtz, 1890.

Kirchhoff's work on spectrum analysis led on to a study of the composition of
light from the Sun. He was the first to explain the dark lines (Fraunhofer lines)
in the Sun's spectrum as caused by absorption of particular wavelengths as the
light passes through a gas. Kirchhoff wrote "It is plausible that spectroscopy is
also applicable to the solar atmosphere and the brighter fixed stars." We can
now analyse the collective light of a hundred billion stars in a remote galaxy
billions of light-years away; we can tell its composition, its age, and even how
fast the galaxy is receding from us, simply by looking at its spectrum!

As a consequence of his work with Fraunhofer's lines, Kirchhoff developed a
general theory of emission and radiation in terms of thermodynamics. It stated
that a substance's capacity to emit light is equivalent to its ability to absorb it at
the same temperature. One of the problems that this new theory created was the
blackbody problem, which was to plague physics for forty years. This
fundamental quandary arose because heating a black body, such as a metal
bar, causes it to give off heat and light. The spectral radiation, which depends
only on the temperature and not on the material, could not be predicted by
classical physics. In 1900 Max Planck solved the problem by discovering
quanta, which had enormous implications for twentieth-century science.

In 1875 he was appointed to the chair of mathematical physics at Berlin and he
ceased his experimental work. An accident-related disability meant he had to
spend much of his life on crutches or in a wheelchair. He remained at the
University of Berlin until he retired in 1886, shortly before his death in 1887.

Lecture 2A Fourier Series and Spectra

Orthogonality. The trigonometric Fourier series. The compact trigonometric
Fourier series. The spectrum. The complex exponential Fourier series. How to
use MATLAB to check Fourier series coefficients. Symmetry in the time
domain. Power. Filters. Relationships between the three Fourier series
representations.

Orthogonality

The idea of breaking a complex phenomenon down into easily comprehended
components is quite fundamental to human understanding. Instead of trying to
commit the totality of something to memory, and then, in turn, having to think
about it in its totality, we identify characteristics, perhaps associating a scale
with each characteristic. Our memory of a person might be confined to the
characteristics gender, age, height, skin colour, hair colour, weight and how
they rate on a small number of personality attribute scales such as
optimist-pessimist, extrovert-introvert, aggressive-submissive etc.

We only need to know how to travel east and how to travel north and we can
go from any point on earth to any other point. An artist (or a television tube)
needs only three primary colours to make any colour.

In choosing the components it is most efficient if they are independent, as in
gender and height for a person. It would waste memory capacity, for example,
to adopt as characteristics both current age and birthdate, as one could be
predicted from the other, or all three of total height and height above and
below the waist.
Orthogonality in Mathematics

Vectors and functions are similarly often best represented, memorised and
manipulated in terms of a set of magnitudes of independent components. Recall
that any vector A in 3-dimensional space can be expressed in terms of any
three vectors a, b and c which do not lie in the same plane as:

    A = A1 a + A2 b + A3 c    (2A.1)

where A1, A2 and A3 are appropriately chosen constants.

[Figure 2A.1 shows the vector A in 3-D space resolved into the components
A1 a, A2 b and A3 c.]

The vectors a, b and c are said to be linearly independent, for no one of them
can be expressed as a linear combination of the other two. For example, it is
impossible to write c = αa + βb no matter what choice is made for α and β
(because a linear combination of a and b must stay in the plane of a and b).
Such a set of linearly independent vectors is said to form a basis set for three
dimensional vector space. They span three dimensional space in the sense that
any vector A can be expressed as a linear combination of them.

If a, b and c are mutually perpendicular they form an orthogonal basis set.
Orthogonal means that the projection of one component onto another is zero.
In vector analysis the projection of one vector onto another is given by the dot
product.

Thus, for a, b and c orthogonal, we have the relations:

    a·a = a²    a·b = 0     a·c = 0
    b·a = 0     b·b = b²    b·c = 0    (2A.2)
    c·a = 0     c·b = 0     c·c = c²

Here a is the length of a, b is the length of b, etc.

Hence, if a, b and c are orthogonal, when we project the vector A onto each
of the basis vectors, we get:

    A·a = A1 a·a + A2 b·a + A3 c·a = A1 a²
    A·b = A1 a·b + A2 b·b + A3 c·b = A2 b²    (2A.3)
    A·c = A1 a·c + A2 b·c + A3 c·c = A3 c²

We can see that projection of A onto a particular basis vector results in a scalar
quantity that is proportional to the "amount of A" in that direction. For
example, when projecting A onto a, we get the quantity A1 a², where A1 is the
constant that scales a in the original expression A = A1 a + A2 b + A3 c.

Hence, if we have no knowledge of the components of A, we have an easy way
of finding them when the basis set is orthogonal: just project the vector onto
each of the component vectors in the basis set and normalise:

    A = (A·a / a²) a + (A·b / b²) b + (A·c / c²) c    (2A.4)

It is straightforward to extend this analysis to an infinite dimensional vector
space (although hard to imagine in geometrical terms):

    A = (A·a / a²) a + (A·b / b²) b + … + (A·z / z²) z    (2A.5)

If a², b², c² etc. are 1, the set of vectors are not just orthogonal, they are
orthonormal.

The above description of a vector in three dimensional space is exactly
analogous to resolving a colour into three primary (orthogonal) components. In
this case we project light through red, green and blue filters and find the
intensity of each of the three components. The original colour can be
synthesised once again by red, green and blue lights of appropriate intensity.
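Eq. (2A.4) can be demonstrated numerically: project a vector onto an orthogonal (but not orthonormal) basis, normalise by the squared lengths, and the original vector is recovered. The basis below is a hypothetical one chosen for illustration:

```python
# Recover a 3-D vector from its projections onto an orthogonal basis,
# following A = (A.a/a^2) a + (A.b/b^2) b + (A.c/c^2) c  -- Eq. (2A.4).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# An orthogonal (not orthonormal) basis, chosen for illustration.
a, b, c = [2, 0, 0], [0, 3, 0], [0, 0, 0.5]
A = [4.0, -6.0, 1.5]

coeffs = [dot(A, e) / dot(e, e) for e in (a, b, c)]
reconstructed = [sum(k * e[i] for k, e in zip(coeffs, (a, b, c)))
                 for i in range(3)]
print(coeffs, reconstructed)  # [2.0, -2.0, 3.0] and the original A
```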
The Inner Product

The definition of the dot product for vectors in space can be extended to any
general vector space. Consider two n-dimensional vectors:

[Figure 2A.2 shows two vectors u and v separated by an angle θ.]

The inner product can be written mathematically as:

    ⟨u, v⟩ = uᵀv = vᵀu    (2A.6)

For example, in 3 dimensions:

    ⟨u, v⟩ = u1 v1 + u2 v2 + u3 v3 = |u||v| cos θ    (2A.7)

If θ = 90° the two vectors are orthogonal and ⟨u, v⟩ = 0. They are linearly
independent; that is, one vector cannot be written in a way that contains a
component of the other.

Real functions such as x(t) and y(t) on a given interval a ≤ t ≤ b can be
considered a vector space, since they obey the same laws of addition and
scalar multiplication as spatial vectors. For functions, we can define the inner
product by the integral:

    ⟨x, y⟩ = ∫ (a to b) x(t) y(t) dt    (2A.8)

This definition ensures that the inner product of functions behaves in exactly
the same way as the inner product of vectors. Just like vectors, we say that
two functions are orthogonal if their inner product is zero:

    ⟨x, y⟩ = 0    (2A.9)

Example

Let x(t) = sin(2πt) and y(t) = sin(4πt) over the interval −1/2 ≤ t ≤ 1/2:

[The figure shows one period of x(t) = sin(2πt) and two periods of
y(t) = sin(4πt), each plotted for −1/2 ≤ t ≤ 1/2 with amplitudes between
−1 and 1.]

Then the inner product is:

    ⟨x, y⟩ = ∫ (−1/2 to 1/2) sin(2πt) sin(4πt) dt = 0

Thus, the functions x(t) and y(t) are orthogonal over the interval
−1/2 ≤ t ≤ 1/2.
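The example can be checked numerically by approximating the inner-product integral, Eq. (2A.8), with a midpoint Riemann sum:

```python
import math

# Numerically approximate <x, y> = integral of x(t) y(t) dt over [-1/2, 1/2]
# for x(t) = sin(2 pi t) and y(t) = sin(4 pi t).

def inner_product(x, y, a=-0.5, b=0.5, steps=100000):
    dt = (b - a) / steps
    return sum(x(a + (k + 0.5) * dt) * y(a + (k + 0.5) * dt)
               for k in range(steps)) * dt

x = lambda t: math.sin(2 * math.pi * t)
y = lambda t: math.sin(4 * math.pi * t)

print(inner_product(x, y))   # approximately 0: the functions are orthogonal
print(inner_product(x, x))   # approximately 0.5: <x, x> is not zero
```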

Orthogonality in Power Signals

Consider a finite power signal, x(t). The average power of x(t) is:

    Px = (1/T) ∫ (0 to T) x²(t) dt = (1/T) ⟨x, x⟩    (2A.10)

Now consider two finite power signals, x(t) and y(t). The average value of
the product of the two signals observed over a particular interval, T, is given by
the following expression:

    (1/T) ∫ (0 to T) x(t) y(t) dt = (1/T) ⟨x, y⟩    (2A.11)

This average can also be interpreted as a measure of the correlation between
x(t) and y(t). If the two signals are orthogonal, then over a long enough
period, T, the average of the product tends to zero, since ⟨x, y⟩ = 0. In this
case, when the signals are added together the total power (i.e. the mean square
value) is:

    P(x+y) = (1/T) ⟨x + y, x + y⟩
           = (1/T) ∫ (0 to T) [x(t) + y(t)]² dt
           = (1/T) ∫ (0 to T) [x²(t) + 2x(t)y(t) + y²(t)] dt
           = (1/T) ⟨x, x⟩ + (2/T) ⟨x, y⟩ + (1/T) ⟨y, y⟩
           = Px + 0 + Py    (2A.12)

This means that the total power in the combined signal can be obtained by
adding the power of the individual orthogonal signals.
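Eq. (2A.12) can be verified numerically for two harmonically related sinusoids, which are orthogonal over a common period:

```python
import math

# Average power over one period T = 1 s, approximated by a Riemann sum.
def avg_power(f, T=1.0, steps=100000):
    dt = T / steps
    return sum(f(k * dt) ** 2 for k in range(steps)) * dt / T

x = lambda t: math.cos(2 * math.pi * t)          # average power 1/2
y = lambda t: 3 * math.sin(4 * math.pi * t)      # average power 9/2
s = lambda t: x(t) + y(t)                        # sum of orthogonal signals

print(avg_power(x), avg_power(y), avg_power(s))  # P_{x+y} = P_x + P_y
```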

Orthogonality in Energy Signals

Consider two finite energy signals in the form of pulses in a digital system.
Two pulses, x(t) and y(t), are orthogonal over a time interval, T, if:

    ⟨x, y⟩ = ∫ (over T) x(t) y(t) dt = 0    (2A.13)

Similar to the orthogonal finite power signals discussed above, the total energy
of a pulse produced by adding together two orthogonal pulses can be obtained
by summing the individual energies of the separate pulses:

    E(x+y) = ⟨x + y, x + y⟩
           = ∫ (over T) [x(t) + y(t)]² dt
           = ∫ (over T) [x²(t) + 2x(t)y(t) + y²(t)] dt
           = ⟨x, x⟩ + 2⟨x, y⟩ + ⟨y, y⟩
           = Ex + 0 + Ey    (2A.14)

For example, Figure 2A.3 illustrates two orthogonal pulses, p1(t) and p2(t),
because they occupy two completely separate portions of the time interval 0 to
T. Therefore, their product is zero over the time period of interest, which means
that they are orthogonal.

[Figure 2A.3 shows p1(t), a unit pulse over 0 to T/2, and p2(t), a unit pulse
over T/2 to T.]

The Trigonometric Fourier Series

In 1807 Joseph Fourier showed how to represent any periodic function as a
weighted sum of a family of harmonically related sinusoids. This discovery
turned out to be just a particular case of a more general concept. We can
actually represent any periodic function by a weighted sum of orthogonal
functions; Fourier's sinusoids are then just a special class of orthogonal
functions. Thus, in general, we can declare a function to be represented by:

    g(t) = Σ (n=0 to ∞) An φn(t)    (2A.15)

where the functions φn(t) form an orthogonal basis set, just like the vectors
a, b and c etc. in the geometric vector case. The equivalent of the dot product
(or the light filter) for obtaining a projection in this case is the inner product
given by:

    ⟨g(t), φn(t)⟩ = ∫ (−T0/2 to T0/2) g(t) φn(t) dt    (2A.16)

This is the projection of g(t) onto φn(t), the nth member of the orthogonal
basis set. Equivalent relationships hold between orthogonal functions as they
do between orthogonal vectors. The projection of a function onto itself gives a
number (compare a·a = a²):

    ∫ (−T0/2 to T0/2) φn(t) φm(t) dt = cn²,   n = m    (2A.17)

and the projection of a function onto an orthogonal function gives zero
(compare a·b = 0):

    ∫ (−T0/2 to T0/2) φn(t) φm(t) dt = 0,   n ≠ m    (2A.18)

When cn² = 1 the basis set of functions φn(t) (all n) are said to be orthonormal.

2A.9
There are many possible orthogonal basis sets for representing a function over
an interval of time $T_0$. For example, the infinite set of Walsh functions, shown
in Figure 2A.4, can be used as a basis set.

[Figure 2A.4 – the first eight Walsh functions, $\phi_1(t)$ to $\phi_8(t)$, each defined over the interval 0 to $T_0$]

We can confirm that the Walsh functions are orthogonal with a few simple
integrations, best performed graphically. For example, with n = 2 and m = 3:

$\int_0^{T_0}\phi_2(t)\,\phi_2(t)\,dt = A^2 T_0$

$\int_0^{T_0}\phi_2(t)\,\phi_3(t)\,dt = 0$

[Figure 2A.5 – graphical evaluation of the two integrals: $\phi_2^2(t)$ is the constant $A^2$, whereas $\phi_2(t)\phi_3(t)$ spends equal time at positive and negative values]
The trigonometric Fourier series is a very special way of representing periodic
functions. The basis set chosen for the Fourier series is the set of pairs of sines
and cosines with frequencies that are integer multiples of $f_0 = 1/T_0$:

$a_n(t) = \cos(2\pi n f_0 t)$
$b_n(t) = \sin(2\pi n f_0 t)$    (2A.19)

They were chosen for the two reasons outlined in Lecture 1A: for linear
systems, a sinusoidal input yields a sinusoidal output; and they have a compact
notation using complex numbers. The constant $c_n^2$ in this case is either $T_0/2$ or
$T_0$, as can be seen from the following relations:

$\int_{-T_0/2}^{T_0/2}\cos(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt = \begin{cases} 0 & m \neq n \\ T_0/2 & m = n \neq 0 \\ T_0 & m = n = 0 \end{cases}$    (2A.20a)

$\int_{-T_0/2}^{T_0/2}\sin(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt = \begin{cases} 0 & m \neq n \\ T_0/2 & m = n \neq 0 \\ 0 & m = n = 0 \end{cases}$    (2A.20b)

$\int_{-T_0/2}^{T_0/2}\cos(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt = 0 \quad \text{for all } m, n$    (2A.20c)

If we choose the orthogonal basis set as in Eq. (2A.19), and the representation
of a function as given by Eq. (2A.15), then any periodic function $g(t)$ with
period $T_0$ can be expressed as a sum of orthogonal components:

$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]$    (2A.21)

The frequency $f_0 = 1/T_0$ is the fundamental frequency and the frequency $nf_0$ is
the nth harmonic frequency. The right-hand side of Eq. (2A.21) is known as a
Fourier series, with $a_n$ and $b_n$ known as Fourier series coefficients.
Now look back at Eqs. (2A.4) and (2A.17). If we want to determine the
coefficients in the Fourier series, all we have to do is project the function
onto each of the components of the basis set and normalise by dividing by $c_n^2$:

$a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt$    (2A.22a)

$a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi nf_0 t)\,dt$    (2A.22b)

$b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi nf_0 t)\,dt$    (2A.22c)

Compare these equations with Eq. (2A.3). These equations tell us how to "filter
out" one particular component of the Fourier series. Note that frequency 0 is
DC, and the coefficient $a_0$ represents the average, or DC part, of the periodic
signal $g(t)$.
Example

The following function:

$g(t) = 4 + 3\cos(4\pi t) + 2\sin(12\pi t)$

is already written out as a Fourier series. We identify $f_0 = 2\ \mathrm{Hz}$ and, by
comparison with:

$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]$

the Fourier series coefficients are:

$a_0 = 4, \quad a_1 = 3, \quad b_3 = 2$

with all other Fourier series coefficients zero.
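These coefficients can also be checked numerically by evaluating the projection
integrals of Eqs. (2A.22a)–(2A.22c). The sketch below uses Python rather than
MATLAB, purely for illustration; the helper names are our own, and each
integral is approximated by a midpoint Riemann sum over one period.

```python
import math

T0 = 0.5                  # period of g(t), so f0 = 1/T0 = 2 Hz
f0 = 1 / T0
N = 20000                 # sample points for the numerical integration

def g(t):
    return 4 + 3 * math.cos(4 * math.pi * t) + 2 * math.sin(12 * math.pi * t)

def integrate(func):
    # midpoint Riemann sum over one period, -T0/2 to T0/2
    dt = T0 / N
    return sum(func(-T0 / 2 + (k + 0.5) * dt) for k in range(N)) * dt

# Eqs. (2A.22a)-(2A.22c): project g(t) onto each basis function
a0 = (1 / T0) * integrate(g)
a1 = (2 / T0) * integrate(lambda t: g(t) * math.cos(2 * math.pi * f0 * t))
b3 = (2 / T0) * integrate(lambda t: g(t) * math.sin(2 * math.pi * 3 * f0 * t))

print(round(a0, 3), round(a1, 3), round(b3, 3))   # → 4.0 3.0 2.0
```

Because the integrand is a trigonometric polynomial, the equally spaced sum is
essentially exact here; the projections recover $a_0 = 4$, $a_1 = 3$ and $b_3 = 2$.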
Example

Find the Fourier series for the rectangular pulse train $g(t)$ shown below:

[Figure 2A.6 – rectangular pulse train of amplitude A, pulse width $\tau$, period $T_0$]

Here the period is $T_0$ and $f_0 = 1/T_0$. Using Eqs. (2A.22), we have for the
Fourier series coefficients:

$a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt = \frac{1}{T_0}\int_{-\tau/2}^{\tau/2} A\,dt = \frac{A\tau}{T_0} = A\tau f_0$    (2A.23a)

$a_n = \frac{2}{T_0}\int_{-\tau/2}^{\tau/2} A\cos(2\pi nf_0 t)\,dt = \frac{2A}{n\pi}\sin(n\pi f_0\tau) = 2A\tau f_0\,\mathrm{sinc}(nf_0\tau)$    (2A.23b)

$b_n = \frac{2}{T_0}\int_{-\tau/2}^{\tau/2} A\sin(2\pi nf_0 t)\,dt = 0$    (2A.23c)

We can therefore say:

$\sum_{n=-\infty}^{\infty} A\,\mathrm{rect}\!\left(\frac{t - nT_0}{\tau}\right) = A\tau f_0 + \sum_{n=1}^{\infty} 2A\tau f_0\,\mathrm{sinc}(nf_0\tau)\cos(2\pi nf_0 t)$    (2A.24)

This expression is quite unwieldy. To visualize what the equation means, we
usually tabulate or graph the Fourier series coefficients, and think about the
individual constituent sinusoids.
For example, consider the case where $T_0 = 5\tau$. We can draw up a table of the
Fourier series coefficients as a function of n:

    n      a_n         b_n
    0      0.2A        -
    1      0.3742A     0
    2      0.3027A     0
    3      0.2018A     0
    4      0.0935A     0
    5      0           0
    6     -0.0624A     0
    7     -0.0865A     0
    etc.

A more useful representation of the Fourier series coefficients is a graph:

[Figure 2A.7 – graph of $a_n$ vs. frequency for $T_0 = 5\tau$: the coefficients follow a sinc envelope of peak $0.4A$, with zeros at multiples of $5f_0$]

Observe that the graph of the Fourier series coefficients is discrete: the values
of $a_n$ (and $b_n$) are associated with frequencies that are only multiples of the
fundamental, $nf_0$. This graphical representation of the Fourier series
coefficients lets us see at a glance what the dominant frequencies are, and
how rapidly the amplitudes reduce in magnitude at higher harmonics.
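The tabulated values can be generated directly from Eqs. (2A.23a) and (2A.23b)
with $\tau/T_0 = 1/5$. A stdlib-only Python sketch (A = 1 is assumed; the sinc
convention is the one used in these notes, $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$):

```python
import math

def sinc(x):
    # sinc as defined in these notes: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

A = 1.0
duty = 1 / 5                       # tau * f0 = tau / T0
coeffs = {0: A * duty}             # a0 = A * tau * f0   (Eq. 2A.23a)
for n in range(1, 8):
    coeffs[n] = 2 * A * duty * sinc(n * duty)   # Eq. (2A.23b)

for n, an in coeffs.items():
    print(n, round(an, 4))
```

Running this reproduces the table above: 0.2, 0.3742, 0.3027, 0.2018, 0.0935,
0, -0.0624, -0.0865 (as multiples of A).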

The Compact Trigonometric Fourier Series
The trigonometric Fourier series in Eq. (2A.21) can be written in a more
compact and meaningful way as follows:

$g(t) = \sum_{n=0}^{\infty} A_n\cos(2\pi nf_0 t + \phi_n)$    (2A.25)

By expanding $A_n\cos(2\pi nf_0 t + \phi_n)$ it is easy to show that:

$a_n = A_n\cos\phi_n, \qquad b_n = -A_n\sin\phi_n$    (2A.26)

and therefore:

$A_n = \sqrt{a_n^2 + b_n^2}$    (2A.27a)

$\phi_n = \tan^{-1}\!\left(\frac{-b_n}{a_n}\right)$    (2A.27b)

From the compact Fourier series it follows that $g(t)$ consists of sinusoidal
signals of frequencies 0, $f_0$, $2f_0$, …, $nf_0$, etc. The nth harmonic,
$A_n\cos(2\pi nf_0 t + \phi_n)$, has amplitude $A_n$ and phase $\phi_n$.

We can store the amplitude ($A_n$) and phase ($\phi_n$) information at a harmonic
frequency ($nf_0$) using our phasor notation of Lecture 1A. We can represent the
nth harmonic sinusoid by its corresponding phasor:

$G_n = A_n e^{j\phi_n} = A_n\cos\phi_n + jA_n\sin\phi_n$    (2A.28)

$G_n$ can be derived directly from the Fourier series coefficients using
Eq. (2A.26):

$G_n = a_n - jb_n$    (2A.29)

The negative sign in $G_n = a_n - jb_n$ comes from the fact that in the phasor
representation of a sinusoid the real part of $G_n$ is the amplitude of the cos
component, and the imaginary part of $G_n$ is the amplitude of the $-\sin$
component.

Substituting for $a_n$ and $b_n$ from Eqs. (2A.22a) – (2A.22c) results in:

$G_0 = a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt$    (2A.30)

$G_n = a_n - jb_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi nf_0 t)\,dt - j\,\frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi nf_0 t)\,dt = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\left[\cos(2\pi nf_0 t) - j\sin(2\pi nf_0 t)\right]dt$

which can be simplified using Euler's identity, $e^{-j\theta} = \cos\theta - j\sin\theta$, to give:

$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt, \quad n = 0$    (2A.31a)

$G_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt, \quad n \neq 0$    (2A.31b)

The expression for the compact Fourier series, as in Eq. (2A.25), can now be
written as:

$g(t) = \sum_{n=0}^{\infty}\mathrm{Re}\left\{G_n e^{j2\pi nf_0 t}\right\}$    (2A.32)

Each term in the sum is a phasor rotating at an integer multiple of the
fundamental's angular speed, $n\omega_0 = 2\pi nf_0$. $g(t)$ is the projection of the
instantaneous vector sum of these phasors onto the real axis.

Example

Find the compact Fourier series coefficients for the rectangular pulse train $g(t)$
shown below:

[Figure – rectangular pulse train of amplitude A, pulse width $\tau$ (the pulse occupying $0 \le t \le \tau$), period $T_0$]

Again, the period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.31a), we have for the
average value:

$G_0 = \frac{1}{T_0}\int_0^{\tau} A\,dt = A\tau f_0$    (2A.33)

which can be seen (and checked) from direct inspection of the waveform.

The compact Fourier series coefficients (harmonic phasors) are given by
Eq. (2A.31b):

$G_n = \frac{2}{T_0}\int_0^{\tau} A e^{-j2\pi nf_0 t}\,dt = 2Af_0\left[\frac{e^{-j2\pi nf_0 t}}{-j2\pi nf_0}\right]_0^{\tau} = \frac{A}{jn\pi}\left(1 - e^{-j2\pi nf_0\tau}\right)$    (2A.34)

The next step is not obvious. We separate out the term $e^{-jn\pi f_0\tau}$:

$G_n = \frac{A}{jn\pi}\,e^{-jn\pi f_0\tau}\left(e^{jn\pi f_0\tau} - e^{-jn\pi f_0\tau}\right)$    (2A.35)

and apply Euler's identity:

$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2}$    (2A.36)

to give:

$G_n = \frac{2A}{n\pi}\sin(n\pi f_0\tau)\,e^{-jn\pi f_0\tau}$    (2A.37)

Now remembering that:

$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$    (2A.38)

we get:

$G_n = 2A\tau f_0\,\frac{\sin(n\pi f_0\tau)}{n\pi f_0\tau}\,e^{-jn\pi f_0\tau} = 2A\tau f_0\,\mathrm{sinc}(nf_0\tau)\,e^{-jn\pi f_0\tau} = A_n e^{j\phi_n}$    (2A.39)

Therefore:

$A_n = 2A\tau f_0\,\mathrm{sinc}(nf_0\tau), \qquad \phi_n = -n\pi f_0\tau$    (2A.40)

The Spectrum
Using the compact trigonometric Fourier series, we can formulate an
easy-to-interpret graphical representation of a periodic waveform's constituent
sinusoids. The graph below shows a periodic waveform, made up of the first 6
terms of the expansion of a square wave as a Fourier series, versus time:

[Figure 2A.8 – a partial-sum square wave versus time, with its constituent sinusoids superimposed]

The graph also shows the constituent sinusoids superimposed on the original
waveform. Let's now imagine that we can graph each of the constituent
sinusoids on its own time-axis, and extend these into the z-direction:

[Figure 2A.9 – the constituent sinusoids drawn on separate time-axes, stacked in the z-direction]

Each constituent sinusoid exists at a frequency that is harmonically related to
the fundamental. Therefore, we could set up a graph showing the amplitude of
the constituent sinusoids, with the horizontal scale set up so that each
amplitude value is graphed at the corresponding frequency:

[Figure 2A.10 – the stacked sinusoids viewed end-on, so that each amplitude appears at its own frequency]

We now have a graph that shows the amplitudes of the constituent sinusoids
versus frequency:

[Figure 2A.11 – amplitude vs. frequency, with lines at $f_0$, $3f_0$, $5f_0$, $7f_0$, $9f_0$ and $11f_0$]

This conceptual view now needs to be formalised.

In a periodic signal, the constituent sinusoids are given by $A_n\cos(2\pi nf_0 t + \phi_n)$.
So, amplitude information is not enough: we need to graph phase as well. We
also know that each constituent sinusoid is completely characterised by its
corresponding phasor, $G_n = A_n e^{j\phi_n}$. If we plot $G_n$ versus frequency, we have
what is called a spectrum. However, since $G_n$ is complex, we have to resort to
two plots. Therefore, for a given periodic signal, we can make a plot of the
magnitude spectrum, $A_n$ vs. f, and the phase spectrum, $\phi_n$ vs. f.

We can now think of a spectrum as a graph of the phasor value as a function of
frequency. We call this representation a single-sided spectrum, since it only
involves positive frequencies:

[Figure 2A.12 – a time-domain waveform together with its single-sided amplitude spectrum and phase spectrum]

Example

The compact Fourier series representation for a rectangular pulse train was
found in the previous example. Graphs of the magnitude and phase spectra
appear below for the case $T_0 = 5\tau$:

[Figure – magnitude spectrum $A_n$ vs. f: samples of a $0.4A$ sinc envelope with zeros at multiples of $5f_0$; phase spectrum $\phi_n$ vs. f: the straight line $\phi_n = -n\pi/5$, passing through $-\pi$ and $-2\pi$]

Note how we can have a "negative magnitude" of a sinusoid, as the sinc
function envelope goes negative. If we wanted to, we could make $A_n$ positive
(so it relates directly to the amplitude of the corresponding sinusoid) by simply
adding $\pm 180°$ to $\phi_n$.

Note how we can have phase even though the amplitude is zero (at $5f_0$,
$10f_0$, etc.). We could also "wrap" the phase spectrum around to 0 instead of
graphing linearly past the $-2\pi$ point; in fact we could choose any range that
spans $2\pi$, since phase is periodic with period $2\pi$.

The Complex Exponential Fourier Series
The complex exponential Fourier series is the most mathematically convenient
and useful representation of a periodic signal. Recall that Euler's formulas
relating the complex exponential to cosines and sines are:

$e^{j\theta} = \cos\theta + j\sin\theta$    (2A.41)

$\cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2}$    (2A.42)

$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2}$    (2A.43)

Substitution of Eqs. (2A.42) and (2A.43) into the trigonometric Fourier series,
Eq. (2A.21), gives:

$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi nf_0 t) + b_n\sin(2\pi nf_0 t)\right]$

$= a_0 + \sum_{n=1}^{\infty}\left[\frac{a_n}{2}e^{j2\pi nf_0 t} + \frac{a_n}{2}e^{-j2\pi nf_0 t} - j\frac{b_n}{2}e^{j2\pi nf_0 t} + j\frac{b_n}{2}e^{-j2\pi nf_0 t}\right]$

$= a_0 + \sum_{n=1}^{\infty}\left[\left(\frac{a_n - jb_n}{2}\right)e^{j2\pi nf_0 t} + \left(\frac{a_n + jb_n}{2}\right)e^{-j2\pi nf_0 t}\right]$    (2A.44)

This can be rewritten in the form:

$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi nf_0 t}$    (2A.45)

where (note that, from here on, $G_n$ denotes the complex exponential Fourier
series coefficient, which for $n \ge 1$ is half the compact-series phasor):

$G_0 = a_0 = A_0$    (2A.46a)

$G_n = \frac{a_n - jb_n}{2} = \frac{A_n}{2}e^{j\phi_n}, \quad n \ge 1$    (2A.46b)

$G_{-n} = \frac{a_n + jb_n}{2} = \frac{A_n}{2}e^{-j\phi_n}, \quad n \ge 1$    (2A.46c)

From Eqs. (2A.31a) and (2A.46a), we can see that an alternative way of
writing the Fourier series coefficients, in one neat formula instead of three, is:

$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt$    (2A.47)

Thus, the trigonometric and complex exponential Fourier series are not two
different series, but represent two different ways of writing the same series. The
coefficients of one series can be obtained from those of the other.

The complex exponential Fourier series can also be viewed as being based on
the compact Fourier series, using the fact that for $n \ge 1$ each coefficient is half
the compact-series phasor, and that for every forward rotating phasor $G_n$ there
is a corresponding backward rotating phasor $G_{-n} = G_n^*$. Harmonic phasors
can therefore have "negative frequency". Eqs. (2A.32) and (2A.31a) then become:

$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi nf_0 t}$    (2A.48a)

$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt$    (2A.48b)

where:

$G_{-n} = G_n^*$    (2A.49)

Thus, if:

$G_n = |G_n|\,e^{j\phi_n}$    (2A.50)

then:

$G_{-n} = |G_n|\,e^{-j\phi_n}$    (2A.51)

$|G_n|$ is the magnitude and $\phi_n$ is the phase of $G_n$. For a real $g(t)$, $|G_{-n}| = |G_n|$,
and the double-sided magnitude spectrum, $|G_n|$ vs. f, is an even function of f.
Similarly, the phase spectrum, $\phi_n$ vs. f, is an odd function of f, because
$\phi_{-n} = -\phi_n$. A double-sided spectrum thus shows the negative frequency phasors.
Example

Find the complex exponential Fourier series for the rectangular pulse train
$g(t)$ shown below:

[Figure 2A.13 – rectangular pulse train of amplitude A, pulse width $\tau$ centred on t = 0, period $T_0$]

The period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.48b), we have for the complex
exponential Fourier series coefficients:

$G_n = \frac{1}{T_0}\int_{-\tau/2}^{\tau/2} A e^{-j2\pi nf_0 t}\,dt = \frac{A}{j2\pi n}\left(e^{jn\pi f_0\tau} - e^{-jn\pi f_0\tau}\right) = \frac{A}{n\pi}\sin(n\pi f_0\tau) = A\tau f_0\,\mathrm{sinc}(nf_0\tau)$    (2A.52)

In this case $G_n$ turns out to be a real number (the phase of all the constituent
sinusoids is 0° or ±180°).

For the case of $T_0 = 5\tau$, the double-sided magnitude spectrum is then:

[Figure 2A.14 – double-sided magnitude spectrum $G_n$ vs. f for $T_0 = 5\tau$: a sinc envelope of peak $0.2A$, with zeros at multiples of $5f_0$]

For the case of $T_0 = 2\tau$ (a 50% duty cycle square wave), the spectrum is:

[Figure 2A.15 – double-sided magnitude spectrum for $T_0 = 2\tau$: peak $A/2$ at DC, with spectral lines only at DC and the odd harmonics]

Thus, the Fourier series can be represented by spectral lines at all harmonics of
$f_0 = 1/T_0$, where each line varies according to the complex quantity $G_n$. In
particular, for a rectangular pulse train, $G_n$ follows the envelope of a sinc
function, with amplitude $A\tau/T_0$ and zero crossings at integer multiples of $1/\tau$.
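As a numerical cross-check, Eq. (2A.52) can be compared against a direct
midpoint-rule evaluation of Eq. (2A.48b) for the centred pulse train. This is a
stdlib-only Python sketch with assumed values A = 1, T₀ = 5, τ = 1 (i.e. the
$T_0 = 5\tau$ case above):

```python
import cmath
import math

A, T0, tau = 1.0, 5.0, 1.0
f0 = 1 / T0
N = 50000

def g(t):
    # centred rectangular pulse of width tau, amplitude A
    return A if abs(t) < tau / 2 else 0.0

def G_numeric(n):
    # Eq. (2A.48b) approximated by a midpoint Riemann sum
    dt = T0 / N
    total = 0j
    for k in range(N):
        t = -T0 / 2 + (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * n * f0 * t) * dt
    return total / T0

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

for n in range(4):
    formula = A * tau * f0 * sinc(n * f0 * tau)   # Eq. (2A.52)
    print(n, round(formula, 4), round(G_numeric(n).real, 4))
```

The two columns agree: the closed-form sinc expression and the defining
integral give the same line values.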

Another case of interest, which is fundamental to the analysis of digital
systems, is when we allow each pulse in a rectangular pulse train to turn into
an impulse, i.e. $\tau \to 0$ and $A \to \infty$ such that $A\tau = 1$. In this case, each pulse in
Figure 2A.13 becomes an impulse of unit strength, and $g(t)$ is simply a
uniform train of unit impulses, as shown below:

[Figure 2A.16 – a uniform train of unit impulses at $t = nT_0$, for all integer n]

The result of Eq. (2A.52) is still valid if we take the appropriate limit:

$G_n = \lim_{\tau \to 0} A\tau f_0\,\mathrm{sinc}(nf_0\tau) = f_0$    (2A.53)

Thus, with $A\tau = 1$, the amplitude of the sinc function envelope is $A\tau f_0 = f_0$,
and when $\tau \to 0$, $\mathrm{sinc}(nf_0\tau) \to \mathrm{sinc}(0) = 1$. Therefore:

$g(t) = \sum_{n=-\infty}^{\infty}\delta(t - nT_0) = \sum_{n=-\infty}^{\infty} f_0\,e^{j2\pi nf_0 t}$    (2A.54)

The spectrum has components of frequencies $nf_0$, with n varying from $-\infty$ to $\infty$,
including 0, all with an equal strength of $f_0$, as shown below:

[Figure 2A.17 – spectrum of the impulse train: lines of equal strength $f_0$ at every harmonic $nf_0$]

The complex exponential Fourier series is so convenient we will use it almost
exclusively. Therefore, when we refer to "the spectrum" of a signal, we are
referring to its double-sided spectrum. It turns out that the double-sided
spectrum is the easiest to use when describing signal operations in systems. It
also enables us to calculate the average power of a signal in an easy manner.

How to Use MATLAB to Check Fourier Series Coefficients
MATLAB is a software package that is particularly suited to signal
processing. It has instructions that will work on vectors and matrices. A vector
can be set up which gives the samples of a signal. Provided the sample spacing
meets the Nyquist criterion, the instruction G=fft(g) returns a vector
containing N times the Fourier series coefficients, where G(1) = N·G_0 is the DC
term, G(2) = N·G_1, G(3) = N·G_2, etc., and G(N) = N·G_{-1}, G(N-1) = N·G_{-2}, etc.,
where N is the size of the vector. g=ifft(G) does the inverse Fourier
transform.

Example

Suppose we want to find the Fourier series coefficients of $g(t) = \cos(2\pi t)$. Note
that the period is 1 s.

Step 1

Choose the sample frequency. Since the highest frequency present is 1 Hz,
choose 4 Hz (the minimum is anything > 2 Hz).

Step 2

Take samples over one period, starting at t = 0. Note N = 4:

g = [1 0 -1 0]

Step 3

Find G = fft(g) = [0 2 0 2].

Hence $G_0 = 0$, $G_1 = 2/4 = 1/2$ and $G_{-1} = 2/4 = 1/2$. $G_2$ should be zero if the Nyquist
criterion is met. These are in fact the Fourier series coefficients of $g(t)$.
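The same check can be made without MATLAB. The following Python sketch rolls
its own DFT directly from the definition (stdlib only; a library FFT would give
identical values):

```python
import cmath

def dft(g):
    # direct DFT: G[k] = sum_n g[n] * exp(-j*2*pi*k*n/N)
    N = len(g)
    return [sum(g[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

g = [1, 0, -1, 0]        # cos(2*pi*t) sampled at 4 Hz over one period
G = dft(g)
print([round(abs(x), 6) for x in G])   # → [0.0, 2.0, 0.0, 2.0]
```

Dividing each entry by N = 4 gives $G_0 = 0$ and $G_1 = G_{-1} = 1/2$, matching the
hand calculation.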

Example

Find the Fourier series coefficients of a 50% duty cycle square wave.

Step 1

In this case the spectrum extends forever, so we can never choose $f_s$ high
enough: there is always some error. Suppose we choose 8 points over one cycle.

Step 2

g = [1 1 0.5 0 0 0 0.5 1]

Note: if samples occur on transitions, input the half-way point.

Step 3

G = fft(g) = [4 2.4142 0 -0.4142 0 -0.4142 0 2.4142]

Therefore $G_1 = 2.4142/8 = 0.3018$. The true value is $1/\pi = 0.3183$. Using 16 points,
$G_1 = 0.3142$.
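The 8-point and 16-point results can likewise be reproduced with a direct DFT
in Python (a stdlib-only sketch; the sample vectors use the half-way-point
convention described above, and the 16-point vector is our own construction):

```python
import cmath
import math

def dft(g):
    N = len(g)
    return [sum(g[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

g8 = [1, 1, 0.5, 0, 0, 0, 0.5, 1]                              # 8 points per cycle
g16 = [1, 1, 1, 1, 0.5, 0, 0, 0, 0, 0, 0, 0, 0.5, 1, 1, 1]     # 16 points per cycle

G1_8 = dft(g8)[1].real / 8
G1_16 = dft(g16)[1].real / 16
print(round(G1_8, 4), round(G1_16, 4), round(1 / math.pi, 4))  # → 0.3018 0.3142 0.3183
```

Doubling the number of samples halves roughly nothing but the aliasing error;
the 16-point estimate is visibly closer to the true value $1/\pi$.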

You should read Appendix A, The Fast Fourier Transform, and look at the
example MATLAB code in the FFT Quick Reference Guide for more
complicated and useful examples of setting up and using the FFT.

Symmetry in the Time Domain
A waveform which is symmetric in the time domain will have a spectrum with
certain properties. Identification of time-domain symmetry can lead to
conclusions about the spectrum without doing any calculations: a useful skill
to possess.

Even Symmetry

An even function is one which possesses symmetry about the $g(t)$ axis.
Mathematically, an even function is such that $g(-t) = g(t)$. An even function
can be expressed as a sum of cosine waves only (cosines are even functions),
and therefore all the $G_n$ are real. To see why, we consider the imaginary
component of $G_n = (a_n - jb_n)/2$, which must be zero. That is, we look at the
formula for $b_n$ as given by the trigonometric Fourier series:

$b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi nf_0 t)\,dt$    (2A.55)

When $g(t)$ has even symmetry, the integrand in the formula for $b_n$ is
even × odd = odd. The limits on the integral span equal portions of negative
and positive time, and so we get $b_n = 0$ after performing the integration. The
integrand in the formula for $a_n$ is even × even = even, and will only be zero if
the function $g(t)$ is zero. Thus we have the property:

$g(t)\ \text{even} \;\Rightarrow\; G_n\ \text{real}$    (2A.56)

The phase of $G_n$ is therefore 0° or ±180°, depending on whether the real number
is positive or negative.

Example

Find the compact Fourier series coefficients for the cos-shaped pulse train $g(t)$
shown below:

[Figure 2A.18 – one period of the cos-shaped pulse train: $2\cos(3\pi t)$ for $-1/2 \le t \le 1/2$, zero over the rest of the period $(-1, 1)$]

The function is represented by:

$g(t) = \begin{cases} 0 & -1 \le t \le -1/2 \\ 2\cos(3\pi t) & -1/2 \le t \le 1/2 \\ 0 & 1/2 \le t \le 1 \end{cases}$    (2A.57)

and $g(t) = g(t + 2)$.

The period is $T_0 = 2$ and $f_0 = 1/T_0 = 1/2\ \mathrm{Hz}$. Using Eq. (2A.48b), we have for
the complex exponential Fourier series coefficients:

$G_n = \frac{1}{2}\int_{-1}^{1} g(t)\,e^{-jn\pi t}\,dt = \frac{1}{2}\int_{-0.5}^{0.5} 2\cos(3\pi t)\,e^{-jn\pi t}\,dt = \frac{1}{2}\int_{-0.5}^{0.5}\left(e^{j3\pi t} + e^{-j3\pi t}\right)e^{-jn\pi t}\,dt$    (2A.58)

Now, combining the exponentials and integrating, we have:

$G_n = \frac{1}{2}\left[\frac{e^{j(3-n)\pi t}}{j(3-n)\pi} - \frac{e^{-j(3+n)\pi t}}{j(3+n)\pi}\right]_{-0.5}^{0.5} = \frac{1}{2}\,\frac{\sin[(3-n)\pi/2]}{(3-n)\pi/2} + \frac{1}{2}\,\frac{\sin[(3+n)\pi/2]}{(3+n)\pi/2}$    (2A.59)

Putting $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$, we obtain:

$G_n = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right)$    (2A.60)

Now since $G_0 = a_0$ and $G_n = (a_n - jb_n)/2$ for $n \ge 1$, we can see that the
imaginary part is zero (as we expect for an even function) and that:

$G_0 = a_0, \qquad G_n = \frac{a_n}{2} = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \ge 1$    (2A.61)

and therefore:

$a_0 = -\frac{2}{3\pi}, \qquad a_n = \mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \ge 1$    (2A.62)

Thus, the Fourier series expansion of the wave contains only cosine terms:

$g(t) = -\frac{2}{3\pi} + \sum_{n=1}^{\infty}\left[\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right)\right]\cos(n\pi t)$    (2A.63)

Recall that $\mathrm{sinc}(x) = 0$ for all non-zero integer values of x. So we expect
$G_n = 0$ for every odd term, except $n = 3$, when $a_3 = 1$.
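A numerical projection confirms these values. This stdlib-only Python sketch
evaluates the $a_n$ integrals of Eq. (2A.22) for the cos-shaped pulse with a
midpoint sum:

```python
import math

T0, f0 = 2.0, 0.5
N = 20000

def g(t):
    # one period of the cos-shaped pulse, -1 <= t <= 1
    return 2 * math.cos(3 * math.pi * t) if abs(t) <= 0.5 else 0.0

def a(n):
    dt = T0 / N
    scale = (1 / T0) if n == 0 else (2 / T0)
    return scale * sum(
        g(-1 + (k + 0.5) * dt) * math.cos(2 * math.pi * n * f0 * (-1 + (k + 0.5) * dt))
        for k in range(N)) * dt

print(round(a(0), 4))   # ≈ -2/(3*pi) ≈ -0.2122
print(round(a(3), 4))   # ≈ 1
print(round(a(5), 4))   # ≈ 0 (an odd harmonic other than n = 3)
```

The projections agree with the sinc expression: $a_0 = -2/(3\pi)$, $a_3 = 1$, and
every other odd coefficient vanishes.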

The Fourier series expansion, for the first 5 terms and 10 terms, is shown
below together with the original wave:

[Figure – partial sums of the Fourier series (5 and 10 terms) overlaid on the original cos-shaped pulse train]

One interesting point to note is that, for even functions, $G_n = a_n/2$ for $n \ge 1$,
and therefore $G_{-n} = G_n^* = a_n/2 = G_n$ for $n \ge 1$. That is, for even functions, the
complex exponential Fourier series coefficients have even symmetry too!

A graph of the spectrum in this case looks like:

[Figure – the double-sided spectrum $G_n$ vs. f: real values with even symmetry, with lines from $-5f_0$ to $5f_0$]

Odd Symmetry

An odd function is one which possesses rotational symmetry about the origin.
Mathematically, an odd function is such that $g(-t) = -g(t)$. An odd function
can be expressed as a sum of sine waves only (sine waves are odd functions),
and therefore all the $G_n$ are imaginary. To see why, we consider the real
component of $G_n = (a_n - jb_n)/2$, which must be zero. That is, we look at the
formula for $a_n$ as given by the trigonometric Fourier series:

$a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi nf_0 t)\,dt$    (2A.64)

When $g(t)$ has odd symmetry, the integrand in the formula for $a_n$ is
odd × even = odd. The limits on the integral span equal portions of negative
and positive time, and so we get $a_n = 0$ after performing the integration. The
integrand in the formula for $b_n$ is odd × odd = even, and will only be zero if
the function $g(t)$ is zero. Thus we have the property:

$g(t)\ \text{odd} \;\Rightarrow\; G_n\ \text{imaginary}$    (2A.65)

The phase of $G_n$ is therefore +90° or -90°, depending on whether the imaginary
number is positive or negative.

Example

Find the complex exponential Fourier series coefficients for the sawtooth
waveform $g(t)$ shown below:

[Figure 2A.19 – sawtooth wave: $g(t) = 2At/T_0$ on $-T_0/2 < t < T_0/2$, rising from $-A$ to $A$ each period]

The period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.48b), we have for the complex
exponential Fourier series coefficients:

$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}\frac{2A}{T_0}\,t\,e^{-j2\pi nf_0 t}\,dt$    (2A.66)

By inspection of the original waveform, the DC component is zero and so:

$G_0 = 0$    (2A.67)

For $n \ge 1$, apply integration by parts, where:

$u = t, \quad dv = e^{-j2\pi nf_0 t}\,dt, \quad du = dt, \quad v = \frac{e^{-j2\pi nf_0 t}}{-j2\pi nf_0}$    (2A.68)

We then have:

$G_n = \frac{2A}{T_0^2}\left\{\left[\frac{t\,e^{-j2\pi nf_0 t}}{-j2\pi nf_0}\right]_{-T_0/2}^{T_0/2} + \frac{1}{j2\pi nf_0}\int_{-T_0/2}^{T_0/2} e^{-j2\pi nf_0 t}\,dt\right\} = \frac{jA}{n\pi}\cos(n\pi) - \frac{jA}{(n\pi)^2}\sin(n\pi)$    (2A.69)

Putting $\sin(n\pi) = 0$ and $\cos(n\pi) = (-1)^n$ for all integer values of n, we obtain:

$G_n = \frac{jA}{n\pi}(-1)^n, \quad n \ge 1$    (2A.70)

Now since $G_0 = a_0$ and $G_n = (a_n - jb_n)/2$ for $n \ge 1$, we can see that the real
part is zero (as we expect for an odd function) and that:

$G_n = -j\,\frac{b_n}{2} = \frac{jA}{n\pi}(-1)^n, \quad n \ge 1$    (2A.71)

and therefore:

$b_n = \frac{2A}{n\pi}(-1)^{n+1}, \quad n \ge 1$    (2A.72)

Thus, the Fourier series expansion of the sawtooth wave contains only sine
terms:

$g(t) = \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(2\pi nf_0 t)$    (2A.73)
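Eq. (2A.72) can be verified numerically. The stdlib-only Python sketch below
(A = 1 and T₀ = 1 are assumed) projects the sawtooth onto $\sin(2\pi nf_0 t)$:

```python
import math

A, T0 = 1.0, 1.0
f0 = 1 / T0
N = 20000

def g(t):
    return 2 * A * t / T0           # sawtooth on -T0/2 < t < T0/2

def b(n):
    # Eq. (2A.22c) approximated by a midpoint Riemann sum
    dt = T0 / N
    return (2 / T0) * sum(
        g(-T0 / 2 + (k + 0.5) * dt) * math.sin(2 * math.pi * n * f0 * (-T0 / 2 + (k + 0.5) * dt))
        for k in range(N)) * dt

for n in range(1, 4):
    predicted = 2 * A / (n * math.pi) * (-1) ** (n + 1)   # Eq. (2A.72)
    print(n, round(b(n), 4), round(predicted, 4))
```

The numerically computed $b_n$ match the alternating $2A/(n\pi)$ pattern, with the
sign flipping each harmonic.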

The Fourier series expansion, for the first 5 terms and 10 terms, is shown
below together with the original sawtooth wave:

[Figure – partial sums of the Fourier series (5 and 10 terms) overlaid on the original sawtooth wave]

One interesting point to note is that, for odd functions, $G_n = -jb_n/2$, and
therefore $G_{-n} = G_n^* = jb_n/2 = -G_n$. That is, for odd functions, the complex
exponential Fourier series coefficients have odd symmetry too!

A graph of the spectrum in this case looks like (the values are imaginary):

[Figure – the double-sided spectrum $G_n$ vs. f: imaginary values with odd symmetry, with lines from $-4f_0$ to $4f_0$]

Half-Wave Symmetry

A half-wave symmetric function is one in which each half-period is the same
as the one before it, except it is "upside down". Mathematically, this is
expressed as $g(t) = -g(t - T_0/2)$. Half-wave symmetry is not dependent on our
choice of origin. An example of a half-wave symmetric function is:

[Figure 2A.20 – a half-wave symmetric waveform: each half-period repeats the previous one, inverted]

Many half-wave symmetric waveforms occur in electrical engineering.
Examples include the magnetising current of a transformer, the saturated output
of an amplifier, and any system whose transfer characteristic possesses odd
symmetry.

Looking at the formula for the complex exponential Fourier series coefficients:

$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt$    (2A.74)

we split it up into two parts:

$G_n = \underbrace{\frac{1}{T_0}\int_{-T_0/2}^{0} g(t)\,e^{-j2\pi nf_0 t}\,dt}_{I_1} + \underbrace{\frac{1}{T_0}\int_{0}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt}_{I_2}$    (2A.75)

Now, if $g(t)$ is half-wave symmetric, then:

$I_2 = \frac{1}{T_0}\int_{0}^{T_0/2} g(t)\,e^{-j2\pi nf_0 t}\,dt = -\frac{1}{T_0}\int_{0}^{T_0/2} g(t - T_0/2)\,e^{-j2\pi nf_0 t}\,dt$    (2A.76)

Letting $\lambda = t - T_0/2$, we have:

$I_2 = -\frac{1}{T_0}\int_{-T_0/2}^{0} g(\lambda)\,e^{-j2\pi nf_0(\lambda + T_0/2)}\,d\lambda = -e^{-jn\pi}\,\frac{1}{T_0}\int_{-T_0/2}^{0} g(\lambda)\,e^{-j2\pi nf_0\lambda}\,d\lambda$    (2A.77)

Now, since the value of a definite integral is independent of the variable used in
the integration, and noting that $e^{-jn\pi} = (-1)^n$, we can see that:

$I_2 = -(-1)^n I_1 = (-1)^{n+1} I_1$    (2A.78)

Therefore:

$G_n = I_1 + I_2 = \left[1 - (-1)^n\right] I_1 = \begin{cases} 2I_1 & n\ \text{odd} \\ 0 & n\ \text{even} \end{cases}$    (2A.79)

Thus, a half-wave symmetric function will have only odd harmonics; all even
harmonics are zero. Thus we have the property:

half-wave symmetry $\Rightarrow$ only odd harmonics    (2A.80)
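The property is easy to demonstrate numerically. This stdlib-only Python
sketch computes $|G_n|$ for a bipolar square wave, which satisfies
$g(t) = -g(t - T_0/2)$, and shows the even harmonics vanishing:

```python
import cmath
import math

T0 = 1.0
f0 = 1 / T0
N = 20000

def g(t):
    # bipolar square wave on -1/2 < t < 1/2: half-wave symmetric, no DC
    return 1.0 if t >= 0 else -1.0

def G(n):
    # Eq. (2A.74) approximated by a midpoint Riemann sum
    dt = T0 / N
    total = 0j
    for k in range(N):
        t = -T0 / 2 + (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * n * f0 * t) * dt
    return total / T0

mags = [abs(G(n)) for n in range(1, 6)]
print([round(m, 4) for m in mags])   # odd harmonics follow 2/(n*pi); even ones are zero
```

The odd-harmonic magnitudes come out near $2/\pi$, $2/(3\pi)$ and $2/(5\pi)$, while
the 2nd and 4th harmonics are numerically zero, exactly as Eq. (2A.79) predicts.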

Example

A square pulse train (50% duty cycle square wave) is a special case of the
rectangular pulse train, for which $T_0 = 2\tau$. For this case $g(t)$ is:

[Figure 2A.21 – 50% duty cycle square wave of amplitude A and period $T_0$]

The period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.52), and substituting $\tau = T_0/2$,
we have for the complex exponential Fourier series coefficients:

$G_n = A\tau f_0\,\mathrm{sinc}(nf_0\tau) = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right)$    (2A.81)

Recalling that $\mathrm{sinc}(x) = 0$ for all non-zero integer values of x, we can see that
$G_n$ will be zero for all even harmonics (except $G_0$). We expect this since, apart
from a DC component of $A/2$, $g(t)$ is half-wave symmetric. The spectrum is:

[Figure 2A.22 – spectrum of the square wave: a line of $A/2$ at DC, and lines at the odd harmonics only]

In this case the spacing of the discrete spectral frequencies, $nf_0$, is such that
each even harmonic falls on a zero of the sinc function envelope.

Power
One of the advantages of representing a signal in terms of a set of orthogonal
components is that it is very easy to calculate its average power. Because the
components are orthogonal, the total average power is just the sum of the
average powers of the orthogonal components.

For example, if the double-sided spectrum is being used: since the magnitude
of the sinusoid represented by the phasor $G_n$ is $2|G_n|$, and the average power
of a sinusoid of amplitude A is $P = A^2/2$, the total power in the signal $g(t)$ is:

$P = \sum_{n=-\infty}^{\infty}|G_n|^2 = \sum_{n=-\infty}^{\infty} G_n G_n^*$    (2A.82)

Note that the DC component only appears once in the sum ($n = 0$). Its power
contribution is $G_0^2$, which is correct.
Example

How much of the power of a 50% duty cycle rectangular pulse train is
contained in the first three harmonics?

[Figure 2A.23 – 50% duty cycle square wave of amplitude A, pulse width $T_0/2$, period $T_0$]

We first find the total power in the time-domain:

$P = \frac{1}{T_0}\int_{-T_0/4}^{T_0/4} A^2\,dt = \frac{A^2}{2}$    (2A.83)

To find the power in each harmonic, we work in the frequency domain. We
note that the Fourier series coefficients are given by Eq. (2A.52):

$G_n = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right)$    (2A.84)

and draw the double-sided spectrum:

[Figure 2A.24 – double-sided spectrum of the 50% duty cycle square wave: $A/2$ at DC, and lines at the odd harmonics only]

We have for the DC power contribution:

$P_0 = G_0^2 = \frac{A^2}{4}$    (2A.85)

which is 50% of the total power. The DC plus fundamental power is:

$P_0 + P_1 = \sum_{n=-1}^{1}|G_n|^2 = G_0^2 + 2|G_1|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} = 0.4526A^2$    (2A.86)

which is 90.5% of the total power.

The power of the components up to and including the 3rd harmonic is:

$P_0 + P_1 + P_2 + P_3 = \sum_{n=-3}^{3}|G_n|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} + 0 + \frac{2A^2}{9\pi^2} = 0.4752A^2$    (2A.87)

which is 95% of the total power. Thus, the spectrum makes obvious a
characteristic of a periodic signal that is not obvious in the time domain. In this
case, it was surprising to learn that 95% of the power in a square wave is
contained in the frequency components up to the 3rd harmonic. This is
important: we may wish to lowpass filter this signal for some reason, but
retain most of its power. We are now in a position to give the cutoff frequency
of a lowpass filter to retain any amount of power that we desire.
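The percentages above can be reproduced directly from Eq. (2A.82). A
stdlib-only Python sketch (A = 1 is assumed):

```python
import math

A = 1.0

def G(n):
    # square-wave coefficients, Eq. (2A.84): (A/2) * sinc(n/2)
    if n == 0:
        return A / 2
    x = n / 2
    return (A / 2) * math.sin(math.pi * x) / (math.pi * x)

P_total = A ** 2 / 2                 # Eq. (2A.83)

def P_upto(Nmax):
    # double-sided sum of |G_n|^2, Eq. (2A.82), truncated at the Nmax-th harmonic
    return sum(G(n) ** 2 for n in range(-Nmax, Nmax + 1))

print(round(P_upto(0) / P_total, 3))   # → 0.5
print(round(P_upto(1) / P_total, 3))   # → 0.905
print(round(P_upto(3) / P_total, 3))   # → 0.95
```

A one-line change to `P_upto` lets you answer the design question posed above:
keep increasing Nmax until the retained power fraction exceeds whatever target
you require.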

Filters
Filters are devices that shape the input signal's spectrum to produce a new
output spectrum. They shape the input spectrum by changing the amplitude and
phase of each component sinusoid. This frequency-domain view of filters has
been with us implicitly: we specify the filter in terms of a transfer function
$H(s)$. When evaluated at $s = j\omega$, the transfer function is a complex number.
The magnitude of this complex number, $|H(j\omega)|$, multiplies the corresponding
magnitude of the component phasor of the input signal. The phase of the
complex number, $\angle H(j\omega)$, adds to the phase of the component phasor.

[Figure 2A.25 – a filter acting on each component of a periodic input: excitation at DC gives $Y_0 = H(0)X_0$; excitation with a sinusoid at the fundamental gives $Y_1 = H(f_0)X_1$; excitation at the nth harmonic gives $Y_n = H(nf_0)X_n$; overall, the input spectrum $X_n$ multiplied by $H(f)$ gives the output spectrum $Y_n$]

Recall from Lecture 1A that it is the sinusoid that possesses the special
property with a linear system: a sinusoid in gives a sinusoid out. We now
have a view that a sum of sinusoids in gives a sum of sinusoids out, or more
simply: "a spectrum in" gives "a spectrum out". The input spectrum is changed
by the frequency response of the system to give the output spectrum.

This view of filters operating on individual components of an input signal has
been implicit in the characterisation of systems via the frequency response.
Experimentally, we determine the frequency response of a system by
performing the operations in the top half of Figure 2A.25. That is, we apply
different sinusoids (including DC, which can be thought of as a sinusoid of zero
frequency) to a system and measure the resulting amplitude change and phase
shift of the output sinusoid. We then build up a picture of $H(f)$ by plotting the
experimentally derived points on a graph (if log scales are chosen, then we have
a Bode plot). After obtaining the frequency response, we should be able to tell
what happens when we apply any periodic signal, as shown in the bottom half
of Figure 2A.25. The next example illustrates this process.
Example

Let's see what happens to a square wave when it is passed through a 3rd
order Butterworth filter. For the filter, we have:

$|H(j\omega)| = \frac{1}{\sqrt{1 + (\omega/\omega_0)^6}}$    (2A.88a)

$\angle H(j\omega) = -\tan^{-1}\!\left[\frac{2(\omega/\omega_0) - (\omega/\omega_0)^3}{1 - 2(\omega/\omega_0)^2}\right]$    (2A.88b)

Here, $\omega_0$ is the cutoff frequency of the filter. It does not represent the
angular frequency of the fundamental component of the input signal: filters
know nothing about the signals to be applied to them. Since the filter is linear,
superposition applies. For each component phasor of the input signal, we
multiply by $|H(j\omega)|\,e^{j\angle H(j\omega)}$. We then reconstruct the signal by adding up the
filtered component phasors.

This is an operation best performed and thought about graphically. For the case
of the filter cutoff frequency set at twice the input signal's fundamental
frequency, we have for the output magnitude spectrum:

[Figure 2A.26 – the input square wave's magnitude spectrum, the 3rd order Butterworth magnitude response with cutoff at $2f_0$, and the resulting output magnitude spectrum]

We could perform a similar operation for the phase spectrum.
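The graphical operation can be sketched in Python: each harmonic phasor of
the square wave is scaled by the Butterworth response evaluated at that
harmonic. The normalised 3rd-order Butterworth polynomial
$s^3 + 2s^2 + 2s + 1$ is used, which reproduces the magnitude of Eq. (2A.88a);
A = 1 and a cutoff of $2f_0$ are assumed:

```python
import math

A, f0 = 1.0, 1.0
w0 = 2 * math.pi * (2 * f0)       # cutoff at twice the fundamental

def G(n):
    # square-wave coefficients, Eq. (2A.81)
    if n == 0:
        return A / 2
    x = n / 2
    return (A / 2) * math.sin(math.pi * x) / (math.pi * x)

def H(w):
    # normalised 3rd-order Butterworth: H(s) = 1/(s^3 + 2s^2 + 2s + 1), s = jw/w0
    s = 1j * w / w0
    return 1 / (s ** 3 + 2 * s ** 2 + 2 * s + 1)

for n in (0, 1, 3, 5):
    w = 2 * math.pi * n * f0
    Y = H(w) * G(n)               # output phasor = filter response x input phasor
    print(n, round(abs(G(n)), 4), round(abs(Y), 4))
```

The magnitude of H computed this way equals $1/\sqrt{1 + (\omega/\omega_0)^6}$, so the
fundamental passes almost unchanged while the 3rd and 5th harmonics are
attenuated sharply: exactly the shaping shown in Figure 2A.26.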

If we now take the output spectrum and reconstruct the time-domain waveform
it represents, we get:
The output signal in the time-domain obtained from the output spectrum

[Figure 2A.27 — output waveform $g(t)$: a smooth, nearly sinusoidal wave of amplitude about $A$, shown over periods $T_0$ and $2T_0$]

Figure 2A.27

This looks like a shifted sinusoid (DC + sine wave) with a touch of 3rd
harmonic distortion. With practice, you are able to recognise the components
of waveforms and hence relate them to their magnitude spectrum as in
Figure 2A.26. How do we know it is 3rd harmonic distortion? Without the DC
component the waveform exhibits half-wave symmetry, so we know the 2nd
harmonic (an even harmonic) is zero.

Some features to look for in a spectrum
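The half-wave symmetry claim is easy to check numerically. This sketch (plain Python; the helper `fourier_coeff` and the unit square wave are illustrative assumptions) approximates the complex exponential Fourier series coefficients and shows the even harmonics vanishing:

```python
import cmath
import math

def fourier_coeff(g, n, T0, N=4000):
    # Midpoint-rule approximation of Gn = (1/T0) * integral over one period of
    # g(t) * exp(-j*2*pi*n*t/T0) dt
    dt = T0 / N
    total = 0j
    for k in range(N):
        t = (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * n * t / T0)
    return total * dt / T0

T0 = 1.0
square = lambda t: 1.0 if (t % T0) < T0 / 2 else -1.0   # half-wave symmetric: g(t+T0/2) = -g(t)

G1 = fourier_coeff(square, 1, T0)   # odd harmonic: |G1| = 2/pi
G2 = fourier_coeff(square, 2, T0)   # even harmonic: zero
```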
If we extend the cutoff frequency to ten times the fundamental frequency of the
input square wave, the filter has less effect:

Sharp transitions in the time-domain are caused by high frequencies

[Figure 2A.28 — output waveform $g(t)$: a square wave of amplitude $A$ with slightly rounded transitions, shown over periods $T_0$ and $2T_0$]

Figure 2A.28

From this example it should be apparent that high frequencies are needed to
make sharp transitions in the time-domain.

Relationships Between the Three Fourier Series Representations
The table below shows the relationships between the three different
representations of the Fourier series:

Trigonometric Fourier series:

$$g(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(2\pi n f_0 t) + b_n\sin(2\pi n f_0 t)\right]$$

$$a_0 = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\,dt$$

$$a_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt, \qquad b_n = \frac{2}{T_0}\int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt$$

Compact trigonometric Fourier series:

$$g(t) = \sum_{n=0}^{\infty} A_n\cos(2\pi n f_0 t + \phi_n)$$

with $A_0 = |G_0|$, and $A_n = 2|G_n|$, $\phi_n = \angle G_n$ for $n \neq 0$.

Complex exponential Fourier series:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t}$$

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-j2\pi n f_0 t}\,dt$$

Spectrum of a single sinusoid $A\cos(2\pi f_0 t + \phi)$: in the trigonometric form, a line of height $A\cos\phi$ in the $a_n$ spectrum and $-A\sin\phi$ in the $b_n$ spectrum at $f_0$; in the compact form, a single line of height $A$ at $f_0$ with phase $\phi$; in the complex exponential form, lines of height $A/2$ at $\pm f_0$ with phases $\pm\phi$.
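The relationships between the three representations can be exercised in code. A minimal sketch (plain Python; the helper names are assumptions) using the standard conversions $G_n = (a_n - jb_n)/2$ and $A_n = 2|G_n|$, $\phi_n = \angle G_n$ for $n \ge 1$:

```python
import cmath
import math

def trig_to_exp(an, bn):
    # Complex exponential coefficient from trigonometric coefficients (n >= 1)
    return (an - 1j * bn) / 2

def exp_to_compact(Gn):
    # Compact-form amplitude and phase from Gn (n >= 1)
    return 2 * abs(Gn), cmath.phase(Gn)

# Example: 3 cos(2*pi*t) + 4 sin(2*pi*t) = 5 cos(2*pi*t - 53.13 degrees)
G1 = trig_to_exp(3.0, 4.0)
A1, phi1 = exp_to_compact(G1)
```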

Summary

- All periodic waveforms are made up of a sum of sinusoids: a Fourier series. There are three equivalent notations for the Fourier series.

- The trigonometric Fourier series expresses a periodic signal as a DC (or average) term and a sum of harmonically related cosinusoids and sinusoids.

- The compact trigonometric Fourier series expresses a periodic signal as a sum of cosinusoids of varying amplitude and phase.

- The complex exponential Fourier series expresses a periodic signal as a sum of counter-rotating harmonic phasors.

- The coefficients of the basis functions in the Fourier series are called Fourier series coefficients. For the complex exponential Fourier series, they are complex numbers, and are just the phasor representation of the sinusoid at that particular harmonic frequency.

- Fourier series coefficients can be found for any periodic waveform by taking projections of the periodic waveform onto each of the orthogonal basis functions making up the Fourier series.

- A spectrum of a periodic waveform is a graph of the amplitudes and phases of constituent basis functions.

- The most convenient spectrum is the double-sided spectrum. It is a graph of the complex exponential Fourier series coefficients. We usually have to graph a magnitude spectrum and a phase spectrum.

References
Haykin, S.: Communication Systems, John-Wiley & Sons, Inc., New York,
1994.
Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

Quiz
Encircle the correct answer, cross out the wrong answers. [one or none correct]

1. The signal $3\cos(2t) + 4\sin(4t)$ has:
(a) $A_3 = 3$, $\phi_3 = 0$   (b) $G_1 = \frac{3}{2}\angle\frac{\pi}{2}$   (c) $a_2 = 0$, $b_2 = 4$

2. [sketch of a periodic signal $g(t)$] The Fourier series of the periodic signal shown will have no:
(a) DC term   (b) odd harmonics   (c) even harmonics

3. The double-sided amplitude spectrum of a real signal always possesses:
(a) even symmetry   (b) odd symmetry   (c) no symmetry

4. [amplitude spectrum $|G_n|$ (V) of a signal, with lines of height 1.5 and 1 at harmonics between $-3$ and $3$ Hz] The power (across 1 Ω) is:
(a) 14.5 W   (b) 10 W   (c) 15.5 W

5. The phase of the 3rd harmonic component of the periodic signal
$$g(t) = \sum_{n=-\infty}^{\infty} A\,\mathrm{rect}\!\left(\frac{t-1-4n}{0.2}\right)$$
is:
(a) 90°   (b) -90°   (c) 0°

Answers: 1. c   2. c   3. a   4. c   5. a

Exercises

1. A signal $g(t) = 2\cos(100t) + \sin(100t)$.
(a) Plot the single-sided magnitude spectrum and phase spectrum.
(b) Plot the double-sided magnitude spectrum and phase spectrum.
(c) Plot the real and imaginary components of the double-sided spectrum.

2. What is $G(\omega)$ for the sinusoid whose spectrum is shown below?

[spectrum: $|G| = 2$ at $\omega = \pm 100$ rad/s; $\angle G = 45°$ at $\omega = -100$ rad/s and $-45°$ at $\omega = +100$ rad/s]

3. What is $g(t)$ for the signal whose spectrum is sketched below?

[spectrum: real part $\mathrm{Re}\{G_n\}$ and imaginary part $\mathrm{Im}\{G_n\}$ shown as lines at $f = \pm 100$ Hz]

4. Sketch photographs of the counter-rotating phasors associated with the spectrum below at $t = 0$, $t = 1$ and $t = 2$.

[spectrum: $|G| = 1$ at $\omega = \pm 1$ rad/s; $\angle G = 30°$ at $\omega = -1$ rad/s and $-30°$ at $\omega = +1$ rad/s]

5. For each of the periodic signals shown, find the complex exponential Fourier series coefficients. Plot the magnitude and phase spectra in MATLAB, for $-25 \le n \le 25$.

(a) [full-wave rectified sine wave, amplitude 1]
(b) [periodic waveform sketched between $t = -2$ and $t = 1$]
(c) [periodic decaying exponential $e^{-t/10}$, amplitude 1]
(d) [periodic bursts of $g(t) = \cos(8\pi t)$, one burst shown between $t = -3$ and $t = -1$]
(e) [periodic pulse waveform, amplitude 1, period $T_0$]

6. A voltage waveform $v_i(t)$ with a period of 0.08 s is defined by: $v_i = 60$ V for $0 \le t < 0.01$ s; $v_i = 0$ V for $0.01 \le t < 0.08$ s. The voltage $v_i$ is applied as the source in the circuit shown below. What average power is delivered to the load?

[circuit: source $v_i$, a 16 mH inductor, and a load with output $v_o$; the frequency response $|V_o/V_i|$ is sketched at $f = 0, 20, 40, 60, 80$ Hz]

7. A periodic signal $g(t)$ is transmitted through a system with transfer function $H(\omega)$. For three different values of $T_0$ ($T_0 = 2\pi/3$, $\pi/3$, and $\pi/6$), find the double-sided magnitude spectrum of the output signal. Calculate the power of the output signal as a percentage of the power of the input signal $g(t)$.

[$g(t)$: a triangular waveform of amplitude 2 and period $T_0$; $H(\omega)$: an ideal low-pass response, unity for $|\omega| \le 12$ rad/s]

8. Estimate the bandwidth B of the periodic signal $g(t)$ shown below if the power of all components of $g(t)$ within the band B is to be at least 99.9 percent of the total power of $g(t)$.

[$g(t)$: a waveform alternating between $A$ and $-A$ with period $T_0$]

Joseph Fourier (1768-1830) (Jo·sef Foor·yay)
Fourier is famous for his study of the flow of heat in metallic plates and rods.
The theory that he developed now has applications in industry and in the study
of the temperature of the Earth's interior. He is also famous for the discovery
that many functions can be expressed as infinite sums of sine and cosine
terms, now called a trigonometric series, or Fourier series.
Fourier first showed talent in literature, but by the age of thirteen, mathematics
became his real interest. By fourteen, he had completed a study of six volumes
of a course on mathematics. Fourier studied for the priesthood but did not end
up taking his vows. Instead he became a teacher of mathematics. In 1793 he
became involved in politics and joined the local Revolutionary Committee. As
he wrote:

As the natural ideas of equality developed it was possible to conceive the
sublime hope of establishing among us a free government exempt from kings
and priests, and to free from this double yoke the long-usurped soil of
Europe. I readily became enamoured of this cause, in my opinion the
greatest and most beautiful which any nation has ever undertaken.
Fourier became entangled in the French Revolution, and in 1794 he was
arrested and imprisoned. He feared he would go to the guillotine but political
changes allowed him to be freed. In 1795, he attended the École Normale and
was taught by, among others, Lagrange and Laplace. He started teaching again,
and began further mathematical research. In 1797, after another brief period in
prison, he succeeded Lagrange in being appointed to the chair of analysis and
mechanics. He was renowned as an outstanding lecturer but did not undertake
original research at this time.
In 1798 Fourier joined Napoleon on his invasion of Egypt as scientific adviser.
The expedition was a great success (from the French point of view) until
August 1798 when Nelson's fleet completely destroyed the French fleet in the
Battle of the Nile, so that Napoleon found himself confined to the land he was
occupying. Fourier acted as an administrator as French type political

institutions and administrations were set up. In particular he helped establish
educational facilities in Egypt and carried out archaeological explorations.
The Institute d'Égypte was responsible for the completely serendipitous discovery of the Rosetta Stone in 1799. The three inscriptions on this stone in two languages and three scripts (hieroglyphic, demotic and Greek) enabled Thomas Young and Jean-François Champollion, a protégé of Fourier, to invent a method of translating hieroglyphic writings of ancient Egypt in 1822.

While in Cairo, Fourier helped found the Institute d'Égypte and was put in
charge of collating the scientific and literary discoveries made during the time
in Egypt. Napoleon abandoned his army and returned to Paris in 1799 and soon
held absolute power in France. Fourier returned to France in 1801 with the
remains of the expeditionary force and resumed his post as Professor of
Analysis at the École Polytechnique.
Napoleon appointed Fourier to be Prefect at Grenoble where his duties were
many and varied: they included draining swamps and building highways. It
was during his time in Grenoble that Fourier did his important mathematical
work on the theory of heat. His work on the topic began around 1804 and by
1807 he had completed his important memoir On the Propagation of Heat in
Solid Bodies. It caused controversy: both Lagrange and Laplace objected to
Fourier's expansion of functions as trigonometric series.

This extract is from a letter found among Fourier's papers, and unfortunately lacks the name of the addressee, but was probably intended for Lagrange.

…it was in attempting to verify a third theorem that I employed the
procedure which consists of multiplying by $\cos x\,dx$ the two sides of the
equation

$$\phi(x) = a_0 + a_1\cos x + a_2\cos 2x + \ldots$$

and integrating between $x = 0$ and $x = \pi$. I am sorry not to have known the
name of the mathematician who first made use of this method because I
would have cited him. Regarding the researches of d'Alembert and Euler
could one not add that if they knew this expansion they made but a very
imperfect use of it. They were both persuaded that an arbitrary function
could never be resolved in a series of this kind, and it does not seem that
anyone had developed a constant in cosines of multiple arcs
[i.e. found $a_1, a_2, \ldots$ with $1 = a_1\cos x + a_2\cos 2x + \ldots$ for $-\pi/2 < x < \pi/2$],
the first problem which I had to solve in the theory of heat.
Other people before Fourier had used expansions of the form
$f(x) \sim \sum_r a_r \exp(irx)$, but Fourier's work extended this idea in two totally
new ways. One was the Fourier integral (the formula for the Fourier series
coefficients) and the other marked the birth of Sturm-Liouville theory (Sturm
and Liouville were nineteenth century mathematicians who found solutions to
many classes of partial differential equations arising in physics that were
analogous to Fourier series).
Napoleon was defeated in 1815 and Fourier returned to Paris. Fourier was
elected to the Académie des Sciences in 1817 and became Secretary in 1822.
Shortly after, the Academy published his prize-winning essay Théorie
analytique de la chaleur (Analytical Theory of Heat). In this he obtains for the


first time the equation of heat conduction, which is a partial differential
equation in three dimensions. As an application he considered the temperature
of the ground at a certain depth due to the sun's heating. The solution consists
of a yearly component and a daily component. Both effects die off
exponentially with depth but the high frequency daily effect dies off much
more rapidly than the low frequency yearly effect. There is also a phase lag for
the daily and yearly effects so that at certain depths the temperature will be
completely out of step with the surface temperature.
All these predictions are confirmed by measurements which show that annual
variations in temperature are imperceptible at quite small depths (this accounts
for the permafrost, i.e. permanently frozen subsoil, at high latitudes) and that
daily variations are imperceptible at depths measured in tenths of metres. A
reasonable value of soil thermal conductivity leads to a prediction that annual
temperature changes will lag by six months at about 23 metres depth. Again
this is confirmed by observation and, as Fourier remarked, gives a good depth
for the construction of cellars.
As Fourier grew older, he developed at least one peculiar notion. Whether
influenced by his stay in the heat of Egypt or by his own studies of the flow of
heat in metals, he became obsessed with the idea that extreme heat was the
natural condition for the human body. He was always bundled in woollen
clothing, and kept his rooms at high temperatures. He died in his sixty-third
year, thoroughly cooked.

References
Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.

Lecture 2B The Fourier Transform
The Fourier transform. Continuous spectra. Existence of the Fourier
transform. Finding Fourier transforms. Symmetry between the time-domain
and frequency-domain. Time shifting. Frequency shifting. Fourier transform of
sinusoids. Relationship between the Fourier series and Fourier transform.
Fourier transform of a uniform train of impulses. Standard Fourier transforms.
Fourier transform properties.

The Fourier Transform

Developing the Fourier transform

The Fourier series is used to represent a periodic function as a weighted sum of
sinusoidal (or complex exponential) functions. We would like to extend this
result to functions that are not periodic. Such an extension is possible by what
is known as the Fourier transform representation of a function.
To derive the Fourier transform, we start with an aperiodic signal $g(t)$:

[an aperiodic pulse $g(t)$ centred on the origin]

Figure 2B.1

Now we construct a new periodic signal $g_p(t)$ consisting of the signal $g(t)$
repeating itself every $T_0$ seconds:

Make an artificial periodic waveform from the original aperiodic waveform

[the periodic extension $g_p(t)$, with copies of the pulse spaced $T_0$ apart]

Figure 2B.2
The period $T_0$ is made long enough so that there is no overlap between the
repeating pulses. This new signal $g_p(t)$ is a periodic signal and so it can be
represented by an exponential Fourier series.

In the limit, if we let $T_0$ become infinite, the pulses in the periodic signal
repeat after an infinite interval, and:

$$\lim_{T_0 \to \infty} g_p(t) = g(t) \qquad \text{(2B.1)}$$

Thus, the Fourier series representing $g_p(t)$ will also represent $g(t)$, in the
limit $T_0 \to \infty$.

The exponential Fourier series for $g_p(t)$ is:

The Fourier series for our artificial periodic waveform

$$g_p(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \qquad \text{(2B.2a)}$$

$$G_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g_p(t)e^{-j2\pi n f_0 t}\,dt \qquad \text{(2B.2b)}$$

In the time-domain, as $T_0 \to \infty$, the pulse train becomes a single non-periodic
pulse located at the origin, as in Figure 2B.1.

In the frequency-domain, as $T_0$ becomes larger, $f_0$ becomes smaller and the
spectrum becomes "denser" (the discrete frequencies are closer together). In
the limit as $T_0 \to \infty$, the fundamental frequency $f_0 \to df$ and the harmonics
become infinitesimally close together. The individual harmonics no longer
exist as they form a continuous function of frequency. In other words, the
spectrum exists for every value of f and is no longer a discrete function of f but
a continuous function of f.

As seen from Eq. (2B.2b), the amplitudes of the individual components
become smaller, too. The shape of the frequency spectrum, however, is
unaltered (all the $G_n$'s are scaled by the same amount, $1/T_0$). In the limit as $T_0 \to \infty$,
the magnitude of each component becomes infinitesimally small, until they
eventually vanish!

Clearly, it no longer makes sense to define the concept of a spectrum as the
amplitudes and phases of certain harmonic frequencies. Instead, a new concept
is introduced called spectral density.
To illustrate, consider a rectangular pulse train, for which we know:

$$G_n = A f_0 \tau\,\mathrm{sinc}(n f_0 \tau) \qquad \text{(2B.3)}$$

The spectrum for $\tau = 1/5$ s and $T_0 = 1$ s looks like:

[line spectrum $G_n$ with maximum height $0.2A$, lines spaced 1 Hz apart under a sinc envelope with zero crossings at multiples of 5 Hz, shown for $-12 \le f \le 12$ Hz]

Figure 2B.3
For $T_0 = 5$ s we have:

As the period increases, the spectral lines get closer and closer, but smaller and smaller

[line spectrum $G_n$ with maximum height $0.04A$, lines now spaced $1/5$ Hz apart under the same sinc envelope]

Figure 2B.4
The envelope retains the same shape (it was never a function of T0 anyway),
but the amplitude gets smaller and smaller with increasing T0 . The spectral
lines get closer and closer with increasing T0 . In the limit, it is impossible to
draw the magnitude spectrum as a graph of Gn because the amplitudes of the
harmonic phasors have reduced to zero.
It is possible, however, to graph a new quantity:

$$G_n T_0 = \int_{-T_0/2}^{T_0/2} g_p(t)e^{-j2\pi n f_0 t}\,dt \qquad \text{(2B.4)}$$

which is just a rearrangement of Eq. (2B.2b). We suspect that the product
$G_n T_0$ will be finite as $T_0 \to \infty$, in the same way that the area remained constant
as $T \to 0$ in the family of rect functions we used to explain the impulse
function.

As $T_0 \to \infty$, the frequency of any harmonic $n f_0$ must now correspond to the
general frequency variable which describes the continuous spectrum.
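The behaviour of $G_n$ versus the product $G_n T_0$ can be seen numerically using the pulse-train coefficients of Eq. (2B.3). In this sketch (plain Python; the parameter choices are illustrative), the component at a fixed frequency $f = 1$ Hz shrinks by a factor of five when the period grows from 1 s to 5 s, while $G_n T_0$ stays fixed:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def Gn(n, A, tau, T0):
    # Fourier series coefficient of a rectangular pulse train, Eq. (2B.3)
    f0 = 1.0 / T0
    return A * f0 * tau * sinc(n * f0 * tau)

A, tau = 1.0, 0.2
g_a = Gn(1, A, tau, T0=1.0)   # component at f = 1 Hz when T0 = 1 s (n = 1)
g_b = Gn(5, A, tau, T0=5.0)   # component at f = 1 Hz when T0 = 5 s (n = 5)
```

`g_b` is five times smaller than `g_a`, but $G_n T_0$ is identical in both cases: it is this product that survives the limit and becomes the spectral density $G(f)$.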

In other words, n must tend to infinity as $f_0$ approaches zero, such that the
product is finite:

$$n f_0 \to f \quad \text{as} \quad T_0 \to \infty \qquad \text{(2B.5)}$$

With the limiting process as defined in Eqs. (2B.1) and (2B.5), Eq. (2B.4)
becomes:

$$\lim_{T_0 \to \infty} G_n T_0 = \int_{-\infty}^{\infty} g(t)e^{-j2\pi f t}\,dt \qquad \text{(2B.6)}$$

The right-hand side of this expression is a function of f (and not of t), and we
represent it by:

The Fourier transform defined

$$G(f) = \int_{-\infty}^{\infty} g(t)e^{-j2\pi f t}\,dt \qquad \text{(2B.7)}$$

Therefore, $G(f) = \lim_{T_0 \to \infty} G_n T_0$ has the dimensions of amplitude per unit frequency,
and $G(f)$ can be called the spectral density.

To express $g(t)$ in the time-domain using frequency components, we apply the
limiting process to Eq. (2B.2a). Here, we now notice that:

$$f_0 \to df \quad \text{as} \quad T_0 \to \infty \qquad \text{(2B.8)}$$

That is, the fundamental frequency becomes infinitesimally small as the period
is made larger and larger. This agrees with our reasoning in obtaining
Eq. (2B.5), since there we had an infinite number, n, of infinitesimally small
discrete frequencies, $f_0 = df$, to give the finite continuous frequency f. In order
to apply the limiting process to Eq. (2B.2a), we multiply the summation by
$T_0 f_0 = 1$:

$$g_p(t) = \sum_{n=-\infty}^{\infty} \left(G_n T_0\right) e^{j2\pi n f_0 t} f_0 \qquad \text{(2B.9)}$$

and use Eq. (2B.8) and the new quantity $G(f) = \lim_{T_0 \to \infty} G_n T_0$. In the limit the
summation becomes an integral, $n f_0 \to f$, $g_p(t) \to g(t)$, and:

The inverse Fourier transform defined

$$g(t) = \int_{-\infty}^{\infty} G(f)e^{j2\pi f t}\,df \qquad \text{(2B.10)}$$

Eqs. (2B.7) and (2B.10) are collectively known as a Fourier transform pair.
The function $G(f)$ is the Fourier transform of $g(t)$, and $g(t)$ is the inverse
Fourier transform of $G(f)$.

To recapitulate, we have shown that:

The Fourier transform and inverse transform side-by-side

$$g(t) = \int_{-\infty}^{\infty} G(f)e^{j2\pi f t}\,df \qquad \text{(2B.11a)}$$

$$G(f) = \int_{-\infty}^{\infty} g(t)e^{-j2\pi f t}\,dt \qquad \text{(2B.11b)}$$

These relationships can also be expressed symbolically by a Fourier transform
pair:

Notation for a Fourier transform pair

$$g(t) \Leftrightarrow G(f) \qquad \text{(2B.12)}$$

Continuous Spectra

Making sense of a continuous spectrum

The concept of a continuous spectrum is sometimes bewildering because we
generally picture the spectrum as existing at discrete frequencies and with
finite amplitudes. The continuous spectrum concept can be appreciated by
considering a simple analogy.

Consider a beam loaded with weights of $G_1, G_2, G_3, \ldots, G_n$ kilograms at
uniformly spaced points $x_1, x_2, x_3, \ldots, x_n$, as shown in (a) below:

[(a) a beam loaded with discrete weights $G_1 \ldots G_n$ at points $x_1 \ldots x_n$; (b) a continuously loaded beam with loading density $G(x)$]

Figure 2B.5

The beam is loaded at n discrete points, and the total weight W on the beam is
given by:

We can have either individual finite weights

$$W = \sum_{i=1}^{n} G_i \qquad \text{(2B.13)}$$

Now consider the case of a continuously loaded beam, as shown in Figure
2B.5(b) above. The loading density $G(x)$, in kilograms per metre, is a function
of x. The total weight on the beam is now given by a continuous sum of the
infinitesimal weights at each point; that is, the integral of $G(x)$ over the entire
length:

or continuous infinitesimal weights

$$W = \int_{x_1}^{x_n} G(x)\,dx \qquad \text{(2B.14)}$$

In the discrete loading case, the weight existed only at discrete points. At other
points there was no load. On the other hand, in the continuously distributed
case, the loading exists at every point, but at any one point the load is zero. The
load in a small distance dx, however, is given by $G(x)\,dx$. Therefore $G(x)$
represents the relative loading at a point x.

An exactly analogous situation exists in the case of a signal and its frequency
spectrum. A periodic signal can be represented by a sum of discrete
exponentials with finite amplitudes (harmonic phasors):

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \qquad \text{(2B.15)}$$

For a nonperiodic signal, the distribution of exponentials becomes continuous;
that is, the spectrum exists at every value of f. At any one frequency f, the
amplitude of that frequency component is zero. The total contribution in an
infinitesimal interval df is given by $G(f)e^{j2\pi f t}\,df$, and the function $g(t)$ can be
expressed in terms of the continuous sum of such infinitesimal components:

We can get a signal from infinitesimal sinusoids if we have an infinite number of them!

$$g(t) = \int_{-\infty}^{\infty} G(f)e^{j2\pi f t}\,df \qquad \text{(2B.16)}$$

An electrical analogy could also be useful: just replace the discrete loaded
beam with an array of filamentary conductors, and the continuously loaded
beam with a current sheet. The analysis is the same.

Existence of the Fourier Transform

Dirichlet (1805-1859) investigated the sufficient conditions that could be
imposed on a function $g(t)$ for its Fourier transform to exist. These so-called
Dirichlet conditions are:

1. On any finite interval: (2B.17a)
   a) $g(t)$ is bounded;
   b) $g(t)$ has a finite number of maxima and minima;
   c) $g(t)$ has a finite number of discontinuities.

2. $g(t)$ is absolutely integrable, i.e.

$$\int_{-\infty}^{\infty} \left|g(t)\right|dt < \infty \qquad \text{(2B.17b)}$$

The Dirichlet conditions are sufficient but not necessary conditions for the FT to exist

Note that these are sufficient conditions and not necessary conditions. Use of
the Fourier transform for the analysis of many useful signals would be
impossible if these were necessary conditions.

Any signal with finite energy:

$$\int_{-\infty}^{\infty} \left|g(t)\right|^2 dt < \infty \qquad \text{(2B.18)}$$

is absolutely integrable, and so all energy signals are Fourier transformable.

Many signals of interest to electrical engineers are not energy signals and are
therefore not absolutely integrable. These include the step function and all
periodic functions. It can be shown that signals which have infinite energy but
which contain a finite amount of power and meet Dirichlet condition 1 do have
valid Fourier transforms.

For practical purposes, for the signals or functions we may wish to analyse as
engineers, we can use the rule of thumb that if we can draw a picture of the
waveform $g(t)$, then it has a Fourier transform $G(f)$.

Finding Fourier Transforms

Example

Find the spectrum of the following rectangular pulse:

A common and useful signal

[a rectangular pulse $g(t)$ of height A between $-T/2$ and $T/2$]

Figure 2B.6

We note that $g(t) = A\,\mathrm{rect}(t/T)$. We then have:

$$G(f) = \int_{-\infty}^{\infty} A\,\mathrm{rect}\!\left(\frac{t}{T}\right)e^{-j2\pi f t}\,dt = A\int_{-T/2}^{T/2} e^{-j2\pi f t}\,dt$$
$$= \frac{A}{j2\pi f}\left(e^{j\pi f T} - e^{-j\pi f T}\right) = \frac{A\sin(\pi f T)}{\pi f} = AT\,\mathrm{sinc}(f T) \qquad \text{(2B.19)}$$

Therefore, we have the Fourier transform pair:

and its transform

$$A\,\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow AT\,\mathrm{sinc}(f T) \qquad \text{(2B.20)}$$
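The pair in Eq. (2B.20) can be verified by brute-force numerical integration. This sketch (plain Python; the midpoint-rule helper and the parameter values are assumptions) compares the numerically computed $G(f)$ of a rect pulse against $AT\,\mathrm{sinc}(fT)$:

```python
import cmath
import math

def ft_numeric(g, f, lo, hi, N=20000):
    # Midpoint-rule approximation of G(f) = integral of g(t) exp(-j*2*pi*f*t) dt
    dt = (hi - lo) / N
    total = 0j
    for k in range(N):
        t = lo + (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * f * t)
    return total * dt

A, T, f = 2.0, 1.5, 0.4
rect_pulse = lambda t: A if abs(t) < T / 2 else 0.0

G_num = ft_numeric(rect_pulse, f, -T / 2, T / 2)
G_exact = A * T * math.sin(math.pi * f * T) / (math.pi * f * T)   # AT sinc(fT)
```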

We can also state this graphically:

[the rectangular pulse $g(t)$ of height A on $(-T/2, T/2)$, and its transform $G(f)$: a sinc of peak $AT$ with zero crossings at multiples of $1/T$]

Figure 2B.7

Time and frequency characteristics reflect an inverse relationship

There are a few interesting observations we can make in general by
considering this result. One is that a time-limited function has frequency
components approaching infinity. Another is that compression in time (by
making T smaller) will result in an expansion of the frequency spectrum
(wider sinc function in this case). Another is that the Fourier transform is a
linear operation: multiplication by A in the time-domain results in the
spectrum being multiplied by A.
Letting $A = 1$ and $T = 1$ in Eq. (2B.20) results in what we shall call a standard
transform:

One of the most common transform pairs: commit it to memory!

$$\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f) \qquad \text{(2B.21)}$$

It is standard in the sense that we cannot make any further simplifications. It
expresses a fundamental relationship between the rect and sinc functions
without any complicating factors.

We may now state our observations more formally. We see that the linearity
property of the Fourier transform pair can be defined as:

Linearity is obeyed by transform pairs

$$a\,g(t) \Leftrightarrow a\,G(f) \qquad \text{(2B.22)}$$

We can also generalise the time scaling property to:

Time scaling property of transform pairs

$$g\!\left(\frac{t}{T}\right) \Leftrightarrow T\,G(fT) \qquad \text{(2B.23)}$$

Thus, expansion in time corresponds to compression in frequency and vice
versa.

We could (and should in future) derive our Fourier transforms by starting with
a standard Fourier transform and then applying appropriate Fourier transform
pair properties. For example, we could arrive at Eq. (2B.20) without
integration by the following:

Applying standard properties to derive a Fourier transform

$$\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f) \quad \text{(standard transform)}$$
$$\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow T\,\mathrm{sinc}(fT) \quad \text{(scaling property)}$$
$$A\,\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow AT\,\mathrm{sinc}(fT) \quad \text{(linearity)} \qquad \text{(2B.24)}$$

We now only have to derive enough standard transforms and discover a few
Fourier transform properties to be able to handle almost any signal and system
of interest.
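The scaling property itself can be spot-checked numerically with a transform pair known in closed form. The sketch below (plain Python; it assumes the standard Gaussian pair $e^{-\pi t^2} \Leftrightarrow e^{-\pi f^2}$, which is not derived in these notes) confirms that $g(t/T) \Leftrightarrow T\,G(fT)$:

```python
import cmath
import math

def ft_numeric(g, f, lo=-10.0, hi=10.0, N=40000):
    # Midpoint-rule approximation of G(f) = integral of g(t) exp(-j*2*pi*f*t) dt
    dt = (hi - lo) / N
    total = 0j
    for k in range(N):
        t = lo + (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * f * t)
    return total * dt

g = lambda t: math.exp(-math.pi * t * t)   # assumed standard pair: G(f) = exp(-pi f^2)

T, f = 2.0, 0.3
lhs = ft_numeric(lambda t: g(t / T), f)        # transform of g(t/T)
rhs = T * math.exp(-math.pi * (f * T) ** 2)    # T G(fT), as the property predicts
```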

Symmetry between the Time-Domain and Frequency-Domain

From the definition of the inverse Fourier transform:

$$g(t) = \int_{-\infty}^{\infty} G(f)e^{j2\pi f t}\,df \qquad \text{(2B.25)}$$

we can make the transformations $t \to -f$ and $f \to x$ to give:

$$g(-f) = \int_{-\infty}^{\infty} G(x)e^{-j2\pi x f}\,dx \qquad \text{(2B.26)}$$

Now since the value of a definite integral is independent of the variable of
integration, we can make the change of variable $x \to t$ to give:

$$g(-f) = \int_{-\infty}^{\infty} G(t)e^{-j2\pi f t}\,dt \qquad \text{(2B.27)}$$

Notice that the right-hand side is precisely the definition of the Fourier
transform of the function $G(t)$. Thus, there is an almost symmetrical
relationship between the transform and its inverse. This is summed up in the
duality property:

Duality property of transform pairs

$$G(t) \Leftrightarrow g(-f) \qquad \text{(2B.28)}$$

Example

Consider a rectangular pulse in the frequency-domain. This represents the
frequency response of an ideal low-pass filter, with a cut-off frequency of B.

[$G(f)$: a rectangular response of unity height between $-B$ and $B$]

Figure 2B.8

We are interested in finding the inverse Fourier transform of this function.
Using the duality property, we know that if $\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f)$ then
$\mathrm{sinc}(t) \Leftrightarrow \mathrm{rect}(-f)$. Since the rect function is symmetric, this can also be
written as $\mathrm{sinc}(t) \Leftrightarrow \mathrm{rect}(f)$. But our frequency-domain transform is
$G(f) = \mathrm{rect}(f/2B)$, so we need to apply the time-scale property in reverse to
arrive at:

$$2B\,\mathrm{sinc}(2Bt) \Leftrightarrow \mathrm{rect}\!\left(\frac{f}{2B}\right) \qquad \text{(2B.29)}$$

Graphically:

Fourier transform of a sinc function

[$g(t) = 2B\,\mathrm{sinc}(2Bt)$: a sinc of peak $2B$ with zero crossings at multiples of $1/2B$, and its transform: a rect of unity height between $-B$ and $B$]

Figure 2B.9

Example

We wish to find the Fourier transform of the impulse function $\delta(t)$. We can
find it by direct integration (using the sifting property of the impulse function):

$$G(f) = \int_{-\infty}^{\infty} \delta(t)e^{-j2\pi f t}\,dt = 1 \qquad \text{(2B.30)}$$

That is, we have another standard Fourier transform pair:

$$\delta(t) \Leftrightarrow 1 \qquad \text{(2B.31)}$$

It is interesting to note that we could have obtained the same result using
Eq. (2B.20). Let the height be such that the area AT is always equal to 1; then
we have from Eq. (2B.20):

$$\frac{1}{T}\,\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow \mathrm{sinc}(fT) \qquad \text{(2B.32)}$$

Now let $T \to 0$ so that the rectangle function turns into the impulse function
$\delta(t)$. Noting that $\mathrm{sinc}(0)$ is unity, we arrive at Eq. (2B.31). Graphically, we
have:

Fourier transform of an impulse function

[$g(t) = \delta(t)$: a unit impulse at the origin, and $G(f) = 1$: a constant spectrum]

Figure 2B.10

This result says that an impulse function is composed of all frequencies from
DC to infinity, and all contribute equally.

This example highlights one very important feature of finding Fourier
transforms: there is usually more than one way to find them, and it is up to us
(with experience and ability) to find the easiest way. However, once the
Fourier transform has been obtained for a function, it is unique; this serves as
a check for different methods.

Example

We wish to find the Fourier transform of the constant 1 (if it exists), which is a
DC signal. Choosing the path of direct evaluation of the integral, we get:

$$G(f) = \int_{-\infty}^{\infty} 1\,e^{-j2\pi f t}\,dt \qquad \text{(2B.33)}$$

which appears intractable. However, we can approach the problem another
way. If we let our rectangular function of Figure 2B.6 take on infinite width, it
becomes a DC signal. As the width of the rectangular pulse becomes larger and
larger, the sinc function becomes narrower and narrower, and its amplitude
increases. Eventually, with infinite pulse width, the sinc function has no width
and infinite height: an impulse function. Using Eq. (2B.20) with $T \to \infty$, and
noting that the area under the sinc function is $1/T$ (see Lecture 1A), we
therefore have:

Fourier transform of a constant

$$1 \Leftrightarrow \delta(f) \qquad \text{(2B.34)}$$

Yet another way to arrive at this result is through recognising that a certain
amount of symmetry exists in the equations for the Fourier transform and
inverse Fourier transform.

Applying the duality property to Eq. (2B.31) gives:

$$1 \Leftrightarrow \delta(-f) \qquad \text{(2B.35)}$$

Since the impulse function is an even function, we get:

$$1 \Leftrightarrow \delta(f) \qquad \text{(2B.36)}$$

which is the same as Eq. (2B.34). Again, two different methods converged on
the same result, and one method (direct integration) seemed impossible! It is
therefore advantageous to become familiar with the properties of the Fourier
transform.

Time Shifting

A time shift to the right of $t_0$ seconds (i.e. a time delay) can be represented by
$g(t-t_0)$. A time shift to the left of $t_0$ seconds can be represented by $g(t+t_0)$.
The Fourier transform of a function shifted to the right is:

$$\mathcal{F}\left\{g(t-t_0)\right\} = \int_{-\infty}^{\infty} g(t-t_0)e^{-j2\pi f t}\,dt \qquad \text{(2B.37)}$$

Letting $x = t - t_0$, then $dx = dt$ and $t = x + t_0$, and we have:

Time shift property defined

$$\mathcal{F}\left\{g(t-t_0)\right\} = \int_{-\infty}^{\infty} g(x)e^{-j2\pi f(x+t_0)}\,dx$$
$$g(t-t_0) \Leftrightarrow G(f)e^{-j2\pi f t_0} \qquad \text{(2B.38)}$$

Therefore, a time shift to the right of $t_0$ seconds in the time-domain is
equivalent to multiplication by $e^{-j2\pi f t_0}$ in the frequency-domain. Thus, a time
delay simply causes a linear phase change in the spectrum; the magnitude
spectrum is left unaltered. For example, the Fourier transform of
$g(t) = A\,\mathrm{rect}\!\left(\frac{t-T/2}{T}\right)$ is $G(f) = AT\,\mathrm{sinc}(fT)\,e^{-j\pi f T}$ and is shown below:

Time shifting a function just changes the phase

[$g(t)$: a rectangular pulse of height A from 0 to T; $|G(f)|$: a sinc magnitude of peak $AT$ with zero crossings at multiples of $1/T$; $\angle G(f)$: a linear phase of slope $-\pi T$]

Figure 2B.11
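The time-shift property can also be confirmed numerically: shifting a pulse leaves the magnitude spectrum untouched and multiplies the transform by $e^{-j2\pi f t_0}$. A sketch (plain Python; the helper and parameter choices are assumptions):

```python
import cmath
import math

def ft_numeric(g, f, lo, hi, N=20000):
    # Midpoint-rule approximation of G(f) = integral of g(t) exp(-j*2*pi*f*t) dt
    dt = (hi - lo) / N
    total = 0j
    for k in range(N):
        t = lo + (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * f * t)
    return total * dt

A, T, t0, f = 1.0, 1.0, 0.25, 0.7
pulse = lambda t: A if abs(t) < T / 2 else 0.0

G = ft_numeric(pulse, f, -T / 2, T / 2)
G_shifted = ft_numeric(lambda t: pulse(t - t0), f, t0 - T / 2, t0 + T / 2)
expected = G * cmath.exp(-2j * math.pi * f * t0)   # Eq. (2B.38)
```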

Frequency Shifting

Similarly to the last section, a spectrum $G(f)$ can be shifted to the right,
$G(f-f_0)$, or to the left, $G(f+f_0)$. This property will be used to derive
several standard transforms and is particularly important in communications
where it forms the basis for modulation and demodulation.

The inverse Fourier transform of a spectrum shifted to the right is:

$$\mathcal{F}^{-1}\left\{G(f-f_0)\right\} = \int_{-\infty}^{\infty} G(f-f_0)e^{j2\pi f t}\,df \qquad \text{(2B.39)}$$

Letting $x = f - f_0$, then $dx = df$ and $f = x + f_0$, and we have:

Frequency shift property defined

$$\mathcal{F}^{-1}\left\{G(f-f_0)\right\} = \int_{-\infty}^{\infty} G(x)e^{j2\pi(x+f_0)t}\,dx$$
$$g(t)e^{j2\pi f_0 t} \Leftrightarrow G(f-f_0) \qquad \text{(2B.40)}$$

Therefore, a frequency shift to the right of $f_0$ Hertz in the frequency-domain is
equivalent to multiplication by $e^{j2\pi f_0 t}$ in the time-domain. Similarly, a shift to
the left by $f_0$ Hertz in the frequency-domain is equivalent to multiplication by
$e^{-j2\pi f_0 t}$ in the time-domain. The sign of the exponent in the shifting factor is
opposite to that with time shifting; this can also be explained in terms of the
duality property.

For example, consider amplitude modulation, making use of the Euler
expansion for a cosine:

Multiplication by a sinusoid in the time-domain shifts the original spectrum up and down by the carrier frequency

$$g(t)\cos(2\pi f_c t) = g(t)\left(\tfrac{1}{2}e^{j2\pi f_c t} + \tfrac{1}{2}e^{-j2\pi f_c t}\right) \Leftrightarrow \tfrac{1}{2}G(f-f_c) + \tfrac{1}{2}G(f+f_c) \qquad \text{(2B.41)}$$
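Eq. (2B.41) can be checked numerically for a rect pulse, whose transform we know from Eq. (2B.20). In this sketch (plain Python; the carrier frequency and evaluation point are illustrative assumptions), the transform of the modulated pulse matches $\tfrac{1}{2}G(f-f_c) + \tfrac{1}{2}G(f+f_c)$:

```python
import cmath
import math

def ft_numeric(g, f, lo, hi, N=40000):
    # Midpoint-rule approximation of G(f) = integral of g(t) exp(-j*2*pi*f*t) dt
    dt = (hi - lo) / N
    total = 0j
    for k in range(N):
        t = lo + (k + 0.5) * dt
        total += g(t) * cmath.exp(-2j * math.pi * f * t)
    return total * dt

T, fc = 1.0, 10.0
g = lambda t: 1.0 if abs(t) < T / 2 else 0.0
modulated = lambda t: g(t) * math.cos(2 * math.pi * fc * t)

def G(f):
    # Transform of the unmodulated pulse: T sinc(fT)
    return T if f == 0 else math.sin(math.pi * f * T) / (math.pi * f)

f = 10.4   # a frequency near the carrier
lhs = ft_numeric(modulated, f, -T / 2, T / 2)
rhs = 0.5 * G(f - fc) + 0.5 * G(f + fc)
```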

The Fourier Transform of Sinusoids
Using Eq. (2B.34) and the frequency shifting property, we know that:

$$e^{j2\pi f_0 t} \Leftrightarrow \delta(f - f_0)$$ (2B.42a)

$$e^{-j2\pi f_0 t} \Leftrightarrow \delta(f + f_0)$$ (2B.42b)

Therefore, by substituting into Euler's relationship for the cosine and sine:

$$\cos(2\pi f_0 t) = \tfrac{1}{2}e^{j2\pi f_0 t} + \tfrac{1}{2}e^{-j2\pi f_0 t}$$ (2B.43a)

$$\sin(2\pi f_0 t) = \tfrac{1}{2j}e^{j2\pi f_0 t} - \tfrac{1}{2j}e^{-j2\pi f_0 t} = -\tfrac{j}{2}e^{j2\pi f_0 t} + \tfrac{j}{2}e^{-j2\pi f_0 t}$$ (2B.43b)

we get the following transform pairs:

$$\cos(2\pi f_0 t) \Leftrightarrow \tfrac{1}{2}\delta(f - f_0) + \tfrac{1}{2}\delta(f + f_0)$$ (2B.44a)

$$\sin(2\pi f_0 t) \Leftrightarrow -\tfrac{j}{2}\delta(f - f_0) + \tfrac{j}{2}\delta(f + f_0)$$ (2B.44b)

[Margin note: Standard transforms for cos and sin.]

Graphically:

[Figure 2B.12: spectra for cos and sin. The cosine cos(2πf₀t), of period T₀, has real impulses of weight 1/2 at f = ±f₀; the sine sin(2πf₀t) has imaginary impulses of weight -j/2 at f = f₀ and +j/2 at f = -f₀. Margin note: "Spectra for cos and sin".]

Figure 2B.12

Relationship between the Fourier Series and Fourier Transform
From the definition of the Fourier series, we know we can express any periodic waveform as a sum of harmonic phasors:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n\,e^{j2\pi n f_0 t}$$ (2B.45)

This sum of harmonic phasors is just a linear combination of complex exponentials. From Eq. (2B.42a), we already know the transform pair:

$$e^{j2\pi n f_0 t} \Leftrightarrow \delta(f - nf_0)$$ (2B.46)

We therefore expect, since the Fourier transform is a linear operation, that we can easily find the Fourier transform of any periodic signal. Scaling Eq. (2B.46) by a complex constant, $G_n$, we have:

$$G_n\,e^{j2\pi n f_0 t} \Leftrightarrow G_n\,\delta(f - nf_0)$$ (2B.47)

Summing over all harmonically related exponentials, we get:

$$\sum_{n=-\infty}^{\infty} G_n\,e^{j2\pi n f_0 t} \Leftrightarrow \sum_{n=-\infty}^{\infty} G_n\,\delta(f - nf_0)$$ (2B.48)

Therefore, in words:

The spectrum of a periodic signal is a weighted train of impulses; each weight is equal to the Fourier series coefficient at that frequency, $G_n$. (2B.49)

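This result has a simple discrete analogue, sketched below with our own illustrative signal: the DFT of a record holding a whole number of periods is zero everywhere except at the harmonics, where (after normalising by the record length) each bin holds the Fourier series coefficient Gn.

```python
import numpy as np

# DFT of a periodic signal: a weighted set of spikes at the harmonics.
N = 1200                          # record length, samples
n = np.arange(N)
k0 = 4                            # fundamental: 4 cycles across the record
g = 3.0 + 2.0 * np.cos(2 * np.pi * k0 * n / N)   # G0 = 3, G(+1) = G(-1) = 1

G = np.fft.fft(g) / N             # normalise so bins read as phasor weights
assert np.isclose(G[0].real, 3.0)        # DC weight G0
assert np.isclose(G[k0].real, 1.0)       # first-harmonic weight, 2/2 = 1
assert np.isclose(G[-k0].real, 1.0)      # and its conjugate partner

mask = np.ones(N, bool)                  # every other bin is numerically zero
mask[[0, k0, N - k0]] = False
assert np.max(np.abs(G[mask])) < 1e-9
print("spikes only at the harmonics")
```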
Example

Find the Fourier transform of the rectangular pulse train $g(t)$ shown below:

[Figure 2B.13: a rectangular pulse train of amplitude A and period T₀.]

Figure 2B.13

We have already found the Fourier series coefficients for this waveform (with $\tau$ the pulse width):

$$G_n = A\tau f_0\,\mathrm{sinc}(n f_0 \tau)$$ (2B.50)

For the case of $T_0 = 5\tau$, the spectrum is then a weighted train of impulses, with spacing equal to the fundamental frequency of the waveform, $f_0$:
[Figure 2B.14: the double-sided magnitude spectrum G(f), a train of impulses at multiples of f₀ whose weights Gₙ follow the sinc envelope, with height 0.2A at f = 0 and nulls where the envelope crosses zero. Margin note: "The double-sided magnitude spectrum of a rectangular pulse train is a weighted train of impulses".]

Figure 2B.14

This should make intuitive sense: the spectrum is now defined as spectral density, or a graph of infinitesimally small phasors spaced infinitesimally close together. If we have an impulse in the spectrum, then that must mean a finite phasor at a specific frequency, i.e. a sinusoid (we recognise that a single sinusoid has a pair of impulses for its spectrum; see Figure 2B.12).

[Margin note: Pairs of impulses in the spectrum correspond to a sinusoid in the time-domain.]

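The coefficients quoted in Eq. (2B.50) can be verified by integrating the Fourier series analysis formula directly. In the sketch below (our own check, with illustrative values of A, the pulse width τ and the period T₀), np.sinc(x) = sin(πx)/(πx), which matches the sinc convention used in these notes:

```python
import numpy as np

# Rectangular pulse train: Gn = A*tau*f0*sinc(n*f0*tau), checked numerically.
A = 1.0
tau = 1.0                         # pulse width
T0 = 5.0 * tau                    # period: the T0 = 5*tau case of Figure 2B.14
f0 = 1.0 / T0

def Gn_numeric(n, samples=200000):
    """(1/T0) * integral over one period of g(t) exp(-j 2 pi n f0 t) dt."""
    t = np.linspace(-T0 / 2, T0 / 2, samples)
    dt = t[1] - t[0]
    g = np.where(np.abs(t) <= tau / 2, A, 0.0)   # pulse centred on t = 0
    return np.sum(g * np.exp(-2j * np.pi * n * f0 * t)) * dt / T0

for n in range(6):
    analytic = A * tau * f0 * np.sinc(n * f0 * tau)
    assert abs(Gn_numeric(n) - analytic) < 1e-3
print("G0 =", A * tau * f0)       # 0.2 here: the 0.2A spike of Figure 2B.14
```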
The Fourier Transform of a Uniform Train of Impulses
We will encounter a uniform train of impulses frequently in our analysis of communication systems:

[Figure 2B.15: a uniform train of unit impulses at t = ..., -2T₀, -T₀, 0, T₀, 2T₀, 3T₀, ....]

Figure 2B.15

To find the Fourier transform of this waveform, we simply note that it is periodic, and so Eq. (2B.48) applies. We have already found the Fourier series coefficients of the uniform train of impulses as the limit of a rectangular pulse train:

$$G_n = f_0$$ (2B.51)

Using Eq. (2B.48), we therefore have the following Fourier transform pair:

$$\sum_{k=-\infty}^{\infty}\delta(t - kT_0) \Leftrightarrow f_0\sum_{n=-\infty}^{\infty}\delta(f - nf_0)$$ (2B.52)

Graphically:

[Figure 2B.16: the unit impulse train with spacing T₀ in the time-domain, and its transform G(f), an impulse train of weight f₀ and spacing f₀ in the frequency-domain. Margin note: "The Fourier transform of a uniform train of impulses is a uniform train of impulses".]

Figure 2B.16

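The same statement has a tidy discrete counterpart, sketched here with our own illustrative sizes: the DFT of a unit impulse train whose period divides the record length is again a uniform impulse train.

```python
import numpy as np

# DFT of a uniform impulse train: another uniform impulse train.
N, T0 = 64, 8                     # record length and train period (T0 divides N)
g = np.zeros(N)
g[::T0] = 1.0                     # impulses at t = 0, T0, 2*T0, ...

G = np.fft.fft(g)
spacing = N // T0                 # the "f0" of the discrete picture
for k in range(N):
    if k % spacing == 0:
        assert np.isclose(G[k], N / T0)   # every non-zero bin has weight N/T0
    else:
        assert abs(G[k]) < 1e-9
print("uniform train in, uniform train out")
```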
Standard Fourier Transforms

$$\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f)$$ (F.1)

$$\delta(t) \Leftrightarrow 1$$ (F.2)

$$e^{-t}u(t) \Leftrightarrow \frac{1}{1 + j2\pi f}$$ (F.3)

$$A\cos(2\pi f_0 t + \theta) \Leftrightarrow \frac{A}{2}e^{j\theta}\,\delta(f - f_0) + \frac{A}{2}e^{-j\theta}\,\delta(f + f_0)$$ (F.4)

$$\sum_{k=-\infty}^{\infty}\delta(t - kT_0) \Leftrightarrow f_0\sum_{n=-\infty}^{\infty}\delta(f - nf_0)$$ (F.5)

Fourier Transform Properties
Assuming $g(t) \Leftrightarrow G(f)$:

Linearity:
$$a\,g(t) \Leftrightarrow a\,G(f)$$ (F.6)

Scaling:
$$g(t/T) \Leftrightarrow T\,G(fT)$$ (F.7)

Time-shifting:
$$g(t - t_0) \Leftrightarrow G(f)\,e^{-j2\pi f t_0}$$ (F.8)

Frequency-shifting:
$$g(t)\,e^{j2\pi f_0 t} \Leftrightarrow G(f - f_0)$$ (F.9)

Duality:
$$G(t) \Leftrightarrow g(-f)$$ (F.10)

Time-differentiation:
$$\frac{d}{dt}g(t) \Leftrightarrow j2\pi f\,G(f)$$ (F.11)

Time-integration:
$$\int_{-\infty}^{t} g(\tau)\,d\tau \Leftrightarrow \frac{G(f)}{j2\pi f} + \frac{G(0)}{2}\,\delta(f)$$ (F.12)

Multiplication:
$$g_1(t)\,g_2(t) \Leftrightarrow G_1(f) * G_2(f)$$ (F.13)

Convolution:
$$g_1(t) * g_2(t) \Leftrightarrow G_1(f)\,G_2(f)$$ (F.14)

Area:
$$\int_{-\infty}^{\infty} g(t)\,dt = G(0)$$ (F.15)

$$\int_{-\infty}^{\infty} G(f)\,df = g(0)$$ (F.16)

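Any entry in this table can be spot-checked numerically. As one example, here is a sketch (our own, with arbitrary random signals) of the convolution property (F.14): linear convolution in the time-domain corresponds to multiplication of suitably zero-padded spectra.

```python
import numpy as np

# Convolution property: FFT of (g1 * g2) equals FFT(g1) x FFT(g2),
# provided both transforms are zero-padded to the full output length.
N = 64
rng = np.random.default_rng(0)
g1 = rng.standard_normal(N)
g2 = rng.standard_normal(N)

full = np.convolve(g1, g2)        # linear convolution, length 2N - 1
M = len(full)
lhs = np.fft.fft(full)
rhs = np.fft.fft(g1, M) * np.fft.fft(g2, M)   # zero-padded transforms
assert np.allclose(lhs, rhs)
print("convolution in time = multiplication in frequency")
```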
Summary

Aperiodic waveforms do not have a Fourier series; they have a Fourier transform. Periodic waveforms also have a Fourier transform if we allow for the existence of impulses in the transform.

A spectrum, or spectral density, of any waveform is a graph of the amplitude and phase of the Fourier transform of the waveform.

To find the Fourier transform of a signal, we start with a known Fourier transform pair, and apply any number of Fourier transform pair properties to arrive at the solution. We can check if necessary by using the definition of the Fourier transform and its inverse.

The Fourier transform of a periodic signal is a weighted train of impulses; each impulse occurs at a harmonic of the fundamental and is weighted by the corresponding Fourier series coefficient at that frequency.

References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

Exercises
1.
Write an expression for the time domain representation of the voltage signal with the double-sided spectrum given below:

[Spectrum X(f): two impulses, of weight 2∠-30° at f = -1000 Hz and 2∠30° at f = +1000 Hz.]

What is the power of the signal?

2.
Use the integration and time shift rules to express the Fourier transform of the
pulse below as a sum of exponentials.

[The sketch shows x(t): a rectangular pulse of amplitude 2 extending from t = -1 to t = 0.]

3.
Find the Fourier transforms of the following functions:
[Four sketched waveforms, from which only partial detail is recoverable: (a) g(t) of height 10 with breakpoints near t = -2 and -1; (b) one lobe of g(t) = cos t for -π/2 ≤ t ≤ π/2; (c) two identical cycles of a t² segment starting at t = 0; (d) a staircase g(t) stepping through heights 1, 2, 3 from t = 0.]
4.
Show that:

$$e^{-a\left|t\right|} \Leftrightarrow \frac{2a}{a^2 + \left(2\pi f\right)^2}$$

5.
The Fourier transform of a pulse $p(t)$ is $P(f)$. Find the Fourier transform of $g(t)$ in terms of $P(f)$.

[The sketch shows p(t) as a unit-height pulse of width 1, and g(t) as a similar pulse of height 2 located near t = -1.5.]

6.
Show that:
$$g(t - t_0) + g(t + t_0) \Leftrightarrow 2G(f)\cos(2\pi f t_0)$$

7.
Use $G_1(f)$ and $G_2(f)$ to evaluate $g_1(t) * g_2(t)$ for the signals shown below:

[The sketches show two identical one-sided exponentials, $a e^{-at}u(t)$.]
8.
Find $G_1(f) * G_2(f)$ for the signals shown below:

[The sketches show two spectra confined to $-f_0 \le f \le f_0$: $G_1(f)$ of height A and $G_2(f)$ of height k.]

9.
Determine signals $g(t)$ whose Fourier transforms are shown below:

(a) [Magnitude $\left|G(f)\right|$ flat between $-f_0$ and $f_0$; linear phase $\angle G(f) = -2\pi f t_0$.]

(b) [Magnitude $\left|G(f)\right|$ flat between $-f_0$ and $f_0$; phase a constant of magnitude $\pi/2$, odd in $f$ ($+\pi/2$ on one side of the origin, $-\pi/2$ on the other).]

10.
Using the time-shifting property, determine the Fourier transform of the signal
shown below:

[The sketch shows an odd pulse pair: g(t) = A for 0 < t < T and g(t) = -A for -T < t < 0.]

11.
Using the time-differentiation property, determine the Fourier transform of the
signal below:

[The sketch shows a waveform bounded by +A and -A over -2 ≤ t ≤ 2; the exact shape is not recoverable from the figure.]

William Thomson (Lord Kelvin) (1824-1907)
William Thomson was probably the first true electrical engineer. His
engineering was firmly founded on a solid bedrock of mathematics. He
invented, experimented, advanced the state-of-the-art, was entrepreneurial, was
a businessman, had a multi-disciplinary approach to problems, held office in
the professional body of his day (the Royal Society), published papers, gave
lectures to lay people, strived for an understanding of basic physical principles
and exploited that knowledge for the benefit of mankind.
William Thomson was born in Belfast, Ireland. His father was a professor of
engineering. When Thomson was 8 years old his father was appointed to the
chair of mathematics at the University of Glasgow. By age 10, William
Thomson was attending Glasgow University. He studied astronomy, chemistry
and natural philosophy (physics, heat, electricity and magnetism). Prizes in
Greek, logic (philosophy), mathematics, astronomy and physics marked his
progress. In 1840 he read Fourier's The Analytical Theory of Heat and wrote:

"I had become filled with the utmost admiration for the splendour and poetry of Fourier... I took Fourier out of the University Library; and in a fortnight I had mastered it - gone right through it."
At the time, lecturers at Glasgow University took a strong interest in the
approach of the French mathematicians towards physical science, such as
Lagrange, Laplace, Legendre, Fresnel and Fourier. In 1840 Thomson also read
Laplace's Mécanique Céleste and visited Paris.
In 1841 Thomson entered Cambridge and in the same year he published a
paper on Fourier's expansions of functions in trigonometrical series. A more
important paper On the uniform motion of heat and its connection with the
mathematical theory of electricity was published in 1842.
The examinations in Cambridge were fiercely competitive exercises in problem solving against the clock. The best candidates were trained as for an athletics contest. Thomson (like Maxwell later) came second. A day before he left Cambridge, his coach gave him two copies of Green's Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism.
After graduating, he moved to Paris on the advice of his father and because of
his interest in the French approach to mathematical physics. Thomson began
trying to bring together the ideas of Faraday, Coulomb and Poisson on
electrical theory. He began to try and unify the ideas of action-at-a-distance,
the properties of the ether and ideas behind an electrical fluid. He also
became aware of Carnot's view of heat.
In 1846, at the age of twenty two, he returned to Glasgow on a wave of
testimonials from, among others, De Morgan, Cayley, Hamilton, Boole,
Sylvester, Stokes and Liouville, to take up the post of professor of natural
philosophy. In 1847-49 he collaborated with Stokes on hydrodynamic studies,
which Thomson applied to electrical and atomic theory. In electricity Thomson
provided the link between Faraday and Maxwell. He was able to mathematise
Faraday's laws and to show the formal analogy between problems in heat and
electricity. Thus the work of Fourier on heat immediately gave rise to theorems
on electricity and the work of Green on potential theory immediately gave rise
to theorems on heat flow. Similarly, methods used to deal with linear and
rotational displacements in elastic solids could be applied to give results on
electricity and magnetism. The ideas developed by Thomson were later taken
up by Maxwell in his new theory of electromagnetism.
Thomson's other major contribution to fundamental physics was his
combination of the almost forgotten work of Carnot with the work of Joule on
the conservation of energy to lay the foundations of thermodynamics. The
thermodynamical studies of Thomson led him to propose an absolute
temperature scale in 1848 (The Kelvin absolute temperature scale, as it is now
known, was defined much later after conservation of energy was better
understood).

The Age of the Earth
In the first decades of the nineteenth century geological evidence for great
changes in the past began to build up. Large areas of land had once been under
water, mountain ranges had been thrown up from lowlands and the evidence of
fossils showed the past existence of species with no living counterparts. Lyell,
in his Principles of Geology, sought to explain these changes by causes now
in operation. According to his theory, processes such as slow erosion by
wind and water; gradual deposition of sediment by rivers; and the cumulative
effect of earthquakes and volcanic action combined over very long periods of
time to produce the vast changes recorded in the Earth's surface. Lyell's so-called uniformitarian theory demanded that the age of the Earth be measured
in terms of hundreds of millions and probably in terms of billions of years.
Lyell was able to account for the disappearance of species in the geological
record but not for the appearance of new species. A solution to this problem
was provided by Charles Darwin (and Wallace) with his theory of evolution by
natural selection. Darwin's theory also required vast periods of time for
operation. For natural selection to operate, the age of the Earth had to be
measured in many hundreds of millions of years.
Such demands for vast amounts of time run counter to the laws of
thermodynamics. Every day the sun radiates immense amounts of energy. By
the law of conservation of energy there must be some source of this energy.
Thomson, as one of the founders of thermodynamics, was fascinated by this
problem. Chemical processes (such as the burning of coal) are totally
insufficient as a source of energy and Thomson was forced to conclude that
gravitational potential energy was turned into heat as the sun contracted. On
this assumption his calculations showed that the Sun (and therefore the Earth)
was around 100 million years old.

However, Thomson's most compelling argument concerned the Earth rather than the Sun. It is well known that the temperature of the Earth increases with depth, and

"this implies a continual loss of heat from the interior, by conduction outwards through or into the upper crust. Hence, since the upper crust does not become hotter from year to year there must be a loss of heat from the whole earth. It is possible that no cooling may result from this loss of heat but only an exhaustion of potential energy which in this case could scarcely be other than chemical."
Since there is no reasonable mechanism to keep a chemical reaction going at a
steady pace for millions of years, Thomson concluded that the earth is "merely a warm chemically inert body cooling". Thomson was led to believe that the Earth was a solid body and that it had solidified at a more or less uniform temperature. Taking the best available measurements of the conductivity of the Earth and the rate of temperature change near the surface, he arrived at an estimate of 100 million years as the age of the Earth (confirming his calculations of the Sun's age).
The problems posed to Darwin's theory of evolution became serious as Thomson's arguments sank in. In the fifth edition of The Origin of Species, Darwin attempted to adjust to the new time scale by allowing greater scope for evolution by processes other than natural selection. Darwin was forced to ask for a suspension of judgment of his theory, and in the final chapter he added:
With respect to the lapse of time not having been sufficient since our planet
was consolidated for the assumed amount of organic change, and this
objection, as argued by [Thomson], is probably one of the gravest yet
advanced, I can only say, firstly that we do not know at what rate species
change as measured by years, and secondly, that many philosophers are not
yet willing to admit that we know enough of the constitution of the universe
and of the interior of our globe to speculate with safety on its past duration.
(Darwin, The Origin of Species, Sixth Edition, p.409)

The chief weakness of Thomson's arguments was exposed by Huxley:

"...this seems to be one of the many cases in which the admitted accuracy of mathematical processes is allowed to throw a wholly inadmissible appearance of authority over the results obtained by them. Mathematics may be compared to a mill of exquisite workmanship, which grinds you stuff of any degree of fineness; but nevertheless, what you get out depends on what you put in; and as the grandest mill in the world will not extract wheat-flour from peascods, so pages of formulae will not get a definite result out of loose data."

[Margin note: A variant of the adage: garbage in equals garbage out.]
(Quarterly Journal of the Geological Society of London, Vol. 25, 1869)
However, Thomson's estimates were the best available, and for the next thirty years geology took its time from physics, and biology took its time from geology. But Thomson and his followers began to adjust his first estimate down, until at the end of the nineteenth century the best physical estimates of the age of the Earth and Sun were about 20 million years, whilst the minimum the geologists could allow was closer to Thomson's original 100 million years.
Then in 1904 Rutherford announced that the radioactive decay of radium was
accompanied by the release of immense amounts of energy and speculated that
this could replace the heat lost from the surface of the Earth.
"The discovery of the radioactive elements...thus increases the possible limit of the duration of life on this planet, and allows the time claimed by the geologist and biologist for the process of evolution."
(Rutherford quoted in Burchfield, p.164)
A problem for the geologists was now replaced by a problem for the physicists.
The answer was provided by a theory which was just beginning to be gossiped
about. Einstein's theory of relativity extended the principle of conservation of energy by taking matter as a form of energy. It is the conversion of matter to heat which maintains the Earth's internal temperature and supplies the energy
radiated by the sun. The ratios of lead isotopes in the Earth compared to
meteorites now lead geologists to give the Earth an age of about 4.55 billion
years.

The Transatlantic Cable
The invention of the electric telegraph in the 1830s led to a network of
telegraph wires covering England, western Europe and the more settled parts of
the USA. The railroads, spawned by the dual inventions of steam and steel,
were also beginning to crisscross those same regions. It was vital for the
smooth and safe running of the railroads, as well as the running of empires, to
have speedy communication.
Attempts were made to provide underwater links between the various separate
systems. The first cable between Britain and France was laid in 1850. The
operators found the greatest difficulty in transmitting even a few words. After
12 hours a trawler accidentally caught and cut the cable. A second, more
heavily armoured cable was laid and it was a complete success. The short lines
worked, but the operators found that signals could not be transmitted along
submarine cables as fast as along land lines without becoming confused.
In spite of the record of the longer lines, the American Cyrus J. Field
proposed a telegraph line linking Europe and America. Oceanographic surveys
showed that the bottom of the Atlantic was suitable for cable laying. The
connection of existing land telegraph lines had produced a telegraph line of the
length of the proposed cable through which signals had been passed extremely
rapidly. The British government offered a subsidy and money was rapidly
raised.
Faraday had predicted signal retardation but he and others like Morse had in
mind a model of a submarine cable as a hosepipe which took longer to fill with
water (signal) as it got longer. The remedy was thus to use a thin wire (so that
less electricity was needed to charge it) and high voltages to push the signal
through. Faraday's opinion was shared by the electrical adviser to the project,
Dr Whitehouse (a medical doctor).
Thomson's researches had given him a clearer mathematical picture of the
problem. The current in a telegraph wire in air is approximately governed by
the wave equation. A pulse on such a wire travels at a well defined speed with

no change of shape or magnitude with time. Signals can be sent as close
together as the transmitter can make them and the receiver distinguish them.
In undersea cables of the type proposed, capacitive effects dominate and the
current is approximately governed by the diffusion (i.e. heat) equation. This
equation predicts that electric pulses will last for a time that is proportional to
the length of the cable squared. If two or more signals are transmitted within
this time, the signals will be jumbled at the receiver. In going from submarine
cables of 50 km length to cables of length 2400 km, retardation effects are
2500 times worse. Also, increasing the voltage makes this jumbling (called
intersymbol interference) worse. Finally, the diffusion equation shows that the
wire should have as large a diameter as possible (small resistance).
Whitehouse, whose professional reputation was now involved, denied these
conclusions. Even though Thomson was on the board of directors of Field's
company, he had no authority over the technical advisers. Moreover the
production of the cable was already underway on principles contrary to
Thomson's. Testing the cable, Thomson was astonished to find that some
sections conducted only half as well as others, even though the manufacturers
were supplying copper to the then highest standards of purity.
Realising that the success of the enterprise would depend on a fast, sensitive
detector, Thomson set about to invent one. The problem with an ordinary
galvanometer is the high inertia of the needle. Thomson came up with the
mirror galvanometer in which the pointer is replaced by a beam of light.
In a first attempt in 1857 the cable snapped after 540 km had been laid. In
1858, Europe and America were finally linked by cable. On 16 August it
carried a 99-word message of greeting from Queen Victoria to President
Buchanan. But that 99-word message took 16 hours to get through. In vain,
Whitehouse tried to get his receiver to work. Only Thomson's galvanometer
was sensitive enough to interpret the minute and blurred messages coming
through. Whitehouse ordered that a series of huge two thousand volt induction
coils be used to try to push the message through faster; after four weeks of

this treatment the insulation finally failed; 2500 tons of cable and £350 000 of
capital lay useless on the ocean floor.
In 1859 eighteen thousand kilometres of undersea cable had been laid in other
parts of the world, and only five thousand kilometres were operating. In 1861
civil war broke out in the United States. By 1864 Field had raised enough
capital for a second attempt. The cable was designed in accordance with
Thomson's theories. Strict quality control was exercised: the copper was so pure that for the next 50 years "telegraphists' copper" was the purest available. Once again the British Government supported the project; the importance of
quick communication in controlling an empire was evident to everybody. The
new cable was mechanically much stronger but also heavier. Only one ship
was large enough to handle it, and that was Brunel's Great Eastern. She was five times larger than any other existing ship.
This time there was a competitor. The Western Union Company had decided to
build a cable along the overland route across America, Alaska, the Bering
Straits, Siberia and Russia to reach Europe the long way round. The
commercial success of the cable would therefore depend on the rate at which
messages could be transmitted. Thomson had promised the company a rate of
8 or even 12 words a minute. Half a million pounds was being staked on the
correctness of the solution of a partial differential equation.
In 1865 the Great Eastern laid cable for nine days, but after 2000 km the cable
parted. After two weeks of unsuccessfully trying to recover the cable, the
expedition left a buoy to mark the spot and sailed for home. Since
communication had been perfect up until the final break, Thomson was
confident that the cable would do all that was required. The company decided
to build and lay a new cable and then go back and complete the old one.
Cable laying for the third attempt started on 12 July 1866 and the cable was
landed on the morning of the 27th. On the 28th the cable was open for business
and earned £1000. Western Union ordered all work on their project to be
stopped at a loss of $3 000 000.

On 1 September after three weeks of effort the old cable was recovered and on
8 September a second perfect cable linked America and Europe. A wave of
knighthoods swept over the engineers and directors. The patents which
Thomson held made him a wealthy man.
For his work on the transatlantic cable Thomson was created Baron Kelvin of
Largs in 1892. The Kelvin is the river which runs through the grounds of
Glasgow University and Largs is the town on the Scottish coast where
Thomson built his house.

[Margin note: There are many other factors influencing local tides, such as channel width, which produce phenomena akin to resonance in the tides. One example of this is the narrow Bay of Fundy, between Nova Scotia and New Brunswick, where the tide can be as high as 21 m. In contrast, the Mediterranean Sea is almost tideless because it is a broad body of water with a narrow entrance.]

Other Achievements
Thomson worked on several problems associated with navigation: sounding machines, lighthouse lights, compasses and the prediction of tides. Tides are primarily due to the gravitational effects of the Moon, Sun and Earth on the oceans, but their theoretical investigation, even in the simplest case of a single ocean covering a rigid Earth to a uniform depth, is very hard. Even today, the study of only slightly more realistic models is only possible by numerical computer modelling. Thomson recognised that the forces affecting the tides change periodically. He then approximated the height of the tide by a trigonometric polynomial, a Fourier series with a finite number of terms. The coefficients of the polynomial required calculation of the Fourier series coefficients by numerical integration, a task that required "not less than twenty hours of calculation" by skilled arithmeticians. To reduce this labour Thomson designed and built a machine which would trace out the predicted height of the tides for a year in a few minutes, given the Fourier series coefficients.

[Margin note: Michelson (of Michelson-Morley fame) was to build a better machine that used up to 80 Fourier series coefficients. The production of blips at discontinuities by this machine was explained by Gibbs in two letters to Nature. These blips are now referred to as the Gibbs phenomenon.]
Thomson also built another machine, called the harmonic analyser, to perform the task, which seemed to the Astronomer Royal "so complicated and difficult that no machine could master it", of computing the Fourier series coefficients from the record of past heights. This was the first major victory in the struggle to substitute brass for brain in calculation.

Thomson introduced many teaching innovations to Glasgow University. He
introduced laboratory work into the degree courses, keeping this part of the
work distinct from the mathematical side. He encouraged the best students by
offering prizes. There were also prizes which Thomson gave to the student that
he considered most deserving.
Thomson worked in collaboration with Tait to produce the now famous text
Treatise on Natural Philosophy which they began working on in the early
1860s. Many volumes were intended but only two were ever written which
cover kinematics and dynamics. These became standard texts for many
generations of scientists.
In later life he developed a complete range of measurement instruments for
physics and electricity. He also established standards for all the quantities in
use in physics. In all he published over 300 major technical papers during the
53 years that he held the chair of Natural Philosophy at the University of
Glasgow.
During the first half of Thomson's career he seemed incapable of being wrong
while during the second half of his career he seemed incapable of being right.
This seems too extreme a view, but Thomson's refusal to accept atoms, his
opposition to Darwin's theories, his incorrect speculations as to the age of the
Earth and the Sun, and his opposition to Rutherford's ideas of radioactivity,
certainly put him on the losing side of many arguments later in his career.
William Thomson, Lord Kelvin, died in 1907 at the age of 83. He was buried
in Westminster Abbey in London where he lies today, adjacent to Isaac
Newton.

References
Burchfield, J.D.: Lord Kelvin and The Age of the Earth, Macmillan, 1975.
Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.
Encyclopedia Britannica, 2004.
Morrison, N.: Introduction to Fourier Analysis, John Wiley & Sons, Inc., 1994.
Thompson, S.P.: The Life of Lord Kelvin, London, 1976.
Kelvin & Tait: Treatise on Natural Philosophy, Appendix B.
Lecture 3A Filtering and Sampling
Response to a sinusoidal input. Response to an arbitrary input. Ideal filters.
What does a filter do to a signal? Sampling. Reconstruction. Aliasing.
Practical sampling and reconstruction. Summary of the sampling and
reconstruction process. Finding the Fourier series of a periodic function from
the Fourier transform of a single period. Windowing in the time domain.
Practical multiplication and convolution.

Introduction
Since we can now represent signals in terms of a Fourier series (for periodic signals) or a Fourier transform (for aperiodic signals), we seek a way to describe a system in terms of frequency. That is, we seek a model of a linear, time-invariant system governed by continuous-time differential equations that expresses its behaviour with respect to frequency, rather than time. The concept of a signal's spectrum and a system's frequency response will be seen to be of fundamental importance in the frequency-domain characterisation of a system.

[Margin note: We have a description of signals in the frequency domain; we need one for systems.]

The power of the frequency-domain approach will be seen as we are able to determine a system's output given almost any input. Fundamental signal operations, such as sampling / reconstruction and modulation / demodulation, can also be explained easily in the frequency domain, where they would otherwise appear bewildering in the time domain.

Response to a Sinusoidal Input


We have already seen that the output of an LTI system is given by:

$$y(t) = h(t) * x(t)$$ (3A.1)

[Margin note: Starting with a convolution description of a system.]

if initial conditions are zero. Suppose the input to the system is:

$$x(t) = A\cos(\omega_0 t + \phi)$$ (3A.2)

[Margin note: We apply a sinusoid.]

We have already seen that this can be expressed (thanks to Euler) as:

$$x(t) = \frac{A}{2}e^{j\phi}e^{j\omega_0 t} + \frac{A}{2}e^{-j\phi}e^{-j\omega_0 t} = Xe^{j\omega_0 t} + X^*e^{-j\omega_0 t}$$ (3A.3)

[Margin note: A sinusoid is just a sum of two complex conjugate counter-rotating phasors.]

where $X$ is the phasor representing $x(t)$. Inserting this into Eq. (3A.1) gives:

$$y(t) = \int_{-\infty}^{\infty} h(\tau)\left[Xe^{j\omega_0 (t-\tau)} + X^*e^{-j\omega_0 (t-\tau)}\right]d\tau = \left[\int_{-\infty}^{\infty} h(\tau)e^{-j\omega_0 \tau}d\tau\right]Xe^{j\omega_0 t} + \left[\int_{-\infty}^{\infty} h(\tau)e^{j\omega_0 \tau}d\tau\right]X^*e^{-j\omega_0 t}$$ (3A.4)

This rather unwieldy expression can be simplified. First of all, if we take the Fourier transform of the impulse response, we get:

$$H(\omega) = \int_{-\infty}^{\infty} h(t)e^{-j\omega t}dt$$ (3A.5)

[Margin note: The Fourier transform of the impulse response appears in our analysis.]

where obviously $\omega = 2\pi f$. Now Eq. (3A.4) can be written as:

$$y(t) = H(\omega_0)Xe^{j\omega_0 t} + H(-\omega_0)X^*e^{-j\omega_0 t}$$ (3A.6)

If $h(t)$ is real, then:

$$H(-\omega) = H^*(\omega)$$ (3A.7)

which should be obvious by looking at the definition of the Fourier transform. Now let:

$$Y = H(\omega_0)X$$ (3A.8)

[Margin note: ...and it relates the output phasor to the input phasor!]

This equation is of fundamental importance! It says that the output phasor of a system is equal to the input phasor of the system, scaled in magnitude and changed in angle by an amount equal to $H(\omega_0)$ (a complex number). Also:

$$Y^* = H^*(\omega_0)X^* = H(-\omega_0)X^*$$ (3A.9)

Eq. (3A.6) can now be written as:

$$y(t) = Ye^{j\omega_0 t} + Y^*e^{-j\omega_0 t}$$ (3A.10)

which is just another way of writing the sinusoid:

$$y(t) = A\left|H(\omega_0)\right|\cos\left(\omega_0 t + \phi + \angle H(\omega_0)\right)$$ (3A.11)

[Margin note: The magnitude and phase of the input sinusoid change according to the Fourier transform of the impulse response.]

Hence the response resulting from the sinusoidal input $x(t) = A\cos(\omega_0 t + \phi)$ is also a sinusoid with the same frequency $\omega_0$, but with the amplitude scaled by the factor $\left|H(\omega_0)\right|$ and the phase shifted by an amount $\angle H(\omega_0)$.
The function H is termed the frequency response. H is called the Frequency,
magnitude and

magnitude response and H is called the phase response. Note that the phase response
defined

system impulse response and the frequency response form a Fourier transform
pair:

ht H f

(3A.12)

The impulse
response and
frequency response
form a Fourier
transform pair

We now have an easy way of analysing systems with sinusoidal inputs: simply
determine H f and apply Y H f 0 X .
There are two ways to get H f . We can find the system impulse response
ht

Two ways to find the


and take the Fourier transform, or we can find it directly from the frequency response

differential equation describing the system.
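As a quick illustrative sketch (not part of the notes), here is this procedure in Python for an RC lowpass of the kind analysed in the next example; the time constant and input sinusoid are assumptions, with the test frequency deliberately placed at the cutoff:

```python
import numpy as np

RC = 1e-3  # assumed time constant (1 ms)

def H(f):
    # Frequency response of the RC lowpass: H(f) = (1/RC) / (j*2*pi*f + 1/RC)
    return (1 / RC) / (1j * 2 * np.pi * f + 1 / RC)

# Input x(t) = A cos(2*pi*f0*t), represented by the phasor X = A
A = 2.0
f0 = 1 / (2 * np.pi * RC)        # test frequency placed at the cutoff

Y = H(f0) * A                    # output phasor: Y = H(f0) X
amplitude = abs(Y)               # input amplitude scaled by |H(f0)|
phase = np.angle(Y)              # phase shifted by angle(H(f0))
# At the cutoff: amplitude = A/sqrt(2), phase = -pi/4
```

One complex multiplication replaces solving the differential equation for the steady state.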

Example

For the simple RC circuit below, find the forced response to an arbitrary
sinusoid (this is also termed the sinusoidal steady-state response).

[Figure 3A.1: a simple RC lowpass circuit, input vi, output vo]

The input/output differential equation for the circuit is:

    dvo(t)/dt + (1/RC) vo(t) = (1/RC) vi(t)     (3A.13)

which is obtained by KVL. Since the input is a sinusoid, which is really just a
sum of conjugate complex exponentials, we know from Eq. (3A.6) that if the
input is Vi = A e^{jω0t} then the output is Vo = H(ω0) A e^{jω0t}. Note that
Vi and Vo are complex numbers, and if the factor e^{jω0t} were suppressed they
would be phasors. The differential equation Eq. (3A.13) becomes:

    (d/dt)[H(ω0) A e^{jω0t}] + (1/RC) H(ω0) A e^{jω0t} = (1/RC) A e^{jω0t}     (3A.14)

and thus:

    jω0 H(ω0) A e^{jω0t} + (1/RC) H(ω0) A e^{jω0t} = (1/RC) A e^{jω0t}     (3A.15)

Dividing both sides by Vi = A e^{jω0t} gives:

    jω0 H(ω0) + (1/RC) H(ω0) = 1/RC     (3A.16)

and therefore:

    H(ω0) = (1/RC) / (jω0 + 1/RC)     (3A.17)

which yields for an arbitrary frequency:

    H(ω) = (1/RC) / (jω + 1/RC)     (3A.18)

This is the frequency response for the simple RC circuit. As a check, we know
that the impulse response is:

    h(t) = (1/RC) e^{−t/RC} u(t)     (3A.19)

Using your standard transforms, show that the frequency response is the
Fourier transform of the impulse response.
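That check can also be done numerically; the following sketch (the RC value and test frequency are illustrative assumptions) approximates the transform integral of Eq. (3A.19) on a truncated time grid and compares it with Eq. (3A.18):

```python
import numpy as np

RC = 1e-3                                   # assumed time constant
dt = RC / 1000
t = np.arange(0.0, 50 * RC, dt)             # 50 time constants: h(t) has decayed to ~0
h = (1 / RC) * np.exp(-t / RC)              # impulse response (1/RC) e^{-t/RC} u(t)

f = 200.0                                   # assumed test frequency (Hz)
w = 2 * np.pi * f

# H(w) = integral of h(t) e^{-jwt} dt, approximated by a Riemann sum
H_numeric = np.sum(h * np.exp(-1j * w * t)) * dt
H_analytic = (1 / RC) / (1j * w + 1 / RC)
```

The two values agree to within the discretisation error of the sum.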
The magnitude function is:

    |H(ω)| = (1/RC) / √(ω² + (1/RC)²)     (3A.20)

and the phase function is:

    ∠H(ω) = −tan⁻¹(ωRC)     (3A.21)
Plots of the magnitude and phase function are shown below:

[Figure 3A.2: the magnitude response |H(ω)| falls from 1 at ω = 0 through 1/√2 at ω0; the phase response ∠H(ω) falls from 0° through −45° at ω0 towards −90°]
The behaviour of the RC circuit is summarized by noting that it passes low
frequency signals without any significant attenuation and without producing
any significant phase shift. As the frequency increases, the attenuation and
the phase shift become larger. Finally, as the frequency increases towards
infinity, the RC circuit completely blocks the sinusoidal input. As a result of
this behaviour, the circuit is an example of a lowpass filter. The frequency
ω0 = 1/RC is termed the cutoff frequency. The bandwidth of the filter is also
equal to ω0.
Response to an Arbitrary Input

Since we can readily establish the response of a system to a single sinusoid,
we should be able to find the response of a system to a sum of sinusoids. That
is, a spectrum in gives a spectrum out, with the relationship between the
output and input spectra given by the frequency response. For periodic inputs,
the spectrum is effectively given by the Fourier series, and for aperiodic
inputs we use the Fourier transform. The system in both cases is described by
its frequency response.
Periodic Inputs

For periodic inputs, we can express the input signal by a complex exponential
Fourier series:

    x(t) = Σ_{n=−∞}^{∞} Xn e^{jnω0t}     (3A.22)

It follows from the previous section that the output response resulting from
the complex exponential input Xn e^{jnω0t} is equal to H(nω0) Xn e^{jnω0t}. By
linearity, the response to the periodic input x(t) is:

    y(t) = Σ_{n=−∞}^{∞} H(nω0) Xn e^{jnω0t}     (3A.23)

Since the right-hand side is a complex exponential Fourier series, the output
y(t) must be periodic, with fundamental frequency equal to that of the input,
i.e. the output has the same period as the input.

It can be seen that the only thing we need to determine is the new Fourier
series coefficients, given by:

    Yn = H(nω0) Xn     (3A.24)
The output magnitude spectrum is just:

    |Yn| = |H(nω0)| |Xn|     (3A.25)

and the output phase spectrum is:

    ∠Yn = ∠H(nω0) + ∠Xn     (3A.26)

These relationships describe how the system processes the various complex
exponential components comprising the periodic input signal. In particular,
Eq. (3A.25) determines whether the system will pass or attenuate a given
component of the input, and Eq. (3A.26) determines the phase shift the system
will give to a particular component of the input.
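The coefficient-by-coefficient action of Eqs. (3A.24) to (3A.26) can be sketched numerically; here the input is assumed to be a unit square wave and the filter the RC lowpass from the earlier example (the fundamental and time constant are illustrative choices):

```python
import numpy as np

RC = 1e-3                                  # assumed filter time constant
f0 = 50.0                                  # assumed fundamental frequency (Hz)
w0 = 2 * np.pi * f0

def H(w):
    # RC lowpass frequency response, H(w) = 1 / (j*w*RC + 1)
    return 1 / (1j * w * RC + 1)

# Complex exponential Fourier series of a unit square wave: X_n = -2j/(pi*n), n odd
n = np.arange(-25, 26)
Xn = np.zeros(len(n), dtype=complex)
odd = n % 2 != 0
Xn[odd] = -2j / (np.pi * n[odd])

Yn = H(n * w0) * Xn                        # output coefficients: Y_n = H(n*w0) X_n
# Every harmonic is attenuated (|H| < 1), and higher harmonics are attenuated more
```

Summing Yn e^{jnω0t} would synthesize the filtered (rounded-off) square wave directly.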
Aperiodic Inputs

Taking the Fourier transform of both sides of the time domain input/output
relationship of an LTI system:

    y(t) = h(t) * x(t)     (3A.27)

we get:

    Y(f) = ∫ [h(t) * x(t)] e^{−jωt} dt     (3A.28)

Substituting the definition of convolution, we get:

    Y(f) = ∫ [∫ h(λ) x(t−λ) dλ] e^{−jωt} dt     (3A.29)

This can be rewritten in the form:

    Y(f) = ∫ h(λ) [∫ x(t−λ) e^{−jωt} dt] dλ     (3A.30)

Using the change of variable σ = t − λ in the second integral gives:

    Y(f) = ∫ h(λ) [∫ x(σ) e^{−jω(σ+λ)} dσ] dλ     (3A.31)

Factoring out e^{−jωλ} from the second integral, we can write:

    Y(f) = ∫ h(λ) e^{−jωλ} dλ · ∫ x(σ) e^{−jωσ} dσ     (3A.32)

which is:

    Y(f) = H(f) X(f)     (3A.33)

This is also a proof of the "convolution in time" property of Fourier
transforms.

Eq. (3A.33) is the frequency-domain version of the equation given by
Eq. (3A.27). It says that the spectrum of the output signal is equal to the
product of the frequency response and the spectrum of the input signal.

The output magnitude spectrum is:

    |Y(f)| = |H(f)| |X(f)|     (3A.34)

and the output phase spectrum is:

    ∠Y(f) = ∠H(f) + ∠X(f)     (3A.35)

Note that the frequency-domain description applies to all inputs that can be
Fourier transformed, including sinusoids if we allow impulses in the spectrum.
Periodic inputs are then a special case of Eq. (3A.33).

By similar arguments, together with the duality property of the Fourier
transform, it can be shown that convolution in the frequency-domain is
equivalent to multiplication in the time-domain.
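Eq. (3A.33) is easy to verify numerically with the DFT, for which the corresponding result holds exactly for circular convolution; this is an illustrative sketch, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Circular convolution computed directly in the time domain
y_time = np.array([sum(h[k] * x[(i - k) % N] for k in range(N)) for i in range(N)])

# The same result via the frequency domain: Y = H X, then invert
y_freq = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real
```

The two results agree to machine precision, which is the discrete counterpart of "convolution in time is multiplication in frequency".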
Ideal Filters

Now that we have a feel for the frequency-domain description and behaviour
of a system, we will briefly examine a very important application of electronic
circuits - that of frequency selection, or filtering. Here we will examine ideal
filters - the topic of real filter design is rather involved.

Ideal filters pass sinusoids within a given frequency range, and reject
(completely attenuate) all other sinusoids. An example of an ideal lowpass
filter is shown below:

[Figure 3A.3: an ideal lowpass filter, with |H| = 1 over the passband, 0 over the stopband, and cutoff ω0]

Other basic types of filters are highpass, bandpass and bandstop. All have
similar definitions as given in Figure 3A.3. Frequencies that are passed are said
to be in the passband, while those that are rejected lie in the stopband. The
point where passband and stopband meet is called ω0, the cutoff frequency.
The term bandwidth as applied to a filter corresponds to the width of the
passband.

An ideal lowpass filter with a bandwidth of B Hz has a magnitude response:

    |H(f)| = K rect(f/2B)     (3A.36)
Phase Response of an Ideal Filter

Most filter specifications deal with the magnitude response. In systems where
the filter is designed to pass a particular wave shape, the phase response is
extremely important. For example, in a digital system we may be sending 1s
and 0s using a specially shaped pulse that has "nice" properties, e.g. its value
is zero at the centre of all other pulses. At the receiver it is passed through a
lowpass filter to remove high frequency noise. The filter introduces a delay of
D seconds, but the output of the filter is as close as possible to the desired
pulse shape.

This is illustrated below:

[Figure 3A.4: a pulse vi passed through a lowpass filter emerges as vo, delayed by D but retaining its shape]

To the pulse, the filter just looks like a delay. We can see that distortionless
transmission through a filter is characterised by a constant delay of the input
signal:

    vo(t) = K vi(t − D)     (3A.37)

In Eq. (3A.37) we have also included the fact that all frequencies in the
passband of the filter can have their amplitudes multiplied by a constant
without affecting the wave shape. Also note that Eq. (3A.37) applies only to
frequencies in the passband of a filter - we do not care about any distortion in
the stopband.
We will now relate these distortionless transmission requirements to the phase
response of the filter. From Fourier analysis we know that any periodic signal
can be decomposed into an infinite summation of sinusoidal signals. Let one of
these be:

    vi = A cos(ωt)     (3A.38)

From Eq. (3A.37), the output of the filter will be:

    vo = KA cos(ω(t − D)) = KA cos(ωt − ωD)     (3A.39)

The input and output signals differ only by the gain K and the phase angle,
which is:

    φ = −ωD     (3A.40)

That is, the phase response must be a straight line with negative slope that
passes through the origin.

In general, the requirement for the phase response in the passband to achieve
distortionless transmission through the filter is:

    D = −dφ/dω     (3A.41)

The delay D in this case is referred to as the group delay. (This means the
group of sinusoids that make up the wave shape have a delay of D.)

The ideal lowpass filter can be expressed completely as:

    H(f) = K rect(f/2B) e^{−j2πfD}     (3A.42)
Example

We would like to determine the maximum frequency for which transmission is
practically distortionless in the following simple filter (a model of a short
piece of co-axial cable, or twisted pair):

[Figure 3A.5: an RC lowpass filter with R = 1 kΩ, C = 1 nF, and ω0 = 1/RC]

We would also like to know the group delay caused by this filter.

We know the magnitude and phase response already:

    |H(ω)| = 1/√(1 + (ω/ω0)²)     (3A.43a)

    ∠H(ω) = −tan⁻¹(ω/ω0)     (3A.43b)
These responses are shown below:

[Figure 3A.6: the magnitude response |H(ω)|, with half-power value 1/√2 at ω0; the phase response ∠H(ω) compared with the ideal linear characteristic of slope −1/ω0, from which it deviates by π/4 − 1 radians at ω0]

Suppose we can tolerate a deviation in the magnitude response of 1% in the
passband. We then have:

    1/√(1 + (ω/ω0)²) ≥ 0.99, and therefore ω ≤ 0.1425 ω0     (3A.44)

Also, suppose we can tolerate a deviation in the delay of 1% in the passband.
We first find an expression for the delay:

    D = −dφ/dω = (d/dω) tan⁻¹(ω/ω0) = (1/ω0) · 1/(1 + (ω/ω0)²)     (3A.45)

and then impose the condition that the delay be within 1% of the delay at DC:

    1/(1 + (ω/ω0)²) ≥ 0.99, and therefore ω ≤ 0.1005 ω0     (3A.46)

We can see from Eqs. (3A.44) and (3A.46) that we must have ω ≤ 0.1 ω0 for
practically distortionless transmission. The delay for ω ≤ 0.1 ω0, according to
Eq. (3A.45), is approximately given by:

    D ≈ 1/ω0 = RC     (3A.47)

For the values shown in Figure 3A.5, the group delay is approximately 1 μs. In
practice, variations in the magnitude transfer function up to the half-power
frequency are considered tolerable (this is the bandwidth, BW, of the filter).
Over this range of frequencies, the phase deviates from the ideal linear
characteristic by at most π/4 − 1 ≈ −0.2146 radians (see Figure 3A.6).
Frequencies well below ω0 are transmitted practically without distortion, but
frequencies in the vicinity of ω0 will suffer some distortion.
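The two passband limits can be checked numerically from Eqs. (3A.43a) and (3A.45); a sketch (the dense frequency grid is an implementation choice):

```python
import numpy as np

R, C = 1e3, 1e-9
w0 = 1 / (R * C)                          # cutoff frequency, rad/s

w = np.linspace(0, 0.5 * w0, 200001)
mag = 1 / np.sqrt(1 + (w / w0) ** 2)      # |H(w)|, Eq. (3A.43a)
delay = (1 / w0) / (1 + (w / w0) ** 2)    # group delay D(w), Eq. (3A.45)

w_mag = w[mag >= 0.99].max()              # largest w with < 1% magnitude droop
w_delay = w[delay >= 0.99 / w0].max()     # largest w with < 1% delay deviation
# w_mag ~ 0.1425*w0 and w_delay ~ 0.1005*w0; the DC delay 1/w0 = RC = 1 us
```

The delay tolerance is the tighter of the two, which is why ω ≤ 0.1 ω0 is the practical limit quoted above.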
The ideal filter is unrealizable. To show this, take the inverse Fourier transform
of the ideal filter's frequency response, Eq. (3A.42):

    h(t) = 2BK sinc(2B(t − D))     (3A.48)

It should be clear that the impulse response is not zero for t < 0, and the filter
is therefore not causal (how can there be a response to an impulse at t = 0
before it is applied?). One way to design a real filter is simply to multiply
Eq. (3A.48) by u(t), the unit step.
What does a Filter do to a Signal?

Passing a periodic signal through a filter will distort the signal (so-called linear
distortion) because the filter will change the relative amplitudes and phases of
the sinusoids that make up its Fourier series. Once we can calculate the
amplitude change and phase shift that the filter imposes on an arbitrary
sinusoid, we are in a position to find out how each sinusoid is affected, and
hence synthesise the filtered waveform. In general, the output Fourier
transform is just the input Fourier transform multiplied by the filter frequency
response H(f).

[Figure 3A.7: a signal x(t) applied to a filter H(f), giving y(t); e.g. for the RC circuit, H(f) = 1/(j2πfRC + 1)]

    Y(f) = H(f) X(f)     (3A.49)

For a periodic signal:

    X(f) = Σ_{n=−∞}^{∞} Xn δ(f − nf0)     (3A.50)

and therefore:

    Y(f) = Σ_{n=−∞}^{∞} H(nf0) Xn δ(f − nf0)     (3A.51)
Example

A periodic signal has a Fourier series as given in the following table. What is
the Fourier series of the output if the signal is passed through a filter with
transfer function H(f) = j4πf? The period is 2 seconds.

Input Fourier series table:

    Harmonic (Hz) | Amplitude | Phase
    0             |           |
    0.5           |           | −30°
    1             |           | −90°

There are 3 components in the input signal, with frequencies 0, 0.5 Hz and
1 Hz. The complex gain of the filter at each frequency is:

Filter gain table:

    Harmonic (Hz) | H(f)            | Gain | Phase shift
    0             | 0               | 0    | 0°
    0.5           | j4π(0.5) = j2π  | 2π   | 90°
    1             | j4π(1) = j4π    | 4π   | 90°

Hence, the output Fourier series table is:

Output Fourier series table:

    Harmonic (Hz) | Amplitude | Phase
    0             | 0         |
    0.5           |           | 60°
    1             |           | 0°
Example

Suppose the same signal was sent through a filter with transfer function as
sketched:

[Figure 3A.8: the magnitude response |H(f)| falls from 1 at f = 0 to 0.5 at |f| = 1 Hz; the phase response ∠H(f) is linear over the same range]

The output Fourier series is:

Output Fourier series table:

    Harmonic (Hz) | Amplitude | Phase
    0             |           |
    0.5           |           | −120°
    1             | 1.5       | −270°
Sampling

Sampling is one of the most important operations we can perform on a signal.
Samples can be quantized and then operated upon digitally (digital signal
processing). Once processed, the samples are turned back into a continuous-
time waveform (e.g. CD, mobile phone!). Here we demonstrate how, if certain
parameters are right, a sampled signal can be reconstructed from its samples
almost perfectly.

Ideal sampling involves multiplying a waveform by a uniform train of
impulses. The result is a weighted train of impulses. The weights of the
impulses are the sample values to be used by a digital signal processor
(computer). An ideal sampler is shown below:

[Figure 3A.9: an ideal sampler forms gs(t) = g(t) p(t), where p(t) is a uniform impulse train with spacing Ts]

The period Ts of the sampling waveform p(t) is called the sample period or
sampling interval. The inverse of the sample period is fs = 1/Ts and is called
the sample rate or sampling frequency. It is usually expressed in samples per
second (Sa/s), or simply Hz.

Let g(t) be a time domain signal. If we multiply it by p(t) = Σ_k δ(t − kTs), we
get an ideally sampled version:

    gs(t) = g(t) Σ_{k=−∞}^{∞} δ(t − kTs)     (3A.52)

Graphically, in the time-domain, we have:

[Figure 3A.10: the sampled signal gs(t) is a train of impulses with spacing Ts, with the original signal g(t) as its envelope]

That is, for ideal sampling, the original signal forms an envelope for the train
of impulses, and we have simply generated a weighted train of impulses, where
each weight takes on the signal value at that particular instant of time.

Note that it is physically impossible to ideally sample a waveform, since we
cannot create a real function containing impulses. In practice, we use an
analog-to-digital converter to get actual values of a signal (e.g. a voltage).
Then, when we perform digital signal processing (DSP) on the signal, we
understand that we should treat the sample value as the weight of an impulse.
To practically sample a waveform using analog circuitry, we have to use finite-
value pulses. It will be shown later that it doesn't matter what pulse shape is
used for the sampling waveform - it could be rectangular, triangular, or indeed
any shape.

Signals and Systems 2014

Taking the Fourier transform of both sides of Eq. (3A.52):

    Gs(f) = fs Σ_{n=−∞}^{∞} G(f − nfs)     (3A.53)

Graphically, in the frequency-domain, we have:

[Figure 3A.11: the original spectrum G(f), bandlimited to B Hz; the sampled spectrum Gs(f), consisting of scaled replicas of G(f) repeated at multiples of fs; and the impulse-train spectrum P(f), impulses of weight fs at multiples of fs]

Thus the Fourier transform of the sampled waveform, Gs(f), is a scaled
replica of the original, periodically repeated along the frequency axis. The
spacing between repeats is equal to the sampling frequency, fs (the inverse of
the sampling interval).
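The replication in Eq. (3A.53) can be seen numerically by approximating ideal sampling on a dense time grid (all values here are illustrative assumptions):

```python
import numpy as np

f_grid = 100_000.0                  # dense grid standing in for continuous time
fs, f0 = 1000.0, 100.0              # sample rate and tone frequency (assumed)
N = 100_000
t = np.arange(N) / f_grid           # one second of "continuous" time
g = np.cos(2 * np.pi * f0 * t)

p = np.zeros(N)                     # unit-sample train with spacing Ts = 1/fs
p[:: int(f_grid / fs)] = 1.0
gs = g * p                          # the ideally sampled signal

G = np.abs(np.fft.rfft(gs))
freqs = np.fft.rfftfreq(N, 1 / f_grid)

# Replicas of the tone appear at n*fs +/- f0: 100, 900, 1100, 1900, 2100, ... Hz
peaks = freqs[G > 0.5 * G.max()]
```

Every replica has the same height, as the flat impulse-train spectrum P(f) predicts.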

Sampling and Periodicity

If we ideally sample a continuous Fourier transform G(f), by multiplying it
with Σ_n δ(f − nf0), then we have S(f) = G(f) Σ_n δ(f − nf0):

[Figure 3A.12: the spectrum G(f) sampled by impulses spaced f0 apart]

Now taking the inverse Fourier transform we get:

    s(t) = g(t) * T0 Σ_{k=−∞}^{∞} δ(t − kT0)     (3A.54)

which is a periodic repeat of a weighted version of the original signal:

[Figure 3A.13: s(t), copies of g(t) repeated with period T0]

Thus, sampling in the frequency-domain results in periodicity in the time-
domain. We already know this! We know a periodic time-domain signal can be
synthesised from sinusoids with frequencies nf0, i.e. it has a transform consisting
of impulses at frequencies nf0.

We now see the general pattern:

Sampling in one domain implies periodicity in the other.

Reconstruction

If a sampled signal gs(t) is applied to an ideal lowpass filter of bandwidth B,
the only component of the spectrum Gs(f) that is passed is just the original
spectrum G(f).

[Figure 3A.14: the sampled spectrum Gs(f) applied to a lowpass filter recovers G(f)]

Graphically, in the frequency-domain:

[Figure 3A.15: the periodic sampled spectrum Gs(f), multiplied by a lowpass filter response H(f) of gain 1/fs and bandwidth B, yields the original spectrum G(f) - an operation that is easy to see in the frequency-domain!]

Hence the time-domain output of the filter is equal to g(t), which shows that
the original signal can be completely and exactly reconstructed from the
sampled waveform gs(t).
Graphically, in the time-domain:

[Figure 3A.16: the weighted train of impulses gs(t) turns back into the original signal g(t) after lowpass filtering - an operation that is not so easy to see in the time-domain!]

There are some limitations to perfect reconstruction though. One is that time-
limited signals are not bandlimited (e.g. a rect time-domain waveform has a
spectrum which is a sinc function, which has infinite frequency content). Any
time-limited signal therefore cannot be perfectly reconstructed, since there is
no sample rate high enough to ensure repeats of the original spectrum do not
overlap. However, many signals are essentially bandlimited, which means
spectral components higher than, say, B do not make a significant contribution
to either the shape or energy of the signal.
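Reconstruction by an ideal lowpass filter is equivalent, in the time domain, to sinc interpolation of the samples; an illustrative sketch with an assumed bandlimited test signal:

```python
import numpy as np

fs = 10.0                               # sample rate (assumed), > 2B below
Ts = 1 / fs

def g(t):
    # a bandlimited test signal: tones at 1 Hz and 3 Hz, so B = 3 Hz < fs/2
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-500, 501)                # enough samples for the sinc tails to settle
samples = g(n * Ts)

# g_r(t) = sum_n g(n Ts) sinc((t - n Ts)/Ts): the ideal lowpass reconstruction
t = np.linspace(-1.0, 1.0, 81)
g_rec = np.array([np.sum(samples * np.sinc((tt - n * Ts) / Ts)) for tt in t])

err = np.max(np.abs(g_rec - g(t)))      # small: the samples pin down the signal
```

The residual error here comes only from truncating the (infinitely long) sinc sum.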

Aliasing

We saw that sampling in one domain implies periodicity in the other. If the
function being made periodic has an extent that is smaller than the period, there
will be no resulting overlap, and hence it will be possible to recover the
continuous (unsampled) function by windowing out just one period from the
domain displaying periodicity.

Nyquist's sampling criterion is the formal expression of the above fact:

Perfect reconstruction of a sampled signal is possible if the sampling rate is
greater than twice the bandwidth of the signal being sampled:

    fs > 2B     (3A.55)

To avoid aliasing, we have to sample at a rate fs > 2B. The frequency fs/2 is
called the spectral fold-over frequency; it is determined only by the
selected sample rate, and it may be selected independently of the
characteristics of the signal being sampled. The frequency B is termed the
Nyquist frequency, and it is a function only of the signal and is independent of
the selected sampling rate. Do not confuse these two independent entities! The
Nyquist frequency is a lower bound for the fold-over frequency, in the sense
that failure to select a fold-over frequency at or above the Nyquist frequency
will result in spectral aliasing and loss of the capability to reconstruct a
continuous-time signal from its samples without error. The Nyquist frequency
for a signal which is not bandlimited is infinity; that is, there is no finite sample
rate that would permit errorless reconstruction of the continuous-time signal
from its samples.

The Nyquist rate is defined as fN = 2B, and is not to be confused with the
similar term Nyquist frequency. The Nyquist rate is 2B, whereas the Nyquist
frequency is B. To prevent aliasing, we need to sample at a rate greater than the
Nyquist rate, i.e. fs > fN.
To illustrate aliasing, consider the case where we have failed to select the
sample rate higher than twice the bandwidth, B, of a lowpass signal. The
sampled spectrum is shown below, where the repeats of the original spectrum
now overlap:

[Figure 3A.17: the sampled spectrum Xs(f) with overlapping repeats; components above the fold-over frequency fs/2 are folded over]

If the sampled signal xs(t) is lowpass filtered with cutoff frequency fs/2, the
output spectrum of the filter will contain high-frequency components of x(t)
folded over to low-frequency components, and we will not be able to perfectly
reconstruct the original signal:

[Figure 3A.18: the "reconstructed" spectrum Xr(f) compared with the original X(f)]

To summarise, we can avoid aliasing by either:

1. Selecting a sample rate higher than twice the bandwidth of the signal
(equivalent to saying that the fold-over frequency is greater than the
bandwidth of the signal); or

2. Bandlimiting (using a filter) the signal so that its bandwidth is less than
half the sample rate.
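Aliasing can be demonstrated with two tones that straddle the fold-over frequency; after sampling, a tone at fs − f0 is indistinguishable from one at f0 (the numbers are illustrative):

```python
import numpy as np

fs = 1000.0                 # sample rate (assumed)
f0 = 400.0                  # below the fold-over frequency fs/2 = 500 Hz
f_hi = fs - f0              # 600 Hz: above fs/2, folds back onto 400 Hz

n = np.arange(100)
x_low = np.cos(2 * np.pi * f0 * n / fs)
x_high = np.cos(2 * np.pi * f_hi * n / fs)   # identical samples: the alias
```

Since the two sample sequences are the same, no reconstruction filter can tell which tone was actually sampled.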

Practical Sampling and Reconstruction

To prevent aliasing, practical systems are constructed so that the signal-to-be-
sampled is guaranteed to meet Nyquist's sampling criterion. They do this by
passing the original signal through a lowpass filter, known as an anti-alias
filter (AAF), to ensure that no frequencies are present above the fold-over
frequency, fs/2:

[Figure 3A.19: the anti-alias filter removes spectral content of Vi(f) above fs/2, giving the bandlimited Vo(f) before sampling]

If the system is designed correctly, then the high frequency components of the
original signal that are rejected by the AAF are not important, in that they
carry little energy and/or information (e.g. noise). Sampling then takes place at
a rate of fs Sa/s. The reconstruction filter has the same bandwidth as the AAF,
but a different passband magnitude to correct for the sampling process.

A practical sampling scheme is shown below:

[Figure 3A.20: vi → Anti-alias Filter → ADC → Digital Signal Processor → DAC → Reconstruction Filter → vo]
Summary of the Sampling and Reconstruction Process

[Figure 3A.21: the sampling and reconstruction process in both the time-domain and frequency-domain. Sampling: g(t), with spectrum G(f) of peak A and bandwidth B, is multiplied by the impulse train s(t) (spacing Ts; spectrum: impulses of weight fs at multiples of fs); the result gs(t) has spectrum Gs(f), replicas of G(f) of peak Afs repeated at multiples of fs. Reconstruction: gs(t) is passed through a lowpass filter h(t) ↔ H(f) of gain Ts and cutoff fs/2, recovering gr(t) with spectrum Gr(f)]
Finding the Fourier Series of a Periodic Function from the
Fourier Transform of a Single Period

It is usually easier to find the Fourier transform of a single period than to
perform the integration needed to find Fourier series coefficients (because
all the standard Fourier properties can be used). This method allows the
Fourier series coefficients to be determined directly from the Fourier
transform, provided the period is known. Don't forget: only periodic functions
have a Fourier series representation.

Suppose we draw one period of a periodic waveform:

[Figure 3A.22: a single period g1(t), a rectangular pulse of height 1 and width T1]

We can create the periodic version by convolving g1(t) with a train of unit
impulse functions with spacing equal to the period, T0:

[Figure 3A.23: a uniform unit impulse train with spacing T0]

That is, we need to convolve g1(t) with Σ_k δ(t − kT0). Thus gp(t), the
periodic version, is:

    gp(t) = g1(t) * Σ_{k=−∞}^{∞} δ(t − kT0)     (3A.56)

which gives the periodic waveform:

[Figure 3A.24: the periodic waveform gp(t), the width-T1 pulse repeated with period T0]

Using the convolution multiplication rule:

    F{gp(t)} = F{g1(t)} · F{Σ_k δ(t − kT0)} = G1(f) · f0 Σ_{n=−∞}^{∞} δ(f − nf0)     (3A.57)

In words, the Fourier transform of the periodic signal consists of impulses
located at harmonics of f0 = 1/T0, whose weights are:

    Gn = f0 G1(nf0)     (3A.58)

These are the Fourier series coefficients.

In Figure 3A.24 we have:

    Gn = f0 T1 sinc(nf0T1) e^{−jπnf0T1}     (3A.59)
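Eqs. (3A.58) and (3A.59) can be sketched numerically; with an assumed period T0 = 2 s and pulse width T1 = 0.5 s, the coefficients obtained from the single-period transform resynthesize the pulse train:

```python
import numpy as np

T0, T1 = 2.0, 0.5                     # assumed period and pulse width
f0 = 1 / T0

def G1(f):
    # FT of one period: a unit pulse occupying [0, T1]
    return T1 * np.sinc(f * T1) * np.exp(-1j * np.pi * f * T1)

n = np.arange(-50, 51)
Gn = f0 * G1(n * f0)                  # Fourier series coefficients, Eq. (3A.58)

# Resynthesize one period from the coefficients
t = np.linspace(0, T0, 400, endpoint=False)
g = np.real(Gn @ np.exp(1j * 2 * np.pi * np.outer(n * f0, t)))
```

The DC coefficient comes out as T1/T0, the duty cycle, exactly as Eq. (3A.59) predicts for n = 0.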

Graphically, the operation indicated by Eq. (3A.57) takes the original spectrum
and multiplies it by a train of impulses, effectively creating a weighted train
of impulses:

[Figure 3A.25: the continuous spectrum G1(f), a sinc of peak T1, multiplied by an impulse train at multiples of f0 gives the periodic signal's spectrum Gp(f): impulses of weight f0T1 sinc(nf0T1)]

According to Eq. (3A.58), the Fourier series coefficients are just the weights of
the impulses in the spectrum of the periodic function. To get the nth Fourier
series coefficient, use the weight of the impulse located at nf0.

This is in perfect agreement with the concept of a continuous spectrum. Each
frequency has an infinitesimal-amplitude sinusoid associated with it. If an
impulse exists at a certain frequency, then there is a finite-amplitude sinusoid at
that frequency. (Remember that pairs of impulses in a spectrum represent a
sinusoid in the time-domain.)
Windowing in the Time Domain

Often we wish to deal with only a segment of a signal, say from t = 0 to t = T.
Sometimes we have no choice, as this is the only part of the signal we have
access to - our measuring instrument has restricted us to a "window" of
duration T beginning at t = 0. Outside this window the signal is forced to be
zero. How is the signal's Fourier transform affected when the signal is viewed
through a window?

Windowing in the time domain is equivalent to multiplying the original signal
g(t) by a function which is non-zero over the window interval and zero
elsewhere. So the Fourier transform of the windowed signal is the original
signal's transform convolved with the Fourier transform of the window.

Example

Find the Fourier transform of sin(2πt) when it is viewed through a rectangular
window from 0 to 1 second:

[Figure 3A.26: one period of sin(2πt) seen through a rectangular window of width 1 s]

The viewed signal is:

    gw(t) = sin(2πt) rect(t − 0.5)     (3A.60)

The Fourier transform will be:

    Gw(f) = F{sin(2πt)} * F{rect(t − 0.5)}
          = [(j/2) δ(f + 1) − (j/2) δ(f − 1)] * sinc(f) e^{−jπf}
          = (j/2) sinc(f + 1) e^{−jπ(f+1)} − (j/2) sinc(f − 1) e^{−jπ(f−1)}     (3A.61)

Graphically, the magnitude spectrum of the windowed signal is:

[Figure 3A.27: the impulses of weight 1/2 at ±1 Hz of the original sinusoid are smeared into shifted sinc functions of peak magnitude 0.5]

If the window were changed to 4 seconds, we would then have:

[Figure 3A.28: with a 4 s window, the sinc functions are four times taller (peak 2) and four times narrower, much closer to the ideal impulses]

Obviously, the longer the window, the more accurate the spectrum becomes.
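The effect of window length on the spectrum can be sketched numerically; the half-power width of the peak at 1 Hz shrinks roughly in proportion to 1/T (the sample rate and FFT size are implementation choices):

```python
import numpy as np

fs = 100.0                            # dense sampling to mimic continuous time

def peak_width(T):
    """Width of the spectral peak at 1 Hz when sin(2*pi*t) is viewed for T seconds."""
    t = np.arange(0, T, 1 / fs)
    gw = np.sin(2 * np.pi * t)        # rectangular window: simple truncation
    G = np.abs(np.fft.rfft(gw, n=65536))      # zero-padded for a smooth spectrum
    freqs = np.fft.rfftfreq(65536, 1 / fs)
    band = freqs[G > 0.5 * G.max()]   # frequencies within half of the peak magnitude
    return band.max() - band.min()

w1 = peak_width(1.0)                  # 1-second window (as in Figure 3A.27)
w4 = peak_width(4.0)                  # 4-second window: a much narrower peak
```

Quadrupling the window length shrinks the smeared peak by roughly the same factor, which is the trend the two figures above illustrate.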

Example

Find the Fourier transform of sinc(t) when it is viewed through a window from
−2 to 2 seconds:

[Figure 3A.29: sinc(t) seen through a rectangular window from −2 to 2 s]

We have:

    gw(t) = sinc(t) rect(t/4)     (3A.62)

and:

    Gw(f) = rect(f) * 4 sinc(4f)     (3A.63)

Graphically:

[Figure 3A.30: the ideal rect(f) spectrum acquires ripples from the convolution with the window's sinc]

We see that windowing in the time domain by T produces ripples in the
frequency domain with an approximate spacing of 2/T between peaks. (In a
magnitude spectrum, there would be a peak every 1/T.)
Practical Multiplication and Convolution

We have learned about multiplication and convolution as mathematical
operations that we can apply in either the time or frequency domain.
Remember, however, that the time and frequency domain are just two ways of
describing the same signal: a time-varying voltage that we measure across two
terminals. What do we need to physically do to our signal to perform an
operation equivalent to, say, multiplying its frequency domain representation
by some function?

Two physical operations we can do on signals are multiplication (with another
signal) and filtering (with a filter with a defined transfer function).

Multiplication in time domain ↔ convolution in frequency domain

You can buy a 4-quadrant multiplier as an IC from any electronics supplier.
Depending on the bandwidth of the signal they can handle, the price varies
from several dollars to over a hundred dollars. They have a pair of terminals
for the two signals to be multiplied together. At higher frequencies any non-
linear device can be used for multiplication.

[Figure 3A.31: a multiplier with inputs x(t) and y(t) and output x(t)y(t)]

    Time domain output is x(t) multiplied with y(t)     (3A.64a)

    Frequency domain output is X(f) convolved with Y(f)     (3A.64b)

Convolution in time domain ↔ multiplication in frequency domain

You can design a network, either passive or active, that performs as a filter. Its
characteristics can be specified by its transfer function (a plot of magnitude vs.
frequency and a second plot of phase vs. frequency, or a complex expression of
frequency) or equivalently by its impulse response, which is the inverse Fourier
transform of the transfer function.

[Figure 3A.32: a filter specified by H(f) or h(t), with input x(t) and output y(t) = h(t) * x(t)]

    Time domain output is x(t) convolved with h(t)     (3A.65a)

    Frequency domain output is X(f) multiplied with H(f)     (3A.65b)

3A.39
Summary

- Sinusoidal inputs to linear time-invariant systems yield sinusoidal outputs.

- The output sinusoid is related to the input sinusoid by a complex-valued
  function known as the frequency response, H(f).

- The frequency response of a system is just the Fourier transform of the
  impulse response of the system. That is, the impulse response and the
  frequency response form a Fourier transform pair: h(t) ↔ H(f).

- The frequency response of a system can be obtained directly by performing
  analysis in the frequency domain.

- The output of an LTI system due to any input signal is obtained most easily
  by considering the spectrum: Y(f) = H(f) X(f). This expresses the important
  property: convolution in the time domain is equivalent to multiplication in
  the frequency domain.

- Filters are devices best thought about in the frequency domain. They are
  frequency-selective devices, changing both the magnitude and phase of the
  frequency components of an input signal to produce an output signal.

- Linear phase is desirable in filters because it produces a constant delay,
  thereby preserving wave shape.

- Sampling is the process of converting a continuous-time signal into a
  discrete-time signal. It is achieved, in the ideal case, by multiplying the
  signal by a train of impulses.

- Reconstruction is the process of converting signal samples back into a
  continuous-time signal. It is achieved by passing the samples through a
  lowpass filter.

- Aliasing is an effect of sampling where spectral overlap occurs, thus
  destroying the ability to later reconstruct the signal. It is caused by not
  meeting the Nyquist criterion: fs ≥ 2B.

- Fourier series coefficients can be obtained from the Fourier transform of
  one period of the signal by the formula: Gn = f0 G1(nf0).

- Using finite-duration signals in the time domain is called windowing.
  Windowing affects the spectrum of the original signal.
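The Nyquist criterion above can be demonstrated in a few lines. In this sketch (the frequencies are assumed example values), a 3 Hz cosine sampled at fs = 4 Hz, well below the required 6 Hz, yields samples identical to those of a 1 Hz cosine, the alias at fs − 3 Hz:

```python
import numpy as np

# Aliasing demo: a 3 Hz cosine sampled at fs = 4 Hz violates fs >= 2B
# (which would require fs >= 6 Hz), so its samples are indistinguishable
# from those of a 1 Hz cosine, since 3 Hz folds down to fs - 3 = 1 Hz.
fs = 4.0
t = np.arange(32) / fs                  # 32 sample instants

samples_3Hz = np.cos(2 * np.pi * 3 * t)
samples_1Hz = np.cos(2 * np.pi * 1 * t)

assert np.allclose(samples_3Hz, samples_1Hz)
```

Once the samples coincide, no reconstruction filter can tell the two signals apart, which is exactly why aliasing destroys the ability to reconstruct.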

References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

Quiz
Encircle the correct answer, cross out the wrong answers. [one or none correct]

1. The convolution of x(t) and y(t) is given by:

(a) ∫−∞^∞ x(λ) y(t − λ) dλ    (b) ∫−∞^∞ x(λ) y(t + λ) dλ    (c) ∫−∞^∞ X(λ) Y(f − λ) dλ

2. An ideal filter has input vi and output vo, with magnitude response |H(f)| as sketched.
The Fourier transform of the impulse response of the filter resembles:

[Figure: the filter's |H(f)|, and three candidate sketches (a), (b) and (c), sketches not reproduced]

3. The Fourier transform of one period of a periodic waveform is G(f). The
Fourier series coefficients, Gn, are given by:

(a) nf0 G(f0)    (b) G(nf0)    (c) f0 G(nf0)

4. [Figure: waveforms x(t) and y(t), sketches not reproduced] The peak value of the convolution is:

(a) 9    (b) 4.5    (c) 6

5. The scaling property of the Fourier transform is:

(a) g(at) ↔ G(f/a)    (b) g(at) ↔ (1/a) G(f)    (c) a g(t) ↔ a G(f)

Answers: 1. a  2. c  3. c  4. x  5. x

Exercises
1.
Calculate the magnitude and phase of the 4 kHz component in the spectrum of
the periodic pulse train shown below. The pulse repetition rate is 1 kHz.

[Figure: one period of the pulse train x(t), taking the values 10 and −5, with
breakpoints at t = 0, 0.1, 0.4, 0.5, 0.6, 0.9 and 1 ms; sketch not reproduced]

2.
By relating the triangular pulse shown below to the convolution of a pair of
identical rectangular pulses, deduce the Fourier transform of the triangular
pulse:

[Figure: a triangular pulse x(t); sketch not reproduced]

3.
The pulse x(t) = 2B sinc(2Bt) rect(Bt/8) has ripples in the amplitude spectrum.
What is the spacing in frequency between positive peaks of the ripples?

4.
A signal is to be sampled with an ideal sampler operating at 8000 samples per
second. Assuming an ideal low pass anti-aliasing filter, how can the sampled
signal be reconstituted in its original form, and under what conditions?

5.
A train of impulses in one domain implies what in the other?

6.
The following table gives information about the Fourier series of a periodic
waveform, g(t), which has a period of 50 ms.

Table 1

Harmonic #   Amplitude   Phase (°)
0            1           –
1            3           −30
2            1           −30
3            1           −60
4            0.5         −90

(a) Give the frequencies of the fundamental and the 2nd harmonic. What is
the signal power, assuming that g(t) is measured across 50 Ω?

(b) Express the third harmonic as a pair of counter-rotating phasors. What is
the value of the third harmonic at t = 20 ms?

(c) The periodic waveform is passed through a filter with transfer function
H(f) as shown below.

[Figure: the magnitude |H(f)| (taking the values 1 and 0.5) and the phase
∠H(f) of the filter, plotted over −100 to 100 Hz; sketches not reproduced]

Draw up a table in the same form as Table 1 of the Fourier series of the
output waveform. Is there a DC component in the output of the amplifier?

7.
A signal, bandlimited to 1 kHz, is sampled by multiplying it with a rectangular
pulse train with repetition rate 4 kHz and pulse width 50 μs. Can the original
signal be recovered without distortion, and if so, how?

8.
A signal is to be analysed to identify the relative amplitudes of components
which are known to exist at 9 kHz and 9.25 kHz. To do the analysis a digital
storage oscilloscope takes a record of length 2 ms and then computes the
Fourier series. The 18th harmonic thus computed can be non-zero even when
no 9 kHz component is present in the input signal. Explain.

9.
Use MATLAB to determine the output of a simple RC lowpass filter
subjected to a square wave input given by:

x(t) = Σ (n = −∞ to ∞) rect(t − 2n)

for the cases: 1/RC = 1, 1/RC = 10, 1/RC = 100. Plot the time domain from
t = −3 to 3 and the Fourier series coefficients up to the 50th harmonic for each
case.

Lecture 3B Amplitude Modulation
Modulation. Double-sideband, suppressed-carrier (DSB-SC) modulation.
Amplitude modulation (AM). Single sideband (SSB) Modulation. Quadrature
amplitude modulation (QAM).

Introduction
The modern world cannot exist without communication systems. A model of a
typical communication system is:

[Figure 3B.1: a typical communication system. A source and input transducer
produce the message signal; the transmitter produces the transmitted signal;
the channel, where distortion and noise enter, delivers the received signal to
the receiver; the output transducer passes the output signal to the destination]
The source originates the message, such as a human voice, a television picture,
or data. If the data is nonelectrical, it must be converted by an input transducer
into an electrical waveform referred to as the baseband signal or the message
signal.
The transmitter modifies the baseband signal for efficient transmission.
The channel is a medium such as a twisted pair, coaxial cable, a waveguide,
an optical fibre, or a radio link through which the transmitter output is sent.
The receiver processes the signal from the channel by undoing the signal
modifications made at the transmitter and by the channel.
The receiver output is fed to the output transducer, which converts the
electrical signal to its original form.
The destination is the thing to which the message is communicated.

The Communication Channel
A channel acts partly as a filter: it attenuates the signal and distorts the
waveform, and it also adds noise. Attenuation of the signal increases with
channel length. The waveform is distorted because of the different amounts of
attenuation and phase shift suffered by different frequency components of the
signal. This type of distortion, called linear distortion, can be partly
corrected at the receiver by an equalizer with gain and phase characteristics
complementary to those of the channel.

The channel may also cause nonlinear distortion through attenuation that
varies with the signal amplitude. Such distortion can also be partly corrected
by a complementary equalizer at the receiver.
The signal is not only distorted by the channel, but it is also contaminated
along the path by undesirable signals lumped under the broad term noise which
are random and unpredictable signals from causes external and internal.
External noise includes interference from signals transmitted on nearby
channels, human-made noise generated by electrical equipment (such as motor
drives, combustion engine ignition radiation, fluorescent lighting) and natural
noise (electrical storms, solar and intergalactic radiation). Internal noise results
from thermal motion of electrons in conductors, diffusion or recombination of
charged carriers in electronic devices, etc. Noise is one of the basic factors that
sets a limit on the rate of communication.

Analog and Digital Messages
Messages are analog or digital. In an analog message, the waveform is
important and even a slight distortion or interference in the waveform will
cause an error in the received signal. On the other hand, digital messages are
transmitted by using a finite set of electrical waveforms. Easier message
extraction (in the presence of noise), and regeneration of the original digital
signal means that digital communication can transmit messages with a greater
accuracy than an analog system in the presence of distortion and noise. This is
the reason why digital communication is so prevalent, and analog
communication is all but obsolete.
Baseband and Carrier Communication

Some baseband signals produced by various information sources are suitable
for direct transmission over a given communication channel; this is called
baseband communication.

Communication that uses modulation to shift the frequency spectrum of a
signal is known as carrier communication. In this mode, one of the basic
parameters (amplitude, frequency or phase) of a high-frequency sinusoidal
carrier is varied in proportion to the baseband signal m(t).

Modulation
Baseband signals produced by various information sources are not always
suitable for direct transmission over a given channel. These signals are
modified to facilitate transmission. This conversion process is known as
modulation: the baseband signal is used to modify some parameter (amplitude,
frequency or phase) of a high-frequency carrier signal.

A carrier is a sinusoid of high frequency, and one of its parameters, such as
amplitude, frequency or phase, is varied in proportion to the baseband signal
m(t). Accordingly, we have amplitude modulation (AM), frequency modulation
(FM), or phase modulation (PM). FM and PM are similar in essence, and are
grouped under the name angle modulation.
The figure below shows a baseband signal m(t) and the corresponding AM
and FM waveforms:

[Figure 3B.2: a carrier, a modulating (baseband) signal, the corresponding
amplitude-modulated wave, and the corresponding frequency-modulated wave]

At the receiver, the modulated signal must pass through a reverse process
called demodulation in order to retrieve the baseband signal.

Modulation facilitates the transmission of information for the following
reasons.

Ease of Radiation

For efficient radiation of electromagnetic energy, the radiating antenna should
be of the order of one-tenth of the wavelength of the signal radiated. For many
baseband signals, the wavelengths are too large for reasonable antenna
dimensions. We modulate a high-frequency carrier, thus translating the signal
spectrum to the region of carrier frequencies that corresponds to a much
smaller wavelength.

Simultaneous Transmission of Several Signals

We can translate many different baseband signals into separate channels by
using different carrier frequencies. If the carrier frequencies are chosen
sufficiently far apart in frequency, the spectra of the modulated signals will
not overlap and thus will not interfere with each other. At the receiver, a
tuneable bandpass filter is used to select the desired signal. This method of
transmitting several signals simultaneously is known as frequency-division
multiplexing (FDM). A type of FDM, known as orthogonal frequency-division
multiplexing (OFDM), is at the heart of Wi-Fi and was invented at the CSIRO!

Double-Sideband, Suppressed-Carrier (DSB-SC) Modulation
Let x(t) be a signal, such as an audio signal, that is to be transmitted through a
cable or the atmosphere. In amplitude modulation (AM), the signal modifies
(or modulates) the amplitude of a carrier sinusoid cos(ωc t). In one form of
AM transmission, the signal x(t) and the carrier cos(ωc t) are simply
multiplied together. The process is illustrated below:

[Figure 3B.3: double-sideband, suppressed-carrier (DSB-SC) modulation. A
signal multiplier forms y(t) = x(t) cos(2π fc t) from x(t) and a local
oscillator producing cos(2π fc t)]

The local oscillator in Figure 3B.3 is a device that produces the sinusoidal
signal cos(ωc t). The multiplier is implemented with a non-linear device, and is
usually an integrated circuit at low frequencies.

By the multiplication property of Fourier transforms, the output spectrum is
obtained by convolving the spectrum of x(t) with the spectrum of cos(2π fc t).
We now restate a very important property of convolution involving an impulse:

X(f) * δ(f − f0) = X(f − f0)    (3B.1)

The output spectrum of the modulator is therefore:

Y(f) = X(f) * ½[δ(f − fc) + δ(f + fc)] = ½[X(f − fc) + X(f + fc)]    (3B.2)

DSB-SC up-translates the baseband spectrum: the spectrum of the modulated
signal is a replica of the signal spectrum, but shifted up in frequency. If the
signal has a bandwidth equal to B, then the modulated signal spectrum has an
upper sideband from fc to fc + B and a lower sideband from fc − B to fc, and
the process is therefore called double-sideband transmission, or DSB
transmission for short. An example of modulation is given below in the
time domain:
[Figure 3B.4: DSB-SC in the time domain. The message x(t), multiplied by the
local oscillator output cos(2π fc t), gives the modulated signal
y(t) = x(t) cos(2π fc t)]
And in the frequency domain:

[Figure 3B.5: DSB-SC in the frequency domain. X(f) is convolved with
½[δ(f − fc) + δ(f + fc)], giving Y(f) = ½[X(f − fc) + X(f + fc)]:
half-amplitude replicas of X(f) centred at ±fc]

Modulation lets us share the spectrum, and achieves practical propagation: the
higher frequency range of the modulated signal makes it possible to achieve
good propagation in transmission through a cable or the atmosphere. It also
allows the spectrum to be shared by independent users, e.g. radio, TV, mobile
phones, etc.
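The up-translation described by Eq. (3B.2) can be verified numerically. In this sketch (the carrier, message and sample-rate values are assumed, not from the notes), a 50 Hz tone modulated onto a 400 Hz carrier produces spectral peaks only at 350 Hz and 450 Hz, with nothing at the carrier frequency itself, which is the "suppressed carrier" in DSB-SC:

```python
import numpy as np

# DSB-SC in action: modulating a 50 Hz tone onto a 400 Hz carrier puts all the
# power into the sidebands at 400 +/- 50 Hz, and none at the carrier itself.
fs, fc, fm = 4000, 400, 50            # sample rate, carrier, message (assumed)
t = np.arange(0, 1, 1 / fs)           # a 1 s record gives 1 Hz bin spacing
x = np.cos(2 * np.pi * fm * t)        # message
y = x * np.cos(2 * np.pi * fc * t)    # modulated signal

Y = np.abs(np.fft.rfft(y)) / len(t)   # positive-frequency spectrum, 1 Hz bins
peaks = np.argsort(Y)[-2:]            # the two strongest bins

assert sorted(int(p) for p in peaks) == [fc - fm, fc + fm]  # 350 and 450 Hz
assert Y[fc] < 1e-6                   # the carrier is suppressed
```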

Demodulation
The reconstruction of x(t) from x(t) cos(ωc t) is called demodulation. There are
many ways to demodulate a signal; here we will consider one common method
called synchronous or coherent demodulation.

[Figure 3B.6: coherent demodulation of a DSB-SC signal. The received
x(t) cos(2π fc t) is multiplied by a local oscillator cos(2π fc t) to give
x(t) cos²(2π fc t), which is then lowpass filtered to recover x(t)]

The first stage of the demodulation process involves applying the modulated
waveform x(t) cos(ωc t) to a multiplier. The other signal applied to the
multiplier is a local oscillator which is assumed to be synchronized with the
carrier signal cos(ωc t), i.e. there is no phase shift between the carrier and the
signal generated by the local oscillator.

The output of the multiplier is:

½[X(f − fc) + X(f + fc)] * ½[δ(f − fc) + δ(f + fc)]
  = ½ X(f) + ¼ X(f − 2fc) + ¼ X(f + 2fc)    (3B.3)

x(t) can be extracted from the output of the multiplier by lowpass filtering
with a cutoff frequency of B Hz and a gain of 2.
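The whole modulate-then-coherently-demodulate chain can be simulated in a few lines. In this sketch, the frequencies are assumed example values, and an ideal brick-wall lowpass filter built with the FFT stands in for a real analog filter; multiplying by a synchronized carrier and lowpass filtering with a gain of 2 recovers the message exactly:

```python
import numpy as np

# Coherent demodulation of DSB-SC: multiply by a synchronised local oscillator,
# lowpass filter, and scale by 2 to recover the message.
fs, fc, fm = 4000, 400, 50                 # assumed example frequencies
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * fm * t)             # original message
y = x * np.cos(2 * np.pi * fc * t)         # received DSB-SC signal

v = y * np.cos(2 * np.pi * fc * t)         # multiply by the local oscillator
V = np.fft.rfft(v)
V[150:] = 0                                # brick-wall lowpass, cutoff 150 Hz
x_hat = 2 * np.fft.irfft(V, len(t))        # gain of 2 restores the amplitude

assert np.allclose(x, x_hat, atol=1e-9)    # message recovered exactly
```

The cutoff only has to pass the B Hz message while rejecting the components near 2fc, so its exact value is not critical.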

Another way to think of this is in the time domain:

x(t) cos²(2π fc t) = ½ x(t) [1 + cos(4π fc t)]    (3B.4)

Therefore, it is easy to see that lowpass filtering, with a gain of 2, will produce
x(t). An example of demodulation in the time domain is given below:
[Figure 3B.7: demodulation in the time domain. The received x(t) cos(2π fc t),
multiplied by the synchronized local oscillator cos(2π fc t), gives
x(t) cos²(2π fc t); lowpass filtering then recovers x(t)]
The operation of demodulation is best understood in the frequency domain:
[Figure 3B.8: demodulation in the frequency domain.
Y(f) = ½[X(f − fc) + X(f + fc)] convolved with ½[δ(f − fc) + δ(f + fc)] yields
½X(f) + ¼X(f − 2fc) + ¼X(f + 2fc); the lowpass filter H(f), with gain 2,
selects the baseband term to give X(f)]
Summary of DSB-SC Modulation and Demodulation

[Figure 3B.9: summary of the DSB-SC modulation and demodulation process in
both the time domain and the frequency domain. The message g(t) has spectrum
G(f) of peak A, bandlimited to B; the carrier l(t) has impulsive spectrum L(f)
at ±fc; the modulated signal gm(t) has spectrum Gm(f) with sidebands of
amplitude A/2 from fc − B to fc + B (mirrored at −fc); the demodulating
multiplication produces components of amplitude A/2 at baseband and A/4 at
±2fc; and the lowpass filter H(f) restores the baseband spectrum Gd(f) over
−B to B, giving gd(t)]
Amplitude Modulation (AM)
Let g(t) be a message signal, such as an audio signal, that is to be transmitted
through a cable or the atmosphere. In amplitude modulation (AM), the message
signal modifies (or modulates) the amplitude of a carrier sinusoid cos(2π fc t).
In one form of AM transmission, a constant bias A is added to the message
signal g(t) prior to multiplication by a carrier cos(2π fc t). The process is
illustrated below:

[Figure 3B.10: AM modulation. An adder forms A + g(t), which a multiplier
combines with the local oscillator output cos(2π fc t) to give the modulated
signal gAM(t)]

The local oscillator in Figure 3B.10 is a device that produces the sinusoidal
signal cos(2π fc t). The multiplier is implemented with a non-linear device, and
is usually an integrated circuit at low frequencies. The adder is a simple
op-amp circuit.

The spectrum of the modulated signal should show a replica of the signal
spectrum but shifted up in frequency. Ideally, there should also be a pair of
impulses representing the carrier sinusoid. If G(f), the spectrum of g(t), is
bandlimited to B Hz, then the modulated signal spectrum has an upper
sideband from fc to fc + B and a lower sideband from fc − B to fc. Since the
appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude proportional to the message signal, this
modulation technique is called amplitude modulation, or AM for short.
Envelope Detection
There are several ways to demodulate the AM signal. One way is coherent (or
synchronous) demodulation. If the magnitude of the signal g(t) never exceeds
the bias A, then it is possible to demodulate the AM signal using a very simple
and practical technique called envelope detection.

[Figure 3B.11: an AM envelope detector used for demodulation. gAM(t) drives a
rectifying element and an RC smoothing circuit, with ω0 = 1/RC, whose output
follows A + g(t)]

As long as the envelope of the signal, A + g(t), is non-negative, the message
g(t) appears to ride on top of the half-wave rectified version of gAM(t). The
half-wave rectified gAM(t) can be smoothed with a capacitor so that the output
closely approximates A + g(t). The time constant of the RC smoothing circuit
is not extremely critical, so long as B ≪ f0 ≪ fc.
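The claim that the envelope of gAM(t) equals A + g(t) when A exceeds |g(t)| can be checked numerically. The sketch below (all values assumed) computes the envelope as the magnitude of the analytic signal, formed by zeroing the negative-frequency half of the spectrum, rather than modelling a diode/RC circuit, but it recovers the same A + g(t):

```python
import numpy as np

# Envelope detection sketch: for gAM(t) = (A + g(t)) cos(2 pi fc t) with
# A >= |g(t)|, the envelope equals A + g(t). The envelope is computed here via
# the analytic signal (negative frequencies removed) instead of a rectifier,
# which gives the same answer by a cleaner numerical route.
fs, fc, fm, A = 8000, 1000, 50, 2.0            # assumed example values
t = np.arange(0, 1, 1 / fs)
g = np.cos(2 * np.pi * fm * t)                 # message, |g| <= 1 < A
am = (A + g) * np.cos(2 * np.pi * fc * t)      # AM waveform

N = len(am)
AM = np.fft.fft(am)
AM[N // 2 + 1:] = 0                            # discard negative frequencies
AM[1:N // 2] *= 2                              # analytic-signal scaling
envelope = np.abs(np.fft.ifft(AM))             # |analytic signal| = envelope

assert np.allclose(envelope, A + g, atol=1e-6)
```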
The envelope detector can be thought of as a rectifier followed by a lowpass
filter:

[Figure 3B.12: the envelope detector modelled as a precision rectifier followed
by a lowpass filter, recovering the message from gAM(t)]

Single Sideband (SSB) Modulation
In so far as the transmission of information is concerned, only one sideband is
necessary, and if the carrier and the other sidebands are suppressed at the
transmitter, no information is lost. When only one sideband is transmitted, the
modulation system is referred to as a single-sideband (SSB) system.
A method for the creation of an SSB signal is illustrated below:
[Figure 3B.13: SSB modulation. g(t) is multiplied by cos(2π fc t); a −90°
phase-shifted version of g(t) is multiplied by sin(2π fc t); the adder combines
the two products to form the modulated signal gSSB(t)]

The local oscillators in Figure 3B.13 are devices that produce sinusoidal
signals. One oscillator is cos(2π fc t). The other oscillator has a phase which is
said to be in quadrature, or a phase of −π/2 with respect to the first oscillator,
to give sin(2π fc t). The multiplier is implemented with a non-linear device and
the adder is a simple op-amp circuit (for input signals with a bandwidth less
than 1 MHz). The −90° phase-shifter is a device that shifts the phase of all
positive frequencies by −90° and all negative frequencies by +90°. It is more
commonly referred to as a Hilbert transformer. Note that the local oscillator
sin(2π fc t) can be generated from cos(2π fc t) by passing it through a Hilbert
transformer.
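A numerical sketch makes the sideband cancellation concrete. With the assumed message g(t) = cos(2π fm t), whose Hilbert transform is sin(2π fm t), forming g(t) cos(2π fc t) − ĝ(t) sin(2π fc t) leaves only the upper sideband at fc + fm (all frequencies are assumed example values):

```python
import numpy as np

# SSB sketch: for g(t) = cos(2 pi fm t), the Hilbert transform is
# g_hat(t) = sin(2 pi fm t), and the combination below collapses the two DSB
# sidebands into a single upper sideband at fc + fm, since
# cos(a)cos(b) - sin(a)sin(b) = cos(a + b).
fs, fc, fm = 4000, 400, 50                 # assumed example frequencies
t = np.arange(0, 1, 1 / fs)
g = np.cos(2 * np.pi * fm * t)
g_hat = np.sin(2 * np.pi * fm * t)         # Hilbert transform of a cosine

ssb = g * np.cos(2 * np.pi * fc * t) - g_hat * np.sin(2 * np.pi * fc * t)

S = np.abs(np.fft.rfft(ssb)) / len(t)      # positive-frequency spectrum
assert np.argmax(S) == fc + fm             # only the upper sideband, 450 Hz
assert S[fc - fm] < 1e-6                   # the lower sideband (350 Hz) is gone
```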
Quadrature Amplitude Modulation (QAM)
In one form of AM transmission, two messages that occupy the same part of
the spectrum can be sent by combining their spectra in quadrature. The first
message signal g1(t) is multiplied by a carrier cos(2π fc t), the second message
signal g2(t) is multiplied by sin(2π fc t), and the two products are added. The
process is illustrated below:

[Figure 3B.14: QAM modulation. g1(t) multiplied by cos(2π fc t) and g2(t)
multiplied by sin(2π fc t) are summed to form the modulated signal gQAM(t)]

The local oscillators in Figure 3B.14 are devices that produce sinusoidal
signals. One oscillator is cos(2π fc t). The other oscillator has a phase which is
said to be in quadrature, or a phase of −π/2 with respect to the first oscillator,
to give sin(2π fc t). The multiplier is implemented with a non-linear device and
the adder is a simple op-amp circuit (for input signals with a bandwidth less
than 1 MHz).

The spectrum of the modulated signal has two parts in quadrature. Each part is
a replica of the original message spectrum but shifted up in frequency. The
parts do not interfere, since the first message forms the real (or in-phase, I)
part of the modulated spectrum and the second message forms the imaginary
(or quadrature, Q) part. An abstract view of the spectrum showing its real and
imaginary parts is shown below:

[Figure 3B.15: the original message spectra G1(f) and G2(f), and the QAM
spectrum GQAM(f): the real (I) part and the imaginary (Q) part each carry one
message, centred at ±fc]
Normally, we represent a spectrum by its magnitude and phase, and not its real
and imaginary parts, but in this case it is easier to picture the spectrum in
rectangular coordinates rather than polar coordinates.

If the spectrum of both message signals is bandlimited to B Hz, then the
modulated signal spectrum has an upper sideband from fc to fc + B and a
lower sideband from fc − B to fc.

The appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude and phase, but since the amplitudes of the
quadrature components (cos and sin) of the carrier vary in proportion to the
message signals, this modulation technique is called quadrature amplitude
modulation, or QAM for short.

Coherent Demodulation
There are several ways to demodulate the QAM signal; we will consider a
simple analog method called coherent (or synchronous) demodulation.

[Figure 3B.16: coherent QAM demodulation. gQAM(t) is multiplied by
cos(2π fc t) and lowpass filtered to recover g1(t), and multiplied by
sin(2π fc t) and lowpass filtered to recover g2(t)]

The first stage of the demodulation process involves applying the modulated
waveform gQAM(t) to two separate multipliers. The other signals applied to
each multiplier are local oscillators (in quadrature again) which are assumed to
be synchronized with the modulator, i.e. the frequency and phase of the
demodulator's local oscillators are exactly the same as the frequency and phase
of the modulator's local oscillators. The signals g1(t) and g2(t) can be
extracted from the output of each multiplier by lowpass filtering.
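The full QAM chain can be simulated in a few lines. In this sketch, the tone and carrier frequencies are assumed values, and ideal brick-wall lowpass filters built with the FFT stand in for the analog filters; two messages are modulated onto quadrature carriers and separated again by synchronized quadrature demodulators, because the product of the two carriers averages to zero:

```python
import numpy as np

# QAM sketch: two messages share the channel on quadrature carriers; the
# synchronised quadrature demodulators separate them again.
fs, fc = 8000, 1000                          # assumed sample rate and carrier
f1, f2 = 40, 70                              # two assumed message tones
t = np.arange(0, 1, 1 / fs)
g1 = np.cos(2 * np.pi * f1 * t)
g2 = np.sin(2 * np.pi * f2 * t)

qam = g1 * np.cos(2 * np.pi * fc * t) + g2 * np.sin(2 * np.pi * fc * t)

def lowpass(v, cutoff_hz):
    # Ideal brick-wall lowpass via the FFT (1 Hz bins for this 1 s record)
    V = np.fft.rfft(v)
    V[cutoff_hz:] = 0
    return np.fft.irfft(V, len(v))

g1_hat = 2 * lowpass(qam * np.cos(2 * np.pi * fc * t), 200)
g2_hat = 2 * lowpass(qam * np.sin(2 * np.pi * fc * t), 200)

assert np.allclose(g1, g1_hat, atol=1e-9)    # both messages recovered
assert np.allclose(g2, g2_hat, atol=1e-9)
```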

Summary

- Modulation shifts a baseband spectrum to a higher frequency range. It is
  achieved in many ways, the simplest being the multiplication of the signal
  by a carrier sinusoid.

- Demodulation is the process of returning a modulated signal to the
  baseband. Modulation and demodulation form the basis of modern
  communication systems.

References

Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB, Prentice-Hall, 1997.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

Exercises
1.
Sketch the Fourier transform of the waveform g(t) = [1 + cos(2πt)] sin(20πt).

2.
A scheme used in stereophonic FM broadcasting is shown below:

[Figure: the left (L) and right (R) channel signals are combined and modulated
by cos(2π fc t), with intermediate points labelled A, B and C; sketch not
reproduced]

The input to the left channel (L) is a 1 kHz sinusoid, the input to the right
channel (R) is a 2 kHz sinusoid. Draw the spectrum of the signal at points A, B
and C if fc = 38 kHz.

3.
Draw a block diagram of a scheme that could be used to recover the left (L)
and right (R) signals of the system shown in Question 2 if it uses the signal at
C as the input.
Lecture 4A The Laplace Transform
The Laplace transform. Finding a Fourier transform using a Laplace
transform. Finding Laplace transforms. Standard Laplace transforms. Laplace
transform properties. Evaluation of the inverse Laplace transform. Transforms
of differential equations. The system transfer function. Block diagrams.
Cascading blocks. Standard form of a feedback control system. Block diagram
transformations.

The Laplace Transform


The Laplace transform is a generalisation of the Fourier transform. We need it
for two reasons:

1. There are some important signals, such as the unbounded ramp function
r(t) = t u(t), which do not have a Fourier transform. Also, unstable systems,
such as h(t) = e^(−at) u(t) with a < 0, do not have a Fourier transform. In both
cases the time-domain function is such that ∫−∞^∞ |g(t)| dt → ∞.

2. We wish to determine a system's response from the application of an input
signal at t = 0, and include any initial conditions in the system's response.

The Laplace transform fulfils both of these objectives.
If we multiply a time-domain function by an exponential convergence factor,
e^(−σt), before taking the Fourier transform, then we guarantee convergence of
the Fourier transform for a broader class of functions. For example, if we
multiply the ramp r(t) = t u(t) by e^(−σt), then for σ sufficiently large, the
decay of the exponential will be greater than the increase of the ramp as
t → ∞:

[Figure 4A.1: the ramp t, the convergence factor e^(−σt), and their product
t e^(−σt), which decays to zero as t → ∞]
In other words, the factor e^(−σt) is used to try and force the overall function to
zero for large values of t, in an attempt to achieve convergence of the Fourier
transform (note that this is not always possible).

After multiplying the original function by an exponential convergence factor,
the Fourier transform is:

F{x(t) e^(−σt)} = ∫−∞^∞ x(t) e^(−σt) e^(−jωt) dt = ∫−∞^∞ x(t) e^(−(σ+jω)t) dt    (4A.1)

The resulting integral is a function of σ + jω, and so we can write:

X(σ + jω) = ∫−∞^∞ x(t) e^(−(σ+jω)t) dt    (4A.2)

Letting s = σ + jω, this becomes:

X(s) = ∫−∞^∞ x(t) e^(−st) dt    (4A.3)

which is called the two-sided Laplace transform.

By making the integral only valid for time t ≥ 0, we can incorporate initial
conditions into the s-domain description of signals and systems. That is, we
assume that signals are only applied at t = 0 and impulse responses are causal
and start at t = 0. The lower limit in the Laplace transform can then be
replaced with zero. The result is termed the one-sided Laplace transform,
which we will refer to simply as the Laplace transform:

X(s) = ∫₀^∞ x(t) e^(−st) dt    (4A.4)

The Laplace transform variable, s, is termed complex frequency: it is a
complex number and can be graphed on a complex plane:

[Figure 4A.2: the s-plane, showing s = σ + jω with real part σ and imaginary
part jω]
The inverse Laplace transform can be obtained by considering the inverse
Fourier transform of X(σ + jω):

x(t) e^(−σt) = (1/2π) ∫−∞^∞ X(σ + jω) e^(jωt) dω    (4A.5)

where we have used the fact that dω = 2π df. We then have:

x(t) = (1/2π) ∫−∞^∞ X(σ + jω) e^((σ+jω)t) dω    (4A.6)

But s = σ + jω (σ fixed), and so ds = j dω. We then get:

x(t) = (1/2πj) ∫(c−j∞)^(c+j∞) X(s) e^(st) ds    (4A.7)

which is the inverse Laplace transform.

It is common practice to use a bidirectional arrow to indicate a Laplace
transform pair, as follows:

x(t) ↔ X(s)    (4A.8)

Region of Convergence (ROC)
The region of convergence (ROC) for the Laplace transform X(s) is the set of
values of s (the region in the complex plane) for which the integral in
Eq. (4A.4) converges.
Example
To find the Laplace transform of the signal x(t) = e^(−at) u(t) and its ROC, we
substitute into the definition of the Laplace transform:

X(s) = ∫₀^∞ e^(−at) u(t) e^(−st) dt    (4A.9)

Because u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0:

X(s) = ∫₀^∞ e^(−at) e^(−st) dt = ∫₀^∞ e^(−(s+a)t) dt = −[1/(s+a)] e^(−(s+a)t) |₀^∞    (4A.10)

Note that s is complex, and as t → ∞ the term e^(−(s+a)t) does not necessarily
vanish. Here we recall that for a complex number z = α + jβ:

e^(zt) = e^((α+jβ)t) = e^(αt) e^(jβt)    (4A.11)

Now |e^(jβt)| = 1 regardless of the value of t. Therefore, as t → ∞, e^(zt) → 0
only if α < 0, and e^(zt) → ∞ if α > 0. Thus:

lim(t→∞) e^(zt) = 0 if Re(z) < 0, and → ∞ if Re(z) > 0    (4A.12)

Clearly:

lim(t→∞) e^(−(s+a)t) = 0 if Re(s+a) > 0, and → ∞ if Re(s+a) < 0    (4A.13)

Use of this result in Eq. (4A.10) yields:

X(s) = 1/(s+a),  Re(s+a) > 0    (4A.14)

or:

e^(−at) u(t) ↔ 1/(s+a),  Re(s) > −a    (4A.15)

The ROC of X(s) is Re(s) > −a, as shown in the shaded area below:

[Figure 4A.3: the signal e^(−at) u(t), and the region of convergence
Re(s) > −a of its Laplace transform in the s-plane; a possible path of
integration from c − j∞ to c + j∞, with c > −a, is shown dotted]

This fact means that the integral defining X(s) in Eq. (4A.10) exists only for
the values of s in the shaded region in Figure 4A.3. For other values of s, the
integral does not converge.
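The transform pair (4A.15) and its ROC can be sanity-checked numerically. The sketch below evaluates the defining integral by the trapezoidal rule for one real value of s inside the ROC (the values of a and s, and the truncation of the upper limit, are assumed for illustration):

```python
import numpy as np

# Numerical check of e^(-at) u(t) <-> 1/(s+a): evaluate the defining integral
# for a real s inside the ROC Re(s) > -a. The infinite upper limit is
# truncated at t = 40, where e^(-(s+a)t) ~ e^(-120) is utterly negligible.
a = 2.0
s = 1.0                                   # Re(s) = 1 > -a = -2: inside the ROC
t = np.linspace(0, 40, 400001)
dt = t[1] - t[0]
f = np.exp(-(s + a) * t)                  # integrand e^(-at) e^(-st)
X_numeric = dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)   # trapezoidal rule

assert abs(X_numeric - 1 / (s + a)) < 1e-6   # agrees with 1/(s+a) = 1/3
```

Trying the same integral with s = −3 (outside the ROC) would diverge, which is exactly what the ROC condition predicts.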

The ROC is required for evaluating the inverse Laplace transform, as defined
by Eq. (4A.7). The operation of finding the inverse transform requires
integration in the complex plane. The path of integration is along c + jω, with
ω varying from −∞ to ∞. This path of integration must lie in the ROC for
X(s). For the signal e^(−at) u(t), this is possible if c > −a. One possible path
of integration is shown (dotted) in Figure 4A.3. We can avoid this integration
by compiling a table of Laplace transforms.

If all the signals we deal with are restricted to t ≥ 0, then the inverse transform
is unique, and we do not need to specify the ROC.

Finding a Fourier Transform using a Laplace Transform

If the function x(t) is zero for all t < 0, and the Laplace transform exists along
the imaginary axis of the s-plane (s = jω), then:

X(ω) = X(s)|(s = jω)    (4A.16)

That is, the Fourier transform X(ω) is just the Laplace transform X(s) with
s = jω.

Example

To find the Fourier transform of the signal x(t) = e^(−3t) u(t), we substitute
s = jω into its Laplace transform:

1/(s+3)|(s = jω) = 1/(jω + 3) = 1/(3 + j2πf)    (4A.17)

A quick check from our knowledge of the Fourier transform shows this to be
correct (because the Laplace transform includes the jω axis in its region of
convergence).
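This substitution can also be checked numerically. The sketch below compares 1/(s + 3) evaluated at s = j2πf against a direct numerical evaluation of the Fourier integral of e^(−3t) u(t) at one arbitrarily chosen frequency (the test frequency and truncation point are assumptions for illustration):

```python
import numpy as np

# Setting s = j*omega in X(s) = 1/(s+3) should give the Fourier transform of
# x(t) = e^(-3t) u(t). Compare 1/(3 + j*2*pi*f) against the Fourier integral
# evaluated by the trapezoidal rule (upper limit truncated where e^(-3t) ~ 0).
f0 = 1.5                                   # test frequency in Hz (arbitrary)
t = np.linspace(0, 20, 200001)             # e^(-3t) ~ e^(-60) by t = 20
dt = t[1] - t[0]
g = np.exp(-3 * t) * np.exp(-1j * 2 * np.pi * f0 * t)
X_numeric = dt * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)

X_from_laplace = 1 / (1j * 2 * np.pi * f0 + 3)

assert abs(X_numeric - X_from_laplace) < 1e-5
```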

We can also view the Laplace transform geometrically, if we are willing to
split the transform into its magnitude and phase (remember X(s) is a complex
number). The magnitude of X(s) = 1/(s+3) can be visualised by graphing
|X(s)| as the height of a surface spread out over the s-plane:

[Figure 4A.4: the magnitude |X(s)| of X(s) = 1/(s+3) plotted as a surface over
the s-plane; the surface rises towards infinity at s = −3]

There are several things we can notice about the plot above. First, note that the
surface has been defined only in the ROC, i.e. for Re(s) > −3. Secondly, the
surface approaches an infinite value at the point s = −3. Such a point is termed
a pole, in obvious reference to the surface being analogous to a tent (a zero is a
point where the surface has a value of zero).
We can completely specify X(s), apart from a constant gain factor, by
drawing a so-called pole-zero plot:

[Figure 4A.5: a pole-zero plot in the s-plane, with a single pole marked at
s = −3]

A pole-zero plot locates all the critical points in the s-plane that completely
specify the function X(s) (to within an arbitrary constant), and it is a useful
analytic and design tool.

Thirdly, one cut of the surface has been fortuitously placed along the imaginary
axis. If we graph the height of the surface along this cut against ω, we get a
picture of the magnitude of the Fourier transform vs. ω:

[Figure 4A.6: the magnitude of X(ω) = 1/(3 + jω) versus ω, peaking at 1/3 at
ω = 0; this is the jω-axis cut of the Laplace-transform surface]

With these ideas in mind, it should be apparent that a function that has a
Laplace transform with a ROC in the right-half plane does not have a Fourier
transform (because the Laplace transform surface will never intersect the
jω-axis).

Finding Laplace Transforms

Like the Fourier transform, it is only necessary from a practical viewpoint to find Laplace transforms for a few standard signals, and then formulate several properties of the Laplace transform. Finding a Laplace transform then consists of starting with a known transform and successively applying known transform properties.

Example

To find the Laplace transform of the signal x(t) = δ(t), we substitute into the Laplace transform definition:

X(s) = ∫₀⁻^∞ δ(t) e^{−st} dt    (4A.18)

Recognising this as a sifting integral, we arrive at a standard transform pair:

δ(t) ↔ 1    (4A.19)

The Laplace transform of an impulse

Thus, the Laplace transform of an impulse is 1, just like the Fourier transform.

Example

To find the Laplace transform of the unit-step, just substitute a = 0 into Eq. (4A.15). The result is:

u(t) ↔ 1/s    (4A.20)

The Laplace transform of a unit-step

This is a frequently used transform in the study of electrical circuits and control systems.

Example

To find the Laplace transform of cos(ω₀t)u(t), we recognise that:

cos(ω₀t)u(t) = (1/2)(e^{jω₀t} + e^{−jω₀t})u(t)    (4A.21)

From Eq. (4A.15), it follows that:

(1/2)(e^{jω₀t} + e^{−jω₀t})u(t) ↔ (1/2)[1/(s − jω₀) + 1/(s + jω₀)] = s/(s² + ω₀²)    (4A.22)

and so we have another standard transform:

cos(ω₀t)u(t) ↔ s/(s² + ω₀²)    (4A.23)

A similar derivation can be used to find the Laplace transform of sin(ω₀t)u(t).
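The cosine pair just derived can be spot-checked numerically (a sketch, not part of the notes; the truncation limit T and step count n are arbitrary choices): for real s > 0 the defining integral is approximated with the trapezoidal rule and compared with s/(s² + ω₀²).

```python
import math

def cos_laplace_numeric(s, w0, T=40.0, n=40000):
    # trapezoidal approximation of integral_0^T cos(w0 t) e^{-s t} dt
    dt = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(w0 * t) * math.exp(-s * t) * dt
    return total

s, w0 = 1.0, 3.0
closed_form = s / (s**2 + w0**2)   # predicted value: 1/10 = 0.1
print(cos_laplace_numeric(s, w0), closed_form)
```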


Most of the Laplace transform properties are inherited generalisations of Fourier transform properties. There are a few important exceptions, based on the fact that the Laplace transform is one-sided (from 0⁻ to ∞), whereas the Fourier transform is double-sided (from −∞ to ∞); the Laplace transform is also more general in the sense that it covers the entire s-plane, not just the jω-axis.

Differentiation Property

One of the most important properties of the Laplace transform is the differentiation property. It enables us to transform a differential equation directly into an algebraic equation in the complex variable s. It is much easier to solve algebraic equations than differential equations!

The Laplace transform of the derivative of a function is given by:

L{dx/dt} = ∫₀⁻^∞ (dx/dt) e^{−st} dt    (4A.24)

Integrating by parts, we obtain:

L{dx/dt} = [x(t) e^{−st}]₀⁻^∞ + s ∫₀⁻^∞ x(t) e^{−st} dt    (4A.25)

For the Laplace integral to converge [i.e. for X(s) to exist], it is necessary that x(t)e^{−st} → 0 as t → ∞ for the values of s in the ROC for X(s). Thus:

dx(t)/dt ↔ sX(s) − x(0⁻)    (4A.26)

The Laplace transform differentiation property
Standard Laplace Transforms

u(t) ↔ 1/s    (L.1)

δ(t) ↔ 1    (L.2)

e^{−t}u(t) ↔ 1/(s + 1)    (L.3)

cos(ωt)u(t) ↔ s/(s² + ω²)    (L.4)

sin(ωt)u(t) ↔ ω/(s² + ω²)    (L.5)

Laplace Transform Properties

Assuming x(t) ↔ X(s):

a x(t) ↔ a X(s)    (L.6) Linearity

x(t/T) ↔ T X(sT)    (L.7) Scaling

x(t − c)u(t − c) ↔ e^{−cs} X(s)    (L.8) Time shifting

e^{−at} x(t) ↔ X(s + a)    (L.9) Multiplication by exponential

t^N x(t) ↔ (−1)^N d^N X(s)/ds^N    (L.10) Multiplication by t

dx(t)/dt ↔ sX(s) − x(0⁻)    (L.11) Differentiation

∫₀⁻^t x(τ) dτ ↔ (1/s) X(s)    (L.12) Integration

x₁(t) * x₂(t) ↔ X₁(s) X₂(s)    (L.13) Convolution

x(0⁺) = lim_{s→∞} sX(s)    (L.14) Initial-value theorem

lim_{t→∞} x(t) = lim_{s→0} sX(s)    (L.15) Final-value theorem

Evaluation of the Inverse Laplace Transform

Finding the inverse Laplace transform requires integration in the complex plane, which is normally difficult and time consuming to compute. Instead, we can find inverse transforms from a table of Laplace transforms. All we need to do is express X(s) as a sum of simpler functions of the forms listed in the table. Most of the transforms of practical interest are rational functions, that is, ratios of polynomials in s. Such functions can be expressed as a sum of simpler functions by using partial fraction expansion.

Rational Laplace Transforms

A rational Laplace transform of degree N can be expressed as:

X(s) = (b_M s^M + b_{M−1} s^{M−1} + … + b₁s + b₀) / (a_N s^N + a_{N−1} s^{N−1} + … + a₁s + a₀)    (4A.27)

This can also be written:

X(s) = (b_M s^M + b_{M−1} s^{M−1} + … + b₁s + b₀) / [a_N (s − p₁)(s − p₂)…(s − p_N)]    (4A.28)

where the p_i are called the poles of X(s). If the poles are all distinct, then the partial fraction expansion of Eq. (4A.28) is:

Rational Laplace transforms written in terms of poles

X(s) = c₁/(s − p₁) + c₂/(s − p₂) + … + c_N/(s − p_N)    (4A.29)

We call the coefficients c_i residues, a term derived from complex variable theory. They are given by:

Residues defined

c_i = (s − p_i) X(s)|_{s = p_i}    (4A.30)

Taking the inverse Laplace transform of Eq. (4A.29) using standard transform (L.3) and property (L.7) gives us:

x(t) = c₁ e^{p₁t} + c₂ e^{p₂t} + … + c_N e^{p_N t},  t ≥ 0    (4A.31)

The time-domain form depends only on the poles

Note that the form of the time-domain expression is determined only by the poles of X(s)!
Example

Find the inverse Laplace transform of:

Y(s) = (2s + 1) / [(s + 2)(s + 4)]    (4A.32)

By expansion into partial fractions we have:

(2s + 1) / [(s + 2)(s + 4)] = c₁/(s + 2) + c₂/(s + 4)    (4A.33)

To find c₁, multiply both sides of Eq. (4A.33) by (s + 2):

(2s + 1)/(s + 4) = c₁ + c₂(s + 2)/(s + 4)    (4A.34)

As this equation must be true for all values of s, set s = −2 to remove c₂:

c₁ = [2(−2) + 1] / (−2 + 4) = −3/2    (4A.35)

An equivalent way to find c₁, without performing algebraic manipulation by hand, is to mentally cover up the factor (s + 2) on the left-hand side, and then evaluate the left-hand side at the value of s that makes the factor (s + 2) = 0, i.e. at s = −2. This mental procedure is known as Heaviside's cover-up rule.

Heaviside's cover-up rule

Applying Heaviside's cover-up rule for c₂ results in the mental equation:

c₂ = [2(−4) + 1] / (−4 + 2) = 7/2    (4A.36)

Therefore, the partial fraction expansion is:

Y(s) = −(3/2)/(s + 2) + (7/2)/(s + 4)    (4A.37)

The inverse Laplace transform can now be easily evaluated using standard transform (L.3) and property (L.7):

y(t) = −(3/2) e^{−2t} + (7/2) e^{−4t},  t ≥ 0    (4A.38)

Note that the continual writing of u(t) after each function has been replaced by the more notationally convenient condition t ≥ 0 on the total solution.
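Heaviside's cover-up rule is easy to mechanise. The short sketch below (not part of the original notes) applies it to the transform just worked, (2s + 1)/[(s + 2)(s + 4)], and then spot-checks that the resulting partial fractions reproduce the original function:

```python
def residue_coverup(num, poles, pk):
    # Heaviside cover-up: evaluate num(s) / prod(s - p_i), i != k, at s = pk
    val = num(pk)
    for p in poles:
        if p != pk:
            val /= (pk - p)
    return val

num = lambda s: 2 * s + 1
poles = [-2.0, -4.0]
c1 = residue_coverup(num, poles, -2.0)   # -> -1.5, i.e. -3/2
c2 = residue_coverup(num, poles, -4.0)   # -> 3.5, i.e. 7/2
print(c1, c2)

# spot-check: the partial fractions reproduce Y(s) at an arbitrary s
s = 1.7
lhs = num(s) / ((s + 2) * (s + 4))
rhs = c1 / (s + 2) + c2 / (s + 4)
assert abs(lhs - rhs) < 1e-12
```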
If there is a pole p₁ = α + jβ, then the complex conjugate p₁* = α − jβ is also a pole (that's how we get real coefficients in the polynomial). In this case the residues of the two poles are complex conjugates, and:

X(s) = c₁/(s − p₁) + c₁*/(s − p₁*) + c₃/(s − p₃) + … + c_N/(s − p_N)    (4A.39)

The inverse transform is:

x(t) = c₁ e^{p₁t} + c₁* e^{p₁*t} + c₃ e^{p₃t} + … + c_N e^{p_N t},  t ≥ 0    (4A.40)

which can be expressed as:

Complex-conjugate poles lead to a sinusoidal response

x(t) = 2|c₁| e^{αt} cos(βt + ∠c₁) + c₃ e^{p₃t} + … + c_N e^{p_N t},  t ≥ 0    (4A.41)

Now suppose the pole p₁ of X(s) is repeated r times. Then the partial fraction expansion of Eq. (4A.28) is:

Partial fraction expansion with repeated poles

X(s) = c₁/(s − p₁) + c₂/(s − p₁)² + … + c_r/(s − p₁)^r + c_{r+1}/(s − p_{r+1}) + … + c_N/(s − p_N)    (4A.42)

The residues are given by Eq. (4A.30) for the distinct poles (r + 1 ≤ i ≤ N) and:

Residues defined for repeated poles

c_{r−i} = (1/i!) (dⁱ/dsⁱ)[(s − p₁)^r X(s)]|_{s = p₁}    (4A.43)

for the repeated poles (0 ≤ i ≤ r − 1).


Example

Find the inverse Laplace transform of:

Y(s) = (s + 4) / [(s + 1)(s + 2)²]    (4A.44)

Expanding into partial fractions we have:

(s + 4) / [(s + 1)(s + 2)²] = c₁/(s + 1) + c₂/(s + 2) + c₃/(s + 2)²    (4A.45)

Find c₁ by multiplying both sides by (s + 1) and setting s = −1 (Heaviside's cover-up rule):

c₁ = (−1 + 4) / (−1 + 2)² = 3    (4A.46)

To find c₃, multiply throughout by (s + 2)² and set s = −2:

c₃ = (−2 + 4) / (−2 + 1) = −2    (4A.47)
Note that Heaviside's cover-up rule only applies to the repeated partial fraction with the highest power. To find c₂, we have to use Eq. (4A.43). Multiplying throughout by (s + 2)² gives:

(s + 4)/(s + 1) = c₁(s + 2)²/(s + 1) + c₂(s + 2) + c₃    (4A.48)

Now to get rid of c₃, differentiate with respect to s:

(d/ds)[(s + 4)/(s + 1)] = (d/ds)[c₁(s + 2)²/(s + 1)] + (d/ds)[c₂(s + 2)]    (4A.49)

The differentiation of the quotients can be obtained using:

(d/dx)(u/v) = (v du/dx − u dv/dx)/v²    (4A.50)

Therefore, the LHS of our problem becomes:

(d/ds)[(s + 4)/(s + 1)] = [(s + 1) − (s + 4)]/(s + 1)² = −3/(s + 1)²    (4A.51)

The second term on the RHS becomes:

(d/ds)[c₂(s + 2)] = c₂    (4A.52)

Differentiation of the c₁ term will retain an (s + 2) multiplier. Therefore, if s is set to −2 in the equation after differentiation, we can resolve c₂:

c₂ = −3/(−2 + 1)² = −3    (4A.53)

Therefore, the partial fraction expansion is:

Y(s) = 3/(s + 1) − 3/(s + 2) − 2/(s + 2)²    (4A.54)

The inverse Laplace transform can now be easily evaluated using (L.3), (L.7) and (L.10):

y(t) = 3e^{−t} − 3e^{−2t} − 2t e^{−2t},  t ≥ 0    (4A.55)

Multiple poles produce coefficients that are polynomials in t

A repeated pole will, in general, produce a coefficient of the exponential term which is a polynomial in t.
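The three residues of this example can be checked numerically (a sketch, not part of the notes): the distinct pole and the highest-power repeated term use the cover-up rule, while c₂ uses the derivative formula (4A.43), here approximated by a central difference:

```python
def Y(s):
    return (s + 4) / ((s + 1) * (s + 2) ** 2)

# distinct pole at s = -1: cover-up rule
c1 = (-1 + 4) / (-1 + 2) ** 2            # -> 3.0
# highest-power repeated term at s = -2: cover-up rule
c3 = (-2 + 4) / (-2 + 1)                 # -> -2.0
# c2 = d/ds[(s+2)^2 Y(s)] at s = -2, by central difference
h = 1e-5
g = lambda s: (s + 4) / (s + 1)          # this is (s+2)^2 Y(s)
c2 = (g(-2 + h) - g(-2 - h)) / (2 * h)   # approx -3
print(c1, c3, c2)
```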
MATLAB can be used to obtain the poles and residues of a given rational function X(s).

Example

Given:

X(s) = (s² + 2s + 1) / (s³ + 3s² + 4s + 2)

calculate x(t).

Using MATLAB we just do:

num = [1 2 1];
den = [1 3 4 2];
[r,p] = residue(num,den);

which returns vectors of the residues and poles in r and p respectively.

In summary, there are three types of response from an LTI system:

- Real poles: the response is a sum of exponentials.

- Complex poles: the response is a sum of exponentially damped sinusoids.

- Multiple poles: the response is a sum of exponentials, but the coefficients of the exponentials are polynomials in t.

Thus, we see that any high-order rational function X(s) can be decomposed into a combination of first-order factors.
Transforms of Differential Equations
The time-differentiation property of the Laplace transform sets the stage for solving linear differential equations with constant coefficients. Because d^k y/dt^k ↔ s^k Y(s) (for zero initial conditions), the Laplace transform of a differential equation is an algebraic equation that can be readily solved for Y(s). Next we take the inverse Laplace transform of Y(s) to find the desired solution y(t).

[Figure 4A.7 — solution path: a differential equation in the time-domain (difficult to solve directly) is Laplace transformed (LT) into an algebraic equation in the s-domain (easy to solve); the s-domain solution is then inverse Laplace transformed (ILT) back to the time-domain solution]

Example

Solve the second-order linear differential equation:

(D² + 5D + 6) y(t) = (D + 1) x(t)    (4A.56)

for the initial conditions y(0⁻) = 2 and ẏ(0⁻) = 1 and the input x(t) = e^{−4t}u(t).

The equation is:

d²y/dt² + 5 dy/dt + 6y = dx/dt + x    (4A.57)

Let y(t) ↔ Y(s). Then from property (L.11):

dy/dt ↔ sY(s) − y(0⁻) = sY(s) − 2
d²y/dt² ↔ s²Y(s) − s y(0⁻) − ẏ(0⁻) = s²Y(s) − 2s − 1    (4A.58)

Also, for x(t) = e^{−4t}u(t), we have:

X(s) = 1/(s + 4)

and

dx/dt ↔ sX(s) − x(0⁻) = s/(s + 4) − 0 = s/(s + 4)    (4A.59)

Taking the Laplace transform of Eq. (4A.57), we obtain:

[s²Y(s) − 2s − 1] + 5[sY(s) − 2] + 6Y(s) = s/(s + 4) + 1/(s + 4)    (4A.60)

Collecting all the terms of Y(s) and the remaining terms separately on the left-hand side, we get:

(s² + 5s + 6) Y(s) − (2s + 11) = (s + 1)/(s + 4)    (4A.61)

so that:

(s² + 5s + 6) Y(s) = (2s + 11) + (s + 1)/(s + 4)    (4A.62)
                     [initial condition terms]   [input terms]

Therefore:

Y(s) = (2s + 11)/(s² + 5s + 6) + (s + 1)/[(s + 4)(s² + 5s + 6)]
       [zero-input component]    [zero-state component]

     = [7/(s + 2) − 5/(s + 3)] + [−(1/2)/(s + 2) + 2/(s + 3) − (3/2)/(s + 4)]    (4A.63)

Taking the inverse transform yields:

y(t) = [7e^{−2t} − 5e^{−3t}] + [−(1/2)e^{−2t} + 2e^{−3t} − (3/2)e^{−4t}]
       [zero-input response]   [zero-state response]

     = (13/2)e^{−2t} − 3e^{−3t} − (3/2)e^{−4t},  t ≥ 0    (4A.64)

The Laplace transform method gives the total response, which includes zero-input and zero-state components. The initial condition terms give rise to the zero-input response. The zero-state response terms are exclusively due to the input.
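The total response found above can be verified by direct substitution (a sketch, not part of the notes). For t > 0 the right-hand side of the ODE is dx/dt + x = −4e^{−4t} + e^{−4t} = −3e^{−4t}. Note also that while y(0) = 2 matches y(0⁻), the derivative ẏ(0⁺) = 2 differs from ẏ(0⁻) = 1, because dx/dt contains an impulse at t = 0:

```python
import math

def y(t):   # total response (4A.64)
    return 6.5 * math.exp(-2 * t) - 3 * math.exp(-3 * t) - 1.5 * math.exp(-4 * t)

def dy(t):
    return -13 * math.exp(-2 * t) + 9 * math.exp(-3 * t) + 6 * math.exp(-4 * t)

def d2y(t):
    return 26 * math.exp(-2 * t) - 27 * math.exp(-3 * t) - 24 * math.exp(-4 * t)

# for t > 0 the ODE reads y'' + 5y' + 6y = -3 e^{-4t}
for t in (0.1, 0.5, 2.0):
    residual = d2y(t) + 5 * dy(t) + 6 * y(t) + 3 * math.exp(-4 * t)
    assert abs(residual) < 1e-9

print(y(0))   # 2.0, matching y(0-) = 2
```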
Consider the Nth-order input/output differential equation:

Systems described by differential equations

d^N y(t)/dt^N + Σ_{i=0}^{N−1} a_i dⁱy(t)/dtⁱ = Σ_{i=0}^{M} b_i dⁱx(t)/dtⁱ    (4A.65)

If we take the Laplace transform of this equation using (L.11), assuming zero initial conditions, we get:

transform into rational functions of s

Y(s) = [(b_M s^M + b_{M−1} s^{M−1} + … + b₁s + b₀) / (s^N + a_{N−1} s^{N−1} + … + a₁s + a₀)] X(s)    (4A.66)

Now define:

Transfer function defined

Y(s) = H(s) X(s)    (4A.67)

The function H(s) is called the transfer function of the system, since it specifies the transfer from the input to the output in the s-domain (assuming no initial energy). This is true for any system. For the case described by Eq. (4A.65), the transfer function is a rational polynomial given by:

H(s) = (b_M s^M + b_{M−1} s^{M−1} + … + b₁s + b₀) / (s^N + a_{N−1} s^{N−1} + … + a₁s + a₀)    (4A.68)

Do the revision problem, assuming zero initial conditions.

The System Transfer Function

For a linear time-invariant system described by a convolution integral, we can take the Laplace transform and get:

Y(s) = H(s) X(s)    (4A.69)

which shows us that:

h(t) ↔ H(s)    (4A.70)

The relationship between time-domain and frequency-domain descriptions of a system

The transfer function is the Laplace transform of the impulse response!

Instead of writing H(s) as in Eq. (4A.68), it can be expressed in the factored form:

Transfer function factored to get zeros and poles

H(s) = b_M (s − z₁)(s − z₂)…(s − z_M) / [(s − p₁)(s − p₂)…(s − p_N)]    (4A.71)

where the z's are the zeros of the system and the p's are the poles of the system. This shows us that, apart from a constant factor b_M, the poles and zeros determine the transfer function completely. They are often displayed on a pole-zero diagram, which is a plot in the s-domain showing the location of all the poles (marked by x) and all the zeros (marked by o).

You should be familiar with direct construction of the transfer function for electric circuits from previous subjects.

Block Diagrams

Block diagrams are transfer functions

Systems are often represented as interconnections of s-domain blocks, with each block containing a transfer function.

Example

Given the following electrical system:

A simple first-order system

[Figure 4A.8 — an RC lowpass circuit with input v_i(t) and output v_o(t)]

we can perform KVL around the loop to get the differential equation:

v_o(t) + RC dv_o(t)/dt = v_i(t)    (4A.72)

Assuming zero initial conditions, the Laplace transform of this is:

V_o(s) + sRC V_o(s) = V_i(s)    (4A.73)

and therefore the system transfer function is:

V_o(s)/V_i(s) = 1/(1 + sRC) = 1/(1 + sT)

where T = RC, the time constant.    (4A.74)
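Two familiar properties of this transfer function can be checked numerically (a sketch, not part of the notes; the component values are arbitrary choices): the DC gain |H(0)| is 1, and at the corner frequency ω = 1/T the magnitude has fallen to 1/√2 (the −3 dB point):

```python
import math

R, C = 1.0e3, 1.0e-6      # assumed example values: 1 kOhm, 1 uF -> T = 1 ms
T = R * C

def H(s):
    # first-order lowpass transfer function 1/(1 + sT)
    return 1.0 / (1.0 + s * T)

print(abs(H(0)))              # 1.0 : DC passes unchanged
w_c = 1.0 / T                 # corner frequency in rad/s
print(abs(H(1j * w_c)))       # ~0.7071, i.e. 1/sqrt(2)
```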

Therefore the block diagram is:

Block diagram of the simple lowpass RC circuit

[Figure 4A.9 — block diagram: V_i(s) → [1/(1 + sT)] → V_o(s)]

Note that there's no hint of what makes up the inside of the block, except for the input and output signals. It could be a simple RC circuit, a complex passive circuit, or even an active circuit (with op-amps). The important thing the block does is hide all this detail.

Notation

We use the following notation, where G(s) is the transfer function:

A block represents multiplication with a transfer function

[Figure 4A.10 — block diagram: V_i(s) → [G(s)] → V_o(s)]

Most systems have several blocks interconnected by various forward and backward paths. Signals in block diagrams can not only be transformed by a transfer function, they can also be added and subtracted.

Addition and subtraction of signals in a block diagram

[Figure 4A.11 — summing junctions: Z = X + Y and Z = X − Y]

Cascading Blocks

Blocks can be connected in cascade.

Cascading blocks implies multiplying the transfer functions

[Figure 4A.12 — cascade: X → [G₁(s)] → Y = G₁X → [G₂(s)] → Z = G₁G₂X]

Care must be taken when cascading blocks. Consider what happens when we try to create a second-order circuit by cascading two first-order circuits:

A circuit which IS NOT the cascade of two first-order circuits

[Figure 4A.13 — two RC sections connected directly, input V_i, output V_o]

Show that the transfer function for the above circuit is:

V_o/V_i = (1/RC)² / [s² + 3(1/RC)s + (1/RC)²]    (4A.75)

Compare with the following circuit:

A circuit which IS the cascade of two first-order circuits

[Figure 4A.14 — two RC sections separated by a buffer, input V_i, output V_o]

which has the transfer function:

V_o/V_i = [(1/RC)/(s + 1/RC)] · [(1/RC)/(s + 1/RC)] = (1/RC)² / [s² + 2(1/RC)s + (1/RC)²]    (4A.76)

They are different! In the first case, the second network loads the first (i.e. they interact). We can only cascade circuits if the outputs of the circuits present a low impedance to the next stage, so that each successive circuit does not load the previous circuit. Op-amp circuits of both the inverting and non-inverting type are ideal for cascading.

We can only cascade circuits if they are buffered
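The difference between the two denominators is easy to see in the pole locations (a sketch, not part of the notes; RC is normalised to 1 for convenience). The loaded circuit has two distinct real poles, while the buffered cascade has a double pole at −1/RC:

```python
import math

RC = 1.0
a = 1.0 / RC

# unbuffered: s^2 + 3a s + a^2 = 0  ->  poles (-3 +/- sqrt(5)) a / 2
p_unbuf = sorted([(-3 - math.sqrt(5)) * a / 2, (-3 + math.sqrt(5)) * a / 2])
# buffered: s^2 + 2a s + a^2 = (s + a)^2  ->  double pole at -a
p_buf = [-a, -a]

print(p_unbuf)   # approx [-2.618, -0.382]
print(p_buf)     # [-1.0, -1.0]
```

Note that the unbuffered poles still multiply to a² and sum to −3a, as the denominator coefficients require, but they are no longer the poles of either RC section on its own.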

Standard Form of a Feedback Control System

Perhaps the most important block diagram is that of a feedback connection, shown below:

Standard form for the feedback connection

[Figure 4A.15 — feedback loop: R(s) enters a summing junction producing E(s), which passes through [G(s)] to the output C(s); C(s) is fed back through [H(s)] as B(s), which is subtracted at the summing junction]

We have the following definitions:

G(s) = forward path transfer function
H(s) = feedback path transfer function
R(s) = reference, input, or desired output
C(s) = controlled variable, or output
B(s) = output multiplied by H(s)
E(s) = actuating error signal
R(s) − C(s) = system error
C(s)/R(s) = closed-loop transfer function
G(s)H(s) = loop gain

To find the transfer function, we solve the following two equations, which are self-evident from the block diagram:

C(s) = G(s) E(s)
E(s) = R(s) − H(s) C(s)    (4A.77)

Then the output C(s) is given by:

C(s) = G(s) R(s) − G(s) H(s) C(s)
C(s)[1 + G(s) H(s)] = G(s) R(s)    (4A.78)

and therefore:

Transfer function for the standard feedback connection

C(s)/R(s) = G(s) / [1 + G(s) H(s)]    (4A.79)

Similarly, we can show that:

E(s)/R(s) = 1 / [1 + G(s) H(s)]    (4A.80)

and:

B(s)/R(s) = G(s) H(s) / [1 + G(s) H(s)]    (4A.81)

Finding the error signal's transfer function for the standard feedback connection

Notice how all the above expressions have the same denominator.
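These closed-loop relations can be verified with numbers (a sketch, not part of the notes; the values of G, H and R are arbitrary choices made by evaluating the transfer functions at some fixed s):

```python
G, H, R = 8.0, 0.25, 1.0   # assumed numeric values at a fixed s

# closed-loop output from (4A.79), then check the two loop equations
C = G * R / (1.0 + G * H)
E = R - H * C
assert abs(C - G * E) < 1e-12   # consistent with C = G*E

print(C / R)       # G/(1+GH)  = 8/3  ~ 2.6667
print(E / R)       # 1/(1+GH)  = 1/3
print(H * C / R)   # GH/(1+GH) = 2/3
```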
We define 1 + G(s)H(s) = 0 as the characteristic equation of the differential equation describing the system. Note that for negative feedback the denominator is 1 + G(s)H(s), and for positive feedback it is 1 − G(s)H(s).

Characteristic equations for positive and negative feedback

[Figure 4A.16 — negative feedback gives denominator 1 + GH; positive feedback gives denominator 1 − GH]

Block Diagram Transformations

Simplifying block diagrams

We can manipulate the signals and blocks in a block diagram in order to simplify it. The overall system transfer function is obtained by combining the blocks in the diagram into just one block. This is termed block-diagram reduction. The standard equivalences are:

Combining blocks in cascade: X₁ → [G₁] → [G₂] → X₃ is equivalent to X₁ → [G₁G₂] → X₃.

Moving a summing point behind a block: (X₁ ± X₂) → [G] → X₃ is equivalent to forming G X₁ ± G X₂, i.e. placing a copy of [G] in each input path before the summing point.

Moving a summing point ahead of a block: X₃ = G X₁ ± X₂ is equivalent to X₃ = G(X₁ ± [1/G] X₂), i.e. the signal entering the moved summing point first passes through [1/G].

Moving a pick-off point ahead of a block: picking off X₂ after [G] is equivalent to picking off X₁ ahead of the block and passing it through a copy of [G].

Moving a pick-off point behind a block: picking off X₁ ahead of [G] is equivalent to picking off X₂ = G X₁ behind the block and passing it through [1/G].

Eliminating a feedback loop: a loop with forward path [G] and negative feedback path [H] is equivalent to the single block [G/(1 + GH)].

Example

Given the standard feedback loop, with forward block G and feedback block H, we have:

C = G(R − HC)

which can alternatively be written as:

C = GH(R/H − C)

and drawn as: R → [1/H] → summing junction (with C subtracted directly) → [GH] → C.

Example

Given a block diagram with G₁ and G₂ in cascade inside a positive-feedback loop through H₁, followed by G₃ and G₄ in parallel, all enclosed in a feedback loop through H₂, we reduce it in stages:

1. Combine the cascade: G₁ and G₂ become G₁G₂.

2. Combine the parallel paths: G₃ and G₄ become G₃ + G₄.

3. Eliminate the inner (positive) feedback loop: G₁G₂/(1 − G₁G₂H₁).

This leaves a single loop with forward path [G₁G₂/(1 − G₁G₂H₁)](G₃ + G₄) and feedback H₂, which simplifies to:

θ_o/θ_i = G₁G₂(G₃ + G₄) / [1 − G₁G₂H₁ + G₁G₂(G₃ + G₄)H₂]

Example

Consider a system where d is a disturbance input which we want to suppress: the reference θ_i enters a negative-feedback loop whose forward path is G₁ followed by G₂, with d added between G₁ and G₂.

Using superposition, consider θ_i only:

θ_o1/θ_i = G₁G₂ / (1 + G₁G₂)

Considering d only (d enters ahead of G₂, with G₁ appearing in the path around the loop):

θ_o2/d = G₂ / (1 + G₁G₂)

Therefore, the total output is:

θ_o = θ_o1 + θ_o2 = [G₁G₂/(1 + G₁G₂)] θ_i + [G₂/(1 + G₁G₂)] d

Therefore, use a small G₂ and a large G₁!

Summary

- The Laplace transform is a generalization of the Fourier transform. To find the Laplace transform of a function, we start from a known Laplace transform pair, and apply any number of Laplace transform properties to arrive at the solution.

- Inverse Laplace transforms are normally obtained by the method of partial fractions using residues.

- Systems described by differential equations have rational Laplace transforms. The Laplace transforms of the input signal and output signal are related by the transfer function of the system: Y(s) = H(s)X(s). There is a one-to-one correspondence between the coefficients in the differential equation and the coefficients in the transfer function.

- The impulse response and the transfer function form a Laplace transform pair: h(t) ↔ H(s).

- The transfer function of a system can be obtained by performing analysis in the s-domain.

- A block diagram is composed of blocks containing transfer functions and adders. They are used to diagrammatically represent systems. All single-input single-output systems can be reduced to one equivalent transfer function.
References

Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB, Prentice-Hall, 1997.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.

Exercises
1.
Obtain transfer functions for the following networks:
a), b) [RC networks containing R1, C1, R2, C2]

c), d) [networks containing R, L, C, and L1, C1, R1, R2]
2.
Obtain the Laplace transforms for the following integro-differential equations:

a) L di(t)/dt + R i(t) + (1/C) ∫₀ᵗ i(τ) dτ = e(t)

b) M d²x(t)/dt² + B dx(t)/dt + K x(t) = 3t

c) J d²θ(t)/dt² + B dθ(t)/dt + K θ(t) = 10 sin t

3.
Find the f(t) which corresponds to each F(s) below:

a) F(s) = (s² + 5) / (s³ + 2s² + 4s)

b) F(s) = s / (s⁴ + 5s² + 4)

c) F(s) = (3s + 1) / [5(s + 3)(s + 2)²]

4.
Use the final value theorem to determine the final value for each f(t) in 3 a), b) and c) above.

5.
Find an expression for the transfer function of the following network (assume the op-amp is ideal):

[Circuit: an op-amp network with input Vi through R1 and C1, feedback through R2 and C2, output Vo]

6.
In the circuit below, the switch is in a closed position for a long time before t = 0, when it is opened instantaneously. Find the inductor current y(t) for t ≥ 0.

[Circuit: a 10 V source, a switch that opens at t = 0, resistors, and a 1 H inductor carrying current y(t)]
7.
Given the standard feedback loop [R(s) → E(s) → G(s) → C(s), with C(s) fed back through H(s) as B(s)]:

a) Find expressions for:

(i) C(s)/R(s)    (ii) E(s)/R(s)    (iii) B(s)/R(s)

b) What do you notice about the denominator in each of your solutions?

8.
Using block diagram reduction techniques, find the transfer functions of the following systems:

a) [Block diagram containing G1(s), G2(s), G3(s) with feedback paths H2(s), H3(s)]

b) [Block diagram with input X1, output Y, internal signal X5 and gain c]

9.
Use block diagram reduction techniques to find the transfer functions of the following systems:

a) [Block diagram containing G1, G2, G3, G4 with feedback paths H1, H2, H3]

b) [Block diagram as given in the lecture notes]

Pierre Simon de Laplace (1749-1827)
The application of mathematics to problems in physics became a primary task
in the century after Newton. Foremost among a host of brilliant mathematical
thinkers was the Frenchman Laplace. He was a powerful and influential figure,
contributing to the areas of celestial mechanics, cosmology and probability.
Laplace was the son of a farmer of moderate means, and while at the local
military school, his uncle (a priest) recognised his exceptional mathematical
talent. At sixteen, he began to study at the University of Caen. Two years later
he travelled to Paris, where he gained the attention of the great mathematician
and philosopher Jean Le Rond d'Alembert by sending him a paper on the
principles of mechanics. His genius was immediately recognised, and Laplace
became a professor of mathematics.
He began producing a steady stream of remarkable mathematical papers. Not
only did he make major contributions to difference equations and differential
equations but he examined applications to mathematical astronomy and to the
theory of probability, two major topics which he would work on throughout his
life. His work on mathematical astronomy before his election to the Académie
des Sciences included work on the inclination of planetary orbits, a study of
how planets were perturbed by their moons, and in a paper read to the
Academy on 27 November 1771 he made a study of the motions of the planets
which would be the first step towards his later masterpiece on the stability of
the solar system.
In 1773, before the Academy of Sciences, Laplace proposed a model of the
solar system which showed how perturbations in a planet's orbit would not
change its distance from the sun. For the next decade, Laplace contributed a
stream of papers on planetary motion, clearing up discrepancies in the orbits
of Jupiter and Saturn, he showed how the moon accelerates as a function of the
Earth's orbit, he introduced a new calculus for discovering the motion of
celestial bodies, and even a new means of computing planetary orbits which
led to astronomical tables of improved accuracy.
The 1780s were the period in which Laplace produced the depth of results
which have made him one of the most important and influential scientists that
the world has seen. Laplace let it be known widely that he considered himself
the best mathematician in France. The effect on his colleagues would have
been only mildly eased by the fact that Laplace was right!
In 1784 Laplace was appointed as examiner at the Royal Artillery Corps, and
in this role in 1785, he examined and passed the 16 year old Napoleon
Bonaparte.
In 1785, he introduced a field equation in spherical harmonics, now known as
Laplace's equation, which is found to be applicable to a great deal of
phenomena, including gravitation, the propagation of sound, light, heat, water,
electricity and magnetism.
Laplace presented his famous nebular hypothesis in 1796 in Exposition du
système du monde, which viewed the solar system as originating from the
contracting and cooling of a large, flattened, and slowly rotating cloud of incandescent gas. (A marginal note here quotes Laplace's reply to Napoleon on why his works on celestial mechanics make no mention of God: "Your Highness, I have no need of this hypothesis.") The Exposition consisted of five books: the first was on the
apparent motions of the celestial bodies, the motion of the sea, and also
atmospheric refraction; the second was on the actual motion of the celestial
bodies; the third was on force and momentum; the fourth was on the theory of
universal gravitation and included an account of the motion of the sea and the
shape of the Earth; the final book gave an historical account of astronomy and
included his famous nebular hypothesis which even predicted black holes.
Laplace stated his philosophy of science in the Exposition:

"If man were restricted to collecting facts, the sciences would be only a sterile nomenclature and he would never have known the great laws of nature. It is in comparing the phenomena with each other, in seeking to grasp their relationships, that he is led to discover these laws..."
Exposition du système du monde was written as a non-mathematical
introduction to Laplace's most important work. Laplace had already discovered
the invariability of planetary mean motions. In 1786 he had proved that the
eccentricities and inclinations of planetary orbits to each other always remain
small, constant, and self-correcting. These and many of his earlier results
formed the basis for his great work the Traité de Mécanique Céleste, published
in 5 volumes, the first two in 1799.
The first volume of the Mécanique Céleste is divided into two books, the first
on general laws of equilibrium and motion of solids and also fluids, while the
second book is on the law of universal gravitation and the motions of the
centres of gravity of the bodies in the solar system. The main mathematical
approach was the setting up of differential equations and solving them to
describe the resulting motions. The second volume deals with mechanics
applied to a study of the planets. In it Laplace included a study of the shape of
the Earth which included a discussion of data obtained from several different
expeditions, and Laplace applied his theory of errors to the results.
In 1812 he published the influential study of probability, Théorie analytique des probabilités. The work consists of two books. The first book studies
generating functions and also approximations to various expressions occurring
in probability theory. The second book contains Laplace's definition of
probability, Bayes's rule (named by Poincaré many years later), and remarks on
mathematical expectation. The book continues with methods of finding
probabilities of compound events when the probabilities of their simple
components are known, then a discussion of the method of least squares, and
inverse probability. Applications to mortality, life expectancy, length of
marriages and probability in legal matters are given.
After the publication of the fourth volume of the Mécanique Céleste, Laplace
continued to apply his ideas of physics to other problems such as capillary
action (1806-07), double refraction (1809), the velocity of sound (1816), the
theory of heat, in particular the shape and rotation of the cooling Earth
(1817-1820), and elastic fluids (1821).
Many original documents concerning his life have been lost, and gaps in his
biography have been filled by myth. Some papers were lost in a fire that
destroyed the chateau of a descendant, and others went up in flames when
Allied forces bombarded Caen during WWII.
Laplace died on 5 March, 1827 at his home outside Paris.
Lecture 4B Transfer Functions

Stability. Unit-step response. Sinusoidal response. Arbitrary response.

Overview

The transfer function tells us lots of things

Transfer functions are obtained by Laplace transforming a system's input/output differential equation, or by analysing a system directly in the s-domain. From the transfer function, we can derive many important properties of a system.
Stability

To look at stability, let's examine the rational system transfer function that ordinarily arises from linear differential equations:

H(s) = (b_M s^M + b_{M−1} s^{M−1} + … + b₁s + b₀) / (s^N + a_{N−1} s^{N−1} + … + a₁s + a₀)    (4B.1)

The transfer function H(s) is the Laplace transform of the impulse response h(t). Since it is the poles of a Laplace transform that determine the system's time-domain response, the poles of H(s) determine the form of h(t). In particular, for real and complex non-repeated poles:

c/(s − p) ↔ c e^{pt}    (4B.2a)

[A cos θ (s − α) − A sin θ β] / [(s − α)² + β²] ↔ A e^{αt} cos(βt + θ)    (4B.2b)

The impulse response is always some sort of exponential

If H(s) has repeated poles, then h(t) will contain terms of the form c tⁱ e^{pt} and/or A tⁱ e^{αt} cos(βt + θ).

From the time-domain expressions, it follows that h(t) converges to zero as t → ∞ if and only if:

Re{p_i} < 0    (4B.3a)

Conditions on the poles for a stable system

where the p_i are the poles of H(s).

This is equivalent to saying:

Stability defined

A system is stable if all the poles of the transfer function lie in the open left-half s-plane    (4B.3b)
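For a second-order denominator the stability test can be carried out directly on the roots (a sketch, not part of the notes; the coefficient values are arbitrary examples):

```python
import cmath

def poles_quadratic(a1, a0):
    # roots of s^2 + a1 s + a0 via the quadratic formula
    d = cmath.sqrt(a1 * a1 - 4 * a0)
    return (-a1 + d) / 2, (-a1 - d) / 2

def is_stable(poles):
    # stable iff every pole lies in the open left-half s-plane
    return all(p.real < 0 for p in poles)

print(is_stable(poles_quadratic(3, 2)))    # poles -1, -2      -> True
print(is_stable(poles_quadratic(2, 5)))    # poles -1 +/- 2j   -> True
print(is_stable(poles_quadratic(-2, 5)))   # poles  1 +/- 2j   -> False
```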

Unit-Step Response

Consider a system with rational transfer function H(s) = B(s)/A(s). If an input x(t) is applied for t ≥ 0 with no initial energy in the system, then the transform of the resulting output response is:

Y(s) = [B(s)/A(s)] X(s)    (4B.4)

Suppose x(t) is the unit-step function u(t), so that X(s) = 1/s. Then the transform of the step response is:

Transform of step-response for any system

Y(s) = B(s) / [A(s) s]    (4B.5)

Using partial fractions, this can be written as:

Y(s) = H(0)/s + E(s)/A(s)    (4B.6)

where E(s) is a polynomial in s and the residue of the pole at the origin is given by:

    c = sY(s)|s=0 = H(0)    (4B.7)

Taking the inverse Laplace transform of Y(s), we get the time-domain response to the unit-step function:

    y(t) = H(0) + y_tr(t),  t ≥ 0    (4B.8)

The complete response consists of a transient part and a steady-state part

where y_tr(t) is the inverse Laplace transform of E(s)/A(s). If the system is stable, so that all the roots of A(s) = 0 lie in the open left-half plane, then the term y_tr(t) converges to zero as t → ∞, in which case y_tr(t) is the transient part of the response.

So, if the system is stable, the step response contains a transient that decays to zero and it contains a constant with value H(0). The constant H(0) is the steady-state value of the step response.


An analysis of the transient response is very important because we may wish to design a system with certain time-domain behaviour. For example, we may have a requirement to reach 99% of the steady-state value within a certain time, or we may wish to limit any oscillations about the steady-state value to a certain amplitude, etc. This will be examined for the case of first-order and second-order systems.

Transients are important, especially for control system design

First-Order Systems
For the first-order transfer function:

    H(s) = p / (s + p)    (4B.9)

the unit-step response is:

    y(t) = 1 − e^(−pt),  t ≥ 0    (4B.10)

Step-response of a first-order system

which has been written in the form of Eq. (4B.8). If the system is stable, then the pole s = −p lies in the open left-half plane, and the second term decays to zero. The rate at which the transient decays to zero depends on how far over to the left the pole is. Since the total response is equal to the constant 1 plus the transient response, the rate at which the step response converges to the steady-state value is equal to the rate at which the transient decays to zero. This may be an important design consideration.

An important quantity that characterizes the rate of decay of the transient is the time constant T. It is defined as T = 1/p, assuming p > 0. You are probably familiar with the concept of a time constant for electric circuits (e.g. T = RC for a simple RC circuit), but it is a concept applicable to all first-order systems. The smaller the time constant, the faster the rate of decay of the transient.

Time constant defined

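As a numerical illustration of Eq. (4B.10) and the time constant (a sketch in Python; the pole value p = 4 is an arbitrary choice): after one time constant the step response has covered about 63.2% of the way to its final value, and after five time constants it is within 1%.

```python
import numpy as np

p = 4.0              # pole at s = -p; arbitrary example value
T = 1.0 / p          # time constant of the first-order system

t = np.linspace(0, 10 * T, 1001)
y = 1.0 - np.exp(-p * t)        # unit-step response, Eq. (4B.10)

y_at_T = 1.0 - np.exp(-p * T)   # response after one time constant
print(round(y_at_T, 4))         # 0.6321
```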
Second-Order Systems

Now consider the second-order system given by the transfer function:

    H(s) = ωn² / (s² + 2ζωn·s + ωn²)    (4B.11)

Standard form of a second-order lowpass transfer function

The real parameters in the denominator are:

    ζ = damping ratio    (4B.12a)
    ωn = natural frequency    (4B.12b)

Damping ratio and natural frequency defined

If we write:

    H(s) = ωn² / [(s − p1)(s − p2)]    (4B.13)

then the poles of H(s) are given by the quadratic formula:

    p1 = −ζωn + ωn·√(ζ² − 1)    (4B.14a)
    p2 = −ζωn − ωn·√(ζ² − 1)    (4B.14b)

Pole locations for a second-order system

There are three cases to consider.
Distinct Real Poles (ζ > 1) - Overdamped

In this case, the transfer function can be expressed as:

    H(s) = ωn² / [(s − p1)(s − p2)]    (4B.15)

and the transform of the unit-step response is given by:

    Y(s) = ωn² / [(s − p1)(s − p2)·s]    (4B.16)

Rewriting as partial fractions and taking the inverse Laplace transform, we get the unit-step response:

    y(t) = 1 + c1·e^(p1·t) + c2·e^(p2·t),  t ≥ 0    (4B.17)

Step response of a second-order overdamped system

Therefore, the transient part of the response is given by the sum of two exponentials:

    y_tr(t) = c1·e^(p1·t) + c2·e^(p2·t)    (4B.18)

Transient part of the step response of a second-order overdamped system

and the steady-state value:

    y_ss(t) = H(0) = ωn²/(p1·p2) = 1    (4B.19)

Steady-state part of the step response of a second-order overdamped system

It often turns out that the transient response is dominated by one pole (the one closer to the origin - why?), so that the step response looks like that of a first-order system.
Repeated Real Poles (ζ = 1) - Critically Damped

In this case, the transfer function has the factored form:

    H(s) = ωn² / (s + ωn)²    (4B.20)

Expanding H(s)/s via partial fractions and taking the inverse transform yields the step response:

    y(t) = 1 − (1 + ωn·t)·e^(−ωn·t)    (4B.21)

Unit-step response of a second-order critically damped system

Hence, the transient response is:

    y_tr(t) = −(1 + ωn·t)·e^(−ωn·t)    (4B.22)

Transient part of the unit-step response of a second-order critically damped system

and the steady-state response is:

    y_ss(t) = H(0) = 1    (4B.23)

as before.

Steady-state part of the unit-step response of a second-order critically damped system
Complex Conjugate Poles (0 ≤ ζ < 1) - Underdamped

For this case, we define the damped frequency:

    ωd = ωn·√(1 − ζ²)    (4B.24)

Damped frequency defined

so that the poles are located at:

    p1,2 = −ζωn ± jωd    (4B.25)

Underdamped pole locations are complex conjugates

The transfer function is then:

    H(s) = ωn² / [(s + ζωn)² + ωd²]    (4B.26)

and the transform of the unit-step response Y(s) = H(s)/s can be expanded as:

    Y(s) = 1/s − [(s + ζωn) + ζωn] / [(s + ζωn)² + ωd²]    (4B.27)

Thus, from the transform pair Eq. (4B.2b), the unit-step response is:

    y(t) = 1 − (ωn/ωd)·e^(−ζωn·t)·sin(ωd·t + cos⁻¹ζ),  t ≥ 0    (4B.28)

Step response of a second-order underdamped system

Verify the above result.

The transient response is an exponentially decaying sinusoid with frequency ωd rad s⁻¹. Thus second-order systems with complex poles have an oscillatory step response.
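The three cases can be compared numerically. The sketch below (Python with SciPy; ωn = 4 is an arbitrary example value) computes the step response of an overdamped, a critically damped and an underdamped system, and confirms that each settles to the steady-state value H(0) = 1, with overshoot only in the underdamped case:

```python
import numpy as np
from scipy import signal

wn = 4.0     # natural frequency; arbitrary example value
peaks = {}

for zeta in (2.0, 1.0, 0.25):   # overdamped, critically damped, underdamped
    sys = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
    t, y = signal.step(sys, T=np.linspace(0, 10, 4001))
    peaks[zeta] = y.max()
    print(zeta, round(y[-1], 3), round(y.max(), 3))

# Only the underdamped case overshoots; the theoretical peak is
# 1 + exp(-pi*zeta/sqrt(1 - zeta^2)), about 1.444 for zeta = 0.25.
```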
Second-Order Pole Locations

It is convenient to think of the second-order response in terms of the pole-zero plot of the second-order transfer function. Only two parameters determine the pole locations: ζ and ωn. It is instructive to see how varying either parameter moves the pole locations around the s-plane. A graph of some pole locations for various values of ζ and one value of ωn is shown below:

[Figure 4B.1 - second-order pole locations in the s-plane for fixed ωn and ζ = 0, 0.707, 1 and 5: the underdamped poles sit on a circle of radius ωn at ±jωd, meeting at s = −ωn when ζ = 1]

How the damping ratio, ζ, varies the pole location

We can see, for fixed ωn, that varying ζ from 0 to 1 causes the poles to move from the imaginary axis along a circular arc with radius ωn until they meet at the point s = −ωn. If ζ = 0 then the poles lie on the imaginary axis and the transient never dies out - we have a marginally stable system. As ζ is increased, the response becomes less oscillatory and more and more damped, until ζ = 1. Now the poles are real and repeated, and there is no sinusoid in the response. As ζ is increased further, the poles move apart on the real axis, with one moving to the left, and one moving toward the origin. The response becomes more and more damped due to the right-hand pole getting closer and closer to the origin.
Sinusoidal Response
Again consider a system with rational transfer function H(s) = B(s)/A(s). To determine the system response to the sinusoidal input x(t) = C·cos(ω0·t), we first find the Laplace transform of the input:

    X(s) = Cs/(s² + ω0²) = Cs / [(s − jω0)(s + jω0)]    (4B.29)

The transform Y(s) of the ZSR is then:

    Y(s) = B(s)·Cs / [A(s)(s − jω0)(s + jω0)]    (4B.30)

The partial fraction expansion of Eq. (4B.30) is:

    Y(s) = (C/2)H(jω0)/(s − jω0) + (C/2)H*(jω0)/(s + jω0) + E(s)/A(s)    (4B.31)

You should confirm the partial fraction expansion.

The inverse Laplace transform of both sides of Eq. (4B.31) yields:

    y(t) = (C/2)[H(jω0)·e^(jω0·t) + H*(jω0)·e^(−jω0·t)] + y_tr(t)    (4B.32)

The sinusoidal response of a system

and from Euler's identity, this can be written:

    y(t) = C·|H(jω0)|·cos(ω0·t + ∠H(jω0)) + y_tr(t)    (4B.33)

is sinusoidal (plus a transient)
When the system is stable, the y_tr(t) term decays to zero and we are left with the steady-state response:

    y_ss(t) = C·|H(jω0)|·cos(ω0·t + ∠H(jω0)),  t ≥ 0    (4B.34)

Steady-state sinusoidal response of a system

This result is exactly the same expression as that found when performing Fourier analysis, except there the expression was for all time, and hence there was no transient! This means that the frequency response function H(ω) can be obtained directly from the transfer function:

    H(ω) = H(jω) = H(s)|s=jω    (4B.35)

Frequency response
from transfer
function
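Equation (4B.34) is easy to confirm by simulation. The sketch below (Python with SciPy; the system H(s) = 8/(s + 4) and the input amplitude and frequency are arbitrary example values) drives a stable system with C·cos(ω0·t), lets the transient die away, and compares the simulated output with the predicted steady state C·|H(jω0)|·cos(ω0·t + ∠H(jω0)):

```python
import numpy as np
from scipy import signal

# Example system H(s) = 8/(s + 4) driven by x(t) = C*cos(w0*t)
sys = signal.TransferFunction([8], [1, 4])
C, w0 = 2.0, 3.0

t = np.linspace(0, 20, 8001)
_, y, _ = signal.lsim(sys, U=C * np.cos(w0 * t), T=t)

# Predicted steady state, Eq. (4B.34), from H(jw0)
H = 8 / (1j * w0 + 4)
y_ss = C * np.abs(H) * np.cos(w0 * t + np.angle(H))

# Once the transient has decayed (well past 5 time constants),
# simulation and prediction agree
err = np.max(np.abs(y[t > 3] - y_ss[t > 3]))
print(err < 1e-2)  # True
```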

Arbitrary Response
Suppose we apply an arbitrary input x(t) that has rational Laplace transform X(s) = C(s)/D(s), where the degree of C(s) is less than the degree of D(s). If this input is applied to a system with transfer function H(s) = B(s)/A(s), the transform of the response is:

    Y(s) = B(s)C(s) / [A(s)D(s)]    (4B.36)

This can be written as:

    Y(s) = F(s)/D(s) + E(s)/A(s)    (4B.37)

Taking the inverse transform of both sides gives:

    y(t) = y_ss(t) + y_tr(t)    (4B.38)

where y_ss(t) is the inverse transform of F(s)/D(s) and y_tr(t) is the inverse transform of E(s)/A(s).

The Laplace transform of an output signal always contains a steady-state term and a transient term

The important point to note about this simple analysis is that the form of the transient response is determined by the poles of the system transfer function H(s), regardless of the particular form of the input signal x(t), while the form of the steady-state response depends directly on the poles of the input X(s), regardless of what the system transfer function H(s) is!

The form of the response is determined by the poles only
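The separation in Eq. (4B.37) can be seen directly with a partial-fraction expansion: the poles of Y(s) split into those inherited from the input (giving y_ss) and those of the system (giving y_tr). A sketch in Python with SciPy, reusing the first-order step-response case H(s) = 4/(s + 4) with X(s) = 1/s as an arbitrary example:

```python
import numpy as np
from scipy import signal

# Step response of H(s) = 4/(s+4): Y(s) = 4 / (s*(s+4))
r, p, _ = signal.residue([4], [1, 4, 0])

# The pole at s = 0 comes from the input (steady state, residue H(0) = 1);
# the pole at s = -4 comes from the system (transient, residue -1)
residue_at = {round(pi.real, 6): round(ri.real, 6) for ri, pi in zip(r, p)}
print(residue_at)
```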

Summary

A system is stable if all the poles of the transfer function lie in the open left-half
s-plane.

The complete response of a system consists of a transient response and a steady-state response. The transient response consists of the ZIR and a part of the ZSR. The steady-state response is part of the ZSR. The transfer function gives us the ZSR only!

The step response is an important response because it occurs so frequently in engineering applications - control systems in particular. Second-order systems exhibit different step responses depending on their pole locations: overdamped, critically damped and underdamped.

The frequency response of a system can be obtained from the transfer function by setting s = jω.

The poles of the system determine the transient response.

The poles of the signal determine the steady-state response.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.

Exercises
1.
Determine whether the following circuit is stable for any element values, and for any bounded inputs:

[Circuit diagram not reproduced in this extract]

2.
Suppose that a system has the following transfer function:

    H(s) = 8 / (s + 4)

a) Compute the system response to the following inputs. Identify the steady-state solution and the transient solution.

(i) x(t) = u(t)

(ii) x(t) = t·u(t)

(iii) x(t) = 2 sin(2t)·u(t)

(iv) x(t) = 2 sin(10t)·u(t)

b) Use MATLAB to compute the responses numerically. Plot the responses and compare them to the responses obtained analytically in part a).

3.
Consider three systems which have the following transfer functions:

(i) H(s) = 32 / (s² + 4s + 16)

(ii) H(s) = 32 / (s² + 8s + 16)

(iii) H(s) = 32 / (s² + 10s + 16)
For each system:


a) Determine if the system is critically damped, underdamped, or
overdamped.
b) Calculate the step response of the system.
c) Use MATLAB to compute the step response numerically. Plot the
response and compare it to the plot of the response obtained analytically in
part b).

4.
For the circuit shown, compute the steady-state response y_ss(t) resulting from the inputs given below, assuming that there is no initial energy at time t = 0.

[Circuit diagram not reproduced in this extract - an RC network with 100 kΩ and 200 kΩ resistors and two 10 µF capacitors]

a) x(t) = u(t)

b) x(t) = 10 cos(t)·u(t)

c) x(t) = cos(5t + π/6)·u(t)
Oliver Heaviside (1850-1925)
The mid-Victorian age was a time when the divide between the rich and the
poor was immense (and almost insurmountable), a time of unimaginable
disease and lack of sanitation, a time of steam engines belching forth a steady
rain of coal dust, a time of horses clattering along cobblestoned streets, a time
when social services were the fantasy of utopian dreamers. It was into this
smelly, noisy, unhealthy and class-conscious world that Oliver Heaviside was
born the son of a poor man on 18 May, 1850.
A lucky marriage made Charles Wheatstone (of Wheatstone Bridge fame) Heaviside's uncle. This enabled Heaviside to be reasonably well educated, and at the age of sixteen he obtained his first (and last) job as a telegraph operator with the Danish-Norwegian-English Telegraph Company. It was during this
job that he developed an interest in the physical operation of the telegraph
cable. At that time, telegraph cable theory was in a state left by Professor
William Thomson (later Lord Kelvin) a diffusion theory modelling the
passage of electricity through a cable with the same mathematics that describes
heat flow.
By the early 1870s, Heaviside was contributing technical papers to various publications - he had taught himself calculus, differential equations, solid geometry and partial differential equations. But the greatest impact on Heaviside was Maxwell's treatise on electricity and magnetism - Heaviside was swept up by its power.
In 1874 Heaviside resigned from his job as telegraph operator and went back to
live with his parents. He was to live off his parents, and other relatives, for the
rest of his life. He dedicated his life to writing technical papers on telegraphy and electrical theory - much of his work forms the basis of modern circuit theory and field theory.
In 1876 he published a paper entitled On the extra current which made it clear
that Heaviside (a 26-year-old unemployed nobody) was a brilliant talent. He
had extended the mathematical understanding of telegraphy far beyond

"I remember my first look at the great treatise of Maxwell's… I saw that it was great, greater and greatest, with prodigious possibilities in its power." - Oliver Heaviside

Thomson's submarine cable theory. It showed that inductance was needed to permit finite-velocity wave propagation, and would be the key to solving the problems of long distance telephony. Unfortunately, although Heaviside's paper was correct, it was also unreadable by all except a few - this was a trait of Heaviside that would last all his life, and led to his eventual isolation from the academic world. In 1878, he wrote a paper On electromagnets, etc. which introduced the expressions for the AC impedances of resistors, capacitors and inductors. In 1879, his paper On the theory of faults showed that by "faulting" a long telegraph line with an inductance, it would actually improve the signalling rate of the line - thus was born the idea of inductive loading, which allowed transcontinental telegraphy and long-distance telephony to be developed in the USA.
"Now all has been blended into one theory, the main equations of which can be written on a page of a pocket notebook. That we have got so far is due in the first place to Maxwell, and next to him to Heaviside and Hertz." - H.A. Lorentz

When Maxwell died in 1879 he left his electromagnetic theory as twenty


equations in twenty variables! It was Heaviside (and independently, Hertz)
who recast the equations in modern form, using a symmetrical vector calculus
notation (also championed by Josiah Willard Gibbs (1839-1903)). From these
equations, he was able to solve an enormous amount of problems involving
field theory, as well as contributing to the ideas behind field theory, such as
energy being carried by fields, and not electric charges.
A major portion of Heavisides work was devoted to operational calculus.1
This caused a controversy with the mathematicians of the day because although

"Rigorous mathematics is narrow, physical mathematics bold and broad." - Oliver Heaviside

it seemed to solve physical problems, its mathematical rigor was not at all
clear. His knowledge of the physics of problems guided him correctly in many
instances to the development of suitable mathematical processes. In 1887
Heaviside introduced the concept of a resistance operator, which in modern
terms would be called impedance, and Heaviside introduced the symbol Z for
it. He let p be equal to time-differentiation, and thus the resistance operator for
an inductor would be written as pL. He would then treat p just like an algebraic

¹ The Ukrainian Mikhail Egorovich Vashchenko-Zakharchenko published The Symbolic Calculus and its Application to the Integration of Linear Differential Equations in 1862. Heaviside independently invented (and applied) his own version of the operational calculus.
quantity, and solve for voltage and current in terms of a power series in p. In
other words, Heaviside's operators allowed the reduction of the differential equations of a physical system to equivalent algebraic equations.
Heaviside was fond of using the unit-step as an input to electrical circuits, especially since it was a very practical matter to send such pulses down a telegraph line. The unit-step was even called the Heaviside step, and given the symbol H(t), but Heaviside simply used the notation 1. He was tantalizingly close to discovering the impulse by stating that "p·1 means a function of t which is wholly concentrated at the moment t = 0, of total amount 1. It is an impulsive function, so to speak… [it] involves only ordinary ideas of differentiation and integration pushed to their limit."

Paul Dirac derived


the modern notion of
the impulse, when
he used it in 1927,
at age 25, in a paper
on quantum
mechanics. He did
his undergraduate
work in electrical
engineering and
was both familiar
with all of
Heaviside's work
and a great admirer
of his.

Heaviside also played a role in the debate raging at the end of the 19th century about the age of the Earth, with obvious implications for Darwin's theory of evolution. In 1862 Thomson wrote his famous paper On the secular cooling of the Earth, in which he imagined the Earth to be a uniformly heated ball of molten rock, modelled as a semi-infinite mass. Based on experimentally derived thermal conductivity of rock, sand and sandstone, he then mathematically allowed the globe to cool according to the physical law of thermodynamics embedded in Fourier's famous partial differential equation for heat flow. The resulting age of the Earth (100 million years) fell short of that needed by Darwin's theory, and also went against geologic and palaeontologic evidence. John Perry (a professor of mechanical engineering) redid Thomson's analysis using discontinuous diffusivity, and arrived at approximate results that could (based on the conductivity and specific heat of marble and quartz) put the age of the Earth into the billions of years. But Heaviside, using his operational calculus, was able to solve the diffusion equation for a finite spherical Earth. We now know that such a simple model is based on faulty premises - radioactive decay within the Earth maintains the thermal gradient without a continual cooling of the planet. But the power of Heaviside's methods to solve remarkably complex problems became readily apparent.

"The practice of eliminating the physics by reducing a problem to a purely mathematical exercise should be avoided as much as possible. The physics should be carried on right through, to give life and reality to the problem, and to obtain the great assistance which the physics gives to the mathematics." - Oliver Heaviside, Collected Works, Vol II, p.4
Throughout his career, Heaviside released 3 volumes of work entitled
Electromagnetic Theory, which was really just a collection of his papers.
Heaviside shunned all honours, brushing aside his honorary doctorate from the University of Göttingen and even refusing to accept the medal associated with his election as a Fellow of the Royal Society, in 1891.
In 1902, Heaviside wrote an article for the Encyclopedia Britannica entitled
The theory of electric telegraphy. Apart from developing the wave propagation
theory of telegraphy, he extended his essay to include wireless telegraphy,
and explained how the remarkable success of Marconi transmitting from
Ireland to Newfoundland might be due to the presence of a permanently
conducting upper layer in the atmosphere. This supposed layer was referred to
as the Heaviside layer, which was directly detected by Edward Appleton and
M.A.F. Barnett in the mid-1920s. Today we merely call it the ionosphere.
Heaviside spent much of his life being bitter at those who didn't recognise his genius - he had disdain for those that could not accept his mathematics without formal proof, and he felt betrayed and cheated by the scientific community who often ignored his results or used them later without recognising his prior work. It was with much bitterness that he eventually retired and lived out the rest of his life in Torquay on a government pension. He withdrew from public and private life, and was taunted by "insolently rude imbeciles". Objects were thrown at his windows and doors and numerous practical tricks were played on him.
"Heaviside should be remembered for his vectors, his field theory analyses, his brilliant discovery of the distortionless circuit, his pioneering applied mathematics, and for his wit and humor." - P.J. Nahin

Today, the historical obscurity of Heaviside's work is evident in the fact that his vector analysis and vector formulation of Maxwell's theory have become basic knowledge. His operational calculus was made obsolete with the 1937 publication of a book by the German mathematician Gustav Doetsch - it showed how, with the Laplace transform, Heaviside's operators could be replaced with a mathematically rigorous and systematic method.
The last five years of Heaviside's life, with both hearing and sight failing, were years of great privation and mystery. He died on 3rd February, 1925.

References
Nahin, P.: Oliver Heaviside: Sage in Solitude, IEEE Press, 1988.

Lecture 5A Frequency Response
The frequency response function. Determining the frequency response from a
transfer function. Magnitude responses. Phase responses. Frequency response
of a lowpass second-order system. Visualization of the frequency response
from a pole-zero plot. Bode plots. Approximating Bode plots using transfer
function factors. Transfer function synthesis. Digital filters.

Overview
An examination of a systems frequency response is useful in several respects.
It can help us determine things such as the DC gain and bandwidth, how well a
system meets the stability criterion, and whether the system is robust to
disturbance inputs.
Despite all this, remember that the time- and frequency-domains are inextricably related - we can't alter the characteristics of one without affecting the other. This will be demonstrated for a second-order system later.

The Frequency Response Function


Recall that for an LTI system characterized by H(s), and for a sinusoidal input x(t) = A·cos(ω0·t), the steady-state response is:

    y_ss(t) = A·|H(ω0)|·cos(ω0·t + ∠H(ω0))    (5A.1)

where H(ω) is the frequency response function, obtained by setting s = jω in H(s). Thus, the system behaviour for sinusoidal inputs is completely specified by the magnitude response |H(ω)| and the phase response ∠H(ω).

The definition above is precisely how we determine the frequency response experimentally - we input a sinusoid and, in the steady state, measure the magnitude and phase change at the output.

Determining the Frequency Response from a Transfer Function
We can get the frequency response of a system by manipulating its transfer function. Consider a simple first-order transfer function:

    H(s) = K / (s + p)    (5A.2)

The sinusoidal steady state corresponds to s = jω. Therefore, Eq. (5A.2) becomes, for the sinusoidal steady state:

    H(ω) = K / (jω + p)    (5A.3)

The complex function H(ω) can also be written using a complex exponential in terms of magnitude and phase:

    H(ω) = |H(ω)|·e^(j∠H(ω))    (5A.4a)

which is normally written in polar coordinates:

    H(ω) = |H(ω)| ∠H(ω)    (5A.4b)

The transfer function in terms of magnitude and phase

We plot the magnitude and phase of H(ω) as a function of ω or f. We use both linear and logarithmic scales.

If the logarithm (base 10) of the magnitude is multiplied by 20, then we have the gain of the transfer function in decibels (dB):

    |H(ω)| dB = 20 log |H(ω)| dB    (5A.5)

The magnitude of the transfer function in dB

A negative gain in decibels is referred to as attenuation. For example, −3 dB gain is the same as 3 dB attenuation.

The phase function is usually plotted in degrees.

For example, in Eq. (5A.2), let K = p = ω0 so that:

    H(ω) = 1 / (1 + jω/ω0)    (5A.6)

The magnitude function is found directly as:

    |H(ω)| = 1 / √(1 + (ω/ω0)²)    (5A.7)

and the phase is:

    ∠H(ω) = −tan⁻¹(ω/ω0)    (5A.8)
Magnitude Responses
A magnitude response is the magnitude of the transfer function for a sinusoidal steady-state input, plotted against the frequency of the input. Magnitude responses can be classified according to their particular properties. To look at these properties, we will use linear magnitude versus linear frequency plots.

The magnitude response is the magnitude of the transfer function in the sinusoidal steady state

For the simple first-order RC circuit that you are so familiar with, the magnitude function given by Eq. (5A.7) has three frequencies of special interest corresponding to these values of |H(ω)|:

    |H(0)| = 1
    |H(ω0)| = 1/√2 ≈ 0.707
    |H(∞)| = 0    (5A.9)

The frequency ω0 is known as the half-power frequency. The plot below shows the complete magnitude response of |H(ω)| as a function of ω, and the circuit that produces it:

[Figure 5A.1 - a simple lowpass filter: the first-order RC circuit and its magnitude response, equal to 1 at DC and 1/√2 at ω0]

An idealisation of the response in Figure 5A.1, known as a brick wall, and the circuit that produces it are shown below:

[Figure 5A.2 - an ideal lowpass filter: brick-wall magnitude response with cutoff at ω0, passband below and stopband above]

For the ideal filter, the output voltage remains fixed in amplitude until a critical frequency is reached, called the cutoff frequency, ω0. At that frequency, and for all higher frequencies, the output is zero. The range of frequencies with output is called the passband; the range with no output is called the stopband. The obvious classification of the filter is a lowpass filter.

Pass and stop bands defined

Even though the response shown in the plot of Figure 5A.1 differs from the ideal, it is still known as a lowpass filter, and, by convention, the half-power frequency is taken as the cutoff frequency.

If the positions of the resistor and capacitor in the circuit of Figure 5A.1 are interchanged, then the resulting circuit is:

[Figure 5A.3 - the first-order CR circuit, with the capacitor in series and the resistor across the output]

Show that the transfer function is:

    H(s) = s / (s + 1/RC)    (5A.10)

Letting 1/RC = ω0 again, and with s = jω, we obtain:

    H(ω) = (jω/ω0) / (1 + jω/ω0)    (5A.11)

The magnitude function of this equation, at the three frequencies given in Eq. (5A.9), is:

    |H(0)| = 0
    |H(ω0)| = 1/√2 ≈ 0.707
    |H(∞)| = 1    (5A.12)

The plot below shows the complete magnitude response of |H(ω)| as a function of ω, and the circuit that produces it:

[Figure 5A.4 - a simple highpass filter: the CR circuit and its magnitude response, equal to 1/√2 at ω0 and approaching 1 at high frequency]

This filter is classified as a highpass filter. The ideal brick wall highpass filter is shown below:

[Figure 5A.5 - an ideal highpass filter: brick-wall response with stopband below ω0 and passband above]

The cutoff frequency is ω0, as it was for the lowpass filter.
Phase Responses
Like magnitude responses, phase responses are only meaningful when we look at sinusoidal steady-state signals. A transfer function for a sinusoidal input is:

Phase response is obtained in the sinusoidal steady state

    H(ω) = (|Y|∠θ) / (|X|∠0) = (|Y|/|X|)∠θ    (5A.13)

where the input is taken as the phase reference (zero phase).

For the bilinear transfer function:

    H(ω) = K·(jω + z) / (jω + p)    (5A.14)

the phase is:

    θ = ∠K + tan⁻¹(ω/z) − tan⁻¹(ω/p)    (5A.15)

The phase of the bilinear transfer function

We use the sign of this phase angle to classify systems. Those giving positive θ are known as lead systems, those giving negative θ as lag systems.

Lead and lag circuits defined

For the simple RC circuit of Figure 5A.1, for which H(ω) is given by Eq. (5A.6), we have:

    θ = −tan⁻¹(ω/ω0)    (5A.16)

Since θ is negative for all ω, the circuit is a lag circuit. When ω = ω0, θ = −tan⁻¹ 1 = −45°.

A complete plot of the phase response is shown below:

[Figure 5A.6 - lagging phase response for a simple lowpass filter: θ falls from 0° through −45° at ω0 toward −90°]

For the circuit in Figure 5A.3, show that the phase is given by:

    θ = 90° − tan⁻¹(ω/ω0)    (5A.17)

The phase response has the same shape as Figure 5A.6 but is shifted upward by 90°:

[Figure 5A.7 - leading phase response for a simple highpass filter: θ falls from 90° through 45° at ω0 toward 0°]

The angle θ is positive for all ω, and so the circuit is a lead circuit.
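The lead behaviour of Eq. (5A.17) is easy to confirm numerically (a Python sketch; ω0 = 1000 rad/s is an arbitrary example): the phase starts at 90° at DC, passes through 45° at ω0, and approaches 0° at high frequency.

```python
import numpy as np

w0 = 1000.0  # arbitrary example half-power frequency

def phase_deg(w):
    # Phase of the highpass H(w) = (jw/w0)/(1 + jw/w0), Eq. (5A.17)
    return 90.0 - np.degrees(np.arctan(w / w0))

print(phase_deg(0.0), round(phase_deg(w0), 1))  # 90.0 45.0
```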
Frequency Response of a Lowpass Second-Order System
Starting from the usual definition of a lowpass second-order system transfer function:

    H(s) = ωn² / (s² + 2ζωn·s + ωn²)    (5A.18)

we get the following frequency response function:

    H(ω) = ωn² / (ωn² − ω² + j2ζωn·ω)    (5A.19)

The magnitude is:

    |H(jω)| = ωn² / √[(ωn² − ω²)² + (2ζωn·ω)²]    (5A.20)

The magnitude response of a lowpass second-order transfer function

and the phase is:

    φ = −tan⁻¹[2ζωn·ω / (ωn² − ω²)]    (5A.21)

The phase response of a lowpass second-order transfer function
The magnitude and phase functions are plotted below for ζ = 0.4:

[Figure 5A.8 - typical magnitude and phase responses of a lowpass second-order transfer function: the magnitude peaks near ωr = ωn√(1 − 2ζ²), passes through 1/(2ζ) at ωn, and rolls off at −40 dB/decade; the phase falls from 0° through −90° at ωn toward the −180° asymptote]

For the magnitude function, from Eq. (5A.20) we see that:

    |H(0)| = 1,  |H(ωn)| = 1/(2ζ),  |H(∞)| = 0    (5A.22)

and for large ω, the magnitude decreases at a rate of −40 dB per decade, which is sometimes described as two-pole rolloff.

For the phase function, we see that:

    ∠H(0) = 0°,  ∠H(ωn) = −90°,  ∠H(∞) = −180°    (5A.23)
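The values in Eqs. (5A.22) and (5A.23) are easy to check numerically (a Python sketch; ζ = 0.4 matches the plotted example, and ωn = 1 is arbitrary since the response shape depends only on ω/ωn):

```python
import numpy as np

zeta, wn = 0.4, 1.0   # zeta matches the plotted example; wn is arbitrary

def H(w):
    # Standard lowpass second-order frequency response, Eq. (5A.19)
    return wn**2 / (wn**2 - w**2 + 2j * zeta * wn * w)

mag_at_wn = abs(H(wn))                     # should equal 1/(2*zeta)
phase_at_wn = np.degrees(np.angle(H(wn)))  # should equal -90 degrees
peak_w = wn * np.sqrt(1 - 2 * zeta**2)     # resonant frequency from Figure 5A.8

print(round(mag_at_wn, 3), round(phase_at_wn, 1))  # 1.25 -90.0
print(abs(H(peak_w)) > mag_at_wn)                  # True: the peak exceeds |H(wn)|
```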
Visualization of the Frequency Response from a Pole-Zero Plot
The frequency response can be visualised in terms of the pole locations of the transfer function. For example, for a second-order lowpass system:

    H(s) = ωn² / (s² + 2ζωn·s + ωn²)    (5A.24)

Standard form for a lowpass second-order transfer function

the poles are located on a circle of radius ωn and at an angle with respect to the negative real axis of θ = cos⁻¹ζ. These complex conjugate pole locations are shown below:

[Figure 5A.9 - the complex conjugate poles p and p* of a lowpass second-order transfer function, on a circle of radius ωn at angle cos⁻¹ζ from the negative real axis]

In terms of the poles shown in Figure 5A.9, the transfer function is:

    H(s) = ωn² / [(s − p)(s − p*)]    (5A.25)

Lowpass second-order transfer function using pole factors
With s = jω, the two pole factors in this equation become:

    jω − p = m1∠θ1  and  jω − p* = m2∠θ2    (5A.26)

Polar representation of the pole factors

In terms of these quantities, the magnitude and phase are:

    |H(ω)| = ωn² / (m1·m2)    (5A.27)

and:

    ∠H(ω) = −(θ1 + θ2)    (5A.28)

Vectors representing Eq. (5A.26) are shown below:

[Figure 5A.10 - determining the magnitude and phase response from the s-plane: the vectors m1∠θ1 and m2∠θ2 drawn from the poles p and p* to the points jω1, jωn and jω2 on the imaginary axis, together with the resulting magnitude and phase responses]

Figure 5A.10 shows three different frequencies - one below ωn, one at ωn, and one above ωn. From this construction we can see that the short length of m1 near the frequency ωn is the reason why the magnitude function reaches a peak near ωn. These plots are useful in visualising the frequency response of the circuit.

Bode Plots
Bode* plots are plots of the magnitude function |H(ω)| dB = 20 log|H(ω)| and the phase function ∠H(ω), where the scale of the frequency variable (usually ω) is logarithmic. The use of logarithmic scales has several desirable properties:

The advantages of using Bode plots

- we can approximate a frequency response with straight lines. This is called an approximate Bode plot.

- the shape of a Bode plot is preserved if we decide to scale the frequency - this makes design easy.

- we add and subtract individual factors in a transfer function, rather than multiplying and dividing.

- the slope of all asymptotic lines in a magnitude plot is ±20n dB/decade, and ±45n °/decade for phase plots, where n is any integer.

- by examining a few features of a Bode plot, we can readily determine the transfer function (for simple systems).

We normally don't deal with equations when drawing Bode plots - we rely on our knowledge of the asymptotic approximations for the handful of factors that go to make up a transfer function.

* Dr. Hendrik Bode grew up in Urbana, Illinois, USA, where his name is pronounced "boh-dee". Purists insist on the original Dutch "boh-dah". No one uses "bohd".
Approximating Bode Plots using Transfer Function Factors
The table below gives transfer function factors and their corresponding
magnitude asymptotic plots and phase linear approximations:

For each pole factor, the magnitude asymptote and the linear approximation to the phase are plotted over the range 0.01ωn to 100ωn:

1/(s/ωn): the magnitude asymptote is a straight line of slope -20 dB/decade passing through 0 dB at ωn; the phase is -90° at all frequencies.

1/(s/ωn + 1): the magnitude asymptote is 0 dB up to the break frequency ωn, then falls at -20 dB/decade; the phase approximation is 0° below 0.1ωn, falls at -45°/decade, and is -90° above 10ωn.

1/[(s/ωn)² + 2ζ(s/ωn) + 1]: the magnitude asymptote is 0 dB up to ωn, then falls at -40 dB/decade; the phase approximation is 0° below 0.1ωn, falls at -90°/decade, and is -180° above 10ωn.

The corresponding numerator factors are obtained by mirroring the above plots about the 0 dB line and the 0° line.
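The straight-line approximations in the table can be compared with the exact response numerically. The sketch below (a plain-Python illustration, not part of the notes) does this for the first-order pole factor 1/(s/ωn + 1); the worst-case magnitude error of the asymptote is about 3 dB, at the break frequency.

```python
import math

# Exact response of the first-order pole factor H(jw) = 1/(jw/wn + 1),
# versus the Bode straight-line approximations. wn is the break frequency.

def exact_mag_db(w, wn):
    return -10.0 * math.log10(1.0 + (w / wn) ** 2)

def exact_phase_deg(w, wn):
    return -math.degrees(math.atan(w / wn))

def asymptotic_mag_db(w, wn):
    # 0 dB below the break frequency, then -20 dB/decade above it
    return 0.0 if w <= wn else -20.0 * math.log10(w / wn)

def linear_phase_deg(w, wn):
    # 0 deg below 0.1*wn, -90 deg above 10*wn, -45 deg/decade in between
    if w <= 0.1 * wn:
        return 0.0
    if w >= 10.0 * wn:
        return -90.0
    return -45.0 * math.log10(w / (0.1 * wn))

wn = 100.0
# Worst-case magnitude error of the asymptote occurs at the break frequency:
err = abs(exact_mag_db(wn, wn) - asymptotic_mag_db(wn, wn))
print(round(err, 2))  # 3.01 (dB)
```

The same check applies to the quadratic factor, except that the true peak near ωn then depends on ζ, which the asymptote ignores.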

Transfer Function Synthesis
One of the main reasons for using Bode plots is that we can synthesise a
desired frequency response by placing poles and zeros appropriately. This is
easy to do asymptotically, and the results can be checked using MATLAB.
Example

The asymptotic Bode plot shown below is for a band-enhancement filter:


A band enhancement filter

[Figure 5A.11: asymptotic magnitude plot |H| in dB versus ω (rad/s, log scale): 0 dB at low frequencies, rising at +20 dB/decade from 10² rad/s, flat at 20 dB between 10³ and 10⁴ rad/s, then falling at -20 dB/decade back to 0 dB at 10⁵ rad/s]

Figure 5A.11

We wish to provide additional gain over a narrow band of frequencies, leaving the gain at higher and lower frequencies unchanged. We wish to design a filter to these specifications, with the additional requirement that all capacitors have the value C = 10 nF.

The composite plot may be decomposed into four first-order factors as shown below:

Decomposing a Bode plot into first-order factors

[Figure 5A.12: the 20 dB band-enhancement response decomposed into four first-order asymptotic plots: rising zero factors (marked 1 and 4) breaking at 10² and 10⁵ rad/s, and falling pole factors (marked 2 and 3) breaking at 10³ and 10⁴ rad/s]

Figure 5A.12

Those marked 1 and 4 represent zero factors, while those marked 2 and 3 are
pole factors. The pole-zero plot corresponding to these factors is shown below:
The pole-zero plot corresponding to the Bode plot

[Figure 5A.13: pole-zero plot with zeros on the negative real axis at -10² and -10⁵, and poles at -10³ and -10⁴]

Figure 5A.13

From the break frequencies given, we have:

H(jω) = (1 + jω/10²)(1 + jω/10⁵) / [(1 + jω/10³)(1 + jω/10⁴)]    (5A.29)

Substituting s for jω gives the transfer function:

The transfer function corresponding to the Bode plot

H(s) = (s + 10²)(s + 10⁵) / [(s + 10³)(s + 10⁴)]    (5A.30)

We next write H(s) as a product of bilinear functions. The choice is arbitrary, but one possibility is:

The transfer function as a cascade of bilinear functions

H(s) = H₁(s)H₂(s) = [(s + 10²)/(s + 10³)] × [(s + 10⁵)/(s + 10⁴)]    (5A.31)

For a circuit realisation of H₁ and H₂ we decide to use an inverting op-amp circuit that implements a bilinear transfer function:

A realisation of the specifications

[Figure 5A.14: two cascaded inverting op-amp stages between Vi and Vo, built from the normalised element values 10⁻², 10⁻³, 10⁻⁵ and 10⁻⁴, each paired with a 1 F capacitor]

Figure 5A.14

To obtain realistic element values, we need to scale the components so that the transfer function remains unaltered. This is accomplished with the equations:

Magnitude scaling is required to get realistic element values

C_new = (1/k_m) C_old   and   R_new = k_m R_old    (5A.32)

Since the capacitors are to have the value 10 nF, this means k_m = 10⁸. The element values that result are shown below and the design is complete:
A realistic implementation of the specifications

[Figure 5A.15: the scaled circuit between Vi and Vo: resistors 1 MΩ and 100 kΩ in the first stage, 1 kΩ and 10 kΩ in the second, each paired with a 10 nF capacitor]

Figure 5A.15

In this simple example, the response only required placement of the poles and
zeros on the real axis. However, complex pole-pair placement is not unusual in
design problems.
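As a check on the synthesis, the transfer function of Eq. (5A.30) can be evaluated numerically along jω (a plain-Python sketch, standing in for the MATLAB check mentioned earlier):

```python
import math

# Numeric check of the band-enhancement design,
# H(s) = (s + 1e2)(s + 1e5) / ((s + 1e3)(s + 1e4)).

def mag_db(w):
    s = complex(0.0, w)
    H = (s + 1e2) * (s + 1e5) / ((s + 1e3) * (s + 1e4))
    return 20.0 * math.log10(abs(H))

low = mag_db(1.0)                   # well below the enhanced band
mid = mag_db(math.sqrt(1e3 * 1e4))  # geometric centre of the band
high = mag_db(1e7)                  # well above the enhanced band
print(round(low, 3), round(mid, 2), round(high, 3))
# ~0 dB at the extremes, and close to the asymptotic 20 dB in the band
```

The exact response at the band centre falls a little short of the asymptotic 20 dB, as expected of a straight-line approximation.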

Digital Filters
Digital filtering involves sampling, quantising and coding of the input analog signal (using an analog-to-digital converter, or ADC for short). Once we have converted voltages to mere numbers, we are free to do any processing on them that we desire. Usually, the signal's spectrum is found using a fast Fourier transform, or FFT. The spectrum can then be modified by scaling the amplitudes and adjusting the phase of each sinusoid. An inverse FFT can then be performed, and the processed numbers are converted back into analog form (using a digital-to-analog converter, or DAC). In modern digital signal processors, an operation corresponding to a fast convolution is also sometimes employed; that is, the signal is convolved in the time-domain in real-time.

The components of a digital filter are shown below:
The components of a digital filter

vi → [Anti-alias Filter] → [ADC] → [Digital Signal Processor] → [DAC] → [Reconstruction Filter] → vo

Figure 5A.16
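The FFT-modify-inverse-FFT idea described above can be sketched in a few lines. The code below is illustrative only: it uses a slow direct DFT rather than a real FFT library, and the signal and bin numbers are invented for the example.

```python
import cmath
import math

# Minimal sketch of frequency-domain filtering: DFT -> scale bins -> inverse DFT.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 64
# Two sinusoids: one at bin 2 (to keep) and one at bin 16 (to remove)
x = [math.sin(2 * math.pi * 2 * n / N) + math.sin(2 * math.pi * 16 * n / N)
     for n in range(N)]

X = dft(x)
# Zero the bins of the unwanted component (bin 16 and its mirror N - 16)
X[16] = 0.0
X[N - 16] = 0.0
y = [v.real for v in idft(X)]

# y should now contain only the bin-2 sinusoid
err = max(abs(y[n] - math.sin(2 * math.pi * 2 * n / N)) for n in range(N))
print(err < 1e-9)  # True
```

A real DSP would use an FFT (O(N log N) instead of O(N²)) and process the signal block by block, but the scale-the-spectrum idea is exactly this.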

Digital filter advantages

The digital signal processor can be custom-built digital circuitry, or it can be a general-purpose computer. There are many advantages of digitally processing analog signals:
1. A digital filter may be just a small part of a larger system, so it makes sense
to implement it in software rather than hardware.
2. The cost of digital implementation is often considerably lower than that of
its analog counterpart (and it is falling all the time).
3. The accuracy of a digital filter is dependent only on the processor word
length, the quantising error in the ADC and the sampling rate.
4. Digital filters are generally unaffected by such factors as component
accuracy, temperature stability, long-term drift, etc. that affect analog filter
circuits.
5. Many circuit restrictions imposed by physical limitations of analog devices
can be circumvented in a digital processor.
6. Filters of high order can be realised directly and easily.
7. Digital filters can be modified easily by changing the algorithm of the
computer.
8. Digital filters can be designed that are always stable.
9. Filter responses can be made which always have linear phase (constant
delay), regardless of the magnitude response.
Digital filter
disadvantages

Some disadvantages are:


1. Processor speed limits the frequency range over which digital filters can be
used (although this limit is continuously being pushed back with ever faster
processors).
2. Analog filters (and signal conditioning) are still necessary to convert the
analog signal to digital form and back again.

Summary

A frequency response consists of two parts: a magnitude response and a phase response. It tells us the change in the magnitude and phase of a sinusoid at any frequency, in the steady-state.

Bode plots are magnitude (dB) and phase responses drawn on a semi-log scale, enabling the easy analysis or design of high-order systems.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.

Exercises
1.
With respect to a reference frequency f₀ = 20 Hz, find the frequency which is (a) 2 decades above f₀ and (b) 3 octaves below f₀.

2.
Express the following magnitude ratios in dB: (a) 1, (b) 40, (c) 0.5

3.
Draw the approximate Bode plots (both magnitude and phase) for the transfer
functions shown. Use MATLAB to draw the exact Bode plots and compare.
(a) G(s) = 10

(b) G(s) = 4/s

(c) G(s) = 1/(10s + 1)

(d) G(s) = 1/(10s − 1)

(e) G(s) = 5s + 1

(f) G(s) = 5s − 1

Note that the magnitude plots for the transfer functions (c) and (d); (e) and (f) are the same. Why?

4.
Prove that if G(s) has a single pole at s = −1/τ, the asymptotes of the log magnitude response versus log frequency intersect at ω = 1/τ. Prove this not only analytically but also graphically using MATLAB.

5.
Make use of the property that the logarithm converts multiplication and
division into addition and subtraction, respectively, to draw the Bode plot for:
G(s) = 100(s + 1) / [s(0.01s + 1)]

Use asymptotes for the magnitude response and a linear approximation for the
phase response.

6.
Draw the exact Bode plot using MATLAB (magnitude and phase) for
G(s) = 100s² / (s² + 2ζωₙs + ωₙ²)

when ωₙ = 10 rads⁻¹ and ζ = 0.3.

Compare this plot with the approximate Bode plot (which ignores the value of ζ).

7.
Given
(a) G(s) = 206.66(s + 1) / [s²(0.1s + 1)(94.6s + 1)]

(b) G(s) = 4(s + 1.5)(s + 2) / [s²(s + 10)]

Draw the approximate Bode plots and from these graphs find |G| and ∠G at (i) 0.1 rads⁻¹, (ii) 1 rads⁻¹, (iii) 10 rads⁻¹, (iv) 100 rads⁻¹.

8.
The experimental responses of two systems are given below. Plot the Bode diagrams and identify the transfer functions.

(a)
ω (rads⁻¹):  0.1   0.2   0.5    1     2     3     5    10    20    30    40    50   100
|G1| (dB):    40    34    25    20    14    10     2    -9   -23   -32   -40   -46   -64
∠G1 (°):     -92   -95  -100  -108  -126  -138  -160  -190  -220  -235  -243  -248  -258

(b)
ω (rads⁻¹): 0.01  0.02  0.04  0.07   0.1   0.2   0.4   0.7     1     2     4     7    10    20    40   100   500  1000
|G2| (dB):   -26   -20   -14   -10    -7    -3    -1  -0.3     0     0     0    -2    -3    -7   -12   -20   -34   -40
∠G2 (°):      87    84    79    70    61    46    29    20    17    17    25    36    46    64    76    84    89    89

9.
Given G(s) = K / [s(s + 5)]:

(a) Plot the closed-loop frequency response of this system using unity feedback when K = 1. What is the 3 dB bandwidth of the system?

(b) Plot the closed-loop frequency response when K is increased to K = 100. What is the effect on the frequency response?

10.
The following measurements were taken for an open-loop system:

(i) ω = ω₁: |G| = 6 dB, ∠G = −25°

(ii) ω = ω₂: |G| = −18 dB, ∠G = −127°

Find |G| and ∠G at ω₁ and ω₂ when the system is connected in a unity-feedback arrangement.

11.
An amplifier has the following frequency response. Find the transfer function.

[Magnitude plot: |H(ω)| in dB, from -40 to 40 dB, versus ω (rad/s) on a log scale]

[Phase plot: arg(H(ω)) in degrees, from -100° to 100°, versus ω (rad/s) on a log scale]
Lecture 5B Time-Domain Response
Steady-state error. Transient response. Second-order step response. Settling
time. Peak time. Percent overshoot. Rise time and delay time.

Overview
Control systems employing feedback usually operate to bring the output of the system being controlled in line with the reference input. For example, a maze rover may receive a command to "go forward 4 units": how does it respond? Can we control the dynamic behaviour of the rover, and if we can, what are the limits of the control? Obviously we cannot get a rover to move infinitely fast, so it will never follow a step input exactly. It must undergo a transient, just like an electric circuit with storage elements. However, with feedback, we may be able to change the transient response to suit particular requirements, like the time taken to get to a certain position within a small tolerance, not overshooting the mark and hitting walls, etc.

Steady-State Error
One of the main objectives of control is for the system output to follow the
system input. The difference between the input and output, in the steady-state,
is termed the steady-state error:

Steady-state error defined

e(t) = r(t) − c(t)

e_ss = lim(t→∞) e(t)    (5B.1)
Consider the unity-feedback system:

R(s) → E(s) → [G(s)] → C(s), with unity feedback from C(s) back to the summing junction

Figure 5B.1
System type defined

The type of the control system, or simply system type, is the number of poles that G(s) has at s = 0. For example:

G(s) = 10(1 + 3s) / [s(s² + 2s + 2)]    (type 1)

G(s) = 4 / [s³(s + 2)]    (type 3)    (5B.2)
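Since the system type is just the number of poles of G(s) at s = 0, it can be read straight off the denominator coefficients. A small sketch (plain Python, not from the notes):

```python
# The system "type" is the number of poles of G(s) at s = 0, i.e. the number
# of trailing zero coefficients in the denominator polynomial (written in
# descending powers of s).

def system_type(den_coeffs):
    n = 0
    for c in reversed(den_coeffs):
        if c == 0:
            n += 1
        else:
            break
    return n

# G(s) = 10(1+3s) / [s(s^2 + 2s + 2)] -> denominator s^3 + 2s^2 + 2s + 0
print(system_type([1, 2, 2, 0]))     # 1
# G(s) = 4 / [s^3(s+2)]              -> denominator s^4 + 2s^3
print(system_type([1, 2, 0, 0, 0]))  # 3
```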

When the input to the control system in Figure 5B.1 is a step function with magnitude R, then R(s) = R/s and the steady-state error is:

e_ss = lim(t→∞) e(t) = lim(s→0) sE(s)
     = lim(s→0) sR(s) / (1 + G(s))
     = R / (1 + lim(s→0) G(s))    (5B.3)
For convenience, we define the step-error constant, K_P, as:

Step-error constant: only defined for a step input

K_P = lim(s→0) G(s)    (5B.4)

so that Eq. (5B.3) becomes:

e_ss = R / (1 + K_P)    (5B.5)

We see that for the steady-state error to be zero, we require K_P → ∞. This will only occur if there is at least one pole of G(s) at the origin. We can summarise the errors of a unity-feedback system to a step input as:

type 0 system:             e_ss = R / (1 + K_P) = constant
type 1 or higher system:   e_ss = 0    (5B.6)

Steady-state error to a step-input for a unity-feedback system
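Eqs. (5B.4)-(5B.6) reduce to a one-line computation for a rational G(s). The sketch below (illustrative Python; the example transfer functions are invented) evaluates K_P = G(0) and the resulting step error:

```python
# Steady-state step error of a unity-feedback system: e_ss = R / (1 + K_P),
# with K_P = lim_{s->0} G(s). For rational G(s) = num(s)/den(s) with no pole
# at the origin, K_P is just num(0)/den(0); with a pole at the origin, the
# limit is infinite and the error is zero.

def step_error(num, den, R=1.0):
    # num, den: polynomial coefficients of G(s) in descending powers of s
    n0, d0 = num[-1], den[-1]
    if d0 == 0:  # at least one pole at s = 0 -> K_P infinite
        return 0.0
    Kp = n0 / d0
    return R / (1.0 + Kp)

# Type 0 example: G(s) = 4/(s + 2) -> K_P = 2, e_ss = R/3
print(step_error([4.0], [1.0, 2.0]))
# Type 1 example: G(s) = 4/(s(s + 2)) -> e_ss = 0
print(step_error([4.0], [1.0, 2.0, 0.0]))
```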

Transient Response
Consider a maze rover (MR) described by the differential equation:

Maze rover force / velocity differential equation

dv(t)/dt + (k_f/M) v(t) = (1/M) x(t)    (5B.7)

where v(t) is the velocity, x(t) is the driving force, M is the mass and k_f represents frictional losses. We may represent the MR by the following block diagram:
MR transfer function
for velocity output

X(s) → [ (1/M) / (s + k_f/M) ] → V(s)

Figure 5B.2
Now, from the diagram above, it appears that our input to the rover affects the velocity in some way. But we need to control the output position, not the output velocity. We're therefore actually interested in the following model of the MR:
MR transfer function
for position output

X(s) → [ (1/M) / (s + k_f/M) ] → V(s) → [ 1/s ] → C(s)

Figure 5B.3
This should be obvious, since position c(t) is given by:

c(t) = ∫₀ᵗ v(τ) dτ    (5B.8)

Using block diagram reduction, our position model of the MR is:

X(s) → [ (1/M) / (s(s + k_f/M)) ] → C(s)

Figure 5B.4
The whole point of modelling the rover is so that we can control it. Suppose we wish to build a maze rover position control system. We will choose for simplicity a unity-feedback system, and place a controller in the feed-forward path in front of the MR's input. Such a control strategy is termed series compensation.
A block diagram of the proposed feedback system, with its unity feedback and series-compensation controller, is:

Simple MR position control scheme

R(s) → E(s) → [Gc(s), controller] → X(s) → [ (1/M) / (s(s + k_f/M)), maze rover ] → C(s), with unity feedback

Figure 5B.5

Let the controller be the simplest type possible: a proportional controller, which is just a gain K_P. Then the closed-loop transfer function is given by:

C(s)/R(s) = (K_P/M) / [s² + (k_f/M)s + K_P/M]    (5B.9)

which can be manipulated into the standard form for a second-order transfer function:

Second-order control system

C(s)/R(s) = ωₙ² / (s² + 2ζωₙs + ωₙ²)    (5B.10)

Note: For a more complicated controller, we will not in general obtain a second-order transfer function. The reason we examine second-order systems is because they are amenable to analytical techniques; the concepts remain the same, though, for higher-order systems.
Controllers change the transfer function and therefore the time-domain response

Although this simple controller can only vary ωₙ², with ζωₙ fixed (why?), we can still see what sort of performance this controller has in the time-domain as K_P is varied. In fact, the goal of the controller design is to choose a suitable value of K_P to achieve certain criteria.
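Matching Eq. (5B.9) against the standard form of Eq. (5B.10) gives ωₙ = √(K_P/M) and ζ = k_f/(2√(K_P·M)), so the product 2ζωₙ = k_f/M stays fixed no matter how K_P is chosen. A quick numeric illustration (the values of M and k_f are invented for the example):

```python
import math

# For the proportionally controlled rover (Eq. 5B.9), matching
# s^2 + (kf/M)s + KP/M against s^2 + 2*zeta*wn*s + wn^2 gives
# wn = sqrt(KP/M) and zeta = kf / (2*sqrt(KP*M)).

M, kf = 2.0, 4.0  # illustrative values only

def second_order_params(KP):
    wn = math.sqrt(KP / M)
    zeta = kf / (2.0 * math.sqrt(KP * M))
    return wn, zeta

for KP in (1.0, 8.0, 50.0):
    wn, zeta = second_order_params(KP)
    print(KP, round(wn, 3), round(zeta, 3))
# Raising KP raises wn but lowers zeta: faster, yet more oscillatory,
# while 2*zeta*wn = kf/M is the same in every case.
```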

For a unit-step input, R(s) = 1/s, and the output response of the system is obtained by taking the inverse Laplace transform of the output transform:

C(s) = ωₙ² / [s(s² + 2ζωₙs + ωₙ²)]    (5B.11)

We have seen previously that the result for an underdamped system is:

c(t) = 1 − (e^(−ζωₙt) / √(1 − ζ²)) sin(ωₙ√(1 − ζ²) t + cos⁻¹ζ)    (5B.12)

We normally desire the output to be slightly underdamped due to its fast characteristics: it rises to near the steady-state output much quicker than an overdamped or critically damped system.

Although second-order control systems are rare in practice, their analysis generally helps to form a basis for the understanding of analysis and design for higher-order systems, especially ones that can be approximated by second-order systems. Also, time-domain specifications of systems can be directly related to an underdamped second-order response via simple formulae.

Second-order Step Response


The time-domain unit-step response to the system described by Eq. (5B.10) has
been solved previously. We found that there were three distinct solutions that
depended upon the pole locations. We termed the responses overdamped (two
real poles), critically damped (repeated real poles) and underdamped (complex
conjugate poles).
Some time-domain criteria only apply to certain types of response. For
example, percent overshoot only exists for the underdamped case.

We will now examine what sort of criteria we usually specify, with respect to
the following diagram:
Step response definitions

[Figure 5B.6: a unit-step response c(t) annotated with the percent overshoot (P.O.) above the steady-state value 1.00, a ±5% band (0.95 to 1.05), the 0.10, 0.50 and 0.90 levels, and the times t_d, t_r, t_p and t_s]

Figure 5B.6

The following definitions are made:

Percent overshoot:
P.O. = (c_max − c_ss) / c_ss × 100%    (5B.13a)

Delay time t_d:
c(t_d) = 0.5 c_ss    (5B.13b)

Rise time:
t_r = t_90% − t_10%, where c(t_90%) = 0.9 c_ss and c(t_10%) = 0.1 c_ss    (5B.13c)

Settling time t_s:
|c(t) − c_ss| / c_ss ≤ 5% for all t ≥ t_s    (5B.13d)
The poles of Eq. (5B.10) are given by:

p₁, p₂ = −ζωₙ ± jωₙ√(1 − ζ²) = −σ ± jω_d    (5B.14)

The undamped natural frequency is:

Natural frequency defined

ωₙ = radial distance from origin    (5B.15)

The damping factor is defined as:

Damping factor defined

σ = −(real part of the poles)    (5B.16)

and the damped natural frequency is:

Damped frequency defined

ω_d = imaginary part of the poles    (5B.17)

The damping ratio is defined as:

Damping ratio defined

ζ = σ / ωₙ    (5B.18)

All of the above are illustrated in the following diagram:


Second-order complex pole locations

[Figure 5B.7: s-plane diagram with poles at −σ ± jω_d, where ωₙ is the radial distance from the origin, σ = ζωₙ the distance from the jω-axis, and ω_d = ωₙ√(1 − ζ²) the distance from the real axis]

Figure 5B.7

The effect of ζ is readily apparent from the following graph of the step-input response:

Second-order step response for varying damping ratio

[Figure 5B.8: unit-step responses c(t) versus ωₙt (0 to 13) for ζ = 0.1, 0.5, 1.0, 1.5 and 2.0; the smaller the damping ratio, the larger and more oscillatory the overshoot]

Figure 5B.8

Settling Time
The settling time, t_s, is the time required for the output to come within, and stay within, a given band about the actual steady-state value. This band is usually expressed as a percentage p of the steady-state value. To derive an estimate for the settling time, we need to examine the step-response more closely.
The standard second-order lowpass transfer function of Eq. (5B.10) has the s-plane plot shown below if 0 < ζ < 1:

Pole locations showing definition of the angle θ

[Figure 5B.9: s-plane diagram with a pole at −ζωₙ + jωₙ√(1 − ζ²), at radial distance ωₙ from the origin, making angle θ with the negative real axis]

Figure 5B.9

It can be seen that the angle θ is given by:

cos θ = ζ    (5B.19)

The step-response, as given by Eq. (5B.12), is then:

c(t) = 1 − (e^(−ζωₙt) / √(1 − ζ²)) sin(ωₙ√(1 − ζ²) t + θ)    (5B.20)

The curves 1 ± e^(−ζωₙt)/√(1 − ζ²) are the envelope curves of the transient response for a unit-step input. The response curve c(t) always remains within a pair of the envelope curves, as shown below:

[Figure 5B.10: the unit-step response oscillating between the envelope curves 1 + e^(−ζωₙt)/√(1 − ζ²) and 1 − e^(−ζωₙt)/√(1 − ζ²), with the envelope time constant T = 1/(ζωₙ) marked at T, 2T, ..., 5T along the time axis]

Figure 5B.10

To determine the settling time, we need to find the time it takes for the response to fall within, and stay within, a certain band about the steady-state value. This time depends on ζ and ωₙ in a non-linear fashion, because of the oscillatory response. It can be obtained numerically from the responses shown in Figure 5B.8.

Pair of envelope curves for the unit-step response of a lowpass second-order underdamped system
One way of analytically estimating the settling time with a simple equation is to consider only the minima and maxima of the step-response. For 0 < ζ < 1, the step-response is the damped sinusoid shown below:

Exponential curves intersecting the maxima and minima of the step-response

[Figure 5B.11: the step-response with the curves 1 + e^(−ζωₙt) and 1 − e^(−ζωₙt) drawn through its maxima and minima about the steady-state value]

Figure 5B.11

The dashed curves in Figure 5B.11 represent the loci of maxima and minima of the step-response. The maxima and minima are found by differentiating the time response, Eq. (5B.20), and equating to zero:

dc(t)/dt = (ζωₙ e^(−ζωₙt) / √(1 − ζ²)) sin(ωₙ√(1 − ζ²) t + θ)
         − (e^(−ζωₙt) / √(1 − ζ²)) ωₙ√(1 − ζ²) cos(ωₙ√(1 − ζ²) t + θ) = 0    (5B.21)

Dividing through by the common term and rearranging, we get:

√(1 − ζ²) cos(ωₙ√(1 − ζ²) t + θ) = ζ sin(ωₙ√(1 − ζ²) t + θ)

tan(ωₙ√(1 − ζ²) t + θ) = √(1 − ζ²) / ζ = tan(θ + nπ)    (5B.22)

where we have used the fact that tan θ = tan(θ + nπ), n = 0, 1, 2, ...

Then, taking the arctangent of both sides of Eq. (5B.22), we have:

tan⁻¹(√(1 − ζ²) / ζ) = ωₙ√(1 − ζ²) t + θ − nπ    (5B.23)

From the s-plane plot of Figure 5B.9, we have:

tan θ = √(1 − ζ²) / ζ    (5B.24)

Substituting Eq. (5B.24) into Eq. (5B.23), and solving for t, we obtain:

t = nπ / (ωₙ√(1 − ζ²)),   n = 0, 1, 2, ...    (5B.25)

Eq. (5B.25) gives the times at which the maxima and minima of the step-response occur. Since c(t) in Eq. (5B.20) is only defined for t ≥ 0, Eq. (5B.25) only gives valid results for t ≥ 0.
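Eq. (5B.25) can be verified numerically: the derivative of the step-response should vanish at t = nπ/(ωₙ√(1 − ζ²)). A sketch in plain Python (the ζ and ωₙ values are arbitrary):

```python
import math

# Check that the extrema of the underdamped step response
#   c(t) = 1 - exp(-z*wn*t)/sqrt(1 - z^2) * sin(wd*t + theta)
# occur at t = n*pi/wd, as Eq. (5B.25) predicts.

z, wn = 0.3, 10.0              # arbitrary illustrative values
wd = wn * math.sqrt(1.0 - z * z)
theta = math.acos(z)

def c(t):
    return 1.0 - math.exp(-z * wn * t) / math.sqrt(1.0 - z * z) \
               * math.sin(wd * t + theta)

def dc(t, h=1e-6):
    # central-difference estimate of the derivative
    return (c(t + h) - c(t - h)) / (2.0 * h)

for n in (1, 2, 3):
    t = n * math.pi / wd
    print(n, abs(dc(t)) < 1e-4)  # derivative ~ 0 at each predicted extremum
```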

Times at which the maxima and minima of the step-response occur
Substituting Eq. (5B.25) into Eq. (5B.20), we get:

c(nπ/(ωₙ√(1 − ζ²))) = 1 − (e^(−ζωₙ · nπ/(ωₙ√(1 − ζ²))) / √(1 − ζ²)) sin(ωₙ√(1 − ζ²) · nπ/(ωₙ√(1 − ζ²)) + θ)    (5B.26)

Cancelling common terms, we have:

c(n) = 1 − (1/√(1 − ζ²)) e^(−ζnπ/√(1 − ζ²)) sin(nπ + θ)    (5B.27)

Since:

sin(nπ + θ) = −sin θ,  n odd
sin(nπ + θ) = sin θ,   n even    (5B.28)

then Eq. (5B.27) defines two curves:

c₁(n) = 1 + (1/√(1 − ζ²)) e^(−ζnπ/√(1 − ζ²)) sin θ,  n odd    (5B.29a)

c₂(n) = 1 − (1/√(1 − ζ²)) e^(−ζnπ/√(1 − ζ²)) sin θ,  n even    (5B.29b)

From Figure 5B.9, we have:

sin θ = √(1 − ζ²)    (5B.30)

Substituting this equation for sin θ into Eqs. (5B.29), we get:

c₁(n) = 1 + e^(−ζnπ/√(1 − ζ²)),  n odd    (5B.31a)

c₂(n) = 1 − e^(−ζnπ/√(1 − ζ²)),  n even    (5B.31b)

Eq. (5B.31a) and Eq. (5B.31b) are, respectively, the relative maximum and minimum values of the step-response, with the times for the maxima and minima given by Eq. (5B.25). But these values will be exactly the same as those given by the following exponential curves:

Exponential curves passing through the maxima and minima of the step-response

c₁(t) = 1 + e^(−ζωₙt)    (5B.32a)

c₂(t) = 1 − e^(−ζωₙt)    (5B.32b)

evaluated at the times for the maxima and minima:

t = nπ / (ωₙ√(1 − ζ²)),   n = 0, 1, 2, ...    (5B.33)

Since the exponential curves, Eqs. (5B.32), pass through the maxima and minima of the step-response, they can be used to approximate the extreme bounds of the step-response (note that the response actually goes slightly outside the exponential curves, especially after the first peak; the exponential curves are only an estimate of the bounds).

We can make an estimate of the settling time by simply determining the time at which c₁(t) [or c₂(t)] enters the band 1 − ε ≤ c(t) ≤ 1 + ε about the steady-state value, as indicated graphically below:
Graph of an underdamped step-response showing exponential curves bounding the maxima and minima

[Figure 5B.12: the step-response with bounding curves 1 ± e^(−ζωₙt); the settling time t_s is where the bounds enter the band 1 ± ε about the steady-state value]

Figure 5B.12

The exponential terms in Eqs. (5B.32) represent the deviation from the steady-state value. Since the exponential response is monotonic, it is sufficient to calculate the time when the magnitude of the exponential is equal to the required error ε. This time is the settling time, t_s:

e^(−ζωₙt_s) = ε    (5B.34)

Taking the natural logarithm of both sides and solving for t_s gives the p-percent settling time for a step input:

Settling time for a second-order system

t_s = −ln ε / (ζωₙ),  where ε = p/100    (5B.35)

Note that this formula is an approximation to the real settling time.
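Eq. (5B.35) is easy to apply directly. The sketch below (illustrative values) computes a 5% settling time and confirms that the envelope e^(−ζωₙt) has decayed to ε at t = t_s:

```python
import math

# Settling-time estimate t_s = -ln(eps) / (zeta*wn), with eps = p/100.

def settling_time(zeta, wn, p=5.0):
    eps = p / 100.0
    return -math.log(eps) / (zeta * wn)

# e.g. zeta = 0.5, wn = 4 rad/s: 5% settling in about 1.5 s
ts = settling_time(0.5, 4.0, p=5.0)
print(round(ts, 3))  # 1.498

# The envelope exp(-zeta*wn*t) has indeed decayed to eps at t = ts:
print(abs(math.exp(-0.5 * 4.0 * ts) - 0.05) < 1e-12)  # True
```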

Peak Time

The peak time, t_p, at which the response has its maximum overshoot is given by Eq. (5B.25) with n = 1 (the first local maximum):

Peak time for a second-order system

t_p = π / (ωₙ√(1 − ζ²))    (5B.36)

This formula is only applicable if 0 ≤ ζ < 1; otherwise the peak time is t_p = ∞.

Percent Overshoot

The magnitudes of the overshoots can be determined using Eq. (5B.31a). The maximum value is obtained by letting n = 1. Therefore, the maximum value is:

c(t)_max = 1 + e^(−ζπ/√(1 − ζ²))    (5B.37)

Hence, the maximum overshoot is:

maximum overshoot = e^(−ζπ/√(1 − ζ²))    (5B.38)

and the maximum percent overshoot is:

P.O. = e^(−ζπ/√(1 − ζ²)) × 100%    (5B.39)
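The peak-time and percent-overshoot formulas, Eqs. (5B.36) and (5B.39), depend only on ζ and ωₙ. A small numeric illustration (the ζ values are arbitrary):

```python
import math

# Peak time and percent overshoot for an underdamped second-order system:
#   t_p  = pi / (wn*sqrt(1 - zeta^2))
#   P.O. = 100 * exp(-zeta*pi / sqrt(1 - zeta^2))

def peak_time(zeta, wn):
    return math.pi / (wn * math.sqrt(1.0 - zeta ** 2))

def percent_overshoot(zeta):
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))

for zeta in (0.1, 0.5, 0.707):
    print(round(percent_overshoot(zeta), 1))
# 72.9, 16.3, 4.3 -- overshoot falls rapidly as the damping ratio rises
```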

Rise Time and Delay Time

To determine rise times and delay times, we usually don't resort to solving the non-linear equation that results from substitution of the 10%, 50% and 90% values of the steady-state response and solving for t. We use a normalised delay-time graph, or solve the resulting equations numerically using MATLAB, or measure from a graph of the response (or on the DSO).

Summary

The time-domain response of a system is important in control systems. In "set point" control systems, the time-domain response is the step response of the system. We usually employ feedback in a system so that the output tracks the input with some acceptable steady-state error.

The transient part of a time-domain response is important. Control systems usually specify acceptable system behaviour with regards to percent overshoot, settling time, rise time, etc. For second-order systems, most of these quantities can be obtained from simple formulae; in general they cannot.

For second-order all-pole systems, we can directly relate pole locations (ζ and ωₙ) to transient behaviour.

Three important parameters of a second-order underdamped response are used to specify a transient response. They are:

Settling time:       t_s = −ln ε / (ζωₙ)

Peak time:           t_p = π / (ωₙ√(1 − ζ²))

Percent overshoot:   P.O. = e^(−ζπ/√(1 − ζ²)) × 100%

References
Kuo, B.: Automatic Control Systems 7th ed., Prentice-Hall, 1995.

Exercises
1.
A second-order all-pole system has roots at −2 ± j3 rads⁻¹. If the input to the system is a step of 10 units, determine:
(a) the P.O. of the output
(b) the peak time of the output
(c) the damping ratio
(d) the natural frequency of the system
(e) the actual frequency of oscillation of the output
(f) the 0-100% rise time
(g) the 5% settling time
(h) the 2% settling time

2.
Determine a second-order, all-pole transfer function which will meet the
following specifications for a step input:
(a) 10%-90% rise time ≤ 150 ms

(b) overshoot ≤ 5%

(c) 1% settling time ≤ 1 s

3.
Given:
C(s) = ωₙ² / [s(s² + 2ζωₙs + ωₙ²)]

find c(t) using the residue approach.

[Hint: Expand the denominator into the form s(s + ζωₙ − jω_d)(s + ζωₙ + jω_d)]

4.
The experimental zero-state response to a unit-step input of a second-order all-pole system is shown below:

[Graph: unit-step response c(t) versus t, with steady-state value 1.0; the first overshoot above the final value is labelled a, and a later deviation is labelled b, near t = 0.4 and t = 0.8 respectively]

(a) Derive an expression (in terms of a and b) for the damping ratio.
(b) Determine values for the natural frequency and damping ratio, given a = 0.4 and b = 0.08.

5.
Given:

(i) G1(s) = 10 / (s + 10)

(ii) G2(s) = 1 / (s + 1)

(iii) G3(s) = 10 / [(s + 1)(s + 10)]
(a) Sketch the poles of each system on the s-plane.


(b) Sketch the time responses of each system to a unit-step input.
(c) Which pole dominates the time response of G3(s)?

6.
Find an approximate first-order model of the transfer function:

G(s) = 4 / (s² + 3s + 2)

by fitting a single-time-constant step response curve to its step response. Sketch and compare the two step responses.

Hint: A suitable criterion to use that is common in control theory is the integral of the square of the error, ISE, which is defined as:

I = ∫₀ᵀ e²(t) dt

The upper limit T is a finite time chosen somewhat arbitrarily so that the integral approaches a steady-state value.

7.
Automatically controlled machine-tools form an important aspect of control
system application. The major trend has been towards the use of automatic
numerically controlled machine tools using direct digital inputs. Many
CAD/CAM tools produce numeric output for the direct control of these tools,
eliminating the tedium of repetitive operations required of human operators,
and the possibility of human error. The figure below illustrates the block
diagram of an automatic numerically controlled machine-tool position control
system, using a computer to supply the reference signal.

R(s) → [Ka = 9, servo amplifier] → [1/(s(s + 1)), servo motor] → C(s), with position feedback closing the loop

(a) What is the undamped natural frequency ωₙ and damping ratio ζ?

(b) What is the percent overshoot and time to peak resulting from the application of a unit-step input?

(c) What is the steady-state error resulting from the application of a unit-step input?

(d) What is the steady-state error resulting from the application of a unit-ramp r(t) = t·u(t) input?

8.
Find the steady-state errors to (a) a unit-step, and (b) a unit-ramp input, for the following feedback system:

R(s) → [1/(s² + 1)] → C(s), with feedback H(s) = 3s

Note that the error is defined as the difference between the actual input r(t) and the actual output c(t).

9.
Given:

G(s) = 0.1 / (s + 0.1)

a) Find an expression for the step-response of this system. Sketch this response. What is the system time constant?

b) A unity feedback loop is to be connected around the system as shown:

R(s) → [K] → [G(s)] → C(s), with unity feedback

Sketch the time responses of the closed-loop system and find the system time constants when (i) K = 0.1, (ii) K = 1 and (iii) K = 10.

What effect does feedback have on the time response of a first-order system?

Lecture 6A Effects of Feedback
Transient response. Closed-loop control. Disturbance rejection. Sensitivity.

Overview
We apply feedback in control systems for a variety of reasons. The primary
purpose of feedback is to more accurately control the output - we wish to
reduce the difference between a reference input and the actual output. When
the input signal is a step, this is called set-point control.
Reduction of the system error is only one advantage of feedback. Feedback
also affects the transient response, stability, bandwidth, disturbance rejection
and sensitivity to system parameters.
Recall that the basic feedback system was described by the block diagram:
General feedback control system

R(s) → E(s) → [G(s)] → C(s), with feedback signal B(s) through [H(s)] from C(s) back to the summing junction

Figure 6A.1
The system is described by the following transfer function:

General feedback control system transfer function

C(s)/R(s) = G(s) / (1 + G(s)H(s))    (6A.1)

The only way we can improve system performance, whatever that may be, is by choosing a suitable H(s) or G(s). Some of the criteria for choosing G(s), with H(s) = 1, will be given in the following sections.
Transient Response
One of the most important characteristics of control systems is their transient response. We might desire a speedy response, or a response without an overshoot, which may be physically impossible or cause damage to the system being controlled (e.g. the maze rover hitting a wall!).

We can modify the response of a system by cascading the system with a transfer function which has been designed so that the overall transfer function achieves some design objective. This is termed open-loop control.
Open-loop control system

R(s) → [Gc(s), controller] → [Gp(s), plant] → C(s)

Figure 6A.2
A better way of modifying the response of a system is to apply feedback. This
is termed closed-loop control. By adjusting the loop feedback parameters, we
can control the transient response (within limits). A typical control system for
set-point control simply derives the error signal by comparing the output
directly with the input. Such a system is called a unity-feedback system.
Unity-feedback closed-loop control system

R(s) → E(s) → [Gc(s), controller] → [Gp(s), plant] → C(s), with unity feedback

Figure 6A.3

Example

We have already seen that the MR can be described by the following block diagram (remember it's just a differential equation!):

MR transfer function for velocity output

X(s) → [ (1/M) / (s + k_f/M) ] → V(s)

Figure 6A.4

In this example, the objective is to force v(t) to be equal to a desired constant speed, v₀. As we have seen, this implies a partial fraction term v₀/s in the expression for V(s). Since the MR's transfer function does not contain a K/s term, then the only way it can appear in the expression for V(s) is if X(s) = K/s.

Assuming the input is a step function, then we have:

V(s) = G(s)X(s) = [ (1/M) / (s + k_f/M) ] (K/s)
     = (K/k_f)/s − (K/k_f)/(s + k_f/M)    (6A.2)

Inverse transforming yields:

v(t) = (K/k_f)(1 − e^(−(k_f/M)t)),   t ≥ 0    (6A.3)

If K is set to v₀k_f then:

Step response of MR velocity

v(t) = v₀(1 − e^(−(k_f/M)t)),   t ≥ 0    (6A.4)

and since k_f/M > 0, then v(t) → v₀ as t → ∞. If our reference signal were r(t) = v₀u(t), then the system we have described is:


Simple open-loop MR controller

R(s) → [k_f, controller] → X(s) → [ (1/M) / (s + k_f/M), plant ] → V(s)

Figure 6A.5

Open-loop control has many disadvantages

This is referred to as open-loop control, since it depends only on the reference signal and not on the output. This type of control is deficient in several aspects. Note that the reference input has to be converted to the MR input through a gain stage equal to k_f, which must be known. Also, by examining Eq. (6A.4), we can see that we have no control over how fast the velocity converges to v₀.

Closed-Loop Control
To better control the output, we'll implement a closed-loop control system. For simplicity, we'll use a unity-feedback system, with our controller placed in the feed-forward path:
Closed-loop MR controller

R(s) → E(s) → [Gc(s), controller] → X(s) → [ (1/M) / (s + k_f/M), plant ] → V(s), with unity feedback

Figure 6A.6
Proportional Control (P Controller)

The simplest type of controller has transfer function Gc(s) = K_P. This is called proportional control, since the control signal x(t) is directly proportional to the error signal e(t).

Proportional controller (P controller)

R(s) → E(s) → [K_P, controller] → X(s) → [ (1/M) / (s + k_f/M), plant ] → V(s), with unity feedback

Figure 6A.7

With this type of control, the transform of the output is:

V(s) = [ Gc(s)G(s) / (1 + Gc(s)G(s)) ] R(s)
     = [ (K_P/M) / (s + k_f/M + K_P/M) ] (v_0/s)
     = [ K_P v_0/(k_f + K_P) ] / s - [ K_P v_0/(k_f + K_P) ] / [ s + (k_f + K_P)/M ]      (6A.5)

Inverse transforming yields the response:

v(t) = [ K_P v_0/(k_f + K_P) ] (1 - e^(-((k_f + K_P)/M)t)),   t >= 0              (6A.6)

MR velocity step-response using a proportional controller

Now the velocity converges to the value K_P v_0/(k_f + K_P). Since it is now
impossible for v(t) = v_0, the proportional controller will always result in a
steady-state tracking error equal to k_f v_0/(k_f + K_P). However, we are free to
make this error as small as desired by choosing a suitably large value for K_P.

Sometimes a closed-loop system exhibits a steady-state error

Also, from Eq. (6A.6), we can see that the rate at which v(t) converges to the
steady-state value can be made as fast as desired by again taking K_P to be
suitably large.
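The steady-state tracking error k_f v_0/(k_f + K_P) shrinking with K_P can be checked directly. The value of k_f below is an illustrative assumption, not from the notes.

```python
# Steady-state tracking error of the P controller: e_ss = kf*v0/(kf + KP).
# kf and v0 are illustrative values (assumptions).
kf, v0 = 0.5, 1.0

def steady_state_error(KP):
    return kf * v0 / (kf + KP)

for KP in (1.0, 10.0, 100.0):
    print(KP, steady_state_error(KP))   # error falls as the gain rises
```

The error never reaches zero for finite K_P, which motivates the integral controller of the next section.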
Integral Control (I Controller)

One of the deficiencies of the simple P controller in controlling the maze rover
was that it did not achieve zero steady-state error. This was because the
overall feedforward system was Type 0, instead of Type 1. We can easily make the
overall feedforward system Type 1 by changing the controller so that it has a
pole at the origin:
R(s) --> (+) --> E(s) --> [ K_I/s ] --> X(s) --> [ (1/M)/(s + k_f/M) ] --> V(s)
       ^ (-)            controller                      plant              |
       |___________________________________________________________________|

Figure 6A.8 - Integral controller (I controller)

We recognise that the controller transfer function is just an integrator, so this
form of control is called integral control. With this type of control, the
overall transfer function of the closed-loop system is:

T(s) = [ K_I/(sM) ] / [ s + k_f/M + K_I/(sM) ]
     = ( K_I/M ) / ( s^2 + (k_f/M)s + K_I/M )                                     (6A.7)

We can see straight away that the transfer function is 1 at DC (set s = 0 in the
transfer function). This means that the output will follow the input in the
steady-state (zero steady-state error). By comparing this second-order transfer
function with the standard form:

T(s) = ω_n^2 / ( s^2 + 2ζω_n s + ω_n^2 )                                          (6A.8)

we can see that the controller is only able to adjust the natural frequency, or
the distance of the poles from the origin, ω_n. This may be good enough, but we
would prefer to be able to control the damping ratio as well.
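This one-degree-of-freedom limitation can be seen numerically: matching Eq. (6A.7) with Eq. (6A.8) gives ω_n = sqrt(K_I/M) and 2ζω_n = k_f/M, so raising K_I raises ω_n but simultaneously lowers ζ. The plant values below are illustrative assumptions.

```python
import math

# I-controller closed loop: s^2 + (kf/M)s + KI/M.
# Matching the standard form gives wn = sqrt(KI/M), zeta = (kf/M)/(2*wn).
# kf and M are illustrative values (assumptions).
kf, M = 0.5, 2.0

def wn_zeta(KI):
    wn = math.sqrt(KI / M)
    zeta = (kf / M) / (2.0 * wn)
    return wn, zeta

# Increasing KI raises wn but *lowers* zeta: they cannot be set independently.
for KI in (0.1, 1.0, 10.0):
    print(KI, wn_zeta(KI))
```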
Proportional Plus Integral Control (PI Controller)

If we combine the two previous controllers we have what is known as a PI
controller.

R(s) --> (+) --> E(s) --> [ K_P + K_I/s ] --> X(s) --> [ (1/M)/(s + k_f/M) ] --> V(s)
       ^ (-)               controller                         plant              |
       |_________________________________________________________________________|

Figure 6A.9 - Proportional plus integral controller (PI controller)

The controller in this case causes the plant to respond to both the error and
the integral of the error. With this type of control, the overall transfer
function of the closed-loop system is:

T(s) = [ (K_P + K_I/s)(1/M) ] / [ s + k_f/M + (K_P + K_I/s)(1/M) ]
     = ( K_I/M + (K_P/M)s ) / ( s^2 + ((k_f + K_P)/M)s + K_I/M )                  (6A.9)

Again, we can see straight away that the transfer function is 1 at DC (set s = 0
in the transfer function), and again the output will follow the input in the
steady-state (zero steady-state error). We can now control the damping ratio ζ,
as well as the natural frequency ω_n, independently of each other. But we also
have a zero in the numerator. Intuitively we can conclude that the response will
be similar to the response with integral control, but will also contain a term
which is the derivative of this response (we see multiplication by s, the zero,
as a derivative). We can analyse the response by rewriting Eq. (6A.9) as:

T(s) = ω_n^2 / ( s^2 + 2ζω_n s + ω_n^2 )
     + (K_P/K_I) ω_n^2 s / ( s^2 + 2ζω_n s + ω_n^2 )                              (6A.10)

For a unit-step input, let the output response that is due to the first term on
the right-hand side of Eq. (6A.10) be c_I(t). Then the total unit-step response
is:

c(t) = c_I(t) + (K_P/K_I) dc_I(t)/dt                                              (6A.11)

The figure below shows that the addition of the zero at s = -K_I/K_P reduces the
rise time and increases the maximum overshoot, compared to the I controller step
response:

[plot of c_I(t) (rising to 1.00), (K_P/K_I) dc_I(t)/dt, and their sum c(t)]

Figure 6A.10 - PI controller response to a unit-step input

The response is easy to sketch by drawing the derivative of c_I(t) and adding a
scaled version of this to c_I(t). The derivative is sketched by noting that the
derivative of c_I(t) is zero when the tangent to c_I(t) is horizontal, and that
the slope of c_I(t) oscillates between positive and negative values between
these points. The total response can then be sketched in, ensuring that the
total response goes through the points where the c_I(t) slope is zero.
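The sketching rule above can be checked numerically: compute the standard underdamped second-order step response c_I(t), add the scaled derivative as in Eq. (6A.11), and compare peak values. The values of ω_n, ζ and K_P/K_I are illustrative assumptions.

```python
import math

# Eq. (6A.11): total PI step response c(t) = cI(t) + (KP/KI)*cI'(t), where
# cI(t) is the standard underdamped second-order unit-step response.
# wn and zeta are illustrative values (assumptions).
wn, zeta = 1.0, 0.3
wd = wn * math.sqrt(1.0 - zeta**2)

def cI(t):
    k = zeta / math.sqrt(1.0 - zeta**2)
    return 1.0 - math.exp(-zeta*wn*t) * (math.cos(wd*t) + k*math.sin(wd*t))

def dcI(t):
    # derivative of cI(t): (wn/sqrt(1-zeta^2)) * exp(-zeta*wn*t) * sin(wd*t)
    return (wn / math.sqrt(1.0 - zeta**2)) * math.exp(-zeta*wn*t) * math.sin(wd*t)

def c_total(t, KP_over_KI):
    return cI(t) + KP_over_KI * dcI(t)

peak_I  = max(cI(0.01*k) for k in range(3000))
peak_PI = max(c_total(0.01*k, 1.0) for k in range(3000))
print(peak_I, peak_PI)   # the zero increases the maximum overshoot
```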
Proportional, Integral, Derivative Control (PID Controller)

One of the best known controllers used in practice is the PID controller, where
the letters stand for proportional, integral, and derivative. The addition of a
derivative to the PI controller means that PID control contains anticipatory
control. That is, by knowing the slope of the error, the controller can
anticipate the direction of the error and use it to better control the process.
The PID controller transfer function is:

Gc(s) = K_P + K_D s + K_I/s                                                       (6A.12)

There are established procedures for designing control systems with PID
controllers, in both the time and frequency-domains.
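A minimal discrete-time sketch of Eq. (6A.12) is shown below, using a rectangular approximation for the integral and a backward difference for the derivative. The gains and plant parameters are illustrative assumptions, not a design from the notes.

```python
# Minimal discrete PID sketch of Gc(s) = KP + KD*s + KI/s.
# Integral approximated by a running sum, derivative by a backward difference.
def make_pid(KP, KI, KD, dt):
    state = {"i": 0.0, "e_prev": 0.0}
    def pid(e):
        state["i"] += e * dt                      # accumulated (integral of) error
        d = (e - state["e_prev"]) / dt            # slope of the error (anticipatory)
        state["e_prev"] = e
        return KP * e + KI * state["i"] + KD * d  # Eq. (6A.12)
    return pid

# Drive the first-order MR plant M*dv/dt + kf*v = x towards v0 = 1.
# All numeric values here are assumptions for illustration.
M, kf, dt = 2.0, 0.5, 1e-3
pid = make_pid(KP=4.0, KI=2.0, KD=0.1, dt=dt)
v = 0.0
for _ in range(int(30.0 / dt)):
    x = pid(1.0 - v)               # error e = r - v
    v += dt * (x - kf * v) / M
print(v)                           # settles near the setpoint 1.0
```

The integral term removes the steady-state error that a pure P controller would leave, as discussed in the previous sections.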

Disturbance Rejection

A major problem with open-loop control is that the output c(t) of the plant will
be perturbed by a disturbance input d(t). Since the control signal r(t) does not
depend on the plant output c(t) in open-loop control, the control signal cannot
compensate for the disturbance d(t). Closed-loop control can compensate to some
degree for disturbance inputs.

Feedback minimizes the effect of disturbance inputs
Example

Consider the MR in open-loop:

[Figure 6A.11 - Open-loop MR system with disturbance inputs: D1(s) is added to
R(s) before the controller Gc(s), D2(s) is added to X(s) before the plant
(1/M)/(s + k_f/M), and D3(s) is added to the plant output to give V(s)]

The disturbance inputs could be modelling electronic noise in amplifiers, or a
sudden increase in velocity due to an incline, or any other unwanted signal. In
open-loop control, a disturbance d_2(t) has just as much control as our
controller!

Closed-loop feedback reduces the response of the system to disturbance inputs:

[Figure 6A.12 - Closed-loop MR system with disturbance inputs: as for
Figure 6A.11, but with the output V(s) fed back and subtracted from R(s) + D1(s)
to form the error E(s)]

Using superposition, the output is given by:

V(s) = [ Gc(s)G(s) / (1 + Gc(s)G(s)) ] [ R(s) + D1(s) ]
     + [ G(s) / (1 + Gc(s)G(s)) ] D2(s)
     + [ 1 / (1 + Gc(s)G(s)) ] D3(s)                                              (6A.13)

Most disturbance inputs are minimized when using feedback

Thus, we cannot eliminate noise at the input, d_1(t). The system cannot
discriminate between d_1(t) and r(t). To minimise the other disturbances on the
output, we need to make the loop-gain 1 + Gc(s)G(s) large.
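The attenuation of a constant output disturbance d_3 follows directly from the last term of Eq. (6A.13) evaluated at DC. With a proportional controller Gc = K_P, the MR plant has G(0) = 1/k_f; the value of k_f is an illustrative assumption.

```python
# Eq. (6A.13) at DC: a constant disturbance d3 is scaled by 1/(1 + Gc*G).
# Gc = KP (proportional control) and G(0) = 1/kf; kf is an assumed value.
kf = 0.5

def d3_attenuation(KP):
    loop_gain = KP * (1.0 / kf)       # Gc(0) * G(0)
    return 1.0 / (1.0 + loop_gain)

for KP in (1.0, 10.0, 100.0):
    print(KP, d3_attenuation(KP))     # larger loop gain -> smaller effect
```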

Sensitivity

Sensitivity is a measure of how the characteristics of a system depend on the
variations of some component (or parameter) of the system. The effect of the
parameter change can be expressed quantitatively in terms of a sensitivity
function.

Sensitivity defines how one element of a system affects a characteristic of the system
System Sensitivity

This is a general formulation which applies to any type of system, open- or
closed-loop. Let the system transfer function be expressed as T(s, α), where α
is some parameter in the transfer function. Then:

S_α^T = (∂T/T) / (∂α/α) = (α/T)(∂T/∂α)                                            (6A.14)

is called the system sensitivity (with respect to α). It represents the
fractional change in the system transfer function due to a fractional change in
some parameter. If S_α^T is small, the effect on T of changes in α is small. For
small changes in α:

S_α^T ≈ (ΔT/T_0) / (Δα/α_0)                                                       (6A.15)

where T_0 and α_0 are the nominal or design values.

Example

How does T(s) depend on changes in G(s) for:

a) open loop:    R(s) --> [ G(s) ] --> C(s)

b) closed loop:  R(s) --> (+) --> E(s) --> [ G(s) ] --> C(s)
                       ^ (-)                             |
                       B(s) <----- [ H(s) ] <------------|

Figure 6A.13

Intuitively for open-loop, the output must depend directly on G(s), so
S_G^T = 1. Intuitively for closed-loop:

T(s) = G(s) / (1 + G(s)H(s)) ≈ 1/H(s)                                             (6A.16)

if G(s)H(s) >> 1. Therefore changes in G(s) don't matter, or the system is not
sensitive to changes in G(s).
Analytically, using system sensitivity for the open-loop case:

T(s) = G(s),   ∂T/∂G = 1                                                          (6A.17)

(here G corresponds to the parameter α). Therefore:

S_G^T = (G/T)(∂T/∂G) = 1                                                          (6A.18)

System sensitivity to G for open-loop systems

The transfer function is therefore directly sensitive to changes in G, as we
thought.

Analytically, for the closed-loop case:

T(s) = G(s) / (1 + G(s)H(s))
∂T/∂G = [ (1 + GH) - GH ] / (1 + GH)^2 = 1 / (1 + GH)^2                           (6A.19)

Then:

S_G^T = (G/T)(∂T/∂G) = [ G(1 + GH)/G ] [ 1/(1 + GH)^2 ] = 1/(1 + GH)              (6A.20)

System sensitivity to G for closed-loop systems

Therefore with feedback, the effect of a percentage change in G is reduced by
the factor 1/(1 + GH). If the input R is held constant, the effect on the output
C of a change in G is 1/(1 + GH) times what it would have been without feedback.
Show that S_H^T ≈ -1 for this example. This result means that stable feedback
components must be used in order to receive the full benefits of feedback.

Feedback elements must be accurate and stable
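The result S_G^T = 1/(1 + GH) of Eq. (6A.20) can be confirmed by a finite-difference estimate of the derivative, treating G and H as DC (scalar) values. The numbers below are illustrative assumptions.

```python
# Finite-difference check of Eq. (6A.20): S_G^T = 1/(1 + GH) for T = G/(1 + GH).
# G and H are illustrative scalar (DC) values (assumptions).
G, H = 100.0, 1.0

def T(g):
    return g / (1.0 + g * H)

dG = 1e-6 * G
S = (G / T(G)) * (T(G + dG) - T(G)) / dG     # (G/T) * dT/dG, Eq. (6A.14)
print(S, 1.0 / (1.0 + G * H))                # both approximately 1/101
```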

Summary

Various types of controllers, such as P, PI and PID, are used to compensate the
transfer function of the plant in unity-feedback systems.

Feedback systems can minimise the effect of disturbance inputs or system
parameter variations.

Sensitivity is a measure of the dependence of a system's characteristics with
respect to variations of a particular element (or parameter). Feedback can
reduce the sensitivity of forward-path elements, but input and feedback elements
must be highly stable because they have a much greater effect on the output.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB,
Prentice-Hall, 1997.
Exercises

1.
Show that the following block diagrams are equivalent:

R(s) --> (+) --> [ G(s) ] --> C(s)        R(s) --> [ G'(s) ] --> C(s)
       ^ (-)         |
       |- [ H(s) ] <-|

where G'(s) = G(s) / ( 1 + G(s)[H(s) - 1] )

2.
Assume that an operational amplifier has an infinite input impedance, zero
output impedance and a very large gain K. Show for the feedback configuration
shown that Vo/Vi = K/(K + 1) ≈ 1 if K is large.

[diagram: amplifier of gain K with input V1 and output V2, with the output fed
back to the inverting input]

3.
For the system shown:

R(s) --> [ K1 = 10 ] --> (+) --> [ G = 100/(s(s+1)) ] --> C(s)
                       ^ (-)              |
                       |- [ K2 = 10 ] <---|

a) Determine the sensitivity of the system's transfer function T with respect
   to the input transducer, K1.
b) Determine the sensitivity of the system's transfer function T with respect
   to the output transducer, K2.
c) Determine the sensitivity of the system's transfer function T with respect
   to the plant, G.
d) Indicate qualitatively the frequency dependency of S_G^T.

4.
It is important to ensure passenger comfort on ships by stabilizing the ship's
oscillations due to waves. Most ship stabilization systems use fins or
hydrofoils projecting into the water in order to generate a stabilization torque
on the ship. A simple diagram of a ship stabilization system is shown below:

[block diagram: R(s) --> (+) --> [ Ka, fin actuator ] --> Tf(s) --> (+) <-- Td(s)
--> [ G(s), ship roll ] --> θ(s), with a roll sensor K1 in the feedback path]

The rolling motion of a ship can be regarded as an oscillating pendulum with a
deviation from the vertical of θ degrees and a typical period of 3 seconds. The
transfer function of a typical ship is:

G(s) = ω_n^2 / ( s^2 + 2ζω_n s + ω_n^2 )

where ω_n = 2π/T = 2 rad/s, T = 3.14 s, and ζ = 0.1. With this low damping
factor ζ, the oscillations continue for several cycles and the rolling amplitude
can reach 18° for the expected amplitude of waves in a normal sea. Determine and
compare the open-loop and closed-loop system for:

a) sensitivity to changes in the actuator constant Ka and the roll sensor K1
b) the ability to reduce the effects of the disturbance of the waves.

Note: 1. The desired roll angle θ(t) is zero degrees.
      2. This regulating system is only effective for disturbances (waves) with
         frequencies ω_d << ω_n, the natural frequency of the ship. Can you
         show this?

Comment on these results with respect to the effect of feedback on the
sensitivity of the system to parameter variations.

5.
The system shown uses a unity feedback loop and a PI compensator to control the
plant.

[block diagram: R(s) --> (+) --> E(s) --> [ K_P + K_I/s, PI compensator ] -->
(+) <-- D(s) --> [ G(s), plant ] --> C(s), with unity feedback]

Find the steady-state error, e_∞, for the following conditions [note that the
error is always the difference between the reference input r(t) and the plant
output c(t)]:

(a) G(s) = K1/(1 + sT1),      K_I = 0
(b) G(s) = K1/(1 + sT1),      K_I ≠ 0
(c) G(s) = K1/(s(1 + sT1)),   K_I = 0
(d) G(s) = K1/(s(1 + sT1)),   K_I ≠ 0

when:
(i)   d(t) = 0, r(t) = unit-step
(ii)  d(t) = 0, r(t) = unit-ramp
(iii) d(t) = unit-step, r(t) = 0
(iv)  d(t) = unit-step, r(t) = unit-step

How does the addition of the integral term in the compensator affect the
steady-state errors of the controlled system?
Lecture 7A The z-Transform
The z-transform. Mapping between s-domain and z-domain. Finding z-transforms.
Standard z-transforms. z-transform properties. Evaluation of inverse
z-transforms. Transforms of difference equations. The system transfer function.

Overview
Digital control of continuous-time systems has become common thanks to the
ever-increasing performance/price ratio of digital signal processors
(microcontrollers, DSPs, gate arrays etc). Once we convert an analog signal to a
sequence of numbers, we are free to do anything we like to them. Complicated
control structures can be implemented easily in a computer, and even things
which are impossible using analog components (such as median filtering). We can
even create systems which learn or adapt to changing systems, something rather
difficult to do with analog circuitry.

The side-effect of all these wonderful benefits is the fact that we have to
learn a new (yet analogous) body of theory to handle what are essentially
discrete-time systems. It will be seen that many of the techniques we use in the
continuous-time domain can be applied to the discrete-time case, so in some
cases we will design a system using continuous-time techniques and then simply
discretize it.

To handle signals in the discrete-time domain, we'll need something akin to the
Laplace transform in the continuous-time domain. That thing is the z-transform.

The z-Transform
We'll start with a discrete-time signal which is obtained by ideally and
uniformly sampling a continuous-time signal:

x_s(t) = Σ_n x(t) δ(t - nT_s)                                                     (7A.1)

We'll treat a discrete-time signal as the weights of an ideally sampled continuous-time signal

Now since the function x_s(t) is zero everywhere except at the sampling
instants, we replace x(t) with its value at the sample instant:

x_s(t) = Σ_n x(nT_s) δ(t - nT_s)                                                  (7A.2)

Also, if we establish the time reference so that x(t) = 0, t < 0, then we can
say:

x_s(t) = Σ_{n=0}^∞ x(nT_s) δ(t - nT_s)                                            (7A.3)

This is the discrete-time signal in the time-domain. The value of each sample is
represented by the area of an impulse, as shown below:

[Figure 7A.1 - A discrete-time signal made by ideally sampling a continuous-time
signal: x(t) and the resulting impulse train x_s(t), with sample spacing T_s]

What if we were to analyse this signal in the frequency domain? Taking the
Laplace transform yields:

X_s(s) = ∫_0^∞ [ Σ_{n=0}^∞ x(nT_s) δ(t - nT_s) ] e^(-st) dt                       (7A.4)

Since summation and integration are linear, we'll change the order to give:

X_s(s) = Σ_{n=0}^∞ x(nT_s) ∫_0^∞ δ(t - nT_s) e^(-st) dt                           (7A.5)

Using the sifting property of the delta function, this integrates to:

X_s(s) = Σ_{n=0}^∞ x(nT_s) e^(-snT_s)                                             (7A.6)

Therefore, we can see that the Laplace transform of a sampled signal, x_s(t),
involves a summation of a series of functions e^(-snT_s). This is a problem,
because one of the advantages of transforming to the s-domain was to be able to
work with algebraic polynomial equations. Fortunately, a very simple idea
transforms the Laplace transform of a sampled signal into one which is an
algebraic polynomial equation. The idea is to define a complex variable z as:

z = e^(sT_s)                                                                      (7A.7)

Definition of z

Notice that this is a non-linear transformation. This definition gives us a new
transform, called the z-transform, with independent variable z:

X(z) = Σ_{n=0}^∞ x(nT_s) z^(-n)                                                   (7A.8)

Since x(nT_s) is just a sequence of sample values, x[n], we normally write the
z-transform as:

X(z) = Σ_{n=0}^∞ x[n] z^(-n)                                                      (7A.9)

Definition of z-transform

Thus, if we know the sample values x[n] of a signal, it is a relatively easy
step to write down the z-transform of the sampled signal:

X(z) = x[0] + x[1] z^(-1) + x[2] z^(-2) + ...                                     (7A.10)

Note that Eq. (7A.9) is the one-sided z-transform. We use this for the same
reasons that we used the one-sided Laplace transform.
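Eq. (7A.9) can be evaluated directly from a list of sample values, truncating the infinite sum. As a sanity check, the sketch below compares the truncated sum for x[n] = (0.5)^n against the closed form z/(z - 0.5) that is derived later in this lecture; the test value z = 2 is an arbitrary assumption inside the region of convergence.

```python
# Direct numerical evaluation of the one-sided z-transform, Eq. (7A.9):
# X(z) = sum_n x[n] * z^(-n), truncated after the available samples.
def z_transform(x, z):
    return sum(x[n] * z**(-n) for n in range(len(x)))

# Example: x[n] = (0.5)^n, whose closed-form transform is z/(z - 0.5).
x = [0.5**n for n in range(200)]
z = 2.0
print(z_transform(x, z), z / (z - 0.5))   # both 4/3
```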

Mapping Between s-Domain and z-Domain

The mapping from the continuous-time s-plane into the discrete-time z-plane
defined by z = e^(sT_s) leads to some interesting observations. Since
s = σ + jω, we have:

z = e^(σT_s) e^(jωT_s)                                                            (7A.11)

so that the magnitude and phase of z are, respectively:

|z| = e^(σT_s)                                                                    (7A.12a)
∠z = ωT_s                                                                         (7A.12b)

Therefore, we can draw the mapping between the s-domain and the z-domain as
follows:

[Figure 7A.2 - The mapping z = e^(sT_s) between the s-plane and the z-plane: the
point σ + jω maps to a point at radius e^(σT_s) and angle ωT_s, and the jω-axis
maps to the unit circle]

The mapping of the s-domain to the z-domain depends on the sample interval,
T_s. Therefore, the choice of sampling interval is crucial when designing a
digital system.
Mapping the s-Plane Imaginary Axis
The jω-axis in the s-plane is where σ = 0. In the z-domain, this corresponds to
a magnitude of |z| = e^(σT_s) = e^0 = 1. Therefore, the frequency ω in the
s-domain maps linearly onto the unit-circle in the z-domain with a phase angle
∠z = ωT_s. In other words, distance along the jω-axis in the s-domain maps
linearly onto angular displacement around the unit-circle in the z-domain.

[Figure 7A.3 - Uniform linear spacing of points along the jω-axis in the s-plane
maps to uniform angular spacing of points around the unit-circle in the z-plane]

The mapping between the jω-axis in the s-plane and the unit-circle in the z-plane

Aliasing
With a sample period T_s, the angular sample rate is given by:

ω_s = 2πf_s = 2π/T_s                                                              (7A.13)

When the frequency is equal to the foldover frequency (half the sample rate),
s = jω_s/2 and:

z = e^(sT_s) = e^(j(ω_s/2)T_s) = e^(jπ) = -1                                      (7A.14)

Thus, as we increase the frequency from 0 to half the sampling frequency along
the jω-axis in the s-plane, there is a mapping to the z-domain in an
anticlockwise direction around the unit-circle from z = 1∠0 to z = 1∠π. In a
similar manner, if we decrease the frequency along the jω-axis from 0 to
s = -jω_s/2, the mapping to the z-plane is in a clockwise direction from
z = 1∠0 to z = 1∠-π. Thus, between the foldover frequencies s = ±jω_s/2, there
is a unique one-to-one mapping from the s-plane to the z-plane.

[Figure 7A.4 - The mapping of the jω-axis in the s-plane up to the foldover
frequency ω_s/2, and the unit-circle in the z-plane, is unique]
However, if the frequency is increased beyond the foldover frequency, the
mapping just continues to go around the unit-circle. This means that aliasing
occurs: higher frequencies in the s-domain are mapped to lower frequencies in
the z-domain.

[Figure 7A.5 - Frequencies spaced ω_s apart along the jω-axis of the s-plane all
map to the same point on the unit-circle in the z-plane]

Thus, absolute frequencies greater than the foldover frequency in the s-plane
are mapped on to the same point as frequencies less than the foldover frequency
in the z-plane. That is, they assume the alias of a lower frequency. The
energies of frequencies higher than the foldover frequency add to the energy of
frequencies less than the foldover frequency, and this is referred to as
frequency folding.

The mapping of the jω-axis higher than the foldover frequency in the s-plane,
and the unit-circle in the z-plane, causes aliasing
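The aliasing mechanism is easy to verify numerically: two s-plane frequencies that differ by the sample rate ω_s map to the same z, because ω_s T_s = 2π. The sample period T_s and the test frequency below are arbitrary assumptions.

```python
import cmath, math

# Aliasing sketch: the s-plane points j*w and j*(w + ws) map to the same
# z = e^(s*Ts), because ws*Ts = 2*pi. Ts is an illustrative sample period.
Ts = 0.1
ws = 2.0 * math.pi / Ts
w = 3.0                                    # some frequency below foldover ws/2

z1 = cmath.exp(1j * w * Ts)
z2 = cmath.exp(1j * (w + ws) * Ts)
print(z1, z2, abs(z1 - z2))                # the same point on the unit circle
```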

Finding z-Transforms
We will use the same strategy for finding z-transforms of a signal as we did for
the other transforms: start with a known standard transform and successively
apply transform properties. We first need a few standard transforms.

Example
To find the z-transform of a signal x[n] = δ[n], we substitute into the
z-transform definition:

X(z) = Σ_{n=0}^∞ δ[n] z^(-n) = δ[0] + δ[1] z^(-1) + δ[2] z^(-2) + ...             (7A.15)

Since δ[0] = 1, and δ[n] = 0 for n ≠ 0, we get a standard transform pair:

δ[n] ↔ 1                                                                          (7A.16)

The z-transform of a unit-pulse

Thus, the z-transform of a unit-pulse is 1. Note that there is no such thing as
an impulse for discrete-time systems; this transform pair is therefore similar
to, but not identical to, the transform of an impulse for the Fourier and
Laplace transforms.
Example
To find the z-transform of a signal x[n] = a^n u[n], we substitute into the
definition of the z-transform:

X(z) = Σ_{n=0}^∞ a^n u[n] z^(-n)                                                  (7A.17)

Since u[n] = 1 for all n >= 0:

X(z) = Σ_{n=0}^∞ (a/z)^n = 1 + (a/z) + (a/z)^2 + (a/z)^3 + ...                    (7A.18)

To convert this geometric progression into closed-form, let the sum of the first
k terms of a general geometric progression be written as S_k:

S_k = Σ_{n=0}^{k-1} x^n = 1 + x + x^2 + ... + x^(k-1)                             (7A.19)

Then, multiplying both sides by x gives:

x S_k = x + x^2 + x^3 + ... + x^k                                                 (7A.20)

Subtracting Eq. (7A.20) from Eq. (7A.19) gives:

S_k (1 - x) = ( 1 + x + x^2 + ... + x^(k-1) ) - ( x + x^2 + x^3 + ... + x^k )     (7A.21)

This is a telescoping sum, where we can see that on the right-hand side only the
first and last terms remain after performing the subtraction. Dividing both
sides by 1 - x then results in:

S_k = Σ_{n=0}^{k-1} x^n = (1 - x^k)/(1 - x)                                       (7A.22)

If k -> ∞ then the series will only converge if |x| < 1, because then x^k -> 0.
For this special case, we have:

Σ_{n=0}^∞ x^n = 1 + x + x^2 + ... = 1/(1 - x),   |x| < 1                          (7A.23)

Closed-form expression for an infinite geometric progression

Using this result to express Eq. (7A.18) results in:

X(z) = 1/(1 - a/z),   |a/z| < 1                                                   (7A.24)

This can be rewritten as:

X(z) = z/(z - a),   |z| > |a|                                                     (7A.25)

The z-transform of a geometric progression

The ROC of X(z) is |z| > |a|, as shown in the shaded area below:

[Figure 7A.6 - The signal a^n u[n] and the region of convergence |z| > |a| of
its z-transform in the z-plane]

As was the case with the Laplace transform, if we restrict the z-transform to
causal signals, then we do not need to worry about the ROC.

Example
To find the z-transform of the unit-step, just substitute a = 1 into
Eq. (7A.25). The result is:

u[n] ↔ z/(z - 1)                                                                  (7A.26)

The z-transform of a unit-step

This is a frequently used transform in the study of control systems.

Example
To find the z-transform of cos(Ωn) u[n], we recognise that:

cos(Ωn) = [ e^(jΩn) + e^(-jΩn) ] / 2                                              (7A.27)

According to Eq. (7A.25), it follows that:

e^(jΩn) u[n] ↔ z/(z - e^(jΩ))                                                     (7A.28)

Therefore:

cos(Ωn) u[n] ↔ (1/2) [ z/(z - e^(jΩ)) + z/(z - e^(-jΩ)) ]                         (7A.29)

and so we have another standard transform:

cos(Ωn) u[n] ↔ z(z - cos Ω) / ( z^2 - 2z cos Ω + 1 )                              (7A.30)

A similar derivation can be used to find the z-transform of sin(Ωn) u[n].

Most of the z-transform properties are inherited from Laplace transform
properties.
Right Shift (Delay) Property

One of the most important properties of the z-transform is the right shift
property. It enables us to directly transform a difference equation into an
algebraic equation in the complex variable z. The z-transform of a function
shifted to the right by one unit is given by:

Z{ x[n-1] } = Σ_{n=0}^∞ x[n-1] z^(-n)                                             (7A.31)

Letting r = n - 1 yields:

Z{ x[n-1] } = Σ_{r=-1}^∞ x[r] z^(-(r+1))
            = x[-1] + z^(-1) Σ_{r=0}^∞ x[r] z^(-r)
            = z^(-1) X(z) + x[-1]                                                 (7A.32)

Thus:

x[n-1] ↔ z^(-1) X(z) + x[-1]                                                      (7A.33)

The z-transform right shift property

Standard z-Transforms

δ[n] ↔ 1                                                                          (Z.1)

u[n] ↔ z/(z - 1)                                                                  (Z.2)

a^n u[n] ↔ z/(z - a)                                                              (Z.3)

a^n cos(Ωn) u[n] ↔ ( z^2 - a cos(Ω) z ) / ( z^2 - 2a cos(Ω) z + a^2 )             (Z.4)

a^n sin(Ωn) u[n] ↔ a sin(Ω) z / ( z^2 - 2a cos(Ω) z + a^2 )                       (Z.5)

z-Transform Properties
Assuming x[n] ↔ X(z):

Linearity:             a x[n] ↔ a X(z)                                            (Z.6)

Multiplication by a^n: a^n x[n] ↔ X(z/a)                                          (Z.7)

Right shifting:        x[n-q] ↔ z^(-q) X(z) + Σ_{k=0}^{q-1} x[k-q] z^(-k)         (Z.8)

                       (no corresponding transform)                               (Z.9)

Multiplication by n:   n x[n] ↔ -z dX(z)/dz                                       (Z.10)

Left shifting:         x[n+q] ↔ z^q X(z) - Σ_{k=0}^{q-1} x[k] z^(q-k)             (Z.11)

Summation:             Σ_{k=0}^n x[k] ↔ [ z/(z - 1) ] X(z)                        (Z.12)

Convolution:           x1[n] * x2[n] ↔ X1(z) X2(z)                                (Z.13)

Initial-value theorem: x[0] = lim_{z->∞} X(z)                                     (Z.14)

Final-value theorem:   lim_{n->∞} x[n] = lim_{z->1} (z - 1) X(z)                  (Z.15)
Evaluation of Inverse z-Transforms
From complex variable theory, the definition of the inverse z-transform is:

x[n] = (1/2πj) ∮_C X(z) z^(n-1) dz                                                (7A.34)

Inverse z-transform defined, but too hard to apply!

You won't need to evaluate this integral to determine the inverse z-transform,
just like we hardly ever use the definition of the inverse Laplace transform.
We manipulate X(z) into a form where we can simply identify sums of standard
transforms that may have had a few properties applied to them. Given:

F(z) = ( b_0 z^n + b_1 z^(n-1) + ... + b_n ) / [ (z - p_1)(z - p_2)...(z - p_n) ] (7A.35)

we want to put F(z) in the form:

F(z) = k_0 + k_1 z/(z - p_1) + k_2 z/(z - p_2) + ... + k_n z/(z - p_n)            (7A.36)

The approach we will use is to firstly expand F(z)/z into partial fractions,
then multiply through by z.
Example

F(z) = ( z^3 + 2z^2 + z + 1 ) / [ (z - 2)^2 (z + 3) ]

Therefore:

F(z)/z = ( z^3 + 2z^2 + z + 1 ) / [ z (z - 2)^2 (z + 3) ]
       = k_0/z + k_1/(z - 2) + k_2/(z - 2)^2 + k_3/(z + 3)

Expand functions of z into partial fractions, then find the inverse z-transform

Now we evaluate the residues:

k_0 = z [F(z)/z] |_{z=0} = ( z^3 + 2z^2 + z + 1 ) / [ (z - 2)^2 (z + 3) ] |_{z=0} = 1/12

k_1 = d/dz [ (z - 2)^2 F(z)/z ] |_{z=2}
    = d/dz [ ( z^3 + 2z^2 + z + 1 ) / ( z(z + 3) ) ] |_{z=2}
    = [ z(z + 3)(3z^2 + 4z + 1) - (z^3 + 2z^2 + z + 1)(2z + 3) ] / [ z(z + 3) ]^2 |_{z=2}
    = 77/100

k_2 = (z - 2)^2 F(z)/z |_{z=2} = ( z^3 + 2z^2 + z + 1 ) / ( z(z + 3) ) |_{z=2} = 19/10

k_3 = (z + 3) F(z)/z |_{z=-3} = ( z^3 + 2z^2 + z + 1 ) / ( z(z - 2)^2 ) |_{z=-3} = 11/75

Therefore:

F(z) = 1/12 + (77/100) z/(z - 2) + (19/10) z/(z - 2)^2 + (11/75) z/(z + 3)

From our standard transforms:

f[n] = (1/12) δ[n] + (77/100) 2^n + (19/10)(n/2) 2^n + (11/75)(-3)^n,   n >= 0

Note: For the k_2 z/(z - p_2)^2 term, use n a^(n-1) ↔ z/(z - a)^2 (standard
transform Z.3 and property Z.10). Then with linearity (property Z.6) we have
(n/a) a^n ↔ z/(z - a)^2.
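The partial fraction result above can be cross-checked by long division: expanding F(z) as a power series in z^(-1) gives the sequence f[n] directly, and the coefficients should match the closed form exactly. The sketch below does this with exact rational arithmetic.

```python
from fractions import Fraction as F

# Check of the worked example: expand F(z) = (z^3+2z^2+z+1)/((z-2)^2 (z+3))
# as a power series in z^(-1) by synthetic long division, and compare with the
# closed-form f[n] obtained by partial fractions.
num = [F(1), F(2), F(1), F(1)]        # z^3 + 2z^2 + z + 1
den = [F(1), F(-1), F(-8), F(12)]     # (z-2)^2 (z+3) = z^3 - z^2 - 8z + 12

def series(num, den, N):
    f = []
    for n in range(N):
        c = num[n] if n < len(num) else F(0)
        c -= sum(den[k] * f[n - k] for k in range(1, len(den)) if n - k >= 0)
        f.append(c)                   # f[n] = coefficient of z^(-n) in F(z)
    return f

def closed_form(n):
    delta = F(1, 12) if n == 0 else F(0)
    return delta + F(77, 100)*2**n + F(19, 20)*n*2**n + F(11, 75)*(-3)**n

coeffs = series(num, den, 10)
print(coeffs[:4], [closed_form(n) for n in range(4)])   # identical sequences
```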

Transforms of Difference Equations
The right shift property of the z-transform sets the stage for solving linear
difference equations with constant coefficients. Because y[n-k] ↔ z^(-k) Y(z),
the z-transform of a difference equation is an algebraic equation that can be
readily solved for Y(z). Next we take the inverse z-transform of Y(z) to find
the desired solution y[n].

[Figure 7A.7 - solving a difference equation: solving directly in the
time-domain is difficult; instead, take the z-transform (ZT), solve the
resulting algebraic equation in the z-domain (easy), and inverse z-transform
(IZT) back to the time-domain solution]
Example

Solve the second-order linear difference equation:

y[n] - 5y[n-1] + 6y[n-2] = 3x[n-1] + 5x[n-2]                                      (7A.37)

if the initial conditions are y[-1] = 11/6, y[-2] = 37/36 and the input is
x[n] = (1/2)^n u[n].

Now:

y[n] u[n] ↔ Y(z)
y[n-1] u[n] ↔ z^(-1) Y(z) + y[-1] = z^(-1) Y(z) + 11/6
y[n-2] u[n] ↔ z^(-2) Y(z) + z^(-1) y[-1] + y[-2]
            = z^(-2) Y(z) + (11/6) z^(-1) + 37/36                                 (7A.38)

For the input, x[-1] = x[-2] = 0. Then:

x[n] = (1/2)^n u[n] ↔ X(z) = z/(z - 0.5)
x[n-1] u[n] ↔ z^(-1) X(z) + x[-1] = 1/(z - 0.5)
x[n-2] u[n] ↔ z^(-2) X(z) + z^(-1) x[-1] + x[-2] = 1/[ z(z - 0.5) ]               (7A.39)

Taking the z-transform of Eq. (7A.37) and substituting the foregoing results,
we obtain:

Y(z) - 5[ z^(-1) Y(z) + 11/6 ] + 6[ z^(-2) Y(z) + (11/6) z^(-1) + 37/36 ]
    = 3/(z - 0.5) + 5/[ z(z - 0.5) ]                                              (7A.40)

or:

( 1 - 5z^(-1) + 6z^(-2) ) Y(z) = 3 - 11z^(-1) + (3z + 5)/[ z(z - 0.5) ]           (7A.41)

from which we obtain:

Y(z) = z(3z - 11)/( z^2 - 5z + 6 ) + z(3z + 5)/[ (z - 0.5)( z^2 - 5z + 6 ) ]      (7A.42)
       [zero-input component]        [zero-state component]

and:

Y(z)/z = (3z - 11)/[ (z - 2)(z - 3) ] + (3z + 5)/[ (z - 0.5)(z - 2)(z - 3) ]
       = [ 5/(z - 2) - 2/(z - 3) ]
       + [ (26/15)/(z - 0.5) - (22/3)/(z - 2) + (28/5)/(z - 3) ]                  (7A.43)

Therefore:

Y(z) = [ 5 z/(z - 2) - 2 z/(z - 3) ]
     + [ (26/15) z/(z - 0.5) - (22/3) z/(z - 2) + (28/5) z/(z - 3) ]              (7A.44)
       [zero-input component]  [zero-state component]

and:

y[n] = [ 5(2)^n - 2(3)^n ] + [ (26/15)(0.5)^n - (22/3)(2)^n + (28/5)(3)^n ]
       [zero-input response]  [zero-state response]
     = (26/15)(0.5)^n - (7/3)(2)^n + (18/5)(3)^n,   n >= 0                        (7A.45)

As can be seen, the z-transform method gives the total response, which includes
zero-input and zero-state components. The initial condition terms give rise to
the zero-input response. The zero-state response terms are exclusively due to
the input.
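The total response of Eq. (7A.45) can be verified by simply iterating the difference equation from the given initial conditions, using exact rational arithmetic so the comparison is exact.

```python
from fractions import Fraction as F

# Recursive simulation of y[n] - 5y[n-1] + 6y[n-2] = 3x[n-1] + 5x[n-2], with
# y[-1] = 11/6, y[-2] = 37/36 and x[n] = (1/2)^n u[n], compared against the
# total response of Eq. (7A.45).
y = {-2: F(37, 36), -1: F(11, 6)}
x = lambda n: F(1, 2)**n if n >= 0 else F(0)

for n in range(8):
    y[n] = 5*y[n-1] - 6*y[n-2] + 3*x(n-1) + 5*x(n-2)

closed = lambda n: F(26, 15)*F(1, 2)**n - F(7, 3)*2**n + F(18, 5)*3**n
print([y[n] for n in range(4)], [closed(n) for n in range(4)])   # identical
```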

The System Transfer Function

Consider the simple first-order discrete-time system described by the
difference equation:

y[n] = a y[n-1] + b x[n]                                                          (7A.46)

First-order difference equation and corresponding z-transform

Taking the z-transform of both sides and using the right-shift property gives:

Y(z) = a [ z^(-1) Y(z) + y[-1] ] + b X(z)                                         (7A.47)

Solving for Y(z) gives:

Y(z) = a y[-1]/( 1 - a z^(-1) ) + [ b/( 1 - a z^(-1) ) ] X(z)                     (7A.48)

which can be written:

Y(z) = a y[-1] z/(z - a) + [ b z/(z - a) ] X(z)                                   (7A.49)

The first part of the response results from the initial conditions; the second
part results from the input.
If the system has no initial energy (zero initial conditions) then:

Y(z) = [ b z/(z - a) ] X(z)                                                       (7A.50)

We now define the transfer function for this system as:

H(z) = b z/(z - a)                                                                (7A.51)

so that:

Y(z) = H(z) X(z)                                                                  (7A.52)

Discrete-time transfer function defined

This is the transfer function representation of the system. To determine the
output y[n] we simply evaluate Eq. (7A.52) and take the inverse z-transform.
For a general Nth-order system described by the difference equation:

y[n] + Σ_{i=1}^N a_i y[n-i] = Σ_{i=0}^M b_i x[n-i]                                (7A.53)

and if the system has zero initial conditions, then taking the z-transform of
both sides results in:

Y(z) = [ ( b_0 z^N + b_1 z^(N-1) + ... + b_M z^(N-M) )
       / ( z^N + a_1 z^(N-1) + ... + a_(N-1) z + a_N ) ] X(z)                     (7A.54)

so that the transfer function is:

H(z) = ( b_0 z^N + b_1 z^(N-1) + ... + b_M z^(N-M) )
     / ( z^N + a_1 z^(N-1) + ... + a_(N-1) z + a_N )                              (7A.55)

Transfer function derived directly from difference equation

We can show that the convolution relationship of a linear discrete-time system:

y[n] = h[n] * x[n] = Σ_{i=0}^∞ h[i] x[n-i],   n >= 0                              (7A.56)

when transformed gives Eq. (7A.52). We therefore have:

h[n] ↔ H(z)                                                                       (7A.57)

The unit-pulse response and transfer function form a z-transform pair

That is, the unit-pulse response and the transfer function form a z-transform
pair.
Stability

The left-half s-plane, where σ < 0, corresponds to |z| = e^(σT_s) < 1, which is
inside the unit-circle. The right-half s-plane maps outside the unit-circle.
Recall that functions of the Laplace variable s having poles with negative real
parts decay to zero as t -> ∞. In a similar manner, transfer functions of z
having poles with magnitudes less than one decay to zero in the time-domain as
n -> ∞. Therefore, for a stable system, we must have:

|p_i| < 1                                                                         (7A.58a)

where p_i are the poles of H(z). This is equivalent to saying:

A system is stable if all the poles of the transfer function
lie inside the unit-circle                                                        (7A.58b)

Stability defined for a discrete-time system
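For the first-order system H(z) = bz/(z - a) the pole is at z = a, so the unit-pulse response h[n] = b a^n decays only when |a| < 1. The sketch below simulates the difference equation for a pole inside and a pole outside the unit-circle; the numeric pole values are illustrative assumptions.

```python
# Stability check for H(z) = b*z/(z - a): the unit-pulse response is
# h[n] = b*a^n, which decays iff the pole satisfies |a| < 1.
def unit_pulse_response(a, b, N):
    y, out = 0.0, []
    for n in range(N):
        x = 1.0 if n == 0 else 0.0       # unit pulse delta[n]
        y = a * y + b * x                # y[n] = a*y[n-1] + b*x[n]
        out.append(y)
    return out

h_stable   = unit_pulse_response(0.9, 1.0, 200)   # pole inside unit circle
h_unstable = unit_pulse_response(1.1, 1.0, 200)   # pole outside unit circle
print(abs(h_stable[-1]), abs(h_unstable[-1]))     # decays vs. blows up
```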

Transfer Function Interconnections

The transfer function of an LTI discrete-time system can be computed from a
block diagram of the system, just like for continuous-time systems. Recall that
an LTI discrete-time system is composed of elements such as adders, gains
(which multiply the input by a constant), and the unit-delay element, which is
shown below:

x[n] --> [ delay ] --> y[n] = x[n-1]

Figure 7A.8 - A discrete-time unit-delay element

By taking the z-transform of the input and output, we can see that we should
represent a delay in the z-domain by:

X(z) --> [ z^(-1) ] --> Y(z) = z^(-1) X(z)

Figure 7A.9 - Delay element in block diagram form in the z-domain
Example

The discrete-time approximation to continuous-time integration is given by the
difference equation:

y[n] = y[n-1] + T x[n-1]                                                          (7A.59)

The block diagram of the system is:

[Figure 7A.10 - Time-domain block diagram of a numeric integrator: x[n] is
scaled by T and delayed to give T x[n-1], which is added to the fed-back y[n-1]
to form y[n]]

The system in the z-domain is:

[Figure 7A.11 - z-domain block diagram of a numeric integrator: X(z) is scaled
by T and passed through z^(-1) to give z^(-1) T X(z), which is added to
z^(-1) Y(z) to form Y(z)]

Through the standard block diagram reduction techniques, we get:

X(z) --> [ T z^(-1) / (1 - z^(-1)) ] --> Y(z)

Figure 7A.12 - Transfer function of a numeric integrator

You should confirm that this transfer function obtained using block diagram
reduction methods is the same as that found by taking the z-transform of
Eq. (7A.59).
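Eq. (7A.59) is just a running (left-rectangle) sum, and can be tried directly on samples of x(t) = cos(t), whose integral is sin(t). The sample period T below is an arbitrary assumption.

```python
import math

# Numeric integrator y[n] = y[n-1] + T*x[n-1] applied to samples of cos(t);
# it should track the running integral sin(t). T is the sample period.
T = 0.001
y = 0.0
for n in range(1, 10001):
    y = y + T * math.cos((n - 1) * T)   # accumulate T*x[n-1]
print(y, math.sin(10.0))                # approximate integral of cos over [0, 10]
```

The approximation error is proportional to T, which is why the choice of sampling interval matters in digital designs.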

Summary

The z-transform is the discrete-time counterpart of the Laplace transform.


There is a mapping from the s-domain to the z-domain given by z e sTs .
The mapping is not unique, and for frequencies above the foldover
frequency, aliasing occurs.

We evaluate inverse z-transforms using partial fractions, standard


transforms and the z-transform properties.

Systems described by difference equations have rational z-transforms. The z-transforms of the input signal and output signal are related by the transfer function of the system: Y(z) = H(z)X(z). There is a one-to-one correspondence between the coefficients in the difference equation and the coefficients in the transfer function.

We can use the z-transform to express discrete-time systems in transfer function (block diagram) form.

The unit-pulse response and the transfer function form a z-transform pair: h[n] ↔ H(z).

The transfer function of a system can be obtained by performing analysis in the z-domain.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.

Exercises
1.
Construct z-domain block diagrams for the following difference equations:

(i) y[n] = y[n-2] + x[n] + x[n-1]

(ii) y[n] = 2y[n-1] + y[n-2] + 3x[n-4]

2.
(i) Construct a difference equation from the following block diagram:

[Block diagram: X(z) and Y(z) connected through a gain of 3, two z^-1 delays, a gain of -2, and a z^-2 delay.]

(ii) From your solution, calculate y[n] for n = 0, 1, 2 and 3, given y[-2] = 2, y[-1] = 1, x[n] = 0 for n < 0 and x[n] = 1 for n = 0, 1, 2, …

3.
Using z-transforms:

(a) Find the unit-pulse response for the system given by:

y[n] = x[n] + (1/3) y[n-1]

(b) Find the response of this system to the input:

x[n] = 0 for n = -1, -2, -3, …;  x[n] = 2 for n = 0, 1;  x[n] = 1 for n = 2, 3, 4, …

Hint: x[n] can be written as the sum of a unit step and two unit-pulses, or as the subtraction of two unit-steps.
4.
Determine the weighting sequence for the system shown below in terms of the individual weighting sequences h1[n] and h2[n].

x [n ]

h1[n]

y [n ]

h2[n]

5.
For the feedback configuration shown below, determine the first three terms of the weighting sequence of the overall system by applying a unit-pulse input and calculating the resultant response. Express your results as a function of the weighting sequence elements h1[n] and h2[n].

x [n ]

h1[n]

y [n ]

h2[n]

6.
Using the definition of the z-transform:

(a) Find Y(z) when:

(i) y[n] = 0 for n < 0;  y[n] = 1/2 for n = 0, 1, 2, …

(ii) y[n] = 0 for n ≤ 0;  y[n] = a^(n-1) for n = 1, 2, 3, …

(iii) y[n] = 0 for n ≤ 0;  y[n] = n a^(n-1) for n = 1, 2, 3, …

(iv) y[n] = 0 for n ≤ 0;  y[n] = n² a^(n-1) for n = 1, 2, 3, …

(b) Determine the z-transform of the sequence:

x[n] = 2 for n = 0, 2, 4, …;  x[n] = 0 for all other n

by noting that x[n] = x1[n] + x2[n], where x1[n] is the unit-step sequence and x2[n] is the unit-alternating sequence. Verify your result by directly determining the z-transform of x[n].

(c) Use the linearity property of the z-transform and the z-transform of e^(-anT) (from tables) to find Z{cos(nωT)}. Check the result using a table of z-transforms.

7.
Poles and zeros are defined for the z-transform in exactly the same manner as for the Laplace transform. For each of the z-transforms given below, find the poles and zeros and plot the locations in the z-plane. Which of these systems are stable and which are unstable?

(a) H(z) = (1 + 2z^-1) / (3 + 4z^-1 + z^-2)

(b) H(z) = (1/2) / (1 + (3/4)z^-1 + 8z^-4)

(c) H(z) = (5 + 2z^-2) / (1 + 6z^-1 + 3z^-2)

Note: for a discrete-time system to be stable, all the poles must lie inside the unit-circle.
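As a worked hint for this style of question (a Python/numpy sketch using a made-up transfer function, not one of the H(z) above): rewrite H(z) in positive powers of z, then the poles are the roots of the denominator polynomial, and stability requires every pole magnitude to be less than one.

```python
import numpy as np

# Hypothetical example: H(z) = (1 + 2z^-1) / (1 - 0.5z^-1 + 0.06z^-2).
# Multiplying top and bottom by z^2 gives polynomials in z:
num = [1.0, 2.0, 0.0]       # z^2 + 2z
den = [1.0, -0.5, 0.06]     # z^2 - 0.5z + 0.06

zeros = np.roots(num)       # zeros at 0 and -2
poles = np.roots(den)       # poles at 0.3 and 0.2

stable = bool(np.all(np.abs(poles) < 1.0))
print(poles, stable)        # both poles inside the unit-circle, so stable
```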

8.
Given:

(a) y[n] = 3x[n] + x[n-1] + 2x[n-4] + y[n-1] + y[n-2]

(b) y[n-4] + y[n-3] + y[n-2] = 3x[n-4] + x[n-3] + 2x[n]

find the transfer function of the systems:

(i) by first finding the unit-pulse response

(ii) by directly taking the z-transform of the difference equation

9.
Use the direct division method to find the first four terms of the data sequence x[n], given:

X(z) = (14z² + 14z³) / ((z - 1)(4z - 1)(2z - 1))

10.
Use the partial fraction expansion method to find a general expression for x[n] in Question 9. Confirm that the first four terms are the same as those obtained by direct division.

11.
Determine the inverse z-transforms of:

(a) X(z) = z² / ((z - 1)(z - a))

(b) X(z) = 3 + 2z^-1 + 6z^-4

(c) X(z) = (1 - e^(-aT)) z / ((z - 1)(z - e^(-aT)))

(d) X(z) = z(z + 1) / ((z - 1)(z² - z + 1/4))

(e) X(z) = 4 / ((z - 2)(z - 1))

(f) X(z) = z / (z(z - 1))
12.
Given:

8y[n] - 6y[n-1] + y[n-2] = x[n]

(a) Find the unit-pulse response h[n] using time-domain methods.

(b) Find H(z):

(i) by directly taking the z-transform of the difference equation

(ii) from your answer in (a)

(c) From your solution in (b), find the unit-step response of the system. Check your solution using a convolution method on the original difference equation.

(d) Find the zero-state response if x[n] = δ[n].

13.
Given y[n] = x[n] + y[n-1] + y[n-2], find the unit-step response of this system using the transfer function method, assuming zero initial conditions.

14.
Use z-transform techniques to find a closed-form expression for the Fibonacci
sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34
Hint: To use transfer function techniques, the initial conditions must be zero.

Construct an input so that the ZSR of the system described by the difference
equation gives the above response.

The ancient Greeks considered a rectangle to be perfectly proportioned (saying that the lengths of its sides were in a golden ratio to each other) if the ratio of the length to the width of the outer rectangle equalled the ratio of the length to the width of the inner rectangle:

[Figure: an outer rectangle with sides γ and 1; removing a 1 × 1 square leaves an inner rectangle with sides 1 and γ - 1.]

That is:

γ/1 = 1/(γ - 1)

Find the two values of γ that satisfy the golden ratio. Are they familiar values?

15.
Given F(s) = 1/(s + a)², find F(z).

Hint: 1. First find f(t). 2. Then put t = nT to obtain f(nT). 3. Take the z-transform of f(nT).

16.
Given:
X 2 z

X 1 z z 6

X 3 z D z
z 1 z 1

X 3 z X 1 z kX 2 z
X 4 z

10
3
X 2 z
X 3 z
z2
z A

(a) Draw a block diagram. Use block diagram reduction to find X3(z).

Are there any differences between the operations which apply to discrete-time and continuous-time transfer functions?

17.
Using the initial and final value theorems, find f[0] and f[∞] for the following functions:

(a) F(z) = 1 / (z - 0.3)

(b) F(z) = 1 + 5z^-3 + 2z^-2

(c) F(z) = z² / ((z - 1)⁴(z - a))

(d) F(z) = (14z² + 14z³) / ((z - 1)(4z - 1)(2z - 1))

18.
Perform the convolution y[n] * y[n] when:

(i) y[n] = u[n]

(ii) y[n] = δ[n] + δ[n-1] + δ[n-2]

using:

(a) the property that convolution in the time-domain is equivalent to multiplication in the frequency-domain, and

(b) any other convolution technique.

Compare answers.

19.
Determine the inverse z-transform of:

F(z) = z(z + 1) / ((z - 1)(z² - z + 1/4))

HINT: This has a repeated root. Use techniques analogous to those for the Laplace transform when multiple roots are present.

Lecture 7B Discretization
Signal discretization. Signal reconstruction. System discretization. Frequency
response. Response matching.

Overview
Digital signal processing is now the preferred method of signal processing. Communication schemes and controllers implemented digitally have inherent advantages over their analog counterparts: reduced cost, repeatability, stability with aging, flexibility (one H/W design, different S/W), in-system programming, and adaptability (the ability to track changes in the environment); this will no doubt continue into the future. However, much of how we think, analyse and design systems still uses analog concepts, and ultimately most embedded systems eventually interface to a continuous-time world. It's therefore important that we now take all that we know about continuous-time systems and transfer, or map, it into the discrete-time domain.

Signal Discretization
We have already seen how to discretize a signal. An ideal sampler produces a
weighted train of impulses:

g (t ) p (t ) = gs ( t )

g (t )

p (t )

-2Ts

-Ts

Ts

2Ts

Figure 7B.1
Sampling a continuous-time signal produces a discrete-time signal (a train of impulses)

This was how we approached the topic of z-transforms. Of course, an ideal sampler does not exist, but a real system can come close to the ideal. We saw in the lab that it didn't matter if we used a rectangular pulse train instead of an impulse train: the only effect was that repeats of the spectrum were weighted by a sinc function. This didn't matter, since reconstruction of the original continuous-time signal was accomplished by lowpass filtering the baseband spectrum, which was not affected by the sinc function.
Digital signal processing also quantizes the discrete-time signal

In a computer, values can only be stored as discrete values, not only at discrete times. Thus, in a digital system, the output of a sampler is quantized so that we have a digital representation of the signal. The effects of quantization will be ignored for now; be aware that they exist, and are a source of errors for digital signal processors.

Signal Reconstruction
The reconstruction of a signal from ideal samples was accomplished by an
ideal filter:
Lowpass filtering a discrete-time signal (a train of impulses) produces a continuous-time signal

gs(t) → [lowpass filter] → g(t)

Figure 7B.2
We then showed that so long as the Nyquist criterion was met, we could
reconstruct the signal perfectly from its samples (if the lowpass filter is ideal).
To ensure the Nyquist criterion is met, we normally place an anti-alias filter
before the sampler.
Hold Operation
The output from a digital signal processor is obviously digital, so we need a way to convert a discrete-time signal back into a continuous-time signal. One way is by using a lowpass filter on the sampled signal, as above. But digital systems have to first convert their digital data into analog data, and this is accomplished with a DAC.
To model a DAC, we note that there is always some output (it never turns off), and that the values it produces are quantized:

The output from a DAC looks like a train of impulses convolved with a rectangle

[Figure 7B.3: the staircase output g~(t) held constant over each sample interval of g(t).]

Figure 7B.3

The mathematical model we use for the output of a DAC is:

A DAC is modelled as a zero-order hold device

gs(t) → [zero-order hold] → g~(t)

Figure 7B.4
The output of the zero-order hold device is:

g~(t) = g(nTs),  nTs ≤ t < nTs + Ts    (7B.1)

where Ts is the sample period.


The operation of the zero-order hold device in terms of frequency response
shows us that it acts like a lowpass filter (but not an ideal one):

~
G f Ts sinc fTs e jfTs Gs f
Show that the above is true by taking the Fourier transform of g~ t .
Signals and Systems 2014

(7B.2)

Zero-order hold
frequency response
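To get a feel for Eq. (7B.2), this Python sketch (illustrative values, not from the notes) evaluates the zero-order hold magnitude Ts·|sinc(fTs)|: it equals Ts at DC, has a null at the sampling frequency 1/Ts, and is already about 3.9 dB down at half the sampling frequency, so it is only a crude lowpass filter.

```python
import math

Ts = 0.001  # sample period (an arbitrary choice)

def zoh_mag(f, Ts):
    """|G(f)| = Ts*|sinc(f*Ts)|, with sinc(x) = sin(pi*x)/(pi*x)."""
    x = f * Ts
    if x == 0.0:
        return Ts
    return Ts * abs(math.sin(math.pi * x) / (math.pi * x))

dc = zoh_mag(0.0, Ts)            # = Ts: the baseband passes through
at_fs = zoh_mag(1.0 / Ts, Ts)    # = 0: a null at the sampling frequency
half_fs = zoh_mag(0.5 / Ts, Ts)  # = Ts*2/pi, about 3.9 dB down at fs/2
print(dc, half_fs, at_fs)
```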

System Discretization

Suppose we wish to discretize a continuous-time LTI system. We would expect the input/output values of the discrete-time system to be:

x[n] = x(nTs),  y[n] = y(nTs)    (7B.3)

In the frequency-domain, we want an equivalent relationship, found by taking the Laplace transform of the continuous-time system and the z-transform of the discrete-time system, while still maintaining Eq. (7B.3):

Discretizing a system should produce the same (sampled) signals

[Figure 7B.5: x(t) → h(t) → y(t), with sampled versions x[n] = x(nTs) → h[n] → y[n] = y(nTs); via the Laplace transform, X(s) → H(s) → Y(s), and via the z-transform, X(z) → Hd(z) → Y(z).]

Figure 7B.5
We now have to determine the discrete-time transfer function Hd(z) so that the relationship Eq. (7B.3) holds true. One way is to match the inputs and outputs in the frequency-domain. You would expect that, since z = e^(sTs), we can simply do:

An exact match of the two systems

Hd(z) = H(s) evaluated at s = (1/Ts) ln z    (7B.4)

We can't implement the exact match because the transfer function is not a finite rational polynomial

Unfortunately, this leads to a z-domain expression which is not a finite rational polynomial in z. We want polynomials of z because the discrete-time system will be implemented as a difference equation in a computer. Recall that each z^-1 in a transfer function represents a shift to the right in the signal, a task easily handled by indexing into an array of stored values (there are special signal processing techniques to implement fractional delays, but for now we will stay with the easier concept of integer delays).
Therefore, we seek a rational polynomial approximation to the ln z function. We start with Taylor's theorem:

f(x) = f(a) + (x - a)f′(a) + ((x - a)²/2!)f″(a) + …    (7B.5)

Now set f(x) = ln x, a = 1, and let x = 1 + y:

ln(1 + y) = y - y²/2 + y³/3 - y⁴/4 + …    (7B.6)

Now, from inspection, we can also have:

ln(1 - y) = -y - y²/2 - y³/3 - y⁴/4 - …    (7B.7)

Subtracting Eq. (7B.7) from Eq. (7B.6), and dividing by 2, gives us:

(1/2) ln((1 + y)/(1 - y)) = y + y³/3 + y⁵/5 + y⁷/7 + …    (7B.8)

Now let:

z = (1 + y)/(1 - y)    (7B.9)

or, rearranging:

y = (z - 1)/(z + 1)    (7B.10)
Substituting Eqs. (7B.9) and (7B.10) into Eq. (7B.8) yields:

ln z = 2[(z - 1)/(z + 1) + (1/3)((z - 1)/(z + 1))³ + (1/5)((z - 1)/(z + 1))⁵ + …]    (7B.11)

Now if z ≈ 1, then we can truncate higher than first-order terms to get the approximation:

ln z ≈ 2(z - 1)/(z + 1)    (7B.12)

We can now use this as an approximate value for s. This is called the bilinear transformation (since it has 2 (bi) linear terms):

Bilinear transformation defined: an approximate mapping between s and z

s = (1/Ts) ln z ≈ (2/Ts)·(z - 1)/(z + 1)    (7B.13)

This transformation has several desirable properties:

The open LHP in the s-domain maps onto the open unit disk in the z-domain (thus the bilinear transformation preserves the stability condition).

The jω-axis maps onto the unit circle in the z-domain. This will be used later when we look at the frequency response of discrete-time systems.

So, an approximate mapping from a continuous-time system to a discrete-time system, from Eq. (7B.4), is:

Discretizing a system using the bilinear transformation

Hd(z) = H((2/Ts)·(z - 1)/(z + 1))    (7B.14)
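The substitution in Eq. (7B.14) is exactly what scipy.signal.bilinear performs. As an illustrative sketch (the first-order lowpass and the numbers here are made up, not taken from the notes):

```python
import numpy as np
from scipy import signal

a = 10.0          # continuous-time pole location (arbitrary choice)
Ts = 0.01         # sample period
fs = 1.0 / Ts     # sample rate expected by scipy

# H(s) = a / (s + a), a first-order lowpass with unity DC gain
num_s, den_s = [a], [1.0, a]

# Apply s -> (2/Ts)(z - 1)/(z + 1); scipy does the polynomial algebra
num_z, den_z = signal.bilinear(num_s, den_s, fs)

# The continuous pole at s = -a should land near z = e^(-a*Ts),
# and the DC gain (z = 1) is preserved exactly.
pole_z = np.roots(den_z)[0]
dc_gain = np.sum(num_z) / np.sum(den_z)
print(pole_z, dc_gain)
```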

Frequency Response

Since z = e^(sTs), letting s = jω gives z = e^(jωTs). If we define:

Ω = ωTs    (7B.15)

The relationship between discrete-time frequency and continuous-time frequency

then the frequency response in the z-domain is found by setting:

z = e^(jΩ)    (7B.16)

Value of z to determine discrete-time frequency response

This value of z is found on the unit-circle and has an angle of Ω = ωTs:

Frequency response mapping

[Figure 7B.6: the point z = e^(jΩ) on the unit-circle in the z-plane, at angle Ω.]

Figure 7B.6

But this point is also given by the angle Ω + 2πn, so the mapping from the z-plane to the s-domain is not unique. In fact, any point on the unit circle maps to the s-domain frequency (Ω + 2πn)/Ts, or:

Frequency response mapping is not unique: aliasing

ω = Ω/Ts + (2π/Ts)n = Ω/Ts + nωs    (7B.17)

where ωs is the sampling frequency in radians per second.
This shows us two things:

The mapping from the z-domain to the s-domain is not unique. Conversely, the mapping from the s-domain to the z-domain is not unique. This means that frequencies spaced at ωs map to the same frequency in the z-domain. We already know this! It's called aliasing.

Discrete-time frequency response is periodic

The frequency response of a discrete-time system is periodic with period 2π, which means it can be completely characterised by restricting Ω so that |Ω| ≤ π.

Just like the continuous-time case, where we set s = jω in H(s) to give the frequency response H(ω), we can set z = e^(jΩ) in Hd(z) to give the frequency response Hd(Ω). Doing this with Eq. (7B.14) yields the approximation:

Hd(Ω) ≈ H((2/Ts)·(e^(jΩ) - 1)/(e^(jΩ) + 1))    (7B.18)

The frequency ω that corresponds to Ω in the s-domain is approximately given by Eq. (7B.13):

jω ≈ (2/Ts)·(e^(jΩ) - 1)/(e^(jΩ) + 1)    (7B.19)

Using Euler's identity, show that this can be manipulated into the form:

Mapping continuous-time frequency to discrete-time frequency using the bilinear transformation

ω ≈ (2/Ts) tan(Ω/2)    (7B.20)

The inverse relationship is:

Ω ≈ 2 tan⁻¹(ωTs/2)    (7B.21)

7B.9
So now Eq. (7B.18), the approximate frequency response of an equivalent
discrete-time system, is given by:

H d H tan
2
Ts

Discretizing a
system to get a
similar frequency
response

(7B.22)

The distortion caused by the approximation Eq. (7B.21) is called frequency

warping, since the relationship is non-linear. If there is some critical frequency


(like the cutoff frequency of a filter) that must be preserved in the
transformation, then we can pre-warp the continuous-time frequency response
before applying Eq. (7B.21).
Note that we may be able to select the sample period T so that all our critical
frequencies will be mapped by:

c 2 tan 1

cTs

T
2 c s
2
2

cTs

(7B.23)

which is a small deviation from the real mapping Ts given by


Eq. (7B.15).
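The size of the warping is easy to quantify. A Python sketch with illustrative values (not from the notes): near Ω = 0 the bilinear mapping is almost the ideal Ω/Ts, while near Ω = π the continuous frequency it represents blows up.

```python
import math

Ts = 0.001  # sample period, so the foldover frequency is pi/Ts rad/s

def warped_w(Omega, Ts):
    """Continuous frequency mapped to discrete frequency Omega by the
    bilinear transform (Eq. 7B.20): w = (2/Ts)*tan(Omega/2)."""
    return (2.0 / Ts) * math.tan(Omega / 2.0)

# Low discrete frequency: warping is negligible
Omega_low = 0.01
lin_low = Omega_low / Ts            # the "no warping" value
warp_low = warped_w(Omega_low, Ts)

# Near Omega = pi: severe distortion
Omega_high = 3.0
lin_high = Omega_high / Ts
warp_high = warped_w(Omega_high, Ts)

print(lin_low, warp_low)    # nearly equal
print(lin_high, warp_high)  # warped value is far larger
```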

Response Matching

In control systems, it is usual to consider a mapping from continuous-time to discrete-time in terms of the time-domain response instead of the frequency response. For set-point control, this mapping is best performed as step-invariance synthesis, although other mappings can be made (like impulse invariance).

Time-domain (step response) matching

We saw before that we want:

x[n] = x(nTs),  y[n] = y(nTs)    (7B.24)

where the output of the discrete-time system is obtained by sampling the step-response of the continuous-time system.

Since for a unit-step input we have:

X(z) = z/(z - 1)    (7B.25)

then we want:

Step response matching to get the discrete-time transfer function

Hd(z) = Y(z)·(z - 1)/z    (7B.26)

where Y(z) is the z-transform of the ideally sampled step-response of the continuous-time system.
Example

Suppose we design a maze rover velocity controller in the continuous-time domain and we are now considering its implementation in the rover's microcontroller. We might come up with a continuous-time controller transfer function such as:

Gc(s) = 500 + 5/s    (7B.27)

Our closed loop system is therefore:

[Figure 7B.7: unity-feedback loop with reference R(s), error E(s), controller (500s + 5)/s, and maze rover plant 0.001/(s + 0.01), producing V(s).]

Figure 7B.7

Show that the block diagram can be reduced to the transfer function:

R(s) → [0.5/(s + 0.5)] → V(s)

Figure 7B.8

The transform of the step response is then:

Y(s) = [0.5/(s + 0.5)]·(1/s) = 1/s - 1/(s + 0.5)    (7B.28)

and taking the inverse Laplace transform gives the step response:

Continuous-time step response

y(t) = (1 - e^(-0.5t)) u(t)    (7B.29)

The discretized version of the step response is:

and desired discrete-time step response

y[n] = (1 - e^(-0.5nTs)) u[n]    (7B.30)

and taking the z-transform gives:

Y(z) = z/(z - 1) - z/(z - e^(-0.5Ts))    (7B.31)

Hence, using Eq. (7B.26) yields the following transfer function for the corresponding discrete-time system:

Equivalent discrete-time transfer function using step response

Hd(z) = 1 - (z - 1)/(z - e^(-0.5Ts)) = (1 - e^(-0.5Ts))/(z - e^(-0.5Ts)) = b z^-1/(1 - a z^-1)    (7B.32)

where a = e^(-0.5Ts) and b = 1 - e^(-0.5Ts). We would therefore implement the difference equation:

Equivalent discrete-time difference equation using step response

y[n] = a·y[n-1] + b·x[n-1]    (7B.33)

You should confirm that this difference equation gives an equivalent step-response at the sample instants using MATLAB for various values of Ts.
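One way to do that check, sketched in Python rather than the MATLAB run the notes suggest: run y[n] = a·y[n-1] + b·x[n-1] for a unit step, with a = e^(-0.5Ts) and b = 1 - e^(-0.5Ts) as in Eq. (7B.32), and compare every sample against the continuous response y(t) = 1 - e^(-0.5t). Because this design is step-invariant, the match is exact to rounding, for any Ts.

```python
import math

Ts = 0.1                     # sample period (one illustrative choice)
a = math.exp(-0.5 * Ts)      # from Eq. (7B.32)
b = 1.0 - a

N = 200
x = [1.0] * N                # unit-step input
y = [0.0] * N
for n in range(1, N):
    y[n] = a * y[n - 1] + b * x[n - 1]   # Eq. (7B.33)

# Samples of the continuous step response y(t) = 1 - e^(-0.5t)
y_exact = [1.0 - math.exp(-0.5 * n * Ts) for n in range(N)]

err = max(abs(u - v) for u, v in zip(y, y_exact))
print(err)  # essentially zero: step-invariance is exact at sample instants
```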

Summary

We can discretize a continuous-time signal by sampling. If we meet the Nyquist criterion for sampling, then all the information will be contained in the resulting discrete-time signal. We can then process the signal digitally.

We can reconstruct a continuous-time signal from mere numbers by using a DAC. We model this as passing our discrete-time signal (a weighted impulse train) through a lowpass filter with a rectangular impulse response.

A system (or signal) may be discretized using the bilinear transform. This maps the LHP in the s-domain into the open unit disk in the z-domain. It is an approximation only, and introduces warping when examining the frequency response.

Response matching derives an equivalent discrete-time transfer function so that the signals in the discrete-time system exactly match samples of the continuous-time system's input and output signals.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.

Exercises
1.
A Maze Rover phase-lead compensator has the transfer function:

H(s) = 20(s + 1)/(s + 4)

Determine a difference equation that approximates this continuous-time system using the method of step-response matching. The sample period is 64 ms.

2.
Repeat Exercise 1, but this time use the bilinear transform.

3.
A continuous-time system has a transfer function G(s) = (s + 2)/(s² + 3s + 2), and it is required to find the transfer function of an equivalent discrete-time system H(z) whose unit-step response consists of samples of the continuous-time system's unit-step response. Find H(z) assuming a sample time of 1 s. Compare the time solutions at t = 0, 1, 10, 100 s to verify your answer.

4.
Compare the step responses of each answer in Exercises 1 and 2 using
MATLAB.

Lecture 8A System Design
Design criteria for continuous-time systems. Design criteria for discrete-time
systems.

Overview
The design of a control system centres around two important specifications
the steady-state performance and the transient performance. The steady-state
performance is relatively simple. Given a reference input, we want the output
to be exactly, or nearly, equal to the input. The way to achieve the desired
steady-state error, for any type of input, is by employing a closed-loop
configuration that gives the desired system type. The more complex
specification of transient behaviour is tackled by an examination of the
systems poles. We will examine transient performance as specified for an allpole second-order system (the transient performance criteria still exist for
higher-order systems, but the formula shown here only apply to all-pole
second-order systems).
The transient specification for a control system will usually consist of times
taken to reach certain values, and allowable deviations from the final steadystate value. For example, we might specify percent overshoot, peak time, and
5% settling time for a control system. Our task is to find suitable pole locations
for the system.

Design Criteria for Continuous-Time Systems

Percent Overshoot

The percent overshoot for a second-order all-pole step-response is given by:

P.O. = e^(-πζ/√(1 - ζ²))    (8A.1)

Therefore, for a given maximum overshoot specification, we can evaluate the minimum allowable ζ. But what specific ζ do we choose? We don't know until we take into consideration other specifications! What we do is simply define a region in the s-plane where we can meet this specification.

Since ζ = cos θ, where θ is the angle with respect to the negative real axis, we can shade the region of the s-plane where the P.O. specification is satisfied:

PO specification region in the s-plane

[Figure 8A.1: s-plane wedge bounded by the rays at θ = cos⁻¹ ζ either side of the negative real axis; the shaded region is where the PO spec is satisfied.]

Figure 8A.1
Peak Time

The peak time for a second-order all-pole step-response is given by:

tp = π/ωd    (8A.2)

Since a specification will specify a maximum peak time, we can find the minimum ωd = π/tp required to meet the specification.

Again, we define the region in the s-plane where this specification is satisfied:

Peak time specification region in the s-plane

[Figure 8A.2: s-plane region with |ω| ≥ ωd = π/tp; the shaded region is where the tp spec is satisfied.]

Figure 8A.2
Settling Time

The settling time for a second-order all-pole step-response is given by:

ts = -ln δ / σ    (8A.3)

Since a specification will specify a maximum settling time, we can find the minimum σ = -ln δ / ts required to meet the specification.

We define the region in the s-plane where this specification is satisfied:

Settling time specification region in the s-plane

[Figure 8A.3: s-plane region to the left of σ = -ln δ / ts; the shaded region is where the ts spec is satisfied.]

Figure 8A.3
Combined Specifications

Since we have now specified simple regions in the s-plane that satisfy each specification, all we have to do is combine all the regions to meet every specification.

Combined specifications region in the s-plane

[Figure 8A.4: intersection of the PO wedge, the peak-time band and the settling-time half-plane; the shaded region is where all specs are satisfied.]

Figure 8A.4

Sometimes a specification is automatically met by meeting the other specifications; this is clear once the regions are drawn on the s-plane.

It is now up to us to choose, within reasonable limits, the desired closed-loop pole locations. The region we have drawn on the s-plane is an output specification; we must give consideration to the inputs of the system! For example, there is nothing theoretically wrong with choosing closed-loop pole locations out near infinity: we'd achieve a very nice, sharp, almost step-like response! In practice, we can't do this, because the inputs to our system would exceed the allowable linear range. Our analysis has considered the system to be linear; we know it is not in practice! Op amps saturate; motors cannot have megavolts applied to their windings without breaking down; and we can't put our foot on the accelerator past the floor of the car (no matter how much we try)!

We choose s-plane poles close to the origin so as not to exceed the linear range of systems
Practical considerations therefore mean we should choose pole locations that are close to the origin: this will mean we meet the specifications, and hopefully we won't be exceeding the linear bounds of our system. We normally indicate the desired pole locations by placing a square around them:

Desired closed-loop pole locations are represented with a square around them

[Figure 8A.5: the combined specification region with a chosen pole location, marked by a square, that meets the specs and keeps the system linear.]

Figure 8A.5

Can we apply these specifications to real (higher-order) systems? Yes, if it's a dominant second-order all-pole system:

Dominant second-order all-pole system

Figure 8A.6
or we introduce a minor-loop to move a dominant first-order pole away:

Dominant second-order all-pole system made by moving a pole using a minor-feedback loop

Figure 8A.7

or we achieve a pole-zero cancellation by placing a zero very close to the dominant pole:

Dominant second-order all-pole system made by pole-zero cancellation

Figure 8A.8

We should be careful when implementing pole-zero cancellation. Normally we do not know exactly the pole locations of a system, so the cancellation is only approximate. In some cases this inexact cancellation can cause the opposite of the desired effect: the system will have a dominant first-order pole and the response will be sluggish.

Pole-zero cancellation is inexact
Example

We need to design a maze rover position controller to achieve the following specifications:

(a) PO < 10%

(b) tp ≤ 1.57 s

(c) 2% settling time ts ≤ 9 s

We evaluate the regions in the order they are given. For the PO spec, we have:

0.1 = e^(-πζ/√(1 - ζ²))  so that  ζ ≥ 0.591

so that:

θ ≤ cos⁻¹(0.591) ≈ 53°

For the peak time spec, we have:

ωd ≥ π/1.57 ≈ 2 rad s⁻¹

For the settling time spec, we get:

σ ≥ -ln(0.02)/9 = 3.9/9 ≈ 0.43 nepers

The regions can now be drawn:

[s-plane sketch: the allowed region lies inside the ±53° wedge, outside |ω| = 2, and to the left of σ = -0.43; the chosen poles sit at -1.6 ± j2.1.]

We choose the desired closed-loop poles to be at s = -1.6 ± j2.1 to meet all our specifications.
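The three bounds in this example can be checked with a few lines of arithmetic (a Python sketch using the all-pole second-order formulas from this section):

```python
import math

# Specs: PO < 10%, tp <= 1.57 s, 2% settling time ts <= 9 s
PO, tp, delta, ts = 0.1, 1.57, 0.02, 9.0

# Invert PO = exp(-pi*zeta/sqrt(1 - zeta^2)) for the minimum zeta
L = -math.log(PO)
zeta = L / math.sqrt(math.pi ** 2 + L ** 2)
theta = math.degrees(math.acos(zeta))   # maximum pole angle, about 53 deg

wd = math.pi / tp                       # minimum damped frequency, ~2 rad/s
sigma = -math.log(delta) / ts           # minimum decay rate, ~0.43 nepers

print(zeta, theta, wd, sigma)
```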

Design Criteria for Discrete-Time Systems

A second-order continuous-time system can be translated into a second-order discrete-time system using either the bilinear (Tustin) transform, zero-order holds, or a variety of other methods. Discretizing a continuous-time system inevitably involves a sample time; this sample time affects the pole locations of the discrete-time system. We therefore would like to see where our poles should be to satisfy the initial design specifications. In transforming the performance specifications to the z-plane, it is first convenient to see how a single point of the s-plane maps into the z-plane.

Specification regions in the z-plane
Mapping of a Point from the s-plane to the z-plane

By definition, z = e^(sTs), and s = σ + jω. Therefore:

z = e^(σTs) e^(jωTs) = e^(σTs) ∠ωTs
|z| = e^(σTs),  ∠z = ωTs    (8A.4)

That is, we have:

Mapping a point from the s-plane to the z-plane

[Figure 8A.9: a point σ + jω in the s-plane maps to the z-plane point at radius e^(σTs) and angle ωTs, relative to the unit-circle.]

Figure 8A.9

We can see that if:

σ < 0, then |z| < 1
σ = 0, then |z| = 1
σ > 0, then |z| > 1    (8A.5)

We will now translate the performance specification criteria areas from the s-plane to the z-plane.
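Eq. (8A.4) is one line of code. A Python sketch mapping a few illustrative s-plane poles through z = e^(sTs) (the pole values and sample period are arbitrary):

```python
import cmath

Ts = 0.1  # sample period (arbitrary for illustration)

def s_to_z(s, Ts):
    """Map an s-plane point to the z-plane via z = e^(s*Ts)."""
    return cmath.exp(s * Ts)

z_lhp = s_to_z(complex(-1.6, 2.1), Ts)   # sigma < 0: stable
z_axis = s_to_z(complex(0.0, 2.1), Ts)   # sigma = 0: on the unit-circle
z_rhp = s_to_z(complex(1.6, 2.1), Ts)    # sigma > 0: unstable

# Magnitudes follow Eq. (8A.5): <1, =1, >1 respectively
print(abs(z_lhp), abs(z_axis), abs(z_rhp))
```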

Percent Overshoot

In the s-plane, the PO specification boundary is the constant-ζ ray s = σ + jω with ω = -σ tan(cos⁻¹ ζ), σ ≤ 0. This line in the z-plane must be:

z = e^(sTs) = e^(σTs) e^(-jσTs tan(cos⁻¹ ζ)) = e^(σTs) ∠(-σTs tan(cos⁻¹ ζ))    (8A.6)

For a given ζ, this locus is a logarithmic spiral:

[Figure 8A.10: constant-ζ rays in the s-plane (ζ = 0, 0.5, 1) and the corresponding logarithmic spirals inside the unit-circle in the z-plane.]

Figure 8A.10

The region in the z-plane that corresponds to the region in the s-plane where the PO specification is satisfied is shown below:

PO specification region in the z-plane

[Figure 8A.11: the s-plane wedge for ζ ≥ 0.5 (between ±ωs/2) and the corresponding region inside the unit-circle in the z-plane.]

Figure 8A.11
Peak Time

In the s-plane, the peak time specification boundary is the horizontal line s = σ + jωd. A line of constant frequency in the z-plane must be:

z = e^(sTs) = e^(σTs) e^(jωd·Ts) = e^(σTs) ∠ωd·Ts    (8A.7)

For a given ωd, this locus is a straight line between the origin and the unit-circle, at an angle ωd·Ts (remember we only consider the LHP of the s-plane, so that σ ≤ 0):

[Figure 8A.12: horizontal lines at jω₁ and jω₂ in the s-plane and the corresponding radial lines at angles ω₁Ts and ω₂Ts in the z-plane.]

Figure 8A.12

The region in the z-plane that corresponds to the region in the s-plane where the peak time specification is satisfied is shown below:

Peak time specification region in the z-plane

[Figure 8A.13: the s-plane region with |ω| ≥ ωd and the corresponding z-plane region between the radial lines at ±ωd·Ts and the unit-circle.]

Figure 8A.13
Settling Time

In the s-plane, the settling time specification boundary is the vertical line s = -σ₁ + jω. The corresponding locus in the z-plane must be:

z = e^(sTs) = e^(-σ₁Ts) e^(jωTs) = e^(-σ₁Ts) ∠ωTs    (8A.8)

For a given σ₁, this locus is a circle centred at the origin with a radius e^(-σ₁Ts):

[Figure 8A.14: vertical lines at -σ₁ and -σ₂ in the s-plane and the corresponding circles of radius e^(-σ₁Ts) and e^(-σ₂Ts) in the z-plane.]

Figure 8A.14

The region in the z-plane that corresponds to the region in the s-plane where the settling time specification is satisfied is shown below:

Settling time specification region in the z-plane

[Figure 8A.15: the s-plane region to the left of -σ and the corresponding z-plane disk of radius e^(-σTs).]

Figure 8A.15
Combined Specifications

A typical specification will mean we have to combine the PO, peak time and settling time regions in the z-plane. For the s-plane we chose pole locations close to the origin. Since |z| = e^(σTs), and σ is a small negative number near the origin, we need to maximize |z|. We therefore choose pole locations in the z-plane which satisfy all the criteria, and we choose them as far away from the origin as possible:

We choose z-plane poles close to the unit-circle so as not to exceed the linear range of systems

Combined specifications region in the z-plane

[Figure 8A.16: the intersection of the three z-plane regions, with the desired pole locations close to the unit-circle.]

Figure 8A.16

Sometimes we perform a design in the s-plane, then discretize it using a method such as the bilinear transform or step-response matching. We should always check whether the resulting discrete-time system will meet the specifications; it may not, due to the imperfections of the discretization!
Summary

The percent overshoot, peak time and settling time specifications for an all-pole second-order system can easily be found in the s-plane. Desired pole locations can then be chosen that satisfy all the specifications, and are usually chosen close to the origin so that the system remains in its linear region.

We can apply the all-pole second-order specification regions to systems that are dominant second-order systems.

The percent overshoot, peak time and settling time specifications for an all-pole second-order system can be found in the z-plane. The desired pole locations are chosen as close to the unit-circle as possible so the system stays within its linear bounds.

References
Kuo, B: Automatic Control Systems, Prentice-Hall, 1995.
Nicol, J.: Circuits and Systems 2 Notes, NSWIT, 1986.

Lecture 8B Root Locus
Root locus. Root locus rules. MATLAB's RLTool. Root loci of discrete-time systems. Time-response of discrete-time systems.

Overview

The roots of a system's characteristic equation (the poles) determine the mathematical form of the system's transient response to any input. They are very important for system designers in two respects. If we can control the pole locations, then we can position them to meet time-domain specifications such as P.O., settling time, etc. The pole locations also determine whether the system is stable: we must have all poles in the open LHP for stability. A root locus is a graphical way of showing how the roots of a characteristic equation in the complex (s or z) plane vary as some parameter is varied. It is an extremely valuable aid in the analysis and design of control systems, and was developed by W. R. Evans in his 1948 paper "Graphical Analysis of Control Systems", Trans. AIEE, vol. 67, pt. II, pp. 547-551, 1948.

Root Locus
As an example of the root locus technique, we will consider a simple unity-feedback control system:

[Figure 8B.1 – Unity-feedback control system: R(s) → E(s) → Gc(s) (controller) → Gp(s) (plant) → C(s).]

We know that the closed-loop transfer function is just:

C(s)/R(s) = Gc(s)Gp(s) / [1 + Gc(s)Gp(s)]        (8B.1)

This transfer function has the characteristic equation:

1 + Gc(s)Gp(s) = 0        (8B.2)

Now suppose that we can separate out the parameter of interest, K, in the characteristic equation; it may be the gain of an amplifier, or a sampling rate, or some other parameter that we have control over. It could be part of the controller, or part of the plant. Then the characteristic equation of this unity-feedback system can be written:

1 + KP(s) = 0        (8B.3)

where P(s) does not depend on K. The graph of the roots of this equation, as the parameter K is varied, gives the root locus. In general:

KP(s) = K (s - z1)(s - z2) ⋯ (s - zm) / [(s - p1)(s - p2) ⋯ (s - pn)]        (8B.4)

where the zi are the m open-loop zeros and the pi are the n open-loop poles of the system. Also, rearranging Eq. (8B.3) gives us:

KP(s) = -1        (8B.5)

Taking magnitudes of both sides leads to the magnitude criterion for a root locus:

|P(s)| = 1/K        (8B.6a)

Similarly, taking angles of both sides of Eq. (8B.5) gives the angle criterion for a root locus:

∠KP(s) = 180° + r·360°,  r = 0, ±1, ±2, …        (8B.6b)

To construct a root locus we can just apply the angle criterion. To find a particular point on the root locus, we need to know the magnitude of K.
Example

A simple maze rover positioning scheme is shown below:

[Block diagram: R(s) → E(s) → Kp (controller) → 0.5/(s(s+2)) (maze rover) → C(s), unity feedback]

We are trying to see the effect that the controller parameter Kp has on the closed-loop system. First of all, we can make the following assignments:

Gp(s) = 0.5/(s(s+2))  and  Gc(s) = Kp

Putting these into the form of Eq. (8B.3), we then have:

K = Kp  and  P(s) = 0.5/(s(s+2))

For such a simple system, it is easier to derive the root locus algebraically rather than use the angle criterion. The characteristic equation of the system is:

1 + KP(s) = 1 + 0.5Kp/(s(s+2)) = 0

or just:

s² + 2s + 0.5Kp = 0

The roots are then given by:

s = -1 ± √(1 - 0.5Kp)
We will now evaluate the roots for various values of the parameter Kp:

Kp = 0:  s1,2 = 0, -2
Kp = 1:  s1,2 = -1 + 1/√2, -1 - 1/√2
Kp = 2:  s1,2 = -1, -1
Kp = 3:  s1,2 = -1 + j/√2, -1 - j/√2
Kp = 4:  s1,2 = -1 + j, -1 - j

The root locus is thus:


Root locus for a
simple two-pole
system

j
j
Kp =4
Kp =0
-2

-1
Kp =2

s -plane
-j

What may not have been obvious before is now readily revealed: the system is unconditionally stable (for positive Kp) since the poles always lie in the LHP; and the Kp parameter can be used to position the poles for an overdamped, critically damped, or underdamped response. Also note that we can't arbitrarily position the poles anywhere on the s-plane; we are restricted to the root locus. This means, for example, that we cannot increase the damping of our underdamped response: its envelope will always be e^(-t).
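The table of roots above is easy to reproduce numerically; a short Python sketch (illustrative, not part of the original notes):

```python
import cmath

def closed_loop_poles(Kp):
    """Roots of s^2 + 2s + 0.5*Kp = 0, i.e. s = -1 +/- sqrt(1 - 0.5*Kp)."""
    d = cmath.sqrt(1 - 0.5 * Kp)
    return (-1 + d, -1 - d)

for Kp in [0, 1, 2, 3, 4]:
    p1, p2 = closed_loop_poles(Kp)
    print(Kp, p1, p2)
    # Unconditionally stable for Kp >= 0: real parts never go positive.
    assert p1.real <= 0 and p2.real <= 0
    # Underdamped poles always sit on the line Re(s) = -1.
    if Kp > 2:
        assert abs(p1.real + 1) < 1e-12
```

This confirms both observations made above: the locus never enters the RHP, and once the response is underdamped the real part is pinned at -1.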

Root Locus Rules
We will now examine a few important rules about root locus construction that can give us insight into how a system behaves as the parameter K is varied.

1. Number of Branches

If P(s) has n poles and m zeros (with n ≥ m), then there will be n branches.


2. Locus End Points

The root locus starts (i.e. K = 0) at the poles of P(s). This can be seen by substituting Eq. (8B.4) into Eq. (8B.3) and rearranging:

(s - p1)(s - p2) ⋯ (s - pn) + K(s - z1)(s - z2) ⋯ (s - zm) = 0        (8B.7)

This shows us that the roots of Eq. (8B.3), when K = 0, are just the open-loop poles of P(s), which are also the poles of Gc(s)Gp(s).

As K → ∞, the root locus branches terminate at the zeros of P(s). For n > m, the remaining n - m branches go to infinity.
3. Real Axis Symmetry

The root locus is symmetrical with respect to the real axis.


4. Real Axis Sections

Any portion of the real axis forms part of the root locus for K > 0 if the total number of real poles and zeros to the right of an exploratory point along the real axis is an odd integer. For K < 0, the number is zero or even.

5. Asymptote Angles

The n - m branches of the root locus going to infinity have asymptotes given by:

θ = r · 180° / (n - m)        (8B.8)

For K > 0, r is odd (1, 3, 5, …, n - m).

For K < 0, r is even (0, 2, 4, …, n - m).

This can be shown by considering the root locus as it is mapped far away from the group of open-loop poles and zeros. In this area, all the poles and zeros contribute about the same angular component. Since the total angular component must add up to 180° or some odd multiple, Eq. (8B.8) follows.
6. Asymptotic Intercept (Centroid)

The asymptotes all intercept at one point on the real axis given by:

σ_A = [Σ(poles of P(s)) - Σ(zeros of P(s))] / (n - m)        (8B.9)

The value of σ_A is the centroid of the open-loop pole and zero configuration.
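Eq. (8B.9) is a one-line computation; here is a small Python helper (illustrative, not from the notes), applied to the four-pole, one-zero example that appears later in this lecture:

```python
def centroid(poles, zeros):
    # Asymptote intercept, Eq. (8B.9): (sum of poles - sum of zeros) / (n - m).
    return (sum(poles) - sum(zeros)) / (len(poles) - len(zeros))

# Example used later: poles at 0, -1 and a double pole at -4; zero at -6.
print(centroid([0, -1, -4, -4], [-6]))   # -1.0

# The maze rover example (poles 0 and -2, no finite zeros) has its
# asymptote intercept at -1 as well, matching the vertical branches.
print(centroid([0, -2], []))             # -1.0
```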
7. Real Axis Breakaway and Break-In Points

A breakaway point is where a section of the root locus branches from the real
axis and enters the complex region of the s-plane in order to approach zeros
which are finite or are located at infinity. Similarly, there are branches of the
root locus which must break-in onto the real axis in order to terminate on
zeros.

Examples of breakaway and break-in points are shown below:

[Figure 8B.2 – Breakaway and break-in points of a root locus, shown on the real axis of the s-plane between -5 and -1.]

The breakaway (and break-in) points correspond to points in the s-plane where multiple real roots of the characteristic equation occur. A simple method for finding the breakaway points is available. Taking a lead from Eq. (8B.7), we can write the characteristic equation 1 + KP(s) = 0 as:

f(s) = B(s) + KA(s) = 0        (8B.10)

where A(s) and B(s) do not contain K. Suppose that f(s) has multiple roots of order r. Then f(s) may be written as:

f(s) = (s - p1)^r (s - p2) ⋯ (s - pk)        (8B.11)

If we differentiate this equation with respect to s and set s = p1, then we get:

df(s)/ds |_(s = p1) = 0        (8B.12)

This means that multiple roots will satisfy Eq. (8B.12). From Eq. (8B.10) we obtain:

df(s)/ds = B′(s) + KA′(s) = 0        (8B.13)

The particular value of K that will yield multiple roots of the characteristic equation is obtained from Eq. (8B.13) as:

K = -B′(s)/A′(s)        (8B.14)

If we substitute this value of K into Eq. (8B.10), we get:

f(s) = B(s) - [B′(s)/A′(s)] A(s) = 0        (8B.15)

or:

B(s)A′(s) - B′(s)A(s) = 0        (8B.16)

On the other hand, from Eq. (8B.10) we obtain:

K = -B(s)/A(s)        (8B.17)

and:

dK/ds = -[B′(s)A(s) - B(s)A′(s)] / A²(s)        (8B.18)

If dK/ds is set equal to zero, we get the same equation as Eq. (8B.16). Therefore, the breakaway and break-in points can be determined from the roots of:

dK/ds = 0        (8B.19)

It should be noted that not all points that satisfy Eq. (8B.19) correspond to actual breakaway and break-in points for K > 0 (those that do not, satisfy K < 0 instead). Also, valid solutions must lie on the real axis.

8. Imaginary Axis Crossing Points

The intersection of the root locus and the imaginary axis can be found by solving the characteristic equation whilst restricting solution points to s = jω. This is useful to analytically determine the value of K that causes the system to become unstable (or stable, if the roots are entering from the right-half plane).
9. Effect of Poles and Zeros

Zeros tend to attract the locus, while poles tend to repel it.
10. Use a computer

Use a computer to plot the root locus! The other rules provide intuition in
shaping the root locus, and are also used to derive analytical quantities for the
gain K, such as accurately evaluating stability.

Example

Consider a unity negative-feedback system:

[Block diagram: R(s) → E(s) → K(s+6)/(s(s+1)(s+4)²) → C(s), unity feedback]

This system has four poles (two being a double pole) and one zero, all on the negative real axis. In addition, it has three zeros at infinity. The root locus of this system, illustrated below, can be drawn on the basis of the rules presented.

[Root locus sketch: poles at 0, -1 and a double pole at -4; zero at -6; asymptotes at ±60° and 180° from the centroid σ_A = -1; breakaway/break-in points at -0.4525, -4 and -7.034; imaginary-axis crossing at ω = 1.7 for K = 10.25 (s-plane).]

Rule 1: There are four separate loci since the characteristic equation, 1 + G(s) = 0, is a fourth-order equation.

Rule 2: The root locus starts (K = 0) from the poles located at 0, -1, and the double pole located at -4. One branch terminates (K → ∞) at the zero located at -6, and three branches terminate at zeros which are located at infinity.
Rule 3: Complex portions of the root locus occur in complex-conjugate pairs.

Rule 4: The portions of the real axis between the origin and -1, the double pole at -4, and between -6 and -∞ are part of the root locus.

Rule 5: The loci approach infinity as K becomes large at angles given by:

θ1 = (1/(4 - 1)) × 180° = 60°
θ2 = (3/(4 - 1)) × 180° = 180°
θ3 = (5/(4 - 1)) × 180° = 300°

Rule 6: The intersection of the asymptotic lines and the real axis occurs at:

σ_A = [(0 - 1 - 4 - 4) - (-6)] / (4 - 1) = -1

Rule 7: The point of breakaway (or break-in) from the real axis is determined as follows. From the relation:

1 + G(s) = 1 + K(s+6)/(s(s+1)(s+4)²) = 0

we have:

K = -s(s+1)(s+4)² / (s+6)

Taking the derivative we get:

dK/ds = -[(s+6){2s(s+1)(s+4) + (2s+1)(s+4)²} - s(s+1)(s+4)²] / (s+6)² = 0

Therefore:

(s+6)(s+4)[2s(s+1) + (2s+1)(s+4)] - s(s+1)(s+4)² = 0
(s+4)[(s+6)(4s² + 11s + 4) - s(s+1)(s+4)] = 0
(s+4)(3s³ + 30s² + 66s + 24) = 0
(s+4)(s³ + 10s² + 22s + 8) = 0
One of the roots of this equation is obviously -4 (since it is written in factored form), which corresponds to the open-loop double pole. For the cubic equation s³ + 10s² + 22s + 8 = 0, we resort to finding the roots using MATLAB with the command roots([1 10 22 8]). The roots are -7.034, -2.5135 and -0.4525. It is impossible for the root at -2.5135 to be a breakaway or break-in point for the negative feedback case, since the root locus doesn't lie there (the root at -2.5135 is the breakaway point for a positive feedback system, i.e. when K < 0). Thus, the breakaway and break-in points are -4, -7.034 and -0.4525.

Rule 8: The intersection of the root locus and the imaginary axis can be determined by solving the characteristic equation when s = jω. The characteristic equation becomes:

s(s+1)(s+4)² + K(s+6) = 0
s⁴ + 9s³ + 24s² + (16 + K)s + 6K = 0
(ω⁴ - 24ω² + 6K) + j(-9ω³ + (16 + K)ω) = 0

Letting the real and imaginary parts go to zero, we have:

ω⁴ - 24ω² + 6K = 0  and  -9ω³ + (16 + K)ω = 0

Solving the second equation gives:

ω² = (16 + K)/9

Substituting this value into the first equation, we obtain:

[(16 + K)/9]² - 24(16 + K)/9 + 6K = 0

Solving this quadratic, we finally get K = 10.2483. Substituting this into the preceding equation, we obtain:

ω = 1.7 rad/s

as the frequency of crossover.
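The crossover calculation can be verified numerically. Eliminating K between the two conditions (K = 9ω² - 16 from the imaginary part) leaves ω⁴ + 30ω² - 96 = 0, a quadratic in ω²; a Python sketch:

```python
import math

# Real part: w**4 - 24*w**2 + 6*K = 0; imaginary part: -9*w**3 + (16 + K)*w = 0.
# Eliminating K (K = 9*w**2 - 16) gives w**4 + 30*w**2 - 96 = 0.
w2 = (-30 + math.sqrt(30**2 + 4*96)) / 2
w = math.sqrt(w2)
K = 9*w2 - 16
print(round(w, 2), round(K, 2))

# Both parts of the characteristic equation vanish at the crossover point.
assert abs(w**4 - 24*w**2 + 6*K) < 1e-9
assert abs(-9*w**3 + (16 + K)*w) < 1e-9
```

This reproduces ω ≈ 1.71 rad/s and K ≈ 10.25, matching the values above.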

Rule 9 follows by observation of the resulting sketch.
Rule 10 is shown below:
[MATLAB Root Locus Editor plot: the computed locus over roughly -15 to 10 on the real axis and -8 to 8 on the imaginary axis, matching the hand sketch.]

Thus, the computer solution matches the analytical solution, although the analytical solution provides more accuracy for points such as the crossover point and the breakaway / break-in points; it also provides insight into the system's behaviour.

MATLAB's RLTool

We can use MATLAB to graph our root loci. From the command window, just type rltool to start the root locus user interface.
Example

The previous maze rover positioning scheme has a zero introduced in the controller:

[Block diagram: R(s) → E(s) → Kp(s+3) (controller) → 0.5/(s(s+2)) (maze rover) → C(s), unity feedback]

Using MATLAB's rltool, we would enter the following:

Gp = tf(0.5, [1 2 0]);
Gc = tf([1 3], 1);
rltool

We choose File|Import from the main menu. We place our transfer function Gp into the G position of MATLAB's control system model. We then place our Gc transfer function into the C position and press OK. MATLAB will draw the root locus and choose appropriate graphing quantities. The closed-loop poles are shown by red squares, and can be dragged with the mouse. The status line gives the closed-loop pole locations, and the gain Kp that was needed to put them there can be observed at the top of the user interface.

The locus for this case looks like:

[Root locus: branches start at the open-loop poles s = 0 and s = -2 (Kp = 0), break away from the real axis (near Kp = 1), and break in to the left of the zero at s = -3; one branch terminates on the zero and the other heads towards -∞ (Kp = 15 shown near s = -5).]

We can see how the rules help us to "bend and shape" the root locus for our purposes. For example, we may want to increase the damping (move the poles to the left), which we have now done using a zero on the real axis. We couldn't do this with a straight gain in the previous example:

[Sketch: the new locus is bent and shaped so that it passes through the desired pole region, whereas the "old" locus (gain-only controller) never passes through the desired pole region.]

This system is no longer an all-pole system, so any parameters such as rise time etc. that we were aiming for in the design must be checked by observing the step response. This is done in MATLAB by choosing Analysis|Response to Step Command from the main menu.
Example

We will now see what happens if we place a pole in the controller instead of a zero:

[Block diagram: R(s) → E(s) → Kp/(s+3) (controller) → 0.5/(s(s+2)) (maze rover) → C(s), unity feedback]

MATLAB gives us the following root locus:

[Root locus: branches start at s = 0, -2 and -3 (Kp = 0); two branches break away from the real axis (Kp = 4 marked) and cross into the RHP at Kp = 60. A root locus can clearly show the limits of gain for a stable system.]

We see that the pole "repels" the root locus. Also, unfortunately in this case, the root locus heads off into the RHP. If the parameter Kp is increased above 60, then our system will be unstable!
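Taking the controller pole at s = -3 (an assumption consistent with the Kp = 60 stability limit quoted above), the limit can be confirmed with a Routh-style check in Python:

```python
import math

def char_poly(s, Kp):
    # Closed-loop characteristic polynomial s(s+2)(s+3) + 0.5*Kp,
    # i.e. s^3 + 5s^2 + 6s + 0.5*Kp, for the plant 0.5/(s(s+2))
    # with the assumed controller Kp/(s+3).
    return s*(s + 2)*(s + 3) + 0.5*Kp

# Routh-Hurwitz for s^3 + 5s^2 + 6s + 0.5*Kp: stable iff 0 < 0.5*Kp < 5*6,
# i.e. 0 < Kp < 60. At the limit the locus crosses at s = +/- j*sqrt(6).
w = math.sqrt(6)
assert abs(char_poly(1j*w, 60)) < 1e-9
```

At Kp = 60 the characteristic polynomial vanishes on the imaginary axis, so any further gain pushes a conjugate pole pair into the RHP.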

Root Loci of Discrete-Time Systems

We can perform the same analysis for discrete-time systems, but of course the interpretation of the pole locations in the z-plane is different to the s-plane. The characteristic equation in this case is just:

1 + Gc(z)Gp(z) = 1 + KP(z) = 0        (8B.20)
Time Response of Discrete-Time Systems

We have already seen how to discretize a continuous-time system; there are methods such as the bilinear transform and step-response matching. One parameter of extreme importance in discretization is the sample period, Ts. We need to choose it to be sufficiently small so that the discrete-time system approximates the continuous-time system closely. If we don't, then the pole and zero locations in the z-plane may lie out of the specification area, and in extreme cases can even lead to instability!
Example

We decide to simulate a maze rover by replacing its continuous-time model with a discrete-time model found using the bilinear transform:

[Block diagram: R(z) → E(z) → Kp (controller) → G(z) (maze rover) → C(z), unity feedback]

If the original maze rover continuous-time transfer function was:

G(s) = 1 / (s(s + 1))

then application of the bilinear transform gives:

G(z) = Ts²(z + 1)² / [(4 + 2Ts)z² - 8z + (4 - 2Ts)]

The closed-loop transfer function is now found to be:

T(z) = KpTs²(z + 1)² / [(4 + 2Ts + KpTs²)z² + (2KpTs² - 8)z + (4 - 2Ts + KpTs²)]

Clearly, the pole locations depend upon the sample period Ts.
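This dependence is easy to probe numerically; a Python sketch (the gain Kp = 1 is an assumed illustrative value, not taken from the notes):

```python
import cmath

def closed_loop_poles(Kp, Ts):
    """Poles of T(z) for the bilinear-transformed maze rover."""
    a = 4 + 2*Ts + Kp*Ts**2
    b = 2*Kp*Ts**2 - 8
    c = 4 - 2*Ts + Kp*Ts**2
    d = cmath.sqrt(b*b - 4*a*c)
    return ((-b + d) / (2*a), (-b - d) / (2*a))

for Ts in (0.1, 1.0):
    p1, p2 = closed_loop_poles(1.0, Ts)
    print(Ts, abs(p1), abs(p2))
    # Both poles remain inside the unit circle, but their positions
    # (and hence the transient behaviour) shift markedly with Ts.
    assert abs(p1) < 1 and abs(p2) < 1
```

For this gain both sample periods give a stable loop, but the pole locations differ substantially, which is exactly what the locus and step responses below illustrate.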

A locus of the closed-loop poles as Ts varies can be constructed. The figure below shows the root locus as Ts is varied from 0.1 s to 1 s:

[Root locus for a discrete-time system as the sample period is varied: the closed-loop poles move inside the unit circle from Ts = 0.1 towards Ts = 1 (z-plane).]

The corresponding step responses for the two extreme cases are shown below:

[Step responses due to the two different sample periods, plotted against nTs: the Ts = 0.1 response settles smoothly towards 1, while the Ts = 1 response is more oscillatory.]

We can see that the sample time affects the control system's ability to respond to changes in the output. If the sample period is small relative to the time constants in the system, then the output will be a good approximation to the continuous-time case. If the sample period is much larger, then we inhibit the ability of the feedback to correct for errors at the output, causing oscillations, increased overshoot, and sometimes even instability.

Summary

- The root locus for a unity-feedback system is a graph of the closed-loop pole locations as a system parameter, such as controller gain, is varied from 0 to infinity.

- There are various rules for drawing the root locus that help us to analytically derive various values, such as gains that cause instability. A computer is normally used to graph the root locus, but understanding the root locus rules provides insight into the design of compensators in feedback systems.

- The root locus can tell us why and when a system becomes unstable.

- We can bend and shape a root locus by the addition of poles and zeros so that it passes through a desired location of the complex plane.

- Root locus techniques can be applied to discrete-time systems.

- The root locus of a discrete-time system as the sample period is varied gives us insight into how close an approximation we have of a continuous-time system, and whether the chosen sample period can meet the specifications.

References
Kuo, B: Automatic Control Systems, Prentice-Hall, 1995.
Nicol, J.: Circuits and Systems 2 Notes, NSWIT, 1986.

Exercises
1.
For the system shown:

[R(s) → K(s + 4) / (s(s + 1)(s + 2)) → C(s), unity feedback]

Use RLTool in MATLAB for the following:

(a) Plot the root locus as K is varied from 0 to ∞.

(b) Find the range of K for stability, and the frequency of oscillation if unstable.

(c) Find the value of K for which the closed-loop system will have a 5% overshoot for a step input.

(d) Estimate the 1% settling time for the closed-loop pole locations found in part (c).

2.
The plant shown is open-loop unstable due to the right-half plane pole.

[U(s) → 1/((s + 3)(s - 8)) → C(s)]

Use RLTool in MATLAB for the following:

(a) Show by a plot of the root locus that the plant cannot be stabilized for any K, -∞ < K < ∞, if unity feedback is placed around it as shown:

[R(s) → K/((s + 3)(s - 8)) → C(s), unity feedback]

(b) An attempt is made to stabilize the plant using the feedback compensator shown:

[R(s) → K/((s + 3)(s - 8)) → C(s), with (s + 1)/(s + 2) in the feedback path]

Determine whether this design is successful by performing a root locus analysis for 0 ≤ K < ∞. (Explain, with the aid of a sketch, why K < 0 is not worth pursuing.)

3.
For the system shown, the required value of a is to be determined using the root-locus technique.

[R(s) → (s + a)/(s + 3) → 1/(s(s + 1)) → C(s), unity feedback]

(a) Sketch the root locus of C(s)/R(s) as a varies from -∞ to ∞.

(b) From the root-locus plot, find the value of a which gives both minimum overshoot and settling time when r(t) is a step function.

(c) Find the maximum value of a which just gives instability, and determine the frequency of oscillation for this value of a.

4.
The block diagram of a DC motor position control system is shown below.

[R(s) (desired position) → amplifier and motor 10/(s(s + 4)) → gear train 0.3 → C(s) (actual position), with tachometer feedback Ks]

The performance is adjusted by varying the tachometer gain K. K can vary from -100 to +100: 0 to +100 for the negative feedback configuration shown, and 0 to -100 if the electrical output connections from the tachometer are reversed (giving positive feedback).

(a) Sketch the root locus of C(s)/R(s) as K varies from -∞ to ∞. Use two plots: one for negative feedback and one for positive feedback. Find all important geometrical properties of the locus.

(b) Find the largest magnitude of K which just gives instability, and determine the frequency of oscillation of the system for this value of K.

(c) Find the steady-state error (as a function of K) when r(t) is a step function.

(d) From the root locus plots, find the value of K which will give 10% overshoot when r(t) is a step function, and determine the 10-90% rise time for this value of K.

Note: The closed-loop system has two poles (as found from the root locus) and no zeros. Verify this yourself using block diagram reduction.

James Clerk Maxwell (1831-1879)

Maxwell produced a most spectacular work of individual genius: he unified electricity and magnetism. Maxwell was able to summarize all observed phenomena of electrodynamics in a handful of partial differential equations known as Maxwell's equations¹:

∇·E = ρ/ε₀
∇·B = 0
∇×E = -∂B/∂t
∇×B = μ₀J + μ₀ε₀ ∂E/∂t

From these he was able to predict that there should exist electromagnetic waves which could be transmitted through free space at the speed of light. The revolution in human affairs wrought by these equations and their experimental verification by Heinrich Hertz in 1888 is well known: wireless communication, control and measurement, so spectacularly demonstrated by television and radio transmissions across the globe, to the moon, and even to the edge of the solar system!
James Maxwell was born in Edinburgh, Scotland. His mother died when he was 8, but his childhood was something of a model for a future scientist. He was endowed with an exceptional memory, and had a fascination with mechanical toys which he retained all his life. At 14 he presented a paper to the Royal Society of Edinburgh on ovals. At 16 he attended the University of Edinburgh, where the library still holds records of the books he borrowed while still an undergraduate; they include works by Cauchy on differential equations, Fourier on the theory of heat, Newton on optics, Poisson on mechanics, and Taylor's scientific memoirs. In 1850 he moved to Trinity College, Cambridge, where he graduated with a degree in mathematics in 1854. Maxwell was edged out of first place in their final examinations by his classmate Edward Routh, who was also an excellent mathematician.

¹ It was Oliver Heaviside who, in 1884-1885, cast the long list of equations that Maxwell had given into the compact and symmetrical set of four vector equations shown here and now universally known as Maxwell's equations. It was in this new form ("Maxwell redressed," as Heaviside called it) that the theory eventually passed into general circulation in the 1890s.
Maxwell stayed at Trinity where, in 1855, he formulated a theory of three primary colour-perceptions for the human perception of colour. In 1855 and 1856 he read papers to the Cambridge Philosophical Society, "On Faraday's Lines of Force", in which he showed how a few relatively simple mathematical equations could express the behaviour of electric and magnetic fields.
In 1856 he became Professor of Natural Philosophy at Aberdeen, Scotland, and
started to study the rings of Saturn. In 1857 he showed that stability could be
achieved only if the rings consisted of numerous small solid particles, an
explanation now confirmed by the Voyager spacecraft.
In 1860 Maxwell moved to King's College in London. In 1861 he created the first colour photograph, of a Scottish tartan ribbon, and was elected to the Royal Society. In 1862 he calculated that the speed of propagation of an electromagnetic wave is approximately that of the speed of light:

"We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena."
Maxwell's famous account, "A Dynamical Theory of the Electromagnetic Field", was read before a largely perplexed Royal Society in 1864. Here he brought forth, for the first time, the equations which comprise the basic laws of electromagnetism.
Maxwell also continued work he had begun at Aberdeen, on the kinetic theory
of gases (he had first considered the problem while studying the rings of
Saturn). In 1866 he formulated, independently of Ludwig Boltzmann, the
kinetic theory of gases, which showed that temperature and heat involved only
molecular motion.

"All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers." - James Clerk Maxwell
Maxwell was the first to publish an analysis of the effect of a capacitor in a circuit containing inductance, resistance and a sinusoidal voltage source, and to show the conditions for resonance. The way in which he came to solve this problem makes an interesting story:

Maxwell was spending an evening with Sir William Grove, who was then engaged in experiments on vacuum tube discharges. He used an induction coil for this purpose, and found that if he put a capacitor in series with the primary coil he could get much larger sparks. He could not see why. Grove knew that Maxwell was a splendid mathematician, and that he also had mastered the science of electricity, especially the theoretical art of it, and so he thought he would ask this young man [Maxwell was 37] for an explanation. Maxwell, who had not had very much experience in experimental electricity at that time, was at a loss. But he spent that night in working over his problem, and the next morning he wrote a letter to Sir William Grove explaining the whole theory of the capacitor in series connection with a coil. It is wonderful what a genius can do in one night!
Maxwell's letter, which began with the sentence, "Since our conversation yesterday on your experiment on magneto-electric induction, I have considered it mathematically, and now send you the result," was dated March 27, 1868. Preliminary to the mathematical treatment, Maxwell gave in this letter an unusually clear exposition of the analogy existing between certain electrical and mechanical effects. In the postscript, or appendix, he gave the mathematical theory of the experiment. Using different, but equivalent, symbols, he derived and solved the now familiar expression for the current i in such a circuit:

L di/dt + Ri + (1/C)∫i dt = V sin(ωt)
The solution for the current amplitude of the resulting sinusoid, in the steady state, is:

I = V / √(R² + (ωL - 1/(ωC))²)

from which Maxwell pointed out that the current would be a maximum when:

ωL = 1/(ωC)
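A numerical sketch of this resonance in Python (the component values below are illustrative assumptions, not from the text):

```python
import math

R, L, C = 10.0, 1e-3, 1e-6   # illustrative component values
V = 1.0

def amplitude(w):
    # Steady-state current amplitude from Maxwell's solution.
    return V / math.sqrt(R**2 + (w*L - 1/(w*C))**2)

w0 = 1 / math.sqrt(L*C)      # resonance condition: w*L = 1/(w*C)
# At resonance the reactances cancel and the amplitude peaks at V/R.
assert abs(amplitude(w0) - V/R) < 1e-9
assert amplitude(0.5*w0) < amplitude(w0) and amplitude(2*w0) < amplitude(w0)
```

The peak at ω₀ = 1/√(LC) is exactly the resonance curve shape that Hertz later published.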
Following Maxwell, Heinrich Hertz later showed a thorough acquaintance with electrical resonance and made good use of it in his experimental apparatus that proved the existence of electromagnetic waves, as predicted by Maxwell's equations. In the first of his series of papers describing his experiment, "On Very Rapid Electric Oscillations", published in 1887, he devotes one section to a discussion of "Resonance Phenomena" and published the first electrical resonance curve:

[The first electrical resonance curve published, by Hertz, 1887.]

When creating his standard for electrical resistance, Maxwell wanted to design a governor to keep a coil spinning at a constant rate. He made the system stable by using the idea of negative feedback. It was known for some time that the governor was essentially a centrifugal pendulum, which sometimes exhibited "hunting" about a set point; that is, the governor would oscillate about an equilibrium position until limited in amplitude by the throttle valve or the travel allowed to the bobs. This problem was solved by Airy in 1840 by fitting a damping disc to the governor. It was then possible to minimize speed fluctuations by adjusting the "controller gain". But as the gain was increased, the governors would burst into oscillation again. In 1868, Maxwell published his paper "On Governors" in which he derived the equations of motion of engines fitted with governors of various types, damped in several ways, and explained in mathematical terms the source of the oscillation. He was also able to set bounds on the parameters of the system that would ensure stable operation. He posed the problem for more complicated control systems, but thought that a general solution was insoluble. It was left to Routh some years later to solve the general problem of linear system stability: "It has recently come to my attention that my good friend James Clerk Maxwell has had difficulty with a rather trivial problem."

In 1870 Maxwell published his textbook Theory of Heat. The following year he returned to Cambridge to be the first Cavendish Professor of Physics; he designed the Cavendish laboratory and helped set it up.

The four partial differential equations describing electromagnetism, now known as Maxwell's equations, first appeared in fully developed form in his Treatise on Electricity and Magnetism in 1873. The significance of the work was not immediately grasped, mainly because an understanding of the atomic nature of electromagnetism was not yet at hand.

The Cavendish laboratory was opened in 1874, and Maxwell spent the next 5 years editing Henry Cavendish's papers.

Maxwell died of abdominal cancer, in 1879, at the age of forty-eight. At his death, Maxwell's reputation was uncertain. He was recognised to have been an exceptional scientist, but his theory of electromagnetism remained to be convincingly demonstrated. About 1880 Hermann von Helmholtz, an admirer of Maxwell, discussed the possibility of confirming his equations with a student, Heinrich Hertz. In 1888 Hertz performed a series of experiments which produced and measured electromagnetic waves and showed how they behaved like light. Thereafter, Maxwell's reputation continued to grow, and he may be said to have prepared the way for twentieth-century physics.

References
Blanchard, J.: The History of Electrical Resonance, Bell System Technical
Journal, Vol. 20 (4), p. 415, 1941.

Lecture 9A State-Variables
State representation. Solution of the state equations. Transition matrix.
Transfer function. Impulse response. Linear state-variable feedback.

Overview
The frequency-domain has dominated our analysis and design of signals and
systems up until now. Frequency-domain techniques are powerful tools, but
they do have limitations. High-order systems are hard to analyse and design.
Initial conditions are hard to incorporate into the analysis process (remember
the transfer function only gives the zero-state response). A time-domain
approach, called the state-space approach, overcomes these deficiencies and
also offers additional features of analysis and design that we have not yet
considered.

State Representation
Consider the following simple electrical system:

[Figure 9A.1 – A series RLC circuit: source vs, resistance R, inductance L, and capacitor voltage vC, with loop current i.]
In the analysis of a system via the state-space approach, the system is characterized by a set of first-order differential equations that describe its state variables. State variables are usually denoted by q1, q2, q3, …, qn. They characterize the future behaviour of a system once the inputs to the system are specified, together with a knowledge of the initial states.

States

For the system in Figure 9A.1, we can choose i and vC as the state variables. Therefore, let:

q1 = i        (9A.1a)
q2 = vC        (9A.1b)

From KVL, we get:

vs = Ri + L di/dt + vC        (9A.2)

Rearranging to get the derivative on the left-hand side, we get:

di/dt = -(R/L)i - (1/L)vC + (1/L)vs        (9A.3)

In terms of our state variables, given in Eqs. (9A.1), we can rewrite this as:

dq1/dt = -(R/L)q1 - (1/L)q2 + (1/L)vs        (9A.4)

Finally, we write Eq. (9A.4) in the standard nomenclature for state-variable analysis: we use q̇ to denote dq/dt, and also let the input, vs, be represented by the symbol x:

q̇1 = -(R/L)q1 - (1/L)q2 + (1/L)x        (9A.5)
Returning to the analysis of the circuit in Figure 9A.1, we have for the current through the capacitor:

i = C dvC/dt        (9A.6)

Substituting our state variables, we have:

q1 = C q̇2        (9A.7)

Finally, rearranging to get the derivative on the left-hand side:

q̇2 = (1/C) q1        (9A.8)

Notice how, for a second-order system, we need to find two first-order differential equations to describe the system. The two equations can be written in matrix form:

[q̇1]   [-R/L  -1/L] [q1]   [1/L]
[q̇2] = [ 1/C    0 ] [q2] + [ 0 ] x        (9A.9)

Using matrix symbols, this set of equations can be compactly written as the state equation:

q̇ = Aq + bx        (9A.10)

We will reserve small boldface letters for column vectors, such as q and b, and capital boldface letters for matrices, such as A. Scalar variables such as x are written in italics, as usual.

Eq. (9A.10), the state equation, is very important: it tells us how the states of the system q change in time due to the input x.
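As an illustrative sketch (my own addition, not from the notes, and with arbitrary assumed component values), the state equation for this circuit can be set up and integrated numerically:

```python
import numpy as np

# State model of the series RLC circuit: q1 = i (inductor current),
# q2 = vC (capacitor voltage). Component values are arbitrary assumptions.
R, L, C = 1.0, 0.1, 0.01

A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])
b = np.array([1.0 / L, 0.0])

def simulate(x, t_end, dt=1e-4):
    """Forward-Euler integration of qdot = A q + b x from a zero initial state."""
    q = np.zeros(2)
    for _ in range(int(t_end / dt)):
        q = q + dt * (A @ q + b * x)
    return q

q_end = simulate(x=1.0, t_end=2.0)   # step input vs = 1 V
print(q_end)  # the current q1 decays to 0; vC = q2 settles towards 1 V
```

For a DC step the circuit must settle with zero inductor current and the full source voltage across the capacitor, which is what the simulation shows.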

Output
The system output can usually be expressed as a linear combination of all the
state variables.
For example, if for the RLC circuit of Figure 9A.1 the output y is vC then:

y = vC = q2     (9A.11)

Therefore, in matrix notation, we write:

y = [0  1][q1; q2]     (9A.12)

which is usually expressed as the output equation:

y = c^T q     (9A.13)

Sometimes we also have a direct feedthrough term:

y = c^T q + dx     (9A.14)
Multiple Input-Multiple Output Systems


State variable representation is good for multiple input multiple output
(MIMO) systems. All we have to do is generalise our input and output above to
vector inputs and outputs:
q̇ = Aq + Bx
y = C^T q + Dx     (9A.15)

Solution of the State Equations
Once the state equations for a system have been obtained, it is usually
necessary to find the output of a system for a given input (However, some
parameters of the system can be directly determined by examining the A
matrix, in which case we may not need to solve the state equations).
We can solve the state equations in the s-domain. Taking the Laplace
Transform of Eq. (9A.10) gives:

sQ(s) - q(0) = AQ(s) + bX(s)     (9A.16)

Notice how the initial conditions are automatically included by the Laplace
transform of the derivative. The solution will be the complete response, not just
the ZSR.
Making Q(s) the subject, we get:

Q(s) = (sI - A)^-1 q(0) + (sI - A)^-1 bX(s)     (9A.17)

Because of its importance in state-variable analysis, we define the following matrix, the resolvent matrix:

Φ(s) = (sI - A)^-1     (9A.18)

This simplifies Eq. (9A.17) to:

Q(s) = Φ(s)q(0) + Φ(s)bX(s)     (9A.19)

Using Eq. (9A.13), the Laplace transform of the output (for d = 0) is then:

Y(s) = c^T Φ(s)q(0) + c^T Φ(s)bX(s)     (9A.20)

All we have to do is take the inverse Laplace transform (ILT) to get the
solution in the time-domain.
Before we do that, we also define the ILT of the resolvent matrix, called the transition matrix (the transition matrix and the resolvent matrix form a Laplace transform pair):

φ(t) = L^-1{Φ(s)}     (9A.21)

The ILT of Eq. (9A.19) is just the complete solution of the state equation:

q(t) = φ(t)q(0) + ∫₀ᵗ φ(t-τ) b x(τ) dτ     (9A.22)
         (ZIR)          (ZSR)

Notice how multiplication in the s-domain turned into convolution in the time-domain. The transition matrix is a generalisation of the impulse response, but it applies to the states, not the output!
We can get the output response of the system after solving for the states by
direct substitution into Eq. (9A.14).

Transition Matrix
The transition matrix possesses two interesting properties that help it to be
calculated by a digital computer:

φ(0) = I     (9A.23a)

φ(t) = e^(At)     (9A.23b)
The first property is obvious by substituting t = 0 into Eq. (9A.22). The second relationship arises by observing that the solution to the state equation for the case of zero input, q̇ = Aq, is q(t) = e^(At) q(0). For zero input, Eq. (9A.22) gives q(t) = φ(t)q(0), so that we must have φ(t) = e^(At). The matrix e^(At) is defined by:

e^(At) = I + At/1! + (At)^2/2! + (At)^3/3! + ...     (9A.24)

This is easy to calculate on a digital computer, because it consists only of matrix multiplications and additions. The series is truncated when the desired accuracy is reached.
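The truncated-series computation can be sketched as follows (my own code; the function name and the stopping rule are assumptions):

```python
import numpy as np

def expm_series(A, t, tol=1e-12, max_terms=100):
    """Evaluate e^(At) = I + At/1! + (At)^2/2! + ..., truncating when the
    terms become negligible. A sketch of Eq. (9A.24)."""
    At = A * t
    term = np.eye(A.shape[0])        # the k = 0 term
    total = term.copy()
    for k in range(1, max_terms):
        term = term @ At / k         # builds (At)^k / k! from the previous term
        total = total + term
        if np.abs(term).max() < tol:
            break
    return total

# Check against a transition matrix known in closed form:
# for A = [0 1; -1 -2], phi(t) = e^(-t) [1+t, t; -t, 1-t].
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
t = 0.5
phi = np.exp(-t) * np.array([[1 + t, t], [-t, 1 - t]])
print(np.allclose(expm_series(A, t), phi))  # True
```

The test matrix here is the A of the worked example that follows, whose transition matrix is derived by hand via the resolvent matrix.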

Example
Suppose a system is described by the following differential equation:

d²y/dt² + 2 dy/dt + y = r + dr/dt     (9A.25)

where the input and initial conditions are:

r = sin t,   y(0) = 1,   ẏ(0) = 0     (9A.26)

Let:

q1 = y,   q2 = ẏ,   x = r + ṙ     (9A.27)

then:

q̇1 = q2
q̇2 = -q1 - 2q2 + x     (9A.28)

or just:

q̇ = Aq + bx     (9A.29)

with:

A = [0  1; -1  -2],   b = [0; 1],   q1(0) = 1,   q2(0) = 0     (9A.30)

We form the resolvent matrix by firstly finding sI - A:

sI - A = [s  0; 0  s] - [0  1; -1  -2] = [s  -1; 1  s+2]     (9A.31)

Then, remembering that the inverse of a matrix B is given by:

B^-1 = adj B / |B|     (9A.32)

we get the resolvent matrix:

Φ(s) = (sI - A)^-1 = (1/(s+1)^2) [s+2  1; -1  s]
     = [(s+2)/(s+1)^2   1/(s+1)^2;  -1/(s+1)^2   s/(s+1)^2]     (9A.33)

The transition matrix is the inverse Laplace transform of the resolvent matrix:

φ(t) = L^-1{Φ(s)} = [e^(-t)(t+1)   te^(-t);  -te^(-t)   e^(-t)(1-t)]     (9A.34)

So, from Eq. (9A.22), the ZIR is given by:

q_ZIR(t) = φ(t)q(0) = [e^(-t)(t+1)   te^(-t);  -te^(-t)   e^(-t)(1-t)][1; 0] = [e^(-t)(t+1);  -te^(-t)]     (9A.35)

Since we don't like performing convolution in the time-domain, we use Eq. (9A.19) to find the ZSR:

q_ZSR(t) = L^-1{Φ(s)bX(s)}     (9A.36)

The Laplace transform of the input is:

X(s) = 1/(s^2+1) + s/(s^2+1) = (s+1)/(s^2+1)     (9A.37)

so the ZSR is:

q_ZSR(t) = L^-1{ (1/(s+1)^2)[s+2  1; -1  s][0; 1] (s+1)/(s^2+1) }
         = L^-1{ [1/((s+1)(s^2+1));  s/((s+1)(s^2+1))] }     (9A.38)

Use partial fractions to get:

q_ZSR(t) = [(1/2)(e^(-t) - cos t + sin t);  (1/2)(-e^(-t) + cos t + sin t)]     (9A.39)

The total response is the sum of the ZIR and the ZSR:

q1(t) = (3/2)e^(-t) + te^(-t) - (1/2)cos t + (1/2)sin t
q2(t) = -(1/2)e^(-t) - te^(-t) + (1/2)cos t + (1/2)sin t     (9A.40)

This is just the solution for the states. To get the output, we use:

y = c^T q + dx,   with c^T = [1  0], d = 0     (9A.41)

Therefore, the output is:

y = (3/2)e^(-t) + te^(-t) - (1/2)cos t + (1/2)sin t     (9A.42)

You should confirm this solution by solving the differential equation directly using your previous mathematical knowledge, e.g. the method of undetermined coefficients.
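As a quick numerical check (my own sketch, using a standard fourth-order Runge-Kutta step rather than anything from the notes), the state equations can be integrated and the result compared against the closed-form output of Eq. (9A.42):

```python
import numpy as np

# Integrate qdot = A q + b x with x(t) = sin t + cos t and q(0) = [1, 0],
# then compare y = q1 with the closed-form solution.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
b = np.array([0.0, 1.0])

def f(t, q):
    return A @ q + b * (np.sin(t) + np.cos(t))

def y_numeric(t_end, dt=1e-3):
    q = np.array([1.0, 0.0])
    t = 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = f(t, q)
        k2 = f(t + dt/2, q + dt/2 * k1)
        k3 = f(t + dt/2, q + dt/2 * k2)
        k4 = f(t + dt, q + dt * k3)
        q = q + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
    return q[0]

def y_closed(t):
    return 1.5*np.exp(-t) + t*np.exp(-t) - 0.5*np.cos(t) + 0.5*np.sin(t)

print(abs(y_numeric(2.0) - y_closed(2.0)) < 1e-6)  # True
```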

Transfer Function
The transfer function of a single input-single output (SISO) system can be
obtained easily from the state variable equations. Since a transfer function only
gives the ZSR (all initial conditions are zero), then Eq. (9A.19) becomes:

Q(s) = Φ(s)bX(s)     (9A.43)

The output in the s-domain, using the Laplace transform of Eq. (9A.13) and Eq. (9A.43), is just:

Y(s) = c^T Q(s) = c^T Φ(s)bX(s)     (9A.44)

Therefore, the transfer function (for d = 0) is given by:

H(s) = c^T Φ(s)b     (9A.45)

Impulse Response
The impulse response is just the inverse Laplace transform of the transfer
function:

ht cT t b
cT e At b

(9A.46)

It is possible to compute the impulse response directly from the coefficient


matrices of the state model of the system.

Example

Continuing the analysis of the system used in the previous example, we can find the transfer function using Eq. (9A.45):

H(s) = [1  0] (1/(s+1)^2)[s+2  1; -1  s] [0; 1]
     = [1  0] [1/(s+1)^2;  s/(s+1)^2]
     = 1/(s+1)^2     (9A.47)

Compare with the Laplace transform of the original differential equation:

d²y/dt² + 2 dy/dt + y = x
(s^2 + 2s + 1)Y(s) = X(s)     (9A.48)

from which the transfer function is:

H(s) = Y(s)/X(s) = 1/(s^2 + 2s + 1) = 1/(s+1)^2     (9A.49)

Why would we use the state-variable approach to obtain the transfer function? For a simple system, we probably wouldn't, but for multiple-input multiple-output systems, it is much easier using the state-variable approach.
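The equality of the two forms can also be checked numerically, since h(t) = c^T e^(At) b and the inverse transform of 1/(s+1)^2 is t e^(-t). This is my own sketch (the series evaluation of e^(At) is an assumption, not the notes' method):

```python
import numpy as np

# Cross-check h(t) = c^T e^(At) b against t e^(-t), which is the ILT of 1/(s+1)^2.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

def expm_series(M, terms=40):
    """Truncated power series for the matrix exponential e^M."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

for t in (0.5, 1.0, 2.0):
    h = c @ expm_series(A * t) @ b
    print(np.allclose(h, t * np.exp(-t)))  # True for each t
```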

Linear State-Variable Feedback
Consider the following system drawn using a state-variable approach:
[Block diagram of linear state-variable feedback: the reference r(t) has the feedback signal k^T q subtracted from it, and the difference is multiplied by the gain k0 to form x(t); x(t) drives the system to be controlled (an integrator bank with feedback matrix A, states q and output y(t) = c^T q). The gains k0 and k^T form the controller.]

Figure 9A.2
The system has been characterised in terms of states; you should confirm that the above diagram of the system to be controlled is equivalent to the matrix formulation of Eqs. (9A.10) and (9A.13).

We have placed a controller in front of the system, and we desire the output y to follow the set point, or reference input, r. The design of the controller involves the determination of the controller variables k0 and k to achieve a desired response from the system. (The desired response could be a time-domain specification, such as rise time, or a frequency-domain specification, such as bandwidth.)

The controller just multiplies each of the states qi by a gain ki, subtracts the sum of these from the input r, and multiplies the result by a gain k0.
Now, the input x to the controlled system is:

x = k0(r - k^T q)     (9A.50)

Therefore, the state equations are:

q̇ = (A - k0 b k^T)q + b k0 r = A_k q + b k0 r     (9A.51)

where:

A_k = A - k0 b k^T     (9A.52)

That is, state-variable feedback modifies the A matrix.

Eq. (9A.51) is the state-variable representation of the overall system (controller plus system to be controlled). The state equations of Eq. (9A.51) still have the same form as Eq. (9A.10), but A changes to A_k and the input changes from x to k0r. Analysis of the overall system can now proceed as follows.
For the ZSR, the closed-loop transfer function is given by Eq. (9A.45) with the above substitutions:

H(s) = k0 c^T Φ_k(s) b     (9A.53)

where the modified resolvent matrix is:

Φ_k(s) = (sI - A_k)^-1     (9A.54)

We choose the controller variables k0 and k^T = [k1  k2  ...  kn] to create the transfer function obtained from the design criteria (easy for an n = 2 second-order system).

Example
Suppose that it is desired to control an open-loop process using state-variable
control techniques. The open-loop system is shown below:

X (s )

1
s +70

Q1

1
s

Q2 = Y ( s )

Suppose it is desired that the closed-loop second-order characteristics of the feedback control system have the following parameters:

ωn = 50 rad/s,   ζ = 0.7071     (9A.55)

Then the desired transfer function is:

C(s)/R(s) = ωn^2 / (s^2 + 2ζωn s + ωn^2) = 2500 / (s^2 + 70.71s + 2500)     (9A.56)

The first step is to formulate a state-variable representation of the system. The relationships for each state-variable and the output variable in the s-domain are obtained directly from the block diagram:

Q1 = X/(s + 70)
Q2 = Q1/s
Y = Q2     (9A.57)

Rearranging, we get:

sQ1 = -70Q1 + X
sQ2 = Q1
Y = Q2     (9A.58)

The corresponding state-variable representation is readily found to be:

q̇ = [-70  0; 1  0]q + [1; 0]x
y = [0  1]q     (9A.59)

or just:

q̇ = Aq + bx
y = c^T q     (9A.60)

All linear, time-invariant systems have this state-variable representation. To implement state-variable feedback, we form the following system:

Controller
R (s )

k0

X (s )

1
s +70

Process
Q1( s)

1
s

Q2( s) = C (s )

k1

k2

We see that the controller accepts a linear combination of the states, and compares this with the reference input. It then provides gain and applies the resulting signal as the control effort, X(s), to the process.

The input signal to the process is therefore:

x = k0(r - k1q1 - k2q2)     (9A.61)

or in matrix notation:

x = k0(r - k^T q)     (9A.62)

Applying this as the input to the system changes the describing state equation of Eq. (9A.60) to:

q̇ = Aq + b k0(r - k^T q)
q̇ = (A - k0 b k^T)q + b k0 r
q̇ = A_k q + b k0 r     (9A.63)

where:

A_k = A - k0 b k^T     (9A.64)

If we let:

Φ_k(s) = (sI - A_k)^-1     (9A.65)

then the transfer function of the closed-loop system is given by:

H(s) = k0 c^T Φ_k(s) b     (9A.66)

We now need to evaluate a few matrices. First:

A_k = A - k0 b k^T
    = [-70  0; 1  0] - k0 [1; 0][k1  k2]
    = [-70 - k0k1   -k0k2;  1   0]     (9A.67)

Then:

Φ_k(s) = (sI - A_k)^-1 = [s + 70 + k0k1   k0k2;  -1   s]^-1
       = (1 / (s^2 + (70 + k0k1)s + k0k2)) [s   -k0k2;  1   s + 70 + k0k1]     (9A.68)

The closed-loop transfer function is then found as:

H(s) = k0 c^T Φ_k(s) b
     = k0 [0  1] (1 / (s^2 + (70 + k0k1)s + k0k2)) [s   -k0k2;  1   s + 70 + k0k1] [1; 0]
     = k0 [0  1] [s; 1] / (s^2 + (70 + k0k1)s + k0k2)
     = k0 / (s^2 + (70 + k0k1)s + k0k2)     (9A.69)

The values of k0, k1 and k2 can be found by equating coefficients in Eqs. (9A.56) and (9A.69). The following set of simultaneous equations results:

k0 = 2500
k0k2 = 2500
70 + k0k1 = 70.71     (9A.70)

We have three equations and three unknowns. Solving, we find that:

k0 = 2500,   k1 = 2.843 × 10^-4,   k2 = 1     (9A.71)

This completes the controller design. The final step would be to draw the root locus and examine the relative stability, and the sensitivity to slight gain variations. For this simple system, the final step is not necessary.
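The pole placement can be checked numerically; this sketch (my own addition) forms A_k with the computed gains and examines its eigenvalues:

```python
import numpy as np

# Closed-loop matrix A_k = A - k0 b k^T with the gains found above.
A = np.array([[-70.0, 0.0], [1.0, 0.0]])
b = np.array([[1.0], [0.0]])
kT = np.array([[2.843e-4, 1.0]])
k0 = 2500.0

Ak = A - k0 * (b @ kT)
poles = np.linalg.eigvals(Ak)
print(poles)
# Roots of s^2 + 70.71s + 2500: real part about -35.355, magnitude 50 (= ωn).
```

The eigenvalues of A_k are the closed-loop poles, so their magnitude equals the designed ωn = 50 rad/s and their damping matches ζ ≈ 0.7071.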

Summary

• State-variables describe internal states of a system rather than just the input-output relationship.

• The state-variable approach is a time-domain approach. We can include initial conditions in the analysis of a system to obtain the complete response.

• The state-variable equations can be solved using Laplace transform techniques.

• We can derive the transfer function of a SISO system using state-variables.

• Linear state-variable feedback involves the design of gains for each of the states, plus the input.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Shinners, S.: Modern Control System Theory and Application, Addison-Wesley, 1978.

Exercises
1.
Using capacitor voltages and inductor currents, write a state-variable representation for the following circuit:

[Circuit diagram: source vs, resistors R1, R2, R3, inductor L1 and capacitor C1]

2.
Consider a linear system with inputs x and output y. Three experiments are performed on this system using the inputs x1(t), x2(t) and x3(t) for t ≥ 0. In each case, the initial state at t = 0, q(0), is the same. The corresponding observed outputs are y1(t), y2(t) and y3(t). Which of the following three predictions are true if q(0) ≠ 0?

(a) If x3 = x1 + x2, then y3 = y1 + y2.

(b) If x3 = (1/2)(x1 + x2), then y3 = (1/2)(y1 + y2).

(c) If x3 = x1 - x2, then y3 = y1 - y2.

Which are true if q(0) = 0?

3.
Write dynamical equation descriptions for the block diagrams shown below with the chosen state variables.

(a) [Block diagram: blocks 1/(s+2) → Q1, 1/s → Q2 and 1/(s+2) → Q3, with a gain of 2, interconnected with feedback]

(b) [Block diagram: cascade 1/(s+2) → Q1, 1/(s+1) → Q2, 1/(s+2) → Q3]

4.
Find the transfer function of the systems in Q3:
(i)

by block diagram reduction

(ii)

directly from your answers in Q3 (use the resolvent matrix etc)

5.
Given:

q̇ = [1  1; -4  1]q + [1; 2]x

and:

q(0) = [3; 0],   x(t) = 0 for t < 0, 1 for t ≥ 0

Find q(t).
6.
Write a simple MATLAB script to evaluate the time response of the system described in state-equation form in Q5 using the approximate relationship:

q(t + T) ≈ q(t) + T q̇(t)

Use this script to plot q(t) using:

(a) T = 0.01 s,  (b) T = 0.1 s,  (c) T = 1 s

Compare these plots to the values given by the exact solution to Q5 (obtained by finding the inverse Laplace transforms of the answer given). Comment on the effect of varying the time increment T.

Lecture 9B State-Variables 2
Normal form. Similarity transform. Solution of the state equations for the ZIR.
Poles and repeated eigenvalues. Discrete-time state-variables. Discrete-time
response. Discrete-time transfer function.

Overview
State-variable analysis is useful for high-order systems and multiple-input
multiple-output systems. It can also be used to find transfer functions between
any output and any input of a system. It also gives us the complete response.
The only drawback to all this analytical power is that solving the state-variable
equations for high-order systems is difficult to do symbolically. Any computer
solution also has to be thought about in terms of processing time and memory
storage requirements.
The use of eigenvectors and eigenvalues solves this problem.
State-variable analysis can also be extended to discrete-time systems,
producing exactly analogous equations as for continuous-time systems.

Normal Form
Solving matrix equations is hard... unless we have a trivial system:

[ż1; ż2; ...; żn] = [λ1  0  ...  0;  0  λ2  ...  0;  ...;  0  0  ...  λn][z1; z2; ...; zn]     (9B.1)

which we would just write as:

ż = Λz     (9B.2)

This is useful for the ZIR of a state-space representation where:

q̇ = Aq     (9B.3)

We therefore seek a similarity transform to turn q̇ = Aq into ż = Λz.

When solving differential equations, we know we should get solutions containing exponentials. Let us therefore try one possible exponential trial solution in Eq. (9B.3) by letting:

q = q0 e^(λt)     (9B.4)

where q0 is a constant vector determined by the initial conditions of the states. Then q̇ = λq and, substituting into Eq. (9B.3), we get:

Aq = λq     (9B.5)

Therefore:

(A - λI)q = 0     (9B.6)

For a non-trivial solution (one where q ≠ 0), we need to have:

|A - λI| = 0     (9B.7)

This is called the characteristic equation of the system. The eigenvalues are the values of λ which satisfy |A - λI| = 0. Once we have all the λ's, each column vector q(i) which satisfies the original equation Eq. (9B.5) is called a column eigenvector:

|A - λI| = 0,   λi = eigenvalue
Aq(i) = λi q(i),   q(i) = eigenvector     (9B.8)

An eigenvector corresponding to an eigenvalue is not unique: an eigenvector can be multiplied by any non-zero arbitrary constant. We therefore tend to choose the simplest eigenvectors to make the mathematics easy.
Example
Given a system's A matrix, we want to find the eigenvalues and eigenvectors.

A = [2  3  2; 10  3  4; 3  6  1]     (9B.9)

We find the eigenvalues first by solving:

|A - λI| = |2-λ  3  2; 10  3-λ  4; 3  6  1-λ| = 0     (9B.10)

Evaluating the determinant, we get the characteristic equation:

λ^3 - 6λ^2 - 49λ - 66 = 0     (9B.11)

Factorising the characteristic equation, we get:

(λ + 2)(λ + 3)(λ - 11) = 0     (9B.12)

The solutions to the characteristic equation are the eigenvalues:

λ1 = -2,   λ2 = -3,   λ3 = 11     (9B.13)

To find the eigenvectors, substitute each λ into (A - λI)q = 0 and solve for q. Take λ1 = -2:

[4  3  2; 10  5  4; 3  6  3][q1; q2; q3] = [0; 0; 0]     (9B.14)

Solve to get:

q(1) = [1; 2; -5]  for λ1 = -2     (9B.15)

The other eigenvectors are:

q(2) = [0; 2; -3]  for λ2 = -3   and   q(3) = [2; 4; 3]  for λ3 = 11     (9B.16)
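These hand calculations are easy to confirm numerically (a sketch using NumPy, my own addition):

```python
import numpy as np

# Numerical confirmation of the example's eigen-analysis.
A = np.array([[2.0, 3.0, 2.0],
              [10.0, 3.0, 4.0],
              [3.0, 6.0, 1.0]])

w, V = np.linalg.eig(A)
print(np.sort(w.real))  # sorted eigenvalues: -3, -2, 11

# NumPy normalises eigenvectors to unit length, so each column of V is a
# scalar multiple of the hand-derived eigenvectors [1,2,-5], [0,2,-3], [2,4,3].
for lam, v in zip(w, V.T):
    print(np.allclose(A @ v, lam * v))  # True for every pair
```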

Similarity Transform
The eigenvalues and eigenvectors that arise from Eq. (9B.5) are put to good use by transforming q̇ = Aq into ż = Λz. First, form the square n × n matrix:

U = [q(1)  q(2)  ...  q(n)]     (9B.17)

The columns of the U matrix are the column eigenvectors corresponding to each of the n eigenvalues. Then, since the columns of U are solutions to:

Aq(i) = λi q(i)     (9B.18)

then by some simple matrix manipulation, we get:

AU = UΛ     (9B.19)

where Λ is the diagonal matrix of eigenvalues:

Λ = [λ1  0  ...  0;  0  λ2  ...  0;  ...;  0  0  ...  λn]     (9B.20)

Example
From the previous example, we can confirm the following relationship:

AU = UΛ
[2  3  2; 10  3  4; 3  6  1][1  0  2; 2  2  4; -5  -3  3] = [1  0  2; 2  2  4; -5  -3  3][-2  0  0; 0  -3  0; 0  0  11]     (9B.21)

Perform the multiplication shown to verify the result.


The matrix U is called the right-hand eigenvector matrix. If, instead of
Eq. (9B.5), we were to solve the equation q T A q T we get the same
eigenvalues but different eigenvectors. Also, we can form an n n matrix V so
that:

VA V

(9B.22)

where V is made up of rows of left-hand eigenvectors:

q T (1) q (1)1 q (1) 2 q (1) n


T ( 2) ( 2)
q q 1 q ( 2) 2 q ( 2) n

T (n) (n)
(n)
(n)
q q 1 q 2 q n

Signals and Systems 2014

(9B.23)

Since eigenvectors can be arbitrarily scaled by any non-zero constant, it can be shown that we can choose V such that:

VU = I     (9B.24)

which implies:

V = U^-1     (9B.25)

Now, starting from Eq. (9B.19), pre-multiply by V:

AU = UΛ
VAU = VUΛ     (9B.26)

but using Eq. (9B.25), this turns into:

Λ = U^-1 A U     (9B.27)

That is, the similarity transform diagonalizes the A matrix.

Solution of the State Equations for the ZIR
Given:

q̇ = Aq     (9B.28)

and q(0), we would like to determine the ZIR, q_ZIR(t).

First, put q = Uz and substitute into Eq. (9B.28) to give:

Uż = AUz     (9B.29)

Now pre-multiply by U^-1:

ż = U^-1 A U z     (9B.30)

and using Eq. (9B.27), the end result of the change of variable is:

ż = Λz     (9B.31)

Written explicitly, this is the diagonal form of the state equations:

[ż1; ż2; ...; żn] = [λ1  0  ...  0;  0  λ2  ...  0;  ...;  0  0  ...  λn][z1; z2; ...; zn]     (9B.32)

This is just a set of n independent first-order differential equations:

ż1 = λ1 z1
ż2 = λ2 z2
...
żn = λn zn     (9B.33)

Consider the first equation:

dz1/dt = λ1 z1     (9B.34)

The solution is easily seen to be:

z1 = z1(0)e^(λ1 t)     (9B.35)

The solution to Eq. (9B.33) is therefore just:

z1 = z1(0)e^(λ1 t)
z2 = z2(0)e^(λ2 t)
...
zn = zn(0)e^(λn t)     (9B.36)

or, written using matrix notation:

z = e^(Λt) z(0)     (9B.37)

The matrix e^(Λt) is defined by:

e^(Λt) = I + Λt/1! + (Λt)^2/2! + (Λt)^3/3! + ... = [e^(λ1 t)  0  ...  0;  0  e^(λ2 t)  ...  0;  ...;  0  0  ...  e^(λn t)]     (9B.38)

We now revert back to our original variable:

q = Uz = Ue^(Λt) z(0) = Ue^(Λt) U^-1 q(0)     (9B.39)

and since we know the ZIR is q_ZIR(t) = φ(t)q(0), the transition matrix written in terms of eigenvalues and eigenvectors is:

φ(t) = Ue^(Λt) U^-1     (9B.40)

This is a quick way to find the transition matrix φ(t) for high-order systems. The ZIR of the states is then just:

q(t) = φ(t)q(0) = Ue^(Λt) U^-1 q(0)     (9B.41)

Example
Given:

q̇(t) = [0  1; -2  -3]q(t) + [0; 2]x(t)
y(t) = [3  1]q(t)
q(0) = [2; 3]     (9B.42)

find the ZIR.

We start by determining the eigenvalues:

|A - λI| = |-λ  1; -2  -3-λ| = λ^2 + 3λ + 2 = (λ + 1)(λ + 2) = 0
λ1 = -1,   λ2 = -2     (9B.43)

Next we find the right-hand eigenvectors:

u(1) = [1; -1],   u(2) = [1; -2],   U = [1  1; -1  -2]     (9B.44)

and the left-hand eigenvectors:

v(1) = [2  1],   v(2) = [-1  -1],   V = [2  1; -1  -1]     (9B.45)

As a check, we can see if VU = I.

Forming Λ is easy:

Λ = [-1  0; 0  -2]     (9B.46)

The transition matrix is also easy:

φ(t) = Ue^(Λt) U^-1
     = [1  1; -1  -2][e^(-t)  0; 0  e^(-2t)][2  1; -1  -1]
     = [2e^(-t) - e^(-2t)    e^(-t) - e^(-2t);  -2e^(-t) + 2e^(-2t)    -e^(-t) + 2e^(-2t)]     (9B.47)

The ZIR of the states is now just:

q_ZIR(t) = φ(t)q(0)
         = [2e^(-t) - e^(-2t)    e^(-t) - e^(-2t);  -2e^(-t) + 2e^(-2t)    -e^(-t) + 2e^(-2t)][2; 3]
         = [7e^(-t) - 5e^(-2t);  -7e^(-t) + 10e^(-2t)]     (9B.48)

The ZIR of the output is then:

y_ZIR(t) = [3  1]q_ZIR(t) = 14e^(-t) - 5e^(-2t)     (9B.49)

For higher-order systems and computer analysis, this method results in considerable time and computational savings.
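The same diagonalisation can be done numerically; this sketch (my own addition) rebuilds φ(t) from NumPy's eigendecomposition and checks the ZIR output against 14e^(-t) - 5e^(-2t):

```python
import numpy as np

# phi(t) = U e^(Λt) U^{-1} built from the eigendecomposition of A.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
c = np.array([3.0, 1.0])
q0 = np.array([2.0, 3.0])

lam, U = np.linalg.eig(A)   # eigenvalues -1, -2 and right eigenvectors
Uinv = np.linalg.inv(U)

def y_zir(t):
    phi = U @ np.diag(np.exp(lam * t)) @ Uinv
    return c @ phi @ q0

for t in (0.0, 0.5, 1.0):
    print(round(float(y_zir(t)), 6),
          round(14*np.exp(-t) - 5*np.exp(-2*t), 6))  # the two columns agree
```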

Poles and Repeated Eigenvalues
Poles
We have seen before that the transfer function of a system using state-variables is given by:

H(s) = c^T Φ(s)b = c^T (sI - A)^-1 b = c^T adj(sI - A) b / |sI - A|     (9B.50)

where we have used the formula for the inverse, B^-1 = adj B / |B|. We can see that the poles of the system are the roots of the characteristic equation |sI - A| = 0. Thus, the poles of a system are given by the eigenvalues of the matrix A:

poles = eigenvalues of A     (9B.51)

There is one qualifier to this statement: there could be a pole-zero cancellation, in which case the corresponding pole will disappear. These are special cases, and are termed uncontrollable and/or unobservable, depending on the state assignments.
Repeated Eigenvalues
When eigenvalues are repeated, we get repeated eigenvectors if we try to solve Aq = λq. In these cases, it is not possible to diagonalize the original matrix (because the similarity matrix will have two or more repeated columns, and will be singular, hence no inverse).

In cases of repeated eigenvalues, the closest we can get to a diagonal form is a Jordan canonical form. Handling repeated eigenvalues and examining the Jordan form are topics for more advanced subjects in control theory.

Discrete-time State-Variables
The concepts of state, state vectors and state-variables can be extended to
discrete-time systems.
A discrete-time SISO system is described by the following equations:

qn 1 Aqn bxn

State and output


equations for a
discrete-time system

yn cT qn dxn

(9B.52)

Example
Given the following second-order linear difference equation:

y[n] = y[n-1] + y[n-2] + x[n]     (9B.53)

we select:

q1[n] = y[n-1]
q2[n] = y[n-2]     (9B.54)

We now want to write the given difference equation as a set of equations in state-variable form. Now:

q1[n] = y[n-1]     (9B.55)

Therefore:

q1[n+1] = y[n] = y[n-1] + y[n-2] + x[n]     (9B.56)

so that:

q1[n+1] = q1[n] + q2[n] + x[n]     (9B.57)

Also:

q2[n] = y[n-2]     (9B.58)

Therefore:

q2[n+1] = y[n-1]     (9B.59)

so that, using Eq. (9B.55):

q2[n+1] = q1[n]     (9B.60)

From Eq. (9B.53), we also have:

y[n] = q1[n] + q2[n] + x[n]     (9B.61)

The equations are now in state-variable form, and we can write:

q[n+1] = [1  1; 1  0]q[n] + [1; 0]x[n]
y[n] = [1  1]q[n] + 1·x[n]     (9B.62)

Thus, we can proceed as above to convert any given difference equation to state-variable form.
Example

If we are given the transfer function:

H(z) = (b0 + b1 z^-1 + ... + bN z^-N) / (1 + a1 z^-1 + ... + aN z^-N)     (9B.63)

then, introducing an intermediate variable P(z):

Y(z)/X(z) = [(b0 + b1 z^-1 + ... + bN z^-N)P(z)] / [(1 + a1 z^-1 + ... + aN z^-N)P(z)]     (9B.64)

From the denominator, we have:

P(z) = -a1 z^-1 P(z) - a2 z^-2 P(z) - ... - aN z^-N P(z) + X(z)     (9B.65)

Now select the state variables as:

Q1(z) = z^-1 P(z)
Q2(z) = z^-2 P(z)
...
QN(z) = z^-N P(z)     (9B.66)

The state equations are then built up as follows. From the first equation in Eq. (9B.66), zQ1(z) = P(z), so:

zQ1(z) = -a1 Q1(z) - a2 Q2(z) - ... - aN QN(z) + X(z)     (9B.67)

Taking the inverse z-transform, we get:

q1[n+1] = -a1 q1[n] - a2 q2[n] - ... - aN qN[n] + x[n]     (9B.68)

From the second equation in Eq. (9B.66), we have:

Q2(z) = z^-2 P(z) = z^-1 (z^-1 P(z)) = z^-1 Q1(z),   i.e.   zQ2(z) = Q1(z)     (9B.69)

Taking the inverse z-transform gives us:

q2[n+1] = q1[n]     (9B.70)

Similarly:

qN[n+1] = qN-1[n]     (9B.71)

We now have all the state equations. Returning to Eq. (9B.64), we have:

Y(z) = b0 P(z) + b1 z^-1 P(z) + ... + bN z^-N P(z)     (9B.72)

Taking the inverse z-transform gives:

y[n] = b0 q1[n+1] + b1 q1[n] + b2 q2[n] + ... + bN qN[n]     (9B.73)

Eliminating the q1[n+1] term using Eq. (9B.68) and grouping like terms gives:

y[n] = (b1 - a1 b0)q1[n] + (b2 - a2 b0)q2[n] + ... + (bN - aN b0)qN[n] + b0 x[n]     (9B.74)

which is in the required form.

Using matrix notation, we therefore have:

q[n+1] = [-a1  -a2  ...  -aN-1  -aN;  1  0  ...  0  0;  0  1  ...  0  0;  ...;  0  0  ...  1  0] q[n] + [1; 0; ...; 0] x[n]

y[n] = [c1  c2  ...  cN] q[n] + b0 x[n],   where ci = bi - ai b0,  i = 1, 2, ..., N     (9B.75)

Thus we can convert H(z) to state-variable form.
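This construction can be sketched in code (the function name and the check values are my own; coefficients are passed as [b0..bN] and [1, a1..aN]):

```python
import numpy as np

def tf_to_ss(b, a):
    """Companion-form state model from H(z) = (b0 + b1 z^-1 + ...)/(1 + a1 z^-1 + ...)."""
    N = len(a) - 1
    A = np.zeros((N, N))
    A[0, :] = -np.array(a[1:])       # first row: -a1 ... -aN
    A[1:, :-1] = np.eye(N - 1)       # shifted identity below
    bvec = np.zeros(N); bvec[0] = 1.0
    c = np.array([b[i] - a[i] * b[0] for i in range(1, N + 1)])
    d = b[0]
    return A, bvec, c, d

# Check with a first-order example H(z) = (1 + 0.5 z^-1)/(1 - 0.9 z^-1):
# unit-pulse response is h[0] = 1, h[n] = 1.4 (0.9)^(n-1) for n >= 1.
A, bv, c, d = tf_to_ss([1.0, 0.5], [1.0, -0.9])
h = [d] + [round(float(c @ np.linalg.matrix_power(A, n - 1) @ bv), 6)
           for n in (1, 2, 3)]
print(h)  # [1.0, 1.4, 1.26, 1.134]
```

The check uses h[0] = d and h[n] = c^T A^(n-1) b, the unit-pulse formula derived later in this lecture.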

Discrete-time Response
Once we have the equations in state-variable form we can then obtain the
discrete-time response.
For a SISO system we have:

q[n+1] = Aq[n] + bx[n]
y[n] = c^T q[n] + dx[n]     (9B.76)

First establish the states for the first few values of n:

q[1] = Aq[0] + bx[0]

q[2] = Aq[1] + bx[1] = A(Aq[0] + bx[0]) + bx[1] = A^2 q[0] + Abx[0] + bx[1]

q[3] = Aq[2] + bx[2] = A^3 q[0] + A^2 bx[0] + Abx[1] + bx[2]     (9B.77)

The general formula can then be seen to be:

q[n] = A^n q[0] + A^(n-1) bx[0] + ... + Abx[n-2] + bx[n-1],   n = 1, 2, ...     (9B.78)

We now define the fundamental matrix:

φ[n] = A^n     (9B.79)

From Eq. (9B.78) and the above definition, the response of the discrete-time system to any input is given by a convolution summation:

q[n] = φ[n]q[0] + Σ(i=0 to n-1) φ[n-i-1] b x[i]

y[n] = c^T q[n] + dx[n]     (9B.80)

This is the expected form of the output response. For the states, it can be seen that:

q_ZIR[n] = φ[n]q[0]

q_ZSR[n] = Σ(i=0 to n-1) φ[n-i-1] b x[i]     (9B.81)
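A small sketch (my own addition; the system matrices reuse the earlier difference-equation example, and the initial state and input sequence are arbitrary assumptions) confirms that direct iteration matches Eq. (9B.80):

```python
import numpy as np

# Compare direct iteration of q[n+1] = A q[n] + b x[n] with the closed-form
# sum phi[n] q0 + sum_i phi[n-i-1] b x[i], where phi[n] = A^n.
A = np.array([[1.0, 1.0], [1.0, 0.0]])   # system of Eq. (9B.62)
b = np.array([1.0, 0.0])
q0 = np.array([1.0, 1.0])                # assumed initial state
x = [1.0, 2.0, 3.0, 4.0, 5.0]

# direct iteration
q = q0.copy()
for n in range(len(x)):
    q = A @ q + b * x[n]

# closed-form convolution summation
N = len(x)
phi = lambda n: np.linalg.matrix_power(A, n)
q_formula = phi(N) @ q0 + sum(phi(N - i - 1) @ b * x[i] for i in range(N))
print(np.allclose(q, q_formula))  # True
```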

Discrete-time Transfer Function

We can determine the transfer function from the state-variable representation in the same manner as we did for continuous-time systems.

Take the z-transform of Eq. (9B.76) to get:

zQ(z) - zq[0] = AQ(z) + bX(z)
(zI - A)Q(z) = zq[0] + bX(z)     (9B.82)

Therefore:

Q(z) = (zI - A)^-1 zq[0] + (zI - A)^-1 bX(z)     (9B.83)

Similarly:

Y(z) = zc^T (zI - A)^-1 q[0] + [c^T (zI - A)^-1 b + d]X(z)     (9B.84)

For the transfer function, we put all initial conditions q[0] = 0. Therefore, the transfer function in terms of state-variable quantities is:

H(z) = c^T (zI - A)^-1 b + d     (9B.85)

To get the unit-pulse response, we revert to Eq. (9B.80), set the initial conditions to zero and apply a unit pulse:

h[0] = d
h[n] = c^T φ[n-1] b,   n = 1, 2, ...     (9B.86)

Using Eq. (9B.79), we get:

h[0] = d
h[n] = c^T A^(n-1) b,   n = 1, 2, ...     (9B.87)

Example

A linear time-invariant discrete-time system is given by the figure shown:

[Block diagram: the input x[n] and the feedback signals enter a summing junction whose output is q2[n+1]; two delay elements D in cascade produce q2[n] and then q1[n]; the feedback gains are 5/6 (from q2[n]) and -1/6 (from q1[n]); the output is formed as y[n] = -q1[n] + 5q2[n]]

We would like to find the output y[n] if the input is x[n] = u[n] and the initial conditions are q1[0] = 2 and q2[0] = 3.

Recognizing that q2[n] = q1[n+1], the state equations are:

[q1[n+1]; q2[n+1]] = [0  1; -1/6  5/6][q1[n]; q2[n]] + [0; 1]x[n]     (9B.88)

and:

y[n] = [-1  5][q1[n]; q2[n]]     (9B.89)

To find the solution using time-domain techniques [Eq. (9B.81)], we must determine φ[n] = A^n. One way of finding this is to first determine a similarity transform to diagonalize A.

If such a transform can be found, then Λ = U^-1 A U. Rearranging, we then have:

A = UΛU^-1
A^2 = A·A = UΛU^-1 UΛU^-1 = UΛ^2 U^-1
...
A^n = UΛ^n U^-1     (9B.90)

The characteristic equation of A is:

|λI - A| = λ(λ - 5/6) + 1/6 = λ^2 - (5/6)λ + 1/6 = (λ - 1/3)(λ - 1/2) = 0     (9B.91)

Hence, λ1 = 1/3 and λ2 = 1/2 are the eigenvalues of A, and:

Λ = [1/3  0; 0  1/2]     (9B.92)

The associated eigenvectors are, for λ1 = 1/3:

[1/3  -1; 1/6  -1/2][x1; x2] = [0; 0],   so choose   u(1) = [3; 1]     (9B.93)

and for λ2 = 1/2:

[1/2  -1; 1/6  -1/3][x1; x2] = [0; 0],   so choose   u(2) = [2; 1]     (9B.94)

Therefore:

U = [3  2; 1  1]     (9B.95)

The inverse of U is readily found to be:

U^-1 = adj U / |U| = [1  -2; -1  3]     (9B.96)

Therefore φ[n] = A^n = UΛ^n U^-1, and:

φ[n] = [3  2; 1  1][(1/3)^n  0; 0  (1/2)^n][1  -2; -1  3]
     = [3(1/3)^n - 2(1/2)^n    -6(1/3)^n + 6(1/2)^n;  (1/3)^n - (1/2)^n    -2(1/3)^n + 3(1/2)^n]     (9B.97)

To solve for the ZIR, we have, using Eq. (9B.81):

y_ZIR[n] = c^T φ[n] q[0]
         = [-1  5] φ[n] [2; 3]
         = [-1  5][-12(1/3)^n + 14(1/2)^n;  -4(1/3)^n + 7(1/2)^n]
         = -8(1/3)^n + 21(1/2)^n     (9B.98)

To solve for the ZSR, we have, using Eq. (9B.81):

y_ZSR[n] = Σ(i=0 to n-1) c^T φ[n-i-1] b x[i]     (9B.99)

We have:

c^T φ[n-i-1] b = [-1  5][-6(1/3)^(n-i-1) + 6(1/2)^(n-i-1);  -2(1/3)^(n-i-1) + 3(1/2)^(n-i-1)]
              = -4(1/3)^(n-i-1) + 9(1/2)^(n-i-1)     (9B.100)

so that:

y_ZSR[n] = Σ(i=0 to n-1) [-4(1/3)^(n-i-1) + 9(1/2)^(n-i-1)] u[i]
         = -4 Σ(j=0 to n-1) (1/3)^j + 9 Σ(j=0 to n-1) (1/2)^j
         = -6[1 - (1/3)^n] + 18[1 - (1/2)^n]
         = 12 + 6(1/3)^n - 18(1/2)^n     (9B.101)

for n ≥ 0. Therefore the total response is:

y[n] = [-8(1/3)^n + 21(1/2)^n] + [12 + 6(1/3)^n - 18(1/2)^n]
     = 12 - 2(1/3)^n + 3(1/2)^n,   n ≥ 0     (9B.102)

The solution using z-transforms follows directly from Eq. (9B.84):

Y(z) = zc^T (zI - A)^-1 q[0] + [c^T (zI - A)^-1 b + d]X(z)

with q[0] = [2; 3], d = 0, X(z) = z/(z - 1) and:

zI - A = [z  -1; 1/6  z - 5/6],   (zI - A)^-1 = (1 / (z^2 - (5/6)z + 1/6)) [z - 5/6  1; -1/6  z]

Working through the matrix products and expanding in partial fractions gives:

Y(z) = -8z/(z - 1/3) + 21z/(z - 1/2) + 12z/(z - 1) + 6z/(z - 1/3) - 18z/(z - 1/2)     (9B.103)

Therefore:

y[n] = [-8(1/3)^n + 21(1/2)^n] + [12 + 6(1/3)^n - 18(1/2)^n],   n ≥ 0     (9B.104)
        (zero-input response)       (zero-state response)

Obviously, the two solutions obtained using different techniques are in perfect agreement.
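The closed-form answer can also be checked by simply iterating the state equations (a sketch, my own addition):

```python
import numpy as np

# Iterate q[n+1] = A q[n] + b x[n] directly and compare y[n] = c^T q[n]
# with the closed form 12 - 2(1/3)^n + 3(1/2)^n for a unit-step input.
A = np.array([[0.0, 1.0], [-1.0/6.0, 5.0/6.0]])
b = np.array([0.0, 1.0])
c = np.array([-1.0, 5.0])

q = np.array([2.0, 3.0])    # q1[0] = 2, q2[0] = 3
for n in range(6):
    y = c @ q
    y_closed = 12 - 2*(1/3)**n + 3*(1/2)**n
    print(n, round(float(y), 6), round(y_closed, 6))  # the two columns agree
    q = A @ q + b * 1.0     # x[n] = u[n] = 1
```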
Summary

• The similarity transform uses a knowledge of a system's eigenvalues and eigenvectors to reduce a high-order coupled system to a simple diagonal form.

• A diagonal form exists only when there are no repeated eigenvalues.

• A system's poles and eigenvalues are equal.

• Discrete-time state-variable representation can be used to derive the complete response of a system, as well as the transfer function and unit-pulse response.

References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Shinners, S.: Modern Control System Theory and Application, Addison-Wesley, 1978.

Exercises
1.
Find Λ, U and V for the system described by:
0
q 0
6
y 0 0

0
1
0

0
1 q 2
7 x
20 5
11 6
1

1q

Note: U and V should be found directly (i.e. do not find V by taking the inverse of U). You can then verify your solution by:

(i) checking that VU = I

(ii) checking that UΛV = A

2.
Consider the system:

q̇1 = -q1 + x
q̇2 = q1 - 2q2 + x

with:

q(0) = [0; 1]

(a) Find the system eigenvalues and find the zero-input response of the system.

(b) If the system is given a unit-step input with the same initial conditions, find q(t). (Use the resolvent matrix to obtain the time solution.) What do you notice about the output?

3.
A system employing state-feedback is described by the following equations:

q̇ = [0  1  0; 0  0  1; -5  -7  -3]q + [0; 0; 1]x(t)

y = [2  4  3]q

x(t) = -[k1  k2  k3]q + r(t)

(i) Draw a block diagram of the state-feedback system.

(ii) Find the poles of the system without state feedback.

(iii) Obtain values for k1, k2 and k3 to place the closed-loop system poles at -4, -4 and -5.

(iv) Find the steady-state value of the output due to a unit-step input.

(v) Comment upon the possible uses of this technique.

4.
For the difference equation:

y[n] = 3x[n] + (5/4)x[n-1] + (5/8)x[n-2] - (1/4)y[n-1] - (1/4)y[n-2] - (1/16)y[n-3]

(a) Show that H(z) = (3z^3 + (5/4)z^2 + (5/8)z) / (z^3 + (1/4)z^2 + (1/4)z + 1/16)

(b) Form the state-variable description of the system.

(c) Find the transfer function H(z) from your answer in (b).

(d) Draw a block diagram of the state-variable description found in (b), and use block-diagram reduction to find H(z).

5.
Find $y[5]$ and $y[10]$ for the answer to Q4b given:

$$\mathbf{q}(0) = \begin{bmatrix}1\\1\\0\end{bmatrix} \quad\text{and}\quad x[n] = \begin{cases}0, & n < 0\\ 1, & n \ge 0\end{cases}$$

(a) by calculating $\mathbf{q}[n]$ and $y[n]$ for $n = 0, 1, \ldots, 10$ directly from the state equations.

(b) by using the fundamental matrix and finding $y[5]$ and $y[10]$ directly.

6.
A discrete-time system with:

$$H(z) = \frac{k(z - b)}{z(z - a)(z - c)}$$

is to be controlled using unity feedback. Find a state-space representation of the resulting closed-loop system.

Appendix A - The Fast Fourier Transform

The discrete-time Fourier transform (DTFT). The discrete Fourier transform (DFT). The fast Fourier transform (FFT).

Overview

Digital signal processing is becoming prevalent throughout engineering. We have digital audio equipment (CDs, MP3s), digital video (MPEG2 and MPEG4, DVD, digital TV) and digital phones (fixed and mobile). An increasing number (billions!) of embedded systems exist that rely on a digital computer (normally a microcontroller). They take input signals from the real analog world, convert them to digital signals, process them digitally, and produce outputs that are again suitable for the real analog world. (Think of the computers controlling any modern form of transport, such as a car, plane or boat, or those that control nearly all industrial processes.) Our motivation is to extend our existing frequency-domain analytical techniques to the digital world.

Our reason for hope that this can be accomplished is the fact that a signal's samples can convey the complete information about a signal (if Nyquist's criterion is met). We should be able to turn those troublesome continuous-time integrals into simple summations, a task easily carried out by a digital computer.
The Discrete-Time Fourier Transform (DTFT)

To illustrate the derivation of the discrete-time Fourier transform, we will consider the signal and its Fourier transform below:

[Figure F.1: a strictly time-limited signal x(t), amplitude A, and its Fourier transform X(f), peak C, which extends in frequency beyond ±2B. A strictly time-limited signal has an infinite bandwidth.]

Since the signal is strictly time-limited (it only exists for a finite amount of time), its spectrum must be infinite in extent. We therefore cannot choose a sample rate high enough to satisfy Nyquist's criterion (and therefore prevent aliasing). However, in practice we normally find that the spectral content of signals drops off at high frequencies, so that the signal is essentially bandlimited to B:

[Figure F.2: a time-limited signal x(t), amplitude A, which is also essentially bandlimited: its spectrum X(f), peak C, is essentially confined to |f| < B.]

We will now assume that Nyquist's criterion is met if we sample the time-domain signal at a sample rate $f_s = 2B$.
If we ideally sample using a uniform train of impulses (with spacing $T_s = 1/f_s$), the mathematical expression for the sampled time-domain waveform is:

$$x_s(t) = x(t)\sum_{k=-\infty}^{\infty}\delta(t - kT_s) = \sum_{k=-\infty}^{\infty}x[k]\,\delta(t - kT_s) \tag{F.1}$$

The corresponding operation in the frequency-domain gives:

$$X_s(f) = X(f) * f_s\sum_{n=-\infty}^{\infty}\delta(f - nf_s) = f_s\sum_{n=-\infty}^{\infty}X(f - nf_s) \tag{F.2}$$
The sampled waveform and its spectrum are shown graphically below:

[Figure F.3: the sampled signal x_s(t), amplitude A, whose impulse weights are x(kT_s) = x[k] (M samples with spacing T_s), and its periodic spectrum X_s(f), peak f_s C, with B = f_s/2.]

We are free to take as many samples as we like, so long as $MT_s$ spans the whole signal. That is, we need to ensure that our samples will encompass the whole signal in the time-domain. For a one-off waveform, we can also sample past the extent of the signal, a process known as zero padding.
Substituting $x_s(t)$ into the definition of the Fourier transform, we get:

$$X_s(f) = \int_{-\infty}^{\infty}x_s(t)e^{-j2\pi ft}\,dt = \int_{-\infty}^{\infty}\sum_{k=-\infty}^{\infty}x[k]\,\delta(t - kT_s)\,e^{-j2\pi ft}\,dt \tag{F.3}$$

Using the sifting property of the impulse function, this simplifies into the definition of the discrete-time Fourier transform (DTFT):

$$X_s(f) = \sum_{k=-\infty}^{\infty}x[k]e^{-j2\pi fkT_s} \tag{F.4}$$

The DTFT is a continuous function of f. It is discrete in the sense that it operates on a discrete-time signal (in this case, the discrete-time signal corresponds to the weights of the impulses of the sampled signal).

As shown in Figure F.3, the DTFT is periodic with period $f_s = 2B$, i.e. the range of frequencies $-f_s/2 \le f \le f_s/2$ uniquely specifies it.
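The finite version of the sum in Eq. (F.4) is easy to evaluate numerically. The Python/NumPy sketch below (the notes otherwise use MATLAB; the sample values here are made up) computes the DTFT of a short sequence and confirms its periodicity with period $f_s$:

```python
import numpy as np

# Hypothetical sampled signal: a few sample weights x[k] (illustrative only)
fs = 8.0                     # sample rate (Hz)
Ts = 1 / fs
x = np.array([1.0, 0.8, 0.5, 0.2])

def dtft(x, f, Ts):
    """Evaluate X_s(f) = sum_k x[k] exp(-j 2 pi f k Ts) at the frequencies f."""
    k = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-2j * np.pi * fi * k * Ts)) for fi in f])

f = np.linspace(-fs / 2, fs / 2, 5)
X = dtft(x, f, Ts)

# The DTFT is a continuous, periodic function of f with period fs
assert np.allclose(dtft(x, f, Ts), dtft(x, f + fs, Ts))
```

At f = 0 the sum collapses to the sum of the sample weights, as expected from Eq. (F.4).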

The Discrete Fourier Transform (DFT)

The DTFT is a continuous function of f, but we need a discrete function of f to be able to store it in a computer. Our reasoning now is that since samples of a time-domain waveform (taken at the right rate) uniquely determine the original waveform, then samples of a spectrum (taken at the right rate) uniquely determine the original spectrum.

One way to discretize the DTFT is to ideally sample it in the frequency-domain. Since the DTFT is periodic with period $f_s$, we choose N samples per period, where N is an integer. This yields periodic spectrum samples, and we only need to compute N of them (the rest will be the same)!
The spacing between samples is then:

$$f_0 = \frac{f_s}{N} \tag{F.5}$$

which gives a time-domain relationship:

$$T_0 = NT_s \tag{F.6}$$

The ideally sampled spectrum $X_{ss}(f)$ is then:

$$X_{ss}(f) = X_s(f)\sum_{n=-\infty}^{\infty}\delta(f - nf_0) = \sum_{n=-\infty}^{\infty}X_s(nf_0)\,\delta(f - nf_0) \tag{F.7}$$

The ideally sampled spectrum is shown below:

[Figure F.4: ideal samples of the DTFT spectrum X_ss(f): N samples over one period, from -f_s/2 to f_s/2, with sample spacing f_0 and peak weight f_s C.]
What signal does this spectrum correspond to in the time-domain? The corresponding operation of Eq. (F.7) is shown below in the time-domain:

$$x_{ss}(t) = x_s(t) * \frac{1}{f_0}\sum_{k=-\infty}^{\infty}\delta(t - kT_0) = T_0\sum_{k=-\infty}^{\infty}x_s(t - kT_0) \tag{F.8}$$

This time-domain signal is shown below:

[Figure F.5: periodic extension of the time-domain signal caused by ideally sampling the DTFT spectrum: the signal x_ss(t), amplitude A T_0, repeats at intervals T_0, with M samples in each repeat spaced T_s apart.]

We see that sampling in the frequency-domain causes periodicity in the time-domain. What we have created is a periodic extension of the original sampled signal, but scaled in amplitude. From Figure F.5, we can see that no time-domain aliasing occurs (no overlap of repeats of the original sampled signal) if:

$$T_0 \ge MT_s \tag{F.9}$$

Using Eq. (F.6), this means:

$$NT_s \ge MT_s \quad\text{or}\quad N \ge M \tag{F.10}$$
If the original time-domain waveform is periodic, and our samples represent one period in the time-domain, then we must choose the frequency sample spacing to be $f_0 = 1/T_0$, where $T_0$ is the period of the original waveform. The process of ideally sampling the spectrum at this spacing creates the original periodic waveform back in the time-domain. In this case, we must have $T_0 = MT_s$, and therefore $NT_s = MT_s$, so that $N = M$.

We choose the number of samples N in the frequency-domain so that Eq. (F.10) is satisfied, but we also choose N to handle the special case above. In any case, setting $N = M$ minimises the DFT calculations, and we therefore choose:

$$N = M \tag{F.11}$$

That is, the number of frequency-domain samples equals the number of time-domain samples.

If we do this, then the reconstructed sampled waveform has its repeats next to each other:

[Figure F.6: the reconstructed waveform x_ss(t), amplitude A T_0, with repeats at intervals T_0 directly adjacent to one another, M samples in each repeat with spacing T_s.]

Returning now to the spectrum in Figure F.4, we need to find the sampled spectrum impulse weights covering just one period, say from $0 \le f < f_s$. These are given by the weights of the impulses in Eq. (F.7) where $0 \le n \le N-1$.
The DTFT evaluated at those frequencies is then obtained by substituting $f = nf_0$ into Eq. (F.4):

$$X_s(nf_0) = \sum_{k=0}^{N-1}x[k]e^{-j2\pi nf_0kT_s} \tag{F.12}$$

Notice how the infinite summation has turned into a finite summation over $M = N$ samples of the waveform, since we know the waveform is zero for all the other values of k.

Now using Eq. (F.5), we have $f_0T_s = 1/N$. We then have the definition of the discrete Fourier transform (DFT):

$$X[n] = \sum_{k=0}^{N-1}x[k]e^{-j2\pi nk/N}, \qquad 0 \le n \le N-1 \tag{F.13}$$

The DFT takes an input vector $x[k]$ (time-spacing unknown, just a function of k) and produces an output vector $X[n]$ (frequency-spacing unknown, just a function of n). It is up to us to interpret the input and output of the DFT.

The Fast Fourier Transform (FFT)

The fast Fourier transform is really a family of algorithms that are used to evaluate the DFT. They are optimised to take advantage of the periodicity inherent in the exponential term in the DFT definition. The roots of the FFT algorithm go back to the great German mathematician Gauss in the early 1800s, but it was formally introduced by Cooley and Tukey in their paper "An Algorithm for the Machine Calculation of Complex Fourier Series", Math. Comput. 19, no. 2, April 1965: 297-301. Most FFTs are designed so that the number of sample points, N, is a power of 2.

All we need to consider here is the creation and interpretation of FFT results, and not the algorithms behind them.
Creating FFTs

Knowing the background behind the DFT, we can now choose various parameters in the creation of the FFT to suit our purposes. For example, there may be a requirement for the FFT results to have a certain frequency resolution, or we may be restricted to a certain number of samples in the time-domain and we wish to know the frequency range and spacing of the FFT output.

The important relationships that combine all these parameters are:

$$T_0 = NT_s \quad\text{or}\quad f_s = Nf_0 \quad\text{or}\quad N = T_0f_s \tag{F.14}$$

Example

An analog signal with a known bandwidth of 2000 Hz is to be sampled at the minimum possible frequency, and the frequency resolution is to be 5 Hz. We need to find the sample rate, the time-domain window size, and the number of samples.

The minimum sampling frequency we can choose is the Nyquist rate of $f_s = 2B = 2 \times 2000 = 4000$ Sa/s. To achieve a frequency resolution of 5 Hz requires a window size of $T_0 = 1/f_0 = 1/5 = 0.2$ s. The resulting number of samples is then $N = T_0f_s = 0.2 \times 4000 = 800$ samples.
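The same arithmetic, as a small Python sketch:

```python
# Sketch of the calculation: bandwidth B = 2000 Hz, resolution f0 = 5 Hz
B = 2000.0
f0 = 5.0
fs = 2 * B            # minimum (Nyquist) sample rate, Sa/s
T0 = 1 / f0           # time-domain window size, s
N = round(T0 * fs)    # number of samples
print(fs, T0, N)
```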

Example

An analog signal is viewed on a DSO with a window size of 1 ms. The DSO takes 1024 samples. What is the frequency resolution of the spectrum, and what is the folding frequency (half the sample rate)?

The frequency resolution is $f_0 = 1/T_0 = 1/0.001 = 1$ kHz.

The sample rate is $f_s = Nf_0 = 1024 \times 1000 = 1.024$ MHz. The folding frequency is therefore $f_s/2 = 512$ kHz, and this is the maximum frequency displayed on the DSO spectrum.
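Again as a quick Python calculation:

```python
# DSO example: window size T0 = 1 ms, N = 1024 samples
T0 = 1e-3
N = 1024
f0 = 1 / T0        # frequency resolution, Hz
fs = N * f0        # sample rate, Sa/s
folding = fs / 2   # folding frequency, Hz
print(f0, fs, folding)
```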
Example

A simulation of a system and its signals is being performed using MATLAB. The following code shows how to set up the appropriate time and frequency vectors if the sample rate and number of samples are specified:

% Sample rate
fs=1e6;
Ts=1/fs;
% Number of samples
N=1000;
% Time window
T0=N*Ts;
f0=1/T0;
% Time vector of N time samples spaced Ts apart
t=0:Ts:T0-Ts;
% Frequency vector of N frequencies spaced f0 apart
f=-fs/2:f0:fs/2-f0;

The frequency vector is used to graph the shifted (double-sided) spectrum produced by the code below:

G=fftshift(fft(g));

The frequency resolution of the FFT in this case will be $f_0 = 1$ kHz and the output will range from -500 kHz to 499 kHz. Note carefully how the time and frequency vectors were specified so that the last value does not coincide with the first value of the second periodic extension or spectrum repeat.
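For comparison, a Python/NumPy version of the same vector set-up might look like this (an illustrative translation, not part of the original notes):

```python
import numpy as np

# NumPy equivalent of the MATLAB vector set-up above
fs = 1e6
Ts = 1 / fs
N = 1000
T0 = N * Ts
f0 = 1 / T0

t = np.arange(N) * Ts              # N time samples spaced Ts apart, 0 .. T0-Ts
f = -fs / 2 + np.arange(N) * f0    # N frequencies spaced f0 apart

# Neither vector includes the point that begins the next periodic repeat
assert len(t) == N and len(f) == N
```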

Interpreting FFTs

The output of the FFT can be interpreted in four ways, depending on how we interpret the time-domain values that we feed into it.

Case 1 - Ideally sampled one-off waveform

If the FFT input, $x[n]$, is the weights of the impulses of an ideally sampled time-limited one-off waveform, then we know that its FT is a periodic repeat of the original unsampled waveform's spectrum. The DFT gives the value of the FT at frequencies $nf_0$ for the first spectral repeat. This interpretation comes directly from Eq. (F.12).
With our example waveform, we would have:

[Figure F.7: interpretation of the FFT of an ideally sampled one-off waveform. The real one-off sampled signal x_s(t), with impulse weights x(nT_s) = x[n], is input to the computer as the discrete signal x[n]. Its FT, X_s(f), peak f_s C, is periodic; the FFT output X[n] gives samples of it over one period, -f_s/2 to f_s/2.]
Case 2 - Ideally sampled periodic waveform

Consider the case where the FFT input, $x[n]$, is the weights of the impulses of an ideally sampled periodic waveform over one period, with period $T_0$. According to Eq. (F.8), the DFT gives the FT impulse weights for the first spectral repeat, as if the time-domain waveform were scaled by $T_0$. To get the FT, we therefore have to scale the DFT by $f_0$ and recognise that the spectrum is periodic.
With our example waveform, we would have:

[Figure F.8: interpretation of the FFT of an ideally sampled periodic waveform. The real periodic sampled signal x_s(t), with impulse weights x(nT_s) = x[n], is input to the computer as x[n]; the FFT output X[n], scaled by f_0, gives the weights of the spectral impulses over one period of the true FT, -f_s/2 to f_s/2.]
Case 3 - Continuous time-limited waveform

If we ideally sample the one-off waveform at intervals $T_s$, we get a signal corresponding to Case 1. The sampling process creates periodic repeats of the original spectrum, scaled by $f_s$. To undo the scaling and spectral repeats caused by sampling, we should multiply the Case 1 spectrum by $T_s$ and filter out all periodic repeats except the first. The DFT gives the value of the FT of the sampled waveform at frequencies $nf_0$ for the first spectral repeat; this interpretation comes directly from Eq. (F.12). All we have to do is scale the DFT output by $T_s$ to obtain the true FT at frequencies $nf_0$.
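This scaling can be verified numerically. In the Python/NumPy sketch below (an assumed rectangular pulse, zero-padded as described earlier), $T_s$ times the DFT output matches the true continuous FT magnitude of the pulse, $\tau\,|\mathrm{sinc}(f\tau)|$, at the frequencies $nf_0$:

```python
import numpy as np

# Sketch: scaling the FFT by Ts approximates the continuous FT of a one-off signal.
# Assumed signal: a rectangular pulse, M = 64 samples long, zero-padded to N = 512.
fs = 64e3
Ts = 1 / fs
N = 512
M = 64
tau = M * Ts                      # pulse width (1 ms)
x = np.zeros(N)
x[:M] = 1.0

X = Ts * np.fft.fft(x)            # Case 3: multiply the DFT output by Ts
f = np.arange(N) * fs / N         # frequencies n*f0

# Analytic FT magnitude of the pulse: |X(f)| = tau*|sinc(f*tau)|
# (np.sinc(u) = sin(pi*u)/(pi*u), so the pi is already included)
analytic = tau * np.abs(np.sinc(f * tau))
assert np.allclose(np.abs(X[:20]), analytic[:20], atol=1e-5)
```

The small residual difference at higher n comes from the fact that the DFT samples the periodic (aliased) spectrum, while the analytic expression is the unaliased FT.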
[Figure F.9: interpretation of the FFT of a one-off continuous waveform. The real one-off signal x(t) is ideally sampled at intervals T_s to give x_s(t), whose weights x(nT_s) = x[n] are input to the computer. The FFT output X[n], scaled by T_s, gives samples of the true spectrum X(f) at frequencies nf_0; ideal reconstruction (lowpass filtering) removes the spectral repeats.]
Case 4 - Continuous periodic waveform

To create the FFT input for this case, we must ideally sample the continuous signal at intervals $T_s$ to give $x[n]$. The sampling process creates periodic repeats of the original spectrum, scaled by $f_s$. This is now essentially equivalent to Case 2. To undo the scaling and spectral repeats caused by sampling, we should multiply the Case 2 spectrum by $T_s$ and filter out all periodic repeats except the first. According to Eq. (F.8), the DFT gives the FT impulse weights for the first spectral repeat. All we have to do is scale the DFT output by $f_0T_s = 1/N$ to obtain the true FT impulse weights.
[Figure F.10: interpretation of the FFT of a periodic continuous waveform. The real periodic signal x(t), period T_0, is ideally sampled at intervals T_s to give x_s(t); one period of the weights x(nT_s) = x[n] is input to the computer. The FFT output X[n], scaled by 1/N, gives the weights of the true spectral impulses at frequencies nf_0; ideal reconstruction removes the spectral repeats.]
Appendix B - The Phase-Locked Loop

Phase-locked loop. Voltage controlled oscillator. Phase detector. Loop filter.

Overview
The phase-locked loop (PLL) is an important building block for many
electronic systems. PLLs are used in frequency synthesisers, demodulators,
clock multipliers and many other communications and electronic applications.
Distortion in Synchronous AM Demodulation

In suppressed carrier amplitude modulation schemes, the receiver requires a local carrier for synchronous demodulation. Ideally, the local carrier must be in frequency and phase synchronism with the incoming carrier. Any discrepancy in the frequency or phase of the local carrier gives rise to distortion in the demodulator output.

For DSB-SC modulation, a constant phase error will cause attenuation of the output signal. Unfortunately, the phase error may vary randomly with time. A frequency error will cause a beating effect (the output of the demodulator is the original message multiplied by a low-frequency sinusoid). This is a serious type of distortion.

For SSB-SC modulation, a phase error in the local carrier gives rise to a phase distortion in the demodulator output. Phase distortion is generally not a problem with voice signals because the human ear is somewhat insensitive to phase distortion: it changes the quality of the speech but the speech remains intelligible. In video signals and data transmission, phase distortion is usually intolerable. A frequency error causes the output to have a component which is slightly shifted in frequency. For voice signals, a frequency shift of 20 Hz is tolerable.

Carrier Regeneration

The human ear can tolerate a drift between the carriers of up to about 30 Hz. Quartz crystals can be cut for the same frequency at the transmitter and receiver, and are very stable. However, at high frequencies (> 1 MHz), even quartz-crystal performance may not be adequate. In such a case, a carrier, or pilot, is transmitted at a reduced level (usually about -20 dB) along with the sidebands. The pilot enables the receiver to generate a local oscillator in frequency and phase synchronism with the transmitter.

[Figure P.1: the baseband speech spectrum, bandwidth B, and the transmitted SSB spectrum with pilot: sidebands occupying f_c to f_c + B (with mirror at negative frequencies) and a pilot tone at f_c.]
Figure P.1
One conventional technique to generate the receiver's local carrier is to separate the pilot at the receiver by a very narrowband filter tuned to the pilot frequency. The pilot is then amplified and used to synchronize the local oscillator.

In demodulation applications, the PLL is primarily used in tracking the phase and frequency of the carrier of an incoming signal. It is therefore a useful device for synchronous demodulation of AM signals with a suppressed carrier or with a little carrier (the pilot). In the presence of strong noise, the PLL is more effective than conventional techniques.

For this reason, the PLL is used in such applications as space-vehicle-to-earth data links, where there is a premium on transmitter weight; where the loss along the transmission path is very large; and, since the introduction of CMOS circuits that have entire PLLs built-in, commercial FM receivers.
The Phase-Locked Loop (PLL)

A block diagram of a PLL is shown below:

[Figure P.2: a phase-locked loop. The input sinusoid (pilot tone) v_i(t) and the VCO output v_o(t) (the local carrier) enter a phase detector; the phase difference passes through a loop filter to give v_in(t), which drives the VCO.]

It can be seen that the PLL is a feedback system. In a typical feedback system, the signal fed back tends to follow the input signal. If the signal fed back is not equal to the input signal, the difference (the error) will change the signal fed back until it is close to the input signal. A PLL operates on a similar principle, except that the quantity fed back and compared is not the amplitude, but the phase of a sinusoid. The voltage controlled oscillator (VCO) adjusts its frequency until its phase angle comes close to the angle of the incoming signal. At this point, the frequency and phase of the two signals are in synchronism (except for a difference of 90°, as will be seen later).

The three components of the PLL will now be examined in detail.

Voltage Controlled Oscillator (VCO)

The voltage controlled oscillator (VCO) is a device that produces a constant-amplitude sinusoid at a frequency determined by its input voltage. For a fixed DC input voltage, the VCO will produce a sinusoid of a fixed frequency. The purpose of the control system built around it is to change the input to the VCO so that its output tracks the incoming signal's frequency and phase.

[Figure P.3: the VCO block, with input voltage v_in(t) and output sinusoid v_o(t).]

The characteristic of a VCO is shown below:

[Figure P.4: the VCO characteristic, output frequency f_i versus input voltage v_in: a straight line of slope k_v passing through the nominal frequency f_o at v_in = 0.]

The horizontal axis is the applied input voltage, and the vertical axis is the frequency of the output sinusoid. The amplitude of the output sinusoid is fixed. We now seek a model of the VCO that treats the output as the phase of the sinusoid rather than the sinusoid itself. The frequency $f_o$ is the nominal frequency of the VCO (the frequency of the output for no applied input).

The instantaneous VCO frequency is given by:

$$f_i(t) = f_o + k_vv_{in}(t) \tag{P.1}$$

To relate this to the phase of the sinusoid, we need to generalise our definition of phase. In general, the frequency of a sinusoid $\cos\theta(t)$ is proportional to the rate of change of the phase angle. For $\cos\theta = \cos(2\pi f_1t)$:

$$\frac{d\theta}{dt} = 2\pi f_1, \qquad f_1 = \frac{1}{2\pi}\frac{d\theta}{dt} \tag{P.2}$$

We are used to having a constant $f_1$ for a sinusoid's frequency. The instantaneous phase of a sinusoid is given by:

$$\theta = \int_{-\infty}^{t}2\pi f_1\,d\tau \tag{P.3}$$

which reduces to:

$$\theta = 2\pi f_1t \tag{P.4}$$

for $f_1$ a constant. For $f_i(t) = f_o + k_vv_{in}(t)$, the phase is:

$$\theta = \int_{-\infty}^{t}\left(2\pi f_o + 2\pi k_vv_{in}\right)d\tau = 2\pi f_ot + 2\pi k_v\int_{-\infty}^{t}v_{in}\,d\tau \tag{P.5}$$
Therefore, the VCO output is:

$$v_o = A_o\cos(\omega_ot + \theta_o(t)) \tag{P.6}$$

where:

$$\theta_o(t) = 2\pi k_v\int_{-\infty}^{t}v_{in}(\tau)\,d\tau \tag{P.7}$$

This equation expresses the relationship between the input to the VCO and the phase of the resulting output sinusoid.

Example

Suppose a DC voltage of $V_{DC}$ V is applied to the VCO input. Then the output phase of the VCO is given by:

$$\theta_o(t) = 2\pi k_v\int_0^tV_{DC}\,d\tau = 2\pi k_vV_{DC}t$$

The resulting sinusoidal output of the VCO can then be written as:

$$v_o = A_o\cos(\omega_ot + 2\pi k_vV_{DC}t) = A_o\cos((\omega_o + 2\pi k_vV_{DC})t) = A_o\cos(\omega_1t)$$

In other words, a constant DC voltage applied to the input of the VCO will produce a sinusoid of fixed frequency, $f_1 = f_o + k_vV_{DC}$.

When used in a PLL, the VCO input should eventually be a constant voltage (the PLL has locked onto the phase, but the VCO needs a constant input voltage to output the tracked frequency).
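The integral in Eq. (P.7) can be approximated numerically to confirm this behaviour. In the Python sketch below (the gain and voltage values are illustrative), a constant input produces a phase that ramps at $2\pi k_vV_{DC}$ rad/s, i.e. a frequency offset of $k_vV_{DC}$:

```python
import numpy as np

# Sketch: numerically integrating the VCO phase theta_o(t) = 2*pi*kv*integral(v_in)
# for a constant input V_DC. All values are illustrative.
kv = 100.0      # VCO gain, Hz per volt (assumed)
Vdc = 0.5       # constant input voltage
dt = 1e-5
t = np.arange(0, 0.01, dt)
vin = np.full_like(t, Vdc)

theta = 2 * np.pi * kv * np.cumsum(vin) * dt   # running integral of the input

# The slope of theta is 2*pi*kv*Vdc rad/s: a frequency offset of kv*Vdc = 50 Hz
slope = (theta[-1] - theta[0]) / (t[-1] - t[0])
assert np.isclose(slope / (2 * np.pi), kv * Vdc)
```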

Phase Detector

The phase detector is a device that produces the phase difference between two input sinusoids:

[Figure P.5: the phase detector block, with input sinusoids v_i(t) and v_o(t) and output the phase difference.]

A practical implementation of a phase detector is a four-quadrant multiplier:

[Figure P.6: a four-quadrant multiplier forming x(t) = v_i(t) v_o(t).]
To see why a multiplier can be used, let the incoming signal of the PLL, with constant frequency and phase, be:

$$v_i = A_i\sin(\omega_it + \phi_i) \tag{P.8}$$

If we let:

$$\theta_i(t) = (\omega_i - \omega_o)t + \phi_i \tag{P.9}$$

then we can write the input in terms of the nominal frequency of the VCO as:

$$v_i = A_i\sin(\omega_ot + \theta_i(t)) \tag{P.10}$$

Note that the incoming signal is in phase quadrature with the VCO output (i.e. one is a sine, the other a cosine). This comes about due to the way the multiplier works as a phase comparator, as will be shown shortly. Thus, the PLL will lock onto the incoming signal, but it will have a 90° phase difference.

The output of the multiplier is:

$$x = v_iv_o = A_i\sin(\omega_ot + \theta_i)\,A_o\cos(\omega_ot + \theta_o) = \frac{A_iA_o}{2}\left[\sin(\theta_i - \theta_o) + \sin(2\omega_ot + \theta_i + \theta_o)\right] \tag{P.11}$$

If we now look forward in the PLL block diagram, we can see that this signal passes through the loop filter. If we assume that the loop filter is a lowpass filter that adequately suppresses the high-frequency term of the above equation (not necessarily true in all cases!), then the phase detector output can be written as:

$$\frac{A_iA_o}{2}\sin(\theta_i - \theta_o) = K_0\sin(\theta_i - \theta_o) \tag{P.12}$$

where $K_0 = A_iA_o/2$. Thus, the multiplier used as a phase detector produces an output voltage that is proportional to the sine of the phase difference between the two input sinusoids.
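Eq. (P.11) is just the product-to-sum identity, which is easy to confirm numerically. A Python sketch with illustrative amplitudes and phases:

```python
import numpy as np

# Numerical check of Eq. (P.11): the product of the two PLL sinusoids splits into
# a constant difference term K0*sin(theta_i - theta_o) plus a 2*omega_o term.
# Amplitudes, phases and frequency are illustrative.
Ai, Ao = 2.0, 1.5
th_i, th_o = 0.7, 0.2
fo = 1000.0
t = np.linspace(0, 1 / fo, 2001)

vi = Ai * np.sin(2 * np.pi * fo * t + th_i)
vo = Ao * np.cos(2 * np.pi * fo * t + th_o)
x = vi * vo

expected = (Ai * Ao / 2) * (np.sin(th_i - th_o)
                            + np.sin(2 * 2 * np.pi * fo * t + th_i + th_o))
assert np.allclose(x, expected)

# After lowpass filtering, only the first term survives
K0 = Ai * Ao / 2
```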

PLL Model

A model of the PLL, in terms of phase rather than voltages, is shown below:

[Figure P.7: a PLL model. The input phase $\theta_i(t)$ is compared with the VCO phase $\theta_o(t)$ to form the error $\theta_e(t)$; the phase detector produces $x(t) = K_0\sin(\theta_e)$, which passes through the loop filter $h(t)$ to give $v_{in}(t)$; the VCO integrates $2\pi k_v v_{in}(t)$ to produce $\theta_o(t)$.]

Linear PLL Model

If the PLL is close to lock, i.e. its frequency and phase are close to that of the incoming signal, then we can linearize the model above by making the approximation $\sin\theta_e \approx \theta_e$. With a linear model, we can convert to the s-domain and do our analysis with familiar block diagrams:

[Figure P.8: the linear PLL model in the s-domain. $\Theta_e(s) = \Theta_i(s) - \Theta_o(s)$ is scaled by $K_0$, filtered by the loop filter $H(s)$, and integrated by the VCO block $2\pi k_v/s$ to give $\Theta_o(s)$.]

Note that the integrator in the VCO in the time-domain becomes $1/s$ in the block diagram, thanks to the integration property of the Laplace transform.
Reducing the block diagram gives the PLL transfer function for phase signals:

$$\frac{\Theta_o(s)}{\Theta_i(s)} = \frac{2\pi k_vK_0H(s)}{s + 2\pi k_vK_0H(s)}$$

This is the closed-loop transfer function relating the VCO output phase and the incoming signal's phase.
Loop Filter

The loop filter is designed to meet certain control system performance requirements. One of those requirements is for the PLL to track input sinusoids with constant frequency and phase errors. That is, if the input phase is given by:

$$\theta_i(t) = (\omega_i - \omega_o)t + \phi_i \tag{P.13}$$

then we want:

$$\lim_{t\to\infty}\theta_e(t) = 0 \tag{P.14}$$

The analysis is best performed in the s-domain. The Laplace transform of the input signal is:

$$\Theta_i(s) = \frac{\omega_i - \omega_o}{s^2} + \frac{\phi_i}{s} \tag{P.15}$$
The Laplace transform of the error signal is:

$$\Theta_e(s) = \Theta_i(s) - \Theta_o(s) = \left[1 - T(s)\right]\Theta_i(s) = \frac{s}{s + 2\pi k_vK_0H(s)}\,\Theta_i(s) = \frac{s}{s + 2\pi k_vK_0H(s)}\left[\frac{\omega_i - \omega_o}{s^2} + \frac{\phi_i}{s}\right] \tag{P.16}$$

The final value theorem then gives:

$$\lim_{t\to\infty}\theta_e(t) = \lim_{s\to 0}s\Theta_e(s) = \lim_{s\to 0}\left[\frac{\omega_i - \omega_o}{s + 2\pi k_vK_0H(s)} + \frac{s\,\phi_i}{s + 2\pi k_vK_0H(s)}\right] \tag{P.17}$$

To satisfy Eq. (P.14) we must have:

$$\lim_{s\to 0}H(s) = \infty \tag{P.18}$$

We can therefore choose a simple PI loop filter to meet the steady-state constraints:

$$H(s) = a + \frac{b}{s} \tag{P.19}$$

Obviously, the constants a and b must be chosen to meet other system requirements, such as overshoot and bandwidth.
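As a rough check of Eqs. (P.14) to (P.19), the linearised loop can be simulated directly. The Python sketch below (all gains are illustrative, forward-Euler integration) applies an input with both a frequency and a phase error; the phase error decays to zero, as the final value theorem predicts for the PI filter:

```python
import numpy as np

# Euler simulation of the linearised PLL with PI loop filter H(s) = a + b/s,
# driven by theta_i(t) = dw*t + phi (frequency offset plus phase offset).
# All gains are illustrative.
K0, kv = 1.0, 10.0
a, b = 2.0, 50.0
dw, phi = 2 * np.pi * 5.0, 0.3     # 5 Hz frequency error and a phase offset

dt = 1e-4
t = np.arange(0, 5.0, dt)
theta_o = 0.0
integ = 0.0                        # integrator state of the PI filter
err = np.zeros_like(t)
for i, ti in enumerate(t):
    theta_i = dw * ti + phi
    e = theta_i - theta_o          # linearised phase detector: sin(e) ~ e
    err[i] = e
    integ += b * e * dt            # b/s branch of the loop filter
    vin = K0 * (a * e + integ)     # loop-filter output
    theta_o += 2 * np.pi * kv * vin * dt   # VCO integrates its input

assert abs(err[-1]) < 1e-3         # the loop tracks both errors
```

With these values the closed-loop poles are real and stable, so the error decays without sustained oscillation; a and b would normally be chosen against overshoot and bandwidth specifications.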

FFT - Quick Reference Guide

Definitions

Symbol        Description
T0            time window
f0 = 1/T0     discrete frequency spacing
fs            sample rate
Ts = 1/fs     sample period
N             number of samples

Creating FFTs

Given    Choose    Then
T0       fs        N = T0 fs
T0       N         fs = N f0
N        fs        T0 = N Ts
N        T0        fs = N f0
fs       N         T0 = N Ts
fs       T0        N = T0 fs
MATLAB Code

% Sample rate
fs=1e6;
Ts=1/fs;
% Number of samples
N=1024;
% Time window and fundamental
T0=N*Ts;
f0=1/T0;
% Time vector for specified DSO parameters
t=0:Ts:T0-Ts;
% Frequency vector for specified DSO parameters
f=-fs/2:f0:fs/2-f0;

This code starts off with a given sample rate, and chooses to restrict the number of samples to 1024 for computational speed. The time vector takes samples up to, but not including, the point at t = T0 (since this sample will be the same as the one at t = 0). The frequency vector corresponds to the chosen sample rate and sample number. Note that because of the periodicity of the DFT, the code does not produce a spectrum sample at f = fs/2, since this corresponds to the sample at f = -fs/2.
Interpreting FFTs

Case 1 - one-off ideally sampled waveform
  Derivation of x[n]: the sample weights.
  Interpretation: X[n] gives the values of one period of the FT of the sampled signal (the DTFT) at frequencies nf0. Values between the samples can be recovered by sinc-interpolation of the X[n].

Case 2 - periodic ideally sampled waveform (period T0)
  Derivation of x[n]: the sample weights over one period.
  Action: multiply X[n] by f0 to give the weights of the impulses at frequencies nf0 in one period of the true FT.

Case 3 - one-off continuous waveform
  Derivation of x[n]: values of the waveform at intervals Ts.
  Action: multiply X[n] by Ts to give the values of the true continuous FT at frequencies nf0 (values between the samples can again be recovered by sinc-interpolation).

Case 4 - periodic continuous waveform (period T0)
  Derivation of x[n]: values of the waveform over one period at intervals Ts.
  Action: multiply X[n] by 1/N to give the weights of the impulses at frequencies nf0 in the true FT.

Note: Periodic waveforms should preferably be sampled so that an integral number of samples extends over an integral number of periods. If this condition is not met, then the periodic extension of the signal assumed by the DFT will have discontinuities and produce "spectral leakage". In this case, the waveform is normally "windowed" to ensure it goes to zero at the ends of the period; this will create a smooth periodic extension, but give rise to windowing artefacts in the spectrum.
MATLAB - Quick Reference Guide

General

Ts=0.256;              Assigns the 1 x 1 matrix Ts with the value 0.256. The semicolon prevents the matrix being displayed after it is assigned.
t=0:Ts:T;              Assigns the vector t with values, starting from 0, incrementing by Ts, and stopping when T is reached or exceeded.
N=length(t);           N will equal the number of elements in the vector t.
r=ones(1,N);           r will be a 1 x N matrix filled with the value 1. Useful to make a step.
n=45000*[1 18 900];    Creates a vector ([]) with elements 1, 18 and 900, then scales all elements by 45000. Useful for creating vectors of transfer function coefficients.
wd=sqrt(1-zeta^2)*wn;  Typical formula, showing the use of sqrt; taking the square (^2); and multiplication with a scalar (*).
gs=p.*g;               Performs a vector multiplication (.*) on an element-by-element basis. Note that p*g will be undefined.

Graphing

figure(1);             Creates a new graph, titled "Figure 1".
plot(t,y);             Graphs y vs. t on the current figure.
subplot(211);          Creates a 2 x 1 matrix of graphs in the current figure, and makes the top one current.
title('Complex poles');  Puts the title 'Complex poles' on the current graph.
xlabel('Time (s)');    Puts the label 'Time (s)' on the x-axis.
ylabel('y(t)');        Puts the label 'y(t)' on the y-axis.
semilogx(w,H,'k:');    Makes a graph with a logarithmic x-axis, and uses a black dotted line ('k:').
plot(200,-2.38,'kx');  Plots a point (200,-2.38) on the current figure, using a black cross ('kx').
axis([1 1e5 -40 40]);  Sets the range of the x-axis from 1 to 1e5, and the range of the y-axis from -40 to 40. Note all this information is stored in the vector [1 1e5 -40 40].
grid on;               Displays a grid on the current graph.
hold on;               Next time you plot, it will appear on the current graph instead of a new graph.
Signals and Systems 2014

M.2
Frequency-domain
Code
f=logspace(f1,f2,100);
H=freqs(n,d,w);

Y=fft(y);
Hmag=abs(H);
Hang=angle(H);
X=fftshift(X);

Description
Creates a logarithmically spaced vector from f1 to
f2 with 100 elements.
H contains the frequency response of the transfer
function defined by numerator vector n and
denominator vector d, at frequency points w.
Performs a fast Fourier transform (FFT) on y, and
stores the result in Y.
Takes the magnitude of a complex number.
Takes the angle of a complex number.
Swaps halves of the vector X - useful for displaying
the spectrum with a negative frequency component.

Time-domain

Code                      Description
step(Gcl);                Calculates the step response of the system transfer function Gcl and plots it on the screen.
S=stepinfo(Gcl);          Computes the step-response characteristics of the system transfer function Gcl.
y=conv(m2,Ts*h);          Performs a convolution on m2 and Ts*h, with the result stored in y.
y=y(1:length(t));         Reassigns the vector y by taking elements from position 1 to position length(t). Normally used after a convolution, since convolution produces end effects which we usually wish to ignore for steady-state analysis.
square(2*pi*fc*t,10)      Creates a square wave from -1 to +1, of frequency fc, with a 10 percent duty cycle. Useful for generating a real sampling waveform.
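The conv-then-trim idiom can be illustrated with a quick numeric sketch (Python/NumPy standing in for the MATLAB code; the array lengths, not the values, are the point):

```python
import numpy as np

t = np.arange(8)            # 8 time samples
x = np.ones(8)              # input signal
h = np.array([0.5, 0.5])    # short impulse response

# Full convolution has length N + M - 1 = 8 + 2 - 1 = 9; the extra
# trailing samples are the "end effects" mentioned above.
y_full = np.convolve(x, h)
y = y_full[:len(t)]         # MATLAB equivalent: y = y(1:length(t));

print(len(y_full), len(y))  # 9 8
```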

Control

Code                      Description
Gmrv=tf(Kr,[Tr 1]);       Creates the transfer function Gmrv = Kr/(1 + sTr).
I=tf(1,[1 0]);            Creates the transfer function I = 1/s, an integrator.
Gcl=feedback(Gol,H);      Creates the transfer function Gcl = Gol/(1 + Gol*H).
rlocus(Gol);              Makes a root locus using the system Gol.
K=rlocfind(Gol);          An interactive command that allows you to position the closed-loop poles on the root locus. It returns the value of K that puts the roots at the chosen location.
Gd=c2d(G,Ts,'tustin');    Creates a discrete-time equivalent system Gd of the continuous-time system G, using a sample rate of Ts and the bilinear ('tustin') method of discretization.
m=csvread('test.csv');    Reads from a CSV file into a matrix.
Matrices - Quick Reference Guide
Definitions

Symbol                                          Description
aij                                             Element of a matrix. i is the row, j is the column.
A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]     A is the representation of the matrix with elements aij.
  = [aij]
x = [x1; x2; x3]                                x is a column vector with elements xi.
0 = [0 0 0; 0 0 0; 0 0 0]                       Null matrix, every element is zero.
I = [1 0 0; 0 1 0; 0 0 1]                       Identity matrix, diagonal elements are one.
λI = [λ 0 0; 0 λ 0; 0 0 λ]                      Scalar matrix.
Λ = [λ1 0 0; 0 λ2 0; 0 0 λ3]                    Diagonal matrix, aij = 0 for i ≠ j.

Multiplication

Operation      Description
Z = kY         Multiplication by a scalar: zij = k yij
z = Ax         Multiplication by a vector: zi = Σk aik xk
Z = AB         Matrix multiplication: zij = Σk aik bkj
AB ≠ BA        In general, matrix multiplication is not commutative.
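A quick numeric check of the last row, sketched in Python/NumPy (used here only as an illustrative stand-in for MATLAB) with two arbitrary 2 x 2 matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

AB = A @ B   # post-multiplying by this B swaps the columns of A
BA = B @ A   # pre-multiplying by this B swaps the rows of A

print(np.array_equal(AB, BA))  # False: AB != BA in general
```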

Operations

Terminology                                     Description
Aᵗ = [a11 a21 a31; a12 a22 a32; a13 a23 a33]    Transpose of A (interchange rows and columns): aᵗij = aji.
|A| = det A                                     Determinant of A.
                                                If |A| = 0, then A is singular.
                                                If |A| ≠ 0, then A is non-singular.
Minor of aij                                    Delete the row and column containing the element aij and obtain a new determinant.
Aij = (−1)^(i+j) × (minor of aij)               Cofactor of aij.
adj A = [A11 A21 A31; A12 A22 A32;              Adjoint matrix of A. Replace every element aij by its cofactor in |A|, and then transpose the resulting matrix.
  A13 A23 A33]
A⁻¹ = adj A / |A|                               Reciprocal of A: A⁻¹A = AA⁻¹ = I.
                                                Only exists if A is square and non-singular.
                                                Formula is only used for 3x3 matrices or smaller.
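The adjoint route to the inverse can be checked numerically. The sketch below (Python/NumPy, chosen only so the cofactor arithmetic is explicit and runnable; the 3 x 3 matrix is arbitrary) builds adj A from the cofactors exactly as described above and verifies A⁻¹A = I:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

# Cofactor matrix: C[i, j] = (-1)^(i+j) * minor(i, j)
C = np.zeros_like(A)
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

adjA = C.T                        # adjoint = transposed cofactor matrix
A_inv = adjA / np.linalg.det(A)   # A^-1 = adj A / |A|

print(np.allclose(A_inv @ A, np.eye(3)))  # True
```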

Linear Equations

Terminology                                     Description
a11 x1 + a12 x2 + a13 x3 = b1                   Set of linear equations written explicitly.
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
[a11 a12 a13; a21 a22 a23; a31 a32 a33]         Set of linear equations written using matrix elements.
  [x1; x2; x3] = [b1; b2; b3]
Ax = b                                          Set of linear equations written using matrix notation.
x = A⁻¹b                                        Solution to set of linear equations.
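For example, solving a small system both ways (a Python/NumPy sketch with an arbitrary 2 x 2 system; solve is numerically preferable to forming A⁻¹ explicitly):

```python
import numpy as np

# 2*x1 + 1*x2 = 5
# 1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)        # solves A x = b directly
x_inv = np.linalg.inv(A) @ b     # the x = A^-1 b formula, literally

print(x.tolist())                # [1.0, 3.0]
print(np.allclose(x, x_inv))     # True
```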

Eigenvalues

Equation        Description
Ax = λx         λ are the eigenvalues; x are the column eigenvectors.
|A − λI| = 0    Finding eigenvalues.
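A numeric check of the defining equation Ax = λx (Python/NumPy sketch; the upper-triangular matrix is an arbitrary example whose eigenvalues are its diagonal entries):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# lams[k] are the eigenvalues; the columns of V are the eigenvectors
lams, V = np.linalg.eig(A)

# Each pair satisfies A x = lambda x
ok = all(np.allclose(A @ V[:, k], lams[k] * V[:, k]) for k in range(2))
print(sorted(np.round(lams.real, 6).tolist()), ok)  # [2.0, 3.0] True
```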

Answers
1A.1

t 2 2k
(a) g t sin t rect
, T0 2 , P

(b) g t

1
4

t 1 3k
t 34 3k t 1 3k
t 34 3k

2
rect
t
3
k
rect
rect

1
1
1
1
k
4
2
4
2

P 169

T0 3 ,

(c) g t

t 10 k 10

1k rect t 5 10k , T0 20 ,

10

1
2

1 e
2

t 4k
(d) g t 2 cos200t rect
, T0 4 , P 1
2
k

1A.2
t 2
t 74 t 1
t 43 t 1
t 43 t 2
t 74
rect

rect

rect

rect

1 1
3 1
3 1
1
1
10
2 10
2 10
2 10
2

(a) g t

E 83 13

t
(b) g t cos t rect , E 2

(c) g t t 2 rectt 21 t 2 2 rectt 25 , E


t 25
t 25
5
rect

3 rect t 2
5

(d) g t rect

2
5

E 19

1A.3
(i) 0

(ii) e 3 (iii) 5 (iv) f t 1 t 2 (v) 1

(vi) f t t 0

1A.4
Let t T . Then:

t t0
f t
dt
T

or

if T 0

if T 0

all T

f T t 0 T Td
f T t 0 T Td
f T t 0 T T d

T f t0

(using the sifting property)

which is exactly the same result as if we started with T t t 0 .

1A.5
45

(a)

X 2715

(b)

X 510

(e) x t 29.15 cost 22.17

(f)

(h) X 130 , X * 1 30

(i) -2.445

(c)

x t 100 cost 60

100

(g)

(d)

X 5596
. 59.64

(j) X 15
. 30 , X * 15
. 30


8 3, 8

1B.1
a), b), c) [Graphical answers: plots of x(t) versus time (s) for cases (i)-(iv).]
Sampling becomes more accurate as the sample time becomes smaller.

1B.2
a), b) [Graphical answers: stem plots of y1[n] and y2[n] versus n.]

1B.3
[Graphical answer: stem plot of a[n] versus n.]

1B.4
y[n] = 1 for n = 1, 3, 5, ...; 0 for all other n

1B.5
a) y[n] = y[n−1] + y[n−2]

1B.6
(i), (ii) [Graphical answers: block diagrams built from a delay D, adders and gains (2 and 3).]

1B.7
(i) y[n] = y[n−1] + 2y[n−2] + 3x[n−1]
(ii) y[0] = 3, y[1] = 8, y[2] = 1, y[3] = 14

1B.8
(a) (i) h[0] = T/2, h[n] = T for n = 1, 2, 3, ...
    (ii) h[0] = 1, h[n] = 0.25(0.5)^(n−1) for n = 1, 2, 3, ...
(b) y[−1] = 1, y[0] = 0.75, y[1] = 1.375, y[2] = 1.0625, y[3] = 1.21875

1B.9
(i) (a) a1 = a4 = 0   (b) a5 = 0
(ii) (a) a1 = a4 = 0   (b) a5 = 0

1B.10
y1[0] = 1, y1[1] = 3, y1[2] = 7, y1[3] = 15, y1[4] = 31, y1[5] = 63
y[0] = 4, y[1] = 12, y[2] = 26, y[3] = 56, y[4] = 116, y[5] = 236
No, since they rely on superposition.

1B.11
y[n] = 2δ[n] + δ[n−2] + 2δ[n−4] + δ[n−6]

1B.12
(i), (ii), (iii) [Graphical answers: stem plots versus n.]

1B.14
The describing differential equation is:

d²vo/dt² + (R/L)·dvo/dt + (1/LC)·vo = (1/LC)·vi

The resulting discrete-time approximation is:

y[n] = (2 − RT/L)·y[n−1] − (1 − RT/L + T²/LC)·y[n−2] + (T²/LC)·x[n−2]

[Graphical answer: plot of y(t) versus time (s).]
Use T = 0.1 so that the waveform appears smooth (any value smaller has a minimal effect in changing the solution).
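The recurrence in 1B.14 can be iterated directly. The sketch below is Python rather than the notes' MATLAB, and uses placeholder values R = L = C = 1 and T = 0.1 (not values from the problem); it shows the step response settling at the DC gain of 1:

```python
# y[n] = (2 - RT/L) y[n-1] - (1 - RT/L + T^2/(LC)) y[n-2] + (T^2/(LC)) x[n-2]
R, L, C, T = 1.0, 1.0, 1.0, 0.1      # placeholder component values
a1 = 2 - R * T / L
a2 = -(1 - R * T / L + T**2 / (L * C))
b2 = T**2 / (L * C)

x = [1.0] * 400                      # unit-step input
y = [0.0, 0.0]                       # zero initial conditions
for n in range(2, len(x)):
    y.append(a1 * y[n - 1] + a2 * y[n - 2] + b2 * x[n - 2])

print(round(y[-1], 3))               # 1.0 -- settles at the DC gain
```

The steady-state value follows because 1 − a1 − a2 = T²/LC = b2, so y(∞) = x(∞).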

1B.15
(a) dy/dt + Ky = Kx
(b) h(t) = K·e^(−Kt)·u(t)
(c) y(t) = 0.75(1 − e^(−0.5(t−4)))·u(t−4)
(d), (e) [Graphical answers: plots of y(t) versus time, rising to 0.75.]

1B.16
The impulse response is h(t) = t·e^(−t)·u(t). Using numerical convolution will only give the ZSR. The ZIR needs to be obtained using other methods and is e^(−t)·u(t). The code snippet below shows relevant MATLAB code:

t=0:Ts:To-Ts;
h=t.*exp(-t);
x=sin(t);
yc=conv(x,h*Ts);        % ZSR
yc=yc(1:N)+exp(-t);     % total solution = ZSR + ZIR

1B.17
[Graphical answer: sketch of the convolution result versus t.]

1B.18
The output control signal is smoothed, since it is not changing as rapidly as the
input control signal. It is also delayed in time.

[Graphical answers: plots of x(t), h(t) and y(t) versus time (sec).]

1B.19
[Graphical answers: plots of x[nT], si[nT], h[nT] and y[nT] versus t (s).]

Note that the filter effectively removes the added noise; however, it also
introduces a time delay of between three and four samples.

2A.1
(a), (b), (c) [Graphical answers: plots of |G|, ∠G, Re{G} and Im{G} versus frequency.]

2A.2
G = 4∠45°

2A.3
g(t) = 4 cos(200πt)

2A.4
[Graphical answers: phasor diagrams of G and its conjugate G* at t = 0, 1 and 2.]
2A.5
(a) Gn

1
j n sinc n 1 sinc n 1
4
2
2

[Stem plots of the magnitude and phase of Gn versus n.]

1
(b) Gn sincn 2sincn 6
2

[Stem plots of the magnitude and phase of Gn versus n.]

(c) Gn

1 1 1
2
0.1 jn

10

[Stem plots of the magnitude and phase of Gn versus n.]

(d) Gn

1
n
n
sinc 8 sinc 8

4
2
2

[Stem plots of the magnitude and phase of Gn versus n.]

sincn 1 1
(e) Gn
and the special case of n 0 can be evaluated by
j 2n
n

applying l'Hôpital's rule or from first principles: G0 = 1/2.

[Stem plots of the magnitude and phase of Gn versus n.]

2A.6
P = 4.294 W

2A.7
(a) P = 73%   (b) P = 60%   (c) P = 50%
[Graphical answers: magnitude spectra |G| for the three cases.]

2A.8
5 T0

n
Note: Gn Asinc 2 Asincn
2

2B.1
x(t) = 4 cos(2000πt + 30°), P = 8

2B.2
Xf

sinc f e jf sinc2 f e j 6f
jf

2B.3
5 10 cos3f 5 cos4f
2f 2

(a)

(b)

sincf 12

sincf 12

(c)

1 jf e

(d)

5sinc5 f 3sinc3 f sinc f e j 5f

jf

sinc f cos2f e j 3f
2f 2

2B.4
Hint: e

a t

e at u t e at u t

2B.5
3P 1.5 f

2B.6
This follows directly from the time shift property.

2B.7
G1 f

a
1
1
f , g1 t g 2 t 1 e at u t
, G2 f
j 2f 2
a j 2f

2B.8

[Graphical answer: sketch of G1(f) ∗ G2(f), with peak values 2Ak and Ak near f = ±2f0.]

2B.9
(a)

2 Af 0 sinc2 f 0 t t 0

(b)

2 Af 0 sinc f 0 t sin f 0 t

2B.10
j 2 A sin 2 fT
f

2B.11
Asinc4f cos4f
jf

3A.1
0.2339 0

3A.2
X f A sinc 2 f

3A.3
B4

3A.4
By passing the signal through a lowpass filter with 4 kHz cutoff - provided the
original signal contained no spectral components above 4 kHz.

3A.5
Periodicity.

3A.6
(a) 20 Hz, 40 Hz, P = 0.1325 W
(b) G3 0.5e

j 3

e j120t 0.5e 3 e j120t , 0.9781

(c)
Harmonic #   Amplitude   Phase (deg)
0            1           -
1            3           -66
2            1           -102
3            0.5         -168
4            0.25        -234

Yes.

3A.7
Yes. The flat topped sampling pulses would simply reduce the amplitudes of
the repeats of the baseband spectrum as one moved along the frequency axis.
For ideal sampling, with impulses, all repeats have the same amplitude. Note
that after sampling the pulses have tops which follow the original waveform.
3A.8
Truncating the 9.25 kHz sinusoid has the effect of convolving the impulse in
the original transform with the transform of the window (a sinc function for a
rectangular window). This introduces leakage which will give a spurious
9 kHz component.

3B.1
[Graphical answer: spectrum G(f) with imaginary impulses (±j/2, ±j/4) at f = ±9, ±10, ±11.]

3B.2
[Graphical answers: spectra G(f) at points A, B and C.]

3B.3
[Graphical answer: demodulator block diagram, multiplication by cos(2πfct) followed by a lowpass filter.]
4A.1

s 1 R1C1 s 1 R2C2
1 R1C1 1 R2 C 2 1 R2 C1 s 1 R1 R2 C1C 2

a)

1 RC
s 1 RC

c)

1 R2 C1 s R1 L1
1 2 s R L
d) 2
sR L
s 1 R2 C1 R1 L1 s R1 R2 R2 L1C1

b)

4A.2

a) I s sL R
Li 0 E s

sC

dx 0
3

b) X s Ms Bs K M sx 0
Bx 0 2

dt
s

10
d 0

c) s Js 2 Bs K J s 0
B 0 2
dt
s 2

4A.3
a)

f t

5
7 t
9
e cos 3t tan 1


4 2
3

b)

f t

1
cost cos2t
3

c)

f t

1 2
15
15

t 8t e 2t 7t

40
2
2

4A.4
a) f 5 4

b), c) The final value theorem does not apply. Why?

4A.5
T s

1 R1C 2 s
s 1 R1C1 s 1 R2C2

4A.6
y t 5e t cos2t 26.6u t

4A.7
a) (i) G/(1 + GH)   (ii) 1/(1 + GH)   (iii) GH/(1 + GH)
b) All denominators are the same.

4A.8
a)

Y G1 G1G3 H 3 G1G2 H 2 G1G2 G3 H 2 H 3 G1G2 G1G2 G3 H 3

R 1 G2 H 2 G1G3 G1G2 G3 H 2 G1G2 G3 G3 H 3 G2 G3 H 2 H 3

ab

ac 1
b) X 5
X1
Y
1 bd ac
1 bd ac

4A.9
a)

G1G2 G3G4
C

R 1 G3G4 H 1 G2 G3 H 2 G1G2 G3G4 H 3

b)

Y
AE CE CD AD

X 1 AB EF ABEF

4B.1
Yes, by examining the pole locations for R, L, C > 0.

4B.2

(i)

y t 2 1 e 4t u t

(ii)

y t 2t 12 12 e 4t u t

(iii)

8
cos 2t tan 1 2 e 4t u t
y t
5
5

(iv)

40 4t
2
y t
e u t
cos10t tan 1
5

29
29

4B.3
(i)

4 2t

underdamped, y t 2
e sin 2 3t cos 1 0.5 u t ,
3

ht

16 2t
e sin 2 3t
3

(ii)

critically damped, y t 2 21 4t e 4t u t , ht 32te 4t

(iii)

overdamped, y t 2 83 e 2t 23 e 8t u t , ht 163 e 2t e 8t

4B.4
(a) y ss t u t (b) y ss t

10 170
1
cos t tan 1 u t
17
13

5A.1
a) 2 kHz

b) 2.5 Hz

5A.2
a) 0 dB

b) 32 dB

c) 6 dB

5A.3
a)-f) [Graphical answers: magnitude responses |H(w)| in dB and phase responses arg H(w) in degrees, versus w (rad/s).]
5A.5
[Graphical answer: magnitude and phase response versus w (rad/s).]
5A.6
[Graphical answer: magnitude and phase response versus w (rad/s).]
5A.7
a) [Graphical answer: Bode magnitude and phase plots.]
   (i) +28 dB, −135°   (ii) +5 dB, −105°   (iii) −15 dB, −180°   (iv) −55 dB, −270°
b) [Graphical answer: Bode magnitude and phase plots.]
   (i) +35 dB, +180°   (ii) −5 dB, +100°   (iii) −8 dB, +55°   (iv) −28 dB, +90°
5A.8
a) G1(s) = 450 / [s(s + 25)(s + 20)]
[Graphical answer: Bode magnitude and phase plots.]
b) G2(s) = 10s / [(s + 0.2)(s + 10)]
[Graphical answer: Bode magnitude and phase plots.]
5A.9
a) [Graphical answer: Bode magnitude and phase plots.] Bandwidth is 0 to 0.2 rad s⁻¹.
b) [Graphical answer: Bode magnitude and phase plots.] Bandwidth is increased to 15 rad s⁻¹ but is now peakier.
5A.10
(i) 3.3 dB, 8 (ii) 17.4 dB, 121

5A.11
[Graphical answer: Bode magnitude and phase plots.]

H(s) = 45000(s² + 18s + 900) / [(s + 90)²(s + 1000)]
5B.1
a) 12.3%   b) tp = 1.05 s   c) ζ = 0.555   d) ωn = √13 rad s⁻¹
e) ωd = 3 rad s⁻¹   f) tr = 0.72 s   g) ts = 1.5 s   h) ts = 1.95 s

5B.2
G(s) = 196 / (s² + 19.6s + 196)

5B.3
c(t) = 1 − (e^(−ζωₙt) / √(1 − ζ²)) · sin(ωₙ√(1 − ζ²)·t + cos⁻¹ζ)

5B.4
a) ζ = ln(a/b) / √(4π² + [ln(a/b)]²)
b) ζ = 0.247, ωₙ = 16.21 rad s⁻¹

5B.5
(a) (i)-(iii) [Graphical answers: pole-zero plots in the s-plane.]
(b) (i)-(iii) [Graphical answers: step responses showing the 100% and 63.2% levels.]
(c) The pole at −1.

5B.6
G1(s) = 1.299 / (s + 0.6495)
[Graphical answer: step responses of the first- and second-order systems.]

5B.7
a) ωₙ = 3, ζ = 1/6
b) 58.8%
c) 0
d) 1/9

5B.8
a) 0

b) 3

5B.9
a) T = 10 sec
b) (i) T = 9.09 sec   (ii) T = 5 sec   (iii) T = 0.91 sec
Feedback improves the time response.

6A.3
a) S

T
K1

b) S

T
K2

s2 s
c) S 2
s s 1000

1000
2
s s 1000

T
G

6A.4
a)
open-loop S KT a 1
open-loop S KT1 0
closed-loop S KT a

1
1

for n
1 K 1 K a G s 1 K 1 K a

closed-loop S KT1

K1 K a
1 for K1 K a 1
1 K1 K a

b)
open-loop

s
G s 1 for n
Td s

closed-loop

G s
s
1

for n
Td s 1 K1 K a G s 1 K1 K a

6A.5
1
1 K P K1

(ii)

(iii)

K1
1 K P K1

a)

(i)

b)

(i) 0

(ii)

1
K I K1

(iii) 0

c)

(i) 0

(ii)

1
K P K1

(iii)

d)

(i) 0

(ii) 0

1
KP

(iii) 0

(iv)

1 K1
1 K P K1

(iv) 0

(iv)

1
KP

(iv) 0

The integral term in the compensator reduces the order of the error,
i.e. infinite values turn into finite values, and finite values become zero.

7A.1
(i), (ii) [Graphical answers: block-diagram realizations using delays z⁻¹, z⁻², z⁻⁴ and gains 2 and 3.]
7A.2
(iii) y[n] = y[n−1] + 2y[n−2] + 3x[n−1]
(iv) y[0] = 3, y[1] = 8, y[2] = 1, y[3] = 14

7A.3
(a) h[n] = (1/3)ⁿ u[n]

(b) yn 3 n 3 2 un 7 2 1 3 un
n

or yn 3 2 un 1 2 1 3 un 1 3 un 1
n 1

or yn 3 1 3 un 1 2 3 1 3
n

n2

un 2

(the responses above are equivalent to see this, graph them or rearrange
terms)

7A.4
h[n] = h1[n] ∗ h2[n]

7A.5
h0 h1 0

h1 h1 1 h2 1h12 0
h2 h1 2 h22 1h13 0 2h1 0h1 1h2 1 h12 0h2 2

7A.6
(a) (i) F(z) = z/(z − 1)²   (ii) F(z) = 1/(z − a)
    (iii) F(z) = z(z + a)/(z − a)³   (iv) F(z) = z/(z − a)²
(b) X(z) = 2z²/(z² − 1)

7A.7
(a) zeros at z = 0, −2; poles at z = 1/3, −1; stable
(b) fourth-order zero at z = 0; poles at z = ±j/√2; stable
(c) zeros at z = ±j2/√5; poles at z = 3 ± √6; unstable
7A.8
H z

3 z 1 2 z 4
1 z 1 z 2

7A.9
x[n] = 0, 14, 10.5, 9.125, ...
7A.10
xn 81 4 un 81 2 un 8un 24 n
n

7A.11
(a) xn

1 a n1
for n 0, 1, 2,...
1 a

(b) xn 3 n 2 n 1 6 n 4 for n 0, 1, 2,...


(c) xn 1 e anT for n 0, 1, 2,...
(d) xn 8 81 2 6n1 2 for n 0, 1, 2,...
n

(e) xn 21 2

n4

(f) xn 2

for n 0, 1, 2,...

3 sin 2n 3 for n 0, 1, 2,...

7A.12
(a) hn

2 n1 1
n
n
1 4 1 2 1 8 1 4
2 2 n 3

(b) H z

1 8 z2
z 1 2z 1 4

(c) yn 1 3 1 24 1 4 1 4 1 2
n

(d) yn 1 2 1 2 1 18 1 4 n 3 4 9
n

7A.13

yn 1

1
0.5 1.25
5

n 3

1
0.5 1.25
5

n 3

7A.14
yn

n
n
1
0.5 1.25 0.5 1.25


for n 0, 1, 2,...

7A.15
F(z) = zTe⁻ᵀ / (z − e⁻ᵀ)²

7A.16

z 1 k
k z 1
(a) X 3 z
X 1 z
Dz

k 1z 6k 1
k 1z 6k 1

7A.17
(a) f[0] = 0, f[∞] = 0   (b) f[0] = 1, f[∞] = 0
(c) f[0] = 1/4, f[∞] = 1/(1 − a)
(d) f[0] = 0, f[∞] = 8

7A.18
(i) (n + 1)u[n]
(ii) δ[n] + 2δ[n−1] + 3δ[n−2] + 2δ[n−3] + δ[n−4]

7A.19
f[n] = 8 − 8(1/2)ⁿ + 6n(1/2)ⁿ, n ≥ 0

7B.1
y[n] = 0.7741y[n−1] + 20x[n] − 18.87x[n−1]

7B.2
y[n] = 0.7730y[n−1] + 17.73x[n] − 17.73x[n−1]

7B.3
Hd(z) = 0.1670(z + 0.3795) / [(z − 0.3679)(z − 0.1353)]

8B.1
(a) [Graphical answer: root locus.]
(b) 0 < K < 6, ω = 2.82 rad s⁻¹
(c) K = 0.206
(d) ts = 10.7 s

8B.2
(a) One pole is always in the right-half plane. [Graphical answers: root loci for K > 0 and K < 0.]
(b) [Graphical answers: root loci for K > 0 and K < 0.]
The system is always unstable because the pole pair moves into the right-half plane (s = ±j3.5) at a lower gain (K = 37.5) than that for which the right-half plane pole enters the left-half plane (K = 48). The principle is sound, however. A different choice of pole-zero locations for the feedback compensator is required in order to produce a stable feedback system.
For K < 0, the root in the right-half plane stays on the real-axis and moves to the right. Thus, negative values of K are not worth pursuing.
For the given open-loop pole-zero pattern, there are two different feasible locus paths, one of which includes a nearly circular segment from the left-half s-plane to the right-half s-plane. The relative magnitude of the poles and zeros determines which of the two paths occurs.
[Graphical answers: the two feasible root-locus paths.]
8B.3
(a) [Graphical answer: root locus.]
(b) a = 32/27
(c) 0 < a < 16 for stability. For a = 16, the frequency of oscillation is 2 rad s⁻¹.
8B.4
(a) [Graphical answers: root-locus sketches in the s-plane for K > 0 and K < 0.]
(b) K = 2/5, ωₙ = √3 rad s⁻¹
(c) Any value of K < 2/5 gives zero steady-state error. [The system is type 1. Therefore any value of K will give zero steady-state error provided the system is stable.]
(d) K = 0.1952, tr = 1.06 s
9A.1
1

C R R3
vC
q 1, A 1 2
R2
iL1
L R R
3
1 2

R2

0
C1 R2 R3
, b 1

1 R2 R3
L

R1
1
L1 R2 R3

9A.2
(a) false   (b) false   (c) false
[all true for x(0) = 0]

9A.3

(a)

2
q 0
2
y 2 2

0
3

0 0 q 1 x
0
2 2

(b)

2
q 1
0
y 0 0

(b)

H s

1q 0x

1 1
1

1 0 q 0 x
0
1 2
1q 0x

9A.4
(a)

H s 8

s 12 s 3
2
s s 2

1
s 5s 9 s 7
3

9A.5
s 3
3s 1
2
2

q
t
1
s s 3
-1 s 3
q t L 12
2s 6

2
2
2
s 3 s s 3

1 4 cos 3t 3 sin 3t

u t
2 2 cos 3t 14 sin 3t

9B.1
1 1 , 2 2 , 3 3

9B.2
1 e t
(a) 1, 2 (b) qt
t
e

9B.3
(ii) 1, 1 j 2 (iii) k1 75 , k 2 49 , k 3 10

(iv)

y ss t 1 40

(v) State feedback can place the poles of any system arbitrarily (if the system is
controllable).

9B.4
0 0 1 16
3 16

qn 1 1 0 1 4 qn 1 8 xn
0 1 1 4
1 2
yn 0 0 1qn 3 xn

9B.6
1 0
0
a
qn 1 k
0 k qn k xn
b c
b c 0 c
yn 1 0 0qn
