Signals and Systems: Lecture Notes
2014
[Cover figures: a feedback control block diagram of a maze rover system (reference R(s), error E(s), controller G(s), plant M(s) with integrator 1/s, output C(s), feedback H(s)), and a three-op-amp active filter circuit (input Vi, amplifiers A1-A3, output Vo).]
PMcL
Contents
LECTURE 1A SIGNALS
OVERVIEW ..........................................................................................................1A.1
SIGNAL OPERATIONS ...........................................................................................1A.2
CONTINUOUS AND DISCRETE SIGNALS ................................................................1A.3
PERIODIC AND APERIODIC SIGNALS ....................................................................1A.4
DETERMINISTIC AND RANDOM SIGNALS .............................................................1A.6
ENERGY AND POWER SIGNALS ............................................................................1A.7
COMMON CONTINUOUS-TIME SIGNALS.............................................................1A.10
THE CONTINUOUS-TIME STEP FUNCTION .....................................................1A.10
THE RECTANGLE FUNCTION .........................................................................1A.13
THE STRAIGHT LINE .....................................................................................1A.15
THE SINC FUNCTION .....................................................................................1A.20
THE IMPULSE FUNCTION ...............................................................................1A.22
SINUSOIDS .........................................................................................................1A.26
WHY SINUSOIDS? ..............................................................................................1A.26
REPRESENTATION OF SINUSOIDS ..................................................................1A.28
RESOLUTION OF AN ARBITRARY SINUSOID INTO ORTHOGONAL FUNCTIONS 1A.30
REPRESENTATION OF A SINUSOID BY A COMPLEX NUMBER .........................1A.31
FORMALISATION OF THE RELATIONSHIP BETWEEN PHASOR AND SINUSOID ..1A.33
EULER'S COMPLEX EXPONENTIAL RELATIONSHIPS FOR COSINE AND SINE ..1A.34
A NEW DEFINITION OF THE PHASOR .............................................................1A.36
GRAPHICAL ILLUSTRATION OF THE RELATIONSHIP BETWEEN THE TWO TYPES OF
PHASOR AND THEIR CORRESPONDING SINUSOID ..........................................1A.37
NEGATIVE FREQUENCY ................................................................................1A.38
COMMON DISCRETE-TIME SIGNALS ..................................................................1A.39
THE DISCRETE-TIME STEP FUNCTION ..........................................................1A.40
THE UNIT-PULSE FUNCTION .........................................................................1A.41
SUMMARY .........................................................................................................1A.42
QUIZ ..................................................................................................................1A.43
EXERCISES ........................................................................................................1A.44
LEONHARD EULER (1707-1783) ........................................................................1A.47
LECTURE 1B SYSTEMS
LINEAR DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS ................. 1B.1
INITIAL CONDITIONS ...................................................................................... 1B.1
FIRST-ORDER CASE ....................................................................................... 1B.2
SYSTEM MODELLING .......................................................................................... 1B.3
ELECTRICAL CIRCUITS ................................................................................... 1B.3
MECHANICAL SYSTEMS ................................................................................. 1B.3
DISCRETE-TIME SYSTEMS ................................................................................... 1B.4
LINEAR DIFFERENCE EQUATIONS WITH CONSTANT COEFFICIENTS .................... 1B.5
SOLUTION BY RECURSION .............................................................................. 1B.5
COMPLETE SOLUTION .................................................................................... 1B.5
FIRST-ORDER CASE ....................................................................................... 1B.5
DISCRETE-TIME BLOCK DIAGRAMS.................................................................... 1B.6
DISCRETIZATION IN TIME OF DIFFERENTIAL EQUATIONS ................................... 1B.7
FIRST-ORDER CASE ....................................................................................... 1B.7
SECOND-ORDER CASE .................................................................................... 1B.8
CONVOLUTION IN LINEAR TIME-INVARIANT DISCRETE-TIME SYSTEMS.............. 1B.9
FIRST-ORDER SYSTEM ................................................................................... 1B.9
UNIT-PULSE RESPONSE OF A FIRST-ORDER SYSTEM ................................... 1B.10
GENERAL SYSTEM ....................................................................................... 1B.10
SYSTEM MEMORY ........................................................................................ 1B.14
SYSTEM STABILITY ...................................................................................... 1B.15
CONVOLUTION IN LINEAR TIME-INVARIANT CONTINUOUS-TIME SYSTEMS ...... 1B.15
GRAPHICAL DESCRIPTION OF CONVOLUTION ................................................... 1B.18
PROPERTIES OF CONVOLUTION ......................................................................... 1B.23
NUMERICAL CONVOLUTION ............................................................................. 1B.23
CONVOLUTION WITH AN IMPULSE..................................................................... 1B.26
SUMMARY ........................................................................................................ 1B.28
EXERCISES ........................................................................................................ 1B.29
GUSTAV ROBERT KIRCHHOFF (1824-1887) ...................................................... 1B.37
LECTURE 2A FOURIER SERIES, SPECTRA
ORTHOGONALITY ................................................................................................2A.1
ORTHOGONALITY IN MATHEMATICS ..............................................................2A.1
THE INNER PRODUCT ......................................................................................2A.4
ORTHOGONALITY IN POWER SIGNALS ............................................................2A.6
ORTHOGONALITY IN ENERGY SIGNALS ..........................................................2A.7
THE TRIGONOMETRIC FOURIER SERIES ...............................................................2A.8
THE COMPACT TRIGONOMETRIC FOURIER SERIES ............................................2A.14
THE SPECTRUM .................................................................................................2A.18
THE COMPLEX EXPONENTIAL FOURIER SERIES .................................................2A.22
HOW TO USE MATLAB TO CHECK FOURIER SERIES COEFFICIENTS...............2A.29
SYMMETRY IN THE TIME DOMAIN .....................................................................2A.31
EVEN SYMMETRY ..............................................................................................2A.31
ODD SYMMETRY ...............................................................................................2A.35
HALF-WAVE SYMMETRY ..................................................................................2A.39
POWER ..............................................................................................................2A.42
FILTERS .............................................................................................................2A.45
RELATIONSHIPS BETWEEN THE THREE FOURIER SERIES REPRESENTATIONS.....2A.49
SUMMARY .........................................................................................................2A.50
QUIZ ..................................................................................................................2A.51
EXERCISES ........................................................................................................2A.52
JOSEPH FOURIER (1768-1830) ..........................................................................2A.55
LECTURE 3A FILTERING AND SAMPLING
INTRODUCTION ................................................................................................... 3A.1
RESPONSE TO A SINUSOIDAL INPUT .................................................................... 3A.1
RESPONSE TO AN ARBITRARY INPUT .................................................................. 3A.7
PERIODIC INPUTS ........................................................................................... 3A.7
APERIODIC INPUTS ......................................................................................... 3A.8
IDEAL FILTERS .................................................................................................. 3A.10
PHASE RESPONSE OF AN IDEAL FILTER ........................................................ 3A.11
WHAT DOES A FILTER DO TO A SIGNAL? ........................................................... 3A.16
SAMPLING......................................................................................................... 3A.19
SAMPLING AND PERIODICITY ............................................................... 3A.22
RECONSTRUCTION ............................................................................................ 3A.23
ALIASING .......................................................................................................... 3A.25
PRACTICAL SAMPLING AND RECONSTRUCTION ................................................ 3A.27
SUMMARY OF THE SAMPLING AND RECONSTRUCTION PROCESS ....................... 3A.28
FINDING THE FOURIER SERIES OF A PERIODIC FUNCTION FROM THE FOURIER
TRANSFORM OF A SINGLE PERIOD................................................................ 3A.32
WINDOWING IN THE TIME DOMAIN .................................................................. 3A.32
PRACTICAL MULTIPLICATION AND CONVOLUTION ........................................... 3A.37
SUMMARY ........................................................................................................ 3A.39
QUIZ ................................................................................................................. 3A.41
EXERCISES ........................................................................................................ 3A.42
LECTURE 3B AMPLITUDE MODULATION
INTRODUCTION ................................................................................................... 3B.1
THE COMMUNICATION CHANNEL .................................................................. 3B.2
ANALOG AND DIGITAL MESSAGES................................................................. 3B.3
BASEBAND AND CARRIER COMMUNICATION ................................................. 3B.3
MODULATION ..................................................................................................... 3B.3
EASE OF RADIATION ...................................................................................... 3B.5
SIMULTANEOUS TRANSMISSION OF SEVERAL SIGNALS .................................. 3B.5
DOUBLE-SIDEBAND, SUPPRESSED-CARRIER (DSB-SC) MODULATION .............. 3B.6
DEMODULATION ............................................................................................ 3B.9
SUMMARY OF DSB-SC MODULATION AND DEMODULATION ...................... 3B.11
AMPLITUDE MODULATION (AM) ..................................................................... 3B.12
ENVELOPE DETECTION ................................................................................ 3B.13
SINGLE SIDEBAND (SSB) MODULATION ........................................................... 3B.14
QUADRATURE AMPLITUDE MODULATION (QAM) ........................................... 3B.15
COHERENT DEMODULATION ........................................................................ 3B.17
SUMMARY ........................................................................................................ 3B.18
EXERCISES ........................................................................................................ 3B.19
LECTURE 4A THE LAPLACE TRANSFORM
THE LAPLACE TRANSFORM .................................................................................4A.1
REGION OF CONVERGENCE (ROC) .................................................................4A.4
FINDING A FOURIER TRANSFORM USING A LAPLACE TRANSFORM ......................4A.6
FINDING LAPLACE TRANSFORMS ........................................................................4A.9
DIFFERENTIATION PROPERTY .......................................................................4A.11
STANDARD LAPLACE TRANSFORMS ..................................................................4A.12
LAPLACE TRANSFORM PROPERTIES...................................................................4A.13
EVALUATION OF THE INVERSE LAPLACE TRANSFORM ......................................4A.14
RATIONAL LAPLACE TRANSFORMS ..............................................................4A.14
TRANSFORMS OF DIFFERENTIAL EQUATIONS ....................................................4A.20
THE SYSTEM TRANSFER FUNCTION ...................................................................4A.23
BLOCK DIAGRAMS ............................................................................................4A.24
NOTATION ....................................................................................................4A.25
CASCADING BLOCKS .........................................................................................4A.26
STANDARD FORM OF A FEEDBACK CONTROL SYSTEM ......................................4A.28
BLOCK DIAGRAM TRANSFORMATIONS ..............................................................4A.30
SUMMARY .........................................................................................................4A.34
EXERCISES ........................................................................................................4A.35
PIERRE SIMON DE LAPLACE (1749-1827) ..........................................................4A.40
LECTURE 4B TRANSFER FUNCTIONS
OVERVIEW ..........................................................................................................4B.1
STABILITY ...........................................................................................................4B.1
UNIT-STEP RESPONSE .........................................................................................4B.2
FIRST-ORDER SYSTEMS ..................................................................................4B.4
SECOND-ORDER SYSTEMS ..............................................................................4B.5
DISTINCT REAL POLES ( ζ > 1 ) - OVERDAMPED .............................................4B.6
REPEATED REAL POLES ( ζ = 1 ) - CRITICALLY DAMPED ................................4B.7
COMPLEX CONJUGATE POLES ( 0 < ζ < 1 ) - UNDERDAMPED ..........................4B.8
SECOND-ORDER POLE LOCATIONS .................................................................4B.9
SINUSOIDAL RESPONSE .....................................................................................4B.10
ARBITRARY RESPONSE ......................................................................................4B.11
SUMMARY .........................................................................................................4B.12
EXERCISES ........................................................................................................4B.13
OLIVER HEAVISIDE (1850-1925).......................................................................4B.15
LECTURE 5A FREQUENCY RESPONSE
OVERVIEW .......................................................................................................... 5A.1
THE FREQUENCY RESPONSE FUNCTION .............................................................. 5A.1
DETERMINING THE FREQUENCY RESPONSE FROM A TRANSFER FUNCTION ......... 5A.2
MAGNITUDE RESPONSES .................................................................................... 5A.3
PHASE RESPONSES .............................................................................................. 5A.7
FREQUENCY RESPONSE OF A LOWPASS SECOND-ORDER SYSTEM ...................... 5A.9
VISUALIZATION OF THE FREQUENCY RESPONSE FROM A POLE-ZERO PLOT ...... 5A.11
BODE PLOTS ..................................................................................................... 5A.13
APPROXIMATING BODE PLOTS USING TRANSFER FUNCTION FACTORS ............. 5A.14
TRANSFER FUNCTION SYNTHESIS ..................................................................... 5A.15
DIGITAL FILTERS .............................................................................................. 5A.19
SUMMARY ........................................................................................................ 5A.20
EXERCISES ........................................................................................................ 5A.21
LECTURE 5B TIME-DOMAIN RESPONSE
OVERVIEW .......................................................................................................... 5B.1
STEADY-STATE ERROR ....................................................................................... 5B.1
TRANSIENT RESPONSE ........................................................................................ 5B.3
SECOND-ORDER STEP RESPONSE ........................................................................ 5B.6
SETTLING TIME................................................................................................. 5B.10
PEAK TIME ....................................................................................................... 5B.17
PERCENT OVERSHOOT ...................................................................................... 5B.17
RISE TIME AND DELAY TIME ............................................................................ 5B.17
SUMMARY ........................................................................................................ 5B.18
EXERCISES ........................................................................................................ 5B.19
LECTURE 6A EFFECTS OF FEEDBACK
OVERVIEW .......................................................................................................... 6A.1
TRANSIENT RESPONSE ........................................................................................ 6A.2
CLOSED-LOOP CONTROL .................................................................................... 6A.4
PROPORTIONAL CONTROL (P CONTROLLER) .................................................. 6A.5
INTEGRAL CONTROL (I CONTROLLER) ........................................................... 6A.6
PROPORTIONAL PLUS INTEGRAL CONTROL (PI CONTROLLER) ...................... 6A.7
PROPORTIONAL, INTEGRAL, DERIVATIVE CONTROL (PID CONTROLLER) ...... 6A.9
DISTURBANCE REJECTION .................................................................................. 6A.9
SENSITIVITY ..................................................................................................... 6A.11
SYSTEM SENSITIVITY ................................................................................... 6A.11
SUMMARY ........................................................................................................ 6A.13
EXERCISES ........................................................................................................ 6A.14
LECTURE 6B REVISION
LECTURE 7A THE Z-TRANSFORM
OVERVIEW ..........................................................................................................7A.1
THE Z-TRANSFORM .............................................................................................7A.1
MAPPING BETWEEN S-DOMAIN AND Z-DOMAIN .............................................7A.4
MAPPING THE S-PLANE IMAGINARY AXIS.......................................................7A.5
ALIASING........................................................................................................7A.6
FINDING Z-TRANSFORMS .....................................................................................7A.8
RIGHT SHIFT (DELAY) PROPERTY .................................................................7A.12
STANDARD Z-TRANSFORMS ...............................................................................7A.13
Z-TRANSFORM PROPERTIES ...............................................................................7A.14
EVALUATION OF INVERSE Z-TRANSFORMS ........................................................7A.15
TRANSFORMS OF DIFFERENCE EQUATIONS .......................................................7A.17
THE SYSTEM TRANSFER FUNCTION ...................................................................7A.19
STABILITY ....................................................................................................7A.21
TRANSFER FUNCTION INTERCONNECTIONS ..................................................7A.22
SUMMARY .........................................................................................................7A.24
EXERCISES ........................................................................................................7A.25
LECTURE 7B DISCRETIZATION
OVERVIEW .......................................................................................................... 7B.1
SIGNAL DISCRETIZATION .................................................................................... 7B.1
SIGNAL RECONSTRUCTION .................................................................................. 7B.2
HOLD OPERATION .......................................................................................... 7B.2
SYSTEM DISCRETIZATION ................................................................................... 7B.4
FREQUENCY RESPONSE ....................................................................................... 7B.7
RESPONSE MATCHING ....................................................................................... 7B.10
SUMMARY ......................................................................................................... 7B.13
EXERCISES ........................................................................................................ 7B.14
LECTURE 8A SYSTEM DESIGN
OVERVIEW ..........................................................................................................8A.1
DESIGN CRITERIA FOR CONTINUOUS-TIME SYSTEMS ..........................................8A.2
PERCENT OVERSHOOT ....................................................................................8A.2
PEAK TIME .....................................................................................................8A.3
SETTLING TIME...............................................................................................8A.4
COMBINED SPECIFICATIONS ...........................................................................8A.5
DESIGN CRITERIA FOR DISCRETE-TIME SYSTEMS ...............................................8A.9
MAPPING OF A POINT FROM THE S-PLANE TO THE Z-PLANE ...........................8A.10
PERCENT OVERSHOOT ..................................................................................8A.11
PEAK TIME ...................................................................................................8A.12
SETTLING TIME.............................................................................................8A.13
COMBINED SPECIFICATIONS .........................................................................8A.14
SUMMARY .........................................................................................................8A.15
LECTURE 8B ROOT LOCUS
OVERVIEW .......................................................................................................... 8B.1
ROOT LOCUS ...................................................................................................... 8B.1
ROOT LOCUS RULES ........................................................................................... 8B.5
1. NUMBER OF BRANCHES ............................................................................. 8B.5
2. LOCUS END POINTS .................................................................................... 8B.5
3. REAL AXIS SYMMETRY .............................................................................. 8B.5
4. REAL AXIS SECTIONS ................................................................................. 8B.5
5. ASYMPTOTE ANGLES ................................................................................. 8B.6
6. ASYMPTOTIC INTERCEPT (CENTROID) ........................................................ 8B.6
7. REAL AXIS BREAKAWAY AND BREAK-IN POINTS ...................................... 8B.6
8. IMAGINARY AXIS CROSSING POINTS .......................................................... 8B.9
9. EFFECT OF POLES AND ZEROS .................................................................... 8B.9
10. USE A COMPUTER ..................................................................................... 8B.9
MATLAB'S RLTOOL .................................................................................... 8B.14
ROOT LOCI OF DISCRETE-TIME SYSTEMS ......................................................... 8B.16
TIME RESPONSE OF DISCRETE-TIME SYSTEMS ................................................. 8B.17
SUMMARY ........................................................................................................ 8B.19
EXERCISES ........................................................................................................ 8B.20
JAMES CLERK MAXWELL (1831-1879)............................................................. 8B.24
LECTURE 9A STATE-VARIABLES
OVERVIEW .......................................................................................................... 9A.1
STATE REPRESENTATION .................................................................................... 9A.1
STATES .......................................................................................................... 9A.2
OUTPUT ......................................................................................................... 9A.4
MULTIPLE INPUT-MULTIPLE OUTPUT SYSTEMS............................................. 9A.4
SOLUTION OF THE STATE EQUATIONS ................................................................. 9A.5
TRANSITION MATRIX .......................................................................................... 9A.6
TRANSFER FUNCTION ....................................................................................... 9A.10
IMPULSE RESPONSE .......................................................................................... 9A.10
LINEAR STATE-VARIABLE FEEDBACK .............................................................. 9A.12
SUMMARY ........................................................................................................ 9A.18
EXERCISES ........................................................................................................ 9A.19
LECTURE 9B STATE-VARIABLES 2
OVERVIEW .......................................................................................................... 9B.1
NORMAL FORM ................................................................................................... 9B.1
SIMILARITY TRANSFORM .................................................................................... 9B.4
SOLUTION OF THE STATE EQUATIONS FOR THE ZIR ........................................... 9B.6
POLES AND REPEATED EIGENVALUES............................................................... 9B.11
POLES .......................................................................................................... 9B.11
REPEATED EIGENVALUES ............................................................................ 9B.11
DISCRETE-TIME STATE-VARIABLES.................................................................. 9B.12
DISCRETE-TIME RESPONSE ............................................................................... 9B.16
DISCRETE-TIME TRANSFER FUNCTION .............................................................. 9B.17
SUMMARY ........................................................................................................ 9B.23
EXERCISES ........................................................................................................ 9B.24
APPENDIX A - THE FAST FOURIER TRANSFORM
OVERVIEW ............................................................................................................ F.1
THE DISCRETE-TIME FOURIER TRANSFORM (DTFT) ............................................ F.2
THE DISCRETE FOURIER TRANSFORM (DFT) ........................................................ F.4
THE FAST FOURIER TRANSFORM (FFT) ................................................................ F.8
CREATING FFTS .................................................................................................... F.9
INTERPRETING FFTS ........................................................................................... F.11
CASE 1 - IDEALLY SAMPLED ONE-OFF WAVEFORM ......................................... F.11
CASE 2 - IDEALLY SAMPLED PERIODIC WAVEFORM ........................................ F.12
CASE 3 - CONTINUOUS TIME-LIMITED WAVEFORM ......................................... F.13
CASE 4 - CONTINUOUS PERIODIC WAVEFORM ................................................ F.14
APPENDIX B - THE PHASE-LOCKED LOOP
OVERVIEW ............................................................................................................ P.1
DISTORTION IN SYNCHRONOUS AM DEMODULATION ...................................... P.1
CARRIER REGENERATION ................................................................................. P.2
THE PHASE-LOCKED LOOP (PLL) ......................................................................... P.3
VOLTAGE CONTROLLED OSCILLATOR (VCO) .................................................. P.4
PHASE DETECTOR ............................................................................................. P.7
PLL MODEL .......................................................................................................... P.9
LINEAR PLL MODEL ........................................................................................ P.9
LOOP FILTER .................................................................................................. P.10
Lecture 1A Signals
Overview. Signal operations. Periodic & aperiodic signals. Deterministic &
random signals. Energy & power signals. Common signals. Sinusoids.
Overview
Electrical engineers should never forget the big picture.
Every day we take for granted the power that comes out of the wall, or the light
that instantly illuminates the darkness at the flick of a switch. We take for
granted the fact that electrical machines are at the heart of every manufacturing
industry. There has been no bigger benefit to humankind than the supply of
electricity to residential, commercial and industrial sites. Behind this magic
is a large infrastructure of generators, transmission lines, transformers,
protection relays, motors and motor drives.
We also take for granted the automation of once hazardous or laborious tasks,
we take for granted the ability of electronics to control something as
complicated as a jet aircraft, and we seem not to marvel at the idea of your
car's engine having just the right amount of fuel injected into the cylinder with
just the right amount of air, with a spark igniting the mixture at just the right
time to produce the maximum power and the least amount of noxious gases as
you tell it to accelerate up a hill when the engine is cold!
We forget that we are now living in an age where we can communicate with
anyone (and almost anything), anywhere, at anytime. We have a point-to-point
telephone system, mobile phones, the Internet, radio and TV. We have never
lived in an age so information rich.
Electrical engineers are engaged in the business of designing, improving,
extending, maintaining and operating this amazing array of systems.
You are in the business of becoming an engineer.
One thing that engineers need to do well is to break down a seemingly complex
system into smaller, easier parts. We therefore need a way of describing these
systems mathematically, and a way of describing the inputs and outputs of
these systems - signals.
Signal Operations
There really aren't many things that you can do to signals. Take a simple FM
modulator:
Example of signal
operations

[Figure 1A.1 - an FM stereo modulator: the left and right channels L and R are pre-emphasised to give L' and R', then summed and differenced to form L+R and L-R; the difference signal is multiplied by cos(2*pi*fc*t), and the composite signal L+R+(L-R)cos(2*pi*fc*t) drives the FM modulator connected to the antenna.]

Figure 1A.1
Some of the signals in this system come from natural sources, such as music
or speech, some come from artificial sources, such as a sinusoidal oscillator.
Linear system
operations
We can multiply two signals together, we can add two signals together, we can
amplify, attenuate and filter. We normally treat all the operations as linear,
although in practice some nonlinearity always arises.
We seek a way to analyse, synthesise and process signals that is
mathematically rigorous, yet simple to picture. It turns out that Fourier analysis
of signals and systems is one suitable method to achieve this goal, and the
Laplace Transform is even more powerful. But first, let's characterise
mathematically and pictorially some of the more common signal types.
Continuous and Discrete Signals
A continuous signal can be broadly defined as a quantity or measurement that
varies continuously in relation to another variable, usually time. We say that
the signal is a function of the independent variable, and it is usually described
mathematically as a function with the argument in parentheses, e.g. g(t). The
parentheses indicate that t is a real number.
Common examples of continuous-time signals include temperature, voltage,
audio output from a speaker, and video signals displayed on a TV. An example
of a graph of a continuous-time signal is shown below:
g(t )
Figure 1A.2
A discrete signal is one which exists only at specific instants of the
independent variable, usually time. A discrete signal is usually described
mathematically as a function with the argument in brackets, e.g. g[n]. The
brackets indicate that n is an integer.
Common examples of discrete-time signals include your bank balance,
monthly sales of a corporation, and the data read from a CD. An example of a
graph of a discrete-time signal is shown below:
g [n]
Figure 1A.3
Periodic and Aperiodic Signals

A periodic signal repeats itself exactly every T0 seconds:

g(t) = g(t + T0),  T0 > 0    (1A.1)
The smallest value of T0 that satisfies Eq. (1A.1) is called the period. The
fundamental frequency of the signal is defined as:
Fundamental
frequency defined
f0 = 1/T0    (1A.2)
[Figure 1A.4 - a periodic signal g(t), with one period T0 marked along the t-axis.]

Figure 1A.4
We can also have periodic discrete-time signals, in which case:
g[n] = g[n + T0],  T0 a positive integer    (1A.3)

Aperiodic signals
defined
An aperiodic signal is one for which Eq. (1A.1) or Eq. (1A.3) does not hold.
Any finite duration signal is aperiodic.
Example
Find the period and fundamental frequency (if they exist) of the following
signal:
g(t) = 3cos(2πt) + 4sin(4πt)    (1A.4)
[Figure 1A.5 - graph of g(t) versus t (in seconds), showing that the waveform repeats with T0 = 1.]

Figure 1A.5
From the graph we find that T0 = 1 s and f0 = 1 Hz. It is difficult to determine
the period mathematically (in this case) until we look at the signal's spectrum
(Lecture 2A).
If we add two periodic signals, then the result may or may not be periodic. The
result will only be periodic if an integral number of periods of the first signal
coincides with an integral number of periods of the second signal:
T0 = qT1 = pT2,  where p/q is rational    (1A.5)
Deterministic and Random Signals
Deterministic signals
defined
A deterministic signal is one in which the past, present and future values are
completely specified.
Random signals
defined
A random signal is one whose values cannot be predicted with certainty ahead of
time. This does not necessarily mean that any particular random signal is
unknown - on the contrary they can be deterministic. For example, consider
some outputs of a binary signal generator over 8 bits:
[Figure 1A.6 - four different 8-bit output waveforms, g1(t) to g4(t), of the binary signal generator.]

Figure 1A.6
Random signals are most often the information bearing signals we are used to -
voice signals, television signals, digital data (computer files), etc. Electrical
noise is also a random signal.
Energy and Power Signals
For electrical engineers, the signals we wish to describe will be predominantly
voltage or current. Accordingly, the instantaneous power developed by these
signals will be either:
p(t) = v²(t)/R    (1A.6)

or:

p(t) = R i²(t)    (1A.7)

If we normalise by setting R = 1 Ω, then for a signal g(t), whether a voltage or a current, the instantaneous power is:

p(t) = g²(t)    (1A.8)
The dissipated energy, or the total energy of the signal, is then given by:

E = ∫_{-∞}^{∞} g²(t) dt    (1A.9)
A signal is classified as an energy signal if and only if the total energy of the
signal is finite:
0 < E < ∞    (1A.10)

Total energy
is finite for an
energy signal
The average power of a signal is correspondingly defined as:
P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} g²(t) dt    (1A.11)
If the signal is periodic with period T0 , we then have the special case:
P = (1/T0) ∫_{-T0/2}^{T0/2} g²(t) dt    (1A.12)
Example

Consider a sinusoid of amplitude A and period T0. Its square oscillates between 0 and A² at twice the frequency:

[Figure 1A.7 - graph of g²(t), oscillating between 0 and A² about the level A²/2, with equal areas above and below that level over one period T0.]

Figure 1A.7
Note that in drawing the graph we don't really need to know the identity
cos²θ = (1 + cos 2θ)/2 - all we need to know is that if we start off with a
sinusoid uniformly oscillating between A and -A, then after squaring we
obtain a sinusoid that oscillates (at twice the frequency) between A² and 0. We
can also see that the average value of the resulting waveform is A²/2, because
there are equal areas above and below this level. Therefore, if we integrate (i.e.
find the area beneath the curve) over an interval spanning T0, we must have an
area equal to the average value times the span, i.e. A²T0/2. (This is the Mean
Value Theorem for Integrals). So the average power is this area divided by the
period:
P = A²/2    (1A.13)
This is a surprising result! The power of any sinusoid, no matter what its
frequency or phase, is dependent only on its amplitude.
Confirm the above result by performing the integration algebraically.
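Alongside the algebraic confirmation, the result can also be checked numerically. In this sketch the amplitude, period and phase values are arbitrary choices; Eq. (1A.12) is approximated with a Riemann sum over one period:

```python
import math

def average_power(g, T0, n=20000):
    """Approximate P = (1/T0) * integral of g(t)^2 over one period (Eq. 1A.12)."""
    dt = T0 / n
    return sum(g(i * dt) ** 2 for i in range(n)) * dt / T0

# arbitrary amplitude, period and phase (assumed values for the check)
A, T0, theta = 3.0, 2.0, 0.7
g = lambda t: A * math.cos(2 * math.pi * t / T0 + theta)

print(average_power(g, T0))   # ~4.5 = A**2 / 2, independent of T0 and theta
```

Changing T0 or theta leaves the computed power unchanged, as the text asserts.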
A signal is classified as a power signal if and only if the average power of the
signal is finite and non-zero:
0 < P < ∞    (1A.14)
We observe that if E, the energy of g(t), is finite, then its power P is zero, and
if P is finite, then E is infinite. It is obvious that a signal may be classified as
one or the other but not both. On the other hand, there are some signals, for
example:
g(t) = e^{-at}    (1A.15)
that cannot be classified as either energy or power signals, because both E and
P are infinite.
It is interesting to note that periodic signals and most random signals are power
signals, and signals that are both deterministic and aperiodic are energy signals.
Common Continuous-Time Signals
Superposition is the
key to building
complexity out of
simple parts
Earlier we stated that the key to handling complexity is to reduce a system to
many simple parts. The converse is also true - we can apply superposition to
build complexity out of simple parts. It may come as a pleasant surprise that the
study of only a few signals will enable us to handle almost any amount of
complexity in a deterministic signal.
The Continuous-Time Step Function
The continuous-time
step function
defined

u(t) = 0,    t < 0
       1/2,  t = 0
       1,    t > 0        (1A.16)
Graphically:
and graphed
u(t )
1
0
Figure 1A.8
It is the argument of a
function which determines the position of the function along the t-axis. Now
consider the delayed step function:

u(t - t0) = 0,    t < t0
            1/2,  t = t0
            1,    t > t0        (1A.17)
We obtain the conditions on the values of the function by the simple
substitution t → t - t0 in Eq. (1A.16).
Graphically, we have:
u(t- t0 )
1
0
t0
Figure 1A.9
We see that the argument t - t0 simply shifts the origin of the original
function to t0. A positive value of t0 shifts the function to the right -
corresponding to a delay in time. A negative value shifts the function to the left
- an advance in time.
as well as width
and orientation
We will now introduce another concept associated with the argument of a
function: if we divide it by a real constant - a scaling factor - then we regulate
the orientation of the function about the point t = t0, and usually change the
width of the function. Consider the scaled and shifted step function:
u((t - t0)/T) = 0,    (t - t0)/T < 0
               1/2,  (t - t0)/T = 0
               1,    (t - t0)/T > 0        (1A.18)
In this case it is not meaningful to talk about the width of the step function, and
the only purpose of the constant T is to allow the function to be reflected about
the line t = t0, as shown below:
[Figure 1A.10 - top: u((t - t0)/T) for t0 and T positive - a unit step rising at t = t0; bottom: u((t - 2)/(-1)) - a unit step reflected about t = 2, equal to 1 for t < 2 and 0 for t > 2.]

Figure 1A.10
Use Eq. (1A.18) to verify the bottom step function in Figure 1A.10.
The utility of the step function is that it can be used as a "switch" to turn
another function on or off at some point. For example, the product given by
u(t - 1)cos(2πt) is as shown below:
The step function as
a switch

[Figure 1A.11 - graph of u(t - 1)cos(2πt): zero for t < 1, a unit-amplitude cosinusoid for t > 1.]

Figure 1A.11
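The "switch" behaviour can be sketched directly. This minimal example implements Eq. (1A.16) and multiplies a cosinusoid by a delayed step:

```python
import math

def u(t):
    """Continuous-time unit step, Eq. (1A.16), with u(0) = 1/2."""
    if t < 0:
        return 0.0
    return 0.5 if t == 0 else 1.0

def g(t):
    # u(t - 1) "switches on" the cosinusoid at t = 1
    return u(t - 1) * math.cos(2 * math.pi * t)

print(g(0.25))  # 0.0 - switch still off
print(g(2.0))   # ~1.0 - switch on, cos(4*pi) = 1
```

Before t = 1 the product is identically zero, no matter what the cosinusoid is doing.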
The Rectangle Function
One of the most useful functions in signal analysis is the rectangle function,
defined as:
rect(t) = 0,    |t| > 1/2
          1/2,  |t| = 1/2
          1,    |t| < 1/2        (1A.19)

The rectangle
function defined
rect ( t )
1
-1/2 0
1/2
Figure 1A.12
Similarly, the shifted and scaled rectangle function is:

rect((t - t0)/T) = 0,    |(t - t0)/T| > 1/2
                   1/2,  |(t - t0)/T| = 1/2
                   1,    |(t - t0)/T| < 1/2        (1A.20)
Graphically, it is a rectangle with a height of unity, it is centred on t = t0, and
both its width and area are equal to |T|:
[Figure 1A.13 - top: rect((t - t0)/T) for t0 positive - height 1, centred on t0, width and area |T|; bottom: rect((t + 2)/3) - height 1, centred on t = -2, area 3.]

Figure 1A.13
The Straight Line

The simplest straight line through the origin is g(t) = t:

[Figure 1A.14 - graph of g(t) = t, a line of unit slope through the origin.]

Figure 1A.14
Now shift the straight line along the t-axis in the standard fashion, to make
g t t t 0 :
g ( t ) = t- t0
slope=1
0
t0
Figure 1A.15
To change the slope of the line, simply apply the usual scaling factor, to make:
g(t) = (t - t0)/T    (1A.21)
This is the equation of a straight line, with slope 1/T and t-axis intercept t0:
and graphed

[Figure 1A.16 - top: g(t) = (t - t0)/T, a line of slope 1/T crossing the t-axis at t0; bottom: g(t) = (t - 1)/(-3), a line of slope -1/3 crossing the t-axis at t = 1.]

Figure 1A.16
We can now use our knowledge of the straight line and rectangle function to
completely specify piece-wise linear signals.
Example

Consider the periodic sawtooth waveform shown below:

[Figure 1A.17 - a sawtooth waveform with period 4, shown over -8 ≤ t ≤ 16.]

Figure 1A.17

The period that spans 0 < t < 4 is a straight line of slope 1/4, turned on over that interval by a rect:

g0(t) = (t/4) rect((t - 2)/4)    (1A.22)
Graphically, it is:

[Figure 1A.18 - three graphs over -8 ≤ t ≤ 16: the line t/4, the pulse rect((t - 2)/4), and their product g0(t), a single sawtooth tooth.]

Figure 1A.18
The next period is just g0(t) shifted right (delayed in time) by 4. The next
period is therefore described by:

g1(t) = g0(t - 4) = ((t - 4)/4) rect((t - 4 - 2)/4)    (1A.23)

In general:

gn(t) = g0(t - 4n) = ((t - 4n)/4) rect((t - 4n - 2)/4)    (1A.24)

where gn(t) is the nth period and n is any integer. Now all we have to do is
add up all the periods to get the complete mathematical expression for the
sawtooth:

g(t) = Σ_{n=-∞}^{∞} ((t - 4n)/4) rect((t - 4n - 2)/4)    (1A.25)
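The sum over periods can be evaluated directly, truncating the infinite sum to a finite range of n (an approximation that is exact for any t inside that range):

```python
def rect(t):
    """Rectangle function, Eq. (1A.19)."""
    if abs(t) > 0.5:
        return 0.0
    return 0.5 if abs(t) == 0.5 else 1.0

def sawtooth(t, n_range=range(-10, 11)):
    """Eq. (1A.25): sum of shifted ramps, each gated by a shifted rect."""
    return sum((t - 4 * n) / 4 * rect((t - 4 * n - 2) / 4) for n in n_range)

print(sawtooth(1.0))  # 0.25 - the ramp t/4 evaluated at t = 1
print(sawtooth(5.0))  # 0.25 - one period (4 s) later, same value
```

At any t, only one term of the sum is non-zero (the rects do not overlap), which is what makes this piecewise construction work.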
Example
(1A.26)
g(t) = (t - 1) rect(t - 1.5) + rect(t - 2.5) + ((t - 5)/(-2)) rect((t - 4)/2)    (1A.27)
From this, we can compose the waveform out of the three specified parts:
[Figure 1A.19 - the three parts (t - 1)rect(t - 1.5), 1·rect(t - 2.5) and ((t - 5)/(-2))rect((t - 4)/2), together with their sum g(t).]
Figure 1A.19
The Sinc Function
The sinc function will show up quite often in studies of linear systems,
particularly in signal spectra, and it is interesting to note that there is a close
relationship between this function and the rectangle function. Its definition is:
The sinc function
defined
sinc(t) = sin(πt)/(πt)    (1A.28)
[Figure 1A.20 - graph of sinc(t) over -4 ≤ t ≤ 4: a main lobe of height 1.0 at t = 0, zeros at the non-zero integers, and decaying side lobes between about -0.4 and 0.2.]

Figure 1A.20
Features of the sinc
function
The inclusion of π in the formula for the sinc function gives it certain nice
properties. For example, the zeros of sinc(t) occur at non-zero integral values,
its width between the first two zeros is two, and its area (including positive and
negative lobes) is just one. Notice also that it appears that the sinc function is
undefined for t = 0. In one sense it is, but in another sense we can define a
function's value at a singularity by approaching it from either side and
averaging the limits.
This is not unusual - we did it explicitly in the definition of the step function,
where there is obviously a singularity at the step. We overcame this by
calculating the limits of the function approaching zero from the positive and
negative sides. The limit is 0 (approaching from the negative side) and 1
(approaching from the positive side), and the average of these two is 1 2 . We
then made explicit the use of this value for a zero argument.
The limit of the sinc function as t → 0 can be obtained using l'Hôpital's rule:

lim_{t→0} sin(πt)/(πt) = lim_{t→0} πcos(πt)/π = 1    (1A.29)
Therefore, we say the sinc function has a value of 1 when its argument is zero.
With a generalised argument, the sinc function becomes:
sinc((t - t0)/T) = sin(π(t - t0)/T) / (π(t - t0)/T)    (1A.30)
Its zeros occur at t = t0 + nT (n a non-zero integer), its height is 1, and its
width between the first two zeros is 2|T|:

[Figure 1A.21 - top: sinc((t - t0)/T), peak 1.0 at t0 and zeros at t0 ± T, t0 ± 2T, t0 ± 3T, ...; bottom: sinc((t - 5)/2), peak 1.0 at t = 5.]

Figure 1A.21
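The defining features above - the limiting value at zero and the zeros at non-zero integers - can be checked with a few lines:

```python
import math

def sinc(t):
    """sinc(t) = sin(pi t)/(pi t), Eq. (1A.28), with the limit sinc(0) = 1."""
    if t == 0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

print(sinc(0))    # 1.0, the limit found by l'Hopital's rule
print(sinc(3))    # ~0: zeros at non-zero integers
print(sinc(0.5))  # ~0.6366 = 2/pi
```

Note that the zeros come out only approximately zero in floating point, since sin(3π) is not computed exactly.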
The Impulse Function
An informal
definition of the
impulse function
The impulse function is often described as having an infinite height and zero
width such that its area is equal to unity, but such a description is not
particularly satisfying.
A more formal
definition of the
impulse function
A more formal definition is obtained by first forming a sequence of functions,
such as (1/T)rect(t/T), and then defining δ(t) to be:

δ(t) = lim_{T→0} (1/T) rect(t/T)    (1A.31)

As T gets smaller and smaller the members of the sequence become taller and
narrower, but their area remains constant as shown below:
[Figure 1A.22 - the pulses (1/2)rect(t/2), rect(t) and 2rect(2t): as T decreases each pulse becomes taller and narrower, but all have unit area.]

Figure 1A.22
An impulse function
in the lab is just a
very narrow pulse
From a physical point of view, we can consider the delta function to be so
narrow that making it any narrower would in no way influence the results in
which we are interested. As an example, consider the simple RC circuit shown
below:

[Figure 1A.23 - an RC lowpass circuit with input vi and output vo.]

Figure 1A.23

as we make the
pulse get smaller
Suppose the input is a rectangular pulse of width T and height 1/T (unit area).
If the pulse is long compared with the time constant RC, the capacitor will be
almost completely charged to the voltage 1/T before the pulse ends, at which
time it will begin to discharge back to zero.

and smaller
If we shorten the pulse so that T ≈ RC, the capacitor will not have a chance to
become fully charged before the pulse ends. Thus, the output voltage behaves
as in the middle diagram of Figure 1A.24, and it can be seen that there is a
considerable difference between this output and the preceding one.

and smaller, the
output of a linear
system approaches
the impulse
response
If we now make T still shorter, as in the bottom diagram of Figure 1A.24, we
note very little change in the shape of the output. In fact, as we continue to
make T shorter and shorter, the only noticeable change is in the time it takes
the output to reach its peak.

If this interval is too short to be resolved by our measuring device, the input is
effectively behaving as a delta function and decreasing its duration further will
have no observable effect on the output, which now closely resembles the
impulse response of the circuit.
Graphical derivation
of the impulse
response of an RC
circuit by decreasing
a pulse's width while
maintaining its area

[Figure 1A.24 - three input/output pairs, each input vi(t) a unit-area pulse of height 1/T and width T: for T much greater than RC the output (1/T)(1 - e^{-t/RC}) almost reaches 1/T before decaying; for T comparable with RC it falls well short; for T much less than RC the output approaches (1/RC)e^{-t/RC}, the impulse response.]

Figure 1A.24
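The convergence toward the impulse response can be simulated numerically. In this sketch the value RC = 1 is an arbitrary assumption, and the circuit equation dvo/dt = (vi - vo)/RC is integrated with a simple Euler step:

```python
import math

RC = 1.0  # assumed time constant for the sketch

def response_peak(T, dt=1e-4):
    """Euler-integrate dvo/dt = (vi - vo)/RC for a unit-area input pulse of
    width T and height 1/T; return the peak output voltage."""
    vo, t, peak = 0.0, 0.0, 0.0
    while t < 5 * RC:
        vi = 1.0 / T if t < T else 0.0
        vo += dt * (vi - vo) / RC
        peak = max(peak, vo)
        t += dt
    return peak

# As T -> 0 the peak approaches 1/RC = 1, the peak of the impulse response
for T in [2.0, 0.5, 0.01]:
    print(T, response_peak(T))
```

The printed peaks climb toward 1/RC as the pulse narrows, matching the analytical peak (1/T)(1 - e^{-T/RC}).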
With the previous discussion in mind, we shall use the following properties as
the definition of the delta function: given the real constant t0 and the arbitrary
complex-valued function f(t), which is continuous at the point t = t0, then:

δ(t - t0) = 0,  t ≠ t0    (1A.32a)

∫_{t1}^{t2} f(t) δ(t - t0) dt = f(t0),   t1 < t0 < t2    (1A.32b)

The impulse
function defined - as
behaviour upon
integration
Graphical
representation of an
impulse function

[Figure 1A.25 - impulses drawn as arrows labelled with their area: δ(t - t0) with area 1 at t = t0 (t0 positive); 2δ(t + 2) with area 2 at t = -2; an impulse of area 1/2 at t = 1.]

Figure 1A.25
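The sifting property (1A.32b) can be illustrated numerically by replacing the impulse with a unit-area pulse (1/T)rect((t - t0)/T) and letting T shrink:

```python
import math

def sift(f, t0, T, dt=1e-5):
    """Integrate f(t) * (1/T)rect((t - t0)/T) numerically (midpoint rule).
    As T -> 0 the result approaches f(t0) - the sifting property."""
    n = int(round(T / dt))
    return sum(f(t0 - T / 2 + (i + 0.5) * dt) for i in range(n)) * dt / T

for T in [1.0, 0.1, 0.001]:
    print(T, sift(math.cos, 0.5, T))   # approaches cos(0.5) = 0.8776...
```

The integral is simply the average of f over the pulse width, which tends to f(t0) as the pulse narrows - exactly the behaviour the definition captures.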
Sinusoids
Here we start deviating from the previous descriptions of time-domain signals.
Sinusoids can be
described in the
frequency-domain
All the signals described so far are aperiodic. With periodic signals (such as
sinusoids), we can start to introduce the concept of frequency content, i.e. a
shift from describing signals in the time-domain to one that describes them in
the frequency-domain. This new way of describing signals will be fully
exploited later when we look at Fourier series and Fourier transforms.
Why Sinusoids?
The sinusoid with which we are so familiar today appeared first as the solution
to differential equations that 17th century mathematicians had derived to
describe the behaviour of various vibrating systems. They were no doubt
surprised that the functions that had been used for centuries in trigonometry
appeared in this new context. Mathematicians who made important
contributions included Huygens (1629-1695), Bernoulli (1710-1790), Taylor
(1685-1731), Riccati (1676-1754), Euler (1707-1783), D'Alembert (1717-1783)
and Lagrange (1736-1813).
Perhaps the greatest contribution was that of Euler (who invented the symbols
π, e and i = √-1, which we call j). He first identified the fact that is highly
significant for us: a sinusoidal input to a linear system produces a sinusoidal
output of the same frequency:

The special
relationship enjoyed
by sinusoids and
linear systems

A cos(ωt + θ) → [linear system] → B cos(ωt + φ)    (1A.33)

The output sinusoid has the same frequency as the input; it is, however, altered
in amplitude and phase.
We are so familiar with this fact that we sometimes overlook its significance.
Only sinusoids have this property with respect to linear systems. For example,
applying a square wave input does not produce a square wave output.
It was Euler who recognised that an arbitrary sinusoid could be resolved into a
pair of complex exponentials:

A cos(ωt + θ) = X e^{jωt} + X* e^{-jωt}    (1A.34)

A sinusoid can be
expressed as the
sum of a forward
and backward
rotating phasor

where:

X = (A/2) e^{jθ}    (1A.35)
The way now lay open for the analysis of the behaviour of a linear system for
any periodic input by determining the response to the individual sinusoidal (or
exponential) components and adding them up (superposition).
This technique is called frequency analysis. Its first application was probably
that of Newton passing white light through a prism and discovering that red
light was unchanged after passage through a second prism.
Representation of Sinusoids

Consider a sinusoid with amplitude A and period T:

x(t) = A cos(2πt/T)    (1A.36)

[Figure 1A.26 - graph of cos(2πt/T) versus t, with one period T marked.]

Figure 1A.26

Now delay it by t0:

[Figure 1A.27 - the same sinusoid shifted t0 to the right along the t-axis.]

Figure 1A.27
Mathematically, we have:

x(t) = A cos(2π(t - t0)/T) = A cos(2πt/T - 2πt0/T)
     = A cos(2πt/T + φ),  where φ = -2πt0/T    (1A.37)

A sinusoid
expressed in the
most general
fashion

[Figure 1A.28 - the shifted sinusoid for the case t0 = T/4, i.e. φ = -π/2.]

Figure 1A.28
For the case t0 = T/4, i.e. φ = -π/2:

x(t) = A cos(2πt/T - π/2) = A sin(2πt/T)    (1A.38)

With ω = 2π/T, a sinusoid in the sine form can therefore always be rewritten in
the cosine form:

x(t) = A sin(ωt) or A cos(ωt - π/2)    (1A.39)
Similarly:

x(t) = A cos(ωt) or A sin(ωt + π/2)    (1A.40)
We shall use the cosine form. If a sinusoid is expressed in the sine form, then
we need to subtract 90° from the phase angle to get the cosine form.
Phase refers to the
angle of a
cosinusoid at t=0
When the phase of a sinusoid is referred to, it is the phase angle in the cosine
form (in these lecture notes).
Resolution of an Arbitrary Sinusoid into Orthogonal Functions

Using a compound-angle identity, an arbitrary sinusoid can be resolved into
two orthogonal components:

A cos(ωt + φ) = A cos φ cos(ωt) + A sin φ (-sin(ωt))    (1A.41)

Example

2 cos(3t + 15°) = 1.93 cos(3t) + 0.52 (-sin(3t))    (1A.42)
Graphically, this is:

[Figure 1A.29 - the component 1.93 cos(3t), the component 0.52(-sin 3t), and their sum 2cos(3t + 15°).]

Figure 1A.29
A quick definition of
orthogonal
The cos(ωt) and -sin(ωt) terms are said to be orthogonal. For our purposes,
orthogonal may be understood as the property that two vectors or functions
mutually have when one cannot be expressed as a sum containing the other.
We will look at basis functions and orthogonality more formally later on.
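Both the orthogonality of the two components and the resolution of Eq. (1A.41) can be checked numerically with an inner product over one period (the helper name below is an assumption for illustration):

```python
import math

def inner(f, g, T, n=20000):
    """Approximate the inner product: integral of f(t)g(t) over one period T."""
    dt = T / n
    return sum(f(i * dt) * g(i * dt) for i in range(n)) * dt

T = 2 * math.pi / 3                    # period of cos(3t) and sin(3t)
c = lambda t: math.cos(3 * t)          # first orthogonal component
s = lambda t: -math.sin(3 * t)         # second orthogonal component

print(inner(c, s, T))                  # ~0: the components are orthogonal

# resolve x(t) = 2cos(3t + 15 deg) onto the two components
x = lambda t: 2 * math.cos(3 * t + math.radians(15))
print(inner(x, c, T) / inner(c, c, T)) # ~1.93 = 2cos(15 deg)
print(inner(x, s, T) / inner(s, s, T)) # ~0.52 = 2sin(15 deg)
```

The projections recover exactly the amplitudes quoted in the example above.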
Representation of a Sinusoid by a Complex Number

Two real numbers
completely specify a
sinusoid of a given
frequency
Eq. (1A.41) shows that, for a given frequency, two real numbers are required
to describe the sinusoid completely. Why not use a complex number to store
these two real numbers?

which we store as a
complex number
called a phasor
Suppose the convention is adopted that: the real part of a complex number is
the cos(ωt) amplitude, and the imaginary part is the -sin(ωt) amplitude of the
resolution described by Eq. (1A.41). Suppose we call the complex number
created in this way the phasor associated with the sinusoid.
Our notation will be:

Time-domain and
frequency-domain
symbols
x(t)    time-domain expression
X       frequency-domain expression (phasor)

We can see that the magnitude of the complex number is the amplitude of the
sinusoid and the angle of the complex number is the phase of the sinusoid. In
general, the correspondence between a sinusoid and its phasor is:

x(t) = A cos(ωt + φ)   ↔   X = A e^{jφ}    (1A.43)
Example

If x(t) = 3 sin(ωt - 30°), note carefully that x(t) = 3 cos(ωt - 120°), so that
X = 3e^{-j120°}.

The phasor does not contain the frequency - all we can say is that X is a
frequency-domain representation of the amplitude and phase. The sum of two
phasors corresponds to the sinusoid which is the sum of the two component
sinusoids represented by the phasors. That is, if x3(t) = x1(t) + x2(t), where
x1(t), x2(t) and x3(t) are sinusoids with the same frequency, then X3 = X1 + X2.
Example

If x3(t) = cos(ωt) + 2 sin(ωt) = cos(ωt) + 2 cos(ωt - 90°), then
X3 = 1 + 2e^{-j90°} = 1 - j2 = 2.24∠-63°, which
corresponds to x3(t) = 2.24 cos(ωt - 63°).
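Phasor arithmetic maps directly onto ordinary complex arithmetic, so the addition in the example above can be checked in a few lines (the helper name is an assumption for illustration):

```python
import cmath
import math

def phasor(amplitude, phase_deg):
    """Phasor X = A e^{j*phase} for x(t) = A cos(wt + phase)."""
    return cmath.rect(amplitude, math.radians(phase_deg))

# x3(t) = cos(wt) + 2sin(wt) = cos(wt) + 2cos(wt - 90 deg)
X3 = phasor(1, 0) + phasor(2, -90)

print(abs(X3))                        # ~2.236 = sqrt(5)
print(math.degrees(cmath.phase(X3)))  # ~ -63.4 degrees
```

The magnitude and angle of the resulting complex number are exactly the amplitude and phase of the sum sinusoid.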
Formalisation of the Relationship between Phasor and Sinusoid

Using Euler's identity:

e^{jθ} = cos θ + j sin θ    (1A.44)

we have:

X e^{jωt} = A e^{jφ} e^{jωt} = A e^{j(ωt + φ)} = A cos(ωt + φ) + jA sin(ωt + φ)    (1A.45)

We can see that the sinusoid represented by the phasor X = A e^{jφ} is equal to
the real part of Xe^{jωt}. Therefore:

x(t) = Re{X e^{jωt}}    (1A.46)
[Figure 1A.30 - the complex plane, showing the rotating phasor Xe^{jωt} = Ae^{jφ}e^{jωt} of length A; its projection onto the real axis is x(t).]

Figure 1A.30
Euler's Complex Exponential Relationships for Cosine and Sine

Euler's expansion:

e^{jθ} = cos θ + j sin θ    (1A.47)

[Figure 1A.31 - e^{jθ} in the complex plane: a unit vector at angle θ with real part cos θ and imaginary part j sin θ.]

Figure 1A.31

and:

e^{-jθ} = cos θ - j sin θ    (1A.48)

[Figure 1A.32 - e^{-jθ} in the complex plane: a unit vector at angle -θ with real part cos θ and imaginary part -j sin θ.]

Figure 1A.32
By adding Eqs. (1A.47) and (1A.48), we can write:

cos θ = (e^{jθ} + e^{-jθ})/2    (1A.49)

Cos represented as
a sum of complex
exponentials

[Figure 1A.33 - the unit vectors e^{jθ} and e^{-jθ} adding along the real axis to give 2cos θ.]

Figure 1A.33

and by subtracting them:

sin θ = (e^{jθ} - e^{-jθ})/(j2)    (1A.50)

Sin represented as
a sum of complex
exponentials

[Figure 1A.34 - the vectors e^{jθ} and -e^{-jθ} adding along the imaginary axis to give j2 sin θ.]

Figure 1A.34
A New Definition of the Phasor

We now define a new phasor that contains half the amplitude:

X = (A/2) e^{jφ},  or  X = (A/2)∠φ    (1A.51)

Then, using Eq. (1A.49):

A cos(ωt + φ) = (A/2) e^{jφ} e^{jωt} + (A/2) e^{-jφ} e^{-jωt}
             = X e^{jωt} + X* e^{-jωt}    (1A.52)
The two terms in the summation represent two counter-rotating phasors with
angular velocities +ω and -ω in the complex plane, as shown below:

Graphical
interpretation of
counter-rotating
phasors / time-domain relationship

[Figure 1A.35 - the complex plane, showing Xe^{jωt} = (A/2)e^{jφ}e^{jωt} of length A/2 rotating anticlockwise and X*e^{-jωt} rotating clockwise; their sum x(t) lies on the real axis.]

Figure 1A.35
Graphical Illustration of the Relationship between the Two Types of Phasor
and their Corresponding Sinusoid

A sinusoid can be
generated by taking
the real part of a
rotating complex
number

[Figure 1A.36 - left: the phasor Xe^{jωt} rotating in the complex plane; right: its real-axis projection traced out against time to give the time-domain sinusoid.]

Figure 1A.36
Now consider the second representation of a sinusoid: x(t) = Xe^{jωt} + X*e^{-jωt}.
Graphically, x(t) can be generated by simply adding the two complex
conjugate counter-rotating phasors Xe^{jωt} and X*e^{-jωt}. The result will always
be a real number:
A sinusoid can be
generated by adding
up a forward rotating
complex number
and its backward
rotating complex
conjugate
[Figure 1A.37 - left: the counter-rotating phasors Xe^{jωt} and X*e^{-jωt} in the complex plane, whose imaginary parts cancel; right: their (real) sum traced out against time.]

Figure 1A.37
Negative Frequency

Negative frequency
just means the
phasor is going
clockwise
The phasor X*e^{-jωt} rotates with an angular velocity of -ω: a negative
frequency simply means that the phasor is rotating clockwise instead of
anticlockwise.
You should become very familiar with all of these signal types, and you should
feel comfortable representing a sinusoid as a complex exponential. Being able
to manipulate signals mathematically, while at the same time imagining what is
happening to them graphically, is the key to readily understanding signals and
systems.
Common Discrete-Time Signals
Superposition is the
key to building
complexity out of
simple parts
A lot of discrete-time signals are obtained by sampling continuous-time
signals at regular intervals. In these cases, we can simply form a discrete-time
signal by making the substitution t = nTs, where Ts is the sample period,
into the mathematical expression for the continuous-time signal.

(1A.53)

(1A.54)

[Figure 1A.38 - a sampled sinusoid g[n], with values between -1 and 1.]
Figure 1A.38
The Discrete-Time Step Function
The discrete-time
step function
defined

u[n] = 0,  n < 0
       1,  n ≥ 0        (1A.55)
Graphically:
and graphed
u [n]
1
-4
-3
-2
-1
Figure 1A.39
Note that this is not a sampled version of the continuous-time step function,
which has a discontinuity at t 0 . We define the discrete-time step to have a
value of 1 at n 0 (instead of having a value of 1 2 if it were obtained by
sampling the continuous-time signal).
Example

[Graph of a discrete-time signal g[n], drawn as a stem plot with amplitudes up to 5.]
The Unit-Pulse Function
The unit-pulse
defined

δ[n] = 0,  n ≠ 0
       1,  n = 0        (1A.56)
Graphically:
and graphed
[n]
1
-4
-3
-2
-1
Figure 1A.40
Example

[Graph of a discrete-time signal g[n], drawn as a stem plot over -4 ≤ n ≤ 5, with both positive values (up to 4) and negative values (down to -2).]

With no obvious formula, we can express the signal as the sum of a series of
delayed and weighted unit-pulses. Working from left to right, we get:

g[n] = δ[n + 2] + 2δ[n + 1] + 3δ[n] - 2δ[n - 2] - δ[n - 3] + 4δ[n - 4]
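The same construction can be sketched in code (the weights below are those read off the graph above, as reconstructed here):

```python
def delta(n):
    """Unit pulse, Eq. (1A.56): 1 at n = 0, zero elsewhere."""
    return 1 if n == 0 else 0

def g(n):
    return (delta(n + 2) + 2 * delta(n + 1) + 3 * delta(n)
            - 2 * delta(n - 2) - delta(n - 3) + 4 * delta(n - 4))

print([g(n) for n in range(-4, 6)])
# [0, 0, 1, 2, 3, 0, -2, -1, 4, 0]
```

Each shifted, weighted unit-pulse contributes exactly one sample, so the list of values reads straight off the expression.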
Summary

Sinusoids are special signals. They are the only real signals that retain their
shape when passing through a linear time-invariant system. They are often
represented as the sum of two complex conjugate, counter-rotating phasors:
x(t) = Xe^{jωt} + X*e^{-jωt}, where X = (A/2)e^{jφ}.

Many discrete-time signals are obtained by sampling continuous-time
signals. The discrete-time step function and unit-pulse are defined from
first principles.
References
Kamen, E.W. & Heck, B.S.: Fundamentals of Signals and Systems Using
MATLAB, Prentice-Hall, Inc., 1997.
Quiz
Encircle the correct answer, cross out the wrong answers. [one or none correct]
1.
(b) period = 2 s
(c) no period
2.
g(t)
waveform can be
-12
-8
-4
(a)
gt
3.
12
(b)
t 4n
t 2
rect
represented by:
(c)
g t
t 4n
t 2n
rect
g t
t 4
t 2 4n
rect
4
4
(b) 5100
(c) 510
4.
4
2
-12
-8
-4
0
-2
12
-4
(a) 16
5.
(b) 64
(c) 0
(b) 5.088
Answers: 1. b 2. x 3. c 4. x 5. c
(c) -4.890
Exercises
1.
For each of the periodic signals shown, determine their time-domain
expressions, their periods and average power:
1
(a)
-2
g (t )
0
2
g (t )
(b)
-2
-1
0
1
(c)
-20
-10
g (t )
e
-t / 10
10
20
30
40
g ( t ) 2cos 200 t
(d)
-3 -1
2.
For the signals shown, determine their time-domain expressions and calculate
their energy:
g (t )
10
(a)
-2
-1
-5
g (t )
cos t
1
(b)
/2
-/2
g ( t ) Two identical
cycles
2
t
1
(c)
0
g (t )
3
2
1
(d)
0
3.
Using the sifting property of the impulse function, evaluate the following
integrals:
(iv)
f t
dt
(v)
4 dt
(vi)
(i)
(ii)
t 3e
(iii)
1 t t
t 2 sintdt
t t t 2 dt
e jt t 2dt
f t t t 0
4.
Show that:

δ((t - t0)/T) = |T| δ(t - t0)

by showing that:

∫ f(t) δ((t - t0)/T) dt = |T| f(t0)

This is a case where common sense does not prevail. At first glance it might
appear that the scaling factor T should make no difference, since δ(t) is zero
for any non-zero argument.
5.
Complete the following table:
(a) x t 27cos 100t 15
(b) x t 5sin 2t 80
(c) x t 100sin 2t 45
(e) X 27 j11
x t
(f) X 100 60
x t
(h) x t 2cos t 30
(i) X 3 30 , 1
x 2
(j) X 3 30
X*
X*
Leonhard Euler (1707-1783) (Len-ard Oy-ler)
The work of Euler built upon that of Newton and made mathematics the tool of
analysis. Astronomy, the geometry of surfaces, optics, electricity and
magnetism, artillery and ballistics, and hydrostatics are only some of Eulers
fields. He put Newtons laws, calculus, trigonometry, and algebra into a
recognizably modern form.
Euler was born in Switzerland, and before he was an adolescent it was
recognized that he had a prodigious memory and an obvious mathematical gift.
He received both his bachelor's and his master's degrees at the age of 15, and
at the age of 23 he was appointed professor of physics, and at age 26 professor
of mathematics, at the Academy of Sciences in Russia.
Among the symbols that Euler initiated are the sigma (Σ) for summation
(1755), e to represent the constant 2.71828... (1727), i for the imaginary √-1
(1777), and even a, b, and c for the sides of a triangle and A, B, and C for the
opposite angles. He transformed the trigonometric ratios into functions and
abbreviated them sin, cos and tan, and treated logarithms and exponents as
functions instead of merely aids to calculation. He also standardised the use of
π for 3.14159...
His 1736 treatise, Mechanica, represented the flourishing state of Newtonian
physics under the guidance of mathematical rigor. An introduction to pure
mathematics, Introductio in analysin infinitorum, appeared in 1748 which
treated algebra, the theory of equations, trigonometry and analytical geometry.
In this work Euler gave the formula e^{ix} = cos x + i sin x. It did for calculus what
Euclid had done for geometry. Euler also published the first two complete
works on calculus: Institutiones calculi differentialis, from 1755, and
Institutiones calculi integralis, from 1768.
Some of his phenomenal output includes: books on the calculus of variations;
on the calculation of planetary orbits; on artillery and ballistics; on analysis; on
shipbuilding and navigation; on the motion of the moon; lectures on the
differential calculus. He made decisive and formative contributions to
geometry, calculus and number theory. He integrated Leibniz's differential
calculus and Newton's method of fluxions into mathematical analysis. He
introduced beta and gamma functions, and integrating factors for differential
equations. He studied continuum mechanics, lunar theory, the three body
problem, elasticity, acoustics, the wave theory of light, hydraulics, and music.
He laid the foundation of analytical mechanics. He proved many of Fermat's
assertions including Fermat's Last Theorem for the case n = 3. He published a
full theory of logarithms of complex numbers. Analytic functions of a complex
variable were investigated by Euler in a number of different contexts, including
the study of orthogonal trajectories and cartography. He discovered the
Cauchy-Riemann equations used in complex variable theory.
Euler made a thorough investigation of integrals which can be expressed in
terms of elementary functions. He also studied beta and gamma functions. As
well as investigating double integrals, Euler considered ordinary and partial
differential equations. The calculus of variations is another area in which Euler
made fundamental discoveries.
He considered linear equations with constant coefficients, second order
differential equations with variable coefficients, power series solutions of
differential equations, a method of variation of constants, integrating factors, a
method of approximating solutions, and many others. When considering
vibrating membranes, Euler was led to the Bessel equation which he solved by
introducing Bessel functions.
Euler made substantial contributions to differential geometry, investigating the
theory of surfaces and curvature of surfaces. Many unpublished results by
Euler in this area were rediscovered by Gauss.
Euler considered the motion of a point mass both in a vacuum and in a resisting
medium. He analysed the motion of a point mass under a central force and also
considered the motion of a point mass on a surface. In this latter topic he had to
solve various problems of differential geometry and geodesics.
He wrote a two volume work on naval science. He decomposed the motion of a
solid into a rectilinear motion and a rotational motion. He studied rotational
problems which were motivated by the problem of the precession of the
equinoxes.
He set up the main formulas for the topic of fluid mechanics, the continuity
equation, the Laplace velocity potential equation, and the Euler equations for
the motion of an inviscid incompressible fluid.
He did important work in astronomy including: the determination of the orbits
of comets and planets by a few observations; methods of calculation of the
parallax of the sun; the theory of refraction; consideration of the physical
nature of comets.
Euler also published on the theory of music...
Euler did not stop working in old age, despite his eyesight failing. He
eventually went blind and employed his sons to help him write down long
equations which he was able to keep in memory. Euler died of a stroke after a
day spent: giving a mathematics lesson to one of his grandchildren; doing some
calculations on the motion of balloons; and discussing the calculation of the
orbit of the planet Uranus, recently discovered by William Herschel.
His last words, while playing with one of his grandchildren, were: "I die."
Lecture 1B Systems
Differential equations. System modelling. Discrete-time signals and systems.
Difference equations. Discrete-time block diagrams. Discretization in time of
differential equations. Convolution in LTI discrete-time systems. Convolution
in LTI continuous-time systems. Graphical description of convolution.
Properties of convolution. Numerical convolution.
Linear differential
equation
An Nth-order linear time-invariant continuous-time system is described by the
linear differential equation:

y^(N)(t) + Σ_{i=0}^{N-1} a_i y^(i)(t) = Σ_{i=0}^{N-1} b_i x^(i)(t)    (1B.1)

where:

y^(N)(t) = d^N y(t) / dt^N    (1B.2)
Initial Conditions
The above equation needs the N initial conditions:

y(0⁻), y^(1)(0⁻), ..., y^(N-1)(0⁻)    (1B.3)

We take 0⁻ as the time for initial conditions to take into account the possibility
of an impulse being applied at t = 0, which will change the output
instantaneously.
First-Order Case

For the first-order case we can express the solution to Eq. (1B.1) in a useful (and familiar) form. A first-order system is given by the first-order linear differential equation:

$$\frac{dy(t)}{dt} + a y(t) = b x(t)$$   (1B.4)

Multiplying both sides by the integrating factor $e^{at}$ gives:

$$e^{at}\frac{dy(t)}{dt} + e^{at} a y(t) = e^{at} b x(t)$$   (1B.5)

The left-hand side is the derivative of a product:

$$\frac{d}{dt}\left[e^{at} y(t)\right] = e^{at} b x(t)$$   (1B.6)

Integrating both sides gives:

$$e^{at} y(t) - y(0^-) = \int_0^t e^{a\tau}\, b x(\tau)\, d\tau, \quad t \ge 0$$   (1B.7)

Thus:

$$y(t) = e^{-at} y(0^-) + \int_0^t e^{-a(t-\tau)}\, b x(\tau)\, d\tau, \quad t \ge 0$$   (1B.8)
Use this to solve the simple revision problem for the case of a unit-step input.

The two parts of the response given in Eq. (1B.8) have the obvious names zero-input response (ZIR) and zero-state response (ZSR). It will be shown later that the ZSR is given by a convolution between the system's impulse response and the input signal.
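The ZIR/ZSR split in Eq. (1B.8) can be checked numerically. This is a minimal sketch, assuming an arbitrary choice of a = 2, b = 1 and initial condition y(0^-) = 1, with a unit-step input (for which the ZSR integral evaluates in closed form):

```python
import math

# First-order system dy/dt + a*y = b*x with a unit-step input x(t) = u(t).
# Eq. (1B.8): y(t) = e^{-at} y(0-)  +  integral_0^t e^{-a(t-tau)} b x(tau) dtau
#                  =     ZIR        +                 ZSR
# For a step input the ZSR integral evaluates to (b/a)(1 - e^{-at}).
a, b = 2.0, 1.0    # arbitrary example values
y0 = 1.0           # assumed initial condition y(0-)

def zir(t):
    # zero-input response: the initial condition decaying away
    return math.exp(-a * t) * y0

def zsr(t):
    # zero-state response to a unit step, evaluated in closed form
    return (b / a) * (1.0 - math.exp(-a * t))

def y(t):
    return zir(t) + zsr(t)

# As t grows the ZIR decays and y(t) settles at the forced value b/a.
print(y(0.0), y(5.0))
```

Note how the two parts are independent: the ZIR depends only on y(0^-), and the ZSR depends only on the input.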
System Modelling
In modelling a system, we are nearly always after the input/output relationship, which is a differential equation in the case of continuous-time systems. If we're clever, we can break a system down into a connection of simple components, each having a relationship between cause and effect.
Electrical Circuits
The three basic linear, time-invariant relationships for the resistor, capacitor
and inductor are respectively:
$$v(t) = R\, i(t)$$   (1B.9a)

$$i(t) = C \frac{dv(t)}{dt}$$   (1B.9b)

$$v(t) = L \frac{di(t)}{dt}$$   (1B.9c)

These are the cause/effect relationships for electrical systems.
Mechanical Systems
In linear translational systems, the three basic linear, time-invariant relationships for the inertia force, damping force and spring force are respectively:

$$F(t) = M \frac{d^2 x(t)}{dt^2}$$   (1B.10a)

$$F(t) = k_d \frac{dx(t)}{dt}$$   (1B.10b)

$$F(t) = k_s x(t)$$   (1B.10c)

where x(t) is the position of the object under study. These are the cause/effect relationships for mechanical translational systems.
For rotational motion, the cause/effect relationships for the inertia torque, damping torque and spring torque are:

$$T(t) = I \frac{d^2 \theta(t)}{dt^2}$$   (1B.11a)

$$T(t) = k_d \frac{d\theta(t)}{dt}$$   (1B.11b)

$$T(t) = k_s \theta(t)$$   (1B.11c)
Finding an input-output relationship for signals in systems is just a matter of applying the above relationships to a conservation law: for electrical circuits it is one of Kirchhoff's laws; in mechanical systems it is d'Alembert's principle.
Discrete-time Systems
A discrete-time signal is one that takes on values only at discrete instants of time. Discrete-time signals arise naturally in studies of economic systems: amortization (paying off a loan), models of the national income (monthly, quarterly or yearly), models of the inventory cycle in a factory, etc. They arise in science, e.g. in studies of population, chemical reactions, and the deflection of a weighted beam. They arise all the time in electrical engineering, because of digital control, e.g. radar tracking systems, processing of electrocardiograms, and digital communication (CD, mobile phone, Internet). Their importance is probably now reaching that of continuous-time systems in terms of analysis and design, specifically because today signals are processed digitally, and digital signals are a special case of discrete-time signals.

It is now cheaper and easier to perform most signal operations inside a microprocessor, especially as microprocessors play a central role in today's signal processing.
1B.5
Linear Difference Equations with Constant Coefficients
Linear, time-invariant, discrete-time systems can be modelled with the
difference equation:
$$y[n] = \sum_{i=1}^{N} a_i\, y[n-i] + \sum_{i=0}^{M} b_i\, x[n-i]$$   (1B.12)
Solution by Recursion
We can solve difference equations by a direct numerical procedure.
There is a MATLAB function available for download from the Signals and
Systems web site called recur that solves the above equation.
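A Python sketch of a routine in the same spirit can make the recursion concrete. The name and argument order here are assumptions for illustration, not the interface of the actual MATLAB recur function:

```python
# Solve y[n] = sum_{i=1..N} a[i] y[n-i] + sum_{i=0..M} b[i] x[n-i]
# by direct recursion, given the N past outputs as initial conditions.

def recur(a, b, x, y_init):
    """a: feedback coeffs a[1..N]; b: feedforward coeffs b[0..M];
    x: input samples x[0..]; y_init: [y[-N], ..., y[-1]]."""
    N, M = len(a), len(b) - 1
    y = list(y_init)                      # y[-N..-1] stored at the front
    x_padded = [0.0] * M + list(x)        # zeros for x[n-i] before n = 0
    for n in range(len(x)):
        acc = sum(a[i - 1] * y[len(y) - i] for i in range(1, N + 1))
        acc += sum(b[i] * x_padded[n + M - i] for i in range(M + 1))
        y.append(acc)
    return y[N:]                          # y[0..], initial conditions dropped

# First-order example y[n] = 0.5 y[n-1] + x[n], unit-pulse input:
out = recur([0.5], [1.0], [1.0, 0.0, 0.0, 0.0], [0.0])
print(out)  # geometric decay: 1, 0.5, 0.25, 0.125
```

Each output sample is computed from stored past outputs and inputs, which is exactly how a difference equation is implemented in practice.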
Complete Solution
By solving Eq. (1B.12) recursively it is possible to generate an expression for the complete solution y[n] in terms of the initial conditions and the input x[n].
First-Order Case
Consider the first-order linear difference equation:

$$y[n] = a\, y[n-1] + b\, x[n]$$   (1B.13)

Solving recursively from the initial condition y[-1]:

$$y[0] = a\, y[-1] + b\, x[0]$$
$$y[1] = a^2 y[-1] + a b\, x[0] + b\, x[1]$$
$$y[2] = a^3 y[-1] + a^2 b\, x[0] + a b\, x[1] + b\, x[2]$$   (1B.14)
From the pattern, it can be seen that for $n \ge 0$:

$$y[n] = a^{n+1} y[-1] + \sum_{i=0}^{n} a^{n-i}\, b\, x[i]$$   (1B.15)

This is our first look at a convolution summation as the solution of a first-order linear difference equation.
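The closed form (1B.15) can be checked against direct recursion of y[n] = a y[n-1] + b x[n]. The values of a, b, the input and y[-1] below are arbitrary choices for the check:

```python
a, b = 0.8, 2.0
y_m1 = 1.5                     # y[-1]
x = [1.0, -0.5, 3.0, 0.0, 2.0]

# Direct recursion of y[n] = a y[n-1] + b x[n]
y_rec, prev = [], y_m1
for n in range(len(x)):
    prev = a * prev + b * x[n]
    y_rec.append(prev)

# Closed form (1B.15): y[n] = a^(n+1) y[-1] + sum_{i=0}^{n} a^(n-i) b x[i]
y_cf = [a ** (n + 1) * y_m1 + sum(a ** (n - i) * b * x[i] for i in range(n + 1))
        for n in range(len(x))]

# The two must agree sample by sample
assert all(abs(u - v) < 1e-9 for u, v in zip(y_rec, y_cf))
print(y_rec)
```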
Discrete-time systems can be built up from simple elements. A gain element multiplies the input signal by a constant:

Figure 1B.1 (a gain element: input x[n], output y[n] = A x[n])

The unit-delay element is shown below:

Figure 1B.2 (a discrete-time unit-delay element: input x[n], output y[n] = x[n-1])

Such an element is normally implemented by the memory of a computer, or a digital delay line.
Example
Using these two elements and an adder, we can construct a representation of the discrete-time system given by y[n] = a y[n-1] + b x[n]. The system is shown below:

(block diagram: x[n] passes through the gain b into an adder whose output is y[n]; y[n-1], produced by a unit delay, is fed back through the gain a into the adder)

Discretization in Time of Differential Equations

A differential equation can be discretized in time by approximating the derivative with a difference over a small interval T:

$$\left.\frac{dy(t)}{dt}\right|_{t=nT} \approx \frac{y(nT + T) - y(nT)}{T}$$   (1B.16)

Applying this to the first-order system of Eq. (1B.4) gives the first-order difference equation approximation of a first-order differential equation:

$$y[n] = (1 - aT)\, y[n-1] + bT\, x[n-1]$$   (1B.17)
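The quality of this approximation improves as T shrinks. A minimal numerical check, assuming arbitrary values a = 2, b = 1, a unit-step input and zero initial condition, against the exact step response (b/a)(1 - e^{-at}):

```python
import math

# Discretizing dy/dt + a*y = b*x via Eq. (1B.17):
#   y[n] = (1 - aT) y[n-1] + bT x[n-1]
a, b = 2.0, 1.0   # arbitrary example values

def euler_step_response(T, t_end):
    n_steps = int(round(t_end / T))
    y = 0.0                               # zero initial condition
    for _ in range(n_steps):
        y = (1 - a * T) * y + b * T * 1.0  # x(t) = 1 for t >= 0
    return y

exact = (b / a) * (1 - math.exp(-a * 1.0))   # exact y(1)
err_coarse = abs(euler_step_response(0.1, 1.0) - exact)
err_fine = abs(euler_step_response(0.001, 1.0) - exact)
print(err_coarse, err_fine)   # the error shrinks with the time step T
```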
Second-order Case
We can generalize the discretization process to higher-order differential equations. In the second-order case the following approximation of a second-order derivative can be used:

$$\left.\frac{d^2 y(t)}{dt^2}\right|_{t=nT} \approx \frac{y(nT + 2T) - 2 y(nT + T) + y(nT)}{T^2}$$   (1B.18)

Applying this to the second-order differential equation:

$$\frac{d^2 y(t)}{dt^2} + a_1 \frac{dy(t)}{dt} + a_0\, y(t) = b_1 \frac{dx(t)}{dt} + b_0\, x(t)$$   (1B.19)

gives the second-order difference equation approximation:

$$y[n] = (2 - a_1 T)\, y[n-1] - (1 - a_1 T + a_0 T^2)\, y[n-2] + b_1 T\, x[n-1] + (b_0 T^2 - b_1 T)\, x[n-2]$$   (1B.20)
Convolution in Linear Time-invariant Discrete-time Systems
Although the linear difference equation is the most basic description of a linear
discrete-time system, we can develop an equivalent representation called the
convolution representation. This representation will help us to determine
important system properties that are not readily apparent from observation of
the difference equation.
One advantage of this representation is that the output is written as a linear combination of past and present input signal elements. It is only valid when the system starts in the zero state.

Consider again the first-order linear difference equation:

$$y[n] = a\, y[n-1] + b\, x[n]$$   (1B.21)

The complete response was found to be:

$$y[n] = a^{n+1} y[-1] + \sum_{i=0}^{n} a^{n-i}\, b\, x[i]$$   (1B.22)

With zero initial conditions (y[-1] = 0), the zero-state response (ZSR) is a convolution summation:

$$y[n] = \sum_{i=0}^{n} a^{n-i}\, b\, x[i]$$   (1B.23)
In contrast to Eq. (1B.21), we can see that Eq. (1B.23) depends exclusively on present and past values of the input signal. One advantage of this is that we may directly observe how each past input affects the present output signal. For example, an input x[i] contributes an amount $a^{n-i} b\, x[i]$ to the total output y[n].
Unit-Pulse Response of a First-Order System
The output of a system subjected to a unit-pulse input $\delta[n]$ is denoted h[n] and is called the unit-pulse response, or weighting sequence, of the discrete-time system. It is very important because it completely characterises a system's behaviour. It may also provide an experimental or mathematical means to determine system behaviour.

For the first-order system of Eq. (1B.21), if we let $x[n] = \delta[n]$, then the output of the system to a unit-pulse input can be expressed using Eq. (1B.23) as:

$$y[n] = \sum_{i=0}^{n} a^{n-i}\, b\, \delta[i]$$   (1B.24)

Only the i = 0 term survives, so for $n \ge 0$:

$$y[n] = a^n b$$   (1B.25)

and therefore the unit-pulse response is:

$$h[n] = a^n b\, u[n]$$   (1B.26)
General System
For a general linear time-invariant (LTI) system, the response to a delayed unit-pulse $\delta[n-i]$ must be $h[n-i]$.

Since x[n] can be written as:

$$x[n] = \sum_{i=0}^{\infty} x[i]\, \delta[n-i]$$   (1B.27)
and since the system is LTI, the response $y_i[n]$ to $x[i]\,\delta[n-i]$ is given by:

$$y_i[n] = x[i]\, h[n-i]$$   (1B.28)

The response to the sum Eq. (1B.27) must be equal to the sum of the $y_i[n]$ defined by Eq. (1B.28). Thus the response to x[n] is the convolution summation for a discrete-time system:

$$y[n] = \sum_{i=0}^{\infty} x[i]\, h[n-i]$$   (1B.29)

which is written in convolution notation as:

$$y[n] = h[n] * x[n]$$   (1B.30)
x[n] → [ h[n] ] → y[n]

Figure 1B.3 (graphical notation for a discrete-time system using the unit-pulse response)
It should be pointed out that the convolution representation is not very efficient
in terms of a digital implementation of the output of a system (needs lots more
memory and calculating time) compared with the difference equation.
Convolution is commutative, which means that it is also true to write:

$$y[n] = \sum_{i=0}^{\infty} h[i]\, x[n-i]$$   (1B.31)
Discrete-time convolution can be illustrated as follows. Suppose the unit-pulse response is that of a filter of finite length k. Then the output of such a filter is:

$$y[n] = h[n] * x[n] = \sum_{i=0}^{k} h[i]\, x[n-i] = h[0]x[n] + h[1]x[n-1] + \cdots + h[k]x[n-k]$$   (1B.32)

Graphically, this summation can be viewed as two buffers, or arrays, sliding past one another. The array locations that overlap are multiplied and summed to form the output at that instant.
Figure 1B.4 (graphical view of the convolution operation in discrete-time: a fixed array of unit-pulse response values h[0] ... h[5] overlaps a sliding array of input values x[n], x[n-1], ..., x[n-5], with future input signal values yet to enter the sliding array)
In other words, the output at time n is equal to a linear combination of past and
present values of the input signal, x. The system can be considered to have a
memory because at any particular time, the output is still responding to an
input at a previous time.
Discrete-time convolution can be implemented by a transversal digital filter:
Figure 1B.5 (a transversal digital filter performs discrete-time convolution: the input x[n] passes through a chain of delay elements D producing x[n-1], x[n-2], ..., x[n-k]; each tap x[n-i] is weighted by h[i], and the weighted taps are summed to form y[n])
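The delay line of Figure 1B.5 maps naturally onto a streaming implementation. A minimal sketch (the class name and the three-tap moving-average coefficients are arbitrary choices for illustration):

```python
from collections import deque

class TransversalFilter:
    """Streaming FIR filter: output = h[0]x[n] + h[1]x[n-1] + ... + h[k]x[n-k]."""
    def __init__(self, taps):
        self.taps = taps
        # delay line holding x[n], x[n-1], ..., x[n-k], initially all zero
        self.line = deque([0.0] * len(taps), maxlen=len(taps))

    def step(self, x_n):
        self.line.appendleft(x_n)   # shift the delay line by one sample
        return sum(h * x for h, x in zip(self.taps, self.line))

# Three-tap moving-average filter: h = [1/3, 1/3, 1/3]
f = TransversalFilter([1 / 3] * 3)
out = [f.step(v) for v in [3.0, 3.0, 3.0, 0.0, 0.0]]
print(out)   # ramps up while the pulse fills the line, then back down
```

The deque with a fixed maxlen plays the role of the memory elements: appending a new sample automatically discards the oldest one.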
System Memory
A system's memory can be roughly interpreted as a measure of how significant past inputs are to the current output. Consider the two unit-pulse responses below:

Figure 1B.6 (two unit-pulse responses: h1[n] decays to zero within a few samples, while h2[n] is still significant at n = 18, 19, 20; system memory depends on the unit-pulse response)
System 1 depends strongly on inputs applied five or six iterations ago and less so on inputs applied more than six iterations ago. The output of system 2 depends strongly on inputs 20 or more iterations ago. System 1 is said to have a shorter memory than system 2.

It is apparent that a measure of system memory is obtained by noting how quickly the system unit-pulse response decays to zero: the more quickly a system's weighting sequence goes to zero, the shorter the memory. Some applications require a short memory, where the output is more readily influenced by the most recent behaviour of the input signal. Such systems are fast responding. A system with long memory does not respond as readily to changes in the recent behaviour of the input signal and is said to be sluggish.
System Stability
A system is stable if its output signal remains bounded in response to any bounded input signal:

$$\left| x[n] \right| \le M < \infty \implies \left| y[n] \right| \le K < \infty$$   (1B.33)

If a bounded input (BI) produces a bounded output (BO), then the system is termed BIBO stable. For the convolution representation, this implies that:

$$\lim_{i \to \infty} h[i] = 0$$   (1B.34)

This is something not readily apparent from the difference equation. A more thorough treatment of system stability will be given later.

What can you say about the stability of the system described by Eq. (1B.21)?
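For the first-order system of Eq. (1B.21), h[n] = a^n b, so h[n] → 0 exactly when |a| < 1. A quick numerical look at a bounded (unit-step) input; the particular values of a and b are arbitrary:

```python
def step_response(a, b, n_samples):
    # y[n] = a y[n-1] + b x[n] driven by the unit step x[n] = 1
    y, out = 0.0, []
    for _ in range(n_samples):
        y = a * y + b * 1.0
        out.append(y)
    return out

stable = step_response(0.5, 1.0, 60)     # |a| < 1: settles at b/(1-a) = 2
unstable = step_response(1.5, 1.0, 60)   # |a| > 1: grows without bound
print(stable[-1], unstable[-1])
```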
Convolution in Linear Time-invariant Continuous-time Systems

Start with a rectangle input of unit area:

$$x_r(t) = \frac{1}{T}\,\mathrm{rect}\!\left(\frac{t}{T}\right)$$   (1B.35)

and let the system's zero-state response to it be:

$$x_r(t) \rightarrow y_r(t)$$   (1B.36)

As the input approaches an impulse function:

$$\lim_{T \to 0} x_r(t) = \delta(t)$$   (1B.37)

the response approaches the impulse response:

$$\lim_{T \to 0} y_r(t) = h(t)$$   (1B.38)
Now approximate an arbitrary input x(t) by a staircase of such rectangles:

Figure 1B.7 (an arbitrary waveform x(t) and its staircase approximation x̃(t,T), built from rectangles of width T centred at ..., -4T, -2T, 0, 2T, 4T, 6T, ...)

As the rectangles get smaller and smaller, the approximation approaches the original waveform:

$$x(t) = \lim_{T \to 0} \tilde{x}(t, T)$$   (1B.39)

where:

$$\tilde{x}(t, T) = \sum_{i=-\infty}^{\infty} x(iT)\, \mathrm{rect}\!\left(\frac{t - iT}{T}\right)$$   (1B.40)

In terms of the unit-area rectangle $x_r(t)$, this is:

$$\tilde{x}(t, T) = \sum_{i=-\infty}^{\infty} x(iT)\, T\, x_r(t - iT)$$   (1B.41)
Since the system is time-invariant, the response to $x_r(t - iT)$ is $y_r(t - iT)$. Therefore the system response to $\tilde{x}(t, T)$ is:

$$\tilde{y}(t, T) = \sum_{i=-\infty}^{\infty} x(iT)\, T\, y_r(t - iT)$$   (1B.42)

and, since we already know the output due to each rectangle:

$$y(t) = \lim_{T \to 0} \tilde{y}(t, T) = \lim_{T \to 0} \sum_{i=-\infty}^{\infty} x(iT)\, y_r(t - iT)\, T$$   (1B.43)

In the limit the sum becomes the convolution integral for continuous-time systems:

$$y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau$$   (1B.44)

If the input starts at time t = 0:

$$y(t) = \int_{0}^{\infty} x(\tau)\, h(t - \tau)\, d\tau$$   (1B.45)

and if, in addition, the system is causal:

$$y(t) = \int_{0}^{t} x(\tau)\, h(t - \tau)\, d\tau$$   (1B.46)

Once again, it can be shown that convolution is commutative, which means that it is also true to write (compare with Eq. (1B.31)):

$$y(t) = \int_{0}^{t} h(\tau)\, x(t - \tau)\, d\tau$$   (1B.47)
With the convolution operation denoted by an asterisk, *, the input/output relationship becomes:

$$y(t) = h(t) * x(t)$$   (1B.48)

x(t) → [ h(t) ] → y(t) = h(t) * x(t)

Figure 1B.8

It should be pointed out, once again, that the convolution relationship is only valid when there is no initial energy stored in the system, i.e. initial conditions are zero. The output response using convolution is just the ZSR.

As an example, consider a system with impulse response $h(t) = e^{-t} u(t)$ driven by a unit-step input $x(t) = u(t)$:

Figure 1B.9 (the impulse response h(t) = e^{-t} for t ≥ 0, and the unit-step input x(t))
Using graphical convolution, the output y(t) can be obtained. First, the input signal is flipped in time about the origin. Then, as the time parameter t advances, the input signal slides past the impulse response in much the same way as the input values slide past the unit-pulse values for discrete-time convolution. You can think of this graphical technique as the continuous-time version of a digital transversal filter (you might like to think of it as a discrete-time system and input signal, with the time delay between successive values so tiny that the finite summation of Eq. (1B.30) turns into a continuous-time integration).

When t = 0, there is no overlap between the impulse response and the input signal. The output must be zero since we have assumed the system to be in the zero-state (all initial conditions zero). Therefore y(0) = 0. This is illustrated below:

Figure (snapshot of continuous-time convolution at t = 0: h(τ) and the flipped input x(0 - τ) do not overlap, so their product is zero)
Taking a snapshot at t = 1 gives:

Figure (snapshot at t = 1: h(τ) and x(1 - τ) overlap for 0 ≤ τ ≤ 1, where their product is e^{-τ})

$$y(1) = \int_{0}^{1} h(\tau)\, x(1 - \tau)\, d\tau = \int_{0}^{1} e^{-\tau}\, d\tau = 1 - e^{-1} \approx 0.63$$   (1B.49)
Taking a snapshot at t = 2 gives:

Figure (snapshot at t = 2: h(τ) and x(2 - τ) overlap for 0 ≤ τ ≤ 2, where their product is e^{-τ})

$$y(2) = \int_{0}^{2} h(\tau)\, x(2 - \tau)\, d\tau = \int_{0}^{2} e^{-\tau}\, d\tau = 1 - e^{-2} \approx 0.86$$   (1B.50)
If we keep evaluating the output for various values of t, we can build up a graphical picture of the output for all time:

Figure 1B.13 (the output y(t) = 1 - e^{-t}, passing through y(0) = 0, y(1) = 0.63 and y(2) = 0.86, approaching 1 asymptotically)

In this simple case, it is easy to verify the graphical solution using Eq. (1B.47). The output value at any time t is given by:

$$y(t) = \int_{0}^{t} h(\tau)\, x(t - \tau)\, d\tau = \int_{0}^{t} e^{-\tau}\, d\tau = 1 - e^{-t}$$   (1B.51)
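The snapshot values read off the graphical construction can be confirmed directly from this closed form:

```python
import math

# From Eq. (1B.51): with h(t) = e^{-t} u(t) and a unit-step input,
# the output is y(t) = 1 - e^{-t} for t >= 0, and zero before the input starts.
def y(t):
    return 1.0 - math.exp(-t) if t >= 0 else 0.0

print(round(y(1.0), 2), round(y(2.0), 2))   # the snapshot values at t = 1 and t = 2
```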
Properties of Convolution
In the following list of continuous-time properties, the notation $x(t) \rightarrow y(t)$ should be read as "the input x(t) produces the output y(t)". Similar properties also hold for discrete-time convolution.

Linearity:

$$a\, x(t) \rightarrow a\, y(t)$$   (1B.52a)

$$x_1(t) + x_2(t) \rightarrow y_1(t) + y_2(t)$$   (1B.52b)

$$a_1 x_1(t) + a_2 x_2(t) \rightarrow a_1 y_1(t) + a_2 y_2(t)$$   (1B.52c)

Time-invariance:

$$x(t - t_0) \rightarrow y(t - t_0)$$   (1B.52d)
Numerical Convolution
We have already looked at how to discretize a continuous-time system by discretizing the system's input/output differential equation. The following procedure provides another method for discretizing a continuous-time system, since computers work with discrete data. It should be noted that the two different methods produce two different discrete-time representations.

We start by thinking about how to simulate a continuous-time convolution with a computer, which operates on discrete-time data. The integral in Eq. (1B.47) can be discretized by setting t = nT:

$$y(nT) = \int_{0}^{nT} h(\tau)\, x(nT - \tau)\, d\tau$$   (1B.53)
By effectively reversing the procedure in arriving at Eq. (1B.47), we can break this integral into regions of width T:

$$y(nT) = \int_{0}^{T} h(\tau)\, x(nT - \tau)\, d\tau + \int_{T}^{2T} h(\tau)\, x(nT - \tau)\, d\tau + \cdots$$   (1B.54)

$$y(nT) = \sum_{i=0}^{n} \int_{iT}^{iT+T} h(\tau)\, x(nT - \tau)\, d\tau$$   (1B.55)

Figure 1B.14 (h(τ) approximated by the constant value h(iT) over the interval iT ≤ τ < iT + T)

That is, apply Euler's approximation over each strip:

$$h(\tau) \approx h(iT), \qquad x(nT - \tau) \approx x(nT - iT), \qquad iT \le \tau < iT + T$$   (1B.56)
so that Eq. (1B.55) becomes:

$$y(nT) \approx \sum_{i=0}^{n} \int_{iT}^{iT+T} h(iT)\, x(nT - iT)\, d\tau$$   (1B.57)

Since the integrand is constant with respect to τ, it can be moved outside the integral, which is easily evaluated. We approximate the integral with a summation:

$$y(nT) \approx \sum_{i=0}^{n} h(iT)\, x(nT - iT)\, T$$   (1B.58)

Writing in the notation for discrete-time signals, we have the following input/output relationship, the convolution approximation for causal systems with inputs applied at t = 0:

$$y[n] = \sum_{i=0}^{n} h[i]\, x[n-i]\, T, \qquad n = 0, 1, 2, \ldots$$   (1B.59)
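Eq. (1B.59) can be tried on the earlier example, where the exact answer is known: h(t) = e^{-t} u(t) with a unit-step input gives y(t) = 1 - e^{-t} (Eq. (1B.51)). A minimal sketch evaluating the approximation at t = 2 for two step sizes:

```python
import math

def numeric_convolution(T, t_end):
    # Eq. (1B.59): y[n] ~= sum_{i=0}^{n} h[i] x[n-i] T, evaluated at n = t_end/T
    n_end = int(round(t_end / T))
    h = [math.exp(-i * T) for i in range(n_end + 1)]   # samples of e^{-t}
    x = [1.0] * (n_end + 1)                            # unit-step samples
    return sum(h[i] * x[n_end - i] for i in range(n_end + 1)) * T

exact = 1.0 - math.exp(-2.0)                 # y(2) from Eq. (1B.51)
err_coarse = abs(numeric_convolution(0.1, 2.0) - exact)
err_fine = abs(numeric_convolution(0.001, 2.0) - exact)
print(err_coarse, err_fine)                  # the error shrinks with T
```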
Convolution with an Impulse
One very important particular case of convolution that we will use all the time is that of convolving a function with a delayed impulse. We can tackle the problem three ways: graphically, algebraically, or by using the concept that a system performs convolution. Using this last approach, we can surmise what the solution is by recognising that the convolution of a function h(t) with an impulse is equivalent to applying an impulse to a system that has an impulse response given by h(t):

Figure 1B.15 (applying an impulse δ(t) to a system with impulse response h(t) creates the impulse response: y(t) = h(t) * δ(t) = h(t))

The output, by definition, is the impulse response, h(t). We can also arrive at this result algebraically by performing the convolution integral, and noting that it is really a sifting integral:

$$\delta(t) * h(t) = \int_{-\infty}^{\infty} \delta(\tau)\, h(t - \tau)\, d\tau = h(t)$$   (1B.60)
If we now apply a delayed impulse to the system, then since the system is time-invariant, we should get out a delayed impulse response:

Figure 1B.16 (applying a delayed impulse δ(t - t₀) to a system with impulse response h(t) creates a delayed impulse response: y(t) = h(t) * δ(t - t₀) = h(t - t₀))

Again, using the definition of the convolution integral and the sifting property of the impulse, we can arrive at the result algebraically:

$$\delta(t - t_0) * h(t) = \int_{-\infty}^{\infty} \delta(\tau - t_0)\, h(t - \tau)\, d\tau = h(t - t_0)$$   (1B.61)

In general, convolving a function with an impulse shifts the original function to the impulse's location:

$$f(x) * \delta(x - x_0) = f(x - x_0)$$   (1B.62)

Figure 1B.17 (a function f(x) convolved with δ(x - x₀) gives f(x - x₀), the original function shifted to x₀)
Summary
Discrete-time signals occur naturally and frequently; they are signals that exist only at discrete points in time. Discrete-time systems are commonly implemented using microprocessors.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems Using the Web and MATLAB, Prentice-Hall.
Exercises
1.
The following continuous-time functions are to be uniformly sampled. Plot the
discrete signals which result if the sampling period T is (i) T 0.1 s ,
(ii) T 0.3 s , (iii) T 0.5 s , (iv) T 1 s . How does the sampling time affect
the accuracy of the resulting signal?
(a) x(t) = 1

(b) x(t) = cos(4t)

(c) x(t) = cos(10t)
2.
Plot the sequences given by:
(a) y1 n 3 n 1 n 2 n 1 1 2 n 2
(b) y 2 n 4 n n 2 3 n 3
3.
From your solution in Question 2, find a[n] = y₁[n] + y₂[n]. Show graphically that the resulting sequence is equivalent to the sum of the following delayed unit-step sequences:
an 3un 1 un 1 1 2 un 2 9 2 un 3 3un 4
4.
Find yn y1 n y 2 n when:
n 1, -2, -3,
0,
y1 n
n 2 1
n 0, 1, 2,
1 ,
0,
n 1, -2, -3,
y 2 n
n
n 0, 1, 2,
1 2 1 1 ,
1B.30
5.
The following series of numbers is known as the Fibonacci sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34
(a) Find a difference equation which describes this number sequence y[n] for n ≥ 2, when y[0] = 0 and y[1] = 1.

(b) By evaluating the first few terms, show that the following formula also describes the numbers in the Fibonacci sequence:

$$y[n] = \frac{1}{\sqrt{5}}\left[\left(0.5 + \sqrt{1.25}\right)^n - \left(0.5 - \sqrt{1.25}\right)^n\right]$$

(c) Using your answer in (a), find y[20] and y[25]. Check your results using the equation in (b). Which approach is easier?
6.
Construct block diagrams for the following difference equations:
(i)
yn yn 2 xn xn 1
(ii)
yn 2 yn 1 yn 2 3 xn 4
7.
(i) Construct a difference equation from the following block diagram:
x [n ]
y [n ]
3
D
-2
1B.31
8.
(a) Find the unit-pulse response of the linear systems given by the following
equations:
(i) y[n] = (T/2)(x[n] + x[n-1]) + y[n-1]

(ii) y[n] = x[n] + 0.75 x[n-1] + 0.5 y[n-1]
(b) Determine the first five terms of the response of the equation in (ii) to the
input:
n 2, 3, 4,
0,
xn 1,
n 1
n
1 ,
n 0, 1, 2,
using (i) the basic difference equation, (ii) graphical convolution and (iii) the convolution summation. (Note: y[n] = 0 for n ≤ -2.)
9.
For the single-input single-output continuous- and discrete-time systems characterized by the following equations, determine which coefficients must be zero for the systems to be:

(a) linear

(b) time-invariant
2
(i)
d3y
d2y
dy
a1 3 a2 2 a3 a4 y a5 sin t a6 y a7 x
dt
dt
dt
(ii)
a1 y 2 n 3 a2 yn 2 a3 a4 yn a5 sin n yn 1 a6 yn a7 xn
1B.32
10.
To demonstrate that nonlinear systems do not obey the principle of
superposition, determine the first five terms of the response of the system:
y[n] = 2 y[n-1] + x²[n]

to the input:

x₁[n] = 0 for n = -1, -2, -3, ...;  x₁[n] = 1 for n = 0, 1, 2, ...

If y₁[n] denotes this response, show that the response of the system to the input x[n] = 2x₁[n] is not 2y₁[n].
11.
A system has the unit-pulse response:
h[n] = 2u[n] - u[n-2] - u[n-4]

Find the response of this system when the input is the sequence:

x[n] = δ[n] + δ[n-1] + δ[n-2] + δ[n-3]
using (i) graphical convolution and (ii) convolution summation.
1B.33
12.
For x₁[n] and x₂[n] as shown below, find:

(i) x₁[n] * x₁[n]   (ii) x₁[n] * x₂[n]   (iii) x₂[n] * x₂[n]
x 1[n]
x 2[n]
2
0
1 2 3
1 2 3 4 5 6 7 8
13.
Use MATLAB and discretization to produce approximate solutions to the
revision problem.
14.
Use MATLAB to graph the output voltage of the following RLC circuit:
v i (t )
v o(t )
1B.34
15.
A feedback control system is used to control a room's temperature with respect to a preset value. A simple model for this system is represented by the block diagram shown below:
x (t )
y (t )
16.
Use MATLAB and the numerical convolution method to solve Q14.
1B.35
17.
Sketch the convolution of the two functions shown below.
x(t)
y(t)
1
-1 0
0 0.5
18.
Quickly changing inputs to an aircraft rudder control are smoothed using a
digital processor. That is, the control signal is converted to a discrete-time
signal by an A/D converter, the discrete-time signal is smoothed with a
discrete-time filter, and the smoothed discrete-time signal is converted to a
continuous-time, smoothed, control signal by a D/A converter. The smoothing
filter has the unit-pulse response:
Find the zero-state response of the discrete-time filter when the input signal
samples are:
xnT 1, 1, 1, T 0.25 s
Plot the input, unit-pulse response, and output for 0.75 t 1.5 s .
1B.36
19.
A wave staff measures ocean wave height in meters as a function of time. The
height signal is sampled at a rate of 5 samples per second. These samples form
the discrete-time signal:
s(nT) = cos(2π(0.2)nT + 1.1) + 0.5 cos(2π(0.3)nT + 1.5)
The received signal plus noise, xnT , is processed with a low-pass filter to
reduce the noise.
The filter unit-pulse response is:
Plot the sampled height signal, s(nT), the filter input signal, x(nT), the unit-pulse response of the filter, h(nT), and the filter output signal, y(nT), for 0 ≤ t ≤ 6 s.
Gustav Robert Kirchhoff (1824-1887)
Kirchhoff was born in Königsberg, Prussia, and showed an early interest in mathematics. He studied at the University of Königsberg, and in 1845, while still a student, he pronounced Kirchhoff's Laws, which allow the calculation of current and voltage for any circuit. They are the Laws electrical engineers apply on a routine basis; they even apply to non-linear circuits such as those containing semiconductors, or distributed-parameter circuits such as microwave striplines.
He graduated from university in 1847 and received a scholarship to study in
Paris, but the revolutions of 1848 intervened. Instead, he moved to Berlin
where he met and formed a close friendship with Robert Bunsen, the inorganic
chemist and physicist who popularized use of the Bunsen burner.
In 1857 Kirchhoff extended the work done by the German physicist Georg
Simon Ohm, by describing charge flow in three dimensions. He also analysed
circuits using topology. In further studies, he offered a general theory of how
electricity is conducted. He based his calculations on experimental results
which determine a constant for the speed of the propagation of electric charge.
Kirchhoff noted that this constant is approximately the speed of light but the
greater implications of this fact escaped him. It remained for James Clerk
Maxwell to propose that light belongs to the electromagnetic spectrum.
Kirchhoff's most significant work, from 1859 to 1862, involved his close collaboration with Bunsen. Bunsen was in his laboratory, analysing various
salts that impart specific colours to a flame when burned. Bunsen was using
coloured glasses to view the flame. When Kirchhoff visited the laboratory, he
suggested that a better analysis might be achieved by passing the light from the
flame through a prism. The value of spectroscopy became immediately clear.
Each element and compound showed a spectrum as unique as any fingerprint,
which could be viewed, measured, recorded and compared.
"Spectral analysis," Kirchhoff and Bunsen wrote not long afterward, "promises the chemical exploration of a domain which up till now has been completely closed." They not only analysed the known elements, they discovered new
ones. Analysing salts from evaporated mineral water, Kirchhoff and Bunsen detected a blue spectral line; it belonged to an element they christened caesium (from the Latin caesius, sky blue). Studying lepidolite (a lithium-based mica) in 1861, Bunsen found an alkali metal he called rubidium (from the Latin rubidius, deepest red). Both of these elements are used today in atomic clocks. Using spectroscopy, ten more new elements were discovered before the end of the century, and the field had expanded enormously: between 1900 and 1912 a handbook of spectroscopy was published by Kayser in six volumes comprising five thousand pages!
"[Kirchhoff is] a perfect example of the true German investigator. To search after truth in its purest shape and to give utterance with almost an abstract self-forgetfulness, was the religion and purpose of his life." – Robert von Helmholtz, 1890.
Lecture 2A Fourier Series and Spectra
Orthogonality. The trigonometric Fourier series. The compact trigonometric
Fourier series. The spectrum. The complex exponential Fourier series. How to
use MATLAB to check Fourier series coefficients. Symmetry in the time
domain. Power. Filters. Relationships between the three Fourier series
representations.
Orthogonality
The idea of breaking a complex phenomenon down into easily comprehended components is quite fundamental to human understanding. Instead of trying to commit the totality of something to memory, and then, in turn, having to think about it in its totality, we identify characteristics, perhaps associating a scale with each characteristic. Our memory of a person might be confined to the characteristics gender, age, height, skin colour, hair colour, weight and how they rate on a small number of personality attribute scales such as optimist-pessimist, extrovert-introvert, aggressive-submissive etc.

We only need to know how to travel east and how to travel north and we can go from any point on earth to any other point. An artist (or a television tube) needs only three primary colours to make any colour.
In choosing the components, it is most efficient if they are independent, as in gender and height for a person. It would waste memory capacity, for example, to adopt as characteristics both current age and birthdate, as one could be predicted from the other, or all three of total height and height above and below the waist.
Orthogonality in Mathematics
Vectors and functions are similarly often best represented, memorised and
manipulated in terms of a set of magnitudes of independent components. Recall
that any vector A in 3-dimensional space can be expressed in terms of any three vectors a, b and c which do not lie in the same plane as:

$$\mathbf{A} = A_1 \mathbf{a} + A_2 \mathbf{b} + A_3 \mathbf{c}$$   (2A.1)

where A₁, A₂ and A₃ are appropriately chosen constants.
Figure 2A.1 (a vector A in 3D space resolved into components A₁, A₂ and A₃ along the basis vectors a, b and c)
The vectors a, b and c are said to be linearly independent, for no one of them can be expressed as a linear combination of the other two. For example, it is not possible to write a as a linear combination of b and c. Such a set of linearly independent vectors forms a basis set.

Orthogonal means that the projection of one component onto another is zero. In vector analysis, the projection of one vector onto another is given by the dot product.
Thus, for a , b and c orthogonal, we have the relations:
$$\mathbf{a} \cdot \mathbf{a} = a^2 \qquad \mathbf{a} \cdot \mathbf{b} = 0 \qquad \mathbf{a} \cdot \mathbf{c} = 0$$
$$\mathbf{b} \cdot \mathbf{a} = 0 \qquad \mathbf{b} \cdot \mathbf{b} = b^2 \qquad \mathbf{b} \cdot \mathbf{c} = 0$$
$$\mathbf{c} \cdot \mathbf{a} = 0 \qquad \mathbf{c} \cdot \mathbf{b} = 0 \qquad \mathbf{c} \cdot \mathbf{c} = c^2$$   (2A.2)
Hence, if a, b and c are orthogonal, when we project the vector A onto each of the basis vectors, we get:

$$\mathbf{A} \cdot \mathbf{a} = A_1 a^2 \qquad \mathbf{A} \cdot \mathbf{b} = A_2 b^2 \qquad \mathbf{A} \cdot \mathbf{c} = A_3 c^2$$   (2A.3)

so that the vector can be described in terms of its orthogonal components:

$$\mathbf{A} = \frac{\mathbf{A} \cdot \mathbf{a}}{a^2}\,\mathbf{a} + \frac{\mathbf{A} \cdot \mathbf{b}}{b^2}\,\mathbf{b} + \frac{\mathbf{A} \cdot \mathbf{c}}{c^2}\,\mathbf{c}$$   (2A.4)

If we use only two of the basis vectors, there remains an error component $\mathbf{A}_z$ in the representation:

$$\mathbf{A} = \frac{\mathbf{A} \cdot \mathbf{a}}{a^2}\,\mathbf{a} + \frac{\mathbf{A} \cdot \mathbf{b}}{b^2}\,\mathbf{b} + \mathbf{A}_z$$   (2A.5)
If a², b², c² etc. are 1, the set of vectors are not just orthogonal; they are orthonormal.

The above description of a vector in three-dimensional space is exactly analogous to resolving a colour into three primary (orthogonal) components. In this case we project light through red, green and blue filters and find the intensity of each of the three components. The original colour can be synthesised once again by red, green and blue lights of appropriate intensity. Resolving a signal into orthogonal components is an idea with wide application.
The Inner Product
The definition of the dot product for vectors in space can be extended to any general vector space. Consider two n-dimensional vectors u and v:

Figure 2A.2 (two n-dimensional vectors u and v separated by an angle θ)

The inner product is defined as:

$$\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T \mathbf{v} = \mathbf{v}^T \mathbf{u}$$   (2A.6)

For example, in three dimensions:

$$\langle \mathbf{u}, \mathbf{v} \rangle = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = u_1 v_1 + u_2 v_2 + u_3 v_3 = \left|\mathbf{u}\right|\left|\mathbf{v}\right| \cos\theta$$   (2A.7)

If θ = 90° the two vectors are orthogonal and ⟨u, v⟩ = 0. They are then linearly independent; that is, one vector cannot be written in a way that contains a component of the other.
Real functions such as x(t) and y(t) on a given interval a ≤ t ≤ b can be considered a vector space, since they obey the same laws of addition and scalar multiplication as spatial vectors. For functions, we can define the inner product by the integral:

$$\langle x, y \rangle = \int_a^b x(t)\, y(t)\, dt$$   (2A.8)

This definition ensures that the inner product of functions behaves in exactly the same way as the inner product of vectors. Just like vectors, we say that two functions are orthogonal if their inner product is zero:

$$\langle x, y \rangle = 0$$   (2A.9)
Example
Let x(t) = sin 2πt and y(t) = sin 4πt over the interval -1/2 ≤ t ≤ 1/2:

Figure (x(t) = sin 2πt and y(t) = sin 4πt plotted over -1/2 ≤ t ≤ 1/2, each swinging between -1 and 1)

Then the inner product is zero, since the two harmonics are orthogonal:

$$\langle x, y \rangle = \int_{-1/2}^{1/2} \sin(2\pi t)\, \sin(4\pi t)\, dt = 0$$
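This inner product can be evaluated numerically with a simple midpoint quadrature (the helper name and point count are arbitrary choices):

```python
import math

def inner_product(f, g, a, b, n=20_000):
    # midpoint-rule approximation of integral_a^b f(t) g(t) dt
    dt = (b - a) / n
    return sum(f(a + (k + 0.5) * dt) * g(a + (k + 0.5) * dt)
               for k in range(n)) * dt

x = lambda t: math.sin(2 * math.pi * t)
y = lambda t: math.sin(4 * math.pi * t)

print(abs(inner_product(x, y, -0.5, 0.5)))   # ~ 0: x and y are orthogonal
print(inner_product(x, x, -0.5, 0.5))        # ~ 1/2: a function is not orthogonal to itself
```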
Orthogonality in Power Signals
Consider a finite power signal, x(t). The average power of x(t) is:

$$P_x = \frac{1}{T} \int_0^T x^2(t)\, dt = \frac{1}{T} \langle x, x \rangle$$   (2A.10)

Now consider two finite power signals, x(t) and y(t). The average value of the product of the two signals observed over a particular interval, T, is given by the following expression:

$$\frac{1}{T} \int_0^T x(t)\, y(t)\, dt = \frac{1}{T} \langle x, y \rangle$$   (2A.11)

If the two signals are orthogonal over the interval, this average is zero, and the power of their sum is:

$$P_{x+y} = \frac{1}{T} \langle x + y, x + y \rangle = \frac{1}{T} \int_0^T \left[ x^2(t) + 2 x(t) y(t) + y^2(t) \right] dt = \frac{1}{T}\langle x, x \rangle + \frac{2}{T}\langle x, y \rangle + \frac{1}{T}\langle y, y \rangle = P_x + 0 + P_y$$   (2A.12)

This means that the total power in the combined signal can be obtained by adding the powers of the individual orthogonal signals.
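Power additivity is easy to check numerically for two orthogonal sinusoids over one period (the function names and point count are arbitrary choices):

```python
import math

def avg_power(f, T=1.0, n=10_000):
    # midpoint-rule approximation of (1/T) integral_0^T f(t)^2 dt
    dt = T / n
    return sum(f((k + 0.5) * dt) ** 2 for k in range(n)) * dt / T

x = lambda t: math.cos(2 * math.pi * t)   # Px = 1/2
y = lambda t: math.sin(2 * math.pi * t)   # Py = 1/2, orthogonal to x over T = 1
s = lambda t: x(t) + y(t)

print(avg_power(x), avg_power(y), avg_power(s))   # the last is Px + Py
```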
Orthogonality in Energy Signals
Consider two finite energy signals in the form of pulses in a digital system. Two pulses, p₁(t) and p₂(t), are orthogonal over a time interval, T, if:

$$\langle p_1, p_2 \rangle = \int_T p_1(t)\, p_2(t)\, dt = 0$$   (2A.13)

Similar to the orthogonal finite power signals discussed above, the total energy of a pulse produced by adding together two orthogonal pulses can be obtained by summing the individual energies of the separate pulses:

$$E_{x+y} = \langle x + y, x + y \rangle = \int_T \left[ x^2(t) + 2 x(t) y(t) + y^2(t) \right] dt = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle = E_x + 0 + E_y$$   (2A.14)

For example, Figure 2A.3 illustrates two orthogonal pulses: they occupy two completely separate portions of the time interval 0 to T, so their product is zero over the time period of interest, which means that they are orthogonal.

Figure 2A.3 (two unit-height pulses: p₁(t) occupies 0 to T/2 and p₂(t) occupies T/2 to T)
The Trigonometric Fourier Series
In 1807 Joseph Fourier showed how to represent any periodic function as a weighted sum of a family of harmonically related sinusoids. This discovery turned out to be just a particular case of a more general concept. We can actually represent any periodic function by a weighted sum of orthogonal functions:

$$g(t) = \sum_{n=0}^{\infty} A_n \phi_n(t)$$   (2A.15)

where the functions $\phi_n(t)$ form an orthogonal basis set, just like the vectors a, b and c etc. in the geometric vector case. The equivalent of the dot product (or the light filter) for obtaining a projection in this case is the inner product given by:

$$\langle g(t), \phi_n(t) \rangle = \int_{-T_0/2}^{T_0/2} g(t)\, \phi_n(t)\, dt$$   (2A.16)

The projection of a function onto itself gives a number (like $\mathbf{a} \cdot \mathbf{a} = a^2$):

$$\int_{-T_0/2}^{T_0/2} \phi_n(t)\, \phi_m(t)\, dt = c_n^2, \qquad n = m$$   (2A.17)

and the projection of a function onto an orthogonal function gives zero (like $\mathbf{a} \cdot \mathbf{b} = 0$):

$$\int_{-T_0/2}^{T_0/2} \phi_n(t)\, \phi_m(t)\, dt = 0, \qquad n \ne m$$   (2A.18)

When $c_n^2 = 1$ the basis set of functions $\phi_n(t)$ (all n) are said to be orthonormal.
Signals and Systems 2014
There are many possible orthogonal basis sets for representing a function over
an interval of time T0 . For example, the infinite set of Walsh functions shown
below can be used as a basis set:
An example of an orthogonal basis set is the Walsh functions:

[Figure 2A.4: the first eight Walsh functions, $\phi_1(t)$ to $\phi_8(t)$, each a two-level square waveform defined over the interval 0 to $T_0$]
Figure 2A.4
We can confirm that the Walsh functions are orthogonal with a few simple
integrations, best performed graphically. For example, with $n = 2$ and $m = 3$:

$$
\int_0^{T_0} \phi_2(t)\,\phi_2(t)\,dt = A^2 T_0, \qquad
\int_0^{T_0} \phi_2(t)\,\phi_3(t)\,dt = 0
$$

[Figure 2A.5: graphical evaluation of $\phi_2^2(t)$, which is the constant $A^2$, and of the product $\phi_2(t)\phi_3(t)$, whose positive and negative areas cancel]

Figure 2A.5
The trigonometric Fourier series is a very special way of representing periodic
functions. The basis set chosen for the Fourier series is the set of pairs of sines
and cosines with frequencies that are integer multiples of $f_0 = 1/T_0$:
The orthogonal functions for the Fourier series are sinusoids:

$$
\phi_{a_n}(t) = \cos(2\pi n f_0 t), \qquad \phi_{b_n}(t) = \sin(2\pi n f_0 t) \quad (2A.19)
$$
They were chosen for the two reasons outlined in Lecture 1A: for linear
systems a sinusoidal input yields a sinusoidal output; and they have a compact
notation using complex numbers. The constant $c_n^2$ in this case is either $T_0/2$
or $T_0$, since:

$$
\int_0^{T_0} \cos(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt =
\begin{cases} 0 & m \neq n \\ T_0/2 & m = n \neq 0 \\ T_0 & m = n = 0 \end{cases} \quad (2A.20a)
$$

$$
\int_0^{T_0} \sin(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt =
\begin{cases} 0 & m \neq n \\ T_0/2 & m = n \neq 0 \\ 0 & m = n = 0 \end{cases} \quad (2A.20b)
$$

$$
\int_0^{T_0} \sin(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt = 0, \quad \text{all } m, n \quad (2A.20c)
$$
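These orthogonality relations can be spot-checked by numerical integration. A minimal Python/NumPy sketch (grid size arbitrary) evaluates the inner product for a few sample values of $m$ and $n$:

```python
import numpy as np

# Riemann-sum check of the sinusoid orthogonality relations (2A.20)
# over one period T0.
T0 = 1.0
f0 = 1 / T0
t = np.linspace(0, T0, 100_000, endpoint=False)
dt = t[1] - t[0]

c = lambda n: np.cos(2 * np.pi * n * f0 * t)
s = lambda n: np.sin(2 * np.pi * n * f0 * t)
ip = lambda x, y: np.sum(x * y) * dt    # inner product over one period

cc_diff = ip(c(2), c(3))   # m != n      -> 0
cc_same = ip(c(2), c(2))   # m = n != 0  -> T0/2
cc_dc   = ip(c(0), c(0))   # m = n = 0   -> T0
sc      = ip(s(2), c(3))   # sin x cos   -> 0 for all m, n
```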
If we choose the orthogonal basis set as in Eq. (2A.19), and the representation
of a function as given by Eq. (2A.15), then any periodic function $g(t)$ with
period $T_0$ can be expressed as a sum of orthogonal components. This is the
trigonometric Fourier series:

$$
g(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n f_0 t) + b_n \sin(2\pi n f_0 t) \right] \quad (2A.21)
$$
Now look back at Eqs. (2A.4) and (2A.17). If we want to determine the
coefficients in the Fourier series, all we have to do is project the function
onto each of the components of the basis set and normalise by dividing by $c_n^2$:

$$
a_0 = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,dt \quad (2A.22a)
$$

$$
a_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t) \cos(2\pi n f_0 t)\,dt \quad (2A.22b)
$$

$$
b_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t) \sin(2\pi n f_0 t)\,dt \quad (2A.22c)
$$

Compare these equations with Eq. (2A.3). These equations tell us how to filter
out one particular component of the Fourier series. Note that frequency 0 is
DC, and the coefficient $a_0$ represents the average, or DC, part of the periodic
signal $g(t)$.
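The projection integrals (2A.22) can be evaluated numerically for any periodic waveform. As an illustration (a Python/NumPy sketch; a rectangular pulse train of amplitude $A$, width $\tau$ and period $T_0 = 5\tau$ is an arbitrary test case), the Riemann sums reproduce the closed-form coefficients:

```python
import numpy as np

# Trigonometric Fourier series coefficients of a rectangular pulse train
# (amplitude A, width tau, period T0 = 5*tau), by numerical integration.
A, tau, T0 = 1.0, 1.0, 5.0
f0 = 1 / T0
t = np.linspace(-T0 / 2, T0 / 2, 200_000, endpoint=False)
dt = t[1] - t[0]
g = np.where(np.abs(t) < tau / 2, A, 0.0)

a0 = np.sum(g) * dt / T0
a = [2 / T0 * np.sum(g * np.cos(2 * np.pi * n * f0 * t)) * dt for n in range(1, 8)]
b = [2 / T0 * np.sum(g * np.sin(2 * np.pi * n * f0 * t)) * dt for n in range(1, 8)]

# closed forms: a0 = A*tau*f0, an = 2*A*tau*f0*sinc(n*f0*tau), bn = 0
a_ref = [2 * A * tau * f0 * np.sinc(n * f0 * tau) for n in range(1, 8)]
```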
Example

Find the Fourier series for the rectangular pulse train $g(t)$ shown below:

[Figure 2A.6: a train of rectangular pulses of amplitude $A$ and width $\tau$, centred on $t = 0$ and repeating with period $T_0$]

Here the period is $T_0$ and $f_0 = 1/T_0$. Using Eqs. (2A.22), we have for the
Fourier series coefficients:

$$
a_0 = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,dt = \frac{1}{T_0} \int_{-\tau/2}^{\tau/2} A\,dt = \frac{A\tau}{T_0} = A\tau f_0 \quad (2A.23a)
$$

$$
a_n = \frac{2}{T_0} \int_{-\tau/2}^{\tau/2} A \cos(2\pi n f_0 t)\,dt = \frac{2A}{\pi n} \sin(\pi n f_0 \tau) = 2A\tau f_0\,\mathrm{sinc}(n f_0 \tau) \quad (2A.23b)
$$

$$
b_n = \frac{2}{T_0} \int_{-\tau/2}^{\tau/2} A \sin(2\pi n f_0 t)\,dt = 0 \quad (2A.23c)
$$

Therefore:

$$
g(t) = \sum_{n=-\infty}^{\infty} A\,\mathrm{rect}\!\left(\frac{t - n T_0}{\tau}\right)
= A\tau f_0 + \sum_{n=1}^{\infty} 2A\tau f_0\,\mathrm{sinc}(n f_0 \tau)\cos(2\pi n f_0 t) \quad (2A.24)
$$
For example, consider the case where $T_0 = 5\tau$. We can draw up a table of the
Fourier series coefficients as a function of $n$:

  n      a_n         b_n
  0      0.2A        -
  1      0.3742A     0
  2      0.3027A     0
  3      0.2018A     0
  4      0.0935A     0
  5      0           0
  6     -0.0624A     0
  7     -0.0865A     0
  etc.

[Figure 2A.7: graph of $a_n$ versus frequency, with discrete values at $0, 2f_0, 4f_0, \ldots, 12f_0$ following a sinc-shaped envelope of peak $0.4A$]

Observe that the graph of the Fourier series coefficients is discrete: the values
of $a_n$ (and $b_n$) are associated with frequencies that are only multiples of the
fundamental, $n f_0$. This graphical representation of the Fourier series
coefficients lets us see at a glance what the dominant frequencies are, and
how rapidly the amplitudes reduce in magnitude at higher harmonics.
The Compact Trigonometric Fourier Series

The trigonometric Fourier series in Eq. (2A.21) can be written in a more
compact and meaningful way as follows:

$$
g(t) = \sum_{n=0}^{\infty} A_n \cos(2\pi n f_0 t + \phi_n) \quad (2A.25)
$$

Expanding each term and comparing with Eq. (2A.21) gives the associated
constants:

$$
a_n = A_n \cos\phi_n, \qquad b_n = -A_n \sin\phi_n \quad (2A.26)
$$

and therefore:

$$
A_n = \sqrt{a_n^2 + b_n^2} \quad (2A.27a)
$$

$$
\phi_n = \tan^{-1}\left(\frac{-b_n}{a_n}\right) \quad (2A.27b)
$$

Each term of the compact series is a sinusoid at one of the harmonic
frequencies $0, f_0, 2f_0, \ldots$ and can be represented by the harmonic phasor:

$$
G_n = A_n e^{j\phi_n} \quad (2A.28)
$$

which, using Eq. (2A.26), is related to the trigonometric Fourier series
coefficients by:

$$
G_n = a_n - j b_n \quad (2A.29)
$$
The negative sign in $G_n = a_n - j b_n$ comes from the fact that in the phasor
representation of a sinusoid the real part of $G_n$ is the amplitude of the cos
component, and the imaginary part of $G_n$ is the amplitude of the $-\sin$
component.
Substituting for $a_n$ and $b_n$ from Eqs. (2A.22a) - (2A.22c) results in:

$$
G_0 = a_0 = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,dt \quad (2A.30)
$$

$$
G_n = a_n - j b_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\cos(2\pi n f_0 t)\,dt - j\,\frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\sin(2\pi n f_0 t)\,dt
$$

so that the harmonic phasors can be obtained directly:

$$
G_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt, \quad n = 0 \quad (2A.31a)
$$

$$
G_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt, \quad n \neq 0 \quad (2A.31b)
$$

The expression for the compact Fourier series, as in Eq. (2A.25), can now be
written as a sum of harmonic phasors projected onto the real axis:

$$
g(t) = \sum_{n=0}^{\infty} \mathrm{Re}\left\{ G_n e^{j 2\pi n f_0 t} \right\} \quad (2A.32)
$$
Example

Find the compact Fourier series coefficients for the rectangular pulse train $g(t)$
shown below:

[Figure: a train of rectangular pulses of amplitude $A$ and width $\tau$, starting at $t = 0$ and repeating with period $T_0$]

Again, the period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.31a), we have for the
average value:

$$
G_0 = \frac{1}{T_0} \int_0^{\tau} A\,dt = A\tau f_0 \quad (2A.33)
$$

which can be seen (and checked) from direct inspection of the waveform.

The compact Fourier series coefficients (harmonic phasors) are given by:

$$
G_n = \frac{2}{T_0} \int_0^{\tau} A e^{-j 2\pi n f_0 t}\,dt
= 2 A f_0 \left[ \frac{e^{-j 2\pi n f_0 t}}{-j 2\pi n f_0} \right]_0^{\tau}
= \frac{A}{j\pi n} \left( 1 - e^{-j 2\pi n f_0 \tau} \right) \quad (2A.34)
$$

The next step is not obvious. We separate out the term $e^{-j\pi n f_0 \tau}$:

$$
G_n = \frac{A}{j\pi n}\, e^{-j\pi n f_0 \tau} \left( e^{j\pi n f_0 \tau} - e^{-j\pi n f_0 \tau} \right) \quad (2A.35)
$$

and use Euler's identity:

$$
\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2} \quad (2A.36)
$$

to give:

$$
G_n = \frac{2A}{\pi n} \sin(\pi n f_0 \tau)\, e^{-j\pi n f_0 \tau} \quad (2A.37)
$$

Defining the sinc function:

$$
\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x} \quad (2A.38)
$$

we get:

$$
G_n = 2 A f_0\,\frac{\sin(\pi n f_0 \tau)}{\pi n f_0}\, e^{-j\pi n f_0 \tau}
= 2 A \tau f_0\,\mathrm{sinc}(n f_0 \tau)\, e^{-j\pi n f_0 \tau}
= A_n e^{j\phi_n} \quad (2A.39)
$$

Therefore:

$$
A_n = 2 A \tau f_0\,\mathrm{sinc}(n f_0 \tau), \qquad \phi_n = -\pi n f_0 \tau \quad (2A.40)
$$
The Spectrum

Using the compact trigonometric Fourier series, we can formulate an
easy-to-interpret graphical representation of a periodic waveform's constituent
sinusoids. The graph below shows a periodic waveform, made up of the first 6
terms of the expansion of a square wave as a Fourier series, versus time:

Figure 2A.8

The graph also shows the constituent sinusoids superimposed on the original
waveform. Let's now imagine that we can graph each of the constituent
sinusoids on its own time-axis, and extend these into the z-direction:

Figure 2A.9
Each constituent sinusoid exists at a frequency that is harmonically related to
the fundamental. Therefore, we could set up a graph showing the amplitude of
the constituent sinusoids, with the horizontal scale set up so that each
amplitude value is graphed at the corresponding frequency:

Figure 2A.10

We now have a graph that shows the amplitudes of the constituent sinusoids
versus frequency:

[Figure 2A.11: amplitude versus frequency, with lines at $f_0, 3f_0, 5f_0, 7f_0, 9f_0$ and $11f_0$]
In a periodic signal, the constituent sinusoids are given by $A_n \cos(2\pi n f_0 t + \phi_n)$.
So, amplitude information is not enough: we need to graph phase as well. We
also know that each constituent sinusoid is completely characterised by its
corresponding phasor, $G_n = A_n e^{j\phi_n}$. If we plot $G_n$ versus frequency, we have
the spectrum: a graph of phasor values (magnitude and phase) versus
frequency.

[Figure 2A.12: a time-domain waveform decomposed into its magnitude spectrum and phase spectrum, each plotted versus frequency]
Example

The compact Fourier series representation for a rectangular pulse train was
found in the previous example. Graphs of the magnitude and phase spectra
appear below for the case $T_0 = 5\tau$:

[Magnitude spectrum: $A_n$ versus frequency, with lines at $0, 2f_0, 4f_0, \ldots, 12f_0$ under a sinc-shaped envelope of peak $0.4A$]

[Phase spectrum: $\phi_n = -\pi n f_0 \tau$ versus frequency, a straight line falling through $-\pi$ and past $-2\pi$]

Note how we can have phase even though the amplitude is zero (at $5 f_0$,
$10 f_0$, etc.). We could also wrap the phase spectrum around to 0 instead of
graphing linearly past the $-2\pi$ point; in fact we could choose any range that
spans $2\pi$, since phase is periodic with period $2\pi$.
The Complex Exponential Fourier Series

The complex exponential Fourier series is the most mathematically convenient
and useful representation of a periodic signal. Recall that Euler's formulas
relating the complex exponential to cosines and sines are:

$$
e^{j\theta} = \cos\theta + j\sin\theta \quad (2A.41)
$$

$$
\cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2} \quad (2A.42)
$$

$$
\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{j2} \quad (2A.43)
$$

Substitution of Eqs. (2A.42) and (2A.43) into the trigonometric Fourier series,
Eq. (2A.21), gives:

$$
g(t) = a_0 + \sum_{n=1}^{\infty} \left[ \frac{a_n}{2} e^{j 2\pi n f_0 t} + \frac{a_n}{2} e^{-j 2\pi n f_0 t} - j\frac{b_n}{2} e^{j 2\pi n f_0 t} + j\frac{b_n}{2} e^{-j 2\pi n f_0 t} \right]
$$

$$
= a_0 + \sum_{n=1}^{\infty} \left[ \left( \frac{a_n}{2} - j\frac{b_n}{2} \right) e^{j 2\pi n f_0 t} + \left( \frac{a_n}{2} + j\frac{b_n}{2} \right) e^{-j 2\pi n f_0 t} \right] \quad (2A.44)
$$
This can be rewritten in the form of the complex exponential Fourier series:

$$
g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j 2\pi n f_0 t} \quad (2A.45)
$$

where the relationship between the complex exponential and trigonometric
Fourier series coefficients is:

$$
G_0 = a_0 = A_0 \quad (2A.46a)
$$

$$
G_n = \frac{a_n - j b_n}{2} = \frac{A_n}{2} e^{j\phi_n}, \quad n \geq 1 \quad (2A.46b)
$$

$$
G_{-n} = \frac{a_n + j b_n}{2} = \frac{A_n}{2} e^{-j\phi_n}, \quad n \geq 1 \quad (2A.46c)
$$

From Eqs. (2A.31a) and (2A.46a), we can see that an alternative way of
writing the Fourier series coefficients, in one neat formula instead of three, is:

$$
G_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt \quad (2A.47)
$$

Thus, the trigonometric and complex exponential Fourier series are not two
different series but represent two different ways of writing the same series. The
coefficients of one series can be obtained from those of the other.
The complex exponential Fourier series can also be viewed as being based on
the compact Fourier series, but using the fact that a real sinusoid can be
written as a sum of two counter-rotating phasors: harmonic phasors can also
have negative frequency. The series and its coefficients are:

$$
g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j 2\pi n f_0 t} \quad (2A.48a)
$$

$$
G_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt \quad (2A.48b)
$$

For real $g(t)$, the coefficients possess conjugate symmetry:

$$
G_{-n} = G_n^* \quad (2A.49)
$$

Thus, if:

$$
G_n = |G_n| e^{j\theta_n} \quad (2A.50)
$$

then:

$$
G_{-n} = |G_n| e^{-j\theta_n} \quad (2A.51)
$$

A double-sided spectrum shows these negative frequency phasors.
Example

Find the complex exponential Fourier series for the rectangular pulse train
$g(t)$ shown below:

[Figure 2A.13: a train of rectangular pulses of amplitude $A$ and width $\tau$, centred on $t = 0$ and repeating with period $T_0$]

The period is $T_0$ and $f_0 = 1/T_0$. Using Eq. (2A.48b), we have for the complex
exponential Fourier series coefficients:

$$
G_n = \frac{1}{T_0} \int_{-\tau/2}^{\tau/2} A e^{-j 2\pi n f_0 t}\,dt
= \frac{A}{j 2\pi n} \left( e^{j\pi n f_0 \tau} - e^{-j\pi n f_0 \tau} \right)
= \frac{A}{\pi n} \sin(\pi n f_0 \tau) = A\tau f_0\,\mathrm{sinc}(n f_0 \tau) \quad (2A.52)
$$

In this case $G_n$ turns out to be a real number (the phase of all the constituent
sinusoids is 0° or 180°).
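Eq. (2A.52) can be checked by evaluating Eq. (2A.48b) numerically. A Python/NumPy sketch (grid size and the choice $T_0 = 5\tau$ are arbitrary) also confirms that the coefficients are real and conjugate-symmetric:

```python
import numpy as np

# Complex exponential Fourier series coefficients of a centred
# rectangular pulse train, compared with Gn = A*tau*f0*sinc(n*f0*tau).
A, tau, T0 = 1.0, 1.0, 5.0
f0 = 1 / T0
t = np.linspace(-T0 / 2, T0 / 2, 200_000, endpoint=False)
dt = t[1] - t[0]
g = np.where(np.abs(t) < tau / 2, A, 0.0)

def G(n):
    # Riemann-sum approximation of (1/T0) * integral of g(t) e^{-j2*pi*n*f0*t}
    return np.sum(g * np.exp(-2j * np.pi * n * f0 * t)) * dt / T0

Gn = np.array([G(n) for n in range(-10, 11)])
ref = np.array([A * tau * f0 * np.sinc(n * f0 * tau) for n in range(-10, 11)])
err = np.max(np.abs(Gn - ref))
```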
For the case of $T_0 = 5\tau$, the double-sided magnitude spectrum is then:

[Figure 2A.14: $G_n$ versus frequency for $T_0 = 5\tau$, with lines every $2f_0$ from $-12f_0$ to $12f_0$ under a sinc envelope of peak $0.2A$]

[Figure 2A.15: $G_n$ versus frequency for $T_0 = 2\tau$ (a 50% duty cycle square wave), with $G_0 = A/2$ and lines at the odd harmonics $\pm f_0, \pm 3f_0, \ldots, \pm 11f_0$ only]

Thus, the Fourier series can be represented by spectral lines at all harmonics of
$f_0 = 1/T_0$, where each line varies according to the complex quantity $G_n$. In
particular, for a rectangular pulse train, $G_n$ follows the envelope of a sinc
function, with amplitude $A\tau/T_0$ at DC and zero crossings at integer multiples
of $1/\tau$.
Another case of interest, which is fundamental to the analysis of digital
systems, is when we allow each pulse in a rectangular pulse train to turn into
an impulse, i.e. $\tau \to 0$ and $A \to \infty$ such that $A\tau = 1$. In this case, each pulse in
Figure 2A.13 becomes an impulse of unit strength, and $g(t)$ is simply a
uniform train of unit impulses, as shown below:

[Figure 2A.16: a uniform train of unit impulses at $t = nT_0$, for all integer $n$]

The result of Eq. (2A.52) is still valid if we take the appropriate limit:

$$
G_n = \lim_{\tau \to 0} A\tau f_0\,\mathrm{sinc}(n f_0 \tau) = f_0 \quad (2A.53)
$$

Therefore:

$$
g(t) = \sum_{n=-\infty}^{\infty} \delta(t - n T_0) = \sum_{n=-\infty}^{\infty} f_0\, e^{j 2\pi n f_0 t} \quad (2A.54)
$$
The spectrum has components at the frequencies $n f_0$, with $n$ varying from
$-\infty$ to $\infty$, including 0, all with an equal strength of $f_0$, as shown below:

[Figure 2A.17: the spectrum of the uniform impulse train, with lines of equal strength $f_0$ at every multiple of $f_0$]
How to Use MATLAB to Check Fourier Series Coefficients

MATLAB is a software package that is particularly suited to signal
processing. It has instructions that operate on vectors and matrices. A vector
can be set up which gives the samples of a signal. Provided the sample spacing
meets the Nyquist criterion, the instruction G=fft(g) returns a vector
containing N times the Fourier series coefficients, where G(1) = N G_0 is the DC
term, G(2) = N G_1, G(3) = N G_2, etc., and G(N) = N G_{-1}, G(N-1) = N G_{-2}, etc.,
where N is the size of the vector. g=ifft(G) performs the inverse
transformation.
Example
g 1
Step 3
Find G fftg 0
2 .
Example

Find the Fourier series coefficients of a 50% duty cycle square wave.

Step 1

In this case the spectrum is infinite in extent, so we can never choose $f_s$ high
enough. There is always some error. Suppose we choose 8 points over one cycle.

Step 2

Set up the samples, using the midpoint value 0.5 at the two discontinuities:

g = [0.5 1 1 1 0.5 0 0 0]

Step 3

Find:

G = fft(g) = [4, -j2.4142, 0, -j0.4142, 0, j0.4142, 0, j2.4142]

Dividing by N = 8 gives G_0 = 0.5 and |G_1| = 0.3018, close to the exact value
$|G_1| = 1/\pi \approx 0.3183$.
You should read Appendix A The Fast Fourier Transform, and look at the
example MATLAB code in the FFT - Quick Reference Guide for more
complicated and useful examples of setting up and using the FFT.
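The same check can be made outside MATLAB: NumPy's `fft` uses the same convention (the result is N times the Fourier series coefficients, with the negative harmonics at the top of the vector). A sketch reproducing the square-wave example above:

```python
import numpy as np

# 8 samples of one cycle of a 50% duty cycle square wave, with the two
# edge samples set to 0.5 (the midpoint value at each discontinuity).
g = np.array([0.5, 1, 1, 1, 0.5, 0, 0, 0])
N = len(g)
G = np.fft.fft(g)      # N times the Fourier series coefficients

G0 = G[0] / N          # DC term, expected 0.5
G1 = G[1] / N          # fundamental phasor, close to -j/pi
```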
Symmetry in the Time Domain

A waveform which is symmetric in the time domain will have a spectrum with
certain properties. Identification of time-domain symmetry can lead to
conclusions about the spectrum without doing any calculations: a useful skill
to possess.

Even Symmetry

An even function is one which possesses mirror symmetry about the vertical
axis: $g(-t) = g(t)$. Consider the formula for $b_n$:

$$
b_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t) \sin(2\pi n f_0 t)\,dt \quad (2A.55)
$$

The integrand is even × odd = odd, so the contributions cancel for negative
and positive time, and so we get $b_n = 0$ after performing the integration. The
integrand in the formula for $a_n$ is even × even = even, and will only be zero if
the function $g(t)$ is zero. Thus we have the property:

$$
g(t)\ \text{even} \Rightarrow G_n\ \text{real} \quad (2A.56)
$$

Even symmetry in the time-domain leads to real Fourier series coefficients.
Example

Find the compact Fourier series coefficients for the cos-shaped pulse train $g(t)$
shown below:

[Figure 2A.18: a train of cosine-shaped pulses of peak value 2, defined over $-1 \le t \le 1$ and repeating with period 2]

Over one period the waveform is:

$$
g(t) = \begin{cases} 0 & -1 \le t < -1/2 \\ 2\cos(3\pi t) & -1/2 \le t \le 1/2 \\ 0 & 1/2 < t \le 1 \end{cases} \quad (2A.57)
$$

and $g(t) = g(t+2)$.

The period is $T_0 = 2$ and $f_0 = 1/T_0 = 1/2\ \mathrm{Hz}$. Using Eq. (2A.48b), we have
for the complex exponential Fourier series coefficients:

$$
G_n = \frac{1}{2} \int_{-1}^{1} g(t)\,e^{-j\pi n t}\,dt
= \frac{1}{2} \int_{-0.5}^{0.5} 2\cos(3\pi t)\,e^{-j\pi n t}\,dt
= \frac{1}{2} \int_{-0.5}^{0.5} \left( e^{j 3\pi t} + e^{-j 3\pi t} \right) e^{-j\pi n t}\,dt \quad (2A.58)
$$
Now, combining the exponentials and integrating, we have:

$$
G_n = \frac{1}{2}\left[ \frac{e^{j(3-n)\pi t}}{j(3-n)\pi} \right]_{-0.5}^{0.5}
+ \frac{1}{2}\left[ \frac{e^{-j(3+n)\pi t}}{-j(3+n)\pi} \right]_{-0.5}^{0.5} \quad (2A.59)
$$

$$
= \frac{e^{j(3-n)\pi/2} - e^{-j(3-n)\pi/2}}{j2(3-n)\pi}
+ \frac{e^{j(3+n)\pi/2} - e^{-j(3+n)\pi/2}}{j2(3+n)\pi}
= \frac{1}{2}\,\frac{\sin\left[(3-n)\pi/2\right]}{(3-n)\pi/2}
+ \frac{1}{2}\,\frac{\sin\left[(3+n)\pi/2\right]}{(3+n)\pi/2}
$$

so that:

$$
G_n = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right) \quad (2A.60)
$$

Now since $G_0 = a_0$ and, for this real even waveform, $G_n = a_n/2$ for $n \geq 1$:

$$
G_0 = a_0 = \mathrm{sinc}\!\left(\frac{3}{2}\right) = -\frac{2}{3\pi}, \qquad
G_n = \frac{a_n}{2} = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \geq 1 \quad (2A.61)
$$

and therefore:

$$
a_0 = -\frac{2}{3\pi}, \qquad
a_n = \mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right), \quad n \geq 1 \quad (2A.62)
$$

Thus, the Fourier series expansion of the wave contains only cosine terms:

$$
g(t) = -\frac{2}{3\pi} + \sum_{n=1}^{\infty} \left[ \mathrm{sinc}\!\left(\frac{3-n}{2}\right) + \mathrm{sinc}\!\left(\frac{3+n}{2}\right) \right] \cos(n\pi t) \quad (2A.63)
$$
The Fourier series expansion, for the first 5 terms and 10 terms, is shown
below together with the original wave:

[Figure: partial-sum reconstructions of the cos-shaped pulse train]

An even symmetric waveform has an even and real spectrum:

[Spectrum of $G_n$: real values at multiples of $f_0$ from $-5f_0$ to $5f_0$, even about $n = 0$]
Odd Symmetry

An odd function is one which possesses rotational symmetry about the origin.
Mathematically, an odd function is such that $g(-t) = -g(t)$. An odd function
can be expressed as a sum of sine waves only (sine waves are odd functions),
and therefore all the $G_n$ are imaginary. To see why, we consider the real
component of $G_n = (a_n - j b_n)/2$, which must be zero. That is, we look at the
formula for $a_n$ as given by the trigonometric Fourier series:

$$
a_n = \frac{2}{T_0} \int_{-T_0/2}^{T_0/2} g(t) \cos(2\pi n f_0 t)\,dt \quad (2A.64)
$$

The integrand is odd × even = odd, so the contributions cancel for negative
and positive time, and so we get $a_n = 0$ after performing the integration. The
integrand in the formula for $b_n$ is odd × odd = even, and will only be zero if
the function $g(t)$ is zero. Thus we have the property:

$$
g(t)\ \text{odd} \Rightarrow G_n\ \text{imaginary} \quad (2A.65)
$$

Odd symmetry in the time-domain leads to imaginary Fourier series
coefficients.
Example

Find the complex exponential Fourier series coefficients for the sawtooth
waveform $g(t)$ shown below:

[Figure 2A.19: a sawtooth rising linearly from $-A$ at $t = -T_0/2$ to $A$ at $t = T_0/2$, repeating with period $T_0$]

The period is $T_0$ and $f_0 = 1/T_0$. Over one period, $g(t) = 2At/T_0$. Using
Eq. (2A.48b), we have for the complex exponential Fourier series coefficients:

$$
G_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \frac{2A}{T_0}\, t\, e^{-j 2\pi n f_0 t}\,dt \quad (2A.66)
$$

For $n = 0$ the integrand is odd, so:

$$
G_0 = 0 \quad (2A.67)
$$

For $n \neq 0$ we integrate by parts, with:

$$
u = t, \quad dv = e^{-j 2\pi n f_0 t}\,dt, \quad du = dt, \quad v = \frac{e^{-j 2\pi n f_0 t}}{-j 2\pi n f_0} \quad (2A.68)
$$

We then have:

$$
G_n = \frac{2A}{T_0^2} \left\{ \left[ t\,\frac{e^{-j 2\pi n f_0 t}}{-j 2\pi n f_0} \right]_{-T_0/2}^{T_0/2}
+ \frac{1}{j 2\pi n f_0} \int_{-T_0/2}^{T_0/2} e^{-j 2\pi n f_0 t}\,dt \right\} \quad (2A.69)
$$

The remaining integral is over a whole number of periods of the exponential
and is therefore zero, leaving only the boundary term:

$$
G_n = \frac{2A}{T_0^2}\cdot\frac{T_0}{2}\cdot\frac{e^{-j\pi n} + e^{j\pi n}}{-j 2\pi n f_0}
= \frac{A\cos(\pi n)}{-j\pi n}
= j\,\frac{A}{\pi n}\,(-1)^n, \quad n \geq 1 \quad (2A.70)
$$

Now since $G_0 = a_0$ and $G_n = (a_n - j b_n)/2$ for $n \geq 1$, we can see that the real
part is zero (as we expect for an odd function) and that:

$$
G_n = -j\,\frac{b_n}{2} = j\,\frac{A}{\pi n}\,(-1)^n, \quad n \geq 1 \quad (2A.71)
$$

and therefore:

$$
b_n = -\frac{2A}{\pi n}\,(-1)^n = \frac{2A}{\pi n}\,(-1)^{n+1}, \quad n \geq 1 \quad (2A.72)
$$

Thus, the Fourier series expansion of the sawtooth wave contains only sine
terms:

$$
g(t) = \frac{2A}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin(2\pi n f_0 t) \quad (2A.73)
$$
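The coefficients just derived can be cross-checked numerically against the projection integrals (2A.22). A Python/NumPy sketch (grid size arbitrary) for the sawtooth:

```python
import numpy as np

# Sawtooth g(t) = 2At/T0 on (-T0/2, T0/2): compare numerically
# integrated bn against bn = (2A/(pi n)) * (-1)^(n+1), with an = 0.
A, T0 = 1.0, 1.0
f0 = 1 / T0
t = np.linspace(-T0 / 2, T0 / 2, 400_000, endpoint=False)
dt = t[1] - t[0]
g = 2 * A * t / T0

b = [2 / T0 * np.sum(g * np.sin(2 * np.pi * n * f0 * t)) * dt for n in range(1, 6)]
a = [2 / T0 * np.sum(g * np.cos(2 * np.pi * n * f0 * t)) * dt for n in range(1, 6)]
b_ref = [2 * A / (np.pi * n) * (-1) ** (n + 1) for n in range(1, 6)]
```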
The Fourier series expansion, for the first 5 terms and 10 terms, is shown
below together with the original sawtooth wave:

[Figure: partial-sum reconstructions of the sawtooth]

An odd symmetric waveform has an odd and imaginary spectrum:

[Spectrum of $G_n$: imaginary values at multiples of $f_0$ from $-4f_0$ to $4f_0$, odd about $n = 0$]
Half-Wave Symmetry

A waveform has half-wave symmetry if each half-period is the negative of the
preceding half-period, i.e. $g(t \pm T_0/2) = -g(t)$:

[Figure 2A.20: a half-wave symmetric waveform]

Starting with:

$$
G_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt \quad (2A.74)
$$

we can split the integral into two halves:

$$
G_n = \underbrace{\frac{1}{T_0} \int_{-T_0/2}^{0} g(t)\,e^{-j 2\pi n f_0 t}\,dt}_{I_1}
+ \underbrace{\frac{1}{T_0} \int_{0}^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt}_{I_2} \quad (2A.75)
$$

Now, if $g(t)$ is half-wave symmetric, then:

$$
I_2 = \frac{1}{T_0} \int_0^{T_0/2} g(t)\,e^{-j 2\pi n f_0 t}\,dt
= -\frac{1}{T_0} \int_0^{T_0/2} g(t - T_0/2)\,e^{-j 2\pi n f_0 t}\,dt \quad (2A.76)
$$

Letting $\lambda = t - T_0/2$, we have:

$$
I_2 = -\frac{1}{T_0} \int_{-T_0/2}^{0} g(\lambda)\,e^{-j 2\pi n f_0 (\lambda + T_0/2)}\,d\lambda
= -\frac{e^{-j\pi n}}{T_0} \int_{-T_0/2}^{0} g(\lambda)\,e^{-j 2\pi n f_0 \lambda}\,d\lambda \quad (2A.77)
$$

Now since the value of a definite integral is independent of the variable used in
the integration, and noting that $e^{-j\pi n} = (-1)^n$, we can see that:

$$
I_2 = -(-1)^n I_1 \quad (2A.78)
$$

Therefore:

$$
G_n = I_1 + I_2 = I_1\left[1 - (-1)^n\right] \quad (2A.79)
$$

$$
G_n = \begin{cases} 2 I_1 & n\ \text{odd} \\ 0 & n\ \text{even} \end{cases} \quad (2A.80)
$$

Thus, a half-wave symmetric function will have only odd harmonics; all even
harmonics are zero. Half-wave symmetry in the time-domain leads to all even
harmonics being zero.
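This property can be demonstrated numerically. The sketch below (Python/NumPy; a zero-mean square wave is an arbitrary half-wave symmetric test case) computes the first few $|G_n|$ and shows that the even harmonics vanish:

```python
import numpy as np

# Half-wave symmetric square wave: +A on the first half-period, -A on
# the second. All even harmonics (including DC) should vanish.
A, T0 = 1.0, 1.0
f0 = 1 / T0
t = np.linspace(0, T0, 200_000, endpoint=False)
dt = t[1] - t[0]
g = np.where(t < T0 / 2, A, -A)

# |Gn| for n = 0..6 via the Riemann sum of Eq. (2A.74)
mags = np.abs([np.sum(g * np.exp(-2j * np.pi * n * f0 * t)) * dt / T0
               for n in range(7)])
```

For this waveform the odd harmonics follow $|G_n| = 2A/(\pi n)$, while $|G_0| = |G_2| = |G_4| = 0$.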
Example

A square pulse train (50% duty cycle square wave) is a special case of the
rectangular pulse train, for which $\tau = T_0/2$:

[Figure 2A.21: a 50% duty cycle square wave of amplitude $A$ and period $T_0$]

For this case, Eq. (2A.52) becomes:

$$
G_n = A\tau f_0\,\mathrm{sinc}(n f_0 \tau) = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right) \quad (2A.81)
$$

Recalling that $\mathrm{sinc}(x) = 0$ for all non-zero integer values of $x$, we can see
that $G_n$ will be zero for all even harmonics (except $G_0$). We expect this, since
apart from a DC component of $A/2$, $g(t)$ is half-wave symmetric. The spectrum
is:

[Figure 2A.22: the double-sided spectrum, with $G_0 = A/2$ and lines at the odd harmonics $\pm f_0, \pm 3f_0, \ldots, \pm 11f_0$ only]

In this case the spacing of the discrete spectral frequencies, $n f_0$, is such that
each even harmonic falls on a zero of the sinc function envelope.
Power

One of the advantages of representing a signal in terms of a set of orthogonal
components is that it is very easy to calculate its average power. Because the
components are orthogonal, the total average power is just the sum of the
average powers of the orthogonal components.

For example, if the double-sided spectrum is being used, then since the
magnitude of the sinusoid represented by the phasor $G_n$ is $2|G_n|$, and the
average power of a sinusoid of amplitude $A$ is $P = A^2/2$, the total power in the
signal $g(t)$ is:

$$
P = \sum_{n=-\infty}^{\infty} |G_n|^2 = \sum_{n=-\infty}^{\infty} G_n G_n^* \quad (2A.82)
$$

Note that the DC component only appears once in the sum ($n = 0$). Its power
contribution is $G_0^2$, which is correct.

Example

How much of the power of a 50% duty cycle rectangular pulse train is
contained in the first three harmonics?

[Figure 2A.23: the square wave of amplitude $A$, with pulses of width $T_0/2$ centred on $t = 0$]

Working in the time domain, the total power is:

$$
P = \frac{1}{T_0} \int_{-T_0/4}^{T_0/4} A^2\,dt = \frac{A^2}{2} \quad (2A.83)
$$
To find the power in each harmonic, we work in the frequency domain. We
note that the Fourier series coefficients are given by Eq. (2A.52):

$$
G_n = \frac{A}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right) \quad (2A.84)
$$

[Figure 2A.24: the double-sided spectrum, with $G_0 = A/2$ and lines at the odd harmonics only]

The DC power is:

$$
P_0 = G_0^2 = \frac{A^2}{4} \quad (2A.85)
$$

which is 50% of the total power. The DC plus fundamental power is:

$$
P_0 + P_1 = \sum_{n=-1}^{1} |G_n|^2 = \frac{A^2}{4} + \frac{2A^2}{\pi^2} = 0.4526 A^2 \quad (2A.86)
$$

The power of the components up to and including the 3rd harmonic is:

$$
P_0 + P_1 + P_2 + P_3 = \sum_{n=-3}^{3} |G_n|^2
= \frac{A^2}{4} + \frac{2A^2}{\pi^2} + 0 + \frac{2A^2}{9\pi^2} = 0.4752 A^2 \quad (2A.87)
$$

which is 95% of the total power. Thus, the spectrum makes obvious a
characteristic of a periodic signal that is not obvious in the time domain. In this
case, it was surprising to learn that 95% of the power in a square wave is
contained in the frequency components up to the 3rd harmonic. This is
important: we may wish to lowpass filter this signal for some reason, but
retain most of its power. We are now in a position to give the cutoff frequency
of a lowpass filter to retain any amount of power that we desire.
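The cumulative power fractions above are easy to reproduce. A Python/NumPy sketch using $G_n = (A/2)\,\mathrm{sinc}(n/2)$ and Eq. (2A.82):

```python
import numpy as np

# Cumulative power of the 50% duty cycle square wave (levels A and 0),
# using Gn = (A/2) sinc(n/2) and P = sum of |Gn|^2 over all n.
A = 1.0
G = lambda n: (A / 2) * np.sinc(n / 2)

P_total = A ** 2 / 2                     # from the time domain, Eq. (2A.83)
P_upto = lambda N: G(0) ** 2 + 2 * sum(G(n) ** 2 for n in range(1, N + 1))

frac_dc = G(0) ** 2 / P_total            # 0.5
frac_1 = P_upto(1) / P_total             # ~0.9053 (DC + fundamental)
frac_3 = P_upto(3) / P_total             # ~0.9503 (up to 3rd harmonic)
```

Raising the harmonic limit N in `P_upto` shows how slowly the remaining 5% of the power accumulates.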
Filters

Filters are devices that shape the input signal's spectrum to produce a new
output spectrum. They shape the input spectrum by changing the amplitude and
phase of each component sinusoid. This frequency-domain view of filters has
been with us implicitly: we specify the filter in terms of a transfer function,
and the filter acts to change the magnitude and phase of each component
phasor of a periodic input signal.

[Figure 2A.25: top, the frequency response is measured one frequency at a
time, with excitation at DC giving $Y_0 = H(0) X_0$, excitation with a sinusoid at
the fundamental giving $Y_1 = H(f_0) X_1$, and excitation at the n-th harmonic
giving $Y_n = H(n f_0) X_n$; bottom, an input spectrum $X_n$ is mapped to the
output spectrum $Y_n = H(f) X_n$]
Recall from Lecture 1A that it is the sinusoid that possesses the special
property with a linear system: a sinusoid in gives a sinusoid out. We now
have a view that a sum of sinusoids in gives a sum of sinusoids out, or more
simply: a spectrum in gives a spectrum out. The input spectrum is changed
by the frequency response of the system to give the output spectrum.
This view of filters operating on individual components of an input signal has
been implicit in the characterisation of systems via the frequency response.
Experimentally, we determine the frequency response of a system by
performing the operations in the top half of Figure 2A.25. That is, we apply
different sinusoids (including DC which can be thought of as a sinusoid of zero
frequency) to a system and measure the resulting amplitude change and phase
shift of the output sinusoid. We then build up a picture of H f by plotting the
experimentally derived points on a graph (if log scales are chosen then we have
a Bode plot). After obtaining the frequency response, we should be able to tell
what happens when we apply any periodic signal, as shown in the bottom half
of Figure 2A.25. The next example illustrates this process.
Example

Let's see what happens to a square wave when it is passed through a 3rd-order
Butterworth filter. A filter is defined in terms of its magnitude and phase
response; for this filter we have:

$$
\left| H(j\omega) \right| = \frac{1}{\sqrt{1 + (\omega/\omega_0)^6}} \quad (2A.88a)
$$

$$
\angle H(j\omega) = -\tan^{-1}\left[ \frac{2(\omega/\omega_0) - (\omega/\omega_0)^3}{1 - 2(\omega/\omega_0)^2} \right] \quad (2A.88b)
$$

Here, $\omega_0$ is the cutoff frequency of the filter. It does not represent the
angular frequency of the fundamental component of the input signal: filters
know nothing about the signals to be applied to them. Since the filter is linear,
superposition applies. For each component phasor of the input signal, we
multiply by $\left| H(j\omega) \right| e^{j\angle H(j\omega)}$. We then reconstruct the signal by adding up the
filtered component phasors.
This is an operation best performed and thought about graphically. For the case
of the filter cutoff frequency set at twice the input signal's fundamental
frequency, we have for the output magnitude spectrum:

[Figure 2A.26: the input signal magnitude spectrum (lines at DC and the odd
harmonics, peak A/2), the 3rd-order Butterworth filter magnitude response
$|H(\omega)|$, and the output signal magnitude spectrum formed by their product,
with harmonics above the cutoff heavily attenuated]

The filter output magnitude spectrum is obtained graphically using the input
signal's magnitude spectrum and the filter's magnitude response.
If we now take the output spectrum and reconstruct the time-domain waveform
it represents, we get:

[Figure 2A.27: the filtered square wave, approximately a DC offset plus a sine wave at the fundamental]

This looks like a shifted sinusoid (DC + sine wave) with a touch of 3rd
harmonic distortion. With practice, you will be able to recognise the
components that make up a waveform. One feature to look for: sharp
transitions in the time-domain are caused by high frequencies.

[Figure 2A.28: the original square wave, whose sharp transitions require significant high-frequency content]

From this example it should be apparent that high frequencies are needed to
make sharp transitions in the time-domain.
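The graphical multiplication of spectrum by frequency response can be sketched in a few lines. Below, the square-wave phasors $G_n = (A/2)\,\mathrm{sinc}(n/2)$ are scaled by the 3rd-order Butterworth magnitude response of Eq. (2A.88a), with the cutoff at twice the fundamental as in the figure (Python/NumPy; magnitude only, phase omitted for brevity):

```python
import numpy as np

# Pass the square-wave phasors Gn = (A/2) sinc(n/2) through the
# 3rd-order Butterworth magnitude response with cutoff at 2*f0.
A, f0 = 1.0, 1.0
fc = 2 * f0
H = lambda f: 1 / np.sqrt(1 + (f / fc) ** 6)

n = np.arange(12)
G_in = (A / 2) * np.sinc(n / 2)
G_out = H(n * f0) * G_in        # output spectrum = H x input spectrum
```

The DC term passes unchanged, the fundamental is barely attenuated, and the 3rd harmonic is cut to less than a third of its input value, which is why the output looks like DC plus a sine wave with a touch of 3rd harmonic.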
Relationships Between the Three Fourier Series Representations

The table below shows the relationships between the three different
representations of the Fourier series (all integrals are over one period,
$-T_0/2$ to $T_0/2$):

Trigonometric:
  Fourier series:      $g(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(2\pi n f_0 t) + b_n \sin(2\pi n f_0 t) \right]$
  Coefficients:        $a_0 = \frac{1}{T_0}\int g(t)\,dt$
                       $a_n = \frac{2}{T_0}\int g(t)\cos(2\pi n f_0 t)\,dt$
                       $b_n = \frac{2}{T_0}\int g(t)\sin(2\pi n f_0 t)\,dt$

Compact Trigonometric:
  Fourier series:      $g(t) = \sum_{n=0}^{\infty} A_n \cos(2\pi n f_0 t + \phi_n) = \sum_{n=0}^{\infty} \mathrm{Re}\left\{ G_n e^{j 2\pi n f_0 t} \right\}$
  Coefficients:        $G_n = \frac{1}{T_0}\int g(t)\,e^{-j 2\pi n f_0 t}\,dt, \quad n = 0$
                       $G_n = \frac{2}{T_0}\int g(t)\,e^{-j 2\pi n f_0 t}\,dt, \quad n \neq 0$

Complex Exponential:
  Fourier series:      $g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j 2\pi n f_0 t}$
  Coefficients:        $G_n = \frac{1}{T_0}\int g(t)\,e^{-j 2\pi n f_0 t}\,dt$

Spectrum of a single sinusoid $A\cos(2\pi f_0 t + \theta)$:
  Trigonometric:       $a_1 = A\cos\theta$, $b_1 = -A\sin\theta$
  Compact:             a single line of magnitude $A$ and phase $\theta$ at $f_0$
  Complex exponential: lines of magnitude $A/2$ at $\pm f_0$, with phases $\pm\theta$
Summary

The coefficients of the basis functions in the Fourier series are called
Fourier series coefficients. For the complex exponential Fourier series,
they are complex numbers, and are just the phasor representation of the
sinusoid at that particular harmonic frequency.

References

Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York, 1994.

Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.
Quiz
Encircle the correct answer, cross out the wrong answers. [one or none correct]
1.
(b) G1 3 2 2
(c) a 2 0 , b2 4
2.
The Fourier series of
g(t)
t
(a) DC term
3.
The double-sided amplitude spectrum of a real signal always possesses:
(a) even symmetry
(c) no symmetry
4.
[Amplitude spectrum $|G_n|$ (V): lines of 1.5 V and 1 V at frequencies between -3 and 3 Hz]

The power of the signal (across 1 Ω) is:

(a) 14.5 W    (b) 10 W    (c) 15.5 W
5.
The phase of the 3rd harmonic component of the periodic signal
$g(t) = \sum_n A\,\mathrm{rect}\!\left(\dfrac{t - 1 - 4n}{0.2}\right)$ is:

(a) 90°    (b) -90°    (c) 0°

Answers: 1. c  2. c  3. a  4. c  5. a
Exercises
1.
A signal $g(t) = 2\cos(100 t) + \sin(100 t)$.
(a)
(b)
(c)
2.
What is $g(t)$ for the sinusoid whose spectrum is shown below?

[Spectrum: magnitude $|G| = 2$ at $\omega = \pm 100\ \mathrm{rad/s}$; phase $45°$ at $\omega = 100\ \mathrm{rad/s}$ and $-45°$ at $\omega = -100\ \mathrm{rad/s}$]
3.
What is $g(t)$ for the signal whose spectrum is sketched below?

[Spectrum: $\mathrm{Re}\{G_n\}$ and $\mathrm{Im}\{G_n\}$ sketched for $-100 \le f \le 100\ \mathrm{Hz}$]
4.
Sketch photographs of the counter rotating phasors associated with the
spectrum below at t 0 , t 1 and t 2 .
G
|G|
1
30
1
-1
(rads )
-1
-1
0
-30
(rads-1 )
5.
For each of the periodic signals shown, find the complex exponential Fourier
series coefficients. Plot the magnitude and phase spectra in MATLAB,
for $-25 \le n \le 25$.
[Waveform sketches (a)-(e): various periodic signals, including pulse trains, a repeated decaying exponential $e^{-t/10}$, and segments of $g(t) = \cos(8\pi t)$]
6.
A voltage waveform $v_i(t)$, whose spectrum is shown, is applied to the circuit below:

[Sketch: spectrum of $v_i$ for f = 0 to 80 Hz; a circuit with source $v_i$, a 16 mH inductor, a load, and output $v_o$]
7.
A periodic signal $g(t)$ is transmitted through a system with transfer function $H(\omega)$:

[Sketch: the waveform $g(t)$, with period $T_0$, and the transfer function $H(\omega)$, which cuts off at $\omega = \pm 12\pi/T_0$]
8.
Estimate the bandwidth B of the periodic signal $g(t)$ shown below if the power
of all components of $g(t)$ within the band B is to be at least 99.9 percent of the
total power of $g(t)$.
[Sketch: a triangular waveform $g(t)$ of amplitude $\pm A$ and period $T_0$]
Joseph Fourier (1768-1830) (Jo-sef Foor-yay)
Fourier is famous for his study of the flow of heat in metallic plates and rods.
The theory that he developed now has applications in industry and in the study
of the temperature of the Earth's interior. He is also famous for the discovery
that many functions could be expressed as infinite sums of sine and cosine
terms, now called a trigonometric series, or Fourier series.
Fourier first showed talent in literature, but by the age of thirteen, mathematics
became his real interest. By fourteen, he had completed a study of six volumes
of a course on mathematics. Fourier studied for the priesthood but did not end
up taking his vows. Instead he became a teacher of mathematics. In 1793 he
became involved in politics and joined the local Revolutionary Committee. As
he wrote:-
institutions and administrations were set up. In particular he helped establish
educational facilities in Egypt and carried out archaeological explorations.
The Institut d'Égypte was responsible for the completely serendipitous
discovery of the Rosetta Stone in 1799. The three inscriptions on this stone in
two languages and three scripts (hieroglyphic, demotic and Greek) enabled
Thomas Young and Jean-François Champollion, a protégé of Fourier, to
invent a method of translating hieroglyphic writings of ancient Egypt in 1822.
While in Cairo, Fourier helped found the Institut d'Égypte and was put in
charge of collating the scientific and literary discoveries made during the time
in Egypt. Napoleon abandoned his army and returned to Paris in 1799, and soon
held absolute power in France. Fourier returned to France in 1801 with the
remains of the expeditionary force and resumed his post as Professor of
Analysis at the École Polytechnique.
Napoleon appointed Fourier to be Prefect at Grenoble, where his duties were
many and varied; they included draining swamps and building highways. It
was during his time in Grenoble that Fourier did his important mathematical
work on the theory of heat. His work on the topic began around 1804 and by
1807 he had completed his important memoir On the Propagation of Heat in
Solid Bodies. Other people before Fourier had used expansions of this form,
but Fourier's work extended the idea in two important new ways. One was the
Fourier integral (the formula for the Fourier series coefficients) and the
other marked the birth of Sturm-Liouville theory (Sturm and Liouville were
nineteenth century mathematicians who found solutions to many classes of
partial differential equations arising in physics that were analogous to
Fourier series).
Napoleon was defeated in 1815 and Fourier returned to Paris. Fourier was
elected to the Académie des Sciences in 1817 and became Secretary in 1822.
Shortly after, the Academy published his prize winning essay Théorie
analytique de la chaleur.
References

Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.
Lecture 2B The Fourier Transform
The Fourier transform. Continuous spectra. Existence of the Fourier
transform. Finding Fourier transforms. Symmetry between the time-domain
and frequency-domain. Time shifting. Frequency shifting. Fourier transform of
sinusoids. Relationship between the Fourier series and Fourier transform.
Fourier transform of a uniform train of impulses. Standard Fourier transforms.
Fourier transform properties.
Developing the Fourier transform: in Lecture 2A we saw that a periodic
function can be represented as a weighted sum of sinusoidal (or complex
exponential) functions. We would like to extend this result to functions that
are not periodic. Such an extension is possible by what is known as the
Fourier transform representation of a function.
To derive the Fourier transform, we start with an aperiodic signal g t :
g(t )
t
Figure 2B.1
Now we construct a new periodic signal g p t consisting of the signal g t
repeating itself every T0 seconds:
Make an artificial
periodic waveform
from the original
aperiodic waveform
gp(t)
0
T0
t
T0
Figure 2B.2
The period T0 is made long enough so that there is no overlap between the
repeating pulses. This new signal g p t is a periodic signal and so it can be
represented by an exponential Fourier series.
In the limit, if we let $T_0$ become infinite, the pulses in the periodic signal
repeat after an infinite interval, and:

$$
\lim_{T_0 \to \infty} g_p(t) = g(t) \quad (2B.1)
$$
The complex exponential Fourier series of $g_p(t)$ is:

$$
g_p(t) = \sum_{n=-\infty}^{\infty} G_n e^{j 2\pi n f_0 t} \quad (2B.2a)
$$

$$
G_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} g_p(t)\,e^{-j 2\pi n f_0 t}\,dt \quad (2B.2b)
$$
As $T_0$ increases, the fundamental frequency $f_0$ decreases, so the spectral lines
get closer together. As seen from Eq. (2B.2b), the amplitudes of the individual
components become smaller, too. The shape of the frequency spectrum,
however, is unaltered (all the $G_n$'s are scaled by the same amount, $1/T_0$). In
the limit as $T_0 \to \infty$, the magnitude of each component becomes
infinitesimally small, until they eventually vanish!
Clearly, it no longer makes sense to define the concept of a spectrum as the
amplitudes and phases of certain harmonic frequencies. Instead, a new concept
is introduced called spectral density.
To illustrate, consider a rectangular pulse train, for which we know:

$$
G_n = A\tau f_0\,\mathrm{sinc}(n f_0 \tau) \quad (2B.3)
$$

[Figure 2B.3: the spectrum for $T_0 = 5\tau$, with lines under a sinc envelope of peak $0.2A$, drawn from $-12 f_0$ to $12 f_0$]
For a period five times longer, $T_0 = 25\tau$, we have:

[Figure 2B.4: the spectrum for $T_0 = 25\tau$, with lines five times as dense under the same sinc-shaped envelope, now of peak $0.04A$]

As the period increases, the spectral lines get closer and closer, but smaller and
smaller. The envelope retains the same shape (it was never a function of $T_0$
anyway), but the amplitude gets smaller and smaller with increasing $T_0$. The
spectral lines get closer and closer with increasing $T_0$. In the limit, it is
impossible to draw the magnitude spectrum as a graph of $G_n$, because the
amplitudes of the harmonic phasors have reduced to zero.
It is possible, however, to graph a new quantity:

$$
G_n T_0 = \int_{-T_0/2}^{T_0/2} g_p(t)\,e^{-j 2\pi n f_0 t}\,dt \quad (2B.4)
$$

$G_n T_0$ will remain finite as $T_0 \to \infty$, in the same way that the area remained
constant as $\tau \to 0$ in the family of rect functions we used to explain the
impulse function.
As $T_0 \to \infty$, the frequency of any harmonic, $n f_0$, must now
correspond to the general frequency variable $f$ which describes the continuous
spectrum. In other words, $n$ must tend to infinity as $f_0$ approaches zero,
such that the product is finite:

$$n f_0 \to f \quad \text{as} \quad T_0 \to \infty \tag{2B.5}$$
With the limiting process as defined in Eqs. (2B.1) and (2B.5), Eq. (2B.4)
becomes:

$$G_n T_0 = \int_{-\infty}^{\infty} g(t) \, e^{-j2\pi f t} \, dt \tag{2B.6}$$

The right-hand side of this expression is a function of $f$ (and not of $t$),
and we represent it by:

$$G(f) = \int_{-\infty}^{\infty} g(t) \, e^{-j2\pi f t} \, dt \tag{2B.7}$$

Therefore, $G(f) = \lim_{T_0 \to \infty} G_n T_0$ has the dimensions of
amplitude per unit frequency, and:

$$f_0 \to df \quad \text{as} \quad T_0 \to \infty \tag{2B.8}$$
That is, the fundamental frequency becomes infinitesimally small as the period
is made larger and larger. This agrees with our reasoning in obtaining
Eq. (2B.5), since there we had an infinite number, $n$, of infinitesimally
small discrete frequencies, $f_0 = df$, to give the finite continuous frequency
$f$. In order to apply the limiting process to Eq. (2B.2a), we multiply the
summation by $T_0 f_0 = 1$:

$$g_p(t) = \sum_{n=-\infty}^{\infty} G_n T_0 \, e^{j2\pi n f_0 t} f_0 \tag{2B.9}$$

The Fourier transform defined
and use Eq. (2B.8) and the new quantity $G(f) = \lim_{T_0 \to \infty} G_n T_0$.
In the limit the summation becomes an integral, $n f_0 = n \, df \to f$,
$g_p(t) \to g(t)$, and:

$$g(t) = \int_{-\infty}^{\infty} G(f) \, e^{j2\pi f t} \, df \tag{2B.10}$$

Eqs. (2B.7) and (2B.10) are collectively known as a Fourier transform pair.
The function $G(f)$ is the Fourier transform of $g(t)$, and $g(t)$ is the
inverse Fourier transform of $G(f)$.

To recapitulate, we have shown that:

The Fourier transform and inverse transform side-by-side

$$g(t) = \int_{-\infty}^{\infty} G(f) \, e^{j2\pi f t} \, df \tag{2B.11a}$$

$$G(f) = \int_{-\infty}^{\infty} g(t) \, e^{-j2\pi f t} \, dt \tag{2B.11b}$$

This relationship is usually abbreviated to:

$$g(t) \Leftrightarrow G(f) \tag{2B.12}$$
Continuous Spectra

Making sense of a continuous spectrum

The concept of a continuous spectrum is sometimes bewildering, because we
generally picture the spectrum as existing at discrete frequencies and with
finite amplitudes. An analogy will help: consider a beam loaded first with
discrete weights, and then with a continuously distributed load:

[Figure 2B.5 - (a) a beam loaded with discrete weights $G_1, G_2, G_3, \ldots, G_n$ at points $x_1, x_2, x_3, \ldots, x_n$; (b) a beam carrying a continuously distributed load $G(x)$ from $x_1$ to $x_n$]
The beam is loaded at $n$ discrete points, and the total weight $W$ on the beam
is given by:

$$W = \sum_{i=1}^{n} G_i \tag{2B.13}$$

We can have either individual finite weights

In the continuously loaded case, the total weight on the beam is instead:

$$W = \int_{x_1}^{x_n} G(x) \, dx \tag{2B.14}$$

or continuous infinitesimal weights
In the discrete loading case, the weight existed only at discrete points. At
other points there was no load. On the other hand, in the continuously
distributed case, the loading exists at every point, but at any one point the
load is zero. The load in a small distance $dx$, however, is given by
$G(x) \, dx$. Therefore $G(x)$ represents the relative loading at a point $x$.

An exactly analogous situation exists in the case of a signal and its frequency
spectrum. A periodic signal can be represented by a sum of discrete
exponentials with finite amplitudes (harmonic phasors):

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \tag{2B.15}$$

An aperiodic signal, by contrast, is represented by a continuous sum (an
integral) of exponentials with infinitesimal amplitudes, in which $G(f)$ plays
the role of a spectral density:

$$g(t) = \int_{-\infty}^{\infty} G(f) \, e^{j2\pi f t} \, df \tag{2B.16}$$

An electrical analogy could also be useful: just replace the discretely loaded
beam with an array of filamentary conductors, and the continuously loaded
beam with a current sheet. The analysis is the same.
Existence of the Fourier Transform

Dirichlet (1805-1859) investigated the sufficient conditions that could be
imposed on a function $g(t)$ for its Fourier transform to exist. These
so-called Dirichlet conditions include the requirement that $g(t)$ be
absolutely integrable:

$$\int_{-\infty}^{\infty} |g(t)| \, dt < \infty \tag{2B.17b}$$

The Dirichlet conditions are sufficient but not necessary conditions for the FT to exist

Note that these are sufficient conditions and not necessary conditions. Use of
the Fourier transform for the analysis of many useful signals would be
impossible if these were necessary conditions.

Any signal with finite energy:

$$\int_{-\infty}^{\infty} |g(t)|^2 \, dt < \infty \tag{2B.18}$$

has a Fourier transform.
Finding Fourier Transforms

Example

Find the Fourier transform of the rectangular pulse $g(t) = A \, \mathrm{rect}(t/T)$:

[Figure 2B.6 - a rectangular pulse of height $A$ extending from $-T/2$ to $T/2$]

From the definition:

$$G(f) = \int_{-\infty}^{\infty} A \, \mathrm{rect}\!\left(\frac{t}{T}\right) e^{-j2\pi f t} \, dt
       = A \int_{-T/2}^{T/2} e^{-j2\pi f t} \, dt
       = A \, \frac{e^{j\pi f T} - e^{-j\pi f T}}{j2\pi f}
       = A \, \frac{\sin(\pi f T)}{\pi f}
       = AT \, \mathrm{sinc}(fT) \tag{2B.19}$$

Therefore, we have the Fourier transform pair:

$$A \, \mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow AT \, \mathrm{sinc}(fT) \tag{2B.20}$$
We can also state this graphically:

[Figure 2B.7 - the pulse $g(t)$ of height $A$ on $(-T/2, T/2)$, and its transform $G(f)$: a sinc of peak $AT$ with zero crossings at multiples of $1/T$, shown from $-5/T$ to $5/T$]
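As a quick sanity check on Eq. (2B.19), the defining integral can be evaluated numerically and compared against the closed form $AT\,\mathrm{sinc}(fT)$. This sketch is not part of the original notes; it assumes Python with NumPy and uses a simple midpoint rule over the pulse's support:

```python
import numpy as np

def ft_rect_numeric(f, A, T, N=20000):
    """Midpoint-rule approximation of
    G(f) = integral of A*rect(t/T)*exp(-j*2*pi*f*t) dt,
    taken over the pulse support (-T/2, T/2)."""
    dt = T / N
    t = -T / 2 + (np.arange(N) + 0.5) * dt
    return np.sum(A * np.exp(-2j * np.pi * f * t)) * dt

A, T = 3.0, 0.5
for f in [0.0, 0.7, 1.3]:
    closed_form = A * T * np.sinc(f * T)   # np.sinc(x) = sin(pi*x)/(pi*x)
    assert abs(ft_rect_numeric(f, A, T) - closed_form) < 1e-6
```

Note that NumPy's `sinc` is the normalised sinc, $\sin(\pi x)/(\pi x)$, which matches the convention used in these notes.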
Time and frequency characteristics reflect an inverse relationship

There are a few interesting observations we can make in general by considering
this result. One is that a time-limited function has frequency components
approaching infinity. Another is that compression in time (by making $T$
smaller) will result in an expansion of the frequency spectrum (a wider sinc
function in this case). Another is that the Fourier transform is a linear
operation: multiplication by $A$ in the time domain results in the spectrum
being multiplied by $A$.

Letting $A = 1$ and $T = 1$ in Eq. (2B.20) results in what we shall call a
standard transform:

$$\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f) \tag{2B.21}$$

We can generalise the linearity observation to:

$$a g(t) \Leftrightarrow a G(f) \tag{2B.22}$$

Linearity is obeyed by transform pairs
We can also generalise the time scaling property to:

Time scaling property of transform pairs

$$g\!\left(\frac{t}{T}\right) \Leftrightarrow T \, G(fT) \tag{2B.23}$$

With these two properties, the transform pair of Eq. (2B.20) can be built up
from the standard transform:

$$\mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow T \, \mathrm{sinc}(fT) \quad \text{(scaling property)}$$
$$A \, \mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow AT \, \mathrm{sinc}(fT) \quad \text{(linearity)} \tag{2B.24}$$

We now only have to derive enough standard transforms and discover a few
Fourier transform properties to be able to handle almost any signal and system
of interest.
Symmetry between the Time-Domain and Frequency-Domain

From the definition of the inverse Fourier transform:

$$g(t) = \int_{-\infty}^{\infty} G(f) \, e^{j2\pi f t} \, df \tag{2B.25}$$

Replacing $f$ with a dummy variable $x$, and letting $t \to -f$, gives:

$$g(-f) = \int_{-\infty}^{\infty} G(x) \, e^{-j2\pi x f} \, dx \tag{2B.26}$$

Since $x$ is just a dummy variable of integration, we can rename it $t$:

$$g(-f) = \int_{-\infty}^{\infty} G(t) \, e^{-j2\pi t f} \, dt \tag{2B.27}$$

Notice that the right-hand side is precisely the definition of the Fourier
transform of the function $G(t)$. Thus, there is an almost symmetrical
relationship between the transform and its inverse. This is summed up in the
duality property:

Duality property of transform pairs

$$G(t) \Leftrightarrow g(-f) \tag{2B.28}$$
Example

Consider an ideal lowpass spectrum, $G(f) = \mathrm{rect}(f/2B)$:

[Figure 2B.8 - a rectangular spectrum of unit height extending from $-B$ to $B$]

Applying the duality property to the scaled standard transform
$\mathrm{rect}(t/T) \Leftrightarrow T \, \mathrm{sinc}(fT)$, with $T = 2B$, we
arrive at:

$$2B \, \mathrm{sinc}(2Bt) \Leftrightarrow \mathrm{rect}\!\left(\frac{f}{2B}\right) \tag{2B.29}$$

Graphically:

Fourier transform of a sinc function

[Figure 2B.9 - $G(t) = 2B \, \mathrm{sinc}(2Bt)$, a sinc of peak $2B$ with zero crossings at multiples of $1/2B$, and $g(-f) = \mathrm{rect}(f/2B)$, a unit-height rectangle from $-B$ to $B$]
Example

The Fourier transform of an impulse function follows directly from the sifting
property:

$$G(f) = \int_{-\infty}^{\infty} \delta(t) \, e^{-j2\pi f t} \, dt = 1 \tag{2B.30}$$

Therefore:

$$\delta(t) \Leftrightarrow 1 \tag{2B.31}$$

It is interesting to note that we could have obtained the same result using
Eq. (2B.20). Let the height be such that the area $AT$ is always equal to 1;
then we have from Eq. (2B.20):

$$\frac{1}{T} \, \mathrm{rect}\!\left(\frac{t}{T}\right) \Leftrightarrow \mathrm{sinc}(fT) \tag{2B.32}$$

Now let $T \to 0$, so that the rectangle function turns into the impulse
function and the sinc function flattens out to the constant 1:

Fourier transform of an impulse function

[Figure 2B.10 - an impulse $g(t) = \delta(t)$ and its flat spectrum $G(f) = 1$]

This result says that an impulse function is composed of all frequencies from
DC to infinity, and all contribute equally.
This example highlights one very important feature of finding Fourier
transforms - there is usually more than one way to find them, and it is up to
us (with experience and ability) to find the easiest way. However, once the
Fourier transform has been obtained for a function, it is unique - this serves
as a check for different methods.

Example

We wish to find the Fourier transform of the constant 1 (if it exists), which
is a DC signal. Choosing the path of direct evaluation of the integral, we get:

$$G(f) = \int_{-\infty}^{\infty} 1 \cdot e^{-j2\pi f t} \, dt \tag{2B.33}$$

This integral does not converge in the ordinary sense, so direct evaluation
appears impossible. The result, obtained by other means, is:

$$1 \Leftrightarrow \delta(f) \tag{2B.34}$$
Yet another way to arrive at this result is through recognising that a certain
amount of symmetry exists in the equations for the Fourier transform and
inverse Fourier transform.

Applying the duality property to Eq. (2B.31) gives:

$$1 \Leftrightarrow \delta(-f) \tag{2B.35}$$

and since the impulse function is even, this is:

$$1 \Leftrightarrow \delta(f) \tag{2B.36}$$

which is the same as Eq. (2B.34). Again, two different methods converged on
the same result, and one method (direct integration) seemed impossible! It is
therefore advantageous to become familiar with the properties of the Fourier
transform.
Time Shifting

A time shift to the right of $t_0$ seconds (i.e. a time delay) can be
represented by $g(t - t_0)$. A time shift to the left of $t_0$ seconds can be
represented by $g(t + t_0)$.

The Fourier transform of a function shifted to the right is:

$$\mathcal{F}\{g(t - t_0)\} = \int_{-\infty}^{\infty} g(t - t_0) \, e^{-j2\pi f t} \, dt \tag{2B.37}$$

With the substitution $x = t - t_0$ this becomes:

$$\mathcal{F}\{g(t - t_0)\} = \int_{-\infty}^{\infty} g(x) \, e^{-j2\pi f (x + t_0)} \, dx$$

and therefore:

$$g(t - t_0) \Leftrightarrow G(f) \, e^{-j2\pi f t_0} \tag{2B.38}$$
Time shifting a function just changes the phase

[Figure 2B.11 - a rectangular pulse of height $A$ delayed to occupy $(0, T)$; its magnitude spectrum $AT\,|\mathrm{sinc}(fT)|$ is unchanged by the shift, while the phase spectrum acquires a linear component of slope $-\pi T$, corresponding to the delay of $T/2$]
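The time-shift property can be spot-checked numerically. The sketch below is an illustration assuming Python with NumPy; the triangular test pulse is chosen for convenience (it is continuous, so simple numerical integration is accurate) rather than taken from the notes. It confirms Eq. (2B.38): the shift multiplies the spectrum by $e^{-j2\pi f t_0}$, leaving the magnitude unchanged.

```python
import numpy as np

def ft_numeric(g, f, t_lo, t_hi, N=40000):
    """Midpoint-rule approximation of the Fourier transform integral
    of g(t)*exp(-j*2*pi*f*t) over [t_lo, t_hi]."""
    dt = (t_hi - t_lo) / N
    t = t_lo + (np.arange(N) + 0.5) * dt
    return np.sum(g(t) * np.exp(-2j * np.pi * f * t)) * dt

tri = lambda t: np.maximum(0.0, 1.0 - np.abs(t))   # a convenient test pulse

t0, f = 0.3, 1.7
G       = ft_numeric(tri, f, -1.0, 1.0)                     # G(f)
G_shift = ft_numeric(lambda t: tri(t - t0), f, -1.0, 2.0)   # FT of g(t - t0)

# Eq. (2B.38): shifting in time multiplies the spectrum by exp(-j*2*pi*f*t0)
assert abs(G_shift - G * np.exp(-2j * np.pi * f * t0)) < 1e-6
# ...so the magnitude spectrum is unchanged by the shift:
assert abs(abs(G_shift) - abs(G)) < 1e-6
```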
Frequency Shifting

Similarly to the last section, a spectrum $G(f)$ can be shifted to the right,
$G(f - f_0)$, or to the left, $G(f + f_0)$. This property will be used to
derive several standard transforms, and is particularly important in
communications, where it forms the basis for modulation and demodulation.

Frequency shift property defined

The inverse Fourier transform of a spectrum shifted to the right is:

$$\mathcal{F}^{-1}\{G(f - f_0)\} = \int_{-\infty}^{\infty} G(f - f_0) \, e^{j2\pi f t} \, df \tag{2B.39}$$

With the substitution $x = f - f_0$ this becomes:

$$\mathcal{F}^{-1}\{G(f - f_0)\} = \int_{-\infty}^{\infty} G(x) \, e^{j2\pi (x + f_0) t} \, dx = g(t) \, e^{j2\pi f_0 t}$$

and therefore:

$$g(t) \, e^{j2\pi f_0 t} \Leftrightarrow G(f - f_0) \tag{2B.40}$$

Multiplication by a sinusoid in the time-domain shifts the original spectrum up and down by the carrier frequency

An important application is multiplication by a real sinusoid:

$$g(t) \cos(2\pi f_c t) = g(t) \left( \tfrac{1}{2} e^{j2\pi f_c t} + \tfrac{1}{2} e^{-j2\pi f_c t} \right)
\Leftrightarrow \tfrac{1}{2} G(f - f_c) + \tfrac{1}{2} G(f + f_c) \tag{2B.41}$$
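The modulation result of Eq. (2B.41) can also be verified numerically. This sketch is an illustration assuming Python with NumPy; the triangular baseband pulse and the carrier frequency of 10 Hz are arbitrary choices, not values from the notes:

```python
import numpy as np

def ft_numeric(g, f, t_lo, t_hi, N=200000):
    """Midpoint-rule approximation of the Fourier transform integral."""
    dt = (t_hi - t_lo) / N
    t = t_lo + (np.arange(N) + 0.5) * dt
    return np.sum(g(t) * np.exp(-2j * np.pi * f * t)) * dt

tri = lambda t: np.maximum(0.0, 1.0 - np.abs(t))     # baseband test pulse
fc = 10.0                                             # carrier frequency
mod = lambda t: tri(t) * np.cos(2 * np.pi * fc * t)   # modulated signal

# Eq. (2B.41): the modulated spectrum is half the baseband spectrum
# shifted up to +fc, plus half shifted down to -fc.
for f in [9.0, 10.0, 11.5]:
    lhs = ft_numeric(mod, f, -1.0, 1.0)
    rhs = 0.5 * ft_numeric(tri, f - fc, -1.0, 1.0) + \
          0.5 * ft_numeric(tri, f + fc, -1.0, 1.0)
    assert abs(lhs - rhs) < 1e-6
```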
The Fourier Transform of Sinusoids

Using Eq. (2B.34) and the frequency shifting property, we know that:

$$e^{j2\pi f_0 t} \Leftrightarrow \delta(f - f_0) \tag{2B.42a}$$

$$e^{-j2\pi f_0 t} \Leftrightarrow \delta(f + f_0) \tag{2B.42b}$$

Therefore, by substituting into Euler's relationships for the cosine and sine:

$$\cos(2\pi f_0 t) = \tfrac{1}{2} e^{j2\pi f_0 t} + \tfrac{1}{2} e^{-j2\pi f_0 t} \tag{2B.43a}$$

$$\sin(2\pi f_0 t) = \tfrac{1}{2j} e^{j2\pi f_0 t} - \tfrac{1}{2j} e^{-j2\pi f_0 t}
                  = -\tfrac{j}{2} e^{j2\pi f_0 t} + \tfrac{j}{2} e^{-j2\pi f_0 t} \tag{2B.43b}$$

we obtain the transform pairs:

$$\cos(2\pi f_0 t) \Leftrightarrow \tfrac{1}{2} \delta(f - f_0) + \tfrac{1}{2} \delta(f + f_0) \tag{2B.44a}$$

$$\sin(2\pi f_0 t) \Leftrightarrow -\tfrac{j}{2} \delta(f - f_0) + \tfrac{j}{2} \delta(f + f_0) \tag{2B.44b}$$
Graphically:

Spectra for cos and sin

[Figure 2B.12 - $\cos(2\pi f_0 t)$, period $T_0$, with spectrum $G(f)$ consisting of two real impulses of weight $1/2$ at $\pm f_0$; and $\sin(2\pi f_0 t)$ with spectrum consisting of imaginary impulses of weight $-j/2$ at $f_0$ and $j/2$ at $-f_0$]
Relationship between the Fourier Series and Fourier Transform

From the definition of the Fourier series, we know we can express any periodic
waveform as a sum of harmonic phasors:

$$g(t) = \sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \tag{2B.45}$$

From Eq. (2B.42a), each harmonic phasor transforms to an impulse:

$$e^{j2\pi n f_0 t} \Leftrightarrow \delta(f - n f_0) \tag{2B.46}$$

$$G_n e^{j2\pi n f_0 t} \Leftrightarrow G_n \, \delta(f - n f_0) \tag{2B.47}$$

$$\sum_{n=-\infty}^{\infty} G_n e^{j2\pi n f_0 t} \Leftrightarrow \sum_{n=-\infty}^{\infty} G_n \, \delta(f - n f_0) \tag{2B.48}$$

Therefore, in words:

The spectrum of a periodic signal is a weighted train of impulses - each
weight is equal to the Fourier series coefficient at that frequency, $G_n$.
(2B.49)
Example

Find the Fourier transform of the rectangular pulse train $g(t)$ shown below:

[Figure 2B.13 - a rectangular pulse train of height $A$, pulse width $\tau$, period $T_0$]

We have already found the Fourier series coefficients for this waveform:

$$G_n = A \tau f_0 \, \mathrm{sinc}(n f_0 \tau) \tag{2B.50}$$

For the case of $T_0 = 5\tau$, the spectrum is then a weighted train of
impulses, with spacing equal to the fundamental of the waveform, $f_0$:

The double-sided magnitude spectrum of a rectangular pulse train is a weighted train of impulses

[Figure 2B.14 - $G(f)$: impulses at multiples of $f_0$ (shown out to $\pm 12 f_0$) with weights $G_n$ under a sinc envelope of peak $0.2A$]
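The impulse weights in Figure 2B.14 can be cross-checked by evaluating the Fourier series integral of Eq. (2B.2b) numerically over one period and comparing with the closed form of Eq. (2B.50). This is a sketch assuming Python with NumPy, with the pulse centred at $t = 0$:

```python
import numpy as np

A, tau, T0 = 1.0, 1.0, 5.0          # height, pulse width, period (T0 = 5*tau)
f0 = 1.0 / T0

def Gn_closed(n):
    """Eq. (2B.50): G_n = A*tau*f0*sinc(n*f0*tau)."""
    return A * tau * f0 * np.sinc(n * f0 * tau)

def Gn_numeric(n, N=100000):
    """Eq. (2B.2b) evaluated by midpoint rule over one period,
    with the pulse centred at t = 0."""
    dt = T0 / N
    t = -T0 / 2 + (np.arange(N) + 0.5) * dt
    g = np.where(np.abs(t) <= tau / 2, A, 0.0)
    return np.sum(g * np.exp(-2j * np.pi * n * f0 * t)) * dt / T0

assert abs(Gn_closed(0) - 0.2 * A) < 1e-12   # DC weight is A*tau/T0 = 0.2A
for n in range(-12, 13):
    assert abs(Gn_numeric(n) - Gn_closed(n)) < 1e-6
```

The DC weight of $0.2A$ matches the peak of the envelope drawn in Figure 2B.14.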
This should make intuitive sense - the spectrum is now defined as a spectral
density, so the finite power that a periodic signal carries at each discrete
harmonic frequency must appear as an impulse.

Pairs of impulses in the spectrum correspond to a sinusoid in the time-domain
The Fourier Transform of a Uniform Train of Impulses

We will encounter a uniform train of impulses frequently in our analysis of
communication systems:

[Figure 2B.15 - a uniform train of unit impulses at $t = \ldots, -2T_0, -T_0, 0, T_0, 2T_0, 3T_0, \ldots$]

The Fourier series coefficients of this waveform are all equal:

$$G_n = f_0 \tag{2B.51}$$

Using Eq. (2B.48), we therefore have the following Fourier transform pair:

$$\sum_{k=-\infty}^{\infty} \delta(t - kT_0) \Leftrightarrow f_0 \sum_{n=-\infty}^{\infty} \delta(f - n f_0) \tag{2B.52}$$

Graphically:

The Fourier transform of a uniform train of impulses is a uniform train of impulses

[Figure 2B.16 - unit impulses spaced $T_0$ apart in the time-domain, and impulses of weight $f_0$ spaced $f_0$ apart in the frequency-domain]
Standard Fourier Transforms

$$\mathrm{rect}(t) \Leftrightarrow \mathrm{sinc}(f) \tag{F.1}$$

$$\delta(t) \Leftrightarrow 1 \tag{F.2}$$

$$e^{-t} u(t) \Leftrightarrow \frac{1}{1 + j2\pi f} \tag{F.3}$$

$$A \cos(2\pi f_0 t + \phi) \Leftrightarrow \frac{A}{2} e^{j\phi} \delta(f - f_0) + \frac{A}{2} e^{-j\phi} \delta(f + f_0) \tag{F.4}$$

$$\sum_{k=-\infty}^{\infty} \delta(t - kT_0) \Leftrightarrow f_0 \sum_{n=-\infty}^{\infty} \delta(f - n f_0) \tag{F.5}$$
Fourier Transform Properties

Assuming $g(t) \Leftrightarrow G(f)$.

$$a g(t) \Leftrightarrow a G(f) \tag{F.6} \quad \text{Linearity}$$

$$g\!\left(\frac{t}{T}\right) \Leftrightarrow T \, G(fT) \tag{F.7} \quad \text{Scaling}$$

$$g(t - t_0) \Leftrightarrow G(f) \, e^{-j2\pi f t_0} \tag{F.8} \quad \text{Time-shifting}$$

$$g(t) \, e^{j2\pi f_0 t} \Leftrightarrow G(f - f_0) \tag{F.9} \quad \text{Frequency-shifting}$$

$$G(t) \Leftrightarrow g(-f) \tag{F.10} \quad \text{Duality}$$

$$\frac{d}{dt} g(t) \Leftrightarrow j2\pi f \, G(f) \tag{F.11} \quad \text{Time-differentiation}$$

$$\int_{-\infty}^{t} g(\lambda) \, d\lambda \Leftrightarrow \frac{G(f)}{j2\pi f} + \frac{G(0)}{2} \delta(f) \tag{F.12} \quad \text{Time-integration}$$

$$g_1(t) \, g_2(t) \Leftrightarrow G_1(f) * G_2(f) \tag{F.13} \quad \text{Multiplication}$$

$$g_1(t) * g_2(t) \Leftrightarrow G_1(f) \, G_2(f) \tag{F.14} \quad \text{Convolution}$$

$$\int_{-\infty}^{\infty} g(t) \, dt = G(0) \tag{F.15} \quad \text{Area}$$

$$\int_{-\infty}^{\infty} G(f) \, df = g(0) \tag{F.16}$$
Summary
Exercises

1.
Write an expression for the time domain representation of the voltage signal
with the double-sided spectrum given below:

[Spectrum $X(f)$: two impulses of magnitude 2, with phase $30°$ at $f = -1000$ Hz and phase $-30°$ at $f = 1000$ Hz]

2.
Use the integration and time shift rules to express the Fourier transform of
the pulse below as a sum of exponentials.

[$x(t)$: a pulse of height 2 on the interval $(-1, 0)$]
3.
Find the Fourier transforms of the following functions:

(a) [$g(t)$: a piecewise-constant waveform with levels 10 and $-5$ and breakpoints at $t = -2$ and $t = -1$]

(b) [$g(t) = \cos t$ over the interval $(-\pi/2, \pi/2)$, zero elsewhere]

(c) [$g(t)$: two identical cycles of a $t^2$ segment, starting at $t = 0$]

(d) [$g(t)$: a staircase waveform with levels 1, 2, 3, starting at $t = 0$]
4.
Show that:

$$e^{-a|t|} \Leftrightarrow \frac{2a}{a^2 + (2\pi f)^2}$$
5.
The Fourier transform of a pulse $p(t)$ is $P(f)$. Find the Fourier transform
of $g(t)$ in terms of $P(f)$.

[$p(t)$: a pulse of height 1 on $(0, 1)$; $g(t)$: a pulse of height 2 located near $t = -1.5$]
6.
Show that:

$$g(t + t_0) + g(t - t_0) \Leftrightarrow 2 G(f) \cos(2\pi f t_0)$$
7.
Use $G_1(f)$ and $G_2(f)$ to evaluate $g_1(t) \, g_2(t)$ for the signals shown
below:

[$g_1(t)$: a decaying exponential $a e^{-at}$; $g_2(t)$: a second waveform (not recoverable from the original figure)]
8.
Find $G_1(f) \, G_2(f)$ for the signals shown below:

[$G_1(f)$: a spectrum of height $A$ between $-f_0$ and $f_0$; $G_2(f)$: a spectrum of height $k$ between $-f_0$ and $f_0$]
9.
Determine the signals $g(t)$ whose Fourier transforms are shown below:

(a) [$|G(f)|$: constant magnitude between $-f_0$ and $f_0$; phase $\angle G(f) = -2\pi f t_0$]

(b) [$|G(f)|$: constant magnitude between $-f_0$ and $f_0$; phase $\angle G(f) = \pi/2$ for $f < 0$ and $-\pi/2$ for $f > 0$]
10.
Using the time-shifting property, determine the Fourier transform of the
signal shown below:

[$g(t)$: a waveform taking the values $A$ and $-A$, with breakpoints at $t = -T$ and $t = T$]

11.
Using the time-differentiation property, determine the Fourier transform of
the signal below:

[$g(t)$: a waveform with extreme values $A$ and $-A$ between $t = -2$ and $t = 2$]
William Thomson (Lord Kelvin) (1824-1907)
William Thomson was probably the first true electrical engineer. His
engineering was firmly founded on a solid bedrock of mathematics. He
invented, experimented, advanced the state-of-the-art, was entrepreneurial, was
a businessman, had a multi-disciplinary approach to problems, held office in
the professional body of his day (the Royal Society), published papers, gave
lectures to lay people, strived for an understanding of basic physical principles
and exploited that knowledge for the benefit of mankind.
William Thomson was born in Belfast, Ireland. His father was a professor of
engineering. When Thomson was 8 years old his father was appointed to the
chair of mathematics at the University of Glasgow. By age 10, William
Thomson was attending Glasgow University. He studied astronomy, chemistry
and natural philosophy (physics, heat, electricity and magnetism). Prizes in
Greek, logic (philosophy), mathematics, astronomy and physics marked his
progress. In 1840 he read Fourier's The Analytical Theory of Heat and wrote:

I had become filled with the utmost admiration for the splendour and
poetry of Fourier... I took Fourier out of the University Library; and in a
fortnight I had mastered it - gone right through it.

At the time, lecturers at Glasgow University took a strong interest in the
approach of the French mathematicians towards physical science, such as
Lagrange, Laplace, Legendre, Fresnel and Fourier. In 1840 Thomson also read
Laplace's Mécanique Céleste and visited Paris.
In 1841 Thomson entered Cambridge and in the same year he published a
paper on Fourier's expansions of functions in trigonometrical series. A more
important paper On the uniform motion of heat and its connection with the
mathematical theory of electricity was published in 1842.
The examinations in Cambridge were fiercely competitive exercises in problem
solving against the clock. The best candidates were trained as for an athletics
contest. Thomson (like Maxwell later) came second. A day before he left
Cambridge, his coach gave him two copies of Green's Essay on the
Application of Mathematical Analysis to the Theories of Electricity and
Magnetism.
After graduating, he moved to Paris on the advice of his father and because of
his interest in the French approach to mathematical physics. Thomson began
trying to bring together the ideas of Faraday, Coulomb and Poisson on
electrical theory. He began to try and unify the ideas of action-at-a-distance,
the properties of the ether and ideas behind an electrical fluid. He also
became aware of Carnot's view of heat.
In 1846, at the age of twenty two, he returned to Glasgow on a wave of
testimonials from, among others, De Morgan, Cayley, Hamilton, Boole,
Sylvester, Stokes and Liouville, to take up the post of professor of natural
philosophy. In 1847-49 he collaborated with Stokes on hydrodynamic studies,
which Thomson applied to electrical and atomic theory. In electricity Thomson
provided the link between Faraday and Maxwell. He was able to mathematise
Faraday's laws and to show the formal analogy between problems in heat and
electricity. Thus the work of Fourier on heat immediately gave rise to theorems
on electricity and the work of Green on potential theory immediately gave rise
to theorems on heat flow. Similarly, methods used to deal with linear and
rotational displacements in elastic solids could be applied to give results on
electricity and magnetism. The ideas developed by Thomson were later taken
up by Maxwell in his new theory of electromagnetism.
Thomson's other major contribution to fundamental physics was his
combination of the almost forgotten work of Carnot with the work of Joule on
the conservation of energy to lay the foundations of thermodynamics. The
thermodynamical studies of Thomson led him to propose an absolute
temperature scale in 1848 (The Kelvin absolute temperature scale, as it is now
known, was defined much later after conservation of energy was better
understood).
The Age of the Earth
In the first decades of the nineteenth century geological evidence for great
changes in the past began to build up. Large areas of land had once been under
water, mountain ranges had been thrown up from lowlands and the evidence of
fossils showed the past existence of species with no living counterparts. Lyell,
in his Principles of Geology, sought to explain these changes by causes now
in operation. According to his theory, processes such as slow erosion by
wind and water; gradual deposition of sediment by rivers; and the cumulative
effect of earthquakes and volcanic action combined over very long periods of
time to produce the vast changes recorded in the Earths surface. Lyells socalled uniformitarian theory demanded that the age of the Earth be measured
in terms of hundreds of millions and probably in terms of billions of years.
Lyell was able to account for the disappearance of species in the geological
record but not for the appearance of new species. A solution to this problem
was provided by Charles Darwin (and Wallace) with his theory of evolution by
natural selection. Darwin's theory also required vast periods of time for
operation. For natural selection to operate, the age of the Earth had to be
measured in many hundreds of millions of years.
Such demands for vast amounts of time run counter to the laws of
thermodynamics. Every day the sun radiates immense amounts of energy. By
the law of conservation of energy there must be some source of this energy.
Thomson, as one of the founders of thermodynamics, was fascinated by this
problem. Chemical processes (such as the burning of coal) are totally
insufficient as a source of energy and Thomson was forced to conclude that
gravitational potential energy was turned into heat as the sun contracted. On
this assumption his calculations showed that the Sun (and therefore the Earth)
was around 100 million years old.
However, Thomson's most compelling argument concerned the Earth rather
than the Sun. It is well known that the temperature of the Earth increases with
depth and
this implies a continual loss of heat from the interior, by conduction
outwards through or into the upper crust. Hence, since the upper crust does
not become hotter from year to year there must be a loss of heat from the
whole earth. It is possible that no cooling may result from this loss of heat
but only an exhaustion of potential energy which in this case could scarcely
be other than chemical.
Since there is no reasonable mechanism to keep a chemical reaction going at a
steady pace for millions of years, Thomson concluded that the earth is
'merely a warm chemically inert body cooling'. Thomson was led to believe
that the Earth was a solid body and that it had solidified at a more or less
uniform temperature. Taking the best available measurements of the
conductivity of the Earth and the rate of temperature change near the surface,
he arrived at an estimate of 100 million years as the age of the Earth
(confirming his calculations of the Sun's age).
The problems posed to Darwin's theory of evolution became serious as
Thomson's arguments sank in. In the fifth edition of The Origin of Species,
Darwin attempted to adjust to the new time scale by allowing greater scope for
evolution by processes other than natural selection. Darwin was forced to ask
for a suspension of judgment of his theory and in the final chapter he added
With respect to the lapse of time not having been sufficient since our planet
was consolidated for the assumed amount of organic change, and this
objection, as argued by [Thomson], is probably one of the gravest yet
advanced, I can only say, firstly that we do not know at what rate species
change as measured by years, and secondly, that many philosophers are not
yet willing to admit that we know enough of the constitution of the universe
and of the interior of our globe to speculate with safety on its past duration.
(Darwin, The Origin of Species, Sixth Edition, p.409)
The chief weakness of Thomson's arguments was exposed by Huxley:

A variant of the adage: garbage in equals garbage out.
this seems to be one of the many cases in which the admitted accuracy of
mathematical processes is allowed to throw a wholly inadmissible
appearance of authority over the results obtained by them. Mathematics
may be compared to a mill of exquisite workmanship, which grinds you stuff
of any degree of fineness; but nevertheless, what you get out depends on
what you put in; and as the grandest mill in the world will not extract
wheat-flour from peascods, so pages of formulae will not get a definite
result out of loose data.
(Quarterly Journal of the Geological Society of London, Vol. 25, 1869)
However, Thomson's estimates were the best available and for the next thirty
years geology took its time from physics, and biology took its time from
geology. But Thomson and his followers began to adjust his first estimate
down until at the end of the nineteenth century the best physical estimates of
the age of the Earth and Sun were about 20 million years whilst the minimum
the geologists could allow was closer to Thomson's original 100 million years.
Then in 1904 Rutherford announced that the radioactive decay of radium was
accompanied by the release of immense amounts of energy and speculated that
this could replace the heat lost from the surface of the Earth.
The discovery of the radioactive elements... thus increases the possible limit
of the duration of life on this planet, and allows the time claimed by the
geologist and biologist for the process of evolution.
(Rutherford quoted in Burchfield, p.164)
A problem for the geologists was now replaced by a problem for the physicists.
The answer was provided by a theory which was just beginning to be gossiped
about. Einstein's theory of relativity extended the principle of conservation
of energy by taking matter as a form of energy. It is the conversion of matter
to heat which maintains the Earth's internal temperature and supplies the energy
radiated by the sun. The ratios of lead isotopes in the Earth compared to
meteorites now leads geologists to give the Earth an age of about 4.55 billion
years.
The Transatlantic Cable
The invention of the electric telegraph in the 1830s led to a network of
telegraph wires covering England, western Europe and the more settled parts of
the USA. The railroads, spawned by the dual inventions of steam and steel,
were also beginning to crisscross those same regions. It was vital for the
smooth and safe running of the railroads, as well as the running of empires, to
have speedy communication.
Attempts were made to provide underwater links between the various separate
systems. The first cable between Britain and France was laid in 1850. The
operators found the greatest difficulty in transmitting even a few words. After
12 hours a trawler accidentally caught and cut the cable. A second, more
heavily armoured cable was laid and it was a complete success. The short lines
worked, but the operators found that signals could not be transmitted along
submarine cables as fast as along land lines without becoming confused.
In spite of the record of the longer lines, the American Cyrus J. Field
proposed a telegraph line linking Europe and America. Oceanographic surveys
showed that the bottom of the Atlantic was suitable for cable laying. The
connection of existing land telegraph lines had produced a telegraph line of the
length of the proposed cable through which signals had been passed extremely
rapidly. The British government offered a subsidy and money was rapidly
raised.
Faraday had predicted signal retardation but he and others like Morse had in
mind a model of a submarine cable as a hosepipe which took longer to fill with
water (signal) as it got longer. The remedy was thus to use a thin wire (so that
less electricity was needed to charge it) and high voltages to push the signal
through. Faraday's opinion was shared by the electrical adviser to the project,
Dr Whitehouse (a medical doctor).
Thomson's researches had given him a clearer mathematical picture of the
problem. The current in a telegraph wire in air is approximately governed by
the wave equation. A pulse on such a wire travels at a well defined speed with
no change of shape or magnitude with time. Signals can be sent as close
together as the transmitter can make them and the receiver distinguish them.
In undersea cables of the type proposed, capacitive effects dominate and the
current is approximately governed by the diffusion (i.e. heat) equation. This
equation predicts that electric pulses will last for a time that is proportional to
the length of the cable squared. If two or more signals are transmitted within
this time, the signals will be jumbled at the receiver. In going from submarine
cables of 50 km length to cables of length 2400 km, retardation effects are
2500 times worse. Also, increasing the voltage makes this jumbling (called
intersymbol interference) worse. Finally, the diffusion equation shows that the
wire should have as large a diameter as possible (small resistance).
Whitehouse, whose professional reputation was now involved, denied these
conclusions. Even though Thomson was on the board of directors of Field's
company, he had no authority over the technical advisers. Moreover the
production of the cable was already underway on principles contrary to
Thomson's. Testing the cable, Thomson was astonished to find that some
sections conducted only half as well as others, even though the manufacturers
were supplying copper to the then highest standards of purity.
Realising that the success of the enterprise would depend on a fast, sensitive
detector, Thomson set about to invent one. The problem with an ordinary
galvanometer is the high inertia of the needle. Thomson came up with the
mirror galvanometer in which the pointer is replaced by a beam of light.
In a first attempt in 1857 the cable snapped after 540 km had been laid. In
1858, Europe and America were finally linked by cable. On 16 August it
carried a 99-word message of greeting from Queen Victoria to President
Buchanan. But that 99-word message took 16 hours to get through. In vain,
Whitehouse tried to get his receiver to work. Only Thomson's galvanometer
was sensitive enough to interpret the minute and blurred messages coming
through. Whitehouse ordered that a series of huge two thousand volt induction
coils be used to try to push the message through faster after four weeks of
this treatment the insulation finally failed; 2500 tons of cable and £350 000 of
capital lay useless on the ocean floor.
In 1859 eighteen thousand kilometres of undersea cable had been laid in other
parts of the world, and only five thousand kilometres were operating. In 1861
civil war broke out in the United States. By 1864 Field had raised enough
capital for a second attempt. The cable was designed in accordance with
Thomson's theories. Strict quality control was exercised: the copper was so
pure that for the next 50 years 'telegraphist's copper' was the purest available.
Once again the British Government supported the project the importance of
quick communication in controlling an empire was evident to everybody. The
new cable was mechanically much stronger but also heavier. Only one ship
was large enough to handle it and that was Brunel's Great Eastern. She was
five times larger than any other existing ship.
This time there was a competitor. The Western Union Company had decided to
build a cable along the overland route across America, Alaska, the Bering
Straits, Siberia and Russia to reach Europe the long way round. The
commercial success of the cable would therefore depend on the rate at which
messages could be transmitted. Thomson had promised the company a rate of
8 or even 12 words a minute. Half a million pounds was being staked on the
correctness of the solution of a partial differential equation.
In 1865 the Great Eastern laid cable for nine days, but after 2000 km the cable
parted. After two weeks of unsuccessfully trying to recover the cable, the
expedition left a buoy to mark the spot and sailed for home. Since
communication had been perfect up until the final break, Thomson was
confident that the cable would do all that was required. The company decided
to build and lay a new cable and then go back and complete the old one.
Cable laying for the third attempt started on 12 July 1866 and the cable was
landed on the morning of the 27th. On the 28th the cable was open for business
and earned £1000. Western Union ordered all work on their project to be
stopped at a loss of $3 000 000.
On 1 September after three weeks of effort the old cable was recovered and on
8 September a second perfect cable linked America and Europe. A wave of
knighthoods swept over the engineers and directors. The patents which
Thomson held made him a wealthy man.
For his work on the transatlantic cable Thomson was created Baron Kelvin of
Largs in 1892. The Kelvin is the river which runs through the grounds of
Glasgow University and Largs is the town on the Scottish coast where
Thomson built his house.
Other Achievements
Thomson worked on several problems associated with navigation sounding
machines, lighthouse lights, compasses and the prediction of tides. Tides are
primarily due to the gravitational effects of the Moon, Sun and Earth on the
oceans but their theoretical investigation, even in the simplest case of a single
ocean covering a rigid Earth to a uniform depth, is very hard. Even today, the
study of only slightly more realistic models is only possible by numerical
computer modelling. Thomson recognised that the forces affecting the tides
change periodically. He then approximated the height of the tide by a
trigonometric polynomial - a Fourier series with a finite number of terms. The
coefficients of the polynomial required calculation of the Fourier series
coefficients by numerical integration - a task that required not less than
twenty hours of calculation by skilled arithmeticians. To reduce this labour
Thomson designed and built a machine which would trace out the predicted
height of the tides for a year in a few minutes, given the Fourier series
coefficients.

Michelson (of Michelson-Morley fame) was to build a better machine that used
up to 80 Fourier series coefficients. The production of blips at
discontinuities by this machine was explained by Gibbs in two letters to
Nature. These blips are now referred to as the Gibbs phenomenon.
Thomson also built another machine, called the harmonic analyser, to perform
the task which seemed to the Astronomer Royal 'so complicated and difficult
that no machine could master it' - of computing the Fourier series
coefficients from the record of past heights. This was the first major victory
in the struggle to substitute brass for brain in calculation.
Thomson introduced many teaching innovations to Glasgow University. He
introduced laboratory work into the degree courses, keeping this part of the
work distinct from the mathematical side. He encouraged the best students by
offering prizes. There were also prizes which Thomson gave to the student that
he considered most deserving.
Thomson worked in collaboration with Tait to produce the now famous text
Treatise on Natural Philosophy which they began working on in the early
1860s. Many volumes were intended but only two were ever written which
cover kinematics and dynamics. These became standard texts for many
generations of scientists.
In later life he developed a complete range of measurement instruments for
physics and electricity. He also established standards for all the quantities in
use in physics. In all he published over 300 major technical papers during the
53 years that he held the chair of Natural Philosophy at the University of
Glasgow.
During the first half of Thomson's career he seemed incapable of being wrong
while during the second half of his career he seemed incapable of being right.
This seems too extreme a view, but Thomson's refusal to accept atoms, his
opposition to Darwin's theories, his incorrect speculations as to the age of the
Earth and the Sun, and his opposition to Rutherford's ideas of radioactivity,
certainly put him on the losing side of many arguments later in his career.
William Thomson, Lord Kelvin, died in 1907 at the age of 83. He was buried
in Westminster Abbey in London where he lies today, adjacent to Isaac
Newton.
References
Burchfield, J.D.: Lord Kelvin and The Age of the Earth, Macmillan, 1975.
Körner, T.W.: Fourier Analysis, Cambridge University Press, 1988.
Encyclopedia Britannica, 2004.
Morrison, N.: Introduction to Fourier Analysis, John Wiley & Sons, Inc., 1994.
Thompson, S.P.: The Life of Lord Kelvin, London, 1976.
Kelvin & Tait: Treatise on Natural Philosophy, Appendix B.
Signals and Systems 2014
Lecture 3A Filtering and Sampling
Response to a sinusoidal input. Response to an arbitrary input. Ideal filters.
What does a filter do to a signal? Sampling. Reconstruction. Aliasing.
Practical sampling and reconstruction. Summary of the sampling and
reconstruction process. Finding the Fourier series of a periodic function from
the Fourier transform of a single period. Windowing in the time domain.
Practical multiplication and convolution.
Introduction
We have a description of signals in the frequency domain; we need one for
systems.

Since we can now represent signals in terms of a Fourier series (for periodic
signals) or a Fourier transform (for aperiodic signals), we seek a way to
describe a system in terms of frequency. That is, we seek a model of a linear,
time-invariant system governed by continuous-time differential equations that
expresses its behaviour with respect to frequency, rather than time. The
concept of a signal's spectrum and a system's frequency response will be seen
to be of fundamental importance in the frequency-domain characterisation of a
system.
The power of the frequency-domain approach will be seen as we are able to
determine a system's output given almost any input. Fundamental signal
operations, such as sampling / reconstruction and modulation / demodulation,
can also be explained easily in the frequency domain, whereas they would
otherwise appear bewildering in the time domain.
Starting with a convolution description of a system:

$$y(t) = h(t) * x(t) \qquad (3A.1)$$

we apply a sinusoid as the input:

$$x(t) = A\cos(\omega_0 t + \phi) \qquad (3A.2)$$
3A.2
We have already seen that this can be expressed (thanks to Euler) as:

A sinusoid is just a sum of two complex conjugate counter-rotating phasors

$$x(t) = \frac{A}{2}e^{j\phi}e^{j\omega_0 t} + \frac{A}{2}e^{-j\phi}e^{-j\omega_0 t} = Xe^{j\omega_0 t} + X^*e^{-j\omega_0 t} \qquad (3A.3)$$

where $X = \frac{A}{2}e^{j\phi}$ is the phasor representing $x(t)$. Inserting this into Eq. (3A.1) gives:

$$y(t) = \int_{-\infty}^{\infty} h(\lambda)\left[Xe^{j\omega_0(t-\lambda)} + X^*e^{-j\omega_0(t-\lambda)}\right]d\lambda
= \left[\int_{-\infty}^{\infty} h(\lambda)e^{-j\omega_0\lambda}\,d\lambda\right]Xe^{j\omega_0 t} + \left[\int_{-\infty}^{\infty} h(\lambda)e^{j\omega_0\lambda}\,d\lambda\right]X^*e^{-j\omega_0 t} \qquad (3A.4)$$
This rather unwieldy expression can be simplified. First of all, if we take the
Fourier transform of the impulse response, we get:

The Fourier transform of the impulse response appears in our analysis

$$H(\omega) = \int_{-\infty}^{\infty} h(t)\,e^{-j\omega t}\,dt \qquad (3A.5)$$

Recognising the bracketed terms of Eq. (3A.4) as $H(\omega_0)$ and $H(-\omega_0)$, the output is:

$$y(t) = H(\omega_0)Xe^{j\omega_0 t} + H(-\omega_0)X^*e^{-j\omega_0 t} \qquad (3A.6)$$

If $h(t)$ is real, then:

$$H(-\omega) = H^*(\omega) \qquad (3A.7)$$

and so the output phasor is:

$$Y = H(\omega_0)X \qquad (3A.8)$$
This equation is of fundamental importance! It says that the output phasor of a
system is equal to the input phasor to the system, scaled in magnitude and
changed in angle by an amount equal to $H(\omega_0)$ (a complex number). Also:

$$Y^* = H^*(\omega_0)X^* = H(-\omega_0)X^* \qquad (3A.9)$$

so that the output is:

$$y(t) = Ye^{j\omega_0 t} + Y^*e^{-j\omega_0 t} \qquad (3A.10)$$

which can be written as:

$$y(t) = A\,|H(\omega_0)|\cos\!\left(\omega_0 t + \phi + \angle H(\omega_0)\right) \qquad (3A.11)$$

The output is therefore also a sinusoid with the same frequency $\omega_0$, but with the amplitude scaled by
the factor $|H(\omega_0)|$ and the phase shifted by an amount $\angle H(\omega_0)$.
The function $H(\omega)$ is termed the frequency response. $|H(\omega)|$ is called the
magnitude response and $\angle H(\omega)$ is called the phase response.

Frequency, magnitude and phase response defined

Note that the system impulse response and the frequency response form a
Fourier transform pair:

$$h(t) \Leftrightarrow H(f) \qquad (3A.12)$$

The impulse response and frequency response form a Fourier transform pair

We now have an easy way of analysing systems with sinusoidal inputs: simply
determine $H(f)$ and apply $Y = H(f_0)X$.

There are two ways to get $H(f)$. We can find the system impulse response
$h(t)$ and take its Fourier transform, or we can work directly in the frequency
domain, as in the following example.
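The sinusoidal steady-state result of Eq. (3A.11) is easy to check numerically. The Python sketch below uses an assumed system, $h(t) = e^{-t}u(t)$ so that $H(\omega) = 1/(1 + j\omega)$, convolves a sinusoid with the impulse response, and compares the steady-state part of the output against $A|H(\omega_0)|\cos(\omega_0 t + \phi + \angle H(\omega_0))$:

```python
import numpy as np

# A sketch with assumed values: h(t) = exp(-t) u(t), so H(w) = 1/(1 + jw).
A, phi, w0 = 2.0, 0.3, 5.0

dt = 1e-3
t = np.arange(0, 20, dt)
h = np.exp(-t)                       # impulse response for t >= 0
x = A * np.cos(w0 * t + phi)         # sinusoidal input

# Numerical convolution y(t) = (h * x)(t), truncated to the same time axis
y = np.convolve(h, x)[:len(t)] * dt

# Predicted sinusoidal steady-state response, Eq. (3A.11)
H0 = 1.0 / (1.0 + 1j * w0)
y_pred = A * np.abs(H0) * np.cos(w0 * t + phi + np.angle(H0))

# Compare once the initial transient has died away (t > 10 time constants)
ss = t > 10
err = np.max(np.abs(y[ss] - y_pred[ss]))
print(err)                           # small compared with the output amplitude
```

The agreement is limited only by the discretisation of the convolution integral.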
Example
For the simple RC circuit below, find the forced response to an arbitrary
sinusoid (this is also termed the sinusoidal steady-state response).
Finding the frequency
response of a simple
system
[Figure 3A.1: RC lowpass circuit with input $v_i$ and output $v_o$]
The input/output differential equation for the circuit is:

$$\frac{dv_o(t)}{dt} + \frac{1}{RC}v_o(t) = \frac{1}{RC}v_i(t) \qquad (3A.13)$$
which is obtained by KVL. Since the input is a sinusoid, which is really just a
sum of conjugate complex exponentials, we know from Eq. (3A.6) that if the
input is $V_i = Ae^{j\omega_0 t}$ then the output is $V_o = H(\omega_0)Ae^{j\omega_0 t}$. Note that $V_i$ and
$V_o$ are complex numbers, and if the factor $e^{j\omega_0 t}$ were suppressed they would be
phasors. Substituting into Eq. (3A.13):

$$\frac{d}{dt}\left[H(\omega_0)Ae^{j\omega_0 t}\right] + \frac{1}{RC}H(\omega_0)Ae^{j\omega_0 t} = \frac{1}{RC}Ae^{j\omega_0 t} \qquad (3A.14)$$

and thus:

$$j\omega_0 H(\omega_0)Ae^{j\omega_0 t} + \frac{1}{RC}H(\omega_0)Ae^{j\omega_0 t} = \frac{1}{RC}Ae^{j\omega_0 t} \qquad (3A.15)$$
Dividing both sides by $V_i = Ae^{j\omega_0 t}$ gives:

$$j\omega_0 H(\omega_0) + \frac{1}{RC}H(\omega_0) = \frac{1}{RC} \qquad (3A.16)$$

and therefore:

$$H(\omega_0) = \frac{1/RC}{j\omega_0 + 1/RC} \qquad (3A.17)$$

Since the frequency $\omega_0$ was arbitrary, the frequency response at any frequency $\omega$ is:

$$H(\omega) = \frac{1/RC}{j\omega + 1/RC} \qquad (3A.18)$$

Frequency response of a lowpass RC circuit
This is the frequency response for the simple RC circuit. As a check, we know
that the impulse response is:

$$h(t) = \frac{1}{RC}e^{-t/RC}u(t) \qquad (3A.19)$$

Impulse response of a lowpass RC circuit

Using your standard transforms, show that the frequency response is the
Fourier transform of the impulse response.

The magnitude function is:

$$|H(\omega)| = \frac{1/RC}{\sqrt{\omega^2 + (1/RC)^2}} \qquad (3A.20)$$

Magnitude response of a lowpass RC circuit

and the phase function is:

$$\angle H(\omega) = -\tan^{-1}(\omega RC) \qquad (3A.21)$$
Plots of the magnitude and phase functions are shown below:

A graph of the frequency response, in this case as magnitude and phase

[Figure 3A.2: $|H(\omega)|$ falls from 1 at DC to $1/\sqrt{2}$ at the corner frequency $\omega_0 = 1/RC$; $\angle H(\omega)$ passes through $-45°$ at $\omega_0$ and approaches $-90°$ as $\omega \to \infty$]

The system's behaviour described in terms of frequency
Response to an Arbitrary Input
Since we can readily establish the response of a system to a single sinusoid, we
should be able to find the response of a system to a sum of sinusoids. That is,
a spectrum in gives a spectrum out, with the relationship between the output
and input spectra given by the frequency response. For periodic inputs, the
spectrum is effectively given by the Fourier series, and for aperiodic inputs, we
use the Fourier transform. The system in both cases is described by its
frequency response.

If we can do one sinusoid, we can do an infinite number
Periodic Inputs
For periodic inputs, we can express the input signal by a complex exponential
Fourier series:

$$x(t) = \sum_{n=-\infty}^{\infty} X_n e^{jn\omega_0 t} \qquad (3A.22)$$

which is just a Fourier series for a periodic signal

It follows from the previous section that the output response resulting from the
complex exponential input $X_n e^{jn\omega_0 t}$ is equal to $H(n\omega_0)X_n e^{jn\omega_0 t}$. By linearity,
the response to the periodic input $x(t)$ is:

$$y(t) = \sum_{n=-\infty}^{\infty} H(n\omega_0)X_n e^{jn\omega_0 t} \qquad (3A.23)$$

Since the right-hand side is a complex exponential Fourier series, the output
$y(t)$ must be periodic, with fundamental frequency equal to that of the input.
The output Fourier series coefficients are:

$$Y_n = H(n\omega_0)X_n \qquad (3A.24)$$

The frequency response simply multiplies the input Fourier series coefficients
to produce the output Fourier series coefficients
The output magnitude spectrum is just:

$$|Y_n| = |H(n\omega_0)|\,|X_n| \qquad (3A.25)$$

and the output phase spectrum is:

$$\angle Y_n = \angle H(n\omega_0) + \angle X_n \qquad (3A.26)$$

Don't forget the frequency response is just a frequency-dependent complex number

These relationships describe how the system processes the various complex
exponential components comprising the periodic input signal. In particular,
Eq. (3A.25) determines whether the system will pass or attenuate a given
component of the input, and Eq. (3A.26) determines the phase shift the system
will give to a particular component of the input.
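Eqs. (3A.24)-(3A.26) can be applied coefficient by coefficient. The Python sketch below uses assumed values: a 50 Hz pulse train with 50% duty cycle, whose Fourier series coefficients are $X_n = \tfrac{1}{2}\mathrm{sinc}(n/2)$, passed through a first-order filter $H(f) = 1/(1 + jf/f_c)$:

```python
import numpy as np

# Assumed values for illustration only
f0 = 50.0                        # fundamental frequency (Hz)
fc = 100.0                       # filter corner frequency (Hz)

def H(f):
    return 1.0 / (1.0 + 1j * f / fc)

n = np.arange(-3, 4)             # harmonics n = -3 .. 3
X = np.sinc(n / 2) / 2           # input Fourier series coefficients
Y = H(n * f0) * X                # output coefficients: Y_n = H(n f0) X_n

for m, Xm, Ym in zip(n, X, Y):
    # |Y_n| = |H(n f0)||X_n| and angle(Y_n) = angle(H(n f0)) + angle(X_n)
    print(m, round(abs(Xm), 4), round(abs(Ym), 4), round(np.degrees(np.angle(Ym)), 1))
```

Each harmonic's magnitude is scaled by $|H(nf_0)|$ and its phase shifted by $\angle H(nf_0)$, exactly as Eqs. (3A.25) and (3A.26) state.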
Aperiodic Inputs
If we can do finite sinusoids, we can do infinitesimal sinusoids too!

Taking the Fourier transform of both sides of the time domain input/output
relationship of an LTI system:

$$y(t) = h(t) * x(t) \qquad (3A.27)$$

we get:

$$Y(f) = \int_{-\infty}^{\infty}\left[h(t) * x(t)\right]e^{-j2\pi ft}\,dt \qquad (3A.28)$$

and transform to the frequency domain

Writing the convolution out explicitly:

$$Y(f) = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}h(\lambda)\,x(t-\lambda)\,d\lambda\right]e^{-j2\pi ft}\,dt \qquad (3A.29)$$

Interchanging the order of integration:

$$Y(f) = \int_{-\infty}^{\infty}h(\lambda)\left[\int_{-\infty}^{\infty}x(t-\lambda)\,e^{-j2\pi ft}\,dt\right]d\lambda \qquad (3A.30)$$

Using the change of variable $\mu = t - \lambda$ in the inner integral gives:

$$Y(f) = \int_{-\infty}^{\infty}h(\lambda)\left[\int_{-\infty}^{\infty}x(\mu)\,e^{-j2\pi f(\mu+\lambda)}\,d\mu\right]d\lambda \qquad (3A.31)$$

so that the two integrals separate:

$$Y(f) = \int_{-\infty}^{\infty}h(\lambda)\,e^{-j2\pi f\lambda}\,d\lambda\int_{-\infty}^{\infty}x(\mu)\,e^{-j2\pi f\mu}\,d\mu \qquad (3A.32)$$

which is:

$$Y(f) = H(f)\,X(f) \qquad (3A.33)$$

Convolution in the time-domain is multiplication in the frequency-domain

This is the frequency-domain equivalent of Eq. (3A.27). It says that the
spectrum of the output signal is equal to the product of the frequency response
and the spectrum of the input signal.

We find the output spectrum by multiplying the input spectrum by the frequency response
In terms of magnitude and phase:

$$|Y(f)| = |H(f)|\,|X(f)| \qquad (3A.34)$$

$$\angle Y(f) = \angle H(f) + \angle X(f) \qquad (3A.35)$$

The magnitude spectrum is scaled and the phase spectrum is shifted

Note that the frequency-domain description applies to all inputs that can be
Fourier transformed, including sinusoids if we allow impulses in the spectrum.
Periodic inputs are then a special case of Eq. (3A.33).

By similar arguments, together with the duality property of the Fourier
transform, multiplication in the time-domain corresponds to convolution in the
frequency-domain.

Convolution in the frequency-domain is multiplication in the time-domain
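Eq. (3A.33) is easy to verify numerically: the Python sketch below checks that the linear convolution of two arbitrary sequences equals the inverse DFT of the product of their zero-padded DFTs:

```python
import numpy as np

# Verify the convolution theorem on two arbitrary finite-length sequences.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(64)

y_time = np.convolve(h, x)                # linear convolution, length 127

N = len(x) + len(h) - 1                   # zero-pad to avoid circular wrap-around
y_freq = np.fft.irfft(np.fft.rfft(h, N) * np.fft.rfft(x, N), N)

print(np.max(np.abs(y_time - y_freq)))    # numerically zero
```

The zero-padding matters: without it, multiplying DFTs implements circular, not linear, convolution.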
Ideal Filters
A first look at
frequency-domain
descriptions - filters
Now that we have a feel for the frequency-domain description and behaviour
of a system, we will briefly examine a very important application of electronic
circuits: that of frequency selection, or filtering. Here we will examine ideal
filters only; the topic of real filter design is rather involved.
Ideal filters pass sinusoids within a given frequency range, and reject
(completely attenuate) all other sinusoids. An example of an ideal lowpass
filter is shown below:
[Figure 3A.3: ideal lowpass filter, $|H| = 1$ in the passband and 0 in the stopband, with a sharp cutoff between them]
Filter types
Other basic types of filters are highpass, bandpass and bandstop. All have
similar definitions as given in Figure 3A.3. Frequencies that are passed are said
to be in the passband, while those that are rejected lie in the stopband. The
point where passband and stopband meet is called $\omega_0$, the cutoff frequency.
The term bandwidth as applied to a filter corresponds to the width of the
passband.
An ideal lowpass filter with a bandwidth of $B$ Hz has a magnitude response:

$$|H(f)| = K\,\mathrm{rect}\!\left(\frac{f}{2B}\right) \qquad (3A.36)$$
Phase Response of an Ideal Filter
Most filter specifications deal with the magnitude response. In systems where
the filter is designed to pass a particular wave shape, the phase response is
extremely important. For example, in a digital system we may be sending 1s
and 0s using a specially shaped pulse that has nice properties, e.g. its value
is zero at the centre of all other pulses. At the receiver it is passed through a
lowpass filter to remove high frequency noise. The filter introduces a delay of
$D$ seconds, but the output of the filter is as close as possible to the desired
pulse shape.

A particular phase response is crucial for retaining a signal's shape
This is illustrated below:

A filter introducing delay, but retaining signal shape

[Figure 3A.4: the input pulse $v_i$ passes through a lowpass filter and emerges as $v_o$: the same shape, delayed by $D$]
To the pulse, the filter just looks like a delay. We can see that distortionless
transmission through a filter is characterised by a constant delay of the input
signal:

$$v_o(t) = K\,v_i(t - D) \qquad (3A.37)$$

Distortionless transmission defined

In Eq. (3A.37) we have also included the fact that all frequencies in the
passband of the filter can have their amplitudes multiplied by a constant
without affecting the wave shape. Also note that Eq. (3A.37) applies only to
frequencies in the passband of a filter; we do not care about any distortion in
the stopband.
We will now relate these distortionless transmission requirements to the phase
response of the filter. From Fourier analysis we know that any periodic signal
can be decomposed into an infinite summation of sinusoidal signals. Let one of
these be:

$$v_i = A\cos(\omega t) \qquad (3A.38)$$

Then the output of a distortionless filter is:

$$v_o = KA\cos\!\left(\omega(t - D)\right) = KA\cos(\omega t - \omega D) \qquad (3A.39)$$

The input and output signals differ only by the gain $K$ and the phase angle,
which is:

$$\angle H(\omega) = -\omega D \qquad (3A.40)$$

Distortionless transmission requires a linear phase

That is, the phase response must be a straight line with negative slope that
passes through the origin.

In general, the requirement for the phase response in the passband to achieve
distortionless transmission through the filter is:

$$-\frac{d\angle H(\omega)}{d\omega} = D \qquad (3A.41)$$

The delay $D$ in this case is referred to as the group delay. (This means the
group of sinusoids that make up the wave shape have a delay of $D$.)

The ideal lowpass filter can be expressed completely as:

$$H(f) = K\,\mathrm{rect}\!\left(\frac{f}{2B}\right)e^{-j2\pi fD} \qquad (3A.42)$$
Example
We would like to determine the maximum frequency for which transmission is
practically distortionless in the following simple filter:

Model of a short piece of co-axial cable, or twisted pair

[Figure 3A.5: RC lowpass filter with $R = 1\,\mathrm{k\Omega}$ and $C = 1\,\mathrm{nF}$, so that $\omega_0 = 1/RC$]
We would also like to know the group delay caused by this filter.
We know the magnitude and phase response already:

$$|H(\omega)| = \frac{1}{\sqrt{1 + (\omega/\omega_0)^2}} \qquad (3A.43a)$$

$$\angle H(\omega) = -\tan^{-1}(\omega/\omega_0) \qquad (3A.43b)$$
These responses are shown below:

The deviation from linear phase and constant magnitude for a simple first-order filter

[Figure 3A.6: $|H|$ falls to $1/\sqrt{2}$ at $\omega_0$; the phase $\angle H$ reaches $-\pi/4$ at $\omega_0$, compared with the ideal linear characteristic of slope $-1/\omega_0$]
Suppose we can tolerate a deviation in the magnitude response of 1% in the
passband. We then have:

$$\frac{1}{\sqrt{1 + (\omega/\omega_0)^2}} \ge 0.99 \qquad (3A.44)$$

which gives $\omega \le 0.1425\,\omega_0$.

Also, suppose we can tolerate a deviation in the delay of 1% in the passband.
We first find an expression for the delay:

$$D(\omega) = -\frac{d\angle H(\omega)}{d\omega} = \frac{1}{\omega_0}\cdot\frac{1}{1 + (\omega/\omega_0)^2} \qquad (3A.45)$$

and then impose the condition that the delay be within 1% of the delay at DC:

$$\frac{1}{1 + (\omega/\omega_0)^2} \ge 0.99 \qquad (3A.46)$$

which gives $\omega \le 0.1005\,\omega_0$.

We can see from Eqs. (3A.44) and (3A.46) that we must have $\omega \le 0.1\,\omega_0$ for
practically distortionless transmission. The delay for $\omega \le 0.1\,\omega_0$, according to
Eq. (3A.45), is approximately given by:

$$D \approx \frac{1}{\omega_0} = RC \qquad (3A.47)$$

Approximate group delay for a first-order lowpass circuit
For the values shown in Figure 3A.5, the group delay is approximately $1\,\mu s$. In
practice, variations in the magnitude transfer function up to the half-power
frequency are considered tolerable (this is the bandwidth, BW, of the filter).
Over this range of frequencies, the phase deviates from the ideal linear
characteristic by at most $1 - \pi/4 \approx 0.2146$ radians (see Figure 3A.6).
Frequencies well below $\omega_0$ are transmitted practically without distortion, but
frequencies in the vicinity of $\omega_0$ will suffer some distortion.
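The tolerance bounds above are easily reproduced with a few lines of Python:

```python
import math

# The 1% tolerance bounds of Eqs. (3A.44) and (3A.46), expressed in terms of
# the normalised frequency r = w/w0, and the DC group delay for the values
# shown in Figure 3A.5.
r_mag = math.sqrt(1 / 0.99**2 - 1)   # magnitude within 1%:  r <= 0.1425
r_del = math.sqrt(1 / 0.99 - 1)      # delay within 1%:      r <= 0.1005
print(round(r_mag, 4))               # 0.1425
print(round(r_del, 4))               # 0.1005

R, C = 1e3, 1e-9                     # 1 kOhm, 1 nF
print(R * C)                         # about 1e-06 s: a group delay of roughly 1 us
```

The delay condition is the tighter one, which is why $\omega \le 0.1\,\omega_0$ is taken as the practical limit.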
The ideal filter is unrealizable. To show this, take the inverse Fourier transform
of the ideal filter's frequency response, Eq. (3A.42):

$$h(t) = 2BK\,\mathrm{sinc}\!\left(2B(t - D)\right) \qquad (3A.48)$$

An ideal filter is unrealizable because it is non-causal

It should be clear that the impulse response is not zero for $t < 0$, and the filter
is therefore not causal (how can there be a response to an impulse at $t = 0$
before it is applied?). One way to design a real filter is simply to multiply
Eq. (3A.48) by $u(t)$, the unit step.
What does a Filter do to a Signal?
Filtering a periodic
signal
Passing a periodic signal through a filter will distort the signal (so called linear
distortion) because the filter will change the relative amplitudes and phases of
the sinusoids that make up its Fourier series. Once we can calculate the
amplitude change and phase shift that the filter imposes on an arbitrary
sinusoid we are in a position to find out how each sinusoid is affected, and
hence synthesise the filtered waveform. In general, the output Fourier
transform is just the input Fourier transform multiplied by the filter frequency
response H f .
[Figure 3A.7: a signal $x(t)$ applied to an RC lowpass filter, $H(f) = \dfrac{1}{j2\pi fRC + 1}$, produces $y(t)$]
For a periodic input with Fourier series coefficients $X_n$ and fundamental
frequency $f_0$, the output transform is:

$$Y(f) = H(f)\,X(f) \qquad (3A.49)$$

where the input spectrum is a train of impulses:

$$X(f) = \sum_{n=-\infty}^{\infty} X_n\,\delta(f - nf_0) \qquad (3A.50)$$

and therefore:

$$Y(f) = \sum_{n=-\infty}^{\infty} H(nf_0)\,X_n\,\delta(f - nf_0) \qquad (3A.51)$$

Spectrum of a filtered periodic signal
Example
[Figure: input line spectrum; the amplitude values are lost in the source, and phases of $-30°$ and $-90°$ are shown]

What is the Fourier series of the output if the signal is passed through a filter
with transfer function $H(f) = j4\pi f$? The period is 2 seconds.

There are 3 components in the input signal, with frequencies 0, 0.5 Hz and
1 Hz. The complex gain of the filter at each frequency is:

Filter gain table:

    f (Hz)    H(f)                Gain    Phase shift
    0         0                   0       -
    0.5       j4π(0.5) = j2π      2π      +90°
    1         j4π(1)  = j4π       4π      +90°

[Figure: output line spectrum; the output amplitude and phase values are only partially legible in the source]
Example
Suppose the same signal was sent through a filter with transfer function as
sketched:

[Figure 3A.8: sketched $|H(f)|$, with values 1 and 0.5 marked over roughly $-1 \le f \le 1$ Hz, and the corresponding phase response $\angle H(f)$]

[The resulting output amplitude and phase values are only partially legible in the source; phases of $-120°$ and $-270°$ appear]
Sampling
Sampling is one of the most important operations we can perform on a signal.
Samples can be quantized and then operated upon digitally (digital signal
processing). Once processed, the samples are turned back into a continuous-time
waveform (e.g. CD, mobile phone!). Here we demonstrate how, if certain
parameters are right, a sampled signal can be reconstructed from its samples
almost perfectly.

Sampling is one of the most important things we can do to a continuous-time
signal, because we can then process it digitally

An ideal sampler multiplies a signal by a uniform train of impulses

[Figure 3A.9: $g(t)\,p(t) = g_s(t)$, where $p(t)$ is a uniform train of impulses at $\ldots, -2T_s, -T_s, 0, T_s, 2T_s, \ldots$]
Thus, when we ideally sample a signal $g(t)$ by multiplying it by a uniform
impulse train with impulses at $t = kT_s$, we get:

$$g_s(t) = g(t)\sum_{k=-\infty}^{\infty}\delta(t - kT_s) \qquad (3A.52)$$
[Figure 3A.10: the signal $g(t)$, the impulse train $p(t)$ of period $T_s$, and the sampled signal $g_s(t)$: a train of impulses whose weights follow the envelope $g(t)$]
That is, for ideal sampling, the original signal forms an envelope for the train
of impulses; we have simply generated a weighted train of impulses, where
each weight takes on the signal value at that particular instant of time.

Note that it is physically impossible to ideally sample a waveform, since we
cannot create a real function containing impulses. In practice, we use an
analog-to-digital converter to get actual values of a signal (e.g. a voltage).
Then, when we perform digital signal processing (DSP) on the signal, we
understand that we should treat the sample value as the weight of an impulse.

To practically sample a waveform using analog circuitry, we have to use
finite-value pulses. It will be shown later that it doesn't matter what pulse shape is
used for the sampling waveform; it could be rectangular, triangular, or indeed
any shape.
Taking the Fourier transform of both sides of Eq. (3A.52):

$$G_s(f) = f_s\sum_{n=-\infty}^{\infty} G(f - nf_s) \qquad (3A.53)$$

where $f_s = 1/T_s$ is the sample rate.
A sampled signal's spectrum is a scaled replica of the original, periodically repeated

[Figure 3A.11: the original spectrum $G(f)$, bandlimited to $B$; the sampling spectrum $P(f)$, impulses of weight $f_s$ at multiples of $f_s$; and the sampled spectrum $G_s(f)$: copies of $G(f)$, scaled by $f_s$, centred on $0, \pm f_s, \pm 2f_s, \ldots$]
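The spectral replication of Eq. (3A.53) can be seen numerically. The Python sketch below (all values assumed) approximates ideal sampling by keeping every 50th point of a densely sampled 2 Hz cosine and zeroing the rest, then locates the peaks of the resulting spectrum:

```python
import numpy as np

# Approximate ideal sampling: keep every M-th point of a "continuous" signal.
# Scaled copies of the baseband spectrum appear around every multiple of
# fs = 1/(M dt) = 20 Hz, as Eq. (3A.53) predicts.
dt = 1e-3
t = np.arange(0, 1, dt)                # 1 s of densely sampled time
g = np.cos(2 * np.pi * 2 * t)          # 2 Hz signal (B = 2 Hz)

M = 50                                 # Ts = 50 ms  ->  fs = 20 Hz > 2B
gs = np.zeros_like(g)
gs[::M] = g[::M]                       # one retained sample per "impulse"

G = np.abs(np.fft.rfft(gs)) / len(t)
f = np.fft.rfftfreq(len(t), dt)
peaks = f[G > 0.5 * G.max()]
print(peaks[:6])                       # 2, 18, 22, 38, 42, 58 Hz: replicas at n*fs +/- 2
```

The baseband component at 2 Hz reappears, equally strong, around every multiple of $f_s = 20$ Hz.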
Sampling and Periodicity

The same mathematics applies to sampling in the frequency domain. If we
multiply a spectrum $G(f)$ by a uniform train of impulses spaced $f_0$ apart,
then we have $S(f) = \sum_{n} G(nf_0)\,\delta(f - nf_0)$:

An ideally sampled spectrum is a train of impulses; each impulse is weighted
by the original spectrum

[Figure 3A.12: $S(f)$, a train of impulses at multiples of $f_0$ whose weights follow $G(f)$]

The corresponding time-domain signal is periodic; it is the original signal
repeated every $T_0 = 1/f_0$ seconds:

$$s(t) = T_0\sum_{k=-\infty}^{\infty} g(t - kT_0) \qquad (3A.54)$$

[Figure 3A.13: $s(t)$, the periodic repetition of $g(t)$ with period $T_0$]

Sampling in one domain implies periodicity in the other
Reconstruction
If a sampled signal $g_s(t)$ is applied to an ideal lowpass filter of bandwidth $B$,
the only component of the spectrum $G_s(f)$ that is passed is just the original
spectrum $G(f)$.

We recover the original spectrum by lowpass filtering

[Figure 3A.14: $G_s(f)$ applied to an ideal lowpass filter yields $G(f)$]

[Figure 3A.15: the replicas of $G(f)$ in $G_s(f)$ centred on $\pm f_s, \pm 2f_s, \ldots$ lie outside the passband of a lowpass filter of bandwidth $B$ and gain $1/f_s$, so only the baseband copy $G(f)$ remains]
Hence the time-domain output of the filter is equal to $g(t)$, which shows that
the original signal can be completely and exactly reconstructed from the
sampled waveform $g_s(t)$.

Lowpass filtering a sampled signal's spectrum results in the original signal's
spectrum; an operation that is easy to see in the frequency-domain!
Graphically, in the time-domain:

A weighted train of impulses turns back into the original signal after lowpass
filtering; an operation that is not so easy to see in the time-domain!

[Figure 3A.16: the impulse train $g_s(t)$ passes through a lowpass filter and emerges as the smooth original $g(t)$]
We can't sample and reconstruct perfectly, but we can get close!

There are some limitations to perfect reconstruction though. One is that
time-limited signals are not bandlimited (e.g. a rect time-domain waveform has a
spectrum which is a sinc function, which has infinite frequency content). Any
time-limited signal therefore cannot be perfectly reconstructed, since there is
no sample rate high enough to ensure repeats of the original spectrum do not
overlap. However, many signals are essentially bandlimited, which means
spectral components higher than, say $B$, do not make a significant contribution
to either the shape or energy of the signal.
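Reconstruction can also be carried out directly in the time domain: the ideal lowpass filter's sinc impulse response interpolates between the samples. A Python sketch (signal and rates are assumed values) of this Whittaker-Shannon interpolation:

```python
import numpy as np

# Whittaker-Shannon interpolation: g(t) = sum_k g(k Ts) sinc((t - k Ts)/Ts),
# i.e. the sampled signal passed through the ideal lowpass reconstruction filter.
B = 3.0                              # signal bandwidth (Hz), assumed
fs = 10.0                            # sample rate > 2B
Ts = 1.0 / fs

def g(t):
    return np.cos(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * B * t)

k = np.arange(-2000, 2001)           # many sample indices, so the sum converges
t = np.linspace(-1, 1, 501)          # evaluation instants between the samples
recon = np.array([np.sum(g(k * Ts) * np.sinc((ti - k * Ts) / Ts)) for ti in t])

print(np.max(np.abs(recon - g(t))))  # small: limited only by truncating the sum
```

The residual error comes purely from truncating the infinite sum; a true bandlimited signal sampled above the Nyquist rate is recovered exactly in the limit.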
Aliasing
We saw that sampling in one domain implies periodicity in the other. If the
function being made periodic has an extent that is smaller than the period, there
will be no resulting overlap, and hence it will be possible to recover the
continuous (unsampled) function by windowing out just one period from the
domain displaying periodicity.

We have to ensure no spectral overlap when sampling

Nyquist's sampling criterion is the formal expression of the above fact:

Perfect reconstruction of a sampled signal is possible if the sample rate
satisfies:

$$f_s \ge 2B \qquad (3A.55)$$

Nyquist's sampling criterion
The frequency $f_s/2$ is called the spectral fold-over frequency, and it is
determined only by the selected sampling rate. The frequency $B$ is termed the
Nyquist frequency, and it is determined only by the characteristics of the
signal being sampled; it is independent of the selected sampling rate. Do not
confuse these two independent entities! The Nyquist frequency is a lower bound
for the fold-over frequency, in the sense that failure to select a fold-over
frequency at or above the Nyquist frequency will result in spectral aliasing and
loss of the capability to reconstruct a continuous-time signal from its samples
without error. The Nyquist frequency for a signal which is not bandlimited is
infinity; that is, there is no finite sample rate that would permit errorless
reconstruction of the continuous-time signal from its samples.

Fold-over frequency defined; Nyquist frequency defined
The Nyquist rate is defined as $f_N = 2B$, and is not to be confused with the
similar term Nyquist frequency. The Nyquist rate is $2B$, whereas the Nyquist
frequency is $B$. To prevent aliasing, we need to sample at a rate greater than the
Nyquist rate, i.e. $f_s > f_N$.
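Aliasing is easy to demonstrate numerically. In the Python sketch below (assumed values), a 7 Hz sinusoid sampled at only 10 Sa/s (below its Nyquist rate of 14 Sa/s) produces samples identical to those of a 3 Hz sinusoid: the 7 Hz component has folded over to $f_s - 7 = 3$ Hz:

```python
import numpy as np

# Undersampling a 7 Hz sinusoid at fs = 10 Sa/s aliases it to fs - 7 = 3 Hz.
fs = 10.0
n = np.arange(40)                            # 40 sample instants n/fs
x_hi = np.cos(2 * np.pi * 7 * n / fs)        # undersampled 7 Hz signal
x_alias = np.cos(2 * np.pi * 3 * n / fs)     # 3 Hz signal sampled the same way

print(np.max(np.abs(x_hi - x_alias)))        # numerically zero: indistinguishable
```

Once sampled, no processing can tell the two apart, which is why the anti-alias filtering described next must happen before the sampler.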
To illustrate aliasing, consider the case where we have failed to select the
sample rate higher than twice the bandwidth, B, of a lowpass signal. The
sampled spectrum is shown below, where the repeats of the original spectrum
now overlap:

An illustration of aliasing in the frequency-domain

[Figure 3A.17: sampled spectrum $X_s(f)$ with $f_s < 2B$; the replicas overlap, and high-frequency components fold over about the fold-over frequency $f_s/2$]

[Figure 3A.18: the "reconstructed" spectrum $X_r(f)$, obtained by lowpass filtering, differs from the original $X(f)$ near $B$ because of the folded-over high-frequency components]

How to avoid aliasing
Practical Sampling and Reconstruction
To prevent aliasing, practical systems are constructed so that the
signal-to-be-sampled is guaranteed to meet Nyquist's sampling criterion. They do this by
passing the original signal through a lowpass filter, known as an anti-alias
filter (AAF), to ensure that no frequencies are present above the fold-over
frequency, $f_s/2$:

We have to ensure no spectral overlap when sampling

[Figure 3A.19: the anti-alias filter truncates the input spectrum $V_i(f)$ at $\pm f_s/2$, producing $V_o(f)$]
Figure 3A.19
If the system is designed correctly, then the high frequency components of the
original signal that are rejected by the AAF are not important in that they
carry little energy and/or information (e.g. noise). Sampling then takes place at
a rate of $f_s$ Sa/s. The reconstruction filter has the same bandwidth as the AAF,
but a different passband magnitude to correct for the sampling process.
A practical sampling scheme is shown below:

[Figure 3A.20: $v_i$ → Anti-alias Filter → ADC → Digital Signal Processor → DAC → Reconstruction Filter → $v_o$]
Summary of the Sampling and Reconstruction Process
Time-Domain and Frequency-Domain

[Figure 3A.21: the complete sampling and reconstruction process, shown in both domains. Sampling: $g(t)$, with spectrum $G(f)$ bandlimited to $B$ and peak $A$, is multiplied by the impulse train $s(t)$ (period $T_s$; spectrum: impulses of weight $f_s$ at multiples of $f_s$), giving $g_s(t)$ whose spectrum $G_s(f)$ consists of replicas of $G(f)$ scaled by $f_s$. Reconstruction: $g_s(t)$ is convolved with the reconstruction filter's impulse response $h(t)$ (ideal lowpass, bandwidth $f_s/2$, gain $T_s$), recovering $g_r(t)$ with spectrum $G_r(f) = G(f)$]
Finding the Fourier Series of a Periodic Function from the
Fourier Transform of a Single Period
The quick way to determine Fourier series coefficients

It is usually easier to find the Fourier transform of a single period than to
perform the integration needed to find Fourier series coefficients (because
all the standard Fourier properties can be used). This method allows the
Fourier series coefficients to be determined directly from the Fourier
transform, provided the period is known. Don't forget: only periodic functions
have a Fourier series representation.
Suppose we draw one period of a periodic waveform:

A single period

[Figure 3A.22: a single pulse $g_1(t)$ of height 1 and width $T_1$]

To create the periodic waveform, we convolve this single period with a
uniform train of impulses spaced $T_0$ apart:

[Figure 3A.23: a uniform train of impulses at $0, \pm T_0, \pm 2T_0, \ldots$]

Thus $g_p(t)$, the periodic waveform, is:

$$g_p(t) = g_1(t) * \sum_{k=-\infty}^{\infty}\delta(t - kT_0) \qquad (3A.56)$$

[Figure 3A.24: the periodic waveform $g_p(t)$, the pulse repeated every $T_0$]

Taking the Fourier transform of both sides, and remembering that convolution
in the time-domain is multiplication in the frequency-domain:

$$\mathcal{F}\{g_p(t)\} = \mathcal{F}\{g_1(t)\}\cdot\mathcal{F}\!\left\{\sum_{k}\delta(t - kT_0)\right\} = G_1(f)\,f_0\sum_{n=-\infty}^{\infty}\delta(f - nf_0) \qquad (3A.57)$$

so that the weight of the impulse at $f = nf_0$, which is the $n$th Fourier series
coefficient, is:

$$G_n = f_0\,G_1(nf_0) \qquad (3A.58)$$
Graphically, the operation indicated by Eq. (3A.57) takes the original spectrum
and multiplies it by a train of impulses, effectively creating a weighted train
of impulses:

Sampling in the frequency domain produces a periodic waveform in the time-domain

[Figure 3A.25: the continuous spectrum $G_1(f)$ (a sinc of peak $T_1$), and the line spectrum $G_p(f)$: impulses at multiples of $f_0$ with weights $f_0 G_1(nf_0)$ and peak weight $f_0 T_1$]
According to Eq. (3A.58), the Fourier series coefficients are just the weights of
the impulses in the spectrum of the periodic function. To get the $n$th Fourier
series coefficient, use the weight of the impulse located at $nf_0$.

This is in perfect agreement with the concept of a continuous spectrum: if a
pair of impulses exists at a certain frequency, then there is a finite amplitude
sinusoid at that frequency.

Remember that pairs of impulses in the spectrum represent a sinusoid in the time-domain
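Eq. (3A.58) is easy to test numerically. The Python sketch below (assumed values: a pulse of width $T_1 = 0.25\,s$ repeated every $T_0 = 1\,s$) compares $f_0 G_1(nf_0)$, using $G_1(f) = T_1\,\mathrm{sinc}(fT_1)$, against direct numerical integration of the Fourier series coefficient:

```python
import numpy as np

# Compare Gn = f0 G1(n f0), with G1(f) = T1 sinc(f T1), against the direct
# Fourier series integral Gn = (1/T0) int g(t) exp(-j 2 pi n f0 t) dt.
T0, T1 = 1.0, 0.25
f0 = 1.0 / T0

def Gn_from_transform(m):
    # G1(f) = T1 sinc(f T1) is the transform of one period (pulse centred on 0)
    return f0 * T1 * np.sinc(m * f0 * T1)

dt = 1e-5
t = np.arange(-T0 / 2, T0 / 2, dt)
g1 = (np.abs(t) <= T1 / 2).astype(float)    # one period of the waveform

for m in range(4):
    Gn_direct = np.sum(g1 * np.exp(-2j * np.pi * m * f0 * t)) * dt / T0
    print(m, abs(Gn_direct - Gn_from_transform(m)))   # agreement to about 1e-5
```

The small residual comes only from the Riemann-sum approximation at the pulse edges.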
Windowing in the Time Domain
Often we wish to deal with only a segment of a signal, say from $t = 0$ to $t = T$.
Sometimes we have no choice, as this is the only part of the signal we have
access to; our measuring instrument has restricted us to a "window" of
duration $T$ beginning at $t = 0$. Outside this window the signal is forced to be
zero. How is the signal's Fourier transform affected when the signal is viewed
through a window?

Some practical effects of looking at signals over a finite time

Windowing defined

[Figure 3A.26: a sinusoid multiplied by a rectangular window of duration $T$]

As an example, consider a 1 Hz sinusoid viewed through a rectangular window
of 1 second's duration:

$$g_w(t) = \sin(2\pi t)\,\mathrm{rect}\!\left(t - \tfrac{1}{2}\right) \qquad (3A.60)$$
3A.33
The Fourier transform will be the original pair of impulses at $\pm 1$ Hz
convolved with the transform of the window, $\mathrm{sinc}(f)\,e^{-j\pi f}$:

$$G_w(f) = \left[\frac{j}{2}\delta(f+1) - \frac{j}{2}\delta(f-1)\right] * \mathrm{sinc}(f)\,e^{-j\pi f}
= \frac{j}{2}\,\mathrm{sinc}(f+1)\,e^{-j\pi(f+1)} - \frac{j}{2}\,\mathrm{sinc}(f-1)\,e^{-j\pi(f-1)} \qquad (3A.61)$$
[Figure 3A.27: the original spectrum $G(f)$, impulses of weight $\tfrac{1}{2}\angle 90°$ at $f = -1$ Hz and $\tfrac{1}{2}\angle{-90°}$ at $f = +1$ Hz, and the windowed magnitude spectrum $|G_w(f)|$: sinc-shaped lobes of peak 0.5 centred on $\pm 1$ Hz]
If the window were changed to 4 seconds, we would then have:

The longer we look, the better the spectrum

[Figure 3A.28: with a 4 s window, the sinc-shaped lobes of $|G_w(f)|$ are four times taller (peak 2) and four times narrower, much closer to the original pair of impulses]
Obviously, the longer the window, the more accurate the spectrum becomes.
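This can be confirmed numerically. The Python sketch below (taking the signal to be $\sin 2\pi t$, consistent with the figures above) evaluates the windowed-spectrum magnitude at the 1 Hz peak for window lengths of 1 s and 4 s; the heights $T/2 = 0.5$ and 2 match Figures 3A.27 and 3A.28:

```python
import numpy as np

def peak_of_windowed_spectrum(T):
    # |G_w(1)| = |int_0^T sin(2 pi t) exp(-j 2 pi t) dt|, by a Riemann sum
    dt = 1e-4
    t = np.arange(0, T, dt)
    return abs(np.sum(np.sin(2 * np.pi * t) * np.exp(-2j * np.pi * t)) * dt)

print(round(peak_of_windowed_spectrum(1.0), 3))   # 0.5: the peak grows as T/2
print(round(peak_of_windowed_spectrum(4.0), 3))   # 2.0
```

Doubling the observation time doubles the spectral peak while halving the main-lobe width, so the estimate of the underlying line spectrum sharpens.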
Example
Find the Fourier transform of $\mathrm{sinc}(t)$ when it is viewed through a window from
$-2$ to $2$ seconds:

Viewing a sinc function through a rectangular window

[Figure 3A.29: $g_w(t)$, the function $\mathrm{sinc}(t)$ truncated to $-2 \le t \le 2$]

We have:

$$g_w(t) = \mathrm{sinc}(t)\,\mathrm{rect}(t/4) \qquad (3A.62)$$

and, since multiplication in the time-domain is convolution in the frequency-domain:

$$G_w(f) = \mathrm{rect}(f) * 4\,\mathrm{sinc}(4f) \qquad (3A.63)$$
Graphically:

Ripples in the spectrum caused by a rectangular window

[Figure 3A.30: $\mathrm{rect}(f)$ convolved with $4\,\mathrm{sinc}(4f)$ gives an approximately rectangular spectrum with ripples, most pronounced near the transitions at $f = \pm 0.5$ Hz]
Practical Multiplication and Convolution
We have learned about multiplication and convolution as mathematical
operations that we can apply in either the time or frequency domain.
Remember, however, that the time and frequency domain are just two ways of
describing the same signal: a time-varying voltage that we measure across two
terminals. What do we need to physically do to our signal to perform an
operation equivalent to, say, multiplying its spectrum by some function?
Two physical operations we can do on signals are multiplication (with another
signal) and filtering (with a filter with a defined transfer function).

Multiplication in the time-domain is convolution in the frequency-domain

[Figure 3A.31: a multiplier forms $x(t)\,y(t)$ from the inputs $x(t)$ and $y(t)$]

The time-domain output is $x(t)$ multiplied by $y(t)$:

$$x(t)\,y(t) \qquad (3A.64a)$$

and the corresponding frequency-domain output is the convolution:

$$X(f) * Y(f) \qquad (3A.64b)$$
Convolution in the time-domain is multiplication in the frequency-domain

[Figure 3A.32: $x(t)$ applied to a filter specified by $H(f)$ or $h(t)$ produces $y(t) = h(t) * x(t)$]

The time-domain output is $x(t)$ convolved with $h(t)$:

$$y(t) = h(t) * x(t) \qquad (3A.65a)$$

and the corresponding frequency-domain output is the product:

$$Y(f) = H(f)\,X(f) \qquad (3A.65b)$$
Summary
The output of an LTI system due to any input signal is obtained most easily
by considering the spectrum: $Y(f) = H(f)X(f)$. This expresses the
important property: convolution in the time-domain is equivalent to
multiplication in the frequency-domain.

Filters are devices best thought about in the frequency-domain. They are
frequency selective devices, changing both the magnitude and phase of the
sinusoidal components of a signal.
References
Haykin, S.: Communication Systems, John-Wiley & Sons, Inc., New York,
1994.
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Lathi, B. P.: Modern Digital and Analog Communication Systems, Holt-Saunders, Tokyo, 1983.
Quiz
Encircle the correct answer, cross out the wrong answers. [one or none correct]
1. The convolution $x(t) * y(t)$ is:

(a) $\int_{-\infty}^{\infty} x(\lambda)\,y(t - \lambda)\,d\lambda$   (b) $\int_{-\infty}^{\infty} x(\lambda)\,y(t + \lambda)\,d\lambda$   (c) $\int_{-\infty}^{\infty} X(\lambda)\,Y(f - \lambda)\,d\lambda$
2. [The question figures are lost in the source: an input $v_i$ passes through an ideal filter with the sketched $|H(f)|$; candidate outputs $v_o$ and an impulse response $h(t)$ were shown as options (a), (b) and (c)]
3.
The Fourier transform of one period of a periodic waveform is G(f). The
Fourier series coefficients, Gn , are given by:
(a) $nf_0\,G(f_0)$   (b) $G(nf_0)$   (c) $f_0\,G(nf_0)$
4. [The waveforms $x(t)$ and $y(t)$ in this question are lost in the source.] The value of their convolution is:

(a) 9   (b) 4.5   (c) 6
5. The scaling property of the Fourier transform is:

(a) $g(at) \Leftrightarrow G\!\left(\frac{f}{a}\right)$   (b) $g(at) \Leftrightarrow \frac{1}{a}G(f)$   (c) $ag(t) \Leftrightarrow aG(f)$

Answers: 1. a 2. c 3. c 4. x 5. x
Exercises
1.
Calculate the magnitude and phase of the 4 kHz component in the spectrum of
the periodic pulse train shown below. The pulse repetition rate is 1 kHz.
[Figure: periodic pulse train $x(t)$ taking the values 10 and $-5$, with breakpoints at $t = 0.1, 0.4, 0.5, 0.6, 0.9$ and 1 ms]
2.
By relating the triangular pulse shown below to the convolution of a pair of
identical rectangular pulses, deduce the Fourier transform of the triangular
pulse:
[Figure: triangular pulse $x(t)$]
3.
The pulse $x(t) = 2B\,\mathrm{sinc}(2Bt)\,\mathrm{rect}(Bt/8)$ has ripples in the amplitude spectrum.
What is the spacing in frequency between positive peaks of the ripples?
4.
A signal is to be sampled with an ideal sampler operating at 8000 samples per
second. Assuming an ideal low pass anti-aliasing filter, how can the sampled
signal be reconstituted in its original form, and under what conditions?
5.
A train of impulses in one domain implies what in the other?
6.
The following table gives information about the Fourier series of a periodic
waveform, g t , which has a period of 50 ms.
Table 1

    Harmonic #    Amplitude    Phase (°)
    0             1            -
    1             3            -30
    2             1            -30
    3             1            -60
    4             0.5          -90

(a)
Give the frequencies of the fundamental and the 2nd harmonic. What is
the signal power, assuming that g t is measured across 50 ?
(b) [part of the question lost in the source]

(c) The signal $g(t)$ is applied to an amplifier with frequency response $H(f)$ as shown below.
[Figure: $|H(f)|$ with values 1 and 0.5 marked, and $\angle H(f)$, both with breakpoints at $\pm 50$ and $\pm 100$ Hz]
Draw up a table in the same form as Table 1 of the Fourier series of the
output waveform. Is there a DC component in the output of the
amplifier?
7.
A signal, bandlimited to 1 kHz, is sampled by multiplying it with a rectangular
pulse train with repetition rate 4 kHz and pulse width 50 µs. Can the original
signal be recovered without distortion, and if so, how?
8.
A signal is to be analysed to identify the relative amplitudes of components
which are known to exist at 9 kHz and 9.25 kHz. To do the analysis a digital
storage oscilloscope takes a record of length 2 ms and then computes the
Fourier series. The 18th harmonic thus computed can be non-zero even when
no 9 kHz component is present in the input signal. Explain.
9.
Use MATLAB to determine the output of a simple RC lowpass filter
subjected to a square wave input given by:

$$x(t) = \sum_{n=-\infty}^{\infty}\mathrm{rect}(t - 2n)$$

[the remainder of the question is lost in the source]
Lecture 3B Amplitude Modulation
Modulation. Double-sideband, suppressed-carrier (DSB-SC) modulation.
Amplitude modulation (AM). Single sideband (SSB) Modulation. Quadrature
amplitude modulation (QAM).
Introduction
The modern world cannot exist without communication systems. A model of a
typical communication system is:

A communication system

[Figure 3B.1: Source → Input transducer → message signal → Transmitter → transmitted signal → Channel (where distortion and noise enter) → received signal → Receiver → output signal → Output transducer → Destination]
The source originates the message, such as a human voice, a television picture,
or data. If the data is nonelectrical, it must be converted by an input transducer
into an electrical waveform referred to as the baseband signal or the message
signal.
The transmitter modifies the baseband signal for efficient transmission.
The channel is a medium such as a twisted pair, coaxial cable, a waveguide,
an optical fibre, or a radio link through which the transmitter output is sent.
The receiver processes the signal from the channel by undoing the signal
modifications made at the transmitter and by the channel.
The receiver output is fed to the output transducer, which converts the
electrical signal to its original form.
The destination is the thing to which the message is communicated.
3B.2
The Communication Channel
A channel acts partly as a filter: it attenuates the signal and distorts the
waveform. The channel also adds noise to the signal. In short, the
communication channel attenuates and distorts the signal, and adds noise.
3B.3
Analog and Digital Messages
Messages are analog or digital. In an analog message, the waveform is
important, and even a slight distortion or interference in the waveform will
cause an error in the received signal. On the other hand, digital messages are
transmitted using a finite set of electrical waveforms. Easier message
extraction in the presence of noise, together with regeneration of the original
digital signal, means that digital communication can transmit messages with
greater accuracy than an analog system in the presence of distortion and noise.
This is why digital communication is so prevalent, and analog communication
is all but obsolete.
Baseband and Carrier Communication
Some baseband signals produced by various information sources are suitable
for direct transmission over a given communication channel; this is called
baseband communication.

Communication that uses modulation to shift the frequency spectrum of a
signal is known as carrier communication. In this mode, one of the basic
parameters (amplitude, frequency or phase) of a high-frequency sinusoidal
carrier is varied in proportion to the baseband signal m(t).

Modulation
Baseband signals produced by various information sources are not always
suitable for direct transmission over a given channel. These signals are then
modified by a process called modulation: the amplitude, frequency, or phase
of a high-frequency carrier is varied by the message signal.
3B.4
The figure below shows a baseband signal m(t) and the corresponding AM
and FM waveforms:

[Figure: examples of AM and FM modulated waveforms — the carrier, the
amplitude-modulated wave, and the frequency-modulated wave, plotted
against time]

Figure 3B.2
At the receiver, the modulated signal must pass through a reverse process
called demodulation in order to retrieve the baseband signal.
3B.5
Modulation facilitates the transmission of information for the following
reasons.

Ease of Radiation
For efficient radiation of electromagnetic energy, the radiating antenna should
be of the order of one-tenth of the wavelength of the signal radiated. For many
baseband signals, the wavelengths are too large for reasonable antenna
dimensions. We modulate a high-frequency carrier, thus translating the signal
spectrum to the region of carrier frequencies that corresponds to a much
smaller wavelength. Modulation is therefore used to achieve practical antenna
sizes (broadcasting).

Simultaneous Transmission of Several Signals
We can translate many different baseband signals into separate channels by
using different carrier frequencies. If the carrier frequencies are chosen
sufficiently far apart in frequency, the spectra of the modulated signals will not
overlap and thus will not interfere with each other; many messages can be
transmitted simultaneously through a single channel. At the receiver, a tuneable
bandpass filter is used to select the desired signal. This method of transmitting
several signals simultaneously is known as frequency-division multiplexing
(FDM). A type of FDM, known as orthogonal frequency-division multiplexing
(OFDM), is at the heart of Wi-Fi and was invented at the CSIRO!
3B.6
Double-Sideband, Suppressed-Carrier (DSB-SC) Modulation
Let x(t) be a signal such as an audio signal that is to be transmitted through a
cable or the atmosphere. In amplitude modulation (AM), the signal modifies
(or modulates) the amplitude of a carrier sinusoid cos(ω_c t). In one form of
AM transmission, the signal x(t) and the carrier cos(ω_c t) are simply
multiplied together. The process is illustrated below:

[Figure: DSB-SC modulation — x(t) is applied to a signal multiplier whose
other input, cos(2πf_c t), comes from a local oscillator]

Figure 3B.3

The local oscillator in Figure 3B.3 is a device that produces the sinusoidal
signal cos(ω_c t). The multiplier is implemented with a non-linear device, and is
usually an integrated circuit at low frequencies.

By the multiplication property of Fourier transforms, the output spectrum is
obtained by convolving the spectrum of x(t) with the spectrum of cos(2πf_c t).
We now restate a very important property of convolution involving an impulse:

X(f) * δ(f - f_0) = X(f - f_0)     (3B.1)
(3B.1)
3B.7
The output spectrum of the modulator is therefore:

Y(f) = X(f) * (1/2)[δ(f - f_c) + δ(f + f_c)]
     = (1/2)[X(f - f_c) + X(f + f_c)]     (3B.2)

The spectrum of the modulated signal is a replica of the signal spectrum but
shifted up in frequency. If the signal has a bandwidth equal to B, then the
modulated signal spectrum has an upper sideband from f_c to f_c + B and a
lower sideband from f_c - B to f_c, and the process is therefore called
double-sideband transmission, or DSB transmission for short. An example of
modulation is given below in the time-domain:
[Figure: DSB-SC in the time-domain — the signal multiplier forms
y(t) = x(t)cos(2πf_c t); the message x(t), the local oscillator output
cos(2πf_c t), and the modulated signal y(t) are sketched against t]

Figure 3B.4
3B.8
And in the frequency domain:

[Figure: DSB-SC in the frequency-domain — the message spectrum X(f), of
unit height and centred on f = 0, is convolved with the oscillator spectrum
(1/2)[δ(f - f_c) + δ(f + f_c)] to give
Y(f) = (1/2)[X(f - f_c) + X(f + f_c)],
two replicas of height 1/2 centred on ±f_c]

Figure 3B.5
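The sideband structure of Eq. (3B.2) can be checked numerically. The sketch below uses illustrative frequencies that are not taken from the notes (a 100 Hz tone message and a 1 kHz carrier) and locates the two spectral lines at f_c ± f_m:

```python
import numpy as np

# A numerical sketch of DSB-SC: a 100 Hz "message" multiplied by a 1 kHz
# carrier produces sidebands at 900 Hz and 1100 Hz (illustrative values).
fs = 8000                       # sample rate (Hz)
t = np.arange(0, 1, 1/fs)       # 1 second of samples
fm, fc = 100, 1000              # message and carrier frequencies
x = np.cos(2*np.pi*fm*t)        # message signal
y = x * np.cos(2*np.pi*fc*t)    # DSB-SC modulated signal

Y = np.abs(np.fft.rfft(y)) / len(y)      # one-sided amplitude spectrum
f = np.fft.rfftfreq(len(y), 1/fs)

# The two largest spectral lines sit at fc - fm and fc + fm,
# each with amplitude 1/4 (half of the message line's 1/2).
peaks = f[np.argsort(Y)[-2:]]
print(sorted(peaks))            # [900.0, 1100.0]
```

The message line of height 1/2 is split into two lines of height 1/4, exactly the factor of 1/2 in Eq. (3B.2).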
Modulation lets us share the spectrum, and achieves practical propagation.
3B.9
Demodulation
The reconstruction of x(t) from x(t)cos(ω_c t) is called demodulation. There are
many ways to demodulate a signal; here we will consider one common method
called synchronous or coherent demodulation.

[Figure: coherent demodulation of a DSB-SC signal — x(t)cos(2πf_c t) is
multiplied by the local oscillator output cos(2πf_c t) to form
x(t)cos²(2πf_c t), which is then lowpass filtered to recover x(t)]

Figure 3B.6

The first stage of the demodulation process involves applying the modulated
waveform x(t)cos(ω_c t) to a multiplier. The other signal applied to the
multiplier is a local oscillator which is assumed to be synchronized with the
carrier signal cos(ω_c t), i.e. there is no phase shift between the carrier and the
signal generated by the local oscillator. Coherent demodulation therefore
requires a carrier at the receiver that is synchronized with the transmitter.

The output of the multiplier is:

(1/2)[X(f - f_c) + X(f + f_c)] * (1/2)[δ(f - f_c) + δ(f + f_c)]
    = (1/2)X(f) + (1/4)[X(f - 2f_c) + X(f + 2f_c)]     (3B.3)
3B.10
Another way to think of this is in the time-domain:

x(t)cos²(2πf_c t) = (1/2)x(t)[1 + cos(4πf_c t)]     (3B.4)

Therefore, it is easy to see that lowpass filtering, with a gain of 2, will produce
x(t). An example of demodulation in the time-domain is given below:
[Figure: demodulation in the time-domain — the received signal
x(t)cos(2πf_c t) is multiplied by the local oscillator output cos(2πf_c t) to
give x(t)cos²(2πf_c t); lowpass filtering then recovers x(t)]

Figure 3B.7
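The whole coherent demodulation chain can be sketched numerically. The lowpass filter below is an ideal brick-wall filter implemented in the frequency domain purely for simplicity, and all frequencies are illustrative assumptions:

```python
import numpy as np

# Sketch of coherent demodulation: multiply the received DSB-SC signal by
# a synchronized carrier, then lowpass filter with a gain of 2.
fs = 8000
t = np.arange(0, 1, 1/fs)
fm, fc = 100, 1000
x = np.cos(2*np.pi*fm*t)                 # original message
rx = x * np.cos(2*np.pi*fc*t)            # received DSB-SC signal
v = rx * np.cos(2*np.pi*fc*t)            # = x/2 + (x/2)cos(4*pi*fc*t)

# Ideal "brick-wall" lowpass (cutoff 500 Hz) via the FFT,
# with the required gain of 2.
V = np.fft.rfft(v)
f = np.fft.rfftfreq(len(v), 1/fs)
V[f > 500] = 0
x_hat = 2 * np.fft.irfft(V, n=len(v))

print(np.max(np.abs(x_hat - x)))         # ~0 (message recovered)
```

The multiplier output contains the message at baseband plus terms around 2f_c, exactly as in Eq. (3B.4); the filter removes the high-frequency terms and the gain of 2 restores the original amplitude.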
The operation of demodulation is best understood in the frequency domain:

[Figure: demodulation in the frequency-domain — the received spectrum
Y(f) = (1/2)[X(f - f_c) + X(f + f_c)] is convolved with
(1/2)[δ(f - f_c) + δ(f + f_c)], producing (1/2)X(f) at baseband plus
replicas of height 1/4 centred on ±2f_c; the lowpass filter H(f), with a
gain of 2 and cutoff between B and 2f_c - B, selects X(f)]

Figure 3B.8
3B.11
Summary of DSB-SC Modulation and Demodulation

[Figure: the DSB-SC modulation and demodulation process in both the
time-domain and the frequency-domain. The message g(t), with spectrum
G(f) of height A and bandwidth B, is multiplied by the carrier l(t), whose
spectrum L(f) is a pair of impulses of weight 1/2 at ±f_c, giving g_m(t)
with spectrum G_m(f) of height A/2 and sidebands from f_c - B to f_c + B.
At the receiver the same multiplication produces a baseband component of
height A/2 plus replicas of height A/4 at ±2f_c, and the lowpass filter h(t)
(cutoff f_0) recovers g_d(t) with spectrum G_d(f)]

Figure 3B.9
3B.12
Amplitude Modulation (AM)
Let g(t) be a message signal such as an audio signal that is to be transmitted
through a cable or the atmosphere. In amplitude modulation (AM), the message
signal modifies (or modulates) the amplitude of a carrier sinusoid cos(2πf_c t).
In one form of AM transmission, a constant bias A is added to the message
signal g(t) prior to multiplication by a carrier cos(2πf_c t). The process is
illustrated below:

[Figure: AM modulation — an adder forms A + g(t), which a multiplier
combines with cos(2πf_c t) from a local oscillator to give the modulated
signal g_AM(t)]

Figure 3B.10

The local oscillator in Figure 3B.10 is a device that produces the sinusoidal
signal cos(2πf_c t). The multiplier is implemented with a non-linear device, and
is usually an integrated circuit at low frequencies. The adder is a simple
op-amp circuit.

The spectrum of the modulated signal should show a replica of the signal
spectrum but shifted up in frequency. Ideally, there should also be a pair of
impulses representing the carrier sinusoid. If G(f), the spectrum of g(t), is
bandlimited to B Hz, then the modulated signal spectrum has an upper
sideband from f_c to f_c + B and a lower sideband from f_c - B to f_c. Since the
appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude proportional to the message signal, this
modulation technique is called amplitude modulation, or AM for short.
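A numerical sketch of AM generation (bias A = 2 and tone frequencies chosen purely for illustration) shows the expected carrier line of height A/2 and sidebands of height 1/4:

```python
import numpy as np

# Sketch of AM: add a bias A to the message, then multiply by the carrier.
# With |g(t)| <= 1 and A = 2 the envelope A + g(t) never goes negative.
fs = 8000
t = np.arange(0, 1, 1/fs)
fm, fc, A = 100, 1000, 2.0
g = np.cos(2*np.pi*fm*t)                 # message signal
y = (A + g) * np.cos(2*np.pi*fc*t)       # AM signal

Y = np.abs(np.fft.rfft(y)) / len(y)      # one-sided amplitude spectrum
# Carrier line at fc with amplitude A/2 = 1; sidebands at fc +/- fm
# with amplitude 1/4 each.
print(Y[1000], Y[900], Y[1100])
```

The pair of carrier impulses mentioned above appears here as the dominant line at f_c; the message replica appears as the two sidebands.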
3B.13
Envelope Detection
There are several ways to demodulate the AM signal. One way is coherent (or
synchronous) demodulation, as for DSB-SC. A much simpler way is to detect
the envelope of g_AM(t), for example with a rectifier followed by an RC
lowpass filter with ω_0 = 1/RC:

[Figure: an AM waveform g_AM(t) whose envelope is A + g(t)]

Figure 3B.11

As long as the envelope of the signal A + g(t) is non-negative, the message
g(t) appears to ride on top of the half-wave rectified version of g_AM(t). The
envelope detector can therefore be built from a precision rectifier followed by
a lowpass filter:

[Figure: envelope detection — g_AM(t) passes through a precision rectifier
and then a lowpass filter to recover g(t)]

Figure 3B.12
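The rectify-then-filter idea can be sketched numerically. Assuming an ideal half-wave rectifier and a brick-wall lowpass filter (both idealisations, with illustrative frequencies), the filter output is proportional to the envelope A + g(t):

```python
import numpy as np

# Sketch of envelope detection: half-wave rectify the AM signal, then
# lowpass filter. The output is proportional to the envelope A + g(t).
fs, fm, fc, A = 8000, 20, 1000, 2.0
t = np.arange(0, 1, 1/fs)
g = np.cos(2*np.pi*fm*t)
y = (A + g)*np.cos(2*np.pi*fc*t)         # AM signal, A + g(t) >= 0

rect = np.maximum(y, 0.0)                # half-wave (precision) rectifier
R = np.fft.rfft(rect)
f = np.fft.rfftfreq(len(rect), 1/fs)
R[f > 200] = 0                           # ideal lowpass, cutoff 200 Hz
env = np.fft.irfft(R, n=len(rect))       # proportional to A + g(t)

ratio = env / (A + g)                    # (nearly) constant scale factor
print(ratio.min(), ratio.max())
```

A practical detector would also need a DC block to remove the bias A and a gain stage to undo the constant rectification loss, but the shape of g(t) is already preserved.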
3B.14
Single Sideband (SSB) Modulation
In so far as the transmission of information is concerned, only one sideband is
necessary, and if the carrier and the other sideband are suppressed at the
transmitter, no information is lost. When only one sideband is transmitted, the
modulation system is referred to as a single-sideband (SSB) system.

A method for the creation of an SSB signal is illustrated below:

[Figure: SSB modulation — g(t) multiplies cos(2πf_c t) in one path; in the
other path g(t) passes through a -90° phase-shifter and multiplies
sin(2πf_c t); an adder combines the two products to form the modulated
signal g_SSB(t)]

Figure 3B.13

The local oscillators in Figure 3B.13 are devices that produce sinusoidal
signals. One oscillator is cos(2πf_c t). The other oscillator has a phase which is
said to be in quadrature, or a phase of -π/2 with respect to the first oscillator,
to give sin(2πf_c t). The multiplier is implemented with a non-linear device and
the adder is a simple op-amp circuit (for input signals with a bandwidth less
than 1 MHz). The -90° phase-shifter is a device that shifts the phase of all
positive frequencies by -90° and all negative frequencies by +90°. It is more
commonly referred to as a Hilbert transformer. Note that the local oscillator
sin(2πf_c t) can be generated from cos(2πf_c t) by passing it through a Hilbert
transformer.
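A numerical sketch of this SSB modulator, using SciPy's `hilbert()` to form the Hilbert transform of the message (all parameters are illustrative), confirms that one sideband is cancelled:

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of SSB generation: the message multiplies cos, its Hilbert
# transform multiplies sin, and the difference cancels the lower sideband
# (upper-sideband SSB shown; frequencies are illustrative).
fs, fm, fc = 8000, 100, 1000
t = np.arange(0, 1, 1/fs)
g = np.cos(2*np.pi*fm*t)
g_h = np.imag(hilbert(g))                # Hilbert transform of g
ssb = g*np.cos(2*np.pi*fc*t) - g_h*np.sin(2*np.pi*fc*t)

S = np.abs(np.fft.rfft(ssb)) / len(ssb)
print(S[1100], S[900])   # upper sideband present, lower suppressed
```

With the sign of the sin-branch flipped, the upper sideband would be cancelled instead, giving lower-sideband SSB.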
Signals and Systems 2014
3B.15
Quadrature Amplitude Modulation (QAM)
In one form of AM transmission, two messages that occupy the same part of
the spectrum can be sent by combining their spectra in quadrature. The first
message signal g_1(t) is multiplied by a carrier cos(2πf_c t), and the second
message signal g_2(t) is multiplied by sin(2πf_c t). The process is illustrated
below:

[Figure: QAM modulation — g_1(t) multiplies cos(2πf_c t), g_2(t) multiplies
sin(2πf_c t), and an adder combines the two products]

Figure 3B.14

The local oscillators in Figure 3B.14 are devices that produce sinusoidal
signals. One oscillator is cos(2πf_c t). The other oscillator has a phase which is
said to be in quadrature, or a phase of -π/2 with respect to the first oscillator,
to give sin(2πf_c t). The multiplier is implemented with a non-linear device and
the adder is a simple op-amp circuit (for input signals with a bandwidth less
than 1 MHz).
3B.16
The spectrum of the modulated signal has two parts in quadrature. Each part is
a replica of the original message spectrum but shifted up in frequency. The
parts do not interfere, since the first message forms the real (or in-phase, I)
part of the modulated spectrum and the second message forms the imaginary
(or quadrature, Q) part. An abstract view of the spectrum showing its real and
imaginary parts is shown below:

[Figure: the original message spectra G_1(f) and G_2(f), and the QAM
spectrum G_QAM(f), in which G_1 appears in the real (I) part and G_2 in the
imaginary (Q) part, centred on ±f_c]

Figure 3B.15

Normally, we represent a spectrum by its magnitude and phase, and not its real
and imaginary parts, but in this case it is easier to picture the spectrum in
rectangular coordinates rather than polar coordinates.

If the spectrum of both message signals is bandlimited to B Hz, then the
modulated signal spectrum has an upper sideband from f_c to f_c + B and a
lower sideband from f_c - B to f_c.

The appearance of the modulated signal in the time domain is that of a sinusoid
with a time-varying amplitude and phase, but since the amplitudes of the
quadrature components (cos and sin) of the carrier vary in proportion to the
message signals, this modulation technique is called quadrature amplitude
modulation, or QAM for short.
3B.17
Coherent Demodulation
There are several ways to demodulate the QAM signal; we will consider a
simple analog method called coherent (or synchronous) demodulation.

[Figure: coherent QAM demodulation — g_QAM(t) is applied to two
multipliers driven by cos(2πf_c t) and sin(2πf_c t); lowpass filtering the
products g_QAM(t)cos(2πf_c t) and g_QAM(t)sin(2πf_c t) yields g_1(t) and
g_2(t)]

Figure 3B.16

The first stage of the demodulation process involves applying the modulated
waveform g_QAM(t) to two separate multipliers. The other signals applied to
each multiplier are local oscillators (in quadrature again) which are assumed to
be synchronized with the modulator, i.e. the frequency and phase of the
demodulator's local oscillators are exactly the same as the frequency and phase
of the modulator's local oscillators. The signals g_1(t) and g_2(t) can then be
recovered by lowpass filtering each product.
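A numerical sketch of the full QAM chain (illustrative tone messages and an ideal FFT-based lowpass filter) shows the two messages being separated cleanly:

```python
import numpy as np

# Sketch of QAM: two messages share the band on quadrature carriers, and
# synchronized quadrature demodulators separate them.
fs, fc = 8000, 1000
t = np.arange(0, 1, 1/fs)
g1 = np.cos(2*np.pi*100*t)               # in-phase (I) message
g2 = np.sin(2*np.pi*150*t)               # quadrature (Q) message
qam = g1*np.cos(2*np.pi*fc*t) + g2*np.sin(2*np.pi*fc*t)

def lowpass(v, cutoff):
    # ideal brick-wall lowpass via the FFT
    V = np.fft.rfft(v)
    f = np.fft.rfftfreq(len(v), 1/fs)
    V[f > cutoff] = 0
    return np.fft.irfft(V, n=len(v))

g1_hat = 2*lowpass(qam*np.cos(2*np.pi*fc*t), 500)
g2_hat = 2*lowpass(qam*np.sin(2*np.pi*fc*t), 500)
print(np.max(np.abs(g1_hat - g1)), np.max(np.abs(g2_hat - g2)))
```

Each demodulator output is half the corresponding message plus terms around 2f_c; the lowpass filter with gain 2 restores the message exactly, provided the local oscillators are perfectly synchronized.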
3B.18
Summary

References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York,
1994.
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
3B.19
Exercises
1.
Sketch the Fourier transform of the waveform g(t) = (1 + cos 2πt) sin(20πt).

2.
A scheme used in stereophonic FM broadcasting is shown below:

[Figure: the left (L) and right (R) channel signals are combined; point A
lies on one path, point B follows a multiplier driven by cos(2πf_c t), and
point C is the output]

The input to the left channel (L) is a 1 kHz sinusoid, the input to the right
channel (R) is a 2 kHz sinusoid. Draw the spectrum of the signal at points A, B
and C if f_c = 38 kHz.

3.
Draw a block diagram of a scheme that could be used to recover the left (L)
and right (R) signals of the system shown in Question 2 if it uses the signal at
C as the input.
4A.1
Lecture 4A The Laplace Transform
The Laplace transform. Finding a Fourier transform using a Laplace
transform. Finding Laplace transforms. Standard Laplace transforms. Laplace
transform properties. Evaluation of the inverse Laplace transform. Transforms
of differential equations. The system transfer function. Block diagrams.
Cascading blocks. Standard form of a feedback control system. Block diagram
transformations.
The Fourier transform of a signal g(t) exists only if the signal is absolutely
integrable, i.e. only if ∫|g(t)|dt < ∞. The Laplace transform relaxes this
requirement by first multiplying the signal by an exponential convergence
factor e^{-σt}:

[Figure: a signal φ(t) that does not decay with t; the decaying factor
e^{-σt}; and the product φ(t)e^{-σt}, which is forced toward zero for
large t]

Figure 4A.1
4A.2
In other words, the factor e^{-σt} is used to try and force the overall function to
zero for large values of t in an attempt to achieve convergence of the Fourier
transform (note that this is not always possible).

After multiplying the original function by an exponential convergence factor,
the Fourier transform is:

F{x(t)e^{-σt}} = ∫_{-∞}^{∞} x(t)e^{-σt} e^{-jωt} dt
             = ∫_{-∞}^{∞} x(t)e^{-(σ+jω)t} dt     (4A.1)

Treating the result as a function of σ + jω:

X(σ + jω) = ∫_{-∞}^{∞} x(t)e^{-(σ+jω)t} dt     (4A.2)

Letting s = σ + jω, we obtain the (bilateral) Laplace transform:

X(s) = ∫_{-∞}^{∞} x(t)e^{-st} dt     (4A.3)

For causal signals and systems it is convenient to use the unilateral Laplace
transform:

X(s) = ∫_{0⁻}^{∞} x(t)e^{-st} dt     (4A.4)
4A.3
The Laplace transform variable, s, is termed complex frequency; it is a
complex number and can be graphed on a complex plane:

[Figure: the s-plane — s = σ + jω, with Re{s} on the horizontal axis and
Im{s} on the vertical axis]

Figure 4A.2

The inverse Laplace transform can be obtained by considering the inverse
Fourier transform of X(σ + jω):

x(t)e^{-σt} = (1/2π) ∫_{-∞}^{∞} X(σ + jω)e^{jωt} dω     (4A.5)

so that:

x(t) = (1/2π) ∫_{-∞}^{∞} X(σ + jω)e^{(σ+jω)t} dω     (4A.6)

Changing the variable of integration to s = σ + jω (so that ds = j dω):

x(t) = (1/2πj) ∫_{c-j∞}^{c+j∞} X(s)e^{st} ds     (4A.7)

Eqs. (4A.4) and (4A.7) define the Laplace transform pair:

x(t) ⇔ X(s)     (4A.8)
4A.4
Region of Convergence (ROC)
The region of convergence (ROC) for the Laplace transform X(s) is the set of
values of s (the region in the complex plane) for which the integral in
Eq. (4A.4) converges.

Example
To find the Laplace transform of a signal x(t) = e^{-at}u(t) and its ROC, we
substitute into the definition of the Laplace transform:

X(s) = ∫_{0⁻}^{∞} e^{-at}u(t)e^{-st} dt     (4A.9)

Since u(t) = 1 for t ≥ 0:

X(s) = ∫_{0⁻}^{∞} e^{-at}e^{-st} dt = ∫_{0⁻}^{∞} e^{-(s+a)t} dt
     = -[1/(s+a)] e^{-(s+a)t} |_{0⁻}^{∞}     (4A.10)

To evaluate the limit as t → ∞, note that with z = α + jβ:

e^{zt} = e^{(α+jβ)t} = e^{αt}e^{jβt}     (4A.11)

The magnitude of e^{jβt} is unity, so:

lim_{t→∞} e^{-zt} = 0 if Re{z} > 0, and ∞ if Re{z} < 0     (4A.12)

Clearly:

lim_{t→∞} e^{-(s+a)t} = 0 if Re{s+a} > 0, and ∞ if Re{s+a} < 0     (4A.13)

Therefore:

X(s) = 1/(s+a),  Re{s+a} > 0     (4A.14)

or:

e^{-at}u(t) ⇔ 1/(s+a),  Re{s} > -a     (4A.15)

[Figure: the signal x(t) = e^{-at}u(t) and its region of convergence — the
shaded half-plane Re{s} > -a, with the path of integration from c - j∞ to
c + j∞ lying inside it]

Figure 4A.3
This fact means that the integral defining X(s) in Eq. (4A.10) exists only for
the values of s in the shaded region in Figure 4A.3. For other values of s, the
integral does not converge.
4A.6
The ROC is needed to establish the convergence of the Laplace transform, and
it is required for evaluating the inverse Laplace transform as defined by
Eq. (4A.7). That operation requires an integration in the complex plane along
the path s = c + jω, with ω varying from -∞ to ∞. This path of integration
must lie in the ROC for X(s); for the signal e^{-at}u(t), this is possible if
c > -a. One possible path of integration is shown in Figure 4A.3.

If all the signals we deal with are restricted to t > 0, then the inverse transform
is unique, and we do not need to specify the ROC.

Finding a Fourier Transform using a Laplace Transform
If the ROC includes the jω-axis, then:

X(ω) = X(s)|_{s=jω}     (4A.16)

That is, the Fourier transform X(ω) is just the Laplace transform X(s) with
s = jω.
Example
The signal e^{-3t}u(t) has the Laplace transform 1/(s+3), whose ROC includes
the jω-axis. The Fourier transform is therefore:

X(ω) = 1/(s+3)|_{s=jω} = 1/(jω + 3) = 1/(3 + j2πf)     (4A.17)

A quick check from our knowledge of the Fourier transform shows this to be
correct (because the Laplace transform includes the jω-axis in its region of
convergence).
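The substitution s = jω is easy to check numerically; the sketch below evaluates X(s) = 1/(s+3) along the jω-axis and confirms the peak magnitude of 1/3 at ω = 0:

```python
import numpy as np

# The Fourier transform of e^{-3t}u(t) follows from X(s) = 1/(s+3) by
# setting s = j*omega (valid because the ROC includes the j-axis).
omega = np.linspace(-20, 20, 401)
X = 1 / (1j*omega + 3)

# |X| peaks at omega = 0 with value 1/3.
print(np.max(np.abs(X)))   # 1/3
```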
4A.7
We can also view the Laplace transform geometrically, if we are willing to
split the transform into its magnitude and phase (remember X(s) is a complex
number). The magnitude of X(s) = 1/(s+3) can be visualised by graphing
|X(s)| as the height of a surface spread out over the s-plane:

[Figure: the graph of |X(s)| = |1/(s+3)| over the s-plane forms a surface
that rises toward infinity at the pole s = -3]

Figure 4A.4

There are several things we can notice about the plot above. First, note that the
surface has been defined only in the ROC, i.e. for Re{s} > -3. Secondly, the
surface approaches an infinite value at the point s = -3. Such a point is termed
a pole, in obvious reference to the surface being analogous to a tent (a zero is a
point where the surface has a value of zero).
4A.8
We can completely specify X(s), apart from a constant gain factor, by
drawing a so-called pole-zero plot. A pole-zero plot is a shorthand way of
representing a Laplace transform:

[Figure: the pole-zero plot of X(s) = 1/(s+3) — a single pole marked at
s = -3 in the s-plane]

Figure 4A.5

A pole-zero plot locates all the critical points in the s-plane that completely
specify the function X(s) (to within an arbitrary constant), and it is a useful
analytic and design tool.

Thirdly, one cut of the surface has been fortuitously placed along the imaginary
axis. If we graph the height of the surface along this cut against ω, we get a
picture of the magnitude of the Fourier transform vs. ω:

[Figure: the magnitude of X(ω) = 1/(3 + jω) against ω, obtained from the
jω-axis cut of the Laplace transform surface; its peak value is 1/3 at
ω = 0]

Figure 4A.6

With these ideas in mind, it should be apparent that a function that has a
Laplace transform with a ROC in the right-half plane does not have a Fourier
transform (because the Laplace transform surface will never intersect the
jω-axis).
4A.9
Finding Laplace Transforms
Like the Fourier transform, it is only necessary from a practical viewpoint to
find Laplace transforms for a few standard signals, and then formulate several
properties of the Laplace transform. Then, finding a Laplace transform of a
function will consist of starting with a known transform and successively
applying known transform properties.

Example
For an impulse, the sifting property gives:

X(s) = ∫_{0⁻}^{∞} δ(t)e^{-st} dt = 1     (4A.18)

so that:

δ(t) ⇔ 1     (4A.19)

Thus, the Laplace transform of an impulse is 1, just like the Fourier transform.

Example
Applying the definition to the unit step gives:

u(t) ⇔ 1/s     (4A.20)

This is a frequently used transform in the study of electrical circuits and control
systems.
4A.10
Example
Using Euler's identity, a cosine can be written as a sum of exponentials:

cos(ω_0 t)u(t) = (1/2)(e^{jω_0 t} + e^{-jω_0 t})u(t)     (4A.21)

Each exponential transforms according to (4A.15), so:

cos(ω_0 t)u(t) ⇔ (1/2)[1/(s - jω_0) + 1/(s + jω_0)] = s/(s² + ω_0²)     (4A.22)

Therefore:

cos(ω_0 t)u(t) ⇔ s/(s² + ω_0²)     (4A.23)
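The transform (4A.23) can be spot-checked by numerical integration at a real test point (s = 2, ω_0 = 5, chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad

# Spot-check of cos(w0*t)u(t) <=> s/(s^2 + w0^2) at s = 2, w0 = 5.
# The integrand decays like e^{-2t}, so truncating at t = 20 is ample.
s, w0 = 2.0, 5.0
val, _ = quad(lambda t: np.cos(w0*t)*np.exp(-s*t), 0, 20)
print(val, s/(s**2 + w0**2))   # both ~0.0689655 (= 2/29)
```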
4A.11
Differentiation Property
The Laplace transform of the derivative of x(t) is:

L{dx/dt} = ∫_{0⁻}^{∞} (dx/dt) e^{-st} dt     (4A.24)

Integrating by parts:

L{dx/dt} = [x(t)e^{-st}]_{0⁻}^{∞} + s ∫_{0⁻}^{∞} x(t)e^{-st} dt     (4A.25)

For the Laplace integral to converge [i.e. for X(s) to exist], it is necessary that
x(t)e^{-st} → 0 as t → ∞ for the values of s in the ROC for X(s). Thus:

dx/dt ⇔ sX(s) - x(0⁻)     (4A.26)
4A.12
Standard Laplace Transforms

u(t) ⇔ 1/s     (L.1)

δ(t) ⇔ 1     (L.2)

e^{-αt}u(t) ⇔ 1/(s + α)     (L.3)

cos(αt)u(t) ⇔ s/(s² + α²)     (L.4)

sin(αt)u(t) ⇔ α/(s² + α²)     (L.5)
4A.13
Laplace Transform Properties
Assuming x(t) ⇔ X(s):

a x(t) ⇔ a X(s)     (L.6)  Linearity

x(t/T) ⇔ T X(sT)     (L.7)  Scaling

x(t - c)u(t - c) ⇔ e^{-cs} X(s)     (L.8)  Time shifting

e^{-at} x(t) ⇔ X(s + a)     (L.9)  Multiplication by exponential

t^N x(t) ⇔ (-1)^N d^N X(s)/ds^N     (L.10)  Multiplication by t

dx/dt ⇔ s X(s) - x(0⁻)     (L.11)  Differentiation

∫_{0⁻}^{t} x(τ) dτ ⇔ X(s)/s     (L.12)  Integration

x_1(t) * x_2(t) ⇔ X_1(s) X_2(s)     (L.13)  Convolution

x(0⁺) = lim_{s→∞} s X(s)     (L.14)  Initial-value theorem

lim_{t→∞} x(t) = lim_{s→0} s X(s)     (L.15)  Final-value theorem
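The final-value theorem (L.15) is easy to check on a concrete transform. For X(s) = 1/(s(s+1)) the inverse is x(t) = 1 - e^{-t} (a standard pair, used here purely as an illustration):

```python
import math

# Final-value theorem check for X(s) = 1/(s(s+1)), i.e. x(t) = 1 - e^{-t}.
def sX(s):
    return s * (1.0/(s*(s + 1.0)))   # s*X(s)

fv_from_s = sX(1e-9)                 # lim_{s->0} s X(s)
fv_from_t = 1 - math.exp(-50)        # x(t) evaluated at a large t
print(fv_from_s, fv_from_t)          # both ~1.0
```

Both limits agree, as the theorem requires (it applies only when all poles of sX(s) lie in the left-half plane).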
4A.14
Evaluation of the Inverse Laplace Transform
Finding the inverse Laplace transform requires integration in the complex
plane, which is normally difficult and time consuming to compute. Instead, we
can find inverse transforms from a table of Laplace transforms. All we need to
do is express X s as a sum of simpler functions of the forms listed in the
table. Most of the transforms of practical interest are rational functions, that is,
ratios of polynomials in s. Such functions can be expressed as a sum of simpler
functions by using partial fraction expansion.
Rational Laplace Transforms
A rational transform has the form:

X(s) = (b_M s^M + b_{M-1} s^{M-1} + ... + b_1 s + b_0)
       / (a_N s^N + a_{N-1} s^{N-1} + ... + a_1 s + a_0)     (4A.27)

Factoring the denominator:

X(s) = (b_M s^M + b_{M-1} s^{M-1} + ... + b_1 s + b_0)
       / [a_N (s - p_1)(s - p_2)...(s - p_N)]     (4A.28)

where the p_i are called the poles of X(s). If the poles are all distinct, then the
partial fraction expansion of Eq. (4A.28) is:

X(s) = c_1/(s - p_1) + c_2/(s - p_2) + ... + c_N/(s - p_N)     (4A.29)

The residues c_i are found from:

c_i = (s - p_i) X(s)|_{s = p_i}     (4A.30)
4A.15
Taking the inverse Laplace transform of Eq. (4A.29) using standard transform
(L.3) and property (L.7), gives us:

x(t) = c_1 e^{p_1 t} + c_2 e^{p_2 t} + ... + c_N e^{p_N t},  t ≥ 0     (4A.31)

Note that the form of the time-domain expression is determined only by the
poles of X(s)!

Example
Suppose:

Y(s) = 2(s + 1) / [(s + 2)(s + 4)]     (4A.32)

The partial fraction expansion is:

Y(s) = c_1/(s + 2) + c_2/(s + 4)     (4A.33)

Using Eq. (4A.30), the residue c_1 is:

c_1 = (s + 2)Y(s)|_{s = -2}     (4A.34)

A quick way to evaluate this, known as Heaviside's cover-up rule, is to
mentally cover up the factor (s + 2) on the left-hand side, and then evaluate
what remains at s = -2:

c_1 = 2(-2 + 1)/(-2 + 4) = -1     (4A.35)

Applying Heaviside's cover-up rule for c_2 results in the mental equation:

c_2 = 2(-4 + 1)/(-4 + 2) = 3     (4A.36)

Therefore:

Y(s) = -1/(s + 2) + 3/(s + 4)     (4A.37)

The inverse Laplace transform can now be easily evaluated using standard
transform (L.3) and property (L.7):

y(t) = -e^{-2t} + 3e^{-4t},  t ≥ 0     (4A.38)

Note that the continual writing of u(t) after each function has been replaced by
the more notationally convenient condition of t ≥ 0 on the total solution.
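The expansion can be verified with SciPy's `residue()`, which performs the same partial fraction decomposition numerically (the polynomial coefficients below follow the example as reconstructed here):

```python
import numpy as np
from scipy.signal import residue

# Check of Y(s) = 2(s+1)/((s+2)(s+4)) = -1/(s+2) + 3/(s+4).
b = [2, 2]            # numerator 2s + 2
a = [1, 6, 8]         # denominator (s+2)(s+4) = s^2 + 6s + 8
r, p, k = residue(b, a)

# Pair each pole with its residue (SciPy's ordering is not guaranteed).
pairs = sorted(zip(np.real(p), np.real(r)))
print(pairs)          # pole -4 has residue 3, pole -2 has residue -1
```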
If there is a pole p_1 = α + jβ, then the complex conjugate p_1* = α - jβ is also
a pole (that's how we get real coefficients in the polynomial). In this case the
residues of the two poles are complex conjugates, and:

X(s) = c_1/(s - p_1) + c_1*/(s - p_1*) + c_3/(s - p_3) + ... + c_N/(s - p_N)     (4A.39)

so that:

x(t) = c_1 e^{p_1 t} + c_1* e^{p_1* t} + c_3 e^{p_3 t} + ... + c_N e^{p_N t},  t ≥ 0     (4A.40)

The two conjugate terms combine into a single real sinusoid:

c_1 e^{p_1 t} + c_1* e^{p_1* t} = 2|c_1| e^{αt} cos(βt + ∠c_1)     (4A.41)
4A.17
Now suppose the pole p_1 of X(s) is repeated r times. Then the partial fraction
expansion of Eq. (4A.28) is:

X(s) = c_1/(s - p_1) + c_2/(s - p_1)² + ... + c_r/(s - p_1)^r
       + c_{r+1}/(s - p_{r+1}) + ... + c_N/(s - p_N)     (4A.42)

The residues are given by Eq. (4A.30) for the distinct poles (r + 1 ≤ i ≤ N) and:

c_{r-i} = (1/i!) (d^i/ds^i)[(s - p_1)^r X(s)]|_{s = p_1}     (4A.43)
Example
Suppose:

Y(s) = (s + 4) / [(s + 1)(s + 2)²]     (4A.44)

The partial fraction expansion is:

Y(s) = c_1/(s + 1) + c_2/(s + 2) + c_3/(s + 2)²     (4A.45)

Heaviside's cover-up rule gives the residue of the distinct pole:

c_1 = (-1 + 4)/(-1 + 2)² = 3     (4A.46)

and the coefficient of the highest power of the repeated pole:

c_3 = (-2 + 4)/(-2 + 1) = -2     (4A.47)
4A.18
Note that Heaviside's cover-up rule only applies to the repeated partial fraction
with the highest power. To find c_2, we have to use Eq. (4A.43). Multiplying
throughout by (s + 2)² gives:

(s + 4)/(s + 1) = c_1(s + 2)²/(s + 1) + c_2(s + 2) + c_3     (4A.48)

Differentiating both sides with respect to s:

(d/ds)[(s + 4)/(s + 1)] = c_1 (d/ds)[(s + 2)²/(s + 1)] + c_2     (4A.49)

Using the quotient rule, (d/ds)(u/v) = (v du/ds - u dv/ds)/v²:     (4A.50)

(d/ds)[(s + 4)/(s + 1)] = [(s + 1) - (s + 4)]/(s + 1)² = -3/(s + 1)²     (4A.51)

At s = -2 the c_1 term on the right-hand side of (4A.49) vanishes, since it
retains a factor of (s + 2), so:     (4A.52)

c_2 = -3/(-2 + 1)² = -3     (4A.53)

Therefore:

Y(s) = 3/(s + 1) - 3/(s + 2) - 2/(s + 2)²     (4A.54)
4A.19
The inverse Laplace transform can now be easily evaluated using (L.3), (L.7)
and (L.10):

y(t) = 3e^{-t} - 3e^{-2t} - 2te^{-2t},  t ≥ 0     (4A.55)
Example
Given:

X(s) = (s² + 2s + 1)/(s³ + 3s² + 4s + 2)

calculate x(t).

Using MATLAB we just do:

num = [1 2 1];
den = [1 3 4 2];
[r,p] = residue(num,den);

Thus, we see that any high-order rational function X(s) can be decomposed
into a combination of first-order factors; multiple poles produce coefficients
that are polynomials of t.
4A.20
Transforms of Differential Equations
The time-differentiation property of the Laplace transform sets the stage for
solving linear differential equations with constant coefficients. Because
d^k y/dt^k ⇔ s^k Y(s), the Laplace transform of a differential equation is an
algebraic equation that can be readily solved for Y(s). Next we take the inverse
Laplace transform of Y(s) to find the desired solution y(t).

[Figure: the solution path — solving the differential equation directly in the
time-domain is difficult; instead, the Laplace transform (LT) gives an
algebraic equation in the s-domain, which is easy to solve; the inverse
Laplace transform (ILT) of the s-domain solution then gives the
time-domain solution]

Figure 4A.7

Example
Solve the differential equation:

(D² + 5D + 6) y(t) = (D + 1) x(t)     (4A.56)

if the initial conditions are y(0⁻) = 2 and y'(0⁻) = 1, and the input is
x(t) = e^{-4t}u(t).     (4A.57)
4A.21
Let y(t) ⇔ Y(s). Then from property (L.11):

dy/dt ⇔ sY(s) - y(0⁻) = sY(s) - 2

and:

d²y/dt² ⇔ s²Y(s) - s y(0⁻) - y'(0⁻) = s²Y(s) - 2s - 1     (4A.58)

For the input, x(t) = e^{-4t}u(t) ⇔ 1/(s + 4) and:

dx/dt ⇔ sX(s) - x(0⁻) = s/(s + 4) - 0 = s/(s + 4)     (4A.59)

Taking the Laplace transform of Eq. (4A.56):

[s²Y(s) - 2s - 1] + 5[sY(s) - 2] + 6Y(s) = s/(s + 4) + 1/(s + 4)     (4A.60)

Collecting all the terms of Y(s) and the remaining terms separately on the
left-hand side, we get:

(s² + 5s + 6)Y(s) - (2s + 11) = (s + 1)/(s + 4)     (4A.61)

so that:

(s² + 5s + 6)Y(s) = (2s + 11) + (s + 1)/(s + 4)     (4A.62)

where 2s + 11 arises from the initial conditions and (s + 1)/(s + 4) is the input
term. Therefore:

Y(s) = (2s + 11)/(s² + 5s + 6) + (s + 1)/[(s + 4)(s² + 5s + 6)]
     = [7/(s + 2) - 5/(s + 3)]
       + [-(1/2)/(s + 2) + 2/(s + 3) - (3/2)/(s + 4)]     (4A.63)
4A.22
Taking the inverse transform yields:

y(t) = [7e^{-2t} - 5e^{-3t}] + [-(1/2)e^{-2t} + 2e^{-3t} - (3/2)e^{-4t}]
     = (13/2)e^{-2t} - 3e^{-3t} - (3/2)e^{-4t},  t ≥ 0     (4A.64)
The Laplace transform method gives the total response, which includes
zero-input and zero-state components. The initial condition terms give rise to
the zero-input response. The zero-state response terms are exclusively due to
the input.
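As a numerical sanity check on the worked example (assuming, as reconstructed above, the input x(t) = e^{-4t}u(t) and the solution (4A.64)), we can confirm that y(t) satisfies the differential equation for t > 0:

```python
import numpy as np

# For t > 0, y(t) = (13/2)e^{-2t} - 3e^{-3t} - (3/2)e^{-4t}
# should satisfy (D^2 + 5D + 6)y = (D + 1)x with x(t) = e^{-4t}.
t = np.linspace(0.1, 5, 200)
y   = 6.5*np.exp(-2*t) - 3*np.exp(-3*t) - 1.5*np.exp(-4*t)
dy  = -13*np.exp(-2*t) + 9*np.exp(-3*t) + 6*np.exp(-4*t)
d2y = 26*np.exp(-2*t) - 27*np.exp(-3*t) - 24*np.exp(-4*t)

lhs = d2y + 5*dy + 6*y
rhs = -4*np.exp(-4*t) + np.exp(-4*t)    # x'(t) + x(t) for t > 0
print(np.max(np.abs(lhs - rhs)))        # ~0
```

(The impulse in dx/dt at t = 0 is what makes y'(0⁺) differ from y'(0⁻) = 1; the check above deliberately starts at t = 0.1 to stay clear of it.)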
Consider the Nth-order input/output differential equation:

d^N y(t)/dt^N + Σ_{i=0}^{N-1} a_i d^i y(t)/dt^i = Σ_{i=0}^{M} b_i d^i x(t)/dt^i     (4A.65)

If we take the Laplace transform of this equation using (L.11), assuming zero
initial conditions, we get a rational function of s:

Y(s) = [(b_M s^M + b_{M-1}s^{M-1} + ... + b_1 s + b_0)
        / (s^N + a_{N-1}s^{N-1} + ... + a_1 s + a_0)] X(s)     (4A.66)

Now define the transfer function H(s) so that:

Y(s) = H(s) X(s)     (4A.67)

where:

H(s) = (b_M s^M + b_{M-1}s^{M-1} + ... + b_1 s + b_0)
       / (s^N + a_{N-1}s^{N-1} + ... + a_1 s + a_0)     (4A.68)

Do the revision problem assuming zero initial conditions.
4A.23
The System Transfer Function
For a linear time-invariant system described by a convolution integral, we can
take the Laplace transform and get:

Y(s) = H(s) X(s)     (4A.69)

where the transfer function is the Laplace transform of the impulse response:

h(t) ⇔ H(s)     (4A.70)

This is the relationship between the time-domain and frequency-domain
descriptions of a system.

The transfer function can be factored to get the zeros and poles:

H(s) = b_M (s - z_1)(s - z_2)...(s - z_M) / [(s - p_1)(s - p_2)...(s - p_N)]     (4A.71)

where the z's are the zeros of the system and the p's are the poles of the
system. This shows us that apart from a constant factor b_M, the poles and zeros
determine the transfer function completely. They are often displayed on a
pole-zero diagram, which is a plot in the s-domain showing the location of all
the poles (marked by x) and all the zeros (marked by o).

You should be familiar with direct construction of the transfer function for
electric circuits from previous subjects.
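Numerically, the poles and zeros can be extracted from the polynomial coefficients with SciPy's `tf2zpk()`; the transfer function below is an arbitrary illustrative example, not one from the notes:

```python
from scipy.signal import tf2zpk

# Zeros, poles and gain of H(s) = (s + 1)/(s^2 + 6s + 8).
z, p, k = tf2zpk([1, 1], [1, 6, 8])
print(z, sorted(p.real), k)   # zero at -1, poles at -4 and -2, gain 1
```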
4A.24
Block Diagrams
Block diagrams are a graphical way of representing transfer functions.
Consider a simple RC lowpass circuit:

[Figure: an RC lowpass circuit with input v_i(t) and output v_o(t)]

Figure 4A.8

We can perform KVL around the loop to get the differential equation:

v_o(t) + RC dv_o(t)/dt = v_i(t)     (4A.72)

Taking the Laplace transform (with zero initial conditions):

V_o(s) + sRC V_o(s) = V_i(s)     (4A.73)

so that:

V_o(s)/V_i(s) = 1/(1 + sRC) = 1/(1 + sT)     (4A.74)

where T = RC, the time constant.
4A.25
Therefore the block diagram is:

[Figure: a block labelled 1/(1 + sT) with input V_i(s) and output V_o(s)]

Figure 4A.9

Note that there's no hint of what makes up the inside of the block, except for
the input and output signals. It could be a simple RC circuit, a complex passive
circuit, or even an active circuit (with op-amps). The important thing the block
does is hide all this detail.

Notation
A block represents multiplication with a transfer function:

[Figure: a block G(s) with input V_i(s) and output V_o(s)]

Figure 4A.10

Signals are added and subtracted at summing junctions:

[Figure: summing junctions forming Z = X + Y and Z = X - Y]

Figure 4A.11
4A.26
Cascading Blocks
Blocks can be connected in cascade. Cascading blocks implies multiplying the
transfer functions:

[Figure: X into G_1(s) gives Y = G_1 X; Y into G_2(s) gives Z = G_1 G_2 X]

Figure 4A.12

Care must be taken when cascading blocks. Consider what happens when we
try to create a second-order circuit by cascading two first-order circuits:

[Figure: a circuit which IS NOT the cascade of two first-order circuits —
two RC sections connected directly, so that the second loads the first]

Figure 4A.13

Show that the transfer function for the above circuit is:

V_o/V_i = (1/RC)² / [s² + 3(1/RC)s + (1/RC)²]     (4A.75)

Compare with the following circuit:

[Figure: a circuit which IS the cascade of two first-order circuits — two RC
sections separated by a buffer]

Figure 4A.14

Its transfer function is:

V_o/V_i = [(1/RC)/(s + 1/RC)] × [(1/RC)/(s + 1/RC)]
        = (1/RC)² / [s² + 2(1/RC)s + (1/RC)²]     (4A.76)

They are different! In the first case, the second network loads the first (i.e. they
interact). We can only cascade circuits if the outputs of the circuits present a
low impedance to the next stage, so that each successive circuit does not load
the previous circuit. Op-amp circuits of both the inverting and non-inverting
type are ideal for cascading.
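The loading effect shows up directly in the pole locations. Taking 1/RC = 1 for illustration, the loaded cascade (4A.75) has two distinct real poles, while the buffered cascade (4A.76) has the expected repeated pole at s = -1/RC:

```python
import numpy as np

# Compare the poles of the loaded cascade (s^2 + 3as + a^2) with the
# buffered cascade (s^2 + 2as + a^2 = (s + a)^2), with a = 1/RC = 1.
a = 1.0
loaded   = np.roots([1, 3*a, a**2]).real   # roots are real here
buffered = np.roots([1, 2*a, a**2]).real

print(sorted(loaded))    # two distinct real poles, (-3 +/- sqrt(5))/2
print(sorted(buffered))  # repeated pole at s = -1
```

The loaded circuit's poles spread apart; it is not simply "two first-order sections", which is exactly the point made above.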
4A.28
Standard Form of a Feedback Control System
Perhaps the most important block diagram is that of a feedback connection,
shown below:

[Figure: the standard feedback connection. R(s) enters a summing junction;
the error E(s) drives G(s) to produce the output C(s); C(s) passes through
H(s) to give B(s), which is subtracted at the summing junction]

Figure 4A.15

We define:

C(s)/R(s) = closed-loop transfer function
G(s)H(s) = loop gain

To find the transfer function, we solve the following two equations which are
self-evident from the block diagram:

C(s) = G(s)E(s)
E(s) = R(s) - H(s)C(s)     (4A.77)
4A.29
Then the output C(s) is given by:

C(s) = G(s)R(s) - G(s)H(s)C(s)
C(s)[1 + G(s)H(s)] = G(s)R(s)     (4A.78)

and therefore:

C(s)/R(s) = G(s)/[1 + G(s)H(s)]     (4A.79)

Similarly:

E(s)/R(s) = 1/[1 + G(s)H(s)]     (4A.80)

and:

B(s)/R(s) = G(s)H(s)/[1 + G(s)H(s)]     (4A.81)

Notice how all the above expressions have the same denominator.

We define 1 + G(s)H(s) = 0 as the characteristic equation of the differential
equation describing the system. Note that for negative feedback we get the
factor 1 + G(s)H(s), and for positive feedback we get 1 - G(s)H(s):

[Figure: negative feedback gives the characteristic factor 1 + GH; positive
feedback gives 1 - GH]

Figure 4A.16
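Equation (4A.79) can be checked numerically at a point. With the illustrative choices G(s) = 4/(s(s+2)) and H(s) = 1, algebra gives G/(1 + GH) = 4/(s² + 2s + 4), which the sketch below confirms:

```python
# Check of the closed-loop transfer function G/(1 + GH) at a test point,
# with the illustrative G(s) = 4/(s(s+2)) and H(s) = 1.
def G(s):
    return 4/(s*(s + 2))

def H(s):
    return 1

s = 1.5 + 0.5j                            # arbitrary test point
closed = G(s)/(1 + G(s)*H(s))
expected = 4/(s**2 + 2*s + 4)
print(abs(closed - expected))             # ~0
```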
4A.30
Block Diagram Transformations
Simplifying block diagrams relies on a handful of equivalences:

Combining blocks in cascade:
  X1 → [G1] → [G2] → X3  is equivalent to  X1 → [G1 G2] → X3

Moving a summing point behind a block:
  (X1 ± X2) → [G] → X3  is equivalent to  summing X1 → [G] with
  X2 → [G] to form X3

Moving a summing point ahead of a block:
  summing (X1 → [G]) with X2 to form X3  is equivalent to
  (X1 ± X2 → [1/G]) → [G] → X3

Moving a pick-off point ahead of a block:
  picking off X2 after X1 → [G]  is equivalent to  picking off X1 and
  passing the copy through [G]

Moving a pick-off point behind a block:
  picking off X1 ahead of [G]  is equivalent to  picking off X2 = G X1
  after [G] and passing the copy through [1/G]

Eliminating a feedback loop:
  a loop with forward gain G and feedback H  is equivalent to  the single
  block G/(1 ± GH)
4A.31
Example
Given a feedback loop with forward gain G and feedback H:

[Figure: the loop with G and H is redrawn using the blocks GH and 1/H —
a unity-feedback loop around GH followed by the block 1/H produces the
same output C]
4A.32
Example
Given a system with cascaded blocks G_1 and G_2, a parallel combination of
G_3 and G_4, an inner feedback path H_1 around G_1 G_2, and an outer feedback
path H_2:

First combine the cascade into G_1 G_2 and the parallel branch into G_3 + G_4.
The inner loop then reduces to:

G_1 G_2 / (1 - G_1 G_2 H_1)

Therefore we get:

C/R = G_1 G_2 (G_3 + G_4) / [1 - G_1 G_2 H_1 + G_1 G_2 (G_3 + G_4) H_2]
4A.33
Example
A system has two inputs: the reference θ_i and a disturbance d. Considering
θ_i only (with d = 0), the forward path is G_1 G_2 with unity feedback:

θ_o1/θ_i = G_1 G_2 / (1 + G_1 G_2)

Considering d only (with θ_i = 0), the disturbance enters between G_1 and G_2,
so the forward gain to the output is G_2 while the loop gain is still G_1 G_2:

θ_o2 = [G_2 / (1 + G_1 G_2)] d

Therefore, the total output is:

θ_o = θ_o1 + θ_o2
    = [G_1 G_2 / (1 + G_1 G_2)] θ_i + [G_2 / (1 + G_1 G_2)] d
4A.34
Summary
The impulse response and the transfer function form a Laplace transform
pair: h(t) ⇔ H(s).

References
Haykin, S.: Communication Systems, John Wiley & Sons, Inc., New York,
1994.
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Lathi, B. P.: Modern Digital and Analog Communication Systems,
Holt-Saunders, Tokyo, 1983.
4A.35
Exercises
1.
Obtain transfer functions for the following networks:
a) – d)
[Figures: four networks built from elements including C_1, R_1, R_2, C_2, L
and L_1]
2.
Obtain the Laplace transforms for the following integro-differential equations:

a)  di(t)/dt + R i(t) + (1/C) ∫_0^t i(τ)dτ = e(t)

b)  M d²x(t)/dt² + B dx(t)/dt + K x(t) = 3t

c)  J d²θ(t)/dt² + B dθ(t)/dt + K θ(t) = 10 sin(t)
4A.36
3.
Find the f(t) which corresponds to each F(s) below:

a)  F(s) = (s² + 5)/(s³ + 2s² + 4s)

b)  F(s) = s/(s⁴ + 5s² + 4)

c)  F(s) = (3s + 1)²/[5(s + 3)(s + 2)]

4.
Use the final value theorem to determine the final value for each f(t) in 3 a),
b) and c) above.
5.
Find an expression for the transfer function of the following network (assume
the op-amp is ideal):

[Figure: an op-amp network with elements R_1, C_1, R_2 and C_2, input V_i
and output V_o]
6.

In the circuit below, the switch is in a closed position for a long time before t = 0, when it is opened instantaneously. Find y(t).

[Circuit with a 10 V source, a 1 H inductor, 5 Ω resistors and a switch that opens at t = 0; diagram not reproduced.]

7.

Given the feedback system with input R(s), error E(s), feedback signal B(s), output C(s), forward block G(s) and feedback block H(s), find:

(i)   C(s)/R(s)

(ii)  E(s)/R(s)

(iii) B(s)/R(s)
8.

Using block diagram reduction techniques, find the transfer functions of the following systems:

[a) A system with blocks G1(s), G2(s), G3(s) and feedback blocks H2(s), H3(s); b) a multi-loop system with input X1 and output Y; diagrams not reproduced.]

9.

Use block diagram reduction techniques to find the transfer functions of the following systems:

[a) A system with forward blocks G1, G2, G3, G4 and feedback blocks H1, H2, H3; b) diagram not reproduced.]
Pierre Simon de Laplace (1749-1827)

The application of mathematics to problems in physics became a primary task in the century after Newton. Foremost among a host of brilliant mathematical thinkers was the Frenchman Laplace. He was a powerful and influential figure, contributing to the areas of celestial mechanics, cosmology and probability.

Laplace was the son of a farmer of moderate means, and while at the local military school, his uncle (a priest) recognised his exceptional mathematical talent. At sixteen, he began to study at the University of Caen. Two years later he travelled to Paris, where he gained the attention of the great mathematician and philosopher Jean Le Rond d'Alembert by sending him a paper on the principles of mechanics. His genius was immediately recognised, and Laplace became a professor of mathematics.
He began producing a steady stream of remarkable mathematical papers. Not
only did he make major contributions to difference equations and differential
equations but he examined applications to mathematical astronomy and to the
theory of probability, two major topics which he would work on throughout his
life. His work on mathematical astronomy before his election to the Académie
des Sciences included work on the inclination of planetary orbits, a study of
how planets were perturbed by their moons, and in a paper read to the
Academy on 27 November 1771 he made a study of the motions of the planets
which would be the first step towards his later masterpiece on the stability of
the solar system.
In 1773, before the Academy of Sciences, Laplace proposed a model of the solar system which showed how perturbations in a planet's orbit would not change its distance from the sun. For the next decade, Laplace contributed a stream of papers on planetary motion: clearing up discrepancies in the orbits of Jupiter and Saturn, showing how the moon accelerates as a function of the Earth's orbit, introducing a new calculus for discovering the motion of celestial bodies, and even a new means of computing planetary orbits which led to astronomical tables of improved accuracy.
The 1780s were the period in which Laplace produced the depth of results
which have made him one of the most important and influential scientists that
the world has seen. Laplace let it be known widely that he considered himself
the best mathematician in France. The effect on his colleagues would have
been only mildly eased by the fact that Laplace was right!
In 1784 Laplace was appointed as examiner at the Royal Artillery Corps, and
in this role in 1785, he examined and passed the 16 year old Napoleon
Bonaparte.
In 1785, he introduced a field equation in spherical harmonics, now known as Laplace's equation, which is found to be applicable to a great deal of phenomena, including gravitation, the propagation of sound, light, heat, water, electricity and magnetism.
Laplace presented his famous nebular hypothesis in 1796 in Exposition du système du monde, which viewed the solar system as originating from the contracting and cooling of a large, flattened, and slowly rotating cloud of incandescent gas. The Exposition consisted of five books: the first was on the apparent motions of the celestial bodies, the motion of the sea, and also atmospheric refraction; the second was on the actual motion of the celestial bodies; the third was on force and momentum; the fourth was on the theory of universal gravitation and included an account of the motion of the sea and the shape of the Earth; the final book gave an historical account of astronomy and included his famous nebular hypothesis which even predicted black holes.

"Your Highness, I have no need of this hypothesis." - Laplace, to Napoleon on why his works on celestial mechanics make no mention of God.

Laplace stated his philosophy of science in the Exposition:

If man were restricted to collecting facts the sciences were only a sterile nomenclature and he would never have known the great laws of nature. It is in comparing the phenomena with each other, in seeking to grasp their relationships, that he is led to discover these laws...

Exposition du système du monde was written as a non-mathematical introduction to Laplace's most important work. Laplace had already discovered the invariability of planetary mean motions. In 1786 he had proved that the eccentricities and inclinations of planetary orbits to each other always remain small, constant, and self-correcting. These and many of his earlier results
formed the basis for his great work the Traité de Mécanique Céleste, published in 5 volumes, the first two in 1799.
The first volume of the Mécanique Céleste is divided into two books, the first
on general laws of equilibrium and motion of solids and also fluids, while the
second book is on the law of universal gravitation and the motions of the
centres of gravity of the bodies in the solar system. The main mathematical
approach was the setting up of differential equations and solving them to
describe the resulting motions. The second volume deals with mechanics
applied to a study of the planets. In it Laplace included a study of the shape of
the Earth which included a discussion of data obtained from several different
expeditions, and Laplace applied his theory of errors to the results.
In 1812 he published the influential study of probability, Théorie analytique des probabilités. The work consists of two books. The first book studies generating functions and also approximations to various expressions occurring in probability theory. The second book contains Laplace's definition of probability, Bayes's rule (named by Poincaré many years later), and remarks on
mathematical expectation. The book continues with methods of finding
probabilities of compound events when the probabilities of their simple
components are known, then a discussion of the method of least squares, and
inverse probability. Applications to mortality, life expectancy, length of
marriages and probability in legal matters are given.
After the publication of the fourth volume of the Mécanique Céleste, Laplace
continued to apply his ideas of physics to other problems such as capillary
action (1806-07), double refraction (1809), the velocity of sound (1816), the
theory of heat, in particular the shape and rotation of the cooling Earth
(1817-1820), and elastic fluids (1821).
Many original documents concerning his life have been lost, and gaps in his
biography have been filled by myth. Some papers were lost in a fire that
destroyed the chateau of a descendant, and others went up in flames when
Allied forces bombarded Caen during WWII.
Laplace died on 5 March, 1827 at his home outside Paris.
Lecture 4B Transfer Functions
Stability. Unit-step response. Sinusoidal response. Arbitrary response.
Overview

Transfer functions are obtained by Laplace transforming a system's input/output differential equation, or by analysing a system directly in the s-domain. From the transfer function, we can derive many important properties of a system.
Stability

To look at stability, let's examine the rational system transfer function that ordinarily arises from linear differential equations:

H(s) = (bM s^M + b(M-1) s^(M-1) + ... + b1 s + b0) / (s^N + a(N-1) s^(N-1) + ... + a1 s + a0)    (4B.1)

Expanding H(s) in partial fractions, each pole contributes a term whose inverse transform is an exponential:

c / (s - p)  ↔  c e^(pt)    (4B.2a)

[A cos θ (s + α) - A sin θ ω] / [(s + α)² + ω²]  ↔  A e^(-αt) cos(ωt + θ)    (4B.2b)

The impulse response is always some sort of exponential.

From the time-domain expressions, it follows that h(t) converges to zero as t → ∞ if and only if all the poles satisfy:

Re{pi} < 0,  i = 1, 2, ..., N    (4B.3)

This is the condition on the poles for a stable system.
Unit-Step Response

Consider a system with rational transfer function H(s) = B(s)/A(s). If an input x(t) is applied for t ≥ 0 with no initial energy in the system, then the transform of the output is:

Y(s) = B(s)/A(s) X(s)    (4B.4)

For a unit-step input, X(s) = 1/s, so:

Y(s) = B(s) / (A(s) s)    (4B.5)

Expanding via partial fractions:

Y(s) = H(0)/s + E(s)/A(s)    (4B.6)

where E(s) is a polynomial in s and the residue of the pole at the origin was given by:

c = sY(s)|s=0 = H(0)    (4B.7)

Taking the inverse transform, the step response is:

y(t) = H(0) + ytr(t),  t ≥ 0    (4B.8)

The complete response consists of a transient part and a steady-state part.

In control system design, we may have a requirement to reach 99% of the steady-state value within a certain time, or we may wish to limit any oscillations about the steady-state value to a certain amplitude, etc. This will be examined for the case of first-order and second-order systems.
First-Order Systems

For the first-order transfer function:

H(s) = -p / (s - p)    (4B.9)

the unit-step response is:

y(t) = 1 - e^(pt),  t ≥ 0    (4B.10)

which has been written in the form of Eq. (4B.8). If the system is stable, then p lies in the open left-half plane, and the second term decays to zero. The rate at which the transient decays to zero depends on how far over to the left the pole is. Since the total response is equal to the constant 1 plus the transient response, the rate at which the step response converges to the steady-state value is equal to the rate at which the transient decays to zero. This may be an important design consideration.

An important quantity that characterizes the rate of decay of the transient is the time constant, defined as the time at which the transient has decayed to 1/e of its initial value.
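The notes use MATLAB for numerical work; as a Python sketch (the pole value below is chosen arbitrarily), the closed-form step response of Eq. (4B.10) shows the familiar time-constant behaviour:

```python
import math

# Step response y(t) = 1 - exp(p*t) of the first-order system
# H(s) = -p/(s - p), sketched for a pole at p = -2 (time constant 0.5 s).
p = -2.0
tau = -1.0 / p          # time constant of the transient

def y(t):
    """Unit-step response for t >= 0."""
    return 1.0 - math.exp(p * t)

# After one time constant the response reaches 1 - 1/e of its final value:
print(round(y(tau), 4))       # ~ 0.6321
print(y(5 * tau) > 0.99)      # within 1% of steady state after 5*tau
```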
Second-Order Systems

The standard form of a second-order lowpass transfer function is:

H(s) = ωn² / (s² + 2ζωn s + ωn²)    (4B.11)

where:

ζ = damping ratio    (4B.12a)

ωn = natural frequency    (4B.12b)

If we write:

H(s) = ωn² / [(s - p1)(s - p2)]    (4B.13)

then the poles are:

p1 = -ζωn + ωn √(ζ² - 1)    (4B.14a)

p2 = -ζωn - ωn √(ζ² - 1)    (4B.14b)

There are three cases to consider.
Distinct Real Poles (ζ > 1) - Overdamped

H(s) = ωn² / [(s - p1)(s - p2)]    (4B.15)

For a unit-step input:

Y(s) = ωn² / [(s - p1)(s - p2) s]    (4B.16)

Rewriting as partial fractions and taking the inverse Laplace transform, we get the unit-step response:

y(t) = 1 + c1 e^(p1 t) + c2 e^(p2 t),  t ≥ 0    (4B.17)

Therefore, the transient part of the response is given by the sum of two exponentials:

ytr(t) = c1 e^(p1 t) + c2 e^(p2 t)    (4B.18)

and the steady-state part is:

yss(t) = H(0) = ωn² / (p1 p2) = 1    (4B.19)

It often turns out that the transient response is dominated by one pole (the one closer to the origin - why?), so that the step response looks like that of a first-order system.
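A sketch of Eq. (4B.17), with ωn and ζ chosen arbitrarily, shows the dominance of the slow pole:

```python
import math

# Unit-step response of an overdamped second-order system (zeta > 1),
# y(t) = 1 + c1*exp(p1*t) + c2*exp(p2*t), with wn = 1 and zeta = 2
# (illustrative values, not from the notes).
wn, zeta = 1.0, 2.0
p1 = -zeta * wn + wn * math.sqrt(zeta**2 - 1)   # slow pole, nearer the origin
p2 = -zeta * wn - wn * math.sqrt(zeta**2 - 1)   # fast pole

# Residues of wn^2 / ((s - p1)(s - p2) s) at p1 and p2:
c1 = wn**2 / (p1 * (p1 - p2))
c2 = wn**2 / (p2 * (p2 - p1))

def y(t):
    return 1.0 + c1 * math.exp(p1 * t) + c2 * math.exp(p2 * t)

print(abs(y(0.0)) < 1e-12)   # the response starts at zero
# The slow pole p1 dominates: after a short time the fast term is negligible.
t = 2.0
print(abs(c2 * math.exp(p2 * t)) < 1e-3 * abs(c1 * math.exp(p1 * t)))
```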
Repeated Real Poles (ζ = 1) - Critically Damped

H(s) = ωn² / (s + ωn)²    (4B.20)

Expanding H(s)/s via partial fractions and taking the inverse transform yields the unit-step response:

y(t) = 1 - (1 + ωn t) e^(-ωn t),  t ≥ 0    (4B.21)

The transient part is:

ytr(t) = -(1 + ωn t) e^(-ωn t)    (4B.22)

and the steady-state part is:

yss(t) = H(0) = 1    (4B.23)

as before.
Complex Conjugate Poles (0 ≤ ζ < 1) - Underdamped

Defining the damped natural frequency:

ωd = ωn √(1 - ζ²)    (4B.24)

the pole locations are the complex conjugates:

p1,2 = -ζωn ± jωd    (4B.25)

and the transfer function can be written:

H(s) = ωn² / [(s + ζωn)² + ωd²]    (4B.26)

For a unit-step input, expanding in partial fractions:

Y(s) = 1/s - (s + 2ζωn) / [(s + ζωn)² + ωd²]    (4B.27)

Thus, from the transform Eq. (4B.2b), the unit-step response is:

y(t) = 1 - (ωn/ωd) e^(-ζωn t) sin(ωd t + cos⁻¹ ζ),  t ≥ 0    (4B.28)
Second-Order Pole Locations

[Figure 4B.1 - pole locations in the s-plane: for ζ = 0 the poles are at ±jωn on the imaginary axis; for 0 < ζ < 1 (e.g. ζ = 0.707) they are complex conjugates at -ζωn ± jωd on a circle of radius ωn; for ζ = 1 they meet at s = -ωn; for ζ > 1 (e.g. ζ = 5) they are distinct and real.]

We can see, for fixed ωn, that varying ζ from 0 to 1 causes the poles to move from the imaginary axis along a circular arc with radius ωn until they meet at the point s = -ωn. If ζ = 0 then the poles lie on the imaginary axis and the transient never dies out - we have a marginally stable system. As ζ is increased, the response becomes less oscillatory and more and more damped, until ζ = 1. Now the poles are real and repeated, and there is no sinusoid in the response. As ζ is increased further, the poles move apart on the real axis, with one moving to the left, and one moving toward the origin. The response becomes more and more damped due to the right-hand pole getting closer and closer to the origin.
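A short sketch (ωn = 1, with illustrative ζ values) confirms the pole locus described above:

```python
import cmath

# Poles p = -zeta*wn +/- wn*sqrt(zeta^2 - 1) of the standard second-order
# system, evaluated for several damping ratios (wn = 1, values illustrative).
wn = 1.0

def poles(zeta):
    root = cmath.sqrt(complex(zeta**2 - 1, 0.0))
    return (-zeta * wn + wn * root, -zeta * wn - wn * root)

# zeta = 0: purely imaginary poles (marginally stable)
p1, p2 = poles(0.0)
print(abs(p1.real) < 1e-12 and abs(abs(p1.imag) - wn) < 1e-12)

# 0 < zeta < 1: complex conjugates on a circle of radius wn
p1, p2 = poles(0.4)
print(abs(abs(p1) - wn) < 1e-12)

# zeta = 1: real repeated poles at s = -wn
p1, p2 = poles(1.0)
print(abs(p1 - (-wn)) < 1e-12 and p1 == p2)
```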
Sinusoidal Response

Again consider a system with rational transfer function H(s) = B(s)/A(s). To determine the system response to the sinusoidal input x(t) = C cos(ω0 t), we first find the Laplace transform of the input:

X(s) = Cs / (s² + ω0²) = Cs / [(s - jω0)(s + jω0)]    (4B.29)

The transform of the output is then:

Y(s) = B(s)/A(s) · Cs / [(s - jω0)(s + jω0)]    (4B.30)

Expanding in partial fractions:

Y(s) = (C/2) H(jω0) / (s - jω0) + (C/2) H*(jω0) / (s + jω0) + E(s)/A(s)    (4B.31)

Taking the inverse transform:

y(t) = (C/2) [H(jω0) e^(jω0 t) + H*(jω0) e^(-jω0 t)] + ytr(t)    (4B.32)

which can be written as:

y(t) = C |H(jω0)| cos(ω0 t + ∠H(jω0)) + ytr(t)    (4B.33)

When the system is stable, the ytr(t) term decays to zero and we are left with the steady-state response:

yss(t) = C |H(jω0)| cos(ω0 t + ∠H(jω0)),  t ≥ 0    (4B.34)

This result is exactly the same expression as that found when performing Fourier analysis, except there the expression was for all time, and hence there was no transient! This means that the frequency response function H(ω) can be obtained directly from the transfer function:

H(ω) = H(jω) = H(s)|s=jω    (4B.35)

Arbitrary Response

Suppose we apply an arbitrary input x(t) that has rational Laplace transform X(s) = C(s)/D(s), where the degree of C(s) is less than that of D(s). If this input is applied with no initial energy in the system, then:

Y(s) = B(s) C(s) / [A(s) D(s)]    (4B.36)

This can be written as:

Y(s) = F(s)/D(s) + E(s)/A(s)    (4B.37)

and so the output is of the form:

y(t) = yss(t) + ytr(t)    (4B.38)

The Laplace transform of an output signal always contains a steady-state term and a transient term.
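As a sketch with arbitrary values of p, ω0 and C, the steady-state expression of Eq. (4B.34) can be verified against the system's differential equation:

```python
import cmath, math

# The steady-state sinusoidal response y_ss(t) = C*|H(jw0)|*cos(w0*t + phase),
# checked against the system's differential equation.  For the first-order
# system H(s) = p/(s + p) the ODE is y' + p*y = p*x (p, w0, C illustrative).
p, w0, C = 3.0, 5.0, 2.0
Hjw = p / (1j * w0 + p)
mag, ph = abs(Hjw), cmath.phase(Hjw)

def yss(t):
    return C * mag * math.cos(w0 * t + ph)

def dyss(t):
    return -C * mag * w0 * math.sin(w0 * t + ph)

# y' + p*y must reproduce p*x(t) = p*C*cos(w0*t) at every instant:
ok = all(abs(dyss(t) + p * yss(t) - p * C * math.cos(w0 * t)) < 1e-9
         for t in [0.0, 0.1, 0.7, 2.3])
print(ok)
```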
The important point to note about this simple analysis is that the form of the transient response is determined by the poles of the system transfer function H(s), regardless of the particular form of the input signal x(t), while the form of the steady-state response is determined by the poles of the input transform X(s).

Summary

A system is stable if all the poles of the transfer function lie in the open left-half s-plane.

The complete response of a system consists of a transient response and a steady-state response. The transient response consists of the ZIR and a part of the ZSR. The steady-state response is part of the ZSR. The transfer function gives us the ZSR only!

The frequency response of a system can be obtained from the transfer function by setting s = jω.

References

Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using MATLAB, Prentice-Hall, 1997.
Exercises

1.

Determine whether the following circuit is stable for any element values, and for any bounded inputs:

[Circuit diagram not reproduced.]

2.

Suppose that a system has the following transfer function:

H(s) = 8 / (s + 4)

a) Compute the system response to the following inputs. Identify the steady-state solution and the transient solution.

(i)   x(t) = u(t)

(ii)  x(t) = t u(t)

(iii) x(t) = 2 sin(2t) u(t)

(iv)  x(t) = 2 sin(10t) u(t)

b) Use MATLAB to compute the responses numerically. Plot the responses and compare them to the responses obtained analytically in part a).
3.

Consider three systems which have the following transfer functions:

(i)   H(s) = 32 / (s² + 4s + 16)

(ii)  H(s) = 32 / (s² + 8s + 16)

(iii) H(s) = 32 / (s² + 10s + 16)

4.

For the circuit shown, compute the steady-state response yss(t) resulting from the inputs given below, assuming that there is no initial energy at time t = 0.

a) x(t) = u(t)

b) x(t) = 10 cos(t) u(t)

c) x(t) = cos(5t + π/6) u(t)

[Op-amp circuit with 10 μF capacitors and 100 kΩ and 200 kΩ resistors; diagram not reproduced.]
Oliver Heaviside (1850-1925)
The mid-Victorian age was a time when the divide between the rich and the
poor was immense (and almost insurmountable), a time of unimaginable
disease and lack of sanitation, a time of steam engines belching forth a steady
rain of coal dust, a time of horses clattering along cobblestoned streets, a time
when social services were the fantasy of utopian dreamers. It was into this
smelly, noisy, unhealthy and class-conscious world that Oliver Heaviside was
born the son of a poor man on 18 May, 1850.
A lucky marriage made Charles Wheatstone (of Wheatstone Bridge fame)
Heaviside's uncle. This enabled Heaviside to be reasonably well educated, and
at the age of sixteen he obtained his first (and last) job as a telegraph operator
with the Danish-Norwegian-English Telegraph Company. It was during this
job that he developed an interest in the physical operation of the telegraph
cable. At that time, telegraph cable theory was in a state left by Professor William Thomson (later Lord Kelvin) - a diffusion theory modelling the passage of electricity through a cable with the same mathematics that describes heat flow.
By the early 1870s, Heaviside was contributing technical papers to various publications - he had taught himself calculus, differential equations, solid geometry and partial differential equations. But the greatest impact on Heaviside was Maxwell's treatise on electricity and magnetism - Heaviside was swept up by its power.
In 1874 Heaviside resigned from his job as telegraph operator and went back to live with his parents. He was to live off his parents, and other relatives, for the rest of his life. He dedicated his life to writing technical papers on telegraphy and electrical theory - much of his work forms the basis of modern circuit theory and field theory.

"I remember my first look at the great treatise of Maxwell's... I saw that it was great, greater and greatest, with prodigious possibilities in its power." - Oliver Heaviside

In 1876 he published a paper entitled On the extra current which made it clear that Heaviside (a 26-year-old unemployed nobody) was a brilliant talent. He had extended the mathematical understanding of telegraphy far beyond
Thomson's submarine cable theory. It showed that inductance was needed to
permit finite-velocity wave propagation, and would be the key to solving the
problems of long distance telephony. Unfortunately, although Heaviside's paper was correct, it was also unreadable by all except a few - this was a trait of Heaviside that would last all his life, and led to his eventual isolation from the academic world. In 1878, he wrote a paper On electromagnets, etc. which introduced the expressions for the AC impedances of resistors, capacitors and inductors. In 1879, his paper On the theory of faults showed that by "faulting" a long telegraph line with an inductance, it would actually improve the signalling rate of the line - thus was born the idea of inductive loading, which allowed transcontinental telegraphy and long-distance telephony to be developed in the USA.
"Now all has been blended into one theory, the main equations of which can be written on a page of a pocket notebook. That we have got so far is due in the first place to Maxwell, and next to him to Heaviside and Hertz." - H.A. Lorentz

"Rigorous mathematics is narrow, physical mathematics bold and broad." - Oliver Heaviside

Although his operational calculus seemed to solve physical problems, its mathematical rigor was not at all clear. His knowledge of the physics of problems guided him correctly in many instances to the development of suitable mathematical processes. In 1887 Heaviside introduced the concept of a resistance operator, which in modern terms would be called impedance, and Heaviside introduced the symbol Z for it. He let p be equal to time-differentiation, and thus the resistance operator for an inductor would be written as pL. He would then treat p just like an algebraic

[Footnote: ...Calculus and its Application to the Integration of Linear Differential Equations, published in 1862. Heaviside independently invented (and applied) his own version of the operational calculus.]
quantity, and solve for voltage and current in terms of a power series in p. In
other words, Heavisides operators allowed the reduction of the differential
equations of a physical system to equivalent algebraic equations.
Heaviside was fond of using the unit-step as an input to electrical circuits, especially since it was a very practical matter to send such pulses down a telegraph line. The unit-step was even called the Heaviside step, and given the symbol H(t), but Heaviside simply used the notation 1. He was tantalizingly close to discovering the impulse by stating that p·1 "means a function of t which is wholly concentrated at the moment t = 0, of total amount 1. It is an impulsive function, so to speak... [it] involves only ordinary ideas of differentiation and integration pushed to their limit."
Heaviside also played a role in the debate raging at the end of the 19th century about the age of the Earth, with obvious implications for Darwin's theory of evolution. In 1862 Thomson wrote his famous paper On the secular cooling of the Earth, in which he imagined the Earth to be a uniformly heated ball cooling by heat flow. The resulting age of the Earth (100 million years) fell short of that needed by Darwin's theory, and also went against geologic and palaeontologic evidence. John Perry (a professor of mechanical engineering) redid Thomson's analysis using discontinuous diffusivity, and arrived at approximate results that could (based on the conductivity and specific heat of marble and quartz) put the age of the Earth into the billions of years. But Heaviside, using his operational calculus, was able to solve the diffusion equation for a finite spherical Earth. We now know that such a simple model is based on faulty premises - radioactive decay within the Earth maintains the thermal gradient without a continual cooling of the planet. But the power of Heaviside's methods to solve remarkably complex problems became readily apparent.

"The practice of physics should be carried on right through, to give life and reality to the problem, and to obtain the great assistance which the physics gives to the mathematics." - Oliver Heaviside, Collected Works, Vol II, p.4

Throughout his career, Heaviside released 3 volumes of work entitled Electromagnetic Theory, which was really just a collection of his papers.
Heaviside shunned all honours, brushing aside his honorary doctorate from the University of Göttingen and even refusing to accept the medal associated with his election as a Fellow of the Royal Society, in 1891.
In 1902, Heaviside wrote an article for the Encyclopedia Britannica entitled
The theory of electric telegraphy. Apart from developing the wave propagation
theory of telegraphy, he extended his essay to include wireless telegraphy,
and explained how the remarkable success of Marconi transmitting from
Ireland to Newfoundland might be due to the presence of a permanently
conducting upper layer in the atmosphere. This supposed layer was referred to
as the Heaviside layer, which was directly detected by Edward Appleton and
M.A.F. Barnett in the mid-1920s. Today we merely call it the ionosphere.
Heaviside spent much of his life being bitter at those who didn't recognise his genius - he had disdain for those that could not accept his mathematics without formal proof, and he felt betrayed and cheated by the scientific community who often ignored his results or used them later without recognising his prior work. It was with much bitterness that he eventually retired and lived out the rest of his life in Torquay on a government pension. He withdrew from public and private life, and was taunted by "insolently rude imbeciles". Objects were thrown at his windows and doors and numerous practical tricks were played on him.
"Heaviside should be remembered for his vectors, his field theory analyses, his brilliant discovery of the distortionless circuit, his pioneering applied mathematics, and for his wit and humor." - P.J. Nahin
Today, the historical obscurity of Heaviside's work is evident in the fact that his vector analysis and vector formulation of Maxwell's theory have become basic knowledge. His operational calculus was made obsolete with the 1937 publication of a book by the German mathematician Gustav Doetsch - it showed how, with the Laplace transform, Heaviside's operators could be replaced with a mathematically rigorous and systematic method.
The last five years of Heaviside's life, with both hearing and sight failing, were years of great privation and misery. He died on 3rd February, 1925.
References
Nahin, P.: Oliver Heaviside: Sage in Solitude, IEEE Press, 1988.
Lecture 5A Frequency Response

The frequency response function. Determining the frequency response from a transfer function. Magnitude responses. Phase responses. Frequency response of a lowpass second-order system. Visualization of the frequency response from a pole-zero plot. Bode plots. Approximating Bode plots using transfer function factors. Transfer function synthesis. Digital filters.

Overview

An examination of a system's frequency response is useful in several respects. It can help us determine things such as the DC gain and bandwidth, how well a system meets the stability criterion, and whether the system is robust to disturbance inputs.

Despite all this, remember that the time- and frequency-domain are inextricably related - we can't alter the characteristics of one without affecting the other. This will be demonstrated for a second-order system later.

Recall that the steady-state response of a stable system to the sinusoid A cos(ω0 t) is:

yss(t) = A |H(ω0)| cos(ω0 t + ∠H(ω0))    (5A.1)
Determining the Frequency Response from a Transfer Function

We can get the frequency response of a system by manipulating its transfer function. Consider a simple first-order transfer function:

H(s) = K / (s + p)    (5A.2)

The sinusoidal steady state corresponds to s = jω. Therefore, Eq. (5A.2) is, for the sinusoidal steady state:

H(ω) = K / (jω + p)    (5A.3)

This is a complex quantity, which we can express in polar form:

H(ω) = |H(ω)| e^(j∠H(ω))    (5A.4a)

H(ω) = |H(ω)| ∠H(ω)    (5A.4b)

The magnitude is usually plotted in decibels:

|H(ω)| dB = 20 log |H(ω)|    (5A.5)

The phase function is usually plotted in degrees.

For example, in Eq. (5A.2), let K = p = ω0, so that:

H(ω) = 1 / (1 + jω/ω0)    (5A.6)

The magnitude and phase are then:

|H(ω)| = 1 / √(1 + (ω/ω0)²)    (5A.7)

∠H(ω) = -tan⁻¹(ω/ω0)    (5A.8)
Magnitude Responses

A magnitude response is the magnitude of the transfer function for a sinusoidal steady-state input, plotted against the frequency of the input. Magnitude responses can be classified according to their particular properties. To look at these properties, we will use linear magnitude versus linear frequency plots.

For the simple first-order RC circuit that you are so familiar with, the magnitude function has three special values:

H(0) = 1,  |H(ω0)| = 1/√2 ≈ 0.707,  H(∞) → 0    (5A.9)
The frequency ω0 is known as the half-power frequency. The plot below shows the complete magnitude response of |H(ω)| as a function of ω, and the circuit that produces it:

[Figure 5A.1 - a simple lowpass filter: series R, shunt C from Vi to Vo; the magnitude falls from 1 at DC through 1/√2 at ω0.]

An idealisation of the response in Figure 5A.1, known as a brick wall, and the circuit that produces it are shown below:

[Figure 5A.2 - an ideal lowpass filter: magnitude 1 in the passband, 0 in the stopband, with cutoff at ω0.]

For the ideal filter, the output voltage remains fixed in amplitude until a critical frequency is reached, called the cutoff frequency, ω0. At that frequency, and for all higher frequencies, the output is zero. The range of frequencies with output is called the passband; the range with no output is called the stopband. The obvious classification of the filter is a lowpass filter.
Even though the response shown in the plot of Figure 5A.1 differs from the ideal, it is still known as a lowpass filter, and, by convention, the half-power frequency is taken as the cutoff frequency.

If the positions of the resistor and capacitor in the circuit of Figure 5A.1 are interchanged, then the resulting circuit is:

[Figure 5A.3 - series C, shunt R from Vi to Vo.]

Show that the transfer function is:

H(s) = s / (s + 1/RC)    (5A.10)

Letting ω0 = 1/RC, the frequency response is:

H(ω) = (jω/ω0) / (1 + jω/ω0)    (5A.11)

with:

H(0) = 0,  H(∞) = 1,  |H(ω0)| = 1/√2 ≈ 0.707    (5A.12)
The plot below shows the complete magnitude response of |H(ω)| as a function of ω, and the circuit that produces it:

[Figure 5A.4 - a simple highpass filter: the magnitude rises from 0 at DC through 1/√2 at ω0 towards 1.]

This filter is classified as a highpass filter. The ideal brick wall highpass filter is shown below:

[Figure 5A.5 - an ideal highpass filter: magnitude 0 in the stopband below the cutoff ω0, 1 in the passband above it.]
Phase Responses

Like magnitude responses, phase responses are only meaningful when we look at sinusoidal steady-state signals. A transfer function for a sinusoidal input is:

H(ω) = (Y∠θ) / (X∠0°) = (Y/X)∠θ    (5A.13)

For a first-order transfer function with a zero at -z and a pole at -p:

H(ω) = K (jω + z) / (jω + p)    (5A.14)

the phase is:

∠H(ω) = ∠K + tan⁻¹(ω/z) - tan⁻¹(ω/p)    (5A.15)

We use the sign of this phase angle to classify systems. Those giving positive θ are known as lead systems, those giving negative θ as lag systems.

For the simple lowpass RC circuit, for which H(ω) is given by Eq. (5A.6), we have:

φ(ω) = -tan⁻¹(ω/ω0)    (5A.16)

At ω = ω0, the phase is -tan⁻¹(1) = -45°.
A complete plot of the phase response is shown below:

[Figure 5A.6 - lagging phase response for a simple lowpass filter: 0° at DC, -45° at ω0, approaching -90° at high frequency.]

For the highpass circuit of Figure 5A.3, show that the phase is given by:

φ(ω) = 90° - tan⁻¹(ω/ω0)    (5A.17)

The phase response has the same shape as Figure 5A.6 but is shifted upward by 90°:

[Figure 5A.7 - leading phase response for a simple highpass filter: 90° at DC, 45° at ω0, approaching 0° at high frequency.]

The angle φ is positive for all ω, and so the circuit is a lead circuit.
Frequency Response of a Lowpass Second-Order System

Starting from the usual definition of a lowpass second-order system transfer function:

H(s) = ωn² / (s² + 2ζωn s + ωn²)    (5A.18)

the frequency response is obtained by setting s = jω:

H(jω) = ωn² / (ωn² - ω² + j2ζωn ω)    (5A.19)

The magnitude response is:

|H(jω)| = ωn² / √[(ωn² - ω²)² + (2ζωn ω)²]    (5A.20)

and the phase response is:

∠H(jω) = -tan⁻¹ [2ζωn ω / (ωn² - ω²)]    (5A.21)
The magnitude and phase functions are plotted below for ζ = 0.4:

[Figure 5A.8 - typical magnitude and phase responses of a lowpass second-order transfer function: the magnitude peaks at ωr = ωn √(1 - 2ζ²) and then rolls off at -40 dB/decade; the phase falls from 0° through -90° at ωn towards the -180° asymptote.]

The special values are:

|H(0)| = 1,  |H(jωn)| = 1/(2ζ),  |H(∞)| = 0    (5A.22)

and for large ω, the magnitude decreases at a rate of -40 dB per decade, which is sometimes described as two-pole rolloff.

For the phase function, we see that:

φ(0) = 0°,  φ(ωn) = -90°,  φ(∞) = -180°    (5A.23)
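The special magnitude values in Eq. (5A.22) can be checked numerically; as a sketch (ωn = 1 and ζ = 0.4, matching the plotted example):

```python
import math

# Magnitude response |H(jw)| = wn^2 / sqrt((wn^2 - w^2)^2 + (2*zeta*wn*w)^2)
# of the lowpass second-order system, checked at its special frequencies.
wn, zeta = 1.0, 0.4

def mag(w):
    return wn**2 / math.sqrt((wn**2 - w**2)**2 + (2 * zeta * wn * w)**2)

print(abs(mag(0.0) - 1.0) < 1e-12)              # |H(0)| = 1
print(abs(mag(wn) - 1 / (2 * zeta)) < 1e-12)    # |H(j*wn)| = 1/(2*zeta)

# The resonant peak sits at wr = wn*sqrt(1 - 2*zeta^2) (real for zeta < 0.707):
wr = wn * math.sqrt(1 - 2 * zeta**2)
print(mag(wr) >= mag(0.9 * wr) and mag(wr) >= mag(1.1 * wr))
```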
Visualization of the Frequency Response from a Pole-Zero Plot

The frequency response can be visualised in terms of the pole locations of the transfer function. For example, for a second-order lowpass system:

H(s) = ωn² / (s² + 2ζωn s + ωn²)    (5A.24)

the poles are located on a circle of radius ωn and at an angle with respect to the negative real axis of θ = cos⁻¹ ζ. These complex conjugate pole locations are shown below:

[Figure 5A.9 - complex conjugate poles p and p* on a circle of radius ωn in the left-half s-plane.]

In terms of the poles shown in Figure 5A.9, the transfer function is:

H(s) = ωn² / [(s - p)(s - p*)]    (5A.25)
With s = jω, the two pole factors in this equation become:

jω - p = m1∠φ1  and  jω - p* = m2∠φ2    (5A.26)

where m1 and m2 are the lengths of the vectors drawn from the poles to the point jω on the imaginary axis, and φ1 and φ2 are their angles. The magnitude function is then:

|H(ω)| = ωn² / (m1 m2)    (5A.27)

and the phase function is:

∠H(ω) = -(φ1 + φ2)    (5A.28)

[Figure 5A.10 - vectors m1∠φ1 and m2∠φ2 drawn from the poles p and p* to three different points jω on the imaginary axis, together with the resulting magnitude and phase responses.]

Figure 5A.10 shows three different frequencies - one below ωn, one at ωn, and one above ωn. From this construction we can see that the short length of m1 near the frequency ωn is the reason why the magnitude function reaches a peak near ωn. These plots are useful in visualising the frequency response of the circuit.
Bode Plots

Bode* plots are plots of the magnitude function |H(ω)| dB = 20 log |H(ω)| and the phase function ∠H(ω), where the scale of the frequency variable (usually ω) is logarithmic. The use of logarithmic scales has several desirable properties.

We normally don't deal with equations when drawing Bode plots - we rely on our knowledge of the asymptotic approximations for the handful of factors that go to make up a transfer function.

* Dr. Hendrik Bode grew up in Urbana, Illinois, USA, where his name is pronounced boh-dee.
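A sketch of the single-pole factor (break frequency chosen arbitrarily) shows how accurate these asymptotic approximations are:

```python
import math

# Asymptotic Bode magnitude of a single pole factor 1/(1 + j*w/wn):
# 0 dB below the break frequency, -20 dB/decade above it.  The sketch
# compares the asymptote with the exact curve (wn = 100 rad/s, arbitrary).
wn = 100.0

def exact_db(w):
    return -20 * math.log10(math.sqrt(1 + (w / wn)**2))

def asymptote_db(w):
    return 0.0 if w <= wn else -20 * math.log10(w / wn)

# Far from the break frequency the asymptote is very accurate:
print(abs(exact_db(wn / 100) - asymptote_db(wn / 100)) < 0.01)
print(abs(exact_db(100 * wn) - asymptote_db(100 * wn)) < 0.01)
# The worst-case error is about 3 dB, right at the break frequency:
print(abs(exact_db(wn) - asymptote_db(wn) - (-3.0103)) < 1e-3)
```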
Approximating Bode Plots using Transfer Function Factors

The table below gives transfer function factors and their corresponding magnitude asymptotes and phase linear approximations:

- A constant factor K: the magnitude is a flat line at 20 log |K| dB; the phase is a flat line at 0° (or ±180° for negative K).

- A factor s/ωn (zero at the origin): the magnitude is a straight line of slope +20 dB/decade passing through 0 dB at ωn; the phase is a constant +90°. The reciprocal factor has slope -20 dB/decade and phase -90°.

- A factor s/ωn + 1 (real zero): the magnitude asymptote is 0 dB below ωn and +20 dB/decade above it; the phase approximation rises linearly from 0° at 0.1 ωn to +90° at 10 ωn, passing through +45° at ωn. The reciprocal factor (real pole) mirrors this, with -20 dB/decade above ωn and phase falling to -90°.

- A quadratic factor 1/[(s/ωn)² + 2ζ(s/ωn) + 1] (complex pole pair): the magnitude asymptote is 0 dB below ωn and -40 dB/decade above it; the phase approximation falls from 0° at 0.1 ωn to -180° at 10 ωn, passing through -90° at ωn.
5A.15
Transfer Function Synthesis
One of the main reasons for using Bode plots is that we can synthesise a
desired frequency response by placing poles and zeros appropriately. This is
easy to do asymptotically, and the results can be checked using MATLAB.
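The notes check such designs with MATLAB; an equivalent sketch in Python with SciPy is below. The two zeros and two poles are illustrative break frequencies, not values prescribed by the notes:

```python
import numpy as np
from scipy import signal

# Hypothetical response with zeros at 10^2 and 10^5 rad/s and poles at
# 10^3 and 10^4 rad/s -- illustrative break frequencies only.
num = np.polymul([1/1e2, 1], [1/1e5, 1])   # (s/10^2 + 1)(s/10^5 + 1)
den = np.polymul([1/1e3, 1], [1/1e4, 1])   # (s/10^3 + 1)(s/10^4 + 1)
sys = signal.TransferFunction(num, den)

w = np.logspace(1, 6, 300)                 # 10 to 10^6 rad/s
w, mag, phase = signal.bode(sys, w)

# The asymptotic sketch predicts 0 dB at low frequency and a plateau of
# about 20 dB between the inner break frequencies.
print(f"gain at w=10 rad/s: {mag[0]:.1f} dB, peak gain: {mag.max():.1f} dB")
```

The exact curve can then be overlaid on the straight-line asymptotes to confirm the synthesis.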
Example
[desired asymptotic magnitude response $|H|$ in dB versus $\omega$ (log scale): 0 dB at low frequency, rising to 20 dB between break frequencies, then returning to 0 dB, with breaks at $10^2$, $10^3$, $10^4$ and $10^5$ rad/s]
Figure 5A.11
5A.16
The composite plot may be decomposed into four first-order factors as shown
below:
Decomposing a Bode plot into first-order factors
[the composite magnitude response $|H|$ in dB redrawn as four first-order asymptotic factors, numbered 1 to 4, with break frequencies at $10^2$, $10^3$, $10^4$ and $10^5$ rad/s (log scale)]
Figure 5A.12
Those marked 1 and 4 represent zero factors, while those marked 2 and 3 are
pole factors. The pole-zero plot corresponding to these factors is shown below:
The pole-zero plot
corresponding to the
Bode plot
[pole-zero plot: zeros (factors 1 and 4) and poles (factors 2 and 3) on the negative real axis of the s-plane]
Figure 5A.13
5A.17
From the break frequencies given, we have:

$$H(\omega) = \frac{\left(1 + j\omega/10^2\right)\left(1 + j\omega/10^5\right)}{\left(1 + j\omega/10^3\right)\left(1 + j\omega/10^4\right)} \tag{5A.29}$$

or, in terms of $s$:

$$H(s) = \frac{\left(1 + s/10^2\right)\left(1 + s/10^5\right)}{\left(1 + s/10^3\right)\left(1 + s/10^4\right)} \tag{5A.30}$$

This can be split into two first-order sections:

$$H(s) = H_1(s)H_2(s) = \frac{\left(s + 10^2\right)\left(s + 10^5\right)}{\left(s + 10^3\right)\left(s + 10^4\right)} \tag{5A.31}$$
[normalised op-amp circuit realising $H(s)$, from $V_i$ to $V_o$; element values shown include capacitances of $10^{-3}$, $10^{-4}$ and $10^{-5}$]
Figure 5A.14
5A.18
To obtain realistic element values, we need to scale the components so that the
transfer function remains unaltered. This is accomplished with the equations:
Magnitude scaling is required to get realistic element values:

$$C_{\text{new}} = \frac{1}{k_m}C_{\text{old}} \quad\text{and}\quad R_{\text{new}} = k_m R_{\text{old}} \tag{5A.32}$$
Since the capacitors are to have the value 10 nF, this means $k_m = 10^8$. The
element values that result are shown below and the design is complete:
A realistic implementation of the specifications

[the scaled circuit from $V_i$ to $V_o$: resistors of 1 MΩ, 100 kΩ, 1 kΩ and 10 kΩ, with four 10 nF capacitors]
Figure 5A.15
In this simple example, the response only required placement of the poles and
zeros on the real axis. However, complex pole-pair placement is not unusual in
design problems.
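The scaling relations of Eq. (5A.32) are easy to apply programmatically; a minimal Python sketch (the starting element values are illustrative normalised values, not the ones in the notes):

```python
# Magnitude scaling per Eq. (5A.32): Rnew = km*Rold, Cnew = Cold/km.
# The starting element values below are illustrative normalised values.
def magnitude_scale(R_old, C_old, km):
    """Scale a resistance (ohms) and capacitance (farads) by km."""
    return km * R_old, C_old / km

km = 1e8                          # the scaling factor used in the notes
R_new, C_new = magnitude_scale(1.0, 1.0, km)
print(R_new, C_new)               # 1e8 ohms and 1e-8 F (10 nF)
```

Because resistor impedances scale by $k_m$ and capacitor impedances by $k_m$ as well (via $C/k_m$), the dimensionless transfer function is unchanged.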
5A.19
Digital Filters
Digital filtering involves sampling, quantising and coding of the input analog signal (using an analog-to-digital converter, or ADC for short). Once we have converted voltages to mere numbers, we are free to do any processing on them that we desire. Usually, the signal's spectrum is found using a fast Fourier transform, or FFT. The spectrum can then be modified by scaling the amplitudes and adjusting the phase of each sinusoid. An inverse FFT can then be performed, and the processed numbers are converted back into analog form (using a digital-to-analog converter, or DAC). In modern digital signal processors, an operation corresponding to a fast convolution is also sometimes employed; that is, the signal is convolved in the time-domain in real-time.
The components of a digital filter are shown below:
The components of
a digital filter
[signal chain: $v_i$ → anti-alias filter → ADC → digital signal processor → DAC → reconstruction filter → $v_o$]
Figure 5A.16
The digital signal processor can be custom-built digital circuitry, or it can be a general-purpose computer. There are many advantages of digitally processing analog signals:
1. A digital filter may be just a small part of a larger system, so it makes sense
to implement it in software rather than hardware.
2. The cost of digital implementation is often considerably lower than that of
its analog counterpart (and it is falling all the time).
3. The accuracy of a digital filter is dependent only on the processor word
length, the quantising error in the ADC and the sampling rate.
4. Digital filters are generally unaffected by such factors as component
accuracy, temperature stability, long-term drift, etc. that affect analog filter
circuits.
5A.20
5. Many circuit restrictions imposed by physical limitations of analog devices
can be circumvented in a digital processor.
6. Filters of high order can be realised directly and easily.
7. Digital filters can be modified easily by changing the algorithm of the
computer.
8. Digital filters can be designed that are always stable.
9. Filter responses can be made which always have linear phase (constant
delay), regardless of the magnitude response.
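The FFT-based processing described above can be sketched in a few lines of Python with NumPy; the signal, sample rate and cutoff below are illustrative:

```python
import numpy as np

# Frequency-domain filtering as described above: FFT, modify the
# spectrum, inverse FFT. Signal, sample rate and cutoff are illustrative.
fs = 1000.0                               # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

X = np.fft.rfft(x)                        # spectrum of the signal
f = np.fft.rfftfreq(len(x), 1 / fs)
X[f > 100] = 0                            # crude ideal lowpass at 100 Hz
y = np.fft.irfft(X, n=len(x))             # back to the time domain

# The 300 Hz component is removed; the 50 Hz component survives.
Y = np.abs(np.fft.rfft(y))
print(Y[f == 50][0], Y[f == 300][0])
```

In a real system the DAC output would then pass through the reconstruction filter of Figure 5A.16.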
Summary
Bode plots are magnitude (dB) and phase responses drawn on a semi-log
scale, enabling the easy analysis or design of high-order systems.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
5A.21
Exercises
1.
With respect to a reference frequency $f_0 = 20\ \text{Hz}$, find the frequency which is (a) 2 decades above $f_0$ and (b) 3 octaves below $f_0$.
2.
Express the following magnitude ratios in dB: (a) 1, (b) 40, (c) 0.5
3.
Draw the approximate Bode plots (both magnitude and phase) for the transfer
functions shown. Use MATLAB to draw the exact Bode plots and compare.
(a) $G(s) = 10$

(b) $G(s) = \dfrac{4}{s}$

(c) $G(s) = \dfrac{1}{10s + 1}$

(d) $G(s) = \dfrac{1}{10s - 1}$

(e) $G(s) = 5s + 1$

(f) $G(s) = 5s - 1$
Note that the magnitude plots for the transfer functions (c) and (d); (e) and (f)
are the same. Why?
4.
Prove that if $G(s)$ has a single pole at $s = -1/\tau$, the asymptotes of the log magnitude response versus log frequency intersect at $\omega = 1/\tau$. Prove this not only analytically but also graphically using MATLAB.
5A.22
5.
Make use of the property that the logarithm converts multiplication and
division into addition and subtraction, respectively, to draw the Bode plot for:
$$G(s) = \frac{100\left(s + 1\right)}{s\left(0.01s + 1\right)}$$
Use asymptotes for the magnitude response and a linear approximation for the
phase response.
6.
Draw the exact Bode plot using MATLAB (magnitude and phase) for
$$G(s) = \frac{100 s^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
when
7.
Given
(a) $G(s) = \dfrac{206.66\left(s + 1\right)}{s^2\left(0.1s + 1\right)\left(94.6s + 1\right)}$

(b) $G(s) = \dfrac{4\left(s + 1.5\right)\left(s + 2\right)}{s^2\left(s + 10\right)}$

Draw the approximate Bode plots and from these graphs find $|G|$ and $\angle G$ at (i) $\omega = 0.1\ \text{rad s}^{-1}$, (ii) $\omega = 1\ \text{rad s}^{-1}$, (iii) $\omega = 10\ \text{rad s}^{-1}$, (iv) $\omega = 100\ \text{rad s}^{-1}$.
5A.23
8.
The experimental responses of two systems are given below. Plot the Bode
diagrams and identify the transfer functions.
(a)

ω (rad s⁻¹)   |G1| (dB)   ∠G1 (°)
0.1            40          -92
0.2            34          -95
0.5            25          -100
1              20          -108
2              14          -126
3              10          -138
5              2           -160
10             -9          -190
20             -23         -220
30             -32         -235
40             -40         -243
50             -46         -248
100            -64         -258

(b)

ω (rad s⁻¹)   |G2| (dB)   ∠G2 (°)
0.01           -26         87
0.02           -20         84
0.04           -14         79
0.07           -10         70
0.1            -7          61
0.2            -3          46
0.4            -1          29
0.7            -0.3        20
1              0           17
2              0           17
4              0           25
7              -2          36
10             -3          46
20             -7          64
40             -12         76
100            -20         84
500            -34         89
1000           -40         89
9.
Given $G(s) = \dfrac{K}{s\left(s + 5\right)}$
(a) Plot the closed-loop frequency response of this system using unity feedback when $K = 1$. What is the 3 dB bandwidth of the system?

(b) Plot the closed-loop frequency response when $K$ is increased to $K = 100$. What is the effect on the frequency response?
10.
The following measurements were taken for an open-loop system:
(i) $\omega = 1$: $|G| = 6\ \text{dB}$, $\angle G = -25°$

(ii) $\omega = 2$: $|G| = -18\ \text{dB}$, $\angle G = -127°$
5A.24
11.
An amplifier has the following frequency response. Find the transfer function.
[Bode plots: $|H(\omega)|$ in dB (scale $-40$ to $40$ dB) and $\arg H(\omega)$ in degrees (scale $-100°$ to $100°$), versus $\omega$ in rad/s on a log scale]
5B.1
Lecture 5B Time-Domain Response
Steady-state error. Transient response. Second-order step response. Settling
time. Peak time. Percent overshoot. Rise time and delay time.
Overview
Control systems employing feedback usually operate to bring the output of the
system being controlled in line with the reference input. For example, a maze
rover may receive a command to go forward 4 units: how does it respond?
Can we control the dynamic behaviour of the rover, and if we can, what are the
limits of the control? Obviously we cannot get a rover to move infinitely fast,
so it will never follow a step input exactly. It must undergo a transient just
like an electric circuit with storage elements. However, with feedback, we may
be able to change the transient response to suit particular requirements, like
time taken to get to a certain position within a small tolerance, not
overshooting the mark and hitting walls, etc.
Steady-State Error
One of the main objectives of control is for the system output to follow the
system input. The difference between the input and output, in the steady-state,
is termed the steady-state error:
Steady-state error defined:

$$e(t) = r(t) - c(t), \qquad e_{ss} = \lim_{t\to\infty} e(t) \tag{5B.1}$$
5B.2
Consider the unity-feedback system:
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through $G(s)$, to $C(s)$, with unity feedback]
Figure 5B.1
System type defined
The type of the control system, or simply system type, is the number of poles that $G(s)$ has at $s = 0$. For example:

$$G(s) = \frac{10\left(1 + 3s\right)}{s\left(s^2 + 2s + 2\right)} \quad \text{type 1}, \qquad G(s) = \frac{4}{s^3\left(s + 2\right)} \quad \text{type 3} \tag{5B.2}$$
When the input to the control system in Figure 5B.1 is a step function with magnitude $R$, then $R(s) = R/s$ and the steady-state error is:

$$e_{ss} = \lim_{s\to 0}\frac{sR(s)}{1 + G(s)} = \lim_{s\to 0}\frac{R}{1 + G(s)} = \frac{R}{1 + \lim_{s\to 0}G(s)} \tag{5B.3}$$
5B.3
For convenience, we define the step-error constant, $K_P$ (only defined for a step input), as:

$$K_P = \lim_{s\to 0} G(s) \tag{5B.4}$$

so that:

$$e_{ss} = \frac{R}{1 + K_P} \tag{5B.5}$$
We see that for the steady-state error to be zero, we require $K_P = \infty$. This will only occur if there is at least one pole of $G(s)$ at the origin. We can summarise the errors of a unity-feedback system to a step input as:

$$\text{type 0 system:}\ e_{ss} = \frac{R}{1 + K_P} = \text{constant}, \qquad \text{type 1 or higher system:}\ e_{ss} = 0 \tag{5B.6}$$
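Eqs. (5B.4)–(5B.6) can be checked numerically; a minimal Python sketch, using an illustrative type-0 plant (not one from the notes):

```python
# Steady-state error of a unity-feedback system to a step of magnitude R,
# per Eqs. (5B.4)-(5B.6). G(s) is an illustrative type-0 plant.
def G(s):
    return 4.0 / (s + 2.0)        # no pole at s = 0, so type 0

R = 1.0
Kp = G(0.0)                        # step-error constant: lim s->0 of G(s)
ess = R / (1.0 + Kp)
print(Kp, ess)                     # Kp = 2.0, ess = 1/3
```

For a type-1 plant the limit is infinite and the same formula gives zero steady-state error.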
Transient Response
Consider a maze rover (MR) described by the differential equation:
$$\frac{dv(t)}{dt} = \frac{1}{M}x(t) - \frac{k_f}{M}v(t) \tag{5B.7}$$

[block diagram: $X(s)$ through the transfer function $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]
Figure 5B.2
5B.4
Now, from the diagram above, it appears that our input to the rover affects the
velocity in some way. But we need to control the output position, not the
output velocity.
We're therefore actually interested in the following model of the MR:
MR transfer function
for position output
[block diagram: $X(s)$ through $\dfrac{1/M}{s + k_f/M}$ to $V(s)$, then through $\dfrac{1}{s}$ to $C(s)$]
Figure 5B.3
This should be obvious, since position $c(t)$ is given by:

$$c(t) = \int_{-\infty}^{t} v(\tau)\,d\tau \tag{5B.8}$$
[block diagram: $X(s)$ through $\dfrac{1/M}{s\left(s + k_f/M\right)}$ to $C(s)$]
Figure 5B.4
The whole point of modelling the rover is so that we can control it. Suppose we wish to build a maze rover position control system. We will choose for simplicity a unity-feedback system, and place a controller in the feed-forward path in front of the MR's input. Such a control strategy is termed series compensation.
5B.5
A block diagram of the proposed feedback system, with its unity-feedback and
series compensation controller is:
Simple MR position control scheme

[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the controller $G_c(s)$, giving $X(s)$, then through the plant $\dfrac{1/M}{s\left(s + k_f/M\right)}$ to $C(s)$]
Figure 5B.5
With the controller chosen as a simple gain, $G_c(s) = K_P$, the closed-loop transfer function is:

$$\frac{C(s)}{R(s)} = \frac{K_P/M}{s\left(s + k_f/M\right) + K_P/M} \tag{5B.9}$$

which can be manipulated into the standard form for a second-order transfer function:

Second-order control system

$$\frac{C(s)}{R(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{5B.10}$$
We can still see what sort of performance this controller has in the time-domain as $K_P$ is varied. In fact, the goal of the controller design is to choose a suitable $K_P$ (and therefore a suitable time-domain response).
5B.6
For a unit-step function input, $R(s) = 1/s$, and the output response of the system is obtained by taking the inverse Laplace transform of the output transform:

$$C(s) = \frac{\omega_n^2}{s\left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)} \tag{5B.11}$$

We have seen previously that the result for an underdamped system is:

$$c(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\sin\left(\omega_n\sqrt{1-\zeta^2}\,t + \cos^{-1}\zeta\right) \tag{5B.12}$$
5B.7
We will now examine what sort of criteria we usually specify, with respect to
the following diagram:
Step response definitions

[step-response curve $c(t)$ with levels 0.10, 0.50, 0.90, 0.95, 1.00 and 1.05 marked, together with the delay time $t_d$, rise time $t_r$, peak time $t_p$ and settling time $t_s$]

Figure 5B.6

Percent overshoot:

$$\text{P.O.} = \text{percent overshoot} = \frac{c_{\max} - c_{ss}}{c_{ss}} \times 100\% \tag{5B.13a}$$

Delay time: $t_d$ is the time taken for the response to first reach 50% of the final value. (5B.13b)

Rise time: $t_r$ is the time taken for the response to rise from 10% to 90% of the final value. (5B.13c)

Settling time: $t_s$ is the time after which the response stays within a given band about the final value:

$$\frac{\left|c\left(t \ge t_s\right) - c_{ss}\right|}{c_{ss}} \le 5\% \tag{5B.13d}$$
5B.8
The poles of Eq. (5B.10) are given by:
$$p_1, p_2 = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} = -\sigma \pm j\omega_d \tag{5B.14}$$

where the damping factor is:

$$\sigma = \zeta\omega_n \tag{5B.15}$$

and the damped natural frequency is:

$$\omega_d = \omega_n\sqrt{1-\zeta^2} \tag{5B.16}$$

so that:

$$\omega_n = \sqrt{\sigma^2 + \omega_d^2} \tag{5B.17}$$

$$\zeta = \frac{\sigma}{\omega_n} \tag{5B.18}$$

[s-plane plot of the pole pair at $-\sigma \pm j\omega_d$, with $j\omega_d = j\omega_n\sqrt{1-\zeta^2}$ and $-\sigma = -\zeta\omega_n$]
Figure 5B.7
5B.9
The effect of $\zeta$ is readily apparent from the following graph of the step-input response:

Second-order step response for varying damping ratio

[family of unit-step responses $c(t)$ versus $\omega_n t$ (0 to 13) for $\zeta = 0.1$, 0.5, 1.0, 1.5 and 2.0; the response peaks near 1.6 for $\zeta = 0.1$ and becomes progressively more damped as $\zeta$ increases]
Figure 5B.8
5B.10
Settling Time
The settling time, t s , is the time required for the output to come within and
stay within a given band about the actual steady-state value. This band is
usually expressed as a percentage p of the steady-state value. To derive an
estimate for the settling time, we need to examine the step-response more
closely.
The standard, second-order lowpass transfer function of Eq. (5B.10) has the s-plane plot shown below if $0 < \zeta < 1$:

Pole locations showing definition of the angle $\theta$

[s-plane plot of the pole pair at distance $\omega_n$ from the origin, with real part $-\zeta\omega_n$, imaginary part $j\omega_n\sqrt{1-\zeta^2}$, and the angle $\theta$ measured from the negative real axis]

Figure 5B.9

From this geometry:

$$\zeta = \cos\theta \tag{5B.19}$$

and the unit-step response, Eq. (5B.12), can be written:

$$c(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\sin\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right) \tag{5B.20}$$
5B.11
The curves $1 \pm \dfrac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}$ are the envelope curves of the transient response for a unit-step input. The response curve $c(t)$ always remains within a pair of the envelope curves, as shown below:

[plot of the envelope curves $1 \pm \dfrac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}$ versus $t$, marked at $T$, $2T$, ..., $5T$ where $T = \dfrac{1}{\zeta\omega_n}$, bounding the unit-step response]
Figure 5B.10
To determine the settling time, we need to find the time it takes for the
response to fall within and stay within a certain band about the steady-state
value. This time depends on and n in a non-linear fashion, because of the
oscillatory response. It can be obtained numerically from the responses shown
in Figure 5B.8.
Pair of envelope curves for the unit-step response of a lowpass second-order underdamped system
5B.12
One way of analytically estimating the settling time with a simple equation is to consider only the minima and maxima of the step-response. For $0 < \zeta < 1$, the step-response is the damped sinusoid shown below:

Exponential curves intersecting the maxima and minima of the step-response

[damped sinusoidal step-response with the curves $1 + e^{-\zeta\omega_n t}$ and $1 - e^{-\zeta\omega_n t}$ drawn through its maxima and minima, about the steady-state value]
Figure 5B.11
The dashed curves in Figure 5B.11 represent the loci of maxima and minima of
the step-response. The maxima and minima are found by differentiating the
time response, Eq. (5B.20), and equating to zero:
$$\frac{dc(t)}{dt} = \frac{\zeta\omega_n e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\sin\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right) - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\,\omega_n\sqrt{1-\zeta^2}\cos\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right) = 0 \tag{5B.21}$$
5B.13
Dividing through by the common term and rearranging, we get:

$$\sqrt{1-\zeta^2}\cos\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right) = \zeta\sin\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right)$$

so that:

$$\tan\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right) = \frac{\sqrt{1-\zeta^2}}{\zeta} \tag{5B.22}$$

This can be written as:

$$\tan\left(\omega_n\sqrt{1-\zeta^2}\,t + \theta\right) = \tan\left(n\pi + \theta\right) \tag{5B.23}$$

since, from the geometry of Figure 5B.9:

$$\tan\theta = \frac{\sqrt{1-\zeta^2}}{\zeta} \tag{5B.24}$$

Substituting Eq. (5B.24) into Eq. (5B.23), and solving for $t$, we obtain:

$$t = \frac{n\pi}{\omega_n\sqrt{1-\zeta^2}}, \qquad n = 0, 1, 2, \ldots \tag{5B.25}$$
Eq. (5B.25) gives the times at which the maxima and minima of the step-response occur. Since $c(t)$ in Eq. (5B.20) is only defined for $t \ge 0$, Eq. (5B.25) only gives valid results for $t \ge 0$.
5B.14
Substituting Eq. (5B.25) into Eq. (5B.20), we get:

$$c\!\left(\frac{n\pi}{\omega_n\sqrt{1-\zeta^2}}\right) = 1 - \frac{1}{\sqrt{1-\zeta^2}}\exp\!\left(-\zeta\omega_n\frac{n\pi}{\omega_n\sqrt{1-\zeta^2}}\right)\sin\left(n\pi + \theta\right) \tag{5B.26}$$

which simplifies to:

$$c\!\left(\frac{n\pi}{\omega_n\sqrt{1-\zeta^2}}\right) = 1 - \frac{e^{-n\pi\zeta/\sqrt{1-\zeta^2}}}{\sqrt{1-\zeta^2}}\sin\left(n\pi + \theta\right) \tag{5B.27}$$

Since:

$$\sin\left(n\pi + \theta\right) = \begin{cases} -\sin\theta, & n \text{ odd} \\ \phantom{-}\sin\theta, & n \text{ even} \end{cases} \tag{5B.28}$$

the maxima and minima are:

$$c_1 = 1 + \frac{1}{\sqrt{1-\zeta^2}}\,e^{-n\pi\zeta/\sqrt{1-\zeta^2}}\sin\theta, \qquad n \text{ odd} \tag{5B.29a}$$

$$c_2 = 1 - \frac{1}{\sqrt{1-\zeta^2}}\,e^{-n\pi\zeta/\sqrt{1-\zeta^2}}\sin\theta, \qquad n \text{ even} \tag{5B.29b}$$

and, since $\zeta = \cos\theta$:

$$\sin\theta = \sqrt{1-\zeta^2} \tag{5B.30}$$
5B.15
Substituting this equation for $\sin\theta$ into Eqs. (5B.29), we get:

$$c_1 = 1 + e^{-n\pi\zeta/\sqrt{1-\zeta^2}}, \qquad n \text{ odd} \tag{5B.31a}$$

$$c_2 = 1 - e^{-n\pi\zeta/\sqrt{1-\zeta^2}}, \qquad n \text{ even} \tag{5B.31b}$$

Eq. (5B.31a) and Eq. (5B.31b) are, respectively, the relative maximum and minimum values of the step-response, with the times for the maxima and minima given by Eq. (5B.25). But these values will be exactly the same as those given by the following exponential curves:

$$c_1(t) = 1 + e^{-\zeta\omega_n t} \tag{5B.32a}$$

$$c_2(t) = 1 - e^{-\zeta\omega_n t} \tag{5B.32b}$$

evaluated at the times:

$$t = \frac{n\pi}{\omega_n\sqrt{1-\zeta^2}}, \qquad n = 0, 1, 2, \ldots \tag{5B.33}$$
Since the exponential curves, Eqs (5B.32), pass through the maxima and
minima of the step-response, they can be used to approximate the extreme
bounds of the step-response (note that the response actually goes slightly
outside the exponential curves, especially after the first peak the exponential
curves are only an estimate of the bounds).
Exponential curves
passing through the
maxima and minima
of the step-response
5B.16
We can make an estimate of the settling time by simply determining the time at which $c_1(t)$ [or $c_2(t)$] enters the band $1 - \delta \le c(t) \le 1 + \delta$ about the steady-state value, as indicated graphically below:

Graph of an underdamped step-response showing exponential curves bounding the maxima and minima

[step-response bounded by the curves $1 + e^{-\zeta\omega_n t}$ and $1 - e^{-\zeta\omega_n t}$, with the band $1 \pm \delta$ about the steady-state value and the settling time $t_s$ marked]
Figure 5B.12
The exponential terms in Eqs. (5B.32) represent the deviation from the steady-state value. Since the exponential response is monotonic, it is sufficient to calculate the time when the magnitude of the exponential is equal to the required error $\delta$. This time is the settling time, $t_s$:

$$e^{-\zeta\omega_n t_s} = \delta \tag{5B.34}$$

Taking the natural logarithm of both sides and solving for $t_s$ gives the p-percent settling time for a step-input:

Settling time for a second-order system:

$$t_s = \frac{-\ln\delta}{\zeta\omega_n}, \qquad \text{where } \delta = \frac{p}{100} \tag{5B.35}$$
5B.17
Peak Time

The peak time, $t_p$, at which the response has a maximum overshoot, is given by Eq. (5B.25) with $n = 1$ (the first local maximum):

$$t_p = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}} \tag{5B.36}$$

Percent Overshoot

The magnitudes of the overshoots can be determined using Eq. (5B.31a). The maximum value is obtained by letting $n = 1$. Therefore, the maximum value is:

$$c(t)_{\max} = 1 + e^{-\pi\zeta/\sqrt{1-\zeta^2}} \tag{5B.37}$$

so that:

$$\text{maximum overshoot} = e^{-\pi\zeta/\sqrt{1-\zeta^2}} \tag{5B.38}$$

and the percent overshoot for a second-order system is:

$$\text{P.O.} = e^{-\pi\zeta/\sqrt{1-\zeta^2}} \times 100\% \tag{5B.39}$$
5B.18
Summary

Settling time: $t_s = \dfrac{-\ln\delta}{\zeta\omega_n}$, where $\delta = \dfrac{p}{100}$

Peak time: $t_p = \dfrac{\pi}{\omega_n\sqrt{1-\zeta^2}}$

Percent overshoot: $\text{P.O.} = e^{-\pi\zeta/\sqrt{1-\zeta^2}} \times 100\%$
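These three formulas are easy to evaluate directly; a Python sketch with illustrative values of $\zeta$ and $\omega_n$:

```python
import math

# Step-response figures of merit for the standard second-order system.
# zeta and wn are illustrative.
zeta, wn = 0.5, 10.0
beta = math.sqrt(1 - zeta**2)

tp = math.pi / (wn * beta)                    # peak time
po = 100 * math.exp(-math.pi * zeta / beta)   # percent overshoot
delta = 0.05                                  # 5 % settling band
ts = -math.log(delta) / (zeta * wn)           # settling-time estimate

print(f"tp = {tp:.3f} s, P.O. = {po:.1f} %, ts = {ts:.3f} s")
```

For $\zeta = 0.5$ and $\omega_n = 10\ \text{rad s}^{-1}$ this gives a peak time of about 0.36 s and roughly 16 % overshoot.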
References
Kuo, B.: Automatic Control Systems 7th ed., Prentice-Hall, 1995.
5B.19
Exercises
1.
A second-order all-pole system has roots at $s = -2 \pm j3\ \text{rad s}^{-1}$. If the input to the system is a step of 10 units, determine:
(a) the P.O. of the output
(b) the peak time of the output
(c) the damping ratio
(d) the natural frequency of the system
(e) the actual frequency of oscillation of the output
(f) the 0-100% rise time
(g) the 5% settling time
(h) the 2% settling time
2.
Determine a second-order, all-pole transfer function which will meet the
following specifications for a step input:
(a) 10%-90% rise time 150 ms
(b) 5% overshoot
(c) 1% settling time 1 s
5B.20
3.
Given:
$$C(s) = \frac{\omega_n^2}{s\left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)}$$
4.
The experimental zero-state response to a unit-step input of a second-order all-pole system is shown below:

[unit-step response $c(t)$ with final value 1.0, showing a first overshoot of height $a$ and a later overshoot of height $b$, with axis values 0.4 and 0.8 marked]
(a) Derive an expression (in terms of a and b) for the damping ratio.
(b) Determine values for the natural frequency and damping ratio, given $a = 0.4$ and $b = 0.08$.
5B.21
5.
Given:
(i) $G_1(s) = \dfrac{10}{s + 10}$

(ii) $G_2(s) = \dfrac{1}{s + 1}$

(iii) $G_3(s) = \dfrac{10}{\left(s + 1\right)\left(s + 10\right)}$
6.
Find an approximate first-order model of the transfer function:

$$G(s) = \frac{4}{s^2 + 3s + 2}$$
The upper limit T is a finite time chosen somewhat arbitrarily so that the
integral approaches a steady-state value.
5B.22
7.
Automatically controlled machine-tools form an important aspect of control
system application. The major trend has been towards the use of automatic
numerically controlled machine tools using direct digital inputs. Many
CAD/CAM tools produce numeric output for the direct control of these tools,
eliminating the tedium of repetitive operations required of human operators,
and the possibility of human error. The figure below illustrates the block
diagram of an automatic numerically controlled machine-tool position control
system, using a computer to supply the reference signal.
[block diagram: computer reference $R(s)$, servo amplifier $K_a = 9$, servo motor $\dfrac{1}{s\left(s + 1\right)}$, output $C(s)$, with position feedback]
5B.23
8.
Find the steady-state errors to (a) a unit-step, and (b) a unit-ramp input, for the
following feedback system:
[feedback block diagram: $R(s)$ into a summing junction, forward-path blocks labelled 1, 2 and $s + 1$, output $C(s)$, with feedback block $3s$]
Note that the error is defined as the difference between the actual input r t
and the actual output ct .
9.
Given:

$$G(s) = \frac{0.1}{s + 0.1}$$

[unity-feedback block diagram: $R(s)$ into a summing junction, through a gain $K$ and $G(s)$, to $C(s)$]
Sketch the time responses of the closed-loop system and find the system time constants when (i) $K = 0.1$, (ii) $K = 1$ and (iii) $K = 10$.
6A.1
Lecture 6A Effects of Feedback
Transient response. Closed-loop control. Disturbance rejection. Sensitivity.
Overview
We apply feedback in control systems for a variety of reasons. The primary purpose of feedback is to more accurately control the output: we wish to reduce the difference between a reference input and the actual output. When the input signal is a step, this is called set-point control.
Reduction of the system error is only one advantage of feedback. Feedback
also affects the transient response, stability, bandwidth, disturbance rejection
and sensitivity to system parameters.
Recall that the basic feedback system was described by the block diagram:
General feedback
control system
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through $G(s)$ to $C(s)$; feedback signal $B(s)$ through $H(s)$]
Figure 6A.1
The system is described by the following transfer function:
$$\frac{C(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)} \tag{6A.1}$$
The only way we can improve system performance, whatever that may be, is by choosing a suitable $H(s)$ or $G(s)$. Some of the criteria for choosing $G(s)$, with $H(s) = 1$, will be given in the following sections.
General feedback
control system
transfer function
6A.2
Transient Response
One of the most important characteristics of control systems is their transient
response. We might desire a speedy response, or a response without an
overshoot, which may be physically impossible or cause damage to the system being controlled (e.g. a maze rover hitting a wall!).
We can modify the response of a system by cascading the system with a
transfer function which has been designed so that the overall transfer function
achieves some design objective. This is termed open-loop control.
Open-loop control
system
[block diagram: $R(s)$ through the controller $G_c(s)$ and the plant $G_p(s)$ to $C(s)$]
Figure 6A.2
A better way of modifying the response of a system is to apply feedback. This
is termed closed-loop control. By adjusting the loop feedback parameters, we
can control the transient response (within limits). A typical control system for
set-point control simply derives the error signal by comparing the output
directly with the input. Such a system is called a unity-feedback system.
Unity-feedback
closed-loop control
system
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the controller $G_c(s)$ and the plant $G_p(s)$ to $C(s)$, with unity feedback]
Figure 6A.3
6A.3
Example
We have already seen that the MR can be described by the following block diagram (remember, it's just a differential equation!):
MR transfer function
for velocity output
[block diagram: $X(s)$ through $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]
Figure 6A.4
If we apply a step input $X(s) = K/s$, then:

$$V(s) = G(s)X(s) = \frac{1/M}{s + k_f/M}\cdot\frac{K}{s} = \frac{K/k_f}{s} - \frac{K/k_f}{s + k_f/M} \tag{6A.2}$$

and therefore:

$$v(t) = \frac{K}{k_f}\left(1 - e^{-\left(k_f/M\right)t}\right), \qquad t \ge 0 \tag{6A.3}$$
6A.4
If $K$ is set to $v_0 k_f$ then:

Step response of MR velocity:

$$v(t) = v_0\left(1 - e^{-\left(k_f/M\right)t}\right), \qquad t \ge 0 \tag{6A.4}$$

[block diagram: $R(s)$ through a controller gain $k_f$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]

Figure 6A.5

This is open-loop control: the control signal depends only on the reference signal and not on the output. This type of control is deficient in several aspects. Note that the reference input has to be converted to the MR input through a gain stage equal to $k_f$, which must be known. Also, by examining Eq. (6A.4), we can see that we have no control over how fast the velocity converges to $v_0$.
Closed-Loop Control
To better control the output, we'll implement a closed-loop control system. For simplicity, we'll use a unity-feedback system, with our controller placed in the feed-forward path:
Closed-loop MR
controller
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the controller $G_c(s)$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]

Figure 6A.6
6A.5
Proportional Control (P Controller)
Proportional controller (P controller)

[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the proportional controller $K_P$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]

Figure 6A.7
The closed-loop transfer function is:

$$\frac{V(s)}{R(s)} = \frac{G_c(s)G(s)}{1 + G_c(s)G(s)} = \frac{K_P/M}{s + k_f/M + K_P/M}$$

and, with a step input $R(s) = v_0/s$:

$$V(s) = \frac{K_P/M}{s + \left(k_f + K_P\right)/M}\cdot\frac{v_0}{s} = \frac{K_P v_0/\left(k_f + K_P\right)}{s} - \frac{K_P v_0/\left(k_f + K_P\right)}{s + \left(k_f + K_P\right)/M} \tag{6A.5}$$

so that:

$$v(t) = \frac{K_P v_0}{k_f + K_P}\left(1 - e^{-\left(\left(k_f + K_P\right)/M\right)t}\right), \qquad t \ge 0 \tag{6A.6}$$
Sometimes a closed-loop system exhibits a steady-state error

Since it is impossible to have $v(t) = v_0$, the proportional controller will always result in a steady-state error. However, from Eq. (6A.6), we can see that the rate at which $v(t)$ converges to the steady-state value can be made as fast as desired by again taking $K_P$ to be suitably large.
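A quick simulation confirms this behaviour; a Python/SciPy sketch of Eq. (6A.6) with illustrative plant values $M$ and $k_f$:

```python
import numpy as np
from scipy import signal

# Closed-loop response of the P-controlled rover model, per Eq. (6A.6).
# M and kf are illustrative plant values.
M, kf, v0 = 1.0, 2.0, 1.0
for KP in (2.0, 20.0, 200.0):
    sys = signal.TransferFunction([KP / M], [1.0, (kf + KP) / M])
    t, v = signal.step(sys, T=np.linspace(0, 5, 1000))
    v_final = KP * v0 / (kf + KP)       # predicted steady-state value
    print(f"KP={KP:5.0f}: v(5)={v[-1]:.3f}, error={v0 - v_final:.3f}")
```

As $K_P$ grows, the response both speeds up and its steady-state error $v_0 k_f/(k_f + K_P)$ shrinks, but never reaches zero.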
6A.6
Integral Control (I Controller)
One of the deficiencies of the simple P controller in controlling the maze rover
was that it did not have a zero steady-state error. This was due to the fact that
the overall feedforward system was Type 0, instead of Type 1. We can easily
make the overall feedforward system Type 1 by changing the controller so that
it has a pole at the origin:
Integral controller
(I controller)
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the integral controller $\dfrac{K_I}{s}$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]

Figure 6A.8
$$T(s) = \frac{K_I/\left(sM\right)}{s + k_f/M + K_I/\left(sM\right)} = \frac{K_I/M}{s^2 + \left(k_f/M\right)s + K_I/M} \tag{6A.7}$$
(6A.7)
We can see straight away that the transfer function is 1 at DC (set s 0 in the
transfer function). This means that the output will follow the input in the
steady-state (zero steady-state error). By comparing this second-order transfer
function with the standard form:
n2
T s 2
s 2 n s n2
(6A.8)
we can see that the controller is only able to adjust the natural frequency, or the
distance of the poles from the origin, $\omega_n$. This may be good enough, but we
would prefer to be able to control the damping ratio as well.
6A.7
Proportional Plus Integral Control (PI Controller)
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the PI controller $K_P + \dfrac{K_I}{s}$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$]

Figure 6A.9
The controller in this case causes the plant to respond to both the error and the
integral of the error. With this type of control, the overall transfer function of
the closed-loop system is:
$$T(s) = \frac{\left(K_P + K_I/s\right)\left(1/M\right)}{s + k_f/M + \left(K_P + K_I/s\right)\left(1/M\right)} = \frac{K_I/M + \left(K_P/M\right)s}{s^2 + \left(\left(k_f + K_P\right)/M\right)s + K_I/M} \tag{6A.9}$$
Again, we can see straight away that the transfer function is 1 at DC (set $s = 0$
in the transfer function), and again that the output will follow the input in the
steady-state (zero steady-state error). We can now control the damping ratio $\zeta$, as well as the natural frequency $\omega_n$, independently of each other. But we also
have a zero in the numerator. Intuitively we can conclude that the response will
be similar to the response with integral control, but will also contain a term
which is the derivative of this response (we see multiplication by s the zero
as a derivative). We can analyse the response by rewriting Eq. (6A.9) as:
$$T(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} + \frac{K_P}{K_I}\cdot\frac{\omega_n^2 s}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{6A.10}$$

Proportional plus Integral controller (PI controller)
6A.8
For a unit-step input, let the output response that is due to the first term on the right-hand side of Eq. (6A.10) be denoted $c_I(t)$. Then the total response is:

$$c(t) = c_I(t) + \frac{K_P}{K_I}\cdot\frac{dc_I(t)}{dt} \tag{6A.11}$$
The figure below shows that the addition of the zero at $s = -K_I/K_P$ reduces the rise time and increases the maximum overshoot, compared to the I controller step response:

PI controller response to a unit-step input

[plot of $c_I(t)$, its scaled derivative $\dfrac{K_P}{K_I}\dfrac{dc_I(t)}{dt}$, and their sum $c(t) = c_I(t) + \dfrac{K_P}{K_I}\dfrac{dc_I(t)}{dt}$, versus $t$, with the final value 1.00 marked]
Figure 6A.10
6A.9
The total response can then be sketched in, ensuring that the total response goes through the points where the $c_I(t)$ slope is zero.
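The I- and PI-controlled responses of Eqs. (6A.7) and (6A.9) can also be compared by simulation; a Python/SciPy sketch with illustrative values of $M$, $k_f$ and the gains:

```python
import numpy as np
from scipy import signal

# Unit-step responses of the I-controlled (Eq. 6A.7) and PI-controlled
# (Eq. 6A.9) rover; M, kf and the gains are illustrative.
M, kf, KI, KP = 1.0, 1.0, 4.0, 2.0
t = np.linspace(0, 10, 2000)

T_I = signal.TransferFunction([KI / M], [1.0, kf / M, KI / M])
T_PI = signal.TransferFunction([KP / M, KI / M],
                               [1.0, (kf + KP) / M, KI / M])

_, y_I = signal.step(T_I, T=t)
_, y_PI = signal.step(T_PI, T=t)

# Both have unity DC gain, so both settle at 1 (zero steady-state error);
# the zero contributed by the PI controller changes the transient shape.
print(y_I[-1], y_PI[-1], y_PI.max())
```

Plotting the two curves shows the faster rise (and the overshoot) produced by the derivative-like action of the PI zero.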
Proportional, Integral, Derivative Control (PID Controller)
One of the best known controllers used in practice is the PID controller, where
the letters stand for proportional, integral, and derivative. The addition of a
derivative to the PI controller means that PID control contains anticipatory
control. That is, by knowing the slope of the error, the controller can anticipate
the direction of the error and use it to better control the process. The PID
controller transfer function is:
$$G_c(s) = K_P + K_D s + \frac{K_I}{s} \tag{6A.12}$$
There are established procedures for designing control systems with PID
controllers, in both the time and frequency-domains.
Disturbance Rejection
A major problem with open-loop control is that the output $c(t)$ of the plant will be perturbed by a disturbance input $d(t)$. Since the control signal $r(t)$ does not depend on the plant output $c(t)$ in open-loop control, the control signal cannot act to reduce the effect of the disturbance. (Feedback minimizes the effect of disturbance inputs.)

Open-loop MR system with disturbance inputs

[block diagram: $R(s)$ through the controller $G_c(s)$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$, with disturbance inputs $D_1(s)$, $D_2(s)$ and $D_3(s)$ injected at different points around the path]
6A.10
Figure 6A.11
[block diagram: the same system with feedback: $R(s)$ into a summing junction producing $E(s)$, through the controller $G_c(s)$, giving $X(s)$, then through the plant $\dfrac{1/M}{s + k_f/M}$ to $V(s)$, with the disturbance inputs $D_1(s)$, $D_2(s)$ and $D_3(s)$]

Figure 6A.12
Most disturbance inputs are minimized when using feedback:

$$V(s) = \frac{G_c(s)G(s)}{1 + G_c(s)G(s)}\left[R(s) + D_1(s)\right] + \frac{G(s)}{1 + G_c(s)G(s)}D_2(s) + \frac{1}{1 + G_c(s)G(s)}D_3(s) \tag{6A.13}$$
6A.11
Sensitivity
Sensitivity is a measure of how the characteristics of a system depend on the variations of some component (or parameter) of the system. (Sensitivity defines how one element of a system affects a characteristic of the whole system.) The effect of a parameter $\alpha$ on the transfer function $T$ is given by the system sensitivity, defined as:

$$S_\alpha^T = \frac{\partial T/T}{\partial\alpha/\alpha} \tag{6A.14}$$

or, in the limit of small changes:

$$S_\alpha^T = \lim_{\Delta\alpha\to 0}\frac{\Delta T/T_0}{\Delta\alpha/\alpha_0} \tag{6A.15}$$
6A.12
Example
[block diagrams: (a) open loop: $R(s)$ through $G(s)$ to $C(s)$; (b) closed loop: $R(s)$ into a summing junction producing $E(s)$, through $G(s)$ to $C(s)$, with feedback $B(s)$ through $H(s)$]
Figure 6A.13
For the closed-loop system, when the loop gain is large:

$$T(s) = \frac{G(s)}{1 + G(s)H(s)} \approx \frac{1}{H(s)} \tag{6A.16}$$

For the open-loop system:

$$T(s) = G(s), \qquad \frac{\partial T}{\partial G} = 1 \tag{6A.17}$$

so that the sensitivity of the open-loop system to variations in $G$ is:

$$S_G^T = \frac{G}{T}\frac{\partial T}{\partial G} = 1 \tag{6A.18}$$
6A.13
Analytically, for the closed-loop case:

$$T = \frac{G(s)}{1 + G(s)H(s)}, \qquad \frac{\partial T}{\partial G} = \frac{\left(1 + GH\right) - GH}{\left(1 + GH\right)^2} = \frac{1}{\left(1 + GH\right)^2} \tag{6A.19}$$

Then:

$$S_G^T = \frac{G}{T}\frac{\partial T}{\partial G} = \frac{G\left(1 + GH\right)}{G}\cdot\frac{1}{\left(1 + GH\right)^2} = \frac{1}{1 + GH} \tag{6A.20}$$

System sensitivity to G for closed-loop systems
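Eq. (6A.20) can be verified numerically by perturbing $G$ and measuring the resulting change in $T$; a Python sketch with illustrative gains:

```python
# Numerical check of Eq. (6A.20): for T = G/(1+GH), a small fractional
# change in G produces a fractional change in T that is smaller by the
# factor 1/(1+GH). The gains are illustrative.
G, H = 100.0, 1.0
dG = 0.01 * G                           # a 1 % change in the plant gain

T0 = G / (1 + G * H)
T1 = (G + dG) / (1 + (G + dG) * H)

sens = ((T1 - T0) / T0) / (dG / G)      # empirical sensitivity
print(sens, 1 / (1 + G * H))            # both close to 1/101
```

A 1 % change in the plant gain thus moves the closed-loop gain by only about 0.01 %, which is the central robustness benefit of feedback.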
Summary
Various types of controllers, such as P, PI, PID are used to compensate the
transfer function of the plant in unity-feedback systems.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
6A.14
Exercises
1.
Show that the following block diagrams are equivalent:
[two block diagrams: the feedback system $R(s) \to G(s) \to C(s)$ with feedback $H(s)$, and an equivalent unity-feedback system with forward block $G'(s)$]

where:

$$G'(s) = \frac{G(s)}{1 + G(s)\left[H(s) - 1\right]}$$
2.
Assume that an operational amplifier has an infinite input impedance, zero
output impedance and a very large gain K.
Show for the feedback configuration shown that $V_o/V_i = K/\left(K + 1\right) \approx 1$ if $K$ is large.

[op-amp feedback diagram with gain block $K$ and nodes $V_1$, $V_2$]
6A.15
3.
For the system shown:
[block diagram: $R(s)$ through $K_1 = 10$, then $G = \dfrac{100}{s\left(s + 1\right)}$, to $C(s)$, with feedback $K_2 = 10$]
6A.16
4.
It is important to ensure passenger comfort on ships by stabilizing the ship's oscillations due to waves. Most ship stabilization systems use fins or hydrofoils
projecting into the water in order to generate a stabilization torque on the ship.
A simple diagram of a ship stabilization system is shown below:
[block diagram of the ship stabilization system: reference $R(s)$ and wave disturbance $T_d(s)$; fin actuator $K_a$ produces torque $T_f(s)$; ship roll dynamics $G(s)$ give the roll angle $\theta(s)$; roll sensor $K_1$ closes the loop]

The ship roll dynamics are:

$$G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

The oscillations continue for several cycles, and the rolling amplitude can reach 18° for the expected amplitude of waves in a normal sea. Determine and compare the open-loop and closed-loop system for:

a) sensitivity to changes in the actuator constant $K_a$ and the roll sensor $K_1$

b) the ability to reduce the effects of the disturbance of the waves.
6A.17
5.
The system shown uses a unity feedback loop and a PI compensator to control
the plant.
[block diagram: $R(s)$ into a summing junction producing $E(s)$, through the PI compensator $K_P + \dfrac{K_I}{s}$, then (with disturbance $D(s)$ added) through the plant $G(s)$ to $C(s)$]
Find the steady-state error, $e_{ss}$, for the following conditions [note that the error is always the difference between the reference input $r(t)$ and the plant output $c(t)$]:
(a) $G(s) = \dfrac{K_1}{1 + sT_1}$, $K_I = 0$

(b) $G(s) = \dfrac{K_1}{1 + sT_1}$, $K_I \ne 0$

(c) $G(s) = \dfrac{K_1}{s\left(1 + sT_1\right)}$, $K_I = 0$

(d) $G(s) = \dfrac{K_1}{s\left(1 + sT_1\right)}$, $K_I \ne 0$
when:
(i)
d t 0 , r t unit-step
(ii)
d t 0 , r t unit-ramp
(iii) d t unit-step, r t 0
(iv)
d t unit-step, r t unit-step
How does the addition of the integral term in the compensator affect the
steady-state errors of the controlled system?
7A.1
Lecture 7A The z-Transform
The z-transform. Mapping between s-domain and z-domain. Finding z-transforms. Standard z-transforms. z-transform properties. Evaluation of inverse z-transforms. Transforms of difference equations. The system transfer function.
Overview
Digital control of continuous-time systems has become common thanks to the ever-increasing performance/price ratio of digital signal processors.
The z-Transform
We'll start with a discrete-time signal which is obtained by ideally and uniformly sampling a continuous-time signal:

$$x_s(t) = x(t)\sum_{n=0}^{\infty}\delta\left(t - nT_s\right) \tag{7A.1}$$

We'll treat a discrete-time signal as the weights of an ideally sampled continuous-time signal.
7A.2
Now since the function $x_s(t)$ is zero everywhere except at the sampling instants, we replace $x(t)$ with its value at the sample instant:

$$x_s(t) = \sum x\left(nT_s\right)\delta\left(t - nT_s\right) \tag{7A.2}$$

or, writing the summation limits explicitly:

$$x_s(t) = \sum_{n=0}^{\infty} x\left(nT_s\right)\delta\left(t - nT_s\right) \tag{7A.3}$$
This is the discrete-time signal in the time-domain. The value of each sample is
represented by the area of an impulse, as shown below:
A discrete-time
signal made by
ideally sampling a
continuous-time
signal
[plot of $x(t)$ and the ideally sampled signal $x_s(t)$: a train of impulses spaced $T_s$ apart whose areas follow $x(t)$]
Figure 7A.1
7A.3
What if we were to analyse this signal in the frequency domain? Taking the
Laplace transform yields:
$$X_s(s) = \int_{0}^{\infty}\left[\sum_{n=0}^{\infty} x\left(nT_s\right)\delta\left(t - nT_s\right)\right]e^{-st}\,dt \tag{7A.4}$$

Since summation and integration are linear, we'll change the order to give:

$$X_s(s) = \sum_{n=0}^{\infty} x\left(nT_s\right)\int_{0}^{\infty}\delta\left(t - nT_s\right)e^{-st}\,dt \tag{7A.5}$$

Using the sifting property of the delta function, this integrates to:

$$X_s(s) = \sum_{n=0}^{\infty} x\left(nT_s\right)e^{-snT_s} \tag{7A.6}$$

Now define (definition of z):

$$z = e^{sT_s} \tag{7A.7}$$

so that:

$$X(z) = \sum_{n=0}^{\infty} x\left(nT_s\right)z^{-n} \tag{7A.8}$$
7A.4
Since $x(nT_s)$ is just a sequence of sample values, $x[n]$, we normally write the z-transform as:

$$X(z) = \sum_{n=0}^{\infty} x[n]z^{-n} \tag{7A.9}$$

which, written out term by term, is:

$$X(z) = x[0] + x[1]z^{-1} + x[2]z^{-2} + \cdots \tag{7A.10}$$

Note that Eq. (7A.9) is the one-sided z-transform. We use this for the same reasons that we used the one-sided Laplace transform.

With $s = \sigma + j\omega$, we have:

$$z = e^{\sigma T_s}e^{j\omega T_s} \tag{7A.11}$$

so that:

$$|z| = e^{\sigma T_s} \tag{7A.12a}$$

$$\angle z = \omega T_s \tag{7A.12b}$$
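Eqs. (7A.11)–(7A.12) can be evaluated for a few sample points; a Python sketch with an illustrative sample period:

```python
import cmath

# The mapping z = exp(s*Ts) of Eq. (7A.7); Ts is an illustrative period.
Ts = 0.001                                # 1 ms sampling
for sigma, w in [(0.0, 0.0), (0.0, 500.0), (-100.0, 500.0)]:
    z = cmath.exp(complex(sigma, w) * Ts)
    # |z| = exp(sigma*Ts) and angle(z) = w*Ts, per Eqs. (7A.12)
    print(f"sigma={sigma}, w={w}: |z|={abs(z):.4f}, "
          f"angle={cmath.phase(z):.4f} rad")
```

Points on the $j\omega$-axis ($\sigma = 0$) all give $|z| = 1$, while poles in the left half-plane ($\sigma < 0$) map inside the unit circle.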
7A.5
Therefore, we can draw the mapping between the s-domain and the z-domain
as follows:
The mapping between s-plane and z-plane

[the mapping $z = e^{sT_s}$ from the s-plane to the z-plane, showing the $j\omega$-axis mapping onto the unit circle]
Figure 7A.2
The mapping of the s-domain to the z-domain depends on the sample
interval, Ts . Therefore, the choice of sampling interval is crucial when
designing a digital system.
Mapping the s-Plane Imaginary Axis
The $j\omega$-axis in the s-plane is where $\sigma = 0$. In the z-domain, this corresponds to a magnitude of $|z| = e^{\sigma T_s} = e^{0} = 1$. Therefore, the frequency $\omega$ in the s-domain maps linearly onto the unit-circle in the z-domain with a phase angle $\angle z = \omega T_s$. In other words, distance along the $j\omega$-axis in the s-domain maps linearly onto angular displacement around the unit-circle in the z-domain.
[uniform linear spacing of points along the $j\omega$-axis in the s-plane mapping to uniform angular spacing around the unit circle in the z-plane]
Figure 7A.3
Signals and Systems 2014
Aliasing
With a sample period T_s, the angular sample rate is given by:

ω_s = 2πf_s = 2π/T_s   (7A.13)

When the frequency is equal to the foldover frequency (half the sample rate), s = jω_s/2 and:

z = e^{sT_s} = e^{j(ω_s/2)T_s} = e^{jπ} = −1   (7A.14)
[Figure 7A.4 — The foldover frequency s = jω_s/2 in the s-plane maps to the point z = −1 on the unit circle in the z-plane.]
However, if the frequency is increased beyond the foldover frequency, the mapping just continues to go around the unit-circle. This means that aliasing occurs: higher frequencies in the s-domain are mapped to lower frequencies in the z-domain.
[Figure 7A.5 — Frequencies beyond the foldover frequency (ω_s/2, ω_s, 3ω_s/2, …) in the s-plane continue to wrap around the unit circle in the z-plane.]
Thus, absolute frequencies greater than the foldover frequency in the s-plane
are mapped on to the same point as frequencies less than the foldover
frequency in the z-plane. That is, they assume the alias of a lower frequency.
The energies of frequencies higher than the foldover frequency add to the
energy of frequencies less than the foldover frequency and this is referred to as
frequency folding.
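Aliasing is easy to demonstrate numerically. A quick sketch (the frequencies chosen are mine, not from the notes): samples of a sinusoid at frequency f_s − f are indistinguishable from samples of one at frequency f taken at the same rate f_s:

```python
import math

def sample_cos(f, fs, N):
    """Sample cos(2*pi*f*t) at rate fs, returning N samples."""
    return [math.cos(2 * math.pi * f * n / fs) for n in range(N)]

fs = 100.0                       # sample rate (Hz)
low  = sample_cos(10.0, fs, 50)  # 10 Hz: below the foldover frequency fs/2
high = sample_cos(90.0, fs, 50)  # 90 Hz = fs - 10 Hz: above the foldover frequency
# The two sample sequences are identical: 90 Hz assumes the alias of 10 Hz.
```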
Finding z-Transforms

We will use the same strategy for finding z-transforms of a signal as we did for the other transforms: start with a known standard transform and successively apply transform properties. We first need a few standard transforms.

Example

To find the z-transform of a unit-pulse, x[n] = δ[n], we substitute into the z-transform definition:

X(z) = Σ_{n=0}^{∞} δ[n] z^{−n} = δ[0] + δ[1] z^{−1} + δ[2] z^{−2} + ⋯ = 1   (7A.15)

Therefore we have the z-transform of a unit-pulse:

δ[n] ⟷ 1   (7A.16)
Example

To find the z-transform of a signal x[n] = a^n u[n], we substitute into the definition of the z-transform:

X(z) = Σ_{n=0}^{∞} a^n u[n] z^{−n}   (7A.17)

X(z) = Σ_{n=0}^{∞} (a/z)^n = 1 + (a/z) + (a/z)^2 + (a/z)^3 + ⋯   (7A.18)

To convert this geometric progression into closed-form, let the sum of the first k terms of a general geometric progression be written as S_k:

S_k = Σ_{n=0}^{k−1} x^n = 1 + x + x^2 + ⋯ + x^{k−1}   (7A.19)

Multiplying both sides by x gives:

x S_k = x + x^2 + x^3 + ⋯ + x^k   (7A.20)

and subtracting this from S_k:

S_k (1 − x) = (1 + x + x^2 + ⋯ + x^{k−1}) − (x + x^2 + x^3 + ⋯ + x^k)   (7A.21)

This is a telescoping sum, where we can see that on the right-hand side only the first and last terms remain after performing the subtraction.

Dividing both sides by 1 − x then results in:

S_k = Σ_{n=0}^{k−1} x^n = (1 − x^k)/(1 − x)   (7A.22)

Letting k → ∞, the sum converges provided |x| < 1:

Σ_{n=0}^{∞} x^n = 1 + x + x^2 + ⋯ = 1/(1 − x),   |x| < 1   (7A.23)

Applying this result to Eq. (7A.18) with x = a/z:

X(z) = 1/(1 − a/z),   |a/z| < 1   (7A.24)

X(z) = z/(z − a),   |z| > |a|   (7A.25)
The ROC of X(z) is |z| > |a|, as shown in the shaded area below:

[Figure 7A.6 — Region of convergence of X(z) for the signal x[n] = a^n u[n]: the region of the z-plane outside the circle of radius |a|.]
As was the case with the Laplace transform, if we restrict the z-transform to
causal signals, then we do not need to worry about the ROC.
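A quick numerical sanity check of Eq. (7A.25) (a sketch, not part of the notes; the values of a and z are arbitrary): inside the ROC, the truncated sum Σ (a/z)^n approaches z/(z − a):

```python
def X_truncated(a, z, N):
    """Partial sum of the z-transform of a^n u[n]: sum_{n=0}^{N-1} (a/z)**n."""
    return sum((a / z)**n for n in range(N))

a, z = 0.8, 2.0            # |z| > |a|, so z is inside the ROC
approx = X_truncated(a, z, 200)
exact = z / (z - a)        # closed form from Eq. (7A.25)
```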
Example

Setting a = 1 in the previous result gives the z-transform of a unit-step:

u[n] ⟷ z/(z − 1)   (7A.26)

This is a frequently used transform in the study of control systems.
Example

To find the z-transform of x[n] = cos(Ωn) u[n], we first use Euler's identity to write:

cos(Ωn) = (e^{jΩn} + e^{−jΩn})/2   (7A.27)

Each term is of the form a^n with a = e^{±jΩ}, so from Eq. (7A.25):

e^{jΩn} u[n] ⟷ z/(z − e^{jΩ})   (7A.28)

Therefore:

cos(Ωn) u[n] ⟷ (1/2)[z/(z − e^{jΩ}) + z/(z − e^{−jΩ})],   |z| > 1   (7A.29)

which simplifies to:

cos(Ωn) u[n] ⟷ z(z − cos Ω)/(z^2 − 2z cos Ω + 1)   (7A.30)
One of the most important properties of the z-transform is the right shift
property. It enables us to directly transform a difference equation into an
algebraic equation in the complex variable z.
The z-transform of a function shifted to the right by one unit is given by:
Z{x[n − 1]} = Σ_{n=0}^{∞} x[n − 1] z^{−n}   (7A.31)

Letting r = n − 1 yields:

Z{x[n − 1]} = Σ_{r=−1}^{∞} x[r] z^{−(r+1)}
            = x[−1] + z^{−1} Σ_{r=0}^{∞} x[r] z^{−r}
            = z^{−1} X(z) + x[−1]   (7A.32)

Thus, the z-transform right shift property is:

x[n − 1] ⟷ z^{−1} X(z) + x[−1]   (7A.33)
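The right shift property can be checked numerically with truncated sums. A sketch (the signal x[n] = 0.5^n, defined for n ≥ −1, is my choice, not from the notes):

```python
def Z(seq, z):
    """Truncated one-sided z-transform: sum_{n>=0} seq[n] * z**(-n)."""
    return sum(v * z**(-n) for n, v in enumerate(seq))

x = lambda n: 0.5**n       # signal defined for n >= -1, so x[-1] = 2
N, z = 100, 2.0
X_z = Z([x(n) for n in range(N)], z)        # X(z), truncated
lhs = Z([x(n - 1) for n in range(N)], z)    # Z{x[n-1]}, truncated
rhs = X_z / z + x(-1)                       # z^-1 X(z) + x[-1], Eq. (7A.33)
```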
Standard z-Transforms

δ[n] ⟷ 1   (Z.1)

u[n] ⟷ z/(z − 1)   (Z.2)

a^n u[n] ⟷ z/(z − a)   (Z.3)

a^n cos(Ωn) u[n] ⟷ z(z − a cos Ω)/(z^2 − 2a cos Ω z + a^2)   (Z.4)

a^n sin(Ωn) u[n] ⟷ a sin Ω z/(z^2 − 2a cos Ω z + a^2)   (Z.5)
z-Transform Properties

Assuming x[n] ⟷ X(z):

Linearity: a x[n] ⟷ a X(z)   (Z.6)

Multiplication by a^n: a^n x[n] ⟷ X(z/a)   (Z.7)

Right shifting: x[n − q] ⟷ z^{−q} X(z) + Σ_{k=0}^{q−1} x[k − q] z^{−k}   (Z.8)

no corresponding transform   (Z.9)

Multiplication by n: n x[n] ⟷ −z (d/dz) X(z)   (Z.10)

Left shifting: x[n + q] ⟷ z^{q} X(z) − Σ_{k=0}^{q−1} x[k] z^{q−k}   (Z.11)

Summation: Σ_{k=0}^{n} x[k] ⟷ [z/(z − 1)] X(z)   (Z.12)

Convolution: x_1[n] * x_2[n] ⟷ X_1(z) X_2(z)   (Z.13)

Initial-value theorem: x[0] = lim_{z→∞} X(z)   (Z.14)

Final-value theorem: lim_{n→∞} x[n] = lim_{z→1} (z − 1) X(z)   (Z.15)
Evaluation of Inverse z-Transforms

From complex variable theory, the definition of the inverse z-transform (defined, but too hard to apply!) is:

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz   (7A.34)
You won't need to evaluate this integral to determine the inverse z-transform, just like we hardly ever use the definition of the inverse Laplace transform. Instead, we manipulate X(z) into a form where we can simply identify sums of standard transforms that may have had a few properties applied to them.
Given a rational function of z:

F(z) = (b_0 z^n + b_1 z^{n−1} + ⋯ + b_n)/[(z − p_1)(z − p_2)⋯(z − p_n)]   (7A.35)

we expand F(z)/z into partial fractions (so that each term regains a factor of z in the numerator, matching the standard transforms), then find the inverse z-transform:

F(z)/z = k_0/z + k_1/(z − p_1) + k_2/(z − p_2) + ⋯ + k_n/(z − p_n)   (7A.36)

Example

Given:

F(z) = (z^3 + 2z^2 + z + 1)/[(z − 2)^2 (z + 3)]

Therefore:

F(z)/z = (z^3 + 2z^2 + z + 1)/[z (z − 2)^2 (z + 3)]
       = k_0/z + k_1/(z − 2) + k_2/(z − 2)^2 + k_3/(z + 3)

Now we evaluate the residues:

k_0 = [z F(z)/z]_{z=0} = (z^3 + 2z^2 + z + 1)/[(z − 2)^2 (z + 3)] |_{z=0} = 1/12

k_1 = d/dz [(z − 2)^2 F(z)/z]_{z=2} = d/dz [(z^3 + 2z^2 + z + 1)/(z^2 + 3z)]_{z=2}
    = [(z^2 + 3z)(3z^2 + 4z + 1) − (z^3 + 2z^2 + z + 1)(2z + 3)]/(z^2 + 3z)^2 |_{z=2}
    = (10 × 21 − 19 × 7)/100 = 77/100

k_2 = [(z − 2)^2 F(z)/z]_{z=2} = (z^3 + 2z^2 + z + 1)/[z(z + 3)] |_{z=2} = 19/10

k_3 = [(z + 3) F(z)/z]_{z=−3} = (z^3 + 2z^2 + z + 1)/[z (z − 2)^2] |_{z=−3} = 11/75

Therefore:

F(z) = 1/12 + (77/100) z/(z − 2) + (19/10) z/(z − 2)^2 + (11/75) z/(z + 3)

Note: For the k_2 term, standard transform Z.3 and the multiplication-by-n property give n a^n u[n] ⟷ a z/(z − a)^2, so that z/(z − a)^2 ⟷ (n/a) a^n u[n].

Taking the inverse z-transform of each term then gives:

f[n] = (1/12) δ[n] + (77/100) 2^n + (19/10)(n/2) 2^n + (11/75)(−3)^n,   n ≥ 0
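A partial fraction result like this one can be sanity-checked by long division of F(z) in powers of z^{−1} and comparing against the closed-form f[n]. A sketch in plain Python (not from the notes):

```python
def series_coeffs(num, den, N):
    """First N coefficients f[0..N-1] of F(z) = num(z)/den(z) expanded in
    powers of z**-1.  num and den are coefficient lists, highest power
    first, with deg(num) <= deg(den) and den[0] != 0."""
    num = num + [0.0] * (len(den) - len(num))   # pad numerator
    rem = list(num)
    out = []
    for _ in range(N):
        f = rem[0] / den[0]
        out.append(f)
        # subtract f*den from the remainder, then shift up one power of z^-1
        rem = [r - f * d for r, d in zip(rem, den)][1:] + [0.0]
    return out

# F(z) = (z^3 + 2z^2 + z + 1) / ((z - 2)^2 (z + 3)); denominator expanded:
num = [1, 2, 1, 1]
den = [1, -1, -8, 12]           # z^3 - z^2 - 8z + 12
direct = series_coeffs(num, den, 8)

def f(n):
    """Closed form found above by partial fractions."""
    return (1/12 if n == 0 else 0) + (77/100) * 2**n \
           + (19/10) * (n / 2) * 2**n + (11/75) * (-3)**n
```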
Transforms of Difference Equations

The right shift property of the z-transform sets the stage for solving linear difference equations with constant coefficients. Because y[n − k] ⟷ z^{−k} Y(z) (plus initial-condition terms), the z-transform of a difference equation is an algebraic equation that can be readily solved for Y(z). Next we take the inverse z-transform of Y(z) to find the desired solution y[n].
[Figure 7A.7 — Solving a difference equation via the z-transform: the difference equation (difficult to solve directly in the time-domain) is transformed (ZT) into an algebraic equation that is easy to solve in the z-domain; the inverse z-transform (IZT) of the z-domain solution gives the time-domain solution.]
Example

Solve:

y[n] − 5y[n − 1] + 6y[n − 2] = 3x[n − 1] + 5x[n − 2]   (7A.37)

if the initial conditions are y[−1] = 11/6 and y[−2] = 37/36, and the input is x[n] = (0.5)^n u[n].

Now:

y[n] u[n] ⟷ Y(z)

y[n − 1] u[n] ⟷ z^{−1} Y(z) + y[−1] = z^{−1} Y(z) + 11/6

y[n − 2] u[n] ⟷ z^{−2} Y(z) + z^{−1} y[−1] + y[−2] = z^{−2} Y(z) + (11/6) z^{−1} + 37/36   (7A.38)

For the input, x[−1] = x[−2] = 0. Then:

x[n] = (0.5)^n u[n] ⟷ z/(z − 0.5)

x[n − 1] u[n] ⟷ z^{−1} X(z) + x[−1] = z^{−1} z/(z − 0.5) = 1/(z − 0.5)

x[n − 2] u[n] ⟷ z^{−2} X(z) + z^{−1} x[−1] + x[−2] = z^{−2} X(z) = 1/[z(z − 0.5)]   (7A.39)

Taking the z-transform of Eq. (7A.37) and substituting the foregoing results, we obtain:

Y(z) − 5[z^{−1} Y(z) + 11/6] + 6[z^{−2} Y(z) + (11/6) z^{−1} + 37/36] = 3/(z − 0.5) + 5/[z(z − 0.5)]   (7A.40)

or:

(1 − 5z^{−1} + 6z^{−2}) Y(z) = 3 − 11z^{−1} + (3z + 5)/[z(z − 0.5)]   (7A.41)

so that:

Y(z) = z(3z − 11)/(z^2 − 5z + 6) + z(3z + 5)/[(z − 0.5)(z^2 − 5z + 6)]   (7A.42)

and, dividing by z and expanding into partial fractions:

Y(z)/z = (3z − 11)/[(z − 2)(z − 3)] + (3z + 5)/[(z − 0.5)(z − 2)(z − 3)]
       = 5/(z − 2) − 2/(z − 3) + (26/15)/(z − 0.5) − (22/3)/(z − 2) + (28/5)/(z − 3)   (7A.43)

Therefore:

Y(z) = 5 z/(z − 2) − 2 z/(z − 3) + (26/15) z/(z − 0.5) − (22/3) z/(z − 2) + (28/5) z/(z − 3)   (7A.44)

and:

y[n] = [5(2)^n − 2(3)^n] + [(26/15)(0.5)^n − (22/3)(2)^n + (28/5)(3)^n]
     = (26/15)(0.5)^n − (7/3)(2)^n + (18/5)(3)^n,   n ≥ 0   (7A.45)
As can be seen, the z-transform method gives the total response, which
includes zero-input and zero-state components. The initial condition terms give
rise to the zero-input response. The zero-state response terms are exclusively
due to the input.
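The worked example above can be verified by running the recursion directly and comparing against the closed-form answer. A quick sketch:

```python
def simulate(N):
    """Run y[n] = 5y[n-1] - 6y[n-2] + 3x[n-1] + 5x[n-2] with the given
    initial conditions and input x[n] = 0.5**n for n >= 0."""
    x = lambda n: 0.5**n if n >= 0 else 0.0
    y = {-1: 11/6, -2: 37/36}
    for n in range(N):
        y[n] = 5*y[n-1] - 6*y[n-2] + 3*x(n-1) + 5*x(n-2)
    return [y[n] for n in range(N)]

def closed_form(n):
    """y[n] = (26/15)(0.5)^n - (7/3)2^n + (18/5)3^n from Eq. (7A.45)."""
    return (26/15)*0.5**n - (7/3)*2**n + (18/5)*3**n
```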
Consider the general first-order difference equation:

y[n] = a y[n − 1] + b x[n]   (7A.46)

Taking the z-transform of both sides and using the right-shift property gives the corresponding z-transform relationship:

Y(z) = a[z^{−1} Y(z) + y[−1]] + b X(z)   (7A.47)

Solving for Y(z):

Y(z) = a y[−1]/(1 − a z^{−1}) + [b/(1 − a z^{−1})] X(z)   (7A.48)

or, multiplying numerator and denominator by z:

Y(z) = a y[−1] z/(z − a) + [b z/(z − a)] X(z)   (7A.49)

The first part of the response results from the initial conditions; the second part results from the input.
If the system has no initial energy (zero initial conditions) then:

Y(z) = [b z/(z − a)] X(z)   (7A.50)

We now define the discrete-time transfer function:

H(z) = b z/(z − a)   (7A.51)

so that:

Y(z) = H(z) X(z)   (7A.52)

In general, a linear discrete-time system is described by the difference equation:

y[n] = −Σ_{i=1}^{N} a_i y[n − i] + Σ_{i=0}^{M} b_i x[n − i]   (7A.53)

and if the system has zero initial conditions, then taking the z-transform of both sides results in:

Y(z) = [(b_0 z^N + b_1 z^{N−1} + ⋯ + b_M z^{N−M})/(z^N + a_1 z^{N−1} + ⋯ + a_{N−1} z + a_N)] X(z)   (7A.54)

so that the transfer function is:

H(z) = (b_0 z^N + b_1 z^{N−1} + ⋯ + b_M z^{N−M})/(z^N + a_1 z^{N−1} + ⋯ + a_{N−1} z + a_N)   (7A.55)
We can show that the convolution relationship of a linear discrete-time system:

y[n] = h[n] * x[n] = Σ_{i=0}^{∞} h[i] x[n − i],   n ≥ 0   (7A.56)

transforms to Y(z) = H(z) X(z), where:

h[n] ⟷ H(z)   (7A.57)

That is, the unit-pulse response and the transfer function form a z-transform pair.
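For the first-order system above, H(z) = bz/(z − a) together with standard transform Z.3 gives the unit-pulse response h[n] = b a^n u[n]. A sketch checking this against a direct simulation (the values of a and b are arbitrary choices of mine):

```python
def unit_pulse_response(a, b, N):
    """Simulate y[n] = a*y[n-1] + b*x[n] with x[n] = delta[n]."""
    y, y_prev = [], 0.0
    for n in range(N):
        y_prev = a * y_prev + b * (1.0 if n == 0 else 0.0)
        y.append(y_prev)
    return y

a, b = 0.9, 2.0
h_sim = unit_pulse_response(a, b, 20)
h_analytic = [b * a**n for n in range(20)]   # inverse transform of bz/(z-a)
```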
Stability

A causal discrete-time system is stable if all of the poles p_i of its transfer function lie inside the unit circle:

|p_i| < 1   (7A.58a)

and unstable if any pole lies outside the unit circle:

|p_i| > 1   (7A.58b)
Transfer Function Interconnections

Consider a unit-delay element: in the time-domain it produces the output y[n] = x[n − 1] from the input x[n] (Figure 7A.8). By taking the z-transform of the input and output, we can see that we should represent a delay in the z-domain by a block with transfer function z^{−1}:

[Figure 7A.9 — Delay element in block diagram form in the z-domain: X(z) → [z^{−1}] → Y(z) = z^{−1} X(z).]
Example

Consider the numeric integrator:

y[n] = y[n − 1] + T x[n − 1]   (7A.59)
The time-domain block diagram of the numeric integrator is shown in Figure 7A.10: the input x[n] is scaled by T and delayed to give T x[n − 1], which is added to the fed-back, delayed output y[n − 1] to form y[n].

Taking z-transforms of each signal gives the z-domain block diagram of Figure 7A.11, with T X(z) delayed to z^{−1} T X(z), and z^{−1} Y(z) fed back. Block diagram reduction then gives the single transfer function block of Figure 7A.12:

Y(z)/X(z) = T z^{−1}/(1 − z^{−1})
You should confirm that this transfer function obtained using block diagram
reduction methods is the same as that found by taking the z-transform of
Eq. (7A.59).
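As a quick numerical illustration (a sketch; the choice of input x(t) = cos t and step size are mine, not the notes'), the integrator of Eq. (7A.59) approximates a running integral of the sampled input:

```python
import math

def numeric_integrator(x_samples, T):
    """y[n] = y[n-1] + T*x[n-1], starting from y[0] = 0."""
    y = [0.0]
    for n in range(1, len(x_samples)):
        y.append(y[-1] + T * x_samples[n - 1])
    return y

T = 0.001
N = 1001                                   # integrate over 0 <= t <= 1
x = [math.cos(n * T) for n in range(N)]
y = numeric_integrator(x, T)
exact = math.sin(1.0)                      # integral of cos(t) from 0 to 1
# y[-1] approximates sin(1) with an error of order T
```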
Summary

The unit-pulse response and the transfer function form a z-transform pair: h[n] ⟷ H(z).
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Exercises
1.
Construct z-domain block diagrams for the following difference equations:
(i) y[n] = y[n − 2] + x[n] + x[n − 1]

(ii) y[n] = 2y[n − 1] + y[n − 2] + 3x[n − 4]
2.
(i) Construct a difference equation from the following block diagram:
[Block diagram: X(z) and Y(z) connected through a gain of 3, a feedback gain of −2, and delay blocks z^{−1}, z^{−1} and z^{−2}.]
3.
Using z-transforms:
(a) Find the unit-pulse response for the system given by:

y[n] = x[n] + (1/3) y[n − 1]

(b) Find the response of this system to the input:

x[n] = 0 for n = −1, −2, −3, …
x[n] = 2 for n = 0, 1
x[n] = 1 for n = 2, 3, 4, …

Hint: x[n] can be written as the sum of a unit step and two unit-pulses, or as the subtraction of two unit-steps.
4.
Determine the weighting sequence for the system shown below in terms of the
individual weighting sequences h1 n and h2 n .
x [n ]
h1[n]
y [n ]
h2[n]
5.
For the feedback configuration shown below, determine the first three terms of
the weighting sequence of the overall system by applying a unit-pulse input
and calculating the resultant response. Express your results as a function of the
weighting sequence elements h1 n and h2 n .
x [n ]
h1[n]
y [n ]
h2[n]
6.
Using the definition of the z-transform:

(a) Find Y(z) when:

(i) y[n] = 0 for n < 0, y[n] = 1/2 for n = 0, 1, 2, …

(ii) y[n] = 0 for n ≤ 0, y[n] = a^{n−1} for n = 1, 2, 3, …

(iii) y[n] = 0 for n ≤ 0, y[n] = n a^{n−1} for n = 1, 2, 3, …

(iv) y[n] = 0 for n ≤ 0, y[n] = n^2 a^{n−1} for n = 1, 2, 3, …

(b) Determine the z-transform of the sequence:

x[n] = 2 for n = 0, 2, 4, …
x[n] = 0 for all other n

by noting that x[n] = x_1[n] + x_2[n], where x_1[n] is the unit-step sequence and x_2[n] is the unit-alternating sequence. Verify your result by directly applying the definition of the z-transform.
7.
Poles and zeros are defined for the z-transform in exactly the same manner as
for the Laplace transform. For each of the z-transforms given below, find the
poles and zeros and plot the locations in the z-plane. Which of these systems
are stable and unstable?
(a) H(z) = (1 + 2z^{−1})/(3 + 4z^{−1} + z^{−2})

(b) H(z) = (1/2)/(1 + (3/4)z^{−1} + 8z^{−4})

(c) H(z) = (5 + 2z^{−2})/(1 + 6z^{−1} + 3z^{−2})
Note: for a discrete-time system to be stable all the poles must lie inside the
unit-circle.
8.
Given:
(a) yn 3 xn xn 1 2 xn 4 yn 1 yn 2
(b) yn 4 yn 3 yn 2 3 xn 4 xn 3 2 xn
Find the transfer function of the systems
(i)
(ii)
9.
Use the direct division method to find the first four terms of the data sequence
xn , given:
X(z) = (14z^2 − 14z + 3)/[(z − 1)(4z − 1)(2z − 1)]
10.
Use the partial fraction expansion method to find a general expression for xn
in Question 9. Confirm that the first four terms are the same as those obtained
by direct division.
11.
Determine the inverse z-transforms of:

(a) X(z) = z^2/[(z − 1)(z − a)]

(b) X(z) = 3 + 2z^{−1} + 6z^{−4}

(c) X(z) = (1 − e^{−aT}) z/[(z − 1)(z − e^{−aT})]

(d) X(z) = z(z + 1)/[(z − 1)(z^2 − z + 1/4)]

(e) X(z) = 4/[(z − 2)(z − 1)]

(f) X(z) = z/[(z − 1)(z − e^{−aT})]
12.
Given:
8y[n] − 6y[n − 1] + y[n − 2] = x[n]
13.
Given y[n] = x[n] + y[n − 1] + y[n − 2], find the unit-step response of this system
using the transfer function method, assuming zero initial conditions.
14.
Use z-transform techniques to find a closed-form expression for the Fibonacci
sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34
Hint: To use transfer function techniques, the initial conditions must be zero.
Construct an input so that the ZSR of the system described by the difference
equation gives the above response.
The ancient Greeks considered a rectangle to be perfectly proportioned (saying
that the lengths of its sides were in a golden ratio to each other) if the ratio of
the length to the width of the outer rectangle equalled the ratio of the length to
the width of the inner rectangle:
That is:

φ/1 = 1/(φ − 1)

Find the two values of φ that satisfy the golden ratio. Are they familiar values?
15.
Given F(s) = 1/(s + a)^2, find F(z).
16.
Given:
X 2 z
X 1 z z 6
X 3 z D z
z 1 z 1
X 3 z X 1 z kX 2 z
X 4 z
10
3
X 2 z
X 3 z
z2
z A
17.
Using the initial and final value theorems, find f 0 and f of the
following functions:
(a) F z
1
z 0.3
z2
(c) F z
z 14 z a
(b) F z 1 5 z 3 2 z 2
14 z 2 14 z 3
(d) F z
z 1 4z 1 2z 1
18.
Perform the convolution y[n] * y[n] when:

(i) y[n] = u[n]

(ii) y[n] = δ[n] + δ[n − 1] + δ[n − 2]
using
(a) the property that convolution in the time-domain is equivalent to
multiplication in the frequency domain, and
(b) using any other convolution technique.
Compare answers.
19.
Determine the inverse z-transform of:
F(z) = z(z + 1)/[(z − 1)(z^2 − z + 1/4)]
HINT: This has a repeated root. Use techniques analogous to those for the
Laplace transform when multiple roots are present.
Lecture 7B Discretization
Signal discretization. Signal reconstruction. System discretization. Frequency
response. Response matching.
Overview
Digital signal processing is now the preferred method of signal processing.
Communication schemes and controllers implemented digitally have inherent
advantages over their analog counterparts: reduced cost, repeatability, stability
with aging, flexibility (one H/W design, different S/W), in-system
programming, adaptability (ability to track changes in the environment), and
this will no doubt continue into the future. However, much of how we think,
analyse and design systems still uses analog concepts, and ultimately most
embedded systems eventually interface to a continuous-time world.
It's therefore important that we now take all that we know about continuous-time systems and transfer, or map, it into the discrete-time domain.
Signal Discretization
We have already seen how to discretize a signal. An ideal sampler produces a
weighted train of impulses:
[Figure 7B.1 — An ideal sampler: g(t) multiplied by an impulse train p(t) with period T_s gives the sampled signal g_s(t) = g(t) p(t).]
This was how we approached the topic of z-transforms. Of course an ideal sampler does not exist, but a real system can come close to the ideal. We saw in the lab that it didn't matter if we used a rectangular pulse train instead of an impulse train: the only effect was that repeats of the spectrum were weighted by a sinc function. This didn't matter since reconstruction of the original continuous-time signal was accomplished by lowpass filtering the baseband spectrum, which was not affected by the sinc function.
Digital signal processing also quantizes the discrete-time signal: in a computer, values can only be stored as discrete values, in addition to being taken at discrete times. Thus, in a digital system, the output of a sampler is quantized so that we have a digital representation of the signal. The effects of quantization will be ignored for now; be aware that they exist, and are a source of errors for digital signal processors.
Signal Reconstruction
The reconstruction of a signal from ideal samples was accomplished by an
ideal filter:
[Figure 7B.2 — Lowpass filtering a discrete-time signal (a train of impulses) produces a continuous-time signal: g_s(t) → lowpass filter → g(t).]
We then showed that so long as the Nyquist criterion was met, we could
reconstruct the signal perfectly from its samples (if the lowpass filter is ideal).
To ensure the Nyquist criterion is met, we normally place an anti-alias filter
before the sampler.
Hold Operation
The output from a digital signal processor is obviously digital: we need a way to convert a discrete-time signal back into a continuous-time signal. One way is by using a lowpass filter on the sampled signal, as above. But digital systems have to first convert their digital data into analog data, and this is accomplished with a DAC.
To model a DAC, we note that there is always some output (it never turns off), and that the values it produces are quantized:

[Figure 7B.3 — The output g̃(t) from a DAC looks like a train of impulses convolved with a rectangle: a staircase approximation to g(t).]
The mathematical model we use for the output of a DAC is a zero-order hold:

[Figure 7B.4 — A DAC is modelled as a zero-order hold device: g_s(t) → zero-order hold → g̃(t).]
The output of the zero-order hold device is:

g̃(t) = Σ_{n=0}^{∞} g(nT_s) rect[(t − T_s/2 − nT_s)/T_s]   (7B.1)

so the zero-order hold frequency response gives:

G̃(f) = T_s sinc(fT_s) e^{−jπfT_s} G_s(f)   (7B.2)

Show that the above is true by taking the Fourier transform of g̃(t).
System Discretization

Suppose we wish to discretize a continuous-time LTI system. We would expect the input/output values of the discrete-time system to be:

x[n] = x(nT_s),   y[n] = y(nT_s)   (7B.3)

[Figure 7B.5 — The continuous-time system h(t) with input x(t), output y(t) and Laplace-domain relationship Y(s) = H(s) X(s), alongside the discrete-time system h[n] with input x[n] = x(nT_s), output y[n] = y(nT_s) and z-domain relationship Y(z) = H_d(z) X(z).]
We now have to determine the discrete-time transfer function H_d(z) so that the relationship Eq. (7B.3) holds true. One way is to match the inputs and outputs in the frequency-domain. You would expect that since z = e^{sT_s}, an exact match of the two systems is obtained by simply doing:

H_d(z) = H(s)|_{s = (1/T_s) ln z}   (7B.4)

However, we can't implement this exact match, because the resulting transfer function is not a finite rational polynomial in z.
(There are advanced signal processing techniques to implement fractional delays, but for now we will stay with the easier concept of integer delays.)
Therefore, we seek a rational polynomial approximation to the ln z function.
We start with Taylor's theorem:

f(x) = f(a) + (x − a) f′(a) + [(x − a)^2/2!] f″(a) + ⋯   (7B.5)

Applying this to the logarithm gives:

ln(1 + y) = y − y^2/2 + y^3/3 − y^4/4 + ⋯   (7B.6)

ln(1 − y) = −y − y^2/2 − y^3/3 − y^4/4 − ⋯   (7B.7)

Subtracting Eq. (7B.7) from Eq. (7B.6), and dividing by 2, gives us:

(1/2) ln[(1 + y)/(1 − y)] = y + y^3/3 + y^5/5 + y^7/7 + ⋯   (7B.8)

Now let:

z = (1 + y)/(1 − y)   (7B.9)

or, rearranging:

y = (z − 1)/(z + 1)   (7B.10)

Substituting Eqs. (7B.9) and (7B.10) into Eq. (7B.8) yields:

ln z = 2[(z − 1)/(z + 1) + (1/3)((z − 1)/(z + 1))^3 + (1/5)((z − 1)/(z + 1))^5 + ⋯]   (7B.11)

Now if z ≈ 1, then we can truncate higher than first-order terms to get the approximation:

ln z ≈ 2(z − 1)/(z + 1)   (7B.12)

We can now use this as an approximate value for s. This is called the bilinear transformation:

s = (1/T_s) ln z ≈ (2/T_s)(z − 1)/(z + 1)   (7B.13)
The bilinear transformation has two important properties:

The open LHP in the s-domain maps onto the open unit disk in the z-domain (thus the bilinear transformation preserves the stability condition).

The jω-axis maps onto the unit circle in the z-domain. This will be used later when we look at the frequency response of discrete-time systems.

Applying the bilinear transformation to a continuous-time transfer function gives:

H_d(z) = H((2/T_s)(z − 1)/(z + 1))   (7B.14)
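As a concrete sketch of Eq. (7B.14) (the first-order example H(s) = a/(s + a) and the numbers are my choices, not from the notes), substituting s = (2/T_s)(z − 1)/(z + 1) into a first-order lowpass transfer function gives a first-order discrete-time filter:

```python
import cmath, math

def bilinear_first_order(a, Ts):
    """Discretize H(s) = a/(s+a) with s = (2/Ts)(z-1)/(z+1).
    Returns (num, den) of H_d(z) = (num[0]*z + num[1])/(den[0]*z + den[1])."""
    c = 2.0 / Ts
    # H_d(z) = a(z+1) / ((c+a)z + (a-c))
    return (a, a), (c + a, a - c)

def H_d(z, num, den):
    return (num[0]*z + num[1]) / (den[0]*z + den[1])

num, den = bilinear_first_order(a=1.0, Ts=0.1)
dc_gain = H_d(1.0, num, den)   # z = 1 corresponds to s = 0, where H(0) = 1

# The map sends z = e^{jOmega} to s = j(2/Ts)tan(Omega/2) exactly, so
# H_d(e^{jOmega}) equals H(jw) at the warped frequency w = (2/Ts)tan(Omega/2).
Omega = 0.3
w = (2 / 0.1) * math.tan(Omega / 2)
err = abs(H_d(cmath.exp(1j * Omega), num, den) - 1.0 / (1j * w + 1.0))
```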
Frequency Response
Since z = e^{sT_s}, then letting s = jω gives z = e^{jωT_s}. If we define the discrete-time frequency (the relationship between discrete-time frequency and continuous-time frequency):

Ω = ωT_s   (7B.15)

then the value of z used to determine the discrete-time frequency response is:

z = e^{jΩ}   (7B.16)

[Figure 7B.6 — The point z = e^{jΩ} lies on the unit circle at an angle Ω.]
But this point is also given by the angle Ω + 2πn, so the mapping from the z-plane to the s-domain is not unique (frequency response mapping is not unique: aliasing). In fact any point on the unit circle maps to the s-domain frequencies ω = (Ω + 2πn)/T_s, or:

ω = Ω/T_s + 2πn/T_s = Ω/T_s + nω_s   (7B.17)
This shows us two things:

The mapping from the z-domain to the s-domain is not unique. Conversely, the mapping from the s-domain to the z-domain is not unique. This means that frequencies spaced at ω_s map to the same frequency in the z-domain. We already know this! It's called aliasing.

The discrete-time frequency response is periodic in Ω, with period 2π.
Just like the continuous-time case where we set s = jω in H(s) to give the frequency response H(ω), we can set z = e^{jΩ} in H_d(z) to give the frequency response H_d(Ω). Doing this with Eq. (7B.14) yields the approximation:

H_d(Ω) = H((2/T_s)(e^{jΩ} − 1)/(e^{jΩ} + 1))   (7B.18)

The equivalent continuous-time frequency is therefore:

jω = (2/T_s)(e^{jΩ} − 1)/(e^{jΩ} + 1)   (7B.19)

Using Euler's identity, show that this can be manipulated into the form (mapping continuous-time frequency to discrete-time frequency using the bilinear transformation):

ω = (2/T_s) tan(Ω/2)   (7B.20)

Ω = 2 tan^{−1}(ωT_s/2)   (7B.21)
So now Eq. (7B.18), the approximate frequency response of an equivalent discrete-time system (discretizing a system to get a similar frequency response), is given by:

H_d(Ω) = H((2/T_s) tan(Ω/2))   (7B.22)

Note the frequency warping: a continuous-time frequency ω_c appears at the discrete-time frequency

Ω_c = 2 tan^{−1}(ω_c T_s/2) ≈ ω_c T_s   for ω_c T_s ≪ 2   (7B.23)

so the mapping is nearly linear only for frequencies well below the sample rate.
Response Matching

In control systems, it is usual to consider a mapping from continuous-time to discrete-time in terms of the time-domain response instead of the frequency response. For step-response matching we require:

x[n] = x(nT_s),   y[n] = y(nT_s)   (7B.24)

where the output of the discrete-time system is obtained by sampling the step-response of the continuous-time system.

Since for a unit-step input we have:

X(z) = z/(z − 1)   (7B.25)

then we want (step response matching to get the discrete-time transfer function):

H_d(z) = Y(z)(z − 1)/z   (7B.26)
Example

Suppose we design a maze rover velocity controller in the continuous-time domain and we are now considering its implementation in the rover's microcontroller. We might come up with a continuous-time controller transfer function such as:

G_c(s) = 500 + 5/s = (500s + 5)/s   (7B.27)

[Figure 7B.7 — The closed loop: R(s) → E(s) → controller (500s + 5)/s → maze rover 0.001/(s + 0.01) → V(s), with unity feedback.]

Show that the block diagram can be reduced to the transfer function:

[Figure 7B.8 — R(s) → 0.5/(s + 0.5) → V(s)]

For a unit-step input, the output is:

Y(s) = [0.5/(s + 0.5)](1/s) = 1/s − 1/(s + 0.5)   (7B.28)
and taking the inverse Laplace transform gives the continuous-time step response:

y(t) = (1 − e^{−0.5t}) u(t)   (7B.29)

Sampling this response:

y[n] = (1 − e^{−0.5nT_s}) u[n]   (7B.30)

which has the z-transform:

Y(z) = z/(z − 1) − z/(z − e^{−0.5T_s})   (7B.31)

Hence, using Eq. (7B.26) yields the following transfer function for the corresponding discrete-time system (the equivalent discrete-time transfer function using step response matching):

H_d(z) = 1 − (z − 1)/(z − e^{−0.5T_s}) = (1 − e^{−0.5T_s})/(z − e^{−0.5T_s}) = b z^{−1}/(1 − a z^{−1})   (7B.32)

where a = e^{−0.5T_s} and b = 1 − e^{−0.5T_s}. The corresponding difference equation is:

y[n] = a y[n − 1] + b x[n − 1]   (7B.33)

You should confirm that this difference equation gives an equivalent step-response at the sample instants using MATLAB for various values of T_s.
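Here is a plain-Python version of that check (a sketch of the confirmation the notes suggest doing in MATLAB; the value of T_s is arbitrary):

```python
import math

def step_response_discrete(Ts, N):
    """Simulate y[n] = a*y[n-1] + b*x[n-1] from Eq. (7B.33) for a unit step,
    with a = exp(-0.5*Ts) and b = 1 - a."""
    a = math.exp(-0.5 * Ts)
    b = 1 - a
    y, y_prev = [], 0.0
    for n in range(N):
        x_prev = 1.0 if n >= 1 else 0.0   # x[n-1] for a step applied at n = 0
        y_prev = a * y_prev + b * x_prev
        y.append(y_prev)
    return y

Ts, N = 0.2, 50
y_disc = step_response_discrete(Ts, N)
y_cont = [1 - math.exp(-0.5 * n * Ts) for n in range(N)]
# The two responses agree exactly at the sample instants.
```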
Summary
A system (or signal) may be discretized using the bilinear transform. This
maps the LHP in the s-domain into the open unit disk in the z-domain. It is
an approximation only, and introduces warping when examining the
frequency response.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Exercises
1.
A Maze Rover phase-lead compensator has the transfer function:

H(s) = 20(s + 1)/(s + 4)

Discretize it using step-response matching.
2.
Repeat Exercise 1, but this time use the bilinear transform.
3.
A continuous-time system has a transfer function:

G(s) = (s + 2)/(s^2 + 3s + 2)

and it is to be discretized.
4.
Compare the step responses of each answer in Exercises 1 and 2 using
MATLAB.
Lecture 8A System Design
Design criteria for continuous-time systems. Design criteria for discrete-time
systems.
Overview
The design of a control system centres around two important specifications: the steady-state performance and the transient performance. The steady-state performance is relatively simple. Given a reference input, we want the output to be exactly, or nearly, equal to the input. The way to achieve the desired steady-state error, for any type of input, is by employing a closed-loop configuration that gives the desired system type. The more complex specification of transient behaviour is tackled by an examination of the system's poles. We will examine transient performance as specified for an all-pole second-order system (the transient performance criteria still exist for higher-order systems, but the formulae shown here apply only to all-pole second-order systems).

The transient specification for a control system will usually consist of times taken to reach certain values, and allowable deviations from the final steady-state value. For example, we might specify percent overshoot, peak time, and 5% settling time for a control system. Our task is to find suitable pole locations for the system.
Design Criteria for Continuous-Time Systems

Percent Overshoot

The percent overshoot for a second-order all-pole step-response is given by:

P.O. = 100 e^{−πζ/√(1−ζ^2)}   (8A.1)

Since a specification gives a maximum P.O., it fixes a minimum damping ratio ζ, and hence a maximum angle θ = cos^{−1} ζ of the pole from the negative real axis. We can then shade the region of the s-plane where the P.O. specification is satisfied:

[Figure 8A.1 — P.O. specification region in the s-plane: the wedge bounded by rays at angle θ = cos^{−1} ζ either side of the negative real axis.]
Peak Time

The peak time for a second-order all-pole step-response is:

t_p = π/ω_d   (8A.2)

Since a specification will specify a maximum peak time, then we can find the minimum ω_d = π/t_p required to meet the specification.

Again, we define the region in the s-plane where this specification is satisfied:

[Figure 8A.2 — Peak time specification region in the s-plane: the region beyond the horizontal lines Im s = ±ω_d = ±π/t_p.]
Settling Time

For a settling tolerance δ (e.g. δ = 0.05 for 5% settling), the settling time of a second-order all-pole step-response is approximately:

t_s = −ln δ / α   (8A.3)

where α = ζω_n is the magnitude of the real part of the poles. Since a specification will specify a maximum settling time, then we can find the minimum α = −ln δ / t_s required to meet the specification.

We define the region in the s-plane where this specification is satisfied:

[Figure 8A.3 — Settling time specification region in the s-plane: the region to the left of the vertical line Re s = −α = (ln δ)/t_s.]
Combined Specifications

Since we have now specified simple regions in the s-plane that satisfy each specification, all we have to do is combine all the regions to meet every specification.

[Figure 8A.4 — Combined specifications region in the s-plane: the intersection of the P.O., peak time and settling time regions.]
If we chose s-plane pole locations out near infinity, we'd achieve a very nice, sharp, almost step-like response! In practice, we can't do this, because the inputs to our system would exceed the allowable linear range; we therefore choose poles close to the origin so as not to exceed the linear range of the system. Our analysis has considered the system to be linear, and we know it is not in practice! Op-amps saturate; motors cannot have megavolts applied to their windings without breaking down; and we can't put our foot on the accelerator past the floor of the car (no matter how much we try)!
Practical considerations therefore mean we should choose pole locations that are close to the origin; this will mean we meet the specifications, and hopefully we won't be exceeding the linear bounds of our system. We normally indicate the desired pole locations by placing a square around them:

[Figure 8A.5 — Desired closed-loop pole locations in the s-plane are represented with a square around them.]

If the system has an additional dominant pole, we can sometimes cancel it with a close zero:

[Figure 8A.6 — An s-plane pole-zero plot with an additional dominant pole.]
or we introduce a minor-loop to move a dominant first-order pole away:

[Figure 8A.7 — A dominant second-order all-pole system made by moving a pole using a minor-feedback loop.]

[Figure 8A.8 — A close zero used to cancel a dominant pole in the s-plane.]

In practice, pole-zero cancellation is only approximate. In some cases this inexact cancellation can cause the opposite of the desired effect: the system will have a dominant first-order pole and the response will be sluggish.
Example

Suppose we have the specifications P.O. ≤ 10%, t_p ≤ 1.57 s and 2% settling time t_s ≤ 9 s.

For the P.O. spec:

0.1 = e^{−πζ/√(1−ζ^2)}  gives  ζ = 0.591

so that:

θ = cos^{−1} 0.591 ≈ 53°

For the peak time spec, we have:

ω_d = π/t_p = π/1.57 ≈ 2 rad s^{−1}

For the settling time spec:

α = −ln 0.02/t_s = 3.9/9 ≈ 0.43 nepers s^{−1}
The regions can now be drawn:

[Figure — The combined specification regions in the s-plane: rays at ±53° from the negative real axis, horizontal lines at ±j2 (drawn just inside ±j2.1), and a vertical line at −0.43.]

We now ask what the corresponding regions in the z-plane should be to satisfy the initial design specifications.
Mapping of a Point from the s-plane to the z-plane
A point s = σ + jω maps to:

z = e^{sT_s} = e^{σT_s} e^{jωT_s}

|z| = e^{σT_s},   ∠z = ωT_s   (8A.4)

[Figure 8A.9 — A point in the s-plane maps to a point in the z-plane with magnitude e^{σT_s} and angle ωT_s.]

In particular:

σ < 0  ⟹  |z| < 1
σ = 0  ⟹  |z| = 1
σ > 0  ⟹  |z| > 1   (8A.5)

We will now translate the performance specification criteria areas from the s-plane to the z-plane.
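Eq. (8A.4) is easy to tabulate numerically. A sketch (the sample points and T_s are arbitrary choices of mine):

```python
import cmath, math

def s_to_z(sigma, omega, Ts):
    """Map the s-plane point s = sigma + j*omega to z = exp(s*Ts), Eq. (8A.4)."""
    return cmath.exp(complex(sigma, omega) * Ts)

Ts = 0.1
z_lhp  = s_to_z(-2.0, 5.0, Ts)   # left half-plane point:  |z| = e^(-0.2) < 1
z_axis = s_to_z( 0.0, 5.0, Ts)   # j-axis point:           |z| = 1, angle 0.5 rad
z_rhp  = s_to_z( 2.0, 5.0, Ts)   # right half-plane point: |z| = e^(0.2) > 1
```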
Percent Overshoot

A locus of constant damping ratio ζ in the s-plane is a ray at angle θ = cos^{−1} ζ from the negative real axis. Along such a ray σ = −ω/tan(cos^{−1} ζ), so it maps to a logarithmic spiral in the z-plane:

z = e^{−ωT_s/tan(cos^{−1} ζ)} ∠ωT_s   (8A.6)

[Figure 8A.10 — Constant-ζ rays in the s-plane (ζ = 0, ζ = 0.5, ζ = 1) map to logarithmic spirals inside the unit circle of the z-plane.]

The region in the z-plane that corresponds to the region in the s-plane where the P.O. specification is satisfied is shown below:

[Figure 8A.11 — P.O. specification region in the z-plane: the region enclosed by the ζ = 0.5 spirals for frequencies up to ±ω_s/2.]
Peak Time

A locus of constant damped frequency ω_d in the s-plane is a horizontal line, which maps to:

z = e^{sT_s} = e^{σT_s} e^{jω_d T_s} = e^{σT_s} ∠ω_d T_s   (8A.7)

For a given ω_d, this locus is a straight line between the origin and the unit-circle, at an angle ω_d T_s (remember we only consider the LHP of the s-plane, so that σ ≤ 0):

[Figure 8A.12 — Horizontal lines at jω_1, jω_2, … in the s-plane map to radial lines at angles ω_1 T_s, ω_2 T_s, … in the z-plane.]

The region in the z-plane that corresponds to the region in the s-plane where the peak time specification is satisfied is shown below:

[Figure 8A.13 — Peak time specification region in the z-plane: the part of the unit disk whose angle from the positive real axis exceeds ±ω_d T_s.]
Settling Time

A locus of constant real part −α in the s-plane is a vertical line, which maps to:

z = e^{sT_s} = e^{−αT_s} e^{jωT_s} = e^{−αT_s} ∠ωT_s   (8A.8)

For a given α, this locus is a circle centred at the origin with a radius e^{−αT_s}:

[Figure 8A.14 — Vertical lines at −α_1, −α_2 in the s-plane map to circles of radius e^{−α_1 T_s}, e^{−α_2 T_s} in the z-plane.]

The region in the z-plane that corresponds to the region in the s-plane where the settling time specification is satisfied is shown below:

[Figure 8A.15 — Settling time specification region in the z-plane: the disk of radius e^{−αT_s}.]
Combined Specifications

A typical specification will mean we have to combine the P.O., peak time and settling time regions in the z-plane. For the s-plane we chose pole locations close to the origin so as not to exceed the linear range of the system. Since |z| = e^{σT_s}, and σ is a small negative number for poles near the origin, this corresponds to maximizing |z|. We therefore choose pole locations in the z-plane which satisfy all the criteria, and we choose them as far away from the origin (as close to the unit-circle) as possible:

[Figure 8A.16 — Combined specifications region in the z-plane, with desired pole locations marked near the part of its boundary closest to the unit circle.]

We should always check whether the resulting discrete-time system will meet the specifications; it may not, due to the imperfections of the discretization!
Summary

The percent overshoot, peak time and settling time specifications for an all-pole second-order system can easily be found in the s-plane. Desired pole locations can then be chosen that satisfy all the specifications, and are usually chosen close to the origin so that the system remains in its linear region.

The percent overshoot, peak time and settling time specifications for an all-pole second-order system can be found in the z-plane. The desired pole locations are chosen as close to the unit-circle as possible so the system stays within its linear bounds.
References
Kuo, B: Automatic Control Systems, Prentice-Hall, 1995.
Nicol, J.: Circuits and Systems 2 Notes, NSWIT, 1986.
Lecture 8B Root Locus
Root locus. Root locus rules. MATLABs RLTool. Root loci of discrete-time
systems. Time-response of discrete-time systems.
Overview
The roots of a systems characteristic equation (the poles), determine the
mathematical form of the systems transient response to any input. They are
very important for system designers in two respects. If we can control the pole
locations then we can position them to meet time-domain specifications such as
P.O., settling time, etc. The pole locations also determine whether the system is
stable we must have all poles in the open LHP for stability. A root locus is
a graphical way of showing how the roots of a characteristic equation in the
complex (s or z) plane vary as some parameter is varied. It is an extremely
valuable aid in the analysis and design of control systems, and was developed
by W. R. Evans in his 1948 paper Graphical Analysis of Control Systems,
Trans. AIEE, vol. 67, pt. II, pp. 547-551, 1948.
Root Locus
As an example of the root locus technique, we will consider a simple unity-feedback control system:

[Figure 8B.1 — A unity-feedback control system: R(s) → E(s) → controller G_c(s) → plant G_p(s) → C(s).]

The closed-loop transfer function is:

C(s)/R(s) = G_c(s) G_p(s)/[1 + G_c(s) G_p(s)]   (8B.1)
This transfer function has the characteristic equation:

1 + G_c(s) G_p(s) = 0   (8B.2)
Now suppose that we can separate out the parameter of interest, K, in the
characteristic equation it may be the gain of an amplifier, or a sampling rate,
or some other parameter that we have control over. It could be part of the
controller, or part of the plant. Then the characteristic equation can be written:
1 + K P(s) = 0   (8B.3)

This is the characteristic equation of a unity-feedback system,
where P(s) does not depend on K. The graph of the roots of this equation, as the parameter K is varied, gives the root locus. In general:

K P(s) = K (s − z_1)(s − z_2)⋯(s − z_m)/[(s − p_1)(s − p_2)⋯(s − p_n)]   (8B.4)

where z_i are the m open-loop zeros and p_i are the n open-loop poles of the system. Also, rearranging Eq. (8B.3) gives us:
K P(s) = −1   (8B.5)

Taking magnitudes of both sides leads to the magnitude criterion for a root locus:

|P(s)| = 1/K   (8B.6a)
Similarly, taking angles of both sides of Eq. (8B.5) gives the angle criterion for
a root locus:
Root locus angle
criterion
(8B.6b)
To construct a root locus we can just apply the angle criterion. To find the
point on the locus corresponding to a particular value of K, we also apply the
magnitude criterion.
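Both criteria are easy to check numerically. As a minimal sketch (the loop P(s) = 0.5/(s(s+2)) is taken from the example that follows; the test point s = -1 + j is my choice), the point lies on the locus because the angle of P(s) is 180°, and the magnitude criterion then gives the gain there:

```python
import numpy as np

def P(s):
    # P(s) = 0.5 / (s(s + 2)), the maze-rover loop used in the next example
    return 0.5 / (s * (s + 2))

s0 = -1 + 1j                                 # candidate point on the locus
angle_deg = abs(np.degrees(np.angle(P(s0))))
K = 1 / abs(P(s0))                           # magnitude criterion: K = 1/|P|
print(angle_deg, K)                          # 180.0 4.0
```

The gain K = 4 agrees with the algebraic result below, where Kp = 4 places a closed-loop pole at -1 + j.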
Example

[Block diagram: R(s) enters a summing junction; the error E(s) drives the controller Kp, which drives the maze rover plant 0.5/(s(s+2)) to produce C(s); C(s) is fed back to the summing junction.]

We are trying to see the effect that the controller parameter Kp has on the
closed-loop system. First of all, we can make the following assignments:

Gp(s) = 0.5/(s(s + 2))  and  Gc(s) = Kp

K = Kp  and  P(s) = 0.5/(s(s + 2))

For such a simple system, it is easier to derive the root locus algebraically
rather than use the angle criterion. The characteristic equation of the system is:

1 + KP(s) = 1 + Kp · 0.5/(s(s + 2)) = 0

or just:

s² + 2s + 0.5Kp = 0
We will now evaluate the roots for various values of the parameter Kp:

Kp = 0:  s1,2 = 0, -2
Kp = 1:  s1,2 = -1 + 1/√2, -1 - 1/√2
Kp = 2:  s1,2 = -1, -1
Kp = 3:  s1,2 = -1 + j/√2, -1 - j/√2
Kp = 4:  s1,2 = -1 + j, -1 - j

[s-plane sketch: the two branches start at 0 and -2 (Kp = 0), move toward each other along the real axis, meet at -1 (Kp = 2), then break away vertically, passing through -1 ± j at Kp = 4.]
What may not have been obvious before is now readily revealed: the system is
unconditionally stable (for positive Kp) since the poles always lie in the LHP;
and the Kp parameter can be used to position the poles for an overdamped,
critically damped, or underdamped response. Also note that we can't arbitrarily
position the poles anywhere on the s-plane; we are restricted to the root locus.
This means, for example, that we cannot increase the damping of our
underdamped response: its envelope will always be e^(-t).
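The table of roots above can be reproduced numerically. A minimal sketch (numpy assumed; loop and variable names are mine):

```python
import numpy as np

# characteristic equation: s^2 + 2s + 0.5*Kp = 0
for Kp in [0, 1, 2, 3, 4]:
    roots = np.sort_complex(np.roots([1, 2, 0.5 * Kp]))
    print(Kp, roots)

# spot checks: Kp = 2 gives the double root at -1 (the breakaway point),
# and Kp = 4 gives the complex pair -1 ± j, real part fixed at -1
r2 = np.roots([1, 2, 1.0])
r4 = np.sort_complex(np.roots([1, 2, 2.0]))
```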
Root Locus Rules
We will now examine a few important rules about a root locus construction
that can give us insight into how a system behaves as the parameter K is varied.
Root locus
construction rules
1. Number of Branches

The number of branches of the root locus is equal to the order n of the
characteristic equation, since an nth-order equation has n roots.

2. Start and End Points

The root locus starts (i.e. K = 0) at the poles of P(s). This can be seen by
substituting Eq. (8B.4) into Eq. (8B.3) and rearranging:

(s + p1)(s + p2)...(s + pn) + K(s + z1)(s + z2)...(s + zm) = 0    (8B.7)

This shows us that the roots of Eq. (8B.3), when K = 0, are just the open-loop
poles of P(s), which are also the poles of Gc(s)Gp(s).

As K → ∞, the root locus branches terminate at the zeros of P(s). For n > m,
the remaining n - m branches go to infinity.

3. Symmetry

The root locus is symmetric about the real axis: complex portions occur in
complex-conjugate pairs.

4. Real Axis Sections

Any portion of the real axis forms part of the root locus for K > 0 if the total
number of real poles and zeros to the right of an exploratory point along the
real axis is an odd integer. For K < 0, the number is zero or even.
5. Asymptote Angles

The n - m branches of the root locus going to infinity have asymptotes given
by:

θ = r · 180°/(n - m),  r = ±1, ±3, ±5, ...    (8B.8)

6. Asymptote Intercept

The asymptotes all intercept at one point on the real axis given by:

σA = (Σ poles of P(s) - Σ zeros of P(s)) / (n - m)    (8B.9)

The value of σA is the centroid of the open-loop pole and zero configuration.
7. Real Axis Breakaway and Break-In Points
A breakaway point is where a section of the root locus branches from the real
axis and enters the complex region of the s-plane in order to approach zeros
which are finite or are located at infinity. Similarly, there are branches of the
root locus which must break-in onto the real axis in order to terminate on
zeros.
Examples of breakaway and break-in points are shown below:

[Figure 8B.2: s-plane sketch of a root locus showing a breakaway point, where branches leave the real axis, and a break-in point, where branches rejoin it; real axis marked from -5 to -1.]
The breakaway (and break-in) points correspond to points in the s-plane where
multiple real roots of the characteristic equation occur. A simple method for
finding the breakaway points is available. Taking a lead from Eq. (8B.7), and
writing P(s) = A(s)/B(s), we can write the characteristic equation 1 + KP(s) = 0 as:

f(s) = B(s) + KA(s) = 0    (8B.10)

If f(s) has a multiple root of order r at s = s1, then it can be written:

f(s) = (s - s1)^r (s - s2)...    (8B.11)

and differentiating shows that:

df(s)/ds = 0  at  s = s1    (8B.12)
This means that multiple roots will satisfy Eq. (8B.12). From Eq. (8B.10) we
obtain:

df(s)/ds = B'(s) + KA'(s) = 0    (8B.13)

The particular value of K that will yield multiple roots of the characteristic
equation is obtained from Eq. (8B.13) as:

K = -B'(s)/A'(s)    (8B.14)

Substituting this value of K into Eq. (8B.10), we get:

f(s) = B(s) - [B'(s)/A'(s)] A(s) = 0    (8B.15)

or:

B(s)A'(s) - B'(s)A(s) = 0    (8B.16)

On the other hand, from the characteristic equation we also have:

K = -B(s)/A(s)    (8B.17)

and:

dK/ds = -[B'(s)A(s) - B(s)A'(s)] / A²(s)    (8B.18)
If dK/ds is set equal to zero, we get the same condition as Eq. (8B.16).
Therefore, the breakaway and break-in points can be determined from the roots of:

dK/ds = 0    (8B.19)
It should be noted that not all points that satisfy Eq. (8B.19) correspond to
actual breakaway and break-in points for K > 0 (those that do not correspond to
K < 0 instead). Also, valid solutions must lie on the real axis.
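Rule 7 can be automated with simple polynomial arithmetic. A sketch for the earlier example, where 1 + Kp · 0.5/(s(s+2)) = 0 gives Kp = -2s² - 4s (the coefficient handling below is my own):

```python
import numpy as np

# From 1 + Kp*0.5/(s(s+2)) = 0:  Kp(s) = -s(s+2)/0.5 = -2s^2 - 4s
K_coeffs = [-2.0, -4.0, 0.0]           # coefficients of Kp as a polynomial in s
dK_coeffs = np.polyder(K_coeffs)       # dKp/ds = -4s - 4
breakaway = np.roots(dK_coeffs)
print(breakaway)                       # [-1.]: the breakaway point found earlier
```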
8. Imaginary Axis Crossings

The intersection of the root locus and the imaginary axis can be found by
solving the characteristic equation whilst restricting solution points to s = jω.
This is useful to analytically determine the value of K that causes the system to
become unstable (or stable, if the roots are entering from the right-half plane).
9. Effect of Poles and Zeros
Zeros tend to attract the locus, while poles tend to repel it.
10. Use a Computer

Use a computer to plot the root locus! The other rules provide intuition in
shaping the root locus, and are also used to derive analytical quantities, such
as the gain K at which the system just becomes unstable.
Example

[Block diagram: unity feedback around the open-loop transfer function G(s) = K(s + 6) / (s(s + 1)(s + 4)²), input R(s), output C(s).]

This system has four poles (two being a double pole) and one zero, all on the
negative real axis. In addition, it has three zeros at infinity. The root locus of
this system, illustrated below, can be drawn on the basis of the rules presented.

[s-plane sketch of the root locus: four branches start at the poles 0, -1 and the double pole at -4; asymptotes at ±60° and 180° intercept the real axis at -1; breakaway points at -0.4525 and -4, break-in point at -7.034; two branches cross the imaginary axis at ±j1.7 when K = 10.25; one branch terminates at the zero at -6.]
Rule 1: There are four separate loci since the characteristic equation,
1 + G(s) = 0, is a fourth-order equation.

Rule 2: The root locus starts (K = 0) from the poles located at 0, -1, and the
double pole located at -4. One branch terminates (K → ∞) at the zero located at -6
and three branches terminate at zeros which are located at infinity.

Rule 3: Complex portions of the root locus occur in complex-conjugate pairs.
Rule 4: The portions of the real axis between the origin and -1, and to the left
of the zero at -6, form part of the locus, as does the double pole at -4.

Rule 5: The angles of the three asymptotes are:

(1/3)·180° = 60°,  (3/3)·180° = 180°,  (5/3)·180° = 300°

Rule 6: The intersection of the asymptotic lines and the real axis occurs at:

σA = (-9 + 6)/(4 - 1) = -1
Rule 7 The point of breakaway (or break-in) from the real axis is determined as
K s 6
0
2
ss 1s 4
we have:
ss 1s 4
s 6
s 6 2ss 1s 4 2s 1s 4 ss 1s 4 0
dK
ds
s 62
2
Therefore:
s 6s 42ss 1 2s 1s 4 ss 1s 42 0
s 4s 64s 2 11s 4 ss 1s 4 0
s 43s 3 30s 2 66s 24 0
s 4s 3 10s 2 22s 8 0
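The cubic factor has no rational roots, so a quick numerical check is useful (numpy assumed):

```python
import numpy as np

# roots of the breakaway-condition factor s^3 + 10s^2 + 22s + 8 = 0
roots = np.sort(np.roots([1, 10, 22, 8]).real)
print(roots)   # approximately [-7.034, -2.513, -0.4525]
```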
One of the roots of this equation is obviously -4 (since it is written in factored
form), which corresponds to the open-loop double pole. The cubic equation must be
solved numerically; its roots are -0.4525, -2.5135 and -7.034. Of these, -0.4525
is the breakaway point and -7.034 is the break-in point; the root at -2.5135 does
not lie on the locus (it corresponds to K < 0).

Rule 8: The intersection of the root locus and the imaginary axis can be found by
substituting s = jω into the characteristic equation:

s⁴ + 9s³ + 24s² + (16 + K)s + 6K = 0

ω⁴ - j9ω³ - 24ω² + j(16 + K)ω + 6K = 0

Letting the real and imaginary parts go to zero, we have:

ω⁴ - 24ω² + 6K = 0  and  -9ω³ + (16 + K)ω = 0

Solving the second equation gives:

ω² = (16 + K)/9

Substituting this into the first equation yields a quadratic in K:

K² + 302K - 3200 = 0

Solving this quadratic, we finally get K = 10.2483. Substituting this into the
preceding equation, we obtain:

ω = 1.7 rad/s

as the frequency of crossover.
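The crossover result can be checked directly against the closed-loop polynomial. A minimal sketch (numpy assumed; I simply pick the pole pair closest to the imaginary axis):

```python
import numpy as np

K = 10.2483
# closed-loop characteristic polynomial: s^4 + 9s^3 + 24s^2 + (16+K)s + 6K
poles = np.roots([1, 9, 24, 16 + K, 6 * K])
crossing = poles[np.argmin(np.abs(poles.real))]
print(crossing)   # essentially 0 ± 1.708j: on the imaginary axis at K = 10.2483
```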
Rule 9 follows by observation of the resulting sketch.
Rule 10 is shown below:

[MATLAB Root Locus Editor plot of the same locus, real axis from -15 to 10, imaginary axis from -8 to 8, confirming the hand sketch.]
Thus, the computer solution matches the analytical solution, although the
analytical solution provides more accuracy for points such as the crossover
point and breakaway/break-in points; and it also provides insight into the
system's behaviour.
MATLAB's RLTool

We can use MATLAB to graph our root loci. From the command window,
just type rltool to start the root locus user interface.
Example

The previous maze rover positioning scheme has a zero introduced in the
controller:

[Block diagram: R(s) enters a summing junction; the error E(s) drives the controller Kp(s + z), which drives the maze rover plant 0.5/(s(s+2)) to produce C(s); C(s) is fed back.]
The locus for this case looks like:

[s-plane sketch: with the controller zero added, the branches are bent further into the left-half plane; markers shown for Kp = 0, Kp = 1 and Kp = 15 along the real axis from -5 to -1.]

We can see how the rules help us to bend and shape the root locus for our
purposes. For example, we may want to increase the damping (move the poles
to the left), which we have now done using a zero on the real axis. We couldn't
do this for the case of a straight gain in the previous example:

[s-plane sketch of the previous example's locus for comparison: the complex branches are confined to the vertical line through -1.]
Example

We will now see what happens if we place a pole in the controller instead of a
zero:

[Block diagram: R(s) enters a summing junction; the error E(s) drives the controller Kp/(s + p), which drives the maze rover plant 0.5/(s(s+2)) to produce C(s); C(s) is fed back.]

[s-plane sketch: the branches start at the open-loop poles; two branches break away from the real axis (marker at Kp = 4) and curve to the right, crossing into the RHP at Kp = 60.]

We see that the pole "repels" the root locus. Also, unfortunately in this case,
the root locus heads off into the RHP. If the parameter Kp is increased to over
60, then our system will be unstable!
The root locus technique carries over directly to discrete-time systems, where
the characteristic equation is:

1 + Gc(z)Gp(z) = 1 + KP(z) = 0    (8B.20)

but the stability boundary is now the unit circle rather than the imaginary axis.
Time Response of Discrete-Time Systems
We have already seen how to discretize a continuous-time system: there are
methods such as the bilinear transform and step-response matching. One
parameter of extreme importance in discretization is the sample period, Ts. We
need to choose it to be sufficiently small so that the discrete-time system
approximates the continuous-time system closely. If we don't, then the pole
and zero locations in the z-plane may lie outside the specification area, and in
extreme cases this can even lead to instability!
Example

[Block diagram: the error E(z) drives the controller Kp, which drives the maze rover, modelled as the sampled (discretized) version G(z) of the continuous-time plant 1/(s(s+1)); the coefficients of G(z) depend on the sample period Ts. The output is C(z).]
A locus of the closed-loop poles as Ts varies can be constructed. The figure
below shows the root locus as Ts is varied from 0.1 s to 1 s:

[z-plane sketch: the locus of the closed-loop poles inside the unit circle as Ts varies from 0.1 s to 1 s.]

The corresponding step responses for the two extreme cases are shown below:

[Step-response sketch against n·Ts: the Ts = 0.1 s response rises smoothly to 1, while the Ts = 1 s response shows pronounced overshoot and oscillation.]
We can see that the sample time affects the control system's ability to respond
to changes in the output. If the sample period is small relative to the time
constants in the system, then the output will be a good approximation to the
continuous-time case. If the sample period is much larger, then we inhibit the
ability of the feedback to correct for errors at the output, causing oscillations,
increased overshoot, and sometimes even instability.
Summary
There are various rules for drawing the root locus that help us to
analytically derive various values, such as gains that cause instability. A
computer is normally used to graph the root locus, but understanding the
root locus rules provides insight into the design of compensators in
feedback systems.
The root locus can tell us why and when a system becomes unstable.
We can bend and shape a root locus by the addition of poles and zeros so
that it passes through a desired location of the complex plane.
References
Kuo, B: Automatic Control Systems, Prentice-Hall, 1995.
Nicol, J.: Circuits and Systems 2 Notes, NSWIT, 1986.
Exercises
1.
For the system shown:

[Block diagram: unity feedback around the open-loop transfer function K(s + 4) / (s(s + 1)(s + 2)), input R(s), output C(s).]
2.

The plant shown is open-loop unstable due to the right-half plane pole.

[Block diagram: U(s) drives the plant 1/((s + 3)(s - 8)) to produce C(s).]

Use RLTool in MATLAB for the following:

(a) Show by a plot of the root locus that the plant cannot be stabilized for any
K, -∞ < K < ∞, if unity feedback is placed around it as shown.

[Block diagram: unity feedback around K/((s + 3)(s - 8)).]

(b) An attempt is made to stabilize the plant using the feedback compensator
shown:

[Block diagram: gain K and plant 1/((s + 3)(s - 8)) in the forward path; compensator (s + 1)/(s + 2) in the feedback path.]

Determine whether this design is successful by performing a root locus
analysis for 0 ≤ K < ∞. (Explain, with the aid of a sketch, why K < 0 is
not worth pursuing.)
3.

For the system shown, the required value of a is to be determined using the
root-locus technique.

[Block diagram: unity feedback; the forward path is the controller (s + a)/(s + 3) in cascade with the plant 1/(s(s + 1)), input R(s), output C(s).]

(a) Sketch the root locus of C(s)/R(s) as a varies from -∞ to ∞.

(b) From the root-locus plot, find the value of a which gives both minimum
overshoot and settling time when r(t) is a step function.

(c) Find the maximum value of a which just gives instability and determine the
frequency of oscillation for this value of a.
4.

The block diagram of a DC motor position control system is shown below.

[Block diagram: desired position R(s) enters a summing junction; the error drives the amplifier and motor 10/(s(s + 4)); the output passes through a gear train with ratio 0.3 to give the actual position C(s); a tachometer with gain Ks is in the feedback path.]
The performance is adjusted by varying the tachometer gain K. K can vary
from -100 to +100; 0 to +100 for the negative feedback configuration shown,
and 0 to -100 if the electrical output connections from the tachometer are
reversed (giving positive feedback).
(a) Sketch the root locus of C(s)/R(s) as K varies from -∞ to ∞.
Use two plots: one for negative feedback and one for positive feedback.
Find all important geometrical properties of the locus.
(b) Find the largest magnitude of K which just gives instability, and determine
the frequency of oscillation of the system for this value of K.
(c) Find the steady-state error (as a function of K) when r(t) is a step function.

(d) From the root locus plots, find the value of K which will give 10%
overshoot when r(t) is a step function, and determine the 10-90% rise time
for this value of K.

Note: The closed-loop system has two poles (as found from the root locus).
James Clerk Maxwell (1831-1879)
Maxwell produced a most spectacular work of individual genius: he unified
electricity and magnetism. Maxwell was able to summarize all observed
phenomena of electrodynamics in a handful of partial differential equations
known as Maxwell's equations¹:

∇×E = -∂B/∂t
∇·B = 0
∇×B = μ₀J + μ₀ε₀ ∂E/∂t
∇·E = ρ/ε₀
From these he was able to predict that there should exist electromagnetic waves
which could be transmitted through free space at the speed of light. The
revolution in human affairs wrought by these equations and their experimental
verification by Heinrich Hertz in 1888 is well known.

¹ It was Oliver Heaviside, who in 1884-1885, cast the long list of equations that Maxwell had
given into the compact and symmetrical set of four vector equations shown here and now
universally known as Maxwell's equations. It was in this new form ("Maxwell redressed," as
Heaviside called it) that the theory eventually passed into general circulation in the 1890s.
As a student, Maxwell read widely: works on differential
equations, Fourier on the theory of heat, Newton on optics, Poisson on
mechanics and Taylor's scientific memoirs. In 1850 he moved to Trinity
College, Cambridge, where he graduated with a degree in mathematics in 1854.
Maxwell was edged out of first place in the final examinations by his
classmate Edward Routh, who was also an excellent mathematician.
Maxwell stayed at Trinity where, in 1855, he formulated a theory of three
primary colour-perceptions for the human perception of colour. In 1855 and
1856 he read papers to the Cambridge Philosophical Society, "On Faraday's
Lines of Force", in which he showed how a few relatively simple mathematical
equations could express the behaviour of electric and magnetic fields.
In 1856 he became Professor of Natural Philosophy at Aberdeen, Scotland, and
started to study the rings of Saturn. In 1857 he showed that stability could be
achieved only if the rings consisted of numerous small solid particles, an
explanation now confirmed by the Voyager spacecraft.
In 1860 Maxwell moved to King's College in London. In 1861 he created the
first colour photograph, of a Scottish tartan ribbon, and was elected to the
Royal Society. In 1862 he calculated that the speed of propagation of an
electromagnetic wave is approximately that of the speed of light:
We can scarcely avoid the conclusion that light consists in the transverse
undulations of the same medium which is the cause of electric and magnetic
phenomena.
Maxwell's famous account, "A Dynamical Theory of the Electromagnetic
Field", was read before a largely perplexed Royal Society in 1864. Here he
brought forth, for the first time, the equations which comprise the basic laws of
electromagnetism.
Maxwell also continued work he had begun at Aberdeen, on the kinetic theory
of gases (he had first considered the problem while studying the rings of
Saturn). In 1866 he formulated, independently of Ludwig Boltzmann, the
kinetic theory of gases, which showed that temperature and heat involved only
molecular motion.
Maxwell was the first to publish an analysis of the effect of a capacitor in a
circuit containing inductance, resistance and a sinusoidal voltage source, and to
show the conditions for resonance. The way in which he came to solve this
problem makes an interesting story:
Maxwell was spending an evening with Sir William Grove who was then
engaged in experiments on vacuum tube discharges. He used an induction
coil for this purpose, and found that if he put a capacitor in series with the
primary coil he could get much larger sparks. He could not see why. Grove
knew that Maxwell was a splendid mathematician, and that he also had
mastered the science of electricity, especially the theoretical art of it, and so
he thought he would ask this young man [Maxwell was 37] for an
explanation. Maxwell, who had not had very much experience in
experimental electricity at that time, was at a loss. But he spent that night in
working over his problem, and the next morning he wrote a letter to Sir
William Grove explaining the whole theory of the capacitor in series
connection with a coil. It is wonderful what a genius can do in one night!
Maxwell's letter, which began with the sentence, "Since our conversation
yesterday on your experiment on magneto-electric induction, I have considered
it mathematically, and now send you the result," was dated March 27, 1868.
Preliminary to the mathematical treatment, Maxwell gave in this letter an
unusually clear exposition of the analogy existing between certain electrical
and mechanical effects. In the postscript, or appendix, he gave the
mathematical theory of the experiment. Using different, but equivalent,
symbols, he derived and solved the now familiar expression for the current i in
such a circuit:
L di/dt + Ri + (1/C)∫i dt = V sin ωt

The solution for the current amplitude of the resulting sinusoid, in the steady
state, is:

V / √(R² + (ωL - 1/ωC)²)

from which Maxwell pointed out that the current would be a maximum when:

ωL = 1/ωC
Following Maxwell, Heinrich Hertz later showed a thorough acquaintance with
electrical resonance and made good use of it in his experimental apparatus that
proved the existence of electromagnetic waves, as predicted by Maxwell's
equations. In the first of his series of papers describing his experiment, "On
Very Rapid Electric Oscillations", published in 1887, he devotes one section to
a discussion of "Resonance Phenomena" and published the first electrical
resonance curve:

[Figure: the first electrical resonance curve published, by Hertz, 1887.]
When creating his standard for electrical resistance, Maxwell wanted to design
a governor to keep a coil spinning at a constant rate. He made the system stable
by using the idea of negative feedback. It was known for some time that the
governor was essentially a centrifugal pendulum, which sometimes exhibited
"hunting" about a set point; that is, the governor would oscillate about an
equilibrium position until limited in amplitude by the throttle valve or the
travel allowed to the bobs. This problem was solved by Airy in 1840 by fitting
a damping disc to the governor. It was then possible to minimize speed
fluctuations by adjusting the controller gain. But as the gain was increased,
the governors would burst into oscillation again. In 1868, Maxwell published
his paper "On Governors", the first systematic mathematical treatment of
feedback stability, in which stability is tied to the roots of the characteristic
equation lying in the left half of the s-plane.
In 1870 Maxwell published his textbook Theory of Heat. The following year he
returned to Cambridge to be the first Cavendish Professor of Physics; he
designed the Cavendish laboratory and helped set it up.
The four partial differential equations describing electromagnetism, now
known as Maxwell's equations, first appeared in fully developed form in his
Treatise on Electricity and Magnetism in 1873. The significance of the work
was not immediately grasped, mainly because an understanding of the atomic
nature of electromagnetism was not yet at hand.
The Cavendish laboratory was opened in 1874, and Maxwell spent the next 5
years editing Henry Cavendish's papers.
Maxwell died of abdominal cancer, in 1879, at the age of forty-eight. At his
death, Maxwell's reputation was uncertain. He was recognised to have been an
exceptional scientist, but his theory of electromagnetism remained to be
convincingly demonstrated. About 1880 Hermann von Helmholtz, an admirer
of Maxwell, discussed the possibility of confirming his equations with a
student, Heinrich Hertz. In 1888 Hertz performed a series of experiments
which produced and measured electromagnetic waves and showed how they
behaved like light. Thereafter, Maxwell's reputation continued to grow, and he
may be said to have prepared the way for twentieth-century physics.
References
Blanchard, J.: The History of Electrical Resonance, Bell System Technical
Journal, Vol. 20 (4), p. 415, 1941.
Lecture 9A State-Variables
State representation. Solution of the state equations. Transition matrix.
Transfer function. Impulse response. Linear state-variable feedback.
Overview
The frequency-domain has dominated our analysis and design of signals and
systems up until now. Frequency-domain techniques are powerful tools, but
they do have limitations. High-order systems are hard to analyse and design.
Initial conditions are hard to incorporate into the analysis process (remember
the transfer function only gives the zero-state response). A time-domain
approach, called the state-space approach, overcomes these deficiencies and
also offers additional features of analysis and design that we have not yet
considered.
State Representation
Consider the following simple electrical system:

[Figure 9A.1: a series RLC circuit. The source vs drives R, L and C in series; vC is the capacitor voltage and i the loop current.]
In the analysis of a system via the state-space approach, the system is
characterized by a set of first-order differential equations that describe its
state variables. State variables are usually denoted by q1 , q2 , q3 , , qn .
They characterize the future behaviour of a system once the inputs to a system
are specified, together with a knowledge of the initial states.
States

For the system in Figure 9A.1, we can choose i and vC as the state variables.
Therefore, let:

q1 = i    (9A.1a)
q2 = vC    (9A.1b)

Applying Kirchhoff's voltage law around the loop:

vs = Ri + L di/dt + vC    (9A.2)

so that:

di/dt = -(R/L)i - (1/L)vC + (1/L)vs    (9A.3)

In terms of our state variables, given in Eqs. (9A.1), we can rewrite this as:

dq1/dt = -(R/L)q1 - (1/L)q2 + (1/L)vs    (9A.4)

Finally, we write Eq. (9A.4) in the standard nomenclature for state-variable
analysis: we use q̇ = dq/dt and also let the input, vs, be represented by the
symbol x:

q̇1 = -(R/L)q1 - (1/L)q2 + (1/L)x    (9A.5)
Returning to the analysis of the circuit in Figure 9A.1, we have for the current
through the capacitor:

i = C dvC/dt    (9A.6)

or, in terms of the state variables:

q1 = C q̇2    (9A.7)

so that:

q̇2 = (1/C)q1    (9A.8)

Writing Eqs. (9A.5) and (9A.8) in matrix form:

[q̇1]   [-R/L  -1/L] [q1]   [1/L]
[q̇2] = [ 1/C    0 ] [q2] + [ 0 ] x    (9A.9)

or, compactly, the state equation:

q̇ = Aq + bx    (9A.10)

We will reserve small boldface letters for column vectors, such as q and b.
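The state equation can be integrated directly. A minimal forward-Euler sketch (the component values R = 1, L = 1, C = 1 and the unit-step source are illustrative assumptions, as are the step size and horizon):

```python
import numpy as np

R, L, C = 1.0, 1.0, 1.0                     # illustrative component values
A = np.array([[-R / L, -1 / L],
              [1 / C, 0.0]])
b = np.array([1 / L, 0.0])

dt = 1e-3
q = np.zeros(2)                             # q = [i, vC], zero initial state
for _ in range(int(20 / dt)):               # unit-step input x = 1
    q = q + dt * (A @ q + b * 1.0)

print(q)   # i settles to 0 and vC settles to 1, the step amplitude
```

At steady state q̇ = 0 gives q = -A⁻¹b = [0, 1], which the simulation reproduces.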
Output

The system output can usually be expressed as a linear combination of all the
state variables. For example, if for the RLC circuit of Figure 9A.1 the output y
is vC, then:

y = vC = q2    (9A.11)

which in matrix notation is:

y = [0  1] [q1; q2]    (9A.12)

or, compactly:

y = cᵀq    (9A.13)

In general, the output may also depend directly on the input, giving the output
equation:

y = cᵀq + dx    (9A.14)

For multiple inputs and outputs, the state and output equations generalise to:

q̇ = Aq + Bx
y = Cᵀq + Dx    (9A.15)
Solution of the State Equations
Once the state equations for a system have been obtained, it is usually
necessary to find the output of a system for a given input (However, some
parameters of the system can be directly determined by examining the A
matrix, in which case we may not need to solve the state equations).
We can solve the state equations in the s-domain. Taking the Laplace
Transform of Eq. (9A.10) gives:
sQ(s) - q(0) = AQ(s) + bX(s)    (9A.16)

Notice how the initial conditions are automatically included by the Laplace
transform of the derivative. The solution will be the complete response, not just
the ZSR.

Making Q(s) the subject, we get:

Q(s) = (sI - A)⁻¹q(0) + (sI - A)⁻¹bX(s)    (9A.17)

We define the resolvent matrix:

Φ(s) = (sI - A)⁻¹    (9A.18)

so that:

Q(s) = Φ(s)q(0) + Φ(s)bX(s)    (9A.19)

and the output is:

Y(s) = cᵀΦ(s)q(0) + cᵀΦ(s)bX(s)    (9A.20)
All we have to do is take the inverse Laplace transform (ILT) to get the
solution in the time-domain.

Before we do that, we also define the ILT of the resolvent matrix, called the
transition matrix:

φ(t) = L⁻¹{Φ(s)}    (9A.21)

Taking the ILT of Eq. (9A.19) then gives the complete solution of the state
equation:

q(t) = φ(t)q(0) + ∫₀ᵗ φ(t - τ)bx(τ) dτ    (9A.22)
        (ZIR)         (ZSR)

Notice how multiplication in the s-domain turned into convolution in the
time-domain. The transition matrix is a generalisation of impulse response, but it
applies to states, not the output!

We can get the output response of the system after solving for the states by
direct substitution into Eq. (9A.14).
Transition Matrix

The transition matrix possesses two interesting properties that help it to be
calculated by a digital computer:

φ(0) = I    (9A.23a)
φ(t) = e^(At)    (9A.23b)

The first property is obvious by substituting t = 0 into Eq. (9A.22). The second
relationship arises by observing that the solution to the state equation for the
case of zero input, q̇ = Aq, is q = q(0)e^(At); for zero input, Eq. (9A.22) gives
q(t) = φ(t)q(0). The matrix exponential is defined by the series:

e^(At) = I + At/1! + (At)²/2! + (At)³/3! + ...    (9A.24)
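The series in Eq. (9A.24) converges quickly for modest At. A sketch comparing the truncated series against the closed-form transition matrix φ(t) = e⁻ᵗ[[1+t, t], [-t, 1-t]] derived in the example that follows (the matrix A, the time point and the number of terms are taken from or chosen for that example):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
t = 0.5

# truncated series I + At + (At)^2/2! + ... (Eq. 9A.24)
series = np.eye(2)
term = np.eye(2)
for k in range(1, 20):
    term = term @ (A * t) / k
    series = series + term

# closed form phi(t) = e^{-t} [[1+t, t], [-t, 1-t]]
phi = np.exp(-t) * np.array([[1 + t, t],
                             [-t, 1 - t]])
print(np.max(np.abs(series - phi)))   # negligible: the series matches
```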
Example

Suppose a system is described by the following differential equation:

d²y/dt² + 2 dy/dt + y = r + dr/dt    (9A.25)

with input:

r = sin t    (9A.26)

Let:

q1 = y,  q2 = ẏ,  x = r + ṙ    (9A.27)

then:

q̇1 = q2
q̇2 = -q1 - 2q2 + x    (9A.28)

or just:

q̇ = Aq + bx    (9A.29)
with:

A = [ 0   1 ]     b = [0]     q(0) = [1]
    [-1  -2 ],        [1],           [0]    (9A.30)

Then:

sI - A = [ s  -1 ]
         [ 1  s+2]    (9A.31)

Using the matrix-inverse formula B⁻¹ = adj B / |B|, and noting that
|sI - A| = s(s + 2) + 1 = (s + 1)²:    (9A.32)

Φ(s) = (sI - A)⁻¹ = [ (s+2)/(s+1)²    1/(s+1)² ]
                    [  -1/(s+1)²      s/(s+1)² ]    (9A.33)

The transition matrix is the inverse Laplace transform of the resolvent matrix:

φ(t) = L⁻¹{Φ(s)} = [ e⁻ᵗ(t+1)    te⁻ᵗ     ]
                   [ -te⁻ᵗ       e⁻ᵗ(1-t) ]    (9A.34)

The ZIR is therefore:

q_ZIR(t) = φ(t)q(0) = [ e⁻ᵗ(t+1)    te⁻ᵗ     ] [1]   [ e⁻ᵗ(t+1) ]
                      [ -te⁻ᵗ       e⁻ᵗ(1-t) ] [0] = [ -te⁻ᵗ    ]    (9A.35)

and the ZSR is:

q_ZSR(t) = L⁻¹{Φ(s)bX(s)}    (9A.36)
The Laplace transform of the input is:

X(s) = L{sin t + cos t} = (s + 1)/(s² + 1)    (9A.37)

so that:

q_ZSR(t) = L⁻¹{ Φ(s)b · (s + 1)/(s² + 1) }
         = L⁻¹{ [ 1/((s+1)(s²+1)) ; s/((s+1)(s²+1)) ] }    (9A.38)

Expanding each entry in partial fractions and inverting:

q_ZSR(t) = [  ½(e⁻ᵗ - cos t + sin t) ]
           [ ½(-e⁻ᵗ + cos t + sin t) ]    (9A.39)

The total response is the sum of the ZIR and the ZSR:

q(t) = [ (3/2)e⁻ᵗ + te⁻ᵗ - ½cos t + ½sin t ]
       [ -(1/2)e⁻ᵗ - te⁻ᵗ + ½cos t + ½sin t ]    (9A.40)

This is just the solution for the states. To get the output, we use:

y = cᵀq + dx    (9A.41)

with cᵀ = [1  0] and d = 0, so that:

y = (3/2)e⁻ᵗ + te⁻ᵗ - ½cos t + ½sin t    (9A.42)

You should confirm this solution by solving the differential equation directly
using your previous mathematical knowledge, e.g. the method of undetermined
coefficients.
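The closed-form solution can also be checked numerically, e.g. by comparing it against a brute-force Euler integration of the state equations (step size and horizon are arbitrary choices of mine):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
b = np.array([0.0, 1.0])
dt = 1e-4
q = np.array([1.0, 0.0])                     # q(0) = [1, 0]

for n in range(int(2.0 / dt)):               # integrate to t = 2
    t = n * dt
    x = np.sin(t) + np.cos(t)                # x = r + dr/dt with r = sin t
    q = q + dt * (A @ q + b * x)

t = 2.0
y_closed = 1.5 * np.exp(-t) + t * np.exp(-t) - 0.5 * np.cos(t) + 0.5 * np.sin(t)
print(abs(q[0] - y_closed))                  # small: the solutions agree
```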
Transfer Function

The transfer function of a single-input single-output (SISO) system can be
obtained easily from the state-variable equations. Since a transfer function only
gives the ZSR (all initial conditions are zero), Eq. (9A.19) becomes:

Q(s) = Φ(s)bX(s)    (9A.43)

The output in the s-domain, using the Laplace transform of Eq. (9A.13) and
Eq. (9A.43), is just:

Y(s) = cᵀQ(s) = cᵀΦ(s)bX(s)    (9A.44)

Therefore, the transfer function is:

H(s) = cᵀΦ(s)b    (9A.45)

Impulse Response

The impulse response is just the inverse Laplace transform of the transfer
function:

h(t) = cᵀφ(t)b = cᵀe^(At)b    (9A.46)
Example

Continuing the analysis of the system used in the previous example, we can
find the transfer function using Eq. (9A.45):

H(s) = cᵀΦ(s)b = [1  0] [ (s+2)/(s+1)²    1/(s+1)² ] [0]
                        [  -1/(s+1)²      s/(s+1)² ] [1]    (9A.47)

     = [1  0] [ 1/(s+1)² ; s/(s+1)² ]    (9A.48)

so that:

H(s) = 1/(s + 1)²    (9A.49)

Why would we use the state-variable approach to obtain the transfer function?
For a simple system, we probably wouldn't, but for multiple-input multiple-output
systems, it is much easier using the state-variable approach.
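The matrix product in Eq. (9A.45) can be evaluated numerically at any test point. A sketch (numpy assumed; the test point s = j is an arbitrary choice of mine):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

s = 1j                                   # arbitrary test point on the jw axis
Phi = np.linalg.inv(s * np.eye(2) - A)   # resolvent matrix (sI - A)^-1
H = c @ Phi @ b                          # H(s) = c^T Phi(s) b
print(H, 1 / (s + 1) ** 2)               # both give -0.5j
```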
Linear State-Variable Feedback

Consider the following system drawn using a state-variable approach:

[Figure 9A.2: block diagram of linear state-variable feedback. The reference r(t) enters a summing junction; the result is scaled by k0 to form x(t), which drives the system to be controlled (an integrator block with feedback matrix A, producing the states q); the states are weighted by cᵀ to form the output y(t), and by kᵀ to form the feedback signal subtracted from r(t) in the controller.]

The system has been characterised in terms of states; you should confirm that
the above diagram of the system to be controlled is equivalent to the matrix
formulation of Eqs. (9A.10) and (9A.13).
We have placed a controller in front of the system, and we desire the output y
to follow the set point, or reference input, r. The design of the controller
involves the determination of the controller variables k0 and k to achieve a
desired response from the system (the desired response could be a time-domain
specification, such as rise time, or a frequency-domain specification, such as
bandwidth).

The controller multiplies each of the states qi by a gain ki, subtracts the
sum of these from the input r, and multiplies the result by a gain k0.

Now, the input x to the controlled system is:

x = k0(r - kᵀq)    (9A.50)

Therefore, the state equations are:

q̇ = A_k q + bk0 r    (9A.51)
where state-variable feedback has modified the A matrix:

A_k = A - k0 b kᵀ    (9A.52)

The closed-loop transfer function when linear state-variable feedback is
applied is:

H(s) = k0 cᵀΦ_k(s)b    (9A.53)

where the modified resolvent matrix is:

Φ_k(s) = (sI - A_k)⁻¹    (9A.54)

The gains k1, k2, ..., kn and k0 are chosen to create the
transfer function obtained from the design criteria (easy for an n = 2 second-order
system).
Example

Suppose that it is desired to control an open-loop process using state-variable
control techniques. The open-loop system is shown below:

[Block diagram: X(s) drives 1/(s + 70) to give Q1(s), which drives 1/s to give Q2(s) = Y(s).]

Suppose the design criteria call for a second-order response with:

ωn = 50 rad/s,  ζ = 0.7071    (9A.55)

so that the desired closed-loop transfer function is:

C(s)/R(s) = ωn² / (s² + 2ζωn s + ωn²)    (9A.56)
          = 2500 / (s² + 70.71s + 2500)    (9A.57)

From the block diagram:

sQ1 = -70Q1 + X
sQ2 = Q1
Y = Q2    (9A.58)

The corresponding state-variable representation is readily found to be:

q̇ = [-70  0] q + [1] x
    [  1  0]     [0]

y = [0  1] q    (9A.59)

or just:

q̇ = Aq + bx
y = cᵀq    (9A.60)

Controller

[Block diagram: R(s) enters a summing junction; the result is scaled by k0 to form the control effort X(s), which drives the process 1/(s + 70) → Q1(s) → 1/s → Q2(s) = C(s); gains k1 and k2 feed Q1(s) and Q2(s) back to the summing junction.]

We see that the controller accepts a linear combination of the states, and
compares this with the reference input. It then provides gain and applies the
resulting signal as the control effort, X(s), to the process.

The input signal to the process is therefore:

x = k0(r - k1q1 - k2q2)    (9A.61)

or in matrix notation:

x = k0(r - kᵀq)    (9A.62)
Applying this as the input to the system changes the describing state equation
of Eq. (9A.60) to:

q̇ = Aq + bk0(r - kᵀq)
  = (A - k0 b kᵀ)q + bk0 r
  = A_k q + bk0 r    (9A.63)

where:

A_k = A - k0 b kᵀ    (9A.64)

If we let:

Φ_k(s) = (sI - A_k)⁻¹    (9A.65)

then the closed-loop transfer function is:

H(s) = k0 cᵀΦ_k(s)b    (9A.66)

For our system:

A_k = A - k0 b kᵀ = [-70  0] - k0 [1] [k1  k2]
                    [  1  0]      [0]

    = [-(70 + k0k1)  -k0k2]
      [      1          0 ]    (9A.67)

Then:

Φ_k(s) = (sI - A_k)⁻¹ = [ s + 70 + k0k1   k0k2 ]⁻¹
                        [      -1           s  ]

       = (1/(s² + (70 + k0k1)s + k0k2)) [ s      -k0k2        ]
                                        [ 1   s + 70 + k0k1   ]    (9A.68)
The closed-loop transfer function is then found as:

H(s) = k0 cᵀΦ_k(s)b
     = k0 [0  1] (1/(s² + (70 + k0k1)s + k0k2)) [ s      -k0k2        ] [1]
                                                [ 1   s + 70 + k0k1   ] [0]

     = k0 / (s² + (70 + k0k1)s + k0k2)    (9A.69)

The values of k0, k1 and k2 can be found by equating coefficients in
Eqs. (9A.57) and (9A.69). The following set of simultaneous equations result:

k0 = 2500
k0k2 = 2500
70 + k0k1 = 70.71    (9A.70)

which give:

k0 = 2500
k1 = 2.843 × 10⁻⁴
k2 = 1    (9A.71)

This completes the controller design. The final step would be to draw the root
locus and examine the relative stability, and the sensitivity to slight gain
variations. For this simple system, the final step is not necessary.
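The design can be verified by checking the closed-loop eigenvalues against the criteria in Eq. (9A.55). A sketch (gains from Eq. (9A.71); the recovery of ωn and ζ from the pole pair is my own bookkeeping):

```python
import numpy as np

A = np.array([[-70.0, 0.0], [1.0, 0.0]])
b = np.array([[1.0], [0.0]])
k = np.array([[0.71 / 2500, 1.0]])       # [k1, k2] from the design
k0 = 2500.0

Ak = A - k0 * (b @ k)                    # Eq. (9A.52): A_k = A - k0 b k^T
eig = np.linalg.eigvals(Ak)
print(eig)                               # -35.355 ± j35.355

wn = np.sqrt(np.prod(np.abs(eig)))       # product of |poles| equals wn^2
zeta = -eig.real[0] / wn
print(wn, zeta)                          # approximately 50 and 0.7071
```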
Summary
Linear state-variable feedback involves the design of gains for each of the
states, plus the input.
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Shinners, S.: Modern Control System Theory and Application, AddisonWesley, 1978.
Exercises
1.
Using capacitor voltages and inductor currents, write a state-variable
representation for the following circuit:

[Circuit: source vs with resistors R1, R2 and R3, inductor L1 and capacitor C1.]
2.

Consider a linear system with input u and output y. Three experiments are
performed on this system using the inputs u1(t), u2(t) and u3(t) for t ≥ 0. In
each case, the initial state at t = 0, x0, is the same. The corresponding
observed outputs are y1(t), y2(t) and y3(t). Which of the following three
predictions are true if x0 ≠ 0?

(a) If u3 = u1 + u2, then y3 = y1 + y2.

(b) If u3 = ½(u1 + u2), then y3 = ½(y1 + y2).

(c) If u3 = u1 - u2, then y3 = y1 - y2.

Which are true if x0 = 0?
3.

Write dynamical equation descriptions for the block diagrams shown below
with the chosen state variables.

(a) [Block diagram: three first-order blocks, 1/(s+2) → Q1, 1/s → Q2 and 1/(s+2) → Q3, interconnected with a gain of 2.]

(b) [Block diagram: cascade of 1/(s+2) → Q1, 1/(s+1) → Q2 and 1/(s+2) → Q3.]
4.
Find the transfer function of the systems in Q3:
(i)
(ii)
5.

Given:

q̇ = [1  1] q + [1] x
    [4  1]     [2]

and:

q(0) = [3]
       [0]

x(t) = 0 for t < 0, and x(t) = 1 for t ≥ 0.

Find q(t).
6.
Write a simple MATLAB script to evaluate the time response of the system
described in state-equation form in Q5, using the approximate (Euler)
relationship:

q(t + T) ≈ q(t) + T·q̇(t)
Lecture 9B State-Variables 2
Normal form. Similarity transform. Solution of the state equations for the ZIR.
Poles and repeated eigenvalues. Discrete-time state-variables. Discrete-time
response. Discrete-time transfer function.
Overview
State-variable analysis is useful for high-order systems and multiple-input
multiple-output systems. It can also be used to find transfer functions between
any output and any input of a system. It also gives us the complete response.
The only drawback to all this analytical power is that solving the state-variable
equations for high-order systems is difficult to do symbolically. Any computer
solution also has to be thought about in terms of processing time and memory
storage requirements.
The use of eigenvectors and eigenvalues solves this problem.
State-variable analysis can also be extended to discrete-time systems,
producing equations exactly analogous to those for continuous-time systems.
Normal Form
Solving matrix equations is hard...unless we have a trivial system:
$$\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \vdots \\ \dot{z}_n \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix}$$ (9B.1)

or, in matrix notation:

$$\dot{\mathbf{z}} = \boldsymbol{\Lambda}\mathbf{z}$$ (9B.2)
9B.2
This is useful for the ZIR of a state-space representation where:

$$\dot{\mathbf{q}} = \mathbf{A}\mathbf{q}$$ (9B.3)

If we assume a solution of the form:

$$\mathbf{q} = \mathbf{q}_0 e^{\lambda t}$$ (9B.4)

then substituting into Eq. (9B.3) gives:

$$\mathbf{A}\mathbf{q} = \lambda\mathbf{q}$$ (9B.5)

$$(\mathbf{A} - \lambda\mathbf{I})\mathbf{q} = \mathbf{0}$$ (9B.6)

For a non-trivial solution to exist, we require:

$$|\mathbf{A} - \lambda\mathbf{I}| = 0$$ (9B.7)

This is called the characteristic equation of the system. The eigenvalues are
the values of lambda which satisfy |A - lambda*I| = 0. Once we have all the
lambdas, each column vector q_i which satisfies the original equation
Eq. (9B.5) is called a column eigenvector.

Eigenvalues and eigenvectors defined:

$$|\mathbf{A} - \lambda\mathbf{I}| = 0 \;\Rightarrow\; \lambda_i \text{ eigenvalue}, \qquad \mathbf{A}\mathbf{q}_i = \lambda_i\mathbf{q}_i \;\Rightarrow\; \mathbf{q}_i \text{ eigenvector}$$ (9B.8)
9B.3
An eigenvector corresponding to an eigenvalue is not unique: an eigenvector
can be multiplied by any non-zero arbitrary constant. We therefore tend to
choose the simplest eigenvectors to make the mathematics easy.
Example

Given a system's A matrix, we want to find the eigenvalues and eigenvectors.

$$\mathbf{A} = \begin{bmatrix} 2 & -3 & -2 \\ -10 & 3 & 4 \\ -3 & 6 & 1 \end{bmatrix}$$ (9B.9)

$$|\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} 2-\lambda & -3 & -2 \\ -10 & 3-\lambda & 4 \\ -3 & 6 & 1-\lambda \end{vmatrix} = 0$$ (9B.10)

$$\lambda^3 - 6\lambda^2 - 49\lambda - 66 = 0$$ (9B.11)

$$(\lambda + 2)(\lambda + 3)(\lambda - 11) = 0$$ (9B.12)

$$\lambda_1 = -2, \quad \lambda_2 = -3, \quad \lambda_3 = 11 \quad \text{eigenvalues}$$ (9B.13)

To find the eigenvectors, substitute each lambda into (A - lambda*I)q = 0 and
solve for q. Take lambda_1 = -2:

$$\begin{bmatrix} 4 & -3 & -2 \\ -10 & 5 & 4 \\ -3 & 6 & 3 \end{bmatrix}\begin{bmatrix} q_1 \\ q_2 \\ q_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$ (9B.14)

Solve to get:

$$\mathbf{q}_1 = \begin{bmatrix} 1 \\ -2 \\ 5 \end{bmatrix} \text{ for } \lambda_1 = -2$$ (9B.15)

and

$$\mathbf{q}_3 = \begin{bmatrix} -2 \\ 4 \\ 3 \end{bmatrix} \text{ for } \lambda_3 = 11$$ (9B.16)
Similarity Transform

The eigenvalues and eigenvectors that arise from Eq. (9B.5) are put to good
use by transforming q-dot = Aq into z-dot = Lambda*z. First, form the square
n x n matrix whose columns are the eigenvectors (the similarity transform):

$$\mathbf{U} = \begin{bmatrix} \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_n \end{bmatrix} = \begin{bmatrix} q_{11} & q_{12} & \cdots & q_{1n} \\ q_{21} & q_{22} & \cdots & q_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ q_{n1} & q_{n2} & \cdots & q_{nn} \end{bmatrix}$$ (9B.17)

Since each column satisfies:

$$\mathbf{A}\mathbf{q}_i = \lambda_i\mathbf{q}_i$$ (9B.18)

then by some simple matrix manipulation, we get:

$$\mathbf{A}\mathbf{U} = \mathbf{U}\boldsymbol{\Lambda}$$ (9B.19)

where Lambda is the diagonal matrix of eigenvalues:

$$\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$ (9B.20)
Example

From the previous example, we can confirm the relationship AU = U*Lambda:

$$\mathbf{A}\mathbf{U} = \begin{bmatrix} 2 & -3 & -2 \\ -10 & 3 & 4 \\ -3 & 6 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & -2 \\ -2 & 2 & 4 \\ 5 & -3 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & -2 \\ -2 & 2 & 4 \\ 5 & -3 & 3 \end{bmatrix}\begin{bmatrix} -2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 11 \end{bmatrix} = \mathbf{U}\boldsymbol{\Lambda}$$ (9B.21)

In a similar way, we can define a matrix V whose rows are "row eigenvectors"
satisfying:

$$\mathbf{V}\mathbf{A} = \boldsymbol{\Lambda}\mathbf{V}$$ (9B.22)

$$\mathbf{V} = \begin{bmatrix} \mathbf{v}_1^T \\ \mathbf{v}_2^T \\ \vdots \\ \mathbf{v}_n^T \end{bmatrix}$$ (9B.23)
9B.6
Since eigenvectors can be arbitrarily scaled by any non-zero constant, it can be
shown that we can choose V such that:
VU I
(9B.24)
V U 1
(9B.25)
which implies:
Relationship
between the two
similarity transforms
AU U
VAU VU
(9B.26)
U 1AU
(9B.27)
q Aq
(9B.28)
Uz AUz
(9B.29)
Now pre-multiply by U 1 :
z U 1 AUz
(9B.30)
9B.7
and using Eq. (9B.27), the end result of the change of variable is:
z z
(9B.31)
z1 1 0 0 0 z1
z 0
z
0
0
2
2
2
0 0 0
z
0
0
0
n zn
n
(9B.32)
z1 1 z1
z 2 2 z 2
z n n z n
(9B.33)
dz1
1 z1
dt
(9B.34)
z1 z1 0e 1t
(9B.35)
Diagonal form of
state equations
9B.8
The solution to Eq. (9B.33) is therefore just:

$$z_1 = z_1(0)e^{\lambda_1 t}, \quad z_2 = z_2(0)e^{\lambda_2 t}, \quad \ldots, \quad z_n = z_n(0)e^{\lambda_n t}$$ (9B.36)

or, in matrix notation:

$$\mathbf{z} = e^{\boldsymbol{\Lambda}t}\mathbf{z}(0)$$ (9B.37)

The matrix e^(Lambda*t) is defined by:

$$e^{\boldsymbol{\Lambda}t} = \mathbf{I} + \frac{\boldsymbol{\Lambda}t}{1!} + \frac{(\boldsymbol{\Lambda}t)^2}{2!} + \frac{(\boldsymbol{\Lambda}t)^3}{3!} + \cdots = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}$$ (9B.38)

Transforming back to the original states:

$$\mathbf{q} = \mathbf{U}\mathbf{z} = \mathbf{U}e^{\boldsymbol{\Lambda}t}\mathbf{z}(0) = \mathbf{U}e^{\boldsymbol{\Lambda}t}\mathbf{U}^{-1}\mathbf{q}(0)$$ (9B.39)

and since we know the ZIR is q_ZIR(t) = phi(t)q(0), the transition matrix,
written in terms of eigenvalues and eigenvectors, is:

$$\boldsymbol{\phi}(t) = \mathbf{U}e^{\boldsymbol{\Lambda}t}\mathbf{U}^{-1}$$
(9B.40)
9B.9
This is a quick way to find the transition matrix phi(t) for high-order systems.
The ZIR of the states is then just:

$$\mathbf{q}(t) = \boldsymbol{\phi}(t)\mathbf{q}(0) = \mathbf{U}e^{\boldsymbol{\Lambda}t}\mathbf{U}^{-1}\mathbf{q}(0)$$ (9B.41)
Example

Given:

$$\dot{\mathbf{q}}(t) = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}\mathbf{q}(t) + \begin{bmatrix} 0 \\ 2 \end{bmatrix}x(t)$$
$$y(t) = \begin{bmatrix} 3 & 1 \end{bmatrix}\mathbf{q}(t), \qquad \mathbf{q}(0) = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$$ (9B.42)

the characteristic equation is:

$$|\mathbf{A} - \lambda\mathbf{I}| = \begin{vmatrix} -\lambda & 1 \\ -2 & -3-\lambda \end{vmatrix} = \lambda^2 + 3\lambda + 2 = (\lambda + 1)(\lambda + 2) = 0$$ (9B.43)

so the eigenvalues are:

$$\lambda_1 = -1, \quad \lambda_2 = -2$$ (9B.44)

The corresponding eigenvectors are found from (A - lambda*I)q = 0:

$$\mathbf{q}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad \mathbf{q}_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix} \;\Rightarrow\; \mathbf{U} = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}, \quad \mathbf{V} = \mathbf{U}^{-1} = \begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix}$$ (9B.45)

As a check, we can see if UV = I. Forming Lambda is easy:

$$\boldsymbol{\Lambda} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}$$ (9B.46)

The transition matrix is then:

$$\boldsymbol{\phi}(t) = \mathbf{U}e^{\boldsymbol{\Lambda}t}\mathbf{U}^{-1} = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} e^{-t} & 0 \\ 0 & e^{-2t} \end{bmatrix}\begin{bmatrix} 2 & 1 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}$$ (9B.47)

The ZIR of the states is:

$$\mathbf{q}_{ZIR}(t) = \boldsymbol{\phi}(t)\mathbf{q}(0) = \boldsymbol{\phi}(t)\begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 7e^{-t} - 5e^{-2t} \\ -7e^{-t} + 10e^{-2t} \end{bmatrix}$$ (9B.48)

and the ZIR of the output is:

$$y_{ZIR}(t) = \begin{bmatrix} 3 & 1 \end{bmatrix}\mathbf{q}_{ZIR}(t) = 14e^{-t} - 5e^{-2t}$$ (9B.49)

For higher-order systems and computer analysis, this method results in
considerable time and computational savings.
9B.11
Poles and Repeated Eigenvalues

Poles

We have seen before that the transfer function of a system using state-variables
is given by:

$$H(s) = \mathbf{c}^T\boldsymbol{\Phi}(s)\mathbf{b} = \mathbf{c}^T(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} = \frac{\mathbf{c}^T\,\mathrm{adj}(s\mathbf{I} - \mathbf{A})\,\mathbf{b}}{|s\mathbf{I} - \mathbf{A}|}$$ (9B.50)

where we have used the formula for the inverse, B^-1 = adj(B)/|B|. We can see
that the poles of the system are the roots of the characteristic equation
|sI - A| = 0. Thus, the poles of a system are given by the eigenvalues of the
matrix A:

$$\text{poles} = \text{eigenvalues of } \mathbf{A}$$ (9B.51)

The poles of a system and the eigenvalues of A are the same!
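This correspondence is easy to check numerically. The sketch below (illustrative Python, using the A matrix of the earlier worked example) compares the eigenvalues of A with the roots of det(sI - A):

```python
import numpy as np

# Poles of H(s) = c^T (sI - A)^{-1} b are the roots of det(sI - A),
# which are exactly the eigenvalues of A.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
eigs = np.linalg.eigvals(A)

char_poly = np.poly(A)       # coefficients of det(sI - A)
poles = np.roots(char_poly)  # its roots

assert np.allclose(sorted(eigs), sorted(poles))
print(sorted(eigs))
```

Here both computations return -2 and -1, the poles of the example system.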
9B.12
Discrete-time State-Variables
The concepts of state, state vectors and state-variables can be extended to
discrete-time systems. A discrete-time SISO system is described by the
following equations:

$$\mathbf{q}[n+1] = \mathbf{A}\mathbf{q}[n] + \mathbf{b}x[n]$$
$$y[n] = \mathbf{c}^T\mathbf{q}[n] + dx[n]$$ (9B.52)
Example

Given the following second-order linear difference equation:

$$y[n] = y[n-1] + y[n-2] + x[n]$$ (9B.53)

we select as states the past outputs:

$$q_1[n] = y[n-1]$$ (9B.54)
$$q_2[n] = y[n-2]$$ (9B.55)

Therefore:

$$q_1[n+1] = y[n] = y[n-1] + y[n-2] + x[n]$$ (9B.56)

so that:

$$q_1[n+1] = q_1[n] + q_2[n] + x[n]$$ (9B.57)

Also:

$$q_2[n+1] = y[n-1] = q_1[n]$$ (9B.58)

The equations are now in state variable form, and we can write:

$$\mathbf{q}[n+1] = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\mathbf{q}[n] + \begin{bmatrix} 1 \\ 0 \end{bmatrix}x[n]$$
$$y[n] = \begin{bmatrix} 1 & 1 \end{bmatrix}\mathbf{q}[n] + 1\cdot x[n]$$ (9B.62)
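The difference equation and its state-variable form are two descriptions of the same system, so driving both with the same input must give identical outputs. A short illustrative Python check, using a unit-pulse input and zero initial conditions:

```python
import numpy as np

# State-variable form of y[n] = y[n-1] + y[n-2] + x[n], Eq. (9B.62)
A = np.array([[1, 1],
              [1, 0]])
b = np.array([1, 0])
c = np.array([1, 1])
d = 1

x = np.zeros(10); x[0] = 1            # unit pulse
q = np.zeros(2)
y_ss = []
for n in range(10):
    y_ss.append(c @ q + d * x[n])     # output equation
    q = A @ q + b * x[n]              # state update

# Direct recursion of the difference equation for comparison
y_dir = []
for n in range(10):
    ym1 = y_dir[n-1] if n >= 1 else 0
    ym2 = y_dir[n-2] if n >= 2 else 0
    y_dir.append(ym1 + ym2 + x[n])

assert np.allclose(y_ss, y_dir)
print(y_ss[:5])
```

Both recursions generate the Fibonacci sequence 1, 1, 2, 3, 5, ... as the unit-pulse response of this system.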
More generally, consider realizing a transfer function:

$$H(z) = \frac{Y(z)}{X(z)} = \frac{b_0 + b_1 z^{-1} + \cdots + b_N z^{-N}}{1 + a_1 z^{-1} + \cdots + a_N z^{-N}}$$ (9B.64)

Define the intermediate variable P(z) by:

$$P(z) = \frac{X(z)}{1 + a_1 z^{-1} + \cdots + a_N z^{-N}} \quad\text{so that}\quad Y(z) = \left(b_0 + b_1 z^{-1} + \cdots + b_N z^{-N}\right)P(z)$$ (9B.65)

Now select the state variables as:

$$Q_1(z) = z^{-1}P(z), \quad Q_2(z) = z^{-2}P(z), \quad \ldots, \quad Q_N(z) = z^{-N}P(z)$$ (9B.66)

The state equations are then built up as follows. From the first equation in
Eq. (9B.66):

$$zQ_1(z) = P(z) = X(z) - a_1 z^{-1}P(z) - a_2 z^{-2}P(z) - \cdots - a_N z^{-N}P(z)$$ (9B.67)
$$= X(z) - a_1 Q_1(z) - a_2 Q_2(z) - \cdots - a_N Q_N(z)$$ (9B.68)

Similarly:

$$zQ_2(z) = z\cdot z^{-2}P(z) = z^{-1}P(z) = Q_1(z)$$ (9B.70)

and in general, in the time domain:

$$q_N[n+1] = q_{N-1}[n]$$ (9B.71)

We now have all the state equations. Returning to Eq. (9B.65), we have:

$$Y(z) = b_0 P(z) + b_1 z^{-1}P(z) + \cdots + b_N z^{-N}P(z)$$ (9B.72)

Taking the inverse z-transform gives:

$$y[n] = b_0 q_1[n+1] + b_1 q_1[n] + b_2 q_2[n] + \cdots + b_N q_N[n]$$ (9B.73)

Eliminating the q1[n+1] term using Eq. (9B.68) and grouping like terms gives:

$$y[n] = (b_1 - a_1 b_0)q_1[n] + (b_2 - a_2 b_0)q_2[n] + \cdots + (b_N - a_N b_0)q_N[n] + b_0 x[n]$$ (9B.74)

In matrix form, the realization is:

$$\mathbf{q}[n+1] = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_{N-1} & -a_N \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}\mathbf{q}[n] + \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}x[n]$$
$$y[n] = \begin{bmatrix} c_1 & c_2 & \cdots & c_N \end{bmatrix}\mathbf{q}[n] + b_0 x[n], \qquad c_i = b_i - a_i b_0, \quad i = 1, 2, \ldots, N$$ (9B.75)
9B.16
Discrete-time Response

Once we have the equations in state-variable form we can then obtain the
discrete-time response. For a SISO system we have:

$$\mathbf{q}[n+1] = \mathbf{A}\mathbf{q}[n] + \mathbf{b}x[n]$$
$$y[n] = \mathbf{c}^T\mathbf{q}[n] + dx[n]$$ (9B.76)

Iterating from the initial state:

$$\mathbf{q}[1] = \mathbf{A}\mathbf{q}[0] + \mathbf{b}x[0]$$
$$\mathbf{q}[2] = \mathbf{A}\mathbf{q}[1] + \mathbf{b}x[1] = \mathbf{A}^2\mathbf{q}[0] + \mathbf{A}\mathbf{b}x[0] + \mathbf{b}x[1]$$
$$\mathbf{q}[3] = \mathbf{A}\mathbf{q}[2] + \mathbf{b}x[2], \quad \text{etc.}$$ (9B.77)

so that, in general:

$$\mathbf{q}[n] = \mathbf{A}^n\mathbf{q}[0] + \mathbf{A}^{n-1}\mathbf{b}x[0] + \cdots + \mathbf{A}\mathbf{b}x[n-2] + \mathbf{b}x[n-1], \quad n = 1, 2, \ldots$$ (9B.78)

We now define the fundamental matrix:

$$\boldsymbol{\phi}[n] = \mathbf{A}^n$$ (9B.79)

From Eq. (9B.78) and the above definition, the response of the discrete-time
system to any input is given by the solution to the discrete-time state
equations in terms of a convolution summation:

$$\mathbf{q}[n] = \boldsymbol{\phi}[n]\mathbf{q}[0] + \sum_{i=0}^{n-1}\boldsymbol{\phi}[n-i-1]\mathbf{b}x[i]$$
$$y[n] = \mathbf{c}^T\mathbf{q}[n] + dx[n]$$ (9B.80)

This is the expected form of the output response. For the states, it can be seen
that:

$$\mathbf{q}_{ZIR}[n] = \boldsymbol{\phi}[n]\mathbf{q}[0], \qquad \mathbf{q}_{ZSR}[n] = \sum_{i=0}^{n-1}\boldsymbol{\phi}[n-i-1]\mathbf{b}x[i]$$ (9B.81)
We can also take the z-transform of the state equations directly:

$$z\mathbf{Q}(z) - z\mathbf{q}(0) = \mathbf{A}\mathbf{Q}(z) + \mathbf{b}X(z)$$
$$(z\mathbf{I} - \mathbf{A})\mathbf{Q}(z) = z\mathbf{q}(0) + \mathbf{b}X(z)$$ (9B.82)

Therefore:

$$\mathbf{Q}(z) = (z\mathbf{I} - \mathbf{A})^{-1}z\mathbf{q}(0) + (z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b}X(z)$$ (9B.83)

Similarly:

$$Y(z) = z\mathbf{c}^T(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{q}(0) + \left[\mathbf{c}^T(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d\right]X(z)$$ (9B.84)

9B.18

For the transfer function, we put all initial conditions q(0) = 0. Therefore,
the discrete-time transfer function in terms of state-variable quantities is:

$$H(z) = \mathbf{c}^T(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d$$ (9B.85)

To get the unit-pulse response, we revert to Eq. (9B.80), set the initial
conditions to zero and apply a unit-pulse input:

$$h[0] = d, \qquad h[n] = \mathbf{c}^T\boldsymbol{\phi}[n-1]\mathbf{b}, \quad n = 1, 2, \ldots$$ (9B.86)

or, equivalently:

$$h[0] = d, \qquad h[n] = \mathbf{c}^T\mathbf{A}^{n-1}\mathbf{b}, \quad n = 1, 2, \ldots$$ (9B.87)
9B.19
Example

[Block diagram: two delay elements D with states q1[n] and q2[n] = q1[n+1];
feedback gains 5/6 and -1/6; output y[n] formed with gains 5 and -1 from the
states; input x[n].]

We would like to find the output y[n] if the input is x[n] = u[n] and the
initial conditions are q1[0] = 2 and q2[0] = 3.

Recognizing that q2[n] = q1[n+1], the state equations are:

$$\mathbf{q}[n+1] = \begin{bmatrix} 0 & 1 \\ -\frac{1}{6} & \frac{5}{6} \end{bmatrix}\mathbf{q}[n] + \begin{bmatrix} 0 \\ 1 \end{bmatrix}x[n]$$ (9B.88)

and:

$$y[n] = \begin{bmatrix} -1 & 5 \end{bmatrix}\mathbf{q}[n]$$ (9B.89)
9B.20
To find the fundamental matrix phi[n] = A^n, we diagonalise A. If a similarity
transform U can be found such that Lambda = U^-1 A U, then rearranging:

$$\mathbf{A} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{-1}$$
$$\mathbf{A}^2 = \mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{-1}\mathbf{U}\boldsymbol{\Lambda}\mathbf{U}^{-1} = \mathbf{U}\boldsymbol{\Lambda}^2\mathbf{U}^{-1}$$
$$\mathbf{A}^n = \mathbf{U}\boldsymbol{\Lambda}^n\mathbf{U}^{-1}$$ (9B.90)

The eigenvalues are found from:

$$|\lambda\mathbf{I} - \mathbf{A}| = \lambda\left(\lambda - \tfrac{5}{6}\right) + \tfrac{1}{6} = \left(\lambda - \tfrac{1}{3}\right)\left(\lambda - \tfrac{1}{2}\right) = 0$$ (9B.91)

$$\lambda_1 = \tfrac{1}{3}, \quad \lambda_2 = \tfrac{1}{2}$$ (9B.92)

For lambda_1 = 1/3:

$$\begin{bmatrix} -\frac{1}{3} & 1 \\ -\frac{1}{6} & \frac{1}{2} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \quad\text{so choose}\quad \mathbf{u}_1 = \begin{bmatrix} 3 \\ 1 \end{bmatrix}$$ (9B.93)

and for lambda_2 = 1/2:

$$\begin{bmatrix} -\frac{1}{2} & 1 \\ -\frac{1}{6} & \frac{1}{3} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \quad\text{so choose}\quad \mathbf{u}_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$ (9B.94)

$$\mathbf{U} = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}$$ (9B.95)

Therefore:

$$\mathbf{U}^{-1} = \frac{\mathrm{adj}\,\mathbf{U}}{|\mathbf{U}|} = \begin{bmatrix} 1 & -2 \\ -1 & 3 \end{bmatrix}$$ (9B.96)
9B.21
Therefore phi[n] = A^n = U Lambda^n U^-1, and:

$$\boldsymbol{\phi}[n] = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} \left(\frac{1}{3}\right)^n & 0 \\ 0 & \left(\frac{1}{2}\right)^n \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -1 & 3 \end{bmatrix} = \begin{bmatrix} 3\left(\frac{1}{3}\right)^n - 2\left(\frac{1}{2}\right)^n & -6\left(\frac{1}{3}\right)^n + 6\left(\frac{1}{2}\right)^n \\ \left(\frac{1}{3}\right)^n - \left(\frac{1}{2}\right)^n & -2\left(\frac{1}{3}\right)^n + 3\left(\frac{1}{2}\right)^n \end{bmatrix}$$ (9B.97)

The ZIR of the output is then:

$$y_{ZIR}[n] = \mathbf{c}^T\boldsymbol{\phi}[n]\mathbf{q}[0] = \begin{bmatrix} -1 & 5 \end{bmatrix}\boldsymbol{\phi}[n]\begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} -1 & 5 \end{bmatrix}\begin{bmatrix} -12\left(\frac{1}{3}\right)^n + 14\left(\frac{1}{2}\right)^n \\ -4\left(\frac{1}{3}\right)^n + 7\left(\frac{1}{2}\right)^n \end{bmatrix}$$ (9B.98)

$$y_{ZIR}[n] = -8\left(\tfrac{1}{3}\right)^n + 21\left(\tfrac{1}{2}\right)^n$$ (9B.99)
For the ZSR, we need:

$$y_{ZSR}[n] = \sum_{i=0}^{n-1}\mathbf{c}^T\boldsymbol{\phi}[n-i-1]\mathbf{b}\,x[i]$$

We have:

$$\mathbf{c}^T\boldsymbol{\phi}[n-i-1]\mathbf{b} = \begin{bmatrix} -1 & 5 \end{bmatrix}\boldsymbol{\phi}[n-i-1]\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 & 5 \end{bmatrix}\begin{bmatrix} -6\left(\frac{1}{3}\right)^{n-i-1} + 6\left(\frac{1}{2}\right)^{n-i-1} \\ -2\left(\frac{1}{3}\right)^{n-i-1} + 3\left(\frac{1}{2}\right)^{n-i-1} \end{bmatrix} = -4\left(\tfrac{1}{3}\right)^{n-i-1} + 9\left(\tfrac{1}{2}\right)^{n-i-1}$$ (9B.100)
9B.22
so that:

$$y_{ZSR}[n] = \sum_{i=0}^{n-1}\left[-4\left(\tfrac{1}{3}\right)^{n-i-1} + 9\left(\tfrac{1}{2}\right)^{n-i-1}\right]u[i] = -4\sum_{m=0}^{n-1}\left(\tfrac{1}{3}\right)^m + 9\sum_{m=0}^{n-1}\left(\tfrac{1}{2}\right)^m$$
$$= -4\cdot\frac{1 - \left(\frac{1}{3}\right)^n}{1 - \frac{1}{3}} + 9\cdot\frac{1 - \left(\frac{1}{2}\right)^n}{1 - \frac{1}{2}} = -6\left[1 - \left(\tfrac{1}{3}\right)^n\right] + 18\left[1 - \left(\tfrac{1}{2}\right)^n\right] = 12 + 6\left(\tfrac{1}{3}\right)^n - 18\left(\tfrac{1}{2}\right)^n$$ (9B.101)

The total response is then:

$$y[n] = y_{ZIR}[n] + y_{ZSR}[n] = \left[-8\left(\tfrac{1}{3}\right)^n + 21\left(\tfrac{1}{2}\right)^n\right] + \left[12 + 6\left(\tfrac{1}{3}\right)^n - 18\left(\tfrac{1}{2}\right)^n\right]$$
$$= 12 - 2\left(\tfrac{1}{3}\right)^n + 3\left(\tfrac{1}{2}\right)^n, \quad n \geq 0$$ (9B.102)
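The closed-form total response can be checked by brute force; this illustrative Python sketch iterates the state equations (9B.88)-(9B.89) directly and compares every sample against the formula:

```python
import numpy as np

# Direct recursion of q[n+1] = A q[n] + b x[n], y[n] = c^T q[n]
# with x[n] = u[n], q1[0] = 2, q2[0] = 3, against the closed form
# y[n] = 12 - 2(1/3)^n + 3(1/2)^n.
A = np.array([[0.0, 1.0],
              [-1/6, 5/6]])
b = np.array([0.0, 1.0])
c = np.array([-1.0, 5.0])

q = np.array([2.0, 3.0])
for n in range(12):
    y_sim = c @ q
    y_formula = 12 - 2 * (1/3)**n + 3 * (1/2)**n
    assert abs(y_sim - y_formula) < 1e-9
    q = A @ q + b * 1.0                # unit-step input
print("closed form verified")
```

For example, at n = 0 both give y[0] = -2 + 15 = 13 = 12 - 2 + 3.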
As a check, we can also find the response using the z-transform, Eq. (9B.84):

$$Y(z) = z\mathbf{c}^T(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{q}(0) + \left[\mathbf{c}^T(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d\right]X(z)$$

with X(z) = z/(z-1) for the unit-step input. Here:

$$(z\mathbf{I} - \mathbf{A})^{-1} = \frac{1}{z^2 - \frac{5}{6}z + \frac{1}{6}}\begin{bmatrix} z - \frac{5}{6} & 1 \\ -\frac{1}{6} & z \end{bmatrix}$$

Carrying out the matrix products and expanding in partial fractions gives:

$$Y(z) = \left[\frac{-8z}{z - \frac{1}{3}} + \frac{21z}{z - \frac{1}{2}}\right] + \left[\frac{12z}{z - 1} + \frac{6z}{z - \frac{1}{3}} - \frac{18z}{z - \frac{1}{2}}\right]$$ (9B.103)

Therefore:

$$y[n] = \underbrace{-8\left(\tfrac{1}{3}\right)^n + 21\left(\tfrac{1}{2}\right)^n}_{\text{zero-input response}} + \underbrace{12 + 6\left(\tfrac{1}{3}\right)^n - 18\left(\tfrac{1}{2}\right)^n}_{\text{zero-state response}}, \quad n \geq 0$$ (9B.104)

Obviously, the two solutions obtained using different techniques are in perfect
agreement.
9B.23
Summary
References
Kamen, E. & Heck, B.: Fundamentals of Signals and Systems using
MATLAB, Prentice-Hall, 1997.
Shinners, S.: Modern Control System Theory and Application, Addison-Wesley, 1978.
9B.24
Exercises
1.
Find Lambda, U and V for the system described by:

$$\dot{\mathbf{q}} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}\mathbf{q} + \begin{bmatrix} 2 \\ 7 \\ -20 \end{bmatrix}x$$
$$y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\mathbf{q}$$
Note: U and V should be found directly (i.e. Do not find V by taking the
inverse of U). You can then verify your solution by:
(i) checking that VU = I

(ii) checking that U*Lambda*V = A
2.
Consider the system:

$$\dot{q}_1 = -q_1 + x$$
$$\dot{q}_2 = q_1 - 2q_2 + x$$

with:

$$\mathbf{q}(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

(a) Find the system eigenvalues and find the zero input response of the system.

(b) If the system is given a unit-step input with the same initial conditions,
find q(t). (Use the resolvent matrix to obtain the time solution.) What do you
notice?
9B.25
3.
A system employing state-feedback is described by the following equations:

$$\dot{\mathbf{q}} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -5 & -7 & -3 \end{bmatrix}\mathbf{q} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}x(t)$$
$$y = \begin{bmatrix} 2 & 4 & 3 \end{bmatrix}\mathbf{q}$$
$$x(t) = -\begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix}\mathbf{q} + r(t)$$
(i)
(ii)
(iii)
(iv)
(v)
9B.26
4.
For the difference equation:

$$y[n] = 3x[n] - \tfrac{5}{4}x[n-1] + \tfrac{5}{8}x[n-2] + \tfrac{1}{4}y[n-1] + \tfrac{1}{4}y[n-2] - \tfrac{1}{16}y[n-3]$$

(a) Show that the transfer function is:

$$H(z) = \frac{3z^3 - \frac{5}{4}z^2 + \frac{5}{8}z}{z^3 - \frac{1}{4}z^2 - \frac{1}{4}z + \frac{1}{16}}$$

(b) Write the corresponding state equations.
5.
Find y[5] and y[10] for the answer to Q4b given:

$$\mathbf{q}(0) = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \quad\text{and}\quad x[n] = \begin{cases} 0 & n < 0 \\ 1 & n \geq 0 \end{cases}$$

(a) by recursion of the state equations;

(b) by using the fundamental matrix and finding y[5] and y[10] directly.
6.
A discrete-time system has transfer function:

$$H(z) = \frac{k(z - b)}{z(z - a)(z - c)}$$
F.1
Appendix A - The Fast Fourier Transform
The discrete-time Fourier transform (DTFT). The discrete Fourier transform
(DFT). The fast Fourier transform (FFT).
Overview
Digital signal processing is becoming prevalent throughout engineering. We
have digital audio equipment (CDs, MP3s), digital video (MPEG2 and MPEG4,
DVD, digital TV), and digital phones (fixed and mobile). An increasing
number (billions!) of embedded systems exist that rely on a digital computer
(normally a microcontroller). They take input signals from the real analog
world, convert them to digital signals, process them digitally, and produce
outputs that are again suitable for the real analog world. (Think of the
computers controlling any modern form of transport: car, plane, boat; or those
that control nearly all industrial processes.) Our motivation is to extend our
existing frequency-domain analytical techniques to the digital world.

Our reason for hope that this can be accomplished is the fact that a signal's
samples can convey the complete information about a signal (if Nyquist's
criterion is met). We should be able to turn those troublesome continuous-time
integrals into simple summations, a task easily carried out by a digital
computer.
F.2
The Discrete-Time Fourier Transform (DTFT)

To illustrate the derivation of the discrete-time Fourier transform, we will
consider the signal and its Fourier transform below:

[Figure F.1: a strictly time-limited signal x(t) of amplitude A, whose spectrum
X(f) of peak C extends beyond +/-2B. A strictly time-limited signal has an
infinite bandwidth.]

Since the signal is strictly time-limited (it only exists for a finite amount of
time), its spectrum must be infinite in extent. We therefore cannot choose a
sample rate high enough to satisfy Nyquist's criterion (and therefore prevent
aliasing). However, in practice we normally find that the spectral content of
signals drops off at high frequencies, so that the signal is essentially
bandlimited to B:

[Figure F.2: a time-limited signal x(t) whose spectrum X(f) is essentially
bandlimited to +/-B.]
We will now assume that Nyquist's criterion is met if we sample the
time-domain signal at a sample rate fs = 2B.
F.3
If we ideally sample using a uniform train of impulses (with spacing
Ts = 1/fs), the mathematical expression for the sampled time-domain waveform
is:

$$x_s(t) = x(t)\sum_{k=-\infty}^{\infty}\delta(t - kT_s) = \sum_{k=-\infty}^{\infty}x[k]\,\delta(t - kT_s)$$ (F.1)

The corresponding spectrum is:

$$X_s(f) = f_s\sum_{n=-\infty}^{\infty}X(f - nf_s)$$ (F.2)
The sampled waveform and its spectrum are shown graphically below:

[Figure F.3: the sampled signal xs(t), whose impulse weights are
x(kTs) = x[k], and its periodic spectrum Xs(f) of height fsC, with B = fs/2.]

We are free to take as many samples as we like, so long as the M samples of
spacing Ts encompass the whole signal in the time-domain. For a one-off
waveform, we can also sample past the extent of the signal, a process known as
zero padding.
F.4
Substituting xs(t) into the definition of the Fourier transform, we get:

$$X_s(f) = \int_{-\infty}^{\infty}x_s(t)e^{-j2\pi ft}\,dt = \int_{-\infty}^{\infty}\sum_{k=-\infty}^{\infty}x[k]\,\delta(t - kT_s)e^{-j2\pi ft}\,dt$$ (F.3)

Using the sifting property of the impulse function, this simplifies into the
definition of the discrete-time Fourier transform (DTFT):

$$X_s(f) = \sum_{k=-\infty}^{\infty}x[k]e^{-j2\pi fkT_s}$$ (F.4)

One way to discretize the DTFT is to ideally sample it in the
frequency-domain. Since the DTFT is periodic with period fs, we choose N
samples per period, where N is an integer. This yields periodic spectrum
samples, and we only need to compute N of them (the rest will be the same)!
F.5
The spacing between samples (the frequency resolution of a DFT) is then:

$$f_0 = \frac{f_s}{N}$$ (F.5)

which corresponds to a time-domain sample spacing of:

$$T_0 = NT_s$$ (F.6)

Ideally sampling the spectrum gives:

$$X_{ss}(f) = X_s(f)\sum_{n=-\infty}^{\infty}\delta(f - nf_0) = \sum_{n=-\infty}^{\infty}X_s(nf_0)\,\delta(f - nf_0)$$ (F.7)

[Figure F.4: the ideally sampled spectrum Xss(f), with N samples of spacing f0
over one period from -fs/2 to fs/2, and peak weight fsC.]
F.6
What signal does this spectrum correspond to in the time-domain? The
corresponding operation of Eq. (F.7) is shown below in the time-domain:

$$x_{ss}(t) = \frac{1}{f_0}\,x_s(t) * \sum_{k=-\infty}^{\infty}\delta(t - kT_0) = T_0\sum_{k=-\infty}^{\infty}x_s(t - kT_0)$$ (F.8)

[Figure F.5: the periodic extension xss(t), of amplitude A*T0, with repeats
spaced T0 apart.]

We see that sampling in the frequency-domain causes periodicity in the
time-domain. What we have created is a periodic extension of the original
sampled signal, but scaled in amplitude. From Figure F.5, we can see that no
time-domain aliasing occurs (no overlap of repeats of the original sampled
signal) if:

$$T_0 \geq MT_s$$ (F.9)

$$NT_s \geq MT_s \quad\text{or}\quad N \geq M$$ (F.10)
(F.10)
F.7
If the original time-domain waveform is periodic, and our samples represent
one period in the time-domain, then we must choose the frequency sample
spacing to be f0 = 1/T0, where T0 is the period of the original waveform. The
process of ideally sampling the spectrum at this spacing creates the original
periodic waveform back in the time-domain. In this case, we must have:

$$N = M$$ (F.11)

If we do this, then the reconstructed sampled waveform has its repeats next to
each other:

[Figure F.6: the periodic extension xss(t) with adjacent repeats spaced T0
apart.]

Returning now to the spectrum in Figure F.4, we need to find the sampled
spectrum impulse weights covering just one period, say from 0 <= f < fs. These
are given by the weights of the impulses in Eq. (F.7) where 0 <= n <= N - 1.
F.8
The DTFT evaluated at those frequencies is then obtained by substituting
f = nf0 into Eq. (F.4):

$$X_s(nf_0) = \sum_{k=0}^{N-1}x[k]e^{-j2\pi nf_0kT_s}$$ (F.12)

Notice how the infinite summation has turned into a finite summation over
M <= N samples of the waveform, since we know the waveform is zero for the
remaining samples. Since f0*Ts = 1/N, we define the discrete Fourier transform
(DFT):

$$X[n] = \sum_{k=0}^{N-1}x[k]e^{-j\frac{2\pi nk}{N}}, \qquad 0 \leq n \leq N-1$$ (F.13)

Fast Fourier transforms (FFTs) are efficient algorithms used to
evaluate the DFT. They are optimised to take advantage of the periodicity
inherent in the exponential term in the DFT definition. The roots of the FFT
algorithm go back to the great German mathematician Gauss in the early
1800s, but it was formally introduced by Cooley and Tukey in their paper "An
Algorithm for the Machine Calculation of Complex Fourier Series", Math.
Comput. 19, no. 2, April 1965:297-301. Most FFTs are designed so that the
number of sample points, N, is a power of 2.

All we need to consider here is the creation and interpretation of FFT results,
and not the algorithms behind them.
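The DFT sum of Eq. (F.13) can be evaluated literally and compared with an FFT routine; this illustrative Python/numpy sketch confirms they compute the same quantity:

```python
import numpy as np

# Direct evaluation of X[n] = sum_k x[k] e^{-j 2 pi n k / N},
# compared with the FFT, which computes the same thing efficiently.
rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)

k = np.arange(N)
X_direct = np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N))
                     for n in range(N)])
X_fft = np.fft.fft(x)

assert np.allclose(X_direct, X_fft)
print("DFT sum matches FFT")
```

The direct sum costs on the order of N^2 operations, while the FFT costs on the order of N log N, which is why N is usually chosen as a power of 2.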
F.9
Creating FFTs
Knowing the background behind the DFT, we can now choose various
parameters in the creation of the FFT to suit our purposes. For example, there
may be a requirement for the FFT results to have a certain frequency
resolution, or we may be restricted to a certain number of samples in the timedomain and we wish to know the frequency range and spacing of the FFT
output.
The important relationships that combine all these parameters are:

$$T_0 = NT_s \quad\text{or}\quad f_s = Nf_0 \quad\text{or}\quad N = T_0f_s$$ (F.14)
Example

An analog signal with a known bandwidth of 2000 Hz is to be sampled at the
minimum possible frequency, and the frequency resolution is to be 5 Hz. We
need to find the sample rate, the time-domain window size, and the number of
samples.

The minimum sampling frequency we can choose is the Nyquist rate of
fs = 2 x 2000 = 4000 Hz. The time-domain window is then T0 = 1/f0 = 1/5 = 0.2 s,
and the number of samples is N = T0*fs = 0.2 x 4000 = 800.
F.10
Example

An analog signal is viewed on a DSO with a window size of 1 ms. The DSO
takes 1024 samples. What is the frequency resolution of the spectrum, and
what is the folding frequency (half the sample rate)?

The frequency resolution is f0 = 1/T0 = 1/0.001 = 1 kHz.

The sample rate is fs = N*f0 = 1024 x 1000 = 1.024 MHz. The folding frequency
is therefore fs/2 = 512 kHz, and this is the maximum frequency displayed on
the DSO spectrum.
Example
A simulation of a system and its signals is being performed using MATLAB.
The following code shows how to set up the appropriate time and frequency
vectors if the sample rate and number of samples are specified:
% Sample rate
fs=1e6;
Ts=1/fs;
% Number of samples
N=1000;
% Time window
T0=N*Ts;
f0=1/T0;
% Time vector of N time samples spaced Ts apart
t=0:Ts:T0-Ts;
% Frequency vector of N frequencies spaced f0 apart
f=-fs/2:f0:fs/2-f0;
The frequency resolution of the FFT in this case will be f0 = 1 kHz and the
output will range from -500 kHz to 499 kHz. Note carefully how the time and
frequency vectors were specified so that the last value does not coincide with
the first value of the next periodic extension or spectrum repeat.
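The same set-up can be sketched in Python/numpy (an illustrative translation of the MATLAB code above, with a hypothetical 50 kHz test tone added so that the centred frequency axis can be exercised with fftshift):

```python
import numpy as np

# Build matching time and frequency vectors for an N-point FFT.
fs = 1e6                   # sample rate
Ts = 1 / fs
N = 1000                   # number of samples
T0 = N * Ts                # time window
f0 = 1 / T0                # frequency resolution

t = np.arange(N) * Ts                  # 0 : Ts : T0-Ts
f = np.arange(-fs/2, fs/2, f0)         # -fs/2 : f0 : fs/2-f0

x = np.cos(2 * np.pi * 50e3 * t)       # 50 kHz test tone
X = np.fft.fftshift(np.fft.fft(x))     # spectrum on the centred axis f

assert len(t) == N and len(f) == N
peak = abs(f[np.argmax(np.abs(X))])    # frequency of the largest bin
print(peak)
```

Because the tone frequency is an exact multiple of f0, the spectrum peaks land exactly on the +/-50 kHz bins with no leakage.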
F.11
Interpreting FFTs

The output of the FFT can be interpreted in four ways, depending on how we
interpret the time-domain values that we feed into it.

Case 1: Ideally sampled one-off waveform

If the FFT input, x[n], is the weights of the impulses of an ideally sampled
time-limited one-off waveform, then we know the FT is a periodic repeat of
the original unsampled waveform. The DFT gives the value of the FT at
frequencies nf0 for the first spectral repeat. This interpretation comes directly
from Eq. (F.12).

With our example waveform, we would have:

[Figure F.7: the discrete signal x[n] (impulse weights x(nTs) = x[n]) is input
to the FFT; the FFT outputs X[n] are samples of the real spectrum Xs(f)
(height fsC) over -fs/2 <= f < fs/2.]
F.12
Case 2: Ideally sampled periodic waveform

Consider the case where the FFT input, x[n], is the weights of the impulses of
an ideally sampled periodic waveform over one period with period T0.
According to Eq. (F.8), the DFT gives the FT impulse weights for the first
spectral repeat, if the time-domain waveform were scaled by T0. To get the FT,
we therefore have to scale the DFT by f0 and recognise that the spectrum is
periodic.

With our example waveform, we would have:

[Figure F.8: the discrete signal x[n] over one period T0 is input to the FFT;
scaling the FFT output by f0 gives the impulse weights f0*X[n] of the periodic
real spectrum Xs(f) over -fs/2 <= f < fs/2.]
F.13
Case 3: Continuous time-limited waveform

If we ideally sample the one-off waveform at intervals Ts, we get a signal
corresponding to Case 1. The sampling process creates periodic repeats of the
original spectrum, scaled by fs. To undo the scaling and spectral repeats
caused by sampling, we should multiply the Case 1 spectrum by Ts and filter
out all periodic repeats except for the first. All we have to do is scale the
DFT output by Ts to obtain the true FT at frequencies nf0.

[Figure F.9: the continuous one-off waveform x(t) is ideally sampled to give
xs(t), which is input to the FFT; scaling the FFT output by Ts gives samples
Ts*X[n] of the true spectrum X(f) at frequencies nf0 over -fs/2 <= f < fs/2;
ideal reconstruction recovers the real spectrum.]
F.14
Case 4: Continuous periodic waveform

To create the FFT input for this case, we must ideally sample the continuous
signal at intervals Ts to give x[n]. The sampling process creates periodic
repeats of the original spectrum, scaled by fs. We are now essentially
equivalent to Case 2. To undo the scaling and spectral repeats caused by
sampling, we should multiply the Case 2 spectrum by Ts and filter out all
periodic repeats except for the first. According to Eq. (F.8), the DFT gives the
FT impulse weights for the first spectral repeat. All we have to do is scale the
DFT output by f0*Ts = 1/N to obtain the true FT impulse weights.

[Figure F.10: the continuous periodic waveform x(t) (period T0) is ideally
sampled to give xs(t), which is input to the FFT; scaling the FFT output by
1/N gives the impulse weights X[n]/N of the true spectrum X(f) (weights f0C)
over -fs/2 <= f < fs/2; ideal reconstruction recovers the real spectrum.]
P.1
Appendix B - The Phase-Locked Loop
Phase-locked loop. Voltage controlled oscillator. Phase detector. Loop filter.
Overview
The phase-locked loop (PLL) is an important building block for many
electronic systems. PLLs are used in frequency synthesisers, demodulators,
clock multipliers and many other communications and electronic applications.
Distortion in Synchronous AM Demodulation

In suppressed carrier amplitude modulation schemes, the receiver requires a
local carrier for synchronous demodulation. Ideally, the local carrier must be in
frequency and phase synchronism with the incoming carrier. Any discrepancy
in the frequency or phase of the local carrier gives rise to distortion in the
demodulator output.

For DSB-SC modulation, a constant phase error will cause attenuation of the
output signal. Unfortunately, the phase error may vary randomly with time. A
frequency error will cause a beating effect (the output of the demodulator is
the message multiplied by a slowly varying sinusoid).
P.2
Carrier Regeneration

The human ear can tolerate a drift between the carriers of up to about 30 Hz.
Quartz crystals can be cut for the same frequency at the transmitter and
receiver, and are very stable. However, at high frequencies (> 1 MHz), even
quartz-crystal performance may not be adequate. In such a case, a carrier, or
pilot, is transmitted at a reduced level (usually about -20 dB) along with the
sidebands. The pilot enables the receiver to generate a local oscillator in
frequency and phase synchronism with the transmitter.

[Figure P.1: spectrum of the transmitted signal, showing the speech sidebands
from fc to fc+B (and their images at negative frequencies) together with the
pilot at +/-fc.]

One conventional technique to generate the receiver's local carrier is to
separate the pilot at the receiver by a very narrowband filter tuned to the pilot
frequency. The pilot is then amplified and used to synchronize the local
oscillator. A PLL is used to regenerate a local carrier.
P.3
The Phase-Locked Loop (PLL)

A block diagram of a PLL is shown below:

[Figure P.2: a phase-locked loop. The pilot tone vi(t) (input sinusoid) enters a
phase detector, whose phase-difference output passes through a loop filter to
give vin(t); this drives a VCO whose output vo(t), the local carrier (output
sinusoid), is fed back to the phase detector.]

It can be seen that the PLL is a feedback system. In a typical feedback system,
the signal fed back tends to follow the input signal. If the signal fed back is not
equal to the input signal, the difference (the error) will change the signal fed
back until it is close to the input signal. A PLL operates on a similar principle,
except that the quantity fed back and compared is not the amplitude, but the
phase of a sinusoid. The voltage controlled oscillator (VCO) adjusts its
frequency until its phase angle comes close to the angle of the incoming signal.
At this point, the frequency and phase of the two signals are in synchronism
(except for a difference of 90 degrees, as will be seen later).

The three components of the PLL will now be examined in detail.
P.4
Voltage Controlled Oscillator (VCO)

The voltage controlled oscillator (VCO) is a device that produces a
constant-amplitude sinusoid at a frequency determined by its input voltage. For
a fixed DC input voltage, the VCO will produce a sinusoid of a fixed frequency.
The purpose of the control system built around it is to change the input to the
VCO so that its output tracks the incoming signal's frequency and phase.

[Figure P.3: the VCO block, with input voltage vin(t) and output sinusoid
vo(t).]

[Figure P.4: the VCO characteristic, output frequency fi versus applied input
voltage vin: a straight line of slope kv passing through fo at vin = 0.]

The horizontal axis is the applied input voltage, and the vertical axis is the
frequency of the output sinusoid. The amplitude of the output sinusoid is fixed.
We now seek a model of the VCO that treats the output as the phase of the
sinusoid rather than the sinusoid itself. The frequency fo is the nominal
frequency of the VCO (the frequency of the output for no applied input).
P.5
The instantaneous VCO frequency is given by:

$$f_i(t) = f_o + k_v v_{in}(t)$$ (P.1)

To relate this to the phase of the sinusoid, we need to generalise our definition
of phase. In general, the frequency of a sinusoid cos(theta) = cos(2*pi*f1*t) is
proportional to the rate of change of the phase angle:

$$\frac{d\theta}{dt} = 2\pi f_1 \quad\text{or}\quad f_1 = \frac{1}{2\pi}\frac{d\theta}{dt}$$ (P.2)

Integrating, the phase is:

$$\theta = \int 2\pi f_1\,d\tau$$ (P.3)

which, for a constant frequency, is the familiar:

$$\theta = 2\pi f_1 t$$ (P.4)

For the VCO, substituting Eq. (P.1) for the frequency gives:

$$\theta = \int\left(2\pi f_o + 2\pi k_v v_{in}\right)d\tau = 2\pi f_o t + 2\pi k_v\int^t v_{in}\,d\tau$$ (P.5)
P.6
Therefore, the VCO output is:

$$v_o = A_o\cos\left(\omega_o t + \phi_o(t)\right)$$ (P.6)

where:

$$\phi_o(t) = 2\pi k_v\int^t v_{in}(\tau)\,d\tau$$ (P.7)

This equation expresses the relationship between the input to the VCO and the
phase of the resulting output sinusoid.
Example

Suppose a DC voltage of VDC V is applied to the VCO input. Then the output
phase of the VCO is given by:

$$\phi_o(t) = 2\pi k_v\int_0^t V_{DC}\,d\tau = 2\pi k_vV_{DC}t$$

The resulting sinusoidal output of the VCO can then be written as:

$$v_o = A_o\cos\left(\omega_o t + 2\pi k_vV_{DC}t\right) = A_o\cos\left[(\omega_o + 2\pi k_vV_{DC})t\right] = A_o\cos(\omega_1 t)$$

In other words, a constant DC voltage applied to the input of the VCO will
produce a sinusoid of fixed frequency, f1 = f0 + kv*VDC.

When used in a PLL, the VCO input should eventually be a constant voltage
(the PLL has locked onto the phase, but the VCO needs a constant input
voltage to output the tracked frequency).
P.7
Phase Detector

The phase detector is a device that produces the phase difference between two
input sinusoids:

[Figure P.5: the phase detector block, taking sinusoid 1, vi(t), and sinusoid 2,
vo(t), and producing the phase difference.]

A practical implementation of a phase detector is a four-quadrant multiplier:

[Figure P.6: a four-quadrant multiplier forming x(t) = vi(t)vo(t).]
To see why a multiplier can be used, let the incoming signal of the PLL, with
constant frequency and phase, be:

$$v_i = A_i\sin(\omega_i t + \phi_i)$$ (P.8)

If we let:

$$\theta_i = (\omega_i - \omega_o)t + \phi_i$$ (P.9)

then we can write the input in terms of the nominal frequency of the VCO as:

$$v_i = A_i\sin\left(\omega_o t + \theta_i(t)\right)$$ (P.10)

Note that the incoming signal is in phase quadrature with the VCO output (i.e.
one is a sine, the other a cosine). This comes about due to the way the
multiplier works as a phase comparator, as will be shown shortly. Thus, the
PLL will lock onto the incoming signal but it will have a 90 degree phase
difference.

The output of the multiplier is:

$$x = v_iv_o = A_i\sin(\omega_o t + \theta_i)\,A_o\cos(\omega_o t + \theta_o) = \frac{A_iA_o}{2}\left[\sin(\theta_i - \theta_o) + \sin(2\omega_o t + \theta_i + \theta_o)\right]$$ (P.11)

If we now look forward in the PLL block diagram, we can see that this signal
passes through the loop filter. If we assume that the loop filter is a lowpass
filter that adequately suppresses the high frequency term of the above equation
(not necessarily true in all cases!), then the phase detector output can be
written as:

$$x = \frac{A_iA_o}{2}\sin(\theta_i - \theta_o) = K_0\sin(\theta_i - \theta_o)$$ (P.12)
P.9
PLL Model

A model of the PLL, in terms of phase, rather than voltages, is shown below:

[Figure P.7: the PLL model. The input phase theta_i(t) is compared with the
VCO phase theta_o(t) to form the error theta_e(t); the phase detector applies
K0*sin(.), the result x(t) passes through the loop filter h(t) to give vin(t), and
the VCO integrates 2*pi*kv*vin to produce theta_o(t).]

Linear PLL Model

If the PLL is close to lock, i.e. its frequency and phase are close to that of the
incoming signal, then we can linearize the model above by making the
approximation sin(theta_e) ~ theta_e. With a linear model, we can convert to
the s-domain and do our analysis with familiar block diagrams:

[Figure P.8: the linear PLL model in the s-domain: phase detector gain K0,
loop filter H(s), and VCO 2*pi*kv/s.]

Note that the integrator in the VCO in the time-domain becomes 1/s in the
block diagram thanks to the integration property of the Laplace transform.
P.10
Reducing the block diagram gives the PLL transfer function for phase signals:

$$T(s) = \frac{\Theta_o(s)}{\Theta_i(s)} = \frac{2\pi k_vK_0H(s)}{s + 2\pi k_vK_0H(s)}$$

[Figure P.9: the reduced closed-loop block diagram from Theta_i(s) to
Theta_o(s).]

This is the closed-loop transfer function relating the VCO output phase and the
incoming signal's phase.

Loop Filter

The loop filter is designed to meet certain control system performance
requirements. One of those requirements is for the PLL to track input
sinusoids with constant frequency and phase errors. That is, if the input phase
is given by:

$$\theta_i(t) = (\omega_i - \omega_o)t + \phi_i$$ (P.13)

then we want:

$$\lim_{t\to\infty}\theta_e(t) = 0$$ (P.14)

The analysis is best performed in the s-domain. The Laplace transform of the
input signal is:

$$\Theta_i(s) = \frac{\omega_i - \omega_o}{s^2} + \frac{\phi_i}{s}$$ (P.15)
P.11
The Laplace transform of the error signal is:

$$\Theta_e(s) = \Theta_i(s) - \Theta_o(s) = \left[1 - T(s)\right]\Theta_i(s) = \frac{s}{s + 2\pi k_vK_0H(s)}\left[\frac{\omega_i - \omega_o}{s^2} + \frac{\phi_i}{s}\right]$$ (P.16)

Applying the final value theorem:

$$\lim_{t\to\infty}\theta_e(t) = \lim_{s\to0}s\Theta_e(s) = \left.\frac{\omega_i - \omega_o}{s + 2\pi k_vK_0H(s)}\right|_{s=0} + \left.\frac{\phi_i s}{s + 2\pi k_vK_0H(s)}\right|_{s=0}$$ (P.17)

For this limit to be zero, we require:

$$\lim_{s\to0}H(s) = \infty$$ (P.18)

A loop filter that satisfies this requirement is the proportional-plus-integral
(PI) filter:

$$H(s) = a + \frac{b}{s}$$ (P.19)
FFT.1
FFT - Quick Reference Guide

Definitions

Symbol          Description
T0              time-window
f0 = 1/T0       frequency resolution
fs              sample rate
Ts = 1/fs       sample period
N               number of samples

Creating FFTs

Given                   Choose                  Then
T0 - time-window        fs - sample rate        N = T0*fs  (number of samples)
N - number of samples   fs - sample rate        T0 = N*Ts  (time-window)
fs - sample rate        N - number of samples   T0 = N*Ts, f0 = fs/N
MATLAB Code

This code starts off with a given sample rate, and chooses to restrict the
number of samples to 1024 for computational speed.

% Sample rate
fs=1e6;
Ts=1/fs;
% Number of samples
N=1024;
% Time window and fundamental
T0=N*Ts;
f0=1/T0;
% Time vector for specified DSO parameters
t=0:Ts:T0-Ts;
% Frequency vector for specified DSO parameters
f=-fs/2:f0:fs/2-f0;
Interpreting FFTs

[The original table includes sketches of each time-domain waveform and of its
spectrum over one period, -fs/2 to fs/2, together with the derivations.]

Case 1 - one-off ideally sampled waveform xs(t): the sample weights are
x(nTs) = x[n]. Interpretation: X[n] gives the values of one period of the
true continuous FT at frequencies nf0.

Case 2 - periodic ideally sampled waveform (period T0): the sample weights
over one period are x(nTs) = x[n]. Interpretation: multiply X[n] by f0 to
give the weights of the impulses at frequencies nf0 in one period of the
true FT.

Case 3 - one-off continuous waveform x(t): the samples are values of the
waveform at intervals Ts. Interpretation: multiply X[n] by Ts to give the
values of the true continuous FT at frequencies nf0.

Case 4 - periodic continuous waveform (period T0): the samples are values of
the waveform over one period at intervals Ts. Interpretation: multiply X[n]
by 1/N to give the weights of the impulses at frequencies nf0 in the true FT.
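The case 3 bookkeeping can be checked numerically. This sketch (NumPy rather than the MATLAB used elsewhere in these notes, with an assumed Gaussian test pulse whose continuous FT is known in closed form) shows that multiplying the FFT by Ts reproduces samples of the true continuous FT:

```python
import numpy as np

fs = 32.0               # sample rate (assumed)
T0 = 16.0               # time-window (assumed)
Ts = 1 / fs             # sample period
N = int(T0 * fs)        # number of samples
f0 = 1 / T0             # fundamental

t = np.arange(N) * Ts - T0 / 2           # centre the window on the pulse
x = np.exp(-np.pi * t**2)                # Gaussian: exact FT is exp(-pi f^2)

X = Ts * np.fft.fftshift(np.fft.fft(x))  # case 3: multiply X[n] by Ts
f = (np.arange(N) - N // 2) * f0         # frequencies -fs/2 ... fs/2 - f0

error = np.max(np.abs(np.abs(X) - np.exp(-np.pi * f**2)))
print(error < 1e-9)  # True: Ts*FFT matches the continuous FT samples
```

The agreement is essentially machine precision here because the Gaussian is negligible outside the window and above fs/2, so neither truncation nor aliasing intrudes.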
MATLAB - Quick Reference Guide

General

Ts=0.256;
    Assigns the 1 x 1 matrix Ts with the value 0.256. The semicolon prevents
    the matrix being displayed after it is assigned.

t=0:Ts:T;
    Assigns the vector t with values starting from 0, incrementing by Ts, and
    stopping when T is reached or exceeded.

N=length(t);
    N will equal the number of elements in the vector t.

r=ones(1,N);
    r will be a 1 x N matrix filled with the value 1. Useful to make a step.

n=45000*[1 18 900];
    Creates a vector ([]) with elements 1, 18 and 900, then scales all
    elements by 45000. Useful for creating vectors of transfer function
    coefficients.

wd=sqrt(1-zeta^2)*wn;
    Typical formula, showing the use of sqrt, taking a square (^2), and
    multiplication by a scalar (*).

gs=p.*g;
    Performs a vector multiplication (.*) on an element-by-element basis.
    Note that p*g will be undefined.
Graphing

figure(1);
    Creates a new graph, titled Figure 1.

plot(t,y);
    Graphs y vs. t on the current figure.

subplot(211);
    Creates a 2 x 1 matrix of graphs in the current figure, and makes the top
    one current.

title('Complex poles');
    Puts the title 'Complex poles' on the current graph.

xlabel('Time (s)');
    Puts the label 'Time (s)' on the x-axis.

ylabel('y(t)');
    Puts the label 'y(t)' on the y-axis.

semilogx(w,H,'k:');
    Makes a graph with a logarithmic x-axis, using a black dotted line ('k:').

plot(200,-2.38,'kx');
    Plots a point (200,-2.38) on the current figure, using a black cross
    ('kx').

axis([1 1e5 -40 40]);
    Sets the range of the x-axis from 1 to 1e5, and the range of the y-axis
    from -40 to 40. Note all this information is stored in the vector
    [1 1e5 -40 40].

grid on;
    Displays a grid on the current graph.

hold on;
    Next time you plot, it will appear on the current graph instead of a new
    graph.
Frequency-domain

f=logspace(f1,f2,100);
    Creates a logarithmically spaced vector of 100 elements from 10^f1 to
    10^f2.

H=freqs(n,d,w);
    H contains the frequency response of the transfer function defined by
    numerator vector n and denominator vector d, at frequency points w.

Y=fft(y);
    Performs a fast Fourier transform (FFT) on y, and stores the result in Y.

Hmag=abs(H);
    Takes the magnitude of a complex number.

Hang=angle(H);
    Takes the angle of a complex number.

X=fftshift(X);
    Swaps halves of the vector X - useful for displaying the spectrum with
    negative frequency components.
Time-domain

step(Gcl);
    Calculates the step response of the system transfer function Gcl and
    plots it on the screen.

S=stepinfo(Gcl);
    Computes the step-response characteristics of the system transfer
    function Gcl.

y=conv(m2,Ts*h);
    Performs a convolution on m2 and Ts*h, with the result stored in y.

y=y(1:length(t));
    Reassigns the vector y by taking elements from position 1 to position
    length(t). Normally used after a convolution, since convolution produces
    end effects which we usually wish to ignore for steady-state analysis.

square(2*pi*fc*t,10)
    Creates a square wave from -1 to +1, of frequency fc, with a 10 percent
    duty cycle. Useful for generating a real sampling waveform.
Control

Gmrv=tf(Kr,[Tr 1]);
    Creates the transfer function Kr/(1 + sTr).

I=tf(1,[1 0]);
    Creates the transfer function 1/s, an integrator.

Gcl=feedback(Gol,H);
    Creates the closed-loop transfer function Gcl = Gol/(1 + Gol*H).

rlocus(Gol);
    Makes a root locus using the system Gol.

K=rlocfind(Gol);
    An interactive command that allows you to position the closed-loop poles
    on the root locus. It returns the value of K that puts the roots at the
    chosen location.

Gd=c2d(G,Ts,'tustin');
    Creates a discrete-time equivalent system Gd of the continuous-time
    system G, using a sample period of Ts and the bilinear ('tustin') method
    of discretization.

m=csvread('test.csv');
    Reads from a CSV file into a matrix.
Signals and Systems 2014
Matrices - Quick Reference Guide

Definitions

$a_{ij}$ - Element of a matrix. $i$ is the row, $j$ is the column.

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \left[a_{ij}\right]$ - Matrix.

$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$ - Column vector.

$\mathbf{0} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ - Null matrix.

$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ - Identity matrix.

$\lambda\mathbf{I} = \begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix}$ - Scalar matrix.

$\boldsymbol{\Lambda} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}$ - Diagonal matrix.
Multiplication

$\mathbf{Z} = k\mathbf{Y}$ - Multiplication by a scalar: $z_{ij} = k y_{ij}$.

$\mathbf{z} = \mathbf{A}\mathbf{x}$ - Multiplication by a vector: $z_i = \sum_{k=1}^{n} a_{ik} x_k$.

$\mathbf{Z} = \mathbf{A}\mathbf{B}$ - Matrix multiplication: $z_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$.

$\mathbf{A}\mathbf{B} \ne \mathbf{B}\mathbf{A}$ - In general, matrix multiplication is not commutative.
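A two-line numerical illustration of non-commutativity (Python/NumPy rather than MATLAB, with arbitrarily chosen matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])   # exchange matrix

print(A @ B)   # [[2 1] [4 3]] - columns of A swapped
print(B @ A)   # [[3 4] [1 2]] - rows of A swapped
assert not np.array_equal(A @ B, B @ A)
```

Multiplying by the exchange matrix on the right permutes columns, on the left permutes rows, so the two products differ.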
Operations

$\mathbf{A}^t = \begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{bmatrix}$ - Transpose of A (interchange rows and columns): $a_{ij}^t = a_{ji}$.

$\lvert\mathbf{A}\rvert = \det\mathbf{A} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$ - Determinant of A.
If $\lvert\mathbf{A}\rvert = 0$, then A is singular.
If $\lvert\mathbf{A}\rvert \ne 0$, then A is non-singular.

Minor of $a_{ij}$ - Delete the row and column containing the element $a_{ij}$ and obtain a new determinant.

$A_{ij} = (-1)^{i+j} \times \left(\text{minor of } a_{ij}\right)$ - Cofactor of $a_{ij}$.

$\operatorname{adj}\mathbf{A} = \begin{bmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{bmatrix}$ - Adjugate: the transpose of the matrix of cofactors.

$\mathbf{A}^{-1} = \dfrac{\operatorname{adj}\mathbf{A}}{\lvert\mathbf{A}\rvert}$ - Inverse of A.
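The cofactor/adjugate construction can be coded directly. This sketch (Python/NumPy, with a 3 x 3 example matrix of my own choosing; in practice one would simply call numpy.linalg.inv) builds the inverse as adj A divided by det A:

```python
import numpy as np

def adjugate(A):
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # minor: delete row i and column j, then take the determinant
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)  # cofactor A_ij
    return C.T  # adjugate is the transpose of the cofactor matrix

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

A_inv = adjugate(A) / np.linalg.det(A)   # A^-1 = adj A / |A|
print(np.allclose(A_inv @ A, np.eye(3)))  # True
```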
Linear Equations

A set of linear equations written explicitly:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$$
$$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$$

In matrix form:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

that is, $\mathbf{A}\mathbf{x} = \mathbf{b}$, with solution $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$.
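In numerical work the solution x = A⁻¹b is computed without explicitly forming the inverse. A short NumPy sketch (illustrative system, not one from the notes):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([3.0, 5.0, 5.0])

x = np.linalg.solve(A, b)     # solves A x = b directly (LU factorisation)
print(np.allclose(A @ x, b))  # True
```

solve is both faster and numerically better conditioned than computing inv(A) @ b.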
Eigenvalues

$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$ - $\lambda$ are the eigenvalues; $\mathbf{x}$ are the column eigenvectors.

$\lvert\mathbf{A} - \lambda\mathbf{I}\rvert = 0$ - Characteristic equation for finding the eigenvalues.
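A quick numerical check of the definition Ax = λx (NumPy, with an illustrative symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eig(A)     # eigenvalues and column eigenvectors
for k in range(2):
    # each column of V satisfies A v = lambda v
    assert np.allclose(A @ V[:, k], lam[k] * V[:, k])

print(sorted(lam))            # the eigenvalues, 1.0 and 3.0
```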
Answers
1A.1
t 2 2k
(a) g t sin t rect
, T0 2 , P
(b) g t
1
4
t 1 3k
t 34 3k t 1 3k
t 34 3k
2
rect
t
3
k
rect
rect
1
1
1
1
k
4
2
4
2
P 169
T0 3 ,
(c) g t
t 10 k 10
1k rect t 5 10k , T0 20 ,
10
1
2
1 e
2
t 4k
(d) g t 2 cos200t rect
, T0 4 , P 1
2
k
1A.2
t 2
t 74 t 1
t 43 t 1
t 43 t 2
t 74
rect
rect
rect
rect
1 1
3 1
3 1
1
1
10
2 10
2 10
2 10
2
(a) g t
E 83 13
t
(b) g t cos t rect , E 2
3 rect t 2
5
(d) g t rect
2
5
E 19
1A.3
(i) 0
(vi) f t t 0
1A.4
Let $\lambda = (t - t_0)/T$. Then:

$$\int_{-\infty}^{\infty} f(t)\,\delta\!\left(\frac{t - t_0}{T}\right)dt =
\begin{cases}
\displaystyle\int_{-\infty}^{\infty} f(T\lambda + t_0)\,\delta(\lambda)\,T\,d\lambda, & T > 0 \\[8pt]
\displaystyle-\int_{-\infty}^{\infty} f(T\lambda + t_0)\,\delta(\lambda)\,T\,d\lambda, & T < 0
\end{cases}
= \int_{-\infty}^{\infty} f(T\lambda + t_0)\,\delta(\lambda)\,\lvert T\rvert\,d\lambda
= \lvert T\rvert\,f(t_0) \quad \text{for all } T$$
1A.5
45
(a)
X 2715
(b)
X 510
(f)
(h) X 130 , X * 1 30
(i) -2.445
(c)
x t 100 cost 60
100
(g)
(d)
X 5596
. 59.64
(j) X 15
. 30 , X * 15
. 30
8 3, 8
1B.1
a), b), c) - [each answer is a set of four plots (i)-(iv) of x(t) versus time,
showing the required signal operations]
1B.2
a), b) - [stem plots of y1[n] and y2[n] versus n]

1B.3
[stem plot of a[n] versus n]
1B.4
$$y[n] = \begin{cases} 1, & n = 1, 3, 5, \ldots \\ 0, & \text{all other } n \end{cases}$$
1B.5
a) yn yn 1 yn 2
1B.6
(i), (ii) - [block diagram realisations using delay (D), gain and adder
elements]
1B.7
(i)
yn yn 1 2 yn 2 3 xn 1
(ii)
y0 3 , y1 8 , y2 1 , y3 14
1B.8
(a)
(i) $h[0] = \dfrac{T}{2}$, $h[n] = T$ for $n = 1, 2, 3, \ldots$
(ii) $h[0] = 1$, $h[n] = 0.25\,(0.5)^{n-1}$ for $n = 1, 2, 3, \ldots$
1B.9
(i)
(a) a1 a 4 0
(b) a5 0
(ii)
(a) a1 a 4 0
(b) a5 0
1B.10
y1[0] = 1, y1[1] = 3, y1[2] = 7, y1[3] = 15, y1[4] = 31, y1[5] = 63
y[0] = 4, y[1] = 12, y[2] = 26, y[3] = 56, y[4] = 116, y[5] = 236
1B.11
yn 2 n n 2 2 n 4 n 6
1B.12
(i), (ii), (iii) - [stem plots of the convolution results versus n]
1B.14
The describing differential equation is:

$$\frac{d^2 v_o}{dt^2} + \frac{R}{L}\frac{dv_o}{dt} + \frac{1}{LC}v_o = \frac{1}{LC}v_i$$

The resulting discrete-time approximation is:

$$y[n] = \left(2 - \frac{RT}{L}\right)y[n-1] - \left(1 - \frac{RT}{L} + \frac{T^2}{LC}\right)y[n-2] + \frac{T^2}{LC}x[n-2]$$

[plot of y(t) over 0 to 20 s]

Use T = 0.1 so that the waveform appears smooth (any smaller value has a
minimal effect in changing the solution).
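The recursion can be simulated directly. This sketch (plain Python, with illustrative values R = 1, L = 1, C = 1 and a unit-step input, which are assumptions rather than the problem's values) shows the discrete-time approximation settling to the circuit's unity DC gain:

```python
# y[n] = (2 - RT/L) y[n-1] - (1 - RT/L + T^2/(LC)) y[n-2] + (T^2/(LC)) x[n-2]
R, L, C = 1.0, 1.0, 1.0   # illustrative component values (assumed)
T = 0.1                   # time step, as suggested above

a1 = 2 - R * T / L
a2 = -(1 - R * T / L + T**2 / (L * C))
b2 = T**2 / (L * C)

N = 400
x = [1.0] * N             # unit-step input
y = [0.0] * N             # zero initial conditions
for n in range(2, N):
    y[n] = a1 * y[n - 1] + a2 * y[n - 2] + b2 * x[n - 2]

print(round(y[-1], 3))    # settles near 1.0, the DC gain of the circuit
```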
1B.15
(a) $\dfrac{dy}{dt} + Ky = Kx$
(b) $h(t) = Ke^{-Kt}u(t)$
(d) [plot of y(t) versus t (min): from t = 4 the response rises as
$1 - e^{-0.5(t-4)}$ toward its steady-state value]
1B.16
The impulse response is $h(t) = te^{-t}u(t)$. Using numerical convolution will
only give the ZSR. The ZIR needs to be obtained using other methods and
is $e^{-t}u(t)$. The code snippet below shows relevant MATLAB code:

t=0:Ts:To-Ts;
h=t.*exp(-t);
x=sin(t);
yc=conv(x,h*Ts);        % ZSR
yc=yc(1:N)+exp(-t);     % total solution = ZSR + ZIR
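The same computation can be sketched in Python (illustrative only: NumPy, an assumed 20 s window and Ts = 0.01). The steady-state ZSR should approach $-0.5\cos t$, since $|H(j1)| = 0.5$ with a $-90°$ phase for $H(s) = 1/(s+1)^2$:

```python
import numpy as np

Ts = 0.01
t = np.arange(0.0, 20.0, Ts)

h = t * np.exp(-t)                      # impulse response h(t) = t e^-t u(t)
x = np.sin(t)                           # input x(t) = sin(t)

zsr = np.convolve(x, h)[:len(t)] * Ts   # numerical convolution gives the ZSR
y = zsr + np.exp(-t)                    # total solution = ZSR + ZIR

# steady state of the ZSR: |H(j1)| = 0.5, phase -90 deg -> -0.5 cos(t)
print(abs(zsr[-1] + 0.5 * np.cos(t[-1])) < 0.05)  # True
```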
1B.17
[sketch of the resulting waveform versus t, with axis ticks at t = 4, 4.5
and 5]
1B.18
The output control signal is smoothed, since it is not changing as rapidly as the
input control signal. It is also delayed in time.
[plots of x(t), h(t) and y(t) versus time]

1B.19
[plots of x[nT], si[nT], h[nT] and y[nT] versus t]

Note that the filter effectively removes the added noise; however, it also
introduces a time delay of between three and four samples.
2A.1
(a), (b) - [sketches of |G| and the angle of G versus frequency]
(c) - [sketches of Re{G} and Im{G} versus frequency]
2A.2
G 445
2A.3
g t 4 cos 200t
2A.4
[phasor diagrams showing G and its conjugate G* at t = 0, 1 and 2]
2A.5
(a) Gn
1
j n sinc n 1 sinc n 1
4
2
2
[magnitude and phase spectra plotted for -25 <= n <= 25]
1
(b) Gn sincn 2sincn 6
2
[magnitude and phase spectra plotted for -25 <= n <= 25]
(c) Gn
1 1 1
2
0.1 jn
10
[magnitude and phase spectra plotted for -25 <= n <= 25]
(d) Gn
1
n
n
sinc 8 sinc 8
4
2
2
[magnitude and phase spectra plotted for -25 <= n <= 25]
sincn 1 1
(e) Gn
and the special case of n 0 can be evaluated by
j 2n
n
[magnitude and phase spectra plotted for -25 <= n <= 25]
2A.6
P = 4.294 W
2A.7
(a) [two-sided magnitude spectrum |G|], P = 73%
(b) [two-sided magnitude spectrum |G|], P = 60%
(c) [two-sided magnitude spectrum |G|], P = 50%
2A.8
5 T0
n
Note: Gn Asinc 2 Asincn
2
2B.1
x t 4 cos 2000t 30 , P 8
2B.2
Xf
sinc f e jf sinc2 f e j 6f
jf
2B.3
5 10 cos3f 5 cos4f
2f 2
(a)
(b)
sincf 12
sincf 12
(c)
1 jf e
(d)
jf
sinc f cos2f e j 3f
2f 2
2B.4
Hint: $e^{-a\lvert t\rvert} = e^{-at}u(t) + e^{at}u(-t)$
2B.5
3P 1.5 f
2B.6
This follows directly from the time shift property.
2B.7
G1 f
a
1
1
f , g1 t g 2 t 1 e at u t
, G2 f
j 2f 2
a j 2f
2B.8
[sketch of G1(f) * G2(f) versus f, with labelled heights Ak and 2Ak and
extent -2f0 to 2f0]
(a)
2 Af 0 sinc2 f 0 t t 0
(b)
2 Af 0 sinc f 0 t sin f 0 t
2B.10
j 2 A sin 2 fT
f
2B.11
Asinc4f cos4f
jf
3A.1
0.2339 0
3A.2
X f A sinc 2 f
3A.3
B4
3A.4
By passing the signal through a lowpass filter with 4 kHz cutoff - provided the
original signal contained no spectral components above 4 kHz.
3A.5
Periodicity.
3A.6
(a) 20 Hz, 40 Hz, P = 0.1325 W
(b) G3 0.5e
j 3
(c)
Harmonic #   Amplitude   Phase (deg)
0            1
1            3           -66
2            1           -102
3            0.5         -168
4            0.25        -234

Yes.
3A.7
Yes. The flat topped sampling pulses would simply reduce the amplitudes of
the repeats of the baseband spectrum as one moved along the frequency axis.
For ideal sampling, with impulses, all repeats have the same amplitude. Note
that after sampling the pulses have tops which follow the original waveform.
3A.8
Truncating the 9.25 kHz sinusoid has the effect of convolving the impulse in
the original transform with the transform of the window (a sinc function for a
rectangular window). This introduces leakage which will give a spurious
9 kHz component.
3B.1
[sketch of G(f): impulses of weights ±j/2 and ±j/4 at f = ±9, ±10 and
±11 Hz]
3B.2
[sketches of G(f): the baseband spectrum of height 0.5 from -2 to 2 kHz,
and the spectra at points A, B and C with components between ±36 and
±40 kHz]
3B.3
[block diagram: the signal at C is lowpass filtered, multiplied by
cos(2*pi*fc*t), and lowpass filtered again]
4A.1
a) $\dfrac{(s + 1/R_1C_1)(s + 1/R_2C_2)}{s^2 + \left(1/R_1C_1 + 1/R_2C_2 + 1/R_2C_1\right)s + 1/(R_1R_2C_1C_2)}$

b) $\dfrac{1/RC}{s + 1/RC}$

c) $\dfrac{(1/R_2C_1)(s + R_1/L_1)}{s^2 + \left(1/R_2C_1 + R_1/L_1\right)s + (R_1 + R_2)/(R_2L_1C_1)}$

d) $\dfrac{sR/L}{1 + 2sR/L}$
4A.2
a) $I(s)\left(sL + R + \dfrac{1}{sC}\right) = Li(0^-) + E(s)$

b) $X(s)\left(Ms^2 + Bs + K\right) = M\left(sx(0^-) + \dfrac{dx(0^-)}{dt}\right) + Bx(0^-) + \dfrac{3}{s}$

c) $\Theta(s)\left(Js^2 + Bs + K\right) = J\left(s\theta(0^-) + \dfrac{d\theta(0^-)}{dt}\right) + B\theta(0^-) + \dfrac{10}{s^2}$
4A.3
a)
f t
5
7 t
9
e cos 3t tan 1
4 2
3
b) $f(t) = \dfrac{1}{3}\left(\cos t - \cos 2t\right)$
c)
f t
1 2
15
15
t 8t e 2t 7t
40
2
2
4A.4
a) f 5 4
4A.5
$$T(s) = \frac{(1/R_1C_2)\,s}{(s + 1/R_1C_1)(s + 1/R_2C_2)}$$
4A.6
y t 5e t cos2t 26.6u t
4A.7
a) (i) $\dfrac{G}{1 + GH}$  (ii) $\dfrac{1}{1 + GH}$  (iii) $\dfrac{GH}{1 + GH}$
4A.8
a)
ab
ac 1
b) X 5
X1
Y
1 bd ac
1 bd ac
4A.9
a)
G1G2 G3G4
C
b)
Y
AE CE CD AD
X 1 AB EF ABEF
4B.1
Yes, by examining the pole locations for R, L, C > 0.
4B.2
(i) $y(t) = 2\left(1 - e^{-4t}\right)u(t)$

(ii) $y(t) = \left(2t - \tfrac{1}{2} + \tfrac{1}{2}e^{-4t}\right)u(t)$
(iii)
8
cos 2t tan 1 2 e 4t u t
y t
5
5
(iv)
40 4t
2
y t
e u t
cos10t tan 1
5
29
29
4B.3
(i) underdamped: $y(t) = \left[2 - \dfrac{4}{\sqrt{3}}e^{-2t}\sin\left(2\sqrt{3}\,t + \cos^{-1}0.5\right)\right]u(t)$,
$h(t) = \dfrac{16}{\sqrt{3}}e^{-2t}\sin\left(2\sqrt{3}\,t\right)$

(ii)

(iii) overdamped: $y(t) = \left(2 - \dfrac{8}{3}e^{-2t} + \dfrac{2}{3}e^{-8t}\right)u(t)$,
$h(t) = \dfrac{16}{3}\left(e^{-2t} - e^{-8t}\right)$
4B.4
(a) y ss t u t (b) y ss t
10 170
1
cos t tan 1 u t
17
13
5A.1
a) 2 kHz
b) 2.5 Hz
5A.2
a) 0 dB
b) 32 dB
c) 6 dB
5A.3
a) - f): [Bode magnitude and phase response plots, |H(w)| in dB and
arg(H(w)) in degrees versus w (rad/s)]
5A.5
[Bode magnitude and phase response plots]
5A.6
[Bode magnitude and phase response plots]
5A.7
a) [Bode magnitude and phase plots]
(i) +28 dB, -135° (ii) +5 dB, -105° (iii) -15 dB, -180° (iv) -55 dB, -270°

b) [Bode magnitude and phase plots]
(i) +35 dB, +180° (ii) -5 dB, +100° (iii) -8 dB, +55° (iv) -28 dB, +90°
5A.8
a) $G_1(s) = \dfrac{450}{s(s + 2.5)(s + 20)}$

[Bode magnitude and phase plots]
b) $G_2(s) = \dfrac{10s}{(s + 0.2)(s + 10)}$

[Bode magnitude and phase plots]
5A.9
a) [Bode magnitude and phase plots]

b) [Bode magnitude and phase plots]
5A.10
(i) 3.3 dB, 8 (ii) 17.4 dB, 121
5A.11
[Bode magnitude and phase response plots]
5B.1
b) t p 1.05 s c) 0.555 d) n 13 rads -1
a) 12.3%
h) t s 1.95 s
5B.2
$$G(s) = \frac{196}{s^2 + 19.6s + 196}$$
5B.3
$$c(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1 - \zeta^2}}\sin\left(\sqrt{1 - \zeta^2}\,\omega_n t + \cos^{-1}\zeta\right)$$
5B.4
a) $$\zeta = \frac{\ln(a/b)}{\sqrt{4\pi^2 + \left[\ln(a/b)\right]^2}}$$
5B.5
(a) (i)-(iii) [pole diagrams in the s-plane, with poles at -10 and -1]
(b) (i)-(iii) [first-order step responses, each marked with the 63.2% point
of the 100% steady-state value; time constants 0.1 and 1, and a different
steady state in (iii)]
5B.6
$$G_1(s) = \frac{1.299}{s + 0.6495}$$

[step responses of the first- and second-order systems compared]
5B.7
a) $\omega_n = 3$ rad s⁻¹, $\zeta = 1/6$
b) 58.8%
c) 0
d) 1/9
5B.8
a) 0
b) 3
5B.9
a) T = 10 s
6A.3
a) $S_{K_1}^T = 1$

b) $S_{K_2}^T = \dfrac{s^2 + s}{s^2 + s + 1000}$

c) $S_G^T = \dfrac{1000}{s^2 + s + 1000}$
6A.4
a)
open-loop S KT a 1
open-loop S KT1 0
closed-loop S KT a
1
1
for n
1 K 1 K a G s 1 K 1 K a
closed-loop S KT1
K1 K a
1 for K1 K a 1
1 K1 K a
b)
open-loop
s
G s 1 for n
Td s
closed-loop
G s
s
1
for n
Td s 1 K1 K a G s 1 K1 K a
6A.5
1
1 K P K1
(ii)
(iii)
K1
1 K P K1
a)
(i)
b)
(i) 0
(ii)
1
K I K1
(iii) 0
c)
(i) 0
(ii)
1
K P K1
(iii)
d)
(i) 0
(ii) 0
1
KP
(iii) 0
(iv)
1 K1
1 K P K1
(iv) 0
(iv)
1
KP
(iv) 0
The integral term in the compensator reduces the order of the error,
i.e. infinite values turn into finite values, and finite values become zero.
7A.1
(i), (ii) - [block diagram realisations using delays (z^-1, z^-2, z^-4),
gains and adders]
7A.2
(iii)
yn yn 1 2 yn 2 3 xn 1
(iv)
y0 3 , y1 8 , y2 1 , y3 14
7A.3
(a) $h[n] = \left(\tfrac{1}{3}\right)^n u[n]$
(b) yn 3 n 3 2 un 7 2 1 3 un
n
or yn 3 2 un 1 2 1 3 un 1 3 un 1
n 1
or yn 3 1 3 un 1 2 3 1 3
n
n2
un 2
(the responses above are equivalent to see this, graph them or rearrange
terms)
7A.4
hn h1 n h2 n
7A.5
h0 h1 0
h1 h1 1 h2 1h12 0
h2 h1 2 h22 1h13 0 2h1 0h1 1h2 1 h12 0h2 2
7A.6
(a)
(i) F z
z
z 1 2
(iii) F z
(b) X z
(ii) F z
1
za
z z a
z
(iv) F z
2
z a 3
z a
2z 2
z 2 1
7A.7
(a) zeros at z = 0, -2; poles at z = 1/3, -1/2; stable
(b) fourth-order zero at z = 0; poles at z = +j/√2, -j/√2; stable
7A.8
H z
3 z 1 2 z 4
1 z 1 z 2
7A.9
xn 0, 14, 10.5, 9.125, ...
7A.10
xn 81 4 un 81 2 un 8un 24 n
n
7A.11
(a) xn
1 a n1
for n 0, 1, 2,...
1 a
(e) xn 21 2
n4
(f) xn 2
for n 0, 1, 2,...
7A.12
(a) hn
2 n1 1
n
n
1 4 1 2 1 8 1 4
2 2 n 3
(b) H z
1 8 z2
z 1 2z 1 4
(c) yn 1 3 1 24 1 4 1 4 1 2
n
(d) yn 1 2 1 2 1 18 1 4 n 3 4 9
n
7A.13
yn 1
1
0.5 1.25
5
n 3
1
0.5 1.25
5
n 3
7A.14
yn
n
n
1
0.5 1.25 0.5 1.25
for n 0, 1, 2,...
7A.15
$$F(z) = \frac{zTe^{-T}}{\left(z - e^{-T}\right)^2}$$
7A.16
z 1 k
k z 1
(a) X 3 z
X 1 z
Dz
k 1z 6k 1
k 1z 6k 1
7A.17
(a) f 0 0 , f 0 (b) f 0 1 , f 0
(c) f 0 1 4 , f
1
1 a
(d) f 0 0 , f 8
7A.18
(i)
n 1un
(ii)
n 2 n 1 3 n 2 2 n 3 n 4
7A.19
f n 8 6n1 2 81 2 , n 0
n
7B.1
yn 0.7741 yn 1 20 xn 18.87 xn 1
7B.2
yn 0.7730 yn 1 17.73 xn 17.73 xn 1
7B.3
$$H_d(z) = \frac{0.1670\left(z + 0.3795\right)}{\left(z - 0.3679\right)\left(z - 0.1353\right)}$$
8B.1
(a) [root locus plot]
8B.2
(a) One pole is always in the right-half plane:
[root locus plots for K > 0 and K < 0]
(b) [root locus plots for K > 0 and K < 0]
The system is always unstable because the pole pair moves into the right-half
plane (s = ±j3.5) at a lower gain (K = 37.5) than that for which the
right-half plane pole enters the left-half plane (K = 48). The principle is
sound, however. A different choice of pole-zero locations for the feedback
compensator is required in order to produce a stable feedback system.
For K < 0, the root in the right-half plane stays on the real axis and moves
to the right. Thus, negative values of K are not worth pursuing.
For the given open-loop pole-zero pattern, there are two different feasible locus
paths, one of which includes a nearly circular segment from the left-half
s-plane to the right-half s-plane. The relative magnitude of the poles and zeros
determines which of the two paths occurs.
[root locus plots illustrating the two feasible locus paths]
8B.3
(a) [root locus plot]
(b) a = 32/27
(c) 0 < a < 16 for stability. For a = 16, the frequency of oscillation is
2 rad s⁻¹.
8B.4
(a) [root-locus sketches in the s-plane for K > 0 and K < 0; open-loop
poles at -3 and -1, with the K > 0 branches crossing the imaginary axis
at ±j√3]
(b) K = 2/5, $\omega_n = \sqrt{3}$ rad s⁻¹
(c) Any value of K < 2/5 gives zero steady-state error. [The system is
type 1. Therefore any value of K will give zero steady-state error provided
the system is stable.]
(d) K = 0.1952, $t_r$ = 1.06 s
9A.1
1
C R R3
vC
q 1, A 1 2
R2
iL1
L R R
3
1 2
R2
0
C1 R2 R3
, b 1
1 R2 R3
L
R1
1
L1 R2 R3
9A.2
(a) false
(b) false
(c) false
9A.3
(a)
2
q 0
2
y 2 2
0
3
0 0 q 1 x
0
2 2
(b)
2
q 1
0
y 0 0
(b)
H s
1q 0x
1 1
1
1 0 q 0 x
0
1 2
1q 0x
9A.4
(a)
H s 8
s 12 s 3
2
s s 2
1
s 5s 9 s 7
3
9A.5
s 3
3s 1
2
2
q
t
1
s s 3
-1 s 3
q t L 12
2s 6
2
2
2
s 3 s s 3
1 4 cos 3t 3 sin 3t
u t
2 2 cos 3t 14 sin 3t
9B.1
1 1 , 2 2 , 3 3
9B.2
1 e t
(a) 1, 2 (b) qt
t
e
9B.3
(ii) 1, 1 j 2 (iii) k1 75 , k 2 49 , k 3 10
(iv)
y ss t 1 40
(v) State feedback can place the poles of any system arbitrarily (if the system is
controllable).
9B.4
0 0 1 16
3 16
qn 1 1 0 1 4 qn 1 8 xn
0 1 1 4
1 2
yn 0 0 1qn 3 xn
9B.6
1 0
0
a
qn 1 k
0 k qn k xn
b c
b c 0 c
yn 1 0 0qn