Signalsand Systemswith MATLAB
Signalsand Systemswith MATLAB
N –1
∑ x [n ]e
mn
– j2π -------
N
Signals and
Systems
n=0
Second Edition
Steven T. Karris
with MATLAB® Applications
Includes
step-by-step
procedures
for designing
analog and
digital filters
Orchard Publications
www.orchardpublications.com
Students and working professionals will find
Signals and Systems Signals and Systems with MATLAB® Applications,
Second Edition, to be a concise and easy-to-learn
with MATLAB® Applications text. It provides complete, clear, and detailed expla-
Second Edition nations of the principal analog and digital signal
Steven T. Karris processing concepts and analog and digital filter
design illustrated with numerous practical examples.
Steven T. Karris is the president and founder of Orchard Publications. He earned a bachelors
degree in electrical engineering at Christian Brothers University, Memphis, Tennessee, a mas-
ters degree in electrical engineering at Florida Institute of Technology, Melbourne, Florida, and
has done post-master work at the latter. He is a registered professional engineer in California
and Florida. He has over 30 years of professional engineering experience in industry. In addi-
tion, he has over 25 years of teaching experience that he acquired at several educational insti-
tutions as an adjunct professor. He is currently with UC Berkeley Extension.
ISBN 0-9709511-8-3
$39.95 U.S.A.
Signals and Systems
with MATLAB® Applications
Second Edition
Steven T. Karris
Orchard Publications
www.orchardpublications.com
Signals and Systems with MATLAB Applications, Second Edition
Copyright © 2003 Orchard Publications. All rights reserved. Printed in the United States of America. No part of this
publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system,
without the prior written permission of the publisher.
Direct all inquiries to Orchard Publications, 39510 Paseo Padre Parkway, Fremont, California 94538
Product and corporate names are trademarks or registered trademarks of the Microsoft™ Corporation and The
MathWorks™ Inc. They are used only for identification and explanation, without intent to infringe.
ISBN 0-9709511-8-3
Copyright TX 5-471-562
Preface
This text contains a comprehensive discussion on continuous and discrete time signals and systems
with many MATLAB® examples. It is written for junior and senior electrical engineering students,
and for self-study by working professionals. The prerequisites are a basic course in differential and
integral calculus, and basic electric circuit theory.
This book can be used in a two-quarter, or one semester course. This author has taught the subject
material for many years at San Jose State University, San Jose, California, and was able to cover all
material in 16 weeks, with 2½ lecture hours per week.
To get the most out of this text, it is highly recommended that Appendix A is thoroughly reviewed.
This appendix serves as an introduction to MATLAB, and is intended for those who are not familiar
with it. The Student Edition of MATLAB is an inexpensive, and yet a very powerful software
package; it can be found in many college bookstores, or can be obtained directly from
The MathWorks™ Inc., 3 Apple Hill Drive , Natick, MA 01760-2098
Phone: 508 647-7000, Fax: 508 647-7001
http://www.mathworks.com
e-mail: [email protected]
The elementary signals are reviewed in Chapter 1 and several examples are presented. The intent of
this chapter is to enable the reader to express any waveform in terms of the unit step function, and
subsequently the derivation of the Laplace transform of it. Chapters 2 through 4 are devoted to
Laplace transformation and circuit analysis using this transform. Chapter 5 discusses the state
variable method, and Chapter 6 the impulse response. Chapters 7 and 8 are devoted to Fourier series
and transform respectively. Chapter 9 introduces discrete-time signals and the Z transform.
Considerable time was spent on Chapter 10 to present the Discrete Fourier transform and FFT with
the simplest possible explanations. Chapter 11 contains a thorough discussion to analog and digital
filters analysis and design procedures. As mentioned above, Appendix A is an introduction to
MATLAB. Appendix B contains a review of complex numbers, and Appendix C discusses matrices.
New to the Second Edition
This is an refined revision of the first edition. The most notable changes are chapter-end summaries,
and detailed solutions to all exercises. The latter is in response to many students and working
professionals who expressed a desire to obtain the author’s solutions for comparison with their own.
The author has prepared more exercises and they are available with their solutions to those
instructors who adopt this text for their class.
The chapter-end summaries will undoubtedly be a valuable aid to instructors for the preparation of
presentation material.
The last major change is the improvement of the plots generated by the latest revisions of the
MATLAB® Student Version, Release 13.
Orchard Publications
Fremont, California
www.orchardpublications.com
[email protected]
2
Table of Contents
Chapter 1
Elementary Signals
Signals Described in Math Form .................................................................................................................1-1
The Unit Step Function ................................................................................................................................1-2
The Unit Ramp Function ...........................................................................................................................1-10
The Delta Function .....................................................................................................................................1-12
Sampling Property of the Delta Function................................................................................................1-12
Sifting Property of the Delta Function.....................................................................................................1-13
Higher Order Delta Functions...................................................................................................................1-15
Summary........................................................................................................................................................1-19
Exercises........................................................................................................................................................1-20
Solutions to Exercises .................................................................................................................................1-21
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Fourier Series
Wave Analysis.................................................................................................................................................7-1
Evaluation of the Coefficients .....................................................................................................................7-2
Symmetry.........................................................................................................................................................7-7
Waveforms in Trigonometric Form of Fourier Series .......................................................................... 7-11
Gibbs Phenomenon.................................................................................................................................... 7-24
Alternate Forms of the Trigonometric Fourier Series .......................................................................... 7-25
Circuit Analysis with Trigonometric Fourier Series .............................................................................. 7-29
The Exponential Form of the Fourier Series ......................................................................................... 7-31
Line Spectra ................................................................................................................................................. 7-35
Computation of RMS Values from Fourier Series ................................................................................ 7-40
Computation of Average Power from Fourier Series ........................................................................... 7-42
Numerical Evaluation of Fourier Coefficients....................................................................................... 7-44
Summary....................................................................................................................................................... 7-48
Exercises....................................................................................................................................................... 7-51
Solutions to Exercises ................................................................................................................................ 7-53
Chapter 8
Chapter 10
Chapter 11
Appendix A
Introduction to MATLAB®
MATLAB® and Simulink® ........................................................................................................................A-1
Command Window.......................................................................................................................................A-1
Roots of Polynomials ...................................................................................................................................A-3
Polynomial Construction from Known Roots.........................................................................................A-4
Evaluation of a Polynomial at Specified Values.......................................................................................A-6
Rational Polynomials ....................................................................................................................................A-8
Using MATLAB to Make Plots ................................................................................................................A-10
Subplots ........................................................................................................................................................A-18
Multiplication, Division and Exponentiation .........................................................................................A-18
Script and Function Files...........................................................................................................................A-25
Display Formats ..........................................................................................................................................A-30
Appendix B
Appendix C
his chapter begins with a discussion of elementary signals that may be applied to electric net-
T works. The unit step, unit ramp, and delta functions are introduced. The sampling and sifting
properties of the delta function are defined and derived. Several examples for expressing a vari-
ety of waveforms in terms of these elementary signals are provided.
vS t = 0
+
+ v out open terminals
−
−
For the time interval 0 < t < ∞ , the switch is closed. Then, the input voltage v S appears at the output,
i.e.,
v out = v S for 0 < t < ∞ (1.2)
⎧ 0 –∞ < t < 0
v out = ⎨ (1.3)
⎩ vS 0 < t < ∞
v out
vS
0
t
The waveform of Figure 1.2 is an example of a discontinuous function. A function is said to be dis-
continuous if it exhibits points of discontinuity, that is, the function jumps from one value to another
without taking on any intermediate values.
⎧0 t<0
u0 ( t ) = ⎨ (1.4)
⎩1 t>0
In the waveform of Figure 1.3, the unit step function u 0 ( t ) changes abruptly from 0 to 1 at t = 0 .
But if it changes at t = t 0 instead, it is denoted as u 0 ( t – t 0 ) . Its waveform and definition are as
shown in Figure 1.4 and relation (1.5).
1
u0 ( t – t0 )
t
0 t0
* In some books, the unit step function is denoted as u ( t ) , that is, without the subscript 0. In this text, however, we
will reserve the u ( t ) designation for any input when we discuss state variables in a later chapter.
⎧0 t < t0
u0 ( t – t0 ) = ⎨ (1.5)
⎩1 t > t0
waveform and definition are as shown in Figure 1.5 and relation (1.6).
u0 ( t + t0 )
1
−t0 0 t
⎧0 t < –t0
u0 ( t + t0 ) = ⎨ (1.6)
⎩1 t > –t0
Example 1.1
Consider the network of Figure 1.6, where the switch is closed at time t = T .
vS t = T
+
+ v out open terminals
−
−
Express the output voltage v out as a function of the unit step function, and sketch the appropriate
waveform.
Solution:
For this example, the output voltage v out = 0 for t < T , and v out = v S for t > T . Therefore,
v out = v S u 0 ( t – T ) (1.7)
vS u0 ( t – T )
v out
t
0 T
Τ −Τ
t t t
0 0 0
(a) (b) (c)
−A −A −A
–A u0 ( t ) –A u0 ( t – T ) –A u0 ( t + T )
Au 0 ( – t ) Au 0 ( – t + T ) Au 0 ( – t – T )
A A A
t t t
0 0 Τ (e) −Τ 0 (f)
(d)
Τ −Τ
t t t
0 (g) 0 0 (i)
(h)
−A −A −A
–A u0 ( –t ) –A u0 ( – t + T ) –A u0 ( – t – T )
Figure 1.9. A rectangular pulse expressed as the sum of two unit step functions
Thus, the pulse of Figure 1.9(a) is the sum of the unit step functions of Figures 1.9(b) and 1.9(c) is
represented as u 0 ( t ) – u 0 ( t – 1 ) .
The unit step function offers a convenient method of describing the sudden application of a voltage
or current source. For example, a constant voltage source of 24 V applied at t = 0 , can be denoted
as 24u 0 ( t ) V . Likewise, a sinusoidal voltage source v ( t ) = V m cos ωt V that is applied to a circuit at
t = t 0 , can be described as v ( t ) = ( V m cos ωt )u 0 ( t – t 0 ) V . Also, if the excitation in a circuit is a rect-
angular, or triangular, or sawtooth, or any other recurring pulse, it can be represented as a sum (dif-
ference) of unit step functions.
Example 1.2
Express the square waveform of Figure 1.10 as a sum of unit step functions. The vertical dotted lines
indicate the discontinuities at T, 2T, 3T and so on.
v(t)
A
{ }
T 2T 3T
t
0
–A | ~
Line segment | has height – A , starts at t = T and terminates at t = 2T . This segment is expressed
as
v 2 ( t ) = – A [ u 0 ( t – T ) – u 0 ( t – 2T ) ] (1.9)
Line segment } has height A , starts at t = 2T and terminates at t = 3T . This segment is expressed as
v 3 ( t ) = A [ u 0 ( t – 2T ) – u 0 ( t – 3T ) ] (1.10)
Thus, the square waveform of Figure 1.10 can be expressed as the summation of (1.8) through (1.11),
that is,
v ( t ) = v1 ( t ) + v2 ( t ) + v3 ( t ) + v4 ( t )
= A [ u 0 ( t ) – u 0 ( t – T ) ] – A [ u 0 ( t – T ) – u 0 ( t – 2T ) ] (1.12)
+A [ u 0 ( t – 2T ) – u 0 ( t – 3T ) ] – A [ u 0 ( t – 3T ) – u 0 ( t – 4T ) ]
Example 1.3
Express the symmetric rectangular pulse of Figure 1.11 as a sum of unit step functions.
i(t)
A
t
–T ⁄ 2 0 T⁄2
Example 1.4
Express the symmetric triangular waveform of Figure 1.12 as a sum of unit step functions.
v(t)
1
t
–T ⁄ 2 0 T⁄2
2 v(t) 2
--- t + 1 1 – --- t + 1
T T
{ |
t
–T ⁄ 2 0 T⁄2
v 1 ( t ) = ⎛ --- t + 1⎞ u 0 ⎛ t + --- ⎞ – u 0 ( t )
2 T
⎝T ⎠ ⎝ 2⎠
(1.15)
v 2 ( t ) = ⎛ – --- t + 1⎞ u 0 ( t ) – u 0 ⎛ t – --- ⎞
2 T
⎝ T ⎠ ⎝ 2⎠
(1.16)
Example 1.5
Express the waveform of Figure 1.14 as a sum of unit step functions.
v( t)
3
t
0 1 2 3
v(t)
3
{
2
2t + 1
1 –t+3
|
t
0 1 2 3
or
v ( t ) = ( 2t + 1 )u 0 ( t ) + [ – ( 2t + 1 ) + 3 ]u 0 ( t – 1 )
+ [ – 3 + ( – t + 3 ) ]u 0 ( t – 2 ) – ( – t + 3 )u 0 ( t – 3 )
Two other functions of interest are the unit ramp function, and the unit impulse or delta function. We
will introduce them with the examples that follow.
Example 1.6
In the network of Figure 1.16 i S is a constant current source and the switch is closed at time t = 0 .
R
iS t = 0
+
vC ( t )
−
C
Solution:
The current through the capacitor is i C ( t ) = i S = cons tan t , and the capacitor voltage v C ( t ) is
t
1
∫– ∞ i
v C ( t ) = ---- *
C ( τ ) dτ (1.19)
C
iC ( t ) = iS u0 ( t ) (1.20)
i 0
1 t ---S-
∫–∞ u0 ( τ ) dτ iS t
v C ( t ) = ----
C ∫– ∞ i S u 0 ( τ ) dτ = C + ----
C ∫ 0 u 0 ( τ ) dτ (1.21)
⎧
⎪
⎪
⎨
⎪
⎪
⎩
0
or
iS
v C ( t ) = ----- tu 0 ( t ) (1.22)
C
Therefore, we see that when a capacitor is charged with a constant current, the voltage across it is a
linear function and forms a ramp with slope i S ⁄ C as shown in Figure 1.17.
vC ( t )
slope = i S ⁄ C
t
0
Figure 1.17. Voltage across a capacitor when charged with a constant current source.
* Since the initial condition for the capacitor voltage was not specified, we express this integral with – ∞ at the
lower limit of integration so that any non-zero value prior to t < 0 would be included in the integration.
t
u1 ( t ) = ∫– ∞ u 0 ( τ ) d τ (1.23)
Area = 1 × τ = τ = t
1
t
τ
⎧0 t<0
u1 ( t ) = ⎨ (1.24)
⎩t t≥0
d
----- u 1 ( t ) = u 0 ( t ) (1.25)
dt
Higher order functions of t can be generated by repeated integration of the unit step function. For
example, integrating u 0 ( t ) twice and multiplying by 2, we define u 2 ( t ) as
⎧0 t<0 t
u2 ( t ) = ⎨ 2
⎩t t≥0
or u2 ( t ) = 2 ∫–∞ u1 ( τ ) dτ (1.26)
Similarly,
⎧0 t<0 t
u3 ( t ) = ⎨ 3
⎩t t≥0
or u3 ( t ) = 3 ∫–∞ u2 ( τ ) dτ (1.27)
and in general,
⎧0 t<0 t
un ( t ) = ⎨ n
⎩t t≥0
or un ( t ) = 3 ∫–∞ un – 1 ( τ ) dτ (1.28)
Also,
1d
u n – 1 ( t ) = --- ----- u n ( t ) (1.29)
n dt
Example 1.7
In the network of Figure 1.19, the switch is closed at time t = 0 and i L ( t ) = 0 for t < 0 .
R t = 0
iS +
`
iL ( t )
L −
v (t) L
Solution:
The voltage across the inductor is
di L
v L ( t ) = L ------- (1.30)
dt
d
v L ( t ) = Li S ----- u 0 ( t ) (1.32)
dt
∫– ∞ δ ( τ ) d τ = u0 ( t ) (1.33)
and
To better understand the delta function δ ( t ) , let us represent the unit step u 0 ( t ) as shown in Figure
1.20 (a).
1
Figure (a)
0
t
−ε ε
1
Area =1 2ε Figure (b)
0
−ε ε t
f ( t )δ ( t – a ) = f ( a )δ ( t ) (1.35)
or, when a = 0 ,
f ( t )δ ( t ) = f ( 0 )δ ( t ) (1.36)
that is, multiplication of any function f ( t ) by the delta function δ ( t ) results in sampling the function
at the time instants where the delta function is not zero. The study of discrete-time systems is based
on this property.
Proof:
Since δ ( t ) = 0 for t < 0 and t > 0 then,
f ( t )δ ( t ) = 0 for t < 0 and t > 0 (1.37)
We rewrite f ( t ) as
f(t) = f(0) + [f(t) – f(0)] (1.38)
Integrating (1.37) over the interval – ∞ to t and using (1.38), we get
t t t
∫– ∞ f ( τ )δ ( τ ) dτ = ∫– ∞ f ( 0 )δ ( τ ) dτ + ∫–∞ [ f ( τ ) – f ( 0 ) ]δ ( τ ) dτ (1.39)
The first integral on the right side of (1.39) contains the constant term f ( 0 ) ; this can be written out-
side the integral, that is,
t t
∫– ∞ f ( 0 )δ ( τ ) dτ = f ( 0 ) ∫– ∞ δ ( τ ) d τ (1.40)
The second integral of the right side of (1.39) is always zero because
δ ( t ) = 0 for t < 0 and t > 0
and
[f(τ) – f(0 ) ] τ=0
= f( 0) – f(0 ) = 0
Therefore, (1.39) reduces to
t t
∫– ∞ f ( τ )δ ( τ ) dτ = f ( 0 ) ∫– ∞ δ ( τ ) d τ (1.41)
f ( t )δ ( t ) = f ( 0 )δ ( t )
(1.42)
Sampling Property of δ ( t )
that is, if we multiply any function f ( t ) by δ ( t – α ) , and integrate from – ∞ to +∞ , we will obtain the
value of f ( t ) evaluated at t = α .
Proof:
Let us consider the integral
b
We will use integration by parts to evaluate this integral. We recall from the derivative of products
that
d ( xy ) = xdy + ydx or xdy = d ( xy ) – ydx (1.45)
and integrating both sides we get
∫ x dy = xy – y dx ∫ (1.46)
∫a ∫a u0 ( t – α )f ′( t ) dt
b
f ( t )δ ( t – α ) dt = f ( t )u 0 ( t – α ) – (1.47)
a
We have assumed that a < α < b ; therefore, u 0 ( t – α ) = 0 for α < a , and thus the first term of the
right side of (1.47) reduces to f ( b ) . Also, the integral on the right side is zero for α < a , and there-
fore, we can replace the lower limit of integration a by α . We can now rewrite (1.47) as
b b
∫–∞ f ( t )δ ( t – α ) dt = f ( α ) (1.48)
Sifting Property of δ ( t )
n
n δ
δ ( t ) = ----- [ u 0 ( t ) ] (1.49)
dt
The function δ' ( t ) is called doublet, δ'' ( t ) is called triplet, and so on. By a procedure similar to the
derivation of the sampling property of the delta function, we can show that
Also, the derivation of the sifting property of the delta function can be extended to show that
∞ n
nd
∫
n
f ( t )δ ( t – α ) dt = ( – 1 ) -------n- [ f ( t ) ] (1.51)
–∞ dt t=α
Example 1.8
Evaluate the following expressions:
4
a. 3t δ ( t – 1 )
∞
b. ∫–∞ tδ ( t – 2 ) dt
2
c. t δ' ( t – 3 )
Solution:
4
a. The sampling property states that f ( t )δ ( t – a ) = f ( a )δ ( t – a ) For this example, f ( t ) = 3t and
a = 1 . Then,
4 4
3t δ ( t – 1 ) = { 3t t=1
}δ ( t – 1 ) = 3δ ( t – 1 )
∞
b. The sifting property states that ∫–∞ f ( t )δ ( t – α ) dt = f ( α ) . For this example, f ( t ) = t and α = 2 .
Then,
∞
∫–∞ tδ ( t – 2 ) dt = f ( 2 ) = t t = 2 = 2
c. The given expression contains the doublet; therefore, we use the relation
Example 1.9
a. Express the voltage waveform v ( t ) shown in Figure 1.21 as a sum of unit step functions for the
time interval – 1 < t < 7 s .
b. Using the result of part (a), compute the derivative of v ( t ) and sketch its waveform.
v(t) (V)
1
−1 1 2 3 4 5 6 7
0
t (s)
−1
−2
v ( t ) = 2t [ u 0 ( t + 1 ) – u 0 ( t – 1 ) ] + 2 [ u 0 ( t – 1 ) – u 0 ( t – 2 ) ]
+ ( – t + 5 ) [ u0 ( t – 2 ) – u0 ( t – 4 ) ] + [ u0 ( t – 4 ) – u0 ( t – 5 ) ] (1.52)
+ ( – t + 6 ) [ u0 ( t – 5 ) – u0 ( t – 7 ) ]
–t+5
3
2 –t+6
1
−1 0
1 2 3 4 5 6 7
t (s)
−1
2t
−2
b. The derivative of v ( t ) is
dv
------ = 2u 0 ( t + 1 ) + 2tδ ( t + 1 ) – 2u 0 ( t – 1 ) + ( – 2t + 2 )δ ( t – 1 )
dt
– u 0 ( t – 2 ) + ( – t + 3 )δ ( t – 2 ) + u 0 ( t – 4 ) + ( t – 4 )δ ( t – 4 ) (1.53)
– u 0 ( t – 5 ) + ( – t + 5 )δ ( t – 5 ) + u 0 ( t – 7 ) + ( t – 6 )δ ( t – 7 )
From the given waveform, we observe that discontinuities occur only at t = – 1 , t = 2 , and
t = 7 . Therefore, δ ( t – 1 ) = 0 , δ ( t – 4 ) = 0 , and δ ( t – 5 ) = 0 , and the terms that contain these
delta functions vanish. Also, by application of the sampling property,
2tδ ( t + 1 ) = { 2t t = –1
}δ ( t + 1 ) = – 2δ ( t + 1 )
( – t + 3 )δ ( t – 2 ) = { ( – t + 3 ) t=2
}δ ( t – 2 ) = δ ( t – 2 )
( t – 6 )δ ( t – 7 ) = { ( t – 6 ) t=7
}δ ( t – 7 ) = δ ( t – 7 )
dv
------ = 2u 0 ( t + 1 ) – 2 δ ( t + 1 ) – 2u 0 ( t – 1 ) – u 0 ( t – 2 )
dt (1.54)
+ δ ( t – 2 ) + u0 ( t – 4 ) – u0 ( t – 5 ) + u0 ( t – 7 ) + δ ( t – 7 )
δ(t – 2) δ(t – 7)
1
−1 0 1 2 3 4 5 6 7
t (s)
−1
– 2δ ( t + 1 )
MATLAB* has built-in functions for the unit step, and the delta functions. These are denoted by the
names of the mathematicians who used them in their work. The unit step function u 0 ( t ) is referred
to as Heaviside(t), and the delta function δ ( t ) is referred to as Dirac(t). Their use is illustrated with
the examples below.
syms k a t; % Define symbolic variables
u=k*sym('Heaviside(t-a)') % Create unit step function at t = a
u =
k*Heaviside(t-a)
d=diff(u) % Compute the derivative of the unit step function
d =
k*Dirac(t-a)
1.8 Summary
• The unit step function u 0 ( t ) that is defined as
⎧0 t<0
u0 ( t ) = ⎨
⎩1 t>0
• The unit step function offers a convenient method of describing the sudden application of a volt-
age or current source.
• The unit ramp function, denoted as u 1 ( t ) , is defined as
t
u1 ( t ) = ∫– ∞ u 0 ( τ ) d τ
• The unit impulse or delta function, denoted as δ ( t ) , is the derivative of the unit step u 0 ( t ) . It is also
defined as
t
∫– ∞ δ ( τ ) d τ = u0 ( t )
and
δ ( t ) = 0 for all t ≠ 0
• The sampling property of the delta function states that
f ( t )δ ( t – a ) = f ( a )δ ( t )
or, when a = 0 ,
f ( t )δ ( t ) = f ( 0 )δ ( t )
• The sifting property of the delta function states that
∞
∫–∞ f ( t )δ ( t – α ) dt = f(α)
1.9 Exercises
1. Evaluate the following functions:
a. sin tδ ⎛⎝ t – π
---⎞
6⎠
b. cos 2tδ ⎛⎝ t – π
---⎞
⎠ 4
c. cos t δ ⎛⎝ t – π
---⎞
2
2⎠
d. tan 2tδ ⎛⎝ t – π
---⎞
⎠ 8
∞
2 –t
e. ∫– ∞ t e δ ( t – 2 ) dt
f. sin t δ 1 ⎛⎝ t – π
---⎞
2
2⎠
2.
a. Express the voltage waveform v ( t ) shown in Figure 1.24, as a sum of unit step functions for
the time interval 0 < t < 7 s .
b. Using the result of part (a), compute the derivative of v ( t ) , and sketch its waveform.
20
– 2t
e
10
0
1 2 3 4 5 6 7 t( s)
−10
−20
1. We apply the sampling property of the δ ( t ) function for all expressions except (e) where we apply
the sifting property. For part (f) we apply the sampling property of the doublet.
We recall that the sampling property states that f ( t )δ ( t – a ) = f ( a )δ ( t – a ) . Thus,
π
a. sin tδ ⎛⎝ t – π
---⎞ = sin t
π π π
δ ⎛ t – ---⎞ = sin --- δ ⎛ t – ---⎞ = 0.5δ ⎛ t – ---⎞
6⎠ t = π⁄6 ⎝ 6⎠ 6 ⎝ 6⎠ ⎝ 6⎠
π
b. cos 2tδ ⎛⎝ t – π
---⎞ = cos 2t
⎠ ⎝ 4⎠
---⎞ = cos --- δ ⎛ t – π
δ⎛ t – π --- ⎞ = 0
4 t = π⁄4 2 ⎝ 4⎠
c. cos t δ ⎛⎝ t – π
---⎞ = --- ( 1 + cos 2t )
π π π
δ ⎛ t – ---⎞ = --- ( 1 + cos π )δ ⎛ t – ---⎞ = --- ( 1 – 1 )δ ⎛ t – ---⎞ = 0
2 1 1 1
2⎠ 2 t=π⁄2
⎝ 2⎠ 2 ⎝ 2⎠ 2 ⎝ 2⎠
π
d. tan 2tδ ⎛⎝ t – π
---⎞ = tan 2t
⎠
π π π
δ ⎛ t – ---⎞ = tan --- δ ⎛ t – --- ⎞ = δ ⎛ t – ---⎞
⎝ 8⎠
8 t = π⁄8 4 ⎝ 8⎠ ⎝ 8⎠
∞
We recall that the sampling property states that ∫–∞ f ( t )δ ( t – α ) dt = f ( α ) . Thus,
∞
2 –t 2 –t –2
e. ∫– ∞ t e δ ( t – 2 ) dt = t e t=2
= 4e = 0.54
We recall that the sampling property for the doublet states that
f ( t )δ' ( t – a ) = f ( a )δ' ( t – a ) – f ' ( a )δ ( t – a )
Thus,
π π π
sin t δ ⎛ t – --- ⎞ = sin t δ ⎛ t – --- ⎞ – ----- sin t δ ⎛ t – --- ⎞
2 1 2 1 d 2
⎝ 2⎠ t = π⁄2 ⎝ 2 ⎠ dt t = π⁄2 ⎝ 2⎠
π π
δ ⎛ t – --- ⎞ – sin 2t δ ⎛ t – --- ⎞
1
= --- ( 1 – cos 2t )
1
f. t = π⁄2 ⎝ 2⎠ t = π⁄2 ⎝ 2⎠
2
π π π
= --- ( 1 + 1 )δ ⎛ t – --- ⎞ – sin πδ ⎛ t – --- ⎞ = δ ⎛ t – --- ⎞
1 1 1
2 ⎝ 2⎠ ⎝ 2⎠ ⎝ 2⎠
2.
– 2t
v( t) = e [ u 0 ( t ) – u 0 ( t – 2 ) ] + ( 10t – 30 ) [ u 0 ( t – 2 ) – u 0 ( t – 3 ) ]
a.
+ ( – 10 t + 50 ) [ u 0 ( t – 3 ) – u 0 ( t – 5 ) ] + ( 10t – 70 ) [ u 0 ( t – 5 ) – u 0 ( t – 7 ) ]
or
b.
dv – 2t – 2t – 2t – 2t
------ = – 2e u 0 ( t ) + e δ ( t ) + ( 2e + 10 )u 0 ( t – 2 ) + ( – e + 10t – 30 )δ ( t – 2 )
dt
– 20u 0 ( t – 3 ) + ( – 20t + 80 )δ ( t – 3 ) + 20u 0 ( t – 5 ) + ( 20t – 120 )δ ( t – 5 ) (1)
– 10u 0 ( t – 7 ) + ( – 10t + 70 )δ ( t – 7 )
1 2 3 4 5 6 7 t (s)
– 10
– 10δ ( t – 2 )
– 20
– 2t – 20 δ ( t – 5 )
– 2e
NOTES
his chapter begins with an introduction to the Laplace transformation, definitions, and proper-
T ties of the Laplace transformation. The initial value and final value theorems are also discussed
and proved. It concludes with the derivation of the Laplace transform of common functions
of time, and the Laplace transforms of common waveforms.
σ + jω
–1 1
∫σ – jω
st
L { F ( s ) } = f ( t ) = -------- F ( s ) e ds (2.2)
2πj
–1
where L { f ( t ) } denotes the Laplace transform of the time function f ( t ) , L { F ( s ) } denotes the
Inverse Laplace transform, and s is a complex variable whose real part is σ , and imaginary part ω ,
that is, s = σ + jω .
In most problems, we are concerned with values of time t greater than some reference time, say
t = t 0 = 0 , and since the initial conditions are generally known, the two-sided Laplace transform
pair of (2.1) and (2.2) simplifies to the unilateral or one-sided Laplace transform defined as
∞ ∞
– st – st
L {f(t)}= F(s) = ∫t 0
f( t)e dt = ∫0 f ( t ) e dt (2.3)
σ + jω
–1 1
∫σ – jω
st
L { F ( s ) } = f ( t ) = -------- F ( s ) e ds (2.4)
2πj
The Laplace Transform of (2.3) has meaning only if the integral converges (reaches a limit), that is, if
∞
– st
∫0 f ( t ) e dt < ∞ (2.5)
To determine the conditions that will ensure us that the integral of (2.3) converges, we rewrite (2.5)
as
∞
– σt – jωt
∫0 f ( t )e e dt < ∞ (2.6)
– jωt – jωt
The term e in the integral of (2.6) has magnitude of unity, i.e., e = 1 , and thus the condition
for convergence becomes
∞
– σt
∫0 f ( t )e dt < ∞ (2.7)
Fortunately, in most engineering applications the functions f ( t ) are of exponential order*. Then, we
can express (2.7) as,
∞ ∞ σ 0 t – σt
– σt
∫0 f ( t )e dt < ∫0 ke e dt (2.8)
and we see that the integral on the right side of the inequality sign in (2.8), converges if σ > σ 0 .
Therefore, we conclude that if f ( t ) is of exponential order, L { f ( t ) } exists if
Re { s } = σ > σ 0 (2.9)
σ0 t
* A function f ( t ) is said to be of exponential order if f ( t ) < ke for all t ≥ 0 .
F 1 ( s ), F 2 ( s ), … , F n ( s )
respectively, and
c 1 , c 2 , …, c n
are arbitrary constants, then,
c1 f1 ( t ) + c2 f2 ( t ) + … + cn fn ( t ) ⇔ c1 F1 ( s ) + c2 F2 ( s ) + … + cn Fn ( s ) (2.11)
Proof:
∞
L { c1 f1 ( t ) + c2 f2 ( t ) + … + cn fn ( t ) } = ∫t 0
[ c 1 f 1 ( t ) + c 2 f 2 ( t ) + … + c n f n ( t ) ] dt
∞ ∞ ∞
– st – st – st
= c1 ∫t 0
f1 ( t ) e dt + c 2 ∫t 0
f2 ( t ) e dt + … + c n ∫t 0
fn ( t ) e dt
= c1 F1 ( s ) + c2 F2 ( s ) + … + cn Fn ( s )
Note 1:
It is desirable to multiply f ( t ) by u 0 ( t ) to eliminate any unwanted non-zero values of f ( t ) for t < 0 .
– as
f ( t – a )u 0 ( t – a ) ⇔ e F(s) (2.12)
Proof:
a ∞
– st – st
L { f ( t – a )u 0 ( t – a ) } = ∫0 0e dt + ∫ a f(t – a)e dt (2.13)
Now, we let t – a = τ ; then, t = τ + a and dt = dτ . With these substitutions, the second integral
on the right side of (2.13) becomes
∞ ∞
–s ( τ + a ) – as – sτ – as
∫0 f(τ)e dτ = e ∫0 f ( τ ) e dτ = e F(s)
– at
e f(t) ⇔ F(s + a) (2.14)
Proof:
∞ ∞
– at – at – st – ( s + a )t
L {e f(t)} = ∫0 e f(t)e dt = ∫0 f ( t ) e dt = F ( s + a )
Note 2:
A change of scale is represented by multiplication of the time variable t by a positive scaling factor
a . Thus, the function f ( t ) after scaling the time axis, becomes f ( at ) .
4. Scaling Property
Let a be an arbitrary positive constant; then, the scaling property states that
f ( at ) ⇔ --- F ⎛ --- ⎞
1 s
(2.15)
a ⎝a⎠
Proof:
∞
– st
L { f ( at ) } = ∫0 f ( at ) e dt
d −
f ' ( t ) = ----- f ( t ) ⇔ sF ( s ) – f ( 0 ) (2.16)
dt
Proof:
∞
– st
L {f '(t)} = ∫0 f ' ( t ) e dt
∫ v du = uv – u dv ∫ (2.17)
– st – st
we let du = f ' ( t ) and v = e . Then, u = f ( t ) , dv = – se , and thus
∞
– st ∞ – st – st a
L { f ' ( t ) } = f ( t )e
0
−
+s ∫0 −
f(t)e dt = lim
a→∞
f ( t )e
0
−
+ sF ( s )
– sa − −
= lim [ e f ( a ) – f ( 0 ) ] + sF ( s ) = 0 – f ( 0 ) + sF ( s )
a→∞
d2
-------- f ( t ) ⇔ s 2 F ( s ) – sf ( 0 − ) – f ' ( 0 − ) (2.18)
2
dt
d3
-------- f ( t ) ⇔ s 3 F ( s ) – s 2 f ( 0 − ) – sf ' ( 0 − ) – f '' ( 0 − ) (2.19)
3
dt
and in general
n
d
-------- f ( t ) ⇔ s n F ( s ) – s n – 1 f ( 0 − ) – s n – 2 f ' ( 0 − ) – … – f n–1
(0 )
−
(2.20)
n
dt
We must remember that the terms f ( 0 − ), f ' ( 0 − ), f '' ( 0 − ) , and so on, represent the initial conditions.
Therefore, when all initial conditions are zero, and we differentiate a time function f ( t ) n times,
this corresponds to F ( s ) multiplied by s to the nth power.
d
tf ( t ) ⇔ – ----- F ( s ) (2.21)
ds
Proof:
∞
– st
L { f( t)} = F(s) = ∫0 f ( t ) e dt
Differentiating with respect to s, and applying Leibnitz’s rule* for differentiation under the integral, we
get
∞ ∞ ∞
d- d – st ∂ e – st f ( t )dt = – st
----
ds
F ( s ) = -----
ds ∫0 f( t )e dt = ∫0 ∂s ∫0 – t e f ( t )dt
∞
– st
= – ∫0 [ tf ( t ) ] e dt = – L [ tf ( t ) ]
In general,
n
n nd
t f ( t ) ⇔ ( – 1 ) -------n- F ( s ) (2.22)
ds
The proof for n ≥ 2 follows by taking the second and higher-order derivatives of F ( s ) with respect
to s .
7. Integration in Time Domain
This property states that integration in time domain corresponds to F ( s ) divided by s plus the initial
value of f ( t ) at t = 0 − , also divided by s . That is,
t −
F ( s -) + f------------
(0 )
∫ –∞
f ( τ ) dτ ⇔ ----------
s s
(2.23)
b
* This rule states that if a function of a parameter α is defined by the equation F ( α ) = ∫a f ( x, α ) dx where f is some
known function of integration x and the parameter α , a and b are constants independent of x and α , and the par-
dF- b
∂( x, α )
tial derivative ∂f ⁄ ∂α exists and it is continuous, then ------ =
dα ∫a ----------------
∂( α )
- dx .
Proof:
We express the integral of (2.23) as two integrals, that is,
t 0 t
∫– ∞ f ( τ ) dτ = ∫– ∞ f ( τ ) dτ + ∫ 0 f ( τ ) dτ (2.24)
The first integral on the right side of (2.24), represents a constant value since neither the upper, nor
the lower limits of integration are functions of time, and this constant is an initial condition denoted
as f ( 0 − ) . We will find the Laplace transform of this constant, the transform of the second integral
on the right side of (2.24), and will prove (2.23) by the linearity property. Thus,
∞ ∞ – st ∞
– st – st e
∫0 f ( 0 ) e ∫0 e
− − − −
L {f (0 )} = dt = f ( 0 ) dt = f ( 0 ) --------
–s 0 (2.25)
− −
f(0 ) f(0 )
= f ( 0 ) × 0 – ⎛ – ------------⎞ = ------------
−
⎝ s ⎠ s
This is the value of the first integral in (2.24). Next, we will show that
t
F(s)
∫0 f ( τ ) dτ ⇔ ----------
s
-
We let
t
g(t) = ∫0 f ( τ ) dτ
then,
g' ( t ) = f ( τ )
and
0
g( 0) = ∫0 f ( τ ) dτ = 0
Now,
−
L { g' ( t ) } = G ( s ) = sL { g ( t ) } – g ( 0 ) = G ( s ) – 0
sL { g ( t ) } = G ( s )
G(s)
L { g ( t ) } = -----------
s
⎧ t ⎫ F(s)
L ⎨
⎩
∫0 f ( τ ) dτ ⎬⎭ = -----------
s
(2.26)
∞
f( t)
-------- ⇔
t ∫s F ( s ) ds (2.27)
Proof:
∞
– st
F(s) = ∫0 f ( t ) e dt
and performing the inner integration on the right side integral with respect to s , we get
∞ ∞ ∞ ⎧ f ( t )⎫
– st ∞ f(t) – st
∫s F ( s ) ds = ∫0 –1
--- e
t s
f ( t ) dt = ∫0 -------t - e dt = L ⎨ --------⎬
⎩ t ⎭
9. Time Periodicity
The time periodicity property states that a periodic function of time with period T corresponds to
T
– st – sT
the integral ∫0 f ( t ) e dt divided by ( 1 – e ) in the complex frequency domain. Thus, if we let f ( t )
be a periodic function with period T , that is, f ( t ) = f ( t + nT ) , for n = 1, 2, 3, … we get the trans-
form pair
T – st
∫0 f ( t ) e dt
f ( t + nT ) ⇔ ----------------------------
– sT
- (2.28)
1–e
Proof:
The Laplace transform of a periodic function can be expressed as
∞ T 2T 3T
– st – st – st – st
L {f(t)} = ∫0 f( t )e dt = ∫0 f( t)e dt + ∫T f( t)e dt + ∫ 2T f ( t ) e dt + …
In the first integral of the right side, we let t = τ , in the second t = τ + T , in the third t = τ + 2T ,
and so on. The areas under each period of f ( t ) are equal, and thus the upper and lower limits of
integration are the same for each integral. Then,
T T T
– sτ –s ( τ + T ) – s ( τ + 2T )
L {f(t)} = ∫0 f(τ)e dτ + ∫0 f(τ + T )e dτ + ∫0 f ( τ + 2T ) e dτ + … (2.29)
Proof:
From the time domain differentiation property,
d- −
---- f ( t ) ⇔ sF ( s ) – f ( 0 )
dt
or
⎧d ⎫ ∞
d
----- f ( t ) e –st dt
∫0
−
L ⎨ ----- f ( t ) ⎬ = sF ( s ) – f ( 0 ) =
⎩ dt ⎭ dt
T
d – st
∫ ----- f ( t ) e
−
lim [ sF ( s ) – f ( 0 ) ] = lim lim dt
s→∞ s→∞ T → ∞ ε dt
ε→0
Proof:
From the time domain differentiation property,
d
----- f ( t ) ⇔ sF ( s ) – f ( 0 − )
dt
or
⎧d ⎫ ∞
d
----- f ( t ) e – st dt
∫0
−
L ⎨ ----- f ( t ) ⎬ = sF ( s ) – f ( 0 ) =
⎩ dt ⎭ dt
T
d – st
∫ ----- f ( t ) e
−
lim [ sF ( s ) – f ( 0 ) ] = lim lim dt
s→0 s→0 T → ∞ ε dt
ε→0
Proof:
∞ ∞ ∞
– st
L { f 1 ( t )*f 2 ( t ) } = L ∫– ∞ f 1 ( τ )f 2 ( t – τ ) dτ = ∫0 ∫0 f1 ( τ )f2 ( t – τ ) dτ e dt
(2.35)
∞ ∞
– st
= ∫ 0 f 1 ( τ ) ∫0 f 2 ( t – τ ) e dt dτ
* Convolution is the process of overlapping two signals. The convolution of two time functions f 1 ( t ) and f 2 ( t ) is
∞
denoted as f 1 ( t )*f 2 ( t ) , and by definition, f 1 ( t )*f 2 ( t ) = ∫–∞ f1 ( τ )f2 ( t – τ ) dτ where τ is a dummy variable. We will
discuss it in detail in Chapter 6.
∞ ∞ ∞ ∞
–s ( λ + τ ) – sτ – sλ
L { f 1 ( t )*f 2 ( t ) } = ∫0 f1 ( τ ) ∫0 f2 ( λ ) e dλ dτ = ∫0 f 1 ( τ )e dτ ∫0 f 2 ( λ ) e dλ
= F 1 ( s )F 2 ( s )
σ + jω ∞
1 – ( s – µ )t
= --------
2πj ∫σ – jω F1 ( µ ) ∫0 f 2 ( t ) e dt dµ
For easy reference, we have summarized the Laplace transform pairs and theorems in Table 2.1.
2 Time Shifting f ( t – a )u 0 ( t – a ) – as
e F(s)
3 Frequency Shifting – as F(s + a)
e f(t)
4 Time Scaling f ( at ) 1 ⎛ s⎞
--- F ---
a ⎝ a⎠
5 Time Differentiation d −
----- f ( t ) sF ( s ) – f ( 0 )
See also (2.18) through (2.20) dt
6 Frequency Differentiation tf ( t ) d
– ----- F ( s )
See also (2.22) ds
7 Time Integration t −
∫–∞ f ( τ ) dτ F(s) f (0 )
----------- + ------------
s s
8 Frequency Integration f( t) ∞
--------
t ∫s F ( s ) d s
9 Time Periodicity f ( t + nT ) T – st
∫0 f ( t ) e dt
--------------------------------
– sT
1–e
10 Initial Value Theorem lim f ( t ) −
t→0 lim sF ( s ) = f ( 0 )
s→∞
11 Final Value Theorem lim f ( t ) lim sF ( s ) = f ( ∞ )
t→∞ s→0
12 Time Convolution f 1 ( t )*f 2 ( t ) F 1 ( s )F 2 ( s )
13 Frequency Convolution f 1 ( t )f 2 ( t ) 1
-------- F 1 ( s )*F 2 ( s )
2πj
Solution:
We start with the definition of the Laplace transform, that is,
∞
– st
L { f(t)} = F(s) = ∫0 f ( t ) e dt
1
u 0 ( t ) ⇔ --- (2.38)
s
for Re { s } = σ > 0 .*
Example 2.2
Find L { u 1 ( t ) }
Solution:
We apply the definition
∞
– st
L { f( t)} = F(s) = ∫0 f ( t ) e dt
∫ u dv = uv – v du ∫ (2.39)
We let
– st
u = t and dv = e
then,
– st
du = 1 and v = – e -
----------
s
Since the upper limit of integration in (2.40) produces an indeterminate form, we apply L’ Hôpital’s
rule*, that is,
d
(t)
– st t = lim d t
= lim ----- 1
lim te - ---------------- = lim -------- = 0
t→∞ t → ∞ e st t→∞ d st t → ∞ se st
(e )
dt
1
t ⇔ ---2- (2.41)
s
for σ > 0 .
Example 2.3
n
Find L { t u 0 ( t ) }
Solution:
To find the Laplace transform of this function, we must first review the gamma or generalized facto-
rial function Γ ( n ) which is defined as
∞
n – 1 –x
Γ(n) = ∫0 x e dx (2.42)
f( x)
* Often, the ratio of two functions, such as ---------- , for some value of x, say a, results in an indeterminate form. To
g(x)
f(x)
work around this problem, we consider the limit lim ---------- , and we wish to find this limit, if it exists. To find this
x→a g(x)
d d
limit, we use L’Hôpital’s rule which states that if f ( a ) = g ( a ) = 0 , and if the limit ------ f ( x ) ⁄ ------ g ( x ) as x
dx dx
f ( x ) = lim ⎛ -----
approaches a exists, then, lim ----------
d-
f ( x ) ⁄ ------ g ( x )⎞
d
x → a g(x) x → a ⎝ dx dx ⎠
The integral of (2.42) is an improper integral* but converges (approaches a limit) for all n > 0 .
We will now derive the basic properties of the gamma function, and its relation to the well known
factorial function
n! = n ( n – 1 ) ( n – 2 ) ⋅ ⋅ 3 ⋅ 2 ⋅ 1
The integral of (2.42) can be evaluated by performing integration by parts. Thus, in (2.39) we let
–x n–1
u = e and dv = x
Then,
n
–x
du = – e dx and v = x-----
n
and (2.42) is written as
n –x ∞ ∞
1 n –x
∫0 x e
x e
Γ ( n ) = ------------ + --- dx (2.43)
n x=0
n
With the condition that n > 0 , the first term on the right side of (2.43) vanishes at the lower limit
x = 0 . It also vanishes at the upper limit as x → ∞ . This can be proved with L’ Hôpital’s rule by dif-
ferentiating both numerator and denominator m times, where m ≥ n . Then,
m m–1
d n d n–1
n –x n m
x m–1
nx
x e - = lim -------
lim ----------- dx - = lim d
x - = lim ------------------ x - = …
-----------------------------------
x→∞ n x → ∞ ne x x→∞ d m x→∞ m–1 x
x d
m
ne m–1
ne
dx dx
n–m
n ( n – 1 ) ( n – 2 )… ( n – m + 1 )x
= lim ----------------------------------------------------------------------------------- ( n – 1 ) ( n – 2 )… ( n – m + 1 )
- = lim -------------------------------------------------------------------- = 0
x→∞ ne
x x→∞ m–n x
x e
Therefore, (2.43) reduces to
∞
1 n –x
Γ ( n ) = ---
n ∫0 x e dx
b
b. ∫a f ( x ) dx where f(x) becomes infinite at a value x between the lower and upper limits of integration inclusive.
∞ ∞
n– 1 –x 1 n –x
Γ(n) = ∫0 x e dx = ---
n ∫0 x e dx (2.44)
Γ(n + 1)
Γ ( n ) = --------------------- (2.45)
n
or
nΓ ( n ) = Γ ( n + 1 ) (2.46)
It is convenient to use (2.45) for n < 0 , and (2.46) for n > 0 . From (2.45), we see that Γ ( n ) becomes
infinite as n → 0 .
Γ ( n + 1 ) = n! (2.50)
for n = 1, 2, 3, …
The formula of (2.50) is a noteworthy relation; it establishes the relationship between the Γ ( n )
function and the factorial n!
n
We now return to the problem of finding the Laplace transform pair for t u 0 t , that is,
∞
n – st
∫0 t
n
L { t u0 t } = e dt (2.51)
To make this integral resemble the integral of the gamma function, we let st = y , or t = y ⁄ s , and
n n!
t u 0 ( t ) ⇔ ----------
n+1
- (2.52)
s
Example 2.4
Find L { δ ( t )}
Solution:
∞
– st
L { δ ( t )} = ∫0 δ ( t ) e dt
δ(t) ⇔ 1 (2.53)
for all σ .
Example 2.5
Find L { δ ( t – a ) }
Solution:
∞
– st
L {δ(t – a)} = ∫0 δ ( t – a ) e dt
and again, using the sifting property of the delta function, we get
∞
– st – as
L {δ(t – a)} = ∫0 δ ( t – a ) e dt = e
– as
δ(t – a) ⇔ e (2.54)
for σ > 0 .
Example 2.6
– at
Find L { e u0 ( t ) }
Solution:
∞ ∞
– at – at – st – ( s + a )t
L {e u0 ( t ) } = ∫0 e e dt = ∫0 e dt
∞
= ⎛ – -----------⎞ e
1 – ( s + a )t 1
= -----------
⎝ s + a⎠ s+a
0
– at 1
e u 0 ( t ) ⇔ ----------- (2.55)
s+a
for σ > – a .
Example 2.7
⎧ n – at ⎫
Find L ⎨ t e u0 ( t ) ⎬
⎩ ⎭
Solution:
For this example, we will use the transform pair of (2.52), i.e.,
n n!
t u 0 ( t ) ⇔ ----------
n+1
- (2.56)
s
and the frequency shifting property of (2.14), that is,
– at
e f( t) ⇔ F( s + a) (2.57)
n – at n!
t e u 0 ( t ) ⇔ -------------------------
n+1
(2.58)
(s + a)
where n is a positive integer, and σ > – a . Thus, for n = 1 , we get the transform pair
– at 1
te u 0 ( t ) ⇔ ------------------2 (2.59)
(s + a)
for σ > – a .
For n = 2 , we get the transform
2 – at 2!
t e u 0 ( t ) ⇔ ------------------3 (2.60)
(s + a)
and in general,
n – at n!
t e u 0 ( t ) ⇔ -------------------------
n+1
(2.61)
(s + a)
for σ > – a .
Example 2.8
Find L { sin ωt u 0 ( t ) }
Solution:
∞ a
– st – st
L { sin ωt u 0 ( t ) } = ∫0 ( sin ωt ) e dt = lim ∫
a→∞ 0
( sin ωt ) e dt
1 jωt – jωt – at 1
* This can also be derived from sin ωt = ----- ( e – e ) , and the use of (2.55) where e u 0 ( t ) ⇔ ----------- . By the lin-
j2 s+a
earity property, the sum of these terms corresponds to the sum of their Laplace transforms. Therefore,
1 ⎛ ------------- ω
1 ⎞ = -----------------
1 - – --------------
L [ sin ωtu 0 ( t ) ] = -----
j2 ⎝ s – jω s + jω⎠
s + ω2
2
ax
e ( a sin bx – b cos bx )
∫
ax
e sin bx dx = ------------------------------------------------------
2 2
a +b
Then,
– st a
( – s sin ωt – ω cos ωt )
L { sin ωt u 0 ( t ) } = lim e----------------------------------------------------------
-
a→∞ 2
s +ω
2
0
– as
= lim ( – s sin ωa – ω cos ωa ) + ----------------
e-------------------------------------------------------------- ω - = ----------------
ω -
a→∞ 2
s +ω
2 2
s +ω
2 2
s +ω
2
ω
sin ωt u 0 t ⇔ ----------------
2
-
2
(2.62)
s +ω
for σ > 0 .
Example 2.9
Find L { cos ω t u 0 ( t ) }
Solution:
∞ a
– st – st
L { cos ω t u 0 ( t ) } = ∫0 ( cos ωt ) e dt = lim
a → ∞∫ 0
( cos ωt ) e dt
1 jωt – jωt
* We can use the relation cos ωt = --- ( e + e ) and the linearity property, as in the derivation of the transform
2
d −
of sin ω t on the footnote of the previous page. We can also use the transform pair ----- f ( t ) ⇔ sF ( s ) – f ( 0 ) ; this
dt
is the time differentiation property of (2.16). Applying this transform pair for this derivation, we get
1d 1 d 1 ω s
L [ cos ω tu 0 ( t ) ] = L ---- ----- sin ω tu 0 ( t ) = ---- L ----- sin ω tu 0 ( t ) = ---- s ----------------- = -----------------
ω dt ω dt ω s2 + ω2 s2 + ω2
– st a
e ( – s cos ωt + ω sin ωt )
L { cos ω t u 0 ( t ) } = lim ----------------------------------------------------------
-
a→∞ 2
s +ω
2
0
– as
e ( – s cos ωa + ω sin ωa ) s s
= lim --------------------------------------------------------------- + ----------------- = -----------------
a→∞ 2
s +ω
2 2
s +ω
2 2
s +ω
2
s -
cos ω t u 0 t ⇔ ----------------
2 2
(2.63)
s +ω
for σ > 0 .
Example 2.10
– at
Find L { e sin ωt u 0 ( t ) }
Solution:
Since
ω
sin ωtu 0 t ⇔ ----------------
2
-
2
s +ω
using the frequency shifting property of (2.14), that is,
– at
e f(t) ⇔ F(s + a ) (2.64)
– at ω
e sin ωt u 0 ( t ) ⇔ -------------------------------
2 2
(2.65)
(s + a) + ω
for σ > 0 and a > 0 .
Example 2.11
– at
Find L { e cos ω t u 0 ( t ) }
Solution:
Since
s -
cos ω t u 0 ( t ) ⇔ ----------------
2 2
s +ω
using the frequency shifting property of (2.14), we replace s with s + a , and we get
– at s+a
e cos ω t u 0 ( t ) ⇔ -------------------------------
2 2
(2.66)
(s + a) + ω
for σ > 0 and a > 0 .
For easy reference, we have summarized the above derivations in Table 2.2.
f (t) F(s)
1 u0 ( t ) 1⁄s
2 t u0 ( t ) 1⁄s
2
3 n
t u0 ( t ) n!
-----------
n+1
s
4 δ(t) 1
5 δ(t – a) e
– as
6 – at 1
e u0 ( t ) -----------
s+a
7 n – at n!
t e u0 ( t ) -------------------------
n+1
(s + a)
8 sin ωt u 0 ( t ) ω
-----------------
2 2
s +ω
9 cos ω t u 0 ( t ) s -
----------------
2 2
s +ω
10 e
– at
sin ωt u 0 ( t ) ω
-------------------------------
2 2
(s + a) + ω
11 e
– at
cos ω t u 0 ( t ) s+a
-------------------------------
2 2
(s + a) + ω
Example 2.12
Find the Laplace transform of the waveform f P ( t ) of Figure 2.1. The subscript P stands for pulse.
fP ( t )
A
t
0 a
Solution:
We first express the given waveform as a sum of unit step functions. Then,
fP ( t ) = A [ u0 ( t ) – u0 ( t – a ) ] (2.67)
A –as A A – as
A [ u 0 ( t ) – u 0 ( t – a ) ] ⇔ --- – e --- = --- ( 1 – e )
s s s
Example 2.13
Find the Laplace transform for the waveform f L ( t ) of Figure 2.2. The subscript L stands for line.
fL ( t )
1
t
0 1 2
Figure 2.2. Waveform for Example 2.13
Solution:
We must first derive the equation of the linear segment. This is shown in Figure 2.3. Then, we
express the given waveform in terms of the unit step function.
fL ( t ) t–1
1
t
0 1 2
Figure 2.3. Waveform for Example 2.13 with the equation of the linear segment
–s 1
( t – 1 )u 0 ( t – 1 ) ⇔ e ---2- (2.68)
s
Example 2.14
Find the Laplace transform for the triangular waveform f T ( t ) of Figure 2.4.
Solution:
We must first derive the equations of the linear segments. These are shown in Figure 2.5. Then, we
express the given waveform in terms of the unit step function.
fT ( t )
1
t
0 1 2
Figure 2.4. Waveform for Example 2.14
fT ( t )
1 t –t+2
t
0 1 2
Figure 2.5. Waveform for Example 2.13 with the equations of the linear segments
1 –s 2
f T ( t ) ⇔ ---2- ( 1 – e ) (2.69)
s
Example 2.15
Find the Laplace transform for the rectangular periodic waveform f R ( t ) of Figure 2.6.
fR ( t )
A
t
0 a 2a 3a
−A
Figure 2.6. Waveform for Example 2.15
Solution:
This is a periodic waveform with period T = 2a , and thus we can apply the time periodicity prop-
erty
T
– sτ
∫0 f ( τ ) e
dτ
L { f ( τ ) } = -------------------------------
– sT
1–e
– st a– st 2a
- –
A e e
= --------------------
– 2as
----------
- + -------
-
1–e s 0 s a
or
A - ( – e – as + 1 + e – 2as – e –as )
L { f R ( t ) } = ---------------------------
– 2as
s(1 – e )
– as 2
A A(1 – e )
- ( 1 – 2e –as + e – 2as ) = -------------------------------------------------
= --------------------------- -
– 2as – as – as
s(1 – e ) s(1 + e )(1 – e )
⎛ A ( 1 – e – as ) A ⎛ e as ⁄ 2 e – as ⁄ 2 – e – as ⁄ 2 e – as ⁄ 2⎞ ⎞
- = --- ⎜ ---------------------------------------------------------------⎟ ⎟
= ⎜ --- ----------------------
⎝ s ( 1 + e –as ) s ⎝ e as ⁄ 2 e –as ⁄ 2 + e –as ⁄ 2 e –as ⁄ 2⎠ ⎠
– as ⁄ 2 ⎛ as ⁄ 2 – as ⁄ 2 ⎞
Ae e –e A sinh ( as ⁄ 2 )
- ⎜ --------------------------------- ⎟ = --- -----------------------------
= --- -------------
s e – as ⁄ 2 ⎝ e as ⁄ 2 + e –as ⁄ 2 ⎠ s cosh ( as ⁄ 2 )
or
Example 2.16
Find the Laplace transform for the half-rectified sine wave f HW ( t ) of Figure 2.7.
1
sint f HW ( t )
π 2π 3π 4π
T
– sτ
∫0 f ( τ ) e dτ
L { f ( τ ) } = -------------------------------
– sT
1–e
π
– st – πs
1 e ( s sin t – cos t ) 1 (1 + e )
= --------------------
– 2πs
- ------------------------------------------
2
= ------------------
2
--------------------------
– 2πs
1–e s +1 (s + 1) (1 – e )
0
– πs
(1 + e )
1 -----------------------------------------------
L { f HW ( t ) } = ------------------
2 – πs – πs
(s + 1) (1 + e )(1 – e )
or
1
f HW ( t ) ⇔ ------------------------------------------
2 – πs
(2.71)
(s + 1)(1 – e )
2.5 Summary
• The two-sided or bilateral Laplace Transform pair is defined as
∞
– st
L {f(t )}= F(s ) = ∫– ∞ f ( t ) e dt
σ + jω
–1 1
∫σ – jω
st
L { F ( s ) } = f ( t ) = -------- F ( s ) e ds
2πj
–1
where L { f ( t ) } denotes the Laplace transform of the time function f ( t ) , L { F ( s ) } denotes
the Inverse Laplace transform, and s is a complex variable whose real part is σ , and imaginary
part ω , that is, s = σ + jω .
• The unilateral or one-sided Laplace transform defined as
∞ ∞
– st – st
L {f(t)}= F(s) = ∫t 0
f(t)e dt = ∫0 f ( t ) e dt
• We denote transformation from the time domain to the complex frequency domain, and vice
versa, as
f(t) ⇔ F(s)
• The linearity property states that
c1 f1 ( t ) + c2 f2 ( t ) + … + cn fn ( t ) ⇔ c1 F1 ( s ) + c2 F2 ( s ) + … + cn Fn ( s )
f ( at ) ⇔ --- F ⎛ --- ⎞
1 s
a ⎝ a⎠
d −
f ' ( t ) = ----- f ( t ) ⇔ sF ( s ) – f ( 0 )
dt
d 2- − −
-------
2
f ( t ) ⇔ s 2 F ( s ) – sf ( 0 ) – f ' ( 0 )
dt
d3
-------- f ( t ) ⇔ s 3 F ( s ) – s 2 f ( 0 − ) – sf ' ( 0 − ) – f '' ( 0 − )
3
dt
and in general
n
d
-------- f ( t ) ⇔ s n F ( s ) – s n – 1 f ( 0 − ) – s n – 2 f ' ( 0 − ) – … – f n–1 −
(0 )
n
dt
where the terms f ( 0 − ), f ' ( 0 − ), f '' ( 0 − ) , and so on, represent the initial conditions.
• The differentiation in complex frequency domain property states that
d
tf ( t ) ⇔ – ----- F ( s )
ds
and in general,
n
n nd
t f ( t ) ⇔ ( – 1 ) -------n- F ( s )
ds
lim f ( t ) = lim sF ( s ) = f ( ∞ )
t→∞ s→0
• Convolution in the time domain corresponds to multiplication in the complex frequency domain,
that is,
f 1 ( t )*f 2 ( t ) ⇔ F 1 ( s )F 2 ( s )
• The Laplace transforms of some common functions of time are shown below.
u0 ( t ) ⇔ 1 ⁄ s
t ⇔ 1 ⁄ s2
n n! -
t u 0 ( t ) ⇔ ----------
n+1
s
δ(t) ⇔ 1
– as
δ(t – a) ⇔ e
– at 1
e u 0 ( t ) ⇔ -----------
s+a
– at 1
te u 0 ( t ) ⇔ ------------------2
(s + a)
2 – at 2!
t e u 0 ( t ) ⇔ ------------------3
(s + a)
n – at n!
t e u 0 ( t ) ⇔ -------------------------
n+1
(s + a)
ω -
sin ωt u 0 t ⇔ ----------------
2 2
s +ω
s
cos ω t u 0 t ⇔ ----------------
2
-
2
s +ω
– at ω
e sin ωt u 0 ( t ) ⇔ -------------------------------
2 2
(s + a) + ω
– at s+a
e cos ω t u 0 ( t ) ⇔ -------------------------------
2 2
(s + a) + ω
fP ( t )
A
t
0 a
A – as A A – as
A [ u 0 ( t ) – u 0 ( t – a ) ] ⇔ --- – e --- = --- ( 1 – e )
s s s
fL ( t )
1
t
0 1 2
–s 1
( t – 1 )u 0 ( t – 1 ) ⇔ e ---2-
s
fT ( t )
1
t
0 1 2
1 –s 2
f T ( t ) ⇔ ---2- ( 1 – e )
s
fR ( t )
A
t
0 a 2a 3a
−A
1
sint f HW ( t )
π 2π 3π 4π
1
f HW ( t ) ⇔ ------------------------------------------
2 – πs
(s + 1 )(1 – e )
2.6 Exercises
1. Find the Laplace transform of the following time domain functions:
a. 12
b. 6u0 ( t )
c. 24u 0 ( t – 12 )
d. 5tu 0 ( t )
5
e. 4t u 0 ( t )
7 – 5t
d. 8t e u0 ( t )
e. 15δ ( t – 4 )
3. Find the Laplace transform of the following time domain functions:
3 2
a. ( t + 3t + 4t + 3 )u 0 ( t )
b. 3 ( 2t – 3 )δ ( t – 3 )
c. ( 3 sin 5t )u 0 ( t )
d. ( 5 cos 3t )u 0 ( t )
– 3t
d. 8e cos 4t
e. ( cos t )δ ( t – π ⁄ 4 )
5. Find the Laplace transform of the following time domain functions:
a. 5tu 0 ( t – 3 )
2
b. ( 2t – 5t + 4 )u 0 ( t – 3 )
– 2t
c. ( t – 3 )e u0 ( t – 2 )
2(t – 2)
d. ( 2t – 4 )e u0 ( t – 3 )
– 3t
e. 4te ( cos 2t )u 0 ( t )
a. d ( sin 3t )
dt
b. d ( 3e )
– 4t
dt
c. d ( t cos 2t )
2
dt
d. d ( e sin 2t )
– 2t
dt
e. d ( t e )
2 – 2t
dt
7. Find the Laplace transform of the following time domain functions:
sin t
a. ---------
t
t
sin τ
b. ∫0 ---------
τ
- dτ
sin at
c. ------------
t
∞
cos τ
d. ∫t ----------- dτ
τ
∞ –τ
e-------
e. ∫t τ
dτ
8. Find the Laplace transform for the sawtooth waveform f ST ( t ) of Figure 2.8.
f ST ( t )
A
t
a 2a 3a
9. Find the Laplace transform for the full rectification waveform f FR ( t ) of Figure 2.9.
a 2a 3a 4a
– 3s
b. 3 ( 2t – 3 )δ ( t – 3 ) = 3 ( 2t – 3 ) t=3
δ ( t – 3 ) = 9δ ( t – 3 ) and 9δ ( t – 3 ) ⇔ 9e
2 2
5 s sin 4t 4 ⁄ (s + 2 )
- d. 5 ⋅ ---------------- e. 2 tan 4t = 2 ⋅ ------------- ⇔ 2 ⋅ ---------------------------- = 8
c. 3 ⋅ --------------- ---
2 2 2 2 cos 4t 2 2
s +5 s +3 s ⁄ (s + 2 ) s
This answer looks suspicious because 8 ⁄ s ⇔ 8u 0 ( t ) and the Laplace transform is unilateral,
that is, there is one-to-one correspondence between the time domain and the complex fre-
quency domain. The fallacy with this procedure is that we assumed that if f 1 ( t ) ⇔ F 1 ( s ) and
f1 ( t ) F1 ( s )
f 2 ( t ) ⇔ F 2 ( s ) , we cannot conclude that ---------- ⇔ ------------- .
f2 ( t ) F2 ( s )
1
For this exercise f 1 ( t ) ⋅ f 2 ( t ) = sin 4t ⋅ ------------- and as we’ve learned multiplication in the time
cos 4t
domain corresponds to convolution in the complex frequency domain. Accordingly, we must
∞
– st
use the Laplace transform definition ∫0 ( 2 tan 4t )e dt and this requires integration by parts. We
skip this analytical derivation. The interested reader may try to find the answer with the MAT-
LAB code syms s t; 2*laplace(sin(4*t)/cos(4*t))
4. From (2.22)
n
n nd
t f ( t ) ⇔ ( – 1 ) -------n- F ( s )
ds
Then,
a.
– 5 ⋅ ( 2s )
3 ( – 1 ) ----- ⎛ ---------------
1 d 5 ⎞ 30s -
- = – 3 ------------------------ = -----------------------
⎝
ds s + 5 2 2 ⎠ 2 2 2
( s + 25 ) 2
( s + 25 )
b.
d ⎛ –s +9⎞
2 2 2 2
d s + 3 – s ( 2s )
2 ( – 1 ) -------2- ⎛ ---------------
2 d s ⎞
- = 2 ----- ----------------------------------- = 2 ----- ⎜ ---------------------⎟
⎝ 2 2⎠ ds ⎝ 2
(s + 9) ⎠
ds 2 2
ds s + 3 2
(s + 9)
2 2 2 2
( s + 9 ) ( – 2s ) – 2 ( s + 9 ) ( 2s ) ( – s + 9 -)
= 2 ⋅ -------------------------------------------------------------------------------------------------
4
2
(s + 9)
2 2 3 3
( s + 9 ) ( – 2s ) – 4s ( – s + 9 -) – 2s – 18s + 4s – 36s-
= 2 ⋅ -------------------------------------------------------------------
3
= 2 ⋅ -------------------------------------------------------
3
2 2
(s + 9) (s + 9)
3 2
2s – 54s 2s ( s – 27 ) 4s ( s 2 – 27 )
= 2 ⋅ ----------------------3 = 2 ⋅ --------------------------
3
- = ---------------------------
2 2 3
(s + 9) (s + 9) 2
(s + 9)
c.
2×5
------------------------------ 10
= ------------------------------
-
2 2 2
(s + 5) + 5 ( s + 5 ) + 25
d.
8 ( s + 3 ) - = ------------------------------
----------------------------- 8(s + 3) -
2 2 2
(s + 3) + 4 ( s + 3 ) + 16
e.
– ( π ⁄ 4 )s
cos t π⁄4
δ ( t – π ⁄ 4 ) = ( 2 ⁄ 2 )δ ( t – π ⁄ 4 ) and ( 2 ⁄ 2 )δ ( t – π ⁄ 4 ) ⇔ ( 2 ⁄ 2 )e
5.
a.
– 3s ⎛
---- + ------ ⎞ = --- e ⎛ --- + 3⎞
5 15 5 –3s 1
5tu 0 ( t – 3 ) = [ 5 ( t – 3 ) + 15 ]u 0 ( t – 3 ) ⇔ e
⎝ 2 s ⎠ s ⎝s ⎠
s
b.
2 2
( 2t – 5t + 4 )u 0 ( t – 3 ) = [ 2 ( t – 3 ) + 12t – 18 – 5t + 4 ]u 0 ( t – 3 )
2
= [ 2 ( t – 3 ) + 7t – 14 ]u 0 ( t – 3 )
2
= [ 2 ( t – 3 ) + 7 ( t – 3 ) + 21 – 14 ]u 0 ( t – 3 )
2
= [ 2 ( t – 3 ) + 7 ( t – 3 ) + 7 ]u 0 ( t – 3 ) ⇔ e
– 3s ⎛ 2 × 2!- + ---
------------- --- ⎞
7- + 7
⎝ 3 2 s⎠
s s
c.
– 2t –2 ( t – 2 ) –4
( t – 3 )e u 0 ( t – 2 ) = [ ( t – 2 ) – 1 ]e ⋅ e u0 ( t – 2 )
–4 – 2s 1 1 –4 – 2s – ( s + 1 )
⇔e ⋅e ------------------ – ---------------- = e ⋅ e -------------------
2 (s + 2) 2
(s + 2) (s + 2)
d.
2(t – 2) –2 ( t – 3 ) –2
( 2t – 4 )e u 0 ( t – 3 ) = [ 2 ( t – 3 ) + 6 – 4 ]e ⋅ e u0 ( t – 3 )
–2 – 3s 2 2 –2 – 3s s+4
⇔e ⋅e ------------------ + ---------------- = 2e ⋅ e ------------------
(s + 3)
2 ( s + 3 ) (s + 3)
2
e.
– 3t 1 d s+3 d s+3
4te ( cos 2t )u 0 ( t ) ⇔ 4 ( – 1 ) ----- -----------------------------
- = – 4 ----- -----------------------------------
ds ( s + 3 ) 2 + 2 2 ds s 2 + 6s + 9 + 4
2
d s+3 - s + 6s + 13 – ( s + 3 ) ( 2s + 6 )-
⇔ – 4 ----- ----------------------------
2
= – 4 -----------------------------------------------------------------------
2
ds s + 6s + 13 2
( s + 6s + 13 )
2 2 2
s + 6s + 13 – 2s – 6s – 6s – 18 4 ( s + 6s + 5 )
⇔ – 4 -----------------------------------------------------------------------------
2
- = -----------------------------------
-
2 2
( s + 6s + 13 ) 2
( s + 6s + 13 )
6.
a.
sin 3t ⇔ 3/(s^2 + 3^2), d/dt f(t) ⇔ sF(s) − f(0^−), f(0^−) = sin 3t |t = 0 = 0
d/dt (sin 3t) ⇔ s·3/(s^2 + 3^2) − 0 = 3s/(s^2 + 9)
b.
3e^(−4t) ⇔ 3/(s + 4), d/dt f(t) ⇔ sF(s) − f(0^−), f(0^−) = 3e^(−4t) |t = 0 = 3
d/dt (3e^(−4t)) ⇔ s·3/(s + 4) − 3 = 3s/(s + 4) − 3(s + 4)/(s + 4) = −12/(s + 4)
c.
cos 2t ⇔ s/(s^2 + 2^2), t^2 cos 2t ⇔ (−1)^2 d^2/ds^2 [s/(s^2 + 4)]
d/ds [s/(s^2 + 4)] = (s^2 + 4 − 2s^2)/(s^2 + 4)^2 = (−s^2 + 4)/(s^2 + 4)^2
d/ds [(−s^2 + 4)/(s^2 + 4)^2] = [(s^2 + 4)^2(−2s) − (−s^2 + 4)·2(s^2 + 4)(2s)]/(s^2 + 4)^4
= [(s^2 + 4)(−2s) − (−s^2 + 4)(4s)]/(s^2 + 4)^3 = (−2s^3 − 8s + 4s^3 − 16s)/(s^2 + 4)^3 = 2s(s^2 − 12)/(s^2 + 4)^3
Thus,
t^2 cos 2t ⇔ 2s(s^2 − 12)/(s^2 + 4)^3
and
d/dt (t^2 cos 2t) ⇔ sF(s) − f(0^−) ⇔ s·2s(s^2 − 12)/(s^2 + 4)^3 − 0 = 2s^2(s^2 − 12)/(s^2 + 4)^3
d.
sin 2t ⇔ 2/(s^2 + 2^2), e^(−2t) sin 2t ⇔ 2/[(s + 2)^2 + 4], d/dt f(t) ⇔ sF(s) − f(0^−)
d/dt (e^(−2t) sin 2t) ⇔ s·2/[(s + 2)^2 + 4] − 0 = 2s/[(s + 2)^2 + 4]
e.
t^2 ⇔ 2!/s^3, t^2 e^(−2t) ⇔ 2!/(s + 2)^3, d/dt f(t) ⇔ sF(s) − f(0^−)
d/dt (t^2 e^(−2t)) ⇔ s·2!/(s + 2)^3 − 0 = 2s/(s + 2)^3
7.
a.
sin t ⇔ 1/(s^2 + 1), but to find L{(sin t)/t} we must first show that the limit of (sin t)/t as t → 0 exists. Since
lim (sin x)/x = 1 as x → 0, this condition is satisfied and thus (sin t)/t ⇔ ∫ from s to ∞ of 1/(s^2 + 1) ds. From tables of integrals,
∫ 1/(x^2 + a^2) dx = (1/a)tan^−1(x/a) + C. Then, ∫ 1/(s^2 + 1) ds = tan^−1(1/s) + C and the constant of integration C is
evaluated from the final value theorem. Thus, (sin t)/t ⇔ tan^−1(1/s).
b.
From (a) above, (sin t)/t ⇔ tan^−1(1/s), and since ∫ from −∞ to t of f(τ) dτ ⇔ F(s)/s + f(0^−)/s, it follows that
∫ from 0 to t of (sin τ)/τ dτ ⇔ (1/s)tan^−1(1/s)
c.
From (a) above, (sin t)/t ⇔ tan^−1(1/s), and since f(at) ⇔ (1/a)F(s/a), it follows that
(sin at)/(at) ⇔ (1/a)tan^−1[1/(s/a)], or (sin at)/t ⇔ tan^−1(a/s)
d.
cos t ⇔ s/(s^2 + 1), (cos t)/t ⇔ ∫ from s to ∞ of s/(s^2 + 1) ds, and from tables of integrals
∫ x/(x^2 + a^2) dx = (1/2)ln(x^2 + a^2) + C. Then, ∫ s/(s^2 + 1) ds = (1/2)ln(s^2 + 1) + C and the constant of inte-
gration C is evaluated from the final value theorem. Thus,
lim f(t) as t → ∞ = lim sF(s) as s → 0 = lim s[(1/2)ln(s^2 + 1) + C] as s → 0 = 0 and using ∫ from −∞ to t of f(τ) dτ ⇔ F(s)/s + f(0^−)/s we
get ∫ from t to ∞ of (cos τ)/τ dτ ⇔ (1/(2s))ln(s^2 + 1)
e.
e^(−t) ⇔ 1/(s + 1), e^(−t)/t ⇔ ∫ from s to ∞ of 1/(s + 1) ds, and from tables of integrals
∫ 1/(ax + b) dx = (1/a)ln(ax + b). Then, ∫ 1/(s + 1) ds = ln(s + 1) + C and the constant of integration C
is evaluated from the final value theorem. Thus,
lim f(t) as t → ∞ = lim sF(s) as s → 0 = lim s[ln(s + 1) + C] as s → 0 = 0 and using ∫ from −∞ to t of f(τ) dτ ⇔ F(s)/s + f(0^−)/s we
get ∫ from t to ∞ of e^(−τ)/τ dτ ⇔ (1/s)ln(s + 1)
8.
[Sawtooth waveform f_ST(t): f(t) = (A/a)t for 0 < t < a, of amplitude A, repeating with period a at t = a, 2a, 3a, …]
F(s) = [1/(1 − e^(−as))] ∫ from 0 to a of (A/a)t·e^(−st) dt = {A/[a(1 − e^(−as))]} ∫ from 0 to a of t·e^(−st) dt   (1)
and from (1),
∫ from 0 to a of t·e^(−st) dt = 1/s^2 − (a/s)e^(−as) − (1/s^2)e^(−as) = (1/s^2)[1 − (1 + as)e^(−as)]
This chapter is a continuation of the Laplace transformation topic of the previous chapter and presents several methods of finding the Inverse Laplace Transformation. The partial fraction expansion method is explained thoroughly and it is illustrated with several examples.
This integral is difficult to evaluate because it requires contour integration using complex variables theory. Fortunately, for most engineering problems we can refer to Tables of Properties and Common Laplace transform pairs to look up the Inverse Laplace transform.
The coefficients a k and b k are real numbers for k = 1, 2, …, n , and if the highest power m of N ( s )
is less than the highest power n of D ( s ) , i.e., m < n , F ( s ) is said to be expressed as a proper rational
function. If m ≥ n , F ( s ) is an improper rational function.
In a proper rational function, the roots of N ( s ) in (3.3) are found by setting N ( s ) = 0 ; these are
called the zeros of F ( s ) . The roots of D ( s ) , found by setting D ( s ) = 0 , are called the poles of F ( s ) .
We assume that F ( s ) in (3.3) is a proper rational function. Then, it is customary and very convenient
to make the coefficient of s^n unity; thus, we rewrite F(s) as
F(s) = N(s)/D(s) = (1/a_n)(b_m s^m + b_(m−1) s^(m−1) + b_(m−2) s^(m−2) + … + b_1 s + b_0) / [s^n + (a_(n−1)/a_n)s^(n−1) + (a_(n−2)/a_n)s^(n−2) + … + (a_1/a_n)s + a_0/a_n]   (3.4)
The zeros and poles of (3.4) can be real and distinct, or repeated, or complex conjugates, or combina-
tions of real and complex conjugates. However, we are mostly interested in the nature of the poles, so
we will consider each case separately.
Case I: Distinct Poles
If all the poles p 1, p 2, p 3, …, p n of F ( s ) are distinct (different from each another), we can factor the
denominator of F ( s ) in the form
N(s)
F ( s ) = ------------------------------------------------------------------------------------------------- (3.5)
( s – p1 ) ⋅ ( s – p2 ) ⋅ ( s – p3 ) ⋅ … ⋅ ( s – pn )
where p k is distinct from all other poles. Next, using the partial fraction expansion method,*we can
express (3.5) as
F(s) = r1/(s − p1) + r2/(s − p2) + r3/(s − p3) + … + rn/(s − pn)   (3.6)
To evaluate the residue rk , we multiply both sides of (3.6) by (s − pk) ; then, we let s → pk , that is,
rk = lim (s − pk)F(s) as s → pk = (s − pk)F(s) |s = pk   (3.7)
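Relation (3.7) can also be evaluated directly with the MATLAB Symbolic Math Toolbox. The short sketch below is illustrative only; the function F(s) and the pole used here are placeholders, not part of the text.
syms s
F = (s+3)/((s+1)*(s+2));          % any proper rational F(s), for illustration
pk = -1;                          % the pole whose residue is wanted
rk = limit((s-pk)*F, s, pk)       % evaluates (3.7); returns 2 for this F(s)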
Example 3.1
Use the partial fraction expansion method to simplify F 1 ( s ) of (3.8) below, and find the time domain
function f 1 ( t ) corresponding to F 1 ( s ) .
3s + 2
F 1 ( s ) = -------------------------- (3.8)
2
s + 3s + 2
* The partial fraction expansion method applies only to proper rational functions. It is used extensively in integra-
tion, and in finding the inverses of the Laplace transform, the Fourier transform, and the z-transform. This
method allows us to decompose a rational polynomial into smaller rational polynomials with simpler denomina-
tors from which we can easily recognize their integrals and inverse transformations. This method is also being
taught in intermediate algebra and introductory calculus courses.
Solution:
Using (3.6), we get
F1(s) = (3s + 2)/(s^2 + 3s + 2) = (3s + 2)/[(s + 1)(s + 2)] = r1/(s + 1) + r2/(s + 2)   (3.9)
The residues are
r1 = lim (s + 1)F(s) as s → −1 = (3s + 2)/(s + 2) |s = −1 = −1   (3.10)
and
r2 = lim (s + 2)F(s) as s → −2 = (3s + 2)/(s + 1) |s = −2 = 4   (3.11)
Therefore, by substitution into (3.9),
F1(s) = (3s + 2)/(s^2 + 3s + 2) = −1/(s + 1) + 4/(s + 2)   (3.12)
The residues and poles of a rational function of polynomials such as (3.8), can be found easily using
the MATLAB residue(a,b) function. For this example, we use the code
Ns = [3, 2]; Ds = [1, 3, 2]; [r, p, k] = residue(Ns, Ds)
and MATLAB returns the values
r =
4
-1
p =
-2
-1
k =
[]
For this MATLAB code, we defined Ns and Ds as two vectors that contain the numerator and
denominator coefficients of F ( s ) . When this code is executed, MATLAB displays the r and p vec-
tors that represent the residues and poles respectively. The first value of the vector r is associated
with the first value of the vector p, the second value of r is associated with the second value of p,
and so on.
The vector k is referred to as the direct term and it is always empty (has no value) whenever F ( s ) is
a proper rational function, that is, when the highest degree of the denominator is larger than that of
the numerator. For this example, we observe that the highest power of the denominator is s 2 ,
whereas the highest power of the numerator is s and therefore the direct term is empty.
We can also use the MATLAB ilaplace(f) function to obtain the time domain function directly from
F ( s ) . This is done with the code that follows.
syms s t; Fs=(3*s+2)/(s^2+3*s+2); ft=ilaplace(Fs); pretty(ft)
When this code is executed, MATLAB displays the expression
4 exp(-2 t)- exp(-t)
Example 3.2
Use the partial fraction expansion method to simplify F 2 ( s ) of (3.15) below, and find the time
domain function f 2 ( t ) corresponding to F 2 ( s ) .
F2(s) = (3s^2 + 2s + 5)/(s^3 + 12s^2 + 44s + 48)   (3.15)
Solution:
First, we use the MATLAB factor(s) symbolic function to express the denominator polynomial of
F 2 ( s ) in factored form. For this example,
r2 = (3s^2 + 2s + 5)/[(s + 2)(s + 6)] |s = −4 = −37/4   (3.18)
r3 = (3s^2 + 2s + 5)/[(s + 2)(s + 4)] |s = −6 = 89/8   (3.19)
Then, by substitution into the partial fraction expansion,
F2(s) = (9/8)/(s + 2) + (−37/4)/(s + 4) + (89/8)/(s + 6) ⇔ [(9/8)e^(−2t) − (37/4)e^(−4t) + (89/8)e^(−6t)]u0(t) = f2(t)   (3.22)
Quite often, the poles of F ( s ) are complex*, and since complex poles occur in complex conjugate
pairs, the number of complex poles is even. Thus, if p k is a complex root of D ( s ) , then, its complex
conjugate pole, denoted as p k∗ , is also a root of D ( s ) . The partial fraction expansion method can
also be used in this case, but it may be necessary to manipulate the terms of the expansion in order to
express them in a recognizable form. The procedure is illustrated with the following example.
Example 3.3
Use the partial fraction expansion method to simplify F 3 ( s ) of (3.23) below, and find the time
domain function f 3 ( t ) corresponding to F 3 ( s ) .
F3(s) = (s + 3)/(s^3 + 5s^2 + 12s + 8)   (3.23)
Solution:
Let us first express the denominator in factored form to identify the poles of F 3 ( s ) using the MAT-
LAB factor(s) function. Then,
syms s; factor(s^3 + 5*s^2 + 12*s + 8)
ans =
(s+1)*(s^2+4*s+8)
The factor(s) function did not factor the quadratic term. We will use the roots(p) function.
p=[1 4 8]; roots_p=roots(p)
roots_p =
-2.0000 + 2.0000i
-2.0000 - 2.0000i
Then,
F3(s) = (s + 3)/(s^3 + 5s^2 + 12s + 8) = (s + 3)/[(s + 1)(s + 2 + j2)(s + 2 − j2)]
F3(s) = (s + 3)/(s^3 + 5s^2 + 12s + 8) = r1/(s + 1) + r2/(s + 2 + j2) + r2*/(s + 2 − j2)   (3.24)
The residues are
r1 = (s + 3)/(s^2 + 4s + 8) |s = −1 = 2/5   (3.25)
r2 = (s + 3)/[(s + 1)(s + 2 − j2)] |s = −2 − j2 = (1 − j2)/[(−1 − j2)(−j4)] = (1 − j2)/(−8 + j4)
   = (1 − j2)(−8 − j4)/[(−8 + j4)(−8 − j4)] = (−16 + j12)/80 = −1/5 + j3/20   (3.26)
r2* = (−1/5 + j3/20)* = −1/5 − j3/20   (3.27)
Combining the last two terms of (3.24) into a single real term, we obtain
F3(s) = (2/5)/(s + 1) − (1/5)·(2s + 1)/(s^2 + 4s + 8)   (3.29)
For convenience, we denote the first term on the right side of (3.29) as F 31 ( s ) , and the second as
F 32 ( s ) . Then,
F31(s) = (2/5)/(s + 1) ⇔ (2/5)e^(−t) = f31(t)   (3.30)
Next, for F32(s)
F32(s) = −(1/5)·(2s + 1)/(s^2 + 4s + 8)   (3.31)
we express F32(s) as
F32(s) = −(2/5)·(s + 1/2 + 3/2 − 3/2)/[(s + 2)^2 + 2^2] = −(2/5)·{(s + 2)/[(s + 2)^2 + 2^2] + (−3/2)/[(s + 2)^2 + 2^2]}
       = −(2/5)·(s + 2)/[(s + 2)^2 + 2^2] + (6/10)·(1/2)·2/[(s + 2)^2 + 2^2]
       = −(2/5)·(s + 2)/[(s + 2)^2 + 2^2] + (3/10)·2/[(s + 2)^2 + 2^2]   (3.33)
Addition of (3.30) with (3.33) yields
F3(s) = F31(s) + F32(s) = (2/5)/(s + 1) − (2/5)·(s + 2)/[(s + 2)^2 + 2^2] + (3/10)·2/[(s + 2)^2 + 2^2]
      ⇔ (2/5)e^(−t) − (2/5)e^(−2t) cos 2t + (3/10)e^(−2t) sin 2t = f3(t)
Check with MATLAB:
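A sketch of such a check, using the ilaplace function as in Example 3.1, is shown below; MATLAB may display an algebraically equivalent but differently arranged form of f3(t).
syms s t; F3s = (s+3)/(s^3+5*s^2+12*s+8); f3t = ilaplace(F3s); pretty(simplify(f3t))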
* Here, we used MATLAB with simple((−1/5 +3j/20)/(s+2+2j)+(−1/5 −3j/20)/(s+2−2j)). The simple func-
tion, after several simplification tools that were displayed on the screen, returned (-2*s-1)/
(5*s^2+20*s+40)
Denoting the m residues corresponding to multiple pole p 1 as r 11, r 12, r 13, …, r 1m , the partial frac-
tion expansion of (3.34) is written as
F(s) = r11/(s − p1)^m + r12/(s − p1)^(m−1) + r13/(s − p1)^(m−2) + … + r1m/(s − p1)
     + r2/(s − p2) + r3/(s − p3) + … + rn/(s − pn)   (3.35)
For the simple poles p 1, p 2, …, p n , we proceed as before, that is, we find the residues as
rk = lim (s − pk)F(s) as s → pk = (s − pk)F(s) |s = pk   (3.36)
The residues r11, r12, r13, …, r1m corresponding to the repeated poles, are found by multiplication of
both sides of (3.35) by (s − p1)^m. Then,
(s − p1)^m F(s) = r11 + (s − p1)r12 + (s − p1)^2 r13 + … + (s − p1)^(m−1) r1m
               + (s − p1)^m [r2/(s − p2) + r3/(s − p3) + … + rn/(s − pn)]   (3.37)
Next, taking the limit as s → p1 on both sides of (3.37), we get
lim (s − p1)^m F(s) as s → p1 = r11 + lim [(s − p1)r12 + (s − p1)^2 r13 + … + (s − p1)^(m−1) r1m] as s → p1
               + lim (s − p1)^m [r2/(s − p2) + r3/(s − p3) + … + rn/(s − pn)] as s → p1
or
r11 = lim (s − p1)^m F(s) as s → p1   (3.38)
and thus (3.38) yields the residue of the first repeated pole.
The residue r 12 for the second repeated pole p 1 , is found by differentiating (3.37) with respect to s
and again, we let s → p 1 , that is,
or
r1k = lim [1/(k − 1)!]·d^(k−1)/ds^(k−1) [(s − p1)^m F(s)] as s → p1   (3.42)
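Relation (3.42) lends itself to direct symbolic evaluation. The sketch below is illustrative only; it happens to use the F4(s) of Example 3.4 that follows, and F, p1, m, and k are the quantities a reader would substitute for a particular problem.
syms s
F = (s+3)/((s+2)*(s+1)^2);        % a function with a double pole at s = -1
p1 = -1; m = 2; k = 2;            % k = 2 selects the residue r12
r1k = limit(diff((s-p1)^m*F, s, k-1)/factorial(k-1), s, p1)   % returns -1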
Example 3.4
Use the partial fraction expansion method to simplify F 4 ( s ) of (3.43) below, and find the time
domain function f 4 ( t ) corresponding to F 4 ( s ) .
F4(s) = (s + 3)/[(s + 2)(s + 1)^2]   (3.43)
Solution:
We observe that there is a pole of multiplicity 2 at s = – 1 , and thus in partial fraction expansion
form, F 4 ( s ) is written as
F4(s) = (s + 3)/[(s + 2)(s + 1)^2] = r1/(s + 2) + r21/(s + 1)^2 + r22/(s + 1)   (3.44)
The residues are
r1 = (s + 3)/(s + 1)^2 |s = −2 = 1
r21 = (s + 3)/(s + 2) |s = −1 = 2
r22 = d/ds [(s + 3)/(s + 2)] |s = −1 = [(s + 2) − (s + 3)]/(s + 2)^2 |s = −1 = −1
The value of the residue r 22 can also be found without differentiation as follows:
Substitution of the already known values of r 1 and r 21 into (3.44), and letting s = 0 *, we get
(s + 3)/[(s + 1)^2(s + 2)] |s = 0 = 1/(s + 2) |s = 0 + 2/(s + 1)^2 |s = 0 + r22/(s + 1) |s = 0
or
3/2 = 1/2 + 2 + r22
from which r22 = −1 as before. Then,
F4(s) = (s + 3)/[(s + 2)(s + 1)^2] = 1/(s + 2) + 2/(s + 1)^2 − 1/(s + 1) ⇔ e^(−2t) + 2te^(−t) − e^(−t) = f4(t)   (3.45)
r =
1.0000
-1.0000
2.0000
p =
-2.0000
-1.0000
-1.0000
k =
[]
Example 3.5
Use the partial fraction expansion method to simplify F 5 ( s ) of (3.46) below, and find the time
domain function f 5 ( t ) corresponding to the given F 5 ( s ) .
F5(s) = (s^2 + 3s + 1)/[(s + 1)^3 (s + 2)^2]   (3.46)
Solution:
We observe that there is a pole of multiplicity 3 at s = – 1 , and a pole of multiplicity 2 at s = – 2 .
Then, in partial fraction expansion form, F 5 ( s ) is written as
F5(s) = r11/(s + 1)^3 + r12/(s + 1)^2 + r13/(s + 1) + r21/(s + 2)^2 + r22/(s + 2)   (3.47)
The residues are
r11 = (s^2 + 3s + 1)/(s + 2)^2 |s = −1 = −1
r12 = d/ds [(s^2 + 3s + 1)/(s + 2)^2] |s = −1
    = [(s + 2)^2(2s + 3) − 2(s + 2)(s^2 + 3s + 1)]/(s + 2)^4 |s = −1 = (s + 4)/(s + 2)^3 |s = −1 = 3
r13 = (1/2!)·d^2/ds^2 [(s^2 + 3s + 1)/(s + 2)^2] |s = −1 = (1/2)·d/ds {d/ds [(s^2 + 3s + 1)/(s + 2)^2]} |s = −1
    = (1/2)·d/ds [(s + 4)/(s + 2)^3] |s = −1 = (1/2)·[(s + 2)^3 − 3(s + 2)^2(s + 4)]/(s + 2)^6 |s = −1
    = (1/2)·(s + 2 − 3s − 12)/(s + 2)^4 |s = −1 = (−s − 5)/(s + 2)^4 |s = −1 = −4
Similarly, for the pole of multiplicity 2 at s = −2 we find r21 = 1 and r22 = 4. Then,
F5(s) = −1/(s + 1)^3 + 3/(s + 1)^2 − 4/(s + 1) + 1/(s + 2)^2 + 4/(s + 2)   (3.48)
We will check the values of these residues with the MATLAB code below.
syms s; % The function collect(s) below multiplies (s+1)^3 by (s+2)^2
% and we use it to express the denominator D(s) as a polynomial so that
% we can use the coefficients of the resulting polynomial with the residue function
Ds=collect(((s+1)^3)*((s+2)^2))
Ds =
s^5+7*s^4+19*s^3+25*s^2+16*s+4
Ns=[1 3 1]; Ds=[1 7 19 25 16 4]; [r,p,k]=residue(Ns,Ds)
r =
4.0000
1.0000
-4.0000
3.0000
-1.0000
p =
-2.0000
-2.0000
-1.0000
-1.0000
-1.0000
k =
[]
From Table 2.2 of Chapter 2
e^(−at) ⇔ 1/(s + a)    te^(−at) ⇔ 1/(s + a)^2    t^(n−1) e^(−at) ⇔ (n − 1)!/(s + a)^n
and thus
f5(t) = −(1/2)t^2 e^(−t) + 3te^(−t) − 4e^(−t) + te^(−2t) + 4e^(−2t)   (3.49)
We can verify (3.49) with MATLAB as follows:
syms s t; Fs=−1/((s+1)^3) + 3/((s+1)^2) − 4/(s+1) + 1/((s+2)^2) + 4/(s+2);
ft=ilaplace(Fs)
ft = -1/2*t^2*exp(-t)+3*t*exp(-t)-4*exp(-t)
+t*exp(-2*t)+4*exp(-2*t)
Example 3.6
Derive the Inverse Laplace transform f 6 ( t ) of
F6(s) = (s^2 + 2s + 2)/(s + 1)   (3.51)
Solution:
For this example, F 6 ( s ) is an improper rational function. Therefore, we must express it in the form
of (3.50) before we use the partial fraction expansion method.
By long division, we get
F6(s) = (s^2 + 2s + 2)/(s + 1) = 1/(s + 1) + 1 + s
Now, we recognize that
1/(s + 1) ⇔ e^(−t)
and
1 ⇔ δ(t)
but
s⇔?
To answer that question, we recall that
u 0' ( t ) = δ ( t )
and
u 0'' ( t ) = δ' ( t )
where δ' ( t ) is the doublet of the delta function. Also, by the time differentiation property
u0''(t) = δ'(t) ⇔ s^2 F(s) − sf(0^−) − f '(0^−) = s^2 F(s) = s^2·(1/s) = s
Therefore, we have the new transform pair
s ⇔ δ' ( t ) (3.52)
and thus,
F6(s) = (s^2 + 2s + 2)/(s + 1) = 1/(s + 1) + 1 + s ⇔ e^(−t) + δ(t) + δ'(t) = f6(t)   (3.53)
In general,
d^n/dt^n δ(t) ⇔ s^n   (3.54)
We verify (3.53) with MATLAB as follows:
Ns = [1 2 2]; Ds = [1 1]; [r, p, k] = residue(Ns, Ds)
r =
1
p =
-1
k =
1 1
Here, the direct terms k= [1 1] are the coefficients of δ ( t ) and δ' ( t ) respectively.
F(s) = N(s)/D(s) = r1/(s − a) + r2/(s − a)^2 + … + rm/(s − a)^m   (3.55)
Let s^2 + αs + β be a quadratic factor of D(s), and suppose that (s^2 + αs + β)^n is the highest power
of this factor that divides D(s). Now, we perform the following steps:
1. To this factor, we assign the sum of n partial fractions, that is,
(r1 s + k1)/(s^2 + αs + β) + (r2 s + k2)/(s^2 + αs + β)^2 + … + (rn s + kn)/(s^2 + αs + β)^n
2. We repeat step 1 for each of the distinct linear and quadratic factors of D ( s )
3. We set the given F ( s ) equal to the sum of these partial fractions
4. We clear the resulting expression of fractions and arrange the terms in decreasing powers of s
5. We equate the coefficients of corresponding powers of s
6. We solve the resulting equations for the residues
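Steps 3 through 6 can also be carried out symbolically in recent releases of the Symbolic Math Toolbox. The sketch below is illustrative only; it happens to use the F7(s) of Example 3.7 that follows, and the unknown names are those of its expansion.
syms s r1 A r21 r22
lhs  = -2*s + 4;                                              % numerator of F7(s)
rhs  = (r1*s + A)*(s-1)^2 + r21*(s^2+1) + r22*(s-1)*(s^2+1);  % cleared of fractions
eqns = coeffs(collect(rhs - lhs, s), s) == 0;                 % equate like powers of s
sol  = solve(eqns, [r1, A, r21, r22])                         % r1 = 2, A = 1, r21 = 1, r22 = -2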
Example 3.7
Express F 7 ( s ) of (3.56) below as a sum of partial fractions using the method of clearing the fractions.
F7(s) = (−2s + 4)/[(s^2 + 1)(s − 1)^2]   (3.56)
Solution:
Using Steps 1 through 3 above, we get
F7(s) = (−2s + 4)/[(s^2 + 1)(s − 1)^2] = (r1 s + A)/(s^2 + 1) + r21/(s − 1)^2 + r22/(s − 1)   (3.57)
With Step 4,
−2s + 4 = (r1 s + A)(s − 1)^2 + r21(s^2 + 1) + r22(s − 1)(s^2 + 1)   (3.58)
and with Step 5,
−2s + 4 = (r1 + r22)s^3 + (−2r1 + A − r22 + r21)s^2 + (r1 − 2A + r22)s + (A − r22 + r21)   (3.59)
Relation (3.59) will be an identity in s if each power of s has the same coefficient on both sides of this relation.
Therefore, we equate like powers of s and we get
0 = r1 + r22
0 = −2r1 + A − r22 + r21
−2 = r1 − 2A + r22
4 = A − r22 + r21   (3.60)
Subtraction of the fourth equation of (3.60) from the second yields −2r1 = −4, or r1 = 2 (3.61), and then from the first equation, r22 = −r1 = −2 (3.62). Next, substitution of (3.61) and (3.62) into the third equation of (3.60) yields
– 2 = 2 – 2A – 2
or
A = 1 (3.63)
Finally by substitution of (3.61), (3.62), and (3.63) into the fourth equation of (3.60), we get
4 = 1 + 2 + r 21
or
r 21 = 1 (3.64)
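With all four constants known, the expansion can be checked symbolically; the sketch below recombines (3.57) with r1 = 2, A = 1, r21 = 1, r22 = −2 and compares it against (3.56).
syms s
F7   = (-2*s + 4)/((s^2+1)*(s-1)^2);
F7pf = (2*s + 1)/(s^2+1) + 1/(s-1)^2 - 2/(s-1);
simplify(F7 - F7pf)     % returns 0, confirming the values of the residues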
Example 3.8
Use partial fraction expansion to simplify F 8 ( s ) of (3.66) below, and find the time domain function
f 8 ( t ) corresponding to F 8 ( s ) .
F8(s) = (s + 3)/(s^3 + 5s^2 + 12s + 8)   (3.66)
Solution:
This is the same transform as in Example 3.3, where we found that the denominator D ( s ) can be
expressed in factored form of a linear term and a quadratic. Thus, we write F 8 ( s ) as
F8(s) = (s + 3)/[(s + 1)(s^2 + 4s + 8)]   (3.67)
and using the method of clearing the fractions, we rewrite (3.67) as
F8(s) = (s + 3)/[(s + 1)(s^2 + 4s + 8)] = r1/(s + 1) + (r2 s + r3)/(s^2 + 4s + 8)   (3.68)
As in Example 3.3,
r1 = (s + 3)/(s^2 + 4s + 8) |s = −1 = 2/5   (3.69)
Next, to compute r 2 and r 3 , we follow the procedure of this section and we get
(s + 3) = r1(s^2 + 4s + 8) + (r2 s + r3)(s + 1)   (3.70)
Since r 1 is already known, we only need two equations in r 2 and r 3 . Equating the coefficient of s 2
on the left side, which is zero, with the coefficients of s 2 on the right side of (3.70), we get
0 = r1 + r2 (3.71)
and since r1 = 2/5, it follows that r2 = −2/5.
To obtain the third residue r 3 , we equate the constant terms of (3.70). Then, 3 = 8r 1 + r 3 or
3 = 8 × 2 ⁄ 5 + r 3 , or r 3 = – 1 ⁄ 5 . Then, by substitution into (3.68), we get
F8(s) = (2/5)/(s + 1) − (1/5)·(2s + 1)/(s^2 + 4s + 8)   (3.72)
as before.
The remaining steps are the same as in Example 3.3, and thus f 8 ( t ) is the same as f 3 ( t ) , that is,
3.5 Summary
• The Inverse Laplace Transform Integral defined as
L^(−1){F(s)} = f(t) = [1/(2πj)] ∫ from σ − jω to σ + jω of F(s)e^(st) ds
is difficult to evaluate because it requires contour integration using complex variables theory.
• For most engineering problems we can refer to Tables of Properties and Common Laplace transform pairs to look up the Inverse Laplace transform.
• The partial fraction expansion method offers a convenient means of expressing Laplace trans-
forms in a recognizable form from which we can obtain the equivalent time-domain functions.
• If the highest power m of the numerator N ( s ) is less than the highest power n of the denomina-
tor D ( s ) , i.e., m < n , F ( s ) is said to be expressed as a proper rational function. If m ≥ n , F ( s ) is
an improper rational function.
• The Laplace transform F ( s ) must be expressed as a proper rational function before applying the
partial fraction expansion. If F ( s ) is an improper rational function, that is, if m ≥ n , we must first
divide the numerator N ( s ) by the denominator D ( s ) to obtain an expression of the form
F(s) = k0 + k1 s + k2 s^2 + … + k_(m−n) s^(m−n) + N(s)/D(s)
• In a proper rational function, the roots of numerator N ( s ) are called the zeros of F ( s ) and the
roots of the denominator D ( s ) are called the poles of F ( s ) .
• The partial fraction expansion method can be applied whether the poles of F ( s ) are distinct, com-
plex conjugates, repeated, or a combination of these.
• When F ( s ) is expressed as
F(s) = r1/(s − p1) + r2/(s − p2) + r3/(s − p3) + … + rn/(s − pn)
the residues rk are found from rk = lim (s − pk)F(s) as s → pk = (s − pk)F(s) |s = pk.
• The residues and poles of a rational function of polynomials can be found easily using the MAT-
LAB residue(a,b) function. The direct term is always empty (has no value) whenever F ( s ) is a
proper rational function.
• We can use the MATLAB factor(s) symbolic function to convert the denominator polynomial
form of F 2 ( s ) into a factored form.
• We can use the MATLAB collect(s) and expand(s) symbolic functions to convert the denomi-
nator factored form of F 2 ( s ) into a polynomial form.
• s ⇔ δ'(t) and, in general, d^n/dt^n δ(t) ⇔ s^n
• The method of clearing the fractions is an alternate method of partial fraction expansion.
3.6 Exercises
1. Find the Inverse Laplace transform of the following:
a. 4/(s + 3)
b. 4/(s + 3)^2
c. 4/(s + 3)^4
d. (3s + 4)/(s + 3)^5
e. (s^2 + 6s + 3)/(s + 3)^5
2. Find the Inverse Laplace transform of the following:
a. (3s + 4)/(s^2 + 4s + 85)
b. (4s + 5)/(s^2 + 5s + 18.5)
c. (s^2 + 3s + 2)/(s^3 + 5s^2 + 10.5s + 9)
d. (s^2 − 16)/(s^3 + 8s^2 + 24s + 32)
e. (s + 1)/(s^3 + 6s^2 + 11s + 6)
Hint:  (1/(2α))(sin αt + αt cos αt) ⇔ s^2/(s^2 + α^2)^2
       (1/(2α^3))(sin αt − αt cos αt) ⇔ 1/(s^2 + α^2)^2
c. (2s + 3)/(s^2 + 4.25s + 1)
d. (s^3 + 8s^2 + 24s + 32)/(s^2 + 6s + 8)
e. e^(−2s)·3/(2s + 3)^3
4. Use the Initial Value Theorem to find f ( 0 ) given that the Laplace transform of f ( t ) is
(2s + 3)/(s^2 + 4.25s + 1)
Compare your answer with that of Exercise 3(c).
5. It is known that the Laplace transform F ( s ) has two distinct poles, one at s = 0 , the other at
s = – 1 . It also has a single zero at s = 1 , and we know that lim f ( t ) = 10 . Find F ( s ) and f ( t ) .
t→∞
e.
(s^2 + 6s + 3)/(s + 3)^5 = [(s^2 + 6s + 9) − 6]/(s + 3)^5 = (s + 3)^2/(s + 3)^5 − 6/(s + 3)^5 = 1/(s + 3)^3 − 6·1/(s + 3)^5
2.
a.
(3s + 4)/(s^2 + 4s + 85) = 3(s + 4/3 + 2/3 − 2/3)/[(s + 2)^2 + 81] = 3·[(s + 2) − 2/3]/[(s + 2)^2 + 9^2]
= 3·(s + 2)/[(s + 2)^2 + 9^2] − (1/9)·(2 × 9)/[(s + 2)^2 + 9^2]
= 3·(s + 2)/[(s + 2)^2 + 9^2] − (2/9)·9/[(s + 2)^2 + 9^2] ⇔ 3e^(−2t) cos 9t − (2/9)e^(−2t) sin 9t
b.
(4s + 5)/(s^2 + 5s + 18.5) = (4s + 5)/(s^2 + 5s + 6.25 + 12.25) = (4s + 5)/[(s + 2.5)^2 + 3.5^2] = 4·(s + 5/4)/[(s + 2.5)^2 + 3.5^2]
= 4·(s + 10/4 − 10/4 + 5/4)/[(s + 2.5)^2 + 3.5^2] = 4·(s + 2.5)/[(s + 2.5)^2 + 3.5^2] − (1/3.5)·(5 × 3.5)/[(s + 2.5)^2 + 3.5^2]
= 4·(s + 2.5)/[(s + 2.5)^2 + 3.5^2] − (10/7)·3.5/[(s + 2.5)^2 + 3.5^2] ⇔ 4e^(−2.5t) cos 3.5t − (10/7)e^(−2.5t) sin 3.5t
c.
(s^2 + 3s + 2)/(s^3 + 5s^2 + 10.5s + 9) = (s + 1)(s + 2)/[(s + 2)(s^2 + 3s + 4.5)] = (s + 1)/(s^2 + 3s + 4.5)
= (s + 1)/(s^2 + 3s + 2.25 − 2.25 + 4.5) = (s + 1.5 − 1.5 + 1)/[(s + 1.5)^2 + 1.5^2]
= (s + 1.5)/[(s + 1.5)^2 + 1.5^2] − (1/1.5)·(0.5 × 1.5)/[(s + 1.5)^2 + 1.5^2]
= (s + 1.5)/[(s + 1.5)^2 + 1.5^2] − (1/3)·1.5/[(s + 1.5)^2 + 1.5^2] ⇔ e^(−1.5t) cos 1.5t − (1/3)e^(−1.5t) sin 1.5t
d.
(s^2 − 16)/(s^3 + 8s^2 + 24s + 32) = (s + 4)(s − 4)/[(s + 4)(s^2 + 4s + 8)] = (s − 4)/[(s + 2)^2 + 2^2] = (s + 2 − 2 − 4)/[(s + 2)^2 + 2^2]
= (s + 2)/[(s + 2)^2 + 2^2] − (1/2)·(6 × 2)/[(s + 2)^2 + 2^2]
= (s + 2)/[(s + 2)^2 + 2^2] − 3·2/[(s + 2)^2 + 2^2] ⇔ e^(−2t) cos 2t − 3e^(−2t) sin 2t
e.
(s + 1)/(s^3 + 6s^2 + 11s + 6) = (s + 1)/[(s + 1)(s + 2)(s + 3)] = 1/[(s + 2)(s + 3)]
= r1/(s + 2) + r2/(s + 3)    r1 = 1/(s + 3) |s = −2 = 1    r2 = 1/(s + 2) |s = −3 = −1
= 1/(s + 2) − 1/(s + 3) ⇔ e^(−2t) − e^(−3t)
3.
a.
(3s + 2)/(s^2 + 25) = 3s/(s^2 + 5^2) + (1/5)·(2 × 5)/(s^2 + 5^2) = 3·s/(s^2 + 5^2) + (2/5)·5/(s^2 + 5^2) ⇔ 3 cos 5t + (2/5) sin 5t
b.
(5s^2 + 3)/(s^2 + 4)^2 = 5s^2/(s^2 + 2^2)^2 + 3/(s^2 + 2^2)^2 ⇔ 5·[1/(2 × 2)](sin 2t + 2t cos 2t) + 3·[1/(2 × 8)](sin 2t − 2t cos 2t)
⇔ (5/4 + 3/16) sin 2t + (5/4 − 3/16)·2t cos 2t = (23/16) sin 2t + (17/8) t cos 2t
c.
(2s + 3)/(s^2 + 4.25s + 1) = (2s + 3)/[(s + 4)(s + 1/4)] = r1/(s + 4) + r2/(s + 1/4)
r1 = (2s + 3)/(s + 1/4) |s = −4 = −5/(−15/4) = 4/3    r2 = (2s + 3)/(s + 4) |s = −1/4 = (5/2)/(15/4) = 2/3
(4/3)/(s + 4) + (2/3)/(s + 1/4) ⇔ (2/3)(2e^(−4t) + e^(−t/4))
d.
(s^3 + 8s^2 + 24s + 32)/(s^2 + 6s + 8) = (s + 4)(s^2 + 4s + 8)/[(s + 2)(s + 4)] = (s^2 + 4s + 8)/(s + 2) and by long division
(s^2 + 4s + 8)/(s + 2) = s + 2 + 4/(s + 2) ⇔ δ'(t) + 2δ(t) + 4e^(−2t)
e.
e^(−2s)·3/(2s + 3)^3    e^(−2s)F(s) ⇔ f(t − 2)u0(t − 2)
F(s) = 3/(2s + 3)^3 = 3/[2(s + 3/2)]^3 = (3/8)/(s + 3/2)^3 ⇔ (3/8)·(1/2!)t^2 e^(−(3/2)t) = (3/16)t^2 e^(−(3/2)t)
e^(−2s)F(s) = e^(−2s)·3/(2s + 3)^3 ⇔ (3/16)(t − 2)^2 e^(−(3/2)(t − 2)) u0(t − 2)
4. The Initial Value Theorem states that f(0) = lim sF(s) as s → ∞. Then,
f(0) = lim s(2s + 3)/(s^2 + 4.25s + 1) as s → ∞ = lim (2s^2 + 3s)/(s^2 + 4.25s + 1) as s → ∞
     = lim (2s^2/s^2 + 3s/s^2)/(s^2/s^2 + 4.25s/s^2 + 1/s^2) as s → ∞ = lim (2 + 3/s)/(1 + 4.25/s + 1/s^2) as s → ∞ = 2
The value f(0) = 2 is the same as in the time domain expression that we found in Exercise 3(c).
5. We are given that F(s) = A(s − 1)/[s(s + 1)] and lim f(t) as t → ∞ = lim sF(s) as s → 0 = 10. Then,
lim s·A(s − 1)/[s(s + 1)] as s → 0 = A·lim (s − 1)/(s + 1) as s → 0 = −A = 10. Therefore,
F(s) = −10(s − 1)/[s(s + 1)] = r1/s + r2/(s + 1) = 10/s − 20/(s + 1) ⇔ (10 − 20e^(−t))u0(t), that is,
f(t) = (10 − 20e^(−t))u0(t) and we see that lim f(t) as t → ∞ = 10
This chapter presents applications of the Laplace transform. Several examples are given to illustrate how the Laplace transformation is applied to circuit analysis. Complex impedance, complex admittance, and transfer functions are also defined.
Figure 4.1. Resistive circuit in time domain and complex frequency domain
b. Inductor
The time and complex frequency domains for purely inductive circuits is shown in Figure 4.2.
Time Domain: vL(t) = L·diL/dt, iL(t) = (1/L) ∫ from −∞ to t of vL dt
Complex Frequency Domain: VL(s) = sL·IL(s) − L·iL(0^−), IL(s) = VL(s)/(Ls) + iL(0^−)/s
[Circuit representation: an inductor L carrying iL(t) in the time domain; an impedance sL in series with a voltage source L·iL(0^−) in the s-domain]
Figure 4.2. Inductive circuit in time domain and complex frequency domain
c. Capacitor
The time and complex frequency domains for purely capacitive circuits is shown in Figure 4.3.
[Figure: a 12u0(t) V source in series with R = 1 Ω and C = 1 F; vC(t) appears across the capacitor. The circuit is redrawn with the resistor current iR and the capacitor current iC marked at node A.]
[vC(t) − 12u0(t)]/1 + 1·dvC/dt = 0
or
dvC/dt + vC(t) = 12u0(t)   (4.1)
The Laplace transform of (4.1) is
sVC(s) − vC(0^−) + VC(s) = 12/s
or
(s + 1)VC(s) = 12/s + 6
or
VC(s) = (6s + 12)/[s(s + 1)]
By partial fraction expansion,
VC(s) = (6s + 12)/[s(s + 1)] = r1/s + r2/(s + 1)
r1 = (6s + 12)/(s + 1) |s = 0 = 12
r2 = (6s + 12)/s |s = −1 = −6
Therefore,
VC(s) = 12/s − 6/(s + 1) ⇔ 12 − 6e^(−t) = (12 − 6e^(−t))u0(t) = vC(t)
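The result can be confirmed with ilaplace, in the same way as in Chapter 3; a brief sketch:
syms s t; Vc = (6*s+12)/(s*(s+1)); vc = ilaplace(Vc)    % returns 12 - 6*exp(-t)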
Example 4.2
Use the Laplace transform method to find the current i C ( t ) through the capacitor for the circuit of
Figure 4.6, given that v C ( 0 − ) = 6 V .
[Figure 4.6: the circuit of Example 4.1 with the capacitor current iC(t) indicated — 12u0(t) V source, R = 1 Ω, C = 1 F, vC(t) across the capacitor]
Solution:
This is the same circuit as in Example 4.1. We apply KVL for the loop shown in Figure 4.7.
[Figure 4.7: the loop for the KVL equation — vS = 12u0(t) V, R = 1 Ω, C = 1 F, loop current iC(t), vC(t) across the capacitor]
(1 + 1/s)IC(s) = 12/s − 6/s = 6/s
[(s + 1)/s]·IC(s) = 6/s
or
IC(s) = 6/(s + 1) ⇔ iC(t) = 6e^(−t)u0(t)
Check: From Example 4.1,
–t
v C ( t ) = ( 12 – 6e )u 0 ( t )
Then,
iC(t) = C·dvC/dt = dvC/dt = d/dt [(12 − 6e^(−t))u0(t)] = 6e^(−t)u0(t) + 6δ(t)   (4.3)
The presence of the delta function in (4.3) is a result of the unit step that is applied at t = 0 .
Example 4.3
In the circuit of Figure 4.8, switch S 1 closes at t = 0 , while at the same time, switch S 2 opens. Use
the Laplace transform method to find v out ( t ) for t > 0 .
[Figure 4.8 shows: a 2 A current source is(t) with switch S2 (opens at t = 0); switch S1 (closes at t = 0) connecting R1 = 2 Ω; L1 = 0.5 H carrying iL1(t); C = 1 F with vC(0^−) = 3 V; L2 = 0.5 H; the output vout(t) is taken across R2 = 1 Ω]
Figure 4.8. Circuit for Example 4.3
Solution:
Since the circuit contains a capacitor and an inductor, we must consider two initial conditions One
is given as v C ( 0 − ) = 3 V . The other initial condition is obtained by observing that there is an initial
current of 2 A in inductor L 1 ; this is provided by the 2 A current source just before switch S 2
opens. Therefore, our second initial condition is i L1 ( 0 − ) = 2 A .
For t > 0 , we transform the circuit of Figure 4.8 into its s-domain* equivalent shown in Figure 4.9.
[Figure 4.9 shows the transformed circuit: the 2 Ω resistor, the capacitor as 1/s in series with a 3/s V source, L1 as 0.5s in series with a 1 V source, L2 as 0.5s, and the 1 Ω resistor across which Vout(s) is taken]
Figure 4.9. Transformed circuit of Example 4.3
In Figure 4.9 the current in L 1 has been replaced by a voltage source of 1 V . This is found from the
relation
L1·iL1(0^−) = (1/2) × 2 = 1 V   (4.4)
The polarity of this voltage source is as shown in Figure 4.9 so that it is consistent with the direction
of the current i L1 ( t ) in the circuit of Figure 4.8 just before switch S 2 opens.
* Henceforth, for convenience, we will refer the time domain as t-domain and the complex frequency domain as s-
domain
Vout(s) = 2s(s + 3)/(s^3 + 8s^2 + 10s + 4)   (4.6)
We will use MATLAB to factor the denominator D ( s ) of (4.6) into a linear and a quadratic factor.
p=[1 8 10 4]; r=roots(p) % Find the roots of D(s)
r =
-6.5708
-0.7146 + 0.3132i
-0.7146 - 0.3132i
y=expand((s + 0.7146 − 0.3132j)*(s + 0.7146 + 0.3132j))% Find quadratic form
y =
s^2+3573/2500*s+3043737/5000000
3573/2500 % Find coefficient of s
ans =
1.4292
3043737/5000000 % Find constant term
ans =
0.6087
Therefore,
Vout(s) = 2s(s + 3)/(s^3 + 8s^2 + 10s + 4) = 2s(s + 3)/[(s + 6.57)(s^2 + 1.43s + 0.61)]   (4.7)
Now, we perform partial fraction expansion.
Vout(s) = 2s(s + 3)/[(s + 6.57)(s^2 + 1.43s + 0.61)] = r1/(s + 6.57) + (r2 s + r3)/(s^2 + 1.43s + 0.61)   (4.8)
r1 = 2s(s + 3)/(s^2 + 1.43s + 0.61) |s = −6.57 = 1.36   (4.9)
Equating like powers of s, we also find
r2 = 0.64    r3 = −0.12   (4.11)
By substitution into (4.8),
Vout(s) = 1.36/(s + 6.57) + (0.64s − 0.12)/(s^2 + 1.43s + 0.61) = 1.36/(s + 6.57) + (0.64s + 0.46 − 0.58)/(s^2 + 1.43s + 0.51 + 0.1) *
or
Vout(s) = 1.36/(s + 6.57) + 0.64·(s + 0.715 − 0.91)/[(s + 0.715)^2 + (0.316)^2]
* We perform these steps to express the term (0.64s − 0.12)/(s^2 + 1.43s + 0.61) in a form that resembles the transform pairs e^(−at) cos ωt·u0(t) ⇔ (s + a)/[(s + a)^2 + ω^2] and e^(−at) sin ωt·u0(t) ⇔ ω/[(s + a)^2 + ω^2]. The remaining steps are carried out in (4.12).
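As a numerical cross-check of the expansion in (4.8) through (4.11), the residue function can be applied directly to (4.6); a brief sketch (the complex-conjugate residue pair corresponds to the quadratic factor):
Ns = [2, 6, 0]; Ds = [1, 8, 10, 4]; [r, p, k] = residue(Ns, Ds)
% r(1) is approximately 1.36 at p(1) = -6.57; the conjugate pair at -0.715 +/- j0.316
% corresponds to the quadratic factor s^2 + 1.43s + 0.61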
[Figure 4.10: s-domain series circuit with source VS(s), current I(s), elements R, sL, and 1/sC, and Vout(s) indicated]
Figure 4.10. Series RLC circuit in s-domain
For this circuit, the sum R + sL + 1/sC represents the total opposition to current flow. Then,
I(s) = VS(s)/(R + sL + 1/sC)   (4.14)
Z(s) ≡ VS(s)/I(s) = R + sL + 1/sC   (4.15)
I(s) = VS(s)/Z(s)   (4.16)
where
Z(s) = R + sL + 1/sC   (4.17)
Example 4.4
Find Z ( s ) for the circuit of Figure 4.11. All values are in Ω (ohms).
[Figure 4.11: network for Example 4.4 — VS(s) drives a series 1 Ω element, followed by a shunt branch s and a second branch consisting of 1/s in series with s]
Figure 4.11. Circuit for Example 4.4
Solution:
First Method:
We will first find I ( s ) , and we will compute Z ( s ) from (4.15). We assign the voltage V A ( s ) at node
A as shown in Figure 4.12.
[Figure 4.12: the same network with node A (voltage VA(s)) marked after the 1 Ω element and the source current I(s) indicated]
Figure 4.12. Circuit for finding I ( s ) in Example 4.4
By nodal analysis,
[VA(s) − VS(s)]/1 + VA(s)/s + VA(s)/(s + 1/s) = 0
[1 + 1/s + 1/(s + 1/s)]·VA(s) = VS(s)
VA(s) = [(s^3 + s)/(s^3 + 2s^2 + s + 1)]·VS(s)
I(s) = [VS(s) − VA(s)]/1 = [1 − (s^3 + s)/(s^3 + 2s^2 + s + 1)]·VS(s) = [(2s^2 + 1)/(s^3 + 2s^2 + s + 1)]·VS(s)
and thus,
Z(s) = VS(s)/I(s) = (s^3 + 2s^2 + s + 1)/(2s^2 + 1)   (4.18)
Second Method:
[Figure 4.13: the same network labeled for series–parallel combination — Z1 = 1, Z2 = s, Z3 = 1/s, Z4 = s, with Z(s) seen looking into terminals a and b]
Figure 4.13. Computation of the impedance of Example 4.4 by series − parallel combinations
To find the equivalent impedance Z ( s ) , looking to the right of terminals a and b , we start on the
right side of the network and we proceed to the left combining impedances as we combine resis-
tances. Then,
Z ( s ) = [ ( Z 3 + Z 4 ) || Z 2 ] + Z 1
Z(s) = s(s + 1/s)/(s + s + 1/s) + 1 = (s^2 + 1)/[(2s^2 + 1)/s] + 1 = (s^3 + s)/(2s^2 + 1) + 1 = (s^3 + 2s^2 + s + 1)/(2s^2 + 1)   (4.19)
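Either form of Z(s) can be checked with the Symbolic Math Toolbox, in the same manner as the check of Example 4.5 below; a brief sketch:
syms s
Z = s*(s + 1/s)/(s + s + 1/s) + 1;     % [(Z3 + Z4) || Z2] + Z1
simplify(Z)                            % returns (s^3 + 2*s^2 + s + 1)/(2*s^2 + 1)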
[Figure 4.14: parallel GLC circuit in the s-domain — current source IS(s), node voltage V(s), and the admittances G, 1/sL, and sC]
Figure 4.14. Parallel GLC circuit in s-domain
For this circuit,
GV(s) + (1/sL)V(s) + sCV(s) = I(s)
(G + 1/sL + sC)·V(s) = I(s)
Y(s) ≡ I(s)/V(s) = G + 1/sL + sC = 1/Z(s)   (4.20)
V(s) = IS(s)/Y(s)   (4.21)
where
Y(s) = G + 1/sL + sC   (4.22)
Example 4.5
Compute Z ( s ) and Y ( s ) for the circuit of Figure 4.15. All values are in Ω (ohms). Verify your
answers with MATLAB.
[Figure 4.15: network for Example 4.5 — Z1 (13s in series with 8/s) in series with the parallel combination of Z2 (10 in series with 5s) and Z3 (20 in series with 16/s); Z(s) and Y(s) are seen at the input terminals]
Z1 = 13s + 8/s = (13s^2 + 8)/s
Z2 = 10 + 5s
Z3 = 20 + 16/s = 4(5s + 4)/s
Then,
Z(s) = Z1 + Z2·Z3/(Z2 + Z3) = (13s^2 + 8)/s + (10 + 5s)·[4(5s + 4)/s] / [10 + 5s + 4(5s + 4)/s]
     = (13s^2 + 8)/s + (10 + 5s)·4(5s + 4)/(5s^2 + 30s + 16) = (13s^2 + 8)/s + 20(5s^2 + 14s + 8)/(5s^2 + 30s + 16)
     = (65s^4 + 490s^3 + 528s^2 + 400s + 128)/[s(5s^2 + 30s + 16)]
Check with MATLAB:
syms s; z1 = 13*s + 8/s; z2 = 5*s + 10; z3 = 20 + 16/s; z = z1 + z2 * z3 / (z2+z3)
z =
13*s+8/s+(5*s+10)*(20+16/s)/(5*s+30+16/s)
z10 = simplify(z)
z10 =
(65*s^4+490*s^3+528*s^2+400*s+128)/s/(5*s^2+30*s+16)
pretty(z10)
4 3 2
65 s + 490 s + 528 s + 400 s + 128
-------------------------------------
2
s (5 s + 30 s + 16)
The complex input admittance Y ( s ) is found by taking the reciprocal of Z ( s ) , that is,
Y(s) = 1/Z(s) = s(5s^2 + 30s + 16)/(65s^4 + 490s^3 + 528s^2 + 400s + 128)   (4.23)
Gv(s) ≡ Vout(s)/Vin(s)   (4.24)
Similarly, the ratio of the output current Iout(s) to the input current Iin(s) under zero state conditions, is called the current transfer function denoted as Gi(s), that is,
Gi(s) ≡ Iout(s)/Iin(s)   (4.25)
The current transfer function of (4.25) is rarely used; therefore, from now on, the transfer function will have the meaning of the voltage transfer function, i.e.,
G(s) ≡ Vout(s)/Vin(s)   (4.26)
Example 4.6
Derive an expression for the transfer function G ( s ) for the circuit of Figure 4.17, where R g repre-
sents the internal resistance of the applied (source) voltage V S , and R L represents the resistance of
the load that consists of R L , L , and C .
[Figure 4.17: source vg with internal resistance Rg driving the load; the output vout is taken across the series combination of RL, L, and C]
Figure 4.17. Circuit for Example 4.6
Solution:
No initial conditions are given, and even if they were, we would disregard them since the transfer
function was defined as the ratio of the output voltage V out ( s ) to the input voltage V in ( s ) under
+
RL
Rg
V in ( s )
sL
`V out ( s )
+
− 1
------
sC
−
Figure 4.18. The s-domain circuit for Example 4.6
The transfer function G ( s ) is readily found by application of the voltage division expression of the
s – domain circuit of Figure 4.18, i.e.,
Vout(s) = [(RL + sL + 1/sC)/(Rg + RL + sL + 1/sC)]·Vin(s)
Then,
G(s) = Vout(s)/Vin(s) = (RL + Ls + 1/sC)/(Rg + RL + Ls + 1/sC)   (4.27)
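A symbolic form of (4.27) can also be generated directly from the voltage division expression; a brief sketch (MATLAB may display an equivalent, rearranged form):
syms s Rg RL L C
G = (RL + s*L + 1/(s*C))/(Rg + RL + s*L + 1/(s*C));
simplify(G)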
Example 4.7
Compute the transfer function G ( s ) for the circuit of Figure 4.19 in terms of the circuit constants
R1, R2, R3, C1, and C2. Then, replace the complex variable s with jω, and the circuit constants with
their numerical values and plot the magnitude G ( s ) = V out ( s ) ⁄ V in ( s ) versus radian frequency ω .
[Figure 4.19: op amp circuit with R1 = 200 kΩ, R2 = 40 kΩ, R3 = 50 kΩ, C1 = 25 nF, C2 = 10 nF; vin is the input and vout the output]
[Figure 4.20: the s-domain circuit with node voltages V1(s) and V2(s), the elements R1, R2, R3, 1/sC1, 1/sC2, input Vin(s), and output Vout(s)]
[V2(s) − V1(s)]/R3 = Vout(s)/(1/sC2)   (4.29)
V1(s) = (−sR3C2)·Vout(s)   (4.30)
and by substitution of (4.30) into (4.28), rearranging, and collecting like terms, we get:
[(1/R1 + 1/R2 + 1/R3 + sC1)(−sR3C2) − 1/R2]·Vout(s) = (1/R1)·Vin(s)
or
G(s) = Vout(s)/Vin(s) = −1 / {R1·[(1/R1 + 1/R2 + 1/R3 + sC1)(sR3C2) + 1/R2]}   (4.31)
By substitution of s with jω and the given values for resistors and capacitors, we get
G(jω) = −1 / {2×10^5·[(1/(20×10^3) + j2.5×10^(−8)ω)·(j5×10^(−8)×10^4·ω) + 1/(4×10^4)]}
or
G(jω) = Vout(jω)/Vin(jω) = −1/(2.5×10^(−6)ω^2 − j5×10^(−3)ω + 5)   (4.32)
We use MATLAB to plot the magnitude of (4.32) on a semilog scale with the following code:
w=1:10:10000; Gs=−1./(2.5.*10.^(−6).*w.^2−5.*j.*10.^(−3).*w+5);
semilogx(w,abs(Gs)); grid; hold on
xlabel('Radian Frequency w'); ylabel('|Vout/Vin|');
title('Magnitude Vout/Vin vs. Radian Frequency')
The plot is shown in Figure 4.21. We observe that the given op amp circuit is a second order low-
pass filter whose cutoff frequency ( – 3 dB ) occurs at about 700 r ⁄ s .
4.5 Summary
• The Laplace transformation provides a convenient method of analyzing electric circuits since
integrodifferential equations in the t – domain are transformed to algebraic equations in the
s – domain .
• In the s-domain the terms sL and 1/sC are called complex inductive impedance and complex capacitive impedance respectively. Likewise, the terms sC and 1/sL are called complex capacitive admittance and complex inductive admittance respectively.
• The expression
Z(s) = R + sL + 1/sC
is a complex quantity, and it is referred to as the complex input impedance of an s-domain RLC series circuit.
I(s) = VS(s)/Z(s)
• The expression
Y(s) = G + 1/sL + sC
is a complex quantity, and it is referred to as the complex input admittance of an s-domain GLC parallel circuit.
V(s) = IS(s)/Y(s)
• In an s – domain circuit, the ratio of the output voltage V out ( s ) to the input voltage V in ( s )
under zero state conditions is referred to as the voltage transfer function and it is denoted as
G ( s ) , that is,
G(s) ≡ Vout(s)/Vin(s)
4.6 Exercises
1. In the circuit of Figure 4.22, switch S has been closed for a long time, and opens at t = 0 . Use
the Laplace transform method to compute i L ( t ) for t > 0 .
[Figure 4.22: 32 V source, R1 = 10 Ω in series with switch S (opens at t = 0), R2 = 20 Ω, and L = 1 mH carrying iL(t)]
[Circuit for Exercise 2: 72 V source, R1 = 6 kΩ, R2 = 60 kΩ, R3 = 30 kΩ with switch S at t = 0, R4 = 20 kΩ, R5 = 10 kΩ, and C = 40/9 µF with vC(t) across it]
[Circuit for Exercise 3: sources v1(t) = u0(t) and v2(t) = 2u0(t), R1 = 1 Ω, R2 = 3 Ω, L1 = 2 H, L2 = 1 H, C = 1 F, with mesh currents i1(t) and i2(t)]
b. compute the t – domain value of i 1 ( t ) when v 1 ( t ) = u 0 ( t ) , and all initial conditions are zero.
[Circuit for Exercise 4: source V1(s), R1 = 1 Ω, R2 = 1 Ω, R3 = 3 Ω, R4 = 2 Ω, capacitor 1/s with voltage VC(s), current I1(s), and dependent source V2(s) = 2VC(s)]
5. Derive the transfer functions for the networks (a) and (b) of Figure 4.26.
[Figure 4.26: network (a) — series R with the output Vout(s) across C; network (b) — series L with the output Vout(s) across R]
6. Derive the transfer functions for the networks (a) and (b) of Figure 4.27.
[Figure 4.27: network (a) — series C with the output Vout(s) across R; network (b) — series R with the output Vout(s) across L]
7. Derive the transfer functions for the networks (a) and (b) of Figure 4.28.
[Figure 4.28: network (a) — series L and C with the output Vout(s) across R; network (b) — series R with the output Vout(s) across the series combination of L and C]
8. Derive the transfer function for the networks (a) and (b) of Figure 4.29.
[Figure 4.29: network (a) — inverting op amp with input resistor R1 and feedback consisting of R2 in parallel with C; network (b) — inverting op amp with input branch R1 in series with C and feedback resistor R2]
Figure 4.29. Networks for Exercise 8
9. Derive the transfer function for the network of Figure 4.30. Using MATLAB, plot G ( s ) versus
frequency in Hertz, on a semilog scale.
[Network for Exercise 9: op amp circuit with R1 = 11.3 kΩ, R2 = 22.6 kΩ, R3 = R4 = 68.1 kΩ, C1 = C2 = 0.01 µF; Vin(s) is the input and Vout(s) the output]
Then,
iL(t) |t = 0^− = 32/10 = 3.2 A
and thus the initial condition has been established as iL(0^−) = 3.2 A
For all t > 0 the t-domain and s-domain circuits are as shown below.
[t-domain: 20 Ω in series with L = 1 mH carrying iL(0^−) = 3.2 A; s-domain: 20 Ω in series with 10^(−3)·s and a voltage source L·iL(0^−) = 3.2×10^(−3) V, with current IL(s)]
2. At t = 0^− the t-domain circuit is as shown below.
[6 KΩ in series with the parallel combination of 60 KΩ and the branch 30 KΩ + 20 KΩ + 10 KΩ; iT(t) is the total current, i2(t) the branch current, and vC(t) is taken across the 20 KΩ + 10 KΩ portion]
Then,
iT(0^−) = 72 V/(6 KΩ + 60 KΩ || 60 KΩ) = 72 V/(6 KΩ + 30 KΩ) = 72 V/36 KΩ = 2 mA
and
i2(0^−) = (1/2)·iT(0^−) = 1 mA
Therefore, the initial condition is
vC(0^−) = (20 KΩ + 10 KΩ)·i2(0^−) = (30 KΩ)·(1 mA) = 30 V
[For t > 0, the s-domain circuit reduces to the capacitor 9×10^6/(40s) in series with the equivalent resistance 22.5 KΩ and the source 30/s, where]
(60 KΩ + 30 KΩ) || (20 KΩ + 10 KΩ) = 22.5 KΩ
VC(s) = VR = [22.5×10^3/(9×10^6/(40s) + 22.5×10^3)]·(30/s) = (30 × 22.5×10^3)/(9×10^6/40 + 22.5×10^3·s)
      = [(30 × 22.5×10^3)/(22.5×10^3)]/[9×10^6/(40 × 22.5×10^3) + s] = 30/(10 + s)
Then,
VC(s) = 30/(s + 10) ⇔ 30e^(−10t)u0(t) V = vC(t)
[s-domain mesh circuit for Exercise 3 with impedances z1 (containing 2s), z2 = 1/s, z3 (containing s and 3), sources 1/s and 2/s, and mesh currents I1(s) and I2(s)]
Then,
(z1 + z2)I1(s) − z2·I2(s) = 1/s
−z2·I1(s) + (z2 + z3)I2(s) = −2/s
and in matrix form
[ z1 + z2    −z2     ]   [ I1(s) ]   [  1/s ]
[  −z2      z2 + z3  ] · [ I2(s) ] = [ −2/s ]
Using MATLAB we get
Z=[z1+z2 −z2; −z2 z2+z3]; Vs=[1/s −2/s]'; Is=Z\Vs; fprintf(' \n');...
disp('Is1 = '); pretty(Is(1)); disp('Is2 = '); pretty(Is(2))
Is1 = (2s^2 + s − 1)/[(2s^3 + 9s^2 + 6s + 3)·conj(s)]
Is2 = −(4s^2 + s + 1)/[(2s^3 + 9s^2 + 6s + 3)·conj(s)]
Therefore,
I1(s) = (s^2 + 2s − 1)/(2s^3 + 9s^2 + 6s + 3)   (1)
I2(s) = −(4s^2 + s + 1)/(2s^3 + 9s^2 + 6s + 3)   (2)
We express the denominator of (1) as a product of a linear and quadratic term using MATLAB.
p=[2 9 6 3]; r=roots(p); fprintf(' \n'); disp('root1 ='); disp(r(1));...
disp('root2 ='); disp(r(2)); disp('root3 ='); disp(r(3)); disp('root2 + root3 =');
disp(r(2)+r(3));...
disp('root2 * root3 ='); disp(r(2)*r(3))
root1 =
-3.8170
root2 =
-0.3415 + 0.5257i
root3 =
-0.3415 - 0.5257i
root2 + root3 =
-0.6830
root2 * root3 =
0.3930
and with these values (1) is written as
I1(s) = (s^2 + 2s − 1)/[(s + 3.817)(s^2 + 0.683s + 0.393)] = r1/(s + 3.817) + (r2 s + r3)/(s^2 + 0.683s + 0.393)   (3)
Equating s^2, s, and constant terms we get
r1 + r2 = 1
0.683r1 + 3.817r2 + r3 = 2
0.393r1 + 3.817r3 = −1
By inspection, the Inverse Laplace of first term on the right side of (4) is
0.48
------------------------ ⇔ 0.48e – 3.82t (5)
( s + 3.82 )
The second term on the right side of (4) requires some manipulation. Therefore, we will use the
MATLAB ilaplace(s) function to find the Inverse Laplace as shown below.
syms s t
IL=ilaplace((0.52*s−0.31)/(s^2+0.68*s+0.39));
pretty(IL)
1217 17 1/2 1/2
- ---- exp(- -- t) 14 sin(7/50 14 t)
4900 50
13 17 1/2
+ -- exp(- -- t) cos(7/50 14 t)
25 50
Thus,
– 3.82t – 0.34t – 0.34t
i 1 ( t ) = 0.48e – 0.93e sin 0.53t + 0.52e cos 0.53t
2
Equating s , s , and constant terms we get
r1 + r2 = –4
0.683r 1 + 3.817r 2 + r 3 = – 1
0.393r 1 + 3.817r 3 = – 1
By inspection, the Inverse Laplace of first term on the right side of (7) is
−4.47/(s + 3.82) ⇔ −4.47e^(−3.82t)   (8)
The second term on the right side of (7) requires some manipulation. Therefore, we will use the
MATLAB ilaplace(s) function to find the Inverse Laplace as shown below.
syms s t
IL=ilaplace((0.49*s+0.20)/(s^2+0.68*s+0.39)); pretty(IL)
167 17 1/2 1/2
---- exp(- -- t) 14 sin(7/50 14 t)
9800 50
49 17 1/2
+ --- exp(- -- t) cos(7/50 14 t)
100 50
Thus,
– 3.82t – 0.34t – 0.34t
i 2 ( t ) = – 4.47 e + 0.06e sin 0.53t + 0.49e cos 0.53t
4.
[s-domain circuit for Exercise 4: source V1(s), R1 = 1 Ω, R2 = 1 Ω, R3 = 3 Ω, R4 = 2 Ω, capacitor 1/s with VC(s) across it, mesh currents I1(s) and I2(s), and dependent source V2(s) = 2VC(s)]
a. Mesh 1:
( 2 + 1 ⁄ s ) ⋅ I1 ( s ) – I2 ( s ) = V1 ( s )
or
6 ( 2 + 1 ⁄ s ) ⋅ I 1 ( s ) – 6I 2 ( s ) = 6V 1 ( s ) (1)
Mesh 2:
– I 1 ( s ) + 6I 2 ( s ) = – V 2 ( s ) = – ( 2 ⁄ s )I 1 ( s ) (2)
( 11 + 8 ⁄ s ) ⋅ I 1 ( s ) = 6V 1 ( s )
and thus
Y(s) = I1(s)/V1(s) = 6/(11 + 8/s) = 6s/(11s + 8)
b. With V1(s) = 1/s we get
I1(s) = Y(s)·V1(s) = [6s/(11s + 8)]·(1/s) = 6/(11s + 8) = (6/11)/(s + 8/11) ⇔ (6/11)e^(−(8/11)t) = i1(t)
5.
Circuit (a):
[series R with the output Vout(s) across 1/Cs]
Vout(s) = [(1/Cs)/(R + 1/Cs)]·Vin(s)
and
G(s) = Vout(s)/Vin(s) = (1/Cs)/(R + 1/Cs) = (1/Cs)/[(RCs + 1)/Cs] = 1/(RCs + 1) = (1/RC)/(s + 1/RC)
Circuit (b):
[series L with the output Vout(s) across R]
Vout(s) = [R/(Ls + R)]·Vin(s)
and
G(s) = Vout(s)/Vin(s) = R/(Ls + R) = (R/L)/(s + R/L)
6.
Circuit (a):
[series C with the output Vout(s) across R]
Vout(s) = [R/(1/Cs + R)]·Vin(s)
and
G(s) = Vout(s)/Vin(s) = R/(1/Cs + R) = RCs/(RCs + 1) = s/(s + 1/RC)
Circuit (b):
[series R with the output Vout(s) across L]
Vout(s) = [Ls/(R + Ls)]·Vin(s)
and
G(s) = Vout(s)/Vin(s) = Ls/(R + Ls) = s/(s + R/L)
7.
Circuit (a):
[series L and C with the output Vout(s) across R]
Vout(s) = [R/(Ls + 1/Cs + R)]·Vin(s)
and
G(s) = Vout(s)/Vin(s) = R/(Ls + 1/Cs + R) = RCs/(LCs^2 + 1 + RCs) = (R/L)s/[s^2 + (R/L)s + 1/LC]
This circuit is a second-order band-pass filter.
Circuit (b):
[series R with the output Vout(s) across the series combination of L and C]
Vout(s) = [(Ls + 1/Cs)/(R + Ls + 1/Cs)]·Vin(s)
and
G(s) = Vout(s)/Vin(s) = (Ls + 1/Cs)/(R + Ls + 1/Cs) = (LCs^2 + 1)/(LCs^2 + RCs + 1) = (s^2 + 1/LC)/[s^2 + (R/L)s + 1/LC]
This circuit is a second-order band-elimination (band-reject) filter.
8.
Circuit (a):
[inverting op amp: input resistor R1, feedback R2 in parallel with C]
Let z1 = R1 and z2 = (R2 × 1/Cs)/(R2 + 1/Cs), and since for the inverting op amp Vout(s)/Vin(s) = −z2/z1, for this circuit
G(s) = Vout(s)/Vin(s) = −[(R2 × 1/Cs)/(R2 + 1/Cs)]/R1 = −(R2 × 1/Cs)/[R1·(R2 + 1/Cs)] = −(1/R1C)/(s + 1/R2C)
Circuit (b):
R2
R1 C
V in ( s ) V out ( s )
V (s) z
Let z 1 = R 1 + 1 ⁄ Cs and z 2 = R 2 and since for inverting op-amp ----------------
out
- = – ---2- , for this circuit
V in ( s ) z1
V out ( s ) –R2 – ( R 2 ⁄ R 1 )s
G ( s ) = ----------------
- = -------------------------
- = --------------------------
V in ( s ) R 1 + 1 ⁄ Cs s + 1 ⁄ R1C
R1 R2 V3 C1=C2 = 0.01 µF
V2
V out ( s )
V in ( s ) C1
C2
At Node V 1 :
V 1 ( s ) V 1 ( s ) – V out ( s )
------------- + ------------------------------------- = 0
R3 R4
or
⎛ ----- 1- ⎞ V ( s ) = -----
1- + ----- 1
- V ( s ) (1)
⎝R R4 ⎠ 1
R 4 out
3
At Node V 3 :
V3 ( s ) – V2 ( s ) V3 ( s )
--------------------------------- + ---------------- = 0
R2 1 ⁄ C1 s
V1 ( s ) – V2 ( s )
- + C 1 sV 1 ( s ) = 0
--------------------------------
R2
or
⎛ -----
1
- + C 1 s⎞ V 1 ( s ) = ------ V 2 ( s ) (2)
1
⎝R ⎠ R 2
2
At Node V 2 :
V 2 ( s ) – V in ( s ) V 2 ( s ) – V 1 ( s ) V 2 ( s ) – V out ( s )
---------------------------------- + --------------------------------- + ------------------------------------- = 0
R1 R2 1 ⁄ C2 s
or
⎛ ----- V in ( s ) V 1 ( s )
- + ------ + C 2 s⎞ V 2 ( s ) = --------------
1 1
- + ------------- + C 2 sV out ( s ) (3)
⎝R R2 ⎠ R1 R2
1
From (1)
( 1 ⁄ R4 ) R3
V 1 ( s ) = --------------------------------------- V out ( s ) = ----------------------- V out ( s ) (4)
( R3 + R4 ) ⁄ R3 R4 ( R3 + R4 )
From (2)
V 2 ( s ) = R 2 ⎛ ------ + C 1 s⎞ V 1 ( s ) = ( 1 + R 2 C 1 s )V 1 ( s )
1
⎝R ⎠
2
and thus
V out ( s ) 1
G ( s ) = ----------------
- = --------------------------------------------------------------------------------------------------------------------------------------------
V in ( s ) R3 ( 1 + R2 C1 s ) 1 R3
R 1 ⎛ ------ + ------ + C 2 s⎞ ------------------------------------ – ------ ----------------------- – C 2 s
1 1
⎝R R2 ⎠ ( R3 + R4 ) R2 ( R3 + R4 )
1
7
7.83 × 10
G ( s ) = ---------------------------------------------------------------------
-
2 4 7
s + 1.77 × 10 s + 5.87 × 10
w=1:10:10000; s=j.*w; Gs=7.83.*10.^7./(s.^2+1.77.*10.^4.*s+5.87.*10.^7);...
semilogx(w,abs(Gs)); grid; hold on
xlabel('Radian Frequency w'); ylabel('|Vout/Vin|');
title('Magnitude Vout/Vin vs. Radian Frequency')
The plot above indicates that this circuit is a second-order low-pass filter.
his chapter is an introduction to state variables and state equations as they apply in circuit anal-
T ysis. The state transition matrix is defined, and the state space-to-transfer function equivalence
is presented. Several examples are given to illustrate their application.
Example 5.1
A series RLC circuit with excitation
jωt
vS ( t ) = e (5.2)
* These are discussed in “Numerical Analysis using MATLAB and Spreadsheets” ISBN 0-9709511-1-6.
or
2
d t R di 1 1 jωt
------- = – --- ----- – ------- i + --- jωe (5.5)
dt
2 L dt LC L
x1 = i (5.6)
and
dx
x 2 = di
----- = --------1 = x· 1 (5.7)
dt dt
Then,
2 2
x· 2 = d i ⁄ dt (5.8)
It is convenient and customary to express the state equations in matrix* form. Thus, we write the
state equations of (5.9) as
x· 1 0 1 x 0
= 1
1
+ u (5.10)
x· 2 – ------
- –R
--- x 2 --1- j ω e jωt
LC L L
x· 0 1 x1 0
x· = 1 , A = 1 , x = , b= 1 , and u = any input (5.12)
x· 2 – ------
- –R
--- x2 --- j ω e jωt
LC L L
x· = Ax + bu
y = Cx + du (5.14)
The state space equations of (5.14) can be realized with the block diagram of Figure 5.1.
x· x
Σ Σ
+ +
u b ∫ dt C y
+ +
Figure 5.1. Block diagram for the realization of the state equations of (5.14)
We will learn how to solve the matrix equations of (5.14) in the subsequent sections.
Example 5.2
A fourth-order network is described by the differential equation
4 3 2
d y d y d y dy
--------- + a 3 --------3- + a 2 -------2- + a 1 ------ + a 0 y ( t ) = u ( t ) (5.15)
dt
4
dt dt dt
where y ( t ) is the output representing the voltage or current of the network, and u ( t ) is any input.
Express (5.15) as a set of state equations.
Solution:
The differential equation of (5.15) is of fourth-order; therefore, we must define four state variables
that will be used with the resulting four first-order state equations.
We denote the state variables as x 1, x 2, x 3 , and x 4 , and we relate them to the terms of the given dif-
ferential equation as
2 3
dy d y d y
x1 = y ( t ) x 2 = ------ x 3 = --------- x 4 = --------- (5.16)
dt 2 3
dt dt
We observe that
x· 1 = x 2
x· 2 = x 3
x· 3 = x 4 (5.17)
4
d y
--------- = x· 4 = – a 0 x 1 – a 1 x 2 – a 2 x 3 – a 3 x 4 + u ( t )
4
dt
and in matrix form
x· 1 0 1 0 0 x1 0
x· 2 0 0 1 0 x2
= + 0 u(t) (5.18)
x· 3 0 0 0 1 x3 0
x· 4 –a0 –a1 –a2 –a3 x4 1
We can also obtain the state equations directly from given circuits. We choose the state variables to
represent inductor currents and capacitor voltages. In other words, we assign state variables to
energy storing devices. The examples that follow illustrate the procedure.
Example 5.3
Write state equation(s) for the circuit of Figure 5.2, given that v C ( 0 − ) = 0 .
R
C +
+ v C ( t ) = v out ( t )
− −
vS u0 ( t )
Solution:
This circuit contains only one energy-storing device, the capacitor. Therefore, we need only one
state variable. We choose the state variable to denote the voltage across the capacitor as shown in
Figure 5.3. The output is defined as the voltage across the capacitor.
R
+ vR ( t ) −
C +
+ v C ( t ) = v out ( t ) = x
− −
i
vS u0 ( t )
Figure 5.3. Circuit for Example 5.3 with state variable x assigned to it
Example 5.4
Write state equation(s) for the circuit of Figure 5.4 assuming i L ( 0 − ) = 0 , and the output y is defined
as y = i ( t ) .
R
+ i(t)
`
L
−
vS u0 ( t )
Solution:
This circuit contains only one energy-storing device, the inductor; therefore, we need only one state
variable. We choose the state variable to denote the current through the inductor as shown in Figure
5.5.
R
i(t) = x
+
`
L
−
vS u0 ( t )
Figure 5.5. Circuit for Example 5.4 with state variable x assigned to it
By KVL,
vR + vL = vS u0 ( t )
or
di
Ri + L ----- = v S u 0 ( t )
dt
or
Rx + Lx· = v S u 0 ( t )
Therefore, the state equations are
R 1
x· = – --- x + --- v S u 0 ( t )
L L (5.21)
y = x
where α , β , k 1 , and k 2 are scalar constants, and the initial condition, if non-zero, is denoted as
x0 = x ( t0 ) (5.23)
We will now prove that the solution of the first state equation in (5.22) is
α ( t – t0 ) t
αt – ατ
x( t) = e x0 + e ∫t e
0
β u ( τ ) dτ (5.24)
Proof:
First, we must show that (5.24) satisfies the initial condition of (5.23). This is done by substitution of
t = t 0 in (5.24). Then,
t0
α ( t0 – t0 ) αt –α τ
x ( t0 ) = e x0 + e ∫t 0
e β u ( τ ) dτ (5.25)
d α ( t – t0 ) d ⎧ αt t
– ατ ⎫
x· ( t ) = ----- ( e
dt
x 0 ) + ----- ⎨ e
dt ⎩ ∫t e 0
β u ( τ ) dτ ⎬
⎭
or
α ( t – t0 ) t
αt – ατ αt – ατ
x· ( t ) = α e x0 + α e ∫t 0
e β u ( τ ) dτ + e [ e βu(τ )] τ = t
α ( t – t0 ) t
αt – ατ αt – αt
= α e x0 + e ∫t 0
e β u ( τ ) dτ + e e βu( t)
or
α ( t – t0 ) t
α(t – τ)
x· ( t ) = α e x0 + ∫t e
0
β u ( τ ) dτ + β u ( t ) (5.27)
We observe that the bracketed terms of (5.27) are the same as the right side of the assumed solution
of (5.24). Therefore,
x· = α x + β u
and this is the same as the first equation of (5.22).
In summary, if α and β are scalar constants, the solution of
x· = α x + β u (5.28)
with initial condition
x0 = x ( t0 ) (5.29)
Example 5.5
Use (5.28) through (5.30) to find the capacitor voltage v c ( t ) of the circuit of Figure 5.6 for t > 0 ,
given that the initial condition is v C ( 0 − ) = 1 V
R=2 Ω
+
+
− vC ( t )
C=0.5 F −
2u 0 ( t )
Solution:
From (5.20) of Example 5.3,
1
x· = – -------- x + v S u 0 ( t )
RC
and by comparison with (5.28),
1 –1
α = – -------- = ---------------- = – 1
RC 2 × 0.5
and
β = 2
or
–t
v C ( t ) = x ( t ) = ( 2 – e )u 0 ( t ) (5.31)
If we assume that the output y is the capacitor voltage, the output state equation is
–t
y ( t ) = x ( t ) = ( 2 – e )u 0 ( t ) (5.32)
where for two or more simultaneous differential equations, A and C are 2 × 2 or higher order
matrices, and b and d are column vectors with two or more rows. In this section we will introduce
At
the state transition matrix e , and we will prove that the solution of the matrix differential equation
x· = Ax + bu (5.34)
with initial conditions
x ( t0 ) = x0 (5.35)
Proof:
Let A be any n × n matrix whose elements are constants. Then, another n × n matrix denoted as
ϕ ( t ) , is said to be the state transition matrix of (5.34), if it is related to the matrix A as the matrix
power series
At 1 22 1 33 1 nn
ϕ(t) ≡ e = I + At + ----- A t + ----- A t + … + ----- A t (5.37)
2! 3! n!
where we have used (5.38) for the initial condition. The integral is zero since the upper and lower
limits of integration are the same.
To prove that (5.34) is also satisfied, we differentiate the assumed solution
A ( t – t0 ) t
–A τ
∫t e
At
x( t) = e x0 + e bu ( τ ) dτ
0
or
A ( t – t0 ) t
–A τ At – A t
∫t e
At
x· ( t ) = A e x0 + e bu ( τ ) dτ + e e bu ( t ) (5.42)
0
We recognize the bracketed terms in (5.42) as x ( t ) , and the last term as bu ( t ) . Thus, the expression
(5.42) reduces to
x· ( t ) = Ax + bu
is
A ( t – t0 ) t
–A τ
∫t e
At
x(t) = e x0 + e bu ( τ ) dτ (5.45)
0
Therefore, the solution of second or higher order circuits using the state variable method, entails the
At
computation of the state transition matrix e , and integration of (5.45).
At
5.4 Computation of the State Transition Matrix e
Let A be an n × n matrix, and I be the n × n identity matrix. By definition, the eigenvalues λ i ,
i = 1, 2, …, n of A are the roots of the nth order polynomial
det [ A – λI ] = 0 (5.46)
We recall that expansion of a determinant produces a polynomial. The roots of the polynomial of
(5.46) can be real (unequal or equal), or complex numbers.
At
Evaluation of the state transition matrix e is based on the Cayley-Hamilton theorem. This theorem
states that a matrix can be expressed as an ( n – 1 )th degree polynomial in terms of the matrix A as
At 2 n–1
e = a0 I + a1 A + a2 A + … + an – 1 A (5.47)
We accept (5.47) without proving it. The proof can be found in Linear Algebra and Matrix Theory
textbooks.
Since the coefficients a i are functions of the eigenvalues λ , we must consider the following cases:
2 n–1 λ1 t
a0 + a1 λ1 + a2 λ1 + … + an – 1 λ1 = e
2 n–1 λ2 t
a0 + a1 λ2 + a2 λ2 + … + an – 1 λ2 = e
(5.48)
…
2 n–1 λn t
a0 + a1 λn + a2 λn + … + an – 1 λn = e
Example 5.6
given that A = – 2 1
At
Compute the state transition matrix e
0 –1
Solution:
We must first find the eigenvalues λ of the given matrix A . These are found from the expansion of
det [ A – λI ] = 0
For this example,
⎧ ⎫
det [ A – λI ] = det ⎨ – 2 1 – λ 1 0 ⎬ = det – 2 – λ 1 = 0
⎩ 0 –1 0 1 ⎭ 0 –1–λ
= (– 2 – λ)(– 1 – λ) = 0
or
(λ + 1 )( λ + 2) = 0
Therefore,
λ 1 = – 1 and λ 2 = – 2 (5.49)
Next, we must find the coefficients a i of (5.47). Since A is a 2 × 2 matrix, we only need to consider
the first two terms of that relation, that is,
At
e = a0 I + a1 A (5.50)
The coefficients a 0 and a 1 are found from (5.48). For this example,
λ1 t
a0 + a1 λ1 = e
λ2 t
a0 + a1 λ2 = e
or
–t
a0 + a1 ( –1 ) = e
(5.51)
– 2t
a0 + a1 ( –2 ) = e
) –2 1
At –t – 2t 1 0 –t – 2t
e = ( 2e – e ) + (e – e
0 1 0 –1
or
– 2t –t – 2t
e
At
= e e –e (5.53)
–t
0 e
At
In summary, we compute the state transition matrix e for a given matrix A using the following
procedure:
1. We find the eigenvalues λ from det [ A – λI ] = 0 . We can write [ A – λI ] at once by subtracting
λ from each of the main diagonal elements of A . If the dimension of A is a 2 × 2 matrix, it will
yield two eigenvalues; if it is a 3 × 3 matrix, it will yield three eigenvalues, and so on. If the eigen-
values are distinct, we perform steps 2 through 4; otherwise we refer to Case II below.
2. If the dimension of A is a 2 × 2 matrix, we use only the first 2 terms of the right side of the state
transition matrix
At 2 n–1
e = a0 I + a1 A + a 2 A + … + an – 1 A (5.54)
2 n–1 λ1 t
a0 + a1 λ1 + a2 λ1 + … + an – 1 λ1 = e
2 n–1 λ2 t
a0 + a1 λ2 + a2 λ2 + … + an – 1 λ2 = e
…
2 n–1 λn t
a0 + a1 λn + a2 λn + … + an – 1 λn = e
We use as many equations as the number of the eigenvalues, and we solve for the coefficients a i .
4. We substitute the a i coefficients into the state transition matrix of (5.54), and we simplify.
Example 5.7
At
Compute the state transition matrix e given that
5 7 –5
A = 0 4 –1 (5.55)
2 8 –3
Solution:
1. We first compute the eigenvalues from det [ A – λI ] = 0 . We obtain [ A – λI ] at once, by subtract-
ing λ from each of the main diagonal elements of A . Then,
5–λ 7 –5
det [ A – λI ] = det 0 4–λ –1 = 0 (5.56)
2 8 –3–λ
2. Since A is a 3 × 3 matrix, we need to use the first 3 terms of (5.54), that is,
At 2
e = a0 I + a1 A + a2 A (5.59)
We will use the following MATLAB code for the solution of (5.60).
B=sym('[1 1 1; 1 2 4; 1 3 9]'); b=sym('[exp(t); exp(2*t); exp(3*t)]'); a=B\b; fprintf(' \n');...
disp('a0 = '); disp(a(1)); disp('a1 = '); disp(a(2)); disp('a2 = '); disp(a(3))
a0 =
3*exp(t)-3*exp(2*t)+exp(3*t)
a1 =
-5/2*exp(t)+4*exp(2*t)-3/2*exp(3*t)
a2 =
1/2*exp(t)-exp(2*t)+1/2*exp(3*t)
Thus,
t 2t 3t
a 0 = 3e – 3e + e
5 t 2t 3 3t
a 1 = – --- e + 4e – --- e (5.61)
2 2
1 t 2t 1 3t
a 2 = --- e – e + --- e
2 2
4. We also use MATLAB to perform the substitution into the state transition matrix, and to per-
form the matrix multiplications. The code is shown below.
syms t; a0 = 3*exp(t)+exp(3*t)-3*exp(2*t); a1 = -5/2*exp(t)-3/2*exp(3*t)+4*exp(2*t);...
a2 = 1/2*exp(t)+1/2*exp(3*t)-exp(2*t);...
A = [5 7 −5; 0 4 −1; 2 8 −3]; eAt=a0*eye(3)+a1*A+a2*A^2
eAt =
[ -2*exp(t)+2*exp(2*t)+exp(3*t), -6*exp(t)+5*exp(2*t)+exp(3*t), 4*exp(t)-3*exp(2*t)-exp(3*t)]
[ -exp(t)+2*exp(2*t)-exp(3*t), -3*exp(t)+5*exp(2*t)-exp(3*t), 2*exp(t)-3*exp(2*t)+exp(3*t)]
[ -3*exp(t)+4*exp(2*t)-exp(3*t), -9*exp(t)+10*exp(2*t)-exp(3*t), 6*exp(t)-6*exp(2*t)+exp(3*t)]
Thus,
t 2t 3t t 2t 3t t 2t 3t
– 2e + 2e + e – 6 e + 5e + e 4e – 3e – e
At
e = t 2t
– e + 2e – e
3t t
– 3e + 5e – e
2t 3t t 2t
2e – 3e + e
3t
t 2t 3t t 2t 3t t 2t 3t
– 3e + 4e – e – 9e + 10e – e 6e – 6e + e
has n roots, and m of these roots are equal. In other words, the roots are
λ1 = λ2 = λ3 … = λm , λm + 1 , λn (5.63)
are found from the simultaneous solution of the system of equations of (5.65) below.
n–1 λ1 t
a0 + a1 λ1 + a2 λ1 + … + an – 1 λ1
2
= e
d- n–1 d λ1 t
( a 0 + a 1 λ 1 + a 2 λ 1 + … + a n – 1 λ 1 ) = -------- e
2
--------
dλ 1 dλ 1
2 2
d d λt
--------2 ( a 0 + a 1 λ 1 + a 2 λ 21 + … + a n – 1 λ n1 – 1 ) = --------2 e 1
dλ 1 dλ 1
… (5.65)
m–1 m–1
d d λ t
- ( a 0 + a 1 λ 1 + a 2 λ 21 + … + a n – 1 λ n1 – 1 ) = --------------
--------------
m–1 m–1
-e 1
dλ 1 dλ 1
n–1 λ m + 1t
a0 + a1 λm + 1 + a2 λm + 1 + … + an – 1 λm + 1 = e
2
…
n–1 λn t
a 0 + a 1 λn + a 2 λ n + … + a n – 1 λ n
2
= e
Example 5.8
At
Compute the state transition matrix e given that
A = –1 0
2 –1
Solution:
1. We first find the eigenvalues λ of the matrix A and these are found from the polynomial of
det [ A – λI ] = 0 . For this example,
det [ A – λI ] = det – 1 – λ 0 = 0
2 –1–λ
= (– 1 – λ)(– 1 – λ) = 0
2
= (λ + 1) = 0
and thus,
λ1 = λ2 = –1
2. Since A is a 2 × 2 matrix, we only need the first two terms of the state transition matrix, that is,
At
e = a0 I + a1 A (5.66)
e
At –t –t
= ( e + te ) 1 0 + te – t – 1 0
0 1 2 –1
or
–t
e
At
= e 0 (5.68)
–t –t
2te e
We can use the MATLAB eig(x) function to find the eigenvalues of an n × n matrix. To find out
how it is used, we invoke the help eig command.
We will first use MATLAB to verify the values of the eigenvalues found in Examples 5.6 through 5.8,
and we will briefly discuss eigenvectors on the next section.
For Example 5.6
A= [−2 1; 0 −1]; lambda=eig(A)
lambda =
-2
-1
For Example 5.7
B = [5 7 −5; 0 4 −1; 2 8 −3]; lambda=eig(B)
lambda =
1.0000
3.0000
2.0000
For Example 5.8
C = [−1 0; 2 −1]; lambda=eig(C)
lambda =
-1
-1
5.5 Eigenvectors
Consider the relation
AX = λX (5.69)
where A is an n × n matrix, X is a column vector, and λ is a scalar number. We can express this rela-
tion in matrix form as
a 11 a 12 … a 1n x 1 x1
a 21 a 22 … a 2n x 2 x2
= λ (5.70)
… … … … … …
a n1 a n2 … a nn x n xn
We write (5.70) as
( A – λI )X = 0 (5.71)
Then, (5.71) can be written as
( a 11 – λ )x 1 a 12 x 2 … a1 n xn
a 21 x 1 ( a 22 – λ )x 2 … a2 n xn
= 0 (5.72)
… … … …
an 1 x1 an 2 x2 … ( a nn – λ )x n
The equations of (5.72) will have non-trivial solutions if and only if its determinant is zero*, that is, if
( a 11 – λ ) a 12 … a1 n
a 21 ( a 22 – λ ) … a2 n
det = 0 (5.73)
… … … …
an 1 an2 … ( a nn – λ )
* This is because we want the vector X in (5.71) to be a non-zero vector and the product ( A – λI )X to be zero.
Example 5.9
Given the matrix
5 7 –5
A = 0 4 –1
2 8 –3
5 7 –5 x1 x1
0 4 –1 x2 = λ x2 (5.75)
2 8 –3 x3 x3
or
5x 1 7x 2 – 5x 3 λx 1
0 4x 2 –x3 = λx 2 (5.76)
2x 1 8x 2 – 3x 3 λx 3
( 5 – λ )x 1 7x 2 – 5x 3 0
0 ( 4 – λ )x 2 –x3 = 0 (5.77)
2x 1 8x 2 – ( 3 – λ )x 3 0
4x 1 + 7x 2 – 5x 3 = 0
3x 2 – x 3 = 0 (5.78)
2x 1 + 8x 2 – 4x 3 = 0
Since the unknowns x 1, x 2, and x 3 are scalars, we can assume that one of these, say x 2 , is known,
and solve x 1 and x 3 in terms of x 2 . Then, we get x 1 = 2x2 , and x 3 = 3x 2 .
x1 2x 2 2 2
Xλ = 1 = x2 = x2 = x2 1 = 1 (5.80)
x3 3x 2 3 3
x1 x2 1 1
Xλ = 2 = x2 = x2 = x2 1 = 1 (5.81)
x3 2x 2 2 2
x1 –x2 –1 –1
Xλ = 3 = x2 = x2 = x2 1 = 1 (5.82)
x3 x2 1 1
c. We find the unit eigenvectors by dividing the components of each vector by the square root of
the sum of the squares of the components. These are:
2 2 2
2 +1 +3 = 14
2 2 2
1 +1 +2 = 6
2 2 2
( –1 ) + 1 + 1 = 3
The unit eigenvectors are
2 1 –1
---------- ------- -------
14 6 3
1
Unit X λ = 1 = ---------- 1
Unit X λ = 2 = ------- 1
Unit X λ = 3 = ------- (5.83)
14 6 3
3 2 1
---------- ------- -------
14 6 3
We observe that for the first unit eigenvector the sum of the squares is unity, that is,
⎛ ---------
2 - ⎞ 2 ⎛ ---------
+
1 - ⎞ 2 ⎛ ---------
+
3 - ⎞2 4- + -----
= ----- 9- = 1
1- + ----- (5.84)
⎝ 14 ⎠ ⎝ 14 ⎠ ⎝ 14 ⎠ 14 14 14
and the same is true for the other two unit eigenvectors in (5.83).
Example 5.10
For the circuit of Figure 5.7, the initial conditions are i L ( 0 − ) = 0 , and v c ( 0 − ) = 0.5 V . Use the state
variable method to compute i L ( t ) and v c ( t ) .
R L
`
+
1Ω 1⁄4 H
C
+ vC ( t )
− i(t)
vs ( t ) = u0 ( t ) 4⁄3 F
−
Figure 5.7. Circuit for Example 5.10
Solution:
For this example,
i = iL
and
di
Ri L + L ------L- + v C = u 0 ( t )
dt
Substitution of given values and rearranging, yields
1 di L
--- ------- = ( – 1 )i L – v C + 1
4 dt
or
di
------L- = – 4i L – 4v C + 4 (5.85)
dt
di
x· 1 = ------L- (5.86)
dt
and
dv
x· 2 = -------C-
dt
Also,
dv
i L = C -------C-
dt
and thus,
dv 4
x 1 = i L = C -------C- = Cx· 2 = --- x· 2
dt 3
or
3
x· 2 = --- x 1 (5.87)
4
Therefore, from (5.85), (5.86), and (5.87), we get the state equations
x· 1 = – 4x 1 – 4x 2 + 4
3
x· 2 = --- x 1
4
and in matrix form,
x· 1
= –4 –4 1 + 4 u0 ( t )
x
(5.88)
x· 2 3 ⁄ 4 0 x2 0
A ( t – t0 ) t
–A τ
∫t e
At
x( t) = e x0 + e bu ( τ ) dτ (5.89)
0
where
–4 –4 iL ( 0 ) 0
A = x0 = = b = 4 (5.90)
3⁄4 0 vC ( 0 ) 1⁄2 0
At
First, we compute the state transition matrix e . We find the eigenvalues from
det [ A – λI ] = 0
Then,
det [ A – λI ] = det – 4 – λ – 4 = 0
3 ⁄ 4 –λ
= ( –λ ) ( – 4 – λ ) + 3 = 0
2
= λ + 4λ + 3 = 0
Therefore,
λ 1 = – 1 and λ 2 = – 3
The next step is to find the coefficients a i . Since A is a 2 × 2 matrix, we only need the first two
terms of the state transition matrix, that is,
At
e = a0 I + a1 A (5.91)
e
At –t
= ( 1.5e – 0.5e
– 3t
) 1 0 + ( 0.5e – t – 0.5e –2t ) – 4 – 4
0 1 3⁄4 0
–t – 3t –t – 3t
–t – 3t – 2 e + 2e – 2 e + 2e
= 1.5e – 0.5e 0 + – 3t
–t – 3t 3
--- e –t – 3
--- e
0 1.5e – 0.5e 8 8
0
or
–t – 3t –t – 3t
At – 0.5 e + 1.5e – 2 e + 2e
e = – 3t
3
--- e – t – 3 –t – 3t
--- e 1.5e – 0.5e
8 8
The initial conditions vector is the second vector in (5.90); then, the first term of (5.89) becomes
–t – 3t –t – 3t
At – 0.5 e + 1.5e – 2 e + 2e 0
e x0 = – 3t
3
--- e – t – 3 –t – 3t
1⁄2
--- e 1.5e – 0.5e
8 8
or
–t – 3t
At
e x0 = –e +e (5.94)
–t – 3t
0.75e – 0.25e
We also need to evaluate the integral on the right side of (5.89). From (5.90)
b = 4 = 1 4
0 0
–( t – τ ) –3 ( t – τ ) –( t – τ ) –3 ( t – τ )
t – 0.5 e + 1.5e –2 e + 2e
∫t
Int = 1 4 dτ
3 –( t – τ ) 3 –3 ( t – τ ) –( t – τ ) –3 ( t – τ )
0 --- e – --- e 1.5e – 0.5e 0
8 8
or
–( t – τ ) –3 ( t – τ )
t – 0.5 e + 1.5e
Int = ∫t 0 --- e –( t – τ ) – 3
3 --- e
–3 ( t – τ ) 4 dτ (5.95)
8 8
The integration in (5.95) is with respect to τ ; then, integrating the column vector under the integral,
we get
t
–( t – τ ) –3 ( t – τ )
Int = 4 – 0.5 e + 0.5e
–( t – τ ) –3 ( t – τ )
0.375e – 0.125e
τ=0
or
–t – 3t –t – 3t
Int = 4 – 0.5 + 0.5 – 4 – 0.5 e + 0.5e = 4 0.5e – 0.5 e
0.375 – 0.125 – t
0.375e – 0.125e
– 3t –t
0.25 – 0.375 e + 0.125e
– 3t
is
–t – 3t –t – 3t –t – 3t
x1
= –e +e +4 0.5e – 0.5 e = e –e
x2 –t – 3t –t – 3t –t – 3t
0.75e – 0.25e 0.25 – 0.375 e + 0.125e 1 – 0.75 e + 0.25e
Then,
–t – 3t
x1 = iL = e –e (5.96)
and
–t – 3t
x 2 = v C = 1 – 0.75e + 0.25e (5.97)
Other variables of the circuit can now be computed from (5.96) and (5.97). For example, the voltage
across the inductor is
di L 1 d – t –3t 1 – t 3 – 3t
v L = L ------- = --- ----- ( e – e ) = – --- e + --- e
dt 4 dt 4 4
Example 5.11
A circuit is described by the state equation
x· = Ax + bu (5.98)
where
A = 1 0 x0 = 1 b = –1 and u = δ ( t ) (5.99)
1 –1 0 1
Solution:
We compute the eigenvalues from
det [ A – λI ] = 0
For this example,
det [ A – λI ] = det 1 – λ 0 = 0
1 –1 –λ
= ( 1 –λ ) ( – 1 – λ ) = 0
Then,
λ 1 = 1 and λ 2 = – 1
Since A is a 2 × 2 matrix, we only need the first two terms of the state transition matrix to find the
coefficients a i , that is,
At
e = a0 I + a1 A (5.100)
t
a0 + a1 = e
(5.102)
–t
a0 –a1 = e
A ( t – t0 ) t t
–A τ –A τ
∫t ∫0 e
At At At
x( t) = e x0 + e e bu ( τ ) dτ = e x 0 + e bδ ( τ ) dτ (5.104)
0
Using the sifting property of the delta function we find that (5.104) reduces to
At ⎧ ⎫
x ( t ) = e x0 + e b = e ( x0 + b ) = e ⎨ 1 + –1 ⎬ = e 0
At At At At
⎩ 0 1 ⎭ 1
= cosh t + sinh t 0 0 = x1
sinh t cosh t – sinh t 1 x2
Therefore,
x1 0 0
x = = = (5.105)
–t
x2 cosh t – sinh t e
we observe that the right side of (5.108) is the Laplace transform of (5.109). Therefore, we can com-
from the Inverse Laplace of ( sI – A ) –1 , that is, we can use the
At
pute the state transition matrix e
relation
At –1 –1
e = L { ( sI – A ) } (5.110)
Next, we consider the output state equation
y = Cx + du (5.111)
Taking the Laplace of both sides of (5.111), we get
Y ( s ) = CX ( s ) + dU ( s ) (5.112)
and using (5.108), we get
–1 –1
Y ( s ) = C ( sI – A ) x ( 0 ) + [ C ( sI – A ) b + d ]U ( s ) (5.113)
If the initial condition x ( 0 ) = 0 , (5.113) reduces to
–1
Y ( s ) = [ C ( sI – A ) b + d ]U ( s ) (5.114)
In (5.114), U ( s ) is the Laplace transform of the input u ( t ) ; then, division of both sides by U ( s )
yields the transfer function
Y( s) –1
G ( s ) = ----------- = C ( sI – A ) b + d (5.115)
U(s)
Example 5.12
At
In the circuit of Figure 5.8, all initial conditions are zero. Compute the state transition matrix e
using the Inverse Laplace transform method.
R L
`
+
3Ω 1H
C
+ vC ( t )
− i( t)
vs ( t ) = u0 ( t ) 1⁄2 F
−
Solution:
For this circuit,
i = iL
and
di
Ri L + L ------L- + v C = u 0 ( t )
dt
Substitution of given values and rearranging,
di L
------- = – 3 i L – v C + 1 (5.116)
dt
Now, we define the state variables
x1 = iL
and
x2 = vC
Then,
di
x· 1 = ------L- = – 3 i L – v C + 1 (5.117)
dt
and
dv
x· 2 = --------C
dt
Also,
dv dv
i L = C --------C = 0.5 -------C- (5.118)
dt dt
and thus,
dv
x 1 = i L = 0.5 --------C = 0.5x· 2
dt
or
x· 2 = 2x 1 (5.119)
Therefore, from (5.117) and (5.119) we get the state equations
x· 1 = – 3x 1 – x 2 + 1
(5.120)
x· 2 = 2x 1
x· 1
= –3 –1 1 + 1 1
x
(5.121)
·x 2 2 0 x2 0
By inspection,
A = –3 –1 (5.122)
2 0
( sI – A ) = s 0 – –3 –1 = s + 3 1
0 s 2 0 –2 s
Then,
s –1
--------------------------------- ---------------------------------
–1 adj ( sI – A ) 1
- s –1 = ( s + 1 ) ( s + 2 ) ( s + 1 )( s + 2)
( sI – A ) = --------------------------- = -------------------------
det ( sI – A ) 2
s + 3s + 2 2 s+3 2 s+3
--------------------------------- ---------------------------------
( s + 1 )( s + 2) ( s + 1 )( s + 2)
We find the Inverse Laplace of each term by partial fraction expansion. Then,
–t – 2t –t – 2t
{ ( sI – A ) } = – e + 2e –e +e
At –1 –1
e = L
–t – 2t –t – 2t
2e – 2e 2e – e
Now, we can find the state variables representing the inductor current and the capacitor voltage
from
t
–A τ
∫0 e
At At
x ( t ) = e x0 + e bu ( τ ) dτ
* We have used capital letters for vectors b and c to be consistent with MATLAB’s designations.
This is used with the statement [num,den]=ss2tf(A,B,C,D,iu) where A, B, C, D are the matrices of
(5.124) and iu is 1 if there is only one input. The MATLAB help command provides the following
information:
help ss2tf
SS2TF State-space to transfer function conversion.
[NUM,DEN] = SS2TF(A,B,C,D,iu) calculates the
transfer function:
NUM(s) -1
G(s) = -------- = C(sI-A) B + D
DEN(s)
of the system:
x = Ax + Bu
y = Cx + Du
from the iu'th input. Vector DEN contains the coefficients of
the denominator in descending powers of s. The numerator coeffi-
cients are returned in matrix NUM with as many rows as there
are outputs y.
See also TF2SS
The other function, tf2ss, converts the transfer function of (5.125) to the state-space equations of
(5.124). It is used with the statement [A,B,C,D]=tf2ss(num,den) where A, B, C, and D are the
matrices of (5.124), and num, den are N ( s ) and D ( s ) of (5.125) respectively. The MATLAB help
command provides the following information:
help tf2ss
TF2SS Transfer function to state-space conversion.
[A,B,C,D] = TF2SS(NUM,DEN) calculates the state-space
representation:
x = Ax + Bu
y = Cx + Du
of the system:
NUM(s)
G(s) = --------
DEN(s)
from a single input. Vector DEN must contain the coefficients of
the denominator in descending powers of s. Matrix NUM must con-
Example 5.13
For the circuit of Figure 5.9,
R L
`
+
1Ω 1H
C
+
− v C ( t ) = v out ( t )
i(t)
vs ( t ) = u0 ( t ) 1F
−
We let
x1 = iL = i
and
x 2 = v C = v out
Then,
di
x· 1 = -----
dt
and
dv
x· 2 = --------c = x 1
dt
Thus, the state equations are
x· 1 = – x 1 – x 2 + u 0 ( t )
x· 2 = x 1
y = x2
and in matrix form,
x·
x· = Ax + Bu ↔ 1 = – 1 –1 x1 + 1 u ( t )
0
x· 2 1 0 x2 0
(5.126)
x1
y = Cx + Du ↔ y = 0 1 + 0 u0 ( t )
x2
R L
`
s +
1Ω C
+
− V C ( s ) = V out ( s )
V in ( s ) 1⁄s
−
V out ( s ) 1
- = ---------------------
---------------- -
V in ( s ) 2
s +s+1
Therefore,
V out ( s ) 1
G ( s ) = ----------------
- = ---------------------
- (5.127)
V in ( s ) 2
s +s+1
c.
A = [−1 −1; 1 0]; B = [1 0]'; C = [0 1]; D = [0];% The matrices of (5.126)
[num, den] = ss2tf(A, B, C, D, 1) % Verify coefficients of G(s) in (5.127)
num =
0 0 1
den =
1.0000 1.0000 1.0000
num = [0 0 1]; den = [1 1 1]; % The coefficients of G(s) in (5.127)
[A B C D] = tf2ss(num, den) % Verify the matrices of (5.126)
A =
-1 -1
1 0
B =
1
0
C =
0 1
D =
0
5.8 Summary
• An nth-order differential equation can be resolved to n first-order simultaneous differential equa-
tions with a set of auxiliary variables called state variables. The resulting first-order differential
equations are called state space equations, or simply state equations.
• The state space equations can be obtained either from the nth-order differential equation, or
directly from the network, provided that the state variables are chosen appropriately.
• When we obtain the state equations directly from given circuits, we choose the state variables to
represent inductor currents and capacitor voltages.
• The state variable method offers the advantage that it can also be used with non-linear and time-
varying devices.
• If a circuit contains only one energy-storing device, the state equations are written as
x· = α x + β u
y = k1 x + k2 u
where α , β , k 1 , and k 2 are scalar constants, and the initial condition, if non-zero, is denoted as
x0 = x ( t0 ) (5.128)
x· = Ax + bu
y = Cx + du
where A and C are 2 × 2 or higher order matrices, and b and d are column vectors with two or
At
more rows, entails the computation of the state transition matrix e , and integration of
A ( t – t0 ) t
–A τ
∫t e
At
x( t) = e x0 + e bu ( τ ) dτ
0
• The eigenvalues λ i , where i = 1, 2, …, n , of an n × n matrix A are the roots of the nth order
polynomial
det [ A – λI ] = 0
• We can use the MATLAB eig(x) function to find the eigenvalues of an n × n matrix.
• The Cayley-Hamilton theorem states that a matrix can be expressed as an ( n – 1 )th degree poly-
nomial in terms of the matrix A as
At 2 n–1
e = a0 I + a1 A + a2 A + … + an – 1 A
• If all eigenvalues of a given matrix A are distinct, that is, if λ 1 ≠ λ 2 ≠ λ 3 ≠ … ≠ λ n , the coefficients
a i are found from the simultaneous solution of the system of equations
2 n–1 λ1 t
a0 + a1 λ1 + a2 λ1 + … + an – 1 λ1 = e
2 n–1 λ2 t
a0 + a1 λ2 + a2 λ2 + … + an – 1 λ2 = e
…
2 n–1 λn t
a0 + a1 λn + a2 λn + … + an – 1 λn = e
n–1 λ1 t
a0 + a1 λ1 + a2 λ1 + … + an – 1 λ1
2
= e
d d λt
--------- ( a 0 + a 1 λ 1 + a 2 λ 21 + … + a n – 1 λ n1 – 1 ) = -------- e 1
dλ 1 dλ 1
2 2
d- n–1 d λ1 t
( a 0 + a 1 λ 1 + a 2 λ 1 + … + a n – 1 λ 1 ) = --------2 e
2
-------
2
dλ 1 dλ 1
…
m–1 m–1
d d λ t
- ( a 0 + a 1 λ 1 + a 2 λ 21 + … + a n – 1 λ n1 – 1 ) = --------------
--------------
m–1 m–1
-e 1
dλ 1 dλ 1
n–1 λ m + 1t
a0 + a1 λm + 1 + a2 λm + 1 + … + an – 1 λm + 1 = e
2
…
n–1 λn t
a 0 + a 1 λn + a 2 λ n + … + a n – 1 λ n
2
= e
AX = λX
• A set of eigenvectors constitutes an orthonormal basis if the set is normalized (expressed as unit
eigenvectors) and these vector are mutually orthogonal.
• The state transition matrix can be computed from the Inverse Laplace transform using the rela-
tion
At –1 –1
e = L { ( sI – A ) }
• If U ( s ) is the Laplace transform of the input u ( t ) and Y ( s ) is the Laplace transform of the out-
put y ( t ) , the transfer function can be computed using the relation
Y ( s ) = C ( sI – A ) –1 b + d
G ( s ) = -----------
U(s)
• MATLAB provides two very useful functions to convert state space (state equations), to transfer
function (s-domain), and vice versa. The function ss2tf (state space to transfer function) converts
the state space equations to the transfer function equivalent, and the function tf2ss, converts the
transfer function to state-space equations.
5.9 Exercises
1. Express the integrodifferential equation below as a matrix of state equations where
k 1, k 2, and k 3 are constants.
2 t
dv
∫0 v dt
dv
-------2- + k 3 ------ + k 2 v + k 1 = sin 3t + cos 3t
dt dt
2. Express the matrix of the state equations below as a single differential equation, and let
x( y) = y(t) .
x· 1 0 1 0 0 x1 0
x· 2 x
= 0 0 1 0 ⋅ 2 + 0 u(t)
x· 3 0 0 0 1 x3 0
x· 4 –1 –2 –3 –4 x4 1
3. For the circuit of Figure 5.11, all initial conditions are zero, and u ( t ) is any input. Write state
equations in matrix form.
R C
u( t)
+
− L
`
Figure 5.11. Circuit for Exercise 3
4. In the circuit of Figure 5.12, all initial conditions are zero. Write state equations in matrix form.
L
R
`
1Ω C1 1H C2
+
−
V p cos ωtu 0 ( t ) 2F 2F
5. In the circuit of Figure 5.13, i L ( 0 − ) = 2 A . Use the state variable method to find i L ( t ) for t > 0 .
R 2Ω
10u 0 ( t )
+
− L
`2 H
Figure 5.13. Circuit for Exercise 5
0 1 0
A = 1 2 B = a 0 C = 0 0 1
3 –1 –a b
– 6 – 11 – 6
A= 1 0 , b= 1 , x0 = –1 , u = δ ( t ), t0 = 0
–2 2 2 0
C
R
3⁄4 Ω
L
` 4H
4⁄3 F
2.
Expansion of the given matrix yields
· · · ·
x1 = x2 x2 = x3 x3 = x2 x 4 = – x 1 – 2x 2 – 3x 3 – 4x 4 + u ( t )
Letting x = y we get
4 3 2
dy -
dy - + 4 ------- dy dy
------- 3
+ 3 -------2- + 2 ------ + y = u ( t )
dt
4
dt dt dt
3.
R iT
iL + iC
u(t)
+
− L
` vC
−
C
u ( t ) – vC dv C
--------------------- = i L + C ---------
R dt
or
u ( t ) – x2 ·
-------------------- = x 1 + Cx 2
R
Also,
·
x 2 = Lx 1
Then,
· 1 · 1 1 1
x 1 = --- x 2 and x 2 = – ---- x 1 – -------- x 2 + -------- u ( t )
L C RC RC
and in matrix form
·
x1
= 0 1 ⁄ L ⋅ x1 + 0 ⋅ u(t)
·
x2 – 1 ⁄ C – 1 ⁄ RC x 2 1 ⁄ RC
4.
L
R v C1 iL
`
1Ω 1H
+ +
+ v C1 v C2
− − −
V p cos ωt C1 2 F C2 2F
v C1 – V p cos ωt dv C1 ·
----------------------------------- + 2 ----------- + i L = 0 or x 2 – V p cos ωt + 2x 2 + x 1 = 0
1 dt
or
· 1 1 1
x 2 = – --- x 1 – --- x 2 + --- V p cos ωt (1)
2 2 2
By KVL
di L · ·
v C1 = L ------- + v C2 or x 2 = 1x 1 + x 3 or x 1 = x 2 – x 3 (2)
dt
Also,
dv C2 · · 1
i L = C ----------- or x 1 = 2x 3 or x 3 = --- x 1 (3)
dt 2
Combining (1), (2), and (3) into matrix form we get
·
x1 0 1 –1 x1 0
· =
x2 –1 ⁄ 2 –1 ⁄ 2 0 ⋅ x 2
+ 1 ⁄ 2 ⋅ V p cos ωt
· 1⁄2 0 0 x3 0
x3
5.
R 2Ω
10u 0 ( t )
+
− L
`2 H
From (5.21) of Example 5.4
R 1
x· = – --- x + --- v S u 0 ( t )
L L
α ( t – t0 ) t
αt –α τ
x( t) = e x0 + e ∫t e 0
β u ( τ ) dτ
t t
–1 ( t – 0 ) –t τ –t –t τ
= e 2+e ∫0 e 5u 0 ( τ ) dτ = 2e + 5e ∫0 e dτ
–t –t t –t –t –t
= 2e + 5e ( e – 1 ) = 2e + 5 – 5 e = ( 5 – 3e )u 0 ( t )
( 1 – λ )( – 1 – λ ) – 6 = 0
2
–1–λ+λ+λ –6 = 0
2
λ = 7
and thus
λ1 = 7 λ2 = – 7
b.
⎛ ⎞
B = a 0 det ( B – λI ) = det ⎜ a 0 – λ 1 0 ⎟ = det a – λ 0 = 0
–a b ⎝ –a b 0 1⎠ –a b – λ
(a – λ )(b – λ ) = 0
and thus
λ1 = a λ2 = b
c.
0 1 0 ⎛ 0 1 0 1 0 0 ⎞⎟
⎜
C = 0 0 1 det ( C – λI ) = det ⎜ 0 0 1 –λ 0 1 0 ⎟
⎜ ⎟
– 6 – 11 – 6 ⎝ – 6 – 11 – 6 0 0 1⎠
–λ 1 0
= det 0 – λ 1 =0
– 6 – 11 – 6 – λ
2 3 2
λ ( – 6 – λ ) – 6 – ( – 11 ) ( – λ ) = λ + 6λ + 11λ + 6 = 0
2 λ1 t –t
a0 + a1 λ1 + a2 λ1 = e ⇒ a0 – a1 + a2 = e
2 λ2 t – 2t
a0 + a1 λ2 + a2 λ2 = e ⇒ a 0 – 2a 1 + 4a 2 = e
2 λ3 t – 3t
a0 + a1 λ3 + a2 λ3 = e ⇒ a 0 – 3a 1 + 9a 2 = e
At
Now, we compute e of (1) with the following MATLAB code:
syms t; a0=3*exp(−t)−3*exp(−2*t)+exp(−3*t); a1=5/2*exp(−t)−4*exp(−2*t)+3/2*exp(−
3*t);...
a2=1/2*exp(−t)−exp(−2*t)+1/2*exp(−3*t); A=[0 1 0; 0 0 1; −6 −11 −6]; fprintf(' \n');...
eAt=a0*eye(3)+a1*A+a2*A^2
eAt =
Then,
–t – 2t – 3t –t – 2t – 3t –t – 2t – 3t
3e – 3e +e 2.5e – 4e + 1.5e 0.5e – e + 0.5e
At
e = – 3 e – t + 6e – 2t – 3e – 3t –t
– 2.5 e + 8e
– 2t
– 4.5e
– 3t –t
– 0.5 e + 2e
– 2t
– 1.5e
– 3t
–t – 2t – 3t –t – 2t – 3t –t – 2t – 3t
3e – 12e + 9e 2.5e – 16e + 13.5e 0.5e – 4e + 4.5e
8.
A= 1 0 , b= 1 , x0 = –1 , u = δ ( t ), t0 = 0
–2 2 2 0
t t
A(t – 0) –A τ –A τ
∫0 ∫0 e
At At At
x( t) = e x0 + e e bu ( τ ) dτ = e x 0 + e bδ ( τ ) dτ
(1)
⎞ At ⎛
= e x0 + e b = e ( x0 + b ) = e ⎜ –1 + 1 ⎟ = e 0
At At At At
⎝ 0 2 ⎠ 2
= a 0 I + a 1 A = ( 2e – e ) 1 0 + ( e – e ) 1 0
At t 2t 2t t
e
0 1 –2 2
t 2t 2t t t
= 2e – e 0 + e –e 0 = e 0
t 2t 2t t 2t t t 2t 2t
0 2e – e – 2e + 2e 2e – 2e 2e – 2e e
t
0 = e 0 0
⋅ 0 =
At
x(t) = e
t 2t 2t 2t
2 2e – 2e e 2 2e
and thus
2t
x1 = 0 x 2 = 2e
9.
−
C iL ( 0 ) = 0
iL + iC −
`4 H
R iR L vC ( 0 ) = 1 V
−
3⁄4 Ω 4⁄3 F
We let x 1 = i L x 2 = v C . Then,
a.
iR + iL + iC = 0
vC vC
----- + i L + C ----- = 0
R dt
x2 4·
- + x 1 + --- x 2 = 0
--------
3⁄4 3
or
· 3
x 2 = – --- x 1 – x 2 (1)
4
Also,
di L ·
v L = v C = L ------- = 4x 1 = x 2
dt
or
· 1
x 1 = --- x (2)
4 2
From (1) and (2)
·
x1
= 0 1 ⁄ 4 ⋅ x1
·
x2 –3 ⁄ 4 –1 x2
and thus
A = 0 1⁄4
–3 ⁄ 4 –1
b.
At –1 –1
e = L { [ sI – A ] }
[ sI – A ] = s 0 – 0 1⁄4 = s –1 ⁄ 4
0 s –3 ⁄ 4 –1 3⁄4 s+1
∆ = det [ sI – A ] = det s – 1 ⁄ 4 = s 2 + s + 3 ⁄ 16 = ( s + 1 ⁄ 4 ) ( s + 3 ⁄ 4 )
3⁄4 s+1
[ sI – A ]
–1
= --- adj [ sI – A ] = ----------------------------------------------- s + 1
1 1 1⁄4
∆ ( s + 1 ⁄ 4 ) ( s + 3 ⁄ 4 ) –3 ⁄ 4 s
s+1 1⁄4
----------------------------------------------- -----------------------------------------------
= ( s + 1 ⁄ 4 )(s + 3 ⁄ 4 ) (s + 1 ⁄ 4)(s + 3 ⁄ 4)
–3 ⁄ 4 s
----------------------------------------------- -----------------------------------------------
(s + 1 ⁄ 4 )(s + 3 ⁄ 4 ) (s + 1 ⁄ 4)(s + 3 ⁄ 4)
At –1 –1
We use MATLAB to find e = L { [ sI – A ] } with the code below.
syms s t
Fs1=(s+1)/(s^2+s+3/16); Fs2=(1/4)/(s^2+s+3/16); Fs3=(-3/4)/(s^2+s+3/16); Fs4=s/
(s^2+s+3/16);...
fprintf(' \n'); disp('a11 = '); disp(simple(ilaplace(Fs1))); disp('a12 = '); disp(simple(ilaplace(Fs2)));...
disp('a21 = '); disp(simple(ilaplace(Fs3))); disp('a22 = '); disp(simple(ilaplace(Fs4)))
a11 =
-1/2*exp(-3/4*t)+3/2*exp(-1/4*t)
a12 =
1/2*exp(-1/4*t)-1/2*exp(-3/4*t)
a21 =
-3/2*exp(-1/4*t)+3/2*exp(-3/4*t)
a22 =
3/2*exp(-3/4*t)-1/2*exp(-1/4*t)
Thus,
– 0.25t – 0.75t – 0.25t – 0.75t
e
At
= 1.5e – 0.5e 0.5e – 0.5e
– 0.25t – 0.75t – 0.25t – 0.75t
– 1.5 e + 1.5e – 0.5 e + 1.5e
c.
At ⎛ ⎞
t
A(t – 0) –A τ
∫0 e bu ( τ ) dτ = e x 0 + 0 = e ⎜ 0 + 0 ⎟
At At
x( t) = e x0 + e
⎝ 1 0⎠
his chapter begins with the definition of the impulse response, that is, the response of a circuit
T that is subjected to the excitation of the impulse function. Then, it defines convolution and
how it is applied to circuit analysis. Evaluation of the convolution integral using graphical
methods is also presented and illustrated with several examples.
A ( t – t0 ) t
–A τ
∫0 e
At
x(t) = e x0 + e bu ( τ ) dτ (6.2)
Therefore, with initial condition x 0 = 0 , and with the input u ( t ) = δ ( t ) , the solution of (6.2)
reduces to
t
–A τ
∫0 e
At
x(t) = e bδ ( τ ) dτ (6.3)
At
h ( t ) = e bu 0 ( t ) (6.5)
Example 6.1
Compute the impulse response of the series RC circuit of Figure 6.1 in terms of the constants R and
−
C , where the response is considered to be the voltage across the capacitor, and v c ( 0 ) = 0 . Then,
compute the current through the capacitor.
R
C + h(t) = v (t) = v (t)
+
−
C out
−
δ(t)
Solution:
We assign currents i C and i R with the directions shown in Figure 6.2, and we apply KCL.
iR
R iC
C +
+ h ( t ) = v C ( t ) = v out ( t )
− −
δ(t)
Figure 6.2. Application of KCL for the circuit for Example 6.1
Then,
iR + iC = 0
or
dv vC – δ ( t )
C -------C- + --------------------
- = 0 (6.6)
dt R
We assign the state variable
vC = x
Then,
dv C
-------- = x·
dt
and (6.6) becomes
x δ(t)
Cx· + --- = ---------
R R
or
1 1
x· = – -------- x + -------- δ ( t ) (6.7)
RC RC
This equation has the form
x· = ax + bu
and as we found in (6.5),
At
h ( t ) = e bu 0 ( t )
For this example,
a = – 1 ⁄ RC
and
b = 1 ⁄ RC
Therefore,
– t ⁄ RC 1
h ( t ) = vC ( t ) = e --------
RC
or
1 – t ⁄ RC
h ( t ) = -------- e u0 ( t ) (6.8)
RC
Example 6.2
For the circuit of Figure 6.3, compute the impulse response h ( t ) = v C ( t ) given that the initial condi-
tions are zero, that is, i L ( 0 − ) = 0 , and v C ( 0 − ) = 0 .
Solution:
This is the same circuit as that of Example 5.10 of the previous chapter, where we found that
R L
`
+
1Ω 1⁄4 H
C
+ h ( t ) = vC ( t )
−
δ(t) 4⁄3 F
−
b = 4
0
and
–t – 3t –t – 3t
At – 0.5 e + 1.5e – 2 e + 2e
e = – 3t
3
--- e – t – 3 –t – 3t
--- e 1.5e – 0.5e
8 8
–t – 3t –t – 3t –t – 3t
x1 – 0.5 e + 1.5e – 2 e + 2e 4 u ( t ) = – 2 e + 6e u ( t )
h( t)= x(t) = = – 3t 0 – 3t 0 (6.10)
3
--- e – t – 3 –t – 3t 3
--- e – t – 3
x2 --- e 1.5e – 0.5e 0 --- e
8 8 2 2
f ( –t ) = f ( t ) (6.12)
that is, if in an even function we replace t with – t , the function f ( t ) does not change. Thus, polyno-
mials with even exponents only, and with or without constants, are even functions. For instance, the
cosine function is an even function because it can be written as the power series
2 4 6
cos t = 1 – ---- t - – ----
t - + ---- t- + …
2! 4! 6!
Other examples of even functions are shown in Figure 6.4.
f(t) f(t) f(t)
t2 + k
t2 k
t t t
0 0 0
–f ( –t ) = f ( t ) (6.13)
that is, if in an odd function we replace t with – t , we obtain the negative of the function f ( t ) . Thus,
polynomials with odd exponents only, and no constants are odd functions. For instance, the sine
function is an odd function because it can be written as the power series
3 5 7
sin t = t – ---- t - – ----
t - + ---- t- + …
3! 5! 7!
Other examples of odd functions are shown in Figure 6.5.
f(t) f(t) f(t)
mt t3
t t t
0 0 0
We observe that for odd functions, f ( 0 ) = 0 . However, the reverse is not always true; that is, if
f ( 0 ) = 0 , we should not conclude that f ( t ) is an odd function. An example of this is the function
f ( t ) = t in Figure 6.4.
2
The product of two even or two odd functions is an even function, and the product of an even func-
tion times an odd function, is an odd function.
Henceforth, we will denote an even function with the subscript e , and an odd function with the sub-
script o . Thus, f e ( t ) and f o ( t ) will be used to represent even and odd functions of time respectively.
Also,
T T
∫– T f e ( t ) dt = 2 ∫0 f e ( t ) dt (6.14)
and
T
∫– T f o ( t ) dt = 0 (6.15)
1
f e ( t ) = --- [ f ( t ) + f ( – t ) ] (6.16)
2
or as
1
f o ( t ) = --- [ f ( t ) – f ( – t ) ] (6.17)
2
f ( t ) = fe ( t ) + fo ( t ) (6.18)
that is, any function of time can be expressed as the sum of an even and an odd function.
Example 6.3
Determine whether the delta function is an even or an odd function of time.
Solution:
Let f ( t ) be an arbitrary function of time that is continuous at t = t 0 . Then, by the sifting property of
the delta function
∫–∞ f ( t )δ ( t – t ) dt
0 = f ( t0 )
and for t 0 = 0 ,
∞
∫–∞ f ( t )δ ( t ) dt = f(0)
Also,
∞
∫–∞ fe ( t )δ ( t ) dt = fe ( 0 )
and
∞
∫–∞ fo ( t )δ ( t ) dt = fo ( 0 )
∫–∞ fo ( t )δ ( t ) dt = fo ( 0 ) = 0 (6.19)
and this indicates that the product f o ( t )δ ( t ) is an odd function of t . Then, since f o ( t ) is odd, it fol-
lows that δ ( t ) must be an even function of t for (6.19) to hold.
6.3 Convolution
Consider a network whose input is δ ( t ) , and its output is the impulse response h ( t ) . We can repre-
sent the input-output relationship as the block diagram shown below.
δ(t) h(t)
Network
or in general,
δ(t – τ) h( t – τ)
Network
Multiplying both sides by the constant dτ , integrating from – ∞ to +∞ , and making use of the fact
that the delta function is even, i.e., δ ( t – τ ) = δ ( τ – t ) , we get
∞ ⎫ ⎧ ∞
∫ –∞
u ( τ )δ ( t – τ ) dτ ⎪
⎪
⎪
⎪ ∫–∞ u ( τ )h ( t – τ ) dτ
⎬ Network ⎨ ∞
∞
⎪ ⎪
∫ u ( τ )δ ( τ – t ) dτ ⎪ ⎪
⎩
∫–∞ u ( t – τ )h ( τ ) dτ
–∞ ⎭
Using the sifting property of the delta function, we find that the second integral on the left side
reduces to u ( t ) and thus
⎧ ∞
⎪
⎪
∫–∞ u ( τ )h ( t – τ ) dτ
u( t) Network ⎨ ∞
⎪
⎪
⎩
∫–∞ u ( t – τ )h ( τ ) dτ
The integral
∞ ∞
∫– ∞ u ( τ )h ( t – τ ) dτ or ∫–∞ u ( t – τ )h ( τ ) dτ (6.20)
is known as the convolution integral; it states that if we know the impulse response of a network, we
can compute the response to any input u ( t ) using either of the integrals of (6.20).
The convolution integral is usually denoted as u ( t )*h ( t ) or h ( t )*u ( t ) , where the asterisk (*) denotes
convolution.
At
In Section 6.1, we found that the impulse response for a single input is h ( t ) = e b . Therefore, if we
know h ( t ) , we can use the convolution integral to compute the response y ( t ) of any input u ( t ) using
the relation
∞ ∞
A(t – τ) –A τ
∫– ∞ ∫– ∞ e
At
y( t) = e bu ( τ ) dτ = e bu ( τ ) dτ (6.21)
Example 6.4
The signals h ( t ) and u ( t ) are as shown in Figure 6.6. Compute h ( t )*u ( t ) using the graphical evalua-
tion.
u ( t ) = u0 ( t ) – u0 ( t – 1 )
1 h(t) = – t + 1 1
t t
0 1 0 1
where τ is a dummy variable, that is, u ( τ ) and h ( τ ) , are considered to be the same as u ( t ) and h ( t ) .
We form u ( t – τ ) by first constructing the image of u ( τ ) ; this is shown as u ( – τ ) in Figure 6.7.
u ( –τ )
1
−1 0
τ
Next, we form u ( t – τ ) by shifting u ( – τ ) to the right by some value t as shown in Figure 6.8.
1 u ( t –τ )
τ
0 t
u ( t – τ ), t = 0 u ( t – τ )*h ( τ ) = 0 for t = 0
1
h(τ)
A
τ
−1 0
Shifting u ( t – τ ) to the right so that t > 0 , we get the sketch of Figure 6.10 where the integral of the
product is denoted by the shaded area, and it increases as point A moves further to the right.
u ( t – τ ), t > 0
1
h(τ)
A τ
0 t 1
The maximum area is obtained when point A reaches t = 1 as shown in Figure 6.11.
u ( t – τ ), t = 0
1
h(τ)
A
τ
0 1
∫– ∞ ∫0 ∫
t
u ( t – τ )h ( τ ) dτ = u ( t – τ )h ( τ ) dτ = ( 1 ) ( – τ + 1 ) dτ = τ – ---- = t – --- (6.23)
0 2 0
2
Figure 6.12 shows how u ( τ )*h ( τ ) increases during the interval 0 < t < 1 . This is not an exponential
increase; it is the function t – t 2 ⁄ 2 in (6.23), and each point on the curve of Figure 6.12 represents
the area under the convolution integral.
u(t)*h(t)
Figure 6.12. Curve for the convolution of u ( τ )*h ( τ ) for 0 < t < 1 in Example 6.4
As we continue shifting u ( t – τ ) to the right, the area starts decreasing, and it becomes zero at t = 2 ,
as shown in Figure 6.14.
0.5
0.4
2
t–t ⁄2
0.3
0.2
0.1
0
0.0 0.5 1.0 1.5 2.0
u ( t – τ ), 1 < t < 2 u ( t – τ ), t = 2
1 1
h(τ) h(τ)
A A
τ τ
0 t−1 1 t 0 1 2
Using the convolution integral, we find that the area for the interval 1 < t < 2 is
∞ 1 1 2 1
τ
∫– ∞ u ( t – τ )h ( τ ) dτ = ∫t – 1 u ( t – τ )h ( τ ) dτ = ∫ t–1
( 1 ) ( – τ + 1 ) dτ = τ – ----
2 t–1 (6.25)
2 2
1 t – 2t + 1 t
= 1 – --- – ( t – 1 ) + ------------------------ = --- – 2t + 2
2 2 2
2
Thus, for 1 < t < 2 , the area decreases in accordance with t ⁄ 2 – 2t + 2 .
Evaluating (6.25) at t = 2 , we find that u ( τ )*h ( τ ) = 0 . For t > 2 , the product u ( t – τ )h ( τ ) is zero
since there is no overlap between these two signals. The convolution of these signals for 0 ≤ t ≤ 2 , is
shown in Figure 6.15.
0.5
2
t ⁄ 2 – 2t + 2
0.4
0.3
2
t–t ⁄2
0.2
0.1
0
0.0 0.5 1.0 1.5 2.0
Example 6.5
The signals h ( t ) and u ( t ) are as shown in Figure 6.16. Compute h ( t )*u ( t ) using the graphical evalu-
ation method.
u ( t ) = u0 ( t ) – u0 ( t – 1 )
1 1
–t
h(t) = e
t t
0 0 1
Solution:
Following the same procedure as in the previous example, we form u ( t – τ ) by first constructing the
image of u ( τ ) . This is shown as u ( – τ ) in Figure 6.17.
u ( –τ )
1
τ
−1 0
Next, we form u ( t – τ ) by shifting u ( – τ ) to the right by some value t as shown in Figure 6.18.
1 u ( t –τ )
τ
0 t
u ( t – τ ), t = 0
1 u ( t – τ )*h ( τ ) = 0 for t = 0
h(τ)
A
τ
−1 0
Shifting u ( t – τ ) to the right so that t > 0 , we get the sketch of Figure 6.20 where the integral of the
product is denoted by the shaded area, and it increases as point A moves further to the right.
u ( t – τ ), t > 0 1
h(τ)
A τ
0 t
u(t)*h(t)
1.0
0.8 –t
1–e
0.6
0.4
0.2
0.0
0.0 0.5 1.0 1.5 2.0
As we continue shifting u ( t – τ ) to the right, the area starts decreasing. As shown in Figure 6.23, it
approaches zero as t becomes large but never reaches the value of zero.
u ( t – τ ), t = 1
1
h(τ)
A
τ
0 t−1 t
–τ t – 1
t t
–τ –τ t –( t – 1 ) –t –t
∫t – 1 u ( t – τ )h ( τ ) dτ = ∫t – 1 ( 1 ) ( e ) dτ = – e t–1
= e t
= e –e = e (e – 1) (6.28)
u(t)*h(t)
1.0
–t
0.8 1–e
–t
e (e – 1)
0.6
0.4
0.2
0.0
0.0 0.5 1.0 1.5 2.0
Example 6.6
Perform the convolution v 1 ( t )*v 2 ( t ) where v 1 ( t ) and v 2 ( t ) are as shown in Figure 6.25.
Solution:
We will use the convolution integral
v1 ( t )
2
v2 ( t )
1
t t
1 2
∞
v 1 ( t )∗ v 2 ( t ) = ∫–∞ v ( τ )v ( t – τ ) dτ
1 2 (6.29)
The computation steps are as in the two previous examples, and are evident from the sketches of Fig-
ures 6.26 through 6.29.
Figure 6.26 shows the formation of v 2 ( – τ ) .
v1 ( t )
2
v2 ( –τ )
1
τ τ
−2 1
Figure 6.27 shows the formation of v 2 ( t – τ ) and convolution with v 1 ( t ) for 0 < t < 1 .
v1 ( t )
2
v2 ( t – τ ) v 1 ( t )*v 2 ( t ) = 2 × 1 × t = 2t
1
τ
t 1
v1 ( t )
2
v2 ( t – τ ) 1
τ
1t
Figure 6.28. Convolution of v 2 ( t – τ ) with v 1 ( t ) for 1 < t < 2
v1 ( t )
2
v2 ( t – τ )
1
1
τ
t–2 t
From (6.30), (6.31), and (6.32), we obtain the waveform of Figure 6.30 that represents the convolu-
tion of the signals v 1 ( t ) and v 2 ( t – τ ) .
( v 1 ( t ) )∗ v 2 ( t )
2
t
0 1 2 3
Figure 6.30. Convolution of v 1 ( t ) with v 2 ( t ) for 0 < t < 3
In summary, the procedure for the graphical evaluation of the convolution integral, is as follows:
1. We substitute u ( t ) and h ( t ) with u ( τ ) and h ( τ ) respectively.
2. We fold (form the mirror image of) u ( τ ) or h ( τ ) about the vertical axis to obtain u ( – τ ) or h ( – τ ) .
3. We slide u ( – τ ) or h ( – τ ) to the right a distance t to obtain u ( t – τ ) or h ( t – τ ) .
4. We multiply the two functions to obtain the product u ( t – τ ) h ( τ ) , or u ( τ ) h ( t – τ ) .
5. We integrate this product by varying t from – ∞ to +∞ .
Example 6.7
For the circuit of Figure 6.31, use the convolution integral to find the capacitor voltage when the
input is the unit step function u 0 ( t ) , and v C ( 0 − ) = 0 .
R 1Ω
C +
+ vC ( t )
− −
1F
u0 ( t )
Solution:
Before we apply the convolution integral, we must know the impulse response h ( t ) of this circuit.
The circuit of Figure 6.31 was analyzed in Example 6.1 where we found that
1 – t ⁄ RC
h ( t ) = -------- e u0 ( t ) (6.33)
RC
With the given values, (6.33) reduces to
–t
h ( t ) = e u0 ( t ) (6.34)
Next, we use the graphical evaluation of the convolution integral as shown in Figures 6.32 through
6.34.
The formation of u 0 ( – τ ) is shown in Figure 6.32.
u0 ( –τ )
1
τ
0
u0 ( t –τ ) 1
τ
0 t
h(τ)
τ
0 t
6.6 Summary
• The impulse response is the output (voltage or current) of a network when the input is the delta
function.
• The determination of the impulse response assumes zero initial conditions.
f ( –t ) = f ( t )
–f ( –t ) = f ( t )
• The product of two even or two odd functions is an even function, and the product of an even
function times an odd function, is an odd function.
• A function f ( t ) that is neither even nor odd, can be expressed as
1
f e ( t ) = --- [ f ( t ) + f ( – t ) ]
2
or as
1
f o ( t ) = --- [ f ( t ) – f ( – t ) ]
2
• Any function of time can be expressed as the sum of an even and an odd function, that is,
f ( t ) = fe ( t ) + fo ( t )
∫–∞ u ( τ )h ( t – τ ) dτ
or
∞
∫–∞ u ( t – τ )h ( τ ) dτ
is known as the convolution integral.
• If we know the impulse response of a network, we can compute the response to any input u ( t )
with the use of the convolution integral.
• The convolution integral is usually denoted as u ( t )*h ( t ) or h ( t )*u ( t ) , where the asterisk (*)
denotes convolution.
• The convolution integral is more conveniently evaluated by the graphical evaluation method.
6.7 Exercises
1. Compute the impulse response h ( t ) = i L ( t ) in terms of R and L for the circuit of Figure 6.36.
Then, compute the voltage v L ( t ) across the inductor.
iL ( t )
`
+ L
−
δ(t)
2. Repeat Example 6.4 by forming h ( t – τ ) instead of u ( t – τ ) , that is, use the convolution integral
∞
∫–∞ u( τ )h ( t – τ ) dτ
3. Repeat Example 6.5 by forming h ( t – τ ) instead of u ( t – τ ) .
⎧ 4t t≥0 ⎧ e – 2t t≥0
v1 ( t ) = ⎨ v2 ( t ) = ⎨
⎩0 t<0 ⎩ 0 t<0
5. For the series RL circuit shown in Figure 6.37, the response is the current i L ( t ) . Use the convolu-
tion integral to find the response when the input is the unit step u 0 ( t ) .
R
iL ( t )
1Ω
u0 ( t )
+
-
i(t)
L
` 1H
Figure 6.37. Circuit for Exercise 5
6. Compute v out ( t ) for the network of Figure 6.38 using the convolution integral, given that
v in ( t ) = u 0 ( t ) – u 0 ( t – 1 ) .
`
+ 1H +
v in ( t ) R v out ( t )
1Ω
− −
R
+
1Ω +
v in ( t ) L
` 1H
v out ( t )
− −
`
+ L
−
δ(t) iL ( t )
di
Ri + L ----- = δ ( t )
dt
A ( t – t0 ) t
–A τ
∫0 e
At
x( t) = e x0 + e bu ( τ ) dτ
and using the sampling property of the delta function the above reduces to
– ( R ⁄ L )t
v L = ( – R ⁄ L )e u0 ( t ) + δ ( t )
2.
1 h(t) 1 1
1 1
h ( –τ ) h ( t –τ )
0 1 −1 0 τ 0 t τ 0 1 τ 0 t–1 1 t τ
From the plots above we observe that the area reaches the maximum value of 1 ⁄ 2 at t = 1 , and
then decreases to zero at t = 2 . Alternately, using the convolution integral we get
2 t 2 2
t
τ - + ( 1 – t )τ
∫ = t---- + ( 1 – t )t = t – ---
Area 1 = [ ( 1 – t ) + τ ] dτ = ---- t-
0 2 0
2 2
2 1
1
τ
Area 2 =
t–1
∫
[ ( 1 – t ) + τ ] dτ = ----- + ( 1 – t )τ
2 t–1
2 2
1 (t – 1) t
= --- + 1 – t – ----------------- – ( 1 – t ) ⋅ ( t – 1 ) = ---- – 2t + 2
2 2 2
3.
1 1 1 1
h(t) h ( –τ )
h ( t –τ )
t τ τ
0 0 τ 0 t 1 0 1 t
From the plots above we observe that the area reaches its maximum value at t = 1 , and then
decreases exponentially to zero as t → ∞ . Alternately, using the convolution integral we get
∞
u ( t )*h ( t ) = ∫–∞ u ( τ )h ( t – τ ) dτ
–t –τ τ –( t – τ )
where h ( t ) = e , h ( τ ) = e , h ( – τ ) = e , and h ( t – τ ) = e . Then, for 0 < t < 1
t t
–( t – τ ) –t τ –t –t
∫0 ∫0 e dτ
t 0
Area 1 = (1 ⋅ e ) dτ = e = e (e – e ) = 1 – e
For t > 1
1 1
–( t – τ ) –t τ –t τ 1 –t –t
∫0 ∫0
1 0
Area 2 = (1 ⋅ e ) dτ = e e dτ = e e 0
= e (e – e ) = e (e – 1)
4.
v2 ( t –τ ) v 1 ( t )*v 2 ( t )
1
1 v2 ( t )
v2 ( –τ ) 4t
t τ τ
0 0 τ 0 0 t
t t t
–2 ( t – τ ) – 2t
∫0 ∫0 ∫0 τe
2τ
v 1 ( t )*v 2 ( t ) = v 1 ( τ )v 2 ( t – τ ) dτ = 4τe dτ = 4e dτ
4 4
V 1 ( s ) ⋅ V 2 ( s ) = --------------------- = -------------------
2 3 2
s (s + 2) s + 2s
syms s t; ilaplace(4/(s^3+2*s^2))
ans =
2*t-1+exp(-2*t)
5.
To use the convolution integral we must first find the impulse response. It was found in Exercise
1 as
– ( R ⁄ L )t
h ( t ) = i ( t ) = ( 1 ⁄ L )e u0 ( t )
1Ω
iL ( t )
u0 ( t )
+
-
i(t)
L
` 1H
When the input is the unit step u 0 ( t ) ,
∞
i( t ) v = u (t) =
in 0
∫–∞ u0 ( t – τ )h ( τ ) dτ
h(t) 1
1 1
u0 ( –τ ) 1 u0 ( t –τ )
–t
e
t
0 τ 0 τ τ
0 0 t
h ( t –τ )
t
–τ –τ t –τ 0 –t
i( t ) v = u (t) =
in 0
∫0 ( 1 ) ⋅ e dτ = – e 0
= e t
= ( 1 – e ) ( u0 ( t ) )
1
u 0 ( t )*h ( t )
6.
We will first compute the impulse response, that is, the output when the input is the delta func-
tion, i.e., v in ( t ) = δ ( t ) . Then, by KVL
di L
L ------- + Ri L = δ ( t )
dt
and with i L = x
1 ⋅ x· + 1 ⋅ x = δ ( t )
or
x· = – x + δ ( t )
From (6.5)
At –t –t
h ( t ) = e bu 0 ( t ) = e ⋅ 1 = e
t>1
1
–τ t – 1
t
–τ –τ t –t
h(τ) ∫t – 1 ( 1 ) ( e ) dτ = – e t–1
= e t
= e (e – 1)
τ
0 t−1 t
7.
R
+
1Ω +
v in ( t ) = u 0 ( t ) – u 0 ( t – 1 )
v in ( t ) L
` 1H
v out ( t )
− −
v out ( t ) = v L = v in – v R
where from Exercise 6,
–t
⎧1 – e 0< t<1
vR = ⎨
⎩ e –t ( e – 1 ) t>1
Then, for this circuit,
–t –t
⎧ (1 – (1 – e ) = e ) 0< t<1
vR = ⎨
–t –t
⎩ 0 – e ( e – 1 ) = ( 1 – e )e t>1
his chapter is an introduction to Fourier series. We begin with the definition of sinusoids that
T are harmonically related and the procedure for determining the coefficients of the trigonomet-
ric form of the series. Then, we discuss the different types of symmetry and how they can be
used to predict the terms that may be present. Several examples are presented to illustrate the
approach. The alternate trigonometric and the exponential forms are also presented.
or
∞
1
f ( t ) = --- a 0 +
2 ∑ ( a cos nωt + b sin nωt )
n n (7.2)
n=1
where the first term a 0 ⁄ 2 is a constant, and represents the DC (average) component of f ( t ) . Thus,
if f ( t ) represents some voltage v ( t ) , or current i ( t ) , the term a 0 ⁄ 2 is the average value of v ( t ) or
i(t) .
The terms with the coefficients a 1 and b 1 together, represent the fundamental frequency compo-
nent ω *. Likewise, the terms with the coefficients a 2 and b 2 together, represent the second har-
monic component 2ω , and so on.
Since any periodic waveform f ( t ) ) can be expressed as a Fourier series, it follows that the sum of the
DC , the fundamental, the second harmonic, and so on, must produce the waveform f ( t ) . Generally,
the sum of two or more sinusoids of different frequencies produce a waveform that is not a sinusoid
as shown in Figure 7.1.
Total
2nd Harmonic
Fundamental
3rd Harmonic
∫0 sin mt dt = 0 (7.3)
2π
∫0 cos m t dt = 0 (7.4)
2π
The integrals of (7.3) and (7.4) are zero since the net area over the 0 to 2π area is zero. The integral
of (7.5) is also is zero since
1
sin x cos y = --- [ sin ( x + y ) + sin ( x – y ) ]
2
This is also obvious from the plot of Figure 7.2, where we observe that the net shaded area above
and below the time axis is zero.
sin x cos x
sin x ⋅ cos x
2π
Figure 7.2. Graphical proof of ∫0 ( sin mt ) ( cos nt ) dt = 0
since
1
( sin x ) ( sin y ) = --- [ cos ( x – y ) – cos ( x – y ) ]
2
The integral of (7.6) can also be confirmed graphically as shown in Figure 7.3, where m = 2 and
n = 3 . We observe that the net shaded area above and below the time axis is zero.
2π
Figure 7.3. Graphical proof of ∫0 ( sin mt ) ( sin nt ) dt = 0 for m = 2 and n = 3
2π
since
1
( cos x ) ( cos y ) = --- [ cos ( x + y ) + cos ( x – y ) ]
2
The integral of (7.7) can also be confirmed graphically as shown in Figure 7.4, where m = 2 and
n = 3 . We observe that the net shaded area above and below the time axis is zero.
2π
Figure 7.4. Graphical proof of ∫0 ( cos m t ) ( cos nt ) dt = 0 for m = 2 and n = 3
∫0
2
( sin mt ) dt = π (7.8)
and
2π
∫0
2
( cos m t ) dt = π (7.9)
The integrals of (7.8) and (7.9) can also be seen to be true graphically with the plots of Figures 7.5
and 7.6.
It was stated earlier that the sine and cosine functions are orthogonal to each other. The simplifica-
tion obtained by application of the orthogonality properties of the sine and cosine functions,
becomes apparent in the discussion that follows.
2
sin x
sin x
2π
∫0
2
Figure 7.5. Graphical proof of ( sin mt ) dt = π
2
cos x
cos x
2π
∫0
2
Figure 7.6. Graphical proof of ( cos m t ) dt = π
To evaluate any coefficient, say b 2 , we multiply both sides of (7.10) by sin 2t . Then,
1
f ( t ) sin 2t = --- a 0 sin 2t + a 1 cos t sin 2t + a 2 cos 2t sin 2t + a 3 cos 3t sin 2t + a 4 cos 4t sin 2t + …
2
2
b 1 sin t sin 2t + b 2 ( sin 2t ) + b 3 sin 3t sin 2t + b 4 sin 4t sin 2t + …
Next, we multiply both sides of the above expression by dt , and we integrate over the period 0 to
2π . Then,
2π 2π 2π 2π
1
∫0 f ( t ) sin 2t dt = --- a 0
2 ∫0 sin 2t dt + a 1 ∫0 cos t sin 2t dt + a 2 ∫0 cos 2t sin 2t dt
2π 2π
+ a3 ∫0 cos 3t sin 2t dt + a 4 ∫0 cos 4t sin 2t dt + …
(7.11)
2π 2π 2π
∫0 ∫0 ∫0
2
+ b1 sin t sin 2t dt + b 2 ( sin 2t ) dt + b 3 sin 3t sin 2t dt
2π
+ b4 ∫0 sin 4t sin 2t dt + …
We observe that every term on the right side of (7.11), except the term $b_2\int_0^{2\pi}(\sin 2t)^2\,dt$, is zero. Therefore, (7.11) reduces to

$\int_0^{2\pi} f(t)\sin 2t\,dt = b_2\int_0^{2\pi}(\sin 2t)^2\,dt = b_2\pi$

or

$b_2 = \frac{1}{\pi}\int_0^{2\pi} f(t)\sin 2t\,dt$
and thus we can evaluate this integral for any given function f ( t ) . The remaining coefficients can be
evaluated similarly.
The coefficients $a_0$, $a_n$, and $b_n$ are found from the following relations:

$\frac{1}{2}a_0 = \frac{1}{2\pi}\int_0^{2\pi} f(t)\,dt$  (7.12)

$a_n = \frac{1}{\pi}\int_0^{2\pi} f(t)\cos nt\,dt$  (7.13)

$b_n = \frac{1}{\pi}\int_0^{2\pi} f(t)\sin nt\,dt$  (7.14)
7.3 Symmetry
With a few exceptions, such as the waveform of Example 7.6, the most common waveforms used in science and engineering do not have the average, cosine, and sine terms all present. Some waveforms have cosine terms only, while others have sine terms only. Still others may or may not have a DC component. Fortunately, it is possible to predict which terms will be present in the trigonometric Fourier series by observing whether or not the given waveform possesses some kind of symmetry.
We will discuss three types of symmetry that can be used to facilitate the computation of the trigo-
nometric Fourier series form. These are:
1. Odd symmetry − If a waveform has odd symmetry, that is, if it is an odd function, the series will
consist of sine terms only. In other words, if f ( t ) is an odd function, all the a i coefficients includ-
ing a 0 , will be zero.
2. Even symmetry − If a waveform has even symmetry, that is, if it is an even function, the series will
consist of cosine terms only, and a 0 may or may not be zero. In other words, if f ( t ) is an even
function, all the b i coefficients will be zero.
3. Half-wave symmetry − If a waveform has half-wave symmetry (to be defined shortly), only odd
(odd cosine and odd sine) harmonics will be present. In other words, all even (even cosine and
even sine) harmonics will be zero.
We defined odd and even functions in Chapter 6. We recall that odd functions are those for which
–f ( –t ) = f ( t ) (7.15)
and even functions are those for which
f ( –t ) = f ( t ) (7.16)
Examples of odd and even functions were given in Chapter 6. Generally, an odd function has odd
powers of the independent variable t , and an even function has even powers of the independent
variable t . Thus, the product of two odd functions or the product of two even functions will result
in an even function, whereas the product of an odd function and an even function will result in an
odd function. However, the sum (or difference) of an odd and an even function will yield a function
which is neither odd nor even.
To understand half-wave symmetry, we recall that any periodic function with period T , is expressed
as
f(t) = f(t + T) (7.17)
that is, the function with value f ( t ) at any time t , will have the same value again at a later time t + T .
(Figure: periodic waveform of amplitude ±A and period T, with ordinates f(a) and f(b) marked half a period T ⁄ 2 apart.)
Obviously, if the ordinate axis is shifted by any value other than an odd multiple of π ⁄ 2, the waveform will have neither odd nor even symmetry.
3. Sawtooth waveform
(Figure: sawtooth waveform of amplitude ±A and period T)

4. Triangular waveform

(Figure 7.10: triangular waveform of amplitude ±A and period T)
For this triangular waveform of Figure 7.10, the average value over one period T is zero and
therefore, a 0 = 0 . It is also an odd function since – f ( – t ) = f ( t ) . Moreover, it has half-wave
symmetry because – f ( t + T ⁄ 2 ) = f ( t )
5. Fundamental, Second and Third Harmonics of a Sinusoid
Figure 7.11 shows a fundamental, second, and third harmonic of a typical sinewave.
Figure 7.11. Fundamental, second, and third harmonic test for symmetry
In Figure 7.11, the half period T ⁄ 2 , is chosen as the half period of the period of the fundamental
frequency. This is necessary in order to test the fundamental, second, and third harmonics for half-
wave symmetry. The fundamental has half-wave symmetry since the a and – a values, when sepa-
rated by T ⁄ 2 , are equal and opposite. The second harmonic has no half-wave symmetry because the
ordinates b on the left and b on the right, although equal, are not opposite in sign. The
third harmonic has half-wave symmetry since the c and – c values, when separated by T ⁄ 2 are
equal and opposite. These waveforms can be either odd or even depending on the position of the
ordinate. Also, all three waveforms have zero average value unless the abscissa axis is shifted up or
down.
In the expressions of the integrals in (7.12) through (7.14), the limits of integration for the coeffi-
cients a n and b n are given as 0 to 2π , that is, one period T . Of course, we can choose the limits of
integration as – π to +π . Also, if the given waveform is an odd function, or an even function, or has
half-wave symmetry, we can compute the non-zero coefficients a n and b n by integrating from 0 to
π only, and multiply the integral by 2 . Moreover, if the waveform has half-wave symmetry and is
also an odd or an even function, we can choose the limits of integration from 0 to π ⁄ 2 and multiply
the integral by 4 . The proof is based on the fact that, the product of two even functions is another
even function, and also that the product of two odd functions results also in an even function. How-
ever, it is important to remember that when using these shortcuts, we must evaluate the coefficients
a n and b n for the integer values of n that will result in non-zero coefficients. This point will be
illustrated in Example 7.2.
We will now derive the trigonometric Fourier series of the most common periodic waveforms.
Example 7.1

Compute the trigonometric Fourier series of the square waveform shown below.

(Figure: square waveform of period T, equal to +A for 0 < ωt < π and −A for π < ωt < 2π)
Solution:
The trigonometric series will consist of sine terms only because, as we already know, this waveform
is an odd function. Moreover, only odd harmonics will be present since this waveform has half-wave
symmetry. However, we will compute all coefficients to verify this. Also, for brevity, we will assume
that ω = 1 .
The a i coefficients are found from
$a_n = \frac{1}{\pi}\int_0^{2\pi} f(t)\cos nt\,dt = \frac{1}{\pi}\left[\int_0^{\pi} A\cos nt\,dt + \int_{\pi}^{2\pi}(-A)\cos nt\,dt\right] = \frac{A}{n\pi}\left(\sin nt\Big|_0^{\pi} - \sin nt\Big|_{\pi}^{2\pi}\right)$
$\quad = \frac{A}{n\pi}(\sin n\pi - 0 - \sin 2n\pi + \sin n\pi) = \frac{A}{n\pi}(2\sin n\pi - \sin 2n\pi)$  (7.19)
and since n is an integer (positive or negative) or zero, the terms inside the parentheses on the sec-
ond line of (7.19) are zero and therefore, all a i coefficients are zero, as expected since the square
waveform has odd symmetry. Also, by inspection, the average ( DC ) value is zero, but if we attempt
to verify this using (7.19), we will get the indeterminate form 0 ⁄ 0 . To work around this problem, we
will evaluate a 0 directly from (7.12). Then,
$a_0 = \frac{1}{\pi}\left[\int_0^{\pi} A\,dt + \int_{\pi}^{2\pi}(-A)\,dt\right] = \frac{A}{\pi}(\pi - 0 - 2\pi + \pi) = 0$  (7.20)
Similarly, the $b_n$ coefficients are found to be $b_1 = \dfrac{4A}{\pi}$,

$b_3 = \frac{4A}{3\pi},\qquad b_5 = \frac{4A}{5\pi}$

and so on.

Therefore, the trigonometric Fourier series for the square waveform with odd symmetry is

$f(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right) = \frac{4A}{\pi}\sum_{n=odd}\frac{1}{n}\sin n\omega t$  (7.22)
It was stated above that, if the given waveform has half-wave symmetry, and it is also an odd or an
even function, we can integrate from 0 to π ⁄ 2 , and multiply the integral by 4 . We will apply this
property to the following example.
Example 7.2
Compute the trigonometric Fourier series of the square waveform of Example 7.1 by integrating from 0 to π ⁄ 2 , and multiplying the result by 4 .
Solution:
Since the waveform is an odd function and has half-wave symmetry, we are only concerned with the
odd b n coefficients. Then,
Example 7.3
Compute the trigonometric Fourier series of the square waveform of Figure 7.13.
Solution:
This is the same waveform as in Example 7.1, except that the ordinate has been shifted to the right
by π ⁄ 2 radians, and has become an even function. However, it still has half-wave symmetry. There-
fore, the trigonometric Fourier series will consist of odd cosine terms only.
Since the waveform has half-wave symmetry and is an even function, it will suffice to integrate from
0 to π ⁄ 2 , and multiply the integral by 4 . The a n coefficients are found from
$a_n = 4\cdot\frac{1}{\pi}\int_0^{\pi/2} f(t)\cos nt\,dt = \frac{4}{\pi}\int_0^{\pi/2} A\cos nt\,dt = \frac{4A}{n\pi}\sin nt\Big|_0^{\pi/2} = \frac{4A}{n\pi}\sin\frac{n\pi}{2}$  (7.25)
We observe that for n = even , all a n coefficients are zero, and thus all even harmonics are zero as
expected. Also, by inspection, the average ( DC ) value is zero.
(Figure 7.13: square waveform of Example 7.3, period T, equal to +A for −π/2 < ωt < π/2 and −A for π/2 < ωt < 3π/2)
For $n = odd$, we observe from (7.25) that $\sin\frac{n\pi}{2}$ alternates between $+1$ and $-1$ depending on the odd integer assigned to $n$. Thus,

$a_n = \pm\frac{4A}{n\pi}$  (7.26)
The plus sign applies for $n = 1, 5, 9, \ldots$ and the minus sign for $n = 3, 7, 11, \ldots$. Therefore, the trigonometric Fourier series for the square waveform with even symmetry is

$f(t) = \frac{4A}{\pi}\left(\cos\omega t - \frac{1}{3}\cos 3\omega t + \frac{1}{5}\cos 5\omega t - \cdots\right) = \frac{4A}{\pi}\sum_{n=odd}(-1)^{\frac{n-1}{2}}\frac{1}{n}\cos n\omega t$  (7.27)
Alternate Solution:

Since the waveform of Example 7.3 is the same as that of Example 7.1, but shifted to the right by π ⁄ 2 radians, we can use the result of Example 7.1, i.e.,

$f(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right)$  (7.28)

and substitute $\omega t$ with $\omega\tau + \pi/2$, that is, we let $\omega t = \omega\tau + \pi/2$. With this substitution, relation (7.28) becomes

$f(\tau) = \frac{4A}{\pi}\left[\sin\left(\omega\tau + \frac{\pi}{2}\right) + \frac{1}{3}\sin\left(3\omega\tau + \frac{3\pi}{2}\right) + \frac{1}{5}\sin\left(5\omega\tau + \frac{5\pi}{2}\right) + \cdots\right]$  (7.29)
and using the identities sin ( x + π ⁄ 2 ) = cos x , sin ( x + 3π ⁄ 2 ) = – cos x , and so on, we rewrite (7.29)
as
$f(\tau) = \frac{4A}{\pi}\left(\cos\omega\tau - \frac{1}{3}\cos 3\omega\tau + \frac{1}{5}\cos 5\omega\tau - \cdots\right)$  (7.30)
Example 7.4
Compute the trigonometric Fourier series of the sawtooth waveform of Figure 7.14.
(Figure 7.14: sawtooth waveform of amplitude ±A and period T)
Solution:
This waveform is an odd function but has no half-wave symmetry; therefore, it contains sine terms
only with both odd and even harmonics. Accordingly, we only need to evaluate the b n coefficients.
$f(t) = \begin{cases}\dfrac{A}{\pi}t & 0 < t < \pi\\[4pt] \dfrac{A}{\pi}t - 2A & \pi < t < 2\pi\end{cases}$
However, we can choose the limits from – π to +π , and thus we will only need one integration since
$f(t) = \frac{A}{\pi}t\qquad -\pi < t < \pi$
Better yet, since the waveform is an odd function, we can integrate from 0 to π , and multiply the
integral by 2 ; this is what we will do.
From tables of integrals,
$\int x\sin ax\,dx = \frac{1}{a^2}\sin ax - \frac{x}{a}\cos ax$  (7.31)
Then,

$b_n = \frac{2}{\pi}\int_0^{\pi}\frac{A}{\pi}t\sin nt\,dt = \frac{2A}{\pi^2}\int_0^{\pi} t\sin nt\,dt = \frac{2A}{\pi^2}\left(\frac{1}{n^2}\sin nt - \frac{t}{n}\cos nt\right)\Big|_0^{\pi}$
$\quad = \frac{2A}{n^2\pi^2}(\sin nt - nt\cos nt)\Big|_0^{\pi} = \frac{2A}{n^2\pi^2}(\sin n\pi - n\pi\cos n\pi)$  (7.32)
We observe that:
1. If $n = even$, $\sin n\pi = 0$ and $\cos n\pi = 1$. Then, (7.32) reduces to

$b_n = \frac{2A}{n^2\pi^2}(-n\pi) = -\frac{2A}{n\pi}$

that is, the even harmonics have negative coefficients.

2. If $n = odd$, $\sin n\pi = 0$ and $\cos n\pi = -1$. Then,

$b_n = \frac{2A}{n^2\pi^2}(n\pi) = \frac{2A}{n\pi}$
that is, the odd harmonics have positive coefficients.
Thus, the trigonometric Fourier series for the sawtooth waveform with odd symmetry is
$f(t) = \frac{2A}{\pi}\left(\sin\omega t - \frac{1}{2}\sin 2\omega t + \frac{1}{3}\sin 3\omega t - \frac{1}{4}\sin 4\omega t + \cdots\right) = \frac{2A}{\pi}\sum(-1)^{n-1}\frac{1}{n}\sin n\omega t$  (7.33)
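The series of (7.33) is easy to check numerically. The MATLAB sketch below (the amplitude A = 1 and the 50-harmonic cutoff are arbitrary illustration choices) plots the partial sum against the sawtooth A t ⁄ π over one period.

A = 1; N = 50; t = linspace(-pi, pi, 1000);
f = zeros(size(t));
for n = 1:N
    f = f + (2*A/pi)*(-1)^(n-1)*sin(n*t)/n;   % terms of (7.33)
end
plot(t, f, t, A*t/pi, '--'); grid on          % partial sum vs. the sawtooth A*t/pi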
Example 7.5
Find the trigonometric Fourier series of the triangular waveform of Figure 7.15. Assume ω = 1 .
(Figure 7.15: triangular waveform of amplitude ±A and period T; the waveform rises linearly from 0 to A over 0 < ωt < π/2)
From tables of integrals,

$\int x\sin ax\,dx = \frac{1}{a^2}\sin ax - \frac{x}{a}\cos ax$  (7.34)
Then,
$b_n = \frac{4}{\pi}\int_0^{\pi/2}\frac{2A}{\pi}t\sin nt\,dt = \frac{8A}{\pi^2}\int_0^{\pi/2} t\sin nt\,dt = \frac{8A}{\pi^2}\left(\frac{1}{n^2}\sin nt - \frac{t}{n}\cos nt\right)\Big|_0^{\pi/2}$
$\quad = \frac{8A}{n^2\pi^2}(\sin nt - nt\cos nt)\Big|_0^{\pi/2} = \frac{8A}{n^2\pi^2}\left(\sin\frac{n\pi}{2} - \frac{n\pi}{2}\cos\frac{n\pi}{2}\right)$  (7.35)
We are only interested in the odd values of $n$, for which $\cos\frac{n\pi}{2} = 0$ and $\sin\frac{n\pi}{2}$ alternates between $+1$ and $-1$. Therefore, the trigonometric Fourier series for the triangular waveform with odd symmetry is

$f(t) = \frac{8A}{\pi^2}\left(\sin\omega t - \frac{1}{9}\sin 3\omega t + \frac{1}{25}\sin 5\omega t - \frac{1}{49}\sin 7\omega t + \cdots\right) = \frac{8A}{\pi^2}\sum_{n=odd}(-1)^{\frac{n-1}{2}}\frac{1}{n^2}\sin n\omega t$  (7.36)
Example 7.6
The circuit of Figure 7.16 is a half-wave rectifier whose input is the sinusoid $v_{in}(t) = A\sin\omega t$, and its output $v_{out}(t)$ is defined as

$v_{out}(t) = \begin{cases}A\sin\omega t & 0 < \omega t < \pi\\ 0 & \pi < \omega t < 2\pi\end{cases}$  (7.37)
Solution:
The input and output waveforms for this example are shown in Figures 7.17 and 7.18.
(Figure 7.16: half-wave rectifier circuit with input v_in(t) and output v_out(t) taken across the resistor R)
Figure 7.19. Half-wave rectifier waveform for the circuit of Figure 7.16
By inspection, the average is a non-zero value, and the waveform has neither odd nor even symme-
try. Therefore, we expect all terms to be present.
Then,

$a_n = \frac{A}{\pi}\left\{-\frac{1}{2}\left[\frac{\cos(1-n)t}{1-n} + \frac{\cos(1+n)t}{1+n}\right]\right\}\Bigg|_0^{\pi} = -\frac{A}{2\pi}\left\{\frac{\cos(\pi-n\pi)}{1-n} + \frac{\cos(\pi+n\pi)}{1+n} - \frac{1}{1-n} - \frac{1}{n+1}\right\}$  (7.38)

and since $\cos(\pi \pm n\pi) = -\cos n\pi$, this simplifies to

$a_n = \frac{A(\cos n\pi + 1)}{\pi(1-n^2)}\qquad n \neq 1$  (7.39)
By substitution of $n = 0$ into (7.39), we get $a_0 = 2A/\pi$. Therefore, the DC value is

$\frac{1}{2}a_0 = \frac{A}{\pi}$  (7.40)
We cannot use (7.39) to obtain the value of a 1 ; therefore, we will evaluate the integral
$a_1 = \frac{A}{\pi}\int_0^{\pi}\sin t\cos t\,dt$
From tables of integrals,
$\int(\sin ax)(\cos ax)\,dx = \frac{1}{2a}(\sin ax)^2$
and thus,
$a_1 = \frac{A}{2\pi}(\sin t)^2\Big|_0^{\pi} = 0$  (7.41)
$a_2 = \frac{A(\cos 2\pi + 1)}{\pi(1-2^2)} = -\frac{2A}{3\pi}$  (7.42)

$a_3 = \frac{A(\cos 3\pi + 1)}{\pi(1-3^2)} = 0$  (7.43)

$a_4 = \frac{A(\cos 4\pi + 1)}{\pi(1-4^2)} = -\frac{2A}{15\pi}$  (7.44)

$a_6 = \frac{A(\cos 6\pi + 1)}{\pi(1-6^2)} = -\frac{2A}{35\pi}$  (7.45)

$a_8 = \frac{A(\cos 8\pi + 1)}{\pi(1-8^2)} = -\frac{2A}{63\pi}$  (7.46)
and so on.
Now, we need to evaluate the $b_n$ coefficients. For this example,

$b_n = \frac{1}{\pi}\int_0^{2\pi} f(t)\sin nt\,dt = \frac{A}{\pi}\int_0^{\pi}\sin t\sin nt\,dt + \frac{1}{\pi}\int_{\pi}^{2\pi}0\cdot\sin nt\,dt$
Therefore,
$b_n = \frac{A}{\pi}\cdot\frac{1}{2}\left\{\frac{\sin(1-n)t}{1-n} - \frac{\sin(1+n)t}{1+n}\right\}\Bigg|_0^{\pi} = \frac{A}{2\pi}\left[\frac{\sin(1-n)\pi}{1-n} - \frac{\sin(1+n)\pi}{1+n} - 0 + 0\right] = 0\qquad (n \neq 1)$  (7.47)
It can also be shown that $b_1 = A/2$. Combining (7.40), and (7.42) through (7.47), we find that the trigonometric Fourier series for the half-wave rectifier with no symmetry is

$f(t) = \frac{A}{\pi} + \frac{A}{2}\sin\omega t - \frac{2A}{\pi}\left(\frac{\cos 2\omega t}{3} + \frac{\cos 4\omega t}{15} + \frac{\cos 6\omega t}{35} + \cdots\right)$  (7.48)
Example 7.7
Figure 7.20 shows a full-wave rectifier circuit with input the sinusoid $v_{in}(t) = A\sin\omega t$. The output of that circuit is $v_{out}(t) = A|\sin\omega t|$. Express $v_{out}(t)$ as a trigonometric Fourier series. Assume ω = 1.
(Figure 7.20: full-wave rectifier circuit with input v_in(t) and output v_out(t) taken across the resistor R)
Therefore, we expect only cosine terms to be present. The a n coefficients are found from
(Figure: the input waveform v_IN(t) = A sin ωt and the full-wave rectified output waveform v_OUT(t) = A|sin ωt|)
$a_n = \frac{1}{\pi}\int_0^{2\pi} f(t)\cos nt\,dt$

or, choosing the limits of integration from −π to +π and making use of the even symmetry of the output,

$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} A\sin t\cos nt\,dt = \frac{2A}{\pi}\int_0^{\pi}\sin t\cos nt\,dt$  (7.49)
From tables of integrals,

$\int(\sin mx)(\cos nx)\,dx = \frac{\cos(m-n)x}{2(n-m)} - \frac{\cos(m+n)x}{2(m+n)}\qquad (m^2 \neq n^2)$
Since

$\cos(x-y) = \cos(y-x) = \cos x\cos y + \sin x\sin y$

we express (7.49) as

$a_n = \frac{2A}{\pi}\cdot\frac{1}{2}\left\{\frac{\cos(n-1)t}{n-1} - \frac{\cos(n+1)t}{n+1}\right\}\Bigg|_0^{\pi} = \frac{A}{\pi}\left[\frac{1-\cos(n\pi+\pi)}{n+1} + \frac{\cos(n\pi-\pi)-1}{n-1}\right]$  (7.50)
To simplify the last expression in (7.50), we make use of the trigonometric identities
$\cos(n\pi+\pi) = \cos n\pi\cos\pi - \sin n\pi\sin\pi = -\cos n\pi$

and

$\cos(n\pi-\pi) = \cos n\pi\cos\pi + \sin n\pi\sin\pi = -\cos n\pi$

Then, (7.50) simplifies to

$a_n = \frac{A}{\pi}\left[\frac{1+\cos n\pi}{n+1} - \frac{1+\cos n\pi}{n-1}\right] = \frac{A}{\pi}\cdot\frac{-2+(n-1)\cos n\pi-(n+1)\cos n\pi}{n^2-1} = \frac{-2A(\cos n\pi+1)}{\pi(n^2-1)}\qquad n \neq 1$  (7.51)
Now, we can evaluate all the a n coefficients, except a 1 , from (7.51). First, we will evaluate a 0 to
obtain the DC value. By substitution of n = 0 , we get
$a_0 = \frac{4A}{\pi}$

Therefore, the DC value is

$\frac{1}{2}a_0 = \frac{2A}{\pi}$  (7.52)
From (7.49) with $n = 1$,

$a_1 = \frac{2A}{\pi}\int_0^{\pi}\sin t\cos t\,dt$
From tables of integrals,
$\int(\sin ax)(\cos ax)\,dx = \frac{1}{2a}(\sin ax)^2$
and thus,
π
1 2
a 1 = ------ ( sin t ) = 0 (7.53)
2π 0
Similarly, from (7.51),

$a_2 = \frac{-2A(\cos 2\pi+1)}{\pi(2^2-1)} = -\frac{4A}{3\pi}$  (7.54)

$a_4 = \frac{-2A(\cos 4\pi+1)}{\pi(4^2-1)} = -\frac{4A}{15\pi}$  (7.55)

$a_6 = \frac{-2A(\cos 6\pi+1)}{\pi(6^2-1)} = -\frac{4A}{35\pi}$  (7.56)

$a_8 = \frac{-2A(\cos 8\pi+1)}{\pi(8^2-1)} = -\frac{4A}{63\pi}$  (7.57)
and so on. Then, combining the terms of (7.52) and (7.54) through (7.57) we get
Therefore, the trigonometric form of the Fourier series for the full-wave rectifier with even symmetry
is
$f(t) = \frac{2A}{\pi} - \frac{4A}{\pi}\sum_{n=2,4,6,\ldots}^{\infty}\frac{1}{n^2-1}\cos n\omega t$  (7.59)
This series of (7.59) shows that there is no component at the input (fundamental) frequency. This is because we chose the period to be from −π to +π. Generally, the period is defined as the shortest period of repetition. In any waveform where the period is chosen appropriately, it is very unlikely that a Fourier series will consist of even harmonic terms only.
The trigonometric Fourier series of the square waveform with odd symmetry (Example 7.1) is

$f(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right) = \frac{4A}{\pi}\sum_{n=odd}\frac{1}{n}\sin n\omega t$
Figure 7.24 shows the first 11 harmonics and their sum. We observe that as we add more and more
harmonics, the sum looks more and more like the square waveform. However, the crests do not get
flattened; this is known as Gibbs phenomenon and it occurs because of the discontinuity of the per-
fect square waveform as it changes from +A to – A .
(Figure 7.24: the sum of the first 11 harmonics of the square waveform; the crest near each discontinuity illustrates the Gibbs phenomenon)
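The Gibbs phenomenon can be observed with a few lines of MATLAB. The sketch below (the amplitude A = 1 is an assumed value) plots the sum of 3, 11, and 51 odd harmonics of the square-wave series (7.22); the overshoot next to each discontinuity remains at roughly 9% of the jump no matter how many terms are added.

A = 1; t = linspace(0, 2*pi, 4000); hold on
for N = [3 11 51]
    f = zeros(size(t));
    for n = 1:2:N
        f = f + (4*A/pi)*sin(n*t)/n;       % odd harmonics of (7.22)
    end
    plot(t, f)
end
hold off; grid on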
If a given waveform does not have any kind of symmetry, it may be advantageous to use the alternate form of the trigonometric Fourier series, where the cosine and sine terms of the same frequency are grouped together and the sum is combined into a single term, either cosine or sine. However, we still need to compute the a n and b n coefficients separately.
We use the triangle shown in Figure 7.25 for the derivation of the alternate forms.
$c_n = \sqrt{a_n^2 + b_n^2}\qquad \cos\theta_n = \frac{a_n}{\sqrt{a_n^2+b_n^2}} = \frac{a_n}{c_n}\qquad \sin\theta_n = \frac{b_n}{\sqrt{a_n^2+b_n^2}} = \frac{b_n}{c_n}$

$\cos\theta_n = \sin\varphi_n\qquad \theta_n = \operatorname{atan}\frac{b_n}{a_n}\qquad \varphi_n = \operatorname{atan}\frac{a_n}{b_n}$

Figure 7.25. Derivation of the alternate form of the trigonometric Fourier series
For ω = 1, grouping the cosine and sine terms of each harmonic gives

$f(t) = \frac{1}{2}a_0 + c_1\underbrace{(\cos\theta_1\cos t + \sin\theta_1\sin t)}_{\cos(t-\theta_1)} + c_2\underbrace{(\cos\theta_2\cos 2t + \sin\theta_2\sin 2t)}_{\cos(2t-\theta_2)} + \cdots + c_n\underbrace{(\cos\theta_n\cos nt + \sin\theta_n\sin nt)}_{\cos(nt-\theta_n)}$

and, in general, for ω ≠ 1,

$f(t) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\cos(n\omega t - \theta_n) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\cos\left(n\omega t - \operatorname{atan}\frac{b_n}{a_n}\right)$  (7.61)
Similarly,

$f(t) = \frac{1}{2}a_0 + c_1\underbrace{(\sin\varphi_1\cos t + \cos\varphi_1\sin t)}_{\sin(t+\varphi_1)} + c_2\underbrace{(\sin\varphi_2\cos 2t + \cos\varphi_2\sin 2t)}_{\sin(2t+\varphi_2)} + \cdots + c_n\underbrace{(\sin\varphi_n\cos nt + \cos\varphi_n\sin nt)}_{\sin(nt+\varphi_n)}$

and, in general, for ω ≠ 1,

$f(t) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\sin(n\omega t + \varphi_n) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\sin\left(n\omega t + \operatorname{atan}\frac{a_n}{b_n}\right)$  (7.62)
When used in circuit analysis, (7.61) and (7.62) can be expressed as phasors. Since it is customary to use the cosine function in the time-domain to phasor transformation, we choose to use the transformation of (7.63) below.

$\frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\cos\left(n\omega t - \operatorname{atan}\frac{b_n}{a_n}\right) \Leftrightarrow \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\angle{-\operatorname{atan}\frac{b_n}{a_n}}$  (7.63)
Example 7.8
Find the first 5 terms of the alternate form of the trigonometric Fourier series for the waveform of
Figure 7.26.
(Figure 7.26: waveform for Example 7.8 with ω = 1; f(t) = 3 for 0 < ωt < π/2 and f(t) = 1 for π/2 < ωt < 2π)
Solution:
The given waveform has no symmetry; thus, we expect both cosine and sine functions with odd and
even terms present. Also, by inspection the DC value is not zero.
We will compute the a n and b n coefficients, the DC value, and we will combine them to get an
expression in the form of (7.63). Then,
$a_n = \frac{1}{\pi}\int_0^{\pi/2}3\cos nt\,dt + \frac{1}{\pi}\int_{\pi/2}^{2\pi}1\cdot\cos nt\,dt = \frac{3}{n\pi}\sin nt\Big|_0^{\pi/2} + \frac{1}{n\pi}\sin nt\Big|_{\pi/2}^{2\pi}$
$\quad = \frac{3}{n\pi}\sin\frac{n\pi}{2} + \frac{1}{n\pi}\sin 2n\pi - \frac{1}{n\pi}\sin\frac{n\pi}{2} = \frac{2}{n\pi}\sin\frac{n\pi}{2}$  (7.64)
$a_1 = \frac{2}{\pi}$  (7.65)

and

$a_3 = -\frac{2}{3\pi}$  (7.66)

while $a_2 = a_4 = 0$.
The DC value is

$\frac{1}{2}a_0 = \frac{1}{2\pi}\int_0^{\pi/2}3\,dt + \frac{1}{2\pi}\int_{\pi/2}^{2\pi}1\,dt = \frac{1}{2\pi}\left(3t\Big|_0^{\pi/2} + t\Big|_{\pi/2}^{2\pi}\right) = \frac{1}{2\pi}\left(\frac{3\pi}{2} + 2\pi - \frac{\pi}{2}\right) = \frac{1}{2\pi}(\pi + 2\pi) = \frac{3}{2}$  (7.67)
The $b_n$ coefficients are

$b_n = \frac{1}{\pi}\int_0^{\pi/2}3\sin nt\,dt + \frac{1}{\pi}\int_{\pi/2}^{2\pi}1\cdot\sin nt\,dt = \frac{-3}{n\pi}\cos nt\Big|_0^{\pi/2} + \frac{-1}{n\pi}\cos nt\Big|_{\pi/2}^{2\pi}$
$\quad = \frac{-3}{n\pi}\cos\frac{n\pi}{2} + \frac{3}{n\pi} - \frac{1}{n\pi}\cos 2n\pi + \frac{1}{n\pi}\cos\frac{n\pi}{2} = \frac{2}{n\pi}\left(1 - \cos\frac{n\pi}{2}\right)$  (7.68)
Then,

$b_1 = 2/\pi$  (7.69)

$b_2 = 2/\pi$  (7.70)

$b_3 = 2/3\pi$  (7.71)

$b_4 = 0$  (7.72)
From (7.63),

$\frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\cos\left(n\omega t - \operatorname{atan}\frac{b_n}{a_n}\right) \Leftrightarrow \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\angle{-\operatorname{atan}\frac{b_n}{a_n}}$

where

$c_n\angle{-\operatorname{atan}\frac{b_n}{a_n}} = \sqrt{a_n^2+b_n^2}\,\angle{-\operatorname{atan}\frac{b_n}{a_n}} = \sqrt{a_n^2+b_n^2}\,\angle{-\theta_n} = a_n - jb_n$  (7.73)
Thus, for the first four harmonics,

$a_1 - jb_1 = \frac{2}{\pi} - j\frac{2}{\pi} = \frac{2\sqrt{2}}{\pi}\angle{-45°} \Leftrightarrow \frac{2\sqrt{2}}{\pi}\cos(\omega t - 45°)$  (7.74)

$a_2 - jb_2 = 0 - j\frac{2}{\pi} = \frac{2}{\pi}\angle{-90°} \Leftrightarrow \frac{2}{\pi}\cos(2\omega t - 90°)$  (7.75)

$a_3 - jb_3 = -\frac{2}{3\pi} - j\frac{2}{3\pi} = \frac{2\sqrt{2}}{3\pi}\angle{-135°} \Leftrightarrow \frac{2\sqrt{2}}{3\pi}\cos(3\omega t - 135°)$  (7.76)

and

$a_4 - jb_4 = 0 - j0 = 0$  (7.77)
Combining the terms of (7.67) and (7.74) through (7.77), we find that the alternate form of the trig-
onometric Fourier series representing the waveform of this example is
$f(t) = \frac{3}{2} + \frac{1}{\pi}\left[2\sqrt{2}\cos(\omega t - 45°) + 2\cos(2\omega t - 90°) + \frac{2\sqrt{2}}{3}\cos(3\omega t - 135°) + \cdots\right]$  (7.78)
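The conversion from the (a_n, b_n) pairs to the (c_n, θ_n) pairs used in (7.74) through (7.77) can be sketched in MATLAB as shown below, using the coefficients found above for Example 7.8.

an = [2/pi   0     -2/(3*pi)  0];          % a1 ... a4 from (7.64)-(7.66)
bn = [2/pi   2/pi   2/(3*pi)  0];          % b1 ... b4 from (7.68)
cn     = sqrt(an.^2 + bn.^2);              % magnitudes c_n
thetan = atan2(bn, an)*180/pi;             % angles theta_n in degrees
disp([cn.' thetan.'])                      % each term is c_n*cos(n*w*t - theta_n)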
Example 7.9
The input to the series RC circuit of Figure 7.27, is the square waveform of Figure 7.28. Compute
the voltage v c ( t ) across the capacitor. Consider only the first three terms of the series, and assume
ω = 1.
(Figure 7.27: series RC circuit with R = 1 Ω and C = 1 F; the output v_C(t) is taken across the capacitor)

(Figure 7.28: square-wave input v_in(t) of amplitude ±A, equal to +A for 0 < ωt < π and −A for π < ωt < 2π)

Solution:

From Example 7.1, the input square waveform can be expressed as the trigonometric Fourier series

$v_{in}(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right)$  (7.79)
Since this series is the sum of sinusoids, we will use phasor analysis to obtain the solution.
The equivalent phasor circuit is shown in Figure 7.29.
(Figure 7.29: phasor equivalent circuit, R = 1 in series with the capacitor impedance 1 ⁄ (jω); V_C is taken across the capacitor)

For the $n$th harmonic, the phasor capacitor voltage is

$V_{Cn} = \frac{1/(jn\omega)}{1 + 1/(jn\omega)}V_{in\,n} = \frac{1}{1+jn}V_{in\,n}$  (7.80)
With reference to (7.79), the phasors of the first three terms of the input are

$\frac{4A}{\pi}\sin t = \frac{4A}{\pi}\cos(t - 90°) \Leftrightarrow V_{in\,1} = \frac{4A}{\pi}\angle{-90°}$  (7.81)

$\frac{4A}{\pi}\cdot\frac{1}{3}\sin 3t = \frac{4A}{\pi}\cdot\frac{1}{3}\cos(3t - 90°) \Leftrightarrow V_{in\,3} = \frac{4A}{\pi}\cdot\frac{1}{3}\angle{-90°}$  (7.82)

$\frac{4A}{\pi}\cdot\frac{1}{5}\sin 5t = \frac{4A}{\pi}\cdot\frac{1}{5}\cos(5t - 90°) \Leftrightarrow V_{in\,5} = \frac{4A}{\pi}\cdot\frac{1}{5}\angle{-90°}$  (7.83)
By substitution of (7.81) through (7.83) into (7.80), we obtain the phasor and time-domain voltages indicated in (7.84) through (7.86) below.

$V_{C1} = \frac{1}{1+j}\cdot\frac{4A}{\pi}\angle{-90°} = \frac{1}{\sqrt{2}\angle{45°}}\cdot\frac{4A}{\pi}\angle{-90°} = \frac{4A}{\pi}\cdot\frac{\sqrt{2}}{2}\angle{-135°} \Leftrightarrow \frac{4A}{\pi}\cdot\frac{\sqrt{2}}{2}\cos(t - 135°)$  (7.84)

$V_{C3} = \frac{1}{1+j3}\cdot\frac{4A}{\pi}\cdot\frac{1}{3}\angle{-90°} = \frac{1}{\sqrt{10}\angle{71.6°}}\cdot\frac{4A}{\pi}\cdot\frac{1}{3}\angle{-90°} = \frac{4A}{\pi}\cdot\frac{\sqrt{10}}{30}\angle{-161.6°} \Leftrightarrow \frac{4A}{\pi}\cdot\frac{\sqrt{10}}{30}\cos(3t - 161.6°)$  (7.85)

$V_{C5} = \frac{1}{1+j5}\cdot\frac{4A}{\pi}\cdot\frac{1}{5}\angle{-90°} = \frac{1}{\sqrt{26}\angle{78.7°}}\cdot\frac{4A}{\pi}\cdot\frac{1}{5}\angle{-90°} = \frac{4A}{\pi}\cdot\frac{\sqrt{26}}{130}\angle{-168.7°} \Leftrightarrow \frac{4A}{\pi}\cdot\frac{\sqrt{26}}{130}\cos(5t - 168.7°)$  (7.86)
Thus, the capacitor voltage in the time domain is
$v_C(t) = \frac{4A}{\pi}\left[\frac{\sqrt{2}}{2}\cos(t - 135°) + \frac{\sqrt{10}}{30}\cos(3t - 161.6°) + \frac{\sqrt{26}}{130}\cos(5t - 168.7°) + \cdots\right]$  (7.87)
into f ( t ) . Thus,
$f(t) = \frac{1}{2}a_0 + a_1\left(\frac{e^{j\omega t}+e^{-j\omega t}}{2}\right) + a_2\left(\frac{e^{j2\omega t}+e^{-j2\omega t}}{2}\right) + \cdots + b_1\left(\frac{e^{j\omega t}-e^{-j\omega t}}{j2}\right) + b_2\left(\frac{e^{j2\omega t}-e^{-j2\omega t}}{j2}\right) + \cdots$  (7.90)

Grouping terms with the same exponentials,

$f(t) = \cdots + \left(\frac{a_2}{2} - \frac{b_2}{j2}\right)e^{-j2\omega t} + \left(\frac{a_1}{2} - \frac{b_1}{j2}\right)e^{-j\omega t} + \frac{1}{2}a_0 + \left(\frac{a_1}{2} + \frac{b_1}{j2}\right)e^{j\omega t} + \left(\frac{a_2}{2} + \frac{b_2}{j2}\right)e^{j2\omega t} + \cdots$  (7.91)
The terms in parentheses in (7.91) are usually denoted as

$C_{-n} = \frac{1}{2}\left(a_n - \frac{b_n}{j}\right) = \frac{1}{2}(a_n + jb_n)$  (7.92)

$C_n = \frac{1}{2}\left(a_n + \frac{b_n}{j}\right) = \frac{1}{2}(a_n - jb_n)$  (7.93)

$C_0 = \frac{1}{2}a_0$  (7.94)
Then, (7.91) is written as

$f(t) = \cdots + C_{-2}e^{-j2\omega t} + C_{-1}e^{-j\omega t} + C_0 + C_1e^{j\omega t} + C_2e^{j2\omega t} + \cdots$  (7.95)
We must remember that the C i coefficients, except C 0 , are complex and occur in complex conju-
gate pairs, that is,
C –n = C n∗ (7.96)
We can derive a general expression for the complex coefficients C n , by multiplying both sides of
– jnωt
(7.95) by e and integrating over one period, as we did in the derivation of the a n and b n coeffi-
cients of the trigonometric form. Then, with ω = 1 ,
$\int_0^{2\pi}f(t)e^{-jnt}\,dt = \cdots + \int_0^{2\pi}C_{-2}e^{-j2t}e^{-jnt}\,dt + \int_0^{2\pi}C_{-1}e^{-jt}e^{-jnt}\,dt + \int_0^{2\pi}C_0e^{-jnt}\,dt$
$\quad + \int_0^{2\pi}C_1e^{jt}e^{-jnt}\,dt + \int_0^{2\pi}C_2e^{j2t}e^{-jnt}\,dt + \cdots + \int_0^{2\pi}C_ne^{jnt}e^{-jnt}\,dt$  (7.97)

We observe that all the integrals on the right side of (7.97) are zero except the last one. Therefore,

$\int_0^{2\pi}f(t)e^{-jnt}\,dt = \int_0^{2\pi}C_ne^{jnt}e^{-jnt}\,dt = \int_0^{2\pi}C_n\,dt = 2\pi C_n$
or

$C_n = \frac{1}{2\pi}\int_0^{2\pi}f(t)e^{-jnt}\,dt$

and, in general, for ω ≠ 1,

$C_n = \frac{1}{2\pi}\int_0^{2\pi}f(t)e^{-jn\omega t}\,d(\omega t)$  (7.98)

or

$C_n = \frac{1}{T}\int_0^{T}f(t)e^{-jn\omega t}\,dt$  (7.99)
We can derive the trigonometric Fourier series from the exponential series by addition and subtrac-
tion of the exponential form coefficients C n and C –n . Thus, from (7.92) and (7.93),
$C_n + C_{-n} = \frac{1}{2}(a_n - jb_n + a_n + jb_n) = a_n$

or

$a_n = C_n + C_{-n}$  (7.100)

Similarly,

$C_n - C_{-n} = \frac{1}{2}(a_n - jb_n - a_n - jb_n) = -jb_n$  (7.101)

or

$b_n = j(C_n - C_{-n})$  (7.102)
Since odd functions have no cosine terms, the a n coefficients in (7.103) and (7.104) are zero.
Therefore, both C –n and C n are imaginary.
We recall from the trigonometric Fourier series that if there is half-wave symmetry, all even har-
monics are zero. Therefore, in (7.103) and (7.104) the coefficients a n and b n are both zero for
n = even , and thus, both C – n and C n are also zero for n = even .
5. C –n = C n∗ always
Example 7.10

Compute the exponential Fourier series for the square waveform of Example 7.1, that is, +A for 0 < ωt < π and −A for π < ωt < 2π. Assume ω = 1.

Solution:

The coefficients are obtained from $C_n = \frac{1}{2\pi}\int_0^{2\pi}f(t)e^{-jnt}\,dt$,
and for n = 0 ,
$C_0 = \frac{1}{2\pi}\left[\int_0^{\pi}Ae^{0}\,dt + \int_{\pi}^{2\pi}(-A)e^{0}\,dt\right] = \frac{A}{2\pi}(\pi - 2\pi + \pi) = 0$
as expected.
For n ≠ 0,

$C_n = \frac{1}{2\pi}\left[\int_0^{\pi}Ae^{-jnt}\,dt + \int_{\pi}^{2\pi}(-A)e^{-jnt}\,dt\right] = \frac{1}{2\pi}\left[\frac{A}{-jn}e^{-jnt}\Big|_0^{\pi} + \frac{-A}{-jn}e^{-jnt}\Big|_{\pi}^{2\pi}\right] = \frac{A}{2j\pi n}\left(e^{-jn\pi} - 1\right)^2$

For $n = even$, $e^{-jn\pi} = 1$; then,

$C_n\Big|_{n=even} = \frac{A}{2j\pi n}\left(e^{-jn\pi} - 1\right)^2 = \frac{A}{2j\pi n}(1 - 1)^2 = 0$  (7.106)
as expected.
For $n = odd$, $e^{-jn\pi} = -1$. Therefore,

$C_n\Big|_{n=odd} = \frac{A}{2j\pi n}\left(e^{-jn\pi} - 1\right)^2 = \frac{A}{2j\pi n}(-1 - 1)^2 = \frac{A}{2j\pi n}(-2)^2 = \frac{2A}{j\pi n}$  (7.107)
Using (7.95), that is,
$f(t) = \cdots + C_{-2}e^{-j2\omega t} + C_{-1}e^{-j\omega t} + C_0 + C_1e^{j\omega t} + C_2e^{j2\omega t} + \cdots$

we obtain the exponential Fourier series for the square waveform with odd symmetry as

$f(t) = \frac{2A}{j\pi}\left(\cdots - \frac{1}{3}e^{-j3\omega t} - e^{-j\omega t} + e^{j\omega t} + \frac{1}{3}e^{j3\omega t} + \cdots\right) = \frac{2A}{j\pi}\sum_{n=odd}\frac{1}{n}e^{jn\omega t}$  (7.108)
The minus ( −) sign of the first two terms within the parentheses results from the fact that
C –n = C n∗ . For instance, since C 3 = 2A ⁄ j3π , it follows that C – 3 = C 3∗ = – 2A ⁄ j3π . We observe
that f ( t ) is purely imaginary, as expected, since the waveform is an odd function.
To prove that (7.108) and (7.22) are the same, we group the two terms inside the parentheses of
(7.108) for which n = 1 ; this will produce the fundamental frequency sin ωt . Then, we group the
two terms for which n = 3 , and this will produce the third harmonic sin 3ωt , and so on.
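The exponential coefficients of (7.98) are also easy to approximate numerically. The sketch below (amplitude A = 1 and the integration grid are assumptions) uses trapezoidal integration for the square waveform of Example 7.10 and compares the result with 2A ⁄ (jnπ) of (7.107); the even-n values come out near zero.

A = 1; t = linspace(0, 2*pi, 20001);
f = A*(t < pi) - A*(t >= pi);              % square waveform with odd symmetry
Cn = zeros(1,6);
for n = 1:6
    Cn(n) = trapz(t, f.*exp(-1j*n*t))/(2*pi);   % numerical form of (7.98)
end
disp([Cn.'  (2*A./(1j*pi*(1:6))).'])       % numerical vs. analytical (odd n)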
(Figure: line spectrum of the trigonometric series of the square waveform with odd symmetry; the amplitudes b_n = 4A ⁄ (nπ) appear at the odd harmonics only)

Figure 7.32. Line spectrum for half-wave rectifier of Example 7.6 (DC component A ⁄ π, fundamental A ⁄ 2, and even harmonics at n = 2, 4, 6, 8)
Example 7.11
Compute the exponential Fourier series for the waveform of Figure 7.33, and plot its line spectra.
Assume ω = 1 .
Solution:
This recurrent rectangular pulse is used extensively in digital communications systems. To determine
how faithfully such pulses will be transmitted, it is necessary to know the frequency components.
(Figure 7.33: recurrent rectangular pulse of amplitude A, pulse width T ⁄ k, and period T, centered about the origin from −π ⁄ k to π ⁄ k)
As shown in Figure 7.33, the pulse duration is T ⁄ k . Thus, the recurrence interval (period) T, is k
times the pulse duration. In other words, k is the ratio of the pulse repetition time to the duration of
each pulse.
For this example, the components of the exponential Fourier series are found from

$C_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)e^{-jnt}\,dt = \frac{1}{2\pi}\int_{-\pi/k}^{\pi/k}Ae^{-jnt}\,dt$  (7.109)
The value of the average ( DC component) is found by letting n = 0 . Then, from (7.109) we get
$C_0 = \frac{A}{2\pi}\,t\Big|_{-\pi/k}^{\pi/k} = \frac{A}{2\pi}\left(\frac{\pi}{k} + \frac{\pi}{k}\right)$

or

$C_0 = \frac{A}{k}$  (7.110)
For n ≠ 0,

$C_n = \frac{A}{-j2n\pi}e^{-jnt}\Big|_{-\pi/k}^{\pi/k} = \frac{A}{-j2n\pi}\left(e^{-jn\pi/k} - e^{jn\pi/k}\right) = \frac{A}{n\pi}\cdot\frac{e^{jn\pi/k} - e^{-jn\pi/k}}{j2} = \frac{A}{n\pi}\sin\frac{n\pi}{k} = A\,\frac{\sin(n\pi/k)}{n\pi}$

or

$C_n = \frac{A}{k}\cdot\frac{\sin(n\pi/k)}{n\pi/k}$  (7.111)
and thus,

$f(t) = \sum_{n=-\infty}^{\infty}\frac{A}{k}\cdot\frac{\sin(n\pi/k)}{n\pi/k}\,e^{jn\omega t}$  (7.112)
The relation of (7.112) has the sin x ⁄ x form, and the line spectrum is shown in Figures 7.34 through
7.36, for k = 2 , k = 5 and k = 10 .
(Figures 7.34 through 7.36: line spectra of sin(nπ ⁄ k) ⁄ (nπ ⁄ k) for k = 2, k = 5, and k = 10)
The spectral lines are separated by the distance 1 ⁄ k; thus, as k becomes larger the lines come closer together, and as k becomes smaller the lines move further apart. Although the space between lines appears to be the same in each plot, we should observe that the number of lines between successive zero crossings of the envelope is different.
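The line spectra of Figures 7.34 through 7.36 can be reproduced with the MATLAB sketch below; the range of n and the values of k are those used in the figures.

n = -10:10;
for k = [2 5 10]
    x = n*pi/k;
    y = ones(size(n));
    y(x ~= 0) = sin(x(x ~= 0))./x(x ~= 0); % sin(n*pi/k)/(n*pi/k), with y = 1 at n = 0
    figure; stem(n, y); grid on; title(['k = ' num2str(k)])
end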
Example 7.12
Use the result of Example 7.11 to compute the exponential Fourier series of the unit impulse train
Aδ ( t ± 2πn ) shown in Figure 7.37.
Solution:
From Example 7.11,
$C_n = \frac{A}{k}\cdot\frac{\sin(n\pi/k)}{n\pi/k}$  (7.113)
(Figure 7.37: train of impulses Aδ(t ± 2πn), spaced 2π apart along the ωt axis)
and let the pulse amplitude be

$A = \frac{1}{T/k} = \frac{1}{2\pi/k} = \frac{k}{2\pi}$  (7.115)
as shown in Figure 7.38.
Figure 7.38. Recurrent pulse with amplitude A = 1 ⁄ (2π ⁄ k)
By substitution of (7.115) into (7.113), we get
$C_n = \frac{k/2\pi}{k}\cdot\frac{\sin(n\pi/k)}{n\pi/k} = \frac{1}{2\pi}\cdot\frac{\sin(n\pi/k)}{n\pi/k}$  (7.116)
and as π ⁄ k → 0, we observe from Figure 7.38 that each recurrent pulse becomes a unit impulse, and the pulse train becomes a train of unit impulses. Moreover, recalling that

$\lim_{x\to 0}\frac{\sin x}{x} = 1$

we see that (7.116) reduces to $C_n = \frac{1}{2\pi}$, that is, all coefficients of the exponential Fourier series have the same amplitude and thus,
$f(t) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty}e^{jn\omega t}$  (7.117)
The series of (7.117) reveals that the line spectrum of the impulse train of Figure 7.38 consists of a train of equal-amplitude, equally spaced harmonics, as shown in Figure 7.39. Since these spectral lines extend from −∞ to +∞, the bandwidth approaches infinity.
(Figure 7.39: line spectrum of the impulse train; all lines have amplitude 1 ⁄ (2π) and unit spacing in n)

(Figure: recurrent rectangular pulse of amplitude A, width T ⁄ k, and period T)
Consider a current that consists of a DC component and sinusoids of different frequencies, that is,

$i = I_0 + I_1\cos(\omega_1 t - \theta_1) + I_2\cos(\omega_2 t - \theta_2) + \cdots + I_N\cos(\omega_N t - \theta_N)$  (7.118)

where $I_0$ represents a constant current, and $I_1, I_2, \ldots, I_N$ represent the amplitudes of the sinusoids. The RMS value of i is found from

$I_{RMS} = \sqrt{I_0^2 + I_{1RMS}^2 + I_{2RMS}^2 + \cdots + I_{NRMS}^2}$  (7.119)

or

$I_{RMS} = \sqrt{I_0^2 + \frac{1}{2}I_{1m}^2 + \frac{1}{2}I_{2m}^2 + \cdots + \frac{1}{2}I_{Nm}^2}$  (7.120)
The proof of (7.119) is based on Parseval’s theorem; we will discuss this theorem on the next chap-
ter. A brief description of the proof follows.
We recall that the RMS (effective) value of a function, such as current i ( t ) , is defined as
$I_{RMS} = \sqrt{\frac{1}{T}\int_0^{T}i^2\,dt}$  (7.121)
Substitution of (7.118) into (7.121) will produce the terms $I_0^2$, $I_{1m}^2[\cos(\omega_1 t - \theta_1)]^2$, and other similar terms representing higher order harmonics. The result will also contain products of cosine functions multiplied by a constant, or other cosine terms of different harmonic frequencies. But, as we know from the orthogonality principle, the integration of (7.121) will produce all zero terms except the cosine squared terms which, for each harmonic, will be
$\frac{I_m^2\,T/2}{T} = \frac{1}{2}I_m^2$  (7.122)
as in (7.120).
Example 7.13
Find the I RMS value of the square waveform shown in Figure 7.41 by application of
a. relation (7.121)
b. relation (7.120)
(Figure 7.41: square waveform with amplitude ±1 and period 2π)
Solution:
a. By inspection, the period is T = 2π as shown in Figure 7.42.
(Figure 7.42: one period of the waveform of Figure 7.41)

Then,

$I_{RMS}^2 = \frac{1}{T}\int_0^{T}i^2\,dt = \frac{1}{2\pi}\left[\int_0^{\pi}(1)^2\,dt + \int_{\pi}^{2\pi}(-1)^2\,dt\right] = \frac{1}{2\pi}(\pi + \pi) = 1$  (7.123)

or

$I_{RMS} = 1$
and as we know, the RMS value of a sinusoid is a real number independent of the frequency and
the phase angle, and it is equal to 0.707 times its maximum value, that is, I RMS = 0.707I max .
Then, from (7.120) and (7.123),
$I_{RMS} = \frac{4}{\pi}\sqrt{0 + \frac{1}{2}(1)^2 + \frac{1}{2}\left(\frac{1}{3}\right)^2 + \frac{1}{2}\left(\frac{1}{5}\right)^2 + \cdots} \approx 0.97$  (7.124)
This is a good approximation to unity, considering that higher harmonics have been neglected.
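The two computations of Example 7.13 can be verified numerically with the sketch below; the unit square wave and the choice of the first four odd harmonics mirror the example.

t = linspace(0, 2*pi, 20001);
i = (t < pi) - (t >= pi);                       % unit square wave
Irms_direct = sqrt(trapz(t, i.^2)/(2*pi));      % from (7.121), equals 1
Im = 4./(pi*(1:2:7));                           % peak amplitudes 4/(n*pi), n = 1,3,5,7
Irms_series = sqrt(sum(Im.^2)/2);               % from (7.120), about 0.97
disp([Irms_direct Irms_series])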
$P_{ave} = \frac{1}{T}\int_0^{T}p\,dt = \frac{1}{T}\int_0^{T}vi\,dt$  (7.126)
and the expression for the alternate trigonometric Fourier series, that is,
$f(t) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}c_n\cos(n\omega t - \theta_n)$  (7.127)
where f ( t ) can represent voltages and currents. Then, by substitution of these series for v and i into
(7.126), we will find that the products of v and i that have different frequencies, will be zero, and
only the products of the same frequency terms will have non-zero values. The non-zero values will
represent the average power for each harmonic in (7.125).
Example 7.14
For the circuit of Figure 7.43, compute:
a. The current $i_C(t)$ given that $v_{in}(t) = 6\left(\cos\omega t - \frac{1}{3}\cos 3\omega t\right)$ V where ω = 1000 r ⁄ s.
Solution:
a. We will use the subscripts 1 and 3 to represent the quantities due to the fundamental and third
harmonic frequencies respectively. Since the excitation consists of two sinusoids of different fre-
quencies, we can use phasor quantities, and we will denote them with capital letters.
(Figure 7.43: series RC circuit with R = 1 Ω and C = 10⁻³ ⁄ 3 F; i_C(t) is the capacitor current)
For the fundamental, $v_{in1}(t) = 6\cos\omega t \Leftrightarrow V_{in1} = 6\angle{0°}$ V, and

$\frac{-j}{\omega_1 C} = \frac{-j}{10^3\times 10^{-3}/3} = -j3\qquad Z_1 = 1 - j3 = \sqrt{10}\angle{-71.6°}$

$I_{C1} = \frac{V_{in1}}{Z_1} = \frac{6\angle{0°}}{\sqrt{10}\angle{-71.6°}} = 1.90\angle{71.6°} \Leftrightarrow i_{C1}(t) = 1.90\cos(\omega t + 71.6°)\ \text{A}$  (7.128)
Next,

$v_{in3}(t) = -2\cos 3\omega t = 2\cos(3\omega t + 180°) \Leftrightarrow V_{in3} = 2\angle{180°}$ V

$\frac{-j}{\omega_3 C} = \frac{-j}{3\times 10^3\times 10^{-3}/3} = -j1\qquad Z_3 = 1 - j1 = \sqrt{2}\angle{-45°}$
$I_{C3} = \frac{V_{in3}}{Z_3} = \frac{2\angle{180°}}{\sqrt{2}\angle{-45°}} = 1.41\angle{225°} = 1.41\angle{-135°} \Leftrightarrow i_{C3}(t) = 1.41\cos(3\omega t - 135°)\ \text{A}$  (7.129)
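The phasor arithmetic of (7.128) and (7.129) can be checked with the short MATLAB sketch below, using the circuit values stated in Example 7.14.

R = 1; C = 1e-3/3; w = 1000;
Z1 = R - 1j/(w*C);   Z3 = R - 1j/(3*w*C);  % impedances at w and 3w
Ic1 = 6/Z1;                                % V_in1 = 6 at 0 degrees
Ic3 = -2/Z3;                               % V_in3 = 2 at 180 degrees
disp([abs(Ic1) angle(Ic1)*180/pi; abs(Ic3) angle(Ic3)*180/pi])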
period may be a day, a week, a month or even a year. In these situations, we need to evaluate the inte-
gral(s) using numerical integration.
The procedure presented here, will work for both the waveforms that have an analytical solution and
those that do not. Even though we may already know the Fourier series from analytical methods, we
can use this procedure to check our results.
Consider the waveform of f ( x ) shown in Figure 7.44, where we have divided it into small pulses of width Δx. Obviously, the more pulses we use, the better the approximation.
If the time axis is in degrees, we can choose ∆x to be 2.5° and it is convenient to start at the zero
point of the waveform. Then, using a spreadsheet, such as Microsoft Excel, we can divide the period
0° to 360° in 2.5° intervals, and enter these values in Column A of the spreadsheet.
(Figure 7.44: waveform f(x) divided into small pulses of width Δx)
Since the arguments of the sine and the cosine are in radians, we multiply degrees by π (3.14159…) and divide by 180 to perform the conversion. We enter these in Column B and we denote them as x. In Column C we enter the corresponding values of y = f ( x ) as measured from the waveform.
In Columns D and E we enter the values of cos x and the product y cos x respectively. Similarly, we
enter the values of sin x and y sin x in Columns F and G respectively.
Next, we form the sums of y cos x and y sin x , we multiply these by ∆x , and we divide by π to
obtain the coefficients a 1 and b 1 . To compute the coefficients of the higher order harmonics, we
form the products y cos 2x , y sin 2x , y cos 3x , y sin 3x , and so on, and we enter these in subsequent
columns of the spreadsheet.
Figure 7.45 is a partial table showing the computation of the coefficients of the square waveform,
and Figure 7.46 is a partial table showing the computation of the coefficients of a clipped sine wave-
form. The complete tables extend to the seventh harmonic to the right and to 360° down.
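The same procedure can be carried out in MATLAB instead of a spreadsheet. The sketch below samples an assumed unit square wave at 2.5° steps and approximates each coefficient by a finite sum, exactly as the spreadsheet columns do; the waveform y is an illustrative assumption.

dx = 2.5*pi/180;                           % 2.5 degrees in radians
x  = 0:dx:2*pi-dx;                         % one period of samples (Columns A and B)
y  = (x < pi) - (x >= pi);                 % measured values y = f(x) (Column C)
a0 = sum(y)*dx/pi;                         % twice the average value
an = zeros(1,7); bn = zeros(1,7);
for n = 1:7
    an(n) = sum(y.*cos(n*x))*dx/pi;        % sums of the y*cos(nx) columns
    bn(n) = sum(y.*sin(n*x))*dx/pi;        % sums of the y*sin(nx) columns
end
disp([(1:7)' an' bn'])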
(Figure 7.45: Numerical computation of the coefficients of the square waveform (partial listing). The spreadsheet tabulates x in degrees and radians, y = f(x), and the products y cos nx and y sin nx in 2.5° steps; the chart is titled "Square waveform, f(t) = 4(sin ωt ⁄ π + sin 3ωt ⁄ 3π + sin 5ωt ⁄ 5π + …)".)

(Figure 7.46: Numerical computation of the coefficients of a clipped sine waveform (partial listing). For the sine wave clipped at π ⁄ 6, 5π ⁄ 6, etc., the numerical results are DC = 0.000, b1 = 0.609, b3 = 0.138, b5 = 0.028, b7 = −0.010, with all a_n and all even-numbered b_n equal to zero.)
7.13 Summary
• Any periodic waveform f ( t ) can be expressed as
$f(t) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}(a_n\cos n\omega t + b_n\sin n\omega t)$
where the first term a 0 ⁄ 2 is a constant, and represents the DC (average) component of f ( t ) .
The terms with the coefficients a 1 and b 1 together, represent the fundamental frequency com-
ponent ω . Likewise, the terms with the coefficients a 2 and b 2 together, represent the second
harmonic component 2ω , and so on. The coefficients a 0 , a n , and b n are found from the follow-
ing relations:
$\frac{1}{2}a_0 = \frac{1}{2\pi}\int_0^{2\pi}f(t)\,dt\qquad a_n = \frac{1}{\pi}\int_0^{2\pi}f(t)\cos nt\,dt\qquad b_n = \frac{1}{\pi}\int_0^{2\pi}f(t)\sin nt\,dt$
• If a waveform has odd symmetry, that is, if it is an odd function, the series will consist of sine
terms only. We recall that odd functions are those for which – f ( – t ) = f ( t ) .
• If a waveform has even symmetry, that is, if it is an even function, the series will consist of cosine
terms only, and a 0 may or may not be zero. We recall that even functions are those for which
f( –t ) = f ( t )
• A periodic waveform with period T has half-wave symmetry if −f ( t + T ⁄ 2 ) = f ( t )
that is, the shape of the negative half-cycle of the waveform is the same as that of the positive
half-cycle, but inverted. If a waveform has half-wave symmetry only odd (odd cosine and odd
sine) harmonics will be present. In other words, all even (even cosine and even sine) harmonics
will be zero.
• The trigonometric Fourier series for the square waveform with odd symmetry is

$f(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right) = \frac{4A}{\pi}\sum_{n=odd}\frac{1}{n}\sin n\omega t$
• The trigonometric Fourier series for the square waveform with even symmetry is

$f(t) = \frac{4A}{\pi}\left(\cos\omega t - \frac{1}{3}\cos 3\omega t + \frac{1}{5}\cos 5\omega t - \cdots\right) = \frac{4A}{\pi}\sum_{n=odd}(-1)^{\frac{n-1}{2}}\frac{1}{n}\cos n\omega t$
• The trigonometric Fourier series for the sawtooth waveform with odd symmetry is

$f(t) = \frac{2A}{\pi}\left(\sin\omega t - \frac{1}{2}\sin 2\omega t + \frac{1}{3}\sin 3\omega t - \frac{1}{4}\sin 4\omega t + \cdots\right) = \frac{2A}{\pi}\sum(-1)^{n-1}\frac{1}{n}\sin n\omega t$
• The trigonometric Fourier series for the triangular waveform with odd symmetry is

$f(t) = \frac{8A}{\pi^2}\left(\sin\omega t - \frac{1}{9}\sin 3\omega t + \frac{1}{25}\sin 5\omega t - \frac{1}{49}\sin 7\omega t + \cdots\right) = \frac{8A}{\pi^2}\sum_{n=odd}(-1)^{\frac{n-1}{2}}\frac{1}{n^2}\sin n\omega t$
• The trigonometric Fourier series for the half-wave rectifier with no symmetry is

$f(t) = \frac{A}{\pi} + \frac{A}{2}\sin\omega t - \frac{2A}{\pi}\left(\frac{\cos 2\omega t}{3} + \frac{\cos 4\omega t}{15} + \frac{\cos 6\omega t}{35} + \cdots\right)$
• The trigonometric form of the Fourier series for the full-wave rectifier with even symmetry is

$f(t) = \frac{2A}{\pi} - \frac{4A}{\pi}\sum_{n=2,4,6,\ldots}^{\infty}\frac{1}{n^2-1}\cos n\omega t$
• The coefficients of the exponential Fourier series are

$C_{-n} = \frac{1}{2}\left(a_n - \frac{b_n}{j}\right) = \frac{1}{2}(a_n + jb_n)\qquad C_n = \frac{1}{2}\left(a_n + \frac{b_n}{j}\right) = \frac{1}{2}(a_n - jb_n)\qquad C_0 = \frac{1}{2}a_0$
• The C i coefficients, except C 0 , are complex, and appear as complex conjugate pairs, that is,
C – n = C n∗
• In general, for ω ≠ 1,

$C_n = \frac{1}{T}\int_0^{T}f(t)e^{-jn\omega t}\,dt = \frac{1}{2\pi}\int_0^{2\pi}f(t)e^{-jn\omega t}\,d(\omega t)$
• We can derive the trigonometric Fourier series from the exponential series from the relations
an = Cn + C–n
and
bn = j ( Cn – C–n )
• C –n = C n∗ always
• A line spectrum is a plot that shows the amplitudes of the harmonics on a frequency scale.
• We can compute the average power of a Fourier series from the relation
P ave = P dc + P 1ave + P 2ave + …
= V dc I dc + V 1RMS I 1RMS cos θ 1 + V 2RMS I 2RMS cos θ 2 + …
• We can evaluate the Fourier coefficients of a function based on observed values instead of an
analytic expression using numerical evaluations with the aid of a spreadsheet.
7.14 Exercises
1. Compute the first 5 components of the trigonometric Fourier series for the waveform of Figure
7.47. Assume ω = 1 .
Figure 7.47. Waveform for Exercise 1
2. Compute the first 5 components of the trigonometric Fourier series for the waveform of Figure
7.48. Assume ω = 1 .
Figure 7.48. Waveform for Exercise 2
3. Compute the first 5 components of the exponential Fourier series for the waveform of Figure
7.49. Assume ω = 1 .
Figure 7.49. Waveform for Exercise 3
4. Compute the first 5 components of the exponential Fourier series for the waveform of Figure
7.50. Assume ω = 1 .
5. Compute the first 5 components of the exponential Fourier series for the waveform of Figure
7.51. Assume ω = 1 .
Figure 7.51. Waveform for Exercise 5
6. Compute the first 5 components of the exponential Fourier series for the waveform of Figure
7.52. Assume ω = 1 .
Figure 7.52. Waveform for Exercise 6

7.15 Solutions to Exercises

1.
This is an even function; therefore, the series consists of cosine terms only. There is no half-wave
symmetry and the average ( DC component) is not zero. We will integrate from 0 to π and mul-
tiply by 2 . Then,
$a_n = \frac{2}{\pi}\int_0^{\pi}\frac{A}{\pi}t\cos nt\,dt = \frac{2A}{\pi^2}\int_0^{\pi}t\cos nt\,dt\qquad(1)$
We cannot evaluate the average ( 1 ⁄ 2 ) ⁄ a 0 from (2); we must use (1). Then, for n = 0 ,
π 2 π 2
1 2A A t A π
--- a 0 = ---------
2 2π
2 ∫0 t dt = ----2- ⋅ ----
π 2
= ----2- ⋅ -----
π 2
0
or
( 1 ⁄ 2 ) ⁄ a0 = A ⁄ 2
– 4A – 4A-
for n = 1, a 1 = – 4A 4A -, for n = 7, a = ----------
-------, for n = 3, a 3 = ----------2-, for n = 5, a 5 = – ---------- 3
2 2 2 2 2
π 3 π
2
5 π 7 π
and so on.
Therefore,
$f(t) = \frac{1}{2}a_0 - \frac{4A}{\pi^2}\left(\cos t + \frac{1}{9}\cos 3t + \frac{1}{25}\cos 5t + \frac{1}{49}\cos 7t + \cdots\right) = \frac{A}{2} - \frac{4A}{\pi^2}\sum_{n=odd}\frac{1}{n^2}\cos nt$
2.
f( t) 2A
------- t
A π
ωt
0 π⁄2 π 3π ⁄ 2 π
This is an even function; therefore, the series consists of cosine terms only. There is no half-wave
symmetry and the average ( DC component) is not zero.
1 Area 2 × [ ( A ⁄ 2 ) ⋅ ( π ⁄ 2 ) ] + Aπ 3A ⋅ ( π ⁄ 2 ) 3A
Average = --- a 0 = ------------------ = --------------------------------------------------------------- = -------------------------- = -------
2 Period 2π 2π 4
π⁄2 π
2 2A
------- t cos nt dt + --2-
a n = ---
π ∫0 π π ∫π ⁄ 2 A cos nt dt (1)
and with
1 x 1
∫ x cos ax dx = ----2- cos ax + --- sin a x = ----2- ( cos ax + ax sin ax )
a a a
(1) simplifies to
π⁄2
4A 1 2A π
a n = ------2- ----2- ( cos nt + nt sin nt ) + ------- sin nt π⁄2
π n nπ
0
------ + ------- sin ------ – ----------- – ------- sin ------ = ----------2- ⎛⎝ cos ------ – 1⎞⎠
4A 2A nπ 4A 2A nπ 4A
a n = ----------2- cos nπ nπ
2 2 nπ 2 2 2 nπ 2 2 2
n π n π n π
4A 4A
4A-, for n = 2, a = -------- 2A-
for n = 1, a 1 = ------2- ( 0 – 1 ) = – ------ 2 - ( – 1 – 1 ) = – ------
2
π π
2
4π π
2
4A 4A – 4A
for n = 3, a 3 = --------2- ( 0 – 1 ) = – ---------, for n = 4, a 4 = ----------2- ( 1 – 1 ) = 0
2 2
9π 9π 7 π
We observe that the fourth harmonic and all its multiples are zero. Therefore,
3.
f(t)
A
ωt
0 π 2π
This is neither an even nor an odd function and has no half-wave symmetry; therefore, the series
consists of both cosine and sine terms. The average ( DC component) is not zero. Then,
$C_n = \frac{1}{2\pi}\int_0^{2\pi}f(t)e^{-jn\omega t}\,d(\omega t)$

and with ω = 1,

$C_n = \frac{1}{2\pi}\int_0^{2\pi}f(t)e^{-jnt}\,dt = \frac{1}{2\pi}\left[\int_0^{\pi}Ae^{-jnt}\,dt + \int_{\pi}^{2\pi}0\cdot e^{-jnt}\,dt\right] = \frac{A}{2\pi}\int_0^{\pi}e^{-jnt}\,dt$
The DC value is

$C_0 = \frac{A}{2\pi}\int_0^{\pi}e^{0}\,dt = \frac{A}{2\pi}t\Big|_0^{\pi} = \frac{A}{2}$
For n ≠ 0,

$C_n = \frac{A}{2\pi}\int_0^{\pi}e^{-jnt}\,dt = \frac{A}{-j2n\pi}e^{-jnt}\Big|_0^{\pi} = \frac{A}{j2n\pi}\left(1 - e^{-jn\pi}\right)$
Recalling that
– jn π
e = cos nπ – j sin nπ
– jn π – jn π
for n = even , e = 1 and for n = odd , e = – 1 . Then,
A
C n = even = ------------ ( 1 – 1 ) = 0
j2nπ
and
A A-
C n = odd = ------------ [ 1 – ( – 1 ) ] = -------
j2nπ jnπ
By substitution into
– j2 ω t –j ω t jωt j2 ω t
f ( t ) = … + C –2 e + C –1 e + C0 + C1 e + C2 e +…
we find that
$f(t) = \frac{A}{2} + \frac{A}{j\pi}\left(\cdots - \frac{1}{3}e^{-j3\omega t} - e^{-j\omega t} + e^{j\omega t} + \frac{1}{3}e^{j3\omega t} + \cdots\right)$
The minus (−) sign of the first two terms within the parentheses results from the fact that $C_{-n} = C_n^*$. For instance, since $C_1 = A/(j\pi)$, it follows that $C_{-1} = C_1^* = -A/(j\pi)$. We observe that f ( t ) is complex, as expected, since there is no symmetry.
4.
f(t)
A⁄2
0 ωt
–A ⁄ 2
This is the same waveform as in Exercise 3 where the DC component has been removed. Then,
$f(t) = \frac{A}{j\pi}\left(\cdots - \frac{1}{3}e^{-j3\omega t} - e^{-j\omega t} + e^{j\omega t} + \frac{1}{3}e^{j3\omega t} + \cdots\right)$
It is also the same waveform as in Example 7.10 except that the amplitude is halved. This wave-
form is an odd function and thus the expression for f ( t ) is imaginary.
5.
f( t)
A
–π 0 π
ωt
–π ⁄ 2 π⁄2
This is the same waveform as in Exercise 3 where the vertical axis has been shifted to make the
waveform an even function. Therefore, for this waveform C n is real. Then,
$C_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)e^{-jnt}\,dt = \frac{A}{2\pi}\int_{-\pi/2}^{\pi/2}e^{-jnt}\,dt$
The DC value is

$C_0 = \frac{A}{2\pi}\,t\Big|_{-\pi/2}^{\pi/2} = \frac{A}{2}$

For n ≠ 0,

$C_n = \frac{A}{-j2n\pi}e^{-jnt}\Big|_{-\pi/2}^{\pi/2} = \frac{A}{j2n\pi}\left(e^{jn\pi/2} - e^{-jn\pi/2}\right) = \frac{A}{n\pi}\cdot\frac{e^{jn\pi/2} - e^{-jn\pi/2}}{j2} = \frac{A}{n\pi}\sin\frac{n\pi}{2}$
For n = odd , C n alternates in plus (+) and minus (−) signs, that is,
A
C n = ------ if n = 1, 5, 9, …
nπ
A
C n = – ------ if n = 3, 7, 11, …
nπ
Thus,

$f(t) = \frac{A}{2} + \sum_{n=odd}\left(\pm\frac{A}{n\pi}\right)e^{jn\omega t}$

where the plus (+) sign is used with $n = 1, 5, 9, \ldots$ and the minus (−) sign is used with $n = 3, 7, 11, \ldots$. We can express f ( t ) in a more compact form as

$f(t) = \frac{A}{2} + \sum_{n=odd}(-1)^{(n-1)/2}\frac{A}{n\pi}e^{jn\omega t}$
6.
f(t) 2A
------- t – 1
A π
–π ⁄ 2 π⁄2
ωt
–π 0 π
−A
ax
e
∫
ax
xe dx = ------2- ( ax – 1 )
a
Then,

$C_n = \frac{1}{2\pi}\left[\int_{-\pi}^{0}\left(-\frac{2A}{\pi}t - 1\right)e^{-jnt}\,dt + \int_0^{\pi}\left(\frac{2A}{\pi}t - 1\right)e^{-jnt}\,dt\right] = \frac{4A}{2n^2\pi^2}\left(-1 + n\pi\sin n\pi + \cos n\pi - \frac{n\pi}{2}\sin n\pi\right)$

For $n = even$, $C_n = 0$, and for $n = odd$, $\cos n\pi = -1$ and

$C_n = \frac{-4A}{n^2\pi^2}$
$f(t) = -\frac{4A}{\pi^2}\left(\cdots + \frac{1}{9}e^{-j3\omega t} + e^{-j\omega t} + e^{j\omega t} + \frac{1}{9}e^{j3\omega t} + \cdots\right)$
– j3 ω t –j ω t
The coefficients of the terms e and e are positive because all coefficients of C n are real.
This is to be expected since f ( t ) is an even function. It also has half-wave symmetry and thus
C n = 0 for n = even as we’ve found.
Chapter 8

The Fourier Transform

This chapter introduces the Fourier Transform, also known as the Fourier Integral. The definition, theorems, and properties are presented and proved. The Fourier transforms of the most common functions are derived, the system function is defined, and several examples are given to illustrate its application in circuit analysis.

The Fourier transform of a function f ( t ) is defined as

$F(\omega) = \int_{-\infty}^{\infty}f(t)e^{-j\omega t}\,dt$  (8.1)

and assuming that it exists for every value of the radian frequency ω, we call the function F ( ω ) the Fourier transform or the Fourier integral.
Fourier transform or the Fourier integral.
The Fourier transform is, in general, a complex function. We can express it as the sum of its real and imaginary components, or in exponential form, that is, as

$F(\omega) = \text{Re}\{F(\omega)\} + j\,\text{Im}\{F(\omega)\} = |F(\omega)|e^{j\varphi(\omega)}$  (8.2)
The Inverse Fourier transform is defined as
$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{j\omega t}\,d\omega$  (8.3)
We will often use the notation $\mathcal{F}\{f(t)\} = F(\omega)$ and $\mathcal{F}^{-1}\{F(\omega)\} = f(t)$ to express the Fourier transform and its inverse. In general, f ( t ) is complex, and we can express it as

$f(t) = f_{Re}(t) + jf_{Im}(t)$  (8.6)

The subscripts Re and Im will be used often to denote the real and imaginary parts respectively. These notations have the same meaning as Re{f(t)} and Im{f(t)}.
By substitution of (8.6) into the Fourier integral of (8.1), we get
∞ ∞
– jωt – jωt
F(ω) = ∫– ∞ f Re ( t )e dt + j ∫–∞ fIm ( t )e dt (8.7)
From (8.2), we see that the real and imaginary parts of F ( ω ) are

$F_{Re}(\omega) = \int_{-\infty}^{\infty}[f_{Re}(t)\cos\omega t + f_{Im}(t)\sin\omega t]\,dt$  (8.9)

and

$F_{Im}(\omega) = -\int_{-\infty}^{\infty}[f_{Re}(t)\sin\omega t - f_{Im}(t)\cos\omega t]\,dt$  (8.10)
We can derive similar forms for the Inverse Fourier transform as follows:
Substitution of (8.2) into (8.3) yields
∞
1
∫–∞ [ FRe ( ω ) + jFIm ( ω ) ]e
jωt
f ( t ) = ------ dω (8.11)
2π
and by Euler’s identity,
∞
1
f ( t ) = ------
2π ∫–∞ [ FRe ( ω ) cos ωt –FIm ( ω ) sin ωt ] dω (8.12)
∞
1
+ j ------
2π ∫–∞ [ FRe ( ω ) sin ωt + FIm ( ω ) cos ω t ] dω
Therefore, the real and imaginary parts of f ( t ) are
$f_{Re}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}[F_{Re}(\omega)\cos\omega t - F_{Im}(\omega)\sin\omega t]\,d\omega$  (8.13)

and

$f_{Im}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}[F_{Re}(\omega)\sin\omega t + F_{Im}(\omega)\cos\omega t]\,d\omega$  (8.14)
Now, we will use the above relations to determine the time to frequency domain correspondence for
real, imaginary, even, and odd functions in both the time and the frequency domains. We will show
these in tabular form, as indicated in Table 8.1.
TABLE 8.1 Time Domain and Frequency Domain Correspondence (Refer to Tables 8.2 - 8.7)
f(t) F(ω)
1. Real Time Functions

If f ( t ) is real, (8.9) and (8.10) reduce to

$F_{Re}(\omega) = \int_{-\infty}^{\infty}f_{Re}(t)\cos\omega t\,dt$  (8.15)

and

$F_{Im}(\omega) = -\int_{-\infty}^{\infty}f_{Re}(t)\sin\omega t\,dt$  (8.16)
Conclusion: If f ( t ) is real, F ( ω ) is, in general, complex. We indicate this result with a check mark
in Table 8.2.
We know that any function f ( t ) , can be expressed as the sum of an even and an odd function.
Therefore, we will also consider the cases when f ( t ) is real and even, and when f ( t ) is real and
odd*.
* In our subsequent discussion, we will make use of the fact that the cosine is an even function, while the sine is an
odd function. Also, the product of two odd functions or the product of two even functions will result in an even
function, whereas the product of an odd function and an even function will result in an odd function.
TABLE 8.2 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.3 - 8.7)
f( t) F(ω)
a. f Re ( t ) is even
$F_{Re}(\omega) = 2\int_0^{\infty}f_{Re}(t)\cos\omega t\,dt\qquad f_{Re}(t) = even$  (8.17)

and

$F_{Im}(\omega) = -\int_{-\infty}^{\infty}f_{Re}(t)\sin\omega t\,dt = 0\qquad f_{Re}(t) = even$  (8.18)

To determine whether F ( ω ) is even or odd when $f_{Re}(t) = even$, we must perform a test for evenness or oddness with respect to ω. Thus, substitution of −ω for ω in (8.17) yields

$F_{Re}(-\omega) = 2\int_0^{\infty}f_{Re}(t)\cos(-\omega)t\,dt = 2\int_0^{\infty}f_{Re}(t)\cos\omega t\,dt = F_{Re}(\omega)$  (8.19)
Conclusion: If f ( t ) is real and even, F ( ω ) is also real and even. We indicate this result in Table 8.3.
b. f Re ( t ) is odd
TABLE 8.3 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.4 - 8.7)
f(t) F(ω)
$F_{Re}(\omega) = \int_{-\infty}^{\infty}f_{Re}(t)\cos\omega t\,dt = 0\qquad f_{Re}(t) = odd$  (8.20)

and

$F_{Im}(\omega) = -2\int_0^{\infty}f_{Re}(t)\sin\omega t\,dt\qquad f_{Re}(t) = odd$  (8.21)

To determine whether F ( ω ) is even or odd when $f_{Re}(t) = odd$, we perform a test for evenness or oddness with respect to ω. Thus, substitution of −ω for ω in (8.21) yields

$F_{Im}(-\omega) = -2\int_0^{\infty}f_{Re}(t)\sin(-\omega)t\,dt = 2\int_0^{\infty}f_{Re}(t)\sin\omega t\,dt = -F_{Im}(\omega)$  (8.22)
Conclusion: If f ( t ) is real and odd, F ( ω ) is imaginary and odd. We indicate this result in Table 8.4.
2. Imaginary Time Functions
If f ( t ) is imaginary, (8.9) and (8.10) reduce to
$F_{Re}(\omega) = \int_{-\infty}^{\infty}f_{Im}(t)\sin\omega t\,dt$  (8.23)

and

$F_{Im}(\omega) = \int_{-\infty}^{\infty}f_{Im}(t)\cos\omega t\,dt$  (8.24)
TABLE 8.4 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.5 - 8.7)
f( t) F(ω)
Conclusion: If f ( t ) is imaginary, F ( ω ) is, in general, complex. We indicate this result in Table 8.5.
TABLE 8.5 Time Domain and Frequency Domain Correspondence (Refer also to Tables 8.6 - 8.7)
f( t) F( ω )
Next, we will consider the cases where f ( t ) is imaginary and even, and f ( t ) is imaginary and odd.
a. f Im ( t ) is even
$F_{Re}(\omega) = \int_{-\infty}^{\infty}f_{Im}(t)\sin\omega t\,dt = 0\qquad f_{Im}(t) = even$  (8.25)

and

$F_{Im}(\omega) = 2\int_0^{\infty}f_{Im}(t)\cos\omega t\,dt\qquad f_{Im}(t) = even$  (8.26)

To determine whether F ( ω ) is even or odd when $f_{Im}(t) = even$, we perform a test for evenness or oddness with respect to ω. Thus, substitution of −ω for ω in (8.26) yields

$F_{Im}(-\omega) = 2\int_0^{\infty}f_{Im}(t)\cos(-\omega)t\,dt = 2\int_0^{\infty}f_{Im}(t)\cos\omega t\,dt = F_{Im}(\omega)$  (8.27)
Conclusion: If f ( t ) is imaginary and even, F ( ω ) is also imaginary and even. We indicate this result in
Table 8.6.
TABLE 8.6 Time Domain and Frequency Domain Correspondence (Refer also to Table 8.7)
f(t) F(ω)
b. f Im ( t ) is odd
$F_{Re}(\omega) = \int_{-\infty}^{\infty}f_{Im}(t)\sin\omega t\,dt = 2\int_0^{\infty}f_{Im}(t)\sin\omega t\,dt\qquad f_{Im}(t) = odd$  (8.28)

and

$F_{Im}(\omega) = \int_{-\infty}^{\infty}f_{Im}(t)\cos\omega t\,dt = 0\qquad f_{Im}(t) = odd$  (8.29)

To determine whether F ( ω ) is even or odd when $f_{Im}(t) = odd$, we perform a test for evenness or oddness with respect to ω. Thus, substitution of −ω for ω in (8.28) yields

$F_{Re}(-\omega) = 2\int_0^{\infty}f_{Im}(t)\sin(-\omega)t\,dt = -2\int_0^{\infty}f_{Im}(t)\sin\omega t\,dt = -F_{Re}(\omega)$  (8.30)
Conclusion: If f ( t ) is imaginary and odd, F ( ω ) is real and odd. We indicate this result in Table
8.7.
TABLE 8.7 Time Domain and Frequency Domain Correspondence (Completed Table)
f(t) F(ω)
Table 8.7 is now complete and shows that if f ( t ) is real (even or odd), the real part of F ( ω ) is even,
and the imaginary part is odd. Then,
F Re ( – ω ) = F Re ( ω ) f ( t ) = Real (8.31)
and
F Im ( – ω ) = – F Im ( ω ) f ( t ) = Real (8.32)
Since,
F ( ω ) = F Re ( ω ) + jF Im ( ω ) (8.33)
it follows that
F ( – ω ) = F Re ( – ω ) + jF Im ( – ω ) = F Re ( ω ) – jF Im ( ω )
or
F ( – ω ) = F∗ ( ω ) f ( t ) = Real (8.34)
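Relation (8.34) is easy to verify numerically. The MATLAB sketch below (the Gaussian test function and the test frequency are arbitrary assumptions) approximates F(ω) and F(−ω) for a real f(t) by numerical integration of (8.1) and shows that F(−ω) equals the conjugate of F(ω).

t = linspace(-10, 10, 4001);
f = exp(-t.^2);                            % a real time function (assumed example)
w = 2;                                     % test frequency in rad/s
Fpos = trapz(t, f.*exp(-1j*w*t));          % F(w) from (8.1)
Fneg = trapz(t, f.*exp( 1j*w*t));          % F(-w)
disp([Fneg  conj(Fpos)])                   % the two values agree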
We observe that the integrand of (8.35) is zero since it is an odd function with respect to ω because
both products inside the brackets are odd functions*.
Therefore, f Im ( t ) = 0 , that is, f ( t ) is real.
We can state then, that a necessary and sufficient condition for f ( t ) to be real, is that F ( – ω ) = F∗ ( ω ) .
Also, if it is known that f ( t ) is real, the Inverse Fourier transform of (8.3) can be simplified as fol-
lows:
From (8.13),
$f_{Re}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}[F_{Re}(\omega)\cos\omega t - F_{Im}(\omega)\sin\omega t]\,d\omega$  (8.36)

and since the integrand is an even function with respect to ω, we rewrite (8.36) as

$f_{Re}(t) = 2\cdot\frac{1}{2\pi}\int_0^{\infty}[F_{Re}(\omega)\cos\omega t - F_{Im}(\omega)\sin\omega t]\,d\omega = \frac{1}{\pi}\int_0^{\infty}A(\omega)\cos[\omega t + \varphi(\omega)]\,d\omega = \frac{1}{\pi}\,\text{Re}\int_0^{\infty}F(\omega)e^{j\omega t}\,d\omega$  (8.37)
a1 f1 ( t ) + a2 f2 ( t ) + … + an fn ( t ) ⇔ a1 F1 ( ω ) + a2 F2 ( ω ) + … + an Fn ( ω ) (8.38)
Proof:
The proof is easily obtained from (8.1), that is, the definition of the Fourier transform. The proce-
dure is the same as for the linearity property of the Laplace transform in Chapter 2.
2. Symmetry
If F ( ω ) is the Fourier transform of f ( t ) , the symmetry property of the Fourier transform states that
F ( t ) ⇔ 2πf ( – ω ) (8.39)
that is, if in F ( ω ) , we replace ω with t , we get the Fourier transform pair of (8.39).
Proof:
Since
$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{j\omega t}\,d\omega$

then,

$2\pi f(-t) = \int_{-\infty}^{\infty}F(\omega)e^{-j\omega t}\,d\omega$

and the Fourier transform pair of (8.39) follows by interchanging t and ω.
3. Time Scaling

If a is a real constant and F ( ω ) is the Fourier transform of f ( t ) , then,

$f(at) \Leftrightarrow \frac{1}{|a|}F\!\left(\frac{\omega}{a}\right)$  (8.40)

that is, the time scaling property of the Fourier transform states that if we replace the variable t in the time domain by at, we must replace the variable ω in the frequency domain by ω ⁄ a, and divide F ( ω ⁄ a ) by the absolute value of a.
Proof:
We must consider both cases a > 0 and a < 0 .
For a > 0 ,
∞
– jωt
F { f ( at ) } = ∫–∞ f ( at )e dt (8.41)
τ ω
∞ – jω ⎛ --- ⎞ ∞ – j ⎛ ----⎞ τ
⎝a⎠ τ ⎝ a⎠ 1 ω
d ⎛ --- ⎞ = --- dτ = --- F ⎛ ----⎞
1
F {f(τ)} = ∫–∞ f ( τ )e ⎝ a⎠ a ∫–∞ f ( τ )e a ⎝ a⎠
For a < 0 ,
∞
– jωt
F { f ( –at ) } = ∫–∞ f ( –at )e dt
and making the above substitutions, we find that the multiplying factor is – 1 ⁄ a . Therefore, for
1 ⁄ a we obtain (8.40).
4. Time Shifting
If F ( ω ) is the Fourier transform of f ( t ) , then,
– jωt 0
f ( t – t 0 ) ⇔ F ( ω )e (8.42)
that is, the time shifting property of the Fourier transform states that if we shift the time function
f ( t ) by a constant t 0 , the Fourier transform magnitude does not change, but the term ωt 0 is
added to its phase angle.
Proof:
∞
– jωt
F { f ( t – t0 ) } = ∫–∞ f ( t – t )e 0 dt
or
– jωt 0
F { f ( t – t0 ) } = e F(ω)
5. Frequency Shifting
If F ( ω ) is the Fourier transform of f ( t ) , then,
jω 0 t
e f ( t ) ⇔ F ( ω – ω0 ) (8.43)
jω 0 t
that is, multiplication of the time function f ( t ) by e , where ω 0 is a constant, results in shifting
the Fourier transform by ω 0 .
Proof:
∞
F { e jω t f ( t ) }0
= ∫– ∞ e
jω 0 t
f ( t )e
– jωt
dt
or
jω 0 t ∞ –j ( ω – ω0 )
F {e f(t)} = ∫–∞ f ( t )e dt = F ( ω – ω 0 )
ω–ω
f ( at ) ⇔ ----- F ⎛ ----------------0 ⎞
jω 0 t 1
e
⎝ ⎠
(8.44)
a a
Property 5, that is, (8.43) is also used to derive the Fourier transform of the modulated signals
f ( t ) cos ωt and f ( t ) sin ωt . Thus, from
jω 0 t
e f ( t ) ⇔ F ( ω – ω0 )
and
jω 0 t –j ω0 t
e +e
cos ω 0 t = -------------------------------
2
we get
F ( ω – ω0 ) + F ( ω + ω0 )
f ( t ) cos ω 0 t ⇔ ----------------------------------------------------------
- (8.45)
2
Similarly,
F ( ω – ω0 ) –F ( ω + ω0 )
f ( t ) sin ω 0 t ⇔ -------------------------------------------------------
- (8.46)
j2
6. Time Differentiation
If F ( ω ) is the Fourier transform of f ( t ) , then,
n
d
-------- f ( t ) ⇔ ( jω ) n F ( ω ) (8.47)
n
dt
n
d n
that is, the Fourier transform of -------n- f ( t ) , if it exists, is ( jω ) F ( ω ) .
dt
Proof:
Differentiating the Inverse Fourier transform, we get
∞ ∞
n – j ωt – j ωt
∫– ∞ ∫–∞ f ( t )e
n
= f ( t ) ( –j t ) e dt = ( – j t ) dt
Proof:
We postpone the proof of this property until we derive the Fourier transform of the unit step
function u 0 ( t ) on the next section. In the special case where in (8.49), F ( 0 ) = 0 , then,
t
F(ω)
∫–∞ f ( τ ) dτ ⇔ ------------
jω
(8.50)
and this is easily proved by integrating both sides of the Inverse Fourier transform.
9. Conjugate Time and Frequency Functions
If F ( ω ) is the Fourier transform of the complex function f ( t ) , then,
f∗ ( t ) ⇔ F ∗ ( – ω ) (8.51)
Proof:
∞ ∞
– jωt – jωt
F(ω) = ∫– ∞ f ( t )e dt = ∫–∞ [ fRe ( t ) + jfIm ( t ) ]e dt
∞ ∞
– jωt – jωt
= ∫– ∞ f Re ( t )e dt + j ∫–∞ fIm ( t )e dt
Then,
∞ ∞
F∗ ( ω ) = ∫– ∞ ∫–∞ fIm ( t )e
jωt jωt
f Re ( t )e dt – j dt
f 1 ( t )∗ f 2 ( t ) ⇔ F 1 ( ω )F 2 ( ω ) (8.52)
that is, convolution in the time domain, corresponds to multiplication in the frequency domain.
Proof:
∞ ∞
– jωt
F { f1 ( t )∗ f2 ( t ) } = ∫–∞ ∫–∞ f ( τ )f2 ( t – τ ) dτ
1 e dt
(8.53)
∞ ∞
– jωt
= ∫–∞ f ( τ ) ∫–∞ f ( t – τ )e
1 2 dt dτ
The first integral above is F 1 ( ω ) while the second is F 2 ( ω ) , and thus (8.52) follows.
Alternate Proof:
– jωt 0
We can apply the time shifting property f ( t – t 0 ) ⇔ F ( ω )e into the bracketed integral of
– jωt 0
(8.53); then, replacing it with F 2 ( ω )e , we get
∞ ∞ ∞ – jωt 0
– jωt
F { f 1 ( t )∗ f 2 ( t ) } = ∫– ∞ f1 ( τ ) ∫– ∞ f 2 ( t – τ )e dt dτ = ∫–∞ f ( τ ) dτF
1 2 ( ω )e
∞
– jωt
= ∫–∞ f ( τ )e
1 dτF 2 ( ω ) = F 1 ( ω )F 2 ( ω )
1
f 1 ( t )f 2 ( t ) ⇔ ------ F 1 ( ω )∗ F 2 ( ω ) (8.54)
2π
that is, multiplication in the time domain, corresponds to convolution in the frequency domain
divided by the constant 1 ⁄ 2π .
Proof:
∞ ∞ ∞
– jωt 1- – jωt
∫– ∞ ∫ ∫–∞ F ( χ )e
jχt
F { f1 ( t )f2 ( t ) } = [ f 1 ( t ) f 2 ( t ) ]e dt = -----
2π 1 dχ f 2 ( t )e dt
–∞
∞ ∞ ∞
1 – j ( ω – χ )t 1
= ------
2π ∫– ∞ F1 ( χ ) ∫– ∞ f 2 ( t )e dt dχ = ------
2π ∫–∞ F ( χ )F ( ω – χ ) dχ
1 2
that is, the area under a time function f ( t ) is equal to the value of its Fourier transform evaluated
at ω = 0 .
Proof:
– jωt
Using the definition of F ( ω ) and that e ω=0
= 1 , we see that (8.55) follows.
that is, the value of the time function f ( t ) , evaluated at t = 0 , is equal to the area under its Fou-
rier transform F ( ω ) times 1 ⁄ 2π .
Proof:
jωt
In the Inverse Fourier transform of (8.3), we let e t=0
= 1 , and (8.56) follows.
that is, if the time function f ( t ) represents the voltage across, or the current through an 1 Ω resis-
tor, the instantaneous power absorbed by this resistor is either v 2 ⁄ R , v 2 ⁄ 1 , v 2 , or i 2 R , i 2 . Then,
the integral of the magnitude squared, represents the energy (in watt-seconds or joules) dissipated
by the resistor. For this reason, the integral is called the energy of the signal. Relation (8.57) then,
states that if we do not know the energy of a time function f ( t ) , but we know the Fourier trans-
form of this function, we can compute the energy without the need to evaluate the Inverse Fou-
rier transform.
Proof:
From the frequency convolution property,
1
f 1 ( t )f 2 ( t ) ⇔ ------ F 1 ( ω )∗ F 2 ( ω )
2π
or
∞ ∞
– jωt 1
F { f1 ( t )f2 ( t ) } = ∫– ∞ [ f 1 ( t )f 2 ( t ) ]e dt = ------
2π ∫–∞ F ( χ )F ( ω – χ ) dχ
1 2 (8.58)
Since (8.58) must hold for all values of ω , it must also be true for ω = 0 , and under this condi-
tion, it reduces to
∞ ∞
1
∫ –∞
[ f 1 ( t )f 2 ( t ) ] dt = ------
2π ∫–∞ F ( χ )F ( –χ ) dχ
1 2 (8.59)
For the special case where f 2 ( t ) = f 1∗ ( t ) , and the conjugate functions property f∗ ( t ) ⇔ F∗ ( – ω ) ,
by substitution into (8.59), we get:
∞ ∞ ∞
1 1
∫ –∞
[ f ( t )f∗ ( t ) ] dt = ------
2π ∫ –∞
F ( ω )F∗ [ – ( – ω ) ] dω = ------
2π ∫–∞ F ( ω )F∗ ( ω ) dω
Since f ( t )f∗ ( t ) = f ( t ) and F ( ω )F∗ ( ω ) = F ( ω ) , Parseval’s theorem is proved.
2 2
The Fourier transform properties and theorems are summarized in Table 8.8.
∫–∞ f ( t )δ ( t – t ) dt
0 = f ( t0 )
∫–∞ f ( t )δ ( t ) dt = f( 0)
0 t 0 ω
Property f( t) F(ω)
Linearity a1 f1 ( t ) + a2 f2 ( t ) + … a1 F1 ( ω ) + a2 F2 ( ω ) + …
Symmetry F(t) 2πf ( – ω )
Time Scaling f ( at ) ω
----- F ⎛ ----⎞
1
a ⎝ a⎠
Time Shifting f ( t – t0 ) – jωt 0
F ( ω )e
Frequency Shifting jω 0 t F ( ω – ω0 )
e f(t)
Time Differentiation n n
d ( jω ) F ( ω )
--------- f ( t )
n
dt
Frequency Differentiation n n
( – jt ) f ( t ) d
----------- F ( ω )
n
dω
Time Integration t F ( ω ) + πF ( 0 )δ ( ω )
∫– ∞
------------
f ( τ ) dτ jω
Conjugate Functions f∗ ( t ) F∗ ( – ω )
Time Convolution f 1 ( t )∗ f 2 ( t ) F1 ( ω ) ⋅ F2 ( ω )
Frequency Convolution f1 ( t ) ⋅ f2 ( t ) 1
------ F 1 ( ω )∗ F 2 ( ω )
2π
Area under f(t) ∞
F(0) = ∫–∞ f ( t ) dt
Area under F(w) ∞
∫
1
f ( 0 ) = ------ F ( ω ) dω
2π – ∞
Parseval’s Theorem ∞ ∞
∫ ∫
2 1 2
f ( t ) dt = ------ F ( ω ) dω
–∞ 2π –∞
– jωt 0
δ ( t – t0 ) ⇔ e (8.61)
2.
1 ⇔ 2πδ ( ω ) (8.62)
Proof:
–1 ∞ ∞
1
∫– ∞ ∫–∞ δ ( ω )e
jωt jωt jωt
F { 2πδ ( ω ) } = ------ 2πδ ( ω )e dω = dω = e ω=0
= 1
2π
f(t) F(ω)
1
2πδ ( ω )
0 t 0 ω
Also, by direct application of the Inverse Fourier transform, or the frequency shifting property
and (8.62), we derive the transform
jω 0 t
e ⇔ 2πδ ( ω – ω 0 ) (8.63)
The transform pairs of (8.62) and (8.63) can also be derived from (8.60) and (8.61) by using the
symmetry property F ( t ) ⇔ 2πf ( – ω )
3.
1 jω 0 t –j ω0 t
cos ω 0 t = --- ( e +e ) ⇔ πδ ( ω – ω 0 ) + πδ ( ω + ω 0 ) (8.64)
2
Proof:
This transform pair follows directly from (8.63). The f ( t ) ↔ F ( ω ) correspondence is also shown
in Figure 8.3.
cosωω
cos 0 t0
t F Re ( ω )
π π
t t
−ω0 0 ω0 ω
We know that cos ω 0 t is real and even function of time, and we found out that its Fourier trans-
form is a real and even function of frequency. This is consistent with the result in Table 8.7.
4.
1 jω0 t –j ω0 t
sin ω 0 t = ----- ( e –e ) ⇔ jπδ ( ω – ω 0 ) – jπ δ ( ω + ω 0 ) (8.65)
j2
Proof:
This transform pair also follows directly from (8.63). The f ( t ) ↔ F ( ω ) correspondence is also
shown in Figure 8.4.
sin ω 0 t F Im ( ω )
π
tt −ω0
0 ω0 ω
−π
We know that sin ω 0 t is real and odd function of time, and we found out that its Fourier trans-
form is an imaginary and odd function of frequency. This is consistent with the result in Table
8.7.
5.
2
sgn ( t ) = u 0 ( t ) – u 0 ( – t ) ⇔ ------ (8.66)
jω
−1
f(t)
1
– at
e u0 ( t )
0
– at −1
–e u0 ( –t )
Then,
– at at
sgn ( t ) = lim [ e u0 ( t ) – e u0 ( –t ) ] (8.67)
a→0
and
0 ∞
at – j ωt – a t – j ωt
F { sgn ( t ) } = lim
a→0 ∫– ∞ –e e dt + ∫0 e e dt
0 ∞
( a – j ω )t – ( a + jω ) t
= lim
a→0 ∫– ∞ –e dt + ∫0 e dt (8.68)
1 1 –1 1 2
= lim --------------- + --------------- = --------- + ------ = ------
a→0 a – jω a + jω – jω jω jω
f( t)
F Im ( ω )
1
0
t ω
0
−1
Proof:
If we attempt to verify the transform pair of (8.69) by direct application of the Fourier transform
definition, we will find that
∞ ∞ – jωt ∞
– jωt – jωt
F(ω ) = ∫–∞ f ( t )e dt = F ( ω ) = ∫0 e dt = e-----------
– jω 0
(8.70)
– jωt – jωt
but we cannot say that e approaches 0 as t → ∞ , because e = 1 ∠– ωt , that is, the magni-
– jωt
tude of e is always unity, and its angle changes continuously as t assumes greater and greater
values. Since the upper limit cannot be evaluated, the integral of (8.70) does not converge.
To work around this problem, we will make use of the sgn ( t ) function which we express as
sgn ( t ) = 2u 0 ( t ) – 1 (8.71)
f(t) 2
0
t
−1
Figure 8.8. Alternate expression for the signum function
We rewrite (8.71) as
1 1 1
u 0 ( t ) = --- ( 1 + sgn ( t ) ) = --- + --- sgn ( t ) (8.72)
2 2 2
and since we know that 1 ⇔ 2π δ ( ω ) and sgn ( t ) ⇔ 2 ⁄ ( jω ) , by substitution of these into (8.72)
we get
1
u 0 ( t ) ⇔ πδ ( ω ) + ------
jω
and this is the same as(8.69). This is a complex function in the frequency domain whose real part
is π δ ( ω ) and imaginary part – 1 ⁄ ω .
The f ( t ) ↔ F ( ω ) correspondence is also shown in Figure 8.9.
f(t)
F Im ( ω )
1 π
F Re ( ω )
t ω
F Im ( ω )
Since u 0 ( t ) is real but neither even nor odd function of time, its Fourier transform is a complex
function of frequency as shown in (8.69). This is consistent with the result in Table 8.7.
Now, we will prove the time integration property of (8.49), that is,
t
F(ω )
∫–∞ f ( τ ) dτ ⇔ ------------
jω
+ πF ( 0 )δ ( ω )
as follows:
By the convolution integral,
t
u 0 ( t )∗ f ( t ) = ∫–∞ f ( τ )u ( t – τ ) dτ
0
and since u 0 ( t – τ ) = 1 for t > τ , and it is zero otherwise, the above integral reduces to
t
u 0 ( t )∗ f ( t ) = ∫– ∞ f ( τ ) d τ
Next, by the time convolution property,
u 0 ( t )∗ f ( t ) ⇔ U 0 ( ω ) ⋅ F ( ω )
and since
1
U 0 ( ω ) = πδ ( ω ) + ------
jω
using these results and the sampling property of the delta function, we get
U 0 ( ω ) ⋅ F ( ω ) = ⎛ πδ ( ω ) + -----
1-⎞ F ( ω ) = = πδ ( ω )F ( ω ) + F ( ω ) = πF ( 0 )δ ( ω ) + ------------
------------ F(ω)
⎝ jω ⎠ jω jω
7.
– jω 0 t 1
e u 0 ( t ) ⇔ πδ ( ω – ω 0 ) + ---------------------- (8.73)
j ( ω – ω0 )
Proof:
From the Fourier transform of the unit step function,
1-
u 0 ( t ) ⇔ πδ ( ω ) + -----
jω
and the frequency shifting property,
jω 0 t
e f ( t ) ⇔ F ( ω – ω0 )
we obtain (8.73).
8.
π 1 1
u 0 ( t ) cos ω 0 t ⇔ --- [ δ ( ω – ω 0 ) + δ ( ω + ω 0 ) ] + -------------------------- + ---------------------------
2 2 j ( ω – ω 0 ) 2j ( ω + ω 0 )
(8.74)
π jω -
⇔ --- [ δ ( ω – ω 0 ) + δ ( ω + ω 0 ) ] + -----------------
2 ω –ω
2 2
0
Proof:
We first express the cosine function as
1 jω0 t –j ω0 t
cos ω 0 t = --- ( e +e )
2
From (8.73),
– jω 0 t 1 -
e u 0 ( t ) ⇔ πδ ( ω – ω 0 ) + ----------------------
j ( ω – ω0 )
and
jω 0 t 1
e u 0 ( t ) ⇔ πδ ( ω + ω 0 ) + -----------------------
j ( ω + ω0 )
Now, using
1
u 0 ( t ) ⇔ πδ ( ω ) + ------
jω
we get (8.74).
9.
π ω
2
u 0 ( t ) sin ω 0 t ⇔ ----- [ δ ( ω – ω 0 ) + δ ( ω + ω 0 ) ] + -----------------
-2 (8.75)
j2 ω –ω
2
0
Proof:
We first express the sine function as
1 jω 0 t – j ω 0 t
sin ω 0 t = ----- ( e –e )
j2
From (8.73),
– jω 0 t 1
e u 0 ( t ) ⇔ πδ ( ω – ω 0 ) + -----------------------
j ( ω – ω0 )
and
jω 0 t 1
e u 0 ( t ) ⇔ πδ ( ω + ω 0 ) + -----------------------
j ( ω + ω0 )
Using
1-
u 0 ( t ) ⇔ πδ ( ω ) + -----
jω
we obtain (8.75).
Example 8.1
– αt
It is known that L [ e 1
u 0 ( t ) ] = ------------ . Compute F { e–αt u0 ( t ) } .
s+α
Solution:
F { e–αt u0 ( t ) } = L [e
– αt
u0 ( t ) ]
s = jω
1
= ------------
s+α
1
= ----------------
jω + α
s = jω
– αt 1
e u 0 ( t ) ⇔ ---------------- (8.76)
jω + α
Example 8.2
It is known that
– αt s+α
L [(e cos ω 0 t )u 0 ( t ) ] = --------------------------------
2 2
( s + α ) + ω0
Solution:
– αt jω + α
(e cos ω 0 t )u 0 ( t ) ⇔ -----------------------------------
2
-
2
(8.77)
( jω + α ) + ω 0
We can also find the Fourier transform of a time function f ( t ) that has non-zero values for t < 0 ,
and it is zero for all t > 0 . But because the one-sided Laplace transform does not exist for t < 0 , we
must first express the negative time function in the t > 0 domain, and compute the one-sided
Laplace transform. Then, the Fourier transform of f ( t ) can be found by substituting s with – jω . In
other words, when f ( t ) = 0 for t ≥ 0 , and f ( t ) ≠ 0 for t < 0 , we use the substitution
F { f(t) } = L [ f ( –t ) ] s = –j ω
(8.78)
Example 8.3
–a t
Compute the Fourier transform of f ( t ) = e
a. using the Fourier transform definition
b. by substitution into the Laplace transform equivalent
Solution:
a. Using the Fourier transform definition, we get
0 ∞ 0 ∞
( a – jω )t – ( a + jω ) t
F { e –a t } = ∫– ∞ e e
at – jωt
dt + ∫0 e
– a t – jωt
e dt = ∫– ∞ e dt + ∫0 e dt
1 1 2
= --------------- + --------------- = ------------------
a – jω a + jω ω 2 + a 2
–a t 2
e ⇔ -----------------
2
-
2
(8.79)
ω +a
F { e–a t } = L [e
– at
] s = jω
+ L [e ]
at
s = –j ω
1-
= ----------
s+a
1-
+ ----------
s+a
s = jω s = –j ω
1 1 2
= --------------- + -------------------- = ------------------
jω + a – jω + a ω 2 + a 2
Example 8.4
Derive the Fourier transform of the pulse
f ( t ) = A [ u0 ( t + T ) – u0 ( t – T ) ] (8.80)
Solution:
The pulse of (8.80) is shown in Figure 8.10.
f(t)
A
t
−T 0 T
Figure 8.10. Pulse for Example 8.4
∞ T – jωt T
– jωt – jωt
F(ω) = ∫–∞ f ( t )e dt = ∫–T Ae dt = Ae
---------------
– jω –T
– jωt – T – jωt
(e
jωt
sin ωT sin ωT
Ae -
= -------------- =A –e -) = 2A --------------- = 2AT ---------------
-----------------------------------
jω T
jω ω ωT
We observe that the transform of this pulse has the ( sin x ) ⁄ x form, and has its maximum value 2AT
at ωT = 0 *.
Thus, we have the waveform pair
sin ωT
A [ u 0 ( t + T ) – u 0 ( t – T ) ] ⇔ 2AT --------------- (8.81)
ωT
The f ( t ) ↔ F ( ω ) correspondence is also shown in Figure 8.11, where we observe that the ω axis
crossings occur at values of ωT = ± nπ where n is an integer.
F(ω )
f(t)
A
−π π
t −2π 2π ωT
−T 0 T 0
We also observe that since f ( t ) is real and even, F ( ω ) is also real and even.
Example 8.5
Derive the Fourier transform of the pulse of Figure 8.12.
f(t)
A
t
0 2T
Figure 8.12. Pulse for Example 8.5
Solution:
The expression for the given pulse is
f ( t ) = A [ u 0 ( t ) – u 0 ( t – 2T ) ] (8.82)
sin x- = 1
* We recall that lim ---------
x→0 x
Alternate Solution:
We can obtain the Fourier transform of (8.82) using the time shifting property, i.e,
– jωt 0
f ( t – t 0 ) ⇔ F ( ω )e
sin ωT – jωT
and the result of Example 8.4. Thus, multiplying 2AT --------------- by e , we obtain (8.84).
ωT
Example 8.6
Derive the Fourier transform of the waveform of Figure 8.13.
f(t) 2A
t
−T 0 T 2T
Figure 8.13. Waveform for Example 8.6
Solution:
The given waveform can be expressed as
f ( t ) = A [ u 0 ( t + T ) + u 0 ( t ) – u 0 ( t – T ) – u 0 ( t – 2T ) ] (8.85)
and this is precisely the sum of the waveforms of Examples 8.4 and 8.5. We also observe that this
waveform is obtained by the graphical addition of the waveforms of Figures 8.10 and 8.12. There-
fore, we will apply the linearity property to obtain the Fourier transform of this waveform.
We denote the transforms of Examples 8.4 and 8.5 as F 1 ( ω ) and F 2 ( ω ) respectively, and we get
We observe that F ( ω ) is complex since f ( t ) of (8.85) is neither an even nor an odd function.
Example 8.7
Derive the Fourier transform of
f ( t ) = A cos ω 0 t [ u 0 ( t + T ) – u 0 ( t – T ) ] (8.87)
Solution:
From (8.45),
F ( ω – ω0 ) + F ( ω + ω0 )
f ( t ) cos ω 0 t ⇔ ----------------------------------------------------------
-
2
and from (8.81),
sin ωT
A [ u 0 ( t + T ) – u 0 ( t – T ) ] ⇔ 2AT ---------------
ωT
Then,
sin [ ( ω – ω 0 )T ] sin [ ( ω + ω 0 )T ]
A cos ω 0 t [ u 0 ( t + T ) – u 0 ( t – T ) ] ⇔ AT ------------------------------------
- + -------------------------------------- (8.88)
( ω – ω 0 )T ( ω + ω 0 )T
We also observe that since f ( t ) is real and even, F ( ω ) is also real and even*.
Example 8.8
Derive the Fourier transform of a periodic time function with period T .
Solution:
From the definition of the exponential Fourier series,
∞ jnω 0 t
f(t) = ∑ Cn e
n = –∞
(8.89)
Taking the Fourier transform of (8.89), and applying the linearity property for the transforms of
(8.90), we get
∞ ∞ ∞
F ⎧⎨ ∑
jnω 0 t ⎫ jnω 0 t
F { f(t)} =
⎩
Cn e ⎬ =
⎭
∑ Cn F { e } = 2π ∑
n = –∞
C n δ ( ω – nω )
0 (8.91)
n = –∞ n = –∞
The line spectrum of the Fourier transform of (8.91) is shown in Figure 8.14.
F( ω)
2πC –2 2πC 0
2πC 2
2πC – 4 2πC – 1 2πC 1
2πC 4
2πC – 3 2πC 3
..... .....
ω
– 4ω 0 – 3 ω 0 – 2 ω 0 – ω 0 0 ω0 2 ω0 3 ω0 4 ω0
The line spectrum of Figure 8.14 reveals that the Fourier transform of a periodic time function, con-
sists of a train of equally spaced delta functions. The strength of each δ ( ω – nω 0 ) is equal to 2π
times the coefficient C n .
Example 8.9
Derive the Fourier transform of the periodic time function
∞
f( t) = A ∑
n = –∞
δ ( t – nT ) (8.92)
Solution:
This function consists of a train of equally spaced delta functions in the time domain, and each has
the same strength A , as shown in Figure 8.15.
f( t)
Α
..... .....
t
−4Τ −3Τ −2Τ −Τ 0 Τ 2Τ 3Τ 4Τ
Since this is a periodic function of time, its Fourier transform is as derived in the previous example,
that is, expression (8.91). Then,
∞
F ( ω ) = 2π ∑
n = –∞
C n δ ( ω – nω 0 ) (8.93)
From the waveform of Figure 8.15, we observe that, within the limits of integration from – T ⁄ 2 to
+T ⁄ 2 , there is only the impulse δ ( t ) at the origin. Therefore, replacing f ( t ) with δ ( t ) and using the
sifting property of the delta function, we get
T⁄2 – jnω 0 t
1
∫–T ⁄ 2 δ ( t )e
1
C n = --- dt = --- (8.95)
T T
Thus, we see that all C n coefficients are equal to 1 ⁄ T , and (8.93) with ω 0 = 2π ⁄ T reduces to
2π ∞
F ( ω ) = ------ ∑
T n = –∞
δ ( ω – nω 0 ) (8.96)
The Fourier transform of the waveform of Figure 8.15 is shown in Figure 8.16.
F(ω)
..... .....
ω
– 4ω 0 – 3ω 0 – 2ω 0 – ω 0 0 ω0 2ω 0 3ω 0 4ω 0
Figure 8.16. The Fourier transform of a train of equally spaced delta functions
Figure 8.16 shows that the Fourier transform of a periodic train of equidistant delta functions in the
time domain, is a periodic train of equally spaced delta functions in the frequency domain. This
result is the basis for the proof of the sampling theorem which states that a time function f ( t ) can be
uniquely determined from its values at a sequence of equidistant points in time.
Example 8.10
1 2 1 2
– --- t – --- ω
2 2
e ⇔ 2πe (8.97)
This time function, like the time function of Example 8.9, is its own Fourier transform multiplied by
the constant 2π .
syms t v w x; ft=exp(−t^2/2); Fw=fourier(ft)
Fw =
2^(1/2)*pi^(1/2)*exp(-1/2*w^2)
pretty(Fw)
1/2 1/2 2
2 pi exp(- 1/2 w )
% Check answer by computing the Inverse using "ifourier"
ft=ifourier(Fw)
ft =
exp(-1/2*x^2)
Example 8.11
1 2
2 – ω ---
–t 1 4
te ⇔ --- j πωe (8.98)
2
syms t v w x; ft=t*exp(−t^2); Fw=fourier (ft)
Fw =
-1/2*i*pi^(1/2)*w*exp(-1/4*w^2)
pretty(Fw)
1/2 2
- 1/2 i pi w exp(- 1/4 w )
Example 8.12
–t 1
– e u 0 ( t ) + 3δ ( t ) ⇔ – --------------- + 3 (8.99)
jω + 1
syms t v w x; fourier(sym('−exp(−t)*Heaviside(t)+3*Dirac(t)'))
ans =
-1/(1+i*w)+3
Example 8.13
1-
u 0 ( t ) ⇔ πδ ( ω ) + ----- (8.100)
jω
syms t v w x; u0=sym('Heaviside(t)'); Fw=fourier(u0)
Fw =
pi*Dirac(w)-i/w
We summarize the most common Fourier transform pairs in Table 8.9.
We let
g ( t ) = f ( t )∗ h ( t ) (8.102)
and recalling that convolution in the time domain corresponds to multiplication in the frequency
domain, we get
f( t) F(ω)
δ(t) 1
δ ( t – t0 ) – jωt 0
e
1 2π δ ( ω )
e
– jωt 0 2π δ ( ω – ω 0 )
sgn ( t ) 2 ⁄ ( jω )
u0 ( t ) 1
------ + π δ ( ω )
jω
cos ω 0 t π [ δ ( ω – ω0 ) + δ ( ω + ω0 ) ]
sin ω 0 t jπ [ δ ( ω – ω 0 ) – δ ( ω + ω 0 ) ]
– at 1
e u0 ( t ) ---------------
jω + a
a>0 a>0
– at 1
te u0 ( t ) ----------------------
2
( jω + a )
a>0
a>0
e
– at
cos ω 0 tu 0 ( t ) jω + a
-----------------------------------
2 2
a>0 ( jω + a ) + ω
a>0
– at ω
e sin ω 0 tu 0 ( t ) ----------------------------------
2
-
2
( jω + a ) + ω
a>0
a>0
A [ u0 ( t + T ) – u0 ( t – T ) ] sin ωT
2AT ---------------
ωT
f ( t )∗ h ( t ) = g ( t ) ⇔ G ( ω ) = F ( ω ) ⋅ H ( ω ) (8.103)
We call H ( ω ) the system function. From (8.103), we see that the system function H ( ω ) and the
impulse response h ( t ) form the Fourier transform pair
Therefore, if we know the impulse response h ( t ) , we can compute the response g ( t ) of any input
f ( t ) , by multiplication of the Fourier transforms H ( ω ) and F ( ω ) to obtain G ( ω ) . Then, we take the
Inverse Fourier transform of G ( ω ) to obtain the response g ( t ) .
Example 8.14
For the linear circuit of Figure 8.17 (a) below, it is known that the impulse response is as shown in
(b). Use the Fourier transform method to compute the response g ( t ) when the input f ( t ) is as
shown in (c).
f ( t ) = 2 [ u0 ( t ) – u0 ( t – 3 ) ]
3
+ Linear + – 2t
f(t) g(t) h ( t ) = 3e 2
− Circuit −
t t
(a) 0 (b) 0 1 (c) 2 3
The system function H ( ω ) is the Fourier transform of the impulse response h ( t ) . Thus,
3
F {h(t)} = H ( ω ) = ---------------
jω + 2
= F 1 ( ω ) = 2 ⎛ πδ ( ω ) + ------⎞
1
F { f1 ( t ) } ⎝ jω⎠
Then,
G 1 ( ω ) = H ( ω ) ⋅ F 1 ( ω ) = --------------- ⋅ 2 ⎛ πδ ( ω ) + ------⎞
3 1
jω + 2 ⎝ jω⎠
or
6π 3
G 1 ( ω ) = --------------- δ ( ω ) + -------------------------- (8.105)
jω + 2 jω ( jω + 2 )
To evaluate the first term of (8.105), we apply the sampling property of the delta function, i.e.,
X(ω) ⋅ δ(ω) = X(0) ⋅ δ(ω)
and this term reduces to 3πδ ( ω ) or 1.5 [ 2πδ ( ω ) ] . Since 1 ⇔ 2πδ ( ω ) , the Inverse Fourier trans-
form of this term is
–1
F { 1.5 [ 2πδ ( ω ) ] } = 1.5 (8.106)
To find the Inverse Fourier transform of the second term in (8.105), we use partial fraction expan-
sion. Thus,
3
-------------------------- = 1.5 1.5 -
------- – -------------------
jω ( jω + 2 ) jω ( jω + 2 )
and therefore,
– 1 ⎧ 1.5 1.5 ⎫ –1 ⎧ 2 1.5 ⎫
g 1 ( t ) = 1.5 + F ⎨ ------- – -------------------- ⎬ = 1.5 + F - – -------------------
⎨ 0.75 ----- -⎬
⎩ jω ( jω + 2) ⎭ ⎩ jω ( jω + 2) ⎭
or
– 2t
g 1 ( t ) = 1.5 + 0.75 sgn ( t ) – 1.5e u0 ( t )
(8.107)
– 2t – 2t
= 1.5 + 0.75 [ 2u 0 ( t ) – 1 ] – 1.5e u 0 ( t ) = 0.75 + 1.5 ( 1 – e )u 0 ( t )
Next, we denote the response due to the second term of the input as g 2 ( t ) , and replacing u 0 ( t ) in
(8.107) with u 0 ( t – 3 ) , we get
–2 ( t – 3 )
g 2 ( t ) = 0.75 + 1.5 ( 1 – e )u 0 ( t – 3 ) (8.108)
Now, we combine (8.107) with (8.108), and we get
g ( t ) = g1 ( t ) –g2 ( t )
or
– 2t –2 ( t – 3 )
g ( t ) = 1.5 { ( 1 – e )u 0 ( t ) – ( 1 – e )u 0 ( t – 3 ) }
Example 8.15
For the circuit of Figure 8.18, use the Fourier transform method, and the system function H ( ω ) to
compute v L ( t ) . Assume i L ( 0 − ) .
4Ω L
+
vL ( t )
+
−
v in ( t )
`
2H −
– 3t
v in ( t ) = 5e u0 ( t )
Solution:
We will find the system function H ( ω ) from the phasor equivalent circuit shown in Figure 8.19.
4Ω L
+
+
− `
j2ω −
V (ω) = V L out ( ω )
V in ( ω )
15 10
V out ( ω ) = --------------- – ---------------
jω + 3 jω + 2
and
–1 ⎧ 15 – 10 ⎫ – 3t – 2t
v L ( t ) = v out ( t ) = F ⎨ --------------- – --------------- ⎬ = 15e – 10e
⎩ jω + 3 jω + 2 ⎭
or
– 3t – 2t
v L ( t ) = 5 ( 3e – 2e )u 0 ( t ) (8.109)
Example 8.16
For the linear circuit of Figure 8.20, the input-output relationship is
d-
---- v ( t ) + 4v out ( t ) = 10v in ( t ) (8.110)
dt out
where v in ( t ) is as shown in Figure 8.20. Use the Fourier transform method, and the system function
H ( ω ) to compute the output v out ( t ) .
3
+ – 2t
v in ( t )
Linear +
v out ( t ) v in ( t ) = 3e
− Circuit −
t
0
Solution:
Taking the Fourier transform of both sides of (8.110), and recalling that
n
d- n
------
n
f ( t ) ⇔ ( jω ) F ( ω )
dt
we get,
jωV out ( ω ) + 4V out ( ω ) = 10V in ( ω )
or
( jω + 4 )V out ( ω ) = 10V in ( ω )
and thus,
V out ( ω ) 10
H ( ω ) = ------------------
- = --------------- (8.111)
V in ( ω ) jω + 4
Also,
– 2t 3
V in ( ω ) = F { v in ( t ) } = F ( 3e u 0 ( t ) ) = --------------- (8.112)
jω + 2
and
10 3 r1 r2
V out ( ω ) = H ( ω ) ⋅ V in ( ω ) = --------------- ⋅ --------------- = --------------
- + --------------
-
jω + 4 jω + 2 jω + 4 jω + 2
15 – 15
V out ( ω ) = --------------- + --------------- (8.113)
jω + 2 jω + 4
Therefore,
–1 ⎧ 15 – 15 ⎫ – 2t – 4t
v out ( t ) = F ⎨ --------------- + --------------- ⎬ = 15 ( e – e )u 0 ( t ) (8.114)
⎩ jω + 2 jω + 4 ⎭
Example 8.17
– 2t
The voltage across an 1 Ω resistor is known to be v R ( t ) = 3e u 0 ( t ) . Compute the energy dissi-
pated in this resistor for 0 < t < ∞ , and verify the result by application of Parseval’s theorem.
Solution:
The instantaneous power absorbed by the resistor is
2 2 – 4t
p R = v R ⁄ 1 = v R = 9e u0 ( t ) (8.115)
and thus, the energy is
∞ ∞ – 4t ∞ 0
– 4t e 9 –4t
∫0 ∫0
2
WR = v R dt = 9e dt = 9 -------- = --- e ∞
= 2.25 joules (8.116)
–4 0
4
Since
F(ω) = F { 3e–2t u0 ( t ) } 3
= --------------- (8.118)
jω + 2
and
9
= F ( ω ) ⋅ F∗ ( ω ) = ------------------
2
F(ω) (8.119)
2 2
ω +2
We observe that the integrand of (8.120) is an even function of ω ; therefore, we can multiply the
integral by 2 , and integrate from 0 to ∞ . Then,
∞ ∞
2 9 9 1
W R = ------
2π ∫0 ------------------ dω = ---
ω +2
2 2 π ∫0 ------------------ dω
ω +2
2 2
(8.121)
8.9 Summary
• The Fourier transform is defined as
∞
– jωt
F(ω) = ∫–∞ f ( t )e dt
• The Fourier transform is, in general, a complex function. We can express it as the sum of its real
and imaginary components, or in exponential form as
jϕ ( ω )
F ( ω ) = Re { F ( ω ) } + jIm { F ( ω ) } = F ( ω ) e
• We often use the following notations to express the Fourier transform and its inverse.
F {f(t)} = F(ω)
–1
F {F(ω)} = f(t)
• If F ( – ω ) = F∗ ( ω ) , f ( t ) is real.
• The linearity property states that
a1 f1 ( t ) + a2 f2 ( t ) + … + an fn ( t ) ⇔ a1 F1 ( ω ) + a2 F2 ( ω ) + … + an Fn ( ω )
• The Fourier transforms of the modulated signals f ( t ) cos ωt and f ( t ) sin ωt are
F ( ω – ω0 ) + F ( ω + ω0 )
f ( t ) cos ω 0 t ⇔ ----------------------------------------------------------
-
2
F ( ω – ω0 ) –F ( ω + ω0 )
f ( t ) sin ω 0 t ⇔ -------------------------------------------------------
-
j2
• The time differentiation property states that
n
d
-------- f ( t ) ⇔ ( jω ) n F ( ω )
n
dt
• The frequency differentiation property states that
n
n d
( – j t ) f ( t ) ⇔ ---------n F ( ω )
dω
• The time integration property states that
t
F(ω )
∫–∞ f ( τ ) dτ ⇔ ------------
jω
+ πF ( 0 )δ ( ω )
f∗ ( t ) ⇔ F∗ ( – ω )
• The time convolution property states that
f 1 ( t )∗ f 2 ( t ) ⇔ F 1 ( ω )F 2 ( ω )
1
f 1 ( t )f 2 ( t ) ⇔ ------ F 1 ( ω )∗ F 2 ( ω )
2π
• The area under a time function f ( t ) is equal to the value of its Fourier transform evaluated at
ω = 0 . In other words,
∞
F(0) = ∫–∞ f ( t ) dt
• The value of a time function f ( t ) , evaluated at t = 0 , is equal to the area under its Fourier trans-
• The delta function and its Fourier transform are as shown below.
f(t) F(ω) 1
δ(t)
0 t 0 ω
• The unity time function and its Fourier transform are as shown below.
f(t) F(ω)
1
2πδ ( ω )
0 t 0 ω
jω 0 t
• The Fourier transform of the complex time function e is as indicated below.
jω 0 t
e ⇔ 2πδ ( ω – ω 0 )
• The Fourier transforms of the time functions cos ω 0 t , and sin ω 0 t are as shown below.
cosωω
cos 0 t0
t F Re ( ω )
π π
t t
−ω0 0 ω0 ω
sin ω 0 t F Im ( ω )
π
tt −ω0
0 ω0 ω
−π
• The signum function and its Fourier transform are as shown below.
f( t)
F Im ( ω )
1
0
t ω
0
−1
• The unit step function and its Fourier transform are as shown below.
.
f(t)
F Im ( ω )
1 π
F Re ( ω )
t ω
F Im ( ω )
– jω 0 t
• The Fourier transforms of e u 0 ( t ) , u 0 ( t ) cos ω 0 t , and u 0 ( t ) sin ω 0 t are as shown below.
– jω 0 t 1
e u 0 ( t ) ⇔ πδ ( ω – ω 0 ) + ----------------------
j ( ω – ω0 )
π jω
u 0 ( t ) cos ω 0 t ⇔ --- [ δ ( ω – ω 0 ) + δ ( ω + ω 0 ) ] + -----------------
-2
2 ω –ω
2
0
π ω -
2
u 0 ( t ) sin ω 0 t ⇔ ----- [ δ ( ω – ω 0 ) + δ ( ω + ω 0 ) ] + -----------------
j2 ω –ω
2 2
0
• If a time function f ( t ) is zero for t ≤ 0 , we can obtain the Fourier transform of f ( t ) from the one-
sided Laplace transform of f ( t ) by substitution of s with jω .
F {f(t)} = L [ f ( –t ) ] s = –j ω
F(ω )
f(t)
A
−π π
t −2π 2π ωT
−T 0 T 0
• The Fourier transform of a periodic time function with period T is as shown below.
∞ ∞ ∞
F {f(t)} = F ⎧⎨ ∑ Cn e
jnω 0 t ⎫
⎬ = ∑ Cn F { e
jnω 0 t
} = 2π ∑ C n δ ( ω – nω )
0
⎩ ⎭ n = –∞
n = –∞ n = –∞
• The Fourier transform of a periodic train of equidistant delta functions in the time domain, is a
periodic train of equally spaced delta functions in the frequency domain.
• The system function H ( ω ) and the impulse response h ( t ) form the Fourier transform pair
h(t) ⇔ H(ω)
and
f ( t )∗ h ( t ) = g ( t ) ⇔ G ( ω ) = F ( ω ) ⋅ H ( ω )
8.10 Exercises
1. Show that
∞
∫–∞ u ( t )δ ( t ) dt
0 = 1⁄2
2. Compute
– at
F { te u0 ( t ) } a > 0
7. For the circuit of Figure 8.21, use the Fourier transform method to compute v C ( t ) .
R1
1Ω
C +
+ vC ( t ) R2
− −
v in ( t ) 1F
0.5 Ω
v in ( t ) = 50 cos 4tu 0 ( t )
9. In a bandpass filter, the lower and upper cutoff frequencies are f 1 = 2 Hz , and f 2 = 6 Hz respec-
tively. Compute the 1 Ω energy of the input, and the percentage that appears at the output, if the
– 2t
input signal is v in ( t ) = 3e u 0 ( t ) volts.
F(ω)
f( t)
A
−π π
t −2π 2π ωT
−T 0 T 0
2.
∞ ∞ ∞
– jωt – at – jωt – ( jω + a )t
F(ω) = ∫– ∞ f ( t )e dt = ∫0 te e dt = ∫0 te dt
F(ω) 1
= ----------------------
t=0 2
( jω + a )
To evaluate the lower limit of integration, we apply L’Hôpital’s rule, i.e.,
d
----- [ ( jω + a )t + 1 ]
[ ( jω + a )t + 1 ] dt ( jω + a )
---------------------------------------------- = lim --------------------------------------------------------- = lim ------------------------------------------------------------------
- = 0
( jω + a )t t→∞d t → ∞ ( jω + a )e ( jω + a )t ⋅ ( jω + a ) 2
----- [ e ( jω + a )t ⋅ ( jω + a ) 2 ]
2
e ⋅ ( jω + a ) ∞
dt
and thus
1
F ( ω ) = ----------------------
2
( jω + a )
Check:
F(ω) = F(s) s = jω
and since
– at 1
te u 0 ( t ) ⇔ ------------------2
(s + a)
it follows that
1 1
F ( ω ) = ------------------ = ----------------------
2 2
(s + a) s = jω
( jω + a )
3.
From Example 8.7,
sin [ ( ω – ω 0 )T ] sin [ ( ω + ω 0 )T ]
A cos ω 0 t [ u 0 ( t + T ) – u 0 ( t – T ) ] ⇔ AT ------------------------------------
- + --------------------------------------
( ω – ω 0 )T ( ω + ω 0 )T
F( ω)
AT
A f( t)
t ω
–T 0 T –ω0 0 ω0
2π
------
T
4.
f ( t ) = A [ u 0 ( t + 3T ) – u 0 ( t + T ) + u 0 ( t – T ) – u 0 ( t – 3T ) ]
f( t)
t
– 3T – 2T –T 0 T 2T 3T
–T t
0 T
–A
∞ T T
– jωt A – jωt – jωt
∫– ∞ ∫– T ∫– T t e
A
F(ω) = f ( t )e dt = --- t e
T
dt = --T- dt
Alternate Solution:
A
f 1 ( t ) = --- t f2 ( t )
T A⁄T
A
–T t
0 T
t
−T 0 T
–A
f 1 ( t ) = ⎛ --- t + A⎞ [ u 0 ( t + T ) – u 0 ( t ) ] + ⎛ – A
--- t + A⎞ u 0 ( t ) – u 0 ⎛ t – T
--- ⎞
A
⎝T ⎠ ⎝ T ⎠ ⎝ 2⎠
A
f 1 ( t ) = --- t f2 ( t ) f3( t )
A
T A⁄T
A⁄T
a
t T
t
–T 0 T −T 0 –T ⁄2 T ⁄2
b t
–A ⁄ T
We begin by finding the Fourier Transform of the pulse denoted as f 3 ( t ) , and using F 3 ( ω ) and
the time shifting and linearity properties, we will find F 2 ( ω ) .
sin ( ωT ⁄ 2 ) jω ( T ⁄ 2 )
F 2a ( ω ) = A --------------------------- ⋅ e
ωT ⁄ 2
sin ( ωT ⁄ 2 ) –j ω ( T ⁄ 2 )
F 2b ( ω ) = – A --------------------------- ⋅ e
ωT ⁄ 2
and using the linearity property we get
sin ( ωT ⁄ 2 ) jω ( T ⁄ 2 ) –j ω ( T ⁄ 2 )
F 2 ( ω ) = F 2a ( ω ) + F 2b ( ω ) = A --------------------------- ⋅ ( e –e )
ωT ⁄ 2
jω ( T ⁄ 2 ) –j ω ( T ⁄ 2 ) 2
sin ( ωT ⁄ 2 ) e –e sin ( ωT ⁄ 2 )
= j2A --------------------------- ⋅ ⎛ ------------------------------------------------ ⎞ = j2A -----------------------------
ωT ⁄ 2 ⎝ j2 ⎠ ωT ⁄ 2
2
This sin ( x ) ⁄ x curve is shown below and it was created with the following MATLAB code.
fplot('sin(x./2).^2./x',[0 16*pi 0 0.5])
ω
0
Now, we find F 1 ( ω ) of the triangular waveform of f 1 ( t ) with the use of the integration property
by multiplying F 2 ( ω ) by 1 ⁄ jω . Thus,
2 2
1 sin ( ωT ⁄ 2 ) 2A sin ( ωT ⁄ 2 ) 2A
- ⋅ sin 2( ωT ⁄ 2 )
F 1 ( ω ) = ( 1 ⁄ jω ) ⋅ F 2 ( ω ) = ------ ⋅ j2A ----------------------------- = ------- ⋅ ----------------------------- = ----------------
jω ωT ⁄ 2 ω ωT ⁄ 2 2
ω T⁄2
2
We can plot F 1 ( ω ) by letting T ⁄ 2 = 1 . Then, F 1 ( ω ) simplifies to the form K [ ( sin x ) ⁄ x ]
This curve is shown below and it was created with the following MATLAB code.
fplot('(sin(x)./x).^2',[-8*pi 8*pi 0 1])
ω
0
7.
1Ω
+
+ vC ( t )
− −
v in ( t ) 1F
0.5 Ω
By KCL
v C ( t ) – v in ( t ) dv C v C ( t )
- + 1 ⋅ --------- + -----------
------------------------------- - = 0
1 dt 0.5
dv C
--------- + 3v C = v in
dt
Taking the Fourier transform of both sides we get
jωV C ( ω ) + 3V C ( ω ) = V in ( ω )
( jω + 3 )V C ( ω ) = V in ( ω )
and since V out ( ω ) = V C ( ω )
( jω + 3 )V out ( ω ) = V in ( ω )
and
V out ( ω ) 1
H ( ω ) = ------------------
- = ---------------
V in ( ω ) jω + 3
where
v in ( t ) = 50 cos 4tu 0 ( t ) ⇔ V in ( ω ) = 50π [ δ ( ω – 4 ) + δ ( ω + 4 ) ]
Then,
1
V out ( ω ) = V C ( ω ) = H ( ω ) ⋅ V in ( ω ) = --------------- ⋅ 50π [ δ ( ω – 4 ) + δ ( ω + 4 ) ]
jω + 3
and
∞
–1 1 50π [ δ ( ω – 4 ) + δ ( ω + 4 ) ]
∫–∞ ---------------------------------------------------------------
jωt
vC ( t ) = F { V C ( ω ) } = ------ -⋅e dω
2π jω + 3
∞ ∞
δ ( ω – 4 ) jωt δ(ω + 4)
∫ ∫–∞ --------------------
jωt
= 25 --------------------- ⋅ e dω + 25 -⋅e dω
– ∞ jω + 3 jω + 3
20 -
[ ( jω + 2 ) ⋅ ( jω + 3 ) ]V out ( ω ) = 10V in ( ω ) = --------------
jω + 1
20
V out ( ω ) = ----------------------------------------------------------------------
( jω + 1 ) ⋅ ( jω + 2 ) ⋅ ( jω + 3 )
We use the following MATLAB code for partial fraction expansion where we let jω = s .
syms s; collect((s+1)*(s+2)*(s+3))
ans =
s^3+6*s^2+11*s+6
num=[0 0 0 20]; den=[1 6 11 6]; [num,den]=residue(num,den); fprintf(' \n');...
fprintf('r1 = %4.2f \t', num(1)); fprintf('p1 = %4.2f', den(1)); fprintf(' \n');...
fprintf('r2 = %4.2f \t', num(2)); fprintf('p2 = %4.2f', den(2)); fprintf(' \n');...
fprintf('r3 = %4.2f \t', num(3)); fprintf('p3 = %4.2f', den(3))
r1 = 10.00 p1 = -3.00
r2 = -20.00 p2 = -2.00
r3 = 10.00 p3 = -1.00
Then,
20
V out ( ω ) = ---------------------------------------------------------------------
- + ------------------- – 20 - + -------------------
10 - + ------------------- 10 -
( jω + 1 ) ⋅ ( jω + 2 ) ⋅ ( jω + 3 ) ( jω + 1 ) ( jω + 2 ) ( jω + 3 )
and thus
–1 –t – 2t – 3t
v out ( t ) = F { V out ( ω ) } = 10e – 20e + 10e
–t – 2t – 3t
= 10 ( e – 2e +e )u 0 ( t )
9.
The input energy in joules is
∞ ∞ ∞ ∞
– 2t 2 – 2t 2 – 4t
∫– ∞ ∫0 ∫0 ∫0 9e
2
W in = v in ( t ) dt = 3e dt = 3e dt = dt
– 4t ∞ – 4t 0
9e 9e 9
= ------------ = ------------ = --- = 2.25 J
–4 0
4 ∞
4
– 2t 3
F { v in ( t ) } = F { 3e u 0 ( t ) } = ---------------
jω + 2
The energy at the output for the frequency interval 2 Hz ≤ f ≤ 6 Hz or 4π rad ≤ ω ≤ 12π rad is
∞ ∞ 12π
1 1 1-
3 - 2 dω = ----- 9 -
∫– ∞ ∫– ∞ ∫4π
2
W out = ------ F ( ω ) dω = ------ -------------- ----------------- dω
2π 2π jω + 2 2π ω +2
2 2
10.
sin ωT
A [ u 0 ( t + T ) – u 0 ( t – T ) ] ⇔ 2AT ---------------
ωT
F(ω)
f(t)
A
−π π
t −2π 2π ωT
−T 0 T 0
∫– ∞ ∫ – T A dt
2 2
W total = f ( t ) dt =
Next, we denote the energy in the frequency interval – π ⁄ T rad ≤ ω ≤ π ⁄ T rad as W out in the fre-
But the last integral in (2) is an improper integral and does not appear in tables of integrals*.
We will attempt to simplify (2) using integration by parts. We start with the familiar
d ( uv ) = udv + vdu
from which
∫ d ( uv ) = ∫ u dv + ∫ vdu
or
∫ u dv = uv – vdu ∫
2 2
Letting u = sin y and dv = 1 ⁄ y it follows that du = 2 cos y sin y = sin 2y and v = – 1 ⁄ y .
With these substitutions (2) is written as
π
2 2 π 2 π
4A T sin y –1 4A T sin 2y
W out = ------------- ------------
π –y
– ∫0 ------ sin 2y dy = ------------- 0 +
y π ∫0 ------------- dy
y
0
(3)
2 π 2 π
4A T sin 2y 8A T sin 2y
= 2 ⋅ -------------
π ∫0 -------------
2y
dy = -------------
π ∫0 -------------
2y
dy
The last integral in (3) is also an improper integral. Fortunately, some handbooks of mathematical
tables include numerical values of the integral
π
sin x
∫0 ---------
x
- dx
From Table 5.3 of Handbook of Mathematical Functions, 1972 Edition, Dover Publications, with
πx = 2π or x = 2 we get
2π
sin w
∫0 ----------- dw
w
= 1.418
x=2
and thus (4) reduces to
2
4A T
W out = ------------- 1.418
π
Therefore, the percentage of the output for the frequency interval – π ⁄ T rad ≤ ω ≤ π ⁄ T rad is
2
W out ( 4A T ⁄ π ) ⋅ 1.418 2 × 1.418
- × 100% = ------------------------------------------- × 100% = ---------------------- × 100% = 90.3%
-------------
W total 2A T
2 π
Since this computation involves numerical integration, we can obtain the same result much faster
and easier with MATLAB as follows:
First, we define the function fourierxfm1 and we save it as an .m file as shown below. This file
must be created with MATLAB’s editor (or any other editor) and saved as an .m file in a drive that
has been added to MATLAB’s path.
function y1=fourierxfm1(x)
x=x+(x==0)*eps;% This statement avoids the sin(0)/0 value.
% It says that if x=0, then (x==0) = 1
% but if x is not zero, then (x==0) = 0
% and eps is approximately equal to 2.2e-16
% It is used to avoid division by zero.
y1=sin(x)./x;
Then, using MATLAB’s command window, we write and execute the program below.
% The quad function below performs numerical integration from 0 to 2*pi
% using a form of Simpson's rule of numerical integration.
value1=quad('fourierxfm1',0,2*pi)
value1 =
1.4182
We could also have used numerical integration with the integral
π 2
sin x
∫0 ------------ dx
x
2
thereby avoiding the integration by parts procedure. Shown below are the function fourierxfm2
which is saved as an .m file and the program execution using this function.
function y2=fourierxfm2(x)
x=x+(x==0)*eps;
y2=(sin(x)./x).^2;
and after this file is saved, we execute the statement below observing that the limits of integration
are from 0 to π .
value2=quad('fourierxfm2',0,pi)
value2 =
1.4182
his chapter is devoted to discrete time systems, and introduces the one-sided Z Transform.
T The definition, theorems, and properties are discussed, and the Z transforms of the most
common discrete time functions are derived. The discrete transfer function is also defined,
and several examples are given to illustrate its application. The Inverse Z transform, and the meth-
ods available for finding it, are also discussed.
∞ –n
F(z) = ∑ f [ n ]z (9.1)
n=0
1 k–1
j2π °∫
f [ n ] = -------- F ( z )z dz (9.2)
We can obtain a discrete time waveform from an analog (continuous or with a finite number of dis-
continuities) signal, by multiplying it by a train of impulses. We denote the continuous signal as f ( t ) ,
and the impulses as
∞
δ[n] = ∑
n=0
δ [ t – nT ] (9.3)
f(t)
t
0 (a)
δ[n]
n
0 (b)
f ( t )δ [ n ]
t
0 (c)
Next, we recall from Chapter 2, that the t – domain to s – domain transform pairs for the delta
– sT
function are δ ( t ) ⇔ 1 and δ ( t – T ) ⇔ e . Therefore, taking the Laplace transform of both sides of
(9.5), and, for simplicity, letting f [ nT ] = f [ n ] , we get
⎧ ∞ ⎫ ∞
– nsT
∞
– nsT
G(s) = L ⎨ f [n]
⎩ n=0
∑
δ [ t – nT ] ⎬ = f [ n ]
⎭ n=0
e ∑= ∑ f [n]e
n=0
(9.6)
and
–1
f [n] = Z {F(z)} (9.8)
The function F ( z ) , as defined in (9.1), is a series of complex numbers and converges outside the cir-
cle of radius R , that is, it converges (approaches a limit) when z > R . In complex variables theory,
the radius R is known as the radius of absolute convergence.
In the complex z – plane the region of convergence is the set of z for which the magnitude of F ( z )
is finite, and the region of divergence is the set of z for which the magnitude of F ( z ) is infinite.
af 1 [ n ] + bf 2 [ n ] + cf 3 [ n ] + … ⇔ aF 1 ( z ) + bF 2 ( z ) + cF 3 ( z ) + … (9.9)
–m
f [ n – m ]u 0 [ n – m ] ⇔ z F(z) (9.10)
Proof:
Applying the definition of the Z transform, we get
∞ –n
Z { f [ n – m ]u 0 [ n – m ] } = ∑ f [ n – m ]u0 [ n – m ]z
n=0
and since u 0 [ n – m ] = 0 for n < m and u 0 [ n – m ] = 1 for n > m , the above expression reduces to
∞ –n
Z { f [n – m]} = ∑ f [ n – m ]z
n=0
m–1
–m –n
f [n – m] ⇔ z F(z ) + ∑ f [ n – m ]z (9.11)
n=0
Proof:
By application of the definition of the Z transform, we get
∞ –n
Z { f [n – m]} = ∑ f [ n – m ]z
n=0
m–1 m–1
–m m–n –m –n
Z { f [n – m]} = z F(z) + ∑ f [ n – m ]z = z F(z ) + ∑ f [ n – m ]z
n=0 n=0
and this is the same as (9.11).
For m = 1 , (9.11) reduces to
–1
f [ n – 1 ] ⇔ z F ( z ) + f [ –1 ] (9.12)
–2 –1
f [ n – 2 ] ⇔ z F ( z ) + f [ –2 ] + z f [ –1 ] (9.13)
that is, if f [ n ] is a discrete time signal, and m is a positive integer, the mth left shift of f [ n ] is
f [n + m] .
Proof:
∞
–n
Z { f [n + m]} = ∑ f [ n + m ]z
n=0
Z { f [ n + 1 ] } = zF ( z ) – f [ 0 ]z (9.15)
2 2
Z { f [ n + 2 ] } = z F ( z ) – f [ 0 ]z – f [ 1 ]z (9.16)
n
5. Multiplication by a in the Discrete Time Domain
a f [ n ] ⇔ F ⎛ --- ⎞
n z
⎝a⎠
(9.17)
Proof:
∞ ∞ ∞ z –n
f [ n ] ⎛ --- ⎞ = F ⎛ --- ⎞
–n 1 z
------- f [ n ]z – n =
∑ ∑ ∑
n n
Z {a f [n]} = a f [ n ]z = –n ⎝ a⎠ ⎝ a⎠
a k=0
k=0 k=0
– naT
6. Multiplication by e in the Discrete Time Domain
– naT aT
e f [n ] ⇔ F( e z) (9.18)
Proof:
– naT ∞ – naT –n ∞ –n
∑e ∑ f [n](e
aT aT
Z {e f [n]} = f [n] z = z) = F(e z )
k=0 k=0
d
nf [ n ] ⇔ – z ----- F ( z )
dz
2 (9.19)
2 d 2d
n f [ n ] ⇔ z ----- F ( z ) + z -------2- F ( z )
dz dz
Proof:
By definition,
∞ –n
F(z) = ∑ f [ n ]z
n=0
and taking the first derivative of both sides with respect to z we get
d ∞ –n–1 –1 ∞ –n
----- F ( z ) =
dz ∑ ( – n )f [ n ]z = –z ∑ nf [ n ]z
n=0 n=0
that is, the Z transform of the sum of the values of a signal, is equal to z ⁄ ( z – 1 ) times the Z
transform of the signal. This property is equivalent to time integration in the continuous time
domain since integration in the discrete time domain is summation. We will see on the next sec-
tion that the term z ⁄ ( z – 1 ) is the Z transform of the discrete unit step function u 0 [ n ] , and
recalling that in the s – domain
1
u 0 ( t ) ⇔ ---
s
and
t
F(s)
∫0 f ( τ ) dτ ⇔ ----------
s
-
Since the summation symbol in (9.21) is y [ n ] , then the summation symbol in (9.22) is y [ n – 1 ] ,
and thus we can write (9.22) as
y[ n] = y[ n – 1] + x[n] (9.23)
Next, we take the Z transform of both sides of (9.23), and using the property
–m
x [ n – m ]u 0 [ n – m ] ⇔ z X(z)
we get
–1
Y(z) = z Y(z) + X(z)
or
1 - z
Y ( z ) = --------------- X ( z ) = ----------- X ( z )
1–z
–1 z–1
We will now prove that convolution in the discrete time domain corresponds to multiplication in
the Z domain, that is,
f 1 [ n ]∗ f 2 [ n ] ⇔ F 1 ( z ) ⋅ F 2 ( z ) (9.26)
Proof:
Taking the Z transform of both sides of (9.24), we get
⎧ ∞ ⎫ ∞ ∞
–n
Y(z) = Z ⎨
⎩m=0
∑
x [ m ]h [ n – m ] ⎬ =
⎭
∑ ∑ x [ m ]h [ n – m ] z
n=0 m=0
°∫ xF ( v )F ⎝ -v-⎠ v
1
f 1 [ n ] ⋅ f 2 [ n ] ⇔ -------- ⎛ ⎞ z –1
dv (9.28)
1 2
j 2π
f [ 0 ] = lim X ( z ) (9.29)
z→∞
Proof:
For all n ≥ 1 , as z → ∞
–n 1
z = ---- → 0
n
z
–n
and under these conditions f [ n ]z → 0 also. Taking the limit as z → ∞ in the expression
∞ –n
F(z) = ∑ f [n] z
n=0
we observe that the only non-zero value in the summation is that of n = 0 . Then,
∞ –n –0
∑ f [ n ]z = f [0] z = f [0]
n=0
Therefore,
lim F ( z ) = f [ 0 ]
z→∞
Proof:
Let us consider the Z transform of the sequence f [ n + 1 ] – f [ n ] , i.e.,
∞ –n
Z { f [n + 1] – f [n]} = ∑ ( f [ n + 1 ] – f [ n ] )z
n=0
We replace the upper limit of the summation with k, and we let k → ∞ . Then,
k
–n
Z { f [ n + 1 ] – f [ n ] } = lim
k→∞ ∑ ( f [ n + 1 ] – f [ n ] )z (9.31)
n=0
From (9.15),
Z { f [ n + 1 ] } = zF ( z ) – f [ 0 ]z (9.32)
and by substitution of (9.32) into (9.31), we get
k
–n
zF ( z ) – f [ 0 ]z – F ( z ) = lim
k→∞ ∑ ( f [ n + 1 ] – f [ n ] )z
n=0
⎧ ⎫
k
–n
lim { ( z – 1 )F ( z ) – f [0 ]z } = lim ⎨ lim
z→1 z → 1 ⎩k → ∞ ∑ ( f [ n + 1 ] – f [ n ] )z ⎬
⎭
n=0
k
–n
= lim
k→∞ ∑ zlim
→1
( f [ n + 1 ] – f [ n ] )z
n=0
k
–n
lim ( z – 1 )F ( z ) – lim f [ 0 ]z = lim
z→1 z→1 k→∞ ∑ zlim
→1
{ f [n + 1] – f [n]} z
n=0
k
lim ( z – 1 )F ( z ) – f [ 0 ] = lim
z→1 k→∞ ∑
n=0
{f [n + 1]} – f [n]
= lim { f [ k ] – f [ 0 ] } = lim f [ k ] – f [ 0 ]
k→∞ k→∞
lim f [ k ] = lim ( z – 1 )F ( z )
k→∞ z→1
We must remember, however, that if the sequence f [ n ] does not approach a limit, the final value
theorem is invalid. The right side of (9.30) may exist even though f [ n ] does not approach a limit.
In instances where we cannot determine whether f [ n ] exists or not, we can be certain that it
exists, if X ( z ) can be expressed in a proper rational form as
X(z) = A ( z )-
----------
B(z)
Example 9.1
Find the Z transform of the geometric sequence defined as
⎧0 n = – 1 , – 2, – 3, …
f [n] = ⎨ n (9.33)
⎩a n = 0, 1, 2, 3, …
Left Shift f [n + m] –1
–n
∑
m
z F(z ) + f [ n + m ]z
n = –m
Multiplication by a n n
a f [n] F ⎛ --- ⎞
z
⎝a ⎠
– naT – naT aT
Multiplication by e e f [n] F( e z)
Multiplication by n nf [ n ] d
– z ----- F ( z )
dz
Multiplication by n 2 2
n f [n] d 2d
2
z ----- F ( z ) + z -------2- F ( z )
dz dz
Summation in Time n
⎛ ----------
z ⎞
∑ f [m]
m=0
⎝z – 1⎠
- F(z)
Time Convolution f 1 [ n ]∗ f 2 [ n ] F1 ( z ) ⋅ F2 ( z )
Frequency Convolution f1 [ n ] ⋅ f2 [ n ]
xF 1 ( v )F 2 ⎛ --⎞ v dv
1 z –1
∫
j 2π °
--------
⎝ v⎠
Initial Value Theorem f [ 0 ] = lim F ( z )
z→∞
Final Value Theorem lim f [ n ] = lim ( z – 1 ) F ( z )
n→∞ z→1
Solution:
From the definition of the Z transform,
∞ –n ∞ n –n ∞ –1 n
F( z) = ∑ f [ n ]z = ∑ a z = ∑ ( az ) (9.34)
n=0 n=0 n=0
To evaluate this infinite summation, we form a truncated version of F ( z ) which contains the first k
terms of the series. We denote this truncated version as F k ( z ) . Then,
k–1
n –n –1 2 –2 k – 1 –( k – 1 )
Fk ( z ) = ∑a z = 1 + az +a z +…+a z (9.35)
n=0
k –k –1 k
1–a z 1 – ( az )
F k ( z ) = --------------------- = -------------------------- (9.37)
–1 –1
1 – az 1 – az
–1
for az ≠ 1
–1 k
To determine F ( z ) from F k ( z ) , we examine the behavior of the term ( az ) in the numerator of
–1 –1 k –1 –1 jθ
(9.37). We write the terms az and ( az ) in polar form, that is, az = az e and
–1 k – 1 k jkθ
( az ) = az e (9.38)
–1
From (9.38) we observe that, for the values of z for which az < 1 , the magnitude of the complex
–1 k
number ( az ) → 0 as k → ∞ and therefore,
1 z
F ( z ) = lim F k ( z ) = ------------------- = ----------- (9.39)
k→∞ 1 – az
– 1 z–a
–1
for az <1
–1 –1 k
For the values of z for which az > 1 , the magnitude of the complex number ( az ) becomes
–1
unbounded as k → ∞ , and therefore, F ( z ) = lim F k ( z ) is unbounded for az > 1.
k→∞
In summary,
∞ –1 n
F( z) = ∑ ( az )
n=0
–1 –1
converges to the complex number z ⁄ ( z – a ) for az < 1 , and diverges for az >1.
Also, since
–1 a a
az = --- = -----
z z
–1 –1
then, az < 1 implies that z > a , while az > 1 implies z < a and thus,
⎧ z
∞ n –n ⎪ ----------- for z > a
∑a z
n
Z { a u0 [ n ] } = = ⎨ z – a (9.40)
n=0 ⎪ unbounded for z < a
⎩
The regions of convergence and divergence for the sequence of (9.40) are shown in Figure 9.2.
Im[z]
Region of
Convergence
z
F ( z ) = -----------
z–a
Region of |a|
Divergence Re[z]
F(z) → ∞
n
Figure 9.2. Regions of convergence and divergence for the geometric sequence a
To determine whether the circumference of the circle, where z = a |, lies in the region of conver-
gence or divergence, we evaluate the sequence F k ( z ) at z = a . Then,
k–1
n –n –1 2 –2 k – 1 –( k – 1 )
Fk ( z ) = ∑a z
n=0
= 1 + az +a z +…+a z
(9.41)
z=a
= 1+1+1+…+1 = k
We see that this sequence becomes unbounded as k → ∞ , and therefore, the circumference of the
circle lies in the region of divergence.
Example 9.2
Find the Z transform of the discrete unit step function u 0 [ n ] shown in Figure 9.3.
u0 [ n ]
⎧0 n<0 1
u0 [ n ] = ⎨ ....
⎩1 n≥0
0
n
Solution:
From the definition of the Z transform,
∞ –n ∞ –n
F(z) = ∑ f [ n ]z = ∑ [ 1 ]z (9.42)
n=0 n=0
As in the previous example, to evaluate this infinite summation, we form a truncated version of
F ( z ) which contains the first k terms of the series, and we denote this truncated version as F k ( z ) .
Then,
k–1
–n –1 –2 –( k – 1 )
Fk ( z ) = ∑z = 1+z +z +…+z (9.43)
n=0
and we observe that as k → ∞ , (9.43) becomes the same as (9.42). To express (9.43) in a closed
–1
form, we multiply both sides by z and we get
–1 –1 –2 –3 –k
z Fk ( z ) = z +z +z +…+z (9.44)
–1
for z ≠ 1
–1 k – 1 k jkθ –1 k
Since ( z ) = z e , as k → ∞ , ( z ) → 0 . Therefore,
1 z
F ( z ) = lim F k ( z ) = ---------------- = ----------- (9.46)
k→∞ 1–z
– 1 z–1
for z > 1 , and the region of convergence lies outside the unit circle.
Alternate Solution:
n n
The discrete unit step u 0 [ n ] is a special case of the sequence a with a = 1 , and since 1 = 1 , by
substitution into (9.40) we get
⎧ z
∞ –n ⎪ ----------- for z > 1
Z { u0 [ n ] } = ∑ [ 1 ]z = ⎨ z–1
⎪ unbounded for z < 1
(9.47)
n=0
⎩
Example 9.3
– naT
Find the Z transform of the discrete exponential sequence f [ n ] = e
Solution:
∞
– naT – n – aT – 1 – 2aT – 2 – 3aT – 3
F( z) = ∑e
n=0
z = 1+e z +e z +e z +…
– naT 1 z
Z [e ] = -------------------------- = -----------------
– aT
- (9.48)
– aT – 1
1–e z z – e
– aT – 1
for e z <1
Example 9.4
Find the Z transform of the discrete time functions f 1 [ n ] = cos n aT and f 2 [ n ] = sin naT
Solution:
From (9.48) of Example 9.3,
– naT z
e ⇔ -----------------
– aT
-
z–e
and replacing – naT with jnaT we get
jnaT z
Z [e ] = Z [ cos naT + j sin naT ] = -------------------
jaT
z–e
– j aT
z z–e
= Z [ cos naT ] + jZ [ sin n aT ] = ------------------
jaT
- ⋅ --------------------
– j aT
z–e z–e
2
z – z cos aT + jz sin aT
= Z [ cos naT ] + jZ [ sin n aT ] = ------------------------------------------------------
2
z – 2z cos aT + 1
Im[z] Region of
Convergence
×Pole
1
Re[z]
Region of
Divergence
×Pole
Figure 9.4. Regions of convergence and divergence for cos n aT and sin n aT
From Figure 9.4, we see that the poles separate the regions of convergence and divergence. Also,
since the circumference of the circle lies on the region of divergence, as we have seen before, the
poles lie on the region of divergence. Therefore, for the discrete time cosine and sine functions we
have the pairs
z 2 – z cos aT
cos naT ⇔ ----------------------------------------- for z > 1 (9.52)
z 2 – 2z cos aT + 1
and
z sin a T
sin n aT ⇔ -----------------------------------------
2
for z > 1 (9.53)
z – 2z cos aT + 1
It is shown in complex variables theory that if F ( z ) is a proper rational function*, all poles lie outside
the region of convergence, but the zeros can lie anywhere on the z -plane.
Example 9.5
Find the Z transform of discrete unit ramp f [ n ] = nu 0 [ n ] .
Solution:
∞ –n –1 –2 –3
Z { nu 0 [ n ] } = ∑ nz = 0+z + 2z + 3z +… (9.54)
n=0
We can express (9.54) in closed form using the discrete unit step function transform pair
∞ –n z
Z { u0 [ n ] } = ∑ ( 1 )z = ----------- for
z–1
z > 1 (9.55)
n=0
We summarize the transform pairs we have derived, and others, given as exercises at the end of this
chapter, in Table 9.2.
f[n] F(z)
δ[n] 1
δ[n – m ] z
–m
n z
a u0 [ n ] ----------- z >a
z–a
u0 [ n ] z
----------- z >1
z–1
– naT z
(e )u 0 [ n ] ------------------ e
– aT – 1
z <1
– aT
z–e
( cos naT )u 0 [ n ] 2
z – z cos aT
-----------------------------------------
2
z >1
z – 2z cos aT + 1
( sin n aT )u 0 [ n ] z sin a T
----------------------------------------- z > 1
2
z – 2z cos aT + 1
n 2
( a cos naT )u 0 [ n ] z – az cos aT
-----------------------------------------------
2 2
z >a
z – 2az cos aT + a
n az sin a T
( a sin naT )u 0 [ n ] ----------------------------------------------- z >a
2 2
z – 2az cos aT + a
u0 [ n ] –u0 [ n – m ] m
z –1
----------------------------
m–1
z (z – 1)
nu 0 [ n ] z ⁄ (z – 1)
2
3
2
n u0 [ n ] z(z + 1) ⁄ (z – 1)
[ n + 1 ]u 0 [ n ] 2
z ⁄ (z – 1)
2
n 2
a nu 0 [ n ] ( az ) ⁄ ( z – a )
n 2 3
a n u0 [ n ] az ( z + a ) ⁄ ( z – a )
n 2 3
a n [ n + 1 ]u 0 [ n ] 2az ⁄ ( z – a )
where C is a contour enclosing all singularities (poles) of F ( s ) , and v is a dummy variable for s . We
can compute the Z transform of a discrete time function f [ n ] using the transformation
F ( z ) = F∗ ( s ) sT
(9.58)
z=e
F(s) F(s)
F(z) = ∑ Res 1-----------------------
–z e
– 1 sT
= lim ( s – p k ) -----------------------
s → pk 1–z e
– 1 sT
(9.60)
k s = pk
Example 9.6
Derive the Z transform of the discrete unit step function u 0 [ n ] using the residue theorem.
Solution:
We learned in Chapter 2, that
L [ u0 ( t ) ] = 1 ⁄ s
Then, by residue theorem of (9.60),
F(s) 1⁄s
F ( z ) = lim ( s – p k ) -----------------------
– 1 sT
= lim ( s – 0 ) -----------------------
– 1 sT
s → pk 1–z e s→0 1–z e
1⁄s 1 1 z
= lim s ----------------------- = lim ------------------------
- = ---------------- = -----------
s → 0 1 – z – 1 e sT s → 0 1 – z – 1 e sT 1–z
–1 z – 1
* This section may be skipped without loss of continuity. It is intended for readers who have prior knowledge of
complex variables theory. However, the following examples will show that this procedure is not difficult.
Example 9.7
– naT
Derive the Z transform of the discrete exponential function e u 0 [ n ] using the residue theorem.
Solution:
From Chapter 2,
– at 1-
L [e u 0 ( t ) ] = ----------
s+a
Then, by residue theorem of (9.60),
F(s) 1 ⁄ (s + a)
F ( z ) = lim ( s – p k ) -----------------------
– 1 sT
= lim ( s + a ) -----------------------
– 1 sT
s → pk 1–z e s → – a 1–z e
1
= lim ------------------------ 1 z
- = -------------------------- = ------------------
s → –a 1 – z e – 1 sT – 1 – a T –a T
1–z e z–e
Example 9.8
Derive the Z transform of the discrete unit ramp function nu 0 [ n ] using the residue theorem.
Solution:
From Chapter 2,
2
L [ tu 0 ( t ) ] = 1 ⁄ s
Since F ( s ) has a second order pole at s = 0 , we need to apply the residue theorem applicable to a
pole of order n. This theorem states that
n–1
F( s)
F ( z ) = lim ⎛ ------------------⎞ ( s – p k ) -------------
1 d
- ----------------------- (9.61)
s → pk⎝ ( n – 1 )! ⎠ n – 1 – 1 sT
ds 1–z e
and
∞ –n
F( z) = ∑ f [ n ]z (9.63)
n=0
sT
z = e (9.65)
and
1
s = --- ln z (9.66)
T
Therefore,
F(z ) = G(s) 1
(9.67)
s = --- ln z
T
Since s , and z are both complex variables, relation (9.67) allows the mapping (transformation) of
regions of the s -plane into the z -plane. We find this transformation by recalling that s = σ + jω and
therefore, expressing z in magnitude-phase form and using (9.65), we get
jθ σT jωT
z = z ∠θ = z e = e e (9.68)
where,
σT
z = e (9.69)
and
θ = ωT (9.70)
Since
T = 1 ⁄ fs
T = ( 2π ) ⁄ ω s
Therefore, we express (9.70) as
2π ω
θ = ω ------ = 2π ------ (9.71)
ωs ωs
σT j2π ( ω ⁄ ω s )
z = e e (9.72)
The quantity e j2π ( ω ⁄ ωs ) in (9.72), defines the unity circle; therefore, let us examine the behavior of
z when σ is negative, zero, or positive.
Case I σ < 0 : When σ is negative, from (9.69), we see that z < 1 , and thus the left half of the s -
plane maps inside the unit circle of the z -plane, and for different negative values of
σ , we get concentric circles with radius less than unity.
Case II σ > 0 : When σ is positive, from (9.69), we see that z > 1 , and thus the right half of the s -
plane maps outside the unit circle of the z -plane, and for different positive values of
σ we get concentric circles with radius greater than unity.
Case III σ = 0 : When σ is zero, from (9.72), we see that z = e j2π ( ω ⁄ ωs ) and all values of ω lie
on the circumference of the unit circle. For illustration purposes, we have mapped
several fractional values of the sampling radian frequency ω s , and these are shown
in Table 9.3.
From Table 9.3, we see that the portion of the jω axis for the interval 0 ≤ ω ≤ ω s in the s -plane,
maps on the circumference of the unit circle in the z -plane as shown in Figure 9.5.
The mapping from the z -plane to the s -plane is a multi-valued transformation since, as we have
1
seen, s = --- ln z and it is shown in complex variables textbooks that
T
ln z = ln z + j2nπ (9.73)
ωs ⁄ 4 1 π⁄2
3ω s ⁄ 8 1 3π ⁄ 4
ωs ⁄ 2 1 π
5ω s ⁄ 8 1 5π ⁄ 4
3ω s ⁄ 4 1 3π ⁄ 2
7ω s ⁄ 8 1 7π ⁄ 4
ωs 1 2π
Figure 9.5. Mapping of the s-plane into the z-plane: the jω axis (σ = 0) maps onto the unit circle |z| = 1, the left half plane (σ < 0) maps inside the unit circle, and the right half plane (σ > 0) maps outside it.
where k is a constant, and r i and p i represent the residues and poles respectively; these can be real
or complex.
Before we expand F ( z ) into partial fractions, we must express it as a proper rational function. This
is done by expanding F ( z ) ⁄ z instead of F ( z ) , that is, we expand it as
$\frac{F(z)}{z} = \frac{k}{z} + \frac{r_1}{z-p_1} + \frac{r_2}{z-p_2} + \dots$   (9.75)
Then, multiplying both sides by $z$, we rewrite (9.75) as
$F(z) = k + \frac{r_1 z}{z-p_1} + \frac{r_2 z}{z-p_2} + \dots$   (9.77)
Example 9.9
Use the partial fraction expansion method to compute the Inverse Z transform of
$F(z) = \frac{1}{(1-0.5z^{-1})(1-0.75z^{-1})(1-z^{-1})}$   (9.78)
Solution:
We multiply both the numerator and denominator by $z^3$ to eliminate the negative powers of $z$. Then,
$F(z) = \frac{z^3}{(z-0.5)(z-0.75)(z-1)}$
$\frac{F(z)}{z} = \frac{z^2}{(z-0.5)(z-0.75)(z-1)} = \frac{r_1}{z-0.5} + \frac{r_2}{z-0.75} + \frac{r_3}{z-1}$
The residues are
$r_1 = \frac{z^2}{(z-0.75)(z-1)}\bigg|_{z=0.5} = \frac{(0.5)^2}{(0.5-0.75)(0.5-1)} = 2$
$r_2 = \frac{z^2}{(z-0.5)(z-1)}\bigg|_{z=0.75} = \frac{(0.75)^2}{(0.75-0.5)(0.75-1)} = -9$
$r_3 = \frac{z^2}{(z-0.5)(z-0.75)}\bigg|_{z=1} = \frac{1}{(1-0.5)(1-0.75)} = 8$
Then,
$\frac{F(z)}{z} = \frac{z^2}{(z-0.5)(z-0.75)(z-1)} = \frac{2}{z-0.5} + \frac{-9}{z-0.75} + \frac{8}{z-1}$
Multiplying each term by $z$ and recalling that $u_0[n] \Leftrightarrow \frac{z}{z-1}$ and $a^n u_0[n] \Leftrightarrow \frac{z}{z-a}$, we obtain the discrete time sequence
$f[n] = 2(0.5)^n - 9(0.75)^n + 8$
The first few values are $f[0] = 1$, $f[1] = 2.25$, $f[2] = 3.438$, $f[3] = 4.453$, and the sequence approaches 8 as $n$ increases.
Figure 9.6. The discrete time sequence $f[n] = 2(0.5)^n - 9(0.75)^n + 8$
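As in the examples that follow, we can check this result with MATLAB's symbolic iztrans function. This is a minimal sketch; the output should be equivalent to $2(1/2)^n - 9(3/4)^n + 8$.
syms n z
Fz = z^3/((z-0.5)*(z-0.75)*(z-1));   % F(z) of Example 9.9 in positive powers of z
fn = iztrans(Fz)                     % expect an expression equivalent to 2*(1/2)^n - 9*(3/4)^n + 8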
Example 9.10
Use the partial fraction expansion method to compute the Inverse Z transform of
$F(z) = \frac{12z}{(z+1)(z-1)^2}$   (9.81)
Solution:
Division of both sides by $z$ and partial fraction expansion yields
$\frac{F(z)}{z} = \frac{12}{(z+1)(z-1)^2} = \frac{r_1}{z+1} + \frac{r_2}{(z-1)^2} + \frac{r_3}{z-1}$
$r_3 = \frac{d}{dz}\left(\frac{12}{z+1}\right)\bigg|_{z=1} = \frac{-12}{(z+1)^2}\bigg|_{z=1} = -3$
Then,
$\frac{F(z)}{z} = \frac{12}{(z+1)(z-1)^2} = \frac{3}{z+1} + \frac{6}{(z-1)^2} + \frac{-3}{z-1}$
or
$F(z) = \frac{12z}{(z+1)(z-1)^2} = \frac{3z}{z-(-1)} + \frac{6z}{(z-1)^2} + \frac{-3z}{z-1}$
Now, we recall that
$u_0[n] \Leftrightarrow \frac{z}{z-1}$
and
$nu_0[n] \Leftrightarrow \frac{z}{(z-1)^2}$
for $|z| > 1$.
Therefore, the discrete time sequence is
$f[n] = 3(-1)^n + 6n - 3$   (9.82)
Check with MATLAB:
syms n z; fn=3*(−1)^n+6*n−3; Fz=ztrans(fn); simple(Fz)
ans =
12*z/(z+1)/(z-1)^2
We can also use the MATLAB dimpulse function to compute and display f [ n ] for any range of val-
ues of n . The following code will display the first 20 values of f [ n ] in (9.82).
% First, we must express the denominator of F(z) as a polynomial
denpol=collect((z+1)*((z−1)^2))
denpol =
z^3-z^2-z+1
num=[12 0]; % The coefficients of the numerator of F(z) in (9.81)
den=[1 −1 −1 1]; % The coefficients of the denominator in polynomial form
fn=dimpulse(num,den,20) % Compute the first 20 values of f[n]
fn =
0
0
12
12
24
24
36
36
48
48
60
60
72
72
84
84
96
96
108
108
The MATLAB function dimpulse(num,den) plots the impulse response of the polynomial trans-
fer function G ( z ) = num ( z ) ⁄ den ( z ) where num ( z ) and den ( z ) contain the polynomial coeffi-
cients in descending powers of z . Thus, the MATLAB code
num=[0 0 12 0]; den=[1 −1 −1 1]; dimpulse(num,den)
displays the plot of Figure 9.7.
Example 9.11
Use the partial fraction expansion method to compute the Inverse Z transform of
$F(z) = \frac{z+1}{(z-1)(z^2+2z+2)}$   (9.83)
Solution:
$\frac{F(z)}{z} = \frac{z+1}{z(z-1)(z^2+2z+2)} = \frac{r_1}{z} + \frac{r_2}{z-1} + \frac{r_3}{z+1-j} + \frac{r_4}{z+1+j}$   (9.84)
Recalling that
$\delta[n] \Leftrightarrow 1$
and
$a^n u_0[n] \Leftrightarrow \frac{z}{z-a}$
We will use the MATLAB dimpulse function to display the first few values of $f[n]$ in (9.85). We recall that this function requires that $F(z)$ be expressed as a ratio of polynomials in descending powers of $z$.
syms n z
collect((z−1)*(z^2+2*z+2)) % First, expand denominator of given F(z)
ans =
z^3+z^2-2
The following code displays the first 10 values of f [ n ] and plots the impulse response shown in Fig-
ure 9.8.
num=[0 0 1 1]; den=[1 1 0 −2]; fn=dimpulse(num,den,11), dimpulse(num,den,16)
fn =
0
0
1
0
0
2
-2
2
-6
10
$f[n] = \frac{1}{j2\pi}\oint_C F(z)z^{n-1}\,dz$   (9.86)
where $C$ is a closed curve that encloses all poles of the integrand, and by Cauchy's residue theorem, this integral can be expressed as
$f[n] = \sum_k \mathrm{Res}\big[F(z)z^{n-1}\big]_{z=p_k}$   (9.87)
where $p_k$ represents a pole of $F(z)z^{n-1}$ and $\mathrm{Res}\big[F(z)z^{n-1}\big]$ represents the residue at $z = p_k$.
Example 9.12
Use the inversion integral method to find the Inverse Z transform of
$F(z) = \frac{1+2z^{-1}+z^{-3}}{(1-z^{-1})(1-0.75z^{-1})}$   (9.88)
Solution:
Multiplication of the numerator and denominator by $z^3$ yields
$F(z) = \frac{z^3+2z^2+1}{z(z-1)(z-0.75)}$   (9.89)
For $n = 0$, (9.87) yields
$f[0] = \sum_k \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.75)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.75)}\right]_{z=0} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.75)}\right]_{z=1} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.75)}\right]_{z=0.75}$   (9.91)
The first term on the right side of (9.91) has a pole of order 2 at z = 0 ; therefore, we must evaluate
the first derivative of
$\frac{z^3+2z^2+1}{(z-1)(z-0.75)}$
Evaluating (9.91), the derivative term plus the two simple-pole residues gives
$f[0] = \frac{28}{9} + 16 - \frac{163}{9} = 1$   (9.92)
For $n = 1$, (9.87) reduces to
$f[1] = \sum_k \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.75)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.75)}\right]_{z=0} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.75)}\right]_{z=1} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.75)}\right]_{z=0.75}$   (9.93)
$= \frac{z^3+2z^2+1}{(z-1)(z-0.75)}\bigg|_{z=0} + \frac{z^3+2z^2+1}{z(z-0.75)}\bigg|_{z=1} + \frac{z^3+2z^2+1}{z(z-1)}\bigg|_{z=0.75}$
$= \frac{4}{3} + 16 - \frac{163}{12} = \frac{15}{4}$
For n ≥ 2 there are no poles at z = 0 , that is, the only poles are at z = 1 and z = 0.75 . Therefore,
$f[n] = \sum_k \mathrm{Res}\left[\frac{(z^3+2z^2+1)z^{n-2}}{(z-1)(z-0.75)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{(z^3+2z^2+1)z^{n-2}}{(z-1)(z-0.75)}\right]_{z=1} + \mathrm{Res}\left[\frac{(z^3+2z^2+1)z^{n-2}}{(z-1)(z-0.75)}\right]_{z=0.75}$   (9.94)
$= \frac{(z^3+2z^2+1)z^{n-2}}{z-0.75}\bigg|_{z=1} + \frac{(z^3+2z^2+1)z^{n-2}}{z-1}\bigg|_{z=0.75}$
for n ≥ 2 .
From (9.94), we observe that for all values of $n \ge 2$ the exponential factor $z^{n-2}$ is always unity for $z = 1$, but varies for values $z \ne 1$. Then,
$f[n] = \frac{z^3+2z^2+1}{z-0.75}\bigg|_{z=1} + \frac{(z^3+2z^2+1)z^{n-2}}{z-1}\bigg|_{z=0.75}$
$= \frac{4}{0.25} + \frac{[(0.75)^3+2(0.75)^2+1](0.75)^{n-2}}{-0.25}$   (9.95)
$= 16 + \frac{(163/64)(0.75)^n}{(-0.25)(0.75)^2} = 16 - \frac{163}{9}(0.75)^n$
for n ≥ 2 .
Combining the results for $n = 0$, $n = 1$, and $n \ge 2$, we can express $f[n]$ for all $n \ge 0$ as
$f[n] = \frac{28}{9}\delta[n] + \frac{4}{3}\delta[n-1] + 16 - \frac{163}{9}(0.75)^n$   (9.96)
where the coefficients of δ [ n ] and δ [ n – 1 ] are the residues that were found in (9.92) and (9.93) for
n = 0 and n = 1 respectively at z = 0 . The coefficient 28 ⁄ 9 is multiplied by δ [ n ] to emphasize
that this value exists only for n = 0 and coefficient 4 ⁄ 3 is multiplied by δ [ n – 1 ] to emphasize that
this value exists only for n = 1 .
Check with MATLAB:
syms z n; Fz=(z^3+2*z^2+1)/(z*(z−1)*(z−0.75)); iztrans(Fz)
ans =
4/3*charfcn[1](n)+28/9*charfcn[0](n)+16-163/9*(3/4)^n
We evaluate and plot f [ n ] for the first 20 values. This is shown on the spreadsheet of Figure 9.9.
Figure 9.9. The discrete time sequence of Example 9.12: f[0] = 1, f[1] = 3.75, and f[n] = 16 − (163/9)(3/4)^n for n ≥ 2; the sequence rises from 1 toward the limiting value 16.
Example 9.13
Use the inversion integral method to find the Inverse Z transform of
$F(z) = \frac{1}{(1-z^{-1})(1-0.75z^{-1})}$   (9.97)
Solution:
Multiplication of the numerator and denominator by $z^2$ yields
$F(z) = \frac{z^2}{(z-1)(z-0.75)}$   (9.98)
This function has no poles at z = 0 . The poles are at z = 1 and z = 0.75 . Then by (9.87),
$f[n] = \sum_k \mathrm{Res}\left[\frac{z^2\,z^{n-1}}{(z-1)(z-0.75)}\right]_{z=p_k} = \sum_k \mathrm{Res}\left[\frac{z^{n+1}}{(z-1)(z-0.75)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{z^{n+1}}{(z-1)(z-0.75)}\right]_{z=1} + \mathrm{Res}\left[\frac{z^{n+1}}{(z-1)(z-0.75)}\right]_{z=0.75}$   (9.99)
$= \frac{z^{n+1}}{z-0.75}\bigg|_{z=1} + \frac{z^{n+1}}{z-1}\bigg|_{z=0.75} = \frac{1}{0.25} + \frac{(0.75)^{n+1}}{-0.25} = 4 - 3(0.75)^n$
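A quick symbolic check of this result, in the same style as the other examples; the output of iztrans should be equivalent to $4 - 3(3/4)^n$.
syms n z
Fz = z^2/((z-1)*(z-0.75));   % F(z) of Example 9.13 in positive powers of z
fn = iztrans(Fz)             % expect an expression equivalent to 4 - 3*(3/4)^n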
Example 9.14
Use the long division method to determine f [ n ] for n = 0, 1, and 2 , given that
$F(z) = \frac{1+z^{-1}+2z^{-2}+3z^{-3}}{(1-0.25z^{-1})(1-0.5z^{-1})(1-0.75z^{-1})}$   (9.100)
Solution:
First, we multiply the numerator and denominator by $z^3$, expand the denominator into a polynomial, and arrange the numerator and denominator polynomials in descending powers of $z$. Then,
$F(z) = \frac{z^3+z^2+2z+3}{(z-0.25)(z-0.5)(z-0.75)}$
Next, we use the MATLAB collect function to expand the denominator to a polynomial.
syms z; den=collect((z−0.25)*(z−0.5)*(z−0.75))
den =
z^3-3/2*z^2+11/16*z-3/32
Thus,
$F(z) = \frac{z^3+z^2+2z+3}{z^3-\frac{3}{2}z^2+\frac{11}{16}z-\frac{3}{32}}$
The long division of the numerator (dividend) by the denominator (divisor) proceeds as follows:
Divisor: $z^3-\frac{3}{2}z^2+\frac{11}{16}z-\frac{3}{32}$   Dividend: $z^3+z^2+2z+3$   Quotient: $1 + \frac{5}{2}z^{-1} + \frac{81}{16}z^{-2} + \dots$
Subtracting $1\cdot$(divisor) from the dividend leaves the first remainder $\frac{5}{2}z^2 + \frac{21}{16}z + \frac{99}{32}$; subtracting $\frac{5}{2}z^{-1}\cdot$(divisor), that is, $\frac{5}{2}z^2 - \frac{15}{4}z + \frac{55}{32} - \frac{15}{64}z^{-1}$, leaves the second remainder $\frac{81}{16}z + \dots$, and the process continues.
Therefore, $F(z) = 1 + \frac{5}{2}z^{-1} + \frac{81}{16}z^{-2} + \dots$, and by comparison with $F(z) = \sum_{n=0}^{\infty} f[n]z^{-n}$ we find $f[0] = 1$, $f[1] = \frac{5}{2} = 2.5$, and $f[2] = \frac{81}{16} \approx 5.06$.
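We can confirm the first three terms of the quotient numerically. The sketch below (an illustration, using only base MATLAB) filters a unit impulse through the transfer function num(z)/den(z); the resulting samples are the coefficients of the power series of F(z) in negative powers of z, that is, f[0], f[1], f[2].
num = [1 1 2 3];                 % numerator z^3 + z^2 + 2z + 3
den = [1 -3/2 11/16 -3/32];      % denominator (z-0.25)(z-0.5)(z-0.75) expanded
f = filter(num, den, [1 0 0])    % should give 1.0000  2.5000  5.0625 (= 81/16)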
x[n] → [ Linear Discrete Time System ] → y[n]
$y[n] + b_1 y[n-1] + b_2 y[n-2] + \dots + b_k y[n-k] = a_0 x[n] + a_1 x[n-1] + a_2 x[n-2] + \dots + a_k x[n-k]$   (9.105)
Assuming that all initial conditions are zero, taking the Z transform of both sides of (9.105), and using the Z transform pair
$f[n-m] \Leftrightarrow z^{-m}F(z)$
we get
$Y(z) + b_1 z^{-1}Y(z) + b_2 z^{-2}Y(z) + \dots + b_k z^{-k}Y(z) = a_0 X(z) + a_1 z^{-1}X(z) + a_2 z^{-2}X(z) + \dots + a_k z^{-k}X(z)$   (9.107)
or
$(1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_k z^{-k})Y(z) = (a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_k z^{-k})X(z)$   (9.108)
Therefore,
$Y(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_k z^{-k}}{1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_k z^{-k}}\,X(z)$   (9.109)
We define the discrete time system transfer function as
$H(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2} + \dots + a_k z^{-k}}{1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_k z^{-k}}$
and therefore, (9.109) can be written as
$Y(z) = H(z)X(z)$   (9.111)
The discrete impulse response h [ n ] is the response to the input x [ n ] = δ [ n ] , and since
$\mathcal{Z}\{\delta[n]\} = \sum_{n=0}^{\infty}\delta[n]z^{-n} = 1$
we can find the discrete time impulse response h [ n ] by taking the Inverse Z transform of the dis-
crete transfer function H ( z ) , that is,
$h[n] = \mathcal{Z}^{-1}\{H(z)\}$   (9.112)
Example 9.15
The difference equation describing the input-output relationship of a discrete time system with zero
initial conditions, is
y [ n ] – 0.5y [ n – 1 ] + 0.125y [ n – 2 ] = x [ n ] + x [ n – 1 ] (9.113)
Compute:
a. The transfer function H ( z )
b. The discrete time impulse response h [ n ]
c. The response when the input is the discrete unit step u 0 [ n ]
Solution:
a. Taking the Z transform of both sides of (9.113), we get
$Y(z) - 0.5z^{-1}Y(z) + 0.125z^{-2}Y(z) = X(z) + z^{-1}X(z)$
and thus
$H(z) = \frac{Y(z)}{X(z)} = \frac{1+z^{-1}}{1-0.5z^{-1}+0.125z^{-2}} = \frac{z^2+z}{z^2-0.5z+0.125}$   (9.114)
b. To obtain the discrete time impulse response h [ n ] , we need to compute the Inverse Z transform
of (9.114). We first divide both sides by z and we get:
$\frac{H(z)}{z} = \frac{z+1}{z^2-0.5z+0.125}$   (9.115)
Using the MATLAB residue function, we obtain the residues and the poles of (9.115) as follows:
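The call itself is a one-liner; the sketch below uses the coefficient vectors of the numerator and denominator of (9.115), and residue should return the poles 0.25 ± j0.25 with residues 0.5 ∓ j2.5 (the ordering of the output may differ).
num = [1 1];                    % coefficients of z + 1
den = [1 -0.5 0.125];           % coefficients of z^2 - 0.5z + 0.125
[r, p, k] = residue(num, den)   % residues r, poles p, and direct term k of (9.115)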
Recalling that
$a^n u_0[n] \Leftrightarrow \frac{z}{z-a}$
$h[n] = (0.5 - j2.5)(0.25\sqrt{2}\,e^{j45°})^n + (0.5 + j2.5)(0.25\sqrt{2}\,e^{-j45°})^n$
$= 0.5(0.25\sqrt{2})^n\big(e^{jn45°} + e^{-jn45°}\big) - j2.5(0.25\sqrt{2})^n\big(e^{jn45°} - e^{-jn45°}\big)$
or
$h[n] = \left(\frac{\sqrt{2}}{4}\right)^n(\cos n45° + 5\sin n45°)$   (9.117)
c. From $Y(z) = H(z)X(z)$, the transform $u_0[n] \Leftrightarrow \frac{z}{z-1}$, and the result of part (a), we get
$Y(z) = \frac{z^2+z}{z^2-0.5z+0.125}\cdot\frac{z}{z-1} = \frac{z(z^2+z)}{(z^2-0.5z+0.125)(z-1)}$
or
$\frac{Y(z)}{z} = \frac{z^2+z}{(z^2-0.5z+0.125)(z-1)}$   (9.118)
We will use the MATLAB residue function to compute the residues and poles of expression (9.118). First, we must express the denominator as a polynomial.
syms z; denom=(z^2−0.5*z+0.125)*(z−1); collect(denom)
ans =
z^3-3/2*z^2+5/8*z-1/8
Then,
$\frac{Y(z)}{z} = \frac{z^2+z}{z^3-(3/2)z^2+(5/8)z-1/8}$   (9.119)
p3 =
0.2500 - 0.2500i
With these values, we express (9.119) as
$\frac{Y(z)}{z} = \frac{z^2+z}{z^3-(3/2)z^2+(5/8)z-1/8} = \frac{3.2}{z-1} + \frac{-1.1+j0.3}{z-0.25-j0.25} + \frac{-1.1-j0.3}{z-0.25+j0.25}$   (9.120)
or
$Y(z) = \frac{3.2z}{z-1} + \frac{(-1.1+j0.3)z}{z-0.25-j0.25} + \frac{(-1.1-j0.3)z}{z-0.25+j0.25}$
$= \frac{3.2z}{z-1} + \frac{(-1.1+j0.3)z}{z-0.25\sqrt{2}\,e^{j45°}} + \frac{(-1.1-j0.3)z}{z-0.25\sqrt{2}\,e^{-j45°}}$   (9.121)
Recalling that
$a^n u_0[n] \Leftrightarrow \frac{z}{z-a}$
we obtain
$y[n] = 3.2 + (-1.1+j0.3)(0.25\sqrt{2}\,e^{j45°})^n + (-1.1-j0.3)(0.25\sqrt{2}\,e^{-j45°})^n = 3.2 - \left(\frac{\sqrt{2}}{4}\right)^n(2.2\cos n45° + 0.6\sin n45°)$
The plots for the discrete time sequences h [ n ] and y [ n ] are shown in Figure 9.13.
In a discrete time block diagram, the integrator is replaced by a delay device. The analogy between an
integrator and a unit delay device is shown in Figure 9.15.
Figure 9.13. The discrete time sequences h[n] and y[n] for Example 9.15: h[n] starts at h[0] = 1, h[1] = 1.5, h[2] = 0.625 and decays rapidly toward zero, while y[n] starts at y[0] = 1, y[1] = 2.5 and settles at the steady-state value 3.2.
Figure 9.15. Analogy between the integrator and the unit delay device: the continuous time integrator with input ẋ(t) and output x(t) in the s-domain corresponds to a delay element with input x[n+1] and output x[n] in the z-domain.
x1 [ n ] = y [ n ] x2 [ n ] = y [ n + 1 ] x3 [ n ] = y [ n + 2 ] (9.125)
Then,
x3 [ n + 1 ] = y [ n + 3 ]
x2 [ n + 1 ] = y [ n + 2 ] = x3 [ n ]
x1 [ n + 1 ] = y [ n + 1 ] = x2 [ n ]
$\begin{bmatrix} x_1[n+1] \\ x_2[n+1] \\ x_3[n+1] \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -5 & -2 \end{bmatrix}\begin{bmatrix} x_1[n] \\ x_2[n] \\ x_3[n] \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u[n]$   (9.126)
The discrete time state equations are written in a more compact form as
x [ n + 1 ] = Ax [ n ] + bu [ n ]
(9.128)
y [ n ] = Cx [ n ] + du [ n ]
We can use the MATLAB c2d function to convert the continuous time state space equation
x· ( t ) = Ax ( t ) + bu ( t ) (9.129)
to the discrete time state space equation
x [ n + 1 ] = A disc x [ n ] + b disc u [ n ] (9.130)
where the subscript disc stands for discrete, n indicates the present sample, and n + 1 the next.
Example 9.17
Use the MATLAB c2d function to convert the continuous time state space equation
$\dot{x}(t) = Ax(t) + bu(t)$
where
$A = \begin{bmatrix} 0 & 1 \\ -3 & -4 \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$   (9.131)
to an equivalent discrete time state space equation, using a sampling period of 0.1 second.
Solution:
Adisc=[0 1; −3 −4]; bdisc=[0 1]'; [Adisc,bdisc]=c2d(Adisc,bdisc,0.1)
Adisc =
0.9868 0.0820
-0.2460 0.6588
bdisc =
0.0044
0.0820
and therefore, the equivalent discrete time state space equation is
We can invoke the MATLAB command help d2c to obtain a detailed description of this function.
9.9 Summary
• The Z transform performs the transformation from the domain of discrete time signals, to
another domain which we call z – domain . It is used with discrete time signals, the same way the
Laplace and Fourier transforms are used with continuous time signals.
• The one-sided Z transform $F(z)$ of a discrete time function $f[n]$ is defined as
$F(z) = \sum_{n=0}^{\infty} f[n]z^{-n}$
and it is denoted as
$F(z) = \mathcal{Z}\{f[n]\}$
• The Inverse Z transform is defined as
$f[n] = \frac{1}{j2\pi}\oint_C F(z)z^{n-1}\,dz$
and it is denoted as
$f[n] = \mathcal{Z}^{-1}\{F(z)\}$
• The linearity property of the Z transform states that
$af_1[n] + bf_2[n] + cf_3[n] + \dots \Leftrightarrow aF_1(z) + bF_2(z) + cF_3(z) + \dots$
• The shifting of $f[n]u_0[n]$, where $u_0[n]$ is the discrete unit step function, produces the Z transform pair
$f[n-m]u_0[n-m] \Leftrightarrow z^{-m}F(z)$
• The right shifting of $f[n]$ allows use of non-zero values for $n < 0$ and produces the Z transform pair
$f[n-m] \Leftrightarrow z^{-m}F(z) + \sum_{n=0}^{m-1} f[n-m]z^{-n}$
• The mth left shifting of $f[n]$, where $m$ is a positive integer, produces the Z transform pair
$f[n+m] \Leftrightarrow z^{m}F(z) + \sum_{n=-m}^{-1} f[n+m]z^{-n}$
• Multiplication by $a^n$ produces the Z transform pair
$a^n f[n] \Leftrightarrow F\!\left(\frac{z}{a}\right)$
• Multiplication by $e^{-naT}$ produces the Z transform pair
$e^{-naT}f[n] \Leftrightarrow F(e^{aT}z)$
• Multiplication by $n$ and $n^2$ produces the Z transform pairs
$nf[n] \Leftrightarrow -z\frac{d}{dz}F(z)$
$n^2 f[n] \Leftrightarrow z\frac{d}{dz}F(z) + z^2\frac{d^2}{dz^2}F(z)$
• The summation property of the Z transform states that
$\sum_{m=0}^{n} f[m] \Leftrightarrow \left(\frac{z}{z-1}\right)F(z)$
• Convolution in the discrete time domain corresponds to multiplication in the z -domain, that is,
f 1 [ n ]∗ f 2 [ n ] ⇔ F 1 ( z ) ⋅ F 2 ( z )
• Multiplication in the discrete time domain corresponds to convolution in the $z$-domain, that is,
$f_1[n]\cdot f_2[n] \Leftrightarrow \frac{1}{j2\pi}\oint_C F_1(v)F_2\!\left(\frac{z}{v}\right)v^{-1}\,dv$
• The initial value theorem of the Z transform states that
$f[0] = \lim_{z\to\infty} X(z)$
• The final value theorem of the Z transform states that
$\lim_{n\to\infty} f[n] = \lim_{z\to 1}(z-1)F(z)$
• The Z transform of the geometric sequence
$f[n] = \begin{cases} 0 & n = -1, -2, -3, \dots \\ a^n & n = 0, 1, 2, 3, \dots \end{cases}$
is
$\mathcal{Z}[a^n] = \sum_{n=0}^{\infty} a^n z^{-n} = \begin{cases} \dfrac{z}{z-a} & \text{for } |z| > |a| \\ \text{unbounded} & \text{for } |z| < |a| \end{cases}$
• The Z transform of the discrete unit step function
$u_0[n] = \begin{cases} 0 & n < 0 \\ 1 & n \ge 0 \end{cases}$
is
$\mathcal{Z}[u_0[n]] = \sum_{n=0}^{\infty} [1]z^{-n} = \begin{cases} \dfrac{z}{z-1} & \text{for } |z| > 1 \\ \text{unbounded} & \text{for } |z| < 1 \end{cases}$
• The Z transform of the discrete exponential sequence
$f[n] = e^{-naT}$
is
$\mathcal{Z}[e^{-naT}] = \frac{1}{1-e^{-aT}z^{-1}} = \frac{z}{z-e^{-aT}}$
for $|e^{-aT}z^{-1}| < 1$.
• The Z transforms of the discrete time functions $f_1[n] = \cos naT$ and $f_2[n] = \sin naT$ are respectively
$\cos naT \Leftrightarrow \frac{z^2 - z\cos aT}{z^2 - 2z\cos aT + 1} \quad \text{for } |z| > 1$
$\sin naT \Leftrightarrow \frac{z\sin aT}{z^2 - 2z\cos aT + 1} \quad \text{for } |z| > 1$
• The Z transform of the discrete unit ramp is
$nu_0[n] \Leftrightarrow \frac{z}{(z-1)^2}$
• The Z transform can also be found by means of the contour integral
$F^*(s) = \frac{1}{j2\pi}\oint_C \frac{F(v)}{1-e^{-sT}e^{vT}}\,dv$
• The input X ( z ) and output Y ( z ) are related by the system transfer function H ( z ) as
Y ( z ) = H ( z )X ( z )
• The discrete time impulse response h [ n ] and the discrete transfer function H ( z ) are related as
$h[n] = \mathcal{Z}^{-1}\{H(z)\}$
• The discrete time state equations are
$x[n+1] = Ax[n] + bu[n]$
$y[n] = Cx[n] + du[n]$
and the general form of the solution is
$x[n] = A^n x[0] + \sum_{i=0}^{n-1} A^{n-1-i}\,b\,u[i]$
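The recursion is easy to evaluate numerically. The following is a minimal sketch (not from the text) that propagates the state of (9.126) for a few steps with a unit step input; the zero initial state and the output taps C and d are assumptions made here only for illustration.
A = [0 1 0; 0 0 1; -1 -5 -2];  % state matrix from (9.126)
b = [0; 0; 1];                 % input vector from (9.126)
C = [1 0 0]; d = 0;            % assumed output: y[n] = x1[n]
N = 10; x = zeros(3,1);        % assumed zero initial state
y = zeros(1,N);
for n = 1:N
    y(n) = C*x + d*1;          % output equation y[n] = C x[n] + d u[n]
    x = A*x + b*1;             % state update x[n+1] = A x[n] + b u[n], unit step input
end
disp(y)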
• The MATLAB c2d function converts the continuous time state space equation
x· ( t ) = Ax ( t ) + bu ( t )
to the discrete time state space equation
x [ n + 1 ] = A disc x [ n ] + b disc u [ n ]
9.10 Exercises
1. Find the Z transform of the discrete time pulse p [ n ] defined as
$p[n] = \begin{cases} 1 & n = 0, 1, 2, \dots, m-1 \\ 0 & \text{otherwise} \end{cases}$
2. Find the Z transform of $a^n p[n]$ where $p[n]$ is defined as in Exercise 1.
3. Prove the following Z transform pairs:
a. $\delta[n] \Leftrightarrow 1$
b. $\delta[n-m] \Leftrightarrow z^{-m}$
c. $na^n u_0[n] \Leftrightarrow \dfrac{az}{(z-a)^2}$
d. $n^2 a^n u_0[n] \Leftrightarrow \dfrac{az(z+a)}{(z-a)^3}$
e. $[n+1]u_0[n] \Leftrightarrow \dfrac{z^2}{(z-1)^2}$
4. Use the partial fraction expansion to find $f[n] = \mathcal{Z}^{-1}[F(z)]$ given that
$F(z) = \frac{A}{(1-z^{-1})(1-0.5z^{-1})}$
5. Use the partial fraction expansion method to compute the Inverse Z transform of
$F(z) = \frac{z^2}{(z+1)(z-0.75)^2}$
6. Use the Inversion Integral to compute the Inverse Z transform of
$F(z) = \frac{1+2z^{-1}+z^{-3}}{(1-z^{-1})(1-0.5z^{-1})}$
7. Use the long division method to compute the first 5 terms of the discrete time sequence whose Z transform is
$F(z) = \frac{z^{-1}+z^{-2}-z^{-3}}{1+z^{-1}+z^{-2}+z^{-3}}$
write the difference equation that relates the output y [ n ] to the input x [ n ] .
2.
$a^n f[n] \Leftrightarrow F\!\left(\frac{z}{a}\right)$
and from Exercise 1,
$p[n] \Leftrightarrow \frac{1-z^{-m}}{1-z^{-1}}$
Then,
$a^n p[n] \Leftrightarrow \frac{1-(z/a)^{-m}}{1-(z/a)^{-1}} = \frac{(a^{-m}-z^{-m})/a^{-m}}{(a^{-1}-z^{-1})/a^{-1}} = \frac{a^{-1}(a^{-m}-z^{-m})}{a^{-m}(a^{-1}-z^{-1})} = \frac{a^{m}(a^{-m}-z^{-m})}{a(a^{-1}-z^{-1})} = \frac{1-a^{m}z^{-m}}{1-az^{-1}}$
or
$a^n p[n] \Leftrightarrow \frac{z(1-a^{m}z^{-m})}{z-a}$
3.
a.
$\mathcal{Z}\{\delta[n]\} = \sum_{n=0}^{\infty}\delta[n]z^{-n} = \delta[0]z^{-0} = 1$
b.
$\mathcal{Z}\{\delta[n-m]\} = \sum_{n=0}^{\infty}\delta[n-m]z^{-n} = z^{-m}$
c.
From Example 9.1,
$f[n] = a^n u_0[n] \Leftrightarrow F(z) = \frac{z}{z-a}$   (1)
$z\frac{d}{dz}F(z) = \frac{-az}{(z-a)^2}$   (4)
The term ( n + 1 )u 0 [ n ] represents the sum of the first n values, including n = 0 , of u 0 [ n ] and
thus it can be written as the summation
$g[n] = (n+1)u_0[n] = \sum_{k=0}^{n} u_0[k]$
Since summation in the discrete time domain corresponds to integration in the continuous time
domain, it follows that
u 1 [ n ] = ( n + 1 )u 0 [ n ]
where u 1 [ n ] represents the discrete unit ramp. Now, from the summation in time property,
$\sum_{k=0}^{n} f[k] \Leftrightarrow \left(\frac{z}{z-1}\right)F(z)$
and since $u_0[n] \Leftrightarrow \dfrac{z}{z-1}$, it follows that
$(n+1)u_0[n] \Leftrightarrow \frac{z}{z-1}\cdot\frac{z}{z-1} = \frac{z^2}{(z-1)^2}$
4.
We first multiply the numerator and denominator by $z^2$ to eliminate the negative exponents of $z$. Then,
$F(z) = \frac{A}{(1-z^{-1})(1-0.5z^{-1})} = \frac{Az^2}{(z-1)(z-0.5)}$
or
$\frac{F(z)}{z} = \frac{Az}{(z-1)(z-0.5)} = \frac{r_1}{z-1} + \frac{r_2}{z-0.5}$
$r_1 = \frac{Az}{z-0.5}\bigg|_{z=1} = 2A \qquad r_2 = \frac{Az}{z-1}\bigg|_{z=0.5} = -A$
$\frac{F(z)}{z} = \frac{2A}{z-1} - \frac{A}{z-0.5}$
or
$F(z) = \frac{2Az}{z-1} - \frac{Az}{z-0.5}$
Since
$\frac{z}{z-1} \Leftrightarrow 1 \qquad \frac{z}{z-a} \Leftrightarrow a^n$
it follows that
$\mathcal{Z}^{-1}[F(z)] = f[n] = 2A - A\left(\frac{1}{2}\right)^n = A\left[2-\left(\frac{1}{2}\right)^n\right]$
5.
$\frac{F(z)}{z} = \frac{z}{(z+1)(z-0.75)^2} = \frac{r_1}{z+1} + \frac{r_2}{z-0.75} + \frac{r_3}{(z-0.75)^2}$   (1)
$F(z) = \frac{(-16/49)z}{z-(-1)} + \frac{(16/49)z}{z-0.75} + \frac{(4/7)(0.75z)}{(z-0.75)^2}$
Using the transforms $u_0[n] \Leftrightarrow \dfrac{z}{z-1}$, $a^n u_0[n] \Leftrightarrow \dfrac{z}{z-a}$, and $na^n u_0[n] \Leftrightarrow \dfrac{az}{(z-a)^2}$, we get
$f[n] = -\frac{16}{49}(-1)^n + \frac{16}{49}(0.75)^n + \frac{4}{7}\,n(0.75)^n$
6.
Next, we examine $z^{n-2}$ to find out whether there are any values of $n$ for which there is a pole at the origin. We observe that for $n = 0$ there is a second order pole at $z = 0$ because
$z^{n-2}\big|_{n=0} = z^{-2} = \frac{1}{z^2}$
Also, for n = 1 there is a simple pole at z = 0 . But for n ≥ 2 the only poles are z = 1 and
z = 0.5 . Then, following the same procedure as in Example 9.12 we get:
For $n = 0$,
$f[0] = \sum_k \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.5)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.5)}\right]_{z=0} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.5)}\right]_{z=1} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z^2(z-1)(z-0.5)}\right]_{z=0.5}$
The first term on the right side of the above expression has a pole of order 2 at z = 0 ; therefore,
we must evaluate the first derivative of
$\frac{z^3+2z^2+1}{(z-1)(z-0.5)}$
$f[0] = \frac{d}{dz}\left[\frac{z^3+2z^2+1}{(z-1)(z-0.5)}\right]_{z=0} + \frac{z^3+2z^2+1}{z^2(z-0.5)}\bigg|_{z=1} + \frac{z^3+2z^2+1}{z^2(z-1)}\bigg|_{z=0.5}$
$= 6 + 8 - 13 = 1$
For $n = 1$, it reduces to
$f[1] = \sum_k \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.5)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.5)}\right]_{z=0} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.5)}\right]_{z=1} + \mathrm{Res}\left[\frac{z^3+2z^2+1}{z(z-1)(z-0.5)}\right]_{z=0.5}$
or
$f[1] = \frac{z^3+2z^2+1}{(z-1)(z-0.5)}\bigg|_{z=0} + \frac{z^3+2z^2+1}{z(z-0.5)}\bigg|_{z=1} + \frac{z^3+2z^2+1}{z(z-1)}\bigg|_{z=0.5}$
$= 2 + 8 - 13\,(0.5) = 3.5$
For n ≥ 2 there are no poles at z = 0 , that is, the only poles are at z = 1 and z = 0.5 . There-
fore,
$f[n] = \sum_k \mathrm{Res}\left[\frac{(z^3+2z^2+1)z^{n-2}}{(z-1)(z-0.5)}\right]_{z=p_k}$
$= \mathrm{Res}\left[\frac{(z^3+2z^2+1)z^{n-2}}{(z-1)(z-0.5)}\right]_{z=1} + \mathrm{Res}\left[\frac{(z^3+2z^2+1)z^{n-2}}{(z-1)(z-0.5)}\right]_{z=0.5}$
$= \frac{(z^3+2z^2+1)z^{n-2}}{z-0.5}\bigg|_{z=1} + \frac{(z^3+2z^2+1)z^{n-2}}{z-1}\bigg|_{z=0.5}$
for n ≥ 2 .
We can express $f[n]$ for all $n \ge 0$ as
$f[n] = 6\delta[n] + 2\delta[n-1] + 8 - 13(0.5)^n$
where the coefficients of $\delta[n]$ and $\delta[n-1]$ are the residues that were found for $n = 0$ and $n = 1$ at $z = 0$. The coefficient 6 is multiplied by $\delta[n]$ to emphasize that this value exists only for $n = 0$, and the coefficient 2 is multiplied by $\delta[n-1]$ to emphasize that this value exists only for $n = 1$.
Check with MATLAB:
syms z n; Fz=(z^3+2*z^2+1)/(z*(z−1)*(z−0.5)); iztrans(Fz)
ans =
2*charfcn[1](n)+6*charfcn[0](n)+8-13*(1/2)^n
7.
Multiplication of each term by $z^3$ yields
$F(z) = \frac{z^{-1}+z^{-2}-z^{-3}}{1+z^{-1}+z^{-2}+z^{-3}} = \frac{z^2+z-1}{z^3+z^2+z+1}$
The long division of the numerator by the denominator proceeds as follows:
Divisor: $z^3+z^2+z+1$   Dividend: $z^2+z-1$   Quotient: $z^{-1} - 2z^{-3} + z^{-4} + \dots$
Subtracting $z^{-1}\cdot$(divisor) $= z^2+z+1+z^{-1}$ from the dividend leaves the first remainder $-2-z^{-1}$; subtracting $-2z^{-3}\cdot$(divisor) $= -2-2z^{-1}-2z^{-2}-2z^{-3}$ leaves the second remainder $z^{-1}+2z^{-2}+2z^{-3}$, and the process continues.
Therefore,
$F(z) = z^{-1} - 2z^{-3} + z^{-4} + \dots$   (1)
Also,
$F(z) = \sum_{n=0}^{\infty} f[n]z^{-n} = f[0] + f[1]z^{-1} + f[2]z^{-2} + f[3]z^{-3} + f[4]z^{-4} + \dots$   (2)
Equating like terms on the right sides of (1) and (2), we get
$f[0] = 0 \quad f[1] = 1 \quad f[2] = 0 \quad f[3] = -2 \quad f[4] = 1$
8.
a.
y [ n ] – y [ n – 1 ] = Tx [ n – 1 ]
Taking the Z transform of both sides we get
$Y(z) - z^{-1}Y(z) = Tz^{-1}X(z)$
and thus
$H(z) = \frac{Y(z)}{X(z)} = \frac{Tz^{-1}}{1-z^{-1}} = \frac{T}{z-1}$
b.
$x[n] = e^{-naT} \Leftrightarrow X(z) = \frac{z}{z-e^{-aT}}$
Then,
$Y(z) = H(z)X(z) = \frac{T}{z-1}\cdot\frac{z}{z-e^{-aT}} = \frac{Tz}{(z-1)(z-e^{-aT})}$
or
$\frac{Y(z)}{z} = \frac{T}{(z-1)(z-e^{-aT})} = \frac{r_1}{z-1} + \frac{r_2}{z-e^{-aT}}$   (1)
$r_1 = \frac{T}{z-e^{-aT}}\bigg|_{z=1} = \frac{T}{1-e^{-aT}} \qquad r_2 = \frac{T}{z-1}\bigg|_{z=e^{-aT}} = \frac{-T}{1-e^{-aT}}$
Recalling that $\frac{z}{z-1} \Leftrightarrow u_0[n]$ and $\frac{z}{z-a} \Leftrightarrow a^n u_0[n]$, we get
$y[n] = \frac{T}{1-e^{-aT}} - \frac{Te^{-naT}}{1-e^{-aT}} = \frac{T}{1-e^{-aT}}\left(1-e^{-naT}\right)u_0[n]$
9.
a.
$y[n] - y[n-1] = \frac{T}{2}\{x[n] + x[n-1]\}$
Taking the Z transform of both sides we get
$Y(z) - z^{-1}Y(z) = \frac{T}{2}\left[X(z) + z^{-1}X(z)\right]$
or
$(1-z^{-1})Y(z) = \frac{T}{2}(1+z^{-1})X(z)$
and thus
$H(z) = \frac{Y(z)}{X(z)} = \frac{T}{2}\cdot\frac{1+z^{-1}}{1-z^{-1}} = \frac{T}{2}\cdot\frac{z+1}{z-1}$
b.
$x[n] = e^{-naT} \Leftrightarrow X(z) = \frac{z}{z-e^{-aT}}$
Then,
$Y(z) = H(z)X(z) = \frac{T}{2}\cdot\frac{z+1}{z-1}\cdot\frac{z}{z-e^{-aT}} = \frac{Tz(z+1)}{2(z-1)(z-e^{-aT})}$
or
$\frac{Y(z)}{z} = \frac{T(z+1)}{2(z-1)(z-e^{-aT})} = \frac{r_1}{z-1} + \frac{r_2}{z-e^{-aT}}$   (1)
$r_1 = \frac{T}{2}\cdot\frac{z+1}{z-e^{-aT}}\bigg|_{z=1} = \frac{T}{2}\cdot\frac{2}{1-e^{-aT}} = \frac{T}{1-e^{-aT}}$
$r_2 = \frac{T}{2}\cdot\frac{z+1}{z-1}\bigg|_{z=e^{-aT}} = \frac{T}{2}\cdot\frac{e^{-aT}+1}{e^{-aT}-1}$
Recalling that $\frac{z}{z-1} \Leftrightarrow u_0[n]$ and $\frac{z}{z-a} \Leftrightarrow a^n u_0[n]$, we get
$y[n] = \frac{T}{1-e^{-aT}} + \frac{T}{2}\cdot\frac{e^{-aT}+1}{e^{-aT}-1}\,e^{-naT} = \frac{T}{1-e^{-aT}} - \frac{T}{2}\coth\left(\frac{aT}{2}\right)e^{-naT}$
10.
a.
y[n] + y[n – 1] = x[n]
y [ n ] = 0 for n < 0
Taking the Z transform of both sides we get
$(1+z^{-1})Y(z) = X(z)$
and thus
$H(z) = \frac{Y(z)}{X(z)} = \frac{1}{1+z^{-1}} = \frac{z}{z+1} = \frac{z}{z-(-1)}$
b.
$h[n] = \mathcal{Z}^{-1}\{H(z)\} = \mathcal{Z}^{-1}\left\{\frac{z}{z-(-1)}\right\} = (-1)^n$
c.
$x[n] = 10 \quad \text{for } n \ge 0$
$X(z) = 10\cdot\frac{z}{z-1}$
$Y(z) = H(z)X(z) = \frac{z}{z+1}\cdot\frac{10z}{z-1} = \frac{10z^2}{(z+1)(z-1)}$
$\frac{Y(z)}{z} = \frac{10z}{(z+1)(z-1)} = \frac{r_1}{z+1} + \frac{r_2}{z-1} = \frac{5}{z+1} + \frac{5}{z-1}$
$Y(z) = \frac{5z}{z-(-1)} + \frac{5z}{z-1} \Leftrightarrow y[n] = 5(-1)^n + 5$
11.
$H(z) = \frac{z+2}{8z^2-2z-3}$
Multiplication of each term by $\frac{1}{8z^2}$ yields
$H(z) = \frac{Y(z)}{X(z)} = \frac{\frac{1}{8}(z^{-1}+2z^{-2})}{1-\frac{1}{4}z^{-1}-\frac{3}{8}z^{-2}}$
$\left(1-\frac{1}{4}z^{-1}-\frac{3}{8}z^{-2}\right)Y(z) = \frac{1}{8}(z^{-1}+2z^{-2})X(z)$
and taking the Inverse Z transform, the difference equation is
$y[n] - \frac{1}{4}y[n-1] - \frac{3}{8}y[n-2] = \frac{1}{8}x[n-1] + \frac{1}{4}x[n-2]$
This chapter begins with the actual computation of frequency spectra for discrete time systems. For brevity, we will use the acronyms DFT for the Discrete Fourier Transform and FFT for the Fast Fourier Transform algorithm respectively. The definition, theorems, and properties are also discussed, and several examples are given to illustrate their uses.
In our subsequent discussion we will denote a discrete time signal as x [ n ] , and its discrete frequency
transform as X [ m ] .
Let us consider again the definition of the Z transform, that is,
$F(z) = \sum_{n=0}^{\infty} f[n]z^{-n}$   (10.1)
This represents an infinite summation; it must be truncated before it can be computed. Let this trun-
cated version be represented by
$X[m] = \sum_{n=0}^{N-1} x[n]e^{-j2\pi\frac{mn}{N}}$   (10.3)
where N represents the number of points that are equally spaced in the interval 0 to 2π on the unit
circle of the z -plane, and
$\omega = \left(\frac{2\pi}{NT}\right)m$
The Inverse DFT is defined as
$x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X[m]e^{j2\pi\frac{mn}{N}}$   (10.4)
for $n = 0, 1, 2, \dots, N-1$.
In general, the discrete frequency transform X [ m ] is complex, and thus we can express it as
X [ m ] = Re { X [ m ] } + Im { X [ m ] } (10.5)
for m = 0, 1, 2, …, N – 1
Since
$e^{-j2\pi\frac{mn}{N}} = \cos\frac{2\pi mn}{N} - j\sin\frac{2\pi mn}{N}$   (10.6)
we can write (10.3) as
$X[m] = \sum_{n=0}^{N-1} x[n]e^{-j2\pi\frac{mn}{N}} = \sum_{n=0}^{N-1} x[n]\cos\frac{2\pi mn}{N} - j\sum_{n=0}^{N-1} x[n]\sin\frac{2\pi mn}{N}$   (10.7)
For $n = 0$, (10.7) reduces to $X[m] = x[0]$. Then, the real part $\mathrm{Re}\{X[m]\}$ can be computed from
$\mathrm{Re}\{X[m]\} = x[0] + \sum_{n=1}^{N-1} x[n]\cos\frac{2\pi mn}{N} \quad \text{for } m = 0, 1, 2, \dots, N-1$   (10.8)
and the imaginary part from
$\mathrm{Im}\{X[m]\} = -\sum_{n=1}^{N-1} x[n]\sin\frac{2\pi mn}{N} \quad \text{for } m = 0, 1, 2, \dots, N-1$   (10.9)
We observe that the summation in (10.8) and (10.9) is from n = 1 to n = N – 1 since x [ 0 ] appears
in (10.8).
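Relations (10.8) and (10.9) translate directly into MATLAB. The sketch below (not part of the text) computes the real and imaginary parts of X[m] term by term for the sequence of Example 10.1, used here purely for illustration, and compares the result with the built-in fft function.
xn = [1 2 2 1];                        % the sequence of Example 10.1, used for illustration
N  = length(xn);
Xm = zeros(1, N);
for m = 0:N-1
    n   = 1:N-1;                       % the sums in (10.8) and (10.9) start at n = 1
    ReX = xn(1) + sum(xn(n+1).*cos(2*pi*m*n/N));
    ImX = -sum(xn(n+1).*sin(2*pi*m*n/N));
    Xm(m+1) = ReX + 1i*ImX;            % X[m] = Re{X[m]} + j Im{X[m]}
end
disp(Xm), disp(fft(xn))                % the two results should agree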
Example 10.1
A discrete time signal is defined by the sequence
x [ 0 ] = 1, x [ 1 ] = 2, x [ 2 ] = 2, and x [ 3 ] = 1
Compute the frequency components $X[m]$.
Solution:
Since $N = 4$,
$\mathrm{Re}\{X[m]\} = x[0] + \sum_{n=1}^{3} x[n]\cos\frac{2\pi mn}{4} = 1 + 2\cos\frac{2\pi m(1)}{4} + 2\cos\frac{2\pi m(2)}{4} + \cos\frac{2\pi m(3)}{4}$
$= 1 + 2\cos\frac{m\pi}{2} + 2\cos m\pi + \cos\frac{3m\pi}{2}$   (10.10)
$\mathrm{Im}\{X[m]\} = -\sum_{n=1}^{3} x[n]\sin\frac{2\pi mn}{4} = -2\sin\frac{m\pi}{2} - 2\sin m\pi - \sin\frac{3m\pi}{2}$
The discrete frequency components X [ m ] for m = 0, 1, 2 , and 3 are found by addition of the real
and imaginary components X re [ i ] of (10.11) and X im [ i ] of (10.12). Thus,
X [ 0 ] = 6 + j0 = 6
X [1] = – 1 – j
(10.13)
X [ 2 ] = 0 + j0 = 0
X [3] = – 1 + j
Example 10.2
Use the Inverse DFT, i.e., relation (10.4), and the results of Example 10.1, to compute the values of
the discrete time sequence x [ n ] .
Solution:
Since we are given four discrete values of x [ n ] , we will use a 4 -point DFT, that is, for this example,
N = 4 . Then, (10.4) for m = 0, 1, 2 , and 3 reduces to
$x[n] = \frac{1}{4}\sum_{m=0}^{3} X[m]e^{j2\pi\frac{mn}{4}} = \frac{1}{4}\sum_{m=0}^{3} X[m]e^{j\pi\frac{mn}{2}}$
$= \frac{1}{4}\left[X[0] + X[1]e^{j\pi\frac{n}{2}} + X[2]e^{j\pi n} + X[3]e^{j\pi\frac{3n}{2}}\right]$   (10.14)
For $n = 0$,
$x[0] = \frac{1}{4}\{X[0] + X[1] + X[2] + X[3]\} = \frac{1}{4}\{6 + (-1-j) + 0 + (-1+j)\} = 1$
$x[1] = \frac{1}{4}\{X[0] + X[1]\cdot j + X[2]\cdot(-1) + X[3]\cdot(-j)\} = \frac{1}{4}\{6 + (-1-j)\cdot j + 0\cdot(-1) + (-1+j)\cdot(-j)\} = 2$
$x[2] = \frac{1}{4}\{X[0] + X[1]\cdot(-1) + X[2]\cdot 1 + X[3]\cdot(-1)\} = \frac{1}{4}\{6 + (-1-j)\cdot(-1) + 0\cdot 1 + (-1+j)\cdot(-1)\} = 2$
$x[3] = \frac{1}{4}\{X[0] + X[1]\cdot(-j) + X[2]\cdot(-1) + X[3]\cdot j\} = \frac{1}{4}\{6 + (-1-j)\cdot(-j) + 0\cdot(-1) + (-1+j)\cdot j\} = 1$
We observe that these are the same values as in Example 10.1. We will check the answers of Exam-
ples 10.1 and 10.2 with MATLAB and Excel.
With MATLAB, we use the fft(x) function to compute the DFT, and the ifft(x) function to compute
the Inverse DFT.
xn=[1 2 2 1]; % The discrete time sequence of Example 10.1
Xm=fft(xn) % Compute the FFT of this discrete time sequence
Xm =
6.0000 -1.0000-1.0000i 0 -1.0000+1.0000i
Xm = [6 −1−j 0 −1+j]; % The discrete frequency components of Example 10.2
xn=ifft(Xm) % Compute the Inverse FFT
xn =
1.0000 2.0000+0.0000i 2.0000 1.0000-0.0000i
To use Excel for the computation of the DFT, the Analysis ToolPak must have been installed. If not,
we can install it by clicking Add-Ins on the Tools menu, and follow the instructions on the screen.
With Excel’s Fourier Analysis tool, we get the spreadsheet shown in Figure 10.1. The instructions on
how to use it, are given on the spreadsheet.
The term $e^{-j2\pi/N}$ is a rotating vector, where the range $0 \le \theta \le 2\pi$ is divided into $N$ equal segments of $360/N$ degrees each. Therefore, it is convenient to represent it as $W_N$, that is, we let
$W_N = e^{-j\frac{2\pi}{N}}$   (10.15)
and consequently
$W_N^{-1} = e^{j\frac{2\pi}{N}}$   (10.16)
Figure 10.1. Using Excel to find the DFT and Inverse DFT. The input data x(n) of Example 10.1 are entered in cells A11 through A14. From the Tools drop-down menu we select Data Analysis and, from it, Fourier Analysis; the Input Range is A11:A14 and the Output Range is B11:B14, which produces X(m) = 6, −1−i, 0, −1+i. To obtain the discrete time sequence, we select Inverse from the Fourier Analysis menu; with the X(m) values of Example 10.2 entered in A25:A28 and the Output Range set to B25:B28, the result is x(n) = 1, 2, 2, 1.
With this notation, the DFT and Inverse DFT of (10.3) and (10.4) are written as
$X[m] = \sum_{n=0}^{N-1} x(n)W_N^{mn}$   (10.17)
and
$x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X(m)W_N^{-mn}$   (10.18)
Example 10.3
Use MATLAB to compute the magnitude of the frequency components of the following discrete
time function. Then, use Excel to display the time and frequency values.
x[0] x[1] x[2] x[3] x[4] x[5] x[6] x[7] x[8] x[9] x[10] x[11] x[12] x[13] x[14] x[15]
1.0 1.5 2.0 2.3 2.7 3.0 3.4 4.1 4.7 4.2 3.8 3.6 3.2 2.9 2.5 1.8
Solution:
We compute the magnitude of the frequency spectrum with the MATLAB code below.
xn=[1 1.5 2 2.3 2.7 3 3.4 4.1 4.7 4.2 3.8 3.6 3.2 2.9 2.5 1.8]; magXm=abs(fft(xn));...
fprintf(' \n'); fprintf('magXm1 = %4.2f \t', magXm(1)); fprintf('magXm2 = %4.2f \t', magXm(2));...
fprintf('magXm3 = %4.2f \t', magXm(3)); fprintf(' \n'); fprintf('magXm4 = %4.2f \t', magXm(4));...
fprintf('magXm5 = %4.2f \t', magXm(5)); fprintf('magXm6 = %4.2f \t', magXm(6)); fprintf(' \n');...
fprintf('magXm7 = %4.2f \t', magXm(7)); fprintf('magXm8 = %4.2f \t', magXm(8));...
fprintf('magXm9 = %4.2f \t', magXm(9)); fprintf(' \n');...
fprintf('magXm10 = %4.2f \t', magXm(10)); fprintf('magXm11 = %4.2f \t', magXm(11)); ...
fprintf('magXm12 = %4.2f \t', magXm(12)); fprintf(' \n');...
fprintf('magXm13 = %4.2f \t', magXm(13)); fprintf('magXm14 = %4.2f \t', magXm(14));...
fprintf('magXm15 = %4.2f \t', magXm(15))
magXm1 = 46.70 magXm2 = 11.03 magXm3 = 0.42
magXm4 = 2.41 magXm5 = 0.22 magXm6 = 1.19
magXm7 = 0.07 magXm8 = 0.47 magXm9 = 0.10
magXm10 = 0.47 magXm11 = 0.07 magXm12 = 1.19
magXm13 = 0.22 magXm14 = 2.41 magXm15 = 0.42
Now, we use Excel to plot the x [ n ] and X [ m ] values. These are shown in Figure 10.2.
On the plot of X [ m ] in Figure 10.2, the first value X [ 0 ] = 46.70 represents the DC component.
We observe that after the X [ 8 ] = 0.10 value, the magnitude of the frequency components for the
range 9 ≤ m ≤ 15 , are the mirror image of the components in the range 1 ≤ m ≤ 7 . This is not a coin-
cidence; it is a fact that if x [ n ] is an N -point real discrete time function, only N ⁄ 2 of the frequency
components of X [ m ] are unique.
Figure 10.2. Plots of the time sequence x[n] and the magnitude spectrum |X(m)| for Example 10.3. The DC component |X(0)| = 46.70 dominates, and the components for 9 ≤ m ≤ 15 are the mirror image of those for 1 ≤ m ≤ 7.
25
x[n]
n
0
N
(Start of New Period)
N-1
(End of Period)
X[m]
m
0
N-1 N
(End of Period) (Start of New Period)
In Chapter 8, we developed Table 8-7 showing the even and odd properties of the Fourier trans-
form. Table 10.2 shows the even and odd properties of the DFT.
These can be proved by methods similar to those that we have used for the continuous Fourier
transform. To prove the first entry, for example, we expand
$X[m] = \sum_{n=0}^{N-1} x[n]W_N^{mn}$
into its real and imaginary parts using Euler’s identity, and we equate these with the real and imagi-
nary parts of X [ m ] = X re [ m ] + jX im [ m ] . Now, since the real part contains the cosine, and the imag-
inary contains the sine function, and cos ( – m ) = cos m while sin ( – m ) = – sin m , this entry is
proved.
ax 1 [ n ] + bx 2 [ n ] + … ⇔ aX 1 [ m ] + bX 2 [ m ] + … (10.26)
Proof:
The proof is readily obtained by using the definition of the DFT.
2. Time Shift
$x[n-k] \Leftrightarrow W_N^{km}X[m]$   (10.27)
Proof:
By definition,
$\mathcal{D}\{x[n]\} = \sum_{n=0}^{N-1} x[n]W_N^{mn}$
and if $x[n]$ is shifted to the right by $k$ sampled points for $k > 0$, we must change the lower and upper limits of the summation from $0$ to $k$, and from $N-1$ to $N+k-1$ respectively. Then, replacing $x[n]$ with $x[n-k]$ in the definition above, we get
$\mathcal{D}\{x[n-k]\} = \sum_{n=k}^{N+k-1} x[n-k]W_N^{mn}$   (10.28)
We must remember, however, that although the magnitudes of the frequency components are not
affected by the shift, a phase shift of 2πkm ⁄ N radians is introduced as a result of the time shift.
To prove this, let us consider the relation y [ n ] = x [ n – k ] . Taking the DFT of both sides of this
relation, we get
$Y[m] = W_N^{km}X[m] = X[m]e^{-j\frac{2\pi km}{N}} = X[m]\angle\left(-\frac{2\pi km}{N}\right)$   (10.30)
Since both X [ m ] and Y [ m ] are complex quantities, they can be expressed in magnitude and phase
angle form as
X [ m ] = X [ m ] ∠θ
and
Y [ m ] = Y [ m ] ∠ϕ
By substitution of these into (10.30), we get
$|Y[m]|\angle\varphi = |X[m]|\angle\theta\,\angle\left(-\frac{2\pi km}{N}\right)$   (10.31)
and therefore
$\varphi = \theta - \frac{2\pi km}{N}$   (10.32)
3. Frequency Shift
$W_N^{-kn}x[n] \Leftrightarrow X[m-k]$   (10.33)
Proof:
$\mathcal{D}\{W_N^{-kn}x[n]\} = \sum_{n=0}^{N-1} W_N^{-kn}x[n]W_N^{mn} = \sum_{n=0}^{N-1} x[n]W_N^{(m-k)n}$   (10.34)
and we observe that the last term on the right side of (10.34) is the same as D { x [ n ] } except that
m is replaced with m – k . Therefore,
4. Time Convolution
x [ n ]∗ h [ n ] ⇔ X [ m ] ⋅ H [ m ] (10.36)
Proof:
Since
$x[n]*h[n] = \sum_{k=0}^{N-1} x[k]h[n-k]$
then,
$\mathcal{D}\{x[n]*h[n]\} = \sum_{n=0}^{N-1}\left[\sum_{k=0}^{N-1} x[k]h[n-k]\right]W_N^{mn}$   (10.37)
Next, interchanging the order of the indices n and k in the lower limit of the summation, and also
changing the range of summation from N – 1 to N + k – 1 for the bracketed term on the right
side of (10.37), we get
$\mathcal{D}\{x[n]*h[n]\} = \sum_{k=0}^{N-1} x[k]\left[\sum_{n=k}^{N+k-1} h[n-k]W_N^{mn}\right]$   (10.38)
and, by the time shift property,
$\sum_{n=k}^{N+k-1} h[n-k]W_N^{mn} = W_N^{km}H[m]$   (10.39)
5. Frequency Convolution
N–1
1 1-
x [ n ] ⋅ y [ n ] ⇔ ----
N ∑ X [ k ]Y [ m – k ] = ---
N
X [ m ]∗ Y [ m ] (10.41)
k=0
Proof:
The proof is obtained by taking the Inverse DFT, changing the order of the summation, and let-
ting m – k = µ .
For example, if we assume that the highest frequency component in a signal is 18 KHz, this signal must be sampled at 2 × 18 KHz = 36 KHz or higher so that it can be completely specified by its sampled values. If the sampling frequency remains the same, i.e., 36 KHz, and the highest frequency of this signal is increased to, say, 25 KHz, this signal cannot be recovered by any digital-to-analog converter.
A typical digital signal processing system contains a low-pass analog filter, often called a pre-sampling filter, to ensure that the highest frequency allowed into the system will be equal to or less than half the sampling rate so that the signal can be recovered. The highest frequency allowed by the pre-sampling filter is referred to as the Nyquist frequency, and it is denoted as f_n.
If a signal is not band-limited, or if the sampling rate is too low, the spectral components of the signal will overlap each other, and this condition is called aliasing. To avoid aliasing, we must increase the sampling rate.
A discrete time signal may have an infinite length; in this case, it must be limited to a finite interval
before it is sampled. We can terminate the signal at the desired finite number of terms, by multiply-
ing it by a window function. There are several window functions such as the rectangular, triangular,
Hamming, Hanning, Kaiser, etc. To obtain a truncated sequence, we multiply an infinite sequence by
one of these window functions. However, we must choose a suitable window function; otherwise,
the sequence will be terminated abruptly producing the effect of leakage.
As an example of how leakage can occur, let us review Example 8.7, and Exercise 8.3, where the infi-
nite sequence cos ω 0 t , or cos naT is multiplied by the window function A [ u 0 ( t + T ) – u 0 ( t – T ) ] or
A [ u 0 ( n + m ) – u 0 ( n – m ) ] . We can see that the spectrum spreads or leaks over the neighborhood of
± ω 0 . Selection of an appropriate window function to avoid leakage is beyond the scope of this book,
and will not be discussed here.
A third problem that may arise in using the DFT results from the fact that the spectrum of the DFT is not continuous; it is a discrete function whose spectrum consists of integer multiples of the fundamental frequency. It is possible, however, that some significant frequency component lies between two spectral lines and goes undetected. This is called the picket-fence effect, since we can only see discrete values of the spectrum. This problem can be alleviated by adding zeros at the end of the discrete signal, thereby changing the period, which in turn changes the location of the spectral lines. We should remember that the sampling theorem states that the original time sequence can be completely recovered if the sampling frequency is adequate, but it does not guarantee that all frequency components will be detected.
To get a better understanding of the sampling frequency f s , Nyquist frequency f n , number of sam-
ples N , and the periods in the time and frequency domains, we will adopt the following notations:
N = number of samples in the time or frequency period
f_s = sampling frequency = samples per second
T_t = period of a periodic discrete time function
t_t = interval between the N samples in the time period T_t
T_f = period of a periodic discrete frequency function
t_f = interval between the N samples in the frequency period T_f
These notations are shown in Figure 10.4. Thus, we have the relations
$t_t = \frac{T_t}{N} \qquad f_s = \frac{1}{t_t} \qquad t_f = \frac{T_f}{N} \qquad t_t = \frac{1}{T_f} \qquad t_f = \frac{1}{T_t}$   (10.42)
Example 10.4
The period of a periodic discrete time function is 0.125 millisecond, and it is sampled at 1024
equally spaced points. It is assumed that with this number of samples, the sampling theorem is satis-
fied and thus there will be no aliasing.
a. Compute the period of the frequency spectrum in KHz .
b. Compute the interval between frequency components in KHz .
c. Compute the sampling frequency f s
Figure 10.4. Intervals between samples and periods in discrete time and frequency domains: the time period T_t contains N samples spaced t_t apart, and the frequency period T_f contains N samples spaced t_f apart.
Solution:
For this example, $T_t = 0.125\ \text{ms}$ and $N = 1024$ points. Therefore, the time between successive time components is
$t_t = \frac{T_t}{N} = \frac{0.125\times 10^{-3}}{1024} = 0.122\ \mu s$
Then,
a. the period $T_f$ of the frequency spectrum is
$T_f = \frac{1}{t_t} = \frac{1}{0.122\times 10^{-6}} = 8192\ \text{kHz} \approx 8.2\ \text{MHz}$
b. the interval $t_f$ between frequency components is
$t_f = \frac{1}{T_t} = \frac{1}{0.125\times 10^{-3}} = 8\ \text{kHz}$
c. the sampling frequency is
$f_s = \frac{1}{t_t} = \frac{1}{0.122\times 10^{-6}} \approx 8.2\ \text{MHz}$
To compute the number of operations required to complete this task, let us expand the N-point DFT defined as
$X[m] = \sum_{n=0}^{N-1} x[n]W_N^{mn}$   (10.44)
Then,
$X[0] = x[0]W_N^0 + x[1]W_N^0 + x[2]W_N^0 + \dots + x[N-1]W_N^0$
$X[1] = x[0]W_N^0 + x[1]W_N^1 + x[2]W_N^2 + \dots + x[N-1]W_N^{N-1}$
$X[2] = x[0]W_N^0 + x[1]W_N^2 + x[2]W_N^4 + \dots + x[N-1]W_N^{2(N-1)}$   (10.45)
$\dots$
$X[N-1] = x[0]W_N^0 + x[1]W_N^{N-1} + x[2]W_N^{2(N-1)} + \dots + x[N-1]W_N^{(N-1)^2}$
$W_N^{N} = e^{-j\frac{2\pi}{N}N} = e^{-j2\pi} = 1$
$W_N^{N/2} = e^{-j\frac{2\pi}{N}\frac{N}{2}} = e^{-j\pi} = -1$
$W_N^{N/4} = e^{-j\frac{2\pi}{N}\frac{N}{4}} = e^{-j\pi/2} = -j$
$W_N^{3N/4} = e^{-j\frac{2\pi}{N}\frac{3N}{4}} = e^{-j3\pi/2} = j$   (10.48)
$W_N^{kN} = e^{-j\frac{2\pi}{N}kN} = e^{-j2k\pi} = 1 \quad k = 0, 1, 2, \dots$
$W_N^{kN+r} = e^{-j\frac{2\pi}{N}(kN+r)} = e^{-j\frac{2\pi}{N}kN}\,e^{-j\frac{2\pi}{N}r} = 1\cdot W_N^{r} = W_N^{r}$
$W_{2N}^{k} = e^{-j\frac{2\pi}{2N}k} = e^{-j\frac{2\pi}{N}\frac{k}{2}} = W_N^{k/2}$
In matrix form, (10.45) is written as
$\begin{bmatrix} X[0] \\ X[1] \\ X[2] \\ \vdots \\ X[N-1] \end{bmatrix} = \begin{bmatrix} W_N^0 & W_N^0 & W_N^0 & \dots & W_N^0 \\ W_N^0 & W_N^1 & W_N^2 & \dots & W_N^{N-1} \\ W_N^0 & W_N^2 & W_N^4 & \dots & W_N^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ W_N^0 & W_N^{N-1} & W_N^{2(N-1)} & \dots & W_N^{(N-1)^2} \end{bmatrix}\cdot\begin{bmatrix} x[0] \\ x[1] \\ x[2] \\ \vdots \\ x[N-1] \end{bmatrix}$   (10.49)
In a shorter form, (10.49) is expressed as
$X[m] = W_N\cdot x[n]$   (10.50)
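The matrix form (10.49)–(10.50) can be written out explicitly in MATLAB. This is a minimal sketch (not part of the text): it builds the N×N matrix of powers of W_N and verifies, for an arbitrary test sequence, that W·x agrees with the built-in fft.
N  = 8;
WN = exp(-1i*2*pi/N);            % the rotating vector W_N of (10.15)
[n, m] = meshgrid(0:N-1, 0:N-1); % n indexes the columns, m the rows
W  = WN.^(m.*n);                 % the matrix of (10.49): W(m+1,n+1) = W_N^(mn)
x  = randn(N,1);                 % an arbitrary test sequence
X  = W*x;                        % X[m] = W_N . x[n] as in (10.50)
max(abs(X - fft(x)))             % should be essentially zero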
The algorithm that was developed by Cooley and Tukey, is based on matrix decomposition methods,
where the matrix W in (10.50) is factored into L smaller matrices, that is,
WN = W1 ⋅ W2 ⋅ … ⋅ WL (10.51)
where $L$ is chosen as $L = \log_2 N$, or equivalently $N = 2^L$.
Each row of the matrices on the right side of (10.51) contains only two non-zero terms, unity and
k
W N . Then, the vector X [ m ] , is obtained from
X [ m ] = W1 ⋅ W2 ⋅ … ⋅ WL ⋅ x [ n ] (10.52)
The FFT computation begins with matrix W L in (10.52). This matrix operates on vector x [ n ] pro-
ducing a new vector, and each component of this new vector, is obtained by one multiplication and
one addition. This is because there are only two non-zero elements on a given row, and one of these
elements is unity. Since there are N components on x [ n ] , there will be N complex additions and N
complex multiplications. This new vector is then operated on by the [ W L – 1 ] matrix, then on
[ W L – 2 ] , and so on, until the entire computation is completed. It appears that the entire computa-
tion would require $NL = N\log_2 N$ complex multiplications and also $N\log_2 N$ additions, for a total of $2N\log_2 N$ arithmetic operations. However, since $W_N^0 = 1$, $W_N^{N/2} = -1$, and other reductions, it is estimated that only about half of these, that is, $N\log_2 N$ total arithmetic operations, are required by the FFT versus the $N^2$ computations required by the DFT.
Under those assumptions, we construct Table 10.3 to compare the percentage of computations
achieved by the use of FFT versus the DFT.
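As a rough illustration of this comparison, the following few lines (a back-of-the-envelope calculation, not Table 10.3 itself) tabulate N², N·log2 N, and their ratio for several values of N.
N      = 2.^(3:2:11);            % N = 8, 32, 128, 512, 2048
dftOps = N.^2;                   % approximate arithmetic operations for the direct DFT
fftOps = N.*log2(N);             % approximate arithmetic operations for the FFT
pct    = 100*fftOps./dftOps;     % FFT operations as a percentage of DFT operations
disp([N.'  dftOps.'  fftOps.'  pct.'])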
A plethora of FFT algorithms has been developed and published. They are divided into two main
categories:
Category I
a. Decimation in Time
b. Decimation in Frequency
Category II
a. In−Place
b. Natural Input−Output (Double-Memory Technique)
To define Category I, we need to refer to the DFT and Inverse DFT definitions. They are repeated
below for convenience.
$X[m] = \sum_{n=0}^{N-1} x[n]W_N^{mn}$   (10.53)
and
$x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X[m]W_N^{-mn}$   (10.54)
We observe that (10.53) and (10.54) differ only by the factor $1/N$ and the replacement of $W_N$ with $W_N^{-1}$. If the FFT algorithm is developed in terms of the direct DFT of (10.53), it is referred to as decimation in time, and if it is developed in terms of the Inverse DFT of (10.54), it is referred to as decimation in frequency. In the latter case, the vector
$W_N = e^{-j\frac{2\pi}{N}}$
is replaced by
$W_N^{-1} = e^{j\frac{2\pi}{N}}$
that is, the sine terms are reversed in sign, and the multiplication by the factor $1/N$ can be done either at the input or the output.
The Category II algorithm schemes are described in the Table 10.4 along with their advantages and
disadvantages.
Now, we will explain how the unnatural order occurs and how it can be re-ordered.
Consider the discrete time sequence f [ n ] ; its DFT is found from
N–1
∑ f [ n ]WN
mn
F[m] = (10.55)
n=0
We assume that $N$ is a power of 2 and thus it is divisible by 2. Then, we can decompose the sequence $f[n]$ into two subsequences, $f_{even}[n] = f[2n]$, which contains the even-indexed components, and $f_{odd}[n] = f[2n+1]$, which contains the odd-indexed components, where $n = 0, 1, 2, \dots, N/2-1$.
Each of these subsequences has a length of N ⁄ 2 and thus, their DFTs are, respectively,
$F_{even}[m] = \sum_{n=0}^{N/2-1} f_{even}[n]W_{N/2}^{mn} = \sum_{n=0}^{N/2-1} f[2n]W_{N/2}^{mn}$   (10.56)
and
$F_{odd}[m] = \sum_{n=0}^{N/2-1} f_{odd}[n]W_{N/2}^{mn} = \sum_{n=0}^{N/2-1} f[2n+1]W_{N/2}^{mn}$   (10.57)
where
$W_{N/2} = e^{-j\frac{2\pi}{N/2}} = e^{-j\frac{2\pi}{N}\cdot 2} = W_N^2$   (10.58)
For the case $N = 8$, (10.55) expands to
$F[m] = \sum_{n=0}^{7} f[n]W_N^{mn} = f[0] + f[1]W_N^{m} + f[2]W_N^{2m} + f[3]W_N^{3m} + f[4]W_N^{4m} + f[5]W_N^{5m} + f[6]W_N^{6m} + f[7]W_N^{7m}$   (10.59)
Expanding (10.56) for $n = 0, 1, 2, 3$ and recalling that $W_N^0 = 1$, we get
$F_{even}[m] = \sum_{n=0}^{3} f[2n]W_N^{2mn} = f[0]W_N^0 + f[2]W_N^{2m} + f[4]W_N^{4m} + f[6]W_N^{6m} = f[0] + f[2]W_N^{2m} + f[4]W_N^{4m} + f[6]W_N^{6m}$   (10.60)
Expanding also (10.57) for $n = 0, 1, 2, 3$ and using $W_N^0 = 1$, we get
$F_{odd}[m] = \sum_{n=0}^{3} f[2n+1]W_N^{2mn} = f[1]W_N^0 + f[3]W_N^{2m} + f[5]W_N^{4m} + f[7]W_N^{6m} = f[1] + f[3]W_N^{2m} + f[5]W_N^{4m} + f[7]W_N^{6m}$   (10.61)
The vector $W_N$ is the same in (10.59), (10.60) and (10.61), and $N = 8$. Then,
$W_N = W_8 = e^{-j\frac{2\pi}{8}}$
Multiplying both sides of (10.61) by $W_N^m$, we get
$W_N^m F_{odd}[m] = f[1]W_N^m + f[3]W_N^{3m} + f[5]W_N^{5m} + f[7]W_N^{7m}$   (10.62)
and, from (10.59), (10.60), and (10.61),
$F[m] = F_{even}[m] + W_N^m F_{odd}[m]$   (10.63)
Continuing the process, we decompose { f [ 0 ], f [ 2 ], f [ 4 ], and f [ 6 ] } into { f [ 0 ], f [ 4 ] } and
{ f [ 2 ], f [ 6 ] } . These are sequences of length N ⁄ 4 = 2 .
Denoting their DFTs as $F_{even1}[m]$ and $F_{even2}[m]$, and using the relation
$W_{N/4} = e^{-j\frac{2\pi}{N/4}} = e^{-j\frac{2\pi}{N}\cdot 4} = W_N^4$   (10.64)
for $n = 0, 1$, we get
$F_{even1}[m] = f[0] + f[4]W_N^{4m}$   (10.65)
and
$F_{even2}[m] = f[2] + f[6]W_N^{4m}$   (10.66)
The sequences of (10.65) and (10.66) cannot be decomposed further. They justify the statement
made earlier, that each computation produces a vector where each component of this vector, for
n = 1, 2, 3, …, 7 is obtained by one multiplication and one addition. This is often referred to as a
butterfly operation.
Substitution of (10.65) and (10.66) into (10.60) yields
$F_{even}[m] = F_{even1}[m] + W_N^{2m}F_{even2}[m]$   (10.67)
Likewise, $F_{odd}[m]$ can be decomposed into DFTs of length 2; then, $F[m]$ can be computed from
$F(m) = F_{even}(m) + W_N^m F_{odd}(m) \qquad m = 0, 1, 2, \dots, 7$   (10.68)
for N = 8 . Of course, this procedure can be extended for any N that is divisible by 2.
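Relation (10.68) is easy to demonstrate numerically. The sketch below (an illustration, not the book's algorithm) splits an 8-point sequence into its even- and odd-indexed parts, takes their 4-point DFTs with the built-in fft, and combines them with the twiddle factors W_N^m; the result should match the 8-point fft of the original sequence.
f  = [1 2 3 4 5 6 7 8];          % an arbitrary 8-point test sequence
N  = 8;  WN = exp(-1i*2*pi/N);
Fe = fft(f(1:2:end));            % N/2-point DFT of the even-indexed samples f[0], f[2], ...
Fo = fft(f(2:2:end));            % N/2-point DFT of the odd-indexed samples f[1], f[3], ...
m  = 0:N-1;
F  = [Fe Fe] + (WN.^m).*[Fo Fo]; % F[m] = Feven[m] + WN^m Fodd[m], using periodicity of Fe and Fo
max(abs(F - fft(f)))             % should be essentially zero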
Figure 10.5 shows the signal flow graph of a decimation in time, in-place FFT algorithm for N = 8 ,
where the input is shuffled in accordance with the above procedure. The subscript N in W has been
omitted for clarity.
Figure 10.5. Signal flow graph of a decimation in time, in-place FFT algorithm for N = 8. The shuffled input x[0], x[4], x[2], x[6], x[1], x[5], x[3], x[7] enters at Column 0, and the outputs X[0] through X[7] appear in natural order at Column 3.
In the signal flow graph of Figure 10.5, the input x [ n ] appears in Column 0 . The N ⁄ 4 , N ⁄ 2 , and
N -point FFTs are in Columns 1 , 2 , and 3 respectively.
In simplified form, the output of the node associated with row $R$ and column $C$, indicated as $Y(R,C)$, is found from
$Y[R,C] = Y[R_i, C-1] + W^m\,Y[R_j, C-1]$   (10.69)
The binary input words and the bit-reversed words applicable to this signal flow graph are shown in Table 10.5.
TABLE 10.5 Binary words for the signal flow graph of Figure 10.5
n Binary Word Reversed-Bit Word Input Order
0 000 000 x[0]
1 001 100 x[4]
2 010 010 x[2]
3 011 110 x[6]
4 100 001 x[1]
5 101 101 x[5]
6 110 011 x[3]
7 111 111 x[7]
Example 10.5
Using (10.69) with the signal flow graph of Figure 10.5, compute the spectral component X [ 3 ] in
j
terms of the inputs x [ i ] and vectors W . Then, verify that the result is the same as that obtained by
direct application of the DFT.
Solution:
The N ⁄ 4 point FFT appears in Column 1. Using (10.69) we get:
$Y[0,1] = x[0] + W^0 x[4] = Y[0,0] + W^0\,Y[1,0]$
$Y[1,1] = x[0] + W^4 x[4] = Y[0,0] + W^4\,Y[1,0]$
$Y[2,1] = x[2] + W^0 x[6] = Y[2,0] + W^0\,Y[3,0]$
$Y[3,1] = x[2] + W^4 x[6] = Y[2,0] + W^4\,Y[3,0]$   (10.70)
$Y[4,1] = x[1] + W^0 x[5] = Y[4,0] + W^0\,Y[5,0]$
$Y[5,1] = x[1] + W^4 x[5] = Y[4,0] + W^4\,Y[5,0]$
$Y[6,1] = x[3] + W^0 x[7] = Y[6,0] + W^0\,Y[7,0]$
$Y[7,1] = x[3] + W^4 x[7] = Y[6,0] + W^4\,Y[7,0]$
Y[0,2] = Y[0,1] + W^0 Y[2,1]
Y[1,2] = Y[1,1] + W^2 Y[3,1]
Y[2,2] = Y[0,1] + W^4 Y[2,1]
Y[3,2] = Y[1,1] + W^6 Y[3,1]                                                     (10.71)
Y[4,2] = Y[4,1] + W^0 Y[6,1]
Y[5,2] = Y[5,1] + W^2 Y[7,1]
Y[6,2] = Y[4,1] + W^4 Y[6,1]
Y[7,2] = Y[5,1] + W^6 Y[7,1]
With the equations of (10.70), (10.71), and (10.72), we can determine any of the 8 spectral compo-
nents. Our example calls for X [ 3 ] ; then, with reference to the signal flow chart of Figure 10.5 and
the fourth of the equations in (10.72),
X[3] = Y[3,3] = Y[3,2] + W^3 Y[7,2]                                              (10.73)
Also, from (10.71),
Y[3,2] = Y[1,1] + W^6 Y[3,1]                                                     (10.74)
and
Y[7,2] = Y[5,1] + W^6 Y[7,1]                                                     (10.75)
Finally, from (10.70)
Y[1,1] = Y[0,0] + W^4 Y[1,0]
Y[3,1] = Y[2,0] + W^4 Y[3,0]
Y[5,1] = Y[4,0] + W^4 Y[5,0]                                                     (10.76)
Y[7,1] = Y[6,0] + W^4 Y[7,0]
and making these substitutions into (10.73), we get
Y[3,3] = Y[3,2] + W^3 Y[7,2]
       = Y[1,1] + W^6 Y[3,1] + W^3 { Y[5,1] + W^6 Y[7,1] }
       = Y[0,0] + W^4 Y[1,0] + W^6 { Y[2,0] + W^4 Y[3,0] }
         + W^3 { ( Y[4,0] + W^4 Y[5,0] ) + W^6 ( Y[6,0] + W^4 Y[7,0] ) }
       = Y[0,0] + W^4 Y[1,0] + W^6 Y[2,0] + W^10 Y[3,0]
         + W^3 Y[4,0] + W^7 Y[5,0] + W^9 Y[6,0] + W^13 Y[7,0]                    (10.77)
and, since the shuffled inputs of Table 10.5 are Y[0,0] = x[0], Y[1,0] = x[4], Y[2,0] = x[2],
Y[3,0] = x[6], Y[4,0] = x[1], Y[5,0] = x[5], Y[6,0] = x[3], and Y[7,0] = x[7],
X[3] = x[0] + x[1]W^3 + x[4]W^4 + x[2]W^6
     + x[5]W^7 + x[3]W^9 + x[6]W^10 + x[7]W^13                                   (10.78)
     = x[0] + x[1]W^3 + x[2]W^6 + x[3]W^9
     + x[4]W^4 + x[5]W^7 + x[6]W^10 + x[7]W^13
We will verify that this expression is correct by the use of the direct DFT of (10.17), that is,
X[m] = Σ_{n=0}^{N−1} x(n) W_N^(mn)
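As a numerical check, the following MATLAB sketch (with an arbitrary, assumed test sequence) compares (10.78) against MATLAB's fft function:
% Sketch: check (10.78) against the direct DFT for an arbitrary 8-point sequence
x = [1 2 3 4 5 6 7 8];            % assumed test vector; any 8 values will do
W = exp(-1i*2*pi/8);              % the W_8 phase factor
Xfft = fft(x);                    % Xfft(4) holds X[3] (MATLAB indices start at 1)
X3   = x(1) + x(2)*W^3 + x(3)*W^6 + x(4)*W^9 + ...
       x(5)*W^4 + x(6)*W^7 + x(7)*W^10 + x(8)*W^13;   % right side of (10.78)
disp(abs(Xfft(4) - X3))           % zero (within round-off) confirms the result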
10.7 Summary
• The N-point DFT is defined as
X[m] = Σ_{n=0}^{N−1} x[n] e^(−j2πmn/N)
where N represents the number of points that are equally spaced in the interval 0 to 2π on the
unit circle of the z -plane, and m = 0, 1, 2, …, N – 1 .
• The N-point Inverse DFT is defined as
x[n] = (1/N) Σ_{m=0}^{N−1} X[m] e^(j2πmn/N)
for n = 0, 1, 2, …, N − 1.
• In general, the discrete frequency transform X [ m ] is complex, and it is expressed as
X [ m ] = Re { X [ m ] } + Im { X [ m ] }
• We can use the fft(x) function to compute the DFT, and the ifft(x) function to compute the
Inverse DFT.
• The term e^(−j2π/N) is a rotating vector, where the range 0 ≤ θ ≤ 2π is divided into N equal
segments of 360/N degrees each. It is denoted as W_N, that is,
W_N = e^(−j2π/N)
and consequently
W_N^(−1) = e^(j2π/N)
• The correspondence between a discrete time sequence and its DFT is denoted as x[n] ⇔ X[m]
• The linearity property of the DFT states that ax_1[n] + bx_2[n] + … ⇔ aX_1[m] + bX_2[m] + …
• The time convolution property of the DFT states that x[n]∗h[n] ⇔ X[m] ⋅ H[m]
• The frequency convolution property of the DFT states that
x[n] ⋅ y[n] ⇔ (1/N) Σ_{k=0}^{N−1} X[k] Y[m−k] = (1/N) X[m]∗Y[m]
• The sampling theorem, also known as Shannon's Sampling Theorem, states that if a continuous
time function f(t) is band-limited with its highest frequency component less than W, then f(t)
can be completely recovered from its sampled values, if the sampling frequency is equal to or
greater than 2W.
• A typical digital signal processing system contains a low-pass analog filter, often called a pre-sam-
pling filter, to ensure that the highest frequency allowed into the system is no greater than half the
sampling rate, so that the signal can be recovered. The highest frequency allowed by the pre-sam-
pling filter is referred to as the Nyquist frequency, and it is denoted as f_n.
• If a signal is not band-limited, or if the sampling rate is too low, the spectral components of the sig-
nal will overlap one another, and this condition is called aliasing.
• If a discrete time signal has an infinite length, we can terminate the signal at a desired finite num-
ber of terms by multiplying it by a window function. However, we must choose a suitable window
function; otherwise, the sequence will be terminated abruptly, producing the effect of leakage.
• If a significant frequency component of a discrete time signal lies between two spectral lines, it
goes undetected; this is called the picket-fence effect. This effect can be alleviated by adding zeros
at the end of the discrete signal, thereby changing the period, which in turn changes the location
of the spectral lines.
• The number of operations required to compute the DFT can be significantly reduced by the FFT
algorithm.
• The Category I FFT algorithms are classified either as decimation in time or decimation in fre-
quency. Decimation in time implies that the FFT algorithm is developed in terms of the direct DFT,
whereas decimation in frequency implies that it is developed in terms of the Inverse DFT.
• The Category II FFT algorithms are classified either as in-place or natural input−output. In-place
refers to the process where the result of a computation of a new vector is stored in the same mem-
ory location as the result of the previous computation. Natural input−output refers to the process
where the output appears in the same (natural) order as the input.
• The FFT algorithms are usually shown in a signal flow graph. In some signal flow graphs the input
is shuffled and the output is natural, and in others the input is natural and the output is shuffled.
These combinations occur in both decimation in time, and decimation in frequency algorithms.
10.8 Exercises
1. Compute the DFT of the sequence x [ 0 ] = x [ 1 ] = 1 , x [ 2 ] = x [ 3 ] = – 1
2. A square waveform is represented by the discrete time sequence
x [ 0 ] = x [ 1 ] = x [ 2 ] = x [ 3 ] = 1 and x [ 4 ] = x [ 5 ] = x [ 6 ] = x [ 7 ] = – 1
3. Prove that
a. x[n] cos(2πkn/N) ⇔ (1/2){ X[m−k] + X[m+k] }
b. x[n] sin(2πkn/N) ⇔ (1/j2){ X[m−k] − X[m+k] }
4. The signal flow graph of Figure 10.6 is a decimation in time, natural-input, shuffled-output type
FFT algorithm. Using this graph and relation (10.69), compute the frequency component X [ 3 ] .
Verify that this is the same as that found in Example 10.5.
Figure 10.6. Signal flow graph for Exercise 4 (decimation in time, natural input, shuffled output, N = 8)
5. The signal flow graph of Figure 10.7 is a decimation in frequency, natural input, shuffled output
type FFT algorithm. There are two equations that relate successive columns. The first is
Y_dash(R, C) = Y_dash(R_i, C−1) + Y_dash(R_j, C−1)
and it is used with the nodes where two dashed lines terminate on them. The second is
Y_sol(R, C) = W^m [ Y_sol(R_i, C−1) − Y_sol(R_j, C−1) ]
and it is used with the nodes where two solid lines terminate on them. The numbers inside the cir-
cles denote the power of W_N, and the minus (−) sign below serves as a reminder that the brack-
eted term of the second equation involves a subtraction. Using this graph and the above equa-
tions, compute the frequency component X[3]. Verify that this is the same as in Example 10.5.
Figure 10.7. Signal flow graph for Exercise 5 (decimation in frequency, natural input, shuffled output, N = 8)
1. F(m) = Σ_{n=0}^{3} x[n] e^(−j2πmn/4), computed for m = 0, 1, 2, and 3:
m = 0:
F(0) = (1)⋅(e^0) + (1)⋅(e^0) + (−1)⋅(e^0) + (−1)⋅(e^0)
     = (1)⋅(1) + (1)⋅(1) + (−1)⋅(1) + (−1)⋅(1) = 0
m = 1:
F(1) = (1)⋅(e^0) + (1)⋅(e^((−j2π/4)×1)) + (−1)⋅(e^((−j2π/4)×2)) + (−1)⋅(e^((−j2π/4)×3))
     = 1 + cos(π/2) − j sin(π/2) − cos π + j sin π − cos(3π/2) + j sin(3π/2)
     = 1 + 0 − j + 1 + 0 − 0 − j = 2 − j2
m = 2:
F(2) = (1)⋅(e^0) + (1)⋅(e^((−j2π/4)×2)) + (−1)⋅(e^((−j2π/4)×4)) + (−1)⋅(e^((−j2π/4)×6))
     = 1 + cos π − j sin π − cos 2π + j sin 2π − cos 3π + j sin 3π
     = 1 − 1 − 0 − 1 + 0 + 1 + 0 = 0
m = 3:
F(3) = (1)⋅(e^0) + (1)⋅(e^((−j2π/4)×3)) + (−1)⋅(e^((−j2π/4)×6)) + (−1)⋅(e^((−j2π/4)×9))
     = 1 + cos(3π/2) − j sin(3π/2) − cos 3π + j sin 3π − cos(9π/2) + j sin(9π/2)
     = 1 + 0 + j + 1 + 0 − 0 + j = 2 + j2
Check with MATLAB:
fn=[1 1 -1 -1]; Fm=fft(fn)
Fm =
0 2.0000 - 2.0000i 0 2.0000 + 2.0000i
2.
x [ 0 ] = x [ 1 ] = x [ 2 ] = x [ 3 ] = 1 and x [ 4 ] = x [ 5 ] = x [ 6 ] = x [ 7 ] = – 1
fn=[1 1 1 1 -1 -1 -1 -1]; magXm=abs(fft(fn)); fprintf(' \n');...
fprintf('magXm0 = %4.2f \t', magXm(1)); fprintf('magXm1 = %4.2f \t', magXm(2));...
fprintf('magXm2 = %4.2f \t', magXm(3)); fprintf('magXm3 = %4.2f \t', magXm(4));...
fprintf(' \n');...
fprintf('magXm4 = %4.2f \t', magXm(5)); fprintf('magXm5 = %4.2f \t', magXm(6));...
fprintf('magXm6 = %4.2f \t', magXm(7)); fprintf('magXm7 = %4.2f \t', magXm(8))
magXm0 = 0.00 magXm1 = 5.23 magXm2 = 0.00 magXm3 = 2.16
magXm4 = 0.00 magXm5 = 2.16 magXm6 = 0.00 magXm7 = 5.23
The MATLAB stem command can be used to plot discrete sequence data. For this exercise we
use the code
fn=[1 1 1 1 -1 -1 -1 -1]; stem(abs(fft(fn)))
and we obtain the plot below.
3.
We must prove that
x[n] cos(2πkn/N) ⇔ (1/2){ X[m−k] + X[m+k] }  and  x[n] sin(2πkn/N) ⇔ (1/j2){ X[m−k] − X[m+k] }
From the frequency shift property of the DFT,
W_N^(−km) x[n] ⇔ X[m−k]   (1)
Then,
W_N^(km) x[n] ⇔ X[m+k]   (2)
Adding (1) and (2) and multiplying the sum by 1/2, we get
( W_N^(−km) + W_N^(km) ) x[n] / 2 = ( e^(j2πkn/N) + e^(−j2πkn/N) ) x[n] / 2 = x[n] cos(2πkn/N)
and therefore x[n] cos(2πkn/N) ⇔ (1/2){ X[m−k] + X[m+k] }.
Likewise, subtracting (2) from (1) and multiplying the difference by 1/j2, we get
( W_N^(−km) − W_N^(km) ) x[n] / j2 = ( e^(j2πkn/N) − e^(−j2πkn/N) ) x[n] / j2 = x[n] sin(2πkn/N)
and therefore x[n] sin(2πkn/N) ⇔ (1/j2){ X[m−k] − X[m+k] }.
4.
Referring to the signal flow graph of Figure 10.6 (natural input, shuffled output):
F(3) = X(3) = Y(6,3) = Y(6,2) + W_N^3 Y(7,2)   (1)
where
Y(6,2) = Y(4,1) + W_N^6 Y(6,1)
and
Y(7,2) = Y(5,1) + W_N^6 Y(7,1)
By comparison, we see that the first 4 terms of (3) are the same as the first, second, fourth, and sixth
terms of (2) since Y(k,0) = x(k), that is, Y(0,0) = x(0), Y(1,0) = x(1), and so on. The
remaining terms in (2) and (3) are also the same since W_8^(kN+r) = W_8^r and thus W_8^12 = W_8^4,
W_8^15 = W_8^7, W_8^18 = W_8^10, and W_8^21 = W_8^13.
5.
Y_dash(R, C) = Y_dash(R_i, C−1) + Y_dash(R_j, C−1)
Y_sol(R, C) = W^m [ Y_sol(R_i, C−1) − Y_sol(R_j, C−1) ]
Referring to the signal flow graph of Figure 10.7:
We are asked to compute F ( 3 ) only. However, we will derive all equations as we did in Example
10.5.
Column 1 (C=1):
Y ( 0, 1 ) = Y ( 0, 0 ) + Y ( 4, 0 )
Y ( 1, 1 ) = Y ( 1, 0 ) + Y ( 5, 0 )
Y ( 2, 1 ) = Y ( 2, 0 ) + Y ( 6, 0 )
Y ( 3, 1 ) = Y ( 3, 0 ) + Y ( 7, 0 )
Y(4,1) = W_N^0 [ Y(0,0) − Y(4,0) ]   (1)
Y(5,1) = W_N^1 [ Y(1,0) − Y(5,0) ]
Y(6,1) = W_N^2 [ Y(2,0) − Y(6,0) ]
Y(7,1) = W_N^3 [ Y(3,0) − Y(7,0) ]
Column 2 (C=2):
Y ( 0, 2 ) = Y ( 0, 1 ) + Y ( 2, 1 )
Y ( 1, 2 ) = Y ( 1, 1 ) + Y ( 3, 1 )
Y(2,2) = W_N^0 [ Y(0,1) − Y(2,1) ]
Y(3,2) = W_N^2 [ Y(1,1) − Y(3,1) ]
Y(4,2) = Y(4,1) + Y(6,1)               (2)
Y(5,2) = Y(5,1) + Y(7,1)
Y(6,2) = W_N^0 [ Y(4,1) − Y(6,1) ]
Y(7,2) = W_N^2 [ Y(5,1) − Y(7,1) ]
Column 3 (C=3):
Y(0,3) = Y(0,2) + Y(1,2)
Y(1,3) = W_N^0 [ Y(0,2) − Y(1,2) ]
Y(2,3) = Y(2,2) + Y(3,2)
Y(3,3) = W_N^0 [ Y(2,2) − Y(3,2) ]
Y(4,3) = Y(4,2) + Y(5,2)               (3)
Y(5,3) = W_N^0 [ Y(4,2) − Y(5,2) ]
Y(6,3) = Y(6,2) + Y(7,2)
Y(7,3) = W_N^0 [ Y(6,2) − Y(7,2) ]
F ( 3 ) = X ( 3 ) = Y ( 6, 3 ) = Y ( 6, 2 ) + Y ( 7, 2 ) (4)
where
Y(6,2) = W_N^0 [ Y(4,1) − Y(6,1) ]
and
Y(7,2) = W_N^2 [ Y(5,1) − Y(7,1) ]
From (1),
Y(4,1) = W_N^0 [ Y(0,0) − Y(4,0) ]
Y(5,1) = W_N^1 [ Y(1,0) − Y(5,0) ]
Y(6,1) = W_N^2 [ Y(2,0) − Y(6,0) ]
Y(7,1) = W_N^3 [ Y(3,0) − Y(7,0) ]
From Exercise 4,
F(3) = X(3) = x(0) + W_N^3 x(1) + W_N^6 x(2) + W_N^9 x(3)
     + W_N^12 x(4) + W_N^15 x(5) + W_N^18 x(6) + W_N^21 x(7)                     (6)
Since Y(k,0) = x(k), W_8^(8i+n) = W_8^n, and W_8^(n±4) = −W_8^n (see proof below), we see that
W_8^6 = −W_8^2, W_8^9 = −W_8^5, W_8^12 = −W_8^0, W_8^15 = −W_8^3, W_8^18 = W_8^2, and
W_8^21 = W_8^5. Therefore, (5) and (6) are the same.
Proof that W_8^(n±4) = −W_8^n:
W_8^(n±4) = W_8^n ⋅ W_8^(±4) = W_8^n ⋅ e^((−j2π/8)(±4)) = W_8^n ⋅ (cos π ∓ j sin π) = −W_8^n
This chapter is an introduction to analog and digital filters. It begins with the basic analog filters,
transfer functions, and frequency response. The amplitude characteristics of Butterworth and
Chebychev filters and the conversion of analog to equivalent digital filters using the bilinear trans-
formation are presented. It concludes with design examples using MATLAB.
Figure 11.1. Amplitude characteristics |V_out/V_in| of the ideal low-pass, high-pass, band-pass, and band-elimination filters, showing the pass and stop bands relative to the cutoff frequencies ω_c (or ω_1 and ω_2)
Another, less frequently mentioned filter is the all-pass or phase shift filter. It has a constant ampli-
tude response but its phase varies with frequency. Please refer to Exercise 4.
A digital filter, in general, is a computational process, or algorithm that converts one sequence of
numbers representing the input signal into another sequence representing the output signal. Accord-
Example 11.1
Derive expressions for the magnitude and phase responses of the series RC network of Figure 11.2,
and sketch their characteristics.
Figure 11.2. Series RC network for Example 11.1
Solution:
V_out = [ (1/jωC) / (R + 1/jωC) ] V_in
or
G(jω) = V_out/V_in = 1/(1 + jωRC)
      = 1 / [ √(1 + ω²R²C²) ∠atan(ωRC) ]
      = [ 1/√(1 + ω²R²C²) ] ∠−atan(ωRC)                                          (11.1)
The magnitude of (11.1) is
|G(jω)| = |V_out/V_in| = 1/√(1 + ω²R²C²)                                         (11.2)
and the phase angle (argument) is
θ = arg{G(jω)} = arg(V_out/V_in) = −atan(ωRC)                                    (11.3)
We can obtain a quick sketch for the magnitude G ( jω ) versus ω by evaluating (11.2) at ω = 0 ,
ω = 1 ⁄ RC , and ω → ∞ . Thus,
as ω → 0 ,
G(jω) ≅ 1
for ω = 1/RC,
|G(jω)| = 1/√2 = 0.707
and as ω → ∞ ,
G(jω) ≅ 0
We will use the MATLAB code below to plot G ( jω ) versus radian frequency ω . This is shown in
Figure 11.3 where, for convenience, we let RC = 1 .
w=0:0.02:10; RC=1; magGs=1./sqrt(1+(w.*RC).^2); semilogx(w,magGs); grid
For ω = 1 ⁄ RC ,
θ = – atan 1 = – 45°
For ω = – 1 ⁄ RC ,
θ = – atan ( – 1 ) = 45°
As ω → – ∞ ,
θ = – atan ( – ∞ ) = 90°
and as ω → ∞ ,
θ = – atan ( ∞ ) = – 90 °
We will use the MATLAB code below to plot the phase angle θ versus radian frequency ω. This is
shown in Figure 11.4 where, for convenience, we let RC = 1.
w=-8:0.02:8; RC=1; argGs=-atan(w.*RC).*180./pi; plot(w,argGs); grid
Example 11.2
The network of Figure 11.5 is also a series RC circuit, where the positions of the resistor and capaci-
tor have been interchanged. Derive expressions for the magnitude and phase responses, and sketch
their characteristics.
Figure 11.5. Series RC network for Example 11.2
Solution:
V_out = [ R / (R + 1/jωC) ] V_in
or
G(jω) = V_out/V_in = jωRC/(1 + jωRC) = (jωRC + ω²R²C²)/(1 + ω²R²C²) = ωRC(j + ωRC)/(1 + ω²R²C²)
      = [ ωRC √(1 + ω²R²C²) ∠atan(1/(ωRC)) ] / (1 + ω²R²C²)
      = [ 1/√(1 + 1/(ω²R²C²)) ] ∠atan(1/(ωRC))                                   (11.4)
The magnitude of (11.4) is
|G(jω)| = 1/√(1 + 1/(ω²R²C²))                                                    (11.5)
and the phase angle (argument) is
θ = arg{G(jω)} = atan(1/(ωRC))                                                   (11.6)
We can obtain a quick sketch for the magnitude G ( j ω ) versus ω by evaluating (11.5) at ω = 0 ,
ω = 1 ⁄ RC , and ω → ∞ . Thus,
As ω → 0 ,
G(jω) ≅ 0
For ω = 1/RC,
|G(jω)| = 1/√2 = 0.707
and as ω → ∞ ,
G(jω) ≅ 1
We will use the MATLAB code below to plot |G(jω)| versus radian frequency ω. This is shown in
Figure 11.6 where, for convenience, we let RC = 1.
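A minimal sketch of such code, assuming RC = 1 and using the magnitude expression of (11.5), could be:
% Sketch: magnitude response of the high-pass RC network of Example 11.2, assuming RC = 1
w = 0.01:0.02:100; RC = 1;
magGs = 1./sqrt(1 + 1./(w.*RC).^2);   % equation (11.5)
semilogx(w, magGs); grid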
We can also obtain a quick sketch for the phase angle, i.e., θ = arg{G(jω)} versus ω, by evaluating
(11.6) at ω = 0, ω = 1/RC, ω = −1/RC, ω → −∞, and ω → ∞. Thus,
As ω → 0⁺,
θ ≅ atan ∞ ≅ 90°
For ω = 1/RC,
θ = atan 1 = 45°
For ω = −1/RC,
θ = atan(−1) = −45°
As ω → −∞,
θ ≅ atan 0 ≅ 0° (approached from below)
and as ω → ∞,
θ ≅ atan 0 ≅ 0° (approached from above)
We will use the MATLAB code below to plot the phase angle θ versus radian frequency ω . This is
shown in Figure 11.7 where, for convenience, we let RC = 1 .
w=-8:0.02:8; RC=1; argGs=atan(1./(w.*RC)).*180./pi; plot(w,argGs); grid
Other low-pass, high-pass, band-pass, and band-elimination passive filters consist of combinations
of resistors, inductors, and capacitors. They are given as exercises in Chapter 4.
Example 11.3
The circuit of Figure 11.8 is an active low-pass filter and its magnitude G ( j ω ) versus ω was derived
and plotted in Example 4.7 of Chapter 4.
Figure 11.8. Active low-pass filter for Example 11.3 (R1 = 200 kΩ, R2 = 40 kΩ, R3 = 50 kΩ, C1 = 25 nF, C2 = 10 nF)
Since (jω)∗ = (−jω), the square of the magnitude of a complex number can be expressed as the
product of that number and its complex conjugate. Thus, if the magnitude is A, then
A²(ω) = |G(jω)|² = G(jω)G∗(jω) = G(jω) ⋅ G(−jω)                                  (11.8)
Now, G(jω) can be considered as G(s) evaluated at s = jω, and thus (11.7) is justified. Also, since
A is understood to represent the magnitude, it need not be enclosed in vertical lines.
Not all amplitude-squared functions can be decomposed to G ( s ) and G ( – s ) rational functions; only
even functions of ω , positive for all ω , and proper rational functions* can satisfy (11.7).
Example 11.4
It is given that
G(s) = (3s² + 5s + 7)/(s² + 4s + 6)
Compute A²(ω).
Solution:
Since
G(s) = (3s² + 5s + 7)/(s² + 4s + 6)
it follows that
G(−s) = (3s² − 5s + 7)/(s² − 4s + 6)
and
G(s) ⋅ G(−s) = [ (3s² + 5s + 7)/(s² + 4s + 6) ] ⋅ [ (3s² − 5s + 7)/(s² − 4s + 6) ]
             = (9s⁴ + 17s² + 49)/(s⁴ − 4s² + 36)
Therefore, with s = jω (that is, s² = −ω²),
A²(ω) = G(s) ⋅ G(−s)|_(s = jω) = (9ω⁴ − 17ω² + 49)/(ω⁴ + 4ω² + 36)
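As a side check (not part of the original solution), the Symbolic Math Toolbox can confirm this result:
% Sketch: symbolic verification of A^2(w) for Example 11.4
syms s w
G  = (3*s^2 + 5*s + 7)/(s^2 + 4*s + 6);
A2 = simplify(subs(G*subs(G, s, -s), s, 1j*w))
% expected result: (9*w^4 - 17*w^2 + 49)/(w^4 + 4*w^2 + 36)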
* It was stated earlier, that a rational function is said to be proper if the largest power in the denominator is equal
to or larger than that of the numerator.
The general form of the amplitude square function A²(ω) is
A²(ω) = C ( b_k ω^2k + b_(k−1) ω^(2k−2) + … + b_0 ) / ( a_k ω^2k + a_(k−1) ω^(2k−2) + … + a_0 )    (11.9)
where C is the DC gain, a and b are constant coefficients, and k is a positive integer denoting the
order of the filter. Once the amplitude square function A²(ω) is known, we can find G(s) from
(11.9) with the substitution (jω)² = −ω² = s², or ω² = −s², that is,
G(s) ⋅ G(−s) = A²(ω)|_(ω² = −s²)                                                 (11.10)
In the simplest low-pass filter, the DC gain of the amplitude square function is unity. In this case
(11.9) reduces to
A²(ω) = b_0 / ( a_k ω^2k + a_(k−1) ω^(2k−2) + … + a_0 )                          (11.11)
and at high frequencies this can be approximated as
A²(ω) ≈ (b_0/a_k) / ω^2k                                                         (11.12)
On a logarithmic frequency scale, the distance between two frequencies ω_1 and ω_2 is
u_2 − u_1 = log10 ω_2 − log10 ω_1 = log10 (ω_2/ω_1)                              (11.13)
If these frequencies are such that ω_2 = 2ω_1, we say that these frequencies are separated by one
octave, and if ω_2 = 10ω_1, they are separated by one decade.
To compute the attenuation rate of (11.12), we take the square root of both sides. Then,
A(ω) = √(b_0/a_k) / ω^k = Constant / ω^k = B / ω^k                               (11.14)
Taking the common log of both sides of (11.14) and multiplying by 20, we get
20 log10 A(ω) = 20 log10 B − 20k log10 ω
This is the equation of a straight line with slope = −20k dB/decade and intercept = B, as shown
in Figure 11.9.
Figure 11.9. The −20 dB/decade = −6 dB/octave low-pass filter attenuation plotted as A(ω) in dB versus log10 ω
Example 11.5
Given the amplitude square function
A²(ω) = 16(−ω² + 1) / [ (ω² + 4)(ω² + 9) ]                                       (11.17)
There is no restriction on the zeros but, for stability*, we select the left-half s -plane poles.
We must also select the gain constant such that G ( 0 ) = A ( 0 ) .
Let
G(s) = K(s² + 1) / [ (s + 2)(s + 3) ]                                            (11.19)
The magnitude-squared function of the Butterworth analog low-pass filter is
A²(ω) = 1 / [ (ω/ω_C)^2k + 1 ]                                                   (11.20)
where k is a positive integer, and ω_C is the cutoff (3 dB) frequency. Figure 11.10 shows relation
(11.20) for k = 1, 2, 4, and 8. The plot was created with the following MATLAB code.
w_w0=0:0.02:3; Aw2k1=sqrt(1./(w_w0.^2+1)); Aw2k2=sqrt(1./(w_w0.^4+1));...
Aw2k4=sqrt(1./(w_w0.^8+1)); Aw2k8=sqrt(1./(w_w0.^16+1));...
plot(w_w0,Aw2k1,w_w0,Aw2k2,w_w0,Aw2k4,w_w0,Aw2k8); grid
* Generally, a system is said to be stable if a finite input produces a finite output. Alternately, a system is stable if
the impulse response h ( t ) vanishes after a sufficiently long time. Stability is discussed in Feedback and Control
Systems textbooks.
All Butterworth filters have the property that all poles of the transfer functions that describe them
lie on the circumference of a circle of radius ω_C, and they are 2π/2k radians apart. Thus, if k = odd,
the poles start at zero radians, and if k = even, they start at π/2k. But regardless of whether k is odd
or even, the poles are distributed symmetrically with respect to the jω axis. For stability, we choose
the left half-plane poles to form G(s).
We can find the nth roots of a complex number s by DeMoivre's theorem. It states that
( r e^(jθ) )^(1/n) = r^(1/n) e^(j(θ + 2kπ)/n)        k = 0, ±1, ±2, …            (11.21)
Example 11.6
Derive the transfer function G ( s ) for the third order ( k = 3 ) Butterworth low-pass filter with nor-
malized cutoff frequency ω C = 1 rad ⁄ s .
Solution:
With k = 3 and ω_c = 1 rad/s, (11.20) reduces to
A²(ω) = 1/(ω⁶ + 1)                                                               (11.22)
With the substitution ω² = −s², G(s)G(−s) = 1/(1 − s⁶), and its poles are the roots of s⁶ = 1, that is,
( 1 ⋅ e^(j0) )^(1/6) = 1^(1/6) e^(j(0 + 2kπ)/6)        k = 0, 1, 2, 3, 4, 5
Thus,
s_1 = 1∠0° = 1
s_2 = 1∠60° = 1/2 + j√3/2
s_3 = 1∠120° = −1/2 + j√3/2
s_4 = 1∠180° = −1
s_5 = 1∠240° = −1/2 − j√3/2
s_6 = 1∠300° = 1/2 − j√3/2
As expected, these six poles lie on the circumference of the circle with radius ω c = 1 as shown in
Figure 11.11.
Figure 11.11. Location of the poles for the transfer function of Example 11.6
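These pole locations can be confirmed numerically; the MATLAB sketch below is not part of the original example and simply finds the roots of 1 − s⁶ = 0.
% Sketch: the six poles of G(s)G(-s) for Example 11.6 are the roots of 1 - s^6 = 0
s_poles = roots([-1 0 0 0 0 0 1]);              % coefficients of -s^6 + 1
disp([abs(s_poles) angle(s_poles)*180/pi])      % magnitudes (all 1) and angles in degrees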
The transfer function G(s) is formed with the left half-plane poles s_3, s_4, and s_5. Then,
G(s) = K / [ (s + 1/2 − j√3/2)(s + 1)(s + 1/2 + j√3/2) ]                         (11.24)
Since (s + 1/2 − j√3/2)(s + 1/2 + j√3/2) = s² + s + 1, and choosing K = 1 so that G(0) = 1, this simplifies to
G(s) = 1 / [ (s + 1)(s² + s + 1) ] = 1 / (s³ + 2s² + 2s + 1)
and this is the transfer function G(s) for the third order (k = 3) Butterworth low-pass filter with
normalized cutoff frequency ω_C = 1 rad/s.
The general form of any analog low-pass (Butterworth, Chebyshev, Elliptic, etc.) filter is
G(s)|_lp = b_0 / (a_k s^k + … + a_2 s² + a_1 s + a_0)                            (11.27)
The pole locations and the coefficients of the corresponding denominator polynomials, have been
derived and tabulated by Weinberg in Network Analysis and Synthesis, McGraw-Hill.
Table 11.1 shows the first through the fifth order coefficients for Butterworth analog low-pass filter
denominator polynomials with normalized frequency ω C = 1 rad ⁄ s .
We can also use the MATLAB buttap and zp2tf functions. The first returns the zeros, poles, and
gain for an N – th order normalized prototype Butterworth analog low-pass filter. The resulting fil-
ter has N poles around the unit circle in the left half plane, and no zeros. The second performs the
zero-pole to transfer function conversion.
Example 11.7
Use MATLAB to find the numerator b and denominator a coefficients for the third-order Butter-
worth low-pass filter prototype with normalized cutoff frequency*.
Solution:
[z,p,k]=buttap(3); [b,a]=zp2tf(z,p,k)
b =
0 0 0 1
a =
1.0000 2.0000 2.0000 1.0000
We observe that the denominator coefficients are the same as in Table 11.1.
Table 11.2 shows factored forms of the denominator polynomials in terms of linear and quadratic
factors with normalized frequency ω C = 1 rad ⁄ s .
The transfer function can also be expressed in terms of the poles as
G(s) = (−1)^n / ∏_{i=0}^{n} (s/s_i − 1)                                          (11.28)
where the factor (−1)^n is to ensure that G(0) = 1, and s_i denotes the poles on the left half of the
s-plane. They can be found from
s_i = ω_C [ −sin((2i + 1)π/2k) + j cos((2i + 1)π/2k) ]                           (11.29)
We must remember that the factors in Table 11.2 apply only when the cutoff frequency is normal-
ized to ω_C = 1 rad/s. If ω_C ≠ 1, we must scale the transfer function appropriately, that is,
G(s)|_actual = G(s/ω_actual)                                                     (11.30)
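As a quick illustration of (11.30) — not from the text, and using an arbitrary assumed cutoff of 4 rad/s — the substitution can be carried out symbolically:
% Sketch: scale the normalized 3rd-order Butterworth filter to an assumed cutoff of 4 rad/s
syms s
w_actual = 4;                                      % assumed cutoff frequency in rad/s
Gnorm    = 1/(s^3 + 2*s^2 + 2*s + 1);              % normalized prototype (w_C = 1 rad/s)
Gactual  = simplify(subs(Gnorm, s, s/w_actual))    % expect 64/(s^3 + 8*s^2 + 32*s + 64)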
Quite often, we require that in the stop band of the low-pass filter, that is, for ω ≥ ω_C, the attenua-
tion be steeper than −20 dB/decade, i.e., we require a sharper cutoff. As we have seen from the plots of
Figure 11.10, the Butterworth low-pass filter cutoff becomes sharper for larger values of k. Accord-
ingly, we generate the plot for different values of k shown in Figure 11.12 using the following MAT-
LAB code.
w_w0=1:0.02:10; dBk1=20.*log10(sqrt(1./(w_w0.^2+1)));...
dBk2=20.*log10(sqrt(1./(w_w0.^4+1))); dBk3=20.*log10(sqrt(1./(w_w0.^6+1)));...
dBk4=20.*log10(sqrt(1./(w_w0.^8+1))); dBk5=20.*log10(sqrt(1./(w_w0.^10+1)));...
dBk6=20.*log10(sqrt(1./(w_w0.^12+1))); dBk7=20.*log10(sqrt(1./(w_w0.^14+1)));...
dBk8=20.*log10(sqrt(1./(w_w0.^16+1))); semilogx(w_w0,dBk1,w_w0,dBk2,w_w0,dBk3,...
w_w0,dBk4,w_w0,dBk5,w_w0,dBk6,w_w0,dBk7,w_w0,dBk8); grid
Example 11.8
Using the attenuation curves of Figure 11.12, derive the transfer function of a Butterworth low-pass
analog filter with pass band bandwidth of 5 rad ⁄ s , and attenuation in the stop band at least
30 dB ⁄ decade for frequencies larger than 15 rad ⁄ s .
Solution:
We refer to Figure 11.12 and at ω ⁄ ω C = 15 ⁄ 5 = 3 , we see that the vertical line at this value crosses
the k = 3 curve at approximately – 28 dB , and the k = 4 curve at approximately – 37 dB . Since we
require that the attenuation be at least – 30 dB , we use the attenuation corresponding to the k = 4
curve. Accordingly, we choose a fourth order Butterworth low-pass filter whose normalized transfer
function, from Table 11.2, is
G(s)|_norm = 1 / [ (s² + 0.7654s + 1)(s² + 1.8478s + 1) ]                        (11.31)
and, with the substitution s → s/5 for the actual cutoff frequency ω_C = 5 rad/s,
G(s)|_actual = 1 / [ (s²/25 + 0.7654s/5 + 1)(s²/25 + 1.8478s/5 + 1) ]
Figure 11.13. Circuit of a second order VCVS low-pass filter
The transfer function of the second order VCVS low-pass filter of Figure 11.13 is given as
G(s) = K b ω_c² / (s² + a ω_c s + b ω_c²)                                        (11.33)
This is referred to as a second order all-pole* approximation to the ideal low-pass filter with cutoff
frequency ω C , where K is the gain, and the coefficients a and b are provided by tables.
For a non-inverting positive gain K , the circuit of Figure 11.13 satisfies the transfer function of
(11.33) with the conditions that
* The terminology “all-pole” stems from the fact that the s−plane contains poles only and the zeros are at ± ∞ ,
that is, the s−plane is all poles and no zeros.
R_1 = 2 / ( { a C_2 + √( [a² + 4b(K − 1)] C_2² − 4b C_1 C_2 ) } ω_c )            (11.34)
R_2 = 1 / (b C_1 C_2 R_1 ω_c²)                                                   (11.35)
R_3 = K(R_1 + R_2) / (K − 1)        K ≠ 1                                        (11.36)
R_4 = K(R_1 + R_2)                                                               (11.37)
Each factor in (11.38) can be realized by a stage (circuit). Then, the two stages can be cascaded as
shown in Figure 11.14.
Figure 11.14. Two second order stages cascaded to form a fourth order filter
Example 11.9
Design a second-order VCVS Butterworth low-pass filter with gain K = 2 and cutoff frequency
f C = 1 kHz .
Solution:
We will use the second order VCVS prototype op amp circuit of Figure 11.13, with capacitance val-
ues C_1 = C_2 = 0.01 µF = 10⁻⁸ F. From Table 11.3, a = 1.41421 = √2 and b = 1.
We substitute these values into (11.34) through (11.37), to find the values of the resistors.
We use MATLAB to do the calculations as follows:
C1=10^(-8); C2=C1; a=sqrt(2); b=1; K=2; wc=2*pi*10^3;
% and from (11.34) through (11.37)
R1=2/((a*C2+sqrt((a^2+4*b*(K-1))*C2^2-4*b*C1*C2))*wc);
R2=1/(b*C1*C2*R1*wc^2); R3=K*(R1+R2)/(K-1); R4=K*(R1+R2); fprintf(' \n');...
fprintf('R1 = %6.0f \t',R1); fprintf('R2 = %6.0f \t',R2);...
fprintf('R3 = %6.0f \t',R3); fprintf('R4 = %6.0f \t',R4)
R1 = 11254 R2 = 22508 R3 = 67524 R4 = 67524
These are the calculated values but they are not standard resistor values; we must select standard
resistor values as close as possible to the calculated values.
It will be interesting to find out what the frequency response of this filter looks like, with capacitors
C_1 = C_2 = 0.01 µF and standard 1% tolerance resistors with values R_1 = 11.3 kΩ,
R_2 = 2 × R_1 = 22.6 kΩ, and R_3 = R_4 = 68.1 kΩ.
We now substitute these values into the equations of (11.34) through (11.37), and we solve the first
equation of this set for the cutoff frequency ω C . Then, we use ω C with the transfer function of
(11.33). We do this with the following MATLAB code which produces the plot of Figure 11.15.
f=1:10:5000; R1=11300; R2=22600; R3=68100; R4=R3; C1=10^(-8); C2=C1;
a=sqrt(2); b=1; w=2*pi*f; fc=sqrt(1/(b*R1*R2*C1*C2))/(2*pi); wc=2*pi*fc;
K=1+R3/R4; s=w*j; Gw=(K.*b.*wc.^2)./(s.^2+a.*wc.*s+b.*wc.^2); magGw=abs(Gw);
semilogx(f,magGw); grid; hold on; xlabel('Frequency Hz'); ylabel('|Vout/Vin|');
title ('2nd Order Butterworth Low-Pass Filter Response')
The frequency response of this low-pass filter is shown in Figure 11.15.
Figure 11.15. Plot for the VCVS low-pass filter of Example 11.9
The Chebyshev Type I polynomials may be expressed as
C_k(x) = cos(k cos⁻¹ x)        |x| ≤ 1
and
C_k(x) = cosh(k cosh⁻¹ x)        |x| > 1                                         (11.40)
With k = 0,
C_0(x) = cos(0 ⋅ cos⁻¹ x) = 1                                                    (11.41)
With k = 1,
C_1(x) = cos(1 ⋅ cos⁻¹ x) = x*                                                   (11.42)
With k = 2,
C_2(x) = cos(2 cos⁻¹ x) = 2x² − 1                                                (11.43)
and this is shown by letting cos⁻¹ x = α. Then,
C_2(x) = cos(2α) = 2cos²α − 1 = 2cos²(cos⁻¹ x) − 1
       = 2 [cos(cos⁻¹ x)] ⋅ [cos(cos⁻¹ x)] − 1 = 2x² − 1
We can also use MATLAB to convert these trigonometric functions to algebraic polynomials. For
example,
syms x; expand(cos(2*acos(x)))
ans =
2*x^2-1
Using this iterated procedure we can show that with k = 3, 4, and 5 , we get
C_3(x) = 4x³ − 3x
C_4(x) = 8x⁴ − 8x² + 1                                                           (11.44)
C_5(x) = 16x⁵ − 20x³ + 5x
and so on.
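These polynomials can also be generated with the standard Chebyshev recursion C_(k+1)(x) = 2x C_k(x) − C_(k−1)(x); the recursion is not derived in the text, so the following MATLAB sketch is offered only as an illustration.
% Sketch: generate C0(x) ... C5(x) with the Chebyshev recursion (Symbolic Math Toolbox assumed)
syms x
C = sym(zeros(1, 6));
C(1) = 1;   C(2) = x;                      % C0(x) and C1(x)
for k = 2:5
    C(k+1) = expand(2*x*C(k) - C(k-1));    % C2(x) through C5(x)
end
disp(C)   % expect: 1, x, 2x^2-1, 4x^3-3x, 8x^4-8x^2+1, 16x^5-20x^3+5x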
We observe that for k = even, C_k(x) is an even function of x, and for k = odd, C_k(x) is an odd function of x.
The magnitude-squared function of a Type I Chebyshev low-pass filter is
A²(ω) = α / [ 1 + ε² C_k²(ω/ω_C) ]                                               (11.45)
where the quantity ε² is a parameter chosen to provide the desired passband ripple, and α is a constant
chosen to determine the desired DC gain.
* We recall that if x = cos y, then y = cos⁻¹ x, and cos(cos⁻¹ x) = x.
(Plot of the Chebyshev polynomials C_k(x) for k = 0 through 5 over the interval 0 ≤ x ≤ 2.)
The parameter α in (11.45) is a constant representing the DC gain, ε is a parameter used in deter-
mining the ripple in the pass-band, the subscript k denotes both the degree of the Type I Chebyshev
polynomial and the order of the transfer function, and ω C is the cutoff frequency. This filter pro-
duces a sharp cutoff rate in the transition band.
Figure 11.18 shows Type I Chebyshev amplitude frequency responses for k = 3 and k = 4 .
Figure 11.18. Type I Chebyshev low-pass filter amplitude response for even and odd values of k
The magnitude at ω = 0 is α when k = odd and α/√(1 + ε²) when k = even. This is shown in
Figure 11.18. The cutoff frequency is the largest value of ω_C for which
A(ω_C) = 1/√(1 + ε²)                                                             (11.46)
Stated in other words, the pass-band is the range over which the ripple oscillates with constant
bounds; this is the range from DC to ω_C. From (11.46), we observe that only when ε = 1 is the
magnitude at the cutoff frequency 0.707, i.e., the same as in other types of filters. But when 0 < ε < 1,
the conventional 3 dB cutoff frequency is greater than the ripple-width cutoff frequency ω_C.
Table 11.4 gives the ratio of the conventional cutoff frequency f 3 dB to the ripple width frequency
f C of a Type I Chebyshev low-pass filter.
Ripple (dB)      k=2        k=4
0.1              1.943      1.213
0.5              1.390      1.093
1.0              1.218      1.053
The pass-band ripple in dB is defined as
r_dB = 10 log10 ( A²_max / A²_min )                                              (11.47)
where A_max and A_min are the maximum and minimum values respectively of the amplitude A in the
pass-band interval. From (11.45),
A²(ω) = α / [ 1 + ε² C_k²(ω/ω_C) ]                                               (11.48)
and A²_max occurs when ε² C_k²(ω/ω_C) = 0. Then,
A²_max = α                                                                       (11.49)
To find A²_min, we must first confirm that C_k²(ω/ω_C) ≤ 1. This is true for 0 ≤ ω/ω_C ≤ 1; therefore,
A²_min = α / (1 + ε²)
and
r_dB = 10 log10 ( A²_max / A²_min ) = 10 log10 [ α / (α/(1 + ε²)) ] = 10 log10 (1 + ε²)
or
log10 (1 + ε²) = r_dB / 10
or
1 + ε² = 10^(r_dB/10)
or
ε² = 10^(r_dB/10) − 1                                                            (11.52)
We have seen that when k = odd, there is a maximum at ω = 0. At this frequency, (11.45) reduces
to
A²(0) = α                                                                        (11.53)
and for unity gain, α = 1 when k = odd.
However, for unity gain when k = even, we must have α = 1 + ε². This is because at ω = 0 we
must have C_k²(0) = 1 in accordance with (11.41). Then, the relation
A²(ω) = α / [ 1 + ε² C_k²(ω/ω_C) ]
reduces to
A²(0) = α / [ 1 + ε² C_k²(0) ] = α / (1 + ε²) = 1
or
α = 1 + ε²
Example 11.10
Derive the transfer function G ( s ) for the k = 2 , Type I Chebyshev function that has pass-band rip-
ple r dB = 1 dB , unity DC gain, and normalized cutoff frequency at ω C = 1 rad ⁄ s .
Solution:
From (11.45),
A²(ω) = α / [ 1 + ε² C_k²(ω/ω_C) ]                                               (11.54)
and since k = even, for unity DC gain we must have α = 1 + ε². Then, (11.54) becomes
A²(ω) = (1 + ε²) / [ 1 + ε² C_k²(ω/ω_C) ]
For k = 2,
C_2(x) = 2x² − 1
and
C_k²(ω/ω_C) = C_2²(ω) = (2ω² − 1)² = 4ω⁴ − 4ω² + 1
Also, from (11.52),
ε² = 10^(r_dB/10) − 1 = 10^(1/10) − 1 = 1.259 − 1 = 0.259
Then,
A²(ω) = (1 + 0.259) / [ 1 + 0.259(4ω⁴ − 4ω² + 1) ] = 1.259 / (1.036ω⁴ − 1.036ω² + 1.259)
and with ω² = −s²,
G(s)G(−s) = 1.259 / (1.036s⁴ + 1.036s² + 1.259)
We find the poles from the roots of the denominator using MATLAB.
d=[1.036 0 1.036 0 1.259]; p=roots(d); fprintf(' \n'); disp('p1 = '); disp(p(1));...
disp('p2 = '); disp(p(2)); disp('p3 = '); disp(p(3)); disp('p4 = '); disp(p(4))
p1 =
-0.5488 + 0.8951i
p2 =
-0.5488 - 0.8951i
p3 =
0.5488 + 0.8951i
p4 =
0.5488 - 0.8951i
We now form the transfer function from the left half-plane poles p 1 = – 0.5488 + j0.8951 and
p 2 = – 0.5488 – j 0.8951 . Then,
G(s) = K / [ (s − p_1)(s − p_2) ] = K / [ (s + 0.5488 − j0.8951)(s + 0.5488 + j0.8951) ]
Thus,
G(s) = K / (s² + 1.0976s + 1.1024)
and at s = 0,
G(0) = K / 1.1024
Also, A²(0) = 1 and A(0) = 1. Then,
G(0) = A(0) = K / 1.1024 = 1
or
K = 1.1024
Therefore, the transfer function for Example 11.10 is
G(s) = 1.1024 / (s² + 1.0976s + 1.1024)
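A quick numerical check of this result (not part of the original example) can be made with the freqs function:
% Sketch: verify the 1 dB pass-band ripple of the Example 11.10 transfer function
w  = 0:0.01:2;
b  = 1.1024;  a = [1 1.0976 1.1024];
Gw = freqs(b, a, w);
plot(w, abs(Gw)); grid
% |G| should oscillate between 1 and 1/sqrt(1.259) = 0.891 for 0 <= w <= 1 (1 dB ripple)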
We can plot the attenuation band for Type I Chebyshev filters, as we did with the Butterworth filters
in Figure 11.12, but we need to construct one for each value of dB in the ripple region. Instead, we
will develop the following procedure.
We begin with the Chebyshev approximation
A²(ω) = α / [ 1 + ε² C_k²(ω/ω_C) ]                                               (11.55)
and, for convenience, we let α = 1 . If we want the magnitude of this to be less than some value β
for ω ≥ ω_C, we should choose the value of k in C_k²(ω/ω_C) so that
1 / [ 1 + ε² C_k²(ω/ω_C) ] ≤ β²                                                  (11.56)
that is, we need to find a suitable value of the integer k so that (11.56) will be satisfied. As we have
already seen from (11.52), the value of ε can be determined from
ε² = 10^(r_dB/10) − 1
once the band-pass ripple has been specified.
Next, we need to find G ( s ) from
A²(ω) = G(s)G(−s)|_(s = jω)
It can be shown that the poles on the left half of the s-plane are given by
s_i = ω_C [ −b sin((2i + 1)π/2k) + j c cos((2i + 1)π/2k) ]                       (11.58)
for i = 0, 1, 2, …, k − 1.
The constants b and c in (11.58) can be evaluated from
b = (m − m⁻¹) / 2                                                                (11.59)
and
c = (m + m⁻¹) / 2                                                                (11.60)
where
m = ( √(1 + ε⁻²) + ε⁻¹ )^(1/k)                                                   (11.61)
The transfer function is then computed from
G(s) = (−1)^k / ∏_{i=0}^{k−1} (s/s_i − 1)                                        (11.62)
Example 11.11
Design a Type I Chebyshev analog low-pass filter with 3 dB band-pass ripple and ω C = 5 rad ⁄ s .
The attenuation for ω ≥ 15 rad ⁄ s must be at least 30 dB ⁄ decade .
Solution:
From (11.52),
ε² = 10^(r_dB/10) − 1 = 10^(3/10) − 1 = 1.9953 − 1 ≈ 1
and the attenuation requirement of at least 30 dB at ω/ω_C = 15/5 = 3, with ε² ≈ 1, becomes
−10 log10 (1 + C_k²(3)) ≤ −30
or
−log10 (1 + C_k²(3)) ≤ −3
or
1 + C_k²(3) ≥ 10³
To find the minimum value of k which satisfies this inequality, we compute the Chebyshev polyno-
mials for k = 0, 1, 2, 3, … . From (11.41) through (11.44), we get
C_0²(3) = 1
C_1²(3) = 3² = 9
C_2²(3) = (2 ⋅ 3² − 1)² = 17² = 289
C_3²(3) = (4 ⋅ 3³ − 3 ⋅ 3)² = 99² = 9801
and since C_k²(3) must be such that 1 + C_k²(3) ≥ 10³, we choose k = 3. Next, to find the poles on the
left half of the s-plane, we first need to compute m, b, and c. From (11.61),
m = ( √(1 + ε⁻²) + ε⁻¹ )^(1/k) = ( √(1 + 1/ε²) + 1/ε )^(1/3) = ( √2 + 1 )^(1/3)
or
m = 1.3415
and
m⁻¹ = 0.7454
Then, from (11.59) and (11.60),
b = (1.3415 − 0.7454)/2 = 0.298
c = (1.3415 + 0.7454)/2 = 1.043
and the poles for i = 0, 1, and 2 are found from (11.58), that is
s_i = ω_C [ −b sin((2i + 1)π/2k) + j c cos((2i + 1)π/2k) ]
s_0 = 5( −0.298 sin(π/6) + j1.043 cos(π/6) ) = −0.745 + j4.516
s_1 = 5( −0.298 sin(π/2) + j1.043 cos(π/2) ) = −1.49
s_2 = 5( −0.298 sin(5π/6) + j1.043 cos(5π/6) ) = −0.745 − j4.516
To verify that the derived transfer function G ( s ) of (11.63) satisfies the filter specifications, we use
the MATLAB code below to plot G ( jω ) .
We can use the MATLAB cheb1ap function to design a Type I Chebyshev analog low-pass filter.
Thus, the [z,p,k] = cheb1ap(N,Rp) statement where N denotes the order of the filter, returns the
zeros, poles, and gain of an N – th order normalized prototype Type I Chebyshev analog lowpass fil-
ter with ripple Rp decibels in the pass band.
On the Bode plots shown in Figure 11.20, the ripple is not so obvious. The reason is that this is a
Bode plot with straight line approximations. To see the ripple, we use the following code:
w=0:0.01:10; [z,p,k]=cheb1ap(2,3); [b,a]=zp2tf(z,p,k); Gs=freqs(b,a,w);...
semilogx(w,abs(Gs)); title('Type 1 Chebyshev Low-Pass Filter'),...
xlabel('Frequency in rad/s'), ylabel('Magnitude of G(s)'), grid
The generated plot is shown in Figure 11.21.
and has the ripple in the stop-band as opposed to Type I which has the ripple in the pass-band. In
(11.64), the frequency ω C defines the beginning of the stop band.
The characteristics of a typical Type II Chebyshev low-pass filter are shown in Figure 11.22.
Figure 11.22. Typical Type II Chebyshev low-pass filter amplitude response (k = 4, 1 dB ripple in the stop band)
We can design Type II Chebyshev low-pass filters with the MATLAB cheb2ap function. Thus, the
statement [z,p,k] = cheb2ap(N,Rs) where N denotes the order of the filter, returns the zeros,
poles, and gain of an N – th order normalized prototype Type II Chebyshev analog lowpass filter
with ripple Rs decibels in the stop band.
Example 11.13
Using the MATLAB cheb2ap function, design a third order Type II Chebyshev analog filter with
3 dB ripple in the stop band.
The elliptic (Cauer) filters are characterized by the low-pass amplitude-squared function
A²(ω) = 1 / [ 1 + R_k²(ω/ω_C) ]                                                  (11.65)
where R k ( x ) represents a rational elliptic function used with elliptic integrals. Elliptic filters have rip-
ple in both the pass-band and the stop-band as shown in Figure 11.24.
We can design elliptic low-pass filters with the MATLAB ellip function. The statement [b,a] =
ellip(N,Rp,Rs,Wn,’s’) where N is the order of the filter, designs an N – th order low-pass filter with
ripple Rp decibels in the pass band, ripple Rs decibels in the stop band, Wn is the cutoff frequency,
and ’s’ is used to specify analog elliptic filters. If ’s’ is not included in the above statement, MAT-
LAB designs a digital filter. The plot of Figure 11.24 was obtained with the following MATLAB
code:
w=0: 0.05: 500; [z,p,k]=ellip(5, 0.6, 20, 200, 's'); [b,a]=zp2tf(z,p,k);...
Gs=freqs(b,a,w); plot(w,abs(Gs)), title('5-pole Elliptic Low Pass Filter'); grid
Example 11.14
Use MATLAB to design a four-pole elliptic analog low-pass filter with 0.5 dB maximum ripple in
the pass-band and 20 dB minimum attenuation in the stop-band with cutoff frequency at
200 rad ⁄ s .
Solution:
The solution is obtained with the following MATLAB code.
w=0: 0.05: 500; [z,p,k]=ellip(4, 0.5, 20, 200, 's'); [b,a]=zp2tf(z,p,k);...
Gs=freqs(b,a,w); plot(w,abs(Gs)), title('4-pole Elliptic Low Pass Filter'); grid
The plot for this example is shown in Figure 11.25.
To form the transfer function G ( s ) , we need to know the coefficients a i and b i of the denominator
and numerator respectively, of G ( s ) in descending order. Because these are large numbers, we use
the format long MATLAB command, and we get
format long
a
a =
1.0e+009 *
Columns 1 through 4
0.00000000100000 0.00000033979343 0.00010586590805
0.01618902078998
Column 5
2.07245309396647
b
b =
1.0e+009 *
Columns 1 through 4
0.00000000010003 0 0.00003630258930 0
Column 5
2.04872991957189
Thus, the transfer function for this filter is
G(s) = 2.0487 × 10⁹ / (s⁴ + 339.793s³ + 105866s² + 16.189 × 10⁶ s + 2.072 × 10⁹)    (11.66)
Example 11.15
Compute the transfer function for a third-order band-pass Butterworth filter with 3 dB pass-band
from 3 KHz to 5 KHz , from a third-order low-pass Butterworth filter with cutoff frequency
f C = 1 KHz .
Solution:
We first find the transfer function for a third-order Butterworth low-pass filter with normalized fre-
quency ω C = 1 rad ⁄ s . Using the MATLAB function buttap we write and execute the following
code:
[z, p, k]=buttap(3); [b,a]=zp2tf(z,p,k)
b =
0 0 0 1
a =
1.0000 2.0000 2.0000 1.0000
G(s) = 1 / (s³ + 2s² + 2s + 1)                                                   (11.67)
Next, the actual cutoff frequency is given as f_C = 1 KHz, or ω_C = 2π × 10³ rad/s. Accordingly,
using Table 11.5, we replace s with
s/ω_C = s/(2π × 10³)
and we get
G'(s) = 2.48 × 10¹¹ / (s³ + 1.26 × 10⁴ s² + 7.89 × 10⁷ s + 2.48 × 10¹¹)          (11.68)
Next, applying the low-pass to band-pass transformation of Table 11.5, we replace s in (11.68) with
(s² + 1.844 × 10⁸) / (1.257 × 10⁴ s)
Then,
G''(s) = 2.48 × 10¹¹ / [ ((s² + 1.844 × 10⁸)/(1.257 × 10⁴ s))³ + 1.26 × 10⁴ ((s² + 1.844 × 10⁸)/(1.257 × 10⁴ s))²
         + 7.89 × 10⁷ ((s² + 1.844 × 10⁸)/(1.257 × 10⁴ s)) + 2.48 × 10¹¹ ]
We see that the computations, using the transformations of Table 11.5 become quite tedious. Fortu-
nately, we can use the MATLAB lp2lp, lp2hp, lp2bp, and lp2bs functions to transform a low pass
filter with normalized cutoff frequency, to another low-pass filter with any other specified frequency,
or to a high-pass filter, or to a band-pass filter, or to a band elimination filter respectively.
Example 11.16
Use the MATLAB buttap and lp2lp functions to find the transfer function of a third-order Butter-
worth low-pass filter with cutoff frequency f C = 2 KHz .
Solution:
We will use the buttap command to find the transfer function G ( s ) of the filter with normalized
cutoff frequency at ω_C = 1 rad/s. Then, we will use the command lp2lp to transform G(s) to
G'(s) with cutoff frequency at f_C = 2 KHz, or ω_C = 2π × 2 × 10³ rad/s.
format short e
% Design 3 pole Butterworth low-pass filter (wcn=1 rad/s)
[z,p,k]=buttap(3);
[b,a]=zp2tf(z,p,k); % Compute num, den coefficients of this filter (wcn=1rad/s)
f=1000:1500/50:10000; % Define frequency range to plot
w=2*pi*f; % Convert to rads/sec
fc=2000; % Define actual cutoff frequency at 2 KHz
wc=2*pi*fc; % Convert desired cutoff frequency to rads/sec
[bn,an]=lp2lp(b,a,wc); % Compute num, den of filter with fc = 2 kHz
Gsn=freqs(bn,an,w); % Compute transfer function of filter with fc = 2 kHz
semilogx(w,abs(Gsn)); grid; hold on; xlabel('Radian Frequency w (rad/sec)'),...
ylabel('Magnitude of Transfer Function'),...
title('3-pole Butterworth low-pass filter with fc=2 kHz or wc = 12.57 kr/s')
The plot for the magnitude of this transfer function is shown in Figure 11.26.
The coefficients of the numerator and denominator of the transfer function are as follows:
The transfer function with normalized cutoff frequency ω_C = 1 rad/s is
G(s) = 1 / (s³ + 2s² + 2s + 1)                                                   (11.70)
and with actual cutoff frequency ω_C = 2π × 2000 rad/s = 1.2566 × 10⁴ rad/s, it is
G'(s) = 1.9844 × 10¹² / (s³ + 2.5133 × 10⁴ s² + 3.1583 × 10⁸ s + 1.9844 × 10¹²)  (11.71)
Example 11.17
Use the MATLAB commands cheb1ap and lp2hp to find the transfer function of a 3-pole Type I
Chebyshev high-pass analog filter with cutoff frequency f C = 5 KHz .
Solution:
We will use the cheb1ap command to find the transfer function G ( s ) of the low-pass filter with
normalized cutoff frequency at ω_C = 1 rad/s. Then, we will use the command lp2hp to transform
G(s) to another G'(s) with cutoff frequency at f_C = 5 KHz, or ω_C = 2π × 5 × 10³ rad/s.
Figure 11.27. Magnitude of the transfer function of (11.73) for Example 11.17
The coefficients of the numerator and denominator of the transfer function are as follows:
b
b =
0 0 0 2.5059e-001
a
a =
1.0000e+000 5.9724e-001 9.2835e-001 2.5059e-001
bn
bn =
1.0000e+000 2.2496e-011 -1.4346e-002 -6.8973e-003
an
an =
1.0000e+000 1.1638e+005 2.3522e+009 1.2373e+014
Therefore, the transfer function with normalized cutoff frequency ω Cn = 1 rad ⁄ s is
G(s) = 0.2506 / (s³ + 0.5972s² + 0.9284s + 0.2506)                               (11.72)
and with actual cutoff frequency ω_C = 2π × 5000 rad/s = 3.1416 × 10⁴ rad/s, it is
G'(s) = s³ / (s³ + 1.1638 × 10⁵ s² + 2.3522 × 10⁹ s + 1.2373 × 10¹⁴)             (11.73)
Example 11.18
Use the MATLAB functions buttap and lp2bp to find the transfer function of a 3-pole Butterworth
analog band-pass filter with the pass band frequency centered at f 0 = 4 KHz , and bandwidth
BW = 2 KHz .
Solution:
We will use the buttap function to find the transfer function G ( s ) of the low-pass filter with nor-
malized cutoff frequency at ω C = 1 rad ⁄ s . We found this transfer function in Example 11.15 as
given by (11.67). However, to maintain a similar MATLAB code as in the previous examples, we will
include it in the code that follows. Then, we will use the command lp2bp to transform G(s) to
another G'(s) with center frequency at f_0 = 4 KHz or ω_0 = 2π × 4 × 10³ rad/s, and bandwidth
BW = 2 KHz or BW = 2π × 2 × 10³ rad/s.
format short e
[z,p,k]=buttap(3); % Design 3 pole Butterworth low-pass filter with wcn=1 rad/s
[b,a]=zp2tf(z,p,k); % Compute numerator and denominator coefficients for wcn=1 rad/s
f=100:100:100000; % Define frequency range to plot
f0=4000; % Define centered frequency at 4 KHz
W0=2*pi*f0; % Convert desired centered frequency to rads/sec
fbw=2000; % Define bandwidth
Bw=2*pi*fbw; % Convert desired bandwidth to rads/sec
[bn,an]=lp2bp(b,a,W0,Bw);% Compute num, den of band-pass filter
% Compute and plot the magnitude of the transfer function of the band-pass filter
Gsn=freqs(bn,an,2*pi*f); semilogx(f,abs(Gsn)); grid; hold on
xlabel('Frequency f (Hz)'); ylabel('Magnitude of Transfer Function');
title('3-pole Butterworth band-pass filter with f0 = 4 KHz, BW = 2KHz')
The plot for this band-pass filter is shown in Figure 11.28.
The coefficients b n and a n are as follows:
bn
bn =
s4 0 2.2108 × 10
9
s3 1.9844 × 10
12
3.3735 × 10
13
s2 – 4.6156 × 10
1
1.3965 × 10
18
s 5 22
– 1.6501 × 10 1.0028 × 10
Cons tan t 26
– 2.5456 × 10
9
2.5202 × 10
bandwidth BW = 2 KHz or BW = 2π × 2 × 10³ rad/s.
Figure 11.29. Amplitude response for the band-elimination filter of Example 11.19
As in the previous example, we list the numerator b n and denominator a n coefficients in tabular
form as shown below.
In all of the above examples, we have shown the magnitude, but not the phase response of each fil-
ter type. However, we can use the MATLAB function bode(num,den) to generate both the magni-
tude and phase responses of any transfer function describing the filter type, as shown by the follow-
ing example.
Example 11.20
Use the MATLAB bode(num,den) function to generate the magnitude and phase responses of the
third-order Butterworth low-pass filter with normalized cutoff frequency ω_C = 1 rad/s.
Solution:
We know, from Example 11.15, that the transfer function for this type of filter is
G(s) = 1 / (s³ + 2s² + 2s + 1)
We can obtain the magnitude and phase characteristics with the following MATLAB code:
num=[0 0 0 1]; den=[1 2 2 1]; bode(num,den),...
title('Bode Plot for 3-pole Butterworth Low-Pass Filter'); grid
The magnitude and phase characteristics are shown in Figure 11.30.
We conclude the discussion on analog filters with Table 11.6 listing the advantages and disadvantages
of each type.
Therefore, the design of a digital filter to perform a desired function, entails the determination of
the coefficients a i and b i .
Digital filters are classified in terms of the duration of the impulse response, and in terms of the form
of realization.
a. In a Recursive Realization digital filter the output depends on present and past values of the
input as well as on past values of the output. In a recursive digital filter, both the coefficients a_i
and b_i are present.
b. In a Non-Recursive Realization digital filter the output depends on present and past values of
the input only. In a non-recursive digital filter, only the coefficients a_i are present, that is,
b_i = 0.
Figure 11.31 shows third-order (3-delay element) recursive and non-recursive realizations.
Generally, IIR filters are implemented by recursive realization, whereas FIR filters are implemented
by non-recursive realization.
Filter design methods have been established, and prototype circuits have been published. Thus, we
can choose the appropriate prototype to satisfy the requirements. Transformation methods are also
available to map an analog prototype to an equivalent digital filter. Three well known methods are
the following:
1. The Impulse Invariant Method that produces a digital filter whose impulse response consists of the
sampled values of the impulse response of an analog filter.
2. The Step Invariant Method that produces a digital filter whose step response consists of the sam-
pled values of the step response of an analog filter.
3. The Bilinear Transformation that uses the transformation
z–1
s = ----------- (11.76)
z+1
or, alternately, the transformation
s = (2/T_s) ⋅ (z − 1)/(z + 1)*                                                   (11.77)
to transform the left-half of the s -plane into the interior of the unit circle in the z -plane.
We will discuss only the bilinear transformation.
* T s is the sampling period, that is, the reciprocal of the sampling frequency fs
Figure 11.31. Third-order recursive and non-recursive digital filter realizations, together with the basic building blocks: the unit delay (y[n] = x[n−1]), the constant multiplier (y[n] = Ax[n]), and the adder/subtractor (y[n] = x[n] ± v[n])
Solving (11.76) for z, we get
z = (1 + s)/(1 − s)                                                              (11.78)
and letting s = α + jβ,
z = [ (1 + α) + jβ ] / [ (1 − α) − jβ ]                                          (11.79)
In the special case where s = jω , the jω axis of the s -plane is transformed to the circumference of
the unit circle in the z -plane, as we have seen in Chapter 9, Figure 9.5.
To derive a relationship between the frequencies of the analog and digital filters, we let ω a and ω d
denote the analog and digital frequencies respectively. Substitution of s = jω_a and z = e^(jω_d T_s) into
(11.76) yields
jω_a = ( e^(jω_d T_s) − 1 ) / ( e^(jω_d T_s) + 1 )
     = [ e^(jω_d T_s/2) ( e^(jω_d T_s/2) − e^(−jω_d T_s/2) ) ] / [ e^(jω_d T_s/2) ( e^(jω_d T_s/2) + e^(−jω_d T_s/2) ) ]
     = [ 2j sin(ω_d T_s/2) ] / [ 2 cos(ω_d T_s/2) ]
or
ω_a = tan(ω_d T_s/2)                                                             (11.80)
We see that the analog frequency to digital frequency transformation results in a non-linear mapping;
this condition is known as warping. For instance, the frequency range 0 ≤ ω a ≤ ∞ in the analog fre-
quency is warped into the frequency range 0 ≤ ω d ≤ π ⁄ T s in the digital frequency.
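The warping relation (11.80) can be visualized with a short MATLAB sketch (not from the text; the sampling period T_s = 1/200 s is an assumed value):
% Sketch: frequency warping of the bilinear transformation, equation (11.80)
Ts = 1/200;                      % assumed sampling period
wd = 0:1:0.99*pi/Ts;             % digital frequencies up to just below pi/Ts
wa = tan(wd*Ts/2);               % corresponding analog frequencies per (11.80)
plot(wd, wa); grid
xlabel('\omega_d (rad/s)'), ylabel('\omega_a')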
Another form of the analog to digital frequency transformation can be derived from the alternate
form of the bilinear transformation of (11.77), that is,
s = (2/T_s) ⋅ (z − 1)/(z + 1)                                                    (11.81)
But the relation s = (1/T_s) ln z is a multivalued transformation and, as such, cannot be used to
derive a rational form in z.
It is shown in complex variables theory that (1/T_s) ln z can be approximated as
(1/T_s) ln z ≈ (2/T_s) ⋅ (z − 1)/(z + 1)                                         (11.83)
Since the z → s transformation maps the unit circle into the jω axis on the s-plane, the quantity
(2/T_s) ⋅ (e^(jω_d) − 1)/(e^(jω_d) + 1) and jω must be equal to some point ω = ω_a on the jω axis, that is,
jω_a = (2/T_s) ⋅ ( e^(jω_d) − 1 ) / ( e^(jω_d) + 1 )
or
ω_a = (1/j) ⋅ (2/T_s) ⋅ ( e^(jω_d) − 1 ) / ( e^(jω_d) + 1 )
    = (2/T_s) ⋅ [ (1/j2)( e^(jω_d/2) − e^(−jω_d/2) ) ] / [ (1/2)( e^(jω_d/2) + e^(−jω_d/2) ) ]
    = (2/T_s) ⋅ sin(ω_d/2)/cos(ω_d/2)
or
ω_a = (2/T_s) ⋅ tan(ω_d/2)                                                       (11.86)
Solving (11.80) for the digital frequency,
tan(ω_d/2) = ω_a T_s/2
Then,
ω_d = 2 tan⁻¹(ω_a T_s/2)
and for small ω_a T_s/2,
tan⁻¹(ω_a T_s/2) ≈ ω_a T_s/2
Therefore,
ω_d ≈ 2(ω_a T_s/2) ≈ ω_a T_s                                                     (11.87)
and, when the digital frequency is normalized to the Nyquist frequency f_s/2 (as MATLAB requires),
ω_d ≈ ω_a T_s/π                                                                  (11.88)
The effect of warping can be eliminated by pre-warping the analog filter prior to application of the
bilinear transformation. This is accomplished by the use of (11.80) or (11.86).
Example 11.21
Compute the transfer function G ( z ) of a low-pass digital filter with 3 dB cutoff frequency at
20 Hz , and attenuation of at least 10 dB for frequencies greater than 40 Hz . The sampling fre-
quency f s is 200 Hz .
Solution:
We will apply the bilinear transformation and, arbitrarily we choose a second order Butterworth low-
pass filter which, as we see from the curves of Figure 11.12, meets the stop-band specification.
The transfer function G(s) of the analog low-pass filter with normalized frequency at
ω_C = 1 rad/s is found with the MATLAB buttap function as follows:
G_n(s) = 1 / (s² + 1.414s + 1)                                                   (11.89)
Now, we must transform this transfer function to another with the actual cutoff frequency at 20 Hz .
We denote it as G a ( s ) .
We will first pre-warp the analog frequency which, by (11.80), is related to the digital frequency as
ω_a = tan(ω_d T_s/2)
where
T_s = 1/f_s = 1/200
Denoting the analog cutoff ( 3 dB ) frequency as ω ac , and the attenuation frequency as ω aa , we get
ω_ac = tan( 2π × 20 / (2 × 200) ) = tan 0.3142 = 0.325                           (11.90)
and
ω_aa = tan( 2π × 40 / (2 × 200) ) = tan 0.6283 = 0.7265                          (11.91)
Then, replacing s with s/ω_ac in (11.89),
G_a(s) = 1 / [ (s/0.325)² + 1.414s/0.325 + 1 ]
We will use MATLAB to simplify this expression.
syms s; simple(1/((s/0.325)^2+1.414*s/0.325+1))
1/(1600/169*s^2+1414/325*s+1)
1600/169
ans =
9.4675
1414/325
ans =
4.3508
Then,

    G_a(s) = 1 / (9.4675s^2 + 4.3508s + 1) = 0.1056 / (s^2 + 0.4596s + 0.1056)    (11.92)
and making the substitution of s = (z − 1) / (z + 1), we get

    G(z) = 0.1056 / (((z − 1) / (z + 1))^2 + 0.4596(z − 1) / (z + 1) + 0.1056)    (11.93)
We will use the MATLAB freqz function to plot the magnitude of G(z), but we must first express it
in negative powers of z. Multiplying the numerator and denominator of (11.93) by (z + 1)^2 and then
dividing each term by 1.5652z^2, the leading term of the resulting denominator, we get

    G(z) = (0.0675 + 0.1349z^(-1) + 0.0675z^(-2)) / (1 − 1.1429z^(-1) + 0.4127z^(-2))    (11.94)
The MATLAB code below will generate G ( z ) and will plot the magnitude of this transfer function.
bz=[0.0675 0.1349 0.0675]; az=[1 −1.1429 0.4127]; [Gz, wT]=freqz(bz,az,20,200);...
semilogx(wT,abs(Gz)), axis([0.1 1000 0 1]), hold on;...
title('Digital Low-Pass Filter'), xlabel('Frequency in Hz'), ylabel('Magnitude'),grid
The magnitude is shown on the plot of Figure 11.32.
Figure 11.32. Frequency response for the digital low-pass filter of Example 11.21
Let us now plot the analog equivalent to compare the digital to the analog frequency response. The
MATLAB code below produces the desired plot.
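A minimal sketch of code that would produce such a plot (assuming a second-order Butterworth analog filter with its actual 3 dB cutoff at 20 Hz, superimposed on the digital response) is:
wc=2*pi*20;                                  % analog cutoff in rad/s (assumed)
[z,p,k]=buttap(2); [b,a]=zp2tf(z,p,k);       % normalized second-order prototype
[b,a]=lp2lp(b,a,wc);                         % scale the cutoff to 20 Hz
f=0.1:0.1:1000;                              % frequency axis in Hz
Gs=freqs(b,a,2*pi*f);                        % analog frequency response
semilogx(f,abs(Gs),'r'), grid                % superimpose on the digital plot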
Comparing the digital filter plot with the analog, we see that they are almost identical. Also, from
both plots, we see that the amplitude is greater than 0.707 (−3 dB) for frequencies less than 20 Hz,
and is smaller than 0.316 (−10 dB) for frequencies larger than 40 Hz. Therefore, the filter meets the
required specifications.
An analog filter transfer function can be mapped to a digital filter transfer function directly with the
MATLAB bilinear function. The procedure is illustrated with the following example.
Example 11.22
Use the MATLAB bilinear function to derive the low-pass digital filter transfer function G ( z ) from
a second-order Butterworth analog filter with a 3 dB cutoff frequency at 50 Hz , and sampling rate
f s = 500 Hz .
Solution:
We will use the following MATLAB code to produce the desired digital filter transfer function.
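A minimal sketch of such code (obtaining the analog prototype with buttap and lp2lp, as in the earlier examples, is an assumption here):
fs=500;                                      % sampling frequency in Hz
[z,p,k]=buttap(2); [b,a]=zp2tf(z,p,k);       % normalized second-order Butterworth
[b,a]=lp2lp(b,a,2*pi*50);                    % analog prototype with 50 Hz cutoff
[bz,az]=bilinear(b,a,fs)                     % map to the z-domain and display G(z)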
MATLAB provides us with all the functions that we need to design digital filters using analog proto-
types. These are listed below with the indicated notations.
N = order of the filter
Wn = normalized cutoff frequency
Rp = pass band ripple
Rs = stop band ripple
B = B(z), i.e., the numerator of the discrete transfer function G ( z ) = B ( z ) ⁄ A ( z )
A = A(z), i.e., the denominator of the discrete transfer function G ( z )
For Low-Pass Filters
[B,A] = butter(N,Wn)
[B,A] = cheby1(N,Rp,Wn)
[B,A] = cheby2(N,Rs,Wn)
[B,A] = ellip(N,Rp,Rs,Wn)
For High-Pass Filters
[B,A] = butter(N,Wn,'high')
[B,A] = cheby1(N,Rp,Wn,'high')
[B,A] = cheby2(N,Rs,Wn,'high')
[B,A] = ellip(N,Rp,Rs,Wn,'high')
For Band-Pass Filters
[B,A] = butter(N,[Wn1,Wn2])
[B,A] = cheby1(N,Rp,[Wn1,Wn2])
[B,A] = cheby2(N,Rs,[Wn1,Wn2])
[B,A] = ellip(N,Rp,Rs,[Wn1,Wn2])
For Band-Elimination Filters
[B,A] = butter(N,[Wn1,Wn2],'stop')
[B,A] = cheby1(N,Rp,[Wn1,Wn2],'stop')
[B,A] = cheby2(N,Rs,[Wn1,Wn2],'stop')
[B,A] = ellip(N,Rp,Rs,[Wn1,Wn2],'stop')
Example 11.23
The transfer functions of (11.96) through (11.99) below, describe different types of digital filters.
Use the MATLAB freqz command to plot the magnitude versus radian frequency.
    G1(z) = (2.8982 + 8.6946z^(-1) + 8.6946z^(-2) + 2.8982z^(-3)) ⋅ 10^(-3) / (1 − 2.3741z^(-1) + 1.9294z^(-2) − 0.5321z^(-3))    (11.96)

    G2(z) = (0.5276 − 1.5828z^(-1) + 1.5828z^(-2) − 0.5276z^(-3)) / (1 − 1.7600z^(-1) + 1.1829z^(-2) − 0.2781z^(-3))    (11.97)

    G3(z) = (6.8482 − 13.6964z^(-2) + 6.8482z^(-4)) ⋅ 10^(-4) / (1 + 3.2033z^(-1) + 4.5244z^(-2) + 3.1390z^(-3) + 0.9603z^(-4))    (11.98)

    G4(z) = (0.9270 − 1.2079z^(-1) + 0.9270z^(-2)) / (1 − 1.2079z^(-1) + 0.8541z^(-2))    (11.99)
Solution:
The MATLAB code to compute and plot each of the transfer functions of (11.96) through (11.99),
is given below.
% N=512 % Default
b1=[2.8982 8.6946 8.6946 2.8982]*10^(−3); a1=[1 −2.3741 1.9294 −0.5321];...
[G1z,w1T]=freqz(b1,a1);
%
b2=[0.5276 −1.5828 1.5828 −0.5276]; a2=[1 −1.7600 1.1829 −0.2781];...
[G2z,w2T]=freqz(b2,a2);
%
b3=[6.8482 0 -13.6964 0 6.8482]*10^(−4); a3=[1 3.2033 4.5244 3.1390 0.9603];...
[G3z,w3T]=freqz(b3,a3);
%
b4=[0.9270 −1.2079 0.9270]; a4=[1 −1.2079 0.8541];...
[G4z,w4T]=freqz(b4,a4);
clf; % clear current figure
%
subplot(221), semilogx(w1T,abs(G1z)), axis([0.1 1 0 1]), title('Filter for G1(z)')
xlabel(''),ylabel('Magnitude'),grid;
%
subplot(222), semilogx(w2T,abs(G2z)), axis([0.1 10 0 1]), title('Filter for G2(z)')
xlabel(''),ylabel('Magnitude'),grid;
%
subplot(223), semilogx(w3T,abs(G3z)), axis([1 10 0 1]), title('Filter for G3(z)')
xlabel(''),ylabel('Magnitude'),grid;
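% The fourth subplot is assumed here (the axis limits are a guess), completing the
% 2-by-2 arrangement used above:
subplot(224), semilogx(w4T,abs(G4z)), axis([0.1 10 0 1]), title('Filter for G4(z)')
xlabel(''),ylabel('Magnitude'),grid;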
%
% N=512 % Default
fs=330000 % Chosen sampling frequency
Ts=1/fs; % Sampling period
fn=fs/2; % Nyquist frequency
%
f1=25000/fn % Low-pass filter cutoff frequency (Signal 1 End)
f2=40000/fn % Band-pass left cutoff frequency (Signal 2 Start)
f3=65000/fn % Band-pass right cutoff frequency (Signal 2 End)
f4=90000/fn % High-pass filter cutoff frequency (Signal 3 Start)
f5=115000/fn % Band-stop filter left cutoff frequency (Signal 3 End)
f6=140000/fn % Band-stop filter right cutoff frequency (Signal 4 Start)
% Signal 4 will terminate at 165 kHz
[b1,a1]=butter(12,f1);
[b2,a2]=butter(12,[f2,f3]);
[b3,a3]=butter(12,f4,'high');
[b4,a4]=butter(12,[f5,f6],'stop');
%
[G1z,wT]=freqz(b1,a1);
[G2z,wT]=freqz(b2,a2);
[G3z,wT]=freqz(b3,a3);
[G4z,wT]=freqz(b4,a4);
%
Hz=wT/(2*pi*Ts);
%
clf; % clear current figure
%
plot(Hz,abs(G1z),Hz,abs(G2z),Hz,abs(G3z),Hz,abs(G4z)), axis([0 16*10^4 0 1])
title('Four signals separated by four digital filters');
xlabel('Hz'),ylabel('Magnitude'),grid;
The plot of Figure 11.35 shows the frequency separations for these four signals.
In the following example, we will demonstrate the MATLAB filter function, which can be used to
remove unwanted frequency components from a function. But before we use the filter function, we
must design a filter that is capable of removing those unwanted components.
Example 11.25
In Chapter 7, Example 7.6, we found that the half-wave rectifier can be represented by the trigono-
metric series
    f(t) = A/π + (A/2) sin t − (A/π)(cos 2t/3 + cos 4t/15 + cos 6t/35 + cos 8t/63 + …)

In this example, we want to recover just the first two terms, in other words, to remove all cosine
terms. To simplify this expression, we let A = 3π and we truncate it by eliminating all terms beyond
the third. Then,

    g(t) = 3 + 1.5 sin t − cos 2t
Therefore,
WITHOUT PREWARPING:
The num N(z) coefficients in descending order of z are:
0.0007 0.0040 0.0100 0.0134 0.0100 0.0040 0.0007
The den D(z) coefficients in descending order of z are:
1.0000 -3.2379 4.7566 -3.9273 1.8999 -0.5064 0.0578
WITH PREWARPING:
The num N(z) coefficients in descending order of z are:
0.0008 0.0050 0.0125 0.0167 0.0125 0.0050 0.0008
The den D(z) coefficients in descending order of z are:
1.0000 -3.1138 4.4528 -3.5957 1.7075 -0.4479 0.0504
%
% Step 5
%
Nzp=[0.0008 0.0050 0.0125 0.0167 0.0125 0.0050 0.0008];
Dzp=[1.0000 -3.1138 4.4528 -3.5957 1.7075 -0.4479 0.0504];
n=0:150;
T=0.5;
gt=3+1.5*sin(n*T)-cos(2*n*T);
yt=filter(Nzp,Dzp,gt);
%
% We will plot the unfiltered analog signal gta
%
t=0:0.1:12;
gta=3+1.5*sin(t)-cos(2*t);
subplot(211), plot(t,gta), axis([0,12, 0, 6]); hold on
xlabel('Continuous Time t'); ylabel('Function g(t)');
%
% We will plot the filtered analog signal y(t)
%
subplot(212), plot(n*T,yt), axis([0,12, 0, 6]); hold on
xlabel('Continuous Time t'); ylabel('Filtered Output y(t)');
%
fprintf('Press any key to continue \n'); pause;
%
% We will plot the unfiltered discrete time signal g(n*T)
%
subplot(211), stem(n*T,gt), axis([0,12, 0, 6]); hold on
xlabel('Discrete Time nT'); ylabel('Discrete Function g(n*T)');
%
% We will plot the filtered discrete time signal y(n*T)
subplot(212), stem(n*T,yt), axis([0,12, 0, 6]); hold on
xlabel('Discrete Time nT'); ylabel('Filtered Output y(n*T)');
The analog and digital inputs and outputs are shown in Figure 11.37.
We will conclude this chapter with one more example to illustrate the use of the MATLAB find
function. This function displays the subscripts of an array of numbers where a relational expression
is true. For example,
x=−2:5; % Display the integers in the range −2 <= x <= 5
x =
-2 -1 0 1 2 3 4 5
k=find(x>0.8); % Find the subscripts of the numbers for which x > 0.8
k =
4 5 6 7 8
y=x(k); % Create a new array y using the indices in k
y =
1 2 3 4 5
Example 11.26
Given the function f(t) = 5 sin 2t − 10 cos 5t, use the MATLAB randn function to add random
(Gaussian) noise to f(t) and plot this signal-plus-noise waveform, which we denote as

    x(t) = f(t) + randn(N) = 5 sin 2t − 10 cos 5t + randn(size(t))    (11.101)

where 0 ≤ t ≤ 512. Next, use the fft function to compute the frequency components of the 512-
point FFT and plot the spectrum of this noisy signal. Finally, use the find function to restrict the fre-
quency range of the spectrum in order to identify the frequency components of the signal f(t).
Solution:
The MATLAB code is shown below.
t=linspace(0, 10, 512); x=10*sin(2*t)−5*cos(5*t)+15*randn(size(t));
% We plot the signal to see what it looks like
%
subplot(221); plot(t,x),title('x(t)=Signal plus Noise')
%
% The input signal x is shown in the upper left corner of the graph
%
% Next, we will compute the frequency domain of the signal x
%
X=fft(x);
%
% The sampling period of x is found by the time difference of two samples
%
Ts=t(2)−t(1);
%
% and the sampling frequency is
%
Ws=2*pi/Ts;
%
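The remaining steps of this example would plot the spectrum and apply the find function; a minimal sketch of how this could be done (the variable names wd and k below are assumptions, not the author's) is:
wd=(0:length(x)-1)*Ws/length(x);     % discrete frequency axis in rad/s
k=find(wd<=Ws/2);                    % restrict the range to 0 through the Nyquist frequency
subplot(222); plot(wd(k),abs(X(k))); title('Spectrum of x(t)')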
11.9 Summary
• Analog filters are defined over a continuous range of frequencies. They are classified as low-pass,
high-pass, band-pass and band-elimination (stop-band).
• An all-pass or phase shift filter has a constant amplitude response but its phase varies with fre-
quency.
• A digital filter, in general, is a computational process, or algorithm that converts one sequence of
numbers representing the input signal into another sequence representing the output signal.
• A digital filter, besides filtering out unwanted bands of frequency, can perform functions of differ-
entiation, integration, and estimation.
• Analog filter functions have been used extensively as prototype models for designing digital filters.
• An analog filter can also be classified as passive or active. Passive filters consist of passive devices
such as resistors, capacitors and inductors. Active filters are, generally, operational amplifiers with
resistors and capacitors connected to them externally.
• If two frequencies ω 1 and ω 2 are such that ω 2 = 2ω 1 , we say that these frequencies are sepa-
rated by one octave, and if ω 2 = 10ω 1 , they are separated by one decade.
• Τhe analog low-pass filter is used as a basis. Using transformations, we can derive high-pass and
the other types of filters from a basic low-pass filter.
• The general form of the amplitude-squared function A^2(ω) is

    A^2(ω) = C(b_k ω^(2k) + b_(k−1) ω^(2k−2) + … + b_0) / (a_k ω^(2k) + a_(k−1) ω^(2k−2) + … + a_0)
where C is the DC gain, a and b are constant coefficients, and k is a positive integer denoting
the order of the filter.
• The amplitude-squared function of a Butterworth analog low-pass filter is

    A^2(ω) = 1 / ((ω / ω_C)^(2k) + 1)
where k is a positive integer indicating the order of the filter, and ω C is the cutoff ( 3 dB ) fre-
quency.
• All Butterworth filters have the property that all poles of the transfer functions that describe
them lie on the circumference of a circle of radius ω_C, and they are 2π ⁄ 2k radians apart. Thus, if
k = odd, the poles start at zero radians, and if k = even, they start at 2π ⁄ 2k. But regardless of
whether k is odd or even, the poles are distributed in symmetry with respect to the jω axis. For
stability, we choose the poles in the left half of the s-plane to form G(s).
• The general form of any analog low-pass (Butterworth, Chebyshev, Elliptic, etc.) filter is

    G(s)_lp = b_0 / (a_k s^k + … + a_2 s^2 + a_1 s + a_0)
• The MATLAB buttap and zp2tf functions are very useful functions in the design of Butterworth
filters. The first returns the zeros, poles, and gain for an N – th order normalized prototype But-
terworth analog low-pass filter. The resulting filter has N poles around the unit circle in the left
half plane, and no zeros. The second performs the zero-pole to transfer function conversion.
• The Type I Chebyshev filters are based on approximations derived from the Chebyshev polynomi-
als C k ( x ) that constitute a set of orthogonal functions. The coefficients of these polynomials are
tabulated in math tables.
• We can use the MATLAB cheb1ap function to design a Type I Chebyshev analog low-pass filter.
Thus, the [z,p,k] = cheb1ap(N,Rp) statement where N denotes the order of the filter, returns
the zeros, poles, and gain of an N – th order normalized prototype Type I Chebyshev analog low-
pass filter with ripple Rp decibels in the pass band.
• The Inverted Chebyshev, also known as Type II Chebyshev, is characterized by the following
amplitude-square approximation.
    A^2(ω) = ε^2 C_k^2(ω_C / ω) / (1 + ε^2 C_k^2(ω_C / ω))
and has the ripple in the stop-band as opposed to Type I which has the ripple in the pass-band.
The frequency ω C defines the beginning of the stop band.
• The elliptic (Cauer) filters are characterized by the low-pass amplitude-squared function
    A^2(ω) = 1 / (1 + R_k^2(ω / ω_C))
where R k ( x ) represents a rational elliptic function used with elliptic integrals. Elliptic filters have
ripple in both the pass-band and the stop-band.
• We can design elliptic low-pass filters with the MATLAB ellip function. The statement [b,a] =
ellip(N,Rp,Rs,Wn,’s’) where N is the order of the filter, designs an N – th order low-pass filter
with ripple Rp decibels in the pass band, a stop band with ripple Rs decibels, Wn is the cutoff fre-
quency, and ’s’ is used to specify analog elliptic filters. If ’s’ is not included in the above statement,
MATLAB designs a digital filter.
• Transformation methods have been developed where a low-pass filter can be converted to
another type of filter simply by transforming the complex variable s . These transformations are
listed in Table 11.5 where ω C is the cutoff frequency of a low-pass filter.
• We can use the MATLAB lp2lp, lp2hp, lp2bp, and lp2bs functions to transform a low-pass fil-
ter with normalized cutoff frequency to another low-pass filter with any other specified frequency,
or to a high-pass filter, or to a band-pass filter, or to a band-elimination filter respectively.
• We can use the MATLAB function bode(num,den) to generate both the magnitude and phase
responses of any transfer function describing the filter type.
• Digital filters are classified in terms of the duration of the impulse response, and in forms of real-
ization.
• An Infinite Impulse Response (IIR) digital filter has an infinite number of samples in its impulse
response h [ n ].
• A Finite Impulse Response (FIR) digital filter has a finite number of samples in its impulse
response h [ n ].
• In a Non-Recursive Realization digital filter the output depends on present and past values of the
input only. In a non-recursive digital filter, only the coefficients a i are present, that is, b i = 0 .
• Generally, IIR filters are implemented by recursive realization, whereas FIR filters are imple-
mented by non-recursive realization.
• Transformation methods are also available to map an analog prototype to an equivalent digital fil-
ter. Three well known methods are the following:
1. The Impulse Invariant Method that produces a digital filter whose impulse response consists
of the sampled values of the impulse response of an analog filter.
2. The Step Invariant Method that produces a digital filter whose step response consists of the
sampled values of the step response of an analog filter.
3. The Bilinear Transformation that uses the transformation

    s = (z − 1) / (z + 1)

   or, alternately, the transformation

    s = (2 / T_s) ⋅ (z − 1) / (z + 1)
• The analog frequency to digital frequency transformation results in a non-linear mapping; this
condition is known as warping.
• The effect of warping can be eliminated by pre-warping the analog filter prior to application of
the bilinear transformation.
• We can use the MATLAB freqz(b,a,N) function to plot the magnitude of G ( z )
• An analog filter transfer function can be mapped to a digital filter transfer function directly with
the MATLAB bilinear(b,a,Fs) function.
• The MATLAB filter(b,a,X) function can be used to remove unwanted frequency components
from a function.
• We can use the MATLAB find(X) function to restrict the frequency range of the spectrum in
order to identify the frequency components of the signal f ( t ) .
11.10 Exercises
1. The circuit of Figure 11.39 is a VCVS second-order high-pass filter whose transfer function is
    G(s) = V_out(s) / V_in(s) = Ks^2 / (s^2 + (a / b)ω_C s + (1 / b)ω_C^2)
and for given values of a , b , and desired cutoff frequency ω C , we can calculate the values of
C 1, C 2, R 1, R 2, R 3, and R 4 to achieve the desired cutoff frequency ω C .
Figure 11.39. VCVS second-order high-pass filter circuit with components R1, R2, R3, R4, C1, and C2
Using these relations, compute the appropriate values of the resistors to achieve the cutoff fre-
quency f C = 1 KHz . Choose the capacitors as C 1 = 10 ⁄ f C µF and C 2 = C 1 . Plot G ( s ) versus
frequency.
Solution using MATLAB is highly recommended.
Figure 11.40. VCVS second-order band-pass filter circuit with components R1 through R5, C1, and C2
We can calculate the values of C1, C2, R1, R2, R3, R4, and R5 to achieve the desired center frequency
ω_0 and bandwidth BW. For this circuit,

    R1 = 2Q / (C1 ω_0 K)

    R2 = 2Q / (C1 ω_0 (−1 + sqrt((K − 1)^2 + 8Q^2)))

    R3 = (1 / (C1^2 ω_0^2)) ⋅ (1 / R1 + 1 / R2)

    R4 = R5 = 2R3
Using these relations, compute the appropriate values of the resistors to achieve center frequency
f 0 = 1 KHz , Gain K = 10 , and Q = 10 .
3. The circuit of Figure 11.41 is a VCVS second-order band elimination filter whose transfer func-
tion is
    G(s) = V_out(s) / V_in(s) = K(s^2 + ω_0^2) / (s^2 + [BW]s + ω_0^2)
Figure 11.41. VCVS second-order band-elimination filter circuit with components R1, R2, R3, C1, C2, and C3
We can calculate the values of C1, C2, C3, R1, R2, and R3 to achieve the desired center fre-
quency ω_0 and bandwidth BW. For this circuit,

    R1 = 1 / (2ω_0 Q C1)

    R2 = 2Q / (ω_0 C1)

    R3 = 2Q / (C1 ω_0 (4Q^2 + 1))
The gain K must be unity, but Q can be up to 10. Using these relations, compute the appropriate
values of the resistors to achieve center frequency f 0 = 1 KHz , Gain K = 1 and Q = 10 .
where the gain K = constant, (0 < K < 1), and the phase is given by

    φ(ω) = −2 tan^(-1)(aω_0 ω / (bω_0^2 − ω^2))
Figure for Exercise 4: second-order all-pass (phase shift) filter circuit with components R1, R2, R3, R4, C1, and C2
The phase shift at ω = ω_0 is

    φ_0 = φ(ω_0) = −2 tan^(-1)(a / (b − 1))

and the component values are computed from

    R2 = 2 / (aω_0 C1)

    R1 = (1 − K)R2 / (4K)

    R3 = R2 / K

    R4 = R2 / (1 − K)

    a = ((1 − K) / (2K tan(φ_0 / 2))) ⋅ (−1 − sqrt(1 + (4K / (1 − K)) ⋅ tan^2(φ_0 / 2)))
Using these relations, compute the appropriate values of the resistors to achieve a phase shift
φ 0 = – 90° at f 0 = 1 KHz with K = 0.75 .
Figure for Exercise 5: second-order delay filter circuit with components R1, R2, R3, C1, and C2
The delay at ω = ω_0 is

    T_0 = T(ω_0) = 12 / (13ω_0) seconds

and the component values are computed from

    R2 = 2(K + 1) / ((aC1 + sqrt(a^2 C1^2 − 4bC1C2(K − 1))) ω_0)

    R1 = R2 / K

    R3 = 1 / (bC1C2R2 ω_0^2)
Using these relations, compute the appropriate values of the resistors to achieve a time delay
T 0 = 100 µs with K = 2 . Use capacitors C 1 = 0.01 µF and C 2 = 0.002 µF . Plot G ( s ) versus
frequency.
Solution using MATLAB is highly recommended.
6. Derive the transfer function of a fourth-order Butterworth filter with ω C = 1 rad ⁄ s .
7. Derive the amplitude-squared function for a third-order Type I Chebyshev low-pass filter with
1.5 dB pass band ripple and cutoff frequency ω C = 1 rad ⁄ s .
8. Use MATLAB to derive the transfer function G ( z ) and plot G ( z ) versus ω for a two-pole, Type
I Chebyshev high-pass digital filter with sampling period T S = 0.25 s . The equivalent analog filter
cutoff frequency is ω C = 4 rad ⁄ s and has 3 dB pass band ripple. Compute the coefficients of
the numerator and denominator and plot G ( z ) with and without pre-warping.
for b in terms of φ 0 and a so that we can find its value from the MATLAB code below. We
rewrite the above relation as
    tan^(-1)(a / (b − 1)) = −φ_0 / 2

or

    tan(−φ_0 / 2) = a / (b − 1)

    b tan(−φ_0 / 2) = a + tan(−φ_0 / 2)

    b = (a + tan(−φ_0 / 2)) / tan(−φ_0 / 2)
6. From (11.20)

    A^2(ω) = 1 / ((ω / ω_C)^(2k) + 1)

   and with ω_C = 1 rad ⁄ s and k = 4,

    A^2(ω) = 1 / (ω^8 + 1)

   Then,

    G(s) ⋅ G(−s) = 1 / (s^8 + 1)
We can use DeMoivre’s theorem to find the roots of s^8 + 1 but we will use MATLAB instead.
Since we are only interested in the poles in the left half of the s-plane, we choose the roots s2, s4,
s6, and s8. To express the denominator in polynomial form we use the following MATLAB code:
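% (Assumed reconstruction of the omitted statements that defined s1 through s8,
% the roots of s^8+1; only the four left-half-plane roots are used below.)
syms s
pk=roots([1 0 0 0 0 0 0 0 1]);    % the eight roots of s^8+1
pk=pk(real(pk)<0);                % keep the four left-half-plane roots
s2=pk(1); s4=pk(2); s6=pk(3); s8=pk(4);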
denGs=(s-s2)*(s-s4)*(s-s6)*(s-s8); r=vpa(denGs,4)
r = (s+.9240+.3827*i)*(s+.3827-.9240*i)*(s+.9240-.3827*i)*(s+.3827+.9240*i)
expand(r)
ans =
s^4+2.6134*s^3+3.41492978*s^2+2.614014906886*s
+1.0004706353613841
and thus

    G(s) = 1 / (s^4 + 2.61s^3 + 3.41s^2 + 2.61s + 1)
7. From (11.45),

    A^2(ω) = α / (1 + ε^2 C_k^2(ω / ω_C))    (1)

   and

    C_k^2 = C_3^2 = (4ω^3 − 3ω)^2
and this value is not very close to unity. Therefore we will compute G 1 ( z ) with pre-warping using
the following MATLAB code.
% This code designs a 2-pole Chebyshev Type 1 high-pass digital filter with
% analog cutoff frequency wc=4 rads/sec, sampling period Ts=0.25 sec., and
% pass band ripple of 3 dB.
%
N=2; % # of poles
Rp=3; % Passband ripple in dB
Ts=0.25; % Sampling period
wc=4; % Analog cutoff frequency
% Let wd be the discrete time radian frequency. This frequency is related to
% the continuous time radian frequency wc by wd=Ts*wc with no pre-warping.
% With prewarping it is related to wc by wdp=2*arctan(wc*Ts/2).
% We divide by pi to normalize the digital cutoff frequency.
wdp=2*atan(wc*Ts/2)/pi;
% To obtain the digital cutoff frequency without prewarping we use the relation
% wd=(wc*Ts)/pi;
[Nz,Dz]=cheby1(N,Rp,wdp,'high');
%
fprintf('The numerator N(z) coefficients in descending powers of z are: \n\n');
fprintf('%8.4f \t',[Nz]); fprintf(' \n');
fprintf('The denominator D(z) coefficients in descending powers of z are: \n\n');
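fprintf('%8.4f \t',[Dz]); fprintf(' \n');  % assumed completion, mirroring the listing below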
Next, we will compute G 2 ( z ) without pre-warping using the following MATLAB code:
N=2; % # of poles
Rp=3; % Pass band ripple in dB
Ts=0.25; % Sampling period
wc=4; % Analog cutoff frequency
wd=(wc*Ts)/pi;
[Nz,Dz]=cheby1(N,Rp,wd,'high');
%
fprintf('The numerator N(z) coefficients in descending powers of z are: \n\n');
fprintf('%8.4f \t',[Nz]); fprintf(' \n');
fprintf('The denominator D(z) coefficients in descending powers of z are: \n\n');
fprintf('%8.4f \t',[Dz]); fprintf(' \n');
%
fprintf('Press any key to see the graph \n');
pause;
%
w=0:2*pi/300:pi; Gz=freqz(Nz,Dz,w); plot(w,abs(Gz)); grid; xlabel('Frequency (rads/sec)');
ylabel('|H|'); title('High-Pass Digital Filter without prewarping')
The numerator N(z) coefficients in descending powers of z are:
0.3689 -0.7377 0.3689
The denominator D(z) coefficients in descending powers of z
are:
1.0000 -0.6028 0.4814
and thus the transfer function and the plot without pre-warping are as shown below.
    G_2(z) = (0.3689z^2 − 0.7377z + 0.3689) / (z^2 − 0.6028z + 0.4814)
This appendix serves as an introduction to the basic MATLAB commands and functions, proce-
dures for naming and saving the user generated files, comment lines, access to MATLAB’s Edi-
tor/Debugger, finding the roots of a polynomial, and making plots. Several examples are pro-
vided with detailed explanations.
Helvetica Font: User inputs at MATLAB’s command window prompt >> or EDU>>*
When we first start MATLAB, we see the toolbar on top of the command screen and the prompt
EDU>>. This prompt is displayed also after execution of a command; MATLAB now waits for a new
command from the user. It is highly recommended that we use the Editor/Debugger to write our pro-
gram, save it, and return to the command screen to execute the program as explained below.
To use the Editor/Debugger:
1. From the File menu on the toolbar, we choose New and click on M-File. This takes us to the Editor
Window where we can type our code (list of statements) for a new file, or open a previously saved
file. We must save our program with a file name which starts with a letter. Important! MATLAB is
case sensitive, that is, it distinguishes between upper- and lower-case letters. Thus, t and T are two
different characters in MATLAB language. The files that we create are saved with the file name we
use and the extension .m; for example, myfile01.m. It is a good practice to save the code in a file
name that is descriptive of our code content. For instance, if the code performs some matrix oper-
ations, we ought to name and save that file as matrices01.m or any other similar name. We should
also use a floppy disk to backup our files.
2. Once the code is written and saved as an m-file, we may exit the Editor/Debugger window by click-
ing on Exit Editor/Debugger of the File menu. MATLAB then returns to the command window.
3. To execute a program, we type the file name without the .m extension at the >> prompt; then, we
press <enter> and observe the execution and the values obtained from it. If we have saved our
file in drive a or any other drive, we must make sure that it is added to the desired directory in
MATLAB’s search path. The MATLAB User’s Guide provides more information on this topic.
Henceforth, it will be understood that each input command is typed after the >> prompt and fol-
lowed by the <enter> key.
The command help matlab\iofun will display input/output information. To get help with other
MATLAB topics, we can type help followed by any topic from the displayed menu. For example, to
get information on graphics, we type help matlab\graphics. The MATLAB User’s Guide contains
numerous help topics.
To appreciate MATLAB’s capabilities, we type demo and we see the MATLAB Demos menu. We
can do this periodically to become familiar with them. Whenever we want to return to the command
window, we click on the Close button.
When we are done and want to leave MATLAB, we type quit or exit. But if we want to clear all pre-
vious values, variables, and equations without exiting, we should use the command clear. This com-
mand erases everything; it is like exiting MATLAB and starting it again. The command clc clears the
screen but MATLAB still remembers all values, variables and equations that we have already used. In
other words, if we want to clear all previously entered commands, leaving only the >> prompt on the
upper left of the screen, we use the clc command.
All text after the % (percent) symbol is interpreted as a comment line by MATLAB, and thus it is
ignored during the execution of a program. A comment can be typed on the same line as the function
or command or as a separate line. For instance,
conv(p,q) % performs multiplication of polynomials p and q.
% The next statement performs partial fraction expansion of p(x) / q(x)
are both correct.
One of the most powerful features of MATLAB is the ability to do computations involving complex
numbers. We can use either i , or j to denote the imaginary part of a complex number, such as 3-4i
or 3-4j. For example, the statement
z=3−4j
displays
z = 3.0000-4.0000i
In the above example, a multiplication (*) sign between 4 and j was not necessary because the com-
plex number consists of numerical constants. However, if the imaginary part is a function, or variable
such as cos ( x ) , we must use the multiplication sign, that is, we must type cos(x)*j or j*cos(x) for the
imaginary part of the complex number.
We find the roots of any polynomial with the roots(p) function; p is a row vector containing the
polynomial coefficients in descending order.
Example A.1
Find the roots of the polynomial
4 3 2
p 1 ( x ) = x – 10x + 35x – 50x + 24
Solution:
The roots are found with the following two statements where we have denoted the polynomial as p1,
and the roots as roots_p1.
p1=[1 −10 35 −50 24] % Specify and display the coefficients of p1(x)
p1 =
1 -10 35 -50 24
roots_p1=roots(p1) % Find the roots of p1(x)
roots_p1 =
4.0000
3.0000
2.0000
1.0000
We observe that MATLAB displays the polynomial coefficients as a row vector, and the roots as a
column vector.
Example A.2
Find the roots of the polynomial
5 4 2
p 2 ( x ) = x – 7x + 16x + 25x + 52
Solution:
There is no cube term; therefore, we must enter zero as its coefficient. The roots are found with the
statements below, where we have defined the polynomial as p2, and the roots of this polynomial as
roots_p2. The result indicates that this polynomial has three real roots, and two complex roots. Of
course, complex roots always occur in complex conjugate* pairs.
p2=[1 −7 0 16 25 52]
p2 =
1 -7 0 16 25 52
roots_p2=roots(p2)
roots_p2 =
6.5014
2.7428
-1.5711
-0.3366+ 1.3202i
-0.3366- 1.3202i
Example A.3
It is known that the roots of a polynomial are 1, 2, 3, and 4 . Compute the coefficients of this poly-
nomial.
Solution:
We first define a row vector, say r3 , with the given roots as elements of this vector; then, we find the
coefficients with the poly(r) function as shown below.
r3=[1 2 3 4] % Specify the roots of the polynomial
r3 =
1 2 3 4
poly_r3=poly(r3) % Find the polynomial coefficients
poly_r3 =
1 -10 35 -50 24
We observe that these are the coefficients of the polynomial p 1 ( x ) of Example A.1.
Example A.4
It is known that the roots of a polynomial are −1, −2, −3, −4 + j5 and −4 − j5. Find the coefficients
of this polynomial.
Solution:
We form a row vector, say r4 , with the given roots, and we find the polynomial coefficients with the
poly(r) function as shown below.
r4=[ −1 −2 −3 −4+5j −4−5j ]
r4 =
Columns 1 through 4
-1.0000 -2.0000 -3.0000 -4.0000+ 5.0000i
Column 5
-4.0000- 5.0000i
poly_r4=poly(r4)
poly_r4 =
1 14 100 340 499 246
[q,r]=deconv(c,d) −divides polynomial c by polynomial d and displays the quotient q and remain-
der r.
polyder(p) − produces the coefficients of the derivative of a polynomial p.
Example A.6
Let

    p1 = x^5 − 3x^4 + 5x^2 + 7x + 9

and

    p2 = 2x^6 − 8x^4 + 4x^2 + 10x + 12

Compute the product p1 ⋅ p2 using the conv(p,q) function.
Solution:
p1=[1 −3 0 5 7 9]; % The coefficients of p1
p2=[2 0 −8 0 4 10 12]; % The coefficients of p2
p1p2=conv(p1,p2) % Multiply p1 by p2 to compute coefficients of the product p1p2
p1p2 =
2 -6 -8 34 18 -24 -74 -88 78 166 174 108
Therefore,
11 10 9 8 7 6
p 1 ⋅ p 2 = 2x – 6x – 8x + 34x + 18x – 24x
5 4 3 2
– 74x – 88x + 78x + 166x + 174x + 108
Example A.7
Let

    p3 = x^7 − 3x^5 + 5x^3 + 7x + 9

and

    p4 = 2x^6 − 8x^5 + 4x^2 + 10x + 12

Compute the quotient p3 / p4 using the deconv(c,d) function.
Solution:
% It is permissible to write two or more statements in one line separated by semicolons
p3=[1 0 −3 0 5 7 9]; p4=[2 −8 0 0 4 10 12]; [q,r]=deconv(p3,p4)
q =
0.5000
r =
0 4 -3 0 3 2 3
Therefore,
5 4 2
q = 0.5 r = 4x – 3x + 3x + 2x + 3
Example A.8
Let

    p5 = 2x^6 − 8x^4 + 4x^2 + 10x + 12

Compute the derivative dp5 / dx using the polyder(p) function.
Solution:
p5=[2 0 −8 0 4 10 12]; % The coefficients of p5
der_p5=polyder(p5) % Compute the coefficients of the derivative of p5
der_p5 =
12 0 -32 0 8 10
Therefore,

    dp5 / dx = 12x^5 − 32x^3 + 8x + 10
A rational polynomial is the ratio of two polynomials, that is,

    R(x) = Num(x) / Den(x) = (b_n x^n + b_(n−1) x^(n−1) + b_(n−2) x^(n−2) + … + b_1 x + b_0) / (a_m x^m + a_(m−1) x^(m−1) + a_(m−2) x^(m−2) + … + a_1 x + a_0)    (A.2)
where some of the terms in the numerator and/or denominator may be zero. We can find the roots
of the numerator and denominator with the roots(p) function as before.
As noted in the comment line of Example A.7, we can write MATLAB statements in one line, if we
separate them by commas or semicolons. Commas will display the results whereas semicolons will
suppress the display.
Example A.9
Let

    R(x) = p_num / p_den = (x^5 − 3x^4 + 5x^2 + 7x + 9) / (x^6 − 4x^4 + 2x^2 + 5x + 6)
Express the numerator and denominator in factored form, using the roots(p) function.
Solution:
num=[1 −3 0 5 7 9]; den=[1 0 −4 0 2 5 6]; % Do not display num and den coefficients
roots_num=roots(num), roots_den=roots(den) % Display num and den roots
roots_num =
2.4186+ 1.0712i 2.4186- 1.0712i -1.1633
-0.3370+ 0.9961i -0.3370- 0.9961i
roots_den =
1.6760+0.4922i 1.6760-0.4922i -1.9304
-0.2108+0.9870i -0.2108-0.9870i -1.0000
As expected, the complex roots occur in complex conjugate pairs.
For the numerator, we have the factored form
p num = ( x – 2.4186 – j1.0712 ) ( x – 2.4186 + j1.0712 ) ( x + 1.1633 )
( x + 0.3370 – j0.9961 ) ( x + 0.3370 + j0.9961 )
We can also express the numerator and denominator of this rational function as a combination of lin-
ear and quadratic factors. We recall that, in a quadratic equation of the form x 2 + bx + c = 0 whose
roots are x 1 and x 2 , the negative sum of the roots is equal to the coefficient b of the x term, that is,
– ( x 1 + x 2 ) = b , while the product of the roots is equal to the constant term c , that is, x 1 ⋅ x 2 = c .
Accordingly, we form the coefficient b by addition of the complex conjugate roots and this is done
by inspection; then we multiply the complex conjugate roots to obtain the constant term c using
MATLAB as follows:
(2.4186 + 1.0712i)*(2.4186 −1.0712i)
ans = 6.9971
(−0.3370+ 0.9961i)*(−0.3370−0.9961i)
ans = 1.1058
(1.6760+ 0.4922i)*(1.6760−0.4922i)
ans = 3.0512
(−0.2108+ 0.9870i)*(−0.2108−0.9870i)
ans = 1.0186
Thus,

    R(x) = p_num / p_den = ((x^2 − 4.8372x + 6.9971)(x^2 + 0.6740x + 1.1058)(x + 1.1633)) / ((x^2 − 3.3520x + 3.0512)(x^2 + 0.4216x + 1.0186)(x + 1.0000)(x + 1.9304))
We can check this result with MATLAB’s Symbolic Math Toolbox which is a collection of tools (func-
tions) used in solving symbolic expressions. They are discussed in detail in MATLAB’s Users Manual.
For the present, our interest is in using the collect(s) function that is used to multiply two or more
symbolic expressions to obtain the result in polynomial form. We must remember that the
conv(p,q) function is used with numeric expressions only, that is, polynomial coefficients.
Before using a symbolic expression, we must create one or more symbolic variables such as x, y, t, and
so on. For our example, we use the following code:
syms x % Define a symbolic variable and use collect(s) to express numerator in polynomial form
collect((x^2−4.8372*x+6.9971)*(x^2+0.6740*x+1.1058)*(x+1.1633))
ans =
x^5-29999/10000*x^4-1323/3125000*x^3+7813277909/
1562500000*x^2+1750276323053/250000000000*x+4500454743147/
500000000000
and if we simplify this, we find that it is the same as the numerator of the given rational expression in
polynomial form. We can use the same procedure to verify the denominator.
Figure A.1. Electric circuit for Example A.10
Plot the magnitude of the impedance, that is, Z versus radian frequency ω .
Solution:
We cannot type ω (omega) in the MATLAB command window, so we will use the English letter w
instead.
If a statement, or a row vector is too long to fit in one line, it can be continued to the next line by typ-
ing three or more periods, then pressing <enter> to start a new line, and continue to enter data. This
is illustrated below for the data of w and z. Also, as mentioned before, we use the semicolon (;) to
suppress the display of numbers that we do not care to see on the screen.
The data are entered as follows:
w=[300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900....
2000 2100 2200 2300 2400 2500 2600 2700 2800 2900 3000];
%
z=[39.339 52.789 71.104 97.665 140.437 222.182 436.056....
1014.938 469.830 266.032 187.052 145.751 120.353 103.111....
90.603 81.088 73.588 67.513 62.481 58.240 54.611 51.468....
48.717 46.286 44.122 42.182 40.432 38.845];
Of course, if we want to see the values of w or z or both, we simply type w or z, and we press
<enter>. To plot z (y-axis) versus w (x-axis), we use the plot(x,y) command. For this example, we
use plot(w,z). When this command is executed, MATLAB displays the plot on MATLAB’s graph
screen. This plot is shown in Figure A.2.
title(‘string’): This command adds a line of the text string (label) at the top of the plot.
xlabel(‘string’) and ylabel(‘string’) are used to label the x- and y-axis respectively.
The amplitude frequency response is usually represented with the x-axis in a logarithmic scale. We
can use the semilogx(x,y) command that is similar to the plot(x,y) command, except that the x-axis
is represented as a log scale, and the y-axis as a linear scale. Likewise, the semilogy(x,y) command is
similar to the plot(x,y) command, except that the y-axis is represented as a log scale, and the x-axis as
a linear scale. The loglog(x,y) command uses logarithmic scales for both axes.
Throughout this text it will be understood that log is the common (base 10) logarithm, and ln is the
natural (base e) logarithm. We must remember, however, the function log(x) in MATLAB is the nat-
ural logarithm, whereas the common logarithm is expressed as log10(x), and the logarithm to the
base 2 as log2(x).
Let us now redraw the plot with the above options by adding the following statements:
semilogx(w,z); grid; % Replaces the plot(w,z) command
title('Magnitude of Impedance vs. Radian Frequency');
xlabel('w in rads/sec'); ylabel('|Z| in Ohms')
After execution of these commands, our plot is as shown in Figure A.3.
If the y-axis represents power, voltage or current, the x-axis of the frequency response is more often
shown in a logarithmic scale, and the y-axis in dB (decibels).
To display the voltage v in dB units on the y-axis, we add the relation dB=20*log10(v), and we
replace the semilogx(w,z) command with semilogx(w,dB).
The command gtext(‘string’)* switches to the current Figure Window, and displays a cross-hair that
can be moved around with the mouse. For instance, we can use the command gtext(‘Impedance |Z|
versus Frequency’), and this will place a cross-hair in the Figure window. Then, using the mouse, we
can move the cross-hair to the position where we want our label to begin, and we press <enter>.
* A default is a particular value for a variable that is assigned automatically by an operating system and remains in
effect unless canceled or overridden by the operator.
* With MATLAB Versions 6 and higher we can add text, lines and arrows directly into the graph using the tools pro-
vided on the Figure Window.
This function specifies the number of data points but not the increments between data points. An
alternate function is
x=first: increment: last
and this specifies the increments between points but not the number of data points.
The code for the 3-phase plot is as follows:
x=linspace(0, 2*pi, 60); % pi is a built-in function in MATLAB;
% we could have used x=0:0.02*pi:2*pi or x = (0: 0.02: 2)*pi instead;
y=sin(x); u=sin(x+2*pi/3); v=sin(x+4*pi/3);
plot(x,y,x,u,x,v); % The x-axis must be specified for each function
grid on, box on, % turn grid and axes box on
text(0.75, 0.65, 'sin(x)'); text(2.85, 0.65, 'sin(x+2*pi/3)'); text(4.95, 0.65, 'sin(x+4*pi/3)')
These three waveforms are shown on the same plot of Figure A.4.
Solution:
We arbitrarily choose the interval (length) shown on the code below.
x= -10: 0.5: 10; % Length of vector x
y= x; % Length of vector y must be same as x
z= -2.*x.^3+x+3.*y.^2-1;% Vector z is function of both x and y*
plot3(x,y,z); grid
The three-dimensional plot is shown in Figure A.5
In a two-dimensional plot, we can set the limits of the x- and y-axes with the axis([xmin xmax ymin
ymax]) command. Likewise, in a three-dimensional plot we can set the limits of all three axes with
the axis([xmin xmax ymin ymax zmin zmax]) command. It must be placed after the plot(x,y) or
plot3(x,y,z) commands, or on the same line without first executing the plot command. This must be
done for each plot. The three-dimensional text(x,y,z,’string’) command will place string beginning
at the co-ordinate (x,y,z) on the plot.
For three-dimensional plots, grid on and box off are the default states.
* This statement uses the so called dot multiplication, dot division, and dot exponentiation where the multiplication,
division, and exponential operators are preceded by a dot. These operations will be explained in Section A.9.
Example A.12
The volume V of a right circular cone of radius r and height h is given by
    V = (1 / 3)πr^2 h    (A.4)
Plot the volume of the cone as r and h vary on the intervals 0 ≤ r ≤ 4 and 0 ≤ h ≤ 6 meters.
Solution:
The volume of the cone is a function of both the radius r and the height h, that is,
V = f ( r, h )
The three-dimensional plot is created with the following MATLAB code where, as in the previous
example, in the second line we have used the dot multiplication, dot division, and dot exponentiation.
As mentioned earlier, this will be explained in Section A.9.
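A minimal sketch of such code (the step sizes and axis labels below are assumptions, not the author's) is:
[R,H]=meshgrid(0:0.1:4, 0:0.1:6);    % grid of radius and height values
V=(pi.*R.^2.*H)./3;                  % volume of the cone, element by element
mesh(R,H,V)
xlabel('radius r (meters)'); ylabel('height h (meters)'); zlabel('volume (cubic meters)')
title('Volume of a right circular cone')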
A.8 Subplots
MATLAB can display up to four windows of different plots on the Figure window using the com-
mand subplot(m,n,p). This command divides the window into an m × n matrix of plotting areas
and chooses the pth area to be active. No spaces or commas are required between the three integers
m, n and p. The possible combinations are shown in Figure A.7.
We will illustrate the use of the subplot(m,n,p) command following the discussion on multiplica-
tion, division and exponentiation that follows.
Figure A.7. Possible subplot arrangements; subplot(111) is the default (full screen) plot.
In Section A.2, the arrays [ a b c … ] , such as those that contained the coefficients of polynomials,
consisted of one row and multiple columns, and thus are called row vectors. If an array has one col-
umn and multiple rows, it is called a column vector. We recall that the elements of a row vector are
separated by spaces. To distinguish between row and column vectors, the elements of a column vec-
tor must be separated by semicolons. An easier way to construct a column vector, is to write it first as
a row vector, and then transpose it into a column vector. MATLAB uses the single quotation charac-
ter (′) to transpose a vector. Thus, a column vector can be written either as b=[−1; 3; 6; 11] or as
b=[−1 3 6 11]'. MATLAB produces the same display with either format as shown below.
b=[−1; 3; 6; 11]
b =
-1
3
6
11
b=[−1 3 6 11]'
b =
-1
3
6
11
We will now define Matrix Multiplication and Element-by-Element multiplication.
Let A = [ a1 a2 a3 … an ] and B = [ b1 b2 b3 … bn ]'
be two vectors. We observe that A is defined as a row vector whereas B is defined as a column vec-
tor, as indicated by the transpose operator ('). Here, multiplication of the row vector A by the col-
umn vector B, is performed with the matrix multiplication operator (*). Then,

    A*B = [ a1b1 + a2b2 + a3b3 + … + anbn ] = single value

For example, if
A = [1 2 3 4 5]
and
B = [ – 2 6 – 3 8 7 ]'
the matrix multiplication A*B produces the single value 68, that is,
A∗ B = 1 × ( – 2 ) + 2 × 6 + 3 × ( – 3 ) + 4 × 8 + 5 × 7 = 68
Let C = [ c1 c2 c3 … cn ] and D = [ d1 d2 d3 … dn ]
be two row vectors. Here, multiplication of the row vector C by the row vector D is performed with
the dot multiplication operator (.*). There is no space between the dot and the multiplication symbol.
Thus,

    C.*D = [ c1d1 c2d2 c3d3 … cndn ]    (A.6)
This product is another row vector with the same number of elements, as the elements of C and D .
As an example, let
C = [1 2 3 4 5]
and
D = [ –2 6 –3 8 7 ]
Dot multiplication of these two row vectors produces the following result.

    C.*D = [ 1×(−2) 2×6 3×(−3) 4×8 5×7 ] = [ −2 12 −9 32 35 ]
Check with MATLAB:
C=[1 2 3 4 5]; % Vectors C and D must have
D=[−2 6 −3 8 7]; % same number of elements
C.*D % We observe that this is a dot multiplication
ans =
-2 12 -9 32 35
Similarly, the division (/) and exponentiation (^) operators, are used for matrix division and exponen-
tiation, whereas dot division (./) and dot exponentiation (.^) are used for element-by-element divi-
sion and exponentiation.
We must remember that no space is allowed between the dot (.) and the multiplication, division, and
exponentiation operators.
Note: A dot (.) is never required with the plus (+) and minus (−) operators.
Example A.13
Write the MATLAB code that produces a simple plot for the waveform defined as

    y = f(t) = 3e^(−4t) cos 5t − 2e^(−3t) sin 2t + t^2 / (t + 1)    (A.7)
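A minimal sketch of code that plots (A.7) (the time interval chosen below is an assumption) is:
t=0:0.01:5;                                                         % time interval (assumed)
y=3.*exp(-4.*t).*cos(5.*t)-2.*exp(-3.*t).*sin(2.*t)+t.^2./(t+1);    % element-by-element operations
plot(t,y); grid
xlabel('t'); ylabel('y=f(t)')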
Example A.14
Plot the functions

    y = (sin x)^2,  z = (cos x)^2,  w = (sin x)^2 ⋅ (cos x)^2,  v = (sin x)^2 / (cos x)^2

in the interval 0 ≤ x ≤ 2π using 100 data points. Use the subplot command to display these func-
tions as four windows on the same graph.
Solution:
The MATLAB code to produce the four subplots is as follows:
x=linspace(0,2*pi,100); % Interval with 100 data points
y=(sin(x).^ 2); z=(cos(x).^ 2);
w=y.* z;
v=y./ (z+eps); % add eps to avoid division by zero
subplot(221); % upper left of four subplots
plot(x,y); axis([0 2*pi 0 1]);
title('y=(sinx)^2');
subplot(222); % upper right of four subplots
plot(x,z); axis([0 2*pi 0 1]);
title('z=(cosx)^2');
subplot(223); % lower left of four subplots
plot(x,w); axis([0 2*pi 0 0.3]);
title('w=(sinx)^2*(cosx)^2');
subplot(224); % lower right of four subplots
plot(x,v); axis([0 2*pi 0 400]);
title('v=(sinx)^2/(cosx)^2');
These subplots are shown in Figure A.9.
The next example illustrates MATLAB’s capabilities with imaginary numbers. We will introduce the
real(z) and imag(z) functions that display the real and imaginary parts of the complex quantity z = x
+ iy, the abs(z), and the angle(z) functions that compute the absolute value (magnitude) and phase
angle of the complex quantity z = x + iy = r∠θ. We will also use the polar(theta,r) function that pro-
duces a plot in polar coordinates, where r is the magnitude, theta is the angle in radians, and the
round(n) function that rounds a number to its nearest integer.
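A quick illustration of these functions (a sketch with an arbitrarily chosen value) is:
z=3+4j;                     % an arbitrary complex number
real(z), imag(z)            % returns 3 and 4
abs(z), angle(z)            % returns 5 and 0.9273 (radians)
round(abs(z)+0.4)           % rounds 5.4 to the nearest integer, 5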
Example A.15
Consider the electric circuit of Figure A.10.
With the given values of resistance, inductance, and capacitance, the impedance Z ab as a function of
the radian frequency ω can be computed from the following expression:
Figure A.10. Electric circuit for Example A.15, with two 10 Ω resistors, a 0.1 H inductor, and a 10 µF capacitor; the impedance Z_ab is measured between terminals a and b.
Figure A.11. Plot for the real part of the impedance in Example A.15
Example A.15 clearly illustrates how powerful, fast, accurate, and flexible MATLAB is.
Figure A.12. Plot for the imaginary part of the impedance in Example A.15
Note: MATLAB does not have a function to maximize a function of one variable, that is, there is no
fmax(f,x1,x2) function in MATLAB; but since a maximum of f ( x ) is equal to a minimum of – f ( x ) ,
we can use fmin(f,x1,x2) to find both minimum and maximum values of a function.
fplot(fcn,lims) plots the function specified by the string fcn between the x-axis limits specified by
lims = [xmin xmax]. Using lims = [xmin xmax ymin ymax] also controls the y-axis limits. The
string fcn must be the name of an m-file function or a string with variable x .
Note: NaN (Not-a-Number) is not a function; it is MATLAB’s response to an undefined expression
such as 0 ⁄ 0 , ∞ ⁄ ∞ , or inability to produce a result as described in the next paragraph. We can avoid
division by zero using the eps number, which we mentioned earlier.
Example A.16
Find the zeros, maxima and minima of the function

    f(x) = 1 / ((x − 0.1)^2 + 0.01) − 1 / ((x − 1.2)^2 + 0.04) − 10
Solution:
We first plot this function to observe the approximate zeros, maxima, and minima using the follow-
ing code.
x=−1.5: 0.01: 1.5; y=1./ ((x−0.1).^ 2 + 0.01) −1./ ((x−1.2).^ 2 + 0.04) −10; plot(x,y); grid
* This function is being replaced with the function x = fminbnd(fun,x1,x2). This function starts at X0 and finds a
local minimizer x of the function fun in the interval x1 < x < x2.
Figure A.14. Plot for Example A.16 using the plot command
The roots (zeros) of this function appear to be in the neighborhood of x = – 0.2 and x = 0.3 . The
maximum occurs at approximately x = 0.1 where, approximately, y max = 90 , and the minimum
occurs at approximately x = 1.2 where, approximately, y min = – 34 .
Next, we define and save f ( x ) as the funczero01.m function m-file with the following code:
function y=funczero01(x)
% Finding the zeros of the function shown below
y=1/((x−0.1)^2+0.01)−1/((x−1.2)^2+0.04)-10;
Now, we can use the fplot(fcn,lims) command to plot f ( x ) as follows.
fplot('funczero01', [−1.5 1.5]); grid
This plot is shown in Figure A.15. As expected, this plot is identical to the plot of Figure A.14 that
was obtained with the plot(x,y) command.
We will use the fzero(f,x) function to compute the roots of f ( x ) in (A.20) more precisely. The code
below must be saved with a file name, and then invoked with that file name.
x1= fzero('funczero01', −0.2);
x2= fzero('funczero01', 0.3);
fprintf('The roots (zeros) of this function are r1= %3.4f', x1);
fprintf(' and r2= %3.4f \n', x2)
Figure A.15. Plot for Example A.16 using the fplot command
MATLAB displays the following:
The roots (zeros) of this function are r1= -0.1919 and r2= 0.3788
Whenever we use the fmin(f,x1,x2) or the fminbnd(f,x1,x2) function, we must remember that this
function searches for a minimum and it may display the values of local minima* , if any, before dis-
playing the function minimum. We should, therefore, plot the function with either the plot(x,y) or
the fplot(fcn,lims) command to find the smallest possible interval within which the function mini-
mum lies. For this example, we specify the range 0 ≤ x ≤ 1.5 rather than the interval – 1.5 ≤ x ≤ 1.5 .
The minimum of f ( x ) is found with the fmin(f,x1,x2) function as follows.
min_val=fmin('funczero01', 0, 1.5)
min_val = 1.2012
This is the value of x at which y = f ( x ) is minimum. To find the value of y corresponding to this
value of x, we substitute it into f ( x ) , that is,
x=1.2012; y=1 / ((x−0.1) ^ 2 + 0.01) −1 / ((x−1.2) ^ 2 + 0.04) −10
y = -34.1812
To find the maximum value, we must first define a new function m-file that will produce – f ( x ) . We
define it as follows:
* Local maxima or local minima, are the maximum or minimum values of a function within a restricted range of
values in the independent variable. When the entire range is considered, the maxima and minima are considered
to be the maximum and minimum values in the entire range in which the function is defined.
function y=minusfunczero01(x)
% It is used to find maximum value from −f(x)
y=−(1/((x−0.1)^2+0.01)−1/((x−1.2)^2+0.04)−10);
We have placed the minus (−) sign in front of the right side of the last expression above, so that the
maximum value will be displayed. Of course, this is equivalent to the negative of the funczero01
function.
Now, we execute the following code to get the value of x where the maximum y = f ( x ) occurs.
max_val=fmin('minusfunczero01', 0,1)
max_val = 0.0999
x=0.0999; % Using this value find the corresponding value of y
y=1 / ((x−0.1) ^ 2 + 0.01) −1 / ((x−1.2) ^ 2 + 0.04) −10
y = 89.2000
This appendix is a review of the algebra of complex numbers. The basic operations are defined
and illustrated by several examples. Applications using Euler’s identities are presented, and the
exponential and polar forms are discussed and illustrated with examples.
Figure B.1. The j operator: multiplication by j rotates a vector by 90° counterclockwise, so that
j(jA) = j^2 A = −A, j(−jA) = −j^2 A = A, and j(−A) = j^3 A = −jA.
Note: In our subsequent discussion, we will designate the x-axis (abscissa) as the real axis, and the y-
axis (ordinate) as the imaginary axis with the understanding that the “imaginary” axis is just as “real”
as the real axis. In other words, the imaginary axis is just as important as the real axis.*
An imaginary number is the product of a real number, say r , by the operator j . Thus, r is a real num-
ber and jr is an imaginary number.
A complex number is the sum (or difference) of a real number and an imaginary number. For example,
the number A = a + jb where a and b are both real numbers, is a complex number. Then,
a = Re{A} and b = Im{A}, where Re{A} denotes the real part of A and Im{A} the imaginary part of A.
By definition, two complex numbers A and B where A = a + jb and B = c + jd , are equal if and
only if their real parts are equal, and also their imaginary parts are equal. Thus, A = B if and only if
a = c and b = d .
Example B.1
It is given that A = 3 + j 4 , and B = 4 – j 2 . Find A + B and A – B
Solution:
A + B = (3 + j4) + (4 – j2) = (3 + 4) + j(4 – 2) = 7 + j2
and
A – B = (3 + j4) – (4 – j2) = (3 – 4) + j(4 + 2) = – 1 + j6
* We may think the real axis as the cosine axis and the imaginary axis as the sine axis.
If A = a + jb and B = c + jd, then

    A ⋅ B = (a + jb) ⋅ (c + jd) = ac + jad + jbc + j^2 bd

and since j^2 = −1,

    A ⋅ B = ac + jad + jbc − bd = (ac − bd) + j(ad + bc)    (B.1)
Example B.2
It is given that A = 3 + j 4 and B = 4 – j 2 . Find A ⋅ B
Solution:
A ⋅ B = ( 3 + j 4 ) ⋅ ( 4 – j 2 ) = 12 – j 6 + j 16 – j 2 8 = 20 + j 10
The conjugate of a complex number, denoted as A∗ , is another complex number with the same real
component, and with an imaginary component of opposite sign. Thus, if A = a + jb , then
A∗ = a – j b .
Example B.3
It is given that A = 3 + j 5 . Find A∗
Solution:
The conjugate of the complex number A has the same real component, but the imaginary compo-
nent has opposite sign. Then, A∗ = 3 – j 5
If a complex number A is multiplied by its conjugate, the result is a real number. Thus, if A = a + jb,
then

    A ⋅ A* = (a + jb)(a − jb) = a^2 − jab + jab − j^2 b^2 = a^2 + b^2
Example B.4
It is given that A = 3 + j 5 . Find A ⋅ A∗
Solution:
    A ⋅ A* = (3 + j5)(3 − j5) = 3^2 + 5^2 = 9 + 25 = 34
    A / B = (a + jb) / (c + jd) = ((a + jb)(c − jd)) / ((c + jd)(c − jd)) = (A ⋅ B*) / (B ⋅ B*)
          = ((ac + bd) + j(bc − ad)) / (c^2 + d^2)
          = (ac + bd) / (c^2 + d^2) + j(bc − ad) / (c^2 + d^2)    (B.2)
In (B.2), we multiplied both the numerator and denominator by the conjugate of the denominator to
eliminate the j operator from the denominator of the quotient. Using this procedure, we see that the
quotient is easily separated into a real and an imaginary part.
Example B.5
It is given that A = 3 + j 4 , and B = 4 + j 3 . Find A ⁄ B
Solution:
Using the procedure of expression (B.2), we get

    A / B = (3 + j4) / (4 + j3) = ((3 + j4)(4 − j3)) / ((4 + j3)(4 − j3)) = (12 − j9 + j16 + 12) / (4^2 + 3^2)
          = (24 + j7) / 25 = 24/25 + j7/25 = 0.96 + j0.28
    e^(jθ) = cos θ + j sin θ    (B.3)

and

    e^(−jθ) = cos θ − j sin θ    (B.4)
Then,
    C^2 = a^2 + b^2

or

    C = sqrt(a^2 + b^2)    (B.8)

and

    θ = tan^(-1)(b / a)    (B.9)
To convert a complex number from rectangular to exponential form, we use the expression
    a + jb = sqrt(a^2 + b^2) ⋅ e^(j tan^(-1)(b / a))    (B.10)
To convert a complex number from exponential to rectangular form, we use the expressions
    Ce^(jθ) = C cos θ + jC sin θ
    Ce^(−jθ) = C cos θ − jC sin θ    (B.11)
The polar form is essentially the same as the exponential form but the notation is different, that is,
    Ce^(jθ) = C ∠θ    (B.12)
where the left side of (B.12) is the exponential form, and the right side is the polar form.
We must remember that the phase angle θ is always measured with respect to the positive real axis, and
rotates in the counterclockwise direction.
Example B.6
Convert the following complex numbers to exponential and polar forms:
a. 3 + j 4
b. – 1 + j 2
c. – 2 – j
d. 4 – j 3
Solution:
a. The real and imaginary components of this complex number are shown in Figure B.2 (real axis value 3, imaginary axis value 4, angle 53.1°). Then,

   3 + j4 = √(3² + 4²) ⋅ e^(j tan⁻¹(4 / 3)) = 5e^(j53.1°) = 5 ∠53.1°

b. The real and imaginary components of this complex number are shown in Figure B.3 (real axis value −1, imaginary axis value 2, angle 116.6°). Then,

   −1 + j2 = √(1² + 2²) ⋅ e^(j tan⁻¹(2 / (−1))) = √5 e^(j116.6°) = √5 ∠116.6°

c. The real and imaginary components of this complex number are shown in Figure B.4 (real axis value −2, imaginary axis value −1, angle 206.6°, or −153.4° measured clockwise). Then,

   −2 − j = √(2² + 1²) ⋅ e^(j tan⁻¹((−1) / (−2))) = √5 e^(j206.6°) = √5 ∠206.6° = √5 e^(−j153.4°) = √5 ∠−153.4°

d. The real and imaginary components of this complex number are shown in Figure B.5 (real axis value 4, imaginary axis value −3, angle 323.1°, or −36.9° measured clockwise). Then,

   4 − j3 = √(4² + 3²) ⋅ e^(j tan⁻¹(−3 / 4)) = 5e^(j323.1°) = 5 ∠323.1° = 5e^(−j36.9°) = 5 ∠−36.9°
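MATLAB's abs and angle functions (also used later in Example C.18) perform these conversions numerically; the short sketch below is ours and simply repeats part a:

x=3+4j;                   % The complex number of part a
C=abs(x)                  % Magnitude; MATLAB returns 5
theta=angle(x)*180/pi     % Phase angle in degrees; MATLAB returns 53.1301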
Example B.7
Express the complex number – 2 ∠30° in exponential and in rectangular forms.
Solution:
We recall that −1 = j². Since each j rotates a vector by 90° counterclockwise, then −2 ∠30° is the
same as 2 ∠30° rotated counterclockwise by 180 ° . Therefore,
– 2 ∠30° = 2 ∠( 30° + 180° ) = 2 ∠210° = 2 ∠– 150°
The exponential form is

   −2 ∠30° = 2 ∠210° = 2e^(j210°)

and, using (B.11), the rectangular form is

   2e^(j210°) = 2 cos 210° + j2 sin 210° = −1.73 − j1.00

The components of this complex number are shown in Figure B.6 (real axis value −1.73, imaginary axis value −1, angle 210°, or −150° measured clockwise).
To multiply two complex numbers in exponential (or polar) form, we multiply the magnitudes and we add the phase angles, that is, if

   A = M ∠θ and B = N ∠φ

then,

   AB = MN ∠(θ + φ) = (Me^(jθ))(Ne^(jφ)) = MN e^(j(θ + φ))     (B.13)
Example B.8
Multiply A = 10 ∠53.1° by B = 5 ∠– 36.9°
Solution:
Multiplication in polar form yields
AB = ( 10 × 5 ) ∠[ 53.1° + ( – 36.9° ) ] = 50 ∠16.2°
and multiplication in exponential form yields
   AB = (10e^(j53.1°))(5e^(−j36.9°)) = 50e^(j(53.1° − 36.9°)) = 50e^(j16.2°)
To divide one complex number by another when both are expressed in exponential or polar form,
we divide the magnitude of the dividend by the magnitude of the divisor, and we subtract the phase
angle of the divisor from the phase angle of the dividend, that is, if
A = M ∠θ and B = N ∠φ
then,
   A/B = (M/N) ∠(θ − φ) = (Me^(jθ)) / (Ne^(jφ)) = (M/N) e^(j(θ − φ))     (B.14)
Example B.9
Divide A = 10 ∠53.1° by B = 5 ∠– 36.9°
Solution:
Division in polar form yields
   A/B = (10 ∠53.1°) / (5 ∠−36.9°) = 2 ∠[53.1° − (−36.9°)] = 2 ∠90°
Division in exponential form yields
   A/B = (10e^(j53.1°)) / (5e^(−j36.9°)) = 2e^(j53.1°) e^(j36.9°) = 2e^(j90°)
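These results can also be verified numerically in MATLAB by building A and B from their exponential forms; the sketch below is ours, and the abs and angle calls simply recover the polar form of each result:

A=10*exp(1j*53.1*pi/180); B=5*exp(-1j*36.9*pi/180);   % A and B in exponential form
P=A*B;  [abs(P) angle(P)*180/pi]                       % Product: magnitude 50, angle 16.2 degrees
Q=A/B;  [abs(Q) angle(Q)*180/pi]                       % Quotient: magnitude 2, angle 90 degrees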
A matrix is a rectangular array of numbers such as those shown below.

   [ 2   3   7 ]              [  1   3   1 ]
   [ 1  −1   5 ]      or      [ −2   1  −5 ]
                              [  4  −7   6 ]

In general form, a matrix A is denoted as

        [ a11  a12  a13  …  a1n ]
        [ a21  a22  a23  …  a2n ]
   A =  [ a31  a32  a33  …  a3n ]     (C.1)
        [  …    …    …   …   …  ]
        [ am1  am2  am3  …  amn ]
The numbers a ij are the elements of the matrix where the index i indicates the row, and j indicates
the column in which each element is positioned. For instance, a 43 indicates the element positioned
in the fourth row and third column.
A matrix of m rows and n columns is said to be a matrix of order m × n.
If m = n , the matrix is said to be a square matrix of order m (or n ). Thus, if a matrix has five rows
and five columns, it is said to be a square matrix of order 5.
In a square matrix, the elements a 11, a 22, a 33, …, a nn are called the main diagonal elements. Alter-
nately, we say that the matrix elements a 11, a 22, a 33, …, a nn , are located on the main diagonal.
† The sum of the diagonal elements of a square matrix A is called the trace* of A .
† A matrix in which every element is zero, is called a zero matrix.
Two matrices A = [aij] and B = [bij] are said to be equal, that is, A = B, if and only if

   aij = bij      i = 1, 2, 3, …, m      j = 1, 2, 3, …, n     (C.2)
Two matrices are said to be conformable for addition (subtraction), if they are of the same order m × n .
If A = a ij and B = b ij are conformable for addition (subtraction), their sum (difference) will be
another matrix C with the same order as A and B , where each element of C is the sum (difference)
of the corresponding elements of A and B , that is,
C = A ± B = [ a ij ± b ij ] (C.3)
Example C.1
Compute A + B and A – B given that
   A = [ 1  2  3 ]      and      B = [  2  3  0 ]
       [ 0  1  4 ]                   [ −1  2  5 ]

Solution:

   A + B = [ 1+2   2+3   3+0 ]  =  [  3   5   3 ]
           [ 0−1   1+2   4+5 ]     [ −1   3   9 ]

and

   A − B = [ 1−2   2−3   3−0 ]  =  [ −1  −1   3 ]
           [ 0+1   1−2   4−5 ]     [  1  −1  −1 ]
Check with MATLAB:
A=[1 2 3; 0 1 4]; B=[2 3 0; −1 2 5]; % Define matrices A and B
A+B % Add A and B
* Henceforth, all paragraphs and topics preceded by a dagger ( † ) may be skipped. These are discussed in matrix
theory textbooks.
ans =
3 5 3
-1 3 9
If k is any scalar (a positive or negative number), and not [k] which is a 1 × 1 matrix, then multipli-
cation of a matrix A by the scalar k is the multiplication of every element of A by k .
Example C.2
Multiply the matrix
   A = [ 1  −2 ]
       [ 2   3 ]
by
a. k 1 = 5
b. k 2 = – 3 + j2
Solution:
a.
   k1 ⋅ A = 5 × [ 1  −2 ]  =  [ 5×1     5×(−2) ]  =  [  5  −10 ]
               [ 2   3 ]     [ 5×2     5×3    ]     [ 10   15 ]

b.
   k2 ⋅ A = (−3 + j2) × [ 1  −2 ]  =  [ (−3+j2)×1    (−3+j2)×(−2) ]  =  [ −3+j2     6−j4 ]
                        [ 2   3 ]     [ (−3+j2)×2    (−3+j2)×3    ]     [ −6+j4    −9+j6 ]
Check with MATLAB:

k1=5; k2=-3+2j; A=[1 -2; 2 3];   % Define scalars k1, k2 and matrix A
k1*A                              % Multiply matrix A by scalar k1
ans =
     5   -10
    10    15
k2*A                              % Multiply matrix A by scalar k2
ans =
-3.0000+ 2.0000i 6.0000- 4.0000i
-6.0000+ 4.0000i -9.0000+ 6.0000i
Two matrices A and B are said to be conformable for multiplication A ⋅ B in that order, only when the
number of columns of matrix A is equal to the number of rows of matrix B . That is, the product
A ⋅ B (but not B ⋅ A ) is conformable for multiplication only if A is an m × p matrix and matrix B is
a p × n matrix. The product A ⋅ B will then be an m × n matrix. A convenient way to determine if
two matrices are conformable for multiplication is to write the dimensions of the two matrices side-
by-side as shown below.
   A        B
 m × p    p × n      The inner dimensions are both p; this shows that A and B are conformable for multiplication, and the product A ⋅ B is an m × n matrix.

   B        A
 p × n    m × p      The inner dimensions are n and m; unless n = m, B and A are not conformable for multiplication in that order.
For matrix multiplication, the operation is row by column. Thus, to obtain the product A ⋅ B , we
multiply each element of a row of A by the corresponding element of a column of B ; then, we add
these products.
Example C.3
Matrices C and D are defined as
   C = [ 2  3  4 ]      and      D = [  1 ]
                                     [ −1 ]
                                     [  2 ]
Compute the products C ⋅ D and D ⋅ C
Solution:
The dimensions of matrices C and D are 1 × 3 and 3 × 1 respectively; therefore the product C ⋅ D is feasible, and will result in a 1 × 1 matrix, that is,

   C ⋅ D = [ 2  3  4 ] [  1 ]  =  (2)⋅(1) + (3)⋅(−1) + (4)⋅(2)  =  7
                       [ −1 ]
                       [  2 ]

The dimensions of D and C are 3 × 1 and 1 × 3 respectively, and therefore the product D ⋅ C is also feasible. Multiplication of these will produce a 3 × 3 matrix as follows:

   D ⋅ C = [  1 ] [ 2  3  4 ]  =  [ (1)⋅2     (1)⋅3     (1)⋅4  ]  =  [  2   3   4 ]
           [ −1 ]                 [ (−1)⋅2    (−1)⋅3    (−1)⋅4 ]     [ −2  −3  −4 ]
           [  2 ]                 [ (2)⋅2     (2)⋅3     (2)⋅4  ]     [  4   6   8 ]
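As elsewhere in this appendix, the products can be checked with MATLAB; the lines below are a minimal sketch of that check:

C=[2 3 4]; D=[1 -1 2]';   % Define C as a row vector and D as a column vector
C*D                       % Returns the 1 x 1 result 7
D*C                       % Returns the 3 x 3 matrix [2 3 4; -2 -3 -4; 4 6 8]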
† A square matrix is said to be upper triangular when all elements below the main diagonal are zero. The matrix A shown in (C.4) is an upper triangular matrix.

        [ a11  a12  a13  …  a1n ]
        [  0   a22  a23  …  a2n ]
   A =  [  0    0    …   …   …  ]     (C.4)
        [  …    …    0   …   …  ]
        [  0    0    0   …  amn ]

In an upper triangular matrix, not all elements above the diagonal need to be non-zero.
† A square matrix is said to be lower triangular when all elements above the main diagonal are zero. The matrix B shown in (C.5) is a lower triangular matrix.

        [ a11   0    0   …   0  ]
        [ a21  a22   0   …   0  ]
   B =  [  …    …    …   0   0  ]     (C.5)
        [  …    …    …   …   0  ]
        [ am1  am2  am3  …  amn ]
In a lower triangular matrix, not all elements below the diagonal need to be non-zero.
† A square matrix is said to be diagonal, if all elements are zero, except those in the diagonal. The
matrix C of (C.6) is a diagonal matrix.
        [ a11   0    0   …   0  ]
        [  0   a22   0   …   0  ]
   C =  [  0    0    …   0   0  ]     (C.6)
        [  0    0    0   …   0  ]
        [  0    0    0   …  amn ]
† A diagonal matrix is called a scalar matrix if a11 = a22 = a33 = … = ann = k, where k is a scalar. The matrix D shown in (C.7) is a scalar matrix with k = 4.

        [ 4  0  0  0 ]
   D =  [ 0  4  0  0 ]     (C.7)
        [ 0  0  4  0 ]
        [ 0  0  0  4 ]
A scalar matrix with k = 1 , is called an identity matrix I . Shown below are 2 × 2 , 3 × 3 , and 4 × 4
identity matrices.
   [ 1  0 ]        [ 1  0  0 ]        [ 1  0  0  0 ]
   [ 0  1 ]        [ 0  1  0 ]        [ 0  1  0  0 ]     (C.8)
                   [ 0  0  1 ]        [ 0  0  1  0 ]
                                      [ 0  0  0  1 ]

The MATLAB eye(n) function displays an n × n identity matrix. For example,

eye(4)          % Display a 4 by 4 identity matrix

displays

ans =
     1     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1
Likewise, the eye(size(A)) function produces an identity matrix whose size is the same as that of matrix A. For example, let matrix A be defined as
A=[1 3 1; −2 1 −5; 4 −7 6] % Define matrix A
A =
1 3 1
-2 1 -5
4 -7 6
then,
eye(size(A))
displays
ans =
1 0 0
0 1 0
0 0 1
† The transpose of a matrix A, denoted as A^T, is the matrix that is obtained when the rows and columns of matrix A are interchanged. For example, if
   A = [ 1  2  3 ]      then      A^T = [ 1  4 ]     (C.9)
       [ 4  5  6 ]                      [ 2  5 ]
                                        [ 3  6 ]
In MATLAB we use the apostrophe (′) symbol to denote and obtain the transpose of a matrix. Thus,
for the above example,
A=[1 2 3; 4 5 6] % Define matrix A
A =
1 2 3
4 5 6
A' % Display the transpose of A
ans =
1 4
2 5
3 6
† A symmetric matrix A is a matrix such that A^T = A, that is, the transpose of the matrix is the same as A. An example of a symmetric matrix is shown below.
        [ 1   2   3 ]             [ 1   2   3 ]
   A =  [ 2   4  −5 ]      A^T =  [ 2   4  −5 ]  =  A     (C.10)
        [ 3  −5   6 ]             [ 3  −5   6 ]
† If a matrix A has complex numbers as elements, the matrix obtained from A by replacing each
element by its conjugate, is called the conjugate of A , and it is denoted as A∗
An example is shown below.
   A = [ 1 + j2      j   ]          A∗ = [ 1 − j2     −j   ]
       [   3      2 − j3 ]               [   3      2 + j3 ]
MATLAB has two built-in functions which compute the complex conjugate of a number. The
first, conj(x), computes the complex conjugate of any complex number, and the second, conj(A),
computes the conjugate of a matrix A . Using MATLAB with the matrix A defined as above, we
get
A = [1+2j j; 3 2−3j] % Define and display matrix A
A =
1.0000+ 2.0000i 0+ 1.0000i
3.0000 2.0000- 3.0000i
conj_A=conj(A) % Compute and display the conjugate of A
conj_A =
   1.0000- 2.0000i        0- 1.0000i
   3.0000                 2.0000+ 3.0000i

† A square matrix A such that A^T = −A is called skew-symmetric. An example is shown below.

        [  0   2  −3 ]             [  0  −2   3 ]
   A =  [ −2   0  −4 ]      A^T =  [  2   0   4 ]  =  −A
        [  3   4   0 ]             [ −3  −4   0 ]
C.4 Determinants
Let matrix A be defined as the square matrix
        [ a11  a12  a13  …  a1n ]
        [ a21  a22  a23  …  a2n ]
   A =  [ a31  a32  a33  …  a3n ]     (C.11)
        [  …    …    …   …   …  ]
        [ an1  an2  an3  …  ann ]

then, the determinant of A, denoted as detA, is defined as

   detA = a11a22a33…ann + a12a23a34…an1 + a13a24a35…an2 + …
          − an1…a22a13 − an2…a23a14 − an3…a24a15 − …     (C.12)
The determinant of a square matrix of order 2 is found as follows. Let A be a matrix of order 2, that is,

   A = [ a11  a12 ]     (C.13)
       [ a21  a22 ]
Then,
detA = a 11 a 22 – a 21 a 12 (C.14)
Example C.4
Matrices A and B are defined as
   A = [ 1  2 ]      and      B = [ 2  −1 ]
       [ 3  4 ]                   [ 2   0 ]
Compute detA and detB .
Solution:
detA = 1 ⋅ 4 – 3 ⋅ 2 = 4 – 6 = – 2
detB = 2 ⋅ 0 – 2 ⋅ ( – 1 ) = 0 – ( – 2 ) = 2
Check with MATLAB:
A=[1 2; 3 4]; B=[2 −1; 2 0]; % Define matrices A and B
det(A) % Compute the determinant of A
ans =
-2
det(B) % Compute the determinant of B
ans =
2
Let A be a matrix of order 3 , that is,
        [ a11  a12  a13 ]
   A =  [ a21  a22  a23 ]     (C.15)
        [ a31  a32  a33 ]

then, detA is found from

   detA = a11a22a33 + a12a23a31 + a13a21a32 − a31a22a13 − a32a23a11 − a33a21a12     (C.16)

A convenient method to evaluate the determinant of order 3 is to write the first two columns to the right of the 3 × 3 matrix, add the products formed by the diagonals from upper left to lower right, and then subtract the products formed by the diagonals from lower left to upper right, as shown in the diagram below. When this is done properly, we obtain (C.16) above.
   a11  a12  a13 │ a11  a12
   a21  a22  a23 │ a21  a22      (+ : diagonals from upper left to lower right;  − : diagonals from lower left to upper right)
   a31  a32  a33 │ a31  a32
This method works only with second and third order determinants. To evaluate higher order deter-
minants, we must first compute the cofactors; these will be defined shortly.
Example C.5
Compute detA and detB if matrices A and B are defined as
        [ 2  3  5 ]                    [ 2  −3  −4 ]
   A =  [ 1  0  1 ]      and      B =  [ 1   0  −2 ]
        [ 2  1  0 ]                    [ 0  −5  −6 ]
Solution:
           2  3  5 │ 2  3
   detA =  1  0  1 │ 1  0
           2  1  0 │ 2  1

or

   detA = (2 × 0 × 0) + (3 × 1 × 2) + (5 × 1 × 1)
          − (2 × 0 × 5) − (1 × 1 × 2) − (0 × 1 × 3) = 11 − 2 = 9
Likewise,
2 –3 –4 2 –3
detB = 1 0 – 2 1 – 2
0 –5 –6 2 –6
or
detB = [ 2 × 0 × ( – 6 ) ] + [ ( – 3 ) × ( – 2 ) × 0 ] + [ ( – 4 ) × 1 × ( – 5 ) ]
– [ 0 × 0 × ( – 4 ) ] – [ ( – 5 ) × ( – 2 ) × 2 ] – [ ( – 6 ) × 1 × ( – 3 ) ] = 20 – 38 = – 18
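Both determinants can be checked with MATLAB's det function; the following lines are a minimal sketch of that check:

A=[2 3 5; 1 0 1; 2 1 0]; B=[2 -3 -4; 1 0 -2; 0 -5 -6];   % Define matrices A and B
det(A)                                                     % Returns 9
det(B)                                                     % Returns -18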
Let A be a square matrix of order n, that is,

        [ a11  a12  a13  …  a1n ]
        [ a21  a22  a23  …  a2n ]
   A =  [ a31  a32  a33  …  a3n ]     (C.17)
        [  …    …    …   …   …  ]
        [ an1  an2  an3  …  ann ]

If we remove the elements of its ith row and jth column, the determinant of the remaining (n − 1)-order square matrix is called the minor of A, and it is denoted as Mij.

The signed minor (−1)^(i+j) Mij is called the cofactor of aij, and it is denoted as αij.
Example C.6
Matrix A is defined as
        [ a11  a12  a13 ]
   A =  [ a21  a22  a23 ]     (C.18)
        [ a31  a32  a33 ]

Compute the minors M11, M12, M13 and the cofactors α11, α12, α13.

Solution:

   M11 = | a22  a23 |        M12 = | a21  a23 |        M13 = | a21  a22 |
         | a32  a33 |              | a31  a33 |              | a31  a32 |

and

   α11 = (−1)^(1+1) M11 = M11        α12 = (−1)^(1+2) M12 = −M12        α13 = (−1)^(1+3) M13 = M13

The remaining minors and cofactors are defined similarly.
Example C.7
Compute the cofactors of matrix A defined as
        [  1   2  −3 ]
   A =  [  2  −4   2 ]     (C.19)
        [ −1   2  −6 ]
Solution:
   α11 = (−1)^(1+1) | −4   2 |  =  20          α12 = (−1)^(1+2) |  2   2 |  =  10     (C.20)
                    |  2  −6 |                                  | −1  −6 |

   α13 = (−1)^(1+3) |  2  −4 |  =  0           α21 = (−1)^(2+1) |  2  −3 |  =  6      (C.21)
                    | −1   2 |                                  |  2  −6 |

   α22 = (−1)^(2+2) |  1  −3 |  =  −9          α23 = (−1)^(2+3) |  1   2 |  =  −4     (C.22)
                    | −1  −6 |                                  | −1   2 |

   α31 = (−1)^(3+1) |  2  −3 |  =  −8          α32 = (−1)^(3+2) |  1  −3 |  =  −8     (C.23)
                    | −4   2 |                                  |  2   2 |

   α33 = (−1)^(3+3) |  1   2 |  =  −8     (C.24)
                    |  2  −4 |
It is useful to remember that the signs of the cofactors follow the pattern
+ − + − +
− + − + −
+ − + − +
− + − + −
+ − + − +
that is, the cofactors on the diagonals have the same sign as their minors.
Let A be a square matrix of any size; the value of the determinant of A is the sum of the products
obtained by multiplying each element of any row or any column by its cofactor.
Example C.8
Matrix A is defined as
        [  1   2  −3 ]
   A =  [  2  −4   2 ]     (C.25)
        [ −1   2  −6 ]

Compute detA by expanding about the elements of the first row.

Solution:

   detA = 1 ⋅ | −4   2 |  −  2 ⋅ |  2   2 |  −  3 ⋅ |  2  −4 |  =  1 × 20 − 2 × (−10) − 3 × 0 = 40
              |  2  −6 |         | −1  −6 |         | −1   2 |
Let A be a square matrix of order 4, that is,

        [ a11  a12  a13  a14 ]
   A =  [ a21  a22  a23  a24 ]
        [ a31  a32  a33  a34 ]
        [ a41  a42  a43  a44 ]

Then, expanding about the elements of the first column,

   detA = a11 | a22  a23  a24 |  −  a21 | a12  a13  a14 |  +  a31 | a12  a13  a14 |  −  a41 | a12  a13  a14 |     (C.26)
              | a32  a33  a34 |         | a32  a33  a34 |         | a22  a23  a24 |         | a22  a23  a24 |
              | a42  a43  a44 |         | a42  a43  a44 |         | a42  a43  a44 |         | a32  a33  a34 |
Example C.9
Compute the value of the determinant of the matrix A defined as
        [  2  −1   0  −3 ]
   A =  [ −1   1   0  −1 ]     (C.27)
        [  4   0   3  −2 ]
        [ −3   0   0   1 ]
Solution:
Using the above procedure, we will multiply each element of the first column by its cofactor. Then,
   detA = 2 |  1   0  −1 |  −  (−1) | −1   0  −3 |  +  4 | −1   0  −3 |  −  (−3) | −1   0  −3 |
            |  0   3  −2 |          |  0   3  −2 |       |  1   0  −1 |          |  1   0  −1 |
            |  0   0   1 |          |  0   0   1 |       |  0   0   1 |          |  0   3  −2 |

                 [a]                      [b]                  [c]                     [d]
Next, using the procedure of Example C.5 or Example C.8, we find
[ a ] = 6 , [ b ] = – 3 , [ c ] = 0 , [ d ] = – 36
and thus
detA = [ a ] + [ b ] + [ c ] + [ d ] = 6 – 3 + 0 – 36 = – 33
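A determinant of any order can of course be verified with MATLAB's det function; the check below is a minimal sketch for this example:

A=[2 -1 0 -3; -1 1 0 -1; 4 0 3 -2; -3 0 0 1];   % Define the matrix of (C.27)
det(A)                                           % Returns -33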
Property 1: If all elements of one row or one column are zero, the determinant is zero. An example of
this is the determinant of the cofactor [ c ] above.
Property 2: If all the elements of one row or column are m times the corresponding elements of another
row or column, the determinant is zero. For example, if
        [ 2  4  1 ]
   A =  [ 3  6  1 ]     (C.28)
        [ 1  2  1 ]
then,
           2  4  1 │ 2  4
   detA =  3  6  1 │ 3  6   =  12 + 4 + 6 − 6 − 4 − 12 = 0     (C.29)
           1  2  1 │ 1  2
Here, detA is zero because the second column in A is 2 times the first column.
Check with MATLAB:
A=[2 4 1; 3 6 1; 1 2 1];det(A)
ans =
0
Property 3: If two rows or two columns of a matrix are identical, the determinant is zero. This follows
from Property 2 with m = 1 .
Let us consider the system of the three equations below

   a11x + a12y + a13z = A
   a21x + a22y + a23z = B     (C.30)
   a31x + a32y + a33z = C

and let

   ∆ = | a11  a12  a13 |     D1 = | A  a12  a13 |     D2 = | a11  A  a13 |     D3 = | a11  a12  A |
       | a21  a22  a23 |          | B  a22  a23 |          | a21  B  a23 |          | a21  a22  B |
       | a31  a32  a33 |          | C  a32  a33 |          | a31  C  a33 |          | a31  a32  C |
Cramer’s rule states that the unknowns x, y, and z can be found from the relations
   x = D1 / ∆        y = D2 / ∆        z = D3 / ∆     (C.31)
provided that the determinant ∆ (delta) is not zero.
We observe that the numerators of (C.31) are determinants that are formed from ∆ by the substitu-
tion of the known values A , B , and C , for the coefficients of the desired unknown.
Cramer’s rule applies to systems of two or more equations.
If (C.30) is a homogeneous set of equations, that is, if A = B = C = 0 , then, D 1, D 2, and D 3 are
all zero as we found in Property 1 above. Then, x = y = z = 0 also.
Example C.10
Use Cramer’s rule to find v 1 , v 2 , and v 3 if
   2v1 − 5 − v2 + 3v3 = 0
   −2v3 − 3v2 − 4v1 = 8     (C.32)
   v2 + 3v1 − 4 − v3 = 0

Solution:

Rearranging the unknowns and transferring known values to the right side, we get

   2v1 − v2 + 3v3 = 5
   −4v1 − 3v2 − 2v3 = 8     (C.33)
   3v1 + v2 − v3 = 4
   ∆ =   2  −1   3 │  2  −1
        −4  −3  −2 │ −4  −3   =  6 + 6 − 12 + 27 + 4 + 4 = 35
         3   1  −1 │  3   1

   D1 =  5  −1   3 │  5  −1
         8  −3  −2 │  8  −3   =  15 + 8 + 24 + 36 + 10 − 8 = 85
         4   1  −1 │  4   1

   D2 =  2   5   3 │  2   5
        −4   8  −2 │ −4   8   =  −16 − 30 − 48 − 72 + 16 − 20 = −170
         3   4  −1 │  3   4

   D3 =  2  −1   5 │  2  −1
        −4  −3   8 │ −4  −3   =  −24 − 24 − 20 + 45 − 16 − 16 = −55
         3   1   4 │  3   1
Then, by Cramer's rule,

   v1 = D1 / ∆ = 85 / 35 = 17 / 7        v2 = D2 / ∆ = −170 / 35 = −34 / 7        v3 = D3 / ∆ = −55 / 35 = −11 / 7     (C.34)
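The four determinants can be evaluated with MATLAB's det function. The sketch below is ours (the format rat command is assumed only so that the results display as ratios, matching (C.34)):

format rat                               % Display results as ratios
B=[2 -1 3; -4 -3 -2; 3 1 -1];            % Coefficient matrix of (C.33)
delta=det(B);                            % The determinant delta
d1=[5 -1 3; 8 -3 -2; 4 1 -1];            % delta with its first column replaced by the right side
d2=[2 5 3; -4 8 -2; 3 4 -1];             % delta with its second column replaced by the right side
d3=[2 -1 5; -4 -3 8; 3 1 4];             % delta with its third column replaced by the right side
v1=det(d1)/delta, v2=det(d2)/delta, v3=det(d3)/delta
format short                             % Restore the default display format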
MATLAB then displays

v1 =
     17/7
v2 =
     -34/7
v3 =
     -11/7

These are the same values as in (C.34).
Example C.11

Use the Gaussian elimination method to find v1, v2, and v3 of the system of equations

   2v1 − v2 + 3v3 = 5
   −4v1 − 3v2 − 2v3 = 8     (C.35)
   3v1 + v2 − v3 = 4
Solution:
As a first step, we add the first equation of (C.35) with the third to eliminate the unknown v2 and we
obtain the following equation.
5v 1 + 2v 3 = 9 (C.36)
Next, we multiply the third equation of (C.35) by 3, and we add it with the second to eliminate v 2 .
Then, we obtain the following equation.
5v 1 – 5v 3 = 20 (C.37)
Subtraction of (C.36) from (C.37) eliminates v1 and yields −7v3 = 11, or

   v3 = −11 / 7     (C.38)

Now, we can find the unknown v1 from either (C.36) or (C.37). By substitution of (C.38) into (C.36) we get

   5v1 + 2 ⋅ (−11 / 7) = 9     or     v1 = 17 / 7     (C.39)
Finally, we can find the last unknown v 2 from any of the three equations of (C.35). By substitution
into the first equation we get
   v2 = 2v1 + 3v3 − 5 = 34 / 7 − 33 / 7 − 35 / 7 = −34 / 7     (C.40)
These are the same values as those we found in Example C.10.
The Gaussian elimination method works well if the coefficients of the unknowns are small integers,
as in Example C.11. However, it becomes impractical if the coefficients are large or fractional num-
bers.
Let A be an n-square matrix and αij the cofactor of aij. Then the adjoint of A, denoted as adjA, is defined as the n-square matrix

           [ α11  α21  α31  …  αn1 ]
           [ α12  α22  α32  …  αn2 ]
   adjA =  [ α13  α23  α33  …  αn3 ]     (C.41)
           [  …    …    …   …   …  ]
           [ α1n  α2n  α3n  …  αnn ]
We observe that the cofactors of the elements of the ith row (column) of A are the elements of the
ith column (row) of adjA .
Example C.12
Compute adjA if Matrix A is defined as
        [ 1  2  3 ]
   A =  [ 1  3  4 ]     (C.42)
        [ 1  4  3 ]
Solution:
           [   | 3  4 |     − | 2  3 |       | 2  3 |  ]
           [   | 4  3 |       | 4  3 |       | 3  4 |  ]
           [                                           ]      [ −7   6  −1 ]
   adjA =  [ − | 1  4 |       | 1  3 |     − | 1  3 |  ]  =   [  1   0  −1 ]
           [   | 1  3 |       | 1  3 |       | 1  4 |  ]      [  1  −2   1 ]
           [                                           ]
           [   | 1  3 |     − | 1  2 |       | 1  2 |  ]
           [   | 1  4 |       | 1  4 |       | 1  3 |  ]
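For a non-singular matrix, the identity adjA = detA ⋅ A⁻¹ provides a quick MATLAB check of this result; the two lines below are a minimal sketch of that check (not the method used above):

A=[1 2 3; 1 3 4; 1 4 3];     % Define matrix A of (C.42)
adjA=det(A)*inv(A)           % adjA = detA*inv(A); returns [-7 6 -1; 1 0 -1; 1 -2 1]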
An n-square matrix A is called singular if detA = 0; if detA ≠ 0, it is called non-singular.

Example C.13

Matrix A is defined as

        [ 1  2  3 ]
   A =  [ 2  3  4 ]     (C.43)
        [ 3  5  7 ]

Determine whether this matrix is singular or non-singular.

Solution:

           1  2  3 │ 1  2
   detA =  2  3  4 │ 2  3   =  21 + 24 + 30 − 27 − 20 − 28 = 0
           3  5  7 │ 3  5

Therefore, matrix A is singular.
If A is an n-square non-singular matrix, that is, if detA ≠ 0, its inverse, denoted as A⁻¹, is found from

   A⁻¹ = (1 / detA) ⋅ adjA     (C.44)
Example C.14
Matrix A is defined as

        [ 1  2  3 ]
   A =  [ 1  3  4 ]     (C.45)
        [ 1  4  3 ]

Compute its inverse A⁻¹.

Solution:

Here, detA = 9 + 8 + 12 − 9 − 16 − 6 = −2, and adjA was found in Example C.12. Then,

   A⁻¹ = (1 / detA) ⋅ adjA = −(1 / 2) ⋅ [ −7   6  −1 ]     [  3.5  −3    0.5 ]
                                        [  1   0  −1 ]  =  [ −0.5   0    0.5 ]     (C.46)
                                        [  1  −2   1 ]     [ −0.5   1   −0.5 ]
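The inverse can be verified with MATLAB's inv function; the two lines below are a minimal sketch of that check:

A=[1 2 3; 1 3 4; 1 4 3];   % Define matrix A of (C.45)
inv(A)                     % Returns [3.5 -3 0.5; -0.5 0 0.5; -0.5 1 -0.5]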
Multiplication of a matrix A by its inverse A –1 produces the identity matrix I , that is,
   A A⁻¹ = I     or     A⁻¹ A = I     (C.47)
Example C.15
Prove the validity of (C.47) for the Matrix A defined as
   A = [ 4  3 ]
       [ 2  2 ]
Proof:
   A⁻¹ = (1 / detA) ⋅ adjA = (1 / 2) ⋅ [  2  −3 ]  =  [  1   −3/2 ]
                                       [ −2   4 ]     [ −1    2   ]

and

   A A⁻¹ = [ 4  3 ] [  1  −3/2 ]  =  [ 4 − 3    −6 + 6 ]  =  [ 1  0 ]  =  I
           [ 2  2 ] [ −1   2   ]     [ 2 − 2    −3 + 4 ]     [ 0  1 ]
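The same check can be made numerically in MATLAB; the sketch below is ours:

A=[4 3; 2 2];   % Define matrix A
A*inv(A)        % Returns the 2 x 2 identity matrix [1 0; 0 1]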
If A is a non-singular matrix and X and B are column vectors such that

   A X = B     (C.48)

then premultiplication of both sides by A⁻¹ yields

   A⁻¹ A X = A⁻¹ B     (C.49)
or
   X = A⁻¹ B     (C.50)

Therefore, we can use (C.50) to solve any set of simultaneous equations that have solutions. We will refer to this method as the inverse matrix method of solution of simultaneous equations.
Example C.16
For the system of equations

   2x1 + 3x2 + x3 = 9
   x1 + 2x2 + 3x3 = 6     (C.51)
   3x1 + x2 + 2x3 = 8

compute the unknowns x1, x2, and x3 using the inverse matrix method.
Solution:
In matrix form, the given set of equations is AX = B where
        [ 2  3  1 ]          [ x1 ]          [ 9 ]
   A =  [ 1  2  3 ]     X =  [ x2 ]     B =  [ 6 ]     (C.52)
        [ 3  1  2 ]          [ x3 ]          [ 8 ]
Then,

   X = A⁻¹ B     (C.53)
or
   [ x1 ]     [ 2  3  1 ]⁻¹ [ 9 ]
   [ x2 ]  =  [ 1  2  3 ]   [ 6 ]     (C.54)
   [ x3 ]     [ 3  1  2 ]   [ 8 ]
Next, we find the determinant and the adjoint of A:

   detA = 18        and        adjA = [  1  −5   7 ]
                                      [  7   1  −5 ]
                                      [ −5   7   1 ]
Therefore,
   A⁻¹ = (1 / detA) ⋅ adjA = (1 / 18) ⋅ [  1  −5   7 ]
                                        [  7   1  −5 ]
                                        [ −5   7   1 ]
        [ x1 ]             [  1  −5   7 ] [ 9 ]             [ 35 ]     [ 35/18 ]     [ 1.94 ]
   X =  [ x2 ]  =  (1/18) ⋅ [  7   1  −5 ] [ 6 ]  =  (1/18) ⋅ [ 29 ]  =  [ 29/18 ]  =  [ 1.61 ]     (C.55)
        [ x3 ]             [ −5   7   1 ] [ 8 ]             [  5 ]     [  5/18 ]     [ 0.28 ]
To verify our results, we could use the MATLAB’s inv(A) function, and then multiply A –1 by B .
However, it is easier to use the matrix left division operation X = A \ B ; this is MATLAB’s solution
of A –1 B for the matrix equation A ⋅ X = B , where matrix X is the same size as matrix B . For this
example,
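a minimal sketch of that left-division check is shown below (the displayed values assume MATLAB's default short format and are the decimal equivalents of 35/18, 29/18, and 5/18):

A=[2 3 1; 1 2 3; 3 1 2]; B=[9 6 8]';   % Define matrix A and column vector B
X=A\B                                   % Solve A*X = B by left division
X =
    1.9444
    1.6111
    0.2778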
Example C.17
For the electric circuit of Figure C.1,
Figure C.1. Electric circuit for Example C.17 (V = 100 V source; resistances of 1 Ω, 2 Ω, 2 Ω, 9 Ω, 9 Ω, and 4 Ω; loop currents I1, I2, and I3)
Use the inverse matrix method to compute the values of the currents I 1 , I 2 , and I 3
Solution:
For this circuit, the equations in matrix form are R ⋅ I = V, where

        [ 10   −9    0 ]          [ 100 ]          [ I1 ]
   R =  [ −9   20   −9 ]     V =  [  0  ]     I =  [ I2 ]
        [  0   −9   15 ]          [  0  ]          [ I3 ]

Then, I = R⁻¹ ⋅ V.
Therefore, we find the determinant and the adjoint of R . For this example, we find that
   detR = 975        and        adjR = [ 219  135   81 ]
                                       [ 135  150   90 ]     (C.58)
                                       [  81   90  119 ]
Then,
219 135 81
–1 1 1
R = ------------ adjR = --------- 135 150 90
detR 975
81 90 119
and
I1 219 135 81 100 219 22.46
1- 100
I = I2 = -------- 0 = 975 135 = 13.85
--------
-
975 135 150 90
I3 81 90 119 0 81 8.31
We can also obtain these values with a spreadsheet such as Microsoft Excel:

1. We enter the elements of matrix R in cells B3:D5 and the elements of vector V in cells G3:G5, as shown in the spreadsheet below.

2. Next, we compute R⁻¹. We choose the block of cells B7:D9, we highlight them, and with the cell marker positioned in B7, we type the formula

   =MINVERSE(B3:D5)

and we press the Ctrl-Shift-Enter keys simultaneously. We observe that R⁻¹ appears in these cells.

3. Now, we choose the block of cells G7:G9 for the values of the current I. As before, we highlight them, and with the cell marker positioned in G7, we type the formula

   =MMULT(B7:D9,G3:G5)

and we press the Ctrl-Shift-Enter keys simultaneously. The values of I then appear in cells G7:G9.
        A        B        C        D       E    F         G        H
  1     Spreadsheet for Matrix Inversion and Matrix Multiplication
  2
  3             10       -9        0                     100
  4     R=      -9       20       -9               V=      0
  5              0       -9       15                       0
  6
  7          0.225    0.138    0.083                  22.462
  8     R⁻¹= 0.138    0.154    0.092               I= 13.846
  9          0.083    0.092    0.122                  8.3077
 10
Example C.18
For the phasor circuit shown below,

(Phasor circuit for Example C.18: source VS = 170 ∠0° V; R1 = 85 Ω; R2 = 50 Ω; R3 = 100 Ω; inductive reactance j200 Ω; capacitive reactance −j100 Ω; node voltages V1 and V2; IX is the current through R3.)

the current IX can be found from

   IX = (V1 − V2) / R3     (C.59)
and the voltages V 1 and V 2 can be computed from the nodal equations
   (V1 − 170 ∠0°) / 85 + (V1 − V2) / 100 + (V1 − 0) / (j200) = 0     (C.60)
and
   (V2 − 170 ∠0°) / (−j100) + (V2 − V1) / 100 + (V2 − 0) / 50 = 0     (C.61)
Compute, and express the current IX in both rectangular and polar forms, by first simplifying like terms, collecting, and then writing the above relations in matrix form as YV = I, where Y = Admittance, V = Voltage, and I = Current.
Solution:
The Y matrix elements are the coefficients of V 1 and V 2 . Simplifying and rearranging the nodal
equations of (C.60) and (C.61), we get
( 0.0218 – j0.005 )V 1 – 0.01V 2 = 2
(C.62)
– 0.01 V 1 + ( 0.03 + j0.01 )V 2 = j1.7
and in matrix form, Y V = I, that is,

   [ 0.0218 − j0.005        −0.01      ] [ V1 ]     [  2   ]
   [      −0.01         0.03 + j0.01   ] [ V2 ]  =  [ j1.7 ]

where the 2 × 2 matrix is the admittance matrix Y, and the column vectors are the voltage vector V and the current vector I.
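The node voltages can then be obtained with MATLAB's left-division operator, as in Example C.16; the sketch below is ours (the values displayed next are the text's results):

Y=[0.0218-0.005j -0.01; -0.01 0.03+0.01j];   % Admittance matrix Y of (C.62)
I=[2; 1.7j];                                  % Current vector I
V=Y\I;                                        % Solve Y*V = I for the node voltages
V1=V(1), V2=V(2)                              % Display V1 and V2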
V1 =
1.0490e+002 + 4.9448e+001i
V2 =
53.4162 + 55.3439i
Next, we find I X from
R3=100; IX=(V(1)−V(2))/R3 % Compute the value of IX
IX =
0.5149- 0.0590i
This is the rectangular form of I X . For the polar form we use
magIX=abs(IX) % Compute the magnitude of IX
magIX =
0.5183
thetaIX=angle(IX)*180/pi % Compute angle theta in degrees
thetaIX =
-6.5326
Therefore, in polar form
I X = 0.518 ∠– 6.53°
Spreadsheets have limited capabilities with complex numbers, and thus we cannot use them to compute matrices that include complex numbers in their elements, as in Example C.18.
C.12 Exercises
For Exercises 1, 2, and 3 below, the matrices A , B , C , and D are defined as:
        [  1  −1  −4 ]          [  5   9  −3 ]          [  4   6 ]
   A =  [  5   7  −2 ]     B =  [ −2   8   2 ]     C =  [ −3   8 ]     D = [  1  −2   3 ]
        [  3  −5   6 ]          [  7  −4   6 ]          [  5  −2 ]         [ −3   6  −4 ]
1. Perform the following computations, if possible. Verify your answers with MATLAB.
a. A + B b. A + C c. B + D d. C + D
e. A – B f. A – C g. B – D h. C – D
2. Perform the following computations, if possible. Verify your answers with MATLAB.
a. A ⋅ B b. A ⋅ C c. B ⋅ D d. C ⋅ D
e. B ⋅ A f. C ⋅ A g. D ⋅ A h. D ⋅ C
3. Perform the following computations, if possible. Verify your answers with MATLAB.
a. detA b. detB c. detC d. detD
e. det ( A ⋅ B ) f. det ( A ⋅ C )
4. Solve the following systems of equations using Cramer’s rule. Verify your answers with MATLAB.
   a.  x1 − 2x2 + x3 = −4                    b.  −x1 + 2x2 − 3x3 + 5x4 = 14
       −2x1 + 3x2 + x3 = 9                        x1 + 3x2 + 2x3 − x4 = 9
       3x1 + 4x2 − 5x3 = 0                        3x1 − 3x2 + 2x3 + 4x4 = 19
                                                  4x1 + 2x2 + 5x3 + x4 = 27
5. Solve the following matrix equations. Verify your answers with MATLAB.

        [ 1  3   4 ]   [ x1 ]     [ −3 ]            [  2   4   3  −2 ]   [ x1 ]     [   1 ]
   a.   [ 3  1  −2 ] ⋅ [ x2 ]  =  [ −2 ]       b.   [  2  −4   1   3 ] ⋅ [ x2 ]  =  [  10 ]
        [ 2  3   5 ]   [ x3 ]     [  0 ]            [ −1   3  −4   2 ]   [ x3 ]     [ −14 ]
                                                    [  2  −2   2   1 ]   [ x4 ]     [   7 ]