Analog Signals
and Systems
Erhan Kudeki
University of Illinois at Urbana-Champaign
All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.
The author and publisher of this book have used their best efforts in preparing this book. These efforts
include the development, research, and testing of the theories and programs to determine their
effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard
to these programs or the documentation contained in this book. The author and publisher shall not be
liable in any event for incidental or consequential damages in connection with, or arising out of, the
furnishing, performance, or use of these programs.
ISBN-10: 0-13-143506-X
ISBN-13: 978-0-13-143506-3
Preface
Dear student: This textbook will introduce you to the exciting world of analog signals
and systems, explaining in detail some of the basic principles that underlie the opera-
tion of radio receivers, cell phones, and other devices that we depend on to exchange
and process information (and that we enjoy in our day-to-day lives). This subject
matter constitutes a fundamental core of the modern discipline of electrical and
computer engineering (ECE). The overall scope of our book and our pedagogical
approach are discussed in an introductory chapter numbered “0” before we get into
the nitty-gritty of electrical circuits and analog systems, beginning in Chapter 1. Here,
in the Preface, we tell you and our other readers – mainly your instructors – about the
reasons underlying our choice of topics and the organization of material presented
in this book. We hope that you will enjoy using this text and then perhaps return to
this Preface after having completed your study, for then you will be able to better
appreciate what we are about to describe.
This textbook traces its origins to a major curriculum revision undertaken in
the middle 1990s at the University of Illinois. Among the many different elements
of the revision, it was decided to completely restructure the required curriculum in
the area of circuits and systems. In particular, both the traditional sophomore-level
circuit analysis course and the junior-level signals and systems course were phased
out, with the material in these courses redistributed within the curriculum in a new
way. Some of the circuits material, and the analog part of the signals and systems
material, were integrated into a new sophomore-level course on analog signals and
systems. This course, for which this book was written, occupies the same slot in the
curriculum as the old circuit analysis course. Other material in the circuit analysis
course was moved to a junior-level electronics elective and to an introductory course
on power systems. The discrete-time topics from the old junior-level signals and
systems course were moved into a popular course on digital signal processing. This
restructuring consolidated the curriculum (saved credit hours); but, more importantly,
it offered pedagogical benefits that are described below.
Similar to the trend initiated in DSP First: A Multimedia Approach, and its
successor Signal Processing First (both by McClellan, Schafer, and Yoder, Prentice-
Hall), our approach takes some of the focus in the early curriculum off circuit analysis,
which no longer is the central topic of ECE. And, it permits the introduction of
signal processing concepts earlier in the curriculum, for immediate use in subsequent
courses. However, unlike DSP First and Signal Processing First, we prefer “analog
first” as the portal into ECE curricula, for three reasons. First, this treatment follows
more naturally from required courses on calculus, differential equations, and physics,
which model primarily analog phenomena. Second, this approach better serves as a
cornerstone of the broader ECE curricula, preparing students for follow-on courses
in electronics (requiring circuit analysis and frequency response), electromagnetics
(needing phasors, capacitance, and inductance), solid state electronics (using differ-
ential equations), and power systems (requiring circuit analysis, complex numbers,
and phasors). Third, the concept of digital frequency is entirely foreign to students
familiar with only trigonometry and physics. Humans perceive most of the phys-
ical world to be analog. Indeed, it is not possible to fully understand digital signal
processing without considering a complete system composed of an analog-to-digital
converter, a digital filter (or other digital processing), and a digital-to-analog converter.
An analysis of this system requires substantial knowledge of Fourier transforms and
analog frequency response, which are the main emphases of this book.
Beginning with simple course notes, versions of this textbook have been used
successfully at the University of Illinois for more than a decade. As we continued to
work on this project, it became clear to us that the book would have broader appeal,
beyond those schools following the Illinois approach to the early ECE curriculum.
Indeed, this text is equally useful as either “analog first” (prior to a course on discrete-
time signal processing) or “analog second” (after a course on discrete-time signal
processing). In that sense, the text is universal. And, the book would work well
for a course that follows directly after a standard sophomore-level course on circuit
analysis.
We invite instructors to try the approach in this text that has succeeded so well
for us. And, we urge you to look beyond the topical headings, which may sound
standard. We believe that instructors who follow the path of this book will encounter
new and better ways of teaching this foundational material. We have observed that
integration of circuit analysis with signals and systems allows students to see how
circuits are used for signal processing, and not just as mathematical puzzles where
the goal is to solve for node voltages and loop currents. Even more important, we feel
that our students develop an unusually thorough understanding of Fourier analysis
and complex numbers (see Appendix A), which is the core of our book. Students who
complete this text can design simple filters and can explain in the Fourier domain the
workings of a superheterodyne AM radio receiver. Finally, through the introduction
of a small number of labs that are intimately tied to the theory covered in lecture
(see Appendix B), we have constructed a well-rounded learning environment by
integrating theory with applications, design, and implementation.
We extend sincere thanks to all of our faculty colleagues at the University of
Illinois and elsewhere who participated at one time or another in the “ECE 210
project,” and who contributed in so many ways to the course notes that evolved into
our book. We especially thank Tangul Basar, Douglas Jones, George Papen, Dilip
Sarwate, and Timothy Trick, who have used, critiqued, and helped improve many
versions of the notes. We also thank countless students and graduate TAs – Andrea
Mitofsky in particular – for their helpful comments and for catching our mistakes. Finally,
we acknowledge the influence of our prior education and reading on what we have put
to paper – this influence is so far-reaching and untraceable that we have avoided the
task of compiling a comprehensive reference list. Instead, we have included a short
list of further reading (see Appendix C). This list should be useful to readers who
wish to explore the world of signals and systems beyond what can be reached with
this book. Our very best wishes to all who are about to begin their learning journey!
0
Analog Signals and Systems—The Scope and Study Plan
THE SCOPE
The world around us is teeming with signals from natural and man-made sources—
stars and galaxies, radio and TV stations, computers and WiFi cards, cell phones,
video cameras, MP3 players, musical instruments, temperature and pressure sensors,
and countless other devices and systems. In their natural form many of these signals
are analog, or continuous in time. For example, the electrical signal received by a
radio antenna may be represented as an analog voltage waveform v(t), a function of a
continuous time variable t. Similarly, sound traveling through the air can be thought
of as a pressure waveform having a specific numerical value at each instant in time
and each position in space.
Not all signals are analog. Nowadays, many signals are digital. Digital signals
are sequences of numbers. Most often, we acquire digital signals by sampling analog
signals at uniformly spaced points in time and rounding off (quantizing) the samples
to values that can be stored in a computer memory. This process of producing digital
signals is called analog-to-digital (A/D) conversion, or digitization. The acquired
sequence of numbers can be stored or processed (manipulated) by a computer. Then
it often is desired to return the values from the digital realm back to the analog world.
This is accomplished by a process called digital-to-analog (D/A) conversion, whereby
a smooth, continuous (analog) waveform, say, some v(t), is constructed that passes
through the numerical values of the digital signal.
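The sampling-and-quantizing steps just described can be sketched in a few lines of code. The Python fragment below is our own illustration, not something from this book: the function name `digitize`, the 8 Hz sample rate, and the 0.25 quantization step are all arbitrary choices made for the example.

```python
# Hypothetical sketch of A/D conversion: sample an analog signal x(t) at
# uniformly spaced times, then round (quantize) each sample to a fixed grid.
import math

def digitize(x, fs, duration, step):
    """Sample x(t) every 1/fs seconds, then round samples to multiples of `step`."""
    n_samples = int(duration * fs)
    samples = [x(n / fs) for n in range(n_samples)]   # uniform sampling
    return [round(s / step) * step for s in samples]  # quantizing

# Example: a 1 Hz sinusoid sampled at 8 Hz, quantized in steps of 0.25.
digital = digitize(lambda t: math.sin(2 * math.pi * t), fs=8, duration=1, step=0.25)
```

A D/A converter would then do the reverse: construct a smooth waveform passing through the resulting sequence of numbers.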
Modern-day signal processing systems commonly involve both analog and digital
signals. For example, a so-called digital cell phone has many analog components. Your
vocal cords create an acoustic signal (analog), which is captured by a microphone in
the cell phone and converted into an electrical signal (analog). The analog electrical
signal is then digitized—that is, sampled and quantized—to produce a (digital) signal
that is further manipulated by a computer in the cell phone to create a new digital
signal that requires fewer bits for storage and transmission and that is more resistant
to errors encountered during transmission.
Next, this digital signal is used as the input to a modulator that creates a high-
frequency analog waveform that carries the information in the coded digital signal
away from your cell phone’s antenna. At the receiving cell phone these processes
are reversed. The receiving antenna captures an analog signal (voltage versus time),
which then is passed through the demodulator to retrieve the digitally coded speech. A
computer in the receiving cell phone processes, or decodes, this sequence to recreate
the set of samples of the original speech waveform. This set of samples is passed
through a D/A converter to create the analog speech waveform. This signal is amplified
and passed to the speaker in the cell phone, which in turn creates the analog acoustic
waveform that is heard by your ear.
Figure 0.1 uses a block diagram language to summarize what we have just
described. From a high-level point of view—that is, ignoring what is happening in
individual blocks or subsystems shown in the figure—we see that the overall task of
the transmitting phone depicted at the top is to convert the analog voice input pi (t)
into a propagating analog radio wave Ei (t). The receiving phone’s task, on the bottom,
is to extract from many analog radio waves hitting its antenna (E1 (t), E2 (t), etc.) a
Figure 0.1 Simplified cell phone models, transmitting (at the top) and receiving
(on the bottom). Blocks A denote a pair of antennas, A/D and D/A denote
analog-to-digital and digital-to-analog converters, C stands for a computer or
digital processing unit, and M and S represent a microphone and a loudspeaker,
respectively. The transmitting phone at the top converts the sound input pi (t)
(an analog pressure waveform) into a propagating radio wave Ei (t), whereas
the receiving phone on the bottom is designed to reconstruct a sound wave
po (t), which is a delayed and scaled copy of pi (t).
delayed and scaled copy of pi (t) and render it as sound. These tasks are carried out
through a combination of analog and digital signal processing steps represented by the
individual blocks of Figure 0.1. Such combinations of analog and digital processing
are common in many devices that we encounter every day: MP3 players, digital cable
or satellite TV, digital cameras, anti-lock brakes, and household appliance controls.
In this book, we focus on the mathematical analysis and design of analog signal
processing, which is carried out in the analog parts of the previously mentioned
systems. Analog processing frequently is employed in amplification, filtering (to
remove noise or interference) or equalization (to shape the frequency content of a
signal), and modulation (to piggyback a low-frequency signal onto a higher frequency
carrier signal for wireless transmission), as well as in parts of A/D and D/A converters.
Processing of analog signals typically is accomplished by electrical circuits composed
of elements such as resistors, capacitors, inductors, and operational amplifiers. Thus,
it will be necessary for us to learn more about circuit analysis than you may have seen
in preceding courses.
We wrote this book under the assumption that the reader may not be familiar
with electrical circuits much beyond the most fundamental level, which is reviewed
in Chapter 1. Chapters 2 through 4 introduce DC circuit analysis, time-varying circuit
response, and sinusoidal steady-state AC circuits, respectively. The emphasis is on
linear circuit analysis, because the discussions of linear time-invariant (LTI) systems
in later chapters develop naturally as generalizations of the concepts from Chapter 4
on sinusoidal steady-state response in linear circuits.
To utilize the cell phone models shown in Figure 0.1, we need unambiguous
descriptions of the input–output relations of their interconnected subsystems or proces-
sors. In Chapters 5 through 7 we will learn how to formulate such relations for linear,
electrical circuits as well as for other types of analog LTI systems. Chapters 5 through
7 focus on how LTI circuits and systems respond to arbitrary periodic and non-periodic
inputs, and how such responses can be represented by Fourier series and Fourier
transform techniques, respectively.
As you will see in this book, there are two different approaches to understanding
signal processing systems: Both frequency-domain and time-domain techniques are
widely employed. Our initial approach in Chapters 5 through 7 takes the frequency-
domain path, because that happens to be the easier and more natural approach to follow
if our starting point is sinusoidal steady-state circuits (i.e., in Chapter 4). This path
quickly takes us to a sufficiently advanced stage to enable a detailed discussion of AM
radio receivers in Chapter 8. However, fluency in both time- and frequency-domain
methods is necessary because, depending on the problem, one approach generally
will be easier to use or will offer more insight than the other. We therefore will turn
our attention in Chapters 9 and 10 to time-domain methods. We will translate the
frequency-domain results of Chapter 7 to their time-domain counterparts and illus-
trate the use of the time-domain convolution method, as well as impulse and impulse
response concepts. We also will learn in Chapter 9 about sampling and reconstruc-
tion of bandlimited analog signals, and get a glimpse of digital signal processing
techniques.
An analog system can produce an output even in the absence of an input signal
if there is initial energy stored in the system (like the free oscillations of a stretched
spring after its release). In Chapters 11 and 12 we will investigate the full response of
LTI systems and circuits, including their energy-driven outputs, and learn the Laplace
transform technique for solving LTI system initial-value problems. The emphasis
throughout most of the course is on circuit and system analysis—that is, deter-
mining how a circuit or system functions and processes its input signals. However, in
Chapter 12 we will discuss system design and learn how to build stable analog filter
circuits.
The book ends with three appendices. Appendix A is a review of complex numbers.
Both at the circuit level and at the block diagram level, much of the analysis and
design of signal processing systems relies on mathematics. You will be familiar
with the required mathematical subjects: algebra, trigonometry, calculus, differen-
tial equations, and complex numbers. However, most students using this text will find
that their background in complex numbers is entirely insufficient. For example, can
you explain the meaning of √−1? If not, then Appendix A is for you. Most students
will want to study this appendix carefully, parallel to Chapters 1 and 2, and then refer
to it later as needed in Chapter 3 and beyond.
Appendix B includes five laboratory worksheets that are used at the University
of Illinois in a required lab that accompanies the sophomore-level course ECE 210,
Analog Signal Processing, based on this text. The lab component of ECE 210 starts
approximately five weeks into the semester, and the biweekly labs involve simple
measurement and/or design projects related to circuit and systems concepts covered
in class. In the fourth lab session, an AM radio receiver—the topic of Chapter 8—is
assembled with components built in the earlier labs. In the fifth session, the receiver is
modified to include a PC sound card (and software), replacing the back-end hardware.
The labs provide a taste of how signal and system theory applies in practice and
illustrate how real-life signals and circuit behavior may differ from the idealized
versions described in class.
Appendix C provides a list of further reading for students who may wish to learn
more about a topic or who seek an alternative explanation.
STUDY PLAN
This book was written with a “just-in-time” approach. This means that the book tells
a story (so to speak), and new ideas and topics relevant to the story are introduced
only when needed to help the story advance. You will not find here “encyclopedic”
chapters that are stand-alone and complete treatments of distinct topics. (Chapter 1,
which is a review of circuit fundamentals, may be an exception.) Instead, topics are
developed throughout the narrative, and individual ideas make multiple appearances
just when needed as the story unfolds, much like the dynamics of individual characters
in a novel or a play.
For example, although the title of Chapter 7 contains the words “Fourier trans-
form,” the concept of the Fourier transform is foreshadowed as early as in Chapter 3,
and discussions of the Fourier transform continue into the final chapter of the book.
Thus, to learn about the Fourier transform, a full reading of the text is necessary—
reading just Chapter 7 will provide only a partial understanding. And that is true with
many of the main ideas treated in the book. We hope that students will enjoy the story
line enough to stick with it.
In ECE 210 at the University of Illinois, the full text is covered from Chapter 1
through Chapter 12 in one semester (approximately 15 weeks) in four lecture hours per
week, including a first-week lecture on complex numbers as treated in Appendix A.
Chapter 1 and much of Chapter 2 are of a review nature for most students; conse-
quently, they are treated rapidly to devote the bulk of classroom time to Chapters 3
through 12. Exposure to circuit analysis in Chapters 1 through 4 prepares students
for junior-level courses in electronics and electromagnetics, while signal processing
and system analysis tools covered throughout the entire text provide the background
for advanced courses in digital signal processing, communications, control, remote
sensing, and other areas where linear systems notions are essential. Exposure of
sophomores to the tools of linear system theory opens up many creative options for
later courses in their junior and senior years.
The story line of our book is, of course, open ended, in the sense that student
learning of the Fourier transform and its applications, and other important ideas intro-
duced here, will continue beyond Chapter 12. Because of that, we trust our students
to question “what happens next” and to pursue the plot in subsequent courses.
1
Circuit Fundamentals
[Margin note: Review of voltage, current, and power; KVL and KCL; two-terminal elements]

Electrostatic attraction and repulsion between charged particles are fundamental to all electrical phenomena observed in nature. Free charge carriers (e.g., electrons and protons) transported against electrostatic forces gain potential energy, just like a pebble lifted up from the ground against gravitational pull. Conversely, charge carriers release or lose their potential energy when they move in the direction of an electrostatic pull.

In circuit models of electrical systems and devices, the movement of charge carriers is quantified in terms of current variables such as iR, iL, and iC marked on the circuit diagram shown in Figure 1.1. Voltage variables such as vs, vR, and vo are used to keep track of energy gains and losses of carriers moving against or with electrostatic forces. Flow channels of the carriers are represented by two-terminal circuit elements such as R, L, and C, which are distinguished from one another by unique voltage–current, or v–i, relations. The v–i relations, plus Kirchhoff's voltage and current laws representing energy and charge conservation, are sufficient to determine quantitatively how a circuit functions and at what rates energy is generated and lost in the circuit.[1] These fundamental circuit concepts will be reviewed in this chapter.

[Figure 1.1: a circuit with source vs(t), an element R carrying current iR(t) with voltage vR(t), an element L carrying iL(t), and an element C carrying iC(t), with output voltage vo(t).]

1.1 Voltage, Current, and Power
Voltage
The definition of element voltage v given in Table 1.1 can be interpreted as energy
loss per unit charge transported from an element terminal marked by a + sign to the
second terminal marked by −; equivalently, as the energy gain per Coulomb moved
from the − to the + terminal.
Example 1.1
In Figure 1.2a, vb = 4 V stands for energy loss per unit charge transported
from the left terminal of element b marked by the + sign to the right terminal
marked by −. Equivalently, vb = 4 V can be interpreted as energy gain per
unit charge transported from the − to + terminals, or from right to left.
Thus, electrical potential energy per unit charge, or the electrical potential,
is higher at the + terminal of element b compared with its − terminal by
an amount 4 V.
[1] Charge and energy conservation are fundamental to nature: Net electrical charge can neither be generated nor destroyed; if a room contains 1 C of net charge, the only way to change this amount is to move some charged particles in or out of the room; likewise, for energy. When we talk about electrical energy generation, use, or loss, what we really mean is conversion between electrical potential energy and some other form of energy, e.g., mechanical, chemical, thermal, etc.
Element voltage:
    v ≡ lim(Δq→0) Δw/Δq = dw/dq,    v [=] Joule (J) / Coulomb (C) = Volt (V),
where Δw denotes the potential energy loss of Δq amount of charge transported from the + to the − terminal.

Element current:
    i ≡ lim(Δt→0) Δq/Δt = dq/dt,    i [=] Coulomb (C) / second (s) = Ampere (A),
where Δq denotes the net amount of electrical charge transported in direction → during the time interval Δt.

Absorbed power:
    p ≡ vi = (dw/dq)(dq/dt) = dw/dt = lim(Δt→0) Δw/Δt,    p [=] Joule (J) / second (s) = Watt (W),
where Δw denotes the net energy loss of charge carriers moving through the element during the time interval Δt.

Table 1.1 Definitions of element voltage, current, and absorbed power, and the associated units in Systeme International (SI). The symbol [=] stands for “has the unit.”
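The three definitions in Table 1.1 chain together through ordinary calculus: p = vi = (dw/dq)(dq/dt) = dw/dt. As a quick numerical sanity check (our own illustration, not from the text), the fragment below approximates the derivatives with finite differences for a hypothetical element through which q(t) = 2t coulombs flow, with each coulomb losing 4 J:

```python
# Numerical check that p = v*i equals dw/dt for a hypothetical element:
# charge flow q(t) = 2t C, and each coulomb loses 4 J (so v should be 4 V).
def q(t):
    return 2.0 * t        # charge transported by time t, in coulombs

def w(t):
    return 4.0 * q(t)     # energy lost by the carriers by time t, in joules

dt = 1e-6                               # small interval for finite differences
i = (q(dt) - q(0)) / dt                 # i = dq/dt  -> expect 2 A
v = (w(dt) - w(0)) / (q(dt) - q(0))     # v = dw/dq  -> expect 4 V
p = v * i                               # absorbed power -> expect 8 W
dw_dt = (w(dt) - w(0)) / dt             # direct dw/dt, should match p
```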
[Figure 1.2 (a) A circuit with elements a, b, c, and d, with element voltages vb = 4 V and vd = −1 V. (b) The same circuit with the voltage polarities reversed, so that vb = −4 V and vd = 1 V.]
Example 1.2
In Figure 1.2a, vd = −1 V indicates that energy per unit charge is lower
at the + terminal of element d on the top relative to the − terminal on
the bottom, since vd is negative. Or, equivalently, energy per unit charge is
higher at the bottom terminal relative to the top terminal.
[Margin note: Polarity]

The plus and minus signs assigned to element terminals are essential for describing what the voltage variable stands for. These signs are said to indicate the polarity of the element voltage, but the polarity does not indicate whether the voltage is positive or negative. Figure 1.2b shows the same circuit as Figure 1.2a, but with the voltage polarities assigned in the opposite way. As the next two examples show, these revised voltage definitions describe the same physical situation as in Figure 1.2a, because the algebraic signs of the voltage values also have been reversed in Figure 1.2b.
Example 1.3
In Figure 1.2b, vb = −4 V stands for energy loss per unit charge transported
from the right terminal of element b marked by a + sign to the left terminal
marked by −. Therefore, energy gain per unit charge transported from right
to left is 4 V, consistent with what we found in Example 1.1.
Example 1.4
In Figure 1.2b, vd = 1 V indicates that energy per unit charge is higher at
the + terminal on the bottom relative to the − terminal on the top, because
vd > 0. This is consistent with what we found in Example 1.2.
[Margin note: Voltage drop and rise]

We call an element voltage such as vb a voltage rise from the − terminal to the + terminal, or, equivalently, a voltage drop from + to −. So, vb = 4 V in Figure 1.2a is a 4 V rise from the right to the left terminal (− to +) and a 4 V drop from left to right. Likewise, vd = 1 V in Figure 1.2b is a 1 V drop from bottom to top and a 1 V rise from top to bottom.
Example 1.5
What is the voltage drop associated with vb in Figure 1.2b?
Solution In Figure 1.2b, vb is a −4 V drop from right (+) to left (−)
across element b. Remember, by definition, a drop is always from + to −
(and rise, always from − to +).
Notions of voltage drop and rise will play a useful role in formulating Kirchhoff’s
voltage law in the next section.
[Margin note: Circuit nodes, reference, node voltage]

Closely associated with the notion of element voltage is the concept of node voltage, or electrical potential. The connection points of element terminals in a circuit are called the nodes of the circuit. In Figure 1.3, two of the three nodes are marked by dots and the third node is marked by a ground symbol. The node marked by the ground symbol is called the reference node. Distinct node voltage variables are assigned to
Figure 1.3 A circuit with four two-terminal elements and three nodes; the node voltages are v1 = 3 V and v2 = −1 V, with element voltages vb = v1 − v2 and vd = v2 − 0.
all the remaining nodes.[2] In Figure 1.3, we have labeled node voltages v1 and v2. By definition, the node voltage, or electrical potential, vn stands for energy gain per unit charge transported from the reference node to node n.[3] The electrical potential v0 of the reference node is, of course, zero.
Example 1.6
Because v1 = 3 V in Figure 1.3, the energy gain per unit charge transported
from the reference node to node 1 is 3 V, or 3 J/C. Thus, the electrical
potential at node 1 is 3 V higher than at the reference node. Equivalently,
charges transported from node 1 to the reference node lose energy at a 3 J/C
rate.
Example 1.7
In Figure 1.3, v2 = −1 V indicates that 1 C of charge transported from the
reference node to node 2 gains −1 J of energy (same as losing 1 J). Thus,
the electrical potential is higher at the reference node than at node 2.
The voltage across each element in a circuit is the difference of electrical poten-
tials of the nodes at the element terminals. To express an element voltage as a potential
difference, we subtract from the electrical potential of the + terminal the electrical
potential of the − terminal. For instance, in Figure 1.3,
vb = v1 − v2 = 3 V − (−1 V) = 4 V.
This expression reflects the fact that energy loss per unit charge transferred from node
1 to node 2 is 4 V, because electrical potential at node 1 is 4 V higher than at node 2.
Similarly, again referring to Figure 1.3,
vd = v2 − v0 = (−1 V) − 0 = −1 V.
[2] The reference node generally is not electrically connected to the earth. Instead, it is an arbitrarily chosen node that is used as the baseline for defining the other node voltages, much as sea level is arbitrarily chosen as zero elevation.

[3] An analogy is the gravitational potential energy gain of a rock lifted from the ground and placed on a windowsill.
Figure 1.4 (a) The same as the circuit in Figure 1.3 (node voltages v1 = 3 V, v2 = −1 V), but with reversed polarities assigned to element voltages vb and vd. (b) The same circuit, but with a different reference node, giving node voltages v1 = 4 V and v2 = 1 V.
Example 1.8
Express the voltage vb in Figure 1.4a in terms of node voltages v1 = 3 V
and v2 = −1 V.
Solution vb = v2 − v1 = (−1 V) − 3 V = −4 V.
[Margin note: Elements in parallel]

In Figure 1.4a, elements c and d are placed in parallel, making terminal contacts with the same pair of nodes. Thus, both elements have the same terminal potentials. Since the assigned polarities of vc and vd are in agreement, it follows that the potential differences, or element voltages, are vc = vd = 0 − (−1 V) = 1 V.
Finally, we note that any node in a circuit can be chosen as the reference. The
choice impacts the values of the node voltages of the circuit, but not the element volt-
ages, since the latter are potential differences. Changing the reference node causes
equal changes in all node voltage values, so that element voltages (potential differ-
ences) remain unchanged. See Figure 1.4b for an illustration.
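This invariance is easy to verify numerically. In the hypothetical Python sketch below (the helper `element_voltage` and the +1 V shift are our own arbitrary choices, not the book's notation), shifting every node voltage by the same constant leaves the element voltages unchanged:

```python
# Node voltages from Figure 1.3 (reference node at 0 V by definition).
nodes = {"ref": 0.0, "n1": 3.0, "n2": -1.0}

def element_voltage(potentials, plus, minus):
    """Element voltage = potential of the + terminal minus potential of the - terminal."""
    return potentials[plus] - potentials[minus]

vb = element_voltage(nodes, "n1", "n2")   # 3 - (-1) = 4 V
vd = element_voltage(nodes, "n2", "ref")  # -1 - 0  = -1 V

# Choosing a different reference shifts every node voltage by the same constant...
shifted = {name: v + 1.0 for name, v in nodes.items()}
# ...but element voltages, being potential differences, do not change.
vb_shifted = element_voltage(shifted, "n1", "n2")
vd_shifted = element_voltage(shifted, "n2", "ref")
```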
Current
Figure 1.5 shows a circuit with four elements, each representing a possible path for
electrical charge flow. Current variables ia , ib , etc., shown in the figure have been
defined to quantify these flows. (See Table 1.1 for the definition of current.) A current
variable such as ib indicates the amount of net electrical charge that transits an element
per unit time in the direction indicated by the arrow → shown next to the element.
Figure 1.5 The same as Figure 1.3, but showing only the element currents ia, ib, ic = 3 A, and id.
Example 1.9
Suppose that, every second, 2 C of charge move through element b in
Figure 1.5 from left to right. Then ib = 2 A, because the arrow assigned
to ib points from left to right, the direction of 2 C/s net charge transport. If,
on the other hand, the flow from left to right is −2 C/s (same as 2 C/s from
right to left), then ib = −2 A.
Example 1.10
In Figure 1.5, ic = 3 A indicates that the amount of net charge transported
through element c from top to bottom (the direction indicated by the arrow
accompanying ic ) is 3 C/s. If the arrow direction were reversed, the same
3 C/s net flow from top to bottom would be described by ic = −3 A.
Electrical current is a measure of net charge flow. Let’s see what this really means:
If, for instance, equal numbers of positive and negative charge carriers were to move
in the same direction every second,[4] then there would be no net charge transport and
the electrical current would be zero. Net charge transport requires unequal numbers of
positive and negative charge carriers moving in the same direction (per unit time) or
carriers with opposite signs moving in different directions. For instance, ic = 3 A in
Figure 1.5 could be due to top-to-bottom movement of only positive charge carriers,
bottom-to-top transport of only negative charge carriers, or a combination of these
two possibilities, such as negative carriers moving from bottom to top at a −2 C/s
rate simultaneously with a 1 C/s transport of positive carriers from top to bottom. An
element current indicates not how charge transport occurs through the element, but
what the net charge flow is.
In Figure 1.5 elements a and b are positioned in series on the same circuit branch and therefore constitute parts of the same charge flow path (elements in series). Since the assigned flow directions of ia and ib agree, it follows that ia = ib .
4. This movement is as in the interior of the sun, where free electrons and protons of the solar plasma move at equal rates in the direction perpendicular to solar magnetic field lines in response to electric fields.
Example 1.11
Given that ib = 2 A in Figure 1.5, determine ia .
Solution Since the flow directions of ia and ib are the same and since
elements a and b are in series, it follows that ia = ib = 2 A.
Example 1.12
Describe ic + id in Figure 1.5.
Solution In Figure 1.5, ic + id = 3 A + id is the net amount of charge
transported from the top of the circuit to the bottom per unit time, since the
direction arrows of both ic and id point from top to bottom.
Absorbed power
In Figure 1.6, vb = 4 V is a voltage drop in the direction of element current ib = 2 A
from left to right. Therefore, each coulomb of net charge moving through element b
loses 4 J of energy, and since 2 C move every second, the product 4 V × 2 A = 8 W
stands for the total energy loss of charge carriers passing through the element per
unit time. We will refer to this product as the absorbed power for element b, since
the energy loss of charge carriers is the energy gain of the element, according to the
principle of energy conservation.
Figure 1.6 The same as Figure 1.2a, but showing the voltage and current
variables defined for each circuit element.
In general, the absorbed power of any two-terminal circuit element is defined as

p ≡ vi,

where v denotes the voltage drop across the element in the direction of element current i.
an element may have a positive or negative numerical value. For instance, with vb =
4 V and ib = 2 A, the absorbed power pb = vb ib = 8 W is positive, indicating that
element b “burns,” or dissipates, 8 J of charge carrier energy every second. However, absorbed power also can be negative. In Figure 1.6, for instance, pb = 8 W and pc = vc ic = (−1 V)(3 A) = −3 W, and since energy conservation requires that the sum of all absorbed powers in a circuit be zero, pa + pb + pc + pd = 0,

or

pa + pd = −5 W,

where

pa = −va ia = −2va

and

pd = vd id = (−1)id .
In the next section we will determine the numerical values of va and id , and confirm
that pa + pd = −5 W. Notice that the absorbed power pa for element a is −va ia
rather than va ia , because va is a voltage rise (rather than a drop) in the direction of
current ia . (See Figure 1.6.)
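The sign bookkeeping above can be captured in a short sketch. The following Python fragment (the function name is ours, not the text's) computes absorbed power p = vi, with the current referenced in the direction of the voltage drop:

```python
def absorbed_power(v_drop, i):
    """Absorbed power p = v*i, with v_drop the voltage drop across the
    element in the direction of current i (negate a voltage rise)."""
    return v_drop * i

# Element b of Figure 1.6: a 4 V drop in the direction of ib = 2 A.
p_b = absorbed_power(4.0, 2.0)    # 8 W dissipated

# Element a: va = 3 V is a rise in the direction of ia = 2 A
# (values deduced later in the chapter), so the drop is -va.
p_a = absorbed_power(-3.0, 2.0)   # -6 W: element a injects energy
```

A positive result means the element draws energy from its charge carriers; a negative result means it injects energy into the circuit.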
Section 1.2 Kirchhoff’s Voltage and Current Laws: KVL and KCL
The values of va and id in Figure 1.6 can be determined with the aid of Kirchhoff’s
voltage and current laws (KVL and KCL), reviewed next. These laws, which are
the basic axioms of circuit theory, correspond to principles of energy and charge
conservation expressed in terms of element voltages and currents.
Kirchhoff’s voltage law: Around any closed loop in a circuit,

Σ vrise = Σ vdrop
Translating this axiom into words, KVL demands that the sum of all voltage rises
encountered around any closed loop of elements in a circuit equals the sum of all
voltage drops encountered around the same loop.
In applying this rule, you should remember that each element voltage can be
interpreted as a rise or a drop, depending on the transit direction across the element;
in constructing a KVL equation, the voltage of each element should be added to only
one side of the equation, depending on whether the loop traverses the element from
minus to plus (called a voltage rise) or from plus to minus (called a voltage drop), as
illustrated next:
Example 1.13
We traverse Loop 1 of Figure 1.7(a) in the clockwise direction and obtain
the KVL equation
va = vb + vc ,
since in the clockwise direction va appears as a voltage rise (we rise from
the − to the + terminal as we traverse element a) and vb and vc appear as
voltage drops (we drop from the + to − terminals in each case). Substituting
the values for vb and vc gives
va = 4 V + (−1) V = 3 V.
Notice that this result does not depend on the direction of travel around the loop. Traversing Loop 1 in the counterclockwise direction yields the same equation,

vc + vb = va .

Likewise, KVL around Loop 2, which contains the parallel elements c and d, gives

vc = vd .
Figure 1.7 Figure 1.6 redrawn (a) without the element currents, (b) without the element voltages,
and (c) after application of KVL and KCL.
Finally, KVL around the outer loop of Figure 1.7(a) gives vb + vd = va .
Kirchhoff’s current law: At any node in a circuit, Σ iin = Σ iout
In plain words, KCL demands that the sum of all the currents flowing into a node
equals the sum of all the currents flowing out.
In applying this rule, we just need to pay attention to the arrows indicating the
flow directions of the element currents present at each node.
Example 1.14
KCL applied to Node 2 of Figure 1.7(b) states that
ib = ic + id ,
since ib flows into the node, while both ic and id flow out. Likewise, the
KCL equation for Node 1 is
ia = ib .
Finally,

ia = ic + id ,

which is the KCL equation for the node at the bottom of the diagram. Now,
with ib = 2 A and ic = 3 A, the preceding KCL equations indicate that
ia = ib = 2 A and id = ib − ic = 2 A − 3 A = −1 A.
In Figure 1.7(c) we redraw Figure 1.6, showing the numerical values of va and
id that we have deduced in earlier examples. From the figure, we see that
pd = vd id = 1 W
and
pa = −va ia = −6 W.
Therefore,
pa + pd = −5 W, as anticipated in Section 1.1.
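The completed element values of Figure 1.7(c) can be cross-checked in a few lines; this sketch (ours, not the text's) recomputes each absorbed power, negating va because it is a rise in the direction of ia, and confirms that the four powers sum to zero as energy conservation requires:

```python
# (voltage, current, True if the voltage is a drop in the current direction)
elements = {
    "a": (3.0, 2.0, False),   # va = 3 V is a rise in the direction of ia
    "b": (4.0, 2.0, True),
    "c": (-1.0, 3.0, True),
    "d": (-1.0, -1.0, True),
}

powers = {name: (v if is_drop else -v) * i
          for name, (v, i, is_drop) in elements.items()}
total = sum(powers.values())   # 0 W, by conservation of energy
```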
Section 1.3 Ideal Circuit Elements and Simple Circuit Analysis Examples

Circuit models of electrical systems and multiterminal electrical devices can be represented by a small number of idealized two-terminal circuit elements, which will be
reviewed in this section. These elements have been defined to account for energy
dissipation (resistor), injection (independent and dependent sources), and storage
(capacitor and inductor) and are distinguished from one another by unique voltage–
current, or v–i, relations.
Ideal resistor: An ideal resistor is a two-terminal circuit element having the v–i
relation
v = Ri
known as Ohm’s law. In this relation, R ≥ 0 is a constant known as the resistance, and v denotes the voltage drop in the direction of current i, as shown in Figure 1.8. Resistance R is measured in units of V/A = Ohms (Ω).
Resistors cannot inject energy into circuits, because Ohm’s law v = Ri and the
absorbed power formula p = vi imply that, for a resistor,
p = i²R = v²/R ≥ 0,
Figure 1.8 The circuit symbol for an ideal resistor.
since R ≥ 0. Resistors primarily are used to represent energy sinks in circuit models
of electrical systems.5 The zigzag circuit symbol of the resistor is a reminder of the
frictional energy loss of charge carriers moving through a resistor.
Example 1.15
Figure 1.9a shows a resistor carrying a 2 A current in the direction of a
4 V drop. Ohm’s law v = Ri indicates that the corresponding resistance
value is
R = v/i = 4 V / 2 A = 2 Ω.
[Figure 1.9: (a) a resistor with a 2 A current in the direction of a 4 V drop; (b) a resistor with a 6 V drop and a labeled current of −1.5 A referenced opposite the drop; (c) a 4 Ω resistor with a 2 V drop.]
Example 1.16
For the resistor shown in Figure 1.9b, and using Ohm’s law,
R = v/i = 6 V / 1.5 A = 4 Ω.
Notice that to calculate R we used i = 1.5 A rather than −1.5 A, because the element current in the direction of the 6 V drop is 1.5 A; Ohm's law requires i to be the current in the direction of the voltage drop.
Example 1.17
For the resistor shown in Figure 1.9c, the element current is
i = v/R = 2 V / 4 Ω = 0.5 A.
5. Negative resistance can be used to model certain energy sources. We will first encounter negative resistances at the end of Chapter 2 when we study the Thevenin and Norton equivalent circuits.
The special cases of an ideal resistor with R = 0 and R = ∞ are known as short-
circuit and open-circuit elements, or short and open, respectively. Their special circuit
symbols are shown in Figure 1.10.
[Figure 1.10: circuit symbols for the short (v = 0) and open (i = 0) elements. Figure 1.11: circuit symbol for an ideal voltage source vs carrying a current i.]
Example 1.18
For the circuit in Figure 1.12a with a 4 V voltage source, KVL and Ohm’s
law imply that
(8 Ω) × iR = 4 V;

so

iR = 4 V / 8 Ω = 0.5 A.
6. This is because in a short (R = 0) there is zero friction, while in an open (R = ∞) charge transport, and thus energy dissipation, cannot take place at all.
7. More precisely, the plus and minus indicate that the potential difference between the plus and minus terminals is vs . They do not indicate that the potential at the plus terminal is higher than the potential at the minus terminal. In other words, vs may be positive or negative.
[Figure 1.12: (a) a 4 V source driving an 8 Ω resistor that carries iR , with source current i; (b) a single-loop circuit with a 4 V source, 2 Ω and 3 Ω resistors, and a 2 V source, all carrying the clockwise current i.]
Since KCL requires that i = iR (for the given reference directions of i and
iR ), the current of the 4 V source is i = 0.5 A.
Example 1.19
For the circuit shown in Figure 1.12b, a single current variable i is sufficient
to represent all the element currents (indicated as a clockwise flow), since
all four elements of the circuit are in series. In terms of i, KVL for the
circuit is
4 = 2i + 3i + 2.
Therefore,
i = (4 − 2)/(2 + 3) = 2/5 = 0.4 A.
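Examples 1.18 and 1.19 are instances of the same pattern: in a single loop, KVL reduces to (sum of source rises around the loop) = (sum of resistances) × i. A small sketch, with a helper name of our own choosing:

```python
def single_loop_current(source_rises, resistances):
    """Loop current of a single-loop circuit: KVL gives
    sum(source rises around the loop) = sum(R) * i."""
    return sum(source_rises) / sum(resistances)

# Figure 1.12a: one 4 V source and an 8 ohm resistor.
i_a = single_loop_current([4.0], [8.0])             # 0.5 A

# Figure 1.12b: the 4 V source aids the loop, the 2 V source opposes it.
i_b = single_loop_current([4.0, -2.0], [2.0, 3.0])  # 0.4 A
```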
[Figure 1.13: circuit symbol for an ideal current source is with terminal voltage v.]
8. The current may have a positive value, in which case positive charges flow in the direction of the arrow (or negative charges flow opposite the direction of the arrow), or it may have a negative value, in which case positive charges flow opposite the direction of the arrow.
Example 1.20
For the circuit in Figure 1.14a, with the 3 A current source, KCL and Ohm’s
law imply that
3 A = vR / 5 Ω,

since vR /5 is the resistor current flowing from top to bottom. Hence,

vR = 3 A × 5 Ω = 15 V.
[Figure 1.14: (a) a 3 A current source with terminal voltage v driving a 5 Ω resistor with voltage vR ; (b) a 4 A source, a 2 Ω resistor, a 1 Ω resistor, and a 1 A source connected in parallel, sharing the voltage v.]
Example 1.21
In Figure 1.14b the short at the top of the circuit, enclosed within the dashed
oval, can be regarded as a single node, because the potential difference
between any two points along a short is zero. Likewise, the bottom of the
circuit can be regarded as a second node. The voltage drop in the circuit from
top to bottom is denoted as v, which can be regarded as the element voltage
of all four components of the circuit connected in parallel. Consequently, the 2 Ω and 1 Ω resistors conduct v/2 and v/1 ampere currents from top to bottom, respectively. Thus, the KCL equation for the top node can be written as

4 = v/2 + v/1 + 1,

from which we find

v = (4 − 1)/(1/2 + 1/1) = 3/1.5 = 2 V.
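Example 1.21 follows an equally reusable pattern: for a single node pair, KCL reduces to (net source current in) = v × (sum of conductances). A sketch under that assumption (helper name ours):

```python
def parallel_node_voltage(net_source_current, resistances):
    """Voltage across parallel resistors fed by current sources:
    I_net = v * (1/R1 + 1/R2 + ...)."""
    g_total = sum(1.0 / r for r in resistances)
    return net_source_current / g_total

# Figure 1.14b: 4 A in, 1 A out, with 2 ohm and 1 ohm resistors.
v = parallel_node_voltage(4.0 - 1.0, [2.0, 1.0])  # 2 V
```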
[Figure 1.15: circuit symbols for dependent voltage and current sources, with source value vs or is set by a controlling voltage or current elsewhere in the circuit.]
The values of dependent (controlled) sources are specified in terms of a voltage or current elsewhere in the circuit:

vs = Avx or Biy ,

is = Cvx or Diy ,

where A and D are dimensionless constants and B and C are constants with units of Ω and Ω⁻¹ ≡ Siemens (S), respectively. A dependent voltage source specified as

vs = Biy ,

for instance, is controlled by the current iy and is called a current-controlled voltage source.
Example 1.22
Figure 1.16 shows a circuit with a voltage-controlled current source
is = 2vx ,
Figure 1.16 A circuit with a dependent current source. Voltage vx regulates the
current supplied by the dependent current source into the circuit.
Combining the KCL equation for the node where vx is defined,

i1 + 2vx = vx /2

(the dependent source current 2vx and the resistor current i1 flow into the node, while vx /2 flows out through the 2 Ω resistor), with

4 = 2i1 + vx

(which is the KVL equation of the loop on the left), we find the following solution:

vx = −2 V

i1 = 3 A.
Example 1.23
Figure 1.17 shows a circuit with a current-controlled voltage source

vd = −(1/2)ib ,

where ib is the current through a 2 Ω resistor. The KVL equation around the outer loop of the circuit is

3 = 2ib + (−(1/2)ib ),

from which it follows that

ib = 2 A.

Hence,

vd = −(1/2)ib = −1 V

and

vb = 2ib = 4 V.
[Figure 1.17: a 3 V source driving a 2 Ω resistor (current ib , voltage vb ), a 3 A current source, and a dependent voltage source vd = −(1/2)ib carrying current id .]
KCL at the node joining the 2 Ω resistor, the 3 A source, and the dependent source requires

ib = 3 + id ;

thus,

id = ib − 3 = 2 − 3 = −1 A.
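Because the dependent source is linear in ib, Example 1.23 still reduces to ordinary algebra; the chain of substitutions can be checked directly (a sketch of ours, not the text's):

```python
# Outer-loop KVL of Figure 1.17: 3 = 2*ib + vd with vd = -(1/2)*ib,
# so 3 = (2 - 0.5)*ib.
ib = 3.0 / (2.0 - 0.5)   # 2 A
vd = -0.5 * ib           # -1 V
vb = 2.0 * ib            # 4 V
id_ = ib - 3.0           # -1 A, from the KCL equation ib = 3 + id
```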
Capacitor and inductor: Ideal capacitors and inductors are two-terminal circuit
elements with v–i relations

i = C dv/dt (capacitor)

and

v = L di/dt (inductor),
where
C ≥ 0 and L ≥ 0
are constants known as capacitance and inductance, respectively, and v denotes the
voltage drop in the direction of current i, as shown in Figures 1.18a and 1.18b. The
capacitance C is measured in units of A s/V = Farads (F) and the inductance L in
units of V s/A = Henries (H).
Figure 1.18 Circuit symbols for ideal capacitor (a) and inductor (b).
In these v–i relations, dv/dt and di/dt denote the time derivatives of v and i, respectively. In circuits with time-independent voltages and currents, the derivatives

dv/dt = 0 and di/dt = 0,
and, consequently, capacitors and inductors behave as open (i.e., zero capacitor
current) and short (i.e., zero inductor voltage) circuits, respectively. We refer to such
circuits as DC circuits (DC for “direct current,” meaning constant current) and use the term AC (AC for “alternating current”) to describe circuits with time-varying voltages and currents. Capacitors and inductors play nontrivial roles only in AC circuits, where they respond differently than opens and shorts.
In AC circuits, capacitors and inductors represent energy storage elements, since they can re-inject the energy absorbed from charge carriers back into carrier motions.
For instance, for a capacitor with capacitance C, the absorbed power is
p = vi = vC dv/dt = d[(1/2)Cv²]/dt,

and

w ≡ (1/2)Cv²
stands for the net absorbed energy of the capacitor, in units of J. When the absorbed
power p = vi = dw/dt is positive, w increases with time so that the capacitor is drawing energy from the circuit. Conversely, a negative p = vi = dw/dt is associated with a decrease in w, that is, energy injection back into the circuit. This indicates that a capacitor does not dissipate its absorbed energy, but rather stores it in a form available for subsequent release when vi turns negative. We therefore will refer to w = (1/2)Cv² as the stored energy of the capacitor. It can likewise be argued that the stored energy of an inductor L carrying a current i is

w = (1/2)Li².
9. The amount of charge stored on a capacitor plate is given as q = Cv, since the time-rate of change of q, namely, the derivative dq/dt = C dv/dt, matches the capacitor current i = C dv/dt.
10. An inductor with current i generates a flux linkage of λ = Li, and the time-rate of change of λ, namely, dλ/dt = L di/dt, corresponds to the inductor voltage v = L di/dt. For a helical inductor coil, the flux linkage λ is the product of the magnetic flux φ generated by current i and the number of turns of the helix.
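The claim that absorbed power equals the rate of change of stored energy can be checked numerically for any smooth v(t); in this sketch (the waveform and component value are arbitrary choices of ours) the product vi = vC dv/dt is compared against a finite-difference derivative of w = (1/2)Cv²:

```python
import math

C = 2e-6                                  # a hypothetical 2 uF capacitor
def v(t):
    return 5.0 * math.sin(100.0 * t)      # arbitrary smooth test voltage

def w(t):
    return 0.5 * C * v(t) ** 2            # stored energy (1/2) C v^2

t0, h = 0.003, 1e-7
dv_dt = (v(t0 + h) - v(t0 - h)) / (2 * h)     # central difference
p = v(t0) * (C * dv_dt)                       # p = v * i, with i = C dv/dt
dw_dt = (w(t0 + h) - w(t0 - h)) / (2 * h)     # rate of change of stored energy
# p and dw_dt agree to within finite-difference error
```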
The analysis of resistive DC circuits—the main topic of Chapter 2—requires the use
of only real numbers and real algebra. By contrast, calculating AC or time-varying
response of circuits with capacitors and inductors may require solving differential
equations, which often can be simplified with the use of complex numbers. We
will first encounter complex numbers in Chapter 3, where we solve simple first-
order differential equations describing RC and RL circuits with sinusoidal inputs.
Complex numbers also will arise in the solution of second and higher order differential
equations, irrespective of the input. Starting with Chapter 4, we will rely on complex
arithmetic to provide an efficient solution to AC circuit problems where the signals
are sinusoidal. More advanced circuit and system analysis methods (Fourier series,
Fourier transform, and Laplace transform), discussed in succeeding chapters, will
require a solid understanding of complex numbers, complex functions, and complex
variables.
A review of complex numbers and arithmetic is provided in Appendix A at the
end of this book. A short introduction to complex functions and complex variables is
included as well. You should read Appendix A and then work the complex number
exercises at the end of this chapter and Chapter 2 well before entering Chapters 3
and 4. It is important that you feel comfortable with complex addition, subtraction,
multiplication, and division; understand the conversions between rectangular, polar,
and exponential forms of complex numbers; and understand the relationship between
trigonometric functions such as cos(ωt) and complex exponentials e^(jωt) . In particular,
Euler’s identity and its implications will play a crucial role in Chapter 4 and the rest
of this book.
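As a warm-up for that review, the conversions and Euler's identity are easy to verify numerically; Python's built-in complex type (with j written as 1j) is convenient for this, though any complex-arithmetic tool will do:

```python
import cmath
import math

z = 3.0 + 4.0j                       # rectangular form
r, theta = abs(z), cmath.phase(z)    # polar form (magnitude, angle)
z_back = r * cmath.exp(1j * theta)   # exponential form r e^{j theta}

# Euler's identity: e^{j theta} = cos(theta) + j sin(theta)
theta0 = math.pi / 3
lhs = cmath.exp(1j * theta0)
rhs = complex(math.cos(theta0), math.sin(theta0))
```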
EXERCISES
[Circuit diagrams for Exercises 1.1 (a source vs , 2 Ω and 4 Ω resistors, and a 4 V element) and 1.2 (a resistor R, a 2 V element, a 6 V source, a 2 Ω resistor, and a 4 A source).]
1.3 (a) In the following circuit, determine all of the unknown element and node
voltages:
[Circuit diagram for Exercise 1.3: node voltages v1 –v4 and a grounded reference; a 2 V source and an element with voltage vc along the top; 2 Ω, 4 Ω, 1 Ω, and 2 Ω resistors; a 1/3 A source and a 3 A source; element voltages va , vb , vd , ve .]
(b) What is the voltage drop in the above circuit from the reference to
node 4?
1.4 (a) A volume of ionized gas filled with free electrons and protons can be
modeled as a resistor. Consider such a resistor model supporting a 6
V potential difference between its terminals. We are told that in 1 s,
6.2422 × 1018 protons move through the resistor in the direction of the
6 V drop (say, from left to right) and 1.24844 × 1019 electrons move
in the opposite direction. What is the net amount of electrical charge
that transits the element in 1 s in the direction of the 6 V drop and
what is the corresponding resistance R? Note that electrical charge q is
1.602 × 10−19 C for a proton and −1.602 × 10−19 C for an electron.
(b) Does a proton gain or lose energy as it transits the resistor? How many
joules? Explain.
(c) Does an electron gain or lose energy as it transits the resistor? How
many joules? Explain.
1.5 In the circuit pictured here, one of the independent voltage sources is injecting
energy into the circuit, while the other one is absorbing energy. Identify the
source that is injecting the energy absorbed in the circuit and confirm that
the sum of all absorbed powers equals zero.
[Circuit diagram for Exercise 1.5: a 6 V source and a 4 V source joined by two 1 Ω resistors, with current i.]
1.6 In the circuit pictured here, one of the independent current sources is injecting
energy into the circuit, while the other one is absorbing energy. Identify the
source that is injecting the energy absorbed in the circuit and confirm that
the sum of all absorbed powers equals zero.
[Circuit diagram for Exercise 1.6: a 3 A source, a 1 Ω resistor, and a 1 A source in parallel.]
1.7 Calculate the absorbed power for each element in the following circuit and
determine which elements inject energy into the circuit:
[Circuit diagram for Exercise 1.7: a 3 A source, a 1 Ω resistor, and a −1 A source in parallel.]
1.8 In the circuit given, determine ix and calculate the absorbed power for each
circuit element. Which element is injecting the energy absorbed in the circuit?
[Circuit diagram for Exercise 1.8: a 2 V source, a 1 Ω resistor carrying ix , and a dependent source 5ix .]
1.9 In the circuit given, determine vx and calculate the absorbed power for each
circuit element. Which element is injecting the energy absorbed in the circuit?
[Circuit diagram for Exercise 1.9: a 6 A source, a 2 Ω resistor with voltage vx , and a dependent source 2vx .]
1.10 Some of the circuits shown next violate KVL or KCL and/or basic definitions
of two-terminal elements given in Section 1.3. Identify these ill-specified
circuits and explain the problem in each case.
[Circuit diagrams (a)–(e) for Exercise 1.10, each combining sources (2 V, 3 V, 4 A, 2 A, 3 A, 6 V) and resistors (1 Ω, 2 Ω).]
1.17 Show graphically on the complex plane that |C1 + C2 | ≤ |C1 | + |C2 |.
1.18 (a) The function f (t) = e^(jπt/4) , for real-valued t, takes on complex values. Plot the values of f (t) on the complex plane, for t = 0, 1, 2, 3, 4, 5, 6, and 7.
(b) Repeat (a), but for the complex-valued function g(t) = (1/2)e^((−1/8 + jπ/4)t) .
Chapter 2 Analysis of Linear Resistive Circuits
Strategies for circuit simplification; node-voltage and loop-current methods; linearity and superposition; coupling and available power of resistive networks

Most analog signal processing systems are built with electrical circuits. Thus, the analysis and design of signal processing systems requires proficiency in circuit analysis, meaning the calculation of voltages and currents (or voltage and current waveforms) at various locations in a circuit. In this chapter we will describe a number of analysis techniques applicable to linear resistive circuits composed of resistors and some combination of independent and dependent sources. The techniques developed here will be applied in Chapter 3 to circuits containing operational amplifiers, capacitors, and inductors that can be used for signal processing purposes. Later, our basic techniques introduced here will be further developed in Chapter 4 for the analysis of linear circuits containing sinusoidal sources.
The topics to be covered in this chapter include resistor combinations and source transformations (analysis via circuit simplification), node-voltage and loop-current methods (systematic applications of KCL and KVL), and Thevenin and Norton equivalents of linear resistive networks and their interactions with external loads.
[Figure 2.1: (a) a resistive circuit in which resistors R1 and R2 carry the same current i1 and resistors Ra and Rb support the same voltage va ; (b) the same circuit redrawn with the series equivalent R1 + R2 and the parallel equivalent Ra Rb /(Ra + Rb ).]
Section 2.1 Resistor Combinations and Source Transformations

In Figure 2.1a, resistors R1 and R2 carry the same current i1 , so the total voltage drop across the pair is

R1 i1 + R2 i1 = (R1 + R2 )i1 ≡ Rs i1 .

Thus,

Rs = R1 + R2

is the series equivalent of resistors R1 and R2 , and replaces them in the simplified version of the circuit shown in Figure 2.1b. In general, the series equivalent of N resistors R1 , R2 , · · ·, RN (all carrying the same current on a single circuit branch) is

Rs = R1 + R2 + · · · + RN .
Likewise, we note the total resistance of the two parallel branches on the right of Figure 2.1a as Ra Rb /(Ra + Rb ), because the total current conducted from top to bottom through these branches is

va /Ra + va /Rb = va (1/Ra + 1/Rb ) ≡ va /Rp .
In general, the parallel equivalent of N resistors R1 , R2 , · · ·, RN (all supporting the same voltage between the same pair of nodes) is

Rp = 1/(1/R1 + 1/R2 + · · · + 1/RN ).
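The two combination rules translate directly into code; a sketch with helper names of our own choosing:

```python
def series(*resistors):
    """Series equivalent: Rs = R1 + R2 + ... + RN."""
    return sum(resistors)

def parallel(*resistors):
    """Parallel equivalent: Rp = 1 / (1/R1 + 1/R2 + ... + 1/RN)."""
    return 1.0 / sum(1.0 / r for r in resistors)

rs = series(1.0, 5.0)                  # 6 ohms, as in Example 2.1
rp = parallel(3.0, 6.0)                # 2 ohms
rp2 = parallel(series(1.0, 5.0), 3.0)  # combinations nest naturally
```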
Example 2.1
Figure 2.2a replicates Figure 2.1a, but with numerical values for the resis-
tors and the source. We next will solve for the unknown current i supplied
by the source, using the technique of resistor combinations.
A simplified version of the circuit after series and parallel resistor combinations is shown in Figure 2.2b, where the series combination is

Rs = 1 Ω + 5 Ω = 6 Ω

and the 3 Ω and 6 Ω resistors have been replaced by their parallel equivalent (3 × 6)/(3 + 6) Ω = 2 Ω. In Figure 2.2c, the 6 Ω and 2 Ω resistors of Figure 2.2b are in turn replaced by their parallel equivalent (6 × 2)/(6 + 2) Ω = 1.5 Ω.
Figure 2.2 A resistive circuit (a) and its equivalents (b), (c), (d).
Finally, in Figure 2.2c we notice that the 0.5 Ω and 1.5 Ω resistors are in series and replace them by their series equivalent

Rs = 0.5 Ω + 1.5 Ω = 2 Ω,

leading to Figure 2.2d, from which

i = 4 V / 2 Ω = 2 A.
Suppose we had wanted to solve for va in Figure 2.2a. Our analysis then could
have stopped with Figure 2.2c, which shows two resistors in series. This is a special
case of the circuit shown in Figure 2.3a. Here we see that

i = vs /Rs = vs /(R1 + R2 ),

and, therefore,

v1 = vs R1 /(R1 + R2 )

and

v2 = vs R2 /(R1 + R2 ).
These voltage division equations tell us how the total voltage vs across two resistors in series is divided between the elements.
Figure 2.3 Voltage (a) and current (b) division.
Example 2.2
Apply voltage division to find va in Figure 2.2c.
Solution By voltage division,

va = [1.5/(0.5 + 1.5)] × 4 V = (6/2) V = 3 V.
For the parallel circuit of Figure 2.3b, likewise,

v = is Rp = is R1 R2 /(R1 + R2 ),

and

i1 = v/R1 ,  i2 = v/R2 .
Therefore,

i1 = is R2 /(R1 + R2 )

and

i2 = is R1 /(R1 + R2 ).
These current-division equations tell us how the total current is , conducted by two resistors in parallel, is split between them.
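Both division rules fit in two one-line helpers (names ours), checked here against Examples 2.2 and 2.3; note the easy-to-miss asymmetry that in current division the other resistance appears in the numerator:

```python
def voltage_division(vs, r_target, r_other):
    """Voltage across r_target when it shares vs in series with r_other."""
    return vs * r_target / (r_target + r_other)

def current_division(i_s, r_target, r_other):
    """Current through r_target when it is in parallel with r_other;
    the OTHER resistance appears in the numerator."""
    return i_s * r_other / (r_target + r_other)

va = voltage_division(4.0, 1.5, 0.5)   # 3 V   (Example 2.2)
ia = current_division(2.0, 2.0, 6.0)   # 1.5 A (Example 2.3)
```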
Example 2.3
Apply current division to find ia in Figure 2.2b.
Solution The parallel resistors carry a total current of i =2 A. (See Example 2.1.)
Thus, using current division, we obtain
ia = [6/(6 + 2)] × 2 A = (12/8) A = 1.5 A.
Figure 2.4 Two networks that are equivalent when vs = Rs is . The term
“network” refers to a circuit block with two or more external terminals.
The source–resistor combinations of Figure 2.4 present identical terminal behavior whenever

vs = Rs is  ⇔  is = vs /Rs .
In this case the source-resistor combinations are said to be equivalent, because they
produce the same result when attached to a third circuit.
Proof: To prove that the source combinations in Figure 2.4 are equivalent
for vs = Rs is , we will find the expression for v in both circuits, in terms of
the terminal current i (assumed to be carried by an external element or circuit
connected between terminals a and b). Applying KVL in Figure 2.4a, we first
find that vs = Rs i + v, or
v = vs − Rs i.

In Figure 2.4b, KCL at the top terminal gives is = v/Rs + i, so that

v = Rs is − Rs i.
Clearly, these expressions are identical for vs = Rs is , in which case the two
networks are equivalent because they apply the same voltage and inject the
same current into an element or circuit attached to terminals a and b.
The next examples illustrate application of the source transformation method
based on the equivalence of the networks shown in Figure 2.4 under the condition
vs = Rs is .
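The equivalence in the proof can also be checked numerically: for any terminal current i, the two networks of Figure 2.4 should present the same terminal voltage whenever vs = Rs is. A sketch (function names ours):

```python
def v_voltage_source_form(vs, rs, i):
    """Terminal voltage of vs in series with rs: v = vs - rs*i."""
    return vs - rs * i

def v_current_source_form(i_s, rs, i):
    """Terminal voltage of i_s in parallel with rs: v = rs*(i_s - i)."""
    return rs * (i_s - i)

vs, rs = 4.0, 0.5
i_s = vs / rs   # 8 A, the matching current source
diffs = [v_voltage_source_form(vs, rs, i) - v_current_source_form(i_s, rs, i)
         for i in (-2.0, 0.0, 1.0, 3.0)]   # identical for every i
```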
[Figure 2.5: (a) the circuit of Figure 2.2a redrawn, with the 4 V source in series with the 0.5 Ω resistor; (b) the same circuit after a source transformation replaces that combination with an 8 A current source in parallel with 0.5 Ω.]
Example 2.4
Our goal is to find i1 in Figure 2.5a.
We begin by exchanging the 4 V source in series with the 0.5 Ω resistor for a

4 V / 0.5 Ω = 8 A

current source in parallel with a 0.5 Ω resistor. This exchange is permissible, as proven earlier, and is called a “source transformation.” After the source transformation, the circuit in Figure 2.5a becomes the circuit shown in Figure 2.5b.
In this modified circuit, the current i1 through the 6 Ω branch on the left can be easily calculated by current division. Since the parallel equivalent of the three resistors on the right is 1/2.5 Ω, we find that

i1 = 8 A × (1/2.5)/(1/2.5 + 6) = 8/(1 + 15) A = 0.5 A.
Example 2.5
In Figure 2.6 successive uses of source transformations and resistor combi-
nations are illustrated to determine the unknown voltage voc . The simplified
and equivalent versions of Figure 2.6a shown in Figures 2.6e and 2.6f are
known as Thevenin and Norton equivalents (to be studied in Section 2.4).
From either Figure 2.6e or 2.6f, we clearly see that
voc = 1 V.
Figure 2.6 Simplification of circuit (a) by source transformations (a→b, c→d, and e→f) and resistor
combinations (b→c and d→e) to its Thevenin (e) and Norton (f) equivalents. Note that in step b→c two
current sources in parallel are combined into a single current source.
Section 2.2 Node-Voltage Method

Example 2.6
Consider the circuit shown in Figure 2.7. The reference node is indicated
by the ground symbol; v1 and v2 denote two unknown node voltages; and
node voltage v3 = 3 V has been directly identified, since there is an explicit
3 V rise from the reference to node 3 provided by an independent voltage
source.
Following step 2 of the node-voltage method, we next construct KCL
equations for nodes 1 and 2 where v1 and v2 were declared:
[Figure 2.7: a 2 A source into node 1, a 2 Ω resistor between nodes 1 and 2, a 1 Ω resistor between nodes 2 and 3, a 4 Ω resistor from node 2 to the reference, and a 3 V source fixing v3 = 3 V.]
v1 − v2 = 4

−2v1 + 7v2 = 12.

Solving this pair of equations, we find

v1 = 8 V

v2 = 4 V.
We can calculate the element voltages and currents in the circuit by using
these node voltages and v3 = 3 V.
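Node-voltage analysis always ends in a small linear system; for two unknowns, Cramer's rule suffices. A sketch solving Example 2.6's pair of equations (helper name ours):

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve a11*x + a12*y = b1 and a21*x + a22*y = b2 (Cramer's rule)."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Example 2.6:  v1 - v2 = 4  and  -2*v1 + 7*v2 = 12.
v1, v2 = solve_2x2(1.0, -1.0, 4.0, -2.0, 7.0, 12.0)   # (8.0, 4.0)
```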
Example 2.7
The circuit shown in Figure 2.8 contains a voltage controlled current source
and three unknown node voltages marked as v1 , v2 , and v3 . We can apply
the node voltage method to the circuit by writing 3 KCL equations in which
each occurrence of voltage vx controlling the dependent current source is
expressed in terms of node voltage v3 .
However, application of simple voltage division on the right side of the
circuit also shows that v3 = v2 /2, as already marked on the circuit diagram.
Figure 2.8 Another node-voltage example. Note that voltage vx , which controls
the dependent current source on the left, also can be written as node voltage v3 .
The two KCL equations reduce to

v1 + v2 = 0

v1 − 3v2 = −6,

with solution

v1 = −3 V

v2 = 3 V.

Also, v3 = v2 /2 = 1.5 V.
Example 2.7 illustrated how controlled sources are handled in the node-voltage
method. Their contributions to the required KCL equations are entered after their
values are expressed in terms of the node-voltage unknowns of the problem. The
example also illustrated that if any of the unknown node voltages can be expressed
readily in terms of other node voltages, the number of KCL equations to be formed
can be reduced. The next example also illustrates such a reduction.
Example 2.8
Consider the circuit shown in Figure 2.9. In this circuit v1 = 2 V has been
directly identified, v2 has been declared as an unknown node voltage, and
it has been recognized at the outset that
v3 = v2 + 1
[Figure 2.9: a 2 V source fixing v1 = 2 V, a 2 Ω resistor between nodes 1 and 2, a 1 V source between nodes 2 and 3 (so that v3 = v2 + 1), a 2 A source, and 1 Ω and 2 Ω resistors to the reference; a dashed oval encloses nodes 2 and 3 as a super-node, and ix is the current into the 1 V source branch.]
To solve for the unknown node voltage v2 , we need to write the KCL
equation for node 2. This equation can be expressed as
(v2 − 2)/2 + (v2 − 0)/1 + ix = 0,

where we make use of a current variable ix , which, according to the KCL equation written for node 3, is given by

ix = 2 + ((v2 + 1) − 0)/2.

Eliminating ix between the two KCL equations, we obtain

(v2 − 2)/2 + (v2 − 0)/1 + 2 + ((v2 + 1) − 0)/2 = 0,
which is, in fact, a single KCL equation for a super-node in Figure 2.9,
indicated by the dashed oval. (See the subsequent discussions of the super-
node “trick.”) Solving the equation, we obtain
v2 = −0.75 V.
Furthermore,
v3 = v2 + 1 = 0.25 V.
The expression that we previously called the “super-node” KCL equation states
that the sum of all currents out of the region in Figure 2.10, surrounded by the dashed
oval, equals zero. Since there is no current flow into this region, the statement has
the form of a KCL equation applied to the dashed oval. Thus, the dashed oval is an example of what is called a “super-node” in the node-voltage technique. Super-nodes
can be defined around ordinary nodes connected by voltage sources, as in the previous
example. When solving small circuits by hand, we find that clever application of the
super-node concept sometimes can shorten the solution. We illustrate in the next
example.
[Figure 2.10: a super-node example: a 4 V source between nodes 1 and 2 (v2 = v1 + 4), a 3 Ω resistor carrying ix between nodes 2 and 3, a dependent voltage source 3ix setting v3 = 3ix , and 1 Ω and 4 Ω resistors; a dashed oval encloses the super-node.]
Example 2.9
Figure 2.10 shows a circuit with a dependent source and a super-node. (See
the dashed oval.) All node voltages in the circuit have been marked in terms
of an unknown node voltage v1 and current ix that controls the dependent
voltage source.
Using Ohm’s law, we first note that current

ix = (v2 − v3 )/3 = ((v1 + 4) − 3ix )/3,

from which we obtain

ix = (v1 + 4)/6
in terms of unknown voltage v1 . We then can express the super-node KCL equation as

v1 /1 + (v1 + 4)/6 = 0,

from which we obtain v1 = −4/7 V. Consequently,

ix = (v1 + 4)/6 = 4/7 A,

and

v3 = 3ix = 12/7 V.
Section 2.3 Loop-Current Method

Example 2.10
In Figure 2.11a we see a single-loop circuit. All the element currents in the
circuit can be represented in terms of a single loop-current variable i as
shown in the diagram. A KVL equation for the loop is
3 V = 2i + 4(2i) + 2 V;
[Figure 2.11: (a) a single-loop circuit with a 3 V source, a 2 Ω resistor with voltage vx , a dependent source 4vx , and a 2 V source; (b) a two-loop circuit with a 5 V source, 2 Ω and 3 Ω resistors, a 1 Ω middle resistor carrying ix , and a −2 V source, with loop currents i1 and i2 .]
1. Planar circuits are circuits that can be diagrammed on a plane with no element crossing another element.
therefore,
i = (3 − 2) V / (2 + 8) Ω = 0.1 A
and
vx = 2i = 0.2 V.
Example 2.11
Figure 2.11b shows a circuit with two elementary loops (loops that contain no further loops within), which have been assigned distinct loop-current variables i1 and i2 . In analogy with the previous problem, we consider i1 to be the element current of the 5 V source and 2 Ω resistor on the left. Likewise, i2 can be considered as the current of the 3 Ω resistor and −2 V source on the right. Furthermore, the current ix through the 1 Ω resistor in the middle can be expressed in terms of loop currents i1 and i2 ; using KCL at the top node of the resistor, we see that

ix = i1 − i2 .

Clearly, all the element currents in the circuit are either directly known or can be calculated once the loop currents i1 and i2 are known. We will determine i1 and i2 by solving two KVL equations constructed around the two elementary loops of the circuit.
The KVL equation for the loop on the left is

5 = 2i1 + 1(i1 − i2 ),

where 1(i1 − i2 ) = 1ix denotes the voltage drop across the 1 Ω resistor from top to bottom. Likewise, for the loop on the right we have

0 = 1(i2 − i1 ) + 3i2 + (−2),

or, equivalently,

1(i2 − i1 ) + 3i2 = 2,

where 1(i2 − i1 ) denotes the voltage drop across the 1 Ω resistor from bottom to top. Rearranging the two equations, we obtain

3i1 − i2 = 5

and

−i1 + 4i2 = 2.
Solving this pair of equations simultaneously, we find that
i1 = 2 A
and
i2 = 1 A.
Consequently,
ix = i1 − i2 = 1 A,
and the voltage drop across the 1 Ω resistor from top to bottom is 1 Ω × ix =
1 V. The remaining element voltages also can be calculated in a similar way
using the loop currents i1 and i2 .
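Hand algebra like this is easy to check by machine. A short Python sketch (ours, not from the text) that solves the two loop equations 3i1 − i2 = 5 and −i1 + 4i2 = 2 by Cramer's rule:

```python
from fractions import Fraction

def solve2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] [x1, x2]^T = [b1, b2]^T by Cramer's rule."""
    det = Fraction(a11 * a22 - a12 * a21)
    if det == 0:
        raise ValueError("singular system")
    x1 = Fraction(b1 * a22 - a12 * b2) / det
    x2 = Fraction(a11 * b2 - b1 * a21) / det
    return x1, x2

# Loop equations of Example 2.11:  3 i1 - i2 = 5  and  -i1 + 4 i2 = 2
i1, i2 = solve2x2(3, -1, -1, 4, 5, 2)
print(i1, i2)  # -> 2 1
```

Exact rational arithmetic avoids any rounding in the check.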
The preceding examples illustrate the main idea behind the loop-current method;
namely, loop-current values are calculated for each elementary loop in the circuit.
Then, if desired, element currents or voltages can be calculated from the loop currents.
A step-by-step loop-current procedure for calculating the loop currents is as follows:

(1) Assign a loop-current variable to each elementary loop of the circuit, recognizing directly any loop currents that are fixed by current sources.
(2) Write a KVL equation around each elementary loop with an unknown loop current, expressing element voltages in terms of the loop currents, and simplify.
(3) Solve the resulting equations for the unknown loop currents.
Example 2.12
Consider the circuit shown in Figure 2.12. The circuit contains three elemen-
tary loops; therefore, there are three loop currents in the circuit. Two of these
have been declared as unknowns i1 and i2 , and the third one has been recog-
nized as i3 = 2 A to match the 2 A current source on the right. Since we have
only two unknown loop currents i1 and i2 , we need to write only two KVL
equations. The KVL equation for loop 1 (where i1 has been declared) is
14 = 2i1 + 3ix .
[Figure 2.12 A three-loop circuit with a 14 V source, a dependent source 3ix, a 2 A current source, 2 Ω, 4 Ω, 1 Ω, and 2 Ω resistors, and loop currents i1, i2, and i3 = 2 A.]
Likewise, the KVL equation for loop 2 (where i2 has been declared) is

3ix = 4i2 + 1ix,

where the current through the 1 Ω resistor is

ix = i2 + 2,
since the direction of loop currents i2 and i3 = 2 A coincide with the direc-
tion of ix . Substituting for ix in the two KVL equations and rearranging
as
2i1 + 3i2 = 8
and
2i2 = 4,
we are finished with the implementation of step 2. Clearly then, the solution
(that is, step 3) is
i2 = 2 A
and
i1 = 1 A.
Example 2.13
Consider the circuit shown in Figure 2.13 with three loop currents declared
as i1 , i2 , and i3 . Nominally, we need three KVL equations, one for each
elementary loop. However, we note that loops 2 and 3 are separated by a
2 A current source and consequently,
i3 = i2 + 2,
[Figure 2.13 circuit: a 2.6 V source, four 1 Ω resistors, a 2 Ω resistor, a 2 A current source separating loops 2 and 3 with voltage vx across it, and loop currents i1, i2, and i3 = i2 + 2.]
Figure 2.13 A loop-current example with a super-loop equation.
so that only i1 and i2 remain as unknown loop currents. The KVL equation around loop 2 is

1i2 + vx + 1(i2 − i1) = 0,
where
vx = 2(i2 + 2) + 1((i2 + 2) − i1 )
is the voltage drop across the 2 A source as determined by writing the KVL
equation around loop 3. Eliminating vx between the two equations, we
obtain

1i2 + 2(i2 + 2) + 1((i2 + 2) − i1) + 1(i2 − i1) = 0,

which is in fact the KVL equation around the dashed contour shown in
Figure 2.13, which can be called a “super loop,” in analogy with the super-
node concept. The KVL equations for loop 1 and the dashed contour (super
loop) simplify to

3i1 − 2i2 = 4.6 and 2i1 − 5i2 = 6,

respectively. Their solution is
i1 = 1 A
i2 = −0.8 A,
and, consequently,
i3 = i2 + 2 = 1.2 A.
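As a check (not part of the text), the computed loop currents can be substituted back into the constraint and the two KVL relations quoted above:

```python
from fractions import Fraction

# Loop currents found in Example 2.13
i1 = Fraction(1)
i2 = Fraction(-4, 5)        # -0.8 A
i3 = i2 + 2                 # current-source constraint between loops 2 and 3

# Voltage across the 2 A source, from the KVL equation around loop 3:
vx = 2 * (i2 + 2) + 1 * ((i2 + 2) - i1)

# KVL around loop 2 must balance to zero:
residual = 1 * i2 + vx + 1 * (i2 - i1)

print(float(i3), float(vx), float(residual))  # -> 1.2 2.6 0.0
```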
Super loops such as the dashed contour in Figure 2.13 can be formed around
elementary loops sharing a common boundary that includes a current source. In such
cases, either of the elementary loop currents can be expressed in terms of the other
(e.g., i3 = i2 + 2 in Example 2.13, earlier) and therefore, instead of writing two KVL
equations around the elementary loops, it suffices to formulate a single KVL equation
around the super loop.
Figure 2.14 Circuit with arbitrary current and voltage sources is and vs: node voltages v1 = vs, v2, and v3; a 4 Ω resistor between nodes 1 and 2; a 2 Ω resistor carrying current ix between nodes 1 and 3; a 1 Ω resistor at node 3; and a dependent source 2ix.
2.4 Linearity, Superposition, and Thevenin and Norton Equivalents

We might wonder how scaling the values of the independent sources in a resistive
circuit will affect the value of some particular voltage or current. For example, will
a doubling of the source values result in a doubling of the voltage across a specified
resistor? More generally, we can ask how circuits respond to their source elements.
By “circuit response,” we refer to all voltages and currents excited in a circuit, and
we consider whether the combined effect of all independent sources is the sum of the
effects of the sources operating individually. We shall reach some general conclusions
by examining these questions within the context of the node-voltage approach.
We first note that the application of KCL in node-voltage analysis always leads
to a set of linear algebraic equations with node-voltage unknowns and forcing terms
proportional to independent source strengths. For instance, for the circuit shown in
Figure 2.14, the KCL equations to be solved are

v1 = vs,
(v2 − v1)/4 + 2 [(v1 − v3)/2] + is = 0,
v3/1 + (v3 − v1)/2 = is,
in which the forcing terms vs and is are due to the independent sources in the circuit.
Dependent sources enter the KCL equations in terms of node voltages,

2ix = 2 (v1 − v3)/2

(as, for instance, in the second of the preceding equations); therefore, dependent
sources do not contribute to the forcing terms. The consequence is that the node-voltage
solutions depend linearly on the source strengths. Indeed, solving the three KCL
equations yields

v1 = vs,
v2 = −(5/3) vs − (4/3) is,
v3 = (1/3) vs + (2/3) is,
which is the solution to the KCL equations for the circuit of Figure 2.14.² Since we
can calculate any voltage or current in a resistive circuit by using a linear combination
of node voltages and/or independent source strengths (as we have seen in Section 2.2),
the following general statement can be made: any voltage v in the circuit of Figure 2.14
can be expressed as

v = K1 vs + K2 is,

where K1 and K2 are constant coefficients (to be determined in the next section). What
this means is that voltage v is a weighted linear superposition of the independent
sources vs and is in the circuit. Likewise, any electrical response y in any resistive
circuit with independent sources f1, f2, · · ·, fN is a weighted linear superposition

y = K1 f1 + K2 f2 + · · · + KN fN

of its independent sources.
Since any response in a resistive circuit can be expressed as a weighted linear
superposition of independent sources, such circuits are said to be linear. We see from
this property that a doubling of all independent sources will indeed double all circuit
responses and that every circuit response is a sum of the responses due to the individual
sources. We shall examine this latter point more closely in the next section.
The linearity property is not unique to resistive circuits; circuits containing resis-
tors, capacitors, and inductors also can be linear, as we will see and understand in
² These results can be viewed as the solution of the matrix problem

[  1     0    0  ] [v1]   [  vs ]
[ 3/4   1/4  −1  ] [v2] = [ −is ]
[ −1/2   0   3/2 ] [v3]   [  is ]

for the node-voltage vector on the left-hand side. Since the solution is the product of the source vector on the
right with the inverse of the coefficient matrix on the left, the node-voltage values are linear combinations
of the elements of the source vector and, hence, linear combinations of vs and is.
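The closed-form node voltages can be verified by substituting them back into the three KCL equations; the following sketch (ours, not from the text) does so with exact rational arithmetic:

```python
from fractions import Fraction

def kcl_residuals(vs, is_):
    """Node voltages from the closed-form solution for Figure 2.14,
    substituted back into the three KCL equations of the text."""
    v1 = vs
    v2 = -Fraction(5, 3) * vs - Fraction(4, 3) * is_
    v3 = Fraction(1, 3) * vs + Fraction(2, 3) * is_
    r1 = v1 - vs                                      # v1 = vs
    r2 = (v2 - v1) / 4 + 2 * (v1 - v3) / 2 + is_      # KCL at node 2
    r3 = v3 / 1 + (v3 - v1) / 2 - is_                 # KCL at node 3
    return r1, r2, r3

# All residuals vanish for any source strengths:
print(kcl_residuals(Fraction(3), Fraction(1)))  # -> (Fraction(0, 1), Fraction(0, 1), Fraction(0, 1))
```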
Figure 2.15 (a) A linear resistive circuit with two independent sources, (b) the same circuit with
suppressed current source and vs = 1 V, (c) the same circuit with suppressed voltage source and is = 1 A.
later chapters. On the other hand, circuits containing even a single nonlinear element,
such as a diode, ordinarily will behave nonlinearly, meaning that circuit responses
will not be weighted superpositions of independent sources in the circuit. A diode
is nonlinear because its v–i relation cannot be plotted as a straight line. By contrast,
resistors have straight-line v–i characteristics.
Consider now the circuit shown in Figure 2.15a, in which the indicated voltage v must take the form

v = K1 vs + K2 is.

To determine the coefficient K1, we may let vs = 1 V and is = 0, so that

v = K1 × (1 V).
But with vs = 1 V and is = 0, the circuit simplifies to the form shown in Figure 2.15b,
since with is = 0 the current source becomes an effective open circuit. Analyzing the
circuit of Figure 2.15b with the suppressed current source, we find that (using resistor
combination and voltage division, for instance) v = 1/2 V. Therefore, v = K1 × (1 V)
implies that

K1 = 1/2.
Likewise, to determine K2 , we let vs = 0 and is = 1 A in the circuit so that
v = K2 × (1 A)
in the modified circuit with the suppressed voltage source, shown in Figure 2.15c.
We find that in Figure 2.15c (using resistor combination and Ohm’s law) v = 1 V.
Therefore, v = K2 × (1 A) implies that
K2 = 1 V / 1 A = 1 Ω.
Hence, combining vs and is with the weight factors K1 and K2 , respectively, we find
that in the circuit shown in Figure 2.15a,
v = (1/2) vs + 1 × is.
Note that each term in this sum represents the contribution of an individual source
to voltage v. Likewise, the superposition principle implies that any response in any
resistive circuit is the sum of individual contributions of the independent sources
acting alone in the circuit. We will use this notion in Example 2.14.
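Superposition can also be demonstrated numerically. The sketch below (ours) solves the single KCL equation that we read off Figure 2.15a, stated in the docstring as an assumption, and confirms that the response to both sources equals the sum of the responses to each source acting alone:

```python
def v_response(vs, i_s):
    """Node voltage v of the circuit in Figure 2.15a, from the KCL equation
    (v - vs)/2 + v/4 + v/4 = i_s at the output node.
    (This nodal equation is our reading of the figure, not quoted in the text.)"""
    # (1/2 + 1/4 + 1/4) v = vs/2 + i_s
    return (vs / 2 + i_s) / (1 / 2 + 1 / 4 + 1 / 4)

full = v_response(2.0, 3.0)                          # both sources active
parts = v_response(2.0, 0.0) + v_response(0.0, 3.0)  # sources one at a time
print(full, parts)  # -> 4.0 4.0
```

Note that v_response(1, 0) and v_response(0, 1) reproduce the weight factors K1 = 1/2 and K2 = 1 found above.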
Example 2.14
In a linear resistive circuit with two independent current sources i1 and i2 ,
a resistor voltage vx = 2 V when
i1 = 1 A and i2 = 0 A.
But when

i1 = −1 A and i2 = 3 A,

the same resistor voltage is vx = 4 V. We wish to determine vx when

i1 = 0 A and i2 = 5 A.

By linearity, the resistor voltage must take the form

vx = K1 i1 + K2 i2,

so the two given measurements require

2 = K1

and

4 = −K1 + 3K2.
Thus,
K2 = (4 + K1)/3 = 2,
and we have
vx = 5K2 = 10 V
for i1 = 0 A and i2 = 5 A.
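The two measurements determine K1 and K2 through a 2 × 2 linear system; a sketch (ours, not from the text):

```python
from fractions import Fraction

# Measurement pairs (i1, i2, vx) from Example 2.14
measurements = [(1, 0, 2), (-1, 3, 4)]

# vx = K1*i1 + K2*i2 for each measurement gives two linear equations;
# solve them by Cramer's rule.
(a1, b1, y1), (a2, b2, y2) = measurements
det = Fraction(a1 * b2 - b1 * a2)
K1 = Fraction(y1 * b2 - b1 * y2) / det
K2 = Fraction(a1 * y2 - y1 * a2) / det

# Predict vx for i1 = 0 A, i2 = 5 A:
vx = K1 * 0 + K2 * 5
print(K1, K2, vx)  # -> 2 2 10
```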
Example 2.15
Determine v2 in Figure 2.7 from Section 2.2 using source suppression and
superposition.
Solution First, suppressing (setting to zero) the 2 A source in the circuit,
we find that v2 due to the 3 V source is
v2 = [4/(1 + 4)] × 3 V = 12/5 V.
To understand this expression, you must redraw Figure 2.7 with the 2 A
source replaced by an open circuit and apply voltage division in the revised
circuit. Second, suppressing the 3 V source, we calculate that v2 due to the
2 A source is
v2 = [(4 × 1)/(4 + 1)] × 2 A = 8/5 V.
We obtain this expression by redrawing Figure 2.7 with the 3 V source set
equal to zero, which implies that the voltage source is replaced by a short
circuit. Finally, according to the superposition principle, the actual value of
v2 in the circuit is the sum of the two values just calculated, representing
the individual contributions of each source. Hence,
v2 = 12/5 + 8/5 = 4 V,
which agrees with the node-voltage analysis result obtained in Example 2.6
in Section 2.2.
Figure 2.16 (a) A linear circuit containing an independent current source i and some linear resistive
network, (b) the linear resistive network and its terminal v–i relation v = vT − RT i, (c) Thevenin
equivalent of the network, and (d) Norton equivalent with iN = vT/RT.

A two-terminal resistive network, such as the boxed network in Figure 2.16a, often
can be replaced by a much simpler circuit that exhibits the same terminal
behavior as seen by the outside world. To incorporate the outside world, suppose that
the network terminals are connected to a second circuit that draws current i and results
in voltage v across the terminals. We have not shown the second circuit explicitly;
instead, we have modeled the effect of the second circuit by the current source i.
Now, what might the simple replacement be for the resistive network? Because
the network is resistive, its terminal voltage v will vary with current i in a linear form,
say, as
v = vT − RT i,
where RT and vT are constants that depend on what is within the resistive network.3
Both vT and RT may be either positive or negative.
Figure 2.16b emphasizes that the boxed resistive network is governed at its termi-
nals by an equation having the preceding form. Any circuit having the same terminal
v–i relation will be mathematically equivalent to the resistive network. The simplest
such circuit is shown in Figure 2.16c and is known as the Thevenin equivalent.
Since we started by assuming that the box contains an arbitrary resistive network,
this result means that all resistive networks can be represented in the simple form of
Figure 2.16c! A second equivalent circuit is obtained as the source transform of the
Thevenin equivalent (see Figure 2.16d) and is known as the Norton equivalent. Both
³ A more rigorous argument for this relation is as follows: Since the circuit of Figure 2.16a is linear, the
voltage v is a weighted linear superposition of all the independent sources within the box, as well as the
independent current source i external to the box. Expressing the overall contributions of the sources within
the box as vT , and declaring a weighting coefficient −RT for the contribution of i, we obtain v = vT − RT i
with no loss of generality.
of these equivalent circuits4 are useful for studying the coupling of resistive networks
to external loads and other circuits.
We next will describe how to calculate the Thevenin voltage vT , Norton current
iN , and Thevenin resistance RT . We will then illustrate in the next section the use of
the Thevenin equivalent in an important network coupling problem.
Thevenin voltage vT : For i = 0, the terminal v–i relation of a resistive network,
that is,
v = vT − RT i,
reduces to
v = vT .
Thus, to find the Thevenin voltage vT of a network, it is sufficient to calculate its
output voltage v in the absence of an external load at the network terminals. The
Thevenin voltage vT, therefore, also is known as the “open-circuit voltage” of the
network.
As an example, consider the network of Figure 2.15a, which is repeated in
Figure 2.17a, with vs = 2 V and is = 1 A. Using the result
v = (1/2) vs + 1 × is
from the last section, with vs = 2 V and is = 1 A, we calculate that v = 2 V is the
open-circuit voltage of the network (i.e., the terminal voltage in the absence of an
external load). Hence, Thevenin voltage of the network is
vT = 2 V.
Norton current iN : We next will see that the Norton current of a linear network,
defined as (see Figure 2.16d)
iN ≡ vT/RT,
also happens to be the current that flows through an external short placed between the
network terminals (as in Figure 2.17b) in the direction of the voltage drop adopted
for v and vT . The termination shorts out the network output voltage v, and thus the
relation v = vT − RT i is reduced to
0 = vT − RT i,
implying that the short-circuit current of the network is
i = vT/RT = iN.
⁴ These two circuits are named after Leon C. Thevenin (1857–1926) of the French Postes et Telegraphes
and Edward L. Norton (1898–1983) of Bell Labs.
Figure 2.17 (a) A linear network with terminal voltage v, (b) the same network with an external short,
(c) Thevenin equivalent, (d) Norton equivalent, and (e) the same network as (a) after source suppression.
Thus, to determine the Norton current of the network in Figure 2.17a, we place
a short between the network terminals and calculate the short-circuit current i, as
shown in Figure 2.17b. Applying KCL at the top node gives

1 = (0 − 2)/2 + i,

because there is zero voltage across the two 4 Ω resistors. Therefore, the short-circuit
current of the network is i = 2 A, and consequently,
iN = 2 A.
The Norton current of any linear network can be calculated as a short-circuit current.
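A quick numerical restatement (ours, not from the text) of the open-circuit and short-circuit calculations for the network of Figure 2.17a:

```python
# Source values of the network in Figure 2.17a
vs, i_s = 2.0, 1.0

# Thevenin (open-circuit) voltage, from the superposition result v = vs/2 + 1*is:
vT = vs / 2 + 1.0 * i_s

# Short-circuit current, from the KCL equation 1 = (0 - 2)/2 + i at the top node:
iN = i_s - (0.0 - vs) / 2

# Thevenin resistance, from the definition iN = vT / RT:
RT = vT / iN
print(vT, iN, RT)  # -> 2.0 2.0 1.0
```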
Thevenin resistance RT: To obtain the Thevenin resistance, we can rearrange the
definition of the Norton current as

RT = vT / iN

and use it with known values of vT (open-circuit voltage) and iN (short-circuit current).
For the network shown in Figure 2.17a, for example,

RT = vT / iN = 2 V / 2 A = 1 Ω;

the corresponding Thevenin and Norton equivalent circuits are as shown in Figures
2.17c and 2.17d, respectively.
The Thevenin resistance RT also can be determined directly by a source-suppression
method without first finding the Thevenin voltage and Norton current. Before
we describe the method, we observe that if all the independent source strengths in
Figure 2.17a were halved, that is,
1 A → 0.5 A and 2 V → 1 V,
the open-circuit voltage vT and short-circuit current iN of the network would also be
halved. In other words,
vT = 2 V → 1 V and iN = 2 A → 1 A
because of the linearity of the network. However, the Thevenin resistance
RT = vT/iN = 2 V / 2 A = 1 Ω → 1 V / 1 A = 1 Ω
would remain unchanged. The Thevenin resistance would, in fact, remain unchanged
even in the limiting case when all independent source strengths are suppressed to
zero, as shown in Figure 2.17e. Indeed, the Thevenin resistance of the network in
Figure 2.17e is the same as the Thevenin resistance of the original network shown in
Figure 2.17a!
This observation leads to the source suppression method for finding RT:

(1) Replace all independent voltage sources in the network by short circuits and all
independent current sources by open circuits.
(2) If the remaining network contains no dependent sources (as in Figure 2.17e),
then RT is the equivalent resistance, which we can determine usually by using
series and/or parallel resistor combinations. (Note that in Figure 2.17e the
parallel equivalent of the three resistors yields the correct Thevenin resistance
RT = 1 Ω obtained earlier.)
(3) If the remaining network contains dependent sources, or cannot be simplified
by just series and parallel combinations, then RT can be determined by the test
signal method, illustrated in Example 2.17 to follow.
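For step (2), a parallel combination is a one-line computation; e.g., for the source-suppressed network of Figure 2.17e (a sketch, not from the text):

```python
def parallel(*resistances):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Source-suppressed network of Figure 2.17e: 2 ohm, 4 ohm, and 4 ohm in parallel
RT = parallel(2.0, 4.0, 4.0)
print(RT)  # -> 1.0
```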
Example 2.16
Figure 2.18a shows a resistive network with a dependent source. Determine
the Thevenin and Norton equivalents of the network.
Solution To determine the Thevenin voltage vT , we apply the node-
voltage method to the circuit shown in Figure 2.18a. The unknown node
voltages are vT and vT − 2ix , where
ix = (vT − 2ix)/1

is the current flowing down through the 1 Ω resistor. We also have a super-node
KCL equation

2 = ix + vT/2.
Figure 2.18 (a) A linear network with terminal voltage vT , (b) the same network with an external short,
(c) Thevenin equivalent, (d) Norton equivalent, (e) the same network with suppressed independent
source, and (f) source suppressed network with an applied test source.
Solving these two equations, we find that

vT = 12/5 = 2.4 V.

To determine the Norton current iN, we next place a short across the network
terminals, as shown in Figure 2.18b. With the output node shorted to the reference,
Ohm's law applied to the 1 Ω resistor requires

ix = −2ix/1,

indicating that

ix = 0.
Hence, neither resistor carries any current in Figure 2.18(b), and therefore,
iN = 2 A.
Finally,
RT = vT/iN = (12/5)/2 = 6/5 = 1.2 Ω.
The Thevenin and Norton equivalents are shown in Figures 2.18c and 2.18d.
The next example explains the test signal method for determining the Thevenin
resistance in networks containing dependent sources.
Example 2.17
Figure 2.18e shows the network in Figure 2.18a with the independent source
suppressed. The Thevenin resistance RT of the original network is the
equivalent resistance of the source-suppressed network, but the presence
of a dependent source prevents us from determining RT with series/parallel
resistor combination methods. Instead, we can determine RT by calculating
the voltage response v of the source-suppressed network to a test current
i = 1 A injected into the network, as shown in Figure 2.18f. RT is then the
equivalent resistance

RT = v/i = v/(1 A).
For a test current i = 1 A, we write KCL at the injection node to obtain

i = 1 A = v/2 + ix.
Also,

ix = (v − 2ix)/1,

implying that

ix = v/3.
Thus,

i = 1 A = v/2 + v/3 = (5/6) v,

from which we obtain

v = 6/5 = 1.2 V.
Hence,

RT = v/i = 1.2 V / 1 A = 1.2 Ω,
as already determined in Example 2.16.
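The test-signal arithmetic can be replayed numerically (a sketch, not from the text):

```python
# Test-signal computation for the source-suppressed network of Figure 2.18e:
# inject i = 1 A, solve  i = v/2 + ix  with  ix = v/3,  then RT = v/i.
i_test = 1.0
v = i_test / (1.0 / 2.0 + 1.0 / 3.0)   # from (5/6) v = 1 A
ix = v / 3.0
RT = v / i_test
print(round(v, 6), round(ix, 6), round(RT, 6))  # -> 1.2 0.4 1.2
```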
Example 2.18
Determine the Thevenin equivalent of the network shown in Figure 2.19a.
Solution Since the network contains no independent source, the open-
circuit voltage vT = 0 and the Thevenin equivalent is just the Thevenin
resistance RT .
To determine RT , we will feed the network with an external 1 A current
source, as shown in Figure 2.19b, and solve for the response voltage v at
the network terminals. Then

RT = v/(1 A).
The KCL equation at the top node is

v/3 = vx + 1,
and, via voltage division across the series resistors on the left, we also have
vx = [2/(2 + 1)] v = (2/3) v.

Replacing vx in the KCL equation with (2/3) v, we find that
v = −3 V.
Figure 2.19 (a) A linear network with a dependent source, and (b) the same network excited by a 1 A
test source to determine RT .
Hence,
RT = −3 V / 1 A = −3 Ω.
2.5 Available Power and Maximum Power Transfer

Given a resistive network, it is important to ask how much power that network can
transfer to an external load resistance. Ordinarily, we wish to size the load resistance
so as to transfer the maximum power possible. For networks with positive Thevenin
resistance RT , there is an upper bound on the amount of power that can be delivered
out of the network, a quantity known as the available power of the network.
To determine the general expression for the power available from a resistive
network, we will examine the circuit shown in Figure 2.20. In this circuit an arbitrary
resistive network is represented by its Thevenin equivalent on the left, and the resistor
RL on the right represents an arbitrary load.
Using voltage division, we find that the load voltage is

vL = [RL/(RT + RL)] vT.

Consequently, the power absorbed by the load,

pL = vL iL = vL²/RL,

is

pL = [RL/(RT + RL)²] vT².
Figure 2.20 The model for the interaction of linear resistive networks with an
external load RL .
Figure 2.21 The plot of load power pL in W versus load resistance RL in Ω, for
vT = 1 V and RT = 1 Ω.
For fixed values of vT and RT > 0 (i.e., for a specific network with specific values
of vT and RT ), this quantity will vary with load resistance RL . From the expression it
is easy to see that pL vanishes for RL = 0 and RL = ∞, and is maximized for some
positive value of RL . (See Figure 2.21.) The maximum value of pL is the available
power pa of the network.
To determine the value of RL that maximizes pL , we set the derivative of pL
with respect to RL to zero, since the slope of the pL curve vanishes at its maximum,
as shown in Figure 2.21. The derivative is

dpL/dRL = vT² (RT − RL)/(RT + RL)³,

which vanishes, assuming⁵

RT > 0,

at the load resistance

RL = RT.

Substituting RL = RT into the expression for pL, we find that the available power is

pa = vT²/(4RT).

⁵ With negative RT (see Example 2.18 in the previous section), the transferred power pL is maximized
for RL → −RT, at a value of ∞. In practice, such networks will violate the circuit model when RL = −RT
and will deliver only finite power to external loads.
Example 2.19
For the network of Figure 2.18a,
vT = 12/5 V and RT = 6/5 Ω.
Thus, for maximum power transfer, the load resistance should be selected
as
RL = RT = 6/5 Ω,

and the corresponding available power is

pa = (12/5)²/(4 × 6/5) = (12 × 12)/(24 × 5) = 1.2 W.
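A numerical sweep (ours, not from the text) over candidate load resistances confirms that pL peaks at RL = RT, where it equals pa = vT²/(4RT):

```python
def p_load(vT, RT, RL):
    """Power delivered to RL by a Thevenin source (vT, RT), from
    pL = RL * vT**2 / (RT + RL)**2."""
    return RL * vT**2 / (RT + RL) ** 2

vT, RT = 1.0, 1.0   # the values used for the plot in Figure 2.21
candidates = [0.1 * k for k in range(1, 101)]   # RL from 0.1 to 10 ohms
best = max(candidates, key=lambda RL: p_load(vT, RT, RL))
print(best, p_load(vT, RT, best))  # -> 1.0 0.25
```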
EXERCISES
2.2 In the following circuits, determine vx and the absorbed power in the elements
supporting voltage vx :
[Circuit diagrams (a) and (b) for Exercise 2.2, containing 1 Ω and 2 Ω resistors, a 4 V source, a 2 V source, and dependent sources 3iy and 2vx.]
2.3 In the circuit shown next, determine v1 , v2 , ix , and iy , using the node-voltage
method. Notice that the reference node has not been marked in the circuit;
therefore, you are free to select any one of the nodes in the circuit as the
reference. The position of the reference (which should be shown in your
solution) will influence the values obtained for v1 and v2 , but not for ix
and iy .
[Circuit diagram for Exercise 2.3: node voltages v1 and v2, currents ix and iy, a 4.5 A source, a 7 V source, a dependent source 3ix, and 2 Ω resistors.]
2.7 (a) For the next circuit, obtain two independent equations in terms of loop-
currents i1 and i2 and simplify them to the form
Ai1 + Bi2 = E
Ci1 + Di2 = F.
[Circuit diagram for Exercise 2.7: a 2 V source, a dependent source 3ix, loop currents i1 and i2, and 1 Ω, 2 Ω, and 4 Ω resistors, with ix indicated.]
2.10 In the following circuit, find the open-circuit voltage and the short-circuit
current between nodes a and b, and determine the Thevenin and Norton
equivalents of the network between nodes a and b:
[Circuit diagram for Exercise 2.10: a 2 V source, two 1 Ω resistors, a dependent source 2ix, and terminals a and b, with ix indicated.]
2.11 Determine the Thevenin equivalent of the following network between nodes
a and b, and then determine the available power of the network:
[Circuit diagram for Exercise 2.11: a 2 A source, two 1 Ω resistors, a dependent source 2vx with vx indicated, and terminals a and b.]
Chapter 3 Circuits for Signal Processing

Op-amps, capacitors, inductors; circuits and systems for signal amplification, addition, subtraction, integration; LTI systems and zero-input and zero-state response; first-order RC and RL circuits and ODE initial-value problems; steady-state response; nth-order systems

In this chapter, we begin taking the viewpoint that voltages and currents in electrical
circuits may represent analog signals and that circuits can perform mathematical
operations on these signals, such as addition, subtraction, scaling (amplification),
differentiation, and integration. Naturally, the type of math performed depends on the
circuit and its components.

In Section 3.1 we describe a multiterminal circuit component called the operational
amplifier and examine some of its simplest applications (amplification, addition,
and differencing). We examine in Section 3.2 the use of capacitors and inductors (see
Chapter 1 for their basic definitions) in circuits for signal differentiation and integration.
In Section 3.3 we discuss system properties of differentiators and integrators
and introduce the important concepts of system linearity and time-invariance. Analysis
of linear and time-invariant (LTI) circuits with capacitors and inductors may
require solving differential equations. In Section 3.4 we discuss solutions of first-order
differential equations describing RC and RL circuits. Section 3.5 previews the
analysis techniques for more complicated, higher-order LTI circuits and systems to
be pursued in later chapters.

3.1 Operational Amplifiers and Signal Arithmetic

In Figure 3.1a the triangle symbol represents a multi-terminal electrical device known
as an operational amplifier, the op-amp. As we shall see, op-amps are high-gain
amplifiers that are useful components in circuits designed to process signals. The
diagram in Figure 3.1a shows how an op-amp is “powered up,” or biased, by raising
Figure 3.1 (a) Circuit symbol of the op-amp, showing five of its seven terminals and its biasing
connections to a reference node, (b) equivalent linear circuit model of the op-amp showing only three
terminals with node voltages v+ , v− , and vo , and (c) the dependence of vo on differential input v+ − v−
and bias voltage vb . The equivalent model shown in (b) is valid only for v+ − v− between the dashed
vertical lines.
the electrical potentials of two of its terminals1 to DC voltages ±vb with respect to
some reference. Commercially available op-amps require vb to be in the range of a
few to about 15 V in order to function in the desired manner, as described here.
Glancing ahead for a moment, we see that Figure 3.2 shows two op-amp circuits,
where (to keep the diagrams uncluttered) the biasing DC sources ±vb are not shown.
To understand what op-amps do in circuits, we find it necessary to know the rela-
tionship between the input voltages v+ , and v− , and the output voltage vo , defined
at the three op-amp terminals (+, −, and o) shown in the diagrams. Because op-
amps themselves are complicated circuits composed of many transistors, resistors,
and capacitors, analysis of op-amp circuits would seem to be challenging. Fortu-
nately, op-amps can be described by fairly simple equivalent circuit models involving
the input and output voltages. Figure 3.1b shows the simplest op-amp model that is
accurate enough for our purposes, which indicates that
vo = A(v+ − v− ) − Ro io .
It is because vo depends on v+ and v− that the latter are considered as op-amp inputs
and vo is considered to be the op-amp output.
¹ Op-amps such as the Fairchild 741 have seven terminals. Two of these are for output offset adjustments,
which will not be important to us. The discussion here will focus on the remaining five, shown in Figure 3.1a,
two of which are used for powering up, or biasing, the device, as indicated in the figure.
Terminal o is called the output terminal, and the terminals marked by − and +
are referred to as the inverting and noninverting input terminals. Resistors Ri and Ro
in Figure 3.1b are called the input and output resistances of the op-amp, and A is a
voltage gain factor (it scales up the differential input v+ − v−) called the “open-loop
gain.” Typically, for an op-amp,

Ro ∼ 1–10 Ω,
Ri ∼ 10⁶ Ω,
A ∼ 10⁶,

where the symbol ∼ stands for “within an order of magnitude of.” Very large values
of A and Ri and a relatively small Ro are essential for op-amps to perform as intended
in typical applications.
When the output terminal of an op-amp is left open—that is, when no connec-
tion is made to other components—as in Figure 3.1a, then io = 0 and, according to
Figure 3.1b, vo is just an amplified version of v+ − v− . However, an important detail
that is not properly modeled by the simple op-amp equivalent circuit2 is the satura-
tion of vo at ±vb . For the actual physical op-amp, the output vo cannot exceed vb in
magnitude. Assuming small io , Figure 3.1c shows the variation of the output voltage
vo as a function of the differential input voltage v+ − v−. Only for

|v+ − v−| < vb/A

is the variation of vo with v+ − v− linear, as described by the model in Figure 3.1b.
Otherwise, the op-amp is said to be saturated, or behaving nonlinearly.

Clearly, then, to maintain an op-amp in the desired linear regime, that is, to
ensure that |v+ − v−| < vb/A, it is necessary to keep the differential input v+ − v−
extremely small. Typically, with A ∼ 10⁶ and vb ≈ 10 V, we must have |v+ − v−| <
10 μV. When this condition is achieved, the op-amp input currents i+ and i− through
the input resistance Ri ∼ 10⁶ Ω will have magnitudes less than 10 μV/10⁶ Ω = 10 pA, which
is an exceedingly small (in fact, negligible) current. Hence, for all practical purposes,
for an op-amp operating in the linear regime (i.e., between the dashed vertical lines
in Figure 3.1c), we have
v+ ≈ v− ,
i+ ≈ 0,
i− ≈ 0,
These conditions are fundamental approximations that tremendously simplify the
analysis of op-amp circuits known to be in linear operation. As you will discover later,
if you are having difficulty in analyzing an op-amp circuit, it is probably because you
have forgotten to apply one of the previous conditions, so memorize them! These
equations are known as the ideal op-amp approximations. The reason for this terminology
is that v+ = v− and i+ = i− = 0 would be exactly true for an ideal op-amp with Ri → ∞
and A → ∞. Notice that there is no ideal op-amp approximation that constrains the
output voltage vo . Instead, as we shall see, the value of vo depends upon the circuit
in which the op-amp is embedded.
Applying the ideal op-amp approximations to the analysis of circuits containing
nonideal (i.e., real, “off-the-shelf”) op-amps amounts to ignoring voltage and current
terms in our KVL and KCL equations having magnitudes less than ∼ 10 μV and
10 pA. This will result in negligible errors in the calculation of the larger voltages and
currents in a circuit that typically will interest us.
Alternatively, we can analyze linear op-amp circuits by substituting for each op-
amp the equivalent circuit model shown in Figure 3.1b and writing down the exact
KVL and KCL equations in the resulting circuit diagrams. In such calculations, we
use Ro ∼ 1–10 Ω, Ri ∼ 10⁶ Ω, and A ∼ 10⁶ instead of the op-amp approxi-
mations introduced earlier. This approach can be slightly more accurate (if exact
values of Ro , Ri , and A are known), but requires far more effort than using the simple
ideal op-amp relations.
In the following discussion of well-known linear op-amp circuits, we will illus-
trate both of these approaches.
Figure 3.2 (a) A voltage-follower (or buffer), and (b) a noninverting amplifier.
72 Chapter 3 Circuits for Signal Processing
Starting with the voltage-follower circuit and using the ideal op-amp approxima-
tions, we can argue as follows: Since i+ ≈ 0, we can ignore the voltage drop across
resistor Rs and thus claim that
v+ ≈ vs .
Next, since v− ≈ v+ for an op-amp in linear operation,
v− ≈ vs .
But the output terminal is connected directly to the inverting input, so
vo = v− ,
and, as a consequence,
vo ≈ vs .
For the noninverting amplifier in Figure 3.2b, voltage division across R1 and R2 gives
v− ≈ [R2 /(R1 + R2 )] vo ,
and combining this with v− ≈ v+ ≈ vs yields
vo ≈ (1 + R1 /R2 ) vs .
Clearly, then, the circuit in Figure 3.2b is a voltage amplifier with a voltage gain
G ≡ vo /vs ≈ 1 + R1 /R2 .
The amplifier is called noninverting because the gain is positive, and this preserves
the algebraic sign of the amplified voltage.
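The sensitivity of this gain to the op-amp's finite open-loop gain can be checked numerically. The sketch below assumes the standard feedback expression G = A/(1 + Aβ) with β = R2/(R1 + R2), neglecting Ro and Ri; the component values are illustrative:

```python
# Compare the ideal noninverting gain 1 + R1/R2 with the finite-A
# feedback result G = A / (1 + A*beta), beta = R2/(R1 + R2).
# (Ro and Ri are neglected; values below are illustrative.)
A = 1e6                      # typical op-amp open-loop gain
R1, R2 = 10e3, 20e3          # feedback resistors, ohms

G_ideal = 1 + R1 / R2                 # 1.5
beta = R2 / (R1 + R2)                 # feedback fraction
G_finite = A / (1 + A * beta)

rel_error = abs(G_finite - G_ideal) / G_ideal
print(G_ideal, G_finite, rel_error)   # error ~ 1/(A*beta), i.e. ~1.5e-6
```

With A ∼ 10⁶ the fractional gain error is on the order of one part per million, which is why the ideal op-amp approximations are so dependable in practice.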
According to these results, a noninverting amplifier multiplies the input voltage
vs by a gain G that is independent of the source resistance Rs (so long as it is not
infinite), whereas the voltage follower is simply a noninverting amplifier with unity
gain (G = 1). What makes both of these circuits very useful is the fact that their gain G
Section 3.1 Operational Amplifiers and Signal Arithmetic 73
Figure 3.3 (a) Feeding a load through a buffer (see Exercise Problem 3.1 for a detailed discussion of this
circuit), versus (b) a direct connection between the source network and the load.
remains essentially unchanged when the circuits are terminated by an external load—
for example, by some finite resistance RL —as shown in Figure 3.3a, so long as the
load resistance exceeds a value on the order of Ro /A. (See the discussion that follows.)
The voltage follower in particular can be used as a unity-gain buffer between a source
circuit and a load, as shown in Figure 3.3a, to prevent “loading down” the source
voltage. That is, the entire source voltage vs appears across the load resistance RL .
Ordinarily, if two circuits are to be connected to one another, then making a direct
connection changes the behavior of the two circuits. This is illustrated in Figure 3.3b,
where connecting RL to the terminals of the source network reduces the voltage at
those terminals. By connecting the load resistance to the source network through a
voltage follower, as in Figure 3.3a, RL does not draw current from the source network
(instead, the current is supplied by the op-amp) and the full source voltage appears
across the load resistance. More generally, the connection of one circuit to another
through a voltage follower allows both circuits to continue to operate as designed.
The preceding ideal op-amp analysis did not provide us with detailed information
such as the range of values of Rs for which the gain G will be independent of Rs .
To obtain such information, we would need to insert the op-amp equivalent model
of Figure 3.1b into the circuit (to replace the op-amp symbol) and then reanalyze the
circuit without making any assumptions about v± and i± . We will lead you through
such a calculation in Exercise Problem 3.2 to confirm that the noninverting amplifier
gain G is well approximated by 1 + R1 /R2 , so long as Rs ≪ Ri and Ro /A ≪ RL . Such
calculations also are demonstrated in the next section in our discussion of the inverting
amplifier.
We will close this section with a discussion of negative feedback, the magic
behind how a voltage follower circuit keeps itself within the linear operating regime.
Notice how the output terminal of the op-amp in the voltage follower is fed back to its
own inverting input. That connection configures the circuit with a negative feedback
loop, ensuring that v+ − v− remains between the dashed lines in Figure 3.1c, so long
as |vs | < vb . Let us see how.
Example 3.1
In the linear op-amp circuit shown in Figure 3.5, determine the node volt-
ages vo , v1 , and v2 assuming that Vs1 = 3 V, Vs2 = 4 V, and Rs1 = Rs2 =
100 Ω.
Figure 3.5 The op-amp circuit of Example 3.1: sources Vs1 and Vs2 with source resistances Rs1 and Rs2, and a resistor network of 10 kΩ and 20 kΩ resistors.
Solution Since i+ ≈ 0 for the op-amp on the left (a voltage follower), there is no
voltage drop across Rs1 , and
v1 ≈ Vs1 = 3 V.
The op-amp on the right is a noninverting amplifier with gain
G = 1 + 10 kΩ/20 kΩ = 1.5;
therefore,
v2 ≈ GVs2 = (1.5)(4 V) = 6 V.
Finally, applying KCL at the vo node (where i− ≈ 0),
(vo − 3)/10 kΩ + vo /10 kΩ + (vo − 6)/10 kΩ = 0,
giving
vo = 3 V.
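The node equation in the solution can be checked with a few lines of arithmetic; the sketch below solves the single-node KCL equation and verifies the result:

```python
# Node equation from Example 3.1, with all three resistors equal to 10 kOhm:
# (vo - v1)/R + vo/R + (vo - v2)/R = 0  =>  vo = (v1 + v2)/3
R = 10e3
v1, v2 = 3.0, 6.0        # follower and noninverting-amplifier outputs, volts

vo = (v1 + v2) / 3
# verify that vo indeed satisfies the KCL equation
residual = (vo - v1) / R + vo / R + (vo - v2) / R
print(vo, residual)      # vo = 3.0 V, residual = 0
```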
Example 3.2
Both sources of Example 3.1 are doubled so that now Vs1 = 6 V, and Vs2 =
8 V. What is the new value of vo ? Assume that the biasing voltage is vb = 15
V for both op-amps.
Solution Since the circuit in Figure 3.5 is linear, a doubling of both inputs
causes a doubling of the response vo . Hence, the new value of vo is 6 V.
This is the correct result, because the new values of v1 and v2 , namely, 6
V and 12 V, are both below the saturation level of 15 V, and therefore, the
circuit remains in linear operation.
Example 3.3
Would the circuit in Figure 3.5 remain in linear operation if the source
voltage values of Example 3.1 were tripled? Once again, assume that vb =
15 V.
Solution No, the circuit would enter into a nonlinear saturated mode,
because with a tripled value for Vs2 the response v2 could not triple to 18
V (since v2 cannot exceed the biasing voltage of 15 V).
Inverting amplifier: Consider next the circuit shown in Figure 3.6a. Since the
noninverting input is grounded, the ideal op-amp approximations give v− ≈ v+ = 0,
so the current flowing through Rs toward the inverting terminal is
(vs − v− )/Rs ≈ (vs − 0)/Rs = vs /Rs .
The current through resistor Rf away from the inverting terminal is, likewise,
(v− − vo )/Rf ≈ (0 − vo )/Rf = −vo /Rf .
Since i− ≈ 0, KCL at the inverting node requires these two currents to be equal:
vs /Rs = −vo /Rf .
Figure 3.6 (a) An op-amp circuit (inverting amplifier), and (b) the same circuit where the op-amp is
shown in terms of its equivalent circuit model.
Hence,
vo ≈ −(Rf /Rs ) vs ,
which shows that the circuit is a voltage amplifier with a voltage gain of
G = vo /vs ≈ −Rf /Rs .
Because the gain is negative, the amplifier is said to be inverting.
Let us next verify the preceding result by using the more accurate equivalent-circuit
approach, where we replace the op-amp by its equivalent linear model, introduced
earlier in Figure 3.1b. Making this substitution, we find that the inverting
amplifier circuit takes the form shown in Figure 3.6b. Applying KCL at the inverting
terminal (where v− is defined) gives
(v− − vs )/Rs + v− /Ri + (v− − vo )/Rf = 0.
Also, KCL at the output node shows that vo ≈ A(v+ − v− ) = −Av− , so that
v− ≈ −vo /A,
assuming that Ro ≪ Rf and A ≫ 1. Substituting v− ≈ −vo /A into the first KCL equation
gives
(−vo /A − vs )/Rs + (−vo /A)/Ri + (−vo /A − vo )/Rf ≈ 0,
which can be rearranged as
vo (1/(ARs ) + 1/(ARi ) + 1/(ARf ) + 1/Rf ) = −vs /Rs .
Clearly, with ARs ≫ Rf , ARi ≫ Rf , and A ≫ 1, the first three terms within the
parentheses on the left can be neglected to yield
vo ≈ −(Rf /Rs ) vs .
This is the result that we obtained earlier by using the ideal op-amp approximation.
In summary, the ideal-op-amp result is trustworthy provided that
A ≫ 1
and
Ro ≪ Rf ≪ ARs , ARi .
For example, with Rf = 20 kΩ and Rs = 5 kΩ these conditions are easily met, so
vo ≈ −(Rf /Rs ) vs = −4vs
would be a valid result, whereas our simple gain formula would not be accurate
for Rf = 20 Ω and Rs = 5 Ω (for which Rf is no longer much larger than Ro ). Importantly, our detailed analysis has shown that the
inverting amplifier gain G ≈ −Rf /Rs is not sensitive to the exact values of A, Ri , or Ro ; it
is sufficient that A and Ri be very large and Ro quite small. Op-amps are intentionally
designed to satisfy these conditions.
To examine how the inverting amplifier gain may depend on a possible load
RL connected to the output terminal, we find it sufficient to calculate the Thevenin
resistance RT of the equivalent circuit shown in Figure 3.6b. The calculation can be
carried out by use of the circuit shown in Figure 3.7, where the source vs of Figure 3.6b
has been suppressed and a 1 A current has been injected into the output terminal to
implement the test current method discussed in Chapter 2. Using straightforward steps,
one finds that RT is very small (on the order of Ro /A), so the gain of the inverting
amplifier is essentially unaffected by any load RL that is large compared with RT .
Figure 3.7 Test current method is applied to determine the Thevenin resistance
of the equivalent circuit of an inverting amplifier.
Summing amplifier: The circuit shown in Figure 3.8a sums the input voltages v1 and v2 ,
with respective weighting coefficients −Rf /R1 and −Rf /R2 , so that
vo ≈ −(Rf /R1 ) v1 − (Rf /R2 ) v2 .
The circuit can be modified in a straightforward way to
combine three or more inputs in a similar manner. Furthermore, because of the low
Thevenin resistance of the inverting amplifier, the weighted sum will appear in full
strength across any load that is reasonably large.
The circuit shown in Figure 3.8b forms the difference, v1 − v2 , between voltages
v1 and v2 . This becomes apparent when we note that v− ≈ v+ ≈ v1 /2 (obtained by
Figure 3.8 (a) A summing amplifier, and (b) a difference circuit.
applying voltage division to v1 , since i+ ≈ 0), so that the KCL equation at the inverting
input node is
(v2 − v1 /2)/R2 ≈ (v1 /2 − vo )/R2 .
Hence,
vo ≈ v1 − v2 .
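As a quick numerical check, the ideal-op-amp relations for the difference circuit can be coded directly; the function below encodes the voltage-division step and the KCL result (equal resistor pairs assumed, as in Figure 3.8b):

```python
# Ideal-op-amp difference circuit with equal resistor pairs:
# voltage division gives v+ = v1/2, and KCL at the inverting node,
# (v2 - v-)/R2 = (v- - vo)/R2 with v- ~= v+, yields vo = 2*v+ - v2 = v1 - v2.
def diff_amp(v1, v2):
    v_plus = v1 / 2          # noninverting input by voltage division
    return 2 * v_plus - v2   # solve the KCL equation for vo

print(diff_amp(5.0, 3.0), diff_amp(1.0, 4.0))   # 2.0  -3.0
```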
In the previous section we discussed resistive op-amp circuits for signal amplification,
summation, and differencing. In this section we shall see that, by including capacitors
or inductors, we can build op-amp circuits for signal differentiation and integration.
These operations are, of course, relevant for processing time-varying signals (or give
rise to them) and therefore, this will be our first exposure to so-called AC circuits.
Since
i = C dv/dt    (Capacitor)
and
v = L di/dt    (Inductor),
capacitors are natural differentiators of their voltage inputs and inductors differentiate
their current inputs.
Example 3.4
The capacitor voltage in the circuit shown in Figure 3.9 is
v(t) = 5 cos(100t) V.
Determine the current i(t), given that C = 2 μF.
Figure 3.9 A capacitor circuit with an imposed capacitor voltage signal and a
current response i(t) proportional to the time derivative of the imposed voltage.
Section 3.2 Differentiators and Integrators 81
Solution
i(t) = C dv/dt = 2 μF × d/dt [5 cos(100t) V]
     = 2 × 5 × (−100) sin(100t) μA = −sin(100t) mA.
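The differentiation in Example 3.4 can be verified numerically by comparing the closed-form current with a centered finite-difference estimate of C dv/dt:

```python
import math

# Example 3.4 check: for v(t) = 5*cos(100 t) V and C = 2 uF, compare the
# analytic current -sin(100 t) mA with a finite difference of C*dv/dt.
C = 2e-6
v = lambda t: 5 * math.cos(100 * t)

def i_analytic(t):
    return -1e-3 * math.sin(100 * t)            # amperes

def i_numeric(t, h=1e-7):
    # centered difference approximation of C * dv/dt
    return C * (v(t + h) - v(t - h)) / (2 * h)

t = 0.013
print(i_analytic(t), i_numeric(t))   # the two agree closely
```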
Example 3.5
The 1 H inductor shown in Figure 3.10a responds to the ramp current
input shown in Figure 3.10b, with a unit-step voltage v(t) = u(t) shown in
Figure 3.10c. The unit-step voltage output is 0 for t < 0, when the input
i(t) is constant (with zero value), and 1 for t > 0, when
i(t) = t A,
and, thus,
v(t) = L di/dt = 1 V.
Figure 3.10 (a) An inductor circuit, (b) current input to the inductor, and (c)
voltage response of the inductor described by a unit-step function, u(t).
Figure 3.11 Op-amp differentiators built with (a) a capacitor and (b) an inductor.
Op-amp differentiators: In the circuit of Figure 3.11a, the ideal op-amp approximations
give v− ≈ v+ = 0, so the input current C dvs /dt supplied through the capacitor must
continue through the feedback resistor R as a current
(0 − vo (t))/R = −vo (t)/R
directed toward the output terminal of the op-amp. Therefore,
vo (t) = −RC dvs /dt .
dt
L dvs
vo (t) = −
R dt
Capacitors as integrators: The capacitor element law
i(t) = C dv/dt
can also be read in reverse: integrating both sides from t = a to t = b gives
∫_{a}^{b} i(t) dt = C [v(b) − v(a)].
Figure 3.12 (a) Capacitor as an integrator, and (b) input and (c) output signals of
the integrator circuit discussed in Example 3.6.
Replacing t with τ under the integral sign, and then replacing a and b with to and t,
respectively, we obtain
v(t) = v(to ) + (1/C) ∫_{to}^{t} i(τ) dτ,
an explicit formula for the capacitor voltage in terms of the input current i(t).
Clearly, the result implies that if we know the capacitor voltage at some initial
instant to , then we can determine any subsequent value of the capacitor voltage in
terms of a definite integral of the input current from time to to the time t of interest.
We will use the term initial value or, occasionally, initial state, to refer to v(to ).
Example 3.6
In the circuit shown in Figure 3.12a, suppose that C = 1 F and v(0) = 1 V.
Let
i(t) = e^{−t/2} A
for t > 0 as shown in Figure 3.12b. Calculate the voltage response v(t) for
t > 0.
Solution Using the general expression for v(t) obtained previously, with
to = 0, v(0) = 1 V, i(τ) = e^{−τ/2} A, and C = 1 F, we find that
v(t) = 1 + (1/1) ∫_{0}^{t} e^{−τ/2} dτ = (1 + (e^{−t/2} − 1)/(−1/2)) V = (3 − 2e^{−t/2}) V.
Notice in Figures 3.12b and 3.12c that i(t) = 0 A and v(t) = 1 V for t < 0.
These values of the capacitor current and voltage for t < 0 are consistent with one
another, since a constant v(t) implies zero i(t), according to
i(t) = C dv/dt.
For t > 0 the curves in the figure also satisfy this same relation, because the voltage
curve was computed in Example 3.6 to satisfy this very same constraint.
A comparison of the i(t) and v(t) curves in Figures 3.12b and 3.12c leads to the
following important observation: Even though the input curve i(t) is discontinuous
at t = 0, the output curve v(t) does not display a discontinuity. For reasons to be
explained next, the following general statement can be made about a capacitor voltage:
The voltage response of a capacitor to a practical input current must be
continuous.
Explanation: Recall from Chapter 1 that the capacitor is an energy storage
device, with the stored energy varying with capacitor voltage as
w(t) = (1/2) C v²(t).
Furthermore, a capacitor stores net electrical charge on its plates, an amount
q(t) = Cv(t)
on one plate assigned positive polarity and −q(t) on the other. Therefore, any
time discontinuity in capacitor voltage would lead to accompanying discon-
tinuities in stored charge and energy, which would require infinite current in
order to add a finite amount of charge and energy in zero time. Such discon-
tinuous changes are naturally prohibited in practical circuits where current
amplitudes remain bounded, consistent with the fact that
|i(t)| = |C dv/dt| = C |dv/dt| < ∞
when v(t) is a continuous function.
In the following example, we will make use of the continuity of capacitor voltage
to calculate the response of a capacitor to a piecewise continuous input current:
Example 3.7
Calculate the voltage response of a 2 F capacitor to the discontinuous current
input shown in Figure 3.13a, for t > 0. Assume that v(0) = 0 V.
Solution For the period 0 < t < 1, where t is in units of seconds, the
current input is i(t) = 1 A. Therefore, for that period,
v(t) = v(0) + (1/2) ∫_{0}^{t} 1 dτ = 0 + (t − 0)/2 = t/2 V.
Figure 3.13 (a) A piecewise continuous input current i(t), and (b) response v(t)
of a 2 F capacitor discussed in Example 3.7.
Now, even though this result is only for the interval 0 < t < 1, we still can
use it to calculate
v(1) = 1/2 V
because of the continuity of the capacitor voltage v(t) at t = 1. Thus, for
t > 1, where the current i(t) = 0, we find that
v(t) = v(1) + (1/2) ∫_{1}^{t} 0 dτ = v(1) = 1/2 V.
Notice that v(t) grows from zero to 1/2 V (as shown in Figure 3.13b)
during the time interval 0 < t < 1 because of a steady current flow that
charges up the capacitor within that time interval. After t = 1, the current
stops so that no more charging or discharging takes place and the capacitor
voltage remains fixed at the level of v(1) = 1/2 V forever.
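The piecewise response of Example 3.7 is easy to encode directly; continuity of the capacitor voltage supplies the value carried forward past t = 1 s:

```python
# Example 3.7: C = 2 F, v(0) = 0, i(t) = 1 A for 0 < t < 1 and 0 afterwards.
# Continuity of v(t) fixes v(1) = 1/2 V, which then holds for all t > 1.
C = 2.0

def v(t):
    if t <= 0:
        return 0.0
    if t <= 1:
        return t / C          # (1/C) * integral of 1 A over (0, t)
    return 1.0 / C            # holds v(1) = 1/2 V once the current stops

print(v(0.5), v(1.0), v(3.0))   # 0.25 0.5 0.5
```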
Inductors as integrators: Likewise, we can integrate the inductor element law
v(t) = L di/dt
to yield
i(t) = i(to ) + (1/L) ∫_{to}^{t} v(τ) dτ,
in analogy with the voltage response of a current-driven capacitor. Also, by analogy,
it should be clear that the current response of an inductor to a practical voltage input
must be continuous.
Example 3.8
A voltage input
v(t) = u(t) V
applied to a 3 H inductor with zero initial current produces the ramp response
i(t) = (t/3) u(t) A.
Op-amp integrators: If we were to swap the positions of the capacitor and resistor
in the differentiator circuit of Figure 3.11a, we would obtain the integrator circuit
shown in Figure 3.15a. To see that Figure 3.15a is an integrator, let us analyze the
circuit using the ideal op-amp approximations: with v− ≈ 0, the input current vs /R
must flow onto the capacitor, so that
vo (t) = vo (to ) − (1/RC) ∫_{to}^{t} vs (τ) dτ.
Figure 3.15 An op-amp integrator with (a) a capacitor and (b) an inductor.
Section 3.3 Linearity, Time Invariance, and LTI Systems 87
The differentiator and integrator circuits of Section 3.2 can be viewed abstractly as
analog systems. Such systems convert their time-continuous input signals f (t) to
output signals y(t) according to some rule that defines the system. For instance, for
the integrator system shown in Figure 3.16, the input–output rule has the form
y(t) = y(to ) + ∫_{to}^{t} f (τ) dτ.
For another analog system, the rule may be specified in terms of a differential equation
for the output y(t) that depends on input f (t).
This book is concerned mainly with signal processing systems that can be imple-
mented with lumped-element electrical circuits, and, in particular, with linear and
time-invariant (LTI) systems such as differentiators, integrators, and RC and RL
circuits to be examined in the next section. The purpose of this section is to describe
what we mean by an LTI system and to introduce some key terminology to be used
throughout the rest of the book.
Figure 3.16 (a) An integrator system with an input i(t) and output v(t), and (b)
a symbolic representation of the same system in terms of the input and output
signals designated as f (t) and y(t), respectively.
The integrator output y(t), for t > to , depends not only on the input f (t) from time to to t, but also on y(to ),
the initial state of the integrator. In Figure 3.16, y(to ) is the initial voltage across the
capacitor, which is proportional to the initial charge on the capacitor. So we think of
this as the initial state. Notice that the contributions of y(to ) and {f (τ ), to < τ < t}
to y(t) are additive, and each contribution, namely, y(to ) and ∫_{to}^{t} f (τ) dτ, vanishes with
vanishing y(to ) and f (t), respectively.
Thus, for f (t) = 0—that is, with zero input—the integrator output is just
y(to ),
while for y(to ) = 0—that is, under the zero-state condition—the output is only
∫_{to}^{t} f (τ) dτ.
Therefore, it is convenient to think of y(to ) and ∫_{to}^{t} f (τ) dτ as the zero-input and
zero-state components of the output, respectively, and, conversely, to think of the output
as a sum of zero-input and zero-state response components:
y(t) = y(to ) + ∫_{to}^{t} f (τ) dτ.
       (zero-input)  (zero-state)
Note that the zero-input response is independent of the input, and the zero-state
response is independent of the initial state.
In a very general sense, and using the same terminology, we can state that:
The zero-state response is said to vary linearly with the system input—known as
zero-state linearity—if (using the symbolic notation of Figure 3.16b), under zero-state
conditions,
f1 (t) −→ System −→ y1 (t)
and
f2 (t) −→ System −→ y2 (t)
imply that
K1 f1 (t) + K2 f2 (t) −→ System −→ K1 y1 (t) + K2 y2 (t)
for any arbitrary constants K1 and K2 . In other words, with zero-state linear systems, a
weighted sum of inputs produces a similarly weighted sum of corresponding zero-state
outputs, consistent with the superposition principle.
Example 3.9
Verify that an integrator system
i(t) −→ Integ −→ v(to ) + ∫_{to}^{t} i(τ) dτ
is zero-state linear.
Solution Assuming a zero initial condition, v(to ) = 0, we can express the
integrator outputs caused by inputs i1 (t) and i2 (t) as
v1 (t) = ∫_{to}^{t} i1 (τ) dτ
and
v2 (t) = ∫_{to}^{t} i2 (τ) dτ.
The zero-state response to the combined input K1 i1 (t) + K2 i2 (t) is
∫_{to}^{t} [K1 i1 (τ) + K2 i2 (τ)] dτ = K1 v1 (t) + K2 v2 (t),
which is a linear combination of the original outputs (with the same coefficients
in the linear combination), implying that the system is zero-state
linear.
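The superposition property just verified can also be checked numerically; the sketch below approximates the zero-state integrals with a trapezoidal rule (sample inputs and constants are arbitrary illustrations):

```python
import math

# Zero-state linearity check for the integrator y(t) = integral_0^t f(tau) dtau:
# the response to K1*f1 + K2*f2 should equal K1*y1 + K2*y2.
def integ(f, t, n=20000):
    # trapezoidal approximation of integral_0^t f(u) du
    h = t / n
    return h * (0.5 * (f(0) + f(t)) + sum(f(k * h) for k in range(1, n)))

f1 = lambda u: math.sin(u)
f2 = lambda u: u ** 2
K1, K2 = 2.0, -3.0
t = 1.5

combined = integ(lambda u: K1 * f1(u) + K2 * f2(u), t)
superposed = K1 * integ(f1, t) + K2 * integ(f2, t)
print(combined, superposed)   # the two agree
```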
Example 3.10
Identify the zero-input and zero-state response components of the output
of the op-amp differentiator
vs (t) −→ Diff −→ −RC dvs /dt
and show that the system is zero-state linear.
Solution For zero-input, vs (t) = 0 and the system output is also zero.
So, the system has no dependence on initial conditions and the zero-input
response is identically zero. Thus, the system output consists entirely of
the zero-state component. The system outputs caused by inputs vs1 (t) and
vs2 (t) can be expressed as
v1 (t) = −RC dvs1 /dt
and
v2 (t) = −RC dvs2 /dt,
respectively. With the input K1 vs1 (t) + K2 vs2 (t), the output is
−RC d/dt [K1 vs1 (t) + K2 vs2 (t)] = K1 v1 (t) + K2 v2 (t),
so the system is zero-state linear.
Zero-input linearity is defined in an analogous way: the zero-input response must
vary linearly with the initial state y(to ). For example, hypothetical systems with zero-input
responses
y(t) = 3y(to )²
and
y(t) = 1 + y(to )
would not be zero-input linear. This is easily seen because doubling the initial condition
does not double the response due to the initial condition, which is a necessary
component of linearity. (Choose K1 = 2 and K2 = 0 in the linear combination of
initial states.) A third system whose zero-input response is directly proportional to y(to )
would be zero-input linear. The integrator and differentiator examined in Examples 3.9
and 3.10 also are zero-input linear.
Time invariance: A system is said to be time invariant if a time shift in its input
produces only an identical time shift in its output; that is, if
f (t) −→ Time-inv. −→ y(t),
then
f (t − to ) −→ Time-inv. −→ y(t − to )
for every shift to .
Example 3.12
Determine the zero-state response of an integrator system
i(t) −→ Integ −→ v(to ) + 2 ∫_{to}^{t} i(τ) dτ
with inputs
i1 (t) = at u(t)
and
i2 (t) = i1 (t − 3) = a(t − 3) u(t − 3),
and show that the responses to i1 (t) and i2 (t) are consistent with time-invariance.
The constant a is arbitrary.
Solution The zero-state response to input i1 (t) can be expressed as
v1 (t) = 2 ∫_{0}^{t} i1 (τ) dτ = 2 ∫_{0}^{t} aτ dτ = a τ²|_{τ=0}^{t} = at² , t > 0.
Likewise, the zero-state response to i2 (t) is
v2 (t) = 2 ∫_{3}^{t} a(τ − 3) dτ = a(t − 3)² , t > 3.
Since
v2 (t) = a(t − 3)² , t − 3 > 0,
is the same as
v1 (t − 3), t − 3 > 0,
the responses are consistent with time-invariance.
Figure 3.17 An RC circuit driven by a source vs (t).
We can identify the ODE that governs the RC circuit shown in Figure 3.17 for
t > to by writing the KVL equation around the loop. Since the loop current can be
expressed as
i(t) = C dv/dt
in terms of the capacitor voltage v(t), the KVL equation is
vs (t) = RC dv/dt + v(t).
This equation can be rearranged as
dv/dt + (1/RC) v(t) = (1/RC) vs (t),
which is a first-order linear ODE with constant coefficients that describes the circuit
for t > to .
This differential equation is called “first-order” and “ordinary” because it contains
only the first ordinary derivative of its unknown v(t), instead of higher-order derivatives
or partial derivatives. It is said to have constant coefficients because the coefficient
1/RC in front of both v(t) and the forcing function vs (t) does not vary with time. This is
true because the circuit components R and C are assumed to have constant values.
The ODE also is said to be linear, because a linear combination of inputs applied to
the ODE produces a solution that is the linear combination of the individual outputs.3
Furthermore, the ODE also satisfies the zero-input linearity condition, as we will see
in the next section.
The linearity and the constant coefficients of the preceding ODE guarantee that
the RC circuit of Figure 3.17 constitutes an LTI system for t > to . The same system
properties also apply to all resistive circuits containing a single capacitor, because we
can represent all such circuits as in Figure 3.17 by using Thevenin equivalents.
Response to DC inputs: Suppose that the input in Figure 3.18 is a DC source,
vs (t) = Vs ,
connected to the RC circuit by a switch that closes at t = 0.

Figure 3.18 A switched RC circuit with a DC source Vs .
3 Verification of linearity: Assume that
dv/dt + (1/RC) v(t) = (1/RC) f (t)
and
dw/dt + (1/RC) w(t) = (1/RC) g(t)
are true and therefore that v(t) and w(t) are different solutions of the same ODE with different inputs f (t)
and g(t). A weighted sum of the equations, with coefficients K1 and K2 , can be expressed as
d(K1 v + K2 w)/dt + (1/RC) (K1 v(t) + K2 w(t)) = (1/RC) (K1 f (t) + K2 g(t)),
which implies that the solution of the ODE, with an input K1 f (t) + K2 g(t), is the linear superposition
K1 v(t) + K2 w(t) of solutions v(t) and w(t), obtained with inputs f (t) and g(t), respectively. Hence,
superposition works, and the ODE is said to be linear.
Section 3.4 First-Order RC and RL Circuits 95
Clearly, the capacitor response v(t) for t > 0 is a solution of the linear
ODE
dv/dt + (1/RC) v(t) = (1/RC) Vs ,
subject to the initial condition
v(0) = v(0− ),
where v(0− ) stands for the capacitor voltage just before the switch is closed. The
continuity of capacitor voltage discussed in Section 3.2.2 allows us to evaluate v(t)
at t = 0 and requires that we match v(0) to v(0− ), a voltage value established by the
past charging/discharging activity of the capacitor.
To solve the ODE initial value problem just outlined, we first note that ODE
initial-value
v(t) = Vs problem
is a particular solution of the ODE, meaning that it fits into the differential equation.
However, we must find a solution that also matches the initial value at t = 0− . So,
unless Vs = v(0− ), this particular solution is not viable.
Next, we note that, with no loss of generality, an entire family of solutions of the
ODE can be written as
v(t) = vh (t) + Vs ,
where vh (t) is whatever is needed to satisfy the ODE; substituting v(t) = vh (t) + Vs
into the ODE gives
(d/dt)(vh (t) + Vs ) + (1/RC)(vh (t) + Vs ) = (1/RC) Vs ,
or, equivalently,
dvh /dt + (1/RC) vh (t) = 0.
But the last ODE—known as the homogeneous form of the original ODE—can be
integrated directly to obtain a homogeneous (or complementary) solution
vh = Ae^{−t/RC} .
Hence,
v(t) = Ae^{−t/RC} + Vs
is the family of general solutions of the ODE, where A can be any constant.
Imposing the initial condition v(0) = v(0− ) on the general solution gives
v(0) = A + Vs = v(0− ),
so that
A = v(0− ) − Vs .
Therefore, the solution to the initial-value problem posed earlier, which is the capacitor
voltage in the RC-circuit for t > 0, is
v(t) = [v(0− ) − Vs ] e^{−t/RC} + Vs .
This solution simultaneously satisfies the ODE and also matches the initial condition
at t = 0− .
This result has an interesting structure that makes it easy to remember. The first
term decays to zero as t → ∞, leaving the second term, which is a DC solution
(i.e., the solution after the voltages and currents are no longer changing). But, under
DC conditions, the capacitor in Figure 3.18 becomes an open circuit, taking the full
value of the DC source voltage Vs . Thus, the component Vs in the solution for v(t)
should be viewed as the final value of v(t), as opposed to its initial value v(0− ). The
transition from the initial to the final DC state for v(t) occurs as an exponential decay
of the difference v(0− ) − Vs between the two states, with the rate of decay controlled
by the time constant RC.
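The structure of this solution is worth internalizing, and a short numerical sketch makes the role of the time constant concrete (component and source values below are illustrative, matching the case plotted in Figure 3.19):

```python
import math

# Switched-RC step response v(t) = (v(0-) - Vs)*exp(-t/(R*C)) + Vs; after
# one time constant RC, the gap between v(t) and Vs shrinks by a factor 1/e.
# Values below are illustrative.
def v_rc(t, v0, Vs, R, C):
    return (v0 - Vs) * math.exp(-t / (R * C)) + Vs

R, C = 2.0, 0.5            # RC = 1 s
v0, Vs = 4.0, 6.0          # initial and final capacitor voltages, volts

tau = R * C
gap_ratio = (v_rc(tau, v0, Vs, R, C) - Vs) / (v0 - Vs)
print(v_rc(0.0, v0, Vs, R, C), gap_ratio)   # 4.0 and 1/e ~ 0.368
```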
4 Verification: The homogeneous ODE implies that
dvh /vh = −(1/RC) dt,
so that, integrating both sides,
ln vh = −(1/RC) t + K,
and
e^{ln vh} = vh = e^{−t/RC + K} = Ae^{−t/RC} ,
where A ≡ e^K .
Figure 3.19 Capacitor voltage v(t) for the switched RC circuit: (a) with RC = 1 s, and (b) with RC = 0.5 s and switching time to = 1 s.

In Figure 3.19a the time constant is
RC = (2 Ω) × (1/2 F) = 1 s,
which is the amount of time it takes for the difference
v(0− ) − Vs
to drop to e^{−1} ≈ 37% of its initial value.
We derived this result for v(t), assuming a switch closing time of to = 0. For an
arbitrary switching time to , the original result can be shifted in time to obtain
v(t) = [v(to− ) − Vs ] e^{−(t−to)/RC} + Vs
for t > to . Here v(to− ) denotes the initial state of the capacitor voltage just before
the switch is closed at t = to . Figure 3.19b depicts v(t) for the case with to = 1 s,
v(1− ) = 4 V, Vs = 6 V, R = 1 Ω, and C = 1/2 F. Notice that now the RC time constant
is 0.5 s, which is one-half the value assumed in Figure 3.19a.
In the next set of examples we will make use of the general results obtained
above.
Example 3.13
Consider the circuit shown in Figure 3.20a. The switch in the circuit is
moved at t = 0 from position a to position b, bringing the capacitor into
the left side of the network. Assuming that the capacitor is in the DC state
when the switch is moved, calculate the capacitor voltage v(t) for t > 0.
Solution For t < 0, the capacitor is in the DC state and behaves like an
open circuit. Therefore, the 2 A source current in the circuit flows through
the 1 Ω resistor on the right, generating a 2 V drop from top to bottom.
Figure 3.20 (a) A switched RC circuit, (b) capacitor circuit for t > 0, and (c)
equivalent circuit.
Hence,
v(0− ) = 2 V.
Figure 3.20b shows the capacitor circuit for t > 0. The resistive network
across the capacitor can be replaced by its Thevenin equivalent, yielding
the equivalent circuit in Figure 3.20c. Using the equivalent circuit, we see
that as t → ∞, v(t) → Vs /2, which is the final state of the capacitor when
it becomes an open circuit. We also note that the RC time constant is
1 Ω × 1 F = 1 s. Hence, for t > 0,
v(t) = [v(0− ) − Vs /2] e^{−t} + Vs /2 .
Notice that the expression for v(t) in Example 3.13 also can be written as
v(t) = v(0− ) e^{−t} + (1 − e^{−t}) Vs /2 ,
where the first term is the zero-input component and the second term is the zero-state
component. Clearly, the foregoing zero-input and zero-state components vary linearly with the
initial state v(0− ) and input Vs , respectively. Therefore, the zero-input and zero-state
linearity conditions are satisfied and the circuit constitutes a linear system (as claimed,
but not shown, earlier on).
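The decomposition just described can be verified directly; the sketch below splits the Example 3.13 response into its two components and checks that they add up to the total:

```python
import math

# Example 3.13 response v(t) = (v(0-) - Vs/2)*exp(-t) + Vs/2, split into a
# zero-input part v(0-)*exp(-t) and a zero-state part (1 - exp(-t))*Vs/2.
v0, Vs = 2.0, 8.0          # values used in Examples 3.13 and 3.14

def zero_input(t):  return v0 * math.exp(-t)
def zero_state(t):  return (1 - math.exp(-t)) * Vs / 2
def v_total(t):     return (v0 - Vs / 2) * math.exp(-t) + Vs / 2

t = 0.7
print(zero_input(t) + zero_state(t), v_total(t))   # the two agree
```

Scaling `v0` alone scales only `zero_input`, and scaling `Vs` alone scales only `zero_state`, which is exactly the zero-input/zero-state linearity claimed in the text.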
Example 3.14
Calculate the currents i1 (t) and i2 (t) in the circuit shown in Figure 3.20a.
Solution For t < 0, the capacitor is an open circuit, so the same current flows
through both 2 Ω resistors:
i1 (t) = i2 (t) = Vs /4 .
For t > 0,
i1 (t) = [Vs − v(t)]/2
and
i2 (t) = v(t)/2 .
Substituting the expression for v(t) from the previous equation, we obtain
i1 (t) = −[v(0− )/2] e^{−t} + (Vs /4)(1 + e^{−t})
and
i2 (t) = [v(0− )/2] e^{−t} + (Vs /4)(1 − e^{−t})
for t > 0. The voltage waveform v(t) and the current waveforms i1 (t) and
i2 (t) are plotted in Figure 3.21, for the case v(0− ) = 2 V (as in Example 3.13)
and Vs = 8 V.
Figure 3.21 Capacitor voltage and resistor current responses in the circuit
shown in Figure 3.20.
Notice that the capacitor voltage curve in Figure 3.21 is continuous across t = 0
(the switching instant in Examples 3.13 and 3.14), but the curves for i1 (t) and i2 (t)
are discontinuous. Clearly, it is impossible to assign unique values to i1 (0) and i2 (0).
Instead, we note that i1 (0− ) = 2 A, i1 (0+ ) = 3 A and i2 (0− ) = 2 A, i2 (0+ ) = 1 A,
where i1 (0− ) and i1 (0+ ), for instance, refer to the limiting values of i1 (t) as t = 0 is
approached from the left and right, respectively. All solutions in the circuit for t > 0
can be found using the initial-state v(0− ) of the capacitor voltage, without knowledge
of i1 (0− ), i2 (0− ), etc., as we found out explicitly in Example 3.14. In this sense the
initial capacitor voltage v(0− ) fully describes the initial state of the entire RC circuit
for t > 0.
RL circuits: A parallel RL circuit driven by a current source Is (shown in Figure 3.22)
is described by
Is = (L/R) di/dt + i(t).
So, the ODE that describes the inductor current in the circuit is
di/dt + (R/L) i(t) = (R/L) Is .
The solution to this ODE for t > 0 can be expressed as
i(t) = [i(0− ) − Is ] e^{−t/(L/R)} + Is ,
or, for a switching time to ,
i(t) = [i(to− ) − Is ] e^{−(t−to)/(L/R)} + Is ,
where L/R plays the role of the time constant.
Figure 3.22 An RL circuit driven by a current source Is .

Figure 3.23 The switched RL circuit analyzed in Example 3.15.
Example 3.15
Consider the circuit shown in Figure 3.23 where the switch moves from
right to left and the inductor is connected into both sides of the circuit at the
single instant t = 0. Determine the inductor current i(t) and voltage v(t)
for t > 0. Assume that di/dt = 0 for t < 0.
Solution From the figure, we see that
i(0− ) = 2 A,
because the inductor is effectively a short circuit for t < 0 (since di/dt = 0).
For t > 0, the inductor finds itself in the source-free segment on the left.
Hence,
Is ≡ lim_{t→∞} i(t) = 0.
The Thevenin resistance seen by the inductor is the two 2 Ω resistors in parallel,
R = 1 Ω, so the time constant is
L/R = 2 H / 1 Ω = 2 s.
Thus, for t > 0, the inductor current is
i(t) = i(0− ) e^{−t/(L/R)} = 2e^{−0.5t} A.
Next, we notice that half of this current flows upward through each resistor
on the left, and therefore the voltage v(t) is −(2 Ω) × i(t)/2, or
v(t) = −2e^{−0.5t} V.
The two resistors gradually dissipate the initial energy
w = (1/2) × 2 H × (2 A)² = 4 J
stored in the inductor, and all signals in the circuit vanish as t → ∞.
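The source-free decay of Example 3.15 can be summarized in a few lines of arithmetic:

```python
import math

# Example 3.15: source-free RL decay with i(0-) = 2 A, L = 2 H, and
# Thevenin resistance R = 1 ohm (the two 2-ohm resistors in parallel).
L, R, i0 = 2.0, 1.0, 2.0
tau = L / R                           # time constant, 2 s

def i(t):  return i0 * math.exp(-t / tau)   # 2*exp(-0.5 t) A
def v(t):  return -2.0 * (i(t) / 2)         # -(2 ohm) * i/2 = -2*exp(-0.5 t) V

w0 = 0.5 * L * i0 ** 2                # initial stored energy, 4 J
print(i(0.0), v(0.0), w0)             # 2.0 -2.0 4.0
```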
Figure 3.24 The RL circuit analyzed in Example 3.16.
Example 3.16
Consider the circuit shown in Figure 3.24. We will assume that
i(0− ) = 0
and calculate i(t) for t > 0. First, we notice that the inductor becomes an
effective short circuit as t → ∞, so the final inductor current is
i(∞) = 2 V/2 Ω = 1 A. The Thevenin resistance seen by the inductor is
R = 2 Ω ∥ 2 Ω = 1 Ω, giving the time constant
L/R = 2 s.
Hence, the zero-state response is
i(t) = (1 − e^{−0.5t}) A, t > 0.
What if the initial state of the circuit described were i(0− ) = 2 A? In that case
the response i(t) would be the superposition of the previous expression and the zero-
input solution of the circuit for i(0− ) = 2 A. But, we already have found the zero-
input solution. Under the zero-input condition, the voltage source in Figure 3.24
is replaced with a short circuit and the circuit reduces to the circuit analyzed in
Example 3.15 for the same initial current i(0− ) = 2 A. Therefore, superposing the
answers in Examples 3.15 and 3.16, we get
i(t) = (1 − e^{−0.5t}) A + 2e^{−0.5t} A = (1 + e^{−0.5t}) A.
Figure 3.25 The twice-switched RL circuit analyzed in Example 3.17.
Now, what if i(0− ) were 4 A? No problem! In this case we can double the zero-input
response just calculated, since the response is linear in i(0− ), and add it to the
zero-state response. The answer is
i(t) = (1 + 3e^{−0.5t}) A.
Example 3.17
As a final example, we will calculate the inductor current i(t) in the circuit
shown in Figure 3.25. This is the same circuit as in Example 3.15, but,
at t = 2 s, the switch is returned back to its original position. Therefore,
the inductor current i(t) evolves until t = 2 s, exactly as determined in
Example 3.15, namely,
i(t) = 2e−0.5t A.
So, the inductor current just before the switch is moved again is
i(2− ) = 2e−1 A.
As t → ∞, the inductor current will build up from this value to a final
current value of 2 A, with a time constant of
L/R = 2 H/2 Ω = 1 s.
Notice that the time constant is different than before, because the inductor
sees a different Thevenin resistance after t = 2 s. Therefore, for t > 2 s, the
current variation is
i(t) = (2e−1 − 2)e−(t−2) + 2 A.
The complete current waveform is shown in Figure 3.26.
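The piecewise waveform of Figure 3.26 is easy to check numerically. The following Python sketch (our own check, not part of the text; the function name is ours) evaluates i(t) on both sides of the t = 2 s switching instant:

```python
import math

def i_ex317(t):
    """Inductor current of Example 3.17 (piecewise exponential)."""
    if t < 0:
        return 2.0                      # DC steady state before the switch moves
    if t < 2:
        return 2 * math.exp(-0.5 * t)   # source-free decay, tau = L/R = 2 s
    # after t = 2 s: rise from i(2-) = 2e^-1 A toward 2 A with tau = 1 s
    return (2 * math.exp(-1) - 2) * math.exp(-(t - 2)) + 2

# the inductor current is continuous across the switching instant
assert abs(i_ex317(1.999) - i_ex317(2.001)) < 1e-2
```

Note that the current is continuous at t = 2 s, as it must be for an inductor.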
3.4.3 RC- and RL-circuit response to time-varying sources
As we have found, the capacitor voltage in RC circuits and inductor current in RL
circuits are described by linear first-order ODEs of the form
dy/dt + ay(t) = bf (t),
Figure 3.26 Inductor current waveform for the circuit shown in Figure 3.25.
where y(t) denotes a capacitor voltage (in RC circuits) or inductor current (in RL
circuits), a and b are circuit-dependent constants, and f (t) is an input source or signal.
For instance, in the RC circuit shown in Figure 3.27,
a = b = 1/RC
and
f (t) = vs (t)
for t > 0.
In RC and RL circuits with time varying inputs (as in Figure 3.27), the general
solution of
dy/dt + ay(t) = bf (t)
for t > 0 can be written as
y(t) = Ae−at + yp (t),
where A is a constant
Figure 3.27 A first-order circuit with an arbitrary time-varying input f (t) = vs (t)
applied at t = 0.
Section 3.4 First-Order RC and RL Circuits 105
Source function f (t)      Particular solution of dy/dt + ay(t) = bf (t)
1  constant D              constant K
2  Dt                      Kt + L for some K and L
3  De^pt                   Ke^pt if p ≠ −a; Kte^pt if p = −a
4  cos(ωt) or sin(ωt)      H cos(ωt + θ), where H and θ depend on ω, a, and b

Table 3.1 Suggestions for particular solutions of dy/dt + ay(t) = bf (t) with various source functions f (t).
and yp (t) is a particular solution of the original ODE (like those suggested in Table 3.1
for inputs f (t) having various forms). Since
y(0) = A + yp (0),
it follows that
y(t) = [y(0) − yp (0)]e−at + yp (t)
     = y(0− )e−at + [yp (t) − yp (0)e−at ],
where y(0) = y(0− ), an initial capacitor voltage or an inductor current. In the second
line of the preceding equation, we have grouped the solution into its zero-input
response (due to initial condition) and zero-state response (due to input).
Our solutions in the next set of examples will be specific applications of this
general result for first-order RC and RL circuits.
For reference, the result can be generalized further to
y(t) = y(to )e−a(t−to ) + [yp (t) − yp (to )e−a(t−to ) ]
for t > to , where y(to ) is an initial-state (capacitor voltage or inductor current) defined
at t = to . In this case the zero-input response for t > to is
y(to )e−a(t−to ) .
Example 3.18
Find the capacitor voltage y(t) in an RC circuit described by
dy/dt + y(t) = e−2t
for t > 0. Assume a zero initial state—that is, y(0− ) = 0.
Solution Since
f (t) = e−2t ,
Table 3.1 suggests a particular solution of the form
yp (t) = Ke−2t .
Substituting this form into the ODE gives −2Ke−2t + Ke−2t = e−2t , so that
K = −1
and
yp (t) = −e−2t .
The particular solution does not match the initial condition (otherwise
we would be finished), and so we proceed to the general solution (sum of
the homogeneous and particular solutions):
y(t) = Ae−t − e−2t .
Then
y(0) = A − 1,
implying that
A = y(0) + 1.
With the zero initial state y(0) = y(0− ) = 0, we get
A = 1,
and hence y(t) = e−t − e−2t for t > 0.
Notice that the zero-state response determined in Example 3.18 consists of two
transient functions, e−t and e−2t . The term transient describes functions that vanish
as t → ∞.
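The closed-form result of Example 3.18 can also be verified numerically. Here is a short Python sketch (ours, not the book's) that integrates the ODE with a fixed-step RK4 routine and compares against y(t) = e−t − e−2t:

```python
import math

def rk4(f, y0, t0, t1, n=10000):
    """Integrate dy/dt = f(t, y) from t0 to t1 using n fixed RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# dy/dt + y = e^{-2t} with y(0) = 0; closed form: y(t) = e^{-t} - e^{-2t}
ode = lambda t, y: math.exp(-2 * t) - y
for T in (0.5, 1.0, 3.0):
    assert abs(rk4(ode, 0.0, 0.0, T) - (math.exp(-T) - math.exp(-2 * T))) < 1e-8
```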
Example 3.19
What is the zero-state solution of
dy/dt + y(t) = e−2t
for t > −1, with y(−1) = 0?
Solution As in Example 3.18, the general solution is y(t) = Ae−t − e−2t .
Since we require
y(−1) = Ae − e2 = 0,
we conclude that
A = e,
so that y(t) = e−(t−1) − e−2t for t > −1.
Example 3.20
Let R = 1 Ω, C = 1 F, and
vs (t) = cos(t)
in the RC circuit shown earlier in Figure 3.27. Then, for t > 0, the capacitor
voltage v(t) will be the solution to the ODE
dv/dt + v(t) = cos(t).
Determine v(t) for t > 0, assuming zero initial state—that is, v(0− ) = 0.
Solution Following Table 3.1, we try a particular solution of the form
vp (t) = H cos(t + θ) = A cos(t) − B sin(t),
where5 A = H cos θ and B = H sin θ. Therefore,
dvp /dt = −A sin(t) − B cos(t).
Substituting vp (t) and dvp /dt into the ODE, we obtain
(A − B) cos(t) − (A + B) sin(t) = cos(t),
or, equivalently,
A − B = 1 and A + B = 0,
yielding
1 1
A= and B = − .
2 2
Now, substituting the values for A and B into A = H cos θ and B =
H sin θ gives
1 1
= H cos θ and − = H sin θ.
2 2
It follows that
sin θ
= tan θ = −1,
cos θ
and so
θ = −45°
5 Here, we are making use of the trig identity cos(a + b) = cos a cos b − sin a sin b.
and
H = (1/2)/cos(−45°) = 1/√2,
so that
v(t) = Ae−t + (1/√2) cos(t − 45°),
where the first term is the homogeneous solution with arbitrary constant A.
Employing the initial condition, v(0) = 0, gives
v(0) = A + (1/√2) cos(−45°) = A + 1/2,
so that
A = −1/2.
Thus, the zero-state solution for t > 0 is
v(t) = −(1/2)e−t + (1/√2) cos(t − 45°).
Clearly, the first term −(1/2)e−t is transient and the second term
(1/√2) cos(t − 45°) is non-transient.
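Because the signs of A, B, H, and θ are easy to get wrong, a quick numerical cross-check of Example 3.20 is worthwhile. This Python sketch (ours, not the text's) confirms that vp(t) = A cos(t) − B sin(t) satisfies the ODE and recovers H and θ:

```python
import math

A, B = 0.5, -0.5                                   # from A - B = 1 and A + B = 0
vp = lambda t: A * math.cos(t) - B * math.sin(t)   # = H cos(t + theta)
dvp = lambda t: -A * math.sin(t) - B * math.cos(t)

# vp must satisfy dv/dt + v = cos(t) at arbitrary times
for t in (0.0, 0.7, 2.3, 5.1):
    assert abs(dvp(t) + vp(t) - math.cos(t)) < 1e-12

# polar form: A = H cos(theta), B = H sin(theta)
H, theta = math.hypot(A, B), math.atan2(B, A)
assert abs(H - 1 / math.sqrt(2)) < 1e-12
assert abs(math.degrees(theta) + 45) < 1e-9
```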
Example 3.21
Given that
y(0− ) = 1,
what is the zero-input solution of
dy/dt + ay(t) = f (t)?
Is the solution transient?
Solution With f (t) = 0, the solution has the form
y(t) = Ae−at .
Since
y(0) = A = y(0− ) = 1,
we conclude that
y(t) = e−at for t > 0, which is transient provided that a > 0.
In Examples 3.18 through 3.21, we saw that both the zero-input and zero-state
responses of 1st-order ODEs can include non-transient as well as transient functions.
The part of a system response remaining after the transient components have vanished
is called the system steady-state response. Of course, if nothing remains after the
transients vanish, then the steady-state response is, trivially, zero. Such was the case
in Example 3.18, where we found that the system response was composed entirely
of transient functions. In Example 3.20, on the other hand, the steady-state response
was √12 cos(t − 45◦ ).
Example 3.22
Suppose that
dv/dt + v(t) = cos(t)
is valid for all t. Then, what is the steady-state component of the response
v(t)?
Solution From Example 3.20, we know that a particular solution to the
ODE is the co-sinusoid
1 ◦
vp (t) = √ cos(t − 45 ),
2
which is non-transient. The specification of an initial condition is irrelevant,
because no matter when (for what time) an initial condition is specified, the
homogeneous solution has the form Ae−t , which is transient and vanishes.
Hence, the steady-state component of the response is vp (t) = (1/√2) cos(t − 45°).
Section 3.5 nth-Order LTI Systems 111
Figure 3.28 A parallel RLC circuit with a current source input is (t).
Example 3.23
Determine the ODEs for the inductor current i(t) and capacitor voltage v(t)
in the parallel RLC circuit shown in Figure 3.28.
Solution The KCL equation for the top node is
is (t) = v(t)/2 + i(t) + 3 dv/dt,
where we have
v(t) = 4 di/dt,
using the v − i relation for the 4 H inductor. Eliminating v(t) in the KCL
equation, we get
is (t) = 2 di/dt + i(t) + 12 d2 i/dt2 ,
or
d2 i/dt2 + (1/6) di/dt + (1/12) i(t) = (1/12) is (t),
which is the ODE for the inductor current i(t). Taking the derivative of both
sides of the ODE and making the substitution
di/dt = v(t)/4,
we next obtain
(1/4)[d2 v/dt2 + (1/6) dv/dt + (1/12) v(t)] = (1/12) dis /dt,
which implies
d2 v/dt2 + (1/6) dv/dt + (1/12) v(t) = (1/3) dis /dt.
This is the ODE describing the capacitor voltage. Notice that the ODEs for
i(t) and v(t) are identical except for their right-hand sides. Thus, the forms
of the homogeneous solutions of the ODEs are identical.
As the order n of an LTI circuit or system increases, obtaining the governing
nth-order linear ODEs with constant coefficients, of the form
dn y/dtn + a1 dn−1 y/dtn−1 + · · · + an y(t) = bo dm f/dtm + b1 dm−1 f/dtm−1 + · · · + bm f (t),
and solving them becomes increasingly more involved and difficult. Fortunately, there
are efficient, alternative ways of analyzing LTI circuits and systems that do not depend
on the formulation and solution of differential equations. The details of such methods,
which are particularly useful when n is large, depend on whether or not the system
zero-input response is transient.
Here is the central idea: In an LTI system with a transient zero-input response,
the steady-state response to a co-sinusoidal input applied at t = −∞ will itself be a
co-sinusoid and will not depend on an initial state. (See Example 3.22.) Therefore, in
such systems—known as dissipative LTI systems—the zero-state response to a super-
position of co-sinusoidal inputs can be written as a superposition (because of linearity)
of co-sinusoidal responses. This superposition method for zero-state response calcula-
tions in dissipative LTI systems is known as the Fourier method and will be described
in detail beginning in Chapter 5. An extension of the method, known as the Laplace
method, is available for non-dissipative LTI systems where the zero-input response
is not transient.
Our plan for learning how to handle nth-order circuit and system problems is
as follows. In Chapters 5 through 7 we will study the Fourier method for zero-state
response calculations in dissipative LTI systems. This is a very powerful method
because, as we will discover in Chapter 7, any practical signal that can be generated
in the lab can be expressed as a weighted superposition of sin(ωt) and cos(ωt) signals
with different ω’s. Since the Fourier method requires that we know the system steady-
state response to co-sinusoidal inputs, we need an efficient method for calculating
such responses in circuits and systems of arbitrary complexity (arbitrary order n). We
will develop such a method in Chapter 4. The discussion of the Laplace method for
handling non-dissipative system problems will be delayed until Chapter 11.
We close this chapter with two simple examples on zero-input response in nth-
order systems. A more complete coverage of the same topic will be provided in
Chapter 11 after we learn the Laplace method.
Example 3.24
Determine the zero-input solution of the second-order ODE
d2 y/dt2 + 3 dy/dt + 2y(t) = f (t), t > 0,
dt dt
and discuss whether or not the system is dissipative.
Solution To find the zero-input solution, we set
f (t) = 0,
so that the ODE becomes
d2 y/dt2 + 3 dy/dt + 2y(t) = 0.
This equation can be satisfied by
y(t) = e^st
with certain allowed values for s. To find the permissible s we insert e^st for
y(t) in the ODE and obtain
(s2 + 3s + 2)e^st = 0,
which implies
s2 + 3s + 2 = (s + 1)(s + 2) = 0,
and thus,
s = −1 and s = −2.
Hence, the zero-input solution is of the form
y(t) = Ae−t + Be−2t ,
where A and B are constants chosen so that y(t) satisfies prescribed initial
conditions. For example, the initial state may be specified as the values of
y(t) and its first derivative at t = 0, so that A and B can be found by solving
y(0) = A + B
and
dy/dt |t=0 = −A − 2B.
The system is dissipative, because the zero-input solution is transient.
In general, the permissible values of s are the roots of the polynomial
sn + a1 sn−1 + · · · + an ,
known as the characteristic polynomial of the ODE. For instance, the characteristic
polynomial of
d2 y/dt2 + 3 dy/dt + 2y(t) = 0,
used in Example 3.24, is
s2 + 3s + 2 = (s + 1)(s + 2).
Example 3.25
Repeat Example 3.24 with the ODE
d2 y/dt2 + dy/dt − 2y(t) = f (t).
Solution The characteristic polynomial is
s2 + s − 2 = (s − 1)(s + 2).
Hence, permissible values for s are 1 and −2, and the zero-input solution
is of the form
y(t) = Aet + Be−2t .
Because the first term continually increases as t → ∞, the zero-input response
is non-transient and the system must be non-dissipative.
From the previous examples, it should be clear that whether or not an LTI circuit is
dissipative depends on the algebraic sign of the roots of its characteristic polynomial.6
However, even before examining the characteristic polynomial, we can recognize a
circuit as dissipative if it contains at least one current-carrying resistor and contains
no dependent sources. That is true, because such a circuit would have no new source
of energy under zero-input conditions (i.e., with the independent sources suppressed)
and would dissipate, as heat, whatever energy it may have stored in its capacitors and
inductors, via current flowing through the resistor.
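The dissipativity test—every characteristic root having a negative real part—is simple to automate for second-order systems. A Python sketch (our illustration, not the text's):

```python
import cmath

def quad_roots(b, c):
    """Roots of the characteristic polynomial s^2 + b*s + c."""
    d = cmath.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

def dissipative(b, c):
    """True if both roots have negative real parts."""
    return all(r.real < 0 for r in quad_roots(b, c))

assert dissipative(3, 2)          # Example 3.24: roots -1 and -2
assert not dissipative(1, -2)     # Example 3.25: roots  1 and -2
assert dissipative(1/6, 1/12)     # Example 3.23 circuit: complex roots, Re < 0
```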
EXERCISES
3.1 (a) In Figure 3.3a, given that i+ ≈ 0, what happens to the current vo /RL in
the circuit? Hint: the answer is related to the answer of part (b).
(b) For vs = 1 V, Rs = 50 Ω, and RL = 1 kΩ, what is the power absorbed
by resistor RL in the circuit shown in Figure 3.3a and where does that
power come from?
3.2 (a) Confirm that substitution of the linear op-amp model of Figure 3.1b
into the noninverting amplifier circuit of Figure 3.2b leads to the following
circuit diagram:
(Circuit diagram: source νs with series resistance Rs driving node ν+ ; input resistance Ri between ν+ and ν− ; controlled source Aνx with output resistance Ro driving νo through feedback resistors R1 and R2 .)
6 In later chapters, we commonly will encounter situations where the roots of the characteristic polyno-
mial are complex, as in Example 3.23, where the polynomial is s2 + (1/6)s + 1/12. In such cases, we will learn
that the system is dissipative if the real parts of the roots are negative.
3.3 In the next circuit shown, determine the node voltage vo (t). You may assume
that the circuit behaves linearly and make use of the ideal op-amp approxi-
mations (v+ ≈ v− and i+ ≈ i− ≈ 0).
(Op-amp circuit: inputs ν1 (t) and ν2 (t), 2 kΩ and 6 kΩ resistors, output νo (t).)
3.4 In the following circuit, determine the node voltage vo , using the ideal op-
amp approximations and assuming that Ra = Rb = 1 k:
(Circuit diagram for Exercise 3.4: op-amp with 1 kΩ resistors, a 1956 Ω resistor, resistors Ra and Rb , 2 kΩ resistors, DC sources of 2 V, 4 V, and 5 V, and marked voltage νx .)
3.7 (a) In the following circuit, determine the capacitor current i(t):
(Circuit: current source is (t) driving a 1 F capacitor with voltage ν(t); the is (t) waveform, in amperes, is plotted over 0 ≤ t ≤ 2 with levels 1 and −1.)
(b) In the next circuit, determine and plot the capacitor voltage v(t). Assume
that v(t) = 0 for t < 0.
(Circuit: current source is (t) driving a 1 F capacitor with voltage ν(t); the is (t) waveform, in amperes, is plotted over 0 ≤ t ≤ 2 with levels 1 and −1.)
3.8 In the following circuit, determine the output vo (t), using the ideal op-amp
approximations:
(Op-amp circuit: input 10 cos(2000t) V, a 1 kΩ resistor, a 1 mH inductor, a 0.1 V source, and output νo (t).)
3.10 Using KCL and the v–i relations for resistors and capacitors, show that the
voltage v(t) in the following circuit satisfies the ODE
3 dv/dt + (1/2) v(t) = is (t).
(Circuit: current source is (t) in parallel with a 2 Ω resistor and a 3 F capacitor with voltage ν(t).)
3.11 In the next circuit, v(t) = 2 V for t < 0. Determine v(t) for t > 0 after the
switch is closed, and identify the zero-state and zero-input components of
v(t). In the circuit, vs denotes a DC voltage source (time-independent).
(Circuit: DC source νs , a switch that closes at t = 0, 2 Ω resistors, and a 0.25 F capacitor with voltage ν(t).)
3.12 In the next circuit, v(t) = 0 for t < 0. Determine v(t) for t > 0 after the
switch is closed.
(Circuit: 2 A source, a switch at t = 0, 2 Ω resistors, and a 1 F capacitor with voltage ν(t).)
3.14 Determine the ODE that describes the inductor current i(t) in the next circuit.
Hint: Apply KVL using a loop current i(t) such that v(t) = 2 di/dt.
(Series circuit: source νs (t), a 4 Ω resistor, and a 2 H inductor with voltage ν(t).)
3.15 In the circuit that follows, find i(t) for t > 0 after the switch is closed.
Assume that i(t) = 0 for t < 0.
(Circuit: 2 A source, a switch at t = 0, 2 Ω resistors, and a 1 H inductor carrying i(t).)
3.16 The circuit shown next is in DC steady state before the switch flips at t = 0.
Find vL (0− ) and iL (0− ), as well as iL (t) and vL (t), for t > 0.
(Circuit: 9 V source, 3 Ω, 6 Ω, and 5 Ω resistors, a 6 H inductor with current iL (t) and voltage νL (t), and a switch that flips at t = 0.)
3.17 Obtain the second-order ODE describing the capacitor voltage v(t) in the
series RLC circuit shown next. Hint: Proceed as in Problem 3.14 and use
i(t) = 2 dv/dt for the loop current.
(Series RLC circuit: source νs (t), a 1 Ω resistor, a 1 H inductor, and a 2 F capacitor with voltage ν(t).)
3.21 Let f (x) = x − (2 + j 3). Sketch the surface |f (x)| over the 2-D complex
plane and describe in words what the surface looks like.
4
Phasors and Sinusoidal Steady State
Suppose that we wish to calculate the voltage v(t) in Figure 4.1a, where the input
source is co-sinusoidal. In Chapter 3 we learned how to do this by writing and then
solving a differential equation. The solution involved doing a large amount of algebra,
using trigonometric identities. It turns out that there is a far simpler method for
calculating just the steady-state response v(t) in Figure 4.1a. The trick is to calculate
V in Figure 4.1b by treating it as a DC voltage in a resistive circuit having a 1 V
source and resistors with values −j Ω and 1 Ω. We then can use voltage division to
obtain
V = (1/(−j + 1))(1 V) = (1/(1 − j )) V.
Now, in this calculation j represents the imaginary unit and, thus, V is complex. If
you were to convert V to polar form by using your engineering calculator, you would
see an output like
(0.707107, 45°),
indicating a magnitude 0.707107 and an angle of 45° for the complex number V .
Surprisingly, this magnitude and angle are the correct magnitude and angle of v(t) in
Figure 4.1a so that v(t) is simply
v(t) = 0.707107 cos(t + 45°) V.
Figure 4.1 (a) An RC circuit with a cosine input, (b) the equivalent phasor circuit, and (c) input and
output signals of the circuit.
Figure 4.1c compares the input signal, cos(t), with the output signal, 0.707107 cos(t +
45◦ ). Notice that the output signal has the same frequency and shape as the input, but
it is attenuated (reduced in amplitude) and shifted in phase.
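Python's built-in complex arithmetic reproduces the calculator result directly; this short sketch (ours, not the text's) repeats the voltage-division computation:

```python
import cmath

V = 1 / (-1j + 1)                 # voltage division with impedances -j and 1 (ohms)
mag, ang = abs(V), cmath.phase(V)
assert abs(mag - 0.707107) < 1e-6        # magnitude of the phasor V
assert abs(ang - cmath.pi / 4) < 1e-12   # angle = +45 degrees
# hence v(t) = 0.707107 cos(t + 45 deg) V in sinusoidal steady state
```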
In this chapter we will learn why this trick with complex numbers works (even
though DC circuits do not have imaginary resistances, and a capacitor is not a resistor).
The trick is known as the phasor method, where V stands for the phasor of v(t) and
1 V is the phasor of cos(t) V. We will see that the phasor method is a simple technique
for determining the steady-state response of LTI circuits to sine and cosine inputs. For
brevity, we will refer to such responses as “sinusoidal steady-state.” We will explain
the phasor method in Section 4.1: We will see why the capacitor of Figure 4.1a was
converted to an imaginary resistance known as impedance and learn how to use
impedances to calculate sinusoidal steady-state responses in LTI circuits. Section 4.2
is mainly a sequence of examples demonstrating how to apply the phasor method using
the analysis techniques of Chapter 2, including source transformations, loop and node
analyses, and Thevenin and Norton equivalents. Sections 4.3 and 4.4 describe power
calculations in sinusoidal steady-state circuits and also discuss a phenomenon known
as resonance.
Recall that, for any
complex number C,
Re{C} = (C + C∗ )/2
and
Im{C} = (C − C∗ )/(j 2).
Furthermore, given any real φ, you should be able to recite Euler’s formula in your
sleep:
e^jφ = cos φ + j sin φ.
Now, suppose that a signal is defined by
f (t) ≡ Re{F e^jωt },
where ω is a real constant and F is a complex constant that can be written in expo-
nential form as
F = |F |ej θ .
Then what kind of a signal is f (t) and how can we plot it?
Let’s find out. Substituting the polar form of F into the formula for f (t), and
using the identity Re{e^jφ } = cos φ, we get
f (t) = Re{|F |e^jθ e^jωt } = Re{|F |e^j(ωt+θ) } = |F | cos(ωt + θ).
So, f (t) is a cosine signal with amplitude |F |, frequency ω, and phase shift θ.
Example 4.2
The signal
g(t) = 3 cos(2t − π/4),
with frequency ω = 2 rad/s, has phasor
G = 3e−jπ/4 = 3∠−π/4 = 3∠−45°.
See the phasor G and the signal g(t) in Figure 4.2.
The period of signals f (t) and g(t), with frequency ω = 2 rad/s, is T = 2π/ω = π s,
which is the time interval between the successive peaks (crests) of the f (t) and g(t)
curves shown in Figure 4.2a.
Example 4.3
The phasor of
v(t) = 2 cos(6t − π/2)
is
V = 2e−jπ/2 = −j 2 = 2∠−90°.
Signal v(t) has frequency ω = 6 rad/s, and its period is 2π/6 = π/3 s.
Section 4.1 Phasors, Co-Sinusoids, and Impedance 125
Figure 4.2 (a) Plots of cosine signals f (t) = 6 cos(2t + π/3) and g(t) = 3 cos(2t − π/4) versus t, and (b) the
locations of the corresponding phasors F = 6e^jπ/3 and G = 3e−jπ/4 in the complex plane. Note that the
signal f (t) “leads” (peaks earlier than) g(t) by π/3 + π/4 rad = 105°, because the angle of F is 105° greater
than the angle of G in the complex plane. Equivalently, g(t) “lags” (peaks later than) f (t) by 105°. Also,
the amplitude of g(t) is half as large as the amplitude of f (t), because the magnitude of phasor G is half
the magnitude of phasor F.
Since
sin φ = cos(φ − π/2)
for all real φ (a trig identity that you should be able to visualize), the
foregoing v(t) also can be expressed as 2 sin(6t). Therefore,
V = 2e−jπ/2 = −j 2
as well as
v(t) = Re{V e^j6t } = Re{2e−jπ/2 e^j6t } = 2Re{e^j(6t−π/2) } = 2 cos(6t − π/2).
Example 4.4
w(t) = 5 sin(5t + π/3). What is the phasor W of w(t)?
Solution The phasor of
5 cos(5t + π/3)
is
5∠60°.
Since sin φ = cos(φ − π/2), we have w(t) = 5 cos(5t + π/3 − π/2) = 5 cos(5t − π/6).
Hence, W = 5∠−30°.
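The conversions of Examples 4.2 through 4.4 can be scripted with a small helper; the function name below is our own, not the text's:

```python
import cmath
import math

def phasor(amplitude, phase_deg):
    """Phasor of amplitude*cos(wt + phase) as a Python complex number."""
    return cmath.rect(amplitude, math.radians(phase_deg))

# Example 4.2: g(t) = 3 cos(2t - 45 deg)  ->  G = 3 at -45 deg
G = phasor(3, -45)
assert abs(abs(G) - 3) < 1e-12
assert abs(math.degrees(cmath.phase(G)) + 45) < 1e-9

# Example 4.4: w(t) = 5 sin(5t + 60 deg) = 5 cos(5t - 30 deg)  ->  W = 5 at -30 deg
W = phasor(5, 60 - 90)
assert abs(math.degrees(cmath.phase(W)) + 30) < 1e-9
```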
Example 4.5
A cosine signal p(t), with frequency 3 rad/s, has phasor P = j 7. What is
p(t)?
Solution
p(t) = Re{j 7e^j3t } = Re{7e^j(3t+π/2) } = 7 cos(3t + π/2).
Alternatively,
P = j 7 = 7∠90° ⇒ p(t) = 7 cos(3t + 90°),
since ω = 3 rad/s.
In particular, linear combinations of sinusoids and their derivatives, all having the
same frequency ω, easily can be calculated by the use of phasors.
In this section, we will state and prove two principles concerning co-sinusoids and
their phasors and also demonstrate their applications in circuit analysis.
Superposition principle: Any linear combination
k1 f1 (t) + k2 f2 (t)
of co-sinusoids
f1 (t) = Re{F1 e^jωt }
and
f2 (t) = Re{F2 e^jωt },
with real constants k1 and k2 , is itself a co-sinusoid1 with phasor
k1 F1 + k2 F2 .
Example 4.6
Suppose that as shown in Figure 4.3
i1 (t) = 3 cos(3t) A
and
i2 (t) = −4 sin(3t) A
denote currents flowing into a circuit node. Determine the amplitude and
phase shift of the current i3 (t) = i1 (t) + i2 (t).
Solution The phasor of i1 (t) is I1 = 3 A, while the phasor of i2 (t) is
I2 = j 4 A. Since i3 (t) = i1 (t) + i2 (t), the superposition principle gives
I3 = I1 + I2 = 3 + j 4 = 5e^j tan−1(4/3) A,
and therefore,
i3 (t) = 5 cos(3t + tan−1 (4/3)) A ≈ 5 cos(3t + 53.1°) A.
Example 4.7
Let
v1 (t) = 3 sin(5t) V
1 Notice that the proof does not hold for f1 (t) and f2 (t) with different frequencies ω1 and ω2 . Therefore,
the superposition principle is valid only for co-sinusoids having the same frequency ω.
Figure 4.3 A circuit node where three branches with currents i1 (t), i2 (t), and
i3 (t) meet.
Figure 4.4 Signal waveforms of Example 4.6: (a) i1 (t) and i2 (t), and (b)
i3 (t) = i1 (t) + i2 (t).
and
v2 (t) = 2 cos(5t − π/4) V
denote the two element voltages in the circuit shown in Figure 4.5. Calculate
the amplitude of the third element voltage v3 (t).
Solution Since the KVL equation for the loop indicates that v3 (t) = v1 (t) + v2 (t),
the superposition principle gives
V3 = V1 + V2 = −j 3 + 2e−jπ/4 = √2 − j (3 + √2) V.
Using a calculator we easily find that |V3 | ≈ 4.635 V, which is the amplitude
of voltage v3 (t).
As the previous examples illustrate, we can write KVL and KCL circuit equations
for co-sinusoidal signals in terms of the signal phasors. Specifically,
Σloop Vdrop = Σloop Vrise
and
Σnode Iin = Σnode Iout ,
Figure 4.5 An elementary loop with three circuit elements. All elements carry
co-sinusoidal voltages.
where Vdrop (rise) denotes a phasor voltage drop (rise) and Iin (out) denotes a phasor
current flowing into (out of) a node.
Derivative principle
(i) The derivative df/dt of co-sinusoid
f (t) = Re{F e^jωt }
is itself a co-sinusoid with the same frequency ω and with phasor j ωF .
Example 4.8
Apply the superposition and derivative rules to find a particular solution of
the ODE
df/dt + 4f (t) = 2 cos(4t).
Note: (d/dt)Re{F e^jωt } = Re{F (d/dt)e^jωt } is justified because
Re{F e^jωt } = (F e^jωt + F ∗ e−jωt )/2
and
Re{F (d/dt)e^jωt } = (F (d/dt)e^jωt + F ∗ (d/dt)e−jωt )/2.
Solution Replacing each signal in the ODE by its phasor (using the derivative rule for df/dt), we obtain
j 4F + 4F = 2,
so that
F = 2/(4 + j 4) = (1/(2√2))e−jπ/4
and a particular solution is
f (t) = (1/(2√2)) cos(4t − π/4) ≈ 0.354 cos(4t − 45°).
Example 4.9
Determine the particular solution of
d2 y/dt2 + 3 dy/dt + 2y = 5 sin(6t).
Solution The phasor of the input 5 sin(6t) = 5 cos(6t − π/2) is −j 5. Replacing
each signal in the ODE by its phasor (each derivative contributing a factor of j 6), we obtain
((j 6)2 + 3(j 6) + 2)Y = (−34 + j 18)Y = −j 5.
So,
Y = −j 5/(−34 + j 18) ≈ 0.130∠117.9°,
giving the particular solution
y(t) ≈ 0.130 cos(6t + 117.9°).
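The phasor arithmetic of Example 4.9 takes only a few lines to check in Python (our sketch, not the book's):

```python
import cmath
import math

w = 6.0
F = -5j                                   # phasor of 5 sin(6t) = 5 cos(6t - 90 deg)
den = (1j * w) ** 2 + 3 * (1j * w) + 2    # characteristic factor evaluated at s = jw
Y = F / den

assert abs(den - (-34 + 18j)) < 1e-12
assert abs(abs(Y) - 0.130) < 1e-3
assert abs(math.degrees(cmath.phase(Y)) - 117.9) < 0.1
```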
These examples illustrate how phasors can be used to find steady-state solutions
of linear constant-coefficient ODEs with co-sinusoidal inputs. But, phasors also can
be used to completely avoid having to deal with differential equations in the analysis
of dissipative LTI circuits with co-sinusoid inputs. The key is to establish phasor
V –I relations for inductors and capacitors in such circuits, as illustrated in the next
example.
Example 4.10
Consider an inductor with some co-sinusoidal current
i(t) = Re{I e^jωt }
and voltage
v(t) = Re{V e^jωt }
in the direction of the current. (See Figure 4.6a.) Express the voltage phasor
V in terms of the current phasor I .
Solution We can do this by replacing each co-sinusoid in the inductor
v–i relation
v(t) = L di/dt
by its phasor. Since the phasor of di/dt is j ωI , the result is
V = j ωL I.
Similar V –I relations also can be established for capacitors and resistors embedded
in dissipative LTI circuits. For a capacitor (see Figure 4.6b), the v–i relation
i(t) = C dv/dt
Figure 4.6 Elements with co-sinusoidal signals and their phasor V − I relations:
(a) an inductor, (b) a capacitor, and (c) a resistor.
implies that
I = j ωC V or V = (1/(j ωC)) I.
For a resistor (see Figure 4.6c), Ohm’s law,
v(t) = Ri(t),
implies that
V = RI.
In the next section, we will discuss the implications of these phasor V –I relations for
the analysis of LTI circuits with co-sinusoid inputs. In each case, the relation can be
written as
V = ZI,
with
Z ≡ j ωL for inductors, 1/(j ωC) for capacitors, and R for resistors,
where ω is the signal frequency. The parameter Z is known as impedance and is
measured in units of ohms, since Z = V /I is a voltage-to-current ratio, just like ordinary
resistance R = v(t)/i(t). Unlike resistance, however, impedance is, in general, a complex
quantity. Its imaginary part is known as reactance. Inductors and capacitors have
reactances ωL and −1/(ωC), respectively, but the reactance of a resistor is zero. The
real part of the impedance is known as resistance. Inductors and capacitors have zero
resistance—they are purely reactive.3
3 An alternative form of the phasor V –I relation is
I = Y V,
where
Y ≡ 1/Z
is known as admittance and is measured in siemens (S = Ω−1 ). The real and imaginary parts of admittance
are known as conductance and susceptance, respectively.
V = ZI,
and
Σnode Iin = Σnode Iout
Example 4.11
In the circuit shown in Figure 4.7a, we can write
2 = I1 + I2 ,
Vc = 2I2 + j I2 ,
Figure 4.7 (a) An RLC circuit with a cosine input, and (b) the equivalent phasor circuit.
and
I1 = j Vc ,
respectively.
Notice that we could have obtained the phasor equations above directly
from the phasor equivalent circuit shown in Figure 4.7b, without ever writing
the differential equations, by applying phasor KCL and KVL and the V –I
relations pertinent to inductors, capacitors, and resistors. In Figure 4.7b
each element of Figure 4.7a has been replaced by the corresponding imped-
ance calculated at the source frequency ω = 2 rad/s. For example, the 1/2 H
inductor is replaced by j (2 rad/s)(1/2 H) = j Ω, and each co-sinusoid by the
corresponding phasor. Solving the phasor equations gives
I1 = 2 + j = √5 e^j0.464 A, I2 = −j = 1e−jπ/2 A,
and
Vc = −j I1 = √5 e^j(−1.107) V,
so that
i1 (t) = √5 cos(2t + 0.464) A,
i2 (t) = 1 cos(2t − π/2) A,
and
vc (t) = √5 cos(2t − 1.107) V
are the steady-state co-sinusoidal currents and capacitor voltage in the orig-
inal circuit. This can be confirmed by substituting these current expressions
into the preceding set of differential equations.
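The three phasor equations of Example 4.11 reduce to one equation by substitution, which is easy to carry out (and check) with Python's complex arithmetic; this sketch is ours, not the book's:

```python
import cmath
import math

# phasor equations: 2 = I1 + I2,  Vc = (2 + j)I2,  I1 = j*Vc
# substitution: 2 = j(2 + j)I2 + I2 = (2j - 1 + 1)I2 = 2j*I2
I2 = 2 / 2j
I1 = 2 - I2
Vc = (2 + 1j) * I2

assert I2 == -1j                              # i2(t) = cos(2t - 90 deg) A
assert abs(abs(I1) - math.sqrt(5)) < 1e-12    # |I1| = sqrt(5)
assert abs(cmath.phase(I1) - 0.464) < 1e-3    # i1(t) = sqrt(5) cos(2t + 0.464) A
assert abs(cmath.phase(Vc) + 1.107) < 1e-3    # vc(t) = sqrt(5) cos(2t - 1.107) V
```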
In Example 4.11 we demonstrated the basic procedure for calculating the steady-
state response of dissipative LTI circuits to co-sinusoidal inputs. However, we no
longer will need to write the differential equations. A step-by-step description of the
recommended procedure, called the phasor method, is as follows:
(1) Construct an equivalent phasor circuit by replacing all inductors L and capaci-
tors C with their impedances j ωL and 1/(j ωC), calculated at the source frequency
ω, and replacing all the signals (input signals as well as unknown responses)
with their phasors.
(2) Construct phasor KVL and KCL equations for the equivalent circuit, using the
phasor V –I relation V = ZI .
(3) Solve the phasor equations and convert the resulting phasors back to co-sinusoids
at the source frequency ω.
Zs = Z1 + Z2 (series equivalent)
and
Zp = Z1 Z2 /(Z1 + Z2 ) (parallel equivalent).
For example, if Z1 = 6 Ω and Z2 = j 8 Ω, then
Zs = 6 + j 8 Ω
and
Zp = 6(j 8)/(6 + j 8) = j 48/(6 + j 8) = j 48(6 − j 8)/100 = 3.84 + j 2.88 Ω.
Section 4.2 Sinusoidal Steady-State Analysis 137
Figure 4.8 (a) An RL circuit with a parallel inductor and resistor, and (b) the equivalent phasor circuit.
Example 4.12
In the circuit shown in Figure 4.8a, the source voltage
v(t) = 2 cos(2t) V
has phasor
V = 2 V.
In the equivalent phasor circuit (Figure 4.8b), the impedances are
Z1 = j (2 rad/s)(2 H) = j 4 Ω
and
Z2 = 4 Ω
for the inductor and resistor, respectively. The parallel equivalent of Z1 and
Z2 is
Zp = Z1 Z2 /(Z1 + Z2 ) = (j 4)(4)/(j 4 + 4) = j 4/(1 + j 1) Ω.
Therefore,
I = V /Zp = 2(1 + j 1)/(j 4) = (1 − j )/2 = (1/√2)e−jπ/4 A
and
i(t) = (1/√2) cos(2t − π/4) A.
Figure 4.9 (a) Phasor voltage division, and (b) current division.
Voltage and current division: The equations for voltage and current division in
phasor circuits (see Figure 4.9) have the forms
V1 = (Z1 /(Z1 + Z2 ))V , V2 = (Z2 /(Z1 + Z2 ))V ,
and
I1 = (Z2 /(Z1 + Z2 ))I , I2 = (Z1 /(Z1 + Z2 ))I ,
respectively.
Example 4.13
In the circuit shown in Figure 4.10a, calculate the steady-state voltage v(t).
Solution In the equivalent phasor circuit shown in Figure 4.10b,
1
rad
= −j
j (1 s )(1 F)
1 1 1 π
V = 1V = V = √ ej 4 V.
−j + 1 1−j 2
Figure 4.10 (a) An RC circuit in sinusoidal steady-state, and (b) the equivalent phasor circuit.
Hence,
v(t) = (1/√2) cos(t + π/4) V.
Example 4.14
The phasor equivalent of Figure 4.11a is shown in Figure 4.11b. Determine
the steady-state current i(t).
Solution The parallel equivalent impedance of the inductor and capacitor
in Figure 4.11b is
Zp = (j 8)(−j 4)/(j 8 + (−j 4)) = 32/(j 4) = −j 8 Ω.
Figure 4.11 (a) A circuit with a co-sinusoidal current source, and (b) the equivalent phasor circuit.
Vs = Zs Is ⇔ Is = Vs /Zs .
Figure 4.12 Equivalent sinusoidal steady-state networks for Vs = Zs Is .
Example 4.15
Find an expression for phasor V in Figure 4.13a in terms of source phasors
Vs and Is .
Solution In Figures 4.13b through 4.13e we demonstrate the simplifica-
tion of the given phasor network via source transformations and element
combinations. The final network is the Thevenin equivalent.4 Clearly,
V = (√2∠−45°)Is + ((1/√2)∠−45°)Vs .
Figure 4.13 Network simplification with source transformations (a→b, c→d) and element combinations
(b→c, d→e). Network (e) is the Thevenin equivalent of network (a).
4 With no loss of generality, the terminal voltage phasor of any linear network in sinusoidal steady-state
is V = VT − ZT I , where I is the terminal current and VT is the open-circuit voltage of the network. Hence,
a linear network in sinusoidal steady-state can be represented by its Thevenin equivalent with a Thevenin
voltage VT and impedance ZT = VT /IN , where IN is the short-circuit current phasor of the network (in exact
analogy with resistive networks).
Example 4.16
Determine the Thevenin equivalent of the network shown in Figure 4.14a,
using the superposition method.
Solution Since the network terminals are open, the source current Is flows
down the j 4 inductor, generating a voltage drop of j 4Is from top to
bottom of the element. Therefore, when the voltage source is suppressed
(i.e., replaced by a short), the output voltage of the network is j 4Is . When
the current source is suppressed, the inductor carries no current and the
output voltage is simply Vs . Superposition of these contributions yields
V = Vs + j 4Is .
Figure 4.14 (a) A phasor network with two independent sources and (b) its
Thevenin equivalent.
Figure 4.15 (a) A phasor network with an open-circuit voltage phasor VT , (b) the same network
terminated with an external short carrying a phasor current IN , and (c) the Thevenin equivalent of the
same network.
Example 4.17
The network shown in Figure 4.15a already has been marked in preparation
for node-voltage analysis. Determine VT and V1 , using the node-voltage
method.
Solution The KCL equation for the super-node on the left is
(V1 + 2)/2 + (V1 − VT )/(j 2) = j 1,
while, for the remaining node, the KCL equation is
(VT − V1 )/(j 2) + VT /(−j 2) = 0.
Note that the second equation implies that V1 = 0. Hence, the first equation
simplifies to
1 − VT /(j 2) = j 1 ⇒ VT = 2 + j 2 V.
Example 4.18
After termination by an external short, the network of Figure 4.15a appears
as shown in Figure 4.15b. The diagram in Figure 4.15b has been marked in
preparation for loop-current analysis. Determine the Norton current IN for
the network shown in Figure 4.15a by applying the loop-current method to
Figure 4.15b
Solution The KVL equations for the dashed super-loop and the remaining loop
in Figure 4.15b are

I1 = −1 − j IN and IN (1 − j 2) = 2(1 − j I1 ).

Eliminating I1 , we obtain

IN (1 − j 2) = 2(1 + j 1 − IN ) ⇒ IN = (2 + j 2)/(3 − j 2) A.
Example 4.19
Determine the Thevenin equivalent of the network shown in Figure 4.15a.
Solution From the previous Examples 4.17 and 4.18, we know that
VT = 2 + j 2 V and IN = (2 + j 2)/(3 − j 2) A.

Hence,

ZT = VT /IN = 3 − j 2 Ω.
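The node equations of Example 4.17 and the Thevenin impedance of Example 4.19 can be sketched numerically with plain Python complex arithmetic (Cramer's rule for the 2×2 system):

```python
# A numerical sketch of Examples 4.17-4.19: solve the two node equations with
# Cramer's rule, then form ZT = VT/IN.
#
# Node equations (unknowns V1 and VT):
#   (V1 + 2)/2 + (V1 - VT)/(2j) = 1j
#   (VT - V1)/(2j) + VT/(-2j)   = 0
a11, a12, b1 = 1/2 + 1/(2j), -1/(2j), 1j - 1
a21, a22, b2 = -1/(2j), 1/(2j) + 1/(-2j), 0

det = a11 * a22 - a12 * a21
V1 = (b1 * a22 - a12 * b2) / det
VT = (a11 * b2 - a21 * b1) / det

IN = (2 + 2j) / (3 - 2j)   # Norton current from Example 4.18
ZT = VT / IN               # Thevenin impedance of Example 4.19

print(V1, VT, ZT)          # expect 0, 2+2j, and 3-2j
```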
The net absorbed power—that is, the average value of p(t) = v(t)i(t) over one
oscillation period T = 2π/ω for signals v(t) and i(t)—can be calculated as

P = (1/T) ∫₀ᵀ v(t)i(t) dt = (1/T) ∫₀ᵀ |V | cos(ωt + θ) × |I | cos(ωt + φ) dt.
This integral can be evaluated using quite a lot of algebra, but it turns out that P can
be computed far more easily, directly from the signal phasors V and I , as shown next.
We next derive the phasor formula for the average absorbed power P by expressing
the instantaneous power p(t) = v(t)i(t) as the sum of a constant term and a zero-
average time-varying term. Since v(t) and i(t) are co-sinusoids, the instantaneous
power works out to be

p(t) = (1/2) Re{V I ∗ } + (1/2)|V ||I | cos(2ωt + θ + φ),

whose average over one period is the constant first term. Hence, the net absorbed
power is

P = (1/2) Re{V I ∗ },
which is the net power absorbed by an element having voltage and current phasors V
and I .
For a resistor,
V = RI
Section 4.3 Average and Available Power 145
so that
P = (1/2) Re{(RI )I ∗ } = R|I |²/2 = |V |²/(2R),

where |V | and |I | are the amplitudes of the resistor voltage and current. The same
result also can be expressed as

P = R Irms² = Vrms²/R,

where Irms ≡ |I |/√2 and Vrms ≡ |V |/√2 are known as the rms, or effective, amplitudes.
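The phasor formula P = (1/2) Re{V I∗} can be checked numerically against the time average of p(t) = v(t)i(t); the phasor values below are arbitrary test choices:

```python
# A numerical sketch checking that P = (1/2) Re{V I*} equals the time average
# of p(t) = v(t) i(t); the phasor values below are arbitrary test choices.
import cmath
import math

w = 2.0                                    # test frequency, rad/s
V = cmath.rect(5.0, math.radians(30.0))    # arbitrary voltage phasor
I = cmath.rect(2.0, math.radians(-25.0))   # arbitrary current phasor

P_phasor = 0.5 * (V * I.conjugate()).real

# Riemann-sum the instantaneous power over one period T = 2 pi / w.
T, N = 2 * math.pi / w, 100_000
dt = T / N
P_avg = sum(
    abs(V) * math.cos(w * k * dt + cmath.phase(V)) *
    abs(I) * math.cos(w * k * dt + cmath.phase(I))
    for k in range(N)
) * dt / T

print(P_phasor, P_avg)
```

Both numbers equal (1/2)|V||I| cos(θ − φ), here 5 cos 55° ≈ 2.87 W.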
Example 4.20
A 60-W lightbulb is designed to absorb 60 W of net power when utilized
with Vrms = 120 V available from a wall outlet. Estimate the resistance of a
60-W lightbulb. What is the peak value of the sinusoidal voltage waveform
at a wall outlet?
Solution Using one of the preceding formulas for P , we find that
R = Vrms²/P = 120²/60 = 240 Ω.
The peak value of the voltage waveform at a wall outlet is approximately
|V | = √2 Vrms ≈ 169.7 V.
(The power company does not always deliver exactly Vrms = 120 V.)
Example 4.21
A signal

f (t) = A cos(ωt + θ)

delivers to a 1 Ω resistor (for which |V | = A and R = 1 Ω) the average power

P = |V |²/(2R) = A²/2.
For a purely reactive element with impedance j X, on the other hand,

V = j XI,

so that

P = (1/2) Re{(j XI )I ∗ } = Re{j X|I |²/2} = 0.
Capacitors and inductors absorb no net power, because they return their instantaneous
absorbed power back to the circuit.
Example 4.22
Determine the net power absorbed by each element in the circuit shown in
Figure 4.16a.
Solution Clearly, the inductor will absorb no net power. To calculate P
for the resistor and the voltage source, we first calculate the phasor current
I for the loop. From Figure 4.16b,
I = (1 V)/(3 + j 4 Ω) = (1/5)∠−53.13° A.

Hence, the resistor absorbs

P = |I |²R/2 = ((1/5)² × 3)/2 = 3/50 = 0.06 W.
Energy conservation requires that the net power absorbed by the source must
be P = −0.06 W. Indeed, since the phasor voltage drop for the source in
the direction of the loop current is −1 V,

P = (1/2) Re{V I ∗ } = (1/2) Re{(−1 V)((1/(3 + j 4)) A)∗ }
  = −(1/2) Re{(3 + j 4)/25} = −(1/2)(3/25) = −0.06 W.

Figure 4.16 (a) A circuit with a single energy source, and (b) the equivalent
phasor circuit.
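The power bookkeeping of Example 4.22 can be sketched in a few lines: the resistor, inductor, and source powers computed from the loop phasor should sum to zero.

```python
# A power-bookkeeping sketch for Example 4.22: resistor, inductor, and source
# powers computed from phasors should sum to zero.
Z = 3 + 4j            # loop impedance: 3-ohm resistor in series with j4-ohm inductor
I = 1 / Z             # loop current phasor driven by the 1 V source

P_R = 0.5 * (3 * I * I.conjugate()).real       # resistor voltage is 3 I
P_L = 0.5 * (4j * I * I.conjugate()).real      # inductor voltage is j4 I
P_src = 0.5 * (-1 * I.conjugate()).real        # source drop along I is -1 V

print(P_R, P_L, P_src)    # approximately 0.06, 0.0, -0.06
```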
Consider next a load ZL = RL + j XL connected across the terminals of a network
having Thevenin voltage VT and Thevenin impedance ZT = RT + j XT , as in
Figure 4.17. The load voltage and current phasors are

VL = VT ZL /(ZT + ZL )

and

IL = VT /(ZT + ZL ).
Therefore, the net power delivered to (and absorbed by) the load ZL is
PL = (1/2) Re{VL IL∗ } = (1/2)(|VT |² Re{ZL })/|ZT + ZL |² = |VT |²RL /(2|ZT + ZL |²),
Figure 4.17 A network with Thevenin voltage VT and impedance ZT = RT + j XT
driving an external load ZL = RL + j XL , with load voltage VL and current IL .
With ZT + ZL = (RT + RL ) + j (XT + XL ), this becomes

PL = |VT |²RL /(2|(RT + RL ) + j (XT + XL )|²).

For a fixed RL , the power PL is maximized by choosing

XL = −XT ,

because the denominator of PL is then reduced to its smallest possible value, namely,
2(RT + RL )². Choosing XL = −XT , the net power formula becomes

PL = |VT |²RL /(2(RT + RL )²),

which, in turn, is maximized by choosing

RL = RT ,
as we learned in Chapter 2.
In summary then, an external load with resistance RL = RT and reactance XL =
−XT , that is, with an impedance

ZL = RT − j XT = ZT∗ ,

will extract the full available power from a network having a Thevenin impedance
ZT . We obtain the formula for the average available power Pa of a network by
evaluating the preceding formula for PL with RL = RT . The result is

Pa = |VT |²/(8RT ).

So, the available power of a network depends on the magnitude of its Thevenin
voltage phasor VT and only the resistive (real) part RT of its Thevenin impedance
ZT = RT + j XT . The available power will be delivered to any load having an
impedance ZL = ZT∗ . Such loads are known as matched loads.
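The conjugate-match result can be confirmed numerically by sweeping the load over a grid of RL and XL values; the Thevenin parameters below are arbitrary test assumptions.

```python
# A numerical sketch of the matched-load result: sweep RL and XL over a grid
# and confirm the delivered power peaks at ZL = ZT* (the conjugate match).
VT, ZT = 10.0, 4 + 3j      # arbitrary test Thevenin parameters

def PL(ZL):
    """Average load power |VT|^2 RL / (2 |ZT + ZL|^2)."""
    return abs(VT) ** 2 * ZL.real / (2 * abs(ZT + ZL) ** 2)

grid = (complex(RL / 10, XL / 10)
        for RL in range(1, 101) for XL in range(-100, 101))
best = max(grid, key=PL)

Pa = abs(VT) ** 2 / (8 * ZT.real)   # available-power formula

print(best, PL(best), Pa)           # expect 4-3j and 3.125 for both powers
```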
Example 4.23
Determine the available power of the network shown in Figure 4.18.
Solution We first use the superposition method to determine the open-
circuit voltage V as
V = 2 V + (2 + j 3) V = 4 + j 3 V,
Figure 4.18 The phasor network examined in Example 4.23, containing a 2 V
voltage source and a 1 A current source.
where the first term is the contribution of the voltage source and the second
term is the contribution of the current source. Thus, for this network,
|VT |² = |4 + j 3|² = 25 V² and ZT = 4 + j 3 Ω, so that

Pa = |VT |²/(8RT ) = 25/(8 × 4) = 0.78125 W.
Example 4.24
What load ZL is matched to the network shown in Figure 4.19 and what is
the available power of the network?
Solution ZL = ZT∗ , where ZT is the Thevenin impedance of the network
shown in Figure 4.19. Note that, for all possible loads,
Ix = (1 − 2Ix )/1 ⇒ Ix = 1/3 A.
Figure 4.19 The phasor network examined in Example 4.24: a 1 V source, two
1 Ω resistors, a −j1 Ω capacitor, and a dependent source 2Ix .
Since |VT | = 2/3 V and RT = 1 Ω, the available power is

Pa = |VT |²/(8RT ) = (2/3)²/(8 · 1) = 1/18 W.
4.4 Resonance
Consider the source-free circuits shown in Figures 4.20a and 4.20b. We next examine
whether the signals marked v(t) and i(t) in these circuits can be co-sinusoidal wave-
forms, despite the absence of source elements.
For the RC circuit shown in Figure 4.20a, the phasor KVL equation, expressed
in terms of the phasor I of a co-sinusoidal i(t), is
(R + 1/(j ωC))I = 0.
Because R + 1/(j ωC) cannot equal zero, the equation requires that

I = 0.
Hence, in the RC circuit shown in Figure 4.20a we cannot have co-sinusoidal i(t)
and v(t).
By contrast, the phasor KVL equation for the LC circuit shown in Figure 4.20b,
(j ωL + 1/(j ωC))I = 0,
Figure 4.20 (a) A source-free RC circuit (R > 0), and (b) a source-free LC
circuit, each with marked signals v(t) and i(t).
can be satisfied with a nonzero I , provided that j ωL + 1/(j ωC) = 0, that is,
ω² = 1/(LC). Hence, co-sinusoidal signals

i(t) = Re{I ej ωo t } = |I | cos(ωo t + θ)

and

v(t) = Re{(I /(j ωo C)) ej ωo t } = (|I |/(ωo C)) sin(ωo t + θ)

are possible with arbitrary |I | and θ = ∠I . The oscillation frequency

ωo = 1/√(LC)

is known as the resonant frequency of the circuit. The phenomenon itself (i.e., the
possible existence of steady-state co-sinusoidal oscillations in a source-free circuit)
is known as resonance.
Resonance is possible in the LC circuit of Figure 4.20b because the circuit is non-
dissipative. As we learned in Section 3.4, circuits with no dissipative elements (i.e.,
resistors) can exhibit non-transient zero-input response. Resonance in the foregoing
LC circuit is an example of such behavior. The inclusion of a series or parallel resistor
in the circuit, added in Figure 4.21, introduces dissipation and spoils the possibility of
source-free oscillations. We can see this in Figure 4.21a by writing the KVL equation
Zs I = 0,
Figure 4.21 (a) A source-free series RLC circuit, and (b) a source-free parallel
RLC circuit.
Zs = R + j (ωL − 1/(ωC))        Zp = 1/((1/R) + j (ωC − 1/(ωL)))
Figure 4.22 (a) Series RLC network and its equivalent impedance, and (b) parallel RLC network and its
equivalent impedance.
Example 4.25
In the series RLC circuit shown in Figure 4.23a, with an external voltage
input
v(t) = cos(ωt),
Figure 4.23 (a) A series RLC network with a cosine input, and (b) the
equivalent phasor network.
the loop current phasor at the resonant frequency ω = ωo = 1/√(LC) is

I = (1/R) A,
VL = (j ωo L)I = j (L/√(LC))(1/R) = j (1/R)√(L/C) V,

and

VC = (1/(j ωo C))I = −j (√(LC)/C)(1/R) = −j (1/R)√(L/C) V.
Hence, in the time domain,

i(t) = (1/R) cos(ωo t) A,
vR (t) = cos(ωo t) V,
vL (t) = (1/R)√(L/C) cos(ωo t + π/2) V,
vC (t) = −vL (t).
Figure 4.24 shows plots of the voltage waveforms for the special case with
R = 0.5 Ω, L = 1 H, C = 1 F, and ω = ωo = 1 rad/s. Although the amplitudes
of vL (t) and vC (t) are greater than the amplitude of the system
input v(t), KVL is not violated around the loop, because vL (t) + vC (t) = 0
(effective short) and v(t) = vR (t). The large amplitude response of vL (t)
and vC (t) is a consequence of the behavior of the series RLC network at
resonance and the relatively small value chosen for R.
Figure 4.24 (a) Voltage waveforms v(t) = vR (t), and (b) vL (t) = −vC (t) for the resonant system examined
in Example 4.25, for the special case with R = 0.5 Ω, L = 1 H, C = 1 F, and ω = ωo = 1 rad/s. Notice that the
amplitudes of the inductor and capacitor signals in (b) are larger than the amplitude of the input signal
v(t) in (a).
EXERCISES
4.3 Use the phasor method to determine the amplitude and phase shift (in rad)
of the following signals when written as cosines:
(a) f (t) = 3 cos(4t) − 4 sin(4t).
(b) g(t) = 2(cos(ωt) + cos(ωt + π/4)).
4.5 (a) Calculate the series equivalent impedance of the following network for
ω = 1 rad/s in rectangular and polar forms and determine the steady-
state current i(t), given that v(t) = 2 cos(t) V:
(The network: a 4 Ω resistor in series with a 3 H inductor, driven by v(t);
i(t) is the loop current and vL (t) the inductor voltage.)
(b) What is the phasor of the inductor voltage vL (t) in this network, given
that v(t) = 2 cos(t) V?
4.6 Consider the following circuit:
(Circuit elements: a 2 cos(5t) A source, a 5 Ω resistor, and a 1 H inductor carrying i(t).)
(Circuit for Problem 4.7: a 2 V source, impedances 2 Ω, 1 Ω, −j1 Ω, and j2 Ω, and a
1∠90° A source; node voltages V1 , V2 , V3 and loop currents I1 , I2 , I3 are marked.)
4.8 In the circuit shown for Problem 4.7, determine the loop-current phasors I1 ,
I2 , and I3 and express them in polar form.
4.9 Use the phasor method to determine v1 (t) in the following circuit:
(Circuit: a 2 cos(4t) V source, a 2 Ω resistor, a 1 H inductor carrying ix (t), a
1/16 F capacitor, and a dependent source 4ix (t); v1 (t) and v2 (t) are marked voltages.)
4.10 In the following circuit determine the phasor V and express it in polar form:
(Circuit: a 1 V source, a 2 A source, two 1 Ω resistors, and impedances j1 Ω and
−j1 Ω; V is the marked phasor voltage.)
4.11 Use the phasor method to determine the steady-state voltage v(t) in the
following op-amp circuit:
(Op-amp circuit: a 20 cos(4t) V source, two 1 Ω resistors, a 2 F capacitor, and a
1 H inductor; v(t) is the op-amp output voltage.)
4.14 (a) Calculate the equivalent impedance of the following network for (i)
ω = 5 krad/s, (ii) ω = 25 krad/s, and (iii) ω = 125 krad/s:
(Network elements: a 50 Ω resistor, a 2 μF capacitor, and a 0.8 mH inductor.)
(b) Assuming a cosine voltage input to the network, with a fixed amplitude
and variable frequency ω, at which value of ω is the amplitude of the
capacitor voltage maximized? At the same frequency, what will be the
amplitude of the resistor current?
5
Frequency Response H(ω)
of LTI Systems
Figure 5.1a shows an LTI system with input signal vi (t) and output vo (t). In Chapter 4
we learned how to calculate the steady-state output of such systems when the input
is a co-sinusoid (just a simple phasor calculation in Figure 5.1b, for instance). In
Chapter 7 we will learn how to calculate the zero-state system response to any input
signal of practical interest (e.g., a rectangular or triangular pulse, a talk show, a song,
a lecture) by using the following facts:

(1) All practical signals that can be generated in the lab or in a radio station can be
expressed as a superposition of co-sinusoids with different frequencies, phases,
and amplitudes.
(2) The output of an LTI system is the superposition of the individual responses
caused by the co-sinusoidal components of the input signal.
In this chapter we lay down the conceptual path from Chapter 4 (co-sinusoids) to
systems
Chapter 7 (arbitrary signals), and we do some practical calculations concerning dissi-
pative LTI circuits and systems with multifrequency inputs. Section 5.1 introduces the
concept of frequency response H (ω) of an LTI system and shows how to determine
H (ω) for linear circuits and ODEs. We discuss general properties of the frequency
response H (ω) in Section 5.2. Sections 5.3 and 5.4 describe the applications of H (ω)
in single- and multi-frequency system response calculations. Finally, in Section 5.5
we revisit the resonance phenomenon first encountered in Section 4.4.
Section 5.1 The Frequency Response H (ω) of LTI Systems 159
Figure 5.1 (a) A single-input LTI system, (b) its phasor representation, (c) |H(ω)| vs ω, and (d) ∠H(ω) (in
degrees) vs ω.
Suppose the input and output phasors of an LTI circuit are F = |F |∠F and
Y = |Y |∠Y , respectively. These phasors are related by

Y = H (ω)F,

where the function H (ω), with variable ω, is said to be the frequency response of the
circuit.
The next four examples illustrate how the frequency response H (ω) can be deter-
mined in LTI circuits and systems. We also introduce the concepts of amplitude
response |H (ω)| and phase response ∠H (ω).
Example 5.1
For the system shown in Figure 5.1a, the input is voltage signal f (t) = vi (t)
and the output is voltage signal y(t) = vo (t). From the phasor circuit in
Figure 5.1b,

H (ω) = Y/F = 1/(1 + j ω).

Because 1/(1 + j ω) = (1/√(1 + ω²))∠ − tan−1 (ω), we also can write

H (ω) = |H (ω)|∠H (ω),

where

|H (ω)| ≡ 1/√(1 + ω²)

and

∠H (ω) ≡ − tan−1 (ω)

are known as the amplitude and phase responses, respectively. The variations
of |H (ω)| and ∠H (ω) with frequency ω are plotted in Figures 5.1c and 5.1d,
respectively.
The plot of |H (ω)| in Figure 5.1c shows how the amplitude of the output signal
depends on the frequency of the input signal. For example, an input signal with
frequency near zero is passed with nearly unity scaling of the amplitude (the
amplitude of the output will be nearly the same as the amplitude of the input),
whereas inputs with high frequencies will be greatly attenuated (the amplitude of
the output will be nearly zero). As a consequence of this behavior, the circuit in
Figure 5.1a is referred to as a low-pass filter. The plot of ∠H (ω) shows how the
phase of the output signal depends on the frequency of the input signal. For an
input frequency that is near zero, the phase of the output will be nearly the same
as that of the input. For very large frequencies, the phase of the output will be
retarded by approximately 90°. We will study this example further in Section 5.3.
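The low-pass behavior just described can be sketched by evaluating H(ω) = 1/(1 + jω) at a few sample frequencies:

```python
# A sketch evaluating the amplitude and phase response of the low-pass filter
# H(w) = 1/(1 + jw) at a few sample frequencies.
import cmath
import math

def H(w):
    return 1 / (1 + 1j * w)

for w in (0.0, 1.0, 10.0):
    mag = abs(H(w))
    ph = math.degrees(cmath.phase(H(w)))
    print(f"w = {w:5.1f} rad/s: |H| = {mag:.3f}, phase = {ph:7.2f} deg")
```

Near DC the gain is 1 with no phase shift; at ω = 1 rad/s the gain is 1/√2 with −45° of phase; at high frequencies the gain falls toward zero and the phase approaches −90°.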
Example 5.2
For the system shown in Figure 5.2a determine the frequency response
H (ω).
Solution From the phasor circuit in Figure 5.2b,

Vo = (1/(1 + 1/(j ω)))Vi = (j ω/(1 + j ω))Vi .

Therefore,

H (ω) = Vo /Vi = j ω/(1 + j ω).
Figure 5.2 (a) A simple high-pass system, (b) its phasor representation, (c) |H(ω)| vs ω, and (d) ∠H(ω) (in
degrees) vs ω.
We see in Figure 5.2c that an input signal with frequency near zero will be almost
completely attenuated (amplitude of the output will be nearly zero), whereas inputs
with high frequencies will be passed without attenuation. As a consequence, the circuit
in Figure 5.2a is referred to as a high-pass filter. The plot of ∠H (ω) shows that for a
positive input frequency that is nearly zero, the phase of the output will be advanced
by approximately 90◦ . For very large frequencies, the phase of the output will be
nearly the same as that of the input. We will study this example further in Section 5.3.
Example 5.3
For the system shown in Figure 5.3a, the input is current signal f (t) and
the output is voltage signal y(t). Determine the frequency response of the
system H (ω) = Y/F .
Solution Using the phasor circuit, we find that
Y = Zp F,

where

Zp = 1/(1 + 1/(j ω) + j ω) = j ω/(1 − ω² + j ω).
Therefore,
H (ω) = Y/F = j ω/(1 − ω² + j ω).
The amplitude and phase responses are
|H (ω)| = |ω|/√((1 − ω²)² + ω²)
Figure 5.3 (a) A simple band-pass system, (b) its phasor representation, (c) |H(ω)| vs ω, and (d) ∠H(ω) (in
degrees) vs ω.
and
∠H (ω) = tan−1 ((1 − ω²)/ω).
Figure 5.3c shows that both low and high frequencies are attenuated, whereas
some frequencies lying between the lowest and highest are passed with significant
amplitude. Thus, the circuit in Figures 5.3a is called a band-pass filter. We will offer
further comments on this example in Section 5.3.
Example 5.4
A linear system with some input f (t) and output y(t) is described by the
ODE
dy/dt + 4y(t) = df/dt + 2f (t).
Determine the frequency response

H (ω) = Y/F

of the system. Also, identify the amplitude response |H (ω)| and phase
response ∠H (ω).
Solution Substituting the phasors Y and F into the ODE gives

j ωY + 4Y = j ωF + 2F,

that is,

(4 + j ω)Y = (2 + j ω)F.
Hence,
H (ω) = Y/F = (2 + j ω)/(4 + j ω).
The amplitude and phase response of the system are
|H (ω)| = √(4 + ω²)/√(16 + ω²)
and
∠H (ω) = tan−1 (ω/2) − tan−1 (ω/4),
respectively.
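The derived amplitude and phase formulas of Example 5.4 can be checked numerically against the complex value of H(ω) at an arbitrary frequency:

```python
# A numerical sketch for Example 5.4: the complex value (2+jw)/(4+jw) should
# match the derived amplitude and phase formulas at any frequency.
import cmath
import math

def H(w):
    return (2 + 1j * w) / (4 + 1j * w)

w = 3.0   # arbitrary test frequency
mag_formula = math.sqrt(4 + w ** 2) / math.sqrt(16 + w ** 2)
ph_formula = math.atan2(w, 2) - math.atan2(w, 4)

print(abs(H(w)), mag_formula)
print(cmath.phase(H(w)), ph_formula)
```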
Table 5.1 lists some of the general properties of the frequency response H (ω) of LTI
circuits introduced in the previous section. The reason for the conjugate symmetry
condition
H (−ω) = H ∗ (ω)
     Description                         Property
 1   Conjugate symmetry                  H (−ω) = H ∗ (ω)
 2   Even amplitude response             |H (−ω)| = |H (ω)|
 3   Odd phase response                  ∠H (−ω) = −∠H (ω)
 4   Real DC response                    H (0) = H ∗ (0) is real valued
 5   Steady-state response to ej ωt      ej ωt −→ LTI −→ H (ω)ej ωt
(property 1) can be traced back to the fact that capacitor and inductor impedances
1/(j ωC) and j ωL satisfy the same property—for example,

j (−ω)L = −j ωL = (j ωL)∗ .
Because the ω dependence enters H (ω) via only the capacitor and inductor impedances,
the frequency response H (ω) of an LTI circuit will always be conjugate symmetric.
Linear ODEs with real-valued constant coefficients, which describe such circuits, will
also have conjugate symmetric frequency response functions.1
One consequence of H (−ω) = H ∗ (ω) is that
|H (−ω)| = |H (ω)|;
that is, the amplitude response |H (ω)| is an even function of frequency ω (property
2). A second consequence is

∠H (−ω) = −∠H (ω),

which indicates that the phase response is an odd function of ω (property 3). Notice
that the amplitude and phase response curves shown in Figures 5.1 through 5.3 exhibit
the even and odd properties of |H (ω)| and ∠H (ω) just mentioned. Notice also that
H (0) is real valued in each case,2 consistent with property 4 in Table 5.1.
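Properties 1 through 4 can be spot-checked numerically; the sketch below uses the frequency response of Example 5.4 as a test case.

```python
# A numerical sketch of properties 1-4 in Table 5.1, using the frequency
# response H(w) = (2+jw)/(4+jw) from Example 5.4 as the test case.
import cmath

def H(w):
    return (2 + 1j * w) / (4 + 1j * w)

for w in (0.5, 1.0, 7.0):
    assert abs(H(-w) - H(w).conjugate()) < 1e-12                # property 1
    assert abs(abs(H(-w)) - abs(H(w))) < 1e-12                  # property 2
    assert abs(cmath.phase(H(-w)) + cmath.phase(H(w))) < 1e-12  # property 3

print(H(0.0))   # property 4: the DC response is real (0.5 here)
```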
Property 5 concerns the complex-valued exponential input

ej ωt = cos(ωt) + j sin(ωt),

whose real and imaginary parts are the co-sinusoid pair (cos(ωt), sin(ωt)). The
steady-state relation

ej ωt −→ LTI −→ H (ω)ej ωt ,
1 It is possible to define LTI systems with a frequency response for which conjugate symmetry is not
true—for instance, a linear ODE with complex-valued coefficients.
2 H (−ω) = H ∗ (ω) implies that H (0) = H ∗ (0), which indicates that the DC response H (0) must be
real, since only real numbers can equal their conjugates.
in Table 5.1 can be viewed as shorthand for the fact that in sinusoidal steady state

|F | cos(ωt + θ) −→ LTI −→ |H (ω)||F | cos(ωt + θ + ∠H (ω))

and

|F | sin(ωt + θ) −→ LTI −→ |H (ω)||F | sin(ωt + θ + ∠H (ω)).

Figure 5.4 Steady-state response of LTI systems H(ω) to cosine and sine inputs,
with arbitrary amplitudes |F| and phase shifts θ.

These steady-state input–output relations for LTI systems, summarized in Figure 5.4,
indicate that LTI systems convert their co-sinusoidal inputs of frequency ω into co-
sinusoidal outputs having the same frequency and the following amplitude and phase
parameters:
(1) Output amplitude = Input amplitude multiplied by |H (ω)|, and
(2) Output phase = Input phase plus ∠H (ω).
Therefore, knowledge of the frequency response H (ω) is sufficient to determine how
an LTI system responds to co-sinusoidal signals and their superpositions in steady
state. Furthermore, if the steady-state response of a system to a co-sinusoidal input is
not a co-sinusoid of the same frequency, then the system cannot be LTI.

Example 5.5
Return to Example 5.1 (see Figure 5.1a), where we found the system frequency
response to be
H (ω) = 1/(1 + j ω) = (1/√(1 + ω²))∠ − tan−1 (ω).
Consider two different inputs,
f1 (t) = 1 cos(0.5t) V
and
f2 (t) = 1 cos(2t) V.
Determine the steady-state system responses y1 (t) and y2 (t) to f1 (t) and
f2 (t).
Solution Applying the input–output relation shown in Figure 5.4, we note
that

y1 (t) = |H (0.5)| 1 cos(0.5t + ∠H (0.5)) V,

where |H (0.5)| and ∠H (0.5) are the amplitude and phase response evalu-
ated at the frequency ω = 0.5 rad/s of the input f1 (t). Since
|H (0.5)| = 1/√(1 + 0.5²) = 0.894

and

∠H (0.5) = − tan−1 (0.5) = −26.56°,

it follows that

y1 (t) = 0.894 cos(0.5t − 26.56°) V.

Likewise,

|H (2)| = 1/√(1 + 2²) = 0.447

and

∠H (2) = − tan−1 (2) = −63.43°,

and so

y2 (t) = |H (2)| 1 cos(2t + ∠H (2)) V = 0.447 cos(2t − 63.43°) V.
A summary of these results is presented in Figure 5.5. Study the plots care-
fully to better understand that the system

H (ω) = 1/(1 + j ω)

is a low-pass filter.
Example 5.6
Return to Example 5.2 (see Figure 5.2a), where we found the system frequency
response to be
H (ω) = j ω/(1 + j ω) = (|ω|/√(1 + ω²))∠ tan−1 (1/ω).
Consider two different inputs
f1 (t) = 2 sin(0.5t) V
and
f2 (t) = 1 cos(2t + 45°) V.
Determine the steady-state system responses y1 (t) and y2 (t) to f1 (t) and
f2 (t).
Section 5.3 LTI System Response to Co-Sinusoidal Inputs 169
Figure 5.5 A summary plot of the responses of the system H(ω) = 1/(1 + jω) to co-sinusoidal inputs f1 (t) and
f2 (t) examined in Example 5.5. Note that the higher frequency input (bottom left signal) is attenuated
more strongly than the lower frequency input (upper left). The system is therefore a low-pass filter.
Solution Once again, using the same input–output relation based on frequency
response, we obtain

y1 (t) = |H (0.5)| 2 sin(0.5t + ∠H (0.5)) V.

Since
|H (0.5)| = |0.5|/√(1 + 0.5²) = 0.447

and

∠H (0.5) = tan−1 (1/0.5) = 63.43°,

it follows that

y1 (t) = 0.894 sin(0.5t + 63.43°) V.

Likewise,

|H (2)| = |2|/√(1 + 2²) = 0.894

and

∠H (2) = tan−1 (1/2) = 26.56°,

and, therefore,

y2 (t) = |H (2)| 1 cos(2t + 45° + ∠H (2)) = 0.894 cos(2t + 71.56°) V.
A summary of these results is presented in Figure 5.6, confirming that the
system H (ω) = j ω/(1 + j ω) is a high-pass filter.
Figure 5.6 A summary plot of the responses of the system H(ω) = jω/(1 + jω) to co-sinusoidal inputs f1 (t) and
f2 (t) examined in Example 5.6. Note that the low-frequency input (upper left signal) is attenuated more
strongly than the high-frequency input (bottom left). The system is therefore a high-pass filter.
While the preceding Examples 5.5 and 5.6 illustrate that systems

H (ω) = 1/(1 + j ω)

and

H (ω) = j ω/(1 + j ω)

function as low-pass and high-pass filters, respectively, recall that we found the system

H (ω) = j ω/(1 − ω² + j ω)
of Example 5.3 in Section 5.1 to be a band-pass filter. Returning to Example 5.3, we
note that the amplitude response curve |H (ω)| shown in Figure 5.3c peaks at ω = ±1 rad/s
and vanishes as ω → 0 and ω → ±∞. Therefore, only those co-sinusoidal inputs with
frequencies ω in the vicinity of 1 rad/s pass through the system with relatively small
attenuation. This occurs because 1 rad/s is the resonant frequency of the parallel LC
combination in the circuit. At resonance, the parallel LC combination is an effective
open circuit and all the source current is routed through the resistor to generate a
peak response. Conversely, in the limits as ω → 0 and ω → ±∞, respectively, the
inductor and the capacitor behave as effective shorts, and force the output voltage to
zero.
Example 5.7
An LTI system H (ω) that is known to be a low-pass filter converts its input

f (t) = 2 sin(12t)

into the steady-state output

y(t) = √2 sin(12t + θ)

for some real valued constant θ. Determine H (12) and also compare the
average power that the input f (t) and output y(t) would deliver to a 1 Ω
resistor.
Solution First, we note that
|H (12)| = |Y |/|F | = √2/2 = 1/√2.
Hence,

H (12) = (1/√2) ej θ .

The average power that f (t) would deliver to a 1 Ω resistor is

Pf = (1/2)|F |² = 2 W,

while it is

Py = (1/2)|Y |² = 1 W

for the output y(t). The power ratio is therefore

Py /Pf = 1/2,

consistent with |H (12)|² = 1/2.
Example 5.8
A system converts its input
f (t) = 5 sin(12t)
component of the output y(t) would not be present. LTI systems cannot
create new frequencies that are not present in their inputs.
For a DC input (ω = 0), the relation

ej ωt −→ LTI −→ H (ω)ej ωt

reduces to

1 −→ LTI −→ H (0).
Linearity then implies that for an arbitrary DC input f (t) = Fo , the relation is
Fo −→ LTI −→ H (0)Fo ,
where the response H (0)Fo is real valued. (Recall from property 4 in Table 5.1 that
H (0) is real valued.)
Example 5.9
What is the steady-state response of the system
2 + jω
H (ω) =
4 + jω
to a DC input
f (t) = 5?
Solution Since

H (0) = (2 + j 0)/(4 + j 0) = 0.5,

the steady-state response must be

y(t) = H (0) × 5 = 2.5.
The frequency response of a dissipative LTI system also can be measured in the
lab as follows:

(1) Apply a co-sinusoidal input f (t) = cos(ωt) and wait until the system reaches
its sinusoidal steady state.
(2) For each setting of the input frequency ω, observe and record the amplitude
|H (ω)| and phase shift ∠H (ω) from the measured circuit response.
The amplitude and phase shift data, |H (ω)| and ∠H (ω), collected over a wide range
of frequencies ω, then can be displayed as amplitude and phase response plots for the
system.
This method generally will reveal if the system is not LTI (in which case the
output y(t) corresponding to the cosine input f (t) = cos(ωt) usually is not a pure
co-sinusoid at frequency ω) or whether the system is non-dissipative (in which case
the system output may contain non-decaying components even after the input is
turned off). In either case, it will not be possible to infer |H (ω)| and ∠H (ω). For
such systems, H (ω) is not a meaningful system characterization. (See Section 5.5
for further discussion on non-dissipative systems.) For the frequent case with dissi-
pative LTI circuits and systems, however, the foregoing method provides a direct
experimental means for determining |H (ω)| and ∠H (ω).
More information often is revealed about |H (ω)| if we plot it on a logarithmic
scale rather than on a regular, or linear scale. That, in effect, is the idea behind the
decibel definition³

|H (ω)|dB ≡ 20 log |H (ω)|.

For instance, for the low-pass filter

H (ω) = 1/(1 + j ω),

we have

|H (ω)|dB = 20 log(|1|/|1 + j ω|) = 20 log |1| − 20 log |1 + j ω| = −20 log √(1 + ω²)
3 A decibel (dB) is one-tenth of a bel (B), which is the name given to log |H (ω)|² = 2 log |H (ω)| in
honor of the lab practices of Alexander Graham Bell, the inventor of the telephone and hearing aid.
Figure 5.7 Amplitude response of the filter H(ω) = 1/(1 + jω) on (a) linear and (b) dB scales, and of the
filter H(ω) = 1/((1 + jω)(100 + jω)) on (c) linear and (d) dB scales.
Figures 5.7a and 5.7b display the linear and dB amplitude-response plots, respectively,
for the same filter. In the dB plot in Figure 5.7b we also use a logarithmic scale for
the horizontal axis representing the frequency variable ω.
Figures 5.7c and 5.7d display the amplitude response of another low-pass filter
H (ω) = 1/((1 + j ω)(100 + j ω)),

for which

|H (ω)|dB = 20 log(|1|/(|1 + j ω||100 + j ω|)) = −20 log √(1 + ω²) − 20 log √(100² + ω²).
Clearly, the dB plots shown in Figures 5.7b and 5.7d are more informative than the
linear plots shown in Figures 5.7a and 5.7c. For instance, we can see from Figures 5.7b
and 5.7d that both filters have similar “flat” amplitude responses for ω ≪ 1, a detail
that is not as apparent in the linear plots. It is useful to remember that a 20 dB change
corresponds to a factor of 10 change in the amplitude |H (ω)| and a factor of 100
change in |H (ω)|². Likewise, a 3 dB change corresponds to a factor of √2 variation
in |H (ω)| and a factor of 2 variation in |H (ω)|², as summarized in Table 5.2.
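These dB rules of thumb can be sketched in a couple of lines:

```python
# A sketch of the dB rules of thumb stated above: +20 dB per factor of 10 in
# amplitude, and about +3 dB per factor of sqrt(2).
import math

def to_dB(mag):
    return 20 * math.log10(mag)

print(to_dB(10.0))              # 20 dB for a factor-of-10 amplitude change
print(to_dB(math.sqrt(2.0)))    # about 3.01 dB for a factor of sqrt(2)
print(to_dB(1 / math.sqrt(1 + 10.0 ** 2)))   # |H(10)| of 1/(1+jw), about -20 dB
```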
Table 5.2 A conversion table for amplitude response |H (ω)|, power response
|H (ω)|², and their representation in dB units.

 Change in dB    Factor in |H (ω)|    Factor in |H (ω)|²
     3 dB             √2                    2
    20 dB             10                    100
By superposition, if the input of an LTI system is a sum of co-sinusoids

f (t) = Σn=1..N |Fn | cos(ωn t + θn ),

then the steady-state output of the system is

y(t) = Σn=1..N |H (ωn )||Fn | cos(ωn t + θn + ∠H (ωn )).
This input–output relation for multi-frequency inputs is shown graphically in Figure 5.8
and will be the basis of the calculations presented in the following examples.
Example 5.10
A 1 H inductor current is specified as

i(t) = 2 cos(2t) + 4 cos(4t) A.

Determine the inductor voltage v(t), using the v–i relation for the inductor,
and confirm that the result is consistent with the input–output relation
shown in Figure 5.8.
Solution Given that L = 1 H and i(t) = 2 cos(2t) + 4 cos(4t), we obtain
v(t) = L di/dt = d/dt (2 cos(2t) + 4 cos(4t))
     = −4 sin(2t) − 16 sin(4t) = 4 cos(2t + π/2) + 16 cos(4t + π/2) V.
Now, the frequency response of the same system is
H (ω) = V /I = j ωI /I = j ω.
Applying the relation in Figure 5.8 with the input signal
2 cos(2t) + 4 cos(4t),
we have
y(t) = |H (2)|2 cos(2t + ∠H (2)) + |H (4)|4 cos(4t + ∠H (4)) V.
This yields the output
y(t) = 2 · 2 cos(2t + π/2) + 4 · 4 cos(4t + π/2) V,
in agreement with the previous result.
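The agreement claimed in Example 5.10 can be sketched numerically: passing each co-sinusoid of i(t) through H(ω) = jω reproduces the direct derivative at sample times.

```python
# A sketch verifying Example 5.10: sending each co-sinusoid of i(t) through
# H(w) = jw reproduces v(t) = L di/dt at sample times (L = 1 H).
import cmath
import math

def out_component(amp, w, t):
    """Steady-state output co-sinusoid |H||F| cos(wt + phase(H)) for H = jw."""
    H = 1j * w
    return abs(H) * amp * math.cos(w * t + cmath.phase(H))

for t in (0.0, 0.3, 1.7):
    y = out_component(2, 2, t) + out_component(4, 4, t)
    v = -4 * math.sin(2 * t) - 16 * math.sin(4 * t)   # direct differentiation
    assert abs(y - v) < 1e-9

print("phasor route matches L di/dt at the sampled times")
```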
Example 5.11
Suppose the input of the low-pass filter
1
H (ω) =
1 + jω
is
f (t) = 1 cos(0.5t) + 1 cos(πt) V.
Determine the system output y(t).
Solution Using the relation given in Figure 5.8, we have
y(t) = |H (0.5)|1 cos(0.5t + ∠H (0.5)) + |H (π)|1 cos(πt + ∠H (π)) V.
Now,
|H (0.5)| = 1/|1 + j 0.5| ≈ 0.894

and

∠H (0.5) = ∠(1/(1 + j 0.5)) ≈ −26.56°.

Likewise,

|H (π)| = 1/|1 + j π| ≈ 0.303

and

∠H (π) = ∠(1/(1 + j π)) ≈ −72.34°.

Therefore,

y(t) ≈ 0.894 cos(0.5t − 26.56°) + 0.303 cos(πt − 72.34°) V.
The input and output signals of Example 5.11 are plotted in Figure 5.9a. Notice
that the output signal y(t) is a smoothed version of the input, because the low-pass
filter H (ω) has attenuated the high-frequency content that corresponds to rapid signal
variation.
Section 5.4 LTI System Response to Multifrequency Inputs 179
Figure 5.9 Input and output signals f (t) and y(t) for (a) the low-pass filter examined in Example 5.11,
and (b) the high-pass filter examined in Example 5.12. Note that both signals in (b) are periodic and have
the same period To = 4π ≈ 12.57 s.
Example 5.12
Suppose the input of the high-pass filter
H (ω) = j ω/(1 + j ω) = (|ω|/√(1 + ω²))∠ tan−1 (1/ω)
is
f (t) = Σn=1..∞ (1/(1 + n²)) cos(0.5nt) V.
Solution The input contains the frequencies ωn = 0.5n rad/s, so the steady-
state output is

y(t) = Σn=1..∞ |H (0.5n)| (1/(1 + n²)) cos(0.5nt + ∠H (0.5n)) V.
Now,

|H (0.5n)| = |0.5n|/√(1 + 0.25n²)

and

∠H (0.5n) = tan−1 (1/(0.5n)).
Thus,
y(t) = Σn=1..∞ (|0.5n|/√(1 + 0.25n²)) (1/(1 + n²)) cos(0.5nt + tan−1 (1/(0.5n))) V.
The input and output signals f (t) and y(t) are plotted4 in Figure 5.9b. Both
f (t) and y(t) are periodic signals with period To = 4π s. Can you explain
why? If not, Chapter 6 will provide the answer.
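The 100-term truncation described in footnote 4 is straightforward to reproduce. The sketch below (assuming NumPy is available) sums the first N harmonics of the input and output series of Example 5.12 and confirms that both partial sums repeat with period 4π:

```python
import numpy as np

def f_trunc(t, N=100):
    # Partial sum of the input series: sum of cos(0.5 n t) / (1 + n^2)
    n = np.arange(1, N + 1)[:, None]
    return np.sum(np.cos(0.5 * n * t) / (1 + n**2), axis=0)

def y_trunc(t, N=100):
    # Each harmonic passes through the high-pass filter H(w) = jw/(1 + jw)
    n = np.arange(1, N + 1)[:, None]
    H = 1j * 0.5 * n / (1 + 1j * 0.5 * n)
    return np.sum(np.abs(H) * np.cos(0.5 * n * t + np.angle(H)) / (1 + n**2), axis=0)

t = np.linspace(0, 25, 500)
print(np.max(np.abs(f_trunc(t) - f_trunc(t + 4 * np.pi))))  # ~0: period is 4*pi
print(np.max(np.abs(y_trunc(t) - y_trunc(t + 4 * np.pi))))  # ~0: period is 4*pi
```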
Glancing at Figure 5.9, try to appreciate that we have just taken a major step
toward handling arbitrary inputs in LTI systems. We don’t even have names for the
complicated input and output signals shown in Figure 5.9!
Example 5.13
What is the steady-state response y(t) of the LTI circuit shown in Figure 5.10a
to the input
f (t) = 5 + sin(2t)?
[Figure 5.10: (a) the input f(t) drives a series connection of a 2 H inductor and a 3 Ω resistor, with the output y(t) taken across the resistor; (b) in the phasor equivalent, the inductor becomes an impedance jω2 Ω, and the signals become phasors F and Y.]
Figure 5.10 (a) An LTI circuit and (b) its phasor equivalent.
⁴ The plotted curves actually correspond to the sum of the first 100 terms (n = 1 to 100) in the expressions for f(t) and y(t). Similar curves calculated with many more terms are virtually indistinguishable from those shown in Figure 5.9b. Thus, the first 100 terms provide a sufficiently accurate representation.
Section 5.5 Resonant and Non-Dissipative Systems 181
Solution Using voltage division in the phasor circuit of Figure 5.10b,
Y = [3/(3 + jω2)] F,
so that
Y/F = H(ω) = 3/(3 + jω2).
Since
H(2) = 3/(3 + j4) = 3/(5∠53.13°) = 0.6∠−53.13°,
the steady-state response to
f(t) = 5 + sin(2t)
is
y(t) = |H(0)|·5 + |H(2)| sin(2t + ∠H(2)) = 5 + 0.6 sin(2t − 53.13°).
The rule
e^{jωt} −→ LTI −→ H(ω)e^{jωt},
applicable to finding the steady-state response of LTI systems, requires that the system be dissipative. The reason for this restriction is that in non-dissipative systems the steady-state response may contain additional undamped and possibly unbounded terms. For instance, in resonant circuits examined in Section 4.4, we saw that the steady-state output may contain unforced oscillations at a resonance frequency ωo — for example, ωo = 1/√(LC). For such circuits and systems, an H(ω)-based description of the steady-state response is necessarily incomplete.
Consider, for instance, the frequency response of the series RLC circuit shown
in Figure 4.23a:
H(ω) = I/V = 1/(R + jωL + 1/(jωC)) = jωC/[(1 − ω²LC) + jωRC].
In the limit, as R → 0,
H(ω) → jωC/(1 − ω²LC)
and the circuit becomes non-dissipative. For R = 0, the response to inputs e^{jωt} can no longer be described as
[jωC/(1 − ω²LC)] e^{jωt}.
Notice that for R = 0, as ω → 1/√(LC), we have |H(ω)| → ∞, indicating that non-dissipative systems can generate unbounded outputs with bounded inputs (an instability phenomenon that we will examine in detail in Chapter 10).
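The blow-up of |H(ω)| near resonance is easy to see numerically. A minimal sketch, assuming for illustration L = C = 1 (so ωo = 1 rad/s):

```python
import math

def H_mag(w, R, L=1.0, C=1.0):
    # |H(w)| for the series RLC current response H(w) = jwC / ((1 - w^2 LC) + jwRC)
    return (w * C) / math.hypot(1 - w**2 * L * C, w * R * C)

w0 = 1 / math.sqrt(1.0 * 1.0)  # resonance frequency for L = C = 1
for R in (1.0, 0.1, 0.01):
    print(f"R = {R}: |H(w0)| = {H_mag(w0, R):.1f}")  # grows as 1/R
```

At ω = ωo the real part of the denominator vanishes, so |H(ωo)| = 1/R, which grows without bound as R → 0.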
In summary, frequency-response-based analysis methods are extremely powerful
and widely used. We will continue to develop these techniques in Chapters 6 through
8. However, the concept of frequency response offers an incomplete and inadequate
description of non-dissipative and non-LTI systems. Therefore, we must be careful to
apply these techniques only to dissipative LTI systems. Beginning in Chapter 9, we
will develop alternative analysis methods that are appropriate for non-dissipative LTI
systems.
EXERCISES
5.1 Determine the frequency response H(ω) = Y/F of the circuit shown and sketch
|H (ω)| versus ω ≥ 0. In the diagram, f (t) and y(t) denote the input and
output signals of the circuit.
[Circuit diagrams for Exercises 5.1 and 5.2: resistors of 1 Ω, inductors of 0.2 H and 1 H, a 2 F capacitor, input source f(t), and output y(t).]
5.3 Determine the frequency response H(ω) = I/Vs of the circuit in Problem 3.10 of Chapter 3. Note that H(ω) can be obtained with the use of the phasor domain circuit as well as the ODE for v(t) given in Problem 3.10.
5.4 Determine the frequency response H(ω) = V/Vs of the circuit in Problem 3.17 of Chapter 3. Sketch |H(ω)| versus ω ≥ 0.
5.5 A linear system with input f (t) and output y(t) is described by the ODE
d²y/dt² + 4 dy/dt + 4y(t) = df/dt.
Determine the frequency response H(ω) = Y/F of the system.
5.6 Determine the amplitude response |H (ω)| and phase response ∠H (ω) of
the system in Problem 5.5. Also, plot ∠H (ω) versus ω for −10 < ω < 10.
5.7 A linear circuit with input f (t) and output y(t) is described by the frequency
response Y/F = H(ω) = jω/(4 + jω). Determine the following:
(a) Amplitude of y(t) when f(t) = 5 cos(3t + π/4) V.
(b) Output y(t) when the input is f (t) = 8 + 2 sin(4t) V.
5.9 Repeat Problem 5.8 for a linear system described by the ODE
dy/dt + y(t) = 4f(t).
5.10 In the circuit of Problem 5.2, the input is f (t) = 4 + cos(2t). Determine the
steady-state output y(t) of the circuit.
5.11 Given an input f(t) = 5 + 4e^{j2t} + 4e^{−j2t} and H(ω) = (1 + jω)/(2 + jω), determine the steady-state response y(t) of the system H(ω) and express it as a real-valued signal. Hint: Use the rule e^{jωt} −→ LTI −→ H(ω)e^{jωt} and superposition.
5.12 Repeat Problem 5.11 for the input f(t) = 2e^{−j2t} + (2 + j2)e^{−jt} + (2 − j2)e^{jt} + 2e^{j2t}.
(d) 4 −→ System −→ j 8.
Figure 6.1 A replica of Figure 5.9b, illustrating that LTI systems respond to periodic inputs with periodic
outputs.
The signals shown in Figure 6.1 are periodic. In general, a signal f(t) is said to be periodic if there exists some delay to such that
f(t − to) = f(t)
for all t, in which case also
f(t − kto) = f(t)
for all integers k. Thus, when periodic signals are delayed by integer multiples of some time interval to, they are indistinguishable from their undelayed forms. So, periodic signals consist of replicas, repeated in time. The smallest nonzero value of to that satisfies the condition f(t − to) = f(t) is said to be the period of the signal and is denoted by T. For the signals shown in Figure 6.1, the period is T = 4π s.
Signals cos(ωt), sin(ωt), and
e^{jωt} = cos(ωt) + j sin(ωt)
are periodic.¹ The period of
e^{jnωo t}
¹ Alternatively, a signal is periodic if its graph is an endless repetition of the same pattern over and over again, just like the graph in Figure 6.1b.
Section 6.1 Periodic Signals 187
is T = 2π/(nωo), because 2π/(nωo) is the smallest nonzero delay to that satisfies² the constraint e^{−jnωo to} = 1. Likewise, a weighted superposition of such exponentials,
f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t},
where the Fn are constant coefficients, is also periodic with a period T = 2π/ωo, which
is the smallest nonzero delay to that satisfies3 f (t − to ) = f (t).
The periodic function f (t), defined above, also can be expressed in terms of
a weighted superposition of cos(nωo t) and sin(nωo t) signals, or even in terms of
cos(nωo t + θn ), when f (t) is real. This gives rise to three different, but related,
series representations for periodic signals, as indicated in Table 6.1.
The representations of f (t) shown in Table 6.1, using periodic sums, are known
as Fourier series. The three equivalent versions shown in the table will be referred to
as exponential, trigonometric, and compact forms. Given a set of coefficients Fn that
specifies an exponential Fourier series representation of f (t), the equivalence of the
trigonometric form can be verified as explained next.
Exponential:   f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t}                      Fn = (1/T) ∫_T f(t) e^{−jnωo t} dt
Trigonometric: a0/2 + Σ_{n=1}^{∞} [an cos(nωo t) + bn sin(nωo t)]     an = Fn + F−n,  bn = j(Fn − F−n)
Compact:       c0/2 + Σ_{n=1}^{∞} cn cos(nωo t + θn)                  cn = 2|Fn|,  θn = ∠Fn  (for real f(t))

Table 6.1 Summary of different representations of a periodic signal f(t) having period T = 2π/ωo and fundamental frequency ωo. The formula for Fn in the exponential row will be derived in Section 6.2.
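The conversion rules in Table 6.1 can be spot-checked on a signal with known coefficients. A short sketch (the signal choice is ours, purely for illustration): for f(t) = 3 + 2 cos(2t) + 4 sin(2t) with ωo = 2 rad/s, the exponential coefficients are F0 = 3 and F±1 = 1 ∓ j2.

```python
import math

# Exponential coefficients of f(t) = 3 + 2 cos(2t) + 4 sin(2t), wo = 2 rad/s
F = {0: 3 + 0j, 1: 1 - 2j, -1: 1 + 2j}

n = 1
a_n = (F[n] + F[-n]).real                   # an = Fn + F-n
b_n = (1j * (F[n] - F[-n])).real            # bn = j(Fn - F-n)
c_n = 2 * abs(F[n])                         # cn = 2|Fn|
theta_n = math.atan2(F[n].imag, F[n].real)  # thetan = angle of Fn

print(a_n, b_n)      # recovers the cosine and sine amplitudes 2 and 4
print(c_n, theta_n)  # compact form: 2*sqrt(5) cos(2t + theta_1)
```

The recovered a1 = 2 and b1 = 4 match the original cosine and sine amplitudes, and the compact pair (c1, θ1) describes the same harmonic as a single shifted cosine.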
² We need e^{−jnωo to} = 1, which requires nωo to = 0, 2π, 4π, · · ·. The smallest nonzero choice for to clearly is 2π/(nωo), which is the period T.
³ f(t − to) = Σ_{n=−∞}^{∞} Fn e^{jnωo(t−to)} = f(t) only if to satisfies |n|ωo to = 2πk for every value of |n| ≥ 1. The smallest nonzero to that meets this criterion is to = T = 2π/ωo.
188 Chapter 6 Fourier Series and LTI System Response to Periodic Signals
Combining the n = ±m terms of the exponential series with the help of Euler's identity, we obtain
f(t) = F0 + Σ_{m=1}^{∞} [(Fm + F−m) cos(mωo t) + j(Fm − F−m) sin(mωo t)],
which matches the trigonometric form with
an ≡ Fn + F−n and bn ≡ j(Fn − F−n),
for n ≥ 0.
Notice that for a real-valued signal f (t), the coefficients an and bn of the trigono-
metric Fourier series
f(t) = a0/2 + Σ_{n=1}^{∞} [an cos(nωo t) + bn sin(nωo t)]
must be real-valued. This implies that F−n = Fn∗; in other words, the exponential series coefficients Fn are conjugate symmetric when f(t) is real-valued.
We next verify the equivalence of the compact form of the Fourier series to the exponential form, for real-valued f(t)—because f(t) is real-valued, we use the fact that the Fn coefficients have conjugate symmetry.
Writing Fm = |Fm| e^{j∠Fm} and using F−m = Fm∗, so that |F−m| = |Fm| and ∠F−m = −∠Fm, we have
f(t) = F0 + Σ_{m=1}^{∞} |Fm| (e^{j(mωo t+∠Fm)} + e^{−j(mωo t+∠Fm)})
= F0 + Σ_{n=1}^{∞} 2|Fn| cos(nωo t + ∠Fn).
Section 6.2 Fourier Series 189
This is the same as the compact trigonometric form for real f(t), shown in Table 6.1, with
cn = 2|Fn| and θn = ∠Fn
for n ≥ 0.
The Fourier series f(t) in Table 6.1 are sums of an infinite number of periodic signals with distinct frequencies ω = nωo and periods 2π/(nωo). It is the longest period, corresponding to the lowest frequency and n = 1, that defines an interval across which every periodic component repeats. Therefore, the period of the series is T = 2π/ωo. The corresponding lowest frequency ωo = 2π/T is referred to as the fundamental frequency of the series. We will refer to the component of f(t) with frequency ωo as the fundamental, and the component having frequency nωo as the nth harmonic. Finally, F0 = a0/2 = c0/2 will be referred to as the DC component of f(t). When the DC component is zero, we will refer to f(t) as having zero mean.
or its equivalents. (See Table 6.1 in the previous section.) The credit for sorting out
which periodic functions can be expressed as Fourier series goes to the German mathematician Johann Peter Gustav Lejeune Dirichlet.
⁴ You should realize, of course, that in the lab we can generate only a finite number of periods, due to time limitations. For example, for some f(t) with a fundamental frequency of ωo/2π = 1 MHz, the signal would have 3600 million periods in 1 hour. So long as the number of periods generated during an experiment is large enough, it is reasonable to treat the signal as periodic and represent it by a Fourier series.
If a periodic signal f (t) can be expressed in a Fourier series, then the series
coefficients Fn , known as Fourier coefficients, can be determined by the formula (see
Section 6.2.2 for the derivation)
Fn = (1/T) ∫_T f(t) e^{−jnωo t} dt,
where ∫_T denotes integration over one period (i.e., from t = 0 to T, or from t = −T/2 to T/2, or from t = t1 to t1 + T, where t1 is arbitrary). Dirichlet recognized that if f(t) is absolutely integrable over a period T, that is, if
∫_T |f(t)| dt < ∞,
then the Fourier coefficients Fn must be bounded,5 or |Fn | < ∞ for all n. With
bounded coefficients Fn , the convergence of the Fourier series of f (t) is possible,
and, in fact, guaranteed (as proved by Dirichlet), so long as f (t) has only a finite
number of minima and maxima, and a finite number of finite-sized discontinuities,6
within a single period T (i.e., so long as a plot of f(t) over a period T can be drawn on a piece of paper with a pencil having a finite-width tip). These Dirichlet sufficiency conditions for the convergence of the Fourier series—namely, that f(t) be absolutely integrable and be plottable—are satisfied by all periodic signals that can be generated
in the lab or in a radio station and displayed on an oscilloscope.
The rationale for this formula can be understood by analogy with 3-D vectors expanded in terms of the orthogonal unit vectors
û1 ≡ (1, 0, 0), û2 ≡ (0, 1, 0), û3 ≡ (0, 0, 1).
⁵ Notice that
|Fn| = |(1/T) ∫_T f(t) e^{−jnωo t} dt| ≤ (1/T) ∫_T |f(t) e^{−jnωo t}| dt = (1/T) ∫_T |f(t)| dt,
where we first use the triangle inequality—the absolute value of a sum can't be greater than the sum of the absolute values—and next the fact that the magnitude of a product is the product of the magnitudes, and that |e^{−jnωo t}| = 1. So, if ∫_T |f(t)| dt < ∞, then |Fn| < ∞.
⁶ At discontinuity points, the Fourier series converges to a value that is midway between the bottom and the top of the discontinuous jump.
For instance, the vector v = (3, −2/3, 5) can be expressed as
v = 3û1 − (2/3)û2 + 5û3.
In general,
v = Σ_{n=1}^{3} Vn ûn,
where
Vn = v · ûn,⁷
since
ûn · ûn = 1
and
ûn · ûm = 0 for m ≠ n.
Note that ûn · ûm = 0, m ≠ n, is the orthogonality condition pertinent to vectors û1, û2, and û3, which can be regarded as basis vectors for all 3-D vectors v. Furthermore, the coefficients Vn of the vector v can be regarded as projections⁸ of v along the basis vectors ûn.
By analogy, a convergent Fourier series
f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t}
can be viewed as an expansion of f(t) in terms of the basis functions
e^{jnωo t}, −∞ ≤ n ≤ ∞,
which are mutually orthogonal over any period T.⁹
⁷ The scalar, or dot, product of two vectors is the sum of the pairwise products of the two sets of vector coordinates.
⁸ A projection of one vector onto another is the component of the first vector that lies in the direction of the second vector.
⁹ Verification: Assuming m ≠ n, we find that
∫_T (e^{jnωo t})(e^{jmωo t})∗ dt = ∫_{t=0}^{2π/ωo} e^{j(n−m)ωo t} dt = (e^{j(n−m)2π} − 1)/(j(n − m)ωo) = (1 − 1)/(j(n − m)ωo) = 0.
A Fourier coefficient Fm of f (t) is then the projection of f (t) along the basis function
ej mωo t . To calculate the coefficient we multiply both sides of the series expression
with
(e^{jmωo t})∗ = e^{−jmωo t}
and integrate the products on each side across a period T . The result, called the inner
product (instead of dot product) of f(t) with e^{jmωo t}, is
∫_T f(t) e^{−jmωo t} dt = ∫_T [Σ_{n=−∞}^{∞} Fn e^{jnωo t}] e^{−jmωo t} dt
= Σ_{n=−∞}^{∞} Fn ∫_T (e^{jnωo t})(e^{jmωo t})∗ dt = T Fm,
provided that the series is uniformly convergent and hence a term-by-term integration
of the series is permissible. We then find (after exchanging m with n),
Fn = (1/T) ∫_T f(t) e^{−jnωo t} dt,
which can be utilized with any periodic f(t) satisfying the Dirichlet conditions to
obtain a Fourier series converging to f (t) at all points where f (t) is continuous.
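The Fn formula can also be approximated numerically by sampling one period. A sketch (NumPy assumed), checked on a signal whose coefficients are known in closed form:

```python
import numpy as np

def fourier_coeff(f, T, n, samples=4096):
    # Fn ~ (1/T) * integral over one period of f(t) e^{-j n wo t} dt,
    # approximated as a mean over uniform samples of the period
    wo = 2 * np.pi / T
    t = np.linspace(0, T, samples, endpoint=False)
    return np.mean(f(t) * np.exp(-1j * n * wo * t))

# f(t) = cos(wo t) has F1 = F-1 = 1/2 and all other Fn = 0
T = 2 * np.pi
print(fourier_coeff(np.cos, T, 1))   # ~0.5
print(fourier_coeff(np.cos, T, 3))   # ~0
```

Sampling the period uniformly and excluding the endpoint makes the discrete mean agree with the continuous average for all harmonics well below the sampling rate.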
The trigonometric Fourier series
f(t) = a0/2 + Σ_{n=1}^{∞} [an cos(nωo t) + bn sin(nωo t)]
can be obtained from the coefficient formulas summarized in Table 6.2:

Trigonometric: a0/2 + Σ_{n=1}^{∞} [an cos(nωo t) + bn sin(nωo t)]   an = (2/T) ∫_T f(t) cos(nωo t) dt,  bn = (2/T) ∫_T f(t) sin(nωo t) dt
Compact:       c0/2 + Σ_{n=1}^{∞} cn cos(nωo t + θn)                cn = 2|Fn|,  θn = ∠Fn  (for real f(t))

Table 6.2 Coefficient formulas for the trigonometric and compact Fourier series.
We will generally prefer the exponential form for LTI system response calculations. (See Section 6.3.) The trigonometric form is preferable only when f(t) is either an even function—that is, when
f(−t) = f(t)
—or an odd function—that is, when
f(−t) = −f(t).
A sum of sinusoids with frequencies ωk is not periodic unless there exists some number ωo such that all frequencies ωk are integer multiples of ωo. Thus, if the sum is periodic, then all possible ratios of the frequencies ωk are rational numbers. Furthermore, if the sum is periodic, then its fundamental frequency ωo is defined to be the largest number whose integer multiples match each and every ωk.
Example 6.1
Signal
Example 6.2
Signal
q(t) = cos(4t) + 5 sin(6t) + 2 cos(7t − π/3)
is periodic, because the frequencies 4, 6, and 7 rad/s are each integer multiples of 1 rad/s or, equivalently, the frequency ratios 4/6, 4/7, and 6/7 (and their inverses) all are rational. Furthermore, since 1 rad/s is the largest frequency whose integer multiples can match 4, 6, and 7 rad/s, the fundamental frequency of q(t) is ωo = 1 rad/s and its period is T = 2π/ωo = 2π s.
Using sin(6t) = cos(6t − π/2), we can rewrite the signal as
q(t) = cos(4t) + 5 cos(6t − π/2) + 2 cos(7t − π/3).
Thus, the parameters in the compact Fourier series are (compare with Table 6.2)
c4 = 1, θ4 = 0, c6 = 5, θ6 = −π/2 rad, c7 = 2, θ7 = −π/3 rad; all other cn are zero.
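Finding the fundamental frequency of a sum of sinusoids amounts to taking a greatest common divisor of the frequencies. A sketch using exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def frac_gcd(a, b):
    # gcd of two rationals: gcd of numerators over lcm of denominators
    a, b = Fraction(a), Fraction(b)
    num = gcd(a.numerator, b.numerator)
    den = (a.denominator * b.denominator) // gcd(a.denominator, b.denominator)
    return Fraction(num, den)

def fundamental(freqs):
    # Largest wo whose integer multiples match every listed frequency
    return reduce(frac_gcd, freqs)

print(fundamental([4, 6, 7]))   # 1, as in Example 6.2: wo = 1 rad/s, T = 2*pi s
print(fundamental([8, 10]))     # 2; in units of pi rad/s this gives wo = 2*pi rad/s
```

For frequencies that share an irrational factor, such as 8π and 10π rad/s, factor it out first and pass the rational multipliers.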
Example 6.3
What is the period of
Solution Since 2π is the largest number whose integer multiples (×4 and ×5) match frequencies 8π and 10π, the fundamental frequency of f(t) is ωo = 2π rad/s. Therefore, the period of f(t) is T = 2π/ωo = 2π/(2π rad/s) = 1 s.
Example 6.4
What are the exponential Fourier series coefficients of f (t) in Example 6.3?
Example 6.5
Find the exponential and compact Fourier series of f (t) = | sin(t)| shown
in Figure 6.2a.
Solution The period of sin(t) is 2π, while the period of f (t) = | sin(t)|
is T = π s as can be verified from the graph of f (t) shown in Figure 6.2a.
Therefore, we identify the fundamental frequency of f(t) as ωo = 2π/T = 2 rad/s. We will calculate Fn by using the integration limits of 0 and π, since f(t) can be described by a single equation on the interval 0 < t < π, namely, f(t) = |sin(t)| = sin(t). We then have
Fn = (1/T) ∫_T f(t) e^{−jnωo t} dt = (1/π) ∫_0^π sin(t) e^{−jn2t} dt = (1/π) ∫_0^π [(e^{jt} − e^{−jt})/(2j)] e^{−jn2t} dt
= (1/(j2π)) ∫_0^π (e^{j(1−2n)t} − e^{−j(1+2n)t}) dt = (1/(j2π)) [e^{j(1−2n)t}/(j(1 − 2n)) − e^{−j(1+2n)t}/(−j(1 + 2n))]₀^π
= −(1/(2π)) [(e^{j(1−2n)π} − 1)/(1 − 2n) + (e^{−j(1+2n)π} − 1)/(1 + 2n)].
Figure 6.2 (a) A periodic function f (t) = | sin(t)|, (b) plot of Fourier series of f (t)
truncated at n = 5, and (c) truncated at n = 20.
Now, since e^{j(1−2n)π} = e^{−j(1+2n)π} = −1 for every integer n, this simplifies to Fn = 2/(π(1 − 4n²)), so that
f(t) = Σ_{n=−∞}^{∞} [2/π] · [1/(1 − 4n²)] e^{jn2t}.
The compact-form coefficients are, for n ≥ 1,
cn = 2|Fn| = (4/π) · 1/(4n² − 1) = (1/π) · 1/(n² − 1/4)
and
θn = ∠Fn = π rad,
where the last line follows because for n ≥ 1 the Fn are all real and negative, so that their angles all have value π. Also, F0 = c0/2 = 2/π. The compact form of the Fourier series is therefore
f(t) = 2/π + (1/π) Σ_{n=1}^{∞} [1/(n² − 1/4)] cos(n2t + π).
Figures 6.2b and 6.2c show plots of the Fourier series of f (t), but with the
sums truncated at n = 5 and n = 20, respectively (for example, we dropped the sixth
and higher-order harmonics from the Fourier series to obtain the curve plotted in
Figure 6.2b.) Notice that the curve in Figure 6.2b approximates f (t) = | sin(t)| very
well, except in the neighborhoods where f (t) is nearly zero and abruptly changes
direction. The curve in Figure 6.2c, which we obtained by including more terms in
the sum (up to the 20th harmonic), clearly gives a finer approximation. Because f (t)
is a continuous function, the Fourier series converges to f (t) for all values of t.
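The truncation experiment of Figures 6.2b and 6.2c can be repeated numerically. The sketch below (NumPy assumed) evaluates the compact series of Example 6.5 and confirms that the n = 20 truncation fits |sin(t)| more closely than the n = 5 truncation:

```python
import numpy as np

def abs_sin_series(t, N):
    # Compact series of Example 6.5 truncated at harmonic N:
    # 2/pi + (1/pi) * sum_{n=1}^{N} cos(2 n t + pi) / (n^2 - 1/4)
    n = np.arange(1, N + 1)[:, None]
    return 2/np.pi + (1/np.pi) * np.sum(np.cos(2*n*t + np.pi) / (n**2 - 0.25), axis=0)

t = np.linspace(0, np.pi, 201)
err5 = np.max(np.abs(abs_sin_series(t, 5) - np.abs(np.sin(t))))
err20 = np.max(np.abs(abs_sin_series(t, 20) - np.abs(np.sin(t))))
print(err5, err20)   # the worst-case error shrinks as N grows
```

The largest errors occur near t = 0 and t = π, where |sin(t)| has corners, matching the behavior visible in Figure 6.2.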
Example 6.6
Prove the time-shift property from Table 6.3.
Solution This property states that
f(t) ↔ Fn ⇒ f(t − to) ↔ Fn e^{−jnωo to}.
To verify it, we replace t with t − to in the exponential series of f(t) to obtain
f(t − to) = Σ_{n=−∞}^{∞} Fn e^{jnωo(t−to)} = Σ_{n=−∞}^{∞} (Fn e^{−jnωo to}) e^{jnωo t}.
Hence, the expression in parentheses, Fn e^{−jnωo to}, is the nth Fourier coefficient for f(t − to), proving the time-shift property.
Example 6.7
What are the exponential-form Fourier coefficients Gn of the periodic func-
tion
g(t) = |cos(t)|, shown in Figure 6.3a?
Figure 6.3 (a) Plot of g(t) = | cos(t)|, and (b) plot of its Fourier series truncated
at n = 20.
Solution Clearly,
g(t) = |cos(t)| = f(t ± π/2),
where f (t) = | sin(t)| as in Example 6.5. Therefore, using the Fourier coef-
ficients Fn of f (t) from Example 6.5 and the time-shift property from
Table 6.3 with to = π/2 s, we obtain the Fourier coefficients Gn of g(t) as
Gn = Fn e^{−jnωo to} = Fn e^{−jn(2)(π/2)} = Fn e^{−jnπ}.
Note that the same result also could have been obtained by replacing t with t − π/2 in the compact form Fourier series for f(t) = |sin(t)| from
Example 6.5. A plot of the series for g(t), truncated at n = 20, is shown in
Figure 6.3b.
Examples 6.5 and 6.7 illustrated the influence of the angle, or phase coefficient,
θn on the shape of periodic signals. The amplitude coefficients cn for f (t) and g(t)
are identical, which implies that both functions are constructed with cosines of equal
amplitudes. The curves, however, are different, because the phase shifts θn of the
cosines are different. To further illustrate the impact of θn on the shape of a signal
waveform, we plot in Figure 6.4 a truncated version of another series,
h(t) = 2/π + (1/π) Σ_{n=1}^{∞} [1/(n² − 1/4)] cos(n2t + (1 − n)π/2),
1
0.8
0.6
0.4
0.2
2 4 6 8 t
−0.2
Figure 6.4 Plot of the Fourier series of signal h(t) truncated at n = 20.
having the same amplitude coefficients as f (t) and g(t), but different phase coeffi-
cients θn . Notice that h(t) has a shape that is different from both f (t) and g(t), which
is caused by the different Fourier phases. In general, both the Fourier amplitudes and
phases affect the shape of a signal.
Example 6.8
Given that (from Example 6.5)
f(t) = |sin(t)| = Σ_{n=−∞}^{∞} [2/π] · [1/(1 − 4n²)] e^{jn2t},
determine the exponential Fourier series of
g(t) = |sin(½t)|.
Solution Replacing t with t/2 in the series for f(t), we obtain
g(t) = f(t/2) = Σ_{n=−∞}^{∞} [2/π] · [1/(1 − 4n²)] e^{jnt}.
Notice that the Fourier series coefficients have not changed. A stretching
or squashing of a periodic waveform corresponds to a change in period and
fundamental frequency. Comparing f (t) and g(t), the period has increased
from π to 2π, and the fundamental frequency has dropped from 2 to 1 rad/s.
The waveform g(t) is simply a stretched (by a factor of 2) version of f (t),
plotted earlier in Figure 6.2a.
Example 6.9
A periodic signal f(t) with period T is specified over one period as
f(t) = e^{−at}, 0 ≤ t < T.
A plot of f(t) for a = 0.5 s⁻¹ and T = 2 s is shown in Figure 6.5a. Determine both the exponential and compact Fourier series for f(t), for arbitrary a and T.
Figure 6.5 (a) A periodic signal f (t); (b) plot of the Fourier series of f (t)
truncated at n = 20; (c) expanded plot of the same curve near t = 0; (d)
expanded plot of the same series truncated at n = 200; and (e) truncated at
n = 2000.
In the last line we used ωo T = 2π and e^{−j2πn} = 1. Thus, the exponential Fourier series is
f(t) = Σ_{n=−∞}^{∞} [(1 − e^{−aT})/√((aT)² + (2πn)²)] e^{−j tan⁻¹(2πn/(aT))} e^{jnωo t}.
The compact-form coefficients are
cn = 2|Fn| = 2(1 − e^{−aT})/√((aT)² + (2πn)²)
and
θn = ∠Fn = −tan⁻¹(2πn/(aT)).
Therefore, in compact form,
f(t) = (1 − e^{−aT})/(aT) + Σ_{n=1}^{∞} [2(1 − e^{−aT})/√((aT)² + (2πn)²)] cos(nωo t − tan⁻¹(2πn/(aT))).
Figure 6.5b displays a plot of the Fourier series for f(t) (assuming that a = 0.5 and T = 2), truncated at n = 20. Note that the plot exhibits small fluctuations about the true f(t) shown in Figure 6.5a. Furthermore, the fluctuations are largest near the points of discontinuity of f(t), at t = 0, 2, 4, etc., as shown in more detail in the expanded plot of Figure 6.5c. Including a larger number of higher-order harmonics in the truncated series reduces the widths of the fluctuations, but does not diminish their amplitudes. This is illustrated in Figures 6.5d and 6.5e, which show other expanded plots of the series, but now with the series truncated at n = 200 and 2000. In the limit, as the number of terms in the Fourier series becomes infinite, the fluctuation widths vanish as the fluctuations bunch up around the points of discontinuity and the infinite series converges to f(t) at all points, except those where the signal is discontinuous. At points of discontinuity, the series converges to a value that is midway between the bottom and the top of the discontinuous jump. This behavior of a Fourier series near points of discontinuity, where increasing the number of terms in the series narrows the fluctuations without reducing their amplitude, is known as the Gibbs phenomenon.
Example 6.10
Determine the exponential and compact-form Fourier series of the square-
wave signal
p(t) = {1 for 0 < t < D;  0 for D < t < 1},
where 0 < D < 1 and the signal period is T = 1 s. Figure 6.6a shows an
example of p(t) with a duty cycle of D = 0.25 = 25%.
Figure 6.6 (a) A square wave p(t) with 25 percent duty cycle and unity
amplitude, and (b) a plot of its Fourier series truncated at n = 200.
Solution With T = 1 s and ωo = 2π rad/s, the DC coefficient is
P0 = ∫_0^1 p(t) dt = D.
For n ≠ 0,
Pn = ∫_0^D e^{−jn2πt} dt = [e^{−jn2πt}/(−jn2π)]₀^D = (e^{−jn2πD} − 1)/(−jn2π)
= e^{−jnπD} (e^{−jnπD} − e^{jnπD})/(−jn2π) = [sin(nπD)/(nπ)] e^{−jnπD}.
This is consistent, in the limit n → 0, with P0 determined above, as can be verified by using l'Hôpital's rule.¹⁰ Thus, in exponential form the Fourier
series is
p(t) = Σ_{n=−∞}^{∞} [sin(nπD)/(nπ)] e^{j(n2πt−nπD)},
and in compact-form is
p(t) = D + Σ_{n=1}^{∞} [2 sin(nπD)/(nπ)] cos(n2πt − nπD).
Figure 6.6b shows a plot of the Fourier series of p(t) for D = 0.25, truncated
at n = 200. Note the Gibbs phenomenon near t = 0, t = 0.25, etc., where p(t) is
discontinuous.
All discontinuous signals having Fourier series coefficients cn that are proportional to 1/n (for large n) exhibit the Gibbs phenomenon. Notice that in Examples 6.5 and 6.7, where the signals were continuous, cn was proportional to 1/n² (for large n) and the Gibbs phenomenon was absent. When cn decays as 1/n² (or faster), the contribution of higher-order harmonics to the Fourier series is less important than in cases where cn is proportional to 1/n, and the Gibbs phenomenon does not occur.
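The Gibbs overshoot can be measured directly on the square wave of Example 6.10. The sketch below (NumPy assumed) evaluates the truncated series just inside the D = 0.25 pulse; the peak stays near 1.09 (roughly a 9% overshoot) no matter how many terms are kept:

```python
import numpy as np

def p_series(t, D, N):
    # Compact series of Example 6.10 truncated at harmonic N:
    # D + sum_{n=1}^{N} (2 sin(n pi D)/(n pi)) cos(2 pi n t - n pi D)
    n = np.arange(1, N + 1)[:, None]
    return D + np.sum(2*np.sin(n*np.pi*D)/(n*np.pi) * np.cos(2*np.pi*n*t - n*np.pi*D), axis=0)

t = np.linspace(0.0, 0.1, 5001)   # a fine grid just inside the pulse
for N in (50, 200, 800):
    print(N, np.max(p_series(t, 0.25, N)))   # overshoot persists near 1.09
```

Increasing N squeezes the overshoot lobe closer to the discontinuity but does not shrink its height, which is exactly the Gibbs behavior described above.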
¹⁰ Applying l'Hôpital's rule yields
lim_{n→0} sin(nπD)/(nπ) = lim_{n→0} [d/dn sin(nπD)] / [d/dn (nπ)] = [πD cos(nπD)|_{n=0}] / π = D = P0.
Figure 6.7 (a) A square wave of peak-to-peak amplitude 4 and period 12 s, and
(b) its truncated Fourier series plot.
Example 6.11
Let f (t) denote a zero-mean square wave with a period of T = 12 s and
with a peak-to-peak amplitude of 4, as depicted in Figure 6.7a. Express f (t)
as a stretched, scaled, and offset version of p(t) defined in Example 6.10
and then determine the compact-form Fourier series of f (t).
Solution Consider first 4p(t/12), which is, for D = 1/2, a square wave with 50% duty cycle, period T = 12 s, and an amplitude of 4. It differs from f(t) shown in Figure 6.7a only by a DC offset of 2. Clearly, then, we can write
f(t) = 4p(t/12) − 2
and use the compact series
p(t) = 1/2 + Σ_{n=1}^{∞} [2 sin(nπ/2)/(nπ)] cos(n2πt − nπ/2),
with t replaced by t/12, to obtain
f(t) = Σ_{n=1}^{∞} [8 sin(nπ/2)/(nπ)] cos(n(π/6)t − nπ/2).
Figure 6.7b shows a truncated Fourier series plot of f(t) obtained from the preceding expression.
Example 6.12
Obtain the trigonometric-form Fourier series for f (t) in Figure 6.7a, using
the formula for bn from Table 6.2.
Solution The period of f (t) is T = 12 s and
ωo = 2π/T = π/6 rad/s.
The function is odd, so by property 7 from Table 6.3, all an = 0. Using the
formula for bn from Table 6.2,
bn = (2/12) ∫_{−6}^{6} f(t) sin(n(π/6)t) dt.
Since the integrand is even—note that odd f(t) × odd sin(n(π/6)t) is an even function—we can evaluate bn as
bn = (2 · 2/12) ∫_0^6 2 sin(n(π/6)t) dt = (8/12) [cos(n(π/6)t)/(−nπ/6)]₀^6 = −(4/(nπ))(cos(nπ) − 1).
Example 6.13
The function q(t) is periodic with period T = 4 s and is specified as
q(t) = {2t for 0 < t < 2 s;  0 for 2 < t < 4 s}.
Solution With ωo = 2π/T = π/2 rad/s, the DC coefficient is
Q0 = (1/4) ∫_0^2 2t dt = (1/4) t²|₀² = 1.
For n ≠ 0,
Qn = (1/4) ∫_0^2 2t e^{−jn(π/2)t} dt = (1/2) ∫_0^2 t [d/dt (e^{−jn(π/2)t}/(−jnπ/2))] dt.
(The last line of this expression does not hold for n = 0, which is why we
examined this case separately.) Using integration by parts, we obtain
Qn = (1/2) [t e^{−jn(π/2)t}/(−jnπ/2)]₀² − (1/2) ∫_0^2 e^{−jn(π/2)t}/(−jnπ/2) dt
= (1/2)(2e^{−jnπ} − 0)/(−jnπ/2) − (1/2) [e^{−jn(π/2)t}/(−jnπ/2)²]₀²
= (−1)ⁿ/(−jnπ/2) − (1/2) · ((−1)ⁿ − 1)/(−n²π²/4) = [2((−1)ⁿ − 1) + j2πn(−1)ⁿ]/(π²n²)
= j2/(πn) for even n > 0, and −(4 + j2πn)/(π²n²) for odd n.
After finding the magnitudes and phases of Qn, for even and odd n, the compact Fourier series can be written as
q(t) = 1 + Σ_{n=2 (even)}^{∞} [4/(πn)] cos(n(π/2)t + π/2) + Σ_{n=1 (odd)}^{∞} [4√(4 + π²n²)/(π²n²)] cos(n(π/2)t + π + tan⁻¹(πn/2)).
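The closed-form Qn of Example 6.13 can be checked against a brute-force numerical evaluation of the defining integral. A sketch (NumPy assumed, midpoint-rule integration):

```python
import numpy as np

def Q_formula(n):
    # Closed-form result from Example 6.13 (valid for n != 0)
    return (2*((-1)**n - 1) + 2j*np.pi*n*(-1)**n) / (np.pi**2 * n**2)

def Q_numeric(n, samples=200_000):
    # Qn = (1/4) * integral from 0 to 2 of 2 t e^{-j n (pi/2) t} dt
    t = (np.arange(samples) + 0.5) * (2 / samples)   # midpoints of the subintervals
    return np.mean(2 * t * np.exp(-1j * n * (np.pi/2) * t)) * (2 / 4)

for n in (1, 2, 3):
    print(n, Q_formula(n), Q_numeric(n))   # the two columns agree
```

The even-n values come out purely imaginary (j2/(πn)), matching the case split at the end of the example.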
Example 6.14
Prove the derivative property from Table 6.3.
Solution This property states that
f(t) ↔ Fn ⇒ df/dt ↔ jnωo Fn
when f(t) is a continuous function. In other words, the derivative f′(t) ≡ df/dt will have a Fourier series with Fourier coefficients jnωo Fn.
To verify the property, we differentiate
f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t}
to obtain
df/dt = (d/dt) Σ_{n=−∞}^{∞} Fn e^{jnωo t} = Σ_{n=−∞}^{∞} Fn (d/dt) e^{jnωo t} = Σ_{n=−∞}^{∞} (jnωo Fn) e^{jnωo t}.
This is a Fourier series expansion for the function df/dt, where the expression in parentheses, jnωo Fn, is the nth Fourier coefficient. This proves the derivative property.
Example 6.15
Let
g(t) = df/dt,
where
f(t) = |sin(t)|.
Determine the Fourier coefficients Gn of g(t).
Solution Since ωo = 2 rad/s for f(t) (see Example 6.5), the derivative property gives
Gn = jnωo Fn = jn2Fn = j4n/(π(1 − 4n²)).
e^{jωt} −→ LTI −→ H(ω) e^{jωt}

[Figure 6.10: f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t} −→ LTI system −→ y(t) = Σ_{n=−∞}^{∞} H(nωo) Fn e^{jnωo t}; equivalently, Fn −→ H(ω) −→ Yn = H(nωo) Fn.]
Figure 6.10 Input–output relation for dissipative LTI systems with periodic
inputs.
for any set of coefficients Fn. Thus, as illustrated in Figure 6.10, the steady-state response of an LTI system H(ω) to an arbitrary periodic input
f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t}
is the periodic output
y(t) = Σ_{n=−∞}^{∞} H(nωo) Fn e^{jnωo t}.
The input–output relation for periodic signals described in Figure 6.10 indicates
that an LTI system H (ω) simply converts the Fourier coefficients Fn of its periodic
input f (t) into the Fourier coefficients
Yn = H(nωo) Fn
of the steady-state output y(t).
Example 6.16
The input of a linear system
H(ω) = (2 + jω)/(3 + jω)
is the periodic function
f(t) = Σ_{n=−∞}^{∞} [n/(1 + n²)] e^{−jn4t}.
What are the Fourier coefficients Yn of the periodic system output y(t)?
Solution With ωo = 4 rad/s,
H(nωo) = H(n4) = (2 + jn4)/(3 + jn4),
and thus
Yn = H(nωo) Fn = [(2 + jn4)/(3 + jn4)] · [n/(1 + n²)].
Consider next the three systems
H1(ω) = 1/(1 + jω),
H2(ω) = jω/(1 + jω),
and
H3(ω) = jω/(1 − ω² + jω).
Suppose that all three systems are excited by the same periodic input
f(t) = |sin(½t)| = Σ_{n=−∞}^{∞} [2/π] · [1/(1 − 4n²)] e^{jnt},
with fundamental frequency ωo = 1 rad/s. (See Example 6.8 in the previous section.)
Let y1 (t), y2 (t), and y3 (t) denote the steady-state response of the systems to this
input. Applying the relation in Figure 6.10, we determine that
y1(t) = Σ_{n=−∞}^{∞} [1/(1 + jn)] · [2/π] · [1/(1 − 4n²)] e^{jnt},
y2(t) = Σ_{n=−∞}^{∞} [jn/(1 + jn)] · [2/π] · [1/(1 − 4n²)] e^{jnt},
Section 6.3 System Response to Periodic Inputs 211
and
y3(t) = Σ_{n=−∞}^{∞} [jn/(1 − n² + jn)] · [2/π] · [1/(1 − 4n²)] e^{jnt}.
These expressions can be readily converted to compact form, and then their truncated
plots can be compared with the input signal |sin(½t)| to assess the impact of each
system on the input. However, we also can gain some insight by examining how the
magnitudes of the Fourier coefficients of input f (t) and responses y1 (t), y2 (t), and
y3 (t) compare.
Figures 6.11a through 6.11d display point plots of |Fn|², |H1(n)|²|Fn|², |H2(n)|²|Fn|², and |H3(n)|²|Fn|² versus harmonic frequency nωo = n rad/s, representing the squared magnitude of the Fourier coefficients of f(t), y1(t), y2(t), and y3(t), respectively. (The reason for plotting the squared magnitudes instead of just the magnitudes
will be explained in the next section.) Notice that |H1 (n)|2 |Fn |2 , representing y1 (t),
has been reduced compared with |Fn |2 , for |n| ≥ 1. This shows that system H1 (ω)
attenuates the high-frequency content of the input f (t) in producing y1 (t). As a
consequence, the waveform y1 (t), shown in Figure 6.12b, is smoother than the input
f (t), shown in Figure 6.12a. This occurs because substantial high-frequency content
is necessary to produce rapid variation in a waveform.
By contrast, we see in Figure 6.11c that H2 (ω) has zeroed out the lowest frequency
(DC component) of the input. Hence, the response y2 (t), shown in Figure 6.12c, is a
zero-mean signal, but it retains the sharp corners present in f (t), indicating insignif-
icant change in the high-frequency content of the input.
Figure 6.11 (a) |Fn |2 vs nωo = n rad/s, (b) |H1 (n)|2 |Fn |2 , (c) |H2 (n)|2 |Fn |2 , and (d) |H3 (n)|2 |Fn |2 .
Finally, by comparing Figures 6.11c and 6.11d, we should expect that y3 (t) will
look more like a sine wave than does y2 (t), because |H3 (n)|2 |Fn |2 is almost entirely
dominated by the n = ±1 harmonic (i.e., the fundamental). Indeed, Figure 6.12d
shows that y3 (t) is almost a pure co-sinusoid with frequency 1 rad/s.
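The coefficient-domain picture of Figure 6.10 translates directly into code: scale each Fn by H(nωo) and resynthesize. A sketch (NumPy assumed) for the input f(t) = |sin(½t)| and the filters H1 and H2:

```python
import numpy as np

def Fn(n):
    # Coefficients of f(t) = |sin(t/2)|, fundamental wo = 1 rad/s (Example 6.8)
    return (2/np.pi) / (1 - 4*n**2)

def synthesize(H, t, N=60):
    # y(t) = sum over |n| <= N of H(n wo) Fn e^{j n t}, with wo = 1 rad/s
    n = np.arange(-N, N + 1)[:, None]
    return np.sum(H(n) * Fn(n) * np.exp(1j*n*t), axis=0).real

H1 = lambda w: 1/(1 + 1j*w)      # low-pass
H2 = lambda w: 1j*w/(1 + 1j*w)   # high-pass

t = np.linspace(0, 4*np.pi, 400, endpoint=False)   # exactly two periods
print(np.mean(synthesize(H1, t)))  # DC passes: mean is H1(0) F0 = 2/pi
print(np.mean(synthesize(H2, t)))  # DC blocked: mean is ~0
```

The two printed means reproduce the qualitative observations above: H1 preserves the DC component while H2 zeroes it out, yielding a zero-mean output.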
Figure 6.12 (a) f (t), (b) y1 (t), (c) y2 (t), and (d) y3 (t) versus time t (in seconds).
In this integral we have used |f (t)|2 , instead of f 2 (t), to denote the instantaneous
power, because doing so allows us to define instantaneous and average power measures
that are suitable for complex-valued f (t). Of course, there is no distinction between
|f (t)|2 and f 2 (t) for real-valued f (t).
Now, not all $f(t)$ are voltages or currents, nor are they always applied to 1 Ω resistors. Nevertheless, we still will refer to $|f(t)|^2$ as the instantaneous signal power and to $P$ as the average signal power. Whatever the true nature of $f(t)$ may be, in practice the cost of generating $f(t)$ will be proportional to $P$.
Section 6.3 System Response to Periodic Inputs 213
The formula for $P$ given above shows how we can calculate the average signal power by integrating in the time domain, using the waveform $f(t)$ over a period $T$. Surprisingly, $P$ also can be computed using the Fourier coefficients $F_n$. This follows by Parseval's theorem, which is property 8 in Table 6.3, stated as follows:

$$P \equiv \frac{1}{T}\int_T |f(t)|^2\,dt = \sum_{n=-\infty}^{\infty} |F_n|^2.$$
Thus, the average value of $|f(t)|^2$ over one period and the sum of $|F_n|^2$ over all $n$ give the same number,¹¹ the average signal power $P$. That is why we interpret $|F_n|^2$ as the power spectrum of $f(t)$: just as the rainbow reveals the spectrum of colors (frequencies) contained in sunlight, $|F_n|^2$ describes how the average power of $f(t)$ is distributed among its different harmonic components at different frequencies $n\omega_o$.
Now, for real-valued signals $f(t)$, we have

$$F_{-n} = F_n^*$$

and

$$c_n = 2|F_n| = 2|F_{-n}|,$$

implying that

$$|F_{-n}|^2 = |F_n|^2 = \frac{c_n^2}{4},$$

where $c_n$ is the compact Fourier series coefficient. Therefore, for real $f(t)$,

$$\sum_{n=-\infty}^{\infty} |F_n|^2 = |F_0|^2 + \sum_{m=1}^{\infty}\left(|F_m|^2 + |F_{-m}|^2\right) = |F_0|^2 + \sum_{n=1}^{\infty} 2|F_n|^2 = \frac{c_0^2}{4} + \sum_{n=1}^{\infty} \frac{1}{2}c_n^2.$$

So, for real-valued $f(t)$, Parseval's theorem also can be written as

$$P \equiv \frac{1}{T}\int_T |f(t)|^2\,dt = \frac{c_0^2}{4} + \sum_{n=1}^{\infty} \frac{1}{2}c_n^2.$$
¹¹ Proof of Parseval's theorem: For a periodic $f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}$ with period $T = \frac{2\pi}{\omega_o}$,

$$P = \frac{1}{T}\int_T |f(t)|^2\,dt = \frac{1}{T}\int_T f(t)f^*(t)\,dt = \frac{1}{T}\int_T f(t)\left[\sum_{n=-\infty}^{\infty} F_n^* e^{-jn\omega_o t}\right]dt = \sum_{n=-\infty}^{\infty} F_n^*\,\frac{1}{T}\int_T f(t)e^{-jn\omega_o t}\,dt = \sum_{n=-\infty}^{\infty} F_n^* F_n = \sum_{n=-\infty}^{\infty} |F_n|^2.$$
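Parseval's theorem is easy to check numerically. The following sketch (an illustrative Python fragment of our own; the 50% duty-cycle square wave and all variable names are assumed choices, not taken from the text) computes the average power of a periodic waveform both in the time domain and from its Fourier coefficients $F_n$:

```python
import numpy as np

# Numerical check of Parseval's theorem for a periodic signal.
T = 2 * np.pi                      # period, so that w0 = 1 rad/s
w0 = 2 * np.pi / T
t = np.linspace(0, T, 20000, endpoint=False)
f = (t < T / 2).astype(float)      # 50% duty-cycle square wave, levels 1 and 0

# Average power from the time domain: P = (1/T) * integral of |f|^2 over T
P_time = np.mean(np.abs(f) ** 2)

# Fourier coefficients Fn = (1/T) * integral of f(t) e^{-j n w0 t} dt
N = 200
n = np.arange(-N, N + 1)
Fn = np.array([np.mean(f * np.exp(-1j * k * w0 * t)) for k in n])

# Average power from the frequency domain: sum of |Fn|^2
P_freq = np.sum(np.abs(Fn) ** 2)

print(P_time, P_freq)              # both close to 0.5
```

The two numbers agree to within the truncation of the sum at $|n| \le N$, consistent with the theorem.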
This formula has a simple, intuitive interpretation. It states that the average power $P$ of a real-valued periodic signal

$$f(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n \cos(n\omega_o t + \theta_n)$$

is the sum of $\frac{c_0^2}{4}$, representing the DC power, and the terms $\frac{1}{2}c_n^2$, $n > 0$, where each succeeding term represents the average AC power of a harmonic component $c_n \cos(n\omega_o t + \theta_n)$. Recall that, for co-sinusoids, the average power into a 1 Ω resistor is simply one-half the amplitude squared.
Example 6.17
Consider the square-wave signal

$$f(t) = \sum_{n=1\,(\mathrm{odd})}^{\infty} \frac{8}{n\pi}\cos\!\left(n\frac{\pi}{6}t - \frac{\pi}{2}\right),$$

applied to an LTI system whose frequency response $H(\omega)$ equals 1 for $|\omega| \le 2$ rad/s and vanishes for $|\omega| > 2$ rad/s; determine the system output $y(t)$ and the average powers $P_f$ and $P_y$ of the input and output signals $f(t)$ and $y(t)$.

Solution First, we note that the input $f(t)$ consists of co-sinusoids with harmonic frequencies

$$\frac{\pi}{6},\ \frac{3\pi}{6},\ \frac{5\pi}{6},\ \frac{7\pi}{6},\ \cdots\ \frac{\mathrm{rad}}{\mathrm{s}}.$$

Since $H(\omega) = 0$ for $\omega > 2$ rad/s, only the fundamental ($n = 1$) and 3rd harmonic ($n = 3$) of $f(t)$ will pass through the specified system. Hence,

$$y(t) = \frac{8}{\pi}\cos\!\left(\frac{\pi}{6}t - \frac{\pi}{2}\right) + \frac{8}{3\pi}\cos\!\left(\frac{3\pi}{6}t - \frac{\pi}{2}\right).$$
Using the second form of Parseval's theorem (applicable to real-valued signals) with $c_1 = \frac{8}{\pi}$ and $c_3 = \frac{8}{3\pi}$, we find the average power in the output $y(t)$ to be

$$P_y = \frac{1}{2}\left(\frac{8}{\pi}\right)^2 + \frac{1}{2}\left(\frac{8}{3\pi}\right)^2 \approx 3.602.$$
Suppose that

$$y(t) = A\,f(t) + B\,f^2(t)$$

is the actual response of a system to some input $f(t)$, where $A$ and $B$ denote arbitrary constants. Assume that the first term in the expression, which is linear in $f(t)$, is the desired response. The second term, which is nonlinear in $f(t)$, is unintentional, perhaps due to a design error or imperfect electrical components. How might we measure the consequences of the undesired nonlinear term?

As we saw in Chapters 4 and 5, linear systems respond to co-sinusoidal inputs with co-sinusoidal outputs at the same frequency. However, the system defined above will respond to a pure cosine input $f(t) = \cos(\omega_o t)$ with

$$y(t) = A\cos(\omega_o t) + B\cos^2(\omega_o t) = \frac{B}{2} + A\cos(\omega_o t) + \frac{B}{2}\cos(2\omega_o t),$$

since

$$\cos^2\theta = \frac{1}{2}\left(1 + \cos(2\theta)\right).$$
2
Clearly, the output $y(t)$ is not just the desired pure cosine $A\cos(\omega_o t)$, but also contains a DC term $\frac{B}{2}$ and a second-harmonic term

$$\frac{B}{2}\cos(2\omega_o t).$$

The average power of the output (by Parseval's theorem) is

$$P_y = \frac{B^2}{4} + \frac{A^2}{2} + \frac{B^2}{8},$$
where the last two terms represent the average power of the fundamental and the second harmonic, respectively. One common measure of second-harmonic distortion in the system output is the ratio of the average power in the second harmonic to the average power in the fundamental:¹²

$$\frac{B^2/8}{A^2/2} = \frac{B^2}{4A^2}.$$

For instance, for $B = 0.1A$, this ratio is 0.25%.
In a more general scenario, a nonlinear system response to a pure cosine input $f(t) = \cos(\omega_o t)$ may have the Fourier series form

$$y(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos(n\omega_o t + \theta_n),$$
containing many, and possibly an infinite number of, higher-order harmonics. Every term in this response, except for the fundamental, would have been absent if the system were linear. Hence, a sensible and useful measure of the effects of the nonlinearity in this case is the ratio of the total power of the second and higher-order harmonics to the power of the fundamental. This ratio, known as total harmonic distortion (THD), can be expressed (by Parseval's theorem) as

$$\mathrm{THD} = \frac{\sum_{n=2}^{\infty}\frac{1}{2}c_n^2}{\frac{1}{2}c_1^2} = \frac{\sum_{n=2}^{\infty}c_n^2}{c_1^2} = \frac{P_y - \frac{c_0^2}{4} - \frac{1}{2}c_1^2}{\frac{1}{2}c_1^2}.$$
Example 6.18
A power plant promises

$$y(t) = \cos(\omega_o t)$$

as a product to its customers, where $\frac{\omega_o}{2\pi} = 60$ Hz. The plant actually delivers the signal

$$y(t) = \cos(\omega_o t) + \frac{1}{9}\cos(3\omega_o t) + \frac{1}{25}\cos(5\omega_o t).$$

What is the THD?

Solution Clearly,

$$\mathrm{THD} = \frac{\left(\frac{1}{9}\right)^2 + \left(\frac{1}{25}\right)^2}{1^2} \approx 1.39\%.$$
¹² DC distortion is of less concern because the DC component can be removed by the use of a blocking capacitor or a simple high-pass filter.
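The THD calculation of Example 6.18 can be organized as a short computation over the compact Fourier series amplitudes. The Python sketch below is our own illustration of that bookkeeping (the dictionary of amplitudes simply encodes the delivered waveform from the example):

```python
import numpy as np

# THD of the power-plant waveform in Example 6.18:
# y(t) = cos(w0 t) + (1/9) cos(3 w0 t) + (1/25) cos(5 w0 t).
# THD = (average power in harmonics above the fundamental)
#       / (average power in the fundamental).
c = {1: 1.0, 3: 1.0 / 9.0, 5: 1.0 / 25.0}    # compact Fourier series amplitudes

P_fund = 0.5 * c[1] ** 2                      # average power of the fundamental
P_harm = sum(0.5 * cn ** 2 for n, cn in c.items() if n > 1)

THD = P_harm / P_fund
print(f"THD = {100 * THD:.2f}%")              # about 1.39%
```

The same structure applies to any waveform given by its compact Fourier series amplitudes $c_n$.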
Figure 6.13 Zero-mean square-wave signal $y(t)$ (levels $\pm 1$) versus time $t$ (in seconds).
Example 6.19
The Fourier series of the zero-mean square-wave signal shown in Figure 6.13 is

$$y(t) = \frac{4}{\pi}\left[\cos(t) - \frac{1}{3}\cos(3t) + \frac{1}{5}\cos(5t) - \frac{1}{7}\cos(7t) + \cdots\right].$$

Since $y^2(t) = 1$ at all times, the average power of this unit-amplitude square wave is $P_y = 1$; furthermore, $c_0 = 0$ and, for the fundamental,

$$\frac{1}{2}c_1^2 = \frac{1}{2}\left(\frac{4}{\pi}\right)^2 = \frac{8}{\pi^2}.$$

Therefore, substituting this last expression into the formula for THD, we obtain

$$\mathrm{THD} = \frac{1 - \frac{8}{\pi^2}}{\frac{8}{\pi^2}} = \frac{\pi^2 - 8}{8} \approx 23.4\%.$$
EXERCISES
6.1 Plot the following periodic functions over at least two periods and specify their period $T$ and fundamental frequency $\omega_o = \frac{2\pi}{T}$:
(a) $f(t) = 4 + \cos(3t)$.
(b) $g(t) = 8 + 4e^{-j4t} + 4e^{j4t}$.
(c) $h(t) = 2e^{-j2t} + 2e^{j2t} + 2\cos(4t)$.
6.5 (a) Calculate the exponential Fourier series of f (t) plotted below.
[Plot of $f(t)$ versus $t$, between $t = -\frac{1}{2}$ s and $t = \frac{1}{2}$ s.]
Exercises 219
(b) Express the function g(t), shown next, as a scaled and shifted version
of the function f (t) from part (a) and determine the Fourier series of
g(t) by using the scaling and time-shift properties of the Fourier series.
Simplify your result as much as you can.
[Plot of $g(t)$ (levels 4 and $-4$, period 1 s) and plot of $h(t)$ (levels 2 and $-2$, between $t = -\frac{1}{2}$ s and $t = \frac{1}{2}$ s).]
(c) Using the result of part (b), determine the exponential Fourier series
of the following signal s(t), assuming that T = 1 s:
[Plot of $s(t)$: levels $A$ and $-A$, period $T$.]
6.9 The input signal of a linear system with frequency response H (ω) is
$$f(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{1}{n\pi}\cos\!\left(2\pi n t + \frac{\pi}{2}\right).$$
The input f (t) and frequency response H (ω) are plotted as:
[Plots of $f(t)$ versus $t$ (period 1 s) and $H(\omega)$ versus $\omega$, with values 2 and $9\pi$ rad/s marked.]
6.10 Show that the compact trigonometric Fourier series of f (t) shown in Problem
6.9 is
$$f(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{1}{n\pi}\cos\!\left(2\pi n t + \frac{\pi}{2}\right).$$
6.12 The input–output relation for a system with input f (t) is given as
y(t) = 6f (t) + f 2 (t) − f 3 (t).
Determine the total harmonic distortion (THD) of the system response to a
pure cosine input. Hint:
$$\cos^3\theta = \frac{1}{4}\left(3\cos\theta + \cos(3\theta)\right).$$
Is the system linear or nonlinear? Explain.
6.13 Let f (t) be a real-valued periodic signal with fundamental frequency ωo .
We will approximate f (t) with another function
$$f_N(t) \equiv \frac{\hat{a}_0}{2} + \sum_{n=1}^{N}\left[\hat{a}_n\cos(n\omega_o t) + \hat{b}_n\sin(n\omega_o t)\right],$$

where the (real-valued) coefficients $\hat{a}_m$ and $\hat{b}_m$ are selected to minimize the average power $P_e$ in the error signal

$$e(t) \equiv f_N(t) - f(t).$$

That is, to determine the optimal $\hat{a}_m$ and $\hat{b}_m$ we minimize

$$P_e \equiv \frac{1}{T}\int_T e^2(t)\,dt.$$

We will do so by setting $\frac{\partial P_e}{\partial \hat{a}_m}$ and $\frac{\partial P_e}{\partial \hat{b}_m}$ to zero and solving for $\hat{a}_m$ and $\hat{b}_m$.
(a) Show that

$$\frac{\partial P_e}{\partial \hat{a}_0} = \frac{1}{T}\int_T e(t)\,dt = \frac{\hat{a}_0}{2} - \frac{1}{T}\int_T f(t)\,dt$$

and, for $1 \le m \le N$,

$$\frac{\partial P_e}{\partial \hat{a}_m} = \frac{2}{T}\int_T e(t)\cos(m\omega_o t)\,dt = \hat{a}_m - \frac{2}{T}\int_T f(t)\cos(m\omega_o t)\,dt$$

and

$$\frac{\partial P_e}{\partial \hat{b}_m} = \frac{2}{T}\int_T e(t)\sin(m\omega_o t)\,dt = \hat{b}_m - \frac{2}{T}\int_T f(t)\sin(m\omega_o t)\,dt.$$
(b) Solve for the optimizing $\hat{a}_m$ and $\hat{b}_m$, and show that the optimal $f_N(t)$ can be rewritten as

$$f_N(t) = \sum_{n=-N}^{N} F_n e^{jn\omega_o t}$$

with

$$F_n = \frac{1}{T}\int_T f(t)e^{-jn\omega_o t}\,dt.$$
(c) Assuming that f (t) satisfies the Dirichlet Conditions, what is the value
of Pe in the limit as N → ∞? Explain.
7
Fourier Transform and LTI
System Response to
Energy Signals
7.1 FOURIER TRANSFORM PAIRS f (t) ↔ F(ω) AND THEIR PROPERTIES 226
7.2 FREQUENCY-DOMAIN DESCRIPTION OF SIGNALS 240
7.3 LTI CIRCUIT AND SYSTEM RESPONSE TO ENERGY SIGNALS 247
EXERCISES 255
In this chapter we will talk about the Beatles and Beethoven. (See Figure 7.11.) But before that, even before beginning Section 7.1, it is worthwhile to take stock of where we are and where we are going.

We earlier developed the phasor technique for finding the response of dissipative linear time-invariant circuits to co-sinusoidal inputs. We then proceeded to add the concept of Fourier series to our toolbox for the case with periodic inputs. We saw that periodic signals are superpositions of harmonically related co-sinusoids (the frequencies are integer multiples of some fundamental frequency $\omega_o = 2\pi/T$, where $T$ is the period of the signal). In practice, most of the signals we encounter in the real world are not periodic. We shall call such signals aperiodic. For aperiodic signals, we might ask whether there is some form of frequency representation. That is, can we write an aperiodic signal $f(t)$ as a sum of co-sinusoids? The answer is yes, but in this case the frequencies are not harmonically related. In fact, most aperiodic signals consist of a continuum of frequencies, not just a set of discrete frequencies. The derivation that follows leads us to the frequency representation for aperiodic signals.
224 Chapter 7 Fourier Transform and LTI System Response to Energy Signals
where

$$F_n = \frac{1}{T}\int_{-T/2}^{T/2} f(t)e^{-jn\omega_o t}\,dt = \frac{\omega_o}{2\pi}\int_{-\pi/\omega_o}^{\pi/\omega_o} f(t)e^{-jn\omega_o t}\,dt = \frac{\omega_o}{2\pi}F(n\omega_o),$$

with

$$F(\omega) \equiv \int_{-\pi/\omega_o}^{\pi/\omega_o} f(t)e^{-j\omega t}\,dt.$$
This expression for $f(t)$ is valid no matter how large or small the fundamental frequency $\omega_o = \frac{2\pi}{T}$ is, so long as the Dirichlet conditions are satisfied by $f(t)$. In the case of a vanishingly small $\omega_o$, however, the period $T = \frac{2\pi}{\omega_o}$ becomes infinite and $f(t)$ loses its periodic character. In that limit, as $\omega_o \to 0$ and $f(t)$ becomes aperiodic, the series $f(t)$ and function $F(\omega)$ converge to¹

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega$$

and

$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-j\omega t}\,dt,$$

respectively.
The function F (ω), just introduced, is known as the Fourier transform of the
function f (t). Similarly, the formula for computing f (t) from F (ω) is called the
inverse Fourier transform of F (ω). Notice that the inverse Fourier transform expresses
f (t) as a continuous sum (integral) of co-sinusoids (complex exponentials), where
F (ω) is the weighting applied to frequency ω. The foregoing formulas for the inverse
¹ Remember from calculus that, given a function, say, $z(x)$, $\int_{-\infty}^{\infty} z(x)\,dx \equiv \lim_{\Delta x\to 0}\sum_{n=-\infty}^{\infty} z(n\Delta x)\Delta x$; the definite integral of $z(x)$ is nothing but an infinite sum of infinitely many infinitesimals $z(n\Delta x)\Delta x$, amounting to the area under the curve $z(x)$.
Fourier transform and the (forward) Fourier transform will be applied so frequently
throughout the remainder of this text that they should be committed to memory.
We will come to understand the implications of the Fourier transform as we proceed in this chapter. None is more important than the following observation: We have just seen that any aperiodic signal with a Fourier transform $F(\omega)$ is a weighted linear superposition of exponentials

$$e^{j\omega t} = \cos(\omega t) + j\sin(\omega t),$$

each of which an LTI system maps to an output exponential scaled by the frequency response:

$$e^{j\omega t} \longrightarrow \boxed{\text{LTI}} \longrightarrow H(\omega)e^{j\omega t}.$$

This indicates that for an LTI system with frequency response $H(\omega)$, and input $f(t)$ with Fourier transform $F(\omega)$, the corresponding system output $y(t)$ is simply the inverse Fourier transform of the product $H(\omega)F(\omega)$. This relationship is illustrated in Figure 7.1.
This chapter explores and expands upon the concepts just introduced, namely
the inverse Fourier transform representation of aperiodic signals f (t) and the input–
output relation for LTI systems shown in Figure 7.1. In Section 7.1 we will discuss the
existence conditions and general properties of the Fourier transform F (ω) and also
compile a table of Fourier transform pairs “f (t) ↔ F (ω)” for signals f (t) commonly
encountered in signal processing applications. In Section 7.2 we will examine the
concept of signal energy W and energy spectrum |F (ω)|2 and discuss a signal classi-
fication based on energy spectrum types. Finally, Section 7.3 will address applications
of the input–output rule stated in Figure 7.1.
$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega \;\longrightarrow\; \boxed{\text{LTI system}} \;\longrightarrow\; y(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} H(\omega)F(\omega)e^{j\omega t}\,d\omega$$
Figure 7.1 Input–output relation for dissipative LTI system H(ω), with aperiodic
input f (t). System H(ω) converts the Fourier transform F(ω) of input f (t) to the
Fourier transform Y (ω) of the output y(t), according to the rule
Y (ω) = H(ω)F(ω). See Section 7.1 for a description of the “↔” notation.
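The rule $Y(\omega) = H(\omega)F(\omega)$ of Figure 7.1 can be tried out numerically with the FFT. The Python sketch below is our own illustration (the rectangular-pulse input and the first-order low-pass $H(\omega) = 1/(1+j\omega)$ are assumed examples, not taken from the text): it computes samples of $F(\omega)$, multiplies by $H(\omega)$, and inverts to obtain $y(t)$.

```python
import numpy as np

# Numerical illustration of y(t) = inverse transform of H(w) F(w).
dt = 1e-3
t = np.arange(-10, 10, dt)
f = ((t >= 0) & (t < 1)).astype(float)         # rectangular pulse input

F = np.fft.fft(f) * dt                          # approximate samples of F(w)
w = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)    # corresponding frequencies (rad/s)
H = 1.0 / (1.0 + 1j * w)                        # first-order low-pass frequency response
y = np.real(np.fft.ifft(H * F) / dt)            # y(t) from H(w) F(w)

# For this H(w) the exact response is y(t) = 1 - e^{-t} for 0 <= t < 1;
# compare at t = 0.5:
i = np.argmin(np.abs(t - 0.5))
print(y[i], 1 - np.exp(-0.5))                   # nearly equal
```

The agreement at the sampled instant confirms the frequency-domain input–output rule to within the discretization error of the FFT approximation.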
Signals $f(t)$ and their Fourier transforms $F(\omega)$ satisfying the relations

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega \quad\text{and}\quad F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-j\omega t}\,dt$$

are said to be Fourier transform pairs.² To indicate a Fourier transform pair, we will use the notation

$$f(t) \leftrightarrow F(\omega).$$

For instance,

$$e^{-|t|} \leftrightarrow \frac{2}{1+\omega^2},$$

as demonstrated later in this section. This pairing is unique, because there exists no time signal $f(t)$ other than $e^{-|t|}$ having the Fourier transform $F(\omega) = \frac{2}{1+\omega^2}$.
For the Fourier transform of $f(t)$ to exist, it is sufficient that $f(t)$ be absolutely integrable,³ that is, satisfy

$$\int_{-\infty}^{\infty} |f(t)|\,dt < \infty,$$

and for the convergence⁴ of the inverse Fourier transform of $F(\omega)$ to $f(t)$ it is sufficient that $f(t)$ satisfy the remaining Dirichlet conditions over any finite interval. However, absolute integrability, which is satisfied by all signals that can be generated in the lab,⁵ is not a necessary condition for a Fourier pairing $f(t) \leftrightarrow F(\omega)$ to be true.
² The term "transform" generally is used in mathematics to describe a reversible conversion. If $F(\omega)$ is a transform of $f(t)$, then there exists an inverse process that converts $F(\omega)$ uniquely back into $f(t)$.

³ Proof: Note that

$$|F(\omega)| = \left|\int_{-\infty}^{\infty} f(t)e^{-j\omega t}\,dt\right| \le \int_{-\infty}^{\infty} |f(t)e^{-j\omega t}|\,dt = \int_{-\infty}^{\infty} |f(t)|\,dt.$$

Thus

$$\int_{-\infty}^{\infty} |f(t)|\,dt < \infty$$

guarantees that $|F(\omega)|$ is finite for every $\omega$.
Some signals $f(t)$ that are not absolutely integrable still satisfy the relations

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega \quad\text{and}\quad F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-j\omega t}\,dt$$

for a given $F(\omega)$, for instance, when $F(\omega)$ satisfies the Dirichlet conditions. In that case, $f(t)$ and $F(\omega)$ form a Fourier transform pair $f(t) \leftrightarrow F(\omega)$ just the same.
Therefore, the input–output relation for LTI systems shown in Figure 7.1 has a very
wide range of applicability (which we will explore in Sections 7.2 and 7.3).
Table 7.1 lists some of the general properties of Fourier transform pairs. Many important Fourier transform pairs are listed in Table 7.2. In this section we will prove some of the properties listed in Table 7.1 and verify some of the transform pairs shown in Table 7.2. Detailed discussions of some of the entries in these tables will be delayed until their specific uses are needed. You can ignore the right column of Table 7.2 until it is needed in Chapter 9. Some of the entries in the left column of Table 7.2 contain unfamiliar functions such as $u(t)$, $\mathrm{rect}(t)$, $\mathrm{sinc}(t)$, and $\Delta(t)$. These functions are defined next.
The unit-step function is defined as⁶

$$u(t) \equiv \begin{cases} 1, & t > 0, \\ 0, & t < 0, \end{cases}$$

and is plotted in Figure 7.2a. Figures 7.2b, 7.2c, and 7.2d show $u(t-2)$ (a shifted unit step), $u(1-t)$ (a reversed and shifted unit step), and the signum, or sign, function

$$\mathrm{sgn}(t) \equiv 2u(t) - 1.$$

The unit step is not absolutely integrable, but it still has a Fourier transform that can be defined (which will be identified in Chapter 9).
The unit-rectangle function is defined as

$$\mathrm{rect}(t) \equiv \begin{cases} 1, & |t| < \frac{1}{2}, \\ 0, & |t| > \frac{1}{2}, \end{cases}$$

and is plotted in Figure 7.3a. Figures 7.3b, 7.3c, and 7.3d show $\mathrm{rect}(t-\frac{1}{2})$ (a delayed rect), $\mathrm{rect}(\frac{t}{3})$ (a rect stretched by a factor of 3), and $\mathrm{rect}(\frac{t+1}{2})$ (a rect stretched by a factor of 2 and shifted by 1 unit to the left). The unit rectangle is absolutely integrable, and we will derive its Fourier transform later in this section.
⁶ In general, two analog signals, say, $f(t)$ and $g(t)$, can be regarded as equal so long as their values differ by a finite amount at no more than a finite number (or perhaps a countably infinite number) of discrete points in time $t$. In keeping with this definition of equivalence, applicable to Fourier series as well as transforms, there is no need to specify the value of the unit step $u(t)$ at the single point $t = 0$. This value can be any finite number (e.g., 0.5), and its choice will not affect any of our results. The situation is similar for the points of discontinuity of the unit rectangle $\mathrm{rect}(t)$ at $t = \pm 0.5$.
$f(t) \leftrightarrow F(\omega)$

Energy signals (left-hand column):
1. $e^{-at}u(t) \leftrightarrow \frac{1}{a+j\omega}$, $a > 0$
2. $e^{at}u(-t) \leftrightarrow \frac{1}{a-j\omega}$, $a > 0$
3. $e^{-a|t|} \leftrightarrow \frac{2a}{a^2+\omega^2}$, $a > 0$
4. $\frac{a^2}{a^2+t^2} \leftrightarrow \pi a e^{-a|\omega|}$, $a > 0$
5. $te^{-at}u(t) \leftrightarrow \frac{1}{(a+j\omega)^2}$, $a > 0$
7. $\mathrm{rect}(\frac{t}{\tau}) \leftrightarrow \tau\,\mathrm{sinc}(\frac{\omega\tau}{2})$
8. $\mathrm{sinc}(Wt) \leftrightarrow \frac{\pi}{W}\mathrm{rect}(\frac{\omega}{2W})$
9. $\Delta(\frac{t}{\tau}) \leftrightarrow \frac{\tau}{2}\mathrm{sinc}^2(\frac{\omega\tau}{4})$
10. $\mathrm{sinc}^2(\frac{Wt}{2}) \leftrightarrow \frac{2\pi}{W}\Delta(\frac{\omega}{2W})$
12. $e^{-at}\cos(\omega_o t)u(t) \leftrightarrow \frac{a+j\omega}{(a+j\omega)^2+\omega_o^2}$, $a > 0$
13. $e^{-t^2/2\sigma^2} \leftrightarrow \sigma\sqrt{2\pi}\,e^{-\sigma^2\omega^2/2}$

Power signals and distributions (right-hand column):
14. $\delta(t) \leftrightarrow 1$
15. $1 \leftrightarrow 2\pi\delta(\omega)$
16. $\delta(t-t_o) \leftrightarrow e^{-j\omega t_o}$
17. $e^{j\omega_o t} \leftrightarrow 2\pi\delta(\omega-\omega_o)$
18. $\cos(\omega_o t) \leftrightarrow \pi[\delta(\omega-\omega_o) + \delta(\omega+\omega_o)]$
20. $\cos(\omega_o t)u(t) \leftrightarrow \frac{\pi}{2}[\delta(\omega-\omega_o) + \delta(\omega+\omega_o)] + \frac{j\omega}{\omega_o^2-\omega^2}$
21. $\sin(\omega_o t)u(t) \leftrightarrow j\frac{\pi}{2}[\delta(\omega+\omega_o) - \delta(\omega-\omega_o)] + \frac{\omega_o}{\omega_o^2-\omega^2}$
22. $\mathrm{sgn}(t) \leftrightarrow \frac{2}{j\omega}$
23. $u(t) \leftrightarrow \pi\delta(\omega) + \frac{1}{j\omega}$
25. $f(t)\sum_{n=-\infty}^{\infty}\delta(t-nT) \leftrightarrow \frac{1}{T}\sum_{n=-\infty}^{\infty}F(\omega - n\frac{2\pi}{T})$
Table 7.2 Important Fourier transform pairs. The left-hand column includes only
energy signals f (t) (see Section 7.2), while the right-hand column includes power
signals and distributions (covered in Chapter 9).
Figure 7.2 (a) The unit step u(t), (b) u(t − 2), (c) u(1 − t), and
(d) sgn(t) = 2u(t) − 1.
Figure 7.3 (a) The unit rectangle $\mathrm{rect}(t)$, (b) $\mathrm{rect}(t-\frac{1}{2})$, (c) $\mathrm{rect}(\frac{t}{3})$, and (d) $\mathrm{rect}(\frac{t+1}{2})$.
Fourier Transform Pairs f (t) ↔ F (ω) and Their Properties 231
The sinc function is defined as

$$\mathrm{sinc}(t) \equiv \frac{\sin(t)}{t}$$
and is plotted in Figure 7.4. Notice that sinc(t) is zero (crosses the horizontal axis)
for t = kπ, where k is any integer but 0. The use of l’Hopital’s rule7 indicates
that sinc(0) = 1. It can be shown that the sinc function is not absolutely integrable;
however, its Fourier transform exists and it will be determined later in this section.
Figure 7.4 $\mathrm{sinc}(t)$ versus $t$.
The unit-triangle function is defined as

$$\Delta(t) \equiv \begin{cases} 1 - 2|t|, & |t| < \frac{1}{2}, \\ 0, & |t| > \frac{1}{2}, \end{cases}$$

and is plotted in Figure 7.5a. A shifted unit-triangle function, $\Delta(t - \frac{1}{2})$, is shown in Figure 7.5b.
⁷ $\displaystyle \lim_{t\to 0}\mathrm{sinc}(t) = \frac{\frac{d}{dt}\sin(t)\big|_{t=0}}{\frac{d}{dt}t\big|_{t=0}} = \frac{\cos(0)}{1} = 1.$
Example 7.1
Determine the Fourier transform of

$$f(t) = e^{-at}u(t),$$

where $a > 0$ is a constant. The function is plotted in Figure 7.6a for the case $a = 1$. Notice that the $u(t)$ multiplier of $e^{-at}$ makes $f(t)$ zero for $t < 0$.
Solution Substituting $e^{-at}u(t)$ for $f(t)$ in the Fourier transform formula gives

$$F(\omega) = \int_{-\infty}^{\infty} e^{-at}u(t)e^{-j\omega t}\,dt = \int_{0}^{\infty} e^{-(a+j\omega)t}\,dt = \left.\frac{e^{-(a+j\omega)t}}{-(a+j\omega)}\right|_{0}^{\infty} = \frac{0-1}{-(a+j\omega)} = \frac{1}{a+j\omega}.$$
Therefore,

$$e^{-at}u(t) \leftrightarrow \frac{1}{a+j\omega},$$

which is the first entry in Table 7.2. The magnitude and angle of

$$F(\omega) = \frac{1}{a+j\omega}$$
Figure 7.6 (a) f (t) = e−t u(t), (b) |F(ω)| versus ω, and (c) angle of F(ω) in degrees
versus ω.
are plotted in Figures 7.6b and 7.6c for the case a = 1. From the plot of
|F (ω)|, we see that f (t) contains all frequencies ω, but that it has more
low-frequency content than it does high-frequency content. Notice that for
a < 0, the preceding integral is not convergent, because in that case e−at →
∞ as t → ∞. Therefore, for a < 0, the function e−at u(t) does not have a
Fourier transform (and it is not absolutely integrable). For the special case
of a = 0, e−at u(t) = u(t) is not absolutely integrable, yet it still is possible
to define a Fourier transform, as already mentioned. (See Chapter 9.)
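The pair derived in Example 7.1 can be spot-checked numerically. The sketch below is our own illustration (the step size, truncation point, and function names are assumed choices): it approximates the Fourier integral of $e^{-at}u(t)$ by a Riemann sum and compares against $\frac{1}{a+j\omega}$ at a few frequencies.

```python
import numpy as np

# Numerical spot-check of entry 1 in Table 7.2: e^{-at} u(t) <-> 1/(a + jw), a > 0.
# The integral is truncated at t = 40/a, beyond which e^{-at} is negligible.
a = 1.0
dt = 1e-4
t = np.arange(0.0, 40.0 / a, dt)

def fourier_transform(f_vals, omega):
    """Riemann-sum approximation of F(w) = integral of f(t) e^{-jwt} dt."""
    return np.sum(f_vals * np.exp(-1j * omega * t)) * dt

for w in [0.0, 1.0, 5.0]:
    numeric = fourier_transform(np.exp(-a * t), w)
    exact = 1.0 / (a + 1j * w)
    print(w, abs(numeric - exact))   # differences near zero
```

The same sum with $a < 0$ would diverge as the truncation point grows, mirroring the nonconvergence of the integral noted above.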
Example 7.2
Determine the Fourier transform G(ω) of the function g(t) = eat u(−t),
where a > 0 is a constant. The function is plotted in Figure 7.7 for the case
a = 1.
Solution Notice that $g(t) = f(-t)$, where $f(t) = e^{-at}u(t) \leftrightarrow \frac{1}{a+j\omega}$, $a > 0$. Therefore, we can use the time-scaling property from Table 7.1, with $c = -1$. Thus,

$$G(\omega) = \frac{1}{|-1|}F\!\left(\frac{\omega}{-1}\right) = \frac{1}{a-j\omega}.$$

Hence,

$$e^{at}u(-t) \leftrightarrow \frac{1}{a-j\omega},$$

which is the second entry in Table 7.2.
Example 7.3
Determine the Fourier transform P (ω) of the function p(t) = e−a|t| , where
a > 0 is a constant. The function is plotted in Figure 7.8a for the case a = 1.
Solution Comparing Figures 7.6a, 7.7, and 7.8a, we note that $p(t) = f(t) + g(t)$. Therefore, using the addition property from Table 7.1, we have

$$P(\omega) = F(\omega) + G(\omega) = \frac{1}{a+j\omega} + \frac{1}{a-j\omega} = 2\,\mathrm{Re}\!\left[\frac{1}{a+j\omega}\right] = \frac{2a}{a^2+\omega^2},$$
which is plotted in Figure 7.8b for the case $a = 1$. Hence, for $a > 0$,

$$e^{-a|t|} \leftrightarrow \frac{2a}{a^2+\omega^2},$$
which is entry 3 in Table 7.2. This is a case where the Fourier transform
turns out to be real, because the time-domain function is real and even
(property 4 in Table 7.1).
Example 7.4
Using the symmetry property from Table 7.1 and the result of Example 7.3,
determine the Fourier transform $Q(\omega)$ of the function

$$q(t) = \frac{a^2}{a^2+t^2}.$$
Solution The symmetry property 6 from Table 7.1 states that the Fourier transform of a Fourier transform gives back the original waveform, except reversed and scaled by $2\pi$. Applying this to the result

$$e^{-a|t|} \leftrightarrow \frac{2a}{a^2+\omega^2}$$

for $a > 0$ of Example 7.3, we can write

$$\frac{2a}{a^2+t^2} \leftrightarrow 2\pi e^{-a|-\omega|} = 2\pi e^{-a|\omega|}.$$
Therefore,

$$\frac{a^2}{a^2+t^2} \leftrightarrow \pi a e^{-a|\omega|},$$

so that

$$Q(\omega) = \pi a e^{-a|\omega|}$$

for $a > 0$. If the sign of $a$ is changed, then $q(t)$ is not altered, so the values of $Q(\omega)$ cannot change either. To enforce that, we write

$$Q(\omega) = \pi|a|e^{-|a\omega|}.$$
The Fourier transform properties listed in Table 7.1 can be derived from the
Fourier transform and inverse Fourier transform definitions, as illustrated in two cases
in the next two examples. Notice that all of the Fourier transforms calculated in
Examples 7.1 through 7.4 are consistent with property 3,
F (−ω) = F ∗ (ω),
which is valid for all real-valued f (t). After reading Examples 7.5 and 7.6, try to
derive property 3 on your own.
Example 7.5
Confirm the symmetry property listed in Table 7.1.
Solution To confirm this property, it is sufficient to show that, given the inverse Fourier transform integral

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega,$$

the Fourier transform of $F(t)$ equals $2\pi f(-\omega)$. The first integral given can be rewritten (after $\omega$ is changed to the dummy variable $x$) as

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(x)e^{jxt}\,dx.$$

Evaluating this expression at $t = -\omega$ and multiplying both sides by $2\pi$ gives

$$2\pi f(-\omega) = \int_{-\infty}^{\infty} F(x)e^{-j\omega x}\,dx,$$

where the right-hand side is the Fourier transform of $F$, with the integration variable named $x$ instead of $t$.
Hence,

$$\int_{-\infty}^{\infty} F(t)e^{-j\omega t}\,dt = 2\pi f(-\omega),$$

as claimed.
Example 7.6
Confirm the time-derivative property listed in Table 7.1.
Solution Given the inverse transform

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega,$$

it follows that

$$\frac{df}{dt} = \frac{d}{dt}\,\frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \{j\omega F(\omega)\}e^{j\omega t}\,d\omega.$$

Hence $\frac{df}{dt}$ is the inverse Fourier transform of $j\omega F(\omega)$; that is, $\frac{df}{dt} \leftrightarrow j\omega F(\omega)$.
Example 7.7
Using the frequency-derivative property from Table 7.1 and entry 1 from
Table 7.2, confirm entry 5 in Table 7.2.
Solution Entry 1 in Table 7.2 is

$$e^{-at}u(t) \leftrightarrow \frac{1}{a+j\omega}, \quad a > 0.$$

The frequency derivative of the right side is

$$\frac{d}{d\omega}\,\frac{1}{a+j\omega} = \frac{d}{d\omega}(a+j\omega)^{-1} = -(a+j\omega)^{-2}j = \frac{-j}{(a+j\omega)^2},$$

which, according to the frequency-derivative property, is the Fourier transform of

$$-jte^{-at}u(t).$$

Thus,

$$te^{-at}u(t) \leftrightarrow \frac{1}{(a+j\omega)^2}, \quad a > 0,$$

which is entry 5 in Table 7.2.
We next will determine the Fourier transforms of $\mathrm{rect}(\frac{t}{\tau})$ and $\mathrm{sinc}(Wt)$. The results are of fundamental importance in signal processing, as we will see later on.

Example 7.8
Determine the Fourier transform of $f(t) = \mathrm{rect}(\frac{t}{\tau})$.

Solution Since $\mathrm{rect}(\frac{t}{\tau})$ equals 1 for $-\frac{\tau}{2} < t < \frac{\tau}{2}$ and 0 for $|t| > \frac{\tau}{2}$, it follows that

$$F(\omega) = \int_{-\infty}^{\infty} \mathrm{rect}\!\left(\frac{t}{\tau}\right)e^{-j\omega t}\,dt = \int_{-\tau/2}^{\tau/2} e^{-j\omega t}\,dt = \frac{e^{-j\omega\tau/2} - e^{j\omega\tau/2}}{-j\omega} = \frac{\sin(\frac{\omega\tau}{2})}{\frac{\omega}{2}}.$$

Thus,

$$F(\omega) = \frac{\sin(\frac{\omega\tau}{2})}{\frac{\omega}{2}} = \tau\,\frac{\sin(\frac{\omega\tau}{2})}{\frac{\omega\tau}{2}} = \tau\,\mathrm{sinc}\!\left(\frac{\omega\tau}{2}\right).$$
Hence,

$$\mathrm{rect}\!\left(\frac{t}{\tau}\right) \leftrightarrow \tau\,\mathrm{sinc}\!\left(\frac{\omega\tau}{2}\right),$$

which is entry 7 in Table 7.2.⁸
Example 7.9
Determine the Fourier transform of g(t) = sinc(W t).
Solution Applying the symmetry property to the result of Example 7.8, we write

$$\tau\,\mathrm{sinc}\!\left(\frac{t\tau}{2}\right) \leftrightarrow 2\pi\,\mathrm{rect}\!\left(\frac{-\omega}{\tau}\right).$$
⁸ It can be shown that the inverse Fourier transform of $\tau\,\mathrm{sinc}(\frac{\omega\tau}{2})$ gives $\mathrm{rect}(\frac{t}{\tau})$, with values of $\frac{1}{2}$ at the two points of discontinuity. This is a standard feature with Fourier transforms. When either a forward or inverse Fourier transform produces a waveform with discontinuities, the values at the points of discontinuity are always the midpoints between the bottoms and tops of the jumps.
Replacing $\tau$ with $2W$ and using the fact that rect is an even function yields

$$\mathrm{sinc}(Wt) \leftrightarrow \frac{\pi}{W}\mathrm{rect}\!\left(\frac{\omega}{2W}\right),$$

which is entry 8 in Table 7.2.⁹
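The pair just obtained can be confirmed numerically by evaluating the inverse Fourier transform of $\frac{\pi}{W}\mathrm{rect}(\frac{\omega}{2W})$ directly, which is the route suggested in footnote 9. The Python sketch below is our own illustration (step sizes and names are assumed choices):

```python
import numpy as np

# Numerical check of sinc(W t) <-> (pi/W) rect(w / 2W): the inverse Fourier
# transform (1/2pi) * integral over |w| < W of (pi/W) e^{jwt} dw should give
# sin(W t)/(W t).
W = 2.0
dw = 1e-4
w = np.arange(-W, W, dw) + dw / 2          # midpoint samples of the passband

def inv_ft_rect(t):
    """(1/2pi) * integral of (pi/W) e^{jwt} dw over -W < w < W."""
    return np.real(np.sum((np.pi / W) * np.exp(1j * w * t)) * dw / (2 * np.pi))

for t in [0.0, 0.5, np.pi / W]:
    # np.sinc(x) is sin(pi x)/(pi x), so np.sinc(W t / pi) equals sin(W t)/(W t)
    print(t, inv_ft_rect(t), np.sinc(W * t / np.pi))
```

At $t = \pi/W$, both the numerical integral and $\mathrm{sinc}(Wt)$ vanish, matching the zero crossings noted earlier for the sinc function.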
Four functions, $\mathrm{rect}(\frac{t}{\tau})$ for $\tau = 1$ (solid) and 2 (dashed), and $\mathrm{sinc}(Wt)$ for $W = 2\pi$ (solid) and $\pi$ (dashed), are plotted in Figure 7.9a. Their Fourier transforms $\tau\,\mathrm{sinc}(\frac{\omega\tau}{2})$ and $\frac{\pi}{W}\mathrm{rect}(\frac{\omega}{2W})$ are plotted in Figure 7.9b. Notice that the narrower signals
(solid) in Figure 7.9a are paired with the broader Fourier transforms (also solid) in
Figure 7.9b, while the broader signals (dashed) correspond to the narrower Fourier
transforms. This inverse relationship always holds with Fourier transform pairs, as
indicated by property 7 in Table 7.1. Narrower signals have broader Fourier trans-
forms because narrower signals must have more high-frequency content in order to
exhibit fast transitions in time. Likewise, broader signals are smoother and have more
low-frequency content. Try identifying all the Fourier transform pairs in Figure 7.9
and mark them for future reference.
Figure 7.9 Four signals $f(t)$ and their Fourier transforms $F(\omega)$ are shown in (a) versus $t$ (s) and (b) versus $\omega$ (rad/s), respectively.
⁹ $\mathrm{sinc}(Wt)$ is not absolutely integrable, but we have nevertheless found its Fourier transform using the symmetry property. An alternative way of confirming the Fourier transform of $\mathrm{sinc}(Wt)$ is to show that the inverse Fourier transform of $\frac{\pi}{W}\mathrm{rect}(\frac{\omega}{2W})$ is $\mathrm{sinc}(Wt)$; see Exercise Problem 7.4.
Example 7.10
Consider a function $f(t)$ defined as the time derivative of $\Delta(\frac{t}{\tau})$, which also can be expressed as

$$f(t) \equiv \frac{d}{dt}\Delta\!\left(\frac{t}{\tau}\right) = \frac{2}{\tau}\,\mathrm{rect}\!\left(\frac{t+\tau/4}{\tau/2}\right) - \frac{2}{\tau}\,\mathrm{rect}\!\left(\frac{t-\tau/4}{\tau/2}\right)$$

in terms of shifted and scaled rect functions. (See Exercise 7.5.) Using this relationship, verify¹⁰ the Fourier transform of $\Delta(\frac{t}{\tau})$ specified in Table 7.2.
Solution Using the time-shift and time-scaling properties of the Fourier transform and

$$\mathrm{rect}\!\left(\frac{t}{\tau}\right) \leftrightarrow \tau\,\mathrm{sinc}\!\left(\frac{\omega\tau}{2}\right),$$

we find

$$\mathrm{rect}\!\left(\frac{t\pm\tau/4}{\tau/2}\right) = \mathrm{rect}\!\left(\frac{2(t\pm\tau/4)}{\tau}\right) \leftrightarrow \frac{\tau}{2}\,\mathrm{sinc}\!\left(\frac{\omega\tau}{4}\right)e^{\pm j\omega\tau/4}.$$

Hence, the Fourier transform of

$$f(t) = \frac{d}{dt}\Delta\!\left(\frac{t}{\tau}\right)$$

is

$$F(\omega) = \mathrm{sinc}\!\left(\frac{\omega\tau}{4}\right)\left(e^{j\omega\tau/4} - e^{-j\omega\tau/4}\right) = j2\sin\!\left(\frac{\omega\tau}{4}\right)\mathrm{sinc}\!\left(\frac{\omega\tau}{4}\right).$$

By the time-derivative property, $F(\omega)$ must equal $j\omega$ times the Fourier transform of $\Delta(\frac{t}{\tau})$; dividing by $j\omega$ gives

$$\Delta\!\left(\frac{t}{\tau}\right) \leftrightarrow \frac{2}{\omega}\sin\!\left(\frac{\omega\tau}{4}\right)\mathrm{sinc}\!\left(\frac{\omega\tau}{4}\right) = \frac{\tau}{2}\,\mathrm{sinc}^2\!\left(\frac{\omega\tau}{4}\right),$$

as specified in Table 7.2.
The Fourier transform of $\mathrm{sinc}^2(\frac{Wt}{2})$, entry 10 in Table 7.2, can be obtained from item 9 using the symmetry property of the Fourier transform.
¹⁰ Another approach to the same problem (less tricky, but more tedious) is illustrated in Example 7.11.
Example 7.11
In Example 7.10 the Fourier transform of ( τt ) was obtained by a sequence
of clever tricks related to some of the properties of the Fourier trans-
form. Alternatively, we can get the same result in a straightforward way
by performing the Fourier transform integral

$$\int_{-\infty}^{\infty} \Delta\!\left(\frac{t}{\tau}\right)e^{-j\omega t}\,dt = 2\int_{0}^{\tau/2} \Delta\!\left(\frac{t}{\tau}\right)\cos(\omega t)\,dt = \frac{2}{\omega}\int_{0}^{\tau/2} \Delta\!\left(\frac{t}{\tau}\right)\frac{d}{dt}[\sin(\omega t)]\,dt,$$

where we make use of the fact that $\Delta(\frac{t}{\tau})$ is an even function that vanishes for $t > \frac{\tau}{2}$. Integrating by parts, we have

$$\int_{0}^{\tau/2} \Delta\!\left(\frac{t}{\tau}\right)\frac{d}{dt}[\sin(\omega t)]\,dt = 0 - \int_{0}^{\tau/2}\frac{-1}{\tau/2}\sin(\omega t)\,dt = \frac{2}{\tau}\int_{0}^{\tau/2}\sin(\omega t)\,dt$$

$$= -\frac{2}{\tau\omega}\cos(\omega t)\Big|_{0}^{\tau/2} = \frac{2}{\tau\omega}\left(1 - \cos\!\left(\frac{\omega\tau}{2}\right)\right) = \frac{4}{\tau\omega}\sin^2\!\left(\frac{\omega\tau}{4}\right).$$

Hence,

$$\int_{-\infty}^{\infty} \Delta\!\left(\frac{t}{\tau}\right)e^{-j\omega t}\,dt = \frac{2}{\omega}\cdot\frac{4}{\tau\omega}\sin^2\!\left(\frac{\omega\tau}{4}\right) = \frac{8}{\tau\omega^2}\sin^2\!\left(\frac{\omega\tau}{4}\right) = \frac{\tau}{2}\,\mathrm{sinc}^2\!\left(\frac{\omega\tau}{4}\right),$$
in agreement with item 9 in Table 7.2.
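The triangle pair derived in Examples 7.10 and 7.11 can also be spot-checked by direct numerical integration, using $\Delta(t/\tau) = 1 - 2|t|/\tau$ for $|t| < \tau/2$. The Python sketch below is our own illustration (step sizes and names are assumed choices):

```python
import numpy as np

# Numerical check of item 9 in Table 7.2: Delta(t/tau) <-> (tau/2) sinc^2(w tau/4).
tau = 2.0
dt = 1e-5
t = np.arange(-tau / 2, tau / 2, dt) + dt / 2   # midpoint samples
tri = 1.0 - 2.0 * np.abs(t) / tau               # the triangle Delta(t/tau)

# At w = 0 the transform is just the area under the triangle, tau/2:
area = np.sum(tri) * dt
print(area)                                      # close to 1.0 for tau = 2

# At w != 0, compare against (tau/2) * sinc^2(w*tau/4) with sinc(x) = sin(x)/x:
for w in [1.0, 3.0]:
    numeric = np.sum(tri * np.exp(-1j * w * t)) * dt
    x = w * tau / 4
    exact = (tau / 2) * (np.sin(x) / x) ** 2
    print(w, abs(numeric - exact))               # differences near zero
```

Since the triangle is real and even, the numerically computed transform also comes out (essentially) real, consistent with property 4 of Table 7.1.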
Similar to the case with periodic signals and the Fourier series, it is possible to calculate the integral of the squared magnitude of an aperiodic signal in terms of its Fourier representation. The difference in the aperiodic case is that the range of integration extends from $-\infty$ to $+\infty$, not across just a single period, and we have a Fourier transform rather than a set of Fourier series coefficients. Parseval's theorem for aperiodic signals states that

$$W \equiv \int_{-\infty}^{\infty} |f(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\,d\omega.$$
Section 7.2 Frequency-Domain Description of Signals 241
This identity¹¹ is the last entry in Table 7.1, and it provides an alternative means for calculating the energy $W$ in a signal. Parseval's theorem holds for all signals with finite $W$ (i.e., $W < \infty$), which are referred to as energy signals. The entire left column of Table 7.2 is composed of such signals $f(t)$ with finite energy $W$.

All aperiodic signals that can be generated in a lab or at a radio station are necessarily energy signals, because generating a signal with infinite $W$ is not physically possible (since the cost for the radio station would be unbounded). Parseval's theorem indicates that the energy of a signal is spread across the frequency space in a manner described by $|F(\omega)|^2$. Hence, in analogy with the power spectrum $|F_n|^2$ (for periodic signals), we refer to $|F(\omega)|^2$ as the energy spectrum. The notion of energy spectrum provides a useful basis for signal classification and bandwidth definitions, which we will discuss next.
¹¹ Proof of Parseval's theorem: Assuming that $F(\omega)$ exists,

$$\int_{-\infty}^{\infty} |f(t)|^2\,dt = \int_{-\infty}^{\infty} f(t)f^*(t)\,dt = \int_{-\infty}^{\infty} f(t)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)e^{j\omega t}\,d\omega\right]^* dt$$

$$= \frac{1}{2\pi}\int_{-\infty}^{\infty} F^*(\omega)\left[\int_{-\infty}^{\infty} f(t)e^{-j\omega t}\,dt\right]d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} F^*(\omega)F(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\,d\omega.$$
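Parseval's theorem for energy signals is easy to verify numerically for a concrete pair, such as $f(t) = e^{-t}u(t) \leftrightarrow \frac{1}{1+j\omega}$, for which $W = \frac{1}{2}$. The Python sketch below is our own illustration (truncation limits and step sizes are assumed choices):

```python
import numpy as np

# Numerical check of Parseval's theorem for the energy signal f(t) = e^{-t} u(t),
# whose Fourier transform is F(w) = 1/(1 + jw), so |F(w)|^2 = 1/(1 + w^2).
dt = 1e-4
t = np.arange(0.0, 40.0, dt)
W_time = np.sum(np.exp(-t) ** 2) * dt                        # integral of |f(t)|^2

dw = 1e-3
w = np.arange(-1e3, 1e3, dw) + dw / 2
W_freq = np.sum(1.0 / (1.0 + w ** 2)) * dw / (2 * np.pi)     # (1/2pi) * integral of |F(w)|^2

print(W_time, W_freq)                                        # both close to 1/2
```

Both integrals converge to $W = \frac{1}{2}$, up to truncation of the slowly decaying $1/(1+\omega^2)$ tail in the frequency-domain sum.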
Figure 7.10 Energy spectra of possible (a) low-pass and (b) band-pass signals.
and spectral widths (see the next section), without requiring detailed knowledge of
the signal. For instance, a transmitter that we build should work equally well for
Beethoven’s 5th and the Beatles’ “A Hard Day’s Night,” even though the two signals
have little in common other than their energy spectra shown in Figures 7.11a and
7.11b. Most audio signal spectra have the same general shape (low-pass, up to about
20 kHz or less) as the example audio spectra shown in Figure 7.11.
Figure 7.11 The normalized energy spectrum $\frac{|F(\omega)|^2}{W}$ of approximately the first 6 s of (a) Beethoven's "Symphony No. 5 in C minor" ("da-da-da-daaa . . . ") and (b) the Beatles' "A Hard Day's Night" ("BWRRAAANNG! It's been a hard day's night . . . "), plotted against frequency $f = \frac{\omega}{2\pi}$ in the range $f \in [0, 22050]$ Hz. Note that the energy spectrum (i.e., signal energy per unit frequency) is negligible in each case for $f > 15$ kHz (1 Hz $= 2\pi$ rad/s). The procedure for calculating these normalized energy spectra is described in Section 9.4.
Section 7.2 Frequency-Domain Description of Signals 243
Signal bandwidth describes the width of the energy spectrum of low-pass and band-
pass signals. We next will discuss how signal bandwidth can be defined and deter-
mined.
A low-pass signal can be assigned a bandwidth Ω = 2πB beyond which the energy spectrum |F(ω)|² is very small. For example, for the audio
signals with the energy spectra shown in Figure 7.11, we might define the bandwidth to
be B ≈ 15 kHz, since for f = ω/2π > 15 kHz the energy spectra |F(ω)|² are negligible.
There are several standardized bandwidth definitions that are appropriate for
quantitative work. One of them, the 3-dB bandwidth, requires that

|F(Ω)|²/|F(0)|² = 1/2,

or, equivalently,

10 log(|F(Ω)|²/|F(0)|²) = −3 dB,

meaning that the bandwidth Ω = 2πB is the frequency where the energy spectrum
|F(ω)|² falls to one-half the spectral value |F(0)|² at DC. Another definition for
Ω = 2πB requires that

(1/2π) ∫_{−Ω}^{Ω} |F(ω)|² dω = rW,

where r is some number such as 0.95 or 0.99 that is close to 1. Using this criterion
for Ω with r = 0.95, for example, we refer to Ω = 2πB as the 95% bandwidth of
a low-pass signal f(t), because with r = 0.95, the frequency band |ω| ≤ Ω rad/s, or
|f| ≤ B Hz, contains 95% of the total signal energy W.
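As an illustration (our own, using f(t) = e^{−t}u(t) from Table 7.2, so |F(ω)|² = 1/(1 + ω²) and W = 1/2), the two definitions can give quite different numbers:

```python
import numpy as np

spec = lambda w: 1.0 / (1.0 + w**2)   # |F(w)|^2 for f(t) = exp(-t)u(t)

# 3-dB bandwidth: |F(Omega)|^2 / |F(0)|^2 = 1/2  ->  Omega = 1 rad/s
Omega_3dB = 1.0

# 95% bandwidth: (1/2pi) * integral_{-Omega}^{Omega} |F|^2 dw = 0.95 W, W = 1/2.
# The integral has the closed form (1/pi)*arctan(Omega), so Omega = tan(0.475*pi),
# much larger than the 3-dB value.
Omega_95 = np.tan(0.475 * np.pi)

print(Omega_3dB, Omega_95)  # 1.0 and approximately 12.7 rad/s
```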
The bandwidths Ω = 2πB, just defined, characterize the half-width of the signal
energy spectrum curve |F(ω)|² (i.e., the full width over only positive ω). This conven-
tion of associating the signal bandwidth with the width of the spectrum over just posi-
tive frequencies ω is sensible, because, for real-valued f(t) (with F(−ω) = F*(ω)),
the inverse Fourier transform can be expressed as (see Exercise Problem 7.9)

f(t) = (1/2π) ∫₀^∞ 2|F(ω)| cos(ωt + ∠F(ω)) dω.
Example 7.12
The Fourier transform of signal

f(t) = rect(t/τ)

is

F(ω) = τ sinc(ωτ/2).

Hence, the energy spectrum is

|F(ω)|² = τ² sinc²(ωτ/2),

and we may take

Ω = 2π/τ

(or B = 1/τ) as the signal bandwidth. Show that this choice for Ω = 2πB
corresponds to approximately the 90% bandwidth of rect(t/τ).
Figure 7.12 Energy spectrum |F(ω)|² of the low-pass signal f(t) = rect(t) versus
ω in rad/s.
¹²There are real systems, however, containing signals that are modeled as being complex. For example,
certain communications transmitters and receivers have two channels, and it is mathematically convenient
to represent the signals in the two channels as the real and imaginary parts of a single complex signal. In
this case the Fourier transform does not have the usual symmetry, and it is preferred to define a two-sided
bandwidth covering both negative and positive frequencies.
Hence,

Ω = 2π/τ

is the r% bandwidth of the signal, where r satisfies the condition

(1/2π) ∫_{−2π/τ}^{2π/τ} τ² sinc²(ωτ/2) dω = rτ.

Numerical evaluation of this integral gives r ≈ 0.90.
The result of Example 7.12 is well worth remembering: the 90% bandwidth of
a pulse of duration τ s is B = 1/τ Hz. Hence, a 1 μs pulse has a 1 MHz (i.e., 10⁶ Hz)
bandwidth. To view a 1 μs pulse on an oscilloscope, the scope bandwidth needs to be
larger than 1 MHz.
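The 90% figure is easy to check numerically (our own sketch, assuming NumPy): integrating the energy spectrum τ² sinc²(ωτ/2) over |ω| ≤ 2π/τ recovers roughly 90% of the pulse energy W = τ:

```python
import numpy as np

tau = 1.0                      # pulse duration in seconds (r is independent of tau)
w = np.linspace(-2.0 * np.pi / tau, 2.0 * np.pi / tau, 200001)

# np.sinc(x) = sin(pi x)/(pi x), so tau*np.sinc(w*tau/(2*pi)) equals tau*sin(w tau/2)/(w tau/2)
F2 = (tau * np.sinc(w * tau / (2.0 * np.pi)))**2

# Trapezoidal estimate of (1/2pi) * integral of |F(w)|^2 over the band
E_inband = np.sum((F2[1:] + F2[:-1]) * np.diff(w)) / 2.0 / (2.0 * np.pi)
r = E_inband / tau             # total energy of rect(t/tau) is W = tau
print(r)                       # approximately 0.903
```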
Band-pass signal bandwidth: Recall that band-pass signals have energy spectra
with shapes similar to those in Figure 7.10b. Because, for real-valued f(t), we can
write

f(t) = (1/2π) ∫₀^∞ 2|F(ω)| cos(ωt + ∠F(ω)) dω,

the bandwidth Ω = 2πB of such signals is once again defined to characterize the
width of the energy spectrum |F(ω)|² over only positive frequencies ω.
For instance, the 95% bandwidth of a band-pass signal is defined to be

Ω = ωu − ωl,

where the frequencies ωu > ωl ≥ 0 satisfy

(1/2π) ∫_{ωl}^{ωu} |F(ω)|² dω = 0.95 (W/2).

The right-hand side of this constraint is 95% of just half the signal energy W, because
the integral on the left is computed over only positive ω.
Example 7.13
Determine the 95% bandwidths of the signals f (t) and g(t) with the energy
spectra shown in Figures 7.13a and 7.13c.
Solution According to Parseval's theorem, the energy W of signal f(t)
is simply the area under the |F(ω)|² curve, scaled by 1/2π. Using the formula
for the area of a triangle, we have

W = (1/2π) · (2π × 1)/2 = 1/2.
Figure 7.13 Energy spectrum of (a) a low-pass signal f(t) ↔ F(ω), (b) portion of
the energy spectrum in (a) lying outside the bandwidth Ω, and (c) energy
spectrum of a band-pass signal g(t) ↔ G(ω).
Section 7.3 LTI Circuit and System Response to Energy Signals 247
On the other hand, using the triangle-area formula once again, the signal energy lying
outside the band |ω| ≤ Ω (shown in Figure 7.13b) is

(1/2π) (π − Ω) ((π − Ω)/π).

Setting this quantity equal to 0.025 (i.e., 5% of W = 1/2) yields

(π − Ω)² = 0.05π²,

so that

Ω = π(1 − √0.05) ≈ 0.7764π rad/s.
Now, a comparison of |F(ω)|² in Figure 7.13a and |G(ω)|² in Figure 7.13c
shows that |G(ω)|², for positive ω, is a shifted replica of |F(ω)|². Hence,
for positive ω, |G(ω)|² is twice as wide as |F(ω)|², and therefore, the 95%
bandwidth of the band-pass signal g(t) is twice that of f(t), i.e.,

Ω ≈ 1.5528π rad/s.
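These numbers can be verified against the triangular spectrum of Figure 7.13a with a short numerical check (our own, not from the text):

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 2000001)
F2 = 1.0 - np.abs(w) / np.pi          # triangular |F(w)|^2 of Figure 7.13a
dw = w[1] - w[0]

W = np.sum(F2[1:] + F2[:-1]) * dw / 2.0 / (2.0 * np.pi)   # total energy, should be 1/2
Omega = np.pi * (1.0 - np.sqrt(0.05))                      # claimed 95% bandwidth
frac = np.sum(F2[np.abs(w) <= Omega]) * dw / (2.0 * np.pi) / W

print(W, Omega / np.pi, frac)  # approximately 0.5, 0.7764, 0.95
```

Since |G(ω)|² occupies, over positive ω, a band twice as wide, the 95% bandwidth of g(t) is simply 2Ω ≈ 1.5528π rad/s, as found above.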
Example 7.14
What is the 100% bandwidth of the signal g(t) defined in Example 7.13?
Solution For 100% bandwidth, we observe that ωu = 4π and ωl = 2π
rad/s. Hence, the 100% bandwidth is simply
Ω = ωu − ωl = 2π rad/s,
or B = 1 Hz.
In either case, the system output is the inverse Fourier transform of the product of the
system frequency response and the Fourier transform of the input.
e^{jωt} −→ LTI −→ H(ω)e^{jωt}.

Because this latter rule describes a steady-state response (see Section 5.2), we may
think that the relation y(t) ↔ Y(ω) = H(ω)F(ω) describes just the steady-state
response y(t) of a dissipative LTI system to an input f(t) ↔ F(ω). However, the
relation is more powerful than that: Because in dissipative systems transient compo-
nents of the zero-state response to inputs cos(ωt) and sin(ωt) applied at t = −∞
have vanished at all finite times t, the rule

e^{jωt} −→ LTI −→ H(ω)e^{jωt}

actually describes the zero-state response for all finite times t. Consequently, the
inverse Fourier transform of Y(ω) = H(ω)F(ω) represents, for all finite t, the entire
zero-state response y(t) of system H(ω), not just the steady-state part of it, to input
f(t) ↔ F(ω).

In summary, Figure 7.1 illustrates how the zero-state response can be calculated
for dissipative LTI systems, for all finite t. We shall make use of this relation in the
next several examples.
Example 7.15
The input of an LTI system

H(ω) = 1/(1 + jω)

is

f(t) = e^{−t}u(t),

shown in Figure 7.14a. Determine the zero-state response y(t).

Figure 7.14 (a) Input and (b) output signals of the system examined in
Example 7.15.
Solution Since

f(t) = e^{−t}u(t) ↔ F(ω) = 1/(1 + jω)

(entry 1 in Table 7.2), the Fourier transform of output y(t) is

Y(ω) = H(ω)F(ω) = (1/(1 + jω)) (1/(1 + jω)) = 1/(1 + jω)².

According to entry 5 in Table 7.2,

te^{−t}u(t) ↔ 1/(1 + jω)².

Therefore, the system zero-state response is

y(t) = te^{−t}u(t),

which is plotted in Figure 7.14b.
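The same answer can be reproduced with an FFT-based approximation of the Fourier method (our own numerical sketch, assuming NumPy; the grid sizes are arbitrary choices):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 50.0, dt)
f = np.exp(-t)                                  # input e^(-t)u(t) sampled for t >= 0

w = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)  # angular frequency of each FFT bin
H = 1.0 / (1.0 + 1j * w)                        # system frequency response

y = np.real(np.fft.ifft(H * np.fft.fft(f)))     # samples of the zero-state response
y_exact = t * np.exp(-t)                        # closed form found above

print(np.max(np.abs(y - y_exact)))              # small discretization error
```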
Example 7.16
Figure 7.15a shows the input

f(t) = sinc(t) ↔ F(ω) = π rect(ω/2).

Suppose this input is applied to an LTI system having the frequency response

H(ω) = rect(ω).

Figure 7.15 (a) Input and (b) output signals of the system examined in
Example 7.16.
Then

Y(ω) = H(ω)F(ω) = π rect(ω)

is the Fourier transform of the system response y(t). Taking the inverse
Fourier transform of Y(ω), we find that

y(t) = (1/2π) ∫_{−∞}^{∞} Y(ω)e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} π rect(ω)e^{jωt} dω
= (1/2) ∫_{−1/2}^{1/2} e^{jωt} dω = (e^{jt/2} − e^{−jt/2})/(2jt).

Hence,

y(t) = (1/2) sinc(t/2),
which is plotted in Figure 7.15b. Notice that the system broadens the input
f (t) by a factor of 2 by halving its bandwidth.
Example 7.17
Signal

f(t) = g1(t) + g2(t),

where g1(t) ↔ G1(ω) and g2(t) ↔ G2(ω) are depicted in Figures 7.16a and 7.16b,
is passed through a band-pass system with the frequency response H(ω)
shown in Figure 7.16c. Determine the system zero-state output y(t) in terms
of g1 (t) and g2 (t).
Solution Since f(t) = g1(t) + g2(t), the addition property gives

Y(ω) = H(ω)F(ω) = H(ω)G1(ω) + H(ω)G2(ω).

Inspection of Figure 7.16 shows that

H(ω)G1(ω) = 0

and

H(ω)G2(ω) = (1/2) G2(ω).
Hence,

Y(ω) = (1/2) G2(ω),
Figure 7.16 (a), (b) Fourier transforms G1(ω) and G2(ω) of signals g1(t) and g2(t), and (c) the
frequency response H(ω) of the system examined in Example 7.17 with an input
f(t) = g1(t) + g2(t).
so that

y(t) = (1/2) g2(t).
Thus, the system filters out the component g1 (t) from the input f (t) and
delivers a scaled-down replica of g2 (t) as the output.
Example 7.18
An input f(t) is passed through a system having frequency response

H(ω) = e^{−jωto}.

By the time-shift property, Y(ω) = F(ω)e^{−jωto}, so the output is

y(t) = f(t − to),

i.e., the system delays its input by to.
Example 7.19
The input of an LTI system

H(ω) = 1/(1 + jω)

is

f(t) = e^{t}u(−t),

shown in Figure 7.17a. Determine the output y(t).

Figure 7.17 (a) Input and (b) output signals of the system examined in
Example 7.19.
Solution Since

f(t) = e^{t}u(−t) ↔ F(ω) = 1/(1 − jω),

the Fourier transform of the output is

Y(ω) = H(ω)F(ω) = (1/(1 + jω)) (1/(1 − jω)) = 1/(1 + ω²).

Also,

e^{−|t|} ↔ 2/(1 + ω²).

Therefore, the system output is

y(t) = (1/2) e^{−|t|},
which is plotted in Figure 7.17b.
Example 7.20
The input of an LTI system

H(ω) = 1/(1 + jω)

is

f(t) = rect(t).

In this case,

Y(ω) = H(ω)F(ω) = (1/(1 + jω)) sinc(ω/2),

and we would like to evaluate the corresponding inverse Fourier transform
by hand. But, the integral looks too complicated to carry out. In Chapter 9,
we will learn a simpler way of finding y(t) for this problem. So, let us leave
the answer in integral form. (It is the right answer, but it is not in a form
suitable for visualizing the output.)
We will close the chapter with a small surprise (plus one more example), which
will illustrate the power and generality of the Fourier transform method, even though,
as we found out in Example 7.20, the application of the method may sometimes be
difficult.
Here is our surprise: Figure 7.18 shows an inductor L with voltage v(t), current
i(t), and v–i relation
di
v(t) = L .
dt
Suppose that this inductor is a component of some dissipative LTI circuit in the
laboratory, and therefore its voltage v(t) and current i(t) are lab signals (i.e., absolutely
integrable, finite energy, etc.) Thus, v(t) and i(t) have Fourier transforms V (ω) and
I (ω), respectively. Since i(t) ↔ I (ω), the time-derivative property from Table 7.1
Figure 7.18 An inductor L with voltage v(t) = L di/dt and current i(t) (left), and
its Fourier-domain description V(ω) = jωL I(ω) (right).
indicates that di/dt ↔ jωI(ω). Thus, the amplitude-scaling property (with K = L)
implies that

L di/dt ↔ jωL I(ω),

or

V(ω) = Z I(ω),

with

Z = jωL.
Notice that we effectively have obtained a Fourier V(ω)–I(ω) relation (see Figure 7.18
again) for the inductor, which has the same form as the phasor V–I relation for the
same element. The V(ω)–I(ω) relations for a capacitor C and resistor R are similar to
the relation for the inductor, but with impedances Z = 1/(jωC) and Z = R, respectively,
just as in the phasor case.

So, it is no wonder that the frequency-response function H(ω) derived by the
phasor method also describes the circuit response to arbitrary inputs. The next example
will show how the Fourier transform method can be applied to directly analyze a
circuit.
Example 7.21
Determine the response y(t) ↔ Y (ω) of the circuit shown in Figure 7.19a
to an arbitrary input f (t) ↔ F (ω).
Figure 7.19 (a) An LTI circuit with an arbitrary input f (t), and (b) the Fourier equivalent of the same
circuit.
Solution Applying KCL at the output node of the Fourier-domain equivalent
circuit in Figure 7.19b gives (F(ω) − Y(ω))/1 = jω Y(ω) + 3Y(ω). Thus,

Y(ω) = (1/(4 + jω)) F(ω),
which is the Fourier transform of the system zero-state response y(t).
Hence, the system frequency response is
1
H (ω) = ,
4 + jω
and, using the inverse Fourier transform,

y(t) = (1/2π) ∫_{−∞}^{∞} (1/(4 + jω)) F(ω)e^{jωt} dω.
This result is valid for any laboratory input f (t) ↔ F (ω).
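As a cross-check of this frequency response (our own numerical sketch, assuming NumPy, and assuming the standard pair 1/(a + jω) ↔ e^{−at}u(t) with a = 4), a direct numerical Fourier integral of the impulse response e^{−4t}u(t) should reproduce H(ω) = 1/(4 + jω):

```python
import numpy as np

dt = 0.0005
t = np.arange(0.0, 10.0, dt)
h = np.exp(-4.0 * t)                            # candidate impulse response e^(-4t)u(t)

for w in [0.0, 1.0, 4.0, 20.0]:
    Fw = np.sum(h * np.exp(-1j * w * t)) * dt   # numerical Fourier transform at w
    Hw = 1.0 / (4.0 + 1j * w)
    assert abs(Fw - Hw) < 1e-3                  # matches 1/(4 + jw)
print("H(w) = 1/(4 + jw) confirmed numerically")
```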
EXERCISES
7.1 (a) Given that f (t) = e−a(t−to ) u(t − to ), where a > 0, determine the Fourier
transform F (ω) of f (t).
(b) Given that
1
g(t) = ,
a + jt
where a > 0, determine the Fourier transform G(ω) of g(t) by using
the symmetry property and the result of part (a).
(c) Confirm the result of part (b) by calculating g(t) from G(ω), using the
inverse Fourier transform integral.
7.2 Let f(t) = rect(t/2).
7.3 Determine the Fourier transform F (ω) of the following signal f (t):
f (t)
1
−3 −2 −1 1 2 3
−1
7.6 Given that f (t) = 52 ( 5t ), evaluate the Fourier transform F (ω) at ω = 0.
7.7 (a) Show that for real-valued signals f (t), the Fourier transform F (ω)
satisfies the property
F (−ω) = F ∗ (ω).
(b) Using this result, show that for real-valued f (t), we have |F (−ω)| =
|F (ω)| and ∠F (−ω) = −∠F (ω) (i.e. that the magnitude of the Fourier
transform is even and the phase is odd).
7.8 On an exam, you are asked to calculate F (0) for some real-valued signal
f (t). You obtain the answer F (0) = 4 − j 2. Explain why, for sure, you have
made a mistake in your calculation.
7.9 Show that, given a real-valued signal f (t), the inverse Fourier transform
integral can be expressed as
f(t) = (1/2π) ∫₀^∞ 2|F(ω)| cos(ωt + ∠F(ω)) dω.
7.11 Determine the 3-dB bandwidth and the 95% bandwidth of signals f (t) and
g(t) with the following energy spectra:
|F (ω)| 2
1
π π ω (rad/s)
2
|G(ω)| 2
1
−2 π π 2π 3π ω (rad/s)
7.12 (a) Let f(t) = f1(t) + f2(t) such that f1(t) ↔ F1(ω) and f2(t) ↔ F2(ω).
Show that f(t) ↔ F1(ω) + F2(ω).
(b) The input signal of an LTI system with a frequency response H (ω) =
|H (ω)|ej χ(ω) is f1 (t) + f2 (t). Functions F1 (ω), F2 (ω), H (ω), and
χ(ω) are given graphically as follows:
F1 (ω)
ω (rad/s)
− 10π F2 (ω) 10π
ω (rad/s)
− 10π |H (ω)| 10π
4
2
ω (rad/s)
− 10π 10π
χ(ω)
ω (rad/s)
− 10π 10π
−π rad
2Ω
+
f (t) + 3F y(t)
−
−
8 Modulation and AM Radio
¹A transducer converts energy from one form to another, for example, electrical to acoustical.
260 Chapter 8 Modulation and AM Radio
Figure 8.1 (a) A signal f (t), and (b) a corresponding AM signal f (t) cos(ωc t) with
a carrier frequency ωc .
Example 8.1
Prove the Fourier time-shift property of Table 7.1.

Solution The time-shift property states that

f(t − to) ↔ F(ω)e^{−jωto},

where F(ω) denotes the Fourier transform of f(t). To verify this property,
we write the Fourier transform of f(t − to) as

∫_{−∞}^{∞} f(t − to)e^{−jωt} dt = ∫_{−∞}^{∞} f(x)e^{−jω(x+to)} dx = e^{−jωto} ∫_{−∞}^{∞} f(x)e^{−jωx} dx,

using the substitution x = t − to ⇒ t = x + to. The last integral is F(ω), so

f(t − to) ↔ F(ω)e^{−jωto},

as claimed.
Example 8.2
Prove the Fourier frequency-shift property of Table 7.1.
Solution The frequency-shift property states that
f (t)ej ωo t ↔ F (ω − ωo ).
This is the dual of the time-shift property. It can be proven as in Example 8.1,
except by the use of the inverse Fourier transform. Alternatively, a more
direct proof yields
∫_{−∞}^{∞} f(t)e^{jωo t}e^{−jωt} dt = ∫_{−∞}^{∞} f(t)e^{−j(ω−ωo)t} dt = F(ω − ωo),
as claimed.
The frequency-shift property will play a major role in this chapter. In particular,
it forms the basis for the modulation property in Table 7.1. The modulation property
is derived as follows. Evaluating the frequency-shift property for ωo = ±ωc gives
f (t)ej ωc t ↔ F (ω − ωc )
and
f (t)e−j ωc t ↔ F (ω + ωc ).
Hence, since cos(ωc t) = (e^{jωc t} + e^{−jωc t})/2, adding the two relations above (each
scaled by 1/2) gives

f(t) cos(ωc t) ↔ (1/2) F(ω − ωc) + (1/2) F(ω + ωc),

which is the modulation property.
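The property can be illustrated numerically (our own sketch; the double-sided exponential test signal and the carrier value are arbitrary choices):

```python
import numpy as np

dt = 0.001
t = np.arange(-50.0, 50.0, dt)
f = np.exp(-np.abs(t))                  # test signal with known transform F(w) = 2/(1+w^2)
wc = 40.0                               # carrier well above the signal bandwidth
g = f * np.cos(wc * t)                  # AM signal

F = lambda w: 2.0 / (1.0 + w**2)
for w in [-wc, 0.0, wc, wc + 1.0]:
    Gw = np.sum(g * np.exp(-1j * w * t)) * dt       # numerical transform of f(t)cos(wc t)
    Gw_pred = 0.5 * F(w - wc) + 0.5 * F(w + wc)     # modulation property
    assert abs(Gw - Gw_pred) < 1e-3
print("replicas of F centered at +/- wc, each half height")
```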
The implication of the modulation property is illustrated in Figure 8.2. The top
figure shows the Fourier transform F(ω) of some signal f(t). The bottom figure then
shows the Fourier transform of f(t) cos(ωc t), where the replicas of F(ω) have half
the height of the original (as depicted by the dashed curve in the top figure) and are
shifted to the right and left by ωc. The modulation process cannot be described in
terms of a frequency response function H(ω), because, unlike the LTI filtering oper-
ations discussed in Section 7.3, modulation is a time-varying operation that shifts
the energy content of signal f(t) from its baseband to an entirely new location
in the frequency domain. Figure 8.2 illustrates how the modulation process gener-
ates a bandpass signal f(t) cos(ωc t) from a low-pass signal f(t). We will refer to
f(t) cos(ωc t) as an AM signal with carrier ωc.
Figure 8.2 An example illustrating the modulation property: Fourier transform of some low-pass signal
f (t) (top) and Fourier transform of the AM signal f (t) cos(ωc t) (bottom).
Figure 8.3 Another example illustrating the modulation property: Fourier transform of some band-pass
signal g(t) (top) and Fourier transform of modulated multi-band signal g(t) cos(ωc t) (bottom). The short-
and long-dashed arrows from top to bottom suggest how the right and left shifts of half of G(ω) result
in the Fourier transform of the modulated signal g(t) cos(ωc t).
Section 8.1 Fourier Transform Shift and Modulation Properties 263
Figure 8.4 System symbol for AM modulation process. The multiplier unit is
known as a mixer. Shifting the frequency content of a signal to a new location
in ω-space, by the use of a mixer, also is known as heterodyning.
(1) Radio antennas, which convert voltage and current signals into radiowaves
(and vice versa), essentially are high-pass systems² with negligible amplitude
response |Hant(ω)| for |ω| < πc/(2L), where c = 3 × 10⁸ m/s is the speed of light
and L is the physical size of the antenna.³ For L = 75 m, for instance, the
antenna performs poorly unless

ω ≥ (π × 3 × 10⁸)/150 = 2π × 10⁶ rad/s,

or, equivalently, frequencies are above 1 MHz. No practical antenna with a
reasonable physical size exists to transmit an audio signal f(t) directly. By
contrast, a modulated AM signal, f(t) cos(ωc t), can be efficiently radiated
with an antenna having a 75-m length, when the carrier frequency is in the
1-MHz frequency range.
(2) Even if efficient antennas were available in the audio frequency range, radio
communication within the audio band would be problematic. The reason for
this is that all signal transmissions then would occupy the same frequency
band. Thus, it would be impossible to extract the signal for a single station
of interest from the superposed signals of all other stations operating in the
same geographical region. In other words, the signals would interfere with one
another. To overcome this difficulty, modulation is essential.
In the United States, different AM stations operating in the same vicinity are
assigned different individual carrier frequencies—chosen from a set spaced 10 kHz
or 2π × 104 rad/s apart, within the frequency range 540 to 1700 kHz. With a 10-kHz
operation bandwidth, each AM radio station can broadcast a voice signal of up to
5-kHz bandwidth. (Can you explain why?)
Example 8.3
A mixer is used to multiply a low-pass signal f(t) ↔ F(ω), shown in
Figure 8.5a, with the periodic signal

1 + Σ_{n=1}^{∞} (1/n) cos(nωo t)

to produce the output

m(t) = f(t) [1 + Σ_{n=1}^{∞} (1/n) cos(nωo t)].

Determine M(ω).
²Examined and modeled in courses on electromagnetics.
³For a monopole antenna over a conducting ground plane.
Figure 8.5 (a) Fourier transform of a low-pass signal f(t), (b) M(ω) computed in Example 8.3, (c) a
low-pass filter HLPF(ω), and (d) a band-pass filter HBPF(ω).
Next, we apply the addition and modulation properties of the Fourier trans-
form to this expression to obtain
M(ω) = F(ω) + Σ_{n=1}^{∞} (1/2n) {F(ω − nωo) + F(ω + nωo)}.
This result is plotted in Figure 8.5b. Notice that f (t) can be extracted from
m(t) with the low-pass filter HLPF (ω) shown in Figure 8.5c. On the other
hand, filtering m(t) with the band-pass filter HBPF (ω), shown in Figure 8.5d,
would generate a band-pass AM signal f (t) cos(ωo t) having carrier ωo .
The antenna for an AM radio receiver typically captures the signals from tens
of radio stations simultaneously. The receiver uses a front-end band-pass filter with
an adjustable passband to respond to just the single AM signal f (t) cos(ωc t) to be
Section 8.2 Coherent Demodulation of AM Signals 265
“tuned in.” The band-pass filter output then is converted into an audio signal that
is proportional to f (t − to ). This conversion process, known as demodulation, is
discussed in the next two sections.
The block diagram in Figure 8.6a depicts a possible AM communication system. The
mixer on the left idealizes an AM radio transmitter. The mixer on the right and the
low-pass filter HLPF (ω) constitute a coherent-demodulation, or coherent-detection,
AM receiver. Because the transmitter and receiver are located at different sites, the
transmitted AM signal
f (t) cos(ωc t)
Figure 8.6 (a) Block diagram of an AM communication link consisting of a transmitter mixer, a
propagation channel, and a coherent-detection receiver; (b)–(e) example plots of F(ω), |R(ω)|, |M(ω)|,
and the receiver low-pass filter HLPF(ω).
arrives at the receiver after traveling the intervening distance through some channel
(usually, through air in the form of a radiowave). In the block diagram the center box
represents an ideal propagation channel from the transmitter to the receiver, which
only delays the AM signal by an amount to and scales it in amplitude by a constant
factor k. Hence, the receiver input labeled as r(t) is

r(t) = k f(t − to) cos(ωc(t − to)).

Figures 8.6b and 8.6c show plots of a possible F(ω) and the corresponding |R(ω)|.

Let us now examine how demodulation takes place as in Figure 8.6a. The mixer
output in the receiver is m(t) = r(t) cos(ωc(t − to)), with Fourier transform

M(ω) = (k/2) F(ω)e^{−jωto} + (k/4) {F(ω − 2ωc) + F(ω + 2ωc)}e^{−jωto}.

The first term of M(ω) is the Fourier transform of the signal

(k/2) f(t − to),

which
is the delayed audio signal that we hope to recover. To extract this signal, we pass
m(t) through the low-pass filter HLPF(ω) shown in Figure 8.6e. The result is

Y(ω) = HLPF(ω)M(ω) = (k/2) F(ω)e^{−jωto},

implying that

y(t) = (k/2) f(t − to),

the desired audio signal at the loudspeaker input.⁴
The crucial and most difficult step in coherent demodulation is the mixing of the
incoming signal r(t) with cos(ωc (t − to )). The difficulty lies in obtaining cos(ωc (t −
to )) for mixing purposes. A locally generated cos(ωc t + θ) in the receiver with the
right frequency ωc , but an arbitrary phase shift θ = −ωc to , will not work, because
even small fluctuations of the propagation delay to will translate to large phase-shift
variations of the carrier and will cause y(t) to fluctuate. Coherent detection receivers
are thus required to extract cos(ωc (t − to )) from the incoming signal r(t) before the
mixing can take place. This requirement increases the complexity and, therefore, the
cost of coherent demodulation receivers. The term coherent demodulation or detection
refers to the requirement that the phase shift θ of the mixing signal cos(ωc t + θ) be
coherent (same as or different by a constant amount) with the phase shift −ωc to of
the incoming carrier.
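The complete coherent chain can be sketched numerically (our own illustration; the tone, carrier, delay, and gain values are arbitrary, and the simulation window is chosen so that every component is periodic in it):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 20.0, dt)
t0, k = 0.25, 0.8                          # channel delay and gain (illustrative)
wc = 40.0 * np.pi                          # carrier
f_d = np.cos(2.0 * np.pi * (t - t0))       # delayed "audio" signal f(t - t0)

r = k * f_d * np.cos(wc * (t - t0))        # received AM signal
m = r * np.cos(wc * (t - t0))              # coherent mixer output

# Ideal low-pass filter (brick wall at 20 rad/s) applied in the Fourier domain
w = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)
y = np.real(np.fft.ifft(np.fft.fft(m) * (np.abs(w) < 20.0)))

print(np.max(np.abs(y - 0.5 * k * f_d)))   # ~0: the output is (k/2) f(t - t0)
```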
The receiver complexity can be reduced if the incoming signal is of the form

r(t) = (f(t) + α) cos(ωc(t − to)),

with f(t) + α > 0. In that case a simple envelope detection procedure, described in the next section, works well,
obviating the need to generate cos(ωc(t − to)) within the receiver.
Consider the modified AM transmitter system shown in Figure 8.7a. In the modified
transmitter, a DC offset α is added to the voice signal f (t), and the sum is used
⁴In practice, y(t) is amplified by an audio amplifier (which is not shown) prior to being applied to the
loudspeaker.
Figure 8.7 (a) An idealized AM transmitter with offset (α) insertion, (b) a
possible voice signal f (t), (c) AM signal f (t) cos(ωc t) and its envelope |f (t)|, and
(d) AM signal (f (t) + α) cos(ωc t) with a DC offset α and its envelope
|f (t) + α| = f (t) + α. Notice that the envelope function in (d) is an offset version
of f (t) plotted in (b); the envelope in (c), however, does not resemble f (t).
to modulate the amplitude of a carrier cos(ωc t). A possible waveform for f (t) is
shown in Figure 8.7b. The incoming signal to an AM receiver, from the transmitter
in Figure 8.7a, will be

r(t) = (f(t) + α) cos(ωc t),

assuming, for simplicity, a propagation channel with k = 1 and zero time delay to.
This signal is plotted in Figures 8.7c and 8.7d for the cases α = 0 and α > max |f (t)|.
The bottoms of Figures 8.7c and 8.7d show the envelopes of r(t) for the same two
cases, where the envelope of r(t) is defined as
|f (t) + α|
(also indicated by the dashed lines superposed upon the r(t) curves in Figures 8.7c
and 8.7d).
Section 8.3 Envelope Detection of AM Signals 269
Except for a DC offset α, the envelope signal shown in Figure 8.7d is the same
as the desired signal f(t). By contrast, the envelope |f(t)| shown in Figure 8.7c,
corresponding to the case α = 0, does not resemble f(t), because of the "rectification
effect" of the absolute-value operation. We next will describe an envelope detector
system that can extract from the AM signal shown at the top of Figure 8.7d the desired
envelope

|f(t) + α| = f(t) + α.
Figure 8.8a shows an ideal envelope detector that consists of a full-wave rectifier
followed by a low-pass filter HLPF(ω). The figure also includes plots of a possible AM
input signal r(t) (same as in Figure 8.7d), the rectifier output p(t) = |r(t)|, and the
output q(t) that follows the peaks of p(t) and therefore is equal to the envelope of
the input r(t).

Assuming that α > max |f(t)| so that f(t) + α > 0 for all t, we have
|f(t) + α| = f(t) + α, and the rectifier output is

p(t) = |r(t)| = |(f(t) + α) cos(ωc t)| = |f(t) + α| |cos(ωc t)| = (f(t) + α)|cos(ωc t)|,
Figure 8.8 (a) Block diagram of an ideal envelope detector, as well as example plots of an input signal
r(t), its rectified version p(t) = |r(t)|, and the detector output q(t), (b) plot of a full-wave rectified AM
carrier, and (c) example F(ω) and frequency response of an ideal low-pass filter (with DC gain 2/a0)
included in the envelope detector system.
as shown in the figure. Clearly, after rectification the desired AM signal envelope
f (t) + α appears as the amplitude of the rectified cosine | cos(ωc t)| shown in
Figure 8.8b.
We next will see that we can extract f (t) + α from p(t) by passing p(t) through
a low-pass filter HLPF (ω) having the shape shown in Figure 8.8c. The rectified cosine
| cos(ωc t)| of Figure 8.8b is a periodic function with period
T = Tc/2 = (2π/ωc)/2 = π/ωc

and fundamental frequency

ωo = 2π/T = 2π/(π/ωc) = 2ωc.
Thus, it can be expanded in a Fourier series as

|cos(ωc t)| = a0/2 + Σ_{n=1}^{∞} an cos(n2ωc t)

with an appropriate set of Fourier coefficients an. (See Example 6.7 in Section 6.2.4,
where it was found that a0 = 4/π.) Hence, the rectifier output, or the input to filter
HLPF(ω), can be expressed as

p(t) = (f(t) + α)|cos(ωc t)| = p1(t) + p2(t),
where

p1(t) ≡ (a0/2) f(t) + Σ_{n=1}^{∞} an f(t) cos(n2ωc t)

and

p2(t) ≡ (a0/2) α + Σ_{n=1}^{∞} an α cos(n2ωc t).
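The DC coefficient quoted above is easy to verify numerically (our own check): the average of |cos(ωc t)| over one period is a0/2 = 2/π:

```python
import numpy as np

wc = 1.0
t = np.linspace(0.0, np.pi / wc, 200001)      # one period of |cos(wc t)| is pi/wc
a0_over_2 = np.mean(np.abs(np.cos(wc * t)))   # average value equals a0/2
print(a0_over_2, 2.0 / np.pi)                 # both approximately 0.6366
```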
Now, the response of the filter HLPF(ω) to the input p2(t) is

q2(t) = α,

because only the DC term (a0/2)α of p2(t) lies within the filter passband and the
filter gain at DC is 2/a0 (see Figure 8.8c). To determine the filter response q1(t) to
input p1(t), we first observe (using the Fourier addition and modulation properties) that

P1(ω) = (a0/2) F(ω) + Σ_{n=1}^{∞} (an/2) {F(ω − n2ωc) + F(ω + n2ωc)}.

Because only the first term is within the passband of HLPF(ω), it follows that

Q1(ω) = HLPF(ω)P1(ω) = (2/a0)(a0/2) F(ω) = F(ω),

implying that

q1(t) = f(t).

Therefore, using superposition, we find the filter output due to input p(t) =
p1(t) + p2(t) to be

q(t) = q1(t) + q2(t) = f(t) + α,

the envelope of the AM input.
Example 8.4
Is envelope detection a linear or nonlinear process? Explain.

Solution Envelope detection is a nonlinear process, because, in general,
the envelope of

(f1(t) + f2(t)) cos(ωc t)

will be different from the sum of envelopes |f1(t)| and |f2(t)| of signals

f1(t) cos(ωc t)

and

f2(t) cos(ωc t),

respectively. The envelope of (f1(t) + f2(t)) cos(ωc t) is |f1(t) + f2(t)|,
but

|f1(t) + f2(t)| ≠ |f1(t)| + |f2(t)|

unless, for each value of t, f1(t) and f2(t) have the same algebraic sign.
Example 8.5
Suppose that the input signal of an envelope detector is

r(t) = (f(t) + α) cos(ωc(t − to)),

where to > 0. What will be the detector output, assuming that f(t) + α >
0?

Solution The detector output still will be

f(t) + α,

because the envelope of r(t) is unaffected by the phase shift −ωc to of the
carrier.
We have just described the operation of an ideal envelope detector. Figure 8.9
shows a practical envelope detector circuit that offers an approximation to the ideal
system in Figure 8.8a (which consists of both a full-wave rectifier and an ideal low-
pass filter). The practical and very simple detector consists of a diode in series with a
parallel RC network. The capacitor voltage q(t) in the circuit will closely approximate
the envelope of an AM input r(t) if the RC time constant of the circuit is appropriately
selected (long compared with the carrier period 2π ωc and short compared with the
inverse of the bandwidth of the envelope of r(t)). Whenever the capacitor voltage
q(t) is larger than the instantaneous value of the AM input r(t), the diode behaves as
an open circuit and the capacitor discharges through resistor R with a time constant
RC. When the decaying capacitor voltage q(t) drops below r(t), the diode starts
conducting, the capacitor begins recharging, and q(t) is pulled up to follow r(t).
The decay and growth cycle repeats after r(t) dips below q(t) (once every period of
cos(ωc t)), and the overall result is an output that closely approximates the envelope
of r(t). The output will contain a small ripple component (a deviation from the true
envelope) with an energy content concentrated near the frequency ωc and above. This
component cannot be converted to sound by audio loudspeakers and headphones;
consequently, its presence can be ignored for all practical purposes.
Figure 8.9 A practical envelope detector. When r(t) > q(t), the diode conducts
and the capacitor C charges up to a voltage q(t) that remains close to the
envelope of r(t).
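The charge/discharge cycle just described can be simulated directly (our own idealized sketch: a perfect diode with instantaneous charging, and illustrative frequencies rather than actual AM-band values):

```python
import numpy as np

dt = 1e-7
t = np.arange(0.0, 0.02, dt)
fc = 100e3                                   # carrier: 100 kHz (illustrative)
f = np.sin(2.0 * np.pi * 100.0 * t)          # 100 Hz "audio"
alpha = 1.5                                  # DC offset with alpha > max|f(t)|
r = (f + alpha) * np.cos(2.0 * np.pi * fc * t)

RC = 2e-4   # 0.2 ms: long vs. the 10 us carrier period, short vs. the 10 ms audio period
decay = np.exp(-dt / RC)
q = np.zeros_like(r)
for n in range(1, r.size):
    q[n] = max(r[n], q[n - 1] * decay)       # diode conducts, or C discharges through R

ripple = np.max(np.abs(q[5000:] - (f + alpha)[5000:]))   # skip initial charge-up
print(ripple)  # small compared with the envelope swing (0.5 to 2.5)
```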
Section 8.4 Superheterodyne AM Receivers with Envelope Detection 273
Figure 8.10 depicts a portion of the energy spectrum of a possible AM receiver input
versus frequency f = ω/2π ≥ 0. The spectrum consists of features 10 kHz apart, where
each feature is the energy spectrum of a different AM signal broadcast from a different
radio station. A practical AM receiver, employing an envelope detector, needs to select
one of these AM signals by using a band-pass filter placed at the receiver front end.
One option is to use a band-pass filter with a variable, or tunable, center frequency.
However, such filters are difficult to build when the bandwidth-to-center-frequency
ratio B/f is of the order 1% or less. With B = 10 kHz and f = 1 MHz (a typical
frequency in the AM broadcast band), this ratio⁵ is exactly 1%.
Figure 8.10 Energy spectrum |R(ω)|² of a possible AM receiver input versus f = ω/2π in the range
560–620 kHz; the feature at 580 kHz belongs to station WILL.
A better option is to use a band-pass filter with a fixed center frequency

f = fIF = ωIF/2π = 455 kHz,

and to shift, or heterodyne, the frequency band of the desired AM signal into the
pass-band of this filter, called an IF filter, which is short for intermediate-frequency
filter.
As shown in Figure 8.11, we can do this by mixing the receiver input signal r(t)
with a local-oscillator signal cos(ωLO t) of an appropriate frequency ωLO prior to entry
of r(t) into the IF filter. To understand how this procedure works, let us first assume
that in Figure 8.11

⁵This is also the inverse of the quality factor Q introduced and discussed in Chapter 12.
Figure 8.11 Heterodyning an incoming signal r(t) into the IF band by the use
of a local-oscillator signal cos(ωLO t).
r(t) = f(t) cos(ωc t),

representing a single AM signal. The mixer output then can be represented as⁶

m(t) = r(t) cos(ωLO t) = (1/2) f(t) cos((ωLO − ωc)t) + (1/2) f(t) cos((ωLO + ωc)t).

If the LO frequency is chosen as

ωLO = ωc + ωIF

so that

ωLO − ωc = ωIF,

the first term of m(t) falls in the pass-band of the IF filter.

⁶ Here we use the identity cos(a) cos(b) = (1/2) cos(b − a) + (1/2) cos(b + a); since cosine is an even function, the order of the difference (i.e., b − a versus a − b) does not matter. To verify the identity, expand cos(b ∓ a) = cos(b) cos(a) ± sin(b) sin(a) and add the two expansions.
Figure 8.12 Fourier-domain view of the mixing and filtering operations shown
in Figure 8.11. R(ω) represents the Fourier transform of the input to the system.
Figure 8.12 helps illustrate, in the Fourier domain, the mixing and filtering opera-
tions shown in Figure 8.11. Figure 8.12 depicts possible Fourier transforms of signals
f (t), r(t), and m(t), as well as an ideal IF-filter frequency response HIF (ω). Notice
that R(ω) and M(ω) can be obtained from F (ω) via successive uses of the Fourier
modulation property with frequency shifts ±ωc and ±ωLO = ±(ωc + ωIF ). Compar-
ison of R̃(ω) and R(ω) indicates that, at the IF filter output, the signal r̃(t) is just a
carrier-shifted version of the signal r(t).
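This carrier-shifting step can be sketched numerically. The Python snippet below is an illustration of ours (not from the text): the sampling rate, duration, and the bare-carrier stand-in for r(t) are arbitrary choices, while the station and IF frequencies follow the WILL example used in this section.

```python
import numpy as np

# Heterodyning sketch: mix a 580 kHz carrier (WILL) with a high-side LO at
# fLO = fc + fIF and locate the two spectral components of the product.
fc, fIF = 580e3, 455e3        # carrier and intermediate frequencies (Hz)
fLO = fc + fIF                # 1035 kHz, high-side local oscillator
fs = 8e6                      # simulation sampling rate (arbitrary)
t = np.arange(0, 2e-3, 1/fs)  # 2 ms of signal

r = np.cos(2*np.pi*fc*t)              # bare carrier standing in for r(t)
mixed = r * np.cos(2*np.pi*fLO*t)     # LO mixer output

# The product identity predicts equal components at fLO - fc = fIF and
# fLO + fc; an IF filter would retain only the first.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1/fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks[0]/1e3, peaks[1]/1e3)     # 455.0 1615.0 (kHz)
```

The difference-frequency component lands exactly at the 455 kHz IF, which is the component the IF filter selects.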
In commercial AM receivers,

fIF = ωIF/(2π) = 455 kHz

is used as the standard intermediate frequency. So, to listen to WILL, which broadcasts
from the University of Illinois with the carrier frequency

fc = ωc/(2π) = 580 kHz,

the local oscillator must be tuned to fLO = fc + fIF = 1035 kHz, for instance, with an LC oscillator whose resonant frequency satisfies

1/√(LC) = 2πfLO.
If a second AM station broadcasts with the image carrier frequency ωc + 2ωIF while the receiver is
tuned to carrier ωc, its Fourier transform (the M-shaped component) is also shifted
into the IF band, as illustrated in Figure 8.13 (a straightforward consequence of the
Fourier modulation property applied to R(ω)). Thus, the signal from the second
station, called the image station, interferes with the ωc carrier signal, unless r(t) is
band-pass filtered prior to the LO mixer to eliminate the image station signal. This
filtering can be accomplished with a preselector band-pass filter placed in front of the
LO mixer. The preselector filter must have a variable center frequency ωc, but it need
not be a high-quality filter. The preselector is permitted to have a wide bandwidth
that varies with ωc, so long as the bandwidth remains smaller than about 2ωIF, since
ωc and the image carrier ωc + 2ωIF are 2ωIF apart (and thus a narrower preselector
bandwidth is unnecessary). Since the bandwidth-to-center-frequency ratio

2ωIF/ωc = 2fIF/fc
Figure 8.13 Similar to Figure 8.12 but illustrating the image station problem.
[Figure 8.14: block diagram of a superheterodyne AM receiver. From the antenna, r(t) passes through the preselector H_RF(ω) (center frequency ωc), an RF amplifier (RFA), a mixer driven by the local oscillator cos(ωLO t) with ωLO = ωc + ωIF, the IF filter H_IF(ω), an IF amplifier (IFA), an envelope detector, and an audio amplifier (AA) with volume control, before driving the speaker; a tuning knob adjusts the local oscillator and the preselector together.]
of the preselector is close to 100% in the AM band, its construction is relatively simple
and inexpensive, despite the variable center-frequency requirement. The tuning knob
in commercial AM receivers simultaneously controls both the LO frequency ωc + ωIF
and the preselector center frequency ωc .
Figure 8.14 shows the block diagram of a superheterodyne, or superhet, AM
receiver that incorporates the features discussed. In addition to the preselector (to
eliminate the image station), the LO mixer (to heterodyne the station of interest into
the IF band), the IF filter (to eliminate adjacent channel stations), and the envelope
detector, the diagram includes three blocks identified as RFA, IFA, and AA, which
are RF, IF, and audio amplifiers, respectively. The audio amplifier is controlled by an
external volume knob to adjust the sound output level of the receiver. The amplifier
also blocks the DC component α from the envelope detector. In commercial AM
receivers the same DC component is used to control the gains of the RF and IF
amplifiers to maintain a constant output level under slowly time-varying propagation
or reception conditions (such as the decrease in the incoming signal level that you
experience when you are traveling in your car away from a broadcast station).
We note that in superhet receivers,
ωLO = ωc + ωIF
is not the only possible LO choice. A local oscillator signal cos(ωLO t) with frequency
ωLO = ωc − ωIF
also works, and ωIF can be specified below, within, or even above the broadcast band.
Some of these possibilities are explored in Exercise Problem 8.8.
However, the standard IF frequency of 455 kHz for commercial AM receivers
and the high-LO standard ωLO = ωc + ωIF have advantages: First, the low IF (i.e., an IF
below the AM broadcast band) leads to a reasonably large B/fIF ratio that lessens the
cost of the IF filter circuit. Second, the high-LO choice (i.e., ωLO = ωc + ωIF) has the
advantage over low-LO in that it requires the generation of LO frequencies only in
the range from 995 to 2155 kHz, a reasonable 2-to-1 tuning ratio as opposed to a
more demanding 15-to-1 ratio (85 to 1245 kHz) that would be required by a low-LO
system based on ωLO = ωc − ωIF .
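The tuning-range arithmetic behind these ratios can be reproduced in a few lines. In this sketch of ours, the AM broadcast band is taken as roughly 540 to 1700 kHz, an assumption consistent with the quoted LO ranges (995 = 540 + 455 and 2155 = 1700 + 455).

```python
# High-LO versus low-LO tuning ranges for fIF = 455 kHz (all values in kHz).
f_low, f_high, f_if = 540, 1700, 455   # assumed AM band edges and standard IF

high_lo = (f_low + f_if, f_high + f_if)   # LO range for wLO = wc + wIF
low_lo = (f_low - f_if, f_high - f_if)    # LO range for wLO = wc - wIF

print(high_lo, round(high_lo[1]/high_lo[0], 2))   # (995, 2155) 2.17
print(low_lo, round(low_lo[1]/low_lo[0], 2))      # (85, 1245) 14.65
```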
EXERCISES
[Exercise figure: plots of F(ω), with peak value 1, and H(ω), with peak value 2, versus ω, with markings at 4π and 8π rad/s.]
(a) Sketch the Fourier transform Y (ω) of the system output y(t) and calcu-
late the energy Wy of y(t).
(b) It is observed that output q(t) of the following system equals y(t)
determined in part (a).
[Exercise figure: a system in which p(t) is multiplied by cos(ω_o t) to produce q(t); plots of F(ω), of height 8 over −π < ω < π rad/s, and H(ω), of height 1 over −2π < ω < 2π rad/s.]
8.9 For each possible choice of ωLO in Problem 8.8(a), determine the carrier
frequency of the corresponding image station.
8.10 What would be a disadvantage of using a very low IF, say, fIF = 20 kHz,
in AM reception? Also, under what circumstances could such a low IF be
tolerated? Hint: Think of image station interference issues.
9
Convolution, Impulse, Sampling, and Reconstruction
[Margin note: A time-domain perspective on signal processing: convolution, impulse δ(t), impulse response h(t); Fourier transform of power signals; sampling and signal reconstruction.]

Figure 9.1 is a replica of Figure 5.9a from Chapter 5. In Chapters 5 through 7 we
developed a frequency-domain description of how dissipative LTI systems, such as
H(ω) = 1/(1 + jω), convert their inputs f(t) to outputs y(t). The description relies on the
fact that all signals that can be generated in the lab can be expressed as sums of co-sinusoids
(a discrete or continuous sum (integral), depending on whether the signal
is periodic or nonperiodic). LTI systems scale the amplitudes of co-sinusoidal inputs
with the amplitude response |H(ω)| and shift their phases by the phase response
∠H(ω). When all co-sinusoidal components of an input f(t) are modified according
to this simple rule, their sum is the system response y(t). That, in essence, is the
frequency-domain description of how LTI systems work.
There is an alternative time-domain description of the same process, which is
the main topic of this chapter. Notice how response y(t) in Figure 9.1 appears as an
"ironed-out" version of input f(t); it is as if y(t) is some sort of "running average"
of the input. In this chapter we will learn that is exactly the case for a low-pass filter.
More generally, the output y(t) of any LTI system is a weighted linear superposition
of the present and past values of the input f(t), where the specific type of running
average of f(t) is controlled by a function h(t) related to the frequency response
H(ω). For reasons that will become clear, we will refer to h(t) as the system impulse
response, and we will use the term convolution to describe the mathematical operation
between f(t) and h(t) that yields y(t).
[Figure 9.1: input f(t) (left) and output y(t) (right) of the low-pass filter H(ω) = 1/(1 + jω), plotted for 0 < t < 60 with amplitudes ranging over ±2; y(t) is a smoothed version of f(t).]
In Section 9.1 we will examine the convolution properties of the Fourier trans-
form (properties 13 and 14 in Table 7.1) and learn about the convolution operation
and its properties. In Section 9.2 we will introduce a new signal δ(t), known as the
impulse, and learn how to use it in convolution and Fourier transform calculations.
Section 9.3 extends the idea of Fourier transforms to signals with infinite energy,
but finite power—signals such as sines, cosines, and the unit step. These so-called
power signals are shown to have Fourier transforms that can be expressed in terms of
impulses in the frequency domain. The chapter concludes with further applications
of convolution and the impulse, including, in Section 9.4, discussions of sampling
(analog-to-digital conversion) and signal reconstruction.
9.1 Convolution
Given a pair of signals, say, h(t) and f(t), we call a new signal y(t), defined as

y(t) ≡ ∫_{-∞}^{∞} h(τ) f(t − τ) dτ,

the convolution of h(t) and f(t), and denote it as y(t) = h(t) ∗ f(t).
As we later shall see, we can perform many LTI system calculations by convolving
the system input f (t) with the inverse Fourier transform h(t) of the system frequency
response H (ω). That possibility motivates our study of convolution in this section.
Example 9.1
Let y(t) ≡ h(t) ∗ u(t), the convolution of some h(t) with the unit step u(t).
Express y(t) in terms of h(t) only.
Solution Since

y(t) = h(t) ∗ u(t) = ∫_{-∞}^{∞} h(τ)u(t − τ)dτ

and

u(t − τ) = { 1 for τ < t; 0 for τ > t },

it follows that

y(t) = ∫_{-∞}^{t} h(τ)dτ.
Example 9.2
Determine the function y(t) = u(t) ∗ u(t).
Solution Using the result of Example 9.1 with h(t) = u(t), we get

y(t) = ∫_{-∞}^{t} u(τ)dτ = { 0 for t < 0; ∫_{0}^{t} dτ = t for t > 0 } = tu(t).
So, the convolution of a unit step with another unit step produces a ramp
signal tu(t).
We are just about to show that if h(t) ↔ H(ω) and f(t) ↔ F(ω), then h(t) ∗ f(t) has the Fourier transform H(ω)F(ω). (This follows by inserting the convolution integral into the Fourier transform integral and changing the order of integration.) So, as claimed,

h(t) ∗ f(t) ↔ H(ω)F(ω).

Likewise, the Fourier transform of f(t) ∗ h(t) is

F(ω)H(ω) = H(ω)F(ω).

This indicates that

h(t) ∗ f(t) = f(t) ∗ h(t),

which means that the convolution operation is commutative. Other properties of
convolution will be discussed in the next section.
We can verify the Fourier frequency-convolution property (item 14 in Table 7.1),

f(t)g(t) ↔ (1/2π) F(ω) ∗ G(ω),

in a similar manner, except by starting with the inverse Fourier transform integral of
F(ω) ∗ G(ω). In this property, the convolution

F(ω) ∗ G(ω) ≡ ∫_{-∞}^{∞} F(Ω)G(ω − Ω)dΩ

is carried out in the frequency domain: multiplication in the time domain implies convolution in the frequency domain.
Consider, for example,

y(2) = ∫_{-∞}^{∞} h(τ)f(2 − τ)dτ.

This equation indicates that y(2) is a weighted linear superposition of f(2) (corresponding to τ = 0 in f(2 − τ)) and all other values of f(t) before and after t = 2
(corresponding to positive and negative τ in f (2 − τ ), respectively) with different
weights h(τ ); f (2) is weighted by h(0), f (1) by h(1), f (0) by h(2), f (−1) by h(3),
and so forth. The same interpretation, of course, holds for y(t) at every value of t;
y(t) = h(t) ∗ f (t) is a weighted linear superposition of present (τ = 0), past (τ > 0),
and future (τ < 0) values f (t − τ ) of the signal f (t) with h(τ ) weightings.
Table 9.1 lists some of the important properties of convolution. The first three
properties indicate that convolution is commutative (as we already have seen), distributive, and associative. Convolution is distributive because

f(t) ∗ (g(t) + h(t)) = ∫_{-∞}^{∞} f(τ)(g(t − τ) + h(t − τ))dτ
= ∫_{-∞}^{∞} f(τ)g(t − τ)dτ + ∫_{-∞}^{∞} f(τ)h(t − τ)dτ
= f(t) ∗ g(t) + f(t) ∗ h(t).
To show that convolution is associative, we can use the Fourier time-convolution
property to note that

f(t) ∗ (g(t) ∗ h(t)) ↔ F(ω)(G(ω)H(ω)),

as well as

(f(t) ∗ g(t)) ∗ h(t) ↔ (F(ω)G(ω))H(ω);

since the two right-hand sides are equal, so are the two time-domain convolutions.
The convolution shift property also can be verified using the properties of the Fourier
transform: Since

f(t − to) ↔ F(ω)e^{−jωto},

it follows that h(t) ∗ f(t − to) has the Fourier transform H(ω)F(ω)e^{−jωto} = Y(ω)e^{−jωto},
where Y(ω) = H(ω)F(ω) has inverse Fourier transform y(t) = h(t) ∗ f(t). But the
inverse Fourier transform of Y(ω)e^{−jωto} is y(t − to), so

h(t) ∗ f(t − to) = y(t − to),
as claimed. Also, using the commutative property of convolution, we can see that

h(t − to) ∗ f(t) = y(t − to)

must be true.
h(t) ∗ (df/dt) = dy/dt.

Likewise,

(dh/dt) ∗ f(t) = dy/dt

is true, since convolution is commutative.
Example 9.3
Given that f(t) ∗ g(t) = p(t) = Δ(t/2) (see Figure 9.2a), determine c(t) = f(t) ∗ (g(t) − g(t − 2)).
Solution By the distributive property,

c(t) = f(t) ∗ (g(t) − g(t − 2)) = f(t) ∗ g(t) − f(t) ∗ g(t − 2).

Therefore,

c(t) = p(t) − p(t − 2) = Δ(t/2) − Δ((t − 2)/2),

which is plotted in Figure 9.2b.
We will not prove the start-point, end-point, and width properties, but after every
example that follows you should check that the convolution results are consistent with
these properties. The width property is relevant only when both h(t) and f(t) have
finite widths over which they have nonzero values. As an example, h(t) = rect(t) has
unit width (width = 1), because outside the interval −1/2 < t < 1/2 the value of rect(t) is
Figure 9.2 (a) p(t) = f(t) ∗ g(t) = Δ(t/2), and (b) c(t) = f(t) ∗ (g(t) − g(t − 2)).
zero. Likewise, f(t) = Δ(t/2) has a width of two units. Hence, the width property tells
us that if h(t) = rect(t) and f(t) = Δ(t/2) were convolved, the result y(t) would be
1 + 2 = 3 units wide. Also, functions h(t) = rect(t) and f(t) = Δ(t/2) start at times
t = −1/2 and t = −1, respectively, which means that the convolution rect(t) ∗ Δ(t/2)
will start at time t = (−1/2) + (−1) = −3/2 (according to the start-point property).
Example 9.4
Given that

c(t) = rect((t − 5)/2) ∗ Δ((t − 8)/4),

determine the width and start-time of c(t).
Solution Since the widths of rect((t − 5)/2) and Δ((t − 8)/4) are 2 and 4, respectively, the width of c(t) must be 2 + 4 = 6. Since the start times of rect((t − 5)/2) and Δ((t − 8)/4) are 4 and 6, respectively, the start time of c(t) must be 4 + 6 = 10. You should try to find the convolution c(t) = rect((t − 5)/2) ∗ Δ((t − 8)/4) after reading the next section (see Exercise Problem 9.5) and then verify the width and start time.
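A numerical check of the start-point and width predictions in Example 9.4 is easy to set up; the grid and the small support threshold in the sketch below are arbitrary choices of ours.

```python
import numpy as np

dt = 0.001
t = np.arange(0, 20, dt)
rect = np.where(np.abs((t - 5)/2) < 0.5, 1.0, 0.0)   # rect((t-5)/2): width 2, starts at t = 4
tri = np.maximum(1 - np.abs(t - 8)/2, 0.0)           # triangle of width 4, starting at t = 6

c = np.convolve(rect, tri)[:len(t)] * dt
support = t[c > 1e-4]                                # where c(t) is effectively nonzero
print(round(support[0]), round(support[-1]))         # starts near 10, ends near 16 (width 6)
```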
1. h(t) ∗ u(t) = ∫_{-∞}^{t} h(τ)dτ for any h(t)
2. rect(t/T) ∗ rect(t/T) = T Δ(t/2T)
3. u(t) ∗ u(t) = tu(t)
4. h(t) ∗ δ(t) = h(t) for any h(t)
5. h(t) ∗ δ(t − to) = h(t − to) for any h(t)

Table 9.2 A short list of frequently encountered convolutions. Signals δ(t) and
δ(t − to) will be discussed in Section 9.2.
Example 9.5
Find the convolution

y(t) = u(t) ∗ e^t.

Solution Using the result

h(t) ∗ u(t) = ∫_{-∞}^{t} h(τ)dτ

established in Example 9.1 (the same result is also included in Table 9.2, which lists some commonly encountered convolutions), we find that

y(t) = e^t ∗ u(t) = ∫_{-∞}^{t} e^τ dτ = e^t − e^{−∞} = e^t.
Example 9.6
Given that h(t) = e^{−t}u(t) and f(t) = e^{−2t}u(t), determine the convolution value y(1) = ∫_{-∞}^{∞} h(τ)f(1 − τ)dτ.
Example 9.7
Repeat Example 9.6 to determine y(t) = h(t) ∗ f (t) for all values of t. The
signals h(t) and f (t) are plotted in Figures 9.3a and b.
Solution Once again,

y(t) = ∫_{-∞}^{∞} h(τ)f(t − τ)dτ = ∫_{-∞}^{∞} e^{−τ}u(τ) e^{−2(t−τ)}u(t − τ)dτ
= e^{−2t} ∫_{-∞}^{∞} e^{τ}u(τ)u(t − τ)dτ.
Figure 9.3 (a) h(t) = e^{−t}u(t), (b) f(t) = e^{−2t}u(t), (c) u(t − τ) vs τ, and (d)
y(t) = h(t) ∗ f(t) = (e^{−t} − e^{−2t})u(t). (e)–(i) are self-explanatory.
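The closed-form result quoted for Example 9.7 can be spot-checked by discretizing the convolution integral, as in the following sketch of ours (grid choices arbitrary).

```python
import numpy as np

# Numerical spot-check of e^{-t}u(t) * e^{-2t}u(t) = (e^{-t} - e^{-2t})u(t).
dt = 0.001
t = np.arange(0, 10, dt)
h = np.exp(-t)          # e^{-t} u(t) sampled on t >= 0
f = np.exp(-2*t)        # e^{-2t} u(t) sampled on t >= 0

y = np.convolve(h, f)[:len(t)] * dt
err = np.max(np.abs(y - (np.exp(-t) - np.exp(-2*t))))
print(err < 0.01)       # True: agreement up to discretization error
```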
Figures 9.3e through 9.3i will provide some further insight about the convolution
results of Examples 9.6 and 9.7: Figures 9.3e and 9.3f show plots of f(τ) = e^{−2τ}u(τ)
and h(τ) = e^{−τ}u(τ) versus τ. Furthermore, Figures 9.3g and 9.3h show plots of
f(−τ), which is a flipped (reversed) version of Figure 9.3e, and f(1 − τ), which is
the same as Figure 9.3g except shifted to the right by 1 unit in τ. The "area under"
the product of the curves shown in Figures 9.3f and 9.3h—that is, the area under the
h(τ)f(1 − τ) curve shown in Figure 9.3i—is the result of the convolution calculation
∫_{-∞}^{∞} h(τ)f(1 − τ)dτ performed in Example 9.6 and also the result of Example 9.7
evaluated at t = 1. For other values of t, the convolution result is still the area under
the product curve h(τ)f(t − τ).
As you can see, visualizing the waveforms at the various steps in the convolu-
tion process can be challenging. When contemplating evaluation of the convolution
integral

y(t) = ∫_{-∞}^{∞} h(τ)f(t − τ)dτ,
we strongly recommend that, prior to performing the calculation, you make the
following series of plots:
(1) Plot f (τ ) versus τ and h(τ ) versus τ (looking ahead to Figures 9.4a and 9.4b,
for instance), where τ is the variable of integration.
(2) Plot f (−τ ) versus τ by flipping the plot of f (τ ) about the vertical axis (as in
Figure 9.4c).
(3) Plot f (t − τ ) versus τ by shifting the graph of step 2 to the right by some
amount t, as shown in Figure 9.4d. (Note that the τ = −1 mark in Figure 9.4c
becomes the τ = t − 1 mark in Figure 9.4d.)
The next example illustrates how these plots are helpful in performing convolu-
tion, in this case a convolution of the waveforms 2rect(t − 5.5) and u(t − 1).
Figure 9.4 (a) f (τ ) vs τ , (b) h(τ ) vs τ , (c) f (−τ ) vs τ , (d) f (t − τ ) vs τ , and (e) y(t) = h(t) ∗ f (t). See
Example 9.8.
Example 9.8
Given that h(t) = 2rect(t − 5.5) and f (t) = u(t − 1), determine y(t) =
h(t) ∗ f (t).
Solution The plots of h(τ ) and f (t − τ ) of the integrand
h(τ )f (t − τ )
are shown in Figures 9.4b and 9.4d, respectively. (We obtained these plots
by following steps 1 through 3 outlined previously). The convolution inte-
gral is simply the area under the product of these two curves shown in
Figures 9.4b and 9.4d. As explained next, and as seen from the figures, the
area under the product curve depends on the value of t:
• First, for t − 1 < 5, the h(τ) and f(t − τ) curves do not "overlap,"
their product is zero, and therefore, h(t) ∗ f(t) = 0 for t < 6.
• Next, for 5 < t − 1 < 6, the same two curves overlap between τ = 5
and τ = t − 1 > 5. In this interval only, the product of the two curves
is nonzero and equals 2. The area under the product of the curves is
2 × ((t − 1) − 5) = 2(t − 6), and hence h(t) ∗ f(t) = 2(t − 6) for 6 < t < 7.
• Finally, for t − 1 > 6, the overlap extends over the full width of h(τ), the
area under the product is 2 × 1 = 2, and hence, for t > 7, h(t) ∗ f(t) = 2.
Therefore,

y(t) = 2rect(t − 5.5) ∗ u(t − 1) = { 0 for t < 6; 2(t − 6) for 6 < t < 7; 2 for t > 7 },

as plotted in Figure 9.4e.
Example 9.9
Figures 9.5a and 9.5b show plots of two waveforms f (τ ) and h(τ ), versus τ .
Determine the convolution y(t) = h(t) ∗ f (t).
Solution First, flipping Figure 9.5a, we obtain f (−τ ) shown in Figure 9.5c.
Next, we shift Figure 9.5c to the right by an amount t to obtain the graph
of f (t − τ ), shown in Figure 9.5d. The convolution y(t) = h(t) ∗ f (t) is
the area under the product of the curves shown in Figures 9.5b and 9.5d,
which, of course, depends on the value of t as in Example 9.8.
• For t < 0, the curves do not overlap, and hence, y(t) = h(t) ∗ f (t) =
0.
• For 0 < t < 1, the curves partially overlap, between τ = 0 and t, and
hence,

y(t) = ∫_{0}^{t} 2τ dτ = t².
Figure 9.5 (a) f (τ ) vs τ , (b) h(τ ) vs τ , (c) f (−τ ) vs τ , (d) f (t − τ ) vs τ , and (e) y(t) = h(t) ∗ f (t). See
Example 9.9.
Thus, overall,

y(t) = h(t) ∗ f(t) = { 0 for t < 0; t² for 0 < t < 1; t² − (t − 1)² = 2t − 1 for 1 < t < 2; 4 − (t − 1)² for 2 < t < 3; 0 for t > 3 },

as plotted in Figure 9.5e.
Example 9.10
Determine y(t) = h(t) ∗ h(t) where h(t) = rect(t).
Solution Figures 9.6a and 9.6b display h(τ ) = rect(τ ) and h(t − τ ) =
rect(t − τ ) versus τ , respectively.
• For t + 0.5 < −0.5, or t < −1, the two curves do not overlap and
h(t) ∗ h(t) = 0.
• For −0.5 < t + 0.5 < 0.5, or −1 < t < 0, the curves overlap between
τ = −0.5 and t + 0.5. Hence, for −1 < t < 0,
h(t) ∗ h(t) = ∫_{−0.5}^{t+0.5} 1 · 1 dτ = t + 1.
Figure 9.6 (a) h(τ) = rect(τ) vs τ, (b) h(t − τ) = rect(t − τ) vs τ, and (c) rect(t) ∗ rect(t) = Δ(t/2).
• Next, for −0.5 < t − 0.5 < 0.5, or 0 < t < 1, the overlap is between
τ = t − 0.5 and 0.5. Hence, for 0 < t < 1,
h(t) ∗ h(t) = ∫_{t−0.5}^{0.5} 1 · 1 dτ = 1 − t.
• Finally, for t − 0.5 > 0.5, or t > 1, there is no overlap and thus h(t) ∗
h(t) = 0.
Overall,

h(t) ∗ h(t) = rect(t) ∗ rect(t) = { 0 for t < −1; t + 1 for −1 < t < 0; 1 − t for 0 < t < 1; 0 for t > 1 } = Δ(t/2).
The result of Example 9.10 is a special case of entry 2 in Table 9.2. According to
the entry, the self-convolution of a rectangle of width T and unit height is a triangle
of width 2T and apex height T ; that is,
t t t
rect( ) ∗ rect( ) = T ( ).
T T 2T
Let us apply the Fourier time-convolution property to this convolution identity. Since
from Table 7.2 we know
rect(t/T) ↔ T sinc(ωT/2),

the time-convolution property implies that

rect(t/T) ∗ rect(t/T) ↔ T² sinc²(ωT/2).

But, we just saw that

rect(t/T) ∗ rect(t/T) = T Δ(t/2T);

therefore,

T Δ(t/2T) ↔ T² sinc²(ωT/2).

Letting τ = 2T, this relation reduces to

Δ(t/τ) ↔ (τ/2) sinc²(ωτ/4),
which verifies entry 9 in Table 7.2. Furthermore, entry 10 in the same table can be
obtained from this result by using the symmetry property of the Fourier transform.
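The triangle identity rect(t) ∗ rect(t) = Δ(t/2) from Example 9.10 (entry 2 of Table 9.2 with T = 1) also checks out numerically; the grid in this sketch of ours is arbitrary.

```python
import numpy as np

dt = 0.001
t = np.arange(-3, 3, dt)
rect = np.where(np.abs(t) < 0.5, 1.0, 0.0)

y = np.convolve(rect, rect, mode='same') * dt   # 'same' keeps the output on the t grid
tri = np.maximum(1 - np.abs(t), 0.0)            # the triangle Δ(t/2)
err = np.max(np.abs(y - tri))
print(err < 0.01)                               # True
```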
Example 9.11
Given that h(t) = rect(t) and f(t) = rect(t/2), determine y(t) = h(t) ∗ f(t).
Solution This problem is easy if we notice that

rect(t/2) = rect(t + 1/2) + rect(t − 1/2);

that is, a rect 2 units wide is the same as a pair of side-by-side shifted rects
with unit widths, shifted by ±1/2 units. Thus,

y(t) = rect(t) ∗ rect(t/2) = rect(t) ∗ (rect(t + 1/2) + rect(t − 1/2)) = Δ((t + 1/2)/2) + Δ((t − 1/2)/2),

by the distributive and time-shift properties of convolution and the result
of Example 9.10. Figures 9.7a through 9.7c show the plots of Δ((t ± 1/2)/2) and
[Figure 9.7: (a) Δ((t + 0.5)/2), (b) Δ((t − 0.5)/2), and (c) their sum Δ((t + 0.5)/2) + Δ((t − 0.5)/2), plotted for −2 ≤ t ≤ 2.]
y(t). You should try finding the same convolution directly, without first
decomposing rect(t/2) into a sum of two rects.
Example 9.12
Convolve the two functions shown in Figures 9.8a and 9.8b.
Solution Clearly, according to Figure 9.8a, f(t) = rect((t − 1)/2). Also, according
to Figure 9.8b, h(t) = f(t) − f(t − 2). Thus, using entry 2 of Table 9.2 and the shift property of convolution,

f(t) ∗ f(t) = rect((t − 1)/2) ∗ rect((t − 1)/2) = 2Δ((t − 2)/4).
Figure 9.8 (a) f (t), (b) h(t), and (c) their convolution f (t) ∗ h(t) = h(t) ∗ f (t).
Using this result and applying the time-shift property once again, we see
that
h(t) ∗ f(t) = f(t) ∗ f(t) − f(t) ∗ f(t − 2) = 2Δ((t − 2)/4) − 2Δ((t − 4)/4),
as shown in Figure 9.8c.
Example 9.13
Suppose that the input of an LTI system having frequency response

H(ω) = 1/(1 + jω)
is f (t) = rect(t). Determine the zero-state response y(t) by using the
convolution formula y(t) = h(t) ∗ f (t), where h(t) is the inverse Fourier
transform of H (ω).
Solution From Table 7.2 we note that

e^{−t}u(t) ↔ 1/(1 + jω).

Thus, H(ω) = 1/(1 + jω) implies that h(t) = e^{−t}u(t). We could proceed
by computing the convolution directly (making the required plots first).
Instead, we begin by noting that

f(t) = rect(t) = u(t + 1/2) − u(t − 1/2).
Thus, the system zero-state response can be found as

y(t) = h(t) ∗ f(t) = e^{−t}u(t) ∗ rect(t) = e^{−t}u(t) ∗ [u(t + 1/2) − u(t − 1/2)] = q(t + 1/2) − q(t − 1/2),

where

q(t) ≡ e^{−t}u(t) ∗ u(t) = ∫_{-∞}^{t} e^{−τ}u(τ)dτ = u(t)(1 − e^{−t}).

Thus,

y(t) = u(t + 1/2)(1 − e^{−(t+1/2)}) − u(t − 1/2)(1 − e^{−(t−1/2)}),
where we have made use of the time-shift property of convolution. The
resulting output y(t) is shown in Figure 9.9. Notice that the low-pass filter
“smooths” the rectangular input signal into the shape shown in the figure.
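As a cross-check on Example 9.13 (our own sketch, with arbitrary grid choices), the same zero-state response can be computed by discretizing h(t) ∗ f(t) directly and comparing with the closed-form answer.

```python
import numpy as np

dt = 0.001
t = np.arange(-2, 8, dt)
h = np.where(t >= 0, np.exp(-t), 0.0)     # h(t) = e^{-t} u(t)
f = np.where(np.abs(t) < 0.5, 1.0, 0.0)   # f(t) = rect(t)

# Sample k of the full convolution corresponds to time 2*t[0] + k*dt, so we
# shift by -t[0]/dt samples to land back on the grid t.
shift = int(round(-t[0]/dt))
y = np.convolve(h, f)[shift:shift + len(t)] * dt

u = lambda x: (x >= 0).astype(float)
y_exact = u(t + 0.5)*(1 - np.exp(-(t + 0.5))) - u(t - 0.5)*(1 - np.exp(-(t - 0.5)))
err = np.max(np.abs(y - y_exact))
print(err < 0.01)   # True
```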
Section 9.2 Impulse δ(t) 301
It turns out that, in general, there is no waveform that exactly satisfies this identity for
an arbitrary f(t). However, there are waveforms that will produce an approximation
that is as fine as we like. In particular, consider the rectangular pulse signal defined by

p_ε(t) ≡ (1/ε) rect(t/ε).

(See Figure 9.10.) This pulse has unit area and, for small enough ε, the pulse is very
tall and narrow. In the limit, as ε approaches zero, it can be shown that¹

lim_{ε→0} {p_ε(t) ∗ f(t)} = f(t),

¹ Verification:

p_ε(t) ∗ f(t) ≡ (1/ε) rect(t/ε) ∗ f(t) = (1/ε) ∫_{-∞}^{∞} rect(τ/ε) f(t − τ)dτ = [∫_{-ε/2}^{ε/2} f(t − τ)dτ] / ε.

For ε = 0, p_ε(t) ∗ f(t) is indeterminate because both the numerator and denominator of the last expression
reduce to zero. However, ∫_{-ε/2}^{ε/2} f(t − τ)dτ ≈ εf(t) for small ε, and therefore, as claimed,

lim_{ε→0} {p_ε(t) ∗ f(t)} = lim_{ε→0} [∫_{-ε/2}^{ε/2} f(t − τ)dτ]/ε = lim_{ε→0} εf(t)/ε = f(t),

which also can be verified more rigorously using l'Hôpital's rule.
Figure 9.10 Pulse signal p_ε(t) ≡ (1/ε) rect(t/ε). Note that the area under p_ε(t) is 1 for
all values of the pulse width ε, and as ε decreases p_ε(t) gets "thinner" and
"taller." Function p_ε(t) has the property that, given an arbitrary function f(t),
lim_{ε→0} {p_ε(t) ∗ f(t)} = f(t).
which indicates that p_ε(t), for very small ε, is essentially the identity pulse we are
seeking.
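The limiting behavior of narrow unit-area pulses can be watched numerically. In this sketch of ours, progressively narrower pulses are convolved with f(t) = cos(t); the pulse widths and grid are arbitrary choices.

```python
import numpy as np

dt = 1e-3
t = np.arange(-5, 5, dt)
f = np.cos(t)

errs = []
for eps in (2.0, 0.5, 0.05):
    n = int(round(eps/dt))
    p = np.ones(n)/eps                      # pulse of height 1/eps, width eps, unit area
    y = np.convolve(f, p, mode='same')*dt   # approximation of the pulse convolved with f
    mid = slice(len(t)//4, 3*len(t)//4)     # compare away from the edges
    errs.append(float(np.max(np.abs(y - f)[mid])))
print(errs)   # the error shrinks as eps shrinks
```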
It is convenient (and useful, as we shall see) to denote the left-hand side of this
identity, lim_{ε→0} {p_ε(t) ∗ f(t)}, as δ(t) ∗ f(t), and think of δ(t) as a special signal,
essentially p_ε(t) for exceedingly small ε, having the property

δ(t) ∗ f(t) = f(t).

(See entry 4 in Table 9.2.) Other properties of δ(t), some of which are listed in
Table 9.3, include the Fourier transform pair

δ(t) ↔ 1;
Scaling: δ(at) = (1/|a|) δ(t), a ≠ 0; δ(a(t − to)) = (1/|a|) δ(t − to), a ≠ 0.
Area: ∫_{-∞}^{∞} δ(t)dt = 1 and ∫_{a}^{b} δ(t)dt = 1 if a < 0 < b, 0 otherwise; ∫_{-∞}^{∞} δ(t − to)dt = 1 and ∫_{a}^{b} δ(t − to)dt = 1 if a < to < b, 0 otherwise.
Definite integral: ∫_{-∞}^{t} δ(τ)dτ = u(t); ∫_{-∞}^{t} δ(τ − to)dτ = u(t − to).
Unit-step derivative: (d/dt) u(t) = δ(t); (d/dt) u(t − to) = δ(t − to).
Derivative: ((d/dt) δ(t)) ∗ f(t) = (d/dt) f(t); ((d/dt) δ(t − to)) ∗ f(t) = (d/dt) f(t − to).
Fourier transform: δ(t) ↔ 1; δ(t − to) ↔ e^{−jωto}.
Graphical symbol: an upward arrow of unit strength, located at t = 0 for δ(t) and at t = to for δ(t − to).

Table 9.3 The properties and graphical symbols of the impulse and shifted
impulse. The derivative of the impulse also is known as the doublet. Further properties of the doublet δ′(t) ≡ (d/dt)δ(t), e.g., δ′(−t) = −δ′(t), etc., can be enumerated
starting with the derivative property for the impulse given in the table.
and are described in terms of their properties such as those listed in Table 9.3 (instead
of with plots or tabulated values).
The properties of the impulse distribution δ(t) can be viewed to be a consequence
of its convolution property

δ(t − to) ∗ f(t) = f(t − to),
This result is known as the area property of the shifted impulse, and it appears in
Table 9.3 in the form ∫_{-∞}^{∞} δ(t − to)dt = 1. For the case to = 0, the same expression
yields the area property of the impulse δ(t).
We also can evaluate the foregoing convolution integral for arbitrary f(t) at time
t = 0 to obtain

∫_{-∞}^{∞} δ(τ − to)f(−τ)dτ = f(−to),

and hence

∫_{-∞}^{∞} δ(τ − to)g(τ)dτ = g(to),
³ Although p_ε(t) was defined here as (1/ε) rect(t/ε), it also can be replaced by other functions such as (2/ε) Δ(t/ε), e^{−t²/(2ε²)}/(√(2π) ε), and (1/ε) sinc(πt/ε), peaking at t = 0 and satisfying the constraint ∫_{-∞}^{∞} p_ε(t)dt = 1.
with g(t) ≡ f(−t). This last expression is known as the sifting property of the shifted
impulse. Its special case for to = 0 gives

∫_{-∞}^{∞} δ(τ)g(τ)dτ = g(0),

the sifting property of the impulse. These last two expressions show that the impulse
δ(t) and its shifted version δ(t − to) act like sieves and sift out specific values f(0) and
f(to) of any function f(t), which they multiply under an integral sign, for instance,
as in

∫_{-∞}^{∞} δ(t − to)f(t)dt = f(to)

(so long as f(t) is continuous at t = to so that f(to) is specified). From the previous
discussion, we know that the sifting property of δ(t − to) is a shorthand for

lim_{ε→0} { ∫_{-∞}^{∞} p_ε(t − to)f(t)dt } = f(to).
because if we were to replace δ(t − to)f(t) in the second integral above with
δ(t − to)f(to), we would obtain

∫_{-∞}^{∞} δ(t − to)f(to)dt = f(to) ∫_{-∞}^{∞} δ(t − to)dt = f(to) × 1 = f(to),

using the area property of the shifted impulse. Thus, the sampling property is consistent with the sifting property and is valid. For the special case to = 0, we obtain

δ(t)f(t) = δ(t)f(0),

which is the sampling property for the impulse. This property of the impulse also
can be viewed as shorthand⁴ for the approximation p_ε(t)f(t) ≈ p_ε(t)f(0), which
we easily can see is valid for sufficiently small ε (provided that f(t) is continuous at
t = 0).
⁴ So far we have emphasized the fact that Table 9.3 comprises shorthand statements about the special
pulse function p_ε(t) in the limit as ε → 0. Each statement is, of course, also a property of the distribution
δ(t), expressed in terms of familiar mathematical symbols such as integral, convolution, and equality signs.
A nuance that needs to be appreciated here is that the statements in the table also provide the special
meaning that these mathematical symbols take on when used with distributions instead of regular functions.
For instance, the = sign in the statement δ(t)f(t) = δ(t)f(0) indicates that the distributions on each side of
= have the same effect on any regular function, say, g(t), via any operation such as convolution or integration
defined in Table 9.3 in the very same spirit. The "equality in distribution" that we have just described is
distinct from, say, numerical equality between regular functions, say, cos(ωt) = sin(ωt + π/2).
cos(t)δ(t) = cos(0)δ(t) = δ(t),

sin(t)δ(t) = 0, since sin(0) = 0,

(1 + t²)δ(t − 1) = 2δ(t − 1),

and

(1 + t³)δ(t + 1) = 0, since (1 + (−1)³) = 0.
We will ask you to verify the symmetry and scaling properties in homework
problems. These are very useful properties that give you the freedom to replace δ(−t)
with δ(t), δ(5 − t) with δ(t − 5), δ(−2t) with (1/2)δ(t), etc.
Example 9.14
Using the Fourier transform property of the impulse, δ(t) ↔ 1, determine y(t) = a(t) ∗ b(t) if

a(t) = u(t)

and b(t) has the Fourier transform

B(ω) = 1 − 1/(1 + jω).

Solution Since

δ(t) ↔ 1

and

e^{−t}u(t) ↔ 1/(1 + jω),

we have

b(t) = δ(t) − e^{−t}u(t).

Thus,

y(t) = b(t) ∗ u(t) = u(t) − (1 − e^{−t})u(t) = e^{−t}u(t).

In the last equality we obtained the first term by using the convolution property of the impulse, whereas the second term represents ∫_{-∞}^{t} e^{−τ}u(τ)dτ.
Example 9.15
Verify the Fourier transform property of a shifted impulse.
Solution For the shifted impulse δ(t − to ), the corresponding Fourier
transform is

∫_{-∞}^{∞} δ(t − to)e^{−jωt}dt = e^{−jωto},
Example 9.16
Given that f (t) = δ(t − to ), determine the energy Wf of signal f (t).
Solution Since δ(t − to ) ↔ e−j ωto , F (ω) = e−j ωto , and therefore the energy
spectrum for the signal is |F (ω)|2 = 1. Hence, using Rayleigh’s theorem,
we find that
Wf = (1/2π) ∫_{-∞}^{∞} |F(ω)|² dω = (1/2π) ∫_{-∞}^{∞} 1 · dω = ∞.
Thus, δ(t − to ) contains infinite energy and therefore is not an energy signal.
(Consequently, it cannot be generated in the lab.)
The versions of the area and sifting properties discussed earlier include integration
limits −∞ and ∞. However, alternative forms also included in Table 9.3 indicate that
these properties still are valid when the integration limits are finite a and b, so long
as a < to < b. Thus, for instance,
∫_{-5}^{8} δ(t − 2)dt = 1,

∫_{4}^{6} δ(t − 5) 2cos(πt)dt = 2cos(5π) = −2,

and

∫_{0^−}^{∞} δ(t)e^{−st}dt = e^{−s·0} = 1.
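The finite-limit sifting integrals above can be mimicked numerically by replacing the impulse with the narrow pulse p_ε; this sketch of ours reproduces the middle integral, with arbitrary ε values and grid.

```python
import numpy as np

dt = 1e-5
t = np.arange(4, 6, dt)
f = 2*np.cos(np.pi*t)

vals = []
for eps in (0.1, 0.01, 0.001):
    p = np.where(np.abs(t - 5) < eps/2, 1/eps, 0.0)   # unit-area pulse at t = 5
    vals.append(float(np.sum(p*f)*dt))                # approximates the sifting integral
print(vals)   # each value close to 2cos(5*pi) = -2
```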
To verify the alternative forms of the sampling property, consider first a function
f (t)w(t) where
w(t) = { 1 for a < t < b; 0 for t < a and t > b }
is a unit-amplitude window function shown in Figure 9.11a. Starting with the sifting
property
∫_{-∞}^{∞} δ(t − to)f(t)w(t)dt = f(to)w(to),

we can write

∫_{a}^{b} δ(t − to)f(t)dt = f(to)w(to),

since w(t) = 0 outside a < t < b, and 1 within. Now, if a < to < b, as shown in
Figure 9.11a, then w(to) = 1 and

∫_{a}^{b} δ(t − to)f(t)dt = 1 · f(to) = f(to);
Figure 9.11 The unit-amplitude window function w(t), with (a) a < to < b and (b) to > b.
conversely, if to < a or to > b, as in Figure 9.11b, then w(to) = 0 and the integral vanishes. Overall,
∫_a^b δ(t − to) f(t) dt = {f(to), a < to < b; 0, to < a or to > b,}
as in Table 9.3.
Use of the modified form of the sifting property with f (t) = 1 leads to the
modified area properties given in Table 9.3. Furthermore, the modified form of the
area property for the shifted impulse allows us to write
∫_{−∞}^{t} δ(τ − to) dτ = {1, t > to; 0, t < to} = u(t − to)
and
∫_{−∞}^{t} δ(τ) dτ = u(t).
The last two properties, listed in Table 9.3 as definite-integral properties, also
imply that the derivatives of the unit step u(t) and the shifted unit step u(t − to )
can be defined as δ(t) and δ(t − to), respectively. Of course, u(t) and u(t − to) are
discontinuous functions, and therefore they are not differentiable within the ordinary
function space over all t; however, their derivatives can be regarded as the distributions
(d/dt) u(t) = δ(t) and (d/dt) u(t − to) = δ(t − to).
Example 9.17
Find the derivative of function
y(t) = t 2 u(t).
Solution dy/dt = 0 for t < 0, and dy/dt = 2t for t > 0. Patching together these
two results, we can write
dy/dt = 2t u(t)
as the answer. Alternatively, using the product rule of differentiation and
properties of the impulse, we obtain
dy/dt = (d/dt)(t² u(t)) = 2t u(t) + t² (du/dt) = 2t u(t) + t² δ(t) = 2t u(t) + 0² δ(t) = 2t u(t).
This second version of the solution is not any faster or better than the first
one, but it works and gives the correct result. Moreover, this second method
is the only safe approach in certain cases, as illustrated by the next example.
Example 9.18
Find the derivative of
z(t) = e^{2t} u(t).
Solution Patching together the derivatives for t < 0 and t > 0, as in Example 9.17, would suggest
dz/dt = 2e^{2t} u(t),
but integrating 2e^{2τ} u(τ) from −∞ to t fails to recover
z(t) = e^{2t} u(t), as it should if 2e^{2t} u(t) were the correct derivative. However,
using the product rule and properties of the impulse, we obtain
dz/dt = (d/dt)(e^{2t} u(t)) = 2e^{2t} u(t) + e^{2t} (du/dt) = 2e^{2t} u(t) + e^{2t} δ(t)
      = 2e^{2t} u(t) + e^{0} δ(t) = 2e^{2t} u(t) + δ(t),
which is the right answer (and can be confirmed by integrating 2e2τ u(τ ) +
δ(τ ) from −∞ to t, over τ , to recover z(t) = e2t u(t)).
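The product-rule computation above can be cross-checked with a computer algebra system that treats u(t) and δ(t) as distributions. The following SymPy sketch (an independent check, not code from the text) differentiates e^{2t}u(t) symbolically:

```python
import sympy as sp

# SymPy's Heaviside and DiracDelta play the roles of u(t) and delta(t);
# differentiating e^{2t}u(t) produces the impulse term automatically.
t = sp.symbols('t', real=True)
z = sp.exp(2 * t) * sp.Heaviside(t)
dz = sp.diff(z, t)                 # 2*exp(2*t)*Heaviside(t) + exp(2*t)*DiracDelta(t)
print(dz.has(sp.DiracDelta))       # True: the impulse term is present
```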
What about the derivative of the impulse δ(t) itself? Since the impulse is a
distribution, its derivative
(d/dt) δ(t) ≡ δ′(t)
is also a distribution, defined by the convolution property
δ′(t) ∗ f(t) = df/dt,
which is both the time-derivative property of the impulse listed in Table 9.3 and the
convolution property of a new distribution δ′(t) = (d/dt)δ(t), known as the doublet. The
doublet will play a relatively minor role in Chapters 10 and 11, and all we need to
remember about it until then is its convolution property just given.
Section 9.2 Impulse δ(t) 311
Since δ(t) and δ(t − to ) are not functions, they cannot be plotted in the usual way
functions are plotted. Instead, we use defined graphical representations for δ(t) and
δ(t − to ), which are shown in Table 9.3. By convention, δ(t − to ) is depicted as an up-
pointing arrow of unit height placed at t = to along the t-axis. The length of the arrow
is a reminder of the area property, while the placement of the arrow at t = to owes to
the sampling and sifting properties, as well as to the fact that a graphical symbol for
δ(t − to) is also a symbol for (d/dt) u(t − to) (which is numerically 0 everywhere except
at t = to, where it is undefined).
Following this convention, 2δ(t − 3), for instance, can be pictured as an arrow
2 units tall, placed at t = 3, while −3δ(t + 1) can be pictured as a down-pointing
arrow of length 3 units, placed at t = −1. (See Figures 9.12a and 9.12b.) The graphical
representation of a signal f(t) containing impulses is illustrated in Figure 9.12c.
Figure 9.12 Examples of sketches, including the impulse and the shifted
impulse.
Example 9.19
Determine the Fourier transform of rect(t), using the Fourier time-derivative
property and the fact that
(d/dt) rect(t) = δ(t + 0.5) − δ(t − 0.5).
Solution Transforming both sides of this derivative relation, with the
time-derivative property on the left and the shifted-impulse pair δ(t ∓ 0.5) ↔ e^{±jω/2}
on the right, we obtain
jω F(ω) = e^{jω/2} − e^{−jω/2} = 2j sin(ω/2),
so that
F(ω) = 2 sin(ω/2)/ω = sinc(ω/2),
which is the same result that we had established earlier through more
conventional means.
since h(t) ∗ δ(t) = h(t) by the convolution property of the impulse. Thus, h(t), the
inverse Fourier transform of the system frequency response H(ω), is also the zero-
state response of the system to an impulse input. For that reason we will refer to h(t)
as the impulse response.
The concept of impulse response is fundamental and important. The concept will
be explored much further in Chapter 10; so, for now, we provide just a single example
illustrating its usefulness.
Example 9.20
Suppose that a high-pass filter with frequency response
H(ω) = jω/(1 + jω)
has the input
f(t) = rect(t).
Determine the zero-state system response y(t), using the system impulse
response h(t) and the convolution method.
Solution In Table 7.2 we find no match for jω/(1 + jω). However,
jω/(1 + jω) = (jω + 1 − 1)/(1 + jω) = 1 − 1/(1 + jω),
and, therefore, using the addition rule and the same table, we find that the
system impulse response is
h(t) = δ(t) − e^{−t} u(t).
Using y(t) = h(t) ∗ f(t) and the convolution property of the impulse, we
obtain
y(t) = rect(t) − e^{−t} u(t) ∗ rect(t),
where the last term was calculated earlier in Example 9.13 and plotted in
Figure 9.9. By subtracting Figure 9.9 from rect(t) we obtain Figure 9.14,
which is the solution to this problem. Notice that, because the Fourier
transform of f (t) is a sinc in the variable ω, this problem would be very
difficult to solve by the Fourier technique.
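As a numerical cross-check (an illustrative sketch, not from the text), the convolution y(t) = rect(t) − e^{−t}u(t) ∗ rect(t) can be approximated on a discrete grid; the grid step and time range below are arbitrary choices:

```python
import numpy as np

# Discrete approximation of Example 9.20: y(t) = rect(t) - e^{-t}u(t) * rect(t),
# with rect(t) = 1 for |t| < 1/2 as in the text.
dt = 1e-3
t = np.arange(-2.0, 8.0, dt)
rect = (np.abs(t) < 0.5).astype(float)            # rect(t)
tau = np.arange(0.0, 10.0, dt)
tail = np.exp(-tau)                                # e^{-t} u(t), sampled on t >= 0
# Discrete convolution aligned with the grid t (output time axis starts at -2)
tail_conv = np.convolve(tail, rect)[:t.size] * dt
y = rect - tail_conv
# Spot-check the tail term at t = 2 against its closed form e^{-1.5} - e^{-2.5}
i = np.searchsorted(t, 2.0)
err = abs(tail_conv[i] - (np.exp(-1.5) - np.exp(-2.5)))
print(err < 5e-3)   # True
```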
Figure 9.14 The response of high-pass filter H(ω) = jω/(1 + jω)
to input f (t) = rect(t).
After our introduction to the impulse δ(t), we are ready to explore the Fourier trans-
forms of signals for which the energy W may be infinite. It turns out that the Fourier
transform of some such signals f (t) with infinite energy W , but finite instantaneous
power |f (t)|2 (e.g., signals like cos(ωo t), sin(ωo t), u(t)), can be expressed in terms of
the impulse δ(ω) in the Fourier domain. Such signals are known as power signals (as
opposed to energy signals, which have finite W) and appear in the right-hand column
of Table 7.2. The same column also includes the Fourier transform of distributions
δ(t) and δ(t − to), discussed in the previous section.
In the next set of examples we will verify some of these new Fourier transforms
and also illustrate some of their applications.
Example 9.21
Show that
1 ↔ 2πδ(ω)
and
e^{jωo t} ↔ 2πδ(ω − ωo).
Solution The inverse Fourier transform of 2πδ(ω − ωo) is
(1/2π) ∫_{−∞}^{∞} 2πδ(ω − ωo) e^{jωt} dω,
which, by the sifting property, is reduced to
(1/2π) 2πe^{jωo t} = e^{jωo t},
the left-hand side of the claimed pair. The Fourier pair 1 ↔ 2πδ(ω) is
just a special case of e^{jωo t} ↔ 2πδ(ω − ωo) for ωo = 0.
Example 9.22
Show that
cos(ωo t) ↔ πδ(ω − ωo) + πδ(ω + ωo)
and
sin(ωo t) ↔ jπδ(ω + ωo) − jπδ(ω − ωo).
Solution Writing cos(ωo t) = (e^{jωo t} + e^{−jωo t})/2 and
sin(ωo t) = (e^{jωo t} − e^{−jωo t})/(j2), and applying the pair
e^{jωo t} ↔ 2πδ(ω − ωo) from Example 9.21 term by term, yields both transform pairs.
Figures 9.15a through 9.15c show the plots of power signals cos(ωo t), sin(ωo t),
and 1 (DC signal), and their Fourier transforms. Notice that all three Fourier transforms
are depicted in terms of impulses in the ω-domain. The Fourier transform plots show
that frequency-domain contributions to signals cos(ωo t) and sin(ωo t) are confined to
frequencies ω = ±ωo. The impulses located at ω = ±ωo are telling us the obvious:
No complex exponentials other than e±j ωo t are needed to represent
cos(ωo t) = (e^{jωo t} + e^{−jωo t})/2
and
sin(ωo t) = (e^{jωo t} − e^{−jωo t})/(j2).
Figure 9.15 Three important power signals and their Fourier transforms: (a)
f (t) = cos(ωo t) ↔ F(ω) = πδ(ω − ωo ) + πδ(ω + ωo ), (b)
f (t) = sin(ωo t) ↔ F(ω) = jπδ(ω + ωo ) − jπδ(ω − ωo ), and (c)
f (t) = 1 ↔ F(ω) = 2πδ(ω).
Likewise, DC signal f (t) = 1 can be identified with e±j 0·t = 1; therefore, its Fourier
transform plot is represented by an impulse sitting at ω = 0.
Example 9.23
Given that f (t) ↔ F (ω), determine the Fourier transform of f (t) sin(ωo t)
by using the Fourier frequency-convolution property.
Solution The Fourier frequency-convolution property states that
f(t) g(t) ↔ (1/2π) F(ω) ∗ G(ω).
Using this property with
g(t) = sin(ωo t)
and
G(ω) = jπ[δ(ω + ωo) − δ(ω − ωo)],
we obtain
f(t) sin(ωo t) ↔ (1/2π) F(ω) ∗ jπ[δ(ω + ωo) − δ(ω − ωo)]
             = (j/2)[F(ω + ωo) − F(ω − ωo)].
This is an alternative form of the Fourier modulation property.
Example 9.24
Find the Fourier transform of an arbitrary periodic signal
f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t}.
Solution Transforming the series term by term, using the pair e^{jnωo t} ↔ 2πδ(ω − nωo), we obtain
F(ω) = Σ_{n=−∞}^{∞} 2πFn δ(ω − nωo).
Thus, the Fourier transform F(ω) of a periodic f(t) with Fourier coef-
ficients Fn is an infinite sum of weighted and shifted frequency-domain
impulses 2πFn δ(ω − nωo), placed at integer multiples of the fundamental
frequency ωo = 2π/T. The area weights assigned to the impulses are propor-
tional to the Fourier series coefficients.
Example 9.25
In Table 7.2, entry 24,
Σ_{n=−∞}^{∞} δ(t − nT) ↔ (2π/T) Σ_{n=−∞}^{∞} δ(ω − n 2π/T),
indicates that the Fourier transform of an impulse train in time is also an
impulse train in frequency. (See Figure 9.16.) Verify this transform pair.
T
Figure 9.16 The Fourier transform of the impulse train on the left is also an
impulse train in the frequency domain, as depicted on the right.
Solution The impulse train
Σ_{n=−∞}^{∞} δ(t − nT)
is periodic with period T, and its exponential Fourier series coefficients are
Fn = (1/T) ∫_{−T/2}^{T/2} [Σ_{m=−∞}^{∞} δ(t − mT)] e^{−jn(2π/T)t} dt
   = (1/T) Σ_{m=−∞}^{∞} ∫_{−T/2}^{T/2} δ(t − mT) e^{−jn(2π/T)t} dt
   = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jn(2π/T)t} dt = 1/T
for all n. Hence, equating the impulse train to its exponential Fourier series,
we get⁵
Σ_{n=−∞}^{∞} δ(t − nT) = Σ_{n=−∞}^{∞} (1/T) e^{jn(2π/T)t},
and using the Fourier transform pair (from item 17 from Table 7.2)
e^{jn(2π/T)t} ↔ 2πδ(ω − n 2π/T),
⁵The meaning of this equality is that when each side is multiplied by a signal f(t) and then the two
sides are integrated, the values of the integrals are equal.
Section 9.3 Fourier Transform of Distributions and Power Signals 319
we obtain
Σ_{n=−∞}^{∞} δ(t − nT) ↔ (2π/T) Σ_{n=−∞}^{∞} δ(ω − n 2π/T),
as requested.
Note that entry 25 in Table 7.2 is a straightforward consequence of this
result and the frequency-convolution property (item 14 in Table 7.1).
Example 9.26
Suppose that a signal generator produces a periodic signal
f(t) = 4 cos(4t) + 2 cos(8t),
a spectrum analyzer observes it through the window w(t) = rect(t/To),
and the analyzer then displays the squared magnitude of the Fourier transform of f(t)w(t)
on a screen. What will the screen display look like if To = 10 s and 20 s?
Solution Let g(t) ≡ f (t)w(t). Then, according to the Fourier frequency-
convolution property,
G(ω) = (1/2π) F(ω) ∗ W(ω),
where
F(ω) = 4π[δ(ω − 4) + δ(ω + 4)] + 2π[δ(ω − 8) + δ(ω + 8)]
and W(ω) is the Fourier transform of w(t). Substituting the expression for
F(ω) into the convolution, and cancelling the π factors, we have
G(ω) = 2[W(ω − 4) + W(ω + 4)] + W(ω − 8) + W(ω + 8),
where in the last step we used the convolution property of the shifted
impulse. Now,
rect(t/To) ↔ To sinc(ωTo/2),
and so
W(ω) = To sinc(ωTo/2).
Figures 9.17a and 9.17b show the plots of |W(ω)|² for the cases To = 10
and 20 s. In both cases, the 90% bandwidth 2π/To of w(t) is less than the
shift frequencies of 4 and 8 rad/s relevant for G(ω). Thus, the various
components of G(ω) have little overlap, so
|G(ω)|² ≈ 4|W(ω − 4)|² + 4|W(ω + 4)|² + |W(ω − 8)|² + |W(ω + 8)|².
Plots of this approximation for |G(ω)|² are shown in Figure 9.17c and 9.17d
for the cases To = 10 and 20 s, respectively. We conclude that the spectrum
analyzer display will look like Figures 9.17c and 9.17d (although, typically,
only the positive-ω half of the plots would be displayed).
Notice that a longer analysis window (larger To ) produces a higher-
resolution estimate of the spectrum of f (t), characterized by narrower
“spikes.”
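The resolution effect described above can be illustrated numerically; the sketch below (an illustration, not from the text) evaluates |G(ω)|² near the spike at ω = 4 rad/s for both window lengths and compares the half-power spike widths:

```python
import numpy as np

# With f(t) = 4cos(4t) + 2cos(8t) and window w(t) = rect(t/To), the displayed
# spectrum consists of sinc-shaped spikes; their width shrinks as To grows.
def G_sq(w, To):
    W = lambda v: To * np.sinc(v * To / (2 * np.pi))  # np.sinc(x) = sin(pi x)/(pi x)
    return np.abs(2 * (W(w - 4) + W(w + 4)) + W(w - 8) + W(w + 8)) ** 2

w = np.linspace(3.0, 5.0, 4001)                       # zoom in on the spike at w = 4
widths = {}
for To in (10.0, 20.0):
    g = G_sq(w, To)
    above = w[g > g.max() / 2]                        # half-power region of the spike
    widths[To] = above.max() - above.min()
print(widths[10.0] > widths[20.0])   # True: doubling To narrows the spike
```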
Figure 9.17 The energy spectrum |W(ω)|² of the window function w(t) = rect(t/To) for (a) To = 10 s and
(b) To = 20 s. The spectrum |(1/2π)F(ω) ∗ W(ω)|² of f(t)w(t), f(t) = 4 cos(4t) + 2 cos(8t), is shown in (c) and (d)
for the two values of To. The frequency resolution of the measurement device is set by the window
length To and can be described as the half-width 2π/To of |W(ω)|².
Example 9.27
An incoming AM radio signal
f(t) cos(ωc t)
is mixed with a signal cos(ωc t), and the result p(t) is filtered with an ideal
low-pass filter H(ω). The filter bandwidth is less than ωc, but larger than
the bandwidth of the low-pass message signal f(t). In addition, ωc is large
compared with the bandwidth of f(t). What is the output q(t) of the low-pass filter?
Solution Let
p(t) = f(t) cos(ωc t) cos(ωc t) = f(t) cos²(ωc t) = f(t)/2 + (f(t)/2) cos(2ωc t).
The first term, f(t)/2, lies entirely within the filter passband, while the
second term is concentrated about ω = ±2ωc, far outside the passband. Hence,
q(t) = f(t)/2.
Example 9.28
An incoming AM signal
f(t) cos(ωc t)
is mixed with sin(ωc t) and the product signal is filtered with an ideal low-
pass filter H(ω) as in Example 9.27. As before, the filter bandwidth is less
than ωc but larger than the bandwidth of the low-pass f(t), and ωc is large
compared with the bandwidth of f(t).
What is the output q(t) of the filter?
Solution Using the transform pairs for sin(ωc t) and cos(ωc t) together with
the frequency-convolution property, we first obtain
sin(ωc t) cos(ωc t) ↔ (jπ/2)[δ(ω) − δ(ω − 2ωc)] + (jπ/2)[δ(ω + 2ωc) − δ(ω)]
                   = (jπ/2) δ(ω + 2ωc) − (jπ/2) δ(ω − 2ωc).
Therefore, using the frequency-convolution property, the Fourier transform
of
p(t) = f(t) cos(ωc t) sin(ωc t)
is
P(ω) = (1/2π) F(ω) ∗ [(jπ/2) δ(ω + 2ωc) − (jπ/2) δ(ω − 2ωc)]
     = (j/4) F(ω + 2ωc) − (j/4) F(ω − 2ωc).
Note that P (ω) contains no term in the passband of the described low-pass
filter. Thus,
Q(ω) = H(ω) P(ω) = 0,
implying that
q(t) = 0.
Evidently, it is not possible to demodulate the AM signal f(t) cos(ωc t)
by mixing it with sin(ωc t), because zero output is obtained from the low-
pass filter.
Note that the results of Examples 9.27 and 9.28 suggest that if a signal g(t) sin(ωc t)
were added to an AM transmission f (t) cos(ωc t), then it would be possible to recover
f (t) and g(t) from the sum unambiguously by mixing the sum with signals cos(ωc t)
and sin(ωc t), respectively. This idea is exploited in so-called quadrature amplitude
modulation communication systems.
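The quadrature-multiplexing idea can be illustrated with a short numerical sketch. All choices below — the message signals, the carrier frequency, and the moving-average low-pass filter — are illustrative assumptions, not values from the text:

```python
import numpy as np

# f(t)cos(wc t) + g(t)sin(wc t) is mixed with cos(wc t) and sin(wc t); the
# double-frequency terms are removed by a moving average spanning exactly
# one period of cos(2 wc t) (5 samples here), which nulls them exactly.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
wc = 200 * np.pi                               # carrier; 2*wc spans 5 samples
f, g = np.cos(2 * t), np.sin(3 * t)            # low-pass message signals
s = f * np.cos(wc * t) + g * np.sin(wc * t)    # quadrature-multiplexed signal

def lowpass(x):
    # 5-point moving average: one full period of the 2*wc component
    return np.convolve(x, np.ones(5) / 5, mode='same')

f_rec = 2 * lowpass(s * np.cos(wc * t))        # recovers ~f(t)
g_rec = 2 * lowpass(s * np.sin(wc * t))        # recovers ~g(t)
err = max(np.max(np.abs(f_rec - f)[5:-5]), np.max(np.abs(g_rec - g)[5:-5]))
print(err < 1e-2)   # True
```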
Finding the Fourier transform of power signal u(t), the unit step, is somewhat
tricky: First, note that the unit step has an average value of
lim_{T→∞} (1/2T) ∫_{−T}^{T} u(t) dt = 1/2
and can be decomposed as
u(t) = 1/2 + (1/2) sgn(t)
in terms of its DC component and the signum function sgn(t) shown in Figure 7.2d.
Since
1 ↔ 2πδ(ω),
the Fourier transform of 1/2 is πδ(ω); also, from Table 7.2 (and as verified in Example 9.29,
next),
sgn(t) ↔ 2/(jω).
Thus, using the addition property of the Fourier transform, we conclude that
u(t) = 1/2 + (1/2) sgn(t) ↔ πδ(ω) + 1/(jω),
as indicated in Table 7.2. Note that the term πδ(ω) in the Fourier transform of u(t)
accounts for its DC component equal to 1/2; the second term 1/(jω) results from its AC
component (1/2) sgn(t).
Example 9.29
Notice that
u(t) = 1/2 + (1/2) sgn(t)
implies
du/dt = δ(t) = (1/2)(d/dt) sgn(t),
or, equivalently,
(d/dt) sgn(t) = 2δ(t).
Using the Fourier time-derivative property and the fact that δ(t) ↔ 1, verify
that
sgn(t) ↔ 2/(jω).
Solution Let S(ω) denote the Fourier transform of sgn(t). Then the Fourier
transform of the equation
d
sgn(t) = 2δ(t)
dt
is
j ωS(ω) = 2.
Hence,
S(ω) = 2/(jω)
and, consequently,
sgn(t) ↔ 2/(jω).
Since sgn(t) is a pure AC signal—that is, lim_{T→∞} (1/2T) ∫_{−T}^{T} sgn(t) dt = 0—its
Fourier transform does not contain an impulse term.
We offer one closing comment about the Fourier transform of power signals,
such as u(t) and cos(ωo t) where
u(t) ↔ πδ(ω) + 1/(jω)
and
cos(ωo t) ↔ πδ(ω − ωo) + πδ(ω + ωo).
It should be apparent that, in the case of these types of signals, the Fourier integral
does not converge. Therefore, the corresponding Fourier transform does not exist in
the usual sense. This is the reason that the Fourier transforms of signals such as u(t)
and cos(ωo t) must be expressed in terms of impulses.
Section 9.4 Sampling and Analog Signal Reconstruction 325
Now that we know about the impulse train and its Fourier transform (see Example 9.25
in the previous section), we are ready to examine the basic ideas behind analog-to-
digital conversion and analog signal reconstruction. In this section we will learn about
the Nyquist criterion, which constrains the sampling rates used in CD production, and
find out how a CD player may convert data stored on a CD into sound.
Consider a bandlimited signal
f(t) ↔ F(ω)
having bandwidth B Hz, so that
F(ω) = 0 for |ω| ≥ 2πB,
and suppose that we know only its discrete samples
fn = f(nT),
where the samples are equally spaced at times t = nT, which are integer multiples of
the sampling interval T. In the modern digital world, where music and image samples
routinely are stored on a computer, we might ask whether it is possible to reconstruct
the analog signal f(t) from its discrete samples fn with full fidelity (i.e., identically).
It turns out that the answer is yes if the sampling interval T is small enough, compared
with the reciprocal of the signal bandwidth B. The specific requirement, called the
Nyquist criterion, is
T < 1/(2B),
or, equivalently,
1/T > 2B.
Notice that the version of the Nyquist criterion just presented states that the sampling
frequency 1/T must be larger than twice the highest frequency B (measured in Hertz)
in the signal being sampled. That is, each frequency component in f (t) must be
sampled at a rate of at least two samples per period. Under this condition, it is theo-
retically possible for us to exactly reconstruct the analog signal f(t) from its discrete
samples fn by using the so-called reconstruction formula
f(t) = Σ_n fn sinc((π/T)(t − nT)).
However, if the Nyquist criterion is violated, then the reconstruction formula becomes
invalid in the sense that the sum on the right side of the formula converges, not to the
original analog signal f (t) shown on the left, but to another analog signal known as
an aliased version of f (t). Before we verify the validity of the reconstruction formula
and the Nyquist criterion, let us examine the sampling and reconstruction examples
shown in Figures 9.18 and 9.19.
Figure 9.18 (a) An analog signal f(t) and its discrete samples f(nT) (shown by dots) taken at T = 1 s
intervals, (b) interpolating function sinc(πt/T) for the case T = 1 s, and (c) reconstructed f(t), using the
interpolation formula. (See text.) Panel (d) shows the functions f(nT) sinc((π/T)(t − nT)) for T = 1 s, which
are summed up to produce the reconstructed f(t) shown in panel (c). In this example, reconstruction is
successful because the Nyquist criterion is satisfied.
Figure 9.19 (a) f(t) = cos(4t) and its samples f(nT), T = 1 s, (b) interpolating
function sinc(πt/T) for T = 1 s, (c) reconstructed f(t), and (d) f(nT) sinc((π/T)(t − nT))
for T = 1 and all n. Notice that the reconstructed f(t) does not match the
original f(t) because of undersampling of f(t) in violation of the Nyquist
criterion, which requires T < 1/(2B). (See text.)
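The reconstruction formula and the role of the Nyquist criterion can be illustrated numerically. In the sketch below (an illustration with arbitrary signal choices, not from the text), a truncated version of the sum recovers cos(2t) from T = 1 s samples, but converges to an aliased signal for cos(4t):

```python
import numpy as np

# f(t) = sum_n f(nT) sinc(pi (t - nT)/T), truncated to |n| <= 200.
# With w0 = 2 rad/s and T = 1 s the Nyquist criterion T < pi/w0 holds;
# with w0 = 4 rad/s it is violated. (np.sinc(x) = sin(pi x)/(pi x).)
T = 1.0
n = np.arange(-200, 201)                     # finite truncation of the sum
t = np.linspace(-2.0, 2.0, 81)

def reconstruct(w0):
    samples = np.cos(w0 * n * T)             # f(nT)
    # sinc(pi (t - nT)/T) = np.sinc((t - nT)/T), since np.sinc absorbs pi
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

ok = np.max(np.abs(reconstruct(2.0) - np.cos(2.0 * t)))   # small: reconstruction works
bad = np.max(np.abs(reconstruct(4.0) - np.cos(4.0 * t)))  # order 1: aliasing
print(ok < 0.05, bad > 0.5)
```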
Hence,
Σ_{n=−∞}^{∞} f(t) δ(t − nT) ↔ (1/T) Σ_{n=−∞}^{∞} F(ω − n 2π/T),
Let us examine
FT(ω) ≡ (1/T) Σ_{n=−∞}^{∞} F(ω − n 2π/T)
with the aid of Figure 9.20. Panel (b) in the figure presents a sketch of FT(ω)
as a collection of amplitude-scaled and frequency-shifted replicas of F (ω),
the Fourier transform of f (t) sketched in panel (a). For the FT (ω) shown in
panel (b), it is clear that 2πB < π/T, or, equivalently, B < 1/(2T), in accordance
with the Nyquist criterion. Also, it should be clear that if FT (ω) in the same
panel were multiplied with
H(ω) = T rect(ω/(2π/T)),
the product would be exactly F(ω).
Figure 9.20 (a) The Fourier transform F(ω) of a band-limited signal f(t) with
bandwidth 2πB rad/s; (b) and (c) are FT(ω) = Σ_{n=−∞}^{∞} (1/T) F(ω − (2π/T)n) for the same
signal, with the sampling interval T < 1/(2B) and T > 1/(2B), respectively. Note that the
central feature of FT(ω) in panel (b) is identical to (1/T)F(ω). Also note that panel (c)
contains no isolated replica of F(ω). Therefore, F(ω) can be correctly inferred
from FT(ω) if and only if T < 1/(2B).
Equivalently, in the time domain, filtering of Σ_{n=−∞}^{∞} f(nT)δ(t − nT) (having the Fourier
transform FT(ω)) via a system with impulse response h(t) = sinc(πt/T) (with Fourier transform
T rect(ω/(2π/T))) produces f(t). But, the same convolution operation describing
the filter action yields precisely the right side of the reconstruction formula.
The proof of the reconstruction formula just completed is contingent
upon satisfying the Nyquist criterion. When the criterion is violated (i.e.,
when 2πB ≥ π/T, as illustrated in Figure 9.20c), FT(ω) no longer contains an
isolated replica of F (ω) within the passband of H (ω), and, as a consequence,
in such situations the reconstruction formula produces aliased results (as in
Figure 9.19).
D/A (digital-to-analog) conversion is a hardware implementation that mimics the
reconstruction formula just verified. This is accomplished by utilizing a circuit that
creates a weighted pulse train
Σ_n fn p(t − nT),
where p(t) generally is a rectangular pulse of width T , and then low-pass filtering
the pulse train, using a suitable LTI system
h(t) ↔ H (ω).
The reconstruction of f(t) is nearly ideal if h(t) is designed in such a way that h(t) ∗
p(t) is a good approximation to (a delayed) sinc(πt/T). This reconstruction process,
which parallels our proof of the reconstruction formula, is illustrated symbolically on
the right side of the system shown in Figure 9.21. The left side of the same system
is a symbolic representation of an A/D (analog-to-digital) converter, where the input
f (t) is sampled every T seconds in order to generate the sequence of discrete samples
fn = f (nT ).
Figure 9.21 A model system that samples an analog input f(t) with a sampling
interval T and generates an analog output y(t) by using the samples fn = f(nT).
The input to filter H(ω) is Σ_n fn p(t − nT). Mathematically, if p(t) = δ(t) and
H(ω) = T rect(ω/(2π/T)), then y(t) = f(t), assuming the Nyquist criterion is satisfied. In
real-world systems, p(t) is chosen to be a rectangular pulse of width T, and H(ω)
is chosen so that |P(ω)||H(ω)| ≈ T rect(ω/(2π/T)).
The system shown in Figure 9.21 simply regenerates y(t) = f (t) at its output,
because the samples fn of input f (t) are not modified in any way within the system
prior to reconstruction. The option of modifying the samples fn is there, however,
and, hence, the endless possibilities of digital signal processing (DSP). DSP can be
used to convert analog input signals f (t) into new, desirable analog outputs y(t) by
replacing the samples fn by a newly computed sequence yn , prior to reconstruction.
Examples of digital processing, or manipulating the samples fn , are
yn = (1/2)(fn + fn−1),
which is a simple smoothing (averaging) digital low-pass filter, and
yn = (1/2)(fn − fn−1),
which is a simple high-pass digital filter that emphasizes variations from one sample
to the next in fn . More sophisticated digital filters compute outputs as a more general
weighted sum of present and past inputs and, sometimes, past outputs as well. Some
other types of digital processing are explored (along with aliasing errors) in one of
the labs. (See Appendix B.)
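A minimal sketch of these two digital filters, applied to an assumed test sequence (the sampling rate and signal below are illustrative choices only, not from the text):

```python
import numpy as np

# Test sequence: a slow tone plus a (-1)^n component at the highest
# digital frequency. The smoothing filter removes the alternating part
# exactly; the differencing filter preserves it.
fs = 100.0                                   # assumed sampling rate, Hz
n = np.arange(200)
tone = np.cos(2 * np.pi * 1.0 * n / fs)      # slowly varying component
f = tone + 0.2 * (-1.0) ** n                 # add rapidly alternating component

f_prev = np.concatenate(([0.0], f[:-1]))     # f_{n-1}, taking f_{-1} = 0
y_lp = 0.5 * (f + f_prev)                    # smoothing (low-pass) filter
y_hp = 0.5 * (f - f_prev)                    # differencing (high-pass) filter

print(np.max(np.abs(y_lp - tone)[1:]) < 0.05)   # True: alternating part removed
```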
Sound cards, found nowadays in most PCs, consist of a pair of A/D and D/A
converters that work in concert with one another and the rest of the PC—CPU,
memory, keyboard, CD player, speaker, etc. These cards can perform a variety of
signal processing tasks in the audio frequency range. For instance, the D/A circuitry
on the sound card converts samples fn = f (nT ), fetched from a CD (or an MP3 file
stored on the hard disk), into a song f (t) by a procedure like the one described here.
The standard sampling interval used in sound cards is T = 1/44100 s. In other
words, sound cards sample their input signals at a rate of 1/T = 44100 samples/s,
or, 44.1 kHz, as usually quoted. Since the Nyquist criterion requires that T < 1/(2B)
or, equivalently, B < 1/(2T) = 22.05 kHz, only signals bandlimited to 22.05 kHz can
be processed (without aliasing). Thus, the input stage of a sound card (prior to the
A/D) typically incorporates a low-pass filter with bandwidth around 20 kHz (or less)
to reduce or prevent aliasing effects. The 44.1 kHz sampling rate of standard sound
cards is the same as the sampling rate used in audio CD production. Because human
hearing does not exceed 20 kHz, it is possible to low-pass filter analog audio signals
to 20 kHz prior to sampling, with no audible effect. The choice of sampling rate
(44.1 kHz) in excess of twice the filtered sound bandwidth (20 kHz) works quite
well and allows for some flexibility in the design of the D/A (i.e., choice of H (ω) in
Figure 9.21). Special cards with wider bandwidths and higher sampling rates are used
to digitize and process signals encountered in communications and radar applications.
Digital oscilloscopes also use high-bandwidth A/D cards. Some applications use A/D
converters operating up into the GHz range.
Returning to Figure 9.20, recall that
FT(ω) = (1/T) Σ_n F(ω − n 2π/T).
Also, recall that if f(t) is bandlimited and T satisfies the Nyquist criterion, then
FT(ω) = (1/T) F(ω) for |ω| < π/T. (See Figures 9.20a and 9.20b.) So, if we can find a
way of calculating FT (ω), we then have a way of calculating F (ω). This is easily
done. Remembering that
fT(t) = Σ_n f(nT) δ(t − nT) ↔ FT(ω),
and then transforming the preceding expression for fT(t), term by term, we get
FT(ω) = Σ_n f(nT) e^{−jωnT},
which is an alternative formula for FT (ω). This formula provides us with a means of
computing the Fourier transform F (ω) of a bandlimited f (t) by using only its sample
data f (nT ), namely,
F(ω) = T FT(ω) = T Σ_n f(nT) e^{−jωnT},  |ω| < π/T.
Evaluated at the discrete frequencies ω = (2π/(NT))m, this sum becomes
Fm ≡ Σ_{n=0}^{N−1} fn e^{−j2πmn/N} = FT((2π/(NT))m),
m ∈ [0, 1, ···, N − 1], which can be computed using the Mathematica “Fourier” function (an
implementation of the popular FFT algorithm). The plotted curves actually correspond to the normalized quantity
|Fm|² / Σ_{k=0}^{N−1} |Fk|²,
for m ∈ [0, 511, 1023, ···, N/2 − 1], after a 512-point running average operation was applied to
|Fm|². The purpose of the running average was to smooth the spectrum and reduce the number of
data points to be plotted.
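The sample-based formula for F(ω) can be tested numerically. The sketch below assumes, for illustration, the bandlimited signal f(t) = sin(t)/(πt), whose transform equals 1 for |ω| < 1 and 0 for |ω| > 1, so any T < π satisfies the Nyquist criterion:

```python
import numpy as np

# F(w) = T * sum_n f(nT) e^{-jwnT} for |w| < pi/T, with the infinite sum
# truncated to |n| <= 5000. (T = 1 s and the test signal are assumptions.)
T = 1.0
n = np.arange(-5000, 5001)
fn = np.sinc(n * T / np.pi) / np.pi        # f(nT); np.sinc(x) = sin(pi x)/(pi x)
w = 0.5                                     # evaluate well inside the band |w| < 1
Fw = T * np.sum(fn * np.exp(-1j * w * n * T))
print(abs(Fw - 1.0) < 0.01)   # True: matches F(0.5) = 1
```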
The impulse definition and concept play important roles outside of signal processing,
for example in various physics problems. Take, for instance, the idealized notion
of a point charge. Physicists often envision charged particles, such as electrons, as
point charges occupying no volume, but carrying some definite amount of charge, say
q coulombs. Since the equations of electrodynamics (Newton’s laws and Maxwell’s
equations) are formulated in terms of charge density ρ(x, y, z, t), a physicist studying
the motion of a single free electron in some electromagnetic device needs an expres-
sion ρ(x, y, z, t) describing the state (the position and motion) of the electron. A
model for the charge density of a stationary electron located at the origin and envi-
sioned as a point charge is
ρ(x, y, z, t) = q δ(x) δ(y) δ(z)  C/m³
in terms of spatial impulses δ(x), δ(y), and δ(z) and electronic charge q. This is a
satisfactory representation, because the volume integral of ρ(x, y, z, t) over a small
cube centered about coordinates (xo , yo , zo ), that is,
∫_cube q δ(x)δ(y)δ(z) dx dy dz = q [∫_{xo−ε/2}^{xo+ε/2} δ(x) dx] × [∫_{yo−ε/2}^{yo+ε/2} δ(y) dy] × [∫_{zo−ε/2}^{zo+ε/2} δ(z) dz],
equals q for xo = yo = zo = 0, for arbitrarily small ε > 0, yielding the total amount
of charge contained within the box. Conversely, if the box excludes the origin, then
the integration result is zero.
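The box-integral argument can be checked symbolically. In the SymPy sketch below (the box edges are arbitrary illustrative choices), the integral of qδ(x)δ(y)δ(z) over a box containing the origin yields q, while a box excluding the origin yields 0:

```python
import sympy as sp

# Volume integral of the point-charge density over two boxes: one
# containing the origin, one displaced away from it along x.
x, y, z, q = sp.symbols('x y z q')
rho = q * sp.DiracDelta(x) * sp.DiracDelta(y) * sp.DiracDelta(z)
inside = sp.integrate(rho, (x, -1, 1), (y, -1, 1), (z, -1, 1))
outside = sp.integrate(rho, (x, 2, 3), (y, -1, 1), (z, -1, 1))
print(inside, outside)   # q 0
```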
Example 9.30
How would you express the charge density ρ(x, y, z, t) of an oscillating
electron with time-dependent coordinates (0, 0, zo cos(ωt))?
Solution In view of the preceding discussion, the answer must be
ρ(x, y, z, t) = q δ(x) δ(y) δ(z − zo cos(ωt))  C/m³,
since the electron always remains on the z-axis and deviates from the origin
by an amount z = zo cos(ωt).
EXERCISES
9.3 Given f (t) = u(t), g(t) = 2tu(t), and q(t) = f (t − 1) ∗ g(t), determine
q(4).
9.4 Suppose that the convolution of f (t) = rect(t) with some h(t) produces
y(t) = ((t − 2)/2). What is the convolution of rect(t) + rect(t − 4) with h(t) −
2h(t − 6)? Sketch the result.
9.5 Determine and plot c(t) = rect((t − 5)/2) ∗ ((t − 8)/4).
9.7 Simplify the following expressions involving the impulse and/or shifted
impulse and sketch the results:
(a) f(t) = (1 + t²)(δ(t) − 2δ(t − 4)).
(b) g(t) = cos(2πt)(du/dt + δ(t + 0.5)).
(c) h(t) = sin(2πt) δ(0.5 − 2t).
(d) y(t) = ∫_{−6}^{∞} (τ² + 6) δ(τ − 2) dτ.
(e) z(t) = ∫_{6}^{∞} (τ² + 6) δ(τ − 2) dτ.
(f) a(t) = ∫_{−∞}^{t} δ(τ + 1) dτ + rect(t/6) δ(t − 2).
(g) b(t) = δ(t − 3) ∗ u(t).
(h) c(t) = (t/2) ∗ (δ(t) − δ(t + 2)).
9.10 For a system with impulse response h(t), the system output y(t) = h(t) ∗
f(t) = rect((t − 4)/2). Determine and sketch h(t) if
(d) C(ω) = 8/(jω) + 4πδ(ω).
9.15 The inverse of the sampling interval T —that is, T −1 —is known as the
sampling frequency and usually is specified in units of Hz. Determine the
minimum sampling frequencies T −1 needed to sample the following analog
signals without causing aliasing error:
(a) Arbitrary signal f (t) with bandwidth 20 kHz.
(b) f (t) = sinc(4000πt).
(c) f (t) = sinc(4000πt) cos(20000πt).
9.16 Using
Σ_{n=−∞}^{∞} f(t) δ(t − nT) ↔ (1/T) Σ_{n=−∞}^{∞} F(ω − n 2π/T)
(item 25 in Table 7.2), sketch the Fourier transform FT (ω) of signal
fT(t) ≡ Σ_{n=−∞}^{∞} f(t) δ(t − nT)
if
f (t) = cos(4πt),
assuming (a) T = 1 s, and (b) T = 0.2 s. Which sampling period allows
f (t) to be recovered by applying an appropriate low-pass filter to fT (t)?
9.17 Given the identity
∫_{−∞}^{∞} δ(t − to) f(t) dt = f(to),
where the function f (t) of time variable t is measured in units of, say, volts
(V), what would be the units of the shifted impulse δ(t − to )? What would
be the units of an impulse δ(x) if x is a position variable measured in units
of meters? What would be the units of a charge distribution specified as
qδ(x − 4) if q is measured in units of coulombs (C)? It is an interesting fact
that the impulse δ(t) has a unit (in the sense of a dimension), but it has no
numerical values; only integrals of the impulse have a numerical value.
9.18 To confirm the scaling property of the impulse, show that
δ(a(t − to )) ∗ f (t)
and
(1/|a|) δ(t − to) ∗ f(t)
are identical for a = 0. Hint: Write the above convolutions explicitly and
make use of an appropriate change of variable before applying the sifting
property of the impulse.
10
Impulse Response,
Stability, Causality, and
LTIC Systems
In the last chapter we discovered that the zero-state response y(t) of an LTI system
H(ω) to an arbitrary input can be calculated in the time domain with the convolution
formula
y(t) = h(t) ∗ f(t),
as shown in Figure 10.1. Here, h(t) is the inverse Fourier transform of the system
frequency response H(ω) or, equivalently, the zero-state response of the system to
an impulse input.
But what if the Fourier transform of an impulse response h(t) does not exist, as
in the case of the impulse response h(t) = e^t u(t)? Does this mean that we cannot
make use of h(t) ∗ f (t) to calculate the zero-state response in such cases?
To the contrary, as we shall see in Section 10.1, the convolution method y(t) =
h(t) ∗ f (t) is always valid for LTI systems, so long as the integral converges, and
is, in fact, more fundamental than Fourier inversion of Y (ω) = H (ω)F (ω). It is the
latter method that fails in the scenario invoked, when the Fourier transform of h(t)
(or of f (t), for that matter) does not exist.
338 Chapter 10 Impulse Response, Stability, Causality, and LTIC Systems
Figure 10.1 The time-domain input–output relation for LTI systems with
frequency response H(ω). The convolution formula y(t) = h(t) ∗ f (t) describes
the system zero-state response to input f (t), where h(t) is the inverse Fourier
transform of the system frequency response H(ω).
Section 10.1 begins with a discussion of how h(t), the impulse response of an
LTI system, can be measured in the lab. The discussion subsequently focuses on the
universality of the convolution formula, h(t) ∗ f (t), and the relation between h(t) and
H (ω). In Section 10.2 we examine stability conditions for LTI systems and establish
the fact that only those systems with absolutely integrable h(t) are guaranteed to
produce bounded outputs when presented with bounded inputs. Next, in Section 10.3
we introduce the concept of causality and establish the fact that impulse responses
h(t) of causal real-time LTI systems, such as linear circuits built in the lab, vanish for
t < 0. We refer to causal LTI systems as LTIC systems and consider a sequence of
examples that illustrate how to recognize whether a system is causal, linear, and time-
invariant. We conclude with short sections that discuss the importance of noncausal
models in some settings (Section 10.4) and the modeling of delay lines (Section 10.5).
Section 10.1 Impulse Response h(t) and Zero-State Response y(t) = h(t) ∗ f (t) 339
where pΔ (t) is a pulse centered about t = 0, having width Δ and area 1. (See Figure 9.10.) Now, if we were to apply an input pΔ (t) to a system in the lab having an unknown impulse response h(t), we would measure (or display on a scope) an output pΔ (t) ∗ h(t). Taking a sequence of such measurements with inputs pΔ (t) having decreasing widths (and increasing heights), we should see the output pΔ (t) ∗ h(t) converge to h(t). We would need to keep reducing Δ until further changes in the output were imperceptible. If that did not happen and the output kept changing at every step, no matter how small Δ became (see Example 10.1, which follows, for a possible reason), then we could instead use the second method, presented next.
(2) Excite the system with a unit-step input f(t) = u(t) to obtain the unit-step response g(t). In symbolic terms,
u(t) −→ LTI −→ g(t) = h(t) ∗ u(t),
and the impulse response then follows by differentiation, h(t) = dg/dt, since δ(t) = du/dt.
Example 10.1
Measurements in the lab indicate that the unit-step response of a certain circuit is
g(t) = e−t u(t).
What is the system impulse response h(t)? Can we measure h(t) by using the first method described above?
Solution We find h(t) by differentiating g(t):
h(t) = dg/dt = (d/dt)(e−t u(t)) = −e−t u(t) + e−t (du/dt)
= −e−t u(t) + e−t δ(t) = δ(t) − e−t u(t).
Notice that we used the sampling property of the impulse to replace e−t δ(t)
with δ(t).
In implementing the first method, the system response to input pΔ (t) is
h(t) ∗ pΔ (t) = (δ(t) − e−t u(t)) ∗ pΔ (t) = pΔ (t) − e−t u(t) ∗ pΔ (t).
As Δ is reduced, the second term of the output will converge to e−t u(t), because pΔ (t) acts like an impulse under convolution in that limit. However, the first term pΔ (t) will not converge (i.e., stop changing) as Δ is reduced. Even if we guess that an impulse is appearing in the output as Δ is made smaller, it will be difficult to estimate the area of the impulse. Thus, the first method will not be workable in practice. This problem arises because the system impulse response h(t) = δ(t) − e−t u(t) contains an impulse.
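The narrow-pulse measurement can be sketched numerically. Below is a minimal simulation, assuming a hypothetical test system with h(t) = e−t u(t) (which contains no impulse, so the first method does work); the L1 error between the measured output pΔ(t) ∗ h(t) and h(t) shrinks as the pulse narrows:

```python
import numpy as np

dt = 1e-3                         # simulation time step
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                    # assumed test system: h(t) = e^{-t} u(t) for t >= 0

def measured_response(width):
    """Zero-state response to a unit-area pulse of the given width."""
    p = np.where(t < width, 1.0 / width, 0.0)   # pulse: height 1/width on [0, width)
    return np.convolve(p, h)[: len(t)] * dt     # numerical convolution p * h

# L1 distance between the measured output and the true impulse response:
err_wide   = np.sum(np.abs(measured_response(0.5)  - h)) * dt
err_narrow = np.sum(np.abs(measured_response(0.01) - h)) * dt
print(err_wide, err_narrow)       # the error shrinks as the pulse narrows
```

Had h(t) contained an impulse, as in Example 10.1, the first term of the output would keep changing with every reduction of the pulse width instead of settling.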
Example 10.2
Measurements in the lab indicate that the unit-step response of a certain circuit is
g(t) = te−t u(t).
What is the system impulse response h(t)? Can we measure h(t) by using the first method described in this section?
Solution Similar to Example 10.1, the impulse response is
h(t) = dg/dt = (d/dt)(te−t u(t)) = (1 − t)e−t u(t) + te−t (du/dt)
= (1 − t)e−t u(t) + te−t δ(t) = (1 − t)e−t u(t).
Because h(t) does not contain an impulse, the first method also will work.
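The two derivatives above can be checked symbolically; a quick sketch with sympy, restricted to t > 0 where u(t) = 1 (the impulse term in Example 10.1 arises separately, from the jump of g(t) at t = 0):

```python
import sympy as sp

t = sp.symbols('t', positive=True)   # work on t > 0, where u(t) = 1

g1 = sp.exp(-t)          # unit-step response from Example 10.1 (for t > 0)
g2 = t * sp.exp(-t)      # unit-step response from Example 10.2 (for t > 0)

h1 = sp.diff(g1, t)      # expect -e^{-t}; the delta at t = 0 must be added by hand
h2 = sp.diff(g2, t)      # expect (1 - t) e^{-t}; no impulse, since g2 has no jump at 0

print(sp.simplify(h1 + sp.exp(-t)))              # 0
print(sp.simplify(h2 - (1 - t) * sp.exp(-t)))    # 0
```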
Example 10.3
What is the frequency response of the system described in Example 10.2?
Solution Given that e−t u(t) ↔ 1/(1 + jω) and te−t u(t) ↔ 1/(1 + jω)², linearity of the Fourier transform gives
H(ω) = 1/(1 + jω) − 1/(1 + jω)² = jω/(1 + jω)².
Example 10.4
A system that is known to be LTI responds to the input u(t + 1) with the
output rect( 2t ), as shown at the top of Figure 10.2. What will be the system
response y(t) to the input f (t) = rect(t)? Solve this problem by first finding
the system impulse response.
Solution Since the system is time-invariant, the information
u(t + 1) −→ LTI −→ rect(t/2)
Figure 10.2 An LTI system that responds to an input u(t + 1) with the output rect( 2t ) will respond to
input f (t) = rect(t) with output rect(t) − rect(t − 2), as shown here. (See Example 10.4.)
implies that
u(t) −→ LTI −→ rect((t − 1)/2) = u(t) − u(t − 2).
Thus, the impulse response is
h(t) = (d/dt)[u(t) − u(t − 2)] = δ(t) − δ(t − 2),
so that
y(t) = h(t) ∗ f(t) = [δ(t) − δ(t − 2)] ∗ rect(t) = rect(t) − rect(t − 2).
We can solve this same problem more directly by using the properties of super-
position and time invariance and the fact that the second input can be written as a
linear combination of delayed versions of the first input. Working the problem in this
way will produce the answer
y(t) = rect((t − 1/2)/2) − rect((t − 3/2)/2),
which can be shown to be the same as the former answer. Try it!
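The claimed equivalence is easy to verify numerically. A short check (sampling off the rectangle edges, where the two expressions may differ pointwise):

```python
import numpy as np

def rect(x):
    """Unit rectangle: 1 for |x| < 1/2, 0 otherwise."""
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

# Offset the grid by half a step so no sample lands exactly on a rectangle edge.
t = np.arange(-3.0, 3.0, 0.001) + 0.0005

y_direct = rect(t) - rect(t - 2)                      # answer via h(t) = delta(t) - delta(t - 2)
y_super  = rect((t - 0.5) / 2) - rect((t - 1.5) / 2)  # answer via superposition of step responses

gap = np.max(np.abs(y_direct - y_super))
print(gap)   # 0.0: the two answers agree at every sample
```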
It is surprisingly easy for us to develop the convolution formula, describing the zero-
state response of linear, time-invariant (LTI) systems, by working strictly within the
time domain, without any need for the frequency response H (ω) to exist. To do so,
we begin by noting that every LTI system in zero state will respond to an impulse
input with a specific signal that we denote as h(t) and call the impulse response. That
is, in symbolic notation,
δ(t) −→ LTI −→ h(t)
is true for any LTI system as a matter of definition of its impulse response. Now, invoking time-invariance of the system—meaning that delayed inputs cause equally delayed outputs—we have
δ(t − τ) −→ LTI −→ h(t − τ)
for every delay τ. Since the sifting property allows an arbitrary input to be written as f(t) = ∫_{−∞}^{∞} f(τ)δ(t − τ)dτ, superposition1 then gives
y(t) = ∫_{−∞}^{∞} f(τ)h(t − τ)dτ = h(t) ∗ f(t).
This is the formula for the zero-state response of all LTI systems with all possible inputs f(t)—the existence of the Fourier transform of the system impulse response h(t) is not necessary.
1 Here, we assume that the LTI system satisfies the superposition property even for an uncountably
infinite number of input terms; i.e., for linear combinations of δ(t − τ ) for all values of τ. This assumption
is satisfied by all LTI systems encountered in the lab.
Example 10.5
When applied to a particular LTI system, an impulse input δ(t) produces the output h(t) = e^t u(t). What is the zero-state response of the same system to the input f(t) = u(t)?
Solution
y(t) = h(t) ∗ f(t) = e^t u(t) ∗ u(t) = ∫_{−∞}^{t} e^τ u(τ)dτ = (e^t − 1)u(t).
Notice that we could not have calculated y(t) with the Fourier method, because the Fourier transform of h(t) = e^t u(t) does not exist. Also, notice that the output y(t) is not bounded (y(t) → ∞ as t → ∞), even though the input f(t) = u(t) is a bounded function (|f(t)| ≤ 1 for all t).
Example 10.6
For a system with input f (t), the output is given as
y(t) = f(t + T),
where T is a constant. Is the system LTI? Since f(t + T) = δ(t + T) ∗ f(t), the input–output relation is in the convolution form y(t) = h(t) ∗ f(t) with impulse response
h(t) = δ(t + T).
Thus, the system must satisfy both zero-state linearity and time invariance.
Example 10.7
Suppose a system has input–output relation
y(t) = f²(t + T).
Because of the squaring, this relation cannot be expressed as a convolution with any fixed h(t); it is not in the form y(t) = h(t) ∗ f(t). Thus, the system is not LTI.
Example 10.8
Is the system
y(t) = f 2 (t + T )
time invariant?
Solution We already know from Example 10.7 that the system is not
LTI. But, it still could be time invariant. To test time invariance, we feed
the system with a new input
f1 (t) = f(t − to ),
whose output is y1 (t) = f1²(t + T) = f²(t + T − to ) = y(t − to ).
Because the new output is a delayed version of the original output, the
system is time invariant. Since the system is time invariant, but not LTI, it
must be nonlinear. We easily can confirm this by noting that a doubling of
the input does not double the output.
We also can test zero-state linearity of the system from first principles
by checking whether the output formula supports linear superposition. With
an input f(t) = f1 (t) + f2 (t), the output is
y(t) = (f1 (t + T) + f2 (t + T))² = f1²(t + T) + 2f1 (t + T)f2 (t + T) + f2²(t + T).
Notice that this is not the sum of the responses f1²(t + T) and f2²(t + T) due to individual inputs f1 (t) and f2 (t). Hence, as expected, the system is not linear.
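These conclusions can be confirmed numerically. A sketch (with arbitrary, assumed test values T = 0.25, to = 0.5, and input f(t) = cos 3t + t/2):

```python
import numpy as np

T, to = 0.25, 0.5                              # assumed shift and test delay
t = np.linspace(-2.0, 2.0, 401)
f = lambda x: np.cos(3 * x) + 0.5 * x          # arbitrary test input

system = lambda g: (lambda x: g(x + T) ** 2)   # the system y(t) = f^2(t + T)

y  = system(f)
y1 = system(lambda x: f(x - to))               # response to delayed input f(t - to)
y2 = system(lambda x: 2 * f(x))                # response to doubled input 2 f(t)

ti_gap = np.max(np.abs(y1(t) - y(t - to)))     # ~0: delayed input gives delayed output
nl_gap = np.max(np.abs(y2(t) - 2 * y(t)))      # large: doubling input does not double output
print(ti_gap, nl_gap)
```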
Example 10.9
A system responds to an unspecified input f (t) with the output 2rect(t).
If a delayed version f (t − 2) of the same input is applied, the output is
observed to be 4rect(t − 2). Is the system LTI? Time invariant? Zero-state
linear?
Solution The system is not time invariant because, if it were, its response
to input f (t − 2) would have been 2rect(t − 2) instead of 4rect(t − 2).
Because the system is not time invariant, it cannot be LTI.
Is the system zero-state linear? It could be. There is not enough infor-
mation provided to test whether the system is zero-state linear. Unless a
general input–output formula that relates f (t) and y(t) is given, it is not
always possible to test zero-state linearity and time invariance.
Consider next the zero-state response of an LTI system to a complex exponential input e^{jωt}. The convolution formula gives
e^{jωt} −→ LTI −→ e^{jωt} H(ω),
where
H(ω) ≡ ∫_{−∞}^{∞} h(τ)e^{−jωτ} dτ.
In other words, given a complex exponential input, the output of an LTI system is
the same complex exponential, except scaled by a constant that is the frequency
response evaluated at the frequency of the input. As before, the frequency response
is the Fourier transform of the impulse response h(t). What we have just shown is
consistent with our earlier development of the concept of frequency response, but
here we have arrived at the same notion through use of the convolution formula y(t) = h(t) ∗ f(t).
Notice that this result implicitly assumes that the Fourier transform of h(t)
converges so that H (ω) is well defined (i.e., is finite at every value of ω). If the
Fourier integral does not converge (in engineering terms, H (ω) may be infinite), then
our result suggests that the system zero-state response to a bounded input ej ωt may
be unbounded. Such LTI systems with nonconvergent H (ω) are considered unstable,
a concept to be examined in more detail in the next section.
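For a stable example this eigenfunction property is easy to check numerically; a sketch using h(t) = e−t u(t), whose frequency response 1/(1 + jω) is known from Example 10.10 below:

```python
import numpy as np

dt = 1e-3
tau = np.arange(0.0, 30.0, dt)            # h(t) has decayed to ~1e-13 by t = 30
h = np.exp(-tau)                          # h(t) = e^{-t} u(t)

w = 2.0                                   # input frequency, rad/s
H_num   = np.sum(h * np.exp(-1j * w * tau)) * dt   # H(w) by numerical integration
H_exact = 1.0 / (1.0 + 1j * w)

# Zero-state output at t0, via the convolution integral, for the input e^{jwt}:
t0 = 1.0
y_t0 = np.sum(h * np.exp(1j * w * (t0 - tau))) * dt
expected = H_exact * np.exp(1j * w * t0)  # the same exponential, scaled by H(w)
print(abs(H_num - H_exact), abs(y_t0 - expected))   # both small
```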
Example 10.10
Find the frequency responses H (ω) of the LTI systems having impulse
response functions h1 (t) = e−t u(t), h2 (t) = tu(t), and h3 (t) = u(t).
Solution The Fourier transform integral of h1 (t) = e−t u(t) equals H1 (ω) = 1/(1 + jω), which is the frequency response of the system.
The Fourier transform integral of h2 (t) = tu(t) does not converge, and
consequently, the frequency response H2 (ω) does not exist.
The Fourier transform integral of h3 (t) = u(t) also does not converge. Even though (according to Table 7.2) the Fourier transform of the power signal u(t) is πδ(ω) + 1/(jω), this expression is not a regular function and the frequency response H3 (ω) of the system does not exist.
A more general frequency-domain description will be discussed in
Chapter 11 for systems h2 (t) = tu(t) and h3 (t) = u(t), based on the Laplace
transform.
10.2 BIBO Stability

The previous section suggested that if the Fourier transform of an impulse response
h(t) does not converge—as in the case with h(t) = u(t)—then the zero-state response
of the corresponding LTI system to a bounded input may be unbounded. In most cases
this behavior of having an unbounded output would be problematic. We generally wish
to avoid designing such systems, because as the output signal continues to grow, one
of two things can happen: Either the output signal grows large enough that circuit
components burn out, or the output saturates at the supply voltage level (a nonlinear
effect). In either case the circuit fails to produce the designed response.
Ordinarily, we wish to have systems that produce bounded outputs from bounded inputs. We will use the term BIBO stable—an abbreviation for bounded input, bounded output stable—to refer to such systems. Systems that are not stable are called unstable. The terms “bounded input” and “bounded output” in this context refer to functions f(t) and y(t) = h(t) ∗ f(t) having bounded magnitudes. In other words,
|f (t)| ≤ α < ∞
and
|y(t)| ≤ β < ∞
for all t, where the bounds α and β are finite constants. For instance, signals f (t) =
e−t u(t), u(t), e−|t| , cos(2t), sgn[cos(−t)], and ej 3t are bounded by α = 1 for any
value of t. If a signal is not bounded, then it is said to be unbounded; the functions
e−t , tu(t), and e^t u(t) are examples of unbounded signals.
If an LTI system is not BIBO stable, this means that there is at least one bounded
input function f (t) that will cause an unbounded zero-state response y(t) = h(t) ∗
f (t). It does not mean that all possible outputs y(t) of an unstable system will
be unbounded. For instance, the system h(t) = u(t) is not BIBO stable (e.g., the
output y(t) = u(t) ∗ f (t) due to input f (t) = 1 is not bounded), but its zero-state
response y(t) = u(t) ∗ rect(t) due to input f (t) = rect(t) is bounded. In practice, we
are concerned if even one bounded input can cause an unbounded output, and hence,
we insist on BIBO stability.
We might ask how we can test whether a system is BIBO stable. Certainly, we
cannot try every possible bounded input and then examine the corresponding outputs
Section 10.2 BIBO Stability 347
to check whether they are bounded. Doing so would require trying an infinite number
of inputs, which would require an infinite amount of time. Instead, we need a simpler
test. From our earlier discussion, we feel certain that if the Fourier transform of the
impulse response h(t) does not converge, then the corresponding LTI system cannot
be BIBO stable. So, might this be the test we are looking for? The answer is “not
quite,” because although convergence of the Fourier transform of h(t) is a necessary
condition for BIBO stability, it is not sufficient. For example, the Fourier transform
of sinc(t) converges, and yet a system with impulse response h(t) = sinc(t) is not
BIBO stable, as we will see shortly.
The key to BIBO stability turns out to be absolute integrability of the impulse
response h(t). More specifically, it can be shown that:
BIBO stability criterion: An LTI system is BIBO stable if and only if its impulse response h(t) is absolutely integrable, satisfying
∫_{−∞}^{∞} |h(t)| dt < ∞.
The proof of this BIBO-stability criterion will be given in two parts: First, we
will show that absolute integrability of h(t) is sufficient for BIBO stability; then we
will show that absolute integrability also is necessary for BIBO stability.
Proof of sufficiency: With the help of the triangle inequality, note that
|y(t)| = |h(t) ∗ f(t)| = |∫_{−∞}^{∞} h(τ)f(t − τ)dτ| ≤ ∫_{−∞}^{∞} |h(τ)||f(t − τ)|dτ ≤ |f(t)|max ∫_{−∞}^{∞} |h(τ)|dτ,
where |f(t)|max is the maximum value of |f(t)| for all t. Now, suppose that h(t) is absolutely integrable, that is,
∫_{−∞}^{∞} |h(τ)|dτ = γ < ∞.
Then, for any bounded input f(t) such that |f(t)|max < ∞, the corresponding output y(t) = h(t) ∗ f(t) also is bounded, and we have
|y(t)| ≤ γ |f(t)|max < ∞ for all t.
Proof of necessity: Suppose h(t) is not absolutely integrable, and consider the bounded input f(t) = sgn(h(−t)). Evaluating the convolution at t = 0, we find that
y(0) = ∫_{−∞}^{∞} h(τ)f(−τ)dτ = ∫_{−∞}^{∞} h(τ) sgn(h(τ))dτ = ∫_{−∞}^{∞} |h(τ)|dτ.
Clearly, if h(t) is not absolutely integrable then y(0) will be infinite. Thus, y(t)
will not be bounded, even though the input f (t) = sgn(h(−t)) is bounded;
the sgn function takes on only ±1 values. Hence, we have proven the necessity
of an absolutely integrable h(t) for BIBO stability.
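With the constant bounded input f(t) = 1, the zero-state response reduces to the running integral of h(t), which makes the criterion easy to visualize numerically. A sketch contrasting the ideal integrator h(t) = u(t) with the absolutely integrable h(t) = e−t u(t):

```python
import numpy as np

dt = 1e-3
tau = np.arange(0.0, 50.0, dt)

h_int = np.ones_like(tau)       # ideal integrator: h(t) = u(t), not absolutely integrable
h_rc  = np.exp(-tau)            # h(t) = e^{-t} u(t): absolutely integrable, gamma = 1

# For f(t) = 1, the output is y(t) = integral of h over [0, t]:
y_int = np.cumsum(h_int) * dt   # grows like t: bounded input, unbounded output
y_rc  = np.cumsum(h_rc) * dt    # stays near gamma = 1, as the sufficiency proof guarantees
print(y_int[-1], y_rc[-1])
```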
Example 10.11
Determine whether the given systems are BIBO stable. For each system
that is unstable, provide an example of a bounded input that will cause an
unbounded output.
h1 (t) = e−t u(t),
h2 (t) = e−t ,
h3 (t) = 2u(t − 1),
h4 (t) = 2δ(t),
h5 (t) = e2t u(t),
h6 (t) = e−2t u(t) − e−t u(t),
h7 (t) = δ′(t) ≡ (d/dt) δ(t),
h8 (t) = cos(ωo t) u(t),
h9 (t) = rect(t − 1).
Solution The system with h1 (t) = e−t u(t) is BIBO stable, because
∫_{−∞}^{∞} |e−t u(t)|dt = ∫_{0}^{∞} e−t dt = 1,
which is finite. The system with h2 (t) = e−t is not BIBO stable, because the integral of |h2 (t)| = e−t over all t,
∫_{−∞}^{∞} e−t dt,
does not converge. There are many bounded inputs that will cause this system to have an unbounded output. For example, choosing f(t) = u(t), we see that the system response
y(t) = e−t ∗ u(t) = ∫_{−∞}^{t} e−τ dτ
is unbounded.
Our proof that absolute integrability of h(t) is both a necessary and
sufficient condition for BIBO stability assumed that h(t) is a function.
Because the impulse response for system h4 (t) = 2δ(t) is not a function (it
involves an impulse), we resort to first principles to test whether this system
is BIBO stable. In particular, we note that the system zero-state response
to an arbitrary input f(t) is
y(t) = 2δ(t) ∗ f(t) = 2f(t).
Thus, all bounded inputs f (t) cause bounded outputs 2f (t), and so the
system must be BIBO stable.
System h5 (t) = e2t u(t) is not BIBO stable, because the area under
|h5 (t)| = e2t u(t) is infinite. Many bounded inputs, including f (t) = u(t),
will produce an unbounded output.
System h6 (t) = e−2t u(t) − e−t u(t) is BIBO stable because
|h6 (t)| = |e−2t u(t) − e−t u(t)| ≤ e−2t u(t) + e−t u(t),
and the bound on the right has finite area; hence h6 (t) is absolutely integrable.
System h8 (t) = cos(ωo t)u(t) is not BIBO stable because cos(ωo t)u(t)
is not an absolutely integrable function; the area under | cos(ωo t)u(t)| =
| cos(ωo t)|u(t) is infinite. An example of a bounded input that will cause
an unbounded output is f(t) = e^{jωo t}, which produces the response
y(t) = H(ωo ) e^{jωo t},
where
H(ωo ) = ∫_{0}^{∞} cos(ωo t) e^{−jωo t} dt
does not converge; the integrand contains the constant term 1/2, so the integral grows without bound.
We close this section with several observations that reinforce and add to what
we have learned about BIBO stability. First, note that the above examples illustrate
that the test for stability is the absolute integrability of the impulse response (for h(t)
that are functions). The test is not the boundedness of the impulse response. This is
sometimes a point of confusion. Notice that both the aforementioned h3 (t) and h8 (t)
are bounded, and yet they correspond to unstable systems.
Second, it should be clear that BIBO stable systems always have a well-defined
frequency response, H (ω). This follows from Chapter 7, where we learned that abso-
lute integrability is a sufficient condition for the existence of the Fourier transform.
Third, there are some systems with a well defined frequency response that are
not BIBO stable. So, the existence of H (ω) is not sufficient to imply stability. For
example, h(t) = sinc(t) is not absolutely integrable, and so a system having this
impulse response is not BIBO stable. Yet, H(ω) is well defined; indeed, H(ω) = π rect(ω/2), which represents an ideal low-pass filter.3
Fourth, most systems that are not BIBO stable—including very important systems
such as ideal integrators (h(t) = u(t)) and differentiators (h(t) = δ′(t))—do not have
convergent H (ω). Unlike in the case of the ideal low-pass filter, Fourier representation
of such systems is difficult or impossible. The best tool for frequency-domain analysis
of unstable systems is the Laplace transform, which will be introduced in Chapter 11.
The Laplace transform also is very convenient for solving zero-input and initial-value
problems, as we shall see.
3 The input f (t) = sgn(sinc(t)) is an example of a bounded input that will cause the ideal low-pass
filter to have an unbounded output. Many other bounded inputs, such as rect(t), cos(t), etc., will produce
a bounded output.
Section 10.3 Causality and LTIC Systems 351
[Plots of example impulse responses: the absolutely integrable e−t u(t) and rect(t − 1), versus e−t and e2t u(t), whose areas under the magnitude are infinite.]
Finally, some important unstable systems, such as ideal low-pass filters, and ideal
integrators and differentiators, can be approximated as closely as we like by stable
systems. Chapter 12 introduces this topic of filter design.
10.3 Causality and LTIC Systems

An LTI system, stable or not, is said to be causal if its zero-state response h(t) ∗ f (t)
depends only on past and present, but not future, values of the input signal f (t).
Systems that are not causal are said to be noncausal. All practical analog LTI circuits
built in the lab are, of course, causal, because the existence of a noncausal circuit—that
is, one that produces an output before its input is applied—would be as impossible as
a football flying off the tee before the kicker kicked the ball.4
Writing the LTI zero-state response in terms of the convolution formula,
y(t) = ∫_{−∞}^{∞} h(τ)f(t − τ)dτ,
we see clearly that y(t1 ), the output at some fixed time t1 , can depend on f (t) for
t > t1 only if h(τ ) is nonzero for some negative values τ . For example, if h(−1)
is nonzero, then we see from the convolution formula that y(t1 ) will depend on
f (t1 − (−1)) = f (t1 + 1), that is, on a future value of f . Thus:
Causality criterion for LTI systems: An LTI system with impulse response function h(t) is causal if and only if h(t) = 0 for t < 0.
Clearly, then, the ideal low-pass system h(t) = sinc(t), just considered in the
previous section, is not a causal system (because sinc(t) is nonzero for t < 0) and
thus cannot be implemented by an LTI circuit in the lab, no matter how clever the
designer. Fortunately, there is no need for an exact implementation. In Chapter 12,
we will see how to design causal and realizable (as well as BIBO stable) alternatives
that will closely approximate the behavior of an ideal low-pass filter.
It is important to note that the causality criterion just stated, h(t) = 0 for t < 0,
applies only to systems whose impulse responses can be expressed in terms of
functions—for example, sinc(t), u(t), e−t u(t), etc. If we must write the impulse
response in terms of the impulse δ(t) and related distributions, then we should deter-
mine causality by directly examining the dependence of the zero-state output on the
system input.
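For impulse responses that are ordinary functions, the criterion can be checked on a sampled grid; a small helper sketch (numerical evidence only, not a proof):

```python
import numpy as np

def looks_causal(h, t, tol=1e-12):
    """Numerically test h(t) = 0 for all sampled t < 0 (ordinary functions only)."""
    return bool(np.all(np.abs(h(t[t < 0])) <= tol))

t = np.linspace(-5.0, 5.0, 2001)
u = lambda x: (x >= 0).astype(float)
sinc = lambda x: np.sinc(x / np.pi)        # sin(t)/t, matching the text's sinc(t)

causal_ok   = looks_causal(lambda x: np.exp(-x) * u(x), t)   # True
sinc_causal = looks_causal(sinc, t)                          # False: nonzero for t < 0
print(causal_ok, sinc_causal)
```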
Example 10.12
The zero-state response of an LTI system to arbitrary input f (t) is described
by
y(t) = f (t − 2).
Find the system impulse response h(t) and determine whether the system
is causal.
Solution Since f(t − 2) = δ(t − 2) ∗ f(t), the input–output formula can be written as
y(t) = δ(t − 2) ∗ f(t),
so the impulse response is h(t) = δ(t − 2). The output depends only on the input applied two time units earlier, and so the system is causal.
4 As described in Section 10.4, however, noncausal models are highly useful in practice, especially
for processing of spatial data (e.g., images) and in digital signal processing, where entire signals can be
prestored prior to processing.
Example 10.13
The zero-state response y(t) of an LTI system to a unit-step input u(t) is
the function
g(t) = rect(t).
Find the system impulse response h(t) and determine whether the system
is causal.
Solution Since
rect(t) = u(t + 1/2) − u(t − 1/2),
and because the impulse response h(t) of an LTI system is the derivative of
the unit-step response g(t), we have
h(t) = (d/dt) rect(t) = u′(t + 1/2) − u′(t − 1/2) = δ(t + 1/2) − δ(t − 1/2).
Thus, the system zero-state response to an arbitrary input f (t) is
y(t) = [δ(t + 1/2) − δ(t − 1/2)] ∗ f(t) = f(t + 1/2) − f(t − 1/2).
Clearly, this system is noncausal, because the system output y(t) depends on f(t + 1/2), representing an input half a time unit into the future.
Another way to see that this system is noncausal, without having to find h(t), is to note that the output rect(t) turns on at time t = −1/2, which is
earlier than t = 0 when the input u(t) turns on. No practical causal circuit
built in the lab can behave this way!
It should be clear from the foregoing examples that LTI systems with impulse
responses of the form δ(t − to ) are causal when to ≥ 0 and noncausal when to < 0. In
general, we will use the term causal signal to refer to signals that could be the impulse
response of a causal LTI system, and use the term LTIC to describe LTI systems
that are causal. For instance, δ(t), u(t − 1), e−t u(t), δ′(t − 2), and cos(2πt)u(t) are examples of causal signals that qualify as impulse responses of possible LTIC systems, whereas signals δ(t + 2), e−t , u(t + 1), and cos(t) are noncausal and cannot be impulse responses of LTIC systems. (See Table 10.2.)
Table 10.2: Examples of causal and noncausal signals.
Causal: δ(t − 1), u(t − 1), cos(2πt)u(t)
Noncausal: δ(t + 2), e−t , u(t + 1)
Example 10.14
Determine whether the following signal is causal:
Clearly, y(t) depends on values of f (t) up to 1 time unit into the future,
and thus the LTI system is not causal. Consequently, the given h(t) is not
causal.
Note that, while all practical LTI systems built in the lab will be causal, not all
causal lab systems are LTI. The concept of causality also applies to systems that
are nonlinear and/or time-varying. The following examples illustrate some of these
possibilities.
Example 10.15
A particular time-varying (and thus not LTI) system is described by the
input–output relation
Example 10.16
A system is described by the input–output relation
y(t) = f(t²).
Is this system causal? Evaluating the relation at t = 2, for instance, gives y(2) = f(4),
showing that there are times for which the output depends on future values of the input; the system is noncausal.
Example 10.17
A particular nonlinear system is described by the input–output relation
y(t) = f 2 (t + T ).
Example 10.18
What type of filter is implemented by an LTI system having the impulse
response
h(t) = sinc(t) cos(ωo t),
assuming ωo > 1? Discuss why this filter is impossible to build in the lab.
Solution According to Table 7.2
sinc(t) ↔ π rect(ω/2),
so use of the modulation property implies that the frequency response of
the given system is
H(ω) = (π/2) rect((ω − ωo )/2) + (π/2) rect((ω + ωo )/2).
Clearly, this frequency response describes an ideal band-pass filter with
a center frequency ωo and bandwidth 2. The ideal filter is impossible to
implement in the lab, because the system impulse response is noncausal.
Example 10.19
The input–output relation of a system is given as
y(t) = f (3t).
f1 (t) = f(t − to ).
The corresponding output is y1 (t) = f1 (3t) = f(3t − to ), whereas the to -delayed version of the original output is y(t − to ) = f(3(t − to )) = f(3t − 3to ). Since these differ,
the new output is not a to -delayed version of the original output and, therefore, the system is time varying.
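The same test, run numerically (with an assumed test input f(t) = e^{−t²} and delay to = 0.5), confirms that y(t) = f(3t) is time varying:

```python
import numpy as np

t = np.linspace(-2.0, 2.0, 401)
f = lambda x: np.exp(-x ** 2)     # arbitrary test input
to = 0.5                          # test delay

y  = lambda x: f(3 * x)           # original output y(t) = f(3t)
y1 = lambda x: f(3 * x - to)      # output due to delayed input f1(t) = f(t - to)

# Compare against the delayed original output y(t - to) = f(3t - 3to):
gap = np.max(np.abs(y1(t) - y(t - to)))
print(gap)                        # clearly nonzero: the system is time varying
```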
Section 10.5 Delay Lines 357
While our focus in the remaining chapters will be on LTIC systems, it should be noted
that noncausal LTI system models are important in practice. First, as we saw in the
previous section, certain ideal filtering operations (e.g., ideal low-pass) are noncausal.
We need our mathematical apparatus to be general enough to model these types of
filters, because we often wish to design real filters that approximate such ideal filters.
As indicated earlier, Chapter 12 will introduce this filter design problem.
A second use for noncausal models arises in the processing of spatial data where
one or more of the variables involves position. Such examples abound in signal
processing descriptions of imaging systems—for instance, cameras, telescopes, x-ray
computer tomography (CT), synthetic aperture radar (SAR), and radio astronomy. In
such systems the signal being imaged is a spatial quantity in two or three dimensions.
Often, it is convenient to define the origin at the center of the scene, for both the
input and output of the system. Doing so generally leads to a noncausal model for the
processing. For example, a simple noise-smoothing operation, where each point in
the output image is an average of the values surrounding that same point in the input
image, can be described by LTI filtering with rect(x) as the impulse response, which
is noncausal. Here, we use the variable x, rather than t, to denote position.
Finally, noncausal models routinely are applied in digital signal processing, where
an entire sampled signal may be prestored prior to processing. Depending on how the
time origin is specified for the prestored signal samples and the output signal samples,
a “current” output sample may depend on “future” input samples. For example, in a three-point smoother with
yn = (fn−1 + fn + fn+1 )/3,
yn depends on the future (already prestored) value fn+1 . There even are cases
with digital processing where signals are reversed and processed backwards. This is the epitome of noncausal processing!
10.5 Delay Lines

The system
y(t) = Kf(t − to )
is zero-state linear, time invariant, and BIBO stable. As we have seen in Chapter 8, systems having the frequency response
H(ω) = Ke^{−jωto}
perform amplitude scaling by K together with a time delay of to . Clearly, if to ≥ 0, then the output y(t) depends only on past or present values of f(t) and the system is LTIC.
The LTIC system just described can be considered a delay line having a delay to and gain K. A practical example of a delay-line system is the coaxial cable. The delay of a coaxial cable or any type of transmission-line cable can be calculated as to = L/v,
where L is the physical cable length and v is the signal propagation speed in the cable,
usually a number close to (but less than) the speed of light in free space, c = 3 × 108
m/s. If the cable is short, then to will be very small and the delay can be neglected in
applications where signal durations are larger than to . For large L, on the other hand,
delay can be an appreciable fraction of signal durations (e.g., to = 0.1 ms for a cable
of length 30 km) and may need to be factored into system calculations. In a lossless
cable (with some further conditions satisfied concerning impedance matching), the
amplitude scaling constant is K = 1. However, in practice, K < 1.
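The delay calculation to = L/v is simple arithmetic; a sketch reproducing the text's 30 km figure (taking v ≈ c, and an assumed gain K = 0.2 as in Example 10.20 below):

```python
import numpy as np

c = 3e8          # speed of light in free space, m/s
v = c            # propagation speed; real cables are somewhat slower than c
L = 30e3         # cable length, m
K = 0.2          # assumed amplitude scaling of the line

to = L / v       # delay in seconds
print(to)        # 1e-4 s, i.e., 0.1 ms, matching the text's 30 km example

# Delay-line model y(t) = K f(t - to) applied to a sampled unit step:
dt = 1e-5
t = np.arange(0.0, 1e-3, dt)
u = lambda x: (x >= 0.0).astype(float)
y = K * u(t - to)
```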
Example 10.20
The signal input to a coaxial line is f (t) = u(t). At the far end of the line,
an output y(t) = 0.2u(t − 10) is observed. What is the impulse response
h(t) of the system?
Solution From the given information, we deduce that the unit-step response of the system is
g(t) = 0.2u(t − 10),
so that the impulse response is h(t) = dg/dt = 0.2δ(t − 10).
Alternatively, the given information suggests that the coax has a 10-second delay and a gain of 0.2. Thus, we can construct the impulse response of the system as
h(t) = Kδ(t − to ) = 0.2δ(t − 10).
Electrical circuits containing finite-length transmission lines are said to be distributed circuits, as opposed to lumped-element circuits, which are composed of discrete elements, such as capacitors and resistors, with insignificantly small (in the sense of associated propagation time delays) physical dimensions. Transmission lines and
distributed circuits are studied in detail in courses on electromagnetics and RF circuits.
While techniques from elementary circuit analysis are helpful in these studies, trans-
mission lines do not behave like lumped-element circuits.
Exercises 359
EXERCISES
10.1 An LTI circuit has the frequency response H(ω) = 1/(1 + jω) + 1/(2 + jω). What is the system impulse response h(t) and what is the system response y(t) = h(t) ∗ f(t) to the input f(t) = e−t u(t)?
10.2 Find the impulse responses h(t) of the systems having the following frequency
responses:
(a) H(ω) = 1/(3 + jω).
(b) H(ω) = 1/(4 + jω)².
(c) H(ω) = jω/(5 + jω) = 1 − 5/(5 + jω).
(d) H(ω) = e^{−jω}/(1 + jω).
10.8 Determine whether the LTI systems with the following impulse response
functions are causal:
(a) h(t) = u(t − 1).
(b) h(t) = u(t + 1).
(c) h(t) = δ(t − 2) ∗ u(t + 1).
(d) h(t) = u(1 − t) − u(t).
(e) h(t) = u(−t) ∗ u(−t).
10.9 Determine whether the following LTIC systems are BIBO stable and explain
why or why not:
(a) h1 (t) = 5δ(t) + 2e−2t u(t) + 3te−2t u(t).
(b) h2 (t) = δ(t) + u(t).
(c) h3 (t) = δ′(t) + e−t u(t).
(d) h4 (t) = −2δ(t − 3) − te−5t u(t).
10.10 For each unstable system in Problem 10.9, provide an example of a bounded
input that will cause an unbounded output.
10.11 Consider the given zero-state input–output relations for a variety of systems.
In each case, determine whether the system is zero-state linear, time invariant,
and causal.
(a) y(t) = f (t − 1) + f (t + 1).
(b) y(t) = 5f (t) ∗ u(t).
(c) y(t) = δ(t − 4) ∗ f(t) − ∫_{−∞}^{t−2} f²(τ)dτ.
(d) y(t) = ∫_{−∞}^{t+2} f(τ)dτ.
(e) y(t) = ∫_{−∞}^{t−2} f(τ²)dτ.
(f) y(t) = f³(t − 1).
Chapter 11 Laplace Transform, Transfer Function, and LTIC System Response

Laplace transform and its inverse; partial fraction expansion; transfer function Ĥ(s); zero-state and general response of LTIC circuits and systems; cascade, parallel and feedback configurations

Consider applying an exponential input f(t) = e^{st} to an LTIC system having impulse response h(t). Then the zero-state response y(t) = h(t) ∗ f(t) can be calculated as

y(t) = h(t) ∗ e^{st} = ∫_{−∞}^{∞} h(τ)e^{s(t−τ)} dτ = e^{st} ∫_{−∞}^{∞} h(τ)e^{−sτ} dτ.

Since in LTIC systems h(t) is zero for t < 0, it should be possible to move the lower integration limit in the above formula to 0; however, in anticipation of a possible h(t) that includes δ(t) we will move the limit to 0−. Thus, we obtain a rule

e^{st} −→ LTIC −→ Ĥ(s)e^{st},

where

Ĥ(s) ≡ ∫_{0−}^{∞} h(t)e^{−st} dt

is known as both the Laplace transform of h(t) and the transfer function of the system with impulse response h(t).
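The eigenfunction rule above can be checked numerically. The sketch below is our addition, not part of the text: it discretizes the convolution integral for an assumed example h(t) = e^{−2t}u(t), whose transfer function is Ĥ(s) = 1/(s + 2), and confirms that the response to e^{st} is Ĥ(s)e^{st}.

```python
# Numerical check of e^{st} -> LTIC -> H(s)e^{st} for the (assumed) example
# h(t) = e^{-2t} u(t), whose transfer function is H(s) = 1/(s + 2).
import numpy as np

s_val = -0.5                      # any s with Re{s} > -2 converges here
dt = 1e-3
tau = np.arange(0.0, 20.0, dt)    # integration grid for the convolution
h = np.exp(-2.0 * tau)            # h(tau) for tau >= 0

t0 = 1.0                          # evaluate the output at one time instant
y_num = np.sum(h * np.exp(s_val * (t0 - tau))) * dt   # y(t0) = integral of h(tau) e^{s(t0-tau)}
y_exact = (1.0 / (s_val + 2.0)) * np.exp(s_val * t0)  # H(s) e^{s t0}
```

The two values agree to within the discretization error of the Riemann sum.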
The above relations hold whether s is real or complex, so long as the Laplace transform integral defining Ĥ(s) converges. In particular, setting s = jω gives

e^{jωt} −→ LTIC −→ Ĥ(jω)e^{jωt},

where

Ĥ(jω) = ∫_{0−}^{∞} h(t)e^{−jωt} dt = H(ω)

is both the system frequency response and the Fourier transform of the impulse response h(t). Clearly, then, the frequency response H(ω) and transfer function Ĥ(s) of LTIC systems are related as

H(ω) = Ĥ(jω).

In general, s = σ + jω is complex, and then the Laplace transform integral is convergent for any real σ for which the product h(t)e^{−σt} is absolutely integrable. For instance, h(t) = e^{t}u(t) represents an unstable system whose frequency response H(ω) is undefined. But, because h(t)e^{−σt} is absolutely integrable for σ > 1, a convergent Laplace transform Ĥ(s) of h(t) = e^{t}u(t) exists for all s = σ + jω satisfying σ > 1.
It should be apparent from the above discussion that the LTIC system transfer function Ĥ(s) is a generalization of the frequency response H(ω) that remains valid for many unstable systems. In this chapter we will develop an Ĥ(s)-based frequency-domain method applicable to nearly all LTIC systems that we will encounter.
Section 11.1 focuses on the Laplace transform and its basic properties. We shall see that the Laplace transform of the zero-state response y(t) = h(t) ∗ f(t) of LTIC systems can be expressed as

Ŷ(s) = Ĥ(s)F̂(s)

if F̂(s) denotes the Laplace transform of a causal input signal f(t). Since causal signals and their Laplace transforms form unique pairs (like Fourier transform pairs), a causal zero-state response y(t) can be uniquely inferred from its Laplace transform Ŷ(s), as described in Section 11.2. In Section 11.3 we will learn how to determine Ĥ(s) and Ŷ(s) in LTIC circuits using circuit analysis methods similar to the phasor method of Chapters 4 and 5, and then infer h(t) and y(t) from Ĥ(s) and Ŷ(s). Section 11.4 examines the general response of nth-order LTIC circuits and systems in terms of Ĥ(s), including their zero-input response to initial conditions. Finally, in Section 11.5 we consider systems that are composed of interconnected subsystems, and determine how the transfer function of the overall system is related to the transfer functions of its subsystems.
Section 11.1 Laplace Transform and its Properties

The Laplace transform¹

Ĥ(s) ≡ ∫_{0−}^{∞} h(t)e^{−st} dt

was defined above in terms of the variable

s ≡ σ + jω,

which is a complex variable with real part σ and imaginary part ω. Because a complex variable is a pair of real variables, in this case s = (σ, ω), the Laplace transform Ĥ(s) is a function of two real variables. For conciseness, we choose to write Ĥ(s) rather than Ĥ(σ, ω).
Generally, the Laplace transform integral converges² for some values of s and not for others. The region of the complex number plane containing all

s = (σ, ω) = σ + jω

for which the Laplace transform integral converges is said to be the region of convergence (ROC) of the Laplace transform. We refer to the entire complex number plane, containing all possible values of s = (σ, ω) = σ + jω, as the s-plane.
¹ Ĥ(s) defined above also is known as the one-sided Laplace transform of h(t), to distinguish it from a two-sided transform defined as ∫_{−∞}^{∞} h(t)e^{−st} dt. In these notes, we will use only the one-sided transform. Thus, there will be no occasion for ambiguity when we use the term Laplace transform to refer to its one-sided version.
² Meaning that the integral ∫_{0−}^{T} h(t)e^{−st} dt approaches a limit as T → ∞.
Figure 11.1 (a) The ROC (shaded region) and pole location (×) of the Laplace transform expression Ĥ(s) = 1/(s − 1) on the complex s-plane, and (b) a surface plot of |Ĥ(s)| = |1/(s − 1)| = 1/|σ + jω − 1| above the s-plane.
The horizontal (so-called real) axis of the s-plane is labeled with σ and the vertical (so-called imaginary) axis with ω, as shown in Figure 11.1a. It is important to realize that this complex number plane is just the usual two-dimensional plane, with a real variable corresponding to each axis. The following example computes a Laplace transform having the ROC shown as the shaded area in Figure 11.1a.
Example 11.1
Determine the Laplace transform Ĥ(s) of h(t) = e^{t}u(t) and its ROC.

Solution The Laplace transform of

h(t) = e^{t}u(t)

is

Ĥ(s) = ∫_{0−}^{∞} e^{t}u(t)e^{−st} dt = ∫_{0}^{∞} e^{(1−s)t} dt = [e^{(1−s)t}/(1 − s)]_{t=0}^{∞} = 1/(s − 1),

where the evaluation at the upper limit t → ∞ vanishes only if

σ = Re{s} > 1.

Hence the ROC of Ĥ(s) = 1/(s − 1) is the shaded region {s : σ > 1} in Figure 11.1a, and Figure 11.1b displays |Ĥ(s)| above the s-plane in the form of a surface plot—note that the surface resembles a circus tent supported by a single pole erected above the s-plane at the location × marked in Figure 11.1a.³
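Example 11.1 can also be reproduced symbolically. The sketch below is our addition (the text works by hand); it uses the sympy library, whose `laplace_transform` also reports the abscissa of convergence that defines the ROC.

```python
# Symbolic check of Example 11.1: Laplace transform of h(t) = e^t u(t).
import sympy as sp

t, s = sp.symbols('t s')
H, a, _ = sp.laplace_transform(sp.exp(t), t, s)  # one-sided transform
# H is the transform expression; the ROC is Re{s} > a
```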
Poles of the Laplace transform

Locations on the s-plane—or the values of s—where the magnitude of the expression for Ĥ(s) goes to infinity are called poles of the Laplace transform Ĥ(s).

As we shall discuss below, the ROC always has at least one pole on its boundary.
Example 11.2
Determine the Laplace transform F̂ (s) of signal f (t) = e−2t u(t) − e−t u(t).
Solution Proceeding as in Example 11.1,

F̂(s) = ∫_{0−}^{∞} (e^{−2t} − e^{−t})u(t)e^{−st} dt = ∫_{0}^{∞} (e^{−(2+s)t} − e^{−(1+s)t}) dt
     = [e^{−(2+s)t}/(−(2 + s))]_{t=0}^{∞} − [e^{−(1+s)t}/(−(1 + s))]_{t=0}^{∞} = 1/(s + 2) − 1/(s + 1) = −1/((s + 2)(s + 1))

under the assumptions σ = Re{s} > −2 and σ = Re{s} > −1, dictated by convergence of the two terms above. The first condition is automatically
³ Outside the ROC, {s : σ > 1}, this surface represents the analytic continuation of the Laplace transform integral, in analogy with 1/(1 − s) representing an extension of the infinite series 1 + s + s² + s³ + · · · beyond its region of convergence, |s| < 1, as an analytic function. The concept of analytic continuation arises in the theory of functions, where it is shown that analytic functions known over a finite region of the complex plane can be uniquely extended across the rest of the plane by a Taylor series expansion process.
Figure 11.2 (a) The ROC (shaded region) and pole locations (×) of F̂(s) = −1/((s + 2)(s + 1)), and (b) a surface plot of |F̂(s)|.
satisfied if the second is satisfied; hence the ROC consists of all complex s such that σ = Re{s} > −1, which is the shaded region shown in Figure 11.2a.

The poles of

F̂(s) = −1/((s + 2)(s + 1))

are evident in the surface plot of |F̂(s)| shown in Figure 11.2b, with poles located at s = −2 and s = −1. Notice that the ROC of F̂(s) lies to the right of the rightmost pole, located at s = −1.
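A quick symbolic confirmation of Example 11.2 (our addition, using sympy):

```python
# Laplace transform of f(t) = e^{-2t} u(t) - e^{-t} u(t), as in Example 11.2.
import sympy as sp

t, s = sp.symbols('t s')
F = sp.laplace_transform(sp.exp(-2*t) - sp.exp(-t), t, s, noconds=True)
F_factored = sp.factor(sp.simplify(F))   # expect -1/((s + 1)*(s + 2))
```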
The ROC and pole locations depicted in Figures 11.1 and 11.2 are consistent with the following general rule:

ROC is all s to the right of the rightmost pole

The ROC of a Laplace transform coincides with the portion of the s-plane to the right of the rightmost pole (not counting a possible pole at s = ∞).

As illustrated in Example 11.2, the Laplace transform of a function

f(t) = f1(t) + f2(t) + · · ·

will be

F̂(s) = F̂1(s) + F̂2(s) + · · · ,

where F̂n(s) is the Laplace transform of fn(t), and, furthermore, the ROC of F̂(s) will be the intersection of the ROC's of all of the F̂n(s) components. This is the reason underlying the general rule that the ROC of a Laplace transform F̂(s) is the region to the right of its rightmost pole, not counting a possible pole at s = ∞. A pole at infinity arises if F̂(s) contains an additive term proportional to s (or any increasing function of s). But, the ROC of F̂n(s) = s is the entire s-plane (as shown in Example 11.4 below), and so a pole at s = ∞ is not counted in the rule just explained.
Example 11.3
Determine the Laplace transform Ĥ (s) of signal h(t) = δ(t).
Solution In this case, using the sifting property of the impulse,
Ĥ(s) = ∫_{0−}^{∞} δ(t)e^{−st} dt = e^{−s·0} = 1.
Example 11.4
Determine the Laplace transform Ĥ(s) of h(t) = δ′(t).

Solution Using the derivative of the sifting property of the impulse (i.e., ∫ δ′(t)φ(t) dt = −φ′(0)),

Ĥ(s) = ∫_{0−}^{∞} δ′(t)e^{−st} dt = −(d/dt)e^{−st}|_{t=0} = s e^{−st}|_{t=0} = s,

which shows that the Laplace transform of δ′(t) is simply s (as given in Table 11.1). Since we did not invoke any constraint on s while calculating the Laplace transform, the ROC is the entire s-plane, even though there is a pole at infinity (i.e., at s = ∞).
Example 11.5
Given that the Laplace transform of h(t) = e^{t}u(t) is Ĥ(s) = 1/(s − 1), show that the Laplace transform of f(t) = te^{t}u(t) is F̂(s) = 1/(s − 1)².
1 δ(t) ↔ 1
2 e^{pt}u(t) ↔ 1/(s − p)
3 te^{pt}u(t) ↔ 1/(s − p)²
4 t^n e^{pt}u(t) ↔ n!/(s − p)^{n+1}
5 cos(ωo t)u(t) ↔ s/(s² + ωo²)
6 e^{−αt}cos(ωd t)u(t) ↔ (s + α)/((s + α)² + ωd²)
7 δ′(t) ↔ s
8 u(t) ↔ 1/s
9 tu(t) ↔ 1/s²
10 t^n u(t) ↔ n!/s^{n+1}
11 sin(ωo t)u(t) ↔ ωo/(s² + ωo²)
12 e^{−αt}sin(ωd t)u(t) ↔ ωd/((s + α)² + ωd²)

Table 11.1 Laplace transform pairs h(t) ↔ Ĥ(s) involving frequently encountered causal signals—α, ωo, and ωd stand for arbitrary real constants, n for nonnegative integers, and p denotes an arbitrary complex constant.
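Entries of Table 11.1 are easy to spot-check symbolically; the sketch below (our addition) verifies pairs 4 and 5 for the sample values n = 2, p = −3, and ωo = 2.

```python
# Spot-check of two Table 11.1 pairs with concrete parameter values.
import sympy as sp

t, s = sp.symbols('t s')
# pair 4 with n = 2, p = -3:  t^2 e^{-3t} u(t)  <->  2!/(s + 3)^3
F4 = sp.laplace_transform(t**2 * sp.exp(-3*t), t, s, noconds=True)
# pair 5 with wo = 2:  cos(2t) u(t)  <->  s/(s^2 + 4)
F5 = sp.laplace_transform(sp.cos(2*t), t, s, noconds=True)
```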
Solution Start with

∫_{0}^{∞} e^{t}e^{−st} dt = 1/(s − 1),

which holds for {s : σ > 1}. Taking the derivative of the above expression with respect to s, we find that

(d/ds)∫_{0}^{∞} e^{t}e^{−st} dt = −∫_{0}^{∞} te^{t}e^{−st} dt = (d/ds)[1/(s − 1)] = −1/(s − 1)²,

implying that the Laplace transform of f(t) = te^{t}u(t) is F̂(s) = 1/(s − 1)². The ROC is {s : σ > 1}, since our calculation has not required any additional constraints on s.
The Laplace transforms calculated in the examples so far are all special cases from a list of important Laplace transform pairs shown in Table 11.1. Notice that the list contains only causal signals, and in each case the ROC can be deduced from corresponding pole locations, which can, in turn, be directly inferred from the form of the Laplace transform. For example,

cos(ωo t)u(t) ↔ s/(s² + ωo²)

has a pair of poles at s = ±jωo on the vertical axis of the s-plane, and, as a consequence, the ROC of this particular Laplace transform coincides with the right half of the s-plane, which we will refer to as the RHP (right half-plane).
Laplace transforms of absolutely integrable signals have only LHP poles

Also, notice that the poles of Laplace transforms of absolutely integrable signals included in Table 11.1 are all confined to the left half-plane (LHP). For example, the pole s = p for e^{pt}u(t) is in the LHP if and only if p < 0 and the signal is absolutely integrable. The same is true with the pole s = p for te^{pt}u(t), etc. This detail is, of course, not a coincidence. It is true more generally, because if a signal h(t) is absolutely integrable and causal, then its Fourier transform integral is guaranteed to converge to a bounded H(ω) = Ĥ(jω)—this requires that all poles of Ĥ(s) (if any) be located within the LHP (as in Figure 11.2), so that the vertical axis of the s-plane, where s equals jω, is contained within the ROC.⁴
Remembering that BIBO stable systems must have absolutely integrable impulse response functions, it is no surprise that the BIBO stability criterion from Chapter 10 can be restated as:

LTIC systems are BIBO stable if and only if all poles are in the LHP

An LTIC system h(t) ↔ Ĥ(s) is BIBO stable if and only if its transfer function Ĥ(s) has all of its poles in the LHP.

This alternative test for BIBO stability holds for any LTIC system with an Ĥ(s) that is a rational function written in minimal form (polynomial in s divided by a polynomial in s, where all possible cancellations of terms between the numerator and denominator have been performed), giving rise to poles at distinct locations, as in the examples above. A proof of this alternative stability test is accomplished by simply writing Ĥ(s) in a partial fraction expansion via the method described in Section 11.2.
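The pole-location test lends itself to a direct numerical implementation. The sketch below is our addition (the function name and coefficient convention are ours): it declares a rational LTIC system BIBO stable when every denominator root lies in the LHP.

```python
# BIBO stability via denominator roots: stable iff all poles satisfy Re{s} < 0.
import numpy as np

def is_bibo_stable(den_coeffs):
    """den_coeffs lists the denominator polynomial, highest order first,
    e.g. s^2 + 3s + 2 -> [1, 3, 2]."""
    return bool(np.all(np.roots(den_coeffs).real < 0))

stable_c = is_bibo_stable([1, 2, 2])      # (s+1)^2 + 1: poles at -1 +/- j
unstable_b = is_bibo_stable([1, -1, -2])  # (s+1)(s-2): pole at s = 2
```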
Example 11.6
Using Table 11.1 and the version of the BIBO stability criterion stated above, determine whether the LTIC systems with the following impulse response functions are BIBO stable:

ha(t) = u(t)
hb(t) = e^{−t}u(t) + e^{2t}u(t)
hc(t) = e^{−t}cos(t)u(t)
hd(t) = sin(2t)u(t)
he(t) = e^{−t}u(t) + hc(t)
hf(t) = δ′(t)

⁴ A pole at s = ∞ is no exception to this rule, because the corresponding Ĥ(s) = s does not lead to a bounded H(ω) = Ĥ(jω) as required by an absolutely integrable h(t).
Solution Using Table 11.1, we find that the transfer function that corresponds to impulse response ha(t) = u(t) is

Ĥa(s) = 1/s.

This transfer function has a pole at s = 0, which is just outside the LHP. Thus, the system is not BIBO stable, which of course is consistent with the fact that ha(t) = u(t) is not absolutely integrable.

For

hb(t) = e^{−t}u(t) + e^{2t}u(t)

we have

Ĥb(s) = 1/(s + 1) + 1/(s − 2) = (2s − 1)/((s + 1)(s − 2)).

This transfer function has two poles, one at s = −1 within the LHP, and another at s = 2 outside the LHP. Therefore, the system is not BIBO stable.
The system with hc(t) = e^{−t}cos(t)u(t) has the transfer function

Ĥc(s) = (s + 1)/((s + 1)² + 1) = (s + 1)/((s + 1 + j)(s + 1 − j)),

whose poles, located at s = −1 ± j, are within the LHP; hence the system is BIBO stable. The pole locations and surface plot of |Ĥc(s)| for this BIBO stable system are shown in Figures 11.3a and 11.3b.

"Zero" of a transfer function

We refer to locations where Ĥc(s) = 0 as "zeros" of the transfer function; Figure 11.3a shows an "O" that marks the location of the zero of Ĥc(s). Finally, Figure 11.3c shows a plot of |Ĥc(jω)| = |Hc(ω)| as a function of ω, which is the magnitude of the frequency response of the system.

The poles of

Ĥd(s) = 2/(s² + 4) = 2/((s + j2)(s − j2))

are at s = ±j2, just outside the LHP; therefore the system is not BIBO stable.
Figure 11.3 (a) The ROC (shaded region), pole locations (×), and the zero location (O) of Ĥc(s) = (s + 1)/((s + 1 + j)(s + 1 − j)), (b) a surface plot of |Ĥc(s)|, and (c) a line plot of |Ĥc(jω)| versus ω, representing the magnitude of the frequency response of system Ĥc(s).
All three poles of

Ĥe(s) = 1/(s + 1) + Ĥc(s),

at s = −1 and s = −1 ± j, are within the LHP. Therefore the system is BIBO stable.
Finally, for

hf(t) = δ′(t)

we have

Ĥf(s) = s

(see Example 11.4). Since |Ĥf(s)| → ∞ as |s| → ∞, this transfer function has poles at infinities to the right and left of the s-plane. A pole at s = +∞ obviously is outside the LHP, and hence the system should not be BIBO stable. This conclusion checks with our earlier observation that a differentiator, described by an impulse response δ′(t), cannot be BIBO stable.
Hidden poles

Poles at infinities, as seen in the previous and upcoming examples, are known as hidden poles.
Example 11.7
Determine the Laplace transform Q̂(s) of the causal signal

q(t) = rect(t − 1/2) = u(t) − u(t − 1).

Solution

Q̂(s) = ∫_{0−}^{∞} q(t)e^{−st} dt = ∫_{0}^{1} e^{−st} dt = [e^{−st}/(−s)]_{t=0}^{1} = (e^{−s·1} − e^{−s·0})/(−s) = (1 − e^{−s})/s,

which converges for every finite s, except in the limit as σ = Re{s} → −∞. Hence, in this case the ROC is described by {s : σ = Re{s} > −∞}.

Note that Q̂(s) has a hidden pole at s = −∞ in the LHP. Also note that s = 0 is not a pole, since, using l'Hospital's rule,

lim_{s→0} Q̂(s) = lim_{s→0} [(d/ds)(1 − e^{−s})] / [(d/ds)s] = lim_{s→0} e^{−s}/1 = 1

is finite.
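Example 11.7 checks out symbolically as well (our addition, using sympy), including the observation that s = 0 is not a pole:

```python
# Laplace transform of q(t) = u(t) - u(t - 1) and its finite limit at s = 0.
import sympy as sp

t, s = sp.symbols('t s')
Q = sp.laplace_transform(sp.Heaviside(t) - sp.Heaviside(t - 1), t, s,
                         noconds=True)
Q_at_zero = sp.limit((1 - sp.exp(-s))/s, s, 0)  # l'Hospital, done by sympy
```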
Since the Laplace transform operation on a signal f(t) ignores the portion of f(t) for t < 0, only causal signals f(t) can be expected to form unique transform pair relations with F̂(s), such as those listed in Table 11.1. In fact, if f(t) is not causal, then its Laplace transform will match the Laplace transform of the causal signal f(t)u(t). For example, the noncausal signal

f(t) = e^{t}u(t + 2),

shown in Figure 11.4a, shares the same Laplace transform, 1/(s − 1), with the causal signal

g(t) = f(t)u(t) = e^{t}u(t),

shown in Figure 11.4b. Likewise, the Laplace transform of the noncausal f(t) = δ(t + 1) + 2δ(t) is the same as the Laplace transform of the causal g(t) = 2δ(t), namely 2.
Figure 11.4 (a) Noncausal f(t) = e^{t}u(t + 2) and (b) causal g(t) = f(t)u(t).
Laplace transforms of noncausal signals

We shall indicate Laplace transform pair relationships involving causal signals using double-headed arrows, as in

e^{t}u(t) ↔ 1/(s − 1),

and use single-headed arrows to indicate the Laplace transform of noncausal signals, as in

e^{t}u(t + 1) → 1/(s − 1).

Keep in mind the distinction between → and ↔ in reading and interpreting Table 11.2, which lists some of the general properties of the Laplace transform. Properties given in terms of → are applicable to both causal and noncausal signals, while those given in terms of ↔ apply only to causal signals. For instance, the time-delay property (item 4 in Table 11.2) is stated using ↔, and therefore it applies only to causal f(t). The next example illustrates use of the time-delay property when calculating the Laplace transform of a delayed causal signal.
Example 11.8
Given that (according to Table 11.1)

f(t) = tu(t) ↔ F̂(s) = 1/s²,

find the Laplace transform P̂(s) of

p(t) = (t − 1)u(t − 1).

Signals f(t) and p(t) are shown in Figures 11.5a and 11.5b, respectively.

Solution We first note that p(t) = f(t − 1) is a delayed version of the causal f(t), with a positive delay of to = 1 s (as seen in Figure 11.5). Thus, the time-delay property given in Table 11.2 is applicable, and

P̂(s) = F̂(s)e^{−sto} = e^{−s}/s².
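The time-delay calculation of Example 11.8 can be confirmed with sympy (our addition):

```python
# Laplace transform of the delayed ramp p(t) = (t - 1) u(t - 1).
import sympy as sp

t, s = sp.symbols('t s')
P = sp.laplace_transform((t - 1)*sp.Heaviside(t - 1), t, s, noconds=True)
```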
Table 11.2 Important properties and definitions for the one-sided Laplace transform. Properties marked by * in the first column hold only for causal signals.
Figure 11.5 (a) Causal ramp function f(t) = tu(t), (b) delayed ramp p(t) = (t − 1)u(t − 1), and (c) noncausal q(t) = (t + 1)u(t + 1).
Example 11.9
Determine the Laplace transform Q̂(s) of the noncausal signal

q(t) = (t + 1)u(t + 1)

shown in Figure 11.5c.

Solution Since q(t) is noncausal, its Laplace transform matches that of the causal signal q(t)u(t) = (t + 1)u(t) = tu(t) + u(t). Using Table 11.1 as well as the addition property from Table 11.2, we find that

q(t)u(t) ↔ Q̂(s) = 1/s² + 1/s.
The previous two examples illustrated how the properties in Table 11.2 can be
used to deduce new Laplace transforms from ones already known. Here is another
example of a similar type.
Example 11.10
Using Table 11.2, confirm that

f(t) = e^{−αt}cos(ωd t)u(t) ↔ F̂(s) = (s + α)/((s + α)² + ωd²).
Example 11.11
Using the time-delay property, determine the Laplace transform of rect(t − 1/2).

Solution Since

rect(t − 1/2) = u(t) − u(t − 1),

where the second term is a delayed version of the causal first term, we have

u(t) − u(t − 1) ↔ 1/s − e^{−s}/s.

Thus, it follows that, in agreement with Example 11.7,

rect(t − 1/2) ↔ (1 − e^{−s})/s.
Example 11.12
Using the frequency-derivative property, show that

e^{pt}u(t) ↔ 1/(s − p)

(from Table 11.1) implies

te^{pt}u(t) ↔ 1/(s − p)².

Solution The frequency-derivative property associates multiplication by t in the time domain with −d/ds in the s-domain, and

−(d/ds)[1/(s − p)] = −(d/ds)(s − p)^{−1} = 1/(s − p)².

Hence,

te^{pt}u(t) ↔ 1/(s − p)².
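The frequency-derivative step in Example 11.12 is a one-line symbolic check (our addition):

```python
# -d/ds of 1/(s - p) equals 1/(s - p)^2, as used in Example 11.12.
import sympy as sp

s, p = sp.symbols('s p')
lhs = -sp.diff(1/(s - p), s)
```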
Verification of the time-derivative property: Integrating by parts in the Laplace transform of df/dt gives

df/dt ≡ f′(t) → sF̂(s) − f(0−) + lim_{t→∞} f(t)e^{−st}.

Now,

lim_{t→∞} f(t)e^{−st} = 0

for all values of s in the ROC for F̂(s); otherwise the Laplace integral F̂(s) cannot converge. Hence, for s in the ROC of F̂(s),

f′(t) → sF̂(s) − f(0−),

and by induction

f^{(n)}(t) → s^n F̂(s) − s^{n−1}f(0−) − s^{n−2}f′(0−) − · · · − f^{(n−1)}(0−).

For causal f(t), f^{(n)}(t) ↔ s^n F̂(s)

Note that for causal f(t), we have the simpler time-derivative rule

f^{(n)}(t) → s^n F̂(s) for all n ≥ 0.
Example 11.13
Given that

f(t) = e^{2t} → F̂(s) = 1/(s − 2),

use the time-derivative property to determine the Laplace transforms of

df/dt = 2e^{2t}

and

d²f/dt² = 4e^{2t}.

Note that f(t) is a noncausal signal.

Solution Since

F̂(s) = 1/(s − 2)

and

f(0−) = e^{2·0−} = e^0 = 1,

we have, using the formula sF̂(s) − f(0−) for the Laplace transform of the derivative of f(t),

(d/dt)e^{2t} = 2e^{2t} → s/(s − 2) − 1 = (s − (s − 2))/(s − 2) = 2/(s − 2).

Next, we notice that

f′(0−) = 2e^{2t}|_{t=0−} = 2.

Hence, using the formula s²F̂(s) − sf(0−) − f′(0−) for the Laplace transform of the second derivative of f(t), we have

(d²/dt²)e^{2t} = 4e^{2t} → s²/(s − 2) − s · 1 − 2 = (s² − s(s − 2) − 2(s − 2))/(s − 2) = 4/(s − 2).
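The two algebraic reductions in Example 11.13 can be verified symbolically (our addition):

```python
# Algebra of Example 11.13: sF(s) - f(0-) and s^2 F(s) - s f(0-) - f'(0-).
import sympy as sp

s = sp.symbols('s')
first = sp.simplify(s/(s - 2) - 1 - 2/(s - 2))          # expect 0
second = sp.simplify(s**2/(s - 2) - s - 2 - 4/(s - 2))  # expect 0
```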
The convolution property applies to causal signals

Verification of the convolution property: Table 11.2 states that h(t) ∗ f(t) ↔ Ĥ(s)F̂(s) for causal h(t) and f(t). To prove this, write the Laplace transform of the convolution of h(t) and f(t) as

∫_{t=0−}^{∞} {h(t) ∗ f(t)}e^{−st} dt = ∫_{t=0−}^{∞} {∫_{τ=−∞}^{∞} h(τ)f(t − τ) dτ}e^{−st} dt
  = ∫_{τ=−∞}^{∞} h(τ){∫_{t=0−}^{∞} f(t − τ)e^{−st} dt} dτ
  = ∫_{τ=0−}^{∞} h(τ){∫_{t=0−}^{∞} f(t − τ)e^{−st} dt} dτ
  = ∫_{τ=0−}^{∞} h(τ)e^{−sτ}F̂(s) dτ
  = F̂(s) ∫_{τ=0−}^{∞} h(τ)e^{−sτ} dτ = F̂(s)Ĥ(s),

which proves the result. Notice that the change of the bottom integration limit in line 3 from −∞ to 0− requires that h(t) be causal. Also, use of the time-shift property between lines 3 and 4 requires that f(t) be causal and τ ≥ 0.
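The convolution property can also be checked numerically; the sketch below (our addition) convolves e^{−t}u(t) with itself on a grid and compares against te^{−t}u(t), the inverse transform of 1/(s + 1)².

```python
# Numerical check: (e^{-t} u) * (e^{-t} u) is approximately t e^{-t} u(t).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
f = np.exp(-t)                            # samples of e^{-t} u(t)
y = np.convolve(f, f)[:t.size] * dt       # discretized convolution integral
err = np.max(np.abs(y - t*np.exp(-t)))    # discretization error only
```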
Example 11.14
Given that

f(t) = e^{−t}u(t) ↔ F̂(s) = 1/(s + 1),

determine y(t) = f(t) ∗ f(t) using the convolution property.

Solution By the convolution property,

Ŷ(s) = F̂(s)F̂(s) = 1/(s + 1)².

Since, according to Table 11.1,

te^{−t}u(t) ↔ 1/(s + 1)²,

it follows that y(t) = te^{−t}u(t).
Example 11.15
If f(t) = e^{−t}, can we take advantage of the time-convolution property to calculate y(t) ≡ f(t) ∗ f(t)?

Solution Because the given f(t) is not causal, the answer is "no"—in particular, y(t) is not the inverse transform of F̂(s)F̂(s) = 1/(s + 1)². Try computing the convolution e^{−t} ∗ e^{−t} directly in the time domain to see that it does not equal either te^{−t} or te^{−t}u(t).
Section 11.2 Inverse Laplace Transform and PFE

Causal signals and their Laplace transforms form unique pairs, just like Fourier transform pairs familiar from earlier chapters. Hence, we can associate with each Laplace transform Ĥ(s) a unique causal signal h(t) such that

h(t) ↔ Ĥ(s),

where h(t) is said to be the inverse Laplace transform of Ĥ(s). For instance, the inverse Laplace transform⁵ of Ĥ(s) = 1/(s + 1)² is the causal signal h(t) = te^{−t}u(t).

Inverse Laplace transforms for elementary cases, such as 1/s^n, 1/(s − p), 1/(s − p)², etc., can be directly identified by using the pairs in Table 11.1. More complicated cases call for the use of systematic algebraic procedures discussed in this section, as well as properties in Table 11.2.
Rational form

The most important class of Laplace transforms encountered in LTIC system theory is the ratio of polynomials. In fact, as we will see in Section 11.3, transfer functions of all lumped-element LTIC circuits have this form, namely

Ĥ(s) = B(s)/P(s),

where both B(s) and P(s) denote polynomials in s. As indicated earlier, such transfer functions and/or Laplace transforms are said to be rational functions or to have rational form. If the denominator polynomial P(s) has degree n ≥ 1, then the rational function can be written as

Ĥ(s) = B(s)/((s − p1)(s − p2) · · · (s − pn)).
⁵ Uniqueness of and a formula for the inverse Laplace transform for causal signals: The Laplace transform Ĥ(s) = Ĥ(σ + jω) of a causal signal h(t) is also the Fourier transform of the causal signal e^{−σt}h(t), with σ = Re{s} selected so that s is in the ROC of Ĥ(s). Thus, the uniqueness of the inverse Fourier transform

e^{−σt}h(t) = (1/2π) ∫_{−∞}^{∞} Ĥ(σ + jω)e^{jωt} dω

implies that

h(t) = (1/2π) ∫_{−∞}^{∞} Ĥ(σ + jω)e^{(σ+jω)t} dω,

which is a line integral in the complex plane, within the ROC, along a vertical line intersecting the horizontal axis at Re{s} = σ. Although this formula always is available to us, we generally will strive for simpler methods of computing inverse Laplace transforms.
Repeated poles: s/(s + 1)² = 1/(s + 1) − 1/(s + 1)² ↔ (e^{−t} − te^{−t})u(t)
Complex conjugate pair: s/(s² + 1) = s/((s + j)(s − j)) = (1/2)/(s + j) + (1/2)/(s − j) ↔ (1/2)(e^{−jt} + e^{jt})u(t)
Mixed: (s + 1)/(s³ − s²) = (s + 1)/(s²(s − 1)) = −2/s − 1/s² + 2/(s − 1) ↔ (−2 − t + 2e^{t})u(t)

Table 11.3 Four examples of proper rational expressions with different types of poles are shown in the second column. In proper rational form, the degree of the denominator polynomial exceeds the degree of the numerator polynomial. The third column of the table shows the same proper rational-form expressions on the left rearranged as a weighted sum of elementary terms, known as a partial fraction expansion (PFE). Finally, the last column shows the inverse Laplace transform in each case.
The second column in Table 11.3 lists a few examples of such rational Laplace transforms, where the poles p1, p2, · · ·, pn correspond to the roots of the denominator polynomial P(s). In the following discussion we will assume that the degree of polynomial P(s) is larger than the degree of B(s)—just like in the examples shown in Table 11.3—so that there are no poles⁶ of Ĥ(s) at infinity. When that condition holds, a rational form is said to be proper.

Proper rational form

Our strategy for calculating inverse Laplace transforms of proper rational functions will be to rewrite the Laplace transform as a sum of simple terms, called a partial fraction expansion (PFE), and to then identify the inverse Laplace transform of each simple term by inspection. Table 11.3 illustrates the strategy, where the third column shows the Laplace transforms given on the left rearranged as PFEs (i.e., as weighted sums of elementary terms 1/(s − p), 1/(s − p)²). It then becomes an easy matter to write h(t) in the last column. The subsections below explain how to rewrite a rational function as a PFE.
Distinct poles: When the poles p1, p2, . . . , pn are distinct, the PFE has the form

B(s)/((s − p1)(s − p2) · · · (s − pn)) = K1/(s − p1) + K2/(s − p2) + · · · + Kn/(s − pn).

This can be done most easily by repeating the following procedure for each unknown coefficient on the right: To determine K1, for instance, we multiply both sides of the expression above by s − p1, yielding

B(s)/((s − p2) · · · (s − pn)) = K1 + K2(s − p1)/(s − p2) + · · · + Kn(s − p1)/(s − pn),

and then evaluate this expression at s = p1. The right-hand side then becomes just K1, matching, on the left, effectively what you would see after "covering up" the original s − p1 factor in the denominator (e.g., with your THUMB) evaluated at s = p1; i.e.,

B(s)/(THUMB(s − p2) · · · (s − pn)) |_{s=p1} = K1.

For example, in

1/((s + 1)(s + 2)) = K1/(s + 1) + K2/(s + 2),

the cover-up method gives

K1 = 1/(THUMB(s + 2)) |_{s=−1} = 1,

and

K2 = 1/((s + 1)THUMB) |_{s=−2} = −1,

so that 1/((s + 1)(s + 2)) = 1/(s + 1) − 1/(s + 2).
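sympy's `apart` performs the same expansion, and the cover-up evaluation itself is a one-line substitution (our addition):

```python
# PFE of 1/((s+1)(s+2)) via sympy, plus the cover-up evaluation of K1.
import sympy as sp

s = sp.symbols('s')
K1 = (1/(s + 2)).subs(s, -1)              # cover up (s + 1), evaluate at -1
pfe = sp.apart(1/((s + 1)*(s + 2)), s)
```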
Example 11.16
Using the cover-up method, find the PFE of the transfer function

Ĥ(s) = s/(s² + 1).

Solution We first factor the denominator,

Ĥ(s) = s/((s + j)(s − j)),

so that the poles are visible. Since the poles (at s = ±j) are distinct, the PFE has the form

s/((s + j)(s − j)) = K1/(s + j) + K2/(s − j).

Using the cover-up method,

K1 = s/(THUMB(s − j)) |_{s=−j} = (−j)/(−2j) = 1/2,

and

K2 = s/((s + j)THUMB) |_{s=j} = j/(2j) = 1/2.

Thus,

Ĥ(s) = (1/2)/(s + j) + (1/2)/(s − j)

and, by Table 11.1,

h(t) = (1/2)(e^{−jt} + e^{jt})u(t) = cos(t)u(t).

With a little practice, one can apply the cover-up method "in place," as illustrated in the following example.
Example 11.17
We write

Ĥ(s) = (s + 1)/(s(s − 1)) = [(0 + 1)/(THUMB(0 − 1))]/s + [(1 + 1)/(1 · THUMB)]/(s − 1) = −1/s + 2/(s − 1).

Thus,

h(t) = (−1 + 2e^{t})u(t).
Repeated poles: Suppose next that a rational function

(b0 s^m + · · · + bm)/((s − p1)(s − p1) · · · (s − pn))

has a repeated pole p1. A PFE of the form

(b0 s^m + · · · + bm)/((s − p1)(s − p1) · · · (s − pn)) = K1/(s − p1) + K2/(s − p1) + · · · + Kn/(s − pn)

cannot determine the coefficients of the repeated terms individually, since only their sum matters; e.g.,

1/((s + 1)(s + 1)(s + 2)) = K1/(s + 1) + K2/(s + 1) + K3/(s + 2) = (K1 + K2)/(s + 1) + K3/(s + 2).
However, by modifying the form of the PFE, we still can express a rational function as a sum of simple terms. In the above example, we can write

1/((s + 1)(s + 1)(s + 2)) = K1/(s + 1)² + K2/(s + 1) + K3/(s + 2)

with K1 = 1, K2 = −1, and K3 = 1. But, what is a simple way to find the Ki for the repeated-pole case?
Simple poles versus repeated poles with multiplicity r > 1

In general, the PFE of a proper-form rational expression with repeated poles will contain terms such as

Ki/(s − pi)^r + Ki+1/(s − pi)^{r−1} + · · · + Ki+r−1/(s − pi)

associated with poles pi that repeat r times—or have multiplicity r. Simple poles, i.e., nonrepeating poles with multiplicity r = 1, contribute to the expansion in the same way as before. The weighting coefficients for terms involving simple poles, as well as the coefficient of the leading term for each repeated pole (i.e., Ki above), can be determined using the cover-up method. The remaining coefficients are obtained using a strategy illustrated in the next set of examples.
Example 11.18
Find the PFE of

F̂(s) = s/(s + 1)².

Solution Since the pole at s = −1 repeats twice, the PFE has the form

s/(s + 1)² = K1/(s + 1)² + K2/(s + 1).

Next, using the cover-up method (where the entire (s + 1)² factor is covered up on the left), we determine K1 as

K1 = s/THUMB |_{s=−1} = −1.

Thus, the problem is now reduced to finding the weight K2 so that

s/(s + 1)² = −1/(s + 1)² + K2/(s + 1)

is true for all s. This is accomplished by evaluating the above expression with any convenient value of s and then solving for K2. For example, using s = 0 we find

0/(0 + 1)² = −1/(0 + 1)² + K2/(0 + 1),

which yields K2 = 1. Thus, we have

s/(s + 1)² = −1/(s + 1)² + 1/(s + 1),

confirming the example in the second row of Table 11.3.
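The repeated-pole expansion of Example 11.18 agrees with sympy's `apart` (our addition):

```python
# PFE of s/(s+1)^2, which has a double pole at s = -1.
import sympy as sp

s = sp.symbols('s')
pfe = sp.apart(s/(s + 1)**2, s)   # expect 1/(s+1) - 1/(s+1)^2
```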
Example 11.19
Determine the inverse Laplace transform h(t) of

Ĥ(s) = (s + 1)/(s²(s − 1)).

Solution Using the cover-up method for the coefficients of 1/s² and 1/(s − 1), and a convenient value of s for the remaining coefficient, the PFE is found to be

Ĥ(s) = −2/s − 1/s² + 2/(s − 1),

so that, by Table 11.1,

h(t) = (−2 − t + 2e^{t})u(t),

in agreement with the "Mixed" row of Table 11.3.

Improper rational forms: When a rational expression is not proper, a PFE by itself is not sufficient. Consider, for instance,

Ĥ(s) = s · 1/(s + 1) = sĜ(s),

where

g(t) = e^{−t}u(t) ↔ Ĝ(s) = 1/(s + 1).

Then the derivative property of the Laplace transform for causal signals (so that g(0−) = 0) implies that

h(t) = dg/dt = (d/dt)(e^{−t}u(t)) = −e^{−t}u(t) + e^{−t}δ(t) = δ(t) − e^{−t}u(t).

The same result follows from rewriting

Ĥ(s) = (s + 1 − 1)/(s + 1) = 1 − 1/(s + 1)

and using Table 11.1 term by term.
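The mixed-pole PFE used in Example 11.19 can likewise be confirmed (our addition):

```python
# PFE of (s+1)/(s^2 (s-1)): a simple pole at 1 and a double pole at 0.
import sympy as sp

s = sp.symbols('s')
pfe = sp.apart((s + 1)/(s**2*(s - 1)), s)   # expect -2/s - 1/s^2 + 2/(s-1)
```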
Example 11.20
Find the inverse Laplace transform of the improper rational expression

F̂(s) = (s² + 1)/((s + 1)(s + 2)).

Solution Write

F̂(s) = s²/((s + 1)(s + 2)) + 1/((s + 1)(s + 2)).

So, letting

g(t) ↔ Ĝ(s) = 1/((s + 1)(s + 2)),

we have

f(t) = d²g/dt² + g(t),

where, by the cover-up method,

g(t) ↔ [1/(THUMB(−1 + 2))]/(s + 1) + [1/((−2 + 1)THUMB)]/(s + 2) = 1/(s + 1) − 1/(s + 2).

Thus,

g(t) = e^{−t}u(t) − e^{−2t}u(t),  dg/dt = −e^{−t}u(t) + 2e^{−2t}u(t),

and

d²g/dt² = e^{−t}u(t) − 4e^{−2t}u(t) + δ(t).

Therefore,

f(t) = d²g/dt² + g(t) = δ(t) + 2e^{−t}u(t) − 5e^{−2t}u(t).

Alternatively, we could perform a long division operation with the original form of F̂(s) to obtain

F̂(s) = (s² + 1)/(s² + 3s + 2) = (s² + 3s + 2 − (3s + 1))/(s² + 3s + 2) = 1 − (3s + 1)/((s + 1)(s + 2)),

whose PFE, again by the cover-up method, is

F̂(s) = 1 + 2/(s + 1) − 5/(s + 2),

yielding f(t) = δ(t) + 2e^{−t}u(t) − 5e^{−2t}u(t),
as before.
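The long-division route of Example 11.20 maps directly onto sympy's `div` and `apart` (our addition):

```python
# Long division of (s^2 + 1)/(s^2 + 3s + 2), then PFE of the remainder.
import sympy as sp

s = sp.symbols('s')
q, r = sp.div(s**2 + 1, s**2 + 3*s + 2, s)   # quotient 1, remainder -3s - 1
pfe = sp.apart(r/((s + 1)*(s + 2)), s)       # expect 2/(s+1) - 5/(s+2)
```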
Example 11.21
Determine the inverse Laplace transform of

Ĝ(s) = s³/(s² + 1).

Solution Writing

Ĝ(s) = s² · s/(s² + 1)

and noting that s/(s² + 1) ↔ cos(t)u(t), the derivative property for causal signals gives

g(t) = (d²/dt²)[cos(t)u(t)] = (d/dt)[δ(t) − sin(t)u(t)] = δ′(t) − cos(t)u(t).

Note that in applying the derivative property, above, we used the fact that δ(t) − sin(t)u(t) is a causal signal.
Section 11.3 s-Domain Circuit Analysis

Suppose that an LTIC circuit in zero initial state is driven by a causal input

f(t) ↔ F̂(s).

Then all of the voltage and current signals v(t) and i(t) excited in the circuit are necessarily causal and satisfy the initial conditions

v(0−) = i(0−) = 0.
Consequently, using the time-derivative rule for causal signals, the Laplace transform of the v-i relation

v(t) = L di/dt

for an inductor in the circuit is

V̂(s) = Ls Î(s),

and likewise the Laplace transform of

i(t) = C dv/dt

for a capacitor is

Î(s) = Cs V̂(s),

implying

V̂(s) = (1/sC) Î(s).

Finally, the Laplace transform of the v-i relation

v(t) = Ri(t)

for a resistor is

V̂(s) = R Î(s).

These results imply a general s-domain relation between V̂(s) and Î(s) having the form

V̂(s) = Z Î(s),

s-domain impedance

where

Z ≡ sL for an inductor L,
Z ≡ 1/(sC) for a capacitor C,
Z ≡ R for a resistor R.
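With these impedances, s-domain voltage division reduces to ordinary symbolic algebra. The sketch below is our addition, with component values assumed as R = 1 Ω and L = 1 H (matching Figure 11.6); it recovers the divider transfer function s/(s + 1).

```python
# s-domain voltage division across an inductor: H(s) = Z_L/(Z_R + Z_L).
import sympy as sp

s = sp.symbols('s')
Z_R = 1          # R = 1 ohm
Z_L = s*1        # L = 1 H  ->  Z = sL
H = sp.simplify(Z_L/(Z_R + Z_L))
```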
The Laplace transforms of the KVL and KCL equations, namely

Σ_loop V̂(s) = 0

and

Σ_node Î(s)in = Σ_node Î(s)out,

can be used to perform s-domain circuit analysis, analogous to phasor circuit calculations familiar from earlier chapters. Because KVL and KCL hold in the s-domain, techniques such as voltage division and current division do as well.
Assuming that the LTIC circuit shown in Figure 11.6a has zero initial state and a causal input f(t), Figure 11.6b shows the equivalent s-domain circuit. We can use the s-domain equivalent and simple voltage division to calculate the Laplace transform Ŷ(s) of the zero-state response y(t) of the circuit to a causal input. The result will have the form (because of the convolution property of Laplace transforms)

Ŷ(s) = Ĥ(s)F̂(s).
Figure 11.6 (a) An LTIC circuit with zero initial state, (b) its s-domain equivalent assuming a causal f(t) ↔ F̂(s), and (c) plots of a causal input f(t) = e^{2t}u(t) (heavy curve) and the zero-state response (light curve) calculated in Example 11.22.
392 Chapter 11 Laplace Transform, Transfer Function, and LTIC System Response
the transfer function

Ĥ(s) = Ŷ(s)/F̂(s)

of the circuit, as well as finding the zero-state response y(t) by computing the inverse Laplace transform of Ŷ(s). The following set of examples illustrates such s-domain calculations of the zero-state circuit response to causal inputs.

Example 11.22
In the LTIC circuit shown in Figure 11.6a, determine the transfer function Ĥ(s) = Ŷ(s)/F̂(s) and also find the zero-state response y(t), assuming the circuit input signal is
f(t) = e^(2t)u(t) ↔ F̂(s) = 1/(s − 2).
Solution Figure 11.6b shows the equivalent s-domain circuit where the 1 H inductor has been replaced by an s Ω impedance as a consequence of Z = sL, pertinent for inductors. This equivalent circuit has been obtained by assuming that the circuit input f(t) is a causal signal having some Laplace transform F̂(s).
Now, using voltage division in Figure 11.6b (similar to phasor analysis, but with s-domain impedances), we have
Ŷ(s) = F̂(s) · s/(1 + s).
Hence the system transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = s/(s + 1).
Hence, the system zero-state response to input f(t) = e^(2t)u(t) is the inverse Laplace transform

y(t) = (1/3)(e^(−t) + 2e^(2t))u(t).
Figure 11.6c shows plots of the causal input f(t) = e^(2t)u(t) and the causal zero-state response y(t) derived above.
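The partial-fraction step behind this result can be double-checked symbolically; a sketch using sympy (assumed available, not part of the text):

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

H = s / (s + 1)      # transfer function from Example 11.22
F = 1 / (s - 2)      # transform of the input f(t) = e^{2t} u(t)

Y = sp.apart(H * F, s)   # partial-fraction expansion of Y(s)
print(Y)

y = sp.inverse_laplace_transform(H * F, s, t)
# Difference from the text's answer (1/3)(e^{-t} + 2 e^{2t}):
print(sp.simplify(y - (sp.exp(-t) + 2 * sp.exp(2 * t)) / 3))  # -> 0
```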
Example 11.23
Determine the transfer function and the impulse response of the LTIC circuit
shown in Figure 11.7a.
Solution Transfer functions and impulse responses are defined under
zero-state conditions. Figure 11.7b shows the s-domain equivalent zero-
state circuit, where F̂ (s) denotes the Laplace transform of a causal source
f (t). Using KVL in the s-domain, we find that
F̂(s) = (2 + s + 1/s)Ŷ(s),
giving

Ŷ(s) = F̂(s)/(2 + s + 1/s) = s F̂(s)/(s² + 2s + 1) = s F̂(s)/(s + 1)².

Hence, the transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = s/(s + 1)²,

yielding the impulse response

h(t) = (1 − t)e^(−t)u(t).
[Figure 11.7 circuit diagrams: a 2 Ω resistor, 1 H inductor, and 1 F capacitor in series with source f(t); labels x(t)/X̂(s) and y(t)/Ŷ(s)]
Figure 11.7 (a) An RLC circuit, and (b) its s-domain equivalent.
Notice how the result of Example 11.23 indicates an effective jump in the inductor
current y(t) = h(t) at t = 0 in response to an impulse input f (t) = δ(t). As pointed
out in Chapter 3, inductor currents and capacitor voltages cannot jump in response
to practical sources. However, when theoretical sources such as δ(t) are involved,
jumps can occur, as in Example 11.23.
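The impulse response of Example 11.23 and its jump at t = 0 can be confirmed symbolically; a sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

H = s / (s + 1)**2                    # transfer function of Example 11.23
h = sp.inverse_laplace_transform(H, s, t)

print(sp.simplify(h - (1 - t) * sp.exp(-t)))  # -> 0
print(sp.limit(h, t, 0, '+'))                 # h(0+) = 1: the jump at t = 0
```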
Example 11.24
Determine the transfer function Ĥ (s) and impulse response h(t) of the
circuit described in Figure 11.8.
Solution Applying ideal op-amp approximations, and writing a KCL
equation at the inverting node of the op amp,
(F̂(s) − 0)/2 = (0 − Ŷ(s))/(1 + 1/s).

Hence, the transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = −(1/2)(1 + 1/s),

and the impulse response is

h(t) = −(1/2)[δ(t) + u(t)].
Notice that the circuit in Example 11.24 is not BIBO stable—the transfer func-
tion has a pole at s = 0, which is outside the LHP, and the impulse response is not
absolutely integrable. Evidently, unlike the phasor method and Fourier analysis, the
s-domain method can work perfectly well for the analysis of unstable circuits. The
next example further illustrates this point.
[Figure 11.8: an inverting op-amp circuit with a 2 Ω input resistor and a feedback branch consisting of a 1 Ω resistor in series with a 1/s Ω capacitor impedance; input F̂(s), output Ŷ(s)]
Example 11.25
Find the transfer function of the LTIC circuit described in Figure 11.9 and determine the range of values of the real parameter A for which the circuit is BIBO stable.
[Figure 11.9: s-domain circuit in which the source F̂(s) and a dependent source AV̂x(s) drive a series 1/s Ω and 1 Ω branch, with V̂x(s) taken across the 1 Ω element; AV̂x(s) also drives a 1 Ω–1 Ω voltage divider whose output is Ŷ(s)]
Solution Note that Ŷ (s) is simply one half of AV̂x (s), so we begin by
finding V̂x (s) in terms of F̂ (s). Applying voltage division, we note that
V̂x(s) = (F̂(s) − AV̂x(s)) · 1/(1 + 1/s).
Thus,
Ŷ(s) = AV̂x(s)/2 = (A/2) · s/(s(1 + A) + 1) · F̂(s) = (A/(2(1 + A))) · s/(s + 1/(1 + A)) · F̂(s).
The circuit is BIBO stable if and only if the single pole of the system at

s = −1/(1 + A)

lies in the LHP, which requires 1 + A > 0, i.e., A > −1.
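This stability condition is easy to check numerically (a sympy sketch; the pole expression is the one derived above):

```python
import sympy as sp

A = sp.symbols('A', real=True)

# Single pole of the transfer function found in Example 11.25
pole = -1 / (1 + A)

# BIBO stability requires the pole in the LHP, i.e. 1 + A > 0 (A > -1)
for val in (-2, sp.Rational(-1, 2), 3):
    p = pole.subs(A, val)
    print(f"A = {val}: pole at s = {p}, stable: {bool(p < 0)}")
```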
B(s) = b0 s^m + b1 s^(m−1) + · · · + bm

and

P(s) = s^n + a1 s^(n−1) + · · · + an = (s − p1)(s − p2) · · · (s − pn),
respectively. Since
Ŷ (s)
Ĥ (s) = ,
F̂ (s)
where Ŷ (s) is the Laplace transform of a causal zero-state response to a causal input
f (t) ↔ F̂ (s), it follows that for an LTIC circuit,
Ŷ(s)/F̂(s) = B(s)/P(s)
or, equivalently,
[Figure 11.10 circuit diagrams]
Figure 11.10 (a) A 2nd-order LTIC circuit and (b) its s-domain equivalent.
Solution Using the s-domain equivalent of the circuit, shown in Figure 11.10b,
and writing a KVL equation for each loop, we have
and
2(Î(s) − Ŷ(s)) = (1/s)Ŷ(s).
Solving the second equation for Î (s) gives
Î(s) = (1 + 1/(2s))Ŷ(s).
Substituting for Î (s) in the first equation then yields
F̂(s) = (s + 5/2 + 2/s)Ŷ(s).
Hence, the transfer function is
Ĥ(s) = 1/(s + 2.5 + 2/s) = s/(s² + 2.5s + 2).
Therefore,
B(s) = s,
and
P (s) = s 2 + 2.5s + 2.
Furthermore, because
Ŷ(s)/F̂(s) = B(s)/P(s),
we have
s 2 Ŷ (s) + 2.5s Ŷ (s) + 2Ŷ (s) = s F̂ (s),
so that an ODE describing the circuit is
d²y/dt² + 2.5 dy/dt + 2y(t) = df/dt
and the circuit order is n = 2. The zero-input response of the circuit must
satisfy the homogeneous ODE
d²y/dt² + 2.5 dy/dt + 2y(t) = 0,
which is obtained by setting the input f (t) to zero in the ODE for the circuit.
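The characteristic poles and modes of this homogeneous ODE can be computed symbolically; a sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

# Characteristic polynomial of the circuit derived above
P = s**2 + sp.Rational(5, 2) * s + 2
poles = sp.roots(P, s)
print(poles)   # complex-conjugate characteristic poles -5/4 +/- j*sqrt(7)/4

# Zero-input response: general solution of the homogeneous ODE
y = sp.Function('y')
ode = sp.Derivative(y(t), t, 2) + sp.Rational(5, 2) * sp.Derivative(y(t), t) + 2 * y(t)
sol = sp.dsolve(sp.Eq(ode, 0), y(t))
print(sol.rhs)  # damped co-sinusoidal modes ~ e^{-5t/4} (C1 sin + C2 cos)(sqrt(7) t / 4)
```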
Taking the Laplace transform of the homogeneous ODE with initial conditions y(0−), y′(0−), . . . , y^(n−1)(0−) yields Ŷ(s)P(s) = C(s), with the numerator polynomial

C(s) = y(0−)(s^(n−1) + a1 s^(n−2) + · · ·) + y′(0−)(s^(n−2) + a1 s^(n−3) + · · ·) + · · · + y^(n−1)(0−).
Consequently, the Laplace transform of the system zero-input response is
C(s)
Ŷ (s) = .
(s − p1 )(s − p2 ) · · · (s − pn )
Note that Ŷ (s) is a proper rational expression with poles pi that coincide with
the characteristic poles of the system Ĥ (s). Also note that Ŷ (s) is linearly
Section 11.4 General Response of LTIC Circuits and Systems 401
Hence, the zero-input response of the circuit with the given initial condi-
tions is
Notice that the zero-input response examined in this example is nontransient because of the contribution of a nontransient characteristic mode e^(2t) due to the characteristic pole at s = p1 = 2 in the RHP.
Example 11.28
Determine the characteristic modes and the form of the zero-input response
of the circuit shown in Figure 11.7a.
Solution From Example 11.23 in Section 11.3, the system transfer function is known to be

Ĥ(s) = s/(s + 1)².

The characteristic poles therefore coincide at s = −1 (a double pole), the characteristic modes are e^(−t) and te^(−t), and the zero-input response has the form y(t) = Ate^(−t) + Be^(−t).
Example 11.29
Evaluate the constants A and B of Example 11.28 if the zero-input response
y(t) is known to satisfy y(0) = 1 and y(−1) = 0.
Solution Evaluating y(t) = Ate^(−t) + Be^(−t) at t = 0 and t = −1 s, we find that

y(0) = B = 1,
y(−1) = −Ae^1 + Be^1 = 0.
Thus,

A = B = 1

and the zero-input response is y(t) = (1 + t)e^(−t).
Example 11.30
Applying the Laplace transform to the ODE

dy/dt + 3y(t) = f(t),

with zero input f(t) = 0 and initial condition y(0−), we obtain

Ŷ(s) = y(0−)/(s + 3).

Therefore, we find the zero-input response

y(t) = y(0−)e^(−3t),

valid for t ≥ 0 (and also before t = 0 if the ODE is valid for all times t).
Example 11.31
Suppose that a 3rd-order LTIC system is described by the ODE
d³y/dt³ − 2 d²y/dt² − 6 dy/dt − 8y(t) = df/dt − 4f(t).
Determine the system transfer function Ĥ (s) and find the characteristic
poles and characteristic modes of the system. Find also the poles of the
transfer function Ĥ (s) (distinct from characteristic poles as will be seen in
the solution). Is the system BIBO stable? What is the zero-input response if y(0−) = 2, y′(0−) = 3, y″(0−) = 16?
Solution To determine the transfer function, we take the Laplace trans-
form of the ODE under the assumption of a causal zero-state response y(t)
to a causal input f (t). The transform is
The reason for the behavior encountered in the above example is that the system
analyzed contains a characteristic pole at s = 4 that is cancelled in the transfer function
and yet it affects the zero-input response. It is easy to imagine how this might happen.
For example, a circuit may consist of two cascaded parts, with transfer functions
Ĥ1 (s) and Ĥ2 (s), and the overall transfer function Ĥ1 (s)Ĥ2 (s). Part 2 of the circuit
may have an unstable pole that is cancelled by a zero in the transfer function of Part
1. This can lead to a situation where the overall system is BIBO stable (all poles of
Ĥ1 (s)Ĥ2 (s) are in the LHP) and yet nonzero initial conditions in Part 2 of the circuit
may yield a zero-input response that is unbounded, because Ĥ2 (s) has a pole located
outside the LHP.
Because we do not wish to have systems whose zero-input responses can be
unbounded, we frequently require that systems exhibit a transient zero-input response.
Such systems with transient zero-input responses are called asymptotically stable. Asymptotic stability criterion:

An LTIC circuit with a rational transfer function is asymptotically stable if and only if all of its characteristic poles are in the LHP.
For LTIC systems with proper rational transfer functions, BIBO stability and
asymptotic stability are equivalent unless there is cancellation of an unstable char-
acteristic pole in the transfer function. The BIBO stable system in Example 11.31 is
not asymptotically stable, because of the unstable characteristic pole at s = 4. Note
that the output of a circuit that is not asymptotically stable may be dominated by
(or at least include) components of the zero-input response. Likewise, any thermal
noise injected within such a circuit may create an unbounded signal either interior
to or at the output of the system. Thus, circuits that are not asymptotically stable (a
stronger condition than BIBO stability) must be used with care. One possible use of
such systems is exemplified in the following section.
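The cancellation of an unstable characteristic pole described above can be made concrete with the polynomials of Example 11.31; a sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')

# Example 11.31: numerator and characteristic polynomial of the 3rd-order system
B = s - 4
P = s**3 - 2 * s**2 - 6 * s - 8

print(sp.factor(P))        # -> (s - 4)*(s**2 + 2*s + 2)
print(sp.roots(P, s))      # characteristic poles: s = 4 and s = -1 +/- j

# The unstable characteristic pole at s = 4 cancels in the transfer function:
H = sp.cancel(B / P)
print(H)                   # -> 1/(s**2 + 2*s + 2), whose poles are all in the LHP
```

The system is therefore BIBO stable but not asymptotically stable, since the cancelled pole still shapes the zero-input response.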
For instance, the system

Ĥ(s) = 1/(s² + ωo²),

with ωo² > 0, is marginally stable. The system has simple poles at s = ±jωo. Under initial conditions y(0−) = 1 and y′(0−) = 0, the system exhibits the bounded and
nontransient zero-input response

y(t) = (e^(jωo t) + e^(−jωo t))/2 = cos(ωo t).
Reiterating the marginal stability criterion: an LTIC system is marginally stable if and only if it has simple characteristic poles on the imaginary axis and no characteristic poles in the RHP.
Example 11.32
Determine the zero-input response of the nth-order system

Ĥ(s) = 1/s^n,  n ≥ 1.
Show that the system has a bounded zero-input response and is marginally
stable only for n = 1.
Solution With characteristic poles at s = 0, the zero-input response of
the system is
The bounded co-sinusoidal zero-input response of the marginally stable system

Ĥ(s) = 1/(s² + ωo²),

with ωo² > 0, indicates that the system can be used as an “oscillator,” or as a co-sinusoidal
signal source. Because the oscillations cos(ωo t) are produced in the absence of an
external input, the system is resonant (see Chapter 4), which in turn implies that no net
energy dissipation takes place within the system.8 Clearly, resonance and marginal
stability are related concepts—all marginally stable systems also are resonant. For
instance, the 4th-order marginally stable system
Ĥ(s) = 1/((s² + 4)(s² + 9))
8 Such systems can be built with dissipative components so long as the systems also include active elements that can be modeled by negative resistances.
[Figure 11.11 circuit diagrams]
Figure 11.11 (a) An LTIC circuit with a switch, and (b) its s-domain equivalent including an initial-value source.
can sustain unforced and steady-state co-sinusoidal oscillations at two resonant frequencies, 2 and 3 rad/s. On the other hand, another 4th-order system

Ĥ(s) = 1/((s² + 4)(s² + 4))
with the resonant frequency 2 rad/s is not marginally stable because the poles at
s = ±j 2 are not simple.
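The contrast between simple and repeated imaginary-axis poles can be verified by inverse transforming both cases; a sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

# Simple imaginary poles at s = +/- 2j: bounded co-sinusoidal mode
h1 = sp.inverse_laplace_transform(1 / (s**2 + 4), s, t)
print(sp.simplify(h1))   # -> sin(2*t)/2

# Repeated imaginary poles: an unbounded t*cos(2t) term appears
h2 = sp.inverse_laplace_transform(1 / (s**2 + 4)**2, s, t)
print(sp.simplify(h2 - (sp.sin(2 * t) - 2 * t * sp.cos(2 * t)) / 16))  # -> 0
```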
Example 11.33
The source in the circuit shown in Figure 11.11a is specified as f (t) = 3 V.
Determine the system response y(t) for t > 0, for an arbitrary initial state
vc (0− ), by using the s-domain equivalent circuit shown in Figure 11.11b.
Solution Note that the source f(t) = 3 V appears in the equivalent circuit as the Laplace transform F̂(s) = 3/s. The impedance 2/s is the counterpart of the 1/2 F capacitor (as in Section 11.3), but it is in parallel with a current source (1/2)vc(0−) for reasons to be explained after this example. Focusing, for now, on the analysis of the circuit in Figure 11.11b, we note that we may
use superposition. First, suppressing the initial-value current source and applying voltage division, we have

Ŷ(s) = F̂(s) · 1/(1 + 2/s) = (3/s) · 1/(1 + 2/s) = 3/(s + 2).
Next, suppressing F̂(s) and applying the current (1/2)vc(0−) through the parallel combination of the impedances 1 and 2/s, we obtain

Ŷ(s) = −(1/2)vc(0−) · (1 · (2/s))/(1 + 2/s) = −vc(0−)/(s + 2).
Superposing the two components gives

Ŷ(s) = 3/(s + 2) − vc(0−)/(s + 2).
Taking the inverse Laplace transform of this result, we find that for t > 0

y(t) = 3e^(−2t) − vc(0−)e^(−2t).

The first term of the solution is the zero-state component (driven by the 3 V source) while the second term, proportional to vc(0−), is the zero-input response.
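A symbolic check of the superposition result (sympy sketch; the symbol v_c0 stands for vc(0−)):

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
vc0 = sp.symbols('v_c0', real=True)   # stands for v_c(0-)

# Superposition result of Example 11.33
Y = 3 / (s + 2) - vc0 / (s + 2)
y = sp.inverse_laplace_transform(Y, s, t)

print(sp.simplify(y - (3 - vc0) * sp.exp(-2 * t)))  # -> 0
```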
Figures 11.12a and 11.12b, below, depict the s-domain equivalents of an inductor
L and capacitor C, with nonzero initial states i(0− ) and v(0− ), respectively. In the
equivalent networks, the s-domain impedances sL and 1/sC are accompanied by
series and parallel voltage and current sources, Li(0− ) and Cv(0− ), having polarity
and flow direction opposite to those associated with v(t) and i(t). For instance, in
Figure 11.12a the voltage Li(0− ) is a drop in the direction of voltage rise v(t), while
in Figure 11.12b the current Cv(0− ) flows counter to i(t). The equivalence depicted
in Figure 11.12b is the transformation rule used to obtain the s-domain circuit shown
above in Figure 11.11b.
[Figure 11.12 equivalent-network diagrams]
Figure 11.12 Transformation rule for obtaining s-domain equivalents of (a) an inductor L and (b) a capacitor C, with initial states i(0−) and v(0−), respectively (see text for explanation).
To validate the transformation depicted in Figure 11.12a, we note that the Laplace
transform of the inductor v-i relation
v(t) = L di/dt

is

V̂(s) = sL Î(s) − L i(0−),
an equality that can be seen to satisfy the s-domain circuit constraint at the network
terminals shown at the right side of the figure. Likewise, the s-domain constraint at
the network terminals at the right side of Figure 11.12b is

Î(s) = sC V̂(s) − C v(0−),

which is in agreement with the Laplace transform of the capacitor v-i relation

i(t) = C dv/dt.
The Laplace transforms in both cases above result from applying the Laplace transform derivative property (item 6 in Table 11.2) to account for the initial-value source terms included in the above models. Note that by applying source transformations to the s-domain equivalents shown in Figures 11.12a and 11.12b we can generate additional forms (Norton and Thevenin types, respectively) of these initial-value source equivalents. That will not be necessary for solving the example problems given below, but remembering the equivalences shown in Figure 11.12, and in particular the proper initial-value source directions, is important.
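The derivative-property origin of the initial-value sources can be verified for a concrete, hypothetical inductor current (sympy sketch; i(t) = e^(−t) is chosen only for illustration):

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
L = sp.symbols('L', positive=True)

# A hypothetical inductor current with i(0-) = 1 A
i = sp.exp(-t)
I_hat = sp.laplace_transform(i, t, s)[0]   # 1/(s + 1)

# Transform of v(t) = L di/dt computed directly ...
V_direct = sp.laplace_transform(L * sp.diff(i, t), t, s)[0]

# ... equals sL*I(s) - L*i(0-): the impedance sL in series with
# the initial-value source L*i(0-) of Figure 11.12a
print(sp.simplify(V_direct - (s * L * I_hat - L * 1)))  # -> 0
```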
Example 11.34
In the circuit shown in Figure 11.13a, the initial capacitor voltage is v(0− ) =
0 and the initial inductor current is i(0− ) = 2 A. Determine v(t) for t > 0
if f (t) = 1 for t > 0.
Solution Figure 11.13b shows the equivalent s-domain circuit, where an
initial-value voltage source, due to the nonzero initial inductor current
i(0− ) = 2 A, has been introduced. Since the initial capacitor voltage is zero,
[Figure 11.13 circuit diagrams]
Figure 11.13 (a) An LTIC circuit with a switch and two energy storage elements, and (b) its s-domain equivalent including an initial-value source.
leading to
(1/2 + 1/(3s) + s/6)V̂(s) = F̂(s)/2 − i(0−)/s.
Thus,

V̂(s) = 3sF̂(s)/(s² + 3s + 2) − 6i(0−)/(s² + 3s + 2).
Substituting F̂(s) = 1/s and i(0−) = 2 A gives

V̂(s) = (3 − 12)/((s + 1)(s + 2)) = −9/(s + 1) + 9/(s + 2),

so that v(t) = −9e^(−t) + 9e^(−2t) V for t > 0.
Example 11.35
In Figure 11.14a f1 (t) = 3t V for t > 0 and f2 (t) = 1 A. Assuming that
v(0− ) = 1 V, and using s-domain techniques, determine the zero-state and
zero-input components of the current i(t) for t > 0.
[Figure 11.14 circuit diagrams]
Figure 11.14 (a) A circuit with two independent sources, and (b) its s-domain equivalent including initial-value sources.
Solution For the zero-state response, we suppress the initial-value sources and write a KCL equation at the node where V̂x is defined:

(V̂x − 3/s²)/(3/s) − 1/s + V̂x/(4 + s) = 0,
from which

V̂x = (2/s)/(s/3 + 1/(4 + s)).
Thus,

Î(s) = V̂x/(4 + s) = 6/(s²(s + 4) + 3s) = 6/(s(s + 1)(s + 3)) = 2/s − 3/(s + 1) + 1/(s + 3).
Taking the inverse Laplace transform, we find that the zero-state response for t > 0 must be

i(t) = 2 − 3e^(−t) + e^(−3t) A.
Now, for the zero-input response we suppress the input sources F̂1(s) and F̂2(s), and then use the superposition method once again to calculate the circuit response to the initial-value sources (1/3)v(0−) and i(0−). First, due to the current source (1/3)v(0−), we have, using current division,
Î(s) = −(1/3)v(0−) · (3/s)/((3/s) + (s + 4)) = −v(0−)/(3 + s(s + 4)).
Next, due to the voltage source i(0− ), we have, dividing by the total
impedance around the loop,
Î(s) = i(0−)/((3/s) + (s + 4)) = s i(0−)/(3 + s(s + 4)).
with
h(t) ≡ h1 (t) ∗ h2 (t) ∗ · · · ∗ hk (t) ↔ Ĥ (s) ≡ Ĥ1 (s)Ĥ2 (s) · · · Ĥk (s).
The order of the cascade system h(t) ↔ Ĥ (s) is less than or equal to the sum of the
orders of its k components—the order can be less if there are pole-zero cancellations
in the product of the Ĥi (s) that forms Ĥ (s).
Example 11.36
The 3rd-order system
Ĥ(s) = 1/((s + 1)(s + 1 − j)(s + 1 + j))
is to be realized by cascading systems Ĥ1 (s) and Ĥ2 (s). Determine h1 (t) ↔
Ĥ1 (s) and h2 (t) ↔ Ĥ2 (s) in such a way that the transfer function of each
subsystem has real-valued coefficients.
Solution Since
Ĥ(s) = 1/(s + 1) · 1/((s + 1 − j)(s + 1 + j)),
the system can be realized by cascading
h1(t) = e^(−t)u(t) ↔ Ĥ1(s) = 1/(s + 1)
(e.g., an RC circuit) with a 2nd-order system
h2(t) = e^(−t) sin(t)u(t) ↔ Ĥ2(s) = 1/((s + 1 − j)(s + 1 + j)) = 1/(s² + 2s + 2).
Example 11.37
An LTIC system Ĥ1(s) = s/(s + 1) is cascaded with the LTIC system h2(t) = δ(t) + e^(−2t)u(t) to implement a system Ĥ(s) = Ĥ1(s)Ĥ2(s). Discuss the stability of system Ĥ(s) and examine H(ω) = Ĥ(jω) to determine the kind of filter implemented by Ĥ(s).
Solution Since
h2(t) = δ(t) + e^(−2t)u(t) ↔ Ĥ2(s) = 1 + 1/(s + 2) = (s + 3)/(s + 2),
it follows that
Ĥ(s) = Ĥ1(s)Ĥ2(s) = s/(s + 1) · (s + 3)/(s + 2) = s(s + 3)/((s + 1)(s + 2)).
Because both poles of Ĥ (s) are in the LHP, the system is BIBO stable.
Furthermore, both components of the system have only LHP system poles
and therefore they are asymptotically stable. The system frequency response
Ĥ(jω) = jω(jω + 3)/((jω + 1)(jω + 2))

vanishes at ω = 0 and approaches unity as |ω| → ∞, indicating that Ĥ(s) implements a high-pass filter.
with

h(t) ≡ h1(t) + h2(t) + · · · + hk(t) ↔ Ĥ(s) ≡ Ĥ1(s) + Ĥ2(s) + · · · + Ĥk(s).
The order of the parallel system h(t) ↔ Ĥ (s) is less than or equal to the sum of the
orders of its k subcomponents. (The order can be less if pole-zero cancellations occur
in the sum of the Ĥi (s) that forms Ĥ (s).)
[Figure: parallel connection in which F̂(s) drives Ĥ1(s), Ĥ2(s), . . . , Ĥk(s); their outputs hi(t) ∗ f(t) are summed to produce y(t) = h(t) ∗ f(t) ↔ Ŷ(s) = Ĥ(s)F̂(s)]
Example 11.38
A 3rd-order LTIC system

Ĥ(s) = s(s + 1)/((s + 3)(s² + s + 1))

is to be implemented as a parallel combination of two subsystems Ĥ1(s) and Ĥ2(s) having real-valued coefficients.
Solution Writing Ĥ (s) in a PFE, and then recombining the two terms
involving complex poles, we have
Ĥ(s) = s(s + 1)/((s + 3)(s² + s + 1)) = (6/7) · 1/(s + 3) + (1/7) · (s − 2)/(s² + s + 1).
Hence, the parallel subsystems can be chosen as

Ĥ1(s) = (6/7)/(s + 3)

and

Ĥ2(s) = (1/7) · (s − 2)/(s² + s + 1).
Ĥ1(s) is a low-pass system. Ĥ2(s) is another low-pass system and is 2nd-order. However, the overall system behavior is bandpass since the DC outputs of Ĥ1(s) and Ĥ2(s) are equal in magnitude but opposite in sign (Ĥ1(0) = 2/7 and Ĥ2(0) = −2/7).
In this example, we also could have expressed Ĥ(s) in the more standard PFE, as a sum of three 1st-order terms. In that case the transfer
functions of two of the subsystems would have complex coefficients. Such
filter sections can be implemented (see Exercise Problem 11.28), but the
implementation is more complicated than for transfer functions having real-valued coefficients.
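The decomposition and the DC-gain cancellation of Example 11.38 can be verified symbolically (sympy sketch):

```python
import sympy as sp

s = sp.symbols('s')

H = s * (s + 1) / ((s + 3) * (s**2 + s + 1))

# Real-coefficient parallel sections from Example 11.38
H1 = sp.Rational(6, 7) / (s + 3)
H2 = sp.Rational(1, 7) * (s - 2) / (s**2 + s + 1)

print(sp.simplify(H - (H1 + H2)))     # -> 0
print(H1.subs(s, 0), H2.subs(s, 0))   # DC gains 2/7 and -2/7 cancel: bandpass
```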
y(t) = h1 (t) ∗ (f (t) + h2 (t) ∗ y(t)) ↔ Ŷ (s) = Ĥ1 (s)(F̂ (s) + Ĥ2 (s)Ŷ (s)).
[Figure: feedback connection in which F̂(s) enters an adder whose output drives Ĥ1(s) to produce Ŷ(s), and Ŷ(s) is fed back through Ĥ2(s) to the adder]
Ĥ(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s))
Example 11.39
Consider an LTIC feedback system with Ĥ1(s) = 1/s and Ĥ2(s) = A, where
A is a real constant. Determine the conditions under which the feedback
system is BIBO stable.
Solution

Ĥ(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s)) = (1/s)/(1 − (1/s)A) = 1/(s − A).
The pole of Ĥ (s), at s = A, is in the left-half plane only for A < 0; therefore
the system is BIBO stable if and only if A < 0.
Example 11.39 illustrates that a marginally stable system Ĥ1(s) = 1/s (which is not BIBO stable) can be stabilized by “feeding back” an amount Ay(t) of the system
Section 11.5 LTIC System Combinations 417
output, y(t), to the system input. This is an example of “negative feedback” stabilization, since for stability A < 0 is required. Positive feedback (A > 0) leads to an
unstable configuration. Notice that for A very small and negative, the transfer functions Ĥ(s) and Ĥ1(s) can be nearly identical, and yet the former is stable and has a
frequency response, whereas the latter does not.
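The closed-loop pole movement described above, sketched with sympy:

```python
import sympy as sp

s, A = sp.symbols('s A')

H1 = 1 / s      # marginally stable integrator of Example 11.39
H2 = A          # constant feedback gain

H = sp.cancel(H1 / (1 - H1 * H2))
print(H)                            # closed-loop transfer function 1/(s - A)
print(sp.roots(sp.denom(H), s))     # single pole at s = A
```

For A < 0 the single pole sits in the LHP, reproducing the negative-feedback stabilization argument.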
Example 11.40
Consider a LTIC feedback system with Ĥ1 (s) = s+11
and Ĥ2 (s) = A, where
A is a real constant. Note that both Ĥ1 (s) and Ĥ2 (s) are stable systems.
Determine the values of A for which the feedback system itself is unstable.
Solution

Ĥ(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s)) = (1/(s + 1))/(1 − A/(s + 1)) = 1/(s + 1 − A).

The pole at s = A − 1 is outside the LHP when A ≥ 1, so the feedback system is unstable for A ≥ 1.
Notice that instability in Example 11.40 occurs once again with positive feedback
(A ≥ 1). However, in Example 11.40 negative feedback is not necessary for system
stability (e.g., with A = 0.5, the system is stable) because Ĥ1 (s) is stable to begin
with.
Example 11.41
For an LTIC feedback system with

Ĥ1(s) = 1/(s + 1)

and

Ĥ2(s) = 1/(s + K),

determine the range of the real parameter K for which the system is BIBO stable.
For stability, both poles of Ĥ(s), located at s = (−(1 + K) ± √(K² − 2K + 5))/2, must be negative, or

±√(K² − 2K + 5) < 1 + K,

implying

K² − 2K + 5 < K² + 2K + 1.
Thus,
K>1
for stability.
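The stability boundary K > 1 also follows from the closed-loop characteristic polynomial, since a 2nd-order polynomial has both roots in the LHP exactly when all of its coefficients are positive; a sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
K = sp.symbols('K', real=True)

H1 = 1 / (s + 1)
H2 = 1 / (s + K)
H = sp.cancel(H1 / (1 - H1 * H2))

P = sp.expand(sp.denom(H))
print(P)   # closed-loop characteristic polynomial s**2 + (K + 1)*s + K - 1

# Both roots are in the LHP iff K + 1 > 0 and K - 1 > 0, i.e. K > 1
print(sp.roots(P.subs(K, 0), s))   # K = 0: one RHP root, unstable
print(sp.roots(P.subs(K, 2), s))   # K = 2: both roots in the LHP
```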
Example 11.42
Determine the transfer function Ĥ (s) of the system shown in Figure 11.18.
Solution Labeling the output of the bottom adder as Ŵ(s), we have
[Figure 11.18: the top adder combines F̂(s) with the output of Ĥ2(s) and drives Ĥ1(s) to produce Ŷ(s); the bottom adder combines Ĥ3(s)Ŷ(s) and Ĥ4(s)F̂(s) and feeds Ĥ2(s)]
Ŷ (s) = Ĥ1 (s)[F̂ (s) + Ĥ2 (s)(Ĥ3 (s)Ŷ (s) + Ĥ4 (s)F̂ (s))].
Thus

Ŷ(s) − Ĥ1(s)Ĥ2(s)Ĥ3(s)Ŷ(s) = Ĥ1(s)F̂(s) + Ĥ1(s)Ĥ2(s)Ĥ4(s)F̂(s),

so that

Ĥ(s) = Ŷ(s)/F̂(s) = Ĥ1(s)(1 + Ĥ2(s)Ĥ4(s))/(1 − Ĥ1(s)Ĥ2(s)Ĥ3(s)).
EXERCISES
11.1 Determine the Laplace transform F̂ (s), and the ROC, for the following
signals f (t). In each case identify the corresponding pole locations where
|F̂ (s)| is not finite.
(a) f (t) = u(t) − u(t − 8).
(b) f (t) = u(t) − u(t + 8).
(c) f (t) = u(t + 8).
(d) f (t) = 6.
(e) f(t) = rect((t − 4)/2).
(f) f(t) = rect((t + 8)/3).
(g) f(t) = te^(2t)u(t).
(h) f(t) = te^(2t)u(t − 2).
(i) f(t) = 2te^(2t).
(j) f(t) = te^(−4t) + δ(t) + u(t − 2).
(k) f(t) = e^(2t) cos(t)u(t).
11.2 For each of the following Laplace transforms F̂ (s), determine the inverse
Laplace transform f (t).
(a) F̂(s) = (s + 3)/((s + 2)(s + 4)).
(b) F̂(s) = s²/((s + 2)(s + 4)).
(c) F̂(s) = 1/(s(s − 5)²).
(d) F̂(s) = (s² + 2s + 1)/((s + 1)(s + 2)).
(e) F̂(s) = s/(s² + 2s + 5).
(f) F̂(s) = s³/(s² + 4).
11.3 Sketch the amplitude response |H (ω)| and determine the impulse response
h(t) of the LTIC systems having the following transfer functions:
(a) Ĥ(s) = s/(s + 10).
(b) Ĥ(s) = 10/(s + 1).
(c) Ĥ(s) = s/(s² + 3s + 2).
(d) Ĥ(s) = (1/(s + 1))e^(−s).
11.4 Determine the zero-state responses of the systems defined in Problem 11.3
to a causal input f (t) = u(t). Use y(t) = h(t) ∗ f (t) or find the inverse
Laplace transform of Ŷ (s) = Ĥ (s)F̂ (s), whichever is more convenient.
11.5 Repeat Problem 11.4 with f(t) = e^(−t)u(t).
11.6 Given the frequency response H (ω), below, determine the system transfer
function Ĥ (s) and impulse response h(t).
(a) H(ω) = jω/((1 + jω)(2 + jω)).
(b) H(ω) = jω/(1 − ω² + jω).
11.7 Determine whether the LTIC systems with the following transfer functions
are BIBO stable and explain why or why not.
(a) Ĥ1(s) = (s³ + 1)/((s + 2)(s + 4)).
(b) Ĥ2(s) = 2 + s/((s + 1)(s − 2)).
(c) Ĥ3(s) = (s² + 4s + 6)/((s + 1 + j6)(s + 1 − j6)).
11.8 For each unstable system in Problem 11.7 give an example of a bounded
input that causes an unbounded output.
11.9 Given
sF̂(s) − f(0−) = ∫ from 0− to ∞ of (df/dt)e^(−st) dt = f(0+) − f(0−) + ∫ from 0+ to ∞ of (df/dt)e^(−st) dt,

and assuming that the Laplace transforms of f(t) and f′(t) exist, show that
(a) lim(s→0) sF̂(s) = f(∞).
(b) lim(s→∞) sF̂(s) = f(0+).
11.10 Consider the LTIC circuit shown in Figure 11.7a. What is the zero-state
response x(t) if the input is f (t) = u(t)? Hint: Use s-domain voltage divi-
sion to relate X̂(s) to F̂ (s) in Figure 11.7b.
11.11 Repeat Problem 11.10 for (a) f (t) = δ(t), and (b) f (t) = tu(t).
11.12 Consider the following circuit with C > 0:
[circuit diagram: network with 1 Ω and 2 Ω resistors, a 1 H inductor, and a capacitor C, driven by f(t) with output y(t)]
11.13 Determine the transfer functions Ĥ (s) and the zero-state responses for LTIC
systems described by the following ODEs:
(a) d²y/dt² + 3 dy/dt + 2y(t) = e^(3t)u(t).
(b) d²y/dt² + y(t) = cos(2t)u(t).
(c) d²y/dt² + y(t) = cos(t)u(t).
11.14 If an LTIC system has the transfer function Ĥ(s) = Ŷ(s)/F̂(s) = (s + 1)/(s + 2)², determine a linear ODE that describes the relationship between the system input f(t) and the output y(t).
11.15 Determine the characteristic polynomial P (s), characteristic poles, charac-
teristic modes, and the zero-input solution for each of the LTIC systems
described below.
(a) d²y/dt² + 2 dy/dt − 8y(t) = 6f(t), y(0−) = 0, y′(0−) = 1.
(b) d³y/dt³ + 2 d²y/dt² − dy/dt − 2y(t) = f(t), y(0−) = 1, y′(0−) = 1, y″(0−) = 0.
(c) d²y/dt² + 2 dy/dt + y(t) = 2f(t), y(0−) = 1, y′(0−) = 1.
11.16 (a) Take the Laplace transform of the following ODE to determine Ŷ (s)
assuming f (t) = u(t), y(0− ) = 1, and y (0− ) = 0. Determine y(t) for
t > 0 by taking the inverse Laplace transform of Ŷ (s).
d²y/dt² + 5 dy/dt + 4y(t) = df/dt + 2f(t).
11.17 The transfer function of a particular LTIC system is Ĥ(s) = Ŷ(s)/F̂(s) = 4/(s − 2).
Is the system asymptotically stable? Explain. Is the system BIBO stable?
Explain.
11.18 What are the resonance frequencies in a system with the transfer function
Ĥ(s) = s/((s + 1)(s² + 4)(s² + 25))?
Is the system marginally stable? BIBO-stable? Explain.
11.19 Determine the zero-state response y(t) = h(t) ∗ f (t) of the marginally stable
system
Ĥ(s) = 1/((s² + 4)(s² + 9))
to an input f (t) = cos(2t)u(t).
11.20 (a) Determine the transfer function and characteristic modes of the circuit shown below assuming that Ca = Cb = 1/2 F, R = 1.5 Ω, and L = 1/2 H.
[circuit diagram: source f(t) drives series capacitor Ca and inductor L (current i(t), node voltage ν(t)), with Cb and R; output y(t)]
(b) Given that v(0− ) = 1 V and i(0− ) = 0.5 A, and using the element
values given in part (a), determine y(t) for t > 0 in the circuit:
[the same circuit with f(t) = 0]
11.21 Consider the following circuit, which is in DC steady-state until the switch
is opened at t = 0:
[circuit diagram: a 4 V source, two 2 Ω resistors, a 1 F capacitor with voltage ν(t), and a 2 H inductor with current i(t); the switch operates at t = 0]
[circuit diagram: source f(t), a 1 Ω resistor, a 1 F capacitor, and a 1 H inductor with current i(t); output y(t); switch at t = 0]
[circuit diagram: source f(t), two 2 Ω resistors, a 1 H inductor with current i(t), and a 1 F capacitor; output y(t); switch at t = 0]
(a) Show that the transfer function of the circuit for t > 0 is Ĥ(s) = Ŷ(s)/F̂(s) = s/(4s² + 5s + 2).
(b) What are the characteristic modes of the circuit?
(c) Determine y(t) for t > 0 if f (t) = 1 V, y(0− ) = 1 V, and i(0− ) = 0.
11.24 The system shown below can be implemented as a cascade of two 1st-order
systems Ĥ1 (s) and Ĥ2 (s). Identify the possible forms of Ĥ1 (s) and Ĥ2 (s).
[block diagram between F̂(s) and Ŷ(s) with an adder and blocks including 1/(s + 2) and s/(s + 1)]
11.25 Determine the impulse response h(t) of the system shown below. Also deter-
mine whether the system is BIBO stable.
[block diagram between F̂(s) and Ŷ(s) with two adders and blocks s/(s + 1) and 1/(s + 1)]
11.26 Determine the transfer function Ĥ (s) of the system shown below. Also deter-
mine whether the system is BIBO stable.
[block diagram between F̂(s) and Ŷ(s) with an adder and blocks 1/(s + 1), s/(s + 1), and 2/s]
11.27 Determine the transfer function Ĥ (s) of the system shown below and deter-
mine for which K the system is BIBO stable if
(a) Ĥf(s) = 1/(s + K).
(b) Ĥf(s) = s + K.
[block diagram: forward block 1/(s + 1) followed by an adder between F̂(s) and Ŷ(s), with feedback block Ĥf(s)]
Ideal filters; practical low-order filters; 2nd-order filters and quality factor Q; Butterworth filter design

In earlier chapters we focused our efforts on the analysis of LTIC circuits and systems. For example, given a circuit, we learned how to find its transfer function and its corresponding frequency response. In this final chapter, we consider the other side of the coin, the problem of design. For instance, suppose we wish to create a filter whose frequency response approximates that of an ideal low-pass shape with a particular cutoff frequency. How can this be accomplished? The solution lies in answering two questions. First, how do we choose the coefficients in the transfer function? And, second, given the coefficients of a transfer function, how do we build a circuit having this transfer function (and, therefore, approximating the desired frequency response)? We will tackle these questions in reverse order, but before doing so, in Section 12.1 we first describe the desired frequency response characteristics for ideal filters. Section 12.2 then examines how closely the ideal characteristics can be approximated by 1st- and 2nd-order filters and describes how to implement such low-order filters using op-amps. This section also includes discussion of a system design parameter known as Q and its relation to the frequency response and pole locations of 2nd-order systems. Section 12.3 tackles the important problem of designing the filter transfer function for higher-order filters, describing a method for designing one common class of filters called Butterworth. The higher-order filters then can be implemented as cascades or parallel combinations of op-amp first- and second-order circuits.
Section 12.1 Ideal Filters: Distortionless and Nondispersive 427
Many real-world applications require filters that pass signal content in one frequency
band, with no distortion (other than a possible amplitude scaling and signal delay),
and that attenuate (reduce in amplitude) all signal content outside of this frequency
band. The sets of frequencies where signal content is passed and attenuated are called
the passband and stopband, respectively.
Figures 12.1a through 12.1c depict the magnitude and angle variations of three
such ideal frequency response functions H (ω) = Ĥ (j ω). Linear systems with the
frequency response curves shown in Figures 12.1a through 12.1c are known as distor-
tionless low-pass, band-pass, and high-pass filters, respectively.
To understand this terminology, consider the low-pass filter with frequency
response
H(ω) = K rect(ω/(2Ω)) e^(−jωto),
represented in Figure 12.1a. This filter responds to a low-pass input f (t) ↔ F (ω) of
bandwidth less than Ω with an output y(t) = Kf(t − to), which is a time-delayed and
amplitude-scaled but undistorted replica of the input. In fact, all three filters shown
in Figures 12.1a through 12.1c produce delayed and scaled, but undistorted, copies
of their inputs within their respective passbands because of the filters’ flat-amplitude
responses and linear-phase characteristics.
[Figure 12.1: magnitude responses |H(ω)| = K over the respective passbands (|ω| < Ω in (a), ωc − Ω < |ω| < ωc + Ω in (b), |ω| > Ω in (c)), each with linear phase ∠H(ω) of slope −to]
[Figure 12.2: (a) a low-pass response with K = 1, passband |ω| < Ω, and linear phase of slope −to; (b) a band-pass response of peak magnitude 0.5 centered at ωc with linear phase of slope −to]
(remember the Fourier modulation property) denote a band-pass filter having magnitude and phase responses depicted in Figure 12.2b. Clearly, H2(ω) is a band-pass filter
with a linear phase variation within its passband, but the phase curve in Figure 12.2b
is different from the phase curve of the distortionless band-pass filter shown in
Figure 12.1b. We refer to the filter in Figure 12.2b as nondispersive. Now, how is
the output signal of a nondispersive filter different from the output of a distortionless filter?
To answer this question, consider the response of the nondispersive band-pass
filter
H(ω) = rect((ω − ωc)/(2Ω)) e^(−j(ω−ωc)to) + rect((ω + ωc)/(2Ω)) e^(−j(ω+ωc)to)
depicted in Figure 12.3 to an AM signal
f(t) = m(t) cos(ωc t) ↔ F(ω) = (1/2)M(ω − ωc) + (1/2)M(ω + ωc),
where m(t) ↔ M(ω) is a low-pass signal with the triangular Fourier transform shown
in the same figure. Also shown in the figure is the Fourier transform of the filter output,
Y(ω) = H(ω)F(ω) = (1/2)e^{−j(ω−ωc)t_o} M(ω − ωc) + (1/2)e^{−j(ω+ωc)t_o} M(ω + ωc).
You should be able to confirm that this expression is the Fourier transform of the func-
tion m(t − to ) cos(ωc t). Therefore, the nondispersive filter responds to the band-pass
AM input f (t) = m(t) cos(ωc t) with the output y(t) = m(t − to ) cos(ωc t), which is
different from f (t − to ), which would be the response of a distortionless band-pass
filter to the same input signal. Evidently, a nondispersive1 filter delays only the enve-
lope of an AM signal, while a distortionless band-pass filter delays the entire signal.
430 Chapter 12 Analog Filters and Low-Pass Filter Design

[Figure 12.3: the triangular transform M(ω) of the message m(t); the band-pass spectrum 0.5M(ω − ωc) + 0.5M(ω + ωc) of the AM input f(t); the magnitude |H(ω)| (unity in the passband) and linear phase ∠H(ω) (slope −t_o) of the nondispersive band-pass filter; and the magnitude |Y(ω)| and phase ∠Y(ω) of the filter output.]

Since both types of filters with linear phase variation preserve the envelope integrity of the input, they both are compatible with the filter design goals stated above.

1 The term nondispersive refers to the fact that the signal envelope is not distorted when the filter phase variation is a linear function of ω across the passband. Deviations from phase linearity generally cause “spreading” of the envelope, which is known as dispersion. Also, the terms phase delay and group delay often are used to refer to the delay imposed by a filter on the carrier and envelope components, respectively, of AM signals. In distortionless filters, phase and group delays are equal; in nondispersive filters they are different.
Section 12.2 1st- and 2nd-Order Filters 431
Figure 12.4 (a) A 1st-order low-pass filter circuit, (b) its phasor representation, (c) the amplitude
response |H(ω)| , and (d) phase response ∠H(ω) (in degrees).
and

Ĥb(s) = K ωo² / (s² + 2αs + ωo²),

where

K = 1 + R1/R2,    ωo = 1/√(R3 R4 C1 C2),
Figure 12.5 (a) 1st-order low-pass active filter circuit, and (b) 2nd-order low-pass active filter circuit
known as the Sallen-Key circuit.
and

α = 1/(2R3 C1) + 1/(2R4 C1) + (1 − K)/(2R4 C2),

all of which can be controlled by the capacitance and resistance values in the circuits. The op-amps
in the circuits provide a DC amplitude gain K ≥ 1 and the possibility, as discussed
above, of cascading similar active filter circuits in order to assemble higher-order filter
circuits having more ideal frequency response functions H (ω) than either Ha (ω) or
Hb (ω).
Figure 12.6a illustrates the amplitude response curves |HQ(ω)| of three different
versions of the 2nd-order filter shown in Figure 12.5b, labeled by distinct values of
Q = ωo/(2α). These responses are to be contrasted with |H(ω)| of Figure 12.6b describing
the cascade combination of the same three systems. The phase response ∠H (ω) of
the cascaded system also is shown in Figure 12.6b. Clearly, the cascade idea seems to
provide a simple means of obtaining practical filter characteristics approaching those
of ideal filters. In Section 12.3 we will learn a method for designing high-order transfer
functions Ĥ (s) that can be implemented as a cascade of 2nd-order (and possibly first-
order) sections to produce high-order op-amp filters having properties approximating
the ideal. We close this section with a discussion of the Q parameter introduced above
and its relation to pole locations and frequency responses of dissipative 2nd-order
systems.
Figure 12.6 (a) The amplitude responses |HQ(ω)| of 2nd-order low-pass filters with ωo = 1 rad/s and Q ≡ ωo/(2α) = 1.93, 0.707, and 0.52, and (b) the amplitude response |H(ω)| of a 6th-order filter obtained by cascading (multiplying) the filter curves shown in (a); the phase response ∠H(ω) of the 6th-order filter also is shown in (b).
where p1 and p2 denote characteristic system poles confined to the LHP. The parameters ωo = √(p1 p2) and 2α ≡ −(p1 + p2) are real and positive coefficients having the ratio ωo/(2α) denoted as Q.
The zero-input response of such systems takes underdamped, critically damped, or overdamped forms illustrated in Table 12.1a, depending on the relative values of ωo and α, known as the undamped resonance frequency and damping coefficient, respectively. Also, depending on the values of ωo and α, the poles p1 and p2 either are both real valued (note the pole locations depicted in Table 12.1a) or form a complex conjugate pair (i.e., p2 = p1*, specifically in underdamped systems). The parameter
Q ≡ ωo/(2α)

is called the quality factor and it has a number of useful interpretations summarized in Table 12.1b, which are discussed below.
(a) Zero-input response types (poles at s = −α ± √(α² − ωo²)):

Underdamped (ωo > α): complex conjugate poles; y(t) = Ae^{−αt} cos(√(ωo² − α²) t + θ).
Critically damped (ωo = α): repeated real poles; y(t) = e^{−αt}(A + Bt).
Overdamped (ωo < α): distinct real poles; y(t) = Ae^{(−α+√(α²−ωo²))t} + Be^{(−α−√(α²−ωo²))t}.

(b) Quality factor Q = ωo/(2α), interpreted as:
the ratio of center frequency to 3-dB bandwidth of a band-pass filter Ĥ(jω);
2π times the ratio of stored energy to energy dissipated per oscillation period (for high Q);
approximately the number of oscillation cycles completed before the amplitude decays to a few percent of its initial value (for high Q).

Table 12.1 (a) Zero-input response types and examples for dissipative 2nd-order systems with a characteristic polynomial P(s) = s² + 2αs + ωo², where constants A, B, and θ depend on initial conditions and ×’s mark the locations of characteristic poles, and (b) quality factor with interpretations.
[Figure: a series circuit with a 1 H inductor, a 10⁻⁴ F capacitor, and a resistor R driven by the source f(t); the output y(t) is the voltage across R.]
Ĥ(s) = Ŷ(s)/F̂(s) = R / (s + 1/(10⁻⁴s) + R) = Rs / (s² + Rs + 10⁴),
the quality factor Q turns out to be the ratio of center-frequency ωo and 3-dB
bandwidth 2α for the filter. To verify this, we first note that in the above transfer
function P (s) = s 2 + Rs + 104 , 2α = R, and ωo = 102 , so that the corresponding
filter frequency response is
H(ω) = Ĥ(jω) = 2αω / (2αω + j(ω² − ωo²)).
The 3-dB frequencies ωu and ωl of the filter satisfy

|H(ω_{u,l})|² = 1/2.
Figure 12.8 The amplitude response |H(ω)| of a 2nd-order band-pass filter with Q = ωo/(2α) = 5, 0.5, and 0.3.
Therefore, as indicated in the first row of Table 12.1b, Q = ωo/(2α) is the center frequency to bandwidth ratio of the filter. Notice that ωu and ωl are not equidistant from ωo except in the limit of very large Q.
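The half-power frequencies can be found in closed form from |H(ω)|² = 1/2, which gives ω² − ωo² = ±2αω and hence ω_{u,l} = ±α + √(α² + ωo²). A numerical sketch (not from the text; numpy assumed):

```python
import numpy as np

# For H(w) = 2*alpha*w / (2*alpha*w + j*(w^2 - wo^2)), the half-power
# frequencies solve (w^2 - wo^2) = +/- 2*alpha*w.
wo, alpha = 100.0, 10.0
wu = alpha + np.sqrt(alpha**2 + wo**2)    # upper 3-dB frequency
wl = -alpha + np.sqrt(alpha**2 + wo**2)   # lower 3-dB frequency

def mag2(w):
    # squared amplitude response of the band-pass filter
    return (2 * alpha * w)**2 / ((2 * alpha * w)**2 + (w**2 - wo**2)**2)

print(np.isclose(mag2(wu), 0.5), np.isclose(mag2(wl), 0.5))  # True True
print(np.isclose(wu - wl, 2 * alpha))   # 3-dB bandwidth is 2*alpha -> True
print(np.isclose(wu * wl, wo**2))       # wo is their geometric mean -> True
```

Note that ωu ωl = ωo², so ωo is the geometric (not arithmetic) mean of the band edges, which is why they are not equidistant from ωo.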
The remaining interpretations of Q given in Table 12.1b have diagnostic value in the case of high Q (i.e., Q ≫ 1), irrespective of system details (for electrical, mechanical, or any type of system modeled by a second-order transfer function). To
understand the energy interpretation given in the second row of Table 12.1b, we first
note that the envelope of the instantaneous power |y(t)|2 for underdamped y(t) (see
Table 12.1a) decays with time-constant
τd = 1/(2α),
while
T = 2π/√(ωo² − α²) ≈ 2π/ωo

is the underdamped oscillation period. Hence,

Q = ωo/(2α) = 2π τd/T ≈ 2π × (τd |y(t)|e²/2) / (T |y(t)|e²/2),

where |y(t)|e ≡ Ae^{−αt} is the envelope function of y(t), T|y(t)|e²/2 in the denominator is the same as ∫_t^{t+T} |y(τ)|² dτ, the energy dissipated in one underdamped oscillation period T, and τd|y(t)|e²/2 in the numerator is equivalent to ∫_t^∞ |y(τ)|² dτ (see Exercise Problem 12.5), the stored energy in the system at time t (can you see why?). Hence, the energy interpretation of Q follows as indicated in Table 12.1b.
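The energy interpretation can be checked by brute-force integration of |y(t)|². A sketch (not from the text; numpy assumed, values chosen to match Example 12.1 below):

```python
import numpy as np

# Energy interpretation of Q: for lightly damped y(t) = A*exp(-alpha*t)*cos(wd*t),
# Q ~ 2*pi*(stored energy)/(energy dissipated in one oscillation period).
wo, alpha, A = 2 * np.pi, 0.05, 1.0      # T = 1 s, so Q = wo/(2*alpha) = 20*pi
wd = np.sqrt(wo**2 - alpha**2)           # underdamped oscillation frequency
t = np.linspace(0.0, 200.0, 2_000_001)   # long enough for exp(-2*alpha*t) -> 0
dt = t[1] - t[0]
y2 = (A * np.exp(-alpha * t) * np.cos(wd * t))**2

T = 2 * np.pi / wd                       # one oscillation period
stored = y2.sum() * dt                   # ~ integral of |y|^2 from 0 to infinity
lost = y2[t <= T].sum() * dt             # ~ energy dissipated in the first period
Q_est = 2 * np.pi * stored / lost
print(abs(Q_est - wo / (2 * alpha)) / (wo / (2 * alpha)) < 0.1)   # True
```

The estimate agrees with ωo/(2α) to within a few percent; the small discrepancy comes from the envelope decaying slightly within one period.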
Example 12.1
Suppose a 2nd-order system with an oscillation period of 1 s dissipates 1
J/s when its stored energy is 10 J. Approximate the damping coefficient α
of the system.
Solution Based on the energy interpretation of Q given above,

Q ≈ 2π × (10 J)/(1 J) = 20π ≈ 63.
Clearly, this is a high-Q system and therefore the oscillation frequency √(ωo² − α²) ≈ ωo = 2π/(1 s). Hence,

α = ωo/(2Q) = 2π/(2 × 20π) = 1/20 = 0.05 s⁻¹.
Example 12.2
A steel beam oscillates about 30 times before the oscillation amplitude is
reduced to a few percent of the initial amplitude. What is the system Q and
what is the damping rate if the oscillation period is 0.5 s?
Solution Q ≈ 30, and therefore

α = ωo/(2Q) ≈ (2π/0.5)/(2 × 30) ≈ 0.2 s⁻¹.
Before closing this section, it is worthwhile to point out that the limiting case
with Q → ∞ corresponds to a dissipation-free resonant circuit (a straightforward
application of the energy interpretation of Q). Thus, high-Q systems are considered
to be “near resonant.” In the following section we will see that such near-resonant
circuits are needed to build high-quality low-pass filters.
where Ω > 0 and n ≥ 1. Plots of |H(ω)|² for n = 1, 2, 3, and 6 and Ω = 1 rad/s are shown in Figure 12.9. Clearly, |H(ω)| above describes a filter with a 3-dB bandwidth Ω
Figure 12.9 (a) The square of the amplitude response curves of 1st-, 2nd-, 3rd-, and 6th-order Butterworth filters with a 3-dB bandwidth of Ω = 1 rad/s, and (b) decibel amplitude response plots of the same filters; labels n indicate the filter order.
(which equals 1 rad/s in Figure 12.9, where |H(1)| = 1/√2, i.e., −3 dB) and an amplitude response that becomes increasingly flat in the passband and steep beyond it as the order n increases.
The first step in designing and building Butterworth circuits is finding a stable and
realizable transfer function Ĥ (s) that leads to the Butterworth amplitude response
|H (ω)| given above. That in turn amounts to selecting appropriate pole locations,
once the number of poles to be used is decided.
Given any stable LTIC system Ĥ (s) with a frequency response H (ω) = Ĥ (j ω),
|H(ω)|² = 1/(1 + (ω/Ω)^{2n}) = 1/(1 + (jω/(jΩ))^{2n}) = Ĥ(jω)Ĥ(−jω),

so that

Ĥ(s)Ĥ(−s) = 1/(1 + (s/(jΩ))^{2n}).
Section 12.3 Low-Pass Butterworth Filter Design 439
This equation indicates that an nth-order Butterworth circuit must have a transfer
function Ĥ (s) with n characteristic poles because the product Ĥ (s)Ĥ (−s) has 2n
poles corresponding to the 2n solutions of
1 + (s/(jΩ))^{2n} = 0.
After determining the poles of Ĥ (s)Ĥ (−s) (by solving the equation above) we will
assign the half of them from the LHP2 to the Butterworth transfer function Ĥ (s). As
we will see in Example 12.3 below, once the characteristic poles of the Butterworth
Ĥ (s) are known, Ĥ (s) itself is easy to write down.
To determine the poles of Ĥ (s)Ĥ (−s) we rearrange the equation above as
(s/(jΩ))^{2n} = −1 = e^{jmπ},

where m is any odd integer ±1, ±3, ±5, · · ·. Taking the “2nth root” of both sides,

s = jΩ e^{jmπ/(2n)} = Ω∠(90° + 90°(m/n)) = Ω∠90°(1 + m/n).
For m positive and odd, m = 1, 3, · · · < 2n, this result gives the locations of n distinct LHP poles of Ĥ(s)Ĥ(−s). The remaining n poles of Ĥ(s)Ĥ(−s), obtained with m negative and odd, m = −1, −3, · · · > −2n, are all in the RHP. Furthermore, as
we will see below in Example 12.3, the complex LHP poles of Ĥ (s)Ĥ (−s) come in
conjugate pairs, as needed for Ĥ (s) with real-valued coefficients.
Thus, here is the main result:
Example 12.3
Determine the pole locations of a 3rd-order Butterworth filter with 3-dB bandwidth Ω = 10 rad/s. Also, determine Ĥ(s) so that the system DC gain is 1.
2 There are many ways of selecting n of the 2n poles of Ĥ(s)Ĥ(−s) for Ĥ(s). However, for Ĥ(s) to have real-valued coefficients, the selection must be made so that H(ω) = Ĥ(jω) is conjugate symmetric and, furthermore, the selection should include only LHP poles to assure stability.
Solution Since n = 3 and Ω = 10, the pole locations of the circuit transfer function are given by the formula

s = Ω∠90°(1 + m/n) = 10∠90°(1 + m/3),   m = 1, 3, 5 < 6.
Clearly, all three poles p1 = 10∠120◦ , p2 = 10∠180◦ , and p3 = 10∠240◦
have magnitudes 10 and are positioned in the LHP around a semicircle as
shown in Figure 12.10a. The angular separation between the neighboring poles on the semicircle is 180°/n = 180°/3 = 60°.
The transfer function with these poles is

Ĥ(s) = K × 1/(s − 10∠120°) × 1/(s − 10∠180°) × 1/(s − 10∠240°),

and the DC-gain condition

Ĥ(0) = H(0) = 1

requires

K = 10³ = Ω³.
Figure 12.10 The pole locations of (a) a 3rd-order Butterworth filter with 3-dB frequency Ω = 10 rad/s, and (b) a 6th-order Butterworth filter with 3-dB frequency Ω = 1 rad/s.
Example 12.4
How would the filter of Example 12.3 be implemented in a cascade config-
uration?
Solution The transfer function determined in Example 12.3, with K = 10³, can be expressed as

Ĥ(s) = 10/(s − 10∠120°) × 10/(s + 10) × 10/(s − 10∠−120°)
     = 10/(s + 10) × 10²/(s² + 10s + 10²).

Hence, the filter can be implemented by cascading a 1st-order system

Ĥ1(s) = 10/(s + 10)

(e.g., the active filter shown in Figure 12.5a) with a 2nd-order system

Ĥ2(s) = 10²/(s² + 10s + 10²)

(e.g., the active filter shown in Figure 12.5b).
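The pole formula of Examples 12.3 and 12.4 can be verified numerically. A sketch (not from the text; numpy assumed, and the `butter_poles` helper is hypothetical):

```python
import numpy as np

# Butterworth pole formula from the text: s = Omega * exp(j*90deg*(1 + m/n))
# for odd m = 1, 3, ..., < 2n, which picks the n LHP poles.
def butter_poles(n, Omega):
    m = np.arange(1, 2 * n, 2)               # odd m < 2n
    return Omega * np.exp(1j * (np.pi / 2) * (1 + m / n))

n, Omega = 3, 10.0
poles = butter_poles(n, Omega)
print(bool(np.all(poles.real < 0)))          # all poles in the LHP -> True

# With K = Omega^n, H(s) = Omega^n / prod(s - p_k); its magnitude squared
# should reproduce the Butterworth response 1/(1 + (w/Omega)^(2n)).
w = np.linspace(0.1, 30.0, 50)
H = Omega**n / np.prod(1j * w[:, None] - poles[None, :], axis=1)
print(np.allclose(np.abs(H)**2, 1 / (1 + (w / Omega)**(2 * n))))  # True
```

The pole angles come out as 120°, 180°, and 240°, matching Figure 12.10a.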
Example 12.5
How would the filter of Example 12.3 be implemented in a parallel config-
uration?
Solution Note that

Ĥ(s) = 10/(s + 10) × 10²/(s² + 10s + 10²) = 10/(s + 10) + (−10s)/(s² + 10s + 10²).

Hence, the system can be implemented with a parallel connection of a low-pass filter

Ĥ1(s) = 10/(s + 10)

and a band-pass filter

Ĥ2(s) = −10s/(s² + 10s + 10²).
The pole locations of a 6th-order Butterworth low-pass filter with 3-dB bandwidth Ω = 1 rad/s are shown in Figure 12.10b. Note that the poles are positioned on a
LHP semicircle with radius 1 and with 30◦ angular separation between neighboring
poles, which once again agrees with the angular spacing specified by the formula 180°/n. Furthermore, the marked pole locations are the only possible locations in the LHP for 6 poles with 180°/6 = 30° angular separations, magnitudes Ω = 1, and complex conjugate pairings. The 6th-order filter can be implemented by cascading three 2nd-order low-pass filters. The frequency response magnitudes for the individual second-order sections are shown in Figure 12.11a.
Figure 12.11 (a) The amplitude responses |HQ(ω)| of 2nd-order low-pass filters with a resonance frequency of ωo = 1 rad/s and Q = 1.93, 0.707, and 0.52, respectively, (b) the amplitude response |H(ω)| of a 6-pole Butterworth filter with a 3-dB bandwidth Ω = ωo = 1 rad/s obtained by cascading the three low-pass filters shown in (a), (c) a surface plot of the magnitude of the transfer function Ĥ(s) of the same 6-pole filter, and (d) the phase response ∠H(ω) of the same filter.
In general, the nth-order Butterworth transfer function with 3-dB bandwidth Ω can be written as

Ĥ(s) = ∏_{m=1,3,···}^{n−1} Ω² / (s² + 2Ω sin((π/2)(m/n)) s + Ω²)
for n even or
Ĥ(s) = [Ω/(s + Ω)] ∏_{m=1,3,···}^{n−2} Ω² / (s² + 2Ω sin((π/2)(m/n)) s + Ω²)
for n odd. These expressions were obtained by noting that each pair of conjugate
poles contribute to Ĥ(s) a 2nd-order component

Ω² / ((s − Ω∠θm)(s − Ω∠−θm)),

where θm = 90°(1 + m/n) for the odd values of m in the products above. Multiplying out the denominator of this expression gives the more compact forms above (see Exercise Problem 12.8).
Note that the 2nd-order cascade components above correspond to underdamped
systems with underdamped resonance frequency

ωo = Ω,

damping coefficients

α = Ω sin((π/2)(m/n)) < ωo,

and quality factors

Q = ωo/(2α) = 1/(2 sin((π/2)(m/n))) > 1/2.
Figure 12.11a shows the amplitude response curves of three 2nd-order cascade components (Q = 1.93, 0.707, and 0.52) of a 6th-order Butterworth filter with a 3-dB bandwidth Ω = 1 rad/s. The amplitude and phase response of the 6th-order—or the “6-pole”—filter are shown in Figures 12.11b and 12.11d, respectively.
Notice that amplitude response curves of the three 2nd-order low-pass filters
shown in Figure 12.11a are far from ideal (non-flat, rippled for higher Q, sluggish
decay for low Q). However, their product—which is the amplitude response of the 6th-order Butterworth filter obtained by cascading the three filters—is flat almost all the way up to the 3-dB bandwidth frequency 1 rad/s and drops reasonably fast beyond that.
Also, the accompanying phase response curve shown in Figure 12.11d is reasonably
linear up to the 3-dB frequency. Higher-order Butterworth filters will do even better
in all these respects.
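The section Q values quoted above follow directly from the formula Q = 1/(2 sin((π/2)(m/n))). A quick numerical sketch (not from the text; numpy assumed):

```python
import numpy as np

# Q factors of the 2nd-order sections of an nth-order Butterworth cascade:
# Q_m = 1/(2*sin((pi/2)*(m/n))) for odd m.
n = 6
m = np.arange(1, n, 2)                       # m = 1, 3, 5
Q = 1 / (2 * np.sin((np.pi / 2) * (m / n)))
print(np.round(Q, 3))                        # [1.932 0.707 0.518]
```

These are the three curves labeled in Figure 12.11a; the m = 1 section (closest to the jω-axis) has the highest Q.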
The Butterworth transfer function surface plot shown in Figure 12.11c clearly
indicates the locations and distributions of the six LHP system poles of Ĥ (s) (same
locations as in Figure 12.10b)—the poles are arranged around a semi-circle as expected.
The pole-pair closest to the ω-axis is the contribution of the highest-Q subcomponent
of the cascade; the same poles also are responsible for the peaks in the amplitude
response curve of the highest-Q subsystem shown in Figure 12.11a. By contrast, the
pole-pair furthest away from the ω-axis (and closest to the σ -axis) is the contribution
of the lowest-Q subcomponent of the system with the narrowest frequency response
curve shown in Figure 12.11a.
Example 12.6
A 5th-order (or a 5-pole) Butterworth filter with a 3-dB frequency of 5 kHz
needs to be built using a cascade configuration. Choose the appropriate
capacitor and resistance values C1 , C2 , R1 , R2 , R3 , and R4 for the highest-
Q subcomponent of the cascade assuming that the 2nd-order active filter
shown in Figure 12.5b will be used.
Solution A 5 kHz 3-dB frequency means that we have

Ω = 2π(5000) = 10⁴π rad/s.

The highest-Q subcomponent of the cascade (m = 1) has

Q = 1/(2 sin((π/2)(1/5))) = 1.618

and

α = ωo/(2Q) = 10⁴π/3.236 ≈ 9708.1.
Since ωo = Ω, the circuit parameters must satisfy

K = 1 + R1/R2 = 1,

ωo = 1/√(R3 R4 C1 C2) = 10⁴π,

and

α = 1/(2R3 C1) + 1/(2R4 C1) + (1 − K)/(2R4 C2) = 9708.1.

With K = 1, the last constraint reduces to

1/(2R3 C1) + 1/(2R4 C1) = 9708.1.
Choosing R3 = R4 = 1 kΩ, this last equation gives

C1 = 1/(1000 × 9708.1) = 1.03 × 10⁻⁷ F ≈ 103 nF.

Substituting R3 = R4 = 1 kΩ and C1 = 1.03 × 10⁻⁷ F into the ωo constraint equation, we find

C2 = 1/((10⁴π)² × 10³ × 10³ × 1.03 × 10⁻⁷) = 9.84 × 10⁻⁹ F = 9.84 nF.

In summary, R3 = R4 = 1 kΩ, C1 ≈ 100 nF, C2 ≈ 10 nF, R1 = 0, and R2 = ∞ give one possible solution of the filter design problem. Other values for C1 and C2 also can be chosen by starting with different values of R3 and R4.
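The component choice can be sanity-checked by substituting back into the ωo and α expressions. A sketch (not from the text; numpy assumed):

```python
import numpy as np

# Check the Sallen-Key component values of Example 12.6.
R3 = R4 = 1e3                                # 1 kOhm
C1, C2 = 1.03e-7, 9.84e-9                    # ~103 nF and ~9.84 nF
K = 1.0                                      # R1 = 0, R2 = infinite

wo = 1 / np.sqrt(R3 * R4 * C1 * C2)
alpha = 1 / (2 * R3 * C1) + 1 / (2 * R4 * C1) + (1 - K) / (2 * R4 * C2)

print(np.isclose(wo, 1e4 * np.pi, rtol=0.01))    # ~ 10^4*pi rad/s -> True
print(np.isclose(alpha, 9708.1, rtol=0.01))      # True
```

Both targets are met to within 1%, which is well inside typical component tolerances.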
The Butterworth low-pass amplitude response

|H(ω)| = 1/√(1 + (ω/Ω)^{2n})

also serves as a prototype for other filter types. A high-pass filter with amplitude response

1/√(1 + (ωh/ω)^{2n})

and cutoff frequency ωh can be produced from a low-pass transfer function with cutoff Ω by replacing s in the low-pass transfer function with

Ωωh/s.
Similarly, a band-pass filter with lower passband cutoff frequency ωl and upper pass-
band cutoff frequency ωu can be produced from a low-pass transfer function with
cutoff Ω by replacing s in the low-pass transfer function with

Ω(s² + ωl ωu) / (s(ωu − ωl)).
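The substitution can be tried numerically on a 1st-order low-pass prototype. In the sketch below (not from the text; numpy assumed, and the leading Ω factor in the substitution is an assumption of this reconstruction), the transformed filter peaks at the geometric center frequency and is 3 dB down at ωl and ωu:

```python
import numpy as np

# Apply the low-pass -> band-pass substitution
# s -> Omega*(s^2 + wl*wu)/(s*(wu - wl)) to H_lp(s) = Omega/(s + Omega).
Omega = 1.0
wl, wu = 2 * np.pi * 2000, 2 * np.pi * 3000

def H_bp(s):
    s_lp = Omega * (s**2 + wl * wu) / (s * (wu - wl))
    return Omega / (s_lp + Omega)

w0 = np.sqrt(wl * wu)                                   # center frequency
print(np.isclose(abs(H_bp(1j * w0)), 1.0))              # unity at center -> True
print(np.isclose(abs(H_bp(1j * wl)), 1 / np.sqrt(2)))   # 3-dB edge -> True
print(np.isclose(abs(H_bp(1j * wu)), 1 / np.sqrt(2)))   # 3-dB edge -> True
```

At s = j√(ωlωu) the substituted variable vanishes, so the band-pass filter inherits the low-pass DC gain at its center frequency.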
Figure 12.12 Amplitude and phase response curves for (a) a 6th-order Butterworth low-pass filter with Ω = 1, and (b) a band-pass filter with ωl/2π = 2 kHz and ωu/2π = 3 kHz derived from the same low-pass filter.
EXERCISES
12.1 Derive the transfer function Ĥa (s) of the 1st-order active filter circuit depicted
in Figure 12.5a.
12.2 Derive the transfer function Ĥb (s) of the Sallen-Key circuit depicted in
Figure 12.5b.
12.3 Given the following transfer functions determine whether the system is
underdamped, overdamped, or critically damped, and calculate the quality
factor Q:
(a) Ĥ1(s) = s/(s² + 4s + 400),

(c) Ĥ3(s) = (s² + 100)/(s² + 20000s + 10⁶).
12.4 Identify each of the three systems defined in Problem 12.3 as low-pass,
band-pass, or high-pass, and for the band-pass system determine the 3-dB
bandwidth Ω.
12.5 Given that y(t) is an underdamped zero-input response of the form given in
the first row of Table 12.1a, show that
∫_t^∞ |y(τ)|² dτ ≈ τd |y(t)|e²/2,

where τd = 1/(2α) and |y(t)|e = Ae^{−αt} is the envelope of the underdamped y(t), and explain why ∫_t^∞ |y(τ)|² dτ can be interpreted as the stored energy of the system at instant t. In performing the integral, make use of ωo ≫ α to handle an integral with an oscillatory integrand.
12.6 The zero-input response of a 2nd-order band-pass filter is observed to oscil-
late about sixty cycles before the oscillation amplitude is reduced to a few
percent of its initial amplitude. Furthermore, the oscillation period of the
zero-input response is measured as 1 ms. Assuming that the maximum ampli-
tude response of the system is 1, write an approximate expression for the
frequency response H (ω) = Ĥ (j ω) of the system.
12.7 Determine the frequency response, 3-dB bandwidth, and quality factor Q
of the following parallel RLC bandpass filter circuit, below, in terms of
resistance R:
[Circuit: source f(t) driving a parallel combination of a 1 H inductor, a 1 F capacitor, and a resistor R; the output y(t) is the voltage across the parallel combination.]
12.9 Determine the pole locations of a 4th-order low-pass Butterworth filter with
a 3-dB frequency of 1 kHz.
12.10 What is the transfer function of the highest-Q subcomponent of the 4th-order Butterworth filter described in Problem 12.9?
12.11 Assuming R3 = R4 = 4 kΩ, determine the capacitance values C1 and C2
for the 2nd-order circuit of Figure 12.5b to implement the transfer function
of Problem 12.10.
12.12 Approximate the time delay of the filter described in Problem 12.10 by
calculating the slope of the phase response curve of the filter at ω = 0.
(a, b) + (c, d) = (a + c, b + d)

and

(a, b)(c, d) = (ac − bd, ad + bc),

respectively. Furthermore, the complex number (a, 0) and real number a are defined
to be the same; that is,
(a, 0) = a.
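The pair-form rules can be implemented directly as operations on number pairs. A minimal sketch (not from the text; the helper names `cadd` and `cmul` are hypothetical, and the multiplication rule (ac − bd, ad + bc) is the standard definition of complex multiplication):

```python
# Pair-form complex arithmetic, following the definitions in the text.
def cadd(x, y):
    # (a,b) + (c,d) = (a+c, b+d)
    return (x[0] + y[0], x[1] + y[1])

def cmul(x, y):
    # (a,b)(c,d) = (ac - bd, ad + bc)
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

print(cadd((1, 1), (-1, 1)))   # (0, 2)
print(cmul((1, 1), (-1, 1)))   # (-2, 0)
```

These two results reappear later in this appendix as X + Y = j2 and XY = −2 for X = 1 + j1 and Y = −1 + j1.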
Section A.1 Complex Numbers as Real Number Pairs 451
(1) Complex addition and multiplication are compatible with real addition and
multiplication; for complex numbers (a, 0) and (c, 0), which also happen to be
the reals a and c,
(a, 0) + (c, 0) = (a + c, 0) = a + c

and

(a, 0)(c, 0) = (ac, 0) = ac.

(2) The multiplication rule applied to the number

j ≡ (0, 1)

gives

j² = (0, 1)(0, 1) = (−1, 0) = −1.
We refer to j as the imaginary unit,2 since there exists no real number whose square equals −1. However, within the complex number system, j = (0, 1) is no less real or no more imaginary than, say, (1, 0) = 1. Furthermore, because in the complex number system j² = (0, 1)(0, 1) = −1 and (−j)² = (0, −1)(0, −1) = −1, the square root of −1 exists and equals (0, ±1) = ±j. Because of this, we say j = √−1.
Example A.1
Notice that, as a consequence of j 2 = −1, it follows that
1/j = (1 × j)/(j × j) = j/(−1) = −j.
2 In math and physics books, the imaginary unit usually is denoted as i. We prefer j in electrical engineering, because i is often a current variable.
452 Appendix A Complex Numbers and Functions
Given the real numbers a and b, and the imaginary number j ≡ (0, 1),

a + jb = (a, 0) + (0, 1)(b, 0) = (a, 0) + (0, b) = (a, b).

Thus, a + jb and (a, b) are simply two different ways of expressing the same complex
number.
In the next section, we will explain how a + j b = (a, b) can be plotted in the
two-dimensional plane, where a is the horizontal coordinate and b is the vertical
coordinate. Because of this ability to directly map a + j b = (a, b) onto the Cartesian
plane, both a + j b and (a, b) are called rectangular forms of a complex number.
To distinguish these two slightly different representations, however, we will refer to
(a, b) as the pair form and reserve the term rectangular form for a + j b. The value a
is called the real part of the complex number, whereas the value b is referred to as the
imaginary part. This is dreadful terminology, because both a and b are real numbers.
We are stuck with these descriptors, though, because they were adopted long ago and
are in widespread use. Remember that the imaginary part of a complex number is real
valued! The imaginary part of a + j b is b, not j b.
When expressed in rectangular form, complex numbers add and multiply like
algebraic expressions in the variable j . This is the main advantage in using the j
notation. For example, the product of
X = (1, 1) = 1 + j
and
Y = (−1, 1) = −1 + j 1
is, simply,
XY = (1 + j1)(−1 + j1) = −1 + j1 − j1 + j² = −2,
since j² = −1. The result is the same as that obtained with the original multiplication rule (which is hard to remember):

(1, 1)(−1, 1) = (1 × (−1) − 1 × 1, 1 × 1 + 1 × (−1)) = (−2, 0) = −2.

Likewise, the sum

X + Y = (1 + j1) + (−1 + j1) = 0 + j2 = j2,

in conformity with

(1, 1) + (−1, 1) = (0, 2).
All complex arithmetic, including subtraction and division, can be carried out by
treating complex numbers as algebraic expressions in j . For example,
X − Y = (1 + j ) − (−1 + j ) = 2 + j 0 = 2.
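Python's built-in complex type uses the same j notation as the text, so these manipulations can be checked directly at an interpreter prompt:

```python
# Complex arithmetic with Python's native complex literals.
X = 1 + 1j
Y = -1 + 1j

print(X * Y)          # (-2+0j)  -- matches XY = -2 above
print(X + Y)          # 2j       -- matches X + Y = j2
print(X - Y)          # (2+0j)   -- matches X - Y = 2
print((3 + 4j).imag)  # 4.0      -- the imaginary part is real valued
```

Note that `.imag` returns the real number b, in agreement with the terminology warning above.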
Example A.2
Given P = (3, 6) and Q = 2 + j , determine R = P Q and express the
product in pair form.
Solution

R = PQ = (3 + j6)(2 + j1) = 6 + j3 + j12 + j²6 = 0 + j15 = (0, 15).
Example A.3
Given P = (3, 6), Q = 2 + j, and R = PQ, determine (P − Q)/R.

Solution Since R = j15, as determined in Example A.2,

(P − Q)/R = ((3 + j6) − (2 + j))/(j15) = (1 + j5)/(j15) = ((1 + j5)j)/((j15)j)
          = (−5 + j1)/(−15) = 1/3 − j(1/15).
Example A.4
Use the quadratic equation3 to determine the roots of the polynomial
x 2 + x + 1.
3 If ax² + bx + c = 0, then

x = (−b ± √(b² − 4ac))/(2a).
Solution The roots are the solutions of

x² + x + 1 = 0.

Using the quadratic equation, we find that the roots are given by

x = (−1 ± √(1 − 4))/2 = (−1 ± √−3)/2.
However, since √−3 is not a real number, x² + x + 1 has no real roots; in other words, there is no real number x for which the statement x² + x + 1 = 0 is true. That is, if we plot f(x) = x² + x + 1 versus x, where x is a customary real variable, then f(x) does not cross through the x-axis.

But if we allow the variable x to take complex values, then x² + x + 1 = 0 is true for

x = −1/2 ± j(√3/2),

since the two distinct roots of the number −3 are ±j√3. Therefore, if x is a complex variable, the roots of the polynomial x² + x + 1 are −1/2 ± j(√3/2).
While the pair form of complex numbers has conceptual importance, this form
is cumbersome when we are performing calculations by hand. In practice, we usually
will rely on rectangular form (with j notation) as well as the exponential and polar
forms discussed next.
A complex number
C = (a, b) = a + j b
can be envisioned as a point (a, b) on the 2-D Cartesian plane where a is the coordinate
on the horizontal axis and b is the coordinate on the vertical axis. Since the reals a and b
are called the real and imaginary parts of the complex number C = (a, b) = a + j b,
we use the notation
a = Re{C}
and
b = Im{C}.
Section A.3 Complex Plane, Polar and Exponential Forms 455
We label the horizontal and vertical axes of the 2D plane as Re and Im, respectively,
as shown in Figure A.1a, where we have plotted several different complex numbers.
When plotting complex numbers in this fashion, we call the 2-D plane the complex plane.
Clearly, (a, b) and a + jb are equivalent ways of referencing the same point C on the complex plane. As illustrated in Figure A.1b, point C also can be referenced by another pair of numbers: its distance |C| from the origin, which is the length of the dashed line connecting the origin to point C; and the angle ∠C between the positive real axis and a straight line path from the origin to the point C. Using simple trigonometry, we have

|C| = √(a² + b²)

and

∠C = tan⁻¹(b/a),

which are said to be the magnitude |C| and angle ∠C of

C = (a, b) = a + jb,

respectively.
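The magnitude and angle of a complex number can be computed with the standard library's `cmath` module, which returns exactly this (|C|, ∠C) pair:

```python
import cmath

# Magnitude and angle of C = a + jb via the standard library.
C = 1 + 1j
mag, ang = cmath.polar(C)                  # (|C|, angle in radians)
print(round(mag, 4), round(ang, 4))        # 1.4142 0.7854  (sqrt(2) at 45 degrees)
```

`cmath.polar` uses `atan2(b, a)` internally, so it already handles the quadrant issue discussed next.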
The formula given for ∠C assumes a ≥ 0; otherwise, ±180° needs to be added to the value of tan⁻¹(b/a) provided by your calculator. For instance, the angle of the
Figure A.1 The complex plane showing (a) the locations of complex numbers 1 + j1, −1 + j1, and 2 − j2, and (b) an arbitrary complex number C = (a, b) = a + jb with magnitude |C| and angle ∠C = θ.
complex number
Y = −1 + j 1
shown in Figure A.2 is not −45°, as tan⁻¹(b/a) would indicate, but rather, −45° ± 180°,
as can be graphically confirmed. One version of ∠(−1 + j ),
−45° + 180° = 135°,
corresponds to counterclockwise rotation starting from the Re axis, while the second
version,
−45° − 180° = −225°,

corresponds to clockwise rotation.
For X = 1 + j1, |X| = √2 and ∠X = 45°, while for Z = 2 − j2,

|Z| = 2√2 and ∠Z = −45°.
Because complex numbers can be represented by their magnitudes and angles in the two-dimensional plane, we call this polar-form representation. We write a complex number C in polar form as
Figure A.2 Magnitudes and angles of complex numbers 1 + j1, −1 + j1, and 2 − j2 are indicated by dashed lines and arcs, respectively.
|C|∠C.

For example, Z = 2 − j2 has the polar form

Z = 2√2∠−45°.
Now, as will be shown in Section A.5, there is a remarkably useful mathematical relation called Euler’s identity, which states that

e^{jθ} = cos θ + j sin θ.

Using Euler’s identity, a complex number C with magnitude |C| and angle θ = ∠C can be written in the exponential form

C = |C|e^{jθ}.
XY = (1 + j1)(−1 + j1) = −1 + j1 − j1 + j² = −2.

Note that

2e^{j180°} = −2,

because

e^{j180°} = cos 180° + j sin 180° = −1.

Similarly,

XZ = (√2 e^{j45°})(2√2 e^{−j45°}) = √2 × 2√2 × e^{j(45°−45°)} = 4e^{j0°} = 4.
Example A.5
Convert the following complex numbers from exponential to rectangular
form:
C1 = 7e^{j25°},  C2 = 7e^{−j25°},  C3 = 2e^{j160°},  C4 = 2e^{−j160°}.
You should roughly sketch the locations of these points on the complex
plane and check that these exponential-to-rectangular conversions seem
correct.
Example A.6
Convert
C1 = 3 + j 4, C2 = −3 + j 4,
C3 = 3 − j 4, C4 = −3 − j 4
to exponential form.
Solution To write C = a + jb in the form |C|e^{jθ}, use

|C| = √(a² + b²)

and

θ = ∠C = tan⁻¹(b/a).

We have |C1| = √(3² + 4²) = 5 and

∠C1 = tan⁻¹(4/3) = 53.13°.
For C2 we seemingly have

∠C2 = tan⁻¹(4/(−3)) = −53.13°.
However, the inverse tangent is multivalued, and your calculator gives only
one of the two values. If a > 0, your calculator provides the correct value.
As indicated earlier, if a < 0 then you must add or subtract 180◦ from the
calculated value to give the correct angle. Thus,
∠C2 = −53.13° + 180° = 126.87°.
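The two-argument arctangent resolves this quadrant ambiguity automatically, which is why numerical libraries prefer it over tan⁻¹(b/a):

```python
import math

# atan2(b, a) resolves the quadrant automatically, unlike atan(b/a).
print(round(math.degrees(math.atan2(4, 3)), 2))    # 53.13  for C1 = 3 + j4
print(round(math.degrees(math.atan2(4, -3)), 2))   # 126.87 for C2 = -3 + j4
```

The second call returns 126.87° directly, with no manual ±180° correction.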
Recall that exponential form was derived from polar form. So, for example, we can express Y = −1 + j1 = √2 e^{j3π/4} as √2∠135°. Since both exponential and polar forms express Y in terms of magnitude |Y| = √2 and angle ∠Y = 3π/4 rad = 135°, their distinction is mainly cosmetic.
Example A.7
What is the magnitude of the product P of complex numbers U = 1 + j2 and V = 3e^{jπ/2}?

Solution Clearly,

|P| = |U||V| = √(1² + 2²) × 3 = 3√5.
Example A.8
Given that A = 2∠45° and B = 3e^{−jπ/2}, determine AB and A/B.

Solution

AB = (2e^{jπ/4})(3e^{−jπ/2}) = 6e^{j(π/4 − π/2)} = 6e^{−jπ/4}.
Section A.4 More on Complex Conjugate 461
Also,

A/B = (2e^{jπ/4})/(3e^{−jπ/2}) = (2/3)e^{j(π/4 + π/2)} = (2/3)e^{j3π/4}.
Example A.9
Express AB and A/B from Example A.8 in rectangular form.
Solution Using Euler’s identity, we first note that

e^{−jπ/4} = e^{j(−π/4)} = cos(−π/4) + j sin(−π/4) = 1/√2 − j(1/√2),

which also can be seen visually on the complex plane. Hence,

AB = 6e^{−jπ/4} = 6(1/√2 − j(1/√2)) = 3√2 − j3√2.
Likewise,

e^{j3π/4} = cos(3π/4) + j sin(3π/4) = −√2/2 + j(√2/2),

and so

A/B = (2/3)e^{j3π/4} = (2/3)(−√2/2 + j(√2/2)) = −√2/3 + j(√2/3).
The last two equalities can be verified by using Euler’s formula. Try it!
In the rectangular and exponential forms, we obtain the conjugate by changing
j to −j , whereas in polar form the algebraic sign of the angle is reversed. Thus, for
X = 1 − j 1, we have
X ∗ = 1 − (−j )1 = 1 + j 1;
whereas, for Y = e^{jπ/6},

Y* = e^{−jπ/6};

and for Z = 2∠−π/4,

Z* = 2∠(π/4).
These same changes in sign work even for more complicated expressions, such as the pair

Q = (j3e^{−jπ/4})/((1 + j2) 2^{(1+j)}) × 5∠30°

and

Q* = (−j3e^{jπ/4})/((1 − j2) 2^{(1−j)}) × 5∠−30°.
The product of

C = a + jb

and

C* = a − jb

is

CC* = |C|²,

since, using the exponential form, we have CC* = (|C|e^{jθ})(|C|e^{−jθ}) = |C|² (which, in turn, equals a² + b²). Thus, CC* is always real.
The absolute value |−2| of the real number −2 is 2. The absolute value |1 + j1| of the complex number 1 + j1 = √2 e^{jπ/4} is its magnitude √2, that is, its distance from the origin of the complex plane. The absolute values of −1 − j1 and 1 − j1 also are √2. Since, for an arbitrary C, CC* = |C|², it follows that |C| = √(CC*) (the positive root, only).
Taking the sum of C = a + jb and C* = a − jb, we get

C + C* = 2a = 2Re{C},

yielding

Re{C} = (C + C*)/2.

The difference gives

C − C* = j2b = j2Im{C},

implying that

Im{C} = (C − C*)/(j2).

For example,

(1 − j2)/(j2) + (1 + j2)/(−j2) = 2Re{(1 − j2)/(j2)} = Re{−j(1 − j2)} = Re{−2 − j} = −2.

Section A.5 Euler’s Identity 463
Recall the Taylor series expansion⁴
e^x = 1 + x + x²/2 + x³/3! + x⁴/4! + ··· + xⁿ/n! + ···
of the exponential function. For a complex number C, the exponential e^C is defined
by the same series:
e^C ≡ 1 + C + C²/2 + C³/3! + C⁴/4! + ··· + Cⁿ/n! + ···.
Substituting C = jφ and grouping the real and imaginary terms, we obtain
e^{jφ} = (1 − φ²/2 + φ⁴/4! − ···) + j(φ − φ³/3! + φ⁵/5! − ···) = cos φ + j sin φ,
since
1 − φ²/2 + φ⁴/4! − ···
and
φ − φ³/3! + φ⁵/5! − ···

⁴The Taylor series of e^x, about x = 0, is obtained as Σ_{n=0}^{∞} (dⁿeˣ/dxⁿ)|_{x=0} xⁿ/n!. Note that
dⁿeˣ/dxⁿ = eˣ for any n, leading to the series quoted above. The series converges to eˣ for all x.
are the series expansions of cos φ and sin φ, respectively. Therefore, for real φ,
e^{jφ} = cos φ + j sin φ.
The last statement is Euler's identity, which we introduced without proof in Section A.3.
Throughout this textbook, we make use of both Euler's identity and its conjugate,
e^{−jφ} = cos φ − j sin φ; in particular,
Re{e^{jφ}} = cos φ.
Combining the identity with its conjugate gives
cos φ = (e^{jφ} + e^{−jφ})/2
and
sin φ = (e^{jφ} − e^{−jφ})/(j2).
These formulas will be used often; so they, along with Euler's formula, should be
committed to memory.
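The series derivation above can be spot-checked numerically. The following Python sketch (not part of the text) sums the first terms of the defining series for e^C at C = jφ and compares the result with cos φ + j sin φ and with the cosine formula just given.

```python
import math

def exp_series(c, terms=30):
    # partial sum of e^c = Σ c^n / n!, valid for complex c
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= c / (n + 1)
    return total

phi = 1.2
lhs = exp_series(1j * phi)                    # series definition of e^{jφ}
euler = math.cos(phi) + 1j * math.sin(phi)    # Euler's identity
print(abs(lhs - euler))                       # essentially zero

# cos φ = (e^{jφ} + e^{−jφ})/2
cos_from_exp = (exp_series(1j * phi) + exp_series(-1j * phi)) / 2
print(abs(cos_from_exp - math.cos(phi)))      # essentially zero
```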
Example A.10
Given that
4(e^{j3} + e^{−j3}) = A cos(χ),
determine A and χ.
Solution  Using the identity
cos φ = (e^{jφ} + e^{−jφ})/2,
we have
4(e^{j3} + e^{−j3}) = (A/2)(e^{jχ} + e^{−jχ}).
Therefore, A = 8 and χ = 3.
Example A.11
Express the function (e^{j5t} − e^{−j5t})/2 in terms of a sine function.
Solution  Using the identity
sin φ = (e^{jφ} − e^{−jφ})/(j2),
we find
(e^{j5t} − e^{−j5t})/2 = j (e^{j5t} − e^{−j5t})/(j2) = j sin(5t).
A.6 Complex-Valued Functions⁵
A function f(t) is said to be real valued if, at each instant t, its numerical value is a
real number. For example,
f(t) = cos(2πt)
is real valued. By contrast, complex-valued functions take on values that are complex
numbers. For example,
cos(ωt) = Re{e^{jωt}}

⁵This section can be studied after Chapter 4, in preparation for Chapter 5.
and
sin(ωt) = Im{e^{jωt}}.
In ordered-pair form,
e^{jωt} = (cos(ωt), sin(ωt)).
Complex-valued functions arise naturally in system models such as
e^{jωt} −→ Diff −→ jωe^{jωt},
where the function on the left is the system input and the function on the right is
the system output. This is a concise representation of a much more complicated real-
world situation, which can be understood by expressing the input and output functions
in pair form (with the help of Euler's identity). A second example is
e^{jωt} −→ Lowpass −→ (1/(1 + jω)) e^{jωt}.
Models of systems used for analysis and design purposes can be constructed in terms of
complex-valued functions whenever it is advantageous to do so. The advantages are made
abundantly clear in Chapter 4 and later chapters.
Example A.12
Given a linear filter circuit described by
e^{jωt} −→ Filter −→ (1/(1 + jω)) e^{jωt},
determine the response of the filter to the input f(t) = cos(2t).
Solution  With ω = 2, the given description indicates that
e^{j2t} −→ Filter −→ (1/(1 + j2)) e^{j2t}.
Since
e^{j2t} = (cos(2t), sin(2t))
and
e^{j2t}/(1 + j2) = e^{j2t}/(√5 e^{j tan⁻¹2}) = e^{j(2t − tan⁻¹2)}/√5
                 = (1/√5)(cos(2t − tan⁻¹2), sin(2t − tan⁻¹2)),
matching the real parts of the input and output pairs gives
cos(2t) −→ Filter −→ (1/√5) cos(2t − tan⁻¹2).
Thus, the response to the input
f(t) = cos(2t)
is
y(t) = (1/√5) cos(2t − tan⁻¹2).
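A numerical cross-check of Example A.12 (an illustrative sketch, not part of the text): the phasor gain 1/(1 + jω) at ω = 2 has the magnitude and angle used above, and Re{H e^{jωt}} matches the closed-form output y(t).

```python
import cmath
import math

omega = 2.0
H = 1 / (1 + 1j * omega)        # filter gain at ω = 2

print(abs(H))                   # 1/√5 ≈ 0.4472
print(cmath.phase(H))           # −tan⁻¹(2) ≈ −1.1071

t = 0.3                         # an arbitrary sample instant
y_formula = (1 / math.sqrt(5)) * math.cos(2 * t - math.atan(2))
y_phasor = (H * cmath.exp(1j * omega * t)).real   # Re{H e^{jωt}}
print(abs(y_formula - y_phasor))                  # essentially zero
```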
A.7 Functions of Complex Variables⁶
Let us return to Example A.3 from Section A.2. In that example, we were asked to
determine the roots of the polynomial x² + x + 1. That is, we were asked to find the
values of x for which
x² + x + 1 = 0.
Figure A.3a plots, versus a real variable x, the function
f(x) = x² + x + 1.
Clearly, there is no value of x for which this function passes through zero. But, isn't
this function, which is a second-order polynomial, required to have two roots? The
answer is no, not if x is a real variable!
An nth-order polynomial is guaranteed to have n roots only if the polynomial is
a function of a complex variable.⁷ A complex variable x is an ordered pair of real
variables—say, xr and xi—much like a complex number C is an ordered pair of real
numbers (a, b). We can define
x ≡ (xr, xi),
where xr and xi are called the real and imaginary parts of x, respectively. Alternatively,
we can write x = xr + jxi.
[Figure A.3 appears here.]
Figure A.3 (a) Plot of function f(x) = x² + x + 1 of the real variable x, and (b)
surface plot of squared magnitude of function f(x) of the complex variable
x = (xr, xi) = xr + jxi over the complex plane.

⁶This section can be studied after Chapter 10, in preparation for Chapter 11.
⁷In high school you probably worked with functions of only a real variable. However, in cases where
polynomials were factored and discovered to have complex roots, it was implicitly assumed that the variable
was complex (whether you were told that or not!).
Because this function depends on two real-valued variables, xr and xi , we can contem-
plate plotting it in 3-D, as a surface over the xr –xi plane. There is one catch, though:
f (x) itself is complex valued. That is, the value of f (x) for a given x is generally a
complex number. For example, if xr = 0 and xi = 1, then f (x) = (j )2 + j + 1 = j .
How do we plot f (x) if it is complex valued?
The answer is that we must make two plots. Our first plot, or sketch, can be the
real part of f (x) as a 3-D surface over the xr –xi plane, which also happens to be the
complex plane. The second sketch can be the surface plot of the imaginary part of
f (x). These two sketches together would fully describe f (x). Alternatively, we could
sketch the magnitude of f (x) and the angle of f (x), both as surfaces over the xr –xi
plane, which also would fully describe f (x). Let’s go ahead and calculate, and then
plot, the square of the magnitude of f (x), and check whether there are any values
of x for which the squared magnitude hits zero (in which case f (x) itself must, of
course, be zero).
The squared magnitude of f(x) is the square of the real part of f(x) plus the
square of the imaginary part of f(x). We have
f(x) = (xr + jxi)² + (xr + jxi) + 1.
So,
Re{f(x)} = xr² − xi² + xr + 1
and
Im{f(x)} = xi(2xr + 1),
giving
|f(x)|² = Re²{f(x)} + Im²{f(x)} = (xr² − xi² + xr + 1)² + xi²(2xr + 1)².
A 3-D surface plot of the squared magnitude |f(x)|² is shown in Figure A.3b. It
appears that this plot may hit zero at two locations, but it is difficult to see precisely. An
examination of the preceding expression for |f (x)|2 can help. This quantity can be
zero only if both terms are zero. The second term, xi2 (2xr + 1)2 , can be zero only if
either xi = 0 or xr = −1/2. With the first choice, it is impossible for xr2 − xi2 + xr + 1
to be zero (remember, xr is a real variable). However, if
xr = −1/2,
then
xr2 − xi2 + xr + 1 = 0
470 Appendix A Complex Numbers and Functions
when
xi = ±√3/2.
Thus, the squared magnitude of f(x), which is the plot shown in Figure A.3b, hits
zero when
x = (xr, xi) = (−1/2, ±√3/2).
This agrees with the result we obtained in Section A.2 by using the quadratic formula.
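The root-finding argument above is easy to verify numerically. This Python sketch (not part of the text) evaluates the expression for |f(x)|² at the two roots and at one non-root point.

```python
import math

def f_sq(xr, xi):
    # |f(x)|² = (xr² − xi² + xr + 1)² + xi²(2xr + 1)²
    return (xr**2 - xi**2 + xr + 1) ** 2 + (xi * (2 * xr + 1)) ** 2

print(f_sq(-0.5, math.sqrt(3) / 2))    # ≈ 0: a root of x² + x + 1
print(f_sq(-0.5, -math.sqrt(3) / 2))   # ≈ 0: the conjugate root
print(f_sq(0.0, 1.0))                  # 1: f(j) = j, and |j|² = 1
```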
Plots such as the one in Figure A.3b help us visualize functions of complex
variables. The key is that a complex variable is a pair of real variables, and so plots
of this sort always must be made over the 2-D complex plane. In Chapter 10 we
introduce the Laplace transform, which is a function of a complex variable. You
may have seen the Laplace transform used as an algebraic tool to assist in solving
differential equations. In this course, we will need to have a deeper understanding of
the functional behavior of the Laplace transform and we will sometimes want to plot
either the magnitude or squared magnitude, using ideas similar to those expressed
here. Doing so will give us insight into the frequency response and stability of signal
processing circuits.
B
Labs
Lab 1: RC-Circuits
Lab 2: Op-Amps
Lab 3: Frequency Response and Fourier Series
Lab 4: Fourier Transform and AM Radio
Lab 5: Sampling, Reconstruction, and Software Radio
Lab 1: RC-Circuits
Over the course of five laboratory sessions you will build a working AM radio receiver
that operates on the same principles as commercially available systems. The receiver
will consist of relatively simple subsystems examined and discussed in class. We will
build the receiver up slowly from its component subsystems, mastering each as it is
added.
In Lab 1 you will begin your AM receiver project with a study of RC circuits.
Although RC circuits, consisting of resistors R and capacitors C, are simple, they
can perform many functions within a receiver circuit. They are often used as audio
filters (e.g., the circuitry behind the “bass” and “treble” knobs) and, with the inclusion
of diodes, as envelope detectors, as you will see later in this lab. Lab 1 starts with an
exercise that will familiarize you with sources and measuring instruments to be used
in the lab, and continues with a study of characteristics of capacitors and steady-state
and transient behavior in RC circuits. Then you convert an RC filter circuit into an
envelope detector and test it using a synthetic AM signal.
1 Prelab
Prelab exercises are meant to alert you to topics to be covered in each lab session.
Make sure to complete them before coming to lab since their solutions often will be
essential for understanding/explaining the results of your lab measurements.
(1) For the circuit of Figure 1, calculate the following:
(a) The RC time constant.
(b) The voltage v(t) across the capacitor 1 ms after the switch is closed,
assuming the capacitor is initially uncharged—express in terms of Vs .
(c) The initial current i(0+ ) that will flow in the circuit.
(2) Suppose you are given an unknown capacitor. Describe an experimental tech-
nique that you could use to determine its value.
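The Prelab item (1) numbers can be sketched as follows (Python used here only as a calculator; the source amplitude Vs is kept symbolic by working with the fraction of Vs):

```python
import math

R = 2e3        # 2 kΩ
C = 0.1e-6     # 0.1 μF
tau = R * C    # (a) RC time constant in seconds
print(tau)     # 2e-4 s, i.e. 0.2 ms

# (b) charging from rest: v(t) = Vs(1 − e^{−t/RC})
t = 1e-3       # 1 ms after the switch closes
fraction_of_Vs = 1 - math.exp(-t / tau)
print(fraction_of_Vs)   # ≈ 0.993, so v(1 ms) ≈ 0.993 Vs

# (c) the uncharged capacitor initially acts as a short, so i(0+) = Vs/R
```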
[Figure 1 appears here: a series RC circuit in which a source Vs is connected, through
a switch that closes at t = 0, to R = 2 kΩ in series with C = 0.1 μF; i(t) is the loop
current and ν(t) the capacitor voltage.]
2 Laboratory Exercise
[Figure 2 appears here.]
Figure 2 Your (a) function generator, (b) protoboard, (c) oscilloscope, (d) cables
and wires, and (e) resistor on a protoboard with source and probe connections
to (a) and (c).
(1) Place a 50 Ω resistor on your protoboard and use Y-cables to connect the
signal generator (output port) and oscilloscope (input port) across the resistor
terminals—see Figure 2e or ask your TA for help if you are confused about this
step.
(2) Press the power buttons of the scope and the function generator to turn both
instruments on.
(3) (a) Set the function generator to produce a co-sinusoid output with 5 kHz
frequency and 4 V peak-to-peak amplitude:
• Press “Freq,” “Enter Number,” “5,” “kHz.”
• Press “Ampl,” “Enter Number,” “4,” “Vpp.”
(b) Press “Auto-scale” on the scope and adjust, if needed, vertical and hori-
zontal scales further so that the scope display exhibits the expected co-
sinusoid waveform produced by the generator.
(c) Sketch what you see on the scope, labeling carefully horizontal and
vertical axes of your graph in terms of appropriate time and voltage
markers.
(4) The default setting of your function generator is to produce the specified voltage
waveform (e.g., 4 V peak-to-peak co-sinusoid as in the above measurement) for
a 50 Ω resistive load. An alternate setting, known as “High Z”, allows you to
specify the generator output in terms of open-circuit voltage. To enter High Z
mode you can use the following steps (needed every time after turning on the
generator):
• Press “shift” and “enter” to enter the “MENU” mode.
• Press “>” three times until “D sys Menu” is highlighted.
• Press “V” twice until “50 Ohm” is highlighted.
(5) Remove the 50 Ω resistor from the protoboard without modifying the remaining
connections to the function generator and scope. Observe and make a sketch of
the modified scope output once again.
(6) Based on observations from steps 3, 4, and 5, and the explanation for High
Z mode provided in step 4, determine the output resistance (Thevenin) of the
function generator and the input (load) resistance of the scope. Explain your
reasoning.
(7) Measure the period of the co-sinusoidal signal displayed in step 5 by using the
time cursors of the oscilloscope and compare the measured period with the
inverse of the input frequency.
(8) Repeat step 7 using a square wave with a 50% duty cycle (i.e., the waveform
is positive 50% of the time and negative also 50% of the time) instead of the
co-sinusoid.
When you power up your function generator it will come up by default in 50 Ohm
mode. Remember to switch it to High Z mode if you want to specify open-circuit
voltage outputs (as in the next section, and in most experiments).
[Figure appears here: a series RC circuit driven by the function generator voltage
νin(t), with a 2 kΩ resistor (voltage νR(t)) and a 0.1 μF capacitor (voltage νC(t));
oscilloscope Channels 1 and 2 monitor the circuit, and i(t) is the loop current.]
(1) Measure the time constant by determining the amount of time required for the
capacitor voltage to decay to 37% of vmax on the scope. Use the measurement
cursors and record the result, and sketch what you see on the oscilloscope.
(2) Compare this value to your theoretical value for RC (give percent error).
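A percent-error computation in the style of step (2) can be sketched as below; tau_measured is a made-up example reading, to be replaced by your own cursor measurement.

```python
R, C = 2e3, 0.1e-6
tau_theory = R * C              # theoretical RC, 0.2 ms
tau_measured = 0.21e-3          # hypothetical scope reading, in seconds
percent_error = 100 * abs(tau_measured - tau_theory) / tau_theory
print(percent_error)            # 5.0 percent for this made-up reading
```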
(2) Plot the amplitude versus frequency data collected above in the box below—
note that the axes are labeled using a logarithmic scale.
[Empty graph appears here: logarithmic amplitude axis from 10⁻² to 10¹ versus
logarithmic frequency axis from 10² to 10⁵ Hz.]
[A second empty graph appears here: vertical axis from −3 to −1, horizontal axis
from 0 to 2000.]
With a simple modification to your RC circuit, you can create an envelope detector
that will recover the message signal contained in the function generator’s AM signal.
Change your circuit to match Figure 5b. Make sure you reconnect the function gener-
ator to the circuit, as indicated.
[Figure 5 appears here: the diode is in series with the source νin(t), followed by a
2 kΩ resistor in parallel with a 0.1 μF capacitor across which νC(t) is measured.]
Figure 5 (a) Circuit symbol and physical diagram for a diode, and (b) an
“envelope detector” circuit using a diode.
(1) View the voltage across the capacitor on the oscilloscope. What do you see?
Use the measurement cursors to determine the frequency of the output. Is this
the message signal?
(2) Can you explain how the circuit works? Hints: The diode in the circuit will
be conducting part of each cycle and non-conducting in the remainder—figure
out how the capacitor voltage will vary when the diode is conducting and non-
conducting. When is the RC time constant in the circuit relevant—when the
diode is conducting or not?
Also consider these questions:
(a) Is the envelope detector circuit linear or nonlinear? Explain.
(b) Recall that a capacitor is a charge-storage device. When you turn on your
radio, the capacitor will store an unknown charge. Will it be necessary to
account for this in the design of your receiver circuit? Why or why not?
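For intuition (not a lab requirement), here is a rough Python simulation of the envelope detector with an idealized diode: the capacitor voltage snaps up to the source whenever the source exceeds it (diode conducting) and otherwise decays through the 2 kΩ–0.1 μF combination (diode off). The 13 kHz carrier, 880 Hz message, and 80% modulation depth match values used elsewhere in these labs.

```python
import math

R, C = 2e3, 0.1e-6
tau = R * C                     # discharge time constant, 0.2 ms
fc, fm = 13e3, 880.0            # carrier and message frequencies, Hz
dt = 1e-7                       # simulation step, seconds
vC, t = 0.0, 0.0
trace = []
for _ in range(100000):         # simulate 10 ms
    vin = (1 + 0.8 * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)
    if vin > vC:
        vC = vin                      # diode conducts: track the source
    else:
        vC *= math.exp(-dt / tau)     # diode off: RC discharge
    trace.append(vC)
    t += dt
# vC follows the envelope 1 + 0.8 cos(2π·880t), not the 13 kHz carrier:
print(max(trace))   # 1.8, the envelope peak
print(min(trace))   # small but positive: droop near the envelope minimum
```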
Important! Leave your envelope detector assembled on your protoboard! You will
need it in future lab sessions.
Already you have come far: Your circuit can detect the message-containing envelope
of an AM signal. In the next lab session, you will explore amplifier circuits. You will
build an audio amplifier and connect your envelope detector to it so that you can listen
to the message signal you recovered in this session.
Lab 2: Op-Amps
The objective of this experiment is to gain experience in the design and construc-
tion of operational amplifier circuits. You also will examine some nonideal op-amp
behavior.
By the end of Lab 2, you will have designed and built two amplifiers for your
radio circuit. Once you have verified that they work, you will connect them to your
envelope detector from Lab 1 and listen to synthetic AM signals.
1 Prelab
In the prelab exercises, you will review the analysis of op-amp circuits and design
amplifiers for your radio circuit.
(1) Assuming ideal op-amps, derive an expression for the output voltage vo in the
circuit of Figure 1(a).
(2) Write two KCL equations in terms of vx that relate the output voltage to
input voltage for the circuit in Figure 1(b) (you do not need to solve the KCL
equations). Simplify your expressions as much as possible.
[Figure 1 appears here: three op-amp circuits, (a), (b), and (c), built from op-amps
and resistors R1–R5; ν1, ν2, and νi denote inputs, νx an internal node, and νo the
output in each case.]
(3) Again assuming an ideal op-amp, derive an expression for the output voltage
vo in the circuit in Figure 1(c).
(a) For a gain vo/vi of 2, how must R1 and R2 be related?
(b) For a gain vo/vi of 11, how must R1 and R2 be related?
(c) How do you build two amplifiers, one with a gain of two and one with a
gain of 11, given four resistors: one 2 kΩ, two 10 kΩ, and one 20 kΩ?
(d) Using the pin-out diagram in Figure 2 as a reference, draw how you will
wire the amplifier with a gain of 11. Draw in resistors and wires as needed.
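For Prelab item (3), the resistor bookkeeping can be sketched numerically. The snippet assumes the gain expression works out to vo/vi = 1 + R2/R1 (the standard noninverting-amplifier result; confirm this against your own derivation before relying on it):

```python
def noninv_gain(R1, R2):
    # assumed gain expression: vo/vi = 1 + R2/R1
    return 1 + R2 / R1

# available resistors: one 2 kΩ, two 10 kΩ, and one 20 kΩ
print(noninv_gain(10e3, 10e3))   # 2.0 → the two 10 kΩ resistors give a gain of 2
print(noninv_gain(2e3, 20e3))    # 11.0 → 2 kΩ and 20 kΩ give a gain of 11
```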
2 Laboratory Exercise
• The DC supplies must be set carefully to power the op-amp without destroying it.
• The DC sources should be set to +12 V for VCC + supply and −12 V for
VCC − supply.
• The VCC + and VCC − supplies always stay at the same pins throughout the
experiment. Do not switch them or you will destroy the op amp!
• When you are ready to turn the power on, always turn on the DC supplies first,
then the AC supply.
• When you turn the circuit off, always turn off the AC first and then the DC.
[Figure appears here: an inverting op-amp circuit with input resistor R1 from the
source νi, feedback elements Rf and C around the op-amp, and resistor R2 at the
noninverting input; νo is the output.]
(2) Apply a 500 Hz, 6 V peak-to-peak square wave to the input (with 50% duty
cycle). Sketch the input and output waveforms. Explain the shape of the output
waveform, and how it confirms that the circuit acts as an effective integrator.
(3) Switch the input waveform from square to co-sinusoid and decrease the frequency
to 100 Hz. Sketch the input and output waveforms again.
(4) Now slowly increase the input amplitude to 16 V peak-to-peak and describe
what happens to the shape of the output waveform. How do you explain what
you see? What is the peak-to-peak amplitude of the output waveform when the
input amplitude is 16 V peak-to-peak? If you were to increase the amplitude of
the input further, would that increase the amplitude of the output? Why or why
not?
[Figures appear here: an op-amp stage with resistors R1 and R2, input νi, and output
νo, and a circuit in which the diode–2 kΩ–0.1 μF envelope detector is combined with
an op-amp stage (resistors R1, R2) and a 33 μF capacitor.]
A buffer stage is placed between the envelope detector and the loudspeaker, preventing
the loudspeaker from changing the time constant of the tuned envelope detector.
Until now, we have been displaying signals on our oscilloscopes. However,
signals in the audio frequency range also can be “displayed” acoustically using loud-
speakers. We next will use a corner of our protoboard, an audio jack, and a pair of
speakers to listen to a number of signal waveforms:
(1) Connect the function generator output to the speaker input (via the protoboard
and audio jack) and generate and listen to an 880 Hz signal. Repeat for a 13
kHz signal. Describe what you hear in each case—in what ways do the 880 Hz
and 13 kHz audio signals sound different to your ear?
(2) To test your three-stage circuit, create an AM signal with the function generator
(almost like in Lab 1):
• Set the function generator to create a 13 kHz sine wave measuring 0.2 V
peak-to-peak, no DC offset.
• Press “Shift,” then “AM” to enable amplitude modulation.
• Press “Shift,” then “Freq” to set the message-signal frequency to 880 Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the height of the envelope.
• Listen to the AM signal on the loudspeaker.
Turn on the DC supplies, and connect the function generator to the input of your
three-stage circuit from step 5 above. Connect the output of the same circuit to
the oscilloscope. Sketch what you see on the oscilloscope display. How does it
compare to the waveform you obtained in the last part of Lab 1? On the function
generator, press “Shift,” then “Freq” and sweep the message-signal frequency
from 100 Hz to 2000 Hz. Describe what you see.
(3) Disconnect the oscilloscope and replace it with a loudspeaker. What do you
hear? Sweep the message signal frequency from 100 Hz to 2000 Hz again.
Describe how the sound changes as the frequency is swept. Do you recognize
the 880 Hz sound from step 1 above?
Important! Leave your circuit on the protoboard but return the audio jack to your
TA.
Lab 3: Frequency Response and Fourier Series

In this lab you will build an active bandpass filter circuit with two capacitors and
an op-amp, and examine the response of the circuit to periodic inputs over a range
of frequencies. The same circuit will be used in Lab 4 in your AM radio receiver
system as an intermediate frequency (IF) filter, but in this current lab our main focus
will be on the frequency response H (ω) of the filter circuit and the Fourier series
of its periodic input and output signals. In particular we want to examine and gain
experience with the response of linear time-invariant circuits to periodic inputs.
1 Prelab
(1) Determine the compact-form trigonometric Fourier series of the square wave
signal, f(t), with a period T and amplitude A shown in Figure 1. That is, find
cn and θn such that
f(t) = c0/2 + Σ_{n=1}^{∞} cn cos(nωo t + θn),
where ωo = 2π/T. Notice c0/2 = 0. How could you have determined that without
any calculation?
(2) Consider the circuit in Figure 2 where vi(t) is a co-sinusoidal input with some
radian frequency ω.
(a) What is the phasor gain Vo/Vi in the circuit as ω → 0? (Hint: How does one
model a capacitor at DC—open or short?)
[Figure 1 appears here: the square wave f(t), alternating between +A and −A with
period T.]
[Figure 2 appears here: an active filter built around an op-amp with resistors of
1 kΩ, 5 kΩ, 3.6 kΩ, 1.7 kΩ, and 3.6 kΩ and two 0.01 μF capacitors; νi(t) is the
input and νo(t) the output.]
(b) What is the gain Vo/Vi as ω → ∞? (Hint: Think of capacitor behavior in the
limit as ω → ∞.)
(c) In view of the answers to parts (a) and (b), and the fact that the circuit is
second-order (it contains two energy storage elements), try to guess what
kind of filter the system frequency response H(ω) ≡ Vo/Vi implements—
low-pass, high-pass, or band-pass? The amplitude response |H(ω)| of the
circuit will be measured in the lab.
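Prelab item (1) can be spot-checked numerically. The Python sketch below (not part of the lab) computes Fourier coefficients of a square wave by a Riemann-sum approximation of (2/T)∫₀ᵀ f(t)e^{−jnωₒt} dt; it assumes the Figure 1 wave is +A over the first half period and −A over the second.

```python
import cmath
import math

A, T, N = 1.0, 1.0, 4000
w0 = 2 * math.pi / T

def f(t):
    # assumed square wave: +A for the first half period, −A for the second
    return A if (t % T) < T / 2 else -A

def coeff(n):
    # Riemann sum for (2/T) ∫₀ᵀ f(t) e^{−jnω₀t} dt; |coeff(n)| gives cn
    dt = T / N
    return (2 / T) * sum(f(k * dt) * cmath.exp(-1j * n * w0 * k * dt)
                         for k in range(N)) * dt

print(abs(coeff(0)) / 2)   # c0/2 ≈ 0: the two half cycles cancel
print(abs(coeff(1)))       # ≈ 4A/π ≈ 1.273 (fundamental)
print(abs(coeff(2)))       # ≈ 0: even harmonics vanish
print(abs(coeff(3)))       # ≈ 4A/(3π) ≈ 0.424
```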
2 Laboratory Exercise
In this exercise you will construct an active bandpass filter circuit and measure its
amplitude response over the frequency range 1–20 kHz. Do the following:
(1) Construct the circuit shown in Figure 2 on your protoboard. For now, do not
connect it to the three-stage circuit from Lab 2. Remember the rules for wiring
the 741 op-amp, which are repeated in Figure 3.
(2) Turn on the DC supplies, then connect a 1 kHz sine wave with amplitude 1 V
peak-to-peak as the AC input vi (t). Display vi (t) and vo (t) on different channels
of the oscilloscope and verify the waveforms.
(3) Increase the function generator frequency from 1 kHz to 20 kHz in 1 kHz
increments. At each frequency enter the magnitude of the phasor voltage gain
Vo/Vi in the graph shown below—Vo/Vi is the system frequency response H(ω) and
its magnitude |Vo|/|Vi| is the system amplitude response |H(ω)|.
[Empty graph appears here: logarithmic amplitude-response axis from 10⁻² to 10¹
versus frequency axis from 0 to 20 kHz.]
¹FFT stands for fast Fourier transform and it is a method for calculating Fourier transforms with sampled
signal data—see Example 9.26 in Section 9.3 of Chapter 9 to understand the relation of windowed Fourier
transforms to Fourier coefficients.
(4) Observe the output signal’s Fourier coefficient by setting the Operand to Channel
2 under the function menu. How does the FFT display change as you sweep
the frequency of the input from 1 kHz to 20 kHz? Describe what you see and
briefly explain what is happening.
Important! Leave your active filter assembled on your protoboard! You will need
it in the next lab session.
Lab 4: Fourier Transform and AM Radio

In Lab 4, you finally will connect all of your receiver components and tune in an
AM radio broadcast. You will follow the radio signal through the entire system, from
antenna to loudspeaker, in both the time domain and the frequency domain.
1 Prelab
You should prepare for this lab by reviewing Sections 8.3 and 8.4 of the text on AM
detection and superheterodyne receivers, familiarizing yourself with your own receiver
design shown in Figure 1, and answering the following questions:
(1) Suppose you want to tune your AM receiver in the lab to WDWS, an AM
station broadcasting from Champaign-Urbana with a carrier frequency fc =
ωc/2π = 1400 kHz. Given that the IF (Intermediate Frequency) of your receiver is
fIF = ωIF/2π = 13 kHz, to which LO (Local Oscillator) frequency fLO = ωLO/2π should
you set the function generator input of the mixer in your receiver (see Figure 1)
to be able to detect WDWS? (Hint: There are two possible answers; give both
of them.)
(2) Repeat 1, supposing you wish to listen to WILL, an AM broadcast at fc = 580
kHz.
(3) Sketch the amplitude response curve |HIF(ω)| of an ideal IF filter designed for
an IF of fIF = ωIF/2π = 13 kHz and a filter bandwidth of 10 kHz. Label the axes
of your plot carefully using appropriate tick marks and units.
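The LO arithmetic in items (1) and (2) amounts to offsetting each carrier by the IF; a minimal sketch (not part of the lab):

```python
# with f_IF = 13 kHz, the LO may sit 13 kHz above or below the carrier,
# since mixing produces a component at the difference frequency |f_c − f_LO|
f_IF = 13e3
stations = {"WDWS": 1400e3, "WILL": 580e3}   # carrier frequencies in Hz
for name, f_c in stations.items():
    f_LO_high = f_c + f_IF
    f_LO_low = f_c - f_IF
    print(name, f_LO_high, f_LO_low)
# WDWS → 1413 kHz or 1387 kHz; WILL → 593 kHz or 567 kHz
```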
[Figure 1 appears here: block diagram of the receiver—Antenna → BPF → RF Amp
→ Mixer (driven by the Local Oscillator) → IF Filter → IF Amp → Env. Detector →
Audio Amp → Speaker.]
2 Laboratory Exercise
Your scope is capable of displaying the Fourier transform of its input signal. We have
already used this feature in Lab 3 in “observing” the Fourier coefficients of periodic
signal inputs. In this section we will learn how to examine nonperiodic inputs in the
frequency domain.
(1) No circuit is used for this part of the laboratory. Connect the function generator’s
output to Channel 1 of the oscilloscope.
(2) Set the function generator to create a 1 kHz square wave with amplitude 1
V peak-to-peak. Turn on burst mode by pressing “Shift” then “Burst.” In this
mode the generator outputs a single rectangular pulse, (i.e., rect(t/τ )), which
is repeated at a 100 Hz rate. Sketch the burst signal output that you see on the
scope display. Confirm that pulses are generated at a 100 Hz rate (that is,
100 pulses per second, or a pulse every 10 ms).
(3) Set the oscilloscope to display the magnitude of the Fourier transform of a
segment of the input signal (containing a single rectangle):
• Press “+/−.”
• Turn on Function 2.
• Press “Menu.”
• Press “Operation” until FFT appears.
• Set the Operand to Channel 1.
• Press “FFT Menu.”
• Press “Enter” to save the change and turn off the menu.
• Press “Shift,” then “Freq” to set the message signal frequency to 880
Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the DC component added to the message signal before modula-
tion.
(3) Set your oscilloscope to display the frequency spectrum of the input:
• Press “+/−.”
• Turn on Function 2.
• Press “Menu,” then “Operation” until FFT appears.
• Set the time/div to 2 ms.
• Set the units/div to 10 dB and the ref level to 0 dBV.
• Set the center frequency to 12.2 kHz.
• Set the frequency span to 24.4 kHz.
(4) Sketch the AM signal and its frequency spectrum using the oscilloscope display
and explain what you see in terms of the modulation property of the Fourier
transform.
(5) Change the shape of the message signal in the modulation menu from SINE to
SQUARE. Do not change the shape or frequency of the 13 kHz carrier signal.
Explain what you see. (Hint: See Example 9.26 in Section 9.3 of Chapter 9.)
(3) Connect the output of the band-pass filter to the input of your three-stage circuit
from Lab 2. Make sure all of the signal grounds are connected, and connect the
DC supplies to your op-amps.
(4) Turn on the DC supplies. Then connect the function generator to the local-
oscillator input on the frequency mixer. Tune in AM 1400 by selecting an
appropriate mixing frequency for the local oscillator as described below. (Hint:
Look back to Problem 1 in the Prelab.)
(5) Now you will follow the processing of the received RF signal into an audible
signal by displaying the time and frequency domain of the four test-point signals
on the oscilloscope. The test points are described below.
When displaying the signals from each of the test points, it will be your
task to select an appropriate time scale for the time-domain waveform and
center frequency and frequency span for its Fourier spectrum. If you select
inappropriate numbers, all you will see on the oscilloscope is noise. You may
ask the TA for hints, but give your choice some thought and discuss it with your
partner before displaying the signal.
Test point 1: RF amplifier The antenna picks up the AM radio broadcast we are
trying to tune, along with many other unwanted signals. The antenna signal is then
passed through an LC resonator called the preselector, which acts as a crude band-pass
filter. The preselector removes some unwanted frequencies from the antenna signal
while preserving the frequencies associated with the desired AM radio broadcast. The
output of the preselector is amplified by the RF amplifier stage.
Connect an oscilloscope probe to the RF amplifier output (TP1). Sketch the
waveform in both the time and frequency domains. Set the oscilloscope for the FFT
as before, but set the center frequency to 1.2 MHz and the frequency span to 2.4
MHz. You should see at least one AM signal in the frequency domain, but nothing
discernible in the time domain. Touching your finger to one of the antenna terminals
will amplify the signal considerably.
Test point 2: frequency mixer The mixer multiplies the RF signal with the signal
coming from the local oscillator, which is tuned so that all subsequent processing is
independent of the radio broadcast’s carrier frequency. In most commercial receivers,
the mixer is tuned to produce an Intermediate Frequency (IF) of 455 kHz. For our
purposes, an IF of 13 kHz will suffice. The band-pass filter and envelope detector you
built are suited for an IF of 13 kHz.
The function generator will be our local oscillator, abbreviated as LO. To produce
an IF of 13 kHz, the LO must be set at a frequency 13 kHz above or below the carrier
signal of the station we are trying to tune. We will be tuning AM 1400, so set the LO
to be a 1413 kHz or a 1387 kHz sine wave with an amplitude of 400 mV peak-to-peak.
Make sure that the function generator’s AM feature is turned off.
You may have to adjust LO frequency slightly to better tune the station. Once
the station is tuned, probe TP2 and sketch the output in both the time domain and the
frequency domain. At TP2, the signal should be very noisy, just as it was at TP1.
Test point 3: IF filter and amplifier The IF filter is used to select the signal
centered on the IF frequency and to reject all other signals. Receivers employing
higher IFs typically include ceramic IF filters that operate on a piezoelectric
principle. Although small and inexpensive, ceramic filters can have very sharply tuned
responses, which are needed when the IF is large compared to the AM bandwidth. With a
lower IF, such as 13 kHz, a sharply tuned response is not necessary, and thus even the low-Q
op-amp-based band-pass filter from Lab 3 that we are using is more than adequate.
Depending on the AM signal strength from the antenna, and the noise level, you
may find it necessary to add gain to the IF amplifier. Feel free to experiment with
different gain values (remember the design equation from Lab 2) to get an output
that can be demodulated by the envelope detector. An IF gain of about 30 is not
uncommon.
Probe the signal at TP3. Sketch the time waveform and the frequency spectrum.
Test point 4: envelope detector and audio amplifier The envelope detector
then recovers the message signal from the IF signal. Probe TP4 and sketch the time-
domain waveform and frequency spectrum.
At this point, if all stages of the AM radio behave as expected, hook up the output
of the audio amplifier to the speaker. Do you hear what you expect to hear?
Lab 5: Sampling, Reconstruction, and Software Radio

Until this point, your study of signals and systems has concerned only the continuous-
time case,1 which dominated the early history of signal processing. About 50 years
ago, however, the development of the modern computer generated research interest in
digital signal processing (DSP), a type of discrete-time signal processing. Although
hardware limitations made most real-time DSP impractical at the time, the continuing
maturation of the computer has been matched with a continuing expansion of DSP.
Much of that expansion has been into areas previously dominated by continuous-
time systems: our telephone network, medical imaging, music recordings, wireless
communications, and many more.
You do not need to worry whether the time and effort you have invested in
studying continuous-time systems will be wasted because of the growth of DSP—
digital systems are practically always hybrids of analog and digital sub-systems.
Furthermore, many DSP systems are linear and time-invariant, meaning that the
same analysis techniques apply, although with some modifications. In this lab, you
will explore some of the parallels between continuous-time systems and DSP with
a “software radio” designed to the same specifications as the receiver circuit you
developed on your protoboard.
1 The term “continuous time” is used generically to refer to signals that are functions of a continuous independent variable. Often that variable represents time, but it may instead represent distance, etc. “Discrete time” is used in the same way.
Lab 5: Sampling, Reconstruction, and Software Radio
Figure 1 A hypothetical sampling-and-reconstruction system. The input x(t) passes through a low-pass filter H1(ω) to produce f(t); A/D conversion (multiplication by the impulse train Σn δ(t − nT)) yields the sampled signal fT(t) carrying the samples f(nT); D/A conversion (low-pass filtering by H2(ω)) produces the output y(t).
1 Prelab
Our software radio is typical of many DSP systems in that both the available input
and required output are continuous-time signals. The conversion of a continuous-
time input signal to a discrete-time signal is called sampling (or A/D conversion),
and the conversion of a discrete-time signal to a continuous-time output signal is
called reconstruction (or D/A conversion). As discussed in class, samples f (nT ) of
a bandlimited analog signal f (t) can be used to reconstruct f (t) exactly when the
sampling interval T and signal bandwidth Ω = 2πB satisfy the Nyquist criterion

T < 1/(2B).
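Equivalently, the sampling rate 1/T must exceed 2B. A quick check with the sound-card rate used later in this lab (the bandwidth values below are illustrative):

```python
# Nyquist criterion T < 1/(2B), stated as a rate comparison:
# sampling at 1/T = 44100 Hz suffices only for signals bandlimited
# to B < 22050 Hz.
def nyquist_ok(sample_rate_hz, bandwidth_hz):
    """True if T = 1/sample_rate satisfies T < 1/(2*bandwidth)."""
    return sample_rate_hz > 2*bandwidth_hz

print(nyquist_ok(44100, 20000))   # True: a 20 kHz signal is safe
print(nyquist_ok(44100, 25000))   # False: undersampled, aliasing
```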
This is illustrated by the hypothetical system shown in Figure 1, where the analog
signal f(t) defined at the output stage of a low-pass filter H1(ω) has a bandwidth
Ω = 2πB limited by the bandwidth Ω1 = 2πB1 of the filter. The A/D converter
extracts the samples f(nT) from f(t) with a sampling interval of T. D/A conversion
of samples f(nT) into an analog signal y(t) can be envisioned as low-pass filtering
of a hypothetical signal fT(t) = Σn f(nT)δ(t − nT) using the filter H2(ω). With an
appropriate choice of H2(ω), the system output y(t) will be identical to f(t) so long
as T < 1/(2B1). The reason for that can easily be appreciated after comparing the Fourier
transforms F(ω) and FT(ω) of signals f(t) and fT(t) with the help of Figure 2.
The following prelab exercises concern the system shown in Figure 1. Assume
that T = 1/44100 s (i.e., the sampling frequency is T −1 = 44100 Hz) and signal x(t)
has the Fourier transform X(ω) shown in Figure 3.
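The claim that the samples determine the signal can be tested numerically. The sketch below samples a bandlimited test signal at 44100 Hz and reconstructs a value between sample instants by sinc interpolation, which is the role an ideal H2(ω) plays in Figure 1. The 5 kHz test tone is an illustrative choice, well below 22050 Hz; with only a finite block of samples the reconstruction is approximate, so a small truncation error remains.

```python
import numpy as np

# Numerical sketch of the Figure 1 system with an ideal H2(w):
# sample a bandlimited signal at 44100 Hz, then reconstruct a value
# midway between two sample instants via sinc interpolation,
# y(t) = sum_n f(nT) * sinc((t - nT)/T).
fs = 44100
T = 1/fs
n = np.arange(200)                       # a finite block of samples
f = lambda t: np.cos(2*np.pi*5000*t)     # 5 kHz test tone < fs/2
samples = f(n*T)

t0 = 100.5*T                             # midway between two samples
y = np.sum(samples * np.sinc((t0 - n*T)/T))
err = abs(y - f(t0))                     # small truncation error
```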
Figure 2 An example comparing the Fourier transforms of signals f(t) and fT(t)
defined in Figure 1. The top panel shows F(ω), with peak value 1 and bandwidth
2πB; the bottom panel shows FT(ω), with peak value 1/T. Since for |ω| < π/T the
two Fourier transforms have the same shape, low-pass filtering of fT(t) yields the
original analog signal f(t). FT(ω) is constructed as a superposition of replicas of
F(ω)/T shifted in ω by all integer multiples of 2π/T (see item 25 in Table 7.2 in
the text).
Figure 3 The Fourier transform X(ω) of the input signal x(t).
Figure 4 Frequency responses H1(ω) and H2(ω) for parts (a), (b), and (c) of the prelab exercises.
(1) For frequency responses H1(ω) and H2(ω) in Figure 4a, sketch the Fourier
transforms of f(t), fT(t) = Σn f(nT)δ(t − nT), and y(t). Is y(t) a perfect
reconstruction of x(t)? Is y(t) a perfect reconstruction of f(t)?
(2) Now, consider an ideal H1 (ω) but a nonideal H2 (ω) given by Figure 4b. The
signals f (t) and fT (t) are unchanged, but sketch the Fourier transform of the
new y(t). Is y(t) a perfect reconstruction of f (t)?
(3) Now suppose a nonideal H1 (ω) and an ideal H2 (ω) given by Figure 4c. Sketch
the Fourier transform of f (t), fT (t), and y(t). Is y(t) a perfect reconstruction
of f (t)?
(4) Discuss the role of filters H1 (ω) and H2 (ω) in the system examined above. In
what ways do they impact the system output?
2 Laboratory Exercise
In this section, you will observe a real system much like the one you studied in the
prelab exercises. The lab also will illustrate the phenomenon of aliasing, which occurs
when the analog input is undersampled.
(1) Connect the function generator’s output to the “mic in” jack at the back of the
computer at your lab station. Use BNC “Y” cables to bring the signal from the
lab equipment to the protoboard. Use stereo jacks and stereo cables to run the
signal from the protoboard to the computer. Ask your TA if you need help.
(2) In Windows, go to Start, Control Panel, and click Sound, Speech, and Audio
Devices. Next click Adjust the system volume. Under the Volume tab, turn
the Device Volume all the way up. Then, in the Audio tab, click Volume...
under Sound Recording. Make sure the Microphone box is selected and turn
the Microphone level all the way down. Also make sure the “Wave” setting is
all the way up. Leave this window open, as it may be necessary to change the
gain if the signal is too strong or too weak during the lab exercises.
(3) In Windows, go to Start, Programs, MATLAB, R2006b and click MATLAB
R2006b. (MATLAB versions are continuously updated; if you do not see
MATLAB R2006b, start the latest version installed on the computer.) At the
MATLAB command prompt, type “softRx.” This will launch the graphical
user interface (GUI) for the software AM receiver shown in Figure 5.
(4) Select “Output = Unprocessed Input” from the pull-down options menu near
the top left corner of the softRx GUI, in which case the analog “mic in” signal
is sampled and reconstructed as an analog signal as depicted in Figure 6 and
explained in the caption. Note the resemblance of the diagram to the system
studied in the prelab. The low-pass filters shown in Figure 6 are part of the sound
card and serve the same role as those in the prelab. Important: You should use
the zoom buttons at the top left corner of the GUI to zoom in on the waveform
if it looks like a solid line on the screen.
Figure 6 Block diagram for the sampling (multiplication by the impulse train Σn δ(t − nT)) and reconstruction performed by the sound card when “Output = Unprocessed Input” is selected.
(5) Set the function generator to produce a 5 kHz sinusoid with amplitude 500
mV peak-to-peak. Connect the 33 μF capacitor in parallel with the stereo jack
(adding this capacitor will prevent saturation of the input port; you can remove
it to see its effect). Click the “Start Data Acquisition” button in the GUI to begin
sampling and reconstruction. Describe what you see in the time and frequency
plots of the output in the GUI.
(6) Slowly sweep the input frequency up to 19 kHz. Does the output look like a
perfect reconstruction of the input signal?
(7) Now, slowly sweep the input frequency from 19 to 25 kHz, passing through
22.05 kHz. What is the significance of the frequency 22.05 kHz? What happens
in the time and frequency domain? Does this look like a perfect reconstruction
or do we have an aliased component at the output? Which component(s) in the
system could be improved to reduce the aliasing effect? Note that the answer to
the last question is not the sampling frequency of the sound card or the sound
card itself.
(8) Finally, sweep the input frequency from 25 to 30 kHz. What do you observe?
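The frequency at which an undersampled sinusoid reappears can be predicted in a few lines: input frequencies above half the sampling rate fold back into the 0 to fs/2 band. The sketch below applies this to the sweep frequencies from steps (5) through (8).

```python
# Apparent output frequency of a sampled sinusoid at fs = 44100 Hz.
# Inputs above fs/2 = 22050 Hz fold back (alias) into 0..fs/2.
def alias_freq(f_in, fs=44100):
    """Frequency at which a sampled f_in Hz sinusoid reappears."""
    f = f_in % fs
    return f if f <= fs/2 else fs - f

for f_in in (5000, 19000, 22050, 25000, 30000):
    print(f_in, "->", alias_freq(f_in))
# e.g. 25000 -> 19100, 30000 -> 14100
```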
Figure 7 Block diagram for the Software Receiver when the “Output = IF
Filtered Input” option is selected (sampling is again represented by multiplication
by the impulse train Σn δ(t − nT)).
(2) Note that the filter frequency response depicted in the GUI depends on the cutoff
frequency inputs. After clicking “Change Cutoff Frequencies,” change them
from fcl = 8 kHz and fcu = 18 kHz to 10 kHz and 16 kHz, respectively, and observe
the new filter response.
(3) Switch the input waveform from the sinusoid to a square wave with f = 1 kHz
and describe what you see at the filter output.
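To anticipate the result, recall that a square wave contains only odd harmonics of its fundamental, with amplitudes falling off as 1/k. Listing which harmonics of a 1 kHz square wave fall inside the 8 to 18 kHz passband (idealized here as a brick-wall band) suggests what the filter output should contain:

```python
# Odd harmonics of a 1 kHz square wave that fall inside an idealized
# 8-18 kHz brick-wall passband. The real Lab filter rolls off
# gradually, so nearby harmonics are attenuated rather than removed.
f0 = 1000                      # square-wave fundamental, Hz
fcl, fcu = 8000, 18000         # band-pass cutoffs, Hz
passed = [k*f0 for k in range(1, 30, 2) if fcl <= k*f0 <= fcu]
print(passed)   # [9000, 11000, 13000, 15000, 17000]
```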
• Press “Enter” to save the change and turn off the menu.
• Press “Shift,” then “Freq” to set the message signal frequency to 880
Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the DC component added to the message signal before modula-
tion.
(2) Select the “Output = Unprocessed Input” option. Sketch and describe what you
see on the GUI output.
(3) Select the “Output = IF Filtered Input” option. Sketch and describe what you
see.
(4) Select the “Output = Envelope Detected Input” option. This introduces an enve-
lope detector to the system as shown in Figure 8. You will be asked to enter
three cut-off frequencies, two for the bandpass filter and one for the lowpass
filter of the envelope detector. Sketch and describe what you see on the output
panel. Overall, what does this system accomplish?
Figure 8 Block diagram for the Software Receiver when “Output = Envelope
Detected Input” is selected (sampling is again represented by multiplication by the
impulse train Σn δ(t − nT)).
(4) Select the “Output = IF Filtered Input” option. Sketch and describe the output
that you see.
(5) Repeat step (4) with the “Output = Envelope Detected Input” option. How do the
signals in each of these settings resemble those you observed in the continuous-
time case during Lab 4?
(6) Connect a pair of loudspeakers to “speaker out” to listen to AM 1400. Is your
software radio working as expected? Note: You might need to move the antenna
or change how you are holding the antenna in order to increase signal quality.
(7) Explore sound quality changes as you vary the following parameters:
(a) Increase the IF frequency to 16 kHz by varying the LO frequency.
(b) Set fcl = 14 kHz and fcu = 18 kHz.
(c) Set the lowpass filter cutoff frequency, fc , to 2 kHz.
The End
Congratulations on completing the ECE 210 labs! Over these five labs you have
learned and applied the most important principles of continuous-time signals and
systems and explored their parallels in discrete-time signals and systems. Advanced
coursework in ECE will require you to apply these principles again and again. You
are well prepared!