
Analog Signals and Systems

Erhan Kudeki
University of Illinois at Urbana-Champaign

David C. Munson Jr.
University of Michigan

Upper Saddle River, New Jersey 07458


Library of Congress Cataloging-in-Publication Data

CIP data on file

Editorial Director, Computer Science, Engineering: Marcia J. Horton


Associate Editor: Alice Dworkin
Editorial Assistant: William Opulach
Senior Managing Editor: Scott Disanno
Production Editor: James Buckley
Art Director: Jayne Conte
Cover Designer: Bruce Kenselaar
Art Editor: Greg Dulles
Media Editor: Dave Alick
Manufacturing Manager: Alan Fischer
Manufacturing Buyer: Lisa McDowell

© 2009 by Pearson Education Inc.


Pearson Prentice Hall
Pearson Education, Inc.
Upper Saddle River, NJ 07458

All rights reserved. No part of this book may be reproduced, in any form or by any means, without
permission in writing from the publisher.

Pearson Prentice Hall™ is a trademark of Pearson Education, Inc.

The author and publisher of this book have used their best efforts in preparing this book. These efforts
include the development, research, and testing of the theories and programs to determine their
effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard
to these programs or the documentation contained in this book. The author and publisher shall not be
liable in any event for incidental or consequential damages in connection with, or arising out of, the
furnishing, performance, or use of these programs.

Printed in the United States of America


10 9 8 7 6 5 4 3 2 1

ISBN-10: 0-13-143506-X
ISBN-13: 978-0-13-143506-3

Pearson Education Ltd., London


Pearson Education Australia Pty. Ltd., Sydney
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada, Inc., Toronto
Pearson Educación de Mexico, S.A. de C.V.
Pearson Education—Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.
Pearson Education, Inc., Upper Saddle River, New Jersey
to
Beverly, Melisa, and Deren
and to
Nancy, David, Ryan, Mark, and Jamie
Contents

Preface xi

Chapter 0 Analog Signals and Systems—The Scope and Study Plan 1

Chapter 1 Circuit Fundamentals 6
1.1 Voltage, Current, and Power 7
1.2 Kirchhoff's Voltage and Current Laws: KVL and KCL 15
1.3 Ideal Circuit Elements and Simple Circuit Analysis Examples 17
1.4 Complex Numbers 26
Exercises 26

Chapter 2 Analysis of Linear Resistive Circuits 31
2.1 Resistor Combinations and Source Transformations 31
2.2 Node-Voltage Method 38
2.3 Loop-Current Method 43
2.4 Linearity, Superposition, and Thevenin and Norton Equivalents 48
2.5 Available Power and Maximum Power Transfer 60
Exercises 63

Chapter 3 Circuits for Signal Processing 68
3.1 Operational Amplifiers and Signal Arithmetic 68
3.2 Differentiators and Integrators 80
3.3 Linearity, Time Invariance, and LTI Systems 87
3.4 First-Order RC and RL Circuits 93
3.5 nth-Order LTI Systems 111
Exercises 115

Chapter 4 Phasors and Sinusoidal Steady State 121
4.1 Phasors, Co-Sinusoids, and Impedance 122
4.2 Sinusoidal Steady-State Analysis 136
4.3 Average and Available Power 143
4.4 Resonance 150
Exercises 154

Chapter 5 Frequency Response H(ω) of LTI Systems 158
5.1 The Frequency Response H(ω) of LTI Systems 159
5.2 Properties of Frequency Response H(ω) of LTI Circuits 164
5.3 LTI System Response to Co-Sinusoidal Inputs 166
5.4 LTI System Response to Multifrequency Inputs 176
5.5 Resonant and Non-Dissipative Systems 181
Exercises 182

Chapter 6 Fourier Series and LTI System Response to Periodic Signals 185
6.1 Periodic Signals 186
6.2 Fourier Series 189
6.3 System Response to Periodic Inputs 208
Exercises 218

Chapter 7 Fourier Transform and LTI System Response to Energy Signals 223
7.1 Fourier Transform Pairs f(t) ↔ F(ω) and Their Properties 226
7.2 Frequency-Domain Description of Signals 240
7.3 LTI Circuit and System Response to Energy Signals 247
Exercises 255

Chapter 8 Modulation and AM Radio 259
8.1 Fourier Transform Shift and Modulation Properties 260
8.2 Coherent Demodulation of AM Signals 265
8.3 Envelope Detection of AM Signals 267
8.4 Superheterodyne AM Receivers with Envelope Detection 273
Exercises 278

Chapter 9 Convolution, Impulse, Sampling, and Reconstruction 281
9.1 Convolution 282
9.2 Impulse δ(t) 301
9.3 Fourier Transform of Distributions and Power Signals 314
9.4 Sampling and Analog Signal Reconstruction 325
9.5 Other Uses of the Impulse 332
Exercises 333

Chapter 10 Impulse Response, Stability, Causality, and LTIC Systems 337
10.1 Impulse Response h(t) and Zero-State Response y(t) = h(t) ∗ f(t) 338
10.2 BIBO Stability 346
10.3 Causality and LTIC Systems 351
10.4 Usefulness of Noncausal System Models 357
10.5 Delay Lines 357
Exercises 359

Chapter 11 Laplace Transform, Transfer Function, and LTIC System Response 361
11.1 Laplace Transform and Its Properties 363
11.2 Inverse Laplace Transform and PFE 381
11.3 s-Domain Circuit Analysis 389
11.4 General Response of LTIC Circuits and Systems 396
11.5 LTIC System Combinations 412
Exercises 419

Chapter 12 Analog Filters and Low-Pass Filter Design 426
12.1 Ideal Filters: Distortionless and Nondispersive 427
12.2 1st- and 2nd-Order Filters 430
12.3 Low-Pass Butterworth Filter Design 437
Exercises 447

Appendix A Complex Numbers and Functions 450
A.1 Complex Numbers as Real Number Pairs 450
A.2 Rectangular Form 452
A.3 Complex Plane, Polar and Exponential Forms 454
A.4 More on Complex Conjugate 461
A.5 Euler's Identity 463
A.6 Complex-Valued Functions 465
A.7 Functions of Complex Variables 468

Appendix B Labs 471
Lab 1: RC-Circuits 472
Lab 2: Op-Amps 481
Lab 3: Frequency Response and Fourier Series 488
Lab 4: Fourier Transform and AM Radio 493
Lab 5: Sampling, Reconstruction, and Software Radio 499

Appendix C Further Reading 507

Index 509
Preface

Dear student: This textbook will introduce you to the exciting world of analog signals
and systems, explaining in detail some of the basic principles that underlie the opera-
tion of radio receivers, cell phones, and other devices that we depend on to exchange
and process information (and that we enjoy in our day-to-day lives). This subject
matter constitutes a fundamental core of the modern discipline of electrical and
computer engineering (ECE). The overall scope of our book and our pedagogical
approach are discussed in an introductory chapter numbered “0” before we get into
the nitty-gritty of electrical circuits and analog systems, beginning in Chapter 1. Here,
in the Preface, we tell you and our other readers – mainly your instructors – about the
reasons underlying our choice of topics and the organization of material presented
in this book. We hope that you will enjoy using this text and then perhaps return to
this Preface after having completed your study, for then you will be able to better
appreciate what we are about to describe.
This textbook traces its origins to a major curriculum revision undertaken in
the middle 1990s at the University of Illinois. Among the many different elements
of the revision, it was decided to completely restructure the required curriculum in
the area of circuits and systems. In particular, both the traditional sophomore-level
circuit analysis course and the junior-level signals and systems course were phased
out, with the material in these courses redistributed within the curriculum in a new
way. Some of the circuits material, and the analog part of the signals and systems
material, were integrated into a new sophomore-level course on analog signals and
systems. This course, for which this book was written, occupies the same slot in the
curriculum as the old circuit analysis course. Other material in the circuit analysis
course was moved to a junior-level electronics elective and to an introductory course
on power systems. The discrete-time topics from the old junior-level signals and
systems course were moved into a popular course on digital signal processing. This
restructuring consolidated the curriculum (saved credit hours); but, more importantly,
it offered pedagogical benefits that are described below.
Similar to the trend initiated in DSP First: A Multimedia Approach, and its
successor Signal Processing First (both by McClellan, Schafer, and Yoder, Prentice-
Hall), our approach takes some of the focus in the early curriculum off circuit analysis,
which no longer is the central topic of ECE. And, it permits the introduction of
signal processing concepts earlier in the curriculum, for immediate use in subsequent
courses. However, unlike DSP First and Signal Processing First, we prefer “analog
first” as the portal into ECE curricula, for three reasons. First, this treatment follows
more naturally onto required courses on calculus, differential equations, and physics,
which model primarily analog phenomena. Second, this approach better serves as a
cornerstone of the broader ECE curricula, preparing students for follow-on courses
in electronics (requiring circuit analysis and frequency response), electromagnetics
(needing phasors, capacitance, and inductance), solid state electronics (using differ-
ential equations), and power systems (requiring circuit analysis, complex numbers,
and phasors). Third, the concept of digital frequency is entirely foreign to students
familiar with only trigonometry and physics. Humans perceive most of the phys-
ical world to be analog. Indeed, it is not possible to fully understand digital signal
processing without considering a complete system composed of an analog-to-digital
converter, a digital filter (or other digital processing), and a digital-to-analog converter.
An analysis of this system requires substantial knowledge of Fourier transforms and
analog frequency response, which are the main emphases of this book.
Beginning with simple course notes, versions of this textbook have been used
successfully at the University of Illinois for more than a decade. As we continued to
work on this project, it became clear to us that the book would have broader appeal,
beyond those schools following the Illinois approach to the early ECE curriculum.
Indeed, this text is equally useful as either “analog first” (prior to a course on discrete-
time signal processing) or “analog second” (after a course on discrete-time signal
processing). In that sense, the text is universal. And, the book would work well
for a course that follows directly onto a standard sophomore-level course on circuit
analysis.
We invite instructors to try the approach in this text that has succeeded so well
for us. And, we urge you to look beyond the topical headings, which may sound
standard. We believe that instructors who follow the path of this book will encounter
new and better ways of teaching this foundational material. We have observed that
integration of circuit analysis with signals and systems allows students to see how
circuits are used for signal processing, and not just as mathematical puzzles where
the goal is to solve for node voltages and loop currents. Even more important, we feel
that our students develop an unusually thorough understanding of Fourier analysis
and complex numbers (see Appendix A), which is the core of our book. Students who
complete this text can design simple filters and can explain in the Fourier domain the
workings of a superheterodyne AM radio receiver. Finally, through the introduction
of a small number of labs that are intimately tied to the theory covered in lecture
(see Appendix B), we have constructed a well-rounded learning environment by
integrating theory with applications, design, and implementation.
We extend sincere thanks to all of our faculty colleagues at the University of
Illinois and elsewhere who participated at one time or another in the “ECE 210
project,” and who contributed in so many ways to the course notes that evolved into
our book. We especially thank Tangul Basar, Douglas Jones, George Papen, Dilip
Sarwate, and Timothy Trick, who have used, critiqued, and helped improve many
versions of the notes. We also thank countless students and graduate TAs – Andrea
Mitofsky in particular – for their helpful comments and catching our mistakes. Finally,
we acknowledge the influence of our prior education and reading on what we have put
to paper – this influence is so far-reaching and untraceable that we have avoided the
task of compiling a comprehensive reference list. Instead, we have included a short
list of further reading (see Appendix C). This list should be useful to readers who
wish to explore the world of signals and systems beyond what can be reached with
this book. Our very best wishes to all who are about to begin their learning journey!

Erhan Kudeki and David C. Munson, Jr.


0
Analog Signals and Systems—The Scope and Study Plan

THE SCOPE 1
STUDY PLAN 4

THE SCOPE

The world around us is teeming with signals from natural and man-made sources—
stars and galaxies, radio and TV stations, computers and WiFi cards, cell phones,
video cameras, MP3 players, musical instruments, temperature and pressure sensors,
and countless other devices and systems. In their natural form many of these signals
are analog, or continuous in time. For example, the electrical signal received by a
radio antenna may be represented as an analog voltage waveform v(t), a function of a
continuous time variable t. Similarly, sound traveling through the air can be thought
of as a pressure waveform having a specific numerical value at each instant in time
and each position in space.
Not all signals are analog. Nowadays, many signals are digital. Digital signals
are sequences of numbers. Most often, we acquire digital signals by sampling analog
signals at uniformly spaced points in time and rounding off (quantizing) the samples
to values that can be stored in a computer memory. This process of producing digital
signals is called analog-to-digital (A/D) conversion, or digitization. The acquired
sequence of numbers can be stored or processed (manipulated) by a computer. Then
it often is desired to return the values from the digital realm back to the analog world.
This is accomplished by a process called digital-to-analog (D/A) conversion, whereby
a smooth, continuous (analog) waveform, say, some v(t), is constructed that passes
through the numerical values of the digital signal.
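The A/D and D/A steps just described can be sketched numerically. The waveform, sampling rate, and quantizer step below are illustrative choices of ours, not values from the text, and the D/A step uses simple linear interpolation as one way to pass a continuous curve through the stored numbers:

```python
import math

def v(t):                      # illustrative analog waveform (assumed)
    return math.cos(2 * math.pi * 5 * t)

fs = 50.0                      # sampling rate in samples/second (assumed)
samples = [v(n / fs) for n in range(25)]      # sample at uniform instants

step = 0.25                    # quantizer step size (assumed)
digital = [round(s / step) * step for s in samples]   # round off (quantize)

def reconstruct(t):
    """D/A sketch: piecewise-linear waveform through the digital values."""
    n = min(int(t * fs), len(digital) - 2)
    frac = t * fs - n
    return digital[n] + frac * (digital[n + 1] - digital[n])

# The reconstructed waveform passes through the stored numerical values.
assert all(abs(reconstruct(n / fs) - digital[n]) < 1e-9 for n in range(24))
```

Each stored value differs from the true sample by at most half a quantizer step; shrinking `step` (more bits) tightens that rounding error.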
Modern-day signal processing systems commonly involve both analog and digital
signals. For example, a so-called digital cell phone has many analog components. Your
vocal cords create an acoustic signal (analog), which is captured by a microphone in
the cell phone and converted into an electrical signal (analog). The analog electrical
signal is then digitized—that is, sampled and quantized—to produce a (digital) signal
that is further manipulated by a computer in the cell phone to create a new digital
signal that requires fewer bits for storage and transmission and that is more resistant
to errors encountered during transmission.
Next, this digital signal is used as the input to a modulator that creates a high-
frequency analog waveform that carries the information in the coded digital signal
away from your cell phone’s antenna. At the receiving cell phone these processes
are reversed. The receiving antenna captures an analog signal (voltage versus time),
which then is passed through the demodulator to retrieve the digitally coded speech. A
computer in the receiving cell phone processes, or decodes, this sequence to recreate
the set of samples of the original speech waveform. This set of samples is passed
through a D/A converter to create the analog speech waveform. This signal is amplified
and passed to the speaker in the cell phone, which in turn creates the analog acoustic
waveform that is heard by your ear.
Figure 0.1 uses a block diagram language to summarize what we have just
described. From a high-level point of view—that is, ignoring what is happening in
individual blocks or subsystems shown in the figure—we see that the overall task of
the transmitting phone depicted at the top is to convert the analog voice input pi (t)
into a propagating analog radio wave Ei (t). The receiving phone’s task, on the bottom,
is to extract from many analog radio waves hitting its antenna (E1 (t), E2 (t), etc.) a

[Figure 0.1: two block diagrams. Transmitting phone (top): pi(t) → M → νi(t) → A/D → Vn → C → Ym → D/A → yi(t) → A → Ei(t). Receiving phone (bottom): incoming waves E1(t), E2(t), ... → A → νo(t) → A/D → Vk → C → Yl → D/A → yo(t) → S → po(t).]

Figure 0.1 Simplified cell phone models, transmitting (at the top) and receiving
(on the bottom). Blocks A denote a pair of antennas, A/D and D/A denote
analog-to-digital and digital-to-analog converters, C stands for a computer or
digital processing unit, and M and S represent a microphone and a loudspeaker,
respectively. The transmitting phone at the top converts the sound input pi (t)
(an analog pressure waveform) into a propagating radio wave Ei (t), whereas
the receiving phone on the bottom is designed to reconstruct a sound wave
po (t), which is a delayed and scaled copy of pi (t).

delayed and scaled copy of pi (t) and render it as sound. These tasks are carried out
through a combination of analog and digital signal processing steps represented by the
individual blocks of Figure 0.1. Such combinations of analog and digital processing
are common in many devices that we encounter every day: MP3 players, digital cable
or satellite TV, digital cameras, anti-lock brakes, and household appliance controls.
In this book, we focus on the mathematical analysis and design of analog signal
processing, which is carried out in the analog parts of the previously mentioned
systems. Analog processing frequently is employed in amplification, filtering (to
remove noise or interference) or equalization (to shape the frequency content of a
signal), and modulation (to piggyback a low-frequency signal onto a higher frequency
carrier signal for wireless transmission), as well as in parts of A/D and D/A converters.
Processing of analog signals typically is accomplished by electrical circuits composed
of elements such as resistors, capacitors, inductors, and operational amplifiers. Thus,
it will be necessary for us to learn more about circuit analysis than you may have seen
in preceding courses.
We wrote this book under the assumption that the reader may not be familiar
with electrical circuits much beyond the most fundamental level, which is reviewed
in Chapter 1. Chapters 2 through 4 introduce DC circuit analysis, time-varying circuit
response, and sinusoidal steady-state AC circuits, respectively. The emphasis is on
linear circuit analysis, because the discussions of linear time-invariant (LTI) systems
in later chapters develop naturally as generalizations of the concepts from Chapter 4
on sinusoidal steady-state response in linear circuits.
To utilize the cell phone models shown in Figure 0.1, we need unambiguous
descriptions of the input–output relations of their interconnected subsystems or proces-
sors. In Chapters 5 through 7 we will learn how to formulate such relations for linear,
electrical circuits as well as for other types of analog LTI systems. Chapters 5 through
7 focus on how LTI circuits and systems respond to arbitrary periodic and non-periodic
inputs, and how such responses can be represented by Fourier series and transform
techniques, respectively.
As you will see in this book, there are two different approaches to understanding
signal processing systems: Both frequency-domain and time-domain techniques are
widely employed. Our initial approach in Chapters 5 through 7 takes the frequency-
domain path, because that happens to be the easier and more natural approach to follow
if our starting point is sinusoidal steady-state circuits (i.e., in Chapter 4). This path
quickly takes us to a sufficiently advanced stage to enable a detailed discussion of AM
radio receivers in Chapter 8. However, fluency in both time- and frequency-domain
methods is necessary because, depending on the problem, one approach generally
will be easier to use or will offer more insight than the other. We therefore will turn
our attention in Chapters 9 and 10 to time-domain methods. We will translate the
frequency-domain results of Chapter 7 to their time-domain counterparts and illus-
trate the use of the time-domain convolution method, as well as impulse and impulse
response concepts. We also will learn in Chapter 9 about sampling and reconstruc-
tion of bandlimited analog signals, and get a glimpse of digital signal processing
techniques.

An analog system can produce an output even in the absence of an input signal
if there is initial energy stored in the system (like the free oscillations of a stretched
spring after its release). In Chapters 11 and 12 we will investigate the full response of
LTI systems and circuits, including their energy-driven outputs, and learn the Laplace
transform technique for solving LTI system initial-value problems. The emphasis
throughout most of the course is on circuit and system analysis—that is, deter-
mining how a circuit or system functions and processes its input signals. However, in
Chapter 12 we will discuss system design and learn how to build stable analog filter
circuits.
The book ends with three appendices. Appendix A is a review of complex numbers.
Both at the circuit level and at the block diagram level, much of the analysis and
design of signal processing systems relies on mathematics. You will be familiar
with the required mathematical subjects: algebra, trigonometry, calculus, differen-
tial equations, and complex numbers. However, most students using this text will find
that their background in complex numbers is entirely insufficient. For example, can
you explain the meaning of √−1? If not, then Appendix A is for you. Most students
will want to study this appendix carefully, parallel to Chapters 1 and 2, and then refer
to it later as needed in Chapter 3 and beyond.
Appendix B includes five laboratory worksheets that are used at the University
of Illinois in a required lab that accompanies the sophomore-level course ECE 210,
Analog Signal Processing, based on this text. The lab component of ECE 210 starts
approximately five weeks into the semester, and the biweekly labs involve simple
measurement and/or design projects related to circuit and systems concepts covered
in class. In the fourth lab session, an AM radio receiver—the topic of Chapter 8—is
assembled with components built in the earlier labs. In the fifth session, the receiver is
modified to include a PC sound card (and software), replacing the back-end hardware.
The labs provide a taste of how signal and system theory applies in practice and
illustrate how real-life signals and circuit behavior may differ from the idealized
versions described in class.
Appendix C provides a list of further reading for students who may wish to learn
more about a topic or who seek an alternative explanation.

STUDY PLAN

This book was written with a “just-in-time” approach. This means that the book tells
a story (so to speak), and new ideas and topics relevant to the story are introduced
only when needed to help the story advance. You will not find here “encyclopedic”
chapters that are stand-alone and complete treatments of distinct topics. (Chapter 1,
which is a review of circuit fundamentals, may be an exception.) Instead, topics are
developed throughout the narrative, and individual ideas make multiple appearances
just when needed as the story unfolds, much like the dynamics of individual characters
in a novel or a play.
For example, although the title of Chapter 7 contains the words “Fourier trans-
form,” the concept of the Fourier transform is foreshadowed as early as in Chapter 3,

and discussions of the Fourier transform continue into the final chapter of the book.
Thus, to learn about the Fourier transform, a full reading of the text is necessary—
reading just Chapter 7 will provide only a partial understanding. And that is true with
many of the main ideas treated in the book. We hope that students will enjoy the story
line enough to stick with it.
In ECE 210 at the University of Illinois, the full text is covered from Chapter 1
through Chapter 12 in one semester (approximately 15 weeks) in four lecture hours per
week, including a first-week lecture on complex numbers as treated in Appendix A.
Chapter 1 and much of Chapter 2 are of a review nature for most students; conse-
quently, they are treated rapidly to devote the bulk of classroom time to Chapters 3
through 12. Exposure to circuit analysis in Chapters 1 through 4 prepares students
for junior-level courses in electronics and electromagnetics, while signal processing
and system analysis tools covered throughout the entire text provide the background
for advanced courses in digital signal processing, communications, control, remote
sensing, and other areas where linear systems notions are essential. Exposure of
sophomores to the tools of linear system theory opens up many creative options for
later courses in their junior and senior years.
The story line of our book is, of course, open ended, in the sense that student
learning of the Fourier transform and its applications, and other important ideas intro-
duced here, will continue beyond Chapter 12. Because of that, we trust our students
to question “what happens next” and to pursue the plot in subsequent courses.
1
Circuit Fundamentals

1.1 VOLTAGE, CURRENT, AND POWER 7
VOLTAGE 7
CURRENT 11
ABSORBED POWER 13
1.2 KIRCHHOFF'S VOLTAGE AND CURRENT LAWS: KVL AND KCL 15
KIRCHHOFF'S VOLTAGE LAW: AROUND ANY CLOSED LOOP IN A CIRCUIT, Σ VRISE = Σ VDROP 15
KIRCHHOFF'S CURRENT LAW: AT ANY NODE IN A CIRCUIT, Σ IIN = Σ IOUT 16
1.3 IDEAL CIRCUIT ELEMENTS AND SIMPLE CIRCUIT ANALYSIS EXAMPLES 17
1.4 COMPLEX NUMBERS 26
EXERCISES 26

[Margin note: Review of voltage, current, and power; KVL and KCL; two-terminal elements]
Electrostatic attraction and repulsion between charged particles are fundamental to
all electrical phenomena observed in nature. Free charge carriers (e.g., electrons and
protons) transported against electrostatic forces gain potential energy just like a pebble
lifted up from the ground against gravitational pull. Conversely, charge carriers release
or lose their potential energy when they move in the direction of an electrostatic pull.
In circuit models of electrical systems and devices the movement of charge
carriers is quantified in terms of current variables such as iR, iL, and iC marked
on the circuit diagram shown in Figure 1.1. Voltage variables such as vs , vR , and vo
are used to keep track of energy gains and losses of carriers moving against or with
electrostatic forces. Flow channels of the carriers are represented by two-terminal
circuit elements such as R, L, and C, which are distinguished from one another by
unique voltage–current, or v–i, relations. The v–i relations plus Kirchhoff’s voltage
and current laws representing energy and charge conservation are sufficient to deter-
mine quantitatively how a circuit functions and at what rates energy is generated


[Figure 1.1: a source νs(t) drives a resistor R (current iR(t), voltage νR(t)) in series with the parallel combination of an inductor L (current iL(t)) and a capacitor C (current iC(t)); the output voltage νo(t) appears across the parallel L and C.]

Figure 1.1 An electrical circuit.

and lost in the circuit.1 These fundamental circuit concepts will be reviewed in this
chapter.

1.1 Voltage, Current, and Power

Most elementary circuit models are constructed in terms of two-terminal elements
representing the possible flow paths of electrical charge carriers. The energetics and
flow rate of charge carriers transiting each circuit element are described in terms of
the voltage, current, and power variables defined in Table 1.1.

Voltage

The definition of element voltage v given in Table 1.1 can be interpreted as energy
loss per unit charge transported from an element terminal marked by a + sign to the
second terminal marked by −; equivalently, as the energy gain per Coulomb moved
from the − to the + terminal.

Example 1.1
In Figure 1.2a, vb = 4 V stands for energy loss per unit charge transported
from the left terminal of element b marked by the + sign to the right terminal
marked by −. Equivalently, vb = 4 V can be interpreted as energy gain per
unit charge transported from the − to + terminals, or from right to left.
Thus, electrical potential energy per unit charge, or the electrical potential,
is higher at the + terminal of element b compared with its − terminal by
an amount 4 V.

1. Charge and energy conservation are fundamental to nature: Net electrical charge can neither be generated nor destroyed; if a room contains 1 C of net charge, the only way to change this amount is to move
some charged particles in or out of the room; likewise, for energy. When we talk about electrical energy
generation, use, or loss, what we really mean is conversion between electrical potential energy and some
other form of energy, e.g., mechanical, chemical, thermal, etc.

Circuit element and variables: a generic two-terminal element with current i entering the + terminal and element voltage v defined from the + to the − terminal.

Element voltage:
    v ≡ lim_{Δq→0} Δw/Δq = dw/dq,
where Δw denotes the potential energy loss of Δq amount of charge transported from the + to the − terminal.
Units: v [=] Joule (J)/Coulomb (C) = Volt (V).

Element current:
    i ≡ lim_{Δt→0} Δq/Δt = dq/dt,
where Δq denotes the net amount of electrical charge transported in the reference direction → during the time interval Δt.
Units: i [=] Coulomb (C)/second (s) = Ampere (A).

Absorbed power:
    p ≡ vi = (dw/dq)(dq/dt) = dw/dt = lim_{Δt→0} Δw/Δt,
where Δw denotes the net energy loss of charge carriers moving through the element during the time interval Δt.
Units: p [=] Joule (J)/second (s) = Watt (W).
Table 1.1 Definitions of element voltage, current, and absorbed power, and the associated units in
Systeme International (SI). The symbol [=] stands for “has the unit.”
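The three definitions in Table 1.1 can be checked against each other numerically with finite differences. The charge waveform and element voltage below are made-up values for illustration, not anything specified in the text:

```python
import math

# Made-up example: charge q(t) = 0.5*sin(2t) C is transported through an
# element held at a constant voltage of 3 V.
v_elem = 3.0                              # element voltage, volts
q = lambda t: 0.5 * math.sin(2 * t)       # transported charge, coulombs

dt = 1e-6                                 # small interval for the limits

def i(t):
    """Element current i = dq/dt (amperes), by central difference."""
    return (q(t + dt) - q(t - dt)) / (2 * dt)

def p(t):
    """Absorbed power p = v*i (watts)."""
    return v_elem * i(t)

# Consistency check: with w = v*q (constant v), p should equal dw/dt.
w = lambda t: v_elem * q(t)
dw_dt = (w(0.3 + dt) - w(0.3 - dt)) / (2 * dt)
assert abs(p(0.3) - dw_dt) < 1e-6
```

At t = 0 the current is dq/dt = 0.5·2·cos(0) = 1 A, so the element absorbs p = 3 V · 1 A = 3 W at that instant.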

[Figure 1.2: two copies of the same four-element circuit (elements a, b, c, d). In (a), element b has vb = 4 V with its + terminal on the left, and element d has vd = −1 V with its + terminal on top. In (b) the polarity marks are reversed, so vb = −4 V (+ on the right) and vd = 1 V (+ on the bottom).]

Figure 1.2 Circuits with several two-terminal elements.



Example 1.2
In Figure 1.2a, vd = −1 V indicates that energy per unit charge is lower
at the + terminal of element d on the top relative to the − terminal on
the bottom, since vd is negative. Or, equivalently, energy per unit charge is
higher at the bottom terminal relative to the top terminal.

[Margin note: Polarity]
The plus and minus signs assigned to element terminals are essential for describing
what the voltage variable stands for. These signs are said to indicate the polarity of
the element voltage, but the polarity does not indicate whether the voltage is positive
or negative. Figure 1.2b shows the same circuit as Figure 1.2a, but with the voltage
polarities assigned in the opposite way. As the next two examples show, these revised
voltage definitions describe the same physical situation as in Figure 1.2a, because the
algebraic signs of the voltage values also have been reversed in Figure 1.2b.

Example 1.3
In Figure 1.2b, vb = −4 V stands for energy loss per unit charge transported
from the right terminal of element b marked by a + sign to the left terminal
marked by −. Therefore, energy gain per unit charge transported from right
to left is 4 V, consistent with what we found in Example 1.1.

Example 1.4
In Figure 1.2b, vd = 1 V indicates that energy per unit charge is higher at
the + terminal on the bottom relative to the − terminal on the top, because
vd > 0. This is consistent with what we found in Example 1.2.

We call an element voltage such as vb a voltage rise from the − terminal to the
+ terminal, or, equivalently, a voltage drop from + to −. So, vb = 4 V in Figure 1.2a
is a 4 V rise from the right to the left terminal (− to +) and a 4 V drop from left to right.
Likewise, vd = 1 V in Figure 1.2b is a 1 V drop from bottom to top and a 1 V rise
from top to bottom.

Example 1.5
What is the voltage drop associated with vb in Figure 1.2b?
Solution In Figure 1.2b, vb is a −4 V drop from right (+) to left (−)
across element b. Remember, by definition, a drop is always from + to −
(and rise, always from − to +).

Notions of voltage drop and rise will play a useful role in formulating Kirchhoff’s
voltage law in the next section.
Closely associated with the notion of element voltage is the concept of node
voltage, or electrical potential. The connection points of element terminals in a circuit
are called the nodes of the circuit. In Figure 1.3, two of the three nodes are marked by
dots and the third node is marked by a ground symbol. The node marked by the ground
symbol is called the reference node. Distinct node voltage variables are assigned to
10 Chapter 1 Circuit Fundamentals

Figure 1.3 A circuit with four two-terminal elements and three nodes; the node voltages are labeled
v1 = 3 V and v2 = −1 V, and the element voltages are vb = v1 − v2 and vd = v2 − 0.

all the remaining nodes.2 In Figure 1.3, we have labeled node voltages v1 and v2 . By
definition, the node voltage, or electrical potential, vn stands for energy gain per unit
charge transported from the reference node to node n.3 The electrical potential v0 of
the reference node is, of course, zero.

Example 1.6
Because v1 = 3 V in Figure 1.3, the energy gain per unit charge transported
from the reference node to node 1 is 3 V, or 3 J/C. Thus, the electrical
potential at node 1 is 3 V higher than at the reference node. Equivalently,
charges transported from node 1 to the reference node lose energy at a 3 J/C
rate.

Example 1.7
In Figure 1.3, v2 = −1 V indicates that 1 C of charge transported from the
reference node to node 2 gains −1 J of energy (same as losing 1 J). Thus,
the electrical potential is higher at the reference node than at node 2.

The voltage across each element in a circuit is the difference of electrical poten-
tials of the nodes at the element terminals. To express an element voltage as a potential
difference, we subtract from the electrical potential of the + terminal the electrical
potential of the − terminal. For instance, in Figure 1.3,

vb = v1 − v2 = 3 V − (−1 V) = 4 V.

This expression reflects the fact that energy loss per unit charge transferred from node
1 to node 2 is 4 V, because electrical potential at node 1 is 4 V higher than at node 2.
Similarly, again referring to Figure 1.3,

vd = v2 − v0 = (−1 V) − 0 = −1 V.
2. The reference node generally is not electrically connected to the earth. Instead, it is an arbitrarily
chosen node that is used as the baseline for defining the other node voltages, much as sea level is
arbitrarily chosen as zero elevation.
3. An analogy is the gravitational potential energy gain of a rock lifted from the ground and placed on a
windowsill.

Figure 1.4 (a) The same as the circuit in Figure 1.3, but with reversed polarities assigned to element
voltages vb and vd . (b) The same circuit, but with a different reference node, for which the node
voltages become v1 = 4 V and v2 = 1 V.

Example 1.8
Express the voltage vb in Figure 1.4a in terms of node voltages v1 = 3 V
and v2 = −1 V.

Solution Since the + terminal of element b in Figure 1.4a has an electrical
potential of v2 = −1 V and the − terminal has potential v1 = 3 V, it follows
that

vb = v2 − v1 = (−1 V) − 3 V = −4 V.

In Figure 1.4a, elements c and d are placed in parallel, making terminal contacts
with the same pair of nodes. Thus, both elements have the same terminal potentials.
Since the assigned polarities of vc and vd are in agreement, it follows that the potential
differences, or element voltages, are vc = vd = 0 − (−1 V) = 1 V.
Finally, we note that any node in a circuit can be chosen as the reference. The
choice impacts the values of the node voltages of the circuit, but not the element volt-
ages, since the latter are potential differences. Changing the reference node causes
equal changes in all node voltage values, so that element voltages (potential differ-
ences) remain unchanged. See Figure 1.4b for an illustration.
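This reference-shift invariance is easy to check numerically. The following Python sketch (ours, not part of the text) uses the node potentials of Figure 1.3 and the +1 V shift that produces the labels of Figure 1.4b:

```python
# Node potentials from Figure 1.3 (reference node at 0 V).
v1, v2, vref = 3.0, -1.0, 0.0

# Element voltages are potential differences:
vb = v1 - v2        # 4.0 V
vd = v2 - vref      # -1.0 V

# Re-referencing: shift every node potential by the same constant
# (Figure 1.4b corresponds to shift = +1, where v1 becomes 4 V).
shift = 1.0
v1n, v2n, vrefn = v1 + shift, v2 + shift, vref + shift

# Potential differences, and hence element voltages, are unchanged.
assert v1n - v2n == vb
assert v2n - vrefn == vd
```

Any other shift value works equally well, which is exactly why the choice of reference node is arbitrary.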

Current

Figure 1.5 shows a circuit with four elements, each representing a possible path for
electrical charge flow. Current variables ia , ib , etc., shown in the figure have been
defined to quantify these flows. (See Table 1.1 for the definition of current.) A current
variable such as ib indicates the amount of net electrical charge that transits an element
per unit time in the direction indicated by the arrow → shown next to the element.

Figure 1.5 The same as Figure 1.3, but showing only the element currents ia , ib , ic = 3 A, and id .

Example 1.9
Suppose that, every second, 2 C of charge move through element b in
Figure 1.5 from left to right. Then ib = 2 A, because the arrow assigned
to ib points from left to right, the direction of 2 C/s net charge transport. If,
on the other hand, the flow from left to right is −2 C/s (same as 2 C/s from
right to left), then ib = −2 A.

Example 1.10
In Figure 1.5, ic = 3 A indicates that the amount of net charge transported
through element c from top to bottom (the direction indicated by the arrow
accompanying ic ) is 3 C/s. If the arrow direction were reversed, the same
3 C/s net flow from top to bottom would be described by ic = −3 A.

Electrical current is a measure of net charge flow. Let’s see what this really means:
If, for instance, equal numbers of positive and negative charge carriers were to move
in the same direction every second,4 then there would be no net charge transport and
the electrical current would be zero. Net charge transport requires unequal numbers of
positive and negative charge carriers moving in the same direction (per unit time) or
carriers with opposite signs moving in different directions. For instance, ic = 3 A in
Figure 1.5 could be due to top-to-bottom movement of only positive charge carriers,
bottom-to-top transport of only negative charge carriers, or a combination of these
two possibilities, such as negative carriers moving from bottom to top at a −2 C/s
rate simultaneously with a 1 C/s transport of positive carriers from top to bottom. An
element current indicates not how charge transport occurs through the element, but
what the net charge flow is.
In Figure 1.5, elements a and b are positioned in series on the same circuit branch
and therefore constitute parts of the same charge flow path. Since the assigned flow
directions of ia and ib agree, it follows that ia = ib .

4. This movement is as in the interior of the sun, where free electrons and protons of the solar plasma
move at equal rates in the direction perpendicular to solar magnetic field lines in response to electric
fields.

Example 1.11
Given that ib = 2 A in Figure 1.5, determine ia .
Solution Since the flow directions of ia and ib are the same and since
elements a and b are in series, it follows that ia = ib = 2 A.

Example 1.12
Describe ic + id in Figure 1.5.
Solution In Figure 1.5, ic + id = 3 A + id is the net amount of charge
transported from the top of the circuit to the bottom per unit time, since the
direction arrows of both ic and id point from top to bottom.

Absorbed power
In Figure 1.6, vb = 4 V is a voltage drop in the direction of element current ib = 2 A
from left to right. Therefore, each coulomb of net charge moving through element b
loses 4 J of energy, and since 2 C move every second, the product 4 V × 2 A = 8 W
stands for the total energy loss of charge carriers passing through the element per
unit time. We will refer to this product as the absorbed power for element b, since
the energy loss of charge carriers is the energy gain of the element, according to the
principle of energy conservation.

Figure 1.6 The same as Figure 1.2a, but showing the voltage and current variables defined for each
circuit element: ia = 2 A, ib = 2 A, ic = 3 A, vb = 4 V, vc = −1 V, and vd = −1 V, with va and id
not yet determined.

In general, for any two-terminal element, absorbed power is defined as (see
Table 1.1)

p ≡ vi,

where v denotes the voltage drop across the element in the direction of element
current i. Depending on the algebraic signs of v and i, the absorbed power p = vi for
an element may have a positive or negative numerical value. For instance, with vb =
4 V and ib = 2 A, the absorbed power pb = vb ib = 8 W is positive, indicating that
element b “burns,” or dissipates, 8 J of charge carrier energy every second. However,
14 Chapter 1 Circuit Fundamentals

for element c, with vc = −1 V and ic = 3 A, the absorbed power pc = vc ic = −3 W
is negative. The reason for this is that charge carriers gain energy (1 J/C) as they pass
through element c. Although it would be correct to say that the element dissipates
−3 J every second, a better description would be to say the element injects 3 J of energy
into the circuit (via moving the carriers against electrostatic forces) per second.
Circuit elements that are capable of injecting energy into circuits will be referred
to as sources (e.g., independent voltage source, controlled current source, etc.). Such
elements model physical devices that can convert nonelectrical forms of energy into
electrical potential energy of charge carriers (a battery, for instance). It is also possible
that an element stores, rather than dissipates, the absorbed energy in some form
and then re-injects the stored energy into the circuit at a later time. This process can,
of course, occur only in circuits with time-varying voltages and currents; in such
circuits capacitors and inductors act as energy storage elements, as we will see later
on. Resistors, however, literally burn (i.e., turn the absorbed energy into heat) under
all possible operation conditions.
Energy conservation requires, at each instant, that the sum of all the energy that
is burned or put into storage within a circuit must be exactly compensated by all
the energy that is injected into the circuit. Since the absorbed power variable p = vi
for each element can quantify either the absorption rate or the injection rate (with
positive and negative numerical values, respectively), we find that all the absorbed
powers (taking into account the powers for each and every element in the circuit)
must sum to zero. For instance, for the circuit shown in Figure 1.6, it is required that

pa + pb + pc + pd = pa + 8 W + (−3 W) + pd = 0,

or

pa + pd = −5 W,

where

pa = −va ia = −2va

and

pd = vd id = (−1)id .

In the next section we will determine the numerical values of va and id , and confirm
that pa + pd = −5 W. Notice that the absorbed power pa for element a is −va ia
rather than va ia , because va is a voltage rise (rather than a drop) in the direction of
current ia . (See Figure 1.6.)
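The sign convention behind p = vi is easy to mechanize. In the short Python sketch below (ours, not part of the text), a positive product means dissipation and a negative product means injection, as for elements b and c of Figure 1.6:

```python
def absorbed_power(v_drop, i):
    """Absorbed power p = v*i, where v_drop is the voltage drop
    in the direction of the current i (Table 1.1 convention)."""
    return v_drop * i

# Elements b and c of Figure 1.6:
pb = absorbed_power(4.0, 2.0)    # vb = 4 V drop, ib = 2 A  -> 8 W (dissipation)
pc = absorbed_power(-1.0, 3.0)   # vc = -1 V drop, ic = 3 A -> -3 W (injection)

# Conservation of energy: pa + pb + pc + pd = 0,
# so the remaining two powers must satisfy pa + pd = -(pb + pc).
print(pb, pc, -(pb + pc))        # 8.0 -3.0 -5.0
```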
Section 1.2 Kirchhoff’s Voltage and Current Laws: KVL and KCL 15

1.2 Kirchhoff’s Voltage and Current Laws: KVL and KCL

The values of va and id in Figure 1.6 can be determined with the aid of Kirchhoff’s
voltage and current laws (KVL and KCL), reviewed next. These laws, which are
the basic axioms of circuit theory, correspond to principles of energy and charge
conservation expressed in terms of element voltages and currents.

Kirchhoff’s voltage law: Around any closed loop in a circuit, Σ vrise = Σ vdrop .

Translating this axiom into words, KVL demands that the sum of all voltage rises
encountered around any closed loop of elements in a circuit equals the sum of all
voltage drops encountered around the same loop.
In applying this rule, you should remember that each element voltage can be
interpreted as a rise or a drop, depending on the transit direction across the element;
in constructing a KVL equation, the voltage of each element should be added to only
one side of the equation, depending on whether the loop traverses the element from
minus to plus (called a voltage rise) or from plus to minus (called a voltage drop), as
illustrated next:

Example 1.13
We traverse Loop 1 of Figure 1.7(a) in the clockwise direction and obtain
the KVL equation

va = vb + vc ,

since in the clockwise direction va appears as a voltage rise (we rise from
the − to the + terminal as we traverse element a) and vb and vc appear as
voltage drops (we drop from the + to − terminals in each case). Substituting
the values for vb and vc gives

va = 4 V + (−1) V = 3 V.

Notice that this result does not depend on the direction of travel around the
loop. Traversing Loop 1 in the counterclockwise direction yields

vc + vb = va ,

which is simply a rewritten version of the earlier equation.


Finally, we can apply KVL to Loop 2 in the clockwise direction,
yielding

vc = vd .

Figure 1.7 Figure 1.6 redrawn (a) without the element currents, (b) without the element voltages,
and (c) after application of KVL and KCL, which furnishes va = 3 V, ia = 2 A, and id = −1 A.

Substituting for vc in the prior equation gives

vb + vd = va ,

which is a KVL equation for the outer loop of the circuit.

 
Kirchhoff’s current law: At any node in a circuit, Σ iin = Σ iout .

In plain words, KCL demands that the sum of all the currents flowing into a node
equals the sum of all the currents flowing out.
In applying this rule, we just need to pay attention to the arrows indicating the
flow directions of the element currents present at each node.

Example 1.14
KCL applied to Node 2 of Figure 1.7(b) states that

ib = ic + id ,

since ib flows into the node, while both ic and id flow out. Likewise, the
KCL equation for Node 1 is

ia = ib .
Section 1.3 Ideal Circuit Elements and Simple Circuit Analysis Examples 17

Combining these equations gives

ia = ic + id ,

which is the KCL equation for the node at the bottom of the diagram. Now,
with ib = 2 A and ic = 3 A, the preceding KCL equations indicate that
ia = ib = 2 A and id = ib − ic = 2 A − 3 A = −1 A.

In Figure 1.7(c) we redraw Figure 1.6, showing the numerical values of va and
id that we have deduced in earlier examples. From the figure, we see that

pd = vd id = 1 W

and

pa = −va ia = −6 W.

Therefore,

pa + pd = −5 W,

as claimed in the previous section.
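The KVL and KCL steps above, together with the power check, can be scripted. A Python sketch of ours for Figure 1.6 (the variable name id_ avoids shadowing Python’s built-in id):

```python
# Known element values from Figure 1.6.
vb, vc, vd = 4.0, -1.0, -1.0       # volts
ib, ic = 2.0, 3.0                  # amperes

# KVL around Loop 1 (va is a rise; vb and vc are drops): va = vb + vc.
va = vb + vc                       # 3.0 V

# KCL: ia = ib at Node 1, and ib = ic + id at Node 2.
ia = ib                            # 2.0 A
id_ = ib - ic                      # -1.0 A

# Absorbed powers; pa = -va*ia because va is a rise along ia.
pa, pb, pc, pd = -va * ia, vb * ib, vc * ic, vd * id_
assert pa + pb + pc + pd == 0.0    # powers absorbed in a circuit sum to zero
```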

1.3 Ideal Circuit Elements and Simple Circuit Analysis Examples

Circuit models of electrical systems and multiterminal electrical devices can be repre-
sented by a small number of idealized two-terminal circuit elements, which will be
reviewed in this section. These elements have been defined to account for energy
dissipation (resistor), injection (independent and dependent sources), and storage
(capacitor and inductor) and are distinguished from one another by unique voltage–
current, or v–i, relations.

Ideal resistor: An ideal resistor is a two-terminal circuit element having the v–i
relation

v = Ri

known as Ohm’s law. In this relation, R ≥ 0 is a constant known as the resistance
and v denotes the voltage drop in the direction of current i, as shown in Figure 1.8.
Resistance R is measured in units of V/A = Ohms (Ω).
Resistors cannot inject energy into circuits, because Ohm’s law v = Ri and the
absorbed power formula p = vi imply that, for a resistor,

p = i²R = v²/R ≥ 0,

Figure 1.8 The circuit symbol for an ideal resistor, with current i in the direction of the voltage drop v.

since R ≥ 0. Resistors primarily are used to represent energy sinks in circuit models
of electrical systems.5 The zigzag circuit symbol of the resistor is a reminder of the
frictional energy loss of charge carriers moving through a resistor.

Example 1.15
Figure 1.9a shows a resistor carrying a 2 A current in the direction of a
4 V drop. Ohm’s law v = Ri indicates that the corresponding resistance
value is

R = v/i = 4 V/2 A = 2 Ω.

The resistor absorbs a power of p = vi = 8 W (or i²R = 2² × 2 = 8 W).

Figure 1.9 Ideal resistor examples: (a) R unknown, with i = 2 A in the direction of a v = 4 V drop;
(b) R unknown, with a current labeled −1.5 A whose reference arrow opposes a 6 V drop; (c) R = 4 Ω
with a 2 V drop and current i unknown.

Example 1.16
For the resistor shown in Figure 1.9b, and using Ohm’s law,

R = v/i = 6 V/1.5 A = 4 Ω.

Notice that to calculate R we used i = 1.5 A rather than −1.5 A, because
the element current in the direction of the 6 V drop is 1.5 A; Ohm’s law
requires i to be the current in the direction of the voltage drop.

Example 1.17
For the resistor shown in Figure 1.9c, the element current is

i = v/R = 2 V/4 Ω = 0.5 A.
5. Negative resistance can be used to model certain energy sources. We will first encounter negative
resistances at the end of Chapter 2 when we study the Thevenin and Norton equivalent circuits.

The special cases of an ideal resistor with R = 0 and R = ∞ are known as short-
circuit and open-circuit elements, or short and open, respectively. Their special circuit
symbols are shown in Figure 1.10.

Figure 1.10 Circuit symbols for a “short” (R = 0, so v = 0) and an “open” (R = ∞, so i = 0).

For a short, R = 0, and therefore v = 0, independent of the current i; whereas,
for an open, R = ∞, and therefore i = 0, independent of the voltage v. Consequently,
the absorbed power vi is zero in both cases.6 A short can conduct any externally driven
current; likewise, an open can maintain any value of externally imposed voltage across
its terminals.

Independent voltage source: An independent voltage source is an element that
maintains a specified potential difference vs between its terminals, independent of
the value of element current i. Its circuit symbol, shown in Figure 1.11, indicates the
polarity7 of voltage vs . The value of the current i through the voltage source depends
on the circuit containing the source and can be determined only by an analysis of the
complete circuit.

Figure 1.11 Circuit symbol for the independent voltage source, showing the source voltage vs and
element current i.

Example 1.18
For the circuit in Figure 1.12a with a 4 V voltage source, KVL and Ohm’s
law imply that

(8 Ω) × iR = 4 V;

so

iR = 4 V/8 Ω = 0.5 A.
6. This is because in a short (R = 0) there is zero friction, while in an open (R = ∞) charge transport,
and thus energy dissipation, cannot take place at all.
7. More precisely, the plus and minus indicate that the potential difference between the plus and minus
terminals is vs . They do not indicate that the potential at the plus terminal is higher than the potential at
the minus terminal. In other words, vs may be positive or negative.

Figure 1.12 Circuit examples with independent voltage sources: (a) a 4 V source connected across
an 8 Ω resistor, with source current i and resistor current iR ; (b) a series loop containing a 4 V source,
2 Ω and 3 Ω resistors, and a 2 V source.

Since KCL requires that i = iR (for the given reference directions of i and
iR ), the current of the 4 V source is i = 0.5 A.

Example 1.19
For the circuit shown in Figure 1.12b, a single current variable i is sufficient
to represent all the element currents (indicated as a clockwise flow), since
all four elements of the circuit are in series. In terms of i, KVL for the
circuit is

4 = 2i + 3i + 2.

Therefore,

i = (4 − 2)/(2 + 3) = 2/5 = 0.4 A.

Independent current source: An independent current source is an element that
maintains a specified current flow is between its terminals, independent of the value of
the element voltage v. Its circuit symbol, shown in Figure 1.13, indicates the reference
direction8 of the current is . The voltage v across the current source is affected by the
circuit containing the source and can be determined only by analysis of the complete
circuit.

Figure 1.13 Circuit symbol for the independent current source, showing the source current is and
element voltage v.

8. The current may have a positive value, in which case positive charges flow in the direction of the
arrow (or negative charges flow opposite the direction of the arrow), or it may have a negative value, in
which case positive charges flow opposite the direction of the arrow.

Example 1.20
For the circuit in Figure 1.14a, with the 3 A current source, KCL and Ohm’s
law imply that

3 A = vR /5,

since vR /5 is the resistor current flowing from top to bottom. Hence,

vR = 3 × 5 = 15 V.

Notice that KVL requires that v = vR = 15 V.

Figure 1.14 Circuits containing independent current sources: (a) a 3 A source connected across a
5 Ω resistor, with source voltage v = vR ; (b) a 4 A source, a 2 Ω resistor, a 1 Ω resistor, and a 1 A
source connected in parallel.

Example 1.21
In Figure 1.14b the short at the top of the circuit, enclosed within the dashed
oval, can be regarded as a single node, because the potential difference
between any two points along a short is zero. Likewise, the bottom of the
circuit can be regarded as a second node. The voltage drop in the circuit from
top to bottom is denoted as v, which can be regarded as the element voltage
of all four components of the circuit connected in parallel. Consequently, the
2 Ω and 1 Ω resistors conduct v/2 and v/1 ampere currents from top to bottom,
respectively. Thus, the KCL equation for the top node can be written as

4 = v/2 + v/1 + 1,

from which we find

v = (4 − 1)/(1/2 + 1/1) = 3/1.5 = 2 V.

Using v = 2 V, we now can calculate the resistor currents. For instance, a
current of

2 V/2 Ω = 1 A

flows through the 2 Ω resistor from top to bottom.
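Example 1.21’s single-pair-of-nodes KCL generalizes the same way: the net source current injected into the top node equals v times the sum of the resistor conductances. A Python sketch of ours (the helper name is hypothetical):

```python
def parallel_node_voltage(source_currents_in, resistances):
    """Node voltage from KCL for sources and resistors sharing one
    node pair: sum of injected currents = v * sum of conductances."""
    conductance = sum(1.0 / r for r in resistances)
    return sum(source_currents_in) / conductance

# Figure 1.14b: 4 A injected, 1 A drawn out (i.e., -1 A injected),
# with 2-ohm and 1-ohm resistors in parallel.
v = parallel_node_voltage([4.0, -1.0], [2.0, 1.0])
print(v, v / 2.0)   # 2.0 V across the node pair; 1.0 A in the 2-ohm resistor
```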

Figure 1.15 Circuit symbols of dependent (or controlled) sources: a dependent voltage source with
voltage vs and current i, and a dependent current source with current is and voltage v.

Dependent sources: Dependent voltage and current sources are represented by
the diamond-shaped symbols shown in Figure 1.15. These source elements have the
same characteristics as independent voltage and current sources, respectively, except
that vs and is depend on a voltage or current vx or iy defined elsewhere in the circuit
containing the dependent source. The dependencies may take the forms

vs = Avx or Biy
is = Cvx or Diy ,

where A and D are dimensionless constants and B and C are constants with units of
Ω and Ω⁻¹ ≡ Siemens (S), respectively. A dependent voltage source specified as

vs = Biy ,

for instance, is a current-controlled voltage source.

Example 1.22
Figure 1.16 shows a circuit with a voltage-controlled current source

is = 2vx ,

where vx denotes the voltage across a 2 Ω resistor. The KCL equation at
the top node is

i1 + 2vx = vx /2,

Figure 1.16 A circuit with a dependent current source. Voltage vx regulates the
current supplied by the dependent current source into the circuit.

leading to the result

i1 = −(3/2)vx .

Using this result with

4 = 2i1 + vx

(which is the KVL equation of the loop on the left), we find the following
solution:

vx = −2 V and i1 = 3 A.
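Since Example 1.22 boils down to two linear equations in the pair (i1, vx), it can also be handed to a linear solver. A NumPy sketch of ours (NumPy is an assumption on our part; it is not used in the text):

```python
import numpy as np

# Unknowns x = [i1, vx].
# KCL at the top node: i1 + 2*vx = vx/2  ->  1*i1 + 1.5*vx = 0
# KVL around the left loop: 2*i1 + vx = 4
A = np.array([[1.0, 1.5],
              [2.0, 1.0]])
b = np.array([0.0, 4.0])

i1, vx = np.linalg.solve(A, b)
print(i1, vx)   # i1 ≈ 3.0 A, vx ≈ -2.0 V
```

The same pattern scales directly to the node-voltage and loop-current methods of Chapter 2, where circuits produce larger systems of linear equations.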

Example 1.23
Figure 1.17 shows a circuit with a current-controlled voltage source

vd = −(1/2)ib ,

where ib is the current through a 2 Ω resistor. The KVL equation around
the outer loop of the circuit is

3 = 2ib + (−(1/2)ib ),

from which it follows that

ib = 2 A.

Hence,

vd = −(1/2)ib = −1 V

and

vb = 2ib = 4 V.

Figure 1.17 A circuit containing a current-controlled voltage source vd = −(1/2)ib , together with a
3 V source, a 3 A source, and a 2 Ω resistor.



Also, the KCL equation at the top node requires that

ib = 3 + id ;

thus,

id = ib − 3 = 2 − 3 = −1 A.
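The substitution pattern of Example 1.23, where the controlled-source relation is inserted into KVL before solving, can be mirrored in a few lines of Python (a sketch of ours):

```python
# Controlled-source relation: vd = -(1/2)*ib.
# Outer-loop KVL: 3 = 2*ib + vd = (2 - 0.5)*ib.
ib = 3.0 / (2.0 - 0.5)   # 2.0 A
vd = -0.5 * ib           # -1.0 V
vb = 2.0 * ib            # 4.0 V, Ohm's law for the 2-ohm resistor
id_ = ib - 3.0           # -1.0 A, from KCL at the top node: ib = 3 + id
print(ib, vd, vb, id_)   # 2.0 -1.0 4.0 -1.0
```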

Capacitor and inductor: Ideal capacitors and inductors are two-terminal circuit
elements with v–i relations

i = C dv/dt (capacitor)

v = L di/dt (inductor),

where

C ≥ 0 and L ≥ 0

are constants known as capacitance and inductance, respectively, and v denotes the
voltage drop in the direction of current i, as shown in Figures 1.18a and 1.18b. The
capacitance C is measured in units of A s/V = Farads (F) and the inductance L in
units of V s/A = Henries (H).

Figure 1.18 Circuit symbols for the ideal capacitor (a) and the ideal inductor (b), each with current i
in the direction of the voltage drop v.

In the previous equations, dv/dt and di/dt denote the time derivatives of v and i,
respectively. In circuits with time-independent voltages and currents, the derivatives

dv/dt = 0 and di/dt = 0,

and, consequently, capacitors and inductors behave as open (i.e., zero capacitor
current) and short (i.e., zero inductor voltage) circuits, respectively. We refer to such

circuits as DC circuits (DC for “direct current,” meaning constant current) and use the
term AC (AC for “alternating current”) to describe circuits with time-varying voltages
and currents. Capacitors and inductors play nontrivial roles only in AC circuits, where
they respond differently than opens and shorts.
In AC circuits, capacitors and inductors represent energy storage elements, since
they can re-inject the energy absorbed from charge carriers back to carrier motions.
For instance, for a capacitor with capacitance C, the absorbed power is

p = vi = vC dv/dt = d/dt(½Cv²),

and, therefore, the quantity

w ≡ ½Cv²

stands for the net absorbed energy of the capacitor, in units of J. When the absorbed
power p = vi = dw/dt is positive, w increases with time so that the capacitor is drawing
energy from the circuit. Conversely, a negative p = vi = dw/dt is associated with a
decrease in w, that is, energy injection back into the circuit. This indicates that a
capacitor does not dissipate its absorbed energy, but rather stores it in a form available
for subsequent release when vi turns negative. We therefore will refer to w = ½Cv²
as the stored energy of the capacitor. It can likewise be argued that the stored energy
of an inductor L carrying a current i is

w = ½Li².

Energy storage in a capacitor is associated with the separation of oppositely
signed electrical charges between the two terminals of the element; the circuit symbol
of the capacitor shows a pair of plates where these charge populations may be
maintained separately in a physical capacitor.9 Energy storage in an inductor is associated
with the generation of magnetostatic fields in space due to the inductor current; the
circuit symbol of the inductor is a representation of the helical configuration of
physical inductors. The helical configuration is particularly efficient in generating intense
magnetostatic fields and magnetic flux linkage.10
Analysis of AC circuits with capacitors and inductors will be delayed until
Chapter 3. The physics of energy storage in capacitors and inductors typically is
discussed in a first course in electromagnetics.

9. The amount of charge stored on a capacitor plate is given as q = Cv, since the time-rate of change of
q, namely, the derivative dq/dt = C dv/dt, matches the capacitor current i = C dv/dt.
10. An inductor with current i generates a flux linkage of λ = Li, and the time-rate of change of λ, namely,
dλ/dt = L di/dt, corresponds to the inductor voltage v = L di/dt. For a helical inductor coil, the flux
linkage λ is the product of the magnetic flux φ generated by current i and the number of turns of the helix.

1.4 Complex Numbers

The analysis of resistive DC circuits—the main topic of Chapter 2—requires the use
of only real numbers and real algebra. By contrast, calculating AC or time-varying
response of circuits with capacitors and inductors may require solving differential
equations, which often can be simplified with the use of complex numbers. We
will first encounter complex numbers in Chapter 3, where we solve simple first-
order differential equations describing RC and RL circuits with sinusoidal inputs.
Complex numbers also will arise in the solution of second and higher order differential
equations, irrespective of the input. Starting with Chapter 4, we will rely on complex
arithmetic to provide an efficient solution to AC circuit problems where the signals
are sinusoidal. More advanced circuit and system analysis methods (Fourier series,
Fourier transform, and Laplace transform), discussed in succeeding chapters, will
require a solid understanding of complex numbers, complex functions, and complex
variables.
A review of complex numbers and arithmetic is provided in Appendix A at the
end of this book. A short introduction to complex functions and complex variables is
included as well. You should read Appendix A and then work the complex number
exercises at the end of this chapter and Chapter 2 well before entering Chapters 3
and 4. It is important that you feel comfortable with complex addition, subtraction,
multiplication, and division; understand the conversions between rectangular, polar,
and exponential forms of complex numbers; and understand the relationship between
trigonometric functions such as cos(ωt) and complex exponentials e^{jωt} . In particular,
Euler’s identity and its implications will play a crucial role in Chapter 4 and the rest
of this book.
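As a warm-up for Appendix A, the conversions between rectangular, polar, and exponential forms map directly onto Python’s cmath module. A sketch of ours, using the value A = 3 − j3 from Exercise 1.11:

```python
import cmath

A = 3 - 3j                        # rectangular form (Exercise 1.11)
r, theta = cmath.polar(A)         # magnitude 3*sqrt(2), angle -pi/4 rad

# Exponential form A = r*e^{j*theta}; Euler's identity gives
# e^{j*theta} = cos(theta) + j*sin(theta).
A_again = r * cmath.exp(1j * theta)
print(r, theta)
print(abs(A_again - A) < 1e-12)   # True
```

Note that Python writes the imaginary unit as j appended to a literal (3j), matching the electrical-engineering convention used throughout this book.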

EXERCISES

1.1 In the following circuit, determine R and vs :

[Circuit diagram: source vs , resistor R, and a 4 Ω resistor, with labeled voltages of 4 V and 2 V]

1.2 In the following circuit, determine i:

[Circuit diagram: a 6 V source, a 2 Ω resistor, and a 4 A current source]

Exercises 27

1.3 (a) In the following circuit, determine all of the unknown element and node
voltages:

[Circuit diagram: a circuit with nodes 1, 2, 3, 4 and a reference node; elements include a 2 V source
between nodes 1 and 2, a 4 Ω resistor with voltage vc between nodes 2 and 3, 2 Ω and 1 Ω resistors
with voltages va and vb , 3 A sources, and elements with voltages vd and ve ]
ν4

(b) What is the voltage drop in the above circuit from the reference to
node 4?

(c) What is the voltage rise from node 2 to node 3?

(d) What is the voltage drop from node 1 to the reference?

1.4 (a) A volume of ionized gas filled with free electrons and protons can be
modeled as a resistor. Consider such a resistor model supporting a 6 V
potential difference between its terminals. We are told that in 1 s,
6.2422 × 10¹⁸ protons move through the resistor in the direction of the
6 V drop (say, from left to right) and 1.24844 × 10¹⁹ electrons move
in the opposite direction. What is the net amount of electrical charge
that transits the element in 1 s in the direction of the 6 V drop and
what is the corresponding resistance R? Note that electrical charge q is
1.602 × 10⁻¹⁹ C for a proton and −1.602 × 10⁻¹⁹ C for an electron.

(b) Does a proton gain or lose energy as it transits the resistor? How many
joules? Explain.

(c) Does an electron gain or lose energy as it transits the resistor? How
many joules? Explain.

(d) A second resistor with 6 V potential difference conducts 1.87266 ×
10¹⁹ electrons every second, but no proton is allowed to move through
it. Compare the current, resistance, and absorbed power of the two
resistors.

1.5 In the circuit pictured here, one of the independent voltage sources is injecting
energy into the circuit, while the other one is absorbing energy. Identify the
source that is injecting the energy absorbed in the circuit and confirm that
the sum of all absorbed powers equals zero.

[Circuit diagram: a 6 V source and a 4 V source joined by two 1 Ω resistors in a single loop carrying
current i]

1.6 In the circuit pictured here, one of the independent current sources is injecting
energy into the circuit, while the other one is absorbing energy. Identify the
source that is injecting the energy absorbed in the circuit and confirm that
the sum of all absorbed powers equals zero.

[Circuit diagram: a 3 A source, a 1 Ω resistor, and a 1 A source in parallel]

1.7 Calculate the absorbed power for each element in the following circuit and
determine which elements inject energy into the circuit:

[Circuit diagram: a 3 A source, a 1 Ω resistor, and a −1 A source in parallel]

1.8 In the circuit given, determine ix and calculate the absorbed power for each
circuit element. Which element is injecting the energy absorbed in the circuit?

[Circuit diagram: a 2 V source connected to a dependent source 5ix , where ix is the element current]

1.9 In the circuit given, determine vx and calculate the absorbed power for each
circuit element. Which element is injecting the energy absorbed in the circuit?

[Circuit for Exercise 1.9: a 6 A source, a 2 Ω resistor supporting voltage vx, and a dependent source 2vx connected in parallel.]

Exercises 29

1.10 Some of the circuits shown next violate KVL or KCL and/or basic definitions
of two-terminal elements given in Section 1.3. Identify these ill-specified
circuits and explain the problem in each case.

[Circuit (a): a 2 V source and a 3 V source connected in parallel. Circuit (b): a 2 V source, a 4 A source, and a 1 Ω resistor in parallel.]
[Circuit (c): a 3 V source in parallel with a 2 Ω resistor. Circuit (d): a 2 A source and a 3 A source connected in series.]
[Circuit (e): a 6 V source, a 2 A source, a 1 Ω resistor, and a 2 Ω resistor.]

1.11 (a) Let A = 3 − j3. Express A in exponential form.
(b) Let B = −1 − j1. Express B in exponential form.
(c) Determine the magnitudes of A + B and A − B.
(d) Express AB and A/B in rectangular form.

1.12 Do Exercise 1.11(a) through (d), but with A = −3 − j3 and B = 1 + j2.


1.13 (a) Determine the rectangular forms of e^{j0}, e^{jπ/2}, e^{−jπ/2}, e^{jπ}, e^{−jπ}, and e^{j2π}.
(b) Simplify P = e^{jπ} + e^{−jπ}, Q = e^{jπ/2} + e^{−jπ/2}, and R = 1 − e^{jπ}.
(c) Show that e^{jπm/2} = (−1)^{m/2}.

1.14 (a) Determine the rectangular forms of 7e^{jπ/4}, 7e^{−jπ/4}, 5e^{j3π/4}, and 5e^{−j3π/4}.
(b) Simplify P = 2e^{j5π/4} − 2e^{−j5π/4}, Q = 8e^{−jπ/4} − 8e^{jπ/4}, and R = e^{j3π/4}/e^{−jπ/4}.

1.15 (a) Prove that CC* = |C|^2.
(b) Prove that (C1C2)* = C1*C2*.

1.16 (a) Prove that |C1C2| = |C1||C2|.
(b) Prove that |1/C| = 1/|C|.
(c) Prove that |C1/C2| = |C1|/|C2|.

1.17 Show graphically on the complex plane that |C1 + C2 | ≤ |C1 | + |C2 |.
1.18 (a) The function f(t) = e^{jπt/4}, for real-valued t, takes on complex values.
Plot the values of f(t) on the complex plane, for t = 0, 1, 2, 3, 4, 5, 6,
and 7.
(b) Repeat (a), but for the complex-valued function g(t) = e^{(−1/8 + jπ/4)t}.
2 Analysis of Linear Resistive Circuits

2.1 RESISTOR COMBINATIONS AND SOURCE TRANSFORMATIONS 31
2.2 NODE-VOLTAGE METHOD 38
2.3 LOOP-CURRENT METHOD 43
2.4 LINEARITY, SUPERPOSITION, AND THEVENIN AND NORTON EQUIVALENTS 48
2.5 AVAILABLE POWER AND MAXIMUM POWER TRANSFER 60
EXERCISES 63

Most analog signal processing systems are built with electrical circuits. Thus, the analysis and design of signal processing systems requires proficiency in circuit analysis, meaning the calculation of voltages and currents (or voltage and current waveforms) at various locations in a circuit. In this chapter we will describe a number of analysis techniques applicable to linear resistive circuits composed of resistors and some combination of independent and dependent sources. The techniques developed here will be applied in Chapter 3 to circuits containing operational amplifiers, capacitors, and inductors that can be used for signal processing purposes. Later, our basic techniques introduced here will be further developed in Chapter 4 for the analysis of linear circuits containing sinusoidal sources.
The topics to be covered in this chapter include resistor combinations and source transformations (analysis via circuit simplification), node-voltage and loop-current methods (systematic applications of KCL and KVL), and Thevenin and Norton equivalents of linear resistive networks and their interactions with external loads.
[Margin note: Strategies for circuit simplification; node-voltage and loop-current methods; linearity and superposition; coupling and available power of resistive networks]

2.1 Resistor Combinations and Source Transformations

Analysis of resistive circuits frequently can be simplified via transformation to simpler


equivalent circuits, using resistor combination and source transformation techniques.
These approaches and some of their applications will be discussed in this section.


[Figure: (a) a source vo with series resistance Ro driving two parallel branches: R1 in series with R2 (carrying current i1), and Ra in parallel with Rb (supporting voltage va); (b) the same circuit with the branches replaced by the equivalents R1 + R2 and RaRb/(Ra + Rb).]
Figure 2.1 Resistors in series and parallel.

2.1.1 Resistor combinations and voltage and current division


Resistors R1 and R2 in Figure 2.1a carry the same current i1 through a single branch
of the circuit (leftmost branch) and therefore are said to be in series. Resistors Ra and
Rb , on the other hand, support the same voltage va between the same pair of nodes
and are said to be in parallel. We can analyze the circuit after simplifying it to the
form shown in Figure 2.1b, using the series and parallel equivalents of these resistor
pairs. We next will describe the simplification procedure and illustrate how to analyze
the circuit.
In Figure 2.1a we recognize the total resistance of the left branch containing R1
and R2 as R1 + R2 , since the voltage drop across the branch from top to bottom is

R1 i1 + R2 i1 = (R1 + R2 )i1 ≡ Rs i1 .

Thus, a single resistor

Rs = R1 + R2

is the series equivalent of resistors R1 and R2, and replaces them in the simplified version of the circuit shown in Figure 2.1b. In general, the series equivalent of N resistors R1, R2, ..., RN (all carrying the same current on a single circuit branch) is

Rs = R1 + R2 + · · · + RN.

Likewise, we note the total resistance of the two parallel branches on the right of Figure 2.1a as RaRb/(Ra + Rb), because the total current conducted from top to bottom through these branches is

va/Ra + va/Rb = va(1/Ra + 1/Rb) ≡ va/Rp.

Thus, a single resistor

Rp = (1/Ra + 1/Rb)^−1 = RaRb/(Ra + Rb)

is the parallel equivalent of resistors Ra and Rb, and replaces them in the simplified version of the circuit shown in Figure 2.1b. In general, the parallel equivalent of N resistors R1, R2, ..., RN (all supporting the same voltage between the same pair of nodes) is

Rp = (1/R1 + 1/R2 + · · · + 1/RN)^−1.

Example 2.1
Figure 2.2a replicates Figure 2.1a, but with numerical values for the resis-
tors and the source. We next will solve for the unknown current i supplied
by the source, using the technique of resistor combinations.
A simplified version of the circuit after series and parallel resistor
combinations is shown in Figure 2.2b, where the series combination is

Rs = 1 + 5 = 6 Ω

and the parallel combination is

Rp = (3 × 6)/(3 + 6) = 2 Ω.

Now notice that in Figure 2.2b the 6 Ω and 2 Ω resistors are in parallel, since they support the same voltage between the same pair of nodes. Thus, we replace them with their parallel equivalent

Rp = (6 × 2)/(6 + 2) = 12/8 = 1.5 Ω

and obtain the circuit shown in Figure 2.2c.

[Figure 2.2 circuits: (a) a 4 V source in series with 0.5 Ω, driving a branch with 1 Ω and 5 Ω in series (current i1) in parallel with a 3 Ω and 6 Ω pair (current ia, voltage va); (b) the same circuit with the combinations Rs = 6 Ω and Rp = 2 Ω; (c) the 4 V source and 0.5 Ω in series with the 1.5 Ω equivalent; (d) the 4 V source across a single 2 Ω equivalent.]
Figure 2.2 A resistive circuit (a) and its equivalents (b), (c), (d).

Finally, in Figure 2.2c we notice that the 0.5 Ω and 1.5 Ω resistors are in series and replace them by their series equivalent

Rs = 0.5 + 1.5 = 2 Ω

to obtain Figure 2.2d.


Clearly now, the source current in the circuit is

i = 4 V / 2 Ω = 2 A.
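The reductions in this example are easy to script. A minimal Python sketch (the helper names series and parallel are ours, not from the text) that retraces the steps of Example 2.1:

```python
def series(*resistors):
    # Resistances in series add: Rs = R1 + R2 + ... + RN.
    return sum(resistors)

def parallel(*resistors):
    # Conductances in parallel add: Rp = (1/R1 + ... + 1/RN)^(-1).
    return 1.0 / sum(1.0 / r for r in resistors)

Rs = series(1, 5)                        # left branch: 1 + 5 = 6 ohms
Rp = parallel(3, 6)                      # 3 || 6 = 2 ohms
Rtotal = series(0.5, parallel(Rs, Rp))   # 0.5 + (6 || 2) = 2 ohms
i = 4.0 / Rtotal                         # source current: 4 V / 2 ohms = 2 A
print(Rtotal, i)                         # 2.0 2.0
```

Any ladder network can be collapsed this way by alternating the two helpers from the far end of the circuit toward the source.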

Suppose we had wanted to solve for va in Figure 2.2a. Our analysis then could
have stopped with Figure 2.2c, which shows two resistors in series. This is a special
case of the circuit shown in Figure 2.3a. Here we see that
i = vs/Rs = vs/(R1 + R2),

and, therefore,

v1 = (R1/(R1 + R2)) vs

and

v2 = (R2/(R1 + R2)) vs.

These voltage division equations tell us how the total voltage vs across two resistors is divided between the elements.

[Figure: (a) a source vs driving R1 and R2 in series, with voltages v1 and v2 across them and current i; (b) a source is driving R1 and R2 in parallel, with currents i1 and i2 and common voltage v.]
Figure 2.3 Voltage (a) and current (b) division.

Example 2.2
Apply voltage division to find va in Figure 2.2c.

Solution The total voltage across the resistors in series is 4 V. Thus, applying voltage division, we find

va = (1.5/(0.5 + 1.5)) × 4 V = 6/2 V = 3 V.

We can check this result by multiplying i = 2 A with 1.5 Ω, which, of course, gives the same result.

Suppose we had wanted to find ia in Figure 2.2a. The equivalent circuit in Figure 2.2b makes it clear that the current i splits into the two currents i1 and ia. The general situation is shown in Figure 2.3b. Here,

v = is·Rp = is·R1R2/(R1 + R2),

and

i1 = v/R1,   i2 = v/R2.

Therefore,

i1 = (R2/(R1 + R2)) is

and

i2 = (R1/(R1 + R2)) is.

These current-division equations tell us how the total current is conducted by two resistors is split between them.

Example 2.3
Apply current division to find ia in Figure 2.2b.
Solution The parallel resistors carry a total current of i = 2 A. (See Example 2.1.) Thus, using current division, we obtain

ia = (6/(6 + 2)) × 2 A = 12/8 A = 1.5 A.

We can check this result by dividing va = 3 V (see Example 2.2) with 2 Ω, which gives the same result.
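Both division rules are one-liners. A quick Python check of Examples 2.2 and 2.3 (the function names are ours, not from the text):

```python
def voltage_divider(v_total, r_this, r_other):
    # Voltage across r_this when r_this and r_other are in series.
    return v_total * r_this / (r_this + r_other)

def current_divider(i_total, r_this, r_other):
    # Current through r_this when r_this and r_other are in parallel;
    # note that the OTHER resistance appears in the numerator.
    return i_total * r_other / (r_this + r_other)

va = voltage_divider(4.0, 1.5, 0.5)   # Example 2.2: 3.0 V
ia = current_divider(2.0, 2.0, 6.0)   # Example 2.3: 1.5 A
print(va, ia)                         # 3.0 1.5
```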

[Figure: (a) a voltage source vs in series with resistor Rs, with terminals a and b, terminal voltage v, and terminal current i; (b) a current source is in parallel with resistor Rs, with the same terminals a and b.]
Figure 2.4 Two networks that are equivalent when vs = Rs is . The term
“network” refers to a circuit block with two or more external terminals.

2.1.2 Source transformations

In analyzing circuits, it sometimes is advantageous to replace a voltage source in


series with a resistor by a current source in parallel with a resistor of the same value.
Figure 2.4 shows these source–resistor combinations. It can be shown that, when
terminals a and b are attached to another circuit, these source–resistor combinations
will cause the same currents and voltages to occur in that circuit, so long as the source
values are related by

vs = Rs·is ⇔ is = vs/Rs.

In this case the source-resistor combinations are said to be equivalent, because they
produce the same result when attached to a third circuit.

Proof: To prove that the source combinations in Figure 2.4 are equivalent
for vs = Rs is , we will find the expression for v in both circuits, in terms of
the terminal current i (assumed to be carried by an external element or circuit
connected between terminals a and b). Applying KVL in Figure 2.4a, we first
find that vs = Rs i + v, or

v = vs − Rs i.

Applying KCL in Figure 2.4b, we next find that is = v/Rs + i, or

v = Rs is − Rs i.

Clearly, these expressions are identical for vs = Rs is , in which case the two
networks are equivalent because they apply the same voltage and inject the
same current into an element or circuit attached to terminals a and b.
The next examples illustrate application of the source transformation method
based on the equivalence of the networks shown in Figure 2.4 under the condition
vs = Rs is .
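Using the values of the upcoming Example 2.4 (vs = 4 V, Rs = 0.5 Ω), a quick numerical check that the two networks present the same terminal voltage for any terminal current:

```python
Rs = 0.5                   # source resistance (ohms)
vs = 4.0                   # voltage-source form (volts)
i_s = vs / Rs              # matching current-source form: 8 A

for i in (0.0, 1.0, 2.5, -3.0):         # arbitrary terminal currents
    v_voltage_form = vs - Rs * i        # KVL in Figure 2.4a
    v_current_form = Rs * i_s - Rs * i  # KCL in Figure 2.4b
    assert abs(v_voltage_form - v_current_form) < 1e-12
print("equivalent at the terminals")
```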

[Figure 2.5 circuits: (a) the circuit of Figure 2.2a: a 4 V source in series with 0.5 Ω, driving the 1 Ω + 5 Ω branch (current i1) in parallel with the 3 Ω and 6 Ω resistors (voltage va); (b) the same circuit after the 4 V source and 0.5 Ω pair is replaced by an 8 A source in parallel with 0.5 Ω.]
Figure 2.5 A source transformation example.

Example 2.4
Our goal is to find i1 in Figure 2.5a.
We begin by exchanging the 4 V source in series with the 0.5 Ω resistor with a

4 V / 0.5 Ω = 8 A

current source in parallel with a 0.5 Ω resistor. This exchange is permissible, as proven earlier, and is called a “source transformation.” After the source transformation, the circuit in Figure 2.5a becomes the circuit shown in Figure 2.5b.
In this modified circuit, the current i1 through the 6 Ω branch on the left can be calculated easily by current division. Since the parallel equivalent of the three resistors on the right is 1/2.5 Ω, we find that

i1 = 8 A × (1/2.5)/((1/2.5) + 6) = 8/(1 + 15) A = 0.5 A.

Example 2.5
In Figure 2.6 successive uses of source transformations and resistor combi-
nations are illustrated to determine the unknown voltage voc . The simplified
and equivalent versions of Figure 2.6a shown in Figures 2.6e and 2.6f are
known as Thevenin and Norton equivalents (to be studied in Section 2.4).
From either Figure 2.6e or 2.6f, we clearly see that

voc = 1 V.

Resistor combinations, voltage and current division, and source transformations


often are useful in the analysis of small circuits. But what if the circuit has thousands
of loops and nodes? We need systematic ways of solving for voltages and currents
in large circuits. In the next two sections we describe two procedures for systematic
analysis of circuits: node analysis and loop analysis.

[Figure 2.6: circuit (a) and its successively simplified equivalents (b)–(f); the final forms are (e) a 1 V source in series with 2 Ω and (f) a 0.5 A source in parallel with 2 Ω, each presenting the open-circuit voltage voc at its terminals.]
Figure 2.6 Simplification of circuit (a) by source transformations (a→b, c→d, and e→f) and resistor
combinations (b→c and d→e) to its Thevenin (e) and Norton (f) equivalents. Note that in step b→c two
current sources in parallel are combined into a single current source.

2.2 Node-Voltage Method

The node-voltage method is a systematic circuit analysis procedure that is based on the node-voltage concept (see Chapter 1 for a review) and application of KCL. The node-voltage method can be used for analyzing all possible resistive circuits, however complicated they may be. The popular and powerful circuit analysis software SPICE is based on the node-voltage method for that reason. The step-by-step procedure is as follows:
(1) In a circuit with N + 1 nodes, declare N node-voltage variables v1 , v2 , · · ·, vN
with respect to a reference node (node 0), which is assigned a node voltage of
zero.
(2) At each node with an unknown node voltage, write a KCL equation expressed
in terms of node voltages.
(3) Solve the system of algebraic equations obtained in step 2 for the unknown
node voltages.
Once all the node voltages are known, element voltages and currents easily can be
calculated from the node voltages.

Example 2.6
Consider the circuit shown in Figure 2.7. The reference node is indicated
by the ground symbol; v1 and v2 denote two unknown node voltages; and
node voltage v3 = 3 V has been directly identified, since there is an explicit
3 V rise from the reference to node 3 provided by an independent voltage
source.
Following step 2 of the node-voltage method, we next construct KCL
equations for nodes 1 and 2 where v1 and v2 were declared:

[Figure: node voltages v1, v2, and v3 = 3 V; a 2 A source into node 1, a 2 Ω resistor between nodes 1 and 2, a 4 Ω resistor from node 2 to the reference, a 1 Ω resistor between nodes 2 and 3, and a 3 V source from the reference to node 3.]
Figure 2.7 A node-voltage method example.

For node 1 we have

2 = (v1 − v2)/2,

where (v1 − v2)/2 denotes the current away from node 1 through the 2 Ω resistor. Likewise, for node 2 we have

(v1 − v2)/2 = (v2 − 0)/4 + (v2 − 3)/1,

or, equivalently,

0 = (v2 − v1)/2 + (v2 − 0)/4 + (v2 − 3)/1,

where each term on the right corresponds to a current leaving node 2.
These KCL equations obtained from nodes 1 and 2 can be simplified to

v1 − v2 = 4
−2v1 + 7v2 = 12.

Their solution (step 3) is

v1 = 8 V
v2 = 4 V.

We can calculate the element voltages and currents in the circuit by using these node voltages and v3 = 3 V.
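The two simplified node equations form a 2×2 linear system. Solving it by Cramer's rule in Python reproduces the result (solve2 is our helper name, not from the text):

```python
def solve2(a11, a12, b1, a21, a22, b2):
    # Cramer's rule for  a11*x + a12*y = b1,  a21*x + a22*y = b2.
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Example 2.6:  v1 - v2 = 4  and  -2*v1 + 7*v2 = 12
v1, v2 = solve2(1, -1, 4, -2, 7, 12)
print(v1, v2)   # 8.0 4.0
```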

Example 2.7
The circuit shown in Figure 2.8 contains a voltage-controlled current source and three unknown node voltages marked as v1, v2, and v3. We can apply the node-voltage method to the circuit by writing 3 KCL equations in which each occurrence of the voltage vx controlling the dependent current source is expressed in terms of node voltage v3.
However, application of simple voltage division on the right side of the circuit also shows that v3 = v2/2, as already marked on the circuit diagram.

[Figure: node voltages v1, v2, and v3 = v2/2; a dependent current source 2vx at node 1, a 2 Ω resistor between nodes 1 and 2, a 1 A source into node 2, a 3 Ω resistor between nodes 2 and 3, and a 3 Ω resistor (supporting vx) from node 3 to the reference.]

Figure 2.8 Another node-voltage example. Note that voltage vx , which controls
the dependent current source on the left, also can be written as node voltage v3 .

Therefore, we also should be able to solve this problem by writing down 2 KCL equations in terms of the two unknowns v1 and v2. That approach is illustrated next.
We write the KCL equation for node 1 as

2(v2/2) + (v1 − v2)/2 = 0,

where each term on the left side represents a current away from node 1, written in terms of the node-voltage unknowns. Notice how the voltage-controlled current 2vx has been represented as 2(v2/2) in terms of node voltage v3 = v2/2.
The KCL equation for node 2 can be written as

(v1 − v2)/2 + 1 = (v2 − 0)/6,

where the left side is the sum of currents into node 2 and the right side is a current away from node 2 through the 3 Ω resistor in series with a second 3 Ω resistor.
Simplifying these equations gives

v1 + v2 = 0
v1 − 3v2 = −6,

and solving them yields

v1 = −3 V
v2 = 3 V.

Also, v3 = v2/2 = 1.5 V.

Example 2.7 illustrated how controlled sources are handled in the node-voltage
method. Their contributions to the required KCL equations are entered after their
values are expressed in terms of the node-voltage unknowns of the problem. The
example also illustrated that if any of the unknown node voltages can be expressed
readily in terms of other node voltages, the number of KCL equations to be formed
can be reduced. The next example also illustrates such a reduction.

Example 2.8
Consider the circuit shown in Figure 2.9. In this circuit v1 = 2 V has been
directly identified, v2 has been declared as an unknown node voltage, and
it has been recognized at the outset that

v3 = v2 + 1

because of the 1 V rise from node 2 to node 3 provided by the 1 V inde-


pendent voltage source.

[Figure: node voltages v1 = 2 V, v2, and v3 = v2 + 1; a 2 V source from the reference to node 1, a 2 Ω resistor between nodes 1 and 2, a 1 Ω resistor from node 2 to the reference, a 1 V source between nodes 2 and 3 (carrying current ix), a 2 Ω resistor from node 3 to the reference, and a 2 A source at node 3; a dashed oval (super-node) encloses nodes 2 and 3.]
Figure 2.9 A node-voltage problem with a super-node.

To solve for the unknown node voltage v2, we need to write the KCL equation for node 2. This equation can be expressed as

(v2 − 2)/2 + (v2 − 0)/1 + ix = 0,

where we make use of a current variable ix, which, according to the KCL equation written for node 3, is given by

ix = 2 + ((v2 + 1) − 0)/2.

Eliminating ix between the two KCL equations, we obtain

(v2 − 2)/2 + (v2 − 0)/1 + 2 + ((v2 + 1) − 0)/2 = 0,

which is, in fact, a single KCL equation for a super-node in Figure 2.9, indicated by the dashed oval. (See the subsequent discussions of the super-node “trick.”) Solving the equation, we obtain

v2 = −0.75 V.

Furthermore,

v3 = v2 + 1 = 0.25 V.
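The super-node equation is linear in the single unknown v2, so it can be solved by collecting the coefficient and constant terms. A Python check of Example 2.8:

```python
# Super-node KCL: (v2 - 2)/2 + v2/1 + 2 + (v2 + 1)/2 = 0,
# which has the form a*v2 + b = 0 with:
a = 1/2 + 1 + 1/2     # coefficient of v2  -> 2.0
b = -2/2 + 2 + 1/2    # constant terms     -> 1.5
v2 = -b / a
v3 = v2 + 1
print(v2, v3)         # -0.75 0.25
```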

The expression that we previously called the “super-node” KCL equation states that the sum of all currents out of the region in Figure 2.9, surrounded by the dashed oval, equals zero. Since there is no current flow into this region, the statement has the form of a KCL equation applied to the dashed oval. Thus, the dashed oval is an example of what is called a “super-node” in the node-voltage technique. Super-nodes can be defined around ordinary nodes connected by voltage sources, as in the previous example. When solving small circuits by hand, we find that clever application of the super-node concept sometimes can shorten the solution. We illustrate in the next example.

[Figure: node voltages v1, v2 = v1 + 4, and v3 = 3ix; a 4 V source between nodes 1 and 2, a 3 Ω resistor between nodes 2 and 3 carrying ix, a dependent voltage source 3ix with a 4 Ω resistor on the right, and a 1 Ω resistor from node 1 to the reference; a dashed oval (super-node) encloses nodes 1 and 2.]
Figure 2.10 A node-voltage problem with a dependent source and a super-node.

Example 2.9
Figure 2.10 shows a circuit with a dependent source and a super-node. (See
the dashed oval.) All node voltages in the circuit have been marked in terms
of an unknown node voltage v1 and current ix that controls the dependent
voltage source.
Using Ohm’s law, we first note that current

ix = (v2 − v3)/3 = ((v1 + 4) − 3ix)/3,

from which we obtain

ix = (v1 + 4)/6

in terms of the unknown voltage v1. We then can express the super-node KCL equation as

v1/1 + (v1 + 4)/6 = 0,

which can be solved for v1 as

v1 = −4/7 V.

Thus,

v2 = v1 + 4 = 24/7 V,

ix = (v1 + 4)/6 = 4/7 A,

and

v3 = 3ix = 12/7 V.
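Exact rational arithmetic makes the sevenths easy to verify. A Python check using the standard-library fractions module:

```python
from fractions import Fraction as F

# Super-node KCL: v1/1 + (v1 + 4)/6 = 0  ->  (7/6)*v1 = -4/6
v1 = F(-4, 7)
assert v1 + (v1 + 4) / 6 == 0   # the KCL balance holds exactly
ix = (v1 + 4) / 6
v2 = v1 + 4
v3 = 3 * ix
print(v1, v2, ix, v3)           # -4/7 24/7 4/7 12/7
```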

2.3 Loop-Current Method

The loop-current method is an alternative circuit analysis procedure based on system-


atic application of KVL. When solving, by hand, small planar circuits1 that contain
primarily voltage sources, the loop-current method can be quicker than the node-
voltage method. The next two examples will clarify what we mean by “loop current”
and illustrate the main features of the method.

Example 2.10
In Figure 2.11a we see a single-loop circuit. All the element currents in the
circuit can be represented in terms of a single loop-current variable i as
shown in the diagram. A KVL equation for the loop is

3 V = 2i + 4(2i) + 2 V;

[Figure 2.11: (a) a single loop with a 3 V source, a 2 Ω resistor (voltage vx), a dependent source 4vx, and a 2 V source; loop current i. (b) a two-loop circuit: a 5 V source and a 2 Ω resistor in the left loop (current i1), a 1 Ω resistor (current ix) shared between the loops, and a 3 Ω resistor and a −2 V source in the right loop (current i2).]
Figure 2.11 Loop-current method examples.

¹ Planar circuits are circuits that can be diagrammed on a plane with no element crossing another element.

therefore,

i = (3 − 2) V / (2 + 8) Ω = 0.1 A

and

vx = 2i = 0.2 V.

Example 2.11
Figure 2.11b shows a circuit with two elementary loops (loops that contain no further loops within), which have been assigned distinct loop-current variables i1 and i2. In analogy with the previous problem, we consider i1 to be the element current of the 5 V source and 2 Ω resistor on the left. Likewise, i2 can be considered as the current of the 3 Ω resistor and −2 V source on the right. Furthermore, the current ix through the 1 Ω resistor in the middle can be expressed in terms of loop currents i1 and i2; using KCL at the top node of the resistor, we see that

ix = i1 − i2.

Clearly, all the element currents in the circuit are either directly known or can be calculated once the loop currents i1 and i2 are known. We will determine i1 and i2 by solving two KVL equations constructed around the two elementary loops of the circuit.
The KVL equation for the loop on the left is

5 = 2i1 + 1(i1 − i2),

where 1(i1 − i2) = 1·ix denotes the voltage drop across the 1 Ω resistor from top to bottom. Likewise, for the loop on the right we have

1(i1 − i2) = 3i2 + (−2),

or, equivalently,

3i2 + (−2) + 1(i2 − i1) = 0,

where 1(i2 − i1) denotes the voltage drop across the 1 Ω resistor from bottom to top. Rearranging the two equations, we obtain

3i1 − i2 = 5

and

−i1 + 4i2 = 2.

Solving these two equations in two unknowns yields

i1 = 2 A

and

i2 = 1 A.

Consequently,

ix = i1 − i2 = 1 A,

and the voltage drop across the 1 Ω resistor from top to bottom is 1 Ω × ix = 1 V. The remaining element voltages also can be calculated in a similar way using the loop currents i1 and i2.
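The two KVL equations again form a 2×2 system. Solving it by Cramer's rule (the helper name solve2 is ours) confirms the loop currents:

```python
def solve2(a11, a12, b1, a21, a22, b2):
    # Cramer's rule for  a11*x + a12*y = b1,  a21*x + a22*y = b2.
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Example 2.11:  3*i1 - i2 = 5  and  -i1 + 4*i2 = 2
i1, i2 = solve2(3, -1, 5, -1, 4, 2)
ix = i1 - i2     # current through the 1-ohm resistor
print(i1, i2, ix)   # 2.0 1.0 1.0
```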

The preceding examples illustrate the main idea behind the loop-current method; namely, loop-current values are calculated for each elementary loop in the circuit. Then, if desired, element currents or voltages can be calculated from the loop currents. A step-by-step procedure for calculating the loop currents is as follows:

(1) In a planar circuit with N elementary loops, declare N loop-current variables


i1 , i2 , · · ·, iN ,
(2) Around each elementary loop with an unknown loop current, write a KVL
equation expressed in terms of loop currents.
(3) Solve the system of algebraic equations obtained in step 2 for the unknown
loop currents.

Example 2.12
Consider the circuit shown in Figure 2.12. The circuit contains three elemen-
tary loops; therefore, there are three loop currents in the circuit. Two of these
have been declared as unknowns i1 and i2 , and the third one has been recog-
nized as i3 = 2 A to match the 2 A current source on the right. Since we have
only two unknown loop currents i1 and i2 , we need to write only two KVL
equations. The KVL equation for loop 1 (where i1 has been declared) is

14 = 2i1 + 3ix .

[Figure: a 14 V source and a 2 Ω resistor in loop 1 (current i1); a dependent voltage source 3ix between loops 1 and 2; a 4 Ω resistor in loop 2 (current i2); a 1 Ω resistor carrying ix; a 2 A source and a 2 Ω resistor in loop 3 (i3 = 2 A).]
Figure 2.12 Loop-current example for a circuit containing a current source.



Likewise, the KVL equation for loop 2 (where i2 has been declared) is

3ix = 4i2 + 1(i2 + 2).

Note that ix can be expressed as

ix = i2 + 2,

since the direction of loop currents i2 and i3 = 2 A coincide with the direc-
tion of ix . Substituting for ix in the two KVL equations and rearranging
as

2i1 + 3i2 = 8

and

2i2 = 4,

we are finished with the implementation of step 2. Clearly then, the solution
(that is, step 3) is

i2 = 2 A

and

i1 = 1 A.

Example 2.13
Consider the circuit shown in Figure 2.13 with three loop currents declared
as i1 , i2 , and i3 . Nominally, we need three KVL equations, one for each
elementary loop. However, we note that loops 2 and 3 are separated by a
2 A current source and consequently,

i3 = i2 + 2,

[Figure: a 2.6 V source in loop 1 (current i1); three 1 Ω resistors, two of them shared with loops 2 and 3; a 2 A source (voltage vx) on the boundary between loops 2 and 3, so that i3 = i2 + 2; a 2 Ω resistor in loop 3; a dashed contour (super loop) encloses loops 2 and 3.]
Figure 2.13 A loop-current example with a super-loop equation.

as already marked in the diagram. Therefore, in the KVL equations every


occurrence of i3 can be expressed as i2 + 2 so that the equations contain
only two unknowns, i1 and i2 .
We start with loop 1 and obtain

1(i1 − i2 ) + 1(i1 − (i2 + 2)) + 1i1 = 2.6,

expressing i3 as (i2 + 2). The KVL equation for loop 2 is

1i2 + vx + 1(i2 − i1 ) = 0,

where

vx = 2(i2 + 2) + 1((i2 + 2) − i1 )

is the voltage drop across the 2 A source as determined by writing the KVL
equation around loop 3. Eliminating vx between the two equations, we
obtain

1i2 + 2(i2 + 2) + 1((i2 + 2) − i1 ) + 1(i2 − i1 ) = 0,

which is in fact the KVL equation around the dashed contour shown in Figure 2.13, which can be called a “super loop,” in analogy with the super-node concept. The KVL equations for loop 1 and the dashed contour (super loop) simplify to

3i1 − 2i2 = 4.6


−2i1 + 5i2 = −6.

Their solution is

i1 = 1 A
i2 = −0.8 A,

and, consequently,

i3 = i2 + 2 = 1.2 A.
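The loop-1 and super-loop equations form one more 2×2 system. A Python check using Cramer's rule (helper name ours):

```python
def solve2(a11, a12, b1, a21, a22, b2):
    # Cramer's rule for  a11*x + a12*y = b1,  a21*x + a22*y = b2.
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det

# Example 2.13:  3*i1 - 2*i2 = 4.6  (loop 1),  -2*i1 + 5*i2 = -6  (super loop)
i1, i2 = solve2(3, -2, 4.6, -2, 5, -6)
i3 = i2 + 2      # current-source constraint between loops 2 and 3
print(i1, i2, i3)   # approximately 1, -0.8, 1.2
```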

Super loops such as the dashed contour in Figure 2.13 can be formed around
elementary loops sharing a common boundary that includes a current source. In such
cases, either of the elementary loop currents can be expressed in terms of the other
(e.g., i3 = i2 + 2 in Example 2.13, earlier) and therefore, instead of writing two KVL
equations around the elementary loops, it suffices to formulate a single KVL equation
around the super loop.

[Figure: node voltages v1 = vs, v2, and v3; a voltage source vs at node 1; a 4 Ω resistor between nodes 1 and 2; a 2 Ω resistor between nodes 1 and 3 carrying ix; a dependent current source 2ix at node 2; a current source is from node 2 to node 3; a 1 Ω resistor from node 3 to the reference.]
Figure 2.14 Circuit with arbitrary current and voltage sources is and vs .

2.4 Linearity, Superposition, and Thevenin and Norton Equivalents

We might wonder how scaling the values of the independent sources in a resistive
circuit will affect the value of some particular voltage or current. For example, will
a doubling of the source values result in a doubling of the voltage across a specified
resistor? More generally, we can ask how circuits respond to their source elements.
By “circuit response,” we refer to all voltages and currents excited in a circuit, and
we consider whether the combined effect of all independent sources is the sum of the
effects of the sources operating individually. We shall reach some general conclusions
by examining these questions within the context of the node-voltage approach.
We first note that the application of KCL in node-voltage analysis always leads
to a set of linear algebraic equations with node-voltage unknowns and forcing terms
proportional to independent source strengths. For instance, for the circuit shown in
Figure 2.14, the KCL equations to be solved are

v1 = vs
(v2 − v1)/4 + 2·(v1 − v3)/2 + is = 0
v3/1 + (v3 − v1)/2 = is,

in which the forcing terms vs and is are due to the independent sources in the circuit. Dependent sources enter the KCL equations in terms of node voltages,

2ix = 2·(v1 − v3)/2

(as, for instance, in the second of the preceding equations); therefore, dependent sources do not contribute to the forcing terms. The consequence is that node-voltage solutions are linear combinations of independent source strengths only, as in

v1 = vs
v2 = −(5/3)vs − (4/3)is
v3 = (1/3)vs + (2/3)is,
which is the solution to the KCL equations for the circuit of Figure 2.14.2 Since we
can calculate any voltage or current in a resistive circuit by using a linear combination
of node voltages and/or independent source strengths (as we have seen in Section 2.2),
the following general statement can be made:

Superposition principle: Let f1 , f2 , · · ·, fN be the complete set of inde-


pendent source strengths (such as vs and is in Figure 2.15a) in a resistive
circuit. Then, any electrical response y in the circuit (such as voltage v in
Figure 2.15a or some element current) can be expressed as

y = K1 f1 + K2 f2 + · · · + KN fN ,

where K1 , K2 , · · ·, KN are constant coefficients, unique for each response y.


Applying the superposition principle to the resistive circuit in Figure 2.15a, for
instance, we can write

v = K1 vs + K2 is ,

where K1 and K2 are constant coefficients (to be determined in the next section). What
this means is that voltage v is a weighted linear superposition of the independent
sources vs and is in the circuit. Likewise, any electrical response y in any resistive
circuit is a weighted linear superposition of the independent sources f1 , f2 , · · ·, fN .
Since any response in a resistive circuit can be expressed as a weighted linear superposition of independent sources, such circuits are said to be linear. We see from this property that a doubling of all independent sources will indeed double all circuit responses and that every circuit response is a sum of the responses due to the individual sources. We shall examine this latter point more closely in the next section.
The linearity property is not unique to resistive circuits; circuits containing resis-
tors, capacitors, and inductors also can be linear, as we will see and understand in
² These results can be viewed as the solution of the matrix problem

[  1     0    0  ] [v1]   [ vs ]
[  3/4  1/4  −1  ] [v2] = [ −is ]
[ −1/2   0   3/2 ] [v3]   [ is ]

for the node-voltage vector on the left-hand side. Since the solution is the product of the source vector on the right with the inverse of the coefficient matrix on the left, the node-voltage values are linear combinations of the elements of the source vector and, hence, linear combinations of vs and is.
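The quoted linear-combination solution can be substituted back into the three KCL equations for arbitrary source strengths. A Python sketch using exact fractions (the function name node_solution is ours):

```python
from fractions import Fraction as F

def node_solution(vs, is_):
    # The linear-combination solution quoted in the text for Figure 2.14.
    v1 = vs
    v2 = -F(5, 3) * vs - F(4, 3) * is_
    v3 = F(1, 3) * vs + F(2, 3) * is_
    return v1, v2, v3

for vs, is_ in ((1, 0), (0, 1), (3, -2)):   # sample source strengths
    v1, v2, v3 = node_solution(F(vs), F(is_))
    assert v1 == vs                                      # node-1 equation
    assert (v2 - v1) / 4 + 2 * (v1 - v3) / 2 + is_ == 0  # node-2 equation
    assert v3 / 1 + (v3 - v1) / 2 == is_                 # node-3 equation
print("solution satisfies all three KCL equations")
```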

[Figure 2.15: (a) a voltage source vs in series with 2 Ω, feeding a node with two 4 Ω resistors to the reference and a current source is; v is the voltage across the 4 Ω resistors; (b) the same circuit with the current source suppressed (open-circuited) and vs = 1 V; (c) the same circuit with the voltage source suppressed (short-circuited) and is = 1 A.]
Figure 2.15 (a) A linear resistive circuit with two independent sources, (b) the same circuit with
suppressed current source and vs = 1 V, (c) the same circuit with suppressed voltage source and is = 1 A.

later chapters. On the other hand, circuits containing even a single nonlinear element,
such as a diode, ordinarily will behave nonlinearly, meaning that circuit responses
will not be weighted superpositions of independent sources in the circuit. A diode
is nonlinear because its v–i relation cannot be plotted as a straight line. By contrast,
resistors have straight-line v–i characteristics.

2.4.1 Superposition method


The superposition principle just introduced can be exploited in many ways in resistive
circuit analysis.
For instance, as already discussed, voltage v in Figure 2.15a can be expressed as

v = K1 vs + K2 is .

To determine K1 , we notice that, for vs = 1 V and is = 0,

v = K1 × (1 V).

But with vs = 1 V and is = 0, the circuit simplifies to the form shown in Figure 2.15b, since with is = 0 the current source becomes an effective open circuit. Analyzing the circuit of Figure 2.15b with the suppressed current source, we find that (using resistor combination and voltage division, for instance) v = 1/2 V. Therefore, v = K1 × (1 V) implies that

K1 = 1/2.
Likewise, to determine K2 , we let vs = 0 and is = 1 A in the circuit so that

v = K2 × (1 A)

in the modified circuit with the suppressed voltage source, shown in Figure 2.15c. We find that in Figure 2.15c (using resistor combination and Ohm’s law) v = 1 V. Therefore, v = K2 × (1 A) implies that

K2 = 1 V/A = 1 Ω.

Hence, combining vs and is with the weight factors K1 and K2, respectively, we find that in the circuit shown in Figure 2.15a,

v = (1/2)vs + 1 Ω × is.
Note that each term in this sum represents the contribution of an individual source
to voltage v. Likewise, the superposition principle implies that any response in any
resistive circuit is the sum of individual contributions of the independent sources
acting alone in the circuit. We will use this notion in Example 2.14.

Example 2.14
In a linear resistive circuit with two independent current sources i1 and i2 ,
a resistor voltage vx = 2 V when

i1 = 1 A and i2 = 0 A.

But when

i1 = −1 A and i2 = 3 A,

it is found that vx = 4 V. Determine vx if

i1 = 0 A and i2 = 5 A.

Solution Because the circuit is linear, we can write

vx = K1 i1 + K2 i2 .

Using the given information, we then obtain

2 = K1
4 = −K1 + 3K2 .

Thus,

K2 = (4 + K1 )/3 = 2 Ω,
52 Chapter 2 Analysis of Linear Resistive Circuits

and we have

vx = 5K2 = 10 V

for i1 = 0 A and i2 = 5 A.
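The two test conditions in this example pin down K1 and K2 as the solution of a 2 × 2 linear system; the same computation can be sketched in plain Python (Cramer's rule, no external libraries):

```python
def solve2(a, b, c, d, e, f):
    """Solve [a b; c d] [x; y] = [e; f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - c * e) / det

# Measurements of vx = K1*i1 + K2*i2:
#   (i1, i2, vx) = (1 A, 0 A, 2 V) and (-1 A, 3 A, 4 V)
k1, k2 = solve2(1, 0, -1, 3, 2, 4)
print(k1, k2)              # 2.0 2.0
print(k1 * 0 + k2 * 5)     # predicted vx for i1 = 0, i2 = 5 A -> 10.0
```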

Example 2.15
Determine v2 in Figure 2.7 from Section 2.2 using source suppression and
superposition.
Solution First, suppressing (setting to zero) the 2 A source in the circuit,
we find that v2 due to the 3 V source is
v2 = [4/(1 + 4)] × 3 V = 12/5 V.
To understand this expression, you must redraw Figure 2.7 with the 2 A
source replaced by an open circuit and apply voltage division in the revised
circuit. Second, suppressing the 3 V source, we calculate that v2 due to the
2 A source is
v2 = [(4 × 1)/(4 + 1)] Ω × 2 A = 8/5 V.
We obtain this expression by redrawing Figure 2.7 with the 3 V source set
equal to zero, which implies that the voltage source is replaced by a short
circuit. Finally, according to the superposition principle, the actual value of
v2 in the circuit is the sum of the two values just calculated, representing
the individual contributions of each source. Hence,
v2 = 12/5 + 8/5 = 4 V,
5 5
which agrees with the node-voltage analysis result obtained in Example 2.6
in Section 2.2.
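A quick numerical cross-check of this example is possible once a topology is assumed; the node equation below (3 V source through the 1 Ω resistor into node v2, 4 Ω to ground, 2 A injected at v2) is inferred from the voltage-division and resistor-combination expressions above, not read directly from Figure 2.7:

```python
# Assumed node equation at v2: (v2 - vsrc)/1 + v2/4 = isrc
def v2(vsrc, isrc):
    return (vsrc / 1 + isrc) / (1/1 + 1/4)

by_voltage_source = v2(3, 0)   # 12/5 = 2.4 V (2 A source suppressed)
by_current_source = v2(0, 2)   # 8/5 = 1.6 V (3 V source suppressed)
print(by_voltage_source + by_current_source)   # 4.0 V, as in Example 2.6
```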

2.4.2 Thevenin and Norton equivalents of resistive networks


Signal processing systems typically are composed of very large, interconnected cir-
cuits. For purposes of understanding how these circuits operate together, it often is
advantageous to model one or more of these complex circuits by a much simpler
circuit that is easier to understand and to work with mathematically. Here we are not
suggesting a replacement of the physical circuit, but rather the introduction of a model
whose mathematical behavior mimics that of the physical circuit.
Toward this end, consider the diagram shown in Figure 2.16a. The box on the
left contains an arbitrary resistive network (e.g., the network shown in Figure 2.15a)
including possibly a large number of independent and dependent sources. This is the
network that we wish to model with a simpler circuit which will have equivalent

[Figure 2.16: circuit diagrams]

Figure 2.16 (a) A linear circuit containing an independent current source i and some linear resistive
network, (b) the linear resistive network and its terminal v–i relation, (c) Thevenin equivalent of the
network, and (d) Norton equivalent.

behavior as seen by the outside world. To incorporate the outside world, suppose that
the network terminals are connected to a second circuit that draws current i and results
in voltage v across the terminals. We have not shown the second circuit explicitly;
instead, we have modeled the effect of the second circuit by the current source i.
Now, what might the simple replacement be for the resistive network? Because
the network is resistive, its terminal voltage v will vary with current i in a linear form,
say, as

v = vT − RT i,

where RT and vT are constants that depend on what is within the resistive network.3
Both vT and RT may be either positive or negative.
Figure 2.16b emphasizes that the boxed resistive network is governed at its termi-
nals by an equation having the preceding form. Any circuit having the same terminal
v–i relation will be mathematically equivalent to the resistive network. The simplest
such circuit is shown in Figure 2.16c and is known as the Thevenin equivalent.
Since we started by assuming that the box contains an arbitrary resistive network,
this result means that all resistive networks can be represented in the simple form of
Figure 2.16c! A second equivalent circuit is obtained as the source transform of the
Thevenin equivalent (see Figure 2.16d) and is known as the Norton equivalent. Both

3. A more rigorous argument for this relation is as follows: Since the circuit of Figure 2.16a is linear, the
voltage v is a weighted linear superposition of all the independent sources within the box, as well as the
independent current source i external to the box. Expressing the overall contributions of the sources within
the box as vT , and declaring a weighting coefficient −RT for the contribution of i, we obtain v = vT − RT i
with no loss of generality.

of these equivalent circuits4 are useful for studying the coupling of resistive networks
to external loads and other circuits.
We next will describe how to calculate the Thevenin voltage vT , Norton current
iN , and Thevenin resistance RT . We will then illustrate in the next section the use of
the Thevenin equivalent in an important network coupling problem.
Thevenin voltage vT : For i = 0, the terminal v–i relation of a resistive network,
that is,
v = vT − RT i,
reduces to
v = vT .
Thus, to find the Thevenin voltage vT of a network, it is sufficient to calculate its
output voltage v in the absence of an external load at the network terminals. The
Thevenin voltage vT , therefore, also is known as the “open-circuit voltage” of the
network.
As an example, consider the network of Figure 2.15a, which is repeated in
Figure 2.17a, with vs = 2 V and is = 1 A. Using the result
v = (1/2) vs + (1 Ω) × is
from the last section, with vs = 2 V and is = 1 A, we calculate that v = 2 V is the
open-circuit voltage of the network (i.e., the terminal voltage in the absence of an
external load). Hence, Thevenin voltage of the network is
vT = 2 V.

Norton current iN : We next will see that the Norton current of a linear network,
defined as (see Figure 2.16d)
iN ≡ vT /RT ,
also happens to be the current that flows through an external short placed between the
network terminals (as in Figure 2.17b) in the direction of the voltage drop adopted
for v and vT . The termination shorts out the network output voltage v, and thus the
relation v = vT − RT i is reduced to
0 = vT − RT i,
implying that the short-circuit current of the network is
i = vT /RT = iN .

4. These two circuits are named after Leon C. Thevenin (1857–1926) of the French Postes et Telegraphes
and Edward L. Norton (1898–1983) of Bell Labs.

[Figure 2.17: circuit diagrams]
Figure 2.17 (a) A linear network with terminal voltage v, (b) the same network with an external short,
(c) Thevenin equivalent, (d) Norton equivalent, and (e) the same network as (a) after source suppression.

Thus, to determine the Norton current of the network in Figure 2.17a, we place
a short between the network terminals and calculate the short-circuit current i, as
shown in Figure 2.17b. Applying KCL at the top node gives

1 = (0 − 2)/2 + i,

because there is zero voltage across the two 4 Ω resistors. Therefore, the short-circuit
current of the network is i = 2 A, and consequently,

iN = 2 A.

The Norton current of any linear network can be calculated as a short-circuit current.

Thevenin resistance RT : One way to determine the Thevenin resistance RT is to
rewrite the formula for Norton current as

RT = vT /iN

and use it with known values of vT (open circuit voltage) and iN (short circuit current).
For the network shown in Figure 2.17a, for example,

RT = vT /iN = 2 V / 2 A = 1 Ω;

the corresponding Thevenin and Norton equivalent circuits are as shown in Figures
2.17c and 2.17d, respectively.
The Thevenin resistance RT also can be determined directly by a source suppres-
sion method without first finding the Thevenin voltage and Norton current. Before

we describe the method, we observe that if all the independent source strengths in
Figure 2.17a were halved, that is,
1 A → 0.5 A and 2 V → 1 V,
the open-circuit voltage vT and short-circuit current iN of the network would also be
halved. In other words,
vT = 2 V → 1 V and iN = 2 A → 1 A
because of the linearity of the network. However, the Thevenin resistance
RT = vT /iN = 2 V / 2 A = 1 Ω → 1 V / 1 A = 1 Ω
would remain unchanged. The Thevenin resistance would, in fact, remain unchanged
even in the limiting case when all independent source strengths are suppressed to
zero, as shown in Figure 2.17e. Indeed, the Thevenin resistance of the network in
Figure 2.17e is the same as the Thevenin resistance of the original network shown in
Figure 2.17a!
This observation leads to the source suppression method for finding RT :
(1) Replace all independent voltage sources in the network by short circuits and all
independent current sources by open circuits.
(2) If the remaining network contains no dependent sources (as in Figure 2.17e),
then RT is the equivalent resistance, which we can determine usually by using
series and/or parallel resistor combinations. (Note that in Figure 2.17e the
parallel equivalent of the three resistors yields the correct Thevenin resistance
RT = 1 Ω obtained earlier.)
(3) If the remaining network contains dependent sources, or cannot be simplified
by just series and parallel combinations, then RT can be determined by the test
signal method, illustrated in Example 2.17 to follow.
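Step (2) often amounts to a parallel combination, which is just a sum of conductances; a small helper makes the Figure 2.17e arithmetic explicit:

```python
def parallel(*resistances):
    """Equivalent resistance of parallel resistors: reciprocal of summed conductances."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Figure 2.17e: after suppressing both sources, the 2, 4, and 4 ohm
# resistors all appear in parallel across the terminals.
rt = parallel(2, 4, 4)
print(rt)   # 1.0 ohm, matching RT = vT/iN = 2 V / 2 A found earlier
```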

Example 2.16
Figure 2.18a shows a resistive network with a dependent source. Determine
the Thevenin and Norton equivalents of the network.
Solution To determine the Thevenin voltage vT , we apply the node-
voltage method to the circuit shown in Figure 2.18a. The unknown node
voltages are vT and vT − 2ix , where
ix = (vT − 2ix )/(1 Ω)
is the current flowing down through the 1  resistor. We also have a super-
node KCL equation
2 = ix + vT /2 .

[Figure 2.18: circuit diagrams]

Figure 2.18 (a) A linear network with terminal voltage vT , (b) the same network with an external short,
(c) Thevenin equivalent, (d) Norton equivalent, (e) the same network with suppressed independent
source, and (f) source suppressed network with an applied test source.

Solving these two equations in two unknowns, we find that

vT = 12/5 = 2.4 V.

We next proceed to find iN by shorting the external terminals of the network
as shown in Figure 2.18b. After placing the short at the network terminals,
the voltage vT − 2ix across the 1  resistor has been reduced to −2ix , and
therefore the current through the resistor is

ix = −2ix /(1 Ω),
indicating that

ix = 0.

Hence, neither resistor carries any current in Figure 2.18(b), and therefore,

iN = 2 A.

Finally,

RT = vT /iN = (12/5)/2 = 6/5 = 1.2 Ω.

The Thevenin and Norton equivalents are shown in Figures 2.18c and 2.18d.

The next example explains the test-signal method for determining the Thevenin
resistance in networks containing dependent sources.

Example 2.17
Figure 2.18e shows the network in Figure 2.18a with the independent source
suppressed. The Thevenin resistance RT of the original network is the
equivalent resistance of the source-suppressed network, but the presence
of a dependent source prevents us from determining RT with series/parallel
resistor combination methods. Instead, we can determine RT by calculating
the voltage response v of the source-suppressed network to a test current
i = 1 A injected into the network, as shown in Figure 2.18f. RT is then the
equivalent resistance
RT = v/i = v/(1 A).
For a test current i = 1 A, we write KCL at the injection node to obtain
i = 1 A = v/2 + ix .
Also,
ix = (v − 2ix )/(1 Ω),
implying that
ix = v/3 .
Thus,

i = 1 A = v/2 + v/3 = (5/6) v,
from which we obtain
v = 6/5 = 1.2 V.

Hence,
RT = v/i = 1.2 V / 1 A = 1.2 Ω,
as already determined in Example 2.16.
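The two simultaneous equations of the test-signal calculation reduce by substitution to a one-line computation, sketched here as a numerical check:

```python
# Test-signal method of Figure 2.18f: inject a 1 A test current and solve
#   KCL at the injection node:   v/2 + ix = 1
#   dependent-source branch:     ix = (v - 2*ix)/1  =>  ix = v/3
# Substituting ix = v/3 into the KCL equation gives v*(1/2 + 1/3) = 1.
v = 1 / (1/2 + 1/3)      # 6/5 V
ix = v / 3
rt = v / 1.0             # RT = v / (1 A)
print(round(v, 12), round(rt, 12))   # 1.2 1.2
```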

The Thevenin resistance in resistive networks with no dependent sources is
always a positive quantity. However, in a network with dependent sources the Thevenin
resistance can be zero or even negative (as happens in the next example). Remember
that the Thevenin resistance is a quantity in a mathematical model—it is not a physical
resistor.

Example 2.18
Determine the Thevenin equivalent of the network shown in Figure 2.19a.
Solution Since the network contains no independent source, the open-
circuit voltage vT = 0 and the Thevenin equivalent is just the Thevenin
resistance RT .
To determine RT , we will feed the network with an external 1 A current
source, as shown in Figure 2.19b, and solve for the response voltage v at
the network terminals. Then
RT = v/(1 A).
The KCL equation at the top node is
v/3 = vx + 1,
and, via voltage division across the series resistors on the left, we also have
vx = [2/(2 + 1)] v = (2/3) v.
Replacing vx in the KCL equation with (2/3) v, we find that

v = −3 V.

[Figure 2.19: circuit diagrams]

Figure 2.19 (a) A linear network with a dependent source, and (b) the same network excited by a 1 A
test source to determine RT .

Hence,

RT = −3 V / 1 A = −3 Ω.

2.5 Available Power and Maximum Power Transfer

Given a resistive network, it is important to ask how much power that network can
transfer to an external load resistance. Ordinarily, we wish to size the load resistance
so as to transfer the maximum power possible. For networks with positive Thevenin
resistance RT , there is an upper bound on the amount of power that can be delivered
out of the network, a quantity known as the available power of the network.
To determine the general expression for the power available from a resistive
network, we will examine the circuit shown in Figure 2.20. In this circuit an arbitrary
resistive network is represented by its Thevenin equivalent on the left, and the resistor
RL on the right represents an arbitrary load.
Using voltage division, we find that the load voltage is

vL = [RL /(RT + RL )] vT .

Hence, the absorbed load power

pL = vL iL = vL² /RL

is

pL = [RL /(RT + RL )²] vT² .

[Figure 2.20: circuit diagram]

Figure 2.20 The model for the interaction of linear resistive networks with an
external load RL .

[Figure 2.21: plot of pL versus RL ]

Figure 2.21 The plot of load power pL in W versus load resistance RL in Ω, for
vT = 1 V and RT = 1 Ω.

For fixed values of vT and RT > 0 (i.e., for a specific network with specific values
of vT and RT ), this quantity will vary with load resistance RL . From the expression it
is easy to see that pL vanishes for RL = 0 and RL = ∞, and is maximized for some
positive value of RL . (See Figure 2.21.) The maximum value of pL is the available
power pa of the network.
To determine the value of RL that maximizes pL , we set the derivative of pL
with respect to RL to zero, since the slope of the pL curve vanishes at its maximum,
as shown in Figure 2.21. The derivative is

dpL /dRL = vT² [(RT + RL )² − 2RL (RT + RL )] / (RT + RL )⁴ .

Setting this to zero yields

(RT − RL )/(RT + RL )³ = 0

(for nonzero vT ). Thus, assuming that5

RT > 0,

5. With negative RT (see Example 2.18 in the previous section), the transferred power pL is maximized for
RL = −RT
RL = −RT
at a value of ∞. In practice, such networks will violate the circuit model when RL = −RT and will deliver
only finite power to external loads.

the absorbed load power pL will be maximum for

RL = RT .

Therefore, the available power pa of a resistive network is the value of pL for
RL = RT . Evaluating pL with RL = RT , we obtain

pa = vT² /(4RT ).

Also, since vT /RT = iN ,

pa = iN² RT /4

is an alternative expression for the available power in terms of Norton current.


These expressions indicate that, to find the available power of any resistive
network, all we need is its Thevenin or Norton equivalent.
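The maximum of pL at RL = RT can also be confirmed by brute force, sweeping the load resistance numerically (vT = 1 V and RT = 1 Ω, the values plotted in Figure 2.21):

```python
def load_power(vt, rt, rl):
    """Power delivered to a load rl by a Thevenin source (vt, rt)."""
    return rl / (rt + rl) ** 2 * vt ** 2

vt, rt = 1.0, 1.0                            # the values of Figure 2.21
loads = [0.01 * k for k in range(1, 1001)]   # sweep RL from 0.01 to 10.0 ohms
best = max(loads, key=lambda rl: load_power(vt, rt, rl))
print(best)                                  # 1.0: the sweep peaks at RL = RT
print(load_power(vt, rt, rt))                # 0.25 W = vt**2 / (4*rt)
```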

Example 2.19
For the network of Figure 2.18a,

vT = 12/5 V and RT = 6/5 Ω.

Thus, for maximum power transfer, the load resistance should be selected
as

RL = RT = 6/5 Ω,

in which case the power transferred to the load will be

pa = (12/5)² / (4 × 6/5) = (12 × 12)/(24 × 5) = 1.2 W.

In summary, to achieve maximum power transfer from a resistive network, the
external load resistance must be matched to the Thevenin resistance of the network.
A matched load will absorb the available power of the network. For instance, the
input resistance of an optimal speaker connected to an audio amplifier having an 8 Ω
Thevenin resistance is 8 Ω. All other loads (speakers) not matching the Thevenin
resistance of the amplifier will absorb less power than is available from the amplifier.

EXERCISES

2.1 In the following circuits, determine ix :

[circuit diagrams (a) and (b)]

2.2 In the following circuits, determine vx and the absorbed power in the elements
supporting voltage vx :

[circuit diagrams (a) and (b)]

2.3 In the circuit shown next, determine v1 , v2 , ix , and iy , using the node-voltage
method. Notice that the reference node has not been marked in the circuit;
therefore, you are free to select any one of the nodes in the circuit as the
reference. The position of the reference (which should be shown in your
solution) will influence the values obtained for v1 and v2 , but not for ix
and iy .

[circuit diagram]



2.4 In the following circuit, determine the node voltages v1 , v2 , and v3 :


[circuit diagram]

2.5 In the following circuit, determine node voltages v1 and v2 :

[circuit diagram]

2.6 In the following circuits, determine loop currents i1 and i2 :

[circuit diagrams (a) and (b)]

2.7 (a) For the next circuit, obtain two independent equations in terms of loop
currents i1 and i2 and simplify them to the form
Ai1 + Bi2 = E
Ci1 + Di2 = F.

[circuit diagram]

(b) Express the previous equations in the matrix form

[ A  B ] [ i1 ]   [ E ]
[ C  D ] [ i2 ] = [ F ]

and use matrix inversion or Cramer’s rule to solve for i1 and i2 .
2.8 By a sequence of resistor combinations and source transformations, the next
circuit shown can be simplified to its Norton (bottom left) and Thevenin
(bottom right) equivalents between nodes a and b. Show that
iN = i/2 + v/R and RT = (2/3) R
and obtain the expression for vT .

[circuit diagrams]

2.9 In the following circuit, it is observed that for i = 0 and v = 1 V, iL = 1/2 A,
while for i = 1 A and v = 0, iL = 1/4 A.

[circuit diagram]

(a) Determine iL when i = 4 A and v = 2 V. (You do not need the values
of R and RL to answer this part; just make use of the results of
Problem 2.8.)
(b) Determine the values of resistances R and RL .
(c) Is it possible to change the value of RL in order to increase the power
absorbed in RL when i = 4 A and v = 2 V? Explain.

2.10 In the following circuit, find the open-circuit voltage and the short-circuit
current between nodes a and b, and determine the Thevenin and Norton
equivalents of the network between nodes a and b:

[circuit diagram]

2.11 Determine the Thevenin equivalent of the following network between nodes
a and b, and then determine the available power of the network:

[circuit diagram]

2.12 Determine ix in Figure 2.11b, using source suppression followed by superposition.

2.13 In the next circuit, do the following:


(a) Determine v when is = 0.
(b) Determine v when vs = 0.
(c) When vs = 4 V and is = 2 A, what is the value of v and what is the
available power of the network? Hint: Make use of the results of parts
(a) and (b) and the superposition method.


[circuit diagram]

2.14 Consider the following circuit:

[circuit diagram]

(a) Determine vL , given that vs = 1 V, R = 1 kΩ, RL = 0.1 Ω, and
A = 100.
(b) Find an approximate expression for vL that is valid when R ≫ 1 Ω,
RL ≈ 1 Ω, and A ≫ 1.
2.15 Determine the Thevenin resistance RT of the network to the left of RL in
the circuit shown in Problem 2.14. What is the approximate expression for
RT if R ≫ 1 Ω and A ≫ 1?
2.16 For (a) through (e), assume that A = 3 − j3, B = −1 − j1, and C = 5e^(−jπ/3) :
(a) Let D = AB. Express D in exponential form.
(b) Let E = A/B. Express E in rectangular form.
(c) Let F = B/C. Express F in exponential form.
(d) Let G = (CD)∗ , where ∗ denotes complex conjugation. Express G in
rectangular and exponential forms.
(e) Let H = (A + C)∗ . Determine |H | and ∠H , the magnitude and angle
of H .
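Exercise 2.16 is meant for hand computation, but answers of this kind can be checked with Python's built-in complex type and the cmath module; the sketch below only demonstrates the rectangular/exponential conversions using the sample value A = 3 − j3 and does not work any of the parts:

```python
import cmath

a = 3 - 3j                    # rectangular form of the sample value A = 3 - j3
r, theta = cmath.polar(a)     # magnitude and angle (radians)
print(r, theta)               # 4.2426... (= 3*sqrt(2)) and -0.7853... (= -pi/4)

# Rebuild the rectangular form from the exponential form r*e^{j*theta}:
a_again = cmath.rect(r, theta)
print(abs(a_again - a) < 1e-12)   # True
```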
3
Circuits for Signal
Processing

3.1 OPERATIONAL AMPLIFIERS AND SIGNAL ARITHMETIC 68


3.2 DIFFERENTIATORS AND INTEGRATORS 80
3.3 LINEARITY, TIME INVARIANCE, AND LTI SYSTEMS 87
3.4 FIRST-ORDER RC AND RL CIRCUITS 93
3.5 nTH-ORDER LTI SYSTEMS 111
EXERCISES 115

Op-amps, capacitors, inductors; circuits and systems for signal amplification,
addition, subtraction, integration; LTI systems and zero-input and zero-state
response; first-order RC and RL circuits and ODE initial-value problems;
steady-state response; nth-order systems

In this chapter, we begin taking the viewpoint that voltages and currents in electrical
circuits may represent analog signals and that circuits can perform mathematical
operations on these signals, such as addition, subtraction, scaling (amplification),
differentiation and integration. Naturally, the type of math performed depends on the
circuit and its components.
In Section 3.1 we describe a multiterminal circuit component called the opera-
tional amplifier and examine some of its simplest applications (amplification, addition,
and differencing). We examine in Section 3.2 the use of capacitors and inductors (see
Chapter 1 for their basic definitions) in circuits for signal differentiation and integra-
tion. In Section 3.3 we discuss system properties of differentiators and integrators
and introduce the important concepts of system linearity and time-invariance. Anal-
ysis of linear and time-invariant (LTI) circuits with capacitors and inductors may
require solving differential equations. In Section 3.4 we discuss solutions of first-
order differential equations describing RC and RL circuits. Section 3.5 previews the
analysis techniques for more complicated, higher-order LTI circuits and systems to
be pursued in later chapters.

3.1 Operational Amplifiers and Signal Arithmetic

In Figure 3.1a the triangle symbol represents a multi-terminal electrical device known
as an operational amplifier—the op-amp. As we shall see, op-amps are high-gain
amplifiers that are useful components in circuits designed to process signals. The
diagram in Figure 3.1a shows how an op-amp is “powered up,” or biased, by raising


[Figure 3.1: diagrams]

Figure 3.1 (a) Circuit symbol of the op-amp, showing five of its seven terminals and its biasing
connections to a reference node, (b) equivalent linear circuit model of the op-amp showing only three
terminals with node voltages v+ , v− , and vo , and (c) the dependence of vo on differential input v+ − v−
and bias voltage vb . The equivalent model shown in (b) is valid only for v+ − v− between the dashed
vertical lines.

the electrical potentials of two of its terminals1 to DC voltages ±vb with respect to
some reference. Commercially available op-amps require vb to be in the range of a
few to about 15 V in order to function in the desired manner, as described here.
Glancing ahead for a moment, we see that Figure 3.2 shows two op-amp circuits,
where (to keep the diagrams uncluttered) the biasing DC sources ±vb are not shown.
To understand what op-amps do in circuits, we find it necessary to know the rela-
tionship between the input voltages v+ , and v− , and the output voltage vo , defined
at the three op-amp terminals (+, −, and o) shown in the diagrams. Because op-
amps themselves are complicated circuits composed of many transistors, resistors,
and capacitors, analysis of op-amp circuits would seem to be challenging. Fortu-
nately, op-amps can be described by fairly simple equivalent circuit models involving
the input and output voltages. Figure 3.1b shows the simplest op-amp model that is
accurate enough for our purposes, which indicates that

vo = A(v+ − v− ) − Ro io .

It is because vo depends on v+ and v− that the latter are considered as op-amp inputs
and vo is considered to be the op-amp output.

1. Op-amps such as the Fairchild 741 have seven terminals. Two of these are for output offset adjustments,
which will not be important to us. The discussion here will focus on the remaining five, shown in Figure 3.1a,
two of which are used for powering up, or biasing, the device, as indicated in the figure.

Terminal o is called the output terminal, and the terminals marked by − and +
are referred to as inverting and noninverting input terminals. Resistors Ri and Ro
in Figure 3.1b are called the input and output resistances of the op-amp, and A is a
voltage gain factor (scales up the differential input v+ − v− ) called the “open-loop
gain.” Typically, for an op-amp,

Ro ∼ 1 − 10 Ω,
Ri ∼ 10⁶ Ω,
A ∼ 10⁶ ,

where the symbol ∼ stands for “within an order of magnitude of.” Very large values
of A and Ri and a relatively small Ro are essential for op-amps to perform as intended
in typical applications.
When the output terminal of an op-amp is left open—that is, when no connec-
tion is made to other components—as in Figure 3.1a, then io = 0 and, according to
Figure 3.1b, vo is just an amplified version of v+ − v− . However, an important detail
that is not properly modeled by the simple op-amp equivalent circuit2 is the satura-
tion of vo at ±vb . For the actual physical op-amp, the output vo cannot exceed vb in
magnitude. Assuming small io , Figure 3.1c shows the variation of the output voltage
vo as a function of the differential input voltage v+ − v− . Only for
|v+ − v− | < vb /A
is the variation of vo with v+ − v− linear, as described by the model in Figure 3.1b.
Otherwise, the op-amp is said to be saturated or behaving nonlinearly.
Clearly, then, to maintain an op-amp in the desired linear regime—that is, to
ensure that |v+ − v− | < vb /A—it is necessary to keep the differential input v+ − v−
extremely small. Typically, with A ∼ 10⁶ and vb ≈ 10 V, we must have |v+ − v− | <
10 μV. When this condition is achieved, the op-amp input currents i+ and i− through
the input resistance Ri ∼ 10⁶ Ω will have magnitudes less than 10 μV/10⁶ Ω = 10 pA, which
is an exceedingly small (in fact, negligible) current. Hence, for all practical purposes,
for an op-amp operating in the linear regime (i.e., between the dashed vertical lines
in Figure 3.1c), we have

v+ ≈ v− ,
i+ ≈ 0,
i− ≈ 0,

to within microvolts and picoamps.


2. The equivalent circuit model of an op-amp should not be confused with the actual structure of the
interior of the op-amp. The Fairchild 741 op-amp circuit consists of 20 transistors, 11 resistors, and one
capacitor. The equivalent circuit in Figure 3.1b is a model that merely describes the relationship of the
signals at the op-amp terminals under normal (linear) operating conditions.

These conditions are fundamental approximations that tremendously simplify the
analysis of op-amp circuits known to be in linear operation. As you will discover later,
if you are having difficulty in analyzing an op-amp circuit, it is probably because you
have forgotten to apply one of the previous conditions—so memorize them! These
equations are known as ideal op-amp approximations. The reason for this terminology
is that v+ = v− and i+ = i− = 0 would be exactly true for an ideal op-amp with Ri → ∞
and A → ∞. Notice that there is no ideal op-amp approximation that constrains the
output voltage vo . Instead, as we shall see, the value of vo depends upon the circuit
in which the op-amp is embedded.
Applying the ideal op-amp approximations to the analysis of circuits containing
nonideal (i.e., real, “off-the-shelf”) op-amps amounts to ignoring voltage and current
terms in our KVL and KCL equations having magnitudes less than ∼ 10 μV and
10 pA. This will result in negligible errors in the calculation of the larger voltages and
currents in a circuit that typically will interest us.
Alternatively, we can analyze linear op-amp circuits by substituting for each op-
amp the equivalent circuit model shown in Figure 3.1b and writing down the exact
KVL and KCL equations in the resulting circuit diagrams. In such calculations, we
use Ro ∼ 1 − 10 Ω, Ri ∼ 10⁶ Ω, and A ∼ 10⁶ instead of the op-amp approxi-
mations introduced earlier. This approach can be slightly more accurate (if exact
values of Ro , Ri , and A are known), but requires far more effort than using the simple
ideal op-amp relations.
In the following discussion of well-known linear op-amp circuits, we will illus-
trate both of these approaches.

3.1.1 Voltage follower and noninverting amplifier


Figures 3.2a and 3.2b show two important linear op-amp circuits known as the voltage
follower and the noninverting amplifier. Clearly, the circuits are related, because the
voltage follower of Figure 3.2a is a special case of the amplifier of Figure 3.2b, with
R1 = 0 and R2 = ∞. We next will analyze the circuits by using the ideal op-amp
approximations and later justify the analysis by showing that for small inputs the
circuits operate in the linear regime.

[Figure 3.2: circuit diagrams]
Figure 3.2 (a) A voltage-follower (or buffer), and (b) a noninverting amplifier.

Starting with the voltage-follower circuit and using the ideal op-amp approxima-
tions, we can argue as follows: Since i+ ≈ 0, we can ignore the voltage drop across
resistor Rs and thus claim that

v+ ≈ vs .

Also, since v− ≈ v+ , it follows that

v− ≈ vs .

Finally, in the voltage follower circuit

vo = v− ,

and, as a consequence,

vo ≈ vs .

Now, for the noninverting amplifier of Figure 3.2b, vo ≠ v− because of the
voltage drop across R1 . Instead, we can write v− in terms of vo as

v− ≈ [R2 /(R1 + R2 )] vo ,

because i− ≈ 0 and, as a consequence, vo is distributed across R1 and R2 in accordance
with voltage division. Given that v− ≈ vs , this leads to

vo ≈ (1 + R1 /R2 ) vs .

Clearly, then, the circuit in Figure 3.2b is a voltage amplifier with a voltage gain

G ≡ vo /vs ≈ (1 + R1 /R2 ).

The amplifier is called noninverting because the gain is positive, and this preserves
the algebraic sign of the amplified voltage.
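How good is the ideal-op-amp result? Keeping A finite (but still taking i− ≈ 0 and ignoring Ro), the feedback relation vo = A(vs − βvo) with β = R2/(R1 + R2) gives the exact gain G = A/(1 + Aβ), which approaches 1 + R1/R2 as A → ∞. A short numerical comparison (the resistor values below are illustrative only):

```python
def gain_ideal(r1, r2):
    """Ideal op-amp result: G = 1 + R1/R2."""
    return 1 + r1 / r2

def gain_finite_a(r1, r2, a):
    """Finite open-loop gain A, with i- = 0 and Ro neglected:
    vo = A*(vs - beta*vo), beta = R2/(R1+R2)  =>  G = A/(1 + A*beta)."""
    beta = r2 / (r1 + r2)
    return a / (1 + a * beta)

r1, r2 = 9e3, 1e3        # illustrative values giving an ideal gain of 10
for a in (1e2, 1e4, 1e6):
    print(gain_finite_a(r1, r2, a))   # climbs toward gain_ideal(r1, r2) = 10
```

With A ∼ 10⁶ the finite-gain answer differs from the ideal one by roughly one part in 10⁵, which is why the ideal op-amp approximations are so useful.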
According to these results, a noninverting amplifier multiplies the input voltage
vs by a gain G that is independent of the source resistance Rs (so long as it is not
infinite), whereas the voltage follower is simply a noninverting amplifier with unity
gain (G = 1). What makes both of these circuits very useful is the fact that their gain G

[Figure 3.3: circuit diagrams]

Figure 3.3 (a) Feeding a load through a buffer (see Exercise Problem 3.1 for a detailed discussion of this
circuit), versus (b) a direct connection between the source network and the load.

remains essentially unchanged when the circuits are terminated by an external load—
for example, by some finite resistance RL —as shown in Figure 3.3a, so long as the
load resistance exceeds a value on the order of RAo . (See the discussion that follows.)
The voltage follower in particular can be used as a unity gain buffer between a source Buffering
circuit and a load, as shown in Figure 3.3a, to prevent “loading down” the source
voltage. That is, the entire source voltage vs appears across the load resistance RL .
Ordinarily, if two circuits are to be connected to one another, then making a direct
connection changes the behavior of the two circuits. This is illustrated in Figure 3.3b,
where connecting RL to the terminals of the source network reduces the voltage at
those terminals. By connecting the load resistance to the source network through a
voltage follower, as in Figure 3.3a, RL does not draw current from the source network
(instead, the current is supplied by the op-amp) and the full source voltage appears
across the load resistance. More generally, the connection of one circuit to another
through a voltage follower allows both circuits to continue to operate as designed.
The preceding ideal op-amp analysis did not provide us with detailed information
such as the range of values of Rs for which the gain G will be independent of Rs .
To obtain such information, we would need to insert the op-amp equivalent model
of Figure 3.1b into the circuit (to replace the op-amp symbol) and then reanalyze the
circuit without making any assumptions about v± and i± . We will lead you through
such a calculation in Exercise Problem 3.2 to confirm that the noninverting amplifier
gain G is well approximated by 1 + R1 /R2 , so long as Rs ≪ Ri and Ro /A ≪ RL . Such
calculations also are demonstrated in the next section in our discussion of the inverting
amplifier.
We will close this section with a discussion of negative feedback, the magic
behind how a voltage follower circuit keeps itself within the linear operating regime.
Notice how the output terminal of the op-amp in the voltage follower is fed back to its
own inverting input. That connection configures the circuit with a negative feedback
loop, ensuring that v+ − v− remains between the dashed lines in Figure 3.1c, so long
as |vs | < vb . Let us see how.

Assume for a moment that vo = vb , which requires having v+ − v− > 0 in the
model curve in Figure 3.1c. But in the circuit in Figure 3.2a, vo = vb implies that
v+ − v− = vs − vb < 0 if |vs | < vb . Since v+ − v− > 0 and v+ − v− < 0 are contradictory,
vo = vb is clearly not possible when |vs | < vb . Neither is vo = −vb , for similar
reasons: in that case v+ − v− < 0, according to Figure 3.1c, whereas, according to
the circuit, v+ − v− = vs + vb > 0 if |vs | < vb . The upshot is that if |vs | < vb , then
the only noncontradictory situation in a voltage follower is |vo | < vb —that is,
linear operation.
To see that negative feedback is essential to the linearity of the voltage follower,
consider what happens when we connect the op-amp output to the noninverting input,
as in Figure 3.4—the circuit in Figure 3.4 appears identical to a voltage follower,
except for the use of positive feedback! Then a variety of outcomes for vo that deviate
from the voltage follower behavior are possible. For instance, a saturated solution of
vo = vb is possible with vs = 0, since, with vo = vb , Figure 3.1c implies v+ − v− >
0, which is consistent with v+ − v− = vb − 0 = vb obtained from Figure 3.4.
All op-amp circuits discussed in this chapter and elsewhere in this textbook use
negative feedback—some fraction of vo is always added to v− by an appropriate
connection between the output and inverting input—and thus the described circuits
operate linearly when excited with sufficiently small inputs. For instance, the noninverting
amplifier adds a fraction R2 /(R1 + R2 ) of vo to v− and, therefore, operates linearly
for |vs | < vb /G.

Figure 3.4 Not a voltage follower, because of positive feedback.

Example 3.1
In the linear op-amp circuit shown in Figure 3.5, determine the node voltages
vo , v1 , and v2 , assuming that Vs1 = 3 V, Vs2 = 4 V, and Rs1 = Rs2 = 100 Ω.

Figure 3.5 A circuit with two op-amps.

Solution The left-end of the circuit is a voltage follower; therefore,

v1 ≈ Vs1 = 3 V.

At the right-end we notice a noninverting amplifier with a gain of

G = 1 + (10 kΩ)/(20 kΩ) = 1.5;

therefore,

v2 ≈ GVs2 = (1.5)(4 V) = 6 V.

To obtain vo , we write the KCL equation at the middle node as

(vo − 3)/(10 kΩ) + vo /(10 kΩ) + (vo − 6)/(10 kΩ) = 0,
giving

vo = 3 V.
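The middle-node result of Example 3.1 can be re-derived mechanically; this sketch just solves the KCL equation above, in which the three equal 10 kΩ resistances cancel:

```python
# Re-solve the middle-node KCL equation of Example 3.1 numerically.
v1, v2 = 3.0, 6.0       # node voltages from the buffer and noninverting amp, volts
R = 10e3                # each of the three branch resistors, ohms

# (vo - v1)/R + vo/R + (vo - v2)/R = 0  =>  3*vo = v1 + v2
vo = (v1 + v2) / 3
print(vo)
```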

Example 3.2
Both sources of Example 3.1 are doubled so that now Vs1 = 6 V, and Vs2 =
8 V. What is the new value of vo ? Assume that the biasing voltage is vb = 15
V for both op-amps.
Solution Since the circuit in Figure 3.5 is linear, a doubling of both inputs
causes a doubling of the response vo . Hence, the new value of vo is 6 V.
This is the correct result, because the new values of v1 and v2 , namely, 6
V and 12 V, are both below the saturation level of 15 V, and therefore, the
circuit remains in linear operation.

Example 3.3
Would the circuit in Figure 3.5 remain in linear operation if the source
voltage values of Example 3.1 were tripled? Once again, assume that vb =
15 V.
Solution No, the circuit would enter into a nonlinear saturated mode,
because with a tripled value for Vs2 the response v2 could not triple to 18
V (since v2 cannot exceed the biasing voltage of 15 V).

3.1.2 Inverting amplifier


The op-amp circuit shown in Figure 3.6a employs negative feedback and is known
as an inverting amplifier. Ideal op-amp analysis of the circuit—justified because of
negative feedback—proceeds as follows.
Because the noninverting terminal in Figure 3.6a is in contact with the reference,
the corresponding node voltage is v+ = 0. Therefore, the ideal op-amp approximation
v− ≈ v+ implies that v− ≈ 0. Thus, the current through resistor Rs toward the
inverting terminal can be calculated as

(vs − v− )/Rs ≈ (vs − 0)/Rs = vs /Rs .

The current through resistor Rf away from the inverting terminal is, likewise,

(v− − vo )/Rf ≈ (0 − vo )/Rf = −vo /Rf .

Since i− ≈ 0, KCL applied at the inverting terminal gives

vs /Rs ≈ −vo /Rf + 0.

Figure 3.6 (a) An op-amp circuit (inverting amplifier), and (b) the same circuit where the op-amp is
shown in terms of its equivalent circuit model.

Hence,

vo ≈ −(Rf /Rs ) vs ,

which shows that the circuit is a voltage amplifier with a voltage gain of

G = vo /vs ≈ −Rf /Rs .

Because the gain is negative, the amplifier is said to be inverting.
Let us next verify the preceding result by using the more accurate equivalent-circuit
approach, where we replace the op-amp by its equivalent linear model, introduced
earlier in Figure 3.1b. Making this substitution, we find that the inverting
amplifier circuit takes the form shown in Figure 3.6b. Applying KCL at the inverting
terminal (where v− is defined) gives

(v− − vs )/Rs + v− /Ri + (v− − vo )/Rf = 0.

Likewise, the KCL equation for the output terminal is

(vo − v− )/Rf + (vo − A(0 − v− ))/Ro = 0.

From the second equation, we obtain

v− = vo (1/Rf + 1/Ro )/(1/Rf − A/Ro ) ≈ vo (1/Ro )/(−A/Ro ) = −vo /A,

assuming that Ro ≪ Rf and A ≫ 1. Substituting v− ≈ −vo /A into the first KCL equation
gives

(−vo /A − vs )/Rs + (−vo /A)/Ri + (−vo /A − vo )/Rf ≈ 0,

which implies that

vo (1/(ARs ) + 1/(ARi ) + 1/(ARf ) + 1/Rf ) ≈ −vs /Rs .

Clearly, with ARs ≫ Rf , ARi ≫ Rf , and A ≫ 1, the first three terms within the
parentheses on the left can be neglected to yield

vo ≈ −(Rf /Rs ) vs .
This is the result that we obtained earlier by using the ideal op-amp approximation.
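The two KCL equations form a 2-by-2 linear system in v− and vo, so the exact gain can also be computed numerically. The sketch below does this for representative (assumed, not device-specific) op-amp parameters and confirms that the exact gain is very close to the ideal value −Rf/Rs:

```python
import numpy as np

# Solve the two equivalent-circuit KCL equations exactly and compare with the
# ideal-op-amp gain G = -Rf/Rs. Parameter values are representative assumptions.
A, Ri, Ro = 1e6, 1e6, 10.0      # open-loop gain, input/output resistance (assumed)
Rs, Rf = 5e3, 20e3              # source and feedback resistors, ohms
vs = 1.0                        # test input, volts

# Unknowns x = [v_minus, vo]:
#   (v- - vs)/Rs + v-/Ri + (v- - vo)/Rf = 0
#   (vo - v-)/Rf + (vo - A*(0 - v-))/Ro = 0
M = np.array([[1/Rs + 1/Ri + 1/Rf, -1/Rf],
              [A/Ro - 1/Rf,         1/Rf + 1/Ro]])
b = np.array([vs/Rs, 0.0])
v_minus, vo = np.linalg.solve(M, b)

G_exact = vo / vs
G_ideal = -Rf / Rs
print(G_exact, G_ideal)
```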

The advantage of the (more laborious) equivalent-circuit approach just illustrated
is that it provides us with the validity conditions for the result (which are not
obtained with the ideal approximation method). The previous exercise showed that
the validity conditions are

A ≫ 1

and

Ro ≪ Rf ≪ ARs , ARi .

Since for typical op-amps A ∼ 10^6 , Ro ∼ 10 Ω, and Ri ∼ 10^6 Ω, these conditions
are readily satisfied if Rs and Rf are chosen in the kΩ range. For instance, with
Rf = 20 kΩ and Rs = 5 kΩ,

vo ≈ −(Rf /Rs ) vs = −4vs

would be a valid result, whereas our simple gain formula would not be accurate
for Rf = 20 Ω and Rs = 5 Ω. Importantly, our detailed analysis has shown that the
inverting amplifier gain G ≈ −Rf /Rs is not sensitive to the exact values of A, Ri , or Ro ; it
is sufficient that A and Ri be very large and Ro quite small. Op-amps are intentionally
designed to satisfy these conditions.
To examine how the inverting amplifier gain may depend on a possible load
RL connected to the output terminal, we find it sufficient to calculate the Thevenin
resistance RT of the equivalent circuit shown in Figure 3.6b. The calculation can be
carried out by use of the circuit shown in Figure 3.7, where the source vs of Figure 3.6b
has been suppressed and a 1 A current has been injected into the output terminal to
implement the test current method discussed in Chapter 2.

Figure 3.7 Test current method is applied to determine the Thevenin resistance
of the equivalent circuit of an inverting amplifier.

Using straightforward steps and the assumptions given, we find that

vo ≈ Ro /(1 + A Rs /(Rs + Rf )) ∼ Ro /(A Rs /(Rs + Rf ))

for Rs ∼ Rf . Thus, the Thevenin resistance of the equivalent circuit is RT ∼ Ro /A in
typical usage, just as for a voltage follower. (See Exercise Problem 3.2.) The amplifier
gain will not be sensitive to RL so long as RL ≫ Ro /A and, in addition, the total current
conducted by RL remains within specified limits (typically, less than a few tens of
mA, depending on the specific op-amp being used).

3.1.3 Sums and differences


A variation of the op-amp inverting amplifier is shown in Figure 3.8a. This circuit
works as an adder or a summer. Since v− ≈ v+ = 0, the total current from the left
toward the minus terminal of the op-amp is approximately

(v1 − 0)/R1 + (v2 − 0)/R2 .

Equating this current to the current (0 − vo )/Rf through Rf toward the output node, we
obtain

vo ≈ −((Rf /R1 ) v1 + (Rf /R2 ) v2 ).

Clearly, the circuit sums the input voltages v1 and v2 , with respective weighting
coefficients −Rf /R1 and −Rf /R2 . The circuit can be modified in a straightforward way to
combine three or more inputs in a similar manner. Furthermore, because of the low
Thevenin resistance of the inverting amplifier, the weighted sum will appear in full
strength across any load that is reasonably large.
Figure 3.8 (a) An adder circuit, and (b) differencing circuit.

The circuit shown in Figure 3.8b forms the difference, v1 − v2 , between voltages
v1 and v2 . This becomes apparent when we note that v− ≈ v+ ≈ v1 /2 (obtained by
applying voltage division to v1 , since i+ ≈ 0), so that the KCL equation at the inverting
input node is

(v2 − v1 /2)/R2 ≈ (v1 /2 − vo )/R2 .

Hence,

vo ≈ v1 − v2 .
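Both formulas are easy to exercise numerically; the component and source values below are arbitrary illustrations, not taken from the text:

```python
# Ideal-op-amp outputs of the adder (Fig. 3.8a) and the differencing circuit
# (Fig. 3.8b); component and source values are arbitrary illustrations.
Rf, R1, R2 = 10e3, 5e3, 10e3    # ohms (assumed)
v1, v2 = 0.2, 0.3               # input voltages, volts (assumed)

vo_adder = -(Rf / R1 * v1 + Rf / R2 * v2)   # weighted inverting sum
vo_diff = v1 - v2                           # equal-resistor difference circuit
print(vo_adder, vo_diff)
```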

3.2 Differentiators and Integrators

In the previous section we discussed resistive op-amp circuits for signal amplification,
summation, and differencing. In this section we shall see that, by including capacitors
or inductors, we can build op-amp circuits for signal differentiation and integration.
These operations are, of course, relevant for processing time-varying signals (or give
rise to them) and therefore, this will be our first exposure to so-called AC circuits.

3.2.1 Differentiator circuits


Capacitors and inductors as differentiators: Since the v–i relations for capacitors
and inductors (see Chapter 1) are

i = C dv/dt (Capacitor)
v = L di/dt (Inductor),

capacitors are natural differentiators of their voltage inputs and inductors differentiate
their current inputs.

Example 3.4
The capacitor voltage in the circuit shown in Figure 3.9 is

v(t) = 5 cos(100t) V.

Determine the capacitor’s current response.

Figure 3.9 A capacitor circuit with an imposed capacitor voltage signal and a
current response i(t) proportional to the time derivative of the imposed voltage.

Solution

i(t) = C dv/dt = 2 μF × d/dt [5 cos(100t) V]
     = 2 × 5 × (−100) sin(100t) μA = −sin(100t) mA.
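The result can be cross-checked by differentiating v(t) numerically; the centered-difference step h below is an arbitrary small choice:

```python
import math

# Check i(t) = C dv/dt = -sin(100 t) mA for v(t) = 5 cos(100 t) V and C = 2 uF,
# using a centered finite-difference derivative.
C = 2e-6                          # farads
v = lambda t: 5 * math.cos(100 * t)

def i_numeric(t, h=1e-7):
    # centered difference approximation of C * dv/dt, in amperes
    return C * (v(t + h) - v(t - h)) / (2 * h)

t = 0.013                         # arbitrary test time, seconds
i_exact = -math.sin(100 * t) * 1e-3   # -sin(100 t) mA expressed in amperes
print(i_numeric(t), i_exact)
```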

Example 3.5
The 1 H inductor shown in Figure 3.10a responds to the ramp current
input shown in Figure 3.10b, with a unit-step voltage v(t) = u(t) shown in
Figure 3.10c. The unit-step voltage output is 0 for t < 0, when the input
i(t) is constant (with zero value), and 1 for t > 0, when
i(t) = t A,
and, thus,
L di/dt = 1 V.

Op-amp differentiators: The op-amp circuit shown in Figure 3.11a converts
the current-response of the capacitor (which is proportional to the derivative of its
voltage) into an output voltage and therefore functions as a voltage differentiator. To
understand this behavior, note that v− ≈ 0 because of the ideal op-amp approximation
v− ≈ v+ = 0. Therefore, the capacitor current, left-to-right, is approximately

C d/dt (vs (t) − 0) = C dvs /dt .

Figure 3.10 (a) An inductor circuit, (b) current input to the inductor, and (c)
voltage response of the inductor described by a unit-step function, u(t).

Figure 3.11 (a) A differentiator, and (b) a differentiator with an inductor.

Because i− ≈ 0, this capacitor current is very nearly matched to the resistor
current

(0 − vo (t))/R = −vo (t)/R

directed toward the output terminal of the op-amp. Therefore,

vo (t) = −RC dvs /dt .

We leave it as an exercise for you to derive the output formula

vo (t) = −(L/R) dvs /dt

for the differentiator circuit shown in Figure 3.11b.

3.2.2 Integrator circuits


Capacitors and inductors as integrators: Consider the circuit shown in Figure
3.12a, where a capacitor responds to an applied current i(t) with the voltage v(t).
Integrating the capacitor current

i(t) = C dv/dt

between two points in time, say, from a to b, we get

∫_{a}^{b} i(t)dt = C ∫_{a}^{b} (dv/dt)dt = C ∫ dv = C[v(b) − v(a)],

which indicates that

v(b) = v(a) + (1/C) ∫_{a}^{b} i(t)dt.
Figure 3.12 (a) Capacitor as an integrator, and (b), (c) input and output signals
of the integrator circuit discussed in Example 3.6.

Replacing t with τ under the integral sign, and then replacing a and b with to and t,
respectively, we obtain

v(t) = v(to ) + (1/C) ∫_{to}^{t} i(τ )dτ,

an explicit formula for the capacitor voltage in terms of the input current i(t).
Clearly, the result implies that if we know the capacitor voltage at some initial
instant to , then we can determine any subsequent value of the capacitor voltage in
terms of a definite integral of the input current from time to to the time t of interest.
We will use the term initial value or, occasionally, initial state, to refer to v(to ).
Example 3.6
In the circuit shown in Figure 3.12a, suppose that C = 1 F and v(0) = 1 V.
Let

i(t) = e^{−t/2} A

for t > 0, as shown in Figure 3.12b. Calculate the voltage response v(t) for
t > 0.
Solution Using the general expression for v(t) obtained previously, with
to = 0, v(0) = 1 V, i(τ ) = e^{−τ/2} A, and C = 1 F, we find that

v(t) = 1 + (1/1) ∫_{0}^{t} e^{−τ/2} dτ = (1 + (e^{−t/2} − 1)/(−1/2)) V = (3 − 2e^{−t/2}) V.

The plot of the response v(t) is shown in Figure 3.12c.
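The closed-form answer can be cross-checked by integrating the input current numerically (a midpoint Riemann sum; the subinterval count n is an arbitrary accuracy choice):

```python
import math

# Verify v(t) = 3 - 2*exp(-t/2) V from Example 3.6 by numerically integrating
# i(t) = exp(-t/2) A into C = 1 F starting from v(0) = 1 V.
C, v0 = 1.0, 1.0
i = lambda t: math.exp(-0.5 * t)

def v_numeric(t, n=100000):
    # midpoint rule for v0 + (1/C) * integral of i from 0 to t
    dt = t / n
    s = sum(i((k + 0.5) * dt) for k in range(n)) * dt
    return v0 + s / C

t = 4.0
v_exact = 3 - 2 * math.exp(-0.5 * t)
print(v_numeric(t), v_exact)
```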



Notice in Figures 3.12b and 3.12c that i(t) = 0 A and v(t) = 1 V for t < 0.
These values of the capacitor current and voltage for t < 0 are consistent with one
another, since a constant v(t) implies zero i(t), according to

i(t) = C dv/dt .

For t > 0 the curves in the figure also satisfy this same relation, because the voltage
curve was computed in Example 3.6 to satisfy this very same constraint.
A comparison of the i(t) and v(t) curves in Figures 3.12b and 3.12c leads to the
following important observation: Even though the input curve i(t) is discontinuous
at t = 0, the output curve v(t) does not display a discontinuity. For reasons to be
explained next, the following general statement can be made about a capacitor voltage:

The voltage response of a capacitor to a practical input current must be
continuous.

Explanation: Recall from Chapter 1 that the capacitor is an energy storage
device, with the stored energy varying with capacitor voltage as

w(t) = (1/2) C v²(t).

Furthermore, a capacitor stores net electrical charge on its plates, an amount

q(t) = Cv(t)

on one plate assigned positive polarity and −q(t) on the other. Therefore, any
time discontinuity in capacitor voltage would lead to accompanying discontinuities
in stored charge and energy, which would require infinite current in
order to add a finite amount of charge and energy in zero time. Such discontinuous
changes are naturally prohibited in practical circuits where current
amplitudes remain bounded, consistent with the fact that

|i(t)| = |C dv/dt| = C |dv/dt| < ∞

when v(t) is a continuous function.
In the following example, we will make use of the continuity of capacitor voltage
to calculate the response of a capacitor to a piecewise continuous input current:

Example 3.7
Calculate the voltage response of a 2 F capacitor to the discontinuous current
input shown in Figure 3.13a, for t > 0. Assume that v(0) = 0 V.
Solution For the period 0 < t < 1, where t is in units of seconds, the
current input is i(t) = 1 A. Therefore, for that period,

v(t) = v(0) + (1/2) ∫_{0}^{t} 1 dτ = 0 + (t − 0)/2 = t/2 V.

Figure 3.13 (a) A piecewise continuous input current i(t), and (b) response v(t)
of a 2 F capacitor, discussed in Example 3.7.

Now, even though this result is only for the interval 0 < t < 1, we still can
use it to calculate

v(1) = 1/2 V,

because of the continuity of the capacitor voltage v(t) at t = 1. Thus, for
t > 1, where the current i(t) = 0, we find that

v(t) = v(1) + (1/2) ∫_{1}^{t} 0 dτ = v(1) = 1/2 V.

Notice that v(t) grows from zero to 1/2 V (as shown in Figure 3.13b)
during the time interval 0 < t < 1, because of a steady current flow that
charges up the capacitor within that time interval. After t = 1, the current
stops, so that no more charging or discharging takes place and the capacitor
voltage remains fixed at the level of v(1) = 1/2 V forever.
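The piecewise response of Example 3.7 can be encoded directly as a small function, v(t) = t/2 V on the charging interval and 1/2 V thereafter:

```python
# Piecewise capacitor voltage of Example 3.7: C = 2 F, i(t) = 1 A for
# 0 < t < 1 s and 0 afterwards, starting from v(0) = 0.
C = 2.0

def v(t):
    if t <= 0:
        return 0.0
    if t <= 1:
        return t / C      # charging ramp: v = t/2 V
    return 1 / C          # holds at v(1) = 1/2 V once the current stops

print(v(0.5), v(1.0), v(3.0))
```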

While a capacitor is an integrator of its input current, an inductor, shown in
Figure 3.14, produces a current response that is related to the integral of the voltage
applied across its terminals. This can be seen by integrating

v(t) = L di/dt

to yield

i(t) = i(to ) + (1/L) ∫_{to}^{t} v(τ )dτ,


Figure 3.14 An inductor acts as an integrator of its voltage input v(t).



in analogy with the voltage response of a current-driven capacitor. Also, by analogy,
it should be clear that the current response of an inductor to a practical voltage input
must be continuous.
Example 3.8
A voltage input

v(t) = u(t) V

is applied to a 3 H inductor, where u(t) represents the unit step function


introduced in Figure 3.10c. Find the current response of the inductor for
t > 0 if i(t) = 0 for t < 0.
Solution The applied input v(t) = u(t) V is zero for t < 0, which is
consistent with i(t) being zero in the same time period.
Now, for t > 0 the input is v(t) = u(t) = 1 V, and therefore, the current
response is

i(t) = i(0) + (1/3) ∫_{0}^{t} 1 dτ = i(0) + t/3 = t/3 A,

where, by the continuity of the inductor current, i(0) = 0. We can express
the overall response curve for all t as

i(t) = (t/3) u(t) A,

making use of the unit step function notation.

Op-amp integrators: If we were to swap the positions of the capacitor and resistor
in the differentiator circuit of Figure 3.11a, we would obtain the integrator circuit
shown in Figure 3.15a. To see that Figure 3.15a is an integrator, let us analyze the
circuit, using the ideal op-amp approximations.


Figure 3.15 An op-amp integrator with (a) a capacitor and (b) an inductor.

The resistor current is approximately

vs (t)/R,

flowing from left to the right, and the capacitor current is

C d/dt (0 − vo (t)) = −C dvo /dt,

in the same direction. Matching these currents, we find that

vs (t) ≈ −RC dvo /dt .

The implication is (in analogy with capacitor and inductor integrators) that

vo (t) ≈ vo (to ) − (1/RC) ∫_{to}^{t} vs (τ )dτ.
We leave it as an exercise for you to verify that for the integrator circuit shown
in Figure 3.15b,

vs (t) ≈ −(L/R) dvo /dt,

and therefore,

vo (t) ≈ vo (to ) − (R/L) ∫_{to}^{t} vs (τ )dτ.
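As a small numeric illustration of the capacitor-based integrator rule, with assumed component values giving RC = 10 ms, a constant input Vs produces an output ramp of slope −Vs/(RC):

```python
# Op-amp RC integrator driven by a constant input Vs from rest:
# vo(t) = vo(t0) - (1/RC) * Vs * t, a ramp with slope -Vs/(RC).
# Component values are assumed for illustration (RC = 10 ms).
R, C = 10e3, 1e-6
Vs, vo0 = 1.0, 0.0

def vo(t):
    # closed-form output for a constant input applied at t = 0
    return vo0 - Vs * t / (R * C)

print(vo(0.01))   # output after 10 ms of a 1 V input
```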

3.3 Linearity, Time Invariance, and LTI Systems

The differentiator and integrator circuits of Section 3.2 can be viewed abstractly as
analog systems. Such systems convert their time-continuous input signals f (t) to
output signals y(t) according to some rule that defines the system. For instance, for
the integrator system shown in Figure 3.16, the input–output rule has the form

y(t) = y(to ) + ∫_{to}^{t} f (τ )dτ.

For another analog system, the rule may be specified in terms of a differential equation
for the output y(t) that depends on input f (t).
This book is concerned mainly with signal processing systems that can be implemented
with lumped-element electrical circuits, and, in particular, with linear and
time-invariant (LTI) systems such as differentiators, integrators, and RC and RL
circuits to be examined in the next section. The purpose of this section is to describe
what we mean by an LTI system and to introduce some key terminology to be used
throughout the rest of the book.


Figure 3.16 (a) An integrator system with an input i(t) and output v(t), and (b)
a symbolic representation of the same system in terms of the input and output
signals designated as f (t) and y(t), respectively.

We already have seen that an integrator output having the form

y(t) = y(to ) + ∫_{to}^{t} f (τ )dτ,

for t > to , depends not only on the input f (t) from time to to t, but also on y(to ),
the initial state of the integrator. In Figure 3.16, y(to ) is the initial voltage across the
capacitor, which is proportional to the initial charge on the capacitor. So we think of
this as the initial state. Notice that the contributions of y(to ) and {f (τ ), to < τ < t}
to y(t) are additive, and each contribution, namely, y(to ) and ∫_{to}^{t} f (τ )dτ , vanishes with
vanishing y(to ) and f (t), respectively.
Thus, for f (t) = 0—that is, with zero input—the integrator output is just

y(to ),

while for y(to ) = 0—that is, under the zero-state condition—the output is only

∫_{to}^{t} f (τ )dτ.

Therefore, it is convenient to think of y(to ) and ∫_{to}^{t} f (τ )dτ as zero-input and zero-state
components of the output, respectively, and, conversely, to think of the output
as a sum of zero-input and zero-state response components:

y(t) = y(to ) + ∫_{to}^{t} f (τ )dτ,

where the first term, y(to ), is the zero-input response and the integral term is the
zero-state response.

In a very general sense, and using the same terminology, we can state that:

A system is said to be linear if its output is a sum of distinct zero-input
and zero-state responses that vary linearly with the initial state of the
system and linearly with the system input, respectively.

The zero-state response is said to vary linearly with the system input—known as
zero-state linearity—if (using the symbolic notation of Figure 3.16b), under zero-state
conditions,

f1 (t) −→ Linear −→ y1 (t)

and

f2 (t) −→ Linear −→ y2 (t)

imply that

K1 f1 (t) + K2 f2 (t) −→ Linear −→ K1 y1 (t) + K2 y2 (t),

for any arbitrary constants K1 and K2 . In other words, with zero-state linear systems, a
weighted sum of inputs produces a similarly weighted sum of corresponding zero-state
outputs, consistent with the superposition principle.

Example 3.9
Verify that an integrator system
i(t) −→ Integ −→ v(to ) + ∫_{to}^{t} i(τ )dτ

is zero-state linear.
Solution Assuming a zero initial condition, v(to ) = 0, we can express the
integrator outputs caused by inputs i1 (t) and i2 (t) as
v1 (t) = ∫_{to}^{t} i1 (τ )dτ

and
v2 (t) = ∫_{to}^{t} i2 (τ )dτ,

respectively. With a new input

i3 (t) ≡ K1 i1 (t) + K2 i2 (t),

which is a linear combination of the original inputs, the zero-state output is


v3 (t) = ∫_{to}^{t} (K1 i1 (τ ) + K2 i2 (τ ))dτ.

But we can rewrite this as


v3 (t) = K1 ∫_{to}^{t} i1 (τ )dτ + K2 ∫_{to}^{t} i2 (τ )dτ = K1 v1 (t) + K2 v2 (t),

which is a linear combination of the original outputs (with the same coefficients
in the linear combination), implying that the system is zero-state linear.
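Zero-state linearity can also be checked numerically, using a midpoint-rule integrator and two arbitrary test inputs (everything in this sketch is an illustrative assumption):

```python
# Numeric check of zero-state linearity for the integrator of Example 3.9:
# the zero-state response to K1*i1 + K2*i2 must equal K1*v1 + K2*v2.
def integ(i, t, n=2000):
    # midpoint Riemann sum for the integral of i from 0 to t
    dt = t / n
    return sum(i((k + 0.5) * dt) for k in range(n)) * dt

i1 = lambda t: t ** 2        # two arbitrary test inputs
i2 = lambda t: 3.0
K1, K2 = 2.0, 0.25
i3 = lambda t: K1 * i1(t) + K2 * i2(t)

t_end = 1.5
lhs = integ(i3, t_end)
rhs = K1 * integ(i1, t_end) + K2 * integ(i2, t_end)
print(lhs, rhs)
```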

Example 3.10
Identify the zero-input and zero-state response components of the output
of the op-amp differentiator
vs (t) −→ Diff −→ −RC dvs /dt
and show that the system is zero-state linear.
Solution For zero-input, vs (t) = 0 and the system output is also zero.
So, the system has no dependence on initial conditions and the zero-input
response is identically zero. Thus, the system output consists entirely of
the zero-state component. The system outputs caused by inputs vs1 (t) and
vs2 (t) can be expressed as
v1 (t) = −RC dvs1 /dt
and
v2 (t) = −RC dvs2 /dt,
respectively. With the input

vs3 (t) ≡ K1 vs1 (t) + K2 vs2 (t),

the zero-state output is


v3 (t) = −RC d(K1 vs1 (t) + K2 vs2 (t))/dt
       = −K1 RC dvs1 /dt − K2 RC dvs2 /dt
       = K1 v1 (t) + K2 v2 (t),

which is a linear combination of the original outputs. Hence, the system is


zero-state linear.

The foregoing examples focused on the concept of zero-state linearity. Zero-input
linearity is a similar concept: A system is zero-input linear if the aforementioned type
of superposition principle applies to the calculation of the zero-input response. That
is, for systems that are zero-input linear, a linear combination of initial states (initial
conditions) produces an output that is the linear combination of the outputs caused
by the two sets of initial conditions operating individually. For example, hypothetical
systems with zero-input responses

y(t) = 3y(to )²

and

y(t) = 1 + y(to )

would not be zero-input linear. This is easily seen because doubling the initial condition
does not double the response due to the initial condition, which is a necessary
component of linearity. (Choose K1 = 2 and K2 = 0 in the linear combination of
initial states.) A third system with zero-input response

y(t) = y(to )e^{−(t−to )}

would be zero-input linear. The integrator and differentiator examined in Examples 3.9
and 3.10 also are zero-input linear.
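The doubling test described above is easy to run numerically for the two hypothetical zero-input responses y(t) = 3y(to)² and y(t) = y(to)e^{−(t−to)}:

```python
import math

# Doubling the initial state must double a zero-input-linear response.
# y(t) = y0*exp(-(t - t0)) passes the test; y(t) = 3*y0**2 fails it.
t, t0 = 2.0, 0.0

linear_resp = lambda y0: y0 * math.exp(-(t - t0))
nonlin_resp = lambda y0: 3 * y0 ** 2

ok_linear = abs(linear_resp(2.0) - 2 * linear_resp(1.0)) < 1e-12
ok_nonlin = abs(nonlin_resp(2.0) - 2 * nonlin_resp(1.0)) < 1e-12
print(ok_linear, ok_nonlin)
```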

Example 3.11
Is the system

f (t) −→ System −→ y(to ) + f²(t)

defined for t > to linear?


Solution Clearly, the system zero-input response y(to ) satisfies zero-input
linearity. However, the zero-state response f²(t) is nonlinear, and therefore
the system is not linear. To observe violation of zero-state linearity, simply
double the input from f (t) to 2f (t), and notice that the zero-state response
is not doubled. Rather, it is quadrupled from f²(t) to 4f²(t)! That can’t
happen in a linear system. In a linear system a doubled input always must
produce a doubled output; it can’t be anything else.

Integrators and differentiators, as well as the RC and RL circuits examined in the
next section, satisfy an additional system property known as time-invariance. Such
systems that are both linear and time-invariant are referred to as LTI systems, for
short. In time-invariant systems, delayed inputs cause equally delayed outputs, in the
sense that if

f (t) −→ Time-inv. −→ y(t),

then

f (t − to ) −→ Time-inv. −→ y(t − to )

for zero initial states and arbitrary to .


All electrical circuits constructed with constant valued components (with the
exception of source elements) are necessarily time-invariant, since in the description
of such circuits the choice of time origin t = 0 is totally arbitrary. By contrast, a circuit
containing a time-dependent resistor, for example, would not be time-invariant.

Example 3.12
Determine the zero-state response of an integrator system
i(t) −→ Integ −→ v(to ) + 2 ∫_{to}^{t} i(τ )dτ

with inputs

i1 (t) = at, t > 0

and

i2 (t) = a(t − 3), t > 3 s,

and show that responses to i1 (t) and i2 (t) are consistent with time-invariance.
The constant a is arbitrary.
Solution The zero-state response to input i1 (t) can be expressed as
v1 (t) = 2 ∫_{0}^{t} i1 (τ )dτ = 2 ∫_{0}^{t} aτ dτ = a τ²|_{τ=0}^{t} = at², t > 0.

Likewise, the zero-state response to input i2 (t) is


v2 (t) = 2 ∫_{3}^{t} i2 (τ )dτ = 2 ∫_{3}^{t} a(τ − 3)dτ = a (τ² − 6τ )|_{τ=3}^{t} = a(t − 3)², t > 3 s.

Since

v2 (t) = a(t − 3)², t > 3 s

is the same as

v1 (t − 3), t − 3 > 0,

the results are consistent with time-invariance of the system: A delayed
input causes an equally delayed zero-state output.
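The conclusion v2(t) = v1(t − 3) can be spot-checked numerically at a few time instants (the constant a and the test times are arbitrary choices):

```python
# Time-invariance check for Example 3.12: v1(t) = a*t**2 for t > 0 and
# v2(t) = a*(t - 3)**2 for t > 3 should satisfy v2(t) = v1(t - 3).
a = 1.7                        # arbitrary constant

v1 = lambda t: a * t ** 2 if t > 0 else 0.0
v2 = lambda t: a * (t - 3) ** 2 if t > 3 else 0.0

checks = [abs(v2(t) - v1(t - 3)) < 1e-12 for t in (3.5, 4.0, 7.25)]
print(checks)
```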

As we study the properties of first-order RC and RL circuits in the next section,
we will make use of the systems terminology introduced in this section. The terminology
and associated concepts will, in fact, play an essential role throughout the rest
of this book.
3.4 First-Order RC and RL Circuits
Figure 3.17 shows a source network with an open circuit voltage vs (t) and a Thevenin
resistance R that are connected at time to across a capacitor C. For t > to , the loop
current i(t) and resistor voltage Ri(t) easily can be deduced from the capacitor voltage
v(t). Therefore, we will focus our efforts on how to find v(t) for t > to . The problem
of determining v(t) for t > to can be viewed as an LTI system problem with a system
input
f (t) = vs (t),
output
y(t) = v(t),
and an input–output relation corresponding to the solution of a first-order ordinary
differential equation (ODE) derived and examined next.


Figure 3.17 A first-order RC circuit that constitutes an LTI system.

We can identify the ODE that governs the RC circuit shown in Figure 3.17 for
t > to by writing the KVL equation around the loop. Since the loop current can be
expressed as

i(t) = C dv/dt

in terms of the capacitor voltage v(t), the KVL equation is

vs (t) = RC dv/dt + v(t).

This equation can be rearranged as

dv/dt + (1/RC) v(t) = (1/RC) vs (t),

which is a first-order linear ODE with constant coefficients that describes the circuit
for t > to .
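Before solving this ODE analytically, it can be integrated numerically; the forward-Euler sketch below uses assumed values (RC = 1 ms, Vs = 5 V, v(0) = 0) and compares the result against the standard charging solution Vs(1 − e^{−t/RC}):

```python
import math

# Forward-Euler integration of dv/dt + v/(RC) = vs/(RC) for a DC input Vs
# with v(0) = 0, compared against the standard solution Vs*(1 - exp(-t/RC)).
# Component values are assumed for illustration (RC = 1 ms).
R, C, Vs = 1e3, 1e-6, 5.0
dt, T = 1e-7, 3e-3            # time step and end time, seconds

v, t = 0.0, 0.0
while t < T:
    v += dt * (Vs - v) / (R * C)   # dv = dt * (vs - v)/(RC)
    t += dt

v_exact = Vs * (1 - math.exp(-T / (R * C)))
print(v, v_exact)
```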

This differential equation is called “first-order” and “ordinary” because it contains
only the first ordinary derivative of its unknown v(t), instead of higher-order derivatives
or partial derivatives. It is said to have constant coefficients because the coefficient
1/RC in front of both v(t) and the forcing function vs (t) does not vary with time. This is
true because the circuit components R and C are assumed to have constant values.
The ODE also is said to be linear, because a linear combination of inputs applied to
the ODE produces a solution that is the linear combination of the individual outputs.³
Furthermore, the ODE also satisfies the zero-input linearity condition, as we will see
in the next section.
The linearity and the constant coefficients of the preceding ODE guarantee that
the RC circuit of Figure 3.17 constitutes an LTI system for t > to . The same system
properties also apply to all resistive circuits containing a single capacitor, because we
can represent all such circuits as in Figure 3.17 by using Thevenin equivalents.

3.4.1 RC-circuit response to constant sources


Figure 3.18 is a special case of Figure 3.17, with to = 0 and

vs (t) = Vs ,

Figure 3.18 An RC circuit with a switch that closes at t = 0 and a DC voltage source Vs.

³ Verification of linearity: Assume that

dv/dt + (1/RC) v(t) = (1/RC) f(t)

and

dw/dt + (1/RC) w(t) = (1/RC) g(t)

are true and therefore that v(t) and w(t) are different solutions of the same ODE with different inputs f(t) and g(t). A weighted sum of the equations, with coefficients K1 and K2, can be expressed as

d(K1 v + K2 w)/dt + (1/RC)(K1 v(t) + K2 w(t)) = (1/RC)(K1 f(t) + K2 g(t)),

which implies that the solution of the ODE, with an input K1 f(t) + K2 g(t), is the linear superposition K1 v(t) + K2 w(t) of solutions v(t) and w(t), obtained with inputs f(t) and g(t), respectively. Hence, superposition works, and the ODE is said to be linear.

a DC input. Clearly, the capacitor response v(t) for t > 0 is a solution of the linear ODE

dv/dt + (1/RC) v(t) = (1/RC) Vs,

which satisfies an initial condition

v(0) = v(0−),

where v(0− ) stands for the capacitor voltage just before the switch is closed. The
continuity of capacitor voltage discussed in Section 3.2.2 allows us to evaluate v(t)
at t = 0 and requires that we match v(0) to v(0− ), a voltage value established by the
past charging/discharging activity of the capacitor.
To solve the ODE initial-value problem just outlined, we first note that

v(t) = Vs

is a particular solution of the ODE, meaning that it fits into the differential equation. However, we must find a solution that also matches the initial value at t = 0−. So, unless Vs = v(0−), this particular solution is not viable.
Next, we note that, with no loss of generality, an entire family of solutions of the ODE can be written as

v(t) = vh(t) + Vs,

where it is required that

d/dt (vh(t) + Vs) + (1/RC)(vh(t) + Vs) = (1/RC) Vs,

or, equivalently,

dvh/dt + (1/RC) vh(t) = 0.

But the last ODE—known as the homogeneous form of the original ODE—can be integrated directly to obtain a homogeneous (or complementary) solution

vh(t) = A e^{−t/RC},
where A is an arbitrary constant.⁴ Thus,

v(t) = A e^{−t/RC} + Vs

is the family of general solutions of the ODE, where A can be any constant.
Imposing the initial condition

v(0) = v(0− )

on the general solution, we find that

v(0) = A + Vs = v(0− ),

so that

A = v(0− ) − Vs .

Therefore, the solution to the initial-value problem posed earlier, which is the capacitor voltage in the RC circuit for t > 0, is

v(t) = [v(0−) − Vs] e^{−t/RC} + Vs.

This solution simultaneously satisfies the ODE and also matches the initial condition at t = 0−.
This result has an interesting structure that makes it easy to remember. The first term decays to zero as t → ∞, leaving the second term, which is a DC solution (i.e., the solution after the voltages and currents are no longer changing). But, under DC conditions, the capacitor in Figure 3.18 becomes an open circuit, taking the full value of the DC source voltage Vs. Thus, the component Vs in the solution for v(t) should be viewed as the final value of v(t), as opposed to its initial value v(0−). The transition from initial to final DC state for v(t) occurs as an exponential decay of the difference v(0−) − Vs between the two states, with the rate of decay controlled by the time constant RC.
⁴ Verification: The homogeneous ODE implies that

dvh/vh = −(1/RC) dt,

which in turn integrates into

ln vh = −(1/RC) t + K,

where K is an arbitrary integration constant. Hence,

e^{ln vh} = vh = e^{−(1/RC)t + K} = A e^{−t/RC},

as claimed, where A ≡ e^K is another arbitrary constant.


Figure 3.19 (a) Response function for v(0−) = 4 V, Vs = 2 V, RC = 1 s, and (b) for v(1−) = 4 V, Vs = 6 V, RC = 1/2 s, and a 1 s time delay in closing the switch.

Figure 3.19a shows v(t) for the case v(0−) = 4 V, Vs = 2 V, R = 2 Ω, and C = 1/2 F. The time constant of the decay is

RC = (2 Ω) × (1/2 F) = 1 s,

which is the amount of time it takes for

v(0−) − Vs

to drop to

(v(0−) − Vs) e^{−1} ≈ 0.37 (v(0−) − Vs).

We derived this result for v(t), assuming a switch closing time of to = 0. For an arbitrary switching time to, the original result can be shifted in time to obtain

v(t) = [v(to−) − Vs] e^{−(t−to)/RC} + Vs

for t > to. Here v(to−) denotes the initial state of the capacitor voltage just before the switch is closed at t = to. Figure 3.19b depicts v(t) for the case with to = 1 s, v(1−) = 4 V, Vs = 6 V, R = 1 Ω, and C = 1/2 F. Notice that now the RC time constant is 0.5 s, which is one-half the value assumed in Figure 3.19a.
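Both response curves in Figure 3.19 follow directly from this formula. As a quick numerical cross-check (our own illustration, not part of the text; the function names are ours), the Python sketch below evaluates the closed-form response and compares it with a brute-force forward-Euler integration of dv/dt = (Vs − v)/RC, using the Figure 3.19a parameter values.

```python
import math

def rc_step_response(t, v0, Vs, R, C, t0=0.0):
    """Closed-form capacitor voltage for t > t0: v(t) = (v0 - Vs) e^{-(t-t0)/RC} + Vs."""
    return (v0 - Vs) * math.exp(-(t - t0) / (R * C)) + Vs

def rc_step_numeric(t_end, v0, Vs, R, C, dt=1e-5):
    """Forward-Euler integration of dv/dt = (Vs - v)/(RC), as an independent check."""
    v, t = v0, 0.0
    while t < t_end:
        v += dt * (Vs - v) / (R * C)
        t += dt
    return v

# Figure 3.19a values: v(0-) = 4 V, Vs = 2 V, R = 2 ohms, C = 0.5 F (so RC = 1 s)
v_closed = rc_step_response(1.0, 4.0, 2.0, 2.0, 0.5)  # one time constant after switching
v_euler = rc_step_numeric(1.0, 4.0, 2.0, 2.0, 0.5)
```

At t = 1 s (one time constant), the difference v(0−) − Vs = 2 V has decayed by the factor e^{−1}, so both computations should give 2 + 2e^{−1} V.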
In the next set of examples we will make use of the general results obtained
above.

Example 3.13
Consider the circuit shown in Figure 3.20a. The switch in the circuit is
moved at t = 0 from position a to position b, bringing the capacitor into
the left side of the network. Assuming that the capacitor is in the DC state
when the switch is moved, calculate the capacitor voltage v(t) for t > 0.
Solution For t < 0, the capacitor is in the DC state and behaves like an open circuit. Therefore, the 2 A source current in the circuit flows through the 1 Ω resistor on the right, generating a 2 V drop from top to bottom.
Figure 3.20 (a) A switched RC circuit, (b) capacitor circuit for t > 0, and (c) equivalent circuit.

Because the 2 Ω resistor in series with the capacitor conducts no current, there is no voltage drop across the 2 Ω resistor. Thus, the capacitor voltage v(t) matches the 2 V drop across the 1 Ω resistor, giving

v(0−) = 2 V.

Figure 3.20b shows the capacitor circuit for t > 0. The resistive network across the capacitor can be replaced by its Thevenin equivalent, yielding the equivalent circuit in Figure 3.20c. Using the equivalent circuit, we see that as t → ∞, v(t) → Vs/2, which is the final state of the capacitor when it becomes an open circuit. We also note that the RC time constant is 1 Ω × 1 F = 1 s. Hence, for t > 0,

v(t) = [v(0−) − Vs/2] e^{−t} + Vs/2.

Notice that the expression for v(t) in Example 3.13 also can be written as

v(t) = v(0−) e^{−t} + (1 − e^{−t}) Vs/2,

where the first term is the zero-input component and the second term is the zero-state component. Clearly, the foregoing zero-input and zero-state components vary linearly with the initial state v(0−) and input Vs, respectively. Therefore, the zero-input and zero-state linearity conditions are satisfied and the circuit constitutes a linear system (as claimed, but not shown, earlier on).

Example 3.14
Calculate the currents i1 (t) and i2 (t) in the circuit shown in Figure 3.20a.
Solution From the figure, we see that, for t < 0,

i1(t) = i2(t) = Vs/4.

For t > 0,

i1(t) = [Vs − v(t)]/2

and

i2(t) = v(t)/2.

Substituting the expression for v(t) from the previous equation, we obtain

i1(t) = −[v(0−)/2] e^{−t} + (Vs/4)(1 + e^{−t})

and

i2(t) = [v(0−)/2] e^{−t} + (Vs/4)(1 − e^{−t})

for t > 0. The voltage waveform v(t) and the current waveforms i1(t) and i2(t) are plotted in Figure 3.21, for the case v(0−) = 2 V (as in Example 3.13) and Vs = 8 V.

Figure 3.21 Capacitor voltage and resistor current responses in the circuit shown in Figure 3.20.

Notice that the capacitor voltage curve in Figure 3.21 is continuous across t = 0
(the switching instant in Examples 3.13 and 3.14), but the curves for i1 (t) and i2 (t)
are discontinuous. Clearly, it is impossible to assign unique values to i1 (0) and i2 (0).
Instead, we note that i1 (0− ) = 2 A, i1 (0+ ) = 3 A and i2 (0− ) = 2 A, i2 (0+ ) = 1 A,
where i1 (0− ) and i1 (0+ ), for instance, refer to the limiting values of i1 (t) as t = 0 is
approached from the left and right, respectively. All solutions in the circuit for t > 0
can be found using the initial-state v(0− ) of the capacitor voltage, without knowledge
of i1 (0− ), i2 (0− ), etc., as we found out explicitly in Example 3.14. In this sense the
initial capacitor voltage v(0− ) fully describes the initial state of the entire RC circuit
for t > 0.
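As a quick sanity check on these expressions, the short Python sketch below (our own illustration, with hypothetical function names) evaluates v(t), i1(t), and i2(t) for the plotted case v(0−) = 2 V and Vs = 8 V; it reproduces the jump values i1(0+) = 3 A and i2(0+) = 1 A and the common final value of 2 A.

```python
import math

def v_cap(t, v0=2.0, Vs=8.0):
    """Capacitor voltage of Example 3.13: v0 e^{-t} + (Vs/2)(1 - e^{-t})."""
    return v0 * math.exp(-t) + (Vs / 2.0) * (1.0 - math.exp(-t))

def i1(t, Vs=8.0):
    """i1(t) = [Vs - v(t)]/2, per Example 3.14."""
    return (Vs - v_cap(t, Vs=Vs)) / 2.0

def i2(t):
    """i2(t) = v(t)/2, per Example 3.14."""
    return v_cap(t) / 2.0
```

Evaluating at t = 0+ gives i1 = (8 − 2)/2 = 3 A and i2 = 2/2 = 1 A, while for large t both currents approach Vs/4 = 2 A, matching Figure 3.21.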

3.4.2 RL-circuit response to constant sources


Figure 3.22 shows an RL circuit with a DC current source Is. Since the inductor voltage drop is L di/dt in the direction of i(t), the KCL equation at the top node of the circuit can be written as

Is = (L/R) di/dt + i(t).

So, the ODE that describes the inductor current in the circuit is

di/dt + (R/L) i(t) = (R/L) Is.
The solution to this ODE for t > 0 can be expressed as

i(t) = [i(0−) − Is] e^{−t/(L/R)} + Is,

by analogy with the RC-circuit solution developed in the previous section.
Clearly, the inductor current i(t) in the RL circuit evolves from an initial value of i(0−) to a final DC value of Is as t → ∞ and the inductor turns into an effective short circuit. The time constant of exponential variation is L/R. If the input Is is applied, beginning at some delayed time t = to, then a delayed version of the solution,

i(t) = [i(to−) − Is] e^{−(t−to)/(L/R)} + Is,

is pertinent for t > to.

Figure 3.22 An RL circuit with DC input Is.

Figure 3.23 An RL circuit with a make-before-break type switch.

Example 3.15
Consider the circuit shown in Figure 3.23, where the switch moves from right to left and the inductor is connected into both sides of the circuit at the single instant t = 0. Determine the inductor current i(t) and voltage v(t) for t > 0. Assume that di/dt = 0 for t < 0.
Solution From the figure, we see that

i(0−) = 2 A,

because the inductor is effectively a short circuit for t < 0 (since di/dt = 0). For t > 0, the inductor finds itself in the source-free segment on the left. Hence,

Is ≡ lim_{t→∞} i(t) = 0.

Also, the equivalent resistance in parallel with the inductor is 1 Ω, and therefore the exponential decay time constant is

L/R = (2 H)/(1 Ω) = 2 s.

Thus, for t > 0, the inductor current is

i(t) = i(0−) e^{−t/(L/R)} = 2 e^{−0.5t} A.

Next, we notice that half of this current flows upward through each resistor on the left, and therefore the voltage v(t) is −(2 Ω) × i(t)/2, or

v(t) = −2 e^{−0.5t} V.

The resistors will dissipate the initial energy

w = (1/2)(2 H)(2 A)² = 4 J

stored in the inductor, and all signals in the circuit vanish as t → ∞.
Figure 3.24 An RL circuit with a 2 V DC input.

Example 3.16
Consider the circuit shown in Figure 3.24. We will assume that

i(0−) = 0

and calculate i(t) for t > 0. First, we notice that the inductor becomes an effective short circuit as t → ∞, so the currents

ix(t) → 1 A and iy(t) → 0.

Therefore, the final inductor current is 1 A. The Thevenin resistance of the network seen by the inductor is 1 Ω (just replace the voltage source with a short circuit and combine the two parallel resistors); therefore,

L/R = 2 s.

Using these values, as well as i(0−) = 0, we write

i(t) = [0 − 1] e^{−0.5t} + 1 A = (1 − e^{−0.5t}) A.

This is a zero-state response of the circuit, because we started with a zero initial state, i(0−) = 0.

What if the initial state of the circuit described were i(0−) = 2 A? In that case the response i(t) would be the superposition of the previous expression and the zero-input solution of the circuit for i(0−) = 2 A. But, we already have found the zero-input solution. Under the zero-input condition, the voltage source in Figure 3.24 is replaced with a short circuit and the circuit reduces to the circuit analyzed in Example 3.15 for the same initial current i(0−) = 2 A. Therefore, superposing the answers in Examples 3.15 and 3.16, we get

i(t) = 2 e^{−0.5t} A + (1 − e^{−0.5t}) A = (1 + e^{−0.5t}) A.

Figure 3.25 In this circuit the switch is moved twice, at t = 0 and t = 2 s.

Now, what if i(0−) were 4 A? No problem! In this case we can double the zero-input response just calculated, since the response is linear in i(0−), and add it to the zero-state response. The answer is

i(t) = 4 e^{−0.5t} A + (1 − e^{−0.5t}) A = (1 + 3 e^{−0.5t}) A.

Example 3.17
As a final example, we will calculate the inductor current i(t) in the circuit shown in Figure 3.25. This is the same circuit as in Example 3.15, but, at t = 2 s, the switch is returned back to its original position. Therefore, the inductor current i(t) evolves until t = 2 s, exactly as determined in Example 3.15, namely,

i(t) = 2 e^{−0.5t} A.

So, the inductor current just before the switch is moved again is

i(2−) = 2 e^{−1} A.

As t → ∞, the inductor current will build up from this value to a final current value of 2 A, with a time constant of

(2 H)/(2 Ω) = 1 s.

Notice that the time constant is different than before, because the inductor sees a different Thevenin resistance after t = 2 s. Therefore, for t > 2 s, the current variation is

i(t) = (2 e^{−1} − 2) e^{−(t−2)} + 2 A.

The complete current waveform is shown in Figure 3.26.
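The two-stage waveform of Figure 3.26 is easy to reproduce numerically. The sketch below (our own check, not from the text) encodes the piecewise solution and confirms that the inductor current is continuous at the switching instant t = 2 s and settles at 2 A.

```python
import math

def i_inductor(t):
    """Piecewise inductor current of Example 3.17 (t in seconds, result in amperes)."""
    if t <= 2.0:
        # decay from 2 A toward 0 with time constant 2 s
        return 2.0 * math.exp(-0.5 * t)
    # rise from i(2-) = 2 e^{-1} A toward 2 A with time constant 1 s
    i_at_2 = 2.0 * math.exp(-1.0)
    return (i_at_2 - 2.0) * math.exp(-(t - 2.0)) + 2.0
```

Because the inductor current cannot jump, the two branches agree at t = 2 s, which is exactly what makes i(2−) the usable initial state for the second stage.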
3.4.3 RC- and RL-circuit response to time-varying sources
As we have found, the capacitor voltage in RC circuits and inductor current in RL circuits are described by linear first-order ODEs of the form

dy/dt + a y(t) = b f(t),
Figure 3.26 Inductor current waveform for the circuit shown in Figure 3.25.

where y(t) denotes a capacitor voltage (in RC circuits) or inductor current (in RL
circuits), a and b are circuit-dependent constants, and f (t) is an input source or signal.
For instance, in the RC circuit shown in Figure 3.27,

a = b = 1/RC

and

f(t) = vs(t)

for t > 0.
In RC and RL circuits with time-varying inputs (as in Figure 3.27), the general solution of

dy/dt + a y(t) = b f(t)

for t > 0 can be written as

y(t) = A e^{−at} + yp(t),

where A e^{−at} is a solution of the homogeneous ODE

dyh/dt + a yh(t) = 0

Figure 3.27 A first-order circuit with an arbitrary time-varying input f(t) = vs(t) applied at t = 0.
Source function f(t)           Particular solution of dy/dt + a y(t) = b f(t)
1  constant D                  constant K
2  Dt                          Kt + L, for some K and L
3  D e^{pt}                    K e^{pt} if p ≠ −a;  K t e^{pt} if p = −a
4  cos(ωt) or sin(ωt)          H cos(ωt + θ), where H and θ depend on ω, a, and b

Table 3.1 Suggestions for particular solutions of dy/dt + a y(t) = b f(t) with various source functions f(t).

and yp(t) is a particular solution of the original ODE (like those suggested in Table 3.1 for inputs f(t) having various forms). Since

y(0) = A + yp(0),

it follows that

y(t) = [y(0) − yp(0)] e^{−at} + yp(t)
     = y(0) e^{−at} + yp(t) − yp(0) e^{−at},

where y(0) = y(0−), an initial capacitor voltage or an inductor current. In the second line of the preceding equation, we have grouped the solution into its zero-input response y(0) e^{−at} (due to the initial condition) and zero-state response yp(t) − yp(0) e^{−at} (due to the input).
Our solutions in the next set of examples will be specific applications of this general result for first-order RC and RL circuits.
For reference, the result can be generalized further to

y(t) = [y(to) − yp(to)] e^{−a(t−to)} + yp(t)

for t > to, where y(to) is an initial state (capacitor voltage or inductor current) defined at t = to. In this case the zero-state response for t > to is

yp(t) − yp(to) e^{−a(t−to)},

while the zero-input response is

y(to) e^{−a(t−to)}.
Example 3.18
Find the capacitor voltage y(t) in an RC circuit described by

dy/dt + y(t) = e^{−2t}

for t > 0. Assume a zero initial state—that is, y(0−) = 0.
Solution Since

f(t) = e^{−2t},

we can try, according to Table 3.1, a particular solution

yp(t) = K e^{−2t},

where K is a constant to be determined. Replacing y(t) in the ODE by K e^{−2t}, we find that

−2K e^{−2t} + K e^{−2t} = e^{−2t},

which implies that

K = −1.

Therefore, a particular solution of the ODE is

yp(t) = −e^{−2t}.

The particular solution does not match the initial condition (otherwise we would be finished), and so we proceed to the general solution (sum of the homogeneous and particular solutions):

y(t) = A e^{−t} − e^{−2t}.

Applying the initial condition yields

y(0) = A − 1,

implying that

A = y(0) + 1.

But since y(0) = y(0−) = 0, it follows that

A = 1.

Hence, the zero-state solution for t > 0 is

y(t) = e^{−t} − e^{−2t}.
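The zero-state solution of Example 3.18 can be verified independently by direct numerical integration of the ODE. The Python sketch below (an illustration of ours, not from the text) steps dy/dt = e^{−2t} − y(t) forward from y(0) = 0 with the forward-Euler method and compares the result against e^{−t} − e^{−2t}.

```python
import math

def y_closed(t):
    """Closed-form zero-state solution of dy/dt + y = e^{-2t}, y(0) = 0."""
    return math.exp(-t) - math.exp(-2.0 * t)

def y_numeric(t_end, dt=1e-5):
    """Forward-Euler integration of dy/dt = e^{-2t} - y, starting from y(0) = 0."""
    y, t = 0.0, 0.0
    while t < t_end:
        y += dt * (math.exp(-2.0 * t) - y)
        t += dt
    return y
```

Agreement to within the Euler step-size error confirms both the particular solution (K = −1) and the constant A = 1 found from the initial condition.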


Notice that the zero-state response determined in Example 3.18 consists of two transient functions, e^{−t} and e^{−2t}. The term transient describes functions that vanish as t → ∞.

Example 3.19
What is the zero-state solution of

dy/dt + y(t) = e^{−2t}

for t > −1 s, assuming that y(−1−) = 0?
Solution This problem is similar to Example 3.18, except that a solution is sought for t > −1 s, with y(−1−) = 0. Evaluating the general solution

y(t) = A e^{−t} − e^{−2t}

(from Example 3.18) at t = −1 s, we find

y(−1) = A e − e² = e(A − e).

Since we require

y(−1) = 0,

we conclude that

A = e.

Hence, the zero-state solution for the period t > −1 is

y(t) = e^{−(t−1)} − e^{−2t}.

Example 3.20
Let R = 1 Ω, C = 1 F, and

vs(t) = cos(t)

in the RC circuit shown earlier in Figure 3.27. Then, for t > 0, the capacitor voltage v(t) will be the solution to the ODE

dv/dt + v(t) = cos(t).

Determine v(t) for t > 0, assuming zero initial state—that is, v(0−) = 0.
Solution Since the ODE input is cos(t), a particular solution, according to Table 3.1, should have the form

vp(t) = H cos(t + θ) = A cos(t) − B sin(t),

where⁵

A = H cos θ and B = H sin θ.

Therefore,

dvp/dt = −A sin(t) − B cos(t).

Substituting vp(t) and dvp/dt into the ODE, we obtain

(−A sin(t) − B cos(t)) + (A cos(t) − B sin(t)) = cos(t),

or, equivalently,

(A − B) cos(t) − (A + B) sin(t) = cos(t).

This identity imposes the two constraints

A − B = 1 and A + B = 0,

yielding

A = 1/2 and B = −1/2.

Now, substituting the values for A and B into A = H cos θ and B = H sin θ gives

1/2 = H cos θ and −1/2 = H sin θ.

It follows that

sin θ / cos θ = tan θ = −1,

and so

θ = −45°

and

H = (1/2)/cos(−45°) = 1/√2,

or, equivalently, θ = 135° and H = −1/√2. Therefore, a particular solution of the ODE is

vp(t) = H cos(t + θ) = (1/√2) cos(t − 45°).

Consequently, the general solution can be written as

v(t) = A e^{−t} + (1/√2) cos(t − 45°),

where the first term is the homogeneous solution with arbitrary constant A. Employing the initial condition, v(0) = 0, gives

v(0) = A + (1/√2) cos(−45°) = A + 1/2,

which implies that

A = −1/2.

Thus, the zero-state solution for t > 0 is

v(t) = −(1/2) e^{−t} + (1/√2) cos(t − 45°).

Clearly, the first term −(1/2) e^{−t} is transient and the second term (1/√2) cos(t − 45°) is non-transient.

⁵ Here, we are making use of the trig identity cos(a + b) = cos a cos b − sin a sin b.
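The algebra for H and θ can be compressed by a complex-exponential shortcut of the kind developed in Chapter 4: substituting v_p(t) = Re{V e^{jt}} into dv/dt + v = Re{e^{jt}} gives (1 + j)V = 1. The sketch below (our own cross-check, anticipating the phasor method rather than quoting it) recovers H = 1/√2 and θ = −45° from V = 1/(1 + j).

```python
import cmath
import math

# Phasor cross-check for dv/dt + v = cos(t):
# substituting v_p(t) = Re{V e^{jt}} into the ODE gives (j + 1) V = 1.
V = 1.0 / (1.0 + 1j)

H = abs(V)                                # amplitude of the steady-state co-sinusoid
theta_deg = math.degrees(cmath.phase(V))  # phase angle in degrees
```

Since 1/(1 + j) = (1 − j)/2, the magnitude is 1/√2 and the angle is −45°, in agreement with the particular solution found above.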

Example 3.21
Given that

y(0−) = 1,

what is the zero-input response of

dy/dt + a y(t) = f(t)?

Is the solution transient?
Solution To find the zero-input response we set f(t) = 0 in the ODE to obtain

dy/dt + a y(t) = 0.

The general solution of this homogeneous ODE is

y(t) = A e^{−at}.

Since

y(0) = A = y(0−) = 1,

we conclude that

y(t) = e^{−at}

for t > 0. This solution is transient only if a > 0. A first-order RC circuit with negative R will display a non-transient zero-input response. Note: A negative Thevenin resistance is possible for a network with dependent sources.

In Examples 3.18 through 3.21, we saw that both the zero-input and zero-state responses of first-order ODEs can include non-transient as well as transient functions. The part of a system response remaining after the transient components have vanished is called the system steady-state response. Of course, if nothing remains after the transients vanish, then the steady-state response is, trivially, zero. Such was the case in Example 3.18, where we found that the system response was composed entirely of transient functions. In Example 3.20, on the other hand, the steady-state response was (1/√2) cos(t − 45°).

Example 3.22
Suppose that

dv/dt + v(t) = cos(t)

is valid for all t. Then, what is the steady-state component of the response v(t)?
Solution From Example 3.20, we know that a particular solution to the ODE is the co-sinusoid

vp(t) = (1/√2) cos(t − 45°),

which is non-transient. The specification of an initial condition is irrelevant, because no matter when (for what time) an initial condition is specified, the homogeneous solution has the form A e^{−t}, which is transient and vanishes with increasing t. Therefore, the steady-state component of v(t) is the co-sinusoidal solution vp(t) given above.

3.5 nth-Order LTI Systems

As we saw in Section 3.4, RC and RL circuits containing a single energy storage element (a capacitor C or inductor L) are described by first-order linear ODEs. Similarly, linear circuits with n distinct energy storage elements are described by nth-order ODEs. For instance, the parallel RLC circuit shown in Figure 3.28 is a second-order LTI system described by second-order ODEs determined in Example 3.23.

Figure 3.28 A parallel RLC circuit with a current source input is(t).

Example 3.23
Determine the ODEs for the inductor current i(t) and capacitor voltage v(t) in the parallel RLC circuit shown in Figure 3.28.
Solution The KCL equation for the top node is

is(t) = v(t)/2 + i(t) + 3 dv/dt,

where we have

v(t) = 4 di/dt,

using the v–i relation for the 4 H inductor. Eliminating v(t) in the KCL equation, we get

is(t) = 2 di/dt + i(t) + 12 d²i/dt²,

or

d²i/dt² + (1/6) di/dt + (1/12) i(t) = (1/12) is(t),

which is the ODE for the inductor current i(t). Taking the derivative of both sides of the ODE and making the substitution

di/dt = v(t)/4,

we next obtain

d²(v(t)/4)/dt² + (1/6) d(v(t)/4)/dt + (1/12) v(t)/4 = (1/12) dis/dt,

which implies

d²v/dt² + (1/6) dv/dt + (1/12) v(t) = (1/3) dis/dt.

This is the ODE describing the capacitor voltage. Notice that the ODEs for i(t) and v(t) are identical except for their right-hand sides. Thus, the forms of the homogeneous solutions of the ODEs are identical.
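A direct time-domain simulation offers a useful check on this second-order model. The Python sketch below (our own illustration, using simple forward-Euler stepping; not from the text) integrates the state equations di/dt = v/L and dv/dt = (is − v/R − i)/C with the element values of Figure 3.28 and an assumed 1 A DC input; it confirms that i(t) → Is and v(t) → 0 as the inductor becomes an effective short circuit.

```python
def rlc_dc_response(Is=1.0, t_end=200.0, dt=1e-3):
    """Forward-Euler simulation of the parallel RLC circuit of Figure 3.28:
    di/dt = v/L and dv/dt = (Is - v/R - i)/C, with R = 2, L = 4, C = 3,
    starting from a zero initial state."""
    R, L, C = 2.0, 4.0, 3.0
    i, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        di = v / L
        dv = (Is - v / R - i) / C
        i += dt * di
        v += dt * dv
        t += dt
    return i, v
```

The transient dies out with a decay rate set by the real part (−1/12) of the characteristic roots, so by t = 200 s the simulated state has settled very close to the DC values i = Is and v = 0.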

As the order n of an LTI circuit or system increases, obtaining the governing ODEs of the form

d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + · · · + an y(t) = bo d^m f/dt^m + b1 d^{m−1}f/dt^{m−1} + · · · + bm f(t)

and solving them becomes increasingly more involved and difficult. Fortunately, there are efficient, alternative ways of analyzing LTI circuits and systems that do not depend on the formulation and solution of differential equations. The details of such methods, which are particularly useful when n is large, depend on whether or not the system zero-input response is transient.
Here is the central idea: In an LTI system with a transient zero-input response, the steady-state response to a co-sinusoidal input applied at t = −∞ will itself be a co-sinusoid and will not depend on an initial state. (See Example 3.22.) Therefore, in such systems—known as dissipative LTI systems—the zero-state response to a superposition of co-sinusoidal inputs can be written as a superposition (because of linearity) of co-sinusoidal responses. This superposition method for zero-state response calculations in dissipative LTI systems is known as the Fourier method and will be described in detail beginning in Chapter 5. An extension of the method, known as the Laplace method, is available for non-dissipative LTI systems where the zero-input response is not transient.
Our plan for learning how to handle nth-order circuit and system problems is as follows. In Chapters 5 through 7 we will study the Fourier method for zero-state response calculations in dissipative LTI systems. This is a very powerful method because, as we will discover in Chapter 7, any practical signal that can be generated in the lab can be expressed as a weighted superposition of sin(ωt) and cos(ωt) signals with different ω's. Since the Fourier method requires that we know the system steady-state response to co-sinusoidal inputs, we need an efficient method for calculating such responses in circuits and systems of arbitrary complexity (arbitrary order n). We will develop such a method in Chapter 4. The discussion of the Laplace method for handling non-dissipative system problems will be delayed until Chapter 11.
We close this chapter with two simple examples on zero-input response in nth-order systems. A more complete coverage of the same topic will be provided in Chapter 11 after we learn the Laplace method.
Example 3.24
Determine the zero-input solution of the second-order ODE

d²y/dt² + 3 dy/dt + 2 y(t) = f(t),  t > 0,

and discuss whether or not the system is dissipative.
Solution To find the zero-input solution, we set

f(t) = 0,

and solve the homogeneous ODE

d²y/dt² + 3 dy/dt + 2 y(t) = 0.

This equation can be satisfied by

y(t) = e^{st}

with certain allowed values for s. To find the permissible s we insert e^{st} for y(t) in the ODE and obtain

s² e^{st} + 3s e^{st} + 2 e^{st} = 0,

which implies

(s² + 3s + 2) e^{st} = 0,

and thus,

s² + 3s + 2 = (s + 1)(s + 2) = 0.

Clearly, the permissible values for s are

s = −1 and s = −2.

The zero-input solution of the ODE generally is a weighted superposition of e^{−t} and e^{−2t}. In other words,

y(t) = A e^{−t} + B e^{−2t},

where A and B are constants chosen so that y(t) satisfies prescribed initial conditions. For example, the initial state may be specified as the values of y(t) and its first derivative at t = 0, so that A and B can be found by solving

y(0) = A + B

and

dy/dt |t=0 = −A − 2B.

The system is dissipative, because the zero-input solution is transient.

In general, we can construct solutions of any nth-order homogeneous ODE

d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + · · · + an y(t) = 0

by superposing functions proportional to e^{si t}, where the constants si correspond to the roots of the polynomial

s^n + a1 s^{n−1} + · · · + an,

known as the characteristic polynomial of the ODE. For instance, the characteristic polynomial of

d²y/dt² + 3 dy/dt + 2 y(t) = 0,

used in Example 3.24, is

s² + 3s + 2 = (s + 1)(s + 2).

Example 3.25
Repeat Example 3.24 with the ODE

d²y/dt² + dy/dt − 2 y(t) = f(t).

Solution The characteristic polynomial is

s² + s − 2 = (s − 1)(s + 2).

Hence, the permissible values for s are 1 and −2, and the zero-input solution is of the form

y(t) = A e^{t} + B e^{−2t}.

Because the first term continually increases as t → ∞, the zero-input response is non-transient and the system must be non-dissipative.
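The dissipativity test used in these examples (check the algebraic signs of the real parts of the characteristic-polynomial roots) is easy to automate for second-order systems. The sketch below (our own helper functions, with assumed names) reproduces the roots found in Examples 3.24 and 3.25 and also handles a complex-root pair such as the one arising from the Example 3.23 polynomial s² + (1/6)s + 1/12.

```python
import cmath

def char_roots2(a1, a2):
    """Roots of the second-order characteristic polynomial s^2 + a1*s + a2."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    return (-a1 + disc) / 2.0, (-a1 - disc) / 2.0

def is_dissipative(roots):
    """Dissipative iff every characteristic root has a strictly negative real part."""
    return all(r.real < 0.0 for r in roots)
```

For s² + 3s + 2 the roots are −1 and −2 (dissipative); for s² + s − 2 they are 1 and −2 (non-dissipative); for s² + (1/6)s + 1/12 they form a complex pair with real part −1/12 (dissipative).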
From the previous examples, it should be clear that whether or not an LTI circuit is dissipative depends on the algebraic sign of the roots of its characteristic polynomial.⁶ However, even before examining the characteristic polynomial, we can recognize a circuit as dissipative if it contains at least one current-carrying resistor and contains no dependent sources. That is true, because such a circuit would have no new source of energy under zero-input conditions (i.e., with the independent sources suppressed) and would dissipate, as heat, whatever energy it may have stored in its capacitors and inductors, via current flowing through the resistor.

EXERCISES

3.1 (a) In Figure 3.3a, given that i+ ≈ 0, what happens to the current vo/RL in the circuit? Hint: the answer is related to the answer of part (b).
(b) For vs = 1 V, Rs = 50 Ω, and RL = 1 kΩ, what is the power absorbed by resistor RL in the circuit shown in Figure 3.3a and where does that power come from?
3.2 (a) Confirm that substitution of the linear op-amp model of Figure 3.1b into the noninverting amplifier circuit of Figure 3.2b leads to the following circuit diagram:
[circuit diagram: op-amp model with source vs, resistors Rs, Ri, Ro, R1, R2, dependent source Avx, and output vo]
(b) Assuming that A ≫ 1, Ri ≫ Rs, and Ri ≫ Ro, show that vo ≈ (1 + R1/R2) vs in the equivalent circuit model shown in part (a).
(c) Determine the short-circuit current ix in the following circuit:
[circuit diagram: the same model with the output short-circuited, carrying current ix]
(d) What is the Thevenin resistance RT = vo/ix of the equivalent circuit model above? Use the results from parts (b) and (c).

⁶ In later chapters, we commonly will encounter situations where the roots of the characteristic polynomial are complex, as in Example 3.23, where the polynomial is s² + (1/6)s + 1/12. In such cases, we will learn that the system is dissipative if the real parts of the roots are negative.

3.3 In the next circuit shown, determine the node voltage vo(t). You may assume that the circuit behaves linearly and make use of the ideal op-amp approximations (v+ ≈ v− and i+ ≈ i− ≈ 0).
[circuit diagram: op-amp circuit with 2 kΩ and 6 kΩ resistors, inputs v1(t) and v2(t), and output vo(t)]
3.4 In the following circuit, determine the node voltage vo, using the ideal op-amp approximations and assuming that Ra = Rb = 1 kΩ:
[circuit diagram: op-amp circuit with two 1 kΩ resistors, a 1956 Ω resistor, resistors Ra and Rb, and 2 V and 4 V sources]

3.5 Repeat Problem 3.4 for Ra = 0 and Rb = ∞.


3.6 In the circuit shown next, determine the voltage vx, assuming linear operation.
[circuit diagram: op-amp circuit with three 2 kΩ resistors and a 5 V source]

3.7 (a) In the following circuit, determine the capacitor current i(t):
[circuit diagram: current source is(t) driving a 1 F capacitor with voltage v(t); the source waveform is(t) is plotted, taking values between 1 A and −1 A over 0 ≤ t ≤ 2 s]
(b) In the next circuit, determine and plot the capacitor voltage v(t). Assume that v(t) = 0 for t < 0.
[circuit diagram: the same current source is(t) and 1 F capacitor with voltage v(t)]

3.8 In the following circuit, determine the output vo(t), using the ideal op-amp approximations:
[circuit diagram: op-amp circuit with a 1 mH inductor, a 1 kΩ resistor, a 10 cos(2000t) V source, and a 0.1 V source]

3.9 In the following circuit, determine vo(t):
[circuit diagram: op-amp circuit with a 1 H inductor, a 2 F capacitor, a cos(t) mV source, and a 2 mV source]

3.10 Using KCL and the v–i relations for resistors and capacitors, show that the voltage v(t) in the following circuit satisfies the ODE

3 dv/dt + (1/2) v(t) = is(t).

[circuit diagram: current source is(t) in parallel with a 2 Ω resistor and a 3 F capacitor with voltage v(t)]

3.11 In the next circuit, v(t) = 2 V for t < 0. Determine v(t) for t > 0 after the switch is closed, and identify the zero-state and zero-input components of v(t). In the circuit, vs denotes a DC voltage source (time-independent).
[circuit diagram: source vs, switch at t = 0, two 2 Ω resistors, and a 0.25 F capacitor with voltage v(t)]

3.12 In the next circuit, v(t) = 0 for t < 0. Determine v(t) for t > 0 after the switch is closed.
[circuit diagram: 2 A source, switch at t = 0, two 2 Ω resistors, and a 1 F capacitor with voltage v(t)]

3.13 Assuming linear operation and vc(0) = 1 V, determine vo(t) at t = 1 ms in the following circuit:

[Circuit figure for Problem 3.13: an op-amp circuit with three 1 kΩ resistors, a switch at t = 0, a 2 V source, a 1 μF capacitor with voltage vc(t), and output vo(t).]

3.14 Determine the ODE that describes the inductor current i(t) in the next circuit. Hint: Apply KVL using a loop current i(t) such that v(t) = 2 di/dt.

[Circuit figure for Problem 3.14: source vs(t) in series with a 4 Ω resistor and a 2 H inductor with voltage v(t).]

3.15 In the circuit that follows, find i(t) for t > 0 after the switch is closed.
Assume that i(t) = 0 for t < 0.

[Circuit figure for Problem 3.15: a 2 A source, a switch that closes at t = 0, a 2 Ω resistor, and a 1 H inductor carrying current i(t).]

3.16 The circuit shown next is in DC steady state before the switch flips at t = 0.
Find vL (0− ) and iL (0− ), as well as iL (t) and vL (t), for t > 0.


[Circuit figure for Problem 3.16: a 9 V source, a switch that flips at t = 0, a 5 Ω resistor, and a 6 H inductor with voltage vL(t) and current iL(t).]

3.17 Obtain the second-order ODE describing the capacitor voltage v(t) in the series RLC circuit shown next. Hint: Proceed as in Problem 3.14 and use i(t) = 2 dv/dt for the loop current.

[Circuit figure for Problem 3.17: source vs(t) in series with a 1 Ω resistor, a 1 H inductor, and a 2 F capacitor with voltage v(t).]


3.18 A second-order linear system is described by

d²v/dt² + 3 dv/dt + 2v(t) = cos(2t).

Confirm that the transient function

vh(t) = A e^{−t} + B e^{−2t}

is the homogeneous solution of the ODE and that its particular solution can be expressed as

vp(t) = H cos(2t + θ).

Determine the values of H and θ. Hint: See Example 3.20 in Section 3.4.3.
3.19 (a) Show that (e^{j2t} + e^{−j2t})/2 = cos(2t).
(b) Express (e^{j4t} − e^{−j4t})/(2j) in terms of a sine function.
(c) Given that −4(e^{j3t} + e^{−j3t}) = A cos(3t + θ), determine A > 0 and θ.
(d) Express P = Re{2e^{jπ/3} e^{j5t}} in terms of a cosine function.
3.20 Let f (x) = x, where x is a complex variable.
(a) Sketch the surface |f (x)| over the 2-D complex plane. Describe in
words what the surface looks like.
(b) Describe in words the appearance of the surface ∠f (x).

3.21 Let f (x) = x − (2 + j 3). Sketch the surface |f (x)| over the 2-D complex
plane and describe in words what the surface looks like.
4
Phasors and Sinusoidal
Steady State

4.1 PHASORS, CO-SINUSOIDS, AND IMPEDANCE 122


4.2 SINUSOIDAL STEADY-STATE ANALYSIS 136
4.3 AVERAGE AND AVAILABLE POWER 143
4.4 RESONANCE 150
EXERCISES 154

Suppose that we wish to calculate the voltage v(t) in Figure 4.1a, where the input source is co-sinusoidal. In Chapter 3 we learned how to do this by writing and then solving a differential equation. The solution involved doing a large amount of algebra, using trigonometric identities. It turns out that there is a far simpler method for calculating just the steady-state response v(t) in Figure 4.1a. The trick is to calculate V in Figure 4.1b by treating it as a DC voltage in a resistive circuit having a 1 V source and resistors with values −j Ω and 1 Ω. We then can use voltage division to obtain

V = (1/(−j + 1))(1 V) = 1/(1 − j) V.

Now, in this calculation j represents the imaginary unit and, thus, V is complex. If you were to convert V to polar form by using your engineering calculator, you would see an output like

(0.707107, 45°),

indicating a magnitude 0.707107 and an angle of 45° for the complex number V. Surprisingly, this magnitude and angle are the correct magnitude and angle of v(t) in Figure 4.1a, so that v(t) is simply

v(t) = 0.707107 cos(t + 45°) V.
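The complex division above is easy to check with any tool that does complex arithmetic. A quick Python sketch (ours, not part of the text) reproducing the magnitude and angle:

```python
import cmath
import math

# Voltage division with the capacitor treated as a -j ohm "resistor"
V = 1 / (-1j + 1)                          # phasor of v(t) for a 1 V source
magnitude = abs(V)                         # amplitude of v(t)
phase_deg = math.degrees(cmath.phase(V))   # phase shift of v(t), in degrees

# Matches v(t) = 0.707107 cos(t + 45 deg) V from the text
print(round(magnitude, 6), round(phase_deg, 1))
```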

[Figure 4.1: (a) a 1 F capacitor in series between the source cos(t) V and a 1 Ω resistor with voltage v(t); (b) the phasor circuit: 1 V source, −j Ω and 1 Ω impedances, phasor V; (c) plots of the input cos(t) and output 0.707 cos(t + 45°) versus t.]
Figure 4.1 (a) An RC circuit with a cosine input, (b) the equivalent phasor circuit, and (c) input and output signals of the circuit.

Figure 4.1c compares the input signal, cos(t), with the output signal, 0.707107 cos(t +
45◦ ). Notice that the output signal has the same frequency and shape as the input, but
it is attenuated (reduced in amplitude) and shifted in phase.
In this chapter we will learn why this trick with complex numbers works (even
though DC circuits do not have imaginary resistances, and a capacitor is not a resistor).
The trick is known as the phasor method, where V stands for the phasor of v(t) and
1 V is the phasor of cos(t) V. We will see that the phasor method is a simple technique
for determining the steady-state response of LTI circuits to sine and cosine inputs. For
brevity, we will refer to such responses as “sinusoidal steady-state.” We will explain
the phasor method in Section 4.1: We will see why the capacitor of Figure 4.1a was
converted to an imaginary resistance known as impedance and learn how to use
impedances to calculate sinusoidal steady-state responses in LTI circuits. Section 4.2
is mainly a sequence of examples demonstrating how to apply the phasor method using
the analysis techniques of Chapter 2, including source transformations, loop and node
analyses, and Thevenin and Norton equivalents. Sections 4.3 and 4.4 describe power
calculations in sinusoidal steady-state circuits and also discuss a phenomenon known
as resonance.

4.1 Phasors, Co-Sinusoids, and Impedance

The phasor method requires a working knowledge of addition, subtraction, multiplication, and division of complex numbers and an understanding that, given any complex number C,

Re{C} = (C + C*)/2,
Im{C} = (C − C*)/(j2).
Furthermore, given any real φ, you should be able to recite Euler's formula in your sleep:

e^{±jφ} = cos φ ± j sin φ,

as well as its corollaries

Re{e^{jφ}} = cos φ = (e^{jφ} + e^{−jφ})/2

and

Im{e^{jφ}} = sin φ = (e^{jφ} − e^{−jφ})/(j2).
Finally, you should be adept at converting between Cartesian and polar-form repre-
sentation of complex numbers. After completing your review of complex numbers
in Appendix A, read the rest of this section to learn about phasors and the phasor
method.

4.1.1 Phasors and co-sinusoids


Consider the signal f(t), defined as

f(t) ≡ Re{F e^{jωt}},

where ω is a real constant and F is a complex constant that can be written in exponential form as

F = |F| e^{jθ}.

Then what kind of a signal is f(t) and how can we plot it?
Let's find out. Substituting the polar form of F into the formula for f(t), and using the identity Re{e^{jφ}} = cos φ, we get

f(t) = Re{|F| e^{jθ} e^{jωt}} = |F| Re{e^{j(ωt+θ)}} = |F| cos(ωt + θ).

So,

f(t) = Re{F e^{jωt}} = |F| cos(ωt + θ)



is a cosine signal with

• amplitude |F|,
• phase ωt + θ,
• phase shift θ = ∠F, and
• radian frequency ω.

The complex constant F = |F| e^{jθ}, which has a magnitude that is the amplitude of f(t) and a phase that is the phase shift of f(t), will be called the phasor of f(t).
Example 4.1
The cosine signal

f(t) = Re{6e^{jπ/3} e^{j2t}} = 6 cos(2t + π/3)

is plotted in Figure 4.2a. Its phasor is

F = 6e^{jπ/3}.

The location of the phasor F in the complex plane is marked in Figure 4.2b. The magnitude, |F| = 6, is the peak value of the signal f(t) = 6 cos(2t + π/3) in Figure 4.2a. The angle of phasor F in Figure 4.2b is the phase shift, π/3 = 60°, of the signal f(t).

Example 4.2
The signal

g(t) = 3 cos(2t − π/4),

with frequency ω = 2 rad/s, has phasor

G = 3e^{−jπ/4} = 3∠−π/4 = 3∠−45°.

See the phasor G and the signal g(t) in Figure 4.2.

The period of signals f(t) and g(t), with frequency ω = 2 rad/s, is T = 2π/ω = π s, which is the time interval between the successive peaks (crests) of the f(t) and g(t) curves shown in Figure 4.2a.

Example 4.3
The phasor of

v(t) = 2 cos(6t − π/2)

is

V = 2e^{−jπ/2} = −j2 = 2∠−90°.

Signal v(t) has frequency ω = 6 rad/s, and its period is T = 2π/6 = π/3 s.

[Figure 4.2: (a) time plots of f(t) and g(t); (b) the phasors F and G drawn in the complex plane.]
Figure 4.2 (a) Plots of cosine signals f(t) = 6 cos(2t + π/3) and g(t) = 3 cos(2t − π/4) versus t, and (b) the locations of the corresponding phasors F = 6e^{jπ/3} and G = 3e^{−jπ/4} in the complex plane. Note that the signal f(t) "leads" (peaks earlier than) g(t) by π/3 + π/4 rad = 105°, because the angle of F is 105° greater than the angle of G in the complex plane. Equivalently, g(t) "lags" (peaks later than) f(t) by 105°. Also, the amplitude of g(t) is half as large as the amplitude of f(t), because the magnitude of phasor G is half the magnitude of phasor F.

Since

sin φ = cos(φ − π/2)

for all real φ (a trig identity that you should be able to visualize), the foregoing v(t) also can be expressed as 2 sin(6t). Therefore,

V = 2e^{−jπ/2} = −j2

is also the phasor of signal 2 sin(6t). In fact, using Euler's identity, we obtain

v(t) = Re{V e^{j6t}} = Re{−j2 e^{j6t}} = Re{−j2(cos(6t) + j sin(6t))} = 2 sin(6t),

as well as

v(t) = Re{V e^{j6t}} = Re{2e^{−jπ/2} e^{j6t}} = 2Re{e^{j(6t−π/2)}} = 2 cos(6t − π/2).

In general, the phasor of a sine signal

|F| sin(ωt + φ) = |F| cos(ωt + φ − π/2)

is

|F| e^{j(φ−π/2)} = −j|F| e^{jφ}.

For instance, the phasor of

i(t) = 6 sin(20t − π/4)

is

I = −j6 e^{−jπ/4},

which is also the same as

6e^{−jπ/2} e^{−jπ/4} = 6e^{−j3π/4} = 6∠−135°.

Example 4.4
w(t) = 5 sin(5t + π/3). What is the phasor W of w(t)?

Solution The phasor of

5 cos(5t + π/3)

is

5∠60°.

Therefore, the phasor of

5 sin(5t + π/3)

is

−j(5∠60°) = (1∠−90°)(5∠60°) = 5∠−30°.

Hence, W = 5∠−30°.

Example 4.5
A cosine signal p(t), with frequency 3 rad/s, has phasor P = j7. What is p(t)?

Solution

p(t) = Re{j7 e^{j3t}} = Re{7e^{j(3t+π/2)}} = 7 cos(3t + π/2).

Alternatively,

P = j7 = 7∠90°  ⇒  p(t) = 7 cos(3t + 90°),

since ω = 3 rad/s. As another alternative,

p(t) = Re{j7 e^{j3t}} = 7Re{j(cos(3t) + j sin(3t))} = 7Re{j cos(3t) − sin(3t)} = −7 sin(3t).

All versions of p(t) just given are equivalent.
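Conversions like those in Examples 4.3 through 4.5 all follow one mechanical rule, f(t) = Re{F e^{jωt}}, which is easy to script. A minimal Python sketch (the helper name is ours, not the text's):

```python
import cmath
import math

def signal_from_phasor(F, omega, t):
    """Evaluate f(t) = Re{F e^(j*omega*t)} for a phasor F at time t."""
    return (F * cmath.exp(1j * omega * t)).real

# Example 4.5: P = j7 at omega = 3 rad/s gives p(t) = 7 cos(3t + 90 deg) = -7 sin(3t)
P, omega, t = 7j, 3.0, 0.4
p_direct = signal_from_phasor(P, omega, t)
p_closed = -7 * math.sin(3 * t)
print(abs(p_direct - p_closed) < 1e-12)   # the two forms agree
```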

We can simplify certain mathematical operations on co-sinusoidal signals (meaning signals that are expressed either as a cosine or a sine) by working with the phasor representation

Re{F e^{jωt}}.

In particular, linear combinations of sinusoids and their derivatives, all having the same frequency ω, easily can be calculated by the use of phasors.

4.1.2 Superposition and derivatives of co-sinusoids

In this section, we will state and prove two principles concerning co-sinusoids and
their phasors and also demonstrate their applications in circuit analysis.

Superposition principle: The weighted superposition

k1 f1(t) + k2 f2(t)

of co-sinusoids

f1(t) = Re{F1 e^{jωt}}

and

f2(t) = Re{F2 e^{jωt}}

with phasors F1 and F2 is also a co-sinusoid with the phasor

k1 F1 + k2 F2.

Proof The superposition principle claims that

k1 f1(t) + k2 f2(t) = Re{(k1 F1 + k2 F2) e^{jωt}}.

To prove this claim, we expand its right-hand side as

Re{(k1 F1 + k2 F2) e^{jωt}} = Re{k1 F1 e^{jωt}} + Re{k2 F2 e^{jωt}} = k1 Re{F1 e^{jωt}} + k2 Re{F2 e^{jωt}} = k1 f1(t) + k2 f2(t).

Therefore, the claim is correct.¹ QED

Example 4.6
Suppose that as shown in Figure 4.3

i1 (t) = 3 cos(3t) A

and

i2 (t) = −4 sin(3t) A

denote currents flowing into a circuit node. Determine the amplitude and
phase shift of the current i3 (t) = i1 (t) + i2 (t).
Solution The phasor of i1(t) is I1 = 3 A, while the phasor of i2(t) is I2 = j4 A. Since

i3(t) = i1(t) + i2(t),

the superposition principle implies that

I3 = I1 + I2.

Hence,

I3 = 3 + j4 = 5 e^{j tan⁻¹(4/3)} A,

and therefore,

i3(t) = 5 cos(3t + tan⁻¹(4/3)) A.

Clearly, the amplitude of i3(t) is 5 A and its phase shift is tan⁻¹(4/3) ≈ 53.13°. Signals i1(t), i2(t), and i3(t) = i1(t) + i2(t) are shown in Figure 4.4.
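The phasor sum in Example 4.6 can be double-checked numerically; the sketch below (Python, our own check, not part of the text) compares the time-domain sum against Re{I3 e^{j3t}} at an arbitrary instant:

```python
import cmath
import math

I1, I2 = 3 + 0j, 4j          # phasors of 3 cos(3t) and -4 sin(3t), in amperes
I3 = I1 + I2                 # superposition principle

amplitude = abs(I3)                        # 5.0 A
phase_deg = math.degrees(cmath.phase(I3))  # tan^-1(4/3), about 53.13 degrees

t = 0.7                      # any instant will do
time_domain = 3 * math.cos(3 * t) - 4 * math.sin(3 * t)
from_phasor = (I3 * cmath.exp(1j * 3 * t)).real
print(amplitude, round(phase_deg, 2), abs(time_domain - from_phasor) < 1e-12)
```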

Example 4.7
Let

v1 (t) = 3 sin(5t) V

¹ Notice that the proof does not hold for f1(t) and f2(t) with different frequencies ω1 and ω2. Therefore, the superposition principle is valid only for co-sinusoids having the same frequency ω.

[Figure 4.3: three branches meeting at a node; currents i1(t) = 3 cos(3t) and i2(t) = −4 sin(3t) flow in, and i3(t) flows out.]
Figure 4.3 A circuit node where three branches with currents i1(t), i2(t), and i3(t) meet.

[Figure 4.4: time plots of the three current waveforms.]
Figure 4.4 Signal waveforms of Example 4.6: (a) i1(t) and i2(t), and (b) i3(t) = i1(t) + i2(t).

and

v2(t) = 2 cos(5t − π/4) V

denote the two element voltages in the circuit shown in Figure 4.5. Calculate the amplitude of the third element voltage v3(t).

Solution Since the KVL equation for the loop indicates that

v3(t) = v1(t) + v2(t),

we apply the superposition principle, giving

V3 = V1 + V2 = −j3 + 2e^{−jπ/4} V.

Using a calculator we easily find that |V3| ≈ 4.635 V, which is the amplitude of voltage v3(t).

As the previous examples illustrate, we can write KVL and KCL circuit equations for co-sinusoidal signals in terms of the signal phasors. Specifically,

Σ_loop V_drop = Σ_loop V_rise

and

Σ_node I_in = Σ_node I_out,

where V_drop (V_rise) denotes a phasor voltage drop (rise) and I_in (I_out) denotes a phasor current flowing into (out of) a node.

[Figure 4.5: an elementary loop of three elements with voltages v1(t) = 3 sin(5t), v2(t) = 2 cos(5t − π/4), and v3(t).]
Figure 4.5 An elementary loop with three circuit elements. All elements carry co-sinusoidal voltages.

Derivative principle

(i) The derivative df/dt of co-sinusoid

f(t) = Re{F e^{jωt}}

is a co-sinusoid with phasor jωF.

(ii) The nth derivative dⁿf/dtⁿ is also a co-sinusoid with phasor (jω)ⁿ F.

Proof The claim of statement (i) is

df/dt = Re{jωF e^{jωt}}.

To prove this claim, we calculate² the derivative of co-sinusoid f(t) as

df/dt = (d/dt) Re{F e^{jωt}} = Re{F (d/dt) e^{jωt}} = Re{jωF e^{jωt}},

which indeed has the phasor jωF. Statement (ii) follows by induction from statement (i), because the nth derivative is the first derivative of the (n − 1)st derivative.

Example 4.8
Apply the superposition and derivative rules to find a particular solution of the ODE

df/dt + 4f(t) = 2 cos(4t).

² Note: (d/dt) Re{F e^{jωt}} = Re{F (d/dt) e^{jωt}} is justified because Re{F e^{jωt}} = (F e^{jωt} + F* e^{−jωt})/2 and Re{F (d/dt) e^{jωt}} = (F (d/dt) e^{jωt} + F* (d/dt) e^{−jωt})/2.

Solution Since the right-hand side of the equation is a co-sinusoid, the preceding phasor rules imply that the left-hand side can be a simple sum of co-sinusoids df/dt and 4f(t), with phasors jωF and 4F, respectively. Here, F is the phasor of a co-sinusoid f(t) with frequency ω = 4 rad/s. Thus, equating the phasors of the left- and right-hand sides of the equation, we get

j4F + 4F = 2,

from which it follows that

F = 2/(4 + j4) = 2/(4√2 e^{jπ/4}) = (1/(2√2)) e^{−jπ/4}.

The corresponding co-sinusoid

f(t) = (1/(2√2)) cos(4t − π/4)

is a particular solution of the ODE. (You easily can confirm this by substituting the result into the ODE.) This solution is also the steady-state component of the zero-state solution of the ODE. Notice that, by using phasors, we simplified the solution tremendously, requiring no trigonometric identities.

Example 4.9
Determine the particular solution of

d²y/dt² + 3 dy/dt + 2y = 5 sin(6t).

Solution Equating the phasors of both sides (using ω = 6 rad/s), we have

(j6)² Y + 3(j6)Y + 2Y = −j5.

So,

Y = −j5/(−34 + j18) ≈ 0.130∠117.9°,

giving the particular solution

y(t) = 0.130 cos(6t + 117.9°).
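The substitution used in Examples 4.8 and 4.9 always reduces to dividing the input phasor by the ODE's characteristic polynomial evaluated at jω. A hedged Python sketch of that recipe (our helper function, not the book's notation):

```python
import cmath
import math

def particular_phasor(coeffs, F_in, omega):
    """Phasor of the particular solution of
    sum_k coeffs[k] * d^k y/dt^k = Re{F_in e^(j*omega*t)}."""
    s = 1j * omega
    return F_in / sum(c * s**k for k, c in enumerate(coeffs))

# Example 4.9: y'' + 3y' + 2y = 5 sin(6t); the input phasor is -j5 at omega = 6
Y = particular_phasor([2, 3, 1], -5j, 6.0)
mag = round(abs(Y), 3)                        # about 0.130
ang = round(math.degrees(cmath.phase(Y)), 1)  # about 117.9 degrees
print(mag, ang)
```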
These examples illustrate how phasors can be used to find steady-state solutions
of linear constant-coefficient ODEs with co-sinusoidal inputs. But, phasors also can
be used to completely avoid having to deal with differential equations in the analysis
of dissipative LTI circuits with co-sinusoid inputs. The key is to establish phasor
V –I relations for inductors and capacitors in such circuits, as illustrated in the next
example.

Example 4.10
Consider an inductor with some co-sinusoidal current

i(t) = Re{I e^{jωt}}

and a voltage drop

v(t) = Re{V e^{jωt}}

in the direction of the current. (See Figure 4.6a.) Express the voltage phasor V in terms of the current phasor I.

Solution We can do this by replacing each co-sinusoid in the inductor v–i relation

v(t) = L di/dt

by its phasor. Since the phasor of di/dt is jωI, the result is

V = jωL I.

Similar V–I relations also can be established for capacitors and resistors embedded in dissipative LTI circuits. For a capacitor (see Figure 4.6b), the v–i relation

i(t) = C dv/dt

[Figure 4.6: (a) an inductor L with v(t) = L di/dt and phasor relation V = jωL I; (b) a capacitor C with i(t) = C dv/dt and V = I/(jωC); (c) a resistor R with v(t) = R i(t) and V = RI. In each panel, i(t) = Re{I e^{jωt}} and v(t) = Re{V e^{jωt}}.]
Figure 4.6 Elements with co-sinusoidal signals and their phasor V–I relations: (a) an inductor, (b) a capacitor, and (c) a resistor.

implies that

I = jωC V,  or  V = I/(jωC).

For a resistor (see Figure 4.6c), Ohm's law,

v(t) = R i(t),

implies that

V = RI.

In the next section, we will discuss the implications of these phasor V–I relations for the analysis of LTI circuits with co-sinusoid inputs.

4.1.3 Impedance and the phasor method

In the previous section, we learned that phasor V–I relations for inductors L, capacitors C, and resistors R carrying co-sinusoidal signals have the form

V = ZI,

with

Z ≡ jωL for inductors, 1/(jωC) for capacitors, and R for resistors,

where ω is the signal frequency. The parameter Z is known as impedance and is measured in units of ohms, since Z = V/I is a voltage-to-current ratio, just like ordinary resistance R = v(t)/i(t). Unlike resistance, however, impedance is, in general, a complex quantity. Its imaginary part is known as reactance. Inductors and capacitors have reactances ωL and −1/(ωC), respectively, but the reactance of a resistor is zero. The real part of the impedance is known as resistance. Inductors and capacitors have zero resistance—they are purely reactive.³
³ An alternative form of the phasor V–I relation is

I = YV,

where

Y ≡ 1/Z

is known as admittance and is measured in Siemens (S = Ω⁻¹). The real and imaginary parts of admittance are known as conductance and susceptance, respectively.
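The three impedance cases fold naturally into one small helper. A Python sketch (our own function, offered only as an illustration):

```python
def impedance(element, value, omega):
    """Impedance in ohms of a resistor R (ohms), inductor L (henries),
    or capacitor C (farads) at radian frequency omega (rad/s)."""
    if element == "R":
        return complex(value)            # purely resistive, zero reactance
    if element == "L":
        return 1j * omega * value        # reactance +omega*L
    if element == "C":
        return 1 / (1j * omega * value)  # reactance -1/(omega*C)
    raise ValueError("element must be 'R', 'L', or 'C'")

# a 2 H inductor and a 0.5 F capacitor at omega = 2 rad/s
ZL = impedance("L", 2.0, 2.0)   # j4 ohms
ZC = impedance("C", 0.5, 2.0)   # -j1 ohms
print(ZL, ZC)
```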

Using the phasor V–I relation

V = ZI,

and the phasor KVL and KCL equations

Σ_loop V_drop = Σ_loop V_rise

and

Σ_node I_in = Σ_node I_out

discussed in the previous section, it is possible to express all of the mathematical constraints (KVL, KCL, and voltage–current relations for the elements) for LTI circuits (with co-sinusoidal inputs) in an algebraic form. The next example demonstrates this possibility.

Example 4.11
In the circuit shown in Figure 4.7a, we can write

2 cos(2t) = i1(t) + i2(t),

which is the KCL equation for the top node,

2 i2(t) + (1/2) di2/dt = vc(t),

the KVL equation for the right loop, and

i1(t) = (1/2) dvc/dt,

giving us three coupled differential equations in three unknowns. Since the source term in this set of equations is a co-sinusoid, 2 cos(2t), the corresponding phasor equations—which we obtain using ω = 2 rad/s (the source frequency) and the superposition and derivative principles—are

2 = I1 + I2,
Vc = 2I2 + jI2,

[Figure 4.7: (a) a current source 2 cos(2t) A driving a 2 Ω resistor, a ½ F capacitor with voltage vc(t) and current i1(t), and a ½ H inductor with current i2(t); (b) the phasor circuit: 2 A source with 2 Ω, −j Ω, and j Ω impedances.]
Figure 4.7 (a) An RLC circuit with a cosine input, and (b) the equivalent phasor circuit.

and

I1 = jVc,

respectively.
Notice that we could have obtained the phasor equations above directly from the phasor equivalent circuit shown in Figure 4.7b, without ever writing the differential equations, by applying phasor KCL and KVL and the V–I relations pertinent to inductors, capacitors, and resistors. In Figure 4.7b each element of Figure 4.7a has been replaced by the corresponding impedance calculated at the source frequency ω = 2 rad/s. For example, the ½ H inductor is replaced by j(2 rad/s)(½ H) = j1 Ω, and each co-sinusoid by the corresponding phasor. The solution of the phasor equations (obtained either way) gives

I1 = 2 + j1 A = √5 e^{j0.464} A,
I2 = −j A = 1 e^{−jπ/2} A,

and

Vc = −jI1 = √5 e^{−j1.107} V,

so that

i1(t) = √5 cos(2t + 0.464) A,
i2(t) = 1 cos(2t − π/2) A,

and

vc(t) = √5 cos(2t − 1.107) V

are the steady-state co-sinusoidal currents and capacitor voltage in the original circuit. This can be confirmed by substituting these current expressions into the preceding set of differential equations.
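The three phasor equations of Example 4.11 are ordinary complex linear equations, so they can be solved by substitution with Python's built-in complex type (a numerical cross-check, mirroring the example's variable names):

```python
import cmath

# Equations for Figure 4.7b:  2 = I1 + I2,  Vc = (2 + j)I2,  I1 = j*Vc.
# Substituting the last two into the first gives 2 = (j(2 + j) + 1)I2 = 2j*I2.
I2 = 2 / 2j
Vc = (2 + 1j) * I2
I1 = 1j * Vc

assert I1 + I2 == 2            # KCL at the top node is satisfied
print(I1, abs(Vc), round(cmath.phase(Vc), 3))   # 2 + j1, sqrt(5), -1.107 rad
```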

In Example 4.11 we demonstrated the basic procedure for calculating the steady-state response of dissipative LTI circuits to co-sinusoidal inputs. However, we no longer will need to write the differential equations. A step-by-step description of the recommended procedure, called the phasor method, is as follows:

(1) Construct an equivalent phasor circuit by replacing all inductors L and capacitors C with their impedances jωL and 1/(jωC), calculated at the source frequency ω, and replacing all the signals (input signals as well as unknown responses) with their phasors.
(2) Construct phasor KVL and KCL equations for the equivalent circuit, using the phasor V–I relation V = ZI.

(3) Solve the equations for the unknown phasors.


(4) Translate each phasor to its co-sinusoid at the source frequency ω.
This method will work in all cases when all the sources in the circuit have the same
frequency. (The procedure for sources with different frequencies will be described in
Chapter 5.)
Because the voltage–current relations v(t) = Ri(t) and V = ZI have the same
form, step 2 of the phasor method produces a set of algebraic equations (instead
of differential equations), just as in resistive circuit analysis. As a consequence,
phasor circuits inherit all of the properties of resistive DC circuits, so that the various
analysis strategies discussed in Chapter 2 can be applied: series and parallel combina-
tions, voltage and current division, source transformations, the superposition method,
Thevenin and Norton equivalents, and the node-voltage and loop-current methods.
The next section consists of a sequence of examples where we use the phasor method
to solve steady-state circuit problems. The section is titled “Sinusoidal Steady-State
Analysis” because the term sinusoidal steady state commonly is used to refer to the
steady-state response of dissipative LTI systems with co-sinusoidal inputs (as already
indicated in the introduction to this chapter).

4.2 Sinusoidal Steady-State Analysis

4.2.1 Impedance combinations and voltage and current division

Impedance combinations: Impedances in series or parallel can be combined to obtain equivalent impedances. Series and parallel equivalents of impedances Z1 and Z2 are

Zs = Z1 + Z2 (series equivalent)

and

Zp = Z1 Z2/(Z1 + Z2) (parallel equivalent).

For instance, for Z1 = 6 Ω and Z2 = j8 Ω,

Zs = 6 + j8 Ω

and

Zp = 6(j8)/(6 + j8) = j48/(6 + j8) = j48(6 − j8)/100 = 3.84 + j2.88 Ω.
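The two combination rules are one-liners in code; this Python sketch (our helper names) reproduces the Z1 = 6 Ω, Z2 = j8 Ω numbers worked above:

```python
def series(Z1, Z2):
    """Series equivalent of two impedances."""
    return Z1 + Z2

def parallel(Z1, Z2):
    """Parallel equivalent of two impedances."""
    return Z1 * Z2 / (Z1 + Z2)

Z1, Z2 = 6 + 0j, 8j
Zs = series(Z1, Z2)      # 6 + j8 ohms
Zp = parallel(Z1, Z2)    # 3.84 + j2.88 ohms
print(Zs, Zp)
```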

[Figure 4.8: (a) source 2 cos(2t) V driving a 2 H inductor in parallel with a 4 Ω resistor, source current i(t); (b) the phasor circuit: 2 V source with j4 Ω and 4 Ω impedances, current I.]
Figure 4.8 (a) An RL circuit with a parallel inductor and resistor, and (b) the equivalent phasor circuit.

Example 4.12
In the circuit shown in Figure 4.8a, the source voltage

v(t) = 2 cos(2t) V

has phasor

V = 2 V.

Determine the steady-state current i(t) in the circuit.

Solution Figure 4.8b shows the equivalent phasor circuit with impedances

Z1 = j(2 rad/s)(2 H) = j4 Ω

and

Z2 = 4 Ω

for the inductor and resistor, respectively. The parallel equivalent of Z1 and Z2 is

Zp = Z1 Z2/(Z1 + Z2) = (j4)(4)/(j4 + 4) = j4/(1 + j1).

Therefore,

I = V/Zp = 2(1 + j1)/(j4) = (1 − j)/2 = (1/√2) e^{−jπ/4} A

and

i(t) = (1/√2) cos(2t − π/4) A.

[Figure 4.9: (a) impedances Z1 and Z2 in series across a source V, with voltages V1 and V2; (b) impedances Z1 and Z2 in parallel sharing a source current I, with branch currents I1 and I2.]
Figure 4.9 (a) Phasor voltage division, and (b) current division.

Voltage and current division: The equations for voltage and current division in phasor circuits (see Figure 4.9) have the forms

V1 = (Z1/(Z1 + Z2)) V,  V2 = (Z2/(Z1 + Z2)) V,

and

I1 = (Z2/(Z1 + Z2)) I,  I2 = (Z1/(Z1 + Z2)) I,

respectively.
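Both division rules are single expressions in code. A small Python sketch (our helper name), applied to a series −j Ω capacitor and 1 Ω resistor driven by a 1 V phasor:

```python
import cmath
import math

def voltage_divide(V, Z_self, Z_other):
    """Phasor voltage across Z_self when Z_self and Z_other are in series
    across a source with phasor V."""
    return V * Z_self / (Z_self + Z_other)

# Voltage across the 1-ohm resistor in the series RC pair
V2 = voltage_divide(1.0, 1.0, -1j)
print(round(abs(V2), 4), round(math.degrees(cmath.phase(V2)), 1))  # 0.7071 45.0
```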

Example 4.13
In the circuit shown in Figure 4.10a, calculate the steady-state voltage v(t).

Solution In the equivalent phasor circuit shown in Figure 4.10b,

1/(j(1 rad/s)(1 F)) = −j Ω

denotes the impedance of the 1 F capacitor. Applying voltage division in the phasor circuit,

V = (1/(−j + 1))(1 V) = 1/(1 − j) V = (1/√2) e^{jπ/4} V.

[Figure 4.10: (a) source cos(t) V in series with a 1 F capacitor across a 1 Ω resistor with voltage v(t); (b) the phasor circuit: 1 V source with −j Ω and 1 Ω impedances, phasor V.]
Figure 4.10 (a) An RC circuit in sinusoidal steady-state, and (b) the equivalent phasor circuit.

Hence,

v(t) = (1/√2) cos(t + π/4) V.

Example 4.14
The phasor equivalent of Figure 4.11a is shown in Figure 4.11b. Determine the steady-state current i(t).

Solution The parallel equivalent impedance of the inductor and capacitor in Figure 4.11b is

Zp = (j8)(−j4)/(j8 + (−j4)) = 32/(j4) = −j8 Ω.

Therefore, current division gives

I = 2e^{−jπ/3} (−j8)/(8 − j8) = (2∠−60°)(1∠−90°)/(√2∠−45°) = (2∠−60°)((1/√2)∠−45°) = √2∠−105° A.

So,

i(t) = √2 cos(4t − 105°) A.

[Figure 4.11: (a) current source 2 cos(4t − π/3) A driving an 8 Ω resistor, a 1/16 F capacitor, and a 2 H inductor in parallel; (b) the phasor circuit: 2e^{−jπ/3} A source with 8 Ω, j8 Ω, and −j4 Ω impedances, current I.]
Figure 4.11 (a) A circuit with a co-sinusoidal current source, and (b) the equivalent phasor circuit.

4.2.2 Source transformations and superposition method

Source transformations: Source transformations can be used in phasor analysis just as in resistive circuit analysis. The phasor networks shown in Figure 4.12 are equivalent when

Vs = Zs Is  ⇔  Is = Vs/Zs.

[Figure 4.12: (a) voltage source Vs in series with impedance Zs at terminals a–b; (b) current source Is in parallel with Zs at the same terminals, both with terminal voltage V and current I.]
Figure 4.12 Equivalent sinusoidal steady-state networks for Vs = Zs Is.

Example 4.15
Find an expression for phasor V in Figure 4.13a in terms of source phasors Vs and Is.

Solution In Figures 4.13b through 4.13e we demonstrate the simplification of the given phasor network via source transformations and element combinations. The final network is the Thevenin equivalent.⁴ Clearly,

V = (√2∠−45°) Is + ((1/√2)∠−45°) Vs.

[Figure 4.13: a five-panel sequence (a)–(e) reducing a network with sources Vs and Is and impedances 2 Ω, 4 Ω, and −j2 Ω to its Thevenin equivalent: source (1 − j)[Is + Vs/2] in series with 5 − j Ω.]
Figure 4.13 Network simplification with source transformations (a→b, c→d) and element combinations (b→c, d→e). Network (e) is the Thevenin equivalent of network (a).

⁴ With no loss of generality, the terminal voltage phasor of any linear network in sinusoidal steady-state is V = VT − ZT I, where I is the terminal current and VT is the open-circuit voltage of the network. Hence, a linear network in sinusoidal steady-state can be represented by its Thevenin equivalent with a Thevenin voltage VT and impedance ZT = VT/IN, where IN is the short-circuit current phasor of the network (in exact analogy with resistive networks).

Superposition method: In Example 4.15, the voltage phasor V was shown to


be a weighted superposition of independent source phasors Vs and Is in the circuit.
Notice that the weighting constants are complex quantities.
Using the superposition method, we can express any signal phasor as a weighted
superposition of all the independent source phasors in any linear circuit. The super-
posed terms represent individual contributions of source elements to the signal phasor.
We determine the contribution of each source after suppressing all other sources in
the circuit (shorting the voltage sources and opening the current sources).

Example 4.16
Determine the Thevenin equivalent of the network shown in Figure 4.14a,
using the superposition method.
Solution Since the network terminals are open, the source current Is flows
down the j 4  inductor, generating a voltage drop of j 4Is from top to
bottom of the element. Therefore, when the voltage source is suppressed
(i.e., replaced by a short), the output voltage of the network is j 4Is . When
the current source is suppressed, the inductor carries no current and the
output voltage is simply Vs . Superposition of these contributions yields

V = Vs + j 4Is ,

which is the Thevenin voltage of the network. The Thevenin impedance of the network is the equivalent network impedance 2 + j4 Ω when both sources are suppressed. The Thevenin equivalent is shown in Figure 4.14b.

[Figure 4.14: (a) a network with voltage source Vs, current source Is, and impedances 2 Ω, j4 Ω, and −j2 Ω; (b) its Thevenin equivalent: source Vs + j4 Is in series with 2 + j4 Ω.]
Figure 4.14 (a) A phasor network with two independent sources and (b) its Thevenin equivalent.

4.2.3 Node-voltage and loop-current methods


To determine the Thevenin and Norton equivalents of the network shown in Figure 4.15a,
we will calculate the open-circuit voltage and short-circuit current in Examples 4.17
and 4.18 below, using the node-voltage and loop-current methods.

[Figure 4.15: (a) a network with a 2 V source, a j1 A source, and impedances 2 Ω, j2 Ω, 1 Ω, and −j2 Ω, marked with node voltages V1 and VT; (b) the same network terminated with an external short, marked with loop currents I1 and IN; (c) the Thevenin equivalent: 2 + j2 V source in series with 3 − j2 Ω.]
Figure 4.15 (a) A phasor network with an open-circuit voltage phasor VT, (b) the same network terminated with an external short carrying a phasor current IN, and (c) the Thevenin equivalent of the same network.

Example 4.17
The network shown in Figure 4.15a already has been marked in preparation for node-voltage analysis. Determine VT and V1, using the node-voltage method.

Solution The KCL equation for the super-node on the left is

(V1 + 2)/2 + (V1 − VT)/(j2) = j1,

while, for the remaining node, the KCL equation is

(VT − V1)/(j2) + VT/(−j2) = 0.

Note that the second equation implies that V1 = 0. Hence, the first equation simplifies to

1 − VT/(j2) = j1  ⇒  VT = 2 + j2 V.

Example 4.18
After termination by an external short, the network of Figure 4.15a appears as shown in Figure 4.15b. The diagram in Figure 4.15b has been marked in preparation for loop-current analysis. Determine the Norton current IN for the network shown in Figure 4.15a by applying the loop-current method to Figure 4.15b.

Solution The KVL equation for the dashed super-loop in Figure 4.15b is

2 + j2(I1 + j) + (−j2)(I1 + j − IN) + 2I1 = 0,

while, for the remaining loop, the KVL equation is

1 IN + (−j2)(IN − (I1 + j)) = 0.

The first equation yields

I1 = −1 − jIN,

while the second one simplifies to

IN(1 − j2) = 2(1 − jI1).

Eliminating I1, we obtain

IN(1 − j2) = 2(1 + j1 − IN)  ⇒  IN = (2 + j2)/(3 − j2) A.

Example 4.19
Determine the Thevenin equivalent of the network shown in Figure 4.15a.

Solution From the previous Examples 4.17 and 4.18, we know that

VT = 2 + j2 V  and  IN = (2 + j2)/(3 − j2) A.

Thus, the Thevenin impedance is

ZT = VT/IN = 3 − j2 Ω.

The Thevenin equivalent network is shown in Figure 4.15c.
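The last division is easy to verify with complex arithmetic; a quick Python check of ZT = VT/IN, using the phasors found in Examples 4.17 and 4.18:

```python
VT = 2 + 2j                  # open-circuit (Thevenin) voltage phasor, volts
IN = (2 + 2j) / (3 - 2j)     # short-circuit (Norton) current phasor, amperes
ZT = VT / IN                 # Thevenin impedance, ohms

print(ZT)                    # should come out as 3 - j2 ohms
```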

4.3 Average and Available Power

For circuits in sinusoidal steady-state, the absorbed power of a circuit element

p(t) = v(t)i(t)

is necessarily a function of time and is referred to as instantaneous power. The net


power absorbed by each element corresponds to the average value of instantaneous
power p(t). For instance, a 60-W lightbulb absorbs 60 W of net power, while the
instantaneous power fluctuates between 0 and 120 W. A 60-W lightbulb also shines
brighter than a 45-W lightbulb because, on the average, it converts more joules per
second into light than does the 45-W bulb.

The net absorbed power—that is, the average value of p(t) = v(t)i(t) over one oscillation period T = 2π/ω for signals v(t) and i(t)—can be calculated as

P = (1/T) ∫₀ᵀ v(t) i(t) dt = (1/T) ∫₀ᵀ |V| cos(ωt + θ) × |I| cos(ωt + φ) dt.

This integral can be evaluated using quite a lot of algebra, but it turns out that P can be computed far more easily, directly from the signal phasors V and I, as shown next.

4.3.1 Average power

We next derive the phasor formula for the average absorbed power P by expressing
the instantaneous power p(t) = v(t)i(t) as the sum of a constant term and a zero-
average time-varying term. Since v(t) and i(t) are co-sinusoids, the instantaneous
power works out to be

p(t) = v(t)i(t) = Re{V e^(jωt)} Re{I e^(jωt)}

= [(V e^(jωt) + V∗ e^(−jωt))/2] [(I e^(jωt) + I∗ e^(−jωt))/2]

= [V I e^(j2ωt) + V I∗ + V∗ I + V∗ I∗ e^(−j2ωt)]/4

= (V I∗ + V∗ I)/4 + [V I e^(j2ωt) + V∗ I∗ e^(−j2ωt)]/4

= (1/2) Re{V I∗} + (1/2) Re{V I e^(j2ωt)},

where we have made use of the identity Re{C} = (C + C∗)/2 exactly four times. The last line expresses p(t) as a superposition of a constant (the first term) and a co-sinusoid of frequency 2ω (the second term). Because the average value (integrated across the interval from 0 to T ) of the co-sinusoid is zero, the average value of p(t) is just the first term

P = (1/2) Re{V I∗},

which is the net power absorbed by an element having voltage and current phasors V
and I .
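The claim that the 2ω term averages to zero, and hence that P = (1/2) Re{V I∗}, can be checked numerically. A short Python sketch (the phasor values are our arbitrary test choice):

```python
import cmath, math

# Check P = (1/2)Re{V I*} by directly averaging p(t) = v(t) i(t) over one
# period with a Riemann sum. Phasor values are arbitrary test choices.
V = 3 * cmath.exp(0.4j)      # voltage phasor
I = 2 * cmath.exp(-1.1j)     # current phasor
w = 5.0                      # rad/s
T = 2 * math.pi / w
N = 10_000
dt = T / N

P_avg = sum((V * cmath.exp(1j * w * n * dt)).real *
            (I * cmath.exp(1j * w * n * dt)).real
            for n in range(N)) * dt / T
P_phasor = 0.5 * (V * I.conjugate()).real

print(P_avg, P_phasor)       # the two agree to within roundoff
```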
For a resistor,

V = RI
so that

P = (1/2) Re{(RI)I∗} = R|I|²/2 = |V|²/(2R),
where |V | and |I | are the amplitudes of the resistor voltage and current. The same
result also can be expressed as
P = R Irms² = Vrms²/R,

where Irms ≡ |I|/√2 and Vrms ≡ |V|/√2 are known as the rms, or effective, amplitudes.

Example 4.20
A 60-W lightbulb is designed to absorb 60 W of net power when utilized
with Vrms = 120 V available from a wall outlet. Estimate the resistance of a
60-W lightbulb. What is the peak value of the sinusoidal voltage waveform
at a wall outlet?
Solution Using one of the preceding formulas for P , we find that

R = Vrms²/P = 120²/60 = 240 Ω.

The peak value of the voltage waveform at a wall outlet is approximately

|V | = √2 Vrms = 169.7 V.

(The power company does not always deliver exactly Vrms = 120 V.)
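The arithmetic of Example 4.20 can be reproduced in a couple of lines of Python (a sketch of the same computation):

```python
import math

# Example 4.20: bulb resistance from rated power and rms voltage, and the
# corresponding peak of the outlet voltage waveform.
P = 60.0                     # rated net power, watts
V_rms = 120.0                # rms outlet voltage, volts
R = V_rms**2 / P             # bulb resistance, ohms
V_peak = math.sqrt(2) * V_rms

print(R, V_peak)             # 240.0 ohms, about 169.7 V
```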

Example 4.21
A signal

f (t) = A cos(ωt + θ)

is applied to a 1 Ω resistor. What is the average power delivered by f (t) to the resistor?
Solution First, we note that, since R = 1 Ω,

v(t) = i(t) = f (t)

in this problem; that is, f (t) stands for both the element voltage v(t) and element current i(t). The corresponding phasors are all equal—V = I = F = Ae^(jθ)—and the average power is

P = |V |²/(2R) = (1/2)A².

This is an important result that should be committed to memory: Average


power from a co-sinusoid to a 1 ohm resistor is half the square of the signal
amplitude.

For ideal inductors and capacitors,

V = jXI,

where the reactance X (either ωL or −1/(ωC)) is real; therefore,

P = (1/2) Re{(jXI)I∗} = Re{jX|I|²/2} = 0.
Capacitors and inductors absorb no net power, because they return their instantaneous
absorbed power back to the circuit.

Example 4.22
Determine the net power absorbed by each element in the circuit shown in
Figure 4.16a.
Solution Clearly, the inductor will absorb no net power. To calculate P
for the resistor and the voltage source, we first calculate the phasor current
I for the loop. From Figure 4.16b,

I = (1 V)/(j4 Ω + 3 Ω) = (1/5)∠−53.13° A.

Therefore, for the resistor,

P = |I|²R/2 = ((1/5)² × 3)/2 = 3/50 = 0.06 W.

Energy conservation requires that the net power absorbed by the source must
be P = −0.06 W. Indeed, since the phasor voltage drop for the source in

Figure 4.16 (a) A circuit with a single energy source (1 cos(2t) V source, 2 H inductor, 3 Ω resistor), and (b) the equivalent phasor circuit (1 V source, j4 Ω, 3 Ω).

the direction of I is −1 V, the average absorbed power of the source is

P = (1/2) Re{V I∗} = (1/2) Re{(−1 V)((1/(3 + j4)) A)∗}
= −(1/2) Re{(3 + j4)/25} = −(1/2)(3/25) = −0.06 W.
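The power bookkeeping of Example 4.22 is easy to verify numerically; the sketch below (variable names are ours) recomputes the three net powers and confirms that they sum to zero:

```python
# Example 4.22, numerically: net powers absorbed by the resistor, the
# source, and the inductor in the phasor circuit of Figure 4.16b.
I = 1 / (3 + 4j)                            # loop-current phasor, amperes
P_R = 0.5 * abs(I)**2 * 3                   # resistor: (1/2)|I|^2 R
P_src = 0.5 * (-1 * I.conjugate()).real     # source drop along I is -1 V
P_L = 0.5 * (4j * I * I.conjugate()).real   # inductor: V = jX I with X = 4

print(P_R, P_src, P_L)   # approximately 0.06, -0.06, and 0.0 watts
```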

4.3.2 Available power and maximum power transfer


What is the available net power that can be delivered from a linear network, oper-
ating in sinusoidal steady state, to an external load? To answer this question, we will
examine the phasor circuit shown in Figure 4.17, where the Thevenin network on the
left represents an arbitrary linear network operating in sinusoidal steady state. Our
approach will be to find the value of the average power that is delivered to the load
and then to find the value of the load that will maximize this average power.
In Figure 4.17,

VL = VT ZL/(ZT + ZL)

and

IL = VT/(ZT + ZL).

Therefore, the net power delivered to (and absorbed by) the load ZL is

PL = (1/2) Re{VL IL∗} = (1/2) Re{|VT|² ZL/|ZT + ZL|²} = |VT|² RL/(2|ZT + ZL|²),

where RL ≡ Re{ZL} is the resistive part of the load impedance ZL.


Clearly, for a given network (i.e., for fixed VT and ZT ), the value of PL depends
on both the load resistance RL and the load reactance XL ≡ Im{ZL }. We wish to
determine the maximum possible value for PL .

Figure 4.17 A linear phasor network (VT in series with ZT = RT + jXT) terminated by an external load ZL = RL + jXL carrying current IL and sustaining voltage VL.



Substituting ZT = RT + jXT and ZL = RL + jXL into the formula for PL yields

PL = |VT|² RL/(2|(RT + RL) + j(XT + XL)|²).

For any fixed value of RL , this expression is maximized when

XL = −XT ,

because the denominator of PL is then reduced to its smallest possible value, namely,
2(RT + RL )2 . Choosing XL = −XT , the net power formula becomes

PL = |VT|² RL/(2(RT + RL)²),

which is maximized when

RL = RT ,

as we learned in Chapter 2.
In summary then, an external load with resistance RL = RT and reactance XL =
−XT , that is, with an impedance

ZL = RT − j XT = ZT∗ ,

will extract the full available power from a network having a Thevenin impedance ZT. We obtain the formula for the available power of a network by evaluating the preceding formula for PL, with RL = RT. The result is

Pa = |VT|²/(8RT).

So, the available power of a network depends on the magnitude of its Thevenin voltage phasor VT and on only the resistive (real) part RT of its Thevenin impedance ZT = RT + jXT. The available power will be delivered to any load having an impedance ZL = ZT∗. Such loads are known as matched loads.
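A quick numerical experiment supports the matched-load result: sweep trial load impedances and compare each delivered power against Pa = |VT|²/(8RT). The Thevenin values below are those of the following Example 4.23; the grid of trial loads is our own choice:

```python
# Matched-load check: no trial load extracts more than Pa = |VT|^2/(8 RT).
V_T = 4 + 3j
Z_T = 4 + 3j
P_a = abs(V_T)**2 / (8 * Z_T.real)

def P_load(Z_L):
    # net power delivered to Z_L: (1/2)|VT/(ZT + ZL)|^2 * Re{ZL}
    return 0.5 * abs(V_T / (Z_T + Z_L))**2 * Z_L.real

trial = [complex(r, x) for r in range(1, 8) for x in range(-6, 1)]
best = max(P_load(Z) for Z in trial)

print(P_a, P_load(Z_T.conjugate()), best)   # all equal 0.78125 W
```

The grid includes the matched value ZL = 4 − j3, and that is exactly where the maximum occurs.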

Example 4.23
Determine the available power of the network shown in Figure 4.18.
Solution We first use the superposition method to determine the open-
circuit voltage V as

V = 2 V + (2 + j 3) V = 4 + j 3 V,

Figure 4.18 A phasor network (a 2 V source and a 1 A source with impedances 2 Ω, j3 Ω, and 2 Ω).

where the first term is the contribution of the voltage source and the second
term is the contribution of the current source. Thus, for this network,

|VT |2 = |4 + j 3|2 = 25 V2 .

Next, we note that

ZT = 4 + j3 Ω

by calculating the equivalent impedance of the network after source suppression. Therefore, for this network, RT = 4 Ω and the available power is

Pa = |VT|²/(8RT) = 25/(8 × 4) = 0.78125 W.

This amount of average power will be transferred to any matched load having an impedance ZL = 4 − j3 Ω.

Example 4.24
What load ZL is matched to the network shown in Figure 4.19 and what is
the available power of the network?
Solution ZL = ZT∗ , where ZT is the Thevenin impedance of the network
shown in Figure 4.19. Note that, for all possible loads,

Ix = (1 − 2Ix)/1 ⇒ Ix = (1/3) A.

Figure 4.19 A phasor network with a current-controlled voltage source 2Ix (other elements: a 1 V source, 1 Ω, −j Ω, and 1 Ω).



Hence, the open-circuit voltage at the network terminals is VT = 2Ix = 2/3 V. The short-circuit current is

IN = 2Ix/(1 − j) = (2/3)/(1 − j) A.

So, for this network

ZT = VT /IN = 1 − j Ω,

and the matched load impedance is ZL = ZT∗ = 1 + j Ω. The available power of the network is

Pa = |VT|²/(8RT) = (2/3)²/(8 · 1) = 1/18 W.
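The steps of Example 4.24 can be retraced numerically (a sketch; variable names are ours):

```python
# Example 4.24, numerically: Thevenin quantities of the network with the
# current-controlled voltage source, and its available power.
I_x = 1 / 3                  # amperes, from Ix = (1 - 2 Ix)/1
V_T = 2 * I_x                # open-circuit voltage, volts
I_N = 2 * I_x / (1 - 1j)     # short-circuit current, amperes
Z_T = V_T / I_N              # Thevenin impedance, ohms
P_a = abs(V_T)**2 / (8 * Z_T.real)

print(Z_T, P_a)              # (1-1j) ohms and 1/18 W
```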

4.4 Resonance

Consider the source-free circuits shown in Figures 4.20a and 4.20b. We next examine
whether the signals marked v(t) and i(t) in these circuits can be co-sinusoidal wave-
forms, despite the absence of source elements.
For the RC circuit shown in Figure 4.20a, the phasor KVL equation, expressed
in terms of the phasor I of a co-sinusoidal i(t), is
(R + 1/(jωC)) I = 0.

Because R + 1/(jωC) cannot equal zero, the equation requires that

I = 0.

Hence, in the RC circuit shown in Figure 4.20a we cannot have co-sinusoidal i(t)
and v(t).
By contrast, the phasor KVL equation for the LC circuit shown in Figure 4.20b,
(jωL + 1/(jωC)) I = 0,

Figure 4.20 (a) A source-free RC circuit, and (b) a source-free LC circuit.



can be satisfied by any I , so long as


jωL + 1/(jωC) = 0 ⇒ ω = 1/√(LC) ≡ ωo.

Thus, in the circuit of Figure 4.20b, co-sinusoidal signals

i(t) = Re{I e^(jωo t)} = |I| cos(ωo t + θ)

and

v(t) = Re{(I/(jωo C)) e^(jωo t)} = (|I|/(ωo C)) sin(ωo t + θ)
are possible with arbitrary |I| and θ = ∠I. The oscillation frequency

ωo = 1/√(LC)

is known as the resonant frequency of the circuit. The phenomenon itself (i.e., the possible existence of steady-state co-sinusoidal oscillations in a source-free circuit) is known as resonance.
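For concreteness, the resonant frequency and the vanishing of the series LC impedance at ω = ωo can be checked numerically; the L and C values below are our example choice:

```python
import math

# Resonant frequency w0 = 1/sqrt(LC), and a check that the series LC
# impedance vanishes there. Element values are an arbitrary example.
L, C = 1e-3, 1e-6            # 1 mH and 1 uF
w0 = 1 / math.sqrt(L * C)    # about 3.16e4 rad/s

Z_series = 1j * w0 * L + 1 / (1j * w0 * C)
print(w0, abs(Z_series))     # |Z| is zero to within roundoff
```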
Resonance is possible in the LC circuit of Figure 4.20b because the circuit is non-
dissipative. As we learned in Section 3.4, circuits with no dissipative elements (i.e.,
resistors) can exhibit non-transient zero-input response. Resonance in the foregoing
LC circuit is an example of such behavior. The inclusion of a series or parallel resistor
in the circuit, added in Figure 4.21, introduces dissipation and spoils the possibility of
source-free oscillations. We can see this in Figure 4.21a by writing the KVL equation

Zs I = 0,

where the series equivalent impedance is


Zs = R + jωL + 1/(jωC) = R + j(ωL − 1/(ωC)).
If R ≠ 0, then the KVL equation can be satisfied only with I = 0. Likewise, in Figure 4.21b, the KCL equation is

V/R + V/(jωL) + V/(1/(jωC)) ≡ V/Zp = 0,

Figure 4.21 (a) A source-free series RLC circuit, and (b) a source-free parallel RLC circuit.

where the parallel equivalent impedance is


Zp = 1/(1/R + j(ωC − 1/(ωL))).

If R is not infinite (creating an open circuit), then the denominator of Zp cannot be


zero and, therefore, Zp cannot be infinite. Thus, the KCL equation can be satisfied
only by having V = 0. In summary, the presence of a resistor prevents both circuits
in Figure 4.21 from supporting unforced co-sinusoidal signals.
Although series and parallel RLC networks cannot exhibit undamped resonant oscillations, the resonant frequency ωo = 1/√(LC) remains a significant system parameter in such networks. At frequency ω = ωo the equivalent impedances Zs and Zp of the networks (see Figure 4.22) reduce to Zs = Zp = R. Hence, the series equivalent impedance of L and C in Figure 4.22a is an effective short circuit at ω = ωo. Likewise, the parallel equivalent of L and C, shown in Figure 4.22b, is an effective open circuit. Thus, the current response of the series RLC network to an external voltage source, and also the voltage response of the parallel RLC network to an external current source, are maximized at the resonant frequency ωo. These behaviors of RLC networks are known as series and parallel resonance, respectively, and are exploited in linear filter circuits (as we will see in Chapters 5 and 11) to obtain frequency-sensitive system response.

Figure 4.22 (a) Series RLC network and its equivalent impedance Zs = R + j(ωL − 1/(ωC)), and (b) parallel RLC network and its equivalent impedance Zp = 1/(1/R + j(ωC − 1/(ωL))).

Example 4.25
In the series RLC circuit shown in Figure 4.23a, with an external voltage
input

v(t) = cos(ωt),

determine the loop current and all of the element voltages if


ω = ωo = 1/√(LC).

Figure 4.23 (a) A series RLC network with a cosine input v(t) = cos(ωt) V, and (b) the equivalent phasor network.

Solution Since the series combination of L and C in the circuit is an effective short at ω = ωo, the resistor voltage phasor VR equals the source voltage phasor V = 1 V; hence,

I = (1/R) A,

VL = (jωo L)I = j(L/(√(LC) R)) = j(1/R)√(L/C) V,

and

VC = (1/(jωo C))I = −j(√(LC)/(CR)) = −j(1/R)√(L/C) V.

Notice that VL + VC = 0, confirming that the series combination of L and C is an effective short at resonance. Translating these phasors to co-sinusoids, we obtain

i(t) = (1/R) cos(ωo t) A,

vR (t) = cos(ωo t) V,

vL (t) = (1/R)√(L/C) cos(ωo t + π/2) V,

vC (t) = −vL (t).

Figure 4.24 shows plots of the voltage waveforms for the special case with R = 0.5 Ω, L = 1 H, C = 1 F, and ω = ωo = 1 rad/s. Although the amplitudes of vL (t) and vC (t) are greater than the amplitude of the system input v(t), KVL is not violated around the loop, because vL (t) + vC (t) = 0 (effective short) and v(t) = vR (t). The large amplitude response of vL (t) and vC (t) is a consequence of the behavior of the series RLC network at resonance and the relatively small value chosen for R.
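The phasor arithmetic of Example 4.25 (with the special-case values R = 0.5 Ω, L = 1 H, C = 1 F) can be reproduced as follows (a sketch):

```python
import math

# Example 4.25 phasors at resonance, for R = 0.5 ohm, L = 1 H, C = 1 F
# (so w0 = 1 rad/s) and source phasor V = 1 V.
R, L, C = 0.5, 1.0, 1.0
w0 = 1 / math.sqrt(L * C)
I = 1 / R                     # series L-C is an effective short at w0
V_R = R * I                   # resistor voltage phasor
V_L = 1j * w0 * L * I         # inductor voltage phasor
V_C = I / (1j * w0 * C)       # capacitor voltage phasor

print(V_R, V_L, V_C)          # 1 V, 2j V, -2j V; note V_L + V_C = 0
```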

Figure 4.24 (a) Voltage waveforms v(t) = vR (t) = cos(t) V, and (b) vL (t) = −vC (t) = 2 cos(t + π/2) V for the resonant system examined in Example 4.25, for the special case with R = 0.5 Ω, L = 1 H, C = 1 F, and ω = ωo = 1 rad/s. Notice that the amplitudes of the inductor and capacitor signals in (b) are larger than the amplitude of the input signal v(t) in (a).

EXERCISES

4.1 Determine the phasor F of the following co-sinusoidal functions f (t):


(a) f (t) = 2 cos(2t + π/3).
(b) f (t) = A sin(ωt).
(c) f (t) = −5 sin(πt).

4.2 Find the cosine function f (t), with frequency ω = 2 rad/s, corresponding to the following phasors:
(a) F = j2.
(b) F = 3e^(−jπ/6).
(c) F = j2 + 3e^(−jπ/6).

4.3 Use the phasor method to determine the amplitude and phase shift (in rad)
of the following signals when written as cosines:
(a) f (t) = 3 cos(4t) − 4 sin(4t).
(b) g(t) = 2(cos(ωt) + cos(ωt + π/4)).

4.4 A circuit component is conducting a current i(t) = 2 cos(2πt + π/3) A, and its impedance is Z = (1 + j) Ω. Plot i(t) and the voltage drop v(t) in the current direction as a function of time for 0 ≤ t ≤ 2 s.

4.5 (a) Calculate the series equivalent impedance of the following network for
ω = 1 rad/s in rectangular and polar forms and determine the steady-
state current i(t), given that v(t) = 2 cos(t) V:


[circuit diagram: a source v(t) driving current i(t), with a 3 H inductor whose voltage is vL (t)]

(b) What is the phasor of the inductor voltage vL (t) in this network, given
that v(t) = 2 cos(t) V?
4.6 Consider the following circuit:

[circuit diagram: 2 cos(5t) A source, 5 Ω resistor, and 1 H inductor carrying i(t)]

Determine the steady-state current i(t), using phasor current division.


4.7 In the following circuit determine the node-voltage phasors V1 , V2 , and V3
and express them in polar form:

[circuit diagram: a 2 V source and a 1∠90° A source with elements −j1 Ω, j2 Ω, and 1 Ω; node voltages V1 , V2 , V3 and loop currents I1 , I2 , I3 ]

4.8 In the circuit shown for Problem 4.7, determine the loop-current phasors I1 ,
I2 , and I3 and express them in polar form.

4.9 Use the phasor method to determine v1 (t) in the following circuit:

[circuit diagram: 2 cos(4t) V source, 2 Ω resistor, dependent source 4ix (t), 1 H inductor carrying ix (t), and 1/16 F capacitor; v1 (t) and v2 (t) are the indicated voltages]

4.10 In the following circuit determine the phasor V and express it in polar form:


[circuit diagram: a 1 V source and a 2 A source with −j Ω and j Ω elements; V is the indicated voltage]

4.11 Use the phasor method to determine the steady-state voltage v(t) in the
following op-amp circuit:

[op-amp circuit diagram: 20 cos(4t) V source, 1 Ω resistor, 2 F capacitor, and 1 H inductor; v(t) is the output voltage]

4.12 Use the following network to answer (a) through (d):

[circuit diagram: sources Vs and Is with elements 1 Ω, −j1 Ω, j1 Ω, and −j3 Ω; V is the indicated voltage]

(a) Determine the phasor V when Is = 0.


(b) Determine the phasor V when Vs = 0.
(c) Determine V when Vs = 4 V and Is = −2 A, and calculate the average
power absorbed in the resistors.
(d) What are the Thevenin equivalent and the available average power of
the network when Vs = 4 V and Is = −2 A?

4.13 Determine the impedance ZL of a load that is matched to the following


network at terminals a and b, and determine the net power absorbed by the
matched load.

[circuit diagram: a jV source and a 2 A source with a j3 Ω element and two 2 Ω elements; terminals a and b]

4.14 (a) Calculate the equivalent impedance of the following network for (i)
ω = 5 krad/s, (ii) ω = 25 krad/s, and (iii) ω = 125 krad/s:

[network diagram: 50 Ω resistor, 2 μF capacitor, and 0.8 mH inductor]

(b) Assuming a cosine voltage input to the network, with a fixed amplitude
and variable frequency ω, at which value of ω is the amplitude of the
capacitor voltage maximized? At the same frequency, what will be the
amplitude of the resistor current?
5 Frequency Response H(ω) of LTI Systems

5.1 THE FREQUENCY RESPONSE H(ω) OF LTI SYSTEMS
5.2 PROPERTIES OF FREQUENCY RESPONSE H(ω) OF LTI CIRCUITS
5.3 LTI SYSTEM RESPONSE TO CO-SINUSOIDAL INPUTS
5.4 LTI SYSTEM RESPONSE TO MULTIFREQUENCY INPUTS
5.5 RESONANT AND NON-DISSIPATIVE SYSTEMS
EXERCISES

Figure 5.1a shows an LTI system with input signal vi (t) and output vo (t). In Chapter 4 we learned how to calculate the steady-state output of such systems when the input is a co-sinusoid (just a simple phasor calculation in Figure 5.1b, for instance). In Chapter 7 we will learn how to calculate the zero-state system response to any input signal of practical interest (e.g., a rectangular or triangular pulse, a talk show, a song, a lecture) by using the following facts:

(1) All practical signals that can be generated in the lab or in a radio station can be expressed as a superposition of co-sinusoids with different frequencies, phases, and amplitudes.
(2) The output of an LTI system is the superposition of the individual responses caused by the co-sinusoidal components of the input signal.

In this chapter we lay down the conceptual path from Chapter 4 (co-sinusoids) to Chapter 7 (arbitrary signals), and we do some practical calculations concerning dissipative LTI circuits and systems with multifrequency inputs. Section 5.1 introduces the concept of frequency response H (ω) of an LTI system and shows how to determine H (ω) for linear circuits and ODEs. We discuss general properties of the frequency response H (ω) in Section 5.2. Sections 5.3 and 5.4 describe the applications of H (ω) in single- and multi-frequency system response calculations. Finally, in Section 5.5 we revisit the resonance phenomenon first encountered in Section 4.4.

Figure 5.1 (a) A single-input LTI system, (b) its phasor representation, (c) |H(ω)| vs ω, and (d) ∠H(ω) (in degrees) vs ω.

5.1 The Frequency Response H(ω) of LTI Systems


We learned in Chapter 4 that a dissipative LTI circuit with a co-sinusoidal source
(independent voltage or current source), namely,

f (t) = Re{F ej ωt } = |F | cos(ωt + ∠F ),

produces a steady-state output response y(t) (a voltage or current)

y(t) = Re{Y ej ωt } = |Y | cos(ωt + ∠Y ),

where the output phasor

Y = |Y |∠Y

is proportional to the input phasor

F = |F |∠F.

The proportionality between Y and F depends, in general, on the source frequency ω,


because the impedances in the circuit depend on ω. Hence, in a single-source circuit
the relationship between source and response phasors F and Y has the form

Y = H (ω)F,

where the function H (ω), with variable ω, is said to be the frequency response of the circuit.

The next four examples illustrate how the frequency response H (ω) can be deter-
mined in LTI circuits and systems. We also introduce the concepts of amplitude
response |H (ω)| and phase response ∠H (ω).

Example 5.1
For the system shown in Figure 5.1a, the input is voltage signal

f (t) = vi (t)

and the output is voltage signal

y(t) = vo (t).

Determine the frequency response of the system


H (ω) = Y/F,
where F and Y are the input and output signal phasors when the input is
specified as a co-sinusoid with frequency ω.
Solution From the equivalent phasor circuit shown in Figure 5.1b, we obtain, using voltage division,

Vo = [(1/(jω))/(1 + 1/(jω))] Vi = [1/(1 + jω)] Vi.

Since F = Vi and Y = Vo, it follows that Y = [1/(1 + jω)]F, so

H (ω) = Y/F = 1/(1 + jω).

Because 1/(1 + jω) = (1/√(1 + ω²)) ∠−tan⁻¹(ω), we also can write

H (ω) = |H (ω)|∠H (ω),

where
|H (ω)| ≡ 1/√(1 + ω²)
and

∠H (ω) ≡ − tan−1 (ω)

are known as the amplitude and phase responses, respectively. The varia-
tions of |H (ω)| and ∠H (ω) with frequency ω are plotted in Figures 5.1c
and 5.1d, respectively.
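Amplitude and phase responses like these are convenient to evaluate numerically. The sketch below (our own, using Python's complex arithmetic) evaluates H(ω) = 1/(1 + jω) at a sample frequency:

```python
import cmath, math

# Evaluate H(w) = 1/(1 + jw) (Example 5.1) at a sample frequency.
def H(w):
    return 1 / (1 + 1j * w)

w = 2.0
mag = abs(H(w))                 # equals 1/sqrt(1 + w^2)
phase = cmath.phase(H(w))       # equals -atan(w), in radians

print(mag, math.degrees(phase))
```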

The plot of |H (ω)| in Figure 5.1c shows how the amplitude of the output signal depends on the frequency of the input signal. For example, an input signal with frequency near zero is passed with nearly unity scaling of the amplitude (amplitude of the output will be nearly the same as the amplitude of the input), whereas inputs with high frequencies will be greatly attenuated (amplitude of the output will be nearly zero). As a consequence of this behavior, the circuit in Figure 5.1a is referred to as a low-pass filter. The plot of ∠H (ω) shows how the phase of the output signal depends on the frequency of the input signal. For an input frequency that is near zero, the phase of the output will be nearly the same as that of the input. For very large frequencies, the phase of the output will be retarded by approximately 90°. We will study this example further in Section 5.3.

Example 5.2
For the system shown in Figure 5.2a determine the frequency response
H (ω).
Solution From the phasor circuit in Figure 5.2b,

Vo = [1/(1/(jω) + 1)] Vi = [jω/(1 + jω)] Vi.

Therefore,

H (ω) = Vo/Vi = jω/(1 + jω)

Figure 5.2 (a) A simple high-pass system, (b) its phasor representation, (c) |H(ω)| vs ω, and (d) ∠H(ω) (in degrees) vs ω.

for this circuit. Note that

jω/(1 + jω) = [jω/(1 + jω)] · [(1 − jω)/(1 − jω)] = ω(ω + j)/(1 + ω²) = [|ω||ω + j|/(1 + ω²)] ∠tan⁻¹(1/ω).

Therefore,

|H (ω)| = |ω|/√(1 + ω²)

and

∠H (ω) = tan⁻¹(1/ω).

The variations of |H (ω)| and ∠H (ω) with frequency ω are plotted in Figures 5.2c and 5.2d, respectively.

We see in Figure 5.2c that an input signal with frequency near zero will be almost
completely attenuated (amplitude of the output will be nearly zero), whereas inputs
with high frequencies will be passed without attenuation. As a consequence, the circuit
in Figure 5.2a is referred to as a high-pass filter. The plot of ∠H (ω) shows that for a
positive input frequency that is nearly zero, the phase of the output will be advanced
by approximately 90◦ . For very large frequencies, the phase of the output will be
nearly the same as that of the input. We will study this example further in Section 5.3.

Example 5.3
For the system shown in Figure 5.3a, the input is current signal f (t) and the output is voltage signal y(t). Determine the frequency response of the system H (ω) = Y/F.
Solution Using the phasor circuit, we find that

Y = Zp F,

where Zp is the parallel equivalent impedance,

Zp = 1/(1 + 1/(jω) + jω) = jω/(1 − ω² + jω).

Therefore,

H (ω) = Y/F = jω/(1 − ω² + jω).

The amplitude and phase responses are

|H (ω)| = |ω|/√((1 − ω²)² + ω²)

Figure 5.3 (a) A simple band-pass system, (b) its phasor representation, (c) |H(ω)| vs ω, and (d) ∠H(ω) (in degrees) vs ω.

and

∠H (ω) = tan⁻¹((1 − ω²)/ω),

which are plotted in Figures 5.3c and 5.3d.

Figure 5.3c shows that both low and high frequencies are attenuated, whereas
some frequencies lying between the lowest and highest are passed with significant
amplitude. Thus, the circuit in Figures 5.3a is called a band-pass filter. We will offer
further comments on this example in Section 5.3.

Example 5.4
A linear system with some input f (t) and output y(t) is described by the
ODE
dy/dt + 4y(t) = df/dt + 2f (t).

Determine the frequency response

H (ω) = Y/F
of the system. Also, identify the amplitude response |H (ω)| and phase
response ∠H (ω).

Solution Using the derivative and superposition rules for phasors, we convert the ODE into its algebraic phasor form

jωY + 4Y = jωF + 2F,

which implies that

(4 + jω)Y = (2 + jω)F.

Hence,

H (ω) = Y/F = (2 + jω)/(4 + jω).

The amplitude and phase response of the system are

|H (ω)| = √(4 + ω²)/√(16 + ω²)

and

∠H (ω) = tan⁻¹(ω/2) − tan⁻¹(ω/4),

respectively.
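As a numerical cross-check of Example 5.4, the sketch below compares |H(ω)| and ∠H(ω) computed directly from the phasor ratio against the closed-form expressions (the sample frequency is our choice):

```python
import cmath, math

# Example 5.4: H(w) = (2 + jw)/(4 + jw). Compare magnitude and angle
# against the closed-form amplitude and phase responses.
def H(w):
    return (2 + 1j * w) / (4 + 1j * w)

w = 3.0
mag = abs(H(w))
mag_formula = math.sqrt(4 + w**2) / math.sqrt(16 + w**2)
ph = cmath.phase(H(w))
ph_formula = math.atan(w / 2) - math.atan(w / 4)

print(mag, mag_formula, ph, ph_formula)
```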

5.2 Properties of Frequency Response H(ω) of LTI Circuits

Table 5.1 lists some of the general properties of the frequency response H (ω) of LTI
circuits introduced in the previous section. The reason for the conjugate symmetry
condition

H (−ω) = H ∗ (ω)

Description Property
1 Conjugate symmetry H (−ω) = H ∗ (ω)
2 Even amplitude response |H (−ω)| = |H (ω)|
3 Odd phase response ∠H (−ω) = −∠H (ω)
4 Real DC response H (0) = H ∗ (0) is real valued
5 Steady-state response to ej ωt ej ωt −→ LTI −→ H (ω)ej ωt

Table 5.1 Properties of the frequency response H (ω) of LTI circuits.



(property 1) can be traced back to the fact that capacitor and inductor impedances 1/(jωC) and jωL satisfy the same property—for example,

−jωL = (jωL)∗.

Because the ω dependence enters H (ω) via only the capacitor and inductor impedances,
the frequency response H (ω) of an LTI circuit will always be conjugate symmetric.
Linear ODEs with real-valued constant coefficients, which describe such circuits, will
also have conjugate symmetric frequency response functions.1
One consequence of H (−ω) = H ∗ (ω) is that

|H (−ω)| = |H (ω)|;

that is, the amplitude response |H (ω)| is an even function of frequency ω (property
2). A second consequence is

∠H (−ω) = −∠H (ω),

which indicates that the phase response is an odd function of ω (property 3). Notice
that the amplitude and phase response curves shown in Figures 5.1 through 5.3 exhibit
the even and odd properties of |H (ω)| and ∠H (ω) just mentioned. Notice also that
H (0) is real valued in each case,2 consistent with property 4 in Table 5.1.
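Properties 1 through 4 are easy to spot-check numerically; the sketch below tests conjugate symmetry, the even amplitude response, and the real DC response for the band-pass response of Example 5.3:

```python
# Spot-check Table 5.1 for the band-pass response of Example 5.3:
# H(-w) = H(w)*, |H(-w)| = |H(w)|, and a real-valued DC response H(0).
def H(w):
    return 1j * w / (1 - w**2 + 1j * w)

for w in (0.5, 1.0, 3.0):
    assert abs(H(-w) - H(w).conjugate()) < 1e-12   # property 1
    assert abs(abs(H(-w)) - abs(H(w))) < 1e-12     # property 2

print(H(0))   # the DC response is real (here, zero)
```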
Complex-valued functions

ej ωt

and

H (ω)ej ωt = |H (ω)|ej (ωt+∠H (ω))

can be re-expressed in pair form (see Appendix A, Section 6) as

(cos(ωt), sin(ωt))

and

|H (ω)| (cos(ωt + ∠H (ω)), sin(ωt + ∠H (ω))),

respectively. Therefore, property 5,

ej ωt −→ LTI −→ H (ω)ej ωt ,

¹ It is possible to define LTI systems with a frequency response for which conjugate symmetry is not true—for instance, a linear ODE with complex-valued coefficients.
² H (−ω) = H ∗ (ω) implies that H (0) = H ∗ (0), which indicates that the DC response H (0) must be real, since only real numbers can equal their conjugates.

in Table 5.1 can be viewed as shorthand for the fact that in sinusoidal steady-state

cos(ωt) −→ LTI −→ |H (ω)| cos(ωt + ∠H (ω)),

as well as

sin(ωt) −→ LTI −→ |H (ω)| sin(ωt + ∠H (ω)).

These in turn can be readily inferred from the phasor relation

Y = H (ω)F,

with F = 1 and −j, corresponding to input functions f (t) = cos(ωt) and sin(ωt), respectively.

Because the steady-state output due to input e^(jωt) is the same as the input signal, except scaled by a constant H (ω), the complex exponential input e^(jωt) is sometimes called an "eigenfunction" of an LTI system. Also, the corresponding "eigenvalues" constitute the frequency response H (ω) of the system.

Property 5 also is handy when dealing with real-valued input signals such as cos(ωt), sin(ωt), and their weighted linear superpositions, because all such signals can be expressed in terms of e^(jωt) with variable ω and its conjugate, as, for example, in

cos(ωt) = (e^(jωt) + e^(−jωt))/2.

Thus, property 5 and the conjugate symmetry of H (ω) can be used directly to infer the steady-state response of LTI circuits to various types of real-valued inputs. Starting in the next section, we will make use of these properties and the superposition principle to describe the steady-state response of LTI circuits to sine and cosine inputs with arbitrary amplitudes and phase shifts (in Section 5.3), multifrequency sums of co-sinusoids (in Section 5.4), arbitrary periodic signals (Chapter 6), and nearly arbitrary aperiodic signals (Chapter 7).

5.3 LTI System Response to Co-Sinusoidal Inputs


We have just seen that in steady-state

cos(ωt) −→ LTI −→ |H (ω)| cos(ωt + ∠H (ω))


and
sin(ωt) −→ LTI −→ |H (ω)| sin(ωt + ∠H (ω)).
Since linearity implies that amplitude-scaled inputs cause similarly scaled outputs
and time-invariance means that delayed inputs cause equally delayed outputs, we can
infer that
|F | cos(ωt + θ) −→ LTI −→ |H (ω)||F | cos(ωt + θ + ∠H (ω))

Figure 5.4 Steady-state response of LTI systems H(ω) to cosine and sine inputs, with arbitrary amplitudes |F| and phase shifts θ.

and

|F | sin(ωt + θ) −→ LTI −→ |H (ω)||F | sin(ωt + θ + ∠H (ω)).

These steady-state input–output relations for LTI systems, summarized in Figure 5.4,
indicate that LTI systems convert their co-sinusoidal inputs of frequency ω into co-
sinusoidal outputs having the same frequency and the following amplitude and phase
parameters:
(1) Output amplitude = Input amplitude multiplied by |H (ω)|, and
(2) Output phase = Input phase plus ∠H (ω).
Therefore, knowledge of the frequency response H (ω) is sufficient to determine how an LTI system responds to co-sinusoidal signals and their superpositions in steady state. Furthermore, if the steady-state response of a system to a co-sinusoidal input is not a co-sinusoid of the same frequency, then the system cannot be LTI.
Example 5.5
Return to Example 5.1 (see Figure 5.1a), where we found the system frequency response to be

H (ω) = 1/(1 + jω) = (1/√(1 + ω²)) ∠−tan⁻¹(ω).
Consider two different inputs,

f1 (t) = 1 cos(0.5t) V

and

f2 (t) = 1 cos(2t) V.

Determine the steady-state system responses y1 (t) and y2 (t) to f1 (t) and
f2 (t).
Solution Applying the input–output relation shown in Figure 5.4, we note
that

y1 (t) = |H (0.5)|1 cos(0.5t + ∠H (0.5)) V,



where |H (0.5)| and ∠H (0.5) are the amplitude and phase response evaluated at the frequency ω = 0.5 rad/s of the input f1 (t). Since

|H (0.5)| = 1/√(1 + 0.5²) = 0.894

and

∠H (0.5) = −tan⁻¹(0.5) = −26.56°,

it follows that

y1 (t) = 0.894 cos(0.5t − 26.56°) V.

Likewise,

|H (2)| = 1/√(1 + 2²) = 0.447

and

∠H (2) = −tan⁻¹(2) = −63.43°,

and so

y2 (t) = |H (2)| 1 cos(2t + ∠H (2)) V = 0.447 cos(2t − 63.43°) V.
A summary of these results is presented in Figure 5.5. Study the plots carefully to better understand that system

H (ω) = 1/(1 + jω)

is a low-pass filter.
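The numbers in Example 5.5 follow directly from evaluating H(ω) at the two input frequencies, as the following sketch shows (amplitudes 0.894 and 0.447; phases about −26.6° and −63.4°):

```python
import cmath, math

# Example 5.5 via direct evaluation of H(w) = 1/(1 + jw) at the two
# input frequencies w = 0.5 and w = 2 rad/s.
def H(w):
    return 1 / (1 + 1j * w)

for w in (0.5, 2.0):
    amp = abs(H(w))                          # output/input amplitude ratio
    deg = math.degrees(cmath.phase(H(w)))    # phase shift in degrees
    print(w, amp, deg)
```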
Example 5.6
Return to Example 5.2 (see Figure 5.2a), where we found the system frequency response to be

H (ω) = jω/(1 + jω) = (|ω|/√(1 + ω²)) ∠tan⁻¹(1/ω).
Consider two different inputs
f1 (t) = 2 sin(0.5t) V
and

f2 (t) = 1 cos(2t + 45°) V.
Determine the steady-state system responses y1 (t) and y2 (t) to f1 (t) and
f2 (t).
Figure 5.5 A summary plot of the responses of the system H(ω) = 1/(1 + jω) to co-sinusoidal inputs f1 (t) and f2 (t) examined in Example 5.5. Note that the higher frequency input (bottom left signal) is attenuated more strongly than the lower frequency input (upper left). The system is therefore a low-pass filter.

Solution  Once again, using the same input–output relation based on frequency
response, we obtain

y1(t) = |H(0.5)| · 2 sin(0.5t + ∠H(0.5)) V.

Since

|H(0.5)| = |0.5|/√(1 + 0.5²) = 0.447

and

∠H(0.5) = tan⁻¹(1/0.5) = 63.43°,

it follows that

y1(t) = 0.894 sin(0.5t + 63.43°) V.

Likewise,

|H(2)| = |2|/√(1 + 2²) = 0.894

and

∠H(2) = tan⁻¹(1/2) = 26.56°,

and, therefore,

y2(t) = |H(2)| · 1 cos(2t + 45° + ∠H(2)) = 0.894 cos(2t + 71.56°) V.

A summary of these results is presented in Figure 5.6. Examine the plots
carefully to better understand that the system

H(ω) = jω/(1 + jω)

is a high-pass filter.

[Figure: the input waveforms f1(t) = 2 sin(0.5t) and f2(t) = cos(2t + 45°), the amplitude response |H(ω)| and phase response ∠H(ω), and the outputs y1(t) = 0.894 sin(0.5t + 63.43°) and y2(t) = 0.894 cos(2t + 71.56°).]

Figure 5.6 A summary plot of the responses of the system H(ω) = jω/(1 + jω) to co-sinusoidal inputs f1(t) and
f2(t) examined in Example 5.6. Note that the low-frequency input (upper left signal) is attenuated more
strongly than the high-frequency input (bottom left). The system is therefore a high-pass filter.

While the preceding Examples 5.5 and 5.6 illustrate that the systems

H(ω) = 1/(1 + jω)

and

H(ω) = jω/(1 + jω)

function as low-pass and high-pass filters, respectively, recall that we found the system

H(ω) = jω/(1 − ω² + jω)

of Example 5.3 in Section 5.1 to be a band-pass filter. Returning to Example 5.3, we
note the amplitude response curve |H(ω)| shown in Figure 5.3c peaks at ω = ±1 rad/s
and vanishes as ω → 0 and ω → ±∞. Therefore, only those co-sinusoidal inputs with
frequencies ω in the vicinity of 1 rad/s pass through the system with relatively small
attenuation. This occurs because 1 rad/s is the resonant frequency of the parallel LC
combination in the circuit. At resonance, the parallel LC combination is an effective
open circuit and all the source current is routed through the resistor to generate a
peak response. Conversely, in the limits as ω → 0 and ω → ±∞, respectively, the
inductor and the capacitor behave as effective shorts, and force the output voltage to
zero.

Example 5.7

An LTI system H(ω) that is known to be a low-pass filter converts its input

f(t) = 2 sin(12t)

into a steady-state output

y(t) = √2 sin(12t + θ)

for some real valued constant θ. Determine H(12) and also compare the
average power that the input f(t) and output y(t) would deliver to a 1 Ω
resistor.

Solution  First, we note that

|H(12)| = |Y|/|F| = √2/2 = 1/√2.
172 Chapter 5 Frequency Response H (ω) of LTI Systems

Thus, according to the available information,

H(12) = (1/√2) e^{jθ}.

For co-sinusoids, the average power into a 1 Ω load is obtained as one-half
the square of the signal amplitude. (See Example 4.21 in Section 4.3.1.)
Thus, the average power per Ω for the input f(t) is

Pf = (1/2)|F|² = 2,

while it is

Py = (1/2)|Y|² = 1

for the output y(t). Thus,

Py/Pf = 1/2,

which makes ω = 12 rad/s the half-power frequency of filter H(ω).

Note that for any ω and any LTI system, the power ratio Py/Pf can be obtained as
|H(ω)|², the square of the system amplitude response.
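The rule Py/Pf = |H(ω)|² can be checked numerically. The following Python sketch uses the low-pass filter H(ω) = 1/(1 + jω) from Example 5.5 as a stand-in (the filter of Example 5.7 is not fully specified); for this filter the half-power frequency is ω = 1 rad/s:

```python
def H(w):
    # Stand-in low-pass filter; its half-power frequency is w = 1 rad/s
    return 1 / (1 + 1j * w)

w = 1.0
amp_in = 2.0                    # input co-sinusoid amplitude |F|
amp_out = abs(H(w)) * amp_in    # steady-state output amplitude |Y|

P_in = 0.5 * amp_in ** 2        # average power per ohm of the input
P_out = 0.5 * amp_out ** 2      # average power per ohm of the output
ratio = P_out / P_in            # equals |H(w)|**2 = 1/2 at the half-power frequency
```

The computed ratio comes out 1/2, as the half-power name suggests.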

Example 5.8

A system converts its input

f(t) = 5 sin(12t)

into a steady-state output

y(t) = 25 sin(12t − 45°) + 2.5 sin(24t − 90°).

Is the system LTI?

Solution  No, the system is not LTI because, if it were, then the

2.5 sin(24t − 90°)

component of the output y(t) would not be present. LTI systems cannot
create new frequencies that are not present in their inputs.

DC response:  For ω = 0, the input–output relation

e^{jωt} −→ LTI −→ H(ω)e^{jωt}

reduces to

1 −→ LTI −→ H(0).

Linearity then implies that for an arbitrary DC input f(t) = Fo, the relation is

Fo −→ LTI −→ H(0)Fo,

where the response H(0)Fo is real valued. (Recall from property 4 in Table 5.1 that
H(0) is real valued.)

Example 5.9

What is the steady-state response of the system

H(ω) = (2 + jω)/(4 + jω)

to a DC input

f(t) = 5?

Solution  Since

H(0) = (2 + j0)/(4 + j0) = 0.5,

the steady-state response must be

y(t) = H(0) · 5 = 2.5.
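The DC-response rule is straightforward to exercise in code; a minimal Python sketch of Example 5.9 (the helper name H is ours):

```python
def H(w):
    # Frequency response of Example 5.9
    return (2 + 1j * w) / (4 + 1j * w)

H0 = H(0)              # = (2 + j0)/(4 + j0) = 0.5, and purely real
y_dc = H0.real * 5     # steady-state response to the DC input f(t) = 5
```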

Measuring H(ω) in the lab and dB representation:  The steady-state input–output
relation for LTI systems, depicted in Figure 5.4, suggests that the frequency
response

H(ω) = |H(ω)| e^{j∠H(ω)}

of a circuit can be determined in the lab via the following procedure:

(1) Using a variable-frequency signal generator, produce a signal

f(t) = cos(ωt)

and display it on an oscilloscope.



(2) For each setting of the input frequency ω, observe and record the amplitude
|H (ω)| and phase shift ∠H (ω) from the measured circuit response

y(t) = |Y | cos(ωt + ∠Y ) = |H (ω)| cos(ωt + ∠H (ω)).

The amplitude and phase shift data, |H (ω)| and ∠H (ω), collected over a wide range
of frequencies ω, then can be displayed as amplitude and phase response plots for the
system.
This method generally will reveal if the system is not LTI (in which case the
output y(t) corresponding to the cosine input f (t) = cos(ωt) usually is not a pure
co-sinusoid at frequency ω) or whether the system is non-dissipative (in which case
the system output may contain non-decaying components even after the input is
turned off). In either case, it will not be possible to infer |H (ω)| and ∠H (ω). For
such systems, H (ω) is not a meaningful system characterization. (See Section 5.5
for further discussion on non-dissipative systems.) For the frequent case with dissi-
pative LTI circuits and systems, however, the foregoing method provides a direct
experimental means for determining |H (ω)| and ∠H (ω).
More information often is revealed about |H(ω)| if we plot it on a logarithmic
scale rather than on a regular, or linear, scale. That, in effect, is the idea behind the
decibel definition³

|H(ω)|dB ≡ 10 log |H(ω)|² = 20 log |H(ω)|,

commonly used in describing the amplitude response function |H(ω)|.

A plot of |H(ω)|dB makes it possible to see exceedingly small values of |H(ω)|.
This is important when we graph the response of high-quality low-pass, high-pass,
and band-pass filters, where we may wish to have the response in the stop band (the
frequency band where the signal components are to be blocked) attenuated by a factor
of 1000 or more. This situation and others lead to a frequency response magnitude
|H(ω)| that has a very wide dynamic range best viewed on a logarithmic plot. Both
linear and decibel, or dB, plots are illustrated for two different |H(ω)| in Figures 5.7a
through 5.7d.
Figure 5.7a, for instance, shows a linear plot of the amplitude response of the
low-pass filter

H(ω) = 1/(1 + jω),

while Figure 5.7b is a plot of the dB amplitude response

|H(ω)|dB = 20 log (|1|/|1 + jω|) = 20 log |1| − 20 log |1 + jω| = −20 log √(1 + ω²)

³ A decibel (dB) is one-tenth of a bel (B), which is the name given to log |H(ω)|² = 2 log |H(ω)| in
honor of the lab practices of Alexander Graham Bell, the inventor of the telephone and hearing aid.

Figure 5.7 Plots of the amplitude response of systems H(ω) = 1/(1 + jω) (a), (b) and H(ω) = 1/((1 + jω)(100 + jω)) (c),
(d). Plots (a) and (c) are on a linear scale whereas (b) and (d) are on a decibel, or dB, scale. Note that in the
dB plots a logarithmic scale is used for the horizontal axes, according to a common engineering practice.

for the same filter. In the dB plot in Figure 5.7b we also use a logarithmic scale for
the horizontal axis representing the frequency variable ω.

Figures 5.7c and 5.7d display the amplitude response of another low-pass filter

H(ω) = 1/((1 + jω)(100 + jω))

and the corresponding dB amplitude response

|H(ω)|dB = 20 log (|1|/(|1 + jω||100 + jω|)) = −20 log √(1 + ω²) − 20 log √(100² + ω²).

Clearly, the dB plots shown in Figures 5.7b and 5.7d are more informative than the
linear plots shown in Figures 5.7a and 5.7c. For instance, we can see from Figures 5.7b
and 5.7d that both filters have similar “flat” amplitude responses for ω ≪ 1, a detail
that is not as apparent in the linear plots. It is useful to remember that a 20 dB change
corresponds to a factor of 10 change in the amplitude |H(ω)| and a factor of 100
change in |H(ω)|². Likewise, a 3 dB change corresponds to a factor of √2 variation
in |H(ω)| and a factor of 2 variation in |H(ω)|², as summarized in Table 5.2.

Amplitude response    Power response    dB
1/√2                  1/2               −3
1                     1                  0
√2                    2                  3
2                     4                  6
√10                   10                10
10                    100               20

Table 5.2 A conversion table for amplitude response |H(ω)|, power response
|H(ω)|², and their representation in dB units.
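The entries of Table 5.2 follow directly from the definition |H|dB = 20 log |H|; a quick Python check (an illustration, not part of the original table):

```python
import math

def amp_to_db(a):
    # 20 log10 of an amplitude ratio; equals 10 log10 of the power ratio a**2
    return 20 * math.log10(a)

amps = [1 / math.sqrt(2), 1, math.sqrt(2), 2, math.sqrt(10), 10]
dbs = [round(amp_to_db(a), 2) for a in amps]
# dbs is approximately [-3.01, 0.0, 3.01, 6.02, 10.0, 20.0], matching Table 5.2
```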

5.4 LTI System Response to Multifrequency Inputs

The steady-state response of LTI systems to multifrequency inputs can be calculated


by applying the superposition principle and the input–output relation for co-sinusoids,
shown in Figure 5.4.
Consider, for instance, a multifrequency input

f (t) = f1 (t) + f2 (t),

where

f1 (t) = |F1 | cos(ω1 t + θ1 )

and

f2 (t) = |F2 | sin(ω2 t + θ2 ).

Using superposition, we see that the steady-state response is

y(t) = y1 (t) + y2 (t),

where (using the input–output relation of Figure 5.4)

y1 (t) = |H (ω1 )||F1 | cos(ω1 t + θ1 + ∠H (ω1 ))

and

y2 (t) = |H (ω2 )||F2 | sin(ω2 t + θ2 + ∠H (ω2 )).



More generally, given a multifrequency input

f(t) = Σ_{n=1}^{N} |Fn| cos(ωn t + θn)

with arbitrary N, we find that the steady-state output is

y(t) = Σ_{n=1}^{N} |H(ωn)||Fn| cos(ωn t + θn + ∠H(ωn)).

This input–output relation for multi-frequency inputs is shown graphically in Figure 5.8
and will be the basis of the calculations presented in the following examples.

Input: f(t) = Σ_{n=1}^{N} |Fn| cos(ωn t + θn)  −→  LTI system: H(ω)  −→  Output: y(t) = Σ_{n=1}^{N} |H(ωn)||Fn| cos(ωn t + θn + ∠H(ωn))

Figure 5.8 LTI system response to multifrequency inputs.
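The relation in Figure 5.8 translates directly into code. Here is a small Python helper (our own construction, not from the text) that evaluates the steady-state output for a list of co-sinusoidal components, sanity-checked against the 1 H inductor of Example 5.10:

```python
import cmath, math

def steady_state(H, components, t):
    # components: list of (|Fn|, wn, theta_n) triples describing
    # f(t) = sum |Fn| cos(wn t + theta_n); returns y(t) per Figure 5.8
    return sum(abs(H(w)) * amp * math.cos(w * t + th + cmath.phase(H(w)))
               for amp, w, th in components)

# Check with H(w) = jw (a 1 H inductor, as in Example 5.10):
y = steady_state(lambda w: 1j * w, [(2, 2, 0), (4, 4, 0)], t=math.pi / 4)
# analytically y(t) = -4 sin(2t) - 16 sin(4t), so y(pi/4) = -4
```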

Example 5.10

A 1 H inductor current is specified as

i(t) = 2 cos(2t) + 4 cos(4t) A.

Determine the inductor voltage v(t), using the v–i relation for the inductor,
and confirm that the result is consistent with the input–output relation
shown in Figure 5.8.

Solution  Given that L = 1 H and i(t) = 2 cos(2t) + 4 cos(4t), we obtain

v(t) = L di/dt = d/dt (2 cos(2t) + 4 cos(4t))
     = −4 sin(2t) − 16 sin(4t) = 4 cos(2t + π/2) + 16 cos(4t + π/2) V.

Now, the frequency response of the same system is

H(ω) = V/I = jωI/I = jω.
Applying the relation in Figure 5.8 with the input signal

2 cos(2t) + 4 cos(4t),

we have

y(t) = |H(2)| · 2 cos(2t + ∠H(2)) + |H(4)| · 4 cos(4t + ∠H(4)) V.

This yields the output

2 · 2 cos(2t + π/2) + 4 · 4 cos(4t + π/2) V,
in agreement with the previous result.
Example 5.11

Suppose the input of the low-pass filter

H(ω) = 1/(1 + jω)

is

f(t) = 1 cos(0.5t) + 1 cos(πt) V.

Determine the system output y(t).

Solution  Using the relation given in Figure 5.8, we have

y(t) = |H(0.5)| · 1 cos(0.5t + ∠H(0.5)) + |H(π)| · 1 cos(πt + ∠H(π)) V.

Now,

|H(0.5)| = 1/|1 + j0.5| ≈ 0.894

and

∠H(0.5) = ∠(1/(1 + j0.5)) ≈ −26.56°.

Likewise,

|H(π)| = 1/|1 + jπ| ≈ 0.303

and

∠H(π) = ∠(1/(1 + jπ)) ≈ −72.34°.

Therefore,

y(t) ≈ 0.894 cos(0.5t − 26.56°) + 0.303 cos(πt − 72.34°) V.
The input and output signals of Example 5.11 are plotted in Figure 5.9a. Notice
that the output signal y(t) is a smoothed version of the input, because the low-pass
filter H (ω) has attenuated the high-frequency content that corresponds to rapid signal
variation.

2 f (t) 2 y(t)
1 1 1
H (ω) =
t 1 + jω t
10 20 30 40 50 60 10 20 30 40 50 60

−1 Low-pass filter −1

(a) −2 −2

f (t) y(t)
1 1
0.8 jω 0.8
0.6 H (ω) = 0.6
1 + jω
0.4 0.4
0.2 0.2
High-pass filter
−10 10 20 30 t −10 10 20 30 t
−0.2 −0.2
(b) −0.4 −0.4

Figure 5.9 Input and output signals f (t) and y(t) for (a) the low-pass filter examined in Example 5.11,
and (b) the high-pass filter examined in Example 5.12. Note that both signals in (b) are periodic and have
the same period To = 4π ≈ 12.57 s.

Example 5.12

Suppose the input of the high-pass filter

H(ω) = jω/(1 + jω) = (|ω|/√(1 + ω²)) ∠ tan⁻¹(1/ω)

is

f(t) = Σ_{n=1}^{∞} (1/(1 + n²)) cos(0.5nt) V.

Determine the system output y(t).

Solution  Using the relation given in Figure 5.8, with

ωn = 0.5n rad/s

for the specified input, we write

y(t) = Σ_{n=1}^{∞} |H(0.5n)| (1/(1 + n²)) cos(0.5nt + ∠H(0.5n)) V.

Now, with the specified amplitude and phase response, we have

|H(0.5n)| = |0.5n|/√(1 + 0.25n²)

and

∠H(0.5n) = tan⁻¹(1/(0.5n)).

Thus,

y(t) = Σ_{n=1}^{∞} (|0.5n|/√(1 + 0.25n²)) (1/(1 + n²)) cos(0.5nt + tan⁻¹(1/(0.5n))) V.

The input and output signals f(t) and y(t) are plotted⁴ in Figure 5.9b. Both
f(t) and y(t) are periodic signals with period To = 4π s. Can you explain
why? If not, Chapter 6 will provide the answer.

Glancing at Figure 5.9, try to appreciate that we have just taken a major step
toward handling arbitrary inputs in LTI systems. We don’t even have names for the
complicated input and output signals shown in Figure 5.9!
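For signals like these, partial sums are how one actually computes values. A Python sketch of the truncated series of Example 5.12, mirroring the 100-term sums mentioned in footnote 4 (the function names are ours):

```python
import math

def f_partial(t, N=100):
    # First N terms of the input series of Example 5.12
    return sum(math.cos(0.5 * n * t) / (1 + n * n) for n in range(1, N + 1))

def y_partial(t, N=100):
    # Corresponding steady-state output terms through H(w) = jw/(1 + jw)
    total = 0.0
    for n in range(1, N + 1):
        w = 0.5 * n
        gain = w / math.sqrt(1 + w * w)   # |H(w)| for w > 0
        phase = math.atan2(1, w)          # tan^-1(1/w)
        total += gain * math.cos(w * t + phase) / (1 + n * n)
    return total
```

Evaluating either function at t and t + 4π returns the same value, consistent with the common period To = 4π noted for Figure 5.9b.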

Example 5.13

What is the steady-state response y(t) of the LTI circuit shown in Figure 5.10a
to the input

f(t) = 5 + sin(2t)?

[Figure: the source f(t) in series with a 2 H inductor, with the output y(t) taken across a 3 Ω resistor; in the phasor equivalent the inductor is replaced by the impedance jω2 Ω.]

Figure 5.10 (a) An LTI circuit and (b) its phasor equivalent.

⁴ The plotted curves actually correspond to the sum of the first 100 terms (n = 1 to 100) in the expressions
for f(t) and y(t). Similar curves calculated with many more terms are virtually indistinguishable from
those shown in Figure 5.9b. Thus, the first 100 terms provide a sufficiently accurate representation.

Solution  Applying the phasor method to the phasor equivalent circuit
shown in Figure 5.10b, we first deduce that (using voltage division)

Y = (3/(3 + jω2)) F,

which implies that

Y/F = H(ω) = 3/(3 + jω2).

Since the DC response H(0) = 1 and

H(2) = 3/(3 + j4) = 3/(5∠53.13°) = 0.6 ∠−53.13°,

the steady-state response to input

f(t) = 5 + sin(2t)

is

y(t) = |H(0)| · 5 + |H(2)| sin(2t + ∠H(2)) = 5 + 0.6 sin(2t − 53.13°).
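Phasor arithmetic like this is one line per frequency in Python; a short sketch of the Example 5.13 computation (the helper name H is ours):

```python
import cmath

def H(w):
    # Voltage divider of Figure 5.10b: Y/F = 3/(3 + j*2w)
    return 3 / (3 + 1j * 2 * w)

dc_gain = H(0).real                        # H(0) = 1
mag = abs(H(2))                            # = 3/5 = 0.6
ang = cmath.phase(H(2)) * 180 / cmath.pi   # about -53.13 degrees
```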

5.5 Resonant and Non-Dissipative Systems

The input–output relation

e^{jωt} −→ LTI −→ H(ω)e^{jωt},

applicable to finding the steady-state response of LTI systems, requires that the system
be dissipative. The reason for this restriction is that in non-dissipative systems the
steady-state response may contain additional undamped and possibly unbounded
terms. For instance, in resonant circuits examined in Section 4.4, we saw that the
steady-state output may contain unforced oscillations at a resonance frequency ωo—
for example, ωo = 1/√(LC). For such circuits and systems, an H(ω)-based description
of the steady-state response is necessarily incomplete.

Consider, for instance, the frequency response of the series RLC circuit shown
in Figure 4.23a:

H(ω) = I/V = 1/(R + jωL + 1/(jωC)) = jωC/((1 − ω²LC) + jωRC).

In the limit, as R → 0,

H(ω) → jωC/(1 − ω²LC)

and the circuit becomes non-dissipative. For R = 0, the response to inputs e^{jωt} can
no longer be described as

(jωC/(1 − ω²LC)) e^{jωt}.

Notice that for R = 0, as ω → 1/√(LC), we have |H(ω)| → ∞, indicating that non-
dissipative systems can generate unbounded outputs with bounded inputs (an insta-
bility phenomenon that we will examine in detail in Chapter 10).
In summary, frequency-response-based analysis methods are extremely powerful
and widely used. We will continue to develop these techniques in Chapters 6 through
8. However, the concept of frequency response offers an incomplete and inadequate
description of non-dissipative and non-LTI systems. Therefore, we must be careful to
apply these techniques only to dissipative LTI systems. Beginning in Chapter 9, we
will develop alternative analysis methods that are appropriate for non-dissipative LTI
systems.
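The blow-up of |H(ω)| near resonance as R → 0 is easy to see numerically. A Python sketch of the series RLC response above, with the (assumed) normalized values L = C = 1 so that ωo = 1 rad/s:

```python
def H(w, R, L=1.0, C=1.0):
    # Series RLC response I/V = jwC / ((1 - w^2 LC) + jwRC)
    return (1j * w * C) / ((1 - w * w * L * C) + 1j * w * R * C)

w = 0.999  # just below the resonance frequency wo = 1/sqrt(LC) = 1 rad/s
gains = [abs(H(w, R)) for R in (1.0, 0.1, 0.01)]
# each reduction of R raises the near-resonance gain by roughly a factor of ten
```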

EXERCISES

5.1 Determine the frequency response H(ω) = Y/F of the circuit shown and sketch
|H(ω)| versus ω ≥ 0. In the diagram, f(t) and y(t) denote the input and
output signals of the circuit.

[Circuit: the source f(t) in series with a 1 Ω resistor and a 0.2 H inductor; the output y(t) is taken across a 0.05 F capacitor.]

5.2 In the following circuit, determine the frequency response H(ω) = Y/F and
H(0):

[Circuit: the source f(t) in series with a 1 Ω resistor and a 1 H inductor; a 1 Ω resistor and a 2 F capacitor appear across the output y(t).]


5.3 Determine the frequency response H(ω) = I/Vs of the circuit in Exercise
Problem 3.10 in Chapter 3. Note that H(ω) can be obtained with the use of
the phasor domain circuit as well as the ODE for v(t) given in Problem 3.10.

5.4 Determine the frequency response H(ω) = V/Vs of the circuit in Exercise
Problem 3.17 in Chapter 3. Sketch |H(ω)| versus ω ≥ 0.

5.5 A linear system with input f(t) and output y(t) is described by the ODE

d²y/dt² + 4 dy/dt + 4y(t) = df/dt.

Determine the frequency response H(ω) = Y/F of the system.
5.6 Determine the amplitude response |H(ω)| and phase response ∠H(ω) of
the system in Problem 5.5. Also, plot ∠H(ω) versus ω for −10 < ω < 10.

5.7 A linear circuit with input f(t) and output y(t) is described by the frequency
response Y/F = H(ω) = …/(4 + jω). Determine the following:
(a) Amplitude of y(t) when f(t) = 5 cos(3t + π/4) V.
(b) Output y(t) when the input is f(t) = 8 + 2 sin(4t) V.

5.8 A linear system has the frequency response

H(ω) = 1/((jω + 1)(jω + 2)) A/V.

Determine the system steady-state output y(t) with the following inputs:
(a) f(t) = 4 V DC.
(b) f(t) = 2 cos(2t) V.
(c) f(t) = cos(2t − 10°) + 2 sin(4t) V.

5.9 Repeat Problem 5.8 for a linear system described by the ODE

dy/dt + y(t) = 4f(t).

5.10 In the circuit of Problem 5.2, the input is f(t) = 4 + cos(2t). Determine the
steady-state output y(t) of the circuit.

5.11 Given an input f(t) = 5 + 4e^{j2t} + 4e^{−j2t} and H(ω) = (1 + jω)/(2 + jω), determine the
steady-state response y(t) of the system H(ω) and express it as a real valued
signal. Hint: Use the rule e^{jωt} −→ LTI −→ H(ω)e^{jωt} and superposition.

5.12 Repeat Problem 5.11 for the input f(t) = 2e^{−j2t} + (2 + j2)e^{−jt} + (2 − j2)e^{jt} + 2e^{j2t}.

5.13 Determine whether each of the given steady-state input–output pairs is


consistent with the properties of H (ω) discussed in Section 5.2. Explain
your reasoning.

(a) cos(25t) −→ System −→ 99.5 sin(25t − π).

(b) 2 cos(4t) −→ System −→ 1 + 4 cos(4t).

(c) 4 −→ System −→ −8.

(d) 4 −→ System −→ j 8.

(e) 4 −→ System −→ 4 cos(3t).

(f) sin(πt) −→ System −→ cos(πt) + 0.1 sin(πt).

(g) sin(πt) −→ System −→ cos(πt) + 0.1 sin(2πt).

(h) sin(πt) −→ System −→ sin2 (πt).


6  Fourier Series and LTI System Response to Periodic Signals

6.1 PERIODIC SIGNALS 186
6.2 FOURIER SERIES 189
6.3 SYSTEM RESPONSE TO PERIODIC INPUTS 208
EXERCISES 218

Suppose a weighted linear superposition of co-sinusoids and/or complex exponentials
e^{jωt} has harmonically related frequencies ω, meaning that each frequency ω divided
by any other is a ratio of integers. Then the resulting sum is a periodic signal, such as
those shown in Figure 6.1.

In this chapter we will learn how to express arbitrary periodic signals as such
sums and then use the input–output rules for LTI systems established in Chapter 5 to
determine the steady-state response of such systems to periodic inputs. In Section 6.1
we will discuss series representations of periodic signals—weighted sums of co-
sinusoids or complex exponentials, known as Fourier series—and introduce the
terminology used in the description of periodic signals. Section 6.2 describes the
conditions under which such Fourier series representations of periodic signals can
exist and how they can be determined. Finally, in Section 6.3 we will examine the
response of linear as well as nonlinear systems to periodic inputs. Also, the concepts
of average signal power and power spectrum will be introduced.

Figure 6.1 A replica of Figure 5.9b, illustrating that LTI systems respond to periodic inputs with periodic
outputs.

6.1 Periodic Signals

The signals shown in Figure 6.1 are periodic. In general, a signal f (t) is said to be
periodic if there exists some delay to such that

f (t − to ) = f (t)

for all values of t.¹ A consequence of this basic definition is that

f (t − kto ) = f (t)

for all integers k. Thus, when periodic signals are delayed by integer multiples of some
time interval to , they are indistinguishable from their undelayed forms. So, periodic
signals consist of replicas, repeated in time. The smallest nonzero value of to that
satisfies the condition f(t − to) = f(t) is said to be the period and is denoted by T.
For the signals shown in Figure 6.1, the period T = 4π s.
Signals cos(ωt), sin(ωt), and

e^{jωt} = cos(ωt) + j sin(ωt)

are the simplest examples of periodic signals, having the period T = 2π/ω and frequency
ω. Consider now the family of periodic signals

e^{jnωot} = cos(nωot) + j sin(nωot),

where n denotes any integer. The period of

e^{jnωot}

¹ Alternatively, a signal is periodic if its graph is an endless repetition of the same pattern over and over
again, just like the graph in Figure 6.1b.

is T = 2π/(nωo), because 2π/(nωo) is the smallest nonzero delay to that satisfies² the constraint

e^{jnωo(t−to)} = e^{jnωot}.

A weighted linear superposition of signals e^{jnωot},

f(t) ≡ Σ_{n=−∞}^{∞} Fn e^{jnωot},

where the Fn are constant coefficients, is also periodic with a period T = 2π/ωo, which
is the smallest nonzero delay to that satisfies³ f(t − to) = f(t).

The periodic function f(t), defined above, also can be expressed in terms of
a weighted superposition of cos(nωot) and sin(nωot) signals, or even in terms of
cos(nωot + θn), when f(t) is real. This gives rise to three different, but related,
series representations for periodic signals, as indicated in Table 6.1.
The representations of f (t) shown in Table 6.1, using periodic sums, are known
as Fourier series. The three equivalent versions shown in the table will be referred to
as exponential, trigonometric, and compact forms. Given a set of coefficients Fn that
specifies an exponential Fourier series representation of f (t), the equivalence of the
trigonometric form can be verified as explained next.

f(t), period T = 2π/ωo                                Form                      Coefficients

Σ_{n=−∞}^{∞} Fn e^{jnωot}                             Exponential               Fn = (1/T) ∫_T f(t) e^{−jnωot} dt

a0/2 + Σ_{n=1}^{∞} [an cos(nωot) + bn sin(nωot)]      Trigonometric             an = Fn + F−n,  bn = j(Fn − F−n)

c0/2 + Σ_{n=1}^{∞} cn cos(nωot + θn)                  Compact, for real f(t)    cn = 2|Fn|,  θn = ∠Fn

Table 6.1 Summary of different representations of periodic signal f(t) having period T = 2π/ωo and funda-
mental frequency ωo. The formula for Fn in the upper right corner will be derived in Section 6.2.
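The coefficient conversions in the right-hand column of Table 6.1 can be exercised numerically. A small Python sketch, using an arbitrarily chosen sample coefficient Fn = 0.5e^{−jπ/3} for a real-valued signal (so F−n = Fn*):

```python
import cmath

Fn = 0.5 * cmath.exp(-1j * cmath.pi / 3)   # arbitrary sample coefficient
Fneg = Fn.conjugate()                      # F_{-n} = Fn* for real-valued f(t)

an = Fn + Fneg                             # trigonometric coefficients (Table 6.1)
bn = 1j * (Fn - Fneg)
cn, thetan = 2 * abs(Fn), cmath.phase(Fn)  # compact form: cn = 2|Fn|, theta_n = angle Fn
# consistency: an and bn come out real, with an = cn cos(theta_n), bn = -cn sin(theta_n)
```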

Verification of trigonometric form:  The exponential form can be rewritten as

f(t) = F0 + Σ_{m=1}^{∞} (Fm e^{jmωot} + F−m e^{−jmωot}).

² We need e^{−jnωoto} = 1, which requires nωoto = 0, 2π, 4π, · · ·. The smallest nonzero choice for to
clearly is 2π/(nωo), which is the period T.
³ f(t − to) = Σ_{n=−∞}^{∞} Fn e^{jnωo(t−to)} = f(t) only if to satisfies |n|ωoto = 2πk for every value of |n| ≥
1. The smallest nonzero to that meets this criterion is to = T = 2π/ωo.

Replacing e^{±jmωot} with

cos(mωot) ± j sin(mωot),

we obtain

f(t) = F0 + Σ_{m=1}^{∞} [(Fm + F−m) cos(mωot) + j(Fm − F−m) sin(mωot)],

which is the same as the trigonometric form with

an ≡ Fn + F−n

and

bn ≡ j(Fn − F−n),

for n ≥ 0.
Notice that for a real-valued signal f(t), the coefficients an and bn of the trigonometric
Fourier series

f(t) = a0/2 + Σ_{n=1}^{∞} [an cos(nωot) + bn sin(nωot)]

must be real-valued. This implies that F−n = Fn*; in other words, the exponential
series coefficients Fn are conjugate symmetric when f(t) is real-valued.

We next verify the equivalence of the compact form of the Fourier series to the
exponential form, for real-valued f(t)—because f(t) is real-valued, we use the fact
that the Fn coefficients have conjugate symmetry.

Verification of compact form:  Expressing the exponential form as

f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωot} = Σ_{n=−∞}^{∞} |Fn| e^{j(nωot+∠Fn)}

and assuming that F−n = Fn* so that |F−n| = |Fn| and ∠F−n = −∠Fn, we
have

f(t) = F0 + Σ_{m=1}^{∞} |Fm| (e^{j(mωot+∠Fm)} + e^{−j(mωot+∠Fm)})
     = F0 + Σ_{n=1}^{∞} 2|Fn| cos(nωot + ∠Fn).

This is the same as the compact trigonometric form for real f(t), shown in
Table 6.1, with

cn ≡ 2|Fn|  and  θn ≡ ∠Fn,

for n ≥ 0.

The Fourier series f(t) in Table 6.1 are sums of an infinite number of periodic
signals with distinct frequencies ω = nωo and periods 2π/(nωo). It is the longest period,
corresponding to the lowest frequency and n = 1, that defines an interval across
which every periodic component repeats. Therefore, the period of the series is T =
2π/ωo. The corresponding lowest frequency ωo = 2π/T is referred to as the fundamental
frequency of the series. We will refer to the component of f(t) with frequency ωo
as the fundamental, and the component having frequency nωo as the nth harmonic.
Finally, F0 = a0/2 = c0/2 will be referred to as the DC component of f(t). When the
DC component is zero, we will refer to f(t) as having zero mean.

6.2 Fourier Series

6.2.1 Existence of the Fourier series

In 1807 Jean-Baptiste Joseph Fourier made a shocking announcement that all periodic
signals with periods T = 2π/ωo can be expressed as weighted linear superpositions of an
infinite number of cosine and sine functions cos(nωot) and sin(nωot). The claim was
made during a lecture that Fourier was presenting at the Paris Institute to compete
for the Grand Prize in mathematics. Fourier did not win the prize, because the jury,
including the prominent mathematicians Laplace and Lagrange, did not quite believe
the claim. The story, however, has an all-around happy ending: Laplace and Lagrange
were shown to be right in their judgment, because some periodic functions cannot
be expressed the way Fourier described. On the other hand, it is now known that all
periodic signals f(t) that can be generated in the lab⁴ can be expressed exactly as
Fourier suggested—that is, as the “Fourier series”

f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωot},

or its equivalents. (See Table 6.1 in the previous section.) The credit for sorting out
which periodic functions can be expressed as Fourier series goes to German mathe-
matician Gustave Peter Lejeune Dirichlet.

⁴ You should realize, of course, that in the lab we can generate only a finite number of periods, due
to time limitation. For example, for some f(t) with a fundamental frequency of ωo/2π = 1 MHz, the signal
would have 3600 million periods in 1 hour. So long as the number of periods generated during an experiment
is large enough, it is reasonable to treat the signal as periodic and represent it by a Fourier series.

If a periodic signal f(t) can be expressed in a Fourier series, then the series
coefficients Fn, known as Fourier coefficients, can be determined by the formula (see
Section 6.2.2 for the derivation)

Fn = (1/T) ∫_T f(t) e^{−jnωot} dt,

where ∫_T denotes integration over one period (i.e., ∫_{t=0}^{T}, or ∫_{t=−T/2}^{T/2}, or ∫_{t=t'}^{t'+T}, where
t' is arbitrary). Dirichlet recognized that if f(t) is absolutely integrable over a period
T, that is, if

∫_T |f(t)| dt < ∞,

then the Fourier coefficients Fn must be bounded,⁵ or |Fn| < ∞ for all n. With
bounded coefficients Fn, the convergence of the Fourier series of f(t) is possible,
and, in fact, guaranteed (as proved by Dirichlet), so long as f(t) has only a finite
number of minima and maxima, and a finite number of finite-sized discontinuities,⁶
within a single period T (i.e., so long as a plot of f(t) over a period T can be drawn
on a piece of paper with a pencil having a finite-width tip). These Dirichlet sufficiency
conditions for the convergence of the Fourier series—namely, that f(t) be absolutely
integrable and be plottable—are satisfied by all periodic signals that can be generated
in the lab or in a radio station and displayed on an oscilloscope.

6.2.2 Orthogonal projections and Fourier coefficients

There is a deep mathematical connection between Fourier series and the representation
of a vector in n-dimensional space. We will not explore this fully, but instead will
simply raise some of the concepts.

A 3-D vector, say,

v = (3, −2, 5),

can be expressed as a weighted sum of three mutually orthogonal vectors

u1 ≡ (1, 0, 0)
u2 ≡ (0, 1, 0)
u3 ≡ (0, 0, 1)
⁵ Notice that

|Fn| = |(1/T) ∫_T f(t) e^{−jnωot} dt| ≤ (1/T) ∫_T |f(t) e^{−jnωot}| dt = (1/T) ∫_T |f(t)| dt,

where we first use the triangle inequality—the absolute value of a sum can't be greater than the sum of the
absolute values—and next the fact that the magnitude of a product is the product of the magnitudes, and
that |e^{−jnωot}| = 1. So, if ∫_T |f(t)| dt < ∞, then |Fn| < ∞.
⁶ At discontinuity points, the Fourier series converges to a value that is midway between the bottom and
the top of the discontinuous jump.
as

v = 3u1 − 2u2 + 5u3.

In general, any 3-D vector can be written as

v = Σ_{n=1}^{3} Vn un,

where

Vn = v · un,

since the dot products⁷

un · un = 1

and

un · um = 0  for m ≠ n.

Note that un · um = 0, m ≠ n, is the orthogonality condition pertinent to vectors
u1, u2, and u3, which can be regarded as basis vectors for all 3-D vectors v. Further-
more, the coefficients Vn of the vector v can be regarded as projections⁸ of v along
the basis vectors un.
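The projection recipe is concrete enough to compute directly; a tiny Python illustration using the vector v = (3, −2, 5) from above:

```python
def dot(a, b):
    # scalar (dot) product: sum of pairwise coordinate products
    return sum(x * y for x, y in zip(a, b))

v = (3, -2, 5)
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

coeffs = [dot(v, u) for u in basis]  # projections Vn = v . un
rebuilt = tuple(sum(c * u[i] for c, u in zip(coeffs, basis)) for i in range(3))
```

The projections recover the coordinates of v, and summing the weighted basis vectors rebuilds v exactly.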
By analogy, a convergent Fourier series

f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωot}

can be interpreted as an infinite weighted sum of orthogonal basis functions

e^{jnωot},  −∞ ≤ n ≤ ∞,

satisfying an orthogonality condition⁹

∫_T (e^{jnωot})(e^{jmωot})* dt = 0  for m ≠ n.

⁷ The scalar, or dot, product of two vectors is the sum of the pairwise products of the two sets of vector
coordinates.
⁸ A projection of one vector onto another is the component of the first vector that lies in the direction
of the second vector.
⁹ Verification: Assuming m ≠ n, we find that

∫_T (e^{jnωot})(e^{jmωot})* dt = ∫_{t=0}^{T=2π/ωo} e^{j(n−m)ωot} dt = (e^{j(n−m)2π} − 1)/(j(n − m)ωo) = (1 − 1)/(j(n − m)ωo) = 0.
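The orthogonality integral in footnote 9 can also be checked by brute-force numerical integration; a Python sketch (Riemann sum, our own construction):

```python
import cmath

def inner(n, m, T=2 * cmath.pi, steps=4000):
    # Riemann-sum estimate of the integral over one period of
    # e^{jn wo t} (e^{jm wo t})*, with wo = 2*pi/T
    wo = 2 * cmath.pi / T
    dt = T / steps
    return sum(cmath.exp(1j * (n - m) * wo * k * dt) for k in range(steps)) * dt
```

inner(3, 1) comes out essentially zero, while inner(3, 3) returns the period T, matching the T·Fm factor used in the derivation.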

A Fourier coefficient Fm of f(t) is then the projection of f(t) along the basis function
e^{jmωot}. To calculate the coefficient we multiply both sides of the series expression
with

(e^{jmωot})* = e^{−jmωot}

and integrate the products on each side across a period T. The result, called the inner
product (instead of dot product) of f(t) with e^{jmωot}, is

∫_T f(t) e^{−jmωot} dt = ∫_T Σ_{n=−∞}^{∞} Fn e^{jnωot} e^{−jmωot} dt
                       = Σ_{n=−∞}^{∞} Fn ∫_T (e^{jnωot})(e^{jmωot})* dt = T Fm,

provided that the series is uniformly convergent and hence a term-by-term integration
of the series is permissible. We then find (after exchanging m with n),

Fn = (1/T) ∫_T f(t) e^{−jnωot} dt,

which can be utilized with any periodic f(t) satisfying the Dirichlet conditions to
obtain a Fourier series converging to f(t) at all points where f(t) is continuous.
The trigonometric Fourier series

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos(n\omega_o t) + b_n\sin(n\omega_o t)$$

can be interpreted as a representation of the periodic signal $f(t)$ in terms of an alternative set of orthogonal basis functions consisting of $\cos(n\omega_o t)$ and $\sin(n\omega_o t)$, $n \ge 0$. The pertinent Fourier coefficients $a_n$ and $b_n$ can be determined as projections of $f(t)$ along $\cos(n\omega_o t)$ and $\sin(n\omega_o t)$. Equivalently, we can use the relations

$$a_n = F_n + F_{-n}$$

and

$$b_n = j(F_n - F_{-n})$$

(see Table 6.1) and also the formula for the exponential Fourier coefficients $F_n$ obtained previously.
Table 6.2 lists the formulae for the Fourier coefficients of all three forms of the Fourier series. Note that the exponential Fourier series requires the calculation of only a single set of coefficients $F_n$, while two sets, $a_n$ and $b_n$, are needed for the trigonometric form. Furthermore, the compact-form coefficients $c_n$ and $\theta_n$ can be inferred from the exponential-form coefficients $F_n$ in a straightforward way. For these reasons, we will stress mainly the exponential and compact forms of the Fourier series. We also will see that the exponential Fourier series has the most convenient form for LTI system response calculations. (See Section 6.3.) The trigonometric form is preferable only when $f(t)$ is either an even function—that is, when

$$f(-t) = f(t)$$

(in which case $b_n = 0$)—or an odd function—that is, when

$$f(-t) = -f(t)$$

(in which case $a_n = 0$).

Table 6.2 Exponential, trigonometric, and compact-form Fourier coefficients for a periodic signal $f(t)$ having period $T$ and fundamental frequency $\omega_o = \frac{2\pi}{T}$:

  Exponential:    $f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}$,  with  $F_n = \frac{1}{T}\int_T f(t)e^{-jn\omega_o t}\,dt$
  Trigonometric:  $f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos(n\omega_o t) + b_n\sin(n\omega_o t)$,  with  $a_n = \frac{2}{T}\int_T f(t)\cos(n\omega_o t)\,dt$  and  $b_n = \frac{2}{T}\int_T f(t)\sin(n\omega_o t)\,dt$
  Compact:        $f(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos(n\omega_o t + \theta_n)$ for real $f(t)$,  with  $c_n = 2|F_n|$  and  $\theta_n = \angle F_n$
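The analysis formula in Table 6.2 is easy to check numerically. The following sketch (an illustration assuming NumPy is available; the helper name `fourier_coeff` is ours) approximates $F_n$ for $f(t) = 1 + \cos(2t)$ by sampling one period, and recovers $F_0 = 1$ and $F_{\pm 1} = \frac12$:

```python
import numpy as np

def fourier_coeff(f, T, n, samples=10_000):
    """Approximate F_n = (1/T) * integral over one period of f(t) e^{-j n w0 t} dt
    by a uniform average over one period."""
    w0 = 2 * np.pi / T
    t = np.arange(samples) * (T / samples)
    return np.mean(f(t) * np.exp(-1j * n * w0 * t))

# f(t) = 1 + cos(2t) has period T = pi, so w0 = 2 rad/s and cos(2t) is the n = +/-1 harmonic.
f = lambda t: 1 + np.cos(2 * t)
T = np.pi

print(fourier_coeff(f, T, 0))   # ~1   (the DC coefficient F_0)
print(fourier_coeff(f, T, 1))   # ~0.5 (F_1 = F_{-1} = 1/2)
print(fourier_coeff(f, T, 2))   # ~0   (no second harmonic present)
```

Because the samples cover exactly one period, the uniform average of $f(t)e^{-jn\omega_o t}$ is essentially exact for a band-limited signal like this one.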

6.2.3 Periodic and non-periodic sums

Not all sums of co-sinusoids or complex exponentials are periodic; in particular,

$$g(t) = \sum_{k} c_k \cos(\omega_k t + \theta_k)$$

is not periodic unless there exists some number $\omega_o$ such that all frequencies $\omega_k$ are integer multiples of $\omega_o$. Thus, if the sum is periodic, then all possible ratios of the frequencies $\omega_k$ are rational numbers. Furthermore, if the sum is periodic, then its fundamental frequency $\omega_o$ is defined to be the largest number whose integer multiples match each and every $\omega_k$.

Example 6.1
Signal

$$p(t) = 2\cos(\pi t) + 4\cos(2t)$$

is not periodic, because the ratio of the two frequencies, $\frac{\pi}{2}$, is not a rational number.

Example 6.2
Signal

$$q(t) = \cos(4t) + 5\sin(6t) + 2\cos\!\left(7t - \frac{\pi}{3}\right)$$

is periodic, because the frequencies 4, 6, and 7 rad/s are each integer multiples of 1 rad/s or, equivalently, the frequency ratios $\frac{4}{6}$, $\frac{4}{7}$, and $\frac{6}{7}$ (and their inverses) all are rational. Furthermore, since 1 rad/s is the largest frequency whose integer multiples can match 4, 6, and 7 rad/s, the fundamental frequency of $q(t)$ is $\omega_o = 1$ rad/s and its period is $T = \frac{2\pi}{\omega_o} = 2\pi$ s.

Because $q(t)$ in Example 6.2 is periodic, it can be expressed as a Fourier series. In fact, the compact trigonometric series for $q(t)$ is simply

$$q(t) = \cos(4t) + 5\cos\!\left(6t - \frac{\pi}{2}\right) + 2\cos\!\left(7t - \frac{\pi}{3}\right).$$

Thus, the parameters in the compact Fourier series are (compare with Table 6.2) $c_4 = 1$, $\theta_4 = 0$, $c_6 = 5$, $\theta_6 = -\frac{\pi}{2}$ rad, $c_7 = 2$, $\theta_7 = -\frac{\pi}{3}$ rad; all other $c_n$ are zero.
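The periodicity test above can be automated when the frequencies are known as exact rationals: the fundamental frequency is the greatest common divisor of the frequencies. A small sketch using Python's `fractions` module (the helper names `frac_gcd` and `fundamental` are our own):

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def frac_gcd(x, y):
    """GCD of two positive rationals: gcd(a/b, c/d) = gcd(a, c) / lcm(b, d)."""
    lcm_den = (x.denominator * y.denominator) // gcd(x.denominator, y.denominator)
    return Fraction(gcd(x.numerator, y.numerator), lcm_den)

def fundamental(freqs):
    """Largest w0 such that every listed frequency is an integer multiple of w0."""
    return reduce(frac_gcd, freqs)

# q(t) of Example 6.2: frequencies 4, 6, 7 rad/s -> fundamental 1 rad/s
print(fundamental([Fraction(4), Fraction(6), Fraction(7)]))   # 1

# Frequencies 8*pi and 10*pi rad/s: factor out pi and reduce the rational parts 8 and 10
print(fundamental([Fraction(8), Fraction(10)]))               # 2, i.e., w0 = 2*pi rad/s
```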

Example 6.3
What is the period of

$$f(t) = 1 + \cos(8\pi t) + 7.6\sin(10\pi t)?$$

Solution  Since $2\pi$ is the largest number whose integer multiples ($\times 4$ and $\times 5$) match the frequencies $8\pi$ and $10\pi$, the fundamental frequency of $f(t)$ is $\omega_o = 2\pi$ rad/s. Therefore, the period of $f(t)$ is $T = \frac{2\pi}{\omega_o} = \frac{2\pi}{2\pi\ \text{rad/s}} = 1$ s.

Example 6.4
What are the exponential Fourier series coefficients of $f(t)$ in Example 6.3?

Solution  We can rewrite $f(t)$ from Example 6.3 as

$$f(t) = 1e^{j0\cdot 2\pi t} + \frac{e^{j4\cdot 2\pi t} + e^{j(-4)\cdot 2\pi t}}{2} + 7.6\,\frac{e^{j5\cdot 2\pi t} - e^{j(-5)\cdot 2\pi t}}{2j}.$$

The right-hand side of this expression is effectively the exponential Fourier series (see Table 6.2) of $f(t)$. Hence, all Fourier coefficients $F_n$ are zero, except for $F_0 = 1$, $F_4 = \frac12$, $F_{-4} = \frac12$, $F_5 = -j3.8$, and $F_{-5} = j3.8$.

6.2.4 Properties and calculations of Fourier series


Table 6.3 lists some of the properties of periodic functions and their Fourier series. We will verify some of these properties and use a number of them to assist us in Fourier series calculations. The notation $f(t) \leftrightarrow F_n$, $g(t) \leftrightarrow G_n$, $\cdots$ in Table 6.3 indicates that $F_n$, $G_n$, $\cdots$ are the respective exponential Fourier coefficients of $f(t)$, $g(t)$, $\cdots$, where the periodic functions $f(t)$, $g(t)$, $\cdots$ are assumed to have the same fundamental frequency $\omega_o = \frac{2\pi}{T}$. The coefficients $a_n$, $b_n$, $c_n$, and $\theta_n$ refer to the coefficients in the trigonometric and compact forms of the Fourier series.

Table 6.3 Properties of Fourier series:

  1. Scaling (constant $K$):            $Kf(t) \leftrightarrow KF_n$
  2. Addition ($f(t) \leftrightarrow F_n$, $g(t) \leftrightarrow G_n$, $\cdots$):  $f(t) + g(t) + \cdots \leftrightarrow F_n + G_n + \cdots$
  3. Time shift (delay $t_o$):          $f(t - t_o) \leftrightarrow F_n e^{-jn\omega_o t_o}$
  4. Derivative (continuous $f(t)$):    $\frac{df}{dt} \leftrightarrow jn\omega_o F_n$
  5. Hermitian (real $f(t)$):           $F_{-n} = F_n^*$
  6. Even function ($f(-t) = f(t)$):    $f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos(n\omega_o t)$
  7. Odd function ($f(-t) = -f(t)$):    $f(t) = \sum_{n=1}^{\infty} b_n\sin(n\omega_o t)$
  8. Average power:                     $P \equiv \frac{1}{T}\int_T |f(t)|^2\,dt = \sum_{n=-\infty}^{\infty} |F_n|^2$

Example 6.5
Find the exponential and compact Fourier series of $f(t) = |\sin(t)|$ shown in Figure 6.2a.

Solution  The period of $\sin(t)$ is $2\pi$, while the period of $f(t) = |\sin(t)|$ is $T = \pi$ s, as can be verified from the graph of $f(t)$ shown in Figure 6.2a. Therefore, we identify the fundamental frequency of $f(t)$ as $\omega_o = \frac{2\pi}{T} = 2$ rad/s. We will calculate $F_n$ by using the integration limits of 0 and $\pi$, since $f(t)$ can be described by a single equation on the interval $0 < t < \pi$, namely, $f(t) = |\sin(t)| = \sin(t)$. We then have

$$F_n = \frac{1}{T}\int_T f(t)e^{-jn\omega_o t}\,dt = \frac{1}{\pi}\int_0^{\pi} \sin(t)\,e^{-jn2t}\,dt = \frac{1}{\pi}\int_0^{\pi} \frac{e^{jt} - e^{-jt}}{2j}\,e^{-jn2t}\,dt$$
$$= \frac{1}{j2\pi}\int_0^{\pi} \left(e^{j(1-2n)t} - e^{-j(1+2n)t}\right)dt = \frac{1}{j2\pi}\left[\frac{e^{j(1-2n)t}}{j(1-2n)} - \frac{e^{-j(1+2n)t}}{-j(1+2n)}\right]_0^{\pi}$$
$$= -\frac{1}{2\pi}\left(\frac{e^{j(1-2n)\pi} - 1}{1-2n} + \frac{e^{-j(1+2n)\pi} - 1}{1+2n}\right).$$

Figure 6.2 (a) A periodic function f(t) = |sin(t)|, (b) plot of Fourier series of f(t) truncated at n = 5, and (c) truncated at n = 20.

Now,

$$e^{j(1-2n)\pi} = e^{-j(1+2n)\pi} = e^{\pm j\pi} = -1$$

for all integers $n$, and therefore

$$F_n = \frac{1}{\pi}\left(\frac{1}{1-2n} + \frac{1}{1+2n}\right) = \frac{2}{\pi}\,\frac{1}{1-4n^2}.$$

The exponential Fourier series of $f(t) = |\sin(t)|$ is then

$$f(t) = \sum_{n=-\infty}^{\infty} \frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jn2t}.$$

The coefficients for the compact form are, for $n \ge 1$,

$$c_n = 2|F_n| = \frac{4}{\pi}\,\frac{1}{4n^2-1} = \frac{1}{\pi}\,\frac{1}{n^2-\frac14}$$

and

$$\theta_n = \angle F_n = \pi \text{ rad},$$

where the last line follows because for $n \ge 1$ the $F_n$ are all real and negative, so that their angles all have value $\pi$. Also, $F_0 = \frac{c_0}{2} = \frac{2}{\pi}$. The compact form of the Fourier series is therefore

$$f(t) = \frac{2}{\pi} + \frac{1}{\pi}\sum_{n=1}^{\infty} \frac{1}{n^2-\frac14}\cos(n2t + \pi).$$

Figures 6.2b and 6.2c show plots of the Fourier series of f (t), but with the
sums truncated at n = 5 and n = 20, respectively (for example, we dropped the sixth
and higher-order harmonics from the Fourier series to obtain the curve plotted in
Figure 6.2b.) Notice that the curve in Figure 6.2b approximates f (t) = | sin(t)| very
well, except in the neighborhoods where f (t) is nearly zero and abruptly changes
direction. The curve in Figure 6.2c, which we obtained by including more terms in
the sum (up to the 20th harmonic), clearly gives a finer approximation. Because f (t)
is a continuous function, the Fourier series converges to f (t) for all values of t.
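The truncated-series curves in Figure 6.2 are easy to regenerate. The sketch below (assuming NumPy; `partial_sum` is our own helper) sums the exponential series up to $|n| = 20$ and confirms that the worst-case reconstruction error over a period is small, as expected for a continuous signal whose coefficients decay like $1/n^2$:

```python
import numpy as np

def partial_sum(t, N):
    """Partial exponential Fourier series of f(t) = |sin t|,
    with F_n = (2/pi)/(1 - 4 n^2) and fundamental frequency w0 = 2 rad/s."""
    y = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        y += (2 / np.pi) / (1 - 4 * n**2) * np.exp(1j * n * 2 * t)
    return y.real

t = np.linspace(0, 2 * np.pi, 2001)
err = np.max(np.abs(partial_sum(t, 20) - np.abs(np.sin(t))))
print(err)   # bounded by the neglected-coefficient tail, roughly (1/pi) * (1/20)
```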

Example 6.6
Prove the time-shift property from Table 6.3.
Solution  This property states that

$$f(t) \leftrightarrow F_n \;\Rightarrow\; f(t - t_o) \leftrightarrow F_n e^{-jn\omega_o t_o}.$$

To verify it, we first express $f(t)$ in its Fourier series as

$$f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}.$$

Replacing $t$ with $t - t_o$ on both sides gives

$$f(t - t_o) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o (t-t_o)} = \sum_{n=-\infty}^{\infty} (F_n e^{-jn\omega_o t_o})e^{jn\omega_o t}.$$

Hence, the expression in parentheses, $F_n e^{-jn\omega_o t_o}$, is the $n$th Fourier coefficient for $f(t - t_o)$, proving the time-shift property.

Example 6.7
What are the exponential-form Fourier coefficients $G_n$ of the periodic function

$$g(t) = |\cos(t)|$$

shown in Figure 6.3a? Also, determine the compact-form Fourier series of $g(t)$.

1 1
0.8 0.8
0.6 0.6
0.4 0.4
0.2 0.2

2 4 6 8 t 2 4 6 8 t
(a) −0.2 (b) −0.2

Figure 6.3 (a) Plot of g(t) = | cos(t)|, and (b) plot of its Fourier series truncated
at n = 20.

Solution  Clearly,

$$g(t) = |\cos(t)| = f\!\left(t \pm \frac{\pi}{2}\right),$$

where $f(t) = |\sin(t)|$ as in Example 6.5. Therefore, using the Fourier coefficients $F_n$ of $f(t)$ from Example 6.5 and the time-shift property from Table 6.3 with $t_o = \frac{\pi}{2}$ s, we obtain the Fourier coefficients $G_n$ of $g(t)$ as

$$G_n = F_n e^{-jn\omega_o t_o} = F_n e^{-jn(2)(\frac{\pi}{2})} = F_n e^{-jn\pi}.$$

Hence, $|G_n| = |F_n|$, and for $n \ge 1$

$$\angle G_n = \angle F_n - n\pi = (1-n)\pi,$$

since $\angle F_n = \pi$ in that case. The compact form of $g(t)$ is therefore

$$g(t) = \frac{2}{\pi} + \frac{1}{\pi}\sum_{n=1}^{\infty} \frac{1}{n^2-\frac14}\cos(n2t + (1-n)\pi).$$

Note that the same result also could have been obtained by replacing $t$ with $t - \frac{\pi}{2}$ in the compact-form Fourier series for $f(t) = |\sin(t)|$ from Example 6.5. A plot of the series for $g(t)$, truncated at $n = 20$, is shown in Figure 6.3b.
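The relation $G_n = e^{-jn\pi}F_n = (-1)^n F_n$ can be confirmed numerically; the sketch below (assuming NumPy; helper names are ours) integrates $|\cos t|$ against $e^{-jn2t}$ over one period and compares the result with the shifted coefficients:

```python
import numpy as np

def coeff(f, n, T=np.pi, samples=200_000):
    """Riemann-sum approximation of the exponential Fourier coefficient F_n of f."""
    w0 = 2 * np.pi / T
    t = np.arange(samples) * (T / samples)
    return np.mean(f(t) * np.exp(-1j * n * w0 * t))

Fn = lambda n: (2 / np.pi) / (1 - 4 * n**2)        # coefficients of |sin t| (Example 6.5)

for n in range(4):
    Gn = coeff(lambda t: np.abs(np.cos(t)), n)      # coefficients of |cos t|
    print(n, Gn, (-1)**n * Fn(n))                   # the two columns should agree
```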

Examples 6.5 and 6.7 illustrated the influence of the angle, or phase coefficient,
θn on the shape of periodic signals. The amplitude coefficients cn for f (t) and g(t)
are identical, which implies that both functions are constructed with cosines of equal
amplitudes. The curves, however, are different, because the phase shifts θn of the
cosines are different. To further illustrate the impact of θn on the shape of a signal
waveform, we plot in Figure 6.4 a truncated version of another series,

$$h(t) = \frac{2}{\pi} + \frac{1}{\pi}\sum_{n=1}^{\infty} \frac{1}{n^2-\frac14}\cos\!\left(n2t + (1-n)\frac{\pi}{2}\right),$$
Figure 6.4 Plot of the Fourier series of signal h(t) truncated at n = 20.

having the same amplitude coefficients as f (t) and g(t), but different phase coeffi-
cients θn . Notice that h(t) has a shape that is different from both f (t) and g(t), which
is caused by the different Fourier phases. In general, both the Fourier amplitudes and
phases affect the shape of a signal.

Example 6.8
Given that (from Example 6.5)

$$f(t) = |\sin(t)| = \sum_{n=-\infty}^{\infty} \frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jn2t},$$

determine the exponential and compact Fourier series of

$$g(t) = \left|\sin\!\left(\tfrac12 t\right)\right|.$$

Solution  We note that

$$g(t) = f\!\left(\frac{t}{2}\right).$$

Therefore, replacing $t$ in the expression for the exponential Fourier series of $f(t)$ with $\frac{t}{2}$, we obtain the Fourier series of $g(t)$ as

$$g(t) = \sum_{n=-\infty}^{\infty} \frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jnt}.$$

Likewise, the compact form is

$$g(t) = \frac{2}{\pi} + \frac{1}{\pi}\sum_{n=1}^{\infty} \frac{1}{n^2-\frac14}\cos(nt + \pi).$$

Notice that the Fourier series coefficients have not changed. A stretching or squashing of a periodic waveform corresponds to a change in period and fundamental frequency. Comparing f(t) and g(t), the period has increased from π to 2π, and the fundamental frequency has dropped from 2 to 1 rad/s. The waveform g(t) is simply a stretched (by a factor of 2) version of f(t), plotted earlier in Figure 6.2a.

Example 6.9
A periodic signal f(t) with period T is specified as

$$f(t) = e^{-at} \quad \text{for } 0 \le t < T.$$

A plot of f(t) for a = 0.5 s⁻¹ and T = 2 s is shown in Figure 6.5a. Determine both the exponential and compact Fourier series for f(t), for arbitrary a and T.

Figure 6.5 (a) A periodic signal f(t); (b) plot of the Fourier series of f(t) truncated at n = 20; (c) expanded plot of the same curve near t = 0; (d) expanded plot of the same series truncated at n = 200; and (e) truncated at n = 2000.

Solution  Using the integration limits of 0 to T, we see that

$$F_n = \frac{1}{T}\int_0^T e^{-at}e^{-jn\omega_o t}\,dt = \frac{1}{T}\int_0^T e^{-(a+jn\omega_o)t}\,dt = \frac{1}{T}\left.\frac{e^{-(a+jn\omega_o)t}}{-(a+jn\omega_o)}\right|_0^T$$
$$= \frac{1 - e^{-(a+jn\omega_o)T}}{(a+jn\omega_o)T} = \frac{1 - e^{-aT}e^{-jn2\pi}}{aT + j2\pi n} = \frac{1 - e^{-aT}}{\sqrt{(aT)^2 + (2\pi n)^2}}\,e^{-j\tan^{-1}\frac{2\pi n}{aT}}.$$

In the last line we used $\omega_o T = 2\pi$ and $e^{-j2\pi n} = 1$. Thus, the exponential Fourier series is

$$f(t) = \sum_{n=-\infty}^{\infty} \frac{1 - e^{-aT}}{\sqrt{(aT)^2 + (2\pi n)^2}}\,e^{-j\tan^{-1}\frac{2\pi n}{aT}}\,e^{jn\omega_o t}.$$

The compact-form coefficients are

$$c_n = 2|F_n| = \frac{2(1 - e^{-aT})}{\sqrt{(aT)^2 + (2\pi n)^2}}$$

and

$$\theta_n = \angle F_n = -\tan^{-1}\frac{2\pi n}{aT}.$$

Therefore, in compact form,

$$f(t) = \frac{1 - e^{-aT}}{aT} + \sum_{n=1}^{\infty} \frac{2(1 - e^{-aT})}{\sqrt{(aT)^2 + (2\pi n)^2}}\cos\!\left(n\omega_o t - \tan^{-1}\frac{2\pi n}{aT}\right).$$

Figure 6.5b displays a plot of the Fourier series for f(t) (assuming that a = 0.5 and T = 2), truncated at n = 20. Note that the plot exhibits small fluctuations about the true f(t) shown in Figure 6.5a. Furthermore, the fluctuations are largest near the points of discontinuity of f(t), at t = 0, 2, 4, etc., as shown in more detail in the expanded plot of Figure 6.5c. Including a larger number of higher-order harmonics in the truncated series reduces the widths of the fluctuations, but does not diminish their amplitudes. This is illustrated in Figures 6.5d and 6.5e, which show other expanded plots of the series, but now with the series truncated at n = 200 and 2000. In the limit, as the number of terms in the Fourier series becomes infinite, the fluctuation widths vanish as the fluctuations bunch up around the points of discontinuity and the infinite series converges to f(t) at all points, except those where the signal is discontinuous. At points of discontinuity, the series converges to a value that is midway between the bottom and the top of the discontinuous jump. The behavior of a Fourier series near points of discontinuity, where increasing the number of terms in the series causes the fluctuations, or "ripples," to be narrower and to bunch up around the points of discontinuity (but with no decrease in amplitude), is known as the Gibbs phenomenon, after the American mathematical physicist Josiah Willard Gibbs.

Example 6.10
Determine the exponential and compact-form Fourier series of the square-wave signal

$$p(t) = \begin{cases} 1, & 0 < t < D, \\ 0, & D < t < 1, \end{cases}$$

where $0 < D < 1$ and the signal period is $T = 1$ s. Figure 6.6a shows an example of $p(t)$ with a duty cycle of $D = 0.25 = 25\%$.

Figure 6.6 (a) A square wave p(t) with 25 percent duty cycle and unity amplitude, and (b) a plot of its Fourier series truncated at n = 200.

Solution  Since $T = 1$, the fundamental frequency is $\omega_o = \frac{2\pi}{T} = 2\pi$. Therefore, the Fourier coefficients are

$$P_n = \frac{1}{T}\int_0^1 p(t)e^{-jn\omega_o t}\,dt = \int_0^D e^{-jn2\pi t}\,dt.$$

For the special case of $n = 0$, we obtain

$$P_0 = D.$$

For $n \ne 0$,

$$P_n = \left.\frac{e^{-jn2\pi t}}{-jn2\pi}\right|_0^D = \frac{e^{-jn2\pi D} - 1}{-jn2\pi} = e^{-jn\pi D}\,\frac{e^{-jn\pi D} - e^{jn\pi D}}{-jn2\pi} = \frac{\sin(n\pi D)}{n\pi}\,e^{-jn\pi D}.$$

This is consistent in the limit $n \to 0$ with $P_0$ determined above, as can be verified by using l'Hopital's rule.¹⁰ Thus, in exponential form the Fourier series is

$$p(t) = \sum_{n=-\infty}^{\infty} \frac{\sin(n\pi D)}{n\pi}\,e^{j(n2\pi t - n\pi D)},$$

and in compact form it is

$$p(t) = D + \sum_{n=1}^{\infty} \frac{2\sin(n\pi D)}{n\pi}\cos(n2\pi t - n\pi D).$$

Figure 6.6b shows a plot of the Fourier series of p(t) for D = 0.25, truncated at n = 200. Note the Gibbs phenomenon near t = 0, t = 0.25, etc., where p(t) is discontinuous.

All discontinuous signals having Fourier series coefficients $c_n$ that are proportional to $\frac{1}{n}$ (for large $n$) exhibit the Gibbs phenomenon. Notice that in Examples 6.5 and 6.7, where the signals were continuous, $c_n$ was proportional to $\frac{1}{n^2}$ (for large $n$) and the Gibbs phenomenon was absent. When $c_n$ decays as $\frac{1}{n^2}$ (or faster), the contribution of higher-order harmonics to the Fourier series is less important than in cases where $c_n$ is proportional to $\frac{1}{n}$, and the Gibbs phenomenon does not occur.
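The "no decrease in amplitude" aspect of the Gibbs phenomenon can be seen numerically. The sketch below (assuming NumPy; `square_partial` is our own helper) evaluates the truncated compact series of the 50%-duty-cycle square wave near its jump: the peak overshoot stays at roughly 9% of the unit jump no matter how many harmonics are kept:

```python
import numpy as np

def square_partial(t, N, D=0.5):
    """Truncated compact Fourier series of the unit square wave p(t) of Example 6.10
    (period T = 1 s): p(t) ~ D + sum_n 2 sin(n pi D)/(n pi) cos(2 pi n t - n pi D)."""
    y = np.full_like(t, D)
    for n in range(1, N + 1):
        y += 2 * np.sin(n * np.pi * D) / (n * np.pi) * np.cos(2 * np.pi * n * t - n * np.pi * D)
    return y

t = np.linspace(0.0, 0.5, 200_001)        # fine grid to the right of the jump at t = 0
for N in (50, 500):
    print(N, square_partial(t, N).max())   # ~1.089 in both cases; the ripple narrows but does not shrink
```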

¹⁰ Applying l'Hopital's rule yields

$$\lim_{n\to 0}\frac{\sin(n\pi D)}{n\pi} = \lim_{n\to 0}\frac{\frac{d}{dn}\sin(n\pi D)}{\frac{d}{dn}\,n\pi} = \frac{\pi D\cos(n\pi D)\big|_{n=0}}{\pi} = D = P_0.$$

Figure 6.7 (a) A square wave of peak-to-peak amplitude 4 and period 12 s, and (b) its truncated Fourier series plot.

Example 6.11
Let f(t) denote a zero-mean square wave with a period of T = 12 s and with a peak-to-peak amplitude of 4, as depicted in Figure 6.7a. Express f(t) as a stretched, scaled, and offset version of p(t) defined in Example 6.10 and then determine the compact-form Fourier series of f(t).

Solution  Consider first $4p(\frac{t}{12})$, which is, for $D = \frac12$, a square wave with 50% duty cycle, period $T = 12$ s, and an amplitude of 4. It differs from $f(t)$ shown in Figure 6.7a only by a DC offset of 2. Clearly, then, we can write

$$f(t) = 4p\!\left(\frac{t}{12}\right) - 2$$

for $D = \frac12$. Since, for this choice of $D$, we have, from Example 6.10,

$$p(t) = \frac12 + \sum_{n=1}^{\infty} \frac{2\sin(n\frac{\pi}{2})}{n\pi}\cos\!\left(n2\pi t - n\frac{\pi}{2}\right),$$

it follows, by direct substitution, that

$$f(t) = 4p\!\left(\frac{t}{12}\right) - 2 = \sum_{n=1}^{\infty} \frac{8\sin(n\frac{\pi}{2})}{n\pi}\cos\!\left(n\frac{\pi}{6}t - n\frac{\pi}{2}\right)$$
$$= \sum_{n=1\,(\text{odd})}^{\infty} \frac{8\sin^2(n\frac{\pi}{2})}{n\pi}\sin\!\left(n\frac{\pi}{6}t\right) = \sum_{n=1\,(\text{odd})}^{\infty} \frac{8}{n\pi}\sin\!\left(n\frac{\pi}{6}t\right) = \sum_{n=1\,(\text{odd})}^{\infty} \frac{8}{n\pi}\cos\!\left(n\frac{\pi}{6}t - \frac{\pi}{2}\right),$$

where we made use of the trig identity

$$\cos(a - b) = \cos a\cos b + \sin a\sin b$$

to reach line 2. Figure 6.7b shows a truncated Fourier series plot of f(t) obtained from the preceding expression.

Example 6.12
Obtain the trigonometric-form Fourier series for f (t) in Figure 6.7a, using
the formula for bn from Table 6.2.
Solution  The period of f(t) is T = 12 s and

$$\omega_o = \frac{2\pi}{T} = \frac{\pi}{6}\ \frac{\text{rad}}{\text{s}}.$$

The function is odd, so by property 7 from Table 6.3, all $a_n = 0$. Using the formula for $b_n$ from Table 6.2,

$$b_n = \frac{2}{12}\int_{-6}^{6} f(t)\sin\!\left(n\frac{\pi}{6}t\right)dt.$$

Since the integrand is even—note that odd $f(t)\,\times$ odd $\sin(n\frac{\pi}{6}t)$ is an even function—we can evaluate $b_n$ as

$$b_n = \frac{2\times 2}{12}\int_0^6 2\sin\!\left(n\frac{\pi}{6}t\right)dt = \frac{8}{12}\left.\frac{\cos(n\frac{\pi}{6}t)}{-n\frac{\pi}{6}}\right|_0^6 = -\frac{4}{n\pi}(\cos(n\pi) - 1).$$

Clearly, for even $n$, $b_n = 0$, and for odd $n$, $b_n = \frac{8}{n\pi}$. Thus, the trigonometric Fourier series is

$$f(t) = \sum_{n=1\,(\text{odd})}^{\infty} \frac{8}{n\pi}\sin\!\left(n\frac{\pi}{6}t\right),$$

as already determined by another method in Example 6.11.
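The coefficients $b_n = \frac{8}{n\pi}$ (odd $n$) can also be double-checked by direct numerical integration of the waveform (a sketch assuming NumPy; helper names are ours):

```python
import numpy as np

T, w0, N = 12.0, np.pi / 6, 1_200_000

def f(t):
    """Zero-mean square wave of Figure 6.7a: +2 on (0, 6), -2 on (6, 12)."""
    return np.where((t % T) < 6, 2.0, -2.0)

t = np.arange(N) * (T / N)
dt = T / N
for n in (1, 2, 3):
    bn = (2 / T) * np.sum(f(t) * np.sin(n * w0 * t)) * dt
    print(n, bn)   # ~8/(n*pi) for odd n, ~0 for even n
```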



Example 6.13
The function q(t) is periodic with period T = 4 s and is specified as

$$q(t) = \begin{cases} 2t, & 0 < t < 2\ \text{s}, \\ 0, & 2 < t < 4\ \text{s}. \end{cases}$$

Determine the compact trigonometric Fourier series of q(t).

Solution  The fundamental frequency is $\omega_o = \frac{2\pi}{T} = \frac{\pi}{2}$. First, for $n = 0$,

$$Q_0 = \frac14\int_0^2 2t\,dt = \frac14 t^2\Big|_0^2 = 1.$$

For all other values of $n$,

$$Q_n = \frac14\int_0^2 2te^{-jn\frac{\pi}{2}t}\,dt = \frac12\int_0^2 t\,\frac{d}{dt}\!\left[\frac{e^{-jn\frac{\pi}{2}t}}{-jn\frac{\pi}{2}}\right]dt.$$

(The last expression does not hold for $n = 0$, which is why we examined that case separately.) Using integration by parts, we obtain

$$Q_n = \frac12\left.\frac{te^{-jn\frac{\pi}{2}t}}{-jn\frac{\pi}{2}}\right|_0^2 - \frac12\int_0^2 \frac{e^{-jn\frac{\pi}{2}t}}{-jn\frac{\pi}{2}}\,dt = \frac12\,\frac{2e^{-jn\pi} - 0}{-jn\frac{\pi}{2}} - \frac12\left.\frac{e^{-jn\frac{\pi}{2}t}}{(-jn\frac{\pi}{2})^2}\right|_0^2$$
$$= \frac{(-1)^n}{-jn\frac{\pi}{2}} - \frac12\,\frac{(-1)^n - 1}{-\frac{n^2\pi^2}{4}} = \frac{2((-1)^n - 1) + j2\pi n(-1)^n}{\pi^2 n^2}$$
$$= \begin{cases} \dfrac{j2}{\pi n}, & \text{for even } n > 0, \\[2mm] -\dfrac{4 + j2\pi n}{\pi^2 n^2}, & \text{for odd } n. \end{cases}$$

After finding the magnitudes and phases of $Q_n$, for even and odd $n$, the compact Fourier series can be written as

$$q(t) = 1 + \sum_{n=2\,(\text{even})}^{\infty} \frac{4}{\pi n}\cos\!\left(n\frac{\pi}{2}t + \frac{\pi}{2}\right) + \sum_{n=1\,(\text{odd})}^{\infty} \frac{4\sqrt{4 + \pi^2 n^2}}{\pi^2 n^2}\cos\!\left(n\frac{\pi}{2}t + \pi + \tan^{-1}\frac{\pi n}{2}\right).$$

A plot of this series, truncated at n = 20, is shown in Figure 6.8.


Figure 6.8 Plot of the Fourier series of q(t) truncated at n = 20.

Example 6.14
Prove the derivative property from Table 6.3.

Solution  This property states that

$$f(t) \leftrightarrow F_n \;\Rightarrow\; \frac{df}{dt} \leftrightarrow jn\omega_o F_n$$

when $f(t)$ is a continuous function. In other words, the derivative $f'(t) \equiv \frac{df}{dt}$ will have a Fourier series with Fourier coefficients $jn\omega_o F_n$.

To verify the property, we differentiate

$$f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}$$

to obtain

$$\frac{df}{dt} = \frac{d}{dt}\sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t} = \sum_{n=-\infty}^{\infty} F_n\frac{d}{dt}e^{jn\omega_o t} = \sum_{n=-\infty}^{\infty} (jn\omega_o F_n)e^{jn\omega_o t}.$$

This is a Fourier series expansion for the function $\frac{df}{dt}$, where the expression in parentheses, $jn\omega_o F_n$, is the $n$th Fourier coefficient. This proves the derivative property.

Example 6.15
Let

$$g(t) = \frac{df}{dt},$$

where

$$f(t) = |\sin(t)|.$$

Determine the exponential Fourier coefficients of g(t) and its compact trigonometric Fourier series.

Solution  Since $f(t)$ is continuous,

$$G_n = jn\omega_o F_n = jn2F_n,$$

by the derivative property from Table 6.3. Because

$$F_n = \frac{2}{\pi}\,\frac{1}{1-4n^2}$$

from Example 6.5, it follows that

$$G_n = j\,\frac{4}{\pi}\,\frac{n}{1-4n^2}.$$

The easiest way to obtain the compact trigonometric series for $g(t)$ is to differentiate the compact series for $f(t)$ (from Example 6.5) term by term; the result is

$$g(t) = \frac{1}{\pi}\sum_{n=1}^{\infty} \frac{2n}{n^2-\frac14}\cos\!\left(n2t + \frac{3\pi}{2}\right).$$

A truncated version of this expression is plotted in Figure 6.9.

Figure 6.9 A plot of the Fourier series of g(t) = (d/dt)|sin(t)| truncated at n = 20.
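The derivative property can be sanity-checked numerically: the coefficients of $g(t) = \frac{d}{dt}|\sin t| = \mathrm{sgn}(\sin t)\cos t$, computed by direct integration, should equal $jn2F_n$ (a sketch assuming NumPy; helper names are ours):

```python
import numpy as np

def coeff(f, n, T=np.pi, samples=400_000):
    """Midpoint-rule approximation of the exponential Fourier coefficient of f."""
    w0 = 2 * np.pi / T
    t = (np.arange(samples) + 0.5) * (T / samples)   # midpoints avoid the jump at t = 0
    return np.mean(f(t) * np.exp(-1j * n * w0 * t))

g = lambda t: np.sign(np.sin(t)) * np.cos(t)          # d/dt |sin t|
Fn = lambda n: (2 / np.pi) / (1 - 4 * n**2)           # coefficients of |sin t|

for n in (1, 2, 3):
    print(n, coeff(g, n), 1j * n * 2 * Fn(n))          # the two columns should agree
```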

6.3 System Response to Periodic Inputs

6.3.1 LTI system response to periodic inputs


The input–output relation

$$e^{j\omega t} \longrightarrow \text{LTI} \longrightarrow H(\omega)e^{j\omega t}$$

for dissipative LTI systems implies that, with $\omega = n\omega_o$,

$$e^{jn\omega_o t} \longrightarrow \text{LTI} \longrightarrow H(n\omega_o)e^{jn\omega_o t}.$$


Figure 6.10 Input–output relation for dissipative LTI systems with periodic inputs: the input $f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}$, with coefficients $F_n$, applied to an LTI system $H(\omega)$ produces the output $y(t) = \sum_{n=-\infty}^{\infty} H(n\omega_o)F_n e^{jn\omega_o t}$, with coefficients $Y_n = H(n\omega_o)F_n$.

Therefore, using superposition, we get

$$\sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t} \longrightarrow \text{LTI} \longrightarrow \sum_{n=-\infty}^{\infty} H(n\omega_o)F_n e^{jn\omega_o t}$$

for any set of coefficients $F_n$. Thus, as illustrated in Figure 6.10, the steady-state response of an LTI system $H(\omega)$ to an arbitrary periodic input

$$f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}$$

is the periodic output

$$y(t) = \sum_{n=-\infty}^{\infty} H(n\omega_o)F_n e^{jn\omega_o t}.$$

The input–output relation for periodic signals described in Figure 6.10 indicates that an LTI system $H(\omega)$ simply converts the Fourier coefficients $F_n$ of its periodic input $f(t)$ into the Fourier coefficients

$$Y_n = H(n\omega_o)F_n$$

of its periodic output $y(t)$.

Example 6.16
The input of a linear system

$$H(\omega) = \frac{2 + j\omega}{3 + j\omega}$$

is the periodic function

$$f(t) = \sum_{n=-\infty}^{\infty} \frac{n}{1+n^2}\,e^{jn4t}.$$

What are the Fourier coefficients $Y_n$ of the periodic system output $y(t)$?

Solution  The input Fourier coefficients are

$$F_n = \frac{n}{1+n^2},$$

and the fundamental frequency of the input is $\omega_o = 4$ rad/s. Therefore,

$$H(n\omega_o) = H(n4) = \frac{2 + jn4}{3 + jn4}$$

and the Fourier coefficients of the output $y(t)$ are

$$Y_n = H(n\omega_o)F_n = \frac{2 + jn4}{3 + jn4}\cdot\frac{n}{1+n^2}.$$
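For any particular harmonic this is a one-line computation; the sketch below evaluates a couple of output coefficients for the system of Example 6.16 using plain Python complex arithmetic:

```python
# System and input coefficients from Example 6.16
H = lambda w: (2 + 1j * w) / (3 + 1j * w)
F = lambda n: n / (1 + n**2)
w0 = 4.0

Y = lambda n: H(n * w0) * F(n)   # Y_n = H(n * w0) * F_n

print(Y(1))   # (2+4j)/(3+4j) * (1/2), i.e., 0.44 + 0.08j
print(Y(0))   # DC term: F_0 = 0, so Y_0 = 0
```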

As a further illustration of the input–output relation in Figure 6.10, consider the three linear systems depicted in Figures 5.1, 5.2, and 5.3. The frequency responses of the systems are

$$H_1(\omega) = \frac{1}{1 + j\omega},$$

$$H_2(\omega) = \frac{j\omega}{1 + j\omega},$$

and

$$H_3(\omega) = \frac{j\omega}{1 - \omega^2 + j\omega}.$$

Suppose that all three systems are excited by the same periodic input

$$f(t) = \left|\sin\!\left(\tfrac12 t\right)\right| = \sum_{n=-\infty}^{\infty} \frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jnt},$$

with fundamental frequency $\omega_o = 1$ rad/s. (See Example 6.8 in the previous section.) Let $y_1(t)$, $y_2(t)$, and $y_3(t)$ denote the steady-state responses of the systems to this input. Applying the relation in Figure 6.10, we determine that

$$y_1(t) = \sum_{n=-\infty}^{\infty} \frac{1}{1 + jn}\,\frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jnt},$$

$$y_2(t) = \sum_{n=-\infty}^{\infty} \frac{jn}{1 + jn}\,\frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jnt},$$

and

$$y_3(t) = \sum_{n=-\infty}^{\infty} \frac{jn}{1 - n^2 + jn}\,\frac{2}{\pi}\,\frac{1}{1-4n^2}\,e^{jnt}.$$

These expressions can be readily converted to compact form, and then their truncated plots can be compared with the input signal $|\sin(\frac12 t)|$ to assess the impact of each system on the input. However, we also can gain some insight by examining how the magnitudes of the Fourier coefficients of input $f(t)$ and responses $y_1(t)$, $y_2(t)$, and $y_3(t)$ compare.

Figures 6.11a through 6.11d display point plots of $|F_n|^2$, $|H_1(n)|^2|F_n|^2$, $|H_2(n)|^2|F_n|^2$, and $|H_3(n)|^2|F_n|^2$ versus harmonic frequency $n\omega_o = n$ rad/s, representing the squared magnitudes of the Fourier coefficients of $f(t)$, $y_1(t)$, $y_2(t)$, and $y_3(t)$, respectively. (The reason for plotting the squared magnitudes instead of just the magnitudes will be explained in the next section.) Notice that $|H_1(n)|^2|F_n|^2$, representing $y_1(t)$, has been reduced compared with $|F_n|^2$, for $|n| \ge 1$. This shows that system $H_1(\omega)$ attenuates the high-frequency content of the input $f(t)$ in producing $y_1(t)$. As a consequence, the waveform $y_1(t)$, shown in Figure 6.12b, is smoother than the input $f(t)$, shown in Figure 6.12a. This occurs because substantial high-frequency content is necessary to produce rapid variation in a waveform.

By contrast, we see in Figure 6.11c that $H_2(\omega)$ has zeroed out the lowest frequency (DC component) of the input. Hence, the response $y_2(t)$, shown in Figure 6.12c, is a zero-mean signal, but it retains the sharp corners present in $f(t)$, indicating insignificant change in the high-frequency content of the input.

Figure 6.11 (a) |Fn|² vs nω₀ = n rad/s, (b) |H1(n)|²|Fn|², (c) |H2(n)|²|Fn|², and (d) |H3(n)|²|Fn|².

Finally, by comparing Figures 6.11c and 6.11d, we should expect that y3(t) will look more like a sine wave than does y2(t), because |H3(n)|²|Fn|² is almost entirely dominated by the n = ±1 harmonic (i.e., the fundamental). Indeed, Figure 6.12d shows that y3(t) is almost a pure co-sinusoid with frequency 1 rad/s.

Figure 6.12 (a) f(t), (b) y1(t), (c) y2(t), and (d) y3(t) versus time t (in seconds).

6.3.2 Average power, power spectrum, and Parseval’s theorem


The plot of $|F_n|^2$ displayed in Figure 6.11a is known as the power spectrum of the periodic signal $f(t)$. The sum of $|F_n|^2$ over all $n$ gives the average power of the same signal. The reasoning behind this terminology follows.

Suppose that $f(t)$ denotes a periodic voltage or current signal in a circuit. Then the application of $f(t)$ to a 1 Ω resistor would lead to an instantaneous power absorption of $f^2(t)$ W from the signal. Therefore, the average power delivered by $f(t)$ to the resistor would be

$$P \equiv \frac{1}{T}\int_T |f(t)|^2\,dt.$$

In this integral we have used $|f(t)|^2$, instead of $f^2(t)$, to denote the instantaneous power, because doing so allows us to define instantaneous and average power measures that are suitable for complex-valued $f(t)$. Of course, there is no distinction between $|f(t)|^2$ and $f^2(t)$ for real-valued $f(t)$.

Now, not all $f(t)$ are voltages or currents, nor are they always applied to 1 Ω resistors. Nevertheless, we still will refer to $|f(t)|^2$ as the instantaneous signal power and to $P$ as the average signal power. Whatever the true nature of $f(t)$ may be, in practice the cost of generating $f(t)$ will be proportional to $P$.

The formula for P given above shows how we can calculate the average signal power by integrating in the time domain, using the waveform $f(t)$ over a period $T$. Surprisingly, $P$ also can be computed using the Fourier coefficients $F_n$. This follows by Parseval's theorem, which is property 8 in Table 6.3, stated as follows:

$$P \equiv \frac{1}{T}\int_T |f(t)|^2\,dt = \sum_{n=-\infty}^{\infty} |F_n|^2.$$

Thus, the average value of $|f(t)|^2$ over one period and the sum of $|F_n|^2$ over all $n$ give the same number,¹¹ the average signal power $P$. That is why we interpret $|F_n|^2$ as the power spectrum of $f(t)$—just as the rainbow reveals the spectrum of colors (frequencies) contained in sunlight, $|F_n|^2$ describes how the average power of $f(t)$ is distributed among its different harmonic components at different frequencies $n\omega_o$.
Now, for real-valued signals $f(t)$, we have

$$F_{-n} = F_n^*$$

and

$$c_n = 2|F_n| = 2|F_{-n}|,$$

implying that

$$|F_{-n}|^2 = |F_n|^2 = \frac{c_n^2}{4},$$

where $c_n$ is the compact Fourier series coefficient. Therefore, for real $f(t)$,

$$\sum_{n=-\infty}^{\infty} |F_n|^2 = |F_0|^2 + \sum_{m=1}^{\infty}(|F_m|^2 + |F_{-m}|^2) = |F_0|^2 + \sum_{n=1}^{\infty} 2|F_n|^2 = \frac{c_0^2}{4} + \sum_{n=1}^{\infty} \frac12 c_n^2.$$

So, for real-valued $f(t)$, Parseval's theorem also can be written as

$$P \equiv \frac{1}{T}\int_T |f(t)|^2\,dt = \frac{c_0^2}{4} + \sum_{n=1}^{\infty} \frac12 c_n^2.$$
∞
¹¹ Proof of Parseval's theorem: For a periodic $f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_o t}$ with period $T = \frac{2\pi}{\omega_o}$,

$$P = \frac{1}{T}\int_T |f(t)|^2\,dt = \frac{1}{T}\int_T f(t)f^*(t)\,dt = \frac{1}{T}\int_T f(t)\left[\sum_{n=-\infty}^{\infty} F_n^* e^{-jn\omega_o t}\right]dt$$
$$= \sum_{n=-\infty}^{\infty} F_n^*\,\frac{1}{T}\int_T f(t)e^{-jn\omega_o t}\,dt = \sum_{n=-\infty}^{\infty} F_n^* F_n = \sum_{n=-\infty}^{\infty} |F_n|^2.$$

This formula has a simple, intuitive interpretation. It states that the average power $P$ of a real-valued periodic signal

$$f(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos(n\omega_o t + \theta_n)$$

is the sum of $\frac{c_0^2}{4}$, representing the DC power, and the terms $\frac12 c_n^2$, $n > 0$, where each succeeding term represents the average AC power of a harmonic component $c_n\cos(n\omega_o t + \theta_n)$. Recall that, for co-sinusoids, the average power into a 1 Ω resistor is simply one-half the amplitude squared.
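Parseval's theorem is easy to verify numerically for the square wave $p(t)$ of Example 6.10, whose time-domain average power is $\frac{1}{T}\int_T |p(t)|^2\,dt = D$. The sketch below (assuming NumPy) sums $|P_n|^2$ over many harmonics:

```python
import numpy as np

D = 0.25                                          # duty cycle of p(t) from Example 6.10
n = np.arange(1, 100_001)
Pn_sq = (np.sin(n * np.pi * D) / (n * np.pi))**2  # |P_n|^2 = sin^2(n pi D)/(n pi)^2, n != 0
P_freq = D**2 + 2 * np.sum(Pn_sq)                 # |P_0|^2 plus the n = +/-1, +/-2, ... terms

print(P_freq)   # should approach the time-domain average power D = 0.25
```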

Example 6.17
Consider the square-wave signal

$$f(t) = \sum_{n=1\,(\text{odd})}^{\infty} \frac{8}{n\pi}\cos\!\left(n\frac{\pi}{6}t - \frac{\pi}{2}\right),$$

with period T = 12 s, plotted in Figure 6.7a. If f(t) is the input of an LTI system with a frequency response

$$H(\omega) = \begin{cases} 1, & \text{for } |\omega| < 2\ \text{rad/s}, \\ 0, & \text{otherwise}, \end{cases}$$

determine the system output y(t) and the average powers $P_f$ and $P_y$ of the input and output signals f(t) and y(t).

Solution  First, we note that the input f(t) consists of co-sinusoids with harmonic frequencies

$$\frac{\pi}{6},\ \frac{3\pi}{6},\ \frac{5\pi}{6},\ \frac{7\pi}{6},\ \cdots\ \frac{\text{rad}}{\text{s}}.$$

Since $H(\omega) = 0$ for $\omega > 2$ rad/s, only the fundamental ($n = 1$) and 3rd harmonic ($n = 3$) of f(t) will pass through the specified system. Hence,

$$y(t) = \frac{8}{\pi}\cos\!\left(\frac{\pi}{6}t - \frac{\pi}{2}\right) + \frac{8}{3\pi}\cos\!\left(3\frac{\pi}{6}t - \frac{\pi}{2}\right).$$

Using the second form of Parseval's theorem (applicable to real-valued signals) with $c_1 = \frac{8}{\pi}$ and $c_3 = \frac{8}{3\pi}$, we find the average power in the output y(t) to be

$$P_y = \frac12\left(\frac{8}{\pi}\right)^2 + \frac12\left(\frac{8}{3\pi}\right)^2 \approx 3.602.$$

Calculation of $P_f$—the average power in the input f(t)—is easier in the time domain, using the graph of f(t) shown in Figure 6.7a. We note that the period T = 12 s and $|f(t)|^2 = 4$ for $0 < t < 12$ s. Hence,

$$P_f = \frac{1}{T}\int_0^T |f(t)|^2\,dt = \frac{1}{12}\int_0^{12} 4\,dt = 4.$$

Comparison of $P_f$ and $P_y$ shows that only about 10% of the average power in f(t) is contained in the fifth and higher harmonics.
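The numbers in Example 6.17 check out directly (plain Python):

```python
from math import pi

c = lambda n: 8 / (n * pi)            # compact amplitudes of the square wave (odd n only)
Py = 0.5 * c(1)**2 + 0.5 * c(3)**2    # only n = 1 and n = 3 pass the ideal low-pass filter
Pf = 4.0                              # time domain: |f(t)|^2 = 4 everywhere over the period

print(Py)             # ~3.602
print(1 - Py / Pf)    # ~0.10: about 10% of the input power lies in the blocked harmonics
```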

6.3.3 Harmonic distortion


Suppose that

y(t) = Af (t) + Bf 2 (t)

is the actual response of a system to some input f (t), where A and B denote arbitrary
constants. Assume that the first term in the expression, which is linear in f (t), is
the desired response. The second term, which is nonlinear in f (t), is unintentional,
perhaps due to a design error or imperfect electrical components. How might we
measure the consequences of the undesired nonlinear term?
As we saw in Chapters 4 and 5, linear systems respond to co-sinusoidal inputs
with co-sinusoidal outputs at the same frequency. However, the system defined above
will respond to a pure cosine input

f (t) = cos(ωo t),

with

    y(t) = A cos(ωo t) + B cos²(ωo t) = B/2 + A cos(ωo t) + (B/2) cos(2ωo t),

since

    cos²θ = (1/2)(1 + cos(2θ)).
Clearly, the output y(t) is not just the desired pure cosine

    A cos(ωo t),

but also contains a DC term B/2 and a second-harmonic term

    (B/2) cos(2ωo t).

The average power of the output (by Parseval's theorem) is

    Py = B²/4 + A²/2 + B²/8,

where the last two terms represent the average power of the fundamental and the
second harmonic, respectively. One common measure of second-harmonic distortion
in the system output is the ratio of the average power in the second harmonic to the
average power in the fundamental:¹²

    B²/(4A²).

For instance, for B = 0.1A, this ratio is 0.25%.
In a more general scenario, a nonlinear system response to a pure cosine input
f(t) = cos(ωo t) may have the Fourier series form

    y(t) = c0/2 + Σ_{n=1}^∞ cn cos(nωo t + θn),

containing many, and possibly an infinite number of, higher-order harmonics. Every
term in this response, except for the fundamental, would have been absent if the system
were linear. Hence, a sensible and useful measure of the effects of the nonlinearity in
this case is the ratio of the total power of the second and higher-order harmonics to
the power of the fundamental. This ratio, known as total harmonic distortion (THD),
can be expressed (by Parseval's theorem) as

    THD = [ Σ_{n=2}^∞ (1/2)cn² ] / [ (1/2)c1² ] = [ Σ_{n=2}^∞ cn² ] / c1² = [ Py − c0²/4 − (1/2)c1² ] / [ (1/2)c1² ],

where Py denotes the average signal power of the output y(t).

Example 6.18
A power plant promises

    y(t) = cos(ωo t)

as a product to its customers, where ωo/(2π) = 60 Hz. The plant actually delivers
the signal

    y(t) = cos(ωo t) + (1/9) cos(3ωo t) + (1/25) cos(5ωo t).
What is the THD?
Solution Clearly,

    THD = [ (1/9)² + (1/25)² ] / 1² ≈ 1.39%.
¹² DC distortion is of less concern because the DC component can be removed by the use of a blocking
capacitor or a simple high-pass filter.
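The THD formula above is easy to mechanize. Below is a minimal sketch (the helper `thd` and its argument layout are our own convention, not from the text) for a zero-mean response described by its compact-form amplitudes c1, c2, ...; applied to the power-plant signal of Example 6.18 it reproduces the 1.39% figure.

```python
def thd(c):
    # c[0] = fundamental amplitude c1; c[1], c[2], ... = c2, c3, ...
    # assumes a zero-mean response (c0 = 0)
    fundamental_power = 0.5 * c[0] ** 2
    distortion_power = sum(0.5 * cn ** 2 for cn in c[1:])
    return distortion_power / fundamental_power

# Example 6.18: c1 = 1, c3 = 1/9, c5 = 1/25, even harmonics absent
ratio = thd([1.0, 0.0, 1 / 9, 0.0, 1 / 25])
print(f"THD = {100 * ratio:.2f}%")
```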

Figure 6.13 A square-wave signal with period T = 2π s and a peak-to-peak
amplitude of 2.

Example 6.19
The Fourier series of the zero-mean square-wave signal shown in Figure 6.13 is

    y(t) = (4/π)[cos(t) − (1/3)cos(3t) + (1/5)cos(5t) − (1/7)cos(7t) + · · ·].

Assuming it is desired that y(t) be proportional to cos(t), what is the THD
for this signal?
Solution Since the signal is zero-mean, c0 = 0. Also,

    (1/2)c1² = 8/π².

Moreover, we can obtain the average power Py by averaging y²(t) over a
period 2π as

    Py = (1/2π) ∫_0^{2π} 1² dt = 1.

Therefore, substituting this last expression into the formula for THD, we
obtain

    THD = (1 − 8/π²) / (8/π²) = (π² − 8)/8 ≈ 23.4%.
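The 23.4% figure can be cross-checked by summing harmonic powers directly (a sketch using the amplitudes cn = 4/(nπ), odd n, from the square-wave series above):

```python
import math

fundamental_power = 0.5 * (4 / math.pi) ** 2            # (1/2) c1^2 = 8/pi^2
distortion_power = sum(0.5 * (4 / (n * math.pi)) ** 2   # odd harmonics n >= 3
                       for n in range(3, 200001, 2))

thd = distortion_power / fundamental_power
exact = (math.pi ** 2 - 8) / 8
print(round(100 * thd, 2), round(100 * exact, 2))
```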

EXERCISES

6.1 Plot the following periodic functions over at least two periods and specify
their period T and fundamental frequency ωo = 2π/T :
(a) f(t) = 4 + cos(3t).
(b) g(t) = 8 + 4e^{−j4t} + 4e^{j4t}.
(c) h(t) = 2e^{−j2t} + 2e^{j2t} + 2 cos(4t).

6.2 Show that (from Example 6.5)

    −(1/2π) [ (e^{j(1−2n)π} − 1)/(1 − 2n) + (e^{−j(1+2n)π} − 1)/(1 + 2n) ] = (2/π) · 1/(1 − 4n²)

for all integers n and explain each step of the derivation carefully. You
will be expected to perform similar simplifications when you make Fourier
coefficient calculations.
6.3 Show that (from Example 6.13 in Section 6.2.4)

    (1/2) [ (2e^{−jnπ} − 0)/(−jn(π/2)) − ( e^{−jn(π/2)t} / (−jn(π/2))² ) |_0^2 ]
        =  j2/(πn),                for even n > 0,
        =  −(4 + j2πn)/(π²n²),     for odd n.

Explain each step of the derivation carefully.


6.4 The function f(t) is periodic with period T = 4 s. Between t = 0 and t = 4 s
the function is described by

    f(t) = 1, for 0 < t < 2 s,  and  f(t) = 2, for 2 < t < 4 s.

(a) Plot f (t) between t = −4 s and t = 8 s.


(b) Determine the exponential Fourier coefficients Fn of f (t) for n = 0,
n = 1, and n = 2.
(c) Using the results of part (b), determine the compact-form Fourier coef-
ficients c0 , c1 , and c2 .

6.5 (a) Calculate the exponential Fourier series of f (t) plotted below.

[Figure: plot of f(t); vertical scale 1, time axis marked at −1/2 s and 1/2 s.]
Exercises 219

(b) Express the function g(t), shown next, as a scaled and shifted version
of the function f (t) from part (a) and determine the Fourier series of
g(t) by using the scaling and time-shift properties of the Fourier series.
Simplify your result as much as you can.

[Figure: plot of g(t); vertical scale from 4 to −4, time axis marked at 1 s.]

6.6 (a) Suppose a periodic signal f(t) is differentiable and g(t) ≡ df/dt. Express
Gn , the Fourier series coefficients of function g(t), in terms of the
Fourier coefficients Fn of f(t).
(b) Determine the exponential Fourier series of the function h(t) plotted
below. Hint: If you already have solved Problem 6.5, then note that the
derivative of signal f (t) in that problem is related to h(t).

[Figure: plot of h(t); vertical scale from 2 to −2, time axis marked at −1/2 s and 1/2 s.]

(c) Using the result of part (b), determine the exponential Fourier series
of the following signal s(t), assuming that T = 1 s:

[Figure: plot of s(t); vertical scale from A to −A, time axis marked at T.]

(d) Repeat part (c) for an arbitrary T .



6.7 (a) The function

    z(t) = 6 + 3 cos(4t) + 2 sin(5t)

is periodic with period T = 2π s and has fundamental frequency ωo =
1 rad/s. Determine the Fourier series coefficients Z0 , Z1 , Z2 , Z3 , Z4 ,
and Z6 . What is Z−4 ?
(b) What is the period of the function
q(t) = 5 sin(4t) + 3 cos(6t)
and how can the function be expressed in exponential Fourier series
form?
(c) Determine the average power Pz and Pq of signals z(t) and q(t), and
plot their power spectra versus harmonic frequency nωo in each case.
6.8 For the periodic function f (t) shown here, with exponential Fourier coeffi-
cients Fn , determine

[Figure: plot of periodic f(t); vertical scale 4, time axis marked at 3 and 4.]
(a) The period T and fundamental frequency ωo .
(b) The DC level F0 = c0/2.
(c) F1 , c1 and θ1 .
(d) F2 , c2 and θ2 .
(e) The average signal power Pf .
(f) THD.

6.9 The input signal of a linear system with frequency response H(ω) is

    f(t) = 1/2 + Σ_{n=1}^∞ (1/(nπ)) cos(2πnt + π/2).
The input f(t) and frequency response H(ω) are plotted as:

[Figure: plots of f(t) (unit vertical scale, time axis marked at 1 s) and H(ω)
(vertical scale 2, frequency axis marked at 9π rad/s).]

(a) What is the system output y(t) in response to f (t)?


(b) Calculate the average power of the input and output signals f (t) and
y(t). Hint: You can use either side of Parseval’s theorem to calculate
the average power of a periodic signal, but depending on the signal,
one side often is easier to evaluate than the other.

6.10 Show that the compact trigonometric Fourier series of f(t) shown in Problem
6.9 is

    f(t) = 1/2 + Σ_{n=1}^∞ (1/(nπ)) cos(2πnt + π/2).

6.11 Confirm the Fourier series

    y(t) = (4/π)[cos(t) − (1/3)cos(3t) + (1/5)cos(5t) − (1/7)cos(7t) + · · ·]

for the square wave described and discussed in Example 6.19 of Section 6.3.3.

6.12 The input–output relation for a system with input f(t) is given as

    y(t) = 6f(t) + f²(t) − f³(t).

Determine the total harmonic distortion (THD) of the system response to a
pure cosine input. Hint:

    cos³θ = (1/4)(3 cos θ + cos(3θ)).

Is the system linear or nonlinear? Explain.
6.13 Let f(t) be a real-valued periodic signal with fundamental frequency ωo .
We will approximate f(t) with another function

    fN(t) ≡ â0/2 + Σ_{n=1}^N [ ân cos(nωo t) + b̂n sin(nωo t) ],

where (real-valued) coefficients âm and b̂m are selected to minimize the
average power Pe in the error signal

    e(t) ≡ fN(t) − f(t).

That is, to determine the optimal âm and b̂m we minimize

    Pe ≡ (1/T) ∫_T e²(t) dt.

We will do so by setting ∂Pe/∂âm and ∂Pe/∂b̂m to zero and solving for âm and b̂m .

(a) Confirm that

    ∂Pe/∂â0 = (1/T) ∫_T e(t) dt = â0/2 − (1/T) ∫_T f(t) dt

and, for 1 ≤ m ≤ N,

    ∂Pe/∂âm = (2/T) ∫_T e(t) cos(mωo t) dt = âm − (2/T) ∫_T f(t) cos(mωo t) dt

and

    ∂Pe/∂b̂m = (2/T) ∫_T e(t) sin(mωo t) dt = b̂m − (2/T) ∫_T f(t) sin(mωo t) dt.

(b) Solve for the optimizing âm and b̂m , and show that the optimal fN(t)
can be rewritten as

    fN(t) = Σ_{n=−N}^N Fn e^{jnωo t}

with

    Fn = (1/T) ∫_T f(t) e^{−jnωo t} dt.

(c) Assuming that f (t) satisfies the Dirichlet Conditions, what is the value
of Pe in the limit as N → ∞? Explain.
7
Fourier Transform and LTI
System Response to
Energy Signals

7.1 FOURIER TRANSFORM PAIRS f (t) ↔ F(ω) AND THEIR PROPERTIES 226
7.2 FREQUENCY-DOMAIN DESCRIPTION OF SIGNALS 240
7.3 LTI CIRCUIT AND SYSTEM RESPONSE TO ENERGY SIGNALS 247
EXERCISES 255

In this chapter we will talk about the Beatles and Beethoven. (See Figure 7.11.) But
before that, even before beginning Section 7.1, it is worthwhile to take stock of where
we are and where we are going.
We earlier developed the phasor technique for finding the response of dissipative
linear time-invariant circuits to co-sinusoidal inputs. We then proceeded to add the
concept of Fourier series to our toolbox for the case with periodic inputs. We saw
that periodic signals are superpositions of harmonically related co-sinusoids (the
frequencies are integer multiples of some fundamental frequency ωo = 2π/T , where
T is the period of the signal). In practice, most of the signals we encounter in the real
world are not periodic. We shall call such signals aperiodic. For aperiodic signals,
we might ask whether there is some form of frequency representation. That is, can
we write an aperiodic signal f(t) as a sum of co-sinusoids? The answer is yes, but in
this case the frequencies are not harmonically related. In fact, most aperiodic signals
consist of a continuum of frequencies, not just a set of discrete frequencies. The
derivation that follows leads us to the frequency representation for aperiodic signals.

[Margin note: Frequency-domain description of aperiodic finite-energy signals:
Fourier and inverse-Fourier transforms; energy and energy spectrum; low-pass,
band-pass, and high-pass signals; bandwidth; linear system response to energy signals]

Begin by considering any periodic function f(t) with period T = 2π/ωo and Fourier
series

    f(t) = Σ_{n=−∞}^∞ Fn e^{jnωo t},

where

    Fn = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jnωo t} dt = (ωo/2π) ∫_{−π/ωo}^{π/ωo} f(t) e^{−jnωo t} dt.

This also can be expressed as

    f(t) = (1/2π) Σ_{n=−∞}^∞ F(nωo) e^{jnωo t} ωo ,

with

    F(ω) ≡ ∫_{−π/ωo}^{π/ωo} f(t) e^{−jωt} dt.

This expression for f(t) is valid no matter how large or small the fundamental
frequency ωo = 2π/T is, so long as the Dirichlet conditions are satisfied by f(t). In
the case of a vanishingly small ωo , however, the period T = 2π/ωo becomes infinite
and f(t) loses its periodic character. In that limit—as ωo → 0 and f(t) becomes
aperiodic—the series f(t) and function F(ω) converge to¹

    f(t) = (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω

and

    F(ω) = ∫_{−∞}^∞ f(t) e^{−jωt} dt,

respectively.
The function F (ω), just introduced, is known as the Fourier transform of the
function f (t). Similarly, the formula for computing f (t) from F (ω) is called the
inverse Fourier transform of F (ω). Notice that the inverse Fourier transform expresses
f (t) as a continuous sum (integral) of co-sinusoids (complex exponentials), where
F (ω) is the weighting applied to frequency ω. The foregoing formulas for the inverse

¹ Remember from calculus that, given a function, say, z(x), ∫_{−∞}^∞ z(x) dx ≡ lim_{Δx→0} Σ_{n=−∞}^∞ z(nΔx)Δx;
the definite integral of z(x) is nothing but an infinite sum of infinitely many infinitesimals z(nΔx)Δx,
amounting to the area under the curve z(x).

Fourier transform and the (forward) Fourier transform will be applied so frequently
throughout the remainder of this text that they should be committed to memory.
We will come to understand the implications of the Fourier transform as we
proceed in this chapter. None is more important than the following observation: We
have just seen that any aperiodic signal with a Fourier transform F (ω) is a weighted
linear superposition of exponentials

    e^{jωt} = cos(ωt) + j sin(ωt)

of all frequencies ω. So, since

    e^{jωt} −→ LTI −→ H(ω)e^{jωt},

and since LTI systems allow superposition, it follows that

    (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω −→ LTI −→ (1/2π) ∫_{−∞}^∞ H(ω)F(ω) e^{jωt} dω.

This indicates that for an LTI system with frequency response H (ω), and input f (t)
with Fourier transform F (ω), the corresponding system output y(t) is simply the
inverse Fourier transform of the product H (ω)F (ω). This relationship is illustrated
in Figure 7.1.
This chapter explores and expands upon the concepts just introduced, namely
the inverse Fourier transform representation of aperiodic signals f (t) and the input–
output relation for LTI systems shown in Figure 7.1. In Section 7.1 we will discuss the
existence conditions and general properties of the Fourier transform F (ω) and also
compile a table of Fourier transform pairs “f (t) ↔ F (ω)” for signals f (t) commonly
encountered in signal processing applications. In Section 7.2 we will examine the
concept of signal energy W and energy spectrum |F (ω)|2 and discuss a signal classi-
fication based on energy spectrum types. Finally, Section 7.3 will address applications
of the input–output rule stated in Figure 7.1.

Figure 7.1 Input–output relation for dissipative LTI system H(ω), with aperiodic
input f(t) = (1/2π)∫_{−∞}^∞ F(ω)e^{jωt} dω and output
y(t) = (1/2π)∫_{−∞}^∞ H(ω)F(ω)e^{jωt} dω. System H(ω) converts the Fourier
transform F(ω) of input f(t) to the Fourier transform Y(ω) of the output y(t),
according to the rule Y(ω) = H(ω)F(ω). See Section 7.1 for a description of the
"↔" notation.
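The relation in Figure 7.1 can be exercised numerically. In the sketch below (our own illustration; the filter choice is hypothetical) the input f(t) = e^{−t}u(t) has F(ω) = 1/(1 + jω), and the system H(ω) = 1/(1 + jω) gives Y(ω) = 1/(1 + jω)², whose inverse transform should be y(t) = t e^{−t}u(t) (entries 1 and 5 of Table 7.2). A midpoint-rule approximation of the inverse Fourier integral agrees:

```python
import cmath
import math

def y_numeric(t, wmax=1000.0, n=200000):
    # midpoint-rule approximation of (1/2*pi) * integral of Y(w) e^{jwt} dw
    dw = 2 * wmax / n
    total = 0.0
    for k in range(n):
        w = -wmax + (k + 0.5) * dw
        Y = 1 / (1 + 1j * w) ** 2            # Y(w) = H(w) F(w)
        total += (Y * cmath.exp(1j * w * t)).real * dw
    return total / (2 * math.pi)

t = 1.5
print(y_numeric(t), t * math.exp(-t))        # the two values nearly match
```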

7.1 Fourier Transform Pairs f(t) ↔ F(ω) and Their Properties²

Signals f(t) and their Fourier transforms F(ω) satisfying the relations

    f(t) = (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω   and   F(ω) = ∫_{−∞}^∞ f(t) e^{−jωt} dt

are said to be Fourier transform pairs. To indicate a Fourier transform pair, we will
use the notation

    f(t) ↔ F(ω).

For instance,

    e^{−|t|} ↔ 2/(1 + ω²),

as demonstrated later in this section. This pairing is unique, because there exists no
time signal f(t) other than e^{−|t|} having the Fourier transform F(ω) = 2/(1 + ω²).
For the Fourier transform of f(t) to exist, it is sufficient that f(t) be absolutely
integrable,³ that is, satisfy

    ∫_{−∞}^∞ |f(t)| dt < ∞,

and for the convergence⁴ of the inverse Fourier transform of F(ω) to f(t) it is sufficient
that f(t) satisfy the remaining Dirichlet conditions over any finite interval. However,
absolute integrability—which is satisfied by all signals that can be generated in the
lab⁵—is not a necessary condition for a Fourier pairing f(t) ↔ F(ω) to be true.

² The term "transform" generally is used in mathematics to describe a reversible conversion. If F(ω) is
a transform of f(t), then there exists an inverse process that converts F(ω) uniquely back into f(t).
³ Proof: Note that

    |F(ω)| = |∫_{−∞}^∞ f(t)e^{−jωt} dt| ≤ ∫_{−∞}^∞ |f(t)e^{−jωt}| dt = ∫_{−∞}^∞ |f(t)| dt.

Thus

    ∫_{−∞}^∞ |f(t)| dt < ∞

is sufficient to ensure that |F(ω)| < ∞, and a bounded F(ω) exists.
⁴ At discontinuity points of f(t), the inverse Fourier transform will converge to the midpoints between
the bottoms and tops of the jumps, as in the case of the Fourier series.
⁵ In the lab or in a radio station, we can generate only finite-duration signals having finite values. Thus,
for a lab signal f(t), the area under |f(t)| is finite and the signal is absolutely integrable.

Some signals f(t) that are not absolutely integrable still satisfy the relations

    f(t) = (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω   and   F(ω) = ∫_{−∞}^∞ f(t) e^{−jωt} dt

for a given F(ω), for instance, when F(ω) satisfies the Dirichlet conditions. In that
case, f(t) and F(ω) form a Fourier transform pair f(t) ↔ F(ω) just the same.
Therefore, the input–output relation for LTI systems shown in Figure 7.1 has a very
wide range of applicability (which we will explore in Sections 7.2 and 7.3).
Table 7.1 lists some of the general properties of Fourier transform pairs. Many
important Fourier transform pairs are listed in Table 7.2. In this section we will prove
some of the properties listed in Table 7.1 and verify some of the transform pairs
shown in Table 7.2. Detailed discussions of some of the entries in these tables will
be delayed until their specific uses are needed. You can ignore the right column of
Table 7.2 until it is needed in Chapter 9. Some of the entries in the left column of
Table 7.2 contain unfamiliar functions such as u(t), rect(t), sinc(t), and Δ(t). These
functions are defined next.

Unit-step u(t): The unit-step function u(t) is defined⁶ as

    u(t) ≡ 1, for t > 0,  and  u(t) ≡ 0, for t < 0,

and is plotted in Figure 7.2a. Figures 7.2b, 7.2c, and 7.2d show u(t − 2) (a shifted
unit step), u(1 − t) (a reversed and shifted unit step), and the signum, or sign, function

    sgn(t) ≡ 2u(t) − 1.

The unit step is not absolutely integrable, but it still has a Fourier transform that can
be defined (which will be identified in Chapter 9).

Unit rectangle rect(t): The unit rectangle rect(t) is defined as

    rect(t) ≡ 1, for |t| < 1/2,  and  rect(t) ≡ 0, for |t| > 1/2,

and is plotted in Figure 7.3a. Figures 7.3b, 7.3c and 7.3d show rect(t − 1/2) (a delayed
rect), rect(t/3) (a rect stretched by a factor of 3), and rect((t + 1)/2) (a rect stretched by a
factor of 2 and shifted by 1 unit to the left). The unit rectangle is absolutely integrable,
and we will derive its Fourier transform later in this section.
⁶ In general, two analog signals, say, f(t) and g(t), can be regarded equal so long as their values differ
by a finite amount at no more than a finite number (or perhaps a countably infinite number) of discrete
points in time t. In keeping with this definition of equivalence—applicable to Fourier series as well as
transforms—there is no need to specify the value of the unit-step u(t) at the single point t = 0. This value
can be any finite number (e.g., 0.5), and its choice will not affect any of our results. The situation is similar
for the points of discontinuity of the unit rectangle rect(t) at t = ±0.5.

Name               | Condition                          | Property
-------------------|------------------------------------|---------------------------------------------
1  Amplitude scaling | f(t) ↔ F(ω), constant K          | Kf(t) ↔ KF(ω)
2  Addition          | f(t) ↔ F(ω), g(t) ↔ G(ω), · · ·  | f(t) + g(t) + · · · ↔ F(ω) + G(ω) + · · ·
3  Hermitian         | Real f(t) ↔ F(ω)                 | F(−ω) = F*(ω)
4  Even              | Real and even f(t)               | Real and even F(ω)
5  Odd               | Real and odd f(t)                | Imaginary and odd F(ω)
6  Symmetry          | f(t) ↔ F(ω)                      | F(t) ↔ 2πf(−ω)
7  Time scaling      | f(t) ↔ F(ω), real c              | f(ct) ↔ (1/|c|) F(ω/c)
8  Time shift        | f(t) ↔ F(ω)                      | f(t − to) ↔ F(ω)e^{−jωto}
9  Frequency shift   | f(t) ↔ F(ω)                      | f(t)e^{jωo t} ↔ F(ω − ωo)
10 Modulation        | f(t) ↔ F(ω)                      | f(t)cos(ωo t) ↔ (1/2)F(ω − ωo) + (1/2)F(ω + ωo)
11 Time derivative   | Differentiable f(t) ↔ F(ω)       | df/dt ↔ jωF(ω)
12 Freq derivative   | f(t) ↔ F(ω)                      | −jtf(t) ↔ dF(ω)/dω
13 Time convolution  | f(t) ↔ F(ω), g(t) ↔ G(ω)         | f(t) ∗ g(t) ↔ F(ω)G(ω)
14 Freq convolution  | f(t) ↔ F(ω), g(t) ↔ G(ω)         | f(t)g(t) ↔ (1/2π) F(ω) ∗ G(ω)
15 Compact form      | Real f(t)                        | f(t) = (1/2π) ∫_0^∞ 2|F(ω)| cos(ωt + ∠F(ω)) dω
16 Parseval, Energy W| f(t) ↔ F(ω)                      | W ≡ ∫_{−∞}^∞ |f(t)|² dt = (1/2π) ∫_{−∞}^∞ |F(ω)|² dω

Table 7.1 Important properties of the Fourier transform.



f(t) ↔ F(ω)

Left-hand column (energy signals):

1  e^{−at}u(t) ↔ 1/(a + jω),  a > 0
2  e^{at}u(−t) ↔ 1/(a − jω),  a > 0
3  e^{−a|t|} ↔ 2a/(a² + ω²),  a > 0
4  a²/(a² + t²) ↔ πa e^{−a|ω|},  a > 0
5  t e^{−at}u(t) ↔ 1/(a + jω)²,  a > 0
6  tⁿ e^{−at}u(t) ↔ n!/(a + jω)^{n+1},  a > 0
7  rect(t/τ) ↔ τ sinc(ωτ/2)
8  sinc(Wt) ↔ (π/W) rect(ω/(2W))
9  Δ(t/τ) ↔ (τ/2) sinc²(ωτ/4)
10 sinc²(Wt/2) ↔ (2π/W) Δ(ω/(2W))
11 e^{−at} sin(ωo t)u(t) ↔ ωo/[(a + jω)² + ωo²],  a > 0
12 e^{−at} cos(ωo t)u(t) ↔ (a + jω)/[(a + jω)² + ωo²],  a > 0
13 e^{−t²/(2σ²)} ↔ σ√(2π) e^{−σ²ω²/2}

Right-hand column (power signals and distributions):

14 δ(t) ↔ 1
15 1 ↔ 2πδ(ω)
16 δ(t − to) ↔ e^{−jωto}
17 e^{jωo t} ↔ 2πδ(ω − ωo)
18 cos(ωo t) ↔ π[δ(ω − ωo) + δ(ω + ωo)]
19 sin(ωo t) ↔ jπ[δ(ω + ωo) − δ(ω − ωo)]
20 cos(ωo t)u(t) ↔ (π/2)[δ(ω − ωo) + δ(ω + ωo)] + jω/(ωo² − ω²)
21 sin(ωo t)u(t) ↔ (jπ/2)[δ(ω + ωo) − δ(ω − ωo)] + ωo/(ωo² − ω²)
22 sgn(t) ↔ 2/(jω)
23 u(t) ↔ πδ(ω) + 1/(jω)
24 Σ_{n=−∞}^∞ δ(t − nT) ↔ (2π/T) Σ_{n=−∞}^∞ δ(ω − n(2π/T))
25 Σ_{n=−∞}^∞ f(t)δ(t − nT) ↔ (1/T) Σ_{n=−∞}^∞ F(ω − n(2π/T))

Table 7.2 Important Fourier transform pairs. The left-hand column includes only
energy signals f(t) (see Section 7.2), while the right-hand column includes power
signals and distributions (covered in Chapter 9).

Figure 7.2 (a) The unit step u(t), (b) u(t − 2), (c) u(1 − t), and
(d) sgn(t) = 2u(t) − 1.

Figure 7.3 (a) The unit rectangle rect(t), (b) rect(t − 1/2), (c) rect(t/3), and
(d) rect((t + 1)/2).

Sinc function sinc(t): The sinc function sinc(t) is defined as

    sinc(t) ≡ sin(t)/t

and is plotted in Figure 7.4. Notice that sinc(t) is zero (crosses the horizontal axis)
for t = kπ, where k is any integer but 0. The use of l'Hôpital's rule⁷ indicates
that sinc(0) = 1. It can be shown that the sinc function is not absolutely integrable;
however, its Fourier transform exists and it will be determined later in this section.

Figure 7.4 sinc(t) versus t. Notice that sinc(t) = 0 at t = kπ s, where k is any
integer but 0.

Unit triangle Δ(t): The unit triangle function Δ(t) is defined as

    Δ(t) ≡ 1 − 2|t|, for |t| ≤ 1/2,  and  Δ(t) ≡ 0, otherwise,

and is plotted in Figure 7.5a. A shifted unit-triangle function is shown in Figure 7.5b.

Figure 7.5 (a) Δ(t), and (b) a shifted unit triangle Δ(t − 1/2).

⁷ lim_{t→0} sinc(t) = [ (d/dt) sin(t) |_{t=0} ] / [ (d/dt) t |_{t=0} ] = cos(0)/1 = 1.

7.1.1 Fourier transform examples

Example 7.1
Determine the Fourier transform of

    f(t) = e^{−at}u(t),

where a > 0 is a constant. The function is plotted in Figure 7.6a for the case
a = 1. Notice that the u(t) multiplier of e^{−at} makes f(t) zero for t < 0.
Solution Substituting e^{−at}u(t) for f(t) in the Fourier transform formula
gives

    F(ω) = ∫_{−∞}^∞ e^{−at}u(t) e^{−jωt} dt = ∫_0^∞ e^{−(a+jω)t} dt
         = [ e^{−(a+jω)t} / (−(a + jω)) ]_0^∞ = (0 − 1)/(−(a + jω)) = 1/(a + jω).

Therefore,

    e^{−at}u(t) ↔ 1/(a + jω),

which is the first entry in Table 7.2. The magnitude and angle of

    F(ω) = 1/(a + jω)

Figure 7.6 (a) f(t) = e^{−t}u(t), (b) |F(ω)| versus ω, and (c) angle of F(ω) in degrees
versus ω.

are plotted in Figures 7.6b and 7.6c for the case a = 1. From the plot of
|F(ω)|, we see that f(t) contains all frequencies ω, but that it has more
low-frequency content than it does high-frequency content. Notice that for
a < 0, the preceding integral is not convergent, because in that case e^{−at} →
∞ as t → ∞. Therefore, for a < 0, the function e^{−at}u(t) does not have a
Fourier transform (and it is not absolutely integrable). For the special case
of a = 0, e^{−at}u(t) = u(t) is not absolutely integrable, yet it still is possible
to define a Fourier transform, as already mentioned. (See Chapter 9.)
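The closed form can be spot-checked by discretizing the Fourier integral directly (a sketch of ours, with a = 1 and one arbitrary test frequency):

```python
import cmath
import math

def F_numeric(w, a=1.0, tmax=40.0, n=100000):
    # midpoint-rule approximation of the integral of e^{-at} e^{-jwt} over t > 0
    dt = tmax / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * dt
        total += math.exp(-a * t) * cmath.exp(-1j * w * t) * dt
    return total

w = 3.0
print(F_numeric(w), 1 / (1 + 1j * w))        # the two values nearly match
```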

Example 7.2
Determine the Fourier transform G(ω) of the function g(t) = e^{at}u(−t),
where a > 0 is a constant. The function is plotted in Figure 7.7 for the case
a = 1.

Figure 7.7 g(t) = e^t u(−t).

Solution Notice that g(t) = f(−t), where f(t) = e^{−at}u(t) ↔ 1/(a + jω),
a > 0. Therefore, we can use the time-scaling property from Table 7.1,
with c = −1. Thus,

    G(ω) = (1/|−1|) F(ω/(−1)) = 1/(a − jω).

Hence, we have a new Fourier transform pair,

    e^{at}u(−t) ↔ 1/(a − jω),

which is entry 2 in Table 7.2.

Example 7.3
Determine the Fourier transform P(ω) of the function p(t) = e^{−a|t|}, where
a > 0 is a constant. The function is plotted in Figure 7.8a for the case a = 1.

Figure 7.8 (a) p(t) = e^{−|t|}, and (b) P(ω) versus ω.

Solution Comparing Figures 7.6a, 7.7, and 7.8a, we note that p(t) =
f(t) + g(t). Therefore, using the addition property from Table 7.1, we have

    P(ω) = F(ω) + G(ω) = 1/(a + jω) + 1/(a − jω)
         = 2Re[1/(a + jω)] = 2a/(a² + ω²),

which is plotted in Figure 7.8b for the case a = 1. Hence, for a > 0,

    e^{−a|t|} ↔ 2a/(a² + ω²),

which is entry 3 in Table 7.2. This is a case where the Fourier transform
turns out to be real, because the time-domain function is real and even
(property 4 in Table 7.1).
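The key algebraic step, that the two conjugate terms sum to twice the real part, survives a quick numeric sanity check (our own, with a = 1):

```python
a = 1.0
for w in (0.0, 0.5, 2.0, 10.0):
    lhs = 1 / (a + 1j * w) + 1 / (a - 1j * w)   # F(w) + G(w)
    rhs = 2 * a / (a ** 2 + w ** 2)             # 2 Re[1/(a + jw)]
    assert abs(lhs - rhs) < 1e-12 and abs(lhs.imag) < 1e-12
```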

Example 7.4
Using the symmetry property from Table 7.1 and the result of Example 7.3,
determine the Fourier transform Q(ω) of the function

    q(t) = a²/(a² + t²).

Solution The symmetry property 6 from Table 7.1 states that the Fourier
transform of a Fourier transform gives back the original waveform, except
reversed and scaled by 2π. Applying this to the result

    e^{−a|t|} ↔ 2a/(a² + ω²)

for a > 0 of Example 7.3, we can write

    2a/(a² + t²) ↔ 2πe^{−a|−ω|} = 2πe^{−a|ω|}.

Therefore,

    a²/(a² + t²) ↔ πa e^{−a|ω|}

for a > 0, which is entry 4 in Table 7.2. Hence,

    Q(ω) = πa e^{−a|ω|}

for a > 0. If the sign of a is changed, then q(t) is not altered and no changes
can result to the values of Q(ω). To enforce that, we write

    Q(ω) = π|a| e^{−|aω|},

which is valid for all a.

The Fourier transform properties listed in Table 7.1 can be derived from the
Fourier transform and inverse Fourier transform definitions, as illustrated in two cases
in the next two examples. Notice that all of the Fourier transforms calculated in
Examples 7.1 through 7.4 are consistent with property 3,

F (−ω) = F ∗ (ω),

which is valid for all real-valued f (t). After reading Examples 7.5 and 7.6, try to
derive property 3 on your own.

Example 7.5
Confirm the symmetry property listed in Table 7.1.
Solution To confirm this property, it is sufficient to show that, given the
inverse Fourier transform integral

    f(t) = (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω,

it follows that the Fourier transform of function F(t) is

    ∫_{−∞}^∞ F(t) e^{−jωt} dt = 2πf(−ω).

The first integral given can be rewritten (after ω is changed to the dummy
variable x) as

    f(t) = (1/2π) ∫_{−∞}^∞ F(x) e^{jxt} dx.

Replacing t with −ω on both sides and multiplying the result by 2π, we
obtain

    2πf(−ω) = 2π (1/2π) ∫_{−∞}^∞ F(x) e^{jx(−ω)} dx = ∫_{−∞}^∞ F(x) e^{−jωx} dx
             = ∫_{−∞}^∞ F(t) e^{−jωt} dt.

Hence,

    ∫_{−∞}^∞ F(t) e^{−jωt} dt = 2πf(−ω),

as claimed.

Example 7.6
Confirm the time-derivative property listed in Table 7.1.
Solution Given the inverse transform

    f(t) = (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω,

it follows that

    df/dt = (d/dt) (1/2π) ∫_{−∞}^∞ F(ω) e^{jωt} dω = (1/2π) ∫_{−∞}^∞ {jωF(ω)} e^{jωt} dω.

This expression indicates that df/dt is the inverse Fourier transform of jωF(ω),
as stated by the time-derivative property.

Example 7.7
Using the frequency-derivative property from Table 7.1 and entry 1 from
Table 7.2, confirm entry 5 in Table 7.2.
Solution Entry 1 in Table 7.2 is

    e^{−at}u(t) ↔ 1/(a + jω), a > 0.

The frequency derivative of the right side is

    (d/dω) 1/(a + jω) = (d/dω)(a + jω)^{−1} = −(a + jω)^{−2} j = −j/(a + jω)²,

which, according to the frequency-derivative property, is the Fourier transform of

    −jt e^{−at}u(t).

Thus,

    t e^{−at}u(t) ↔ 1/(a + jω)², a > 0,

which is entry 5 in Table 7.2.

We next will determine the Fourier transforms of rect(t/τ) and sinc(Wt). The
results are of fundamental importance in signal processing, as we will see later on.

Example 7.8
Determine the Fourier transform of f(t) = rect(t/τ).
Solution Since rect(t/τ) equals 1 for −τ/2 < t < τ/2 and 0 for |t| > τ/2, it
follows that

    F(ω) = ∫_{−∞}^∞ rect(t/τ) e^{−jωt} dt = ∫_{−τ/2}^{τ/2} e^{−jωt} dt
         = (e^{−jωτ/2} − e^{jωτ/2})/(−jω) = sin(ωτ/2)/(ω/2).

This result usually is written as

    F(ω) = sin(ωτ/2)/(ω/2) = τ sin(ωτ/2)/(ωτ/2) = τ sinc(ωτ/2).

Hence,

    rect(t/τ) ↔ τ sinc(ωτ/2),

which is entry 7 in Table 7.2.⁸
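A Riemann-sum check of this pair (a sketch of ours, with τ = 2; by symmetry the sin(ωt) part of e^{−jωt} integrates to zero over the support of the rect, so only the cosine part is summed):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def rect_transform_numeric(w, tau=2.0, n=100000):
    # midpoint-rule integral of cos(w t) over -tau/2 < t < tau/2
    dt = tau / n
    return sum(math.cos(w * (-tau / 2 + (k + 0.5) * dt)) * dt for k in range(n))

tau = 2.0
for w in (0.0, 1.0, math.pi):
    assert abs(rect_transform_numeric(w, tau) - tau * sinc(w * tau / 2)) < 1e-6
```

Note that at ω = π the transform value τ sinc(ωτ/2) = 2 sinc(π) is exactly zero, the first null of the sinc.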

Example 7.9
Determine the Fourier transform of g(t) = sinc(Wt).
Solution Applying the symmetry property to the result of Example 7.8,
we write

    τ sinc(tτ/2) ↔ 2π rect(−ω/τ).

⁸ It can be shown that the inverse Fourier transform of τ sinc(ωτ/2) gives rect(t/τ), with values of 1/2 at the
two points of discontinuity. This is a standard feature with Fourier transforms. When either a forward or
inverse Fourier transform produces a waveform with discontinuities, the values at the points of discontinuity
are always the midpoints between the bottoms and tops of the jumps.

Replacing τ with 2W and using the fact that rect is an even function yields

    sinc(Wt) ↔ (π/W) rect(ω/(2W)),

which is entry 8 in Table 7.2.⁹

Four functions, rect(t/τ) for τ = 1 (solid) and 2 (dashed), and sinc(Wt) for
W = 2π (solid) and π (dashed), are plotted in Figure 7.9a. Their Fourier transforms
τ sinc(ωτ/2) and (π/W) rect(ω/(2W)) are plotted in Figure 7.9b. Notice that the
narrower signals (solid) in Figure 7.9a are paired with the broader Fourier transforms
(also solid) in Figure 7.9b, while the broader signals (dashed) correspond to the
narrower Fourier transforms. This inverse relationship always holds with Fourier
transform pairs, as indicated by property 7 in Table 7.1. Narrower signals have broader
Fourier transforms because narrower signals must have more high-frequency content
in order to exhibit fast transitions in time. Likewise, broader signals are smoother and
have more low-frequency content. Try identifying all the Fourier transform pairs in
Figure 7.9 and mark them for future reference.

Figure 7.9 Four signals f(t) and their Fourier transforms F(ω) are shown in (a)
versus t (s) and (b) versus ω (rad/s), respectively.

⁹ It can be shown that sinc(Wt) is not absolutely integrable, but yet we have found its Fourier transform
using the symmetry property. An alternative way of confirming the Fourier transform of sinc(Wt) is to
show that the inverse Fourier transform of (π/W) rect(ω/(2W)) is sinc(Wt)—see Exercise Problem 7.4.

Example 7.10
Consider a function f(t) defined as the time derivative of Δ(t/τ), which also
can be expressed as

f(t) ≡ (d/dt) Δ(t/τ) = (2/τ) rect((t + τ/4)/(τ/2)) − (2/τ) rect((t − τ/4)/(τ/2))

in terms of shifted and scaled rect functions. (See Exercise 7.5.) Using this
relationship, verify10 the Fourier transform of Δ(t/τ) specified in Table 7.2.
Solution Using the time-shift and time-scaling properties of the Fourier
transform and

rect(t/τ) ↔ τ sinc(ωτ/2),

we first note that

rect((t ± τ/4)/(τ/2)) = rect(2(t ± τ/4)/τ) ↔ (τ/2) sinc(ωτ/4) e^{±jωτ/4}.

Thus, the Fourier transform of

f(t) = (d/dt) Δ(t/τ)

is

F(ω) = sinc(ωτ/4)(e^{jωτ/4} − e^{−jωτ/4}) = j2 sin(ωτ/4) sinc(ωτ/4).

Subsequently, using the time-derivative property of the Fourier transform,
we find

Δ(t/τ) ↔ F(ω)/(jω) = (j2/(jω)) sin(ωτ/4) sinc(ωτ/4) = (τ/2) sinc²(ωτ/4),

in agreement with item 9 in Table 7.2.
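The pair just derived is easy to spot-check numerically. Below is a small sketch (ours, using NumPy and SciPy, which the text does not assume) that compares direct numerical evaluation of the Fourier integral of the unit triangle Δ(t/τ) with the formula (τ/2) sinc²(ωτ/4):

```python
import numpy as np
from scipy.integrate import quad

def tri(t, tau):
    """Unit triangle: peak 1 at t = 0, zero for |t| >= tau/2."""
    return max(1.0 - abs(t) / (tau / 2), 0.0)

def sinc(x):
    """Unnormalized sinc, sin(x)/x, as used in the text."""
    return np.sinc(x / np.pi)   # np.sinc(u) = sin(pi*u)/(pi*u)

tau = 2.0
for omega in [0.5, 1.0, 3.0, 7.0]:
    # The integrand tri(t) e^{-jwt} is real and even, so only the cosine part survives:
    F_num, _ = quad(lambda t: tri(t, tau) * np.cos(omega * t), -tau / 2, tau / 2)
    F_formula = (tau / 2) * sinc(omega * tau / 4) ** 2
    assert abs(F_num - F_formula) < 1e-7
```

The same check works for other values of τ and ω, including ω = 0, where the transform equals the triangle's area τ/2.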

Item 10 in Table 7.2, showing the Fourier transform of sinc²(Wt/2), can be obtained
from item 9 using the symmetry property of the Fourier transform.

10 Another approach to the same problem (less tricky, but more tedious) is illustrated in Example 7.11.

Example 7.11
In Example 7.10 the Fourier transform of Δ(t/τ) was obtained by a sequence
of clever tricks related to some of the properties of the Fourier trans-
form. Alternatively, we can get the same result in a straightforward way
by performing the Fourier transform integral

∫_{−∞}^{∞} Δ(t/τ) e^{−jωt} dt = 2 ∫_0^{∞} Δ(t/τ) cos(ωt) dt = (2/ω) ∫_0^{τ/2} Δ(t/τ) (d/dt)[sin(ωt)] dt,

where we make use of the fact that Δ(t/τ) is an even function that vanishes
for t > τ/2. Integrating by parts, we have

∫_0^{τ/2} Δ(t/τ) (d/dt)[sin(ωt)] dt = 0 − ∫_0^{τ/2} (−1/(τ/2)) sin(ωt) dt = (2/τ) ∫_0^{τ/2} sin(ωt) dt
= −(2/(τω)) cos(ωt)|_0^{τ/2} = (2/(τω))(1 − cos(ωτ/2)) = (4/(τω)) sin²(ωτ/4).

Hence,

∫_{−∞}^{∞} Δ(t/τ) e^{−jωt} dt = (2/ω) ∫_0^{τ/2} Δ(t/τ) d[sin(ωt)]
= (2/ω) · (4/(τω)) sin²(ωτ/4) = (τ/2) sinc²(ωτ/4),

in agreement with item 9 in Table 7.2.

7.2 Frequency-Domain Description of Signals

7.2.1 Signal energy and Parseval’s theorem


The concept of average power is not applicable to aperiodic signals, because aperiodic
signals lack a standard time measure, like a period T, for normalizing the signal energy
in a meaningful way. Instead, the energetics (and cost) of aperiodic signals is described
by the total signal energy

W ≡ ∫_{−∞}^{∞} |f(t)|² dt.

Similar to the case with periodic signals and the Fourier series, it is possible to calculate
the integral of the squared magnitude of an aperiodic signal in terms of its Fourier
representation. The difference in the aperiodic case is that the range of integration
extends from −∞ to +∞, as shown, not across just a single period, and we have a
Fourier transform rather than a set of Fourier series coefficients.

The corresponding Parseval's theorem—also known as Rayleigh's theorem—for
the aperiodic case states that

W ≡ ∫_{−∞}^{∞} |f(t)|² dt = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω.

This identity11 is the last entry in Table 7.1, and it provides an alternative means for
calculating the energy W in a signal. Parseval's theorem holds for all signals with
finite W (i.e., W < ∞), which are referred to as energy signals. The entire left column
of Table 7.2 is composed of such signals f(t) with finite energy W.
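For a concrete check, take f(t) = e^{−t}u(t) ↔ F(ω) = 1/(1 + jω) (entry 1 in Table 7.2) and evaluate both sides of Parseval's theorem by numerical integration. The sketch below is ours, using SciPy, which the text does not assume:

```python
import numpy as np
from scipy.integrate import quad

# Time-domain energy of f(t) = e^{-t} u(t): integral of e^{-2t} over t >= 0
W_time, _ = quad(lambda t: np.exp(-2 * t), 0, np.inf)

# Frequency-domain energy: |F(w)|^2 = 1/(1 + w^2) for F(w) = 1/(1 + jw)
W_freq, _ = quad(lambda w: 1.0 / (1 + w**2), -np.inf, np.inf)
W_freq /= 2 * np.pi

assert abs(W_time - 0.5) < 1e-9      # exact energy is 1/2
assert abs(W_time - W_freq) < 1e-9   # Parseval: both sides agree
```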
All aperiodic signals that can be generated in a lab or at a radio station are neces-
sarily energy signals, because generating a signal with infinite W is not physically
possible (since the cost for the radio station would be unbounded). Parseval’s theorem
indicates that the energy of a signal is spread across the frequency space in a manner
described by |F (ω)|2 . Hence, in analogy with the power spectrum |Fn |2 (for periodic
signals), we refer to |F (ω)|2 as the energy spectrum. The notion of energy spectrum Energy
provides a useful basis for signal classification and bandwidth definitions, which we spectrum
will discuss next. |F(ω)|2

7.2.2 Signal classification based on energy spectrum |F(ω)|2


Figures 7.10a and 7.10b show two different types of energy spectra |F1(ω)|² and
|F2(ω)|² associated with possible aperiodic signals. The signal spectrum |F1(ω)|²
shown in Figure 7.10a is large for small values of ω and then vanishes as ω increases,
which indicates that the energy in signal f1(t) ↔ F1(ω) is concentrated at low
frequencies. Thus, f1(t) has the spectral characteristics of a signal expected at the
output of a low-pass filter and is said to be a low-pass signal. The spectrum |F2(ω)|²
of signal f2(t) vanishes at both low and high frequencies, and therefore the energy in
f2(t) ↔ F2(ω) is concentrated in an intermediate frequency band where |F2(ω)|² is
relatively large. Thus, f2(t) is called a band-pass signal.
Classification of signals according to their spectral shapes has an important and
useful purpose. When we design a radio transmitter circuit, for instance, we have no
knowledge of the exact signal the circuit will be transmitting during actual use; yet
we know that the microphone or CD player output will be a low-pass signal in the
audio frequency range that we somehow must convert into a higher-frequency band-
pass signal before sending it to the transmitting antenna. (See Chapter 8.) Thus, we
design and build circuits and systems to process signals having certain types of spectra

11 Proof of Parseval's theorem: Assuming that F(ω) exists,

∫_{−∞}^{∞} |f(t)|² dt = ∫_{−∞}^{∞} f(t)f*(t) dt = ∫_{t=−∞}^{∞} f(t) [(1/2π) ∫_{−∞}^{∞} F(ω)e^{jωt} dω]* dt
= (1/2π) ∫_{ω=−∞}^{∞} F*(ω) [∫_{−∞}^{∞} f(t)e^{−jωt} dt] dω = (1/2π) ∫_{−∞}^{∞} F*(ω)F(ω) dω = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω.

Figure 7.10 Energy spectra of possible (a) low-pass and (b) band-pass signals.

and spectral widths (see the next section), without requiring detailed knowledge of
the signal. For instance, a transmitter that we build should work equally well for
Beethoven’s 5th and the Beatles’ “A Hard Day’s Night,” even though the two signals
have little in common other than their energy spectra shown in Figures 7.11a and
7.11b. Most audio signal spectra have the same general shape (low-pass, up to about
20 kHz or less) as the example audio spectra shown in Figure 7.11.

Figure 7.11 The normalized energy spectrum |F(ω)|²/W of approximately the first 6
s of (a) Beethoven's "Symphony No. 5 in C minor" ("da-da-da-daaa . . . ") and (b)
the Beatles' "A Hard Day's Night" ("BWRRAAANNG! It's been a ha˜rd da˜y's
night . . . "), plotted against frequency f = ω/2π in the range f ∈ [0, 22050] Hz. Note
that the energy spectrum (i.e., signal energy per unit frequency) is negligible in
each case for f > 15 kHz (1 Hz = 2π rad/s). The procedure for calculating these
normalized energy spectra is described in Section 9.4.

7.2.3 Signal bandwidth

Signal bandwidth describes the width of the energy spectrum of low-pass and band-
pass signals. We next will discuss how signal bandwidth can be defined and deter-
mined.

Low-pass signal bandwidth: By convention, the bandwidth of a low-pass signal
f(t) corresponds to a positive frequency

ω = Ω = 2πB

beyond which the energy spectrum |F(ω)|² is very small. For example, for the audio
signals with the energy spectra shown in Figure 7.11, we might define the bandwidth to
be B ≈ 15 kHz, since for f = ω/2π > 15 kHz the energy spectra |F(ω)|² are negligible.
There are several standardized bandwidth definitions that are appropriate for
quantitative work. One of them, the 3-dB bandwidth, requires that

|F(Ω)|²/|F(0)|² = 1/2,

or, equivalently,

10 log(|F(Ω)|²/|F(0)|²) = −3 dB,

meaning that the bandwidth Ω = 2πB is the frequency where the energy spectrum
|F(ω)|² falls to one-half the spectral value |F(0)|² at DC. Another definition for
Ω = 2πB requires that

(1/2π) ∫_{−Ω}^{Ω} |F(ω)|² dω = rW,

where r is some number such as 0.95 or 0.99 that is close to 1. Using this criterion
for Ω with r = 0.95, for example, we refer to Ω = 2πB as the 95% bandwidth of
a low-pass signal f(t), because with r = 0.95, the frequency band |ω| ≤ Ω rad/s, or
|f| ≤ B Hz, contains 95% of the total signal energy W.
The bandwidths Ω = 2πB, just defined, characterize the half-width of the signal
energy spectrum curve |F(ω)|² (i.e., the full width over only positive ω). This conven-
tion of associating the signal bandwidth with the width of the spectrum over just posi-
tive frequencies ω is sensible, because, for real-valued f(t) (with F(−ω) = F*(ω)),
the inverse Fourier transform can be expressed as (see Exercise Problem 7.9)

f(t) = (1/2π) ∫_0^{∞} 2|F(ω)| cos(ωt + ∠F(ω)) dω.

So, clearly, a real-valued f(t) can be written as a superposition of co-sinusoids
2|F(ω)| cos(ωt + ∠F(ω)) with frequencies ω ≥ 0, and therefore, only nonnegative
frequencies are pertinent.12
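This representation can be tested numerically. For f(t) = e^{−t}u(t) ↔ F(ω) = 1/(1 + jω), the integrand is |F(ω)| cos(ωt + ∠F(ω)) = Re{F(ω)e^{jωt}} = (cos ωt + ω sin ωt)/(1 + ω²), and SciPy's Fourier-weighted quadrature (weight='cos'/'sin') handles the semi-infinite oscillatory integrals well. A sketch (the helper name f_from_spectrum is ours):

```python
import numpy as np
from scipy.integrate import quad

# f(t) = e^{-t} u(t)  <->  F(w) = 1/(1 + jw)
# |F(w)| cos(wt + angle F(w)) = (cos(wt) + w sin(wt)) / (1 + w^2)
def f_from_spectrum(t):
    Ic, _ = quad(lambda w: 1.0 / (1 + w**2), 0, np.inf, weight='cos', wvar=t)
    Is, _ = quad(lambda w: w / (1 + w**2), 0, np.inf, weight='sin', wvar=t)
    return (Ic + Is) / np.pi

for t in [0.5, 1.0, 3.0]:
    assert abs(f_from_spectrum(t) - np.exp(-t)) < 1e-6
```

(At t = 0, where f(t) jumps, the integral converges to the midpoint value 1/2, as usual for Fourier representations.)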

Example 7.12
The Fourier transform of signal

f(t) = rect(t/τ)

is

F(ω) = τ sinc(ωτ/2).

Hence, the corresponding energy spectrum is

|F(ω)|² = τ² sinc²(ωτ/2),

which is shown in Figure 7.12 for τ = 1 s. Since |F(ω)|² = τ² sinc²(ωτ/2)
has relatively small values for ω > 2π/τ, where ω = 2π/τ is the first zero-
crossing of sinc(ωτ/2), we might decide to define

Ω = 2π/τ

(or B = 1/τ) as the signal bandwidth. Show that this choice for Ω = 2πB
corresponds to approximately the 90% bandwidth of rect(t/τ).

Figure 7.12 Energy spectrum |F(ω)|² of the low-pass signal f(t) = rect(t) versus
ω in rad/s.

12
There are real systems, however, containing signals that are modeled as being complex. For example,
certain communications transmitters and receivers have two channels, and it is mathematically convenient
to represent the signals in the two channels as the real and imaginary parts of a single complex signal. In
this case the Fourier transform does not have the usual symmetry, and it is preferred to define a two-sided
bandwidth covering both negative and positive frequencies.

Solution Given that f(t) = rect(t/τ), the signal energy is

W = ∫_{−∞}^{∞} |rect(t/τ)|² dt = ∫_{−τ/2}^{τ/2} dt = τ.

Hence,

Ω = 2π/τ

is the r% bandwidth of the signal, where r satisfies the condition

(1/2π) ∫_{−2π/τ}^{2π/τ} τ² sinc²(ωτ/2) dω = rτ.

Thus, after a change of variables with

x ≡ ωτ/2,

and using the fact that the integrand is even, we find

r = (2/π) ∫_{x=0}^{π} sinc²(x) dx ≈ 0.903,

where the numerical value on the right is obtained by numerical integration.
According to this result, Ω = 2π/τ, corresponding to the frequency of the first
null in the energy spectrum of the signal rect(t/τ), is the 90.3% bandwidth
of the signal.

The result of Example 7.12 is well worth remembering: the 90% bandwidth of
a pulse of duration τ s is B = 1/τ Hz. Hence, a 1 μs pulse has a 1 MHz (i.e., 10^6 Hz)
bandwidth. To view a 1 μs pulse on an oscilloscope, the scope bandwidth needs to be
larger than 1 MHz.
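The numerical integration quoted in Example 7.12 takes only a few lines with SciPy (a sketch, not part of the text; the guard at x = 0 handles the removable singularity of sinc):

```python
import numpy as np
from scipy.integrate import quad

def sinc2(x):
    # sinc(x)^2 with sinc(x) = sin(x)/x and sinc(0) = 1
    return (np.sin(x) / x) ** 2 if x != 0 else 1.0

r = (2 / np.pi) * quad(sinc2, 0, np.pi)[0]
print(round(r, 3))   # 0.903, the 90.3% bandwidth figure of Example 7.12
```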

Band-pass signal bandwidth: Recall that band-pass signals have energy spectra
with shapes similar to those in Figure 7.10b. Because, for real-valued f(t), we can
write

f(t) = (1/2π) ∫_0^{∞} 2|F(ω)| cos(ωt + ∠F(ω)) dω,

the bandwidth Ω = 2πB of such signals is once again defined to characterize the
width of the energy spectrum |F(ω)|² over only positive frequencies ω.
For instance, the 95% bandwidth of a band-pass signal is defined to be

Ω = ωu − ωl,

with ωu > ωl > 0, such that

(1/2π) ∫_{ωl}^{ωu} |F(ω)|² dω = 0.95 · (W/2).

The right-hand side of this constraint is 95% of just half the signal energy W, because
the integral on the left is computed over only positive ω.

Example 7.13
Determine the 95% bandwidths of the signals f(t) and g(t) with the energy
spectra shown in Figures 7.13a and 7.13c.
Solution According to Parseval's theorem, the energy W of signal f(t)
is simply the area under the |F(ω)|² curve, scaled by 1/2π. Using the formula
for the area of a triangle, we have

W = (1/2π) · (2π × 1)/2 = 1/2.

To determine the 95% bandwidth, it is simplest to compute the total signal
energy outside the signal bandwidth (i.e., |ω| > Ω) and to then set this equal
to 5% of W, or 0.05W = 0.025.

Figure 7.13 Energy spectrum of (a) a low-pass signal f(t) ↔ F(ω), (b) portion of
the energy spectrum in (a) lying outside the bandwidth Ω, and (c) energy
spectrum of a band-pass signal g(t) ↔ G(ω).

The energy outside |ω| > Ω equals 1/2π times the combined areas of the right
and left tips of the |F(ω)|² triangle shown in Figure 7.13b, which is

(1/2π) · (π − Ω) · (π − Ω)/π.

Setting this quantity equal to 0.025 yields

(π − Ω)² = 0.05π²,

so that

Ω = π(1 − √0.05) ≈ 0.7764π rad/s.

Now, a comparison of |F(ω)|² in Figure 7.13a and |G(ω)|² in Figure 7.13c
shows that |G(ω)|², for positive ω, is a shifted replica of |F(ω)|². Hence,
for positive ω, |G(ω)|² is twice as wide as |F(ω)|², and therefore, the 95%
bandwidth of the band-pass signal g(t) is twice that of f(t), i.e.,

Ω ≈ 1.5528π rad/s.
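The numbers in Example 7.13 can be double-checked by direct numerical integration of the triangular energy spectrum (a sketch using SciPy, which the text does not assume; F2 is our name for |F(ω)|²):

```python
import numpy as np
from scipy.integrate import quad

# Triangular energy spectrum of Figure 7.13a: |F(w)|^2 = 1 - |w|/pi for |w| <= pi
F2 = lambda w: max(1.0 - abs(w) / np.pi, 0.0)

W = quad(F2, 0, np.pi)[0] / np.pi            # (1/2pi) integral; integrand is even
Omega = np.pi * (1 - np.sqrt(0.05))          # claimed 95% bandwidth

E_in = quad(F2, 0, Omega)[0] / np.pi         # energy in the band |w| <= Omega

assert abs(W - 0.5) < 1e-9                   # total energy is 1/2
assert abs(E_in / W - 0.95) < 1e-9           # exactly 95% lies inside Omega
```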

Example 7.14
What is the 100% bandwidth of the signal g(t) defined in Example 7.13?
Solution For the 100% bandwidth, we observe that ωu = 4π and ωl = 2π
rad/s. Hence, the 100% bandwidth is simply

Ω = ωu − ωl = 2π rad/s,

or B = 1 Hz.

7.3 LTI Circuit and System Response to Energy Signals

In the opening section of this chapter we obtained the input–output relation

f(t) = (1/2π) ∫_{−∞}^{∞} F(ω)e^{jωt} dω −→ LTI −→ y(t) = (1/2π) ∫_{−∞}^{∞} H(ω)F(ω)e^{jωt} dω.
(See Figure 7.1.) This also may be expressed as

y(t) ↔ Y (ω) = H (ω)F (ω).

In either case, the system output is the inverse Fourier transform of the product of the
system frequency response and the Fourier transform of the input.

This relation was derived as a straightforward application of

e^{jωt} −→ LTI −→ H(ω)e^{jωt}.

Because this latter rule describes a steady-state response (see Section 5.2), we may
think that the relation y(t) ↔ Y(ω) = H(ω)F(ω) describes just the steady-state
response y(t) of a dissipative LTI system to an input f(t) ↔ F(ω). However, the
relation is more powerful than that: Because in dissipative systems transient compo-
nents of the zero-state response to inputs cos(ωt) and sin(ωt) applied at t = −∞
have vanished at all finite times t, the rule

e^{jωt} −→ LTI −→ H(ω)e^{jωt}

actually describes the zero-state response for all finite times t. Consequently, the
inverse Fourier transform of Y(ω) = H(ω)F(ω) represents, for all finite t, the entire
zero-state response of system H(ω)—not just the steady-state part of it—to input
f(t) ↔ F(ω).
In summary, Figure 7.1 illustrates how the zero-state response can be calculated
for dissipative LTI systems, for all finite t. We shall make use of this relation in the
next several examples.

Example 7.15
The input of an LTI system

H(ω) = 1/(1 + jω)

is

f(t) = e^{−t}u(t),

shown in Figure 7.14a. Determine the system zero-state response y(t),
using the input–output rule shown in Figure 7.1.

Figure 7.14 (a) Input and (b) output signals of the system examined in
Example 7.15.

Solution Since

f(t) = e^{−t}u(t) ↔ F(ω) = 1/(1 + jω)

(entry 1 in Table 7.2), the Fourier transform of output y(t) is

Y(ω) = H(ω)F(ω) = [1/(1 + jω)] · [1/(1 + jω)] = 1/(1 + jω)².

According to entry 5 in Table 7.2,

te^{−t}u(t) ↔ 1/(1 + jω)².

Therefore, the system zero-state response is

y(t) = te^{−t}u(t),

as plotted in Figure 7.14b.
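The answer can be confirmed by inverting Y(ω) = 1/(1 + jω)² numerically. Since y(t) is real, y(t) = (1/π)∫₀^∞ [Re Y(ω) cos ωt − Im Y(ω) sin ωt] dω, where Re Y = (1 − ω²)/(1 + ω²)² and Im Y = −2ω/(1 + ω²)². A sketch (ours; y_num is our helper, and SciPy's weight='cos'/'sin' quadrature handles the oscillatory integrals):

```python
import numpy as np
from scipy.integrate import quad

def y_num(t):
    # y(t) = (1/pi) Int_0^inf [ReY(w) cos(wt) - ImY(w) sin(wt)] dw
    ReY = lambda w: (1 - w**2) / (1 + w**2) ** 2
    ImY = lambda w: -2 * w / (1 + w**2) ** 2
    Ic, _ = quad(ReY, 0, np.inf, weight='cos', wvar=t)
    Is, _ = quad(ImY, 0, np.inf, weight='sin', wvar=t)
    return (Ic - Is) / np.pi

for t in [0.5, 1.0, 2.0, 4.0]:
    assert abs(y_num(t) - t * np.exp(-t)) < 1e-6   # matches t e^{-t} u(t)
```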

Example 7.16
Figure 7.15a shows the input

f(t) = sinc(t) ↔ F(ω) = π rect(ω/2).

Suppose this input is applied to an LTI system having the frequency response

H(ω) = rect(ω).

Determine the system zero-state response y(t).
Solution Given that F(ω) = π rect(ω/2) and H(ω) = rect(ω), we have

Y(ω) = H(ω)F(ω) = rect(ω) · π rect(ω/2) = π rect(ω)

Figure 7.15 (a) Input and (b) output signals of the system examined in
Example 7.16.

as the Fourier transform of the system response y(t). Taking the inverse
Fourier transform of Y(ω), we find that

y(t) = (1/2π) ∫_{−∞}^{∞} Y(ω)e^{jωt} dω = (1/2π) ∫_{−∞}^{∞} π rect(ω)e^{jωt} dω
= (1/2) ∫_{−1/2}^{1/2} e^{jωt} dω = (e^{jt/2} − e^{−jt/2})/(2jt).

This result simplifies to

y(t) = (1/2) sinc(t/2),

which is plotted in Figure 7.15b. Notice that the system broadens the input
f(t) by a factor of 2 by halving its bandwidth.

Example 7.17
Signal

f(t) = g1(t) + g2(t),

where G1(ω) and G2(ω) are depicted in Figures 7.16a and 7.16b, is passed
through a band-pass system with the frequency response H(ω) shown in
Figure 7.16c. Determine the system zero-state output y(t) in terms of
g1(t) and g2(t).
Solution Since f (t) = g1 (t) + g2 (t),

f (t) ↔ F (ω) = G1 (ω) + G2 (ω).

Therefore, the Fourier transform of the output y(t) is

Y (ω) = H (ω)F (ω) = H (ω)G1 (ω) + H (ω)G2 (ω).

From Figure 7.16 we can determine that

H(ω)G1(ω) = 0

and

H(ω)G2(ω) = (1/2)G2(ω).

Hence,

Y(ω) = (1/2)G2(ω)

Figure 7.16 (a), (b) Fourier transforms of signals g1(t) and g2(t), and (c) the
frequency response H(ω) of the system examined in Example 7.17 with an input
f(t) = g1(t) + g2(t).

so that

y(t) = (1/2)g2(t).

Thus, the system filters out the component g1(t) from the input f(t) and
delivers a scaled-down replica of g2(t) as the output.

Example 7.18
An input f(t) is passed through a system having frequency response

H(ω) = e^{−jωto}.

Determine the system zero-state output y(t).
Solution

Y(ω) = H(ω)F(ω) = e^{−jωto}F(ω).

Therefore, using the time-shift property from Table 7.1, we find that the
output is

y(t) = f(t − to),

which is a delayed copy of the input f(t).



Example 7.19
The input of an LTI system

H(ω) = 1/(1 + jω)

is

f(t) = e^{t}u(−t),

shown in Figure 7.17a. What is the system zero-state output y(t)?

Figure 7.17 (a) Input and (b) output signals of the system examined in
Example 7.19.

Solution Since

f(t) = e^{t}u(−t) ↔ F(ω) = 1/(1 − jω)

(entry 2 in Table 7.2), the Fourier transform of the output is

Y(ω) = H(ω)F(ω) = [1/(1 + jω)] · [1/(1 − jω)] = 1/(1 + ω²).

According to entry 3 in Table 7.2,

e^{−|t|} ↔ 2/(1 + ω²).

Therefore, the system output is

y(t) = (1/2)e^{−|t|},

which is plotted in Figure 7.17b.

Example 7.20
The input of an LTI system

H(ω) = 1/(1 + jω)

is

f(t) = rect(t).

What is the system output y(t)?
Solution Since

f(t) = rect(t) ↔ F(ω) = sinc(ω/2),

the Fourier transform of the output is

Y(ω) = H(ω)F(ω) = [1/(1 + jω)] sinc(ω/2).

We cannot find an appropriate entry in Table 7.2 to determine y(t). Thus,
we must work out the inverse Fourier transform

y(t) = (1/2π) ∫_{−∞}^{∞} [1/(1 + jω)] sinc(ω/2) e^{jωt} dω

by hand. But the integral looks too complicated to carry out. In Chapter 9,
we will learn a simpler way of finding y(t) for this problem. So, let us leave
the answer in integral form. (It is the right answer, but it is not in a form
suitable for visualizing the output.)
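Even without a table entry, the inverse-transform integral is entirely computable numerically—for example, by sampling Y(ω) densely and forming a Riemann sum. In the sketch below (ours, using NumPy), the result is spot-checked against the closed form that the time-domain (convolution) method of Chapter 9 produces, namely y(t) = 1 − e^{−(t+1/2)} for |t| < 1/2 and y(t) = e^{−t}(e^{1/2} − e^{−1/2}) for t ≥ 1/2:

```python
import numpy as np

# Sample Y(w) = sinc(w/2)/(1 + jw) on a fine grid of midpoints (avoids w = 0)
dw = 0.01
w = np.arange(-1000, 1000, dw) + dw / 2
Y = (np.sin(w / 2) / (w / 2)) / (1 + 1j * w)

def y_num(t):
    # Riemann-sum approximation of (1/2pi) Int Y(w) e^{jwt} dw
    return (Y * np.exp(1j * w * t)).sum().real * dw / (2 * np.pi)

assert abs(y_num(0.0) - (1 - np.exp(-0.5))) < 1e-3
assert abs(y_num(1.0) - np.exp(-1.0) * (np.exp(0.5) - np.exp(-0.5))) < 1e-3
assert abs(y_num(-2.0)) < 1e-3       # causal system: output is zero before t = -1/2
```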

We will close the chapter with a small surprise (plus one more example), which
will illustrate the power and generality of the Fourier transform method, even though,
as we found out in Example 7.20, the application of the method may sometimes be
difficult.
Here is our surprise: Figure 7.18 shows an inductor L with voltage v(t), current
i(t), and v–i relation

v(t) = L di/dt.
Suppose that this inductor is a component of some dissipative LTI circuit in the
laboratory, and therefore its voltage v(t) and current i(t) are lab signals (i.e., absolutely
integrable, finite energy, etc.) Thus, v(t) and i(t) have Fourier transforms V (ω) and
I (ω), respectively. Since i(t) ↔ I (ω), the time-derivative property from Table 7.1

di
ν(t) = L V (ω) = jωLI (ω)
+ dt −
L + −
↔ L
i(t) I (ω)

Figure 7.18 v–i and V (ω)–I(ω) relations for an inductor L.

indicates that di/dt ↔ jωI(ω). Thus, the amplitude-scaling property (with K = L)
implies that

V(ω) = jωLI(ω),

or

V(ω) = ZI(ω),

with

Z = jωL.

Notice that we effectively have obtained a Fourier V(ω)–I(ω) relation (see Figure 7.18
again) for the inductor, which has the same form as the phasor V–I relation for the
same element. The V(ω)–I(ω) relations for a capacitor C and resistor R are similar to
the relation for the inductor, but with impedances Z = 1/(jωC) and Z = R, respectively,
just as in the phasor case.
So, it is no wonder that the frequency-response function H(ω) derived by the
phasor method also describes the circuit response to arbitrary inputs. The next example
will show how the Fourier transform method can be applied to directly analyze a
circuit.

Example 7.21
Determine the response y(t) ↔ Y (ω) of the circuit shown in Figure 7.19a
to an arbitrary input f (t) ↔ F (ω).

Figure 7.19 (a) An LTI circuit with an arbitrary input f(t), and (b) the Fourier equivalent of the same
circuit.

Solution Building on what we just learned, in Figure 7.19b we construct
the equivalent Fourier transform circuit and then proceed with the node-
voltage method, as usual. The KCL equation for the top node is

F(ω) = Y(ω)/1 + Y(ω)/(1/jω) + 3Y(ω) = (4 + jω)Y(ω).

Thus,

Y(ω) = [1/(4 + jω)]F(ω),

which is the Fourier transform of the system zero-state response y(t).
Hence, the system frequency response is

H(ω) = 1/(4 + jω),

and, using the inverse Fourier transform,

y(t) = (1/2π) ∫_{−∞}^{∞} [1/(4 + jω)]F(ω)e^{jωt} dω.

This result is valid for any laboratory input f(t) ↔ F(ω).
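As a numerical check of this result, take the specific input f(t) = e^{−t}u(t) ↔ 1/(1 + jω). Then Y(ω) = 1/[(4 + jω)(1 + jω)], and a partial-fraction expansion (our step, not carried out in the text) gives y(t) = (e^{−t} − e^{−4t})/3 for t ≥ 0. The sketch below (ours, using SciPy) inverts Y(ω) numerically and compares:

```python
import numpy as np
from scipy.integrate import quad

Yc = lambda w: 1.0 / ((4 + 1j * w) * (1 + 1j * w))   # H(w) F(w) for f = e^{-t} u(t)

def y_num(t):
    # y(t) real:  y(t) = (1/pi) Int_0^inf [ReY cos(wt) - ImY sin(wt)] dw
    Ic, _ = quad(lambda w: Yc(w).real, 0, np.inf, weight='cos', wvar=t)
    Is, _ = quad(lambda w: Yc(w).imag, 0, np.inf, weight='sin', wvar=t)
    return (Ic - Is) / np.pi

for t in [0.25, 1.0, 3.0]:
    assert abs(y_num(t) - (np.exp(-t) - np.exp(-4 * t)) / 3) < 1e-6
```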

EXERCISES
7.1 (a) Given that f(t) = e^{−a(t−to)}u(t − to), where a > 0, determine the Fourier
transform F(ω) of f(t).
(b) Given that

g(t) = 1/(a + jt),

where a > 0, determine the Fourier transform G(ω) of g(t) by using
the symmetry property and the result of part (a).
(c) Confirm the result of part (b) by calculating g(t) from G(ω), using the
inverse Fourier transform integral.
7.2 Let

f(t) = rect(t/2).

(a) Plot g(t) = f(t − 1).
(b) Determine the Fourier transform G(ω) by using the time-shift property
and the fact that

rect(t/T) ↔ T sinc(ωT/2).

(c) Determine G(ω) by direct Fourier transformation (integration) of g(t),
and confirm that (b) is correct.
(d) Taking advantage of Parseval's theorem (Table 7.1, entry 16), deter-
mine the signal energy

W = (1/2π) ∫_{−∞}^{∞} |G(ω)|² dω.

7.3 Determine the Fourier transform F (ω) of the following signal f (t):

[Plot of f(t) versus t: a waveform taking the values 1 and −1 on the interval −3 ≤ t ≤ 3]

7.4 Determine the inverse Fourier transform of

F(ω) = (π/W) rect(ω/2W)

by direct integration. Is F(ω) absolutely integrable? Does it satisfy the
Dirichlet conditions?
7.5 Plot the time derivatives of the unit triangle Δ(t/τ) and the function

f(t) = (2/τ) rect((t + τ/4)/(τ/2)) − (2/τ) rect((t − τ/4)/(τ/2))

to show that they are equivalent. In plotting f(t), superpose the plots of
(2/τ) rect((t ± τ/4)/(τ/2)), which you obtain by shifting and scaling the graph of rect(t/τ).

7.6 Given that f(t) = (2/5)Δ(t/5), evaluate the Fourier transform F(ω) at ω = 0.
7.7 (a) Show that for real-valued signals f (t), the Fourier transform F (ω)
satisfies the property
F (−ω) = F ∗ (ω).
(b) Using this result, show that for real-valued f (t), we have |F (−ω)| =
|F (ω)| and ∠F (−ω) = −∠F (ω) (i.e. that the magnitude of the Fourier
transform is even and the phase is odd).
7.8 On an exam, you are asked to calculate F (0) for some real-valued signal
f (t). You obtain the answer F (0) = 4 − j 2. Explain why, for sure, you have
made a mistake in your calculation.

7.9 Show that, given a real-valued signal f(t), the inverse Fourier transform
integral can be expressed as

f(t) = (1/2π) ∫_0^{∞} 2|F(ω)| cos(ωt + ∠F(ω)) dω.

7.10 The bandwidth Ω of a low-pass signal f(t) ↔ F(ω) is defined by the
constraint

(1/2π) ∫_{−Ω}^{Ω} |F(ω)|² dω = 0.8Wf,

where Wf denotes the energy of signal f(t).
(a) What fraction of the signal energy Wf is contained in the frequency
band 0 < ω < Ω? Explain.
(b) The signal f(t) is filtered by a linear system with a frequency response
H(ω) satisfying H(ω) = 0 for |ω| < Ω and |H(ω)| = 1 for |ω| ≥ Ω.
What is the total energy of the system output y(t) in terms of the energy
Wf of the input f(t)?

7.11 Determine the 3-dB bandwidth and the 95% bandwidth of signals f (t) and
g(t) with the following energy spectra:

[Plot: |F(ω)|² versus ω (rad/s), with markings at π/2 and π]
[Plot: |G(ω)|² versus ω (rad/s), with markings at −2π, π, 2π, and 3π]

7.12 (a) Let f (t) = f1 (t) + f2 (t) such that f1 (t) ↔ F1 (ω) and f2 (t) ↔ F2 (ω).
Show that

f (t) ↔ F1 (ω) + F2 (ω).

(b) The input signal of an LTI system with a frequency response H(ω) =
|H(ω)|e^{jχ(ω)} is f1(t) + f2(t). Functions F1(ω), F2(ω), H(ω), and
χ(ω) are given graphically as follows:

[Plots versus ω (rad/s), each over −10π ≤ ω ≤ 10π: F1(ω); F2(ω); |H(ω)| (with marked levels 4 and 2); and χ(ω) (with marked level −π rad)]

Express the output y(t) of the system as a superposition of scaled
and/or shifted versions of f1(t) and f2(t). (Hint: y(t) = y1(t) + y2(t),
with Y1(ω) = H(ω)F1(ω) and Y2(ω) = H(ω)F2(ω).)
7.13 Determine the response y(t) of the following circuit with an arbitrary input
f(t) in the form of an inverse Fourier transform, and then evaluate y(t) for
the case f(t) = e^{−t/6}u(t) V:

[Circuit diagram: source f(t) driving a network that includes a 3 F capacitor, with output y(t)]


8
Modulation and AM Radio

8.1 FOURIER TRANSFORM SHIFT AND MODULATION PROPERTIES
8.2 COHERENT DEMODULATION OF AM SIGNALS
8.3 ENVELOPE DETECTION OF AM SIGNALS
8.4 SUPERHETERODYNE AM RECEIVERS WITH ENVELOPE DETECTION
EXERCISES

Voice and music signals modulate radio carriers; modulated carriers are filtered
and the outputs are demodulated and detected; how commercial AM transmitters
and receivers work.

AM radio communication is based on relatively simple signal processing concepts.
The output signal f(t) of a microphone or an audio playback device (see Figure 8.1a)
is used to vary, or modulate, the amplitude of a cosine signal cos(ωc t). The resulting
AM (short for amplitude modulation) signal

f(t) cos(ωc t)

(see Figure 8.1b) is subsequently used to excite a transmitting antenna that converts
the signal (a voltage or current) into a propagating radiowave (traveling electric and
magnetic field variations). We refer to ωc as the carrier frequency of the AM signal,
or the operating frequency of the radio station where the signal is generated. In
typical cases the carrier frequency ωc ≫ Ω, where Ω denotes the bandwidth of the
audio signal f(t). The task of an AM radio receiver is to pick up the radiowave
signal arriving from the broadcast station and convert it into an output y(t) that is
proportional to f(t − to), where to is some small propagation time delay, typically
a millisecond or less in practice. Subsequently, y(t) (again, a voltage or current) is
converted into sound by a suitable transducer,1 which ordinarily is a loudspeaker or
a set of headphones.
In this chapter we will study some of the details of AM radio communica-
tion, using the Fourier series and transform tools introduced in Chapters 6 and 7.
In Section 8.1 we will begin by introducing the Fourier transform shift and modu-
lation properties and discussing their relevance to AM radio. Sections 8.2 and 8.3
will describe two different ways of demodulating an AM radio signal acquired by
a receiving antenna. Finally, Section 8.4 will examine the components and overall
performance details of practical superheterodyne AM receivers.

1
A transducer converts energy from one form to another—for example, electrical to acoustical.


Figure 8.1 (a) A signal f(t), and (b) a corresponding AM signal f(t) cos(ωc t) with
a carrier frequency ωc.

8.1 Fourier Transform Shift and Modulation Properties

Example 8.1
Prove the Fourier time-shift property of Table 7.1.
Solution The time-shift property states that

f(t − to) ↔ F(ω)e^{−jωto},

where F(ω) denotes the Fourier transform of f(t). To verify this property,
we write the Fourier transform of f(t − to) as

∫_{−∞}^{∞} f(t − to)e^{−jωt} dt = ∫_{−∞}^{∞} f(x)e^{−jω(x+to)} dx = e^{−jωto} ∫_{−∞}^{∞} f(x)e^{−jωx} dx,

where the second step uses the change of variable

x = t − to ⇒ t = x + to

and where dt = dx. The last integral is F(ω), and so we have

f(t − to) ↔ F(ω)e^{−jωto},

as claimed. Note that we already applied this time-shift property to several
problems in Chapter 7.

Example 8.2
Prove the Fourier frequency-shift property of Table 7.1.
Solution The frequency-shift property states that

f(t)e^{jωot} ↔ F(ω − ωo).

This is the dual of the time-shift property. It can be proven as in Example 8.1,
except by the use of the inverse Fourier transform. Alternatively, a more
direct proof yields

∫_{−∞}^{∞} f(t)e^{jωot}e^{−jωt} dt = ∫_{−∞}^{∞} f(t)e^{−j(ω−ωo)t} dt = F(ω − ωo),

as claimed.

The frequency-shift property will play a major role in this chapter. In particular,
it forms the basis for the modulation property in Table 7.1. The modulation property
is derived as follows. Evaluating the frequency-shift property for ωo = ±ωc gives

f(t)e^{jωc t} ↔ F(ω − ωc)

and

f(t)e^{−jωc t} ↔ F(ω + ωc).

Summing these results, we obtain

f(t)(e^{jωc t} + e^{−jωc t}) = 2f(t) cos(ωc t) ↔ F(ω − ωc) + F(ω + ωc).

Hence,

f(t) cos(ωc t) ↔ (1/2)F(ω − ωc) + (1/2)F(ω + ωc),

which is the modulation property.
The implication of the modulation property is illustrated in Figure 8.2. The top
figure shows the Fourier transform F(ω) of some signal f(t). The bottom figure then
shows the Fourier transform of f(t) cos(ωc t), where the replicas of F(ω) have half
the height of the original (as depicted by the dashed curve in the top figure) and are
shifted to the right and left by ωc. The modulation process cannot be described in
terms of a frequency response function H(ω), because, unlike the LTI filtering oper-
ations discussed in Section 7.3, modulation is a time-varying operation that shifts
the energy content of signal f(t) from its baseband to an entirely new location
in the frequency domain. Figure 8.2 illustrates how the modulation process gener-
ates a band-pass signal f(t) cos(ωc t) from a low-pass signal f(t). We will refer to
f(t) cos(ωc t) as an AM signal with carrier ωc.
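The modulation property is easy to demonstrate numerically: approximate the Fourier transform of a sampled signal by a Riemann sum and compare the spectrum of f(t) cos(ωc t) against the two half-amplitude shifted replicas. The sketch below is ours (the Gaussian test signal and all names are our own choices, not the book's):

```python
import numpy as np

dt = 0.01
t = np.arange(-50, 50, dt)
wc = 20.0                          # carrier frequency, rad/s
f = np.exp(-t**2)                  # a smooth low-pass test signal
g = f * np.cos(wc * t)             # the AM signal

def ft(x, w):
    # Riemann-sum approximation of the Fourier transform at frequencies w
    return (x * np.exp(-1j * np.outer(w, t))).sum(axis=1) * dt

w = np.linspace(-40, 40, 161)
G = ft(g, w)
F_shifted = 0.5 * ft(f, w - wc) + 0.5 * ft(f, w + wc)

assert np.max(np.abs(G - F_shifted)) < 1e-9   # G(w) = F(w-wc)/2 + F(w+wc)/2
```

Because the identity cos(ωc t) = (e^{jωc t} + e^{−jωc t})/2 holds sample by sample, the two spectra agree to machine precision here.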

Figure 8.2 An example illustrating the modulation property: Fourier transform of some low-pass signal
f(t) (top) and Fourier transform of the AM signal f(t) cos(ωc t) (bottom).

In another example, shown in Figure 8.3, modulation generates a multi-band
signal g(t) cos(ωc t) from a band-pass signal g(t). Notice that, as in the earlier
example, we obtain the Fourier transform of g(t) cos(ωc t) by shifting half (in ampli-
tude) of the Fourier transform of g(t) to the right by an amount ωc, shifting half to
the left by the same amount, and then summing.
The system symbol for the AM modulation process is shown in Figure 8.4. The
multiplying unit in the diagram is known as a mixer. As already mentioned, in AM
radio communication an audio signal f(t) is mixed with a carrier signal cos(ωc t) and
transmitted as a band-pass radiowave (a time-varying electric field traveling through
space). The purpose of modulation prior to transmission is twofold:

Figure 8.3 Another example illustrating the modulation property: Fourier transform of some band-pass
signal g(t) (top) and Fourier transform of modulated multi-band signal g(t) cos(ωc t) (bottom). The short-
and long-dashed arrows from top to bottom suggest how the right and left shifts of half of G(ω) result
in the Fourier transform of the modulated signal g(t) cos(ωc t).
Figure 8.4 System symbol for AM modulation process. The multiplier unit is
known as a mixer. Shifting the frequency content of a signal to a new location
in ω-space, by the use of a mixer, also is known as heterodyning.

(1) Radio antennas, which convert voltage and current signals into radiowaves
(and vice versa), essentially are high-pass systems² with negligible amplitude
response |Hant (ω)| for |ω| < πc/(2L), where c = 3 × 10^8 m/s is the speed of light
and L is the physical size of the antenna.³ For L = 75 m, for instance, the
antenna performs poorly unless

ω ≥ π(3 × 10^8)/150 = 2π × 10^6 rad/s,

or, equivalently, frequencies are above 1 MHz. No practical antenna with a
reasonable physical size exists to transmit an audio signal f (t) directly. By
contrast, a modulated AM signal, f (t) cos(ωc t), can be efficiently radiated
with an antenna having a 75-m length, when the carrier frequency is in the
1-MHz frequency range.
(2) Even if efficient antennas were available in the audio frequency range, radio
communication within the audio band would be problematic. The reason for
this is that all signal transmissions then would occupy the same frequency
band. Thus, it would be impossible to extract the signal for a single station
of interest from the superposed signals of all other stations operating in the
same geographical region. In other words, the signals would interfere with one
another. To overcome this difficulty, modulation is essential.
In the United States, different AM stations operating in the same vicinity are
assigned different individual carrier frequencies—chosen from a set spaced 10 kHz
or 2π × 10^4 rad/s apart, within the frequency range 540 to 1700 kHz. With a 10-kHz
operation bandwidth, each AM radio station can broadcast a voice signal of up to
5-kHz bandwidth. (Can you explain why?)

Example 8.3
A mixer is used to multiply

1 + Σ_{n=1}^∞ (1/n) cos(nωo t)

² Examined and modeled in courses on electromagnetics.
³ For a monopole antenna over a conducting ground plane.

Figure 8.5 (a) Fourier transform of a low-pass signal f (t), (b) M(ω) computed in Example 8.3, (c) a
low-pass filter HLPF (ω), and (d) a band-pass filter HBPF (ω).

with a low-pass signal f (t). The Fourier transform of f (t) is plotted in
Figure 8.5a. Plot the Fourier transform M(ω) of the mixer output

m(t) = f (t){1 + Σ_{n=1}^∞ (1/n) cos(nωo t)}.

Solution We first expand m(t) as

m(t) = f (t) + Σ_{n=1}^∞ (1/n) f (t) cos(nωo t).

Next, we apply the addition and modulation properties of the Fourier trans-
form to this expression to obtain

M(ω) = F (ω) + Σ_{n=1}^∞ (1/(2n)) {F (ω − nωo ) + F (ω + nωo )}.

This result is plotted in Figure 8.5b. Notice that f (t) can be extracted from
m(t) with the low-pass filter HLPF (ω) shown in Figure 8.5c. On the other
hand, filtering m(t) with the band-pass filter HBPF (ω), shown in Figure 8.5d,
would generate a band-pass AM signal f (t) cos(ωo t) having carrier ωo .
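The structure of M(ω), a baseband copy of F(ω) plus replicas of amplitude 1/(2n) at ±nωo, can be confirmed with an FFT. The sketch below truncates the sum at n = 3 and uses a single 5 Hz tone for f(t) with fo = 100 Hz, all illustrative choices not taken from the example:

```python
import numpy as np

# Mixer of Example 8.3, truncated at n = 3 and driven by a single 5 Hz tone.
fs = 4000
t = np.arange(fs) / fs
f = np.cos(2 * np.pi * 5 * t)
fo = 100  # mixing fundamental omega_o / (2 pi), chosen for illustration
m = f * (1 + sum(np.cos(2 * np.pi * n * fo * t) / n for n in range(1, 4)))

spec = np.abs(np.fft.rfft(m)) / len(t)

def amp_at(freq_hz):
    # With a 1 s record the FFT bin index equals the frequency in Hz;
    # a real tone of amplitude A appears with |rfft|/N = A/2.
    return 2 * spec[freq_hz]
```

Each replica at n·fo carries f(t) scaled by 1/(2n): the tone pair at n·fo ± 5 Hz has amplitude 1/(2n), while the baseband tone at 5 Hz keeps its full amplitude.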

The antenna for an AM radio receiver typically captures the signals from tens
of radio stations simultaneously. The receiver uses a front-end band-pass filter with
an adjustable passband to respond to just the single AM signal f (t) cos(ωc t) to be
“tuned in.” The band-pass filter output then is converted into an audio signal that
is proportional to f (t − to ). This conversion process, known as demodulation, is
discussed in the next two sections.

8.2 Coherent Demodulation of AM Signals

The block diagram in Figure 8.6a depicts a possible AM communication system. The
mixer on the left idealizes an AM radio transmitter. The mixer on the right and the
low-pass filter HLPF (ω) constitute a coherent-demodulation, or coherent-detection,
AM receiver. Because the transmitter and receiver are located at different sites, the
transmitted AM signal

f (t) cos(ωc t)

Figure 8.6 (a) A possible AM communication system using a
coherent-demodulation receiver, (b) the Fourier transform of the audio signal
f (t), (c) the Fourier transform magnitude of the receiver input signal r(t), (d) the
Fourier transform magnitude of the mixer output m(t) in the receiver, and (e)
the frequency response of the low-pass filter in the receiver.

arrives at the receiver after traveling the intervening distance through some channel
(usually, through air in the form of a radiowave). In the block diagram the center box

Hc (ω) = k e^{−jω to}

represents an ideal propagation channel from the transmitter to the receiver, which
only delays the AM signal by an amount to and scales it in amplitude by a constant
factor k. Hence, the receiver input labeled as r(t) is

r(t) = kf (t − to ) cos(ωc (t − to )).

Figures 8.6b and 8.6c show plots of a possible F (ω) and |R(ω)|, where

f (t) ↔ F (ω) and r(t) ↔ R(ω).

Let us now examine how demodulation takes place as in Figure 8.6a. The mixer
output in the receiver is

m(t) = r(t) cos(ωc (t − to ))
     = k f (t − to ) cos²(ωc (t − to ))
     = (k/2) f (t − to ) {1 + cos(2ωc (t − to ))}.

Therefore, using a combination of Fourier addition, modulation, and time-shift prop-


erties, we determine its Fourier transform M(ω) as

M(ω) = (k/2) F (ω)e^{−jω to} + (k/4){F (ω − 2ωc ) + F (ω + 2ωc )}e^{−jω to},

where the first term is the Fourier transform of


(k/2) f (t − to )

and the second term is the transform of


(k/2) f (t − to ) cos(2ωc (t − to )).

A plot of |M(ω)| is shown in Figure 8.6d.


Clearly, the first term of m(t),

(k/2) f (t − to ),

is the delayed audio signal that we hope to recover. To extract this signal, we pass
m(t) through the low-pass filter HLPF (ω) shown in Figure 8.6e. The result is
Y (ω) = HLPF (ω)M(ω) = (k/2) F (ω)e^{−jω to},
implying that
y(t) = (k/2) f (t − to ),

the desired audio signal at the loudspeaker input.⁴
The crucial and most difficult step in coherent demodulation is the mixing of the
incoming signal r(t) with cos(ωc (t − to )). The difficulty lies in obtaining cos(ωc (t −
to )) for mixing purposes. A locally generated cos(ωc t + θ) in the receiver with the
right frequency ωc , but an arbitrary phase shift θ = −ωc to , will not work, because
even small fluctuations of the propagation delay to will translate to large phase-shift
variations of the carrier and will cause y(t) to fluctuate. Coherent detection receivers
are thus required to extract cos(ωc (t − to )) from the incoming signal r(t) before the
mixing can take place. This requirement increases the complexity and, therefore, the
cost of coherent demodulation receivers. The term coherent demodulation or detection
refers to the requirement that the phase shift θ of the mixing signal cos(ωc t + θ) be
coherent (same as or different by a constant amount) with the phase shift −ωc to of
the incoming carrier.
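A simulation sketch of the receiver half of Figure 8.6a makes this concrete, including the phase sensitivity just discussed. The choices below (k = 1, to = 0, a 5 Hz tone standing in for the audio, a 500 Hz carrier, and an FFT mask standing in for the ideal low-pass filter) are illustrative, not from the text; mixing with cos(ωc t + θ) instead of the coherent carrier scales the recovered audio by cos θ, and a 90° error wipes it out entirely:

```python
import numpy as np

fs = 4000
t = np.arange(fs) / fs
f = np.cos(2 * np.pi * 5 * t)          # stand-in audio: a 5 Hz tone
fc = 500
r = f * np.cos(2 * np.pi * fc * t)     # received AM signal (k = 1, to = 0)

def demodulate(theta):
    """Mix with a local carrier having phase error theta, then low-pass filter
    by zeroing all FFT bins at or above 50 Hz (an ideal-LPF stand-in)."""
    m = r * np.cos(2 * np.pi * fc * t + theta)
    M = np.fft.rfft(m)
    M[50:] = 0
    return np.fft.irfft(M, len(t))

y_matched = demodulate(0.0)            # recovers f(t)/2
y_mismatched = demodulate(np.pi / 2)   # 90 degree error: output vanishes
```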
The receiver complexity can be reduced if the incoming signal is of the form

r(t) = k(f (t − to ) + α) cos(ωc (t − to )),

that is, if it contains a constant-amplitude cosine component kα cos(ωc (t − to )) in


addition to the primary term kf (t − to ) cos(ωc (t − to )) carrying the voice signal f (t).
In commercial AM radio, we achieve this by adding a constant offset α > 0 to the voice
signal f (t) before modulating the carrier cos(ωc t). (See Figure 8.7a.) For sufficiently
large offsets α satisfying

f (t) + α > 0 for all t,

a simple envelope detection procedure, described in the next section, works well,
obviating the need to generate cos(ωc (t − to )) within the receiver.

8.3 Envelope Detection of AM Signals

Consider the modified AM transmitter system shown in Figure 8.7a. In the modified
transmitter, a DC offset α is added to the voice signal f (t), and the sum is used
⁴ In practice, y(t) is amplified by an audio amplifier (which is not shown) prior to being applied to the loudspeaker.
Figure 8.7 (a) An idealized AM transmitter with offset (α) insertion, (b) a
possible voice signal f (t), (c) AM signal f (t) cos(ωc t) and its envelope |f (t)|, and
(d) AM signal (f (t) + α) cos(ωc t) with a DC offset α and its envelope
|f (t) + α| = f (t) + α. Notice that the envelope function in (d) is an offset version
of f (t) plotted in (b); the envelope in (c), however, does not resemble f (t).

to modulate the amplitude of a carrier cos(ωc t). A possible waveform for f (t) is
shown in Figure 8.7b. The incoming signal to an AM receiver, from the transmitter
in Figure 8.7a, will be

r(t) = (f (t) + α) cos(ωc t),

assuming, for simplicity, a propagation channel with k = 1 and zero time delay to .
This signal is plotted in Figures 8.7c and 8.7d for the cases α = 0 and α > max |f (t)|.
The bottoms of Figures 8.7c and 8.7d show the envelopes of r(t) for the same two
cases, where the envelope of r(t) is defined as

|f (t) + α|

(also indicated by the dashed lines superposed upon the r(t) curves in Figures 8.7c
and 8.7d).

Except for a DC offset α, the envelope signal shown in Figure 8.7d is the same
as the desired signal f (t). By contrast, the envelope |f (t)| shown in Figure 8.7c,
corresponding to the case α = 0, does not resemble f (t), because of the “rectification
effect” of the absolute-value operation. We next will describe an envelope detector
system that can extract from the AM signal shown at the top of Figure 8.7d the desired
envelope

|f (t) + α| = f (t) + α.

The detector is designed to work when

α > max |f (t)|.

Figure 8.8a shows an ideal envelope detector that consists of a full-wave rectifier
followed by a low-pass filter HLPF (ω). The figure also includes plots of a possible AM
input signal r(t) (same as in Figure 8.7d), the rectifier output p(t) = |r(t)|, and the
output q(t) that follows the peaks of p(t) and therefore is equal to the envelope of
the input r(t).
Assuming that α > max |f (t)| so that f (t) + α > 0 for all t, we have
|f (t) + α| = f (t) + α, and the rectifier output is

p(t) = |r(t)| = |(f (t) + α) cos(ωc t)| = |f (t) + α|| cos(ωc t)|
= (f (t) + α)| cos(ωc t)|,

Figure 8.8 (a) Block diagram of an ideal envelope detector, as well as example plots of an input signal
r(t), its rectified version p(t) = |r(t)|, and the detector output q(t), (b) plot of a full-wave rectified AM
carrier, and (c) example F(ω) and frequency response of an ideal low-pass filter included in the envelope
detector system.

as shown in the figure. Clearly, after rectification the desired AM signal envelope
f (t) + α appears as the amplitude of the rectified cosine | cos(ωc t)| shown in
Figure 8.8b.
We next will see that we can extract f (t) + α from p(t) by passing p(t) through
a low-pass filter HLPF (ω) having the shape shown in Figure 8.8c. The rectified cosine
| cos(ωc t)| of Figure 8.8b is a periodic function with period

T = Tc /2 = (2π/ωc )/2 = π/ωc

and fundamental frequency

ωo = 2π/T = 2π/(π/ωc ) = 2ωc .

Thus, it can be expanded in a Fourier series as

| cos(ωc t)| = a0 /2 + Σ_{n=1}^∞ an cos(n2ωc t)

with an appropriate set of Fourier coefficients an . (See Example 6.7 in Section 6.2.4,
where it was found that a0 = 4/π.) Hence, the rectifier output, or the input to filter
HLPF (ω), can be expressed as

p(t) = (f (t) + α)| cos(ωc t)| ≡ p1 (t) + p2 (t),

where

p1 (t) ≡ (a0 /2) f (t) + Σ_{n=1}^∞ an f (t) cos(n2ωc t)

and

p2 (t) ≡ (a0 /2) α + Σ_{n=1}^∞ an α cos(n2ωc t).

Now, the response of the filter HLPF (ω) to the input p2 (t) is

q2 (t) = α,

since (see Figure 8.8c)


HLPF (0) = 2/a0

and

HLPF (n2ωc ) = 0 for n ≥ 1.



To determine the filter response q1 (t) to input p1 (t), we first observe (using the
Fourier addition and modulation properties) that

P1 (ω) = (a0 /2) F (ω) + Σ_{n=1}^∞ (an /2) {F (ω − n2ωc ) + F (ω + n2ωc )}.

Because only the first term is within the passband of HLPF (ω), it follows that

Q1 (ω) = HLPF (ω)P1 (ω) = F (ω),

implying that

q1 (t) = f (t).

Therefore, using superposition, we find the filter output due to input p(t) =
p1 (t) + p2 (t) to be

q(t) = q1 (t) + q2 (t) = f (t) + α,

which is the desired envelope of the envelope detector input

r(t) = (f (t) + α) cos(ωc t).

In summary, an envelope detector performs exactly as intended: It detects the
envelope of an AM signal corresponding to an audio signal that is offset by some
suitable DC level.
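The whole rectify-and-filter chain can be simulated directly. In the sketch below the tone, carrier, and offset values are arbitrary illustrative choices, and an FFT mask stands in for the ideal HLPF(ω); with a0 = 4/π, applying the passband gain 2/a0 = π/2 returns f(t) + α as derived above:

```python
import numpy as np

fs = 20000
t = np.arange(fs) / fs
f = np.cos(2 * np.pi * 5 * t)              # audio: 5 Hz tone
alpha = 2.0                                # DC offset with alpha > max|f(t)|
r = (f + alpha) * np.cos(2 * np.pi * 1000 * t)   # AM signal, 1 kHz carrier

p = np.abs(r)                              # full-wave rectifier
P = np.fft.rfft(p)
P[100:] = 0                                # ideal low-pass: keep only < 100 Hz
q = (np.pi / 2) * np.fft.irfft(P, len(t))  # passband gain 2/a0 = pi/2
```

Up to small sampling artifacts, q(t) reproduces the envelope f(t) + α.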

Example 8.4
Is envelope detection a linear or nonlinear process? Explain.
Solution Envelope detection is a nonlinear process, because, in general,
the envelope of

(f1 (t) + f2 (t)) cos(ωc t)

will be different from the sum of envelopes |f1 (t)| and |f2 (t)| of signals

f1 (t) cos(ωc t)

and

f2 (t) cos(ωc t),

respectively. The envelope of (f1 (t) + f2 (t)) cos(ωc t) is |f1 (t) + f2 (t)|,
but

|f1 (t) + f2 (t)| ≠ |f1 (t)| + |f2 (t)|,

unless, for each value of t, f1 (t) and f2 (t) have the same algebraic sign.

Example 8.5
Suppose that the input signal of an envelope detector is

r(t) = (f (t) + α) cos(ωc (t − to )),

where to > 0. What will be the detector output, assuming that f (t) + α >
0?
Solution The detector output still will be

f (t) + α,

because the amplitude coefficients of the Fourier series of | cos(ωc (t − to ))|


and | cos(ωc t)| are identical. Hence, envelope detection is not sensitive to
a phase shift of the carrier signal.

We have just described the operation of an ideal envelope detector. Figure 8.9
shows a practical envelope detector circuit that offers an approximation to the ideal
system in Figure 8.8a (which consists of both a full-wave rectifier and an ideal low-
pass filter). The practical and very simple detector consists of a diode in series with a
parallel RC network. The capacitor voltage q(t) in the circuit will closely approximate
the envelope of an AM input r(t) if the RC time constant of the circuit is appropriately
selected (long compared with the carrier period 2π/ωc and short compared with the
inverse of the bandwidth of the envelope of r(t)). Whenever the capacitor voltage
q(t) is larger than the instantaneous value of the AM input r(t), the diode behaves as
an open circuit and the capacitor discharges through resistor R with a time constant
RC. When the decaying capacitor voltage q(t) drops below r(t), the diode starts
conducting, the capacitor begins recharging, and q(t) is pulled up to follow r(t).
The decay and growth cycle repeats after r(t) dips below q(t) (once every period of
cos(ωc t)), and the overall result is an output that closely approximates the envelope
of r(t). The output will contain a small ripple component (a deviation from the true
envelope) with an energy content concentrated near the frequency ωc and above. This
component cannot be converted to sound by audio loudspeakers and headphones;
consequently, its presence can be ignored for all practical purposes.

Figure 8.9 A practical envelope detector. When r(t) > q(t), the diode conducts
and the capacitor C charges up to a voltage q(t) that remains close to the
envelope of r(t).

8.4 Superheterodyne AM Receivers with Envelope Detection

Figure 8.10 depicts a portion of the energy spectrum of a possible AM receiver input
versus frequency f = ω/2π ≥ 0. The spectrum consists of features 10 kHz apart, where
each feature is the energy spectrum of a different AM signal broadcast from a different
radio station. A practical AM receiver, employing an envelope detector, needs to select
one of these AM signals by using a band-pass filter placed at the receiver front end.
One option is to use a band-pass filter with a variable, or tunable, center frequency.
However, such filters are difficult to build when the bandwidth-to-center-frequency
ratio B/f is of the order 1% or less. With B = 10 kHz and f = 1 MHz (a typical
frequency in the AM broadcast band), this ratio⁵ is exactly 1%.

Figure 8.10 A sketch of the energy spectrum of a portion of the AM broadcast
band. The University of Illinois AM station WILL operates at a carrier frequency
of 580 kHz, near the low end of the AM broadcast band that extends from 540
to 1700 kHz. Each AM broadcast occupies a bandwidth of 10 kHz. In the United
States, frequency allocations are regulated by the FCC (Federal Communications
Commission).

A practical way to circumvent this difficulty is to use a band-pass filter with a


fixed center frequency that is below the AM band, for example, at

f = fIF = ωIF /2π = 455 kHz,

and to shift or heterodyne the frequency band of the desired AM signal into the
pass-band of this filter, called an IF filter, which is short for intermediate frequency
filter.
As shown in Figure 8.11, we can do this by mixing the receiver input signal r(t)
with a local oscillator signal cos(ωLO t) of an appropriate frequency ωLO prior to entry
of r(t) into the IF filter. To understand how this procedure works, let us first assume
that in Figure 8.11

r(t) = f (t) cos(ωc t),

⁵ This is also the inverse of the quality factor Q introduced and discussed in Chapter 12.

Figure 8.11 Heterodyning an incoming signal r(t) into the IF band by the use
of a local-oscillator signal cos(ωLO t).

representing a single AM signal. The mixer output then can be represented as⁶

m(t) = f (t) cos(ωc t) cos(ωLO t)
     = (1/2) f (t) cos((ωLO − ωc )t) + (1/2) f (t) cos((ωLO + ωc )t).

Choosing

ωLO = ωc + ωIF

so that

ωLO − ωc = ωIF ,

we find that the expression for m(t) reduces to

m(t) = (1/2) f (t) cos(ωIF t) + (1/2) f (t) cos((2ωc + ωIF )t),
which is equivalent to the sum of two AM station outputs where the audio signal f (t)
is transmitted by two carriers, ωIF and 2ωc + ωIF .
Now, the first term of m(t) lies within the passband of an IF filter constructed
about a center frequency ωIF , whereas the second term lies in the filter stopband.
Therefore, in Figure 8.11 the IF filter output r̃(t) is just the first term of m(t) and the
detector output will be proportional to the desired f (t) (or f (t) + α, if a DC offset
is included in the incoming AM signal).
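This frequency bookkeeping can be confirmed numerically. The sketch below scales everything down to hertz (455 Hz plays the role of the 455 kHz IF, with an illustrative 1 kHz carrier and 5 Hz tone) so that a 1 s record suffices:

```python
import numpy as np

fs = 10000
t = np.arange(fs) / fs
f = np.cos(2 * np.pi * 5 * t)        # stand-in audio tone
fc, f_if = 1000, 455                 # scaled-down carrier and IF, in Hz
f_lo = fc + f_if                     # high-side local oscillator
m = f * np.cos(2 * np.pi * fc * t) * np.cos(2 * np.pi * f_lo * t)

spec = np.abs(np.fft.rfft(m)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
strong = sorted(freqs[spec > 0.05])
```

Half the signal lands around f_if = 455 Hz (the tone pair 455 ± 5 Hz) and half around 2·fc + f_if = 2455 Hz, exactly the two terms of m(t) above; an IF filter keeps only the first.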
⁶ We use here the sum and difference trigonometric identity

cos(a) cos(b) = (1/2) cos(b − a) + (1/2) cos(b + a);

since cosine is an even function, the order of the difference (i.e., b − a versus a − b) does not matter. To
verify the identity, note that

cos(a) cos(b) = [(e^{ja} + e^{−ja})/2][(e^{jb} + e^{−jb})/2]
             = [e^{j(a+b)} + e^{−j(a+b)}]/4 + [e^{j(b−a)} + e^{−j(b−a)}]/4,

which obviously equals one-half times the sum of two cosines with sum and difference arguments.

Figure 8.12 Fourier-domain view of the mixing and filtering operations shown
in Figure 8.11. R(ω) represents the Fourier transform of the input to the system.

Figure 8.12 helps illustrate, in the Fourier domain, the mixing and filtering opera-
tions shown in Figure 8.11. Figure 8.12 depicts possible Fourier transforms of signals
f (t), r(t), and m(t), as well as an ideal IF-filter frequency response HIF (ω). Notice
that R(ω) and M(ω) can be obtained from F (ω) via successive uses of the Fourier
modulation property with frequency shifts ±ωc and ±ωLO = ±(ωc + ωIF ). Compar-
ison of R̃(ω) and R(ω) indicates that, at the IF filter output, the signal r̃(t) is just a
carrier-shifted version of the signal r(t).
In commercial AM receivers,

fIF = ωIF /2π = 455 kHz

is used as the standard intermediate frequency. So, to listen to WILL, which broadcasts
from the University of Illinois with the carrier frequency

fc = ωc /2π = 580 kHz,

we must tune the local oscillator to

fLO = ωLO /2π = 580 + 455 = 1035 kHz.

To listen to WTKA in Ann Arbor, with fc = 1050 kHz, we need

fLO = 1050 + 455 = 1505 kHz.



In commercial AM receivers, fLO is controlled by a tuning knob that changes the
capacitance C in an LC-based oscillator circuit. The knob adjusts C so that

1/√(LC) = 2πfLO .
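For instance, the tuning condition can be solved for C. The 200 µH inductance below is an assumed illustrative value (only the product LC is fixed by the condition):

```python
import math

L = 200e-6                                   # hypothetical inductance: 200 uH
f_lo = 1.035e6                               # LO setting for WILL (Hz)

# From 1/sqrt(LC) = 2*pi*f_lo, solve for the capacitance:
C = 1 / ((2 * math.pi * f_lo) ** 2 * L)      # about 118 pF

f_check = 1 / (2 * math.pi * math.sqrt(L * C))   # round trip back to f_lo
```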

In the presence of multiple AM signals (with different carriers), the receiver


just described will not perform as desired because of an image station problem. To
understand the image station problem, examine Figure 8.13. In this figure, the M-
shaped component of R(ω) depicts the Fourier transform of an AM signal being
broadcast by a second radio station with a carrier ωc + 2ωIF that accompanies the
AM signal from the first radio station with carrier ωc . When the signal with carrier
ωc + 2ωIF is mixed with the LO signal

cos(ωLO t) = cos((ωc + ωIF )t)

tuned to carrier ωc , its Fourier transform (the M-shaped component) is also shifted
into the IF band, as illustrated in Figure 8.13 (a straightforward consequence of the
Fourier modulation property applied to R(ω)). Thus, the signal from the second
station, called the image station, interferes with the ωc carrier signal, unless r(t) is
band-pass filtered prior to the LO mixer to eliminate the image station signal. This
filtering can be accomplished with a preselector band-pass filter placed in front of the
LO mixer. The preselector filter must have a variable center frequency ωc , but it need
not be a high-quality filter. The preselector is permitted to have a wide bandwidth
that varies with ωc , so long as the bandwidth remains smaller than about 2ωIF , since
ωc and the image carrier ωc + 2ωIF are 2ωIF apart (and thus a narrower preselector
bandwidth is unnecessary). Since the bandwidth-to-center-frequency ratio

2ωIF /ωc = 2fIF /fc

Figure 8.13 Similar to Figure 8.12 but illustrating the image station problem.

Figure 8.14 Block diagram of a superheterodyne AM receiver. RFA, IFA, and AA
represent RF amplifier, IF amplifier, and audio amplifier, respectively.

of the preselector is close to 100% in the AM band, its construction is relatively simple
and inexpensive, despite the variable center–frequency requirement. The tuning knob
in commercial AM receivers simultaneously controls both the LO frequency ωc + ωIF
and the preselector center frequency ωc .
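The image arithmetic for the WILL example works out as follows; since both the desired carrier and the image carrier sit exactly fIF away from the LO, both mix into the IF passband unless the preselector removes the image first:

```python
# Frequencies in kHz; station and IF values follow the text, arithmetic only.
f_if = 455.0
fc = 580.0                 # desired carrier (WILL)
f_lo = fc + f_if           # high-side LO: 1035 kHz

f_image = fc + 2 * f_if    # image carrier: 1490 kHz

# Mixing maps a carrier at f to a difference frequency |f - f_lo|:
desired_shift = abs(fc - f_lo)       # 455 kHz -> inside the IF passband
image_shift = abs(f_image - f_lo)    # also 455 kHz -> hence interference
```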
Figure 8.14 shows the block diagram of a superheterodyne, or superhet, AM
receiver that incorporates the features discussed. In addition to the preselector (to
eliminate the image station), the LO mixer (to heterodyne the station of interest into
the IF band), the IF filter (to eliminate adjacent channel stations), and the envelope
detector, the diagram includes three blocks identified as RFA, IFA, and AA, which
are RF, IF, and audio amplifiers, respectively. The audio amplifier is controlled by an
external volume knob to adjust the sound output level of the receiver. The amplifier
also blocks the DC component α from the envelope detector. In commercial AM
receivers the same DC component is used to control the gains of the RF and IF
amplifiers to maintain a constant output level under slowly time-varying propagation
or reception conditions (such as the decrease in the incoming signal level that you
experience when you are traveling in your car away from a broadcast station).
We note that in superhet receivers,

ωLO = ωc + ωIF

is not the only possible LO choice. A local oscillator signal cos(ωLO t) with frequency

ωLO = ωc − ωIF

also works, and ωIF can be specified below, within, or even above the broadcast band.
Some of these possibilities are explored in Exercise Problem 8.8.
However, the standard IF frequency of 455 kHz for commercial AM receivers
and the high-LO standard ωLO = ωc + ωIF have advantages: First, the low-IF (i.e., IF
below the AM broadcast band) leads to a reasonably large B/fIF ratio that lessens the
cost of the IF filter circuit. Second, the high-LO choice (i.e., ωLO = ωc + ωIF ) has the
advantage over low-LO in that it requires the generation of LO frequencies only in
the range from 995 to 2155 kHz, a reasonable 2-to-1 tuning ratio as opposed to a
more demanding 15-to-1 ratio (85 to 1245 kHz) that would be required by a low-LO
system based on ωLO = ωc − ωIF .
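The two tuning ratios quoted above follow directly from the band edges:

```python
# AM broadcast band edges and the standard IF, in kHz.
fc_min, fc_max, f_if = 540.0, 1700.0, 455.0

high_lo = (fc_min + f_if, fc_max + f_if)   # 995 to 2155 kHz
low_lo = (fc_min - f_if, fc_max - f_if)    # 85 to 1245 kHz

high_ratio = high_lo[1] / high_lo[0]       # roughly 2.2 to 1
low_ratio = low_lo[1] / low_lo[0]          # roughly 14.6 to 1
```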

EXERCISES

8.1 Verify the frequency-shift property


f (t)e^{jωo t} ↔ F (ω − ωo )
by taking the inverse Fourier transform of F (ω − ωo ).
8.2 Given that
f (t)e^{±jωo t} ↔ F (ω ∓ ωo ),
determine the Fourier transform of
f (t) sin(ωo t).

8.3 Given that


cos(ωo t + θ) = cos(ωo t) cos(θ) − sin(ωo t) sin(θ),
determine the Fourier transform of
f (t) cos(ωo t + θ).
Hint: Use the multiplication, addition, and modulation Fourier properties,
as well as the frequency-shift property of Problem 8.1.
8.4 Given that

f (t) cos(ωo t) ↔ (1/2)F (ω − ωo ) + (1/2)F (ω + ωo ),

determine the Fourier transform of

f (t + θ/ωo ) cos(ωo t + θ).
Hint: Use the time-shift property.
8.5 A linear system with frequency response H (ω) is excited with an input
f (t) ↔ F (ω).
H (ω) and F (ω) are plotted below:

[Plots: F (ω), of height 1, and H (ω), of height 2, versus ω, with markings at 4π and 8π rad/s.]

(a) Sketch the Fourier transform Y (ω) of the system output y(t) and calcu-
late the energy Wy of y(t).
(b) It is observed that output q(t) of the following system equals y(t)
determined in part (a).

[Diagram: a mixer multiplies p(t) by cos(ωo t) to produce q(t).]

Sketch P (ω) and determine ωo .


8.6 It was indicated that the coherent demodulation scheme depicted in Figure
8.6a does not perform properly when the phase of the demodulating carrier
signal is mismatched to the phase of the modulating carrier. In Figure 8.6a,
suppose k = 1 and to = 0, but that the demodulating carrier is cos(ωo t + θ),
where θ is the phase mismatch.
(a) Find an expression for y(t) in terms of f (t) and θ.
(b) For what values of θ is the amplitude of y(t) the largest and smallest?
(c) Explain how a time-varying θ would affect the sound that you hear
coming from the loudspeaker. This should help you understand why a
coherent-demodulation receiver requires precise tracking of the carrier
phase.

8.7 Consider the system

[Diagram: f (t) is mixed with cos(10πt) to form g(t) = f (t) cos(10πt); g(t) is mixed with cos(5πt)
to form p(t) = g(t) cos(5πt); p(t) is mixed with cos(5πt) to form q(t); and q(t) is filtered by H (ω)
to produce y(t).]

where F (ω) and H (ω) are as follows:

[Plots: F (ω), of height 8, over −π to π rad/s; H (ω), of height 1, with markings at −2π, π, and 2π rad/s.]

(a) Express q(t) in terms of p(t).


(b) Sketch the Fourier transforms G(ω), P (ω), Q(ω), and Y (ω).
(c) Express y(t) in terms of f (t).

8.8 We wish to heterodyne an AM signal


f (t) cos(ωc t)
into an IF band, with a center frequency ωIF , by mixing it with a signal
cos(ωLO t).
Assume that an IF filter is present with twice the bandwidth of the low-pass
signal f (t), as in our discussion of AM receiver systems in Section 8.4.
Determine all of the usable values of ωLO if
(a) ωc = 2π × 10^6 and ωIF = 5π × 10^6 rad/s.
(b) ωc = 4π × 10^6 and ωIF = π × 10^6 rad/s.

8.9 For each possible choice of ωLO in Problem 8.8(a), determine the carrier
frequency of the corresponding image station.
8.10 What would be a disadvantage of using a very low IF, say, fIF = 20 kHz,
in AM reception? Also, under what circumstances could such a low IF be
tolerated? Hint: Think of image station interference issues.
9 Convolution, Impulse, Sampling, and Reconstruction

9.1 CONVOLUTION 282
9.2 IMPULSE δ(t) 301
9.3 FOURIER TRANSFORM OF DISTRIBUTIONS AND POWER SIGNALS 314
9.4 SAMPLING AND ANALOG SIGNAL RECONSTRUCTION 325
9.5 OTHER USES OF THE IMPULSE 332
EXERCISES 333

Figure 9.1 is a replica of Figure 5.9a from Chapter 5. In Chapters 5 through 7 we
developed a frequency-domain description of how dissipative LTI systems, such as
H (ω) = 1/(1 + jω), convert their inputs f (t) to outputs y(t). The description relies on the
fact that all signals that can be generated in the lab can be expressed as sums of co-
sinusoids (a discrete or continuous sum (integral), depending on whether the signal
is periodic or nonperiodic). LTI systems scale the amplitudes of co-sinusoidal inputs
with the amplitude response |H (ω)| and shift their phases by the phase response
∠H (ω). When all co-sinusoidal components of an input f (t) are modified according
to this simple rule, their sum is the system response y(t). That, in essence, is the
frequency-domain description of how LTI systems work.
There is an alternative time-domain description of the same process, which is
the main topic of this chapter. Notice how response y(t) in Figure 9.1 appears as an
“ironed-out” version of input f (t); it is as if y(t) is some sort of “running average”
of the input. In this chapter we will learn that is exactly the case for a low-pass filter.
More generally, the output y(t) of any LTI system is a weighted linear superposition
of the present and past values of the input f (t), where the specific type of running
average of f (t) is controlled by a function h(t) related to the frequency response
H (ω). For reasons that will become clear, we will refer to h(t) as the system impulse
response, and we will use the term convolution to describe the mathematical operation
between f (t) and h(t) that yields y(t).


[Figure: input f (t) applied to the low-pass filter H (ω) = 1/(1 + jω), producing output y(t).]
Figure 9.1 An example of a LTI system response to an input f (t).

In Section 9.1 we will examine the convolution properties of the Fourier trans-
form (properties 13 and 14 in Table 7.1) and learn about the convolution operation
and its properties. In Section 9.2 we will introduce a new signal δ(t), known as the
impulse, and learn how to use it in convolution and Fourier transform calculations.
Section 9.3 extends the idea of Fourier transforms to signals with infinite energy,
but finite power—signals such as sines, cosines, and the unit step. These so-called
power signals are shown to have Fourier transforms that can be expressed in terms of
impulses in the frequency domain. The chapter concludes with further applications
of convolution and the impulse, including, in Section 9.4, discussions of sampling
(analog-to-digital conversion) and signal reconstruction.

9.1 Convolution

Given a pair of signals, say, h(t) and f(t), we call a new signal y(t), defined as

y(t) ≡ ∫_{−∞}^{∞} h(τ)f(t − τ) dτ,

the convolution of h(t) and f(t), and denote it symbolically as

y(t) = h(t) ∗ f(t).
As we later shall see, we can perform many LTI system calculations by convolving
the system input f (t) with the inverse Fourier transform h(t) of the system frequency
response H (ω). That possibility motivates our study of convolution in this section.

Example 9.1
Let y(t) ≡ h(t) ∗ u(t), the convolution of some h(t) with the unit step u(t).
Express y(t) in terms of h(t) only.
Solution Since

y(t) = h(t) ∗ u(t) = ∫_{−∞}^{∞} h(τ)u(t − τ) dτ

and

u(t − τ) = { 1, τ < t;  0, τ > t },

by replacing u(t − τ) in the integrand by 1 and changing the upper integration
limit from ∞ to t, we obtain the answer:

y(t) = ∫_{−∞}^{t} h(τ) dτ.

Example 9.2
Determine the function

y(t) = u(t) ∗ u(t).

Solution Using the result of Example 9.1 with h(t) = u(t), we get

y(t) = ∫_{−∞}^{t} u(τ) dτ = { 0, t < 0;  ∫_{0}^{t} dτ = t, t > 0 } = t u(t).

So, the convolution of a unit step with another unit step produces a ramp
signal tu(t).
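This ramp result can be spot-checked numerically by approximating the convolution integral with a Riemann sum. The following sketch (numpy, grid, and step size dt are choices of this illustration, not part of the text):

```python
import numpy as np

# Approximate (u * u)(t) on t >= 0 by a Riemann sum:
# y[n] ~= dt * sum_k u[k] * u[n - k].
dt = 0.001
t = np.arange(0.0, 5.0, dt)
u = np.ones_like(t)                     # samples of u(t) for t >= 0

ramp = np.convolve(u, u)[:len(t)] * dt  # discrete convolution, scaled by dt

# Compare with the analytic ramp t*u(t); the error is on the order of dt.
max_err = np.max(np.abs(ramp - t))
print(max_err)
```

The discrepancy shrinks linearly with dt, as expected for a left-endpoint Riemann sum.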

9.1.1 Fourier convolution properties

We are just about to show that if h(t) ↔ H (ω), f (t) ↔ F (ω), and

Y (ω) = H (ω)F (ω),

then the inverse Fourier transform y(t) of Y (ω) is the convolution

y(t) = h(t) ∗ f (t).

As a consequence, convolution provides an alternative means of finding the zero-state
response y(t) of a system having frequency response H(ω). This alternative
approach to finding y(t) is an application of the time-convolution property of the
Fourier transform (see item 13 in Table 7.1), which effectively states that convolution
in the time domain implies multiplication in the frequency domain:

h(t) ∗ f(t) ↔ H(ω)F(ω).

Verification of time-convolution property: We first Fourier transform
h(t) ∗ f(t) as

∫_{−∞}^{∞} {h(t) ∗ f(t)} e^{−jωt} dt = ∫_{t=−∞}^{∞} { ∫_{τ=−∞}^{∞} h(τ)f(t − τ) dτ } e^{−jωt} dt,

and, subsequently, exchange the order of t and τ integrations to obtain

∫_{−∞}^{∞} {h(t) ∗ f(t)} e^{−jωt} dt = ∫_{τ=−∞}^{∞} h(τ) { ∫_{t=−∞}^{∞} f(t − τ) e^{−jωt} dt } dτ.

By the Fourier time-shift property, the expression in curly brackets on the
right side is recognized as F(ω)e^{−jωτ}, where F(ω) is the Fourier transform
of f(t). Hence,

∫_{−∞}^{∞} {h(t) ∗ f(t)} e^{−jωt} dt = ∫_{τ=−∞}^{∞} h(τ)F(ω)e^{−jωτ} dτ
                                   = F(ω) ∫_{τ=−∞}^{∞} h(τ)e^{−jωτ} dτ = F(ω)H(ω).

So, as claimed,

h(t) ∗ f(t) ↔ H(ω)F(ω).
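The property is easy to check numerically by letting the DFT stand in for the Fourier transform (a sketch; the two decaying exponentials and the grid are choices of this illustration, and zero-padding makes the linear and circular convolutions agree):

```python
import numpy as np

# Check h(t)*f(t) <-> H(w)F(w) with the DFT standing in for the Fourier
# transform (both signals decay to ~0 well inside the sampled window).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
h = np.exp(-t)                       # e^{-t} u(t)
f = np.exp(-2 * t)                   # e^{-2t} u(t)

y = np.convolve(h, f)[:len(t)] * dt  # time-domain convolution

n = 2 * len(t)                       # zero-pad so linear conv == circular conv
Y_direct = np.fft.fft(y, n) * dt
Y_product = (np.fft.fft(h, n) * dt) * (np.fft.fft(f, n) * dt)

max_err = np.max(np.abs(Y_direct - Y_product))
print(max_err)                       # tiny compared with |Y(0)| = 1/2
```

At ω = 0 the product equals H(0)F(0) = 1 · (1/2), matching the area under y(t).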

It similarly is true that the Fourier transform of

f(t) ∗ h(t) ≡ ∫_{−∞}^{∞} f(τ)h(t − τ) dτ

is

F(ω)H(ω) = H(ω)F(ω).

This indicates that

h(t) ∗ f(t) = f(t) ∗ h(t),

which means that the convolution operation is commutative. Other properties of
convolution will be discussed in the next section.
We can verify the Fourier frequency-convolution property (item 14 in Table 7.1),

f(t)g(t) ↔ (1/2π) F(ω) ∗ G(ω),

in a similar manner, except by starting with the inverse Fourier transform integral of
F(ω) ∗ G(ω). In this property (multiplication in the time domain implies convolution
in the frequency domain), the convolution

F(ω) ∗ G(ω) ≡ ∫_{−∞}^{∞} F(Ω)G(ω − Ω) dΩ,

can be replaced with

F(ω) ∗ G(ω) = G(ω) ∗ F(ω) ≡ ∫_{−∞}^{∞} G(Ω)F(ω − Ω) dΩ,

because convolution is a commutative operation.

9.1.2 The meaning and properties of convolution


To appreciate the meaning of the convolution operation

h(t) ∗ f(t) = ∫_{−∞}^{∞} h(τ)f(t − τ) dτ,

let us first examine

y(t) ≡ h(t) ∗ f(t)

at some specific instant in time, say, at t = 2. Evaluating y(t) at t = 2, we obtain

y(2) = [h(t) ∗ f(t)]|_{t=2} = ∫_{−∞}^{∞} h(τ)f(2 − τ) dτ.

This equation indicates that y(2) is a weighted linear superposition of f (2) (corre-
sponding to τ = 0 in f (2 − τ )) and all other values of f (t) before and after t = 2
(corresponding to positive and negative τ in f (2 − τ ), respectively) with different
weights h(τ ); f (2) is weighted by h(0), f (1) by h(1), f (0) by h(2), f (−1) by h(3),
and so forth. The same interpretation, of course, holds for y(t) at every value of t;
y(t) = h(t) ∗ f (t) is a weighted linear superposition of present (τ = 0), past (τ > 0),
and future (τ < 0) values f (t − τ ) of the signal f (t) with h(τ ) weightings.
Table 9.1 lists some of the important properties of convolution. The first three
properties indicate that convolution is commutative (as we already have seen), distributive,
and associative. Convolution is distributive because

f(t) ∗ (g(t) + h(t)) = ∫_{−∞}^{∞} f(τ)(g(t − τ) + h(t − τ)) dτ
                     = ∫_{−∞}^{∞} f(τ)g(t − τ) dτ + ∫_{−∞}^{∞} f(τ)h(t − τ) dτ
                     = f(t) ∗ g(t) + f(t) ∗ h(t).

To show that convolution is associative, we can use the Fourier time-convolution
property to note that

f(t) ∗ (g(t) ∗ h(t)) ↔ F(ω)(G(ω)H(ω)) = (F(ω)G(ω))H(ω)



Commutative    h(t) ∗ f(t) = f(t) ∗ h(t)
Distributive   f(t) ∗ (g(t) + h(t)) = f(t) ∗ g(t) + f(t) ∗ h(t)
Associative    f(t) ∗ (g(t) ∗ h(t)) = (f(t) ∗ g(t)) ∗ h(t)
Shift          h(t) ∗ f(t) = y(t) ⇒ h(t − t_o) ∗ f(t) = h(t) ∗ f(t − t_o) = y(t − t_o)
Derivative     h(t) ∗ f(t) = y(t) ⇒ (d/dt)h(t) ∗ f(t) = h(t) ∗ (d/dt)f(t) = (d/dt)y(t)
Reversal       h(t) ∗ f(t) = y(t) ⇒ h(−t) ∗ f(−t) = y(−t)
Start-point    If h(t) = 0 for t < t_sh and f(t) = 0 for t < t_sf,
               then y(t) = h(t) ∗ f(t) = 0 for t < t_sy = t_sh + t_sf.
End-point      If h(t) = 0 for t > t_eh and f(t) = 0 for t > t_ef,
               then y(t) = h(t) ∗ f(t) = 0 for t > t_ey = t_eh + t_ef.
Width          h(t) ∗ f(t) = y(t) ⇒ T_y = T_h + T_f,
               where T_h, T_f, and T_y denote the widths of h(t), f(t), and y(t).

Table 9.1 Convolution and its properties.

as well as

(f (t) ∗ g(t)) ∗ h(t) ↔ (F (ω)G(ω))H (ω).

Thus, by the uniqueness of the Fourier transform, it follows that

f (t) ∗ (g(t) ∗ h(t)) = (f (t) ∗ g(t)) ∗ h(t).

The convolution shift property also can be verified using the properties of the Fourier
transform: Since

f(t − t_o) ↔ F(ω)e^{−jωt_o},

it follows that

h(t) ∗ f(t − t_o) ↔ H(ω)F(ω)e^{−jωt_o} ≡ Y(ω)e^{−jωt_o},

where Y(ω) = H(ω)F(ω) has inverse Fourier transform y(t) = h(t) ∗ f(t). But the
inverse Fourier transform of Y(ω)e^{−jωt_o} is y(t − t_o), so

h(t) ∗ f(t − t_o) = y(t − t_o),

as claimed. Also, using the commutative property of convolution, we can see that

h(t − t_o) ∗ f(t) = y(t − t_o)

must be true.

To verify the derivative property, note that

(d/dt)y(t) = (d/dt)[h(t) ∗ f(t)] ↔ jω[H(ω)F(ω)] = H(ω)[jωF(ω)],

which implies that

h(t) ∗ (df/dt) = dy/dt.

Likewise,

(dh/dt) ∗ f(t) = dy/dt

is true, since convolution is commutative.

Example 9.3
Given that

f(t) ∗ g(t) = p(t),

where p(t) = Δ(t/2) (as shown in Figure 9.2a), determine and plot

c(t) = f(t) ∗ (g(t) − g(t − 2)).

Solution Using the distributive property, we find that

c(t) = f(t) ∗ (g(t) − g(t − 2)) = f(t) ∗ g(t) − f(t) ∗ g(t − 2).

Since f(t) ∗ g(t) = p(t), the shift property indicates that

f(t) ∗ g(t − 2) = p(t − 2).

Therefore,

c(t) = p(t) − p(t − 2) = Δ(t/2) − Δ((t − 2)/2),

which is plotted in Figure 9.2b.

We will not prove the start-point, end-point, and width properties, but after every
example that follows you should check that the convolution results are consistent with
these properties. The width property is relevant only when both h(t) and f(t) have
finite widths over which they have nonzero values. As an example, h(t) = rect(t) has
unit width (width = 1), because outside the interval −1/2 < t < 1/2 the value of rect(t) is

Figure 9.2 (a) p(t) = f(t) ∗ g(t) = Δ(t/2), and (b) c(t) = f(t) ∗ (g(t) − g(t − 2)).

zero. Likewise, f(t) = Δ(t/2) has a width of two units. Hence, the width property tells
us that if h(t) = rect(t) and f(t) = Δ(t/2) were convolved, the result y(t) would be
1 + 2 = 3 units wide. Also, functions h(t) = rect(t) and f(t) = Δ(t/2) start at times
t = −1/2 and t = −1, respectively, which means that the convolution rect(t) ∗ Δ(t/2)
will start at time t = (−1/2) + (−1) = −3/2 (according to the start-point property).

Example 9.4
Given that

c(t) = rect((t − 5)/2) ∗ Δ((t − 8)/4),

determine the width and start-time of c(t).
Solution Since the widths of rect((t − 5)/2) and Δ((t − 8)/4) are 2 and 4, respectively,
the width of c(t) must be 2 + 4 = 6. Since the start times of rect((t − 5)/2)
and Δ((t − 8)/4) are 4 and 6, respectively, the start time of c(t) must be 4 + 6 = 10.
You should try to find the convolution c(t) = rect((t − 5)/2) ∗ Δ((t − 8)/4) after
reading the next section (see Exercise Problem 9.5) and then verify the
width and start time.
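The width and start-point claims can be checked numerically with sampled versions of the two pulses (a sketch; the triangle is modeled as max(1 − |t − 8|/2, 0), and the 1e−6 support threshold is an arbitrary choice of this illustration):

```python
import numpy as np

# rect((t-5)/2) is nonzero on (4,6); Delta((t-8)/4) is a unit triangle
# on (6,10).  Their convolution should occupy (10,16): start 10, width 6.
dt = 0.002
t = np.arange(0.0, 20.0, dt)
rect = ((t > 4) & (t < 6)).astype(float)
tri = np.maximum(1 - np.abs(t - 8) / 2, 0) * ((t > 6) & (t < 10))

c = np.convolve(rect, tri)[:len(t)] * dt

support = t[c > 1e-6]
print(support[0], support[-1])   # ~10 and ~16
```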

1  h(t) ∗ u(t) = ∫_{−∞}^{t} h(τ) dτ for any h(t)
2  rect(t/T) ∗ rect(t/T) = T Δ(t/2T)
3  u(t) ∗ u(t) = t u(t)
4  h(t) ∗ δ(t) = h(t) for any h(t)
5  h(t) ∗ δ(t − t_o) = h(t − t_o) for any h(t)

Table 9.2 A short list of frequently encountered convolutions. Signals δ(t) and
δ(t − t_o) will be discussed in Section 9.2.

9.1.3 Convolution examples and graphical convolution

Example 9.5
Find the convolution

y(t) = u(t) ∗ e^t.

Solution Because convolution is commutative, y(t) = u(t) ∗ e^t = e^t ∗ u(t).
Therefore, using the general result

h(t) ∗ u(t) = ∫_{−∞}^{t} h(τ) dτ

established in Example 9.1 (the same result is also included in Table 9.2,
which lists some commonly encountered convolutions), we find that

y(t) = e^t ∗ u(t) = ∫_{−∞}^{t} e^τ dτ = e^t − e^{−∞} = e^t.

Example 9.6
Given that

h(t) = e^{−t} u(t),
f(t) = e^{−2t} u(t),

and y(t) = h(t) ∗ f(t), determine y(1).

Solution Since we can calculate y(t) with the formula

y(t) = ∫_{−∞}^{∞} h(τ)f(t − τ) dτ = ∫_{−∞}^{∞} [e^{−τ} u(τ)][e^{−2(t−τ)} u(t − τ)] dτ,

for any value of t, it follows that

y(1) = ∫_{−∞}^{∞} [e^{−τ} u(τ)][e^{−2(1−τ)} u(1 − τ)] dτ = ∫_{0}^{1} e^{−τ} e^{−2(1−τ)} dτ.

To obtain the last step we used the fact that

u(τ)u(1 − τ) = { 1, 0 < τ < 1;  0, τ < 0 or τ > 1 },

and replaced u(τ)u(1 − τ) by 1 after changing the integration limits to 0
and 1 as shown. Continuing from where we left off, we get

y(1) = ∫_{0}^{1} e^{−τ} e^{−2(1−τ)} dτ = e^{−2} ∫_{0}^{1} e^{τ} dτ
     = e^{−2}(e^1 − 1) = e^{−1} − e^{−2} ≈ 0.233.

Example 9.7
Repeat Example 9.6 to determine y(t) = h(t) ∗ f(t) for all values of t. The
signals h(t) and f(t) are plotted in Figures 9.3a and b.
Solution Once again,

y(t) = ∫_{−∞}^{∞} h(τ)f(t − τ) dτ = ∫_{−∞}^{∞} e^{−τ} u(τ) e^{−2(t−τ)} u(t − τ) dτ
     = e^{−2t} ∫_{−∞}^{∞} e^{τ} u(τ)u(t − τ) dτ.

Now, generalizing from Example 9.6, we obtain

u(τ)u(t − τ) = { 1, 0 < τ < t;  0, τ < 0 or τ > t }.

This follows because u(t − τ) = 0 for τ > t, as shown in Figure 9.3c.
Therefore, assuming that t > 0, we get

y(t) = e^{−2t} ∫_{−∞}^{∞} e^{τ} u(τ)u(t − τ) dτ = e^{−2t} ∫_{0}^{t} e^{τ} dτ
     = e^{−2t}(e^t − 1) = e^{−t} − e^{−2t}.



Figure 9.3 (a) h(t) = e^{−t} u(t), (b) f(t) = e^{−2t} u(t), (c) u(t − τ) vs τ, and (d)
y(t) = h(t) ∗ f(t) = (e^{−t} − e^{−2t})u(t). (e)–(i) are self-explanatory: f(τ), h(τ),
f(−τ), f(1 − τ), and h(τ)f(1 − τ), each plotted versus τ.

For t < 0, u(τ)u(t − τ) is identically zero, so that y(t) = 0. Therefore, a
formula for y(t) that holds for all t is

y(t) = (e^{−t} − e^{−2t})u(t),

which is plotted in Figure 9.3d.
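Examples 9.6 and 9.7 can be verified numerically: a Riemann-sum convolution should track the closed form (e^{−t} − e^{−2t})u(t) to within roughly the step size (a sketch; numpy and the grid are choices of this illustration):

```python
import numpy as np

# e^{-t}u(t) * e^{-2t}u(t) should equal (e^{-t} - e^{-2t})u(t);
# in particular y(1) = e^{-1} - e^{-2} ~ 0.233 (Example 9.6).
dt = 0.001
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)
f = np.exp(-2 * t)

y = np.convolve(h, f)[:len(t)] * dt
y_exact = np.exp(-t) - np.exp(-2 * t)

max_err = np.max(np.abs(y - y_exact))
y1 = y[int(round(1.0 / dt))]
print(max_err, y1)
```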

Figures 9.3e through 9.3i will provide some further insight about the convolution
results of Examples 9.6 and 9.7: Figures 9.3e and 9.3f show plots of f(τ) = e^{−2τ} u(τ)
and h(τ) = e^{−τ} u(τ) versus τ. Furthermore, Figures 9.3g and 9.3h show plots of
f(−τ), which is a flipped (reversed) version of Figure 9.3e, and f(1 − τ), which is
the same as Figure 9.3g except shifted to the right by 1 unit in τ. The "area under"
the product of the curves shown in Figures 9.3f and 9.3h, that is, the area under the
h(τ)f(1 − τ) curve shown in Figure 9.3i, is the result of the convolution calculation
∫_{−∞}^{∞} h(τ)f(1 − τ) dτ performed in Example 9.6 and also the result of Example 9.7
evaluated at t = 1. For other values of t, the convolution result is still the area under
the product curve h(τ)f(t − τ).
As you can see, visualizing the waveforms at the various steps in the convolution
process can be challenging. When contemplating evaluation of the convolution
integral

y(t) = ∫_{−∞}^{∞} h(τ)f(t − τ) dτ,

we strongly recommend that, prior to performing the calculation, you make the
following series of plots:

(1) Plot f (τ ) versus τ and h(τ ) versus τ (looking ahead to Figures 9.4a and 9.4b,
for instance), where τ is the variable of integration.
(2) Plot f (−τ ) versus τ by flipping the plot of f (τ ) about the vertical axis (as in
Figure 9.4c).
(3) Plot f (t − τ ) versus τ by shifting the graph of step 2 to the right by some
amount t, as shown in Figure 9.4d. (Note that the τ = −1 mark in Figure 9.4c
becomes the τ = t − 1 mark in Figure 9.4d.)

The next example illustrates how these plots are helpful in performing convolu-
tion, in this case a convolution of the waveforms 2rect(t − 5.5) and u(t − 1).

Figure 9.4 (a) f(τ) vs τ, (b) h(τ) vs τ, (c) f(−τ) vs τ, (d) f(t − τ) vs τ, and (e)
y(t) = h(t) ∗ f(t). See Example 9.8.

Example 9.8
Given that h(t) = 2rect(t − 5.5) and f (t) = u(t − 1), determine y(t) =
h(t) ∗ f (t).
Solution The plots of h(τ ) and f (t − τ ) of the integrand

h(τ )f (t − τ )

of the convolution integral

∫_{−∞}^{∞} h(τ)f(t − τ) dτ

are shown in Figures 9.4b and 9.4d, respectively. (We obtained these plots
by following steps 1 through 3 outlined previously). The convolution inte-
gral is simply the area under the product of these two curves shown in

Figures 9.4b and 9.4d. As explained next, and as seen from the figures, the
area under the product curve depends on the value of t:
• First, for t − 1 < 5, the h(τ ) and f (t − τ ) curves do not “overlap,”
their product is zero, and therefore, h(t) ∗ f (t) = 0 for t < 6.
• Next, for 5 < t − 1 < 6, the same two curves overlap between τ = 5
and τ = t − 1 > 5. In this interval only, the product of the two curves
is nonzero and equals 2. The area under the product of the curves is
2 × ((t − 1) − 5) = 2(t − 6) for 6 < t < 7, and hence

h(t) ∗ f (t) = 2(t − 6)

for the same time interval.


• Finally, for t − 1 > 6, or t > 7, the two curves fully overlap, and the
area under their product is 2. Hence, for t > 7,

h(t) ∗ f (t) = 2.

Therefore,

y(t) = 2 rect(t − 5.5) ∗ u(t − 1) = { 0, t < 6;  2(t − 6), 6 < t < 7;  2, t > 7 },

as shown in Figure 9.4e.
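The piecewise answer can be confirmed with a discrete convolution of sampled versions of 2rect(t − 5.5) and u(t − 1) (a sketch; the grid is a choice of this illustration):

```python
import numpy as np

# y = 2rect(t-5.5) * u(t-1): zero for t < 6, the ramp 2(t-6) on
# 6 < t < 7, and the constant 2 for t > 7.
dt = 0.002
t = np.arange(0.0, 12.0, dt)
h = 2.0 * ((t > 5) & (t < 6))
f = (t > 1).astype(float)

y = np.convolve(h, f)[:len(t)] * dt

def y_at(tq):
    return y[int(round(tq / dt))]

print(y_at(5.5), y_at(6.5), y_at(8.0))   # ~0, ~1, ~2
```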

Example 9.9
Figures 9.5a and 9.5b show plots of two waveforms f (τ ) and h(τ ), versus τ .
Determine the convolution y(t) = h(t) ∗ f (t).
Solution First, flipping Figure 9.5a, we obtain f (−τ ) shown in Figure 9.5c.
Next, we shift Figure 9.5c to the right by an amount t to obtain the graph
of f (t − τ ), shown in Figure 9.5d. The convolution y(t) = h(t) ∗ f (t) is
the area under the product of the curves shown in Figures 9.5b and 9.5d,
which, of course, depends on the value of t as in Example 9.8.
• For t < 0, the curves do not overlap, and hence, y(t) = h(t) ∗ f (t) =
0.
• For 0 < t < 1, the curves partially overlap, between τ = 0 and t, and
hence,

y(t) = ∫_{0}^{t} 2τ dτ = t².

Note that in obtaining the result, we used the fact that f(t − τ) = 1
and h(τ) = 2τ, which are valid for 0 < τ < t < 1.

Figure 9.5 (a) f(τ) vs τ, (b) h(τ) vs τ, (c) f(−τ) vs τ, (d) f(t − τ) vs τ, and (e)
y(t) = h(t) ∗ f(t). See Example 9.9.

• For 1 < t < 2, the curves overlap between τ = t − 1 and τ = t, and
hence,

y(t) = ∫_{t−1}^{t} 2τ dτ = t² − (t − 1)².

• For 2 < t < 3, the curves partially overlap, only between τ = t − 1
and τ = 2, and hence,

y(t) = ∫_{t−1}^{2} 2τ dτ = 4 − (t − 1)².

• Finally, for t > 3, there is no overlap and y(t) = 0.



Thus, overall,

y(t) = h(t) ∗ f(t) = { 0, t < 0;  t², 0 < t < 1;  t² − (t − 1)² = 2t − 1, 1 < t < 2;
                       4 − (t − 1)², 2 < t < 3;  0, t > 3 },

as plotted in Figure 9.5e.
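Reading f and h off Figures 9.5a and 9.5b as f(t) = 1 on (0, 1) and h(t) = 2t on (0, 2) (an assumption of this sketch, consistent with the overlap limits used in the bullets), the piecewise result can be spot-checked numerically:

```python
import numpy as np

# Spot-check Example 9.9: y(0.5) = 0.25, y(1.5) = 2(1.5) - 1 = 2,
# y(2.5) = 4 - (1.5)^2 = 1.75.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
f = ((t > 0) & (t < 1)).astype(float)
h = 2 * t * ((t > 0) & (t < 2))

y = np.convolve(h, f)[:len(t)] * dt

for tq in (0.5, 1.5, 2.5):
    print(tq, y[int(round(tq / dt))])
```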

Example 9.10
Determine y(t) = h(t) ∗ h(t) where h(t) = rect(t).
Solution Figures 9.6a and 9.6b display h(τ) = rect(τ) and h(t − τ) =
rect(t − τ) versus τ, respectively.
• For t + 0.5 < −0.5, or t < −1, the two curves do not overlap and
h(t) ∗ h(t) = 0.
• For −0.5 < t + 0.5 < 0.5, or −1 < t < 0, the curves overlap between
τ = −0.5 and t + 0.5. Hence, for −1 < t < 0,

h(t) ∗ h(t) = ∫_{−0.5}^{t+0.5} 1 · 1 dτ = t + 1.

Figure 9.6 (a) h(τ) = rect(τ) vs τ, (b) h(t − τ) = rect(t − τ) vs τ, and (c)
rect(t) ∗ rect(t) = Δ(t/2).

• Next, for −0.5 < t − 0.5 < 0.5, or 0 < t < 1, the overlap is between
τ = t − 0.5 and 0.5. Hence, for 0 < t < 1,

h(t) ∗ h(t) = ∫_{t−0.5}^{0.5} 1 · 1 dτ = 1 − t.

• Finally, for t − 0.5 > 0.5, or t > 1, there is no overlap and thus
h(t) ∗ h(t) = 0.

Overall,

h(t) ∗ h(t) = rect(t) ∗ rect(t) = { 0, t < −1;  t + 1, −1 < t < 0;
                                    1 − t, 0 < t < 1;  0, t > 1 } = Δ(t/2),

as shown in Figure 9.6c.
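The rect ∗ rect = triangle identity is also easy to confirm numerically (a sketch; numpy's mode="same" keeps the output on the same time grid, up to a one-sample offset absorbed by the tolerance):

```python
import numpy as np

# rect(t) * rect(t) should equal the unit triangle Delta(t/2)
# = max(1 - |t|, 0).
dt = 0.001
t = np.arange(-3.0, 3.0, dt)
rect = (np.abs(t) < 0.5).astype(float)

y = np.convolve(rect, rect, mode="same") * dt
tri = np.maximum(1 - np.abs(t), 0)

max_err = np.max(np.abs(y - tri))
print(max_err)   # limited by the grid spacing dt
```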

The result of Example 9.10 is a special case of entry 2 in Table 9.2. According to
the entry, the self-convolution of a rectangle of width T and unit height is a triangle
of width 2T and apex height T; that is,

rect(t/T) ∗ rect(t/T) = T Δ(t/2T).

Let us apply the Fourier time-convolution property to this convolution identity. Since
from Table 7.2 we know

rect(t/T) ↔ T sinc(ωT/2),

the time-convolution property implies that

rect(t/T) ∗ rect(t/T) ↔ T² sinc²(ωT/2).

But, we just saw that

rect(t/T) ∗ rect(t/T) = T Δ(t/2T);

therefore,

T Δ(t/2T) ↔ T² sinc²(ωT/2).

Letting τ = 2T, this relation reduces to

Δ(t/τ) ↔ (τ/2) sinc²(ωτ/4),

which verifies entry 9 in Table 7.2. Furthermore, entry 10 in the same table can be
obtained from this result by using the symmetry property of the Fourier transform.

Example 9.11
Given that h(t) = rect(t) and f(t) = rect(t/2), determine y(t) = h(t) ∗ f(t).
Solution This problem is easy if we notice that

rect(t/2) = rect(t + 1/2) + rect(t − 1/2);

that is, a rect 2 units wide is the same as a pair of side-by-side shifted rects
with unit widths, shifted by ±1/2 units. Thus,

y(t) = rect(t) ∗ rect(t/2)
     = rect(t) ∗ (rect(t + 1/2) + rect(t − 1/2))
     = Δ((t + 1/2)/2) + Δ((t − 1/2)/2),

by the distributive and time-shift properties of convolution and the result
of Example 9.10. Figures 9.7a through 9.7c show the plots of Δ((t ± 1/2)/2) and

Figure 9.7 (a) Δ((t + 0.5)/2), (b) Δ((t − 0.5)/2), and (c) rect(t) ∗ rect(t/2) =
Δ((t + 0.5)/2) + Δ((t − 0.5)/2).

y(t). You should try finding the same convolution directly, without first
decomposing rect(t/2) into a sum of two rects.

Example 9.12
Convolve the two functions shown in Figures 9.8a and 9.8b.
Solution Clearly, according to Figure 9.8a, f(t) = rect((t − 1)/2). Also, according
to Figure 9.8b, h(t) = f(t) − f(t − 2). Thus, the required convolution is

h(t) ∗ f(t) = f(t) ∗ h(t) = f(t) ∗ (f(t) − f(t − 2))
            = f(t) ∗ f(t) − f(t) ∗ f(t − 2).

Next, we observe that since

rect(t/2) ∗ rect(t/2) = 2 Δ(t/4),

we can apply the time-shift property of convolution to write

f(t) ∗ f(t) = rect((t − 1)/2) ∗ rect((t − 1)/2) = 2 Δ((t − 2)/4).

Figure 9.8 (a) f(t), (b) h(t), and (c) their convolution f(t) ∗ h(t) = h(t) ∗ f(t).

Using this result and applying the time-shift property once again, we see
that

h(t) ∗ f(t) = f(t) ∗ f(t) − f(t) ∗ f(t − 2)
            = 2 Δ((t − 2)/4) − 2 Δ((t − 4)/4),

as shown in Figure 9.8c.

Example 9.13
Suppose that the input of an LTI system having frequency response

H(ω) = 1/(1 + jω)

is f(t) = rect(t). Determine the zero-state response y(t) by using the
convolution formula y(t) = h(t) ∗ f(t), where h(t) is the inverse Fourier
transform of H(ω).
Solution From Table 7.2 we note that

e^{−t} u(t) ↔ 1/(1 + jω).

Thus, H(ω) = 1/(1 + jω) implies that h(t) = e^{−t} u(t). We now could proceed
by computing the convolution directly (making the required plots first).
Instead, we begin by noting that

f(t) = rect(t) = u(t + 1/2) − u(t − 1/2).

Thus, the system zero-state response can be found as

y(t) = h(t) ∗ f(t) = e^{−t} u(t) ∗ rect(t) = e^{−t} u(t) ∗ [u(t + 1/2) − u(t − 1/2)]
     = q(t + 1/2) − q(t − 1/2),

where

q(t) ≡ e^{−t} u(t) ∗ u(t) = ∫_{−∞}^{t} e^{−τ} u(τ) dτ = u(t)(1 − e^{−t}).

Thus,

y(t) = u(t + 1/2)(1 − e^{−(t+1/2)}) − u(t − 1/2)(1 − e^{−(t−1/2)}),

where we have made use of the time-shift property of convolution. The
resulting output y(t) is shown in Figure 9.9. Notice that the low-pass filter
"smooths" the rectangular input signal into the shape shown in the figure.

Figure 9.9 The response of low-pass filter H(ω) = 1/(1 + jω) to input
f(t) = rect(t).
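The closed form of Example 9.13 can be compared against a brute-force discrete convolution (a sketch; note the index shift that realigns the convolution output with the time axis, a detail of this illustration rather than of the text):

```python
import numpy as np

# h(t) = e^{-t}u(t) driven by f(t) = rect(t); compare the discrete
# convolution with
# y(t) = u(t+1/2)(1-e^{-(t+1/2)}) - u(t-1/2)(1-e^{-(t-1/2)}).
dt = 0.001
t = np.arange(-2.0, 8.0, dt)
h = np.exp(-t) * (t >= 0)
f = (np.abs(t) < 0.5).astype(float)

# Sample m of the full convolution corresponds to time 2*t[0] + m*dt,
# so shifting by -t[0]/dt samples realigns it with t.
shift = int(round(-t[0] / dt))
y = np.convolve(h, f)[shift:shift + len(t)] * dt

def u(x):
    return (x >= 0).astype(float)

y_exact = (u(t + 0.5) * (1 - np.exp(-(t + 0.5)))
           - u(t - 0.5) * (1 - np.exp(-(t - 0.5))))

max_err = np.max(np.abs(y - y_exact))
print(max_err)
```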

9.2 Impulse δ(t)


9.2.1 Definition and properties of the impulse
Convolution is a mathematical operation, just as is multiplication. In the case of
multiplication, there is a special number called one. Multiplying any number by one
just reproduces the original number. That is, multiplication by one is the identity
operation; it does not change the value of a number. In the case of convolution, we
might ask whether there is any signal that plays a role analogous to that of one. More
precisely, is there any signal that, when convolved with an arbitrary signal f (t), always
reproduces f (t)? In mathematical terms, we are seeking a signal p(t) satisfying

p(t) ∗ f (t) = f (t).

It turns out that, in general, there is no waveform that exactly satisfies this identity for
an arbitrary f (t). However, there are waveforms that will produce an approximation
that is as fine as we like. In particular, consider the rectangular pulse signal defined by

p_ε(t) ≡ (1/ε) rect(t/ε).

(See Figure 9.10.) This pulse has unit area and, for small enough ε, the pulse is very
tall and narrow. In the limit, as ε approaches zero, it can be shown that¹

lim_{ε→0} {p_ε(t) ∗ f(t)} = f(t),

¹Verification:

p_ε(t) ∗ f(t) ≡ (1/ε) rect(t/ε) ∗ f(t) = ∫_{−∞}^{∞} (1/ε) rect(τ/ε) f(t − τ) dτ = ( ∫_{−ε/2}^{ε/2} f(t − τ) dτ ) / ε.

For ε = 0, p_ε(t) ∗ f(t) is indeterminate because both the numerator and denominator of the last expression
reduce to zero. However, ∫_{−ε/2}^{ε/2} f(t − τ) dτ ≈ ε f(t) for small ε, and therefore, as claimed,

lim_{ε→0} {p_ε(t) ∗ f(t)} = lim_{ε→0} ( ∫_{−ε/2}^{ε/2} f(t − τ) dτ ) / ε = lim_{ε→0} ε f(t)/ε = f(t),

which also can be verified more rigorously using l'Hôpital's rule.

Figure 9.10 Pulse signal p_ε(t) ≡ (1/ε) rect(t/ε). Note that the area under p_ε(t) is 1 for
all values of the pulse width ε, and as ε decreases p_ε(t) gets "thinner" and
"taller." Function p_ε(t) has the property that, given an arbitrary function f(t),
lim_{ε→0} {p_ε(t) ∗ f(t)} = f(t).

which indicates that p_ε(t), for very small ε, is essentially the identity pulse we are
seeking.
It is convenient (and useful, as we shall see) to denote the left-hand side of this
identity, lim_{ε→0} {p_ε(t) ∗ f(t)}, as δ(t) ∗ f(t), and think of δ(t) as a special signal
(essentially, p_ε(t) for exceedingly small ε) having the property

δ(t) ∗ f(t) = f(t).

(See entry 4 in Table 9.2.) Other properties of δ(t), some of which are listed in
Table 9.3, include

δ(t) ↔ 1;

this Fourier transform property of δ(t) is a consequence of applying the Fourier
time-convolution property to the identity δ(t) ∗ f(t) = f(t). Furthermore, given that
δ(t) ↔ 1, the inverse Fourier transform of 1 must be

δ(t) = (1/2π) ∫_{−∞}^{∞} e^{jωt} dω.
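The limiting behavior of p_ε(t) can be watched directly (a sketch; the pulse is renormalized on the grid so its discrete area is exactly 1, and a Gaussian stands in for the arbitrary f(t); all of these are choices of this illustration):

```python
import numpy as np

# As eps shrinks, p_eps(t) * f(t) approaches f(t).
dt = 0.002
t = np.arange(-5.0, 5.0, dt)
f = np.exp(-t**2)                    # a smooth test signal

def smoothed(eps):
    mask = (np.abs(t) < eps / 2).astype(float)
    p = mask / (mask.sum() * dt)     # unit-area pulse on the grid
    shift = int(round(-t[0] / dt))   # realign full convolution with t
    return np.convolve(p, f)[shift:shift + len(t)] * dt

errs = [np.max(np.abs(smoothed(e) - f)) for e in (1.0, 0.1, 0.01)]
print(errs)                          # decreases toward 0
```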
We will refer to the signal δ(t) with the properties shown in Table 9.3 as an
impulse. The properties of the impulse are a set of instructions specifying how
various calculations involving the impulse and other signals can be performed. These
instructions are necessary for computational purposes, since a numerical interpretation
of δ(t) = (1/2π) ∫_{−∞}^{∞} e^{jωt} dω is not possible because the integral ∫_{−∞}^{∞} e^{jωt} dω does
not converge. Signals such as δ(t) that lack a numerical interpretation (because, in
essence, they are non-realizable) are known as distributions² (instead of functions)

²The impulse δ(t) and its properties were first used as shortcuts by Paul Dirac in the 1920s in his
quantum mechanics calculations. Soon after, a new branch of mathematics, known as distribution theory,
was developed to provide a firm foundation for the impulse and its applications. The impulse distribution
δ(t) also is known as the Dirac delta, or simply, delta.

Name               Impulse properties                         Shifted-impulse properties

Convolution        δ(t) ∗ f(t) = f(t)                         δ(t − t_o) ∗ f(t) = f(t − t_o)
Sifting            ∫_{−∞}^{∞} δ(t)f(t) dt = f(0) and         ∫_{−∞}^{∞} δ(t − t_o)f(t) dt = f(t_o) and
                   ∫_a^b δ(t)f(t) dt =                        ∫_a^b δ(t − t_o)f(t) dt =
                   { f(0) if a < 0 < b; 0, otherwise }        { f(t_o) if a < t_o < b; 0, otherwise }
Sampling           f(t)δ(t) = f(0)δ(t)                        f(t)δ(t − t_o) = f(t_o)δ(t − t_o)
Symmetry           δ(−t) = δ(t)                               δ(t_o − t) = δ(t − t_o)
Scaling            δ(at) = (1/|a|)δ(t), a ≠ 0                 δ(a(t − t_o)) = (1/|a|)δ(t − t_o), a ≠ 0
Area               ∫_{−∞}^{∞} δ(t) dt = 1 and                ∫_{−∞}^{∞} δ(t − t_o) dt = 1 and
                   ∫_a^b δ(t) dt =                            ∫_a^b δ(t − t_o) dt =
                   { 1 if a < 0 < b; 0, otherwise }           { 1 if a < t_o < b; 0, otherwise }
Definite integral  ∫_{−∞}^{t} δ(τ) dτ = u(t)                 ∫_{−∞}^{t} δ(τ − t_o) dτ = u(t − t_o)
Unit-step
derivative         du/dt = δ(t)                               (d/dt)u(t − t_o) = δ(t − t_o)
Derivative         (d/dt)δ(t) ∗ f(t) = (d/dt)f(t)            (d/dt)δ(t − t_o) ∗ f(t) = (d/dt)f(t − t_o)
Fourier transform  δ(t) ↔ 1                                   δ(t − t_o) ↔ e^{−jωt_o}
Graphical symbol   arrow of height 1 at t = 0                 arrow of height 1 at t = t_o

Table 9.3 The properties and graphical symbols of the impulse and shifted
impulse. The derivative of the impulse also is known as the doublet. Further properties
of the doublet δ′(t) ≡ (d/dt)δ(t), e.g., δ′(−t) = −δ′(t), etc., can be enumerated
starting with the derivative property for the impulse given in the table.

and are described in terms of their properties such as those listed in Table 9.3 (instead
of with plots or tabulated values).
The properties of the impulse distribution δ(t) can be viewed to be a consequence
of its convolution property

δ(t) ∗ f(t) = f(t)

and should be interpreted as "shorthands" for various expressions involving³ p_ε(t),
just as δ(t) ∗ f(t) = f(t) is itself such a shorthand. For instance, by applying the
shift property of convolution to the statement δ(t) ∗ f(t) = f(t), we obtain

δ(t − t_o) ∗ f(t) = f(t − t_o),

which is a shorthand for

lim_{ε→0} {p_ε(t − t_o) ∗ f(t)} = f(t − t_o).

It is convenient to think of δ(t − t_o) as a signal, a new distribution known as a shifted
impulse. This property of the shifted impulse is listed in the upper right corner of
Table 9.3.
Writing out the same property as an explicit convolution integral, we obtain

∫_{−∞}^{∞} δ(τ − t_o)f(t − τ) dτ = f(t − t_o).

For the special case f(t) = 1, this expression reduces to

∫_{−∞}^{∞} δ(τ − t_o) dτ = 1.

This result is known as the area property of the shifted impulse, and it appears in
Table 9.3 in the form ∫_{−∞}^{∞} δ(t − t_o) dt = 1. For the case t_o = 0, the same expression
yields the area property of the impulse δ(t).
We also can evaluate the foregoing convolution integral for arbitrary f(t) at time
t = 0 to obtain

∫_{−∞}^{∞} δ(τ − t_o)f(−τ) dτ = f(−t_o),

and hence

∫_{−∞}^{∞} δ(τ − t_o)g(τ) dτ = g(t_o),

³Although p_ε(t) was defined here as (1/ε) rect(t/ε), it also can be replaced by other functions, such as
(2/ε)Δ(t/ε), e^{−t²/2ε²}/(√(2π) ε), and (1/ε) sinc(πt/ε), peaking at t = 0 and satisfying the constraint
∫_{−∞}^{∞} p_ε(t) dt = 1.

with g(t) ≡ f(−t). This last expression is known as the sifting property of the shifted
impulse. Its special case for t_o = 0 gives

∫_{−∞}^{∞} δ(τ)g(τ) dτ = g(0),

the sifting property of the impulse. These last two expressions show that the impulse
δ(t) and its shifted version δ(t − t_o) act like sieves and sift out specific values f(0) and
f(t_o) of any function f(t), which they multiply under an integral sign, for instance,
as in

∫_{−∞}^{∞} δ(t − t_o)f(t) dt = f(t_o)

(so long as f(t) is continuous at t = t_o so that f(t_o) is specified). From the previous
discussion, we know that the sifting property of δ(t − t_o) is a shorthand for

lim_{ε→0} { ∫_{−∞}^{∞} p_ε(t − t_o)f(t) dt } = f(t_o).

One consequence of sifting is the sampling property, namely,

δ(t − t_o)f(t) = δ(t − t_o)f(t_o),

because if we were to replace δ(t − t_o)f(t) in the second integral above with
δ(t − t_o)f(t_o), we would obtain

∫_{−∞}^{∞} δ(t − t_o)f(t_o) dt = f(t_o) ∫_{−∞}^{∞} δ(t − t_o) dt = f(t_o) × 1 = f(t_o),

using the area property of the shifted impulse. Thus, the sampling property is consistent
with the sifting property and is valid. For the special case t_o = 0, we obtain

δ(t)f(t) = δ(t)f(0),

which is the sampling property for the impulse. This property of the impulse also
can be viewed as shorthand⁴ for the approximation p_ε(t)f(t) ≈ p_ε(t)f(0), which
we easily can see is valid for sufficiently small ε (provided that f(t) is continuous at
t = 0).
⁴So far we have emphasized the fact that Table 9.3 comprises shorthand statements about the special
pulse function p_ε(t) in the limit as ε → 0. Each statement is, of course, also a property of the distribution
δ(t), expressed in terms of familiar mathematical symbols such as integral, convolution, and equality signs.
A nuance that needs to be appreciated here is that the statements in the table also provide the special
meaning that these mathematical symbols take on when used with distributions instead of regular functions.
For instance, the = sign in the statement δ(t)f(t) = δ(t)f(0) indicates that the distributions on each side of
= have the same effect on any regular function, say, g(t), via any operation such as convolution or integration
defined in Table 9.3 in the very same spirit. The "equality in distribution" that we have just described is
distinct from, say, numerical equality between regular functions, say, cos(ωt) = sin(ωt + π/2).

As a consequence of the sampling property, for example,

cos(t)δ(t) = δ(t)

because cos(0) = 1. Also,

sin(t)δ(t) = 0

since sin(0) = 0,

(1 + t²)δ(t − 1) = 2δ(t − 1)

since (1 + (1)²) = 2, and

(1 + t³)δ(t + 1) = 0

since (1 + (−1)³) = 0.
since (1 + (−1)3 ) = 0.
We will ask you to verify the symmetry and scaling properties in homework
problems. These are very useful properties that give you the freedom to replace δ(−t)
with δ(t), δ(5 − t) with δ(t − 5), δ(−2t) with (1/2)δ(t), etc. For example,

t² δ(2 − t) = t² δ(t − 2) = (2)² δ(t − 2) = 4δ(t − 2).

Example 9.14
Using the Fourier transform property of the impulse, δ(t) ↔ 1, determine

c(t) = a(t) ∗ b(t)

if

a(t) = u(t)

and
B(ω) = 1 − 1/(1 + jω).

Solution Using the fact that

δ(t) ↔ 1

and
e^{−t}u(t) ↔ 1/(1 + jω),
we have

b(t) = δ(t) − e−t u(t).


Section 9.2 Impulse δ(t) 307

Thus,

c(t) = u(t) ∗ b(t) = u(t) ∗ δ(t) − u(t) ∗ e^{−t}u(t)

     = u(t) − e^{−t}u(t) ∗ u(t) = u(t) − (1 − e^{−t})u(t).

In the last equality we obtained the first term by using the convolution prop-
erty of the impulse, whereas the second term represents ∫_{−∞}^{t} e^{−τ}u(τ) dτ.

Note that with further simplification,

c(t) = e^{−t}u(t).
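This answer can also be checked with a discrete approximation of the convolution, representing δ(t) as a single sample of height 1/Δt so that it carries unit area (a numerical sketch; the step size is an arbitrary choice):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 8, dt)                  # both signals are causal
u = np.ones_like(t)                      # u(t) for t >= 0
b = -np.exp(-t)                          # the -e^{-t} u(t) part of b(t)
b[0] += 1.0 / dt                         # one-sample spike of area 1 plays delta(t)

# Riemann-sum approximation of c(t) = u(t) * b(t); expect c(t) = e^{-t} u(t)
c = np.convolve(u, b)[: t.size] * dt
print(np.max(np.abs(c - np.exp(-t))))    # discretization error, order dt
```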

Example 9.15
Verify the Fourier transform property of a shifted impulse.
Solution For the shifted impulse δ(t − to ), the corresponding Fourier
transform is
∫_{−∞}^{∞} δ(t − to) e^{−jωt} dt = e^{−jωto},

as a consequence of the sifting property.

Example 9.16
Given that f (t) = δ(t − to ), determine the energy Wf of signal f (t).
Solution  Since δ(t − to) ↔ e^{−jωto}, F(ω) = e^{−jωto}, and therefore the energy
spectrum for the signal is |F(ω)|² = 1. Hence, using Rayleigh's theorem,
we find that

Wf = (1/2π) ∫_{−∞}^{∞} |F(ω)|² dω = (1/2π) ∫_{−∞}^{∞} 1 dω = ∞.

Thus, δ(t − to ) contains infinite energy and therefore is not an energy signal.
(Consequently, it cannot be generated in the lab.)

The versions of the area and sifting properties discussed earlier include integration
limits −∞ and ∞. However, alternative forms also included in Table 9.3 indicate that
these properties still are valid when the integration limits are finite a and b, so long
as a < to < b. Thus, for instance,

∫_{−5}^{8} δ(t − 2) dt = 1,

∫_{4}^{6} δ(t − 5) 2cos(πt) dt = 2cos(5π) = −2,
308 Chapter 9 Convolution, Impulse, Sampling, and Reconstruction

and

∫_{0−}^{∞} δ(t) e^{−st} dt = e^{−s·0} = 1.

To verify the alternative forms of the sampling property, consider first a function
f(t)w(t) where

w(t) = 1 for a < t < b,  and  w(t) = 0 for t < a and t > b,

is a unit-amplitude window function shown in Figure 9.11a. Starting with the sifting
property

∫_{−∞}^{∞} δ(t − to) f(t) w(t) dt = f(to) w(to),

we can write

∫_{a}^{b} δ(t − to) f(t) dt = f(to) w(to),

since w(t) = 0 outside a < t < b, and 1 within. Now, if a < to < b, as shown in
Figure 9.11a, then w(to) = 1 and

∫_{a}^{b} δ(t − to) f(t) dt = 1 · f(to) = f(to);

otherwise—that is, if to < a or to > b (see Figure 9.11b, as an example)—then
w(to) = 0 and

∫_{a}^{b} δ(t − to) f(t) dt = 0 · f(to) = 0.

Figure 9.11 A unit-amplitude window function w(t) with t = to (a) included within
the window, and (b) excluded from the window.
Section 9.2 Impulse δ(t) 309

Overall,

∫_{a}^{b} δ(t − to) f(t) dt = f(to) if a < to < b, and = 0 if to < a or to > b,

as in Table 9.3.
Use of the modified form of the sifting property with f(t) = 1 leads to the
modified area properties given in Table 9.3. Furthermore, the modified form of the
area property for the shifted impulse allows us to write

∫_{−∞}^{t} δ(τ − to) dτ = 1 for t > to, and = 0 for t < to; that is,

∫_{−∞}^{t} δ(τ − to) dτ = u(t − to)

and

∫_{−∞}^{t} δ(τ) dτ = u(t).

The last two properties, listed in Table 9.3 as definite-integral properties, also
imply that the derivatives of the unit step u(t) and the shifted unit step u(t − to)
can be defined as δ(t) and δ(t − to), respectively. Of course, u(t) and u(t − to) are
discontinuous functions, and therefore they are not differentiable within the ordinary
function space over all t; however, their derivatives can be regarded as the distributions

(d/dt) u(t) = δ(t)   and   (d/dt) u(t − to) = δ(t − to).
dt dt

Example 9.17
Find the derivative of function

y(t) = t²u(t).

Solution  dy/dt = 0 for t < 0, and dy/dt = 2t for t > 0. Patching together these
two results, we can write

dy/dt = 2t u(t)

as the answer. Alternatively, using the product rule of differentiation and
properties of the impulse, we obtain

dy/dt = (d/dt)(t²u(t)) = 2t u(t) + t²(du/dt) = 2t u(t) + t²δ(t)
      = 2t u(t) + 0²·δ(t) = 2t u(t).
310 Chapter 9 Convolution, Impulse, Sampling, and Reconstruction

This second version of the solution is not any faster or better than the first
one, but it works and gives the correct result. Moreover, this second method
is the only safe approach in certain cases, as illustrated by the next example.

Example 9.18
Find the derivative of

z(t) = e^{2t} u(t).

Solution  dz/dt = 0 for t < 0 and dz/dt = 2e^{2t} for t > 0. If we were to patch
these together to write

dz/dt = 2e^{2t} u(t),

we would be wrong. The reason is that z(t) is discontinuous at t = 0, where
its derivative is undefined; so, as a consequence, dz/dt is not the function
2e^{2t}u(t). The integral of 2e^{2τ}u(τ) from −∞ to t, over τ, does not lead to
z(t) = e^{2t}u(t) as it should if 2e^{2t}u(t) were the correct derivative. However,
using the product rule and properties of the impulse, we obtain

dz/dt = (d/dt)(e^{2t}u(t)) = 2e^{2t}u(t) + e^{2t}(du/dt) = 2e^{2t}u(t) + e^{2t}δ(t)
      = 2e^{2t}u(t) + e^{0}δ(t) = 2e^{2t}u(t) + δ(t),

which is the right answer (and can be confirmed by integrating 2e^{2τ}u(τ) +
δ(τ) from −∞ to t, over τ, to recover z(t) = e^{2t}u(t)).
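The role of the δ(t) term can also be seen numerically: integrating only the smooth part 2e^{2τ}u(τ) recovers e^{2t} − 1, which misses z(t) by exactly the unit jump contributed by the e^{0}δ(t) term (a sketch; the step size is an illustrative choice):

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 2, dt)
z = np.exp(2 * t)                        # z(t) = e^{2t} u(t) on t >= 0

# Integrate only the smooth part 2 e^{2 tau} u(tau) from 0 to t:
smooth = np.cumsum(2 * np.exp(2 * t)) * dt
print(z[-1] - smooth[-1])                # gap of about 1: the missing jump at t = 0

# The e^0 delta(t) term integrates to a unit step, closing the gap:
print(np.max(np.abs((smooth + 1.0) - z)))
```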

What about the derivative of the impulse δ(t) itself? Since the impulse is a
distribution, its derivative

(d/dt) δ(t) ≡ δ′(t)

must also be a distribution described by some set of properties. By applying the
time-derivative property of convolution to δ(t) ∗ f(t) = f(t), we find

δ′(t) ∗ f(t) = df/dt,

which is both the time-derivative property of the impulse listed in Table 9.3 and the
convolution property of a new distribution δ′(t) = (d/dt)δ(t), known as the doublet. The
doublet will play a relatively minor role in Chapters 10 and 11, and all we need to
remember about it until then is its convolution property just given.

9.2.2 Graphical representation of δ(t) and δ(t − to )

Since δ(t) and δ(t − to) are not functions, they cannot be plotted in the usual way
functions are plotted. Instead, we use defined graphical representations for δ(t) and
δ(t − to), which are shown in Table 9.3. By convention, δ(t − to) is depicted as an up-
pointing arrow of unit height placed at t = to along the t-axis. The length of the arrow
is a reminder of the area property, while the placement of the arrow at t = to owes to
the sampling and sifting properties, as well as to the fact that a graphical symbol for
δ(t − to) is also a symbol for (d/dt)u(t − to) (which is numerically 0 everywhere except
at t = to, where it is undefined).
Following this convention, 2δ(t − 3), for instance, can be pictured as an arrow
2 units tall, placed at t = 3, while −3δ(t + 1) can be pictured as a down-pointing
arrow of length 3 units, placed at t = −1. (See Figures 9.12a and 9.12b.) The graphical
representation of

f (t) = 2δ(t) − 3δ(t + 2) + δ(t − 4) + rect(t)

is shown in Figure 9.12c. Notice that f (t) is a superposition of a number of impulses


and an ordinary function, rect(t). Even though we cannot assign numerical values to
the distribution f (t), we still represent it graphically as a collection of impulse arrows
plus the regular graph of rect(t).

Figure 9.12 Examples of sketches, including the impulse and the shifted
impulse: (a) 2δ(t − 3), (b) −3δ(t + 1), and (c) f(t).

Example 9.19
Determine the Fourier transform of rect(t), using the Fourier time-derivative
property and the fact that

δ(t − to) ↔ e^{−jωto}.

Solution  Since rect(t) = u(t + 1/2) − u(t − 1/2),

(d/dt) rect(t) = (d/dt)[u(t + 1/2) − u(t − 1/2)]
              = δ(t + 1/2) − δ(t − 1/2) ↔ e^{jω/2} − e^{−jω/2}.

Plots of rect(t) and its derivative are sketched in Figure 9.13.
Now, since by the Fourier time-derivative property the Fourier trans-
form of (d/dt)rect(t) is jω times the Fourier transform of rect(t), it follows
that

rect(t) ↔ (1/jω)(e^{jω/2} − e^{−jω/2}) = (e^{jω/2} − e^{−jω/2}) / (j2 · (ω/2)) = sin(ω/2)/(ω/2) = sinc(ω/2),

which is the same result that we had established earlier through more
conventional means.

Figure 9.13 (a) rect(t), and (b) (d/dt)rect(t) = δ(t + 0.5) − δ(t − 0.5).

9.2.3 Impulse response of an LTI system


It is important to ask how an LTI system would respond to an impulse input f (t) =
δ(t). More specifically, we ask, “What is the zero-state response of an LTI system
described in terms of h(t) ↔ H (ω) if the input signal f (t) is an impulse?”
The answer is extremely simple. Clearly, according to the convolution formula for
the zero-state response (which follows from the familiar Y (ω) = H (ω)F (ω) relation
discussed earlier in the chapter), the answer has to be

y(t) = h(t) ∗ f (t) = h(t) ∗ δ(t) = h(t),



since h(t) ∗ δ(t) = h(t) by the convolution property of the impulse. Thus, h(t), the
inverse Fourier transform of the system frequency response H(ω), is also the zero-
state response of the system to an impulse input. For that reason we will refer to h(t)
as the impulse response.
The concept of impulse response is fundamental and important. The concept will
be explored much further in Chapter 10; so, for now, we provide just a single example
illustrating its usefulness.

Example 9.20
Suppose that a high-pass filter with frequency response

H(ω) = jω/(1 + jω)

(see Figure 5.2a) has input

f (t) = rect(t).

Determine the zero-state system response y(t), using the system impulse
response h(t) and the convolution method.

Solution  In Table 7.2 we find no match for jω/(1 + jω). However,

jω/(1 + jω) = (jω + 1 − 1)/(1 + jω) = 1 − 1/(1 + jω),

and, therefore, using the addition rule and the same table, we find that the
system impulse response is

h(t) = δ(t) − e^{−t}u(t).

Using y(t) = h(t) ∗ f (t) and the convolution property of the impulse, we
obtain

y(t) = (δ(t) − e^{−t}u(t)) ∗ rect(t) = rect(t) − e^{−t}u(t) ∗ rect(t)

where the last term was calculated earlier in Example 9.13 and plotted in
Figure 9.9. By subtracting Figure 9.9 from rect(t) we obtain Figure 9.14,
which is the solution to this problem. Notice that, because the Fourier
transform of f (t) is a sinc in the variable ω, this problem would be very
difficult to solve by the Fourier technique.

Figure 9.14 The response y(t) = f(t) ∗ h(t) of high-pass filter H(ω) = jω/(1 + jω)
to input f(t) = rect(t).
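The curve in Figure 9.14 can be reproduced numerically from the impulse response: the δ(t) part of h(t) passes rect(t) straight through, and only the smooth tail −e^{−t}u(t) needs a discrete convolution (a sketch; the grid choices are arbitrary):

```python
import numpy as np

dt = 1e-3
t = np.arange(-2.0, 6.0, dt)
rect = ((t > -0.5) & (t < 0.5)).astype(float)
tail = np.exp(-t) * (t >= 0)                 # the e^{-t} u(t) part of h(t)

# Align np.convolve output (which starts at time 2*t[0]) back onto the t grid:
k0 = int(round(-t[0] / dt))
y = rect - np.convolve(tail, rect)[k0 : k0 + t.size] * dt

i0 = int(round((0.0 - t[0]) / dt))           # index of t = 0
print(y[i0], np.exp(-0.5))                   # inside the pulse, y(t) = e^{-(t + 1/2)}
```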

9.3 Fourier Transform of Distributions and Power Signals

In Chapter 7 we focused primarily on the Fourier transform of signals f(t) having
finite signal energy

W = ∫_{−∞}^{∞} |f(t)|² dt.

After our introduction to the impulse δ(t), we are ready to explore the Fourier trans-
forms of signals for which the energy W may be infinite. It turns out that the Fourier
transform of some such signals f(t) with infinite energy W, but finite instantaneous
power |f(t)|² (e.g., signals like cos(ωo t), sin(ωo t), u(t)), can be expressed in terms of
the impulse δ(ω) in the Fourier domain. Such signals are known as power signals (as
opposed to energy signals, which have finite W) and appear in the right-hand column
of Table 7.2. The same column also includes the Fourier transform of distributions
δ(t) and δ(t − to), discussed in the previous section.
In the next set of examples we will verify some of these new Fourier transforms
and also illustrate some of their applications.

Example 9.21
Show that

1 ↔ 2πδ(ω)

and

e^{jωo t} ↔ 2πδ(ω − ωo),

to confirm entries 15 and 17 in Table 7.2.


Solution  First, confirm entry 17 (the second line) by verifying that e^{jωo t}
is the inverse Fourier transform

e^{jωo t} = (1/2π) ∫_{−∞}^{∞} 2πδ(ω − ωo) e^{jωt} dω.

This statement is valid, because, by the sifting property, the right-hand side
is reduced to

(1/2π) · 2π e^{jωo t} = e^{jωo t},

which is the same as the left-hand side. The Fourier pair 1 ↔ 2πδ(ω) is
just a special case of e^{jωo t} ↔ 2πδ(ω − ωo) for ωo = 0.

Example 9.22
Show that

cos(ωo t) ↔ π[δ(ω − ωo ) + δ(ω + ωo )]

and

sin(ωo t) ↔ j π[δ(ω + ωo ) − δ(ω − ωo )],

to confirm entries 18 and 19 in Table 7.2.


Solution To confirm entry 18, rewrite cos(ωo t), using Euler’s formula,
and use ej ωo t ↔ 2πδ(ω − ωo ) from Example 9.21, giving
1 j ωo t
cos(ωo t) = (e + e−j ωo t ) ↔ π[δ(ω − ωo ) + δ(ω + ωo )].
2
We also can show this same result by using 1 ↔ 2πδ(ω) and the Fourier
modulation property.
To confirm entry 19, rewrite sin(ωo t) by using Euler’s formula, and
again use ej ωo t ↔ 2πδ(ω − ωo ), giving
j −j ωo t
sin(ωo t) = (e − ej ωo t ) ↔ j π[δ(ω + ωo ) − δ(ω − ωo )].
2

Figures 9.15a through 9.15c show the plots of power signals cos(ωo t), sin(ωo t),
and 1 (DC signal), and their Fourier transforms. Notice that all three Fourier transforms
are depicted in terms of impulses in the ω-domain. The Fourier transform plots show
that frequency-domain contributions to signals cos(ωo t) and sin(ωo t) are confined to
frequencies ω = ±ωo. The impulses located at ω = ±ωo are telling us the obvious:
No complex exponentials other than e^{±jωo t} are needed to represent

cos(ωo t) = (e^{jωo t} + e^{−jωo t}) / 2

and

sin(ωo t) = (e^{jωo t} − e^{−jωo t}) / j2.

Figure 9.15 Three important power signals and their Fourier transforms: (a)
f(t) = cos(ωo t) ↔ F(ω) = πδ(ω − ωo) + πδ(ω + ωo), (b)
f(t) = sin(ωo t) ↔ F(ω) = jπδ(ω + ωo) − jπδ(ω − ωo), and (c)
f(t) = 1 ↔ F(ω) = 2πδ(ω).

Likewise, DC signal f(t) = 1 can be identified with e^{±j0·t} = 1; therefore, its Fourier
transform plot is represented by an impulse sitting at ω = 0.

Example 9.23
Given that f (t) ↔ F (ω), determine the Fourier transform of f (t) sin(ωo t)
by using the Fourier frequency-convolution property.
Solution  The Fourier frequency-convolution property states that

f(t)g(t) ↔ (1/2π) F(ω) ∗ G(ω).

Using this property with

g(t) = sin(ωo t)

and

G(ω) = jπ[δ(ω + ωo) − δ(ω − ωo)],

we obtain

f(t) sin(ωo t) ↔ (1/2π) F(ω) ∗ jπ[δ(ω + ωo) − δ(ω − ωo)]

             = (j/2)[F(ω + ωo) − F(ω − ωo)].
This is an alternative form of the Fourier modulation property.

Example 9.24
Find the Fourier transform of an arbitrary periodic signal

f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t},

with Fourier coefficients Fn.


Solution  Since

e^{jnωo t} ↔ 2πδ(ω − nωo),

application of the Fourier addition property yields

f(t) = Σ_{n=−∞}^{∞} Fn e^{jnωo t} ↔ F(ω) = Σ_{n=−∞}^{∞} 2πFn δ(ω − nωo).

Thus, the Fourier transform F(ω) of a periodic f(t) with Fourier coefficients Fn
is an infinite sum of weighted and shifted frequency-domain impulses
2πFn δ(ω − nωo), placed at integer multiples of the fundamental frequency
ωo = 2π/T. The area weights assigned to the impulses are proportional to
the Fourier series coefficients.

Example 9.25
In Table 7.2, entry 24,

Σ_{n=−∞}^{∞} δ(t − nT) ↔ (2π/T) Σ_{n=−∞}^{∞} δ(ω − n·2π/T),

specifies the Fourier transform of the so-called impulse train depicted in
Figure 9.16 on the left. Verify this Fourier transform pair by first deter-
mining the Fourier series of the impulse train.

Figure 9.16 The Fourier transform of the impulse train on the left is also an
impulse train in the frequency domain, as depicted on the right.

Solution  The impulse train

Σ_{n=−∞}^{∞} δ(t − nT),

depicted in Figure 9.16, is a periodic signal with period T and fundamental
frequency 2π/T. Its exponential Fourier coefficients are

Fn = (1/T) ∫_{−T/2}^{T/2} [Σ_{m=−∞}^{∞} δ(t − mT)] e^{−jn(2π/T)t} dt

   = (1/T) Σ_{m=−∞}^{∞} ∫_{−T/2}^{T/2} δ(t − mT) e^{−jn(2π/T)t} dt

   = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jn(2π/T)t} dt = 1/T

for all n. Hence, equating the impulse train to its exponential Fourier series,
we get⁵

Σ_{n=−∞}^{∞} δ(t − nT) = Σ_{n=−∞}^{∞} (1/T) e^{jn(2π/T)t},

and using the Fourier transform pair (item 17 from Table 7.2)

e^{jn(2π/T)t} ↔ 2πδ(ω − n·2π/T),

⁵ The meaning of this equality is that when each side is multiplied by a signal f(t) and then the two
sides are integrated, the values of the integrals are equal.

we obtain

Σ_{n=−∞}^{∞} δ(t − nT) ↔ (2π/T) Σ_{n=−∞}^{∞} δ(ω − n·2π/T),

as requested.
Note that entry 25 in Table 7.2 is a straightforward consequence of this
result and the frequency-convolution property (item 14 in Table 7.1).

Example 9.26
Suppose that a signal generator produces a periodic signal

f (t) = 4 cos(4t) + 2 cos(8t)

and a spectrum analyzer is used to examine the frequency-domain composi-
tion of f(t). Let us assume that the spectrum analyzer bases its calculations
on only a finite-length segment of f(t). In effect, the spectrum analyzer
multiplies f(t) with a window function

w(t) = rect(t/To)

and then displays the squared magnitude of the Fourier transform of f (t)w(t)
on a screen. What will the screen display look like if To = 10 s and 20 s?
Solution  Let g(t) ≡ f(t)w(t). Then, according to the Fourier frequency-
convolution property,

G(ω) = (1/2π) F(ω) ∗ W(ω),
where

F (ω) = 4π[δ(ω − 4) + δ(ω + 4)] + 2π[δ(ω − 8) + δ(ω + 8)]

and W (ω) is the Fourier transform of w(t). Substituting the expression for
F (ω) into the convolution, and cancelling the π factors, we have

G(ω) = 2[δ(ω − 4) + δ(ω + 4)] ∗ W(ω) + [δ(ω − 8) + δ(ω + 8)] ∗ W(ω)

     = 2W(ω − 4) + 2W(ω + 4) + W(ω − 8) + W(ω + 8),

where in the last step we used the convolution property of the shifted
impulse. Now,

rect(t/To) ↔ To sinc(ωTo/2),

and so

W(ω) = To sinc(ωTo/2).
Figures 9.17a and 9.17b show the plots of |W(ω)|² for the cases To = 10
and 20 s. In both cases, the 90% bandwidth 2π/To of w(t) is less than the
shift frequencies of 4 and 8 rad/s relevant for G(ω). Thus, the various
components of G(ω) have little overlap, so

|G(ω)|² ≈ 4|W(ω − 4)|² + 4|W(ω + 4)|² + |W(ω − 8)|² + |W(ω + 8)|².

Plots of this approximation for |G(ω)|² are shown in Figures 9.17c and 9.17d
for the cases To = 10 and 20 s, respectively. We conclude that the spectrum
analyzer display will look like Figures 9.17c and 9.17d (although, typically,
only the positive-ω half of the plots would be displayed).
Notice that a longer analysis window (larger To) produces a higher-
resolution estimate of the spectrum of f(t), characterized by narrower
"spikes."

Figure 9.17 The energy spectrum |W(ω)|² of the window function w(t) = rect(t/To) for (a) To = 10 s and
(b) To = 20 s. The spectrum |(1/2π)F(ω) ∗ W(ω)|² of f(t)w(t), f(t) = 4cos(4t) + 2cos(8t), is shown in (c) and (d)
for the two values of To. The frequency resolution of the measurement device is set by the window
length To and can be described as the half-width 2π/To of |W(ω)|².

Example 9.27
An incoming AM radio signal

y(t) = (f (t) + α) cos(ωc t)

is mixed with a signal cos(ωc t), and the result p(t) is filtered with an ideal
low-pass filter H (ω). The filter bandwidth is less than ωc , but larger than
the bandwidth  of the low-pass message signal f (t). In addition,   ωc .
What is the output q(t) of the low-pass filter?
Solution  Let

p(t) ≡ y(t) cos(ωc t) = (f(t) + α)(cos(ωc t))²

     = (f(t) + α) · (1/2)(1 + cos(2ωc t)).
Using the frequency-convolution property, we find that the Fourier trans-
form of p(t) is

P(ω) = (1/4π)(F(ω) + α2πδ(ω)) ∗ [2πδ(ω) + πδ(ω − 2ωc) + πδ(ω + 2ωc)]

     = (1/2)(F(ω) + α2πδ(ω)) + (1/4)(F(ω − 2ωc) + α2πδ(ω − 2ωc))
       + (1/4)(F(ω + 2ωc) + α2πδ(ω + 2ωc)).
But, only the first term of P(ω) on the right is within the pass-band of the
described low-pass filter H(ω). Therefore, it follows that

Q(ω) = H(ω)P(ω) = (1/2)(F(ω) + α2πδ(ω)),

implying an output

q(t) = (1/2)(f(t) + α),

as expected in a successful coherent demodulation of the given AM signal.

Example 9.28
An incoming AM signal

y(t) = f (t) cos(ωc t)

is mixed with sin(ωc t) and the product signal is filtered with an ideal low-
pass filter H(ω) as in Example 9.27. As before, the filter bandwidth is less
than ωc but larger than the bandwidth Ω of the low-pass f(t), and Ω ≪ ωc.
What is the output q(t) of the filter?

Solution  In this case

p(t) = y(t) sin(ωc t) = f(t) cos(ωc t) sin(ωc t).

Applying the Fourier modulation property to

sin(ωc t) ↔ jπ[δ(ω + ωc) − δ(ω − ωc)],

we first obtain

sin(ωc t) cos(ωc t) ↔ (jπ/2)[δ(ω) − δ(ω − 2ωc)] + (jπ/2)[δ(ω + 2ωc) − δ(ω)]

                    = (jπ/2)δ(ω + 2ωc) − (jπ/2)δ(ω − 2ωc).
Therefore, using the frequency-convolution property, the Fourier transform
of

p(t) = f(t)[sin(ωc t) cos(ωc t)]

is

P(ω) = (1/2π) F(ω) ∗ [(jπ/2)δ(ω + 2ωc) − (jπ/2)δ(ω − 2ωc)]

     = (j/4) F(ω + 2ωc) − (j/4) F(ω − 2ωc).
Note that P (ω) contains no term in the passband of the described low-pass
filter. Thus,

Q(ω) = H (ω)P (ω) = 0,

implying that

q(t) = 0.

Clearly, we cannot demodulate the AM signal

y(t) = f (t) cos(ωc t)

by mixing it with sin(ωc t), because zero output is obtained from the low-
pass filter.

Note that the results of Examples 9.27 and 9.28 suggest that if a signal g(t) sin(ωc t)
were added to an AM transmission f (t) cos(ωc t), then it would be possible to recover
f (t) and g(t) from the sum unambiguously by mixing the sum with signals cos(ωc t)
and sin(ωc t), respectively. This idea is exploited in so-called quadrature amplitude
modulation communication systems.
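A quadrature scheme of this kind is easy to simulate end-to-end: place f on the cosine carrier and g on the sine carrier, mix the received sum with each carrier, and low-pass filter. The sketch below (illustrative carrier, messages, and FFT-mask low-pass filter, none of which are from the text) recovers f/2 and g/2 as in Examples 9.27 and 9.28:

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 2, dt)
wc = 100 * np.pi                             # carrier in rad/s (50 Hz)
f = np.cos(2 * np.pi * 1.0 * t)              # message on the cosine carrier
g = np.sin(2 * np.pi * 1.5 * t)              # message on the sine carrier
y = f * np.cos(wc * t) + g * np.sin(wc * t)  # shared channel

def lowpass(x, cutoff=50.0):
    """Crude ideal low-pass: zero all FFT bins above `cutoff` rad/s."""
    W = 2 * np.pi * np.fft.fftfreq(x.size, dt)
    return np.fft.ifft(np.fft.fft(x) * (np.abs(W) < cutoff)).real

f_hat = lowpass(y * np.cos(wc * t))          # ~ f/2, as in Example 9.27
g_hat = lowpass(y * np.sin(wc * t))          # ~ g/2; the f-term mixes up to
                                             # 2*wc and is filtered out (Ex. 9.28)
print(np.max(np.abs(f_hat - f / 2)), np.max(np.abs(g_hat - g / 2)))
```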
Section 9.3 Fourier Transform of Distributions and Power Signals 323

Finding the Fourier transform of power signal u(t), the unit step, is somewhat
tricky: First, note that the unit step has an average value of

lim_{T→∞} (1/2T) ∫_{−T}^{T} u(t) dt = 1/2

and it can be expressed as

u(t) = 1/2 + (1/2) sgn(t)

in terms of its DC component and the signum function sgn(t) shown in Figure 7.2d.
Since

1 ↔ 2πδ(ω),
Since

1 ↔ 2πδ(ω),

the Fourier transform of 1/2 is πδ(ω); also, from Table 7.2 (and as verified in Example 9.29,
next),

sgn(t) ↔ 2/(jω).

Thus, using the addition property of the Fourier transform, we conclude that

u(t) = 1/2 + (1/2) sgn(t) ↔ πδ(ω) + 1/(jω),

as indicated in Table 7.2. Note that the term πδ(ω) in the Fourier transform of u(t)
accounts for its DC component equal to 1/2; the second term 1/(jω) results from its AC
component (1/2) sgn(t).

Example 9.29
Notice that

u(t) = 1/2 + (1/2) sgn(t)

implies

du/dt = δ(t) = (1/2)(d/dt) sgn(t),

or, equivalently,

(d/dt) sgn(t) = 2δ(t).
dt

Using the Fourier time-derivative property and the fact that δ(t) ↔ 1, verify

sgn(t) ↔ 2/(jω).

Solution  Let S(ω) denote the Fourier transform of sgn(t). Then the Fourier
transform of the equation

(d/dt) sgn(t) = 2δ(t)

is

jωS(ω) = 2.

Hence,

S(ω) = 2/(jω)

and, consequently,

sgn(t) ↔ 2/(jω).

Since sgn(t) is a pure AC signal—that is, lim_{T→∞} (1/2T) ∫_{−T}^{T} sgn(t) dt = 0—its
Fourier transform does not contain an impulse term.

We offer one closing comment about the Fourier transform of power signals,
such as u(t) and cos(ωo t) where

u(t) ↔ πδ(ω) + 1/(jω)

cos(ωo t) ↔ πδ(ω − ωo) + πδ(ω + ωo).

It should be apparent that, in the case of these types of signals, the Fourier integral
does not converge. Therefore, the corresponding Fourier transform does not exist in
the usual sense. This is the reason that the Fourier transforms of signals such as u(t)
and cos(ωo t) must be expressed in terms of impulses.

9.4 Sampling and Analog Signal Reconstruction

Now that we know about the impulse train and its Fourier transform (see Example 9.25
in the previous section), we are ready to examine the basic ideas behind analog-to-
digital conversion and analog signal reconstruction. In this section we will learn about
the Nyquist criterion, which constrains the sampling rates used in CD production, and
find out how a CD player may convert data stored on a CD into sound.
Consider a bandlimited signal

f(t) ↔ F(ω)

having bandwidth B Hz, so that

F(ω) = 0

outside the frequency interval

|ω| ≤ 2πB rad/s.

Suppose that only discrete samples of f(t) are available, defined by

fn ≡ f(nT), −∞ < n < ∞,

where the samples are equally spaced at times t = nT, which are integer multiples of
the sampling interval T. In the modern digital world, where music and image samples
routinely are stored on a computer, we might ask whether it is possible to reconstruct
the analog signal f(t) from its discrete samples fn with full fidelity (i.e., identically).
It turns out that the answer is yes if the sampling interval T is small enough, compared
with the reciprocal of the signal bandwidth B. The specific requirement, called the
Nyquist criterion, is

T < 1/(2B),

or, equivalently,

1/T > 2B.

Notice that the version of the Nyquist criterion just presented states that the sampling
frequency 1/T must be larger than twice the highest frequency B (measured in Hertz)
in the signal being sampled. That is, each frequency component in f(t) must be
sampled at a rate of at least two samples per period. Under this condition, it is theo-
retically possible for us to exactly reconstruct the analog signal f(t) from its discrete
samples fn by using the so-called reconstruction formula

f(t) = Σ_n fn sinc((π/T)(t − nT)).

However, if the Nyquist criterion is violated, then the reconstruction formula becomes
invalid in the sense that the sum on the right side of the formula converges, not to the
original analog signal f (t) shown on the left, but to another analog signal known as
an aliased version of f (t). Before we verify the validity of the reconstruction formula
and the Nyquist criterion, let us examine the sampling and reconstruction examples
shown in Figures 9.18 and 9.19.
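The reconstruction formula can be exercised numerically: sample a bandlimited signal above the Nyquist rate and rebuild it at off-grid times from the samples alone. The following sketch (Python/NumPy; the signal, T, and the truncation of the infinite sum are illustrative choices, and np.sinc(x) denotes sin(πx)/(πx)) does this:

```python
import numpy as np

f = lambda t: np.cos(2 * np.pi * 0.7 * t) + 0.5 * np.sin(2 * np.pi * 0.2 * t)
B = 0.7                                  # highest frequency in f, in Hz
T = 0.4                                  # sampling interval; T < 1/(2B) ~ 0.714 s

t = np.linspace(-5, 5, 1001)             # off-grid evaluation times
ns = np.arange(-200, 201)                # truncate the infinite sample sum
recon = sum(f(n * T) * np.sinc((t - n * T) / T) for n in ns)

print(np.max(np.abs(recon - f(t))))      # small truncation error
```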

Figure 9.18 (a) An analog signal f(t) and its discrete samples f(nT) (shown by dots) taken at T = 1 s
intervals, (b) interpolating function sinc(πt/T) for the case T = 1 s, and (c) reconstructed f(t), using the
interpolation formula. (See text.) Panel (d) shows the functions f(nT)sinc((π/T)(t − nT)) for T = 1 s, which
are summed up to produce the reconstructed f(t) shown in panel (c). In this example, reconstruction is
successful because the Nyquist criterion is satisfied.

In the case illustrated in Figure 9.18, sampling of f (t) is conducted in accordance


with the Nyquist criterion; therefore, the reconstructed signal shown in Figure 9.18c
is identical to the original f (t). Panel (d) in the figure shows the collection of shifted
and amplitude scaled sinc functions (components of the reconstruction formula), the
sum of which, shown in panel (c), replicates the original f (t) from panel (a). In the
example shown in Figure 9.19, however, the Nyquist criterion is violated and the
reconstruction formula fails to reproduce the original f (t). In this case the available
samples f (nT ) are spaced too far apart. The signal f (t) is said to be undersampled
and, as a result, the reconstruction is aliased; in this case the original co-sinusoid is
“impersonated” by a lower frequency co-sinusoid.

Figure 9.19 (a) f(t) = cos(4t) and its samples f(nT), T = 1 s, (b) interpolating
function sinc(πt/T) for T = 1 s, (c) reconstructed f(t), and (d) f(nT)sinc((π/T)(t − nT))
for T = 1 s and all n. Notice that the reconstructed f(t) does not match the
original f(t) because of undersampling of f(t) in violation of the Nyquist
criterion, which requires T < 1/(2B). (See text.)

Verification of reconstruction formula: The impulse train identity (item
24 in Table 7.2),

Σ_{n=−∞}^{∞} δ(t − nT) ↔ (2π/T) Σ_{n=−∞}^{∞} δ(ω − n·2π/T),

and the frequency-convolution property of the Fourier transform (item 14 in
Table 7.1) imply that the product

f(t) Σ_{n=−∞}^{∞} δ(t − nT)

has Fourier transform

(1/2π) F(ω) ∗ (2π/T) Σ_{n=−∞}^{∞} δ(ω − n·2π/T) = (1/T) Σ_{n=−∞}^{∞} F(ω − n·2π/T).

Hence,

Σ_{n=−∞}^{∞} f(t)δ(t − nT) ↔ (1/T) Σ_{n=−∞}^{∞} F(ω − n·2π/T),

as listed in Table 7.2 (item 25), and also

Σ_{n=−∞}^{∞} f(nT)δ(t − nT) ↔ (1/T) Σ_{n=−∞}^{∞} F(ω − n·2π/T),

in view of the sampling property of the shifted impulse.


The Fourier transform pair just obtained is the key to verification of the
reconstruction formula and the Nyquist criterion. Let us first interpret the
Fourier transform on the right side, namely

∞
1 2π
FT (ω) ≡ F (ω − n ),
n=−∞
T T

with the aid of Figure 9.20. Panel (b) in the figure presents a sketch of FT (ω)
as a collection of amplitude-scaled and frequency-shifted replicas of F (ω),
the Fourier transform of f (t) sketched in panel (a). For the FT (ω) shown in
panel (b), it is clear that 2πB < Tπ , or, equivalently, B < 2T1 , in accordance
with the Nyquist criterion. Also, it should be clear that if FT (ω) in the same
panel were multiplied with
ω
H (ω) = T rect( ),
2π/T

corresponding to an ideal low-pass filter having bandwidth Tπ and amplitude


T , the result would equal F (ω), the Fourier transform of f (t). Thus low-pass

Figure 9.20 (a) The Fourier transform F(ω) of a band-limited signal f(t) with
bandwidth Ω = 2πB rad/s; (b) and (c) are F_T(ω) = Σ_{n=−∞}^{∞} (1/T) F(ω − (2π/T)n) for the same
signal, with the sampling interval T < 1/(2B) and T > 1/(2B), respectively. Note that the
central feature of F_T(ω) in panel (b) is identical to (1/T)F(ω). Also note that panel (c)
contains no isolated replica of F(ω). Therefore, F(ω) can be correctly inferred
from F_T(ω) if and only if T < 1/(2B).

filtering of Σ_{n=−∞}^{∞} f(nT)δ(t − nT) (having the Fourier transform F_T(ω))
via a system with impulse response h(t) = sinc((π/T)t) (with Fourier transform
T rect(ω/(2π/T))) produces f(t). But, the same convolution operation describing
the filter action yields precisely the right side of the reconstruction formula.
The proof of the reconstruction formula just completed is contingent
upon satisfying the Nyquist criterion. When the criterion is violated (i.e.,
when 2πB ≥ π/T, as illustrated in Figure 9.20c), F_T(ω) no longer contains an
isolated replica of F(ω) within the passband of H(ω), and, as a consequence,
in such situations the reconstruction formula produces aliased results (as in
Figure 9.19).
D/A (digital-to-analog) conversion is a hardware implementation that mimics the
reconstruction formula just verified. This is accomplished by utilizing a circuit that
creates a weighted pulse train

Σ_n f_n p(t − nT),

where p(t) generally is a rectangular pulse of width T , and then low-pass filtering
the pulse train, using a suitable LTI system

h(t) ↔ H (ω).

The reconstruction of f(t) is nearly ideal if h(t) is designed in such a way that h(t) ∗
p(t) is a good approximation to (a delayed) sinc(πt/T). This reconstruction process,
which parallels our proof of the reconstruction formula, is illustrated symbolically on
the right side of the system shown in Figure 9.21. The left side of the same system
is a symbolic representation of an A/D (analog-to-digital) converter, where the input
f (t) is sampled every T seconds in order to generate the sequence of discrete samples
fn = f (nT ).

Figure 9.21 A model system that samples an analog input f(t) with a sampling
interval T and generates an analog output y(t) by using the samples f_n = f(nT).
The input to filter H(ω) is Σ_n f_n p(t − nT). Mathematically, if p(t) = δ(t) and
H(ω) = T rect(ω/(2π/T)), then y(t) = f(t), assuming the Nyquist criterion is satisfied. In
real-world systems, p(t) is chosen to be a rectangular pulse of width T, and H(ω)
is chosen so that |P(ω)||H(ω)| ≈ T rect(ω/(2π/T)). [Diagram omitted.]
330 Chapter 9 Convolution, Impulse, Sampling, and Reconstruction

The system shown in Figure 9.21 simply regenerates y(t) = f (t) at its output,
because the samples fn of input f (t) are not modified in any way within the system
prior to reconstruction. The option of modifying the samples fn is there, however,
and, hence, the endless possibilities of digital signal processing (DSP). DSP can be
used to convert analog input signals f (t) into new, desirable analog outputs y(t) by
replacing the samples fn by a newly computed sequence yn , prior to reconstruction.
Examples of digital processing, or manipulating the samples f_n, are

y_n = (1/2)(f_n + f_{n−1}),

which is a simple smoothing (averaging) digital low-pass filter, and

y_n = (1/2)(f_n − f_{n−1}),
which is a simple high-pass digital filter that emphasizes variations from one sample
to the next in fn . More sophisticated digital filters compute outputs as a more general
weighted sum of present and past inputs and, sometimes, past outputs as well. Some
other types of digital processing are explored (along with aliasing errors) in one of
the labs. (See Appendix B.)
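The two digital filters above can be sketched in a few lines of code (a hypothetical illustration; the test inputs are chosen to expose the low-pass and high-pass behavior):

```python
# y_n = (f_n + f_{n-1})/2 smooths sample-to-sample variation (low-pass);
# y_n = (f_n - f_{n-1})/2 emphasizes it (high-pass).
def lowpass(f):
    return [(f[n] + f[n - 1]) / 2 for n in range(1, len(f))]

def highpass(f):
    return [(f[n] - f[n - 1]) / 2 for n in range(1, len(f))]

constant = [1.0] * 6           # "DC" input: no variation between samples
alternating = [1.0, -1.0] * 3  # fastest possible variation at this sampling rate

print(lowpass(constant), highpass(constant))        # DC passes LP, is blocked by HP
print(lowpass(alternating), highpass(alternating))  # alternation blocked by LP, passes HP
```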
Sound cards, found nowadays in most PCs, consist of a pair of A/D and D/A
converters that work in concert with one another and the rest of the PC—CPU,
memory, keyboard, CD player, speaker, etc. These cards can perform a variety of
signal processing tasks in the audio frequency range. For instance, the D/A circuitry
on the sound card converts samples f_n = f(nT), fetched from a CD (or an MP3 file
stored on the hard disk), into a song f(t) by a procedure like the one described here.
The standard sampling interval used in sound cards is T = 1/44100 s. In other
words, sound cards sample their input signals at a rate of 1/T = 44100 samples/s,
or, 44.1 kHz, as usually quoted. Since the Nyquist criterion requires that T < 1/(2B)
or, equivalently, B < 1/(2T) = 22.05 kHz, only signals bandlimited to 22.05 kHz can
be processed (without aliasing). Thus, the input stage of a sound card (prior to the
A/D) typically incorporates a low-pass filter with bandwidth around 20 kHz (or less)
to reduce or prevent aliasing effects. The 44.1 kHz sampling rate of standard sound
cards is the same as the sampling rate used in audio CD production. Because human
hearing does not exceed 20 kHz, it is possible to low-pass filter analog audio signals
to 20 kHz prior to sampling, with no audible effect. The choice of sampling rate
(44.1 kHz) in excess of twice the filtered sound bandwidth (20 kHz) works quite
well and allows for some flexibility in the design of the D/A (i.e., choice of H(ω) in
Figure 9.21). Special cards with wider bandwidths and higher sampling rates are used
to digitize and process signals encountered in communications and radar applications.
Digital oscilloscopes also use high-bandwidth A/D cards. Some applications use A/D
converters operating up into the GHz range.
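The aliasing hazard at a 44.1 kHz sampling rate is easy to demonstrate numerically (a hypothetical sketch; the tone frequencies are arbitrary): a 25 kHz tone, above the 22.05 kHz Nyquist limit, yields exactly the same samples as a 19.1 kHz tone.

```python
import numpy as np

fs = 44100.0                                # samples per second
n = np.arange(32)                           # sample indices
tone = np.cos(2 * np.pi * 25000 * n / fs)   # 25 kHz: violates Nyquist at fs/2 = 22.05 kHz
alias = np.cos(2 * np.pi * 19100 * n / fs)  # 44100 - 25000 = 19100 Hz alias

gap = np.max(np.abs(tone - alias))
print(gap)  # essentially zero: the two tones are indistinguishable from samples
```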
Returning to Figure 9.20, recall that

F_T(ω) = Σ_n (1/T) F(ω − n(2π/T)),

sketched in panel (b), is the Fourier transform of time signal



Σ_n f(nT)δ(t − nT) ≡ f_T(t).

Also, recall that if f (t) is bandlimited and T satisfies the Nyquist criterion, then
F_T(ω) = (1/T)F(ω) for |ω| < π/T. (See Figures 9.20a and 9.20b.) So, if we can find a
way of calculating FT (ω), we then have a way of calculating F (ω). This is easily
done. Remembering that

δ(t − nT ) ↔ e−j ωnT ,

and then transforming the preceding expression for f_T(t), term by term, we get

F_T(ω) = Σ_n f(nT) e^{−jωnT},
which is an alternative formula for F_T(ω). This formula provides us with a means of
computing the Fourier transform F(ω) of a bandlimited f(t) by using only its sample
data f(nT), namely,

F(ω) = T F_T(ω) = T Σ_n f(nT) e^{−jωnT},   |ω| < π/T,

where π/T commonly is known as the Nyquist frequency.
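The sample-based transform formula can be exercised directly. A hedged sketch (the signal, T, and the truncation length are arbitrary illustrative choices): approximating F(ω) of f(t) = sinc(πt), whose exact transform is rect(ω/(2π)), from the samples alone.

```python
import numpy as np

T = 0.25                       # f(t) = sinc(pi t) has B = 0.5 Hz; Nyquist needs T < 1
n = np.arange(-2000, 2001)     # truncates the infinite sum
f_nT = np.sinc(n * T)          # np.sinc(x) = sin(pi x)/(pi x), so this is f(nT)

def F(omega):
    # F(omega) = T * sum_n f(nT) exp(-j omega n T), valid for |omega| < pi/T
    return T * np.sum(f_nT * np.exp(-1j * omega * n * T))

print(abs(F(1.0)))  # |omega| < pi: inside rect(omega/(2 pi)), close to 1
print(abs(F(5.0)))  # pi < |omega| < pi/T = 4 pi: outside the rect, close to 0
```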


We have used the approach just described to determine F (ω) and compute the
energy spectra |F (ω)|2 of the first 6 seconds of Beethoven’s 5th symphony and the
Beatles’ “It’s Been a Hard Day’s Night,” displayed earlier in Figure 7.11.6
6 The energy spectrum curves were produced as follows:
(1) The sound card of a laptop computer was controlled to sample the audio input (from a phonograph
turntable) at a rate of 44.1 kHz and store the samples as 16-bit integers using the .aiff file format.
Approximately 6 s of audio input were sampled to produce a total of N = 2^18 = 262144 data
samples f_n = f(nT), where T = 1/44100 s.
(2) A short Mathematica program was written to compute

F_m ≡ Σ_{n=0}^{N−1} f_n e^{−j2πmn/N} = F_T((2π/(NT)) m),

m ∈ [0, 1, · · · , N − 1], using the Mathematica "Fourier" function (which implements the popular
FFT algorithm). The plotted curves actually correspond to the normalized quantity

|F_m|^2 / Σ_{k=0}^{N−1} |F_k|^2,

for m ∈ [0, 511, 1023, · · · , N/2 − 1], after a 512-point running average operation was applied to
|F_m|^2. The purpose of the running average was to smooth the spectrum and reduce the number of
data points to be plotted.

9.5 Other Uses of the Impulse

The impulse definition and concept play important roles outside of signal processing,
for example in various physics problems. Take, for instance, the idealized notion
of a point charge. Physicists often envision charged particles, such as electrons, as
point charges occupying no volume, but carrying some definite amount of charge, say
q coulombs. Since the equations of electrodynamics (Newton’s laws and Maxwell’s
equations) are formulated in terms of charge density ρ(x, y, z, t), a physicist studying
the motion of a single free electron in some electromagnetic device needs an expres-
sion ρ(x, y, z, t) describing the state (the position and motion) of the electron. A
model for the charge density of a stationary electron located at the origin and envisioned as a point charge is

ρ(x, y, z, t) = qδ(x)δ(y)δ(z) C/m^3
in terms of spatial impulses δ(x), δ(y), and δ(z) and electronic charge q. This is a
satisfactory representation, because the volume integral of ρ(x, y, z, t) over a small
cube centered about coordinates (x_o, y_o, z_o), that is,

∫∫∫_cube qδ(x)δ(y)δ(z) dx dy dz = q (∫_{x_o−ε/2}^{x_o+ε/2} δ(x)dx) × (∫_{y_o−ε/2}^{y_o+ε/2} δ(y)dy) × (∫_{z_o−ε/2}^{z_o+ε/2} δ(z)dz),

equals q for x_o = y_o = z_o = 0, for arbitrarily small ε > 0, yielding the total amount
of charge contained within the box. Conversely, if the box excludes the origin, then
the integration result is zero.

Example 9.30
How would you express the charge density ρ(x, y, z, t) of an oscillating
electron with time-dependent coordinates (0, 0, zo cos(ωt))?
Solution In view of the preceding discussion, the answer must be
ρ(x, y, z, t) = qδ(x)δ(y)δ(z − z_o cos(ωt)) C/m^3,
since the electron always remains on the z-axis and deviates from the origin
by an amount z = zo cos(ωt).

Electromagnetic radiation fields of an oscillating electron can be derived by


solving Maxwell’s equations—a linear set of coupled partial differential equations
(PDEs)—with a source function ρ(x, y, z, t) = qδ(x)δ(y)δ(z − z_o cos(ωt)) C/m^3. The
solution, in turn, can be used to describe how radio antennas convert oscillating
currents into traveling electromagnetic waves.

Note that dimensional analysis of the foregoing expressions for ρ(x, y, z, t)


implies that the units for δ(x), δ(y), and δ(z) must be 1/m. Likewise, the unit for
δ(t) is 1/s.
Regarding other application areas, the impulse also can be used as an idealized
representation of the force density of a hammer blow on a wall, the current density
of a lightning bolt, the acoustic energy density of an explosion, or the light source
provided by a star.

EXERCISES

9.1 For the functions f (t) and g(t) sketched as shown,


(a) Find x(t) = g(t) ∗ g(t) by direct integration and sketch the result.
(b) Find y(t) = f (t) ∗ g(t), using appropriate properties of convolution
and the result of part (a). Sketch the result.
(c) Find z(t) = f (t) ∗ f (t − 1), using the properties of convolution and
sketch the result.

[Sketches of f(t) and g(t) omitted.]

9.2 Given h(t) = u(t) and f(t) = 2Δ(t/2),


(a) Determine y(t) = h(t) ∗ f (t) and sketch the result.
(b) Determine z(t) = h(t) ∗ df/dt, using the derivative property of convolution, and sketch the result.
(c) Calculate z(t) = h(t) ∗ df/dt, using some alternative method (avoiding direct integration, if you can).

9.3 Given f (t) = u(t), g(t) = 2tu(t), and q(t) = f (t − 1) ∗ g(t), determine
q(4).

9.4 Suppose that the convolution of f(t) = rect(t) with some h(t) produces
y(t) = Δ((t − 2)/2). What is the convolution of rect(t) + rect(t − 4) with h(t) −
2h(t − 6)? Sketch the result.
9.5 Determine and plot c(t) = rect((t − 5)/2) ∗ Δ((t − 8)/4).

9.6 Determine and plot y(t) = h(t) ∗ f (t) if


(a) h(t) = e^{−2t}u(t) and f(t) = u(t).
(b) h(t) = rect(t) and f(t) = u(t)e^{−t} − u(−t)e^{t}.

9.7 Simplify the following expressions involving the impulse and/or shifted
impulse and sketch the results:
(a) f(t) = (1 + t^2)(δ(t) − 2δ(t − 4)).
(b) g(t) = cos(2πt)(du/dt + δ(t + 0.5)).
(c) h(t) = sin(2πt)δ(0.5 − 2t).
(d) y(t) = ∫_{−6}^{∞} (τ^2 + 6)δ(τ − 2)dτ.
(e) z(t) = ∫_{6}^{∞} (τ^2 + 6)δ(τ − 2)dτ.
(f) a(t) = ∫_{−∞}^{t} δ(τ + 1)dτ + rect(t/6)δ(t − 2).
(g) b(t) = δ(t − 3) ∗ u(t).
(h) c(t) = Δ(t/2) ∗ (δ(t) − δ(t + 2)).

9.8 For f (t) and g(t) as shown,


(a) Determine and sketch c(t) = f (t) ∗ g(t).
(b) Given that p(t) = rect(t − 1.5) and f(t) = dp/dt, find x(t) = p(t) ∗
g(t), using the result of part (a). Explain your procedure clearly.

[Sketches of f(t) and g(t) omitted.]

9.9 A system is described by an impulse response h(t) = δ(t − 1) − δ(t + 1).


Sketch the system response y(t) = h(t) ∗ f (t) to the following inputs:
(a) f (t) = u(t).
(b) f (t) = rect(t).

9.10 For a system with impulse response h(t), the system output y(t) = h(t) ∗
f(t) = rect((t − 4)/2). Determine and sketch h(t) if
(a) f(t) = rect(t/2).
(b) f(t) = 2u(t).
(c) f(t) = 4rect(t).

9.11 Determine the Fourier transform of the following signals—simplify the


results as much as you can and sketch the result if it is real valued:
(a) f(t) = 5 cos(5t) + 3 sin(15t).
(b) x(t) = cos^2(6t).
(c) y(t) = e^{−t}u(t) ∗ cos(2t).
(d) z(t) = (1 + cos(3t))e^{−t}u(t).

9.12 Determine the inverse Fourier transforms of the following:


(a) F(ω) = 2π[δ(ω − 4) + δ(ω + 4)] + 8πδ(ω).
(b) A(ω) = 6π cos(5ω).
(c) B(ω) = Σ_{n=−∞}^{∞} (2π/(1 + n^2)) δ(ω − 2n).
(d) C(ω) = 8/(jω) + 4πδ(ω).

9.13 Signal f(t) = (5 + rect(t/4)) cos(60πt) is mixed with signal cos(60πt) to
produce the signal y(t). Subsequently, y(t) is low-pass filtered with a system
having frequency response H(ω) = 4 rect(ω/(4π)) to produce q(t). Sketch F(ω),
Y(ω), and Q(ω), and determine q(t).

9.14 If signal f (t) is not bandlimited, would it be possible to reconstruct f (t)


exactly from its samples f (nT ) taken with some finite sampling interval
T > 0? Explain your reasoning.

9.15 The inverse of the sampling interval T—that is, T^{−1}—is known as the
sampling frequency and usually is specified in units of Hz. Determine the
minimum sampling frequencies T^{−1} needed to sample the following analog
signals without causing aliasing error:
(a) Arbitrary signal f (t) with bandwidth 20 kHz.
(b) f (t) = sinc(4000πt).
(c) f (t) = sinc(4000πt) cos(20000πt).

9.16 Using

Σ_{n=−∞}^{∞} f(t)δ(t − nT) ↔ Σ_{n=−∞}^{∞} (1/T) F(ω − n(2π/T))

(item 25 in Table 7.2), sketch the Fourier transform F_T(ω) of signal

f_T(t) ≡ Σ_{n=−∞}^{∞} f(t)δ(t − nT)

if

f(t) = cos(4πt),

assuming (a) T = 1 s, and (b) T = 0.2 s. Which sampling period allows
f(t) to be recovered by applying an appropriate low-pass filter to f_T(t)?
9.17 Given the identity

∫_{−∞}^{∞} δ(t − t_o) f(t) dt = f(t_o),
where the function f (t) of time variable t is measured in units of, say, volts
(V), what would be the units of the shifted impulse δ(t − to )? What would
be the units of an impulse δ(x) if x is a position variable measured in units
of meters? What would be the units of a charge distribution specified as
qδ(x − 4) if q is measured in units of coulombs (C)? It is an interesting fact
that the impulse δ(t) has a unit (in the sense of a dimension), but it has no
numerical values; only integrals of the impulse have a numerical value.
9.18 To confirm the scaling property of the impulse, show that

δ(a(t − t_o)) ∗ f(t)

and

(1/|a|) δ(t − t_o) ∗ f(t)

are identical for a ≠ 0. Hint: Write the above convolutions explicitly and
make use of an appropriate change of variable before applying the sifting
property of the impulse.
10
Impulse Response,
Stability, Causality, and
LTIC Systems

10.1 IMPULSE RESPONSE h(t) AND ZERO-STATE RESPONSE


y(t) = h(t) ∗ f (t) 338
10.2 BIBO STABILITY 346
10.3 CAUSALITY AND LTIC SYSTEMS 351
10.4 USEFULNESS OF NONCAUSAL SYSTEM MODELS 357
10.5 DELAY LINES 357
EXERCISES 359

In the last chapter we discovered that the zero-state response y(t) of an LTI system
H(ω) to an arbitrary input can be calculated in the time domain with the convolution
formula

y(t) = h(t) ∗ f(t),

as shown in Figure 10.1. Here, h(t) is the inverse Fourier transform of the system
frequency response H(ω), or equivalently, the zero-state response of the system to
an impulse input.
But what if the Fourier transform of an impulse response h(t) does not exist, as
in the case of the impulse response h(t) = e^{t}u(t)? Does this mean that we cannot
make use of h(t) ∗ f (t) to calculate the zero-state response in such cases?
To the contrary, as we shall see in Section 10.1, the convolution method y(t) =
h(t) ∗ f (t) is always valid for LTI systems, so long as the integral converges, and
is, in fact, more fundamental than Fourier inversion of Y (ω) = H (ω)F (ω). It is the
latter method that fails in the scenario invoked, when the Fourier transform of h(t)
(or of f (t), for that matter) does not exist.


Figure 10.1 The time-domain input–output relation for LTI systems with
frequency response H(ω). The convolution formula y(t) = h(t) ∗ f(t) describes
the system zero-state response to input f(t), where h(t) is the inverse Fourier
transform of the system frequency response H(ω). [Diagram omitted.]

Section 10.1 begins with a discussion of how h(t), the impulse response of an
LTI system, can be measured in the lab. The discussion subsequently focuses on the
universality of the convolution formula, h(t) ∗ f (t), and the relation between h(t) and
H (ω). In Section 10.2 we examine stability conditions for LTI systems and establish
the fact that only those systems with absolutely integrable h(t) are guaranteed to
produce bounded outputs when presented with bounded inputs. Next, in Section 10.3
we introduce the concept of causality and establish the fact that impulse responses
h(t) of causal real-time LTI systems, such as linear circuits built in the lab, vanish for
t < 0. We refer to causal LTI systems as LTIC systems and consider a sequence of
examples that illustrate how to recognize whether a system is causal, linear, and time-
invariant. We conclude with short sections that discuss the importance of noncausal
models in some settings (Section 10.4) and the modeling of delay lines (Section 10.5).

10.1 Impulse Response h(t) and Zero-State Response


y(t) = h(t) ∗ f (t)

10.1.1 Measuring h(t) of LTI systems


So far, we have obtained the impulse response h(t) of LTI systems by inverse Fourier
transforming the frequency response H(ω). Alternatively, the impulse response of a
system can be measured in the lab with one of the following methods:
(1) Recall that the identity

δ(t) ∗ h(t) = h(t)

is a symbolic shorthand for

lim_{ε→0} {p_ε(t) ∗ h(t)} = h(t),

where p_ε(t) is a pulse centered about t = 0, having width ε and area 1. (See
Figure 9.10.) Now, if we were to apply an input p_ε(t) to a system in the lab,
having an unknown impulse response h(t), we would measure (or display on
a scope) an output p_ε(t) ∗ h(t). Taking a sequence of such measurements with
inputs p_ε(t) having decreasing widths ε (and increasing heights), we should see
the output p_ε(t) ∗ h(t) converge to h(t). We would need to keep reducing ε until
further changes in the output were imperceptible. If that did not happen and the
output kept changing at every step, no matter how small ε (see Example 10.1 that
follows, for a possible reason), then we instead could use the second method,
presented next.
(2) Excite the system with a unit-step input f (t) = u(t) to obtain the unit-step
response

y(t) = h(t) ∗ u(t) ≡ g(t).

In symbolic terms,

u(t) −→ LTI −→ h(t) ∗ u(t) ≡ g(t).

Differentiating g(t) = h(t) ∗ u(t) and using the time-derivative property of
convolution, we find

dg/dt = h(t) ∗ du/dt = h(t) ∗ δ(t) = h(t).

So, the second method for finding the impulse response h(t) is to differentiate
the system unit-step response g(t), which can be measured with a single input.
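Method (2) is easy to mimic numerically. Below is a hedged sketch (the time step and window are arbitrary) that recovers h(t) by finite-differencing a simulated measurement of the step response g(t) = t e^{−t}u(t) treated in Example 10.2, whose exact derivative is h(t) = (1 − t)e^{−t}u(t):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 8, dt)
g = t * np.exp(-t)                     # simulated "measured" unit-step response
h_est = np.diff(g) / dt                # finite-difference approximation of dg/dt
h_true = (1 - t[:-1]) * np.exp(-t[:-1])

print(np.max(np.abs(h_est - h_true)))  # small for a fine time step
```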
Example 10.1
Suppose that measurements in the lab indicate that the unit-step response
of a certain circuit is

g(t) = e^{−t}u(t).

What is the system impulse response h(t)? Can we measure h(t) by using
the first method described above?
Solution We find h(t) by differentiating g(t):

h(t) = dg/dt = (d/dt)(e^{−t}u(t)) = −e^{−t}u(t) + e^{−t} du/dt
     = −e^{−t}u(t) + e^{−t}δ(t) = δ(t) − e^{−t}u(t).

Notice that we used the sampling property of the impulse to replace e^{−t}δ(t)
with δ(t).
In implementing the first method, the system response to input p_ε(t) is

h(t) ∗ p_ε(t) = (δ(t) − e^{−t}u(t)) ∗ p_ε(t) = p_ε(t) − e^{−t}u(t) ∗ p_ε(t).

As ε is reduced, the second term of the output will converge to e^{−t}u(t),
because

lim_{ε→0} {e^{−t}u(t) ∗ p_ε(t)} = e^{−t}u(t) ∗ δ(t) = e^{−t}u(t).

However, the first term p_ε(t) will not converge (i.e., stop changing) as ε
is reduced. Even if we guess that an impulse is appearing in the output as
ε is made smaller, it will be difficult to estimate the area of the impulse.
Thus, the first method will not be workable in practice. This problem arises
because the system impulse response h(t) = δ(t) − e^{−t}u(t) contains an
impulse.

Example 10.2
Measurements in the lab indicate that the unit-step response of a certain
circuit is

g(t) = t e^{−t}u(t).

What is the system impulse response h(t)? Can we measure h(t) by using
the first method described in this section?
Solution Similar to Example 10.1, the impulse response is

h(t) = dg/dt = (d/dt)(t e^{−t}u(t)) = (1 − t)e^{−t}u(t) + t e^{−t} du/dt
     = (1 − t)e^{−t}u(t) + t e^{−t}δ(t) = (1 − t)e^{−t}u(t).

Because h(t) does not contain an impulse, the first method also will work.

Example 10.3
What is the frequency response of the system described in Example 10.2?
Solution Given that

h(t) = (1 − t)e^{−t}u(t) = e^{−t}u(t) − t e^{−t}u(t),

the Fourier transform

H(ω) = 1/(1 + jω) − 1/(1 + jω)^2 = jω/(1 + jω)^2
must be the corresponding frequency response.

Example 10.4
A system that is known to be LTI responds to the input u(t + 1) with the
output rect( 2t ), as shown at the top of Figure 10.2. What will be the system
response y(t) to the input f (t) = rect(t)? Solve this problem by first finding
the system impulse response.
Solution Since the system is time-invariant, the information

u(t + 1) −→ LTI −→ rect(t/2)

Figure 10.2 An LTI system that responds to an input u(t + 1) with the output rect(t/2) will respond to
input f(t) = rect(t) with output rect(t) − rect(t − 2), as shown here. (See Example 10.4.) [Diagram omitted.]

implies that

u(t) −→ LTI −→ rect((t − 1)/2) = u(t) − u(t − 2).
Thus,

g(t) = u(t) − u(t − 2)

so that

h(t) = g′(t) = δ(t) − δ(t − 2).

Consequently, the response to input f (t) = rect(t) is

y(t) = h(t) ∗ f (t) = [δ(t) − δ(t − 2)] ∗ rect(t) = rect(t) − rect(t − 2),

as shown at the bottom of Figure 10.2.

We can solve this same problem more directly by using the properties of super-
position and time invariance and the fact that the second input can be written as a
linear combination of delayed versions of the first input. Working the problem in this
way will produce the answer

y(t) = rect((t − 1/2)/2) − rect((t − 3/2)/2),
which can be shown to be the same as the former answer. Try it!
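Trying it numerically is straightforward. A hedged sketch (the rect helper and time grid are ad hoc choices; grid points are deliberately kept off the discontinuities of rect):

```python
import numpy as np

def rect(x):
    # 1 for |x| < 1/2, else 0 (edge values ignored)
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

t = np.arange(-3, 3, 0.01) + 0.005     # offset keeps grid points off the edges
y1 = rect(t) - rect(t - 2)                        # first answer of Example 10.4
y2 = rect((t - 0.5) / 2) - rect((t - 1.5) / 2)    # second answer

print(np.max(np.abs(y1 - y2)))         # 0: the two expressions agree
```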

10.1.2 General validity of y(t) = h(t) ∗ f (t) for LTI systems

It is surprisingly easy for us to develop the convolution formula, describing the zero-
state response of linear, time-invariant (LTI) systems, by working strictly within the
time domain, without any need for the frequency response H (ω) to exist. To do so,
we begin by noting that every LTI system in zero state will respond to an impulse
input with a specific signal that we denote as h(t) and call the impulse response. That
is, in symbolic notation,

δ(t) −→ LTI −→ h(t)

is true for any LTI system as a matter of definition of its impulse response. Now,
invoking time-invariance of the system—meaning that delayed inputs cause equally
delayed outputs—we have

δ(t − τ ) −→ LTI −→ h(t − τ ).

Finally, invoking the linearity property of the system—wherein a weighted superposition of inputs leads to an equally weighted superposition of outputs—we obtain

∫_{−∞}^{∞} f(τ)δ(t − τ)dτ −→ LTI −→ ∫_{−∞}^{∞} f(τ)h(t − τ)dτ,

where f(τ) is an arbitrary weighting function applied to impulse inputs δ(t − τ) with
delays τ.^1
The system input shown in the final statement just asserted is the convolution
f(t) ∗ δ(t) = f(t), and the corresponding system output is f(t) ∗ h(t) = h(t) ∗
f(t). Hence, the final statement is equivalent to

f(t) −→ LTI −→ h(t) ∗ f(t),

meaning that

y(t) = h(t) ∗ f(t)

is the formula for the zero-state response of all LTI systems with all possible inputs
f(t)—the existence of the Fourier transform of the system impulse response h(t) is
not necessary.

^1 Here, we assume that the LTI system satisfies the superposition property even for an uncountably
infinite number of input terms; i.e., for linear combinations of δ(t − τ ) for all values of τ. This assumption
is satisfied by all LTI systems encountered in the lab.

Example 10.5
When applied to a particular LTI system, an impulse input δ(t) produces the
output h(t) = e^{t}u(t). What is the zero-state response of the same system
to the input f (t) = u(t)?
Solution

y(t) = h(t) ∗ f(t) = e^{t}u(t) ∗ u(t) = ∫_{−∞}^{t} e^{τ}u(τ)dτ = (e^{t} − 1)u(t).

Notice that we could not have calculated y(t) with the Fourier method,
because the Fourier transform of h(t) = e^{t}u(t) does not exist. Also, notice
that the output y(t) is not bounded (y(t) → ∞ as t → ∞), even though
the input f(t) = u(t) is a bounded function (|f(t)| ≤ 1 for all t).

10.1.3 Testing whether a system is (zero-state) LTI


The relationship between the system input f (t) and the system response y(t) always
reveals whether a system is LTI.2 If a system is LTI, then the relationship between
f (t) and y(t) can be expressed in the convolution form y(t) = h(t) ∗ f (t), where
h(t) does not depend on the choice of f (t). If a system is not LTI, then either the
linearity property or the time-invariance property (or possibly, both) is violated. The
following examples illustrate how we can recognize whether a given system is LTI.

Example 10.6
For a system with input f (t), the output is given as

y(t) = f (t + T ).

Is this system LTI?


Solution Because we can write

y(t) = f (t + T ) = δ(t + T ) ∗ f (t),

this system is LTI with impulse response

h(t) = δ(t + T ).

Thus, the system must satisfy both zero-state linearity and time invariance.

Example 10.7
Suppose a system has input–output relation

y(t) = f^2(t + T).

Is this system LTI?


2
We assume zero state, which is the condition under which time invariance was defined.

Solution We can write

y(t) = f^2(t + T) = (δ(t + T) ∗ f(t))^2,

which is not in the form y(t) = h(t) ∗ f(t). Thus, the system is not LTI.

Example 10.8
Is the system

y(t) = f^2(t + T)

time invariant?
Solution We already know from Example 10.7 that the system is not
LTI. But, it still could be time invariant. To test time invariance, we feed
the system with a new input

f_1(t) = f(t − t_o)

and observe that the new output is

y_1(t) = f_1^2(t + T) = f^2(t + T − t_o) = f^2((t − t_o) + T) = y(t − t_o).

Because the new output is a delayed version of the original output, the
system is time invariant. Since the system is time invariant, but not LTI, it
must be nonlinear. We easily can confirm this by noting that a doubling of
the input does not double the output.
We also can test zero-state linearity of the system from first principles
by checking whether the output formula supports linear superposition. With
an input

f(t) = f_1(t) + f_2(t),

the system output is

y(t) = f^2(t + T) = [f_1(t + T) + f_2(t + T)]^2
     = f_1^2(t + T) + f_2^2(t + T) + 2 f_1(t + T) f_2(t + T).

Notice that this is not the sum of the responses f_1^2(t + T) and f_2^2(t + T)
due to individual inputs f_1(t) and f_2(t). Hence, as expected, the system is
not linear.

Example 10.9
A system responds to an unspecified input f (t) with the output 2rect(t).
If a delayed version f (t − 2) of the same input is applied, the output is
observed to be 4rect(t − 2). Is the system LTI? Time invariant? Zero-state
linear?

Solution The system is not time invariant because, if it were, its response
to input f (t − 2) would have been 2rect(t − 2) instead of 4rect(t − 2).
Because the system is not time invariant, it cannot be LTI.
Is the system zero-state linear? It could be. There is not enough infor-
mation provided to test whether the system is zero-state linear. Unless a
general input–output formula that relates f (t) and y(t) is given, it is not
always possible to test zero-state linearity and time invariance.
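When a formula relating f(t) and y(t) is available, tests like those in Examples 10.7 and 10.8 can also be run numerically on sampled signals. A hedged sketch (a discrete-index stand-in for the system y(t) = f^2(t + T); the test signals are arbitrary illustrative choices):

```python
T_shift = 2                        # the system's built-in time advance, in samples

def system(f, n):
    # y[n] = f(n + T)^2, with the input supplied as a function of sample index
    return f(n + T_shift) ** 2

f = lambda n: float(n == 0)        # a discrete impulse-like test input
f_delayed = lambda n: f(n - 5)

# Time invariance: the response to a delayed input equals the delayed response.
ti = all(system(f_delayed, n) == system(f, n - 5) for n in range(-20, 20))

# Linearity: doubling the input should double the output -- it does not.
g = lambda n: 2 * f(n)
lin = all(system(g, n) == 2 * system(f, n) for n in range(-20, 20))

print(ti, lin)                     # True, False: time invariant but nonlinear
```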

10.1.4 Frequency response H(ω) of LTI systems


In Section 10.1.2 we learned that we can compute the zero-state response of any LTI
system to an input f(t) by using y(t) = h(t) ∗ f(t). Thus, for a complex-exponential
input f(t) = e^{jωt},

y(t) = h(t) ∗ e^{jωt} = ∫_{−∞}^{∞} h(τ) e^{jω(t−τ)} dτ = e^{jωt} ∫_{−∞}^{∞} h(τ) e^{−jωτ} dτ.

This fundamental result indicates that, under zero-state conditions,

e^{jωt} −→ LTI −→ e^{jωt} H(ω),

where

H(ω) ≡ ∫_{−∞}^{∞} h(τ) e^{−jωτ} dτ.

In other words, given a complex exponential input, the output of an LTI system is
the same complex exponential, except scaled by a constant that is the frequency
response evaluated at the frequency of the input. As before, the frequency response
is the Fourier transform of the impulse response h(t). What we have just shown is
consistent with our earlier development of the concept of frequency response, but
here we have arrived at the same notion through use of the convolution formula.
Notice that this result implicitly assumes that the Fourier transform of h(t)
converges so that H (ω) is well defined (i.e., is finite at every value of ω). If the
Fourier integral does not converge (in engineering terms, H (ω) may be infinite), then
our result suggests that the system zero-state response to a bounded input ej ωt may
be unbounded. Such LTI systems with nonconvergent H (ω) are considered unstable,
a concept to be examined in more detail in the next section.

Example 10.10
Find the frequency responses H(ω) of the LTI systems having impulse
response functions h_1(t) = e^{−t}u(t), h_2(t) = tu(t), and h_3(t) = u(t).
Solution The Fourier transform integral of h_1(t) = e^{−t}u(t) equals
H_1(ω) = 1/(1 + jω), which is the frequency response of the system.
The Fourier transform integral of h_2(t) = tu(t) does not converge, and
consequently, the frequency response H_2(ω) does not exist.

The Fourier transform integral of h_3(t) = u(t) also does not converge.
Even though (according to Table 7.2) the Fourier transform of the power
signal u(t) is πδ(ω) + 1/(jω), this expression is not a regular function and the
frequency response H_3(ω) of the system does not exist.
A more general frequency-domain description will be discussed in
Chapter 11 for systems h2 (t) = tu(t) and h3 (t) = u(t), based on the Laplace
transform.

10.2 BIBO Stability

The previous section suggested that if the Fourier transform of an impulse response
h(t) does not converge—as in the case with h(t) = u(t)—then the zero-state response
of the corresponding LTI system to a bounded input may be unbounded. In most cases
this behavior of having an unbounded output would be problematic. We generally wish
to avoid designing such systems, because as the output signal continues to grow, one
of two things can happen: Either the output signal grows large enough that circuit
components burn out, or the output saturates at the supply voltage level (a nonlinear
effect). In either case the circuit fails to produce the designed response.
Ordinarily, we wish to have systems that produce bounded outputs from bounded
inputs. We will use the term BIBO stable—an abbreviation for bounded input, bounded
output stable—to refer to such systems. Systems that are not stable are called unstable.
The terms "bounded input" and "bounded output" in this context refer to functions
f(t) and y(t) = h(t) ∗ f(t) having bounded magnitudes. In other words,

|f (t)| ≤ α < ∞

and

|y(t)| = |h(t) ∗ f (t)| ≤ β < ∞

for all t, where the bounds α and β are finite constants. For instance, signals f(t) =
e^{−t}u(t), u(t), e^{−|t|}, cos(2t), sgn[cos(−t)], and e^{j3t} are bounded by α = 1 for any
value of t. If a signal is not bounded, then it is said to be unbounded; the functions
e^{−t}, tu(t), and e^{t}u(t) are examples of unbounded signals.
If an LTI system is not BIBO stable, this means that there is at least one bounded
input function f (t) that will cause an unbounded zero-state response y(t) = h(t) ∗
f (t). It does not mean that all possible outputs y(t) of an unstable system will
be unbounded. For instance, the system h(t) = u(t) is not BIBO stable (e.g., the
output y(t) = u(t) ∗ f (t) due to input f (t) = 1 is not bounded), but its zero-state
response y(t) = u(t) ∗ rect(t) due to input f (t) = rect(t) is bounded. In practice, we
are concerned if even one bounded input can cause an unbounded output, and hence,
we insist on BIBO stability.
We might ask how we can test whether a system is BIBO stable. Certainly, we
cannot try every possible bounded input and then examine the corresponding outputs
to check whether they are bounded. Doing so would require trying an infinite number
of inputs, which would require an infinite amount of time. Instead, we need a simpler
test. From our earlier discussion, we feel certain that if the Fourier transform of the
impulse response h(t) does not converge, then the corresponding LTI system cannot
be BIBO stable. So, might this be the test we are looking for? The answer is “not
quite,” because although convergence of the Fourier transform of h(t) is a necessary
condition for BIBO stability, it is not sufficient. For example, the Fourier transform
of sinc(t) converges, and yet a system with impulse response h(t) = sinc(t) is not
BIBO stable, as we will see shortly.
The key to BIBO stability turns out to be absolute integrability of the impulse
response h(t). More specifically, it can be shown that:

An LTI system is BIBO stable if and only if its impulse response h(t) is
absolutely integrable, satisfying

∫_{−∞}^{∞} |h(t)| dt < ∞.

The proof of this BIBO-stability criterion will be given in two parts: First, we
will show that absolute integrability of h(t) is sufficient for BIBO stability; then we
will show that absolute integrability also is necessary for BIBO stability.
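The criterion itself is easy to probe numerically for a causal h(t): approximate the integral of |h| over ever longer windows and watch whether the partial integrals level off. A minimal sketch (midpoint Riemann sums; the window lengths are arbitrary choices):

```python
import numpy as np

def abs_integral(h, T, n=200_000):
    """Midpoint-rule approximation of the integral of |h(t)| over [0, T]
    (sufficient for causal h, which vanishes for t < 0)."""
    dt = T / n
    t = (np.arange(n) + 0.5) * dt   # midpoints of n subintervals
    return float(np.sum(np.abs(h(t))) * dt)

h_stable = lambda t: np.exp(-t)         # e^{-t} u(t): absolutely integrable
h_unstable = lambda t: np.ones_like(t)  # u(t): not absolutely integrable

for T in (10.0, 100.0, 1000.0):
    print(T, abs_integral(h_stable, T), abs_integral(h_unstable, T))
# The stable column levels off near 1; the unstable column grows like T.
```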

Proof of sufficiency: With the help of the triangle inequality, note that

|y(t)| = |h(t) ∗ f(t)| = |∫_{−∞}^{∞} h(τ)f(t − τ) dτ|
       ≤ ∫_{−∞}^{∞} |h(τ)||f(t − τ)| dτ ≤ |f(t)|max ∫_{−∞}^{∞} |h(τ)| dτ,

where |f (t)|max is the maximum value of |f (t)| for all t. Now, suppose that
h(t) is absolutely integrable, that is,

∫_{−∞}^{∞} |h(τ)| dτ = γ < ∞.

Then, for any bounded input f (t) such that |f (t)|max < ∞, the corresponding
output y(t) = h(t) ∗ f (t) also is bounded, and we have

|y(t)|max ≤ γ |f(t)|max < ∞.

Hence, absolute integrability of h(t) is a sufficient condition for BIBO stability.


In the second part of the proof we need find only a single example of a bounded
input f (t) that causes an unbounded output h(t) ∗ f (t) when h(t) is not absolutely
integrable.
348 Chapter 10 Impulse Response, Stability, Causality, and LTIC Systems

Proof of necessity: With an LTI system output expressed as

y(t) = ∫_{−∞}^{∞} h(τ)f(t − τ) dτ,

we find that

y(0) = ∫_{−∞}^{∞} h(τ)f(−τ) dτ.

Consider now the case of a bounded input signal specified as

f(t) = sgn(h(−t)).
In this case f(−τ) = sgn(h(τ)) and, consequently,

y(0) = ∫_{−∞}^{∞} h(τ) sgn(h(τ)) dτ = ∫_{−∞}^{∞} |h(τ)| dτ.

Clearly, if h(t) is not absolutely integrable then y(0) will be infinite. Thus, y(t)
will not be bounded, even though the input f (t) = sgn(h(−t)) is bounded;
the sgn function takes on only ±1 values. Hence, we have proven the necessity
of an absolutely integrable h(t) for BIBO stability.
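The necessity argument can also be seen numerically. For h(t) = sinc(t), which is not absolutely integrable, the bounded input f(t) = sgn(h(−t)) drives y(0) = ∫|h(τ)| dτ, whose truncated approximations keep growing with the window. A sketch (np.sinc(x) is the normalized sin(πx)/(πx), which differs from the text's sinc only by a time scaling that does not affect the conclusion; window sizes are arbitrary):

```python
import numpy as np

def y_at_zero(T, n=2_000_000):
    """Truncated integral of |sinc(tau)| over [-T, T], i.e., the value y(0)
    produced by the bounded input sgn(h(-t))."""
    dt = 2.0 * T / n
    tau = -T + (np.arange(n) + 0.5) * dt
    return float(np.sum(np.abs(np.sinc(tau))) * dt)

print([round(y_at_zero(T), 2) for T in (10.0, 100.0, 1000.0)])
# Each value exceeds the last: the truncated integrals diverge (slowly, like log T).
```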

Example 10.11
Determine whether the given systems are BIBO stable. For each system
that is unstable, provide an example of a bounded input that will cause an
unbounded output.
h1(t) = e−t u(t),
h2(t) = e−t,
h3(t) = 2u(t − 1),
h4(t) = 2δ(t),
h5(t) = e2t u(t),
h6(t) = e−2t u(t) − e−t u(t),
h7(t) = δ′(t) ≡ (d/dt)δ(t),
h8(t) = cos(ωo t) u(t),
h9(t) = rect(t − 1).

Solution The system with h1 (t) = e−t u(t) is BIBO stable, because
∫_{−∞}^{∞} |e−t u(t)| dt = ∫_{0}^{∞} e−t dt = 1,

showing that h1 (t) is absolutely integrable.



The system with h2(t) = e−t is not BIBO stable because

∫_{−∞}^{∞} |e−t| dt = ∫_{−∞}^{∞} e−t dt,

which does not converge. There are many bounded inputs that will cause this
system to have an unbounded output. For example, choosing f (t) = u(t),
we see that the system response is
y(t) = e−t ∗ u(t) = ∫_{−∞}^{t} e−τ dτ,

which is not bounded.


The system with h3 (t) = 2u(t − 1) also is not BIBO stable, because
the area under |h3 (t)| = |2u(t − 1)| is infinite. Once again, choosing the
bounded input f (t) = u(t) will produce an unbounded output, because

y(t) = 2u(t − 1) ∗ u(t) = 2(t − 1)u(t − 1)

is unbounded.
Our proof that absolute integrability of h(t) is both a necessary and
sufficient condition for BIBO stability assumed that h(t) is a function.
Because the impulse response for system h4 (t) = 2δ(t) is not a function (it
involves an impulse), we resort to first principles to test whether this system
is BIBO stable. In particular, we note that the system zero-state response
to an arbitrary input f (t) is

y(t) = h4 (t) ∗ f (t) = 2δ(t) ∗ f (t) = 2f (t).

Thus, all bounded inputs f (t) cause bounded outputs 2f (t), and so the
system must be BIBO stable.
System h5 (t) = e2t u(t) is not BIBO stable, because the area under
|h5 (t)| = e2t u(t) is infinite. Many bounded inputs, including f (t) = u(t),
will produce an unbounded output.
System h6 (t) = e−2t u(t) − e−t u(t) is BIBO stable because

|h6 (t)| = |e−2t u(t) − e−t u(t)| ≤ e−2t u(t) + e−t u(t)

and the area under e−2t u(t) + e−t u(t) is finite.


System h7(t) = δ′(t) is not BIBO stable. To see the reason, note that
the zero-state response of the system is

y(t) = h7(t) ∗ f(t) = δ′(t) ∗ f(t) = f′(t).

Thus, if f(t) is a bounded input with a discontinuity, the response f′(t) is
not bounded (e.g., f(t) = u(t) with f′(t) = δ(t)).

System h8 (t) = cos(ωo t)u(t) is not BIBO stable because cos(ωo t)u(t)
is not an absolutely integrable function; the area under | cos(ωo t)u(t)| =
| cos(ωo t)|u(t) is infinite. An example of a bounded input that will cause
an unbounded output is f(t) = e^{jωo t}, which produces

y(t) = cos(ωo t)u(t) ∗ e^{jωo t} = e^{jωo t} H(ωo),

where

H(ωo) = ∫_{0}^{∞} cos(ωo t) e^{−jωo t} dt

is non-convergent. Similarly, cos(ωo t) and sin(ωo t) are other bounded
inputs that will cause this system to produce an unbounded output.
Finally, system h9(t) = rect(t − 1) is BIBO stable because the rect
function has a finite area under the curve.
Table 10.1 shows plots of some of the impulse responses examined
previously, arranged in two columns according to whether each impulse
response corresponds to a stable or an unstable system.
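These instability examples are easy to reproduce numerically; e.g., discretizing the convolution of h3(t) = 2u(t − 1) with the bounded input f(t) = u(t) shows the ramp 2(t − 1)u(t − 1) emerging (the step size and time span below are arbitrary choices):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
h = 2.0 * (t >= 1.0)                 # samples of h3(t) = 2u(t - 1)
f = np.ones_like(t)                  # samples of f(t) = u(t), bounded by 1
y = np.convolve(h, f)[:t.size] * dt  # Riemann approximation of (h * f)(t)

# Compare with the closed form 2(t - 1)u(t - 1) at the end of the window;
# the output keeps growing as the window is extended.
print(y[-1], 2.0 * (t[-1] - 1.0))
```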

We close this section with several observations that reinforce and add to what
we have learned about BIBO stability. First, note that the above examples illustrate
that the test for stability is the absolute integrability of the impulse response (for h(t)
that are functions). The test is not the boundedness of the impulse response. This is
sometimes a point of confusion. Notice that both the aforementioned h3 (t) and h8 (t)
are bounded, and yet they correspond to unstable systems.
Second, it should be clear that BIBO stable systems always have a well-defined
frequency response, H (ω). This follows from Chapter 7, where we learned that abso-
lute integrability is a sufficient condition for the existence of the Fourier transform.
Third, there are some systems with a well defined frequency response that are
not BIBO stable. So, the existence of H (ω) is not sufficient to imply stability. For
example, h(t) = sinc(t) is not absolutely integrable, and so a system having this
impulse response is not BIBO stable. Yet, H(ω) is well defined; indeed,
H(ω) = π rect(ω/2), which represents an ideal low-pass filter.3
Fourth, most systems that are not BIBO stable—including very important systems
such as ideal integrators (h(t) = u(t)) and differentiators (h(t) = δ  (t))—do not have
convergent H (ω). Unlike in the case of the ideal low-pass filter, Fourier representation
of such systems is difficult or impossible. The best tool for frequency-domain analysis
of unstable systems is the Laplace transform, which will be introduced in Chapter 11.
The Laplace transform also is very convenient for solving zero-input and initial-value
problems, as we shall see.

3 The input f(t) = sgn(sinc(t)) is an example of a bounded input that will cause the ideal low-pass
filter to have an unbounded output. Many other bounded inputs, such as rect(t), cos(t), etc., will produce
a bounded output.

[Plots omitted.]

BIBO stable (absolutely integrable): e−t u(t), (e−2t − e−t)u(t), rect(t − 1)
Unstable (NOT absolutely integrable): e−t, 2u(t − 1), e2t u(t)

Table 10.1 Some of the impulse response functions h(t) examined in
Example 10.11, categorized according to whether they correspond to stable
systems (h(t) absolutely integrable) or unstable systems.

Finally, some important unstable systems, such as ideal low-pass filters, and ideal
integrators and differentiators, can be approximated as closely as we like by stable
systems. Chapter 12 introduces this topic of filter design.

10.3 Causality and LTIC Systems

An LTI system, stable or not, is said to be causal if its zero-state response h(t) ∗ f (t)
depends only on past and present, but not future, values of the input signal f (t).
Systems that are not causal are said to be noncausal. All practical analog LTI circuits
built in the lab are, of course, causal, because the existence of a noncausal circuit—that

is, one that produces an output before its input is applied—would be as impossible as
a football flying off the tee before the kicker kicked the ball.4
Writing the LTI zero-state response in terms of the convolution formula,
y(t) = ∫_{−∞}^{∞} h(τ)f(t − τ) dτ,

we see clearly that y(t1 ), the output at some fixed time t1 , can depend on f (t) for
t > t1 only if h(τ ) is nonzero for some negative values τ . For example, if h(−1)
is nonzero, then we see from the convolution formula that y(t1 ) will depend on
f (t1 − (−1)) = f (t1 + 1), that is, on a future value of f . Thus:
An LTI system with impulse response function h(t) is causal if and only
if h(t) = 0 for t < 0.
Clearly, then, the ideal low-pass system h(t) = sinc(t), just considered in the
previous section, is not a causal system (because sinc(t) is nonzero for t < 0) and
thus cannot be implemented by an LTI circuit in the lab, no matter how clever the
designer. Fortunately, there is no need for an exact implementation. In Chapter 12,
we will see how to design causal and realizable (as well as BIBO stable) alternatives
that will closely approximate the behavior of an ideal low-pass filter.
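For sampled impulse-response data, the criterion translates into a one-line check. A sketch, assuming h is given as samples over a time grid t, with a small numerical tolerance of our choosing (np.sinc is the normalized sin(πx)/(πx), which is likewise nonzero for t < 0):

```python
import numpy as np

def is_causal(h, t, tol=1e-12):
    """An LTI impulse response (samples h over times t) is causal
    iff it vanishes for all t < 0."""
    return bool(np.all(np.abs(np.asarray(h)[np.asarray(t) < 0.0]) <= tol))

t = np.linspace(-10.0, 10.0, 2001)
print(is_causal(np.where(t >= 0.0, np.exp(-t), 0.0), t))  # e^{-t}u(t): True
print(is_causal(np.sinc(t), t))                           # sinc(t): False
```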
It is important to note that the causality criterion just stated, h(t) = 0 for t < 0,
applies only to systems whose impulse responses can be expressed in terms of
functions—for example, sinc(t), u(t), e−t u(t), etc. If we must write the impulse
response in terms of the impulse δ(t) and related distributions, then we should
determine causality by directly examining the dependence of the zero-state output
on the system input.

Example 10.12
The zero-state response of an LTI system to arbitrary input f (t) is described
by

y(t) = f (t − 2).

Find the system impulse response h(t) and determine whether the system
is causal.
Solution Since f (t − 2) = δ(t − 2) ∗ f (t), the input–output formula can
be written as

y(t) = δ(t − 2) ∗ f (t).

4 As described in Section 10.4, however, noncausal models are highly useful in practice, especially
for processing of spatial data (e.g., images) and in digital signal processing, where entire signals can be
prestored prior to processing.

Hence, the impulse response of the system is

h(t) = δ(t − 2).

Clearly, this system is causal, because the present system output is
simply the system input from 2 time units prior. Because the output at any
instant does not depend on future values of the input, the system is causal.

Example 10.13
The zero-state response y(t) of an LTI system to a unit-step input u(t) is
the function

g(t) = rect(t).

Find the system impulse response h(t) and determine whether the system
is causal.
Solution Since

rect(t) = u(t + 1/2) − u(t − 1/2),

and because the impulse response h(t) of an LTI system is the derivative of
the unit-step response g(t), we have

h(t) = (d/dt) rect(t) = u′(t + 1/2) − u′(t − 1/2) = δ(t + 1/2) − δ(t − 1/2).

Thus, the system zero-state response to an arbitrary input f(t) is

y(t) = [δ(t + 1/2) − δ(t − 1/2)] ∗ f(t) = f(t + 1/2) − f(t − 1/2).

Clearly, this system is noncausal, because the system output y(t) depends
on f(t + 1/2), representing an input half a time unit into the future.
Another way to see that this system is noncausal, without having to
find h(t), is to note that the output rect(t) turns on at time t = −1/2, which is
earlier than t = 0 when the input u(t) turns on. No practical causal circuit
built in the lab can behave this way!

It should be clear from the foregoing examples that LTI systems with impulse
responses of the form δ(t − to) are causal when to ≥ 0 and noncausal when to < 0. In
general, we will use the term causal signal to refer to signals that could be the impulse
response of a causal LTI system, and use the term LTIC to describe LTI systems
that are causal. For instance, δ(t), u(t − 1), e−t u(t), δ′(t − 2), and cos(2πt)u(t)
are examples of causal signals that qualify as impulse responses of possible LTIC
systems, whereas signals δ(t + 2), e−t, u(t + 1), and cos(t) are noncausal and cannot
be impulse responses of LTIC systems. (See Table 10.2.)

[Plots omitted.]

Causal: δ(t − 1), u(t − 1), cos(2πt)u(t)
Noncausal: δ(t + 2), e−t, u(t + 1)

Table 10.2 Examples of causal and noncausal signals.

Example 10.14
Determine whether the following signal is causal:

h(t) = δ(t) + u(t + 1).

Solution To solve this problem, we must determine whether the hypothesized
h(t) can be the impulse response of a causal LTI system. The output
of an LTI system having the given h(t) as its impulse response would be

y(t) = (δ(t) + u(t + 1)) ∗ f(t) = δ(t) ∗ f(t) + u(t + 1) ∗ f(t)
     = f(t) + f(t + 1) ∗ u(t) = f(t) + ∫_{−∞}^{t} f(τ + 1) dτ.

Clearly, y(t) depends on values of f (t) up to 1 time unit into the future,
and thus the LTI system is not causal. Consequently, the given h(t) is not
causal.

A more direct approach to this problem is as follows: u(t + 1) clearly
is a noncausal signal, since it has nonzero values for t < 0. The sum of
a noncausal signal with any other signal obviously will be a noncausal
signal, unless the two signals cancel one another for t < 0. With that logic,
the given h(t) clearly is a noncausal signal.

Note that, while all practical LTI systems built in the lab will be causal, not all
causal lab systems are LTI. The concept of causality also applies to systems that
are nonlinear and/or time-varying. The following examples illustrate some of these
possibilities.

Example 10.15
A particular time-varying (and thus not LTI) system is described by the
input–output relation

y(t) = cos(t + 5)f (t).

Is the system causal or noncausal?


Solution The system clearly is causal, because the output does not depend
on future values of the input f (t).

Example 10.16
A system is described by the input–output relation

y(t) = f(t^2).

Is the system causal or noncausal?


Solution The system is noncausal because, for instance,

y(−1) = f ((−1)2 ) = f (1),

showing that there are times for which the output depends on future values
of the input.

Example 10.17
A particular nonlinear system is described by the input–output relation

y(t) = f^2(t + T).

Is the system causal or noncausal?


Solution The answer depends on whether T is negative or positive. If
T ≤ 0, then the output y(t) depends only on the present or a past value of
the input f (t), and the system is causal. However, if T > 0, then the system
is noncausal.

Example 10.18
What type of filter is implemented by an LTI system having the impulse
response

h(t) = sinc(t) cos(ωo t),

assuming ωo > π? Discuss why this filter is impossible to build in the lab.
Solution According to Table 7.2,

sinc(t) ↔ π rect(ω/2),

so use of the modulation property implies that the frequency response of
the given system is

H(ω) = (π/2) rect((ω − ωo)/2) + (π/2) rect((ω + ωo)/2).

Clearly, this frequency response describes an ideal band-pass filter with
a center frequency ωo and bandwidth 2. The ideal filter is impossible to
implement in the lab, because the system impulse response is noncausal.

Example 10.19
The input–output relation of a system is given as

y(t) = f (3t).

Is the system causal? Is it time invariant? Is it LTIC?


Solution Since y(1) = f (3), the output at t = 1 depends on the input
at t = 3. So, the system is not causal and, therefore, it cannot be LTIC.
However, it still could be time invariant, so let us check.
Time invariance requires that delayed inputs lead to equally delayed,
but otherwise unchanged, outputs. Consider a new system input, which is
a delayed version of the original, specified as

f1 (t) = f (t − to ).

According to the given input–output rule, the corresponding output is

y1 (t) = f1 (3t) = f (3t − to ).

Because y1 (t) is different from

y(t − to ) = f (3(t − to )),

the new output y1 (t) is not a to -delayed version of the original output and,
therefore, the system is time varying.
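The time variance just derived can be confirmed numerically by comparing delay-then-transform against transform-then-delay on a grid; a sketch with an arbitrary test input and delay:

```python
import numpy as np

# For the system y(t) = f(3t): feed a delayed input f(t - t0), and compare
# against the delayed output y(t - t0). Time invariance would make them equal.
f = lambda t: np.exp(-np.abs(t))
t0 = 1.0
t = np.linspace(-5.0, 5.0, 11)

y_delayed_input = f(3 * t - t0)    # response to the delayed input f(t - t0)
y_then_delayed = f(3 * (t - t0))   # original response, delayed by t0

print(np.allclose(y_delayed_input, y_then_delayed))   # False
```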

10.4 Usefulness of Noncausal System Models

While our focus in the remaining chapters will be on LTIC systems, it should be noted
that noncausal LTI system models are important in practice. First, as we saw in the
previous section, certain ideal filtering operations (e.g., ideal low-pass) are noncausal.
We need our mathematical apparatus to be general enough to model these types of
filters, because we often wish to design real filters that approximate such ideal filters.
As indicated earlier, Chapter 12 will introduce this filter design problem.
A second use for noncausal models arises in the processing of spatial data where
one or more of the variables involves position. Such examples abound in signal
processing descriptions of imaging systems—for instance, cameras, telescopes, x-ray
computer tomography (CT), synthetic aperture radar (SAR), and radio astronomy. In
such systems the signal being imaged is a spatial quantity in two or three dimensions.
Often, it is convenient to define the origin at the center of the scene, for both the
input and output of the system. Doing so generally leads to a noncausal model for the
processing. For example, a simple noise-smoothing operation, where each point in
the output image is an average of the values surrounding that same point in the input
image, can be described by LTI filtering with rect(x) as the impulse response, which
is noncausal. Here, we use the variable x, rather than t, to denote position.
Finally, noncausal models routinely are applied in digital signal processing, where
an entire sampled signal may be prestored prior to processing. Depending on how the
time origin is specified for the prestored signal samples and the output signal samples,
a “current” output sample may depend on “future” input samples. For example,

yn = fn+1 + 2fn + fn−1 ,

where yn depends on the future (already prestored) value fn+1 . There even are cases
with digital processing where signals are reversed and processed backwards. This is
the epitome of noncausal processing!
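A sketch of such an offline, noncausal operation on a prestored signal (zero padding at the edges is our arbitrary boundary choice):

```python
import numpy as np

def smooth(f):
    """y[n] = f[n+1] + 2 f[n] + f[n-1], computable offline because the
    whole signal f is prestored (edges use zero padding)."""
    fp = np.pad(np.asarray(f, dtype=float), 1)   # one zero at each end
    return fp[2:] + 2.0 * fp[1:-1] + fp[:-2]

f = np.array([0.0, 1.0, 0.0, 0.0])
print(smooth(f))    # [1. 2. 1. 0.]
```

The first output sample already depends on f[1], a "future" sample, which is what makes the rule noncausal.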

10.5 Delay Lines

The system

h(t) = Kδ(t − to ) ↔ H (ω) = Ke−j ωto

is zero-state linear, time invariant, and BIBO stable. As we have seen in Chapter 8,
systems having the frequency response

H (ω) = Ke−j ωto

simply delay and amplitude-scale their inputs f (t) to produce outputs

y(t) = Kδ(t − to ) ∗ f (t) = Kf (t − to ).



Clearly, if to ≥ 0, then the output y(t) depends only on past or present values of f (t)
and the system is LTIC.
The LTIC system just described can be considered a delay line having a delay to
and gain K. A practical example of a delay-line system is the coaxial cable. The delay
of a coaxial cable or any type of transmission-line cable can be calculated as to = L/v,
where L is the physical cable length and v is the signal propagation speed in the cable,
usually a number close to (but less than) the speed of light in free space, c = 3 × 10^8
m/s. If the cable is short, then to will be very small and the delay can be neglected in
applications where signal durations are larger than to. For large L, on the other hand,
delay can be an appreciable fraction of signal durations (e.g., to = 0.1 ms for a cable
of length 30 km) and may need to be factored into system calculations. In a lossless
cable (with some further conditions satisfied concerning impedance matching), the
amplitude scaling constant is K = 1. However, in practice, K < 1.

Example 10.20
The signal input to a coaxial line is f (t) = u(t). At the far end of the line,
an output y(t) = 0.2u(t − 10) is observed. What is the impulse response
h(t) of the system?

Solution From the given information, we deduce that the unit-step response
of the system is

g(t) = 0.2u(t − 10).

Differentiating this equation on both sides, we get

h(t) = g′(t) = 0.2δ(t − 10).

Alternatively, the given information suggests that the coax has a 10-
second delay and a gain of 0.2. Thus, we can construct the impulse response
of the system as

h(t) = 0.2δ(t − 10).
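A delay line of this kind is equally simple to model on sampled data; the sketch below shifts by a whole number of samples (rounding t0 to the sample grid is our simplification):

```python
import numpy as np

def delay_line(f, t, t0, K=1.0):
    """Model y(t) = K f(t - t0) for a uniformly sampled signal, with t0 >= 0
    rounded to the nearest whole number of samples."""
    dt = t[1] - t[0]
    n0 = int(round(t0 / dt))
    y = np.zeros_like(f)
    y[n0:] = K * f[:f.size - n0]
    return y

t = np.arange(0.0, 20.0, 0.1)
f = (t >= 0.0).astype(float)            # unit-step input u(t)
y = delay_line(f, t, t0=10.0, K=0.2)    # coax with 10 s delay, gain 0.2
print(y[t < 10.0].max(), y[t >= 10.0][0])   # 0.0 0.2
```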

Electrical circuits containing finite-length transmission lines are said to be distributed
circuits, as opposed to lumped-element circuits, which are composed of discrete
elements, such as capacitors and resistors, with insignificantly small (in the sense
of associated propagation time delays) physical dimensions. Transmission lines and
distributed circuits are studied in detail in courses on electromagnetics and RF circuits.
While techniques from elementary circuit analysis are helpful in these studies, trans-
mission lines do not behave like lumped-element circuits.

EXERCISES
10.1 An LTI circuit has the frequency response H(ω) = 1/(1 + jω) + 1/(2 + jω).
What is the system impulse response h(t) and what is the system response
y(t) = h(t) ∗ f(t) to the input f(t) = e−t u(t)?
10.2 Find the impulse responses h(t) of the systems having the following frequency
responses:
(a) H(ω) = 1/(3 + jω).
(b) H(ω) = 1/(4 + jω)^2.
(c) H(ω) = jω/(5 + jω) = 1 − 5/(5 + jω).
(d) H(ω) = e^{−jω}/(1 + jω).

10.3 For a system with frequency response H(ω) = 1/(1 + jω), plot the system
impulse response h(t) and the system output y(t) = h(t) ∗ f(t) for f(t) =
10 rect(t/0.1). Explain how the plot of a different output y(t) = h(t) ∗ p(t)
would appear for p(t) = 1000 rect(t/0.001). You need not do the actual
calculation and plotting.
10.4 Determine the zero-state response y(t) = h(t) ∗ f (t) of the following LTI
systems to the input f (t) = u(t) − u(t − 2):
(a) h(t) = u(t).
(b) h(t) = e−2t u(t).
(c) h(t) = e2t u(t).
10.5 Find the impulse responses h(t) of the LTI systems having the following
unit-step responses:
(a) g(t) = 5u(t − 5).
(b) g(t) = t^2 u(t).
(c) g(t) = u(t)(2 − e−t ).
10.6 If the unit-step response of an LTI system is g(t) = 6 rect((t − 6)/3), find the
system zero-state responses to inputs
(a) f (t) = rect(t).
(b) f (t) = e−4t u(t).
(c) f (t) = 2δ(t).
10.7 Each of the given signals represents the impulse response of an LTI system.
Determine whether each system is causal and BIBO stable. If a system is
not BIBO stable, find an example of a bounded input f (t) that will cause an
unbounded response h(t) ∗ f (t).
(a) h(t) = et.
(b) h(t) = rect((t − 1)/2).
(c) h(t) = rect(t).
(d) h(t) = δ(t + 1) − δ(t − 1).
(e) h(t) = u(t)e−jt.

10.8 Determine whether the LTI systems with the following impulse response
functions are causal:
(a) h(t) = u(t − 1).
(b) h(t) = u(t + 1).
(c) h(t) = δ(t − 2) ∗ u(t + 1).
(d) h(t) = u(1 − t) − u(t).
(e) h(t) = u(−t) ∗ u(−t).

10.9 Determine whether the following LTIC systems are BIBO stable and explain
why or why not:
(a) h1 (t) = 5δ(t) + 2e−2t u(t) + 3te−2t u(t).
(b) h2 (t) = δ(t) + u(t).
(c) h3(t) = δ′(t) + e−t u(t).
(d) h4 (t) = −2δ(t − 3) − te−5t u(t).

10.10 For each unstable system in Problem 10.9, provide an example of a bounded
input that will cause an unbounded output.
10.11 Consider the given zero-state input–output relations for a variety of systems.
In each case, determine whether the system is zero-state linear, time invariant,
and causal.
(a) y(t) = f(t − 1) + f(t + 1).
(b) y(t) = 5f(t) ∗ u(t).
(c) y(t) = δ(t − 4) ∗ f(t) − ∫_{−∞}^{t−2} f^2(τ) dτ.
(d) y(t) = ∫_{−∞}^{t+2} f(τ) dτ.
(e) y(t) = ∫_{−∞}^{t−2} f(τ^2) dτ.
(f) y(t) = f^3(t − 1).
(g) y(t) = f((t − 1)^2).
(g) y(t) = f ((t − 1)2 ).

11
Laplace Transform,
Transfer Function, and LTIC
System Response

11.1 LAPLACE TRANSFORM AND ITS PROPERTIES
11.2 INVERSE LAPLACE TRANSFORM AND PFE
11.3 s-DOMAIN CIRCUIT ANALYSIS
11.4 GENERAL RESPONSE OF LTIC CIRCUITS AND SYSTEMS
11.5 LTIC SYSTEM COMBINATIONS
EXERCISES

Consider applying an exponential input f(t) = e^{st} to an LTIC system having impulse
response h(t). Then the zero-state response y(t) = h(t) ∗ f(t) can be calculated as

y(t) = h(t) ∗ e^{st} = ∫_{−∞}^{∞} h(τ)e^{s(t−τ)} dτ = e^{st} ∫_{−∞}^{∞} h(τ)e^{−sτ} dτ.

Since in LTIC systems h(t) is zero for t < 0, it should be possible to move the lower
integration limit in the above formula to 0; however, in anticipation of a possible h(t)
that includes δ(t) we will move the limit to 0−. Thus, we obtain a rule

e^{st} −→ LTIC −→ Ĥ(s)e^{st},

where

Ĥ(s) ≡ ∫_{0−}^{∞} h(t)e^{−st} dt

is known as both the Laplace transform of h(t) and the transfer function of the system
with impulse response h(t).
The above relations hold whether s is real or complex, so long as the Laplace
transform integral defining Ĥ (s) converges. In general, s is complex, and then Ĥ (s)


is a function of a complex variable as described below in Section 11.1.1 and also in


Appendix A, Section A.7.
Note that in the special case with s = jω, the input–output rule above becomes

e^{jωt} −→ LTIC −→ Ĥ(jω)e^{jωt},

where

Ĥ(jω) = ∫_{0−}^{∞} h(t)e^{−jωt} dt = H(ω)

is both the system frequency response and the Fourier transform of the impulse
response h(t). Clearly, then, the frequency response H(ω) and transfer function Ĥ(s)
of LTIC systems are related as

H(ω) = Ĥ(jω),

assuming that both transforms—Fourier and Laplace—converge.


The existence of the frequency response H(ω), that is, the convergence of the
Fourier transform of h(t), is guaranteed when the system is BIBO stable (i.e., when
the impulse response h(t) is absolutely integrable). If the system is not BIBO stable,
then the frequency response H(ω) usually does not exist (ideal low-pass, band-pass,
and high-pass filters are exceptions). Unlike H(ω), however, the transfer function
Ĥ(s) frequently exists for unstable systems, for some values of s. This follows, since
for complex-valued s ≡ σ + jω,

Ĥ(s) = ∫_{0−}^{∞} h(t)e^{−st} dt = ∫_{0−}^{∞} h(t)e^{−σt}e^{−jωt} dt

is convergent for any real σ for which the product h(t)e−σ t is absolutely integrable.
For instance, h(t) = et u(t) represents an unstable system whose frequency response
H(ω) is undefined. But, because h(t)e−σt is absolutely integrable for σ > 1, a
convergent Laplace transform Ĥ(s) of h(t) = et u(t) exists for all s = σ + jω satisfying
σ > 1.
It should be apparent from the above discussion that the LTIC system transfer
function Ĥ (s) is a generalization of the frequency response H (ω) that remains valid
for many unstable systems. In this chapter we will develop an Ĥ (s)-based frequency-
domain method applicable to nearly all LTIC systems that we will encounter.
Section 11.1 focuses on the Laplace transform and its basic properties. We shall
see that the Laplace transform of the zero-state response y(t) = h(t) ∗ f (t) of LTIC
systems can be expressed as

Ŷ (s) = Ĥ (s)F̂ (s)

if F̂ (s) denotes the Laplace transform of a causal input signal f (t). Since causal
signals and their Laplace transforms form unique pairs (like Fourier transform pairs),

a causal zero-state response y(t) can be uniquely inferred from its Laplace transform
Ŷ (s) as described in Section 11.2. In Section 11.3 we will learn how to determine
Ĥ (s) and Ŷ (s) in LTIC circuits using circuit analysis methods similar to the phasor
method of Chapters 4 and 5, and then infer h(t) and y(t) from Ĥ (s) and Ŷ (s).
Section 11.4 examines the general response of nth-order LTIC circuits and systems
in terms of Ĥ (s), including their zero-input response to initial conditions. Finally, in
Section 11.5 we consider systems that are composed of interconnected subsystems,
and determine how the transfer function of the overall system is related to the transfer
functions of its subsystems.

11.1 Laplace Transform and its Properties

11.1.1 Definition, ROC, and poles

The Laplace transform Ĥ(s) of a signal h(t) is defined1 as

Ĥ(s) = ∫_{0−}^{∞} h(t)e^{−st} dt,

where

s ≡ σ + jω

is a complex variable with real part σ and imaginary part ω. Because a complex
variable is a pair of real variables, in this case s = (σ, ω), the Laplace transform
Ĥ (s) is a function of two real variables. For conciseness, we choose to write Ĥ (s)
rather than Ĥ (σ, ω).
Generally, the Laplace transform integral converges2 for some values of s and
not for others. The region of the complex number plane containing all

s = (σ, ω) = σ + jω

for which the Laplace transform integral converges is said to be the region of conver-
gence (ROC) of the Laplace transform. We refer to the entire complex number
plane, containing all possible values of s = (σ, ω) = σ + jω, as the s-plane. The

1 Ĥ(s) defined above also is known as the one-sided Laplace transform of h(t) to distinguish it from a
two-sided transform defined as ∫_{−∞}^{∞} h(t)e^{−st} dt. In these notes, we will use only the one-sided transform.
Thus, there will be no occasion for ambiguity when we use the term Laplace transform to refer to its
one-sided version.
2 Meaning that the integral ∫_{0−}^{T} h(t)e^{−st} dt approaches a limit as T → ∞.

[Figure omitted.]

Figure 11.1 (a) The ROC (shaded region) and pole location (×) of the Laplace
transform expression Ĥ(s) = 1/(s − 1) on the complex s-plane, and (b) surface plot of
|Ĥ(s)| = |1/(s − 1)| = 1/|σ + jω − 1| above the s-plane.

horizontal (so-called real) axis of the s-plane is labeled with σ and the vertical (so-
called imaginary) axis with ω as shown in Figure 11.1a. It is important to realize that
this complex number plane is just the usual two-dimensional plane with a real variable
corresponding to each axis. The following example computes a Laplace transform
having the ROC shown as the shaded area in Figure 11.1a.

Example 11.1
Determine the Laplace transform Ĥ (s) of h(t) = et u(t) and its ROC.
Solution The Laplace transform of

h(t) = e^t u(t)

is

Ĥ(s) = ∫_{0−}^{∞} e^t u(t) e^{−st} dt = ∫_{0}^{∞} e^{(1−s)t} dt = [e^{(1−s)t}/(1 − s)]_{t=0}^{∞}

for all s = σ + j ω for which the last expression converges.


Notice that convergence is possible if and only if

e^{(1−s)t} = e^{(1−σ−jω)t}

reaches a limit as t → ∞. This can happen only if σ = Re{s} > 1, in which
case the limit is zero—if σ = 1, then e^{(1−σ−jω)t} = e^{−jωt}, which has no
limiting value, and if σ < 1, then the expression diverges as t is increased.
Hence we conclude that we must have σ > 1 for convergence, and thus the
ROC for the Laplace transform of e^t u(t) is described by the inequality

σ = Re{s} > 1.
Section 11.1 Laplace Transform and its Properties 365

For values of s satisfying this inequality, the Laplace transform is obtained as the algebraic expression

Ĥ(s) = e^{(1−s)t}/(1 − s) |_{t=0}^{∞} = (0 − 1)/(1 − s) = 1/(s − 1).

Figure 11.1 illustrates various aspects of the Laplace transform

Ĥ(s) = 1/(s − 1)

of the signal h(t) = e^t u(t):

• The shaded portion of the s-plane depicted in Figure 11.1a corresponds to the ROC: {s : σ > 1}.
• The "×" sign shown in the figure marks the location s = 1 in the s-plane where the Laplace transform expression 1/(s − 1) diverges.
• The variation of |1/(s − 1)| over the entire s-plane is depicted in Figure 11.1b in the form of a surface plot—note that the surface resembles a circus tent supported by a single pole erected above the s-plane at the location × marked in Figure 11.1a.³
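The convergence condition in Example 11.1 lends itself to a quick numerical check: truncating the Laplace integral of h(t) = e^t u(t) at a finite upper limit T and evaluating it at a point with σ = Re{s} > 1 should reproduce 1/(s − 1). A minimal Python sketch of ours; the choices s = 2 + j and T = 30 are purely illustrative:

```python
import cmath

def truncated_laplace(f, s, T, n=200_000):
    # Midpoint Riemann sum for the integral of f(t) e^{-st} over [0, T].
    dt = T / n
    return dt * sum(f((k + 0.5) * dt) * cmath.exp(-s * (k + 0.5) * dt)
                    for k in range(n))

s = 2 + 1j                                  # sigma = 2 > 1, inside the ROC
H = truncated_laplace(lambda t: cmath.exp(t), s, T=30.0)
print(abs(H - 1 / (s - 1)))                 # tiny: the integral has converged
```

For σ ≤ 1 the same truncated sum grows (or oscillates) without settling as T increases, in line with the divergence argument above.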
Locations on the s-plane—or the values of s—where the magnitude of the expression for Ĥ(s) goes to infinity are called poles of the Laplace transform Ĥ(s).

As we shall discuss below, the ROC always has at least one pole on its boundary.

Example 11.2
Determine the Laplace transform F̂(s) of the signal f(t) = e^{−2t}u(t) − e^{−t}u(t).

Solution Proceeding as in Example 11.1,

F̂(s) = ∫_{0⁻}^{∞} (e^{−2t} − e^{−t})u(t)e^{−st} dt = ∫_{0}^{∞} (e^{−(2+s)t} − e^{−(1+s)t}) dt
     = e^{−(2+s)t}/(−(2 + s)) |_{t=0}^{∞} − e^{−(1+s)t}/(−(1 + s)) |_{t=0}^{∞}
     = 1/(s + 2) − 1/(s + 1) = −1/((s + 2)(s + 1))

under the assumptions σ = Re{s} > −2 and σ = Re{s} > −1, dictated by convergence of the two terms above. The first condition is automatically
³ Outside the ROC, {s : σ > 1}, this surface represents the analytic continuation of the Laplace transform integral, in analogy with 1/(1 − s) representing an extension of the infinite series 1 + s + s² + s³ + · · · beyond its region of convergence, |s| < 1, as an analytic function. The concept of analytic continuation arises in the theory of functions, where it is shown that analytic functions known over a finite region of the complex plane can be uniquely extended across the rest of the plane by a Taylor series expansion process.

Figure 11.2 (a) The ROC (shaded region) and pole locations (×) of F̂(s) = −1/((s + 2)(s + 1)), and (b) a surface plot of |F̂(s)|.
satisfied if the second is satisfied; hence the ROC consists of all complex
s such that σ = Re{s} > −1, which is the cross-hatched region shown in
Figure 11.2a.

The pole locations of

F̂(s) = −1/((s + 2)(s + 1))

are evident in the surface plot of |F̂(s)| shown in Figure 11.2b, with poles located at s = −2 and s = −1. Notice that the ROC of F̂(s) lies to the right of the rightmost pole located at s = −1.
The ROC and pole locations depicted in Figures 11.1 and 11.2 are consistent
with the following general rule:

The ROC of a Laplace transform coincides with the portion of the s-plane to the right of the rightmost pole (not counting a possible pole at s = ∞).

As illustrated in Example 11.2, the Laplace transform of a function

f(t) = f₁(t) + f₂(t) + · · ·

will have the form

F̂(s) = F̂₁(s) + F̂₂(s) + · · ·

where F̂ₙ(s) is the Laplace transform of fₙ(t), and, furthermore, the ROC of F̂(s) will be the intersection of the ROCs of all of the F̂ₙ(s) components. This is the reason underlying the general rule that the ROC of a Laplace transform F̂(s) is the region to the right of its rightmost pole, not counting a possible pole at s = ∞. A pole at infinity arises if F̂(s) contains an additive term proportional to s (or any increasing function

of s). But, the ROC of F̂n (s) = s is the entire s-plane (as shown in Example 11.4
below), and so a pole at s = ∞ is not counted in the rule just explained.
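For rational transforms, the rightmost-pole rule is easy to check by machine: the finite poles are the roots of the denominator polynomial, and the ROC boundary is their largest real part. A short NumPy sketch of ours, using the denominator of F̂(s) from Example 11.2:

```python
import numpy as np

# Denominator of F(s) = -1/((s + 2)(s + 1)), expanded: s^2 + 3s + 2
poles = np.roots([1, 3, 2])
roc_boundary = max(poles.real)          # ROC: Re{s} > rightmost finite pole
print(np.sort(poles.real))              # [-2. -1.]
print(round(float(roc_boundary), 6))    # -1.0
```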

Example 11.3
Determine the Laplace transform Ĥ (s) of signal h(t) = δ(t).
Solution In this case, using the sifting property of the impulse,

Ĥ(s) = ∫_{0⁻}^{∞} δ(t)e^{−st} dt = e^{−s·0} = 1.

Because we did not invoke any constraint on s in calculating the Laplace transform, the ROC is the entire s-plane. Notice that in this case the Laplace transform has no pole. Thus the rule that the ROC is the region to the right of the rightmost pole holds here in a trivial way.

Example 11.4
Using the derivative of

δ(t) ∗ e^{st} = e^{st},

determine the Laplace transform of δ′(t), the impulse response of a differentiator.

Solution Differentiating δ(t) ∗ e^{st} = e^{st} on both sides (with the help of the derivative property of convolution) we find that

δ′(t) ∗ e^{st} = se^{st},

which can be rewritten as

∫_{−∞}^{∞} δ′(τ)e^{s(t−τ)} dτ = se^{st}.

Evaluating both sides at t = 0 we find that

∫_{−∞}^{∞} δ′(τ)e^{−sτ} dτ = s,

which shows that the Laplace transform of δ′(t) is simply s (as given in Table 11.1). Since we did not invoke any constraint on s while calculating the Laplace transform, the ROC is the entire s-plane, even though there is a pole at infinity (i.e., at s = ∞).

Example 11.5
Given that the Laplace transform of h(t) = e^t u(t) is Ĥ(s) = 1/(s − 1), show that the Laplace transform of f(t) = te^t u(t) is F̂(s) = 1/(s − 1)².

1  δ(t) ↔ 1
2  e^{pt}u(t) ↔ 1/(s − p)
3  te^{pt}u(t) ↔ 1/(s − p)²
4  t^n e^{pt}u(t) ↔ n!/(s − p)^{n+1}
5  cos(ω_o t)u(t) ↔ s/(s² + ω_o²)
6  e^{−αt}cos(ω_d t)u(t) ↔ (s + α)/((s + α)² + ω_d²)
7  δ′(t) ↔ s
8  u(t) ↔ 1/s
9  tu(t) ↔ 1/s²
10 t^n u(t) ↔ n!/s^{n+1}
11 sin(ω_o t)u(t) ↔ ω_o/(s² + ω_o²)
12 e^{−αt}sin(ω_d t)u(t) ↔ ω_d/((s + α)² + ω_d²)

Table 11.1 Laplace transform pairs h(t) ↔ Ĥ(s) involving frequently encountered causal signals—α, ω_o, and ω_d stand for arbitrary real constants, n for nonnegative integers, and p denotes an arbitrary complex constant.
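Entries in the table can be spot-checked numerically. As one illustration of ours (not from the text), pair 5 evaluated at the real point s = 1 with ω_o = 2 gives s/(s² + ω_o²) = 0.2, and a truncated numerical version of the defining integral agrees:

```python
import math

def laplace_num(f, s, T=30.0, n=300_000):
    # Midpoint Riemann sum for the integral of f(t) e^{-st} over [0, T].
    dt = T / n
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(n))

w0, s = 2.0, 1.0
lhs = laplace_num(lambda t: math.cos(w0 * t), s)  # numerical transform of cos(w0 t)u(t)
rhs = s / (s**2 + w0**2)                          # table pair 5: s/(s^2 + w0^2)
print(abs(lhs - rhs))                             # small
```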

Solution We are told that

∫_{0}^{∞} e^t e^{−st} dt = 1/(s − 1),

which holds for {s : σ > 1}. Taking the derivative of the above expression with respect to s, we find that

(d/ds) ∫_{0}^{∞} e^t e^{−st} dt = −∫_{0}^{∞} te^t e^{−st} dt

on the left, and

(d/ds) 1/(s − 1) = −1/(s − 1)²

on the right. Thus,

∫_{0}^{∞} te^t e^{−st} dt = 1/(s − 1)²,

implying that the Laplace transform of f(t) = te^t u(t) is F̂(s) = 1/(s − 1)². The ROC is {s : σ > 1} since our calculation has not required any additional constraints on s.

The Laplace transforms calculated in the examples so far all are special cases
from a list of important Laplace transform pairs shown in Table 11.1. Notice that
the list contains only causal signals and in each case the ROC can be deduced from

corresponding pole locations, which can, in turn, be directly inferred from the form of the Laplace transform. For example,

cos(ω_o t)u(t) ↔ s/(s² + ω_o²)

has a pair of poles at s = ±jω_o on the vertical axis of the s-plane, and, as a consequence, the ROC of this particular Laplace transform coincides with the right half of the s-plane, which we will refer to as the RHP (right half-plane).
Also, notice that the poles of Laplace transforms of absolutely integrable signals included in Table 11.1 all are confined to the left half-plane (LHP). For example, the pole s = p for e^{pt}u(t) is in the LHP if and only if p < 0 and the signal is absolutely integrable. The same is true with the pole s = p for te^{pt}u(t), etc. This detail is, of course, not a coincidence. It is true more generally, because if a signal h(t) is absolutely integrable and causal, then its Fourier transform integral is guaranteed to converge to a bounded H(ω) = Ĥ(jω)—this requires that all poles of Ĥ(s) (if any) be located within the LHP (as in Figure 11.2) so that the vertical axis of the s-plane, where s equals jω, is contained within the ROC.⁴
Remembering that BIBO stable systems must have absolutely integrable impulse response functions, it is no surprise that the BIBO stability criterion from Chapter 10 can be restated as:

An LTIC system h(t) ↔ Ĥ(s) is BIBO stable if and only if its transfer function Ĥ(s) has all of its poles in the LHP.

This alternative test for BIBO stability holds for any LTIC system with an Ĥ(s) that is a rational function written in minimal form (a polynomial in s divided by a polynomial in s, where all possible cancellations of terms between the numerator and denominator have been performed), giving rise to poles at distinct locations, as in the examples above. A proof of this alternative stability test is accomplished by simply writing Ĥ(s) in a partial fraction expansion via the method described in Section 11.2.

Example 11.6
Using Table 11.1 and the version of the BIBO stability criterion stated above, determine whether the LTIC systems with the following impulse response functions are BIBO stable:

h_a(t) = u(t)
h_b(t) = e^{−t}u(t) + e^{2t}u(t)
h_c(t) = e^{−t}cos(t)u(t)
h_d(t) = sin(2t)u(t)
h_e(t) = e^{−t}u(t) + h_c(t)
h_f(t) = δ′(t)

⁴ A pole at s = ∞ is no exception to this rule, because the corresponding Ĥ(s) = s does not lead to a bounded H(ω) = Ĥ(jω) as required by an absolutely integrable h(t).

Solution Using Table 11.1, we find that the transfer function that corresponds to impulse response

h_a(t) = u(t)

is

Ĥ_a(s) = 1/s.

This transfer function has a pole at s = 0, which is just outside the LHP. Thus, the system is not BIBO stable, which of course is consistent with the fact that h_a(t) = u(t) is not absolutely integrable.

For

h_b(t) = e^{−t}u(t) + e^{2t}u(t),

we have

Ĥ_b(s) = 1/(s + 1) + 1/(s − 2) = (2s − 1)/((s + 1)(s − 2)).

This transfer function has two poles, one at s = −1 within the LHP, and another at s = 2 outside the LHP. Therefore, the system is not BIBO stable.

The system

h_c(t) = e^{−t}cos(t)u(t)

is BIBO stable because the poles of its transfer function

Ĥ_c(s) = (s + 1)/((s + 1)² + 1) = (s + 1)/((s + 1 + j)(s + 1 − j)),

located at s = −1 ± j, are within the LHP. The pole locations and surface plot of |Ĥ_c(s)| for this BIBO stable system are shown in Figures 11.3a and 11.3b. We refer to locations where Ĥ_c(s) = 0 as "zeros" of the transfer function; Figure 11.3a shows an "O" that marks the location of the zero of Ĥ_c(s). Finally, Figure 11.3c shows a plot of |Ĥ_c(jω)| = |H_c(ω)| as a function of ω, which is the magnitude of the frequency response of the system.

The poles of

Ĥ_d(s) = 2/(s² + 4) = 2/((s + j2)(s − j2))

are at s = ±j2, just outside the LHP; therefore the system is not BIBO stable.

Figure 11.3 (a) The ROC (shaded region), pole locations (×), and the zero location (O) of Ĥ_c(s) = (s + 1)/((s + 1 + j)(s + 1 − j)), (b) a surface plot of |Ĥ_c(s)|, and (c) a line plot of |Ĥ_c(jω)| versus ω representing the magnitude of the frequency response of system Ĥ_c(s).
All three poles of

Ĥ_e(s) = 1/(s + 1) + Ĥ_c(s),

at s = −1 and s = −1 ± j, are within the LHP. Therefore the system is BIBO stable.

Finally, for

h_f(t) = δ′(t)

the system transfer function is

Ĥ_f(s) = s

(see Example 11.4). Since |Ĥ_f(s)| → ∞ as |s| → ∞, this transfer function has poles at infinity to the right and left of the s-plane. A pole at s = +∞ obviously is outside the LHP, and hence the system should not be BIBO stable. This conclusion checks with our earlier observation that a differentiator described by an impulse response δ′(t) cannot be BIBO stable.
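For rational transfer functions, the pole test used in Example 11.6 amounts to checking the signs of the real parts of the denominator's roots. A small NumPy sketch of ours (the expanded denominators of Ĥ_b, Ĥ_c, and Ĥ_d are worked out in the comments):

```python
import numpy as np

def bibo_stable(den):
    # H(s) = B(s)/P(s) in minimal rational form is BIBO stable iff every root
    # of P(s) lies strictly in the LHP; a small tolerance guards against
    # roundoff for poles that sit exactly on the vertical axis.
    return bool(np.all(np.roots(den).real < -1e-12))

print(bibo_stable([1, -1, -2]))   # H_b: (s + 1)(s - 2) = s^2 - s - 2  -> False
print(bibo_stable([1, 2, 2]))     # H_c: (s + 1)^2 + 1  = s^2 + 2s + 2 -> True
print(bibo_stable([1, 0, 4]))     # H_d: s^2 + 4, poles at +/- j2      -> False
```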

Poles at infinity, as seen in the previous and upcoming examples, are known as hidden poles.

Example 11.7
Determine the Laplace transform Q̂(s) of the causal signal

q(t) = rect(t − 1/2) = u(t) − u(t − 1).

Solution

Q̂(s) = ∫_{0⁻}^{∞} q(t)e^{−st} dt = ∫_{0}^{1} e^{−st} dt = e^{−st}/(−s) |_{t=0}^{1} = (e^{−s·1} − e^{−s·0})/(−s) = (1 − e^{−s})/s,

except in the limit as σ = Re{s} → −∞. Hence, in this case the ROC is described by {s : σ = Re{s} > −∞}.

Note that Q̂(s) has a hidden pole at s = −∞ in the LHP. Also note that s = 0 is not a pole, since, using l'Hospital's rule,

lim_{s→0} Q̂(s) = lim_{s→0} [(d/ds)(1 − e^{−s})]/[(d/ds)s] = lim_{s→0} e^{−s}/1 = 1

is finite.
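Both claims in Example 11.7 can be sanity-checked numerically. The sketch below is our own illustration: the value s = 2 is arbitrary, and math.expm1 evaluates the small-s limit accurately:

```python
import math

s = 2.0
# Midpoint-rule integral of rect(t - 1/2) e^{-st}: the integrand is nonzero
# only on 0 < t < 1.
n = 100_000
dt = 1.0 / n
Q_num = dt * sum(math.exp(-s * (k + 0.5) * dt) for k in range(n))
Q_formula = (1 - math.exp(-s)) / s
print(abs(Q_num - Q_formula))        # small

# s = 0 is not a pole: (1 - e^{-s})/s -> 1 as s -> 0 (l'Hospital)
print(-math.expm1(-1e-8) / 1e-8)     # approximately 1.0
```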

11.1.2 Properties of the Laplace transform

Since the Laplace transform operation on a signal f (t) ignores the portion of f (t) for
t < 0, only causal signals f (t) can be expected to form unique transform pair relations
with F̂ (s), such as those listed in Table 11.1. In fact, if f (t) is not causal, then its
Laplace transform will match the Laplace transform of the causal signal f (t)u(t).
For example, the noncausal signal

f(t) = e^t u(t + 2),

shown in Figure 11.4a, shares the same Laplace transform, 1/(s − 1), with the causal signal

g(t) = e^t u(t) = f(t)u(t)

shown in Figure 11.4b. Likewise, the Laplace transform of noncausal f (t) = δ(t +
1) + 2δ(t) is the same as the Laplace transform of causal g(t) = 2δ(t), namely 2.

Figure 11.4 (a) Noncausal f(t) = e^t u(t + 2) and (b) causal g(t) = f(t)u(t).

We shall indicate Laplace transform pair relationships involving causal signals using double-headed arrows, as in

e^t u(t) ↔ 1/(s − 1),

and use single-headed arrows to indicate the Laplace transform of noncausal signals, as in

e^t u(t + 1) → 1/(s − 1).

Keep in mind the distinction between → and ↔ in reading and interpreting Table 11.2, which lists some of the general properties of the Laplace transform. Properties given in terms of → are applicable to both causal and noncausal signals, while those given in terms of ↔ apply only to causal signals. For instance, the time-delay property (item 4 in Table 11.2) is stated using ↔, and therefore it applies only to causal f(t). The next example illustrates use of the time-delay property when calculating the Laplace transform of a delayed causal signal.

Example 11.8
Given that (according to Table 11.1)

f(t) = tu(t) ↔ F̂(s) = 1/s²,

find the Laplace transform P̂(s) of

p(t) = (t − 1)u(t − 1).

Signals f(t) and p(t) are shown in Figures 11.5a and 11.5b, respectively.

Solution We first note that p(t) = f(t − 1) is a delayed version of the causal f(t), with a positive delay of t_o = 1 s (as seen in Figure 11.5). Thus, the time-delay property given in Table 11.2 is applicable, and

P̂(s) = F̂(s)e^{−st_o} = e^{−s}/s².
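As a numerical cross-check of this result (our own, with the point s = 1 chosen arbitrarily), the one-sided transform of p(t) = (t − 1)u(t − 1) should equal e^{−s}/s²:

```python
import math

def laplace_num(f, s, T=40.0, n=400_000):
    # Midpoint Riemann sum for the integral of f(t) e^{-st} over [0, T].
    dt = T / n
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(n))

s = 1.0
p = lambda t: (t - 1.0) if t >= 1.0 else 0.0   # delayed ramp (t - 1)u(t - 1)
P_num = laplace_num(p, s)
print(abs(P_num - math.exp(-s) / s**2))        # small: matches e^{-s}/s^2
```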

1  Multiplication. Condition: f(t) → F̂(s), constant K. Property: Kf(t) → K F̂(s)
2  Addition. Condition: f(t) → F̂(s), g(t) → Ĝ(s), · · ·. Property: f(t) + g(t) + · · · → F̂(s) + Ĝ(s) + · · ·
3  Time scaling. Condition: f(t) → F̂(s), real a > 0. Property: f(at) → (1/a) F̂(s/a)
4* Time delay. Condition: f(t) ↔ F̂(s), t_o ≥ 0. Property: f(t − t_o) ↔ F̂(s)e^{−st_o}
5  Frequency shift. Condition: f(t) → F̂(s). Property: f(t)e^{s_o t} → F̂(s − s_o)
6  Time derivative. Condition: differentiable f(t) → F̂(s). Property: f′(t) → s F̂(s) − f(0⁻); f″(t) → s² F̂(s) − s f(0⁻) − f′(0⁻); · · ·; f^{(n)}(t) → s^n F̂(s) − · · · − f^{(n−1)}(0⁻)
7  Time integration. Condition: f(t) → F̂(s). Property: ∫_{0⁻}^{t} f(τ) dτ → (1/s) F̂(s)
8  Frequency derivative. Condition: f(t) → F̂(s). Property: −t f(t) → (d/ds) F̂(s)
9* Time convolution. Condition: h(t) ↔ Ĥ(s), f(t) ↔ F̂(s). Property: h(t) ∗ f(t) ↔ Ĥ(s)F̂(s)
10 Frequency convolution. Condition: f(t) → F̂(s), g(t) → Ĝ(s). Property: f(t)g(t) → (1/2πj) F̂(s) ∗ Ĝ(s)
11 Poles. Condition: f(t) → F̂(s). Definition: values of s such that |F̂(s)| = ∞
12 ROC. Condition: f(t) → F̂(s). Definition: portion of the s-plane to the right of the rightmost pole ≠ ∞
13* Fourier transform. Condition: f(t) ↔ F̂(s). Property: F(ω) = F̂(jω) if and only if the ROC includes s = jω
14 Final value. Condition: poles of s F̂(s) in LHP. Property: f(∞) = lim_{s→0} s F̂(s)
15 Initial value. Condition: existence of the limit. Property: f(0⁺) = lim_{s→∞} s F̂(s)

Table 11.2 Important properties and definitions for the one-sided Laplace transform. Properties marked by * in the first column hold only for causal signals.
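Entries 14 and 15 of Table 11.2 can be illustrated numerically. For F̂(s) = 1/(s(s + 1)), pairs 2 and 8 of Table 11.1 (after a partial fraction expansion) give f(t) = (1 − e^{−t})u(t); the example and the limiting values below are our own illustration:

```python
# Final- and initial-value checks for F(s) = 1/(s(s + 1)), whose inverse
# transform is f(t) = (1 - e^{-t})u(t). Here sF(s) = 1/(s + 1) has its
# single pole at s = -1, inside the LHP, so entry 14 applies.
sF = lambda s: 1.0 / (s + 1.0)
final_value = sF(1e-9)     # s -> 0:        f(infinity) = 1
initial_value = sF(1e9)    # s -> infinity: f(0+) = 0
print(round(final_value, 6), round(initial_value, 6))   # 1.0 0.0
```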

Figure 11.5 (a) Causal ramp function f(t) = tu(t), (b) delayed ramp p(t) = (t − 1)u(t − 1), and (c) noncausal q(t) = (t + 1)u(t + 1).

Example 11.9
Determine the Laplace transform Q̂(s) of

q(t) = (t + 1)u(t + 1),

plotted in Figure 11.5c.

Solution Clearly, the signal q(t) = f(t + 1) shown in Figure 11.5c is an advanced, rather than a delayed, version of f(t), and so we cannot proceed as in Example 11.8. Instead, we note that the Laplace transform of the noncausal q(t) must be the same as the transform of the causal q(t)u(t), which can be written as

q(t)u(t) = (t + 1)u(t + 1)u(t) = (t + 1)u(t) = tu(t) + u(t).

Using Table 11.1 as well as the addition property from Table 11.2, we find that

q(t)u(t) ↔ Q̂(s) = 1/s² + 1/s.

The previous two examples illustrated how the properties in Table 11.2 can be
used to deduce new Laplace transforms from ones already known. Here is another
example of a similar type.

Example 11.10
Using Table 11.2, confirm that

f(t) = e^{−αt}cos(ω_d t)u(t) ↔ F̂(s) = (s + α)/((s + α)² + ω_d²).

Solution We first note that

f(t) = e^{−αt}cos(ω_d t)u(t) = e^{−αt} ((e^{jω_d t} + e^{−jω_d t})/2) u(t) = (e^{−(α−jω_d)t}/2) u(t) + (e^{−(α+jω_d)t}/2) u(t).

Therefore, using the multiplication and addition properties from Table 11.2, as well as

e^{pt}u(t) ↔ 1/(s − p)

from Table 11.1 (with p = −(α ± jω_d)), we find

F̂(s) = (1/2) · 1/(s + α − jω_d) + (1/2) · 1/(s + α + jω_d)
     = (1/2) {1/((s + α) − jω_d) + 1/((s + α) + jω_d)}
     = (1/2) · ((s + α) + jω_d + (s + α) − jω_d)/((s + α)² + ω_d²)
     = (s + α)/((s + α)² + ω_d²)

as desired, which is entry 6 in Table 11.1.

Verification of the time-delay property: Assume f(t) is causal. Then, by definition, the Laplace transform of f(t − t_o), t_o ≥ 0, is

∫_{t=0⁻}^{∞} f(t − t_o)e^{−st} dt = ∫_{t=t_o⁻}^{∞} f(t − t_o)e^{−st} dt = ∫_{τ=0⁻}^{∞} f(τ)e^{−s(τ+t_o)} dτ = e^{−st_o} ∫_{τ=0⁻}^{∞} f(τ)e^{−sτ} dτ = e^{−st_o} F̂(s).

Notice that we used a change of variables τ = t − t_o with dτ = dt for a fixed t_o in order to obtain the result. Also note that the exchange of the lower integration limit from 0⁻ to t_o⁻ is justified only for t_o ≥ 0 and for causal f(t). Therefore, the time-delay property is guaranteed to hold only for causal f(t) and t_o ≥ 0, as noted in Table 11.2.

Example 11.11
Using the time-delay property, determine the Laplace transform of

rect(t − 1/2).

Solution Since

rect(t − 1/2) = u(t) − u(t − 1),

where the second term is a delayed version of the causal first term, we have

u(t) − u(t − 1) ↔ 1/s − (1/s)e^{−s}.

Thus, it follows that, in agreement with Example 11.7,

rect(t − 1/2) ↔ (1/s)(1 − e^{−s}).

The frequency-derivative property in Table 11.2 can be verified using a procedure similar to that used to prove the frequency-derivative property of the Fourier transform. We leave this to the reader. The next example shows how to apply the property.

Example 11.12
Using the frequency-derivative property, show that

e^{pt}u(t) ↔ 1/(s − p)

(from Table 11.1) implies

te^{pt}u(t) ↔ 1/(s − p)².

Solution According to the frequency-derivative property, the Laplace transform of te^{pt}u(t) must be minus the derivative with respect to s of the Laplace transform 1/(s − p) of e^{pt}u(t), giving

−(d/ds) 1/(s − p) = −(d/ds)(s − p)^{−1} = 1/(s − p)².

Hence,

te^{pt}u(t) ↔ 1/(s − p)².

Verification of the time-derivative property: By definition, the Laplace transform of df/dt is the integral ∫_{0⁻}^{∞} (df/dt)e^{−st} dt, for all values of s for which the integral converges (i.e., within its ROC). Using integration by parts,

∫_{0⁻}^{∞} (df/dt)e^{−st} dt = f(t)e^{−st} |_{0⁻}^{∞} − ∫_{0⁻}^{∞} f(t)(d/dt)(e^{−st}) dt = lim_{t→∞} f(t)e^{−st} − f(0⁻) + s ∫_{0⁻}^{∞} f(t)e^{−st} dt.

Since ∫_{0⁻}^{∞} f(t)e^{−st} dt ≡ F̂(s), we have

df/dt ≡ f′(t) → s F̂(s) − f(0⁻) + lim_{t→∞} f(t)e^{−st}.

Now,

lim_{t→∞} f(t)e^{−st} = 0

for all values of s in the ROC for F̂(s); otherwise the Laplace integral F̂(s) cannot converge. Hence, for s in the ROC of F̂(s),

f′(t) → s F̂(s) − f(0⁻),

and by induction

f^{(n)}(t) → s^n F̂(s) − s^{n−1} f(0⁻) − · · · − s f^{(n−2)}(0⁻) − f^{(n−1)}(0⁻).

Note that for causal f(t), we have the simpler time-derivative rule

f^{(n)}(t) → s^n F̂(s) for all n ≥ 0.

Example 11.13
Given that

f(t) = e^{2t} → F̂(s) = 1/(s − 2),

use the time-derivative property to determine the Laplace transforms of

df/dt = 2e^{2t}

and

d²f/dt² = 4e^{2t}.

Note that f(t) is a noncausal signal.

Solution Since

F̂(s) = 1/(s − 2)

and

f(0⁻) = e^{2·0} = e⁰ = 1,

we have, using the formula s F̂(s) − f(0⁻) for the Laplace transform of the derivative of f(t),

(d/dt)e^{2t} = 2e^{2t} → s · 1/(s − 2) − 1 = (s − (s − 2))/(s − 2) = 2/(s − 2).

Next, we notice that

f′(0⁻) = 2e^{2t} |_{t=0⁻} = 2.

Hence, using the formula s² F̂(s) − s f(0⁻) − f′(0⁻) for the Laplace transform of the second derivative of f(t), we have

(d²/dt²)e^{2t} = 4e^{2t} → s² · 1/(s − 2) − s · 1 − 2 = (s² − s(s − 2) − 2(s − 2))/(s − 2) = 4/(s − 2).

Of course, the results 2e^{2t} → 2/(s − 2) and 4e^{2t} → 4/(s − 2) could have been obtained by inspection using the multiplication rule and the fact that e^{2t} → 1/(s − 2).
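The f(0⁻) bookkeeping in Example 11.13 can be verified numerically; the evaluation point s = 3, inside the ROC σ > 2, is an arbitrary choice of ours:

```python
import math

def laplace_num(f, s, T=40.0, n=400_000):
    # Midpoint Riemann sum for the one-sided integral of f(t) e^{-st} on [0, T].
    dt = T / n
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(n))

s = 3.0
F = laplace_num(lambda t: math.exp(2 * t), s)       # should be 1/(s - 2) = 1
D = laplace_num(lambda t: 2 * math.exp(2 * t), s)   # transform of df/dt
print(abs(F - 1 / (s - 2)))       # small
print(abs(D - (s * F - 1.0)))     # small: sF(s) - f(0^-), with f(0^-) = 1
```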

Verification of the convolution property: Table 11.2 states that h(t) ∗ f(t) ↔ Ĥ(s)F̂(s) for causal h(t) and f(t). To prove this, write the Laplace transform of the convolution of h(t) and f(t) as

∫_{0⁻}^{∞} {h(t) ∗ f(t)} e^{−st} dt = ∫_{t=0⁻}^{∞} {∫_{τ=−∞}^{∞} h(τ)f(t − τ) dτ} e^{−st} dt
= ∫_{τ=−∞}^{∞} h(τ) {∫_{t=0⁻}^{∞} f(t − τ)e^{−st} dt} dτ
= ∫_{τ=0⁻}^{∞} h(τ) {∫_{t=0⁻}^{∞} f(t − τ)e^{−st} dt} dτ
= ∫_{τ=0⁻}^{∞} h(τ)e^{−sτ} F̂(s) dτ
= F̂(s) ∫_{τ=0⁻}^{∞} h(τ)e^{−sτ} dτ = F̂(s)Ĥ(s),

which proves the result. Notice that the change of the bottom integration limit in line 3 from −∞ to 0⁻ requires that h(t) be causal. Also, use of the time-shift property between lines 3 and 4 requires that f(t) be causal and τ ≥ 0.

Example 11.14
Given that

f(t) = e^{−t}u(t),

determine y(t) ≡ f(t) ∗ f(t) using the time-convolution property.

Solution We have

f(t) = e^{−t}u(t) ↔ F̂(s) = 1/(s + 1).

Because f(t) is causal, we can apply the time-convolution property to yield

Ŷ(s) = F̂(s)F̂(s) = 1/(s + 1)².

But, according to Table 11.1,

te^{−t}u(t) ↔ 1/(s + 1)²,

and so we deduce that

y(t) = te^{−t}u(t).

Example 11.15
If f(t) = e^{−t}, can we take advantage of the time-convolution property to calculate y(t) ≡ f(t) ∗ f(t)?

Solution Because the given f(t) is not causal, the answer is "no"; in particular,

e^{−t} ∗ e^{−t} ≠ te^{−t}.

Try computing the convolution e^{−t} ∗ e^{−t} directly in the time domain to see that it does not equal either te^{−t} or te^{−t}u(t).
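The causal case of Example 11.14 shows up cleanly in a numerical convolution: sampling f(t) = e^{−t}u(t) on t ≥ 0 and convolving it with itself approximates te^{−t}u(t), matching the transform-domain prediction. The grid spacing and time horizon below are our own choices:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
f = np.exp(-t)                             # samples of f(t) = e^{-t} u(t)
y = np.convolve(f, f)[:t.size] * dt        # Riemann-sum approximation of (f*f)(t)
err = np.max(np.abs(y - t * np.exp(-t)))   # compare against t e^{-t} u(t)
print(err < 0.01)                          # True
```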

11.2 Inverse Laplace Transform and PFE

Causal signals and their Laplace transforms form unique pairs, just like the Fourier transform pairs familiar from earlier chapters. Hence, we can associate with each Laplace transform Ĥ(s) a unique causal signal h(t) such that

h(t) ↔ Ĥ(s),

where h(t) is said to be the inverse Laplace transform of Ĥ(s). For instance, the inverse Laplace transform⁵ of Ĥ(s) = 1/(s + 1)² is the causal signal h(t) = te^{−t}u(t). Inverse Laplace transforms for elementary cases, such as 1/s^n, 1/(s − p), 1/(s − p)², etc., can be directly identified by using the pairs in Table 11.1. More complicated cases call for the use of systematic algebraic procedures discussed in this section, as well as properties in Table 11.2.

The most important class of Laplace transforms encountered in LTIC system theory is the ratio of polynomials. In fact, as we will see in Section 11.3, transfer functions of all lumped-element LTIC circuits have this form, namely

Ĥ(s) = B(s)/P(s),

where both B(s) and P(s) denote polynomials in s. As indicated earlier, such transfer functions and/or Laplace transforms are said to be rational functions or to have rational form. If the denominator polynomial P(s) has degree n ≥ 1, then the rational function can be written as

Ĥ(s) = B(s)/((s − p₁)(s − p₂) · · · (s − pₙ)).
⁵ Uniqueness of and a formula for the inverse Laplace transform for causal signals: The Laplace transform Ĥ(s) = Ĥ(σ + jω) of a causal signal h(t) is also the Fourier transform of the causal signal e^{−σt}h(t), with σ = Re{s} selected so that s is in the ROC of Ĥ(s). Thus, the uniqueness of the inverse Fourier transform

e^{−σt}h(t) = (1/2π) ∫_{−∞}^{∞} Ĥ(σ + jω)e^{jωt} dω

implies the uniqueness of the inverse Laplace transform

h(t) ≡ (e^{σt}/2π) ∫_{−∞}^{∞} Ĥ(σ + jω)e^{jωt} dω.

The inverse Laplace transform formula can be expressed more compactly as

h(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} Ĥ(s)e^{st} ds,

which is a line integral in the complex plane, within the ROC, along a vertical line intersecting the horizontal axis at Re{s} = σ. Although this formula always is available to us, we generally will strive for simpler methods of computing inverse Laplace transforms.

Distinct real poles: Ĥ(s) = 1/(s² + 3s + 2) = 1/((s + 1)(s + 2)). Expanded form: 1/(s + 1) + (−1)/(s + 2). h(t) = (e^{−t} − e^{−2t})u(t)
Repeated poles: Ĥ(s) = s/(s + 1)². Expanded form: 1/(s + 1) + (−1)/(s + 1)². h(t) = (e^{−t} − te^{−t})u(t)
Complex conjugate pair: Ĥ(s) = s/(s² + 1) = s/((s + j)(s − j)). Expanded form: (1/2)/(s + j) + (1/2)/(s − j). h(t) = (1/2)(e^{−jt} + e^{jt})u(t)
Mixed: Ĥ(s) = (s + 1)/(s³ − s²) = (s + 1)/(s²(s − 1)). Expanded form: (−2)/s + (−1)/s² + 2/(s − 1). h(t) = (−2 − t + 2e^t)u(t)

Table 11.3 Four examples of proper rational expressions with different types of poles are shown in the second column. In proper rational form, the degree of the denominator polynomial exceeds the degree of the numerator polynomial. The third column of the table shows the same proper rational-form expressions on the left rearranged as a weighted sum of elementary terms, known as a partial fraction expansion (PFE). Finally, the last column shows the inverse Laplace transform in each case.

The second column in Table 11.3 lists a few examples of such rational Laplace transforms, where the poles p₁, p₂, · · ·, pₙ correspond to the roots of the denominator polynomial P(s). In the following discussion we will assume that the degree of polynomial P(s) is larger than the degree of B(s)—just like in the examples shown in Table 11.3—so that there are no poles⁶ of Ĥ(s) at infinity. When that condition holds, a rational form is said to be proper.

Our strategy for calculating inverse Laplace transforms of proper rational functions will be to rewrite the Laplace transform as a sum of simple terms, called a partial fraction expansion (PFE), and to then identify the inverse Laplace transform of each simple term by inspection. Table 11.3 illustrates the strategy, where the third column shows the Laplace transforms given on the left rearranged as PFEs (i.e., as weighted sums of elementary terms 1/(s − p), 1/(s − p)²). It then becomes an easy matter to write h(t) in the last column. The subsections below explain how to rewrite a rational function as a PFE.

11.2.1 PFE for distinct poles

When a Laplace transform in proper rational form has distinct poles, as in the first row of Table 11.3, then finding the PFE consists of finding coefficients K₁, K₂, · · ·, Kₙ such that

B(s)/((s − p₁)(s − p₂) · · · (s − pₙ)) = K₁/(s − p₁) + K₂/(s − p₂) + · · · + Kₙ/(s − pₙ)

is true for all s.

⁶ We will assume here that Ĥ(s) is written in minimal form, so that the factored B(s) = b_o(s − z₁)(s − z₂) · · · (s − zₘ) does not contain a factor that is identical to some s − pₖ from the denominator.

This can be done most easily by repeating the following procedure for each unknown coefficient on the right: To determine K₁, for instance, we multiply both sides of the expression above by s − p₁, yielding

B(s)/((s − p₂) · · · (s − pₙ)) = K₁ + K₂(s − p₁)/(s − p₂) + · · · + Kₙ(s − p₁)/(s − pₙ),

and then evaluate this expression at s = p₁. The right-hand side then becomes just K₁, matching, on the left, effectively what you would see after "covering up" the original s − p₁ factor in the denominator (e.g., with your THUMB) evaluated at s = p₁; i.e.,

B(s)/(THUMB (s − p₂) · · · (s − pₙ)) |_{s=p₁} = K₁.

For example, given

1/((s + 1)(s + 2)) = K₁/(s + 1) + K₂/(s + 2),

use of the cover-up method explained above leads to

K₁ = 1/(THUMB (s + 2)) |_{s=−1} = 1

and

K₂ = 1/((s + 1) THUMB) |_{s=−2} = −1,

confirming the PFE in the first row of Table 11.3.

The cover-up method can be used in exactly the same way when the poles are complex valued, so long as they are distinct. This is illustrated in the next example, which works out the PFE pertinent to the third row of Table 11.3.

Example 11.16
Using the cover-up method, find the PFE of the transfer function

Ĥ(s) = s/(s² + 1),

and then find h(t).

Solution First, we factor the denominator of Ĥ(s) to write

Ĥ(s) = s/((s + j)(s − j))

so that the poles are visible. Since the poles (at s = ±j) are distinct, the PFE has the form:

s/((s + j)(s − j)) = K₁/(s + j) + K₂/(s − j).

Now, using the cover-up idea,

K₁ = s/(THUMB (s − j)) |_{s=−j} = −j/(−2j) = 1/2

and

K₂ = s/((s + j) THUMB) |_{s=j} = j/(2j) = 1/2.

Thus,

Ĥ(s) = (1/2)/(s + j) + (1/2)/(s − j)

as shown in the third row of Table 11.3 and, consequently,

h(t) = (1/2)(e^{−jt} + e^{jt})u(t) = cos(t)u(t).

With a little practice, one can apply the cover-up method "in place," as illustrated in the following example.

Example 11.17
We write

Ĥ(s) = (s + 1)/(s(s − 1)) = [(0 + 1)/(THUMB(0 − 1))]/s + [(1 + 1)/(1 · THUMB)]/(s − 1) = (−1)/s + 2/(s − 1).

Thus,

h(t) = −u(t) + 2e^t u(t).
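The cover-up evaluations are easy to mirror in code. The sketch below (our own check) redoes the distinct-pole expansion of 1/((s + 1)(s + 2)) and then verifies the result at an arbitrary test point:

```python
# Cover-up method for F(s) = 1/((s + 1)(s + 2))
K1 = 1.0 / (-1.0 + 2.0)    # cover (s + 1), evaluate the rest at s = -1
K2 = 1.0 / (-2.0 + 1.0)    # cover (s + 2), evaluate the rest at s = -2
print(K1, K2)              # 1.0 -1.0

# The PFE must reproduce F(s) at every s; spot-check one arbitrary point.
s = 0.7
residual = K1 / (s + 1) + K2 / (s + 2) - 1 / ((s + 1) * (s + 2))
print(abs(residual) < 1e-12)   # True
```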

11.2.2 PFE with repeated poles

If a pole of a proper rational function is repeated, as in

(b₀ s^m + · · · + b_m)/((s − p₁)(s − p₁) · · · (s − pₙ)),

then there is no choice of PFE coefficients Kᵢ that will satisfy

(b₀ s^m + · · · + b_m)/((s − p₁)(s − p₁) · · · (s − pₙ)) = K₁/(s − p₁) + K₂/(s − p₁) + · · · + Kₙ/(s − pₙ).

For example, there are no K₁, K₂, and K₃ such that

1/((s + 1)(s + 1)(s + 2)) = K₁/(s + 1) + K₂/(s + 1) + K₃/(s + 2) = (K₁ + K₂)/(s + 1) + K₃/(s + 2).

However, by modifying the form of the PFE, we still can express a rational function as a sum of simple terms. In the above example, we can write

1/((s + 1)(s + 1)(s + 2)) = K₁/(s + 1)² + K₂/(s + 1) + K₃/(s + 2)

with K₁ = 1, K₂ = −1, and K₃ = 1. But what is a simple way to find the Kᵢ for the repeated-pole case?

In general, the PFE of a proper-form rational expression with repeated poles will contain terms such as

Kᵢ/(s − pᵢ)^r + Kᵢ₊₁/(s − pᵢ)^{r−1} + · · · + Kᵢ₊ᵣ₋₁/(s − pᵢ)

associated with poles pᵢ that repeat r times—or have multiplicity r. Simple poles, i.e., nonrepeating poles with multiplicity r = 1, contribute to the expansion in the same way as before. The weighting coefficients for terms involving simple poles, as well as the coefficient of the leading term for each repeated pole (i.e., Kᵢ above), can be determined using the cover-up method. The remaining coefficients are obtained using a strategy illustrated in the next set of examples.

Example 11.18
Find the PFE of
s
F̂ (s) = .
(s + 1)2

Solution Since the pole at s = −1 has a multiplicity of r = 2, we choose
the form of the PFE to be

s/(s + 1)^2 = K1/(s + 1)^2 + K2/(s + 1).

Next, using the cover-up method (where the entire (s + 1)^2 factor is covered
up on the left), we determine K1 as

K1 = s |_{s=−1} = −1.
386 Chapter 11 Laplace Transform, Transfer Function, and LTIC System Response

Thus, the problem is now reduced to finding the weight K2 so that

s/(s + 1)^2 = −1/(s + 1)^2 + K2/(s + 1)

is true for all s. This is accomplished by evaluating the above expression
with any convenient value of s and then solving for K2. For example, using
s = 0 we find

0/(0 + 1)^2 = −1/(0 + 1)^2 + K2/(0 + 1),

which yields K2 = 1. Thus, we have

s/(s + 1)^2 = −1/(s + 1)^2 + 1/(s + 1),
confirming the example in the second row of Table 11.3.
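The same bookkeeping is easy to verify numerically. The minimal Python sketch below (variable names are mine) recovers K2 exactly as in the example, by evaluating the expansion at the convenient point s = 0, and then checks the identity at several other points:

```python
# Repeated-pole PFE from the example: s/(s+1)^2 = K1/(s+1)^2 + K2/(s+1).
# K1 comes from the cover-up method; K2 from one evaluation at a convenient s.

def F(s):
    return s / (s + 1.0) ** 2

K1 = -1.0                      # cover-up: numerator s evaluated at s = -1
s0 = 0.0                       # any convenient point away from the pole
K2 = (F(s0) - K1 / (s0 + 1.0) ** 2) * (s0 + 1.0)

# Verify the expansion holds at several other points:
ok = all(abs(F(s) - (K1 / (s + 1.0) ** 2 + K2 / (s + 1.0))) < 1e-12
         for s in (1.0, 2.0, -0.5, 10.0))
```

The check reproduces K2 = 1 and confirms the two sides agree away from the pole.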

Example 11.19
Determine the inverse Laplace transform h(t) of
Ĥ(s) = (s + 1)/(s^2(s − 1)).

Solution We have a repeated pole at s = 0, so we begin with

(s + 1)/(s^2(s − 1)) = [(1 + 1)/1^2]/(s − 1) + [(0 + 1)/(0 − 1)]/s^2 + K/s,

where the first two coefficients follow from the cover-up method, giving

(s + 1)/(s^2(s − 1)) = 2/(s − 1) + (−1)/s^2 + K/s.
Next, evaluating this expression at s = −1, we find

0 = 2/(−1 − 1) + (−1)/(−1)^2 + K/(−1),

yielding

K = 2/(−1 − 1) + (−1)/(−1)^2 = −2.

Thus,

Ĥ(s) = 2/(s − 1) + (−1)/s^2 + (−2)/s

and

h(t) = (2e^t − t − 2)u(t)

as in the bottom row of Table 11.3.
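As a numerical cross-check (a sketch of mine, not part of the text), one can recover K by the same substitution at s = −1 and then verify the expansion pointwise:

```python
# PFE of (s + 1)/(s^2 (s - 1)) = K1/(s - 1) + K2/s^2 + K/s from the example.
def lhs(s):
    return (s + 1.0) / (s ** 2 * (s - 1.0))

K1, K2 = 2.0, -1.0            # cover-up values found in the example
s0 = -1.0                     # convenient evaluation point, as in the text
K = (lhs(s0) - K1 / (s0 - 1.0) - K2 / s0 ** 2) * s0

def rhs(s):
    return K1 / (s - 1.0) + K2 / s ** 2 + K / s

ok = all(abs(lhs(s) - rhs(s)) < 1e-12 for s in (2.0, 3.0, 0.5, -2.0))
```

The substitution reproduces K = −2, in agreement with the example.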

11.2.3 Inversion of improper rational expressions


The inversion of improper rational Laplace transforms can be handled in a number
of ways. We will illustrate two approaches to the problem, treating the very simple
case of
Ĥ(s) = s/(s + 1).
This function is improper because the degree of the denominator is not larger than
the degree of the numerator.
Option 1: Think of Ĥ(s) = s/(s + 1) as

Ĥ(s) = s · 1/(s + 1) = s Ĝ(s),

where

g(t) = e^{−t} u(t) ↔ Ĝ(s) = 1/(s + 1).

Then the derivative property of the Laplace transform for causal signals (so
that g(0−) = 0) implies that

h(t) = dg/dt = d/dt (e^{−t} u(t)) = −e^{−t} u(t) + e^{−t} δ(t) = δ(t) − e^{−t} u(t).

Option 2: Think of Ĥ(s) = s/(s + 1) as

Ĥ(s) = (s + 1 − 1)/(s + 1) = 1 − 1/(s + 1).

Since δ(t) ↔ 1 and e^{−t} u(t) ↔ 1/(s + 1), the expression above implies that

h(t) = δ(t) − e^{−t} u(t),

the same result as obtained above.
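Both options can be sanity-checked numerically. The sketch below is mine, not from the text; a crude trapezoidal quadrature with a finite upper limit stands in for the Laplace integral. It confirms that the smooth part e^{−t}u(t) of h(t) transforms to 1/(s + 1), so that Ĥ(s) = 1 − 1/(s + 1) = s/(s + 1):

```python
# Numerically approximate the Laplace integral of e^{-t} u(t) at a real s > 0
# and confirm H(s) = 1 - 1/(s+1) = s/(s+1); the delta(t) term contributes 1.
import math

def laplace_of_decaying_exp(s, T=40.0, n=200000):
    # trapezoidal rule for the integral of e^{-t} e^{-st} dt over [0, T]
    dt = T / n
    total = 0.5 * (1.0 + math.exp(-(s + 1.0) * T))
    for k in range(1, n):
        total += math.exp(-(s + 1.0) * k * dt)
    return total * dt

s = 2.0
H = 1.0 - laplace_of_decaying_exp(s)   # Option 2's closed form is s/(s+1)
```

At s = 2 the approximation agrees with s/(s + 1) = 2/3 to within the quadrature error.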


In both strategies, we first rewrite the improper-form expression in terms of a
proper rational expression and then take advantage of the methods specific for those
two options. The following two examples illustrate further details of the inversion of
improper rational Laplace transforms.

Example 11.20
Find the inverse Laplace transform of the improper rational expression

F̂(s) = (s^2 + 1)/((s + 1)(s + 2)).

Solution Write

F̂(s) = s^2/((s + 1)(s + 2)) + 1/((s + 1)(s + 2)).

So, letting

g(t) ↔ Ĝ(s) = 1/((s + 1)(s + 2))

we have

f(t) = d^2g/dt^2 + g(t),

where

g(t) ↔ [1/(−1 + 2)]/(s + 1) + [1/(−2 + 1)]/(s + 2) = 1/(s + 1) − 1/(s + 2).

Thus,

g(t) = e^{−t} u(t) − e^{−2t} u(t),    dg/dt = −e^{−t} u(t) + 2e^{−2t} u(t),

and

d^2g/dt^2 = e^{−t} u(t) − 4e^{−2t} u(t) + δ(t).
Therefore,

f(t) = δ(t) + 2e^{−t} u(t) − 5e^{−2t} u(t).

Alternatively, we could perform a long division operation with the original form
of F̂ (s) to obtain

F̂(s) = (s^2 + 1)/(s^2 + 3s + 2) = (s^2 + 3s + 2 − (3s + 1))/(s^2 + 3s + 2) = 1 − (3s + 1)/((s + 1)(s + 2)).

Substituting the PFE of the proper second term in F̂(s) gives

F̂(s) = 1 − [(−3 + 1)/(−1 + 2)]/(s + 1) − [(−6 + 1)/(−2 + 1)]/(s + 2) = 1 − (−2)/(s + 1) − 5/(s + 2).
Thus, we obtain

f(t) = δ(t) + 2e^{−t} u(t) − 5e^{−2t} u(t)

as before.
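The two routes can be compared numerically as well; this small check (mine, not from the text) confirms that the expanded form 1 + 2/(s + 1) − 5/(s + 2) reproduces the original F̂(s) away from the poles:

```python
# Compare the improper rational F(s) with its expanded form from the example.
def F(s):
    return (s ** 2 + 1.0) / ((s + 1.0) * (s + 2.0))

def F_expanded(s):
    return 1.0 + 2.0 / (s + 1.0) - 5.0 / (s + 2.0)

ok = all(abs(F(s) - F_expanded(s)) < 1e-12 for s in (0.0, 1.0, 3.0, -0.5))
```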

Example 11.21
Determine the inverse Laplace transform of
Ĝ(s) = s^3/(s^2 + 1).

Solution We can attack this problem as follows: First we note

Ĝ(s) = s · s^2/(s^2 + 1) = s(1 − 1/(s^2 + 1)).
Since
δ(t) − sin(t)u(t) ↔ 1 − 1/(s^2 + 1),

implying

d/dt [δ(t) − sin(t)u(t)] ↔ s(1 − 1/(s^2 + 1)),
we find that

g(t) = δ′(t) − cos(t)u(t).

Note that in applying the derivative property, above, we used the fact that
δ(t) − sin(t)u(t) is a causal signal.

11.3 s-Domain Circuit Analysis


Consider an LTIC circuit in zero-state that is excited by some causal input

f (t) ↔ F̂ (s).

Then all of the voltage and current signals v(t) and i(t) excited in the circuit are
necessarily causal and satisfy the initial conditions

v(0− ) = i(0− ) = 0.

Thus, the Laplace transform of the v-i relation

v(t) = L di/dt
for an inductor in the circuit is

V̂ (s) = Ls Î (s),

where

v(t) ↔ V̂ (s) and i(t) ↔ Î (s).

Likewise, the Laplace transform of the v-i relation

i(t) = C dv/dt
for a capacitor is

Î (s) = Cs V̂ (s),

implying

V̂(s) = (1/sC) Î(s).
Finally, the Laplace transform of the v-i relation

v(t) = Ri(t)

for a resistor is simply

V̂ (s) = R Î (s).

These results imply a general s-domain relation between V̂ (s) and Î (s) having
the form

V̂ (s) = Z Î (s),

where

Z ≡ sL for an inductor L,
    1/(sC) for a capacitor C,
    R for a resistor R.

Here Z denotes an s-domain impedance, a generalization of the familiar
phasor-domain impedance. These algebraic V̂(s)-Î(s) relations, along with Laplace
transform versions of KVL and KCL, namely

Σ_loop V̂(s)_drop = Σ_loop V̂(s)_rise

and

Σ_node Î(s)_in = Σ_node Î(s)_out,

can be used to perform s-domain circuit analysis, analogous to phasor circuit calcu-
lations familiar from earlier chapters. Because KVL and KCL hold in the s-domain,
techniques such as voltage division and current division do as well.
Assuming that the LTIC circuit shown in Figure 11.6a has zero initial state and a
causal input f (t), Figure 11.6b shows the equivalent s-domain circuit. We can use the
s-domain equivalent and simple voltage division to calculate the Laplace transform
Ŷ (s) of the zero-state response

y(t) = h(t) ∗ f (t)

of the circuit to a causal input. The result will have the form (because of the convolution
property of Laplace transforms)

Ŷ (s) = Ĥ (s)F̂ (s),


Figure 11.6 (a) An LTIC circuit with zero initial state, (b) its s-domain equivalent assuming a causal
f (t) ↔ F̂(s), (c) plots of a causal input f (t) = e2t u(t) (heavy curve) and zero-state response (light curve)
calculated in Example 11.22.

allowing us to identify the transfer function

Ĥ(s) = Ŷ(s)/F̂(s)
of the circuit, as well as finding the zero-state response y(t) by computing the inverse
Laplace transform of Ŷ(s). The following set of examples illustrates such s-domain
circuit calculations.

Example 11.22
In the LTIC circuit shown in Figure 11.6a, determine the transfer function
Ĥ(s) = Ŷ(s)/F̂(s) and also find the zero-state response y(t), assuming the
circuit input signal is

f(t) = e^{2t} u(t) ↔ F̂(s) = 1/(s − 2).

Solution Figure 11.6b shows the equivalent s-domain circuit, where the
1 H inductor has been replaced by an s Ω impedance as a consequence of
Z = sL, pertinent for inductors. This equivalent circuit has been obtained
by assuming that the circuit input f(t) is a causal signal having some
Laplace transform F̂(s).
Now, using voltage division in Figure 11.6b (similar to phasor analysis,
but with s-domain impedances), we have

Ŷ(s) = F̂(s) · s/(1 + s).
Hence the system transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = s/(s + 1).

Given the causal input

f(t) = e^{2t} u(t) ↔ F̂(s) = 1/(s − 2),

the Laplace transform of the zero-state response of the circuit is

Ŷ(s) = F̂(s) · s/(s + 1) = [1/(s − 2)] · [s/(s + 1)] = s/((s + 1)(s − 2)) = (1/3)/(s + 1) + (2/3)/(s − 2).

Hence, the system zero-state response to input f(t) = e^{2t} u(t) is the inverse
Laplace transform

y(t) = (1/3)(e^{−t} + 2e^{2t}) u(t).

Figure 11.6c shows plots of the causal input f(t) = e^{2t} u(t) and the causal
zero-state response y(t) derived above.
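Because Ĥ(s) = s/(s + 1) corresponds to the ODE dy/dt + y(t) = df/dt, the answer can be checked numerically. The Python sketch below (a check of mine using central finite differences, not part of the text) verifies that the derived y(t) satisfies this ODE for the given input over t > 0:

```python
# Verify dy/dt + y = df/dt for y(t) = (e^{-t} + 2 e^{2t})/3 and f(t) = e^{2t}, t > 0.
import math

def y(t):
    return (math.exp(-t) + 2.0 * math.exp(2.0 * t)) / 3.0

def f(t):
    return math.exp(2.0 * t)

h = 1e-5
def deriv(g, t):                       # central-difference derivative
    return (g(t + h) - g(t - h)) / (2.0 * h)

residual = max(abs(deriv(y, t) + y(t) - deriv(f, t)) for t in (0.5, 1.0, 2.0))
```

The residual is zero to within finite-difference error, as expected.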

Example 11.23
Determine the transfer function and the impulse response of the LTIC circuit
shown in Figure 11.7a.
Solution Transfer functions and impulse responses are defined under
zero-state conditions. Figure 11.7b shows the s-domain equivalent zero-
state circuit, where F̂ (s) denotes the Laplace transform of a causal source
f (t). Using KVL in the s-domain, we find that

F̂(s) = (2 + s + 1/s) Ŷ(s),

giving

Ŷ(s) = F̂(s)/(2 + s + 1/s) = [s/(s^2 + 2s + 1)] F̂(s) = [s/(s + 1)^2] F̂(s).

Hence, the system transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = s/(s + 1)^2

with a double pole at s = −1. Expanding Ĥ(s) in a PFE, we have

Ĥ(s) = s/(s + 1)^2 = −1/(s + 1)^2 + K/(s + 1).

Evaluating at s = 0, we find that Ĥ(0) = 0 = −1 + K, implying that K = 1.
Hence,

Ĥ(s) = −1/(s + 1)^2 + 1/(s + 1),

yielding

h(t) = −te^{−t} u(t) + e^{−t} u(t) = (1 − t)e^{−t} u(t).


Figure 11.7 (a) An RLC circuit, and (b) its s-domain equivalent.

Notice how the result of Example 11.23 indicates an effective jump in the inductor
current y(t) = h(t) at t = 0 in response to an impulse input f (t) = δ(t). As pointed
out in Chapter 3, inductor currents and capacitor voltages cannot jump in response
to practical sources. However, when theoretical sources such as δ(t) are involved,
jumps can occur, as in Example 11.23.

Example 11.24
Determine the transfer function Ĥ (s) and impulse response h(t) of the
circuit described in Figure 11.8.
Solution Applying ideal op-amp approximations, and writing a KCL
equation at the inverting node of the op amp,

(F̂(s) − 0)/2 = (0 − Ŷ(s))/(1 + 1/s).

Therefore, the transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = −(1/2)(1 + 1/s).

Taking the inverse Laplace transform yields the impulse response,

h(t) = −(1/2)[δ(t) + u(t)].

Notice that the circuit in Example 11.24 is not BIBO stable—the transfer func-
tion has a pole at s = 0, which is outside the LHP, and the impulse response is not
absolutely integrable. Evidently, unlike the phasor method and Fourier analysis, the
s-domain method can work perfectly well for the analysis of unstable circuits. The
next example further illustrates this point.


Figure 11.8 s-domain equivalent of an op-amp circuit with a 1 F capacitor.



Example 11.25
Find the transfer function of the LTIC circuit described in Figure 11.9 and
determine the range of values of the real parameter A for which the circuit
is BIBO stable.



Figure 11.9 s-domain equivalent of a circuit with a single independent input


f (t) and a voltage-controlled voltage source Avx (t), where A is a real gain
constant.

Solution Note that Ŷ (s) is simply one half of AV̂x (s), so we begin by
finding V̂x (s) in terms of F̂ (s). Applying voltage division, we note that

V̂x(s) = (F̂(s) − AV̂x(s)) · 1/(1 + 1/s),

from which we obtain


V̂x(s) = [s/(s(1 + A) + 1)] F̂(s).

Thus,

Ŷ(s) = AV̂x(s)/2 = (A/2) · [s/(s(1 + A) + 1)] F̂(s) = [A/(2(1 + A))] · [s/(s + 1/(1 + A))] F̂(s),

and the circuit transfer function is


Ĥ(s) = [A/(2(1 + A))] · s/(s + 1/(1 + A)).

The circuit is BIBO stable if and only if the single pole of the system at

s = −1/(1 + A)

is within the LHP. This will be the case only if

−1/(1 + A) < 0,

which implies

1 + A > 0,

or

A > −1.

For A ≤ −1 the circuit is unstable.
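The stability condition can be tabulated numerically; the sketch below (names mine) evaluates the pole location s = −1/(1 + A) for a few sample gains and flags which ones place the pole in the LHP:

```python
# Pole of H(s) at s = -1/(1+A); BIBO stable iff the pole is in the LHP.
def pole(A):
    return -1.0 / (1.0 + A)            # undefined (degenerate) at A = -1

def is_bibo_stable(A):
    return A != -1.0 and pole(A) < 0.0

stable_gains = [A for A in (-2.0, -1.5, -0.5, 0.0, 3.0) if is_bibo_stable(A)]
```

Only the gains above −1 survive the filter, matching the condition A > −1.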

11.4 General Response of LTIC Circuits and Systems


As we have seen in Section 11.3, the impulse response h(t) of a lumped-element
LTIC circuit can be determined as the inverse Laplace transform of a rational transfer
function
Ĥ(s) = B(s)/P(s).
Here, B(s) and P (s) are mth- and nth-order polynomials

B(s) = b0 s^m + b1 s^{m−1} + · · · + bm

and

P(s) = s^n + a1 s^{n−1} + · · · + an = (s − p1)(s − p2) · · · (s − pn),
respectively. Since

Ĥ(s) = Ŷ(s)/F̂(s),
where Ŷ (s) is the Laplace transform of a causal zero-state response to a causal input
f (t) ↔ F̂ (s), it follows that for an LTIC circuit,
Ŷ(s)/F̂(s) = B(s)/P(s)
or, equivalently,

P (s)Ŷ (s) = B(s)F̂ (s).


Thus, for an arbitrary LTIC circuit we can write

s^n Ŷ(s) + a1 s^{n−1} Ŷ(s) + · · · + an Ŷ(s) = b0 s^m F̂(s) + b1 s^{m−1} F̂(s) + · · · + bm F̂(s),

which has the inverse Laplace transform

d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + · · · + an y(t) = b0 d^m f/dt^m + b1 d^{m−1}f/dt^{m−1} + · · · + bm f(t).

This expression is a linear constant-coefficient ODE relating the circuit input and
output. This development implies that all LTIC circuits and systems with rational
transfer functions are described by such ODEs.

Example 11.26
Determine an ODE that describes the LTIC circuit shown in Figure 11.10a.
What are the polynomials B(s) and P (s) for the circuit? What is the order
n of the circuit? What homogeneous ODE is satisfied by the zero-input
response of the circuit?


Figure 11.10 (a) A 2nd-order LTIC circuit and (b) its s-domain equivalent.

Solution Using the s-domain equivalent of the circuit, shown in Figure 11.10b,
and writing a KVL equation for each loop, we have

F̂ (s) = 2Î (s) + 2(Î (s) − Ŷ (s)) + s Î (s)

and
2(Î(s) − Ŷ(s)) = (1/s) Ŷ(s).
Solving the second equation for Î (s) gives
Î(s) = (1 + 1/(2s)) Ŷ(s).
Substituting for Î (s) in the first equation then yields
F̂(s) = (s + 5/2 + 2/s) Ŷ(s).
Hence, the transfer function is
Ĥ(s) = 1/(s + 2.5 + 2/s) = s/(s^2 + 2.5s + 2).

Therefore,
B(s) = s,
and
P(s) = s^2 + 2.5s + 2.
Furthermore, because
Ŷ(s)/F̂(s) = B(s)/P(s),
we have
s^2 Ŷ(s) + 2.5s Ŷ(s) + 2Ŷ(s) = s F̂(s),
so that an ODE describing the circuit is
d^2y/dt^2 + 2.5 dy/dt + 2y(t) = df/dt
and the circuit order is n = 2. The zero-input response of the circuit must
satisfy the homogeneous ODE
d^2y/dt^2 + 2.5 dy/dt + 2y(t) = 0,
dt dt
which is obtained by setting the input f (t) to zero in the ODE for the circuit.
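The characteristic poles of this circuit, the roots of P(s) = s^2 + 2.5s + 2, can be checked with a few lines of Python (a sketch of mine using the quadratic formula); both should lie in the LHP:

```python
# Roots of P(s) = s^2 + 2.5 s + 2 via the quadratic formula.
import cmath

a1, a2 = 2.5, 2.0
disc = cmath.sqrt(a1 * a1 - 4.0 * a2)          # discriminant is negative here
p_plus = (-a1 + disc) / 2.0
p_minus = (-a1 - disc) / 2.0

# Each root should satisfy P(p) = 0 and have a negative real part (LHP).
roots_ok = all(abs(p ** 2 + a1 * p + a2) < 1e-12 and p.real < 0.0
               for p in (p_plus, p_minus))
```

The roots come out as a complex-conjugate pair with real part −1.25, so the zero-input response of this circuit is transient.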

Beginning with Chapter 4, we have emphasized the zero-state response of circuits
and systems. The fact that LTIC circuits are described by ODEs is a reminder that the
output of a circuit is, in general, a sum of two components. The zero-state response
is h(t) ∗ f(t), where h(t) ↔ Ĥ(s). When initial conditions are zero, the zero-state
response is the entire response. However, if initial conditions are not zero, then the
output includes an additional signal yo(t) that satisfies the homogeneous form of the
ODE. Thus, in general, for nth-order LTIC circuits and systems with rational transfer
functions, we have

f(t) → LTIC → y(t) = yo(t) + h(t) ∗ f(t).

The output component yo(t) is the zero-input response, the solution of some
homogeneous ODE

d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + · · · + an y(t) = 0

satisfying a set of initial conditions related to energy storage within the system.
We already know how to determine the zero-state response h(t) ∗ f(t) for LTIC
circuits and systems using various methods—direct convolution, the Fourier method,
or the Laplace method if f(t) is causal—so we next focus our attention on the zero-input
response.

11.4.1 Zero-input response and asymptotic stability in LTIC


systems
For LTIC systems with rational transfer functions described by
Ĥ(s) = B(s)/P(s),

with

B(s) = b0 (s − z1)(s − z2) · · · (s − zm)

and

P(s) = (s − p1)(s − p2) · · · (s − pn),
the denominator P(s) is known as the characteristic polynomial. P(s) is the same
characteristic polynomial that is associated with the differential equation describing
the system. The roots, p1, p2, · · ·, pn, are called characteristic poles.7 The number n
of characteristic poles is the system order and coincides with the number of distinct
energy storage elements (e.g., capacitors and inductors) in the system.
In this section we will see that the zero-input response of LTIC circuits and
systems, with rational transfer functions, always can be expressed as a weighted sum
of functions

c_l(t) = t^k e^{p_i t},  0 ≤ k < r_i,

known as characteristic modes. As explained below, characteristic modes of a system
can be identified unambiguously once the characteristic poles p_i and their
multiplicities r_i are known.
For example, the characteristic modes of a circuit with simple (nonrepeated)
characteristic poles

p1 = −1,  p2 = −2

are

c1(t) = e^{−t},  c2(t) = e^{−2t},

and the zero-input response of the circuit would have the form

y(t) = Ae^{−t} + Be^{−2t}.


If the initial conditions imposed on y(t) were y(0) = 2 and y′(0) = −3, then A =
B = 1.
7
The set of poles of Ĥ(s) defined earlier is the subset of characteristic poles that are not cancelled by
any term s − zi in the numerator of Ĥ(s). In most cases, the set of poles and the set of characteristic poles
are the same.

For another 2nd-order system with transfer function

Ĥ(s) = 1/(s + 1)^2
there is a single characteristic pole at s = −1 with multiplicity 2. In this case, the
characteristic modes are
c1(t) = te^{−t},  c2(t) = e^{−t},

and the zero-input response would be

y(t) = Ate^{−t} + Be^{−t},
where the constants A and B would depend on the initial conditions for y(t).
Explanation: The zero-input response of an LTIC circuit is the solution of
an nth -order linear and homogeneous ODE
d^n y/dt^n + a1 d^{n−1}y/dt^{n−1} + · · · + an y(t) = 0.
Assume that the ODE and its solution y(t) are valid for all t, positive and
negative. Then you may recall from a course on differential equations that
a time-domain solution of this homogeneous ODE is a weighted sum of the
above characteristic modes. Alternatively, the Laplace transform of y(t) and
its derivatives result in
y(t) → Ŷ(s),
y′(t) → s Ŷ(s) − y(0−),
y″(t) → s^2 Ŷ(s) − s y(0−) − y′(0−),
etc. Hence, the Laplace transform of the homogeneous ODE above can be
expressed as

(s^n + a1 s^{n−1} + · · · + an) Ŷ(s) = C(s),

where C(s) is an (n − 1)th -order polynomial in s of the form

y(0−)(s^{n−1} + a1 s^{n−2} + · · ·) + y′(0−)(s^{n−2} + a1 s^{n−3} + · · ·) + · · · + y^{(n−1)}(0−).
Consequently, the Laplace transform of the system zero-input response is
Ŷ(s) = C(s)/((s − p1)(s − p2) · · · (s − pn)).
Note that Ŷ (s) is a proper rational expression with poles pi that coincide with
the characteristic poles of the system Ĥ (s). Also note that Ŷ (s) is linearly

dependent, through the polynomial C(s), on the initial conditions y(0−),
y′(0−), etc., representing the state of the system at t = 0−. Therefore, the
system zero-input response—the inverse Laplace transform of Ŷ(s) above,
which can be calculated using a PFE—turns out to be a weighted sum of
the characteristic modes of the system proportional to e^{p_i t} (or t^k e^{p_i t} if p_i is
repeated), with weighting coefficients that depend linearly on the system’s
initial state (as expected).
In the following examples, we will identify the characteristic poles and modes
of various LTIC systems and then calculate zero-input responses.
Example 11.27
Consider an LTIC circuit described by the ODE
d^2y/dt^2 − 4y(t) = f(t)
for −∞ < t < ∞. Determine the characteristic polynomial P(s),
characteristic poles pi, and characteristic modes ci(t) of the system. Also, find
the zero-input response y(t) assuming that y(0−) = 1 and y′(0−) = −1.
Solution Noting that the characteristic polynomial is the denominator of
the transfer function Ĥ (s), we take the Laplace transform of the ODE under
the assumption of causal y(t) and f (t) to obtain

(s^2 − 4) Ŷ(s) = F̂(s).


Clearly, the characteristic polynomial of the system is

P(s) = s^2 − 4 = (s − 2)(s + 2).


Consequently, the characteristic poles are p1 = 2, p2 = −2, and the
characteristic modes are c1(t) = e^{2t} and c2(t) = e^{−2t}. We, therefore, can express
the zero-input response of the system as

y(t) = Ae^{2t} + Be^{−2t},


which is valid for all t, just like the homogeneous ODE it satisfies.
To determine A and B, we equate y(0) and y′(0) directly to the specified
initial conditions, y(0−) = 1 and y′(0−) = −1, and solve for A and B.
Doing so gives

y(0) = A + B = y(0−) = 1,
y′(0) = 2A − 2B = y′(0−) = −1,

yielding

A = 1/4 and B = 3/4.

Hence, the zero-input response of the circuit with the given initial condi-
tions is

y(t) = 0.25e^{2t} + 0.75e^{−2t}.

Notice that the zero-input response examined in this example is nontransient because
of the contribution of a nontransient characteristic mode e^{2t} due to the characteristic
pole at s = p1 = 2 in the RHP.
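A quick numerical check (mine, not from the text) confirms that y(t) = 0.25e^{2t} + 0.75e^{−2t} meets both stated initial conditions and satisfies the homogeneous ODE y″ − 4y = 0:

```python
# Check y(0) = 1, y'(0) = -1, and y'' - 4y = 0 for the zero-input response.
import math

def y(t):
    return 0.25 * math.exp(2.0 * t) + 0.75 * math.exp(-2.0 * t)

h = 1e-4
y0 = y(0.0)
yp0 = (y(h) - y(-h)) / (2.0 * h)               # ~ y'(0), central difference

def ypp(t):                                    # ~ y''(t), central difference
    return (y(t + h) - 2.0 * y(t) + y(t - h)) / (h * h)

ode_residual = max(abs(ypp(t) - 4.0 * y(t)) for t in (0.0, 0.5, 1.0))
```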

Example 11.28
Determine the characteristic modes and the form of the zero-input response
of the circuit shown in Figure 11.7a.
Solution From Example 11.23 in Section 11.3, the system transfer func-
tion is known to be
Ĥ(s) = s/(s + 1)^2.

Therefore, the characteristic polynomial is P(s) = (s + 1)^2 and the
characteristic modes are c1(t) = te^{−t} and c2(t) = e^{−t}. Hence, the zero-input
solution for the circuit is

y(t) = Ate^{−t} + Be^{−t},

where A and B depend on initial conditions.

Example 11.29
Evaluate the constants A and B of Example 11.28 if the zero-input response
y(t) is known to satisfy y(0) = 1 and y(−1) = 0.
Solution Evaluating y(t) = Ate^{−t} + Be^{−t} at t = 0 and t = −1 s, we find
that

y(0) = B = 1,
y(−1) = −Ae^1 + Be^1 = 0.

Thus,

A=B=1

and

y(t) = te^{−t} + e^{−t}.



Example 11.30
Applying the Laplace transform to the ODE

dy/dt + 3y(t) = f(t),

determine the zero-input solution corresponding to y(0− ) = 2.


Solution Transforming the ODE gives

s Ŷ (s) − y(0− ) + 3Ŷ (s) = F̂ (s).

Because we are interested in the zero-input solution, we set F̂ (s) = 0 and


obtain

Ŷ(s) = y(0−)/(s + 3).

Therefore, we find

y(t) = y(0−)e^{−3t} = 2e^{−3t},

valid for t ≥ 0 (and also before t = 0 if the ODE is valid for all times t).

Example 11.31
Suppose that a 3rd-order LTIC system is described by the ODE

d^3y/dt^3 − 2 d^2y/dt^2 − 6 dy/dt − 8y(t) = df/dt − 4f(t).

Determine the system transfer function Ĥ (s) and find the characteristic
poles and characteristic modes of the system. Find also the poles of the
transfer function Ĥ (s) (distinct from characteristic poles as will be seen in
the solution). Is the system BIBO stable? What is the zero-input response
if y(0−) = 2, y′(0−) = 3, y″(0−) = 16?
Solution To determine the transfer function, we take the Laplace trans-
form of the ODE under the assumption of a causal zero-state response y(t)
to a causal input f (t). The transform is

s^3 Ŷ(s) − 2s^2 Ŷ(s) − 6s Ŷ(s) − 8Ŷ(s) = s F̂(s) − 4F̂(s).

Hence, the system transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = (s − 4)/(s^3 − 2s^2 − 6s − 8) = (s − 4)/((s − 4)(s + 1 + j)(s + 1 − j)).

The characteristic polynomial of the system is


P (s) = (s − 4)(s + 1 + j )(s + 1 − j ),
and the characteristic poles are at
s = 4, −1 − j, and − 1 + j.
Since all of the characteristic poles are simple (i.e., non-repeated), the char-
acteristic modes are
c1(t) = e^{4t},  c2(t) = e^{(−1−j)t},  c3(t) = e^{(−1+j)t}.
Because Ĥ (s) has a pole-zero cancellation at s = 4, the transfer func-
tion can be simplified to
Ĥ(s) = 1/((s + 1 + j)(s + 1 − j)).
Thus, the poles of the transfer function are at s = −1 − j and −1 + j.
Because both poles are in the LHP, the system is BIBO stable.
Finally, we will find the zero-input response to the stated initial condi-
tions. The result may be surprising!
The zero-input response is a linear combination of the characteristic
modes,
y(t) = Ae^{4t} + Be^{(−1−j)t} + Ce^{(−1+j)t}.
Applying the initial conditions produces the set of equations
y(0−) = 2 = A + B + C,
y′(0−) = 3 = 4A + (−1 − j)B + (−1 + j)C,
y″(0−) = 16 = 16A + (−1 − j)^2 B + (−1 + j)^2 C,

yielding

A = 1 and B = C = 1/2,
so that
y(t) = e^{4t} + e^{−t} cos(t).
This is surprising indeed! The zero-input response is unbounded even
though the system is BIBO stable.
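The surprise is easy to reproduce numerically. The sketch below (finite-difference checks of mine, not from the text) verifies that y(t) = e^{4t} + e^{−t} cos(t) matches all three initial conditions and still grows without bound:

```python
# y(t) = e^{4t} + e^{-t} cos t : check y(0)=2, y'(0)=3, y''(0)=16, and growth.
import math

def y(t):
    return math.exp(4.0 * t) + math.exp(-t) * math.cos(t)

h = 1e-4
y0 = y(0.0)
yp0 = (y(h) - y(-h)) / (2.0 * h)                     # ~ y'(0)
ypp0 = (y(h) - 2.0 * y(0.0) + y(-h)) / (h * h)       # ~ y''(0)
unbounded = y(5.0) > 1e8                             # the e^{4t} mode dominates
```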

The reason for the behavior encountered in the above example is that the system
analyzed contains a characteristic pole at s = 4 that is cancelled in the transfer function
and yet it affects the zero-input response. It is easy to imagine how this might happen.
For example, a circuit may consist of two cascaded parts, with transfer functions

Ĥ1 (s) and Ĥ2 (s), and the overall transfer function Ĥ1 (s)Ĥ2 (s). Part 2 of the circuit
may have an unstable pole that is cancelled by a zero in the transfer function of Part
1. This can lead to a situation where the overall system is BIBO stable (all poles of
Ĥ1 (s)Ĥ2 (s) are in the LHP) and yet nonzero initial conditions in Part 2 of the circuit
may yield a zero-input response that is unbounded, because Ĥ2 (s) has a pole located
outside the LHP.
Because we do not wish to have systems whose zero-input responses can be
unbounded, we frequently require that systems exhibit a transient zero-input response.
Such systems with transient zero-input responses are called asymptotically stable:
An LTIC circuit with a rational transfer function is asymptotically stable
if and only if all of its characteristic poles are in the LHP.
For LTIC systems with proper rational transfer functions, BIBO stability and
asymptotic stability are equivalent unless there is cancellation of an unstable char-
acteristic pole in the transfer function. The BIBO stable system in Example 11.31 is
not asymptotically stable, because of the unstable characteristic pole at s = 4. Note
that the output of a circuit that is not asymptotically stable may be dominated by
(or at least include) components of the zero-input response. Likewise, any thermal
noise injected within such a circuit may create an unbounded signal either interior
to or at the output of the system. Thus, circuits that are not asymptotically stable (a
stronger condition than BIBO stability) must be used with care. One possible use of
such systems is exemplified in the following section.

11.4.2 Marginal stability and resonance


Systems with simple (i.e., nonrepeated) characteristic poles on the imaginary axis
deserve further discussion. Such systems are neither BIBO stable nor asymptotically
stable; yet, in the absence of RHP poles, they display bounded, but nontransient,
zero-input responses—such systems are said to be marginally stable.
The 1st-order system
Ĥ(s) = 1/s
with a simple pole at s = 0 on the ω-axis, and no RHP poles, is marginally stable
because its zero-input response

y(t) = y(0− )

is bounded. Likewise, the 2nd-order system


Ĥ(s) = 1/(s^2 + ωo^2) = 1/((s − jωo)(s + jωo)),

with ωo^2 > 0, is marginally stable. The system has simple poles at s = ±jωo. Under
initial conditions y(0−) = 1 and y′(0−) = 0, the system exhibits the bounded and

sinusoidal steady-state zero-input response

y(t) = (e^{jωo t} + e^{−jωo t})/2 = cos(ωo t).
Reiterating, an LTIC system is marginally stable if and only if it has simple
characteristic poles on the imaginary axis and no characteristic poles in the RHP.
Example 11.32
Determine the zero-input response of the nth -order system

Ĥ(s) = 1/s^n,  n ≥ 1.
Show that the system has a bounded zero-input response and is marginally
stable only for n = 1.
Solution With characteristic poles at s = 0, the zero-input response of
the system is

y(t) = K1 t^{n−1} + K2 t^{n−2} + · · · + Kn,

where the constants K1 , K2 , · · ·, Kn depend on initial conditions. Clearly,


given arbitrary initial conditions, y(t) is unbounded unless n = 1, which
corresponds to a single pole on the imaginary axis and marginal stability.

The zero-input response y(t) = cos(ωo t) of the marginally stable 2nd-order
system

Ĥ(s) = 1/(s^2 + ωo^2),

ωo^2 > 0, indicates that the system can be used as an “oscillator,” or as a co-sinusoidal
signal source. Because the oscillations cos(ωo t) are produced in the absence of an
external input, the system is resonant (see Chapter 4), which in turn implies that no net
energy dissipation takes place within the system.8 Clearly, resonance and marginal
stability are related concepts—all marginally stable systems also are resonant. For
instance, the 4th-order marginally stable system

Ĥ(s) = 1/((s^2 + 4)(s^2 + 9))

8
Such systems can be built with dissipative components so long as the systems also include active
elements that can be modeled by negative resistances.


Figure 11.11 (a) An LTIC circuit with a switch, and (b) its s-domain equivalent including an initial-value
source.

can sustain unforced and steady-state co-sinusoidal oscillations at two resonant
frequencies, 2 and 3 rad/s. On the other hand, another 4th-order system

Ĥ(s) = 1/((s^2 + 4)(s^2 + 4))

with the resonant frequency 2 rad/s is not marginally stable because the poles at
s = ±j 2 are not simple.

11.4.3 Circuit initial-value problems


We will describe in this section an extension of the s-domain technique of Section 11.3
that can be used in LTIC circuit problems where the zero initial-state condition may
not be applicable. See, for instance, the circuit in Figure 11.11a, where a solution
for y(t) is sought for t > 0, assuming an arbitrary vc (0− ) just before the switch is
closed. Although finding y(t), t > 0, in the circuit is just a matter of superposing
zero-state and zero-input responses (determined using methods of previous sections),
an alternate approach is to apply s-domain analysis to an equivalent circuit shown in
Figure 11.11b, as worked out in Example 11.33 below. The procedure by which the
equivalent circuit can be obtained—where a current source 21 vc (0− ) accounts for the
nonzero initial state—will be described after the example.

Example 11.33
The source in the circuit shown in Figure 11.11a is specified as f (t) = 3 V.
Determine the system response y(t) for t > 0, for an arbitrary initial state
vc (0− ), by using the s-domain equivalent circuit shown in Figure 11.11b.
Solution Note that the source f(t) = 3 V appears in the equivalent circuit
as the Laplace transform F̂(s) = 3/s. The impedance 2/s Ω is the counterpart
of the 1/2 F capacitor (as in Section 11.3), but it is in parallel with a current
source (1/2)vc(0−) for reasons to be explained after this example. Focusing,
for now, on the analysis of the circuit in Figure 11.11b, we note that we may

use superposition to determine Ŷ(s). First, suppressing the current source
(1/2)vc(0−) and applying voltage division, we find

Ŷ(s) = F̂(s) · 1/(1 + 2/s) = (3/s) · 1/(1 + 2/s) = 3/(s + 2).

Next, suppressing F̂(s) and applying the current (1/2)vc(0−) through the
parallel combination of impedances of 1 and 2/s, we obtain

Ŷ(s) = −(1/2)vc(0−) · (1 · (2/s))/(1 + 2/s) = −vc(0−)/(s + 2).

Superposing the above contributions, the overall response is

Ŷ(s) = 3/(s + 2) − vc(0−)/(s + 2).
Taking the inverse Laplace transform of this result, we find that for t > 0

y(t) = 3e^{−2t} − vc(0−)e^{−2t}.

The first term of the solution is the zero-state component (driven by the
3 V source) while the second term, proportional to vc (0− ), is the zero-input
response.
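The superposition above can be checked pointwise in the s-domain; this sketch (mine, with an arbitrary initial voltage) confirms that the two contributions sum to (3 − vc(0−))/(s + 2):

```python
# Superpose the source and initial-state contributions from Example 11.33.
vc0 = 1.7                                   # arbitrary initial capacitor voltage

def Y_source(s):                            # F(s) = 3/s through voltage division
    return (3.0 / s) * (1.0 / (1.0 + 2.0 / s))

def Y_initial(s):                           # -(1/2) vc0 through 1 || (2/s)
    return -0.5 * vc0 * (1.0 * (2.0 / s)) / (1.0 + 2.0 / s)

ok = all(abs(Y_source(s) + Y_initial(s) - (3.0 - vc0) / (s + 2.0)) < 1e-12
         for s in (1.0, 2.0, 5.0))
```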

Figures 11.12a and 11.12b, below, depict the s-domain equivalents of an inductor
L and capacitor C, with nonzero initial states i(0− ) and v(0− ), respectively. In the
equivalent networks, the s-domain impedances sL and 1/sC Ω are accompanied by
series and parallel voltage and current sources, Li(0− ) and Cv(0− ), having polarity
and flow direction opposite to those associated with v(t) and i(t). For instance, in
Figure 11.12a the voltage Li(0− ) is a drop in the direction of voltage rise v(t), while
in Figure 11.12b the current Cv(0− ) flows counter to i(t). The equivalence depicted
in Figure 11.12b is the transformation rule used to obtain the s-domain circuit shown
above in Figure 11.11b.
Figure 11.12 Transformation rule for obtaining s-domain equivalents of (a) an inductor L and (b) a
capacitor C, with initial states i(0− ) and v(0− ), respectively (see text for explanation).
Section 11.4 General Response of LTIC Circuits and Systems 409
To validate the transformation depicted in Figure 11.12a, we note that the Laplace
transform of the inductor v-i relation
v(t) = L di/dt
is
V̂(s) = sL Î(s) − Li(0−),
an equality that can be seen to satisfy the s-domain circuit constraint at the network
terminals shown at the right side of the figure. Likewise, the s-domain constraint at
the network terminals at the right side of Figure 11.12b is
Î(s) = sC V̂(s) − Cv(0−),
which is in agreement with the Laplace transform of the capacitor v-i relation
i(t) = C dv/dt.
The Laplace transforms in both cases above result from applying the Laplace
transform derivative property (item 6 in Table 11.2) to account for the initial-value
source terms included in the above models. Note that by applying source transformations
to the s-domain equivalents shown in Figures 11.12a and 11.12b we can
generate additional forms (Norton and Thevenin types, respectively) of the equivalents.
That will not be necessary for solving the example problems given below, but
remembering the equivalences shown in Figure 11.12, and in particular the proper
initial-value source directions, is important.
Example 11.34
In the circuit shown in Figure 11.13a, the initial capacitor voltage is v(0− ) =
0 and the initial inductor current is i(0− ) = 2 A. Determine v(t) for t > 0
if f (t) = 1 for t > 0.
Solution Figure 11.13b shows the equivalent s-domain circuit, where an
initial-value voltage source, due to the nonzero initial inductor current
i(0− ) = 2 A, has been introduced. Since the initial capacitor voltage is zero,
Figure 11.13 (a) An LTIC circuit with a switch and two energy storage elements, and (b) its s-domain
equivalent including an initial-value source.
the capacitor has been transformed to the s-domain without an accompanying
initial-value source. Applying KCL in the s-domain to the equivalent
circuit, we have

[V̂(s) − F̂(s)]/2 + [V̂(s) + 3i(0−)]/(3s) + V̂(s)/(6/s) = 0,
leading to
(1/2 + 1/(3s) + s/6) V̂(s) = F̂(s)/2 − i(0−)/s.
Solving for V̂(s) gives

V̂(s) = 3s F̂(s)/(s² + 3s + 2) − 6 i(0−)/(s² + 3s + 2).
Thus, with f(t) = 1 → F̂(s) = 1/s and i(0−) = 2 A, we have

V̂(s) = (3 − 12)/((s + 1)(s + 2)) = −9/(s + 1) + 9/(s + 2),
which implies a total response, for t > 0, of

v(t) = −9e−t + 9e−2t.
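The partial-fraction step can be verified symbolically; the short SymPy sketch below (added, not part of the original text) reproduces the expansion:

```python
# SymPy cross-check of Example 11.34 (an added sketch, not from the text):
# V(s) = (3s F(s) - 6 i0)/(s^2 + 3s + 2) with F(s) = 1/s and i0 = 2 A
# should expand to -9/(s+1) + 9/(s+2).
import sympy as sp

s = sp.Symbol('s')
F = 1/s                      # Laplace transform of f(t) = 1, t > 0
i0 = 2                       # initial inductor current, in amperes

V = (3*s*F - 6*i0)/(s**2 + 3*s + 2)
pfe = sp.apart(sp.cancel(V), s)          # partial-fraction expansion of V(s)

match = sp.simplify(V - (-9/(s + 1) + 9/(s + 2))) == 0
```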
Example 11.35
In Figure 11.14a f1 (t) = 3t V for t > 0 and f2 (t) = 1 A. Assuming that
v(0− ) = 1 V, and using s-domain techniques, determine the zero-state and
zero-input components of the current i(t) for t > 0.
Figure 11.14 (a) A circuit with two independent sources, and (b) its s-domain equivalent including
initial-value sources.
Solution Figure 11.14b shows the s-domain equivalent of Figure 11.14a
for t > 0, where i(0−) = 1 A in order to satisfy KCL in the circuit prior to
the switching action.
To determine the zero-state response of the circuit, we analyze the s-domain
equivalent with suppressed initial-value sources. In that case, writing
a KCL equation at the top node, with F̂1(s) = 3/s² and F̂2(s) = 1/s, yields
(V̂x − 3/s²)/(3/s) − 1/s + V̂x/(4 + s) = 0,

from which

V̂x = (2/s)/(s/3 + 1/(4 + s)).
Thus,

Î(s) = V̂x/(4 + s) = 6/(s²(s + 4) + 3s) = 6/(s(s + 1)(s + 3)) = 2/s − 3/(s + 1) + 1/(s + 3).
Taking the inverse Laplace transform, we find that the zero-state response
for t > 0 must be

i(t) = 2 − 3e−t + e−3t A.
Now, for the zero-input response we suppress the input sources F̂1(s) and
F̂2(s), and then use the superposition method once again to calculate the
circuit response to the initial-value sources (1/3)v(0−) and i(0−). First, due to the
current source (1/3)v(0−), we have, using current division,
Î(s) = −(1/3)v(0−) · (3/s)/((3/s) + (s + 4)) = −v(0−)/(3 + s(s + 4)).
Next, due to the voltage source i(0−), we have, dividing by the total
impedance around the loop,

Î(s) = i(0−)/((3/s) + (s + 4)) = s i(0−)/(3 + s(s + 4)).
Thus, the total circuit response to the initial-value sources is

Î(s) = (s i(0−) − v(0−))/(s² + 4s + 3) = (s − 1)/((s + 1)(s + 3)) = −1/(s + 1) + 2/(s + 3),
using v(0−) = 1 V and i(0−) = f2(0−) = 1 A. Taking the inverse Laplace
transform, we find that the zero-input response for t > 0 must be

i(t) = −e−t + 2e−3t A.
Naturally, the zero-state and zero-input responses calculated above can
be summed to obtain the expression for the total response of the system for
t > 0.
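Both expansions in this example can be machine-checked; the SymPy sketch below (added, not part of the original text) verifies them:

```python
# SymPy cross-checks for Example 11.35 (an added sketch, not from the text).
import sympy as sp

s = sp.Symbol('s')

# Zero-state: I(s) = 6/(s(s+1)(s+3)) -> 2/s - 3/(s+1) + 1/(s+3)
I_zs = 6/(s*(s + 1)*(s + 3))
zs_ok = sp.simplify(I_zs - (2/s - 3/(s + 1) + 1/(s + 3))) == 0

# Zero-input: I(s) = (s*i0 - v0)/(s^2 + 4s + 3) with i0 = v0 = 1
i0, v0 = 1, 1
I_zi = (s*i0 - v0)/(s**2 + 4*s + 3)
zi_ok = sp.simplify(I_zi - (-1/(s + 1) + 2/(s + 3))) == 0
```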
11.5 LTIC System Combinations
We close this chapter with a discussion of higher-order LTIC systems composed
of interconnections of lower-order subsystems. In particular, we consider cascade,
parallel, and feedback configurations. For each of these configurations, we determine
how the transfer function of the overall system is related to the transfer functions
of the subsystems. Cascade and parallel configurations are used for the implementa-
tion of higher-order filters, whose design is the subject of Chapter 12. The feedback
configuration is of fundamental importance in the field of control systems.

11.5.1 Cascade configuration
Higher-order systems frequently are composed of cascaded subsystems. Figure 11.15
shows a cascade of k LTIC systems with transfer functions Ĥi (s), 1 ≤ i ≤ k. Given a
causal input f (t) ↔ F̂ (s), the first-stage output h1 (t) ∗ f (t) ↔ Ĥ1 (s)F̂ (s) is also the
second-stage input. Therefore, the second-stage output is h2 (t) ∗ (h1 (t) ∗ f (t)) ↔
Ĥ2 (s)Ĥ1 (s)F̂ (s). Through similar reasoning, the overall system output is described by

y(t) = h(t) ∗ f (t) ↔ Ŷ (s) = Ĥ (s)F̂ (s)

with

h(t) ≡ h1 (t) ∗ h2 (t) ∗ · · · ∗ hk (t) ↔ Ĥ (s) ≡ Ĥ1 (s)Ĥ2 (s) · · · Ĥk (s).

The order of the cascade system h(t) ↔ Ĥ (s) is less than or equal to the sum of the
orders of its k components—the order can be less if there are pole-zero cancellations
in the product of the Ĥi (s) that forms Ĥ (s).
Figure 11.15 A cascade system configuration.
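The order-reduction caveat is easy to demonstrate symbolically. In the sketch below, the two 1st-order subsystems are hypothetical examples chosen for illustration (they do not appear in the text); the pole of Ĥ2(s) cancels the zero of Ĥ1(s), so the cascade is 1st order rather than 2nd:

```python
# Pole-zero cancellation in a cascade (hypothetical subsystems, not from the text).
import sympy as sp

s = sp.Symbol('s')
H1 = (s + 2)/(s + 3)        # 1st-order subsystem with a zero at s = -2
H2 = 1/(s + 2)              # 1st-order subsystem with a pole at s = -2

H = sp.cancel(H1*H2)        # cascade transfer function after cancellation
order = sp.degree(sp.denom(H), s)
```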
Section 11.5 LTIC System Combinations 413
Example 11.36
The 3rd-order system
Ĥ(s) = 1/((s + 1)(s + 1 − j)(s + 1 + j))
is to be realized by cascading systems Ĥ1 (s) and Ĥ2 (s). Determine h1 (t) ↔
Ĥ1 (s) and h2 (t) ↔ Ĥ2 (s) in such a way that the transfer function of each
subsystem has real-valued coefficients.
Solution Since

Ĥ(s) = [1/(s + 1)] · [1/((s + 1 − j)(s + 1 + j))],
the system can be realized by cascading
h1(t) = e−t u(t) ↔ Ĥ1(s) = 1/(s + 1)
(e.g., an RC circuit) with a 2nd-order system
h2(t) = e−t sin(t)u(t) ↔ Ĥ2(s) = 1/((s + 1 − j)(s + 1 + j)) = 1/(s² + 2s + 2).
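A quick SymPy check (an added sketch, not part of the original text) confirms that the conjugate-pole pair multiplies out to real coefficients and that the cascade reproduces Ĥ(s):

```python
# Verify the Example 11.36 factors with SymPy (an added sketch, not from the text).
import sympy as sp

s = sp.Symbol('s')
H1 = 1/(s + 1)
H2 = 1/((s + 1 - sp.I)*(s + 1 + sp.I))

den2 = sp.expand((s + 1 - sp.I)*(s + 1 + sp.I))   # conjugate pair -> real coefficients
H = sp.cancel(H1*H2)                              # cascade = original 3rd-order H(s)

pair_ok = sp.simplify(den2 - (s**2 + 2*s + 2)) == 0
cascade_ok = sp.simplify(H - 1/((s + 1)*(s**2 + 2*s + 2))) == 0
```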
Example 11.37
An LTIC system Ĥ1(s) = s/(s + 1) is cascaded with the LTIC system h2(t) =
δ(t) + e−2t u(t) to implement a system Ĥ(s) = Ĥ1(s)Ĥ2(s). Discuss the
stability of system Ĥ(s) and examine H(ω) = Ĥ(jω) to determine the
kind of filter implemented by Ĥ(s).
Solution Since
h2(t) = δ(t) + e−2t u(t) ↔ Ĥ2(s) = 1 + 1/(s + 2) = (s + 3)/(s + 2),
it follows that
Ĥ(s) = Ĥ1(s)Ĥ2(s) = [s/(s + 1)] · [(s + 3)/(s + 2)] = s(s + 3)/((s + 1)(s + 2)).
Because both poles of Ĥ (s) are in the LHP, the system is BIBO stable.
Furthermore, both components of the system have only LHP system poles
and therefore they are asymptotically stable. The system frequency response
Ĥ(jω) = jω(jω + 3)/((jω + 1)(jω + 2))
vanishes as ω → 0 and approaches unity as ω → ∞. Hence, the system
implements a type of high-pass filter.
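The high-pass behavior can be confirmed numerically; the NumPy sketch below (added, not part of the original text; the two test frequencies are arbitrary choices) evaluates |Ĥ(jω)| at a low and a high frequency:

```python
# Numerical check of the frequency response in Example 11.37
# (an added sketch, not from the text; the test frequencies are arbitrary).
import numpy as np

def H(w):
    """H(jw) = jw(jw + 3) / ((jw + 1)(jw + 2))."""
    jw = 1j*w
    return jw*(jw + 3)/((jw + 1)*(jw + 2))

low = abs(H(0.01))      # near-zero gain as w -> 0
high = abs(H(1000.0))   # gain approaches unity as w -> infinity
```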
Cascading of subsystems is a straightforward idea, but in practical implementations
we must be sure that stage i + 1 does not affect the properties of (change the
transfer function of) the preceding stage i. For instance, cascading two RC circuits
does not produce a system having a transfer function that is the product of the two
original transfer functions, unless a buffer stage is inserted between the two to prevent
the loading of (drawing current from) the front-end circuit by the second stage. On
the other hand, active filters that use op amps readily can be cascaded, because an
op amp in a later stage draws essentially no current from the preceding stages and,
therefore, does not alter the transfer functions of those stages.
11.5.2 Parallel configuration
Figure 11.16 shows a parallel combination of k LTIC subsystems with transfer func-
tions Ĥi (s), 1 ≤ i ≤ k. In this configuration, a single input f (t) ↔ F̂ (s) is applied
in parallel to the input of each subsystem (represented by the fork on the left), and
the outputs hi (t) ∗ f (t) ↔ Ĥi (s)F̂ (s) of the individual stages are summed to obtain
an overall system output of

y(t) = h(t) ∗ f (t) ↔ Ŷ (s) = Ĥ (s)F̂ (s)

with

h(t) ≡ h1 (t) + · · · + hk (t) ↔ Ĥ (s) ≡ Ĥ1 (s) + · · · + Ĥk (s).

The order of the parallel system h(t) ↔ Ĥ (s) is less than or equal to the sum of the
orders of its k subcomponents. (The order can be less if pole-zero cancellations occur
in the sum of the Ĥi (s) that forms Ĥ (s).)
Figure 11.16 A parallel system configuration.
Example 11.38
A 3rd-order LTIC system

Ĥ(s) = s(s + 1)/((s + 3)(s² + s + 1))
is to be constructed using a parallel configuration with a 1st-order system,
Ĥ1(s), and a 2nd-order system, Ĥ2(s). Identify possible choices for Ĥ1(s)
and Ĥ2(s) and discuss their properties.
Solution Writing Ĥ(s) in a PFE, and then recombining the two terms
involving complex poles, we have

Ĥ(s) = s(s + 1)/((s + 3)(s² + s + 1)) = (6/7) · 1/(s + 3) + (1/7) · (s − 2)/(s² + s + 1).
Hence, a possible parallel configuration has

Ĥ1(s) = (6/7)/(s + 3),

and

Ĥ2(s) = (1/7) · (s − 2)/(s² + s + 1).
Ĥ1(s) is a low-pass system. Ĥ2(s) is another low-pass system and is 2nd-order.
However, the overall system behavior is bandpass, since the DC
outputs of Ĥ1(s) and Ĥ2(s) are equal in magnitude but opposite in sign
(Ĥ1(0) = 2/7 and Ĥ2(0) = −2/7).
In this example, we also could have expressed Ĥ (s) in the more stan-
dard PFE, as a sum of three 1st-order terms. In that case the transfer
functions of two of the subsystems would have complex coefficients. Such
filter sections can be implemented (see Exercise Problem 11.28), but the
implementation is more complicated than for transfer functions having real-
valued coefficients.
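The expansion and the DC-gain claims can be verified with SymPy (an added sketch, not part of the original text):

```python
# Cross-check of Example 11.38 (an added sketch, not from the text).
import sympy as sp

s = sp.Symbol('s')
H  = s*(s + 1)/((s + 3)*(s**2 + s + 1))
H1 = sp.Rational(6, 7)/(s + 3)
H2 = sp.Rational(1, 7)*(s - 2)/(s**2 + s + 1)

sum_ok = sp.simplify(H - (H1 + H2)) == 0   # parallel sum reproduces H(s)
dc1 = H1.subs(s, 0)                        # expect  2/7
dc2 = H2.subs(s, 0)                        # expect -2/7
```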
As just one example of a real-world parallel system, high-quality loudspeakers
are configured and implemented as parallel systems with woofer, mid-range, and
tweeter subcomponents, responding to the low-, mid-, and high-frequency bands of
the same audio signal input, f(t), respectively.
11.5.3 Feedback configuration
Figure 11.17 shows a feedback system with subcomponents h1(t) ↔ Ĥ1(s) and
h2(t) ↔ Ĥ2(s). The input of the system h1(t) ↔ Ĥ1(s) is

f(t) + h2(t) ∗ y(t) ↔ F̂(s) + Ĥ2(s)Ŷ(s),

and therefore its output is

y(t) = h1(t) ∗ (f(t) + h2(t) ∗ y(t)) ↔ Ŷ(s) = Ĥ1(s)(F̂(s) + Ĥ2(s)Ŷ(s)).

Moving all terms involving Ŷ(s) to the left-hand side gives

(1 − Ĥ1(s)Ĥ2(s))Ŷ(s) = Ĥ1(s)F̂(s).
Thus, the transfer function of the feedback configuration is obtained as

Ĥ(s) = Ŷ(s)/F̂(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s)).
Figure 11.17 A feedback system.
Example 11.39
Consider an LTIC feedback system with Ĥ1(s) = 1/s and Ĥ2(s) = A, where
A is a real constant. Determine the conditions under which the feedback
system is BIBO stable.
Solution

Ĥ(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s)) = (1/s)/(1 − (1/s)A) = 1/(s − A).

The pole of Ĥ(s), at s = A, is in the left-half plane only for A < 0; therefore
the system is BIBO stable if and only if A < 0.
Example 11.39 illustrates that a marginally stable system Ĥ1(s) = 1/s (which is
not BIBO stable) can be stabilized by “feeding back” an amount Ay(t) of the system
output, y(t), to the system input. This is an example of “negative feedback” stabi-
lization, since for stability A < 0 is required. Positive feedback (A > 0) leads to an
unstable configuration. Notice that for A very small and negative, the transfer func-
tions Ĥ (s) and Ĥ1 (s) can be nearly identical, and yet the former is stable and has a
frequency response, whereas the latter does not.
Example 11.40
Consider an LTIC feedback system with Ĥ1(s) = 1/(s + 1) and Ĥ2(s) = A, where
A is a real constant. Note that both Ĥ1(s) and Ĥ2(s) are stable systems.
Determine the values of A for which the feedback system itself is unstable.
Solution

Ĥ(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s)) = [1/(s + 1)]/[1 − A/(s + 1)] = 1/(s + 1 − A).
The pole at s = A − 1 is in the left-half plane if and only if A − 1 < 0 (i.e.,
A < 1). Hence, for A ≥ 1 the feedback system is unstable, even though its
components are not.
Notice that instability in Example 11.40 occurs once again with positive feedback
(A ≥ 1). However, in Example 11.40 negative feedback is not necessary for system
stability (e.g., with A = 0.5, the system is stable) because Ĥ1 (s) is stable to begin
with.
Example 11.41
For an LTIC feedback system with

Ĥ1(s) = 1/(s + 1)

and

Ĥ2(s) = 1/(s + K),
where K is real-valued, determine the range of K for which the system is
BIBO stable.
Solution

Ĥ(s) = Ĥ1(s)/(1 − Ĥ1(s)Ĥ2(s)) = [1/(s + 1)]/[1 − 1/((s + 1)(s + K))]
= (s + K)/((s + 1)(s + K) − 1) = (s + K)/(s² + (1 + K)s + (K − 1)).
The poles of Ĥ(s) are at

s = [−(1 + K) ± √((1 + K)² − 4(K − 1))]/2 = [−(1 + K) ± √(K² − 2K + 5)]/2.
It is easy to verify that K² − 2K + 5 = (K − 1)² + 4 > 0 for all K, which ensures that
both poles are real-valued. For BIBO stability, both poles must lie in the
LHP, requiring that

−(1 + K) ± √(K² − 2K + 5) < 0

or

±√(K² − 2K + 5) < 1 + K,

implying

K² − 2K + 5 < K² + 2K + 1.

Thus,

K > 1

for stability.
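The K > 1 stability boundary can be checked numerically by sweeping K and locating the closed-loop poles; the NumPy sketch below (added, not part of the original text; the sampled K values are arbitrary) does so:

```python
# Numerical check of the stability condition in Example 11.41
# (an added sketch, not from the text; the sampled K values are arbitrary).
import numpy as np

def poles(K):
    """Roots of the closed-loop denominator s^2 + (1 + K)s + (K - 1)."""
    return np.roots([1.0, 1.0 + K, K - 1.0])

# BIBO stable iff every pole has a strictly negative real part.
stable = {K: max(p.real for p in poles(K)) < 0
          for K in (-1.0, 0.5, 1.0, 1.5, 4.0)}
```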
Example 11.42
Determine the transfer function Ĥ(s) of the system shown in Figure 11.18.
Solution Labeling the output of the bottom adder as Ŵ(s), we have
Ŵ(s) = Ĥ3(s)Ŷ(s) + Ĥ4(s)F̂(s).

The system output then can be written as

Ŷ(s) = Ĥ1(s)[F̂(s) + Ĥ2(s)Ŵ(s)],
Figure 11.18 A system with parallel and feedback connections.
Exercises 419

so that substituting for Ŵ (s) gives

Ŷ (s) = Ĥ1 (s)[F̂ (s) + Ĥ2 (s)(Ĥ3 (s)Ŷ (s) + Ĥ4 (s)F̂ (s))].

Thus

Ŷ (s) − Ĥ1 (s)Ĥ2 (s)Ĥ3 (s)Ŷ (s) = Ĥ1 (s)F̂ (s) + Ĥ1 (s)Ĥ2 (s)Ĥ4 (s)F̂ (s),
and the system transfer function is

Ĥ(s) = Ŷ(s)/F̂(s) = Ĥ1(s)(1 + Ĥ2(s)Ĥ4(s))/(1 − Ĥ1(s)Ĥ2(s)Ĥ3(s)).
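The block-diagram algebra can be delegated to SymPy; the sketch below (added, not part of the original text) solves the same two adder equations and recovers the transfer function:

```python
# Solve the Example 11.42 block-diagram equations symbolically
# (an added sketch, not from the text).
import sympy as sp

H1, H2, H3, H4, F, Y, W = sp.symbols('H1 H2 H3 H4 F Y W')

eqs = [sp.Eq(W, H3*Y + H4*F),            # bottom adder output
       sp.Eq(Y, H1*(F + H2*W))]          # system output
sol = sp.solve(eqs, [Y, W], dict=True)[0]

H = sp.simplify(sol[Y]/F)
expected = H1*(1 + H2*H4)/(1 - H1*H2*H3)
match = sp.simplify(H - expected) == 0
```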
EXERCISES
11.1 Determine the Laplace transform F̂ (s), and the ROC, for the following
signals f (t). In each case identify the corresponding pole locations where
|F̂ (s)| is not finite.
(a) f (t) = u(t) − u(t − 8).
(b) f (t) = u(t) − u(t + 8).
(c) f (t) = u(t + 8).
(d) f (t) = 6.
(e) f(t) = rect((t − 4)/2).
(f) f(t) = rect((t + 8)/3).
(g) f (t) = te2t u(t).
(h) f (t) = te2t u(t − 2).
(i) f (t) = 2te2t .
(j) f (t) = te−4t + δ(t) + u(t − 2).
(k) f (t) = e2t cos(t)u(t).
11.2 For each of the following Laplace transforms F̂(s), determine the inverse
Laplace transform f(t).
(a) F̂(s) = (s + 3)/((s + 2)(s + 4)).
(b) F̂(s) = s²/((s + 2)(s + 4)).
(c) F̂(s) = 1/(s(s − 5)²).
(d) F̂(s) = (s² + 2s + 1)/((s + 1)(s + 2)).
(e) F̂(s) = s/(s² + 2s + 5).
(f) F̂(s) = s³/(s² + 4).
11.3 Sketch the amplitude response |H(ω)| and determine the impulse response
h(t) of the LTIC systems having the following transfer functions:
(a) Ĥ(s) = s/(s + 10).
(b) Ĥ(s) = 10/(s + 1).
(c) Ĥ(s) = s/(s² + 3s + 2).
(d) Ĥ(s) = [1/(s + 1)]e−s.
11.4 Determine the zero-state responses of the systems defined in Problem 11.3
to a causal input f (t) = u(t). Use y(t) = h(t) ∗ f (t) or find the inverse
Laplace transform of Ŷ (s) = Ĥ (s)F̂ (s), whichever is more convenient.
11.5 Repeat Problem 11.4 with f (t) = e−t u(t).
11.6 Given the frequency response H (ω), below, determine the system transfer
function Ĥ (s) and impulse response h(t).

(a) H (ω) = (1+j ω)(2+j ω) .

(b) H (ω) = 1−ω2 +j ω
.
11.7 Determine whether the LTIC systems with the following transfer functions
are BIBO stable and explain why or why not.
(a) Ĥ1(s) = (s³ + 1)/((s + 2)(s + 4)).
(b) Ĥ2(s) = 2 + s/((s + 1)(s − 2)).
(c) Ĥ3(s) = (s² + 4s + 6)/((s + 1 + j6)(s + 1 − j6)).
(d) Ĥ4(s) = 1/(s² + 16).
(e) Ĥ5(s) = (s − 2)/(s² − 4).
11.8 For each unstable system in Problem 11.7 give an example of a bounded
input that causes an unbounded output.
11.9 Given

s F̂(s) − f(0−) = ∫_{0−}^{∞} (df/dt) e−st dt = f(0+) − f(0−) + ∫_{0+}^{∞} (df/dt) e−st dt,
and assuming that the Laplace transforms of f(t) and f′(t) exist, show that
(a) lim_{s→0} s F̂(s) = f(∞).
(b) lim_{s→∞} s F̂(s) = f(0+).
11.10 Consider the LTIC circuit shown in Figure 11.7a. What is the zero-state
response x(t) if the input is f(t) = u(t)? Hint: Use s-domain voltage division
to relate X̂(s) to F̂(s) in Figure 11.7b.
11.11 Repeat Problem 11.10 for (a) f (t) = δ(t), and (b) f (t) = tu(t).
11.12 Consider the following circuit with C > 0:
[Circuit: source f(t), 1 Ω resistor, 1 H inductor, capacitor C, output y(t)]

(a) Determine the zero-state response y(t) if f(t) = te−t u(t).
(b) Determine the zero-state response y(t) if f(t) = tu(t).
11.13 Determine the transfer functions Ĥ(s) and the zero-state responses for LTIC
systems described by the following ODEs:
(a) d²y/dt² + 3 dy/dt + 2y(t) = e3t u(t).
(b) d²y/dt² + y(t) = cos(2t)u(t).
(c) d²y/dt² + y(t) = cos(t)u(t).
11.14 If an LTIC system has the transfer function Ĥ(s) = Ŷ(s)/F̂(s) = (s + 1)/(s + 2)², determine
a linear ODE that describes the relationship between the system input f(t)
and the output y(t).
11.15 Determine the characteristic polynomial P(s), characteristic poles, characteristic
modes, and the zero-input solution for each of the LTIC systems
described below.
(a) d²y/dt² + 2 dy/dt − 8y(t) = 6f(t), y(0−) = 0, y′(0−) = 1.
(b) d³y/dt³ + 2 d²y/dt² − dy/dt − 2y(t) = f(t), y(0−) = 1, y′(0−) = 1, y″(0−) = 0.
(c) d²y/dt² + 2 dy/dt + y(t) = 2f(t), y(0−) = 1, y′(0−) = 1.
11.16 (a) Take the Laplace transform of the following ODE to determine Ŷ(s),
assuming f(t) = u(t), y(0−) = 1, and y′(0−) = 0. Determine y(t) for
t > 0 by taking the inverse Laplace transform of Ŷ(s).

d²y/dt² + 5 dy/dt + 4y(t) = df/dt + 2f(t).
(b) Repeat (a) for y(0−) = 0 and

3 dy/dt + 6y(t) = δ(t).

(c) Repeat (a) for y(0−) = 0 and

dy/dt − y(t) = e−t u(t).
11.17 The transfer function of a particular LTIC system is Ĥ(s) = Ŷ(s)/F̂(s) = 4/(s − 2).
Is the system asymptotically stable? Explain. Is the system BIBO stable?
Explain.
11.18 What are the resonance frequencies in a system with the transfer function

Ĥ(s) = s/((s + 1)(s² + 4)(s² + 25))?

Is the system marginally stable? BIBO stable? Explain.
11.19 Determine the zero-state response y(t) = h(t) ∗ f(t) of the marginally stable
system

Ĥ(s) = 1/((s² + 4)(s² + 9))

to an input f(t) = cos(2t)u(t).
11.20 (a) Determine the transfer function and characteristic modes of the circuit
shown below, assuming that Ca = Cb = 1/2 F, R = 1.5 Ω, and L = 1/2 H.

[Circuit: source f(t), capacitor Ca with voltage v(t), inductor L with current i(t), capacitor Cb, resistor R, output y(t)]
(b) Given that v(0−) = 1 V and i(0−) = 0.5 A, and using the element
values given in part (a), determine y(t) for t > 0 in the same circuit with
the source set to f(t) = 0.
11.21 Consider the following circuit, which is in DC steady-state until the switch
is opened at t = 0:

[Circuit: 4 V source, switch (opens at t = 0), two 2 Ω resistors, 1 F capacitor with voltage v(t), 2 H inductor with current i(t)]
(a) Determine i(0−) and v(0−).
(b) Determine the characteristic modes of the circuit to the right of the
switch for t > 0.
(c) Determine i(t) for t > 0.
11.22 In the circuit:

[Circuit: source f(t), switch (t = 0), 1 Ω resistor, 1 F capacitor with output y(t), 1 H inductor with current i(t)]
(a) Determine the transfer function Ĥ(s) = Ŷ(s)/F̂(s) for t > 0.
(b) Determine the zero-state response for t > 0 if f(t) = e3t.
(c) Determine the zero-input response for t > 0 if y(0−) = 1 V and i(0−) = 0.
11.23 Consider the circuit:

[Circuit: source f(t), switch (t = 0), two 2 Ω resistors, 1 H inductor with current i(t), 1 F capacitor with output y(t)]
(a) Show that the transfer function of the circuit for t > 0 is Ĥ(s) = Ŷ(s)/F̂(s) = s/(4s² + 5s + 2).
(b) What are the characteristic modes of the circuit?
(c) Determine y(t) for t > 0 if f(t) = 1 V, y(0−) = 1 V, and i(0−) = 0.
11.24 The system shown below can be implemented as a cascade of two 1st-order
systems Ĥ1(s) and Ĥ2(s). Identify the possible forms of Ĥ1(s) and Ĥ2(s).

[Block diagram: input F̂(s) drives blocks 1/(s + 2), 2, and s/(s + 1), whose outputs are summed into Ŷ(s)]
11.25 Determine the impulse response h(t) of the system shown below. Also determine
whether the system is BIBO stable.

[Block diagram: input F̂(s), blocks s/(s + 1) and 1/(s + 1), and an adder producing Ŷ(s)]
11.26 Determine the transfer function Ĥ(s) of the system shown below. Also determine
whether the system is BIBO stable.

[Block diagram: input F̂(s), blocks 1/(s + 1) and s/(s + 1), an adder, and a block 2/s, output Ŷ(s)]
11.27 Determine the transfer function Ĥ(s) of the system shown below and determine
for which K the system is BIBO stable if
(a) Ĥf(s) = 1/(s + K).
(b) Ĥf(s) = s + K.

[Feedback block diagram: input F̂(s), forward block 1/(s + 1) with an adder, feedback block Ĥf(s), output Ŷ(s)]
11.28 Consider a system with transfer function Ĥ(s) = 3/(s + 1 − j2). Draw a block
diagram that implements this transfer function. Individual blocks in the
diagram may denote addition of real-valued signals, amplification by real
values, and integration of real-valued signals. Your diagram should contain
no complex numbers. Hints: 1) The transfer function Ĥ(s) = a/(s + b) corresponds
in the time domain to the equation dy/dt + by(t) = af(t) or, equivalently,
y(t) = −b ∫ y(τ)dτ + a ∫ f(τ)dτ. So, it is easy to draw a block
diagram of this system. 2) In this homework exercise, you may assume that
f(t) is real-valued, but y(t) is complex-valued, which means that y(t) is
a pair of real-valued signals. Your diagram will need to show yR(t) and
yI(t), the real and imaginary parts of y(t), separately. 3) Multiplication of
a complex signal by a complex number involves four real multiplications.
Addition of complex signals requires two real additions.
12
Analog Filters and
Low-Pass Filter Design
12.1 IDEAL FILTERS: DISTORTIONLESS AND NONDISPERSIVE 427
12.2 1ST- AND 2ND-ORDER FILTERS 430
12.3 LOW-PASS BUTTERWORTH FILTER DESIGN 437
EXERCISES 447
In earlier chapters we focused our efforts on the analysis of LTIC circuits and systems.
For example, given a circuit, we learned how to find its transfer function and its
corresponding frequency response. In this final chapter, we consider the other side of
the coin, the problem of design. For instance, suppose we wish to create a filter whose
frequency response approximates that of an ideal low-pass shape with a particular
cutoff frequency. How can this be accomplished? The solution lies in answering two
questions. First, how do we choose the coefficients in the transfer function? And,
second, given the coefficients of a transfer function, how do we build a circuit having
this transfer function (and, therefore, approximating the desired frequency response)?
We will tackle these questions in reverse order, but before doing so, in Section
12.1 we first describe the desired frequency response characteristics for ideal filters.
Section 12.2 then examines how closely the ideal characteristics can be approximated
by 1st- and 2nd-order filters and describes how to implement such low-order filters
using op-amps. This section also includes discussion of a system design parameter
known as Q and its relation to frequency response and pole locations of 2nd-order
systems. Section 12.3 tackles the important problem of designing the filter transfer
function for higher-order filters, describing a method for designing one common
class of filters called Butterworth. The higher-order filters then can be implemented
as cascade or parallel combinations of op-amp first- and second-order circuits.
Section 12.1 Ideal Filters: Distortionless and Nondispersive 427
12.1 Ideal Filters: Distortionless and Nondispersive

Many real-world applications require filters that pass signal content in one frequency
band, with no distortion (other than a possible amplitude scaling and signal delay),
and that attenuate (reduce in amplitude) all signal content outside of this frequency
band. The sets of frequencies where signal content is passed and attenuated are called
the passband and stopband, respectively.
Figures 12.1a through 12.1c depict the magnitude and angle variations of three
such ideal frequency response functions H (ω) = Ĥ (j ω). Linear systems with the
frequency response curves shown in Figures 12.1a through 12.1c are known as distor-
tionless low-pass, band-pass, and high-pass filters, respectively.
To understand this terminology, consider the low-pass filter with frequency
response

H(ω) = K rect(ω/(2Ω)) e−jωto,
represented in Figure 12.1a. This filter responds to a low-pass input f (t) ↔ F (ω) of
bandwidth less than Ω with an output y(t) = Kf(t − to), which is a time-delayed and
amplitude-scaled but undistorted replica of the input. In fact, all three filters shown
in Figures 12.1a through 12.1c produce delayed and scaled, but undistorted, copies
of their inputs within their respective passbands because of the filters’ flat-amplitude
Figure 12.1 Sketches of the amplitude and phase responses of distortionless
filters: (a) low-pass, (b) band-pass, and (c) high-pass. Note that each filter has a
flat amplitude |H(ω)| within its passband (frequency band with nonzero |H(ω)|)
and makes step transitions from passband to stopband (where |H(ω)| = 0).
428 Chapter 12 Analog Filters and Low-Pass Filter Design
It so happens that none of these distortionless filters can be exactly implemented
as circuits, because each of the filters is associated with a noncausal impulse response
h(t) ↔ H(ω). For instance, the impulse response corresponding to the distortionless
low-pass filter is

h(t) = (KΩ/π) sinc(Ω(t − to)) ↔ H(ω) = K rect(ω/(2Ω)) e−jωto.

No physical circuit can have such an impulse response that begins prior to t = 0.
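The noncausality is easy to see numerically: the sinc impulse response is nonzero well before t = 0. The sketch below (added, not part of the original text; K, Ω, and to are arbitrary example values) evaluates h(t) at a few negative times:

```python
# Noncausality of the ideal low-pass impulse response
# (an added sketch, not from the text; K, Omega, t_o are arbitrary).
import numpy as np

K, Omega, t_o = 1.0, 1.0, 5.0
t = np.array([-3.0, -1.0, -0.25])        # times before the impulse at t = 0

# With sinc(x) = sin(x)/x as in the text, and np.sinc(x) = sin(pi x)/(pi x):
h = (K*Omega/np.pi)*np.sinc(Omega*(t - t_o)/np.pi)

noncausal = bool(np.all(np.abs(h) > 1e-6))   # response exists for t < 0
```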
Given that distortionless filters are unrealizable, we must accept some amount of
signal distortion in real-world filtering operations. Fortunately, practical analog filters
can be designed to approximate the distortionless filter characteristics of Figure 12.1
as closely as we like, and such filters can be built using analog circuits. Practical filter
design involves determining a realizable and stable (with only LHP poles) transfer
function Ĥ(s) that provides

(1) As flat an amplitude response |Ĥ(jω)| as “needed” within a desired passband,
(2) As fast a “decay” of |Ĥ(jω)| as needed outside the passband, and
(3) A phase response ∠Ĥ(jω) that is as linear as needed within the desired passband.

The “needs” may vary from application to application, and filter design may
require compromises, because the three design goals above generally are in conflict
to some degree. Tighter specifications related to the design goals generally lead to
the requirement of a filter transfer function Ĥ(s) having higher order, which in turn
requires a more complex circuit implementation.
As indicated above, we usually seek phase linearity (corresponding to signal
delay, but no distortion) only within the passband. This is sufficient, because high-quality
low-pass, band-pass, and high-pass filters pass very little signal energy within
the stopband, and therefore distortion of signal components lying in the stopband is
of little concern. Moreover, as we shall see below, we need not require that the linear
phase characteristic pass through the origin (i.e., ∠H(ω) = −ωto).
Overall, there are two types of phase linearity that we will find acceptable. The
first is true linear phase, where ∠H (ω) = −ωto , for −∞ < ω < ∞, corresponding
to a distortionless filter introducing signal delay to . The second form of linear phase
requires that ∠H (ω) be linear only for ω within the passband of the filter (and the phase
characteristic need not pass through the origin). Such filters are called nondispersive
(rather than distortionless). To illustrate the difference, let
h1(t) ↔ H1(ω) = rect(ω/(2Ω)) e−jωto
denote the impulse response of the distortionless low-pass filter having magnitude and
phase responses depicted in Figure 12.1a (K = 1 case) and repeated in Figure 12.2a.
Let
h2(t) = h1(t) cos(ωc t) ↔ H2(ω) = (1/2)H1(ω − ωc) + (1/2)H1(ω + ωc)
Figure 12.2 (a) A distortionless low-pass filter, and (b) a nondispersive
band-pass filter.
(remember the Fourier modulation property) denote a band-pass filter having magni-
tude and phase responses depicted in Figure 12.2b. Clearly, H2 (ω) is a band-pass filter
with a linear phase variation within its passband, but the phase curve in Figure 12.2b
is different from the phase curve of the distortionless band-pass filter shown in
Figure 12.1b. We refer to the filter in Figure 12.2b as nondispersive. Now, how is
the output signal of a nondispersive filter different from the output of a distortionless
filter?
To answer this question, consider the response of the nondispersive band-pass
filter

H(ω) = rect((ω − ωc)/(2Ω)) e−j(ω−ωc)to + rect((ω + ωc)/(2Ω)) e−j(ω+ωc)to
depicted in Figure 12.3 to an AM signal
f(t) = m(t) cos(ωc t) ↔ F(ω) = (1/2)M(ω − ωc) + (1/2)M(ω + ωc),
where m(t) ↔ M(ω) is a low-pass signal with the triangular Fourier transform shown
in the same figure. Also shown in the figure is the Fourier transform of the filter output,
Y(ω) = H(ω)F(ω) = (1/2)e−j(ω−ωc)to M(ω − ωc) + (1/2)e−j(ω+ωc)to M(ω + ωc).
2 2
You should be able to confirm that this expression is the Fourier transform of the func-
tion m(t − to ) cos(ωc t). Therefore, the nondispersive filter responds to the band-pass
AM input f (t) = m(t) cos(ωc t) with the output y(t) = m(t − to ) cos(ωc t), which is
different from f (t − to ), which would be the response of a distortionless band-pass
filter to the same input signal. Evidently, a nondispersive1 filter delays only the enve-
lope of an AM signal, while a distortionless band-pass filter delays the entire signal.
1 The term nondispersive refers to the fact that the signal envelope is not distorted when the filter phase
variation is a linear function of ω across the passband. Deviations from phase linearity generally cause
Figure 12.3 Frequency-domain representations of a low-pass signal
m(t) ↔ M(ω), the band-pass AM signal m(t) cos(ωc t) ↔ 0.5M(ω − ωc) + 0.5M(ω + ωc),
a nondispersive band-pass filter h(t) ↔ H(ω), and the filter response y(t) ↔ Y(ω)
with y(t) = h(t) ∗ [m(t) cos(ωc t)] = m(t − to) cos(ωc t) due to the AM input.

Since both types of filters with linear phase variation preserve the envelope integrity
of the input, they both are compatible with the filter design goals stated above.
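The phase-delay/group-delay distinction discussed above can be checked numerically. The sketch below uses assumed values to = 1 ms and ωc = 2π·10⁴ rad/s (both hypothetical) and differentiates the nondispersive passband phase φ(ω) = −(ω − ωc)to:

```python
import math

t_o = 1e-3              # envelope delay in seconds (assumed value)
w_c = 2*math.pi*1e4     # carrier frequency in rad/s (assumed value)

def phase(w):
    # passband phase of the nondispersive filter: -(w - w_c)*t_o
    return -(w - w_c)*t_o

dw = 1.0                # small frequency step for a centered difference
group_delay = -(phase(w_c + dw) - phase(w_c - dw))/(2*dw)  # -dφ/dω
phase_delay = -phase(w_c)/w_c                              # -φ(ω_c)/ω_c

print(group_delay)  # equals t_o: the envelope is delayed by t_o
print(phase_delay)  # equals 0: the carrier is not delayed at ω_c
```

The group delay equals to (the envelope delay), while the phase delay at the carrier is zero, consistent with the output m(t − to) cos(ωc t) found above.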

12.2 1st- and 2nd-Order Filters

Suppose we wish to design a filter having a frequency response approximating the
ideal characteristics outlined in the previous section (flat response in the passband,
steep drop-off in the stopband, and linear phase across the passband). In particular,
suppose we wish to design a low-pass filter.
We might begin by contemplating the simple 1st-order RC circuit shown in
Figure 12.4, which is a replica of Figure 5.1. Obviously, the amplitude and phase
characteristics of H (ω), shown in Figures 12.4c and 12.4d, are far from being close
approximations of the ideal features of distortionless low-pass filters. Although H (ω)
may be “good enough” in certain situations, most applications would require an
amplitude response that is more rectangular or a phase response that is more linear.
If a 1st-order circuit is not up to the job, then we must explore higher-order circuit
options, i.e., more complex circuits with higher-order Ĥ (s). If Ĥ (s) is a high-order
transfer function, then it can be realized in straightforward fashion by rewriting Ĥ (s)

Figure 12.4 (a) A 1st-order low-pass filter circuit (R = 1 Ω, C = 1 F), (b) its phasor representation, (c) the amplitude
response |H(ω)|, and (d) the phase response ∠H(ω) (in degrees).

as a product of second-order (and possibly first-order) sections and then implementing
Ĥ (s) as a cascade of the corresponding low-order sections. The actual implementation
is accomplished most easily using a cascade of active op-amp filter sections, because
an op-amp draws essentially no current from the remainder of the circuit (the current
is supplied by the op-amp power supply). Thus, each interconnected op-amp filter
section continues to function as it would in isolation, so that the transfer function of
the entire cascade system is simply the product of the individual transfer functions,
which is Ĥ (s), as desired.

12.2.1 Active op-amp filters


Figures 12.5a and 12.5b show two low-pass op-amp filter circuits with 1st- and 2nd-
order transfer functions
Ĥa(s) = (K/RC) / (s + 1/RC)

and

Ĥb(s) = Kωo² / (s² + 2αs + ωo²),

respectively, where (as shown in Exercise Problems 12.1 and 12.2)

K = 1 + R1/R2 ,   ωo = 1/√(R3 R4 C1 C2) ,
Figure 12.5 (a) 1st-order low-pass active filter circuit, and (b) 2nd-order low-pass active filter circuit
known as the Sallen-Key circuit.

and

α = 1/(2R3C1) + 1/(2R4C1) + (1 − K)/(2R4C2).

Both circuits are BIBO stable, with low-pass frequency responses

Ha,b (ω) = Ĥa,b (j ω)

that can be controlled by capacitance and resistance values in the circuits. The op-amps
in the circuits provide a DC amplitude gain K ≥ 1 and the possibility, as discussed
above, of cascading similar active filter circuits in order to assemble higher-order filter
circuits having more ideal frequency response functions H (ω) than either Ha (ω) or
Hb (ω).
Figure 12.6a illustrates the amplitude response curves |HQ(ω)| of three different
versions of the 2nd-order filter shown in Figure 12.5b, labeled by distinct values of
Q = ωo/2α. These responses are to be contrasted with |H(ω)| of Figure 12.6b describing
the cascade combination of the same three systems. The phase response ∠H (ω) of
the cascaded system also is shown in Figure 12.6b. Clearly, the cascade idea seems to
provide a simple means of obtaining practical filter characteristics approaching those
of ideal filters. In Section 12.3 we will learn a method for designing high-order transfer
functions Ĥ (s) that can be implemented as a cascade of 2nd-order (and possibly first-
order) sections to produce high-order op-amp filters having properties approximating
the ideal. We close this section with a discussion of the Q parameter introduced above
and its relation to pole locations and frequency responses of dissipative 2nd-order
systems.

Figure 12.6 The amplitude response of (a) 2nd-order low-pass filters with
ωo = 1 rad/s and Q ≡ ωo/2α = 1.93, 0.707, and 0.52, and (b) a 6th-order filter obtained
by cascading (multiplying) the filter curves shown in (a); the phase response of the
6th-order filter also is shown in (b).

12.2.2 2nd-order systems and Q


Characteristic polynomials of stable 2nd-order LTIC systems (for which the realizable
circuit of Figure 12.5b is an important example) can be expressed as

P(s) = (s − p1)(s − p2) = s² + 2αs + ωo²,

where p1 and p2 denote characteristic system poles confined to the LHP. The
parameters ωo = √(p1 p2) and 2α ≡ −(p1 + p2) are real and positive coefficients
having the ratio ωo/2α denoted as Q.
The zero-input response of such systems takes underdamped, critically damped,
or overdamped forms illustrated in Table 12.1a, depending on the relative values
of ωo and α, known as the undamped resonance frequency and damping coefficient,
respectively. Also, depending on the values of ωo and α, the poles p1 and p2 either are
both real valued (note the pole locations depicted in Table 12.1a) or form a complex
conjugate pair (i.e., p2 = p1*, specifically in underdamped systems). The parameter

Q ≡ ωo/2α

is called the quality factor, and it has a number of useful interpretations, summarized
in Table 12.1b, which are discussed below.
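The damping classification can be automated. The sketch below (plain Python, with arbitrary sample values for α and ωo) finds the poles of P(s) = s² + 2αs + ωo² and reports the response type and Q:

```python
import cmath

def classify(alpha, w_o):
    """Damping type and Q for P(s) = s^2 + 2*alpha*s + w_o^2."""
    root = cmath.sqrt(complex(alpha*alpha - w_o*w_o))
    p1, p2 = -alpha + root, -alpha - root      # characteristic poles
    Q = w_o/(2*alpha)
    if w_o > alpha:
        kind = "underdamped"                   # complex conjugate poles
    elif w_o == alpha:
        kind = "critically damped"             # repeated real pole
    else:
        kind = "overdamped"                    # distinct real poles
    return kind, Q, (p1, p2)

print(classify(2, 20))    # underdamped with Q = 5
print(classify(20, 20))   # critically damped with Q = 0.5
print(classify(50, 20))   # overdamped with Q = 0.2
```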

(a)

Underdamped (complex conjugate poles, ωo > α):

y(t) = Ae^(−αt) cos(√(ωo² − α²) t + θ)

Critically damped (repeated real poles, ωo = α):

y(t) = e^(−αt)(A + Bt)

Overdamped (distinct real poles, ωo < α):

y(t) = e^(−αt)(Ae^(−√(α² − ωo²) t) + Be^(√(α² − ωo²) t))

(b) Interpretations of the quality factor Q = ωo/2α (poles at s = −α ± √(α² − ωo²)):

Q = (center frequency)/(3-dB bandwidth): band-pass filter Ĥ(jω)
Q ≈ 2π (stored energy)/(energy dissipated per period): energy interpretation
Q ≈ countable cycles of oscillations: underdamped response

Table 12.1 (a) Zero-input response types and examples for dissipative 2nd-order systems with a characteristic
polynomial P(s) = s² + 2αs + ωo², where constants A, B, and θ depend on initial conditions and ×’s
mark the locations of characteristic poles, and (b) quality factor with interpretations.

Figure 12.7 A 2nd-order series RLC band-pass filter (L = 1 H, C = 10⁻⁴ F, with the output y(t) taken across R).

When a 2nd-order system is used to implement a band-pass filter, such as the
circuit shown in Figure 12.7 having the transfer function

Ĥ(s) = Ŷ(s)/F̂(s) = R / (s + 1/(10⁻⁴ s) + R) = Rs / (s² + Rs + 10⁴),

the quality factor Q turns out to be the ratio of the center frequency ωo to the 3-dB
bandwidth 2α of the filter. To verify this, we first note that in the above transfer
function P(s) = s² + Rs + 10⁴, 2α = R, and ωo = 10², so that the corresponding
filter frequency response is

H(ω) = Ĥ(jω) = 2αω / (2αω + j(ω² − ωo²)).

The peak amplitude response is then |H(ωo)| = 1 at the center frequency ω = ωo,
as shown in Figure 12.8 for Q = 5 (underdamped), 0.5 (critically damped), and 0.3
(overdamped). We can calculate the 3-dB bandwidth of the same filter as Ω = ωu −
ωl, where ωu,l are 3-dB frequencies satisfying

|H(ωu,l)|² = 1/2.

For ωu > ωo this requires ωu² − ωo² = 2αωu, implying

ωu = α + √(α² + ωo²),

Figure 12.8 The amplitude response |H(ω)| of a 2nd-order band-pass filter with
Q = ωo/2α = 5, 0.5, and 0.3.

while for 0 < ωl < ωo, we obtain

ωl = −α + √(α² + ωo²).

Thus, the 3-dB bandwidth of the filter is

Ω = ωu − ωl = 2α.

Therefore, as indicated in the first row of Table 12.1b, Q = ωo/2α is the center-frequency-
to-bandwidth ratio of the filter. Notice that ωu and ωl are not equidistant from ωo except
in the limit for very large Q.
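These 3-dB frequency formulas are easy to spot-check numerically (a sketch with arbitrary sample values α = 3 and ωo = 100, in rad/s):

```python
import math

alpha, w_o = 3.0, 100.0

def mag2(w):
    # |H(w)|^2 for H(w) = 2*alpha*w / (2*alpha*w + j(w^2 - w_o^2))
    return (2*alpha*w)**2 / ((2*alpha*w)**2 + (w*w - w_o*w_o)**2)

w_u = alpha + math.sqrt(alpha*alpha + w_o*w_o)    # upper 3-dB frequency
w_l = -alpha + math.sqrt(alpha*alpha + w_o*w_o)   # lower 3-dB frequency

print(mag2(w_u), mag2(w_l))   # both 1/2, i.e., the half-power points
print(w_u - w_l)              # the 3-dB bandwidth, 2*alpha = 6.0
```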
The remaining interpretations of Q given in Table 12.1b have diagnostic value
in the case of high Q (i.e., Q ≫ 1), irrespective of system details (for electrical,
mechanical, or any type of system modeled by a second-order transfer function). To
understand the energy interpretation given in the second row of Table 12.1b, we first
note that the envelope of the instantaneous power |y(t)|² for underdamped y(t) (see
Table 12.1a) decays with time constant

τd = 1/(2α),

while

T = 2π/√(ωo² − α²) ≈ 2π/ωo

is the oscillation period of y(t) for high Q. Consequently, we have

Q = ωo/2α ≈ 2π τd/T = 2π (τd |y(t)|e²/2) / (T |y(t)|e²/2),

where |y(t)|e ≡ Ae^(−αt) is the envelope function of y(t), T|y(t)|e²/2 in the denominator
is the same as ∫_t^(t+T) |y(τ)|² dτ, the energy dissipated in one underdamped oscillation
period T, and τd|y(t)|e²/2 in the numerator is equivalent to ∫_t^∞ |y(τ)|² dτ (see Exercise
Problem 12.5), the stored energy in the system at time t (can you see why?). Hence,
the energy interpretation of Q follows as indicated in Table 12.1b.
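The energy interpretation can be tested by brute-force numerical integration. The sketch below (pure Python, with sample high-Q values ωo = 200 rad/s and α = 1 s⁻¹, so that Q = 100) compares 2π × (stored energy)/(energy per period) against ωo/2α; for finite Q the agreement is only approximate:

```python
import math

w_o, alpha = 200.0, 1.0            # Q = w_o/(2*alpha) = 100
T = 2*math.pi/w_o                  # oscillation period (high-Q approximation)

def y(t):
    return math.exp(-alpha*t)*math.cos(w_o*t)

dt = T/2000.0
# energy dissipated in one period: integral of |y|^2 from 0 to T
dissipated = sum(y(k*dt)**2 for k in range(2000))*dt
# stored energy at t = 0: integral of |y|^2 from 0 to "infinity"
N = int(10/alpha/dt)               # integrate far past the decay time 1/(2*alpha)
stored = sum(y(k*dt)**2 for k in range(N))*dt

Q_est = 2*math.pi*stored/dissipated
print(Q_est)                       # within a few percent of w_o/(2*alpha) = 100
```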

Example 12.1
Suppose a 2nd-order system with an oscillation period of 1 s dissipates 1
J/s when its stored energy is 10 J. Approximate the damping coefficient α
of the system.
Solution Based on the energy interpretation of Q given above,

Q ≈ 2π × (10 J)/(1 J) = 20π ≈ 63.

Clearly, this is a high-Q system and therefore the oscillation frequency
√(ωo² − α²) ≈ ωo = 2π/(1 s) = 2π rad/s. Hence,

α = ωo/2Q = 2π/(2(20π)) = 1/20 = 0.05 s⁻¹.

Finally, for Q ≫ 1, once again, we can write

Q = ωo/2α ≡ π/(αTo) ≈ (π/α)/T,

where T ≈ To = 2π/ωo is the period of underdamped oscillations in y(t). Thus Q can
be interpreted as the number of oscillation cycles of y(t) occurring during a time
interval π/α. Since the underdamped y(t) has an envelope proportional to e^(−αt), and
y(π/α)/y(0) = e^(−α(π/α)) = e^(−π) ≈ 4.3%, this duration π/α is just about the “lifetime” of the
underdamped response (before the amplitude gets too small to notice). This justifies
the count interpretation of Q given in the bottom row of Table 12.1b.

Example 12.2
A steel beam oscillates about 30 times before the oscillation amplitude is
reduced to a few percent of the initial amplitude. What is the system Q and
what is the damping rate if the oscillation period is 0.5 s?
Solution Q ≈ 30, and therefore

α = ωo/2Q ≈ (2π/0.5)/(2 · 30) ≈ 0.2 s⁻¹.

Before closing this section, it is worthwhile to point out that the limiting case
with Q → ∞ corresponds to a dissipation-free resonant circuit (a straightforward
application of the energy interpretation of Q). Thus, high-Q systems are considered
to be “near resonant.” In the following section we will see that such near-resonant
circuits are needed to build high-quality low-pass filters.

12.3 Low-Pass Butterworth Filter Design

Consider the low-pass amplitude response function


|H(ω)| = 1/√(1 + (ω/Ω)^(2n)),

where Ω > 0 and n ≥ 1. Plots of |H(ω)|² for n = 1, 2, 3, and 6 and Ω = 1 rad/s are
shown in Figure 12.9. Clearly, |H(ω)| above describes a filter with a 3-dB bandwidth

Figure 12.9 (a) The square of the amplitude response curves of 1st-, 2nd-, 3rd-, and 6th-order Butterworth
filters with a 3-dB bandwidth of Ω = 1 rad/s, and (b) decibel amplitude response plots of the same filters;
labels n indicate the filter order.

Ω (which equals 1 rad/s in Figure 12.9, where |H(1)|² = 1/2, i.e., −3 dB) and an amplitude
response approaching the ideal with increasing n.
response approaching the ideal with increasing n.


Stable low-pass filters with |H(ω)| as shown in Figure 12.9 are known as nth-
order Butterworth filters. We next will describe how high-order Butterworth transfer
functions Ĥ(s) can be designed having the desired Butterworth magnitude responses
|H(ω)| described above. The high-order transfer function Ĥ(s) can be implemented
by cascading 1st- and/or 2nd-order op-amp filter circuits as discussed earlier.

12.3.1 Finding the Butterworth Ĥ(s)

The first step in designing and building Butterworth circuits is finding a stable and
realizable transfer function Ĥ (s) that leads to the Butterworth amplitude response
|H (ω)| given above. That in turn amounts to selecting appropriate pole locations,
once the number of poles to be used is decided.
Given any stable LTIC system Ĥ (s) with a frequency response H (ω) = Ĥ (j ω),

|H (ω)|2 = H (ω)H ∗ (ω) = Ĥ (j ω)Ĥ ∗ (j ω) = Ĥ (j ω)Ĥ (−j ω).

Hence, for a Butterworth circuit with

|H(ω)|² = 1/(1 + (ω/Ω)^(2n)) = 1/(1 + (jω/jΩ)^(2n)) = Ĥ(jω)Ĥ(−jω),

the corresponding transfer function Ĥ(s) is constrained as

Ĥ(s)Ĥ(−s) = 1/(1 + (s/jΩ)^(2n)).

This equation indicates that an nth-order Butterworth circuit must have a transfer
function Ĥ(s) with n characteristic poles, because the product Ĥ(s)Ĥ(−s) has 2n
poles corresponding to the 2n solutions of

1 + (s/jΩ)^(2n) = 0.

After determining the poles of Ĥ(s)Ĥ(−s) (by solving the equation above), we will
assign the n of them lying in the LHP2 to the Butterworth transfer function Ĥ(s). As
we will see in Example 12.3 below, once the characteristic poles of the Butterworth
Ĥ(s) are known, Ĥ(s) itself is easy to write down.
To determine the poles of Ĥ(s)Ĥ(−s) we rearrange the equation above as

(s/jΩ)^(2n) = −1 = e^(jmπ),

where m is any odd integer ±1, ±3, ±5, ···. Taking the “2nth root” of both sides,

s = jΩ e^(jmπ/2n) = Ω∠(90° + 90° m/n) = Ω∠90°(1 + m/n).
For m positive and odd, m = 1, 3, ··· < 2n, this result gives the locations of n distinct
LHP poles of Ĥ(s)Ĥ(−s). The remaining n poles of Ĥ(s)Ĥ(−s), obtained with m
negative and odd, m = −1, −3, ··· > −2n, are all in the RHP. Furthermore, as
we will see below in Example 12.3, the complex LHP poles of Ĥ(s)Ĥ(−s) come in
conjugate pairs, as needed for Ĥ(s) with real-valued coefficients.
Thus, here is the main result:

The characteristic poles of a stable nth-order Butterworth filter with a
3-dB bandwidth Ω > 0 are uniformly spaced around a semicircle in the
LHP, having radius Ω, at locations

s = Ω∠90°(1 + m/n),   m = 1, 3, 5, ··· < 2n.

The following examples illustrate the applications of this result.

Example 12.3
Determine the pole locations of a 3rd-order Butterworth filter with 3-dB
bandwidth Ω = 10 rad/s. Also, determine Ĥ(s) so that the system DC gain
is 1.
2 There are many ways of selecting n of the 2n poles of Ĥ(s)Ĥ(−s) for Ĥ(s). However, for Ĥ(s) to
have real-valued coefficients, the selection must be made so that H(ω) = Ĥ(jω) is conjugate symmetric
and, furthermore, the selection should include only LHP poles to assure stability.

Solution Since n = 3 and Ω = 10, the pole locations of the circuit transfer
function are given by the formula

s = Ω∠90°(1 + m/n) = 10∠90°(1 + m/3),   m = 1, 3, 5 < 6.

Clearly, all three poles p1 = 10∠120°, p2 = 10∠180°, and p3 = 10∠240°
have magnitudes 10 and are positioned in the LHP around a semicircle as
shown in Figure 12.10a. The angular separation between the neighboring
poles on the semicircle is 180°/n = 180°/3 = 60°.

The transfer function of the filter with DC gain of 1 can be written as

Ĥ(s) = K × 1/(s − 10∠120°) × 1/(s − 10∠180°) × 1/(s − 10∠240°)

for some constant K to be determined. Since

Ĥ(0) = −(K/10³)∠−(120° + 180° + 240°) = K × 10⁻³,

it follows that

Ĥ(0) = H(0) = 1

requires

K = 10³ = Ω³.

Figure 12.10 The pole locations of (a) a 3rd-order Butterworth filter with 3-dB
frequency Ω = 10 rad/s, and (b) a 6th-order Butterworth filter with 3-dB frequency
Ω = 1 rad/s.

Example 12.4
How would the filter of Example 12.3 be implemented in a cascade config-
uration?
Solution The transfer function determined in Example 12.3, with K =
10³, can be expressed as

Ĥ(s) = [10/(s − 10∠120°)] × [10/(s + 10)] × [10/(s − 10∠−120°)]
     = [10/(s + 10)] × [10²/(s² + 10s + 10²)].

Hence, the system can be implemented by cascading a 1st-order system

Ĥ1(s) = 10/(s + 10)

(e.g., the active filter shown in Figure 12.5a) with a 2nd-order system

Ĥ2(s) = 10²/(s² + 10s + 10²)

(e.g., the active filter shown in Figure 12.5b).
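The factorization above can be spot-checked numerically at arbitrary points in the s-plane (a quick Python sketch):

```python
import cmath

def H1(s): return 10/(s + 10)               # 1st-order section
def H2(s): return 100/(s*s + 10*s + 100)    # 2nd-order section

def H(s):
    # product form with poles 10∠120°, 10∠180°, 10∠240° and K = 1000
    out = complex(1000)
    for a in (120, 180, 240):
        out /= (s - 10*cmath.exp(1j*cmath.pi*a/180))
    return out

for s in (3j, 2 + 5j, -1 + 0.5j):           # arbitrary test points
    assert abs(H1(s)*H2(s) - H(s)) < 1e-9
print("cascade of H1 and H2 matches the 3rd-order Butterworth H(s)")
```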

Example 12.5
How would the filter of Example 12.3 be implemented in a parallel config-
uration?
Solution Note that

Ĥ(s) = [10/(s + 10)] × [10²/(s² + 10s + 10²)] = 10/(s + 10) + (−10s)/(s² + 10s + 10²).

Hence, the system can be implemented with a parallel connection of a
low-pass filter

Ĥ1(s) = 10/(s + 10)

and a band-pass filter

Ĥ2(s) = −10s/(s² + 10s + 10²).
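The partial-fraction split behind the parallel form can likewise be verified numerically at a few sample points (sketch):

```python
def H_cascade(s):
    # cascade form of the 3rd-order Butterworth filter (Omega = 10)
    return (10/(s + 10))*(100/(s*s + 10*s + 100))

def H_parallel(s):
    # parallel (partial-fraction) form: low-pass plus band-pass branch
    return 10/(s + 10) - 10*s/(s*s + 10*s + 100)

for s in (1j, 3 + 4j, -2 + 1j):     # arbitrary complex test points
    assert abs(H_cascade(s) - H_parallel(s)) < 1e-12
print("parallel decomposition matches the cascade form")
```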

The pole locations of a 6th-order Butterworth low-pass filter with 3-dB band-
width Ω = 1 rad/s are shown in Figure 12.10b. Note that the poles are positioned on a
LHP semicircle with radius 1 and with 30° angular separation between neighboring
poles, which once again agrees with the angular spacing specified by the formula 180°/n.
Furthermore, the marked pole locations are the only possible locations in the LHP for
6 poles with 180°/6 = 30° angular separations, magnitudes Ω = 1, and complex conjugate
pairings. The 6th-order filter can be implemented by cascading three 2nd-order
low-pass filters. The frequency response magnitudes for the individual second-order
sections are shown in Figure 12.11a.

Figure 12.11 (a) The amplitude response |HQ(ω)| of 2nd-order low-pass filters
with a resonance frequency of ωo = 1 rad/s and Q = 1.93, 0.707, and 0.52,
respectively, (b) the amplitude response |H(ω)| of a 6-pole Butterworth filter
with a 3-dB bandwidth Ω = ωo = 1 rad/s obtained by cascading the three low-pass
filters shown in (a), (c) a surface plot of the magnitude of the transfer function
Ĥ(s) of the same 6-pole filter, and (d) the phase response ∠H(ω) of the same
filter.

12.3.2 Cascade implementation of Butterworth filters


In general, the transfer function of an nth-order Butterworth filter with a 3-dB band-
width Ω and unity DC gain can be expressed in cascade form as

Ĥ(s) = ∏_{m=1,odd}^{n−1} Ω² / (s² + 2Ω sin(πm/2n) s + Ω²)

for n even, or

Ĥ(s) = [Ω/(s + Ω)] × ∏_{m=1,odd}^{n−2} Ω² / (s² + 2Ω sin(πm/2n) s + Ω²)

for n odd. These expressions were obtained by noting that each pair of conjugate
poles contributes to Ĥ(s) a 2nd-order component

Ω² / [(s − Ω∠θm)(s − Ω∠−θm)],

where θm = 90°(1 + m/n) for the odd values of m appearing in the products above.
Multiplying out the denominator of this expression gives the more compact forms
above (see Exercise Problem 12.8).
Note that the 2nd-order cascade components above correspond to underdamped
systems with undamped resonance frequency

ωo = Ω,

damping coefficients

α = Ω sin(πm/2n) < ωo,

and quality factors

Q = ωo/2α = 1/(2 sin(πm/2n)) > 1/2.
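For example, for n = 6 the three section quality factors quoted in Figure 12.11a follow directly from this formula (a quick check):

```python
import math

n = 6
# section quality factors Q = 1/(2*sin(pi*m/(2*n))) for odd m
Qs = [1/(2*math.sin(math.pi*m/(2*n))) for m in (1, 3, 5)]
print([round(Q, 3) for Q in Qs])   # [1.932, 0.707, 0.518]
```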

Figure 12.11a shows the amplitude response curves of three 2nd-order cascade
components (Q = 1.93, 0.707, and 0.52) of a 6th-order Butterworth filter with a 3-dB
bandwidth Ω = 1 rad/s. The amplitude and phase response of the 6th-order—or the
“6-pole”—filter are shown in Figures 12.11b and 12.11d, respectively.
Notice that the amplitude response curves of the three 2nd-order low-pass filters
shown in Figure 12.11a are far from ideal (non-flat, rippled for higher Q, sluggish
decay for low Q). However, their product—which is the amplitude response of the 6th-
order Butterworth filter obtained by cascading the three filters—is flat almost all the
way up to the 3-dB bandwidth frequency 1 rad/s and drops reasonably fast beyond that.
Also, the accompanying phase response curve shown in Figure 12.11d is reasonably
linear up to the 3-dB frequency. Higher-order Butterworth filters will do even better
in all these respects.
The Butterworth transfer function surface plot shown in Figure 12.11c clearly
indicates the locations and distributions of the six LHP system poles of Ĥ (s) (same
locations as in Figure 12.10b)—the poles are arranged around a semi-circle as expected.
The pole-pair closest to the ω-axis is the contribution of the highest-Q subcomponent
of the cascade; the same poles also are responsible for the peaks in the amplitude
response curve of the highest-Q subsystem shown in Figure 12.11a. By contrast, the

pole-pair furthest away from the ω-axis (and closest to the σ -axis) is the contribution
of the lowest-Q subcomponent of the system with the narrowest frequency response
curve shown in Figure 12.11a.

Example 12.6
A 5th-order (or a 5-pole) Butterworth filter with a 3-dB frequency of 5 kHz
needs to be built using a cascade configuration. Choose the appropriate
capacitor and resistance values C1 , C2 , R1 , R2 , R3 , and R4 for the highest-
Q subcomponent of the cascade assuming that the 2nd-order active filter
shown in Figure 12.5b will be used.
Solution A 5 kHz 3-dB frequency means that we have

Ω = 2π(5000) = 10⁴π rad/s.

The quality factors of the subcomponent circuits will be Q = 1/(2 sin(πm/2n)),
with n = 5 and m = 1, 3. The higher Q, obtained with m = 1, is

Q = 1/(2 sin(π/10)) = 1.618.

Hence, with ωo = Ω = 10⁴π and Q = 1.618, the damping coefficient of
the highest-Q subcomponent circuit must be

α = ωo/2Q = 10⁴π/3.236 ≈ 9708.1.

Finally, the required DC amplification of the circuit is K = 1 (since a Butterworth
filter has a DC gain of 1).
Therefore, the element values of the circuit in Figure 12.5b need to
satisfy the constraints

K = 1 + R1/R2 = 1,
ωo = 1/√(R3 R4 C1 C2) = 10⁴π,

and

α = 1/(2R3C1) + 1/(2R4C1) + (1 − K)/(2R4C2) = 9708.1.

Clearly, the first constraint allows for R1 = 0 and R2 = ∞ (i.e., the
voltage-follower circuit), and the formula for α simplifies to

1/(2R3C1) + 1/(2R4C1) = 9708.1.

Suppose we decide to use R3 = R4 = 1 kΩ. Then the constraint
equation for α can be solved for C1 as

C1 = 1/(1000 × 9708.1) = 1.03 × 10⁻⁷ F = 103 nF.

Substituting R3 = R4 = 1 kΩ and C1 = 103 nF into the constraint
equation for ωo, we find

C2 = 1/(10⁸π² × 10³ × 10³ × 1.03 × 10⁻⁷) = 9.84 × 10⁻⁹ F = 9.84 nF.

In summary, R3 = R4 = 1 kΩ, C1 ≈ 100 nF, C2 ≈ 10 nF, R1 = 0, and
R2 = ∞ give one possible solution of the filter design problem. Other values
for C1 and C2 also can be chosen by starting with different values of R3
and R4.
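The computed element values can be double-checked against the ωo and α constraints (a sketch; the small mismatch with 10⁴π comes from rounding C1 and C2):

```python
import math

R3 = R4 = 1000.0     # chosen resistances, ohms
C1 = 1.03e-7         # ~103 nF, from the alpha constraint
C2 = 9.84e-9         # ~9.84 nF, from the w_o constraint

w_o = 1/math.sqrt(R3*R4*C1*C2)
alpha = 1/(2*R3*C1) + 1/(2*R4*C1)   # K = 1, so the (1 - K) term drops out

print(w_o)      # close to 10^4*pi ≈ 31416 rad/s
print(alpha)    # close to the design target 9708.1
```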

12.3.3 Filters other than Butterworth low-pass


Clearly, in cascading several high- and low-Q filters to obtain improved filter char-
acteristics, it is not necessary to follow the Butterworth design formula. A 6th-order
Butterworth filter, for instance, is just one particular way of cascading three 2nd-order
filters to obtain a nice 6th-order filter. Other cascade possibilities—corresponding to
alternate pole locations in the LHP—exist, some of which may produce filter charac-
teristics that are better suited for some applications.
One virtue of Butterworth filters is that they are “optimal” in the sense that,
for a given filter order n, their amplitude response is maximally flat, meaning that
the amplitude response curve has the maximum number of zero-valued derivatives
at ω = 0 and ω = ±∞ among all possible amplitude response curves that can be
obtained from rational-form transfer functions of a given order.
In other respects Butterworth filters are not optimal. For instance, for a given filter
order n there are other low-pass filters (less flat and possibly “rippled” in the passband,
in the stopband, or both) with faster amplitude decay from passband to stopband. In
particular, “elliptic” filters offer the steepest decay from passband to stopband for a
filter of a given order, but with a magnitude response that exhibits ripple (oscillatory
behavior) in both the passband and stopband. The phase response of an elliptic filter
tends to be somewhat less linear than that for a Butterworth filter. Another type of
filter, called Chebyshev, is available in two types, one exhibiting ripple only in the
passband and the other having ripple only in the stopband. In terms of decay rate
and phase linearity, Chebyshev filters offer characteristics that are between those of
Butterworth and elliptic filters.
Whereas the design of an nth-order Butterworth filter begins with

|H(ω)| = 1/√(1 + (ω/Ω)^(2n)),

the amplitude responses of nth-order Chebyshev and elliptic filters replace

(ω/Ω)^(2n)

with nth-order Chebyshev polynomials and Jacobi elliptic functions, respectively.


Tedious algebra is associated with the design of Chebyshev and elliptic filters, as well
as with the design of high-order Butterworth filters, so filter design virtually always is
performed using one of the standard computer software packages for signal processing
design and simulation, such as Matlab.
High-pass and band-pass filters generally are designed by first designing a low-
pass transfer function and then transforming it to the transfer function of either a
high-pass or band-pass filter. The transformation is accomplished by a simple change
of variable. For example, if Ĥ (s) is the transfer function of a low-pass filter with
passband cutoff frequency Ω, then a high-pass filter with passband cutoff frequency
ωh can be obtained by simply replacing s in Ĥ(s) with

Ωωh/s.

Similarly, a band-pass filter with lower passband cutoff frequency ωl and upper passband
cutoff frequency ωu can be produced from a low-pass transfer function with
cutoff Ω by replacing s in the low-pass transfer function with

Ω(s² + ωl ωu) / (s(ωu − ωl))

as illustrated in Figure 12.12. Because the high-pass and low-pass transformations


replace the variable s with a nonlinear function of s, the mappings slightly distort
the frequency response characteristic of the low-pass filter. This occurs through a
nonuniform stretching or squashing of the frequency scale. So, for example, a band-
pass response is not just a shifted version of the low-pass response. Nevertheless, the
band-pass response takes on exactly the same set of values as the low-pass response –
just at different frequencies. In general, the resulting distortion is small and the above
transformations are very useful in practice.
Commonly used software packages permit the design of Butterworth, Chebyshev,
elliptic and other filters using a single command. A single user instruction triggers
both the design of a low-pass filter prototype Ĥ (s) and the transformation of the
prototype to the desired type of filter.

Figure 12.12 Amplitude and phase response curves for (a) a 6th-order Butterworth low-pass filter with
Ω = 1, and (b) a band-pass filter with ωl/2π = 2 kHz and ωu/2π = 3 kHz derived from the same low-pass
filter.

EXERCISES

12.1 Derive the transfer function Ĥa (s) of the 1st-order active filter circuit depicted
in Figure 12.5a.
12.2 Derive the transfer function Ĥb (s) of the Sallen-Key circuit depicted in
Figure 12.5b.

12.3 Given the following transfer functions, determine whether the system is
underdamped, overdamped, or critically damped, and calculate the quality
factor Q:
(a) Ĥ1(s) = s/(s² + 4s + 400),
(b) Ĥ2(s) = (s + 5)/(s² + 2000s + 10⁶),
(c) Ĥ3(s) = (s² + 100)/(s² + 20000s + 10⁶).

12.4 Identify each of the three systems defined in Problem 12.3 as low-pass,
band-pass, or high-pass, and for the band-pass system determine the 3-dB
bandwidth Ω.

12.5 Given that y(t) is an underdamped zero-input response of the form given in
the first row of Table 12.1a, show that

∫_t^∞ |y(τ)|² dτ ≈ τd |y(t)|e²/2,

where τd = 1/(2α) and |y(t)|e = Ae^(−αt) is the envelope of the underdamped
y(t), and explain why ∫_t^∞ |y(τ)|² dτ can be interpreted as the stored energy
of the system at instant t. In performing the integral, make use of ωo ≫ α
to handle an integral with an oscillatory integrand.
12.6 The zero-input response of a 2nd-order band-pass filter is observed to oscil-
late about sixty cycles before the oscillation amplitude is reduced to a few
percent of its initial amplitude. Furthermore, the oscillation period of the
zero-input response is measured as 1 ms. Assuming that the maximum ampli-
tude response of the system is 1, write an approximate expression for the
frequency response H (ω) = Ĥ (j ω) of the system.
12.7 Determine the frequency response, 3-dB bandwidth, and quality factor Q
of the following parallel RLC band-pass filter circuit, below, in terms of
resistance R:

[Circuit: source f(t) driving a 1 H inductor, a 1 F capacitor, and the resistor R, all in parallel; output y(t).]

12.8 Given θm = 90°(1 + m/n), verify that

(s − Ω∠θm)(s − Ω∠−θm)

multiplies out as

s² + 2Ω sin(πm/2n) s + Ω².

12.9 Determine the pole locations of a 4th-order low-pass Butterworth filter with
a 3-dB frequency of 1 kHz.
12.10 What is the transfer function of the highest-Q subcomponent of the 4th-order
Butterworth filter described in Problem 12.9?
12.11 Assuming R3 = R4 = 4 kΩ, determine the capacitance values C1 and C2
for the 2nd-order circuit of Figure 12.5b to implement the transfer function
of Problem 12.10.
12.12 Approximate the time delay of the filter described in Problem 12.10 by
calculating the slope of the phase response curve of the filter at ω = 0.

12.13 Determine the transfer function Ĥ(s) of a 3rd-order Butterworth low-pass
filter having 3-dB cutoff frequency 15 kHz. Sketch the magnitude of the
frequency response. Verify that |Ĥ(jω)| agrees with your sketch.
12.14 Design a 2nd-order Butterworth high-pass filter having cutoff frequency 50
Hz. Do so by first designing a Butterworth low-pass filter having cutoff
frequency 1 Hz and then transforming it to a high-pass filter using the low-
pass to high-pass transformation in Section 12.3.3. Sketch the magnitudes
of the frequency responses for both the low-pass prototype and the high-pass
filter.
12.15 Show that application of the low-pass to band-pass transformation in Section
12.3.3 results in a band-pass transfer function with frequency response satis-
fying
HB(ωl) = H(−Ω)
and
HB(ωu) = H(Ω)
where HB is the band-pass frequency response with lower and upper cutoff
frequencies ωl and ωu, respectively, and H is the frequency response of the
low-pass filter with cutoff frequency Ω. Notice that these relations prove that
the filter transformation moves the 3-dB cutoff frequencies of the low-pass
filter to the desired frequencies for the band-pass filter.
A
Complex Numbers
and Functions

A.1 COMPLEX NUMBERS AS REAL NUMBER PAIRS 450


A.2 RECTANGULAR FORM 452
A.3 COMPLEX PLANE, POLAR AND EXPONENTIAL FORMS 454
A.4 MORE ON COMPLEX CONJUGATE 461
A.5 EULER’S IDENTITY 463
A.6 COMPLEX-VALUED FUNCTIONS 465
A.7 FUNCTIONS OF COMPLEX VARIABLES 468

A.1 Complex Numbers as Real Number Pairs

Complex arithmetic, an invention of the late sixteenth century,1 is an arithmetic of


real number pairs—such as (2, 0), (0, 1), (π, 2.5), etc.—known as complex numbers.
According to the rules of complex arithmetic, complex numbers (a, b) and (c, d)
are added and multiplied as

(a, b) + (c, d) = (a + c, b + d)

and

(a, b)(c, d) = (ac − bd, ad + bc),

respectively. Furthermore, the complex number (a, 0) and real number a are defined
to be the same; that is,

(a, 0) = a.

Two important consequences of these definitions are:


1
The introduction of a consistent theory of complex numbers is credited to Italian mathematician Rafael
Bombelli (1526–1572).


(1) Complex addition and multiplication are compatible with real addition and
multiplication; for complex numbers (a, 0) and (c, 0), which also happen to be
the reals a and c,

(a, 0) + (c, 0) = (a + c, 0) = a + c

and

(a, 0)(c, 0) = (ac − 0, 0 + 0) = (ac, 0) = ac,

as in real arithmetic. Thus, real arithmetic is embedded in complex arithmetic


as a special case.
(2) The product of the complex number (0, 1) with itself is

(0, 1)(0, 1) = (0 − 1, 0 + 0) = (−1, 0) = −1.

If we denote (0, 1) as j , adopting the definition

j ≡ (0, 1),

we can express the preceding result more concisely as

j 2 = −1.

We refer to j as the imaginary unit,2 since there exists no real number whose square
equals −1. However, within the complex number system, j = (0, 1) is no less real or
no more imaginary than, say, (1, 0) = 1. Furthermore, because in the complex number
system, j² = (0, 1)(0, 1) = −1 and (−j)² = (0, −1)(0, −1) = −1, the square root
of −1 exists and equals (0, ±1) = ±j. Because of this, we say j = √−1.

Example A.1
Notice that, as a consequence of j 2 = −1, it follows that

1/j = (1·j)/(j·j) = j/(−1) = −j.

2
In math and physics books, the imaginary unit usually is denoted as i. We prefer j in electrical
engineering, because i is often a current variable.
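The pair-form rules above translate directly into code, which makes them easy to check numerically. A minimal sketch in Python (the helper names `cadd` and `cmul` are ours, not the text's):

```python
# Complex arithmetic on real-number pairs, per the definitions above.
def cadd(x, y):
    a, b = x
    c, d = y
    return (a + c, b + d)

def cmul(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

j = (0, 1)          # the imaginary unit as a pair
print(cmul(j, j))   # (-1, 0), i.e., j squared is -1
```

Running `cmul(j, j)` reproduces consequence (2): the pair (0, 1) squares to (−1, 0), which is the real number −1.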

A.2 Rectangular Form

Given the real numbers a and b, and the imaginary number j ≡ (0, 1),

a + j b = (a, 0) + (0, 1)(b, 0) = (a, 0) + (0 − 0, b + 0) = (a, 0) + (0, b) = (a, b).

Thus, a + j b and (a, b) are simply two different ways of expressing the same complex
number.
In the next section, we will explain how a + j b = (a, b) can be plotted in the
two-dimensional plane, where a is the horizontal coordinate and b is the vertical
coordinate. Because of this ability to directly map a + j b = (a, b) onto the Cartesian
plane, both a + j b and (a, b) are called rectangular forms of a complex number.
To distinguish these two slightly different representations, however, we will refer to
(a, b) as the pair form and reserve the term rectangular form for a + j b. The value a
is called the real part of the complex number, whereas the value b is referred to as the
imaginary part. This is dreadful terminology, because both a and b are real numbers.
We are stuck with these descriptors, though, because they were adopted long ago and
are in widespread use. Remember that the imaginary part of a complex number is real
valued! The imaginary part of a + j b is b, not j b.
When expressed in rectangular form, complex numbers add and multiply like
algebraic expressions in the variable j . This is the main advantage in using the j
notation. For example, the product of

X = (1, 1) = 1 + j

and

Y = (−1, 1) = −1 + j 1

is, simply,

XY = (1 + j1)(−1 + j1) = −1 + j² + j − j = −2,

since j 2 = −1. The result is the same as that obtained with the original multiplication
rule (which is hard to remember):

(1, 1)(−1, 1) = (−1 − 1, 1 − 1) = (−2, 0) = −2.

Also, notice that

X + Y = (1 + j 1) + (−1 + j 1) = 0 + j 2 = j 2,

in conformity with

(1, 1) + (−1, 1) = (0, 2).



All complex arithmetic, including subtraction and division, can be carried out by
treating complex numbers as algebraic expressions in j . For example,

X − Y = (1 + j ) − (−1 + j ) = 2 + j 0 = 2.

In complex division, we sometimes wish to express a given complex ratio, X/Y, in
rectangular form. We can accomplish this by multiplying both X and Y by the complex
conjugate of Y. For example,

X/Y = (1 + j)/(−1 + j) = ((1 + j)(−1 − j))/((−1 + j)(−1 − j)) = (0 − j2)/(2 + j0) = −j2/2 = −j,

where we multiplied both the numerator and denominator by the complex conjugate
of −1 + j, which is −1 − j. In general, the complex conjugate of a + jb is defined
to be a − jb.

Example A.2
Given P = (3, 6) and Q = 2 + j , determine R = P Q and express the
product in pair form.
Solution

R = PQ = (3, 6)(2 + j) = (3 + j6)(2 + j) = 6 + 6j² + j3 + j12 = 0 + j15.

Hence, in pair form, R = (0, 15).

Example A.3
Given P = (3, 6), Q = 2 + j, and R = PQ, determine (P − Q)/R.
Solution Since R = j15, as determined in Example A.2,

(P − Q)/R = ((3 + j6) − (2 + j))/(j15) = (1 + j5)/(j15) = ((1 + j5)j)/((j15)j)
= (−5 + j)/(−15) = 1/3 − j(1/15).
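Python's built-in complex type implements exactly this j-notation algebra (`1j` is Python's spelling of j), so Examples A.2 and A.3 can be double-checked in a few lines:

```python
P = 3 + 6j            # P = (3, 6) in pair form
Q = 2 + 1j
R = P * Q             # Example A.2: expect 0 + j15
print(R)              # 15j
ratio = (P - Q) / R   # Example A.3: expect 1/3 - j(1/15)
print(ratio)          # approximately 0.3333 - 0.0667j
```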

Example A.4
Use the quadratic formula³ to determine the roots of the polynomial

x² + x + 1.

³If ax² + bx + c = 0, then x = (−b ± √(b² − 4ac))/(2a).
454 Appendix A Complex Numbers and Functions

Solution The roots of x² + x + 1 are the numbers x such that

x² + x + 1 = 0.

Using the quadratic formula, we find that the roots are given by

x = (−1 ± √(1 − 4))/2 = (−1 ± √−3)/2.

However, since √−3 is not a real number, x² + x + 1 has no real roots; in
other words, there is no real number x for which the statement x² + x + 1 =
0 is true. That is, if we plot f(x) = x² + x + 1 versus x, where x is a
customary real variable, then f(x) does not cross through the x-axis.
But if we allow the variable x to take complex values, then x² + x + 1 = 0
is true for

x = −1/2 ± j(√3/2),

since two distinct square roots of the number −3 are ±j√3. Therefore, if x is a
complex variable, the roots of the polynomial x² + x + 1 are −1/2 ± j(√3/2).

The notion of complex variables will be explored further in Section A.7.
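With complex arithmetic in hand, the quadratic formula can be evaluated even when the discriminant is negative. A sketch using Python's standard-library cmath module:

```python
import cmath

a, b, c = 1, 1, 1                      # coefficients of x**2 + x + 1
disc = cmath.sqrt(b * b - 4 * a * c)   # sqrt(-3) = j*sqrt(3)
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)                           # approximately -0.5 +/- j0.866

# Each root satisfies the polynomial (up to rounding error):
for x in roots:
    print(abs(x * x + x + 1))
```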

While the pair form of complex numbers has conceptual importance, this form
is cumbersome when we are performing calculations by hand. In practice, we usually
will rely on rectangular form (with j notation) as well as the exponential and polar
forms discussed next.

A.3 Complex Plane, Polar and Exponential Forms

A complex number

C = (a, b) = a + j b

can be envisioned as a point (a, b) on the 2-D Cartesian plane where a is the coordinate
on the horizontal axis and b is the coordinate on the vertical axis. Since the reals a and b
are called the real and imaginary parts of the complex number C = (a, b) = a + j b,
we use the notation

a = Re{C}

and

b = Im{C}.

We label the horizontal and vertical axes of the 2D plane as Re and Im, respectively,
as shown in Figure A.1a, where we have plotted several different complex numbers.
When plotting complex numbers in this fashion, we call the 2-D plane the complex
plane.
Clearly, (a, b) and a + j b are equivalent ways of referencing the same point C
on the complex plane. As illustrated in Figure A.1b, point C also can be referenced
by another pair of numbers: its distance |C| from the origin, which is the length
of the dashed line connecting the origin to point C; and the angle ∠C between the
positive real axis and a straight line path from the origin to the point C. Using simple
trigonometry, we have

|C| = √(a² + b²)

and

∠C = tan⁻¹(b/a),
which are said to be the magnitude |C| and angle ∠C of

C = (a, b) = a + j b,

respectively.
The formula given for ∠C assumes a ≥ 0; otherwise, ±180° needs to be added
to the value of tan⁻¹(b/a) provided by your calculator. For instance, the angle of the


Figure A.1 The complex plane showing (a) the locations of complex numbers 1 + j1, −1 + j1, and 2 − j2,
and (b) an arbitrary complex number C = (a, b) = a + jb with magnitude, |C|, and angle, ∠C = θ.

complex number

Y = −1 + j 1

shown in Figure A.2 is not −45°, as tan⁻¹(b/a) would indicate, but rather, −45° ± 180°,
as can be graphically confirmed. One version of ∠(−1 + j),

−45° + 180° = 135°,

corresponds to counterclockwise rotation starting from the Re axis, while the second
version,
−45° − 180° = −225°,

corresponds to clockwise rotation. Either is correct, because they correspond to the


same complex number.
With the aid of Figure A.2, try to confirm that, for X = 1 + j1,

|X| = √2 and ∠X = 45°;

while for Z = 2 − j2,

|Z| = 2√2 and ∠Z = −45°.

Because complex numbers can be represented by their magnitudes and angles in
the two-dimensional plane, we call this polar-form representation. We write a complex
number C in polar form as

Figure A.2 Magnitudes and angles of complex numbers 1 + j1, −1 + j1, and
2 − j2 are indicated by dashed lines and arcs, respectively.

|C|∠C.

So, X and Z from the preceding expression can be written as


X = √2∠45°

and

Z = 2√2∠(−45°).

There is another version of polar-form representation that is much more convenient
mathematically. To construct this new representation, we note that for any
complex number C = a + j b, trigonometry allows us to recover a and b from the
magnitude |C| and angle θ ≡ ∠C, as (see Figure A.1b)

a = Re{C} = |C| cos θ

and

b = Im{C} = |C| sin θ.

Hence,

C = a + j b = |C|(cos θ + j sin θ).

Now, as will be shown in Section A.5, there is a remarkably useful mathematical
relation called Euler's identity, which states that

e^{jθ} = cos θ + j sin θ.

Applying Euler’s identity to the previous expression for C, we have

C = |C|e^{jθ},

which is called the exponential form of C = a + jb.

For example, exponential forms of
X = 1 + j 1,
Y = −1 + j 1,
Z = 2 − j2

are (see Figure A.2)


X = √2 e^{j45°},
Y = √2 e^{j135°},
Z = 2√2 e^{−j45°},

respectively.

Using the exponential form and ordinary algebraic rules, we have


XY = (√2 e^{j45°})(√2 e^{j135°}) = √2 × √2 × e^{j45°} × e^{j135°}
= 2e^{j(45° + 135°)} = 2e^{j180°} = −2,

which is the same result as

XY = (1 + j1)(−1 + j1) = −1 + j² + j − j = −2.

Note that

2e^{j180°} = −2,

because
e^{j180°} = cos 180° + j sin 180° = −1.

(Alternatively, it is even easier to reach the same conclusion graphically on the
complex plane.)
Likewise,
X/Z = (√2 e^{j45°})/(2√2 e^{−j45°}) = (1/2) e^{j45°} × e^{j45°} = (1/2) e^{j90°} = j(1/2)

and

XZ = (√2 e^{j45°})(2√2 e^{−j45°}) = √2 × 2√2 × e^{j(45° − 45°)} = 4e^{j0°} = 4.

As these examples illustrate, exponential form is convenient for complex
multiplication and division.
Let us try a couple of examples where we convert from exponential to rectangular
form, or vice versa.

Example A.5
Convert the following complex numbers from exponential to rectangular
form:
C1 = 7e^{j25°}, C2 = 7e^{−j25°},
C3 = 2e^{j160°}, C4 = 2e^{−j160°}.

Solution Using Euler’s identity (and a calculator to evaluate the cosines


and sines), we get

C1 = 6.34 + j 2.95, C2 = 6.34 − j 2.95,


C3 = −1.88 + j 0.68, C4 = −1.88 − j 0.68.

You should roughly sketch the locations of these points on the complex
plane and check that these exponential-to-rectangular conversions seem
correct.
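These exponential-to-rectangular conversions are what cmath.rect computes, given a magnitude and an angle in radians; a quick check of Example A.5 (the dictionary keys are just labels of ours):

```python
import cmath, math

polar = {"C1": (7, 25), "C2": (7, -25), "C3": (2, 160), "C4": (2, -160)}
rect = {name: cmath.rect(mag, math.radians(deg))   # |C| e^{j theta} -> a + jb
        for name, (mag, deg) in polar.items()}
for name, z in rect.items():
    print(f"{name} = {z.real:.2f} + j({z.imag:.2f})")
```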

Example A.6
Convert
C1 = 3 + j 4, C2 = −3 + j 4,
C3 = 3 − j 4, C4 = −3 − j 4

to exponential form.
Solution To write C = a + jb in the form |C|e^{jθ}, use

|C| = √(a² + b²)

and

θ = ∠C = tan⁻¹(b/a).
We have

|C1 | = |C2 | = |C3 | = |C4 | = 5.

We find the angle for C1 as

∠C1 = tan⁻¹(4/3) = 53.13°.
For C2 we seemingly have

∠C2 = tan⁻¹(−4/3) = −53.13°.
However, the inverse tangent is multivalued, and your calculator gives only
one of the two values. If a > 0, your calculator provides the correct value.
As indicated earlier, if a < 0 then you must add or subtract 180◦ from the
calculated value to give the correct angle. Thus,
∠C2 = −53.13° + 180° = 126.87°.

Similarly, we have ∠C3 = −53.13◦ and ∠C4 = −126.87◦ . So finally,


C1 = 5e^{j53.13°}, C2 = 5e^{j126.87°},
C3 = 5e^{−j53.13°}, C4 = 5e^{−j126.87°}.
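The quadrant correction in Example A.6 is exactly what the two-argument arctangent performs automatically; a sketch using math.atan2 (cmath.polar would do the same job, with the angle in radians):

```python
import math

def to_exponential(z):
    """Return (|C|, angle in degrees) for a complex z, valid in all quadrants."""
    return abs(z), math.degrees(math.atan2(z.imag, z.real))

for z in (3 + 4j, -3 + 4j, 3 - 4j, -3 - 4j):
    mag, ang = to_exponential(z)
    print(f"{z} = {mag:g} exp(j {ang:.2f} deg)")
```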

Visualization of complex numbers on the complex plane is an important skill.


Remember that any complex number written in exponential form, |C|e^{jθ}, is at a
distance |C| from the origin and at an angle θ from the positive real axis. Thus, e^{j0°},
e^{j90°}, e^{j180°}, and e^{j270°} all lie on a circle of radius one and have values 1, j, −1,
and −j, respectively. Similarly, 5e^{j0°}, 5e^{j90°}, 5e^{j180°}, and 5e^{j270°} all lie on a circle
of radius five and have values 5, j5, −5, and −j5, respectively. You should be able
to picture these locations on the complex plane.
Angles of complex numbers can be measured in units of degrees, as we have done
so far, or in units of radians, with 360° and 2π radians signifying the same angle. By
convention, e^{j90°} and e^{jπ/2} are the same complex number because 90° = π/2 radians.
Likewise,

e^{−jπ/4} = e^{−j45°} = 1/√2 − j(1/√2).
The following equalities should be understood and remembered:
e^{±jπ/2} = ±j
e^{±jπ} = −1
e^{±j2πn} = 1 for any integer n.

Recall that exponential form was derived from polar form. So, for example, we
can express Y = −1 + j1 = √2 e^{j3π/4} as √2∠135°. Since both exponential and polar
forms express Y in terms of magnitude |Y| = √2 and angle ∠Y = 3π/4 rad = 135°,
their distinction is mainly cosmetic.

Example A.7
What is the magnitude of the product P of complex numbers U = 1 + j2
and V = 3e^{jπ/2}?
Solution Clearly,

P = UV = |U|e^{j∠U} |V|e^{j∠V} = |U||V|e^{j(∠U + ∠V)}.

Therefore, the magnitude of P is

|P| = |U||V| = (√(1² + 2²))·3 = 3√5.

Example A.8
Given that A = 2∠45° and B = 3e^{−jπ/2}, determine AB and A/B.
Solution Since A = 2∠45° = 2e^{jπ/4},

AB = (2e^{jπ/4})(3e^{−jπ/2}) = 6e^{j(π/4 − π/2)} = 6e^{−jπ/4}.

Also,

A/B = (2e^{jπ/4})/(3e^{−jπ/2}) = (2/3) e^{j(π/4 + π/2)} = (2/3) e^{j3π/4}.

Example A.9
Express AB and A/B from Example A.8 in rectangular form.
Solution Using Euler's identity, we first note that

e^{−jπ/4} = e^{j(−π/4)} = cos(−π/4) + j sin(−π/4) = 1/√2 − j(1/√2),

which also can be seen visually on the complex plane. Hence,

AB = 6e^{−jπ/4} = 6(1/√2 − j(1/√2)) = 3√2 − j3√2.

Likewise,

e^{j3π/4} = cos(3π/4) + j sin(3π/4) = −√2/2 + j(√2/2),

and so

A/B = (2/3) e^{j3π/4} = (2/3)(−√2/2 + j(√2/2)) = −√2/3 + j(√2/3).

A.4 More on Complex Conjugate

For the four different forms of a complex number

C = (a, b) = a + j b = |C|ej θ = |C|∠θ,

the complex conjugate C ∗ is

C ∗ ≡ (a, −b) = a − j b = |C|e−j θ = |C|∠ − θ.

The last two equalities can be verified by using Euler’s formula. Try it!
In the rectangular and exponential forms, we obtain the conjugate by changing
j to −j , whereas in polar form the algebraic sign of the angle is reversed. Thus, for
X = 1 − j 1, we have

X ∗ = 1 − (−j )1 = 1 + j 1;
whereas, for Y = e^{jπ/6},

Y* = e^{−jπ/6};

and for Z = 2∠(−π/4),

Z* = 2∠(π/4).
These same changes in sign work even for more complicated expressions, such as
the pair
Q = (j3e^{−jπ/4})/((1 + j2) 2^{(1+j)}) · 5∠30°

and

Q* = (−j3e^{jπ/4})/((1 − j2) 2^{(1−j)}) · 5∠(−30°).

Graphically, the conjugate C ∗ is the point on the complex plane opposite C on


the “other side” of the Re axis. For example, j ∗ = −j . Also, notice that the complex
conjugate of a complex conjugate gives the original number; that is, (C ∗ )∗ = C.
The product of

C = a + jb

and

C∗ = a − j b

is

CC* = |C|²,

since, using the exponential form, we have CC* = (|C|e^{jθ})(|C|e^{−jθ}) = |C|² (which,
in turn, equals a² + b²). Thus, CC* is always real.
The absolute value |−2| of the real number −2 is 2. The absolute value |1 + j1|
of the complex number 1 + j1 = √2 e^{jπ/4} is its magnitude √2, that is, its distance
from the origin of the complex plane. The absolute values of −1 − j1 and 1 − j1
also are √2. Since, for an arbitrary C, CC* = |C|², it follows that |C| = √(CC*) (the
positive root, only).
Taking the sum of C = a + j b and C ∗ = a − j b, we get

C + C ∗ = 2a = 2Re{C},

yielding

Re{C} = (C + C*)/2.
The difference gives

C − C ∗ = j 2b = j 2Im{C},

implying that

Im{C} = (C − C*)/(j2).

Thus, for example,

(1 − j2)/(j2) + (1 + j2)/(−j2) = 2Re{(1 − j2)/(j2)} = Re{−j(1 − j2)} = Re{−2 − j} = −2.
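These conjugate identities can be verified mechanically with Python's .conjugate() method; a quick check, including the worked example above (the sample value C is arbitrary):

```python
C = 2 - 3j                                 # an arbitrary sample value
print(C + C.conjugate())                   # (4+0j)  = 2 Re{C}
print(C - C.conjugate())                   # -6j     = j2 Im{C}
print(C * C.conjugate(), abs(C) ** 2)      # both equal |C|^2 = 13 (up to rounding)
print((1 - 2j) / (2j) + (1 + 2j) / (-2j))  # (-2+0j), as in the text
```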

A.5 Euler’s Identity

The function e^x of real variable x has an infinite series⁴ expansion

e^x = 1 + x + x²/2 + x³/3! + x⁴/4! + ··· + xⁿ/n! + ···.

The complex function e^C of complex variable C is defined as

e^C ≡ 1 + C + C²/2 + C³/3! + C⁴/4! + ··· + Cⁿ/n! + ···,

so that e^x is a special case of e^C (corresponding to C = x, obviously).


For C = jφ, where φ is real, e^C evaluates to

e^{jφ} = (1 − φ²/2 + φ⁴/4! − ···) + j(φ − φ³/3! + φ⁵/5! − ···) = cos φ + j sin φ,

because j² = −1, j³ = −j, j⁴ = 1, etc., and

1 − φ²/2 + φ⁴/4! − ···

⁴The Taylor series of e^x, about x = 0, is obtained as Σ_{n=0}^{∞} (d^n/dx^n e^x)|_{x=0} (x^n/n!). Note that d^n/dx^n e^x = e^x for any n, leading to the series quoted above. The series converges to e^x for all x.

and

φ − φ³/3! + φ⁵/5! − ···
are the series expansions of cos φ and sin φ, respectively. Therefore, for real φ,

Re{e^{jφ}} = cos φ, Im{e^{jφ}} = sin φ,

and

e^{jφ} = cos φ + j sin φ.

The last statement is Euler's identity, which we introduced without proof in Section A.3.
Throughout this textbook, we make use of both Euler's identity and its conjugate,

e^{−jφ} = cos φ − j sin φ.

Euler’s identity and its corollary

Re{e^{jφ}} = cos φ

are essential for understanding the phasor technique discussed in Chapter 4.


Euler’s identity and its conjugate imply that

cos φ = (e^{jφ} + e^{−jφ})/2
and

sin φ = (e^{jφ} − e^{−jφ})/(j2).

These formulas will be used often; so they, along with Euler’s formula, should be
committed to memory.
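Euler's identity and the two formulas above are easy to sanity-check numerically; a sketch (the angle φ = 0.7 rad is an arbitrary test value of ours):

```python
import cmath, math

phi = 0.7                                   # arbitrary real angle, in radians
lhs = cmath.exp(1j * phi)
rhs = math.cos(phi) + 1j * math.sin(phi)
print(abs(lhs - rhs))                       # ~0: Euler's identity holds

cos_from_exp = (cmath.exp(1j * phi) + cmath.exp(-1j * phi)) / 2
sin_from_exp = (cmath.exp(1j * phi) - cmath.exp(-1j * phi)) / (2j)
print(abs(cos_from_exp - math.cos(phi)))    # ~0
print(abs(sin_from_exp - math.sin(phi)))    # ~0
```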

Example A.10
Given that

4(e^{j3} + e^{−j3}) = A cos(χ),

determine A and χ.
Solution Using the identity

cos φ = (e^{jφ} + e^{−jφ})/2,

we have
4(e^{j3} + e^{−j3}) = (A/2)(e^{jχ} + e^{−jχ}).
Therefore, A = 8 and χ = 3.

Example A.11
Express the function (e^{j5t} − e^{−j5t})/2 in terms of a sine function.
Solution Using the identity

sin φ = (e^{jφ} − e^{−jφ})/(j2),

we find

(e^{j5t} − e^{−j5t})/2 = j · (e^{j5t} − e^{−j5t})/(j2) = j sin(5t).

A.6 Complex-Valued Functions5

A function f (t) is said to be real valued if, at each instant t, its numerical value is a
real number. For example,

f (t) = cos(2πt)

is real valued. By contrast, complex-valued functions take on values that are complex
numbers. For example,

f (t) = ej 2πt = cos(2πt) + j sin(2πt) = (cos(2πt), sin(2πt))


is complex valued, since, for instance, f(1/4) = e^{jπ/2} = j is a complex number. A
complex-valued function can be expressed in exponential, rectangular, and pair forms,
as just illustrated.
Although voltage and current signals generated and measured in electrical circuits
are always real-valued functions, there are at least two reasons why complex-valued
functions are useful in the analysis of real-world signal processing systems. First,
real-valued signals can be expressed in terms of complex-valued functions, as, for
instance,

cos(ωt) = Re{ej ωt }

5
This section can be studied after Chapter 4, in preparation for Chapter 5.

and

sin(ωt) = Im{ej ωt }.

In the case of linear time-invariant circuits, it is mathematically simple to calculate


output signals when input signals are complex exponentials. If the true input is a cosine
instead, then the output is simply the real part of the calculated complex output. This
is the phasor method of Chapter 4.
A second instance where complex-valued signals are useful is in the modeling
of certain types of communication or other signal processing systems where a pair
of channels use sinusoidal signals that are 90◦ out of phase. The sinusoids can be
concisely represented as

ej ωt = (cos(ωt), sin(ωt)).

The mathematical representation and manipulation for such systems is tremendously


simplified through the use of complex signals.
As a simple example of complex signal representation, consider the derivative
operation d/dt, which converts the complex function e^{jωt} into jωe^{jωt} (i.e., jω times the
function itself). This multiplicative scaling by j ω is simpler to remember and express
than are the conversions of cos(ωt) into −ω sin(ωt) and sin(ωt) into ω cos(ωt).
Pictorially, a differentiator, which can be implemented by a linear circuit, can be
described in terms of the input–output rule

e^{jωt} −→ Diff −→ jωe^{jωt},

where the function on the left is the system input and the function on the right is
the system output. This is a concise representation of a much more complicated real-
world situation, which can be understood by expressing the input and output functions
in pair form (with the help of Euler’s identity), given as

(cos(ωt), sin(ωt)) −→ Diff −→ ω(− sin(ωt), cos(ωt)).

Physically, this corresponds to a pair of differentiators, one of which converts a


cosine input cos(ωt) into −ω sin(ωt) and the second converting a sine input sin(ωt)
into ω cos(ωt).
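The rule e^{jωt} → jωe^{jωt} for the differentiator can be checked against a numerical derivative; a sketch (ω, t, and the step h are arbitrary choices of ours):

```python
import cmath

w = 3.0    # angular frequency (arbitrary)
t = 0.4    # evaluation time (arbitrary)
h = 1e-6   # finite-difference step

f = lambda t: cmath.exp(1j * w * t)
numeric = (f(t + h) - f(t - h)) / (2 * h)  # central-difference derivative
exact = 1j * w * f(t)                      # the jw * e^{jwt} rule
print(abs(numeric - exact))                # tiny: the two derivatives agree
```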
Likewise, the pair form of

e^{jωt} −→ Lowpass −→ (1/(1 + jω)) e^{jωt}

specifies how a low-pass filter circuit—another linear system—converts its cosine


and sine inputs into outputs (as discussed in detail in Chapter 5).
In summary, even though real-world systems and circuits (such as differentiators
or filter circuits) process (i.e., act upon) only real-valued signals, the models of such

systems used for analysis and design purposes can be constructed in terms of complex-
valued functions whenever it is advantageous to do so. The advantages are made
abundantly clear in Chapter 4 and later chapters.

Example A.12
Given a linear filter circuit described by

e^{jωt} −→ Filter −→ (1/(1 + jω)) e^{jωt},

what is the filter output y(t) if the input is f (t) = cos(2t)?


Solution According to the stated input–output relation,

e^{j2t} −→ Filter −→ (1/(1 + j2)) e^{j2t}.

Since

e^{j2t} = (cos(2t), sin(2t))

and
e^{j2t}/(1 + j2) = e^{j2t}/(√5 e^{j tan⁻¹2}) = e^{j(2t − tan⁻¹2)}/√5
= (1/√5)(cos(2t − tan⁻¹2), sin(2t − tan⁻¹2)),

the input–output relation implies that

cos(2t) −→ Filter −→ (1/√5) cos(2t − tan⁻¹2).

Therefore, the input

f (t) = cos(2t)

produces the output

y(t) = (1/√5) cos(2t − tan⁻¹2).
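The amplitude 1/√5 and phase −tan⁻¹2 in Example A.12 fall straight out of the complex gain 1/(1 + jω) evaluated at ω = 2; a quick numerical check:

```python
import cmath, math

w = 2.0
H = 1 / (1 + 1j * w)           # the filter's complex gain at omega = 2
gain, phase = cmath.polar(H)
print(gain, 1 / math.sqrt(5))  # both ~0.4472
print(phase, -math.atan(2))    # both ~-1.1071 rad
```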

A.7 Functions of Complex Variables6

Let us return to Example A.4 from Section A.2. In that example, we were asked to
determine the roots of the polynomial x 2 + x + 1. That is, we were asked to find the
values of x for which

x 2 + x + 1 = 0.

Figure A.3a shows a plot of

f (x) = x 2 + x + 1.

Clearly, there is no value of x for which this function passes through zero. But, isn’t
this function, which is a second-order polynomial, required to have two roots? The
answer is no, not if x is a real variable!
An nth-order polynomial is guaranteed to have n roots only if the polynomial is
a function of a complex variable.7 A complex variable x is an ordered pair of real
variables—say, xr and xi —much like a complex number C is an ordered pair of real
numbers (a, b). We can define

x ≡ (xr , xi )

where xr and xi are called the real and imaginary parts of x, respectively. Alternatively,
we can write x = xr + j xi .


Figure A.3 (a) Plot of function f (x) = x 2 + x + 1 of the real variable x, and (b)
surface plot of squared magnitude of function f (x) of the complex variable
x = (xr , xi ) = xr + jxi over the complex plane.

6
This section can be studied after Chapter 10, in preparation for Chapter 11.
7
In high school you probably worked with functions of only a real variable. However, in cases where
polynomials were factored and discovered to have complex roots, it was implicitly assumed that the variable
was complex (whether you were told that or not!).

If x is a complex variable, then the function f (x) can be expanded as

f(x) = f((xr, xi)) = f(xr + jxi) = (xr + jxi)² + xr + jxi + 1.

Because this function depends on two real-valued variables, xr and xi , we can contem-
plate plotting it in 3-D, as a surface over the xr –xi plane. There is one catch, though:
f (x) itself is complex valued. That is, the value of f (x) for a given x is generally a
complex number. For example, if xr = 0 and xi = 1, then f (x) = (j )2 + j + 1 = j .
How do we plot f (x) if it is complex valued?
The answer is that we must make two plots. Our first plot, or sketch, can be the
real part of f (x) as a 3-D surface over the xr –xi plane, which also happens to be the
complex plane. The second sketch can be the surface plot of the imaginary part of
f (x). These two sketches together would fully describe f (x). Alternatively, we could
sketch the magnitude of f (x) and the angle of f (x), both as surfaces over the xr –xi
plane, which also would fully describe f (x). Let’s go ahead and calculate, and then
plot, the square of the magnitude of f (x), and check whether there are any values
of x for which the squared magnitude hits zero (in which case f (x) itself must, of
course, be zero).
The squared magnitude of f (x) is the square of the real part of f (x) plus the
square of the imaginary part of f (x). We have

f(x) = (xr + jxi)² + xr + jxi + 1 = xr² + j2xr xi − xi² + xr + jxi + 1.

So,

Re{f(x)} = xr² − xi² + xr + 1

and

Im{f (x)} = 2xr xi + xi = xi (2xr + 1),

giving

|f(x)|² = Re²{f(x)} + Im²{f(x)} = (xr² − xi² + xr + 1)² + xi²(2xr + 1)².

A 3-D surface plot of the squared magnitude |f (x)|2 is shown in Figure A.3b. It
appears that this plot may hit zero at two locations, but it is difficult to see precisely.
examination of the preceding expression for |f (x)|2 can help. This quantity can be
zero only if both terms are zero. The second term, xi2 (2xr + 1)2 , can be zero only if
either xi = 0 or xr = −1/2. With the first choice, it is impossible for xr2 − xi2 + xr + 1
to be zero (remember, xr is a real variable). However, if

xr = −1/2,

then

xr² − xi² + xr + 1 = 0

when

xi = ±√3/2.

Thus, the squared magnitude of f (x), which is the plot shown in Figure A.3b, hits
zero when

x = (xr, xi) = (−1/2, ±√3/2).

This agrees with the result we obtained in Section A.2 by using the quadratic formula.
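That the surface in Figure A.3b touches zero at exactly x = −1/2 ± j√3/2 can be confirmed by evaluating f at those points; a sketch:

```python
def f(x):
    """f(x) = x^2 + x + 1, with x allowed to be complex."""
    return x * x + x + 1

for x in (complex(-0.5, 3 ** 0.5 / 2), complex(-0.5, -(3 ** 0.5) / 2)):
    print(x, abs(f(x)) ** 2)   # squared magnitude ~0 at both roots
```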
Plots such as the one in Figure A.3b help us visualize functions of complex
variables. The key is that a complex variable is a pair of real variables, and so plots
of this sort always must be made over the 2-D complex plane. In Chapter 10 we
introduce the Laplace transform, which is a function of a complex variable. You
may have seen the Laplace transform used as an algebraic tool to assist in solving
differential equations. In this course, we will need to have a deeper understanding of
the functional behavior of the Laplace transform and we will sometimes want to plot
either the magnitude or squared magnitude, using ideas similar to those expressed
here. Doing so will give us insight into the frequency response and stability of signal
processing circuits.
B
Labs

Lab 1: RC-Circuits
Lab 2: Op-Amps
Lab 3: Frequency Response and Fourier Series
Lab 4: Fourier Transform and AM Radio
Lab 5: Sampling, Reconstruction, and Software Radio

Lab 1: RC-Circuits

Over the course of five laboratory sessions you will build a working AM radio receiver
that operates on the same principles as commercially available systems. The receiver
will consist of relatively simple subsystems examined and discussed in class. We will
build the receiver up slowly from its component subsystems, mastering each as it is
added.
In Lab 1 you will begin your AM receiver project with a study of RC circuits.
Although RC circuits, consisting of resistors R and capacitors C, are simple, they
can perform many functions within a receiver circuit. They are often used as audio
filters (e.g., the circuitry behind the “bass” and “treble” knobs), and as you will see
later in this lab, envelope detectors, with the inclusion of diodes. Lab 1 starts with an
exercise that will familiarize you with sources and measuring instruments to be used
in the lab, and continues with a study of characteristics of capacitors and steady-state
and transient behavior in RC circuits. Then you convert an RC filter circuit into an
envelope detector and test it using a synthetic AM signal.

1 Prelab

Prelab exercises are meant to alert you to topics to be covered in each lab session.
Make sure to complete them before coming to lab since their solutions often will be
essential for understanding/explaining the results of your lab measurements.
(1) For the circuit of Figure 1, calculate the following:
(a) The RC time constant.
(b) The voltage v(t) across the capacitor 1 ms after the switch is closed,
assuming the capacitor is initially uncharged—express in terms of Vs .
(c) The initial current i(0+ ) that will flow in the circuit.
(2) Suppose you are given an unknown capacitor. Describe an experimental tech-
nique that you could use to determine its value.
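The prelab calculations all follow from the first-order step response v(t) = Vs(1 − e^{−t/RC}); a numerical sketch to check your hand results against (Vs is normalized to 1 V here):

```python
import math

R = 2e3      # ohms
C = 0.1e-6   # farads
tau = R * C  # (a) the RC time constant, in seconds

def v(t, Vs=1.0):
    """Capacitor voltage after the switch closes at t = 0 (capacitor initially uncharged)."""
    return Vs * (1 - math.exp(-t / tau))

print(tau)       # 0.2 ms
print(v(1e-3))   # (b) capacitor voltage at t = 1 ms, as a fraction of Vs
print(1.0 / R)   # (c) initial current i(0+) = Vs/R, in amperes for Vs = 1 V
```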


[Circuit: source Vs in series with a switch that closes at t = 0 and resistor R = 2 kΩ, charging capacitor C = 0.1 μF; i(t) is the loop current and v(t) the capacitor voltage]
Figure 1 RC circuit for prelab exercise

2 Laboratory Exercise

• Equipment: Function generator, oscilloscope, protoboard, RG-58 cables with


BNC connectors, Y-cables, and wires.
• Components: 50 Ω resistor, 2 kΩ resistor, 0.1 μF capacitor, and 1N54 diode.

2.1 Generating and measuring waveforms


In the lab you will use a function generator, HP 33120A shown in Figure 2a, to
produce electrical signal waveforms—sinusoids, square waves, triangle waves, and
many others over a wide range of amplitudes and frequencies—applied across various
circuit components. The components, e.g., resistors, diodes, and so on, will be inter-
connected to one another on a protoboard (see Figure 2b) and you will use an
oscilloscope, HP 54645D shown in Figure 2c, to display and measure signal wave-
forms. For connections you will use coax cables with BNC and/or Y-endings and wires
of various lengths (see Figure 2d) as needed. User’s manuals for the HP 33120A and
HP 54645D can be downloaded from the class web site. The following sequence of
exercises should familiarize you with the lab equipment:


Figure 2 Your (a) function generator, (b) protoboard, (c) oscilloscope, (d) cables
and wires, and (e) resistor on a protoboard with source and probe connections
to (a) and (c).

(1) Place a 50 Ω resistor on your protoboard and use Y-cables to connect the
signal generator (output port) and oscilloscope (input port) across the resistor
terminals—see Figure 2e or ask your TA for help if you are confused about this
step.
(2) Press the power buttons of the scope and the function generator to turn both
instruments on.
(3) (a) Set the function generator to produce a co-sinusoid output with 5 kHz
frequency and 4 V peak-to-peak amplitude:
• Press “Freq,” “Enter Number,” “5,” “kHz.”
• Press “Ampl,” “Enter Number,” “4,” “Vpp.”

(b) Press “Auto-scale” on the scope and adjust, if needed, vertical and hori-
zontal scales further so that the scope display exhibits the expected co-
sinusoid waveform produced by the generator.
(c) Sketch what you see on the scope, labeling carefully horizontal and
vertical axes of your graph in terms of appropriate time and voltage
markers.

(4) The default setting of your function generator is to produce the specified voltage
waveform (e.g., the 4 V peak-to-peak co-sinusoid in the above measurement) for
a 50 Ω resistive load. An alternate setting, known as “High Z”, allows you to
specify the generator output in terms of open circuit voltage. To enter High Z
mode you can use the following steps (needed every time after turning on the
generator):
• Press “shift” and “enter” to enter the “MENU” mode.
• Press “>” three times until “D sys Menu” is highlighted.
• Press “V” twice until “50 Ohm” is highlighted.

• Press “>” once to select “High Z.”


• Press “Enter.”
Without modifying the protoboard connections used in step 3, switch the func-
tion generator to High Z mode as described above, and then reset the output
amplitude of the function generator to 4 V peak-to-peak once again. Observe
and make a sketch of the modified scope output.

(5) Remove the 50 Ω resistor from the protoboard without modifying the remaining
connections to the function generator and scope. Observe and make a sketch of
the modified scope output once again.

(6) Based on observations from steps 3, 4, and 5, and the explanation for High
Z mode provided in step 4, determine the output resistance (Thevenin) of the
function generator and the input (load) resistance of the scope. Explain your
reasoning.
(7) Measure the period of the co-sinusoidal signal displayed in step 5 by using the
time cursors of the oscilloscope and compare the measured period with the
inverse of the input frequency.
(8) Repeat step 7 using a square wave with a 50% duty cycle (i.e., the waveform
is positive 50% of the time and negative also 50% of the time) instead of the
co-sinusoid.
When you power up your function generator it will come up by default in 50 Ohm
mode. Remember to switch it to High Z mode if you want to specify open-circuit
voltage outputs (as in the next section, and in most experiments).

2.2 Capacitor characteristics


In this section, you will study the characteristics of the capacitor in the network of
Figure 3. Once you have built the network on the protoboard, complete these steps:
(1) Apply a 20 kHz co-sinusoidal signal vin (t) measuring 10 V peak-to-peak to
the RC circuit as shown in Figure 3. Display voltages vin (t) and vC (t) using
Channels 1 and 2 of the oscilloscope, respectively.
(a) What is the phase difference in degrees between vin (t) and vC (t)? This
can be approximated from inspection, but obtain an accurate measurement
using the time cursors on the oscilloscope. Record your measurement.
(b) Repeat step 1 with a 1 kHz co-sinusoidal wave input.
(c) Is vC (t) leading or lagging vin (t)? Hint: If one waveform is leading
another, “it happens” earlier in time.
(2) Make sure that you change the input frequency back to 20 kHz for this ques-
tion. Determine the current i(t) in the circuit by first measuring the voltage
vR (t) across the 2 kΩ resistor. You can measure vR (t) without moving your
oscilloscope test probes—instead, invert Channel 2 and add the two channels.
By KVL, vin = vR + vC , so vR = vin − vC . This sum can be displayed on the
oscilloscope using the +/- function button. What is the phase shift of the current
relative to vin (t)?
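Before (or after) measuring, you can predict the phase shifts in step 1 from the ideal voltage-divider model H(ω) = 1/(1 + jωRC) for the Figure 3 component values. The Python sketch below evaluates that model; it neglects generator and scope loading, so your measured values will differ somewhat.

```python
import math

R = 2e3      # ohms, Figure 3 resistor
C = 0.1e-6   # farads, Figure 3 capacitor

def vc_phase_deg(f_hz):
    """Phase of vC relative to vin for the ideal RC divider H = 1/(1 + jwRC)."""
    w = 2 * math.pi * f_hz
    return -math.degrees(math.atan(w * R * C))

print(vc_phase_deg(20e3))  # about -87.7 degrees: vC lags vin strongly
print(vc_phase_deg(1e3))   # about -51.5 degrees: less lag at lower frequency
```

The negative sign confirms that vC lags vin at both frequencies, with the lag approaching 90 degrees as the frequency rises.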

Figure 3 RC circuit for laboratory exercises: source νin (t) (Channel 1) driving a 2 kΩ resistor (voltage νR (t)) in series with a 0.1 μF capacitor (voltage νC (t), Channel 2); i(t) is the loop current.



2.3 RC time constants


To investigate the transient response of your RC circuit, change the function generator
output to a 100 Hz square wave measuring 1 V peak-to-peak with a 0.5 V DC offset.
As in the previous section, Channel 1 should display vin (t), and Channel 2 should
display vC (t).
Across its positive half-period, the square wave approximates a unit step and the
capacitor is charged. During the negative half-period, the input voltage is zero and
the capacitor discharges through the resistor starting from a peak voltage value vmax
and with a time constant RC. Thus, the voltage across the capacitor over the negative
one-half period is
vC (t) = vmax e−t/RC ,

and, when t = RC,

vC = vmax e−1 ≈ 0.368vmax .
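The 36.8% figure follows directly from the decay law; a one-line Python check using the lab component values:

```python
import math

R, C = 2e3, 0.1e-6   # lab values: 2 kOhm and 0.1 uF
tau = R * C          # RC time constant (0.2 ms here)

def vc_decay(t, vmax=1.0):
    """Capacitor voltage t seconds into the discharge, as a fraction of vmax."""
    return vmax * math.exp(-t / tau)

print(vc_decay(tau))  # ~0.368: one time constant leaves ~36.8% of vmax
```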

(1) Measure the time constant by determining the amount of time required for the
capacitor voltage to decay to 37% of vmax on the scope. Use the measurement
cursors and record the result, and sketch what you see on the oscilloscope.
(2) Compare this value to your theoretical value for RC (give percent error).

2.4 Frequency response


RC circuits have the very useful property that the co-sinusoidal voltage measured
across the capacitor or resistor changes in response to a change in the frequency of
the input co-sinusoid. This property provides the basis of the filters (and envelope
detectors) commonly found in AM radio receivers. You will investigate and graph-
ically display in this section the frequency dependence of capacitor voltage vC (t)
when the amplitude of vin (t) is held constant but its frequency is varied.
(1) Change the input waveform vin (t) back to a sine wave measuring 10 V peak-to-
peak with a zero DC offset. Adjust the frequency f of the co-sinusoidal vin (t)
to each of the following, and record the amplitude of the capacitor voltage vC (t)
for each.
Frequency f vc amplitude (V)
100 Hz
500 Hz
1 kHz
5 kHz
10 kHz
50 kHz

(2) Plot the amplitude versus frequency data collected above in the box below—
note that the axes are labeled using a logarithmic scale.
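You can anticipate the entries in your table from the first-order low-pass model |Vc| = Vpp/√(1 + (ωRC)²). The Python sketch below tabulates these predictions for the lab component values; it assumes the ideal divider model (no generator or scope loading), so expect small deviations from your measurements.

```python
import math

R, C = 2e3, 0.1e-6   # lab values
Vpp_in = 10.0        # constant 10 V peak-to-peak input

def vc_pp(f_hz):
    """Predicted peak-to-peak capacitor voltage: Vpp/sqrt(1 + (wRC)^2)."""
    w = 2 * math.pi * f_hz
    return Vpp_in / math.sqrt(1 + (w * R * C) ** 2)

for f in (100, 500, 1e3, 5e3, 10e3, 50e3):
    print(int(f), round(vc_pp(f), 3))   # amplitude falls as frequency rises
```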

(Blank log-log graph for your data: vertical axis "Amplitude" spanning 10^-2 to 10^1; horizontal axis "Frequency (Hz)" spanning 10^2 to 10^5.)

2.5 AM signal and signal envelope


An AM radio signal consists of a high-frequency sinusoid whose amplitude is modu-
lated, or scaled, by a message signal. The message signal is the speech or music
recorded at the radio station, and the objective of the AM radio receiver is to recover
this message signal to present it through a loudspeaker to the listener.
Figure 4 illustrates an AM signal with a co-sinusoidal amplitude modulation. The
horizontal axis represents time, and the vertical axis is signal voltage. You can see
that the amplitude of the high-frequency sinusoid changes over time. In this case, the
message signal is a low-frequency sinusoid. You can imagine the message signal as a
low-frequency sinusoid connecting the peaks of the high-frequency sinusoid. Because
the message signal connects the peaks of the high-frequency sinusoid, it is also called
the AM signal’s envelope, and a circuit that detects this envelope, producing the
message signal, is called an envelope detector.
The function generator can create an AM signal for you as follows:
(1) Connect the output of the function generator directly to Channel 1 of the oscil-
loscope.
(2) Set the function generator to create an AM signal with fc = 13 kHz and 4 V
peak-to-peak amplitude modulated by an 880 Hz sine-wave with 80% modu-
lation depth:
• Create a 13 kHz sine wave measuring 4 V peak-to-peak, no DC offset.
Figure 4 Illustration of AM radio signal (signal voltage versus time).

• Press “Shift,” then “AM” to enable amplitude modulation.


• Press “Shift,” then “Freq” to set the message signal frequency to 880
Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the height of the envelope.
(3) Adjust the oscilloscope until the display resembles Figure 4.
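The signal the generator produces can be modeled as s(t) = A[1 + m cos(ωm t)] cos(ωc t), the standard AM form. The Python sketch below (assuming this model, with A = 2 V for the 4 V peak-to-peak carrier and m = 0.8 for 80% depth) shows the envelope extremes you should see on the display.

```python
import math

fc, fm = 13e3, 880.0   # carrier and message frequencies (Hz)
m = 0.8                # 80% modulation depth
A = 2.0                # carrier amplitude (4 V peak-to-peak)

def am(t):
    """Standard AM model: carrier scaled by (1 + m*cos(wm*t))."""
    return A * (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)

# The envelope swings between A*(1+m) and A*(1-m):
print(A * (1 + m), A * (1 - m))   # 3.6 V and 0.4 V
```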

2.6 Envelope detector circuit

With a simple modification to your RC circuit, you can create an envelope detector
that will recover the message signal contained in the function generator’s AM signal.
Change your circuit to match Figure 5b. Make sure you reconnect the function gener-
ator to the circuit, as indicated.

Figure 5 (a) Circuit symbol and physical diagram for a diode, and (b) an
“envelope detector” circuit using a diode: νin (t) (Channel 1) drives the diode in
series with the parallel combination of a 2 kΩ resistor and 0.1 μF capacitor,
across which νC (t) is measured (Channel 2).

(1) View the voltage across the capacitor on the oscilloscope. What do you see?
Use the measurement cursors to determine the frequency of the output. Is this
the message signal?
(2) Can you explain how the circuit works? Hints: The diode in the circuit will
be conducting part of each cycle and non-conducting in the remainder—figure
out how the capacitor voltage will vary when the diode is conducting and non-
conducting. When is the RC time constant in the circuit relevant—when the
diode is conducting or not?
Also consider these questions:
(a) Is the envelope detector circuit linear or nonlinear? Explain.
(b) Recall that a capacitor is a charge-storage device. When you turn on your
radio, the capacitor will store an unknown charge. Will it be necessary to
account for this in the design of your receiver circuit? Why or why not?
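As a thinking aid for question (2), here is a crude Python simulation of an idealized detector: the diode is modeled as a perfect switch, so the capacitor tracks the input while the diode conducts and discharges through R otherwise. This is a sketch of the mechanism only, not an accurate model of your actual diode's behavior.

```python
import math

fc, fm, m, A = 13e3, 880.0, 0.8, 2.0   # AM signal parameters from Section 2.5
R, C = 2e3, 0.1e-6                     # detector components
dt = 1e-7                              # simulation time step (s)

def simulate(t_end=2e-3):
    vC, out = 0.0, []
    for n in range(int(t_end / dt)):
        t = n * dt
        vin = A * (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)
        if vin > vC:
            vC = vin                        # diode on: capacitor tracks the input
        else:
            vC *= math.exp(-dt / (R * C))   # diode off: RC discharge
        out.append(vC)
    return out

out = simulate()
print(max(out), min(out[len(out)//2:]))   # roughly the 3.6 V / 0.4 V envelope extremes
```

Because RC (0.2 ms) is several carrier periods but well under the 1.14 ms message period, the capacitor rides the envelope rather than the carrier.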

Important! Leave your envelope detector assembled on your protoboard! You will
need it in future lab sessions.

THE NEXT STEP

Already you have come far: Your circuit can detect the message-containing envelope
of an AM signal. In the next lab session, you will explore amplifier circuits. You will
build an audio amplifier and connect your envelope detector to it so that you can listen
to the message signal you recovered in this session.
Lab 2: Op-Amps

Labs 2 and 3 study operational amplifiers, or “op-amps.” Op-amps were originally


developed to implement mathematical operations in analog computers, hence their
name. Today, op-amps are commonly used to build amplifiers, active filters, and
buffers. You will work with many of these circuits in lab. The following table contrasts
ideal versus practical characteristics of typical op-amps.

Op-Amp Property        Ideal   In Practice
Gain A                 ∞       very large, ∼10^6, and constrained by supply voltage
Input resistance Ri    ∞       very large, ∼10^6 Ω
Output resistance Ro   0       very small, ∼15 Ω
Frequency response     flat    gain depends on frequency

The objective of this experiment is to gain experience in the design and construc-
tion of operational amplifier circuits. You also will examine some nonideal op-amp
behavior.
By the end of Lab 2, you will have designed and built two amplifiers for your
radio circuit. Once you have verified that they work, you will connect them to your
envelope detector from Lab 1 and listen to synthetic AM signals.

1 Prelab
In the prelab exercises, you will review the analysis of op-amp circuits and design
amplifiers for your radio circuit.


(1) Assuming ideal op-amps, derive an expression for the output voltage vo in the
circuit of Figure 1(a).
(2) Write two KCL equations in terms of vx that relate the output voltage to
input voltage for the circuit in Figure 1(b) (you do not need to solve the KCL
equations). Simplify your expressions as much as possible.

Figure 1 Circuits for prelab analysis: three op-amp circuits (a), (b), and (c), with inputs ν1 , ν2 , νi , node voltage νx , resistors R1 –R5 , and output νo .

(3) Again assuming an ideal op-amp, derive an expression for the output voltage
vo in the circuit in Figure 1(c).
(a) For a gain vo /vi of 2, how must R1 and R2 be related?
(b) For a gain vo /vi of 11, how must R1 and R2 be related?
(c) How do you build two amplifiers, one with a gain of two and one with a
gain of 11, given four resistors: one 2 kΩ, two 10 kΩ, and one 20 kΩ?
(d) Using the pin-out diagram in Figure 2 as a reference, draw how you will
wire the amplifier with a gain of 11. Draw in resistors and wires as needed.
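For Figure 1(c), assuming the standard noninverting topology (R2 feeding the output back to the inverting input, R1 from the inverting input to ground), the gain is 1 + R2/R1. The Python check below uses illustrative resistor pairs; verify the formula against your own derivation before relying on it.

```python
def gain(R1, R2):
    """Noninverting amplifier gain, assuming R2 is the feedback resistor
    (output to inverting input) and R1 runs from the inverting input to ground."""
    return 1 + R2 / R1

print(gain(10e3, 10e3))   # equal resistors give a gain of 2.0
print(gain(1e3, 10e3))    # a 10:1 ratio gives a gain of 11.0
```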

2 Laboratory Exercise

• Equipment: Function generator, oscilloscope, protoboard, loudspeaker, audio


jack, RG-58 cables, Y-cables, and wires.
• Components: two 741 op-amps, one 0.1 μF capacitor, one 33 μF capacitor,
one 2 kΩ resistor, two 10 kΩ resistors, one 20 kΩ resistor, and one 100 kΩ
resistor.

2.1 Using the 741 op-amp


The 741 op-amp contains a complex circuit, composed of 17 bipolar-junction tran-
sistors, 5 diodes, 11 resistors, and a capacitor that provides internal compensation to
prevent oscillation. When wiring your circuits, however, you only have to make the
usual connections: the inverting and noninverting inputs, the output, and the positive
and negative DC supplies. Figure 2 labels and describes the pins.

Figure 2 Pin-out diagram for the 741 op-amp:
Pins 1, 5   offset null correction—not used
Pin 2       inverting input (−)
Pin 3       noninverting input (+)
Pin 4       VCC − supply, set to −12 V (DC)
Pin 6       output voltage νo
Pin 7       VCC + supply, set to +12 V (DC)
Pin 8       not connected

• The DC supplies must be set carefully to power the op-amp without destroying it.
• The DC sources should be set to +12 V for VCC + supply and −12 V for
VCC − supply.
• The VCC + and VCC − supplies always stay at the same pins throughout the
experiment. Do not switch them or you will destroy the op amp!
• When you are ready to turn the power on, always turn on the DC supplies first,
then the AC supply.
• When you turn the circuit off, always turn off the AC first and then the DC.

2.2 An op-amp amplifier


(1) Configure the op-amp amplifier circuit shown in Figure 3 on your protoboard
using component values of R1 = 10 kΩ, R2 = 10 kΩ, Rf = 100 kΩ, and C =
0.1 μF. In this and all future circuits using op-amps, connect the circuit ground
to supply ground (the black terminal labeled “N” between + and −)—your TA
can show you how to do this. We recognize this circuit as an op-amp integrator
in the limit as Rf → ∞, so we will call the circuit with Rf = 100 kΩ an
integrating amplifier.

Figure 3 Integrating amplifier: input νi through R1 to the inverting input, feedback elements Rf and C from output νo back to the inverting input, and R2 from the noninverting input to ground.



(2) Apply a 500 Hz, 6 V peak-to-peak square wave to the input (with 50% duty
cycle). Sketch the input and output waveforms. Explain the shape of the output
waveform, and how it confirms that the circuit acts as an effective integrator.

(3) Switch the input waveform from square to co-sinusoid and decrease the frequency
to 100 Hz. Sketch the input and output waveforms again.

(4) Now slowly increase the input amplitude to 16 V peak-to-peak and describe
what happens to the shape of the output waveform. How do you explain what
you see? What is the peak-to-peak amplitude of the output waveform when the
input amplitude is 16 V peak-to-peak? If you were to increase the amplitude of
the input further, would that increase the amplitude of the output? Why or why
not?
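To anticipate what happens in step 4, the Python sketch below integrates a 100 Hz co-sinusoid through an ideal inverting integrator, vo = −(1/R1C)∫vi dt, and clamps the result at assumed ±12 V supply rails. The clipping level and the ideal-integrator model are assumptions; the real circuit (with Rf and op-amp limits) behaves only approximately this way.

```python
import math

R1, C = 10e3, 0.1e-6   # integrator input resistor and feedback capacitor
VSAT = 12.0            # assumed clipping level set by the +/-12 V supplies

def integrate_cos(vpp, f=100.0, dt=1e-6, t_end=20e-3):
    """Ideal inverting integrator vo = -(1/R1C) * integral of vi, clamped at the rails."""
    vo, out = 0.0, []
    for n in range(int(t_end / dt)):
        vi = (vpp / 2) * math.cos(2 * math.pi * f * n * dt)
        vo += -vi * dt / (R1 * C)
        vo = max(-VSAT, min(VSAT, vo))   # output cannot swing past the supplies
        out.append(vo)
    return out

out6 = integrate_cos(6.0)     # ideal output peak ~4.8 V: no clipping
out16 = integrate_cos(16.0)   # ideal output peak ~12.7 V: clips at the rails
print(max(out6), max(out16))
```

The gain of an ideal integrator at 100 Hz is 1/(ωR1C) ≈ 1.6, so a 16 V peak-to-peak input asks for about 25 V peak-to-peak out, and the output flattens at the rails instead.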

Figure 4 Noninverting amplifier, showing the signal i− (input νi at the noninverting input; feedback resistors R1 and R2 at the inverting input; output νo ).

2.3 Noninverting amplifiers


Now you will build the two amplifiers you designed in the prelab and connect them
to your envelope detector.
(1) First, build the noninverting amplifier with a gain of 11 that you designed in
the prelab. Make sure you connect the DC supplies and the bench ground to
your circuit before you apply any voltage to the op-amp inputs. Also, place the
circuit near the center of your protoboard, to allow for other circuits to be built
on both sides.
(2) Apply a 100 Hz sine wave input vi with 1 V peak-to-peak amplitude. Measure
vi and vo with the oscilloscope. Calculate the voltage gain vo /vi . How does this
compare with the theoretical value? What could account for the difference?
(3) Turn off the AC input, and then turn off the DC supplies. Without modifying
the noninverting amplifier you just built, build the noninverting amplifier with
a gain of two that you designed in the prelab. Place your new circuit to the
right side of your protoboard, allowing for some extra space in between the two
amplifiers.
(4) Turn on the DC supplies and connect the AC input (same as in step 2) to the
new amplifier (gain 2). Observe again the voltage gain vo /vi . Does it agree with
the design value?
(5) Remove the AC input. Place your envelope detector circuit (from Lab 1) in
between the two amplifier circuits as shown in Figure 5. Note that a 33 μF
capacitor is included in the envelope detector circuit to remove the DC compo-
nent from the input signal of the last amplifier (recall that a capacitor acts as an
open circuit at DC). Have your TA check your circuit before you continue.
The three-stage circuit that you just built will be part of your radio receiver in Lab 4.
As you saw in Lab 1, the envelope detector stage will recover the message from an
AM signal. In order for it to work, however, the voltage of the input signal must be
sufficiently high to turn the diode on and off—hence the first amplifier. The second
amplifier, which follows the envelope detector, increases the voltage of the recovered
message signal in order to drive a loudspeaker. It also provides a buffer between the

Figure 5 Three-stage circuit for radio: an amplifier with gain 11, followed by the envelope detector (diode, 2 kΩ resistor, and 0.1 μF capacitor, plus an additional 33 μF capacitor), followed by an amplifier with gain 2.

envelope detector and the loudspeaker, preventing the loudspeaker from changing the
time constant of the tuned envelope detector.
Until now, we have been displaying signals on our oscilloscopes. However,
signals in the audio frequency range also can be “displayed” acoustically using loud-
speakers. We next will use a corner of our protoboard, an audio jack, and a pair of
speakers to listen to a number of signal waveforms:

(1) Connect the function generator output to the speaker input (via the protoboard
and audio jack) and generate and listen to an 880 Hz signal. Repeat for a 13
kHz signal. Describe what you hear in each case—in what ways do the 880 Hz
and 13 kHz audio signals sound different to your ear?
(2) To test your three-stage circuit, create an AM signal with the function generator
(almost like in Lab 1):

• Set the function generator to create a 13 kHz sine wave measuring 0.2 V
peak-to-peak, no DC offset.
• Press “Shift,” then “AM” to enable amplitude modulation.
• Press “Shift,” then “Freq” to set the message-signal frequency to 880 Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the height of the envelope.
• Listen to the AM signal on the loudspeaker.

Turn on the DC supplies, and connect the function generator to the input of your
three-stage circuit from step 5 above. Connect the output of the same circuit to
the oscilloscope. Sketch what you see on the oscilloscope display. How does it
compare to the waveform you obtained in the last part of Lab 1? On the function
generator, press “Shift,” then “Freq” and sweep the message-signal frequency
from 100 Hz to 2000 Hz. Describe what you see.

(3) Disconnect the oscilloscope and replace it with a loudspeaker. What do you
hear? Sweep the message signal frequency from 100 Hz to 2000 Hz again.
Describe how the sound changes as the frequency is swept. Do you recognize
the 880 Hz sound from step 1 above?

Important! Leave your circuit on the protoboard but return the audio jack to your
TA.

The next step


In the next laboratory session, you will continue working with op-amps to build an
active filter that will remove noise from your radio signal. You also will use the active
filter to study the Fourier series.
Lab 3: Frequency Response and Fourier Series

In this lab you will build an active bandpass filter circuit with two capacitors and
an op-amp, and examine the response of the circuit to periodic inputs over a range
of frequencies. The same circuit will be used in Lab 4 in your AM radio receiver
system as an intermediate frequency (IF) filter, but in this current lab our main focus
will be on the frequency response H (ω) of the filter circuit and the Fourier series
of its periodic input and output signals. In particular we want to examine and gain
experience with the response of linear time-invariant circuits to periodic inputs.

1 Prelab

(1) Determine the compact-form trigonometric Fourier series of the square wave
signal, f (t), with a period T and amplitude A shown in Figure 1. That is, find
cn and θn such that

f (t) = c0 /2 + Σ∞ n=1 cn cos(nωo t + θn ),

where ωo = 2π/T . Notice c0 /2 = 0. How could you have determined that without
any calculation?
(2) Consider the circuit in Figure 2 where vi (t) is a co-sinusoidal input with some
radian frequency ω.
(a) What is the phasor gain Vo /Vi in the circuit as ω → 0? (Hint: How does one
model a capacitor at DC—open or short?)


Figure 1 Square wave signal for prelab: f (t) alternates between +A and −A with period T .

Figure 2 Circuit for analysis in prelab and lab: an active band-pass filter built around an op-amp, with two 0.01 μF capacitors and resistors of 1 kΩ, 1.7 kΩ, 5 kΩ, and two 3.6 kΩ; input νi (t) and output νo (t).

(b) What is the gain Vo /Vi as ω → ∞? (Hint: Think of capacitor behavior in the
limit as ω → ∞.)
(c) In view of the answers to parts (a) and (b), and the fact that the circuit is
second-order (it contains two energy storage elements), try to guess what
kind of filter the system frequency response H (ω) ≡ Vo /Vi implements—
low-pass, high-pass, or band-pass? The amplitude response |H (ω)| of the
circuit will be measured in the lab.

2 Laboratory Exercise

• Equipment: Function generator, oscilloscope, protoboard, and wires.


• Components: 741 op-amp, two 0.01 μF capacitors, one 1 kΩ resistor, one 1.7
kΩ resistor, two 3.6 kΩ resistors, and one 5 kΩ resistor.

2.1 Frequency response H(ω)


The frequency response H (ω) of a linear and dissipative time-invariant circuit contains
all of the key information about the circuit that is needed to predict the circuit response
to arbitrary inputs. Its magnitude |H (ω)| is known as the amplitude response and
∠H (ω) usually is referred to as the phase response. In this section, you will construct

Figure 3 Pin-out diagram for the 741 op-amp:
Pins 1, 5   offset null correction—not used
Pin 2       inverting input (−)
Pin 3       noninverting input (+)
Pin 4       VCC − supply, set to −12 V (DC)
Pin 6       output voltage νo
Pin 7       VCC + supply, set to +12 V (DC)
Pin 8       not connected

an active bandpass filter circuit and measure its amplitude response over the frequency
range 1–20 kHz. Do the following:
(1) Construct the circuit shown in Figure 2 on your protoboard. For now, do not
connect it to the three-stage circuit from Lab 2. Remember the rules for wiring
the 741 op-amp, which are repeated in Figure 3.
(2) Turn on the DC supplies, then connect a 1 kHz sine wave with amplitude 1 V
peak-to-peak as the AC input vi (t). Display vi (t) and vo (t) on different channels
of the oscilloscope and verify the waveforms.
(3) Increase the function generator frequency from 1 kHz to 20 kHz in 1 kHz
increments. At each frequency enter the magnitude of the phasor voltage gain
Vo /Vi in the graph shown below—Vo /Vi is the system frequency response H (ω)
and its magnitude |Vo |/|Vi | is the system amplitude response |H (ω)|.

(Blank semilog graph for your data: vertical axis "Amplitude response" spanning 10^-2 to 10^1; horizontal axis "Frequency (kHz)" running from 0 to 20.)

(4) The center frequency ωo = 2πfo of a band-pass response H (ω) is defined as


the frequency at which the amplitude response |H (ω)| is maximized. What
is the center frequency fo in kHz units, and what is the maximum amplitude
response |H (ωo )| of the circuit? Estimate fo and |H (ωo )| from your graph as
accurately as you can.
(5) The 3 dB cutoff frequencies ωu = 2πfu and ωl = 2πfl are the frequencies
above and below ωo = 2πfo at which the amplitude response |H (ω)| is 1/√2 ≈
0.707 times its maximum value |H (ωo )|. The same frequencies also are known
as half-power cutoff frequencies since at frequencies ωu and ωl the output
signal power is one half the value at ωo , assuming equal input powers at all
three frequencies. Mark these cutoff frequencies as fu and fl on the horizontal
axis of your amplitude response plot shown above.
(6) Determine the 3 dB bandwidth B ≡ fu − fl of the bandpass filter in kHz units
and calculate the quality factor of the circuit defined as Q ≡ ωo /(2πB) = fo /B.
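The arithmetic in item 6 is straightforward; the Python lines below use hypothetical cutoff readings (substitute your own measured fo, fl, and fu from the graph).

```python
# Hypothetical example readings in kHz -- replace with your measurements.
fo, fl, fu = 7.0, 5.5, 9.0

B = fu - fl   # 3 dB bandwidth in kHz
Q = fo / B    # quality factor, Q = fo/B
print(B, Q)
```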

2.2 Displaying Fourier coefficients


In order to display the Fourier coefficients of a periodic signal on the oscilloscope,
we can use the built-in FFT function.1 On the “FFT screen” of your scope the
horizontal axis will represent frequency ω (like in the frequency response plot of the
last section) normalized by 2π, and you will see narrow spikes positioned at values
f = ω/2π equal to harmonic frequencies nωo /2π of the periodic input signal; spike
amplitudes will be proportional to compact Fourier coefficients cn = 2|Fn | in dB. The
next set of instructions tells you how to view the single Fourier coefficient (namely c1 )
of a co-sinusoidal signal:
(1) Connect Channel 1 of your oscilloscope to the input terminal of your circuit
from the previous section. Connect the output of the circuit to Channel 2.
(2) Set the function generator to a 1 kHz sinusoid with amplitude 500 mV peak-to-peak.
(3) Set the oscilloscope to compute the Fourier transform:
(a) Press “+/−.”
(b) Turn on Function 2.
(c) Press “Menu.”
(d) Press “Operation” until FFT appears.
(e) Set the Operand to Channel 1.
(f) Press “FFT Menu” (default window setting “Hanning” should be used).
(g) Adjust the time/div knob to set the frequency span to 24.4 kHz and the
center frequency to 12.2 kHz.

1 FFT stands for fast Fourier transform and it is a method for calculating Fourier transforms with sampled
signal data—see Example 9.26 in Section 9.3 of Chapter 9 to understand the relation of windowed Fourier
transforms to Fourier coefficients.

(4) Observe the output signal’s Fourier coefficient by setting the Operand to Channel
2 under the function menu. How does the FFT display change as you sweep
the frequency of the input from 1 kHz to 20 kHz? Describe what you see and
briefly explain what is happening.
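The relation cn = 2|Fn| can be verified offline with a short DFT computation. In the Python sketch below, the sample rate and record length are illustrative assumptions (not the scope's settings), chosen so the tone falls on an exact DFT bin and no spectral leakage occurs.

```python
import cmath, math

# Illustrative sampling setup: exact-bin tone, so no spectral leakage.
fs, N, f1, amp = 32e3, 1024, 1e3, 0.25   # 500 mV peak-to-peak -> 0.25 V amplitude
x = [amp * math.cos(2 * math.pi * f1 * n / fs) for n in range(N)]

k = int(f1 * N / fs)   # DFT bin holding the tone (32 here)
Fk = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
print(2 * abs(Fk))     # compact coefficient c1 = 2|F1| recovers the 0.25 V amplitude
```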

2.3 Fourier coefficients of a square wave


Now you will introduce a periodic signal with a more interesting set of Fourier
coefficients—a square-wave:
(1) Change the function generator setting to create a 15 kHz square wave with
amplitude 0.5 V peak-to-peak as the input to your circuit.
(2) Display the FFT of the square wave at the filter input by setting the Operand to
Channel 1. Keeping in mind your result for Problem 1 of the Prelab, describe
what you see on the screen.
(3) Display both the input and output in the time domain on the scope. Explain the
output taking into account the frequency response shape of your filter circuit.
(4) Repeat for a 10 kHz square wave and a 5 kHz square wave, explaining the
outputs.
(5) Still using the 5 kHz square wave, set the oscilloscope to display the FFT
of the output. What do you see on the scope? How does the Fourier domain
representation confirm your explanation of the output in the time domain?
(6) Repeat for a 10 kHz square wave and a 15 kHz square wave.
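The odd-harmonics-only structure you should see on the FFT display can be confirmed numerically. This Python sketch computes the compact Fourier coefficients cn = 2|Fn| from one sampled period of a ±A square wave (A = 0.25 V for the 0.5 V peak-to-peak setting); odd harmonics follow 4A/(nπ) and even harmonics vanish, consistent with Problem 1 of the Prelab.

```python
import cmath, math

A, N = 0.25, 1000                                 # +/-0.25 V square wave, one period
x = [A if n < N // 2 else -A for n in range(N)]   # sampled period: +A half, then -A half

def c(k):
    """Compact Fourier-series coefficient c_k = 2|F_k| from one sampled period."""
    Fk = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
    return 2 * abs(Fk)

for k in range(1, 7):
    print(k, round(c(k), 4))   # odd k ~ 4A/(k*pi); even k ~ 0
```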

Important! Leave your active filter assembled on your protoboard! You will need
it in the next lab session.

The next step


The active filter is the last component you will build for the AM radio receiver. In
Lab 4, you will combine your components from Labs 1 through 3 to create a working
AM radio receiver. The frequency-domain techniques you learned this week will be
essential to following the AM signal through each stage of the receiver system.
Lab 4: Fourier Transform and AM Radio

In Lab 4, you finally will connect all of your receiver components and tune in an
AM radio broadcast. You will follow the radio signal through the entire system, from
antenna to loudspeaker, in both the time domain and the frequency domain.

1 Prelab

You should prepare for this lab by reviewing Sections 8.3 and 8.4 of the text on AM
detection and superheterodyne receivers, familiarizing yourself with your own receiver
design shown in Figure 1, and answering the following questions:
(1) Suppose you want to tune your AM receiver in the lab to WDWS, an AM
station broadcasting from Champaign-Urbana with a carrier frequency fc =
ωc /2π = 1400 kHz. Given that the IF (Intermediate Frequency) of your receiver is
fIF = ωIF /2π = 13 kHz, to which LO (Local Oscillator) frequency fLO = ωLO /2π should
you set the function generator input of the mixer in your receiver (see Figure 1)
to be able to detect WDWS? (Hint: There are two possible answers; give both
of them.)
(2) Repeat 1, supposing you wish to listen to WILL, an AM broadcast at fc = 580
kHz.
(3) Sketch the amplitude response curve |HIF (ω)| of an ideal IF filter designed for
an IF of fIF = ωIF /2π = 13 kHz and a filter bandwidth of 10 kHz. Label the axes
of your plot carefully using appropriate tick marks and units.
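The mixing arithmetic behind questions 1 and 2 reduces to |fc − fLO| = fIF, so every carrier has two valid LO settings. The Python helper below illustrates this with a hypothetical 1000 kHz carrier; work the WDWS and WILL numbers yourself.

```python
def lo_frequencies(fc_khz, f_if_khz=13.0):
    """Both LO settings that mix a carrier at fc down to the IF: |fc - fLO| = fIF."""
    return (fc_khz - f_if_khz, fc_khz + f_if_khz)

print(lo_frequencies(1000.0))   # hypothetical 1000 kHz carrier -> (987.0, 1013.0)
```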


Figure 1 Superheterodyne AM receiver: Antenna → RF Amp (BPF) → Mixer (fed by the Local Oscillator) → IF Filter → IF Amp → Env. Detector → Audio Amp → Speaker, with test points TP1–TP4 along the chain.

2 Laboratory Exercise

• Equipment: Function generator, oscilloscope, protoboard, and wires.


• Components: Three-stage circuit from Lab 2, band-pass filter from Lab 3, RF
amplifier, and mixer.

2.1 Fourier transform

Your scope is capable of displaying the Fourier transform of its input signal. We have
already used this feature in Lab 3 in “observing” the Fourier coefficients of periodic
signal inputs. In this section we will learn how to examine nonperiodic inputs in the
frequency domain.

(1) No circuit is used for this part of the laboratory. Connect the function generator’s
output to Channel 1 of the oscilloscope.
(2) Set the function generator to create a 1 kHz square wave with amplitude 1
V peak-to-peak. Turn on burst mode by pressing “Shift” then “Burst.” In this
mode the generator outputs a single rectangular pulse (i.e., rect(t/τ )), which
is repeated at a 100 Hz rate. Sketch the burst signal output that you see on the
scope display. Confirm that pulses are generated at a 100 Hz rate (that is,
100 pulses per second, or a pulse every 10 ms).
(3) Set the oscilloscope to display the magnitude of the Fourier transform of a
segment of the input signal (containing a single rectangle):
• Press “+/−.”
• Turn on Function 2.
• Press “Menu.”
• Press “Operation” until FFT appears.
• Set the Operand to Channel 1.
• Press “FFT Menu.”

• Set the window to “Hanning.”1


• Set the units/div to 10 dB and the ref level to 0 dBV.
• Set the frequency span to 24.4 kHz and the center frequency to 12.2 kHz.
• The oscilloscope now displays |F (ω)| in dB units, defined as 20 log
|F (ω)|, where F (ω) is the Fourier transform of a windowed (see the
footnote regarding Hanning window) segment of the oscilloscope input
f (t). Since 20 log |F (ω)| = 10 log |F (ω)|2 , the scope display is also
related to the energy spectrum |F (ω)|2 of the segment of f (t). We will
refer to the display as the frequency spectrum of the input. Note that
the spectrum is shown only over positive frequencies f = ω/2π within the
frequency band specified in the last step above.
frequency band specified in the last step above.
(4) Sketch the frequency spectrum |F (ω)| (in dB) of the oscilloscope input and
compare with theory. (What is the Fourier Transform of a rect function?)
(5) Change the input to a sinc pulse, and again sketch the frequency spectrum and
compare with theory. To do this, press “Shift,” “Arb,” select “sinc,” and press
“enter.” Set the time/div to 500 μs and the frequency span to 48.8 kHz.
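For the comparison asked for in step 4, recall that a rectangular pulse of width τ has transform magnitude |F(f)| = τ|sinc(fτ)| with sinc(x) = sin(πx)/(πx). The Python sketch below assumes the burst pulse width equals one half-period of the 1 kHz square wave (τ = 0.5 ms) and gives the peak and first-null locations to compare against the scope display.

```python
import math

def rect_ft_mag(f_hz, tau):
    """|F(f)| of rect(t/tau): tau*|sinc(f*tau)|, with sinc(x) = sin(pi*x)/(pi*x)."""
    x = f_hz * tau
    return tau if x == 0 else tau * abs(math.sin(math.pi * x) / (math.pi * x))

tau = 0.5e-3   # assumed pulse width: one half-period of the 1 kHz square wave
print(rect_ft_mag(0.0, tau))   # spectral peak of height tau at f = 0
print(rect_ft_mag(2e3, tau))   # first null at f = 1/tau = 2 kHz
```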

2.2 AM signal in frequency domain


Amplitude Modulation (AM) is a communications scheme that allows many different
message signals to be transmitted in adjacent band-pass channels. Before the message
signal is multiplied by the high-frequency sinusoidal carrier, a DC component is
added so that the voltage of the message signal always is positive. This makes it
easy to recover the message signal from the envelope of the carrier. In Lab 1, you
synthesized an AM signal with the function generator and then displayed it on the
oscilloscope in the time domain. Let’s see how the same AM signal looks in the
frequency domain. (Hint: Use the modulation property to interpret what you will see
on scope display!)
(1) No circuit is used for this part of the laboratory. Connect the function generator’s
output to Channel 1 on the oscilloscope.
(2) Set the function generator to create an AM signal with fc = 13 kHz and 1 V
peak-to-peak amplitude modulated by an 880 Hz sine-wave with 80% modu-
lation depth.
• Create a 13 kHz sine wave measuring 1 V peak-to-peak, with no DC
offset.
• Press “Shift,” then “AM” to enable amplitude modulation.
• Press “Shift,” then “<,” then “∨,” to select the shape of the message
signal. You can select a sine, square, triangle, or arbitrary waveform by
pressing “>.” For this part of the lab, select a sine waveform.
1
As a result of this choice, the incoming signal f(t) is effectively multiplied with wH(t) ≡ rect(t/T) cos²(πt/T) prior to Fourier transformation. This procedure limits the length of the Fourier transform segment to the duration T of the window function wH(t). Alternative window functions such as wB(t) = rect(t/T) give rise to frequency spectra |F(ω)| with different resolution and sidelobe details.

• Press “Enter” to save the change and turn off the menu.
• Press “Shift,” then “Freq” to set the message signal frequency to 880
Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the DC component added to the message signal before modula-
tion.
(3) Set your oscilloscope to display the frequency spectrum of the input:
• Press “+/−.”
• Turn on Function 2.
• Press “Menu,” then “Operation” until FFT appears.
• Set the time/div to 2 ms.
• Set the units/div to 10 dB and the ref level to 0 dBV.
• Set the center frequency to 12.2 kHz.
• Set the frequency span to 24.4 kHz.
(4) Sketch the AM signal and its frequency spectrum using the oscilloscope display
and explain what you see in terms of the modulation property of the Fourier
transform.
(5) Change the shape of the message signal in the modulation menu from SINE to
SQUARE. Do not change the shape or frequency of the 13 kHz carrier signal.
Explain what you see. (Hint: See Example 9.26 in Section 9.3 of Chapter 9.)
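Before going to the bench, the expected spectrum can be predicted from the modulation property: multiplying the DC-shifted message by the carrier places spectral lines at fc and at the sidebands fc ± fm. A hedged Python/NumPy sketch (the 200 kHz simulation rate and 50 ms record length are arbitrary choices of ours, not instrument settings):

```python
import numpy as np

fs = 200_000                      # simulation rate, Hz (arbitrary choice)
t = np.arange(10_000) / fs        # 50 ms: an integer number of carrier and message cycles
fc, fm, m = 13_000, 880, 0.8      # carrier, message frequency, modulation depth

am = (1 + m*np.sin(2*np.pi*fm*t)) * np.sin(2*np.pi*fc*t)

S = np.abs(np.fft.rfft(am)) / len(t)        # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(t), 1/fs)       # bin spacing: 20 Hz

# Modulation property: spectral lines at fc and at the sidebands fc +/- fm
lines = np.sort(freqs[np.argsort(S)[-3:]])  # three strongest lines
# carrier (bin 650): S ~ 0.5; each sideband (bins 606 and 694): S ~ (m/2)(0.5) = 0.2
```

The three lines land at 12.12, 13, and 13.88 kHz, with each sideband m/2 = 0.4 times the carrier line, which is what the scope's FFT display should show.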

2.3 AM radio receiver


The most popular AM communications receiver is the superheterodyne receiver,
which was developed for greater sensitivity and selectivity. A block diagram for
the superheterodyne receiver is shown in Figure 1.
The antenna, RF amplifier, and frequency mixer all rely on electrical components
not covered in this textbook, but their effects on the incoming signal should be familiar.
You built the remaining components of the circuit in Labs 1 through 3. In this section,
you will combine all of the components to tune in an AM radio broadcast and follow
the signal from the antenna to the loudspeaker.
Perform the following steps:
(1) Connect the RF amplifier and frequency mixer modules provided. Make sure
the labels on the boxes are right side up and facing you. Connect a Y-cable to
the input stage of the RF amplifier—this will serve as a crude AM antenna.
Use the DC source to power the modules by connecting +12 V to the blue
terminals, −12 V to the purple terminals, and the bench ground (“N”) to the
black terminals.
(2) Connect the output of the frequency mixer to the input of your band-pass filter
from Lab 3 (which will take the place of the ideal IF filter discussed in Problem 3
in the Prelab). Connect the supply ground (“N”) to your circuit’s signal ground,
and connect the DC supplies to your op-amp.

(3) Connect the output of the band-pass filter to the input of your three-stage circuit
from Lab 2. Make sure all of the signal grounds are connected, and connect the
DC supplies to your op-amps.
(4) Turn on the DC supplies. Then connect the function generator to the local-
oscillator input on the frequency mixer. Tune in AM 1400 by selecting an
appropriate mixing frequency for the local oscillator as described below. (Hint:
Look back to Problem 1 in the Prelab.)
(5) Now you will follow the processing of the received RF signal into an audible
signal by displaying the time and frequency domain of the four test-point signals
on the oscilloscope. The test points are described below.
When displaying the signals from each of the test points, it will be your
task to select an appropriate time scale for the time-domain waveform and
center frequency and frequency span for its Fourier spectrum. If you select
inappropriate numbers, all you will see on the oscilloscope is noise. You may
ask the TA for hints, but give your choice some thought and discuss it with your
partner before displaying the signal.
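The mixing frequency asked for in step (4) follows from simple arithmetic. The small worked example below is a sketch using the station and IF values of this lab (the variable names are ours, not from the lab software):

```python
# Mixing AM 1400 down to a 13 kHz IF: the mixer output contains the sum and
# difference of the RF and LO frequencies, so either LO choice produces the IF.
f_station = 1400e3          # desired station carrier, Hz
f_if = 13e3                 # intermediate frequency, Hz

lo_high = f_station + f_if  # 1413 kHz ("high-side injection")
lo_low = f_station - f_if   # 1387 kHz ("low-side injection")

# With high-side injection, a station at lo_high + f_if (the "image station")
# would also mix down to 13 kHz, so it must be rejected ahead of the mixer.
f_image = lo_high + f_if    # 1426 kHz
```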

Test point 1: RF amplifier The antenna picks up the AM radio broadcast we are
trying to tune, along with many other unwanted signals. The antenna signal is then
passed through an LC resonator called the preselector, which acts as a crude band-pass
filter. The preselector removes some unwanted frequencies from the antenna signal
while preserving the frequencies associated with the desired AM radio broadcast. The
output of the preselector is amplified by the RF amplifier stage.
Connect an oscilloscope probe to the RF amplifier output (TP1). Sketch the
waveform in both the time and frequency domains. Set the oscilloscope for the FFT
as before, but set the center frequency to 1.2 MHz and the frequency span to 2.4
MHz. You should see at least one AM signal in the frequency domain, but nothing
discernible in the time domain. Touching your finger to one of the antenna terminals
will amplify the signal considerably.

Test point 2: frequency mixer The mixer multiplies the RF signal with the signal
coming from the local oscillator, which is tuned so that all subsequent processing is
independent of the radio broadcast’s carrier frequency. In most commercial receivers,
the mixer is tuned to produce an Intermediate Frequency (IF) of 455 kHz. For our
purposes, an IF of 13 kHz will suffice. The band-pass filter and envelope detector you
built are suited for an IF of 13 kHz.
The function generator will be our local oscillator, abbreviated as LO. To produce
an IF of 13 kHz, the LO must be set at a frequency 13 kHz above or below the carrier
signal of the station we are trying to tune. We will be tuning AM 1400, so set the LO
to be a 1413 kHz or a 1387 kHz sine wave with an amplitude of 400 mV peak-to-peak.
Make sure that the function generator’s AM feature is turned off.
You may have to adjust LO frequency slightly to better tune the station. Once
the station is tuned, probe TP2 and sketch the output in both the time domain and the
frequency domain. At TP2, the signal should be very noisy, just as it was at TP1.
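The claim that mixing shifts the broadcast down to a fixed IF can be checked numerically: multiplying a 1400 kHz carrier by a 1413 kHz LO yields components at the difference (13 kHz) and sum (2813 kHz) frequencies, and the IF filter keeps only the former. A Python/NumPy sketch (the 10 MHz simulation rate is an arbitrary choice of ours):

```python
import numpy as np

fs = 10_000_000                     # simulation rate, Hz (well above twice the sum frequency)
t = np.arange(10_000) / fs          # 1 ms: an integer number of cycles of both tones
f_rf, f_lo = 1_400_000, 1_413_000   # station carrier and high-side LO, Hz

mixed = np.cos(2*np.pi*f_rf*t) * np.cos(2*np.pi*f_lo*t)

S = np.abs(np.fft.rfft(mixed)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1/fs)        # bin spacing: 1 kHz

# cos(a)cos(b) = 0.5 cos(a-b) + 0.5 cos(a+b): lines at 13 kHz and 2813 kHz
lines = np.sort(freqs[np.argsort(S)[-2:]])
```

Retuning the LO moves whichever station you select onto the same 13 kHz difference frequency, which is why the rest of the receiver never changes.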

Test point 3: IF filter and amplifier The IF filter is used to select the signal
centered on the IF frequency and to reject all other signals. Receivers employing
higher IFs typically include ceramic IF filters that operate on a piezoelectric prin-
ciple. Although small and inexpensive, ceramic filters can have very sharply tuned
responses, which are needed with large IF compared to AM bandwidth. With lower
IF, such as 13 kHz, a sharply tuned response is not necessary, and thus even the low-Q
op-amp-based band-pass filter from Lab 3 that we are using is more than adequate.
Depending on the AM signal strength from the antenna, and the noise level, you
may find it necessary to add gain to the IF amplifier. Feel free to experiment with
different gain values (remember the design equation from Lab 2) to get an output
that can be demodulated by the envelope detector. An IF gain of about 30 is not
uncommon.
Probe the signal at TP3. Sketch the time waveform and the frequency spectrum.

Test point 4: envelope detector and audio amplifier The envelope detector
then recovers the message signal from the IF signal. Probe TP4 and sketch the time-
domain waveform and frequency spectrum.
At this point, if all stages of the AM radio behave as expected, hook up the output
of the audio amplifier to the speaker. Do you hear what you expect to hear?
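The envelope detector's rectify-then-low-pass action can be imitated numerically. In this Python/NumPy sketch (an ideal full-wave rectifier and a brick-wall low-pass stand in for the diode and RC filter of your circuit, and the 200 kHz simulation rate is an arbitrary choice of ours), the dominant AC component of the detector output is the 880 Hz message:

```python
import numpy as np

fs = 200_000                        # simulation rate, Hz (arbitrary)
t = np.arange(10_000) / fs          # 50 ms of signal
fc, fm, m = 13_000, 880, 0.8        # IF carrier, message frequency, modulation depth
am = (1 + m*np.sin(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)

rectified = np.abs(am)              # ideal full-wave rectifier stands in for the diode
R = np.fft.rfft(rectified)
freqs = np.fft.rfftfreq(len(t), 1/fs)
R[freqs > 5_000] = 0                # brick-wall low-pass stands in for the RC filter
envelope = np.fft.irfft(R, n=len(t))

E = np.abs(np.fft.rfft(envelope)) / len(t)
f_dominant = freqs[1 + np.argmax(E[1:])]     # strongest AC component of the output
```

Since the message rides on a positive DC offset, rectification followed by low-pass filtering recovers a scaled copy of 1 + m sin(2πfm t), which is exactly what the audio amplifier passes to the speaker.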

The next step


Now that your AM radio receiver is complete, you will turn your attention to a
“software radio” implementation of the same design using digital signal processing
in the next lab.
Lab 5: Sampling, Reconstruction, and Software Radio

Until this point, your study of signals and systems has concerned only the continuous-
time case,1 which dominated the early history of signal processing. About 50 years
ago, however, the development of the modern computer generated research interest in
digital signal processing (DSP), a type of discrete-time signal processing. Although
hardware limitations made most real-time DSP impractical at the time, the continuing
maturation of the computer has been matched with a continuing expansion of DSP.
Much of that expansion has been into areas previously dominated by continuous-
time systems: our telephone network, medical imaging, music recordings, wireless
communications, and many more.
You do not need to worry whether the time and effort you have invested in
studying continuous-time systems will be wasted because of the growth of DSP—
digital systems are practically always hybrids of analog and digital sub-systems.
Furthermore, many DSP systems are linear and time-invariant, meaning that the
same analysis techniques apply, although with some modifications. In this lab, you
will explore some of the parallels between continuous-time systems and DSP with
a “software radio” designed to the same specifications as the receiver circuit you
developed on your protoboard.

1
The term “continuous time” is used generically to refer to signals that are functions of a continuous independent variable. Often that variable represents time, but it may instead represent distance, etc. “Discrete time” is used in the same way.


[Figure 1 diagram: x(t) → H1(ω) → f(t) → A/D (sampling at interval T) → f(nT) → multiply by Σn δ(t − nT) → fT(t) → H2(ω) → y(t); the A/D-to-D/A path carries discrete samples, the rest is analog.]

Figure 1 A conceptual system that samples a bandlimited continuous-time signal f(t) and reconstructs a continuous-time output y(t). An A/D converter generates the samples f(nT) of its analog input f(t). An ideal D/A converter generates a signal fT(t) = Σn f(nT) δ(t − nT) from the samples f(nT) and low-pass filters fT(t) with H2(ω) to produce an analog y(t). In a practical D/A converter the impulse train Σn δ(t − nT) is replaced by a practical pulse train Σn p(t − nT) such that p(t) ∗ h2(t) is a close approximation of a delayed sinc(πt/T).

1 Prelab

Our software radio is typical of many DSP systems in that both the available input
and required output are continuous-time signals. The conversion of a continuous-
time input signal to a discrete-time signal is called sampling (or A/D conversion),
and the conversion of a discrete-time signal to a continuous-time output signal is
called reconstruction (or D/A conversion). As discussed in class, samples f(nT) of a bandlimited analog signal f(t) can be used to reconstruct f(t) exactly when the sampling interval T and signal bandwidth Ω = 2πB satisfy the Nyquist criterion T < 1/(2B).
This is illustrated by the hypothetical system shown in Figure 1, where the analog signal f(t) defined at the output stage of a low-pass filter H1(ω) has a bandwidth Ω = 2πB limited by the bandwidth Ω1 = 2πB1 of the filter. The A/D converter extracts the samples f(nT) from f(t) with a sampling interval of T. D/A conversion of the samples f(nT) into an analog signal y(t) can be envisioned as low-pass filtering of a hypothetical signal fT(t) = Σn f(nT) δ(t − nT) using the filter H2(ω). With an appropriate choice of H2(ω), the system output y(t) will be identical to f(t) so long as T < 1/(2B1). The reason for this can easily be appreciated by comparing the Fourier transforms F(ω) and FT(ω) of the signals f(t) and fT(t) with the help of Figure 2.
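The reconstruction claim can also be tested numerically: build a bandlimited f(t), keep only its samples f(nT) taken with T < 1/(2B), and re-synthesize with the ideal low-pass (sinc) interpolator. This Python/NumPy sketch uses arbitrary tones and rates of our choosing, and truncates the sample train, so agreement is close rather than exact:

```python
import numpy as np

B = 1_000                         # bandwidth of the test signal, Hz
T = 1 / (2.5 * B)                 # sampling interval satisfying T < 1/(2B)

def f(t):                         # bandlimited f(t): two tones below B
    return np.sin(2*np.pi*700*t) + 0.5*np.cos(2*np.pi*300*t)

t_dense = np.linspace(-0.01, 0.01, 2001)      # dense grid for comparison
n = np.arange(-200, 201)                      # finite slice of the sample train

# ideal D/A: y(t) = sum_n f(nT) sinc(pi (t - nT)/T); np.sinc(x) = sin(pi x)/(pi x)
y = sum(f(k*T) * np.sinc((t_dense - k*T)/T) for k in n)
```

The residual y(t) − f(t) shrinks further as more terms of the (in principle infinite) sample train are kept.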
The following prelab exercises concern the system shown in Figure 1. Assume that T = 1/44100 s (i.e., the sampling frequency is 1/T = 44100 Hz) and that the signal x(t) has the Fourier transform X(ω) shown in Figure 3.

[Figure 2 plots: F(ω) with height 1 and band edge 2πB; FT(ω) with height 1/T and markings at 2πB, π/T, and 2π/T.]

Figure 2 An example comparing the Fourier transforms of signals f(t) and fT(t) defined in Figure 1. Since for |ω| < π/T the two Fourier transforms have the same shape, low-pass filtering of fT(t) yields the original analog signal f(t). FT(ω) is constructed as a superposition of replicas of F(ω)/T shifted in ω by all integer multiples of 2π/T (see item 25 in Table 7.2 in the text).

Figure 3 Fourier transform of x(t) for prelab questions: X(ω) occupies the band from −22050 Hz to 22050 Hz on the ω/2π axis.

Figure 4 Filter frequency responses for the prelab problems: three pairs of responses H1(ω) and H2(ω), labeled (a), (b), and (c), each plotted against ω/2π over −22050 Hz to 22050 Hz.

(1) For frequency responses H1(ω) and H2(ω) in Figure 4a, sketch the Fourier
transforms of f(t), fT(t) = Σn f(nT) δ(t − nT), and y(t). Is y(t) a perfect
reconstruction of x(t)? Is y(t) a perfect reconstruction of f(t)?
(2) Now, consider an ideal H1 (ω) but a nonideal H2 (ω) given by Figure 4b. The
signals f (t) and fT (t) are unchanged, but sketch the Fourier transform of the
new y(t). Is y(t) a perfect reconstruction of f (t)?

(3) Now suppose a nonideal H1 (ω) and an ideal H2 (ω) given by Figure 4c. Sketch
the Fourier transform of f (t), fT (t), and y(t). Is y(t) a perfect reconstruction
of f (t)?
(4) Discuss the role of filters H1 (ω) and H2 (ω) in the system examined above. In
what ways do they impact the system output?

2 Laboratory Exercise

• Equipment: Function generator, PC with a sound card, and wires.


• Components: antenna, RF amplifier, mixer, stereo jack, and 33 μF capacitor.
• Software: MATLAB (R2006b) and softRx.m.

2.1 Sampling and reconstruction

In this section, you will observe a real system much like the one you studied in the
prelab exercises. The lab also will illustrate the phenomenon of aliasing, which occurs
when the analog input is undersampled.

(1) Connect the function generator’s output to the “mic in” jack at the back of the
computer at your lab station. Use BNC “Y” cables to bring the signal from the
lab equipment to the protoboard. Use stereo jacks and stereo cables to run the
signal from the protoboard to the computer. Ask your TA if you need help.
(2) In Windows, go to Start, Control Panel, and click Sound, Speech, and Audio
Devices. Next click on Adjust the system volume. Under the volume tab turn
the Device Volume all the way up. Then in the Audio tab click on Volume...
under Sound Recording. Make sure the Microphone box is selected and turn
the Microphone level all the way down. Also make sure the “wave” setting is
all the way up. Leave this window open, as it may be necessary to change the
gain if the signal is too strong or too weak during the lab exercises.
(3) In Windows, go to Start, Programs, MATLAB, R2006b and click MATLAB
R2006b (MATLAB versions are continuously updated. If you do not see
MATLAB R2006b, start the latest version installed on the computer). At
MATLAB command prompt, type “softRx.” This will launch the graphical
user interface (GUI) for the software AM receiver shown in Figure 5.
(4) Select “Output = Unprocessed Input” from the pull-down options menu near
the top left corner of the softRx GUI, in which case the analog “mic in” signal
is sampled and reconstructed as an analog signal as depicted in Figure 6 and
explained in the caption. Note the resemblance of the diagram to the system
studied in the prelab. The low-pass filters shown in Figure 6 are part of the sound
card and serve the same role as those in the prelab. Important: You should use
the zoom buttons at the top left corner of the GUI to zoom in on the waveform
if it looks like a solid line on the screen.

Figure 5 Screenshot of the softRx Graphical User Interface (GUI).

Figure 6 Block diagram of the system implemented by softRx when the “Output = Unprocessed Input” option is selected: x(t) → LPF → f(t) → sampling at interval T → f(nT) → multiply by Σn δ(t − nT) → fT(t) → LPF → y(t). Note that no signal processing is applied to the samples f(nT) before D/A conversion.

(5) Set the function generator to produce a 5 kHz sinusoid with amplitude 500
mV peak-to-peak. Connect the 33 μF capacitor in parallel with the stereo jack
(adding this capacitor will prevent saturation of the input port; you can remove
it to see its effect). Click the “Start Data Acquisition” button in the GUI to begin
sampling and reconstruction. Describe what you see in the time and frequency
plots of the output in the GUI.
(6) Slowly sweep the input frequency up to 19 kHz. Does the output look like a
perfect reconstruction of the input signal?

(7) Now, slowly sweep the input frequency from 19 to 25 kHz, passing through
22.05 kHz. What is the significance of the frequency 22.05 kHz? What happens
in the time and frequency domain? Does this look like a perfect reconstruction
or do we have an aliased component at the output? Which component(s) in the
system could be improved to reduce the aliasing effect? Note that the answer to
the last question is not the sampling frequency of the sound card or the sound
card itself.
(8) Finally, sweep the input frequency from 25 to 30 kHz. What do you observe?
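What step (7) demonstrates can be previewed with a quick numerical experiment (a Python/NumPy sketch, independent of the sound card): sampling a 25 kHz tone at 44.1 kHz produces samples indistinguishable from those of a 44100 − 25000 = 19100 Hz tone, so the reconstructed output appears at 19.1 kHz.

```python
import numpy as np

fs = 44_100                         # sound-card sampling rate, Hz
t = np.arange(fs) / fs              # 1 s of sample instants nT

f_in = 25_000                       # input above the fs/2 = 22 050 Hz limit
x = np.sin(2*np.pi*f_in*t)          # the sampled sequence f(nT)

S = np.abs(np.fft.rfft(x))
f_peak = np.fft.rfftfreq(len(t), 1/fs)[np.argmax(S)]
# all spectral energy shows up at the alias frequency fs - f_in = 19 100 Hz
```

As you sweep the generator from 19 to 25 kHz, the output tone correspondingly rises to 22.05 kHz and then folds back down.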

2.2 Digital filtering


In this section, you will examine the digital filter option of softRx.
(1) Select the “Output = IF Filtered Input” setting in the GUI. The samples f (nT )
will be processed by the digital IF filter shown in Figure 7 to generate samples
g(nT ). You will be asked to enter two cutoff frequencies for the filter. Accept
the default values for this step.

Figure 7 Block diagram for the Software Receiver when the “Output = IF Filtered Input” option is selected: x(t) → LPF → f(t) → sampling at interval T → f(nT) → IF filter → g(nT) → multiply by Σn δ(t − nT) → gT(t) → LPF → y(t).

(2) Note that the filter frequency-response depicted in the GUI depends on the cutoff
frequency inputs. Change them, after clicking “Change Cutoff Frequencies,”
from fcl = 8 and fcu = 18 kHz to 10 and 16 kHz, respectively, and observe
the new filter response.
(3) Switch the input waveform from the sinusoid to a square wave with f = 1 kHz
and describe what you see at the filter output.
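What you should see can be predicted: a 1 kHz square wave has odd harmonics at 1, 3, 5, ... kHz with amplitudes 4/(kπ), so a 10–16 kHz band-pass keeps essentially only the 11, 13, and 15 kHz harmonics. Below is a frequency-domain sketch of such a filter in Python/NumPy; softRx's actual filter implementation may differ, and the brick-wall version here is only for intuition:

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs                         # 1 s of samples
f0 = 1_000                                     # square-wave fundamental, Hz
x = np.sign(np.sin(2*np.pi*f0*t + 0.1))        # small phase offset avoids samples on zero crossings

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(t), 1/fs)
X[(freqs < 10_000) | (freqs > 16_000)] = 0     # ideal brick-wall band-pass, 10-16 kHz
y = np.fft.irfft(X, n=len(t))

Y = np.abs(np.fft.rfft(y))
survivors = np.sort(freqs[np.argsort(Y)[-3:]]) # three strongest surviving lines
```

The filter output is therefore a sum of three small high-frequency sinusoids rather than anything resembling the input square wave.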

2.3 Receiving synthetic AM


Now you will generate a synthetic AM signal and process it with softRx.
(1) Set the function generator to create an AM signal:
• Set the function generator to create a 13 kHz sine wave measuring 500
mV peak-to-peak, no DC offset.
• Press “Shift,” then “AM” to enable amplitude modulation.
• Press “Shift,” then “<,” then “∨” to select the shape of the message signal.
You can select a sine, square, triangle, or arbitrary waveform by pressing
“>.” For this part of the lab, select a sine waveform.

• Press “Enter” to save the change and turn off the menu.
• Press “Shift,” then “Freq” to set the message signal frequency to 880
Hz.
• Press “Shift,” then “Level” to set the modulation depth to 80%. This
adjusts the DC component added to the message signal before modula-
tion.
(2) Select the “Output = Unprocessed Input” option. Sketch and describe what you
see on the GUI output.
(3) Select the “Output = IF Filtered Input” option. Sketch and describe what you
see.
(4) Select the “Output = Envelope Detected Input” option. This introduces an enve-
lope detector to the system as shown in Figure 8. You will be asked to enter
three cut-off frequencies, two for the bandpass filter and one for the lowpass
filter of the envelope detector. Sketch and describe what you see on the output
panel. Overall, what does this system accomplish?

Figure 8 Block diagram for the Software Receiver when “Output = Envelope Detected Input” is selected: x(t) → LPF → f(t) → sampling at interval T → f(nT) → IF filter → envelope detector → g(nT) → multiply by Σn δ(t − nT) → gT(t) → LPF → y(t).

2.4 Receiving broadcast AM


In this last section, you will mix a broadcast AM signal to an IF of 13 kHz, as in
Lab 4. You then will process the IF signal with softRx and eventually listen to its
detected version.
(1) Connect the antenna, RF amplifier, and frequency mixer modules provided (as
in Lab 4). Make sure the labels on the boxes are right side up and facing you.
Use the DC source to power the modules by connecting +12 V to the blue
terminals, −12 V to the purple terminals, and the bench ground (“N”) to the
black terminals. Subsequently turn on the DC supply.
(2) Use the function generator as your local oscillator (LO). Generate a sinusoid
with 400 mV peak-to-peak amplitude (do not forget to turn off the AM feature
of the function generator). What frequencies could be used as the LO frequency
to tune to AM 1400 kHz via a 13 kHz IF? Select one of those frequencies and
connect the function generator to the LO input of the frequency mixer.
(3) Connect the output of the frequency mixer to “mic in” on the sound card (the
33 μF capacitor should be taken out). Select the “Output = Unprocessed Input”
option in the GUI. Sketch and describe what you see on the output panel.

(4) Select the “Output = IF Filtered Input” option. Sketch and describe the output
that you see.
(5) Repeat 4 with the “Output = Envelope Detected Input” option. How do the
signals in each of these settings resemble those you observed in the continuous-
time case during Lab 4?
(6) Connect a pair of loudspeakers to “speaker out” to listen to AM 1400. Is your
software radio working as expected? Note: You might need to move the antenna
or change how you are holding the antenna in order to increase signal quality.
(7) Explore sound quality changes as you vary the following parameters:
(a) Increase the IF frequency to 16 kHz by varying the LO frequency.
(b) Set fcl = 14 kHz and fcu = 18 kHz.
(c) Set the lowpass filter cutoff frequency, fc , to 2 kHz.

The End
Congratulations on completing the ECE 210 labs! Over these five labs you have
learned and applied the most important principles of continuous-time signals and
systems and explored their parallels in discrete-time signals and systems. Advanced
coursework in ECE will require you to apply these principles again and again. You
are well prepared!
C Further Reading

1) D. M. Bressoud, A Radical Approach to Real Analysis, 2nd ed. Washington, DC: The Mathematical Association of America, 2006.
2) J. W. Brown and R. V. Churchill, Complex Variables and Applications, 7th ed. New York: McGraw-Hill, 2003.
3) B. P. Lathi, Linear Systems and Signals, 2nd ed. Oxford, England: Oxford University Press, 2004.
4) M. J. Lighthill, Introduction to Fourier Analysis and Generalized Functions. Cambridge, England: Cambridge University Press, 1964.
5) J. H. McClellan, R. W. Schafer, and M. A. Yoder, Signal Processing First. Upper Saddle River, NJ: Prentice Hall, 2003.
6) P. J. Nahin, The Science of Radio, 2nd ed. New York: Springer-Verlag, 2001.
7) J. W. Nilsson and S. A. Riedel, Electric Circuits, 8th ed. Upper Saddle River, NJ: Prentice Hall, 2008.
8) A. Papoulis, The Fourier Integral and Its Applications. New York: McGraw-Hill, 1962.
9) J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, 4th ed. Upper Saddle River, NJ: Prentice Hall, 2006.
10) A. V. Oppenheim, R. W. Schafer, and J. R. Buck, Discrete-Time Signal Processing, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1999.
11) D. Rutledge, The Electronics of Radio. Cambridge, England: Cambridge University Press, 1999.
12) A. S. Sedra and K. C. Smith, Microelectronic Circuits, 5th ed. Oxford, England: Oxford University Press, 2003.

Index

A/D, 329, 330
A/D conversion, 1
absolutely integrable, 190
AC, 25
admittance, 133
aliased, 326
AM, 259
amplitude, 124
amplitude response, 160
analog, 1
analog system, 87
aperiodic, 223
asymptotic, 405
available power, 60
average power, 145, 212

band-pass filter, 163
band-pass signal, 241
bandlimited, 325
bandwidth, 243
basis, 191
BIBO stability, 346, 369
buffer, 73
Butterworth filters, 438

capacitance, 24
capacitor, 24
carrier frequency, 259
cascade, 412
causal signal, 353, 374, 381
causality, 352
characteristic mode, 399
characteristic pole, 399
characteristic polynomial, 114, 399
co-sinusoidal, 127
coherent-detection, 265
compact form, 187
complex arithmetic, 450
complex conjugate, 453
complex number, 450
complex plane, 455
complex variable, 468
complex-valued functions, 465
conductance, 133
conjugate symmetry, 164
controlled source, 22
convolution, 282
cosine signal, 124
cover-up method, 383
current, 11
current-division, 35

D/A, 329, 330
D/A conversion, 1
DC, 25
DC component, 189
decibel, 174
dependent sources, 22
differentiator, 81
digital, 1
digital signal processing, 330
digitization, 1
Dirichlet conditions, 190
dissipative system, 112
distortionless, 427
distribution, 302
doublet, 310
DSP, 330
duty cycle, 202

electrical potential, 10
energy conservation, 14
energy spectrum, 241
energy storage, 25
envelope detection, 267
Euler’s identity, 457, 464
even function, 165, 193
exponential form, 187, 457

feedback, 416
flux linkage, 25
Fourier coefficients, 190
Fourier series, 185
Fourier transform, 224
Fourier transform pair, 226
frequency response, 159
fundamental frequency, 189

general response, 398
general solution, 96
Gibbs phenomenon, 202
ground, 9

harmonic, 189
harmonic distortion, 216
heterodyning, 263
hidden poles, 372
high-pass filter, 162
homogeneous ODE, 114
homogeneous solution, 95

ideal op-amp, 71
IF filter, 273
image station, 276
imaginary part, 452
impedance, 133
improper rational form, 387
impulse, 302
impulse response, 313, 338, 342
impulse train, 317, 327
independent current source, 20
independent voltage source, 19
inductance, 24
inductor, 24
initial state, 83, 88
initial value, 83
initial value problem, 95
instantaneous power, 143
integrator, 85
inverse Fourier transform, 224
inverse Laplace transform, 381
inverting input terminal, 70
inverting amplifier, 76

KCL, 16
KVL, 15

l’Hopital’s rule, 203
lag, 125
Laplace transform, 361
lead, 125
linearity, 49
LO frequency, 273
local oscillator, 273
loop current, 43
loop-current method, 43
low-pass filter, 161
low-pass signal, 241
LTI, 3, 91
LTIC, 353

marginal stability, 405
matched load, 62
mixer, 262
modulation, 261
multiplicity, 385

negative feedback, 73
node, 9
node voltage, 9
node-voltage method, 38
noninverting input terminal, 70
noninverting amplifier, 71
Norton current, 54
Norton equivalent, 53
Nyquist criterion, 326
Nyquist frequency, 331

odd function, 165, 193
ODE, 93
Ohm’s law, 17
op-amp, 68
open-circuit, 19
open-circuit voltage, 54
open-loop gain, 70
operating frequency, 259
orthogonal, 190

pair form, 452
parallel, 11, 414
parallel equivalent, 32
Parseval’s theorem, 213, 241
partial fraction expansion, 382
particular solution, 95
period, 124, 189
periodic signal, 185
PFE, 382
phase, 124
phase response, 160
phase shift, 124
phasor, 124
polar form, 456
pole, 365
positive feedback, 74
power, 13
power signal, 314
power spectrum, 212
projection, 191
proper rational form, 382

radian frequency, 124
rational form, 381
RC-circuit, 93
reactance, 133
real number pairs, 450
real part, 452
reconstruction formula, 325
rect, 227
rectangular form, 452
reference, 9
region of convergence, 363
repeated pole, 385
resistance, 17, 133
resistor, 17
resonance, 151
resonant frequency, 151
RL-circuit, 100
RLC-circuit, 111
ROC, 363

s-plane, 364
sampling interval, 325
sampling rate, 330
saturated, 70
series, 12
series equivalent, 32
short-circuit, 19
short-circuit current, 54
signal, 1
signum, 227
simple pole, 385
sine signal, 125
sinusoidal steady-state, 136
sound card, 330
source suppression, 52
source suppression method, 55
source transformation, 36
steady-state, 110
stored charge, 25
stored energy, 25
super loop, 47
super node, 42
superheterodyne, 277
superposition principle, 49
suppressed current source, 50
suppressed voltage source, 51
susceptance, 133

test signal method, 56
THD, 216
Thevenin equivalent, 53
Thevenin resistance, 55
Thevenin voltage, 54
time-constant, 96, 100
time-invariance, 91
total harmonic distortion, 216
transfer function, 361
transient, 107
trigonometric form, 187

undersample, 326
unit triangle, 231
unit-step, 227
unit-step response, 339
unstable, 345

voltage, 7
voltage division, 34
voltage drop, 9
voltage follower, 71
voltage gain, 70, 72
voltage rise, 9

zero, 370
zero-input response, 88, 398
zero-state response, 88, 248, 342, 398