
Lecture Slides for Signals and Systems

Edition 3.0

Michael D. Adams

Department of Electrical and Computer Engineering


University of Victoria
Victoria, British Columbia, Canada

To obtain the most recent version of these lecture slides (with functional hyperlinks) or for additional
information and resources related to these slides (including video lectures and errata), please visit:
http://www.ece.uvic.ca/~mdadams/sigsysbook
If you like these lecture slides, please consider posting a review of them at:
https://play.google.com/store/search?q=ISBN:9781550586787 or
http://books.google.com/books?vid=ISBN9781550586787

youtube.com/iamcanadian1867 github.com/mdadams @mdadams16


The author has taken care in the preparation of this document, but makes no expressed or implied warranty of any kind and assumes no
responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use
of the information or programs contained herein.

Copyright © 2013, 2016, 2020 Michael D. Adams


Published by the University of Victoria, Victoria, British Columbia, Canada
This document is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0) License. A copy
of this license can be found on page iii of this document. For a simple explanation of the rights granted by this license, see:
http://creativecommons.org/licenses/by-nc-nd/3.0/

This document was typeset with LaTeX.

ISBN 978-1-55058-677-0 (print)


ISBN 978-1-55058-678-7 (PDF)
License I

Creative Commons Legal Code


Attribution-NonCommercial-NoDerivs 3.0 Unported
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR
DAMAGES RESULTING FROM ITS USE.
License
THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE
COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY
COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS
AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE
TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY
BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS
CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND
CONDITIONS.
1. Definitions
a. "Adaptation" means a work based upon the Work, or upon the Work and
other pre-existing works, such as a translation, adaptation,
derivative work, arrangement of music or other alterations of a
literary or artistic work, or phonogram or performance and includes
cinematographic adaptations or any other form in which the Work may be
recast, transformed, or adapted including in any form recognizably
derived from the original, except that a work that constitutes a
Collection will not be considered an Adaptation for the purpose of
this License. For the avoidance of doubt, where the Work is a musical
work, performance or phonogram, the synchronization of the Work in
timed-relation with a moving image ("synching") will be considered an
Adaptation for the purpose of this License.
b. "Collection" means a collection of literary or artistic works, such as
encyclopedias and anthologies, or performances, phonograms or
broadcasts, or other works or subject matter other than works listed

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 iii
License II

in Section 1(f) below, which, by reason of the selection and


arrangement of their contents, constitute intellectual creations, in
which the Work is included in its entirety in unmodified form along
with one or more other contributions, each constituting separate and
independent works in themselves, which together are assembled into a
collective whole. A work that constitutes a Collection will not be
considered an Adaptation (as defined above) for the purposes of this
License.
c. "Distribute" means to make available to the public the original and
copies of the Work through sale or other transfer of ownership.
d. "Licensor" means the individual, individuals, entity or entities that
offer(s) the Work under the terms of this License.
e. "Original Author" means, in the case of a literary or artistic work,
the individual, individuals, entity or entities who created the Work
or if no individual or entity can be identified, the publisher; and in
addition (i) in the case of a performance the actors, singers,
musicians, dancers, and other persons who act, sing, deliver, declaim,
play in, interpret or otherwise perform literary or artistic works or
expressions of folklore; (ii) in the case of a phonogram the producer
being the person or legal entity who first fixes the sounds of a
performance or other sounds; and, (iii) in the case of broadcasts, the
organization that transmits the broadcast.
f. "Work" means the literary and/or artistic work offered under the terms
of this License including without limitation any production in the
literary, scientific and artistic domain, whatever may be the mode or
form of its expression including digital form, such as a book,
pamphlet and other writing; a lecture, address, sermon or other work
of the same nature; a dramatic or dramatico-musical work; a
choreographic work or entertainment in dumb show; a musical
composition with or without words; a cinematographic work to which are
assimilated works expressed by a process analogous to cinematography;
a work of drawing, painting, architecture, sculpture, engraving or
lithography; a photographic work to which are assimilated works
expressed by a process analogous to photography; a work of applied
art; an illustration, map, plan, sketch or three-dimensional work
relative to geography, topography, architecture or science; a
performance; a broadcast; a phonogram; a compilation of data to the
extent it is protected as a copyrightable work; or a work performed by
a variety or circus performer to the extent it is not otherwise
considered a literary or artistic work.
g. "You" means an individual or entity exercising rights under this

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 iv


License III

License who has not previously violated the terms of this License with
respect to the Work, or who has received express permission from the
Licensor to exercise rights under this License despite a previous
violation.
h. "Publicly Perform" means to perform public recitations of the Work and
to communicate to the public those public recitations, by any means or
process, including by wire or wireless means or public digital
performances; to make available to the public Works in such a way that
members of the public may access these Works from a place and at a
place individually chosen by them; to perform the Work to the public
by any means or process and the communication to the public of the
performances of the Work, including by public digital performance; to
broadcast and rebroadcast the Work by any means including signs,
sounds or images.
i. "Reproduce" means to make copies of the Work by any means including
without limitation by sound or visual recordings and the right of
fixation and reproducing fixations of the Work, including storage of a
protected performance or phonogram in digital form or other electronic
medium.
2. Fair Dealing Rights. Nothing in this License is intended to reduce,
limit, or restrict any uses free from copyright or rights arising from
limitations or exceptions that are provided for in connection with the
copyright protection under copyright law or other applicable laws.
3. License Grant. Subject to the terms and conditions of this License,
Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
perpetual (for the duration of the applicable copyright) license to
exercise the rights in the Work as stated below:
a. to Reproduce the Work, to incorporate the Work into one or more
Collections, and to Reproduce the Work as incorporated in the
Collections; and,
b. to Distribute and Publicly Perform the Work including as incorporated
in Collections.
The above rights may be exercised in all media and formats whether now
known or hereafter devised. The above rights include the right to make
such modifications as are technically necessary to exercise the rights in
other media and formats, but otherwise you have no rights to make
Adaptations. Subject to 8(f), all rights not expressly granted by Licensor

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 v


License IV

are hereby reserved, including but not limited to the rights set forth in
Section 4(d).
4. Restrictions. The license granted in Section 3 above is expressly made
subject to and limited by the following restrictions:
a. You may Distribute or Publicly Perform the Work only under the terms
of this License. You must include a copy of, or the Uniform Resource
Identifier (URI) for, this License with every copy of the Work You
Distribute or Publicly Perform. You may not offer or impose any terms
on the Work that restrict the terms of this License or the ability of
the recipient of the Work to exercise the rights granted to that
recipient under the terms of the License. You may not sublicense the
Work. You must keep intact all notices that refer to this License and
to the disclaimer of warranties with every copy of the Work You
Distribute or Publicly Perform. When You Distribute or Publicly
Perform the Work, You may not impose any effective technological
measures on the Work that restrict the ability of a recipient of the
Work from You to exercise the rights granted to that recipient under
the terms of the License. This Section 4(a) applies to the Work as
incorporated in a Collection, but this does not require the Collection
apart from the Work itself to be made subject to the terms of this
License. If You create a Collection, upon notice from any Licensor You
must, to the extent practicable, remove from the Collection any credit
as required by Section 4(c), as requested.
b. You may not exercise any of the rights granted to You in Section 3
above in any manner that is primarily intended for or directed toward
commercial advantage or private monetary compensation. The exchange of
the Work for other copyrighted works by means of digital file-sharing
or otherwise shall not be considered to be intended for or directed
toward commercial advantage or private monetary compensation, provided
there is no payment of any monetary compensation in connection with
the exchange of copyrighted works.
c. If You Distribute, or Publicly Perform the Work or Collections, You
must, unless a request has been made pursuant to Section 4(a), keep
intact all copyright notices for the Work and provide, reasonable to
the medium or means You are utilizing: (i) the name of the Original
Author (or pseudonym, if applicable) if supplied, and/or if the
Original Author and/or Licensor designate another party or parties
(e.g., a sponsor institute, publishing entity, journal) for
attribution ("Attribution Parties") in Licensor’s copyright notice,

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 vi


License V

terms of service or by other reasonable means, the name of such party


or parties; (ii) the title of the Work if supplied; (iii) to the
extent reasonably practicable, the URI, if any, that Licensor
specifies to be associated with the Work, unless such URI does not
refer to the copyright notice or licensing information for the Work.
The credit required by this Section 4(c) may be implemented in any
reasonable manner; provided, however, that in the case of a
Collection, at a minimum such credit will appear, if a credit for all
contributing authors of Collection appears, then as part of these
credits and in a manner at least as prominent as the credits for the
other contributing authors. For the avoidance of doubt, You may only
use the credit required by this Section for the purpose of attribution
in the manner set out above and, by exercising Your rights under this
License, You may not implicitly or explicitly assert or imply any
connection with, sponsorship or endorsement by the Original Author,
Licensor and/or Attribution Parties, as appropriate, of You or Your
use of the Work, without the separate, express prior written
permission of the Original Author, Licensor and/or Attribution
Parties.
d. For the avoidance of doubt:
i. Non-waivable Compulsory License Schemes. In those jurisdictions in
which the right to collect royalties through any statutory or
compulsory licensing scheme cannot be waived, the Licensor
reserves the exclusive right to collect such royalties for any
exercise by You of the rights granted under this License;
ii. Waivable Compulsory License Schemes. In those jurisdictions in
which the right to collect royalties through any statutory or
compulsory licensing scheme can be waived, the Licensor reserves
the exclusive right to collect such royalties for any exercise by
You of the rights granted under this License if Your exercise of
such rights is for a purpose or use which is otherwise than
noncommercial as permitted under Section 4(b) and otherwise waives
the right to collect royalties through any statutory or compulsory
licensing scheme; and,
iii. Voluntary License Schemes. The Licensor reserves the right to
collect royalties, whether individually or, in the event that the
Licensor is a member of a collecting society that administers
voluntary licensing schemes, via that society, from any exercise
by You of the rights granted under this License that is for a
purpose or use which is otherwise than noncommercial as permitted

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 vii
License VI

under Section 4(b).


e. Except as otherwise agreed in writing by the Licensor or as may be
otherwise permitted by applicable law, if You Reproduce, Distribute or
Publicly Perform the Work either by itself or as part of any
Collections, You must not distort, mutilate, modify or take other
derogatory action in relation to the Work which would be prejudicial
to the Original Author’s honor or reputation.
5. Representations, Warranties and Disclaimer
UNLESS OTHERWISE MUTUALLY AGREED BY THE PARTIES IN WRITING, LICENSOR
OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE,
INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY,
FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF
LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS,
WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION
OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE
LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR
ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES
ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS
BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. Termination
a. This License and the rights granted hereunder will terminate
automatically upon any breach by You of the terms of this License.
Individuals or entities who have received Collections from You under
this License, however, will not have their licenses terminated
provided such individuals or entities remain in full compliance with
those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any
termination of this License.
b. Subject to the above terms and conditions, the license granted here is
perpetual (for the duration of the applicable copyright in the Work).
Notwithstanding the above, Licensor reserves the right to release the
Work under different license terms or to stop distributing the Work at
any time; provided, however that any such election will not serve to
withdraw this License (or any other license that has been, or is
required to be, granted under the terms of this License), and this

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 viii
License VII

License will continue in full force and effect unless terminated as


stated above.
8. Miscellaneous
a. Each time You Distribute or Publicly Perform the Work or a Collection,
the Licensor offers to the recipient a license to the Work on the same
terms and conditions as the license granted to You under this License.
b. If any provision of this License is invalid or unenforceable under
applicable law, it shall not affect the validity or enforceability of
the remainder of the terms of this License, and without further action
by the parties to this agreement, such provision shall be reformed to
the minimum extent necessary to make such provision valid and
enforceable.
c. No term or provision of this License shall be deemed waived and no
breach consented to unless such waiver or consent shall be in writing
and signed by the party to be charged with such waiver or consent.
d. This License constitutes the entire agreement between the parties with
respect to the Work licensed here. There are no understandings,
agreements or representations with respect to the Work not specified
here. Licensor shall not be bound by any additional provisions that
may appear in any communication from You. This License may not be
modified without the mutual written agreement of the Licensor and You.
e. The rights granted under, and the subject matter referenced, in this
License were drafted utilizing the terminology of the Berne Convention
for the Protection of Literary and Artistic Works (as amended on
September 28, 1979), the Rome Convention of 1961, the WIPO Copyright
Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996
and the Universal Copyright Convention (as revised on July 24, 1971).
These rights and subject matter take effect in the relevant
jurisdiction in which the License terms are sought to be enforced
according to the corresponding provisions of the implementation of
those treaty provisions in the applicable national law. If the
standard suite of rights granted under applicable copyright law
includes additional rights not granted under this License, such
additional rights are deemed to be included in the License; this
License is not intended to restrict the license of any rights under
applicable law.

Creative Commons Notice

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 ix


License VIII

Creative Commons is not a party to this License, and makes no warranty


whatsoever in connection with the Work. Creative Commons will not be
liable to You or any party on any legal theory for any damages
whatsoever, including without limitation any general, special,
incidental or consequential damages arising in connection to this
license. Notwithstanding the foregoing two (2) sentences, if Creative
Commons has expressly identified itself as the Licensor hereunder, it
shall have all rights and obligations of Licensor.
Except for the limited purpose of indicating to the public that the
Work is licensed under the CCPL, Creative Commons does not authorize
the use by either party of the trademark "Creative Commons" or any
related trademark or logo of Creative Commons without the prior
written consent of Creative Commons. Any permitted use will be in
compliance with Creative Commons’ then-current trademark usage
guidelines, as may be published on its website or otherwise made
available upon request from time to time. For the avoidance of doubt,
this trademark restriction does not form part of this License.
Creative Commons may be contacted at http://creativecommons.org/.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 x


Other Textbooks and Lecture Slides by the Author I

1 M. D. Adams, Lecture Slides for Programming in C++ — The C++


Language, Libraries, Tools, and Other Topics (Version 2020-02-29),
University of Victoria, Victoria, BC, Canada, Feb. 2020, xxii + 2543 slides,
ISBN 978-1-55058-663-3 (print), ISBN 978-1-55058-664-0 (PDF).
Available from Google Books, Google Play Books, University of Victoria
Bookstore, and author’s web site
http://www.ece.uvic.ca/~mdadams/cppbook.
2 M. D. Adams, Multiresolution Signal and Geometry Processing: Filter
Banks, Wavelets, and Subdivision (Version 2013-09-26), University of
Victoria, Victoria, BC, Canada, Sept. 2013, xxxviii + 538 pages, ISBN
978-1-55058-507-0 (print), ISBN 978-1-55058-508-7 (PDF). Available
from Google Books, Google Play Books, University of Victoria Bookstore,
and author’s web site
http://www.ece.uvic.ca/~mdadams/waveletbook.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 xi


Other Textbooks and Lecture Slides by the Author II

3 M. D. Adams, Lecture Slides for Multiresolution Signal and Geometry


Processing (Version 2015-02-03), University of Victoria, Victoria, BC,
Canada, Feb. 2015, xi + 587 slides, ISBN 978-1-55058-535-3 (print),
ISBN 978-1-55058-536-0 (PDF). Available from Google Books, Google
Play Books, University of Victoria Bookstore, and author’s web site
http://www.ece.uvic.ca/~mdadams/waveletbook.
4 M. D. Adams, Signals and Systems, Edition 3.0, University of Victoria,
Victoria, BC, Canada, Dec. 2020, xliv + 680 pages, ISBN
978-1-55058-673-2 (print), ISBN 978-1-55058-674-9 (PDF). Available
from Google Books, Google Play Books, University of Victoria Bookstore,
and author’s web site
http://www.ece.uvic.ca/~mdadams/sigsysbook.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 xii
Part 0

Preface

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 xiii
About These Lecture Slides

 This document constitutes a detailed set of lecture slides on signals and


systems, covering both the continuous-time and discrete-time cases.
 These slides are organized in such a way as to facilitate the teaching of a
course that covers:
2 only the continuous-time case, or
2 only the discrete-time case, or
2 both the continuous-time and discrete-time cases.
 These slides are intended to be used in conjunction with the following
textbook:
2 M. D. Adams, Signals and Systems, Edition 3.0, University of Victoria,
Victoria, BC, Canada, Dec. 2020, xliv + 680 pages, ISBN
978-1-55058-673-2 (print), ISBN 978-1-55058-674-9 (PDF). Available from
Google Books, Google Play Books, University of Victoria Bookstore, and
author’s web site http://www.ece.uvic.ca/~mdadams/sigsysbook.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 xiv
Video Lectures

 The author has prepared video lectures for some of the material covered
in this textbook.
 All of the videos are hosted by YouTube and available through the author’s
YouTube channel:
2 https://www.youtube.com/iamcanadian1867

 The most up-to-date information about this video-lecture content can be


found at:
2 https://www.ece.uvic.ca/~mdadams/sigsysbook/#video_lectures

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 xv


Typesetting Conventions

 In a definition, the term being defined is often typeset in a font like this.
 To emphasize particular words, the words are typeset in a font like this.
 URLs are typeset like http://www.ece.uvic.ca/~mdadams.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 xvi
Part 1

Introduction

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 1


Signals

 A signal is a function of one or more variables that conveys information


about some (usually physical) phenomenon.
 For a function f, in the expression f(t1, t2, . . . , tn), each of the {tk} is
called an independent variable, while the function value itself is referred
to as a dependent variable.
 Some examples of signals include:
2 a voltage or current in an electronic circuit
2 the position, velocity, or acceleration of an object
2 a force or torque in a mechanical system
2 a flow rate of a liquid or gas in a chemical process
2 a digital image, digital video, or digital audio
2 a stock market index

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 2


Classification of Signals
 Number of independent variables (i.e., dimensionality):
2 A signal with one independent variable is said to be one dimensional (e.g.,

audio).
2 A signal with more than one independent variable is said to be
multi-dimensional (e.g., image).
 Continuous or discrete independent variables:
2 A signal with continuous independent variables is said to be continuous
time (CT) (e.g., voltage waveform).
2 A signal with discrete independent variables is said to be discrete time
(DT) (e.g., stock market index).
 Continuous or discrete dependent variable:
2 A signal with a continuous dependent variable is said to be continuous
valued (e.g., voltage waveform).
2 A signal with a discrete dependent variable is said to be discrete valued
(e.g., digital image).
 A continuous-valued CT signal is said to be analog (e.g., voltage
waveform).
 A discrete-valued DT signal is said to be digital (e.g., digital audio).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 3
Graphical Representation of Signals

[Plots: an example continuous-time (CT) signal x(t) and an example discrete-time (DT) signal x(n).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 4


Systems

 A system is an entity that processes one or more input signals in order to


produce one or more output signals.

[Block diagram: a system with input signals x0, x1, . . . , xM and output signals y0, y1, . . . , yN.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 5


Classification of Systems

 Number of inputs:
2 A system with one input is said to be single input (SI).
2 A system with more than one input is said to be multiple input (MI).
 Number of outputs:
2 A system with one output is said to be single output (SO).
2 A system with more than one output is said to be multiple output (MO).

 Types of signals processed:


2 A system can be classified in terms of the types of signals that it processes.
2 Consequently, terms such as the following (which describe signals) can
also be used to describe systems:
2 one-dimensional and multi-dimensional,
2 continuous-time (CT) and discrete-time (DT), and
2 analog and digital.
2 For example, a continuous-time (CT) system processes CT signals and a
discrete-time (DT) system processes DT signals.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 6


Signal Processing Systems

[Block diagram: processing a continuous-time signal with a discrete-time system. The continuous-time
input passes through a continuous-to-discrete-time (C/D) converter, a discrete-time system, and a
discrete-to-continuous-time (D/C) converter to produce the continuous-time output.]

[Block diagram: processing a discrete-time signal with a continuous-time system. The discrete-time
input passes through a discrete-to-continuous-time (D/C) converter, a continuous-time system, and a
continuous-to-discrete-time (C/D) converter to produce the discrete-time output.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 7


Communication Systems

[Block diagram: general structure of a communication system. A message signal enters the transmitter,
the transmitted signal passes through the channel, and the receiver converts the received signal into
an estimate of the message signal.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 8


Control Systems

[Block diagram: general structure of a feedback control system. The reference input is compared with
the feedback signal produced by the sensor to form an error signal, which drives the controller and
plant to produce the output.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 9


Why Study Signals and Systems?

 Engineers build systems that process/manipulate signals.


 We need a formal mathematical framework for the study of such systems.
 Such a framework is necessary in order to ensure that a system will meet
the required specifications (e.g., performance and safety).
 If a system fails to meet the required specifications or fails to work
altogether, negative consequences usually ensue.
 When a system fails to operate as expected, the consequences can
sometimes be catastrophic.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 10


System Failure Example: Tacoma Narrows Bridge

 The (original) Tacoma Narrows Bridge was a suspension bridge linking


Tacoma and Gig Harbor (WA, USA).
 This mile-long bridge, with a 2,800-foot main span, was the third largest
suspension bridge at the time of opening.
 Construction began in Nov. 1938 and took about 19 months to build at a
cost of $6,400,000.
 On July 1, 1940, the bridge opened to traffic.
 On Nov. 7, 1940 at approximately 11:00, the bridge collapsed during a
moderate (42 miles/hour) wind storm.
 The bridge was supposed to withstand winds of up to 120 miles/hour.
 The collapse was due to wind-induced vibrations and an unstable
mechanical system.
 Repair of the bridge was not possible.
 Fortunately, a dog trapped in an abandoned car was the only fatality.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 11


System Failure Example: Tacoma Narrows Bridge (Continued)

Image of bridge collapse omitted for copyright reasons.

A video of the bridge collapse can be found at


https://youtu.be/j-zczJXSxnw.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 12


Part 2

Preliminaries

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 13


Section 2.1

Functions, Sequences, System Operators, and Transforms

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 14


Sets

 A rational number is a number of the form x/y, where x and y are
integers and y ≠ 0 (i.e., a ratio of integers).
 For example, −5/3, 17/11, and 0 = 0/1 are rational numbers, whereas π and e
are irrational numbers (i.e., not rational).
 The symbols employed to denote several commonly-used sets are as
follows:
Symbol Set
Z integers
R real numbers
C complex numbers
Q rational numbers

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 15


Notation for Sets of Consecutive Integers

 For two integers a and b, we define the following notation for sets of
consecutive integers:

[a . . b] = {x ∈ Z : a ≤ x ≤ b},
[a . . b) = {x ∈ Z : a ≤ x < b},
(a . . b] = {x ∈ Z : a < x ≤ b}, and
(a . . b) = {x ∈ Z : a < x < b}.
 In this notation, a and b indicate the endpoints of the range for the set,
and the type of brackets used (i.e., parenthesis versus square bracket)
indicates whether each endpoint is included in the set.
 For example:
2 [0 . . 4] denotes the set of integers {0, 1, 2, 3, 4};
2 [0 . . 4) denotes the set of integers {0, 1, 2, 3}; and

2 [0 . . N − 1] and [0 . . N) both denote the set of integers {0, 1, 2, . . . , N − 1}.
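 For readers who think in code, this notation maps directly onto Python's built-in range; the short sketch below is an illustration added here (not part of the original slides).

    # Sketch: integer-interval notation expressed with Python's range.
    def closed_closed(a, b):
        """[a . . b] = {x in Z : a <= x <= b}"""
        return list(range(a, b + 1))

    def closed_open(a, b):
        """[a . . b) = {x in Z : a <= x < b}"""
        return list(range(a, b))

    print(closed_closed(0, 4))  # [0, 1, 2, 3, 4]
    print(closed_open(0, 4))    # [0, 1, 2, 3]
    N = 5
    print(closed_open(0, N) == closed_closed(0, N - 1))  # True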

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 16


Notation for Intervals on the Real Line
 For two real numbers a and b, we define the following notation for intervals
on the real line:

[a, b] = {x ∈ R : a ≤ x ≤ b},
(a, b) = {x ∈ R : a < x < b},
[a, b) = {x ∈ R : a ≤ x < b}, and
(a, b] = {x ∈ R : a < x ≤ b}.
 In this notation, a and b indicate the endpoints of the interval for the set,
and the type of brackets used (i.e., parenthesis versus square bracket)
indicate whether each endpoint is included in the set.
 For example:
2 [0, 100] denotes the set of all real numbers from 0 to 100, including both 0

and 100;
2 (−π, π] denotes the set of all real numbers from −π to π, excluding −π but

including π; and
2 [−π, π) denotes the set of all real numbers from −π to π, including −π but

excluding π.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 17
Mappings

 A mapping is a relationship involving two sets that associates each


element in one set, called the domain, with an element from the other set,
called the codomain.
 The notation f : A → B denotes a mapping f whose domain is the set A
and whose codomain is the set B.
 Example:
f : A → B, where A = {1, 2, 3, 4}, B = {0, 1, 2, 3}, and
f(x) = 0 for x ∈ {1, 2}, f(x) = 1 for x = 4, and f(x) = 2 for x = 3.
[Diagram: the mapping f drawn as arrows from each element of the domain A to its image in the codomain B.]

 Although many types of mappings exist, the types of most relevance to


our study of signals and systems are: functions, sequences, system
operators, and transforms.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 18
Functions

 A function is a mapping where the domain is a set that is continuous in


nature, such as the real numbers or complex numbers.
 In practice, the codomain is typically either the real numbers or complex
numbers.
 Functions are also commonly referred to as continuous-time (CT)
signals.
 Example:
2 Let f : R → R such that f(t) = t² (i.e., f is the squaring function).
2 The function f maps each real number t to the real number f(t) = t².
2 The domain and codomain are the real numbers.
2 Note that f is a function, whereas f(t) is a number (namely, the value of
the function f evaluated at t ).
 Herein, we will focus almost exclusively on functions of a single
independent variable (i.e., one-dimensional functions).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 19


Sequences
 A sequence is a mapping where the domain is a set that is discrete in
nature, such as the integers, or a subset thereof.
 In practice, the codomain is typically either the real numbers or complex
numbers.
 Sequences are also commonly referred to as discrete-time (DT) signals.
 Example:
2 Let f : Z+ → Z+ such that f(n) = n², where Z+ denotes the set of
(strictly) positive integers (i.e., f is the sequence of perfect squares).
2 The sequence f maps each (strictly) positive integer n to the (strictly)
positive integer f(n) = n².
2 The domain and codomain are Z+ (i.e., the positive integers).
2 Note that f is a sequence, whereas f(n) is a number (namely, the value of
the sequence f evaluated at n).


 As a matter of notation, the nth element of a sequence x is denoted as
either x(n) or x_n.
 Herein, we will focus almost exclusively on sequences with a single
independent variable (i.e., one-dimensional sequences).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 20
Remarks on Notation for Functions and Sequences
 For a real-valued function f of a real variable and an arbitrary real
number t , the expression f denotes the function f itself and the
expression f (t) denotes the value of the function f evaluated at t .
 That is, f is a function and f (t) is a number.
 Unfortunately, the practice of using f (t) to denote the function f is quite
common, although strictly speaking this is an abuse of notation.
 In contexts where imprecise notation may lead to problems, one should be
careful to clearly distinguish between a function and its value.
 For the real-valued functions f and g of a real variable and an arbitrary
real number t :
2 The expression f + g denotes a function, namely, the function formed by

adding the functions f and g.


2 The expression f (t) + g(t) denotes a number, namely, the sum of: 1) the

value of the function f evaluated at t ; and 2) the value of the function g


evaluated at t .
 Similar comments as the ones made above for functions also hold in the
case of sequences.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 21
Remarks on Notation for Functions and Sequences (Continued)

 To express that two functions f and g are equal, we can write either:
1 f = g; or
2 f (t) = g(t) for all t .
 Of the preceding two expressions, the first (i.e., f = g) is usually
preferable, as it is less verbose.
 For the functions f and g and an operation ◦ that is defined pointwise for
functions (such as addition, subtraction, multiplication, and division), the
following relationship holds:

( f ◦ g)(t) = f (t) ◦ g(t).


 Some operations ◦ involving functions (such as convolution, to be
discussed later) cannot be defined in a pointwise manner, in which case
( f ◦ g)(t) is a valid mathematical expression, while f (t) ◦ g(t) is not.
 Again, similar comments as the ones made above for functions also hold
in the case of sequences.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 22


System Operators
 A system operator is a mapping used to represent a system.
 We will focus exclusively on the case of single-input single-output
systems.
 A (single-input single-output) system operator maps a function or
sequence representing the input of a system to a function or sequence
representing the output of the system.
 The domain and codomain of a system operator are sets of functions or
sequences, not sets of numbers.
 Example:
2 Let H : F → F such that Hx(t) = 2x(t) (for all t ∈ R) and F is the set of

functions mapping R to R.
2 The system H maps a function to a function.
2 In particular, the domain and codomain are each F , which is a set of

functions.
2 The system H multiplies its input function x by a factor of 2 in order to
produce its output function Hx.
2 Note that Hx is a function, not a number.
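 Because a system operator maps functions to functions, it can be modeled in code as a higher-order function; the sketch below (an illustration added here, not from the slides) implements the example operator H with Hx(t) = 2x(t).

    # Sketch: a system operator realized as a higher-order function.
    def H(x):
        """Given a function x, return the function Hx defined by Hx(t) = 2 x(t)."""
        def Hx(t):
            return 2 * x(t)
        return Hx

    x = lambda t: t ** 2   # an input function x with x(t) = t^2
    y = H(x)               # y = Hx is itself a function, not a number
    print(y(3))            # Hx(3) = 2 * (3 ** 2) = 18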
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 23
Remarks on Operator Notation for CT Systems
 For a system operator H and a function x, Hx is the function produced as
the output of the system H when the input is the function x.
 Brackets around the operand of an operator are often omitted when not
required for grouping.
 For example, for an operator H, a function x, and a real number t , we
would normally prefer to write:
1 Hx instead of the equivalent expression H(x); and

2 Hx(t) instead of the equivalent expression H(x)(t).

 Also, note that Hx is a function and Hx(t) is a number (namely, the


value of the function Hx evaluated at t ).
 In the expression H(x1 + x2 ), the brackets are needed for grouping, since
H(x1 + x2 ) ≢ Hx1 + x2 (where “≢” means “not equivalent”).
 When multiple operators are applied, they group from right to left.
 For example, for the operators H1 and H2 , and the function x, the
expression H2 H1 x means H2 [H1 (x)].

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 24


Remarks on Operator Notation for DT Systems
 For a system operator H and a sequence x, Hx is the sequence
produced as the output of the system H when the input is the sequence x.
 Brackets around the operand of an operator are often omitted when not
required for grouping.
 For example, for an operator H, a sequence x, and an integer n, we would
normally prefer to write:
1 Hx instead of the equivalent expression H(x); and

2 Hx(n) instead of the equivalent expression H(x)(n).

 Also, note that Hx is a sequence and Hx(n) is a number (namely, the


value of the sequence Hx evaluated at n).
 In the expression H(x1 + x2 ), the brackets are needed for grouping, since
H(x1 + x2 ) ≢ Hx1 + x2 (where “≢” means “not equivalent”).
 When multiple operators are applied, they group from right to left.
 For example, for the operators H1 and H2 , and the sequence x, the
expression H2 H1 x means H2 [H1 (x)].

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 25


Transforms

 Later, we will be introduced to several types of mappings known as


transforms.
 Transforms have a mathematical structure similar to system operators.
 That is, transforms map functions/sequences to functions/sequences.
 Due to this similar structure, many of the earlier comments about system
operators also apply to the case of transforms.
 For example, the Fourier transform (introduced later) is denoted as F and
the result of applying the Fourier transform operator to the
function/sequence x is denoted as Fx.
 Some examples of transforms of interest in the study of signals and
systems are listed on the next slide.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 26


Examples of Transforms

Name Domain Codomain


CT Fourier Series T -periodic functions sequences
(with domain R) (with domain Z)
CT Fourier Transform functions functions
(with domain R) (with domain R)
Laplace Transform functions functions
(with domain R) (with domain C)
DT Fourier Series N -periodic sequences N -periodic sequences
(with domain Z) (with domain Z)
DT Fourier Transform sequences 2π-periodic functions
(with domain Z) (with domain R)
Z Transform sequences functions
(with domain Z) (with domain C)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 27


Section 2.2

Properties of Signals

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 28


Even Symmetry

 A function x is said to be even if it satisfies

x(t) = x(−t) for all t (where t is a real number).


 A sequence x is said to be even if it satisfies

x(n) = x(−n) for all n (where n is an integer).


 Geometrically, the graph of an even signal is symmetric with respect to
the vertical axis.
 Some examples of even signals are shown below.
[Plots: an even CT signal x(t) and an even DT signal x(n), each symmetric about the vertical axis.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 29


Odd Symmetry
 A function x is said to be odd if it satisfies

x(t) = −x(−t) for all t (where t is a real number).


 A sequence x is said to be odd if it satisfies

x(n) = −x(−n) for all n (where n is an integer).


 An odd signal x must be such that x(0) = 0.
 Geometrically, the graph of an odd signal is symmetric with respect to the
origin.
 Some examples of odd signals are shown below.
[Plots: an odd CT signal x(t) and an odd DT signal x(n), each symmetric about the origin.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 30


Conjugate Symmetry

 A function x is said to be conjugate symmetric if it satisfies

x(t) = x∗ (−t) for all t (where t is a real number).


 A sequence x is said to be conjugate symmetric if it satisfies

x(n) = x∗ (−n) for all n (where n is an integer).


 The real part of a conjugate symmetric function or sequence is even.
 The imaginary part of a conjugate symmetric function or sequence is odd.
 An example of a conjugate symmetric function is a complex sinusoid
x(t) = cos ωt + j sin ωt , where ω is a real constant.
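 As a quick numerical check (a sketch added here, not from the slides, with an arbitrarily chosen ω), the complex sinusoid indeed satisfies x(t) = x*(−t), with an even real part and an odd imaginary part.

    # Sketch: checking conjugate symmetry of x(t) = cos(wt) + j sin(wt) numerically.
    import cmath

    w = 3.0                               # arbitrary real constant (illustrative choice)
    x = lambda t: cmath.exp(1j * w * t)   # equals cos(wt) + j sin(wt)

    for t in (0.0, 0.5, 1.7):
        assert abs(x(t) - x(-t).conjugate()) < 1e-12   # x(t) = x*(-t)
        assert abs(x(t).real - x(-t).real) < 1e-12     # real part is even
        assert abs(x(t).imag + x(-t).imag) < 1e-12     # imaginary part is odd
    print("conjugate symmetry verified at the sampled points")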

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 31


Periodicity

 A function x is said to be periodic with period T (or T -periodic) if, for


some strictly-positive real constant T , the following condition holds:

x(t) = x(t + T ) for all t (where t is a real number).


 A sequence x is said to be periodic with period N (or N-periodic) if, for
some strictly-positive integer constant N , the following condition holds:

x(n) = x(n + N) for all n (where n is an integer).


 Some examples of periodic signals are shown below.
[Plots: a T-periodic CT signal x(t) and an N-periodic DT signal x(n).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 32


Periodicity (Continued 1)

 A function/sequence that is not periodic is said to be aperiodic.


 A T -periodic function x is said to have frequency 1/T and angular
frequency 2π/T.
 An N -periodic sequence x is said to have frequency 1/N and angular
frequency 2π/N.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 33


Periodicity (Continued 2)

 The period of a periodic signal is not unique. That is, a signal that is
periodic with period T is also periodic with period kT , for every (strictly)
positive integer k.
[Plot: a periodic function x(t) with markings showing that a function periodic with period T is also periodic with period 2T.]

 The smallest period with which a signal is periodic is called the


fundamental period and its corresponding frequency is called the
fundamental frequency.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 34


Part 3

Continuous-Time (CT) Signals and Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 35


Section 3.1

Independent- and Dependent-Variable Transformations

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 36


Time Shifting (Translation)

 Time shifting (also called translation) maps the input function x to the
output function y as given by

y(t) = x(t − b),

where b is a real number.


 Such a transformation shifts the function (to the left or right) along the time
axis.
 If b > 0, y is shifted to the right by |b|, relative to x (i.e., delayed in time).
 If b < 0, y is shifted to the left by |b|, relative to x (i.e., advanced in time).
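 In code, a time shift can be expressed by building a new function from x; the sketch below is an illustration added here (not from the slides).

    # Sketch: time shifting y(t) = x(t - b) as a transformation of a function.
    def time_shift(x, b):
        """Return the function y defined by y(t) = x(t - b)."""
        return lambda t: x(t - b)

    x = lambda t: 1.0 if 0 <= t <= 1 else 0.0   # rectangular pulse on [0, 1]
    y = time_shift(x, 2)                        # pulse delayed by 2, now on [2, 3]
    print(y(2.5), y(0.5))                       # 1.0 0.0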

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 37


Time Shifting (Translation): Example

[Plots: a function x(t) together with its time-shifted versions x(t − 1) (shifted right by 1) and x(t + 1) (shifted left by 1).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 38


Time Reversal (Reflection)

 Time reversal (also known as reflection) maps the input function x to the
output function y as given by

y(t) = x(−t).
 Geometrically, the output function y is a reflection of the input function x
about the (vertical) line t = 0.
[Plots: a function x(t) and its time-reversed version x(−t), the reflection of x about the vertical axis.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 39


Time Compression/Expansion (Dilation)

 Time compression/expansion (also called dilation) maps the input


function x to the output function y as given by

y(t) = x(at),

where a is a strictly positive real number.


 Such a transformation is associated with a compression/expansion along
the time axis.
 If a > 1, y is compressed along the horizontal axis by a factor of a, relative
to x.
 If a < 1, y is expanded (i.e., stretched) along the horizontal axis by a factor
of 1/a, relative to x.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 40


Time Compression/Expansion (Dilation): Example

[Plots: a function x(t) together with its time-compressed version x(2t) and time-expanded version x(t/2).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 41


Time Scaling (Dilation/Reflection)

 Time scaling maps the input function x to the output function y as given by

y(t) = x(at),

where a is a nonzero real number.


 Such a transformation is associated with a dilation (i.e.,
compression/expansion along the time axis) and/or time reversal.
 If |a| > 1, the function is compressed along the time axis by a factor of |a|.
 If |a| < 1, the function is expanded (i.e., stretched) along the time axis by
a factor of 1/|a|.
 If |a| = 1, the function is neither expanded nor compressed.
 If a < 0, the function is also time reversed.
 Dilation (i.e., expansion/compression) and time reversal commute.
 Time reversal is a special case of time scaling with a = −1; and time
compression/expansion is a special case of time scaling with a > 0.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 42


Time Scaling (Dilation/Reflection): Example

[Plots: a function x(t) together with x(2t) (compressed), x(t/2) (expanded), and x(−t) (time reversed).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 43


Combined Time Scaling and Time Shifting
 Consider a transformation that maps the input function x to the output
function y as given by
y(t) = x(at − b),
where a and b are real numbers and a ≠ 0.
 The above transformation can be shown to be the combination of a
time-scaling operation and time-shifting operation.
 Since time scaling and time shifting do not commute, we must be
particularly careful about the order in which these transformations are
applied.
 The above transformation has two distinct but equivalent interpretations:
1 first, time shifting x by b, and then time scaling the result by a;

2 first, time scaling x by a, and then time shifting the result by b/a.

 Note that the time shift is not by the same amount in both cases.
 In particular, note that when time scaling is applied first followed by time
shifting, the time shift is by b/a, not b.
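 The two interpretations can be checked numerically; the sketch below (an illustration added here, not from the slides) applies the operations in both orders and confirms that each reproduces x(at − b).

    # Sketch: y(t) = x(a t - b) obtained by two equivalent orderings of operations.
    def shift(x, b):   # time shift: returns the function t -> x(t - b)
        return lambda t: x(t - b)

    def scale(x, a):   # time scale: returns the function t -> x(a t)
        return lambda t: x(a * t)

    x = lambda t: max(0.0, 1.0 - abs(t))   # a triangular pulse
    a, b = 2.0, 1.0

    y_direct = lambda t: x(a * t - b)
    y1 = scale(shift(x, b), a)       # time shift by b, then time scale by a
    y2 = shift(scale(x, a), b / a)   # time scale by a, then time shift by b/a

    for t in (-1.0, 0.0, 0.25, 0.5, 1.0):
        assert abs(y1(t) - y_direct(t)) < 1e-12
        assert abs(y2(t) - y_direct(t)) < 1e-12
    print("both orderings reproduce x(a t - b)")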

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 44


Combined Time Scaling and Time Shifting: Example
Given x as plotted below, find y(t) = x(2t − 1).
Approach 1 (time shift by 1 and then time scale by 2): p(t) = x(t − 1) and y(t) = p(2t).
Approach 2 (time scale by 2 and then time shift by 1/2): q(t) = x(2t) and y(t) = q(t − 1/2).
[Plots: the given function x(t), the intermediate functions p and q, and the final result y for both approaches.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 45


Two Perspectives on Independent-Variable Transformations

 A transformation of the independent variable can be viewed in terms of


1 the effect that the transformation has on the function; or

2 the effect that the transformation has on the horizontal axis.

 This distinction is important because such a transformation has opposite


effects on the function and horizontal axis.
 For example, the (time-shifting) transformation that replaces t by t − b
(where b is a real number) in x(t) can be viewed as a transformation that
1 shifts the function x right by b units; or

2 shifts the horizontal axis left by b units.

 In our treatment of independent-variable transformations, we are only


interested in the effect that a transformation has on the function.
 If one is not careful to consider that we are interested in the function
perspective (as opposed to the axis perspective), many aspects of
independent-variable transformations will not make sense.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 46


Amplitude Scaling
 Amplitude scaling maps the input function x to the output function y as
given by
y(t) = ax(t),
where a is a real number.
 Geometrically, the output function y is expanded/compressed in amplitude
and/or reflected about the horizontal axis.
[Plots: a function x(t) together with the amplitude-scaled versions 2x(t), (1/2)x(t), and −2x(t).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 47


Amplitude Shifting

 Amplitude shifting maps the input function x to the output function y as


given by

y(t) = x(t) + b,

where b is a real number.


 Geometrically, amplitude shifting adds a vertical displacement to x.
[Plots: a function x(t) and its amplitude-shifted version x(t) − 2.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 48


Combined Amplitude Scaling and Amplitude Shifting

 We can also combine amplitude scaling and amplitude shifting


transformations.
 Consider a transformation that maps the input function x to the output
function y, as given by

y(t) = ax(t) + b,

where a and b are real numbers.


 Equivalently, the above transformation can be expressed as
y(t) = a[x(t) + b/a].
 The above transformation is equivalent to:
1 first amplitude scaling x by a, and then amplitude shifting the resulting

function by b; or
2 first amplitude shifting x by b/a, and then amplitude scaling the resulting

function by a.
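 A quick numerical check of this equivalence (an illustration added here, not from the slides):

    # Sketch: a x(t) + b equals a [x(t) + b/a] pointwise (for a != 0).
    x = lambda t: t ** 2 - 1.0
    a, b = -3.0, 2.0
    for t in (-1.5, 0.0, 2.0):
        assert abs((a * x(t) + b) - a * (x(t) + b / a)) < 1e-12
    print("equivalence holds at the sampled points")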

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 49


Section 3.2

Properties of Functions

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 50


Symmetry and Addition/Multiplication

 Sums involving even and odd functions have the following properties:
2 The sum of two even functions is even.
2 The sum of two odd functions is odd.
2 The sum of an even function and odd function is neither even nor odd,
provided that neither of the functions is identically zero.
 That is, the sum of functions with the same type of symmetry also has the
same type of symmetry.
 Products involving even and odd functions have the following properties:
2 The product of two even functions is even.
2 The product of two odd functions is even.
2 The product of an even function and an odd function is odd.
 That is, the product of functions with the same type of symmetry is even,
while the product of functions with opposite types of symmetry is odd.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 51


Decomposition of a Function into Even and Odd Parts

 Every function x has a unique representation of the form

x(t) = xe (t) + xo (t),

where the functions xe and xo are even and odd, respectively.


 In particular, the functions xe and xo are given by

xe (t) = (1/2)[x(t) + x(−t)] and xo (t) = (1/2)[x(t) − x(−t)].


 The functions xe and xo are called the even part and odd part of x,
respectively.
 For convenience, the even and odd parts of x are often denoted as
Even{x} and Odd{x}, respectively.
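 The decomposition is easy to compute and verify numerically; the sketch below is an illustration added here (not from the slides).

    # Sketch: even/odd decomposition x = xe + xo.
    def even_part(x):
        return lambda t: 0.5 * (x(t) + x(-t))

    def odd_part(x):
        return lambda t: 0.5 * (x(t) - x(-t))

    x = lambda t: t ** 3 + t ** 2 + 1.0
    xe, xo = even_part(x), odd_part(x)

    for t in (-2.0, 0.5, 3.0):
        assert abs(xe(t) - xe(-t)) < 1e-12         # xe is even
        assert abs(xo(t) + xo(-t)) < 1e-12         # xo is odd
        assert abs(xe(t) + xo(t) - x(t)) < 1e-12   # x = xe + xo
    print("decomposition verified at the sampled points")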

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 52


Sum of Periodic Functions

 Sum of periodic functions. For two periodic functions x1 and x2 with


fundamental periods T1 and T2 , respectively, and the sum y = x1 + x2 :
1 The sum y is periodic if and only if the ratio T1/T2 is a rational number (i.e.,
the quotient of two integers).
2 If y is periodic, its fundamental period is rT1 (or equivalently, qT2 , since
rT1 = qT2 ), where T1 /T2 = q/r and q and r are integers and coprime (i.e.,
have no common factors). (Note that rT1 is simply the least common
multiple of T1 and T2 .)
 Although the above theorem only directly addresses the case of the sum
of two functions, the case of N functions (where N > 2) can be handled by
applying the theorem repeatedly N − 1 times.
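 The period computation in the theorem can be carried out with exact rational arithmetic; the sketch below (an illustration added here, not from the slides) computes the fundamental period of the sum from T1 and T2.

    # Sketch: fundamental period of y = x1 + x2 when T1/T2 is rational.
    from fractions import Fraction

    def fundamental_period_of_sum(T1, T2):
        """Return r*T1, where T1/T2 = q/r in lowest terms (the LCM of T1 and T2)."""
        ratio = Fraction(T1) / Fraction(T2)   # T1/T2 = q/r with q and r coprime
        r = ratio.denominator
        return r * Fraction(T1)               # equals q * T2 as well

    # Example: T1 = 3/2 and T2 = 2 give T1/T2 = 3/4, so the period is 4*T1 = 3*T2 = 6.
    print(fundamental_period_of_sum(Fraction(3, 2), Fraction(2)))   # 6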

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 53


Right-Sided Functions
 A function x is said to be right sided if, for some (finite) real constant t0 ,
the following condition holds:
x(t) = 0 for all t < t0
(i.e., x is only potentially nonzero to the right of t0 ).
 An example of a right-sided function is shown below.
[Plot: a right-sided function x(t), which is zero for all t < t0.]

 A function x is said to be causal if


x(t) = 0 for all t < 0.
 A causal function is a special case of a right-sided function.
 A causal function is not to be confused with a causal system. In these two
contexts, the word “causal” has very different meanings.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 54
Left-Sided Functions
 A function x is said to be left sided if, for some (finite) real constant t0 , the
following condition holds:
x(t) = 0 for all t > t0
(i.e., x is only potentially nonzero to the left of t0 ).
 An example of a left-sided function is shown below.
[Plot: a left-sided function x(t), which is zero for all t > t0.]

 Similarly, a function x is said to be anticausal if


x(t) = 0 for all t > 0.
 An anticausal function is a special case of a left-sided function.
 An anticausal function is not to be confused with an anticausal system. In
these two contexts, the word “anticausal” has very different meanings.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 55
Finite-Duration and Two-Sided Functions
 A function that is both left sided and right sided is said to be finite
duration (or time limited).
 An example of a finite duration function is shown below.

[Plot: a finite-duration function x(t), which is zero outside the interval [t0, t1].]

 A function that is neither left sided nor right sided is said to be two sided.
 An example of a two-sided function is shown below.

[Plot: a two-sided function x(t), extending indefinitely in both directions.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 56


Bounded Functions

 A function x is said to be bounded if there exists some (finite) positive


real constant A such that

|x(t)| ≤ A for all t

(i.e., x(t) is finite for all t ).


 For example, the sine and cosine functions are bounded, since

|sin t| ≤ 1 for all t and |cos t| ≤ 1 for all t.

 In contrast, the tangent function and any nonconstant polynomial
function p (e.g., p(t) = t²) are unbounded, since

lim_{t→π/2} |tan t| = ∞ and lim_{|t|→∞} |p(t)| = ∞.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 57


Energy and Power of a Function

 The energy E contained in the function x is given by

E = ∫_{−∞}^{∞} |x(t)|² dt.

 A signal with finite energy is said to be an energy signal.

 The average power P contained in the function x is given by

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt.

 A signal with (nonzero) finite average power is said to be a power signal.
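 As a rough numerical illustration (a sketch assuming NumPy and SciPy; the decaying exponential and the cosine are just example signals), the energy and average power can be estimated as follows.

    import numpy as np
    from scipy.integrate import quad

    # Energy of x(t) = e^{-|t|}:  E = ∫ e^{-2|t|} dt = 1, so x is an energy signal.
    E_half, _ = quad(lambda t: np.exp(-2 * t), 0, np.inf)
    print(2 * E_half)                        # ≈ 1.0

    # Average power of x(t) = cos(t):  P = lim (1/T) ∫_{-T/2}^{T/2} cos²(t) dt = 1/2,
    # estimated here with a large but finite T and a simple Riemann sum.
    T = 1000.0
    t = np.linspace(-T / 2, T / 2, 200001)
    P = np.sum(np.cos(t) ** 2) * (t[1] - t[0]) / T
    print(P)                                 # ≈ 0.5, so cos is a power signal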

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 58


Section 3.3

Elementary Functions

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 59


Real Sinusoidal Functions
 A real sinusoidal function is a function of the form
x(t) = A cos(ωt + θ),
where A, ω, and θ are real constants.

 Such a function is periodic with fundamental period T = 2π/|ω| and
fundamental frequency |ω|.
 A real sinusoid has a plot resembling that shown below.
[Figure: plot of A cos(ωt + θ) versus t, taking the value A cos θ at t = 0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 60


Complex Exponential Functions

 A complex exponential function is a function of the form

x(t) = Aeλt ,

where A and λ are complex constants.


 A complex exponential can exhibit one of a number of distinct modes of
behavior, depending on the values of its parameters A and λ.
 For example, as special cases, complex exponentials include real
exponentials and complex sinusoids.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 61


Real Exponential Functions
 A real exponential function is a special case of a complex exponential
x(t) = Aeλt , where A and λ are restricted to be real numbers.
 A real exponential can exhibit one of three distinct modes of behavior,
depending on the value of λ, as illustrated below.
 If λ > 0, x(t) increases exponentially as t increases (i.e., a growing exponential).
 If λ < 0, x(t) decreases exponentially as t increases (i.e., a decaying exponential).
 If λ = 0, x(t) simply equals the constant A.
[Figure: plots of Ae^{λt} versus t for λ > 0 (growing), λ = 0 (constant), and λ < 0 (decaying), each equal to A at t = 0.]


Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 62
Complex Sinusoidal Functions

 A complex sinusoidal function is a special case of a complex exponential


x(t) = Aeλt , where A is complex and λ is purely imaginary (i.e.,
Re{λ} = 0).
 That is, a complex sinusoidal function is a function of the form

x(t) = Ae jωt ,

where A is complex and ω is real.


 By expressing A in polar form as A = |A| e jθ (where θ is real) and using
Euler’s relation, we can rewrite x(t) as

x(t) = |A| cos(ωt + θ) + j |A| sin(ωt + θ),

where Re{x(t)} = |A| cos(ωt + θ) and Im{x(t)} = |A| sin(ωt + θ).

 Thus, Re{x} and Im{x} are the same except for a time shift.

 Also, x is periodic with fundamental period T = 2π/|ω| and fundamental
frequency |ω|.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 63
Complex Sinusoidal Functions (Continued)

 The graphs of Re{x} and Im{x} have the forms shown below.

[Figure: plots of |A| cos(ωt + θ) and |A| sin(ωt + θ) versus t, each oscillating between −|A| and |A| and taking the values |A| cos θ and |A| sin θ, respectively, at t = 0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 64


Plot of x(t) = e jωt for ω ∈ {2π, −2π}
[Figure: three-dimensional plots of x(t) = e^{jωt} (real part, imaginary part, and t axes) for ω = 2π and for ω = −2π; each traces out a helix of unit radius, with the two cases winding in opposite directions.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 65


General Complex Exponential Functions
 In the most general case of a complex exponential function x(t) = Aeλt , A
and λ are both complex.
 Letting A = |A| e jθ and λ = σ + jω (where θ, σ, and ω are real), and
using Euler’s relation, we can rewrite x(t) as

x(t) = |A| e^{σt} cos(ωt + θ) + j |A| e^{σt} sin(ωt + θ),

where Re{x(t)} = |A| e^{σt} cos(ωt + θ) and Im{x(t)} = |A| e^{σt} sin(ωt + θ).

 Thus, Re{x} and Im{x} are each the product of a real exponential and
real sinusoid.
 One of three distinct modes of behavior is exhibited by x(t), depending on
the value of σ.
 If σ = 0, Re{x} and Im{x} are real sinusoids.
 If σ > 0, Re{x} and Im{x} are each the product of a real sinusoid and a
growing real exponential.
 If σ < 0, Re{x} and Im{x} are each the product of a real sinusoid and a
decaying real exponential.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 66
General Complex Exponential Functions (Continued)

 The three modes of behavior for Re{x} and Im{x} are illustrated below.

[Figure: plots of the three modes versus t for σ > 0, σ = 0, and σ < 0, each a sinusoid contained inside the envelope ±|A| e^{σt}.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 67


Relationship Between Complex Exponentials and Real
Sinusoids

 From Euler’s relation, a complex sinusoid can be expressed as the sum of


two real sinusoids as

Ae jωt = A cos(ωt) + jA sin(ωt).


 Moreover, a real sinusoid can be expressed as the sum of two complex
sinusoids using the identities
A cos(ωt + θ) = (A/2) [e^{j(ωt+θ)} + e^{−j(ωt+θ)}] and
A sin(ωt + θ) = (A/(2j)) [e^{j(ωt+θ)} − e^{−j(ωt+θ)}].
 Note that, above, we are simply restating results from the (appendix)
material on complex analysis.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 68


Unit-Step Function

 The unit-step function (also known as the Heaviside function), denoted


u, is defined as
u(t) = { 1,  t ≥ 0
       { 0,  otherwise.

 Due to the manner in which u is used in practice, the actual value of u(0)
is unimportant. Sometimes values of 0 and 1/2 are also used for u(0).
 A plot of this function is shown below.

[Figure: plot of u(t) versus t, equal to 0 for t < 0 and 1 for t ≥ 0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 69


Signum Function

 The signum function, denoted sgn, is defined as



1
 t >0
sgnt = 0 t =0


−1 t < 0.
 From its definition, one can see that the signum function simply computes
the sign of a number.
 A plot of this function is shown below.

[Figure: plot of sgn t versus t, equal to −1 for t < 0, 0 at t = 0, and 1 for t > 0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 70


Rectangular Function
 The rectangular function (also called the unit-rectangular pulse
function), denoted rect, is given by
rect t = { 1,  −1/2 ≤ t < 1/2
         { 0,  otherwise.
 Due to the manner in which the rect function is used in practice, the actual
value of rect t at t = ±1/2 is unimportant. Sometimes different values are
used from those specified above.
 A plot of this function is shown below.

[Figure: plot of rect t versus t, equal to 1 on [−1/2, 1/2) and 0 elsewhere.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 71


Indicator Function

 Functions and sequences that are one over some subset of their domain
and zero elsewhere appear very frequently in engineering (e.g., the
unit-step function and rectangular function).
 Indicator function notation provides a concise way to denote such
functions and sequences.
 The indicator function of a subset S of a set A, denoted χS , is defined as
χS (t) = { 1,  t ∈ S
         { 0,  otherwise.
 A rectangular pulse (defined on R) having an amplitude of 1, a leading
edge at a, and falling edge at b is χ[a,b] .
 The unit-step function (defined on R) is χ[0,∞) .
 The unit-rectangular pulse (defined on R) is χ[−1/2,1/2] .
 The unit-step sequence (defined on Z) is χ[0..∞) .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 72


Triangular Function

 The triangular function (also called the unit-triangular pulse function),


denoted tri, is defined as
tri t = { 1 − 2|t|,  |t| ≤ 1/2
        { 0,         otherwise.

 A plot of this function is shown below.

[Figure: plot of tri t versus t, a triangular pulse of height 1 supported on [−1/2, 1/2].]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 73


Cardinal Sine Function

 The cardinal sine function, denoted sinc, is given by


sinc t = (sin t)/t.
 By l’Hopital’s rule, sinc 0 = 1.
 A plot of this function for part of the real line is shown below.
[Note that the oscillations in sinct do not die out for finite t .]
[Figure: plot of sinc t on the interval [−10π, 10π], with a main lobe of height 1 at t = 0 and decaying oscillations on either side.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 74


Floor and Ceiling Functions

 The floor function, denoted ⌊·⌋, is a function that maps a real number x
to the largest integer not more than x.
 In other words, the floor function rounds a real number to the nearest
integer in the direction of negative infinity.
 For example,
⌊−1/2⌋ = −1, ⌊1/2⌋ = 0, and ⌊1⌋ = 1.
 The ceiling function, denoted ⌈·⌉, is a function that maps a real number x
to the smallest integer not less than x.
 In other words, the ceiling function rounds a real number to the nearest
integer in the direction of positive infinity.
 For example,
⌈−1/2⌉ = 0, ⌈1/2⌉ = 1, and ⌈1⌉ = 1.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 75


Some Properties of the Floor and Ceiling Functions

 Several useful properties of the floor and ceiling functions include:

⌊x + n⌋ = ⌊x⌋ + n for x ∈ R and n ∈ Z;
⌈x + n⌉ = ⌈x⌉ + n for x ∈ R and n ∈ Z;
⌈x⌉ = −⌊−x⌋ for x ∈ R;
⌊x⌋ = −⌈−x⌉ for x ∈ R;
⌈m/n⌉ = ⌊(m + n − 1)/n⌋ = ⌊(m − 1)/n⌋ + 1 for m, n ∈ Z and n > 0; and
⌊m/n⌋ = ⌈(m − n + 1)/n⌉ = ⌈(m + 1)/n⌉ − 1 for m, n ∈ Z and n > 0.
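 A quick Python check of these identities (a sketch using exact integer arithmetic; the tested ranges of m and n are arbitrary) is given below.

    def ceil_div(m, n):          # ⌈m/n⌉ for integers with n > 0
        return -((-m) // n)      # Python's // is floor division

    for m in range(-20, 21):
        for n in range(1, 11):
            assert ceil_div(m, n) == (m + n - 1) // n == (m - 1) // n + 1
            assert m // n == ceil_div(m - n + 1, n) == ceil_div(m + 1, n) - 1
    print("floor/ceiling identities hold for all tested m and n")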

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 76


Unit-Impulse Function

 The unit-impulse function (also known as the Dirac delta function or


delta function), denoted δ, is defined by the following two properties:

δ(t) = 0 for t ≠ 0 and

∫_{−∞}^{∞} δ(t) dt = 1.

 Technically, δ is not a function in the ordinary sense. Rather, it is what is


known as a generalized function. Consequently, the δ function
sometimes behaves in unusual ways.
 Graphically, the delta function is represented as shown below.
[Figure: δ(t) drawn as an arrow of height 1 at t = 0, and Kδ(t − t0 ) drawn as an arrow of height K at t = t0 .]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 77


Unit-Impulse Function as a Limit
 Define
gε (t) = { 1/ε,  |t| < ε/2
         { 0,    otherwise.
 The function gε has a plot of the form shown below.
[Figure: plot of gε (t), a rectangular pulse of height 1/ε on (−ε/2, ε/2).]
 Clearly, for any choice of ε, ∫_{−∞}^{∞} gε (t) dt = 1.
 The function δ can be obtained as the following limit:
δ(t) = lim gε (t).
ε→0
 That is, δ can be viewed as a limiting case of a rectangular pulse where
the pulse width becomes infinitesimally small and the pulse height
becomes infinitely large in such a way that the integral of the resulting
function remains unity.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 78
Properties of the Unit-Impulse Function

 Equivalence property. For any continuous function x and any real


constant t0 ,

x(t)δ(t − t0 ) = x(t0 )δ(t − t0 ).


 Sifting property. For any continuous function x and any real constant t0 ,
∫_{−∞}^{∞} x(t)δ(t − t0 ) dt = x(t0 ).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 79


Graphical Interpretation of Equivalence Property

[Figure: the time-shifted unit-impulse δ(t − t0 ), a function x, and their product x(t)δ(t − t0 ), which is an impulse at t0 of strength x(t0 ).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 80


Representing a Rectangular Pulse (Using Unit-Step Functions)

 For real constants a and b where a ≤ b, consider a function x of the form


x(t) = { 1,  a ≤ t < b
       { 0,  otherwise

(i.e., x is a rectangular pulse of height one, with a rising edge at a and


falling edge at b).
 The function x can be equivalently written as

x(t) = u(t − a) − u(t − b)

(i.e., the difference of two time-shifted unit-step functions).


 Unlike the original expression for x, this latter expression for x does not
involve multiple cases.
 In effect, by using unit-step functions, we have collapsed a formula
involving multiple cases into a single expression.
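 A small Python sketch of this idea (NumPy assumed; the names u and pulse are just illustrative) is shown below.

    import numpy as np

    def u(t):
        """Unit-step function (using the convention u(0) = 1)."""
        return np.where(t >= 0, 1.0, 0.0)

    def pulse(t, a, b):
        """Rectangular pulse of height 1 with rising edge at a and falling edge at b."""
        return u(t - a) - u(t - b)       # a single expression, no case analysis needed

    t = np.linspace(-1, 4, 11)
    print(pulse(t, 1.0, 3.0))            # 1 on [1, 3), 0 elsewhere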

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 81


Representing Functions Using Unit-Step Functions

 The idea from the previous slide can be extended to handle any function
that is defined in a piecewise manner (i.e., via an expression involving
multiple cases).
 That is, by using unit-step functions, we can always collapse a formula
involving multiple cases into a single expression.
 Often, simplifying a formula in this way can be quite beneficial.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 82


Section 3.4

Continuous-Time (CT) Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 83


CT Systems
 A system with input x and output y can be described by the equation

y = Hx,

where H denotes an operator (i.e., transformation).


 Note that the operator H maps a function to a function (not a number to
a number).
 Alternatively, we can express the above relationship using the notation
H
x −→ y.
 If clear from the context, the operator H is often omitted, yielding the
abbreviated notation

x → y.
 Note that the symbols “→” and “=” have very different meanings.
 The symbol “→” should be read as “produces” (not as “equals”).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 84
Block Diagram Representations

 Often, a system defined by the operator H and having the input x and
output y is represented in the form of a block diagram as shown below.

[Figure: block diagram of the system H with input x and output y.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 85


Interconnection of Systems
 Two basic ways in which systems can be interconnected are shown below.
[Figure: block diagrams of a series (cascade) connection of H1 followed by H2 and a parallel connection of H1 and H2 whose outputs are summed.]
 A series (or cascade) connection ties the output of one system to the input
of the other.
 The overall series-connected system is described by the equation

y = H2 H1 x.
 A parallel connection ties the inputs of both systems together and sums
their outputs.
 The overall parallel-connected system is described by the equation

y = H1 x + H2 x.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 86
Section 3.5

Properties of (CT) Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 87


Memory

 A system H is said to be memoryless if, for every real constant t0 , Hx(t0 )


does not depend on x(t) for any t ≠ t0 .
 In other words, a memoryless system is such that the value of its output at
any given point in time can depend on the value of its input at only the
same point in time.
 A system that is not memoryless is said to have memory.
 Although simple, a memoryless system is not very flexible, since its
current output value cannot rely on past or future values of the input.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 88


Memory (Continued)

[Figure: a time axis running from −∞ to ∞ with the point t0 marked. Consider the calculation of the output Hx at t0 : if the system H is memoryless, the output Hx at t0 can depend on the input x only at t0 .]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 89


Causality

 A system H is said to be causal if, for every real constant t0 , Hx(t0 ) does
not depend on x(t) for any t > t0 .
 In other words, a causal system is such that the value of its output at any
given point in time can depend on the value of its input at only the same or
earlier points in time (i.e., not later points in time).
 If the independent variable t represents time, a system must be causal in
order to be physically realizable.
 Noncausal systems can sometimes be useful in practice, however, since
the independent variable need not always represent time (e.g., the
independent variable might represent position).
 A memoryless system is always causal, although the converse is not
necessarily true.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 90


Causality (Continued)

[Figure: a time axis running from −∞ to ∞ with the region t ≤ t0 highlighted. Consider the calculation of the output Hx at t0 : if the system H is causal, the output Hx at t0 can depend on the input x only at points t ≤ t0 .]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 91


Invertibility
 The inverse of a system H (if it exists) is another system H−1 such that,
for every function x,
H−1 Hx = x
(i.e., the system formed by the cascade interconnection of H followed by
H−1 is a system whose input and output are equal).
 A system is said to be invertible if it has a corresponding inverse system
(i.e., its inverse exists).
 Equivalently, a system is invertible if its input can always be uniquely
determined from its output.
 An invertible system will always produce distinct outputs from any two
distinct inputs (i.e., x1 ≠ x2 ⇒ Hx1 ≠ Hx2 ).
 To show that a system is invertible, we simply find the inverse system.
 To show that a system is not invertible, we find two distinct inputs that
result in identical outputs (i.e., x1 ≠ x2 and Hx1 = Hx2 ).
 In practical terms, invertible systems are “nice” in the sense that their
effects can be undone.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 92
Invertibility (Continued)

 A system H−1 being the inverse of H means that the following two
systems are equivalent (i.e., H−1 H is an identity):

[Figure: block diagrams of System 1 (x passed through H and then H−1, so y = H−1 Hx) and System 2 (y = x).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 93


Bounded-Input Bounded-Output (BIBO) Stability
 A system H is said to be bounded-input bounded-output (BIBO)
stable if, for every bounded function x, Hx is bounded (i.e., |x(t)| < ∞ for
all t implies that |Hx(t)| < ∞ for all t ).
 In other words, a BIBO stable system is such that it guarantees to always
produce a bounded output as long as its input is bounded.
 To show that a system is BIBO stable, we must show that every bounded
input leads to a bounded output.
 To show that a system is not BIBO stable, we only need to find a single
bounded input that leads to an unbounded output.
 In practical terms, a BIBO stable system is well behaved in the sense that,
as long as the system input is finite everywhere (in its domain), the output
will also be finite everywhere.
 Usually, a system that is not BIBO stable will have serious safety issues.
 For example, a portable music player with a battery input of 3.7 volts and
headset output of ∞ volts would result in one vaporized human (and likely
a big lawsuit as well).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 94
Time Invariance (TI)

 A system H is said to be time invariant (TI) (or shift invariant (SI)) if,
for every function x and every real constant t0 , the following condition
holds:
Hx(t − t0 ) = Hx0 (t) for all t, where x0 (t) = x(t − t0 )
(i.e., H commutes with time shifts).
 In other words, a system is time invariant if a time shift (i.e., advance or
delay) in the input always results only in an identical time shift in the
output.
 A system that is not time invariant is said to be time varying.
 In simple terms, a time invariant system is a system whose behavior does
not change with respect to time.
 Practically speaking, compared to time-varying systems, time-invariant
systems are much easier to design and analyze, since their behavior
does not change with respect to time.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 95


Time Invariance (Continued)

 Let St0 denote an operator that applies a time shift of t0 to a function (i.e.,
St0 x(t) = x(t − t0 )).
 A system H is time invariant if and only if the following two systems are
equivalent (i.e., H commutes with St0 ):

[Figure: block diagrams of the two systems.]
 System 1: y = HSt0 x, i.e., y(t) = Hx0 (t), where x0 (t) = St0 x(t) = x(t − t0 ).
 System 2: y = St0 Hx, i.e., y(t) = Hx(t − t0 ).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 96


Additivity, Homogeneity, and Linearity
 A system H is said to be additive if, for all functions x1 and x2 , the
following condition holds:
H(x1 + x2 ) = Hx1 + Hx2
(i.e., H commutes with addition).
 A system H is said to be homogeneous if, for every function x and every
complex constant a, the following condition holds:
H(ax) = aHx
(i.e., H commutes with scalar multiplication).
 A system that is both additive and homogeneous is said to be linear.
 In other words, a system H is linear, if for all functions x1 and x2 and all
complex constants a1 and a2 , the following condition holds:
H(a1 x1 + a2 x2 ) = a1 Hx1 + a2 Hx2
(i.e., H commutes with linear combinations).
 The linearity property is also referred to as the superposition property.
 Practically speaking, linear systems are much easier to design and
analyze than nonlinear systems.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 97
Additivity, Homogeneity, and Linearity (Continued 1)

 The system H is additive if and only if the following two systems are
equivalent (i.e., H commutes with addition):

[Figure: block diagrams of System 1 (y = H(x1 + x2 )) and System 2 (y = Hx1 + Hx2 ).]

 The system H is homogeneous if and only if the following two systems


are equivalent (i.e., H commutes with scalar multiplication):

[Figure: block diagrams of System 1 (y = H(ax)) and System 2 (y = aHx).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 98


Additivity, Homogeneity, and Linearity (Continued 2)

 The system H is linear if and only if the following two systems are
equivalent (i.e., H commutes with linear combinations):

[Figure: block diagrams of System 1 (y = H(a1 x1 + a2 x2 )) and System 2 (y = a1 Hx1 + a2 Hx2 ).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 99


Eigenfunctions of Systems

 A function x is said to be an eigenfunction of the system H with the


eigenvalue λ if
Hx = λx,
where λ is a complex constant.
 In other words, the system H acts as an ideal amplifier for each of its
eigenfunctions x, where the amplifier gain is given by the corresponding
eigenvalue λ.
 Different systems have different eigenfunctions.
 Many of the mathematical tools developed for the study of CT systems
have eigenfunctions as their basis.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 100
Part 4

Continuous-Time Linear Time-Invariant (LTI) Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 101
Why Linear Time-Invariant (LTI) Systems?

 In engineering, linear time-invariant (LTI) systems play a very important


role.
 Very powerful mathematical tools have been developed for analyzing LTI
systems.
 LTI systems are much easier to analyze than systems that are not LTI.
 In practice, systems that are not LTI can be well approximated using LTI
models.
 So, even when dealing with systems that are not LTI, LTI systems still play
an important role.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 102
Section 4.1

Convolution

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 103
CT Convolution

 The (CT) convolution of the functions x and h, denoted x ∗ h, is defined


as the function
x ∗ h(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ.

 The convolution result x ∗ h evaluated at the point t is simply a weighted


average of the function x, where the weighting is given by h time reversed
and shifted by t .
 Herein, the asterisk symbol (i.e., “∗”) will always be used to denote
convolution, not multiplication.
 As we shall see, convolution is used extensively in systems theory.
 In particular, convolution has a special significance in the context of LTI
systems.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 104
Practical Convolution Computation

 To compute the convolution


x ∗ h(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ,

we proceed as follows:
1 Plot x(τ) and h(t − τ) as a function of τ.

2 Initially, consider an arbitrarily large negative value for t . This will result in
h(t − τ) being shifted very far to the left on the time axis.
3 Write the mathematical expression for x ∗ h(t).
4 Increase t gradually until the expression for x ∗ h(t) changes form. Record
the interval over which the expression for x ∗ h(t) was valid.
5 Repeat steps 3 and 4 until t is an arbitrarily large positive value. This
corresponds to h(t − τ) being shifted very far to the right on the time axis.
6 The results for the various intervals can be combined in order to obtain an
expression for x ∗ h(t) for all t .
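 A convolution can also be approximated numerically; the following Python sketch (NumPy assumed; the rectangular pulse and decaying exponential are only example functions) discretizes the convolution integral as a Riemann sum.

    import numpy as np

    dt = 0.001                                   # sampling interval of the approximation
    t = np.arange(-1.0, 5.0, dt)

    x = np.where((t >= 0) & (t < 1), 1.0, 0.0)   # x(t): rectangular pulse on [0, 1)
    h = np.exp(-t) * (t >= 0)                    # h(t): causal decaying exponential

    # (x * h)(t) ≈ dt * sum_k x(k dt) h(t - k dt)
    y = np.convolve(x, h) * dt                   # length len(x) + len(h) - 1
    t_y = np.arange(len(y)) * dt + 2 * t[0]      # time axis of the result

    # Exact result for this example: 1 - e^{-t} on [0, 1) and (e - 1)e^{-t} for t >= 1.
    print(np.interp(0.5, t_y, y), 1 - np.exp(-0.5))
    print(np.interp(2.0, t_y, y), (np.e - 1) * np.exp(-2))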

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 105
Properties of Convolution

 The convolution operation is commutative. That is, for any two functions x
and h,

x ∗ h = h ∗ x.
 The convolution operation is associative. That is, for any functions x, h1 ,
and h2 ,

(x ∗ h1 ) ∗ h2 = x ∗ (h1 ∗ h2 ).
 The convolution operation is distributive with respect to addition. That is,
for any functions x, h1 , and h2 ,

x ∗ (h1 + h2 ) = x ∗ h1 + x ∗ h2 .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 106
Representation of Functions Using Impulses

 For any function x,


x ∗ δ(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ = x(t).

 Thus, any function x can be written in terms of an expression involving δ.


 Moreover, δ is the convolutional identity. That is, for any function x,

x ∗ δ = x.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 107
Periodic Convolution

 The convolution of two periodic functions is usually not well defined.


 This motivates an alternative notion of convolution for periodic functions
known as periodic convolution.
 The periodic convolution of the T -periodic functions x and h, denoted
x ~ h, is defined as
x ~ h(t) = ∫_T x(τ)h(t − τ) dτ,

where ∫_T denotes integration over an interval of length T .

 The periodic convolution and (linear) convolution of the T -periodic


functions x and h are related as follows:

x ~ h(t) = x0 ∗ h(t), where x(t) = ∑_{k=−∞}^{∞} x0 (t − kT )

(i.e., x0 (t) equals x(t) over a single period of x and is zero elsewhere).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 108
Section 4.2

Convolution and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 109
Impulse Response
 The response h of a system H to the input δ is called the impulse
response of the system (i.e., h = Hδ).
 For any LTI system with input x, output y, and impulse response h, the
following relationship holds:

y = x ∗ h.
 In other words, a LTI system simply computes a convolution.
 Furthermore, a LTI system is completely characterized by its impulse
response.
 That is, if the impulse response of a LTI system is known, we can
determine the response of the system to any input.
 Since the impulse response of a LTI system is an extremely useful
quantity, we often want to determine this quantity in a practical setting.
 Unfortunately, in practice, the impulse response of a system cannot be
determined directly from the definition of the impulse response.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 110
Step Response

 The response s of a system H to the input u is called the step response of


the system (i.e., s = Hu).
 The impulse response h and step response s of a LTI system are related
as
h(t) = ds(t)/dt.
 Therefore, the impulse response of a system can be determined from its
step response by differentiation.
 The step response provides a practical means for determining the impulse
response of a system.
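 As a numerical illustration (a sketch assuming NumPy; the first-order system used is only an example), the impulse response can be estimated by differencing a sampled step response.

    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 5.0, dt)

    # Example system: h(t) = e^{-t} u(t), whose step response is s(t) = 1 - e^{-t} for t >= 0.
    s = 1.0 - np.exp(-t)                     # sampled step response (e.g., from a measurement)

    h_est = np.diff(s) / dt                  # h(t) ≈ ds/dt via a finite difference
    h_true = np.exp(-t[:-1])
    print(np.max(np.abs(h_est - h_true)))    # small discretization error (on the order of dt)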

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 111
Block Diagram Representation of LTI Systems

 Often, it is convenient to represent a (CT) LTI system in block diagram


form.
 Since such systems are completely characterized by their impulse
response, we often label a system with its impulse response.
 That is, we represent a system with input x, output y, and impulse
response h, as shown below.

[Figure: block diagram of a system labeled with its impulse response h, with input x and output y.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 112
Interconnection of LTI Systems

 The series interconnection of the LTI systems with impulse responses h1


and h2 is the LTI system with impulse response h1 ∗ h2 . That is, we have
the equivalence shown below.

[Figure: the cascade of h1 and h2 is equivalent to the single LTI system h1 ∗ h2 .]

 The parallel interconnection of the LTI systems with impulse responses


h1 and h2 is the LTI system with impulse response h1 + h2 . That is, we
have the equivalence shown below.

[Figure: the parallel combination of h1 and h2 is equivalent to the single LTI system h1 + h2 .]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 113
Section 4.3

Properties of LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 114
Memory
 A LTI system with impulse response h is memoryless if and only if

h(t) = 0 for all t ≠ 0.


 That is, a LTI system is memoryless if and only if its impulse response h is
of the form

h(t) = Kδ(t),

where K is a complex constant.


 Consequently, every memoryless LTI system with input x and output y is
characterized by an equation of the form

y = x ∗ (Kδ) = Kx

(i.e., the system is an ideal amplifier).


 For a LTI system, the memoryless constraint is extremely restrictive (as
every memoryless LTI system is an ideal amplifier).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 115
Causality

 A LTI system with impulse response h is causal if and only if

h(t) = 0 for all t < 0

(i.e., h is a causal function).


 It is due to the above relationship that we call a function x, satisfying

x(t) = 0 for all t < 0,

a causal function.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 116
Invertibility

 The inverse of a LTI system, if such a system exists, is a LTI system.


 Let h and hinv denote the impulse responses of a LTI system and its (LTI)
inverse, respectively. Then,

h ∗ hinv = δ.
 Consequently, a LTI system with impulse response h is invertible if and
only if there exists a function hinv such that

h ∗ hinv = δ.
 Except in simple cases, the above condition is often quite difficult to test.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 117
BIBO Stability

 A LTI system with impulse response h is BIBO stable if and only if


∫_{−∞}^{∞} |h(t)| dt < ∞

(i.e., h is absolutely integrable).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 118
Eigenfunctions of LTI Systems
 As it turns out, every complex exponential is an eigenfunction of all LTI
systems.
 For a LTI system H with impulse response h,

H{e^{st}}(t) = H(s)e^{st} ,

where s is a complex constant and

H(s) = ∫_{−∞}^{∞} h(t)e^{−st} dt.

 That is, e^{st} is an eigenfunction of a LTI system and H(s) is the


corresponding eigenvalue.
 We refer to H as the system function (or transfer function) of the
system H.
 From above, we can see that the response of a LTI system to a complex
exponential is the same complex exponential multiplied by the complex
factor H(s).
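 To make this concrete, here is a small Python sketch (SciPy assumed; the causal exponential impulse response is only an example) that evaluates the system function H(s) numerically and compares it with the known closed form for this example.

    import numpy as np
    from scipy.integrate import quad

    a = 2.0                         # example: h(t) = e^{-a t} u(t), for which H(s) = 1/(s + a)

    def H(s):
        """Evaluate H(s) = ∫_0^∞ e^{-a t} e^{-s t} dt by integrating real and imaginary parts."""
        f = lambda t: np.exp(-a * t) * np.exp(-s * t)
        re, _ = quad(lambda t: f(t).real, 0, np.inf)
        im, _ = quad(lambda t: f(t).imag, 0, np.inf)
        return re + 1j * im

    s = 1.0 + 3.0j
    print(H(s))                     # ≈ 1/(s + a)
    print(1.0 / (s + a))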
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 119
Representations of Functions Using Eigenfunctions
 Consider a LTI system with input x, output y, and system function H .
 Suppose that the input x can be expressed as the linear combination of
complex exponentials
x(t) = ∑_k ak e^{sk t} ,
where the ak and sk are complex constants.
 Using the fact that complex exponentials are eigenfunctions of LTI
systems, we can conclude

y(t) = ∑_k ak H(sk )e^{sk t} .

 Thus, if an input to a LTI system can be expressed as a linear combination


of complex exponentials, the output can also be expressed as a linear
combination of the same complex exponentials.
 The above formula can be used to determine the output of a LTI system
from its input in a way that does not require convolution.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 120
Part 5

Continuous-Time Fourier Series (CTFS)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 121
Introduction

 The (CT) Fourier series is a representation for periodic functions.


 With a Fourier series, a function is represented as a linear combination
of complex sinusoids.
 The use of complex sinusoids is desirable due to their numerous attractive
properties.
 For example, complex sinusoids are continuous and differentiable. They
are also easy to integrate and differentiate.
 Perhaps, most importantly, complex sinusoids are eigenfunctions of LTI
systems.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 122
Section 5.1

Fourier Series

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 123
Harmonically-Related Complex Sinusoids

 A set of complex sinusoids is said to be harmonically related if there


exists some constant ω0 such that the fundamental frequency of each
complex sinusoid is an integer multiple of ω0 .
 Consider the set of harmonically-related complex sinusoids given by

φk (t) = e jkω0t for all integer k.


 The fundamental frequency of the kth complex sinusoid φk is kω0 , an
integer multiple of ω0 .
 Since the fundamental frequency of each of the harmonically-related
complex sinusoids is an integer multiple of ω0 , a linear combination of
these complex sinusoids must be periodic.
 More specifically, a linear combination of these complex sinusoids is

periodic with period T = 2π/ω0 .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 124
CT Fourier Series
 A periodic (complex-valued) function x with fundamental period T and
fundamental frequency ω0 = 2π/T can be represented as a linear
combination of harmonically-related complex sinusoids as

x(t) = ∑_{k=−∞}^{∞} ck e^{jkω0 t} .

 Such a representation is known as (the complex exponential form of) a


(CT) Fourier series, and the ck are called Fourier series coefficients.
 The above formula for x is often referred to as the Fourier series
synthesis equation.
 The terms in the summation for k = K and k = −K are called the K th
harmonic components, and have the fundamental frequency Kω0 .
 To denote that a function x has the Fourier series coefficient sequence ck ,
we write
CTFS
x(t) ←→ ck .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 125
CT Fourier Series (Continued)

 The periodic function x with fundamental period T and fundamental


frequency ω0 = 2π/T has the Fourier series coefficients ck given by

ck = (1/T) ∫_T x(t)e^{−jkω0 t} dt,

where ∫_T denotes integration over an arbitrary interval of length T (i.e.,
one period of x).


 The above equation for ck is often referred to as the Fourier series
analysis equation.
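 As an illustration, the following Python sketch (NumPy assumed; the square wave chosen is only an example) approximates the coefficients ck of a 1-periodic square wave by discretizing the analysis equation.

    import numpy as np

    T = 1.0
    w0 = 2 * np.pi / T
    N = 10000
    t = np.arange(N) * (T / N)                 # one period, uniformly sampled

    x = np.where(t < T / 2, 1.0, -1.0)         # example: square wave, +1 then -1

    def fourier_coefficient(k):
        """Approximate c_k = (1/T) ∫_T x(t) e^{-j k ω0 t} dt with a Riemann sum."""
        return np.sum(x * np.exp(-1j * k * w0 * t)) * (T / N) / T

    for k in range(-3, 4):
        print(k, np.round(fourier_coefficient(k), 4))
    # For this square wave, c_k = 0 for even k and c_k = 2/(jπk) for odd k.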

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 126
Section 5.2

Convergence Properties of Fourier Series

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 127
Remarks on Equality of Functions
 The equality of functions can be defined in more than one way.
 Two functions x and y are said to be equal in the pointwise sense if
x(t) = y(t) for all t (i.e., x and y are equal at every point).
 Two functions x and y are said to be equal in the mean-squared error
(MSE) sense if ∫ |x(t) − y(t)|² dt = 0 (i.e., the energy in x − y is zero).

 Pointwise equality is a stronger condition than MSE equality (i.e.,


pointwise equality implies MSE equality but the converse is not true).
 Consider the functions
x1 (t) = 1 for all t, x2 (t) = 1 for all t, and
x3 (t) = { 2,  t = 0
         { 1,  otherwise.
 The functions x1 and x2 are equal in both the pointwise sense and MSE
sense.
 The functions x1 and x3 are equal in the MSE sense, but not in the
pointwise sense.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 128
Convergence of Fourier Series

 Since a Fourier series can have an infinite number of (nonzero) terms,


and an infinite sum may or may not converge, we need to consider the
issue of convergence.
 That is, when we claim that a periodic function x is equal to the Fourier
series ∑_{k=−∞}^{∞} ck e^{jkω0 t}, is this claim actually correct?

 Consider a periodic function x that we wish to represent with the Fourier


series

∑_{k=−∞}^{∞} ck e^{jkω0 t} .
 Let xN denote the Fourier series truncated after the N th harmonic
components as given by

xN (t) = ∑_{k=−N}^{N} ck e^{jkω0 t} .
 Here, we are interested in whether limN→∞ xN is equal (in some sense)
to x.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 129
Convergence of Fourier Series (Continued)

 Again, let xN denote the Fourier series for the periodic function x truncated
after the N th harmonic components as given by
xN (t) = ∑_{k=−N}^{N} ck e^{jkω0 t} .
 If limN→∞ xN (t) = x(t) for all t (i.e., limN→∞ xN is equal to x in the
pointwise sense), the Fourier series is said to converge pointwise to x.
 If convergence is pointwise and the rate of convergence is the same
everywhere, the convergence is said to be uniform.
 If lim_{N→∞} (1/T) ∫_T |xN (t) − x(t)|² dt = 0 (i.e., lim_{N→∞} xN is equal to x in the
MSE sense), the Fourier series is said to converge to x in the MSE sense.
 Pointwise convergence is a stronger condition than MSE convergence
(i.e., pointwise convergence implies MSE convergence, but the converse
is not true).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 130
Convergence of Fourier Series: Continuous Case

 If a periodic function x is continuous and its Fourier series coefficients ck


are absolutely summable (i.e., ∑_{k=−∞}^{∞} |ck | < ∞), then the Fourier series
representation of x converges uniformly (i.e., pointwise at the same rate
everywhere).
 Since, in practice, we often encounter functions with discontinuities (e.g.,
a square wave), the above result is of somewhat limited value.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 131
Convergence of Fourier Series: Finite-Energy Case

 If a periodic function x has finite energy in a single period (i.e.,
∫_T |x(t)|² dt < ∞), the Fourier series converges in the MSE sense.

 Since, in situations of practical interest, the finite-energy condition in the


above theorem is typically satisfied, the theorem is usually applicable.
 It is important to note, however, that MSE convergence (i.e., E = 0) does
not necessarily imply pointwise convergence (i.e., x̃(t) = x(t) for all t ).
 Thus, the above convergence theorem does not provide much useful
information regarding the value of x̃(t) for specific values of t .
 Consequently, the above theorem is typically most useful for simply
determining if the Fourier series converges.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 132
Dirichlet Conditions
 The Dirichlet conditions for the periodic function x are as follows:
1 over a single period, x is absolutely integrable (i.e., ∫_T |x(t)| dt < ∞);
2 over a single period, x has a finite number of maxima and minima (i.e., x is
of bounded variation); and
3 over any finite interval, x has a finite number of discontinuities, each of
which is finite.
 Examples of functions violating the Dirichlet conditions are shown below.
[Figure: three example plots of functions violating the Dirichlet conditions: one that is not absolutely integrable over a period, one (resembling sin(2π/t)) with an infinite number of maxima and minima over a period, and one with an infinite number of discontinuities in a finite interval.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 133
Convergence of Fourier Series: Dirichlet Case

 If a periodic function x satisfies the Dirichlet conditions, then:


1 the Fourier series converges pointwise everywhere to x, except at the

points of discontinuity of x; and


2 at each point ta of discontinuity of x, the Fourier series x̃ converges to

x̃(ta ) = (1/2) [x(ta− ) + x(ta+ )] ,

where x(ta− ) and x(ta+ ) denote the values of the function x on the left- and
right-hand sides of the discontinuity, respectively.
 Since most functions tend to satisfy the Dirichlet conditions and the above
convergence result specifies the value of the Fourier series at every point,
this result is often very useful in practice.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 134
Gibbs Phenomenon

 In practice, we frequently encounter functions with discontinuities.


 When a function x has discontinuities, the Fourier series representation of
x does not converge uniformly (i.e., at the same rate everywhere).
 The rate of convergence is much slower at points in the vicinity of a
discontinuity.
 Furthermore, in the vicinity of a discontinuity, the truncated Fourier series
xN exhibits ripples, where the peak amplitude of the ripples does not seem
to decrease with increasing N .
 As it turns out, as N increases, the ripples get compressed towards
discontinuity, but, for any finite N , the peak amplitude of the ripples
remains approximately constant.
 This behavior is known as Gibbs phenomenon.
 The above behavior is one of the weaknesses of Fourier series (i.e.,
Fourier series converge very slowly near discontinuities).
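 The overshoot can be observed numerically; the sketch below (NumPy assumed; a ±1 square wave is used purely as an example) measures the peak of the truncated series xN just to the right of a discontinuity for several values of N.

    import numpy as np

    T = 1.0
    w0 = 2 * np.pi / T
    t = np.linspace(0.0, T / 2, 20001)     # just to the right of the jump at t = 0

    def x_N(t, N):
        """Fourier series of a ±1 square wave truncated after the Nth harmonic components."""
        y = np.zeros_like(t)
        for k in range(1, N + 1, 2):       # only odd harmonics are present
            y += (4 / (np.pi * k)) * np.sin(k * w0 * t)
        return y

    for N in (11, 101, 1001):
        print(N, np.max(x_N(t, N)))        # peak stays near 1.18 (≈ 9% of the jump of 2)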

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 135
Gibbs Phenomenon: Periodic Square Wave Example
[Figure: a periodic square wave and its Fourier series truncated after the 3rd, 7th, 11th, and 101st harmonic components; the ripples near each discontinuity become narrower as more terms are added, but their peak amplitude remains approximately constant.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 136
Section 5.3

Properties of Fourier Series

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 137
Properties of (CT) Fourier Series
CTFS pairs: x(t) ←→ ak and y(t) ←→ bk

Property                     Time Domain             Fourier Domain

Linearity                    αx(t) + βy(t)           αak + βbk
Translation                  x(t − t0 )              e^{−jk(2π/T)t0} ak
Modulation                   e^{jM(2π/T)t} x(t)      ak−M
Reflection                   x(−t)                   a−k
Conjugation                  x∗ (t)                  a∗−k
Periodic Convolution         x ~ y(t)                T ak bk
Multiplication               x(t)y(t)                ∑_{n=−∞}^{∞} an bk−n

Property
Parseval’s Relation          (1/T) ∫_T |x(t)|² dt = ∑_{k=−∞}^{∞} |ak |²
Even Symmetry                x is even ⇔ a is even
Odd Symmetry                 x is odd ⇔ a is odd
Real / Conjugate Symmetry    x is real ⇔ a is conjugate symmetric

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 138
Linearity

CTFS
 Let x and y be two periodic functions with the same period. If x(t) ←→ ak
CTFS
and y(t) ←→ bk , then

αx(t) + βy(t) ←→ αak + βbk ,


CTFS

where α and β are complex constants.


 That is, a linear combination of functions produces the same linear
combination of their Fourier series coefficients.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 139
Time Shifting (Translation)

 Let x denote a periodic function with period T and the corresponding


frequency ω0 = 2π/T . If x(t) ←→ ck , then
CTFS

x(t − t0 ) ←→ e− jkω0t0 ck = e− jk(2π/T )t0 ck ,


CTFS

where t0 is a real constant.


 In other words, time shifting a periodic function changes the argument (but
not magnitude) of its Fourier series coefficients.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 140
Frequency Shifting (Modulation)

 Let x denote a periodic function with period T and the corresponding


frequency ω0 = 2π/T . If x(t) ←→ ck , then
CTFS

e jM(2π/T )t x(t) = e jMω0t x(t) ←→ ck−M ,


CTFS

where M is an integer constant.


 In other words, multiplying a periodic function by e jMω0 t shifts the
Fourier-series coefficient sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 141
Time Reversal (Reflection)

 Let x denote a periodic function with period T and the corresponding


frequency ω0 = 2π/T . If x(t) ←→ ck , then
CTFS

CTFS
x(−t) ←→ c−k .
 That is, time reversal of a function results in a time reversal of its Fourier
series coefficients.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 142
Conjugation

 For a T -periodic function x with Fourier series coefficient sequence c, the


following property holds:

x∗ (t) ←→ c∗−k
CTFS

 In other words, conjugating a function has the effect of time reversing and
conjugating the Fourier series coefficient sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 143
Periodic Convolution

 Let x and y be two periodic functions with the same period T . If


CTFS CTFS
x(t) ←→ ak and y(t) ←→ bk , then

x ~ y(t) ←→ Tak bk .
CTFS

 In other words, periodic convolution of two functions corresponds to the


multiplication (up to a scale factor) of their Fourier-series coefficient
sequences.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 144
Multiplication

CTFS
 Let x and y be two periodic functions with the same period. If x(t) ←→ ak
CTFS
and y(t) ←→ bk , then


CTFS
x(t)y(t) ←→ ∑_{n=−∞}^{∞} an bk−n

 As we shall see later, the above summation is the DT convolution of a


and b.
 In other words, the multiplication of two periodic functions corresponds to
the DT convolution of their corresponding Fourier-series coefficient
sequences.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 145
Parseval’s Relation

 A function x and its Fourier series coefficient sequence a satisfy the


following relationship:
(1/T) ∫_T |x(t)|² dt = ∑_{k=−∞}^{∞} |ak |² .

 The above relationship is simply stating that the amount of energy in x
(i.e., (1/T) ∫_T |x(t)|² dt ) and the amount of energy in the Fourier series
coefficient sequence a (i.e., ∑_{k=−∞}^{∞} |ak |² ) are equal.
 In other words, the transformation between a function and its Fourier
series coefficient sequence preserves energy.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 146
Even and Odd Symmetry

 For a periodic function x with Fourier series coefficient sequence c, the


following properties hold:

x is even ⇔ c is even; and


x is odd ⇔ c is odd.
 In other words, the even/odd symmetry properties of x and c always
match.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 147
Real Functions

 A function x is real if and only if its Fourier series coefficient sequence c


satisfies

ck = c∗−k for all k

(i.e., c is conjugate symmetric).


 Thus, for a real-valued function, the negative-indexed Fourier series
coefficients are redundant, as they are completely determined by the
nonnegative-indexed coefficients.
 From properties of complex numbers, one can show that ck = c∗−k is
equivalent to

|ck | = |c−k | and arg ck = − arg c−k

(i.e., |ck | is even and arg ck is odd).


 Note that x being real does not necessarily imply that c is real.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 148
Trigonometric Forms of a Fourier Series

 Consider the periodic function x with the Fourier series coefficients ck .


 If x is real, then its Fourier series can be rewritten in two other forms,
known as the combined trigonometric and trigonometric forms.
 The combined trigonometric form of a Fourier series has the
appearance

x(t) = c0 + 2 ∑_{k=1}^{∞} |ck | cos(kω0 t + θk ),

where θk = arg ck .
 The trigonometric form of a Fourier series has the appearance

x(t) = c0 + ∑_{k=1}^{∞} [αk cos(kω0 t) + βk sin(kω0 t)] ,

where αk = 2 Re ck and βk = −2 Im ck .
 Note that the trigonometric forms contain only real quantities.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 149
Other Properties of Fourier Series

 For a T -periodic function x with Fourier-series coefficient sequence c, the


following properties hold:
1 c is the average value of x over a single period T ;
0
2 x is real and even ⇔ c is real and even; and

3 x is real and odd ⇔ c is purely imaginary and odd.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 150
Section 5.4

Fourier Series and Frequency Spectra

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 151
A New Perspective on Functions: The Frequency Domain

 The Fourier series provides us with an entirely new way to view functions.
 Instead of viewing a function as having information distributed with respect
to time (i.e., a function whose domain is time), we view a function as
having information distributed with respect to frequency (i.e., a function
whose domain is frequency).
 This so called frequency-domain perspective is of fundamental
importance in engineering.
 Many engineering problems can be solved much more easily using the
frequency domain than the time domain.
 The Fourier series coefficients of a function x provide a means to quantify
how much information x has at different frequencies.
 The distribution of information in a function over different frequencies is
referred to as the frequency spectrum of the function.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 152
Motivating Example
 Consider the real 1-periodic function x having the Fourier series
representation
x(t) = −(j/10) e^{−j14πt} − (2j/10) e^{−j10πt} − (4j/10) e^{−j6πt} − (13j/10) e^{−j2πt}
       + (13j/10) e^{j2πt} + (4j/10) e^{j6πt} + (2j/10) e^{j10πt} + (j/10) e^{j14πt} .
 A plot of x is shown below.
[Figure: plot of x(t) on the interval from −1 to 1.]

 The terms that make the most dominant contribution to the overall sum
are the ones with the largest magnitude coefficients.
 To illustrate this, we consider the problem of determining the best
approximation of x that keeps only 4 of the 8 terms in the Fourier series.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 153
Motivating Example (Continued)
[Figure: x(t) together with its approximation using the 4 terms with the largest magnitude coefficients (which closely follows x), and together with its approximation using the 4 terms with the smallest magnitude nonzero coefficients (which does not).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 154
Fourier Series and Frequency Spectra

 To gain further insight into the role played by the Fourier series coefficients
ck in the context of the frequency spectrum of the function x, it is helpful to
write the Fourier series with the ck expressed in polar form as follows:
x(t) = ∑_{k=−∞}^{∞} ck e^{jkω0 t} = ∑_{k=−∞}^{∞} |ck | e^{j(kω0 t + arg ck )} .

 Clearly, the kth term in the summation corresponds to a complex sinusoid


with fundamental frequency kω0 that has been amplitude scaled by a
factor of |ck | and time shifted by an amount that depends on arg ck .
 For a given k, the larger |ck | is, the larger is the amplitude of its
corresponding complex sinusoid e jkω0 t , and therefore the larger the
contribution the kth term (which is associated with frequency kω0 ) will
make to the overall summation.
 In this way, we can use |ck | as a measure of how much information a
function x has at the frequency kω0 .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 155
Fourier Series and Frequency Spectra (Continued)

 The Fourier series coefficients ck are referred to as the frequency


spectrum of x.
 The magnitudes |ck | of the Fourier series coefficients are referred to as
the magnitude spectrum of x.
 The arguments arg ck of the Fourier series coefficients are referred to as
the phase spectrum of x.
 Normally, the spectrum of a function is plotted against frequency kω0
instead of k.
 Since the Fourier series only has frequency components at integer
multiples of the fundamental frequency, the frequency spectrum is
discrete in the independent variable (i.e., frequency).
 Due to the general appearance of a frequency-spectrum plot (i.e., a number
of vertical lines at various frequencies), we refer to such spectra as line
spectra.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 156
Frequency Spectra of Real Functions
 Recall that, for a real function x, the Fourier series coefficient sequence c
satisfies

ck = c∗−k

(i.e., c is conjugate symmetric), which is equivalent to

|ck | = |c−k | and arg ck = − arg c−k .


 Since |ck | = |c−k |, the magnitude spectrum of a real function is always
even.
 Similarly, since arg ck = − arg c−k , the phase spectrum of a real function is
always odd.
 Due to the symmetry in the frequency spectra of real functions, we
typically ignore negative frequencies when dealing with such functions.
 In the case of functions that are complex but not real, frequency spectra
do not possess the above symmetry, and negative frequencies become
important.
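 As a small illustration (NumPy assumed), the magnitude and phase spectra of the coefficients from the earlier motivating example can be tabulated as follows; the even/odd symmetry of the two spectra is visible directly.

    import numpy as np

    # Fourier series coefficients c_k of the motivating example (all other c_k are zero).
    c = {-7: -1j/10, -5: -2j/10, -3: -4j/10, -1: -13j/10,
          1: 13j/10,  3:  4j/10,  5:  2j/10,  7:   1j/10}
    w0 = 2 * np.pi          # fundamental frequency of the 1-periodic function

    for k in sorted(c):
        ck = c[k]
        print(f"k={k:+d}  freq={k*w0:+8.3f}  |ck|={abs(ck):.2f}  arg(ck)={np.angle(ck):+.3f}")
    # The magnitude spectrum is even and the phase spectrum is odd, as expected for real x.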
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 157
Section 5.5

Fourier Series and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 158
Frequency Response

 Recall that a LTI system H with impulse response h is such that


H{e^{st}}(t) = HL (s)e^{st} , where HL (s) = ∫_{−∞}^{∞} h(t)e^{−st} dt . (That is, complex
exponentials are eigenfunctions of LTI systems.)
 Since a complex sinusoid is a special case of a complex exponential, we
can reuse the above result for the special case of complex sinusoids.
 For a LTI system H with impulse response h,

H{e^{jωt}}(t) = H(ω)e^{jωt} ,

where ω is a real constant and

H(ω) = ∫_{−∞}^{∞} h(t)e^{−jωt} dt.

 That is, e jωt is an eigenfunction of a LTI system and H(ω) is the


corresponding eigenvalue.
 We refer to H as the frequency response of the system H.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 159
Fourier Series and LTI Systems

 Consider a LTI system with input x, output y, and frequency response H .


 Suppose that the T -periodic input x is expressed as the Fourier series

x(t) = ∑_{k=−∞}^{∞} ck e^{jkω0 t} , where ω0 = 2π/T .

 Using our knowledge about the eigenfunctions of LTI systems, we can
conclude

y(t) = ∑_{k=−∞}^{∞} ck H(kω0 )e^{jkω0 t} .

 Thus, if the input x to a LTI system is a Fourier series, the output y is also
CTFS CTFS
a Fourier series. More specifically, if x(t) ←→ ck then y(t) ←→ H(kω0 )ck .
 The above formula can be used to determine the output of a LTI system
from its input in a way that does not require convolution.
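 A minimal Python sketch of this computation (NumPy assumed; the first-order lowpass frequency response H(ω) = 1/(1 + jω) and the two-term input are only examples) is shown below.

    import numpy as np

    T = 2 * np.pi
    w0 = 2 * np.pi / T                           # = 1
    H = lambda w: 1 / (1 + 1j * w)               # example frequency response

    cx = {1: 0.5, -1: 0.5, 3: 0.25, -3: 0.25}    # input: x(t) = cos(t) + 0.5 cos(3t)
    cy = {k: H(k * w0) * ck for k, ck in cx.items()}   # output coefficients H(kω0) ck

    t = np.linspace(0, T, 1000)
    y = sum(ck * np.exp(1j * k * w0 * t) for k, ck in cy.items()).real
    print(y[:5])                                 # output samples, obtained without convolution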

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 160
Filtering

 In many applications, we want to modify the spectrum of a function by


either amplifying or attenuating certain frequency components.
 This process of modifying the frequency spectrum of a function is called
filtering.
 A system that performs a filtering operation is called a filter.
 Many types of filters exist.
 Frequency selective filters pass some frequencies with little or no
distortion, while significantly attenuating other frequencies.
 Several basic types of frequency-selective filters include: lowpass,
highpass, and bandpass.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 161
Ideal Lowpass Filter
 An ideal lowpass filter eliminates all frequency components with a
frequency whose magnitude is greater than some cutoff frequency, while
leaving the remaining frequency components unaffected.
 Such a filter has a frequency response of the form
H(ω) = { 1,  |ω| ≤ ωc
       { 0,  otherwise,
where ωc is the cutoff frequency.
 A plot of this frequency response is given below.
[Figure: H(ω) equal to 1 on the passband |ω| ≤ ωc and 0 on the stopbands |ω| > ωc .]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 162
Ideal Highpass Filter
 An ideal highpass filter eliminates all frequency components with a
frequency whose magnitude is less than some cutoff frequency, while
leaving the remaining frequency components unaffected.
 Such a filter has a frequency response of the form
H(ω) = { 1,  |ω| ≥ ωc
       { 0,  otherwise,
where ωc is the cutoff frequency.
 A plot of this frequency response is given below.
[Figure: H(ω) equal to 0 on the stopband |ω| < ωc and 1 on the passbands |ω| ≥ ωc .]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 163
Ideal Bandpass Filter
 An ideal bandpass filter eliminates all frequency components with a
frequency whose magnitude does not lie in a particular range, while
leaving the remaining frequency components unaffected.
 Such a filter has a frequency response of the form
H(ω) = { 1,  ωc1 ≤ |ω| ≤ ωc2
       { 0,  otherwise,

where the limits of the passband are ωc1 and ωc2 .


 A plot of this frequency response is given below.
[Figure: H(ω) equal to 1 on the passbands ωc1 ≤ |ω| ≤ ωc2 and 0 on the stopbands elsewhere.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 164
Part 6

Continuous-Time Fourier Transform (CTFT)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 165
Motivation for the Fourier Transform

 The (CT) Fourier series provide an extremely useful representation for


periodic functions.
 Often, however, we need to deal with functions that are not periodic.
 A more general tool than the Fourier series is needed in this case.
 The (CT) Fourier transform can be used to represent both periodic and
aperiodic functions.
 Since the Fourier transform is essentially derived from Fourier series
through a limiting process, the Fourier transform has many similarities with
Fourier series.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 166
Section 6.1

Fourier Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 167
Development of the Fourier Transform [Aperiodic Case]

 The (CT) Fourier series is an extremely useful function representation.


 Unfortunately, this function representation can only be used for periodic
functions, since a Fourier series is inherently periodic.
 Many functions are not periodic, however.
 Rather than abandoning Fourier series, one might wonder if we can
somehow use Fourier series to develop a representation that can be
applied to aperiodic functions.
 By viewing an aperiodic function as the limiting case of a T -periodic
function where T → ∞, we can use the Fourier series to develop a
function representation that can be used for aperiodic functions, known as
the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 168
Development of the Fourier Transform [Aperiodic Case] (Continued)

 Recall that the Fourier series representation of a T -periodic function x is


given by
x(t) = ∑_{k=−∞}^{∞} ( (1/T) ∫_{−T/2}^{T/2} x(τ) e^{−jk(2π/T)τ} dτ ) e^{jk(2π/T)t},
where the parenthesized quantity is the Fourier series coefficient ck.
 In the above representation, if we take the limit as T → ∞, we obtain
x(t) = (1/(2π)) ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ ) e^{jωt} dω,
where the inner integral is X(ω)
(i.e., as T → ∞, the outer summation becomes an integral, 1/T becomes (1/(2π)) dω,
and (2π/T)k becomes ω).
 This representation for aperiodic functions is known as the Fourier
transform representation.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 169
Generalized Fourier Transform

 The classical Fourier transform for aperiodic functions does not exist (i.e.,
∫_{−∞}^{∞} x(t) e^{−jωt} dt fails to converge) for some functions of great practical
interest, such as:
2 a nonzero constant function;
2 a periodic function (e.g., a real or complex sinusoid);
2 the unit-step function (i.e., u); and
2 the signum function (i.e., sgn).
 Fortunately, the Fourier transform can be extended to handle such
functions, resulting in what is known as the generalized Fourier
transform.
 For our purposes, we can think of the classical and generalized Fourier
transforms as being defined by the same formulas.
 Therefore, in what follows, we will not typically make a distinction between
the classical and generalized Fourier transforms.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 170
CT Fourier Transform (CTFT)
 The (CT) Fourier transform of the function x, denoted Fx or X, is given by
Fx(ω) = X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.

 The preceding equation is sometimes referred to as the Fourier transform


analysis equation (or forward Fourier transform equation).
 The inverse Fourier transform of X, denoted F⁻¹X or x, is given by
F⁻¹X(t) = x(t) = (1/(2π)) ∫_{−∞}^{∞} X(ω) e^{jωt} dω.

 The preceding equation is sometimes referred to as the Fourier


transform synthesis equation (or inverse Fourier transform equation).
 As a matter of notation, to denote that a function x has the Fourier
transform X, we write x(t) ←CTFT→ X(ω).
 A function x and its Fourier transform X constitute what is called a
Fourier transform pair.
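 As a rough numerical illustration (this code sketch is an addition to the original slides), the Python fragment below approximates the analysis-equation integral for x(t) = e^{−t} u(t) over a truncated time interval and compares the result with the known transform 1/(1 + jω); the truncation length and step size are arbitrary choices.

import numpy as np

# Approximate X(w) = integral of x(t) e^{-jwt} dt for x(t) = e^{-t} u(t).
# The integral is truncated to [0, 50] (arbitrary); the integrand is
# negligible beyond this point because x decays exponentially.
dt = 0.001
t = np.arange(0.0, 50.0, dt)
x = np.exp(-t)                                    # x(t) = e^{-t} for t >= 0

for w in [0.0, 1.0, 5.0]:
    X_num = np.sum(x * np.exp(-1j * w * t)) * dt  # Riemann-sum approximation
    X_exact = 1.0 / (1.0 + 1j * w)                # known closed-form transform
    print(w, X_num, X_exact)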
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 171
Remarks on Operator Notation

 For a function x, the Fourier transform of x is denoted using operator


notation as Fx.
 The Fourier transform of x evaluated at ω is denoted Fx(ω).
 Note that Fx is a function, whereas Fx(ω) is a number.
 Similarly, for a function X , the inverse Fourier transform of X is denoted
using operator notation as F−1 X .
 The inverse Fourier transform of X evaluated at t is denoted F−1 X(t).
 Note that F −1 X is a function, whereas F −1 X(t) is a number.
 With the above said, engineers often abuse notation, and use expressions
like those above to mean things different from their proper meanings.
 Since such notational abuse can lead to problems, it is strongly
recommended that one refrain from doing this.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 172
Remarks on Dot Notation

 Often, we would like to write an expression for the Fourier transform of a


function without explicitly naming the function.
 For example, consider writing an expression for the Fourier transform of
the function v(t) = x(5t − 3) but without using the name “v”.
 It would be incorrect to write “Fx(5t − 3)” as this is the function Fx
evaluated at 5t − 3, which is not the meaning that we wish to convey.
 Also, strictly speaking, it would be incorrect to write “F{x(5t − 3)}” as the
operand of the Fourier transform operator must be a function, and
x(5t − 3) is a number (i.e., the function x evaluated at 5t − 3).
 Using dot notation, we can write the following strictly-correct expression
for the desired Fourier transform: Fx(5 · −3).
 In many cases, however, it is probably advisable to avoid employing
anonymous (i.e., unnamed) functions, as their use tends to be more error
prone in some contexts.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 173
Remarks on Notational Conventions
 Since dot notation is less frequently used by engineers, the author has
elected to minimize its use herein.
 To avoid ambiguous notation, the following conventions are followed:
1 in the expression for the operand of a Fourier transform operator, the
independent variable is assumed to be the variable named “t” unless
otherwise indicated (i.e., in terms of dot notation, each “t ” is treated as if it
were a “·”)
2 in the expression for the operand of the inverse Fourier transform operator,
the independent variable is assumed to be the variable named “ω” unless
otherwise indicated (i.e., in terms of dot notation, each “ω” is treated as if it
were a “·”).
 For example, with these conventions:
2 “F{cos(t − τ)}” denotes the function that is the Fourier transform of the

function v(t) = cos(t − τ) (not the Fourier transform of the function


v(τ) = cos(t − τ)).
2 “F −1 {δ(3ω − λ)}” denotes the function that is the inverse Fourier transform

of the function V (ω) = δ(3ω − λ) (not the inverse Fourier transform of the
function V (λ) = δ(3ω − λ)).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 174
Section 6.2

Convergence Properties of the Fourier Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 175
Convergence of the Fourier Transform

 Consider an arbitrary function x.


 The function x has the Fourier transform representation x̃ given by
x̃(t) = (1/(2π)) ∫_{−∞}^{∞} X(ω) e^{jωt} dω, where X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.

 Now, we need to concern ourselves with the convergence properties of


this representation.
 In other words, we want to know when x̃ is a valid representation of x.
 Since the Fourier transform is essentially derived from Fourier series, the
convergence properties of the Fourier transform are closely related to the
convergence properties of Fourier series.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 176
Convergence of the Fourier Transform: Continuous Case

 If a function x is continuous and absolutely integrable (i.e., ∫_{−∞}^{∞} |x(t)| dt < ∞)
and the Fourier transform X of x is absolutely integrable (i.e., ∫_{−∞}^{∞} |X(ω)| dω < ∞),
then the Fourier transform representation of x converges pointwise (i.e.,
x(t) = (1/(2π)) ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ ) e^{jωt} dω for all t).
 Since, in practice, we often encounter functions with discontinuities (e.g.,
a rectangular pulse), the above result is sometimes of limited value.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 177
Convergence of the Fourier Transform: Finite-Energy Case
 If a function x is of finite energy (i.e., ∫_{−∞}^{∞} |x(t)|² dt < ∞), then its Fourier
transform representation converges in the MSE sense.
 In other words, if x is of finite energy, then the energy E in the difference
function x̃ − x is zero; that is,
E = ∫_{−∞}^{∞} |x̃(t) − x(t)|² dt = 0.

 Since, in situations of practical interest, the finite-energy condition in the


above theorem is often satisfied, the theorem is frequently applicable.
 It is important to note, however, that the condition E = 0 does not
necessarily imply x̃(t) = x(t) for all t .
 Thus, the above convergence result does not provide much useful
information regarding the value of x̃(t) at specific values of t .
 Consequently, the above theorem is typically most useful for simply
determining if the Fourier transform representation converges.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 178
Dirichlet Conditions
 The Dirichlet conditions for the function x are as follows:
1 the function x is absolutely integrable (i.e., ∫_{−∞}^{∞} |x(t)| dt < ∞);
2 on any finite interval, x has a finite number of maxima and minima (i.e., x is
of bounded variation); and
3 on any finite interval, x has a finite number of discontinuities and each
discontinuity is itself finite.


 Examples of functions violating the Dirichlet conditions are shown below.
[Plots of two example functions violating the Dirichlet conditions: t^{−1} u(t) and sin(2π/t) rect(t/2).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 179
Convergence of the Fourier Transform: Dirichlet Case

 If a function x satisfies the Dirichlet conditions, then:
1 the Fourier transform representation x̃ converges pointwise everywhere to
x, except at the points of discontinuity of x; and
2 at each point ta of discontinuity of x, the Fourier transform representation x̃
converges to
x̃(ta) = (1/2)[x(ta⁺) + x(ta⁻)],
where x(ta⁻) and x(ta⁺) denote the values of the function x on the left- and
right-hand sides of the discontinuity, respectively.
 Since most functions tend to satisfy the Dirichlet conditions and the above
convergence result specifies the value of the Fourier transform
representation at every point, this result is often very useful in practice.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 180
Section 6.3

Properties of the Fourier Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 181
Properties of the (CT) Fourier Transform

Property                              Time Domain                   Frequency Domain
Linearity                             a1 x1(t) + a2 x2(t)           a1 X1(ω) + a2 X2(ω)
Time-Domain Shifting                  x(t − t0)                     e^{−jωt0} X(ω)
Frequency-Domain Shifting             e^{jω0 t} x(t)                X(ω − ω0)
Time/Frequency-Domain Scaling         x(at)                         (1/|a|) X(ω/a)
Conjugation                           x*(t)                         X*(−ω)
Duality                               X(t)                          2π x(−ω)
Time-Domain Convolution               x1 ∗ x2(t)                    X1(ω) X2(ω)
Time-Domain Multiplication            x1(t) x2(t)                   (1/(2π)) X1 ∗ X2(ω)
Time-Domain Differentiation           (d/dt) x(t)                   jω X(ω)
Frequency-Domain Differentiation      t x(t)                        j (d/dω) X(ω)
Time-Domain Integration               ∫_{−∞}^{t} x(τ) dτ            (1/(jω)) X(ω) + π X(0) δ(ω)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 182
Properties of the (CT) Fourier Transform (Continued)

Property
Parseval's Relation            ∫_{−∞}^{∞} |x(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |X(ω)|² dω
Even Symmetry                  x is even ⇔ X is even
Odd Symmetry                   x is odd ⇔ X is odd
Real / Conjugate Symmetry      x is real ⇔ X is conjugate symmetric

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 183
(CT) Fourier Transform Pairs

Pair    x(t)                                  X(ω)
1       δ(t)                                  1
2       u(t)                                  πδ(ω) + 1/(jω)
3       1                                     2πδ(ω)
4       sgn(t)                                2/(jω)
5       e^{jω0 t}                             2πδ(ω − ω0)
6       cos(ω0 t)                             π[δ(ω − ω0) + δ(ω + ω0)]
7       sin(ω0 t)                             (π/j)[δ(ω − ω0) − δ(ω + ω0)]
8       rect(t/T)                             |T| sinc(Tω/2)
9       (|B|/π) sinc(Bt)                      rect(ω/(2B))
10      e^{−at} u(t), Re{a} > 0               1/(a + jω)
11      t^{n−1} e^{−at} u(t), Re{a} > 0       (n−1)!/(a + jω)^n
12      tri(t/T)                              (|T|/2) sinc²(Tω/4)
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 184
Linearity

 If x1(t) ←CTFT→ X1(ω) and x2(t) ←CTFT→ X2(ω), then
a1 x1(t) + a2 x2(t) ←CTFT→ a1 X1(ω) + a2 X2(ω),

where a1 and a2 are arbitrary complex constants.


 This is known as the linearity property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 185
Time-Domain Shifting (Translation)

 If x(t) ←CTFT→ X(ω), then
x(t − t0) ←CTFT→ e^{−jωt0} X(ω),
where t0 is an arbitrary real constant.


 This is known as the translation (or time-domain shifting) property of
the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 186
Frequency-Domain Shifting (Modulation)

 If x(t) ←CTFT→ X(ω), then
e^{jω0 t} x(t) ←CTFT→ X(ω − ω0),
where ω0 is an arbitrary real constant.


 This is known as the modulation (or frequency-domain shifting)
property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 187
Time- and Frequency-Domain Scaling (Dilation)

 If x(t) ←CTFT→ X(ω), then
x(at) ←CTFT→ (1/|a|) X(ω/a),
where a is an arbitrary nonzero real constant.
 This is known as the dilation (or time/frequency-domain scaling)
property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 188
Conjugation

 If x(t) ←CTFT→ X(ω), then
x*(t) ←CTFT→ X*(−ω).

 This is known as the conjugation property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 189
Duality
 If x(t) ←CTFT→ X(ω), then
X(t) ←CTFT→ 2π x(−ω).
 This is known as the duality property of the Fourier transform.
 This property follows from the high degree of symmetry in the forward and
inverse Fourier transform equations, which are respectively given by
X(λ) = ∫_{−∞}^{∞} x(θ) e^{−jθλ} dθ and x(λ) = (1/(2π)) ∫_{−∞}^{∞} X(θ) e^{jθλ} dθ.

 That is, the forward and inverse Fourier transform equations are identical
except for a factor of 2π and different sign in the parameter for the
exponential function.
 Although the relationship x(t) ←CTFT→ X(ω) only directly provides us with the
Fourier transform of x(t), the duality property allows us to indirectly infer
the Fourier transform of X(t). Consequently, the duality property can be
used to effectively double the number of Fourier transform pairs that we
know.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 190
Time-Domain Convolution

 If x1(t) ←CTFT→ X1(ω) and x2(t) ←CTFT→ X2(ω), then
x1 ∗ x2(t) ←CTFT→ X1(ω) X2(ω).
 This is known as the convolution (or time-domain convolution)
property of the Fourier transform.
 In other words, a convolution in the time domain becomes a multiplication
in the frequency domain.
 This suggests that the Fourier transform can be used to avoid having to
deal with convolution operations.

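 As a rough numerical check of the convolution property (an added sketch, not part of the original slides), the code below approximates the continuous-time convolution of x1(t) = x2(t) = e^{−t} u(t) by a discrete convolution and compares the Fourier transform of the result with the product X1(ω) X2(ω); all step sizes and truncation limits are arbitrary choices.

import numpy as np

dt = 0.01
t = np.arange(0.0, 30.0, dt)
x1 = np.exp(-t)                          # e^{-t} u(t), truncated to [0, 30)
x2 = np.exp(-t)

# Continuous-time convolution approximated by a discrete convolution times dt.
x12 = np.convolve(x1, x2) * dt
t12 = np.arange(x12.size) * dt           # the convolution support starts at t = 0

def ctft(sig, time, w):
    # Riemann-sum approximation of the forward Fourier transform at frequency w.
    return np.sum(sig * np.exp(-1j * w * time)) * dt

for w in [0.0, 1.0, 3.0]:
    lhs = ctft(x12, t12, w)                    # F{x1 * x2}(w)
    rhs = ctft(x1, t, w) * ctft(x2, t, w)      # X1(w) X2(w)
    print(w, lhs, rhs)                         # the two columns should (nearly) agree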
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 191
Time-Domain Multiplication

 If x1(t) ←CTFT→ X1(ω) and x2(t) ←CTFT→ X2(ω), then
x1(t) x2(t) ←CTFT→ (1/(2π)) X1 ∗ X2(ω) = (1/(2π)) ∫_{−∞}^{∞} X1(θ) X2(ω − θ) dθ.

 This is known as the (time-domain) multiplication (or


frequency-domain convolution) property of the Fourier transform.
 In other words, multiplication in the time domain becomes convolution in
the frequency domain (up to a scale factor of 2π).
 Do not forget the factor of 1/(2π) in the above formula!
 This property of the Fourier transform is often tedious to apply (in the
forward direction) as it turns a multiplication into a convolution.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 192
Time-Domain Differentiation

 If x(t) ←CTFT→ X(ω), then
dx(t)/dt ←CTFT→ jω X(ω).
 This is known as the (time-domain) differentiation property of the
Fourier transform.
 Differentiation in the time domain becomes multiplication by jω in the
frequency domain.
 Of course, by repeated application of the above property, we have that
(d/dt)^n x(t) ←CTFT→ (jω)^n X(ω).
 The above suggests that the Fourier transform might be a useful tool
when working with differential (or integro-differential) equations.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 193
Frequency-Domain Differentiation

 If x(t) ←CTFT→ X(ω), then
t x(t) ←CTFT→ j (d/dω) X(ω).

 This is known as the frequency-domain differentiation property of the
Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 194
Time-Domain Integration

 If x(t) ←CTFT→ X(ω), then
∫_{−∞}^{t} x(τ) dτ ←CTFT→ (1/(jω)) X(ω) + π X(0) δ(ω).
 This is known as the (time-domain) integration property of the Fourier
transform.
 Whereas differentiation in the time domain corresponds to multiplication
by jω in the frequency domain, integration in the time domain is
associated with division by jω in the frequency domain.
 Since integration in the time domain becomes division by jω in the
frequency domain, integration can be easier to handle in the frequency
domain.
 The above property suggests that the Fourier transform might be a useful
tool when working with integral (or integro-differential) equations.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 195
Parseval’s Relation

 Recall that the energy of a function x is given by ∫_{−∞}^{∞} |x(t)|² dt.
 If x(t) ←CTFT→ X(ω), then
∫_{−∞}^{∞} |x(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |X(ω)|² dω
(i.e., the energy of x and energy of X are equal up to a factor of 2π).


 This relationship is known as Parseval’s relation.
 Since energy is often a quantity of great significance in engineering
applications, it is extremely helpful to know that the Fourier transform
preserves energy (up to a scale factor).

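 The following small sketch (added here; not from the original slides) checks Parseval's relation numerically for x(t) = e^{−t} u(t), whose transform is 1/(1 + jω); both computed quantities should be close to 1/2.

import numpy as np

# Time-domain energy of x(t) = e^{-t} u(t).
dt = 0.001
t = np.arange(0.0, 50.0, dt)
x = np.exp(-t)
energy_time = np.sum(np.abs(x) ** 2) * dt

# Frequency-domain energy, using |X(w)|^2 = 1 / (1 + w^2) and truncating
# the integral to [-1000, 1000] (an arbitrary but adequate range).
dw = 0.01
w = np.arange(-1000.0, 1000.0, dw)
X = 1.0 / (1.0 + 1j * w)
energy_freq = np.sum(np.abs(X) ** 2) * dw / (2.0 * np.pi)

print(energy_time, energy_freq)   # both approximately 0.5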
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 196
Even/Odd Symmetry

 For a function x with Fourier transform X , the following assertions hold:

x is even ⇔ X is even; and


x is odd ⇔ X is odd.
 In other words, the forward and inverse Fourier transforms preserve
even/odd symmetry.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 197
Real Functions

 A function x is real if and only if its Fourier transform X satisfies

X(ω) = X ∗ (−ω) for all ω

(i.e., X is conjugate symmetric).


 Thus, for a real-valued function, the portion of the graph of X(ω) for ω < 0
is completely redundant, as it is determined by symmetry.
 From properties of complex numbers, one can show that X(ω) = X ∗ (−ω)
is equivalent to

|X(ω)| = |X(−ω)| and arg X(ω) = − arg X(−ω)

(i.e., |X(ω)| is even and arg X(ω) is odd).


 Note that x being real does not necessarily imply that X is real.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 198
More Fourier Transforms

T HIS SLIDE IS INTENTIONALLY LEFT BLANK .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 199
Section 6.4

Fourier Transform of Periodic Functions

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 200
Fourier Transform of Periodic Functions
 The Fourier transform can be generalized to also handle periodic
functions.
 Consider a periodic function x with period T and frequency ω0 = 2π/T.
 Define the function xT as
xT(t) = x(t) for −T/2 ≤ t < T/2, and xT(t) = 0 otherwise
(i.e., xT(t) is equal to x(t) over a single period and zero elsewhere).
 Let a denote the Fourier series coefficient sequence of x.
 Let X and XT denote the Fourier transforms of x and xT , respectively.
 The following relationships can be shown to hold:
X(ω) = ∑_{k=−∞}^{∞} ω0 XT(kω0) δ(ω − kω0),
ak = (1/T) XT(kω0), and X(ω) = ∑_{k=−∞}^{∞} 2π ak δ(ω − kω0).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 201
Fourier Transform of Periodic Functions (Continued)

 The Fourier transform X of a periodic function is a series of impulses that


occur at integer multiples of the fundamental frequency ω0 (i.e.,
X(ω) = ∑_{k=−∞}^{∞} 2π ak δ(ω − kω0)).
 Due to the preceding fact, the Fourier transform of a periodic function can
only be nonzero at integer multiples of the fundamental frequency.
 The Fourier series coefficient sequence a is produced by sampling XT at
integer multiples of the fundamental frequency ω0 and scaling the
resulting sequence by 1/T (i.e., ak = (1/T) XT(kω0)).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 202
Section 6.5

Fourier Transform and Frequency Spectra of Functions

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 203
The Frequency-Domain Perspective on Functions

 Like Fourier series, the Fourier transform also provides us with a


frequency-domain perspective on functions.
 That is, instead of viewing a function as having information distributed with
respect to time (i.e., a function whose domain is time), we view a function
as having information distributed with respect to frequency (i.e., a function
whose domain is frequency).
 The Fourier transform of a function x provides a means to quantify how
much information x has at different frequencies.
 The distribution of information in a function over different frequencies is
referred to as the frequency spectrum of the function.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 204
Fourier Transform and Frequency Spectra

 To gain further insight into the role played by the Fourier transform X in
the context of the frequency spectrum of x, it is helpful to write the Fourier
transform representation of x with X(ω) expressed in polar form as
follows:
x(t) = (1/(2π)) ∫_{−∞}^{∞} X(ω) e^{jωt} dω = (1/(2π)) ∫_{−∞}^{∞} |X(ω)| e^{j[ωt + arg X(ω)]} dω.

 In effect, the quantity |X(ω)| is a weight that determines how much the
complex sinusoid at frequency ω contributes to the integration result x.
 The quantity arg X(ω) determines how the complex sinusoid at frequency
ω is shifted relative to complex sinusoids at other frequencies.
 Perhaps, this can be more easily seen if we express the above integral as
the limit of a sum, derived from an approximation of the integral using the
areas of rectangles, as shown on the next slide. [Recall that
∫_{−∞}^{∞} f(x) dx = lim_{∆x→0} ∑_{k=−∞}^{∞} ∆x f(k∆x).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 205
Fourier Transform and Frequency Spectra (Continued 1)

 Expressing the integral (from the previous slide) as the limit of a sum, we
obtain
x(t) = lim_{∆ω→0} (1/(2π)) ∑_{k=−∞}^{∞} ∆ω |X(ω)| e^{j[ωt + arg X(ω)]},
where ω = k∆ω.
 In the above equation, the kth term in the summation corresponds to a
complex sinusoid with fundamental frequency ω = k∆ω that has had its
amplitude scaled by a factor of |X(ω)| and has been time shifted by an
amount that depends on arg X(ω).
 For a given ω = k∆ω (which is associated with the kth term in the
summation), the larger |X(ω)| is, the larger the amplitude of its
corresponding complex sinusoid e jωt will be, and therefore the larger the
contribution the kth term will make to the overall summation.
 In this way, we can use |X(ω)| as a measure of how much information a
function x has at the frequency ω.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 206
Fourier Transform and Frequency Spectra (Continued 2)
 The Fourier transform X of the function x is referred to as the frequency
spectrum of x.
 The magnitude |X(ω)| of the Fourier transform X is referred to as the
magnitude spectrum of x.
 The argument arg X(ω) of the Fourier transform X is referred to as the
phase spectrum of x.
 Since the Fourier transform is a function of a real variable, a function can
potentially have information at any real frequency.
 Since the Fourier transform X of a periodic function x with fundamental
frequency ω0 and the Fourier series coefficient sequence a is given by
X(ω) = ∑_{k=−∞}^{∞} 2π ak δ(ω − kω0), the Fourier transform and Fourier series
give consistent results for the frequency spectrum of a periodic function.
 Since the frequency spectrum is complex (in the general case), it is
usually represented using two plots, one showing the magnitude
spectrum and one showing the phase spectrum.

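 As an illustrative sketch (an addition; not part of the original slides), the code below numerically approximates the frequency spectrum of the pulse x(t) = rect(t) on a grid of frequencies and extracts its magnitude and phase spectra; for this real-valued pulse the magnitude spectrum should be (approximately) even and should match |sinc(ω/2)|.

import numpy as np

dt = 0.001
t = np.arange(-0.5, 0.5, dt)      # support of rect(t)
x = np.ones_like(t)

w = np.linspace(-20.0, 20.0, 401)
X = np.array([np.sum(x * np.exp(-1j * wk * t)) * dt for wk in w])

mag = np.abs(X)                   # magnitude spectrum |X(w)|
phase = np.angle(X)               # phase spectrum arg X(w), in (-pi, pi]

# Compare with the known transform of rect(t), namely sinc(w/2) = sin(w/2)/(w/2).
# (np.sinc is the normalized sinc, so sinc(w/2) = np.sinc(w / (2*pi)).)
exact_mag = np.abs(np.sinc(w / (2.0 * np.pi)))
print(np.max(np.abs(mag - exact_mag)))   # small discretization error
print(np.max(np.abs(mag - mag[::-1])))   # |X| is (nearly) even, as expected for real x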
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 207
Frequency Spectra of Real Functions
 Recall that, for a real function x, the Fourier transform X of x satisfies

X(ω) = X ∗ (−ω)

(i.e., X is conjugate symmetric), which is equivalent to

|X(ω)| = |X(−ω)| and arg X(ω) = − arg X(−ω).


 Since |X(ω)| = |X(−ω)|, the magnitude spectrum of a real function is
always even.
 Similarly, since arg X(ω) = − arg X(−ω), the phase spectrum of a real
function is always odd.
 Due to the symmetry in the frequency spectra of real functions, we
typically ignore negative frequencies when dealing with such functions.
 In the case of functions that are complex but not real, frequency spectra
do not possess the above symmetry, and negative frequencies become
important.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 208
Bandwidth
 A function with the Fourier transform X is said to be bandlimited if, for
some (finite) nonnegative real constant B, the following condition holds:
X(ω) = 0 for all ω satisfying |ω| > B.
 The bandwidth B of a function with the Fourier transform X is defined as
B = ω1 − ω0, where X(ω) = 0 for all ω ∉ [ω0, ω1].
 In the case of real-valued functions, however, this definition of bandwidth
is usually amended to consider only nonnegative frequencies.
 The real-valued function x1 and complex-valued function x2 with the
respective Fourier transforms X1 and X2 shown below each have
bandwidth B (where only nonnegative frequencies are considered in the case of x1 ).
[Plots of X1(ω), which is nonzero only for −B ≤ ω ≤ B, and X2(ω), which is nonzero only for −B/2 ≤ ω ≤ B/2; each has bandwidth B under the above conventions.]

 One can show that a function cannot be both time limited and
bandlimited.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 209
Energy-Density Spectra
 By Parseval's relation, the energy E in a function x with Fourier transform
X is given by
E = (1/(2π)) ∫_{−∞}^{∞} Ex(ω) dω,
where
Ex(ω) = |X(ω)|².
 We refer to Ex as the energy-density spectrum of the function x.
 The function Ex indicates how the energy in x is distributed with respect to
frequency.
 For example, the energy contributed by frequencies in the range [ω1, ω2]
is given by
(1/(2π)) ∫_{ω1}^{ω2} Ex(ω) dω.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 210
Section 6.6

Fourier Transform and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 211
Frequency Response of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the Fourier transforms of x, y, and h, respectively.
 Since y(t) = x ∗ h(t), we have that

Y (ω) = X(ω)H(ω).
 The function H is called the frequency response of the system.
 A LTI system is completely characterized by its frequency response H .
 The above equation provides an alternative way of viewing the behavior of
a LTI system. That is, we can view the system as operating in the
frequency domain on the Fourier transforms of the input and output
functions.
 The frequency spectrum of the output is the product of the frequency
spectrum of the input and the frequency response of the system.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 212
Frequency Response of LTI Systems (Continued 1)

 In the general case, the frequency response H is a complex-valued


function.
 Often, we represent H(ω) in terms of its magnitude |H(ω)| and argument
arg H(ω).
 The quantity |H(ω)| is called the magnitude response of the system.
 The quantity arg H(ω) is called the phase response of the system.
 Since Y (ω) = X(ω)H(ω), we trivially have that

|Y (ω)| = |X(ω)| |H(ω)| and argY (ω) = arg X(ω) + arg H(ω).
 The magnitude spectrum of the output equals the magnitude spectrum of
the input times the magnitude response of the system.
 The phase spectrum of the output equals the phase spectrum of the input
plus the phase response of the system.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 213
Frequency Response of LTI Systems (Continued 2)

 Since the frequency response H is simply the frequency spectrum of the


impulse response h, if h is real, then

|H(ω)| = |H(−ω)| and arg H(ω) = − arg H(−ω)

(i.e., the magnitude response |H(ω)| is even and the phase response
arg H(ω) is odd).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 214
Unwrapped Phase
 For many types of analysis, restricting the range of a phase function to an
interval of length 2π (such as (−π, π]), often unnecessarily introduces
discontinuities into the function.
 This motivates the notion of unwrapped phase.
 The unwrapped phase is simply the phase defined in such a way so as
not to restrict the phase to an interval of length 2π and to keep the phase
function continuous to the greatest extent possible.
 For example, the function H(ω) = e^{jπω} has the unwrapped phase
Θ(ω) = πω.
[Plots for H(ω) = e^{jπω}: the (wrapped) phase Arg H(ω), restricted to (−π, π], and the unwrapped phase Θ(ω) = πω.]

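 A quick numerical illustration of phase unwrapping (an added sketch, not from the original slides), using NumPy's unwrap function on samples of the phase of H(ω) = e^{jπω}:

import numpy as np

w = np.linspace(-4.0, 4.0, 801)
H = np.exp(1j * np.pi * w)

wrapped = np.angle(H)            # principal-value phase, restricted to (-pi, pi]
unwrapped = np.unwrap(wrapped)   # remove the artificial 2*pi jumps

# Up to an additive multiple of 2*pi fixed by the first sample, the unwrapped
# phase is the linear function pi*w.
print(np.allclose(unwrapped - unwrapped[0], np.pi * (w - w[0]), atol=1e-8))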

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 215
Interpretation of Magnitude and Phase Response
 Recall that a LTI system H with frequency response H is such that
H{e^{jωt}}(t) = H(ω) e^{jωt}.
 Expressing H(ω) in polar form, we have
H{e^{jωt}}(t) = |H(ω)| e^{j arg H(ω)} e^{jωt}
             = |H(ω)| e^{j[ωt + arg H(ω)]}
             = |H(ω)| e^{jω(t + arg[H(ω)]/ω)}.
 Thus, the response of the system to the function e jωt is produced by
applying two transformations to this function:
2 (amplitude) scaling by |H(ω)|; and
2 translating by −arg H(ω)/ω.
 Therefore, the magnitude response determines how different complex
sinusoids are scaled (in amplitude) by the system.
 Similarly, the phase response determines how different complex sinusoids
are translated (i.e., delayed/advanced) by the system.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 216
Magnitude Distortion

 Recall that a LTI system H with frequency response H is such that

H{e^{jωt}}(t) = |H(ω)| e^{jω(t + arg[H(ω)]/ω)}.


 If |H(ω)| is a constant (for all ω), every complex sinusoid is scaled by the
same amount when passing through the system.
 A system for which |H(ω)| = 1 (for all ω) is said to be allpass.
 In the case of an allpass system, the magnitude spectra of the system’s
input and output are identical.
 If |H(ω)| is not a constant, different complex sinusoids are scaled by
different amounts, resulting in what is known as magnitude distortion.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 217
Phase Distortion
 Recall that a LTI system H with frequency response H is such that
H{e^{jωt}}(t) = |H(ω)| e^{jω(t + arg[H(ω)]/ω)}.
 The preceding equation can be rewritten as
H{e^{jωt}}(t) = |H(ω)| e^{jω[t − τp(ω)]}, where τp(ω) = −arg H(ω)/ω.
 The function τp is known as the phase delay of the system.
 If τp (ω) = td (where td is a constant), the system shifts all complex
sinusoids by the same amount td .
 Since τp (ω) = td is equivalent to the (unwrapped) phase response being
of the form arg H(ω) = −td ω (which is a linear function with a zero
constant term), a system with a constant phase delay is said to have
linear phase.
 In the case that τp (ω) = 0, the system is said to have zero phase.
 If τp (ω) is not a constant, different complex sinusoids are shifted by
different amounts, resulting in what is known as phase distortion.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 218
Distortionless Transmission
 Consider a LTI system H with input x and output y given by

y(t) = x(t − t0 ),
where t0 is a real constant.
 That is, the output of the system is simply the input delayed by t0 .
 This type of behavior is the ideal for which we strive in real-world
communication systems (i.e., the received signal y equals a delayed
version of the transmitted signal x).
 Taking the Fourier transform of the preceding equation, we have

Y (ω) = e− jωt0 X(ω).


 Thus, the system has the frequency response H given by

H(ω) = e− jωt0 .

 Since the phase delay of the system is τp (ω) = − −ωt
ω
0
= t0 , the phase
delay is constant and the system has linear phase.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 219
Magnitude and Phase Distortion in Audio

 The relative importance of the magnitude spectrum and phase spectrum


is highly dependent on the particular application of interest.
 Consider the case of the human auditory system (i.e., human hearing).
 The human auditory system tends to be quite sensitive to changes in the
magnitude spectrum of an audio signal.
 That is, a significant change in the magnitude spectrum of an audio signal
is very likely to lead to a noticeable difference in the perceived sound.
 On the other hand, the human auditory system tends to be much less
sensitive to changes in the phase spectrum of an audio signal.
 In other words, changes to the phase spectrum of an audio signal are
often only barely perceptible or not perceptible at all.
 For the above reasons, in applications involving the human auditory
system, magnitude distortion often tends to be more of a concern than
phase distortion.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 220
Magnitude and Phase Distortion in Images

 Consider the case of the human visual system.


 The human visual system tends to be quite sensitive to changes in the
phase spectrum of an image.
 That is, a significant change in the phase spectrum of an image is likely to
lead to a very substantial difference in how the image is perceived.
 The phase spectrum of an image tends to capture information about the
location of the edges in the image, and edges play a crucial role in
how humans perceive images.
 On the other hand, the human visual system tends to be somewhat less
sensitive to changes in the magnitude spectrum of an image.
 For the above reasons, phase distortion is usually deemed highly
undesirable in systems that process images, when the image data is to be
consumed by humans.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 221
Example: Magnitude and Phase Distortion in Images (1)

[Four images: Image A; Image B; the image formed from the magnitude spectrum of Image B and the phase spectrum of Image A; and the image formed from the magnitude spectrum of Image A and the phase spectrum of Image B.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 222
Example: Magnitude and Phase Distortion in Images (2)

[Four images: Image A; Image B (white noise); the image formed from the magnitude spectrum of Image B and the phase spectrum of Image A; and the image formed from the magnitude spectrum of Image A and the phase spectrum of Image B.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 223
Block Diagram Representations of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the Fourier transforms of x, y, and h, respectively.
 Often, it is convenient to represent such a system in block diagram form in
the frequency domain as shown below.

[Block diagram: input X applied to a system labeled H, producing output Y.]

 Since a LTI system is completely characterized by its frequency response,


we typically label the system with this quantity.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 224
Interconnection of LTI Systems

 The series interconnection of the LTI systems with frequency responses


H1 and H2 is the LTI system with frequency response H1 H2 . That is, we
have the equivalence shown below.
[Diagram: X → H1 → H2 → Y is equivalent to X → H1 H2 → Y.]

 The parallel interconnection of the LTI systems with frequency responses


H1 and H2 is the LTI system with the frequency response H1 + H2 . That
is, we have the equivalence shown below.
[Diagram: X applied to both H1 and H2, with the two outputs added to give Y; this is equivalent to X → H1 + H2 → Y.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 225
LTI Systems and Differential Equations
 Many LTI systems of practical interest can be represented using an
Nth-order linear differential equation with constant coefficients.
 Consider a system with input x and output y that is characterized by an
equation of the form
∑_{k=0}^{N} bk (d/dt)^k y(t) = ∑_{k=0}^{M} ak (d/dt)^k x(t),
where the ak and bk are complex constants and M ≤ N.


 Let h denote the impulse response of the system, and let X , Y , and H
denote the Fourier transforms of x, y, and h, respectively.
 One can show that H is given by
H(ω) = Y(ω)/X(ω) = ( ∑_{k=0}^{M} ak (jω)^k ) / ( ∑_{k=0}^{N} bk (jω)^k ).
 Observe that, for a system of the form considered above, the frequency
response is a rational function.

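 As a small illustrative sketch (added; not in the original slides), the code below evaluates the rational frequency response above for a hypothetical first-order system described by (d/dt)y(t) + 2y(t) = x(t), i.e., b0 = 2, b1 = 1, and a0 = 1, for which H(ω) = 1/(2 + jω).

import numpy as np

a = [1.0]         # coefficients a_k (input side), a0 = 1
b = [2.0, 1.0]    # coefficients b_k (output side), b0 = 2, b1 = 1

def freq_response(a, b, w):
    jw = 1j * w
    num = sum(ak * jw ** k for k, ak in enumerate(a))
    den = sum(bk * jw ** k for k, bk in enumerate(b))
    return num / den

for w in [0.0, 2.0, 10.0]:
    H = freq_response(a, b, w)
    print(w, abs(H), np.angle(H))   # magnitude and phase response
# For example, |H(0)| = 1/2 and |H(2)| = 1/sqrt(8).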
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 226
Section 6.7

Application: Filtering

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 227
Filtering

 In many applications, we want to modify the spectrum of a function by


either amplifying or attenuating certain frequency components.
 This process of modifying the frequency spectrum of a function is called
filtering.
 A system that performs a filtering operation is called a filter.
 Many types of filters exist.
 Frequency-selective filters pass some frequencies with little or no
distortion, while significantly attenuating other frequencies.
 Several basic types of frequency-selective filters include: lowpass,
highpass, and bandpass.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 228
Ideal Lowpass Filter
 An ideal lowpass filter eliminates all frequency components with a
frequency whose magnitude is greater than some cutoff frequency, while
leaving the remaining frequency components unaffected.
 Such a filter has a frequency response H of the form
H(ω) = 1 for |ω| ≤ ωc, and H(ω) = 0 otherwise,
where ωc is the cutoff frequency.
 A plot of this frequency response is given below.

[Plot of H(ω): passband (H(ω) = 1) for −ωc ≤ ω ≤ ωc; stopbands (H(ω) = 0) for |ω| > ωc.]

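 Ideal filters are not realizable in practice, but a discrete, finite-length approximation is easy to experiment with. The sketch below (an addition, not from the slides) applies a brick-wall lowpass filter in the frequency domain to a sampled signal by zeroing FFT bins above a cutoff; all of the numerical parameters are arbitrary.

import numpy as np

fs = 1000.0                                 # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 100 * t)   # 5 Hz + 100 Hz

X = np.fft.fft(x)
f = np.fft.fftfreq(x.size, d=1.0 / fs)      # frequency (Hz) of each FFT bin

cutoff = 20.0                               # cutoff frequency (Hz)
H = (np.abs(f) <= cutoff).astype(float)     # ideal (brick-wall) lowpass response
y = np.real(np.fft.ifft(H * X))             # output spectrum is H(w) X(w)

# The 100 Hz component is removed; only the 5 Hz component remains.
print(np.max(np.abs(y - np.sin(2 * np.pi * 5 * t))))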
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 229
Ideal Highpass Filter
 An ideal highpass filter eliminates all frequency components with a
frequency whose magnitude is less than some cutoff frequency, while
leaving the remaining frequency components unaffected.
 Such a filter has a frequency response H of the form
H(ω) = 1 for |ω| ≥ ωc, and H(ω) = 0 otherwise,
where ωc is the cutoff frequency.
 A plot of this frequency response is given below.

[Plot of H(ω): stopband (H(ω) = 0) for −ωc < ω < ωc; passbands (H(ω) = 1) for |ω| ≥ ωc.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 230
Ideal Bandpass Filter
 An ideal bandpass filter eliminates all frequency components with a
frequency whose magnitude does not lie in a particular range, while
leaving the remaining frequency components unaffected.
 Such a filter has a frequency response H of the form
H(ω) = 1 for ωc1 ≤ |ω| ≤ ωc2, and H(ω) = 0 otherwise,
where the limits of the passband are ωc1 and ωc2 .
 A plot of this frequency response is given below.

[Plot of H(ω): passbands (H(ω) = 1) for ωc1 ≤ |ω| ≤ ωc2; stopbands (H(ω) = 0) elsewhere.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 231
Section 6.8

Application: Equalization

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 232
Equalization

 Often, we find ourselves faced with a situation where we have a system


with a particular frequency response that is undesirable for the application
at hand.
 As a result, we would like to change the frequency response of the system
to be something more desirable.
 This process of modifying the frequency response in this way is referred to
as equalization. [Essentially, equalization is just a filtering operation.]
 Equalization is used in many applications.
 In real-world communication systems, equalization is used to eliminate or
minimize the distortion introduced when a signal is sent over a (nonideal)
communication channel.
 In audio applications, equalization can be employed to emphasize or
de-emphasize certain ranges of frequencies. For example, equalization
can be used to boost the bass (i.e., emphasize the low frequencies) in the
audio output of a stereo.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 233
Equalization (Continued)
[Block diagrams: the original system with impulse response horig; the new system with equalization, in which an equalizer heq is connected in series with horig.]

 Let Horig denote the frequency response of the original system (i.e., without
equalization).
 Let Hd denote the desired frequency response.
 Let Heq denote the frequency response of the equalizer.
 The new system with equalization has frequency response

Hnew (ω) = Heq (ω)Horig (ω).


 By choosing Heq (ω) = Hd (ω)/Horig (ω), the new system with equalization
will have the frequency response

Hnew (ω) = [Hd (ω)/Horig (ω)] Horig (ω) = Hd (ω).


 In effect, by using an equalizer, we can obtain a new system with the
frequency response that we desire.
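 A minimal numerical sketch of this idea (added; not from the original slides), assuming a hypothetical first-order original system and a pure-delay desired response; the division requires Horig(ω) ≠ 0 on the frequencies of interest.

import numpy as np

w = np.linspace(0.1, 100.0, 1000)     # frequency grid (omits w = 0 for simplicity)

H_orig = 1.0 / (1.0 + 1j * w)         # original (undesirable) frequency response
H_d = np.exp(-1j * 0.01 * w)          # desired response: a pure delay of 0.01

H_eq = H_d / H_orig                   # equalizer: Heq(w) = Hd(w) / Horig(w)
H_new = H_eq * H_orig                 # equalized system: Hnew(w) = Heq(w) Horig(w)

print(np.allclose(H_new, H_d))        # True: the cascade has the desired response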
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 234
Section 6.9

Application: Circuit Analysis

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 235
Electronic Circuits

 An electronic circuit is a network of one or more interconnected circuit


elements.
 The three most basic types of circuit elements are:
1 resistors;
2 inductors; and
3 capacitors.
 Two fundamental quantities of interest in electronic circuits are current and
voltage.
 Current is the rate at which electric charge flows through some part of a
circuit, such as a circuit element, and is measured in units of amperes (A).
 Voltage is the difference in electric potential between two points in a
circuit, such as across a circuit element, and is measured in units of
volts (V).
 Voltage is essentially a force that makes electric charge (or current) flow.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 236
Resistors

 A resistor is a circuit element that opposes the flow of current.


 A resistor is characterized by an equation of the form

v(t) = R i(t) (or equivalently, i(t) = (1/R) v(t)),

where R is a nonnegative real constant, and v and i respectively denote


the voltage across and current through the resistor as a function of time.
 As a matter of terminology, the quantity R is known as the resistance of
the resistor.
 Resistance is measured in units of ohms (Ω).
 In circuit diagrams, a resistor is denoted by the symbol shown below.

[Circuit symbol for a resistor with resistance R, showing the current i and the voltage v across the element.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 237
Inductors
 An inductor is a circuit element that converts an electric current into a
magnetic field and vice versa.
 An inductor uses the energy stored in a magnetic field in order to oppose
changes in current (through the inductor).
 An inductor is characterized by an equation of the form
v(t) = L (d/dt) i(t) (or equivalently, i(t) = (1/L) ∫_{−∞}^{t} v(τ) dτ),
where L is a nonnegative real constant, and v and i respectively denote
the voltage across and current through the inductor as a function of time.
 As a matter of terminology, the quantity L is known as the inductance of
the inductor.
 Inductance is measured in units of henrys (H).
 In circuit diagrams, an inductor is denoted by the symbol shown below.

[Circuit symbol for an inductor with inductance L, showing the current i and the voltage v across the element.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 238
Capacitors
 A capacitor is a circuit element that stores electric charge.
 A capacitor uses the energy stored in an electric field in order to oppose
changes in voltage (across the capacitor).
 A capacitor is characterized by an equation of the form
v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ (or equivalently, i(t) = C (d/dt) v(t)),
where C is a nonnegative real constant, and v and i respectively denote
the voltage across and current through the capacitor as a function of time.
 As a matter of terminology, the quantity C is known as the capacitance of
the capacitor.
 Capacitance is measured in units of farads (F).
 In circuit diagrams, a capacitor is denoted by the symbol shown below.
[Circuit symbol for a capacitor with capacitance C, showing the current i and the voltage v across the element.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 239
Circuit Analysis with the Fourier Transform
 The Fourier transform is a very useful tool for circuit analysis.
 The utility of the Fourier transform is partly due to the fact that the
differential/integral equations that describe inductors and capacitors are
much simpler to express in the Fourier domain than in the time domain.
 Let v and i denote the voltage across and current through a circuit
element, and let V and I denote the Fourier transforms of v and i,
respectively.
 In the frequency domain, the equations characterizing a resistor, an
inductor, and a capacitor respectively become:

V(ω) = R I(ω) (or equivalently, I(ω) = (1/R) V(ω));
V(ω) = jωL I(ω) (or equivalently, I(ω) = (1/(jωL)) V(ω)); and
V(ω) = (1/(jωC)) I(ω) (or equivalently, I(ω) = jωC V(ω)).

 Note the absence of differentiation and integration in the above equations


for an inductor and a capacitor.

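 As a small example of frequency-domain circuit analysis (an added sketch, not part of the original slides), consider a series RC circuit driven by a voltage source, with the output taken across the capacitor. Treating R and 1/(jωC) as a voltage divider gives the frequency response H(ω) = 1/(1 + jωRC); the component values below are arbitrary.

import numpy as np

R = 1.0e3      # resistance (ohms)
C = 1.0e-6     # capacitance (farads)

def H(w):
    Zc = 1.0 / (1j * w * C)        # frequency-domain relation for the capacitor
    return Zc / (R + Zc)           # voltage-divider ratio = 1 / (1 + j w R C)

for f in [10.0, 159.0, 10000.0]:   # 159 Hz is roughly 1/(2 pi R C), the 3 dB point
    w = 2.0 * np.pi * f
    print(f, abs(H(w)), np.angle(H(w)))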
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 240
Section 6.10

Application: Amplitude Modulation (AM)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 241
Motivation for Amplitude Modulation (AM)
 In communication systems, we often need to transmit a signal using a
frequency range that is different from that of the original signal.
 For example, voice/audio signals typically have information in the range of
0 to 22 kHz.
 Often, it is not practical to transmit such a signal using its original
frequency range.
 Two potential problems with such an approach are:
1 interference; and
2 constraints on antenna length.
 Since many signals are broadcast over the airwaves, we need to ensure
that no two transmitters use the same frequency bands in order to avoid
interference.
 Also, in the case of transmission via electromagnetic waves (e.g., radio
waves), the length of antenna required becomes impractically large for the
transmission of relatively low frequency signals.
 For the preceding reasons, we often need to change the frequency range
associated with a signal before transmission.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 242
Trivial Amplitude Modulation (AM) System

[Block diagram: the transmitter multiplies x by c1(t) = e^{jωc t} to produce y; the receiver multiplies y by c2(t) = e^{−jωc t} to produce x̂.]

 The transmitter is characterized by
y(t) = e^{jωc t} x(t) ⇐⇒ Y(ω) = X(ω − ωc).
 The receiver is characterized by
x̂(t) = e^{−jωc t} y(t) ⇐⇒ X̂(ω) = Y(ω + ωc).
 Clearly, x̂(t) = e^{jωc t} e^{−jωc t} x(t) = x(t).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 243
Trivial Amplitude Modulation (AM) System: Example

[Example spectra: the transmitter input X(ω), bandlimited to [−ωb, ωb]; C1(ω) = 2πδ(ω − ωc) and C2(ω) = 2πδ(ω + ωc); the transmitter output Y(ω), which is X(ω) shifted so as to be centered at ωc; and the receiver output X̂(ω), which equals X(ω).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 244
Double-Sideband Suppressed-Carrier (DSB-SC) AM

[Block diagram: the transmitter multiplies x by c(t) = cos(ωc t) to produce y; the receiver multiplies y by c(t) = cos(ωc t) and applies a lowpass filter with impulse response h(t) = (2ωc0/π) sinc(ωc0 t) to produce x̂.]

 Let X = Fx, Y = Fy, and X̂ = Fx̂.


 Suppose that X(ω) = 0 for all ω ∉ [−ωb, ωb].
 The transmitter is characterized by
Y(ω) = (1/2)[X(ω + ωc) + X(ω − ωc)].
 The receiver is characterized by
X̂(ω) = [Y(ω + ωc) + Y(ω − ωc)] rect(ω/(2ωc0)).
 If ωb < ωc0 < 2ωc − ωb, we have X̂(ω) = X(ω) (implying x̂(t) = x(t)).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 245
DSB-SC AM: Transmitter

[Diagram: the transmitter multiplies x by c(t) = cos(ωc t), so y(t) = cos(ωc t) x(t).]
X = Fx, Y = Fy
Y(ω) = F{cos(ωc t) x(t)}(ω)
     = F{ (1/2)(e^{jωc t} + e^{−jωc t}) x(t) }(ω)
     = (1/2)[ F{e^{jωc t} x(t)}(ω) + F{e^{−jωc t} x(t)}(ω) ]
     = (1/2)[X(ω − ωc) + X(ω + ωc)]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 246
DSB-SC AM: Receiver

[Diagram: the receiver multiplies y by c(t) = cos(ωc t) to produce v, then filters v with h to produce x̂.]
v(t) = cos(ωc t) y(t), h(t) = (2ωc0/π) sinc(ωc0 t), x̂(t) = v ∗ h(t)
Y = Fy, V = Fv, H = Fh, X̂ = Fx̂
V(ω) = F{cos(ωc t) y(t)}(ω)
     = F{ (1/2)(e^{jωc t} + e^{−jωc t}) y(t) }(ω)
     = (1/2)[ F{e^{jωc t} y(t)}(ω) + F{e^{−jωc t} y(t)}(ω) ]
     = (1/2)[Y(ω − ωc) + Y(ω + ωc)]
H(ω) = F{ (2ωc0/π) sinc(ωc0 t) }(ω)
     = 2 rect(ω/(2ωc0))
X̂(ω) = H(ω) V(ω)


Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 247
DSB-SC AM: Complete System

[Block diagram of the complete DSB-SC AM system, with transmitter multiplier c(t) = cos(ωc t), receiver multiplier c(t) = cos(ωc t), and receiver lowpass filter h(t) = (2ωc0/π) sinc(ωc0 t).]
Y(ω) = (1/2)[X(ω − ωc) + X(ω + ωc)]
V(ω) = (1/2)[Y(ω − ωc) + Y(ω + ωc)]
     = (1/2){ (1/2)[X([ω − ωc] − ωc) + X([ω − ωc] + ωc)] +
              (1/2)[X([ω + ωc] − ωc) + X([ω + ωc] + ωc)] }
     = (1/2) X(ω) + (1/4) X(ω − 2ωc) + (1/4) X(ω + 2ωc)
X̂(ω) = H(ω) V(ω)
     = H(ω)[ (1/2) X(ω) + (1/4) X(ω − 2ωc) + (1/4) X(ω + 2ωc) ]
     = (1/2) H(ω) X(ω) + (1/4) H(ω) X(ω − 2ωc) + (1/4) H(ω) X(ω + 2ωc)
     = (1/2)[2 X(ω)] + (1/4)(0) + (1/4)(0)
     = X(ω)
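 The following discrete-time simulation (added here; not part of the original slides) mimics the DSB-SC system above: a bandlimited message is modulated by cos(ωc t), demodulated by a second multiplication by cos(ωc t), and lowpass filtered with an FFT-based brick-wall filter of gain 2. All numerical values are arbitrary choices.

import numpy as np

fs = 10000.0                                   # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)   # message

fc = 1000.0                                    # carrier frequency (Hz)
c = np.cos(2 * np.pi * fc * t)

y = x * c                                      # transmitted (modulated) signal
v = y * c                                      # receiver mixer output

V = np.fft.fft(v)
f = np.fft.fftfreq(v.size, d=1.0 / fs)
fc0 = 50.0                                     # cutoff: above the message band, below 2 fc
H = 2.0 * (np.abs(f) <= fc0)                   # ideal lowpass with passband gain 2
x_hat = np.real(np.fft.ifft(H * V))

print(np.max(np.abs(x_hat - x)))               # essentially zero: the message is recovered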
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 248
DSB-SC AM: Example
[Example spectra: the transmitter input X(ω), bandlimited to [−ωb, ωb]; C(ω), consisting of impulses of area π at ±ωc; H(ω) = 2 rect(ω/(2ωc0)); the transmitter output Y(ω), consisting of copies of X(ω)/2 centered at ±ωc; V(ω), consisting of X(ω)/2 at baseband plus copies of X(ω)/4 centered at ±2ωc; and the receiver output X̂(ω), which equals X(ω).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 249
Single-Sideband Suppressed-Carrier (SSB-SC) AM

[Block diagram: the transmitter multiplies x by c(t) = cos(ωc t) to produce q and filters q with g(t) = δ(t) − (ωc/π) sinc(ωc t) to produce y; the receiver multiplies y by c(t) = cos(ωc t) and applies a lowpass filter with impulse response h(t) = (4ωc0/π) sinc(ωc0 t) to produce x̂.]

 The basic analysis of the SSB-SC AM system is similar to the DSB-SC


AM system.
 SSB-SC AM requires half as much bandwidth for the transmitted signal as
DSB-SC AM.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 250
SSB-SC AM: Example
[Example spectra for the SSB-SC AM system: the input X(ω), bandlimited to [−ωb, ωb]; C(ω); G(ω); H(ω); the intermediate spectrum Q(ω); the transmitted spectrum Y(ω), which retains only one sideband about each of ±ωc; V(ω); and the receiver output X̂(ω), which equals X(ω).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 251
Section 6.11

Application: Sampling and Interpolation

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 252
Sampling and Interpolation

 Often, we want to be able to transform a continuous-time signal (i.e., a


function) into a discrete-time signal (i.e., a sequence) and vice versa.
 This is accomplished through processes known as sampling and
interpolation.
 Sampling, which is performed by a continuous-time to discrete-time
(C/D) converter shown below, transforms a function x to a sequence y.
[Diagram: x → C/D converter → y.]

 Interpolation, which is performed by a discrete-time to


continuous-time (D/C) converter shown below, transforms a sequence y
to a function x.
y x
D/C
Converter

 Note that, unless very special conditions are met, the sampling process
loses information (i.e., is not invertible).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 253
Periodic Sampling
 Although sampling can be performed in many different ways, the most
commonly used scheme is periodic sampling.
 With this scheme, a sequence y of samples is obtained from a function x
according to the relation
y(n) = x(T n) for all integer n,
where T is a (strictly) positive real constant.
 As a matter of terminology, we refer to T as the sampling period, and
ωs = 2π/T as the (angular) sampling frequency.
 An example of periodic sampling is shown below, where the function x has
been sampled with sampling period T = 10, yielding the sequence y.
[Plots: the function x(t) being sampled and the sequence y(n) produced by sampling with sampling period T = 10.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 254
Invertibility of Sampling
 Unless constraints are placed on the functions being sampled, the
sampling process is not invertible.
 In other words, in the absence of any constraints, a function cannot be
uniquely determined from a sequence of its equally-spaced samples.
 Consider, for example, the functions x1 and x2 given by

x1 (t) = 0 and x2 (t) = sin(2πt).


 Sampling x1 and x2 with the sampling period T = 1 yields the respective
sequences
y1 (n) = x1 (T n) = x1 (n) = 0 and
y2 (n) = x2 (T n) = sin(2πn) = 0.
 So, although x1 and x2 are distinct, y1 and y2 are identical.
 Given the sequence y where y = y1 = y2 , it is impossible to determine
which function was sampled to produce y.
 Only by imposing a carefully chosen set of constraints on the functions
being sampled can we ensure that a function can be exactly recovered
from only its samples.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 255
Model of Sampling
 An impulse train is a function of the form v(t) = ∑_{k=−∞}^{∞} ck δ(t − kT),
where ck and T are real constants.
 For the purposes of analysis, sampling with sampling period T and
frequency ωs = 2π/T can be modelled as shown below.
[Diagram of the ideal C/D converter: the input x is multiplied by the periodic impulse train p(t) = ∑_{k=−∞}^{∞} δ(t − kT) to produce s, which is then converted from an impulse train to the sequence y.]

 The sampling of a function x to produce a sequence y consists of the


following two steps (in order):
1 Multiply the function x to be sampled by a periodic impulse train p, yielding

the impulse train s(t) = ∑_{n=−∞}^{∞} x(nT) δ(t − nT).


2 Convert the impulse train s to a sequence y by forming y from the weights of

successive impulses in s so that y(n) = x(nT ).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 256
Model of Sampling: Various Signals
[Plots: the input function x(t); the periodic impulse train p(t); the impulse-sampled function s(t), whose impulses at 0, T, 2T, 3T have weights x(0), x(T), x(2T), x(3T); and the output sequence y(n) = x(nT).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 257
Model of Sampling: Invertibility of Sampling Revisited
[Diagram of the ideal C/D converter, as above: x is multiplied by p(t) = ∑_{k=−∞}^{∞} δ(t − kT) to produce s, which is converted to the sequence y.]

 Since sampling is not invertible and our model of sampling consists of only
two steps, at least one of these two steps must not be invertible.
 Recall the two steps in our model of sampling are as follows (in order):
1 x −→ s(t) = x(t) p(t) = ∑_{n=−∞}^{∞} x(nT) δ(t − nT); and
2 s(t) = ∑_{n=−∞}^{∞} x(nT) δ(t − nT) −→ y(n) = x(nT).
 Step 1 cannot be undone (unless we somehow restrict which functions x
can be sampled).
 Step 2 is always invertible.
 Therefore, the fact that sampling is not invertible is entirely due to step 1.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 258
Model of Sampling: Characterization
[Diagram of the ideal C/D converter, as above.]

 In the time domain, the impulse-sampled function s is given by
s(t) = x(t) p(t) where p(t) = ∑_{k=−∞}^{∞} δ(t − kT).
 In the Fourier domain, the preceding equation becomes
S(ω) = (ωs/(2π)) ∑_{k=−∞}^{∞} X(ω − kωs) (where ωs = 2π/T).
 Thus, the spectrum of the impulse-sampled function s is a scaled sum of
an infinite number of shifted copies of the spectrum of the original
function x.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 259
Sampling: Fourier Series for a Periodic Impulse Train


p(t) = ∑_{k=−∞}^{∞} δ(t − kT), ωs = 2π/T
p(t) = ∑_{k=−∞}^{∞} ck e^{jkωs t}
ck = (1/T) ∫_{−T/2}^{T/2} p(t) e^{−jkωs t} dt
   = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jkωs t} dt
   = (1/T) ∫_{−∞}^{∞} δ(t) e^{−jkωs t} dt
   = 1/T
   = ωs/(2π)
p(t) = (ωs/(2π)) ∑_{k=−∞}^{∞} e^{jkωs t}

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 260
Sampling: Multiplication by a Periodic Impulse Train
[Diagram of the ideal C/D converter, as above.]
s(t) = p(t) x(t), p(t) = ∑_{k=−∞}^{∞} δ(t − kT), ωs = 2π/T
p(t) = (ωs/(2π)) ∑_{k=−∞}^{∞} e^{jkωs t}
s(t) = (ωs/(2π)) ∑_{k=−∞}^{∞} e^{jkωs t} x(t)
X = Fx, S = Fs
S(ω) = (ωs/(2π)) ∑_{k=−∞}^{∞} X(ω − kωs)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 261
Model of Sampling: Aliasing
 Consider the frequency spectrum S of the impulse-sampled function s given by
S(ω) = (ωs/(2π)) ∑_{k=−∞}^{∞} X(ω − kωs).

 The function S is a scaled sum of an infinite number of shifted copies of X .


 Two distinct behaviors can result in this summation, depending on ωs and
the bandwidth of x.
 In particular, the nonzero portions of the different shifted copies of X can
either:
1 overlap; or
2 not overlap.
 In the case where overlap occurs, the various shifted copies of X add
together in such a way that the original shape of X is lost. This
phenomenon is known as aliasing.
 When aliasing occurs, the original function x cannot be recovered from its
samples in y.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 262
Model of Sampling: Aliasing (Continued)
[Plots: the spectrum X(ω) of the input function (with bandwidth ωm); the spectrum S(ω) of the impulse-sampled function when ωs > 2ωm, in which the shifted copies of X do not overlap (no aliasing); and S(ω) when ωs ≤ 2ωm, in which the shifted copies overlap (aliasing).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 263
Model of Interpolation

 For the purposes of analysis, interpolation can be modelled as shown


below.
[Diagram of the ideal D/C converter: the sequence y is converted to an impulse train s, which is then filtered by h(t) = sinc(πt/T) to produce x̂.]

 The reconstruction of a function x from its sequence y of samples (i.e.,


bandlimited interpolation) consists of the following two steps (in order):
1 Convert the sequence y to the impulse train s by using the samples in y as
the weights of successive impulses in s so that s(t) = ∑_{n=−∞}^{∞} y(n) δ(t − Tn).
2 Apply the lowpass filter with impulse response h to s to produce x̂ so that
x̂(t) = s ∗ h(t) = ∑_{n=−∞}^{∞} y(n) sinc((π/T)(t − Tn)).
 The lowpass filter is used to eliminate the extra copies of the
originally-sampled function’s spectrum present in the spectrum of s.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 264
Sampling Theorem
 Sampling Theorem. Let x be a function with Fourier transform X, and
suppose that |X(ω)| = 0 for all ω satisfying |ω| > ωM (i.e., x is
bandlimited to frequencies [−ωM, ωM]). Then, x is uniquely determined by
its samples y(n) = x(Tn) for all integer n, if
ωs > 2ωM,
where ωs = 2π/T. The preceding inequality is known as the Nyquist
condition. If this condition is satisfied, we have that
x(t) = ∑_{n=−∞}^{∞} y(n) sinc((π/T)(t − Tn)),
or equivalently (i.e., rewritten in terms of ωs instead of T),
x(t) = ∑_{n=−∞}^{∞} y(n) sinc((ωs/2)t − πn).
 We call ωs/2 the Nyquist frequency and 2ωM the Nyquist rate.

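 The sketch below (an addition to the slides) samples a bandlimited signal above its Nyquist rate and reconstructs it with a truncated version of the sinc-interpolation sum from the sampling theorem; the signal, sampling period, and truncation length are arbitrary choices.

import numpy as np

def x(t):
    # Highest frequency present: 3 Hz, so wM = 2*pi*3.
    return np.cos(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

T = 0.1                              # sampling period; ws = 2*pi/T > 2*wM
n = np.arange(-300, 301)             # finitely many samples (truncates the ideal sum)
y = x(n * T)

t = np.linspace(-1.0, 1.0, 201)      # reconstruction grid, well inside the sampled interval
# In the slides' (unnormalized) notation, sinc((pi/T)(t - nT)) equals
# np.sinc((t - nT)/T), since np.sinc(u) = sin(pi u)/(pi u).
x_hat = np.array([np.sum(y * np.sinc((tk - n * T) / T)) for tk in t])

print(np.max(np.abs(x_hat - x(t))))  # small; limited only by truncating the sum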
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 265
Part 7

Laplace Transform (LT)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 266
Motivation Behind the Laplace Transform

 Another important mathematical tool in the study of signals and systems


is known as the Laplace transform.
 The Laplace transform can be viewed as a generalization of the
(classical) Fourier transform.
 Due to its more general nature, the Laplace transform has a number of
advantages over the (classical) Fourier transform.
 First, the Laplace transform representation exists for some functions that
do not have a Fourier transform representation. So, we can handle
some functions with the Laplace transform that cannot be handled with
the Fourier transform.
 Second, since the Laplace transform is a more general tool, it can provide
additional insights beyond those facilitated by the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 267
Motivation Behind the Laplace Transform (Continued)

 Earlier, we saw that complex exponentials are eigenfunctions of LTI


systems.
 In particular, for a LTI system H with impulse response h, we have that
H{e^{st}}(t) = H(s)e^{st},  where  H(s) = ∫_{−∞}^{∞} h(t)e^{−st} dt.

 Previously, we referred to H as the system function.


 As it turns out, H is the Laplace transform of h.
 Since the Laplace transform has already appeared earlier in the context of
LTI systems, it is clearly a useful tool.
 Furthermore, as we will see, the Laplace transform has many additional
uses.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 268
Section 7.1

Laplace Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 269
(Bilateral) Laplace Transform
 The (bilateral) Laplace transform of the function x, denoted Lx or X , is
defined as
Lx(s) = X(s) = ∫_{−∞}^{∞} x(t)e^{−st} dt.
 The inverse Laplace transform of X , denoted L−1 X or x, is then given
by
L−1X(t) = x(t) = (1/(2πj)) ∫_{σ−j∞}^{σ+j∞} X(s)e^{st} ds,
where Re(s) = σ is in the ROC of X . (Note that this is a contour
integration, since s is complex.)
 We refer to x and X as a Laplace transform pair and denote this
relationship as
x(t) ←LT→ X(s).
 In practice, we do not usually compute the inverse Laplace transform by
directly using the formula from above. Instead, we resort to other means
(to be discussed later).
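As a quick symbolic check (not part of the original slides), the Python/SymPy sketch below computes the transform of e^{−at}u(t); SymPy's laplace_transform integrates from 0, which agrees with the bilateral transform here because the function is causal.

import sympy as sp

t, s = sp.symbols('t s')
a = sp.symbols('a', positive=True)

# Transform of e^{-a t} for t >= 0, i.e., of e^{-a t} u(t).
X, sigma, cond = sp.laplace_transform(sp.exp(-a * t), t, s)
print(X)       # 1/(a + s)
print(sigma)   # -a, i.e., the transform converges for Re(s) > -a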
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 270
Bilateral and Unilateral Laplace Transforms

 Two different versions of the Laplace transform are commonly used:


1 the bilateral (or two-sided) Laplace transform; and

2 the unilateral (or one-sided) Laplace transform.

 The unilateral Laplace transform is most frequently used to solve systems


of linear differential equations with nonzero initial conditions.
 As it turns out, the only difference between the definitions of the bilateral
and unilateral Laplace transforms is in the lower limit of integration.
In the bilateral case, the lower limit is −∞, whereas in the unilateral case, the lower limit is 0− (i.e., ∫_{−∞}^{∞} x(t)e^{−st} dt versus ∫_{0−}^{∞} x(t)e^{−st} dt).
 For the most part, we will focus our attention primarily on the bilateral
Laplace transform.
 We will, however, briefly introduce the unilateral Laplace transform as a
tool for solving differential equations.
 Unless otherwise noted, all subsequent references to the Laplace
transform should be understood to mean bilateral Laplace transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 271
Remarks on Operator Notation

 For a function x, the Laplace transform of x is denoted using operator


notation as Lx.
 The Laplace transform of x evaluated at s is denoted Lx(s).
 Note that Lx is a function, whereas Lx(s) is a number.
 Similarly, for a function X , the inverse Laplace transform of X is denoted
using operator notation as L−1 X .
 The inverse Laplace transform of X evaluated at t is denoted L−1 X(t).
 Note that L−1 X is a function, whereas L−1 X(t) is a number.
 With the above said, engineers often abuse notation, and use expressions
like those above to mean things different from their proper meanings.
 Since such notational abuse can lead to problems, it is strongly
recommended that one refrain from doing this.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 272
Remarks on Dot Notation

 Often, we would like to write an expression for the Laplace transform of a


function without explicitly naming the function.
 For example, consider writing an expression for the Laplace transform of
the function v(t) = x(5t − 3) but without using the name “v”.
 It would be incorrect to write “Lx(5t − 3)” as this is the function Lx
evaluated at 5t − 3, which is not the meaning that we wish to convey.
 Also, strictly speaking, it would be incorrect to write “L{x(5t − 3)}” as the
operand of the Laplace transform operator must be a function, and
x(5t − 3) is a number (i.e., the function x evaluated at 5t − 3).
 Using dot notation, we can write the following strictly-correct expression
for the desired Laplace transform: L{x(5 · −3)}.
 In many cases, however, it is probably advisable to avoid employing
anonymous (i.e., unnamed) functions, as their use tends to be more error
prone in some contexts.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 273
Remarks on Notational Conventions
 Since dot notation is less frequently used by engineers, the author has
elected to minimize its use herein.
 To avoid ambiguous notation, the following conventions are followed:
1 in the expression for the operand of a Laplace transform operator, the
independent variable is assumed to be the variable named “t” unless
otherwise indicated (i.e., in terms of dot notation, each “t ” is treated as if it
were a “·”)
2 in the expression for the operand of the inverse Laplace transform operator,
the independent variable is assumed to be the variable named “s” unless
otherwise indicated (i.e., in terms of dot notation, each “s” is treated as if it
were a “·”).
 For example, with these conventions:
2 “L{(t − τ)u(t − τ)}” denotes the function that is the Laplace transform of the function v(t) = (t − τ)u(t − τ) (not the Laplace transform of the function v(τ) = (t − τ)u(t − τ)).
2 “L−1{1/(s^2 − λ)}” denotes the function that is the inverse Laplace transform of the function V(s) = 1/(s^2 − λ) (not the inverse Laplace transform of the function V(λ) = 1/(s^2 − λ)).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 274
Relationship Between Laplace and Fourier Transforms
 Let X and XF denote the Laplace and (CT) Fourier transforms of x,
respectively.
 The function X evaluated at jω (where ω is real) yields XF (ω). That is,

X( jω) = XF (ω).
 Due to the preceding relationship, the Fourier transform of x is sometimes
written as X( jω).
 The function X evaluated at an arbitrary complex value s = σ + jω (where
σ = Re(s) and ω = Im(s)) can also be expressed in terms of a Fourier
transform involving x. In particular, we have

X(σ + jω) = XF′(ω),

where XF′ is the (CT) Fourier transform of x′(t) = e^{−σt}x(t).
 So, in general, the Laplace transform of x is the Fourier transform of an
exponentially-weighted version of x.
 Due to this weighting, the Laplace transform of a function may exist when
the Fourier transform of the same function does not.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 275
Laplace Transform Examples

T HIS SLIDE IS INTENTIONALLY LEFT BLANK .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 276
Section 7.2

Region of Convergence (ROC)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 277
Left-Half Plane (LHP)

 The set R of all complex numbers s satisfying

Re(s) < a

for some real constant a is said to be a left-half plane (LHP).


 Some examples of LHPs are shown below.
[Figure: two example LHPs Re(s) < a in the complex plane, one with a < 0 and one with a > 0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 278
Right-Half Plane (RHP)

 The set R of all complex numbers s satisfying

Re(s) > a

for some real constant a is said to be a right-half plane (RHP).


 Some examples of RHPs are shown below.
[Figure: two example RHPs Re(s) > a in the complex plane, one with a < 0 and one with a > 0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 279
Intersection of Sets

 For two sets A and B, the intersection of A and B, denoted A ∩ B, is the


set of all points that are in both A and B.
 An illustrative example of set intersection is shown below.
[Figure: two example regions R1 and R2 in the complex plane and their intersection R1 ∩ R2.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 280
Adding a Scalar to a Set

 For a set S and a scalar constant a, S + a denotes the set given by

S + a = {z + a : z ∈ S}

(i.e., S + a is the set formed by adding a to each element of S).


 Effectively, adding a scalar to a set applies a translation (i.e., shift) to the
region associated with the set.
 An illustrative example is given below.
[Figure: an example region R in the complex plane and the translated region R + 1, shifted right by one.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 281
Multiplying a Set by a Scalar
 For a set S and a scalar constant a, aS denotes the set given by
aS = {az : z ∈ S}
(i.e., aS is the set formed by multiplying each element of S by a).
 Multiplying z by a affects z by: scaling by |a| and rotating about the origin
by arg a.
 So, effectively, multiplying a set by a scalar applies a scaling and/or
rotation to the region associated with the set.
 An illustrative example is given below.
[Figure: an example region R in the complex plane, the scaled region 2R, and the scaled and rotated region −2R.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 282
Region of Convergence (ROC)

 As we saw earlier, for a function x, the complete specification of its


Laplace transform X requires not only an algebraic expression for X , but
also the ROC associated with X .
Two very different functions can have the same algebraic expression for X; such functions are distinguished only by their ROCs.
 On the slides that follow, we will examine a number of key properties of
the ROC of the Laplace transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 283
ROC Property 1: General Form

 The ROC of a Laplace transform consists of strips parallel to the


imaginary axis in the complex plane.
 That is, if a point s0 is in the ROC, then the vertical line through s0 (i.e.,
Re(s) = Re(s0 )) is also in the ROC.
 Some examples of sets that would be either valid or invalid as ROCs are
shown below.
[Figure: three example regions in the complex plane, labelled Valid, Valid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 284
ROC Property 2: Rational Laplace Transforms

 If a Laplace transform X is a rational function, the ROC of X does not


contain any poles and is bounded by poles or extends to infinity.
 Some examples of sets that would be either valid or invalid as ROCs of
rational Laplace transforms are shown below.
[Figure: three example regions in the complex plane, labelled Valid, Valid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 285
ROC Property 3: Finite-Duration Functions

 If a function x is finite duration and its Laplace transform X converges for


at least one point, then X converges for all points in the complex plane
(i.e., the ROC is the entire complex plane).
 Some examples of sets that would be either valid or invalid as ROCs for
X , if x is finite duration, are shown below.
[Figure: three example regions in the complex plane, labelled Valid, Invalid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 286
ROC Property 4: Right-Sided Functions

 If a function x is right sided and the (vertical) line Re(s) = σ0 is in the


ROC of the Laplace transform X = Lx, then all values of s for which
Re(s) > σ0 must also be in the ROC (i.e., the ROC includes a RHP
containing Re(s) = σ0 ).
 Thus, if x is right sided but not left sided, the ROC of X is a RHP.
 Some examples of sets that would be either valid or invalid as ROCs for
X , if x is right sided but not left sided, are shown below.
[Figure: three example regions in the complex plane, labelled Valid, Invalid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 287
ROC Property 5: Left-Sided Functions

 If a function x is left sided and the (vertical) line Re(s) = σ0 is in the ROC
of the Laplace transform X = Lx, then all values of s for which Re(s) < σ0
must also be in the ROC (i.e., the ROC includes a LHP containing
Re(s) = σ0 ).
 Thus, if x is left sided but not right sided, the ROC of X is a LHP.
 Some examples of sets that would be either valid or invalid as ROCs for
X , if x is left sided but not right sided, are shown below.
[Figure: three example regions in the complex plane, labelled Valid, Invalid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 288
ROC Property 6: Two-Sided Functions

 If a function x is two sided and the (vertical) line Re(s) = σ0 is in the ROC
of the Laplace transform X = Lx, then the ROC will consist of a strip in
the complex plane that includes the line Re(s) = σ0 .
 Some examples of sets that would be either valid or invalid as ROCs for
X , if x is two sided, are shown below.
[Figure: three example regions in the complex plane, labelled Valid, Invalid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 289
ROC Property 7: More on Rational Laplace Transforms

 If the Laplace transform X of a function x is rational (with at least one


pole), then:
1 If x is right sided, the ROC of X is to the right of the rightmost pole of X

(i.e., the RHP to the right of the rightmost pole).


2 If x is left sided, the ROC of X is to the left of the leftmost pole of X (i.e., the

LHP to the left of the leftmost pole).


 This property is implied by properties 1, 2, 4, and 5.
 Some examples of sets that would be either valid or invalid as ROCs for
X , if X is rational and x is left/right sided, are given below.
[Figure: four example regions relative to the pole locations, labelled Valid, Invalid, Valid, and Invalid, respectively.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 290
General Form of the ROC
 To summarize the results of properties 3, 4, 5, and 6, if the Laplace
transform X of the function x exists, the ROC of X depends on the left-
and right-sidedness of x as follows:
x left sided | x right sided | ROC of X
no           | no            | strip
no           | yes           | RHP
yes          | no            | LHP
yes          | yes           | everywhere

 Thus, we can infer that, if X exists, its ROC can only be of the form of a
LHP, a RHP, a vertical strip, or the entire complex plane.
 For example, the sets shown below would not be valid as ROCs.
[Figure: two example regions in the complex plane, both labelled Invalid.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 291
Section 7.3

Properties of the Laplace Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 292
Properties of the Laplace Transform

Property                        | Time Domain           | Laplace Domain     | ROC
Linearity                       | a1x1(t) + a2x2(t)     | a1X1(s) + a2X2(s)  | At least R1 ∩ R2
Time-Domain Shifting            | x(t − t0)             | e^{−st0} X(s)      | R
Laplace-Domain Shifting         | e^{s0t} x(t)          | X(s − s0)          | R + Re(s0)
Time/Laplace-Domain Scaling     | x(at)                 | (1/|a|) X(s/a)     | aR
Conjugation                     | x*(t)                 | X*(s*)             | R
Time-Domain Convolution         | x1 ∗ x2(t)            | X1(s)X2(s)         | At least R1 ∩ R2
Time-Domain Differentiation     | (d/dt) x(t)           | sX(s)              | At least R
Laplace-Domain Differentiation  | −t x(t)               | (d/ds) X(s)        | R
Time-Domain Integration         | ∫_{−∞}^{t} x(τ) dτ    | (1/s) X(s)         | At least R ∩ {Re(s) > 0}

Initial Value Theorem: x(0+) = lim_{s→∞} sX(s)
Final Value Theorem:   lim_{t→∞} x(t) = lim_{s→0} sX(s)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 293
Laplace Transform Pairs
Pair | x(t)                    | X(s)                     | ROC
1    | δ(t)                    | 1                        | All s
2    | u(t)                    | 1/s                      | Re(s) > 0
3    | −u(−t)                  | 1/s                      | Re(s) < 0
4    | t^n u(t)                | n!/s^{n+1}               | Re(s) > 0
5    | −t^n u(−t)              | n!/s^{n+1}               | Re(s) < 0
6    | e^{−at} u(t)            | 1/(s + a)                | Re(s) > −a
7    | −e^{−at} u(−t)          | 1/(s + a)                | Re(s) < −a
8    | t^n e^{−at} u(t)        | n!/(s + a)^{n+1}         | Re(s) > −a
9    | −t^n e^{−at} u(−t)      | n!/(s + a)^{n+1}         | Re(s) < −a
10   | cos(ω0 t) u(t)          | s/(s² + ω0²)             | Re(s) > 0
11   | sin(ω0 t) u(t)          | ω0/(s² + ω0²)            | Re(s) > 0
12   | e^{−at} cos(ω0 t) u(t)  | (s + a)/((s + a)² + ω0²) | Re(s) > −a
13   | e^{−at} sin(ω0 t) u(t)  | ω0/((s + a)² + ω0²)      | Re(s) > −a

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 294
Linearity

If x1(t) ←LT→ X1(s) with ROC R1 and x2(t) ←LT→ X2(s) with ROC R2, then
a1x1(t) + a2x2(t) ←LT→ a1X1(s) + a2X2(s)  with ROC R containing R1 ∩ R2,

where a1 and a2 are arbitrary complex constants.


 This is known as the linearity property of the Laplace transform.
 The ROC R always contains R1 ∩ R2 but can be larger (in the case that
pole-zero cancellation occurs).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 295
Time-Domain Shifting

If x(t) ←LT→ X(s) with ROC R, then
x(t − t0) ←LT→ e^{−st0} X(s)  with ROC R,
where t0 is an arbitrary real constant.


 This is known as the time-domain shifting property of the Laplace
transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 296
Laplace-Domain Shifting
If x(t) ←LT→ X(s) with ROC R, then
e^{s0t} x(t) ←LT→ X(s − s0)  with ROC R′ = R + Re(s0),
where s0 is an arbitrary complex constant.


 This is known as the Laplace-domain shifting property of the Laplace
transform.
 As illustrated below, the ROC R is shifted right by Re(s0 ).
[Figure: the ROC R, a strip between Re(s) = σmin and Re(s) = σmax, and the shifted ROC R′ = R + Re(s0), between σmin + Re(s0) and σmax + Re(s0).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 297
Time-Domain/Laplace-Domain Scaling

If x(t) ←LT→ X(s) with ROC R, then
x(at) ←LT→ (1/|a|) X(s/a)  with ROC R′ = aR,
where a is a nonzero real constant.
 This is known as the (time-domain/Laplace-domain) scaling property
of the Laplace transform.
 As illustrated below, the ROC R is scaled and possibly flipped left to right.
[Figure: the ROC R between σmin and σmax, the scaled ROC aR for a > 0 (between aσmin and aσmax), and the scaled and flipped ROC aR for a < 0 (between aσmax and aσmin).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 298
Conjugation

If x(t) ←LT→ X(s) with ROC R, then
x*(t) ←LT→ X*(s*)  with ROC R.

 This is known as the conjugation property of the Laplace transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 299
Time-Domain Convolution

If x1(t) ←LT→ X1(s) with ROC R1 and x2(t) ←LT→ X2(s) with ROC R2, then
x1 ∗ x2(t) ←LT→ X1(s)X2(s)  with ROC R containing R1 ∩ R2.
 This is known as the time-domain convolution property of the Laplace
transform.
 The ROC R always contains R1 ∩ R2 but can be larger than this
intersection (if pole-zero cancellation occurs).
 Convolution in the time domain becomes multiplication in the Laplace
domain.
 Consequently, it is often much easier to work with LTI systems in the
Laplace domain, rather than the time domain.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 300
Time-Domain Differentiation

If x(t) ←LT→ X(s) with ROC R, then
(d/dt) x(t) ←LT→ sX(s)  with ROC R′ containing R.
 This is known as the time-domain differentiation property of the
Laplace transform.
The ROC R′ always contains R but can be larger than R (if pole-zero
cancellation occurs).
 Differentiation in the time domain becomes multiplication by s in the
Laplace domain.
 Consequently, it can often be much easier to work with differential
equations in the Laplace domain, rather than the time domain.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 301
Laplace-Domain Differentiation

If x(t) ←LT→ X(s) with ROC R, then
−t x(t) ←LT→ (d/ds) X(s)  with ROC R.
 This is known as the Laplace-domain differentiation property of the
Laplace transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 302
Time-Domain Integration

If x(t) ←LT→ X(s) with ROC R, then
∫_{−∞}^{t} x(τ) dτ ←LT→ (1/s) X(s)  with ROC R′ containing R ∩ {Re(s) > 0}.
 This is known as the time-domain integration property of the Laplace
transform.
The ROC R′ always contains at least R ∩ {Re(s) > 0} but can be larger (if
pole-zero cancellation occurs).
 Integration in the time domain becomes division by s in the Laplace
domain.
 Consequently, it is often much easier to work with integral equations in the
Laplace domain, rather than the time domain.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 303
Initial Value Theorem

 For a function x with Laplace transform X , if x is causal and contains no


impulses or higher order singularities at the origin, then

x(0+) = lim_{s→∞} sX(s),

where x(0+ ) denotes the limit of x(t) as t approaches zero from positive
values of t .
 This result is known as the initial value theorem.
 In situations where X is known but x is not, the initial value theorem
eliminates the need to explicitly find x by an inverse Laplace transform
calculation in order to evaluate x(0+ ).
 In practice, the values of functions at the origin are frequently of interest,
as such values often convey information about the initial state of systems.
 The initial value theorem can sometimes also be helpful in checking for
errors in Laplace transform calculations.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 304
Final Value Theorem

 For a function x with Laplace transform X , if x is causal and x(t) has a


finite limit as t → ∞, then

lim_{t→∞} x(t) = lim_{s→0} sX(s).

 This result is known as the final value theorem.


 In situations where X is known but x is not, the final value theorem
eliminates the need to explicitly find x by an inverse Laplace transform
calculation in order to evaluate limt→∞ x(t).
 In practice, the values of functions at infinity are frequently of interest, as
such values often convey information about the steady-state behavior of
systems.
 The final value theorem can sometimes also be helpful in checking for
errors in Laplace transform calculations.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 305
More Laplace Transform Examples

T HIS SLIDE IS INTENTIONALLY LEFT BLANK .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 306
Section 7.4

Determination of Inverse Laplace Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 307
Finding Inverse Laplace Transform

 Recall that the inverse Laplace transform x of X is given by


x(t) = (1/(2πj)) ∫_{σ−j∞}^{σ+j∞} X(s)e^{st} ds,

where Re(s) = σ is in the ROC of X .


 Unfortunately, the above contour integration can often be quite tedious to
compute.
 Consequently, we do not usually compute the inverse Laplace transform
directly using the above equation.
 For rational functions, the inverse Laplace transform can be more easily
computed using partial fraction expansions.
 Using a partial fraction expansion, we can express a rational function as a
sum of lower-order rational functions whose inverse Laplace transforms
can typically be found in tables.
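For instance, the SymPy sketch below (not part of the original slides) expands a rational X(s) into first-order terms and then reads off a causal inverse from the transform table.

import sympy as sp

s, t = sp.symbols('s t')
X = (s + 3) / ((s + 1) * (s + 2))

print(sp.apart(X, s))     # 2/(s + 1) - 1/(s + 2)
# From table pair 6 (assuming the ROC Re(s) > -1, i.e., a causal inverse):
#   x(t) = (2*exp(-t) - exp(-2*t)) * u(t)
print(sp.inverse_laplace_transform(X, s, t))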

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 308
Section 7.5

Laplace Transform and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 309
System Function of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h. Let
X , Y , and H denote the Laplace transforms of x, y, and h, respectively.
 Since y(t) = x ∗ h(t), the system is characterized in the Laplace domain by

Y (s) = X(s)H(s).
 As a matter of terminology, we refer to H as the system function (or
transfer function) of the system (i.e., the system function is the Laplace
transform of the impulse response).
 A LTI system is completely characterized by its system function H .
 When viewed in the Laplace domain, a LTI system forms its output by
multiplying its input with its system function.
 If the ROC of H includes the imaginary axis, then H( jω) is the frequency
response of the LTI system.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 310
Block Diagram Representations of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the Laplace transforms of x, y, and h, respectively.
 Often, it is convenient to represent such a system in block diagram form in
the Laplace domain as shown below.

X Y
H

 Since a LTI system is completely characterized by its system function, we


typically label the system with this quantity.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 311
Interconnection of LTI Systems

 The series interconnection of the LTI systems with system functions H1


and H2 is the LTI system with system function H1 H2 . That is, we have the
equivalence shown below.
[Diagram: H1 followed by H2 in series, with input X and output Y, is equivalent to the single system H1H2.]

 The parallel interconnection of the LTI systems with system functions H1


and H2 is the LTI system with the system function H1 + H2 . That is, we
have the equivalence shown below.
[Diagram: H1 and H2 in parallel, with input X and their outputs summed to form Y, is equivalent to the single system H1 + H2.]
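A small numerical sketch (not part of the original slides) of these interconnections, with system functions represented by numerator/denominator coefficient arrays; H1(s) = 1/(s + 1) and H2(s) = (s + 2)/(s + 3) are arbitrary example choices.

import numpy as np

num1, den1 = np.array([1.0]), np.array([1.0, 1.0])        # H1(s) = 1/(s + 1)
num2, den2 = np.array([1.0, 2.0]), np.array([1.0, 3.0])   # H2(s) = (s + 2)/(s + 3)

# Series interconnection: H(s) = H1(s) H2(s).
num_series = np.polymul(num1, num2)                       # s + 2
den_series = np.polymul(den1, den2)                       # s^2 + 4s + 3

# Parallel interconnection: H(s) = H1(s) + H2(s).
num_parallel = np.polyadd(np.polymul(num1, den2), np.polymul(num2, den1))  # s^2 + 4s + 5
den_parallel = np.polymul(den1, den2)                                      # s^2 + 4s + 3

print(num_series, den_series)
print(num_parallel, den_parallel)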

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 312
Causality

 If a LTI system is causal, its impulse response is causal, and therefore


right sided. From this, we have the result below.
 Theorem. The ROC associated with the system function of a causal LTI
system is a RHP or the entire complex plane.
 In general, the converse of the above theorem is not necessarily true.
That is, if the ROC of the system function is a RHP or the entire complex
plane, it is not necessarily true that the system is causal.
 If the system function is rational, however, we have that the converse
does hold, as indicated by the theorem below.
 Theorem. For a LTI system with a rational system function H , causality
of the system is equivalent to the ROC of H being the RHP to the right
of the rightmost pole or, if H has no poles, the entire complex plane.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 313
BIBO Stability

 Whether or not a system is BIBO stable depends on the ROC of its


system function.
 Theorem. A LTI system is BIBO stable if and only if the ROC of its
system function H contains the imaginary axis (i.e., Re(s) = 0).
 Theorem. A causal LTI system with a (proper) rational system function H
is BIBO stable if and only if all of the poles of H lie in the left half of the
plane (i.e., all of the poles have negative real parts).
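As a simple check (not part of the original slides), the sketch below tests the pole condition for a causal system with the example system function H(s) = 1/(s^2 + 3s + 2).

import numpy as np

den = [1.0, 3.0, 2.0]          # denominator of H(s) = 1/(s^2 + 3s + 2)
poles = np.roots(den)          # poles at s = -1 and s = -2
print(poles)
print(np.all(poles.real < 0))  # True, so the causal system is BIBO stable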

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 314
Invertibility

 A LTI system H with system function H is invertible if and only if there


exists another LTI system with system function Hinv such that

H(s)Hinv (s) = 1,

in which case Hinv is the system function of H−1 and

1
Hinv (s) = .
H(s)
 Since distinct systems can have identical system functions (but with
differing ROCs), the inverse of a LTI system is not necessarily unique.
 In practice, however, we often desire a stable and/or causal system. So,
although multiple inverse systems may exist, we are frequently only
interested in one specific choice of inverse system (due to these
additional constraints of stability and/or causality).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 315
LTI Systems and Differential Equations
 Many LTI systems of practical interest can be represented using an
Nth-order linear differential equation with constant coefficients.
 Consider a system with input x and output y that is characterized by an
equation of the form
∑_{k=0}^{N} b_k (d/dt)^k y(t) = ∑_{k=0}^{M} a_k (d/dt)^k x(t),

where the ak and bk are complex constants and M ≤ N .


 Let h denote the impulse response of the system, and let X , Y , and H
denote the Laplace transforms of x, y, and h, respectively.
 One can show that H is given by
H(s) = Y(s)/X(s) = (∑_{k=0}^{M} a_k s^k) / (∑_{k=0}^{N} b_k s^k).
 Observe that, for a system of the form considered above, the system
function is always rational.
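For example (a sketch, not part of the original slides), the differential equation y″(t) + 3y′(t) + 2y(t) = x′(t) + 5x(t) has a1 = 1, a0 = 5, b2 = 1, b1 = 3, b0 = 2, and its (rational) system function can be formed directly from these coefficients.

from scipy import signal

a = [1.0, 5.0]              # numerator coefficients: s + 5
b = [1.0, 3.0, 2.0]         # denominator coefficients: s^2 + 3s + 2
H = signal.TransferFunction(a, b)

print(H.zeros)              # zero at s = -5
print(H.poles)              # poles at s = -1 and s = -2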

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 316
Section 7.6

Application: Circuit Analysis

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 317
Electronic Circuits

 An electronic circuit is a network of one or more interconnected circuit


elements.
 The three most basic types of circuit elements are:
1 resistors;
2 inductors; and
3 capacitors.
 Two fundamental quantities of interest in electronic circuits are current and
voltage.
 Current is the rate at which electric charge flows through some part of a
circuit, such as a circuit element, and is measured in units of amperes (A).
 Voltage is the difference in electric potential between two points in a
circuit, such as across a circuit element, and is measured in units of
volts (V).
 Voltage is essentially a force that makes electric charge (or current) flow.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 318
Resistors

 A resistor is a circuit element that opposes the flow of current.


 A resistor is characterized by an equation of the form

v(t) = Ri(t) or equivalently, i(t) = R1 v(t) ,

where R is a nonnegative real constant, and v and i respectively denote


the voltage across and current through the resistor as a function of time.
 As a matter of terminology, the quantity R is known as the resistance of
the resistor.
 Resistance is measured in units of ohms (Ω).
 In circuit diagrams, a resistor is denoted by the symbol shown below.

[Figure: circuit symbol for a resistor with resistance R and current i.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 319
Inductors
 An inductor is a circuit element that converts an electric current into a
magnetic field and vice versa.
 An inductor uses the energy stored in a magnetic field in order to oppose
changes in current (through the inductor).
 An inductor is characterized by an equation of the form
v(t) = L (d/dt) i(t)  (or equivalently, i(t) = (1/L) ∫_{−∞}^{t} v(τ) dτ),
where L is a nonnegative real constant, and v and i respectively denote
the voltage across and current through the inductor as a function of time.
 As a matter of terminology, the quantity L is known as the inductance of
the inductor.
 Inductance is measured in units of henrys (H).
 In circuit diagrams, an inductor is denoted by the symbol shown below.

[Figure: circuit symbol for an inductor with inductance L, current i, and voltage v.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 320
Capacitors
 A capacitor is a circuit element that stores electric charge.
 A capacitor uses the energy stored in an electric field in order to oppose
changes in voltage (across the capacitor).
 A capacitor is characterized by an equation of the form
v(t) = (1/C) ∫_{−∞}^{t} i(τ) dτ  (or equivalently, i(t) = C (d/dt) v(t)),
where C is a nonnegative real constant, and v and i respectively denote
the voltage across and current through the capacitor as a function of time.
 As a matter of terminology, the quantity C is known as the capacitance of
the capacitor.
 Capacitance is measured in units of farads (F).
 In circuit diagrams, a capacitor is denoted by the symbol shown below.
[Figure: circuit symbol for a capacitor with capacitance C, current i, and voltage v.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 321
Circuit Analysis with the Laplace Transform
 The Laplace transform is a very useful tool for circuit analysis.
 The utility of the Laplace transform is partly due to the fact that the
differential/integral equations that describe inductors and capacitors are
much simpler to express in the Laplace domain than in the time domain.
 Let v and i denote the voltage across and current through a circuit
element, and let V and I denote the Laplace transforms of v and i,
respectively.
 In the Laplace domain, the equations characterizing a resistor, an
inductor, and a capacitor respectively become:

V(s) = RI(s)  (or equivalently, I(s) = (1/R)V(s));
V(s) = sLI(s)  (or equivalently, I(s) = (1/(sL))V(s)); and
V(s) = (1/(sC))I(s)  (or equivalently, I(s) = sCV(s)).

 Note the absence of differentiation and integration in the above equations


for an inductor and a capacitor.
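As a small illustration (not part of the original slides), the SymPy sketch below analyzes a series RC circuit in the Laplace domain by treating the resistor and capacitor as the impedances R and 1/(sC) and applying the voltage-divider rule.

import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)

Z_R = R                    # resistor:  V(s) = R I(s)
Z_C = 1 / (s * C)          # capacitor: V(s) = (1/(sC)) I(s)

# Transfer function from the source voltage to the capacitor voltage.
H = sp.simplify(Z_C / (Z_R + Z_C))
print(H)                   # 1/(C*R*s + 1), a first-order lowpass response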

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 322
Section 7.7

Application: Design and Analysis of Control Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 323
Control Systems
 A control system manages the behavior of one or more other systems with
some specific goal.
 Typically, the goal is to force one or more physical quantities to assume
particular desired values, where such quantities might include: positions,
velocities, accelerations, forces, torques, temperatures, or pressures.
 The desired values of the quantities being controlled are collectively
viewed as the input of the control system.
 The actual values of the quantities being controlled are collectively viewed
as the output of the control system.
 A control system whose behavior is not influenced by the actual values of
the quantities being controlled is called an open loop (or non-feedback)
system.
 A control system whose behavior is influenced by the actual values of the
quantities being controlled is called a closed loop (or feedback) system.
 An example of a simple control system would be a thermostat system,
which controls the temperature in a room or building.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 324
Feedback Control Systems

[Block diagram: the reference input and the feedback signal are differenced to form the error, which drives the controller; the controller drives the plant to produce the output; the sensor measures the output and produces the feedback signal.]
 input: desired value of the quantity to be controlled
 output: actual value of the quantity to be controlled
 error: difference between the desired and actual values
 plant: system to be controlled
 sensor: device used to measure the actual output
 controller: device that monitors the error and changes the input of the
plant with the goal of forcing the error to zero

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 325
Stability Analysis of Feedback Systems

 Often, we want to ensure that a system is BIBO stable.


 The BIBO stability property is more easily characterized in the Laplace
domain than in the time domain.
 Therefore, the Laplace domain is extremely useful for the stability analysis
of systems.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 326
Stabilization Example: Unstable Plant

 causal LTI plant:

[Diagram: plant P with input X and output Y.]

P(s) = 10/(s − 1)

 ROC of P:
[Figure: the ROC of P is Re(s) > 1, the RHP to the right of the pole at s = 1; it does not include the imaginary axis.]

 system is not BIBO stable

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 327
Stabilization Example: Using Pole-Zero Cancellation

 system formed by series interconnection of plant and causal LTI


compensator:

X Y
W P

10 s−1
P(s) = s−1 , W (s) = 10(s+1)

 system function H of overall system:


  
H(s) = W(s)P(s) = [(s − 1)/(10(s + 1))][10/(s − 1)] = 1/(s + 1)

 ROC of H :
[Figure: the ROC of H is Re(s) > −1, which includes the imaginary axis.]

 overall system is BIBO stable

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 328
Stabilization Example: Using Feedback (1)
 feedback system (with causal LTI compensator and sensor):

[Diagram: feedback system with error signal R, controller C, plant P, and sensor Q in the feedback path.]

P(s) = 10/(s − 1),  C(s) = β,  Q(s) = 1
 system function H of feedback system:
H(s) = C(s)P(s)/(1 + C(s)P(s)Q(s)) = 10β/(s − (1 − 10β))
 ROC of H :
[Figure: the ROC of H is Re(s) > 1 − 10β, the RHP to the right of the pole at s = 1 − 10β.]

feedback system is BIBO stable if and only if 1 − 10β < 0, or equivalently, β > 1/10
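A quick numerical sweep (not part of the original slides) over β confirms this: the closed-loop pole at s = 1 − 10β moves into the left half-plane once β exceeds 1/10.

for beta in [0.05, 0.10, 0.20, 0.50]:
    pole = 1.0 - 10.0 * beta
    print(f"beta = {beta:.2f}: pole at s = {pole:+.2f}, BIBO stable: {pole < 0}")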
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 329
Stabilization Example: Using Feedback (2)
[Diagram: feedback system with error signal R, controller C, plant P, and sensor Q.]

R(s) = X(s) − Q(s)Y (s)


Y (s) = C(s)P(s)R(s)

Y (s) = C(s)P(s)R(s)
= C(s)P(s)[X(s) − Q(s)Y (s)]
= C(s)P(s)X(s) −C(s)P(s)Q(s)Y (s)

[1 +C(s)P(s)Q(s)]Y (s) = C(s)P(s)X(s)


H(s) = Y(s)/X(s) = C(s)P(s)/(1 + C(s)P(s)Q(s))

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 330
Stabilization Example: Using Feedback (3)

P(s) = 10/(s − 1),  C(s) = β,  Q(s) = 1

H(s) = C(s)P(s)/(1 + C(s)P(s)Q(s))
     = β(10/(s − 1)) / [1 + β(10/(s − 1))(1)]
     = 10β/(s − 1 + 10β)
     = 10β/(s − (1 − 10β))

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 331
Remarks on Stabilization Via Pole-Zero Cancellation
 Pole-zero cancellation is not achievable in practice, and therefore it cannot
be used to stabilize real-world systems.
 The theoretical models used to represent real-world systems are only
approximations due to many factors, including the following:
2 Determining the system function of a system involves measurement, which
always has some error.
2 A system cannot be built with such precision that it will have exactly some
prescribed system function.
2 The system function of most systems will vary at least slightly with changes
in the physical environment.
2 Although a LTI model is used to represent a system, the likely reality is that
the system is not exactly LTI, which introduces error.
 Due to approximation error, the effective poles and zeros of the system
function will only be approximately where they are expected to be.
 Since pole-zero cancellation requires that a pole and zero be placed at
exactly the same location, any error will prevent this cancellation from
being achieved.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 332
Section 7.8

Unilateral Laplace Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 333
Unilateral Laplace Transform

 The unilateral Laplace transform of the function x, denoted Lu x or X , is


defined as
Lu x(s) = X(s) = ∫_{0−}^{∞} x(t)e^{−st} dt.

 The unilateral Laplace transform is related to the bilateral Laplace


transform as follows:
Lu x(s) = ∫_{0−}^{∞} x(t)e^{−st} dt = ∫_{−∞}^{∞} x(t)u(t)e^{−st} dt = L{xu}(s).

 In other words, the unilateral Laplace transform of the function x is simply


the bilateral Laplace transform of the function xu.
 Since Lu x = L{xu} and xu is always a right-sided function, the ROC
associated with Lu x is always either a RHP or the entire complex plane.
 For this reason, we often do not explicitly indicate the ROC when
working with the unilateral Laplace transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 334
Inversion of the Unilateral Laplace Transform

 With the unilateral Laplace transform, the same inverse transform


equation is used as in the bilateral case.
 The unilateral Laplace transform is only invertible for causal functions.
 In particular, we have

Lu−1{Lu x}(t) = Lu−1{L{xu}}(t)
             = L−1{L{xu}}(t)
             = x(t)u(t)
             = x(t) for t ≥ 0, and 0 for t < 0.

 For a noncausal function x, we can only recover x(t) for t ≥ 0.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 335
Unilateral Versus Bilateral Laplace Transform

 Due to the close relationship between the unilateral and bilateral Laplace
transforms, these two transforms have some similarities in their properties.
 Since these two transforms are not identical, however, their properties
differ in some cases, often in subtle ways.
 In the unilateral case, we have that:
1 the time-domain convolution property has the additional requirement that
the functions being convolved must be causal;
2 the time/Laplace-domain scaling property has the additional constraint that
the scaling factor must be positive;
3 the time-domain differentiation property has an extra term in the expression
for Lu {Dx}(t), where D denotes the derivative operator (namely, −x(0− ));
4 the time-domain integration property has a different lower limit in the
time-domain integral (namely, 0− instead of −∞); and
5 the time-domain shifting property does not hold (except in special
circumstances).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 336
Properties of the Unilateral Laplace Transform

Property                        | Time Domain                      | Laplace Domain
Linearity                       | a1x1(t) + a2x2(t)                | a1X1(s) + a2X2(s)
Laplace-Domain Shifting         | e^{s0t} x(t)                     | X(s − s0)
Time/Laplace-Domain Scaling     | x(at), a > 0                     | (1/a) X(s/a)
Conjugation                     | x*(t)                            | X*(s*)
Time-Domain Convolution         | x1 ∗ x2(t), x1 and x2 causal     | X1(s)X2(s)
Time-Domain Differentiation     | (d/dt) x(t)                      | sX(s) − x(0−)
Laplace-Domain Differentiation  | −t x(t)                          | (d/ds) X(s)
Time-Domain Integration         | ∫_{0−}^{t} x(τ) dτ               | (1/s) X(s)

Initial Value Theorem: x(0+) = lim_{s→∞} sX(s)
Final Value Theorem:   lim_{t→∞} x(t) = lim_{s→0} sX(s)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 337
Unilateral Laplace Transform Pairs

Pair | x(t), t ≥ 0         | X(s)
1    | δ(t)                | 1
2    | 1                   | 1/s
3    | t^n                 | n!/s^{n+1}
4    | e^{−at}             | 1/(s + a)
5    | t^n e^{−at}         | n!/(s + a)^{n+1}
6    | cos(ω0 t)           | s/(s² + ω0²)
7    | sin(ω0 t)           | ω0/(s² + ω0²)
8    | e^{−at} cos(ω0 t)   | (s + a)/((s + a)² + ω0²)
9    | e^{−at} sin(ω0 t)   | ω0/((s + a)² + ω0²)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 338
Solving Differential Equations [Using the Unilateral Laplace Transform]

 Many systems of interest in engineering applications can be characterized


by constant-coefficient linear differential equations.
 One common use of the unilateral Laplace transform is in solving
constant-coefficient linear differential equations with nonzero initial
conditions.
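As an illustration of the kind of problem involved (a sketch, not part of the original slides), the SymPy code below solves y′(t) + 2y(t) = 1 for t ≥ 0 with the nonzero initial condition y(0) = 3; the same answer would be obtained by transforming the equation with the unilateral Laplace transform and inverting.

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t) + 2 * y(t), 1)      # input x(t) = u(t), i.e., 1 for t >= 0
sol = sp.dsolve(ode, y(t), ics={y(0): 3})
print(sol)                                   # y(t) = 1/2 + 5*exp(-2*t)/2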

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 339
Part 8

Discrete-Time (DT) Signals and Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 340
Section 8.1

Independent- and Dependent-Variable Transformations

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 341
Time Shifting (Translation)

 Time shifting (also called translation) maps the input sequence x to the
output sequence y as given by

y(n) = x(n − b),

where b is an integer.
 Such a transformation shifts the sequence (to the left or right) along the
time axis.
 If b > 0, y is shifted to the right by |b|, relative to x (i.e., delayed in time).
 If b < 0, y is shifted to the left by |b|, relative to x (i.e., advanced in time).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 342
Time Shifting (Translation): Example

[Figure: an example sequence x(n), together with x(n − 1) (x shifted right by one) and x(n + 1) (x shifted left by one).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 343
Time Reversal (Reflection)

 Time reversal (also known as reflection) maps the input sequence x to


the output sequence y as given by

y(n) = x(−n).
 Geometrically, the output sequence y is a reflection of the input sequence
x about the (vertical) line n = 0.
[Figure: an example sequence x(n) and its time reversal x(−n).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 344
Downsampling

 Downsampling maps the input sequence x to the output sequence y as


given by

y(n) = (↓ a)x(n) = x(an),

where a is a strictly positive integer.


 The output sequence y is produced from the input sequence x by keeping
only every ath sample of x.
[Figure: an example sequence x(n) and its downsampled version (↓2)x(n).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 345
Upsampling

 Upsampling maps the input sequence x to the output sequence y as


given by
y(n) = (↑a)x(n) = x(n/a) if n/a is an integer, and 0 otherwise,

where a is a strictly positive integer.


 The output sequence y is produced from the input sequence x by inserting
a − 1 zeros between all of the samples of x.
[Figure: an example sequence x(n) and its upsampled version (↑2)x(n).]
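The two operations are straightforward to express with array indexing; the numpy sketch below (not part of the original slides, with an arbitrary example sequence) downsamples and upsamples by a factor of 2.

import numpy as np

x = np.array([1, 2, 3, 2, 1, 0, 1, 2])

# Downsampling by a = 2: keep every 2nd sample, y(n) = x(2n).
down = x[::2]                               # [1 3 1 1]

# Upsampling by a = 2: insert a - 1 = 1 zero between successive samples.
up = np.zeros(2 * len(x), dtype=x.dtype)
up[::2] = x                                 # [1 0 2 0 3 0 2 0 1 0 0 0 1 0 2 0]

print(down)
print(up)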

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 346
Combined Independent-Variable Transformations
 Consider a transformation that maps the input sequence x to the output
sequence y as given by

y(n) = x(an − b),


where a and b are integers and a ≠ 0.
 Such a transformation is a combination of time shifting, downsampling,
and time reversal operations.
 Time reversal commutes with downsampling.
 Time shifting does not commute with time reversal or downsampling.
 The above transformation is equivalent to:
1 first, time shifting x by b;

2 then, downsampling the result by |a| and, if a < 0, time reversing as well.

If b/a is an integer, the above transformation is also equivalent to:


1 first, downsampling x by |a| and, if a < 0, time reversing;

2 then, time shifting the result by b/a.


 Note that the time shift is not by the same amount in both cases.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 347
Section 8.2

Properties of Sequences

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 348
Symmetry and Addition/Multiplication

 Sums involving even and odd sequences have the following properties:
2 The sum of two even sequences is even.
2 The sum of two odd sequences is odd.
2 The sum of an even sequence and odd sequence is neither even nor odd,
provided that neither of the sequences is identically zero.
 That is, the sum of sequences with the same type of symmetry also has
the same type of symmetry.
 Products involving even and odd sequences have the following
properties:
2 The product of two even sequences is even.
2 The product of two odd sequences is even.
2 The product of an even sequence and an odd sequence is odd.
 That is, the product of sequences with the same type of symmetry is even,
while the product of sequences with opposite types of symmetry is odd.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 349
Decomposition of a Sequence into Even and Odd Parts

 Every sequence x has a unique representation of the form

x(n) = xe (n) + xo (n),

where the sequences xe and xo are even and odd, respectively.


 In particular, the sequences xe and xo are given by

xe (n) = 21 [x(n) + x(−n)] and xo (n) = 21 [x(n) − x(−n)] .


 The sequences xe and xo are called the even part and odd part of x,
respectively.
 For convenience, the even and odd parts of x are often denoted as
Even{x} and Odd{x}, respectively.
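A short numpy sketch (not part of the original slides) of this decomposition, using an index range symmetric about n = 0 so that x(−n) is simply a reversal of the array:

import numpy as np

n = np.arange(-3, 4)              # n = -3, ..., 3
x = n**2 + n                      # an arbitrary example sequence on this range

xe = (x + x[::-1]) / 2            # even part (here n^2)
xo = (x - x[::-1]) / 2            # odd part (here n)

print(np.allclose(x, xe + xo))    # True: x = xe + xo
print(np.allclose(xe, xe[::-1]))  # True: xe is even
print(np.allclose(xo, -xo[::-1])) # True: xo is odd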

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 350
Sum of Periodic Sequences

 The least common multiple of two (strictly positive) integers a and b,


denoted lcm(a, b), is the smallest positive integer that is divisible by both
a and b.
 The quantity lcm(a, b) can be easily determined from a prime factorization
of the integers a and b by taking the product of the highest power for each
prime factor appearing in these factorizations. Example:

lcm(20, 6) = lcm(2² · 5¹, 2¹ · 3¹) = 2² · 3¹ · 5¹ = 60;
lcm(54, 24) = lcm(2¹ · 3³, 2³ · 3¹) = 2³ · 3³ = 216; and
lcm(24, 90) = lcm(2³ · 3¹, 2¹ · 3² · 5¹) = 2³ · 3² · 5¹ = 360.
 Sum of periodic sequences. For any two periodic sequences x1 and x2
with fundamental periods N1 and N2 , respectively, the sum x1 + x2 is
periodic with period lcm(N1 , N2 ).
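The lcm computations above are easy to verify with the standard library (a quick check, not part of the original slides):

from math import lcm   # available in Python 3.9 and later

print(lcm(20, 6), lcm(54, 24), lcm(24, 90))   # 60 216 360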

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 351
Right-Sided Sequences
 A sequence x is said to be right sided if, for some (finite) integer constant
n0 , the following condition holds:
x(n) = 0 for all n < n0
(i.e., x is only potentially nonzero to the right of n0 ).
 An example of a right-sided sequence is shown below.

x(n)

2 ···

n
−4 −3 −2 −1 0 1 2 3 4

 A sequence x is said to be causal if


x(n) = 0 for all n < 0.
 A causal sequence is a special case of a right-sided sequence.
 A causal sequence is not to be confused with a causal system. In these
two contexts, the word “causal” has very different meanings.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 352
Left-Sided Sequences
 A sequence x is said to be left sided if, for some (finite) integer constant
n0 , the following condition holds:
x(n) = 0 for all n > n0
(i.e., x is only potentially nonzero to the left of n0 ).
 An example of a left-sided sequence is shown below.

x(n)

··· 2

n
−4 −3 −2 −1 0 1 2 3 4

 A sequence x is said to be anticausal if


x(n) = 0 for all n > 0.
 An anticausal sequence is a special case of a left-sided sequence.
 An anticausal sequence is not to be confused with an anticausal system.
In these two contexts, the word “anticausal” has very different meanings.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 353
Finite-Duration and Two-Sided Sequences
 A sequence that is both left sided and right sided is said to be finite
duration (or time limited).
 An example of a finite-duration sequence is shown below.
x(n)

n
−4 −3 −2 −1 0 1 2 3 4

 A sequence that is neither left sided nor right sided is said to be two
sided.
 An example of a two-sided sequence is shown below.

x(n)

··· 2 ···

n
−4 −3 −2 −1 0 1 2 3 4

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 354
Bounded Sequences

 A sequence x is said to be bounded if there exists some (finite) positive


real constant A such that

|x(n)| ≤ A for all n

(i.e., x(n) is finite for all n).


 Examples of bounded sequences include any constant sequence.
 Examples of unbounded sequences include any nonconstant polynomial
sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 355
Energy of a Sequence

 The energy E contained in the sequence x is given by



E= ∑ |x(k)|2 .
k=−∞

 A signal with finite energy is said to be an energy signal.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 356
Section 8.3

Elementary Sequences

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 357
Real Sinusoidal Sequences
 A real sinusoidal sequence is a sequence of the form
x(n) = A cos(Ωn + θ),
where A, Ω, and θ are real constants.
 A real sinusoid is periodic if and only if Ω/(2π) is a rational number, in which case the fundamental period is the smallest integer of the form 2πk/|Ω| where k is a (strictly) positive integer.
 For all integer k, xk (n) = A cos([Ω + 2πk]n + θ) is the same sequence.
 An example of a periodic real sinusoid with fundamental period 12 is
shown plotted below.
[Figure: plot of x(n) = cos((π/6)n), which is periodic with fundamental period 12.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 358
Oscillation Rate of Real Sinusoidal Sequences

 Unlike their continuous-time counterparts, real sinusoidal sequences have


an upper bound on the rate at which they can oscillate.
 Since xk (n) = A cos([Ω + 2πk]n + θ) is the same sequence for all integer
k, we consider only 0 ≤ Ω < 2π without loss of generality.
 Consider the set of real sinusoidal sequences of the form

x(n) = A cos(Ωn + θ),

where 0 ≤ Ω < 2π.


 The rate of oscillation of x is least (i.e., x is constant) when Ω = 0.
 The rate of oscillation of x is greatest when Ω = π.
 As Ω increases from 0 to π, the rate of oscillation of x increases.
 As Ω increases from π to 2π, the rate of oscillation of x decreases.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 359
Effect of Increasing Frequency on Oscillation Rate

[Figure: plots of cos(Ωn) for Ω = 0, π/8, π/2 (= 4π/8), π (= 8π/8), 15π/8, and 2π (= 16π/8), showing the oscillation rate increasing as Ω goes from 0 to π and decreasing as Ω goes from π to 2π.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 360
Complex Exponential Sequences

 A complex exponential sequence is a sequence of the form

x(n) = c aⁿ,

where c and a are complex constants.


 Such a sequence can also be equivalently expressed in the form

x(n) = c e^{bn},

where b is a complex constant chosen as b = ln a. (This form is more similar to that presented for CT complex exponentials.)
 A complex exponential can exhibit one of a number of distinct modes of
behavior, depending on the values of the parameters c and a.
 For example, as special cases, complex exponentials include real
exponentials and complex sinusoids.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 361
Real Exponential Sequences

 A real exponential sequence is a special case of a complex exponential

x(n) = c aⁿ,

where c and a are restricted to be real numbers.


 A real exponential can exhibit one of several distinct modes of behavior,
depending on the magnitude and sign of a.
 If |a| > 1, the magnitude of x(n) increases exponentially as n increases
(i.e., a growing exponential).
 If |a| < 1, the magnitude of x(n) decreases exponentially as n increases
(i.e., a decaying exponential).
 If |a| = 1, the magnitude of x(n) is a constant, independent of n.
 If a > 0, x(n) has the same sign for all n.
 If a < 0, x(n) alternates in sign as n increases/decreases.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 362
Real Exponential Sequences (Continued 1)

[Figure: plots of x(n) = c aⁿ with c = 1 for a = 5/4 (growing, a > 0), a = 4/5 (decaying, a > 0), and a = 1 (constant).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 363
Real Exponential Sequences (Continued 2)
[Figure: plots of x(n) = c aⁿ with c = 1 for a = −5/4 (growing magnitude, alternating sign), a = −4/5 (decaying magnitude, alternating sign), and a = −1 (constant magnitude, alternating sign).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 364
Complex Sinusoidal Sequences

 A complex sinusoidal sequence is a special case of a complex exponential


x(n) = c aⁿ, where c and a are complex and |a| = 1 (i.e., a is of the form e^{jΩ}, where Ω is real).
 That is, a complex sinusoidal sequence is a sequence of the form

x(n) = c e^{jΩn},

where c is complex and Ω is real.


 Using Euler’s relation, we can rewrite x(n) as

x(n) = |c| cos(Ωn + arg c) + j|c| sin(Ωn + arg c),
where |c| cos(Ωn + arg c) = Re{x(n)} and |c| sin(Ωn + arg c) = Im{x(n)}.

 Thus, Re{x} and Im{x} are real sinusoids.


 A complex sinusoid is periodic if and only if Ω/(2π) is a rational number, in which case the fundamental period is the smallest integer of the form 2πk/|Ω| where k is a (strictly) positive integer.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 365
Complex Sinusoidal Sequences (Continued)
For x(n) = e^{j(2π/7)n}, the graphs of Re{x} and Im{x} are shown below.
[Figure: plots of Re{e^{j(2π/7)n}} = cos((2π/7)n) and Im{e^{j(2π/7)n}} = sin((2π/7)n), each periodic with fundamental period 7.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 366
Oscillation Rate of Complex Sinusoidal Sequences

 Unlike their continuous-time counterparts, complex sinusoidal sequences


have an upper bound on the rate at which they can oscillate.
 Since xk (n) = ce j(Ω+2πk)n is the same sequence for all integer k, we
consider only 0 ≤ Ω < 2π without loss of generality.
 Consider the set of complex sinusoidal sequences of the form

x(n) = ce jΩn ,

where 0 ≤ Ω < 2π.


 The rate of oscillation of x is least (i.e., x is constant) when Ω = 0.
 The rate of oscillation of x is greatest when Ω = π.
 As Ω increases from 0 to π, the rate of oscillation of x increases.
 As Ω increases from π to 2π, the rate of oscillation of x decreases.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 367
General Complex Exponential Sequences
In the most general case of a complex exponential sequence x(n) = c aⁿ,
c and a are both complex.
 Letting c = |c| e jθ and a = |a| e jΩ where θ and Ω are real, and using
Euler’s relation, we can rewrite x(n) as

x(n) = |c||a|ⁿ cos(Ωn + θ) + j|c||a|ⁿ sin(Ωn + θ),
where |c||a|ⁿ cos(Ωn + θ) = Re{x(n)} and |c||a|ⁿ sin(Ωn + θ) = Im{x(n)}.

 Thus, Re{x} and Im{x} are each the product of a real exponential and
real sinusoid.
 One of several distinct modes of behavior is exhibited by x, depending on
the value of a.
 If |a| = 1, Re{x} and Im{x} are real sinusoids.
 If |a| > 1, Re{x} and Im{x} are each the product of a real sinusoid and
a growing real exponential.
 If |a| < 1, Re{x} and Im{x} are each the product of a real sinusoid and
a decaying real exponential.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 368
General Complex Exponential Sequences (Continued)

 The various modes of behavior for Re{x} and Im{x} are illustrated
below.

[Figure: plots illustrating the three modes: |a| > 1 (growing envelope), |a| < 1 (decaying envelope), and |a| = 1 (constant envelope).]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 369
Relationship Between Complex Exponentials and Real
Sinusoids

 From Euler’s relation, a complex sinusoid can be expressed as the sum of


two real sinusoids as

ce jΩn = c cos(Ωn) + jc sin(Ωn).


 Moreover, a real sinusoid can be expressed as the sum of two complex
sinusoids using the identities
c cos(Ωn + θ) = (c/2)[e^{j(Ωn+θ)} + e^{−j(Ωn+θ)}]  and
c sin(Ωn + θ) = (c/(2j))[e^{j(Ωn+θ)} − e^{−j(Ωn+θ)}].
 Note that, above, we are simply restating results from the (appendix)
material on complex analysis.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 370
Unit-Step Sequence

 The unit-step sequence, denoted u, is defined as


u(n) = 1 for n ≥ 0, and u(n) = 0 otherwise.
 A plot of this sequence is shown below.

u(n)

1
···
n
−3 −2 −1 0 1 2 3

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 371
Unit Rectangular Pulses

 A unit rectangular pulse is a sequence of the form


p(n) = 1 for a ≤ n < b, and p(n) = 0 otherwise,

where a and b are integer constants satisfying a < b.


 Such a sequence can be expressed in terms of the unit-step sequence as

p(n) = u(n − a) − u(n − b).


 The graph of a unit rectangular pulse has the general form shown below.

p(n)
1
···
n
a−3 a−2 a−1 a a+1 a+2 a+3 b−1 b b+1 b+2

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 372
Unit-Impulse Sequence
 The unit-impulse sequence (also known as the delta sequence), denoted
δ, is defined as
δ(n) = 1 for n = 0, and δ(n) = 0 otherwise.
 The first-order difference of u is δ. That is,
δ(n) = u(n) − u(n − 1).
 The running sum of δ is u. That is,
n
u(n) = ∑ δ(k).
k=−∞
 A plot of δ is shown below.
δ(n)

n
−3 −2 −1 0 1 2 3

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 373
Properties of the Unit-Impulse Sequence

 For any sequence x and any integer constant n0 , the following identity
holds:

x(n)δ(n − n0 ) = x(n0 )δ(n − n0 ).


 For any sequence x and any integer constant n0 , the following identity
holds:

∑_{n=−∞}^{∞} x(n)δ(n − n0) = x(n0).

 Trivially, the sequence δ is also even.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 374
Representing Rectangular Pulses (Using Unit-Step Sequences)

 For integer constants a and b where a < b, consider a sequence x of the


form
x(n) = { 1,  a ≤ n < b
       { 0,  otherwise

(i.e., x is a rectangular pulse of height one that is nonzero from a to b − 1


inclusive).
 The sequence x can be equivalently written as

x(n) = u(n − a) − u(n − b)

(i.e., the difference of two time-shifted unit-step sequences).


 Unlike the original expression for x, this latter expression for x does not
involve multiple cases.
 In effect, by using unit-step sequences, we have collapsed a formula
involving multiple cases into a single expression.
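 A minimal Python sketch (an added illustration, assuming NumPy; the pulse limits a and b are arbitrary example values) confirming that the single-expression form u(n − a) − u(n − b) reproduces the piecewise definition of a unit rectangular pulse:

    import numpy as np

    def u(n):
        """Unit-step sequence, evaluated elementwise on an integer array n."""
        return np.where(n >= 0, 1, 0)

    # Hypothetical pulse limits (a < b), chosen only for illustration.
    a, b = -2, 3
    n = np.arange(-6, 8)

    p_piecewise = np.where((n >= a) & (n < b), 1, 0)   # multiple-case definition
    p_steps = u(n - a) - u(n - b)                      # single-expression form

    assert np.array_equal(p_piecewise, p_steps)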
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 375
Representing Sequences Using Unit-Step Sequences

 The idea from the previous slide can be extended to handle any sequence
that is defined in a piecewise manner (i.e., via an expression involving
multiple cases).
 That is, by using unit-step sequences, we can always collapse a formula
involving multiple cases into a single expression.
 Often, simplifying a formula in this way can be quite beneficial.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 376
Section 8.4

Discrete-Time (DT) Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 377
DT Systems
 A system with input x and output y can be described by the equation

y = Hx,

where H denotes an operator (i.e., transformation).


 Note that the operator H maps a sequence to a sequence (not a number
to a number).
 Alternatively, we can express the above relationship using the notation
H
x −→ y.
 If clear from the context, the operator H is often omitted, yielding the
abbreviated notation

x → y.
 Note that the symbols “→” and “=” have very different meanings.
 The symbol “→” should be read as “produces” (not as “equals”).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 378
Block Diagram Representations

 Often, a system defined by the operator H and having the input x and
output y is represented in the form of a block diagram as shown below.

Input Output
x System y
H

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 379
Interconnection of Systems
 Two basic ways in which systems can be interconnected are shown below.
[Diagrams: a series (cascade) connection, in which x passes through H1 and then H2 to produce y; and a parallel connection, in which x is applied to both H1 and H2 and their outputs are summed to produce y.]
 A series (or cascade) connection ties the output of one system to the input
of the other.
 The overall series-connected system is described by the equation

y = H2 H1 x.
 A parallel connection ties the inputs of both systems together and sums
their outputs.
 The overall parallel-connected system is described by the equation

y = H1 x + H2 x.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 380
Section 8.5

Properties of (DT) Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 381
Memory

 A system H is said to be memoryless if, for every integer constant n0,
Hx(n0) does not depend on x(n) for any n ≠ n0.
 In other words, a memoryless system is such that the value of its output at
any given point in time can depend on the value of its input at only the
same point in time.
 A system that is not memoryless is said to have memory.
 Although simple, a memoryless system is not very flexible, since its
current output value cannot rely on past or future values of the input.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 382
Memory (Continued)

[Illustration: consider the calculation of the output Hx at n0. If the system H is memoryless, the output Hx at n0 can depend on the input x only at n0 (a single point on the time axis from −∞ to ∞).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 383
Causality

 A system H is said to be causal if, for every integer constant n0, Hx(n0)
does not depend on x(n) for any n > n0.
 In other words, a causal system is such that the value of its output at any
given point in time can depend on the value of its input at only the same or
earlier points in time (i.e., not later points in time).
 If the independent variable n represents time, a system must be causal in
order to be physically realizable.
 Noncausal systems can sometimes be useful in practice, however, since
the independent variable need not always represent time (e.g., the
independent variable might represent position).
 A memoryless system is always causal, although the converse is not
necessarily true.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 384
Causality (Continued)

[Illustration: consider the calculation of the output Hx at n0. If the system H is causal, the output Hx at n0 can depend on the input x only at points n ≤ n0 (the portion of the time axis from −∞ up to n0).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 385
Invertibility
 The inverse of a system H is another system H−1 such that, for every
sequence x,
H−1 Hx = x
(i.e., the system formed by the cascade interconnection of H followed by
H−1 is a system whose input and output are equal).
 A system is said to be invertible if it has a corresponding inverse system
(i.e., its inverse exists).
 Equivalently, a system is invertible if its input x can always be uniquely
determined from its output y.
 An invertible system will always produce distinct outputs from any two
distinct inputs.
 To show that a system is invertible, we simply find the inverse system.
 To show that a system is not invertible, we find two distinct inputs that
result in identical outputs.
 In practical terms, invertible systems are “nice” in the sense that their
effects can be undone.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 386
Invertibility (Continued)

 A system H−1 being the inverse of H means that the following two
systems are equivalent (i.e., H−1 H is an identity):

[Diagrams: System 1 applies H followed by H^{−1}, so y = H^{−1}Hx; System 2 is the identity system, so y = x.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 387
Bounded-Input Bounded-Output (BIBO) Stability

 A system H is BIBO stable if, for every bounded sequence x, Hx is
bounded (i.e., |x(n)| ≤ A < ∞ for all n implies that |Hx(n)| ≤ B < ∞ for all n,
for some finite constants A and B).
 In other words, a BIBO stable system is such that it guarantees to always
produce a bounded output as long as its input is bounded.
 To show that a system is BIBO stable, we must show that every bounded
input leads to a bounded output.
 To show that a system is not BIBO stable, we need only find a single
bounded input that leads to an unbounded output.
 In practical terms, a BIBO stable system is well behaved in the sense that,
as long as the system input remains finite for all time, the output will also
remain finite for all time.
 Usually, a system that is not BIBO stable will have serious safety issues.
 For example, a portable music player with a battery input of 3.7 volts and
headset output of ∞ volts would result in one vaporized human (and likely
a big lawsuit as well).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 388
Time Invariance (TI)

 A system H is said to be time invariant (TI) (or shift invariant (SI)) if,
for every sequence x and every integer n0 , the following condition holds:

Hx(n − n0) = Hx′(n) for all n, where x′(n) = x(n − n0)

(i.e., H commutes with time shifts).


 In other words, a system is time invariant if a time shift (i.e., advance or
delay) in the input always results only in an identical time shift in the
output.
 A system that is not time invariant is said to be time varying.
 In simple terms, a time invariant system is a system whose behavior does
not change with respect to time.
 Practically speaking, compared to time-varying systems, time-invariant
systems are much easier to design and analyze, since their behavior
does not change with respect to time.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 389
Time Invariance (Continued)

 Let Sn0 denote an operator that applies a time shift of n0 to a sequence


(i.e., Sn0 x(n) = x(n − n0 )).
 A system H is time invariant if and only if the following two systems are
equivalent (i.e., H commutes with Sn0 ):

[Diagrams: System 1 applies S_{n0} and then H, so y = H S_{n0} x (i.e., y(n) = Hx′(n), where x′(n) = S_{n0} x(n) = x(n − n0)); System 2 applies H and then S_{n0}, so y = S_{n0} H x (i.e., y(n) = Hx(n − n0)).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 390
Additivity, Homogeneity, and Linearity
 A system H is said to be additive if, for all sequences x1 and x2 , the
following condition holds:
H(x1 + x2 ) = Hx1 + Hx2
(i.e., H commutes with sums).
 A system H is said to be homogeneous if, for every sequence x and every
complex constant a, the following condition holds:
H(ax) = aHx
(i.e., H commutes with multiplication by a constant).
 A system that is both additive and homogeneous is said to be linear.
 In other words, a system H is linear, if for all sequences x1 and x2 and all
complex constants a1 and a2 , the following condition holds:
H(a1 x1 + a2 x2 ) = a1 Hx1 + a2 Hx2
(i.e., H commutes with linear combinations).
 The linearity property is also referred to as the superposition property.
 Practically speaking, linear systems are much easier to design and
analyze than nonlinear systems.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 391
Additivity, Homogeneity, and Linearity (Continued 1)

 The system H is additive if and only if the following two systems are
equivalent (i.e., H commutes with addition):

[Diagrams: System 1 first sums x1 and x2 and then applies H, so y = H(x1 + x2); System 2 applies H to x1 and to x2 separately and then sums, so y = Hx1 + Hx2.]

 The system H is homogeneous if and only if the following two systems


are equivalent (i.e., H commutes with scalar multiplication):

[Diagrams: System 1 scales x by a and then applies H, so y = H(ax); System 2 applies H and then scales by a, so y = aHx.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 392
Additivity, Homogeneity, and Linearity (Continued 2)

 The system H is linear if and only if the following two systems are
equivalent (i.e., H commutes with linear combinations):

[Diagrams: System 1 forms the linear combination a1 x1 + a2 x2 and then applies H, so y = H(a1 x1 + a2 x2); System 2 applies H to x1 and x2 separately and then forms the linear combination, so y = a1 Hx1 + a2 Hx2.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 393
Eigensequences of Systems

 A sequence x is said to be an eigensequence of the system H with the


eigenvalue λ if

Hx = λx,

where λ is a complex constant.


 In other words, the system H acts as an ideal amplifier for each of its
eigensequences x, where the amplifier gain is given by the corresponding
eigenvalue λ.
 Different systems have different eigensequences.
 Many of the mathematical tools developed for the study of DT systems
have eigensequences as their basis.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 394
Part 9

Discrete-Time Linear Time-Invariant (LTI) Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 395
Why Linear Time-Invariant (LTI) Systems?

 In engineering, linear time-invariant (LTI) systems play a very important


role.
 Very powerful mathematical tools have been developed for analyzing LTI
systems.
 LTI systems are much easier to analyze than systems that are not LTI.
 In practice, systems that are not LTI can be well approximated using LTI
models.
 So, even when dealing with systems that are not LTI, LTI systems still play
an important role.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 396
Section 9.1

Convolution

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 397
DT Convolution

 The (DT) convolution of the sequences x and h, denoted x ∗ h, is defined


as the sequence

x ∗ h(n) = ∑_{k=−∞}^{∞} x(k) h(n − k).

 The convolution x ∗ h evaluated at the point n is simply a weighted sum of


elements of x, where the weighting is given by h time reversed and shifted
by n.
 Herein, the asterisk symbol (i.e., “∗”) will always be used to denote
convolution, not multiplication.
 As we shall see, convolution is used extensively in the theory of (DT)
systems.
 In particular, convolution has a special significance in the context of (DT)
LTI systems.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 398
Practical Convolution Computation

 To compute the convolution



x ∗ h(n) = ∑_{k=−∞}^{∞} x(k) h(n − k),

we proceed as follows:
1 Plot x(k) and h(n − k) as a function of k .

2 Initially, consider an arbitrarily large negative value for n. This will result in
h(n − k) being shifted very far to the left on the time axis.
3 Write the mathematical expression for x ∗ h(n).
4 Increase n gradually until the expression for x ∗ h(n) changes form. Record
the interval over which the expression for x ∗ h(n) was valid.
5 Repeat steps 3 and 4 until n is an arbitrarily large positive value. This
corresponds to h(n − k) being shifted very far to the right on the time axis.
6 The results for the various intervals can be combined in order to obtain an
expression for x ∗ h(n) for all n.
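 For finite-length sequences, the convolution sum can also be evaluated numerically. The following Python sketch (an added illustration, assuming NumPy; the example sequences are arbitrary) computes x ∗ h(n) directly from the defining sum and checks the result against NumPy's built-in convolution routine.

    import numpy as np

    # Hypothetical finite-length sequences, assumed zero outside the samples given.
    x = np.array([1.0, 2.0, 3.0])   # x(0), x(1), x(2)
    h = np.array([1.0, 1.0])        # h(0), h(1)

    # Direct evaluation of x*h(n) = sum_k x(k) h(n - k) over the nonzero range of n.
    y_direct = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y_direct)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y_direct[n] += x[k] * h[n - k]

    # NumPy's built-in (linear) convolution gives the same result.
    assert np.allclose(y_direct, np.convolve(x, h))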

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 399
Properties of Convolution

 The convolution operation is commutative. That is, for any two sequences
x and h,

x ∗ h = h ∗ x.
 The convolution operation is associative. That is, for any sequences x, h1 ,
and h2 ,

(x ∗ h1 ) ∗ h2 = x ∗ (h1 ∗ h2 ).
 The convolution operation is distributive with respect to addition. That is,
for any sequences x, h1 , and h2 ,

x ∗ (h1 + h2 ) = x ∗ h1 + x ∗ h2 .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 400
Representation of Sequences Using Impulses

 For any sequence x,



x(n) = ∑_{k=−∞}^{∞} x(k) δ(n − k) = x ∗ δ(n).

 Thus, any sequence x can be written in terms of an expression involving δ.


 Moreover, δ is the convolutional identity. That is, for any sequence x,

x ∗ δ = x.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 401
Circular Convolution
 The convolution of two periodic sequences is usually not well defined.
 This motivates an alternative notion of convolution for periodic sequences
known as circular convolution.
 The circular convolution (also known as the DT periodic convolution) of
the N -periodic sequences x and h, denoted x ~ h, is defined as
x ~ h(n) = ∑_{k=⟨N⟩} x(k) h(n − k) = ∑_{k=0}^{N−1} x(k) h(mod(n − k, N)),

where mod(a, b) is the remainder after division when a is divided by b.


 The circular convolution and (linear) convolution of the N -periodic
sequences x and h are related as follows:

x ~ h(n) = x0 ∗ h(n), where x(n) = ∑_{k=−∞}^{∞} x0(n − kN)

(i.e., x0(n) equals x(n) over a single period of x and is zero elsewhere).
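 The following Python sketch (an added illustration with arbitrary example sequences, assuming NumPy) evaluates the circular convolution directly from the definition above using mod indexing, and cross-checks it against the time-aliased (wrapped modulo N) linear convolution of one period, which is a standard equivalent computation.

    import numpy as np

    def circular_convolve(x, h):
        """Circular convolution of two N-periodic sequences, each given by one period."""
        N = len(x)
        y = np.zeros(N, dtype=complex)
        for n in range(N):
            for k in range(N):
                y[n] += x[k] * h[(n - k) % N]   # h(mod(n - k, N))
        return y

    # Hypothetical 4-periodic sequences (one period each), for illustration only.
    x = np.array([1.0, 2.0, 0.0, -1.0])
    h = np.array([1.0, 0.0, 1.0, 0.0])

    # Cross-check: wrap (alias) the linear convolution of one period of x with one
    # period of h modulo N; the result equals the circular convolution.
    y = circular_convolve(x, h)
    wrapped = np.zeros(len(x), dtype=complex)
    for i, v in enumerate(np.convolve(x, h)):
        wrapped[i % len(x)] += v
    assert np.allclose(y, wrapped)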
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 402
Section 9.2

Convolution and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 403
Impulse Response

 The response h of a system H to the input δ is called the impulse


response of the system (i.e., h = Hδ).
 For any LTI system with input x, output y, and impulse response h, the
following relationship holds:

y = x ∗ h.
 In other words, a LTI system simply computes a convolution.
 Furthermore, a LTI system is completely characterized by its impulse
response.
 That is, if the impulse response of a LTI system is known, we can
determine the response of the system to any input.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 404
Step Response

 The response s of a system H to the input u is called the step response of


the system (i.e., s = Hu).
 The impulse response h and step response s of a system are related as

h(n) = s(n) − s(n − 1).


 Therefore, the impulse response of a system can be determined from its
step response by (first-order) differencing.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 405
Block Diagram of LTI Systems

 Often, it is convenient to represent a (DT) LTI system in block diagram


form.
 Since such systems are completely characterized by their impulse
response, we often label a system with its impulse response.
 That is, we represent a system with input x, output y, and impulse
response h, as shown below.

[Block diagram: input x applied to a system labeled with its impulse response h, producing output y.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 406
Interconnection of LTI Systems

 The series interconnection of the LTI systems with impulse responses h1


and h2 is the LTI system with impulse response h = h1 ∗ h2 . That is, we
have the equivalences shown below.

[Diagrams: a cascade of h1 and h2 is equivalent to a single system with impulse response h1 ∗ h2; moreover, the cascade of h1 and h2 (in that order) is equivalent to the cascade of h2 and h1.]

 The parallel interconnection of the LTI systems with impulse responses


h1 and h2 is a LTI system with the impulse response h = h1 + h2 . That is,
we have the equivalence shown below.

[Diagram: the parallel combination of h1 and h2 (with outputs summed) is equivalent to a single system with impulse response h1 + h2.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 407
Section 9.3

Properties of LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 408
Memory

 A LTI system with impulse response h is memoryless if and only if

h(n) = 0 for all n ≠ 0.


 That is, a LTI system is memoryless if and only if its impulse response h is
of the form

h(n) = Kδ(n),

where K is a complex constant.


 Consequently, every memoryless LTI system with input x and output y is
characterized by an equation of the form

y = x ∗ (Kδ) = Kx

(i.e., the system is an ideal amplifier).


 For a LTI system, the memoryless constraint is extremely restrictive (as
every memoryless LTI system is an ideal amplifier).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 409
Causality

 A LTI system with impulse response h is causal if and only if

h(n) = 0 for all n < 0

(i.e., h is a causal sequence).


 It is due to the above relationship that we call a sequence x, satisfying

x(n) = 0 for all n < 0,

a causal sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 410
Invertibility

 The inverse of a LTI system, if such a system exists, is a LTI system.


 Let h and hinv denote the impulse responses of a LTI system and its (LTI)
inverse, respectively. Then,

h ∗ hinv = δ.
 Consequently, a LTI system with impulse response h is invertible if and
only if there exists a sequence hinv such that

h ∗ hinv = δ.
 Except in simple cases, the above condition is often quite difficult to test.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 411
BIBO Stability

 A LTI system with impulse response h is BIBO stable if and only if



∑_{n=−∞}^{∞} |h(n)| < ∞

(i.e., h is absolutely summable).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 412
Eigensequences of LTI Systems
 As it turns out, every complex exponential is an eigensequence of all LTI
systems.
 For a LTI system H with impulse response h,

H{z^n}(n) = H(z) z^n,

where z is a complex constant and

H(z) = ∑_{n=−∞}^{∞} h(n) z^{−n}.

 That is, z^n is an eigensequence of a LTI system and H(z) is the
corresponding eigenvalue.
 We refer to H as the system function (or transfer function) of the
system H.
 From above, we can see that the response of a LTI system to a complex
exponential is the same complex exponential multiplied by the complex
factor H(z).
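 As an added numerical check (not part of the original development), the Python sketch below uses a hypothetical finite-length impulse response h to evaluate the system function H(z) and verifies the eigensequence relation y(n) = H(z) z^n for an input z^n, away from the start-up samples caused by truncating the input to n ≥ 0.

    import numpy as np

    # Hypothetical FIR impulse response h(n), assumed zero outside n = 0, 1, 2.
    h = np.array([1.0, -0.5, 0.25])

    def H(z):
        """System function H(z) = sum_n h(n) z^{-n} for the finite-length h above."""
        return sum(h[n] * z ** (-n) for n in range(len(h)))

    z0 = 0.9 * np.exp(1j * np.pi / 5)   # an arbitrary complex constant
    n = np.arange(0, 30)
    x = z0 ** n                          # input x(n) = z0^n (truncated to n >= 0)

    # Output by convolution; the eigensequence relation holds once all taps of h
    # overlap the nonzero part of the input (i.e., for n >= len(h) - 1).
    y = np.convolve(x, h)[: len(n)]
    assert np.allclose(y[len(h) - 1 :], H(z0) * x[len(h) - 1 :])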
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 413
Representation of Sequences Using Eigensequences

 Consider a LTI system with input x, output y, and system function H .


 Suppose that the input x can be expressed as the linear combination of
complex exponentials
x(n) = ∑_k a_k z_k^n,
where the ak and zk are complex constants.
 Using the fact that complex exponentials are eigensequences of LTI
systems, we can conclude

y(n) = ∑_k a_k H(z_k) z_k^n.

 Thus, if an input to a LTI system can be expressed as a linear combination


of complex exponentials, the output can also be expressed as linear
combination of the same complex exponentials.
 The above formula can be used to determine the output of a LTI system
from its input in a way that does not require convolution.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 414
Part 10

Discrete-Time Fourier Series (DTFS)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 415
Introduction

 The Fourier series is a representation for periodic sequences.


 With a Fourier series, a sequence is represented as a linear combination
of complex sinusoids.
 The use of complex sinusoids is desirable due to their numerous attractive
properties.
 Perhaps, most importantly, complex sinusoids are eigensequences of (DT)
LTI systems.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 416
Section 10.1

Fourier Series

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 417
Harmonically-Related Complex Sinusoids

 A set of periodic complex sinusoids is said to be harmonically related if


there exists some constant 2π/N such that the fundamental frequency of
each complex sinusoid is an integer multiple of 2π/N.
 Consider the set of harmonically-related complex sinusoids given by

φk (n) = e j(2π/N)kn for all integer k.


 In the above set {φk }, only N elements are distinct, since

φk = φk+N for all integer k.


 Since the fundamental frequency of each of the harmonically-related
complex sinusoids is an integer multiple of 2π/N, a linear combination of
these complex sinusoids must be N -periodic.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 418
DT Fourier Series (DTFS)

 An N -periodic complex-valued sequence x can be represented as a linear


combination of harmonically-related complex sinusoids as
x(n) = ∑ ak e j(2π/N)kn ,
k=hNi

where ∑_{k=⟨N⟩} denotes summation over any N consecutive integers (e.g.,


[0 . . N − 1]). (The summation can be taken over any N consecutive
integers, due to the N -periodic nature of x and e j(2π/N)kn .)
 The above representation of x is known as the (DT) Fourier series and
the ak are called Fourier series coefficients.
 The above formula for x is often called the Fourier series synthesis
equation.
 To denote that the sequence x has the Fourier series coefficient sequence
a, we write
DTFS
x(n) ←→ ak .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 419
DT Fourier Series (DTFS) (Continued)

 A periodic sequence x with fundamental period N has the Fourier series


coefficient sequence a given by

a_k = (1/N) ∑_{n=⟨N⟩} x(n) e^{−j(2π/N)kn}.

(The summation can be taken over any N consecutive integers due to the
N -periodic nature of x and e− j(2π/N)kn .)
 The above equation for ak is often referred to as the Fourier series
analysis equation.
 Due to the N -periodic nature of x and e− j(2π/N)kn , the sequence a is also
N -periodic.
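 The analysis and synthesis equations can be evaluated numerically. The Python sketch below (an added illustration with an arbitrary example period, assuming NumPy) computes the coefficients a_k directly from the analysis equation, reconstructs x via the synthesis equation, and notes the relationship a_k = X(k)/N to NumPy's FFT.

    import numpy as np

    N = 8
    n = np.arange(N)
    # A hypothetical N-periodic sequence, specified by one period (illustration only).
    x = np.cos(2 * np.pi * n / N) + 0.5 * np.sin(4 * np.pi * n / N)

    # Analysis equation: a_k = (1/N) sum_n x(n) e^{-j(2 pi/N) k n}.
    k = np.arange(N)
    a = np.array([np.sum(x * np.exp(-1j * 2 * np.pi * kk * n / N)) / N for kk in k])

    # Synthesis equation: x(n) = sum_{k=<N>} a_k e^{j(2 pi/N) k n}.
    x_rec = np.array([np.sum(a * np.exp(1j * 2 * np.pi * k * nn / N)) for nn in n])
    assert np.allclose(x, x_rec)

    # Relationship to the FFT (i.e., the DFT, considered shortly): a_k = X(k)/N.
    assert np.allclose(a, np.fft.fft(x) / N)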

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 420
Convergence of Fourier Series

 Since the analysis and synthesis equations for (DT) Fourier series involve
only finite sums (as opposed to infinite series), convergence is not a
significant issue of concern.
 If an N -periodic sequence is bounded (i.e., is finite in value), its Fourier
series coefficient sequence will exist and be bounded and the Fourier
series analysis and synthesis equations must converge.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 421
Section 10.2

Properties of Fourier Series

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 422
Properties of (DT) Fourier Series
DTFS DTFS
x(n) ←→ ak and y(n) ←→ bk

Property                  Time Domain              Fourier Domain
Linearity                 αx(n) + βy(n)            αa_k + βb_k
Translation               x(n − n0)                e^{−jk(2π/N)n0} a_k
Modulation                e^{j(2π/N)k0 n} x(n)     a_{k−k0}
Reflection                x(−n)                    a_{−k}
Conjugation               x*(n)                    a*_{−k}
Duality                   a_n                      (1/N) x(−k)
Periodic Convolution      x ~ y(n)                 N a_k b_k
Multiplication            x(n) y(n)                a ~ b(k)

Property
Parseval’s Relation       (1/N) ∑_{n=⟨N⟩} |x(n)|² = ∑_{k=⟨N⟩} |a_k|²
Even Symmetry             x is even ⇔ a is even
Odd Symmetry              x is odd ⇔ a is odd
Real / Conjugate Symmetry x is real ⇔ a is conjugate symmetric

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 423
Linearity

DTFS DTFS
 Let x and y be N -periodic sequences. If x(n) ←→ ak and y(n) ←→ bk ,
then

αx(n) + βy(n) ←→ αak + βbk ,


DTFS

where α and β are complex constants.


 That is, a linear combination of sequences produces the same linear
combination of their Fourier series coefficients.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 424
Translation (Time Shifting)

DTFS
 Let x denote a periodic sequence with period N . If x(n) ←→ ck , then

x(n − n0 ) ←→ e− jk(2π/N)n0 ck ,
DTFS

where n0 is an integer constant.


 In other words, time shifting a periodic sequence changes the argument
(but not magnitude) of its Fourier series coefficients.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 425
Modulation (Frequency Shifting)

DTFS
 Let x denote a periodic sequence with period N . If x(n) ←→ ck , then

e j(2π/N)k0 n x(n) ←→ ck−k0 ,


DTFS

where k0 is an integer constant.


 That is, multiplying a sequence by a complex sinusoid whose frequency is
an integer multiple of 2π/N results in a translation of the corresponding
Fourier series coefficient sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 426
Reflection (Time Reversal)

DTFS
 Let x denote a periodic sequence with period N . If x(n) ←→ ck , then
DTFS
x(−n) ←→ c−k .
 That is, time reversing a sequence results in a time reversal of the
corresponding Fourier series coefficient sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 427
Conjugation

DTFS
 Let x denote a periodic sequence with period N . If x(n) ←→ ck , then

x∗ (n) ←→ c∗−k .
DTFS

 In other words, conjugating a sequence has the effect of time reversing


and conjugating the corresponding Fourier series coefficient sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 428
Duality

DTFS
 Let x denote a periodic sequence with period N . If x(n) ←→ a(k), then

a(n) ←→ (1/N) x(−k).
DTFS

 This is known as the duality property of Fourier series.


 This property follows from the high degree of symmetry in the synthesis
and analysis Fourier-series equations, which are respectively given by

x(m) = ∑_{ℓ=⟨N⟩} a(ℓ) e^{j(2π/N)ℓm}   and   a(m) = (1/N) ∑_{ℓ=⟨N⟩} x(ℓ) e^{−j(2π/N)mℓ}.

 That is, the analysis and synthesis equations are identical except for a
factor of N and different sign in the parameter for the exponential
function.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 429
Periodic Convolution

DTFS DTFS
 Let x and y be N -periodic sequences. If x(n) ←→ ak and y(n) ←→ bk ,
then

x ~ y(n) ←→ Nak bk .
DTFS

 That is, periodic convolution of two sequences multiplies their


corresponding Fourier series coefficient sequences (up to a scale factor).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 430
Multiplication

DTFS DTFS
 Let x and y be N -periodic sequences. If x(n) ←→ ak and y(n) ←→ bk ,
then

x(n)y(n) ←→ a ~ b(k).
DTFS

 That is, multiplying two sequences results in a circular convolution of their


corresponding Fourier series coefficient sequences.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 431
Parseval’s Relation

 A sequence x and its Fourier series coefficient sequence a satisfy the


following relationship:
(1/N) ∑_{n=⟨N⟩} |x(n)|² = ∑_{k=⟨N⟩} |a_k|².

 The above relationship is simply stating that the amount of energy in a


single period of x and the amount of energy in a single period of a are
equal up to a scale factor.
 In other words, the transformation between a sequence and its Fourier
series coefficient sequence preserves energy (up to a scale factor).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 432
Even/Odd Symmetry

 For an N -periodic sequence x with Fourier-series coefficient sequence a,


the following properties hold:

x is even ⇔ a is even; and


x is odd ⇔ a is odd.
 In other words, the even/odd symmetry properties of x and a always
match.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 433
Real Sequences

 A sequence x is real if and only if its Fourier series coefficient sequence a


satisfies

ak = a∗−k for all k

(i.e., a is conjugate symmetric).


 From properties of complex numbers, one can show that ak = a∗−k is
equivalent to

|ak | = |a−k | and arg ak = − arg a−k

(i.e., |ak | is even and arg ak is odd).


 Note that x being real does not necessarily imply that a is real.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 434
Trigonometric Form of a Fourier Series

 Consider the N -periodic sequence x with Fourier series coefficient


sequence a.
 If x is real, then its Fourier series can be rewritten in trigonometric form as
shown below.
 The trigonometric form of a Fourier series has the appearance

x(n) = { α_0 + ∑_{k=1}^{N/2−1} [α_k cos(2πkn/N) + β_k sin(2πkn/N)] + α_{N/2} cos(πn)   N even
       { α_0 + ∑_{k=1}^{(N−1)/2} [α_k cos(2πkn/N) + β_k sin(2πkn/N)]                   N odd,

where α_0 = a_0, α_{N/2} = a_{N/2}, α_k = 2 Re a_k, and β_k = −2 Im a_k.


 Note that the above trigonometric form contains only real quantities.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 435
Other Properties of Fourier Series

 For an N -periodic sequence x with Fourier-series coefficient sequence a,


the following properties hold:
1 a_0 is the average value of x over a single period;
2 x is real and even ⇔ a is real and even; and

3 x is real and odd ⇔ a is purely imaginary and odd.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 436
Section 10.3

Discrete Fourier Transform (DFT)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 437
Prelude to the Discrete Fourier Transform (DFT)
 Letting a′_k = N a_k, we can rewrite the Fourier series synthesis and analysis
equations, respectively, as

x(n) = (1/N) ∑_{k=0}^{N−1} a′_k e^{j(2π/N)kn}   and   a′_k = ∑_{n=0}^{N−1} x(n) e^{−j(2π/N)kn}.

 Since x and a′ are both N -periodic, each of these sequences is
completely characterized by its N samples over a single period.
 If we only consider the behavior of x and a′ over a single period, this leads
to the equations

x(n) = (1/N) ∑_{k=0}^{N−1} a′_k e^{j(2π/N)kn} for n ∈ [0 . . N − 1]   and

a′_k = ∑_{n=0}^{N−1} x(n) e^{−j(2π/N)kn} for k ∈ [0 . . N − 1].
 As it turns out, the above two equations define what is known as the
discrete Fourier transform (DFT).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 438
Discrete Fourier Transform (DFT)

 The discrete Fourier transform (DFT) X of the N -element sequence x


is defined as
X(k) = ∑_{n=0}^{N−1} x(n) e^{−j(2π/N)kn} for k ∈ [0 . . N − 1].

 The preceding equation is known as the DFT analysis equation.


 The inverse DFT x of the N -element sequence X is given by
x(n) = (1/N) ∑_{k=0}^{N−1} X(k) e^{j(2π/N)kn} for n ∈ [0 . . N − 1].

 The preceding equation is known as the DFT synthesis equation.


 The DFT maps a finite-length sequence of N samples to another
finite-length sequence of N samples.
 The DFT will be considered in more detail later.
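 The following Python sketch (an added illustration with an arbitrary example sequence, assuming NumPy) evaluates the DFT analysis and synthesis equations directly and confirms that they agree with NumPy's FFT routines, which compute the same transform pair efficiently.

    import numpy as np

    # Hypothetical N-element sequence, for illustration only.
    x = np.array([1.0, 2.0, 0.0, -1.0, 0.5])
    N = len(x)
    n = np.arange(N)

    # DFT analysis equation, evaluated directly.
    X = np.array([np.sum(x * np.exp(-1j * 2 * np.pi * k * n / N)) for k in range(N)])

    # DFT synthesis equation (inverse DFT), evaluated directly.
    x_rec = np.array(
        [np.sum(X * np.exp(1j * 2 * np.pi * np.arange(N) * m / N)) / N for m in range(N)]
    )

    # NumPy's FFT routines compute the same transform pair.
    assert np.allclose(X, np.fft.fft(x))
    assert np.allclose(x_rec, x)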

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 439
Properties of Discrete Fourier Transform (DFT)

Property                  Time Domain              Fourier Domain
Linearity                 a1 x1(n) + a2 x2(n)      a1 X1(k) + a2 X2(k)
Translation               x(n − n0)                e^{−jk(2π/N)n0} X(k)
Modulation                e^{j(2π/N)k0 n} x(n)     X(k − k0)
Reflection                x(−n)                    X(−k)
Conjugation               x*(n)                    X*(−k)
Duality                   X(n)                     N x(−k)
Periodic Convolution      x1 ~ x2(n)               X1(k) X2(k)
Multiplication            x1(n) x2(n)              (1/N) X1 ~ X2(k)

Property
Parseval’s Relation       ∑_{n=0}^{N−1} |x(n)|² = (1/N) ∑_{k=0}^{N−1} |X(k)|²
Even Symmetry             x is even ⇔ X is even
Odd Symmetry              x is odd ⇔ X is odd
Real / Conjugate Symmetry x is real ⇔ X is conjugate symmetric

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 440
Section 10.4

Fourier Series and Frequency Spectra

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 441
A New Perspective on Sequences: The Frequency Domain

 The Fourier series provides us with an entirely new way to view


sequences.
 Instead of viewing a sequence as having information distributed with
respect to time (i.e., a function whose domain is time), we view a
sequence as having information distributed with respect to frequency (i.e.,
a function whose domain is frequency).
 This so called frequency-domain perspective is of fundamental
importance in engineering.
 Many engineering problems can be solved much more easily using the
frequency domain than the time domain.
 The Fourier series coefficients of a sequence x provide a means to
quantify how much information x has at different frequencies.
 The distribution of information in a sequence over different frequencies is
referred to as the frequency spectrum of the sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 442
Fourier Series and Frequency Spectra
 To gain further insight into the role played by the Fourier series
coefficients ak in the context of the frequency spectrum of the N -periodic
sequence x, it is helpful to write the Fourier series with the ak expressed in
polar form as
x(n) = ∑_{k=0}^{N−1} a_k e^{j(2π/N)kn} = ∑_{k=0}^{N−1} |a_k| e^{j([2π/N]kn + arg a_k)}.

 Clearly, the kth term in the summation corresponds to a complex sinusoid


with fundamental frequency (2π/N)k that has been amplitude scaled by a
factor of |a_k| and time-shifted by an amount that depends on arg a_k.
 For a given k, the larger |a_k| is, the larger is the amplitude of its
corresponding complex sinusoid e^{j(2π/N)kn}, and therefore the larger the
contribution the kth term (which is associated with frequency (2π/N)k) will
make to the overall summation.
 In this way, we can use |a_k| as a measure of how much information a
sequence x has at the frequency (2π/N)k.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 443
Fourier Series and Frequency Spectra (Continued 1)

 The Fourier series coefficients ak of the sequence x are referred to as the


frequency spectrum of x.
 The magnitudes |ak | of the Fourier series coefficients ak are referred to as
the magnitude spectrum of x.
 The arguments arg ak of the Fourier series coefficients ak are referred to
as the phase spectrum of x.
 The frequency spectrum ak of an N -periodic sequence is N -periodic in the
coefficient index k and 2π-periodic in the frequency Ω = (2π/N)k.
 The range of frequencies between −π and π is referred to as the
baseband.
 Often, the spectrum of a sequence is plotted against frequency Ω = (2π/N)k
(over the single 2π period of the baseband) instead of the Fourier series
coefficient index k.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 444
Fourier Series and Frequency Spectra (Continued 2)

 Since the Fourier series only has frequency components at integer


multiples of the fundamental frequency, the frequency spectrum is
discrete in the independent variable (i.e., frequency).
 Due to the general appearance of frequency-spectrum plot (i.e., a number
of vertical lines at various frequencies), we refer to such spectra as line
spectra.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 445
Frequency Spectra of Real Sequences

 Let x denote an N -periodic sequence with the corresponding


Fourier-series coefficient sequence c.
 As we saw earlier:
x is real ⇔ c is conjugate symmetric.
 Furthermore, if x is real, the following assertions hold for ck for
k ∈ [0 . . N − 1]:
1 c_k = c*_{N−k} for k ∈ [1 . . N − 1];
2 of the N coefficients c_k for k ∈ [0 . . N − 1], only ⌊N/2⌋ + 1 coefficients are
independent; for example, c_k for k ∈ [0 . . ⌊N/2⌋] completely determines c_k for
all k ∈ [0 . . N − 1];
3 c_0 is real; and
4 if N is even, c_{N/2} is real.
 Note that approximately half of the coefficients in a single period of c are
redundant if x is real.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 446
Section 10.5

Fourier Series and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 447
Frequency Response
 Recall that a LTI system H with impulse response h is such that
H{z^n}(n) = H_Z(z) z^n, where H_Z(z) = ∑_{n=−∞}^{∞} h(n) z^{−n}. (That is, complex
exponentials are eigensequences of LTI systems.)
 Since a complex sinusoid is a special case of a complex exponential, we
can reuse the above result for the special case of complex sinusoids.
 For a LTI system H with impulse response h,

H{e^{jΩn}}(n) = H(Ω) e^{jΩn},

where Ω is real and

H(Ω) = ∑_{n=−∞}^{∞} h(n) e^{−jΩn}.

 That is, e jΩn is an eigensequence of a LTI system and H(Ω) is the


corresponding eigenvalue.
 The function H is 2π-periodic, since e jΩ is 2π-periodic.
 We refer to H as the frequency response of the system H.
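 The Python sketch below (an added illustration using a hypothetical 3-point moving-average impulse response, assuming NumPy) evaluates the frequency response H(Ω) from its defining sum and checks the eigensequence relation for a complex sinusoidal input at a single frequency.

    import numpy as np

    # Hypothetical FIR impulse response (a simple 3-point moving average).
    h = np.array([1.0, 1.0, 1.0]) / 3.0

    def freq_response(h, Omega):
        """H(Omega) = sum_n h(n) e^{-j Omega n}, with h assumed zero outside [0, len(h))."""
        n = np.arange(len(h))
        return np.sum(h * np.exp(-1j * np.outer(Omega, n)), axis=1)

    # Check H{e^{j Omega0 n}} = H(Omega0) e^{j Omega0 n} at one frequency, ignoring the
    # start-up samples caused by truncating the input to n >= 0.
    Omega0 = np.pi / 4
    n = np.arange(0, 50)
    y = np.convolve(np.exp(1j * Omega0 * n), h)[: len(n)]
    H0 = freq_response(h, np.array([Omega0]))[0]
    assert np.allclose(y[len(h) - 1 :], H0 * np.exp(1j * Omega0 * n[len(h) - 1 :]))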
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 448
Fourier Series and LTI Systems

 Consider a LTI system with input x, output y, and frequency response H .


 Suppose that the N -periodic input x is expressed as the Fourier series
x(n) = ∑_{k=0}^{N−1} a_k e^{jkΩ0 n},   where Ω0 = 2π/N.

 Using our knowledge about the eigensequences of LTI systems, we can


conclude
y(n) = ∑_{k=0}^{N−1} a_k H(kΩ0) e^{jkΩ0 n}.

 Thus, if the input x to a LTI system is a Fourier series, the output y is also a
DTFS DTFS
Fourier series. More specifically, if x(n) ←→ ak then y(n) ←→ H(kΩ0 )ak .
 The above formula can be used to determine the output of a LTI system
from its input in a way that does not require convolution.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 449
Filtering

 In many applications, we want to modify the spectrum of a sequence by


either amplifying or attenuating certain frequency components.
 This process of modifying the frequency spectrum of a sequence is called
filtering.
 A system that performs a filtering operation is called a filter.
 Many types of filters exist.
 Frequency selective filters pass some frequencies with little or no
distortion, while significantly attenuating other frequencies.
 Several basic types of frequency-selective filters include: lowpass,
highpass, and bandpass.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 450
Ideal Lowpass Filter
 An ideal lowpass filter eliminates all baseband frequency components
with a frequency whose magnitude is greater than some cutoff frequency,
while leaving the remaining baseband frequency components unaffected.
 Such a filter has a frequency response of the form
H(Ω) = { 1,  |Ω| ≤ Ωc
       { 0,  Ωc < |Ω| ≤ π,

where Ωc is the cutoff frequency.


 A plot of this frequency response is given below.

[Plot of H(Ω) over −π ≤ Ω ≤ π: a passband of height 1 for |Ω| ≤ Ωc, with stopbands on either side for Ωc < |Ω| ≤ π.]
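 As an added illustration (not part of the original slides), the following Python sketch applies an ideal lowpass frequency response to an N-periodic input by scaling its Fourier series coefficients, computed via the FFT, by H(kΩ0); the input, the period N, and the cutoff Ωc are arbitrary example choices.

    import numpy as np

    # A hypothetical N-periodic input: a low-frequency sinusoid plus a high-frequency one.
    N = 32
    n = np.arange(N)
    x = np.cos(2 * np.pi * 2 * n / N) + 0.25 * np.cos(2 * np.pi * 12 * n / N)

    Omega_c = np.pi / 2                    # assumed cutoff frequency, for illustration

    # Fourier series coefficients a_k (via the FFT), with each coefficient's frequency
    # Omega = 2 pi k / N mapped into the baseband.
    a = np.fft.fft(x) / N
    Omega = 2 * np.pi * np.fft.fftfreq(N)

    # Ideal lowpass frequency response: keep |Omega| <= Omega_c, zero the rest.
    H = (np.abs(Omega) <= Omega_c).astype(float)

    # Output coefficients are H(k Omega_0) a_k; synthesize the output sequence.
    y = np.fft.ifft(N * (H * a))
    assert np.allclose(y.imag, 0, atol=1e-12)
    assert np.allclose(y.real, np.cos(2 * np.pi * 2 * n / N))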

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 451
Ideal Highpass Filter
 An ideal highpass filter eliminates all baseband frequency components
with a frequency whose magnitude is less than some cutoff frequency,
while leaving the remaining baseband frequency components unaffected.
 Such a filter has a frequency response of the form
H(Ω) = { 1,  Ωc < |Ω| ≤ π
       { 0,  |Ω| ≤ Ωc,

where Ωc is the cutoff frequency.


 A plot of this frequency response is given below.

[Plot of H(Ω) over −π ≤ Ω ≤ π: a stopband for |Ω| ≤ Ωc, with passbands of height 1 on either side for Ωc < |Ω| ≤ π.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 452
Ideal Bandpass Filter
 An ideal bandpass filter eliminates all baseband frequency components
with a frequency whose magnitude does not lie in a particular range, while
leaving the remaining baseband frequency components unaffected.
 Such a filter has a frequency response of the form
H(Ω) = { 1,  Ωc1 ≤ |Ω| ≤ Ωc2
       { 0,  |Ω| < Ωc1 or Ωc2 < |Ω| ≤ π,

where the limits of the passband are Ωc1 and Ωc2 .


 A plot of this frequency response is given below.

[Plot of H(Ω) over −π ≤ Ω ≤ π: passbands of height 1 for Ωc1 ≤ |Ω| ≤ Ωc2, with stopbands for |Ω| < Ωc1 and for Ωc2 < |Ω| ≤ π.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 453
Part 11

Discrete-Time Fourier Transform (DTFT)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 454
Motivation for the Fourier Transform

 The (DT) Fourier series provide an extremely useful representation for


periodic sequences.
 Often, however, we need to deal with sequences that are not periodic.
 A more general tool than the Fourier series is needed in this case.
 The (DT) Fourier transform can be used to represent both periodic and
aperiodic sequences.
 Since the (DT) Fourier transform is essentially derived from (DT) Fourier
series through a limiting process, the Fourier transform has many
similarities with Fourier series.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 455
Section 11.1

Fourier Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 456
Development of the Fourier Transform [Aperiodic Case]

 The (DT) Fourier series is an extremely useful signal representation.


 Unfortunately, this signal representation can only be used for periodic
sequences, since a Fourier series is inherently periodic.
 Many sequences are not periodic, however.
 Rather than abandoning Fourier series, one might wonder if we can
somehow use Fourier series to develop a representation that can also be
applied to aperiodic sequences.
 By viewing an aperiodic sequence as the limiting case of an N -periodic
sequence where N → ∞, we can use the Fourier series to develop a
signal representation that can be used for aperiodic sequences, known as
the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 457
Development of the Fourier Transform [Aperiodic Case] (Continued)

 Recall that the Fourier series representation of an N -periodic sequence x


is given by
x(n) = ∑_{k=⟨N⟩} ( (1/N) ∑_{ℓ=⟨N⟩} x(ℓ) e^{−j(2π/N)kℓ} ) e^{j(2π/N)kn},

where the parenthesized quantity is the Fourier series coefficient c_k.
 In the above representation, if we take the limit as N → ∞, we obtain
x(n) = (1/2π) ∫_{2π} ( ∑_{ℓ=−∞}^{∞} x(ℓ) e^{−jΩℓ} ) e^{jΩn} dΩ,

where the parenthesized quantity is X(Ω)
(i.e., as N → ∞, the two finite summations become an integral and an infinite
summation, 1/N becomes (1/2π) dΩ, and (2π/N)k becomes Ω).
 This representation for aperiodic sequences is known as the Fourier
transform representation.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 458
Generalized Fourier Transform

 The classical Fourier transform for aperiodic sequences does not exist
(i.e., ∑_{n=−∞}^{∞} x(n) e^{−jΩn} fails to converge) for some sequences of great
practical interest, such as:
2 a nonzero constant sequence;
2 a periodic sequence (e.g., a real or complex sinusoid); and
2 the unit-step sequence (i.e., u).
 Fortunately, the Fourier transform can be extended to handle such
sequences, resulting in what is known as the generalized Fourier
transform.
 For our purposes, we can think of the classical and generalized Fourier
transforms as being defined by the same formulas.
 Therefore, in what follows, we will not typically make a distinction between
the classical and generalized Fourier transforms.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 459
DT Fourier Transform (DTFT)
 The Fourier transform of the sequence x, denoted Fx or X , is given by

Fx(Ω) = X(Ω) = ∑_{n=−∞}^{∞} x(n) e^{−jΩn}.

 The preceding equation is sometimes referred to as Fourier transform


analysis equation (or forward Fourier transform equation).
 The inverse Fourier transform of X , denoted F−1 X or x, is given by
F^{−1}X(n) = x(n) = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ.

 The preceding equation is sometimes referred to as the Fourier


transform synthesis equation (or inverse Fourier transform equation).
 As a matter of notation, to denote that a sequence x has the Fourier
DTFT
transform X , we write x(n) ←→ X(Ω).
 A sequence x and its Fourier transform X constitute what is called a
Fourier transform pair.
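 The analysis equation can be approximated numerically for a sequence that is effectively of finite length. The Python sketch below (an added illustration, assuming NumPy) evaluates X(Ω) on a grid of frequencies for a truncated geometric sequence a^n u(n) with |a| < 1 and compares the result with the known closed-form transform e^{jΩ}/(e^{jΩ} − a).

    import numpy as np

    # Hypothetical example: x(n) = a^n u(n), truncated to 0 <= n < 60, with |a| < 1.
    a = 0.8
    n = np.arange(60)
    x = a ** n

    def dtft(x, n, Omega):
        """Numerically evaluate X(Omega) = sum_n x(n) e^{-j Omega n} over the given samples."""
        return np.sum(x * np.exp(-1j * np.outer(Omega, n)), axis=1)

    Omega = np.linspace(-np.pi, np.pi, 201)
    X = dtft(x, n, Omega)

    # For a^n u(n) with |a| < 1, the Fourier transform is e^{j Omega}/(e^{j Omega} - a);
    # the truncated sum agrees with it to within roughly a^60.
    X_exact = np.exp(1j * Omega) / (np.exp(1j * Omega) - a)
    assert np.allclose(X, X_exact, atol=1e-4)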
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 460
Section 11.2

Convergence Properties of the Fourier Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 461
Convergence of the Fourier Transform

 For a sequence x, the Fourier transform analysis equation (i.e.,


X(Ω) = ∑_{n=−∞}^{∞} x(n) e^{−jΩn}) converges uniformly if

∑_{k=−∞}^{∞} |x(k)| < ∞

(i.e., x is absolutely summable).


 For a sequence x, the Fourier transform analysis equation (i.e.,
X(Ω) = ∑∞ −∞ x(n)e
− jΩn ) converges in the MSE sense if


∑ |x(k)|2 < ∞
k=−∞

(i.e., x is square summable).


 For a bounded Fourier transform X , the Fourier transform synthesis
equation (i.e., x(n) = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ) will always converge, since the
integration interval is finite.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 462
Section 11.3

Properties of the Fourier Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 463
Properties of the (DT) Fourier Transform

Property            Time Domain              Frequency Domain
Linearity           a1 x1(n) + a2 x2(n)      a1 X1(Ω) + a2 X2(Ω)
Translation         x(n − n0)                e^{−jΩn0} X(Ω)
Modulation          e^{jΩ0 n} x(n)           X(Ω − Ω0)
Conjugation         x*(n)                    X*(−Ω)
Time Reversal       x(−n)                    X(−Ω)
Upsampling          (↑ M)x(n)                X(MΩ)
Downsampling        (↓ M)x(n)                (1/M) ∑_{k=0}^{M−1} X((Ω − 2πk)/M)
Convolution         x1 ∗ x2(n)               X1(Ω) X2(Ω)
Multiplication      x1(n) x2(n)              (1/2π) ∫_{2π} X1(θ) X2(Ω − θ) dθ
Freq.-Domain Diff.  n x(n)                   j (d/dΩ) X(Ω)
Differencing        x(n) − x(n − 1)          (1 − e^{−jΩ}) X(Ω)
Accumulation        ∑_{k=−∞}^{n} x(k)        (e^{jΩ}/(e^{jΩ} − 1)) X(Ω) + π X(0) ∑_{k=−∞}^{∞} δ(Ω − 2πk)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 464
Properties of the (DT) Fourier Transform (Continued)

Property
Periodicity                 X(Ω) = X(Ω + 2π)
Parseval’s Relation         ∑_{n=−∞}^{∞} |x(n)|² = (1/2π) ∫_{2π} |X(Ω)|² dΩ
Even Symmetry               x is even ⇔ X is even
Odd Symmetry                x is odd ⇔ X is odd
Real / Conjugate Symmetry   x is real ⇔ X is conjugate symmetric

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 465
(DT) Fourier Transform Pairs

Pair   x(n)                          X(Ω)
1      δ(n)                          1
2      1                             2π ∑_{k=−∞}^{∞} δ(Ω − 2πk)
3      u(n)                          e^{jΩ}/(e^{jΩ} − 1) + ∑_{k=−∞}^{∞} π δ(Ω − 2πk)
4      a^n u(n), |a| < 1             e^{jΩ}/(e^{jΩ} − a)
5      −a^n u(−n − 1), |a| > 1       e^{jΩ}/(e^{jΩ} − a)
6      a^{|n|}, |a| < 1              (1 − a²)/(1 − 2a cos Ω + a²)
7      cos(Ω0 n)                     π ∑_{k=−∞}^{∞} [δ(Ω − Ω0 − 2πk) + δ(Ω + Ω0 − 2πk)]
8      sin(Ω0 n)                     jπ ∑_{k=−∞}^{∞} [δ(Ω + Ω0 − 2πk) − δ(Ω − Ω0 − 2πk)]
9      cos(Ω0 n)u(n)                 (e^{j2Ω} − e^{jΩ} cos Ω0)/(e^{j2Ω} − 2e^{jΩ} cos Ω0 + 1) + (π/2) ∑_{k=−∞}^{∞} [δ(Ω − 2πk − Ω0) + δ(Ω − 2πk + Ω0)]
10     sin(Ω0 n)u(n)                 (e^{jΩ} sin Ω0)/(e^{j2Ω} − 2e^{jΩ} cos Ω0 + 1) + (π/(2j)) ∑_{k=−∞}^{∞} [δ(Ω − 2πk − Ω0) − δ(Ω − 2πk + Ω0)]
11     (B/π) sinc(Bn), 0 < B < π     ∑_{k=−∞}^{∞} rect((Ω − 2πk)/(2B))
12     u(n) − u(n − M)               e^{−jΩ(M−1)/2} sin(MΩ/2)/sin(Ω/2)
13     n a^n u(n), |a| < 1           a e^{jΩ}/(e^{jΩ} − a)²

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 466
Periodicity

 Recall the definition of the Fourier transform X of the sequence x:



X(Ω) = ∑ x(n)e− jΩn .
n=−∞

 For all integer k, we have that



X(Ω + 2πk) = ∑_{n=−∞}^{∞} x(n) e^{−j(Ω+2πk)n}
           = ∑_{n=−∞}^{∞} x(n) e^{−j(Ωn+2πkn)}
           = ∑_{n=−∞}^{∞} x(n) e^{−jΩn}
           = X(Ω).
 Thus, the Fourier transform X of the sequence x is always 2π-periodic.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 467
Linearity

DTFT DTFT
 If x1 (n) ←→ X1 (Ω) and x2 (n) ←→ X2 (Ω), then
DTFT
a1 x1 (n) + a2 x2 (n) ←→ a1 X1 (Ω) + a2 X2 (Ω),

where a1 and a2 are arbitrary complex constants.


 This is known as the linearity property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 468
Translation

DTFT
 If x(n) ←→ X(Ω), then

x(n − n0 ) ←→ e− jΩn0 X(Ω),


DTFT

where n0 is an arbitrary integer.


 This is known as the translation (or time-domain shifting) property of
the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 469
Modulation

DTFT
 If x(n) ←→ X(Ω), then

e jΩ0 n x(n) ←→ X(Ω − Ω0 ),


DTFT

where Ω0 is an arbitrary real constant.


 This is known as the modulation (or frequency-domain shifting)
property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 470
Conjugation

DTFT
 If x(n) ←→ X(Ω), then

x∗ (n) ←→ X ∗ (−Ω).
DTFT

 This is known as the conjugation property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 471
Time Reversal

DTFT
 If x(n) ←→ X(Ω), then
DTFT
x(−n) ←→ X(−Ω).
 This is known as the time-reversal property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 472
Upsampling

DTFT
 If x(n) ←→ X(Ω), then
DTFT
(↑ M)x(n) ←→ X(MΩ).
 This is known as the upsampling property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 473
Downsampling

DTFT
 If x(n) ←→ X(Ω), then
DTFT
(↓ M)x(n) ←→ (1/M) ∑_{k=0}^{M−1} X((Ω − 2πk)/M).

 This is known as the downsampling property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 474
Convolution

DTFT DTFT
 If x1 (n) ←→ X1 (Ω) and x2 (n) ←→ X2 (Ω), then
DTFT
x1 ∗ x2 (n) ←→ X1 (Ω)X2 (Ω).
 This is known as the convolution (or time-domain convolution)
property of the Fourier transform.
 In other words, a convolution in the time domain becomes a multiplication
in the frequency domain.
 This suggests that the Fourier transform can be used to avoid having to
deal with convolution operations.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 475
Multiplication

DTFT DTFT
 If x1 (n) ←→ X1 (Ω) and x2 (n) ←→ X2 (Ω), then
DTFT
x1(n) x2(n) ←→ (1/2π) ∫_{2π} X1(θ) X2(Ω − θ) dθ.

 This is known as the multiplication (or time-domain multiplication)


property of the Fourier transform.
 Do not forget the factor of 1/(2π) in the above formula!
 This property of the Fourier transform is often tedious to apply (in the
forward direction) as it turns a multiplication into a convolution.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 476
Frequency-Domain Differentiation

DTFT
 If x(n) ←→ X(Ω), then
DTFT
nx(n) ←→ j (d/dΩ) X(Ω).
 This is known as the frequency-domain differentiation property of the
Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 477
Differencing

DTFT
 If x(n) ←→ X(Ω), then

x(n) − x(n − 1) ←→ (1 − e^{−jΩ}) X(Ω).
DTFT

 This is known as the differencing property of the Fourier transform.


 Note that this property follows quite trivially from the linearity and
translation properties of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 478
Accumulation

DTFT
 If x(n) ←→ X(Ω), then
DTFT
∑_{k=−∞}^{n} x(k) ←→ (e^{jΩ}/(e^{jΩ} − 1)) X(Ω) + π X(0) ∑_{k=−∞}^{∞} δ(Ω − 2πk).

 This is known as the accumulation property of the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 479
Parseval’s Relation

DTFT
 If x(n) ←→ X(Ω), then
∑_{n=−∞}^{∞} |x(n)|² = (1/2π) ∫_{2π} |X(Ω)|² dΩ

(i.e., the energy of x and energy of X are equal up to a factor of 2π).


 This is known as Parseval’s relation.
 Since energy is often a quantity of great significance in engineering
applications, it is extremely helpful to know that the Fourier transform
preserves energy (up to a scale factor).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 480
Even and Odd Symmetry

 For a sequence x with Fourier transform X , the following assertions hold:


1 x is even ⇔ X is even; and

2 x is odd ⇔ X is odd.

 In other words, the forward and inverse Fourier transforms preserve


even/odd symmetry.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 481
Real Sequences

 A sequence x is real if and only if its Fourier transform X satisfies

X(Ω) = X ∗ (−Ω) for all Ω

(i.e., X is conjugate symmetric).


 Thus, for a real-valued sequence, the portion of the graph of a Fourier
transform for negative values of frequency Ω is redundant, as it is
completely determined by symmetry.
 From properties of complex numbers, one can show that
X(Ω) = X ∗ (−Ω) is equivalent to

|X(Ω)| = |X(−Ω)| and arg X(Ω) = − arg X(−Ω)

(i.e., |X(Ω)| is even and arg X(Ω) is odd).


 Note that x being real does not necessarily imply that X is real.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 482
Section 11.4

Fourier Transform of Periodic Sequences

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 483
Fourier Transform of Periodic Sequences
 The Fourier transform can be generalized to also handle periodic
sequences.
 Consider an N -periodic sequence x.
 Define the sequence xN as
xN(n) = { x(n),  0 ≤ n < N
        { 0,     otherwise.

(i.e., xN (n) is equal to x(n) over a single period and zero elsewhere).
 Let a denote the Fourier series coefficient sequence of x.
 Let X and XN denote the Fourier transforms of x and xN , respectively.
 The following relationships can be shown to hold:
X(Ω) = (2π/N) ∑_{k=−∞}^{∞} XN(2πk/N) δ(Ω − 2πk/N),

a_k = (1/N) XN(2πk/N),   and   X(Ω) = 2π ∑_{k=−∞}^{∞} a_k δ(Ω − 2πk/N).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 484
Fourier Transform of Periodic Sequences (Continued)

 The Fourier series coefficient sequence a is produced by sampling XN at


integer multiples of the fundamental frequency 2π/N and scaling the
resulting sequence by 1/N.
 The Fourier transform of a periodic sequence can only be nonzero at
integer multiples of the fundamental frequency.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 485
Section 11.5

Fourier Transform and Frequency Spectra of Sequences

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 486
Frequency Spectra of Sequences

 Like Fourier series, the Fourier transform also provides us with a


frequency-domain perspective on sequences.
 That is, instead of viewing a sequence as having information distributed
with respect to time (i.e., a function whose domain is time), we view a
sequence as having information distributed with respect to frequency (i.e.,
a function whose domain is frequency).
 The Fourier transform X of a sequence x provides a means to quantify
how much information x has at different frequencies.
 The distribution of information in a sequence over different frequencies is
referred to as the frequency spectrum of the sequence.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 487
Fourier Transform and Frequency Spectra

 To gain further insight into the role played by the Fourier transform X in
the context of the frequency spectrum of x, it is helpful to write the Fourier
transform representation of x with X(Ω) expressed in polar form as
follows:
x(n) = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ = (1/2π) ∫_{2π} |X(Ω)| e^{j[Ωn + arg X(Ω)]} dΩ.

 In effect, the quantity |X(Ω)| is a weight that determines how much the
complex sinusoid at frequency Ω contributes to the integration result x(n).
 Perhaps, this can be more easily seen if we express the above integral as
the limit of a sum, derived from an approximation of the integral using the
area of rectangles, as shown on the next slide. [Recall that
∫_a^b f(x) dx = lim_{n→∞} ∑_{k=1}^{n} f(x_k) Δx, where Δx = (b−a)/n and x_k = a + kΔx.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 488
Fourier Transform and Frequency Spectra (Continued 1)
 Expressing the integral (from the previous slide) as the limit of a sum, we
obtain
x(n) = lim_{ℓ→∞} (1/2π) ∑_{k=1}^{ℓ} |X(Ω′)| e^{j[Ω′n + arg X(Ω′)]} ΔΩ,

where ΔΩ = 2π/ℓ and Ω′ = kΔΩ.

 In the above equation, the kth term in the summation corresponds to a


complex sinusoid with fundamental frequency Ω′ = kΔΩ that has had its
amplitude scaled by a factor of |X(Ω′)| and has been time shifted by an
amount that depends on arg X(Ω′).
 For a given Ω′ = kΔΩ (which is associated with the kth term in the
summation), the larger |X(Ω′)| is, the larger the amplitude of its
corresponding complex sinusoid e^{jΩ′n} will be, and therefore the larger the
contribution the kth term will make to the overall summation.
 In this way, we can use |X(Ω′)| as a measure of how much information a
sequence x has at the frequency Ω′.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 489
Fourier Transform and Frequency Spectra (Continued 2)

 The Fourier transform X of the sequence x is referred to as the frequency


spectrum of x.
 The magnitude |X(Ω)| of the Fourier transform X is referred to as the
magnitude spectrum of x.
 The argument arg X(Ω) of the Fourier transform X is referred to as the
phase spectrum of x.
 Since the Fourier transform is a function of a real variable, a sequence
can potentially have information at any real frequency.
 Earlier, we saw that for periodic sequences, the Fourier transform can only
be nonzero at integer multiples of the fundamental frequency.
 So, the Fourier transform and Fourier series give a consistent picture in
terms of frequency spectra.
 Since the frequency spectrum is complex (in the general case), it is
usually represented using two plots, one showing the magnitude
spectrum and one showing the phase spectrum.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 490
Frequency Spectra of Real Sequences
 Recall that, for a real sequence x, the Fourier transform X of x satisfies

X(Ω) = X ∗ (−Ω)

(i.e., X is conjugate symmetric), which is equivalent to

|X(Ω)| = |X(−Ω)| and arg X(Ω) = − arg X(−Ω).


 Since |X(Ω)| = |X(−Ω)|, the magnitude spectrum of a real sequence is
always even.
 Similarly, since arg X(Ω) = − arg X(−Ω), the phase spectrum of a real
sequence is always odd.
 Due to the symmetry in the frequency spectra of real sequences, we
typically ignore negative frequencies when dealing with such sequences.
 In the case of sequences that are complex but not real, frequency spectra
do not possess the above symmetry, and negative frequencies become
important.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 491
Bandwidth
 A sequence x with Fourier transform X satisfying X(Ω) = 0 for all Ω in
(−π, π] except for some interval I is said to be bandlimited to
frequencies in I .
 The bandwidth of a sequence x with Fourier transform X is the length of
the interval in (−π, π] over which X is nonzero.
 For example, the sequence x with the Fourier transform X shown below is
bandlimited to frequencies in [−B, B] and has bandwidth B − (−B) = 2B.
[Plot: a Fourier transform X(Ω) that is nonzero only on the interval [−B, B] within (−π, π].]

 Since x is real in the above example (as X is conjugate symmetric), we


might choose to ignore negative frequencies, in which case x would be
deemed to be bandlimited to frequencies in [0, B] and have bandwidth
B − 0 = B.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 492
Energy-Density Spectra
 By Parseval’s relation, the energy E in a sequence x with Fourier
transform X is given by
E = (1/(2π)) ∫_{2π} E_x(Ω) dΩ,

where

E_x(Ω) = |X(Ω)|².
 We refer to Ex as the energy-density spectrum of the sequence x.
 The function Ex indicates how the energy in x is distributed with respect to
frequency.
 For example, the energy contributed by frequencies in the range [Ω1 , Ω2 ]
is given by
(1/(2π)) ∫_{Ω1}^{Ω2} E_x(Ω) dΩ.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 493
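 Parseval's relation is easy to check numerically for a finite-length sequence by approximating the frequency-domain integral with a sum. A minimal sketch (the sequence is an arbitrary illustrative choice):

    import numpy as np

    x = np.array([1.0, -2.0, 3.0, 0.5])        # illustrative sequence
    n = np.arange(len(x))

    # time-domain energy: sum of |x(n)|^2
    E_time = np.sum(np.abs(x) ** 2)

    # frequency-domain energy: (1/2pi) times the integral of |X(Omega)|^2 over one period
    omega = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
    X = np.array([np.sum(x * np.exp(-1j * w * n)) for w in omega])
    E_freq = np.sum(np.abs(X) ** 2) * (omega[1] - omega[0]) / (2 * np.pi)

    print(E_time, E_freq)   # the two values agree to within the quadrature error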
Section 11.6

Fourier Transform and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 494
Frequency Response of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the Fourier transforms of x, y, and h, respectively.
 Since y(n) = x ∗ h(n), we have that

Y (Ω) = X(Ω)H(Ω).
 The function H is called the frequency response of the system.
 A LTI system is completely characterized by its frequency response H .
 The above equation provides an alternative way of viewing the behavior of
a LTI system. That is, we can view the system as operating in the
frequency domain on the Fourier transforms of the input and output
signals.
 The frequency spectrum of the output is the product of the frequency
spectrum of the input and the frequency response of the system.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 495
Frequency Response of LTI Systems (Continued 1)

 In the general case, the frequency response H is a complex-valued


function.
 Often, we represent H(Ω) in terms of its magnitude |H(Ω)| and argument
arg H(Ω).
 The quantity |H(Ω)| is called the magnitude response of the system.
 The quantity arg H(Ω) is called the phase response of the system.
 Since Y (Ω) = X(Ω)H(Ω), we trivially have that

|Y (Ω)| = |X(Ω)| |H(Ω)| and argY (Ω) = arg X(Ω) + arg H(Ω).
 The magnitude spectrum of the output equals the magnitude spectrum of
the input times the magnitude response of the system.
 The phase spectrum of the output equals the phase spectrum of the input
plus the phase response of the system.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 496
Frequency Response of LTI Systems (Continued 2)

 Since the frequency response H is simply the frequency spectrum of the


impulse response h, if h is real, then

|H(Ω)| = |H(−Ω)| and arg H(Ω) = − arg H(−Ω)

(i.e., the magnitude response |H(Ω)| is even and the phase response
arg H(Ω) is odd).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 497
Unwrapped Phase
 For many types of analysis, restricting the range of a phase function to an
interval of length 2π (such as (−π, π]), often unnecessarily introduces
discontinuities into the function.
 This motivates the notion of unwrapped phase.
 The unwrapped phase is simply the phase defined in such a way so as
not to restrict the phase to an interval of length 2π and to keep the phase
function continuous to the greatest extent possible.
 For example, the function H(Ω) = e j3Ω has the unwrapped phase
Θ(Ω) = 3Ω.
[Plots: the principal (wrapped) phase Arg H(Ω) and the unwrapped phase Θ(Ω) = 3Ω, for −π < Ω ≤ π.]


Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 498
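 Numerically, an unwrapped phase can be obtained from samples of the principal phase by removing the artificial jumps of 2π. A minimal sketch using NumPy, with H(Ω) = e^{j3Ω} as in the example above:

    import numpy as np

    omega = np.linspace(-np.pi, np.pi, 512, endpoint=False)
    H = np.exp(1j * 3 * omega)          # H(Omega) = e^{j 3 Omega}

    wrapped = np.angle(H)               # principal phase, restricted to (-pi, pi]
    unwrapped = np.unwrap(wrapped)      # jumps of 2*pi removed

    # up to an additive multiple of 2*pi, the unwrapped phase follows 3*Omega
    print(np.allclose(np.diff(unwrapped), np.diff(3 * omega)))   # True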
Interpretation of Magnitude and Phase Response

 Recall that a LTI system H with frequency response H is such that

H{e^{jΩn}}(n) = H(Ω) e^{jΩn}.
 Expressing H(Ω) in polar form, we have
H{e^{jΩn}}(n) = |H(Ω)| e^{j arg H(Ω)} e^{jΩn}
= |H(Ω)| e^{j[Ωn + arg H(Ω)]}
= |H(Ω)| e^{jΩ(n + arg[H(Ω)]/Ω)}.
 Thus, the response of the system to the sequence e jΩn is produced by
applying two transformations to this sequence:
1 (amplitude) scaling by |H(Ω)|; and
2 translating by −arg H(Ω)/Ω (using bandlimited interpolation if −arg H(Ω)/Ω ∉ Z).
 Therefore, the magnitude response determines how different complex
sinusoids are scaled (in amplitude) by the system.
 Similarly, the phase response determines how different complex sinusoids
are translated (i.e., delayed/advanced) by the system.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 499
Magnitude Distortion

 Recall that a LTI system H with frequency response H is such that

H{e jΩn }(n) = |H(Ω)| e jΩ(n+arg[H(Ω)]/Ω) .


 If |H(Ω)| is a constant (for all Ω), every complex sinusoid is scaled by the
same amount when passing through the system.
 A system for which |H(Ω)| = 1 (for all Ω) is said to be allpass.
 In the case of an allpass system, the magnitude spectra of the system’s
input and output are identical.
 If |H(Ω)| is not a constant, different complex sinusoids are scaled by
different amounts, resulting in what is known as magnitude distortion.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 500
Phase Distortion
 Recall that a LTI system H with frequency response H is such that

H{e jΩn }(n) = |H(Ω)| e jΩ(n+arg[H(Ω)]/Ω) .


 The preceding equation can be rewritten as

H{e^{jΩn}}(n) = |H(Ω)| e^{jΩ[n − τ_p(Ω)]} where τ_p(Ω) = −arg H(Ω)/Ω.
 The function τp is known as the phase delay of the system.
 If τp (Ω) = nd (where nd is a constant), the system shifts all complex
sinusoids by the same amount nd .
 Since τp (Ω) = nd is equivalent to the (unwrapped) phase response being
of the form arg H(Ω) = −nd Ω (which is a linear function with a zero
constant term), a system with a constant phase delay is said to have
linear phase.
 In the case that τp (Ω) = 0, the system is said to have zero phase.
 If τp (Ω) is not a constant, different complex sinusoids are shifted by
different amounts, resulting in what is known as phase distortion.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 501
Distortionless Transmission
 Consider a LTI system H with input x and output y given by

y(n) = x(n − n0 ),
where n0 is an integer constant.
 That is, the output of the system is simply the input delayed by n0 .
 This type of behavior is the ideal for which we strive in real-world
communication systems (i.e., the received signal y equals a delayed
version of the transmitted signal x).
 Taking the Fourier transform of the preceding equation, we have

Y (Ω) = e− jΩn0 X(Ω).


 Thus, the system has the frequency response H given by

H(Ω) = e− jΩn0 .
 
 Since the phase delay of the system is τ_p(Ω) = −(−Ωn_0/Ω) = n_0, the
phase delay is constant and the system has linear phase.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 502
Block Diagram Representations of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the Fourier transforms of x, y, and h, respectively.
 Often, it is convenient to represent such a system in block diagram form in
the frequency domain as shown below.

[Block diagram: input X → system H → output Y.]

 Since a LTI system is completely characterized by its frequency response,


we typically label the system with this quantity.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 503
Interconnection of LTI Systems

 The series interconnection of the LTI systems with frequency responses


H1 and H2 is the LTI system with frequency response H1 H2 . That is, we
have the equivalences shown below.
[Diagrams: X → H1 → H2 → Y ≡ X → H1H2 → Y, and X → H1 → H2 → Y ≡ X → H2 → H1 → Y.]

 The parallel interconnection of the LTI systems with frequency responses


H1 and H2 is the LTI system with the frequency response H1 + H2 . That
is, we have the equivalence shown below.
[Diagram: X applied to H1 and H2 in parallel with the outputs summed to give Y ≡ X → (H1 + H2) → Y.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 504
LTI Systems and Difference Equations

 Many LTI systems of practical interest can be represented using an


Nth-order linear difference equation with constant coefficients.
 Consider a system with input x and output y that is characterized by an
equation of the form
∑_{k=0}^{N} b_k y(n − k) = ∑_{k=0}^{M} a_k x(n − k).

 Let h denote the impulse response of the system, and let X , Y , and H
denote the Fourier transforms of x, y, and h, respectively.
 One can show that H(Ω) is given by

H(Ω) = Y(Ω)/X(Ω) = [∑_{k=0}^{M} a_k (e^{jΩ})^{−k}] / [∑_{k=0}^{N} b_k (e^{jΩ})^{−k}] = [∑_{k=0}^{M} a_k e^{−jkΩ}] / [∑_{k=0}^{N} b_k e^{−jkΩ}].

 Each of the numerator and denominator of H is a polynomial in e− jΩ .


 Thus, H is a rational function in the variable e− jΩ .
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 505
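 For a system described by such a difference equation, the frequency response can be evaluated directly from the coefficient sequences a_k and b_k. A minimal sketch using scipy.signal.freqz (the coefficient values are an arbitrary illustrative choice; note that freqz takes the numerator coefficients first):

    import numpy as np
    from scipy.signal import freqz

    # illustrative first-order system: y(n) - 0.9 y(n-1) = 0.1 x(n)
    a_coeffs = [0.1]          # input-side (numerator) coefficients a_0, ..., a_M
    b_coeffs = [1.0, -0.9]    # output-side (denominator) coefficients b_0, ..., b_N

    omega, H = freqz(a_coeffs, b_coeffs, worN=1024)   # H evaluated at e^{j omega}

    magnitude = np.abs(H)     # magnitude response |H(Omega)|
    phase = np.angle(H)       # phase response arg H(Omega)
    print(magnitude[:3], phase[:3])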
Section 11.7

Fourier Transform Relationships

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 506
Duality Between DTFT and CTFS
 The DTFT analysis and synthesis equations are, respectively, given by
X(Ω) = ∑_{k=−∞}^{∞} x(k) e^{−jkΩ} and x(n) = (1/(2π)) ∫_{2π} X(Ω) e^{jnΩ} dΩ.
 The CTFS synthesis and analysis equations are, respectively, given by
x_c(t) = ∑_{k=−∞}^{∞} a(k) e^{jk(2π/T)t} and a(n) = (1/T) ∫_T x_c(t) e^{−jn(2π/T)t} dt,

which can be rewritten, respectively, as

x_c(t) = ∑_{k=−∞}^{∞} a(−k) e^{−jk(2π/T)t} and a(−n) = (1/T) ∫_T x_c(t) e^{jn(2π/T)t} dt.
 The CTFS synthesis equation with T = 2π corresponds to the DTFT
analysis equation with X = xc , Ω = t , and x(n) = a(−n).
 The CTFS analysis equation with T = 2π corresponds to the DTFT
synthesis equation with X = xc and x(n) = a(−n).
 Consequently, the DTFT X of the sequence x can be viewed as a CTFS
representation of the 2π-periodic spectrum X .
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 507
Relationship Between DTFT and CTFT
 Let x be a bandlimited function and let T denote a sampling period for x
that satisfies the Nyquist condition.
 Let ỹ be the function obtained by impulse sampling x with sampling period
T . That is,

ỹ(t) = ∑_{n=−∞}^{∞} x(Tn) δ(t − Tn).

 Let y denote the sequence obtaining by sampling x with sampling period


T . That is,

y(n) = x(T n).


 Let Ỹ denote the (CT) Fourier transform of ỹ and let Y denote the (DT)
Fourier transform of y.
 Then, the following relationship holds:


Y(Ω) = Ỹ(Ω/T) for all Ω ∈ R.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 508
Relationship Between DTFT and DFT

 Let x be a sequence with (DT) Fourier transform X such that

x(n) = 0 for all n ∉ [0 . . M − 1].


 Let X̃ denote the N-point DFT of x. That is,

X̃(k) = ∑_{n=0}^{N−1} x(n) e^{−j(2π/N)kn} for k ∈ [0 . . N − 1].

 Suppose now that N ≥ M .


 Then, the following relationship holds:


X(2πk/N) = X̃(k) for k ∈ [0 . . N − 1].
 In other words, the elements of the sequence X̃ correspond to
uniformly-spaced samples of the function X .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 509
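 This sampling relationship is easy to verify numerically: the N-point DFT of a (zero-padded) finite-length sequence matches the DTFT evaluated at Ω = 2πk/N. A minimal sketch (the sequence is an arbitrary illustrative choice):

    import numpy as np

    x = np.array([1.0, 2.0, 0.5])     # nonzero only on 0..M-1 with M = 3
    M = len(x)
    N = 8                             # any N >= M works

    # N-point DFT (x is implicitly zero-padded to length N)
    X_dft = np.fft.fft(x, n=N)

    # DTFT X(Omega) evaluated at Omega = 2*pi*k/N
    k = np.arange(N)
    omega_k = 2 * np.pi * k / N
    X_dtft = np.array([np.sum(x * np.exp(-1j * w * np.arange(M))) for w in omega_k])

    print(np.allclose(X_dft, X_dtft))  # True: the DFT samples the DTFT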
Spectral Sampling Example

 Consider the sequence

x(n) = u(n) − u(n − 4).


 The Fourier transform X of x can be shown to be

X(Ω) = e^{−j(3/2)Ω} [sin(2Ω) / sin((1/2)Ω)].

 Clearly, x(n) = 0 for all n ∉ [0 . . 3].


 Therefore, uniformly-spaced samples of X can be obtained from an
N -point DFT X̃ of x, where N ≥ 4.
 The subsequent slides show the sampled spectrum obtained by the DFT
for several values of N .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 510
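 The closed-form expression above can be checked against the DFT for one choice of N. A minimal sketch (assuming NumPy; the value Ω = 0 is handled separately, where the formula has the limiting value 4):

    import numpy as np

    x = np.ones(4)                    # x(n) = u(n) - u(n - 4)
    N = 8                             # any N >= 4 works

    X_dft = np.fft.fft(x, n=N)        # N-point DFT

    # closed form: X(Omega) = e^{-j(3/2)Omega} sin(2 Omega) / sin(Omega / 2)
    k = np.arange(N)
    omega = 2 * np.pi * k / N
    with np.errstate(invalid="ignore", divide="ignore"):
        X_formula = np.exp(-1.5j * omega) * np.sin(2 * omega) / np.sin(omega / 2)
    X_formula[omega == 0] = 4         # limiting value of the formula at Omega = 0

    print(np.allclose(X_dft, X_formula))   # True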
Spectral Sampling Example: N = 4

[Plots: the magnitude spectrum |X(Ω)| overlaid with the 4-point DFT samples X̃(k) at Ω = (π/2)k, and the phase spectrum arg X(Ω) overlaid with the samples arg X̃(k).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 511
Spectral Sampling Example: N = 8

[Plots: the magnitude spectrum |X(Ω)| overlaid with the 8-point DFT samples X̃(k) at Ω = (π/4)k, and the phase spectrum arg X(Ω) overlaid with the samples arg X̃(k).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 512
Spectral Sampling Example: N = 16

[Plots: the magnitude spectrum |X(Ω)| overlaid with the 16-point DFT samples X̃(k) at Ω = (π/8)k, and the phase spectrum arg X(Ω) overlaid with the samples arg X̃(k).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 513
Spectral Sampling Example: N = 64

[Plots: the magnitude spectrum |X(Ω)| overlaid with the 64-point DFT samples X̃(k) at Ω = (π/32)k, and the phase spectrum arg X(Ω) overlaid with the samples arg X̃(k).]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 514
Section 11.8

Application: Filtering

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 515
Filtering

 In many applications, we want to modify the spectrum of a signal by


either amplifying or attenuating certain frequency components.
 This process of modifying the frequency spectrum of a signal is called
filtering.
 A system that performs a filtering operation is called a filter.
 Many types of filters exist.
 Frequency selective filters pass some frequencies with little or no
distortion, while significantly attenuating other frequencies.
 Several basic types of frequency-selective filters include: lowpass,
highpass, and bandpass.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 516
Ideal Lowpass Filter
 An ideal lowpass filter eliminates all baseband frequency components
with a frequency whose magnitude is greater than some cutoff frequency,
while leaving the remaining baseband frequency components unaffected.
 Such a filter has a frequency response H of the form
H(Ω) = 1 for |Ω| ≤ Ωc and H(Ω) = 0 for Ωc < |Ω| ≤ π,

where Ωc is the cutoff frequency.


 A plot of this frequency response is given below.

[Plot: H(Ω) equal to 1 on the passband −Ωc ≤ Ω ≤ Ωc and 0 on the stopbands Ωc < |Ω| ≤ π.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 517
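 One can show (a standard result not derived on these slides) that this ideal frequency response corresponds to the impulse response h(n) = sin(Ωc n)/(πn), which is infinitely long and noncausal, so in practice the ideal lowpass filter can only be approximated. A minimal sketch that truncates this impulse response and examines the resulting FIR approximation (the cutoff and length are arbitrary illustrative choices):

    import numpy as np

    omega_c = np.pi / 4                  # illustrative cutoff frequency
    L = 51                               # number of retained impulse-response samples
    n = np.arange(-(L // 2), L // 2 + 1)

    # truncated ideal-lowpass impulse response h(n) = sin(omega_c n) / (pi n)
    h = np.zeros(L)
    h[n != 0] = np.sin(omega_c * n[n != 0]) / (np.pi * n[n != 0])
    h[n == 0] = omega_c / np.pi

    # frequency response of the truncated filter
    omega = np.linspace(-np.pi, np.pi, 1024)
    H = np.array([np.sum(h * np.exp(-1j * w * n)) for w in omega])

    passband = np.abs(omega) <= omega_c - 0.2
    stopband = np.abs(omega) >= omega_c + 0.2
    print(np.max(np.abs(np.abs(H[passband]) - 1)))   # small deviation from 1 in the passband
    print(np.max(np.abs(H[stopband])))               # small leakage in the stopbands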
Ideal Highpass Filter
 An ideal highpass filter eliminates all baseband frequency components
with a frequency whose magnitude is less than some cutoff frequency,
while leaving the remaining baseband frequency components unaffected.
 Such a filter has a frequency response H of the form
H(Ω) = 1 for Ωc < |Ω| ≤ π and H(Ω) = 0 for |Ω| ≤ Ωc,

where Ωc is the cutoff frequency.


 A plot of this frequency response is given below.

[Plot: H(Ω) equal to 1 on the passbands Ωc < |Ω| ≤ π and 0 on the stopband |Ω| ≤ Ωc.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 518
Ideal Bandpass Filter
 An ideal bandpass filter eliminates all baseband frequency components
with a frequency whose magnitude does not lie in a particular range, while
leaving the remaining baseband frequency components unaffected.
 Such a filter has a frequency response H of the form
H(Ω) = 1 for Ωc1 ≤ |Ω| ≤ Ωc2 and H(Ω) = 0 for |Ω| < Ωc1 or Ωc2 < |Ω| < π,

where the limits of the passband are Ωc1 and Ωc2 .


 A plot of this frequency response is given below.

[Plot: H(Ω) equal to 1 on the passbands Ωc1 ≤ |Ω| ≤ Ωc2 and 0 on the stopbands elsewhere in (−π, π].]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 519
Part 12

z Transform (ZT)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 520
Motivation Behind the z Transform

 Another important mathematical tool in the study of signals and systems


is known as the z transform.
 The z transform can be viewed as a generalization of the (classical)
Fourier transform.
 Due to its more general nature, the z transform has a number of
advantages over the (classical) Fourier transform.
 First, the z transform representation exists for some sequences that do
not have a Fourier transform representation. So, we can handle some
sequences with the z transform that cannot be handled with the Fourier
transform.
 Second, since the z transform is a more general tool, it can provide
additional insights beyond those facilitated by the Fourier transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 521
Motivation Behind the z Transform (Continued)

 Earlier, we saw that complex exponentials are eigensequences of LTI


systems.
 In particular, for a LTI system H with impulse response h, we have that

H{z^n}(n) = H(z) z^n where H(z) = ∑_{n=−∞}^{∞} h(n) z^{−n}.

 Previously, we referred to H as the system function.


 As it turns out, H is the z transform of h.
 Since the z transform has already appeared earlier in the context of LTI
systems, it is clearly a useful tool.
 Furthermore, as we will see, the z transform has many additional uses.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 522
Section 12.1

z Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 523
(Bilateral) z Transform
 The (bilateral) z transform of the sequence x, denoted Zx or X , is
defined as

Zx(z) = X(z) = ∑_{n=−∞}^{∞} x(n) z^{−n}.

 The inverse z transform of X , denoted Z−1 X or x, is then given by


Z^{−1}X(n) = x(n) = (1/(2πj)) ∮_Γ X(z) z^{n−1} dz,
where Γ is a counterclockwise closed circular contour centered at the
origin and with radius r such that Γ is in the ROC of X .
 We refer to x and X as a z transform pair and denote this relationship as
ZT
x(n) ←→ X(z).
 In practice, we do not usually compute the inverse z transform by directly
using the formula from above. Instead, we resort to other means (to be
discussed later).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 524
Bilateral and Unilateral z Transform

 Two different versions of the z transform are commonly used:


1 the bilateral (or two-sided) z transform; and

2 the unilateral (or one-sided) z transform.

 The unilateral z transform is most frequently used to solve systems of


linear difference equations with nonzero initial conditions.
 As it turns out, the only difference between the definitions of the bilateral
and unilateral z transforms is in the lower limit of summation.
 In the bilateral case, the lower limit is −∞, whereas in the unilateral case,
the lower limit is 0.
 For the most part, we will focus our attention primarily on the bilateral z
transform.
 We will, however, briefly introduce the unilateral z transform as a tool for
solving difference equations.
 Unless otherwise noted, all subsequent references to the z transform
should be understood to mean bilateral z transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 525
Relationship Between Z and Fourier Transforms
 Let X and XF denote the z and (DT) Fourier transforms of x, respectively.
 The function X(z) evaluated at z = e jΩ (where Ω is real) yields XF (Ω).
That is,
X(e jΩ ) = XF (Ω).
 Due to the preceding relationship, the Fourier transform of x is sometimes
written as X(e jΩ ).
 The function X(z) evaluated at an arbitrary complex value z = re jΩ (where
r = |z| and Ω = arg z) can also be expressed in terms of a Fourier
transform involving x. In particular, we have
X(re^{jΩ}) = X_F′(Ω),
where X_F′ is the (DT) Fourier transform of x′(n) = r^{−n} x(n).
 So, in general, the z transform of x is the Fourier transform of an
exponentially-weighted version of x.
 Due to this weighting, the z transform of a sequence may exist when the
Fourier transform of the same sequence does not.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 526
z Transform Examples

T HIS SLIDE IS INTENTIONALLY LEFT BLANK .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 527
Section 12.2

Region of Convergence (ROC)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 528
Disk

 A disk with center 0 and radius r is the set of all complex numbers z
satisfying

|z| < r,

where r is a real constant and r > 0.

[Figure: the disk |z| < r centered at 0 in the complex plane.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 529
Annulus

 An annulus with center 0, inner radius r0 , and outer radius r1 is the set of
all complex numbers z satisfying

r0 < |z| < r1 ,

where r0 and r1 are real constants and 0 < r0 < r1 .

[Figure: the annulus r0 < |z| < r1 centered at 0 in the complex plane.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 530
Circle Exterior

 The exterior of a circle with center 0 and radius r is the set of all complex
numbers z satisfying

|z| > r,

where r is a real constant and r > 0.

[Figure: the exterior |z| > r of a circle centered at 0 in the complex plane.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 531
Example: Set Intersection
[Figures: a region R1 with boundary radius 3/4 (including ∞), a region R2 with boundary radius 5/4, and their intersection R1 ∩ R2, which has boundary radii 3/4 and 5/4.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 532
Example: Scalar Multiple of a Set

[Figures: an annular region R with boundary radii 1 and 2, and the scaled region 2R with boundary radii 2 and 4.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 533
Example: Reciprocal of a Set

[Figures: a region R with boundary radius 3/4 (including ∞) and its reciprocal R^{−1} with boundary radius 4/3.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 534
Region of Convergence (ROC)

 As we saw earlier, for a sequence x, the complete specification of its z


transform X requires not only an algebraic expression for X , but also the
ROC associated with X .
 Two very different sequences can have the same algebraic expressions
for X .
 Now, we examine some of the constraints on the ROC (of the z transform)
for various classes of sequences.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 535
Property 1: General Form

 The ROC of a z transform consists of concentric circles centered at 0 in


the complex plane.
 That is, if a point z0 is in the ROC, then the circle centered at 0 passing
through z0 (i.e., |z| = |z0 |) is also in the ROC.
 Some examples of sets that would be either valid or invalid as ROCs are
shown below.
[Figures: three example regions; the first two are valid as ROCs and the third is invalid.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 536
Property 2: Rational z Transforms

 If a z transform X is a rational function, then the ROC of X does not


contain any poles and is bounded by poles or extends to infinity.
 Some examples of sets that would be either valid or invalid as ROCs of
rational z transforms are shown below.
[Figures: three example regions; the first two are valid as ROCs of rational z transforms and the third is invalid.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 537
Property 3: Finite-Duration Sequences

 If a sequence x is finite duration and its z transform X converges for at
least one point, then X converges for all points in the complex plane, except
possibly 0 and/or ∞.
 Some examples of sets that would be either valid or invalid as ROCs for
X , if x is finite duration, are shown below.
[Figures: three example regions; the first is valid as such an ROC and the other two are invalid.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 538
Property 4: Right-Sided Sequences

 If a sequence x is right sided and the circle |z| = r0 is in the ROC of


X = Zx, then all (finite) values of z for which |z| > r0 will also be in the
ROC of X (i.e., the ROC contains the exterior of a circle centered at 0,
possibly including ∞).
 Thus, if x is right sided but not left sided, the ROC of X is the exterior of
a circle centered at 0, possibly including ∞.
 Examples of sets that would be either valid or invalid as ROCs for X , if x is
right sided but not left sided, are shown below.
[Figures: three example regions; the first is valid as such an ROC and the other two are invalid.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 539
Property 5: Left-Sided Sequences

 If a sequence x is left sided and the circle |z| = r0 is in the ROC of


X = Zx, then all values of z for which 0 < |z| < r0 will also be in the ROC
of X (i.e., the ROC contains a disk centered at 0, possibly excluding 0).
 Thus, if x is left sided but not right sided, the ROC of X is a disk centered
at 0, possibly excluding 0.
 Examples of sets that would be either valid or invalid as ROCs for X , if x is
left sided but not right sided, are shown below.
[Figures: three example regions; the first is valid as such an ROC and the other two are invalid.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 540
Property 6: Two-Sided Sequences

 If a sequence x is two sided and the circle |z| = r0 is in the ROC of


X = Zx, then the ROC of X will consist of a ring that contains this circle
(i.e., the ROC is an annulus centered at 0).
 Examples of sets that would be either valid or invalid as ROCs for X , if x is
two sided, are shown below.
[Figures: three example regions; the first is valid as such an ROC and the other two are invalid.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 541
Property 7: More on Rational z Transforms
 If a sequence x has a rational z transform X (with at least one pole), then:
1 If x is right sided, then the ROC of X is the region outside the circle of

radius equal to the largest magnitude of the poles of X (i.e., outside the
outermost pole), possibly including ∞.
2 If x is left sided, then the ROC of X is the region inside the circle of radius

equal to the smallest magnitude of the nonzero poles of X and extending


inward to, and possibly including, 0 (i.e., inside the innermost nonzero
pole).
 This property is implied by properties 1, 2, 4, and 5.
 Some examples of sets that would be either valid or invalid as ROCs for
X , if X is rational and x is left/right sided, are given below.
[Figures: four example regions; the first and third are valid as such ROCs and the second and fourth are invalid.]


Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 542
General Form of the ROC
 To summarize the results of properties 3, 4, 5, and 6, if the z transform X
of the sequence x exists, the ROC of X depends on the left- and
right-sidedness of x as follows:
x
left sided right sided ROC of X
yes yes everywhere, except possibly 0 and/or ∞
no yes exterior of circle centered at 0, possibly including ∞
yes no disk centered at 0, possibly excluding 0
no no annulus centered at 0

 Thus, we can infer that, if X exists, the ROC can only be of one of the
forms listed above.
 For example, the sets shown below would not be valid as ROCs.
[Figures: two example regions, neither of which is valid as an ROC.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 543
Section 12.3

Properties of the z Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 544
Properties of the z Transform

Property         Time Domain              Z Domain                                     ROC
Linearity        a1 x1(n) + a2 x2(n)      a1 X1(z) + a2 X2(z)                          At least R1 ∩ R2
Translation      x(n − n0)                z^{−n0} X(z)                                 R, except possible addition/deletion of 0 or ∞
Modulation       a^n x(n)                 X(a^{−1} z)                                  |a| R
Conjugation      x*(n)                    X*(z*)                                       R
Time Reversal    x(−n)                    X(1/z)                                       R^{−1}
Upsampling       (↑ M)x(n)                X(z^M)                                       R^{1/M}
Downsampling     (↓ M)x(n)                (1/M) ∑_{k=0}^{M−1} X(e^{−j2πk/M} z^{1/M})   R^M
Convolution      x1 ∗ x2(n)               X1(z) X2(z)                                  At least R1 ∩ R2
Z-Domain Diff.   n x(n)                   −z (d/dz) X(z)                               R
Differencing     x(n) − x(n − 1)          (1 − z^{−1}) X(z)                            At least R ∩ |z| > 0
Accumulation     ∑_{k=−∞}^{n} x(k)        (z/(z − 1)) X(z)                             At least R ∩ |z| > 1

Property
Initial Value Theorem   x(0) = lim_{z→∞} X(z)
Final Value Theorem     lim_{n→∞} x(n) = lim_{z→1} [(z − 1) X(z)]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 545
z Transform Pairs

Pair   x(n)                                           X(z)                                   ROC
1      δ(n)                                           1                                      All z
2      u(n)                                           z/(z − 1) = 1/(1 − z^{−1})             |z| > 1
3      −u(−n − 1)                                     z/(z − 1) = 1/(1 − z^{−1})             |z| < 1
4      n u(n)                                         z/(z − 1)² = z^{−1}/(1 − z^{−1})²      |z| > 1
5      −n u(−n − 1)                                   z/(z − 1)² = z^{−1}/(1 − z^{−1})²      |z| < 1
6      a^n u(n)                                       z/(z − a) = 1/(1 − az^{−1})            |z| > |a|
7      −a^n u(−n − 1)                                 z/(z − a) = 1/(1 − az^{−1})            |z| < |a|
8      n a^n u(n)                                     az/(z − a)² = az^{−1}/(1 − az^{−1})²   |z| > |a|
9      −n a^n u(−n − 1)                               az/(z − a)² = az^{−1}/(1 − az^{−1})²   |z| < |a|
10     [(n+1)(n+2)···(n+m−1)/(m−1)!] a^n u(n)         z^m/(z − a)^m = 1/(1 − az^{−1})^m      |z| > |a|
11     −[(n+1)(n+2)···(n+m−1)/(m−1)!] a^n u(−n − 1)   z^m/(z − a)^m = 1/(1 − az^{−1})^m      |z| < |a|

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 546
z Transform Pairs (Continued)

Pair   x(n)                     X(z)                                                                                                  ROC
12     cos(Ω0 n) u(n)           z(z − cos Ω0)/(z² − 2z cos Ω0 + 1) = [1 − (cos Ω0)z^{−1}]/[1 − (2 cos Ω0)z^{−1} + z^{−2}]             |z| > 1
13     −cos(Ω0 n) u(−n − 1)     z(z − cos Ω0)/(z² − 2z cos Ω0 + 1) = [1 − (cos Ω0)z^{−1}]/[1 − (2 cos Ω0)z^{−1} + z^{−2}]             |z| < 1
14     sin(Ω0 n) u(n)           z sin Ω0/(z² − 2z cos Ω0 + 1) = (sin Ω0)z^{−1}/[1 − (2 cos Ω0)z^{−1} + z^{−2}]                        |z| > 1
15     −sin(Ω0 n) u(−n − 1)     z sin Ω0/(z² − 2z cos Ω0 + 1) = (sin Ω0)z^{−1}/[1 − (2 cos Ω0)z^{−1} + z^{−2}]                        |z| < 1
16     a^n cos(Ω0 n) u(n)       z(z − a cos Ω0)/(z² − 2az cos Ω0 + a²) = [1 − (a cos Ω0)z^{−1}]/[1 − (2a cos Ω0)z^{−1} + a²z^{−2}]    |z| > |a|
17     a^n sin(Ω0 n) u(n)       az sin Ω0/(z² − 2az cos Ω0 + a²) = (a sin Ω0)z^{−1}/[1 − (2a cos Ω0)z^{−1} + a²z^{−2}]                |z| > |a|
18     u(n) − u(n − M), M > 0   z(1 − z^{−M})/(z − 1) = (1 − z^{−M})/(1 − z^{−1})                                                     |z| > 0
19     a^{|n|}, |a| < 1         (a − a^{−1})z/[(z − a)(z − a^{−1})]                                                                   |a| < |z| < |a|^{−1}

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 547
Linearity

ZT ZT
 If x1 (n) ←→ X1 (z) with ROC R1 and x2 (n) ←→ X2 (z) with ROC R2 , then
ZT
a1 x1 (n) + a2 x2 (n) ←→ a1 X1 (z) + a2 X2 (z) with ROC R containing R1 ∩ R2 ,

where a1 and a2 are arbitrary complex constants.


 This is known as the linearity property of the z transform.
 The ROC always contains the intersection but could be larger (in the case
that pole-zero cancellation occurs).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 548
Translation (Time Shifting)

 If x(n) ←ZT→ X(z) with ROC R, then

x(n − n0) ←ZT→ z^{−n0} X(z) with ROC R′,

where n0 is an integer constant and R′ is the same as R except for the
possible addition or deletion of zero or infinity.
 This is known as the translation (or time-shifting) property of the z
transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 549
Z-Domain Scaling
ZT
 If x(n) ←→ X(z) with ROC R, then

an x(n) ←→ X(z/a) with ROC |a| R,


ZT

where a is a nonzero constant.


 This is known as the z-domain scaling property of the z transform.
 As illustrated below, the ROC R is scaled by |a|.
[Figures: an annular ROC R with radii r0 and r1, and the scaled ROC |a|R with radii |a|r0 and |a|r1.]
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 550
Time Reversal

ZT
 If x(n) ←→ X(z) with ROC R, then
ZT
x(−n) ←→ X(1/z) with ROC 1/R.
 This is known as the time-reversal property of the z transform.
 As illustrated below, the ROC R is reciprocated.
[Figures: an annular ROC R with radii r0 and r1, and the reciprocated ROC 1/R with radii 1/r1 and 1/r0.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 551
Upsampling

 Define (↑ M)x(n) as
(↑ M)x(n) = x(n/M) if n/M is an integer, and (↑ M)x(n) = 0 otherwise.
ZT
 If x(n) ←→ X(z) with ROC R, then

(↑ M)x(n) ←→ X(zM ) with ROC R1/M .


ZT

 This is known as the upsampling (or time-expansion) property of the z


transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 552
Downsampling

 If x(n) ←ZT→ X(z) with ROC R, then

(↓ M)x(n) ←ZT→ (1/M) ∑_{k=0}^{M−1} X(e^{−j2πk/M} z^{1/M}) with ROC R^M.

 This is known as the downsampling property of the z transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 553
Conjugation

ZT
 If x(n) ←→ X(z) with ROC R, then

x∗ (n) ←→ X ∗ (z∗ ) with ROC R.


ZT

 This is known as the conjugation property of the z transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 554
Convolution

ZT ZT
 If x1 (n) ←→ X1 (z) with ROC R1 and x2 (n) ←→ X2 (z) with ROC R2 , then
ZT
x1 ∗ x2 (n) ←→ X1 (z)X2 (z) with ROC containing R1 ∩ R2 .
 This is known as the convolution (or time-domain convolution)
property of the z transform.
 The ROC always contains the intersection but can be larger than the
intersection (if pole-zero cancellation occurs).
 Convolution in the time domain becomes multiplication in the z domain.
 This can make dealing with LTI systems much easier in the z domain than
in the time domain.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 555
Z-Domain Differentiation

 If x(n) ←ZT→ X(z) with ROC R, then

nx(n) ←ZT→ −z (d/dz) X(z) with ROC R.
 This is known as the z-domain differentiation property of the z
transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 556
Differencing

ZT
 If x(n) ←→ X(z) with ROC R, then

x(n) − x(n − 1) ←→ (1 − z−1 )X(z) for ROC containing R ∩ |z| > 0.


ZT

 This is known as the differencing property of the z transform.


 Differencing in the time domain becomes multiplication by 1 − z−1 in the z
domain.
 This can make dealing with difference equations much easier in the z
domain than in the time domain.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 557
Accumulation

 If x(n) ←ZT→ X(z) with ROC R, then

∑_{k=−∞}^{n} x(k) ←ZT→ (z/(z − 1)) X(z) for ROC containing R ∩ |z| > 1.

 This is known as the accumulation property of the z transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 558
Initial Value Theorem

 For a sequence x with z transform X , if x is causal, then

x(0) = lim_{z→∞} X(z).

 This result is known as the initial-value theorem.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 559
Final Value Theorem

 For a sequence x with z transform X , if x is causal and limn→∞ x(n) exists,


then

lim_{n→∞} x(n) = lim_{z→1} [(z − 1) X(z)].

 This result is known as the final-value theorem.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 560
More z Transform Examples

T HIS SLIDE IS INTENTIONALLY LEFT BLANK .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 561
Section 12.4

Determination of Inverse z Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 562
Finding the Inverse z Transform

 Recall that the inverse z transform x of X is given by


x(n) = (1/(2πj)) ∮_Γ X(z) z^{n−1} dz,

where Γ is a counterclockwise closed circular contour centered at the


origin and with radius r such that Γ is in the ROC of X .
 Unfortunately, the above contour integration can often be quite tedious to
compute.
 Consequently, we do not usually compute the inverse z transform directly
using the above equation.
 For rational functions, the inverse z transform can be more easily
computed using partial fraction expansions.
 Using a partial fraction expansion, we can express a rational function as a
sum of lower-order rational functions whose inverse z transforms can
typically be found in tables.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 563
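 For rational z transforms expressed in powers of z⁻¹, scipy.signal.residuez computes such a partial fraction expansion numerically, after which the inverse z transform can be read off from a table. A minimal sketch (the particular X(z) is an arbitrary illustrative choice, and a right-sided ROC is assumed so that each term inverts to a causal sequence):

    import numpy as np
    from scipy.signal import residuez

    # illustrative X(z) = 1 / [(1 - 0.5 z^-1)(1 - 0.25 z^-1)], coefficients in powers of z^-1
    b = [1.0]
    a = np.convolve([1.0, -0.5], [1.0, -0.25])   # [1, -0.75, 0.125]

    r, p, k = residuez(b, a)
    print(list(zip(r, p)))   # the residue/pole pairs (2, 0.5) and (-1, 0.25), in some order
    print(k)                 # no direct polynomial term here

    # Assuming the ROC |z| > 0.5, each term r/(1 - p z^-1) inverts to r p^n u(n), so
    # x(n) = 2 (0.5)^n u(n) - (0.25)^n u(n).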
Section 12.5

z Transform and LTI Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 564
System Function of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the z transforms of x, y, and h, respectively.
 Since y(n) = x ∗ h(n), the system is characterized in the z domain by

Y (z) = X(z)H(z).
 As a matter of terminology, we refer to H as the system function (or
transfer function) of the system (i.e., the system function is the z
transform of the impulse response).
 When viewed in the z domain, a LTI system forms its output by multiplying
its input with its system function.
 A LTI system is completely characterized by its system function H .
 If the ROC of H includes the unit circle |z| = 1, then H(e jΩ ) is the
frequency response of the LTI system.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 565
Block Diagram Representation of LTI Systems

 Consider a LTI system with input x, output y, and impulse response h, and
let X , Y , and H denote the z transforms of x, y, and h, respectively.
 Often, it is convenient to represent such a system in block diagram form in
the z domain as shown below.
[Block diagram: input X → system H → output Y.]

 Since a LTI system is completely characterized by its system function, we


typically label the system with this quantity.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 566
Interconnection of LTI Systems

 The series interconnection of the LTI systems with system functions H1


and H2 is the LTI system with system function H = H1 H2 . That is, we
have the equivalences shown below.
[Diagrams: X → H1 → H2 → Y ≡ X → H1H2 → Y, and X → H1 → H2 → Y ≡ X → H2 → H1 → Y.]

 The parallel interconnection of the LTI systems with system functions
H1 and H2 is a LTI system with the system function H = H1 + H2 . That is,
we have the equivalence shown below.
[Diagram: X applied to H1 and H2 in parallel with the outputs summed to give Y ≡ X → (H1 + H2) → Y.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 567
Causality

 If a LTI system is causal, its impulse response is causal, and therefore


right sided. From this, we have the result below.
 Theorem. A LTI system is causal if and only if the ROC of the system
function is:
1 the exterior of a circle, including ∞; or

2 the entire complex plane, including ∞ and possibly excluding 0.

 Theorem. A LTI system with a rational system function H is causal if and


only if:
1 the ROC of H is the exterior of a (possibly degenerate) circle outside the

outermost pole of H or, if H has no poles, the entire complex plane; and
2 H is proper (i.e., when H(z) is expressed as a ratio of polynomials in z, the

order of the numerator polynomial does not exceed the order of the
denominator polynomial).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 568
BIBO Stability

 Whether or not a system is BIBO stable depends on the ROC of its


system function.
 Theorem. A LTI system is BIBO stable if and only if the ROC of its
system function contains the unit circle (i.e., |z| = 1).
 Theorem. A causal LTI system with a rational system function H is BIBO
stable if and only if all of the poles of H lie inside the unit circle (i.e., each
of the poles has a magnitude less than one).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 569
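 For a causal system with a rational system function given by its coefficients, the second theorem can be checked numerically by finding the poles and testing their magnitudes. A minimal sketch (the coefficients are an arbitrary illustrative choice):

    import numpy as np

    # illustrative causal system with H(z) = (1 + z^-1) / (1 - 1.5 z^-1 + 0.56 z^-2)
    den = [1.0, -1.5, 0.56]      # denominator coefficients (powers of z^-1)

    # clearing the powers of z^-1 gives the polynomial z^2 - 1.5 z + 0.56,
    # whose roots are the (finite) poles of H
    poles = np.roots(den)
    print(poles)                          # 0.8 and 0.7
    print(np.all(np.abs(poles) < 1))      # True, so this causal system is BIBO stable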
Invertibility

 A LTI system H with system function H is invertible if and only if there


exists another LTI system with system function Hinv such that

H(z)Hinv (z) = 1,

in which case Hinv is the system function of H−1 and

Hinv(z) = 1/H(z).
 Since distinct systems can have identical system functions (but with
differing ROCs), the inverse of a LTI system is not necessarily unique.
 In practice, however, we often desire a stable and/or causal system. So,
although multiple inverse systems may exist, we are frequently only
interested in one specific choice of inverse system (due to these
additional constraints of stability and/or causality).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 570
LTI Systems and Difference Equations

 Many LTI systems of practical interest can be represented using an


Nth-order linear difference equation with constant coefficients.
 Consider a system with input x and output y that is characterized by an
equation of the form
∑_{k=0}^{N} b_k y(n − k) = ∑_{k=0}^{M} a_k x(n − k) where M ≤ N.

 Let h denote the impulse response of the system, and let X , Y , and H
denote the z transforms of x, y, and h, respectively.
 One can show that H(z) is given by

H(z) = Y(z)/X(z) = [∑_{k=0}^{M} a_k z^{−k}] / [∑_{k=0}^{N} b_k z^{−k}].
 Observe that, for a system of the form considered above, the system
function is always rational.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 571
Section 12.6

Application: Analysis of Control Systems

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 572
Feedback Control Systems

[Block diagram: the reference input and the feedback signal are differenced to form the error; the error drives the controller, which drives the plant to produce the output; a sensor measures the output and produces the feedback signal.]
 input: desired value of the quantity to be controlled
 output: actual value of the quantity to be controlled
 error: difference between the desired and actual values
 plant: system to be controlled
 sensor: device used to measure the actual output
 controller: device that monitors the error and changes the input of the
plant with the goal of forcing the error to zero

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 573
Stability Analysis of Feedback Control Systems

 Often, we want to ensure that a system is BIBO stable.


 The BIBO stability property is more easily characterized in the z domain
than in the time domain.
 Therefore, the z domain is extremely useful for the stability analysis of
systems.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 574
Section 12.7

Unilateral z Transform

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 575
Unilateral z Transform

 The unilateral z transform of the sequence x, denoted Zu x or X , is


defined as

Zu x(z) = X(z) = ∑_{n=0}^{∞} x(n) z^{−n}.

 The unilateral z transform is related to the bilateral z transform as follows:


Zu x(z) = ∑_{n=0}^{∞} x(n) z^{−n} = ∑_{n=−∞}^{∞} x(n) u(n) z^{−n} = Z{xu}(z).

 In other words, the unilateral z transform of the sequence x is simply the


bilateral z transform of the sequence xu.
 Since Zu x = Z{xu} and xu is always a right-sided sequence, the ROC
associated with Zu x is always the exterior of a circle.
 For this reason, we often do not explicitly indicate the ROC when
working with the unilateral z transform.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 576
Unilateral z Transform (Continued 1)

 With the unilateral z transform, the same inverse transform equation is


used as in the bilateral case.
 The unilateral z transform is only invertible for causal sequences. In
particular, we have

Zu^{−1}{Zu{x}}(n) = Zu^{−1}{Z{xu}}(n)
= Z^{−1}{Z{xu}}(n)
= x(n) u(n)
= x(n) for n ≥ 0, and 0 otherwise.

 For a noncausal sequence x, we can only recover x(n) for n ≥ 0.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 577
Unilateral z Transform (Continued 2)

 Due to the close relationship between the unilateral and bilateral z


transforms, these two transforms have some similarities in their properties.
 Since these two transforms are not identical, however, their properties
differ in some cases, often in subtle ways.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 578
Properties of the Unilateral z Transform

Property         Time Domain                          Z Domain
Linearity        a1 x1(n) + a2 x2(n)                  a1 X1(z) + a2 X2(z)
Time Delay       x(n − 1)                             z^{−1} X(z) + x(−1)
Time Advance     x(n + 1)                             zX(z) − zx(0)
Modulation       a^n x(n)                             X(a^{−1} z)
                 e^{jΩ0 n} x(n)                       X(e^{−jΩ0} z)
Conjugation      x*(n)                                X*(z*)
Upsampling       (↑ M)x(n)                            X(z^M)
Downsampling     (↓ M)x(n)                            (1/M) ∑_{k=0}^{M−1} X(e^{−j2πk/M} z^{1/M})
Convolution      x1 ∗ x2(n), x1 and x2 are causal     X1(z) X2(z)
Z-Domain Diff.   n x(n)                               −z (d/dz) X(z)
Differencing     x(n) − x(n − 1)                      (1 − z^{−1}) X(z) − x(−1)
Accumulation     ∑_{k=0}^{n} x(k)                     [1/(1 − z^{−1})] X(z)

Property
Initial Value Theorem   x(0) = lim_{z→∞} X(z)
Final Value Theorem     lim_{n→∞} x(n) = lim_{z→1} [(z − 1) X(z)]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 579
Unilateral z Transform Pairs

Pair   x(n), n ≥ 0         X(z)
1      δ(n)                1
2      1                   z/(z − 1)
3      n                   z/(z − 1)²
4      a^n                 z/(z − a)
5      n a^n               az/(z − a)²
6      cos(Ω0 n)           z(z − cos Ω0)/(z² − 2(cos Ω0)z + 1)
7      sin(Ω0 n)           z sin Ω0/(z² − 2(cos Ω0)z + 1)
8      |a|^n cos(Ω0 n)     z(z − |a| cos Ω0)/(z² − 2|a|(cos Ω0)z + |a|²)
9      |a|^n sin(Ω0 n)     z|a| sin Ω0/(z² − 2|a|(cos Ω0)z + |a|²)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 580
Solving Difference Equations [Using the Unilateral z Transform]

 Many systems of interest in engineering applications can be characterized


by constant-coefficient linear difference equations.
 One common use of the unilateral z transform is in solving
constant-coefficient linear difference equations with nonzero initial
conditions.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 581
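 As a concrete illustration (this example is not from the slides), consider y(n) − (1/2)y(n − 1) = x(n) with x(n) = u(n) and initial condition y(−1) = 1. Applying the unilateral z transform with the time-delay property and inverting gives y(n) = 2 − (1/2)^{n+1} for n ≥ 0. The sketch below checks this closed form against direct recursion:

    # verify the closed-form solution obtained with the unilateral z transform
    N = 20
    y_prev = 1.0                       # initial condition y(-1) = 1

    recursive = []
    for n in range(N):
        y = 1.0 + 0.5 * y_prev         # y(n) = x(n) + 0.5 y(n-1), with x(n) = 1 for n >= 0
        recursive.append(y)
        y_prev = y

    closed_form = [2.0 - 0.5 ** (n + 1) for n in range(N)]

    print(all(abs(a - b) < 1e-12 for a, b in zip(recursive, closed_form)))   # True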
Part 13

Complex Analysis

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 582
Complex Numbers

 A complex number is a number of the form z = x + jy, where x and y are
real numbers and j is the constant defined by j² = −1 (i.e., j = √−1).
 The Cartesian form of the complex number z expresses z in the form

z = x + jy,

where x and y are real numbers. The quantities x and y are called the real
part and imaginary part of z, and are denoted as Re z and Im z,
respectively.
 The polar form of the complex number z expresses z in the form

z = r(cos θ + j sin θ) or equivalently z = re jθ ,

where r and θ are real numbers and r ≥ 0. The quantities r and θ are
called the magnitude and argument of z, and are denoted as |z| and
arg z, respectively. [Note: e jθ = cos θ + j sin θ.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 583
Complex Numbers (Continued)

 Since e jθ = e j(θ+2πk) for all real θ and all integer k, the argument of a
complex number is only uniquely determined to within an additive multiple
of 2π.
 The principal argument of a complex number z, denoted Arg z, is the
particular value θ of arg z that satisfies −π < θ ≤ π.
 The principal argument of a complex number (excluding zero) is unique.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 584
Geometric Interpretation of Cartesian and Polar Forms

[Figures: a point z in the complex plane, labeled with its Cartesian coordinates (x, y) and with its polar coordinates (r, θ).]
Cartesian form: z = x + jy, where x = Re z and y = Im z.
Polar form: z = r(cos θ + j sin θ) = re^{jθ}, where r = |z| and θ = arg z.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 585
The arctan Function

 The range of the arctan function is −π/2 (exclusive) to π/2 (exclusive).


 Consequently, the arctan function always yields an angle in either the first
or fourth quadrant.

[Figures: the vector to the point (1, 1), whose angle is arctan(1/1), and the vector to the point (−1, −1), whose angle is π + arctan(−1/−1); the arctan function alone returns the same value in both cases.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 586
The atan2 Function
 The angle θ that a vector from the origin to the point (x, y) makes with the
positive x axis is given by θ = atan2(y, x), where


atan2(y, x) ≜ arctan(y/x) if x > 0;
π/2 if x = 0 and y > 0;
−π/2 if x = 0 and y < 0;
arctan(y/x) + π if x < 0 and y ≥ 0; and
arctan(y/x) − π if x < 0 and y < 0.
 The range of the atan2 function is from −π (exclusive) to π (inclusive).
 For the complex number z expressed in Cartesian form x + jy,
Arg z = atan2(y, x).
 Although the atan2 function is quite useful for computing the principal
argument (or argument) of a complex number, it is not advisable to
memorize the definition of this function. It is better to simply understand
what this function is doing (namely, intelligently applying the arctan
function).
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 587
Conversion Between Cartesian and Polar Form

 Let z be a complex number with the Cartesian and polar form


representations given respectively by

z = x + jy and z = re jθ .
 To convert from polar to Cartesian form, we use the following identities:

x = r cos θ and y = r sin θ.


 To convert from Cartesian to polar form, we use the following identities:
r = √(x² + y²) and θ = atan2(y, x) + 2πk,

where k is an arbitrary integer.


 Since the atan2 function simply amounts to the intelligent application of
the arctan function, instead of memorizing the definition of the atan2
function, one should simply understand how to use the arctan function to
achieve the same result.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 588
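 These conversions are directly available in Python's standard library. A minimal sketch using the cmath module (the particular number is an arbitrary illustrative choice):

    import cmath

    z = 3.0 + 4.0j                 # illustrative complex number

    # Cartesian to polar: r = |z| and theta = Arg z (computed internally via atan2)
    r, theta = cmath.polar(z)      # r = 5.0, theta = atan2(4, 3), approximately 0.9273

    # polar to Cartesian: x = r cos(theta), y = r sin(theta)
    w = cmath.rect(r, theta)
    print(w)                       # approximately 3 + 4j (up to rounding error)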
Properties of Complex Numbers

 For complex numbers, addition and multiplication are commutative. That


is, for any two complex numbers z1 and z2 ,

z1 + z2 = z2 + z1 and
z1 z2 = z2 z1 .
 For complex numbers, addition and multiplication are associative. That is,
for any three complex numbers z1 , z2 , and z3 ,

(z1 + z2 ) + z3 = z1 + (z2 + z3 ) and


(z1 z2 )z3 = z1 (z2 z3 ).
 For complex numbers, the distributive property holds. That is, for any
three complex numbers z1 , z2 , and z3 ,

z1 (z2 + z3 ) = z1 z2 + z1 z3 .

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 589
Conjugation

 The conjugate of the complex number z = x + jy is denoted as z∗ and


defined as

z∗ = x − jy.
 Geometrically, the conjugation operation reflects a point in the complex
plane about the real axis.
 The geometric interpretation of the conjugate is illustrated below.
[Figure: the point z = x + jy and its conjugate z* = x − jy in the complex plane, mirror images about the real axis.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 590
Properties of Conjugation

 For every complex number z, the following identities hold:

|z*| = |z|,
arg z* = −arg z,
zz* = |z|²,
Re z = (1/2)(z + z*), and
Im z = (1/(2j))(z − z*).

 For all complex numbers z1 and z2 , the following identities hold:

(z1 + z2)* = z1* + z2*,
(z1 z2)* = z1* z2*, and
(z1/z2)* = z1*/z2*.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 591
Addition
 Cartesian form: Let z1 = x1 + jy1 and z2 = x2 + jy2 . Then,
z1 + z2 = (x1 + jy1 ) + (x2 + jy2 )
= (x1 + x2 ) + j(y1 + y2 ).
 That is, to add complex numbers expressed in Cartesian form, we simply
add their real parts and add their imaginary parts.
 Polar form: Let z1 = r1 e jθ1 and z2 = r2 e jθ2 . Then,
z1 + z2 = r1 e jθ1 + r2 e jθ2
= (r1 cos θ1 + jr1 sin θ1 ) + (r2 cos θ2 + jr2 sin θ2 )
= (r1 cos θ1 + r2 cos θ2 ) + j(r1 sin θ1 + r2 sin θ2 ).
 That is, to add complex numbers expressed in polar form, we first rewrite
them in Cartesian form, and then add their real parts and add their
imaginary parts.
 For the purposes of addition, it is easier to work with complex numbers
expressed in Cartesian form.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 592
Multiplication

 Cartesian form: Let z1 = x1 + jy1 and z2 = x2 + jy2 . Then,

z1 z2 = (x1 + jy1 )(x2 + jy2 )


= x1 x2 + jx1 y2 + jx2 y1 − y1 y2
= (x1 x2 − y1 y2 ) + j(x1 y2 + x2 y1 ).
 That is, to multiply two complex numbers expressed in Cartesian form, we
use the distributive law along with the fact that j2 = −1.
 Polar form: Let z1 = r1 e jθ1 and z2 = r2 e jθ2 . Then,
  
z1 z2 = r1 e jθ1 r2 e jθ2 = r1 r2 e j(θ1 +θ2 ) .

 That is, to multiply two complex numbers expressed in polar form, we use
exponent rules.
 For the purposes of multiplication, it is easier to work with complex
numbers expressed in polar form.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 593
Division
 Cartesian form: Let z1 = x1 + jy1 and z2 = x2 + jy2 . Then,
z1/z2 = (z1 z2*)/(z2 z2*) = (z1 z2*)/|z2|² = [(x1 + jy1)(x2 − jy2)]/(x2² + y2²)
= [x1 x2 − jx1 y2 + jx2 y1 + y1 y2]/(x2² + y2²) = [x1 x2 + y1 y2 + j(x2 y1 − x1 y2)]/(x2² + y2²).
 That is, to compute the quotient of two complex numbers expressed in
Cartesian form, we convert the problem into one of division by a real
number.
 Polar form: Let z1 = r1 e jθ1 and z2 = r2 e jθ2 . Then,
z1/z2 = (r1 e^{jθ1})/(r2 e^{jθ2}) = (r1/r2) e^{j(θ1 − θ2)}.
 That is, to compute the quotient of two complex numbers expressed in
polar form, we use exponent rules.
 For the purposes of division, it is easier to work with complex numbers
expressed in polar form.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 594
Properties of the Magnitude and Argument

 For any complex numbers z1 and z2 , the following identities hold:

|z1 z2| = |z1| |z2|,
|z1/z2| = |z1|/|z2| for z2 ≠ 0,
arg(z1 z2) = arg z1 + arg z2, and
arg(z1/z2) = arg z1 − arg z2 for z2 ≠ 0.
 The above properties trivially follow from the polar representation of
complex numbers.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 595
Euler’s Relation and De Moivre’s Theorem

 Euler’s relation. For all real θ,

e jθ = cos θ + j sin θ.
 From Euler’s relation, we can deduce the following useful identities:

cos θ = (1/2)(e^{jθ} + e^{−jθ}) and
sin θ = (1/(2j))(e^{jθ} − e^{−jθ}).
 De Moivre’s theorem. For all real θ and all integer n,
e^{jnθ} = (e^{jθ})^n.

[Note: This relationship does not necessarily hold for real n.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 596
Roots of Complex Numbers

 Every complex number z = re jθ (where r = |z| and θ = arg z) has n


distinct nth roots given by

r^{1/n} e^{j(θ+2πk)/n} for k = 0, 1, . . . , n − 1.
 For example, 1 has the two distinct square roots 1 and −1.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 597
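 The formula above translates directly into code. A minimal sketch that computes all n distinct nth roots of a complex number and verifies them (the particular number and n are arbitrary illustrative choices):

    import cmath

    def nth_roots(z, n):
        """Return the n distinct nth roots of the nonzero complex number z."""
        r = abs(z)
        theta = cmath.phase(z)     # any valid value of arg z works here
        return [r ** (1.0 / n) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
                for k in range(n)]

    roots = nth_roots(-8.0 + 0.0j, 3)                       # the three cube roots of -8
    print(roots)                                            # 1 + j*sqrt(3), -2, 1 - j*sqrt(3)
    print([abs(w ** 3 - (-8.0)) < 1e-9 for w in roots])     # all True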
Quadratic Formula

 Consider the equation

az2 + bz + c = 0,

where a, b, and c are real, z is complex, and a ≠ 0.


 The roots of this equation are given by

z = [−b ± √(b² − 4ac)] / (2a).
 This formula is often useful in factoring quadratic polynomials.
 The quadratic az2 + bz + c can be factored as a(z − z0 )(z − z1 ), where
z0 = [−b − √(b² − 4ac)] / (2a) and z1 = [−b + √(b² − 4ac)] / (2a).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 598
Complex Functions

 A complex function maps complex numbers to complex numbers. For


example, the function F(z) = z2 + 2z + 1, where z is complex, is a
complex function.
 A complex polynomial function is a mapping of the form

F(z) = a0 + a1 z + a2 z2 + · · · + an zn ,

where z, a0 , a1 , . . . , an are complex.


 A complex rational function is a mapping of the form

a0 + a1 z + a2 z2 + . . . + an zn
F(z) = ,
b0 + b1 z + b2 z2 + . . . + bm zm
where a0 , a1 , . . . , an , b0 , b1 , . . . , bm and z are complex.
 Observe that a polynomial function is a special case of a rational function.
 Herein, we will mostly focus our attention on polynomial and rational
functions.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 599
Continuity

 A function F is said to be continuous at a point z0 if F(z0 ) is defined and


given by

F(z0) = lim_{z→z0} F(z).

 A function that is continuous at every point in its domain is said to be


continuous.
 Polynomial functions are continuous everywhere.
 Rational functions are continuous everywhere except at points where the
denominator polynomial becomes zero.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 600
Differentiability
 A function F is said to be differentiable at a point z = z0 if the limit
F′(z0) = lim_{z→z0} [F(z) − F(z0)] / (z − z0)

exists. This limit is called the derivative of F at the point z = z0 .


 A function is said to be differentiable if it is differentiable at every point in
its domain.
 The rules for differentiating sums, products, and quotients are the same
for complex functions as for real functions. If F 0 (z0 ) and G0 (z0 ) exist, then
1 (aF)′(z0) = aF′(z0) for any complex constant a;
2 (F + G)′(z0) = F′(z0) + G′(z0);
3 (FG)′(z0) = F′(z0)G(z0) + F(z0)G′(z0);
4 (F/G)′(z0) = [G(z0)F′(z0) − F(z0)G′(z0)] / G(z0)²; and
5 if z0 = G(w0) and G′(w0) exists, then the derivative of F(G(z)) at w0 is
F′(z0)G′(w0) (i.e., the chain rule).
 A polynomial function is differentiable everywhere.
 A rational function is differentiable everywhere except at the points where
its denominator polynomial becomes zero.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 601
Open Disks

 An open disk in the complex plane with center z0 and radius r is the set of
complex numbers z satisfying

|z − z0 | < r,

where r is a strictly positive real number.


 A plot of an open disk is shown below.

[Figure: the open disk |z − z0| < r centered at z0 in the complex plane.]

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 602
Analyticity

 A function is said to be analytic at a point z0 if it is differentiable at every


point in an open disk about z0 .
 A function is said to be analytic if it is analytic at every point in its domain.
 A polynomial function is analytic everywhere.
 A rational function is analytic everywhere, except at the points where its
denominator polynomial becomes zero.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 603
Zeros and Singularities

 If a function F is zero at the point z0 (i.e., F(z0 ) = 0), F is said to have a


zero at z0 .
 If a function F is such that F(z0 ) = 0, F (1) (z0 ) = 0, . . . , F (n−1) (z0 ) = 0
(where F (k) denotes the kth order derivative of F ), F is said to have an
nth order zero at z0 .
 A point at which a function fails to be analytic is called a singularity.
 Polynomials do not have singularities.
 Rational functions can have a type of singularity called a pole.
 If a function F is such that G(z) = 1/F(z) has an nth order zero at z0 , F is
said to have an nth order pole at z0 .
 A pole of first order is said to be simple, whereas a pole of order two or
greater is said to be repeated. A similar terminology can also be applied
to zeros (i.e., simple zero and repeated zero).

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 604
Zeros and Poles of a Rational Function

 Given a rational function F , we can always express F in factored form as

F(z) = K(z − a1)^{α1}(z − a2)^{α2} ··· (z − aM)^{αM} / [(z − b1)^{β1}(z − b2)^{β2} ··· (z − bN)^{βN}],
where K is complex, a1 , a2 , . . . , aM , b1 , b2 , . . . , bN are distinct complex
numbers, and α1 , α2 , . . . , αM and β1 , β2 , . . . , βN are strictly positive
integers.
 One can show that F has poles at b1 , b2 , . . . , bN and zeros at
a1 , a2 , . . . , aM .
 Furthermore, the kth pole (i.e., bk ) is of order βk , and the kth zero (i.e., ak )
is of order αk .
 When plotting zeros and poles in the complex plane, the symbols “o” and
“x” are used to denote zeros and poles, respectively.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 605
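 Numerically, the zeros, poles, and gain factor K of a rational function given by its numerator and denominator coefficients can be obtained with scipy.signal.tf2zpk (the coefficients below are an arbitrary illustrative choice):

    from scipy.signal import tf2zpk

    # illustrative F(z) = (z^2 - 1) / (z^2 - 1.5 z + 0.56)
    num = [1.0, 0.0, -1.0]       # numerator coefficients, highest power of z first
    den = [1.0, -1.5, 0.56]      # denominator coefficients, highest power of z first

    zeros, poles, K = tf2zpk(num, den)
    print(zeros)   # 1 and -1 (two simple zeros)
    print(poles)   # 0.8 and 0.7 (two simple poles)
    print(K)       # the gain factor K = 1.0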
Part 14

Partial Fraction Expansions (PFEs)

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 606
Motivation for PFEs

 Sometimes it is beneficial to be able to express a rational function as a


sum of lower-order rational functions.
 This can be accomplished using a type of decomposition known as a
partial fraction expansion.
 Partial fraction expansions are often useful in the calculation of inverse
Laplace transforms, inverse z transforms, and inverse CT/DT Fourier
transforms.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 607
Strictly-Proper Rational Functions

 Consider a rational function


F(v) = (αm v^m + αm−1 v^(m−1) + . . . + α1 v + α0) / (βn v^n + βn−1 v^(n−1) + . . . + β1 v + β0).
 The function F is said to be strictly proper if m < n (i.e., the order of the
numerator polynomial is strictly less than the order of the denominator
polynomial).
 Through polynomial long division, any rational function can be written as
the sum of a polynomial and a strictly-proper rational function.
 A strictly-proper rational function can be expressed as a sum of
lower-order rational functions, with such an expression being called a
partial fraction expansion.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 608
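As a concrete illustration of the long-division step, the following NumPy sketch (an assumed example, not from the slides) splits F(v) = (v^3 + 2v + 1)/(v^2 + 1), which is not strictly proper, into a polynomial part plus a strictly-proper remainder term.

import numpy as np

num = [1, 0, 2, 1]   # v^3 + 2 v + 1   (coefficients, highest power first)
den = [1, 0, 1]      # v^2 + 1

q, r = np.polydiv(num, den)
print("quotient :", q)   # [1. 0.] -> v
print("remainder:", r)   # [1. 1.] -> v + 1, so F(v) = v + (v + 1)/(v^2 + 1)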
Section 14.1

PFEs for First Form of Rational Functions

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 609
Partial Fraction Expansions (PFEs) [CT and DT Contexts]

 Any rational function F can be expressed in the form of

F(v) = (am v^m + am−1 v^(m−1) + . . . + a0) / (v^n + bn−1 v^(n−1) + . . . + b0).
 Furthermore, the denominator polynomial D(v) = v^n + bn−1 v^(n−1) + . . . + b0
in the above expression for F(v) can be factored to obtain

D(v) = (v − p1)^q1 (v − p2)^q2 · · · (v − pn)^qn ,

where the pk are distinct and the qk are integers.


 If F has only simple poles, q1 = q2 = · · · = qn = 1.
 Suppose that F is strictly proper (i.e., m < n).
 In the determination of a partial fraction expansion of F , there are two
cases to consider:
1 F has only simple poles; and
2 F has at least one repeated pole.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 610
Simple-Pole Case [CT and DT Contexts]

 Suppose that the (rational) function F has only simple poles.


 Then, the denominator polynomial D for F is of the form

D(v) = (v − p1 )(v − p2 ) · · · (v − pn ),
where the pk are distinct.
 In this case, F has a partial fraction expansion of the form
F(v) = A1/(v − p1) + A2/(v − p2) + . . . + An−1/(v − pn−1) + An/(v − pn),
where
Ak = [(v − pk)F(v)]|v=pk .
 Note that the (simple) pole pk contributes a single term to the partial
fraction expansion.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 611
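For a concrete (hypothetical) example of the simple-pole case, SciPy's residue function computes exactly this type of expansion from the numerator and denominator coefficients (highest power first). Here F(v) = 1/(v^2 + 3v + 2) = 1/(v + 1) − 1/(v + 2); the chosen F is an assumption for illustration only.

from scipy.signal import residue

num = [1]            # numerator coefficients
den = [1, 3, 2]      # v^2 + 3 v + 2 = (v + 1)(v + 2)

A, p, direct = residue(num, den)
print(A)        # residues: 1 paired with pole -1, and -1 paired with pole -2 (printed order may vary)
print(p)        # the (simple) poles
print(direct)   # [] -> no polynomial part, since F is strictly proper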
Repeated-Pole Case [CT and DT Contexts]
 Suppose that the (rational) function F has at least one repeated pole.
 In this case, F has a partial fraction expansion of the form
 
F(v) = [A1,1/(v − p1) + A1,2/(v − p1)^2 + . . . + A1,q1/(v − p1)^q1]
+ [A2,1/(v − p2) + . . . + A2,q2/(v − p2)^q2]
+ . . . + [AP,1/(v − pP) + . . . + AP,qP/(v − pP)^qP],
where
Ak,ℓ = [1/(qk − ℓ)!] d^(qk−ℓ)/dv^(qk−ℓ) [(v − pk)^qk F(v)] |v=pk .

 Note that the qk th-order pole pk contributes qk terms to the partial fraction
expansion.
 Note that n! = (n)(n − 1)(n − 2) · · · (1) and 0! = 1.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 612
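The coefficient formula above can be applied mechanically with a computer algebra system. The SymPy sketch below (a hypothetical example, not from the slides) does this for F(v) = 1/[(v + 1)^2 (v + 2)], which has a 2nd-order pole at −1 (q1 = 2) and a simple pole at −2 (q2 = 1).

import sympy as sp

v = sp.symbols('v')
F = 1 / ((v + 1)**2 * (v + 2))

def coeff(F, p, q, l):
    # A_{k,l} = [1/(q - l)!] d^(q-l)/dv^(q-l) [ (v - p)^q F(v) ] evaluated at v = p
    g = sp.diff(sp.cancel((v - p)**q * F), v, q - l)
    return sp.simplify(g.subs(v, p) / sp.factorial(q - l))

print(coeff(F, -1, 2, 1))   # A_{1,1} = -1
print(coeff(F, -1, 2, 2))   # A_{1,2} =  1
print(coeff(F, -2, 1, 1))   # A_{2,1} =  1
# i.e., F(v) = -1/(v+1) + 1/(v+1)^2 + 1/(v+2); sp.apart(F, v) gives the same result.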
Section 14.2

PFEs for Second Form of Rational Functions

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 613
Partial Fraction Expansions (PFEs) [DT Context]
 Any rational function F can be expressed in the form of

F(v) = (am v^m + am−1 v^(m−1) + . . . + a1 v + a0) / (bn v^n + bn−1 v^(n−1) + . . . + b1 v + 1).
 Furthermore, the denominator polynomial D(v) = bn v^n + bn−1 v^(n−1) + . . . + b1 v + 1 in the above expression for F(v) can be factored to obtain
D(v) = (1 − p1^(−1) v)^q1 (1 − p2^(−1) v)^q2 · · · (1 − pn^(−1) v)^qn ,

where the pk are distinct and the qk are integers.


 If F has only simple poles, q1 = q2 = · · · = qn = 1.
 Suppose that F is strictly proper (i.e., m < n).
 In the determination of a partial fraction expansion of F , there are two
cases to consider:
1 F has only simple poles; and
2 F has at least one repeated pole.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 614
Simple-Pole Case [DT Context]

 Suppose that the (rational) function F has only simple poles.


 Then, the denominator polynomial D for F is of the form
D(v) = (1 − p1^(−1) v)(1 − p2^(−1) v) · · · (1 − pn^(−1) v),
where the pk are distinct.
 In this case, F has a partial fraction expansion of the form
F(v) = A1/(1 − p1^(−1) v) + A2/(1 − p2^(−1) v) + . . . + An−1/(1 − pn−1^(−1) v) + An/(1 − pn^(−1) v),
where
Ak = [(1 − pk^(−1) v)F(v)]|v=pk .
 Note that the (simple) pole pk contributes a single term to the partial
fraction expansion.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 615
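The following SymPy sketch (an assumed example, not from the slides) applies the coefficient formula above to F(v) = 1/[(1 − 2v)(1 − 3v)], whose (simple) poles are p1 = 1/2 and p2 = 1/3, so that 1 − p1^(−1) v = 1 − 2v and 1 − p2^(−1) v = 1 − 3v.

import sympy as sp

v = sp.symbols('v')
F = 1 / ((1 - 2*v) * (1 - 3*v))
p1, p2 = sp.Rational(1, 2), sp.Rational(1, 3)

# Ak = (1 - pk^{-1} v) F(v) evaluated at v = pk  (cancel first, then substitute)
A1 = sp.cancel((1 - v/p1) * F).subs(v, p1)   # -2
A2 = sp.cancel((1 - v/p2) * F).subs(v, p2)   #  3
print(A1, A2)
# Check: -2/(1 - 2 v) + 3/(1 - 3 v) recombines to the original F(v).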
Repeated-Pole Case [DT Context]
 Suppose that the (rational) function F has at least one repeated pole.
 In this case, F has a partial fraction expansion of the form
F(v) = [A1,1/(1 − p1^(−1) v) + A1,2/(1 − p1^(−1) v)^2 + . . . + A1,q1/(1 − p1^(−1) v)^q1]
+ [A2,1/(1 − p2^(−1) v) + . . . + A2,q2/(1 − p2^(−1) v)^q2]
+ . . . + [AP,1/(1 − pP^(−1) v) + . . . + AP,qP/(1 − pP^(−1) v)^qP],
where
Ak,ℓ = [(−pk)^(qk−ℓ) / (qk − ℓ)!] d^(qk−ℓ)/dv^(qk−ℓ) [(1 − pk^(−1) v)^qk F(v)] |v=pk .

 Note that the qk th-order pole pk contributes qk terms to the partial fraction
expansion.
 Note that n! = (n)(n − 1)(n − 2) · · · (1) and 0! = 1.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 616
Part 15

Miscellany

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 617
Sum of Arithmetic and Geometric Sequences

 The sum of the arithmetic sequence a, a + d, a + 2d, . . . , a + (n − 1)d is given by
∑_{k=0}^{n−1} (a + kd) = n[2a + d(n − 1)] / 2 .
 The sum of the geometric sequence a, ra, r^2 a, . . . , r^(n−1) a is given by
∑_{k=0}^{n−1} r^k a = a (r^n − 1) / (r − 1)   for r ≠ 1.
 The sum of the infinite geometric sequence a, ra, r^2 a, . . . is given by
∑_{k=0}^{∞} r^k a = a / (1 − r)   for |r| < 1.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 618
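The three formulas above are easy to check numerically. The short Python sketch below (not part of the original slides) compares each closed form against a directly computed sum; the values of a, d, r, and n are arbitrary choices.

a, d, r, n = 2.0, 3.0, 0.5, 10

# Arithmetic sum versus closed form
arith = sum(a + k*d for k in range(n))
assert abs(arith - n*(2*a + d*(n - 1))/2) < 1e-12

# Finite geometric sum versus closed form (r != 1)
geom = sum(a * r**k for k in range(n))
assert abs(geom - a*(r**n - 1)/(r - 1)) < 1e-12

# Infinite geometric sum (|r| < 1): partial sums approach a / (1 - r)
print(sum(a * r**k for k in range(1000)), a / (1 - r))   # both approximately 4.0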
Part 16

Epilogue

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 619
Other Courses Offered by the Author of These Lecture
Slides

 If you did not suffer permanent emotional scarring as a result of using these lecture slides and you happen to be a student at the University of Victoria, you might wish to consider taking another one of the courses developed by the author of these lecture slides:
 ECE 486: Multiresolution Signal and Geometry Processing with C++
 SENG 475: Advanced Programming Techniques for Robust Efficient Computing
 For further information about the above courses (including the URLs for
web sites of these courses), please refer to the slides that follow.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 620
ECE 486/586:
Multiresolution Signal and Geometry Processing with C++

normally offered in Summer (May-August) term; only prerequisite ECE 310
subdivision surfaces and subdivision wavelets
3D computer graphics, animation, gaming (Toy Story, Blender software)
geometric modelling, visualization, computer-aided design
multirate signal processing and wavelet systems
sampling rate conversion (audio processing, video transcoding)
signal compression (JPEG 2000, FBI fingerprint compression)
communication systems (transmultiplexers for CDMA, FDMA, TDMA)
C++ (classes, templates, standard library), OpenGL, GLUT, CGAL
software applications (using C++)
for more information, visit course web page:
http://www.ece.uvic.ca/~mdadams/courses/wavelets

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 621
SENG 475:
Advanced Programming Techniques for Robust Efficient
Computing (With C++)

 advanced programming techniques for robust efficient computing explored in context of C++ programming language
 topics covered may include:
 concurrency, multithreading, transactional memory, parallelism,
vectorization; cache-efficient coding; compile-time versus run-time
computation; compile-time versus run-time polymorphism; generic
programming techniques; resource/memory management; copy and move
semantics; exception-safe coding
 applications areas considered may include:
 geometry processing, computer graphics, signal processing, and numerical
analysis
 open to any student with necessary prerequisites, which are:
 SENG 265 or CENG 255 or CSC 230 or CSC 349A or ECE 255 or
permission of Department
 for more information, see course web site:
http://www.ece.uvic.ca/~mdadams/courses/cpp

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 622
Part 17

References

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 623
Online Resources I

1 Michael Adams. 2020-05 ECE 260 Video Lectures Playlist on YouTube.
https://www.youtube.com/playlist?list=PLbHYdvrWBMxYGMvQ3QG6paNu7CuIRL5dX.
2 Barry Van Veen. All Signal Processing Channel on YouTube.
https://www.youtube.com/user/allsignalprocessing.
3 Iman Moazzen. Signal Processing Hacks With Iman.
http://www.sphackswithiman.com.
4 Iman Moazzen. YouTube Channel for Signal Processing Hacks With Iman.
https://www.youtube.com/channel/UCVkatNMgkEdpWLhH0kBqqLw.
5 Wolfram Alpha Derivative Calculator.
https://www.wolframalpha.com/input/?i=derivative+.
6 Wolfram Alpha Integral Calculator.
https://www.wolframalpha.com/input/?i=integral+.
7 Wolfram Alpha Unilateral Laplace Transform Calculator.
https://www.wolframalpha.com/input/?i=laplace+transform+calculator.
Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 624
Online Resources II

8 Wolfram Alpha Unilateral Z Transform Calculator.
https://www.wolframalpha.com/input/?i=Z+transform+calculator.
9 DSP Stack Exchange. https://dsp.stackexchange.com.
10 Math Stack Exchange. https://math.stackexchange.com.

Copyright © 2013–2020 Michael D. Adams Signals and Systems Edition 3.0 625