
SIGNALS AND SYSTEMS


Theory and Practical Explorations
with Python

Fatos Yarman Vural


Middle East Technical University
Emre Akbas
Middle East Technical University

Copyright © <provide-copyright-year> by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print,
however, may not be available in electronic format.

Library of Congress Cataloging-in-Publication Data:

Title, etc
Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1
To ...

Contents

Contributors xv
Foreword xvii
Preface xix
Acknowledgments xxi
Introduction xxiii

1 Introduction to Systems and Signals 1


1.1 Example applications . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Relationship Between Signals and Systems . . . . . . . . . . . . 6
1.3 Mathematical Representation of Signals and Systems . . . . . . 6
1.3.1 Signals Represented by Functions . . . . . . . . . . . 7
1.3.2 Types of Signals . . . . . . . . . . . . . . . . . . . . . 8
1.3.3 Energy of a Signal . . . . . . . . . . . . . . . . . . . . 12
1.3.4 Power of a Signal . . . . . . . . . . . . . . . . . . . . 13
1.4 Operations on the Time Variable of Signals . . . . . . . . . . . . 14
1.4.1 Time Shift . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.2 Time Reverse . . . . . . . . . . . . . . . . . . . . . . . 16
1.4.3 Time Scale . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.4 Time Scale and Shift . . . . . . . . . . . . . . . . . . 21
1.5 Signals with Symmetry Properties . . . . . . . . . . . . . . . . . 26
1.5.1 Periodic Signals . . . . . . . . . . . . . . . . . . . . . 29
1.5.2 Even and Odd Signals . . . . . . . . . . . . . . . . . . 33
1.6 Complex Signals Represented by Complex Functions . . . . . . 39
1.6.1 Complex Numbers Represented in Cartesian Coor-
dinate System . . . . . . . . . . . . . . . . . . . . . . 40
1.6.2 Complex Numbers Represented in Polar Coordinate
System and Euler’s Number . . . . . . . . . . . . . . 42
1.6.3 Complex Functions . . . . . . . . . . . . . . . . . . . 46
1.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

2 Basic Building Blocks of Signals 59
2.1 LEGO Functions of Signals . . . . . . . . . . . . . . . . . . . . . 59
2.2 King of the Functions: Exponential Function . . . . . . . . . . . 60
2.2.1 Real Exponential Function . . . . . . . . . . . . . . . 61
2.2.2 Complex Exponential Function . . . . . . . . . . . . . 65
2.3 Unit Impulse Function . . . . . . . . . . . . . . . . . . . . . . . . 74
2.3.1 Discrete Time Unit Impulse Function or Dirac-Delta
Function . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.3.2 Continuous-Time Unit Impulse Function . . . . . . . 76
2.3.3 Comparison of Discrete Time and Continuous Time
Unit Impulse Functions . . . . . . . . . . . . . . . . . 78
2.4 Unit Step Function . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.4.1 Discrete Time Unit Step Function . . . . . . . . . . . 79
2.4.2 Relationship Between the Discrete Time Unit Step
and Unit Impulse Functions . . . . . . . . . . . . . . 79
2.4.3 Continuous-Time Unit Step Function . . . . . . . . . 82
2.4.4 Comparison of Discrete Time and Continuous Time
Unit Step functions . . . . . . . . . . . . . . . . . . . 83
2.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

3 Basic Building Blocks and Properties of Systems 95


3.1 Representation of Systems by Equations . . . . . . . . . . . . . . 95
3.2 Interconnection of Basic Systems: Series, Parallel, Hybrid and
Feedback Control Systems . . . . . . . . . . . . . . . . . . . . . . 97
3.2.1 Series Systems . . . . . . . . . . . . . . . . . . . . . . 97
3.2.2 Parallel Systems . . . . . . . . . . . . . . . . . . . . . 98
3.2.3 Hybrid Systems . . . . . . . . . . . . . . . . . . . . . 99
3.3 Properties of Systems . . . . . . . . . . . . . . . . . . . . . . . . 103
3.3.1 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.3.2 Causality . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3.3 Invertibility . . . . . . . . . . . . . . . . . . . . . . . . 106
3.3.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.3.5 Time Invariance . . . . . . . . . . . . . . . . . . . . . 111
3.3.6 Linearity and Superposition Property . . . . . . . . . 112
3.4 Basic Building Blocks of Systems and Their Properties . . . . . 117

3.4.1 Scalar Multiplier . . . . . . . . . . . . . . . . . . . . . 117
3.4.2 Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.4.3 Multiplier . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.4.4 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.4.5 Differentiator . . . . . . . . . . . . . . . . . . . . . . . 119
3.4.6 Unit Delay Operator . . . . . . . . . . . . . . . . . . . 119
3.4.7 Unit Advance Operator . . . . . . . . . . . . . . . . 120
3.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

4 Representation of Linear Time Invariant Systems by Impulse Response and Convolution Operation 131
4.1 Representation of LTI Systems by Impulse Response . . . . . . 133
4.1.1 Representation of Discrete-Time Linear Time-Invariant
Systems by Impulse Response . . . . . . . . . . . . . 134
4.1.2 Representation of Continuous-Time Linear Time-
Invariant System . . . . . . . . . . . . . . . . . . . . . 134
4.1.3 Convolution Operation in Continuous Time . . . . . 139
4.1.4 Convolution Operation in Discrete Time Systems . . 146
4.1.5 Cross-correlation and Autocorrelation Operations . . 148
4.2 Properties of Impulse Response for LTI Systems . . . . . . . . . 151
4.2.1 Impulse Response of Memoryless LTI Systems . . . 152
4.2.2 Impulse Response of Causal LTI Systems . . . . . . 152
4.2.3 Inverse of Impulse Response for LTI Systems . . . . 153
4.2.4 Impulse Response of Stable LTI Systems . . . . . . . 157
4.2.5 Unit Step Response . . . . . . . . . . . . . . . . . . . 159
4.3 An application of convolution in machine learning . . . . . . . . 161
4.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

5 Representation of LTI Systems by Differential and Difference Equations 171
5.1 Linear Constant-Coefficient Differential Equations . . . . . . . . 172
5.2 Representation of a Continuous-time LTI system by Differential
Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.3 Solving the Linear Constant Coefficient Differential Equations
which Represents LTI Systems . . . . . . . . . . . . . . . . . . . 176

5.3.1 Finding the Particular Solution . . . . . . . . . . . . . 177
5.3.2 Finding the Homogeneous Solution . . . . . . . . . . 178
5.3.3 Finding the General Solution . . . . . . . . . . . . . . 180
5.3.4 Transfer Function of a Continuous Time LTI System 186
5.4 Linear Constant Coefficient Difference Equations . . . . . . . . 189
5.4.1 Representation of a Discrete Time LTI Systems by
Difference Equations . . . . . . . . . . . . . . . . . . 190
5.4.2 Solution to Linear Constant Coefficient Difference
Equations . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.4.3 Transfer Function of a Discrete Time LTI System . 193
5.5 Relationship Between the Impulse Response and Difference or
Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . 194
5.6 Block Diagram Representation of Differential Equations for LTI
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

6 Fourier Series Representation of Continuous-Time Periodic Signals 215
6.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.2 Mathematical Representation of Waves and Harmony . . . . . . 218
6.3 Fourier Series Representation of Continuous-Time Periodic Func-
tions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
6.4 Dirichlet Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.5 Fourier Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.6 Frequency Domain and Hilbert Spaces . . . . . . . . . . . . . . . 227
6.7 Response of a Linear Time-Invariant System to the Continuous-
Time Complex Exponential Input Signal . . . . . . . . . . . . . . 234
6.7.1 Eigenfunctions and Eigenvalues of a Continuous-
Time LTI Systems . . . . . . . . . . . . . . . . . . . . 236
6.8 Convergence of the Fourier Series and Gibbs Phenomenon . . . 238
6.9 Properties of Fourier Series for Continuous-Time Functions . . 239
6.10 Trigonometric Fourier Series for Continuous-Time Functions . . 249
6.11 Trigonometric Fourier Series for Continuous-Time Even and
Odd Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.12 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

7 Fourier Series Representation of Discrete-Time Periodic Signals 263
7.1 Fourier Series Theorem for Discrete-Time Functions . . . . . . . 263
7.2 Discrete-Time Fourier Series Representation in Hilbert Space . 266
7.3 Properties of Discrete-Time Fourier Series . . . . . . . . . . . . 275
7.3.1 Difference Property . . . . . . . . . . . . . . . . . . . 281
7.3.2 Convolution Property . . . . . . . . . . . . . . . . . . 283
7.3.3 Multiplication Property . . . . . . . . . . . . . . . . . 288
7.4 Discrete-Time LTI Systems with Periodic Input and Output Pairs 293
7.4.1 Eigen-functions, Eigenvalues and Transfer Functions
of a Discrete-Time LTI Systems . . . . . . . . . . . . 293
7.4.2 Relationship Between the Fourier Series of Periodic
Input and Output Pairs of Discrete-Time LTI Systems 295
7.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300

8 Continuous Time Fourier Transform and its Extension to Laplace Transform 307
8.1 Fourier Series Extension to Aperiodic Functions . . . . . . . . . 308
8.2 Existence and Convergence of the Fourier Transforms: Dirichlet
Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.3 Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . 313
8.4 Comparison of Fourier Series and Fourier Transform . . . . . . . 314
8.5 Frequency Content of Fourier Transform . . . . . . . . . . . . . 315
8.6 Representation of LTI Systems in Frequency Domain by Fre-
quency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
8.7 Relationship Between the Fourier Series and Fourier Transform
of Periodic Functions . . . . . . . . . . . . . . . . . . . . . . . . . 325
8.8 Properties of Fourier Transform: For Continuous Time Signals
and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8.8.1 Basic Properties of Continuous Time Fourier Trans-
form . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.8.2 Continuous Time Linear Time Invariant Systems in
Frequency Domain . . . . . . . . . . . . . . . . . . . . 355
8.9 Laplace Transforms as an Extension of Continuous Time Fourier
Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
8.9.1 One Sided Laplace Transform . . . . . . . . . . . . . 360

8.9.2 Region of Convergence in Laplace Transforms . . . . 361
8.10 Inverse of Laplace Transform . . . . . . . . . . . . . . . . . . . . 367
8.11 Continuous Time Linear Time Invariant Systems in Laplace Do-
main . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8.11.1 Eigenvalues and Transfer Functions in s-Domain . . 373
8.12 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

9 Discrete Time Fourier Transform and Its Extension to z-Transforms 391


9.1 Fourier Series Extension to Discrete Time Aperiodic Functions 392
9.1.1 Discrete Time Fourier Transform . . . . . . . . . . . 392
9.2 Dirichlet Conditions are Relaxed for the Existence of Discrete
Time Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . 395
9.3 Fourier Transform of Discrete Time Periodic Functions . . . . . 407
9.4 Properties of Fourier Transforms For Discrete Time Signals and
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
9.4.1 Basic Properties of Discrete Time Fourier Transform 413
9.5 Discrete Time Linear Time Invariant Systems in Frequency Do-
main . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
9.6 Representation of Discrete Time LTI Systems . . . . . . . . . . . 432
9.6.1 Impulse Response . . . . . . . . . . . . . . . . . . . . 432
9.6.2 Unit Step Response . . . . . . . . . . . . . . . . . . . 433
9.6.3 Frequency Response . . . . . . . . . . . . . . . . . . . 434
9.6.4 Difference Equation . . . . . . . . . . . . . . . . . . . 435
9.6.5 Block Diagram Representation . . . . . . . . . . . . . 437
9.7 Z-Transforms as an Extension of Discrete Time Fourier Trans-
forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
9.7.1 One Sided Z-Transform . . . . . . . . . . . . . . . . . 444
9.7.2 Region of Convergence in Z-Transforms . . . . . . . 444
9.8 Inverse of Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . 452
9.9 Discrete Time Linear Time Invariant Systems in z-Domain . . . 458
9.10 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465

10 Linear Time Invariant Systems as Filters 477


10.0.1 Filtering the Periodic Signals by Frequency Response 478

10.0.2 Filtering the Aperiodic Signals by Frequency Response 480
10.1 Frequency Ranges of Frequency Response . . . . . . . . . . . . . 482
10.2 Filtering with LTI Systems . . . . . . . . . . . . . . . . . . . . . . 483
10.3 Ideal Filters For Discrete Time and Continuous time LTI systems 484
10.3.1 Ideal Low Pass Filters . . . . . . . . . . . . . . . . . . 485
10.3.2 Ideal High Pass Filters . . . . . . . . . . . . . . . . . 486
10.3.3 Ideal Band Pass and Band Reject Filters . . . . . . . 488
10.4 Discrete Time Real Filters . . . . . . . . . . . . . . . . . . . . . 498
10.4.1 Discrete Time Low Pass and High Pass Real Filters 498
10.4.2 Band Stop Filters for Filtering Well-Defined Fre-
quency Bandwidths . . . . . . . . . . . . . . . . . . . 505
10.5 Continuous Time Real Filters . . . . . . . . . . . . . . . . . . . . 508
10.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516

11 Continuous Time Sampling 523


11.1 Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
11.2 Properties of the Sampled Signal in Time and Frequency Domains 525
11.3 Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
11.4 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
11.5 Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 540
11.6 Sampling with Zero-Order Hold . . . . . . . . . . . . . . . . . . . 542
11.7 Reconstruction with Zero-Order Hold . . . . . . . . . . . . . . . 545
11.8 Sampling and Reconstruction with First Order Hold . . . . . . . 548
11.9 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552

12 Discrete Time Sampling and Processing 561


12.1 Time Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . 562
12.2 C/D Conversion: x(t) → x[n] . . . . . . . . . . . . . . . . . . . . 564
12.3 D/C Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
12.3.1 Band-Limited Digital Differentiator . . . . . . . . . . . 569
12.3.2 Digital Time Shift . . . . . . . . . . . . . . . . . . . . 574
12.4 Sampling the Discrete Time Signals . . . . . . . . . . . . . . . . 576
12.4.1 Discrete Time Impulse Train Sampling . . . . . . . . 577

12.5 Reconstruction of Discrete Time Signal from its Sampled Coun-
terpart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
12.6 Discrete-Time Decimation and Interpolation . . . . . . . . . . . 583
12.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Index 591
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593

Contributors

Foreword

Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vesti-
bulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris.
Nam arcu libero, nonummy eget, consectetuer id, vulputate a, magna. Donec
vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et ne-
tus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus
rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu
tellus sit amet tortor gravida placerat. Integer sapien est, iaculis in, pretium
quis, viverra ac, nunc. Praesent eget sem vel leo ultrices bibendum. Aenean
faucibus. Morbi dolor nulla, malesuada eu, pulvinar at, mollis ac, nulla. Cura-
bitur auctor semper nulla. Donec varius orci eget risus. Duis nibh mi, congue
eu, accumsan eleifend, sagittis quis, diam. Duis eget orci sit amet orci dignissim
rutrum.
Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi. Morbi
auctor lorem non justo. Nam lacus libero, pretium at, lobortis vitae, ultricies
et, tellus. Donec aliquet, tortor sed accumsan bibendum, erat ligula aliquet
magna, vitae ornare odio metus a mi. Morbi ac orci et nisl hendrerit mollis.
Suspendisse ut massa. Cras nec ante. Pellentesque a nulla. Cum sociis natoque
penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam
tincidunt urna. Nulla ullamcorper vestibulum turpis. Pellentesque cursus luctus
mauris.

Preface

Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vesti-
bulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris.
Nam arcu libero, nonummy eget, consectetuer id, vulputate a, magna. Donec
vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et ne-
tus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus
rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu
tellus sit amet tortor gravida placerat. Integer sapien est, iaculis in, pretium
quis, viverra ac, nunc. Praesent eget sem vel leo ultrices bibendum. Aenean
faucibus. Morbi dolor nulla, malesuada eu, pulvinar at, mollis ac, nulla. Cura-
bitur auctor semper nulla. Donec varius orci eget risus. Duis nibh mi, congue
eu, accumsan eleifend, sagittis quis, diam. Duis eget orci sit amet orci dignissim
rutrum.

place
date

Acknowledgments

Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit, vesti-
bulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum gravida mauris.
Nam arcu libero, nonummy eget, consectetuer id, vulputate a, magna. Donec
vehicula augue eu neque. Pellentesque habitant morbi tristique senectus et ne-
tus et malesuada fames ac turpis egestas. Mauris ut leo. Cras viverra metus
rhoncus sem. Nulla et lectus vestibulum urna fringilla ultrices. Phasellus eu
tellus sit amet tortor gravida placerat. Integer sapien est, iaculis in, pretium
quis, viverra ac, nunc. Praesent eget sem vel leo ultrices bibendum. Aenean
faucibus. Morbi dolor nulla, malesuada eu, pulvinar at, mollis ac, nulla. Cura-
bitur auctor semper nulla. Donec varius orci eget risus. Duis nibh mi, congue
eu, accumsan eleifend, sagittis quis, diam. Duis eget orci sit amet orci dignissim
rutrum.
Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi. Morbi
auctor lorem non justo. Nam lacus libero, pretium at, lobortis vitae, ultricies
et, tellus. Donec aliquet, tortor sed accumsan bibendum, erat ligula aliquet
magna, vitae ornare odio metus a mi. Morbi ac orci et nisl hendrerit mollis.
Suspendisse ut massa. Cras nec ante. Pellentesque a nulla. Cum sociis natoque
penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam
tincidunt urna. Nulla ullamcorper vestibulum turpis. Pellentesque cursus luctus
mauris.

I. R. S.

Introduction

Chapter 1
Introduction to Systems and Signals

“There is nothing more practical than a good theory!”


Vladimir Vapnik

This book is about the mathematical representation of systems and signals.


Let us start by describing the meaning of the words systems and signals. The origin of the word systems dates back to the 15th century, when it was used as the Latin word systema, meaning the entire universe. Since then, this very wide meaning has narrowed to a set of connected items or devices that operate together. In the context of this book, a system can be defined as a unified collection of interrelated and interdependent parts. In many cases, it is more than the sum of its parts.
The above definition is quite flexible and may cover both natural and human-made systems. A system can be as large as a planet, a star, or a galaxy, or as small as a single cell, a molecule, or a microchip.
In this book, we shall use the systems approach to model, analyze, and investigate natural systems, and to design and implement human-made systems.
Motivating Question: What is the systems approach?
The systems approach is a holistic paradigm for mathematically representing a system. Holism is the philosophy which accepts a system as a whole, not only as a collection of its parts. It is the opposite of the reductionist paradigm, which assumes that a complex system can be represented by its simpler components. For example, in a reductionist paradigm, a puzzle can be represented by the collection of its pieces, which come in a box. However, when we tip the box of the puzzle onto a table, we see all the pieces, but we cannot perceive its theme. In the systems approach, on the other hand, we need to assemble the puzzle and look at the completed arrangement to see that it forms a picture (Figure 1.1).

Figure 1.1: Waterfall by M.C. Escher. [1] The puzzle on the left consists of the pieces of the entire lithograph, but the pieces alone have no meaning. In order to observe the falling water of the watermill, we need to solve the puzzle.

In order to model a system using the holistic paradigm, we not only represent the attributes of its multiple components but also formulate their interrelationships, considering the objective of the entire system. This approach implicitly models the synergy created by a system.

WATCH: Learn more about the systems approach @ https://384book.net/v0101

The origin of the word signal is even older than that of systems, dating back to the 13th century. It comes from the Latin word signale, which means anything that serves to indicate or communicate information.
When we observe a signal we assume that there is a source system, which
generates the signal. Thus, signals can be considered as partial information
about the systems. In most cases, systems can be modeled and represented
by a collection of subsystems. The interrelations among the subsystems of
a system can be modeled by the received input signal(s) and the generated
output signal(s), i.e. signals, of each subsystem.
In summary, the response of a system under a specific set of input signals provides information about the properties of systems. Signals describe the interrelations among the parts of a system. Loosely speaking, signals are the measurements of our varying observations about a system and/or its parts.

[1] https://mcescher.com/gallery/impossible-constructions/

1.1. Example applications

Models for representing signals and systems are widely used in Electrical Engineering and Computer Science for filter design, control, communications, computer vision, machine learning, speech, image, and video processing. The formalism of signals and systems is also used in a wide range of multidisciplinary areas, including bio-informatics, robotics, neuroscience, remote sensing, aeronautics, seismology, biomedical engineering, chemical process control, energy and mechatronics, astronomy, and cosmology.
Let us give some examples where the methodologies of the systems approach are intensively utilized in the modeling, design, and implementation stages of natural and man-made systems. Most of these models are generated by using the signals measured at the input and/or output of the systems.

Three-Dimensional World Models by LIDAR Signals. LIDAR (LIght Detection And Ranging) signals are generated by a source which emits laser beams. These signals bounce off the surrounding objects and return to a sensor. The systems approach is then used to create a 3-dimensional representation of the physical environment by measuring the elapsed time for each laser pulse to return to the sensor (Figure 1.2).

WATCH: Learn more about the LIDAR example @ https://384book.net/v0102

Modeling the Brain Networks from the Brain Signals. The fMRI (functional Magnetic Resonance Imaging) technique records brain signals, which indirectly measure the activities in the anatomical regions. It is possible to model and analyze the cognitive processes of the human brain, such as vision, speech, and memory, from the fMRI signals.
Representing brain activities by networks is crucial to understanding vari-
ous cognitive states. It is possible to extract brain networks from the functional
Magnetic Resonance Images (fMRI) recorded while the subjects perform a pre-
defined cognitive task. Figure 1.3 shows two brain networks for the planning and execution phases while the subject solves a complex problem. The suggested computational network model can successfully discriminate the planning and execution phases of complex problem-solving processes with more than 90% accuracy, when the estimated dynamic networks, extracted from the fMRI data, are classified by a machine learning algorithm.

Figure 1.2: LIDAR image of the Oakbrook Mall, Oakbrook, IL. [2]

Figure 1.3: Visualizing anatomical regions during both the planning and execution phases while the selected subject solves a complex problem. [3]

[2] https://flic.kr/p/bzUU32
[3] F.T. Yarman Vural, G.G. Değirmendereli: 25th International Conference on Pattern Recognition (ICPR), 2020

WATCH: An example: speech synthesis from neural decoding of spoken sentences @ https://384book.net/v0103

Detecting Buildings from Remotely Sensed Satellite Images. Remote sensing images are recorded by measuring the signals of several electromagnetic waveforms reflected from the earth's surface. These signals are used to extract various information, such as measuring environmental pollution or climate change, the growth rate of cities or green areas, etc.
One important application of remote sensing is to detect the buildings in municipalities. For this purpose, a multi-dimensional signal measured from the earth's surface is modeled to filter the buildings in the remotely sensed data, as shown in Figure 1.4.

Figure 1.4: Example of building detection using remote sensing applications. [4]

Noise Reduction in Old Records. Due to the technological limitations of their time, old gramophone recordings are mostly noisy. These recordings can be cleaned by using the methodologies of signal processing. Additive noise is partially eliminated by estimating a mathematical model for the noise and subtracting it from the corrupted signal. An example of noise reduction can be found on the companion website of the book.

WATCH: Noise reduction on “O’sole mio” @ https://384book.net/v0104

[4] C. Senaras, M. Ozay, F.T. Yarman Vural: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, issue 3, pp. 1295-1304, 2013

1.2. Relationship Between Signals and Systems
The above brief descriptions and examples of signals and systems show that there is a remarkable relationship between the signals and the underlying system which generates them. Philosophically, one may consider signals as the manifestation of systems. We, humans, can perceive the physical world through these manifestations. Heraclitus of Ephesus summarizes this view by his famous saying:
τα παντα ῥεῖ (ta panta rhei),
which translates to English as,
All flows!
Almost 2500 years ago, Heraclitus claimed that everything changes. Since then, as we study nature, we discover some invariant laws which lie behind the changes. Although we can only perceive the world of flux, these invariant laws govern our changing observations. In other words, we can only perceive variances generated by the invariant laws which govern the natural systems. Our aim is to find these invariant laws, manifested through our varying observations.
To analyze and understand a natural system or design and implement a
human-made system, we need rigorous mathematical representations of signals,
which correspond to our varying observations. Based on these observations, we
can model the invariant rules of a system, which administers a set of prescribed
tasks.
Motivating Question: How can we analyze and understand the laws
that govern the natural systems? How can we design a human-made system to
achieve a specific goal?
The answers to these questions require the mathematical representation of systems and/or their subsystems. To follow the holistic approach, we also need the mathematical representation of signals, which describe the interrelations among the subsystems and the interaction between a system and its environment.

1.3. Mathematical Representation of Signals and Systems

There are many ways to formally represent signals and systems. In the context of this book,
x(t) → [ Equations / Algorithms ] → y(t)

Figure 1.5: Schematic representation of a system by a box, which consists of an equation or an algorithm. A system receives an input signal, x(t), and generates an output signal, y(t). The equation or algorithm relates the input signal to the output signal.

• signals are represented by functions,
• systems are represented by equations and/or algorithms.

Loosely speaking, a system receives an input signal, represented by a function x(t), and generates an output signal, represented by a function y(t), for this particular input. The relationship between the input and output signals provides us with the system equation or algorithm (Figure 1.5).
Throughout this book, we shall study the signals by representing and ma-
nipulating them with well-known mathematical objects, namely functions. We
shall, also, study the systems by establishing the relationship between the in-
put and output signals using a class of equations, namely, the differential and
integral equations. We pay special attention to linear systems, not only be-
cause of their mathematical tractability but also because they open the door
to analyze and design a wide range of nonlinear systems.
In the rest of this chapter, we shall provide a brief overview of functions
to represent and manipulate signals. We shall, also, study a very interesting
property of functions, called symmetry.

1.3.1. Signals Represented by Functions


Let us start by recalling the definition of a function. A function associates the elements of a domain set with the elements of a range set. Formally, a function, x, maps the values in the domain to values in the range,

x : D → R, (1.1)

where D is the domain set and R is the range set. Therefore, a function is represented by a triplet,

(D, R, x), (1.2)

where x is a set of ordered pairs, (d ∈ D, r ∈ R).

1.3.2. Types of Signals
The elements of the domain and range sets define the type of the signals. The domain and range sets may consist of multi-dimensional vectors, with real, complex or integer-valued entries. In other words, the domain of the function, x, can be an m-dimensional vector with entries defined over the set of real numbers, integer numbers, or complex numbers,

d ∈ R^m or d ∈ I^m or d ∈ C^m, (1.3)

respectively. Note that the sets R^m and C^m form a vector space over the field of real numbers and complex numbers, respectively. However, the set of integers, I^m, is not closed under scalar multiplication, thus it is not a vector space.
Similarly, the range of a function can be defined as an n-dimensional vector, with entries defined over the set of real numbers, integer numbers and complex numbers,

r ∈ R^n or r ∈ I^n or r ∈ C^n, (1.4)

respectively.
When the dimension of the domain, m > 1, the function, x, is called a
multivariate function. For m = 1, the function, x, is called a univariate
function.
In this book, we focus on univariate functions, where the domain variable
is either real or integer-valued scalar time measures. When the domain
variable is real, we indicate the time measure by t ∈ R and the corresponding
function by x(t). When the domain variable is integer-valued, we indicate the
time measure by n ∈ I and the corresponding function by x[n]. The elements
of the range can also be real, complex, or integer-valued. Depending on the
elements of the domain and range, we define the following types of signals.
1) Continuous-Time Signals: Continuous-time signals are represented by continuous functions, where both the domain, t, and the range, x(t), consist of real numbers, i.e.,

t ∈ R, x(t) ∈ R. (1.5)
As an example, a continuous-time sinusoidal signal is represented by the
following function:

x(t) = A sin(ω0 t), (1.6)


where the time t ∈ R is the domain and x(t) ∈ R is the range. The parameter
A is called the amplitude of the sinusoidal function (Figure 1.6).


Figure 1.6: Plot of the continuous-time function, x(t) = A sin(ω0 t).

Note that the range of the sinusoidal function is bounded by its amplitude, i.e., −A ≤ x(t) ≤ A, where the amplitude, A ∈ R, is a finite number. The signal represented by the above sinusoidal function repeats itself at every time instance T = 2π/ω0, where T is called the period.
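A truly continuous-time signal cannot be stored exactly on a computer, but it can be approximated by evaluating x(t) on a very fine time grid. The following minimal NumPy/Matplotlib sketch (our own illustration, not taken from the book's companion material) plots x(t) = A sin(ω0 t) for assumed values A = 1 and ω0 = 2π rad/s, so that the period is T = 2π/ω0 = 1 s.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed parameters (not from the text): amplitude A = 1, angular frequency w0 = 2*pi rad/s.
A, w0 = 1.0, 2 * np.pi
T = 2 * np.pi / w0                  # period of the sinusoid, here 1 second

t = np.linspace(0, 3 * T, 1000)     # fine grid approximating continuous time
x = A * np.sin(w0 * t)              # x(t) = A sin(w0 t)

plt.plot(t, x)
plt.xlabel("t [s]")
plt.ylabel("x(t)")
plt.title("Continuous-time sinusoid x(t) = A sin(w0 t)")
plt.grid(True)
plt.show()
```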
2) Discrete-Time Signals: Discrete-time signals are represented by discrete functions, where the domain variable, n, is an integer number and the range, x[n], is a real number, i.e.,

n ∈ I, x[n] ∈ R. (1.7)
As an example, a discrete-time sinusoidal signal can be represented by the
following function:

x[n] = A sin(Ω0 n). (1.8)


The above discrete-time function repeats itself at every integer time period, N = 2π/Ω0. An example is illustrated in Figure 1.7 for N = 25.
Note that the range of a discrete-time function consists of the vector space of real numbers, x[n] ∈ R. Thus, all the values in the range are well-defined. On the other hand, the domain, n ∈ I, does not form a vector space. Since the values between two integers are not defined, we cannot perform scalar multiplication operations in the domain. This fact requires special attention when we deal with discrete-time signals. For example, when we multiply the time variable, n, of a function, x[n], by a rational number, the resulting time instances may not yield integer values. In such cases, the range of the function is not defined. As a result, if the period N is not integer-valued, the integer multiples of the period, kN, in the domain of a sinusoidal function become undefined. Thus, the function does not satisfy the periodicity property.

9
x[n]
1

0.5

n
10 30 50
-0.5

-1

Figure 1.7: Plot of the discrete-time sinusoidal function, x[n] = sin(2πn/25). Note that the period is N = 25.
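As a complement to Figure 1.7, the short sketch below (an assumed illustration, not the book's code) generates the discrete-time sinusoid x[n] = sin(2πn/25) on an integer grid, stem-plots it, and numerically checks that it repeats with period N = 25.

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.arange(0, 60)                 # integer time indices
x = np.sin(2 * np.pi * n / 25)       # x[n] = sin(2*pi*n/25), period N = 25

plt.stem(n, x)
plt.xlabel("n")
plt.ylabel("x[n]")
plt.title("Discrete-time sinusoid with period N = 25")
plt.show()

# Periodicity check: x[n + 25] equals x[n] up to floating-point error.
print(np.allclose(np.sin(2 * np.pi * (n + 25) / 25), x))   # True
```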


Figure 1.8: Average monthly temperature in Ankara (data source: climate-data.org).

Discrete-time signals can be inherently discrete, or they can be obtained by quantizing the domain of a continuous-time signal. The signal of Figure 1.7 can be obtained from its continuous counterpart, x(t) = sin(2πt/25), by quantizing the domain of t. On the other hand, inherently discrete signals can
be obtained by recording the data at specific time instances. Popular ex-
amples include meteorological data (temperature, humidity, etc.) recorded
on a daily basis and economic or social indicators (growth rate, population,
disease distributions, etc.) of countries on a yearly basis. For example, the
signal showing average monthly temperature values in Figure 1.8 is inher-
ently discrete.
Figure 1.9: The digital signal (black) is obtained by quantizing the range of a continuous-time signal (red) into 8 quantization levels and that of the domain into 13 levels.

3) Digital Signals: Digital signals are represented by range-quantized discrete functions. Thus, both the domain and range of the digital signals consist of integer numbers, i.e., n ∈ I and x[n] ∈ I, i.e.,

d ∈ I, r ∈ I. (1.9)
Digital signals can be obtained by quantizing the domain and range of a continuous-time signal, or the range of a discrete-time signal. On the other hand, some signals are inherently digital. As an example, we can record the existence and non-existence of an event on a regular time basis by a binary function, such as daily records of rain and no rain. As another example, Figure 1.9 shows a digital signal obtained by quantizing the domain and the range of a continuous-time signal into 8 quantization levels, in the interval [0, 7].
There is a bridge, which relates the continuous-time signals to discrete-time
signals and/or digital signals, through the famous Sampling Theorem of
Claude Shannon. This bridge will be established in the last two chapters
of this book.
Modern information and communication technology is based on digital signals and systems. Even if a signal is inherently continuous, it is digitized prior to processing by a digital computing device. After the process is completed, the digital signal may be converted to its continuous-time counterpart, if necessary.
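As a rough sketch of range quantization (with assumed parameters, not the book's implementation), the snippet below rounds a real-valued discrete-time signal onto 8 integer levels in the interval [0, 7], in the spirit of Figure 1.9.

```python
import numpy as np

n = np.arange(0, 14)                           # discrete (already sampled) time axis, 14 points
x = 3.5 + 3.5 * np.sin(2 * np.pi * n / 13)     # real-valued samples lying in the range [0, 7]

levels = 8                                     # assumed number of quantization levels
x_digital = np.clip(np.round(x), 0, levels - 1).astype(int)   # integer-valued (digital) signal

print(x_digital)
```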

INTERACTIVE: Explore different types of signals @ https://384book.net/i0101
1.3.3. Energy of a Signal
An important characteristic of signals is the concept of energy. In physics, energy is a quantitative property which is transferred to an object to perform work. Although the energy of a signal is somewhat related to the energy of a physical system, there is no one-to-one correspondence between energy in physics and the definitions given below. However, it is customary to use the term energy in many fields, including signal processing.
The energy of a signal provides us with useful information about the math-
ematical tractability and realizability of the signals and systems, during the
design and implementation phases.

Definition 1.1: In continuous-time, the energy of a signal in the interval [t1, t2] is defined as follows,

$$E_{[t_1,t_2]}(x(t)) = \int_{t_1}^{t_2} |x(t)|^2 \, dt. \tag{1.10}$$

In discrete-time, the energy of the signal in the interval [n1, n2] is defined as follows,

$$E_{[n_1,n_2]}(x[n]) = \sum_{n=n_1}^{n_2} |x[n]|^2. \tag{1.11}$$

Definition 1.2: The total energy of continuous-time and discrete-time signals is defined as

$$E(x(t)) = \int_{-\infty}^{\infty} |x(t)|^2 \, dt \tag{1.12}$$

and

$$E(x[n]) = \sum_{n=-\infty}^{\infty} |x[n]|^2, \tag{1.13}$$

respectively.

The above definitions reveal that the energy of a signal is the area under
the squared magnitude of the corresponding function. The amount of this area
gives us important information about the characteristics of a signal. If the area
is large, then we suppose that the signal consists of very large amplitudes. In
some cases, the area may approach to infinity, which makes the signal mathe-
matically intractable and physically unrealizable in real-life applications.

Definition 1.3: A signal x(·) is called an energy signal if its total energy is finite, i.e., E(x) < ∞.

Energy signals are absolutely summable for discrete time and absolutely integrable for continuous time. This property brings a bound to the signals and enables us to apply many tractable mathematical operations to the functions which represent them.

1.3.4. Power of a Signal


The time average of the total energy is called the power of a signal.

Definition 1.4: The power of continuous-time and discrete-time signals is defined as follows,

$$P(x(t)) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt = \lim_{T \to \infty} \frac{E(x(t))}{2T} \tag{1.14}$$

and

$$P(x[n]) = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x[n]|^2 = \lim_{N \to \infty} \frac{E(x[n])}{2N+1}, \tag{1.15}$$

respectively.

Definition 1.5: A signal x is called a power signal, if its power is nonzero and finite, i.e.,

$$P(x) \neq 0 \quad \text{and} \quad P(x) < \infty. \tag{1.16}$$

Note that, when both the power and energy of a signal are infinite, the
signal is neither a power nor an energy signal. In practice, a power signal
cannot exist in the real world, because it would require a power source that
operates for an infinite amount of time.
Question: Show that periodic signals are power signals and that aperiodic signals which are nonzero in a finite interval are energy signals.
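As a hint for the question above, the sketch below (an assumed example, not from the text) estimates the power of the periodic signal x(t) = sin(2πt) over increasingly long windows; the estimate settles near 1/2, while the total energy of the same signal grows without bound.

```python
import numpy as np

def power_estimate(T, samples_per_unit=1000):
    """Approximate P = (1/(2T)) * integral from -T to T of |x(t)|^2 dt for x(t) = sin(2*pi*t)."""
    t = np.linspace(-T, T, int(2 * T * samples_per_unit) + 1)
    x = np.sin(2 * np.pi * t)
    dt = t[1] - t[0]
    return np.sum(np.abs(x) ** 2) * dt / (2 * T)

for T in (1, 10, 100):
    print(T, power_estimate(T))   # tends to 0.5, the power of a unit-amplitude sinusoid
```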

INTERACTIVE: Explore elementary operations on signals @ https://384book.net/i0102

Figure 1.10: The plot of a continuous-time pulse signal which is nonzero in the
interval [0, 6].

1.4. Operations on the Time Variable of Signals
Recall that signals can be represented by functions. Hence, arithmetic oper-
ations, including addition, division, and multiplication, can be systematically
employed to generate novel signals from the existing ones.
Signals can be manipulated in many ways. One major way is to apply
operations on the time variable. In other words, we change the domain of the
function that represents the signal and investigate the range of the function
with respect to the newly defined domain. These operations can be categorized
under four headings:
1. Time shift,
2. Time reverse,
3. Time scale,
4. Time shift and scale.
The above operations enable us to define new signals, based on elementary
functions, such as periodic functions, to represent some real-life signals.
As an example, consider a simple signal, called a pulse signal, defined by
the continuous time function in Figure 1.10. The analytical form of the pulse
signal of Figure 1.10 is
$$x(t) = \begin{cases} 1 & \text{for } 0 \le t \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.17}$$
A discrete-time version of this pulse signal is given in Figure 1.11.
The analytical form of the discrete-time pulse signal of Figure 1.11 is given
below:


Figure 1.11: The plot of a discrete-time pulse signal, which is nonzero in the
interval [0, 6].

$$x[n] = \begin{cases} 1 & \text{for } 0 \le n \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.18}$$
Let us define the operations on the time parameter of signals and apply
these operations to the above continuous and discrete-time pulse functions,
x(t) and x[n].
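Before going through the operations, it is convenient to have the two pulse signals available in code. The helper functions below are our own minimal implementations of Equations (1.17) and (1.18), not part of the book's material; the later sketches in this section reuse the same idea.

```python
import numpy as np

def x_ct(t):
    """Continuous-time pulse of Eq. (1.17): 1 for 0 <= t <= 6, 0 otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t <= 6), 1.0, 0.0)

def x_dt(n):
    """Discrete-time pulse of Eq. (1.18): 1 for integer 0 <= n <= 6, 0 otherwise."""
    n = np.asarray(n)
    return np.where((n >= 0) & (n <= 6), 1, 0)

print(x_ct([-1.0, 2.5, 6.0, 7.0]))      # [0. 1. 1. 0.]
print(x_dt(np.arange(-2, 9)))           # [0 0 1 1 1 1 1 1 1 0 0]
```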

1.4.1. Time Shift

The time shift operation for a continuous-time signal replaces the time variable t of x(t) by t′ = (t − T) to obtain x(t′) = x(t − T), where T is the amount of shift.
For the continuous-time signal of Figure 1.10, time shift of x(t) by the
amount of T is given as
$$x(t-T) = \begin{cases} 1 & \text{for } T \le t \le T+6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.19}$$
Similarly, the time shift operation for a discrete-time signal replaces the time variable n of x[n] by n′ = (n − N) to obtain x[n′] = x[n − N], where N is the amount of shift.
For the discrete-time signal of Figure 1.11, time shift of x[n] by the amount
of N is given as
$$x[n-N] = \begin{cases} 1 & \text{for } N \le n \le N+6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.20}$$


Figure 1.12: Shift of a continuous-time pulse signal, x(t), given in Figure 1.10
by the amount of T > 0 (left) and shift of a discrete-time pulse signal, x[n],
given in Figure 1.11 by the amount of N > 0 (right).

Note that the amount of continuous-time shift, T, is a real number, whereas the amount of discrete-time shift, N, must be an integer number for the time shift operation.
When T > 0, for continuous-time signals, the time-shift operation delays
the signal by T units in continuous-time. The shift is towards the right on
the time axis. Similarly, when N > 0, for discrete-time signals, the signal is
delayed N units in discrete-time. Figure 1.12 shows the results of this time-shift
operation.
For T < 0 and N < 0, the shift is towards the left on the time axis. This
operation advances the signal by T units in continuous-time and N units in
discrete-time. In practice, it is not possible to advance a signal in real time.
We can only advance a signal if all of it is previously recorded.
Question: Suppose that we are in a movie theater. What happens to the
movie video, when we play it after a time shift operation for T = 2 hours?
If we assume that the start time of the movie is at t = 0, then there will
be a delay of two hours for the movie to start.
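Numerically, the time shift is obtained simply by evaluating the original function at n − N. A minimal sketch, assuming the discrete-time pulse of Figure 1.11 and a delay of N = 3:

```python
import numpy as np

def x(n):
    """Discrete-time pulse: 1 for 0 <= n <= 6, 0 otherwise."""
    n = np.asarray(n)
    return np.where((n >= 0) & (n <= 6), 1, 0)

n = np.arange(-2, 12)
N = 3                     # assumed shift amount; N > 0 delays the signal
print(x(n))               # original pulse, nonzero on 0..6
print(x(n - N))           # delayed pulse, nonzero on 3..9 (shifted to the right)
```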

1.4.2. Time Reverse


Time reverse operation of a continuous-time signal changes the sign of the time
variable t of the signal, x(t), by t′ = −t to obtain x(t′ ) = x(−t).
For the signal of Figure 1.10, time reverse can be given as follows:
$$x(-t) = \begin{cases} 1 & \text{for } 0 \le -t \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.21}$$


Figure 1.13: Time-reversed versions of the continuous-time and discrete-time pulse signals given in Figure 1.10 and Figure 1.11.

Similarly, the time reverse operation of a discrete-time signal changes the sign
of the time variable n by n′ = −n, as follows:
$$x[-n] = \begin{cases} 1 & \text{for } 0 \le -n \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.22}$$
Note that time reverse operation flips the signal with respect to the ordinate
axis (Figure 1.13).
Question: Suppose that we are given a movie video, what happens to this
video, when we play it after the time reverse operation?
The time reverse operation makes the end of the signal the start of the signal. In practice, there is no negative time. Thus, the reversed signal cannot start at t = −6. Hypothetically, if we assume that the movie starts at t = −6, we observe that the movie starts from the end and progresses towards the beginning, like Benjamin Button.
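Time reversal can be checked in the same way: evaluating the pulse at −n flips it about n = 0. A minimal sketch with the same assumed pulse:

```python
import numpy as np

def x(n):
    """Discrete-time pulse: 1 for 0 <= n <= 6, 0 otherwise."""
    n = np.asarray(n)
    return np.where((n >= 0) & (n <= 6), 1, 0)

n = np.arange(-8, 9)
print(x(n))     # original pulse, nonzero on 0..6
print(x(-n))    # time-reversed pulse, nonzero on -6..0
```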

1.4.3. Time Scale


Time scale operation of a continuous-time function replaces the time variable,
t of x(t) by t′ = at to obtain x(t′ ) = x(at), where a is a time scale parameter.
For the continuous function, x(t) of Figure 1.10, time scale by the amount
of a is given as follows:
$$x(at) = \begin{cases} 1 & \text{for } 0 \le at \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.23}$$
Time scale of a discrete-time function by a scalar, a, replaces the time variable, n, by n′ = an. For the discrete function, x[n] of Figure 1.11, time scale by the amount of a can be given as follows:

$$x[an] = \begin{cases} 1 & \text{for } 0 \le an \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.24}$$
The time scale operation either squishes or stretches a signal, depending on the value of the multiplicative factor, a. For a > 1, the signal becomes narrower, whereas for 0 < a < 1 it gets wider.
Let us investigate the effect of the parameter a on the scaling process in the following exercises.

Exercise 1.1: Given the continuous and discrete-time signals of Figure 1.10
and Figure 1.11, find and plot x(2t) and x[2n].

Solution: For the continuous-time signal, x(t), all we need to do is to replace t by t′ = 2t, as follows:

$$x(2t) = \begin{cases} 1 & \text{for } 0 \le t \le 3, \\ 0 & \text{otherwise.} \end{cases} \tag{1.25}$$
For the discrete-time signal, x[n], we replace n by n′ = 2n, as follows:

$$x[2n] = \begin{cases} 1 & \text{for } 0 \le n \le 3, \\ 0 & \text{otherwise.} \end{cases} \tag{1.26}$$
We can evaluate the discrete-time function, x[2n], in the nonzero interval, 0 ≤ n ≤ 3, as follows:
For n = 0 → x[2n] = x[0] = 1
For n = 1 → x[2n] = x[2] = 1
For n = 2 → x[2n] = x[4] = 1
For n = 3 → x[2n] = x[6] = 1
For n > 3 → x[2n] = 0.

Figure 1.14 shows the plots of x(2t) and x[2n].

Note that, in the discrete-time case, when a > 1, the scaled signal, x[an], keeps only the samples of the original signal x[n] at the time instances an. Thus, we throw away the samples of the original signal between a(n − 1) and an, for all n. This operation is called decimation.

Exercise 1.2: Decimation: Given the discrete signal of Figure 1.11, find
and plot x[4n].


Figure 1.14: Scaled continuous-time and discrete-time signals for a = 2.


Figure 1.15: The original discrete-time signal is decimated by a factor of a = 4.

Solution: Let us evaluate the function x[4n] for all possible values of n.
For n < 0 → x[4n] = 0
For n = 0 → x[4n] = x[0] = 1
For n = 1 → x[4n] = x[4] = 1
For n ≥ 2 → x[4n] = 0.

Note that the decimation process by a factor of a = 4 keeps the value of the original function at x[0] and the values at every 4n, skipping the three values between 4(n − 1) and 4n, for all n. Therefore, it squishes the original signal by decimating it with a factor of a = 4. The signal x[4n] is plotted in Figure 1.15.
A good example of decimation with a > 1 is increasing the playing speed of a video recording.
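For a signal stored as an array starting at n = 0, decimation by an integer factor a amounts to keeping every a-th sample. A sketch assuming the pulse of Figure 1.11 and a = 4, matching Exercise 1.2:

```python
import numpy as np

x = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # x[n], the pulse of Figure 1.11, for n = 0..11
a = 4                                                # decimation factor

x_decimated = x[::a]      # x[4n]: keep every 4th sample, discard the three samples in between
print(x_decimated)        # [1 1 0] -> nonzero only at n = 0 and n = 1, as in Figure 1.15
```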

Exercise 1.3: Given the continuous-time signal of Figure 1.10, find and plot
x(t/2).


Figure 1.16: Time-scaled continuous and discrete-time pulse signals, given in Figure 1.10 and Figure 1.11, for a = 1/2. Notice that in the continuous-time case, we simply stretch the function. On the other hand, in the discrete-time case, we assign 0 values to each inserted time point. This operation is called expansion.

Solution: For the continuous-time signal, x(t), we replace t by t′ = t/2, as follows:

$$x(t') = x(t/2) = \begin{cases} 1 & \text{for } 0 \le t \le 12, \\ 0 & \text{otherwise.} \end{cases} \tag{1.27}$$
Note that for a < 1, the scaled continuous signal x(at) is the stretched version
of the original continuous signal, x(t). The plot is given in Figure 1.16 (left).

Exercise 1.4: Expansion: Given the discrete-time signal of Figure 1.11, find and plot x[n/2].

Solution: For the discrete-time signal, x[n], we replace n by n′ = n/2, as follows:

$$x[n'] = x[n/2] = \begin{cases} 1 & \text{for } 0 \le n \le 12, \\ 0 & \text{otherwise.} \end{cases} \tag{1.28}$$

As can be seen from the above example, the values of the original signal, x[n], are placed at every an time instance in the scaled signal, x[an]. However, when a < 1, we stretch the signal by inserting extra time instances between the time instances of the original signal. It is customary to assign zero values to the inserted time instances. This operation is called expansion. The plot of x[n/2] can be found in Figure 1.16 (right).
In some practical applications, it is possible to estimate nonzero values for the inserted time instances of the stretched function x[an], using the past and future known values around the inserted points. For example, we can take the average of the closest known values of an inserted time sample and assign that average to it. This process is called interpolation.
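Expansion and interpolation can both be sketched in a few lines (our own illustration with an assumed short signal): expansion inserts zeros between the original samples, while linear interpolation (here via numpy.interp) fills the inserted positions from the neighbouring known values.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 0.0])    # an assumed short discrete-time signal
factor = 2                                 # expansion factor (corresponds to a = 1/2)

# Expansion: place the original samples at every 2nd position and insert zeros in between.
x_expanded = np.zeros(factor * (len(x) - 1) + 1)
x_expanded[::factor] = x
print(x_expanded)                          # [0. 0. 1. 0. 2. 0. 3. 0. 0.]

# Interpolation: estimate the inserted samples from the neighbouring known values.
n_known = np.arange(len(x)) * factor       # positions of the known samples
n_all = np.arange(len(x_expanded))
x_interpolated = np.interp(n_all, n_known, x)
print(x_interpolated)                      # [0. 0.5 1. 1.5 2. 2.5 3. 1.5 0.]
```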


Figure 1.17: The plot of shifted and squished continuous-time signal, x(2t − 3).

A good example of interpolation with a < 1 is slowing down the playback speed of a video recording.
Question: Suppose that we are given a speech recording. How does it sound when we scale the recording by a < 1 and a > 1?
For the values of a > 1, the decimation process skips some values of the
signal resulting in a speed-up in the recording. Thus, the person speaks faster.
For the values, a < 1, the interpolation process adds extra points in between
the samples of the recording. Thus, the person speaks slower.

1.4.4. Time Scale and Shift


We can combine the time scale and time shift operations to obtain the signal x(at − b) from the continuous-time signal, x(t), and the signal x[an − b] from the discrete-time signal, x[n].
For the continuous function, x(t) of Figure 1.10, time scale and shift by the amounts a and b can be given as follows:

$$x(at-b) = \begin{cases} 1 & \text{for } 0 \le at-b \le 6, \\ 0 & \text{otherwise.} \end{cases} \tag{1.29}$$
For example, when we take a = 2 and b = 3, the continuous-time signal is
both squished and shifted (Figure 1.17), as follows:
$$x(2t-3) = \begin{cases} 1 & \text{for } \frac{3}{2} \le t \le \frac{9}{2}, \\ 0 & \text{otherwise.} \end{cases} \tag{1.30}$$
For the discrete function, x[n] of Figure 1.11, the time scale and shift of x[2n − 3] is given as follows:

$$x[2n-3] = \begin{cases} 1 & \text{for } 2 \le n \le 4, \\ 0 & \text{otherwise.} \end{cases} \tag{1.31}$$

Figure 1.18: Plot of x(t) and x(2t − 3).
Note that the function, x[2n − 3], is undefined for non-integer values of the
domain, defined by [2n − 3]. In order to satisfy the inequality constraints, we
have to round or truncate the upper and lower bounds of the domain of the
function to integer values.

Exercise 1.5: Given the following continuous-time signal,

$$x(t) = \begin{cases} t & \text{for } 0 \le t \le 3, \\ 0 & \text{otherwise,} \end{cases} \tag{1.32}$$

find and plot x(2t − 3).

Solution: Let us define a new time variable t′ = 2t − 3 and replace t by t′ in the above equation:

$$x(2t-3) = \begin{cases} 2t-3 & \text{for } 0 \le 2t-3 \le 3 \ \left(\text{i.e., } \frac{3}{2} \le t \le 3\right), \\ 0 & \text{otherwise.} \end{cases} \tag{1.33}$$

A practical approach for the combined time shift and scale operation is to shift the function first, then apply the scaling operation to the shifted signal. Thus, we shift the function x(t) by the amount of b = 3 and then scale the shifted function x(t − 3) by the amount of a = 2. The shifted and scaled signal is depicted in Figure 1.18.


Figure 1.19: Plot of x[n] and x[2n − 3].

Exercise 1.6: Given the following discrete-time signal,

$$x[n] = \begin{cases} n & \text{for } 0 \le n \le 3, \\ 0 & \text{otherwise,} \end{cases} \tag{1.34}$$

find and plot x[2n − 3].

Solution: As we did in the previous example, we define a new time variable


n′ = 2n − 3 and replace n by n′ in the above equation:

x[2n − 3] = { 2n − 3,  for 0 ≤ 2n − 3 ≤ 3 (i.e., 3/2 ≤ n ≤ 3),
              0,       otherwise.          (1.35)

As in the continuous-time case, first, we shift the signal to the right by the
amount of b = 3, then apply the scaling operation to the shifted signal by the
amount of a = 2. However, the lower bound of the domain is not integer-valued,
and the signal is not defined at n = 3/2. Thus, we round the lower bound of the
domain of the function to the nearest integer and define the non-zero interval
as 2 ≤ n ≤ 3, as follows:
x[2n − 3] = { 2n − 3,  for 2 ≤ n ≤ 3,
              0,       otherwise.          (1.36)
This fact can be observed from Figure 1.19.

Exercise 1.7: Given the following discrete-time signal:

x[n] = {  2,  for n = −2,
         −1,  for n = −1,
         −3,  for n = 1,
          4,  for n = 2,
          0,  otherwise.          (1.37)

Find and plot the signal x[3n − 4].

Solution: First, we shift the signal to the right by b = 4:




x[n − 4] = {  2,  for n = 2,
             −1,  for n = 3,
             −3,  for n = 5,
              4,  for n = 6,
              0,  otherwise.          (1.38)

Then, we scale the shifted signal x[n − 4] by a = 3 to obtain the shifted and
scaled signal, x[3n − 4]. For this purpose, we replace n by n′ = 3n in the above
equation. Since the signal is not defined at non-integer values of n, the values
corresponding to 3n = 2 and 3n = 5 disappear; they would fall at n = 2/3 and
n = 5/3. Finally, we get the following shifted and squished signal:

x[3n − 4] = { −1,  for n = 1,
               4,  for n = 2,
               0,  otherwise.          (1.39)

The plots of the original function, x[n], its shifted version, x[n − 4], and
then the scaled version, x[3n − 4] are shown in Figure 1.20. Note that, scaling
the function by a factor of a = 3 squishes the function, omitting the values of
the original function at n = −2 and n = 1.

The above definitions and examples reveal that operations on the time
parameter of a signal require a little care for discrete-time functions, since
we deal with integer arithmetic in the time variable, n.
Question: Suppose that we apply scaling and shifting processes at the
same time to a movie video, represented by x[n]. What happens to the video
for x[2n − 1] and for x[0.5n + 1]?
For the signal x[2n − 1], the movie starts with an hour of delay and plays
twice as fast as the original. For the signal x[0.5n + 1], the movie starts an
hour early and plays at half the speed of the original.
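The steps of Exercise 1.7 can be checked numerically. The following sketch is only an illustration (the helper name scale_and_shift and the dictionary representation of x[n] are our choices): it evaluates y[n] = x[an − b] by keeping the values at those integer time instances n for which an − b lands on a defined sample of x[n].

    def scale_and_shift(x, a, b, n_range):
        # Return y[n] = x[a*n - b] for integer n in n_range.
        # x is a dict mapping integer time indices to values; missing indices are 0.
        y = {}
        for n in n_range:
            m = a * n - b                     # the argument of x[.]
            if float(m).is_integer():         # only integer arguments are defined
                y[n] = x.get(int(m), 0)
            else:
                y[n] = 0
        return y

    # x[n] of Exercise 1.7
    x = {-2: 2, -1: -1, 1: -3, 2: 4}
    print(scale_and_shift(x, 3, 4, range(-2, 5)))
    # {-2: 0, -1: 0, 0: 0, 1: -1, 2: 4, 3: 0, 4: 0}, which matches Equation (1.39)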

Figure 1.20: Plots of (a) x[n], (b) x[n − 4] and (c) x[3n − 4] in Exercise 1.7.

Explore operations on the time variable of signals
@ https://384book.net/i0103
INTERACTIVE

1.5. Signals with Symmetry Properties


Loosely speaking, a symmetry of an object is a transformation that leaves
certain properties of the object unchanged. For example, the number plate of
a car does not change when we change the car's location. If we drive a car, we may
change its location, but not its plate. Therefore, the number plate
has a location translation symmetry.
Symmetry is a crucial property in many fields of mathematics, arts, and
science. The study of symmetries in physics started with Noether’s Theorem,
which allows us to derive conserved quantities of physics from symmetries
of the laws of nature. Loosely speaking, Noether’s theorem states that every
conservation law has a corresponding continuous symmetry transformation. For
example, the conservation of energy arises from the time translation symmetry.
In other words, time translation symmetry results in the law of conservation
of energy.
There is a rigorous definition of symmetry in group theory, over the symme-
try groups. A symmetry group is defined as the group of all transformations
under which the objects of the group are invariant.
Although we do not know the exact reason(s), we find aesthetic values
in symmetry. One reason may be the wide range of symmetry types found
in natural objects. For example, the reflection symmetry of the human body,
the rotational symmetry of flower petals, and the hexagonal symmetry of the
honeycomb are aesthetically pleasing. In fact, asymmetry of an object may be
an indication of a disease or a problem in the natural world.
Another reason for the aesthetic value of symmetry may be attributed to the
working principles of the human brain. It is well known that objects with some
type of symmetry require relatively less storage space, compared to asymmet-
rical objects. For example, a square shape has 90-degree rotation symmetry
and reflection symmetry. In order to store a square, all we need to know is the
length of its edge. On the other hand, an amorphous shape with no symmetry
requires storing all data points on the boundary. As an information processing
and storing device, our brain creates a model of the physical world surrounding
us from the sensory stimuli. The human brain compresses the world model by
extracting a wide range of symmetry transformations and updates them con-
tinuously throughout our lives. Thus, symmetry brings an efficiency to process
information in our brain, which creates an aesthetically pleasing feeling.

Figure 1.21: Snakes by the Dutch artist M.C. Escher, which depicts rotational
symmetry.5

Figure 1.22: A Penrose tiling with five-fold symmetry, built from two different rhombi.6

5 https://mcescher.com/gallery/most-popular/#iLightbox[gallery image 1]/30
6 https://www.nist.gov/image/penrose-tiling

Figure 1.23: Symmetric tiling in the Alhambra Palace.7

The history of art is full of painters, architects, designers, and composers
who used the concept of symmetry in their artwork. A pioneering artist in this
field, known for his symmetrical lithographs, is M.C. Escher, who was inspired by
the highly symmetrical geometric decorations of the Alhambra Palace. Escher
displayed the beauty of mathematics in his lithographs without any formal
mathematics education (Figure 1.21). His lithographs were sources of inspira-
tion to many mathematicians who work on tessellations. The famous mathe-
matician R. Penrose was fascinated by the art of Escher when he visited one
of Escher's exhibitions and created highly symmetrical aperiodic tilings, one
of which is shown in Figure 1.22.
Figure 1.23 shows the divine art of the Alhambra Palace, which has inspired many
artists and mathematicians for over 800 years.

Learn more about the symmetric decorations of the Alhambra Palace
@ https://384book.net/v0105
WATCH

Learn more about the symmetric art of Escher
@ https://384book.net/v0106
WATCH

7 https://www.alhambra.info/img/interiores/4.jpg

There are many forms of symmetry, specifically in group theory and
representation theory. In this book, we deal with signals with specific forms
of symmetry. First, we study two crucial classes of functions, namely,
1. signals represented by periodic functions,
2. signals represented by even and odd functions.
Then, in the next chapter, we study the functions which can be considered
as the basic building blocks of a large class of functions. Interestingly, functions
with symmetry properties enable us to represent a wide range of signals by rigorous
mathematical models, in a compact way, as we shall see throughout this book.

Learn more about the mathematics of symmetry


@ https://384book.net/v0107
WATCH

Let us briefly study the basic properties of periodic signals, and even and
odd signals in the following subsections.

1.5.1. Periodic Signals


Signals are called periodic if the representing function repeats itself at every
finite interval, called the period. Periodic functions are symmetric with respect
to periodic translation. In other words, periodic functions are translation in-
variant functions, where the translation of the signal by the amount of its
period gives the same signal.
In this book, we use periodic functions intensively, to represent compli-
cated signals, even the aperiodic and asymmetric ones, in continuous-time and
discrete-time cases. Periodic functions open the door to new vector spaces,
where a large class of functions are represented in terms of periodic functions.

1.5.1.1. Continuous-Time Periodic Signals


A continuous-time signal, x(t), is periodic if there exists a finite and nonzero
real value, T ∈ R, such that,

x(t) = x(t + T ). (1.40)

Definition 1.6: The fundamental period, T0 ∈ R, is defined as the smallest
positive real number for which the function repeats itself, as follows,

x(t) = x(t + T0 ), (1.41)

and it is measured by a time unit, such as hours, minutes, or seconds. Figure
1.24 shows an example of a continuous-time periodic signal.

Figure 1.24: A continuous-time periodic signal with the fundamental period,
T. This function is symmetric with respect to reflection and translation by the
amount of the period.

Based on the fundamental period, T0 , there are two more measures for
periodic signals:

• Angular Frequency: ω0 = 2π/T0, measured by radians/second.
• Fundamental Frequency: f0 = 1/T0, measured by Hertz (cycle/second).

Exercise 1.8: Find the fundamental period of the following signals:


a) x(t) = cos t,
b) x(t) = cos(ω0 t).

Solution: Recall from Calculus:


a) x(t) = cos t = cos(t + 2π). Thus, the fundamental period is T0 = 2π.
b) x(t) = cos(ω0 t) = cos(ω0 (t + 2π/ω0)). Thus, the fundamental period is T0 = 2π/ω0.

Note that the parameter ω0, which multiplies the time variable, corresponds
to the angular frequency. We can, then, obtain the fundamental period, T0, by
using the relationship between the angular frequency and the period. For ω0 = 1,
the period of the cosine function is 2π.

Exercise 1.9: Plot the following signal and find its fundamental period:

x(t) = A cos(ω0 t − K). (1.42)

Figure 1.25: The continuous-time cosine signal, x(t) = A cos(ω0 t − K), with
amplitude A and period T = 2π/ω0. For A = 1 and K = 0 the plot is reduced
to x(t) = cos ω0 t of Exercise 1.8.

Solution: We need to find the smallest period T0 , such that

x(t) = A cos(ω0 t − K) = x(t + T0 ). (1.43)


This equation is satisfied when T0 = 2π/ω0 .
For the signal x(t) = A cos(ω0 t − K), the parameters A, K, and ω0 have
special names. The parameter A is called the amplitude of the signal, the
parameter K is called the phase of the signal, and ω0 = 2π/T is called the angular
frequency. The signal is plotted in Figure 1.25.
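As a quick numerical companion to Exercise 1.9, the short script below is our own sketch (the parameter values are arbitrary, and it uses numpy and matplotlib). It plots A cos(ω0 t − K) and marks one fundamental period T0 = 2π/ω0 starting at the shifted peak t = K/ω0.

    import numpy as np
    import matplotlib.pyplot as plt

    A, w0, K = 2.0, np.pi / 2, 1.0        # arbitrary amplitude, angular frequency, phase
    T0 = 2 * np.pi / w0                    # fundamental period

    t = np.linspace(0, 3 * T0, 1000)
    x = A * np.cos(w0 * t - K)

    plt.plot(t, x)
    plt.axvline(K / w0, linestyle="--")        # a peak, shifted by the phase
    plt.axvline(K / w0 + T0, linestyle="--")   # the same point one period later
    plt.xlabel("t")
    plt.title("A cos(w0 t - K)")
    plt.show()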

1.5.1.2. Discrete-Time Periodic Signals


A discrete-time signal, x[n], is periodic if there exists a finite and nonzero
integer value, N ∈ I, such that

x[n] = x[n + N ]. (1.44)


An example is given in Figure 1.26.

Definition 1.7: The smallest integer, N0 ∈ I, which satisfies x[n] = x[n+N0 ]


is called the fundamental period.

Based on the fundamental period, N0 , of a discrete-time periodic function,


we can define:
• Angular Frequency: Ω0 = 2π/N0, measured by radians/second.
• Fundamental Frequency: f0 = 1/N0, measured by Hertz (cycle/second).

Figure 1.26: A discrete-time periodic signal with fundamental period N0 = 3.

Exercise 1.10: Find the fundamental period of the following discrete-time signal,

x[n] = A cos(Ω0 n − K),          (1.45)

where K is an integer.

Solution: For periodicity, we need to find the fundamental period N0 , which


satisfies,

x[n] = x[n + N0 ]. (1.46)


Since x[n] is a discrete-time signal, N0 must be an integer. Thus, we need to
find the fundamental period,

N0 = (2π/Ω0) m,          (1.47)

where m is the smallest integer which makes N0 an integer.

Note that x[n] is periodic only when the angular frequency, Ω0, is a rational
multiple of 2π. In the simplest case, Ω0 is obtained by dividing 2π by an integer,
and this integer divider corresponds to the fundamental period, N0.
For example, if Ω0 = π/6 = 2π/12, then N0 = 12. This signal is plotted in
Figure 1.27. On the other hand, if Ω0 = ϕ, where ϕ/(2π) is irrational, then x[n]
is not periodic at all.

Figure 1.27: Plot of x[n] = A cos(Ω0 n − K), for Ω0 = 2π/12 and K = 4.

Exercise 1.11: Is the signal below periodic? If yes, find the period.

x[n] = sin((6π/7) n + 1).          (1.48)

Solution: We need to find the smallest integer value, N0, which satisfies the
following equation:

x[n] = x[n + N0] = sin((6π/7)(n + N0) + 1) = sin((6π/7) n + (6π/7) N0 + 1).          (1.49)

Here, (6π/7) N0 must be equal to 2πm, where m is the smallest integer satisfying

(6π/7) N0 = 2πm,   N0 = 7m/3.          (1.50)
The smallest integer, which satisfies the above equality is m = 3. Then, the
fundamental period is N0 = 7. Therefore, this signal is periodic.
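The reasoning of Exercises 1.10 and 1.11 can be automated with exact rational arithmetic. The helper below is our own sketch: it returns the fundamental period N0 of cos(Ω0 n + φ) when Ω0/(2π) is given as an exact fraction; if Ω0/(2π) is irrational, the signal is simply not periodic.

    from fractions import Fraction

    def fundamental_period(omega0_over_2pi):
        # Fundamental period of cos(Omega0*n + phi), with Omega0/(2*pi) = p/q in lowest terms.
        # The smallest integer N0 with Omega0*N0 = 2*pi*m is the denominator q.
        return Fraction(omega0_over_2pi).denominator

    print(fundamental_period(Fraction(1, 12)))   # Omega0 = 2*pi/12 -> N0 = 12
    print(fundamental_period(Fraction(3, 7)))    # Omega0 = 6*pi/7  -> N0 = 7 (Exercise 1.11)
    # For Omega0 = 1, Omega0/(2*pi) is irrational, so cos(n) is not periodic at all.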

1.5.2. Even and Odd Signals


Another set of signals with symmetry properties are represented by even and
odd functions.
Even functions have reflection symmetry. In other words, they are invariant
to flipping around the vertical axis (Figure 1.28). Mathematically, a signal is
called even if
x(t) = x(−t), for continuous-time signals,
x[n] = x[−n], for discrete-time signals.
Odd functions have rotation symmetry. In other words, they are invariant
to a rotation of 180 degrees around the origin (Figure 1.29). Mathematically,
a signal is called odd if
x(t) = −x(−t), for continuous-time signals,
x[n] = −x[−n], for discrete-time signals.

Figure 1.28: An even signal has reflection symmetry about the vertical axis. In
other words, even functions are invariant to reflection, when they are flipped
around the vertical axis.

Figure 1.29: An odd signal has rotation symmetry about the origin. In other
words, odd functions are invariant to rotation, when they are rotated 180
degrees around the origin.

Exercise 1.12: An important family of functions, which is widely used in
Digital Signal Processing (DSP) technology, is the parabola. Simply, a parabola
is defined as the trajectory generated by the points that are equidistant from a
focal point and a fixed line. It is represented by the following analytical form:

x(t) = at², ∀a ≠ 0.          (1.51)

Plot the parabolas for a = 1 and a = 2. Show that it is an even function.

Solution: Replacing t by −t, we get x(−t) = a(−t)² = at² = x(t). Plots are given in
Figure 1.30.

Figure 1.30: Plot of parabolas for a = 1, 2.

Exercise 1.13: Another important family of functions in DSP is called the
hyperbola. Simply, a hyperbola is defined as the trajectory generated by a moving
point, such that the difference of its distances from two fixed focal points
is always constant. A hyperbola is represented by the following analytical form:

x(t) = a/t, ∀a > 0. (1.52)


Plot the hyperbolas for a = 1 and 2. Show that it is an odd function.

Solution: Replacing t by −t, we get x(−t) = a/(−t) = −a/t = −x(t); hence
x(t) = −x(−t). Plots are given in Figure 1.31.

Exercise 1.14: Can a function be both even and odd? In other words, is
there a function, which is symmetric with respect to both the vertical axis and
the origin? If yes, give an example.

Solution: A function, x(t) is even and odd if it satisfies both of the following
equalities:
x(t) = x(−t) = −x(−t). (1.53)
The only function, which satisfies the above equalities, is x(t) = 0. In other
words, when there is no signal at all, the function is doubly symmetric.

Figure 1.31: Plot of hyperbolas for a = 1, 2.

Proposition: Any signal can be represented by its even and odd compo-
nents, as follows:

x(t) = Odd{x(t)} + Even{x(t)}, (1.54)


where
Odd{x(t)} = (1/2) (x(t) − x(−t)),          (1.55)

Even{x(t)} = (1/2) (x(t) + x(−t)).          (1.56)
The proof of the above proposition follows from the addition of even and
odd parts of the function.
The above decomposition of a signal into its even and odd parts reveals
an important property of the functions: Even if a function does not possess
any type of symmetry property, it can be represented by the decomposition of
functions with symmetry properties.

Exercise 1.15: Given the plot of a continuous-time signal, x(t), in Figure
1.32, plot its even and odd parts.

Solution: Even and odd parts of x(t) can be found using Equations (1.55)
and (1.56). These are shown in Figure 1.33.

Figure 1.32: An arbitrary continuous-time signal, x(t).

Figure 1.33: The odd and even parts of x(t) of Figure 1.32.

Exercise 1.16: Find and plot the even and odd parts of the following function:

x(t) = t³ − t + 1.          (1.57)

Solution: Using the definition of the even and odd parts of functions, given above,
we find Even{x(t)} = 1 and Odd{x(t)} = t³ − t. Plots are given in Figure 1.34.

Figure 1.34: Plot of x(t) = t³ − t + 1, and its even and odd parts.
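The decomposition of Exercise 1.16 is easy to verify numerically. The sketch below uses our own helper functions and a symmetric time grid, so that reversing the sample order plays the role of evaluating x(−t); it implements Equations (1.55) and (1.56) with numpy.

    import numpy as np

    def even_part(x):
        # (x(t) + x(-t)) / 2 on a symmetric time grid
        return 0.5 * (x + x[::-1])

    def odd_part(x):
        # (x(t) - x(-t)) / 2 on a symmetric time grid
        return 0.5 * (x - x[::-1])

    t = np.linspace(-2, 2, 401)           # symmetric grid, so x[::-1] corresponds to x(-t)
    x = t**3 - t + 1                       # the signal of Exercise 1.16

    xe, xo = even_part(x), odd_part(x)
    print(np.allclose(xe, 1.0))            # Even{x(t)} = 1
    print(np.allclose(xo, t**3 - t))       # Odd{x(t)}  = t^3 - t
    print(np.allclose(xe + xo, x))         # the two parts add back to x(t)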

The above definitions of even and odd functions can be trivially extended
to the discrete-time signals. Mathematically, a discrete-time signal is even if

x[n] = x[−n] (1.58)


and it is odd if

x[n] = −x[−n]. (1.59)


The theory of symmetry groups of functions is a very interesting field of mathematics,
which is beyond the scope of this book.

Decompose signals into their even and odd parts
@ https://384book.net/i0104
INTERACTIVE

1.6. Complex Signals Represented by Complex Functions
Most of the natural signals can be represented by functions with real or integer
valued domains and ranges. For example, we perceive the objects around us by
a world model, created in our brain, through the visible light reflected from the
objects, in real-valued time domain. Although we define the domain and the
range of the functions in the space of real numbers or integers, there are more
compact and efficient ways of representing the functions in some other vector
spaces spanned by a set of complex basis functions. Before diving into these
elegant vector spaces, let us briefly overview the space of complex numbers.
Complex numbers were first introduced by Gerolamo Cardano, in his book
Ars Magna, in the sixteenth century. They appear in a wide range of problems in
engineering, science, and mathematics. For example, when we need to find a solution
to x² + 1 = 0, the space of real numbers, R, does not enable us to provide
a value for the variable x. To find the solution to this type of equation, an
imaginary number,

j = √−1          (1.60)
is introduced to define a new vector space of complex numbers, C, where an
extra imaginary dimension is added to the real number space.
Despite the historical nomenclature, imaginary numbers are not imaginary
or nonsense at all. They are essential in many aspects of the scientific descrip-
tion of the natural and man-made world. Imaginary numbers can be considered
as an extension of the one-dimensional vector space of real numbers, R, to two-
dimensional space of complex numbers, C, called the complex plane. While
the first dimension quantifies the amount of real value, the second dimension
quantifies the imaginary part of a complex number.
Complex numbers can be equivalently represented in Cartesian and polar
coordinate systems, as described below.

Figure 1.35: Complex plane, where the real numbers are augmented by imaginary numbers.

1.6.1. Complex Numbers Represented in Cartesian Coordinate System
A complex number is represented in a Cartesian coordinate system C as
follows:

z = x + jy, (1.61)
where, x = Re{z}, is called the real part and y = Im{z} is called the imag-
inary part of the complex number z. Note that complex numbers form a
two-dimensional vector space spanned by the standard basis vectors,

c1 = [1 0] and c2 = [0 j]. (1.62)


Thus, any complex number z can be represented in terms of the linear combi-
nation of c1 and c2 , as follows:

z = xc1 + yc2 . (1.63)

Since one of the basis vectors has the imaginary number, basis vectors span
the complex plane. The complex plane C reduces to the space of real numbers,
R, for ∀y = Im{z} = 0. It reduces to the space of purely imaginary numbers,
I, for ∀x = Re{z} = 0. The complex plane is illustrated in Figure 1.35.
Arithmetic operations, such as addition, subtraction, division and multi-
plication of complex numbers are defined by considering the fact that the
imaginary number has a square,

j² = −1.          (1.64)
Reflection symmetry with respect to the real axis is called the complex
conjugate of the complex number z,

z ∗ = x − jy. (1.65)
Note that the multiplication of a complex number with its complex conju-
gate gives a real number,

z · z ∗ = x2 + y 2 , (1.66)
corresponding to the square of the Euclidean norm, which is measured by the
length of the vector z.

Exercise 1.17: Given the following complex numbers,

z1 = 1 + 2j and z2 = 2 + 3j, (1.67)

Find and plot the following.


a) z1 + z2 ,
b) z1 · z2 ,
c) z1 ÷ z2 .

Solution:
a) We apply vector addition in two-dimensional space of complex numbers, as
follows:

z1 + z2 = 3 + 5j. (1.68)
Geometrically speaking, the addition operation translates one of the vectors
by the amount of the other vector. Note that the addition operation is
commutative.
b) Multiplication operation is associative and distributive, resulting in

z1 · z2 = (1 + 2j)(2 + 3j) = 2 + 7j + 6j 2 = −4 + 7j. (1.69)


c) For division operation, first, we multiply the dividend and the divisor by
the complex conjugate of the divisor to ensure a real number at the de-
nominator. Then, we apply multiplication operation to the numerator and
divide both the real and imaginary part of the numerator by a real number
obtained in the denominator, as follows;

Figure 1.36: Plots of three complex numbers in Exercise 1.17.

z1 ÷ z2 = (1 + 2j)/(2 + 3j) = (1 + 2j)(2 − 3j)/13 = 8/13 + (1/13) j.          (1.70)
The plots of these complex numbers can be found in Figure 1.36.

When we represent the complex numbers in the polar coordinate system,


in the next subsection, we shall observe that the multiplication and division of
two complex numbers corresponds to rotation of the complex numbers.

1.6.2. Complex Numbers Represented in Polar Coordinate System and Euler's Number
A complex number, z, can be considered as a vector in the complex plane,
represented by the length of the vector and the angle between the vector and
x axis by defining the following relations,

x = Re{z} = r cos θ (1.71)


and

y = Im{z} = r sin θ, (1.72)


where r = |z|, is called the magnitude and θ = ∠z is called the phase of
the complex number z. Then, the complex number, z, can be equivalently
represented by

z = x + jy = r(cos θ + j sin θ). (1.73)


There is an important number in mathematics called the Euler’s number,
discovered by the great mathematician Jacob Bernoulli, while he was trying to

compute the compound interest of a bank account from the following series,

e = lim_{n→∞} (1 + 1/n)^n = ∑_{n=0}^{∞} 1/n! ≈ 2.71828182...          (1.74)

Like the irrational number π, Euler’s number, e, has a great impact in


mathematics. Euler’s number has a wide range of interesting properties. One
very important implication of Euler’s number is to relate the trigonometric
functions, such as cosine and sine of an angle, in the complex plane, through
Euler’s formula, as stated in the below proposition.

Learn more about the Euler number
@ https://384book.net/v0108
WATCH

Proposition: Euler’s Formula: For any real number θ,

ejθ = cos θ + j sin θ. (1.75)


Proof. We expand ejθ to Maclaurin series (Taylor series around θ = 0):

e^{jθ} = 1 + jθ + (jθ)²/2! + (jθ)³/3! + (jθ)⁴/4! + ⋯          (1.76)

We expand cos θ and j sin θ to Maclaurin series, and add them up:

cos θ + j sin θ = (1 − θ²/2! + θ⁴/4! − θ⁶/6! + ⋯) + j(θ − θ³/3! + θ⁵/5! − θ⁷/7! + ⋯).          (1.77)
By a simple arrangement, we show that the right hand side of the above
equations are the same. Therefore,

ejθ = cos θ + j sin θ. (1.78)



Proof. (Alternative proof) Let us define a function f (θ) as the ratio of
the right-hand side of the Euler’s formula to the left-hand side:

f(θ) = (cos θ + j sin θ) / e^{jθ}.          (1.79)
Showing that f (θ) = 1 would prove the Euler’s formula. Let us take the deriva-
tive of f (θ):

f ′ (θ) = −je−jθ (cos θ + j sin θ) + e−jθ (− sin θ + j cos θ) (1.80)
= −je−jθ cos θ + je−jθ cos θ − j 2 e−jθ sin θ − e−jθ sin θ = 0. (1.81)

Since f ′ (θ) = 0, we conclude that f (θ) is a constant function. Further, when


we evaluate f (0), we get 1. So, we conclude that f (θ) = 1.

Euler’s formula explained in simple group theory


@ https://384book.net/v0109
WATCH

Interestingly, for θ = π the Euler’s formula becomes

ejπ = −1. (1.82)


This beautiful equation contains two irrational numbers of infinite length,
namely, e and π and the imaginary number j, on the left-hand side. And it is
simply equal to -1.
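Euler's formula can also be checked numerically with Python's standard cmath module. The small test below is our own sketch; it compares e^{jθ} with cos θ + j sin θ at a few angles and evaluates e^{jπ}.

    import cmath
    import math

    for theta in (0.0, math.pi / 6, math.pi / 2, 1.0, math.pi):
        lhs = cmath.exp(1j * theta)
        rhs = complex(math.cos(theta), math.sin(theta))
        print(theta, abs(lhs - rhs) < 1e-12)     # True for every theta

    print(cmath.exp(1j * math.pi))               # (-1+1.2246e-16j), i.e. -1 up to rounding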
Euler’s formula enables us to represent any complex number, z = x + jy,
in the polar coordinate system, equivalently;

z = rejθ , (1.83)
where r ∈ R is the magnitude and 0 < θ < 2π is the phase of the complex
number. The relationship between the two coordinate systems is established
by

x = r cos θ and y = r sin θ (1.84)


and

r² = x² + y²  and  θ = arctan(Im{z} / Re{z}).          (1.85)

Exercise 1.18: Consider the following complex numbers, given in polar co-
ordinates,

z1 = 2ejπ/2 and z2 = 2ejπ . (1.86)


Compute the following:
a) z1 + z2
b) z1 · z2

c) z1 ÷ z2

Solution:
a) For addition operation in complex arithmetic, we need to use Cartesian
coordinate representation. Using the Euler’s formula, we get

z1 + z2 = 2(cos π/2 + j sin π/2) + 2(cos π + j sin π) (1.87)


= (2 cos π/2 + 2 cos π) + j(2 sin π/2 + 2 sin π) = −2 + 2j. (1.88)

Next, we use Euler’s formula again to convert the Cartesian representation


to the polar representation. Then, the magnitude is
r = √(Re{z}² + Im{z}²) = √(x² + y²) = √8 = 2√2,          (1.89)
and the phase is

θ = arctan(Im{z}/Re{z}) = arctan(2/(−2)) = 3π/4,          (1.90)

where we add π to the principal value −π/4 of the arctangent, because the point
−2 + 2j lies in the second quadrant. Thus, the polar representation becomes

z1 + z2 = 2√2 e^{j3π/4}.          (1.91)
Recall from the Cartesian representation of complex numbers, the addition
operation translates the complex numbers with respect to the other one.
b) For the multiplication of complex numbers in the polar coordinate system,
we multiply the magnitudes and add the phases in the exponent, as follows;

z1 · z2 = 4ej(π+π/2) = 4ej(3π/2) . (1.92)


Notice that the multiplication operation scales the amplitude of one complex
number by the magnitude of the other, and rotates it by the amount of the phase
of the other, in the counterclockwise direction (when that phase is positive).
c) For the division of complex numbers, we divide the magnitudes and subtract
the phases in the exponent,

z1 ÷ z2 = ej(π/2−π) = e−jπ/2 . (1.93)


Notice that the division operation divides the amplitude of the dividend by
that of the divisor. It rotates the dividend in the clockwise direction by
the phase of the divisor (when that phase is positive).

In summary, arithmetic operations on complex numbers have very nice
geometric interpretations.
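Python's built-in complex type and the cmath module make these geometric interpretations easy to explore. The snippet below (the variable names are ours) reproduces the Cartesian arithmetic of Exercise 1.17 and the polar conversions used in Exercise 1.18.

    import cmath

    # Exercise 1.17: Cartesian arithmetic
    z1, z2 = 1 + 2j, 2 + 3j
    print(z1 + z2)        # (3+5j)
    print(z1 * z2)        # (-4+7j)
    print(z1 / z2)        # approximately 0.6154 + 0.0769j, i.e. 8/13 + (1/13)j

    # Exercise 1.18: polar representation
    w1 = cmath.rect(2, cmath.pi / 2)            # 2 e^{j pi/2}
    w2 = cmath.rect(2, cmath.pi)                # 2 e^{j pi}
    s = w1 + w2                                 # addition is done in Cartesian form
    print(abs(s), cmath.phase(s))               # 2*sqrt(2) and 3*pi/4
    p = w1 * w2
    print(abs(p), cmath.phase(p))               # 4 and -pi/2 (equal to 3*pi/2 modulo 2*pi)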

Exercise 1.19: Find the Cartesian representation of the following complex


number;

z = ejπ/2 + j. (1.94)

Solution: Using the Euler’s formula, we get

z = ejπ/2 + j = cos π/2 + j sin π/2 + j = 2j. (1.95)


Since the real part is 0, the number, z = ejπ/2 + j, is purely imaginary.

1.6.3. Complex Functions


Based on the above introduction to complex numbers, we can define a complex
function, as follows:

X : z → X(z), (1.96)
where the domain of the function is the complex variable z ∈ C and the range
is X(z) ∈ C.
Then, a complex function can be represented by its real and imaginary
parts, in Cartesian Coordinate system, as follows:

X(z) = Re{X(z)} + jIm{X(z)} (1.97)


Similarly, a complex function can be represented in polar coordinate system
as follows:

X(z) = |X(z)|ejΘ , (1.98)


where the magnitude and phase of the complex function, X(z) is
|X(z)| = √( Re²{X(z)} + Im²{X(z)} )          (1.99)

and

Θ = ∠X(z) = tan⁻¹( Im{X(z)} / Re{X(z)} ),          (1.100)
respectively.
Arithmetic operations on complex functions are trivial extensions of the
complex numbers, as depicted in the following exercises.

Exercise 1.20: Given the following complex functions, in Cartesian coordi-
nate system,

X1 (z) = 1 + jz and X2 (z) = 2 + jz, (1.101)


find the results of the following arithmetic operations:
a) X1 (z) + X2 (z),
b) X1 (z) · X2 (z),
c) X1 (z) ÷ X2 (z).

Solution:
a) We apply vector addition in the Cartesian form of the functions in two-
dimensional space of complex numbers, as follows:

X1 (z) + X2 (z) = 3 + j2z. (1.102)


b) We apply multiplication operation in two-dimensional space of complex
numbers as follows:

X1 (z) · X2 (z) = 2 + 3jz + j 2 z 2 = (2 − z 2 ) + 3jz, (1.103)


where the real part

Re{X(z)} = 2 − z 2 (1.104)
and the imaginary part

Im{X(z)} = 3z (1.105)
of X(z) are functions of the complex variable z.
c) For division, we simply multiply both divider and the dividend with the
complex conjugate of the divider. This simple trick makes the divider a
real number, reducing the vector to vector division to a scalar multipli-
cation of a vector, as follows:

X1(z) ÷ X2(z) = (1 + jz)/(2 + jz) = (1 + jz)(2 − jz)/(4 − j²z²)
              = (2 + z²)/(4 + z²) + (1/(4 + z²)) jz.          (1.106)
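Since z is just a complex number, the algebra of Exercise 1.20 can be sanity-checked by evaluating both sides at an arbitrary value of z (the value below is our own choice):

    z = 0.7 + 0.3j

    X1 = 1 + 1j * z
    X2 = 2 + 1j * z

    print(X1 + X2, 3 + 2j * z)                                       # part (a)
    print(X1 * X2, (2 - z**2) + 3j * z)                              # part (b)
    print(X1 / X2, (2 + z**2) / (4 + z**2) + 1j * z / (4 + z**2))    # part (c)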

Exercise 1.21: Given the following complex functions, in polar coordinate


system,

X1 (z) = zejπ and X2 (z) = ejπz , (1.107)

find the results of the following arithmetic operations:
a) X1 (z) + X2 (z),
b) X1 (z) · X2 (z),
c) X1 (z) ÷ X2 (z).

Solution: When the complex functions are rather complicated, addition


and subtraction operations are mostly done in Cartesian coordinate system,
whereas multiplication and division are relatively easier in polar coordinate
system, as we demonstrate in the following solutions.
a) Simplifying the addition of two functions in the polar form is not pos-
sible. Thus, we convert the polar form of the complex functions, X1 (z)
and X2 (z) into Cartesian form representation by employing the Euler’s
formula:

X1 (z) = zejπ = z(cos π + j sin π) = −z (1.108)


and
X2 (z) = ejπz = cos πz + j sin πz. (1.109)
Using vector addition, we get the addition of two functions in Cartesian
coordinate system, as follows;

X1 (z) + X2 (z) = (cos πz − z) + j sin πz. (1.110)


The magnitude of the addition is

|X1(z) + X2(z)| = √( (cos πz − z)² + sin² πz ).          (1.111)

The phase of the addition is

∠(X1(z) + X2(z)) = tan⁻¹( sin πz / (cos πz − z) ).          (1.112)

Then, the polar representation of the addition becomes

X1(z) + X2(z) = √( (cos πz − z)² + sin² πz ) e^{j tan⁻¹( sin πz / (cos πz − z) )}.          (1.113)

b) We apply multiplication operation in polar coordinate system, as follows;

X1 (z) · X2 (z) = zejπ · ejπz = zejπ(1+z) , (1.114)


c) We apply division operation in polar coordinate system, as follows;

X1 (z) ÷ X2 (z) = zejπ ÷ ejπz = zejπ(1−z) . (1.115)

The simple examples above show that the arithmetic operations on complex
functions result in real and imaginary parts in the Cartesian coordinate system
and result in the magnitude and phase in polar coordinate system, as functions
of a complex variable, z = x + jy = |z|ejθ , where the relationship between the
polar and Cartesian representation is given by

x = |z| cos θ,   y = |z| sin θ,

|z| = √(x² + y²),   θ = tan⁻¹(y/x).
As we shall see throughout this book, complex functions have very interesting
symmetry properties, which simplify the analysis of signals and systems for a
wide range of problems that deal with real functions.

1.7. Chapter Summary


What are systems? What are signals? How are they related to each other?
What are the examples of systems in nature? What are the examples of man-
made systems? How can we represent signals and systems mathematically, so
that we can investigate the laws which govern the natural systems based on
the signals they generate? How can we design man-made systems? How can
we operate on signals? What are the specific types of signals, which enable us
to model a wide range of general signals? What is a complex signal?
In this chapter, we provide introductory answers to the above questions.
We define a system as a unified collection of interrelated and interdependent
parts, governed by a set of invariant laws. We define signals as anything that
serves to indicate or communicate information. Signals can be considered as the
measurements of our varying observations about a system and/or its parts. De-
pending on the nature of the underlying physical phenomena, signals can take
different forms, called continuous-time, discrete-time, or digital signals. The
human body, a single cell, and a molecule are some examples of natural sys-
tems. Computers, vehicles, and telecommunication devices are some examples
of human-made systems.
A wide range of systems can be represented by a set of equations and/or
algorithms. On the other hand, a wide range of signals can be represented
by functions. Thus, we can use mathematical tools available in calculus, linear
algebra, differential equations and algorithms to model and analyze signals and
systems.
Some specific types of functions, that represent signals, can be used as the
building blocks of a general class of signals. These functions possess symmetry
properties. A very popular set of functions, which have time translation sym-
metry are the periodic functions. Also, even signals have reflection symmetry

and odd signals have rotation symmetry. Functions, which have some type of
symmetry, can be used to represent asymmetric signals.
Some signals have domains and ranges of complex numbers. In order to
represent this type of signal, we use complex functions, where the domain and
range are represented by two-dimensional complex numbers. Complex functions
are very powerful mathematical objects in solving many real-life problems,
such as designing digital systems.

Problems

1. Consider the plot of the signal x(t) given in Figure P1.1.


Figure P1.1
a) Find the analytical expression of this function.
b) Find and plot the following functions:
i) 3x(4t + 8)
ii) x(−4t + 8)
iii) x(−4t − 8)

2. Consider the plot of the signal x(t) given in Figure P1.2.


Figure P1.2
a) Find an analytical expression for this figure.
b) Find and plot x(t/2 + 1).
c) Find and plot x(2t + 1).

3. Consider the plot of the signal x[n], given in Figure P1.3.

Figure P1.3
a) Find and plot x[1 − n].
b) Find and plot x[2n + 2].
c) Find and plot y[n] = x[2n + 2] + x[1 − n].

4. Consider the plot of the odd signal x(t), given in Figure P1.4.
a) Find an analytical expression for this signal using the symmetry property.
b) Find and plot y(t) = 2x(2t − 3). Is this an odd function?
c) Find and plot y(t) = 2x(2t). Is this an odd function?
Figure P1.4

5. A continuous-time signal x(t) is given in Figure P1.5. Find the analytical


expression of each of the following signals. Plot them all.
a) y(t) = x((1/2) t − 2)
b) y(t) = x(1 − 2t)
c) y(t) = x(2t)

Figure P1.5

6. A discrete-time even signal x[n] is shown in Figure P1.6. Find the analytical
expression for each of the following signals and plot them all.
a) Find an analytical expression of this signal using the symmetry property.
b) y[n] = x[2 − n]
c) y[n] = x[(1/2) n + 1]

Figure P1.6

7. Determine whether or not the following signals are periodic. Determine the
fundamental period of the periodic functions.
a) x1[n] = cos((5π/2) n)
b) x2[n] = sin(5n)
c) x3(t) = 5 sin(4t + π/3).

8. Consider the following signal,

x(t) = sin((m + 1/2) t) / (2 sin(t/2)),

where m is a natural number. Find the fundamental period of this signal.

9. Find and plot the even and odd parts of the continuous-time signal given
in Figure P1.8.
Figure P1.8

10. Determine whether or not each of the following continuous-time signals is


odd, even, or neither.
a) x(t) = cos(3πt)
b) x(t) = 2ej(3t−2)
c) x(t) = sin(2π) + 3 cos(π)
d) x(t) = −2t · cos((1/4) π)

11. Given the odd signal below,


x[n] = sin((3π/4) n),

show that

∑_{n=−∞}^{+∞} x[n] = 0.

12. Given the signals below,

x1[n] = cos(n) and x2[n] = sin(n),


show that y[n] = x1 [n]x2 [n] is an odd signal.
13. Determine whether the following functions are periodic, even, odd, power
and energy signals. If they are periodic, find the fundamental period.

a) x(t) = cos(4t + π/9).
b) x[n] = 2 sin[(π/3) n].
14. Consider the following complex number:
z = (√2 + √2 j) / (2 + 2√3 j).

a) Find the real and imaginary part of this number.


b) Find the magnitude and phase of this number.

15. Consider the following complex number:

z = e5j + e7j .

a) Find the magnitude and phase of this number.


b) Find the real and imaginary part of this number.

16. Consider the following complex number:


z = (√3 − √3 j)^40.

a) Find the magnitude and phase of this number.


b) Find the real part and the imaginary part of this number

17. Solve the following and show all solution steps in detail. Simplify your results
as much as possible to the format: a + jb, where a and b are real numbers.
a) z = 1/4 − (1/3) j ⇒ 1/z = ?
b) z1 = 4 − 3j, z2 = −j ⇒ |z1 · z̄2 + z2 · z̄1| = ?
c) z = (1 + √3 j)² (2 − j) / (1 + 2j)³ ⇒ |z| = ?

18. Evaluate the following integrals and show all solution steps in detail.
(a) ∫ t² e^{−(√2 + √2 i) t} dt
(b) ∫ e^{−t} cos((π/4) t + π/6) dt
19. Find the real and imaginary parts of the following complex functions, where

z = x + jy.

a) X(z) = cos z
b) X(z) = (z + 1) sin z
c) X(z) = z 3 + 5z − 1

20. Find the magnitude and phase of the following complex functions, where

z = |z|ejθ .

a) X(z) = sin z
b) X(z) = z 2 cos z
c)X(z) = ejz + e3jz

21. Take the following integrals:


R 1+j
a) 0 ez dz
R 1+j
b) 0 z 3 dz

22. Write a computer program to plot the even and odd parts of a discrete-
time signal x[n]. Your program takes the signal and the starting index(si )
of the signal as input. For example, let’s say x[n] = [1, 6, 8, 9] and si = 3,
then x[3] = 1, x[4] = 6, x[5] = 8, x[6] = 9 and x[n] = 0 for other n values.

You should add your codes and the outputs for the given 3 input files
(sine part a.csv8 , shifted sawtooth part a.csv9 , chirp part a.csv10 ) to your
solution. The first element in the files is the starting index and remaining
ones are the elements of the signal.

23. Write a computer program to plot the shifted and scaled version x[an + b] of
a discrete-time signal x[n]. Your program takes the signal and the starting
index(si ) of the signal as input. Differently from part a, you should also
take a and b values as input.

8 https://384book.net/resources/sine part a.csv
9 https://384book.net/resources/shifted sawtooth part a.csv
10 https://384book.net/resources/chirp part a.csv

You should add your codes and the outputs for the given 3 input files
(sine part b.csv11 , shifted sawtooth part b.csv12 , chirp part b.csv13 ) to your
solution. The first element in the files is the starting index, the second ele-
ment is the value of a, the third element is the value of b and the remaining
ones are the elements of the signal.

You should write your code in Python and no library is allowed other than
matplotlib.pyplot.

11 https://384book.net/resources/sine part b.csv
12 https://384book.net/resources/shifted sawtooth part b.csv
13 https://384book.net/resources/chirp part b.csv

Chapter 2
Basic Building Blocks of
Signals

With a bucket of Lego toys, we can construct many objects, as long


as we can imagine. With a set of simple functions, we can construct
many complicated signals, as long as we speak the language of math-
ematics!

In Chapter 1, we introduced the general concepts of the signals and systems,


together with some mathematical background needed for the rest of the
book. As we mentioned before, we may attempt to model and analyze natural
objects, such as the human brain or we may design and implement man-made
objects, such as a computer vision system.
Unfortunately, the available mathematical tools fall short of modeling all
classes of systems and signals. However, it is possible to adopt a set of “valid” as-
sumptions and rules to decompose a complicated system into inter-connectable
subsystems for modeling a large class of systems.
In this chapter, we explore some basic building blocks of signals, which
enable us to design and implement mathematically tractable and realizable
models with systems approach.

2.1. LEGO Functions of Signals


There are some basic functions, which can be used to represent a large variety
of signals, such as speech and music. Just like playing with LEGO toys, we can
play with the basic functions to construct relatively more complicated signals,
using the available mathematical tools. Most of these functions have symme-
try properties. In other words, they are invariant with respect to some type
of transformations. In Chapter 1, we saw the class of periodic, even, and odd

59
functions. These functions were symmetric with respect to translation, reflec-
tion, and rotation, respectively. In this section, we shall investigate additional
basic functions, namely,
1. exponential functions,
2. the unit impulse function,
3. the unit step function,
Later in this book, we shall see that linear combinations of the basic func-
tions can be used to represent a large class of signals.

2.2. King of the Functions: Exponential Function
One of the most important functions in mathematics is the exponential func-
tion. It is widely used to model natural phenomena, such as the growth of can-
cer cells or uncontrolled forest fires. It is, also, extensively used in man-made
systems, for example in the “softmax layer” of Artificial Neural Networks, or
the non-linear activation functions.
Loosely speaking, an exponential function is a function whose derivative
(the amount of increase or decrease) with respect to its variable is always
proportional to the function itself.
Formally speaking, the exponential function, in continuous-time is defined
as follows,

x(t) = Ceαt , (2.1)


whereas the exponential function, in discrete time is defined as follows,

x[n] = Ceαn , (2.2)


where

e = ∑_{n=0}^{∞} 1/n! ≈ 2.71828182...          (2.3)

is a transcendental number, called, the Euler number.


There are two parameters of an exponential function; the amplitude, C,
and the parameter of exponent, α. Both parameters can be real or complex
numbers. Depending on the parameters of the exponential function, we shall in-
vestigate two types of exponential functions, namely, real exponential func-
tion and complex exponential function, for both continuous-time and
discrete time signals.

2.2.1. Real Exponential Function
A real exponential function has real parameters, C ∈ R and α ∈ R. Real
exponential functions can be used to represent continuous-time or discrete
time signals.

Continuous-Time Real Exponential Function. A continuous-time real


exponential function is a monotonically increasing function, when α > 0 and
it is a monotonically decreasing function, when α < 0, as shown in Figure 2.1.


Figure 2.1: Continuous time exponential, x(t) = Ceαt , (left) for α > 0 and
(right) α < 0.

When both parameters, C = α = 1, then we obtain natural exponential


function, as follows,

x(t) = et . (2.4)
The inverse of the natural exponential function is called the natural log-
arithm, which provides the variable t, as follows,

t = ln x(t). (2.5)
The natural exponential function has special importance in mathematics. It
is the only function whose derivative and integral are the same as the function
itself, as formally stated in the following Lemma.

Lemma 2.1: The first derivative of the natural exponential function is et


itself:

de^t/dt = e^t,          (2.6)

and the integral of the real exponential is itself, up to an additive constant:

∫ e^t dt = e^t + c.          (2.7)

Proof. In order to find the derivative of et , we simply use the definition
of the derivative,

de^t/dt = lim_{h→0} (e^{t+h} − e^t)/h = lim_{h→0} e^t (e^h − 1)/h = e^t lim_{h→0} (e^h − 1)/h.          (2.8)

The limit term on the right-hand side of Equation (2.8) is equal to 1, because
e^h − 1 behaves like h as h → 0; that is,

lim_{h→0} (e^h − 1)/h = 1.          (2.9)

Thus,
de^t/dt = e^t.          (2.10)
To show that the integral of the real exponential is itself (up to a constant), we
simply take the derivative of both sides of Equation (2.7) with respect to t. The
fundamental theorem of calculus gives us

(d/dt) ∫ e^t dt = e^t,          (2.11)

which is equal to the derivative of the right-hand side of Equation (2.7). □
The above Lemma shows that continuous-time real exponentials with base
e are symmetric functions with respect to the derivative and integration oper-
ations.

Definition 2.1: The general exponential function for any parameter,


β, is defined as follows;

x(t) = β t . (2.12)
The inverse of the general exponential function is defined by base-β loga-
rithm, as follows;

t = logβ x(t). (2.13)

Motivating Question: Why do we call et as natural exponential?


The natural exponential function enables us to study all kinds of general
exponential functions. Because, the exponential function for any base param-
eter, β ∈ R, can be written in terms of the natural exponential, with base-e,
as shown by the following Lemma.

Lemma 2.2: Any general form of an exponential function can be written in


terms of the natural exponential function, et , i.e.,

x(t) = Aβ t = Aeαt . (2.14)

Proof. Note that for any real number a ∈ R,

a = eln(a) . (2.15)
Thus,
β^t = e^{ln(β^t)} = e^{t ln β}.          (2.16)
Then, for any A and β,

x(t) = Aβ t = Aeαt , (2.17)


where α = ln β.

Exercise 2.1: Show that the following algebraic properties hold for the real
exponential function:
a) e^{αt+β} = e^{αt} e^{β}
b) e^{−t} = 1/e^{t}
c) e^{αt−β} = e^{αt} / e^{β}
d) e^{αt} = (e^t)^α, ∀α rational.

Solution:
a) Take the logarithm of the left hand side of the equation,

ln(eαt+β ) = αt + β = ln eαt + ln eβ = ln(eαt eβ ).

Recall that ln t is a monotonically increasing function. Thus, it is one-to-one


and onto. Then,

eαt+β = eαt eβ .
b) Note that the 0th power of any real number is 1. Then,

e0 = 1 = e(t−t) = et e−t .

Therefore, e−t = 1/et .


c) Let’s rephrase the left hand side of the equation

eαt−β = eαt+(−β) = eαt e−β = eαt /eβ

d) For α = n is integer,

ent = e(t+t+...+t) = et et et ...et = (et )n .

For rational α = n/m,

e^{(n/m)t} = (e^{(1/m)t})^n = ((e^t)^{1/m})^n = (e^t)^{n/m}.

Figure 2.2: Discrete time real exponential function for (a) β > 1, (b) 0 < β < 1,
(c) −1 < β < 0, (d) β < −1.

Discrete Time Real Exponential Function. Discrete time real exponential


function is an exponential function, which is only defined at every integer value
of time instance.
The analytic form of discrete time real exponential function is

x[n] = Cβ n = Ceαn , (2.18)


where β = eα , α, β are real valued numbers and n is integer valued variable.
Discrete time real exponential function can be represented by a sequence of
real numbers, where each value of the sequence is generated by Equation (2.18)
for n = 0, ±1, ±2, ±3, ... The shape of the function depends on the value of the
β or α = ln β parameter. There are four different forms of a discrete real
exponential function (see; Figure 2.2):
- For β > 1, it increases monotonically,
- For 0 < β < 1, it decreases monotonically,
- For −1 < β < 0, it alternates, while its absolute value decreases mono-
tonically,

- For β < −1, it alternates, while its absolute value increases monotonically.
When we work with the discrete time exponential functions, we always keep
in mind that the time variable n takes only integer values. For example, the
logarithm of the discrete time exponential function, x[n] = β n ,

n = log_β x[n]          (2.19)

exists only at integer values of n.
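The four regimes of Figure 2.2 are easy to reproduce. The short script below is our own illustration; it plots Cβ^n (with C = 1) for four representative values of β, using only matplotlib.

    import matplotlib.pyplot as plt

    n = list(range(10))
    betas = [1.3, 0.7, -0.7, -1.3]     # beta > 1, 0 < beta < 1, -1 < beta < 0, beta < -1

    fig, axes = plt.subplots(2, 2)
    for ax, beta in zip(axes.ravel(), betas):
        ax.stem(n, [beta ** k for k in n])   # x[n] = beta^n, defined only at integer n
        ax.set_title(f"beta = {beta}")
    plt.tight_layout()
    plt.show()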

2.2.2. Complex Exponential Function


When the parameter, α, of the exponential functions for a continuous time
signal,

x(t) = Ceαt , (2.20)


and a discrete time signal,

x[n] = Ceαn , (2.21)


is a complex number, in other words,

α = a + jω0 , (2.22)
then we call them complex exponential functions. Adding an extra imaginary
dimension to the parameter α in the exponent changes the behaviour of
the exponential function, as we shall see in the next section.
In the context of this book, we focus on a special form of complex functions,
where α is purely imaginary. In other words, a = 0.

Continuous-Time Complex Exponential Functions. We define the continuous-time complex exponential function as follows,

x(t) = Cejω0 t . (2.23)


Note that the time variable is always a real number, in all of the functions,
t ∈ R. The amplitude, C, may or may not be a complex number in complex
exponential. Euler formula, introduced in Chapter 1, enables us to represent
the continuous time complex exponential function in terms of trigonometric
functions in the complex plane:

x(t) = Cejω0 t = C(cos ω0 t + j sin ω0 t). (2.24)


Continuous time complex exponential function has a very crucial symmetry
property: It is periodic! This property has a special importance in representing

signals in function spaces.
Let us prove the periodicity property of continuous time complex exponen-
tial in the following Lemma.

Lemma 2.3: Given a continuous time complex exponential function, x(t) =


ejωt , there exists a finite, non-zero value, T ∈ R, such that,

ejω0 t = ejω0 (t+T ) . (2.25)

Proof. Let us split the complex exponential on the right-hand side of the
equation into two expressions;

ejω0 (t+T ) = ejω0 t ejω0 T . (2.26)


In order to satisfy the periodicity property of complex exponential, we need,

ejω0 T = 1. (2.27)
Using the Euler formula, we get,

ejω0 T = cos ω0 T + j sin ω0 T. (2.28)


When the period of the complex exponential function satisfies

T = k 2π/|ω0|,          (2.29)

we can write

e^{jω0 T} = cos(ω0 2πk/|ω0|) + j sin(ω0 2πk/|ω0|) = cos(2πk) + j sin(2πk).          (2.30)

When k is an integer, the imaginary part of the above equation vanishes and
the real part becomes 1, so e^{jω0 T} = 1. Thus, a complex exponential function is
periodic, with the fundamental period

T0 = 2π/|ω0|.          (2.31)

Learn more about the Euler formula, which explains the complex exponential as rotation
@ https://384book.net/v0201
WATCH

Harmonically Related Complex Exponential. Harmony is an important


universal concept in arts and sciences. In music, harmony means simultaneously

occurring harmonically related sounds of musical instruments and voices. In
painting, color harmony refers to combining the colors of different frequencies
in a way that is harmonious to the human eye (Figure 2.3).

Figure 2.3: An example of a painting by Robie Benve, “Japanese Maple Tree”,
created with the support of Color Harmony Theory. The fundamental color is
red. The harmonics are the complementary color pairs.8
painting, color harmony refers to combining the colors of different frequencies
in a way that is harmonious to the human eye (Figure 2.3).
Musical tunes, human voice, photographs and paintings are all measurable
signals, which can be represented by functions. Thus, the aesthetic values of
harmony can be quantified to a certain extent by using the mathematics of
harmony.
In the context of this book, we investigate the mathematical properties
of harmonically related functions (signals), specifically, harmonically related
complex exponential. Interestingly, harmonically related complex exponentials
provide us with a set of orthogonal basis functions, which span a vector space,
called function space. As we shall study in Chapter 6, in a function space, a
large class of functions can be represented in terms of weighted summation of
harmonically related complex exponentials.
Motivating question: What do harmonically related frequencies or func-
tions mean in mathematics?
A complex variable in polar coordinates,

z = ejω0 , (2.32)
moves on a unit circle with radius r = 1, as we change the angle, ω0 , in the

8 https://www.artranked.com/topic/Simple+Japanese#&gid=1&pid=34

interval of 0 ≤ ω0 ≤ 2π, in the complex plane (Figure 2.4).

Figure 2.4: Periodic motion represented by a complex exponential function, in
continuous time: unit circle on the complex plane, with r = 1 and
z = x + jy = e^{jω0}.


The complex exponential function in continuous time,

x(t) = ejω0 t = cos ω0 t + j sin ω0 t, (2.33)


includes a time dimension, t, which is perpendicular to the complex plane.
Thus, like the complex variable z, complex exponential rotates on a unit circle
as a function of time, drawing a spiral along the time axis (Figure 2.5)
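The spiral can be visualized directly. The sketch below is our own (it assumes a recent matplotlib where 3D axes are available via the projection keyword) and plots the real and imaginary parts of e^{jω0 t} against time.

    import numpy as np
    import matplotlib.pyplot as plt

    w0 = 2 * np.pi                      # one full rotation per unit of time
    t = np.linspace(0, 3, 600)
    x = np.exp(1j * w0 * t)             # continuous-time complex exponential, densely sampled

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(t, x.real, x.imag)          # a spiral along the time axis, as in Figure 2.5
    ax.set_xlabel("t")
    ax.set_ylabel("Re")
    ax.set_zlabel("Im")
    plt.show()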

Visualization of the continuous time complex exponential
@ https://384book.net/v0202
WATCH

The real part, Re{x(t)} = cos ω0 t, and the imaginary part, Im{x(t)} =
sin ω0 t, of the complex exponential are also periodic with the fundamental
period,

T0 = 2π/|ω0|,          (2.34)

as shown in Figure 2.4. The fundamental period, T0, is, then, the amount of
time, in seconds, needed to complete one tour around the unit circle.
Motivating Question: What if we multiply the angular frequency ω0 by
an integer value k?
The fundamental period becomes,

T0 = 2π/(k |ω0|).          (2.35)

Figure 2.5: Periodic motion represented by a complex exponential function,
in continuous time: (top left) real part of the complex exponential function,
Re{e^{jω0 t}} = cos ω0 t; (top right) imaginary part, Im{e^{jω0 t}} = sin ω0 t;
(bottom) the complex exponential function, x(t) = e^{jω0 t}.

In the above equation, the angular frequency, ω0 , is increased by a factor


of k, while the fundamental period is decreased by a factor of k. Then, the
complex exponential function completes a cycle in a shorter time on the unit
circle, drawing spirals at a faster rate, as we increase the integer value k.

Definition 2.2: Harmonically related set of complex exponentials


are defined as

ϕk (t) = ejkω0 t , ∀k = 0, ∓1, ∓2, ∓3, . . . (2.36)


Harmonically related complex exponential functions have angular frequen-
cies, called harmonics, which are integer multiples of the angular frequency,

ωk = kω0 ∀k = 0, ∓1, ∓2, ∓3, . . . (2.37)

Definition 2.3: Given a periodic real-time signal, x(t) with a fundamental


frequency f0 , the harmonically related periodic signals are defined as the

set of all signals,

xk (t), ∀k = 0, ∓1, ∓2, ∓3, . . . , (2.38)


where each xk (t), has a fundamental frequency of kf0 , called harmonics.

Harmonically related periodic functions have the fundamental frequencies,


which are the integer multiple of the fundamental frequency, f0 of a function,
x(t). Like the concept of symmetry, harmonically related signals have aesthetic
values. For example, behind the art of composing music, there is a science of
musical harmony for combining the harmonically related musical tunes. Sim-
ilarly, great painters use color harmony theory, where they use harmonically
related colors. Nature, as a whole, bears infinitely many harmonically related
signals, which we can partially observe all over the universe. A popular ex-
ample is the harmonically related electromagnetic waveforms, that make the
observable universe.

Learn more about the fundamental frequency and its harmonics in music
@ https://384book.net/v0203
WATCH

Exercise 2.2: Consider the superposition of the following complex exponen-


tial signal
x(t) = (1/2)(e^{jω0 t} + e^{−jω0 t})          (2.39)
a) Find its trigonometric form.
b) Find its angular frequency and the fundamental period.
c) Find and plot the superposition of the first and second harmonics of x(t),
given below,

x1(t) + x2(t),          (2.40)

for ω0 = 1.

Solution:
a) Use the Euler formula, which relates the complex exponential to the trigono-
metric functions,

ejω0 t = cos ω0 t + j sin ω0 t. (2.41)


Then,

x(t) = (1/2)(cos ω0 t + j sin ω0 t + cos ω0 t − j sin ω0 t) = cos ω0 t.          (2.42)
2
b) Angular frequency is ω0 and the fundamental period is T0 = 2π/ω0 .
c) Recall that the k th harmonic of x(t) is defined as,
xk(t) = (1/2)(e^{jkω0 t} + e^{−jkω0 t}).          (2.43)
Superposition of the first two harmonics of x(t) for ω0 = 1 is

x1 (t) + x2 (t) = cosω0 t + cos 2ω0 t = cos t + cos 2t. (2.44)


The plot of x1 (t) + x2 (t) is given in Figure 2.6.


Figure 2.6: Plot of the superposition of the first two harmonics of x(t).
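A few lines of code reproduce Figure 2.6 and make it easy to experiment with more harmonics; the grid and the value ω0 = 1 below are our own choices.

    import numpy as np
    import matplotlib.pyplot as plt

    w0 = 1.0
    t = np.linspace(-2 * np.pi, 2 * np.pi, 1000)

    # Superposition of the first two harmonics: x1(t) + x2(t) = cos(w0 t) + cos(2 w0 t)
    x = np.cos(w0 * t) + np.cos(2 * w0 * t)

    plt.plot(t, x)
    plt.xlabel("t")
    plt.title("x1(t) + x2(t)")
    plt.show()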

Exercise 2.3: Show that trigonometric functions, x(t) = cos ω0 t and x(t) =
sin ω0 t, can be represented in terms of the complex exponentials, ejω0 t .

Solution: Recall Euler formula

ejω0 t = cos ω0 t + j sin ω0 t, (2.45)


Similarly,

e−jω0 t = cos ω0 t − j sin ω0 t (2.46)


Thus,

cos ω0 t = (e^{jω0 t} + e^{−jω0 t}) / 2,          (2.47)

sin ω0 t = (e^{jω0 t} − e^{−jω0 t}) / (2j).          (2.48)
Note that, although sine and cosine functions are real functions, they can
be uniquely represented by the superposition of harmonically related complex
exponentials, for k = −1 and 1.

Exercise 2.4: Consider the following function:

x(t) = e2jω0 t + e4jω0 t .


a) Find the magnitude and phase of this function.
b) Find the real part and the imaginary part of this function.

Solution: a) Recall that a complex function, represented in polar form,

x(t) = C(t)ejθ(t) .
has the magnitude C(t) and the phase θ(t). Taking the average of the
exponents and defining the complex exponential with the average exponent,
we get,
x(t) = e3jω0 t (ejω0 t + e−jω0 t ) = (2 cos ω0 t)e3jω0 t .
Thus, the magnitude is C(t) = 2 cos ω0 t and the phase is θ(t) = 3ω0 t.
b) Recall that a complex function in Cartesian form is represented by,

x(t) = Re{x(t)} + jIm{x(t)}.


Using the Euler formula, we obtain,

x(t) = cos 2ω0 t + cos 4ω0 t + j(sin 2ω0 t + sin 4ω0 t).
Thus the real part is,

Re{x(t)} = cos 2ω0 t + cos 4ω0 t

and the imaginary part is,

Im{x(t)} = sin 2ω0 t + sin 4ω0 t.

Complex Exponential Function for Discrete Time Signals. The discrete time complex exponential function is an exponential function which is defined only at integer values of the time variable.
The analytic form of the discrete time complex exponential function is

x[n] = Ae^{jω0 n}. (2.49)

Most of the properties of the discrete time complex exponential function are very similar to those of the continuous time exponential function, except that n can only take integer values. This brings a serious constraint on the fundamental period, N0, which has to be an integer.
The Euler formula, introduced in Chapter 1, is also applicable to discrete time complex exponential functions, representing them in terms of trigonometric functions:

x[n] = Ae^{jω0 n} = A(cos ω0 n + j sin ω0 n). (2.50)


Motivating Question: Is the discrete time complex exponential periodic?
Not always!
As we mentioned above, the discrete time complex exponential is periodic if there exists an integer value N, which satisfies the following condition;

e^{jω0 (n+N)} = e^{jω0 N} e^{jω0 n} = e^{jω0 n}. (2.51)

The above equality is valid, if

e^{jω0 N} = 1. (2.52)

To satisfy the periodicity property of the discrete time complex exponentials, we need to find an integer period, N, satisfying ω0 N = 2kπ.
For the fundamental period, we need to find the smallest integer value k, such that N0 = (2π/ω0)k is an integer.

Exercise 2.5: Is the following signal periodic?

e−jn = cos n − j sin n

Solution: The angular frequency of the function is ω0 = 1. The period is,

N = 2kπ/ω0 = 2kπ. (2.53)


Due to the irrational number π, the period N cannot be an integer for any
value of k.
Therefore, the function e−jn is not periodic!

Exercise 2.6: Is the following signal periodic?

e−jπn = cos πn − j sin πn (2.54)

Solution: The angular frequency is π. The period is

N = 2kπ/π = 2k. (2.55)

For k = 1, the fundamental period is N0 = 2. Yes, it is periodic!

Exercise 2.7: Find the fundamental period of the following discrete time signals:
a) x[n] = e^{j(π/3)n} − e^{j(π/4)n}
b) x[n] = e^{j(2π/3)n} − e^{j(π/4)n}

Solution: a) The function x[n] has two components: x1[n] = e^{j(π/3)n} and x2[n] = e^{j(π/4)n}. The fundamental period of x1[n] is N1 = 6 and that of x2[n] is N2 = 8. The first component of x[n] repeats itself with period 6, and the second component with period 8. Then, the combined signal repeats itself with period N = 24, which is the least common multiple of 6 and 8.
b) As in part (a), x[n] has two components: x1[n] = e^{j(2π/3)n} and x2[n] = e^{j(π/4)n}. The fundamental period of x1[n] is N1 = 3 and that of x2[n] is N2 = 8. This time 8 is not an integer multiple of 3, so the fundamental period of x[n] is the least common multiple of 3 and 8, which is N = 24.
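
The periods in this exercise are also easy to check programmatically. Below is a minimal sketch that assumes the angular frequency is given as a rational multiple of π (passed as a Fraction); the helpers fundamental_period and lcm are our own, not the book's notation.

from fractions import Fraction
from math import gcd

def fundamental_period(r):
    # Fundamental period of x[n] = exp(j*w0*n) with w0 = r*pi, r a Fraction in lowest terms.
    # We need the smallest positive integer N with w0*N = 2*pi*k, i.e. r*N an even integer.
    p, q = r.numerator, r.denominator
    return 2 * q // gcd(2, p)

def lcm(a, b):
    return a * b // gcd(a, b)

# Exercise 2.7(a): w0 = pi/3 and pi/4
N1 = fundamental_period(Fraction(1, 3))     # -> 6
N2 = fundamental_period(Fraction(1, 4))     # -> 8
print(N1, N2, lcm(N1, N2))                  # 6 8 24

# Exercise 2.7(b): w0 = 2*pi/3 and pi/4
N1 = fundamental_period(Fraction(2, 3))     # -> 3
print(N1, lcm(N1, N2))                      # 3 24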

Explore the complex exponential signal @ https://384book.net/i0201 (INTERACTIVE)

2.3. Unit Impulse Function


The simplest basic building block of signals is the unit impulse function. If we played it as a sound, it would sound like an explosion over a very short time interval.
In the following, we shall investigate the discrete time and continuous time unit impulse functions.

2.3.1. Discrete Time Unit Impulse Function or Kronecker Delta Function

The discrete time unit impulse function is a real function, defined as

δ[n] = 1 for n = 0, and 0 otherwise. (2.56)
Discrete time unit impulse function has only one nonzero value at the origin,
which is equal to 1 (Figure 2.7).


Figure 2.7: Discrete time unit impulse function. We put a small dot at every
point to show that the height is either zero or finite value of 1, at n = 0.

Exercise 2.8: Find the value of the following function:

x[n] = (n + 2)δ[n].

Solution: The value of this function is non-zero at n = 0, only. Thus,

x[n] = 2δ[n].

Exercise 2.9: Simplify the function:

x[n] = (n − 2)δ[n − 2].

Solution: The value of this function is non-zero at n = 2, only. Thus,

x[n] = 0.

In general, we can write

x[n]δ[n − n0 ] = x[n0 ]δ[n − n0 ]. (2.57)
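
On sampled signals, the discrete time unit impulse and property (2.57) can be illustrated with a few lines of NumPy. This is an illustrative sketch; the helper shifted_delta is our own.

import numpy as np

# Discrete time unit impulse on a finite index grid n = -5, ..., 5.
n = np.arange(-5, 6)
delta = (n == 0).astype(float)          # delta[n]: 1 at n = 0, 0 elsewhere

def shifted_delta(n, n0):
    # delta[n - n0]: 1 at n = n0, 0 elsewhere
    return (n == n0).astype(float)

# Property (2.57): x[n] * delta[n - n0] = x[n0] * delta[n - n0]
x = n + 2                               # the signal of Exercise 2.8, x[n] = n + 2
n0 = 0
lhs = x * shifted_delta(n, n0)
rhs = x[n == n0][0] * shifted_delta(n, n0)
print(np.array_equal(lhs, rhs))         # True
print(lhs)                              # only the sample at n = 0 is nonzero (value 2)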

2.3.2. Continuous-Time Unit Impulse Function
Loosely speaking, a continuous time impulse function is a real valued function
with an infinite height and zero width and it integrates to one. Continuous time
impulse function can be simulated by a really big instantaneous explosion.
The formal definition of the continuous time unit impulse function requires
the concept of limit. First, we define a function, δ∆ (t) as,
δ∆(t) = 1/∆ for 0 < t < ∆, and 0 otherwise. (2.58)
Note that the area under δ∆(t) is equal to 1:

∫_{−∞}^{∞} δ∆(t) dt = 1. (2.59)

The plot of δ∆ (t) is given in Figure 2.8.


Figure 2.8: The plot of δ∆ (t) function. The width of this function is ∆ and the
height is 1/∆.

Unit impulse function in continuous time is defined as the limit of the δ∆(t) function, as follows;

δ(t) = lim_{∆→0} δ∆(t). (2.60)

This function has a peculiar shape, with zero width and infinite height.
However, the area under this function is finite;
∫_{−∞}^{∞} δ(τ) dτ = 1. (2.61)

The unit impulse, δ(t), is plotted as an arrow of height 1 (Figure 2.9).

Figure 2.9: Schematic representation of the unit impulse function in continuous time. We put an arrow on the function to indicate that the height is infinite.

Exercise 2.10: Plot the superposition of two shifted continuous time unit
impulse functions,

x(t) = δ(t − 1) + δ(t − 2). (2.62)

Solution:

Figure 2.10: Plot of x(t) = δ(t − 1) + δ(t − 2).

Multiplication of a function by the continuous time impulse function requires taking a limit. For sufficiently small ∆, we can write the following approximation:

x(t)δ∆(t) ≈ x(0)δ∆(t). (2.63)
Taking the limit of both sides, we obtain

lim_{∆→0} x(t)δ∆(t) = lim_{∆→0} x(0)δ∆(t). (2.64)

Thus,
x(t)δ(t) = x(0)δ(t). (2.65)
Similarly, we can show that,

x(t)δ(t − t0 ) = x(t0 )δ(t − t0 ). (2.66)

Exercise 2.11: Find the value of the following continuous time function:
x(t) = ∫_{−∞}^{∞} (t + 2)δ(t) dt.

Solution: Let us first evaluate the integrand:

(t + 2)δ(t) = 2δ(t).

Thus,
x(t) = 2 ∫_{−∞}^{∞} δ(t) dt = 2. (2.67)

2.3.3. Comparison of Discrete Time and Continuous Time Unit Impulse Functions
The discrete time impulse function is a bounded and realizable function, whereas the continuous time counterpart is unbounded and thus unrealizable. The continuous time impulse function is a somewhat hypothetical function.
When we deal with the continuous time unit impulse function, we prefer to
take its integral after the operations, such as multiplication or addition with
other functions. Otherwise, the operations are not mathematically tractable.
Note that we could give the definition of discrete time unit impulse function
by a simple analytical equation. However, we had to use limit or integral opera-
tion to define the continuous time unit impulse function. In mathematics, when
we do not have a direct definition of a mathematical object, we provide some
indirect definitions using mathematical tools. This second type of definition is called an operational definition.

2.4. Unit Step Function
The unit step function is a real function, which is 0 for negative values of its argument and 1 for nonnegative values of its argument.

2.4.1. Discrete Time Unit Step Function


Discrete time unit step function is analytically defined as follows;
u[n] = 1 for n ≥ 0, and 0 otherwise. (2.68)

It is customary to put little dots at the top of each bar of magnitude 1 or zero to show that the function exists only at integer values and is bounded for all values of n, as shown in Figure 2.11.


Figure 2.11: The plot of the discrete time unit step function, u[n].

2.4.2. Relationship Between the Discrete Time Unit Step and Unit Impulse Functions
There is an interesting relationship between the discrete time unit step and
unit impulse functions: One of them can be used to represent the other. Math-
ematically speaking,

u[n] = Σ_{k=0}^{∞} δ[n − k] (2.69)

and

δ[n] = u[n] − u[n − 1]. (2.70)
The first equation reveals that the discrete time unit step function is nothing but the superposition of the time-shifted discrete unit impulse functions with equal weights, for all nonnegative values of k. In other words, when we add the
shifted versions of unit impulse functions, δ[n], δ[n − 1], δ[n − 2], ..., we obtain
the unit step function (Figure 2.12).


Figure 2.12: Addition of the unit impulse and its shifted version, δ[n − 1]. It
is possible to generate the unit step function by adding the shifted impulse
functions, δ[n − k], for all k.

The second equation reveals that if we subtract the shifted unit step function, u[n − 1], from the unit step function, u[n], the infinitely many impulses cancel out and we are left with only the impulse at the origin, which is δ[n] (Figure 2.13).


Figure 2.13: We can also generate the unit impulse function by subtracting the
shifted unit step function from the unit step function, δ[n] = u[n] − u[n − 1].

Motivating Question: Consider any discrete time bounded function x[n].


Can we represent this signal by the superposition of the shifted discrete time
impulse functions?
The answer is yes! Interestingly, the bounded functions, which do not have
a closed analytical form, can be represented by the weighted summation of
shifted impulses.
When we multiply the value of the discrete time function x[n] by a shifted
impulse δ[n − k], we get,


Figure 2.14: A discrete time signal, which has non-zero values in the interval
−1 ≤ n ≤ 3.

x[n]δ[n − k] = x[k] for n = k (since δ[n − k] = 1 only when n = k), and 0 otherwise. (2.71)

If we sum these terms over all k, we recover x[n]:

x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k]. (2.72)

The above equation reveals that we can represent any bounded discrete
time function, x[n], analytically by using the weighted summation of the shifted
impulse functions, δ[n − k], where the weights correspond to the amplitude of
the function at the point k.
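
The representation (2.72) is easy to verify numerically. The following sketch rebuilds a finite-length signal from its shifted-impulse decomposition; the sample values are made up for illustration and are not those of Figure 2.14.

import numpy as np

# Represent a finite-length discrete time signal as a weighted sum of shifted
# unit impulses, x[n] = sum_k x[k] * delta[n - k]   (Eq. 2.72).
n = np.arange(-5, 6)
x = np.array([0, 0, 0, 0, 2, 1, 3, -1, 1, 0, 0], dtype=float)  # arbitrary example values

def delta(n, k):
    # shifted unit impulse delta[n - k]
    return (n == k).astype(float)

# Rebuild x[n] from its impulse decomposition, one shifted impulse per sample.
x_rebuilt = sum(x[i] * delta(n, k) for i, k in enumerate(n))

print(np.array_equal(x, x_rebuilt))     # True: the two representations agree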

Exercise 2.12: Find an analytical expression for the signal plotted in Figure
2.14.

Solution: This function does not have any closed-form representation. How-
ever, we can represent it by the superposition of the shifted impulse function,
x[n] = Σ_{k=−1}^{3} x[k]δ[n − k], (2.73)

where x[k] is the value of x[n] at point k.


Figure 2.15: A function, which consists of two shifted impulse functions.


Figure 2.16: Plot of continuous time unit step function, u(t). Note that this
function has a discontinuity at the origin.

Exercise 2.13: Find an analytical expression for the signal plotted in Figure
2.15.

Solution: We can represent this signal as the superposition of two discrete time impulse functions:

x[n] = 2δ[n − 1] − δ[n − 2]. (2.74)

2.4.3. Continuous-Time Unit Step Function


Continuous time unit step function (Figure 2.16) is defined as follows;

u(t) = 1 for t ≥ 0, and 0 otherwise. (2.75)
The above definition of the continuous time unit step function reveals that
there is a discontinuity at t = 0.

2.4.4. Comparison of Discrete Time and Continuous Time Unit Step Functions
Recall the analytical forms of the continuous-time and discrete-time unit step functions:

u(t) = 1 for t ≥ 0, and 0 otherwise, (2.76)

and

u[n] = 1 for n ≥ 0, and 0 otherwise. (2.77)

All we need to do is to replace the continuous variable t with the discrete variable n. Their analytical forms are the same. However, the discrete unit step function is undefined between integer values of n.

Relationship Between the Continuous Time Unit Step and Unit Im-
pulse Functions. Continuous time unit step and unit impulse functions are
related to each other with derivatives and integrals. Recall that we obtain the
continuous time impulse function by taking the limit of the δ∆ (t) function, as
follows:

δ(t) = lim δ∆ (t). (2.78)


∆→0

Recall that the area under the impulse function was obtained by integrating it,

∫_{−∞}^{∞} δ(τ) dτ = 1. (2.79)

We use the relationship between the well-behaved function δ∆(t) (Figure 2.8) and the unit step function,

δ∆(t) = (u(t) − u(t − ∆))/∆ (Figures 2.17 and 2.18). (2.80)

Figure 2.17: The well-behaved function δ∆(t) approaches the unit impulse function in the limit as ∆ → 0.

Figure 2.18: The well-behaved function δ∆(t) can be written in terms of two unit step functions, as δ∆(t) = (u(t) − u(t − ∆))/∆.

Table 2.1: Relationship between the unit step and unit impulse functions.

                     Continuous time                           Discrete time
Obtain u from δ      Integration: u(t) = ∫_{−∞}^{t} δ(τ) dτ    Summation: u[n] = Σ_{k=0}^{∞} δ[n − k]
Obtain δ from u      Differentiation: δ(t) = du(t)/dt          Difference: δ[n] = u[n] − u[n − 1]

Taking the limit as ∆ → 0, we get,

δ(t) = lim_{∆→0} δ∆(t) = lim_{∆→0} (u(t) − u(t − ∆))/∆ = du(t)/dt. (2.81)

Thus, we find the relationship between the unit impulse and unit step functions for the continuous time signals as follows:

δ(t) = du(t)/dt. (2.82)

If we integrate both sides of the above equation, we obtain,

u(t) = ∫_{−∞}^{t} δ(τ) dτ. (2.83)
Comparing the relationship between the unit impulse function and unit
step function in discrete time and continuous time signals, we observe that

• Difference operation in discrete time is replaced by the differentiation operation in continuous time,
• Sum operation in discrete time is replaced by the integral operation in continuous time.

These relationships are summarized in Table 2.1.
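
In NumPy, the two discrete time relations in Table 2.1 correspond to a cumulative sum and a first difference. The following is a minimal sketch on a finite index grid.

import numpy as np

n = np.arange(-5, 11)
delta = (n == 0).astype(float)          # unit impulse
u = (n >= 0).astype(float)              # unit step

# u[n] as the running sum of the impulse (discrete counterpart of integration)
u_from_delta = np.cumsum(delta)
print(np.array_equal(u, u_from_delta))                  # True

# delta[n] as the first difference u[n] - u[n-1] (discrete counterpart of differentiation)
u_shifted = np.concatenate(([0.0], u[:-1]))             # u[n-1] on the same grid
delta_from_u = u - u_shifted
print(np.array_equal(delta, delta_from_u))              # True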

Explore the relation between the unit impulse and the unit step functions @ https://384book.net/i0202 (INTERACTIVE)

Motivating Question: Consider any continuous time bounded function


x(t). Can we represent this signal by the weighted integral of continuous time
impulse functions?

Figure 2.19: We can slide the function δ∆ (t), all over the function x(t), as we
multiply and integrate them to obtain a relatively coarse representation of x(t).

Suppose that we are given a continuous time, bounded signal x(t), and suppose that we multiply x(t) by δ∆(t). What do we get? Assuming that ∆ is very small, we get a rectangular function of width ∆,

x(0)δ∆(t). (2.84)

When we shift the function δ∆(t) by τ and multiply it by x(τ), we get another rectangular function of width ∆,

x(τ)δ∆(t − τ). (2.85)


This is illustrated in Figure 2.19. Now let us integrate x(τ )δ∆ (t − τ ), in the
interval of −∞ < τ < ∞, to obtain an approximation of x(t), as follows;
x∆(t) = ∫_{−∞}^{∞} x(τ)δ∆(t − τ) dτ. (2.86)
Now let us take the limit of the above equation:

lim_{∆→0} x∆(t) = x(t) = lim_{∆→0} ∫_{−∞}^{∞} x(τ)δ∆(t − τ) dτ = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ, (2.87)

x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ. (2.88)
Similar to the discrete case, we can represent any continuous time, bounded signal, x(t), in terms of the weighted integral of shifted impulse functions, δ(t − τ), where the weights correspond to the value of the function at the shift point, τ.
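
The sifting property (2.88) can also be illustrated numerically by replacing δ(t − τ) with the rectangular pulse δ∆(t − τ) and evaluating the integral as a Riemann sum; as ∆ shrinks, the result approaches x(t). A sketch under these assumptions follows (the test signal x and the grid sizes are arbitrary choices).

import numpy as np

def x(t):
    return np.cos(t) + 0.5 * t          # any smooth test signal

def delta_approx(t, Delta):
    # rectangular pulse of width Delta and height 1/Delta (the function delta_Delta)
    return np.where((t > 0) & (t < Delta), 1.0 / Delta, 0.0)

t0 = 1.2                                # evaluate the reconstruction at t = t0
tau = np.linspace(-10, 10, 200001)      # fine integration grid
dtau = tau[1] - tau[0]

for Delta in (1.0, 0.1, 0.01):
    approx = np.sum(x(tau) * delta_approx(t0 - tau, Delta)) * dtau
    print(Delta, approx, x(t0))         # approx -> x(t0) as Delta shrinks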

Exercise 2.14: Show that

∫_{−∞}^{∞} x(τ)δ(τ) dτ = x(0).

Solution: In the above integral, let us replace δ(τ) by

δ∆(τ) = 1/∆ for 0 < τ < ∆, and 0 otherwise.

When ∆ is very small, we approximate the integral as follows;

∫_{−∞}^{∞} x(τ)δ∆(τ) dτ ≈ x(0).

Take the limit,

lim_{∆→0} ∫_{−∞}^{∞} x(τ)δ∆(τ) dτ = ∫_{−∞}^{∞} x(τ)δ(τ) dτ = x(0) ∫_{−∞}^{∞} δ(τ) dτ = x(0).

Exercise 2.15: Show that,

∫_{−∞}^{∞} e^{τ} δ(τ) dτ = 1.

Solution: Using the result of the previous example,

∫_{−∞}^{∞} x(τ)δ(τ) dτ = x(0),

and noting that e^{t} = 1 for t = 0, we obtain,

∫_{−∞}^{∞} e^{τ} δ(τ) dτ = e^{0} = 1.

Exercise 2.16: Show that the impulse function is even, in other words, δ(t) =
δ(−t).

Solution: Note that δ(t) is symmetric with respect to the y-axis. Thus it is
even (Figure 2.20). This directly implies that,


Figure 2.20: The continuous time unit impulse function is even.


Figure 2.21: A piece-wise constant function, which does not have a compact
analytical form.

δ(t) = δ(−t).

Exercise 2.17: Find an analytical expression for the signal in Figure 2.21.

Solution: We can use the superposition of shifted unit step functions, as follows;

x(t) = 2u(t − 1) − u(t − 2) − u(t − 5). (2.89)

Exercise 2.18: Find an analytical expression for the signal in Figure 2.22.

Figure 2.22: A bounded piece-wise constant function.

Solution: We can use the superposition of shifted unit step functions, as follows;

x(t) = u(t + 2) + u(t − 1) − u(t − 5) − u(t − 6). (2.90)

The above example reveals that a bounded piece-wise constant function can be represented by the superposition of continuous time unit step functions.
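
Expression (2.89) can be plotted directly by sampling the shifted unit steps on a time grid. A minimal Matplotlib sketch (grid and plotting choices are ours):

import numpy as np
import matplotlib.pyplot as plt

def u(t):
    # continuous time unit step, sampled on a grid
    return (t >= 0).astype(float)

t = np.linspace(-1, 8, 2000)

# The signal of Exercise 2.17, built from shifted unit steps (Eq. 2.89)
x = 2 * u(t - 1) - u(t - 2) - u(t - 5)

plt.step(t, x, where="post")
plt.xlabel("t")
plt.ylabel("x(t)")
plt.title("Piece-wise constant signal from shifted unit steps")
plt.grid(True)
plt.show()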

2.5. Chapter Summary


Can we represent simple signals by some basic functions? Is it possible to use
these basic signals to represent more complicated ones? How can we manipulate
a signal to generate a signal of the desired form?
In this chapter, we introduced the basic building blocks of signals, which
have well-defined analytical forms, namely exponential functions, unit step
functions, and unit impulse functions. We introduced both continuous time and
discrete time counterparts of these functions. We also investigated methods for manipulating them by changing the time variable.
Throughout this book, we shall use and manipulate these basic functions
to model and analyze the signals, even the complicated ones. Among these
basic functions, we pay special attention to harmonically related complex exponential functions, which will be used as a basis of a vector space to represent natural and man-made signals in a very efficient and compact way.

Problems
1. Consider the following real exponential function;

x(t) = e−0.5t u(t).


a) Find and plot x(2t − 4).
b) Find and plot the derivative;

dx(t)
.
dt
c) Find and plot the integral;

y(t) = ∫_{0}^{t} x(τ) dτ.

d) Find and plot the integral;

y(t) = ∫_{0}^{t} x(2τ − 4) dτ.

2. Find and plot the real and imaginary parts of the following continuous time
signals:
a) x1 (t) = ejπ/2 cos(2t + 2π)
b) x2 (t) = 4e−2t sin(3t + 2π)
c) x3 (t) = 2je(−20+60j)t
3. Find and plot the magnitude and phases of the following discrete time
signals:
a) x1 [n] = ejπ/2 cos(2n)
b) x2 [n] = 4e−2n sin(3n + 2π)
c) x3 [n] = 2je(−20+j60)n
4. Determine if the following signals are periodic or not, for each periodic signal
determine its fundamental period:
a) x1 (t) = 4ej20t
b) x2 (t) = e(−3+j2)t
c) x3 [n] = ej9πn
5. Determine the fundamental period of the following signal:

x(t) = 4 cos(5t + 2) − 2 sin(10t − 1).

6. Determine the fundamental period of the following signal:

x[n] = 13 + ej8πn/3 − ej4πn/5 .

7. Consider the superposition of the following periodic signal:

x(t) = sin(2πt/3) + 2 cos(πt/2) (2.91)
a) Find its exponential form
b) Find its angular frequency and the fundamental period.
c) Find the first and second harmonics of x(t).
8. Given a continuous time signal x(t) = e−t , sketch the following functions:
a) x(t − 2)
b) x(2t − 2)
c) x(t)[δ(2t + 4)]
d) [x(2t) + x(t)]u(t)
9. Given a discrete time signal x[n] = u[n], sketch the following functions,
a) x[n − 5]
b) x[2n + 2]
c) x[n]u[2n]
d) x[n] + (−1)n x[n]
e) x[n + 2]δ[n + 2]
10. State what kind of symmetry or symmetries the following function has:

x[n] = u[n + 1] − u[n − 1].


11. Express and plot the following discrete time function in terms of shifted
impulse functions:

x[n] = n for 0 ≤ n ≤ 2, −n for −2 ≤ n < 0, and 0 otherwise.

12. Let x(t) = x(t + 6) be a continuous time periodic function, represented by


the following analytical expression in one full period:

x(t) = t for |t| < 2, and 0 for 2 ≤ |t| ≤ 4.

a) Plot this function.

b) Find and plot x(2t).
c) What is the fundamental period of x(2t)?
13. Consider the following continuous time periodic function, where x(t) = x(t + 4). The function is defined in one full period, as follows:

x(t) = 2 for 0 ≤ t ≤ 2, and −2 for 2 < t < 4.

a) Find the derivative dx(t)/dt and represent it by the sum of shifted impulse functions.
b) Find the integral ∫_{0}^{t} x(τ) dτ.
14. Show that

δ(at) = (1/|a|) δ(t).

15. Plot each of the functions given below:


a) x(t) = 2δ(2t)
b) x(t) = δ(−2t) + δ(2t)
c) x(t) = δ(2t + 1)

16. Suppose that x(t) is a continuous time function, show that x(t)δ(t) =
x(0)δ(t). Find and plot x(t)δ(t − 2) , for x(t) = 2t2 .

17. Given the following shifted discrete time signal,

x[n + 3] = 4 − Σ_{k=5}^{∞} δ[n − 2k],

find a closed form for x[n] in terms of a shifted and scaled unit step function.

18. Solve the following integrals:


a) ∫_{−π/4}^{π/4} cos(2t) u(t) dt
b) ∫_{−10}^{10} cos(4πt) [δ(t + 5) + δ(t − 3)] dt
c) ∫_{−6}^{6} sin(6πt) u(t − 1) dt
19. Consider the plot of the signal x[n], given in Figure P1.1.


Figure P1.1
a) Find an analytical expression of this figure using shifted impulse functions.
b) Find and plot y[n] = x[2n + 2] + x[1 − n].
c) Express the signal y[n] = x[2n + 2] + x[1 − n] in terms of the shifted impulse functions.

Chapter 3
Basic Building Blocks and
Properties of Systems

We are the basic building blocks of the cosmos to know itself.

Adapted from the Carl Sagan quote "We are a way for the cosmos to know itself."

In Chapter 2, we studied the basic building blocks of signals and their


mathematical representations as functions. In this chapter, we shall study the
basic building blocks of systems and their representations by equations.
We shall combine the basic blocks of a system, called subsystems, in various
forms to represent relatively more complicated systems. We also study the
properties of systems, which provide us with a framework to model, design, and
implement a wide range of systems, using the available mathematical tools.

3.1. Representation of Systems by Equations
A system can be considered as a mapping, which transforms the input signal(s)
to the output signal(s). Thus, a system is a functional operator in which signals
are transformed into other signals.
Suppose that a system, represented by a model, h, receives a set of input
signals to generate a set of output signals, as shown in the block diagram
representation of Figure 3.1.
In order to represent a system by a mathematical model, we need to estab-
lish a relationship between the input signal, x(·) and output signal, y(·) in the
following general form,


Figure 3.1: Black box representation of a system, where h represents a func-


tional operator, which maps an input signal to an output signal. Dot notation
is used to indicate both continuous time and discrete time input signal x(·),
output signal y(·), and the model, h(·).

y(·) = h(x(·)). (3.1)


In the above equation, h stands for the model of the system. It can be
an algebraic equation, a differential equation, an integral equation, a
nonlinear equation etc. We use a loose notation, (·), to cover both continuous
and discrete time systems.
In most practical problems, the model of the system, h, is not available, thus, we represent it with a black box. However, we can observe a set of input-output pairs, {x_i(·), y_i(·)}_{i=1}^{n}, which enables us to find a model h, satisfying,

h(x1(·)) = y1(·)
h(x2(·)) = y2(·)
. . . . . . . . . . . . . . .          (3.2)
h(xn(·)) = yn(·)
As we mentioned before, the model of the system, h, should be invariant under all of our varying input-output observations, {x_i(·), y_i(·)}_{i=1}^{n}.

Exercise 3.1: Consider a system represented by the following equation;

y(t) = Ax(t). (3.3)


Find the outputs of this system for the following inputs:
a) x1 (t) = cos(ω0 t),
b) x2 (t) = ejω0 t .

Solution: The model, h, for this system is simply a multiplier, A, of an alge-


braic equation. The input signal is multiplied by the constant parameter, A,
to generate the corresponding output signal. Thus, the output function is the
constant multiple of the input function.
a) y1 (t) = A cos(ω0 t).
b) y2 (t) = Aejω0 t .
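
In Python, a system model h can be represented as a function that maps a sampled input signal to a sampled output signal. The sketch below does this for the multiplier system of Exercise 3.1; the sampling grid and the numerical value of A are arbitrary choices made for illustration.

import numpy as np

A = 3.0
def h(x):
    # the system model of Exercise 3.1: y(t) = A x(t)
    return A * x

w0 = 2 * np.pi
t = np.linspace(0, 2, 1000)

x1 = np.cos(w0 * t)                 # input (a)
x2 = np.exp(1j * w0 * t)            # input (b)

y1 = h(x1)                          # A cos(w0 t)
y2 = h(x2)                          # A exp(j w0 t)

print(np.allclose(y1, A * np.cos(w0 * t)))       # True
print(np.allclose(y2, A * np.exp(1j * w0 * t)))  # True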

Figure 3.2: The human visual system, represented by the series connection of two subsystems: eye and brain. In this representation, the light, reflected from the objects, is the input signal to the eye. After processing this input signal, the eye outputs a set of neural signals. The input to the brain is this set of signals. They are processed in various regions of the brain to generate the final output signal, which consists of a high-level decision, such as the label or location of an object.

Note that we may feed different inputs to the same system. The corresponding output obeys the rules governed by the system equation. No matter what the input is, the system equation, which relates the input and output signals, remains invariant.
There are also some systems where the input and output pairs change the system model h. This type of system, called a model-aware system, is beyond the scope of this book.

3.2. Interconnection of Basic Systems: Series, Parallel, Hybrid and Feedback Control Systems
When a system, such as a cellular phone or a computer, is very complicated, it
is not possible to find a single equation, which we can put into the black box
of Figure 3.1. Instead, we may consider its subsystems and the relationships
among the subsystems. In such cases, we may represent a system by collection
of subsystems, which are interrelated by signals. The connections among the
subsystems can be in various forms, such as series connection, parallel con-
nection, hybrid connection, or loopy connection. Let us exemplify the type of
connections among the subsystems of a large system.

3.2.1. Series Systems


As an example, let us consider modeling the Human Visual System. It has many components, which makes it highly difficult to model by a single equation.
Fortunately, we can split it into two subsystems, namely, the eye and the

brain. We represent each subsystem with a black box. The input to the first
box (the eye) is the light signal, which generates a set of neural signals at
the output. The output of the eye component is continuously fed to the brain
component (second box) as the input. The output of the brain component can
be one of a wide range of cognitive processes, such as perceiving colors and shapes, recognizing objects, and interpreting scenes, etc. (Figure 3.2). These
types of connections, where the outputs of the subsystems are fed to the input
of another subsystem sequentially, are called series representation. Instead
of finding a single equation to represent the human visual system, we can find
two relatively more tractable equations, one of which represents the eye and
the other represents the brain.
Note that the eye as a subsystem is still too complicated to be represented
by a single equation. Thus, it needs to be partitioned into further subsystems,
such as the eye lens, retina, blind spot, eye muscles, etc. Similarly, we can
partition the brain components into anatomical regions, which are responsible
in vision.
In summary, we can partition a complicated system into as many subsys-
tems as we need, until we get a mathematically tractable representation for
each subsystem. However, we need to also define the signals, which establish the
relationships among the subsystems, considering the input and output signal
of each subsystem.

3.2.2. Parallel Systems

Figure 3.3: Block diagram representation of the audio-visual system, as two parallel subsystems. Each subsystem receives different inputs. While the eye subsystem receives light and outputs a set of neural signals, the ear (auditory system) receives sound and outputs a different set of neural signals. These two sets of signals are added and processed in some anatomical regions of the brain to generate an output signal, which is the perceived world.

As a second example, let us consider the Human Audio-Visual System.


The auditory system receives the sound waves as input, whereas the visual

Figure 3.4: Block diagram representation of the audio-visual system, as a combination of parallel and series subsystems. This representation is called hybrid representation.

system receives the light. These two different types of signals are separately
processed in auditory and visual systems. Then, the outputs of both systems
are combined in anatomic regions of our brain, responsible for the audiovisual
process. Finally, a set of cognitive processes can be generated, such as creating
and storing a world model in our brain. As we did in the previous example, we
can construct subsystems for the auditory system as the ear and the auditory
regions of the brain. We can further represent the ear and the auditory regions
by subsystems until we obtain a mathematically tractable model for the audio-
visual system. Note that we should also establish the relationships by defining
the input and output signals among the subsystems.
In this case, the auditory and visual systems receive and generate differ-
ent input-output pairs. Then, the outputs of the subsystems are merged by a
system component, such as an adder or multiplier (Figure 3.3). These types of
models are called parallel systems.
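
With systems modelled as functions, series and parallel interconnections become function composition and addition of outputs, respectively. The following is a minimal sketch; the two subsystems are toy examples, not models of the eye or the ear.

import numpy as np

def subsystem_a(x):
    return 2 * x                      # a scalar multiplier

def subsystem_b(x):
    return x + 1                      # adds a constant offset

def series(h1, h2):
    # output of h1 is fed to h2:  y = h2(h1(x))
    return lambda x: h2(h1(x))

def parallel(h1, h2):
    # both subsystems receive (possibly different) inputs; their outputs are added
    return lambda x1, x2: h1(x1) + h2(x2)

x = np.linspace(0, 1, 5)
print(series(subsystem_a, subsystem_b)(x))      # 2*x + 1
print(parallel(subsystem_a, subsystem_b)(x, x)) # 2*x + (x + 1) = 3*x + 1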

3.2.3. Hybrid Systems


It is more realistic to represent the audio-visual system by further decomposing
it into smaller subsystems. In this case, we need to develop parallel and series
connections to form a hybrid representation, as shown in Figure 3.4. This
block diagram approximates the human audio-visual system better than the
coarse parallel representation of Figure 3.3. It not only splits the auditory and visual systems into two blocks, but also adds an additional brain block, which processes the added signals together. It is possible to add more blocks to
obtain finer representations of the human audiovisual system. This example
shows that there is more than one block diagram representation for the same
physical phenomenon.
In real-life applications for designing or modeling a system, we may need
hundreds or thousands of subsystems, hybridly interconnected with each other


Figure 3.5: Block diagram representation of a feedback control system.

and working in harmony to achieve a certain goal or to serve some other sys-
tems.

3.2.3.1. Feedback Control Systems


Some systems feed their generated output back to the system as an input
to evaluate or adjust their functionalities. For example, the thermostat of a
heating system measures the temperature of a house. Then, this output tem-
perature is fed back to the system, as an input. If the temperature exceeds a
certain threshold value, the system halts for a while. These types of intercon-
nected systems are called feedback control systems. A simple feedback control
system is depicted in Figure 3.5.
In the above representations, each subsystem receives a set of signals, called
input and emits a set of signals, called output. An output of a subsystem is
fed as the input to another subsystem. These signals glue all the subsystems
to generate the overall system.
Systems approach enables us to model and implement complex man-made
systems as a collection of subsystems interconnected by signals. Good exam-
ples include cars, airplanes, robots, telecommunication networks, etc. However,
modeling the natural systems may get complicated in many cases. For exam-
ple, the basic building blocks of natural systems are the simple cells. There
are several representations of the simple cells, each of which requires several
dozens of subsystems to approximately model and implement by circuit ele-
ments. Considering the fact that there are approximately 2 billion cells in a
spider, modeling it with a systems approach is quite difficult, because of the
scalability problem.

3.2.3.2. An Example of System Modeling: Neurons as a Subsystem of the Human Brain
Despite the great effort in many fields of science and engineering, the available methods for representing the cognitive activities of the human brain fall short of deciphering the underlying complex structure. Nevertheless, let us give a try to modeling the human brain.
Figure 3.6: An artificial neuron. It first takes the linear combination of the input signals to obtain x = Σ w_i x_i. This signal is fed to a function f(x) to generate the output of the neuron, y. The circle with a dashed line shows the artificial neuron.


Neurons can be considered as the basic building blocks of the human brain.
Thus, they can be considered as billions of subsystems of the brain, massively
interconnected by neural signals. A single neuron receives multiple inputs,
through the dendrites and outputs a single signal through an axon.
When a neuron receives a set of inputs from the dendrites, these neural
signals are processed in the neuron by a bunch of highly complicated electro-
chemical activities. Finally, the neuron

• either stays silent, when the electrochemical processes generate a weak signal
or
• fires a single output when the electrochemical processes generate a signal
which is above a certain threshold.

The output signal, generated by a neuron is conveyed to the dendrites of


the other neurons by synaptic connections, as inputs.

Learn more about how a signal travels through a neuron @ https://384book.net/v0301 (WATCH)

A simple mathematical model for a neuron was proposed by Frank Rosenblatt in 1957. In this representation, a neuron is defined as a system, which receives multiple inputs, x_i, ∀i = 1, ..., n. Then, each input is multiplied by a weight w_i, and the linear combination of the inputs,

x = Σ_{i=1}^{n} w_i x_i, (3.4)

Figure 3.7: An Artificial Neural Network, with one hidden layer, obtained by the interconnection of eight artificial neurons. The three neurons at the first layer receive the input values of a sample. The neurons in the middle layer compute the weighted linear combination of the inputs as Σ w_i x_i, and then compute the output as f(Σ w_i x_i). The output of each neuron is fed to the next layer as the input. In this example, there is a single output neuron, which receives the weighted combination of the outputs of the hidden layer and passes it through the function f(·) to predict the label of the input sample.

are fed to a function f (x) to generate an output,

y = f (x). (3.5)
f(·) is a nonlinear function, typically a unit step function defined as

f(x) = 1 for x ≥ 0, and 0 for x < 0. (3.6)

The block diagram representation of this model is given in Figure 3.6.
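
A forward pass of the Rosenblatt neuron described above takes only a few lines of NumPy. The following is an illustrative sketch with hand-picked inputs and weights; no training is performed.

import numpy as np

def unit_step(x):
    # f(x) = 1 for x >= 0, 0 for x < 0  (Eq. 3.6)
    return np.where(x >= 0, 1.0, 0.0)

def neuron(inputs, weights):
    # weighted linear combination followed by the nonlinearity (Eqs. 3.4-3.5)
    x = np.dot(weights, inputs)
    return unit_step(x)

# Example: a neuron with three inputs and hand-picked weights.
inputs = np.array([0.5, -1.0, 2.0])
weights = np.array([1.0, 0.8, -0.3])
print(neuron(inputs, weights))        # fires (1.0) only if the weighted sum is >= 0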


If we organize many of these simple artificial neurons layer by layer, we
can construct an Artificial Neural Network. The weights wij are adjusted by
an optimization algorithm in such a way that the label of the input sample is
predicted at the output neuron. Figure 3.7 depicts an example of an Artificial
Neural Network with three layers: an input layer (leftmost three neurons),
a hidden layer (four neurons in the middle), and an output layer (a single
rightmost neuron ). Depending on the application domain, we can design an
Artificial Neural Network with as many neurons and layers as we need.
The Artificial Neural Networks, briefly mentioned above, do not get even

close to representing the human brain. However, they are widely used in Arti-
ficial Intelligence and Machine Learning systems. Design of these networks for
a specific goal, such as designing large language models, detecting an object
in a video, or diagnosing a disease from an x-ray, is beyond the scope of this
book.

3.3. Properties of Systems


Studying the systems with respect to some quantifiable properties, simplifies
the modeling, design, and implementation problems. Let us investigate the
basic properties of systems under six headings:
1. Memory
2. Causality
3. Invertibility
4. Stability
5. Time Invariance
6. Linearity
The above properties establish a mathematically tractable framework to
model and design systems. Once we decide on what type of properties are
required for a system, we restrict the system equation to satisfy the desired
properties.
In the following subsections, we provide the definitions of the system prop-
erties enriched with simple examples.

3.3.1. Memory
Memory is a cognitive process of humans and some animals to store, retain
and retrieve the information perceived by sensory stimuli.
The definition of memory is slightly different in System Theory. A system
is memoryless if the present value of the output depends only on the present
value of the input. A memoryless system operates on the current value of
the input to generate a current value of the output. Otherwise, the system
has memory. A system with memory can store and retrieve past or future
values of the input values. Therefore, the present value of the output y(t) can
be expressed in terms of the past or future values of an input for the systems
with memory.
Let us give the formal definitions of systems with and without memory. In
the definitions below, we suppose that a system is represented by a model h.

Definition 3.1: A continuous time system, represented by the model h is
memoryless, if the model h relates the present value of the input x(t) to the
present value of the output y(t), as follows:

y(t) = h(x(t)). (3.7)


Similarly, a discrete time system, represented by the model h is memoryless,
if the model h relates the present value of the input x[n] to the present value
of the output y[n], as follows:

y[n] = h[x[n]]. (3.8)

Definition 3.2: A continuous time system, represented by the model h has


memory, if the model h relates the past and/or future values of the input x(t)
to the present value of the output y(t), as follows:

y(t) = h(x(t − t0 )), t0 ̸= 0. (3.9)


Similarly, a discrete time system, represented by the model h, has memory, if the model h relates the past and/or future values of the input x[n] to the present value of the output y[n], as follows:

y[n] = h[x[n − n0 ]], n0 ̸= 0. (3.10)

For t0 > 0 or n0 > 0 the present value of the output of the system depends
on the past value of the input. For t0 < 0 or n0 < 0 the present value of the
output of the system depends on the future value of the input. Thus, the formal
definition of memory extends beyond its everyday meaning. In the cognitive
process, memory is confined to recalling the past, not predicting the future.

Exercise 3.2: Is this system memoryless?

y(t/3) = x(t) (3.11)

Solution: No! This system has memory. For example, for t = 3, the output
value depends on the future value of the input, y(1) = x(3).

Exercise 3.3: Accumulator: Consider a discrete time system represented


by the following equation,
y[n] = Σ_{k=−∞}^{n} x[k]. (3.12)

Does this system have memory?

Solution: This system has memory. It accumulates all the past values of the
input to generate an output. Thus, it remembers all the past values of the
input.

Exercise 3.4: Consider a continuous time system, represented by the follow-


ing equation,

y(t) = x2 (t) + 1. (3.13)


Does this system have memory?

Solution: This is a memoryless system. For all values of t, the present values
of the output depend on just the present values of the input. Specifically, the
system receives the current value of an input at time t. It adds 1 to the square
of the input to generate the output at the same time.

3.3.2. Causality
The memory property of systems can be counter-intuitive. In systems with
memory, the output may depend not only on past values but also on future
input values. In other words, the system remembers both the past and the future values. This contradicts the behavior of most physical systems, which are causal.
Causality is a fundamental concept in many fields of science and philosophy.
The major assumption of causality is that the response of a system is caused
by stimuli. In systems, output signals are caused by input signals. In other
words, responses cannot come before the signals they are responding to.
The concept of memory can be further restricted to define causal systems,
where the output y(t) depends on the past and present values of the input. If
the present value of the input depends on the future value of the input, then
this system is called non-causal. In real-life problems, we have many examples
of non-causal systems. For example, the prediction systems are non-causal. An
aircraft pilot defines the route at a present time based on the future weather
forecast.

Definition 3.3: A causal system is defined as

y(t) = h(x(t − t0 )), t0 ≥ 0. (3.14)

Note that a causal system may or may not be memoryless. A memoryless


system is always causal.

Definition 3.4: A system is non-causal, if ∃ t0 < 0 such that,

y(t) = h(x(t − t0)). (3.15)

A non-causal system always has memory.

Exercise 3.5: Is the following system causal and/or memoryless?

dx(t)
y(t) = . (3.16)
dt

Solution: The answer follows from the definition of derivatives:

y(t) = dx(t)/dt = lim_{∆t→0} [x(t) − x(t − ∆t)]/∆t. (3.17)
As ∆t → 0, the system becomes memoryless, because the current value of the output depends only on the current value of the input. It is also causal.

Exercise 3.6: Averaging: Is the following system causal?


y[n] = (1/(2N + 1)) Σ_{k=−N}^{N} x[n − k]. (3.18)

Solution: This system takes the average of past and future values of the input
signal, defined in a time interval (−N, N ). Thus, it is non-causal.

3.3.3. Invertibility
In a general form, a system receives an input signal and emits an output signal,
according to the model, h, satisfying the following equation,

y(·) = h(x(·)). (3.19)


In some practical applications, we can find a model h and measure the
output, y(·), without observing the input which causes y(·). As an example,
consider the speech signal, generated by the vocal track. There are several
reliable methods, which model the subsystems of the vocal track, such as the
throat, teethes, tongue, mouth, nasal cavities, etc. With a simple recorder, we
can easily record a speech signal. However, we do not have access to the input
signal of the vocal track, which is the air in the lung pushed by the diaphragm.
It may be important to study the properties of the input air, which generates
speech signal at the output of the vocal track. Can we compute the input


Figure 3.8: When we cascade the system, h with its inverse h−1 , the input of
the system is obtained at the output of the inverse system.

function, x(·), from the output function, y(·), using the model h?
Motivating Question: Suppose that we are given the model, h, of a
system. Suppose also that we can observe the output, but not the input of the
system. Is it possible to obtain the input signal for any observed output signal?
This is only possible if the system is invertible. When we concatenate the
system, represented by a model h, to its inverse system, h−1 , in a series con-
nection, we obtain the input to the original system, h, at the output of the
inverse system, h−1 (Figure 3.8).

Definition 3.5: Given a continuous time system, y(t) = h(x(t)), if ∃ a


unique h−1 so that

x(t) = h−1 (y(t)), (3.20)


then, the system is invertible.

Definition 3.6: Given a discrete time system, y[n] = h(x[n]), if ∃ a unique


h−1 for all possible inputs, so that

x[n] = h−1 [y[n]], (3.21)


then, the system is invertible.

Exercise 3.7: Is the following continuous time system invertible?

y(t) = cos x(t). (3.22)

Solution: In order to show that the system is not invertible, it is sufficient


to find two different inputs, which generate the same output. This violates the
uniqueness assumption of the inverse model, making the system non-invertible.
Take, for example, two different constant signals, x1 (t) = π/2 and x2 (t) =
−π/2, as input. Both of them generate the same output:

y1 (t) = cos x1 (t) = 0 = cos x2 (t) = y2 (t).


The inverse is not unique. Thus, the system is not invertible.

Exercise 3.8: Is the following continuous time system invertible?

y(t) = 0.5x(t) + 3. (3.23)

Solution: Yes, this system is invertible. The inverse of this system is uniquely
obtained by solving the above equation for x(t) as follows:

x(t) = h−1 (y(t)) = 2y(t) − 6. (3.24)


The input signal x(t) can be uniquely obtained from the output signal y(t).
Thus, the system is invertible.

Exercise 3.9: Is the following discrete time system invertible?

y[n] = 0.5x2 [n] + 3. (3.25)

Solution: The inverse of this system can be obtained by solving the above
equation for x[n] as follows:
x[n] = ±√(2y[n] − 6). (3.26)
Since there are two distinct inputs to generate the same output, there is not a
unique inverse for the model h. Thus, the system is not invertible.

Exercise 3.10: Is the following discrete time system invertible?

y[n] = x[n − 2]. (3.27)

Solution: Yes, this system is invertible. The inverse of this system is uniquely
obtained by,

h−1 [y[n]] = y[n + 2] = x[(n + 2) − 2] = x[n]. (3.28)

Note that finding the inverse of a system may not be possible for many
systems, even if there exists a unique inverse.

3.3.4. Stability
Stability is an important performance characteristic of a system. It assures the
controllability of the output signal generated by the system for all the input
signals. A stable system is robust to imperfections and unexpected changes

Figure 3.9: Stable system (left): When we put a glass billiard ball on the side of a bowl, it will swing for a while; then, after a certain period of time, it will reach an equilibrium at the bottom of the bowl. Unstable system (right): If we put the billiard ball on top of an upside-down bowl, its speed will increase uncontrollably, and eventually it will fall off the bowl and crash.

at the input. It bears self-correcting mechanisms to bring the output of the


system into equilibrium.
Most of the ecological systems are stable in their environment. They keep
the prey and predator populations under an equilibrium. However, when we
manipulate an internal parameter of an ecological system by an external source,
we may create an unstable system. As an example, a small industrial waste
slightly changes the internal characteristics of an ecological water system. This
external perturbation, even if it is small, may spoil the equilibrium between the prey and predators, which may result in destroying most of the population, including the water cleaning agents. In this case, we not only lose important species, but we may also lose control of water pollution, creating an extremely toxic environment.
An unstable system lacks the ability to keep its output under control, leading to explosively large outputs. This results in an uncontrollable system incapable of self-restoration, potentially causing catastrophic outcomes for certain inputs.
Loosely speaking, a stable system generates a bounded output for all pos-
sible bounded inputs. This type of stability is called BIBO (Bounded Input,
Bounded Output) stability.

Definition 3.7: A continuous time signal is bounded if there exists a finite


number b ∈ R such that for all t ∈ R,

|x(t)| ≤ b. (3.29)

Definition 3.8: A discrete time signal is bounded if there exists a finite


number, b ∈ R such that for all n ∈ I,

|x[n]| ≤ b. (3.30)

Definition 3.9: A continuous time system, represented by a model h, is
BIBO (Bounded Input, Bounded Output) stable if for any bounded
input function x(t), the corresponding output function,

y(t) = h(x(t)), (3.31)


is also bounded. In other words, for all inputs with |x(t)| < b there exists a finite number b′ ∈ R such that the output remains bounded, i.e. |y(t)| < b′.

Definition 3.10: A discrete time system represented by a model h, is BIBO


stable (Bounded Input, Bounded Output) if for any bounded input
function x[n], the corresponding output function,

y[n] = h[x[n]], (3.32)


is also bounded. In other words, there exists a finite number b′ ∈ R such that |y[n]| ≤ b′ for all n.

Definition 3.11: If a system does not satisfy the BIBO stability condition,
then, it is called unstable.

Exercise 3.11: Is the following system stable?


y[n] = Σ_{k=−∞}^{n} x[k]. (3.33)

Solution: To test the stability of a system, we select a bounded input, such that |x[n]| < b for all values of n, and check whether the corresponding output is also bounded. A popular bounded signal is the unit step function, where u[n] = 1 for all values of n ≥ 0. Suppose that we feed the unit step function, a bounded signal, as input: x[n] = u[n]. The output is

y[n] = Σ_{k=−∞}^{n} u[k] = (n + 1)u[n]. (3.34)

The above output signal is unbounded as n → ∞. Thus, this system is unstable!
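
The instability of the accumulator can also be seen numerically by feeding it the bounded unit step and watching the output grow without bound, in contrast with a memoryless stable system such as the one in Exercise 3.4. A short sketch:

import numpy as np

n = np.arange(0, 50)
x = np.ones_like(n, dtype=float)          # u[n] restricted to n >= 0 (bounded by 1)

y_accumulator = np.cumsum(x)              # grows like n + 1: unbounded as n -> infinity
y_memoryless = x**2 + 1                   # stays at 2 for every n: bounded

print(y_accumulator[:5], y_accumulator[-1])   # [1. 2. 3. 4. 5.] ... 50.0
print(y_memoryless.max())                     # 2.0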

Exercise 3.12: Is the following system stable?

y(t) = Aex(t) , (3.35)


where A is a bounded parameter.

Solution: Suppose that we feed a bounded input signal to this system. In
other words, there exists a finite number b, such that, |x(t)| < b. Then, we
have y(t) < Aeb . Since the right-hand side of this inequality is bounded, the
output y(t) is also bounded. Thus, this system is BIBO stable.

3.3.5. Time Invariance


Time invariance is a crucial system property, which makes the behavior of
the system independent of time. A system is time invariant when a time shift
at the input generates the same time shift at the output. In other words, no
matter when you feed an input signal to the system, you receive the same
corresponding output.
As an example, consider a piano as a system. While we play it, we press a
set of keys at the input and we hear a set of tunes at the output. No matter
when we play it, as long as we press the same set of keys, we hear the same
tunes. Thus, a piano is a time-invariant system. On the other hand, if a system
is time-varying, the dynamics of this system change with time. A good example
of time-varying system is human behavior. We may have a diverse set of moods
to communicate with our friends at different times.
Below are the formal definitions of time invariance for continuous and discrete time systems.

Definition 3.12: A continuous time system, y(t) = h(x(t)), is time invariant


if for all t0 ∈ R and for all input signals, x(t), the corresponding output satisfies,

y(t − t0 ) = h(x(t − t0 )). (3.36)

Definition 3.13: A discrete time system, y[n] = h[x[n]], is time invariant if


for all n0 ∈ I and for all input signals, x[n], the corresponding output satisfies,

y[n − n0 ] = h[x[n − n0 ]]. (3.37)


A system is time varying if it is not time-invariant.

Exercise 3.13: Is the following continuous time system time-invariant?

y(t) = cos x(t) + sin x(t). (3.38)

Solution: Suppose that we feed an input signal, shifted by t0 : x1 (t) = x(t−t0 ).


The corresponding output becomes,

y1 (t) = cos x1 (t) + sin x1 (t) = cos x(t − t0 ) + sin x(t − t0 ) = y(t − t0 ). (3.39)

We obtain the same amount of shift at the output. Thus, the system is time-
invariant.

Exercise 3.14: Is the following system time-invariant?

y[n] = x2 [n] + nx[n] (3.40)

Solution: No! Since for input x1 [n] = x[n − n0 ], we have

y1 [n] = x21 [n] + nx1 [n] = x2 [n − n0 ] + nx[n − n0 ], (3.41)


which is not equal to

y[n − n0 ] = x2 [n − n0 ] + [n − n0 ]x[n − n0 ]. (3.42)

Thus, this system is time-varying.
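
Time invariance can also be checked numerically: shift the input, apply the system, and compare with the shifted output. The sketch below does this for the system of Exercise 3.14 on a finite grid; the test input and the interpolation-based shift helper are our own choices.

import numpy as np

n = np.arange(-20, 21)

def system(x, n):
    # the system of Exercise 3.14: y[n] = x[n]**2 + n*x[n]
    return x**2 + n * x

def shift(x, n, n0):
    # returns x[n - n0] evaluated on the same index grid (zero outside the data)
    return np.interp(n - n0, n, x, left=0.0, right=0.0)

x = np.exp(-0.1 * n**2)                  # a smooth, well-localized test input
n0 = 3

y = system(x, n)
lhs = system(shift(x, n, n0), n)         # response to the shifted input
rhs = shift(y, n, n0)                    # shifted response to the original input

print(np.allclose(lhs, rhs))             # False: the system is time-varying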

3.3.6. Linearity and Superposition Property


In general, linearity is a relationship between two mathematical objects, which
are proportional to each other. In our context, the mathematical objects are
functions, which represent input and output signals.
For example, a continuous time system h is linear when the input signal is
proportional to the output signal, as follows:

y(t) = ax(t), (3.43)


where a ̸= 0 is a constant parameter.

Definition 3.14 (Superposition property): Suppose that a continuous time system, represented by a model, y(t) = h(x(t)), is fed by two different inputs, x1(t) and x2(t). The corresponding outputs are y1(t) = h(x1(t)) and y2(t) = h(x2(t)).
Superposition property holds iff,

y(t) = a1 y1 (t) + a2 y2 (t) = h(a1 x1 (t) + a2 x2 (t)), (3.44)


where a1 and a2 are arbitrary numbers.
Similarly, suppose that a discrete time system, represented by a model,


Figure 3.10: Linearity property. Given that y1 (·) = h(x1 (·)) and y2 (·) =
h(x2 (·)), a linear system represented by h(·) satisfies a1 y1 (·) + a2 y2 (·) =
h(a1 x1 (·) + a2 x2 (·)).

y[n] = h[x[n]], is fed by two different inputs, x1 [n] and x2 [n]. The corresponding
outputs are y1 [n] = h[x1 [n]] and y2 [n] = h[x2 [n]].
Superposition property holds iff,

y[n] = a1 y1 [n] + a2 y2 [n] = h[a1 x1 [n] + a2 x2 [n]]. (3.45)

Definition 3.15: A system is called linear if and only if it satisfies the


superposition property. In other words, the superposition of two different inputs
yields the same superposition at the output:

y1 (·) = h(x1 (·)) and y2 (·) = h(x2 (·)) ⇐⇒ a1 y1 (·)+a2 y2 (·) = h(a1 x1 (·)+a2 x2 (·)),
(3.46)
where (·) shows a generic notation for both continuous time variable t and
discrete time variable n.

Exercise 3.15: Are the following continuous and discrete time systems lin-
ear?
y(t) = ax(t) + b (3.47)
and
y[n] = ax[n] + b (3.48)

Solution: Let us omit the time variable (·), for simplicity. Suppose that, the
input x1 generates the output y1 = ax1 + b and the input x2 generates the
output y2 = ax2 + b. Superposition of two inputs,

x = a1 x1 + a2 x2 , (3.49)
does not generate the same superposition of the output,

y = a1 y1 + a2 y2 = a1 (ax1 + b) + a2 (ax2 + b) ̸= a(a1 x1 + a2 x2 ) + b. (3.50)

Thus, superposition property does not hold. This system does not satisfy the
linearity property of Definition 3.15.
However, if we take the difference of the system equations with different input-
output pairs the additive term b vanishes and the difference becomes linear.
Formally, for the continuous time case, the system equation for two different
pairs of input-output are

y1 (t) = ax1 (t) + b, (3.51)


y2 (t) = ax2 (t) + b. (3.52)
Define a new input-output pair, x3 and y3 , which are the differences of the
above input-output pairs, as follows:

y3 (t) = y1 (t) − y2 (t) = a(x1 (t) − x2 (t)) (3.53)

y3 (t) = ax3 (t), (3.54)


which is linear.
Similar derivations are applied for the discrete time case:

y1[n] = ax1[n] + b, y2[n] = ax2[n] + b. (3.55)


We define a new input-output pair, x3 and y3, as the differences of the above input-output pairs, as follows:

y3 [n] = y1 [n] − y2 [n] = a(x1 [n] − x2 [n]) (3.56)

y3 [n] = ax3 [n], (3.57)


which is linear.

Definition 3.16: A system is called incrementally linear if the difference of the system equations for different input-output pairs is linear.

The difference between linear and incrementally linear systems is similar to the difference between linear and affine transformations in linear algebra.
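
The superposition test of Definition 3.14, and the incremental linearity of y = ax + b, can be checked numerically on sampled signals. In the sketch below the coefficients a1 and a2 are chosen so that a1 + a2 ≠ 1; otherwise the offset b happens to cancel and the test is inconclusive.

import numpy as np

a, b = 2.0, 3.0
def system(x):
    # the system of Exercise 3.15: y = a*x + b
    return a * x + b

t = np.linspace(0, 1, 100)
x1, x2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
a1, a2 = 2.0, 0.5

lhs = system(a1 * x1 + a2 * x2)                  # response to the superposed input
rhs = a1 * system(x1) + a2 * system(x2)          # superposition of the responses
print(np.allclose(lhs, rhs))                     # False: not linear (because of b)

# The difference of two input-output pairs, however, is linear:
y3 = system(x1) - system(x2)                     # = a*(x1 - x2); the offset b cancels
print(np.allclose(y3, a * (x1 - x2)))            # True: incrementally linear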

Exercise 3.16: Is the following continuous time system linear, time-invariant,
causal, invertible, stable, memoryless?

y(t) = tx(t) (3.58)

Solution:
Linearity: Suppose we feed two inputs, x1 (t) and x2 (t), to the system. The
corresponding outputs will be

x1 (t) → y1 (t) = tx1 (t) (3.59)

x2 (t) → y2 (t) = tx2 (t). (3.60)


Output for the superposition of the two inputs, x3 (t) = a1 x1 (t) + a2 x2 (t) is

y3 (t) = t(a1 x1 (t) + a2 x2 (t)) = a1 y1 (t) + a2 y2 (t). (3.61)


Superposition property holds. Thus, the system is linear.
Time Invariance: Let us shift the input of the system by t0 to define a new
input, x1 (t) = x(t − t0 ). Then, the corresponding output becomes

y1 (t) = tx1 (t) = tx(t − t0 ) ̸= y(t − t0 ) = (t − t0 )x(t − t0 ). (3.62)


The shift for the input x(t − t0 ) does not give the same amount of shift at the
output. Thus, the system is not time-invariant.
Memory and Causality: Present values of the output depend on the present
values of the input. For all t, the output is y(t) = h(x(t)) = tx(t). The system
is memoryless. Therefore, it is causal.
Invertibility: For t = 0, it is not invertible. However, for t ̸= 0, it is invertible,
where the inverse can be obtained from the following equation:
x(t) = (1/t) y(t) for t ≠ 0. (3.63)
Stability: The system is not stable, because a bounded input does not generate
bounded output. For any bounded input x(t) = B,

lim_{t→∞} y(t) = lim_{t→∞} tx(t) = lim_{t→∞} tB = ∞. (3.64)

Exercise 3.17: Is the following discrete time system linear, time-invariant,


memoryless, causal, invertible, and stable?

y[n] = cos(πn/2) x[n] (3.65)

Solution: On the right-hand side of the system equation, there is a multiplicative factor which depends on the time variable n. Therefore, the system equation can be written as y[n] = A(n)x[n], where A(n) = cos(πn/2).
Linearity: Suppose we feed two inputs, x1 [n] and x2 [n], to the system. The
corresponding outputs will be

x1 [n] → y1 [n] = A(n)x1 [n], (3.66)


x2 [n] → y2 [n] = A(n)x2 [n]. (3.67)
Output for the superposition of the two inputs, x3 [n] = a1 x1 [n] + a2 x2 [n] is

y3 [n] = A(n)(a1 x1 [n] + a2 x2 [n]) = a1 y1 [n] + a2 y2 [n]. (3.68)


Superposition property holds. Thus, the system is linear.
Time Invariance: Let us shift the input of the system by the time amount
of n0 to define a new input, x1 [n] = x[n − n0 ]. Then, the corresponding output
becomes,

y1 [n] = A(n)x1 [n] = A(n)x[n − n0 ] ̸= y[n − n0 ] = A(n − n0 )x[n − n0 ] (3.69)

The shift for the input x[n − n0 ] does not give the same amount of shift at the
output. Thus, the system is not time-invariant.
Memory and Causality: Present values of the output depend on the present
values of the input. For all n, the output is,
y[n] = h[x[n]] = cos(πn/2) x[n]. (3.70)
The system is memoryless. Also, it is causal.
Invertibility: Let us attempt to solve this equation for x[n]:

x[n] = y[n] / cos(πn/2). (3.71)

Note that cos(πn/2) = 0 for all odd values of n, so y[n] = 0 at those instants
regardless of the value of x[n]; the input cannot be recovered from the output.
Thus, the system is not invertible.
Stability: The system is stable because a bounded input generates a bounded
output. For any bounded input x[n] = b,

y[n] = cos(πn/2) b (3.72)
is bounded.


Figure 3.11: Schematic representations of a scalar multiplier. Both representa-


tions are used for scalar multipliers.

3.4. Basic Building Blocks of Systems and Their Properties
Recall that we introduced the basic building blocks of signals, such as trigono-
metric functions, exponential functions, unit impulse, and unit step functions.
We could use these functions as the basic building blocks of more complicated
functions. Similarly, we can define basic building blocks or system components
for systems. These simple components can be used as subsystems to build a
more complicated system.
There are dozens of basic building blocks in System Theory, for both con-
tinuous time and discrete time systems. The type of the basic components
varies depending on the application domain. For example, if we design an elec-
tric circuit, the basic building blocks of the system are the circuit elements,
such as resistors, capacitors, inductors, etc.
In this book, we shall focus on some generic basic building blocks, for mod-
eling a wide range of discrete time and continuous time systems, as described
below.

3.4.1. Scalar Multiplier


Scalar multiplier simply multiplies an input signal by a scalar number. This
component is used for both discrete time and continuous time systems, as
follows:

y(·) = Ax(·). (3.73)


It is easy to show that scalar multiplier is a memoryless, linear, time-
invariant, stable and invertible system component, where the present value
of the output is the scaled version of the present value of the input (Figure
3.11).


Figure 3.12: An adder, which adds three inputs, to generate output y(·) =
x1 (·) + x2 (·) + x3 (·).


Figure 3.13: Schematic representation of a multiplier, which multiplies two


inputs to generate output y(·) = x1 (·) × x2 (·).

3.4.2. Adder
An adder simply adds multiple inputs to generate a single output. This com-
ponent, too, is used for both discrete time and continuous time systems, as
follows:

y(·) = x1 (·) + x2 (·) + ... + xn (·). (3.74)


This is a memoryless, linear, time-invariant, stable, and noninvert-
ible system component (Figure 3.12).

3.4.3. Multiplier
A multiplier multiplies all the inputs to generate the output. It is used for both
discrete time and continuous time systems, as follows:

y(·) = x1 (·) × x2 (·) × ... × xn (·). (3.75)


A multiplier is a memoryless, nonlinear, time-invariant, stable and
noninvertible system component, which multiplies multiple inputs (Figure
3.13).


Figure 3.14: Schematic representation of an integrator.


Figure 3.15: Schematic representation of a differentiator.

3.4.4. Integrator
The integrator is used in continuous time systems; it takes the integral of the
input over all past values, up to the present time t, as follows:

y(t) = ∫_{−∞}^{t} x(τ) dτ. (3.76)

An integrator is a linear, time-invariant, invertible, and causal system
with memory (Figure 3.14).

3.4.5. Differentiator
The differentiator is used in continuous time systems; it takes the derivative
of the input to generate the output, as follows:

y(t) = dx(t)/dt. (3.77)
A differentiator is a memoryless, linear, time invariant, stable and
non-invertible system component (Figure 3.15).

3.4.6. Unit Delay Operator


Unit delay operator is used in discrete time systems. It is related to the
discrete version of the differentiator (i.e., y[n] = x[n] − x[n − 1]). Unit delay
operator is defined as follows:

y[n] = x[n − 1]. (3.78)


It is easy to show that the unit delay operator is a linear, time-invariant,
causal, invertible and stable system with memory (Figure 3.16).


Figure 3.16: Schematic representation of the unit delay operator.


Figure 3.17: Block diagram representation of the system, y[n] = x[n + 1],
represented by the unit advance operator A.

3.4.7. Unit Advance Operator


Unit advance operator is used in discrete time systems. It is defined as

y[n] = x[n + 1]. (3.79)


The unit advance operator is a linear, time-invariant, invertible and
stable system with memory (Figure 3.17). However, it is non-causal.

Exercise 3.18: Find a block diagram representation for the following system:

y[n] = x[n] − x[n − 1]. (3.80)


Is this system memoryless? Is this system causal?

Solution:
This system requires an adder with a minus sign and a unit delay operator, as
shown in Figure 3.18.
The system has memory since, for example, the output y[1] depends on x[0].
The system is causal, because the present value of the output does not depend
on the future values of the input.


Figure 3.18: Block diagram representation of the system, y[n] = x[n] − x[n − 1].
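As a practical exploration in Python, the block diagram above translates almost directly into code. The sketch below is a minimal, illustrative example (the input sequence is arbitrary, and x[n] is assumed to be zero for n < 0); it realizes the adder and the unit delay element D explicitly. NumPy users could obtain the same result with np.diff after prepending a zero.

```python
import numpy as np

def first_difference(x):
    """Implement y[n] = x[n] - x[n-1] with an explicit unit delay."""
    y = np.zeros_like(x, dtype=float)
    delayed = 0.0                 # content of the delay element D (x[n] = 0 for n < 0 assumed)
    for n, xn in enumerate(x):
        y[n] = xn - delayed       # adder with a minus sign
        delayed = xn              # update the unit delay
    return y

x = np.array([1.0, 3.0, 2.0, 2.0, 5.0])   # arbitrary example input
print(first_difference(x))                # [ 1.  2. -1.  0.  3.]
```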


Figure 3.19: Cascaded representation of an incrementally linear system (given


in Exercise 3.19), represented by the model, h and its inverse, h−1 .

Exercise 3.19: Find the system equation of the block diagram represen-
tation below. Is this system linear? Is this system invertible? If yes, find its
inverse.
[Block diagram: the input x(t) is multiplied by a and added to the constant b by an adder, producing y(t).]

Solution: From the given block diagram, we can write the system equation as
y(t) = ax(t) + b.
This system does not satisfy the superposition property, because when the
input is x(t) = a1 x1 (t) + a2 x2 (t), the corresponding output is not y(t) =
a1 y1 (t) + a2 y2 (t):

y(t) = a[a1 x1 (t) + a2 x2 (t)] + b ̸= a1 y1 (t) + a2 y2 (t). (3.81)


Although this system is not strictly linear, it bears many properties of a linear
system, such as invertibility. Recall that we call such systems incrementally
linear.
The inverse of the incrementally linear system can be directly obtained from
the system equation as follows:

x(t) = (y(t) − b)/a for a ≠ 0. (3.82)
There exists a unique input, x(t) for every output y(t). Thus it is invertible
(Figure 3.19).

Exercise 3.20: Find the equation for the system represented by the following
block diagram.

[Block diagram: an adder receives the input x(t) and the fed-back output y(t); the adder output passes through an integrator to produce y(t).]

Solution: This is a feedback control system, where the adder receives two
inputs: x(t) and y(t). The output of the adder must be dy(t)/dt so that, when
it is integrated, it yields y(t). Thus, this system can be represented by the
following differential equation:

dy(t)/dt − y(t) = x(t). (3.83)
This system is represented by a first-order differential equation. Systems built
from cascaded integrators and adders can be represented by higher-order
differential equations. We shall explore the properties of systems represented
by differential equations in Chapter 5.
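As a small numerical exploration (not part of the original text), the feedback loop above can be simulated by approximating the integrator with a forward-Euler sum. The step size, time horizon, input, and zero initial condition below are arbitrary choices used only to illustrate how the diagram maps to code.

```python
import numpy as np

dt = 1e-3                          # integration step (arbitrary choice)
t = np.arange(0.0, 2.0, dt)
x = np.ones_like(t)                # example input: x(t) = u(t)

y = np.zeros_like(t)               # zero initial condition assumed
for k in range(len(t) - 1):
    dydt = y[k] + x[k]             # adder output feeds the integrator: dy/dt = y + x
    y[k + 1] = y[k] + dt * dydt    # forward-Euler update

# Analytical solution of dy/dt - y = u(t) with y(0) = 0 is y(t) = e^t - 1
print(np.max(np.abs(y - (np.exp(t) - 1.0))))   # small discretization error
```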

3.5. Chapter Summary


How can we represent a discrete time or a continuous time system? Is it possible
to decompose a complicated system into a set of interrelated subsystems, so
that we can model and design a large class of natural and man-made systems?
How do we categorize systems with respect to predefined properties?
In this chapter, we define a system as a mapping between the input signal(s)
and output signal(s). Depending on the behavior of the system, this mapping
can be represented by an algebraic equation, differential equation, or integral
equation.
In most applications, finding a single equation to represent a system is
not possible. In order to represent complicated systems, we decompose them
into a set of subsystems, each of which is represented by an equation. The
interrelations among the subsystems are established by the input and output
signals of each subsystem.
We can combine subsystems in various forms, called series, parallel,
hybrid, and feedback control systems. We also study the properties of
systems, which provide us with a framework to model, design, and implement
a wide range of systems, using the available mathematical tools. In case we can
identify some of these properties, namely, memory, causality, invertibility,
stability, time invariance, and linearity, it is possible to represent systems
in more compact and precise models.
Finally, it is possible to define a set of basic building blocks for both discrete
time and continuous time systems. Combining the simple building blocks, such

as adders, multipliers, integrators, and differentiators for continuous time
systems, and unit advance and unit delay operators for discrete time
systems, enables us to model and design a large class of systems.

Problems
1. Consider two discrete time subsystems S1 and S2, defined by the following
difference equations;

S1 : y1 [n] = 4x1 [n] + 2x1 [n − 1],


S2 : y2 [n] = y1 [n − 2].

Suppose that S1 and S2 are connected in series to form an overall system,


S, as shown below:
x1 [n] → S1 → y1 [n] → S2 → y2 [n]

a) Find the difference equation for the overall system, which relates the
input x[n] = x1 [n] and output y[n] = y2 [n].
b) Would the system equation you obtain in part a be different if the order
of the series connection of S1 and S2 is reversed? In other words, is
the series connection of sub systems commutative? Verify your answer.
c) Is the overall system S linear? Show if the superposition property holds
or not.
d) Is the overall system S time invariant? Verify your answer.
2. Consider two continuous time subsystems S1 and S2, defined by the follow-
ing differential equation;

S1 : y1 (t) = 4x(t) + 2 dx(t)/dt,

S2 : y2 (t) = dx(t)/dt.
Suppose that these S1 and S2 are connected in parallel to form an overall
system S with the same input x(t) and with an output y(t) = y1 (t) + y2 (t),
as shown below:

[Block diagram: x(t) feeds both S1 and S2 in parallel; their outputs are added to produce y(t).]

a) Find the differential equation, which relates the input x(t) and the
output y(t) for the overall system S.

b) Is the overall system S linear?
c) Is the overall system S time invariant?
d) If S1 and S2 were interchanged in the parallel configuration, would it
make any difference to the overall system S?
3. Consider a discrete time system, represented by the following difference
equation:

y[n] = x[n] sin(πn/2).
a) Is this system stable?
b) Is this system invertible?
c) Is this system causal?
d) Is this system linear?
4. Consider the continuous time systems, represented by the following equa-
tions. Are these systems linear and time-invariant? Verify your answers.
a) y(t) = 2tx(t + 1)
b) y(t) = x(t) sin(t − 1)
c) y(t) = 2δ(t)
d) y(t) = x(2t2 )
5. Consider the discrete time systems, represented by the following equations.
Are these systems linear and time-invariant? Verify your answers.
a) y[n] = x[2n] cos[πn]
b) y[n] = x[n2 − 1]
c) y[n] = 2x[n − 1] + x[2n − 2]
d) y[n] = x2 [2n + 2]
6. Given the following equations for discrete time systems, check if the proper-
ties of memory, stability, causality, linearity, invertibility, and time-
invariance hold. Verify your answers for each system and for each property.
a) y[n] = 2x[n2 ]
b) y[n] = (x[n − 100] + x[100 − n]) sin 5n
c) y[n] = δ[n]x[2n]
d) y[n] = x[n/3]
e) y[n] = { 0 for n < 0; x[n + 4] for n ≥ 0 }
7. Given the following equations for continuous time systems, check if the
properties of memory, stability, causality, linearity, invertibility, and
time-invariance hold. Verify your answers for each system and for each
property.

a) y(t) = ∫_{−∞}^{5t} x(2τ) dτ
b) y(t) = d(x(t) sin(3t))/dt
c) y(t) = x(2t + 3)
d) y(t) = ∫_{−5t}^{∞} 2x(τ) dτ
e) y(t) = dx(2t)/dt
8. Given the following system equations, check if the properties of memory,
stability, causality, linearity, invertibility, and time-invariance hold.
Verify your answers for each system and each property.
a) y(t) = x(t − 5)
b) y(t) = (sin(2t)/t)² x(t)
c) y(t) = (3t + cos t) x(t)
d) y[n] = Σ_{k=−∞}^{n} x[k]
e) y(t) = d(x(t) + sin(cos(t)))/dt
9. Consider a discrete time system S, represented by the following difference
equation;
y[n] = x[n]h[n + 1] + 2h[n].
a) If h[n] = C for all n and C is a constant, show that S is time invariant.
b) If h[n] = n, show that S is not time invariant.
c) Is this system linear for h[n] = n?
10. Consider the following statements and determine if they are always true,
always false or neither. Justify your answers.
a) The parallel connection of two linear time-invariant systems is itself a
time-invariant system.
b) The series connection of two causal and linear systems is itself causal
and linear.
c) A system consisting of a causal and linear subsystem connected in series
with a non-linear and time-varying subsystem is not causal or linear.
11. Consider the subsystems S1 ,S2, and S3, where the inputs and outputs are
represented by the following equations;

S1 : y1 [n] = { 0 for n even; x²[n] for n odd },
S2 : y2 [n] = 2y1 [n/2],
S3 : y[n] = (1/4) y2 [n + 2] + y2 [n].

Suppose that these systems are connected in series, as shown in the block
diagram below:

x[n] → S1 → y1 [n] → S2 → y2 [n] → S3 → y[n]

a) Find the equation, which represent the overall system S, which relates
the overall input x[n] to the overall output y[n].
b) Are the subsystems, S1, S2, and S3 linear, time-invariant, causal, in-
vertible, and stable? Verify your answer for each subsystem.
c) Is the overall system S linear, time-invariant, causal, invertible, and sta-
ble? Verify your answer for S.
12. Show that if the input of a time-invariant system is periodic, then the cor-
responding output is also periodic for both continuous and discrete time
systems.
13. Consider a time-invariant system with an aperiodic input. Is the output
signal always aperiodic? Verify your answer by giving examples.

14. Consider the block diagram representation of a causal feedback control sys-
tem
[Block diagram: the adder forms the error signal e[n] from the input x[n] and the output fed back with gain −1; the forward block computes y[n] = 2e[n − 1].]

a) Find and plot the output for x[n] = δ[n] + δ[n − 2].
b) Sketch the output given x[n] = u[n] − u[n − 4].
15. Are the following systems invertible? Find the inverse systems if they are
invertible.
a) y[n] = 2x[n2 ]
b) y(t) = sin[x(t)]
c) y[n] = x[n + 1]x[n − 1]
d) y(t) = dx(t)/dt
e) y[n] = { x[2n] for n ≥ 0; x[n − 2] for n < 0 }
f) y(t) = cos(3t)x(t)
16. Find a block diagram representation for the continuous time system, repre-
sented by the following equation:

d²y(t)/dt² + a y(t) = dx(t)/dt.
17. Find a block diagram representation for the discrete time system, repre-
sented by the following equation:

y[n − 2] + 0.5y[n − 1] + 0.6y[n] = x[n].


18. Consider the following continuous time system equation, where the input is
represented by the superposition of derivatives of the output:
Σ_{k=0}^{N} a_k d^k y(t)/dt^k = x(t).
a) Find a block diagram representation of this system.
b) For x(t) = 0, show that if s0 satisfies the following algebraic equation,
p(s) = Σ_{k=0}^{N} a_k s^k = 0,

then Ae^{s_0 t} satisfies the system equation given above, where A is an
arbitrary complex constant.

19. Consider the system equation given below;

d³y(t)/dt³ + 4 d²y(t)/dt² + 5 dy(t)/dt + 2y(t) = x(t).
a) Find a block diagram representation of this system.
b) For x(t) = 0, find a solution to this equation in the form Ae^{s_0 t}.

20. Consider the following discrete time system equation, where the input is
equal to the superposition of differences of the output:
Σ_{k=0}^{N} a_k y[n − k] = x[n].

a) Find a block diagram representation of this system.


b) For x[n] = 0, show that, if z0 is a solution of the equation
p(z) = Σ_{k=0}^{N} a_k z^{−k} = 0,

then Az_0^n is a solution of the difference equation given above, where A is an arbitrary constant.


21. Given a discrete time system, represented by the following equation;
y[n] − 3y[n − 1] + 2y[n − 2] = x[n].
a) Find a block diagram representation of this system.
b) For x[n] = 0, find a solution to this equation in terms of Az_0^n.

Chapter 4
Representation of Linear
Time Invariant Systems by
Impulse Response and
Convolution Operation

“The universe is nonlinear. But straight lines are really nice.”


D. C. Vural

In Chapter 1, we mentioned the systems approach, where we noted


that the whole is more than the sum of its parts, in a wide range of systems.
However, this inequality becomes an equality, if a system is linear, holding the
superposition property. In this case, the whole system can be represented by a
set of subsystems interconnected with each other by signals. The behavior of
the linear systems can be studied by modeling its subsystems and the signals
relating the subsystems. On the other hand, if a system is not linear, the
interactions among the subsystems may generate unpredictable outputs, which
cannot be studied by the analysis of the subsystems. This behavior prevents us
from modeling nonlinear systems as a collection of interrelated subsystems.
Thus, linear models are very important, in the sense that they enable us to
break down the systems into a bunch of mathematically tractable components.
Time invariance is a type of symmetry, where the representation of a system
does not change in time. Time invariant systems perform the same operation at
time instant t and time instant (t − t0 ) for any t0 ̸= 0. This property simplifies
the modeling and analysis of systems.
A large class of systems can be represented by linear, time-invariant (LTI)
systems, which enable us to find a unique model for a system by establishing
the relationship between the input and the output signals. In this chapter, we

[Figure: a discrete-time signal x[n] expressed through shifted impulses δ[n − k] (left) and a continuous time signal x(t) expressed through shifted impulses δ(t − τ) (right), with x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k] and x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ.]

Figure 4.1: Representation of discrete-time signals in terms of weighted sum-


mation of shifted impulse functions (left) and representation of continuous time
signals in terms of the weighted integral of shifted impulse functions (right).

shall study the discrete time and continuous time LTI systems.
Our goal is to find an equation, which relates the input x(·) to the corre-
sponding output y(·) to represent an LTI system.
Suppose that we observe an input-output pair, x(·) and y(·), of an LTI
system. In Chapter 2, we showed that it is possible to represent any function
in terms of the weighted integral of shifted impulse functions δ(t − τ ) for
continuous time systems. Similarly, we can represent any function in terms of
weighted summation of shifted impulses, δ[n − k] for discrete time signals, as
shown in Figure 4.1.
For example, suppose that a continuous time, linear time invariant sys-
tem receives an input signal, x(t) = e−jωt , and outputs a unit step signal,
y(t) = u(t). These signals can be represented by the weighted integral of shifted
impulse functions. Mathematically,
x(t) = ∫_{−∞}^{∞} e^{−jωτ} δ(t − τ) dτ = e^{−jωt}, (4.1)

y(t) = ∫_{−∞}^{∞} u(τ) δ(t − τ) dτ = u(t). (4.2)

Motivating Question: Can we find a relationship between x(t) and y(t),


using the equations (4.1) and (4.2) to represent an LTI system by a single
equation?
In this chapter, we answer this question by defining a new operation, called
convolution for both continuous time and discrete time systems. This opera-


Figure 4.2: If we observe the input x(·) and the corresponding output y(·), can
we find a unique model h(·) for the “Black Box”?


Figure 4.3: Impulse response is the response of an LTI system to a unit impulse
function. A discrete-time LTI system generates the impulse response h[n] for
the unit impulse input δ[n], whereas a continuous-time LTI system generates
the impulse response h(t) for the unit impulse input δ(t).

tion relates the input-output pair of an LTI system through a specific response,
called the impulse response.

4.1. Representation of LTI Systems by


Impulse Response
Suppose that we feed an input signal x(·) and observe a signal y(·) at the
output of a black box of Figure 4.2. What is it in the box? The answer lies in
the very important concept, called the impulse response.

Definition 4.1 (Impulse response): The impulse response is the signal y(·) =
h(·) of an LTI system, observed at the output when the input signal is a
unit impulse function, x(·) = δ(·) (Figure 4.3).

Motivating Question: Why is impulse response very important? Because it


uniquely represents an LTI system! In other words, impulse response uniquely
relates any input to the corresponding output of an LTI system.

4.1.1. Representation of Discrete-Time Linear
Time-Invariant Systems by Impulse Re-
sponse
Suppose that we feed a discrete time unit impulse, δ[n] to an LTI system, and
at the output, we measure the impulse response, h[n], as shown in Figure 4.4.
Next, we shift the impulse function at the input and feed the shifted impulse,
δ[n − 1]. Since the system is time-invariant, we obtain the same shift at the
output, that is, the shifted impulse response, h[n − 1]. If we keep shifting the
impulse functions, δ[n − k], for all k and feeding them as inputs to the LTI
system, we obtain the shifted versions of the impulse response, h[n − k] (Figure
4.4). Now, let’s multiply the shifted impulse function by the k-th component
of a general signal x[n], to feed x[k]δ[n − k] at the input. Since the system is
linear, at the output, we get x[k]h[n − k].
Finally, let’s superpose the scaled and shifted impulses, x[k]δ[n − k], for all k,
as follows:

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k] (4.3)

and feed it as the input signal. Since the system is linear and time-invariant,
the superposition of the shifted impulses outputs the same superposition of the
shifted impulse responses, x[k]h[n − k], as follows,


y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] (4.4)

as shown in Figure 4.4. This equation is called the convolution summation.


Basically, this equation states that the output of a discrete time LTI sys-
tem is equal to the weighted superposition of its shifted impulse responses.
The weights are the value of the input signal at the point of the shift. Thus,
convolution summation relates a general input x[n] to a general output y[n]
through the impulse response, providing us with an equation for representing
a linear time-invariant system.
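The convolution summation is easy to explore numerically. The sketch below (with arbitrary finite-length example signals, not taken from the text) evaluates the sum with an explicit double loop and checks it against NumPy's built-in np.convolve.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5])      # x[n] for n = 0, 1, 2 (zero elsewhere)
h = np.array([1.0, -1.0, 2.0])     # h[n] for n = 0, 1, 2 (zero elsewhere)

# Direct evaluation of y[n] = sum_k x[k] h[n-k] over the nonzero range of n
N = len(x) + len(h) - 1
y = np.zeros(N)
for n in range(N):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]

print(y)
print(np.convolve(x, h))           # same result
```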

4.1.2. Representation of Continuous-Time Lin-


ear Time-Invariant System
We can use the methodology above to derive a relationship between the input
and output signals of the continuous time LTI systems. This time, we feed a
continuous time unit impulse input, δ(t) to an LTI system, and at the output,


Figure 4.4: Relationship between the input and output signal of a discrete-time
LTI system: (a) Response of a system, y[n] = h[n] to a unit impulse function,
δ[n]. (b) Since the system is time-invariant, shifted unit impulse, δ[n − k],
generates a shifted impulse response, h[n − k]. (c) Since the system is linear,
we can superpose all the shifted impulses at the input and obtain the same
superposition at the output.

we measure the impulse response, h(t), as shown in Figure 4.5.


Next, we shift the impulse function at the input and feed the shifted impulse,
δ(t − ∆t), as in Figure 4.5. Since the system is time-invariant, we obtain the
same shift at the output, that is, the shifted impulse response, h(t − ∆t). If we
keep shifting the impulse functions, δ(t − k∆t), for all k and keep feeding them
as inputs to the LTI system, we obtain the shifted versions of the impulse
response, h(t − k∆t).
Now, let’s multiply the shifted impulse function by x(k∆t), to feed x(k∆t)δ(t −
k∆t) at the input. Since the system is linear, at the output of the system, we
get x(k∆t)h(t − k∆t).
Next, let’s superpose all the input signals to obtain a new signal,

x_∆t(t) = Σ_{k=−∞}^{∞} x(k∆t) δ(t − k∆t) (4.5)

and feed it as the input signal. Since the system is linear, the superposition of
the shifted impulse functions at the input results in the superposition of the
shifted impulse responses at the output, as follows:


Figure 4.5: Relationship between the input and output signal of a continuous
time LTI system: (a) Response of a system, y(t) = h(t), to a unit impulse
function, δ(t). (b) Since the system is time-invariant, a shifted unit impulse,
δ(t − ∆t), generates a shifted impulse response, h(t − ∆t). (c) Since the system
is linear, we can superpose all of the shifted impulses at the input and obtain
the same superposition at the output.


Figure 4.6: Representation of LTI system by its impulse response, h(·).


y_∆t(t) = Σ_{k=−∞}^{∞} x(k∆t) h(t − k∆t). (4.6)

If we take the limits with respect to time, the above summation operations
converge to the integral operations:
x(t) = lim_{∆t→0} x_∆t(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ, (4.7)

y(t) = lim_{∆t→0} y_∆t(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ, (4.8)

where lim_{∆t→0} x_∆t(t) → x(t) and lim_{∆t→0} y_∆t(t) → y(t). Thus, we have

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ, (4.9)

as shown in Figure 4.5. This equation is called the convolution integral.


Basically, the above equation states that the output of a continuous time
LTI system is equal to the weighted integral of its shifted impulse responses.
The weights are the value of the input signal at the point of shift. Thus,
the convolution integral relates a general input x(t) to a general output y(t)
through the impulse response, providing us with an equation for representing a
continuous-time LTI system.
Later in this book, we shall see that given an input-output pair of a
continuous-time or discrete-time LTI system, we can uniquely find its impulse
response. Since impulse response uniquely defines an LTI system, it is possible
to model the black box of an LTI system from an observed input-output signal
pair.
Convolution integral and convolution sum uniquely identify continuous-
time and discrete-time LTI systems, respectively (Figure 4.6). Thus, the con-
volution operation, for continuous-time LTI systems,
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t) (4.10)


Figure 4.7: Associativity of convolution. Provided that h1 (·) and h2 (·) are im-
pulse responses of two LTI systems, the system on the left is equivalent to the
system on the right. Using the commutativity property, we can find another
equivalent system: x → h2 → h1 → y.

and for discrete-time systems,



y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = x[n] ∗ h[n] (4.11)

are considered as system equations. We use “∗” as the shorthand notation for
the convolution operation.

Remark 4.1: In continuous time LTI systems, the operand functions and
the output of the convolution integral are all continuous time functions.
Similarly, in discrete time LTI systems, the operand functions and the output
of the convolution summation are all discrete time functions.

Question: Show that convolution operation satisfies the following properties:


1) Commutativity: Convolution operation is commutative:

x(·) ∗ h(·) = h(·) ∗ x(·). (4.12)


Formally speaking,
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ (4.13)

x(·) → h(·) → y(·) = h(·) → x(·) → y(·) (4.14)


The commutativity property of convolution operation reveals that we can
replace the input signal with the impulse response of an LTI system with-
out changing the output.
2) Associativity: Convolution operation is associative:

x(·) ∗ [h1 (·) ∗ h2 (·)] = [x(·) ∗ h1 (·)] ∗ h2 (·). (4.15)


This property reveals that for an input signal x(·), passing it through
h1 (·) and h2 (·) in a series connection, and passing it through a single
system represented by h1 (·) ∗ h2 (·) would yield the same output signal


Figure 4.8: Distributivity of convolution. Two block diagrams are equivalent.

y(·) (Figure 4.7).


Together with commutativity, associativity property reveals that in the
series connection, the order of the subsystems does not matter (Figure
4.7).
3) Distributivity: Convolution operation is distributive:

x(·) ∗ [h1 (·) + h2 (·)] = x(·) ∗ h1 (·) + x(·) ∗ h2 (·) (4.16)


The distributivity property of convolution operation reveals that the addi-
tion of convolutions of an input signal with two different impulse responses is
equivalent to the convolution of added impulse responses with the same input.
This property reduces a parallel connection of two subsystems to a single
component whose impulse response is the sum of the impulse responses of the
parallel branches (Figure 4.8).
In summary, the convolution operation allows us to rearrange the input signals
and impulse responses of subsystems, as permitted by the above properties.
Motivating Question: What does the convolution operation do in a real-life
system fed by a particular input? When we convolve an input signal x(·) with
an impulse response, h(·), what do we measure at the output?
Let us further study the convolution operation to understand its meaning
and implications for systems.

4.1.3. Convolution Operation in Continuous Time


When we convolute two functions, x(t) and h(t), we apply the following steps:

Step 1: Change the time variable of the impulse response to the dummy variable of
integration to get h(τ).
Step 2: Time-reverse the dummy variable of integration, τ, to obtain the reversed
impulse response, h(−τ).
Step 3: Shift the reversed impulse response by the time variable t to obtain the
reversed and shifted impulse response, h(t − τ).
Step 4: Multiply the reversed and shifted impulse response with the input signal,
x(τ), to get h(t − τ)x(τ).
Step 5: Find the area under the product, using the integration operation, for each
translation of the time variable t:

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t). (4.17)
Since the convolution operation is commutative, in the above steps we can
replace the signal x(t) by the impulse response, h(t). Formally,

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ. (4.18)
Notice that in the convolution operation, there are two variables. The vari-
able τ is the dummy variable of integration. After we take the integral of
x(τ )h(t − τ ), it disappears. The time variable t translates the reversed impulse
response h(−τ ) all over the input function x(τ ). Due to the commutativity
property of the convolution operation, we can replace the impulse response
with the input.

INTERACTIVE: Explore convolution @ https://384book.net/i0401

Loosely speaking, convolution operation measures the degree of similarity of


two functions, x(t) and h(−t) as a function of time. The convolution operation
is intensively used in designing a wide range of systems, including signal, video,
and image processing, and machine learning.
Impulse response, which represents an LTI system, takes different names,
depending on the application domains.
• In signal processing, the impulse response is called filter, because it passes
only the parts of the input signal, which resembles the impulse response
and suppresses the rest. Like a water filter, which keeps the chemicals and
only passes the clean part of polluted water, a filter purifies the signal. For
example, one can design a filter for a microphone to suppress the noise of
the environment and just pass the clean speech or music. In other words,
the convolution operation modifies the input signal depending on the shape

of the filter function.
• In image and video processing, impulse response is called mask, because it
detects an object in an image. When we design a mask with the character-
istics of an object, the convolution of an image signal with a mask outputs
high values in the regions that resemble the object. Thus, it is possible to
identify the location or class of an object at the output of the convolution
operation.
• In machine learning, it is called model, because it is very handy in de-
signing a learning algorithm for object detection and classification. In this
case, a network architecture, called Convolutional Neural Network, learns
the impulse responses to detect and/or classify an object.
Designing a problem-specific LTI system is beyond the scope of this book.

Exercise 4.1: Suppose that, we are given the following impulse response and
the input signal,

h(t) = e−2t u(t), x(t) = e−t u(t). (4.19)


Find and plot the output of this LTI system.

Solution: We follow the steps of the convolution operation given above and
evaluate the convolution integral as follows:

y(t) = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ
     = ∫_{0}^{t} e^{−(t−τ)} e^{−2τ} dτ
     = e^{−t} ∫_{0}^{t} e^{−τ} dτ (4.20)
     = e^{−t} (1 − e^{−t}) u(t).

INTERACTIVE: Explore the convolution of two exponential functions @ https://384book.net/i0402
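The closed-form result of Exercise 4.1 can also be checked numerically by approximating the convolution integral with a Riemann sum, that is, np.convolve scaled by the sampling step. This is only a sketch; the time grid and step size below are arbitrary choices.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t)                     # x(t) = e^{-t} u(t), sampled for t >= 0
h = np.exp(-2.0 * t)               # h(t) = e^{-2t} u(t)

# Riemann-sum approximation of the convolution integral
y_num = np.convolve(x, h)[:len(t)] * dt
y_closed = np.exp(-t) * (1.0 - np.exp(-t))   # e^{-t}(1 - e^{-t}) u(t)

print(np.max(np.abs(y_num - y_closed)))      # close to 0 (discretization error)
```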

Exercise 4.2: Find the output of an LTI system, for any input, x(t), when
the impulse response is an impulse input, h(t) = δ(t).

Solution: Setting h(t) = δ(t) in the convolution integral, we get


Figure 4.9: When the impulse response of an LTI system is δ(t), the system
acts as an identity system. The output is equal to the input.


Figure 4.10: When the impulse response of an LTI system is δ(t − t0 ), the
system delays its input signal by t0 .

y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = x(t). (4.21)

Thus, δ(t) acts as an identity function in convolution operation (Figure 4.9).

Exercise 4.3: Find the output of an LTI system for any input, x(t), when
the impulse response is the shifted impulse function, h(t) = δ(t − t0 ).

Solution: Setting h(t) = δ(t − t0 ) in the convolution operation, we get

y(t) = x(t) ∗ δ(t − t0 ) = ∫_{−∞}^{∞} x(τ) δ(t − t0 − τ) dτ = x(t − t0 ). (4.22)

The output is just the time-shift version of the input. Thus, δ(t − t0 ) acts as a
delay operator (Figure 4.10).

Exercise 4.4: Find the response of an LTI system represented by the impulse
response, h(t) = u(t), when the input is x(t) = u(t).

Solution: We need to evaluate


y(t) = ∫_{−∞}^{∞} u(τ) u(t − τ) dτ. (4.23)

Since the unit step function u(t) equals 1 for t > 0, the integrand is non-zero
for τ > 0 and t − τ > 0, hence for 0 < τ < t. Then, the output is

y(t) = ∫_{−∞}^{∞} u(τ) u(t − τ) dτ = ∫_{0}^{t} dτ = t u(t). (4.24)


Figure 4.11: (left) Convolution of two unit step functions. Prior to integration
we multiply u(τ ) by u(t − τ ). The shaded area shows the overlap. Note that
as we change the time variable t between (−∞, ∞), we translate one of the
unit step function over the other one. For t < 0, there is no overlap. Thus the
output signal y(t) = 0 for t < 0. As we increase the time variable for t > 0,
we get more and more overlap between the two unit-step functions. Thus, the
convolution operation outputs a monotonically increasing function y(t), which
is a ramp. (right) The result of convolution, y(t) = tu(t).

We multiplied t with u(t) above because y(t) = 0 for t < 0. This convolution
is illustrated in Figure 4.11.

Exercise 4.5: Find the response of an LTI system, represented by the im-
pulse response, h(t) = eλ2 t u(t) for the input, x(t) = eλ1 t u(t). An example is
illustrated in Figure 4.12.

Solution: This is the general form of convolution of two exponential functions:

y(t) = ∫_{0}^{t} e^{λ2 (t−τ)} e^{λ1 τ} dτ (4.25)

     = e^{λ2 t} ∫_{0}^{t} e^{τ(λ1 −λ2 )} dτ (4.26)

     = (e^{λ2 t}/(λ1 − λ2 )) [e^{τ(λ1 −λ2 )}]_{0}^{t} (4.27)

     = ((e^{λ1 t} − e^{λ2 t})/(λ1 − λ2 )) u(t). (4.28)
Note that there is no overlap for t < 0. Thus, y(t) = 0 for t < 0, hence, we
have u(t) as the multiplier. It increases as we increase t > 0, until the tails get
sufficiently small. Then it starts to decrease.


Figure 4.12: While convolving two decaying exponential functions, we reverse
one of the exponentials and translate it over the other one as we change the
time variable −∞ < t < ∞.

INTERACTIVE: Explore the convolution of two exponential functions @ https://384book.net/i0402

Exercise 4.6: Find the convolution of the input signal x(t) and the impulse
response h(t) given below to obtain the output signal, y(t) = x(t) ∗ h(t):

x(t) = { 1 for −1 ≤ t ≤ 2; 0 otherwise } (4.29)

h(t) = { 1 for −2 ≤ t ≤ 2; 0 otherwise } (4.30)

Solution: The input signal and the impulse response are both nonzero only
on finite intervals. Thus, the output of the convolution integral is also nonzero
only on a finite interval. To find the nonzero interval of the output, we add the
lower limits and the upper limits of the two functions; in this particular example,
y(t) ≠ 0 for −3 ≤ t ≤ 4.
As the time variable varies over (−∞, ∞), the interval of non-zero overlap, and
thus the limits of the integral, change. This requires evaluating the integral
separately for each distinct set of limits. Since there is no overlap for t < −3
and t > 4, y(t) = 0 there.
Considering the convolution formula,
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ, (4.31)


Figure 4.13: Result of the convolution operation y(t) = x(t) ∗ h(t), in Exercise
4.6.

x(τ ) and h(t − τ ) will


• partially overlap for −3 ≤ t < 0,
• fully overlap for 0 ≤ t < 1,
• partially overlap for 1 ≤ t < 4.
We need to evaluate the convolution integral for these three regions separately.
For −3 ≤ t < 0, we need to find the boundaries of the integral, that is, the
bounds on the integration variable τ . From the definitions of x and h, we know
that −1 ≤ τ ≤ 2 and t − 2 ≤ τ ≤ t + 2. Imposing the constraint −3 ≤ t < 0 on
these two inequalities restricts τ to −1 ≤ τ ≤ t + 2. We have
y(t) = ∫_{−1}^{t+2} dτ = t + 3 for −3 ≤ t < 0. (4.32)

For 0 ≤ t < 1, −1 ≤ τ ≤ 2 and t − 2 ≤ τ ≤ t + 2 restrict τ to −1 ≤ τ ≤ 2:

y(t) = ∫_{−1}^{2} dτ = 3 for 0 ≤ t < 1. (4.33)

For 1 ≤ t < 4, −1 ≤ τ ≤ 2 and t − 2 ≤ τ ≤ t + 2 restrict τ to t − 2 ≤ τ ≤ 2:

y(t) = ∫_{t−2}^{2} dτ = −t + 4 for 1 ≤ t < 4. (4.34)

Overall, the output y(t) is a trapezium, which shows that the maximum re-
semblance between the input signal and the impulse response is for 0 ≤ t < 1
(Figure 4.13).
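A short numerical sketch (the sampling grid and step size are arbitrary choices) reproduces the trapezium of Figure 4.13: the output rises from t = −3, stays at the value 3 over the interval where the rectangles fully overlap, and decays to zero at t = 4.

```python
import numpy as np

dt = 1e-3
t = np.arange(-5.0, 5.0, dt)
x = ((t >= -1) & (t <= 2)).astype(float)   # x(t) of Equation (4.29)
h = ((t >= -2) & (t <= 2)).astype(float)   # h(t) of Equation (4.30)

# Riemann-sum approximation of the convolution integral, aligned with the grid t
y = np.convolve(x, h, mode="same") * dt

print(round(y.max(), 2))                   # ~3.0, the height of the trapezium
plateau = t[y > 2.99]
print(round(plateau.min(), 2), round(plateau.max(), 2))   # approximately 0 and 1
```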

4.1.4. Convolution Operation in Discrete Time
Systems
For discrete-time signals and systems, instead of convolution integral, we apply
the convolution summation. The properties and meaning of the convolution
sum are very similar to that of the convolution integral. Thus, we do not
repeat them in this section. All we need to do is to replace the signal and the
impulse response with their discrete time counterparts, x[n] and h[n], and use
the convolution sum,

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]. (4.35)

Let us go over some exercises to see the similarities and distinctions between
the continuous time and discrete time convolution.

Exercise 4.7: Find the output of a discrete time LTI system, represented by
the impulse response, h[n] = αn u[n], 0 < α < 1, when the input is x[n] = u[n].

Solution: When we deal with infinite limits of the convolution summation, we
need to take into account the nonzero intervals of the input function and
the impulse response. In this example, the convolution sum is as follows:

y[n] = Σ_{k=−∞}^{∞} u[k] α^{n−k} u[n − k], (4.36)

where u[k] is non-zero for k ≥ 0, and u[n − k] is non-zero for n − k ≥ 0. These
two constraints yield 0 ≤ k ≤ n. The convolution sum becomes

y[n] = Σ_{k=0}^{n} α^{n−k} = Σ_{k'=0}^{n} α^{k'}. (4.37)

There is a closed form for the above summation, as follows:

y[n] = Σ_{k'=0}^{n} α^{k'} = (1 − α^{n+1})/(1 − α). (4.38)

x[n], h[n] and y[n] are illustrated in Figure 4.14.
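The closed form in Equation (4.38) can be verified with a few lines of NumPy. In the sketch below, α = 0.5 and the truncation length N are arbitrary choices; because both signals are truncated, only the first N output samples are compared.

```python
import numpy as np

alpha, N = 0.5, 20
n = np.arange(N)
x = np.ones(N)                     # u[n], truncated to 0 <= n < N
h = alpha ** n                     # alpha^n u[n], truncated

y_num = np.convolve(x, h)[:N]      # the first N samples are unaffected by truncation
y_closed = (1 - alpha ** (n + 1)) / (1 - alpha)

print(np.allclose(y_num, y_closed))   # True
```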

Exercise 4.8: Find the output of an LTI system, represented by the impulse
response, h[n] = δ[n − n0 ], for any input x[n].


Figure 4.14: Output y[n] of a discrete-time LTI system, represented by the


impulse response, h[n] = 0.5n u[n], when the input is x[n] = u[n].


Figure 4.15: When the impulse response of a discrete-time LTI system is δ[n −
n0 ], the system delays its input signal by n0 .

Solution: This LTI system simply time-shifts the input by an integer amount
of n0 (Figure 4.15), as follows:

y[n] = x[n] ∗ h[n] = x[n] ∗ δ[n − n0 ] = x[n − n0 ]. (4.39)

Exercise 4.9: Find the impulse response of the discrete-time system, repre-
sented by the following difference equation:

y[n] = x[n] − x[n − 1]. (4.40)

Solution: We replace the input by the unit impulse function. Then, the cor-
responding output is the impulse response:

h[n] = δ[n] − δ[n − 1]. (4.41)

Exercise 4.10: Find the output y[n] of a discrete-time system represented


by the impulse response,

h[n] = u[n + 1] − u[n − 2], (4.42)

when the input is


x[n] = u[n] − u[n − 2]. (4.43)

Solution: Both the input and impulse response can be represented by shifted

impulse functions, as follows:

x[n] = δ[n] + δ[n − 1] (4.44)

h[n] = δ[n − 1] + δ[n] + δ[n + 1]. (4.45)


Convolution of x[n] and h[n],

y[n] = (δ[n] + δ[n − 1]) ∗ (δ[n − 1] + δ[n] + δ[n + 1]), (4.46)

yields
y[n] = δ[n + 1] + 2δ[n] + 2δ[n − 1] + δ[n − 2]. (4.47)
Here we used the distributivity property of convolution and the facts that (i)
convolution with δ[n] does not change the input, (ii) convolution with δ[n − n0 ]
shifts the input by n0 .

4.1.5. Cross-correlation and Autocorrelation Op-


erations
Convolution of two functions involves a time reverse operation together with
the translation over time in one of the functions, for both continuous time and
discrete time LTI systems, as given below:
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = x(t) ∗ h(t), (4.48)

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = x[n] ∗ h[n]. (4.49)

However, in many practical applications, the filter representing the system
can be designed in a time-reversed form, or it can be an even function. In such
cases, the time-reverse step of the convolution operation becomes redundant.
Furthermore, the signals can be complex-valued functions, which makes the
time-reverse operation more cumbersome in the real and imaginary parts. Thus,
we can define a slightly simpler version of the convolution operation, called
cross-correlation.

Definition 4.2: Cross-correlation or correlation operation between


two continuous time signals x(t) and h(t) is defined as
y(t) = ∫_{−∞}^{∞} x*(τ) h(t + τ) dτ = x(t) ⋆ h(t), (4.50)

where x∗ (t) indicates the complex conjugate of x(t) and ⋆ indicates the cor-

relation operation. When the signals are represented by real functions, their
complex conjugates become the same, and complex conjugate operations dis-
appear.
Note that the above integral approaches ∞ for power signals. It is cus-
tomary to normalize the cross-correlation function for power signals as

y(t) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x*(τ) h(t + τ) dτ = x(t) ⋆ h(t). (4.51)

Similarly, cross-correlation or correlation operation between two discrete


time signals x[n] and h[n] is defined as

y[n] = Σ_{k=−∞}^{∞} x*[k] h[n + k] = x[n] ⋆ h[n]. (4.52)

For the discrete-time power signals, the cross-correlation function is nor-


malized as

y[n] = lim_{N→∞} (1/(2N + 1)) Σ_{k=−N}^{N} x*[k] h[n + k] = x[n] ⋆ h[n]. (4.53)

Cross-correlation operation measures the degree of containment of one sig-


nal in the other signal. Since it avoids the time reverse operation, it is preferred
in many application domains. If one of the signals is even, convolution and
correlation yield the same result.

Exercise 4.11: Find the cross-correlation between the following continuous


time complex exponential functions with different harmonics:

x(t) = ejkω0 t and h(t) = ejlω0 t for k ̸= l. (4.54)

Solution:
Since this is a periodic signal, it is a power signal. Thus, the cross-correlation
function is defined as

y(t) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x*(τ) h(t + τ) dτ

     = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} e^{−jkω0 τ} e^{jlω0 (t+τ)} dτ (4.55)

     = lim_{T→∞} (1/T) e^{jlω0 t} ∫_{−T/2}^{T/2} e^{−jω0 τ(k−l)} dτ.

Using the Euler formula, we obtain
y(t) = lim_{T→∞} (1/T) e^{jlω0 t} ∫_{−T/2}^{T/2} (cos(ω0 τ(k − l)) − j sin(ω0 τ(k − l))) dτ = 0. (4.56)

As long as k ̸= l, sine and cosine functions remain periodic, which makes the
above integral 0. Therefore, for k ̸= l the complex exponentials with different
harmonics are not contained in each other. These types of signals are called
orthogonal.

Exercise 4.12: Find the cross-correlations between the following two toy
digital signals.

x[n] = −3δ[n] + 2δ[n − 1] − 1δ[n − 2] + δ[n − 3] (4.57)

h[n] = −δ[n] − 3δ[n − 2] + 2δ[n − 3] (4.58)

Solution: The cross-correlation of these two signals is obtained from the defi-
nition:

y[n] = Σ_{k=−∞}^{∞} x[k] h[n + k]. (4.59)

For n ≤ −4, y[n] evaluates to 0, since the two signals, x[k] and h[k + n], do
not overlap.
For n = −3, the last non-zero element of x[k] overlaps with the first non-zero
element of h[k − 3], which yields y[−3] = −1.
As we increase n, we obtain y[−2] = 1, y[−1] = −5, y[0] = 8, y[1] = −8,
y[2] = 13 and y[3] = −6. For n ≥ 4, the two signals do not overlap, therefore,
y[n] = 0 for n ≥ 4. Overall, y[n] can be expressed as

y[n] = −δ[n + 3] + δ[n + 2] − 5δ[n + 1] + 8δ[n] − 8δ[n − 1] + 13δ[n − 2] − 6δ[n − 3].
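These values can be reproduced with NumPy's np.correlate. One detail is the argument order: np.correlate(a, v, mode='full') computes Σ_n a[n + k] v*[n], so h must be passed first to obtain y[n] = Σ_k x[k] h[n + k]. The snippet below is a minimal check of the result above.

```python
import numpy as np

x = np.array([-3, 2, -1, 1])       # x[n], n = 0..3
h = np.array([-1, 0, -3, 2])       # h[n], n = 0..3

# y[n] = sum_k x[k] h[n + k]; lags n run from -(len(x)-1) to len(h)-1
y = np.correlate(h, x, mode="full")
lags = np.arange(-(len(x) - 1), len(h))

print(list(lags))    # [-3, -2, -1, 0, 1, 2, 3]
print(list(y))       # [-1, 1, -5, 8, -8, 13, -6]
```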

Definition 4.3: Auto-correlation operation of a continuous-time signal


x(t) is defined as
y(t) = x(t) ⋆ x(t) = ∫_{−∞}^{∞} x*(τ) x(t + τ) dτ. (4.60)

Similarly auto-correlation operation of a discrete-time signal x[n] is defined as

y[n] = x[n] ⋆ x[n] = Σ_{k=−∞}^{∞} x*[k] x[n + k]. (4.61)

Auto-correlation operation measures the correlation of a signal with a


lagged copy of itself as a function of the lag. It measures the similarity be-
tween different time instances of a function.
Time series analysis of signals, such as speech, medical signals, seismo-
graphic signals, etc. intensively uses the auto-correlation functions. Below is a
very simple example of auto-correlation of a finite duration signal.

INTERACTIVE: Explore cross-correlation and auto-correlation @ https://384book.net/i0403

Exercise 4.13: Find the auto-correlation function of the signal given below:

x[n] = { −1 for n = 0; 2 for n = 1; 1 for n = 2; 0 otherwise }. (4.62)

Solution:
We need to evaluate the discrete-time auto-correlation given in Equation (4.61).
For n < −2 and n > 2, x[k] and x[k + n] do not overlap, hence, y[n] = 0.
For other values of n, we have y[−2] = −1, y[−1] = 0, y[0] = 6, y[1] = 0 and
y[2] = −1. Overall, we can express the result as

y[n] = −δ[n + 2] + 6δ[n] − δ[n − 2].
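Again, a short NumPy check (correlating the finite-length signal with itself) reproduces this result:

```python
import numpy as np

x = np.array([-1, 2, 1])                   # x[0], x[1], x[2]
y = np.correlate(x, x, mode="full")        # lags n = -2, -1, 0, 1, 2
print(list(y))                             # [-1, 0, 6, 0, -1]
```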

4.2. Properties of Impulse Response for


LTI Systems
Linear time-invariant (LTI) systems are of special importance in system theory
for many reasons. First of all, they can be represented by a unique function,
called the impulse response. Secondly, a relatively complicated system can
be represented by parallel, series, hybrid, or feedback connections of many simple
LTI subsystems. Thus, it is easy to develop mathematically tractable methods
to design, analyze, and model natural and man-made objects using LTI sys-

tems. Even if the system is not inherently linear and/or time-invariant, it is
possible to define manifolds, where the system is piecewise linear and locally
time-invariant.
In the previous sections, we showed that an LTI system can be uniquely rep-
resented by its impulse response. Let us investigate the properties and behavior
of the impulse response for memory, causality, invertibility, and stability.

4.2.1. Impulse Response of Memoryless LTI Sys-


tems
Recall that a system is memoryless if the present value of the output depends
only on the present value of the input. Recall also that a memoryless LTI
system has a system equation where the output is proportional to the input, as follows:

y(·) = Kx(·). (4.63)


If we replace the input signal by the unit impulse function, we obtain the
impulse response of a memoryless LTI system at the output:

h(t) = Kδ(t) for continuous time systems, and
h[n] = Kδ[n] for discrete time systems, for some constant K. (4.64)
For all other cases, where h(t) ̸= Kδ(t), the LTI system has memory.
When an LTI system is memoryless, it simply multiplies an input by a
constant factor K. Thus, a memoryless LTI system is a scalar multiplier.

4.2.2. Impulse Response of Causal LTI Systems


Recall that if a system has memory, it may be causal or noncausal. A system
with memory is causal if the present value of the output depends on the past
and/or present value of the input. Recall, also, that all the memoryless
systems are causal.
When a system is LTI, the convolution sum and integral range over (−∞, ∞).
The terms with negative values of the time variable of the impulse response
would violate the above definition of causality. Thus, an LTI system is causal
if the limits of the convolution summation and convolution integral can be
restricted to (0, ∞), as follows:


Figure 4.16: If the output of an LTI system represented by an impulse response


h(·), for input signal x(·), is fed to the inverse of the system, represented by
h−1 (·), we obtain the original signal x(·) back.


y[n] = Σ_{k=0}^{∞} x[n − k] h[k], for discrete-time systems, (4.65)

y(t) = ∫_{0}^{∞} x(t − τ) h(τ) dτ, for continuous time systems. (4.66)

Furthermore, the impulse response of a causal LTI system satisfies the


following condition:

h[n] = 0 for n < 0, and h(t) = 0 for t < 0. (4.67)

Thus, the impulse response of a causal system is zero for all negative time instants.

4.2.3. Inverse of Impulse Response for LTI Sys-


tems
In general, a system represented by a model y(·) = h(x(·)), is invertible if there
exists a unique inverse model h−1 , such that

x(·) = h−1 (y(·)). (4.68)


In particular, an LTI system, represented by y(·) = x(·) ∗ h(·), is invertible if
there exists a unique inverse impulse response, h−1 (·), such that

x(·) = h−1 (·) ∗ y(·), (4.69)


which is illustrated in Figure 4.16.
Motivating Question: How can we find h−1 (t) and h−1 [n], for continuous
time and discrete time LTI systems, respectively?
This is not an easy task for a general impulse response function. It requires
an operation called deconvolution, which is beyond the scope of this book.

However, we can find a relationship between an impulse response h(·) and
its inverse, h−1 (·) by convoluting both sides of system equation with h−1 (·), as
follows:

y(·) ∗ h−1 (·) = x(·) ∗ h(·) ∗ h−1 (·). (4.70)


Recall that the convolution of a function by a unit impulse function is the
function itself, i.e.,

x(t) = x(t) ∗ δ(t). (4.71)


Thus, for

x(·) = y(·) ∗ h−1 (·), (4.72)


we need

h(·) ∗ h−1 (·) = δ(·). (4.73)


Thus, we seek an inverse impulse response such that the convolution of the
impulse response and its inverse results in a unit impulse function.
In summary, the relationship between an impulse response and its inverse
is

h(t) ∗ h−1 (t) = δ(t), for continuous-time systems, (4.74)


−1
h[n] ∗ h [n] = δ[n], for discrete-time systems, (4.75)

which satisfies

x(t) = h−1 (t) ∗ y(t), for continuous time systems, (4.76)


x[n] = h−1 [n] ∗ y[n], for discrete time systems. (4.77)

Exercise 4.14: Consider an LTI system represented by the following equa-


tion,

y(t) = x(t − t0 ). (4.78)


a) Find the impulse response, h(t).
b) Find the inverse of impulse response, h−1 (t).
c) Does this system have memory?
d) Is this system causal?

Solution:
a) In order to find the impulse response, we replace the input by the unit im-
pulse function. The corresponding output is the impulse response. Thus,
the impulse response is

h(t) = δ(t − t0 ), (4.79)


b) In order to find the inverse of the impulse response, we need to satisfy

h(t) ∗ h−1 (t) = δ(t). (4.80)


The above equality is satisfied, when

δ(t − t0 ) ∗ δ(t + t0 ) = δ(t). (4.81)


Thus,

h−1 (t) = δ(t + t0 ). (4.82)


c) This system has memory for t0 ̸= 0.
d) This system is causal, for t0 ≥ 0, because the present value of the output
depends only on the past values. Otherwise, it is non-causal.

Exercise 4.15: Consider an LTI system, called accumulator, represented by


the following equation:
y[n] = Σ_{k=−∞}^{n} x[k]. (4.83)

a) Find the impulse response, h[n].


b) Find a block diagram representation of this system, using an adder and
a unit delay operator.
c) Find the inverse of the impulse response, h−1 [n].
d) Does this system have memory?
e) Is this system causal?

Solution:
a) We replace the input with the unit impulse function. The corresponding
output is the impulse response. Thus the impulse response is
h[n] = Σ_{k=−∞}^{n} δ[k] = u[n], (4.84)

where u[n] is the unit step function.

b) We can use a simple mathematical trick to find a closed-form equation to
represent this system by taking the difference between y[n] and y[n − 1],
to obtain a more compact form of the system equation as follows:

y[n] − y[n − 1] = x[n]. (4.85)


This LTI system can be represented by the feedback control system illus-
trated in Figure 4.17.
c) The inverse of the impulse response of h[n] = u[n] should satisfy

h[n] ∗ h−1 [n] = δ[n]. (4.86)


We need to find a function h−1 [n], such that when it is convoluted by
u[n], the output is the impulse function. We know that

u[n] − u[n − 1] = δ[n] (4.87)


and we know that δ[n − 1] ∗ u[n] = u[n − 1]. Thus,

h−1 [n] = δ[n] − δ[n − 1]. (4.88)


Note that, finding the inverse of an impulse response requires some heuris-
tics. In this particular example, we found an inverse function h−1 [n] to
satisfy

u[n] ∗ (δ[n] − δ[n − 1]) = δ[n], (4.89)

where u[n] plays the role of h[n] and δ[n] − δ[n − 1] plays the role of h−1 [n], and

y[n] ∗ (δ[n] − δ[n − 1]) = x[n]. (4.90)

d) This system has memory, because h[n] ≠ Kδ[n].

e) This system is causal, because h[n] = 0 for n < 0.
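The relation h[n] ∗ h⁻¹[n] = δ[n] found above can also be confirmed numerically on truncated sequences. In the sketch below, the truncation length is an arbitrary choice, and only the leading output samples are meaningful because u[n] is cut off.

```python
import numpy as np

N = 10
h = np.ones(N)                 # h[n] = u[n], truncated to 0 <= n < N
h_inv = np.array([1.0, -1.0])  # h^{-1}[n] = delta[n] - delta[n-1]

print(np.convolve(h, h_inv)[:N])   # [1. 0. 0. ... 0.] -> delta[n]
```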

Exercise 4.16: Consider the LTI system represented by the following equa-
tion,

y(t) = x(t − 2) + x(t + 2). (4.91)


a) Find the impulse response of this system.
b) Is this system memoryless?
c) Is this system causal?

Figure 4.17: Block diagram for the LTI system represented by y[n] − y[n − 1] =
x[n] in Exercise 4.15.

Solution:
a) Impulse response of this LTI system is,

h(t) = δ(t − 2) + δ(t + 2). (4.92)


b) This system has memory, because h(t) ̸= Kδ(t).
c) This system is non-causal, because h(t) is not zero for all t < 0; the term
δ(t + 2) is nonzero at t = −2.

4.2.4. Impulse Response of Stable LTI Systems


Recall that a system is called stable if a bounded input signal generates a
bounded output signal. Let us apply this definition to the convolution sum
and convolution integral.
For a discrete-time system, when the input is bounded, in other words,

|x[n]| < B, (4.93)


we need that the convolution summation results in a bounded output. Math-
ematically speaking,

|x[n]| < B ⇒ |y[n]| < B Σ_{k=−∞}^{∞} |h[k]| (4.94)

⇒ Σ_{k=−∞}^{∞} |h[k]| < ∞. (4.95)

When a discrete time function, h[n], satisfies the above inequality, it is called
an absolutely summable function.
For a continuous time system, when the input is bounded, |x(t)| < B, we
need that the convolution integral results in a bounded output. Mathematically
speaking,

|x(t)| < B ⇒ |y(t)| < B ∫_{−∞}^{∞} |h(τ)| dτ (4.96)

⇒ ∫_{−∞}^{∞} |h(τ)| dτ < ∞. (4.97)

When a continuous time function, h(t), satisfies the above inequality, it is called
an absolutely integrable function.
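As a quick numerical illustration of these conditions (with an arbitrarily chosen truncation length), the partial sums of |h[n]| converge for h[n] = 0.5ⁿ u[n] but grow without bound for h[n] = u[n]:

```python
import numpy as np

n = np.arange(2000)                        # finite horizon (arbitrary truncation)
h_stable = 0.5 ** n                        # h[n] = 0.5^n u[n]
h_unstable = np.ones_like(n, dtype=float)  # h[n] = u[n]

# Partial sums of |h[n]|: the first converges (to 2), the second keeps growing
print(np.cumsum(np.abs(h_stable))[[10, 100, 1999]])    # approaches 2
print(np.cumsum(np.abs(h_unstable))[[10, 100, 1999]])  # 11, 101, 2000 -> unbounded
```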

Exercise 4.17: Consider a discrete-time LTI system represented by the im-


pulse response h[n] = u[n].
a) Find the system equation, which gives the relationship between the input
and the output.
b) Is this system memoryless?
c) Is this system causal?
d) Is this system stable?

Solution:
a) The system equation of this LTI system can be obtained from the convo-
lution summation:
y[n] = x[n] ∗ u[n] = Σ_{k=−∞}^{∞} x[k] u[n − k] = Σ_{k=−∞}^{n} x[k]. (4.98)

For y[n − 1], the above equation can be written as

y[n − 1] = Σ_{k=−∞}^{∞} x[k] u[n − 1 − k] = Σ_{k=−∞}^{n−1} x[k]. (4.99)

Subtracting Equation (4.99) from Equation (4.98) side by side yields

y[n] − y[n − 1] = x[n] (4.100)

b) This system has memory, because h[n] ̸= Kδ[n].


c) This system is causal, because h[n] = 0 for n < 0.
d) This system is unstable because, for a bounded input |x[n]| < B, the output
can grow without bound; the impulse response u[n] is not absolutely summable.

Exercise 4.18: Consider a continuous-time LTI system represented by the


following equation:

y(t) = ∫_{0}^{3} x(t − τ) dτ (4.101)

a) Find the impulse response of this system.
b) Is this system memoryless?
c) Is this system causal?
d) Is this system stable?

Solution:
a) We replace the input by the impulse function to obtain the impulse re-
sponse as follows:
h(t) = ∫_0^3 δ(t − τ) dτ = u(t) − u(t − 3).   (4.102)

b) This system has memory, because h(t) ≠ Kδ(t).
c) This system is causal, because h(t) = 0 for t < 0.
d) This system is stable, because h(t) is absolutely integrable:

∫_{−∞}^{∞} |h(τ)| dτ = ∫_0^3 u(τ) dτ = 3 < ∞.   (4.103)

4.2.5. Unit Step Response


The above analysis shows that at the heart of an LTI system lies the concept
of impulse response, which is the response of an LTI system to a unit impulse
function. In other words, given an LTI system, if we can measure its response
to a unit impulse input, we can find its response to any input through the
convolution operation. Similar to the impulse response, we can define a unit
step response as defined below.

WATCH: Learn more about the impulse response and the acoustics of instruments @ https://384book.net/v0401

Definition 4.4: Unit Step Response is the response of an LTI system to


the unit step function.

For a discrete-time system, replacing the input by the unit step function in
convolution summation, we can obtain the discrete-time unit step response,
as follows:

s[n] = u[n] ∗ h[n] = ∑_{k=−∞}^{∞} u[n − k] h[k] = ∑_{k=−∞}^{n} h[k].   (4.104)
There is a one-to-one correspondence between the unit step response and the impulse response. If we take the difference between the unit step response s[n] and its shifted version s[n − 1], we get

h[n] = s[n] − s[n − 1].   (4.105)
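Equations (4.104) and (4.105) are easy to verify numerically over a finite window; the sketch below uses the example impulse response h[n] = (0.5)^n u[n] (an arbitrary choice for illustration):

import numpy as np

n = np.arange(20)
h = 0.5 ** n                       # example impulse response
s = np.cumsum(h)                   # s[n] = sum of h[k] for k <= n (Equation 4.104)
h_back = np.diff(s, prepend=0.0)   # h[n] = s[n] - s[n-1], with s[-1] = 0 (Equation 4.105)
print(np.allclose(h, h_back))      # True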


For a continuous time system, replacing the input by the unit step function
in convolution integral, we can obtain the unit step response as

s(t) = u(t) ∗ h(t) = ∫_{−∞}^{∞} u(t − τ) h(τ) dτ = ∫_{−∞}^{t} h(τ) dτ.   (4.106)

There is a one-to-one correspondence between the continuous time unit


step response and impulse response. If we take the derivative of the above
convolution integral, we get

h(t) = ds(t)/dt.   (4.107)

Exercise 4.19: Find the impulse response of a continuous LTI system if its
unit step response is

s(t) = e−αt u(t). (4.108)

Solution: Take the derivative of the unit step response with respect to t,

h(t) = ds(t)/dt = −α e^{−αt} u(t).   (4.109)
Look at the beautiful symmetry of the continuous time exponential func-
tion! When the unit step response is an exponential function, its impulse re-
sponse is also an exponential function, scaled by the exponent.

Exercise 4.20: Find the impulse response of a discrete-time LTI system if


the unit step response is
s[n] = e−αn u[n]. (4.110)

Solution: Take the first difference of the unit step response,

h[n] = s[n] − s[n − 1] = e^{−αn} u[n] − e^{−α(n−1)} u[n − 1] = δ[n] + (1 − e^{α}) e^{−αn} u[n − 1].   (4.111)

At n = 0, since u[−1] = 0 and e^0 = 1, only the first term survives and h[0] = 1; for n ≥ 1 the two exponentials no longer cancel, so the impulse response is again a decaying exponential, scaled by the factor (1 − e^{α}).

Figure 4.18: Sample images from the MNIST dataset.

4.3. An application of convolution in machine learning
Convolution and cross-correlation operations play a crucial role in computer
vision and machine learning, particularly in tasks like visual recognition. Let
us delve into a practical application of convolution in hand-written digit recog-
nition.
Our objective is to develop a system utilizing a cross-correlation filter, with
the filter weights learned through minimizing an error measure on the MNIST
dataset1 . This dataset comprises 60,000 28-by-28 grayscale images of hand-
written digits from 0 to 9. Some examples from the dataset are illustrated in
Figure 4.18.
Instead of tackling the multiclass problem, for simplicity, we will focus on
a binary classification task, specifically recognizing the digit “3.” Thus, we will
treat the class “3” as the positive class and all other digits as the negative
class. The goal of our system is to predict the class of its input image.
Specifically, our system takes an image x, which is a 28-by-28 matrix, as
input and processes it through a cross-correlation operation with a 28-by-28
filter denoted with h. Then, the system outputs a scalar number ŷi which

1
https://yann.lecun.com/exdb/mnist/

Figure 4.19: The system takes an input image x (as a 28x28 matrix), processes it through a correlation filter h, and applies the sigmoid function (σ) to the correlation output (x ⋆ h)[0, 0]. Finally, the system produces ŷ, which should be close to 1 if the image belongs to the positive class, and to 0 otherwise.

should be 1 if the input image belongs to the positive class, 0 otherwise. The
system is illustrated in Figure 4.19.
In this chapter, we have defined the cross-correlation operations for one-
dimensional signals. In two dimensions, it is defined as
(x ⋆ h)[i, j] = ∑_m ∑_n x[i + m, j + n] h[m, n].   (4.112)

This operation slides the 28-by-28 filter h over the input signal x and computes
results for all possible locations, even if x and h overlap partially. However,
in our system, we are only interested in (x ⋆ h)[0, 0], which is the result of
correlation when the filter and the input fully overlap. Subsequently, (x⋆h)[0, 0]
is passed through the sigmoid function defined as
σ(x) = 1 / (1 + e^{−x}),   (4.113)
which squashes its input into the interval [0, 1].
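The forward pass of this system is only a few lines of NumPy. The sketch below uses random arrays as stand-ins for an MNIST image and for the (not yet trained) filter; the variable names are illustrative and not taken from the companion code:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
x = rng.random((28, 28))     # stand-in for an MNIST image
h = rng.random((28, 28))     # stand-in for the correlation filter

# (x * h)[0, 0]: with full overlap, Equation (4.112) reduces to an
# element-wise product followed by a sum.
corr_00 = np.sum(x * h)
y_hat = sigmoid(corr_00)     # prediction, squashed into (0, 1)
print(y_hat)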
“Training” this system means finding the filter h that solves this task in
such a way that it makes a minimal amount of prediction errors on the examples
of the training set. This error is measured by a “loss function.” There are many
different loss functions in machine learning, which are beyond the scope of this
book. Here, we will use the “mean squared error (MSE)” loss function, which
is easy to understand but not necessarily the best for the task. The MSE is
defined as:
MSE = (1/N) ∑_{i=1}^{N} (ŷ_i − y_i)²,   (4.114)

where ŷi is the output of our system for input image xi . This loss function
achieves its minimal value, which is zero, when all the predictions (ŷi ) are
equal to their corresponding labels (yi ). To minimize MSE on the training
set, we apply a method called the “gradient descent.” In essence, it involves
iteratively adjusting the filter weights in the negative direction of the gradient
of MSE with respect to the filter. Note that the gradient is a vector that points

Figure 4.20: The optimal filter h learned on the MNIST training set by mini-
mizing the MSE. The filter looks like the digit 3.

in the direction of the greatest rate of increase. In each iteration of training,


the system learns from the discrepancies between its predictions (ŷi ) and the
actual labels (yi ). The cross-correlation filter gradually adapts to recognize
distinctive features of the digit “3.” After training, the optimal h looks like the
digit “3” as shown in Figure 4.20.
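The training loop sketched below follows the recipe described above: compute the predictions, measure the MSE, and move the filter against its gradient. It is a hedged illustration, not the book's companion implementation; the synthetic random data, the learning rate, and the number of iterations are all placeholder choices.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
N = 200
images = rng.random((N, 28, 28))                 # stand-in for MNIST images
labels = (rng.random(N) < 0.5).astype(float)     # 1 for the positive class, 0 otherwise

h = np.zeros((28, 28))     # filter to be learned
lr = 0.1                   # learning rate (illustrative)

for iteration in range(100):
    grad = np.zeros_like(h)
    for x, y in zip(images, labels):
        s = np.sum(x * h)                 # (x * h)[0, 0]
        y_hat = sigmoid(s)
        # d/dh of (y_hat - y)^2 = 2 (y_hat - y) * sigmoid'(s) * x,
        # with sigmoid'(s) = y_hat * (1 - y_hat).
        grad += 2.0 * (y_hat - y) * y_hat * (1.0 - y_hat) * x
    h -= lr * grad / N                    # gradient descent step on the MSE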
It is important to note that this simplified explanation focuses on a binary
classification task. Extending this approach to multiclass classification involves
modifying the output layer, utilizing techniques like softmax activation, and
using different loss functions like cross-entropy. We provide the code for this
example and its multiclass extension in the companion website of the book,
accessible through the link below.

INTERACTIVE: A convolution (cross-correlation) example from machine learning @ https://384book.net/i0404

4.4. Chapter Summary


Can we find a unique equation, which relates the input-output pair of a contin-
uous time and discrete time system? Can we define a function, which uniquely
represents a continuous time and discrete time system? If so, can we study the
basic properties of the systems using these representations?
The answers to the above questions are all “yes”, provided that the system
is linear and time-invariant (LTI). Thus, linearity and time invariance are cru-
cial properties to model and analyze systems. In this chapter, we introduced
an interesting function, called impulse response, which uniquely represents an
LTI system. Impulse response is defined as the output of an LTI system when
the input is a unit impulse function. We define a new mathematical opera-
tion, called convolution, which relates any input-output pair of an LTI system

through the impulse response. As an alternative to convolution, we also define
correlation and auto-correlation operations, which are widely used in machine
learning applications.
Finally, we studied the stability, memory, and invertibility properties of LTI
systems using their mathematical representation with the impulse response.

Problems
1. Consider a discrete-time LTI system, represented by the following impulse
response:
h[n] = u[n + 2].
a) Find the output of the system for the following input:
x[n] = (0.5)^{n−0.5} u[n − 2].
b) Is this system invertible? If yes, find its inverse.
c) Is this system BIBO stable? Verify your answer.
2. Find the output, y[n], of the system for the following input and impulse
response plots:
[Plots: x[n] is a pulse of amplitude 1 for −3 ≤ n ≤ 7, and h[n] is a pulse of amplitude 1 for 1 ≤ n ≤ 15.]

3. Consider the discrete-time system, represented by the following system


equation:


y[n] = ∑_{k=−∞}^{∞} x[k] g[n − 2k],

where g[n] = u[n + 2] − u[n − 2].


a) Find and plot y[n] for x[n] = δ[n].
b) Find and plot y[n] for x[n] = δ[n − 2].
c) Find and plot y[n] for x[n] = u[n].
4. Find and plot the convolution of the following continuous-time signals:

x(t) = δ(t + 2) + δ(t + 1),

h(t) =
  t + 1,   −1 ≤ t ≤ 1
  2 − t,   1 < t ≤ 3
  0,       otherwise

5. Given the input x(t) = 2e^{−αt} (u(t) − u(t − 1)) and the impulse response
h(t) = u(t/α), where 0 < α ≤ 1,
a) Find and plot y(t) = x(t) ∗ h(t).
b) Find (dx(t)/dt) ∗ h(t).
6. Given the input and impulse response of an LTI system,

x(t) = u(4 − t) − u(8 − t),

h(t) = e−4t u(−t).


a) Find the output y(t) = x(t) ∗ h(t).
b) Find the function g(t) = (dx(t)/dt) ∗ h(t).
c) Find g(t) in terms of y(t).
7. Consider the output of a continuous-time LTI system represented by the
following convolution equation:

y(t) = e^{−t} u(t) ∗ ∑_{k=−∞}^{∞} δ(t − 3k).

a) Find the output function y(t) for 0 ≤ t < 3, in terms of exponential functions, by carrying out the convolution operation.
b) What is a possible input and impulse response of this system?
c) When you switch the input with the impulse response, does the output of this system change? Explain why.
8. Find and plot the outputs of the discrete-time LTI systems given below
for the input x[n] = δ[n] + 2δ[n − 2] − 3δ[n − 4] and the impulse response
h[n] = 2δ[n + 2] + δ[n − 2]:
a) y1 [n] = x[n] ∗ h[n]
b) y2 [n] = x[n + 2] ∗ h[n]
c) y3 [n] = x[n + 2] ∗ h[n − 2]
9. Consider the two signals given below:

x[n] = 3^n u[−2n − 1],


h[n] = u[1 − n]
a) Find and plot the output of the cross-correlation operation between
these functions.
b) Find and plot the output of the convolution operation between these
functions.

c) Compare the results of parts a and b.
10. Consider a discrete-time causal LTI system, represented by the following
difference equation:
y[n] = (1/5) x[n − 1] + x[n]
a) Find the impulse response , h[n], of the system.
b) Find the output , y[n], for the input x[n] = δ[n − 2].
c) Is this system BIBO stable?
d) Does this system have memory?
e) Is this system invertible? If yes, find its inverse.
11. Consider the following impulse responses, each of which represents a con-
tinuous time LTI system. Determine if these systems are BIBO stable.
Verify your answers.
a) h1(t) = e^{−(1−2j)t} u(t − 1)
b) h2(t) = e^{−t} cos(t) u(−t)
c) h3(t) = e^{−t} sin(t) u(−t)
12. Consider the following impulse responses, each of which represents a dis-
crete time LTI system. Determine if these systems are BIBO stable. Verify
your answers.
a) h1[n] = (4^n / π^n) cos(πn/4) u[n]
b) h2[n] = 7 u[7 − n]

13. Consider a discrete-time LTI system, represented by the following impulse


response:

h[n] = 2n α^n u[n],

where |α| < 1. Show that the step response of this system is

s[n] = 2 [ (1 − α^n)/(1 − α)² − n α^n/(1 − α) ] u[n].

(Hint: Note that

∑_{k=0}^{N} k α^k = α (d/dα) ∑_{k=0}^{N} α^k

and

∑_{k=0}^{N} α^k = (1 − α^{N+1})/(1 − α).)

14. Consider the discrete-time signal

x[n] = (1/3)^{n−1} {u[n + 3] − u[n − 3]}

Find an analytical expression for x[n − k] as a piece-wise linear function.


15. Prove that convolution operation is commutative, associative and dis-
tributive:
(a) x(t) ∗ h(t) = h(t) ∗ x(t)
(b) x(t) ∗ (h1(t) ∗ h2(t)) = (x(t) ∗ h1(t)) ∗ h2(t)
(c) x(t) ∗ (h1(t) + h2(t)) = x(t) ∗ h1(t) + x(t) ∗ h2(t)
16. Does cross-correlation have the commutativity, associativity, and distribu-
tivity properties? Prove your answer.

17. Write a computer program to compute the discrete convolution of two signals. (You
are not allowed to use any xx.convolve() function from any library.) Your
function takes 4 inputs: the first signal x[n], the starting index of the first
signal sxi , the second signal h[n] and the starting index of the second sig-
nal shi (Starting indexes and signals are in the same format as the ones
in HW1) and returns the output signal y[n] and the starting index of the
output signal syi .

(a) (5 pts) Generate a shifted discrete impulse function δ[n − 5] in the


given signal form and plot the output function that is the result
of your discrete convolution function when x[n]=“the signal in sig-
nal.csv” and h[n]=δ[n − 5]. What is the effect of convolution with
δ[n − 5]? Comment on that.

(b) (10 pts) The N-Point moving average filter is defined as follows:
h[n] =
  1/N,  if 0 ≤ n ≤ N − 1
  0,    otherwise

Generate an N-point moving average filter m[n] in the given signal


form and plot 4 output functions that are the result of your dis-
crete convolution function when x[n]=“the signal in signal.csv” and
h[n]=m[n] by setting N=3,5,10,20. What is the effect of convolution
with m[n]? What are the differences between different N values?
You should write your code in Python 3. You are not allowed to use any
library other than matplotlib.pyplot and numpy.

Chapter 5
Representation of LTI
Systems by Differential and
Difference Equations

“... Since Newton, mankind has come to realize that the laws of physics are always expressed in the language of differential equations.”
Steven H. Strogatz

In Chapter 4, we studied a representation of LTI systems by impulse re-


sponse, which relates the input and output signals by convolution summation
and convolution integral. We showed that the response of an LTI system to
any arbitrary input, x(·), is completely characterized by the impulse response,
h(·).
There are other mathematical tools to represent an LTI system. An im-
portant group of models involves differential equations for the continuous
time systems and difference equations for discrete-time systems. Differential and difference equations are among the most fundamental tools for modeling systems. A differential equation links the rate of change of the input to that of the output in continuous-time systems. Similarly, a difference equation links the present, future, and past values of the input signal to those of the output signal in discrete-time systems.

WATCH: Learn more about differential equations @ https://384book.net/v0501

There is a wide range of types of differential and difference equations. In


this book, we shall only focus on linear, constant-coefficient differential and

difference equations. We shall study how a differential or difference equation relates an input signal x(·) to an output signal y(·) in order to represent an LTI system.

5.1. Linear Constant-Coefficient Differential Equations
A linear, constant-coefficient differential equation is given in the fol-
lowing general form:
∑_{k=0}^{N} a_k (d^k y(t)/dt^k) = ∑_{k=0}^{M} b_k (d^k x(t)/dt^k),   (5.1)

where d^k/dt^k denotes the k-th derivative of the input x(t) and of the output y(t).
The constant parameters {ak , bk } show the degree of the contribution of each
derivative to the behavior of the system. In most of the practical systems,
N ≥ M , where N is the order of the differential equation and M is the order
of the derivative of x(t).
In order to find an explicit relationship between the input and output pairs,
we need to solve the differential equation. This is only possible if we can obtain
N auxiliary (initial) conditions about the system,

y(t_0),  ẏ(t_0) = dy(t_0)/dt,  ...,  y^{(N−1)}(t_0) = d^{N−1}y(t_0)/dt^{N−1},   (5.2)
at a specific time instance t0 .
An important set of initial conditions can be obtained when an LTI system
is initially at rest.

Definition 5.1: A continuous time LTI system is initially at rest at time


t0 , if the input and output pairs, x(t), y(t), are zero for all t < t0 .

Formally speaking,

x(t) = 0 for t < t_0 ⇒ y(t) = 0 for t < t_0.   (5.3)


The above conditions imply that a continuous time LTI system is initially
at rest if,

x(t) = 0 for t < t_0 ⇒ y(t) = ẏ(t) = ... = y^{(N−1)}(t) = 0 for t < t_0.   (5.4)

In most practical applications, the initial time starts from t0 = 0. For this

reason, we assume that the system is initially at rest when there is no input
and output for t < 0. Thus, initial rest condition gives us the following initial
conditions,

y(0) = ẏ(0) = ... = y^{(N−1)}(0) = 0.   (5.5)


Motivating Question: Why do we need initial conditions?
A differential equation does not give an explicit equation, which relates
the input to the output, but, instead, it gives a relationship between the rate
of changes of the input and output. There are infinitely many solutions to a
differential equation, depending on the starting points of the system. These
initial values shape up the explicit equation between the input and output,
which is obtained by solving the differential equation. In order to obtain a
unique solution to a differential equation, we need a set of initial conditions.

5.2. Representation of a Continuous-Time LTI System by Differential Equations
Recall that the derivative of a function measures the rate of change of that
function with respect to the variable of differentiation, which is typically the
time, t in the context of this book. Similarly, the second derivative measures
the rate of the change of change. As we increase the degree of the derivatives,
we measure the change of the change of the change, etc. When it is possible to
relate the rate of the changes of the input and output signals, we can represent
this system by a differential equation.
Proposition: A linear constant coefficient differential equation, which is
initially at rest
∑_{k=0}^{N} a_k d^k y(t)/dt^k = ∑_{k=0}^{M} b_k d^k x(t)/dt^k   (5.6)

represents a continuous time, causal LTI system, when the input-output pair
of this system is x(t) and y(t).
Verification of the proposition: In order to show that the system rep-
resented by a differential equation is linear, we need to check the superposi-
tion property. Mathematically speaking, given two input-output pairs, x1 (t) →
y1 (t) and x2 (t) → y2 (t), superposition of the inputs must yield the same su-
perposition as the output, as follows:

xS (t) = A1 x1 (t) + A2 x2 (t) → yS (t) = A1 y1 (t) + A2 y2 (t), for t ≥ 0, (5.7)

where A1 and A2 are arbitrary scalar numbers.


When the system is initially at rest, the output and all of its derivatives
are zero, until we feed a nonzero input to the system. Thus, the superposition
property can be shown by simply replacing the input and output pairs, x1 (t) →
y1 (t) and x2 (t) → y2 (t), in the equation, for t ≥ 0, as follows:
∑_{k=0}^{N} a_k d^k y_1(t)/dt^k = ∑_{k=0}^{M} b_k d^k x_1(t)/dt^k,   (5.8)

∑_{k=0}^{N} a_k d^k y_2(t)/dt^k = ∑_{k=0}^{M} b_k d^k x_2(t)/dt^k.   (5.9)

When we multiply the first equation by A1 and the second equation by A2 , we


obtain
A_1 ∑_{k=0}^{N} a_k d^k y_1(t)/dt^k = A_1 ∑_{k=0}^{M} b_k d^k x_1(t)/dt^k,   (5.10)

A_2 ∑_{k=0}^{N} a_k d^k y_2(t)/dt^k = A_2 ∑_{k=0}^{M} b_k d^k x_2(t)/dt^k,   (5.11)

and we add them side by side to obtain,

∑_{k=0}^{N} a_k d^k[A_1 y_1(t) + A_2 y_2(t)]/dt^k = ∑_{k=0}^{M} b_k d^k[A_1 x_1(t) + A_2 x_2(t)]/dt^k.   (5.12)

Thus, for any superposed input-output pairs, xS (t) → yS (t), the equation,
∑_{k=0}^{N} a_k d^k y_S(t)/dt^k = ∑_{k=0}^{M} b_k d^k x_S(t)/dt^k   (5.13)

is satisfied.
In the above derivations, we could freely move the constant parameters, A1
and A2 , inside of the sum and derivation operators; because, both summation
and derivation are linear operators. However, if the system is not initially
at rest, there are some non-zero outputs even if the input values are zero for
some T , t < T . In this case, the superposition property of the differential
equation is not satisfied for the initial conditions.

Since the superposition property holds, an initially-at-rest system represented by a constant-coefficient linear differential equation is linear with respect to the input-output pairs, x(t) and y(t).
Linear constant-coefficient differential equations are also time-invariant when the system is initially at rest, because for an arbitrary input-output pair x(t) → y(t), a time shift of the input by t_0 generates the same shift at the output, for t ≥ 0, i.e.,
∑_{k=0}^{N} a_k d^k y(t − t_0)/dt^k = ∑_{k=0}^{M} b_k d^k x(t − t_0)/dt^k.   (5.14)

Defining a shifted time variable τ = t − t_0 and noting that dτ = dt, we can show time invariance. Thus, a system represented by a linear constant coefficient differential equation, which is initially at rest, is time-invariant: x(t − t_0) → y(t − t_0).
Recall that causality simply means that the response of a system at a given
time, t, does not depend on the future of the input signal. Since the system is
initially at rest, the output takes nonzero values only for t > t_0. Considering the fact that the differentiation operator is causal, the initial rest condition ensures the causality of a linear constant coefficient differential equation.
Motivating Question: How do we estimate the parameters ak and bk ,
and the orders N and M to model a continuous time system by a differential
equation or to model a discrete-time system by a difference equation?
This requires a great amount of domain knowledge and experimentation.
Sometimes it takes the effort of hundreds of scientists over centuries, as in the case of quantum mechanical systems represented by Schrödinger's equation. Sometimes it is not possible at all. Recently, there has been a trend to estimate
the parameters of the differential equations by using Artificial Neural Net-
works. These machine learning tools are trained by the experimental data of
the underlying physical phenomenon, obtained in the laboratory environments.
It is possible to model and analyze biological, chemical, or physical systems
by finding the representative differential or difference equation based on LTI
differential or difference equations. However, these techniques are beyond the
scope of this book.

Figure 5.1: Representation of a continuous-time LTI system by a differential equation.

5.3. Solving the Linear Constant Coefficient Differential Equations which Represent LTI Systems
When we model a continuous-time LTI system by a differential equation, in-
stead of the impulse response, we put the differential equation in the black box
(Figure 5.1). This representation formalizes the dynamic interactions between
the input and output pairs, with respect to the time variable.
In order to study the behavior of the system we need to find an explicit
relationship between the input and output pairs, so that we can compute the
output for a given input signal. When the system is nonlinear, finding the solu-
tion of the differential equation may not always be easy, or may not be possible
at all. However, when the system is linear, time-invariant, and causal, there
are systematic methods for solving the corresponding differential equation. A
detailed study of solutions, which covers all forms of input-output pairs is be-
yond the scope of this book. However, we shall focus on an important class of
differential equations, which represent the LTI systems.
There are two solutions, which satisfy a linear constant coefficient differen-
tial equation:
1. Particular solution, which is the output, yp (t) of the system for a given
input, x(t).
2. Homogeneous solution, which is the solution of the differential equa-
tion of a system, when the input is x(t) = 0.
Recall that for an LTI system, if a set of input-output pairs satisfy the
representative differential equation, then the superposition of all input-output
pairs also satisfy this equation. Therefore, the general solution of a differential
equation, which represents an LTI system is obtained by superposing all the
solutions. Mathematically,

y(t) = yh (t) + yp (t). (5.15)


Let us study the methods for finding the solutions mentioned above.

5.3.1. Finding the Particular Solution
In System Theory, a particular solution of a linear constant coefficient differ-
ential equation is a unique response of the underlying system to a particular
input, which satisfies the differential equation. Since the system is LTI, the
analytical form of the particular solution, yp (t) must be similar to that of the
input signal x(t). Thus, a practical method for finding the particular solution
is to assume that the particular solution has the general analytical form of the
input. Then, we find the parameters of the particular solution, which satisfies
the differential equation.
Motivating Question: How do we find the unique parameters of the
particular solution, given its analytical form?
All we have to do is take the derivatives of the assumed particular solution
yp (t) for a given x(t) and insert it into the differential equation. Then, solve
this equation for the parameters of the particular solution, to obtain a unique
set of parameters.

Exercise 5.1: Find the particular solution of the following differential equa-
tion, when the input is x(t) = t + 1.

dy(t)/dt + 3y(t) = x(t).   (5.16)

Solution:
The input is a line equation with slope and intercept equal to 1. Since the
differential equation is linear with a constant coefficient, the particular solution
should be another line equation with a different slope and intercept. Thus, it
should be in the following analytical form:

yp (t) = At + B.

The derivative of the particular solution is ẏp (t) = A. Let us insert the above
particular solution and its derivative into the differential equation:

3At + (A + 3B) = t + 1.

Equating the coefficient of t and the constant term on both sides of the equa-
tion, we obtain
3A = 1 and A + 3B = 1.
Thus, A = 1/3 and B = 2/9, yielding
y_p(t) = (1/3) t + 2/9.
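The particular solution found above can be double-checked with SymPy (a sketch; dsolve returns the general solution, whose non-exponential part is exactly the particular solution):

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
# dy/dt + 3*y = t + 1, as in Equation (5.16) with x(t) = t + 1
sol = sp.dsolve(sp.Eq(y(t).diff(t) + 3 * y(t), t + 1), y(t))
print(sol)   # y(t) = C1*exp(-3*t) + t/3 + 2/9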

In general, if the input is an nth order polynomial, the particular solution
is another nth order polynomial with n + 1 arbitrary parameters,
y_p(t) = ∑_{k=0}^{n} A_k t^k.   (5.17)

The parameters are computed by inserting all the derivatives of yp (t) into the
differential equation and finding the unique set of {A_k}_{k=0}^{n}.
Similarly, if the input is an exponential function, the particular solution
is another exponential function with a different magnitude and parameter of
the exponent. If the input consists of trigonometric functions, the particular
solution consists of similar trigonometric functions with different amplitudes
and frequencies, etc.
Note that the parameters of the particular solution, depend on the parame-
ters of the differential equation, ak and bk and the analytical form of the input
signal, x(t).

5.3.2. Finding the Homogeneous Solution


When the right-hand side of the differential equation is zero, the corresponding
output yh (t) is represented by the following differential equation,
∑_{k=0}^{N} a_k d^k y_h(t)/dt^k = 0.   (5.18)

This equation is called the homogeneous equation.


The solution to the homogeneous equation is called homogeneous solu-
tion. The homogeneous solution satisfies the differential equation for x(t) = 0.
Due to the linearity property of the systems described by linear differential
equations, it is trivial to show that

y(t) = yh (t) + yp (t) (5.19)


also satisfies the differential equation.
Motivating Question: How do we find the solution of the homogeneous
differential equation?
At this point, we shall use a crucial property of the exponential function.
Recall that the k th derivative of an exponential function is the function itself,
scaled by the k th power of the exponent. Formally,

d^k(e^{βt})/dt^k = β^k e^{βt}.   (5.20)

Suppose that a solution to the homogeneous differential equation is in the
following form:

yh (t) = Ceβt . (5.21)


where C is a non-zero constant. Then, using this form in the homogeneous
equation (5.18), we obtain
∑_{k=0}^{N} a_k β^k C e^{βt} = C e^{βt} ∑_{k=0}^{N} a_k β^k = 0.   (5.22)

Since Ceβt cannot be zero, we get


∑_{k=0}^{N} a_k β^k = 0,   (5.23)

which is an algebraic equation with N roots, i.e. β1 , β2 , . . . , βN . This fact reveals


that the homogeneous solution is actually in the following form:
y_h(t) = ∑_{k=1}^{N} C_k e^{β_k t},   (5.24)

with separate constants Ck for each root βk . Since the system is linear, the
superposition of all of the valid solutions (Equation (5.24)) satisfies the homo-
geneous differential equation. Note that the exponential form of the homoge-
neous solution (Equation (5.21)) converts an Nth order homogeneous differential equation (Equation (5.18)) into an Nth order algebraic equation (Equation (5.23)).
Finally, the linearity property enables us to add the particular and homo-
geneous solutions to obtain a general solution to the system:

y(t) = yh (t) + yp (t). (5.25)


The method for solving a linear constant coefficient differential equation
shows the beauty of the linearity property of the differential equation and the
derivative property of the exponential function. Combining these two proper-
ties generates a very efficient and powerful method to solve linear constant
coefficient differential equations:
i) Derivative property of the exponential function converts the problem of
solving a homogeneous differential equation (when there is no input) into
an algebraic equation.
ii) Linearity property enables us to superpose all of the valid solutions. In
order to find the overall homogeneous solution, we superpose all of the
valid homogeneous solutions. Then, we superpose the homogeneous and

particular solutions to obtain the general solution.
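Step (i) above is easy to carry out numerically; the sketch below (with example coefficients chosen only for illustration) finds the roots β_k of the characteristic equation ∑_k a_k β^k = 0 with NumPy:

import numpy as np

a = [2.0, 3.0, 1.0]          # a_0, a_1, a_2 in ascending powers: beta^2 + 3*beta + 2 = 0
betas = np.roots(a[::-1])    # np.roots expects coefficients from the highest power down
print(betas)                 # [-2. -1.] -> y_h(t) = C1*exp(-t) + C2*exp(-2*t)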

5.3.3. Finding the General Solution


Equation (5.24) reveals that we can obtain infinitely many general solutions
to a differential equation by changing the constant superposition weights, Ck .
However, a model for an LTI system requires a unique explicit relationship
between the output and input. Thus, we need additional information about
the system. This additional information comes from some auxiliary conditions
about the system, called initial conditions, at a specific time instance t0 , as
follows:
y(t_0),  ẏ(t_0) = dy(t_0)/dt,  ...,  y^{(N−1)}(t_0) = d^{N−1}y(t_0)/dt^{N−1}.   (5.26)
Recall that for this system to be a causal LTI system, it is to be initially at
rest, i.e., for t = 0 the initial conditions are to be zero:

y(0) = ẏ(0) = ... = y^{(N−1)}(0) = 0.   (5.27)


These N initial conditions are used to find a unique set of constant coefficients
of Ck for k = 1, ..., N in the general solution,
y(t) = ∑_{k=1}^{N} C_k e^{β_k t} + y_p(t).   (5.28)

Exercise 5.2: Given the following first-order differential equation,

dy(t)/dt + 2y(t) = x(t),   (5.29)
a) Find the particular solution, when the input is x(t) = e−t u(t).
b) Find the homogeneous solution.
c) Find the general solution for y(t) = 0, for t ≤ 0.
d) Is this system initially at rest?
e) Is this system causal?

Solution:
a) Since this equation is linear, the particular solution would be another
exponential function in the following form:

yp (t) = Ke−t u(t). (5.30)


Its derivative is ẏp (t) = −Ke−t u(t) + Ke−t δ(t). However, for practical

purposes, we will use ẏp (t) = −Ke−t u(t) as explained in the following
remark.

Remark 5.1: As we discussed in Chapter 2, the unit impulse δ(t) is


operationally defined as the derivative of the unit step function u(t). The derivative at the discontinuity of the unit step function at t = 0 is not realizable: δ(t) has infinite magnitude over an infinitesimally narrow interval. As such, the impulse function is not actually a function; it is only tractable inside an integral.
For practical purposes, we ignore the Ke−t δ(t) term in ẏp (t). This cor-
responds to considering the behavior of ẏp (t) starting from t = 0+ and
ignoring its behavior right at t = 0.

Replacing yp (t) and ẏp (t) in the differential equation and solving for K,
we obtain −K + 2K = 1. Thus, K = 1 and yp (t) = x(t) = e−t u(t).
b) A homogeneous solution for x(t) = 0 has the form: yh (t) = Ceβt . Its
derivative is ẏh (t) = Cβeβt . Inserting yh (t) and ẏh (t) into the homoge-
neous equation, we find the characteristic equation: β + 2 = 0. Thus,
β = −2, which gives us the overall homogeneous solution as

yh (t) = Ce−2t . (5.31)


c) General solution is the superposition of the particular and homogeneous
solutions:

y(t) = yh (t) + yp (t) = Ce−2t + e−t u(t). (5.32)


The constant coefficient, C can be obtained from the initial condition,
y(0) = 0, as y(0) = C + 1 = 0. Thus, C = −1. Since y(t) = 0 for t ≤ 0,
the general solution is

y(t) = [e^{−t} − e^{−2t}] u(t).   (5.33)
d) This system is initially at rest because

x(t) = 0 for t < 0 ⇒ y(t) = 0 for t < 0. (5.34)


e) Recall that for an LTI system to be causal, we need h(t) = 0 for t < 0.
This condition is implied by the initial rest conditions given in (c) above.
Thus, the system is causal.
The system is not memoryless: the output at time t depends on past values of the input, not only on x(t), so its impulse response is not of the form Kδ(t).

Exercise 5.3: Consider the following first-order differential equation,

dy(t)/dt + 2y(t) = x(t),   (5.35)
with the initial condition y(−1) = 1, for a particular input x(t) = e^{−t} u(t).
a) Find the general solution for this differential equation.
b) Is this system initially at rest?
c) Is this system causal?

Solution:
a) The general solution is found from the previous example, as

y(t) = yh (t) + yp (t) = Ce−2t + e−t u(t), (5.36)

where the constant term C is to be obtained from a different initial condition, which is y(−1) = 1. Replacing this initial condition in Equation (5.36), we get

y(−1) = Ce2 + eu(−1) = 1. (5.37)


The second term on the right-hand side of this equation is zero. Then,
the constant term becomes,

C = e−2 . (5.38)
Replacing the value of C in the general solution of Equation (5.36), we
obtain,

y(t) = e−2(t+1) + e−t u(t), (5.39)


Comparison of equations (5.33) and (5.39) shows that the initial condi-
tions change the analytic form of the solution of the differential equations.
b) The input to the system, x(t) = e−t u(t), is zero for t ≤ 0. The initial
rest condition implies y(t) = 0 for t ≤ 0. However, this is not true since
y(−1) = 1. Therefore, the system is not initially at rest.
c) Since this is an LTI system, for y(−1) = 1, we need the convolution
integral to evaluate to 1:
y(−1) = ∫_{−∞}^{∞} x(τ) h(−1 − τ) dτ = 1.   (5.40)

We know that x(τ) = 0 for τ ≤ 0. So, we must have h(−1 − τ) ≠ 0 for some τ > 0, that is, h(t) ≠ 0 for some t < 0, which violates the causality property for LTI systems. Recall that an LTI system is causal if its impulse response is zero for negative time values, that is, h(t) = 0 for t < 0.

Exercise 5.4: Given the following second-order differential equation,

ÿ(t) + 3ẏ(t) + 2y(t) = ẋ(t). (5.41)


a) Find the homogeneous solution for zero input, x(t) = 0.
b) Find the particular solution for the input,

x(t) = eλt u(t) (5.42)


where λ is a parameter to determine the decay rate or the growth rate of
the exponential.
c) Find the general solution for λ = 1 and assuming that the system is
initially at rest with the initial conditions given below,

y(0) = ẏ(0) = 0. (5.43)

Solution:
a) The homogeneous solution can be obtained by assuming the following
analytical form:
yh (t) = eβt , (5.44)
where β is the parameter to be computed by taking the derivatives of
yh (t),

y_h(t) = e^{βt},
ẏ_h(t) = β e^{βt},   (5.45)
ÿ_h(t) = β² e^{βt},

and inserting into the differential equations as follows:

(β 2 + 3β + 2)eβt = 0. (5.46)
The above homogeneous equation gives a second-order characteristic equa-
tion in terms of β, with two roots:

β 2 + 3β + 2 = 0 ⇒ β1 = −1, β2 = −2. (5.47)


Notice that for a second-order differential equation, we obtain two values
of β. Thus, the homogeneous solution, for zero input response is the su-
perposition of two exponential functions, e−t and e−2t , in the following
form:

yh (t) = C1 e−t + C2 e−2t . (5.48)
b) Since the system is linear, we can assume that the particular solution has
the following form,

yp (t) = Kx(t) = Keλt u(t). (5.49)


Let us take the derivative of the particular solution and insert it into
the differential equation to find the constant, K. Then, the differential
equation becomes,

(Kλ2 + 3Kλ + 2K)eλt u(t) = λeλt u(t). (5.50)


Exponential function, eλt u(t), cancels in both sides of the equation and
we obtain an algebraic equation in terms of K, as follows:

Kλ2 + 3Kλ + 2K = λ. (5.51)


The constant value K can be obtained as,

K = λ / (λ² + 3λ + 2).   (5.52)
Interestingly, the constant parameter K depends on the parameter λ,
which is the decay or growth rate of the exponential function. Note that
λ = −1 or −2 are also the roots of the characteristic equation and for
these values of λ, K → ∞. This is a degenerate solution. When the roots
of the characteristic equation are the same as the decay values λ, the
analytical form of the particular solution must take a different form to
avoid degeneracy. The following exercise solves this problem.
c) In order to find the constant coefficients C1 and C2 , we form the general
solution, as follows,

y(t) = [C1 e−t + C2 e−2t + Keλt ]u(t). (5.53)


Then, we apply the initial conditions to find the values of the constant
parameters, C1 and C2 . Assuming that the system is initially at rest, we
obtain the following initial conditions,

y(0) = ẏ(0) = 0. (5.54)


For λ = 1,
K = 1/6,   (5.55)

and the general solution becomes
y(t) = [C_1 e^{−t} + C_2 e^{−2t} + (1/6) e^{t}] u(t).   (5.56)
Finally, using the initial conditions, we obtain the values for C1 and C2 ,
and the general solution, as follows;
C_1 + C_2 + 1/6 = 0,   −C_1 − 2C_2 + 1/6 = 0,

C_1 = −1/2,   C_2 = 1/3,   (5.57)

y(t) = [−(1/2) e^{−t} + (1/3) e^{−2t} + (1/6) e^{t}] u(t).
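For t ≥ 0 this result can be reproduced with SymPy (a sketch; for t > 0 the right-hand side ẋ(t) equals e^t, and the initial-rest conditions fix C_1 and C_2):

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + 3 * y(t).diff(t) + 2 * y(t), sp.exp(t))
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
print(sp.expand(sol.rhs))   # -exp(-t)/2 + exp(-2*t)/3 + exp(t)/6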
Exercise 5.5: Given the following second-order differential equation,

ÿ(t) + 3ẏ(t) + 2y(t) = ẋ(t). (5.58)


a) Find the homogeneous solution for zero input, x(t) = 0.
b) Find the particular solution for the input,

x(t) = e^{λt} u(t)   (5.59)


where λ = −2.
c) Find the general solution for λ = −2 and assuming that the system is
initially at rest with the initial conditions given below,

y(0) = ẏ(0) = 0. (5.60)

Solution:
a) The homogeneous solution is the same as the previous example,

yh (t) = C1 e−t + C2 e−2t . (5.61)


b) One of the roots of the characteristic equation is the same as the decay
rate λ = −2. In order to avoid the degenerate solution, we assume that
the particular solution has the following analytical form;

yp (t) = Kte−2t u(t). (5.62)


Let us take the derivatives of the particular solution with respect to t:

ẏp (t) = K(1 − 2t)e−2t u(t), (5.63)


ÿp (t) = 4K(t − 1)e−2t u(t), (5.64)

and insert them into the differential equation to find the constant param-
eter, K = 2. Then, the particular solution is,

yp (t) = 2te−2t u(t). (5.65)

c) The general solution is the sum of homogeneous and particular solutions;

y(t) = [C1 e−t + C2 e−2t + 2te−2t ]u(t). (5.66)


The constant parameters, C1 and C2 are obtained by substituting initial
conditions,

y(0) = ẏ(0) = 0. (5.67)


into the general solution;

y(0) = C1 + C2 = 0, (5.68)

ẏ(0) = −C1 − 2C2 + 2 = 0. (5.69)


Solving the above equations, we find that C2 = −C1 = 2.

y(t) = 2[t e^{−2t} − e^{−t} + e^{−2t}] u(t).   (5.70)

Remark 5.2: The above simple examples show that finding the particu-
lar solutions to the linear constant coefficient differential equations requires
heuristics to make an initial guess about the analytical form.

5.3.4. Transfer Function of a Continuous Time LTI System
Exponential inputs are of special importance for the LTI systems represented
by
∑_{k=0}^{N} a_k d^k y(t)/dt^k = ∑_{k=0}^{M} b_k d^k x(t)/dt^k.   (5.71)

Linearity property enables us to model the particular solution to an exponential


input, as follows:

yp (t) = Keλt . (5.72)


Differentiation property of the exponential function,

Figure 5.2: For an LTI system represented by a linear constant-coefficient differential equation: when the input is an exponential signal e^{λt}, the output is also an exponential signal scaled by the transfer function, H(λ). When the input is the zero signal, the output is of the form ∑_j C_j e^{β_j t}. Since both the particular input and zero input satisfy the differential equation, the general solution, y(t) = y_p(t) + y_h(t), is the superposition of the homogeneous and particular solutions.

d^k y_p(t)/dt^k = K d^k e^{λt}/dt^k = K λ^k e^{λt}   (5.73)
enables us to obtain the value of K in terms of the parameters of the equa-
tion ak , bk and λ. Inserting the derivatives of the particular solution into the
differential equation, we obtain
K = H(λ) = (∑_{k=0}^{M} b_k λ^k) / (∑_{k=0}^{N} a_k λ^k).   (5.74)
The coefficient of the particular solution, K = H(λ), is called the transfer
function.
Motivating Question: What is the meaning of the transfer function?
When the input is an exponential function, x(t) = eλt , the corresponding
output,

yp (t) = H(λ)eλt , (5.75)


is just the scaled version of the exponential input, x(t). The scaling factor is the
transfer function, which is parameterized by the coefficient of the exponent, λ.
Thus, the transfer function directly determines how much of the exponential
input is transferred to the output (Figure 5.2).
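Equation (5.74) is straightforward to evaluate numerically. The helper below is an illustrative sketch (not from the book's companion code); it takes the coefficient lists {a_k} and {b_k} in ascending powers of λ:

import numpy as np

def transfer_function(a, b, lam):
    """Evaluate H(lam) = (sum_k b_k lam^k) / (sum_k a_k lam^k)."""
    num = np.polyval(list(b)[::-1], lam)   # np.polyval wants the highest power first
    den = np.polyval(list(a)[::-1], lam)
    return num / den

# Example: the system y'' + 3y' + 2y = x' has a = [2, 3, 1] and b = [0, 1].
print(transfer_function([2.0, 3.0, 1.0], [0.0, 1.0], 1j * 2.0))   # H(2j)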
When the input is an exponential function, x(t) = eλt , the general solution
has the following form,

y(t) = ∑_{k=1}^{N} C_k e^{β_k t} + H(λ) e^{λt} = ∑_{k=1}^{N} C_k e^{β_k t} + H(λ) x(t),   (5.76)

where the first term is the homogeneous solution y_h and the second term is the particular solution y_p.
Remark 5.3: The constant coefficients, Ck , of the homogeneous solution
not only depend on the initial condition of the differential equation, but also
depend on the particular solution of an input.

Exercise 5.6: Consider the differential equation given below,

ÿ(t) + 3ẏ(t) + 2y(t) = ẋ(t). (5.77)


a) Find the particular solution for x(t) = cos ω0 t.
b) Find the homogeneous solution.
c) Find the general solution, in terms of the constant coefficients of the
homogeneous solution and the angular frequency ω0 .

Solution:
a) Recall, the Euler formula to represent cosine function in terms of complex
exponential,

cos ω_0 t = (e^{jω_0 t} + e^{−jω_0 t}) / 2.   (5.78)
We can directly use the result of the previous exercise by setting λ = jω0
to obtain the transfer function as H(jω0 ).
When the input is x(t) = ejω0 t , the corresponding output is yp (t) =
H(jω0 )ejω0 t . When the input is x(t) = e−jω0 t , the corresponding output
is yp (t) = H(−jω0 )e−jω0 t .
If we superpose the two inputs as,

cos ω_0 t = (e^{jω_0 t} + e^{−jω_0 t}) / 2,   (5.79)
we can obtain the output as the superposition of the two particular solu-
tions as follows,
y_p(t) = (1/2) [H(jω_0) e^{jω_0 t} + H(−jω_0) e^{−jω_0 t}],   (5.80)

where

H(jω_0) = jω_0 / ((jω_0)² + 3jω_0 + 2).   (5.81)
The above solution shows the power of the linearity property. Each term
in the right-hand side of Equation (5.80) shows a subsystem of the overall
system. The first subsystem receives,

x_1(t) = e^{jω_0 t} / 2   (5.82)
Figure 5.3: Using the superposition property to find the particular solution in Exercise 5.6.

and outputs yp1 (t), whereas the second one receives

x_2(t) = e^{−jω_0 t} / 2   (5.83)
and outputs yp2 (t). The overall particular solution is the superposition of
the two responses,

yp (t) = yp1 (t) + yp2 (t). (5.84)


Note: As in the previous example, we use the superposition property to
find the particular solution (Figure 5.3).
b) The homogeneous solution is the same as the previous example.

yh (t) = C1 e−t + C2 e−2t . (5.85)


c) The general solution is,

y(t) = C_1 e^{−t} + C_2 e^{−2t} + y_p(t) = C_1 e^{−t} + C_2 e^{−2t} + (1/2)[H(jω_0) e^{jω_0 t} + H(−jω_0) e^{−jω_0 t}].   (5.86)
Since the initial conditions are not given, there are infinitely many solu-
tions, each of which depends on the constant coefficients, C1 and C2 . We
leave them as unknown parameters.

5.4. Linear Constant Coefficient Difference Equations
A linear constant coefficient difference equation is given in the following
general form:

∑_{k=0}^{N} a_k y[n − k] = ∑_{k=0}^{M} b_k x[n − k],   (5.87)

where y[n − k] and x[n − k] denote the output and input delayed by k samples, N is the order of the difference equation, M is the order of the delays of x[n], and {a_k, b_k} are the constant parameters of the difference equation.
As in the continuous-time systems, we can define the initial condition to
satisfy the initial rest property of the system, as defined below.

Definition 5.2: A discrete-time LTI system is initially at rest at time n_0,


if the input and output pairs, x[n] and y[n], are zero for all n < n0 . Formally
speaking, a system is initially at rest, if

x[n] = 0 for n < n0 ⇒ y[n] = 0 for n < n0 . (5.88)

The initial rest condition is crucial for discrete-time LTI systems repre-
sented by a difference equation. It simplifies finding the solution, which pro-
vides an explicit analytical expression for the output of the system for a given
input.

5.4.1. Representation of Discrete-Time LTI Systems by Difference Equations
If we can relate the linear combinations of the past, present, and/or future
values of the input and output signals, then, we can represent a discrete-time
system by a difference equation. There is a wide range of applications of differ-
ence equations to model discrete-time systems. For example, they are used in
biomedical signal processing, for modeling the heart or brain signals. They are
also used to model time series, where the present value of a discrete-time func-
tion is related to its past values. Examples of time series data include speech
signals, and financial, economic, or demographic data, collected on a yearly,
monthly, or daily basis.
Proposition: A difference equation, with initial rest conditions,
∑_{k=0}^{N} a_k y[n − k] = ∑_{k=0}^{M} b_k x[n − k],   (5.89)

represents a discrete time, causal LTI system, which relates the superposition
of the input x[n − k] to that of the output y[n − k].
Since the verification of this proposition is a trivial extension of the con-
tinuous time case, it is omitted here. When we represent a discrete time causal

Figure 5.4: Representation of a discrete-time LTI system with a difference equation.

LTI system with a difference equation, we can replace the impulse response
with the difference equation, in the black box, as shown in Figure 5.4.

5.4.2. Solution to Linear Constant Coefficient Difference Equations
The solution to a difference equation can be obtained by using a recursive
method. Suppose we normalize the above difference equation by dividing all the constant parameters by a_0, so that the coefficient of y[n] is 1. Leaving y[n] alone on the left-hand side of the above equation, we get

y[n] = − ∑_{k=1}^{N} a_k y[n − k] + ∑_{k=0}^{M} b_k x[n − k].   (5.90)

This is a recursive equation. Given the input x[n] and N initial conditions,

y[n0 ], y[n0 − 1], ... y[n0 − N ], (5.91)


we can start from the initial conditions and iterate it for all possible values of
y[n]. Let us solve an example.

Exercise 5.7: Consider the following difference equation, which represents


a discrete-time LTI system,
y[n] − (1/2) y[n − 1] = x[n],   (5.92)
with the input x[n] = n² u[n] and initial condition y[−1] = 16.
a) Find and plot the output, y[n], using the recursive method.
b) Is this system initially at rest?

Solution:
a) Let us leave y[n] alone in the left-hand side of the equation;

Figure 5.5: Solution of the difference equation y[n] = (1/2) y[n − 1] + x[n] with initial condition y[−1] = 16. Notice that the plot continues as we increase n.

y[n] = (1/2) y[n − 1] + x[n].   (5.93)
Then, using the initial condition and input let us evaluate the values of
y[n] for all n, as follows:

y[0] = (1/2) y[−1] + x[0] = 8
y[1] = (1/2) y[0] + 1 = 5
y[2] = (1/2) y[1] + 4 = 6.5   (5.94)
y[3] = (1/2) y[2] + 9 = 12.25
...
y[n] is plotted in Figure 5.5.
b) This system is not initially at rest: the input is zero for n ≤ 0, yet the output is nonzero there (y[−1] = 16 and y[0] = 8).

Exercise 5.8: Consider the following equation, which represents a discrete-


time LTI system,
y[n] − (1/2) y[n − 1] = x[n].   (5.95)
Assuming that the system is initially at rest, with y[−1] = 0, find the output,
when the input is x[n] = u[n].

Solution: Leave y[n] in the left hand side of the equation,

y[n] = x[n] + 0.5y[n − 1], (5.96)

and solve it recursively:

y[0] = x[0] + 0.5y[−1] = 1,


y[1] = x[1] + 0.5y[0] = 1 + 0.5 = 1.5,
y[2] = x[2] + 0.5y[1] = 1 + 0.5(1 + 0.5), (5.97)
...
In a more compact form, the output is:
y[n] = (∑_{k=0}^{n} (0.5)^k) u[n].   (5.98)

Recall that
∑_{k=0}^{n} α^k = (1 − α^{n+1}) / (1 − α).   (5.99)

Set α = 0.5 to find a closed form solution for the output

y[n] = [2 − (0.5)^n] u[n].   (5.100)
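The recursion and the closed form can be compared directly in Python (a short numerical check over a finite window, assuming initial rest):

import numpy as np

N = 20
x = np.ones(N)        # x[n] = u[n]
y = np.zeros(N)
prev = 0.0            # y[-1] = 0 (initially at rest)
for n in range(N):
    y[n] = x[n] + 0.5 * prev   # Equation (5.96)
    prev = y[n]

closed = 2.0 - 0.5 ** np.arange(N)   # Equation (5.100)
print(np.allclose(y, closed))        # True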

5.4.3. Transfer Function of a Discrete Time LTI System
As in the continuous-time case, exponential inputs are also very important for
discrete-time LTI systems. An LTI system can be represented by the following
difference equation:
∑_{k=0}^{N} a_k y[n − k] = ∑_{k=0}^{M} b_k x[n − k].   (5.101)

It can also be represented by its impulse response h[n]. Suppose that we feed
the following exponential input to the system:

x[n] = eλn .
The corresponding output can be obtained by the convolution sum:

y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{∞} e^{λ(n−k)} h[k] = e^{λn} ( ∑_{k=−∞}^{∞} h[k] e^{−λk} ).   (5.102)

The above equation reveals that the exponential input is directly passed to the output with a scaling factor,

H(e^λ) = ∑_{k=−∞}^{∞} h[k] e^{−λk},   (5.103)

which is called the transfer function.


The output to the exponential input x[n] = eλn is then written as

y[n] = eλn H(eλ ).


Inserting the value of the above output and the exponential input x[n] = eλn ,
in the difference equation, we obtain
∑_{k=0}^{N} a_k H(e^λ) e^{λ(n−k)} = ∑_{k=0}^{M} b_k e^{λ(n−k)}.   (5.104)

Arranging the above equation, we obtain the transfer function for discrete-
time LTI systems in terms of the parameters of the difference equation, as
follows:
H(e^λ) = (∑_{k=0}^{M} b_k e^{−λk}) / (∑_{k=0}^{N} a_k e^{−λk}).   (5.105)
Hence, when the input of a discrete-time LTI system is an exponential
function, the corresponding output is just the scaled version of the input,
where the scaling factor H(eλ ) is called the transfer function.

5.5. Relationship Between the Impulse Response and Difference or Differential Equations
In Chapter 4, we mentioned that impulse response uniquely represents an LTI
system through
• convolution integral for continuous-time systems and
• convolution summation for the discrete-time systems.
In this chapter, we have seen that it is possible to uniquely represent a linear
time-invariant system by
• differential equation for continuous-time systems and
• difference equation for discrete-time systems.

Therefore, we should be able to obtain impulse response from a differential
or difference equation. All we have to do is to replace the input by the impulse
function and to replace the corresponding output by the impulse response.
Then, we solve the differential equation, which represents the LTI system, to
get an explicit analytical form for the impulse response.
Let us try to find the impulse response from a differential equation in the
following example.

Exercise 5.9: Find the impulse response, h(t) of the following first order
differential equation, which is initially at rest:

ẏ(t) + 3y(t) = x(t). (5.106)

Solution: Recall that impulse response is the output of an LTI system when
the input is an impulse function, i.e., x(t) = δ(t). Thus, replacing the input by
the unit impulse function in the differential equation, we obtain the following
differential equation:

ḣ(t) + 3h(t) = δ(t). (5.107)


Let us solve this differential equation by finding the homogeneous solution and
the particular solution, then adding them to obtain the overall solution.
The homogeneous solution, hH (t), is obtained from the homogeneous equation,

ḣH (t) + 3hH (t) = 0. (5.108)


Solving it, for hH (t) = Ceαt , we find

hH (t) = Ce−3t u(t). (5.109)


The particular solution is obtained by replacing x(t) = δ(t) and integrating both sides of the differential equation from just below zero, (−0), to just above zero, (+0), as follows:

∫_{−0}^{+0} dh(t) + 3 ∫_{−0}^{+0} h(τ) dτ = ∫_{−0}^{+0} δ(τ) dτ = 1.   (5.110)
The right-hand side of the above equation is equivalent to the operational
definition of the impulse function,
∫_{−0}^{+0} δ(τ) dτ = ∫_{−∞}^{∞} δ(τ) dτ = 1.   (5.111)

The first term in Equation (5.110) is

∫_{−0}^{+0} dh(τ) = h(+0) − h(−0).   (5.112)

Since the system is initially at rest, h(−0) = 0. Thus,


∫_{−0}^{+0} dh(τ) = h(+0).   (5.113)
The second term on the left-hand side of Equation (5.110) approaches 0,

lim_{+0→0, −0→0} ∫_{−0}^{+0} h(τ) dτ = 0.   (5.114)

Thus, the particular solution provides an auxiliary condition for the impulse
response, as h(+0) = 1. This condition is used to find the constant parameter
C of the homogeneous solution hH (t),

hH (+0) = Ce−3×0 = C = 1. (5.115)


Thus, the general solution of the differential equation is h(t) = e−3t u(t).

Remark 5.4: The particular solution just provides us with the initial con-
dition as h(+0) = 1.

The above example of finding the impulse response from the differential equation can be extended to a general differential equation as follows.
Given an Nth order ordinary differential equation which represents a system that is initially at rest,

∑_{k=0}^{N} a_k y^{(k)}(t) = x(t),   ∑_{k=0}^{N} a_k h^{(k)}(t) = δ(t).   (5.116)

The impulse response h(t) of a continuous time system, which is initially at


rest, is obtained by the following 2 steps:
Step 1: Find the homogeneous solution, h_H(t) = ∑_{k=1}^{N} C_k e^{α_k t}.
Step 2: Obtain the constant coefficients, C_k, from the N initial conditions:

h(0⁺) = ḣ(0⁺) = ··· = h^{(N−2)}(0⁺) = 0 and h^{(N−1)}(0⁺) = 1/a_N.   (5.117)
Recall that the derivative of the unit step response provides the impulse re-
sponse of an LTI system. Therefore, solving the differential equation for the unit step input and taking the derivative of this solution also provides the impulse response, as exemplified in the following exercise.

Exercise 5.10: Given the following differential equation of an LTI system,


which is initially at rest for ẏ(0) = y(0) = 0,

ÿ(t) + 3ẏ(t) + 2y(t) = x(t), (5.118)


a) Find the unit step response.
b) Find the impulse response.

Solution:
a) The unit step response, s(t) is obtained when the particular input to this
equation is x(t) = u(t). The equation becomes

s̈(t) + 3ṡ(t) + 2s(t) = u(t). (5.119)


The homogeneous solution of the above differential equation has the fol-
lowing analytical form:

sh (t) = eαt , (5.120)


where the parameter α is determined by finding the roots of the character-
istic equation. Inserting sh (t) into the above equation, gives the following
characteristic equation:

α2 + 3α + 2 = 0. (5.121)
with two roots α1 = −1 and α2 = −2. Thus, the homogeneous solution,
for zero input response has the following form:

sh (t) = C1 e−t + C2 e−2t . (5.122)


The particular solution has the following form:
s_p(t) = K u(t) = { K for t ≥ 0; 0 otherwise }.   (5.123)
The derivatives of the particular solution are all zero, yielding

2K = 1 for t ≥ 0. (5.124)
The general solution is then,
s(t) = [C_1 e^{−t} + C_2 e^{−2t} + 1/2] u(t).   (5.125)

The parameters C1 and C2 are obtained by using the initial conditions
ṡ(0) = s(0) = 0, in the above equation, as follows:
C_1 + C_2 + 1/2 = 0,   (5.126)

−C_1 − 2C_2 = 0.   (5.127)
Then, C_1 = −1 and C_2 = 1/2. The unit step response is then

s(t) = [−e^{−t} + (1/2) e^{−2t} + 1/2] u(t).   (5.128)
b) The impulse response of the LTI system is obtained by taking the derivative of the unit step response:

h(t) = [e^{−t} − e^{−2t}] u(t).   (5.129)
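This result can be cross-checked numerically with SciPy (a sketch; scipy.signal.impulse represents the system by the numerator and denominator coefficients of the transfer function 1/(s² + 3s + 2), listed from the highest power down):

import numpy as np
from scipy import signal

t = np.linspace(0, 5, 500)
t, h = signal.impulse(([1.0], [1.0, 3.0, 2.0]), T=t)   # system y'' + 3y' + 2y = x
print(np.allclose(h, np.exp(-t) - np.exp(-2 * t), atol=1e-4))   # True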
Obtaining the impulse response for a discrete-time LTI system from a dif-
ference equation is relatively easy compared to the continuous case, as shown
in the example below.

Exercise 5.11: Find the impulse response for the following discrete-time LTI
system, which is initially at rest,

y[n] − 0.5y[n − 1] = x[n]. (5.130)


where the initial rest condition is h[−1] = 0.

Solution: Set y[n] = h[n] for x[n] = δ[n] to obtain

h[n] − 0.5h[n − 1] = δ[n]. (5.131)


Using the recursion equation,

h[n] = 0.5h[n − 1] + δ[n], (5.132)


with h[−1] = 0, we can obtain the values of h[n] for all n ≥ 0, as follows,

Figure 5.6: Impulse response of an IIR (Infinite Impulse Response) filter.

h[0] = 0.5 h[−1] + 1 = 1
h[1] = 0.5 h[0] + 0 = 1/2
h[2] = (1/2)(1/2) = (1/2)²   (5.133)
h[3] = (1/2)(1/2)(1/2) = (1/2)³
...
Thus, the impulse response can be obtained in the following closed form,
h[n] = (1/2)^n u[n].   (5.134)

Remark 5.5: The filter h[n] = (1/2)^n u[n] has an infinite length of 0 ≤ n ≤ ∞. For this reason, this is an infinite impulse response (IIR) filter (Figure 5.6).

Exercise 5.12: Find the discrete-time impulse response corresponding to


the following difference equation, when the system is initially at rest:
y[n] = ∑_{k=0}^{M} b_k x[n − k].   (5.135)

Solution: Let us set y[n] = h[n] for x[n] = δ[n]. Then, the impulse response
is obtained as follows:

Figure 5.7: Impulse response of a FIR (Finite Impulse Response) filter is the shifted superposition of the impulse functions, δ[n − k]; it has finite length with sample values b_0, b_1, ..., b_M.

h[n] = ∑_{k=0}^{M} b_k δ[n − k].    (5.136)

Remark 5.6: The above filter has non-zero values only in a finite interval, i.e., h[n] ≠ 0 only for 0 ≤ n ≤ M. For this reason, it is a finite impulse response (FIR) filter (Figure 5.7).

Note that FIR filters are realizable in hardware in the physical environment. They are widely used in many application areas of signal processing to shape a finite-length input signal; a small sketch is given below.
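The following sketch, assuming NumPy, illustrates the FIR idea of Equation (5.136): the impulse response of the filter is just its coefficient list b_k, and filtering a finite-length input is a discrete convolution. The coefficient and input values are hypothetical, chosen only for illustration.

# FIR filtering as discrete convolution
import numpy as np

b = np.array([0.2, 0.3, 0.3, 0.2])       # example FIR coefficients b_0 ... b_M (hypothetical)
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])  # example finite-length input (hypothetical)

# Feeding δ[n] through the filter returns the coefficients themselves
delta = np.zeros(len(b)); delta[0] = 1.0
h = np.convolve(b, delta)[:len(b)]
print("h[n] =", h)                       # equals b

# Output of the FIR filter for x[n]; the result is again finite in length
y = np.convolve(x, b)
print("y[n] =", y)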

5.6. Block Diagram Representation of Differential Equations for LTI Systems
In Chapter 3, we have seen that a system can be represented by a set of
subsystems connected with each other by input and output signals. A subset
of these components can be used to represent LTI systems, as summarized
below.
An adder is used to add or subtract signals for both continuous time and
discrete time systems and it is symbolized as shown in Figure 5.8.
A scalar multiplier multiplies its input signal by a scalar parameter for
both continuous and discrete-time systems, and it is symbolized as shown in
Figure 5.9.
A unit delay operator is used to translate an input signal, x[n], to
obtain y[n] = x[n − 1] for discrete time systems, and it is symbolized as

Figure 5.8: Schematic representation of an adder for two input signals (output y = x1 + x2).

Figure 5.9: Schematic representation of a scalar multiplier (output y = ax).

shown in Figure 5.10.


A unit advance operator is used to translate an input signal, x[n], to
obtain y[n] = x[n + 1] for discrete time systems, and it is symbolized as
shown in Figure 5.11.
An integrator integrates an input signal as

y(t) = ∫_{−∞}^{t} x(τ) dτ    (5.137)

for continuous time systems, and it is symbolized as shown in Figure 5.12.


A differentiator takes the derivative of an input signal as

y(t) = dx(t)/dt    (5.138)
for continuous time systems, and it is symbolized as shown in Figure 5.13.
In the following exercises, we find the block diagram representation of dif-
ferential and difference equations to realize LTI systems. We also find the
differential and difference equations, given block diagrams.

Exercise 5.13: Find a block diagram representation of the following differ-


ential equation:

ẏ(t) + ay(t) = bx(t). (5.139)

Figure 5.10: Schematic representation of the unit delay operator (y[n] = x[n − 1]).

Figure 5.11: Schematic representation of the unit advance operator (y[n] = x[n + 1]).

Figure 5.12: Schematic representation of an integrator.

Solution: Leave the highest order of the derivative on the left-hand side of
the equation,

ẏ(t) = bx(t) − ay(t). (5.140)


The block diagram representation requires an integrator and an adder to realize
the above first-order differential equation, as depicted in Figure 5.14.
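A rough simulation sketch, assuming NumPy, of the integrator-based realization in Figure 5.14: the adder output ẏ(t) = bx(t) − ay(t) is fed to an integrator, approximated here by a forward-Euler step. The values of a, b, the step size, and the step input are arbitrary illustrative choices; the Euler update is only a stand-in for the ideal integrator block.

# Simulating ẏ(t) = b x(t) − a y(t) with an adder + (approximate) integrator
import numpy as np

a, b = 2.0, 1.0
dt = 1e-3
t = np.arange(0, 5, dt)
x = np.ones_like(t)                      # unit step input (arbitrary test signal)

y = np.zeros_like(t)
for n in range(1, len(t)):
    ydot = b * x[n - 1] - a * y[n - 1]   # output of the adder
    y[n] = y[n - 1] + dt * ydot          # integrator block (forward-Euler approximation)

# For ẏ + 2y = x with a step input and zero initial conditions,
# the exact response is (1/2)(1 − e^{−2t}); the difference is small discretization error.
print(np.max(np.abs(y - 0.5 * (1 - np.exp(-2 * t)))))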

Remark 5.7: When we draw a block diagram, traditionally, we put the input on the left-hand side and the output on the right-hand side.

Remark 5.8: The block diagram representation is not unique. We could use differentiators instead of the integrator to represent the same system. Another block diagram representation can be obtained by leaving y(t) on the left-hand side:

y(t) = (b/a) x(t) − (1/a) ẏ(t).    (5.141)

The corresponding block diagram is given in Figure 5.15.

Motivating Question: Which block diagram is better? Figure 5.14 or


Figure 5.15?
Integrators reduce power consumption compared to the differentiators.
Thus, the block diagram representation of Figure 5.14 is more efficient and
cost effective compared to that of Figure 5.15.

Figure 5.13: Schematic representation of a differentiator (y(t) = dx(t)/dt).

Figure 5.14: The output of the adder is ẏ(t), which is equal to bx(t) − ay(t). If we integrate ẏ(t), we get the output signal y(t).

Figure 5.15: Realization of the differential equation ẏ(t) + ay(t) = bx(t) by a differentiator, an adder, and two scalar multipliers.

The subsystems to be used in block diagram representations are a design


issue, which is beyond the scope of this book.

Exercise 5.14: Find the block diagram representation of the following discrete-
time LTI system:

y[n] + ay[n − 1] − by[n − 2] = x[n − 1]. (5.142)

Solution: Leave y[n] on the left-hand side of the equation, as follows:

y[n] = x[n − 1] − a y[n − 1] + b y[n − 2].    (5.143)

Then, form the right-hand side of the equation using adders and unit delay operators, as shown in Figure 5.16.
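A small sketch, assuming NumPy, that runs the recursion of Equation (5.143) with the unit delays implemented as previously stored samples; the coefficient values and the impulse input are hypothetical choices used only to trace the loop.

# Running y[n] = x[n−1] − a y[n−1] + b y[n−2] sample by sample
import numpy as np

a, b = 0.5, 0.25                               # hypothetical coefficients
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # δ[n] as a test input
y = np.zeros_like(x)

for n in range(len(x)):
    x_d1 = x[n - 1] if n >= 1 else 0.0         # unit delay of the input
    y_d1 = y[n - 1] if n >= 1 else 0.0         # unit delay of the output
    y_d2 = y[n - 2] if n >= 2 else 0.0         # two unit delays of the output
    y[n] = x_d1 - a * y_d1 + b * y_d2

print(y)                                       # first samples of the impulse response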

Exercise 5.15: Find the differential equation which is represented by the


block diagram of Figure 5.17.

Solution: The output y(t) is expressed as the integral of the difference x(t) − y(t), added to x(t):

Figure 5.16: Block diagram representation of the difference equation y[n] + ay[n − 1] − by[n − 2] = x[n − 1].

Figure 5.17: Block diagram representation of a differential equation in Exercise 5.15.

y(t) = ∫ [x(t) − y(t)] dt + x(t).    (5.144)

Taking the derivative of both sides of the above equation, we obtain,

ẏ(t) = x(t) − y(t) + ẋ(t) (5.145)


or
ẏ(t) + y(t) = ẋ(t) + x(t). (5.146)

Exercise 5.16: Find the difference equation which is represented by the


block diagram of Figure 5.18.

Figure 5.18: Block diagram representation of a difference equation in Exercise 5.16 (the output of the adder is multiplied by a gain of 2 to produce y[n]).

Solution: The output of the adder gets multiplied by 2 and becomes y[n]. Therefore, the output of the adder should be (1/2)y[n]. If we equate the output of the adder to the sum of its inputs, we obtain the difference equation:

(1/2) y[n] = x[n] + y[n − 1].    (5.147)

5.7. Chapter Summary


What is a differential equation? What is a difference equation? Can we use dif-
ferential equations to represent continuous-time systems? Can we use difference
equations to represent discrete-time systems?
A differential equation links the rate of change of the input to that of the output in continuous-time systems. Similarly, a difference equation relates the changes of the input-output pairs when the time variable is integer-valued.
Both the difference and differential equations can be used to represent the
dynamic changes of the systems. In particular, a linear and constant-coefficient
differential equation, which satisfies the initial rest condition, uniquely de-
scribes a continuous-time LTI system. Similarly, when the time variable is
integer-valued, a linear and constant coefficient difference equation, which sat-
isfies the initial rest condition, uniquely describes a discrete-time LTI system.
Thus, differential and difference equations are the mathematical objects to
represent LTI systems.
There is a one-to-one correspondence between the impulse response and the
differential equation, which represents a continuous time system. Similarly,
there is a one-to-one correspondence between the impulse response and the
difference equation, which represents a discrete-time system.
In order to realize the LTI systems in practice, we can use simple compo-
nents, such as adders, differentiators, integrators, unit delay, and unit advance
operators. These components enable us to represent LTI systems with block

diagrams and realize them in real-life applications.

Problems
1. The general form of an Nth order homogeneous differential equation is given below:

∑_{k=0}^{N} a_k d^k y(t)/dt^k = 0.

a) Find a solution to this differential equation in terms of the roots of the following algebraic equation:

∑_{k=0}^{N} a_k s^k = 0.

b) How many different solutions can you obtain for this differential equa-
tion if the initial conditions are not specified?
2. A second-order homogeneous differential equation is given below:

ÿ(t) + 6ẏ(t) + 9y(t) = 0


a) Find the solution of this equation for the initial conditions, y(0) = 0
and ẏ(0) = 18.
b) Find the solution of this equation for the initial conditions, y(0) = 0
and ẏ(0) = 0.
c) Compare the solutions you obtained in parts a and b. Explain the
effect of the initial conditions on the solutions.
3. Let’s make a slight modification in Problem 2 as follows;

ÿ(t) − 6ẏ(t) + 9y(t) = 0.

a) Find the solution of this equation for the initial conditions y(0) =
ẏ(0) = 0.
b) Compare the solution you obtained in part a to that of Problem 2.b. Explain what changes and what remains the same.
4. An initially at rest continuous time system is represented by the following
first-order differential equation:

ẏ(t) + 2y(t) = x(t).

a) Find the output of the system, y1 (t), for the input x1 (t) = 3e3t .
b) Find the output of the system, y2 (t), for the input x2 (t) = 2e2t .
c) Find the output of the system, y(t) for the input x(t) = 6e3t + 6e2t .
d) Find the output of the system, y3 (t), for the input x3 (t) = Ae2t u(t).
e) Find the output of the system in terms of y3 (t), which you calculated
in part d, for the input Ae2(t−T ) u(t − T ).
5. A continuous-time LTI system is represented by the following first-order
differential equation;

ẏ(t) + 6y(t) = x(t),

where x(t) is the input and y(t) is the output of the system.
a) Find the output y(t) of this system for the input x(t) = e(3j−1)t u(t).
b) What is the output of the system, y(t), when the input is Re{x}(t)?
c) Find a transfer function of this system.
6. The transfer function of a continuous time LTI system is given as follows:


H(λ) = ,
λ2 − 2λ + 1
where the system is initially at rest.
a) Find the differential equation that represents this system.
b) Find the output of this system for x(t) = 0.
c) Find the output of this system for x(t) = (2t + 1)u(t).
7. The general form of an Nth order homogeneous difference equation is given below:

∑_{k=0}^{N} a_k y[n − k] = 0.

a) Find a solution to this difference equation in terms of the roots of the following algebraic equation:

∑_{k=0}^{N} a_k z^k = 0.

b) How many different solutions can you obtain for this difference equation if the initial conditions are not specified?
8. A second-order homogeneous difference equation is given below:

y[n] − 2y[n − 1] + y[n − 2] = 0.


a) Find the solution of this equation for the initial conditions, y[0] =
0, y[1] = 3.
b) Find the solution of this equation for the initial conditions, y[0] =
0, y[1] = 0.
c) Compare the solutions you obtained in parts a and b. Explain the
effect of the initial conditions on the solutions.
9. Let's make a slight modification in Problem 8, as follows:

y[n] − 2y[n − 1] + y[n − 2] = 0.

a) Find the solution of this equation for the initial conditions y[0] = y[1] = 0.
b) Compare the solution you obtained in part a to that of Problem 8.b. Explain what changes and what remains the same.
10. A discrete-time system is represented by the following difference equation;

y[n] = (1/3) y[n − 1] + (1/9) x[n].

a) Does this system satisfy the initial rest condition for the initial condition y[0] = 1?
b) Is this system LTI? Verify your answer.
c) Find the transfer function of this system.
11. A discrete-time LTI system, which is initially at rest, is represented by
the following difference equation

y[n] = (1/5) y[n − 1] + 2x[n − 2].
a) Find the impulse response of this system.
b) Find the transfer function of this system.
c) Find a block diagram representation of this system using the adders
and unit delay operators.
12. A discrete-time LTI system is represented by the following difference equa-
tion. Assume that the system is initially at rest.

y[n] + (1/2) y[n − 1] + (3/20) y[n − 2] = x[n].
Find the output, y[n], of the system for the following input.

x[n] = δ[n + 2] + 2δ[n + 1] + 3δ[n] + 2δ[n − 1] + 2δ[n − 2] + δ[n − 3]

13. A discrete-time system is represented by the following difference equation;

y[n] = (1/2) y[n − 1] + x[n],
where the system is initially at rest.
a) Find the homogeneous solution of the system.
b) Find the general solution of the system for the input x[n] = (1/4)^n u[n].
c) Find the impulse response of this system.
14. A discrete-time causal LTI system consists of two subsystems, S1 and S2 ,
given below;
x[n] → S1 → w[n] → S2 → y[n]

The subsystem S1 is represented by the following difference equation:

w[n] = (1/3) w[n − 1] + x[n].
The subsystem S2 is represented by the following difference equation:

y[n] = (2/3) y[n − 1] + (1/2) w[n].
a) Find the difference equation for the overall system, which relates the
input x[n] and output y[n].
b) Draw a block diagram of the overall system, which receives the input
x[n] and outputs y[n], using adders and unit delay operators.
c) Find and plot the impulse response of the overall system.

15. The following discrete system consists of three subsystems, with impulse
responses, h1 [n], h2 [n], and h3 [n] = h2 [n], respectively.

x[n] → h1[n] → h2[n] → h3[n] → y[n]

The impulse response h2 [n] is given as follows:

h2 [n] = u[n] − u[n − 2]

and the overall impulse response, h[n], of the system is:

h[n] = δ[n] + 5δ[n − 1] + 10δ[n − 2] + 11δ[n − 3] + 8δ[n − 4] + δ[n − 5].

a) Find and plot the impulse response h1 [n].


b) Find and plot the output y[n] for the input x[n] = δ[n] − δ[n − 1].
16. A discrete-time LTI system is represented by the following impulse re-
sponse;
h[n] = (1/4)^{n+1} u[n + 3].

a) Find the output, y[n], of the system for the following input:

x[n] = 4^n u[−n] + (1/4)^n u[n].
b) Find and plot the output for the input x[n] = eλn .
c) Find a transfer function of this system.
d) Is this system causal?
17. A discrete-time LTI system is represented by the following difference equa-
tion;

y[n] = x[n] − 5y[n − 1].

Assume that the system is initially at rest.


a) Find and plot the impulse response of this system.
b) Find and plot the output, y[n] of the system for the input x[n] = u[n].
c) Find and plot the output for the input x[n] = eλn .
d) Find a transfer function of this system.

e) Is this system causal?
18. A continuous-time system S is represented by the impulse response h(t).

x(t) → S → y(t)

When the derivative x1(t) = dx(t)/dt of the input signal x(t) = 2^{−3t} u(t − 1) is fed to this system, the corresponding output becomes

y1(t) = −3y(t) + e^{−2t} u(t).

x1(t) = dx(t)/dt → h(t) → y1(t) = −3y(t) + e^{−2t} u(t)

Find the impulse response of the system S.


19. A discrete-time system, which is initially at rest, is represented by the
following difference equation:

y[n] − (1/4) y[n − 1] = x[n].
a) Find and plot the output y[n] for n = 0 and for the input x[n] = δ[n].
b) Find and plot the impulse response, h[n], for n ≥ 1.

20. Find the impulse response, h[n] of an initially at rest discrete-time system,
represented by the following difference equation:

y[n] − (1/4) y[n − 1] = x[n] + 2x[n − 1].
21. For the discrete-time LTI system given below:

∑_{k=0}^{N} y[n − k] = x[n],

find y[0] for x[n] = δ[n].

22. Find the impulse responses of the causal LTI systems represented by the following difference equations:
a) y[n] − (1/2) y[n − 2] = x[n]
b) y[n] − (1/2) y[n − 2] = x[n] + x[n − 1]
c) y[n] − y[n − 2] = x[n] − 2x[n − 4]
d) y[n] − (3/4) y[n − 1] + (1/4) y[n − 2] = x[n]
23. A discrete-time system is represented by the following difference equation;
y[n] = (1/3) y[n − 1] + (1/2) x[n].
a) Find a block diagram representation of this system using adders and
unit delay operators.
b) Find a block diagram representation of this system using adders and
unit advance operators.
24. A discrete-time LTI system is represented by the following block diagram:
a) Find the difference equation corresponding to the following block di-
agram.
b) Find a block diagram representation using unit advance operators
and adders.

[Block diagram with an adder, two unit delay operators D, and a gain of 1/3 relating the input x[n] to the output y[n].]

25. A continuous time causal LTI system is represented by the following dif-
ferential equation:
y(t) = −(1/2) ẏ(t) + 4x(t).
a) Find a block diagram representation of this system using integrators
and adders.
b) Find a block diagram representation of this system using differentia-
tors and adders.
26. A continuous-time LTI system is represented by the following block dia-
gram:
[Block diagram with an adder, two integrators (∫), and a gain of 3 relating the input x(t) to the output y(t).]

a) Find the differential equation corresponding to this system.


b) Find a block diagram representation of this system using adders and
differentiators.

Chapter 6
Fourier Series Representation of Continuous-Time Periodic Signals

“The deep study of nature is the most fruitful source of knowledge!”


Jean Baptiste Joseph Fourier

We humans perceive the physical world mostly through the dynamic changes of signals, such as light, sound, speech, and heat, received by our sensory organs in the time domain. Thus, it is natural to model signals and systems in the time domain.
Until now, we represented signals as functions of time. In other words, the domain of the functions was time. In the time domain, we represent signals in terms of weighted integrals and sums of basic functions, such as unit impulses:

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ,    (6.1)

for continuous-time signals, and

x[n] = ∑_{k=−∞}^{∞} x[k] δ[n − k],    (6.2)

for discrete-time signals.


We represented linear time-invariant (LTI) systems with equations that establish a relationship between the input and output signals using the impulse response h(·), via the convolution integral and the convolution sum:

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ,    (6.3)

for continuous-time systems, and

y[n] = ∑_{k=−∞}^{∞} x[k] h[n − k],    (6.4)

for discrete-time systems.


Equivalently, we also represented a continuous-time LTI system with a linear constant coefficient differential equation,

∑_{k=0}^{N} a_k d^k y(t)/dt^k = ∑_{k=0}^{M} b_k d^k x(t)/dt^k,    (6.5)

and a discrete-time LTI system with a linear constant coefficient difference equation,

∑_{k=0}^{N} a_k y[n − k] = ∑_{k=0}^{M} b_k x[n − k].    (6.6)

All of the above functions and equations represent the signals and systems
in terms of time.
Motivating Questions: Can we represent signals and systems in domains
other than time, such that they provide different types of useful information
about the underlying phenomena? How can we model our observations, such as
the trajectory of celestial objects, to study the universe surrounding us? How do
we represent a periodic motion? Is it possible to represent complicated periodic
motions in terms of simple periodic functions? Is it possible to represent any
function in terms of simple periodic motions at all? If this is ever possible,
what is the relationship between this new representation and the time domain
representation of the function?
Human curiosity has searched for answers to questions of this kind over the
centuries of the history of science, as summarized in the next section.

6.1. History
At the heart of the Fourier analysis lies the periodic motions and harmony.
The very first question was asked by the Babylonians, around 1500 BC,
when they attempted to understand what was happening in the sky: Is it
possible to model the motion of terrestrial objects, which repeats with some
regularity?

216
In order to answer this question they recorded the location of the sun and
moon relative to time to predict their trajectories. Following the Babyloni-
ans, Indian mathematicians developed an early version of the periodic functions, called Jiva in Sanskrit. In the 9th century AD, Muhammad ibn Musa Al-Khwarizmi produced the first accurate tables of periodic motions by improving Jiva, and named it Jaib, which means bosom in Arabic. The term Jaib was translated into Latin as Sinus, meaning bosom or bay, to represent periodic motion. Al-Khwarizmi was also a pioneer in circular trigonometry.
The scientists in medieval Islam accomplished a series of studies for calcu-
lating the trajectories of celestial objects using the simple periodic functions.
Among them, Al-Biruni, Al-Farghani, Al-Haytham, used circular motion
and trigonometry to formalize the periodic motion.
The next important question was asked by the European mathematicians
in enlightenment: Is it possible to represent complicated periodic motions in
terms of simple periodic functions?
Answering this question was possible by using the concept of harmony. In
18th century, L. Euler expressed a periodic motion of a string in terms of the
linear combination of the harmonically related sinusoidal functions. However,
J. Bernoulli and J. L. Lagrange argued that the representation of functions
as a superposition of periodic waveforms was not possible, especially, when the
function has sharp corners.
Finally, Jean-Baptiste Joseph Fourier claimed that any periodic func-
tion can be represented in terms of the superposition of harmonically related
sine and cosine functions, today, known as Fourier series for continuous-time
functions. Then, he extended this idea to any continuous-time aperiodic func-
tion and he wrote his pioneering paper in 1807. Four referees examined this mind-blowing work: S. F. Lacroix, G. Monge, P. S. Laplace, and his advisor, J. L. Lagrange. Three of the committee members accepted the paper. However, his advisor, Lagrange, rejected the paper and it was not published. In 1822, Fourier published his theorems in a book called Théorie Analytique de la Chaleur (The Analytical Theory of Heat). Later, in 1829, P. G. L. Dirichlet, a student of Fourier, showed that under some conditions, Fourier's theorems are correct. These conditions are called Dirichlet conditions.

WATCH: Learn more about the life of J. B. Fourier @ https://384book.net/v0601

Loosely speaking, Fourier's theorems together with the Dirichlet conditions state that
• Any periodic function, satisfying a set of conditions called Dirichlet conditions, can be represented in terms of the superposition of harmonically related waves.
• Furthermore, any function, satisfying a set of conditions, called Dirichlet
conditions, can be represented in terms of the superposition of harmoni-
cally related waves.
Motivating Question: But, what is a wave?
Loosely speaking, waves are defined as propagating dynamic changes from
an equilibrium of one or more quantities. Waves can be periodic, in which
case those quantities oscillate repeatedly about an equilibrium value at some
frequency.
Examples of waves include sound waves, light waves, radio waves, mi-
crowaves, water waves, stadium waves, earthquake waves, waves on a string.

Figure 6.1: Digital artwork showing colorful waves.

6.2. Mathematical Representation of Waves and Harmony
In mathematics, waves or waveforms are frequently represented by periodic
functions, such as sines, cosines and complex exponentials.
Recall that a complex exponential, represented by the Euler Formula,

Φk (t) = ejkω0 t = cos(kω0 t) + j sin(kω0 t) (6.7)

218
Figure 6.2: A simple rotation on a circle in the complex plane creates the trigonometric waveforms of sine and cosine. As θ increases, the point on the circle (left) travels in the counter-clockwise direction; at the same time, the y-coordinate of that point, plotted against θ, follows a sine curve, shown on the right.

is a periodic function, where the angular frequency is ω0 = 2π/T0 and T0 is the fundamental period. The set of all complex exponentials, Φk(t), for the integer values −∞ < k < ∞, are called harmonically related exponentials.
In the above formulation, for each integer value of k, we define a complex
exponential, Φk (t), with frequency, kω0 , called the k th harmonics. Complex
exponentials are very interesting functions, which rotate on a unit circle in a
complex plane, as a function of time. If we take the time as the third dimension,
which is orthogonal to the complex plane, we can generate a sinusoidal signal
in the time domain, as the complex exponential rotates in the complex plane.
The speed of the rotation depends on the angular frequency, ω0 . We generate
the k th harmonics of the complex exponential, ejω0 t , as follows:

Φk (t) = ejkω0 t (6.8)


where the rotation speed of the exponential function on the unit circle becomes
k times faster. Recall that the exponential functions with the integer multiples
of angular frequencies, kω0 , are called the harmonically related complex ex-
ponentials. The motion of the complex exponential on the unit circle and the
corresponding sinusoidal function is depicted in Figure 6.2.
Motivating Question: Can we represent a periodic function in terms of
weighted summation of waves, such as sinusoidals or complex exponentials?
Mathematically speaking, given a periodic function, x(t), our goal is to find
a set of coefficients {ak } such that we can represent x(t) by the superposition
of harmonically related complex exponentials, as follows:

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t}.    (6.9)

Let’s study a simple example below to see if this is ever possible.

Exercise 6.1: Can we represent the following signal in terms of the super-
position of complex exponentials?

x(t) = β1 sin(2πt) + β2 cos(πt). (6.10)

Figure 6.3: Plot of the periodic function x(t) = 0.5 sin(2πt) + 0.5 cos(πt).

Solution: Let us use the Euler formula to represent the sines and cosines in
terms of complex exponentials.

x(t) = β1 (e^{j2πt} − e^{−j2πt})/(2j) + β2 (e^{jπt} + e^{−jπt})/2.    (6.11)

Let us put the above function into the following general form,

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t} = a_0 + a_1 e^{jω0t} + a_{−1} e^{−jω0t} + a_2 e^{j2ω0t} + a_{−2} e^{−j2ω0t} + . . . ,    (6.12)

where the e^{jkω0t} are infinitely many harmonically related complex exponentials, since the integer value k ranges over −∞ < k < ∞.
According to this equation,

- for k = ±1 → a_1 = a_{−1} = β2/2,
- for k = 2 → a_2 = β1/(2j),
- for k = −2 → a_{−2} = −β1/(2j),
- otherwise, a_k = 0.
Thus, function x(t) is represented by four coefficients of the harmonically re-
lated complex exponential functions as

{a1 , a−1 , a2 , a−2 }. (6.13)


Note: Coefficients of four harmonically related complex exponentials are suf-
ficient to represent the signal x(t) = β1 sin(2πt) + β2 cos(πt).
Let us set β1 = β2 = 0.5. Then, function x(t) becomes

x(t) = 0.5 sin(2πt) + 0.5 cos(πt) (6.14)


and the four coefficients of the harmonically related complex exponentials become

a_1 = a_{−1} = 0.25,   a_2 = −a_{−2} = 0.25/j = −0.25j.    (6.15)
Each coefficient ak shows the contribution of the corresponding harmonic of
the complex exponential to make the signal x(t). For this particular example,
there are two pairs of nonzero harmonics; a1 , a−1 and a2 , a−2 . The rest of
the coefficients are zero, showing that the function is made of two pairs of
harmonics of the complex exponential function.
The plot of the coefficients ak with respect to k provides information about the
frequency content of the signals. When the coefficients are complex numbers,
we need to generate two plots: one for the magnitude and the other for the
phase. For this particular example:
Magnitudes are |a1 | = |a2 | = |a−1 | = |a−2 | = 0.25 and
Phases are ∢a1 = ∢a−1 = 0, ∢a2 = −∢a−2 = −0.5π.
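A numerical check of this example, assuming NumPy: the analysis integral a_k = (1/T) ∫_T x(t) e^{−jkω0t} dt is approximated by a Riemann sum over one period and compared with the coefficients found above. The grid resolution is an arbitrary choice.

# Numerical Fourier series coefficients of x(t) = 0.5 sin(2πt) + 0.5 cos(πt)
import numpy as np

T = 2.0                                  # fundamental period (cos(πt) has period 2)
w0 = 2 * np.pi / T
t = np.linspace(0, T, 20000, endpoint=False)
dt = t[1] - t[0]
x = 0.5 * np.sin(2 * np.pi * t) + 0.5 * np.cos(np.pi * t)

for k in range(-3, 4):
    ak = np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T
    print(k, np.round(ak, 4))
# Expected: a_{±1} = 0.25, a_2 = −0.25j, a_{−2} = +0.25j, all others ≈ 0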

6.3. Fourier Series Representation of Continuous-Time Periodic Functions
The simple example of the previous section shows that it is possible to represent
combinations of trigonometric functions in terms of the superposition of the
harmonically related complex exponentials, using the Euler formula. What if
we had a complicated function, which does not consist of any sines and cosines
or does not have any known analytical forms?

Figure 6.4: Magnitude |ak| vs. k and phase ∢ak vs. k plots. The magnitudes are all the same for non-zero values of ak, indicating equal contribution of all of the harmonics. There are two nonzero phases at k = ±2, which shows that the harmonics line up at ∢ak = −0.5π.

WATCH: Representation of a complicated function by superposition of harmonically related complex exponentials @ https://384book.net/v0602

WATCH: Learn more about the Fourier series @ https://384book.net/v0603

6.4. Dirichlet Conditions


Motivating Question: Is it possible to represent any periodical signal as a
superposition of infinitely many harmonics of complex exponentials?
When Fourier presented his original paper stating that any function can be represented in terms of a superposition of harmonically related complex exponentials, he received objections from some of the mathematicians of that time, including his advisor Lagrange. Indeed, the Fourier series representation does not exist for all periodic signals. This fact was shown by Fourier's famous student, P. G. Lejeune Dirichlet.
Dirichlet established the conditions for the existence of the Fourier series representation of a continuous-time periodic function. These conditions are summarized below:

Figure 6.5: The plot of x(t) = ln(t) for 0 ≤ t ≤ 1, with x(t) = x(t + 1).

Condition 1. The function x(t) must be absolutely integrable in a finite interval. Formally, we should have

∫_T |x(t)| dt < ∞.    (6.16)

Exercise 6.2: Does the following signal satisfy Condition 1?

x(t) = ln(t) for 0≤t≤1


and x(t) is periodic: x(t) = x(t + T ) for T = 1.

Solution: No, because the absolute integral of this function is not finite:

∫_0^1 |ln(t)| dt → ∞.    (6.17)

The plot of this function is shown in Figure 6.5.

Condition 2. The function x(t) must have bounded variation in any finite
interval. That is, the number of minima and maxima should be bounded in
any finite interval.

Exercise 6.3: Does the following signal satisfy Condition 2?

 

x(t) = sin for 0 ≤ t ≤ 1 and x(t) = x(t + T ) for T = 1.
4t

223
Figure 6.6: A function with infinitely many maxima and minima in a finite interval.

Solution: No, because the number of minima and maxima of this function in
a finite period is ∞. The plot of this function is shown in Figure 6.6.

Condition 3. In any finite interval, there must be finitely many discontinuities.

Exercise 6.4: Does the following function satisfy Condition 3?

x(t) = 1 if t = 1/n for any positive integer n, and x(t) = 0 otherwise.    (6.18)

Solution: Since there are infinitely many positive integers, this function
switches between 0 and 1 infinitely many times. This implies an infinite number
of discontinuities. Therefore, this function does not satisfy Condition 3.

Fact: If a continuous-time periodic function does not satisfy Dirichlet condi-


tions, then, it is not possible to find a set of coefficients {ak } to represent this
function by the superposition of the harmonically related complex exponen-
tials.

6.5. Fourier Theorem


Based upon the above analyses and conditions we can state the following the-
orem:

Theorem 6.1: A periodic function x(t) with fundamental period T, satisfying the Dirichlet conditions, can be represented as the superposition of harmonically related complex exponentials:

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t}    (Synthesis Equation)    (6.19)

The coefficients ak are called the Fourier series coefficients or spectral coefficients, and can be uniquely obtained from the following integral:

a_k = (1/T) ∫_T x(t) e^{−jkω0t} dt    (Analysis Equation)    (6.20)

The limits of the integral cover one full period T of the periodic function x(t).

Remark 6.1: When the function x(t) does not satisfy the Dirichlet conditions, it is not possible to find the Fourier series coefficients, ak, since the integral which defines the coefficients is not bounded or does not exist. Although the Dirichlet conditions are rather intuitive, their formal proof is beyond the scope of this book.

Proof Sketch for the Fourier Theorem. Multiply both sides of the Fourier series representation equation by e^{−jnω0t}:

x(t) e^{−jnω0t} = ∑_{k=−∞}^{∞} a_k e^{jkω0t} e^{−jnω0t} = ∑_{k=−∞}^{∞} a_k e^{j(k−n)ω0t}.    (6.21)

Integrate both sides of the equation over the interval (0, T):

∫_0^T x(t) e^{−jnω0t} dt = ∑_{k=−∞}^{∞} a_k ∫_0^T e^{j(k−n)ω0t} dt = T a_n,    (6.22)

which gives us the analysis equation of the Fourier Theorem. In this representation, each a_k measures the amount of the k-th harmonic in the function x(t). The following remark is about the evaluation of the integral on the right-hand side above.

Remark 6.2: Harmonically related complex exponentials are orthogonal to each other. In other words, their inner product is

< e^{jnω0t}, e^{jkω0t} > = ∫_0^T e^{jnω0t} e^{−jkω0t} dt = T for n = k, and 0 otherwise.    (6.23)

As an exercise, you can try to prove Equation (6.23). (Hint: Handle the n = k and n ≠ k cases separately, and use the relationship between the angular frequency and the fundamental period, ω0 = 2π/T.)
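A small numerical illustration of Remark 6.2, assuming NumPy: the inner product of two harmonically related exponentials over one period is T when n = k and numerically close to zero otherwise. The particular values of T, n, and k are arbitrary.

# Orthogonality of harmonically related complex exponentials over one period
import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 10000, endpoint=False)
dt = t[1] - t[0]

def inner(n, k):
    return np.sum(np.exp(1j * n * w0 * t) * np.exp(-1j * k * w0 * t)) * dt

print(abs(inner(3, 3)))   # ≈ T = 1
print(abs(inner(3, 5)))   # ≈ 0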

Terminology. The coefficients ak of the synthesis equation can be considered as measures of the frequency content of the function x(t). Depending on the harmonic value k, the terminology below is commonly used for the Fourier series coefficients:
• The Fourier series coefficients ak, for k = ±1, ±2, ±3, ..., measure the contribution of each frequency to build up the signal. They form a frequency spectrum of the signal. For this reason, they are called the spectral coefficients.
• The 0th spectral coefficient,

a_0 = (1/T) ∫_T x(t) dt,    (6.24)

is a constant equal to the average value of x(t) over one period. For this reason, a0 is called the average term.
• The spectral coefficients of the lowest frequency harmonics, ak and a−k for |k| = 1,

a_{±1} = (1/T) ∫_T x(t) e^{∓jω0t} dt,    (6.25)

are called the spectral coefficients of the fundamental frequency.
• The coefficients ak for |k| ≥ 2 are called the k-th harmonics.
Fourier series representation of a function, x(t), forms a rigorous framework
for the development of digital technology, since it bridges the continuous and
discrete-time world, as we shall later see in the Sampling Theorem (Chapter
11). It has a great impact in many fields of science and engineering, whenever
the frequency content of the functions provides us with useful information
about the underlying physical phenomenon.
Motivating Question: What do the analysis and synthesis equation tell
us about the function x(t)?
Let’s give a glimpse of the meaning of Fourier series representation, by
introducing a new space, called the Hilbert space.

INTERACTIVE: Explore Fourier series representation for continuous-time signals @ https://384book.net/i0601

6.6. Frequency Domain and Hilbert Spaces
A Hilbert space is considered a vector space, spanned by functions, rather
than vectors, where the distance between the functions is defined by the inner
products.
Recall a vector space of dimension n, where we represent a vector, x ∈ V n ,
as the linear combination of the basis vectors as follows:
x = ∑_{k=1}^{n} a_k e_k.    (6.26)

Recall, also, that ak ’s are called the coordinates of the vector x = [a1 a2 .... an ]T ,
with respect to the set of basis vectors, {ek }nk=1 .
In the above representation, we may use the standard basis vectors, ek =
[0....1....0]T for k = 1, ..., n, where the k th entry has value 1 and all other entries
are zeros. The superscript T of the vectors indicates the vector transpose operation.

Remark 6.3: The n basis vectors of a vector space V^n must be orthogonal to each other in order to span the entire vector space. In this case, the basis vectors are called linearly independent. It is easy to show that the standard basis vectors, e_k, are orthogonal to each other. In other words, the inner product satisfies the following condition:

< e_k, e_j > = e_k^T e_j = 1 for j = k, and 0 otherwise.    (6.27)

The n standard basis vectors e_k, ∀k = 1, ..., n, span the n-dimensional Euclidean vector space.

Motivating Question: What does orthogonality mean?


In geometry, orthogonality is defined as the perpendicularity of two lines.
This property is generalized to other mathematical objects, such as sets, graphs,
functions, matrices, and vectors. Loosely speaking, orthogonality assures that there is no interference, dependence, or correlation among the orthogonal objects. Given a class of mathematical objects, an object cannot be represented in terms of the other objects that are orthogonal to it.
Let us study the concept of orthogonality in the following simple exercise.

Exercise 6.5: Consider the vector x ∈ R2 of Figure 6.7, represented by


the coordinates, x = [a1 a2 ] with respect to the standard basis vector, e1 =
[1 0], e2 = [0 1].
(a) Find the inner product between x and e1 , and between x and e2 .

Figure 6.7: A two-dimensional Euclidean vector space, called two-tuples. A vector x ∈ R² is represented by its coordinates a1, a2 with respect to the orthogonal basis vectors, e1 = [1 0] and e2 = [0 1].

(b) Find the projection of x on e1 and e2 .


(c) Find the projection of e1 on e2 .

Solution: a) The coordinates a1 and a2 are the inner product of the vector x
with the basis vectors, e1 and e2 . Mathematically,

a1 =< x, e1 >= xT e1 (6.28)


and

a2 =< x, e2 >= xT e2 . (6.29)


b) The inner products of < x, e1 > and < x, e2 > are actually the projections
of the vector x on the basis vectors, e1 and e2 .
Motivating Question: What is the meaning of projection?
The projections of x on the basis vectors, which are the coordinates a1 and a2, measure the amount of x along the basis vectors, e1 and e2.
c) The projection of e1 on e2 is

< e1 , e2 >= eT1 e2 = 0.

Motivating Question: What is the amount of the basis vector e1 in e2?
It is zero: the vector e1 has no component along the vector e2.
If a mathematical object is orthogonal to other mathematical objects, then
this object cannot be observed or measured in the other objects. This property
of vectors, in a vector space, is called linear independence. If a set of math-
ematical objects are linearly independent, one of them cannot be represented
in terms of the others.

WATCH: Learn more about vector spaces @ https://384book.net/v0604

Let us now compare

x = ∑_{k=1}^{n} a_k e_k    (6.30)

to the Fourier series representation of a function,

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t}.    (6.31)
There is a marvelous resemblance between the above representations. Repre-


sentation of a vector x in an n-dimensional Euclidean space looks very similar
to the representation of a function x(t) in a space spanned by the harmonically
related complex exponentials of infinite dimension. This representation brings
us to an entirely new space, called a function space, introduced by David
Hilbert. Thus, it is named after him, as Hilbert space. In this new space, the
coordinates of a function are the Fourier series coefficients {a_k}_{k=−∞}^{∞}, and the basis functions are the harmonically related complex exponentials {e^{jkω0t}}_{k=−∞}^{∞}, which are orthogonal to each other. Thus, harmonically related complex exponentials span an infinite-dimensional Hilbert space. Each vector in a Hilbert space is a function, x(t), with the coordinates {a_k}_{k=−∞}^{∞}.
Thus, the concept of Vector Space can be easily extended to Hilbert Space,
where the bases of this infinite-dimensional space are functions. In other words,
Hilbert space generalizes the Euclidean space. It extends the methods of Linear
Algebra from the finite-dimensional vector spaces to the spaces with infinite
number of dimensions.

Remark 6.4: It is possible to define Hilbert spaces spanned by functions


other than the harmonically related complex exponentials. These Hilbert spaces,
such as Wavelet, Hadamard, Haar, Sine and Cosine spaces are beyond the scope
of this book.

A loose definition of Hilbert space is given below.

Definition 6.1: A Hilbert space H is a vector space spanned by a set of orthogonal functions, Ψk(t) for integer k, where a function x(t) corresponds to a vector in this space. The distance between two functions x(t), y(t) ∈ H is defined by the inner product on an interval [a, b],

< x(t), y(t) > = ∫_a^b x(t) y*(t) dt,    (6.32)

where [*] indicates the complex conjugate operation.

A signal, which is represented in the time domain as a continuous-time periodic function x(t), can be equivalently represented as a vector in a Hilbert space by its coordinates {a_k}_{k=−∞}^{∞}. This particular Hilbert space, spanned by infinitely many harmonically related complex exponential basis functions, is called the frequency domain. Thus, we have two equivalent representations of a continuous-time periodic function, in the time and frequency domains:

x(t) ←→ a_k.    (6.33)

In the time domain, a signal is represented by a function whose domain is time, whereas in the frequency domain a signal is represented by the coordinates {a_k} of the harmonically related frequencies, called spectral coefficients. Observe that since the function x(t) is continuous, the time variable is a real number, t ∈ R. On the other hand, the spectral coefficients are indexed by the integer harmonics, k ∈ I. There are many beautiful properties of Hilbert spaces, which are beyond the scope of this book.

WATCH: Learn more about the Hilbert space and inner products @ https://384book.net/v0605

Exercise 6.6: Consider the signal represented by the following function:

x(t) = 1 + cos(ω0 t). (6.34)


Find the coordinates of x(t) in Hilbert space, spanned by the harmonically
related complex exponential functions. Plot ak vs. k.

Solution: The coordinates of a periodic function in Hilbert space are the


Fourier series coefficients. The Euler formula provides us with the representa-
tion of a periodic function in terms of harmonically related complex exponen-
tials:
x(t) = 1 + (1/2)(e^{jω0t} + e^{−jω0t}).    (6.35)
Comparing this to the Fourier synthesis equation,

Figure 6.8: The plot of the spectral coefficients ak vs. k in Exercise 6.6.


x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t} = 1 + (1/2)(e^{jω0t} + e^{−jω0t}),    (6.36)

and equating the coefficients of the exponential functions with the same harmonic k on both sides of the equation, we obtain the Fourier series coefficients as a0 = 1, a1 = a−1 = 1/2, and ak = 0 for |k| > 1. The plot of ak is given in Figure 6.8.

Exercise 6.7: Consider the signal represented by the following function:

x(t) = 1 + cos(ω0 t) + sin(ω0 t). (6.37)


Find the coordinates of x(t) in Hilbert space, where the basis functions are
harmonically related complex exponential functions. Plot the coordinates, ak ,
vs. k.

Solution: As we did in the previous example, we use the Fourier series syn-
thesis equation together with the Euler formula:
x(t) = 1 + (1/2)(e^{jω0t} + e^{−jω0t}) + (1/(2j))(e^{jω0t} − e^{−jω0t})    (6.38)
     = 1 + (1/2)(1 − j) e^{jω0t} + (1/2)(1 + j) e^{−jω0t} = ∑_{k=−∞}^{∞} a_k e^{jkω0t}.    (6.39)

Equating the coefficients of the exponential functions with the same harmonics, we obtain a0 = 1, a1 = (1/2)(1 − j), a−1 = (1/2)(1 + j), and ak = 0 for |k| > 1.
Note: The coordinates of the function in Hilbert space are complex numbers.
Recall that, complex numbers can be represented in two different coordinate
systems:

Figure 6.9: Plot of the magnitude spectrum (left) and phase spectrum (right) of the Fourier series coefficients of x(t) = 1 + cos(ω0t) + sin(ω0t).

1. In the Cartesian coordinate system: ak = Re{ak} + j Im{ak},

2. In the polar coordinate system: ak = |ak| e^{jθk}, where the magnitude square of ak is

|ak|² = (Re{ak})² + (Im{ak})²    (6.40)

and the phase of ak is

θk = tan^{−1}( Im{ak} / Re{ak} ).    (6.41)
Let us plot the spectral coefficients ak vs. k in the polar coordinate system. In
this case, we need two plots:
1) Magnitude spectrum, which is the plot of the magnitudes of the Fourier series coefficients: |a1| = |a−1| = 1/√2, a0 = 1.
2) Phase spectrum, which is the plot of the phases of the Fourier series coefficients: θ1 = −π/4, θ−1 = π/4, and θ0 = 0.
The magnitude and phase spectrum are given in Figure 6.9.
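A short numerical check of the magnitude and phase values above, assuming NumPy, using a1 = (1/2)(1 − j) and a−1 = (1/2)(1 + j):

# Magnitude and phase of the nonzero spectral coefficients in Exercise 6.7
import numpy as np

a1 = 0.5 * (1 - 1j)
a_m1 = 0.5 * (1 + 1j)
print(np.abs(a1), np.angle(a1))      # ≈ 0.7071 (= 1/√2) and ≈ −0.7854 (= −π/4)
print(np.abs(a_m1), np.angle(a_m1))  # ≈ 1/√2 and ≈ +π/4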

Exercise 6.8: Consider a periodic signal represented by the following equa-


tion:
x(t) = 1 if |t| < T0, and x(t) = 0 if T0 ≤ |t| ≤ T − T0,    (6.42)
and x(t) = x(t+T ). Signals of such form are called pulse trains (Figure 6.10).
Find and plot the Fourier Series coefficients.

Solution: Since the above signal does not consist of trigonometric functions,

Figure 6.10: A periodic function, called a pulse train, which repeats itself at every period T.

it is not possible to apply the Euler formula to find the coefficients of the harmonically related complex exponentials. Thus, we use the Fourier analysis equation to compute the average term,

a_0 = (1/T) ∫_{−T0}^{T0} dt = 2T0/T,    (6.43)

and the spectral coefficient of the k-th harmonic,

a_k = (1/T) ∫_{−T0}^{T0} e^{−jkω0t} dt = (1/(jkω0T)) (e^{jkω0T0} − e^{−jkω0T0}) = (1/(πk)) sin(kω0T0) = (1/(πk)) sin(2πk T0/T),    (6.44)

where the angular frequency is ω0 = 2π/T.
Motivating Question: What is the effect of the fundamental period T of
x(t) on the spectral coefficients ak ?
In order to answer this question let us plot the spectral coefficients for three
cases of the fundamental period:
Case 1: For T = 4T0 → ak = (1/(πk)) sin(kπ/2).
Case 2: For T = 8T0 → ak = (1/(πk)) sin(kπ/4).
Case 3: For T = 16T0 → ak = (1/(πk)) sin(kπ/8).

Figure 6.11 shows the spectrum of ak vs. k for T = 4T0, T = 8T0, and T = 16T0. When you compare the spectral coefficients of the function x(t) in the Fourier domain, what do you observe?
Figure 6.11: Plot of the Fourier spectrum (ak vs. k) for T = 4T0, T = 8T0, and T = 16T0 for the signal in Exercise 6.8.

Recall that the spectral coefficient ak, for each k, shows the amount of

the corresponding harmonic frequency in the signal x(t). For small periods, e.g., T = 4T0, the signal x(t) has relatively fewer low-frequency components compared to the other signals. As we increase the period of the signal, the low-frequency components increase, and the rate of change of the spectral coefficients decreases. Investigating the behavior of the spectral coefficients shows the relative amount of each frequency component that makes up the signal. This is why we call the plot of ak vs. k the spectrum, meaning the band of frequencies in x(t).

6.7. Response of a Linear Time-Invariant System to the Continuous-Time Complex Exponential Input Signal
Until now, we have studied various interesting properties of the complex ex-
ponential function, x(t) = ejω0 t , which can be summarized as follows:
• Complex exponential functions can be written in terms of the trigonometric
functions through the Euler formula, ejω0 t = cos ω0 t + j sin ω0 t
• Complex exponentials are periodic functions with period T = 2π/ω0 .
• Harmonically related complex exponentials, ϕk (t) = ejkω0 t represent peri-
odic motions of different speeds on a unit circle, defined in the complex
plane. The speed of the motion depends on the harmonics kω0 .
• Harmonically related complex exponentials are orthogonal to each other,
i.e., their inner product is

< e^{jnω0t}, e^{jkω0t} > = ∫_0^T e^{jnω0t} e^{−jkω0t} dt = T for n = k, and 0 otherwise.    (6.45)
• Infinitely many harmonically related complex exponentials span a Hilbert
space of functions, called Frequency domain.

• A periodic function can be uniquely represented in terms of the superposition of the harmonically related complex exponentials, called the Fourier series representation,

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t},    (6.46)

where

a_k = (1/T) ∫_T x(t) e^{−jkω0t} dt.    (6.47)
In this section, we observe one more property of the complex exponential
function from the systems point of view: When a complex exponential is fed
at the input of an LTI system, we obtain a response, which is just the scaled
version of the input. The function satisfying this property is called the eigen-
function of an LTI system. We hinted at this behavior when we described the
“transfer function” of LTI systems represented by differential and difference
equations, in Chapter 5.

Definition 6.2: An eigenfunction of a continuous-time LTI system, defined on a Hilbert space, is a non-zero input function Φ(t) for which the output is a scaled version of Φ(t). Mathematically speaking, the eigenfunction Φ(t) of an LTI system, represented by an operator L, satisfies the following equation:

LΦ(t) = ΛΦ(t), (6.48)


where the scalar value, Λ is called the eigenvalue of the continuous-time LTI
system, represented by the operator, L.

Recall the definition of the eigenvalues and eigenvectors of matrices, in


Linear Algebra: Given a matrix, A, its eigenvectors, ξ, and eigenvalues, λ,
satisfy the following equation, called the characteristic equation:

Aξ = λξ. (6.49)
The above equation states that an N × N matrix operator, A, can be
represented by a simple scalar, λ, when it is multiplied by an eigenvector, ξ,
of that matrix. In other words, instead of multiplying the matrix A by one
of its eigenvectors, we simply multiply it with the corresponding eigenvalue
to get the same result of multiplication. The scalar eigenvalue λ replaces the
entire N 2 elements of the matrix when it is multiplied with an eigenvector.
That is why we call the λ value and the corresponding vector as eigen, which
is a German word, meaning “own”, in English. The scalar λ characterizes a
matrix with N 2 entries on its own, in a very compact form, when the matrix
is multiplied by its own (eigen) vectors.

x(t) = e^{jω0t} → h(t) → y(t) = H(jω0) e^{jω0t}

Figure 6.12: An LTI system, represented by h(t), receives a complex exponential, x(t) = e^{jω0t}, as input and outputs the same complex exponential scaled by H(jω0).

Recall from Linear Algebra that the eigenvectors of a matrix are orthogonal
to each other and they form a basis of a vector space, where the entries of each
vector in this space are the coordinates of the vector with respect to the basis
vectors.

6.7.1. Eigenfunctions and Eigenvalues of Continuous-Time LTI Systems
In this section, we are going to show that the complex exponentials are the
eigenfunctions of a continuous-time LTI system, represented by the impulse
response, h(t), via the convolution integral,

y(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ,    (6.50)
and we are going to find the corresponding eigenvalue of this system.
Let us start by replacing the input by x(t) = ejω0 t , in the convolution
integral:

y(t) = ∫_{−∞}^{∞} h(τ) e^{jω0(t−τ)} dτ,    (6.51)

where ω0 = 2π/T0 is the angular frequency of the periodic input. The output
of the LTI system to the input, x(t) = ejω0 t , can be written as
y(t) = e^{jω0t} ∫_{−∞}^{∞} h(τ) e^{−jω0τ} dτ.    (6.52)

Note that the integral,

H(jω0) = ∫_{−∞}^{∞} h(τ) e^{−jω0τ} dτ,    (6.53)

is just a scaling factor of the input, x(t) = ejω0 t , of the LTI system.
Therefore, when the input is a complex exponential, x(t) = ejω0 t , the cor-
responding output, y(t) = H(jω0 )ejω0 t , is just the scaled form of the input.
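A numerical sketch of the eigenfunction property, assuming NumPy and using the illustrative impulse response h(t) = e^{−2t}u(t), which is not taken from the text: its eigenvalue is H(jω0) = ∫ h(τ)e^{−jω0τ}dτ = 1/(2 + jω0), and the convolution output for the input e^{jω0t} should equal H(jω0)e^{jω0t}.

# Eigenvalue H(jω0) and eigenfunction check for h(t) = e^{-2t} u(t)
import numpy as np

w0 = 3.0                                           # arbitrary input frequency
tau = np.linspace(0, 20, 200000)
dtau = tau[1] - tau[0]
h = np.exp(-2 * tau)                               # h(t) = e^{-2t} u(t), t >= 0

H_numeric = np.sum(h * np.exp(-1j * w0 * tau)) * dtau
H_exact = 1 / (2 + 1j * w0)
print(H_numeric, H_exact)                          # nearly equal

# Convolution output at one time instant, compared with H(jω0) e^{jω0 t0}
t0 = 5.0
y_t0 = np.sum(h * np.exp(1j * w0 * (t0 - tau))) * dtau
print(y_t0, H_exact * np.exp(1j * w0 * t0))        # nearly equal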

Definition 6.3: Eigenvalue of a continuous-time LTI system for a complex exponential input x(t) = e^{jω0t} is defined as

H(jω0) = ∫_{−∞}^{∞} h(τ) e^{−jω0τ} dτ,    (6.54)

where ω0 = 2π/T0 is the angular frequency of the periodic input.

Remark 6.5: A linear time-invariant system has an eigenvalue for each harmonic of the complex exponential. Formally speaking, when the input is an
eigenfunction which corresponds to the k th harmonic of the complex exponen-
tial function,

x(t) = ejkω0 t , (6.55)


the corresponding output is

y(t) = H(jkω0 )ejkω0 t , (6.56)


where H(jkω0 ) is the k th eigenvalue of the LTI system, defined as
H(jkω0) = ∫_{−∞}^{∞} h(τ) e^{−jkω0τ} dτ.    (6.57)

Remark 6.6: The eigenfunctions of an LTI system are the harmonically


related complex exponentials, which span a Hilbert space. Each periodic func-
tion, x(t), in this space can be represented by the superposition of the basis
functions, Φk (t) = ejkω0 t , as shown in the Fourier Theorem. The Fourier series
coefficients are the coordinates of this function with respect to the harmonically
related complex exponentials.

Remark 6.7: When we set λ = jkω0 , the exponential input of Equation


(6.55) becomes x(t) = eλt , and the corresponding eigenvalue becomes the
transfer function of an LTI system, defined in Section 5.4.3.

Exercise 6.9: Find an eigenvalue and impulse response of the following con-
tinuous time system.

y(t) = x(t − 2). (6.58)

Solution: Since this is an LTI system, an eigenfunction of this system is the


complex exponential input:

x(t) = ejω0 t . (6.59)

Inserting this input into the system equation, we get

y(t) = ejω0 (t−2) = ejω0 t e−2jω0 = H(jω0 )ejω0 t (6.60)


The multiplicative term in the above equation,

H(jω0 ) = e−2jω0 (6.61)


is the eigenvalue of the LTI system.
The impulse response can be easily obtained from the convolution equation,

y(t) = x(t) ∗ h(t) = x(t − 2). (6.62)


Thus, the impulse response is h(t) = δ(t − 2).
Indeed, we have
H(jω0) = ∫_{−∞}^{∞} δ(τ − 2) e^{−jω0τ} dτ = e^{−2jω0}.    (6.63)

Note that in the above example, each H(jkω0) is an eigenvalue of the LTI system when the input is the eigenfunction x(t) = e^{jkω0t}.

6.8. Convergence of the Fourier Series and Gibbs Phenomenon
In theory, the Fourier series representation of a function has infinitely many
terms under the summation, each of which corresponds to a coordinate ak
multiplied by complex exponential harmonics. For some functions, such as
x(t) = cos ω0 t or x(t) = sin ω0 t, only finitely many coefficients are nonzero.
Thus, we avoid infinite summations.
Unfortunately, most of the signals have infinitely many spectral coefficients.
In order to find the Fourier series representation of these types of functions, we
use some approximation techniques. A practical way is to truncate the series
after a relatively high value of k = N and compute an approximation of the
Fourier series representation:
x_N(t) = ∑_{k=−N}^{N} a_k e^{jkω0t}.    (6.64)

As we increase the number of terms, N, the function gets better approximated by the series sum.

Figure 6.13: Gibbs phenomenon. From left to right, we approximate a square pulse signal (shown in gray) using N = 1, N = 2, and N = 50 harmonics. Although the Fourier theorem shows that the series representation perfectly represents any function satisfying the Dirichlet conditions, even for very large sums the approximated function oscillates around the discontinuities.

The error between the theoretical and practical computation can be defined as

e_N(t) = |x(t) − x_N(t)|.    (6.65)


Thus, when we compute the spectral parameters ak , we need to pay attention
to reduce the energy of the error,
Z ∞
EN = |eN (t)|2 dt (6.66)
−∞
as much as possible.
We expect that as we increase N, the energy of the error gets smaller. However, the Fourier series exhibits a peculiar behavior while approximating functions that have discontinuities. As we increase the number of terms, the width of the ripples around the discontinuities decreases, but their peak amplitude converges to a constant overshoot around the discontinuities. This behavior is known as the Gibbs phenomenon, discovered by Henry Wilbraham and J. Willard Gibbs. An example is illustrated in Figure 6.13.

INTERACTIVE: Explore the Gibbs phenomenon @ https://384book.net/i0602

6.9. Properties of Fourier Series for Continuous-Time Functions

Fourier series representation of periodic functions in Hilbert space, called the frequency domain, has a wide range of interesting properties. These
properties not only link the time domain and frequency domain, but they also
enable us to solve many problems, which are not mathematically tractable in
time domain. Also, the frequency domain representation of signals enables us
to observe properties of signals and systems, which are not possible to observe
in the time domain.
The most crucial characteristic of the relationship between the time and frequency domains is that the mapping between the representations of a function in these two separate domains is one-to-one and onto,

x(t) ←→ ak . (6.67)
In other words, if a periodic function, x(t), satisfies the Dirichlet conditions,
we can uniquely obtain its Fourier series representation by finding the spectral
coefficients using the analysis equation of Fourier Theorem:

a_k = (1/T) ∫_T x(t) e^{−jkω0t} dt.    (6.68)
Equivalently, if the spectral coefficients, {ak }, ∀k, are given, we can ob-
tain the time domain representation of the function, x(t), uniquely, using the
synthesis equation:

x(t) = ∑_{k=−∞}^{∞} a_k e^{jkω0t}.    (6.69)

Let us briefly outline the relationships among the time and frequency do-
main representation of functions and their properties.

Linearity Property. Finding the Fourier series representation of a function,


x(t), is a linear operation. Mathematically speaking, given the time and fre-
quency domain representations of two signals,

x(t) ←→ ak and y(t) ←→ bk . (6.70)


The following superposition property holds in time and frequency domains:

Ax(t) + By(t) ←→ Aak + Bbk . (6.71)


Linearity property follows from the linearity of the integration operation, which
defines the analysis and synthesis equations.

Time Shifting Property. A time shift of the function x(t) by the amount t0 is
equivalent to multiplication of its spectral coefficients by the complex
exponential e^{−jkω0t0}. Mathematically, if

        x(t) ←→ ak   and   y(t) = x(t − t0) ←→ bk,                   (6.72)

then

        bk = e^{−jkω0t0} ak.                                         (6.73)

The time shifting property can be easily shown by inserting the shifted signal
into the analysis equation:

        y(t) ←→ bk = (1/T) ∫_T x(t − t0) e^{−jkω0t} dt.              (6.74)

Let us replace the dummy variable of the integral by t′ = t − t0. Then, the
above analysis equation becomes

        bk = (1/T) ∫_T x(t′) e^{−jkω0(t0+t′)} dt′
           = e^{−jkω0t0} (1/T) ∫_T x(t′) e^{−jkω0t′} dt′ = e^{−jkω0t0} ak.   (6.75)

Time Scaling Property. Scaling the time axis of a function, x(t), does not
change its spectral coefficients, ak, but the fundamental frequency of the
series is scaled to αω0.
    Formally speaking, the Fourier series representation of x(αt) is defined as

        x(αt) = Σ_{k=−∞}^{∞} ak e^{jk(αω0)t}.                        (6.76)

The above synthesis equation shows that the spectral coefficients of the
scaled function x(αt) are the same as the spectral coefficients of the signal
x(t). However, the angular frequency is scaled by α. Hence, the fundamental
period of x(αt) is T/α = 2π/(αω0).

Time Reversal Property. Reversing the time axis of a function x(t) is
equivalent to taking the time-scaling parameter α = −1. Hence,

        x(−t) = Σ_{k=−∞}^{∞} ak e^{−jkω0t}.                          (6.77)

This equation reverses the order of the harmonics of the spectral
coefficients. Mathematically speaking, if

        x(t) ←→ ak   and   y(t) = x(−t) ←→ bk,                       (6.78)

then

        bk = a−k.                                                    (6.79)

Convolution Property. Convolution in the time domain corresponds to
multiplication in the frequency domain.
    Given two functions and their corresponding spectral coefficients,

        x(t) ←→ ak   and   y(t) ←→ bk,                               (6.80)

with period T, the convolution x(t) ∗ y(t) has the following spectral
coefficients:

        x(t) ∗ y(t) ←→ ck = (1/T) ∫ (x(t) ∗ y(t)) e^{−jkω0t} dt
                          = (1/T) ∫ ∫ x(τ) y(t − τ) e^{−jkω0t} dτ dt.   (6.81)

Changing the dummy variable of the integral to t′ = t − τ, we obtain

        ck = (1/T) ∫ x(τ) e^{−jkω0τ} dτ  ∫ y(t′) e^{−jkω0t′} dt′.       (6.82)

Above, the first and second integrals are equivalent to T ak and T bk,
respectively. Hence, the spectral coefficients of the convolution of two
periodic signals, x(t) ∗ y(t), are the product of their spectral coefficients,
scaled by the fundamental period:

        x(t) ∗ y(t) ←→ T ak bk.                                      (6.83)

Multiplication Property. Multiplication of two functions in the time domain
is equivalent to convolution of their spectral coefficients in the frequency
domain.
    Mathematically speaking, given two functions and their corresponding
spectral coefficients,

        x(t) ←→ ak   and   y(t) ←→ bk,                               (6.84)

with period T, the multiplication of the signals x(t) and y(t) can be written
in terms of their Fourier series representations as

        x(t) y(t) = ( Σ_{k=−∞}^{∞} ak e^{j(2π/T)kt} ) ( Σ_{l=−∞}^{∞} bl e^{j(2π/T)lt} ).   (6.85)

Arranging the summations and changing the dummy variable l = m − k gives

        x(t) y(t) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} ak bl e^{j2π(k+l)t/T}
                  = Σ_{m=−∞}^{∞} ( Σ_{k=−∞}^{∞} ak bm−k ) e^{j2πmt/T}.   (6.86)

Therefore, the multiplication of the signals x(t) y(t) in the time domain
corresponds to the convolution of their spectral coefficients, ak and bk:

        z(t) = x(t) y(t) ←→ ck = Σ_{∀m} am bk−m.                     (6.87)

Considering the fact that the spectral coefficients, ak vs. k and bk vs. k,
are discrete functions, the operation

        ak ∗ bk = Σ_{∀m} am bk−m                                     (6.88)

is the discrete convolution of ak and bk.
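The multiplication property is easy to verify numerically. In the following
sketch (the test signals and the truncation limit K are chosen arbitrarily for
illustration), the spectral coefficients of x(t), y(t), and x(t)y(t) are
estimated by Riemann sums, and the coefficients of the product are compared
with the discrete convolution of ak and bk.

    import numpy as np

    T = 2 * np.pi
    w0 = 1.0
    t = np.linspace(0, T, 8192, endpoint=False)
    dt = t[1] - t[0]

    x = np.cos(t) + 0.5 * np.sin(2 * t)     # nonzero a_{±1}, a_{±2}
    y = 1.0 + np.cos(3 * t)                 # nonzero b_0, b_{±3}

    def coeffs(sig, K):
        return np.array([np.sum(sig * np.exp(-1j * k * w0 * t)) * dt / T
                         for k in range(-K, K + 1)])

    K = 8
    a, b, c = coeffs(x, K), coeffs(y, K), coeffs(x * y, K)

    # Discrete convolution of the coefficient sequences; keep the central 2K+1 terms.
    conv = np.convolve(a, b)[K:3 * K + 1]
    print(np.max(np.abs(c - conv)))   # ~1e-15: c_k = sum_m a_m b_{k-m}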

INTERACTIVE — An example on the duality of convolution and multiplication @
https://384book.net/i0603

Conjugate Symmetry. When x(t) is a complex function, the spectral coeffi-


cients of its complex conjugate, x∗ (t), satisfy the conjugate symmetry property.
Mathematically speaking,

If x(t) ←→ ak , then x∗ (t) ←→ a∗−k . (6.89)


This property directly follows from the Fourier series synthesis equation.

Parseval's Equality. The energy of a signal is preserved between the time and
frequency domains. Mathematically speaking,

        (1/T) ∫_T |x(t)|² dt = Σ_{∀k} |ak|².                         (6.90)

We can show Parseval's equality by inserting the synthesis equation into the
left-hand side of the above equation:

        (1/T) ∫_T |x(t)|² dt = (1/T) ∫_T x(t) x∗(t) dt
                             = (1/T) ∫_T Σ_{k=−∞}^{∞} ak e^{jkω0t} Σ_{l=−∞}^{∞} a∗l e^{−jlω0t} dt.   (6.91)

Recall that harmonically related complex exponential functions are orthogonal:

        ∫_0^T e^{jkω0t} e^{−jlω0t} dt = { T,  for l = k,
                                          0,  otherwise.             (6.92)

Using this fact and arranging the right-hand side of Equation (6.91), we obtain

        (1/T) ∫_T |x(t)|² dt = (1/T) ∫_T Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} ak a∗l e^{j(k−l)ω0t} dt
                             = Σ_{∀k} |ak|².                         (6.93)

Parseval’s equality reveals that the representation of signals in Hilbert space


conserves the energy of the time domain.
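Parseval's equality can also be checked numerically for a simple periodic
signal. The sketch below (the test signal is chosen arbitrarily for
illustration) compares the average power computed in the time domain with the
sum of |ak|² computed from the analysis equation.

    import numpy as np

    T = 2 * np.pi
    w0 = 1.0
    t = np.linspace(0, T, 8192, endpoint=False)
    dt = t[1] - t[0]
    x = 1.0 + np.cos(t) + 0.5 * np.sin(4 * t)

    # Left-hand side: average power over one period in the time domain.
    power_time = np.sum(np.abs(x) ** 2) * dt / T

    # Right-hand side: sum of |a_k|^2 over the (few) nonzero harmonics.
    K = 8
    a = np.array([np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T for k in range(-K, K + 1)])
    power_freq = np.sum(np.abs(a) ** 2)

    print(power_time, power_freq)   # both ≈ 1 + 0.5 + 0.125 = 1.625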

Differentiation Property. The derivative of a function in the time domain
corresponds to the multiplication of its spectral coefficients by (jkω0).
Mathematically speaking,

        dx(t)/dt ←→ (jkω0) ak.                                       (6.94)

We can derive the differentiation property by taking the derivative of both
sides of the synthesis equation with respect to t:

        dx(t)/dt = Σ_{k=−∞}^{∞} (jkω0) ak e^{jkω0t}.                 (6.95)

Notice that the differentiation operation in the time domain corresponds to a
multiplication operation in the frequency domain.
There are many interesting properties of the Fourier series representation
other than the ones summarized above. For further studies, the reader is re-
ferred to the book, “Fourier Analysis” by Eric Stade (Wiley, 2005). We provide
a short list of properties in Table 6.1.
We also provide some popular continuous-time periodic functions and their
spectral coefficients in Table 6.2. The reader is encouraged to derive the spectral
coefficients given in the table, using the analysis equation.

Table 6.1: Summary of the continuous-time Fourier series properties.

Periodic signal  ←→  Fourier series coefficients
x(t) is periodic with fundamental period T0  ←→  ak
y(t) is periodic with fundamental period T0  ←→  bk

Synthesis equation:  x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t},  where ω0 = 2π/T0
Analysis equation:   ak = (1/T0) ∫_{T0} x(t) e^{−jkω0t} dt

Ax(t) + By(t)  ←→  A ak + B bk
x(t − t0)  ←→  ak e^{−jkω0t0}
e^{jMω0t} x(t)  ←→  ak−M
x∗(t)  ←→  a∗−k
x(−t)  ←→  a−k
x(αt), α > 0 (periodic with fundamental period T0/α)  ←→  ak
x(t) ∗ y(t)  ←→  T0 ak bk
x(t) y(t)  ←→  Σ_{l=−∞}^{∞} al bk−l
d x(t)/dt  ←→  jkω0 ak
∫_{−∞}^{t} x(τ) dτ (bounded and periodic only if a0 = 0)  ←→  (1/(jkω0)) ak

For real-valued x(t):  ak = a∗−k,  Re{ak} = Re{a−k},  Im{ak} = −Im{a−k},
                       |ak| = |a−k|,  ∢ak = −∢a−k

Even part of x(t)  ←→  Re{ak}
Odd part of x(t)  ←→  j Im{ak}

ak e^{jkω0t} + a−k e^{−jkω0t} = 2 Re{ak} cos(kω0t) − 2 Im{ak} sin(kω0t)

Parseval's relation:  (1/T0) ∫_{T0} |x(t)|² dt = Σ_{k=−∞}^{∞} |ak|²

Table 6.2: Some popular continuous-time periodic signals and their spectral
coefficients.

Periodic signal  ←→  ak or Fourier series expansion

x(t) = Σ_{n=−∞}^{∞} δ(t − nT)  ←→  ak = 1/T for all k

x(t) = 1  ←→  a0 = 1, ak = 0 for all other k, ∀T0 > 0

x(t) = e^{jω0t}  ←→  a1 = 1, ak = 0 for all other k

x(t) = cos(ω0t)  ←→  a1 = a−1 = 1/2, ak = 0 for all other k

x(t) = sin(ω0t)  ←→  a1 = −a−1 = 1/(2j), ak = 0 for all other k

x(t) = 1 for |t| < T1, 0 for T1 < |t| ≤ T0/2 (period T0)
        ←→  ak = sin(kω0T1)/(kπ) for k ≠ 0,  a0 = ω0T1/π = 2T1/T0

x(t) = 1 for 0 < t < π, −1 for −π < t < 0
        ←→  (4/π) [ sin t + (sin 3t)/3 + (sin 5t)/5 + ... ]

x(t) = t for 0 < t < π, −t for −π < t < 0
        ←→  π/2 − (4/π) [ (cos t)/1² + (cos 3t)/3² + (cos 5t)/5² + ... ]

x(t) = t, −π < t < π
        ←→  2 [ sin t − (sin 2t)/2 + (sin 3t)/3 − (sin 4t)/4 + ... ]

x(t) = t, 0 < t < 2π
        ←→  π − 2 [ sin t + (sin 2t)/2 + (sin 3t)/3 + ... ]

x(t) = |sin t|, −π < t < π
        ←→  2/π − (4/π) [ (cos 2t)/(1·3) + (cos 4t)/(3·5) + (cos 6t)/(5·7) + ... ]

x(t) = 0 for 0 < t < π − α, 1 for π − α < t < π + α, 0 for π + α < t < 2π
        ←→  α/π − (2/π) [ sin α cos t − (sin 2α cos 2t)/2 + (sin 3α cos 3t)/3 − ... ]

Let us now solve some exercises to demonstrate the power of the properties of
the Fourier series representation.

Figure 6.14: A continuous-time impulse train, x(t) = Σ_{k=−∞}^{∞} δ(t − kT),
has spectral coefficients, ak, which form a discrete impulse train, scaled by
the period T.

Exercise 6.10: Find the spectral coefficients of the following impulse train,

        x(t) = Σ_{k=−∞}^{∞} δ(t − kT),                               (6.96)

where T = 2π/ω0 is the fundamental period.

Solution: Apply the analysis equation over one full period of the signal x(t):

        ak = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jkω0t} dt = 1/T.          (6.97)

The signal and its spectral coefficients are plotted in Figure 6.14.

Remark 6.8: The Fourier series representation of a continuous-time func-


tion, x(t), is a discrete function, ak vs. k. Interestingly, a continuous-time
impulse train in the time domain is a discrete impulse train in the frequency
domain (Figure 6.14). As the period of the impulse train increases in the time
domain, the amplitude of the spectral coefficients decreases in the frequency
domain.

Exercise 6.11: Find the spectral coefficients of the derivative of the square
wave given below:

        g(t) = { 1,  if |t| < T1,
                 0,  if T1 ≤ |t| ≤ T − T1,                           (6.98)

where g(t) = g(t + T).

Figure 6.15: Square wave of width 2T1 and period T, and its derivative.

Solution: The plot of g(t) and its derivative are given in Figure 6.15. The
derivative q(t) of the square wave is an impulse train with alternating sign, at
the discontinuities of g(t), as follows:

        q(t) = Σ_{k=−∞}^{∞} δ(t − kT + T1) − Σ_{k=−∞}^{∞} δ(t − kT − T1).   (6.99)

From the previous example, we know that the spectral coefficients of

        x(t) = Σ_{k=−∞}^{∞} δ(t − kT)                                (6.100)

are ak = 1/T. Using the time shifting and linearity properties, we obtain the
spectral coefficients of q(t) as follows:

        q(t) ←→ bk = e^{jkω0T1} ak − e^{−jkω0T1} ak,                 (6.101)

        bk = (2j/T) sin(kω0T1).                                      (6.102)

In fact, if we use the differentiation property, we can obtain the spectral
coefficients of g(t) from those of q(t). Let g(t) ←→ ck. Using the
differentiation property, we have bk = jkω0 ck. Hence, the ck are

        ck = { sin(kω0T1)/(kπ),            k ≠ 0,
               (1/T) ∫_T g(t) dt = 2T1/T,  k = 0.                    (6.103)

6.10. Trigonometric Fourier Series for Continuous-Time Functions
Instead of using the harmonically related complex exponentials as the basis
functions of Hilbert space, we can define a Fourier series representation of a
periodic signal, x(t), using sine and cosine basis functions. This equivalent
representation is called the Trigonometric Fourier Series and can be obtained
by using the Euler formula,

ejω0 t = cos ω0 t + j sin ω0 t. (6.104)

Theorem 6.2: A continuous-time periodic function with period T, satisfying
the Dirichlet conditions, can be represented by the following trigonometric
Fourier series:

        x(t) = a0 + Σ_{k=1}^{∞} (Bk cos(kω0t) + Ck sin(kω0t)),       (6.105)

where the average term is

        a0 = (1/T) ∫_T x(t) dt                                       (6.106)

and the trigonometric coefficients are

        Bk = (2/T) ∫_T x(t) cos(kω0t) dt                             (6.107)

and

        Ck = (2/T) ∫_T x(t) sin(kω0t) dt.                            (6.108)

The relationship between the spectral coefficients and the trigonometric
coefficients is given by

        ak = (1/2)(Bk − jCk)  and  a−k = (1/2)(Bk + jCk),  ∀k ≥ 1.   (6.109)
Proof sketch: In the trigonometric Fourier series, replace

        sin(kω0t) = (e^{jkω0t} − e^{−jkω0t}) / (2j)   and
        cos(kω0t) = (e^{jkω0t} + e^{−jkω0t}) / 2                     (6.110)

to obtain

        x(t) = a0 + Σ_{k=1}^{∞} ( (Bk/2)(e^{jkω0t} + e^{−jkω0t})
                                + (Ck/2j)(e^{jkω0t} − e^{−jkω0t}) ).   (6.111)

Then, insert ak = (1/2)(Bk − jCk) and a−k = (1/2)(Bk + jCk) in the synthesis
equation,

        x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}
             = a0 + (1/2) Σ_{k=1}^{∞} ( (Bk − jCk) e^{jkω0t} + (Bk + jCk) e^{−jkω0t} ),   (6.112)

to show that

        x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0t}
             = a0 + Σ_{k=1}^{∞} (Bk cos(kω0t) + Ck sin(kω0t)).       (6.113)

Figure 6.16: Plot of x(t) = e^{−t/2} with period T = π.

Exercise 6.12: Find the trigonometric Fourier series representation of
x(t) = e^{−t/2}, 0 < t < π, where T = π and ω0 = 2π/T = 2. The plot of this
signal is given in Figure 6.16.

Solution: The average term is

        a0 = (1/π) ∫_0^π e^{−t/2} dt = 2/π − 2/(π e^{π/2}) ≈ 0.504.  (6.114)

Figure 6.17: Plot of the trigonometric coefficients, Bk and Ck, for
x(t) = e^{−t/2}, 0 < t < π, with period T = π.

The trigonometric coefficients are

        Bk = (2/π) ∫_0^π e^{−t/2} cos(2kt) dt = (2 × 0.504) / (1 + 16k²),   (6.115)

        Ck = (2/π) ∫_0^π e^{−t/2} sin(2kt) dt = 0.504 · 8k / (1 + 16k²).    (6.116)
Figure 6.17 presents the plots of Bk and Ck .
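The closed-form expressions above can be verified numerically. A minimal
sketch using scipy.integrate.quad (an illustrative check, assuming SciPy is
available; it is not a listing from the text) evaluates a0, Bk, and Ck
directly from the defining integrals and compares them with the closed forms.

    import numpy as np
    from scipy.integrate import quad

    # Check the closed-form B_k and C_k for x(t) = e^{-t/2}, 0 < t < pi, T = pi.
    T = np.pi
    w0 = 2.0
    a0 = (1 / T) * quad(lambda t: np.exp(-t / 2), 0, T)[0]
    print(a0)                                            # ≈ 0.504

    for k in range(1, 4):
        Bk = (2 / T) * quad(lambda t: np.exp(-t / 2) * np.cos(k * w0 * t), 0, T)[0]
        Ck = (2 / T) * quad(lambda t: np.exp(-t / 2) * np.sin(k * w0 * t), 0, T)[0]
        print(k, Bk, 2 * a0 / (1 + 16 * k**2),           # closed form for B_k
                 Ck, 8 * k * a0 / (1 + 16 * k**2))       # closed form for C_k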

INTERACTIVE — Explore trigonometric waveforms @ https://384book.net/i0604

6.11. Trigonometric Fourier Series for Continuous-Time Even and Odd Functions
The trigonometric Fourier series has an interesting property: it decomposes
the signal into its even and odd parts.
    The basis functions of the trigonometric Fourier series consist of
cos(kω0t), which are even functions, and sin(kω0t), which are odd functions.
When x(t) is an even function, all of the Ck become zero. Similarly, when x(t)
is odd, all of the Bk are zero. Thus, the trigonometric Fourier series nicely
decomposes the function, x(t), into its even and odd parts,

        x(t) = a0 + Σ_{k=1}^{∞} ( Bk cos(kω0t) + Ck sin(kω0t) ),     (6.117)
                                  [even part]     [odd part]

where Bk represents the trigonometric coefficients of the even part and Ck
represents the trigonometric coefficients of the odd part of the function x(t).

Figure 6.18: A periodic function, called a square wave, which repeats itself
at every period, T.

Exercise 6.13: Find the trigonometric Fourier series representation of the
following square wave signal:

        x(t) = { 1,  if |t| < T1,
                 0,  if T1 ≤ |t| ≤ T − T1,                           (6.118)

where x(t) = x(t + T) is periodic. Find and plot the Fourier series
coefficients.

Solution: Note that this function is even. Thus, the coefficients
corresponding to the odd part are Ck = 0 for all k.
The coefficients corresponding to the even part can be computed from the
following equation:

        Bk = (2/T) ∫_{−T1}^{T1} cos(kω0t) dt = 4 sin(kω0T1) / (kω0T).   (6.119)

Exercise 6.14: Find the trigonometric Fourier series representation of the
triangle wave given below:

        x(t) = { −t + 1,  if 0 < t < 2,
                  t + 1,  if −2 ≤ t ≤ 0,                             (6.120)

where x(t) = x(t + T) is periodic, with T = 4. Find the Fourier series
coefficients.

Solution: This is another even function. Hence, the odd part of the
trigonometric Fourier series is zero and the even part is

        Bk = (2/T) ∫_{−T/2}^{T/2} x(t) cos(kω0t) dt
           = (1/2) [ ∫_{−2}^{0} (t + 1) cos(kω0t) dt + ∫_{0}^{2} (−t + 1) cos(kω0t) dt ].   (6.121)

Using integration by parts and replacing the angular frequency
ω0 = 2π/T = π/2, we obtain

        Bk = 2 sin(πk)/(πk) + (8 sin²(πk/2) − πk sin(πk))/(π²k²)
           = 8 sin²(πk/2)/(π²k²).                                    (6.122)

The coefficients Bk can be further simplified by considering the even and odd
values of k as follows:

        Bk = { 8/(π²k²),  for k odd,
               0,         for k even.                                (6.123)

The average value, a0, is 0. Hence, the trigonometric Fourier series
representation of this function is

        x(t) = Σ_{k=1}^{∞} ( 8 sin²(πk/2)/(π²k²) ) cos(kπt/2).       (6.124)

Exercise 6.15: Find the trigonometric Fourier series representation of the
sawtooth wave given below:

        x(t) = t   for −1 < t < 1,                                   (6.125)

where x(t) = x(t + T) is periodic, with T = 2. Find the Fourier representation
of this function.

Solution: This is an odd function. Hence, the even part of the trigonometric
Fourier series is zero and the odd part is

        Ck = (2/T) ∫_{−T/2}^{T/2} x(t) sin(kω0t) dt = ∫_{−1}^{1} t sin(kω0t) dt.   (6.126)

Using integration by parts, we obtain

        Ck = 2 (sin(πk) − πk cos(πk)) / (π²k²) = −(2/(πk)) (−1)^k.   (6.127)

The average value is a0 = 0 and the angular frequency is ω0 = 2π/2 = π.
Hence, the trigonometric Fourier series representation of this function is

        x(t) = −(2/π) Σ_{k=1}^{∞} ((−1)^k / k) sin(kπt).             (6.128)

Exercise 6.16: Consider the following continuous-time LTI system, represented
by its impulse response h(t):

        x(t) → [ h(t) ] → y(t)

a) Find the impulse response of this system, if the unit step response is
   s(t) = e^{−2t} u(t).
b) Suppose the input to this system is x(t) = cos(πt) + sin(2πt). Find the
   spectral coefficients of x(t).
c) Find and plot the spectral coefficients of the output y(t) for the input
   given in part (b).

Solution:
a) The impulse response is the derivative of the unit step response,

        h(t) = δ(t) − 2e^{−2t} u(t).                                 (6.129)


b) We can split x(t) as x(t) = x1 (t) + x2 (t) and find the spectral coefficients
of each component:
• For x1 (t), the angular frequency is ω0 = π and the period is T = 2.
The spectral coefficients are a1 = a−1 = 1/2.
• For x2 (t), the angular frequency is ω0 = 2π and the period is T = 1.
The spectral coefficients are a2 = 1/(2j), a−2 = −1/(2j).
Using the linearity property of Fourier series, we obtain the spectral co-
efficients of x(t), as follows;
1 1
a1 = a−1 = , a2 = −a−2 = . (6.130)
2 2j
c) Recall that when the input is an eigenfunction of the LTI system, the
   corresponding output is

        y(t) = H(jω0) e^{jω0t},                                      (6.131)

   where the eigenvalue of the system is

        H(jω0) = ∫_{−∞}^{∞} h(t) e^{−jω0t} dt.                       (6.132)

   For the impulse response obtained in part (a), h(t) = δ(t) − 2e^{−2t}u(t),
   the eigenvalue of the system is

        H(jω0) = 1 − 2 ∫_0^∞ e^{−(2+jω0)t} dt = 1 − 2/(2 + jω0) = jω0/(2 + jω0).   (6.133)

   Recall, also, that the output y(t) can be represented by the Fourier series
   and the convolution integral as follows:

        y(t) = Σ_{k=−∞}^{∞} bk e^{jkω0t} = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ.   (6.134)

   In the above equation, replace x(τ) by its Fourier series representation,

        x(τ) = Σ_{k=−∞}^{∞} ak e^{jkω0τ},                            (6.135)

   and rearrange to obtain the spectral coefficients of y(t) as follows:

        bk = ak H(jkω0).                                             (6.136)

   Since the fundamental frequency of the input signal is ω0 = π, we obtain
   the spectral coefficients of the output as follows:

        bk = ak · jkπ/(2 + jkπ).                                     (6.137)

   Inserting the values of the spectral coefficients, ak, found in the
   previous part into the above equation, we obtain

        b1 = jπ/(2(2 + jπ)),    b−1 = −jπ/(2(2 − jπ)),
        b2 = π/(2 + j2π),       b−2 = π/(2 − j2π).                   (6.138)

6.12. Chapter Summary


Harmony is an essential concept in natural and man-made systems. A very
powerful mathematical object for representing harmony is the family of
harmonically related complex exponential functions, Φk(t) = e^{jkω0t}, for
continuous-time signals, where each integer k = 0, ±1, ±2, ... defines a
harmonic.
Harmonically related exponentials bear very interesting properties. First of
all, they are periodic functions with angular frequencies, kω0 . As k increases
they represent periodic motion on a unit circle of the complex plane with
higher speeds. Secondly, they are orthogonal to each other. Thus, they span
an infinite-dimensional function space, called Hilbert space, where each pe-
riodic function is uniquely represented by a set of coordinates {ak }, called the
spectral coefficients, provided that the function satisfies the Dirichlet condi-
tions. The representation of a signal in Hilbert space is called Fourier series,
named after J. B. Fourier, a revolutionary French politician and mathemati-
cian. Fourier series representation enables us to decompose a periodic signal
into its harmonically related frequency components. Since a periodic function
is represented in terms of its frequency content, we call this specific
Hilbert space the frequency domain.
    Finally, harmonically related complex exponentials are the eigenfunctions
of an LTI system. In other words, when we feed a harmonic of the complex
exponential to the input of an LTI system, the output is just a scaled version
of the input. This scale factor is called the eigenvalue or the transfer
function of the system, and it uniquely describes an LTI system in the
frequency domain.

Problems
1. Represent the following signals in terms of the superposition of complex
exponentials:
a) x(t) = 1 + sin(πt).
b) x(t) = cos(πt + π2 ).
c) x(t) = 1 + sin(πt) + cos(πt/10).
2. Does the following function satisfy the Dirichlet conditions?

        x(t) = { |t|,  for −1 ≤ t ≤ 1,
                 0,    otherwise.

3. Let x(t) be a continuous-time square wave signal with a fundamental


period T = 6 seconds, represented by the following analytical expression
in one full period:

        x(t) = { 1,  |t| < 2,
                 0,  2 ≤ |t| ≤ 4.

a) Find and plot the Fourier series coefficients for x(t).


b) Find and plot the Fourier series coefficients for dx/dt.
c) Find the Trigonometric Fourier series representation of x(t).

4. Show that the inner product between two harmonically related complex
   exponential functions satisfies the following equation:

        < e^{jnω0t}, e^{jkω0t} > = ∫_0^T e^{jnω0t} e^{−jkω0t} dt = { T,  for n = k,
                                                                     0,  for n ≠ k.

5. Show that the functions x1 = ejω0 t and x2 = e2jω0 t are orthogonal to each
other.

6. Let x(t) be a continuous-time signal, represented by the following func-


tion:

 
        x(t) = cos(πt/3) + 2 cos(πt + π/2).

a) Find the fundamental period of this signal.


b) Find and plot the Fourier series coefficients for x(t).

c) Find the Trigonometric Fourier series representation of x(t).

7. Let x(t) be a continuous-time signal, represented by the following func-


tion:

 
        x(t) = 1 + sin(πt/2) + 4 sin(πt).

a) Find the fundamental frequency ω0 of this signal.


b) Find and plot the nonzero Fourier series coefficients for x(t).
c) Find the Trigonometric Fourier series representation of x(t).

8. The eigenvalues of a continuous-time LTI system are given by the following
   equation:

        H(jkω0) = cos(2kω0) / (kω0).
Let us define the input signal x(t) by the following function in one full
period;

(
1 0≤t<3
x(t) =
−1 3 ≤ t < 6,

where the fundamental period is T = 6.


a) Find and plot the Fourier series coefficients of the input signal x(t).
b) Find and plot the Fourier series coefficients of the output y(t).
c) Find and plot the output signal y(t), approximately.

9. The eigenvalues of a continuous-time LTI system are represented by the
   following equation:

        H(jkω0) = { 1,  |ω| < 50,
                    0,  |ω| ≥ 50.

   Suppose that a periodic input signal x(t) ←→ ak with fundamental period
   T = π/3 is fed to the LTI system. If the output is the same as the input,
   i.e., y(t) = x(t), what are the Fourier series coefficients of the input
   and output?

10. Find the eigenvalues of the LTI systems, described by the following input-
output pairs:
a) x(t) = ejt , y(t) = je5jt
b) x(t) = ejπt , y(t) = ejπt−2π .

c) x(t) = e^{j3t}, y(t) = cos(3t)/j + sin(3t).

11. Find the eigenvalue of the following differential equation, when the input
is the eigenfunction, x(t) = ejkω0 t :

ÿ(t) + 6ẏ(t) + 9y(t) = x(t).

12. Find the spectral coefficients of the signal given below.


   [Plot of a periodic signal x(t), with amplitude between −2 and 2, shown for
   −4 ≤ t ≤ 4.]

13. Find the spectral coefficients of the function given below.


   [Plot of a periodic signal x(t), with amplitude between −2 and 2, shown for
   −6 ≤ t ≤ 6.]

14. Find the spectral coefficients of the function given below.

   [Plot of a periodic signal x(t), with amplitude between −1 and 3, shown for
   −6 ≤ t ≤ 6.]

15. Determine the periodic continuous-time signal x(t) with a period T = 4


whose Fourier series coefficients of x(t) are as follows:

        αk = { k · j,  k ≤ 2,
               0,      otherwise.

16. Determine the periodic continuous-time signal x(t) with a period T = 6


whose Fourier series coefficients of x(t) are as follows:

        αk = { k · j,  −2 ≤ k ≤ 2,
               0,      otherwise.

17. Determine the periodic continuous-time signal x(t) with a period T = 4


whose Fourier series coefficients of x(t) are as follows:

        αk = { −1,  k is even,
                1,  k is odd.

18. Consider a continuous-time periodic signal given in one full period by

        x(t) = { 2t,     0 ≤ t ≤ 2,
                 4 − t,  2 < t ≤ 4,

    where the fundamental period is T = 4.

    a) Find the spectral coefficients, ak, of this function.
    b) Find the spectral coefficients of dx/dt using the differentiation
       property.
19. Consider the following periodic continuous-time signals:

        x1(t) = cos(πt/2),
        x2(t) = sin(πt/2),
        x3(t) = x1(t) x2(t).

a) Find the spectral coefficients of x1 (t), x2 (t) and x3 (t).


b) Find the spectral coefficients of x3 (t) using the multiplication prop-
erty and compare your results with what you found in part a).

20. Consider a continuous-time LTI system whose unit step response is

s(t) = e−3t u(t).

a) Find the impulse response h(t) of the system.


b) Find and plot the spectral coefficients of the input x(t) = 2 cos(2πt)−
sin(πt).
c) Find and plot the spectral coefficients of the output of this system
when the input is x(t) = 2 cos(2πt) − sin(πt).

Chapter 7
Fourier Series Representation of Discrete-Time Periodic Signals

Remember that Fourier series are useful mathematical representations of continuous-


time periodic signals, which enable us to decompose them into their frequency
components. Loosely speaking, Fourier series provides the “amount” of each
harmonic frequency in a signal. These amounts are called spectral coeffi-
cients of the signal.

WATCH — Learn more about the Fourier series and the frequency spectrum @
https://384book.net/v0701

In this chapter, we extend the methods of continuous-time Fourier analysis to
discrete-time signals. We shall represent a discrete-time periodic function in
terms of a weighted summation of the harmonically related discrete-time
complex exponentials, Φk[n] = e^{jkω0n}, where the weights are the spectral
coefficients, {ak}.

7.1. Fourier Series Theorem for Discrete-Time Functions
Although the representation of discrete-time periodic signals in the frequency
domain resembles that of continuous-time signals, there are three major
differences between them:
1. Since the signal x[n] is discrete, the integral operation of the analysis
   equation, which uniquely determines the spectral coefficients, is replaced
   by a summation. Thus, we do not need to evaluate complicated integrals.
2. Furthermore, the limits of the summations in the analysis and synthesis
   equations are finite. Thus, we have no convergence problem, nor do we need
   the Dirichlet conditions.
3. The spectral coefficients of the discrete-time Fourier series of a periodic
   function are always periodic, with the fundamental period of the signal.
Let us formally introduce the Fourier series representation of discrete-time
periodic signals and investigate the above facts.

Theorem 7.1: A discrete-time periodic signal, x[n], with fundamental period
N, can be decomposed into harmonically related complex exponentials,
e^{jkω0n}, weighted by a set of coefficients called the spectral coefficients,
ak, as follows:

        x[n] = Σ_{k=<N>} ak e^{jkω0n}            (Synthesis equation)   (7.1)

where the spectral coefficients are uniquely obtained from

        ak = (1/N) Σ_{n=<N>} x[n] e^{−jkω0n}.    (Analysis equation)    (7.2)

In the above equations, the limit of the summations, <N>, indicates N
successive integers covering one full fundamental period.
    Furthermore, the spectral coefficients, {ak}, are periodic with period N.
In other words, {ak} repeats with ak = ak+N, ∀k ∈ (−∞, ∞).
    Since both the spectral coefficients, {ak}, and the harmonically related
complex exponentials, Φk[n] = e^{jkω0n}, are periodic with the fundamental
period of the signal x[n], the summations of the synthesis and analysis
equations are evaluated over one full range of the fundamental period, N.

Proof sketch: Let us start by showing that the spectral coefficients, {ak }
are periodic, with the fundamental period of the signal, x[n]. Recall that har-
monically related complex exponentials are periodic, that is, for integer value
N = 2π/ω0 , we have Φk [n] = Φk+N [n], which is easy to show:

Φk+N [n] = ej(k+N )ω0 n = ejkω0 n ejN ω0 n = ejkω0 n = Φk [n], (7.3)


since ejN ω0 n = 1 due to the fact that ejN ω0 n = cos(2πn) + j sin(2πn) = 1.
The analysis equation,

        ak = (1/N) Σ_{n=<N>} x[n] e^{−jkω0n},                        (7.4)

is simply a superposition of the harmonically related complex exponentials,


weighted by the periodic function, x[n], where the fundamental period is N .
Since both the signal x[n] and the complex exponentials Φk [n] are periodic,
summation of the multiplications of these functions is also periodic, with the
fundamental period N . Therefore, the spectral coefficients, obtained from the
analysis equation, repeat themselves after the fundamental period N :

ak = ak+N (7.5)
Next, let us show that the coefficients ak obtained from the analysis equa-
tion satisfy the synthesis equation.
In order to show that the synthesis equation can be obtained from the
analysis equation, we multiply both sides of the synthesis equation by Φr [n] =
e−jrω0 n and sum them over one period n =< N >, to obtain the following
equation:
X X X
x[n]e−jrω0 n = ak ejkω0 n e−jrω0 n . (7.6)
n=<N > n=<N > k=<N >

We arrange the summations to get


X X X
x[n]e−jrω0 n = ak ej(k−r)ω0 n . (7.7)
n=<N > k=<N > n=<N >

The second summation on the right-hand side is the inner product of


two harmonically related discrete-time complex exponentials. We visited the
continuous-time case of this inner product in Equation (4) (p. 257). The
discrete-time case is similar:
(
jkω0 n jrω0 n
X
j(k−r)ω0 n N if r = k,
<e ,e >= e = (7.8)
n=<N >
0 otherwise.

The r = k case is trivial. For the r ≠ k case, let c = e^{j(k−r)ω0} and let S
be the sum

        S = c + c² + c³ + ... + c^N.                                 (7.9)

Multiplying both sides above by c, we obtain

        cS = c² + c³ + c⁴ + ... + c^{N+1}.                           (7.10)

Subtracting Equation (7.10) from (7.9) and rearranging, we get

        S = c(1 − c^N)/(1 − c) = e^{j(k−r)ω0} (1 − e^{j(k−r)ω0N}) / (1 − e^{j(k−r)ω0}).   (7.11)

Since ω0 = 2π/N and r and k are integers, e^{j(k−r)ω0N} = e^{j(k−r)2π} = 1,
which yields S = 0. Using this result in Equation (7.7), we obtain
        Σ_{n=<N>} x[n] e^{−jkω0n} = ak N,                            (7.12)

which directly implies the analysis equation,

        ak = (1/N) Σ_{n=<N>} x[n] e^{−jkω0n}.                        (7.13)

Remark 7.1: Although the spectral coefficients {ak } are generated for only
one full period, < N >, we need to keep in mind that they repeat themselves
at every period N for all k ∈ (−∞, ∞):

ak = ak±N . (7.14)

Since the summations of analysis and synthesis equations have finite limits,
there is no concern about the existence of spectral coefficients. This relaxes the
convergence constraints imposed by Dirichlet conditions. Furthermore, period-
icity of the spectral coefficients brings a substantial computational efficiency
in representing discrete-time periodic signals in the frequency domain. This
is not the case for the continuous-time signals, since spectral coefficients are
computed by taking the integral of two periodic functions, x(t) and ejkω0 t ,
which may not result in a periodic function.
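Since the analysis and synthesis equations are finite sums, they can be
implemented directly in a few lines of Python. The sketch below (illustrative
helper functions, not listings from the text) computes the spectral
coefficients of one period of a discrete-time signal, reconstructs the signal
from them, and notes that ak coincides with the DFT of one period divided by N.

    import numpy as np

    def dtfs_coefficients(x):
        """a_k = (1/N) * sum_n x[n] e^{-jk(2pi/N)n}, for k = 0, ..., N-1."""
        N = len(x)
        n = np.arange(N)
        return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N
                         for k in range(N)])

    def dtfs_synthesis(a):
        """x[n] = sum_k a_k e^{jk(2pi/N)n}."""
        N = len(a)
        k = np.arange(N)
        return np.array([np.sum(a * np.exp(2j * np.pi * k * m / N)) for m in range(N)])

    N = 20
    n = np.arange(N)
    x = np.sin(0.1 * np.pi * n)               # periodic with N = 20
    a = dtfs_coefficients(x)
    print(np.allclose(dtfs_synthesis(a), x))  # True: analysis and synthesis are inverses
    print(np.allclose(a, np.fft.fft(x) / N))  # True: a_k equals the DFT scaled by 1/N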

INTERACTIVE — Explore Fourier series representation for discrete-time signals
@ https://384book.net/i0701

7.2. Discrete-Time Fourier Series Representation in Hilbert Space
In the previous chapter, we defined a continuous-time periodic function as
a vector, represented by the coordinates, {ak }, corresponding to the spectral
coefficients, in an infinite dimensional space, called Hilbert space. Recall that
a Hilbert space H is a vector space, where the vectors are functions and the

distance between the functions is defined by the inner product.
An extension of the representation of continuous-time periodic functions
to that of the discrete-time functions is possible by defining a Hilbert space,
spanned by the discrete-time harmonically related complex exponential func-
tions. In this case, the inner product between two discrete-time periodic func-
tions, x[n] and y[n] is defined as
        < x[n], y[n] > = Σ_{n=<N>} x[n] y∗[n],                       (7.15)

where (∗) indicates the complex conjugate operation. This time, the limit of
the summation is finite. Thus, we can represent one period of discrete-time
functions, x[n], y[n] ∈ H, by finite length vectors, x and y:

x = [x[0] x[1] ....... x[N − 1]]T (7.16)

y = [y[0] y[1] ....... y[N − 1]]T , (7.17)


where T represents the vector transpose operation. The entries of the vector
x and y, cover one full period of the function x[n] and y[n], respectively. This
representation enables us to show that the inner product, defined for a Hilbert
space H, is reduced to that of a classical vector space as follows:
        < x[n], y[n] > = Σ_{n=<N>} x[n] y∗[n] = x^T y∗.              (7.18)

Furthermore, we can define an N dimensional basis vector, ejnω0 , which


only covers one full period, N = 2π/ω0 , of the complex exponential function
ϕk [n] = ejkω0 n as follows:

ejnω0 = [1 ejnω0 e2jnω0 .... ej(N −1)nω0 ]T . (7.19)


Finally, we can define the spectral coefficient vector over one period of N ,

a = [a0 a1 .......aN −1 ]T , (7.20)


so that we represent the Fourier series of a discrete-time function as

x = aT ejω0 n . (7.21)
Using the definition of the inner product given above, we can show that
discrete-time harmonically related complex exponential functions are orthogo-
nal to each other. Mathematically, the inner product of two complex exponen-
tial functions with different harmonics is

        < e_{jkω0}, e_{jrω0} > = (e_{jkω0})^T (e_{jrω0})∗ = Σ_{n=<N>} e^{j(k−r)(2π/N)n}
                               = { N,  if r = k,
                                   0,  otherwise.                    (7.22)
Hence, harmonically related discrete-time exponentials form a basis and
they span the Hilbert space of discrete-time periodic functions. Thus, the spec-
tral coefficients, each of which shows the amount of a particular harmonic
frequency, are the coordinates of a discrete-time function in this Hilbert space.

Exercise 7.1: Consider the discrete-time function, x[n] = sin 0.1πn.


a) Plot this signal. Is this a periodic function? If yes, what is the period?
b) Find and plot the spectral coefficients, {ak }, which are the coordinates of
this function in Hilbert space.
c) Comment on the frequency content of this signal analyzing the magnitude
and phase spectrum.

Solution:
a) This signal is periodic, provided that there exists an integer N , satisfying
the following equation:

x[n] = x[n + N ]. (7.23)


The fundamental period of the signal is the smallest integer N , which is
related to the angular frequency, as follows:

        ω0 = 0.1π = (2π/N) m.

The smallest integer value for the fundamental period N is obtained for m = 1,
as N = 2π/(0.1π) = 20. Hence, this signal is periodic! The plot of this signal
is given in Figure 7.1.
b) The spectral coefficients, {ak }, can be easily obtained by applying the
Euler formula:
1 j0.1πn
− e−j0.1πn

x[n] = e (7.24)
2j
1 1
a1 = , a−1 = − . (7.25)
2j 2j
Since the spectral coefficients are periodic with the fundamental period
of the signal (N = 20), we have

Figure 7.1: The plot of x[n] = sin(0.1πn). Its fundamental period is N = 20.

Figure 7.2: Magnitude and phase spectrum of x[n] = sin(0.1πn). Both |ak| vs. k
and ∢ak vs. k are discrete periodic functions, which repeat themselves at
every N = 20.

        a1 = a21 = a41 = ... = akN+1, ∀k,                            (7.26)

and

        a−1 = a19 = a39 = ... = akN−1, ∀k.                           (7.27)

Recall that when the spectral coefficients are complex numbers, we need two
plots:
1) Magnitude spectrum: |a1| = |a−1| = 1/2.
2) Phase spectrum: ∢a1 = −π/2, ∢a−1 = π/2.
Figure 7.2 provides the plots for these spectra.
c) Analyzing the magnitude spectrum (Figure 7.2) for one period, we observe
that, the signal consists of the fundamental frequency corresponding to
kω0 for k = ±1. Similarly, analyzing the phase spectrum, we observe
that there is a phase shift of the signal by the amount of ±π/2 at the

fundamental frequency corresponding to kω0 for k = ±1.

Exercise 7.2: Consider the following signal:

        x[n] = 1 + sin(πn/5) + cos(2πn/5 + π/2).                     (7.28)
a) Find the coordinates of this function, in a Hilbert space spanned by dis-
crete complex exponentials, Φk [n] = ejkω0 n , ∀k, ∀n.
b) Plot the spectral coefficients using the Cartesian coordinates.
c) Plot the spectral coefficients in polar coordinates.
d) Comment on the frequency content of this signal, compared to that of the
previous example.

Solution:
a) The spectral coefficients of a periodic signal correspond to its
   coordinates in Hilbert space. Using the Euler formula for cosines and
   sines, we can directly find the spectral coefficients as follows:

        x[n] = 1 + (1/2j) [e^{j(π/5)n} − e^{−j(π/5)n}]
                 + (1/2) [e^{j(2πn/5 + π/2)} + e^{−j(2πn/5 + π/2)}].   (7.29)

   Arranging the terms, we obtain

        x[n] = 1 + (1/2j) e^{j(π/5)n} − (1/2j) e^{−j(π/5)n}
                 + (1/2) e^{jπ/2} e^{j(2π/5)n} + (1/2) e^{−jπ/2} e^{−j(2π/5)n}.   (7.30)

Thus, the Fourier series coefficients of this function, which are the coor-
dinates in Hilbert space are as below.

        a0 = 1,
        a1 = 1/(2j) = −(1/2)j,
        a−1 = −1/(2j) = (1/2)j,
        a2 = e^{jπ/2}/2 = (1/2)j,
        a−2 = e^{−jπ/2}/2 = −(1/2)j,                                 (7.31)

   with ak = 0 for the other values of k in the interval of summation in the
   synthesis equation.
   For this signal, ω0 = π/5, which implies that the period is N = 10
   (ω0 = 2π/N). Thus, the Fourier coefficients are periodic with period
   N = 10. In other words, ak = ak±N.
b) In the Cartesian coordinate system, we plot the real and imaginary part
of the spectral coefficients. For real parts of the spectral coefficients, we
plot
        Re{ak} = Re{ak+mN} = { 1,  for k = 0, ∀m,
                               0,  otherwise.                        (7.32)

   For the imaginary parts of the spectral coefficients, we plot

        Im{ak} = Im{ak+mN} = {  1/2,  for k = −1, 2, ∀m,
                               −1/2,  for k = 1, −2, ∀m,
                                0,    otherwise.                     (7.33)

Figure 7.3 (top row) shows the plots of the real part and imaginary part
of the spectral coefficients.
c) In the polar coordinate system, we plot the magnitude of the spectral
coefficients.
        |ak| = |ak+mN| = { 1/2,  for k = ±1, ±2, ∀m,
                           0,    otherwise,                          (7.34)

   and the phase of the spectral coefficients,

        ∢ak = ∢ak+mN = { −sign(k) π/2,  for k = ±1, ∀m,
                          sign(k) π/2,  for k = ±2, ∀m,
                          0,            otherwise.                   (7.35)

Figure 7.3 (bottom row) shows the plots of the magnitude and phase of

the spectral coefficients for N = 10.

Figure 7.3: Plot of the real and imaginary parts (top row), and the magnitude
and phase (bottom row), of the spectral coefficients for x[n] = 1 +
sin(2πn/10) + 3 cos(2πn/10) + cos(4πn/10 + π/2).


d) Analyzing the magnitude and phase spectrum of this signal and compar-
ing it to that of the previous example, we observe that the second signal
has additional harmonics for k = ±2, indicating a “richer” signal in terms
of the frequency content, compared to the first one.

The above two examples are relatively easy to represent in the frequency
domain, since the application of the Euler formula directly yields the
spectral coefficients. The frequency content of these signals can be observed
in both the time and frequency domains. However, signals such as speech and
music, which consist of a large variety of harmonics, cannot be analyzed in
the time domain. The spectral coefficients, on the other hand, provide us with
a measurable value for each harmonic frequency contained in the signal.
    Let us investigate the frequency content of the signal given below.

Exercise 7.3: Find the coordinates of the discrete-time periodic square wave
shown in Figure 7.4, in a Hilbert space spanned by discrete complex exponen-
tials, Φk [n] = ejkω0 n and investigate the frequency content of this signal.

Figure 7.4: Discrete-time periodic square wave with a fundamental period N.

Solution: Analyzing the frequency content of this signal in the time domain is
not possible. All we observe is a square wave of fundamental period N. Let us


investigate the signal in the frequency domain by finding the coordinates from
the analysis equation for the nonzero values of x[n] over one full period, N ,
        ak = (1/N) Σ_{n=−N1}^{N1} e^{−jk(2π/N)n}.                    (7.36)

Changing the dummy variable of the sum to m = n + N1, we observe that this
equation becomes

        ak = (1/N) Σ_{m=0}^{2N1} e^{−jk(2π/N)(m−N1)}
           = (1/N) e^{jk(2π/N)N1} Σ_{m=0}^{2N1} e^{−jk(2π/N)m}.      (7.37)

The summation is a finite geometric series, which has the following closed
form:

        ak = (1/N) e^{jk(2π/N)N1} (1 − e^{−jk2π(2N1+1)/N}) / (1 − e^{−jk(2π/N)})
           = (1/N) [e^{jk2π(N1+1/2)/N} − e^{−jk2π(N1+1/2)/N}] / [e^{jk(π/N)} − e^{−jk(π/N)}]
           = (1/N) sin[2πk(N1 + 1/2)/N] / sin(πk/N),   for k ≠ 0, ±N, ±2N, ...,

and

        ak = (2N1 + 1)/N,   for k = 0, ±N, ±2N, ....                 (7.38)
The coefficients ak for 2N1 + 1 = 5 are sketched for N = 10, 20 and 40 in
Figure 7.5 (a), (b), and (c), respectively.

Figure 7.5: Fourier series coefficients for the periodic square wave; plots of
N ak for 2N1 + 1 = 5 and (a) N = 10; (b) N = 20; (c) N = 40.

Analysis of Figure 7.5 reveals that, as we increase the period from N = 10 to
N = 40, we obtain a smoother periodic discrete representation of the
coefficients, ak vs. k.
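The closed-form coefficients of the discrete-time square wave can be checked
against the analysis equation with a few lines of Python. In the sketch below
(with N = 10 and N1 = 2 chosen for illustration), the coefficients obtained
from np.fft.fft of one period, scaled by 1/N, are compared with the formula
derived above.

    import numpy as np

    # Discrete-time square wave of Figure 7.4, with N1 = 2 (2*N1 + 1 = 5 ones) and N = 10.
    N, N1 = 10, 2
    n = np.arange(N)
    x = ((n <= N1) | (n >= N - N1)).astype(float)    # one period of the square wave

    a_direct = np.fft.fft(x) / N                      # analysis equation for k = 0..N-1

    k = np.arange(N)
    a_formula = np.empty(N)
    a_formula[0] = (2 * N1 + 1) / N
    kk = k[1:]
    a_formula[1:] = np.sin(2 * np.pi * kk * (N1 + 0.5) / N) / (N * np.sin(np.pi * kk / N))

    print(np.allclose(a_direct.real, a_formula), np.max(np.abs(a_direct.imag)))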

7.3. Properties of Discrete-Time Fourier Series

Most of the properties of the discrete-time Fourier series are similar to
those of the continuous-time Fourier series, such as linearity, time reversal,
conjugate symmetry and frequency shifting. For this reason, we simply list
them in Table 7.1. The properties in this table can be easily proved by direct
application of the analysis and synthesis equations of the discrete-time
Fourier series.
    We also provide some popular discrete-time signals and their spectral
coefficients in Table 7.2. Readers are encouraged to derive the spectral
coefficients of the functions x[n] given in Table 7.2.
Recall that the time and frequency domain representations of signals are
one-to-one and onto. In other words, given the signal x[n], it is possible to
compute the spectral coefficients {ak} uniquely from the analysis equation.
Equivalently, given the spectral coefficients {ak}, it is possible to recover
the discrete-time function x[n] uniquely from the synthesis equation. This
property is shown as follows:

        x[n] ↔ {ak}.                                                 (7.39)

Let us study the following examples to see how the properties are applied in
solving problems.

Exercise 7.4: Find the discrete-time signal x[n], which has the following
spectral coefficients:
        ak = cos(kπ/4) + sin(3kπ/4).                                 (7.40)

Solution: Using the linearity property, we can split ak into two terms,

        ak = ak^(1) + ak^(2),  where ak^(1) = cos(kπ/4) and ak^(2) = sin(3kπ/4),   (7.41)

each of which represents the signal x1 [n] and the signal x2 [n], with the corre-
sponding spectral coefficients:
        x1[n] ↔ ak^(1)   and   x2[n] ↔ ak^(2).

Then, the linearity property implies that

        x[n] = x1[n] + x2[n] ↔ ak = ak^(1) + ak^(2).

Let us first find the signals x1[n] and x2[n], represented by the spectral
coefficients ak^(1) and ak^(2), and then add them to find the signal x[n], as
follows.
    The spectral coefficients ak^(1) and ak^(2) can be written in terms of
complex exponential functions as

        ak^(1) = cos(kπ/4) = (e^{jkπ/4} + e^{−jkπ/4}) / 2            (7.42)

and

        ak^(2) = sin(3kπ/4) = (e^{j3kπ/4} − e^{−j3kπ/4}) / (2j),     (7.43)

respectively.
    The angular frequency of ak^(1) and ak^(2) is ω0 = π/4. The fundamental
period is N = 2π/ω0 = 8. Thus, the fundamental periods of x[n], as well as
x1[n] and x2[n], are all N = 8. In other words, ak = ak±8 and x[n] = x[n ± 8].
    The analysis equations for ak^(1) and ak^(2) can be written as

        ak^(1) = (1/8) Σ_{n=−3}^{4} x1[n] e^{−jknπ/4} = (1/2) e^{jkπ/4} + (1/2) e^{−jkπ/4}   (7.44)

and

        ak^(2) = (1/8) Σ_{n=−3}^{4} x2[n] e^{−jknπ/4} = (1/2j) e^{j3kπ/4} − (1/2j) e^{−j3kπ/4}.   (7.45)

Comparing the left-hand side and the right-hand side of the above equations
gives the signals in −3 ≤ n ≤ 4 as follows:
    x1[1] = x1[−1] = 4 and x1[n] = 0 for n ≠ ±1, in the duration −3 ≤ n ≤ 4;
    x2[3] = −x2[−3] = 4j and x2[n] = 0 for n ≠ ±3, in the duration −3 ≤ n ≤ 4.
Hence, in one full period −3 ≤ n ≤ 4,

        x1[n] = 4δ[n − 1] + 4δ[n + 1]

and

        x2[n] = 4jδ[n − 3] − 4jδ[n + 3].

Finally, in the period −3 ≤ n ≤ 4, the signal x[n] is written as follows:

        x[n] = x1[n] + x2[n] = 4δ[n − 1] + 4δ[n + 1] + 4jδ[n − 3] − 4jδ[n + 3].   (7.46)

The periodicity x[n] = x[n + N] implies that

        x[−3] = x[−3 + 8] = x[5]

and

        x[−1] = x[−1 + 8] = x[7].

Hence, x[n] can also be written in the interval 0 ≤ n ≤ 7 as follows:

        x[n] = 4δ[n − 1] + 4jδ[n − 3] − 4jδ[n − 5] + 4δ[n − 7]  for 0 ≤ n ≤ 7.   (7.47)

Note that x[n] is periodic, with x[n] = x[n ± 8] for all −∞ < n < ∞.

Exercise 7.5: Find the fundamental period and the spectral coefficients of
the following discrete-time function:

        x[n] = cos(6πn/13 + π/6).                                    (7.48)

Solution:
The angular frequency of this function is ω0 = 6π/13. Recall that
ω0 = (2π/N)·m = (2π/13)·3. Hence, the fundamental period is N = 13.
From Table 7.2, we see that the spectral coefficients of x′[n] = cos(ω0n) for
ω0 = (2π/N)·m are

        a′k = { 1/2,  for k = ±m, ±m ± N, ±m ± 2N, ...,
                0,    otherwise.                                     (7.49)

In this example, m = 3 and N = 13. Thus, the spectral coefficients of cos(ω0n)
for ω0 = (2π/N)·m are

        a′k = { 1/2,  for k = ±3, (±3 ± 13), (±3 ± 26), ...,
                0,    otherwise.                                     (7.50)
From Table 7.1, we see that a time shift introduces a multiplicative
exponential factor in the frequency domain:

        x[n] = x′[n − n0] ↔ a′k e^{−jkω0n0}.

In this exercise, ω0 = 6π/13. In order to find the amount of shift, n0, we
factorize ω0 as follows:

        x[n] = cos(6πn/13 + π/6) = cos((6π/13)(n + 13/36)).          (7.51)

Hence, n0 = −13/36. Replacing the values of n0 and ω0, we obtain

        ak = { (1/2) e^{jkπ/6},  for k = ±3, (±3 ± 13), (±3 ± 26), ...,
               0,                otherwise.                          (7.52)
The spectral coefficients are complex numbers. In this case, we need two
plots: the magnitude spectrum,

        |ak| = { 1/2,  for k = ±3, (±3 ± 13), (±3 ± 26), ...,
                 0,    otherwise,                                    (7.53)

and the phase spectrum,

        ∠ak = { kπ/6,  for k = ±3, (±3 ± 13), (±3 ± 26), ...,
                0,     otherwise.                                    (7.54)

Table 7.1: Summary of the properties of the discrete-time Fourier series.

Periodic signal  ↔  Fourier series coefficients
x[n] is periodic with fundamental period N  ↔  ak
y[n] is periodic with fundamental period N  ↔  bk

Synthesis equation:  x[n] = Σ_{k=<N>} ak e^{jkω0n},  where ω0 = 2π/N
Analysis equation:   ak = (1/N) Σ_{n=<N>} x[n] e^{−jkω0n}

Ax[n] + By[n]  ↔  A ak + B bk
x[n − n0]  ↔  ak e^{−jkω0n0}
e^{jMω0n} x[n]  ↔  ak−M
x∗[n]  ↔  a∗−k
x[−n]  ↔  a−k
x(m)[n] = x[n/m] if n is a multiple of m, 0 otherwise  ↔  (1/m) ak (periodic with period mN)
x[n] ∗ y[n] (circular convolution over one period)  ↔  N ak bk
x[n] y[n]  ↔  Σ_{l=<N>} al bk−l
x[n] − x[n − 1]  ↔  (1 − e^{−jkω0}) ak
Σ_{k=−∞}^{n} x[k] (bounded and periodic only if a0 = 0)  ↔  ak / (1 − e^{−jkω0})

For real-valued x[n]:  ak = a∗−k,  Re{ak} = Re{a−k},  Im{ak} = −Im{a−k},
                       |ak| = |a−k|,  ∢ak = −∢a−k

Even part of x[n]  ↔  Re{ak}
Odd part of x[n]  ↔  j Im{ak}

Parseval's relation:  (1/N) Σ_{n=<N>} |x[n]|² = Σ_{k=<N>} |ak|²

Table 7.2: Some popular discrete-time periodic signals and their spectral
coefficients.

Periodic signal x[n] with fundamental period N  ↔  Spectral coefficients ak

Σ_{k=−∞}^{∞} δ[n − kN]  ↔  ak = 1/N, for all k

x[n] = 1  ↔  ak = 1 for k = 0, ±N, ±2N, ...; 0 otherwise

In the following, ω0 = 2πm/N and m, N are integers; otherwise, the signal is
not periodic.

e^{jω0n}  ↔  ak = 1 for k = m, m ± N, m ± 2N, ...; 0 otherwise

cos(ω0n)  ↔  ak = 1/2 for k = ±m, ±m ± N, ±m ± 2N, ...; 0 otherwise

sin(ω0n)  ↔  ak = 1/(2j) for k = m, m ± N, m ± 2N, ...;
             ak = −1/(2j) for k = −m, −m ± N, −m ± 2N, ...; 0 otherwise

x[n] = 1 for |n| ≤ N1, 0 for N1 < |n| ≤ N/2 (square wave, period N)
        ↔  ak = sin[(2πk/N)(N1 + 1/2)] / (N sin(πk/N)) for k ≠ 0, ±N, ±2N, ...;
           ak = (2N1 + 1)/N for k = 0, ±N, ±2N, ...

There are two major differences in the properties of the continuous-time


and discrete-time Fourier series representations:
1. Instead of the differentiation operation of continuous-time, we have the
difference operation in discrete-time.

2. Since both the signal and its corresponding spectral coefficients are peri-
odic, convolution property in time domain requires circular convolution
of the signals. Similarly, the multiplication property requires circular
convolution of the spectral coefficients in the frequency domain.
In the following subsections, we focus on three properties, the difference,
convolution and multiplication properties, as follows:

7.3.1. Difference Property

Given a discrete-time periodic function and the corresponding spectral
coefficients,

        x[n] ↔ ak,                                                   (7.55)

the delay operation in the time domain corresponds to a multiplication
operation in the frequency domain, as follows:

        x[n − n0] = Σ_{k=<N>} ak e^{jkω0(n−n0)} = Σ_{k=<N>} (ak e^{−jkω0n0}) e^{jkω0n}
                  = Σ_{k=<N>} a′k e^{jkω0n}.                         (7.56)

Thus, the spectral coefficients of x[n − n0] are a′k = ak e^{−jkω0n0}:

        x[n − n0] ↔ ak e^{−jk(2π/N)n0}.                              (7.57)
When the input-output pairs of an LTI system are periodic, we can use the
Fourier series representation to find the spectral coefficients of the output from
the spectral coefficients of the input. Difference property converts difference
equations into algebraic equations in terms of the spectral coefficients of the
input and output. Therefore, the spectral coefficients of the output of an LTI
system is obtained from the spectral coefficients of the input, without solving
the difference equation by recursive methods.

Exercise 7.6: Consider the discrete-time LTI system represented by the fol-
lowing difference equation;

y[n] = x[n] − x[n − 1], (7.58)


a) Find the spectral coefficients of the output of this system, when the spec-
tral coefficients of the input are {ak }.
b) Find the spectral coefficients of the output, when the input is x[n] =
sin(0.1πn).

Solution:
a) Using the difference property, for x[n] ↔ ak, we obtain

        x[n − 1] ↔ ak e^{−jk(2π/N)}.                                 (7.59)

   Since the Fourier series representation is linear, one-to-one and onto, we
   can substitute the corresponding spectral coefficients into the difference
   equation to obtain its frequency domain representation, which gives the
   spectral coefficients of the output in terms of the spectral coefficients
   of the input:

        y[n] = x[n] − x[n − 1] ↔ bk = (1 − e^{−jk(2π/N)}) ak.        (7.60)

b) Recall that the non-zero spectral coefficients of the input signal,
   x[n] = sin(0.1πn), were obtained in the previous example as

        a1 = 1/(2j),   a−1 = −1/(2j),                                (7.61)

   where ak is periodic with ak = ak±N, ∀k.
   Considering the fact that the fundamental period of the input signal is
   N = 20, the non-zero spectral coefficients of the output are

        b1 = (1/(2j)) (1 − e^{−jπ/10}),   b−1 = (−1/(2j)) (1 − e^{jπ/10}),   (7.62)

   where bk is periodic, so that bk = bk±20, ∀k.
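The result of this exercise can be verified numerically: compute one period of
the output y[n] = x[n] − x[n − 1] directly (using circular indexing, since
x[n] is periodic), take the spectral coefficients of both signals, and compare
with the difference property. A minimal sketch follows.

    import numpy as np

    # Verify b_k = (1 - e^{-jk 2pi/N}) a_k for y[n] = x[n] - x[n-1], x[n] = sin(0.1 pi n).
    N = 20
    n = np.arange(N)
    x = np.sin(0.1 * np.pi * n)
    y = x - np.roll(x, 1)                 # x[n] - x[n-1] with periodic (circular) indexing

    a = np.fft.fft(x) / N
    b_direct = np.fft.fft(y) / N
    k = np.arange(N)
    b_property = (1 - np.exp(-1j * 2 * np.pi * k / N)) * a

    print(np.allclose(b_direct, b_property))   # True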

Exercise 7.7: Consider the discrete-time LTI system represented by the
following difference equation:

        y[n] + 0.5 y[n − 1] = x[n].                                  (7.63)

a) Find the spectral coefficients, bk, of the output of this system, when the
   spectral coefficients of the input are {ak}.
b) Find the spectral coefficients, bk, of the output, when the input is an
   impulse train, x[n] = Σ_{k=−∞}^{∞} δ[n − 2k].

Solution:
a) Using the difference property, for y[n] ↔ bk, we obtain

        y[n − 1] ↔ bk e^{−jk(2π/N)}.                                 (7.64)

   Using the linearity property, we can represent both sides of Equation
   (7.63) by the spectral coefficients:

        y[n] + 0.5 y[n − 1] = x[n] ↔ bk (1 + 0.5 e^{−jk(2π/N)}) = ak.   (7.65)

   Solving for the spectral coefficients bk, we obtain

        bk = ak / (1 + 0.5 e^{−jk(2π/N)}).                           (7.66)

b) From Table 7.2, we see that

        Σ_{k=−∞}^{∞} δ[n − 2k] ↔ ak = 1/2,                           (7.67)

   where ak is periodic with ak = ak±2, ∀k.
   Considering the fact that the fundamental period of the input signal is
   N = 2,

        bk = 1 / (2 + e^{−jkπ}),                                     (7.68)

   where bk is periodic, so that bk = bk±2, ∀k.

7.3.2. Convolution Property

Convolution in the time domain corresponds to multiplication in the frequency
domain. Given two functions and their corresponding spectral coefficients,

        x[n] ↔ ak   and   y[n] ↔ bk,                                 (7.69)

with period N, the convolution x[n] ∗ y[n] has the following spectral
coefficients, ck:

        x[n] ∗ y[n] ↔ ck = (1/N) Σ_{n=0}^{N−1} (x[n] ∗ y[n]) e^{−jkω0n}
                         = (1/N) Σ_{n=0}^{N−1} Σ_{l=0}^{N−1} x[l] y[n − l] e^{−jkω0n}.   (7.70)

Changing the dummy variable of summation to m = n − l, we obtain

        ck = (1/N) Σ_{l=0}^{N−1} x[l] e^{−jkω0l} Σ_{m=0}^{N−1} y[m] e^{−jkω0m}.   (7.71)

In the above equation, the first sum is

        N ak = Σ_{l=0}^{N−1} x[l] e^{−jkω0l},                        (7.72)

and the second sum is

        N bk = Σ_{m=0}^{N−1} y[m] e^{−jkω0m}.                        (7.73)

Hence,

        x[n] ∗ y[n] ↔ N ak bk.                                       (7.74)

Remark 7.2: In Equation (7.70), the limits of the convolution summation do
not range over (−∞, ∞), as in the classical definition of convolution.
Instead, we cover one full period,

        x[n] ∗ y[n] = Σ_{l=0}^{N−1} x[l] y[n − l].                   (7.75)

This is a special case of the convolution operation used for periodic signals,
called circular convolution, where we slide one of the functions over the
other, multiplying the overlaps, until we cover one period, N.
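The circular convolution property can be verified numerically for arbitrary
periodic sequences. The following sketch (with random test sequences of period
N = 8, chosen only for illustration) computes one period of the circular
convolution by the product-and-sum definition and compares its spectral
coefficients with N ak bk, where the coefficients are obtained from np.fft.fft
divided by N.

    import numpy as np

    # Verify the circular-convolution property: x[n] (*) y[n]  <->  N a_k b_k.
    N = 8
    rng = np.random.default_rng(0)
    x = rng.standard_normal(N)
    y = rng.standard_normal(N)

    # Circular convolution over one period, straight from the definition.
    w = np.array([sum(x[l] * y[(m - l) % N] for l in range(N)) for m in range(N)])

    a = np.fft.fft(x) / N
    b = np.fft.fft(y) / N
    c_direct = np.fft.fft(w) / N

    print(np.allclose(c_direct, N * a * b))   # True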

Remark 7.3: We keep in mind that both x[n ± N ] ↔ ak±N and y[n ± N ] ↔
bk±N are periodic, for −∞ < k < ∞ and −∞ < n < ∞.

Exercise 7.8: Consider the following discrete-time signal,

x[n] = sin 0.1πn + sin 0.1π(n − 1). (7.76)


a) What is the fundamental period of x[n]?
b) Find the spectral coefficients ak of the signal x[n].

Solution:
a) The fundamental period is

        N = m · 2π/ω0 = 2π/(0.1π) = 20   (for m = 1).

b) Define x′[n] = sin(0.1πn). Then, the corresponding spectral coefficients in
   one full period are

        a′1 = 1/(2j),   a′−1 = −1/(2j).

   Since a′k is periodic, for all −∞ < k < ∞,

        a′k±20 = a′k.

   Using the linearity and time shift properties, we get

        x[n] = x′[n] + x′[n − 1] ↔ ak = (1 + e^{−jk(2π/N)}) a′k = (1 + e^{−jkπ/10}) a′k.   (7.77)

Remark 7.4: In the above example, we can use the linearity property easily,
since both terms have the same fundamental period.

Motivating Question: What if the fundamental period of two terms were


different?
The following exercise answers this question.

Exercise 7.9: Consider a periodic signal,

        y[n] = x1[n] + x2[n],

where x1[n] and x2[n] have the fundamental periods N1 and N2, respectively.
What is the fundamental period of the signal y[n]?

Solution:
The fundamental period of y[n] is the least common multiple of N1 and N2.
Indeed,

        y[n + N1N2] = x1[n + N1N2] + x2[n + N1N2] = x1[n] + x2[n] = y[n],   (7.78)

so y[n] is periodic with period N1N2. Since the least common multiple of N1
and N2 also contains an integer number of periods of both x1[n] and x2[n],
y[n] repeats itself with this smaller period as well.

Exercise 7.10: Find the spectral coefficients, ak, of the sequence x[n] shown
in Figure 7.6(a).

Figure 7.6: (a) Periodic sequence x[n] and its representation as a sum of (b)
the square wave x1[n] and (c) the dc sequence x2[n].

Solution: This sequence has a fundamental period of N = 5. We observe that
x[n] may be viewed as the sum of the square wave x1[n] in Figure 7.6(b) and
the dc sequence x2[n] in Figure 7.6(c). Denoting the Fourier series
representations by

        x1[n] ↔ bk                                                   (7.79)

and

        x2[n] ↔ ck,                                                  (7.80)

we can use the linearity property of Table 7.1 to obtain the spectral
coefficients of the signal x[n] as follows:

        ak = bk + ck.                                                (7.81)

Using the result of Exercise 7.3 for N = 5 and N1 = 1, the Fourier series
coefficients bk corresponding to x1[n] can be expressed as

        bk = { (1/5) sin(3πk/5)/sin(πk/5),  for k ≠ 0, ±5, ±10, ...,
               3/5,                          for k = 0, ±5, ±10, ....   (7.82)

The sequence x2[n] has only a constant value, which is captured by its zeroth
Fourier series coefficient:

        c0 = (1/5) Σ_{n=0}^{4} x2[n] = 1.                            (7.83)

Since the discrete-time Fourier series coefficients are periodic, it follows
that ck = 1 whenever k is an integer multiple of 5. The remaining coefficients
of x2[n] must be zero, because x2[n] contains only a dc component. We can now
substitute the expressions for bk and ck into ak = bk + ck to obtain

        ak = { (1/5) sin(3πk/5)/sin(πk/5),  for k ≠ 0, ±5, ±10, ...,
               8/5,                          for k = 0, ±5, ±10, ....   (7.84)

Exercise 7.11: Find the signal x[n] described by the following properties:
1. x[n] is periodic with period N = 6.
2. Σ_{n=0}^{5} x[n] = 2.
3. Σ_{n=2}^{7} (−1)^n x[n] = 1.
4. x[n] has the minimum power per period among the set of signals satisfying
   the preceding three conditions.

Solution: We denote the Fourier series coefficients of the signal x[n] as
follows:

        x[n] ↔ ak.                                                   (7.85)

From Property 2, we conclude that a0 = 1/3. Noting that
(−1)^n = e^{−jπn} = e^{−j(2π/6)3n}, we see from Property 3 that a3 = 1/6.
From Parseval's relation (see Table 7.1), the average power in x[n] is

        P = Σ_{k=0}^{5} |ak|².                                       (7.86)

Since each nonzero coefficient contributes a positive amount to P , and since


the values of a0 and a3 are prespecified, the value of P is minimized by choosing
a1 = a2 = a4 = a5 = 0. It then follows that

x[n] = a0 + a3 ejπn = (1/3) + (1/6)(−1)n , (7.87)


which is sketched in Figure 7.7.

Figure 7.7: Sequence x[n] that is consistent with the properties specified in
the example.

7.3.3. Multiplication Property

Multiplication in the time domain corresponds to convolution in the frequency
domain.
    Given two functions and their corresponding spectral coefficients,

        x[n] ↔ ak   and   y[n] ↔ bk,                                 (7.88)

with period N, the multiplication of the signals x[n] and y[n] can be written
in terms of their Fourier series representations as follows:

        x[n] y[n] = ( Σ_{k=0}^{N−1} ak e^{j(2π/N)kn} ) ( Σ_{l=0}^{N−1} bl e^{j(2π/N)ln} ).   (7.89)

Arranging the summations and changing the dummy variable l = m − k gives

        x[n] y[n] = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} ak bl e^{j2π(k+l)n/N}
                  = Σ_{m=0}^{N−1} ( Σ_{k=0}^{N−1} ak bm−k ) e^{j2πmn/N}.   (7.90)

Therefore, multiplication of two discrete-time signals in the time domain
corresponds to convolution of their spectral coefficients in the frequency
domain:

        x[n] y[n] ↔ Σ_{l=0}^{N−1} al bk−l.                           (7.91)

In the frequency domain, we perform circular convolution, where the limits of
the summation cover only one full period, N.

Remark 7.5: We keep in mind that both x[n ± N] ↔ ak±N and y[n ± N] ↔ bk±N are
periodic, for −∞ < k < ∞ and −∞ < n < ∞.

Exercise 7.12: Given two periodic signals with fundamental period N = 7 and
the corresponding Fourier series representations,

        x[n] ↔ ak   and   y[n] ↔ bk,                                 (7.92)

a) Find the spectral coefficients of the following signal, in terms of ak and
   bk:

        w[n] = Σ_{∀r} x[r] y[n − r].                                 (7.93)

b) Suppose, now, that x[n] = y[n] and the spectral coefficients of

        w[n] = Σ_{∀r} x[r] x[n − r] ↔ ck                             (7.94)

   are as follows:

        ck = (1/7) sin²(3πk/7) / sin²(πk/7).                         (7.95)

   Find and plot the signal x[n].
c) Find and plot w[n].

Solution:
a) The signal w[n] is also periodic with N = 7. Therefore, the limits of the summation should cover only one full period of N consecutive values of r, which indicates a circular convolution operation.
From the convolution property, we know that

x[n] ∗ y[n] ↔ N ak bk.    (7.96)

Since w[n] = x[n] ∗ y[n], the Fourier series coefficients of w[n] for N = 7 are

ck = N ak bk = 7 ak bk.    (7.97)

b) When x[n] = y[n], the convolution of the signal x[n] with itself has the following spectral coefficients:

x[n] ∗ x[n] ↔ N ak^2.    (7.98)

In this case, the given spectral coefficients of w[n] can be written as ck = 7 ak^2, since

ck = (1/7) sin^2(3πk/7) / sin^2(πk/7) = 7 · [ (1/7) sin(3πk/7) / sin(πk/7) ]^2.    (7.99)

From Table 7.2, we can see that the spectral coefficients

ak = (1/7) sin(3πk/7) / sin(πk/7)    (7.100)

belong to the square wave signal with N1 = 1 and N = 7 (see Figure 7.8(a)).
c) Using the periodic convolution property, we see that

w[n] = Σ_{r=⟨7⟩} x[r] x[n − r] = Σ_{r=−3}^{3} x[r] x[n − r],    (7.101)

where, in the last equality, we have chosen to sum over the interval −3 ≤
r ≤ 3. Except for the fact that the sum is limited to a finite interval, the
product-and-sum method for evaluating convolution is applicable here. In
fact, we can convert this equation to an ordinary convolution by defining
a signal x̂[n] that equals x[n] for −3 ≤ n ≤ 3 and is zero otherwise. Then,
from this equation,
w[n] = Σ_{r=−3}^{3} x̂[r] x[n − r] = Σ_{r=−∞}^{+∞} x̂[r] x[n − r].    (7.102)

That is, w[n] is the aperiodic convolution of the sequences x̂[n] and x[n].
The sequences x[r], x̂[r], and x[n − r] are sketched in Figure 7.8 (a)-(c). From
the figure, we can immediately calculate w[n]. In particular we see that
w[0] = 3; w[−1] = w[1] = 2; w[−2] = w[2] = 1; and w[−3] = w[3] = 0.
Since w[n] is periodic with period 7, we can then sketch w[n] as shown in
Figure 7.8 (d).

Figure 7.8: (a) The square-wave sequence x[r] in the example; (b) the sequence
x̂[r], equal to x[r] for −3 ≤ r ≤ 3 and zero otherwise; (c) the sequence x[n − r];
(d) the sequence w[n], equal to the periodic convolution of x[n] with itself and
to the aperiodic convolution of x̂[n] with x[n].
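The numerical values of w[n] found above are easy to confirm. The sketch below (NumPy only; the index bookkeeping is one possible choice, not taken from the text) builds one period of the N = 7 square wave with N1 = 1 and evaluates the periodic convolution of x[n] with itself directly.

    import numpy as np

    N = 7
    x = np.zeros(N)
    x[[0, 1, 6]] = 1.0              # square wave: x[n] = 1 for n = -1, 0, 1 (mod 7)

    # Periodic convolution over one period: w[n] = sum_{r=<7>} x[r] x[n-r]
    w = np.array([np.sum(x * x[(n - np.arange(N)) % N]) for n in range(N)])

    n = np.arange(-3, 4)
    print(dict(zip(n.tolist(), w[n % N].tolist())))
    # {-3: 0.0, -2: 1.0, -1: 2.0, 0: 3.0, 1: 2.0, 2: 1.0, 3: 0.0}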
7.4. Discrete-Time LTI Systems with Periodic Input and Output Pairs

Consider an LTI system, represented by the impulse response h[n] and equivalently by the following difference equation,

Σ_{k=0}^{N} ak y[n − k] = Σ_{k=0}^{M} bk x[n − k].    (7.103)

Suppose that a periodic input x[n] generates a periodic output y[n].

Motivating Question: What is the relationship between the spectral coefficients of the input, {ak}, and the spectral coefficients of the output, {bk}? (Here {ak} and {bk} denote the spectral coefficients, not the difference equation coefficients above.)

x[n] = Σ_{k=⟨N⟩} ak e^{jkω0 n} → h[n] → y[n] = Σ_{k=⟨N⟩} bk e^{jkω0 n}    (7.104)

Figure: An LTI system, where the periodic input-output pairs are represented by Fourier series.
This is a very crucial question because, if we can establish a relationship between the spectral coefficients of the input and output, then we can simply feed the spectral coefficients of the input to the system to obtain the spectral coefficients of the output. Then, based on the spectral coefficients of the output, we can construct the output signal. This approach saves a great deal of cost in implementing LTI systems.
In order to find the relationship between the spectral coefficients of the in-
put and that of the output, we use the harmonically related complex exponen-
tial eigenfunctions and the linearity property. Then, we find the corresponding
eigenvalues of the LTI system, as explained in the next section.

7.4.1. Eigenfunctions, Eigenvalues and Transfer Functions of a Discrete-Time LTI System
Suppose that the input to an LTI system is a discrete-time complex exponential,

x[n] = e^{jω0 n},    (7.105)

then the convolution summation for this input is

y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} h[k] e^{jω0(n−k)} = e^{jω0 n} Σ_{k=−∞}^{∞} h[k] e^{−jω0 k} = e^{jω0 n} H(e^{jω0}),    (7.106)

where

H(e^{jω0}) = Σ_{k=−∞}^{∞} h[k] e^{−jω0 k}.    (7.107)

In the above formulation, H(e^{jω0}) is a complex-valued function of the complex exponential e^{jω0}, whose purely imaginary exponent is scaled by the angular frequency ω0. Note that, since the impulse response is not a periodic function, the limits of the summation range over (−∞, ∞). This equation is not to be confused with the Fourier series representation of periodic signals.
As in the continuous case, we can define the eigenvalue of a discrete-time
system, when the input is a discrete-time complex exponential eigenfunction,
x[n] = ejω0 n , as follows:

Definition 7.1: The eigenvalue of a discrete-time LTI system for a complex exponential eigenfunction x[n] = e^{jω0 n} is defined as

H(e^{jω0}) = Σ_{k=−∞}^{∞} h[k] e^{−jω0 k}.    (7.108)

Thus, when the input is an eigenfunction, x[n] = ejω0 n , the corresponding


output is the scaled version of the eigenfunction, where the scaling factor is
the eigenvalue of the LTI system,

y[n] = ejω0 n H(ejω0 ). (7.109)

x[n] = e^{jω0 n} → h[n] → y[n]

Figure 7.9: A discrete-time LTI system, represented by the impulse response,


h[n], is fed by the eigenfunction, x[n] = ejω0 n . The corresponding output is
y[n] = ejω0 n H(ejω0 ).
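The eigenfunction relation above can be checked numerically for any absolutely summable impulse response. In the sketch below (NumPy only), the FIR impulse response and the test frequency are arbitrary illustrative choices; away from the initial transient the filtered complex exponential equals H(e^{jω0}) e^{jω0 n}.

    import numpy as np

    h = np.array([0.5, 0.3, 0.2])       # an illustrative FIR impulse response
    w0 = 2 * np.pi / 5                  # an arbitrary test frequency
    n = np.arange(100)
    x = np.exp(1j * w0 * n)             # eigenfunction input

    # Eigenvalue H(e^{jw0}) = sum_k h[k] e^{-j w0 k}
    H = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))

    y = np.convolve(x, h)[:len(n)]      # causal convolution, truncated to len(n)

    # After the start-up transient of length len(h)-1, the output is H * x
    print(np.allclose(y[len(h):], H * x[len(h):]))   # True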

In general, any exponential input e^{λn} is directly passed to the output with a scaling factor,

H(e^{λ}) = Σ_{k=−∞}^{∞} h[k] e^{−λk},    (7.110)

which is called the transfer function. The output corresponding to the exponential input x[n] = e^{λn} is then written as

y[n] = e^{λn} H(e^{λ}).

Inserting this output and the exponential input x[n] = e^{λn} into the difference equation, we obtain

Σ_{k=0}^{N} ak H(e^{λ}) e^{λ(n−k)} = Σ_{k=0}^{M} bk e^{λ(n−k)}.    (7.111)

Dividing both sides by e^{λn} and arranging the above equation, we obtain the transfer function of a discrete-time LTI system in terms of the parameters of the difference equation, as follows:

H(e^{λ}) = ( Σ_{k=0}^{M} bk e^{−λk} ) / ( Σ_{k=0}^{N} ak e^{−λk} ).    (7.112)

Hence, when the input of a discrete-time LTI system is an exponential function, the corresponding output is just a scaled version of the input, where the scaling factor H(e^{λ}) is the transfer function.
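For λ = jω, Equation 7.112 is exactly what scipy.signal.freqz evaluates from the difference-equation coefficient vectors. The sketch below uses an illustrative first-order system y[n] − 0.5 y[n − 1] = x[n] (not one of the examples in the text) and checks the library result against a direct evaluation of the formula.

    import numpy as np
    from scipy import signal

    b = np.array([1.0])          # coefficients of x[n-k]
    a = np.array([1.0, -0.5])    # coefficients of y[n-k]

    omega = np.linspace(0, np.pi, 8)
    _, H_scipy = signal.freqz(b, a, worN=omega)

    # Direct evaluation of Equation 7.112 with lambda = j*omega
    num = sum(bk * np.exp(-1j * omega * k) for k, bk in enumerate(b))
    den = sum(ak * np.exp(-1j * omega * k) for k, ak in enumerate(a))
    print(np.allclose(H_scipy, num / den))   # True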

7.4.2. Relationship Between the Fourier Series of Periodic Input and Output Pairs of Discrete-Time LTI Systems

Suppose, now, that the input to an LTI system is the k-th harmonic of a discrete-time complex exponential function,

x[n] = e^{jkω0 n},    (7.113)

then the corresponding output becomes

y[n] = x[n] ∗ h[n] = Σ_{l=−∞}^{∞} h[l] e^{jkω0(n−l)} = e^{jkω0 n} H(e^{jkω0}),    (7.114)

where

H(e^{jkω0}) = Σ_{l=−∞}^{∞} h[l] e^{−jkω0 l}.    (7.115)

In general, we can superpose harmonically related complex exponentials to represent the input signal by the Fourier series,

x[n] = Σ_{k=⟨N⟩} ak e^{jkω0 n},    (7.116)

to obtain the Fourier series representation of the output,

y[n] = Σ_{k=⟨N⟩} bk e^{jkω0 n} = Σ_{k=⟨N⟩} ak H(e^{jkω0}) e^{jkω0 n}.    (7.117)

Therefore, the relationship between the spectral coefficients of the input and output pairs of a discrete-time LTI system is

bk = ak H(e^{jkω0}),    (7.118)

where the eigenvalue for the k-th harmonic eigenfunction is

H(e^{jkω0}) = Σ_{n=−∞}^{∞} h[n] e^{−jkω0 n}.    (7.119)

Exercise 7.13: Given the following discrete-time LTI system,

y[n] = x[n − 1] − x[n − 2]. (7.120)


a) Find the impulse response of this system.
b) Find the eigenvalue of this system corresponding to the k th harmonic
eigen-function.

Solution:
a) The impulse response of this LTI system is easily obtained by replacing
the input by the impulse function,

h[n] = δ[n − 1] − δ[n − 2]. (7.121)


b) The eigenvalue of this system corresponding to the k-th harmonic eigenfunction, x[n] = e^{jkω0 n}, can be obtained by substituting h[n] into Equation 7.119, as follows:

H(e^{jkω0}) = e^{−jkω0} − e^{−j2kω0}.    (7.122)

Exercise 7.14: Consider an LTI system represented by the impulse response h[n] = α^n u[n], −1 < α < 1.
Find the Fourier series representation of the output when the input is x[n] = cos(ω0 n), where the fundamental period is N = 4.

Solution:
We know that the spectral coefficients ak of the input and the spectral
coefficients bk of the output are related by the eigenvalue of the LTI system,
as follows:

bk = ak H(ejkω0 ). (7.123)
Let us first find the spectral coefficients of the input:

x[n] ↔ a1 = a−1 = 1/2.    (7.124)
Since the period is N = 4, the spectral coefficients will repeat at every N ,
ak = ak±4 for all k.
Next, let us find the eigenvalue of the system for the eigenfunction input,
x[n] = ejω0 n , as follows,

H(e^{jω0}) = Σ_{k=−∞}^{∞} h[k] e^{−jω0 k} = Σ_{k=0}^{∞} α^k e^{−jω0 k} = 1 / (1 − α e^{−jω0}).    (7.125)

Therefore, the k-th harmonic spectral coefficient of the output is

bk = ak H(e^{jkω0}) = ak / (1 − α e^{−jkω0}).    (7.126)

Considering the nonzero spectral coefficients of the input for k = ±1 and inserting the value of the fundamental period N = 4 into the angular frequency, ω0 = 2π/N = π/2, the spectral coefficients of the output become

bk = 0.5 / (1 − α e^{−jkπ/2})   for k = ±1, ±1 ± N, ±1 ± 2N, ...,
bk = 0                          otherwise.    (7.127)

Finally, the Fourier series representation of the output becomes

y[n] = Σ_{k=⟨4⟩} ak H(e^{jkω0}) e^{jkω0 n} = (0.5 / (1 − α e^{−jπ/2})) e^{jπn/2} + (0.5 / (1 − α e^{jπ/2})) e^{−jπn/2}.    (7.128)
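A quick numerical check of this result is sketched below (NumPy and SciPy; the value α = 0.5 is an illustrative choice). The cosine is passed through the recursion y[n] = α y[n − 1] + x[n], whose impulse response is α^n u[n], and the steady-state output is compared with the two-term Fourier series of Equation 7.128.

    import numpy as np
    from scipy.signal import lfilter

    alpha, N = 0.5, 4
    w0 = 2 * np.pi / N
    n = np.arange(400)
    x = np.cos(w0 * n)

    # y[n] - alpha*y[n-1] = x[n]  <=>  h[n] = alpha^n u[n]
    y = lfilter([1.0], [1.0, -alpha], x)

    H = lambda w: 1.0 / (1.0 - alpha * np.exp(-1j * w))
    y_fs = 0.5 * H(w0) * np.exp(1j * w0 * n) + 0.5 * H(-w0) * np.exp(-1j * w0 * n)

    # Agreement once the start-up transient has died out
    print(np.allclose(y[100:], y_fs.real[100:]))   # True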

Remark 7.6: The eigenvalues of a discrete-time LTI system not only relate the harmonically related complex exponential inputs and the corresponding outputs of the system,

L{Φk[n]} = H(e^{jkω0}) Φk[n],    (7.129)

but they also relate the spectral coefficients of the periodic input and output signals,

bk = ak H(e^{jkω0}).    (7.130)

This is due to the beauty of linearity and time-invariance, together with the harmony of the exponential functions.

7.5. Chapter Summary


Is it possible to extend the Fourier series theorems to discrete-time periodic
signals and systems? If yes, what type of Hilbert space can be defined to
represent spectral coefficients of a discrete-time periodic function, in a function
space? What are the basis functions, which span this space? Is this an infinite
dimensional space as in the continuous-time signals or is it finite dimensional?
What is the dimension of this space?
In this chapter, we try to answer the above questions. We define discrete-
time Fourier series for discrete-time periodic functions and represent them in a
Hilbert space spanned by the orthogonal harmonically related complex expo-
nential functions. The discrete nature of signal, x[n], with period N , enables us
to replace the integral operation by the summation operation to compute
the spectral coefficients, {ak }, of the signal. Considering the fact that harmon-
ically related complex exponentials are periodic functions and superposition of
the periodic functions are also periodic, we end up with periodic and discrete
spectral coefficients, ak = ak+N , which corresponds to the coordinates of the
discrete-time periodic functions in Hilbert space, spanned by N harmonically
related and orthogonal discrete-time complex exponentials. Thus, discrete-time
functions can be represented in finite dimensional Hilbert spaces. The pe-
riod of the spectral coefficients is the same as the period of the signal in the
time domain, which defines the dimension of the Hilbert space.
The relationship between the input and output signals can be uniquely
identified by the spectral coefficients of the input and output functions. The
spectral coefficients of the output are just scaled versions of the spectral
coefficients of the input, where the scale factor is the eigenvalue of the discrete-
time LTI system corresponding to the complex exponential eigenfunctions.
This scaling factor is called the transfer function and it uniquely represents

the discrete-time LTI system.
In summary, we represent a discrete-time periodic signal in an N -dimensional
Hilbert space spanned by harmonically related discrete complex exponentials,
where the coordinate of each function is also periodic with the same period of
the discrete-time function. Furthermore, a discrete-time LTI system is uniquely
represented by the eigenvalue of the system corresponding to the complex ex-
ponential eigenfunction, given at the input.

Problems

1. Consider the discrete-time signal given below:

x[n] = 1 + sin((2π/3) n) + 3 cos((π/5) n).
a) Find the fundamental period of this signal.
b) Find the coordinates of this function, in a Hilbert space spanned by
discrete complex exponentials, Φk [n] = ejkω0 n , ∀k, ∀n.
c) Plot the spectral coefficients in polar coordinates.
d) Plot the spectral coefficients in Cartesian coordinates.
2. Consider the discrete-time periodic signal given in one full period,

x[n] = 1 for 0 ≤ n ≤ 4,
x[n] = 0 for 5 ≤ n ≤ 9.
a) Find the coordinates of this function, in a Hilbert space spanned by
discrete complex exponentials, Φk [n] = ejkω0 n , ∀k, ∀n.
b) Plot the spectral coefficients of this function.
3. Find the discrete-time signal x[n] which has the following spectral coefficients:

ak = cos(kπ/3) + cos(kπ/4).
4. Consider the following discrete-time periodic signal:

x[n] = sin((6π/13) n + π/2).
a) Find the fundamental period of this signal.
b) Find and plot the spectral coefficients in the polar coordinate system.
5. Consider the following discrete-time periodic function:

x[n] = Σ_{m=−∞}^{+∞} { 2δ[n − m] + δ[n − 2m − 1] }.

a) Plot this function and find the fundamental period.


b) Calculate the values of spectral coefficients of x[n] over a period using
the analysis equation.
6. A discrete-time, real and odd signal x[n] with period N = 4 has the following Fourier series coefficients ak:

a9 = 2j,  a14 = j,  a19 = 4j.

Find a0, a1, a−6 and a−3.

7. A discrete-time real and periodic signal x[n] has the fundamental period N = 5. The nonzero spectral coefficients of x[n] are

a0 = 4,  a2 = a−2 = 4e^{−jπ/6}  and  a4 = a−4 = 2e^{jπ/3}.

Find x[n].
8. Consider the discrete-time periodic signal x[n] with N = 8 given below:

x[n] = 2 − sin(πn/4) for 0 ≤ n ≤ 7.
a) Find Fourier series coefficients ak of x[n].
b) Plot the magnitude and phase diagram for ak .
9. A discrete-time periodic signal x[n] has the fundamental period N = 16. The Fourier series coefficients ak and the signal x[n] satisfy the following properties:

ak = −ak−8,
x[2n + 1] = (−1)^{n+1}.

a) Find and plot the magnitude and phase of the Fourier series coefficients ak.
b) Find and plot x[n].
10. The signal x[n] is a real-valued discrete-time periodic signal whose fundamental
period is N. The complex spectral coefficients of x[n] have the
following form:

ak = bk − jck

where bk and ck are real valued sequences.


a) Prove that a∗k = a−k .
b) Find the relation between bk and b−k .
c) Find the relation between ck and c−k .
11. The Fourier series representation of a discrete-time periodic signal is as follows:

x[n] = Σ_{k=⟨N⟩} ak e^{jk(2π/N)n}.

Find the Fourier series representations of the following signals in terms of ak.

a) x[n] − x[n − N/2], assume that N is even.
b) x[n] + x[n + N/2], assume that N is even.
(Hint: This signal is periodic with fundamental period N/2.)
c) (−1)^n x[n], assume that N is even.
d) (−1)^n x[n], assume that N is odd.
(Hint: This signal is periodic with fundamental period 2N.)
e) x[n] (1 − (−1)^{n+1}) / 2
12. Consider the following discrete-time signals:

x[n] = 1 + cos(πn/3),   y[n] = sin(πn/3 + π/4).
a) Find the Fourier series coefficients of x[n].
b) Find the Fourier series coefficients of y[n].
c) Use convolution property to calculate the Fourier Series coefficients
of z[n] = x[n] ∗ y[n].
13. The discrete-time periodic signals x[n] and y[n] have the same fundamen-
tal period N. Let

g[n] = Σ_{k=⟨N⟩} x[k] y[n − k]

be their periodic convolution. What is the fundamental period of g[n]?


Verify your answer.
14. The discrete-time periodic signals x[n] and y[n] have the fundamental period N = 4. The corresponding Fourier series coefficients are ak and bk, respectively. Mathematically,
x[n] ←→ ak and y[n] ←→ bk.
Moreover, it is given that
2a0 = 2a3 = a1 = a2 = 1 and b0 = b1 = b2 = b3 = 4,
and
g[n] = x[n] y[n] ←→ ck.
a) What is the period of g[n]? Verify your answer.
b) Use multiplication property to find ck .
15. The impulse train

x[n] = Σ_{k=−∞}^{+∞} 2δ[n − k]

is fed into an LTI system. The corresponding output of the system is found to be

y[n] = cos((5π/2) n + 9π/4).

Find the eigenvalues H(e^{j5kπ/2}) at k = 0, 1, 2 and 3.

Assume that the above LTI system is applied to the following periodic input signals. Find the corresponding output for each input signal.
a) x1[n] = (−1)^{n+2}
b) x2[n] = 1 + cos((π/2) n + π/2)
c) x3[n] = Σ_{k=−∞}^{+∞} 2^{4k−n} u[n − k]

16. Consider a causal discrete-time LTI system whose difference equation is given below:

y[n] − (1/4) y[n − 1] = x[n − 1].

a) Plot the block diagram of the system.
b) Determine the Fourier series representation of the output y[n] if the input x[n] = cos((π/4) n) + cos((π/2) n) is fed into the system.
17. Consider a discrete-time LTI system whose impulse response h[n] is

h[n] = 2 for 0 ≤ n ≤ 3,
h[n] = −1 for −3 ≤ n ≤ −2,
h[n] = 0 otherwise.

Calculate the Fourier series coefficients of the output y[n] if the input to the system is

x[n] = Σ_{k=−∞}^{+∞} δ[n − 6k].

18. Consider the discrete-time LTI system represented by the following difference equation:

y[n] − 2y[n − 1] + y[n − 2] = x[n].    (7.131)

a) Find the spectral coefficients bk of the output of this system, when the spectral coefficients of the input are {ak}.
b) Find the spectral coefficients bk of the output, when the input is x[n] = cos((3π/2) n).
19. Consider an LTI system represented by the impulse response

h[n] = a^{|n|}, |a| < 1.    (7.132)

Find the Fourier series representation of the output when the input is x[n] = cos(ω0 n), where the fundamental period is N = 3.
20. In this programming task, we try to approximate two different periodic
functions by using their Fourier Series representations.

(a) Firstly, write a function that computes the first n+1 Fourier Series
coefficients of a given signal. Your function takes the given signal, the
period of the signal, and the number of coefficients as input. You will
need to compute the DC component and the coefficients of n harmonics. (For safety, you can compute one DC coefficient, n coefficients
for cosine components, and n coefficients for sine components.)

(b) Write a function to generate the approximate function by using Fourier


Series coefficients.

(c) Generate the following square wave function by dividing [-0.5, 0.5]
range into 1000 points.
s[n] = −1 if −0.5 < n < 0,
s[n] = 1 if 0 < n < 0.5

You can assume that this function is periodic and above definition
belongs to one cycle of the signal. Compute n Fourier Series coeffi-
cients of the given function by using the function you implemented in
the first part. Then, generate the approximate function by using the
function you implemented in the second part. Plot both the original
function and the approximated function on the same plot by setting
n=[1, 5, 10, 50, 100]. (You can use plt.plot() function for better vi-
sualization.)

(d) Generate the following sawtooth function by dividing [-0.5, 0.5] range
into 1000 points. (You can use scipy.signal.sawtooth() function or
you can implement it by hand.)

s[n] = 1 + 2n if −0.5 < n < 0,
s[n] = −1 + 2n if 0 < n < 0.5

Apply the procedure in the third part to the new signal. What is the
effect of increasing n?
You should write your code in Python and no library is allowed other
than matplotlib.pyplot, numpy and scipy.signal.sawtooth().

Chapter 8
Continuous Time Fourier Transform and its Extension to Laplace Transform

“Primary causes are unknown to us; but are subject to simple and
constant laws, which may be discovered by observation, the study of
them being the object of natural philosophy.”
Jean Baptiste Joseph Fourier

Watch: A visual introduction to the Fourier Transform @ https://384book.net/v0801

In Chapter 6, we decomposed a continuous time periodic signal into its


frequency harmonics by using Fourier series representation. The spectral coef-
ficients of Fourier series give us the amount of each frequency component in a
periodic signal.
We have seen that Fourier series representation decomposes a periodic sig-
nal into its harmonic frequencies and enables us to observe and analyze the
frequency content of the periodic signals. In this representation, spectral coeffi-
cients measure the amount of the corresponding frequency components, which
make up the signal. Hence, it is possible to attenuate the unwanted frequency
components or to boost some important frequencies of signals by changing
the spectral coefficients of the signal. Unfortunately, Fourier series analysis is
applicable only to the periodic functions.
Motivating Question: What if the function is not periodic? Can we ex-
tend the Fourier series representation to aperiodic functions?

The answer is yes!
The generalized form of the Fourier series, which enables us to represent
both periodic and aperiodic functions in terms of their frequency content, is
called the Fourier Transform.
In this chapter, we shall extend the Fourier series representation of periodic
functions to aperiodic functions to define Fourier transforms. We shall study
the properties of Fourier transform. We shall see that Fourier transforms are
very important tools for analyzing natural systems. They are also extremely
useful to design and implement a wide range of man-made signals and systems.
However, it is not possible to find a finite Fourier transform of all functions. In
order to utilize the transform domain techniques for such functions, we shall
generalize the continuous-time Fourier transform to Laplace transform by
extending the exponential basis functions with purely imaginary exponent to
exponential functions with complex exponent.

8.1. Fourier Series Extension to Aperiodic Functions
The idea of extending Fourier analysis to aperiodic functions is simple. We
can regard an aperiodic continuous time function, x(t), as a function with an
infinite period. In other words, we assume that the fundamental period of the
function, x(t), approaches infinity, T → ∞.
Recall that a function is periodic if ∃T , such that

x(t) = x(t + T ). (8.1)


Note: The period, T , can be as large as we need. When a signal has an
infinite period, we assume that it repeats itself at every T → ∞.
Interestingly, as T → ∞, the sum operation of the Fourier series synthesis
equation converges to an integral operation.
Let us see how.
Formally speaking, consider a continuous time aperiodic function, x(t),
x(t) = x̃(t) for −T1 < t < T2,
x(t) = 0 otherwise,    (8.2)
where x̃(t) is a periodic function, generated by repeating the aperiodic function,
x(t). For the time being, let us assume that the nonzero range in the interval,
(−T1 , T2 ) is finite, in the above equation. Then, the fundamental period of

x̃(t) = x̃(t + T ) (8.3)

should be T ≥ (T1 + T2 ), as shown in Figure 8.1. Since (−T1 , T2 ) is a finite
interval, we can repeat the function, x(t) to, generate a periodic function, x̃(t),
with finite fundamental period, T.
Note: The center part of the periodic function, x̃(t) is the aperiodic func-
tion, x(t), which is nonzero in a finite interval, (T1 , T2 ).
If we stretch the aperiodic function, x(t), such that T1 → ∞ and T2 → ∞,
then we obtain a function with infinite period. We need to use our imagination
to think about a signal that repeats itself at every period, as T → ∞. In
summary, any aperiodic function can be considered as a periodic function with
an infinite fundamental period.
Now, we can extend the Fourier series theorem to the signals of infinite
period as follows:
Recall that the Fourier series representation of a periodic signal was given
by the following synthesis and analysis equations,

x̃(t) = Σ_{k=−∞}^{∞} ak e^{jkω0 t}  and  ak = (1/T) ∫_T x̃(t) e^{−jkω0 t} dt,    (8.4)

respectively.
Consider a signal, x(t) = 0 for t < −T1 and t > T2, which can be defined in one full period of a periodic function x̃(t). Then,

T ak = ∫_T x̃(t) e^{−jkω0 t} dt = ∫_{−T1}^{T2} x(t) e^{−jkω0 t} dt.    (8.5)

Let us define a complex-variable function, as follows,

X(jkw0 ) ≜ T ak . (8.6)
X(jkw0 ) is proportional to the spectral coefficients of the periodic signal,
x̃(t).
Then, we can replace ak by X(jkw0 )/T and T = 2π/w0 in the Fourier
series synthesis equation to obtain,

x̃(t) = (1/T) Σ_{k=−∞}^{∞} X(jkω0) e^{jkω0 t}
     = (1/2π) Σ_{k=−∞}^{∞} X(jkω0) e^{jkω0 t} ω0.    (8.7)

Finally, we stretch the nonzero interval of x(t) and the fundamental period
of x̃(t) to infinity,

(−T1 , T2 ) → ∞ and T → ∞ (8.8)

Figure 8.1: Top row: Given an aperiodic function x(t), which is nonzero in a
finite interval −T1 < t < T2 , we generate a periodic signal , x̃(t) by repeating
x(t) with period, T ≥ (T1 + T2 ). Middle row: We can enlarge the nonzero
interval of x(t) and the period, T of x̃(t) as much as we like. Bottom row: We
stretch the nonzero interval, (T1 + T2 ), of x(t) to ∞.

and take the limit,

lim_{T→∞} x̃(t) = x(t) = lim_{ω0→0} (1/2π) Σ_{k=−∞}^{∞} X(jkω0) e^{jkω0 t} ω0.    (8.9)

As the fundamental period T → ∞, the angular frequency ω0 → 0; in the limit, the harmonically related discrete frequencies converge to a continuous frequency variable,

lim_{ω0→0} kω0 → ω  and  ω0 → dω,    (8.10)

which implies,

X(jkω0) → X(jω).    (8.11)


Then, the function, x(t) = limT →∞ x̃(t) can be represented by,
x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω.    (8.12)
The above equation completely represents an aperiodic function, x(t), in
terms of the weighted integral of complex exponential function, where the
weight X(jω) is a complex function of a continuous frequency variable, ω, in
the frequency domain.
Considering the fact that

X(jkω0) ≜ T ak,    (8.13)

we can uniquely obtain the weight function X(jω) from the function x(t) by taking the limit of X(jkω0) as ω0 → 0, as follows:

X(jω) ≜ lim_{ω0→0} X(jkω0) ≜ lim_{ω0→0} T ak = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.    (8.14)

The complex weight function, X(jω), which is called the Fourier trans-
form of the time domain function, x(t), generalizes the Fourier series represen-
tation of periodic functions to aperiodic functions. In the above rough formal-
ism, the idea of representing periodic functions by the weighted summation
of complex exponentials, is brilliantly extended to representing aperiodic func-
tions by weighted integral of complex exponentials.
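The limiting construction above can be made concrete numerically. The sketch below (NumPy only; the pulse width and the periods are illustrative choices) repeats the rectangular pulse x(t) = 1 for |t| < 1 with increasing period T, computes the scaled coefficients T·ak from the analysis equation, and confirms that they coincide, up to the numerical integration error, with samples of X(jω) = 2 sin(ω)/ω on the harmonic grid kω0 = k 2π/T, which becomes denser as T grows.

    import numpy as np

    T1 = 1.0                                  # pulse: x(t) = 1 for |t| < T1
    def X(w):
        w_safe = np.where(w == 0, 1.0, w)
        return np.where(w == 0, 2 * T1, 2 * np.sin(w * T1) / w_safe)

    for T in (4.0, 16.0, 64.0):
        w0 = 2 * np.pi / T
        k = np.arange(-50, 51)
        t = np.linspace(-T / 2, T / 2, 400001)
        dt = t[1] - t[0]
        xt = (np.abs(t) < T1).astype(float)   # one period of the repeated pulse
        # T * a_k, with a_k from the analysis equation (Riemann-sum approximation)
        Tak = np.array([np.sum(xt * np.exp(-1j * w0 * kk * t)) * dt for kk in k])
        err = np.max(np.abs(Tak - X(k * w0)))
        print(f"T = {T:5.1f}  w0 = {w0:.3f}  max |T*a_k - X(jk w0)| = {err:.1e}")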

8.2. Existence and Convergence of the Fourier Transforms: Dirichlet Conditions
The validity of the extension of the Fourier series of periodic signals to aperiodic
signals rely upon a very major assumption: We need to be able to uniquely
obtain the frequency domain function, X(jω), from the time domain function,
x(t) by the following integral:

X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.    (8.15)

The above integral exists when the function, x(t) satisfies the Dirichlet
conditions, which can be summarized as follows:
1. The function x(t) should have finite energy,
∫_{−∞}^{∞} |x(t)|^2 dt < ∞.    (8.16)

2. The function x(t) should have a finite number of maxima and minima
in a finite interval.
3. The function x(t) should have a finite number of discontinuities in a
finite interval and all the discontinuities are to be finite.
The Dirichlet conditions assure that the Fourier transform function, X(jω),
exists. In other words, the Fourier transform is finite;

X(jω) < ∞. (8.17)


Dirichlet conditions also assure the convergence of the periodic function x̃(t) to the aperiodic function x(t), as the period T = 2π/ω0 → ∞, where

x̃(t) = (1/T) Σ_{k=−∞}^{∞} X(jkω0) e^{jkω0 t}.    (8.18)

Formally speaking, the energy of the error e(t) between the functions x(t) and x̃(t),

e(t) = x(t) − x̃(t),    (8.19)

converges to 0:

lim_{T→∞} ∫_{−∞}^{∞} |e(t)|^2 dt → 0.    (8.20)

The satisfaction of the Dirichlet conditions is rather intuitive for the existence and convergence of the Fourier transform. Let us get a feeling about
the necessity and sufficiency of the Dirichlet conditions for existence and con-
vergence of the Fourier transform by a simple example given below, leaving
the formal proofs to the interested readers (Singh, P., Singhal, A., Fatimah,
B., Gupta, A., & Joshi, S. D. (2022). Proper definitions of Dirichlet condi-
tions and convergence of Fourier representations [lecture notes]. IEEE Signal
Processing Magazine, 39(5), 77-84.).

Exercise 8.1: Does the following function satisfy Dirichlet conditions?


x(t) = 1 / (4 − t^2).    (8.21)

Solution:
No! This function violates the first Dirichlet condition, mentioned above.
It has an infinite energy,
∫_{−∞}^{∞} | 1 / (4 − t^2) |^2 dt → ∞.    (8.22)

Indeed, the integral to obtain the weight function, X(jω), does not exist,
X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt = ∫_{−∞}^{∞} (1 / (4 − t^2)) e^{−jωt} dt → ∞.    (8.23)

8.3. Fourier Transforms


The extension of the Fourier series representation of periodic functions to ape-
riodic functions enables us to define Fourier transform, as follows:
Any continuous time function, x(t), satisfying Dirichlet conditions, can be
uniquely represented by the weighted integral of complex exponentials, called
synthesis equation, as follows:

x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω.    (8.24)

The weight, X(jw) of the synthesis equation can be uniquely obtained


from the weighted integral of complex conjugate exponentials, called analysis
equation, as follows:
X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt,    (8.25)

where the complex function, X(jw) is called the Fourier transform of x(t).
The weight of the complex exponential in the analysis equation is the function
x(t) itself.
Motivating question: What do analysis and synthesis equations
tell us?
Synthesis equation recovers a time domain phenomenon, represented by a
function, x(t) from its frequency content, where the amount of each frequency
w is measured by X(jw). Compared to the discrete spectral coefficients of
Fourier series representation, w is a continuous variable. Therefore, we can
continuously measure the frequency content of a time domain function, x(t)
by its Fourier transform, X(jw).
Note: The domains of x(t) and X(jw) are different. While the domain, w,
of X(jw) is frequency domain, the domain, t, of x(t) is time domain. The
argument of the Fourier transform X is not only the frequency variable, ω, but
jω to remind us that the function X(jw) is a complex function.
The analysis equation is even more interesting: It tells us that if we ob-
serve a physical phenomenon in the time domain, we can uniquely obtain its
representation in the frequency domain, where time completely disappears.
A physical phenomenon, which is represented by a function of time can be
uniquely represented by a function of frequency. In the frequency domain, a
physical phenomenon, such as music, speech, heartbeats, etc., is independent
of time, but it is represented by its frequency variations.
Time domain and frequency domain representations are one-to-one and
onto. Loosely speaking, if x(t) exists in the time domain, then X(jw) exists in
the frequency domain and vice versa. This fact is mathematically formalized
as follows:

x(t) ↔ X(jw). (8.26)


The time domain and the frequency domain representations show two dif-
ferent aspects of the same physical phenomenon, which is represented in two
different spaces.

8.4. Comparison of Fourier Series and Fourier Transform
Let us now conceptually compare the Fourier series of a periodic signal and
Fourier transform of an aperiodic signal. Recall that Fourier series representa-
tion of a continuous time periodic signal,

x(t) ↔ ak (8.27)

and the Fourier transform of a continuous time aperiodic signal

x(t) ↔ X(jω) (8.28)


are one-to-one and onto, provided that they satisfy the Dirichlet conditions.
Recall, also, that the Fourier transform of an aperiodic function is obtained by
assuming that an aperiodic function can be considered as a periodic function
of infinite period. In this sense, the Fourier transform can be considered as the
generalized form of the Fourier series, which is applicable to both periodic and
aperiodic signals.
Motivating question: Do the Fourier series and transform resemble each
other? What are the similarities and distinctions between the two representa-
tions in the frequency domain?
The major similarity is that spectral coefficients, ak of Fourier series and
the Fourier transform, X(jw) indicate the frequency content of the signal. In
both cases, the signal is represented as a function of its frequencies.
The major distinction between the Fourier series and Fourier transform
is that the spectral coefficients, {ak }, form a discrete function of k, whereas
X(jw) is a continuous function of frequency variable, w. While the Fourier
series representation provides us the amount of each harmonic frequency,
kω0 , which contributes to a periodic signal, Fourier transform, provides us the
amount of the frequency interval in a continuous band of frequencies, which
makes up the signal (Figure 8.2). Thus, instead of the harmonics at integer
values of k, we have frequency intervals of the Fourier transform function.

8.5. Frequency Content of Fourier Transform
The functions in the frequency domain bear some properties, which cannot
be observed and quantified in the time domain. In order to quantify these
properties we need to introduce new concepts and their formal definitions.
In particular, the behavior of a function is to be quantified in terms of its
frequency content. An important concept is the bandwidth of a function in
frequency domain, as defined below.
Definition: Bandwidth of a Function in the Frequency Domain: The frequency interval over which the Fourier transform X(jω) of a time domain function x(t) is nonzero is called the bandwidth.
When the time domain function is real valued, its Fourier transform is conjugate symmetric; in other words, if x(t) is real, then X(jω) = X*(−jω). For conjugate symmetric functions, the cutoff frequencies at the lower and upper bounds of the Fourier transform are −ωc and ωc. Hence, the bandwidth of a conjugate symmetric Fourier transform is ωbw = 2ωc.


Figure 8.2: Left figure illustrates the discrete spectral coefficients, ak of a pe-
riodic function, x(t). At each integer value of k, spectral coefficients {ak },
measure the amount of the harmonic frequency, kw0 , which is the integer mul-
tiple of angular frequency, w0 . Right figure illustrates the Fourier transform
of an aperiodic function. As we can observe, X(jw) is a continuous function of
the frequency variable ω. This time we can measure the amount of a frequency
in an interval, which is the area of the rectangle X(jω)·∆ω. Note that,
in this illustration, the Fourier transform X(jω) is zero outside the interval (−ωA, ωA).
This particular class of signals, called band-limited, has a special importance
to develop digital technologies.

Definition: Band-limited Functions


A function is called band-limited if its Fourier transform has zero values
outside a frequency interval. Mathematically speaking,

X(jw) = 0 for |w| > wc . (8.29)


A band-limited signal, such as, speech or music, consists of only a finite
band of frequencies. These signals can be easily created, stored, transmitted,
and/or processed by digital technologies for many real world applications.
Bandwidth of a signal determines the frequency content of the signal. The
larger the bandwidth is the more frequency it carries.

Definition 8.1: Cutoff Frequency of a Function: The cutoff frequency


ωc , is the frequency at the upper and lower limits of a band-limited function,
where beyond these frequencies the function has 0 values.
In the above definition, the cutoff frequency is an angular frequency, ω = 2π/T = 2πf, which is measured in radians per second. The fundamental period T is mea-
sured by seconds and the fundamental frequency f is measured by Hertz (cy-
cle/second).

Watch: Listen to Maria Callas, Luciano Pavarotti and others for examples of different human voice bandwidths @ https://384book.net/v0802

In the following examples we investigate the behavior of signals in the


frequency domain.

Exercise 8.2: Consider the following signal,

x(t) = e^{−a|t|}, a > 0.    (8.30)


a) Sketch this signal and give a real life example which can be approximated
by this signal.
b) Find and plot the Fourier transform of this signal.
c) Is this a band-limited signal? Comment on the frequency content of this
signal.

Solution:
a) This is an even signal, as shown in Figure 8.3. This type of function
can be used to approximate the expected amount of money we have left after
optimal gambling in a casino with non-favorable odds.

Figure 8.3: Sketch of the signal x(t) = e^{−a|t|}.

b) The Fourier transform of the signal is obtained by using the analysis equation:

X(jω) = ∫_{−∞}^{+∞} e^{−a|t|} e^{−jωt} dt = ∫_{−∞}^{0} e^{at} e^{−jωt} dt + ∫_{0}^{+∞} e^{−at} e^{−jωt} dt
      = 1/(a − jω) + 1/(a + jω)
      = 2a / (a^2 + ω^2).    (8.31)
Fourier transform, X(jω) does not have an imaginary part. Thus, it is a
real function, as illustrated in Figure 8.4.

Figure 8.4: Fourier transform of the signal x(t) depicted in Figure 8.3; X(jω) peaks at 2/a at ω = 0 and falls to 1/a at ω = ±a.

c) This is not a band-limited signal, because X(jω) ≠ 0 for −∞ < ω < ∞.


Recall that the frequency content of signals can be investigated from the
magnitude and phase spectrum of the Fourier transform. Since the Fourier
transform of this signal is a real function of frequency, the magnitude, |X(jω)| =
X(jω), and the phase ∡X(jω) = 0. Although the signal contains all frequencies, we
observe that the low frequency content dominates the signal, with a relatively
small proportion of the energy at the high frequencies |ω| > a.
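The closed form 2a/(a^2 + ω^2) can be confirmed by approximating the analysis integral numerically. In the sketch below (NumPy only) the decay rate a = 1.5 and the truncation window are illustrative choices; the truncated Riemann sum of the analysis equation matches the analytical transform closely.

    import numpy as np

    a = 1.5
    t = np.linspace(-40, 40, 200001)      # wide window: e^{-a|t|} is negligible at the edges
    dt = t[1] - t[0]
    x = np.exp(-a * np.abs(t))

    omega = np.linspace(-10, 10, 41)
    X_num = np.array([np.sum(x * np.exp(-1j * w * t)) * dt for w in omega])
    X_ref = 2 * a / (a**2 + omega**2)

    print(np.max(np.abs(X_num - X_ref)))  # small numerical integration error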

Exercise 8.3: Consider the following impulse function,

x(t) = δ(t − t0 ). (8.32)


a) Give a real life example, which can be approximated by this signal.
b) Find the Fourier transform for t0 ̸= 0 and
c) Find the Fourier transform for t0 = 0.
d) Is this a band-limited signal?

Solution:
a) This function approximates a physical phenomenon, such as a lightning pulse at t = t0, which acts for a short duration with a very high voltage.
b) Let us investigate the frequency domain representation of the impulse
function. For this purpose, we compute the Fourier transform by using the
analysis equation:
X(jω) = ∫_{−∞}^{∞} δ(t − t0) e^{−jωt} dt = e^{−jωt0}.    (8.33)

c) If the lightning occurs at t0 = 0, then, X(jw) = 1.


For this particular example, the Fourier transform, X(jw) is a real func-
tion. The magnitude is the same as the function itself, |X(jω)| = 1, for all
frequencies and the phase, ∡X(jω) = 0. This result indicates that a lightning
consists of all the frequencies with equal amounts. Formally speaking,

x(t) = δ(t) ↔ X(jω) = 1. (8.34)


d) This is not a band-limited signal, because X(jω) = 1 ≠ 0 or

X(jω) = e^{−jωt0} ≠ 0, for −∞ < ω < ∞.    (8.35)

Exercise 8.4: Consider the rectangular pulse signal, given below,


x(t) = 1 for |t| < T1,
x(t) = 0 for |t| > T1,    (8.36)
as shown in Figure 8.5 (a).
a) Give a real life example, which can be approximated by this signal.
b) Find and plot the Fourier transform of this signal.
c) Is this a band-limited signal? Comment on the frequency content of this
signal.

Solution:
a) Rectangular pulse signal can be approximately generated by opening
and closing a switch of an electrical circuit,
b) The analysis equation gives us the Fourier transform of x(t),

X(jω) = ∫_{−T1}^{T1} e^{−jωt} dt = 2 sin(ωT1) / ω,    (8.37)
as sketched in Figure 8.5 (b).
c) This is not a band-limited signal, because X(jω) ≠ 0 for −∞ < ω < ∞.
The analysis of Figure 8.5 reveals that the time domain signal has a lim-
ited duration. The frequency content of the signal alternates and attenuates,
as ω → ∞. The high-frequency components of the signal correspond to the
discontinuities of the time domain function at −T1 and T1. Notice that as T1
approaches 0, the Fourier transform X(jω) gets flatter.

Figure 8.5: (a) The rectangular pulse signal of the example and (b) its Fourier
transform, which peaks at 2T1 at ω = 0 and has its first zero crossings at ω = ±π/T1.

Exercise 8.5: Consider the signal x(t) whose Fourier transform is given be-
low,
X(jω) = 1 for |ω| < W,
X(jω) = 0 for |ω| > W.    (8.38)
a) Find the signal x(t).
b) Is this a band-limited signal? If yes, what is the bandwidth? Comment
on the frequency content of this signal.

Solution:
a) This transform is illustrated in Figure 8.6(a). Using the synthesis equation, we can determine the signal x(t) in the time domain,

x(t) = (1/2π) ∫_{−W}^{W} e^{jωt} dω = sin(Wt) / (πt),    (8.39)

which is depicted in Figure 8.6(b).
b) This is a band-limited signal, because X(jω) = 0 for |ω| > W. The bandwidth of this signal is 2W.
The analysis of Figure 8.6 reveals that the frequency domain signal contains only low frequencies, |ω| < W. The time domain counterpart x(t) alternates and attenuates as t → ±∞. Notice that as W approaches 0, the time domain function x(t) gets flatter.

320
Figure 8.6: Fourier transform pair of the example: (a) the Fourier transform X(jω) and (b) the corresponding time domain function x(t), which peaks at W/π at t = 0 and has its first zero crossings at t = ±π/W.

Comparison of Figure 8.5 and Figure 8.6 shows the beautiful duality
between the analytical shapes of the functions in the time and frequency domains.

Sine Cardinal (Sinc) Function: The functions that appeared in the time and frequency domains in the above examples,

x(t) = sin(Wt) / (πt)  and  X(jω) = sin(ωT1) / ω,

are called sine cardinal, or in short, sinc functions. The general form of the normalized sinc function is

x(t) = sin(πt) / (πt).

This even function is maximum at t = 0 in the time domain and at ω = 0 in the frequency domain, and it keeps attenuating as t → ±∞. Replacing x(t) by h(t) and X(jω) by H(jω), we obtain an LTI system which, later in Chapter 10, we call the ideal low-pass filter:

h(t) = sin(ωc t) / (πt) ↔ H(jω) = 1 for |ω| < ωc, 0 for |ω| > ωc.    (8.40)

The sinc function is very important in establishing the relationship between the continuous time and discrete time worlds, which lies at the foundation of the entire digital technology, as we shall see in the Sampling theorems of Chapters 11 and 12.
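The sinc pair in Equation 8.40 can also be checked numerically. The sketch below (NumPy only; the cutoff ωc = 2 and the grids are illustrative choices) approximates the synthesis integral of the ideal low-pass characteristic by a Riemann sum over the passband and compares it with sin(ωc t)/(πt).

    import numpy as np

    wc = 2.0                              # cutoff of the ideal low-pass filter
    omega = np.linspace(-wc, wc, 100001)  # H(jw) = 1 on |w| < wc, 0 elsewhere
    dw = omega[1] - omega[0]

    t = np.linspace(-10, 10, 201)         # includes t = 0
    # h(t) = (1/2pi) * integral_{-wc}^{wc} e^{jwt} dw
    h_num = np.array([np.sum(np.exp(1j * omega * ti)) * dw for ti in t]) / (2 * np.pi)

    # Closed form sin(wc t)/(pi t), with the t = 0 limit wc/pi handled explicitly
    safe_t = np.where(t == 0, 1.0, t)
    h_ref = np.where(t == 0, wc / np.pi, np.sin(wc * t) / (np.pi * safe_t))

    print(np.max(np.abs(h_num - h_ref)))  # close to zero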

8.6. Representation of LTI Systems in Frequency Domain by Frequency Response
In Chapter 6, we have seen that the response of an LTI system to the complex
exponential input with the fundamental frequency ω0 ,

x(t) = ejω0 t (8.41)


is,

y(t) = H(jω0 )ejω0 t , (8.42)


where the eigenfunction ejω0 t is scaled by the eigenvalue of an LTI system,
defined as,
H(jω0) = ∫_{−∞}^{∞} h(t) e^{−jω0 t} dt.    (8.43)
Note that, for ω0 → ω, the above eigenvalue converges to the Fourier
transform of the impulse response. Considering the fact that impulse response
uniquely represents an LTI system in the time domain, its Fourier transform
is very crucial to represent the LTI system in the frequency domain. Hence, it
requires a special name, called frequency response, as defined below.
Definition: Frequency response of a continuous time LTI system is
defined as the Fourier transform of the impulse response. Mathematically, fre-
quency response of a continuous time LTI system is,
H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt.    (8.44)
Therefore, impulse response and frequency response,

h(t) ↔ H(jω) (8.45)


are one-to-one and onto representation of the same LTI system in two different
domains, namely, in time and frequency domains. While the time dependent
properties of the LTI system are investigated by its impulse response or by the
corresponding differential equation, the frequency shaping properties of the
LTI system are investigated by analyzing the frequency response, which is the
Fourier Transform of the impulse response.
Note that the eigenvalues H(jkω0) of a continuous time LTI system, for each harmonic frequency kω0 and all integer values of k, are specific instances of the frequency response H(jω). In other words, the eigenvalues are the values of the frequency response at ω = kω0, ∀k.
Recall that Fourier transform is a complex-valued function. In polar coordi-
nate system, Fourier transform of the impulse response, namely the frequency
response, is represented by,

H(jω) = |H(jω)|ej∡H(jω) , (8.46)


where the real-valued functions |H(jω)| and ∡H(jω) are called the magnitude
and phase spectrum, respectively. Analysis of the Fourier transform of a function
requires the analysis of its magnitude and phase spectra.
Motivating question: What does the magnitude and phase spectrum
of the Fourier transform of the impulse response indicate?
Generally speaking, the magnitude and phase spectrum of the frequency re-
sponse H(jw) indicate the frequency content of the impulse response function,
h(t).
Therefore,
• Magnitude spectrum of the Fourier transform, |H(jω)|, determines the rel-
ative presence of frequencies in the impulse response. The magnitude spec-
trum of the Fourier transform is simply how much each frequency component
is contributing to form this function.
• Phase spectrum of the Fourier transform, ∡H(jω) determines how the fre-
quencies line up relative to one another. The phase spectrum of the Fourier
transform is simply how much the frequency response delays a frequency
component of an input signal.
In the following example, we investigate the behavior of an LTI system in
various one-to-one and onto representations, namely, as impulse and frequency
responses, and differential equations.

Exercise 8.6: Consider a continuous time LTI system, represented by the


following impulse response:

h(t) = e−at u(t), a > 0. (8.47)


a) Find the differential equation which represents this system.
b) What type of physical phenomenon does this system represent?
c) Find the Fourier transform of the impulse response and comment on the
frequency content of this system.
Assume that the system is initially at rest at t = 0.

Solution:
We have seen this impulse response in the previous chapters, where the
plot in time domain is shown in Figure 8.7.

a) This impulse response represents a first order differential equation, given
below;

dy(t)/dt + a y(t) = x(t),    (8.48)
assuming that the system is initially at rest, with x(t) = 0 and y(t) = 0 for
t ≤ 0.

Figure 8.7: The exponential impulse response obtained at the output of an LTI system represented by the first order differential equation dy(t)/dt + a y(t) = x(t); it decays to 1/e at t = 1/a.

b) For example, this system may approximately represent the velocity decay
of a car, running with a unit speed. At time t = 0, if we stop pushing the gas
pedal, the velocity will decay and approach to 0. The decay rate, a, depends
on the environmental conditions and the properties of the car.
c) Let us investigate the structure of the phenomenon represented by h(t)
in frequency domain, using the analysis equation:
H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt = ∫_{0}^{∞} e^{−at} e^{−jωt} dt = 1 / (a + jω).    (8.49)
Since H(jw) is a complex function, we need to find the magnitude and the
phase of this function:
|H(jω)| = 1 / √(a^2 + ω^2),
∡H(jω) = −tan^{−1}(ω/a).    (8.50)
Magnitude and phase plot of H(jw) is shown in Figure 8.8.
The magnitude spectrum is relatively high for |ω| ≤ a, compared to the
frequencies outside of this interval. It converges to zero as the absolute value
of the angular frequency, |ω| is increased. Thus, this LTI system passes the

Figure 8.8: Magnitude and phase spectrum of the frequency response H(jω) = 1/(a + jω). The magnitude falls from 1/a at ω = 0 to 1/(a√2) at ω = ±a; the phase goes from ∓π/4 at ω = ±a toward ∓π/2 as |ω| → ∞.

low frequency components and suppresses the high frequency components of


the input signal (Figure 8.8). The phase spectrum shows that the LTI system
delays the high frequencies more compared to the low frequencies of an input
signal.
The decay rate a of the impulse response determines the bandwidth of the frequency response: the higher the decay rate, the larger the bandwidth of the magnitude spectrum. Hence, this LTI system passes more frequencies as a increases.
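The magnitude and phase spectra in Figure 8.8 are easy to reproduce. The sketch below (NumPy and Matplotlib; a = 2 is an illustrative value) evaluates H(jω) = 1/(a + jω) on a frequency grid, plots |H(jω)| and ∡H(jω), and spot-checks them against the closed forms in Equation 8.50.

    import numpy as np
    import matplotlib.pyplot as plt

    a = 2.0
    omega = np.linspace(-20, 20, 2001)
    H = 1.0 / (a + 1j * omega)

    fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
    ax_mag.plot(omega, np.abs(H))             # |H(jw)| = 1/sqrt(a^2 + w^2)
    ax_mag.set_ylabel("|H(jw)|")
    ax_ph.plot(omega, np.angle(H))            # angle = -arctan(w/a)
    ax_ph.set_ylabel("phase [rad]")
    ax_ph.set_xlabel("w [rad/s]")
    plt.show()

    print(np.allclose(np.abs(H), 1 / np.sqrt(a**2 + omega**2)))   # True
    print(np.allclose(np.angle(H), -np.arctan(omega / a)))        # True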

8.7. Relationship Between the Fourier Series and Fourier Transform of Periodic Functions
Suppose that we are given a continuous time periodic function, x(t), with the
corresponding spectral coefficients,

x(t) ↔ ak . (8.51)
Suppose, also, that the Fourier Transform of the function, x(t), is,

x(t) ↔ X(jω). (8.52)

Motivating question: What is the relationship between the spectral co-
efficients, ak and Fourier transform X(jω) of x(t)?
In order to find an answer to the above question, let us solve the following
exercise:

Exercise 8.7: Consider the following impulse train in frequency domain,



X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0).    (8.53)

Find the inverse Fourier transform, x(t), of the above frequency domain
signal.

Solution: The frequency domain function X(jω) is a train of impulses located at integer multiples of ω0. Let us find the inverse Fourier transform, x(t), by using the synthesis equation.
Inserting the right hand side of the Fourier transform, X(jω), into the synthesis
equation, we obtain,
x(t) = (1/2π) ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0) e^{jωt} dω.    (8.54)

Note: Finding the inverse Fourier transform involves taking the weighted integral of a complex function, X(jω). This may not be an easy task and may require contour integration. In order to avoid taking complex integrals, most of the time we employ look-up tables. However, for this particular function, we can rearrange the above equation to obtain

x(t) = (1/2π) ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0) e^{jωt} dω = Σ_{k=−∞}^{∞} ak ∫_{−∞}^{∞} δ(ω − kω0) e^{jωt} dω.    (8.55)
The integral on the right hand side of the above equation is easy to evaluate:

∫_{−∞}^{∞} δ(ω − kω0) e^{jωt} dω = e^{jkω0 t}.    (8.56)
Replacing the result of this integral in Equation 8.55, we obtain

x(t) = (1/2π) ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0) e^{jωt} dω = Σ_{k=−∞}^{∞} ak e^{jkω0 t}.    (8.57)

Interestingly, the right hand side of the above equation is the Fourier series representation of the function x(t), which is periodic with fundamental period T = 2π/ω0.
Comparing the Fourier transform and the Fourier series of x(t),

x(t) = Σ_{k=−∞}^{∞} ak e^{jkω0 t} ↔ X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0),    (8.58)

provides us the relationship between the Fourier transform and the spectral coefficients of a periodic signal x(t), as follows:

X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0).    (8.59)

Above relationship between the Fourier series coefficients of a periodic signal


and its Fourier transform, shows that the Fourier transform of a periodic signal
is the weighted sum of shifted impulses, where the weights are the spectral
coefficients, ak . Let us not forget the constant multiplicative factor 2π.

Exercise 8.8: Consider the following periodic signal,

x(t) = sin(w0 t). (8.60)


a) Find and plot the Fourier series representation of x(t).
b) Find and plot the Fourier transform of x(t).
c) Is this a band-limited signal? If yes, find the bandwidth. Comment on
the frequency content of this signal.

Solution:
a) Recall that there are only two nonzero spectral coefficients of this func-
tion:

a1 = −a−1 = 1/(2j).
b) We can easily compute the Fourier transform by inserting the spectral coefficients into Equation 8.59, as follows:

X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0) = (π/j)(δ(ω − ω0) − δ(ω + ω0)).    (8.61)

The plots of spectral coefficients, ak vs. k and Fourier transform, X(jω) vs ω


is shown in Figure 8.9. Note that both the spectral coefficients and the Fourier
transform consists of two impulse functions. While the spectral coefficients
consists of two discrete shifted impulse functions, the Fourier transform consists
of two shifted continuous impulses. The amount of shift of the impulse functions
in the spectral coefficients is integer valued. On the other hand, the amount of shift of the continuous impulse functions in the Fourier transform is the angular frequency, |ω0| = 2π/T.


Figure 8.9: Comparison of the spectral coefficients and Fourier transform of


the periodic signal x(t) = sin(w0 t). While the spectral coefficients are discrete
impulses, the Fourier transform is continuous impulses.

c) This is a band-limited signal, where the bandwidth is 2ω0 .


Interestingly, Fourier transform of the sinusoidal signal, x(t) = sin ω0 t,
which ranges −∞ < t < ∞, generates a pair of impulse functions, providing
a very compact representation in the frequency domain. This behavior of the
Fourier transform enables us to compress time domain signals by storing them
compactly, in the frequency domain.

Exercise 8.9: Suppose that the impulse train function in time domain is
given by the following equation;

X
x(t) = δ(t − kT ). (8.62)
k=−∞

a) Find the Fourier series representation of the impulse train, x(t).


b) Find the Fourier transform of the impulse train, x(t).
c) Compare the function in time domain x(t), its spectral coefficients ak
and its Fourier transform X(jω).

Solution:
a) The Fourier series representation of x(t) is

ak = (1/T) ∫_T x(t) e^{−jkω0 t} dt = (1/T) ∫_T δ(t) e^{−jkω0 t} dt = 1/T.    (8.63)
b) By using the Fourier series coefficients, we can find the Fourier transform
X(jw), as follows:

X(jω) = Σ_{k=−∞}^{∞} 2π ak δ(ω − kω0) = (2π/T) Σ_{k=−∞}^{∞} δ(ω − k 2π/T).    (8.64)

Figure 8.10: The impulse train in the time domain (a), its spectral coefficients, equal to 1/T for all k (b), and its Fourier transform, an impulse train with spacing 2π/T (c).

c) Interestingly, the impulse train function in the time domain has an im-
pulse train function in the frequency domain. While the Fourier transform
of a continuous time impulse train is a continuous frequency impulse train,
its Fourier series representation is a discrete impulse train. Thus, the impulse
train preserves its analytical form in both frequency and time domains. In other
words, the periodic function impulse train is always impulse train in both time
and frequency domains, where
• the fundamental period of the time domain function, x(t), is T = 2π/ω0,
• the fundamental period of the Fourier transform, X(jω), in the frequency domain is ω = ω0 = 2π/T,
• the fundamental period of the spectral coefficients, ak, is k = 1.

Note: A smaller period in the time domain results in a larger period, ω0 = 2π/T, in the frequency domain.
This conservative behavior of the impulse train function, which preserves
the analytical form in both domains, breaks the thick wall between the time
and frequency domains and opens a wide window to the digital era, as we shall
see in Sampling theorem of Chapter 11 and 12.

8.8. Properties of Fourier Transform for Continuous Time Signals and Systems
Watch: The sound of sine waves and their Fourier transforms @ https://384book.net/v0803

So far, we have seen that Fourier transforms map a time domain function
into a new domain, called frequency domain. In this domain, the time variable
disappears and the function is represented in terms of a continuous variable of
frequencies by the following analysis equation,
X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt,    (8.65)
where the time domain function can be uniquely obtained from its fre-
quency domain representation by the synthesis equation,
x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω.    (8.66)
Recall that the complex exponential function in the above equations represents a waveform of frequency ω via the Euler formula,

ejωt = cos ωt + j sin ωt. (8.67)


Loosely speaking, the Fourier analysis and synthesis equations reveal that
all the functions satisfying Dirichlet conditions can be described by gather-
ing uncountably infinite wave forms with varying frequencies. Therefore, the
Fourier transform gives us a unique and powerful way of viewing a physical

phenomenon in time domain, in terms of weighted integral of wave forms, where
the weights are the Fourier transform function X(jω).

Watch: Learn more about Euler's identity @ https://384book.net/v0804

In this section, we shall investigate the properties of this marvelous tool of


Fourier transform for the continuous time signals and systems. We shall use
the properties to switch between the continuous time and frequency domains
without taking the integrals of analysis and synthesis equations. We shall study
the frequency content of the aperiodic continuous time signals. We shall design
and implement LTI systems in the time and frequency domains for reshaping
the continuous time signals.

8.8.1. Basic Properties of Continuous Time Fourier Transform
Recall that Fourier transform is the extension of Fourier series, where an ape-
riodic function is represented as a periodic function of infinite period. As the
period approaches infinity, the Fourier series of a periodic function converges
to the Fourier transform of an aperiodic function. Consequently, most prop-
erties of Fourier transform resemble the properties of Fourier series. A list of
properties are provided in Table 8.1.

Table 8.1: Basic properties of the Fourier transform.

Synthesis equation: x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω; analysis equation: X(jω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt (can also be written in terms of the frequency f = ω/2π).

Given x(t) ↔ X(jω) and y(t) ↔ Y(jω):

• Linearity: a x(t) + b y(t) ↔ a X(jω) + b Y(jω)
• Time shifting: x(t − t0) ↔ e^{−jωt0} X(jω)
• Frequency shifting: e^{jω0 t} x(t) ↔ X(j(ω − ω0))
• Conjugation: x*(t) ↔ X*(−jω)
• Time reversal: x(−t) ↔ X(−jω)
• Time and frequency scaling: x(at) ↔ (1/|a|) X(jω/a)
• Convolution: x(t) ∗ y(t) ↔ X(jω) Y(jω)
• Multiplication: x(t) y(t) ↔ (1/2π) X(jω) ∗ Y(jω)
• Differentiation in time: d x(t)/dt ↔ jω X(jω)
• Integration: ∫_{−∞}^{t} x(τ) dτ ↔ (1/jω) X(jω) + π X(0) δ(ω)
• Differentiation in frequency: t x(t) ↔ j d X(jω)/dω
• For real-valued x(t): X(jω) = X*(−jω), Re{X(jω)} = Re{X(−jω)}, Im{X(jω)} = −Im{X(−jω)}, |X(jω)| = |X(−jω)|, ∡X(jω) = −∡X(−jω)
• Even part of a real x(t) ↔ Re{X(jω)}; odd part of a real x(t) ↔ j Im{X(jω)}
• Duality: if f(u) = ∫_{−∞}^{∞} g(v) e^{−juv} dv, then g(t) ↔ f(jω) and f(t) ↔ 2π g(−jω)
• Parseval's relation for non-periodic signals: ∫_{−∞}^{∞} |x(t)|^2 dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|^2 dω
Table 8.2: Fourier transform pairs of popular continuous time functions.

Σ_{k=-∞}^{∞} a_k e^{jkω_0 t}  ↔  2π Σ_{k=-∞}^{∞} a_k δ(ω - kω_0)
K (constant)  ↔  2πK δ(ω)
δ(t)  ↔  1
δ(t - t_0)  ↔  e^{-jω t_0}
Σ_{n=-∞}^{∞} δ(t - nT)  ↔  (2π/T) Σ_{k=-∞}^{∞} δ(ω - 2πk/T)
cos ω_0 t  ↔  π[δ(ω - ω_0) + δ(ω + ω_0)]
sin ω_0 t  ↔  (π/j)[δ(ω - ω_0) - δ(ω + ω_0)]
e^{jω_0 t}  ↔  2π δ(ω - ω_0)
u(t)  ↔  π δ(ω) + 1/(jω)
u(t + T_1) - u(t - T_1)  (i.e., x(t) = 1 for |t| < T_1, 0 for |t| > T_1)  ↔  2 sin(ωT_1)/ω
sign(t) = t/|t|  ↔  2/(jω)
1/(πt)  ↔  -j sign(ω)
t u(t)  ↔  jπ δ'(ω) - 1/ω^2
t^n  ↔  2π (j)^n δ^{(n)}(ω)
e^{-αt} u(t), Re{α} > 0  ↔  1/(jω + α)
e^{-α|t|}, Re{α} > 0  ↔  2α/(ω^2 + α^2)
(1/(β - α))[e^{-αt} - e^{-βt}] u(t), Re{α} > 0, Re{β} > 0  ↔  1/((jω + α)(jω + β))
t e^{-αt} u(t), Re{α} > 0  ↔  1/(jω + α)^2
(t^{n-1}/(n-1)!) e^{-αt} u(t)  ↔  1/(jω + α)^n
sin(Wt)/(πt)  ↔  u(ω + W) - u(ω - W)
e^{-(αt)^2}, α > 0  ↔  (√π/α) e^{-(ω/2α)^2}
e^{-αt} sin(ω_0 t) u(t), Re{α} > 0  ↔  ω_0/((jω + α)^2 + ω_0^2)
e^{αt} sin(ω_0 t) u(-t), Re{α} > 0  ↔  -ω_0/((α - jω)^2 + ω_0^2)
e^{-αt} cos(ω_0 t) u(t), Re{α} > 0  ↔  (α + jω)/((jω + α)^2 + ω_0^2)
e^{αt} cos(ω_0 t) u(-t), Re{α} > 0  ↔  (α - jω)/((α - jω)^2 + ω_0^2)
(cos ω_0 t)[u(t + T_1) - u(t - T_1)]  ↔  T_1[ sin((ω - ω_0)T_1)/((ω - ω_0)T_1) + sin((ω + ω_0)T_1)/((ω + ω_0)T_1) ]
Periodic square wave with period T (x(t) = 1 for |t| < T_1, 0 for T_1 < |t| ≤ T/2)  ↔  Σ_{k=-∞}^{∞} (2 sin(kω_0 T_1)/k) δ(ω - kω_0)

The properties of the Fourier transform not only give us insight into the frequency content of a physical phenomenon, but they also allow us to observe the similarities and distinctions between the behavior of the time domain and frequency domain functions which represent that phenomenon. Furthermore, they help us to compute the analysis and synthesis equations in an efficient way.
Considering the fact that the Fourier transform and its inverse require taking integrals of complex functions, computing the analysis and synthesis equations may not be easy. In order to simplify the computations, we provide the Fourier transform pairs of popular functions in Table 8.2. The properties of Table 8.1 and the look-up pairs of Table 8.2 enable us to find the Fourier transforms of complicated functions, and their inverses, without evaluating the integrals in most problems.
The properties can be proved directly by employing the Fourier transform analysis and synthesis equations. For this reason, we shall not provide a rigorous proof for each of the properties. Instead, we roughly sketch how they can be proved. The reader is strongly recommended to prove all the properties and derive the Fourier transform pairs given in Tables 8.1 and 8.2.
In the following, we study a selected set of the properties to grasp the time
and frequency domain representations and their relationship.
1) Linearity: Fourier transform is a linear transform. Mathematically speak-
ing, if we have two functions and their corresponding Fourier transforms,

x(t) ←→ X(jω) (8.68)


and

y(t) ←→ Y (jω), (8.69)


then,

ax(t) + by(t) ←→ aX(jω) + bY (jω). (8.70)


Thus, the superposition is transformed from the time to the frequency
domain and vice versa. In other words, if we superpose two signals in the
time domain, the same superposition applies in the frequency domain.
This property follows from the fact that integral is a linear operator.

2) Time Shifting: If the function x(t) is shifted in time by a constant amount, t_0 ∈ ℝ, its Fourier transform X(jω) is multiplied by e^{-jω t_0}, as follows:

x(t - t_0) ←→ e^{-jω t_0} X(jω).   (8.71)

This property can be proved by defining y(t) = x(t - t_0) and inserting the shifted signal into the analysis equation,

Y(jω) = ∫_{-∞}^{∞} x(t - t_0) e^{-jωt} dt.   (8.72)

Changing the variable of integration to τ = t - t_0 gives

Y(jω) = ∫_{-∞}^{∞} x(τ) e^{-jω(τ + t_0)} dτ = e^{-jω t_0} X(jω).   (8.73)

Note: The complex exponential e^{-jω t_0} has a magnitude of 1. Thus, a time delay alters the phase of the frequency domain signal X(jω), but not its magnitude. As a result, a time delay does not change the frequency content of the signal at all.
In order to illustrate the use of the linearity and time-shift properties, let
us solve the following example:

Exercise 8.10: Consider the following signal,

x(t) = { 1,   if 1 < t < 2 or 3 < t < 4,
       { 1.5, if 2 < t < 3,                      (8.74)
       { 0,   otherwise.

a) Plot and decompose x(t) as a linear combination of two rectangular pulse signals in the time domain.
b) Use the above decomposition to find and plot the magnitude and phase of the Fourier transform X(jω).

Solution:
a) Figure 8.11a shows the plot of x(t). We observe that x(t) can be expressed as the linear combination of two signals,

x(t) = (1/2) x_1(t - 2.5) + x_2(t - 2.5),   (8.75)

where x_1(t) and x_2(t) are the rectangular pulse signals shown in Figure 8.11b and c.
b) The Fourier transforms of the rectangular pulses x_1(t) and x_2(t) are,

x_1(t) ↔ X_1(jω) = 2 sin(ω/2)/ω   (8.76)

and

x_2(t) ↔ X_2(jω) = 2 sin(3ω/2)/ω,   (8.77)

respectively.
Equation (8.75) shows that for both x_1(t) and x_2(t) the time shift is t_0 = 5/2. In order to find the Fourier transform X(jω), we multiply X_1(jω) and X_2(jω) by e^{-jω t_0} = e^{-j5ω/2}. Using the linearity and time-shift properties, the Fourier transform becomes,

X(jω) = e^{-j5ω/2} [sin(ω/2) + 2 sin(3ω/2)]/ω.   (8.78)

Figure 8.11: Decomposing a signal into the linear combination of two simpler signals, x_1(t) and x_2(t). (a) The signal x(t) = (1/2) x_1(t - 2.5) + x_2(t - 2.5); (b) and (c) the signals x_1(t) and x_2(t), which are used to represent x(t).

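The closed form in Equation (8.78) can be cross-checked numerically. The short Python sketch below is not part of the original text; it assumes NumPy is available and uses an arbitrary time grid. It approximates the analysis integral for the piecewise signal of Equation (8.74) and compares the result with Equation (8.78):

import numpy as np

t = np.linspace(0, 5, 20001)                      # x(t) vanishes outside (1, 4)
x = np.where((t > 1) & (t < 2), 1.0, 0.0) \
  + np.where((t > 2) & (t < 3), 1.5, 0.0) \
  + np.where((t > 3) & (t < 4), 1.0, 0.0)

omega = np.linspace(-20, 20, 401)
omega = omega[omega != 0]                         # avoid 0/0 in the closed form

# numerical analysis equation: X(jw) = integral of x(t) exp(-j w t) dt
X_num = np.array([np.trapz(x * np.exp(-1j * w * t), t) for w in omega])

# closed form from Equation (8.78)
X_ref = np.exp(-1j * 5 * omega / 2) * (np.sin(omega / 2) + 2 * np.sin(3 * omega / 2)) / omega

print(np.max(np.abs(X_num - X_ref)))              # small (discretization error only)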
Exercise 8.11: Consider the following signal,

x(t) = cos(ω_0 t + ϕ).   (8.79)

a) Find the Fourier transform of x(t) for ϕ = 0.
b) Find the Fourier transform of x(t) for ϕ ≠ 0.
c) Compare the results of parts a and b.

Solution:
a) For ϕ = 0, the function is

x(t) = cos ω0 t. (8.80)


The spectral coefficients of this function are a_1 = a_{-1} = 1/2. The Fourier transform of this function is,

X(jω) = π(δ(ω + ω_0) + δ(ω - ω_0)).   (8.81)


b) For ϕ ≠ 0, let us rewrite the function as follows,

x(t) = cos(ω_0 (t + ϕ/ω_0))   (8.82)

and apply the time shift property,

X(jω) = π e^{jωϕ/ω_0} (δ(ω + ω_0) + δ(ω - ω_0)).   (8.83)

Note that there are only two non-zero values of X(jω), one at ω = ω_0 and the other at ω = -ω_0. Therefore,

X(jω) = π(e^{-jϕ} δ(ω + ω_0) + e^{jϕ} δ(ω - ω_0)).   (8.84)

c) Comparing the results of parts (a) and (b), we observe that a phase shift of ϕ in the time domain does not change the frequency content of the signal; it only multiplies the amplitude at ω = ω_0 by e^{jϕ} and the amplitude at ω = -ω_0 by e^{-jϕ}.
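The conclusion of part (c) can be illustrated with a few lines of Python. The sketch below is an illustration only; the sampling rate, duration, tone frequency and phase are assumed values. It shows that the magnitude spectrum of cos(ω_0 t + ϕ) is independent of ϕ, while the phase of the component at ω_0 changes by ϕ:

import numpy as np

fs, T = 1000.0, 1.0                        # sampling rate and duration (assumed)
t = np.arange(0, T, 1 / fs)
w0, phi = 2 * np.pi * 50, np.pi / 3        # a 50 Hz tone and an arbitrary phase

X0 = np.fft.rfft(np.cos(w0 * t))
X1 = np.fft.rfft(np.cos(w0 * t + phi))

print(np.allclose(np.abs(X0), np.abs(X1), atol=1e-6))   # True: magnitudes agree
k = np.argmax(np.abs(X0))                                # bin of the 50 Hz tone
print(np.angle(X1[k]) - np.angle(X0[k]), phi)            # both approximately pi/3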

3) Time Scale: If the time variable of the function x(t) is scaled by a constant amount, a ∈ ℝ, its Fourier transform has the following form;

x(at) ←→ (1/|a|) X(jω/a).   (8.85)

The Fourier analysis integral yields different expressions for positive and negative values of a. This is basically because the change of variable τ = at flips the limits of integration when a is negative:
For a > 0, the analysis equation becomes,

Y(jω) = ∫_{-∞}^{∞} x(at) e^{-jωt} dt = (1/a) ∫_{-∞}^{∞} x(τ) e^{-jωτ/a} dτ = (1/a) X(jω/a).   (8.86)

For a < 0, the analysis equation becomes,

Y(jω) = ∫_{-∞}^{∞} x(at) e^{-jωt} dt = -(1/a) ∫_{-∞}^{∞} x(τ) e^{-jωτ/a} dτ = -(1/a) X(jω/a).   (8.87)

The above derivation shows the reason for the absolute value 1/|a|, which multiplies the frequency scaled Fourier transform.

Exercise 8.12: Given a signal in the time and frequency domains as,

x(t) ↔ X(jω),

find the Fourier transform of

y(t) = x(3t + 7)   (8.88)

in terms of X(jω).

Solution:
The signal y(t) is both a time scaled and a time shifted version of x(t), and it can be written in the following form;

y(t) = x(3(t + 7/3)).   (8.89)

Using the time shift and time scale properties with a = 3 and t_0 = -7/3, we find that,

y(t) = x(3t + 7) ←→ Y(jω) = (1/3) e^{j7ω/3} X(jω/3).   (8.90)
4) Time Reversal: A special case of time scaling is time reversal. Applying the time scale property for a = -1, we observe that reversing a signal in time also reverses the frequencies in the Fourier transform:

x(-t) ←→ X(-jω).   (8.91)

Note: Reversing a real signal in time, i.e., playing it from the end, does not change the magnitude of its Fourier transform, but only its phase.

Exercise 8.13: Consider the following continuous time signal, with a > 0:

x(t) = e^{-a|t|}.   (8.92)

Find the Fourier transform of this signal without evaluating the analysis integral.

Solution:
First, we split x(t) into two parts as follows:

x(t) = e^{-a|t|} = e^{-at} u(t) + e^{at} u(-t).   (8.93)

Let us define,

x_1(t) = e^{-at} u(t)  and  x_2(t) = e^{at} u(-t).   (8.94)

Then, by using the linearity property,

X(jω) = X_1(jω) + X_2(jω).   (8.95)

By using the look-up table of Fourier transform pairs, we find

X_1(jω) = 1/(a + jω)   (8.96)

and

X_2(jω) = X_1(-jω) = 1/(a - jω).   (8.97)

Therefore, the Fourier transform of x(t) is,

X(jω) = X_1(jω) + X_2(jω) = 2a/(a^2 + ω^2).   (8.98)
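As a quick sanity check, the transform pair of Equation (8.98) can be verified numerically. The following Python sketch is an illustration with assumed values of a and of the integration grid; it approximates the analysis integral and compares it with 2a/(a² + ω²):

import numpy as np

a = 1.5                                    # any a > 0 (assumed value)
t = np.linspace(-25, 25, 100001)           # wide grid: exp(-a*25) is negligible
x = np.exp(-a * np.abs(t))

omega = np.linspace(-10, 10, 201)
X_num = np.array([np.trapz(x * np.exp(-1j * w * t), t) for w in omega])
X_ref = 2 * a / (a**2 + omega**2)

print(np.max(np.abs(X_num - X_ref)))       # small; here X(jw) is real and even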

5) Conjugate Symmetry: In most practical applications, such as biological


signals, speech, music, the time domain function, x(t) is real. However,
when we design an electrical circuit, we face complex signals. There is an
interesting relationship between the time and frequency representation of
complex signals:

x(t) ←→ X(jω) ⇐⇒ x∗ (t) ←→ X ∗ (−jω), (8.99)


where (∗) indicates the complex conjugate operation. This result can eas-
ily be shown from the analysis equation.
Note: Conjugacy is preserved in both time and frequency domain repre-
sentation of signals.

6) Differentiation Property: Taking the derivative of a signal in the time domain corresponds to multiplying its Fourier transform by jω in the frequency domain:

dx(t)/dt ←→ jω X(jω).   (8.100)

The above property can easily be shown by taking the derivative of both sides of the synthesis equation, as follows;

dx(t)/dt = (1/2π) d/dt ∫_{-∞}^{∞} X(jω) e^{jωt} dω = (1/2π) ∫_{-∞}^{∞} (jω) X(jω) e^{jωt} dω.   (8.101)

This property can be generalized to the nth derivative as follows:

d^n x(t)/dt^n ←→ (jω)^n X(jω).   (8.102)
Note: If we have an nth order differential equation in the time domain, its
Fourier transform gives us an nth order algebraic equation, with the pow-
ers of (jω). Therefore, solving a differential equation in the time domain
is equivalent to solving an algebraic equation in the frequency domain.

Exercise 8.14: Solve the following differential equation using the differentiation property, when the input is x(t) = e^{-3t} u(t), for an initially at rest system.

d^2 y(t)/dt^2 + dy(t)/dt - 2 y(t) = x(t).   (8.103)

Solution:
The Fourier transform of the input is,

F[x(t)] = F[e^{-3t} u(t)] = 1/(jω + 3).   (8.104)

Taking the Fourier transform of both sides of the differential equation yields,

[(jω)^2 + (jω) - 2] Y(jω) = 1/(jω + 3).   (8.105)

Leaving the Fourier transform of the output on the left hand side and factoring the second order term, we get,

Y(jω) = 1/[(jω + 3)(jω - 1)(jω + 2)].   (8.106)

Using the method of partial fraction expansion,

Y(jω) = 1/[(jω + 3)(jω - 1)(jω + 2)] = A/(jω + 3) + B/(jω - 1) + C/(jω + 2),   (8.107)

we find A = 1/4, B = 1/12, C = -1/3.
Thus, the Fourier transform of the output is,

Y(jω) = (1/4)/(jω + 3) + (1/12)/(jω - 1) - (1/3)/(jω + 2).   (8.108)

Taking the inverse Fourier transform, we get the output as,

y(t) = (1/4) e^{-3t} u(t) - (1/12) e^{t} u(-t) - (1/3) e^{-2t} u(t).   (8.109)

The above exercise shows that we can solve nth order linear differential equations with constant coefficients algebraically in the frequency domain.
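The partial-fraction step of Equation (8.107) can be verified symbolically. The following sketch assumes SymPy is installed; sp.apart performs the expansion (the printed ordering of the terms may differ between SymPy versions):

import sympy as sp

s = sp.symbols('s')                         # here s stands for jw
Y = 1 / ((s + 3) * (s - 1) * (s + 2))
print(sp.apart(Y, s))
# 1/(4*(s + 3)) + 1/(12*(s - 1)) - 1/(3*(s + 2)), i.e. A = 1/4, B = 1/12, C = -1/3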
7) Integration Property: Integrating a signal in the time domain corresponds to dividing its Fourier transform by jω in the frequency domain:

∫_{-∞}^{t} x(τ) dτ ←→ (1/jω) X(jω) + π X(0) δ(ω).   (8.110)

The integration property can be derived by starting from the synthesis equation for x(t),

x(t) = (1/2π) ∫_{-∞}^{∞} X(jω) e^{jωt} dω.   (8.111)

Let us take the integral of both sides of the above synthesis equation:

∫_{-∞}^{t} x(τ) dτ = (1/2π) ∫_{-∞}^{t} ∫_{-∞}^{∞} X(jω) e^{jωτ} dω dτ = (1/2π) ∫_{-∞}^{∞} (X(jω)/(jω)) e^{jωt} dω.   (8.112)

The above integral shows that for ω ≠ 0,

∫_{-∞}^{t} x(τ) dτ ←→ X(jω)/(jω).   (8.113)

For ω = 0, however, the integrand on the right hand side of Equation (8.112) blows up; hence, the above Fourier transform of the integral of x(t) is incomplete. Using the Cauchy integral theorem (see Complex Analysis: A Modern First Course in Function Theory, Jerry R. Muir Jr., Wiley, ISBN: 978-1-118-70522-3, April 2015), we can evaluate the ω = 0 contribution and obtain an additive term π X(0) δ(ω), which completes the Fourier transform of the integral of x(t), as

∫_{-∞}^{t} x(τ) dτ ←→ (1/jω) X(jω) + π X(0) δ(ω).   (8.114)

Similar to the differentiation property, the integration property converts an integral equation into an algebraic equation in the frequency domain.

Exercise 8.15: Consider an integrator system (Figure 8.12).

Figure 8.12: An integrator.

a) Find the impulse response of this LTI system.
b) Find the frequency response of this LTI system.

Solution:
a) When the input of an integrator is the impulse function, the corresponding output is the impulse response,

h(t) = ∫_{-∞}^{t} δ(τ) dτ = u(t),   (8.115)

which is the unit step function.
b) The integration property reveals that the output of an integrator for a general input x(t) is,

y(t) = ∫_{-∞}^{t} x(τ) dτ ←→ Y(jω) = X(jω)/(jω) + π X(0) δ(ω).   (8.116)

When we replace the input by an impulse function,

x(t) = δ(t) ←→ X(jω) = 1,   (8.117)

we obtain the frequency response as follows;

H(jω) = 1/(jω) + π δ(ω).   (8.118)

Thus, the corresponding transform pair for the impulse response is,

h(t) = u(t) ↔ H(jω) = 1/(jω) + π δ(ω).   (8.119)
Exercise 8.16: Find the Fourier transform X(jω) of the signal x(t) displayed in Figure 8.13a, without evaluating the analysis integral.

Figure 8.13: (a) A signal x(t) for which the Fourier transform is to be evaluated; (b) representation of the derivative of x(t) as the sum of two components.

Solution:
Rather than applying the Fourier integral directly to x(t), we consider the signal

g(t) = dx(t)/dt.   (8.120)

As illustrated in Figure 8.13b, g(t) is the sum of a rectangular pulse and two impulses. The Fourier transform of each of these component signals may be determined from Table 8.2:

G(jω) = 2 sin ω/ω - e^{jω} - e^{-jω}.   (8.121)

Using the integration property, we obtain

X(jω) = G(jω)/(jω) + π G(0) δ(ω).   (8.122)

Since G(0) = 0, the Fourier transform of x(t) becomes,

X(jω) = 2 sin ω/(jω^2) - 2 cos ω/(jω).   (8.123)

Note: If we have an integral equation in the time domain, its Fourier transform gives us an algebraic equation. Therefore, taking an integral in the time domain is equivalent to dividing the Fourier transform by jω and adding the term π X(0) δ(ω) in the frequency domain.
8) Convolution Property: One of the most useful properties of the Fourier transform is the convolution property. This property states that convolution in the time domain corresponds to multiplication in the frequency domain. Mathematically,

y(t) = x(t) * h(t) ←→ Y(jω) = X(jω) H(jω).   (8.124)

The convolution property follows from the definitions of the Fourier analysis equation and the convolution integral. Inserting the convolution integral,

x(t) * h(t) = ∫_{-∞}^{∞} x(τ) h(t - τ) dτ,

into the Fourier analysis equation, we obtain,

Y(jω) = F[y(t)] = F[x(t) * h(t)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x(τ) h(t - τ) e^{-jωt} dτ dt.   (8.125)

Changing the dummy variable of integration to t' = t - τ, we get

Y(jω) = F[x(t) * h(t)] = ( ∫_{-∞}^{∞} x(τ) e^{-jωτ} dτ )( ∫_{-∞}^{∞} h(t') e^{-jωt'} dt' ) = X(jω) H(jω).   (8.126)

Recall that the time and frequency representations of signals and systems are one-to-one and onto. Therefore, the block diagram of an LTI system can be represented in the time and frequency domains equivalently, as shown in Figures 8.14 and 8.15.
Recall also that the impulse response is the inverse Fourier transform of the frequency response,

h(t) = (1/2π) ∫_{-∞}^{∞} H(jω) e^{jωt} dω   (8.127)

and the frequency response is the Fourier transform of the impulse response,

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt.   (8.128)

Exercise 8.17: Consider an LTI system represented by the following


shifted impulse function;

Figure 8.14: A sample block diagram representation in the time domain, with y(t) = x(t) * [h_1(t) * h_2(t) + h_3(t)]. Note that all the operations between the input and the impulse response functions are convolutions.

Figure 8.15: The block diagram representation of Figure 8.14 in the frequency domain, with Y(jω) = X(jω)[H_1(jω)H_2(jω) + H_3(jω)]. Note that the convolution operations are replaced by multiplication operations.
h(t) = δ(t − t0 ) (8.129)
a) Find the frequency response.
b) Find the system equation, which relates the input, X(jω) and output,
Y (jω), in the frequency domain.
c) Find the system equation, which relates the input, x(t) and output,
y(t), in time domain.

Solution:
a) Using the time shift property and Table 8.2, we obtain the frequency response as follows;

h(t) = δ(t - t_0) ←→ H(jω) = e^{-jω t_0}.   (8.130)


b) System equation in frequency domain can be directly obtained by con-
sidering the relationship among the input, output and frequency response,
as follows;

H(jω) = Y(jω)/X(jω) = e^{-jω t_0}.   (8.131)
Thus, the system equation in the frequency domain is,

Y (jω) = e−jωt0 X(jω). (8.132)


c) System equation in time domain can be obtained by taking the inverse
Fourier transform of the system equation in frequency domain, found in
part b;

y(t) = x(t − t0 ). (8.133)
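The delay system of Exercise 8.17 is easy to reproduce numerically: multiplying the spectrum by e^{-jω t_0} and transforming back shifts the samples. The Python sketch below is an illustration with assumed sampling values (it uses the FFT, so the shift is circular, and t_0 is chosen to be an integer number of samples):

import numpy as np

fs, N = 100.0, 1024                        # assumed sampling rate and length
t = np.arange(N) / fs
x = np.exp(-((t - 3.0) ** 2))              # a smooth pulse well inside the window

t0 = 0.5                                   # delay of 0.5 s = 50 samples here
w = 2 * np.pi * np.fft.fftfreq(N, d=1 / fs)
y = np.fft.ifft(np.fft.fft(x) * np.exp(-1j * w * t0)).real

print(np.allclose(y, np.roll(x, int(t0 * fs)), atol=1e-9))   # True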

9) Multiplication Property: Multiplication of two signals in the time domain corresponds to convolution of the Fourier transforms of the signals in the frequency domain:

y(t) = x(t) h(t) ←→ Y(jω) = (1/2π) X(jω) * H(jω).   (8.134)

In order to show the multiplication property, we take the Fourier transform of the product y(t) = x(t) h(t) using the analysis equation,

Y(jω) = F[y(t)] = F[x(t) h(t)] = ∫_{-∞}^{∞} x(t) h(t) e^{-jωt} dt.   (8.135)

Then, we insert the inverse Fourier transform of x(t),

x(t) = (1/2π) ∫_{-∞}^{∞} X(jω') e^{jω't} dω'   (8.136)

into Equation (8.135),

Y(jω) = F[x(t) h(t)] = ∫_{-∞}^{∞} [ (1/2π) ∫_{-∞}^{∞} X(jω') e^{jω't} dω' ] h(t) e^{-jωt} dt.   (8.137)

Finally, we rearrange the order of integration,

Y(jω) = F[x(t) h(t)] = (1/2π) ∫_{-∞}^{∞} X(jω') ∫_{-∞}^{∞} h(t) e^{-j(ω - ω')t} dt dω'.   (8.138)

Note that the inner integral on the right hand side of the above equation is the shifted Fourier transform of the function h(t),

H(j(ω - ω')) = ∫_{-∞}^{∞} h(t) e^{-j(ω - ω')t} dt.   (8.139)

Replacing the inner integral of Equation (8.138) with H(j(ω - ω')), we obtain,

Y(jω) = F[x(t) h(t)] = (1/2π) ∫_{-∞}^{∞} X(jω') H(j(ω - ω')) dω' = (1/2π) X(jω) * H(jω).   (8.140)

This property is very useful in designing communication systems, because it enables one to shift the frequency band of a signal without changing its analytical structure. The following example illustrates this fact.

Exercise 8.18: Amplitude Modulation: Suppose that we have a band-limited signal s(t) with the Fourier transform given below:

s(t) ←→ S(jω).   (8.141)

Suppose also that we have a periodic signal, p(t) = cos(ω_0 t).
a) Find the Fourier transform, M(jω), of the signal m(t) = p(t) s(t).
b) Compare M(jω) to S(jω) and comment on the bandwidth and analytical form of these two functions.

Solution:
a) Let us multiply the band-limited signal s(t) with the cosine function p(t) in the time domain,

m(t) = p(t) s(t) = s(t) cos(ω_0 t).   (8.142)

Figure 8.16: A band-limited signal, S(jω) = 0 for |ω| > ω_1.

In order to find the Fourier transform of m(t), we use the multiplication property:

m(t) = p(t) s(t) ←→ M(jω) = (1/2π) P(jω) * S(jω).   (8.143)

In general, the Fourier transform of a periodic signal p(t) is

P(jω) = Σ_{k=-∞}^{∞} 2π a_k δ(ω - kω_0),   (8.144)

where the a_k are the Fourier series coefficients of p(t).
For this particular example, p(t) = cos ω_0 t. Then, there are two non-zero Fourier series coefficients: a_1 = a_{-1} = 1/2. Thus,

P(jω) = π(δ(ω - ω_0) + δ(ω + ω_0)).   (8.145)

Therefore, the Fourier transform of m(t) is,

M(jω) = (1/2π) S(jω) * P(jω) = (1/2)[S(j(ω - ω_0)) + S(j(ω + ω_0))].   (8.146)
b) Comparison of M(jω) and S(jω) shows an interesting similarity: both signals have the same analytical form. When we multiply a low frequency band-limited signal s(t) with a cosine waveform of high frequency, the signal preserves its analytical form in the Fourier domain, but its band is shifted towards the high frequencies. Also, the magnitude of the signal is decreased by a factor of 0.5. This fact is depicted in Figure 8.18.

Figure 8.17: Fourier transform, P(jω), of p(t) = cos(ω_0 t).

Figure 8.18: Amplitude modulation: the signal s(t) ↔ S(jω) is shifted to high frequencies by a carrier signal, p(t) = cos(ω_0 t) ↔ P(jω).

The above property of the Fourier transform has a very crucial implication. Suppose that we have a signal s(t) with a low frequency band. A good example is an audio signal, which ranges between 20 Hz and 20 kHz. This signal cannot be transmitted over a long distance by wireless communication technologies in its original form. It is well known that a signal can only be transmitted over long distances if it is carried in a very high frequency band, such as the microwave bands, which lie in the order of megahertz or gigahertz.
When we need to transmit a signal over a long distance, we shift it to such bands by multiplying it with a very high frequency periodic function, such as p(t) = cos(ω_0 t). This operation is called amplitude modulation. While m(t) is called the modulated signal, p(t) is called the carrier signal. After transmitting the modulated signal m(t) = p(t) s(t) to the final destination, a demodulation method is needed to reconstruct the original signal s(t) from the modulated signal m(t).
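The spectral picture of Figure 8.18 can be reproduced with a short Python experiment. All parameter values below (sampling rate, pulse width, carrier frequency) are assumptions chosen for illustration; the script checks that multiplying a low frequency pulse by cos(2π·1000·t) moves its spectral peak to the carrier frequency and roughly halves its amplitude, as predicted by Equation (8.146):

import numpy as np

fs, N = 8000.0, 8192                              # assumed sampling rate and length
t = np.arange(N) / fs
s = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))   # smooth baseband "message" pulse
p = np.cos(2 * np.pi * 1000 * t)                  # 1 kHz carrier
m = s * p                                         # modulated signal

f = np.fft.rfftfreq(N, d=1 / fs)
S, M = np.abs(np.fft.rfft(s)), np.abs(np.fft.rfft(m))

print(f[np.argmax(S)])      # ~0 Hz: the pulse is a baseband signal
print(f[np.argmax(M)])      # ~1000 Hz: the spectrum is shifted to the carrier
print(M.max() / S.max())    # ~0.5, as predicted by Equation (8.146)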

10) Duality: Duality is an important concept, which appears in a wide range


of areas in mathematics. Fundamentally, duality refers to having a one-to-
one correspondence between two mathematical objects, which represent
different, but complementing characteristics of the same phenomenon.
Although the formal definition of duality changes in different areas of
mathematics, the following two major properties are generally satisfied:
1) Symmetry: Given two mathematical objects, X and Y , if X is the
dual of Y , then Y is the dual of X.
2) Involution: Dual of the dual of a mathematical object is the object
itself. Formally speaking, let S and S ′ be two dual spaces of mathe-
matical objects, and F be a duality mapping,

F : S −→ S ′ . (8.147)
Then,

F(F(X)) = X, ∀X ∈ S (8.148)
A simple everyday example of duality is a coin with two sides, which
satisfies the two properties mentioned above;
1. Symmetry: The dual of head is tail. The dual of the tail is the head.
2. Involution: The dual of the dual of head is head.
Fourier transforms exhibit the above major duality properties, which link
the time and frequency domain representations of the same phenomenon.
Duality of the Fourier transform follows from the fact that the analysis and synthesis equations are almost identical, except for a factor of 1/2π and a minus sign in the exponential of the integral.
There are many remarkable symmetries and involution between the time
and frequency domains. In the following, we overview three basic dualities
of Fourier transforms.
• Duality 1: Fourier Transform of Fourier Transform
The analytical form of the Fourier transform of the Fourier transform
of a function is very similar to the analytical form of the function itself.
Formally speaking, when a time domain function has an analytical
function, in the form of x and its frequency domain representation has
an analytical form of X, these two functions are related to each other
by Fourier transform,

x(t) ←→ X(jω). (8.149)
If we replace the variable jw by t and take the Fourier transform of
X(t), we obtain the reflected analytical form of x, scaled by 2π, in
frequency domain, as follows:

X(t) ←→ 2πx(−ω). (8.150)


In other words, taking the Fourier transform of the Fourier transform of
a function nicely returns the turned around function scaled by 2π. One
consequence of this duality is that whenever we evaluate the Fourier
transform of a function, the inverse can be obtained with the same
algorithm, with a minor modification.

Exercise 8.19: Consider the following function,

x_1(t) = { 1, if |t| < T
         { 0, otherwise.        (8.151)

a) Find the Fourier transform of x_1(t).
b) Replace the time variable t by jω and the threshold T by W in x_1(t) to obtain a new frequency domain function, X_2(jω), given below;

X_2(jω) = { 1, if |ω| < W
          { 0, otherwise,       (8.152)

and find the inverse Fourier transform of X_2(jω).
c) Compare the analytical form of x_2(t) to that of X_1(jω).

Solution: a) The Fourier transform of x_1(t) is

X_1(jω) = ∫_{-T}^{T} e^{-jωt} dt = (1/jω)(e^{jωT} - e^{-jωT}) = 2 sin(ωT)/ω.   (8.153)

b) Now let us take the inverse Fourier transform of

X_2(jω) = { 1, if |ω| < W
          { 0, otherwise.       (8.154)

Then, we obtain

x_2(t) = (1/2π) ∫_{-W}^{W} e^{jωt} dω = sin(Wt)/(πt).   (8.155)

Figure 8.19: Duality property, which shows the relationship between the analytical forms of two functions in the time and frequency domains.

c) The analytical forms of x_2(t) and X_1(jω) are the same, as observed from Figure 8.19.
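A discrete analogue of Duality 1 can be checked directly with the DFT. This is only an analogy, since the chapter works in continuous time: applying the DFT twice to a length-N sequence returns N times the index-reversed sequence, the counterpart of F{F{x}}(ω) = 2π x(-ω):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

XX = np.fft.fft(np.fft.fft(x))                 # the transform applied twice
x_reversed = np.roll(x[::-1], 1)               # x[(-n) mod N]
print(np.allclose(XX, len(x) * x_reversed))    # True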

• Duality 2: Convolution versus Multiplication Operations


Convolution of two functions in the time domain corresponds to mul-
tiplication in the frequency domain. Similarly, multiplication in time
domain corresponds to convolution in the frequency domains:
Convolution in time ←→ Multiplication in frequency,
Multiplication in time ←→ Convolution in frequency.
Formally speaking,

x(t) ∗ y(t) ←→ X(jω)Y (jω) (8.156)


and

x(t) y(t) ←→ (1/2π) X(jω) * Y(jω).   (8.157)

This remarkable symmetry between the convolution and multiplication operations enables us to design many LTI systems, in a wide range of areas, in many disciplines of science and engineering.
• Duality 3: Time Shift and Frequency Shift
A shift in the time domain corresponds to multiplication by a complex exponential in the frequency domain. Similarly, multiplication by a complex exponential in the time domain corresponds to a shift in the frequency domain:
Time shift: x(t - t_0) ←→ Multiplication: e^{-jω t_0} X(jω)
Multiplication: e^{jω_0 t} x(t) ←→ Frequency shift: X(j(ω - ω_0))
Note: Whenever we need a shift in one of the domains, the corresponding function in the other domain is just multiplied by a complex exponential function.

11) Parseval's Equality: In the above properties and examples, we observe that the representations of signals and systems in the time and frequency domains have substantially different analytical forms and structures. For example, a periodic signal, which consists of sine and cosine functions, has a Fourier transform consisting of shifted impulse functions at the fundamental frequency and its harmonics. However, the energy of the signal is the same in both domains:

∫_{-∞}^{∞} |x(t)|^2 dt = (1/2π) ∫_{-∞}^{∞} |X(jω)|^2 dω.   (8.158)

We can show Parseval's equality by inserting the synthesis equation into the left hand side of Equation (8.158), as follows;

∫_{-∞}^{∞} |x(t)|^2 dt = ∫_{-∞}^{∞} x(t) x*(t) dt = (1/(2π)^2) ∫_{-∞}^{∞} [ ∫_{-∞}^{∞} X(jω) e^{jωt} dω ][ ∫_{-∞}^{∞} X*(jω') e^{-jω't} dω' ] dt.   (8.159)

Using the orthogonality of the complex exponential functions,

∫_{-∞}^{∞} e^{jωt} e^{-jω't} dt = 2π δ(ω - ω'),   (8.160)

and rearranging the right hand side of Equation (8.159), we obtain,

∫_{-∞}^{∞} |x(t)|^2 dt = (1/2π) ∫_{-∞}^{∞} |X(jω)|^2 dω.   (8.161)

Parseval's equality reveals that the representation of signals in the frequency domain (a Hilbert space representation) conserves the energy of the time domain, up to the factor 1/2π which scales the frequency domain side.
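Parseval's equality is easy to confirm numerically for a concrete pair. The sketch below uses assumed integration grids and the pair e^{-|t|} ↔ 2/(1 + ω²) from Table 8.2 (with α = 1), and evaluates both sides of Equation (8.158):

import numpy as np

t = np.linspace(-30, 30, 120001)
w = np.linspace(-200, 200, 400001)

E_time = np.trapz(np.exp(-np.abs(t)) ** 2, t)             # analytically equal to 1
E_freq = np.trapz((2.0 / (1.0 + w ** 2)) ** 2, w) / (2 * np.pi)

print(E_time, E_freq)                                     # both close to 1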
The above properties of the Fourier transform simplify a large variety of difficult problems in designing and implementing LTI systems, analyzing the frequency content of signals, solving differential and integral equations, taking convolutions, etc. Since the time and frequency domain representations are one-to-one and onto, we can freely switch between them during the steps of our design and analysis processes, depending on our needs.
In the following section, we shall study continuous time LTI systems represented by differential equations and show an algebraic method to solve these differential equations in the frequency domain. We shall observe that the differentiation and integration properties make our lives quite easy when designing and analyzing LTI systems in the frequency domain. Let us see how.

8.8.2. Continuous Time Linear Time Invariant Systems in Frequency Domain

Recall that a continuous time LTI system is represented by the following ordinary constant coefficient differential equation in the time domain,

Σ_{k=0}^{N} a_k d^k y(t)/dt^k = Σ_{k=0}^{M} b_k d^k x(t)/dt^k.   (8.162)

Also, recall that if the eigenfunction x(t) = e^{jωt} is the input to an LTI system represented by the impulse response h(t), then the corresponding output is,

y(t) = H(jω) e^{jωt},   (8.163)

where

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt   (8.164)

is called the frequency response. Since the right hand side of the above equation is the Fourier transform of the impulse response, the frequency response is just the Fourier transform of the impulse response;

h(t) ←→ H(jω).   (8.165)

If we take the Fourier transform of both sides of the Nth order differential equation given above, we obtain the following equation, which represents an LTI system in the frequency domain;

Σ_{k=0}^{N} a_k (jω)^k Y(jω) = Σ_{k=0}^{M} b_k (jω)^k X(jω).   (8.166)

Note: All of the differentiation operations in Equation (8.162) disappear in Equation (8.166), and the differential equation becomes an algebraic equation in powers of (jω). Thus, an LTI system represented by a differential equation in the time domain is equivalently represented by an algebraic equation in the frequency domain.
Rearranging Equation (8.166), we obtain a relationship between the input and output of an LTI system in the frequency domain, as follows;

Y(jω)/X(jω) = Σ_{k=0}^{M} b_k (jω)^k / Σ_{k=0}^{N} a_k (jω)^k.   (8.167)

We can also obtain the frequency response of an LTI system by using the above algebraic equation. When we replace the input of an LTI system by the impulse function in the time domain, the output becomes the impulse response. When the input is an impulse function in the time domain, its Fourier transform becomes,

F[x(t)] = F[δ(t)] = X(jω) = 1.   (8.168)

Then, the Fourier transform of the corresponding output becomes the frequency response, H(jω). Therefore, replacing X(jω) by 1 in Equation (8.166), we get,

Σ_{k=0}^{N} a_k (jω)^k H(jω) = Σ_{k=0}^{M} b_k (jω)^k.   (8.169)

The above equation provides us with the frequency response of an LTI system, which is represented by an ordinary constant coefficient differential equation in the time domain and by an algebraic equation in the frequency domain. Rearranging Equation (8.169), we obtain the frequency response as a rational function of jω, as follows:

H(jω) = Σ_{k=0}^{M} b_k (jω)^k / Σ_{k=0}^{N} a_k (jω)^k.   (8.170)

Comparing Equation (8.167) and Equation (8.170), we observe that the right hand sides are the same. Thus, the ratio of the Fourier transforms of the output and input signals is equal to the frequency response;

H(jω) = Y(jω)/X(jω) = Σ_{k=0}^{M} b_k (jω)^k / Σ_{k=0}^{N} a_k (jω)^k.   (8.171)
k

Taking the inverse Fourier transform of the frequency response directly


gives us the impulse response, without solving the differential equation, be-
cause; time and frequency domain representations are one-to-one;

h(t) ←→ H(jω). (8.172)


In the following example, we are going to study the relationships between
the differential equations in the time domain and their algebraic representation
in the frequency domain.

Exercise 8.20: Consider the following first order differential equation:

dy(t)/dt + y(t) = x(t),   (8.173)

a) Find the impulse response.
b) Find the solution y(t), for the input

x(t) = e−2t u(t), (8.174)

when the system is initially at rest.

Solution:
a) If we did not know the Fourier transforms, we would replace the input by
an impulse function and solve the above differential equation for h(t). However,
taking the Fourier transform avoids solving the differential equation as follows:
First, let us take the Fourier transform of both sides of the differential equation
given above;

[(jω) + 1]Y (jω) = X(jω). (8.175)


Then, let us replace the input by an impulse function and its Fourier trans-
form, which is,

x(t) = δ(t) −→ X(jω) = 1. (8.176)


The corresponding output, which is the frequency response, can be directly obtained from the Fourier transform of the differential equation, as follows;

H(jω) = 1/(1 + jω).   (8.177)

The impulse response of this system can be obtained from the inverse Fourier transform of the frequency response, given in Table 8.2;

H(jω) = 1/(1 + jω)  --inverse F.T.-->  h(t) = e^{-t} u(t).   (8.178)
The above method enables us to find the impulse response of an LTI system
without solving the differential equation.
b) Next, we shall find the output y(t) without solving the differential equa-
tion. We simply use the relationship between the input and output in the
frequency domain,

Y (jω) = H(jω)X(jω). (8.179)


The Fourier transform of the input is found in Table 8.2 as,

x(t) = e^{-2t} u(t) ←→ X(jω) = 1/(2 + jω).   (8.180)

Then, we obtain the Fourier transform of the output;

Y(jω) = 1/[(1 + jω)(2 + jω)]   (8.181)
      = 1/(1 + jω) - 1/(2 + jω).   (8.182)

By using the linearity property and Table 8.2, we find the inverse Fourier transform of Y(jω) as follows;

y(t) = (e^{-t} - e^{-2t}) u(t).   (8.183)
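The frequency domain solution can be cross-checked against a time domain simulation. The following sketch assumes SciPy is available and uses scipy.signal.lsim to integrate dy/dt + y = x for the same input, comparing the result with the closed form of Equation (8.183):

import numpy as np
from scipy import signal

system = signal.lti([1], [1, 1])            # H(jw) = 1/(1 + jw), from Eq. (8.177)
t = np.linspace(0, 10, 2001)
x = np.exp(-2 * t)                          # the input e^{-2t} u(t), for t >= 0

_, y_sim, _ = signal.lsim(system, U=x, T=t)
y_ref = np.exp(-t) - np.exp(-2 * t)         # closed form from Equation (8.183)

print(np.max(np.abs(y_sim - y_ref)))        # small numerical integration error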

8.9. Laplace Transforms as an Extension of Continuous Time Fourier Transforms
Recall that the Fourier transform of a continuous time function exists if it sat-
isfies the Dirichlet conditions. When a time domain function is not absolutely
integrable, it violates the Dirichlet condition and it is not possible to find a
finite Fourier transform, in the frequency domain.
Motivating Question: Can we further generalize the Fourier transform
in such a way that the transform domain representation of a time domain
function exists in some predefined values of the new variable of this domain?
Laplace transform opens a door to answer the above question by defining a
new domain, called, Laplace domain or s-domain. In Laplace domain, a com-
plex variable, s = σ + jω, is defined as an alternative to the purely imaginary
variable, jω, of the frequency domain. Therefore, Laplace transform is con-
sidered as an extension of the Fourier transform, where the purely imaginary
variable of the frequency domain is generalized by defining a complex variable,
s with real and imaginary parts, in the Laplace domain. Considering the 2-
dimensional complex plane, Fourier transform, X(jω) can only exist on the jω
axis. In other words, while the Fourier transform maps a time domain func-
tion into the one-dimensional purely imaginary axis, jω, of the complex plane,
Laplace transform maps the function into the entire 2-dimensional complex
plane.
Recall that the Fourier transform of a time domain function is defined as the weighted integral of complex exponential functions,

F{x(t)} = X(jω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt.   (8.184)

The Laplace transform can be obtained by extending the Fourier transform. This simply requires replacing the purely imaginary frequency variable jω of the Fourier transform with a complex variable, s = σ + jω, which consists of real and imaginary parts.
Formally, the Laplace transform of a continuous time function x(t) is defined as,

L{x(t)} = X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt,   (8.185)

where s = σ + jω is a complex variable and e^{-st} is a complex exponential function. If we substitute s = σ + jω in the above integral, we obtain,

L{x(t)} = ∫_{-∞}^{∞} x(t) e^{-(σ + jω)t} dt = ∫_{-∞}^{∞} [x(t) e^{-σt}] e^{-jωt} dt,   (8.186)

which yields a relationship between the Laplace and Fourier transforms, as follows;

L{x(t)} = F{x(t) e^{-σt}}.   (8.187)


Theorem: A time domain function can be uniquely obtained from its Laplace transform by the following equation;

x(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} X(s) e^{st} ds,   (8.188)

provided that the function x(t) e^{-σt} is absolutely integrable, i.e.,

∫_{-∞}^{∞} |x(t)| e^{-σt} dt < ∞.   (8.189)

Approximate proof: Recall that the relationship between the Laplace transform and the Fourier transform is given by,

L{x(t)} = F{x(t) e^{-σt}} = ∫_{-∞}^{∞} x(t) e^{-σt} e^{-jωt} dt = X(σ + jω).   (8.190)

Let us take the inverse Fourier transform of x(t) e^{-σt};

x(t) e^{-σt} = F^{-1}{X(σ + jω)} = (1/2π) ∫_{-∞}^{∞} X(σ + jω) e^{jωt} dω.   (8.191)

Leaving x(t) alone on the left hand side of the equation, we obtain,

x(t) = (1/2π) ∫_{-∞}^{∞} X(σ + jω) e^{(σ + jω)t} dω.   (8.192)

Now, let us substitute s = σ + jω. Then, assuming that σ is fixed, we replace ds = j dω in the above integral to obtain the inverse Laplace transform, for each valid value of σ, from the following synthesis equation;

x(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} X(s) e^{st} ds,   (8.193)

where the weight X(s) of the complex exponential function is called the Laplace transform of x(t), obtained from the following analysis equation;

X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt.   (8.194)

Note that finding the inverse Laplace transform using the above equation requires contour integration, which can be done by using the Cauchy residue theorem [see: Complex Analysis: A Modern First Course in Function Theory, Jerry R. Muir Jr., Wiley, ISBN: 978-1-118-70522-3, April 2015]. In the context of this book, it suffices to use look-up tables and the properties of the Laplace transform for finding inverse Laplace transforms.
The Laplace transform has several advantages compared to the Fourier transform. It is very handy for solving differential equations. It is applicable to functions for which the Fourier transform does not exist. It is a very powerful tool for analyzing the stability of linear or nonlinear systems. It has a wide range of applications in developing Signal, Image and Video Processing, Computer Vision and Machine Learning systems.

8.9.1. One Sided Laplace Transform

The Fourier transform requires the limits of the integral to range over (-∞, ∞). Thus, the time domain function has to be defined for both positive and negative values of time on the real axis. On the other hand, it is possible to define a one sided Laplace transform, where the time domain function does not need to be defined for negative time values. For a wide range of practical problems, negative times are undefined or zero. In order to avoid negative times, we restrict the Laplace transform integral to 0 ≤ t < ∞.
Formally, we define the one sided Laplace transform, where the limits of the integral range over (0, ∞), as follows;

X(s) = ∫_{0}^{∞} x(t) e^{-st} dt,   (8.195)

where s = σ + jω is still a complex variable and e^{-st} is a complex exponential function. The time domain function can be uniquely obtained from the one sided Laplace transform by the following equation,

x(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} X(s) e^{st} ds.   (8.196)

8.9.2. Region of Convergence in Laplace Transforms

The addition of the real part, σ, to the purely imaginary part, jω, of the frequency domain variable enables us to evaluate the Laplace transform for each specific value of σ. In the Laplace domain, where the variable s = σ + jω is a two dimensional complex number, it is possible to find region(s) of the complex plane, for some values of σ, such that the Laplace integral converges to a finite value.
This capability creates a great advantage of the Laplace transform over the Fourier transform when the function x(t) is not absolutely integrable but the function x(t) e^{-σt} is. Thus, the Laplace transform relaxes the Dirichlet condition of the Fourier transform, leaving us some regions of the complex plane where the Laplace transform exists. The regions where the existence of the Laplace transform is assured are called the Region of Convergence (ROC).
Definition: Region of Convergence (ROC): The Region of Convergence (ROC) is defined as the region of the complex plane where the Laplace transform X(s) of the function x(t) exists, for some values of σ = Re{s}.
Regions of convergence of the Laplace transform are in the form of vertical strips in the complex plane. The location(s) and width(s) of the strips depend on the type of the time domain function, x(t).
There are four major forms for the ROC of the Laplace transform:
1) If the function x(t) is absolutely integrable and has finite duration, in other words,

x(t) { ≠ 0 for t_0 < t < t_1,
     { = 0 otherwise,                    (8.197)

for some finite values of t_0 < t_1, then the ROC covers the entire s-plane. Since it also covers the jω axis, the Fourier transform of the function also exists.
2) If the function is right-sided, in other words, if there exists a finite t_0 such that

x(t) = 0 for t ≤ t_0,

then the ROC is a right half-plane of the form σ > σ_0.
3) If the function is left-sided, in other words, if there exists a finite t_0 such that

x(t) = 0 for t ≥ t_0,

then the ROC is a left half-plane of the form σ < σ_0.
4) If the function is two-sided, in other words, if it extends without vanishing toward both t → -∞ and t → +∞,   (8.198)

then the ROC, if it exists, is a vertical strip of the form σ_0 < σ < σ_1.
In order to observe the above forms of ROC and various capabilities of
Laplace transform over the continuous time Fourier transform, let us solve the
following exercises and investigate the existences of both Fourier and Laplace
transforms.

Exercise 8.21: Consider the following continuous time right sided signal:

x(t) = e^{at} u(t).   (8.199)

a) Find and compare the Laplace and Fourier transforms of this signal.
b) Find the values of a which assure the existence of the Fourier transform.
c) Find the ROC, i.e., the values of a and σ which assure the existence of the Laplace transform.
d) Compare the conditions which assure the existence of the Fourier and Laplace transforms.

Solution:
a) The Fourier transform of the signal x(t) is defined as,

X(jω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt = ∫_{0}^{∞} e^{at} e^{-jωt} dt   (8.200)
      = ∫_{0}^{∞} e^{-t(jω - a)} dt = 1/(jω - a),  for a < 0.   (8.201)

The above integral does not converge for a > 0.
b) The Laplace transform of the signal x(t) is,

X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt = ∫_{0}^{∞} e^{-t(s - a)} dt = ∫_{0}^{∞} e^{-t((σ - a) + jω)} dt.   (8.202)

The above integral converges only for σ - a > 0, i.e., σ > a; otherwise it diverges. Evaluating the integral in this ROC, we obtain,

X(s) = 1/(s - a) = 1/(jω + (σ - a)).   (8.203)

c)–d) Comparison of the convergence properties of the Fourier and Laplace transforms reveals that the Fourier transform exists only for negative values of a. On the other hand, the Laplace transform exists in the region of the complex plane where the real part of s is greater than a. Although for positive values of a the Fourier transform does not exist, the Laplace transform exists in the region of the complex plane where σ > a. The region of the complex plane where the Laplace transform exists is the Region of Convergence (ROC).
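For right sided exponentials such as the one in this exercise, the (one sided) Laplace transform can also be obtained symbolically. The sketch below assumes SymPy is installed; the exact form of the printed convergence information may vary between SymPy versions, but it encodes the ROC Re{s} > a:

import sympy as sp

t, a = sp.symbols('t a', real=True)
s = sp.symbols('s')

X, plane, cond = sp.laplace_transform(sp.exp(a * t), t, s)
print(X)       # 1/(s - a), matching Equation (8.203)
print(plane)   # the half-plane of convergence: the ROC is Re{s} > this value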

Exercise 8.22: Consider a slightly different version of the continuous time signal of the previous example, which is left sided;

x(t) = -e^{at} u(-t).   (8.204)

a) Find and compare the Laplace and Fourier transforms of this signal.
b) Find the values of a which assure the existence of the Fourier transform.
c) Find the ROC, i.e., the values of σ for which the Laplace transform exists.
d) Compare the regions of convergence of the Fourier and Laplace transforms.

Solution:
a) The Fourier transform of the signal x(t) is defined as,

X(jω) = -∫_{-∞}^{0} e^{at} e^{-jωt} dt = 1/(jω - a),  for a > 0.   (8.205)

The above integral does not converge for a < 0.
b) The Laplace transform of the signal x(t) is,

X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt = -∫_{-∞}^{0} e^{at} e^{-st} dt = 1/(s - a) = 1/((σ - a) + jω).   (8.206)

The integral converges for σ - a < 0, i.e., σ < a.
c)–d) As in the previous example, the analytical forms of the Fourier and Laplace transforms are the same; however, the conditions of convergence change. While the Fourier transform exists only for positive values of a, the Laplace transform exists in the region of the complex plane where the real part of s is less than a. Since the Laplace transform may exist for some restricted values of σ and for all values of ω, the ROC of X(s) consists of strips parallel to the jω-axis in the s-plane.

Figure 8.20: Region of convergence for the Laplace transform of x(t) = -e^{at} u(-t).

Exercise 8.23: Find the Laplace transform and its ROC for the following right sided function:

x(t) = u(t).   (8.207)

Solution:
From the definition of the Laplace transform;

X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt = ∫_{0}^{∞} e^{-(σ + jω)t} dt = 1/(σ + jω) = 1/s.   (8.208)

Note that the Laplace integral exists for σ = Re{s} > 0. Thus, the ROC is the right half of the complex plane, as expected for a right sided function.

Exercise 8.24: Find the Laplace transform and its ROC for the following finite duration function:

x(t) = u(t) - u(t - T).   (8.209)

Figure 8.21: Region of convergence for the Laplace transform of the unit step function, u(t).

Solution:
From the definition of the Laplace transform;

X(s) = ∫_{-∞}^{∞} x(t) e^{-st} dt = ∫_{0}^{T} e^{-(σ + jω)t} dt = (1/s)[1 - e^{-sT}].   (8.210)

Since the time duration t ∈ [0, T] is bounded, the Laplace integral exists for all values of σ. Thus, the ROC is the entire complex plane. This is the case when an absolutely integrable function x(t) has finite duration.

Exercise 8.25: Find the Laplace transform and the ROC of the following two sided function (with a > 0, shown in Figure 8.23):

x(t) = e^{-at} u(t) + e^{at} u(-t).   (8.211)

Solution:
From the definition of the Laplace transform;

X(s) = ∫_{0}^{∞} e^{-at} e^{-st} dt + ∫_{-∞}^{0} e^{at} e^{-st} dt = 1/(s + a) - 1/(s - a).   (8.212)

In order to find the ROC of the above Laplace transform, we need to find the ROC of each term on the right hand side:
the ROC for 1/(s + a), the transform of the right sided term, is σ > -a, and
the ROC for the left sided term, whose transform is -1/(s - a), is σ < a.
Thus, the overall ROC is the strip -a < σ < a, shown in Figure 8.24.

Figure 8.22: Region of convergence for the Laplace transform of x(t) = u(t) - u(t - T).

Figure 8.23: The two sided function x(t) = e^{-at} u(t) + e^{at} u(-t).

Figure 8.24: Region of convergence for the Laplace transform of x(t) = e^{-at} u(t) + e^{at} u(-t).

These examples show that it is critical to determine the Region of Conver-


gence of the Laplace transforms in the complex plane.

8.10. Inverse of Laplace Transform

As we mentioned above, recovering the time domain signal x(t) from its Laplace transform X(s) requires the following contour integration;

x(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} X(s) e^{st} ds,   (8.213)

which may not be easy for a large class of functions. In order to avoid contour integration, we frequently use look-up tables and the properties of the Laplace transform. Since these are quite similar to those of the Fourier series and the Fourier transform, we restrict ourselves to providing the list of properties and the look-up table of common transform pairs, x(t) ↔ X(s), together with their ROCs, in Tables 8.3 and 8.4. The following examples demonstrate how we utilize the tables to compute the inverse Laplace transform.

Table 8.3: Properties of Laplace transform.

x(t)  ↔  X(s),  ROC: R
x_1(t)  ↔  X_1(s),  ROC: R_1
x_2(t)  ↔  X_2(s),  ROC: R_2
a x_1(t) + b x_2(t)  ↔  a X_1(s) + b X_2(s),  ROC: at least R_1 ∩ R_2
x(t - t_0)  ↔  e^{-s t_0} X(s),  ROC: R
e^{s_0 t} x(t)  ↔  X(s - s_0),  ROC: shifted version of R (i.e., s is in the ROC if s - s_0 is in R)
x(at)  ↔  (1/|a|) X(s/a),  ROC: scaled ROC (i.e., s is in the ROC if s/a is in R)
x*(t)  ↔  X*(s*),  ROC: R
x_1(t) * x_2(t)  ↔  X_1(s) X_2(s),  ROC: at least R_1 ∩ R_2
d x(t)/dt  ↔  s X(s),  ROC: at least R
-t x(t)  ↔  d X(s)/ds,  ROC: R
∫_{-∞}^{t} x(τ) dτ  ↔  (1/s) X(s),  ROC: at least R ∩ {Re{s} > 0}

Initial- and Final-Value Theorems:
If x(t) = 0 for t < 0 and x(t) contains no impulses or higher-order singularities at t = 0, then
x(0^+) = lim_{s→∞} s X(s),  and, when the limit exists, lim_{t→∞} x(t) = lim_{s→0} s X(s).
Table 8.4: Laplace transform pairs.

δ(t)  ↔  1,  ROC: all s
u(t)  ↔  1/s,  ROC: Re{s} > 0
-u(-t)  ↔  1/s,  ROC: Re{s} < 0
(t^{n-1}/(n-1)!) u(t)  ↔  1/s^n,  ROC: Re{s} > 0
-(t^{n-1}/(n-1)!) u(-t)  ↔  1/s^n,  ROC: Re{s} < 0
e^{-αt} u(t)  ↔  1/(s + α),  ROC: Re{s} > -α
-e^{-αt} u(-t)  ↔  1/(s + α),  ROC: Re{s} < -α
(t^{n-1}/(n-1)!) e^{-αt} u(t)  ↔  1/(s + α)^n,  ROC: Re{s} > -α
-(t^{n-1}/(n-1)!) e^{-αt} u(-t)  ↔  1/(s + α)^n,  ROC: Re{s} < -α
δ(t - T)  ↔  e^{-sT},  ROC: all s
[cos ω_0 t] u(t)  ↔  s/(s^2 + ω_0^2),  ROC: Re{s} > 0
[sin ω_0 t] u(t)  ↔  ω_0/(s^2 + ω_0^2),  ROC: Re{s} > 0
[e^{-αt} cos ω_0 t] u(t)  ↔  (s + α)/((s + α)^2 + ω_0^2),  ROC: Re{s} > -α
[e^{-αt} sin ω_0 t] u(t)  ↔  ω_0/((s + α)^2 + ω_0^2),  ROC: Re{s} > -α
u_n(t) = d^n δ(t)/dt^n  ↔  s^n,  ROC: all s
u_{-n}(t) = u(t) * ··· * u(t) (n times)  ↔  1/s^n,  ROC: Re{s} > 0

Exercise 8.26: Find the inverse Laplace transform of the following s-domain function;

X(s) = 1/((s + 1)(s + 2)),  ROC: σ > -1.   (8.214)

Solution: Let us apply partial fraction expansion to simplify the Laplace domain function;

X(s) = 1/((s + 1)(s + 2)) = 1/(s + 1) - 1/(s + 2).   (8.215)

In the above equation, the inverse Laplace transform of the first term is,

L^{-1}[1/(s + 1)] = e^{-t} u(t),   (8.216)

and the inverse Laplace transform of the second term is,

L^{-1}[1/(s + 2)] = e^{-2t} u(t).   (8.217)

Using the linearity property, we obtain the inverse Laplace transform of X(s) as follows:

x(t) = [e^{-t} - e^{-2t}] u(t).   (8.218)
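The same result can be reproduced symbolically. The sketch below assumes SymPy is installed; note that SymPy's inverse_laplace_transform returns the right sided (causal) inverse, which is the one corresponding to the ROC σ > -1 used here:

import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
X = 1 / ((s + 1) * (s + 2))

print(sp.apart(X, s))                          # 1/(s + 1) - 1/(s + 2)
print(sp.inverse_laplace_transform(X, s, t))   # exp(-t) - exp(-2*t), i.e. x(t) for t > 0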

Exercise 8.27: Find the inverse Laplace transform of the following s-domain function;

X(s) = (3s + 2)/(s^2 + 9),  ROC: σ > 0.   (8.219)

Solution:
Let us separate the function into two parts:

X(s) = (3s + 2)/(s^2 + 9) = 3s/(s^2 + 9) + 2/(s^2 + 9).   (8.220)

From the Laplace transform table, we see that the inverse Laplace transform of the first term is,

L^{-1}[3s/(s^2 + 9)] = [3 cos 3t] u(t),  ROC: σ > 0,   (8.221)

and the inverse Laplace transform of the second term is,

L^{-1}[2/(s^2 + 9)] = [(2/3) sin 3t] u(t),  ROC: σ > 0.   (8.222)

Using the linearity property, we obtain the inverse Laplace transform of X(s) as follows;

x(t) = L^{-1}[3s/(s^2 + 9)] + L^{-1}[2/(s^2 + 9)] = [3 cos 3t + (2/3) sin 3t] u(t).   (8.223)
Exercise 8.28: Find the inverse Laplace transform of the following s-domain function;

X(s) = 2/(3 - 7s),  ROC: σ < 3/7.   (8.224)

Solution:
Let us factor out the coefficient of s:

X(s) = 2/(3 - 7s) = -(2/7) · 1/(s - 3/7).   (8.225)

Using the linearity property and the look-up table (with the left sided pair -e^{-αt} u(-t) ↔ 1/(s + α) for Re{s} < -α), we get,

x(t) = (2/7) e^{(3/7)t} u(-t) ↔ X(s) = -(2/7) · 1/(s - 3/7),  ROC: σ < 3/7.   (8.226)

Exercise 8.29: Find the inverse Laplace transform of the following s-domain function;

X(s) = 1/(3 - 4s) + (3 - 2s)/(s^2 + 49),  ROC: σ > 3/4.   (8.227)

Solution:
Let us separate the function into three parts:

X(s) = 1/(3 - 4s) + 3/(s^2 + 49) - 2s/(s^2 + 49).   (8.228)

From the Laplace transform table, we see that the inverse Laplace transform of the first term is,

L^{-1}[1/(3 - 4s)] = -(1/4) e^{(3/4)t} u(t),  ROC: σ > 3/4,   (8.229)

the inverse Laplace transform of the second term is,

L^{-1}[3/(s^2 + 49)] = (3/7) sin(7t) u(t),  ROC: σ > 0,   (8.230)

and the inverse Laplace transform of the third term is,

L^{-1}[2s/(s^2 + 49)] = 2 cos(7t) u(t),  ROC: σ > 0.   (8.231)

Thus, the inverse Laplace transform of X(s) is

x(t) = [-(1/4) e^{(3/4)t} + (3/7) sin 7t - 2 cos 7t] u(t).   (8.232)

The above exercises show that a practical method for finding the inverse Laplace transform is to make algebraic manipulations on the s-domain function and put it into a linear combination of the known pairs of the transform table. Then, the linearity property is used to obtain the inverse transform.

8.11. Continuous Time Linear Time Invariant Systems in Laplace Domain

Recall that a continuous time LTI system is represented by the following ordinary constant coefficient differential equation in the time domain,

Σ_{k=0}^{N} a_k d^k y(t)/dt^k = Σ_{k=0}^{M} b_k d^k x(t)/dt^k.   (8.233)

Also, recall that if the eigenfunction x(t) = e^{jωt} is the input to an LTI system represented by the impulse response h(t), then the corresponding output is,

y(t) = H(jω) e^{jωt},   (8.234)

where

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt   (8.235)

is called the frequency response. Since the right hand side of the above equation is the Fourier transform of the impulse response, the frequency response is just the Fourier transform of the impulse response;

h(t) ←→ H(jω).   (8.236)

Motivating Question: What if the frequency response does not exist? Can we employ the Laplace transform to analyze the frequency content of an LTI system in some region of convergence of the s-plane?
The Laplace transform, indeed, provides us with a strong tool to analyze LTI systems whose frequency responses do not exist.
Let us take the Laplace transform of both sides of the Nth order differential equation given above,

Σ_{k=0}^{N} a_k s^k Y(s) = Σ_{k=0}^{M} b_k s^k X(s).   (8.237)

Note that all of the differentiation operations in Equation (8.233) disappear in Equation (8.237), and the differential equation becomes an algebraic equation in powers of s. Thus, an LTI system represented by a differential equation in the time domain is equivalently represented by an algebraic equation in the Laplace domain.
Rearranging Equation (8.237), we obtain a relationship between the input and output of an LTI system in the Laplace domain, as follows;

Y(s)/X(s) = Σ_{k=0}^{M} b_k s^k / Σ_{k=0}^{N} a_k s^k.   (8.238)

8.11.1. Eigenvalues and Transfer Functions in s-Domain

Recall that when the input of an LTI system is an exponential function, the output is just a scaled version of the input. Thus, exponential functions are the eigenfunctions of LTI systems, and the scaling factor is simply the eigenvalue, computed from the convolution integral;

y(t) = x(t) * h(t) = ∫_{-∞}^{∞} e^{λ(t - τ)} h(τ) dτ = H(λ) e^{λt},   (8.239)

where

H(λ) = ∫_{-∞}^{∞} h(t) e^{-λt} dt.   (8.240)

x(t) = e^{λt} → LTI → y(t) = H(λ) e^{λt}.   (8.241)

In the above formulation, if we set λ = jω, then the eigenfunction at the input becomes x(t) = e^{jωt} and the eigenvalue of the LTI system becomes the Fourier transform of the impulse response, which is called the frequency response;

H(jω) = ∫_{-∞}^{∞} h(t) e^{-jωt} dt.   (8.242)

If we set λ = s = σ + jω, then the eigenfunction at the input becomes x(t) = e^{st} and the eigenvalue becomes the Laplace transform of the impulse response.
Definition: Transfer Function: The Laplace transform of the impulse response is called the transfer function,

H(s) = ∫_{-∞}^{∞} h(t) e^{-st} dt.   (8.243)

When the frequency response of an LTI system does not converge, we cannot represent the LTI system by an eigenvalue in the frequency domain. However, the Laplace transform enables us to find the eigenvalue of the system, which converges in some region of the complex s-plane. In other words, the Laplace transform generalizes the Fourier transform by extending the purely imaginary variable jω to a complex variable s = σ + jω. This extension enables us to find the Laplace transform of a continuous time function even if its Fourier transform does not exist. The regions of the complex plane where the Laplace transform exists are the regions of convergence (ROC).
For a more general representation of the frequency response, instead of jω we can use the complex variable s = σ + jω; then the frequency response becomes the transfer function,

H(s) = Y(s)/X(s) = Σ_{k=0}^{M} b_k s^k / Σ_{k=0}^{N} a_k s^k,   (8.244)

which transfers an input signal to an output signal of an LTI system represented by a differential equation. The nature of this transfer is determined by the constant coefficients {a_k} and {b_k} of the differential equation.
Similarly, taking the inverse Laplace transform of the transfer function also gives us the impulse response, without solving the differential equation, because the time and s-domain representations are one-to-one,

h(t) ←→ H(s).   (8.245)

The following exercise demonstrates the utilization of Laplace transforms for describing various properties of LTI systems.

Exercise 8.30: Consider an LTI system represented by the following impulse response:

h(t) = [e^{2t} - e^{-3t}] u(t).   (8.246)

a) Find the frequency response of this system.
b) Find the transfer function of this system.
c) Comment on the region of convergence.

Solution:
a) This system is causal, but unfortunately the Fourier transform of the first term does not exist; the frequency response H(jω) diverges.
b) The transfer function is the Laplace transform of the impulse response:

H(s) = ∫_{-∞}^{∞} h(t) e^{-st} dt = ∫_{0}^{∞} [e^{2t} - e^{-3t}] e^{-st} dt.   (8.247)

The transfer function consists of two subsystems connected in parallel;

H(s) = H_1(s) - H_2(s),   (8.248)

where

H_1(s) = ∫_{0}^{∞} e^{2t} e^{-st} dt = 1/(s - 2),  ROC: Re{s} > 2,   (8.249)

and

H_2(s) = ∫_{0}^{∞} e^{-3t} e^{-st} dt = 1/(s + 3),  ROC: Re{s} > -3.   (8.250)

Thus,

H(s) = 1/(s - 2) - 1/(s + 3) = 5/(s^2 + s - 6),  ROC: Re{s} > 2.   (8.251)

Figure 8.25: Region of convergence (σ > 2) of the transfer function H(s) = 1/(s - 2) - 1/(s + 3).

c) The ROC of the overall transfer function H(s) lies in the intersection of the ROCs of H_1(s) and H_2(s), which is the region of the complex plane with σ > 2.

Exercise 8.31: Consider an initially at rest LTI system given by the following differential equation;

d^2 y(t)/dt^2 + 4 dy(t)/dt + 4 y(t) = 5 x(t).   (8.252)

a) Find the transfer function of this system.
b) Find the impulse response of this system.

Solution:
a) Let us set the input to the impulse function, x(t) = δ(t); then the corresponding output of the differential equation becomes the impulse response, h(t). The above differential equation for the impulse response is,

d^2 h(t)/dt^2 + 4 dh(t)/dt + 4 h(t) = 5 δ(t).   (8.253)

From the properties of the Laplace transform, we see that

d^n y(t)/dt^n ↔ s^n Y(s),

and from the Laplace transform table, we see that

x(t) = δ(t) ↔ X(s) = 1.

Using the above transform pairs, we take the Laplace transform of both sides of the differential equation (8.253) and obtain an equation for the transfer function;

[s^2 + 4s + 4] H(s) = 5.

Finally, we get the transfer function as follows;

H(s) = 5/(s + 2)^2,  ROC: σ > -2.

b) The impulse response of this system is the inverse Laplace transform of the transfer function. Using the Laplace transform table, we obtain the impulse response as follows;

h(t) = (5t e^{-2t}) u(t).

The above example demonstrates that we can obtain the transfer function
and impulse response of an LTI system, which is initially at rest, without solving
the differential equation. This method is also available in the Fourier domain,
provided that the frequency response exists. When the frequency response is
undefined, the Laplace domain enables us to compute the transfer function and
impulse response using a simple algebraic method.
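This algebraic route can also be followed numerically. The following is a minimal sketch, assuming SciPy is available, that builds the transfer function of Exercise 8.31 and obtains its impulse response directly, without solving the differential equation; the closed-form answer is used only as a sanity check.

import numpy as np
from scipy import signal

# H(s) = 5 / (s^2 + 4s + 4), the transfer function found in Exercise 8.31
H = signal.TransferFunction([5], [1, 4, 4])

# Impulse response computed directly from H(s)
t, h = signal.impulse(H, T=np.linspace(0, 5, 500))

# Closed-form impulse response h(t) = 5 t e^{-2t} u(t)
h_closed = 5 * t * np.exp(-2 * t)
print(np.max(np.abs(h - h_closed)))   # should be very small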
As observed throughout this chapter, the transform domains capture a
different view of the physical phenomena than the time domain representations.
Furthermore, the beautiful synergy created by the representations of the
time and transform domains bridges the mathematics of linear algebra and
differential equations.

8.12. Chapter Summary


Can we extend the Fourier series representation of continuous time periodic
signals to aperiodic ones? If yes, how do we represent an aperiodic function in
frequency domain? Is it possible to represent any time domain signal in the
frequency domain uniquely? Are the representations in the frequency domain
and time domain one-to-one and onto? What are the necessary and sufficient
conditions to transform a time domain function to the frequency domain?
In this chapter, first, we answer the above questions by generalizing the
Fourier series of continuous time periodic signals to aperiodic signals. We sim-
ply assume that an aperiodic signal can be considered as periodic, as the period
approaches infinity, T → ∞.
The generalized form of Fourier series, which enables us to represent both
continuous time periodic and aperiodic functions in terms of its frequency con-
tent, is called the Fourier Transform. Spectral coefficients of Fourier series and
Fourier transform enable us to represent the time domain signals in the fre-
quency domain. The existence of frequency domain representation is assured
by Dirichlet conditions. Although the frequency domain of the periodic signals
and that of the aperiodic signals resemble each other, they possess different
characteristics. Both frequency domains enable us to represent signals in terms
of their frequency content. While the spectral coefficients are defined at the
discrete harmonics of the fundamental frequency, kω₀, the Fourier transform
is a continuous function of frequency, ω.
The periodic signals can be represented by both Fourier series and Fourier
transform. Weighted summation of shifted impulse functions, δ(ω − kω0 ) for
each harmonic of the fundamental frequency, kω₀, gives us the Fourier
transform of periodic signals, where the weights are the scaled spectral coeffi-
cients, 2πak . Frequency domain representations provide us important informa-
tion about the frequency content of the signals and the characteristics of LTI
systems.
Next we try to answer the following questions:
What type of an operator is Fourier transform? What are the relationships
between the functions represented in the time and frequency domains? What

are the properties of the signals and systems in the frequency domain? Where
do we use Fourier transformations?
In order to answer the above questions, we dive into the deeper meanings
of Fourier analysis and synthesis equations, investigating the power of Fourier
transforms in solving mathematical problems and designing LTI systems.
We studied basic properties of Fourier transforms, such as linearity, time
shifting, time scaling, differentiation and integration properties. We saw that the
convolution operation in the time domain is transformed into a multiplication operation
in frequency domain. We noticed that the energy is preserved in time and fre-
quency domains. We also studied an important concept, called duality. In short,
we observed that the infinite dimensional frequency domain, spanned by uncountably
many complex exponential functions, has many interesting properties and
forms a fertile environment for understanding the frequency content of contin-
uous time signals. We observed that differential and integral equations become
algebraic equations in the frequency domain. Thus, solving them in the fre-
quency domain is considerably easier than solving them in the time domain. We
also showed that there is a one-to-one correspondence between the representations
of an LTI system by its impulse response, frequency response and differential
equations.
Finally, we defined a new domain, called the Laplace domain, where the purely
imaginary frequency domain variable, jω, is extended to a complex plane variable
s = σ + jω. This generalization enables us to find the Laplace transforms
of functions whose Fourier transforms do not exist; the Laplace transform exists
in some region of the complex plane called the Region of Convergence. While the
Fourier transform maps a time-domain function into a frequency-domain function,
the Laplace transform maps a time-domain function into an s-domain function on
the complex plane, where the transformed function exists in the Region of
Convergence (ROC).

Problems
1. Consider the following continuous time signal:

x(t) = e^{−2t} for 0 ≤ t ≤ 1, and x(t) = 0 otherwise.

Determine the Fourier transform of the following signals.


a) x(−t) + x(t)
b) x(t) − x(−t)
c) x(t) + x(t + 1)
d) tx(t)
2. Find the Fourier transforms of the following continuous time signals:
a) e−2(t−2) u(t − 2)
b) e−2|t−2|

3. Find the inverse Fourier Transform of the following frequency domain


signals:
a) X(jw) = 4πδ(ω) + 10πδ(ω − 7π) − 10πδ(ω + 7π)
b) X(jω) = −3 for 0 ≤ ω ≤ 3, X(jω) = −3 for −3 ≤ ω ≤ 0, and X(jω) = 0 for |ω| > 3.

4. A signal in the frequency domain, represented in the polar coordinate system as
X(jω) = |X(jω)| e^{j∢X(jω)}, has the following magnitude and phase:

|X(jω)| = u(ω + 3) − u(ω − 3)
∢X(jω) = −(1/2)ω + π
a) Find the real and imaginary part of X(jω).
b) Find the inverse Fourier transform of this signal.
c) Find the time interval t, where x(t) = 0.
5. Use the Fourier transform properties and tables to evaluate the Fourier
transform of the following signals
a) y(t) = t e^{−|t|} + e^{−|t|}
b) y(t) = t / (1 + t²)²
6. The input of an LTI system is a continuous-time signal whose Fourier

transform is,
X(jω) = δ(ω) + δ(ω − 2) + δ(ω + 2).
a) Find the inverse Fourier transform x(t) of this signal. Is x(t) periodic?
If yes, find the period.
b) Given an LTI system represented by the impulse response,

h(t) = u(t + 1) + u(t − 1),

find the output y(t) when the input is,


i. x(t)
ii. tx(t)
7. Consider a causal linear time-invariant system represented by the follow-
ing frequency response
H(jω) = 1/(4jω + 3)
a) Find the impulse response of the system.
b) Find the input x(t), when we observe the following output,

y(t) = (e−5t − e−10t )u(t).

8. Find the Fourier transforms of the following signals:


a) e−4|t| sin3t
b) x(t) given in figure (8.a).
c) x(t) given in figure (8.b).

Figure (8.a): plot of x(t).

Figure (8.b): plot of x(t).
9. In figure (a), x(t + 1) is given. x(t) has Fourier Transform X(jω).
Figure (9.a): plot of x(t + 1).
a) Find ∢X(jω) and sketch it.
b) Calculate X(jω) at ω = 0.
c) Calculate ∫_{−∞}^{∞} X(jω) dω.

10. Find the Fourier Transform of the following signals:
a) sin(2πt + π/4)
b) 1 + cos(12πt + π/4)

11. A continuous time signal is given below:


x(t) = Σ_{k=−∞}^{∞} [sin(kπ/3)/(kπ/3)] p(t − kπ/3),

where p(t) = δ(t).

a) Plot x(t).
b) Find and plot X(jω).
c) Is X(jω) periodic? If yes, what is the period?
12. The signal x(t) has Fourier Transform X(jω). Find the Fourier Transform
of the Following signals in terms of X(jω).
a) x(2 − t) + x(2 + t)
b) x(−2t − 4)
c) (d³/dt³) x(1 − t)

13. Find the inverse Fourier transform x(t). Are these real in time domain?
a) X(jω) = u(ω + 8) − u(ω − 8)
b) X(jω) = cos(ω) sin(ω)
c) X(jω) = Σ_{k=−∞}^{∞} (1/8)^{|k|} δ(ω − kπ/4)

14. A continuous time signal is given below:

x(t) = (t + 1)(u(t + 1) − u(t − 1)) + u(t − 1)


a) Find and plot the Fourier Transform, X(jw), of x(t).
b) Find and plot the Fourier Transform of the even part of x(t).
c) Find and plot the Fourier transform of the odd part of x(t).

15. A continuous time signal is given below:

x(t) = ((t + 1)/2) (u(t + 1) − u(t − 1))

a) Find and plot the Fourier Transform, X(jω), of x(t).


b) Find the energy of the signal in time domain.
c) Find the energy of this signal in the frequency domain.

16. A continuous time signal is given below:

x(t) = sin²(t) / (2π²t).
a) Find and plot the Fourier transform of this signal.
b) Find the energy of this signal.
17. Find the inverse Fourier Transform of the signals given in the frequency-
domain.
Figure (a): magnitude |X(jω)| and phase ∢X(jω) plots.

Figure (b): plot of X(jω).
a) X(jω) satisfying graphs in (a)
b) X(jω) satisfying graphs in (b)
c) X(jω) = cos(8ω + π/3)
d) X(jω) = sin(ω − 3π)/(ω − 3π)
e) X(jω) = 6{δ(w + 4) − δ(w − 4)} + 4{δ(w + π) − δ(w − π)}
18. It is given that

g(t) ←→ G(jω) = 1 for |ω| ≤ 2, and G(jω) = 0 otherwise.

a) Find x(t) such that x(t) = g(t)/cos(t).
b) Find h(t) such that h(t) = g(t)/cos(16t).
19. Calculate the response corresponding to x(t) = cos(t) of the following
systems whose impulse responses are given below.
i) h1 (t) = 2u(t)
ii) h1 (t) = −4δ(t) + 10e−2t u(t).
iii) h1 (t) = 2te−t u(t)
20. The signal x(t) has the Fourier Transform X(jω) and let g(t) be a periodic
signal whose fundamental frequency is ω0 . Its Fourier Series representation
as follows:


g(t) = Σ_{n=−∞}^{∞} a_n e^{jnω₀t}.

a) Find the Fourier transform of y(t) = x(t)g(t).


b) For each of the following g(t), sketch the spectrum of y(t).

i) g(t) = cos(4t)
ii) g(t) = sin(2t) sin(4t)
iii) g(t) = Σ_{n=−∞}^{∞} δ(t − 8πn)
iv) g(t) as drawn in (a).
v) g(t) = (1/2) Σ_{n=−∞}^{∞} δ(t − πn) − Σ_{n=−∞}^{∞} δ(t − 2πn)

Figure (a): plot of X(jω).
21. The following signal is fed to a continuous time LTI system:

x(t) = u(t − 3) − 2u(t − 4) + u(t − 5).

a) Find and plot the Fourier Transform, X(jω), of x(t).


b) Find and plot the Fourier transform of the corresponding output


y(t) = Σ_{k=−∞}^{∞} x(t − kT).

c) Find the frequency response of this system.


22. a) Calculate the convolution of the following signals. (Hint: Use Fourier
Transform.)
i) x1 (t) = te−3t u(t), x2 (t) = e−6t u(t)
ii) x1 (t) = te−3t u(t), x2 (t) = te−6t u(t)
iii) x1 (t) = e−2t u(t), x2 (t) = e2t u(−t)

b) Consider x(t) = e2−t u(t − 2) and g(t) as drawn in figure (a). Find the
Fourier Transform of x(t) ∗ g(t) and then find X(jω)G(jω).
Figure (a): plot of g(t).

23. Find the Laplace Transform and the region of convergence for the follow-
ing signals in time-domain.
a) x(t) = (e−5t + e−6t )u(t)
b) x(t) = (e−7t + e−8t sin(8t))u(t)
c) x(t) = t for 0 ≤ t ≤ 1, and x(t) = 2 − t for 1 ≤ t ≤ 2.
d) x(t) = δ(2t) + u(2t)
24. Find the functions x(t) whose Laplace Transforms and region of conver-
gences are given.
a) s/(s² + 25),  Re{s} > 0
b) (s + 1)/(s² + 5s + 6),  −3 < Re{s} < −2
c) (s² + 2s + 1)/(s² − s + 1),  Re{s} > 1/2

25. Consider a continuous-time LTI system which is represented by the fol-


lowing second degree differential equation.

d²y(t)/dt² − dy(t)/dt − 2y(t) = x(t)
a) Find the transfer function, H(s), of the system.
b) Find the frequency response of this system.

c) Find the impulse response of this system.
d) Find a block diagram representation of this system.

26. The transfer function of causal LTI system is given as

H(s) = (s + 2)/(s² + 4s + 5)
a) Find the impulse response of the system.
b) Find the frequency response of this system.
c) Find the differential equation, which represents this system.
d) Find the response, y(t), when the input is x(t) = e−2|t| for −∞ < t <
∞.
e) Find a block diagram representation of this system.

27. The transfer function of causal LTI system, S1 , is given as

H₁(s) = (s² + 2s − 3)/(s² + 3s + 2)
Another system, S2 , has the system function

H₂(s) = 2/(s² + 3s + 2)
Assume that both systems have the same input, x(t). Let corresponding
output of S1 be y1 (t) and that of S2 be y2 (t). Find y1 (t) in terms of the
followings:
a) y₂(t)
b) dy₂(t)/dt
c) d²y₂(t)/dt²
28. Find the Laplace Transform of the following signals with their region of
convergences:
a) 2δ(t + 2) − δ(t − 3)
b) (d/dt){u(−1 − t) + u(t − 1)}

29. Consider the continuous time LTI system, represented by the following
block diagram in the s-domain:

[Block diagram in the s-domain: the input x(t) feeds a chain of adders and two 1/s (integrator) blocks, with feedback gains −2, −1, −6 and −1, producing the output y(t).]
a) Find the differential equation representing the system.


b) Is the system stable ? Explain briefly.

Chapter 9
Discrete Time Fourier
Transform and Its Extension
to z-Transforms

Listen to the sounds of periodic functions @


https://384book.net/v0901
WATCH

Historically, the continuous world converged to a digital world gradually,


starting from C. F. Gauss, who devised an early form of the fast Fourier transform in the early 1800s.
The advent of digital computers necessitated the need to work with discrete
time signals and systems. After the pioneering work of C. Shannon, which
bridged continuous and discrete time functions through the sampling theorem
in 1949, J. Cooley and J. Tukey published an efficient method for the digital
implementation of the fast Fourier transform in 1965. Since then, the entire information
and telecommunication technology has smoothly been converted from analog to
digital systems, and the related theoretical background has been developed to extend
the continuous time Fourier series and transform to their discrete counterparts.
In this chapter, we carry the methodology and intuition lying behind the
continuous time Fourier transform to the discrete domain.
Recall that, we extended the continuous time Fourier series representation
of periodic signals to aperiodic signals by using Fourier transforms.
Motivating Question: How did we make such a generalization?
The idea was simple: We assumed that an aperiodic signal can be considered
as a signal with infinite period.
In this chapter, we use the same idea to extend the discrete time Fourier
Series representation to discrete time Fourier transforms. We study the basic
properties of discrete time Fourier transform. Furthermore, we generalize the
discrete time Fourier transform to z-transform by extending the complex ex-
ponential basis functions with unit magnitude to complex variable z = |r|ejω

with arbitrary magnitude, where r ∈ R.

9.1. Fourier Series Extension to Discrete


Time Aperiodic Functions
Recall that a discrete time function is periodic if there exists an integer value,
N , such that,

x[n] = x[n + N ]. (9.1)


In the above formulation, we can extend the integer value, N as large as
we need. When a discrete time function has an infinite period, we assume that
it repeats itself at every integer value N → ∞.
The above approach enables us to define a discrete time aperiodic function,
x[n], as a periodic function for N → ∞, in the limit. Interestingly, as N → ∞,
the sum operation of the Fourier series synthesis equation converges to an
integral operation allowing us to represent aperiodic functions in terms of their
frequency content.
Let’s see how?

9.1.1. Discrete Time Fourier Transform


Theorem: A discrete time function, x[n], can be uniquely represented as a
weighted integral of complex exponential functions by the following synthesis
equation;

x[n] = (1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω,    (9.2)

where the weight, called the Fourier transform, is a continuous function of
frequency, which can be uniquely obtained from the time domain function by
the following analysis equation;

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.    (9.3)

The synthesis equation states that a discrete time function, x[n], can
be uniquely represented by the weighted integral of waves, i.e., complex ex-
ponentials. The weight function, X(ejω ), called the discrete time Fourier
transform of x[n], is a continuous function of frequency variable, ω, which
measures the amount of wave with a particular frequency band in the signal.

The analysis equation shows us how to obtain the Fourier transform,
X(ejω ) of x[n], which represents the signal as a function of frequencies, in the
frequency domain. The Fourier transform representation of a signal enables us
to decompose an aperiodic discrete time signal into its frequency components,
which is embedded in the signal.
The above representation of a physical phenomenon by a function in dis-
crete time domain and continuous frequency domain is one-to-one and onto:

x[n] ←→ X(ejω ) (9.4)
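Since X(e^{jω}) is a continuous function of ω, in practice it is evaluated on a dense frequency grid. The short sketch below (plain NumPy; the signal and grid sizes are arbitrary choices, not from the text) evaluates the analysis equation (9.3) for a finite-length signal and then recovers x[n] by numerically approximating the synthesis integral (9.2).

import numpy as np

# Finite-length example signal; the names are illustrative only.
n = np.arange(-10, 11)                 # indices where x[n] is nonzero
x = 0.8 ** np.abs(n)                   # x[n] = 0.8^{|n|}, truncated to |n| <= 10

# Dense grid over one period of the continuous, periodic frequency variable
w = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
dw = w[1] - w[0]

# Analysis equation (9.3): X(e^{jw}) = sum_n x[n] e^{-jwn}
X = x @ np.exp(-1j * np.outer(n, w))

# Synthesis equation (9.2), approximated by a Riemann sum over one period
x_rec = (X * np.exp(1j * np.outer(n, w))).sum(axis=1) * dw / (2 * np.pi)

print(np.max(np.abs(x - x_rec.real)))  # should be at machine-precision level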


Approximate Proof: Consider the Fourier series representation of a periodic
signal, x̃[n], and its spectral coefficients, as follows:

x̃[n] = Σ_{k=<N>} a_k e^{jkω₀n},    (9.5)

and

a_k = (1/N) Σ_{n=<N>} x̃[n] e^{−jkω₀n},    (9.6)

where < N > indicates the coverage of one full period in the limits of the
summations. Consider, also, a finite duration discrete time aperiodic function,
x[n], which corresponds to the center part of the periodic function, x̃[n], for
one full period,
x[n] = x̃[n] for −N₁ < n < N₂, and x[n] = 0 otherwise.    (9.7)
In the above formulation, the periodic function x̃[n] is generated by repeat-
ing an aperiodic function, x[n], with the fundamental period, N . For the time
being, let us assume that the nonzero range in the interval, N1 + N2 < N is
finite, in the above equation, as shown in Figure 9.1.
Now, let’s define a new function, in the frequency domain,

X(ejkω0 ) = N ak , (9.8)
and replace it by N ak in the analysis equation, to obtain the following
equation;
X(e^{jkω₀}) = Σ_{n=<N>} x̃[n] e^{−jkω₀n}.    (9.9)

Since ak is periodic, with the fundamental period N , X(ejkw0 ) is also pe-


riodic with the same fundamental period.


Figure 9.1: A finite duration signal x[n] (a) is repeated at every fundamental
period N , to generate a periodic signal, x̃[n] (b).

Replacing ak by X(ejkω0 )/N , in the synthesis equation, Fourier series rep-


resentation of x̃[n] becomes,
x̃[n] = (1/N) Σ_{k=<N>} X(e^{jkω₀}) e^{jkω₀n}.    (9.10)

Now, let’s take the limit to stretch the period N to infinity. Then, the
angular frequency converges to an infinitesimal interval,

ω₀ = 2π/N → dω as N → ∞,    (9.11)

and the discrete variable kω0 converges to a continuous variable,

lim_{N→∞} kω₀ → ω.    (9.12)

In the limit, the periodic function, x̃[n] converges to the aperiodic func-
tion, x[n], which repeats itself at every N → ∞. The summation operation of
the synthesis equation converges to integral operation, yielding a continuous
frequency domain function, as follows;
lim_{N→∞} x̃[n] = x[n] = lim_{N→∞} (1/2π) Σ_{k=<N>} ω₀ X(e^{jkω₀}) e^{jkω₀n},

x[n] = (1/2π) ∫_{0}^{2π} X(e^{jω}) e^{jωn} dω,    (9.13)

where

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.    (9.14)

Interestingly, due to the limit of N → ∞, the Fourier transform of a dis-
crete time function converges to a continuous frequency function. Moreover,
the transform domain function, X(ejω ) becomes periodic, as shown by the
following Lemma.
Lemma: Discrete time Fourier transform, X(ejω ) of a function x[n] is
always periodic, with 2π.
Proof: Recall that the discrete time complex exponential is periodic in ω with period 2π:

e^{−j(ω+2π)n} = e^{−jωn} e^{−j2πn} = e^{−jωn} (cos 2πn − j sin 2πn) = e^{−jωn}, for integer n.

Since a linear combination of functions that are periodic in ω with period 2π is also periodic,
the Fourier transform, X(e^{jω}), is periodic with period 2π.
Motivating Question: Why does the Fourier transform, X(ejω ), of a
discrete time function have argument ejω instead of jω of the continuous time
Fourier transform, X(jω)?
This is basically because of the analytical form of the synthesis equation
of the discrete time Fourier transform, which has a summation operation in-
stead of the integral operation of the continuous time Fourier transform. The
integral operation of the continuous time synthesis equation changes the an-
alytical form of the complex exponential basis functions. On the other hand,
the sum operation of the discrete time synthesis equation keeps them as is.
Thus, the Fourier transforms of discrete time functions are always functions
of the complex exponentials, ejω . Keep in mind that, we can always use the
Euler formula,
ejω = cos ω + j sin ω,
to convert the Fourier transform into trigonometric form.
Note: The Fourier transform X(e^{jω}) of a discrete time aperiodic function
x[n] is continuous and periodic with 2π. Thus, the integral of the synthesis
equation covers only one full period of 2π.

9.2. Dirichlet Conditions are Relaxed for


the Existence of Discrete Time Fourier
Transform
Since the discrete time functions carry a finite number of samples in a finite
interval, we do not bother with bounded variations and the finiteness of the
discontinuities in a finite interval. However, the existence of the discrete time
Fourier transform still requires some constraints to the class of functions to
be represented in the frequency domain. We must consider the convergence of
the infinite sum in the analysis equation. Mathematically speaking, the Fourier
transform, X(ejw ), exists if and only if

lim_{K→∞} |X(e^{jω}) − X_K(e^{jω})| → 0,    (9.15)

where

X_K(e^{jω}) = Σ_{n=−K}^{K} x[n] e^{−jωn},    (9.16)

is a truncated Fourier transform for a finite K.


A sufficient condition (but not necessary) for the existence of the discrete
time Fourier transform is that the function should be absolutely summable.
Mathematically, if a function is absolutely summable,

Σ_{n=−∞}^{∞} |x[n]| < ∞,    (9.17)

then its Fourier transform exists, i.e.,

|X(e^{jω})| = |Σ_{n=−∞}^{∞} x[n] e^{−jωn}| < ∞.    (9.18)

However, even if the function is not absolutely summable, the Fourier trans-
form of this function may exist. In this case, it may be possible to represent
the Fourier transform in terms of continuous time impulse functions.
In the following exercises, let us practice to find the Fourier transform of
popular discrete time functions.

Exercise 9.1: Consider the following discrete time signal,

x[n] = an u[n], where |a| < 1. (9.19)

a) Is this function absolutely summable?


b) If your answer is yes, find the Fourier transform of x[n] and plot its magni-
tude and phase spectra.

Solution:
a) This function is absolutely summable,

Σ_{n=−∞}^{∞} |x[n]| = Σ_{n=0}^{∞} |a|ⁿ = 1/(1 − |a|) < ∞.    (9.20)

b) Let's use the analysis equation to find one full period of the Fourier
transform of this signal:

X(e^{jω}) = Σ_{n=0}^{∞} aⁿ e^{−jωn} = 1/(1 − ae^{−jω}),
X(e^{jω}) = (1 − a(cos ω + j sin ω)) / ((1 − a cos ω)² + a² sin² ω).    (9.21)

Let us express this complex function in the rectangular coordinate system:

X(e^{jω}) = (1 − a cos ω)/(1 − 2a cos ω + a²) − j (a sin ω)/(1 − 2a cos ω + a²).    (9.22)

Then, the magnitude and phase spectra of the complex signal, X(e^{jω}), are
computed as follows:

|X(e^{jω})| = 1/√(1 − 2a cos ω + a²),   ∡X(e^{jω}) = tan⁻¹(−a sin ω / (1 − a cos ω)).    (9.23)

The behaviour of the magnitude and phase plots depends on the value of the
parameter a. Figure 9.2 and Figure 9.3 show the magnitude and phase
spectra for positive and negative values of a, respectively.

Figure 9.2: Magnitude and phase plots of X(e^{jω}) for a > 0.
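As a numerical sanity check of Equations (9.21) and (9.23) (a sketch assuming NumPy, not part of the text), the closed-form magnitude and phase can be compared against a truncated version of the analysis equation:

import numpy as np

a = 0.6                                 # any |a| < 1
w = np.linspace(-np.pi, np.pi, 1001)

# Truncated analysis equation: sum of a^n e^{-jwn} over n = 0..199
n = np.arange(0, 200)
X_sum = (a ** n) @ np.exp(-1j * np.outer(n, w))

# Closed forms from Equations (9.21) and (9.23)
X_closed = 1.0 / (1.0 - a * np.exp(-1j * w))
mag = 1.0 / np.sqrt(1.0 - 2.0 * a * np.cos(w) + a ** 2)
phase = np.arctan2(-a * np.sin(w), 1.0 - a * np.cos(w))

print(np.max(np.abs(X_sum - X_closed)))            # truncation error only
print(np.max(np.abs(np.abs(X_closed) - mag)))      # ~0
print(np.max(np.abs(np.angle(X_closed) - phase)))  # ~0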

Exercise 9.2: Consider the following absolutely summable discrete time sig-
nal,

x[n] = a|n| , |a| < 1. (9.24)


a) Plot the signal x[n] for 0 < a < 1.
b) Find and plot the discrete time Fourier transform, X(ejω ).

Solution:
a) This signal is sketched for 0 < a < 1 in Figure 9.4.

Figure 9.3: Magnitude and phase plots of X(e^{jω}) for a < 0.

b) Noting that −1 < a < 1, we can split the analysis equation into two
sums, to obtain the Fourier transform of this signal, as follows;
X(e^{jω}) = Σ_{n=−∞}^{+∞} a^{|n|} e^{−jωn}
          = Σ_{n=0}^{+∞} aⁿ e^{−jωn} + Σ_{n=−∞}^{−1} a^{−n} e^{−jωn}.    (9.25)

Making the substitution of variables m = −n in the second summation, we obtain

X(e^{jω}) = Σ_{n=0}^{+∞} (ae^{−jω})ⁿ + Σ_{m=1}^{∞} (ae^{jω})^m.    (9.26)

Both of these summations are infinite geometric series that we can evaluate
in closed form, yielding

X(e^{jω}) = 1/(1 − ae^{−jω}) + ae^{jω}/(1 − ae^{jω}) = (1 − a²)/(1 − 2a cos ω + a²).    (9.27)
The Fourier transform, X(ejω ) is a real, periodic and continuous frequency
function, where the period is 2π, as illustrated in Figure 9.4, for 0 < a < 1.

Exercise 9.3: Consider the general form of a symmetric rectangular pulse,


given below;
x[n] = 1 for |n| ≤ N₁, and x[n] = 0 for |n| > N₁.    (9.28)


Figure 9.4: (a) Signal x[n] = a|n| of the above example and (b) its Fourier
transform (0 < a < 1).

a) Plot the signal for N1 = 2.


b) Find the discrete time Fourier transform of this signal, and plot the magni-
tude and phase spectrum for N1 = 2.

Solution:
a) The plot of x[n] is given in Figure 9.5 for N1 = 2.
b) Discrete time Fourier transform of this signal is,
X(e^{jω}) = Σ_{n=−N₁}^{N₁} e^{−jωn}.    (9.29)

Using the finite sum formula,

Σ_{n=0}^{N−1} αⁿ = N for α = 1, and (1 − α^N)/(1 − α) otherwise,    (9.30)

and changing the limits of the summation, the finite sum of the Fourier transform
can be written in a compact form as follows;

X(e^{jω}) = sin(ω(N₁ + 1/2)) / sin(ω/2).    (9.31)

For N₁ = 2, the Fourier transform can be written as,

X(e^{jω}) = sin(5ω/2) / sin(ω/2).    (9.32)
The above Fourier transform is a real, symmetric and periodic function
with period 2π, and it is sketched in Figure 9.5.


Figure 9.5: (a) Rectangular pulse signal of the above example for N1 = 2 and
(b) its Fourier transform.
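Equation (9.31) is easy to verify numerically; the brief sketch below (NumPy assumed, illustrative only) compares the direct sum (9.29) with the closed form for N₁ = 2.

import numpy as np

N1 = 2
w = np.linspace(-2 * np.pi, 2 * np.pi, 4001)
w = w[np.abs(np.sin(w / 2)) > 1e-9]     # skip the removable singularities at multiples of 2*pi

n = np.arange(-N1, N1 + 1)
X_direct = np.exp(-1j * np.outer(n, w)).sum(axis=0)    # Equation (9.29)
X_closed = np.sin(w * (N1 + 0.5)) / np.sin(w / 2)      # Equation (9.31)

print(np.max(np.abs(X_direct - X_closed)))  # should be at machine-precision level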

Exercise 9.4: Consider a physical phenomenon represented by the discrete


time impulse function,

x[n] = δ[n]. (9.33)


a) Find and plot the discrete time Fourier transform of x[n].
b) Comment on the frequency content of X(ejw ).

Solution:
a) We employ the analysis equation;

x[n] = δ[n] ↔ X(e^{jω}) = Σ_{n=−∞}^{∞} δ[n] e^{−jωn} = 1.    (9.34)

b) The discrete time Fourier transform of the impulse function contains all
frequencies in equal amounts.

Exercise 9.5: Consider a physical phenomenon represented by the discrete


time shifted impulse function,

x[n] = δ[n − n0 ]. (9.35)


a) Find the discrete time Fourier transform of x[n].
b) Compare your results with the previous example.

Solution:
a) We employ the analysis equation;


Figure 9.6: Fourier transform of a discrete impulse function is constant. This


function can be considered as periodic with 2π, where it is constant at every
interval of 2kπ, for all k.


x[n] = δ[n − n₀] ↔ X(e^{jω}) = Σ_{n=−∞}^{∞} δ[n − n₀] e^{−jωn} = Σ_{n=−∞}^{∞} δ[n] e^{−jω(n+n₀)} = e^{−jωn₀}.    (9.36)
b) Let us consider the magnitude and the phase of this complex function
and compare it to the Fourier transform of the impulse function which was a
real function, X(ejω ) = 1. The magnitude of the shifted impulse is the same
as that of the unshifted impulse,

|X(ejω )| = 1. (9.37)
However, the phase is

∡X(e^{jω}) = −ωn₀.    (9.38)

Exercise 9.6: Consider a physical phenomenon represented by the discrete


time Fourier transform, given as a continuous impulse train, in the frequency
domain;

X(e^{jω}) = 2π Σ_{l=−∞}^{∞} δ(ω − 2πl).    (9.39)

a) Find and plot the inverse discrete time Fourier transform of X(ejω ).
b) Compare time and frequency domain representation of the underlying phys-

ical phenomenon.
c) Give a real life example, which is represented by an impulse train.

Solution:
a) Let’s use the synthesis equation:

x[n] = (1/2π) ∫_{0}^{2π} X(e^{jω}) e^{jωn} dω = Σ_{l=−∞}^{∞} ∫_{0}^{2π} δ(ω − 2πl) e^{jωn} dω = 1, ∀n.    (9.40)

Equivalently,

x[n] = Σ_{l=−∞}^{∞} δ[n − l].    (9.41)

b) Interestingly, the discrete time impulse train in the time domain, is a con-
tinuous frequency impulse train, with amplitude 2π and period 2π, in the
frequency domain.


Figure 9.7: Impulse train preserves its analytic form in both time and frequency
domains. However, while it is a discrete time impulse train, in the time domain,
it becomes a continuous frequency impulse train in the frequency domain. Note
that it scaled by 2π in amplitude and has the fundamental period 2π.

c) Neurons in the brain produce action potentials, which travel along the
axons to govern the communication all over the brain. The signals generated
by these electrochemical activities are in the form of impulse trains. Thus, our
brain can be considered as a massively parallel impulse train generator and
processor.

Exercise 9.7: Consider a physical phenomenon represented by the discrete
time Fourier transform, given as a shifted continuous impulse train, in the
frequency domain;

X(e^{jω}) = 2π Σ_{l=−∞}^{∞} δ(ω − ω₀ − 2πl).    (9.42)

a) Find and plot the inverse discrete time Fourier transform of X(e^{jω}).
b) Study the effect of the shift by comparing your results with the previous
example of an unshifted impulse train.


Figure 9.8: Impulse train, shifted by ω0 .

Solution:
a) Let’s directly use the synthesis equation:
x[n] = ∫_{ω₀−π}^{ω₀+π} δ(ω − ω₀) e^{jωn} dω = e^{jω₀n},

x[n] = e^{jω₀n} ↔ 2π Σ_{l=−∞}^{∞} δ(ω − ω₀ − 2πl).    (9.43)

b) The inverse Fourier transform of the shifted impulse train is a discrete


time complex function. Therefore, the function x[n] = ejω0 n , involves two
plots, namely, the magnitude and the phase spectra are,

|x[n]| = 1, ∀n and ∡x[n] = ω0 n, ∀n, (9.44)


respectively. Analysis of Figure 9.9 reveals that a shift in frequency domain
does not change the magnitude spectrum of the unshifted version of the signal,
but it creates a phase spectrum which is linear with respect to time.

Exercise 9.8: Consider the following discrete time signal,


Figure 9.9: Magnitude and phase spectrum of the complex discrete time domain
function, x[n] = ejw0 n .

x[n] = u[n] − u[n − N ]. (9.45)


a) Find the Fourier transform of the x[n].
b) Plot the magnitude and phase spectra of the Fourier transform for N = 4.

Solution:
a) Let's find the discrete time Fourier transform of x[n] by using the analysis
equation for one full period in the interval −π ≤ ω ≤ π:

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = Σ_{n=0}^{N−1} e^{−jωn} = (1 − e^{−jωN}) / (1 − e^{−jω}).    (9.46)

We have to keep in mind that X(ejω ) is periodic with 2π.


b) There is an easy way of computing the magnitude and phase spectrum,
in the interval, −π ≤ ω ≤ π, by arranging the above equation and putting it
in a polar form, as follows;

X(e^{jω}) = (1 − e^{−jωN})/(1 − e^{−jω}) = (e^{−jωN/2}/e^{−jω/2}) · (e^{jωN/2} − e^{−jωN/2})/(e^{jω/2} − e^{−jω/2}) = e^{−jω(N−1)/2} sin(ωN/2)/sin(ω/2).    (9.47)

Recall that a complex function in polar form is represented in terms of its
magnitude and phase, as,

X(e^{jω}) = |X(e^{jω})| e^{j∡X(e^{jω})}.    (9.48)

Therefore, the magnitude spectrum of X(e^{jω}) is

|X(e^{jω})| = |sin(ωN/2) / sin(ω/2)|    (9.49)

and the phase spectrum of X(e^{jω}) is

∡X(e^{jω}) = −ω(N − 1)/2,    (9.50)

which are repeated at every 2π period.
For N = 4, the magnitude of X(e^{jω}) is

|X(e^{jω})| = |sin(2ω) / sin(ω/2)|    (9.51)

and the phase of X(e^{jω}) is

∡X(e^{jω}) = −3ω/2,    (9.52)

as shown in Figure 9.10.

Figure 9.10: Magnitude and phase spectra of X(e^{jω}) for N = 4.

Exercise 9.9: Consider the discrete time unit step function, u[n].
a) Is this function absolutely summable?
b) Can you find the discrete time Fourier transform of this function?

Solution:


Figure 9.11: Magnitude and phase spectrum of X(ejw ) which is the Fourier
Transform of x[n] = u[n] − u[n − 4].

a) This function is not absolutely summable



Σ_{n=0}^{∞} |u[n]| → ∞.    (9.53)

b) Finding the Fourier transform of the unit step function is a little tricky.
First let us represent the unit step function in terms of the summation of two
functions,

u[n] = f [n] + g[n], (9.54)


where
f[n] = 1/2, ∀n,    (9.55)

and

g[n] = 1/2 for n ≥ 0, and g[n] = −1/2 for n < 0.    (9.56)
In order to find the Fourier transform of u[n], we find the Fourier transform
of f [n] and g[n] and then add them. Mathematically,

u[n] = f [n] + g[n] ←→ U (ejw ) = F (ejw ) + G(ejw ). (9.57)


Considering the fact that the inverse Fourier transform of the shifted impulse
function, δ(ω − ω₀), is

x[n] = (1/2π) ∫_{2π} δ(ω − ω₀) e^{jωn} dω = (1/2π) e^{jω₀n},    (9.58)

the Fourier transform of f[n] can be obtained as a sum of shifted impulses, as follows;

F(e^{jω}) = Σ_{n=−∞}^{∞} f[n] e^{−jωn} = (1/2) Σ_{n=−∞}^{∞} e^{−jωn} = π Σ_{n=−∞}^{∞} δ(ω − 2πn),    (9.59)

and the Fourier transform of g[n] is

G(e^{jω}) = Σ_{n=−∞}^{∞} g[n] e^{−jωn} = −(1/2) Σ_{n=−∞}^{−1} e^{−jωn} + (1/2) Σ_{n=0}^{∞} e^{−jωn} = 1/(1 − e^{−jω}).    (9.60)

Therefore, the discrete time Fourier transform of the unit step function is

U(e^{jω}) = F(e^{jω}) + G(e^{jω}) = 1/(1 − e^{−jω}) + π Σ_{n=−∞}^{∞} δ(ω − 2πn).    (9.61)

Note: The above exercises demonstrate that although the unit step func-
tion u[n] = 0 for n < 0, we need to evaluate the analysis equation in the interval
of −∞ < n < ∞. The continuous frequency Fourier transform of discrete time
unit step function covers one period in −π < ω < π and it repeats itself at
every 2π period.

9.3. Fourier Transform of Discrete Time


Periodic Functions
Let us now investigate the relationship between the Fourier series and Fourier
transform representation of discrete time periodic functions.
Recall that the Fourier series representation of a discrete time periodic
function is given by the following synthesis and analysis equation pair;
x[n] = Σ_{k=<N>} a_k e^{jkω₀n}  ←(F.S.)→  a_k = (1/N) Σ_{n=<N>} x[n] e^{−jkω₀n},    (9.62)

where x[n] and a_k are both periodic with N = 2π/ω₀.


Recall, also that the Fourier transform of a discrete time signal is,

x[n] = (1/2π) ∫_{0}^{2π} X(e^{jω}) e^{jωn} dω  ←(F.T.)→  X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.    (9.63)

Motivating Question: What is the relationship between the spectral co-
efficients ak and Fourier transform of X(ejω ), when the discrete time signal
x[n] is periodic?
As we show in the previous example, the discrete time complex exponential
for the k th harmonic has the Fourier transform as the shifted impulse train, as
follows,

x[n] = e^{jkω₀n} ←→ X(e^{jω}) = 2π Σ_{l=−∞}^{∞} δ(ω − kω₀ − 2πl).    (9.64)

Let us replace x[n] in the Fourier transform equation by its Fourier series
representation. Each term in the Fourier series equation of x[n] and its Fourier
transform will be as follows:
x[n] = Σ_{k=<N>} a_k e^{jkω₀n}.    (9.65)

Each term on the right hand side of the summation in Equation (9.65) has
the following Fourier transform:

a₀ ←→ 2πa₀ Σ_l δ(ω − 2πl),
a₁ e^{jω₀n} ←→ 2πa₁ Σ_l δ(ω − ω₀ − 2πl),
a₂ e^{j2ω₀n} ←→ 2πa₂ Σ_l δ(ω − 2ω₀ − 2πl),    (9.66)
⋮
a_{N−1} e^{j(N−1)ω₀n} ←→ 2πa_{N−1} Σ_l δ(ω − (N − 1)ω₀ − 2πl).

If we add all the terms on the left hand side of the above transforms, we
obtain the Fourier series representation of the discrete time signal x[n]. If we
add the right hand sides of the above transforms, we obtain a superposition
of shifted impulse functions, where the superposition weights are 2πa_k.
In order to get an idea about the behavior of this superposition, let's plot the
Fourier transform of the first term a₀ and that of the second term a₁e^{jω₀n} and
add them together, as shown in Figure 9.12.

Figure 9.12: The Fourier transforms of a₀ (an impulse train at multiples of 2π) and of a₁e^{jω₀n} (an impulse train shifted by 2π/N), together with their superposition.

Figure 9.12 indicates that the two superposed terms in the time domain generate
an impulse train in the frequency domain,

a₀ + a₁ e^{jω₀n} ←→ 2π[a₀ δ(ω) + a₁ δ(ω − 2π/N)],
which repeats at every 2π period.
If we add all the terms in the left hand side and right hand side of the


Fourier transform pairs in Equation (9.66), we obtain the Fourier transform of a dis-
crete time periodic signal x[n] in terms of the spectral coefficients as follows:

X(e^{jω}) = Σ_{k=0}^{N−1} Σ_{l=−∞}^{∞} 2πa_k δ(ω − (2π/N)(k + lN)).    (9.67)

We can further simplify the above equation to obtain the relationship be-
tween the Fourier transform and Fourier series representation of a periodic
signal, as follows:


X(e^{jω}) = Σ_{k=−∞}^{∞} 2πa_k δ(ω − 2πk/N),    (9.68)

where ω₀ = 2π/N and

x[n] = Σ_{k=<N>} a_k e^{jkω₀n},
a_k = (1/N) Σ_{n=<N>} x[n] e^{−jkω₀n}.    (9.69)

The above equation reveals that the Fourier transform of a periodic signal
x[n] converts the discrete time spectral coefficients of weighted and shifted
impulses into continuous time weighted and shifted impulses.
Note: While the period of the spectral coefficients is N = 2π/w0 , the
period of the Fourier transform is 2π. Therefore, the Fourier transform axis is
scaled by w0 in the frequency domain.

Exercise 9.10: Consider the following discrete time periodic signal;


x[n] = cos ω₀n = (1/2) e^{jω₀n} + (1/2) e^{−jω₀n}.    (9.70)
a) Find the spectral coefficients of x[n].
b) Find the Fourier transform of x[n].
c) Compare the Fourier series and Fourier transform representations of x[n].

Solution:
a) The spectral coefficients of x[n] are,
a₁ = a₋₁ = 1/2,  N = 2π/ω₀.    (9.71)
b) Fourier transform of x[n] is,

X(e^{jω}) = Σ_{k=−∞}^{∞} 2πa_k δ(ω − 2πk/N) = π[δ(ω − ω₀) + δ(ω + ω₀)], for −π ≤ ω ≤ π,    (9.72)
and it repeats at every 2π.
c) While the spectral coefficients are discrete time impulse train with period
N , the Fourier transform is a continuous impulse train with period 2π, as shown
in Figure 9.13. In other words, the Fourier transform, X(ejω ) repeats itself with
period 2π. The spectral coefficients ak repeat with period N :

X(ejω ) = X(ejω±2kπ ), and ak = ak±N , ∀k. (9.73)


Figure 9.13: Fourier series coefficients and Fourier transform of the periodic
signal, x[n] = cosω0 n.

Exercise 9.11: Consider an arbitrary pulse signal, defined in a finite interval,


x[n] = Σ_{k=−N₁}^{N₂} c_k δ[n − k].    (9.74)

a) Is this function absolutely summable?


b) If your answer is yes, find the discrete time Fourier transform of this
signal.

Solution:
a) Since the summation is finite, this function is absolutely summable,
provided that all the values of c_k are bounded:

Σ_{n=−∞}^{∞} |x[n]| = Σ_{k=−N₁}^{N₂} |c_k| < ∞.    (9.75)

b) Discrete time Fourier transform of this function can be obtained from
the analysis equation;


X(e^{jω}) = Σ_{n=−∞}^{∞} Σ_{k=−N₁}^{N₂} c_k δ[n − k] e^{−jnω} = Σ_{k=−N₁}^{N₂} c_k e^{−jkω}.    (9.76)

An example application: removing unwanted noise


from audio @ https://384book.net/i0901
INTERACTIVE

9.4. Properties of Fourier Transforms For


Discrete Time Signals and Systems
So far, we saw that discrete time Fourier transforms map a discrete time do-
main function into a continuous and periodic frequency domain function. In
the frequency domain, the time variable disappears and the function is repre-
sented in terms of a continuous variable of frequencies by the following analysis
equation,

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn},    (9.77)

where the time domain function can be uniquely obtained from its frequency
domain representation by the synthesis equation;

x[n] = (1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω.    (9.78)
Discrete time Fourier analysis and synthesis equations reveal that the class
of absolutely summable discrete time functions can be represented by uncount-
ably infinite waveforms, namely complex exponentials, with continuously vary-
ing frequencies. Therefore, the Fourier transform, X(ejω ), gives us a unique and
powerful way of viewing a physical phenomenon in terms of weighted sum-
mation of waveforms, where the weights are the time domain function x[n].
Furthermore, the time domain function can be uniquely recovered from its
Fourier transform. In other words, time and frequency domain representation
of a physical phenomenon is one-to-one and onto;

x[n] ←→ X(ejω ). (9.79)

In the following sections, we shall investigate the properties of discrete time
Fourier transform. We shall use the properties to go back and forth between the
discrete time and continuous frequency domains. We shall study the frequency
content of the aperiodic discrete time signals. We shall design and implement
LTI systems in the time and frequency domains for filtering the discrete time
signals.

9.4.1. Basic Properties of Discrete Time Fourier


Transform
Recall that the discrete time Fourier transform is the extension of the discrete time
Fourier series, where an aperiodic function is represented as a periodic function
of infinite period. As we stretch the period N to infinity, the Fourier series of a
discrete time periodic function converges to the Fourier transform, which is a
continuous function of frequency. Furthermore, since the complex exponential
basis functions {e^{−jωn}} are periodic in ω with period 2π, the superposition of
complex exponential functions that forms the Fourier transform, X(e^{jω}), is also
periodic with 2π. Mathematically,

X(ejω ) = X(ej(ω+2π) ). (9.80)


Although some properties of discrete time Fourier series resemble the prop-
erties of discrete time and continuous time Fourier transform, there are sub-
stantial differences imposed by taking the limit of the integer period, N → ∞.
As we did for the continuous time Fourier transforms, we provide the basic
properties and transform pairs in Tables 9.1 and 9.2, for the discrete time
counterparts. These tables simplify the computations for finding the Fourier
transforms and/or their inverse.
The properties can be proven by directly employing the analysis and synthe-
sis equations. The reader is strongly recommended to prove all the properties
and solve the Fourier transform pairs, given in Tables 9.1 and 9.2.

Table 9.1: Properties of the discrete time Fourier transform.

Synthesis equation: x[n] = (1/2π) ∫_{2π} X(e^{jω}) e^{jωn} dω.  Analysis equation: X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.

x[n] ↔ X(e^{jω}), periodic with period 2π
y[n] ↔ Y(e^{jω}), periodic with period 2π
ax[n] + by[n] ↔ aX(e^{jω}) + bY(e^{jω})
x[n − n₀] ↔ e^{−jωn₀} X(e^{jω})
e^{jω₀n} x[n] ↔ X(e^{j(ω−ω₀)})
x*[n] ↔ X*(e^{−jω})
x[−n] ↔ X(e^{−jω})
x_(m)[n] (= x[n/m] if n is a multiple of m, 0 otherwise) ↔ X(e^{jmω})
x[n] ∗ y[n] ↔ X(e^{jω}) Y(e^{jω})
x[n] y[n] ↔ (1/2π) ∫_{2π} X(e^{jθ}) Y(e^{j(ω−θ)}) dθ
x[n] − x[n − 1] ↔ (1 − e^{−jω}) X(e^{jω})
Σ_{k=−∞}^{n} x[k] ↔ X(e^{jω}) / (1 − e^{−jω}) + π X(e^{j0}) Σ_{k=−∞}^{∞} δ(ω − 2πk)
n x[n] ↔ j dX(e^{jω})/dω
For real-valued x[n]: X(e^{jω}) = X*(e^{−jω}), Re{X(e^{jω})} = Re{X(e^{−jω})}, Im{X(e^{jω})} = −Im{X(e^{−jω})}, |X(e^{jω})| = |X(e^{−jω})|, ∡X(e^{jω}) = −∡X(e^{−jω})
Even part of x[n] ↔ Re{X(e^{jω})}
Odd part of x[n] ↔ j Im{X(e^{jω})}

Parseval's relation for non-periodic signals: Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{2π} |X(e^{jω})|² dω

Table 9.2: Fourier transform pairs of popular discrete time functions.

δ[n] ↔ 1
δ[n − n₀] ↔ e^{−jωn₀}
Σ_{k=−∞}^{∞} δ[n − kN] ↔ (2π/N) Σ_{k=−∞}^{∞} δ(ω − 2πk/N)
1 ↔ 2π Σ_{k=−∞}^{∞} δ(ω − 2πk)
e^{jω₀n} ↔ 2π Σ_{k=−∞}^{∞} δ(ω − ω₀ − 2πk)
cos(ω₀n) ↔ π Σ_{k=−∞}^{∞} [δ(ω − ω₀ − 2πk) + δ(ω + ω₀ − 2πk)]
sin(ω₀n) ↔ (π/j) Σ_{k=−∞}^{∞} [δ(ω − ω₀ − 2πk) − δ(ω + ω₀ − 2πk)]
u[n] ↔ 1/(1 − e^{−jω}) + π Σ_{k=−∞}^{∞} δ(ω − 2πk)
aⁿ u[n], |a| < 1 ↔ 1/(1 − ae^{−jω})
(n + 1) aⁿ u[n], |a| < 1 ↔ 1/(1 − ae^{−jω})²
[(n + m − 1)!/(n!(m − 1)!)] aⁿ u[n], |a| < 1 ↔ 1/(1 − ae^{−jω})^m
a^{|n|}, |a| < 1 ↔ (1 − a²)/(1 − 2a cos ω + a²)
Periodic square wave (x[n] = 1 for |n| ≤ N₁, 0 for N₁ < |n| ≤ N/2, period N) ↔ 2π Σ_{k=−∞}^{∞} a_k δ(ω − 2πk/N)
x[n] = 1 for |n| ≤ N₁, 0 for |n| > N₁ ↔ sin(ω(N₁ + 1/2)) / sin(ω/2)
sin(Wn)/(πn) = (W/π) sinc(Wn/π), 0 < W < π ↔ 1 for |ω| ≤ W, 0 for W < |ω| ≤ π (periodic with 2π)

In the following, we study some of the selective properties to grasp the


discrete time and continuous frequency domain representations of signals and
systems and their relationship.
1) Linearity: Like the continuous time Fourier transform, the discrete-time
Fourier transform is a linear operator. Mathematically speaking, if we
have two functions and their corresponding Fourier transforms,

x[n] ←→ X(ejω ) (9.81)


and

y[n] ←→ Y (ejω ), (9.82)


then,

ax[n] + by[n] ←→ aX(ejω ) + bY (ejω ). (9.83)


This property follows from the fact that both summation and integral
operators, required to take the Fourier transform and its inverse, are linear
operators.
2) Time Shifting: If the function x[n] is shifted in time by a constant
amount, n0 , its Fourier transform X(ejω ) is multiplied by the complex
exponential function, e−jωn0 , in the frequency domain:

x[n − n0 ] ←→ e−jωn0 X(ejω ). (9.84)


This property can be shown by defining y[n] = x[n − n0 ] and inserting
the shifted signal into the analysis equation,

Y(e^{jω}) = Σ_{n=−∞}^{∞} x[n − n₀] e^{−jωn}.    (9.85)

Let us change the variable of summation to n′ = n − n₀ and get the time
shifting property:

Y(e^{jω}) = Σ_{n′=−∞}^{∞} x[n′] e^{−jω(n′+n₀)} = e^{−jωn₀} X(e^{jω}).    (9.86)

Note that, since the multiplicative complex exponential function, e−jωn0 ,


has a magnitude of 1, the time delay alters the phase of X(ejω ), but not
its magnitude. As a result, time delay doesn’t cause the frequency content
of X(ejω ) to change at all.
Linearity and time-shifting properties enable us to determine the re-
sponse of LTI systems without solving the representative linear constant-
coefficient difference equations, as illustrated by the following example.

Exercise 9.12: Consider the difference equation of a discrete time LTI


system, which is initially at rest:
y[n] + (1/4) y[n − 1] − (1/8) y[n − 2] = x[n] − x[n − 1].    (9.87)
a) Find the frequency response of this system.
b) Find the impulse response of this system.

Solution:
a) Let’s take the Fourier transform of both sides using the time shifting
property,
Y(e^{jω})[1 + (1/4) e^{−jω} − (1/8) e^{−2jω}] = X(e^{jω})[1 − e^{−jω}].    (9.88)
Let us replace the input by the impulse function,

x[n] = δ[n] ←→ X(ejw ) = 1.

Then, the corresponding output is to be replaced by the frequency re-


sponse. In this case, the system equation in the frequency domain be-
comes;
1 1
H(ejw )[1 + e−jw − e−2jw ] = 1 − e−jw . (9.89)
4 8
The frequency response of this system can be directly obtained from the
above equation, as follows;

H(e^{jω}) = (1 − e^{−jω}) / (1 + (1/4) e^{−jω} − (1/8) e^{−2jω}).    (9.90)
b) Impulse response of this system is the inverse Fourier transform of the

frequency response. By using the partial fraction expansion method, we can
simplify the frequency response;

H(e^{jω}) = 2/(1 + (1/2) e^{−jω}) − 1/(1 − (1/4) e^{−jω}).    (9.91)
Using the Fourier transform pairs of Table 9.2, we obtain the impulse response
as follows;

h[n] = 2(−1/2)ⁿ u[n] − (1/4)ⁿ u[n].    (9.92)
The above exercise shows how the time difference property converts a
difference equation of time domain into an algebraic equation in the fre-
quency domain. This handy property allows us to avoid the cumbersome
recursions for finding the output y[n] and the impulse response h[n] of an
LTI system.
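The same algebraic route is easy to check numerically. The sketch below is an illustration (assuming SciPy; the coefficient lists b and a are read directly from Equation (9.87)): it evaluates H(e^{jω}) with signal.freqz and compares the impulse response obtained by running the recursion against the closed form (9.92).

import numpy as np
from scipy import signal

# y[n] + (1/4) y[n-1] - (1/8) y[n-2] = x[n] - x[n-1]
b = [1.0, -1.0]            # input-side coefficients
a = [1.0, 0.25, -0.125]    # output-side coefficients

# H(e^{jw}) sampled on a frequency grid (could be plotted against w)
w, H = signal.freqz(b, a, worN=1024)

# Impulse response by running the recursion on a unit impulse
n = np.arange(50)
delta = np.zeros_like(n, dtype=float)
delta[0] = 1.0
h_rec = signal.lfilter(b, a, delta)

# Closed-form impulse response from Equation (9.92)
h_closed = 2 * (-0.5) ** n - 0.25 ** n
print(np.max(np.abs(h_rec - h_closed)))  # should be ~0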

Exercise 9.13: Find the Fourier transform of the following signal:

x[n] = δ[n − 1] + δ[n + 1]. (9.93)

Solution:
Using the Fourier transform of the impulse function together with the
time shift property, we obtain,

δ[n] ↔ 1
δ[n − n₀] ↔ e^{−jωn₀}    (9.94)
X(e^{jω}) = e^{−jω} + e^{jω} = 2 cos ω.

Two discrete time impulses, located at n = 1 and n = −1, are represented
by a continuous, periodic cosine function in the frequency domain,
where the period is 2π.

Exercise 9.14: Find the inverse Fourier transform of the following sig-
nal:

Y (ejω ) = e−jω cosω. (9.95)

Solution:
Recall, from the previous example,

F.T.
x[n] = δ[n − 1] + δ[n + 1] ←→ X(ejω ) = 2cosω. (9.96)

In order to get a multiplicative factor e−jω in the frequency domain, we
need to get a time shift with n0 = 1, in the time domain;

x[n − 1] ←→ e−jω X(ejω ). (9.97)

Let us shift x[n] by n0 = 1;

x[n − 1] = δ[n − 2] + δ[n] ←→ 2e−jω cosω. (9.98)

Hence,
y[n] = (1/2) x[n − 1] = (1/2)(δ[n − 2] + δ[n]) ←→ e^{−jω} cos ω.    (9.99)
Note: We avoided taking the integral to find the inverse Fourier trans-
form. Instead we used the properties. Why? Because taking a complex
integral is not an easy task in most cases. It may require sophisticated
methods, which is beyond the scope of this book.

3) Frequency Shift: A shift in frequency domain corresponds to scaling


the time domain function by the complex exponential:

ejω0 n x[n] ↔ X(ej(ω−ω0 ) ). (9.100)


The above property can be shown directly by defining y[n] = e^{jω₀n} x[n]
and finding its Fourier transform using the analysis equation,

Y(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{jω₀n} e^{−jωn} = Σ_{n=−∞}^{∞} x[n] e^{−j(ω−ω₀)n} = X(e^{j(ω−ω₀)}).    (9.101)
(9.101)
Comparison of time and frequency shift properties uncovers an elegant
symmetry between the shifts in time and frequency domains. A shift in
one domain corresponds to a multiplication in the other domain. This is
one of the duality properties of Fourier transforms.
4) Time Scale: Time scale in discrete time functions requires a special care,
since the domain should remain integer valued after the scaling. When we
scale time by a factor of m, we need to define a new function,
y[n] = x[n/m] if n is an integer multiple of m, and y[n] = 0 otherwise.    (9.102)
In other words, when m > 1, we stretch the signal x[n] by inserting zero
values into the discrete time function y[n] for the non integer values of
n/m. When m < 1, we squish the signal by skipping some of the values

of the function x[n].
Taking the discrete time Fourier transform of y[n], we obtain,

Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = Σ_{n multiple of m} x[n/m] e^{−jωn}.    (9.103)

Changing the dummy variable of summation to n′ = n/m, we obtain,

Y(e^{jω}) = Σ_{n′=−∞}^{∞} x[n′] e^{−jωmn′}.    (9.104)

Therefore,

Y(e^{jω}) = X(e^{jmω}).    (9.105)
Note: When we stretch the function x[n] in the time domain by a factor m > 1,
the frequency axis of the Fourier transform is compressed by the same factor,
since Y(e^{jω}) = X(e^{jmω}). Conversely, squeezing the time domain signal
expands its spectrum along the frequency axis. This is illustrated by the short
sketch below.
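The following sketch (NumPy assumed; the signal and the factor m are arbitrary illustrative choices) inserts zeros between the samples of a signal and evaluates both transforms on a frequency grid, confirming Y(e^{jω}) = X(e^{jmω}).

import numpy as np

m = 3
n = np.arange(0, 20)
x = 0.7 ** n                               # a simple test signal

# Build y[n]: x[n/m] when n is a multiple of m, zero otherwise
y = np.zeros(m * len(x))
y[::m] = x
ny = np.arange(len(y))

w = np.linspace(-np.pi, np.pi, 1001)

X = x @ np.exp(-1j * np.outer(n, w * m))   # X(e^{j m w})
Y = y @ np.exp(-1j * np.outer(ny, w))      # Y(e^{j w})

print(np.max(np.abs(X - Y)))               # ~0, so Y(e^{jw}) = X(e^{jmw})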
5) Time Reversal: A special case of the time scaling property is the time reversal
property, where m = −1. Replacing the value of m in Equation (9.105), we
obtain,

x[−n] ←→ X(e^{−jω}).    (9.106)

Equation (9.106) states that reversing time in the time domain corresponds
to reversing the frequency in the frequency domain.
6) Fourier Transform of Even and Odd Functions: An even function in
the time domain has a real Fourier transform, in the frequency domain.
Similarly, an odd function in the time domain has a purely imaginary
Fourier transform in the frequency domain. Mathematically, for a real-valued x[n],

Ev{x[n]} ↔ Re{X(e^{jω})},
Odd{x[n]} ↔ j Im{X(e^{jω})}.    (9.107)
The above property directly follows the definition of even and odd parts
of the functions. Suppose that the Fourier transform pair of a function
x[n] is given by,

x[n] ↔ X(ejω ), (9.108)

where the Fourier transform can be represented in Cartesian coordinate
system, as follows;

X(ejω ) = Re {X(ejω )} + j Im {X(ejω )}. (9.109)

Recall that even part of a function x[n] is defined as,


Ev{x[n]} = (1/2)[x[n] + x[−n]].    (9.110)
From the time reverse property, we get,

x[−n] ←→ X(e−jω ) = Re {X(ejω )} − j Im {X(ejω )}. (9.111)

Taking the Fourier transform of both sides of Equation (9.110), we obtain,

Ev{x[n]} ↔ (1/2)[Re{X(e^{jω})} + j Im{X(e^{jω})} + Re{X(e^{jω})} − j Im{X(e^{jω})}].    (9.112)
Hence,
Ev {x[n]} ↔ Re {X(ejω )} (9.113)
Similarly, the odd part of a function x[n] is defined as,
Odd{x[n]} = (1/2)[x[n] − x[−n]].    (9.114)

Taking the Fourier transform of both sides of the above equation, we obtain,

Odd{x[n]} ↔ (1/2)[Re{X(e^{jω})} + j Im{X(e^{jω})} − Re{X(e^{jω})} + j Im{X(e^{jω})}].    (9.115)

Hence,

Odd{x[n]} ↔ j Im{X(e^{jω})}.    (9.116)
Since the Fourier transform of a real even function is real, there is no imaginary
part. Thus, the phase is 0 (or π) at every frequency. Mathematically, the
magnitude of an even function is,

|X(e^{jω})| = |Re{X(e^{jω})}|,

and the phase of an even function is,

∠X(e^{jω}) = 0, or π where Re{X(e^{jω})} is negative.

On the other hand, the Fourier transform of a real odd function is a purely
imaginary function. Thus, the phase spectrum takes the constant values ±π/2,
and the magnitude is the absolute value of the imaginary part. Mathematically,
the magnitude of an odd function is,

|X(e^{jω})| = |Im{X(e^{jω})}|,

and the phase of an odd function is,

∠X(e^{jω}) = ±π/2.
7) Convolution Property: Convolution of two functions in the time do-
main corresponds to multiplication of their Fourier transform in the fre-
quency domain.

y[n] = x[n] ∗ h[n] ←→ Y (ejω ) = X(ejω )H(ejω ) (9.117)


The convolution property follows from the Fourier analysis equation
and the convolution summation. Inserting the convolution summation,

x[n] ∗ h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k],

into the Fourier analysis equation, we obtain,

Y(e^{jω}) = F[y[n]] = F[x[n] ∗ h[n]] = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} x[k] h[n − k] e^{−jωn}.    (9.118)

Changing the dummy variable of summation, n′ = n − k, we get,

Y(e^{jω}) = F[x[n] ∗ h[n]] = Σ_{k=−∞}^{∞} x[k] e^{−jωk} Σ_{n′=−∞}^{∞} h[n′] e^{−jωn′} = X(e^{jω}) H(e^{jω}).    (9.119)
The convolution property is a direct consequence of the fact that the Fourier
transform decomposes a signal into a linear combination of complex exponential
functions, {e^{jωn}}, each of which is an eigenfunction of a
linear, time-invariant system,

y[n] = H(e^{jω}) e^{jωn}.


The frequency response H(ejω ) corresponds to the eigenvalue of the LTI
system. Filtering a discrete time signal is a direct consequence of the
convolution property.
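The convolution property is what makes frequency-domain filtering practical. Since the DFT samples the DTFT of a finite-length signal at equally spaced frequencies, the property can be checked with the FFT, as in the sketch below (NumPy assumed; the signals are arbitrary test vectors, not from the text).

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)        # arbitrary finite-length signals
h = rng.standard_normal(16)

y = np.convolve(x, h)              # time-domain convolution, length 32 + 16 - 1

# Sample the transforms of x, h and y at the same frequencies by zero-padding
L = len(y)
Y_direct = np.fft.fft(y)
Y_product = np.fft.fft(x, L) * np.fft.fft(h, L)

print(np.max(np.abs(Y_direct - Y_product)))   # ~1e-13, i.e. convolution maps to multiplication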
8) Modulation Property: This property is similar to the continuous time

modulation. The only difference is to deal with periodic convolution.
Modulation property states that multiplication of two signals in the time
domain corresponds to circular convolution of their Fourier transform
in the frequency domain, which is evaluated for one full period. Formally
speaking,

y[n] = x[n]h[n] ←→ Y(e^{jω}) = X(e^{jω}) ⊗ H(e^{jω}) = (1/2π) ∫_{2π} X(e^{jθ}) H(e^{j(ω−θ)}) dθ.    (9.120)

Since the Fourier transform of a discrete time function is periodic with
2π in the frequency domain, we apply the circular convolution operation over
2π, which is indicated by the ⊗ symbol. Circular convolution also brings
a scaling factor of 1/(2π).
In order to show the multiplication property, we take the Fourier transform
of the multiplication of two functions, y[n] = x[n]h[n], using the analysis equation,

Y(e^{jω}) = F[y[n]] = F[x[n]h[n]] = Σ_{n=−∞}^{∞} x[n] h[n] e^{−jωn}.    (9.121)

Then, we insert the inverse Fourier transform of x[n],

x[n] = (1/2π) ∫_{0}^{2π} X(e^{jω′}) e^{jω′n} dω′    (9.122)

into Equation (9.121),

Y(e^{jω}) = F[x[n]h[n]] = (1/2π) ∫_{0}^{2π} X(e^{jω′}) Σ_{n=−∞}^{∞} h[n] e^{jω′n} e^{−jωn} dω′.    (9.123)

Finally, we arrange the above equation,

Y(e^{jω}) = F[x[n]h[n]] = (1/2π) ∫_{0}^{2π} X(e^{jω′}) Σ_{n=−∞}^{∞} h[n] e^{−jn(ω−ω′)} dω′.    (9.124)

Note that the summation on the right hand side of the above equation is
the shifted Fourier transform of the function h[n],

H(e^{j(ω−ω′)}) = Σ_{n=−∞}^{∞} h[n] e^{−jn(ω−ω′)}.    (9.125)

Inserting the shifted Fourier transform H(e^{j(ω−ω′)}) into Equation (9.124), we obtain,

Y(e^{jω}) = F[x[n]h[n]] = (1/2π) ∫_{0}^{2π} X(e^{jω′}) H(e^{j(ω−ω′)}) dω′ = X(e^{jω}) ⊗ H(e^{jω}).    (9.126)
Modern communication systems rely on discrete-time modulation tech-
niques, rather than their continuous counterparts. Modulation property
is extensively used to increase or decrease the frequency bandwidth of
signals.
9) Parseval’s Equality: In all of the above properties and examples, we ob-
serve that the representation of signals and systems in time and frequency
domains, have substantially different analytical forms and structures. One
striking difference is that a discrete time aperiodic function is represented
by a continuous periodic function in the frequency domain. For exam-
ple, a discrete time complex exponential signal has a Fourier transform
consisting of continuous time impulse train function.
An important invariant between the two domains is the energy of a signal.
Formally speaking, the energy of the signals in both domains does not
change:

Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{2π} |X(e^{jω})|² dω.    (9.127)

The above relation, called Parseval Equality, shows that the energy of a
signal in the time and frequency domains is preserved.
As we did for continuous time functions, we can show Parseval's equality
by inserting the analysis equation into the left hand side of Equation (9.127);

Σ_{n=−∞}^{∞} |x[n]|² = Σ_{n=−∞}^{∞} x[n] x*[n]
= (1/(2π)²) Σ_{n=−∞}^{∞} ∫_{2π} X(e^{jω}) e^{jωn} dω ∫_{2π} X*(e^{jω′}) e^{−jω′n} dω′
= (1/(2π)²) ∫_{2π} ∫_{2π} X(e^{jω}) X*(e^{jω′}) Σ_{n=−∞}^{∞} e^{jn(ω−ω′)} dω dω′.    (9.128)

We can show that,

Σ_{n=−∞}^{∞} e^{jn(ω−ω′)} = 2π Σ_{l=−∞}^{∞} δ(ω − ω′ − 2πl).    (9.129)

Inserting the right hand side of the above equality into Equation (9.128) and
carrying out the integral over ω′, we obtain,

Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{2π} |X(e^{jω})|² dω.    (9.130)
Parseval’s equality reveals that representation of signals in Hilbert space
conserves the energy of time domain. Note that there is a factor of 1/2π
which scales the energy of time domain.

Exercise 9.15: Find the energy of the following discrete time impulse
response;

h[n] = sin(ω_c n) / (πn).    (9.131)

Solution:
From the definition of the energy,
E = Σ_{n=−∞}^{∞} |h[n]|² = Σ_{n=−∞}^{∞} |sin(ω_c n)/(πn)|².    (9.132)

It is very difficult to evaluate the above summation. However, considering


the Fourier transform of h[n], as
h[n] = sin(ω_c n)/(πn) ←→ H(e^{jω}) = 1 for |ω| < ω_c, and 0 otherwise (over one period),    (9.133)
and using the Parseval’s equality, we can write,
E = Σ_{n=−∞}^{∞} |sin(ω_c n)/(πn)|² = (1/2π) ∫_{−ω_c}^{ω_c} |1|² dω = ω_c/π.    (9.134)

Note that this is the energy of an ideal low pass filter. Thus, the energy
of the ideal low pass filter is proportional to its cutoff frequency.
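This result can also be checked numerically with a few lines of Python. The short sketch below is an added illustration (not part of the derivation); the cutoff ω_c = π/4 and the truncation length are arbitrary choices.

import numpy as np

# Numerical check of Exercise 9.15: energy of h[n] = sin(wc*n)/(pi*n).
# The cutoff wc and the truncation length N are arbitrary illustration choices.
wc = np.pi / 4
N = 100_000
n = np.arange(-N, N + 1)

# np.sinc(x) = sin(pi*x)/(pi*x), so (wc/pi)*sinc(wc*n/pi) = sin(wc*n)/(pi*n).
h = (wc / np.pi) * np.sinc(wc * n / np.pi)

energy_time = np.sum(h ** 2)       # truncated sum over n of |h[n]|^2
energy_parseval = wc / np.pi       # (1/2pi) * integral over (-wc, wc) of |1|^2

print(energy_time, energy_parseval)   # both are approximately 0.25

The small discrepancy between the two printed values is only the truncation error of the infinite sum.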

10) Duality: A close look at the properties of the discrete time Fourier transform
reveals that, although the analytical forms of the time and frequency domain
functions are mostly different, there are elegant dualities between
the time and frequency domain representations of physical phenomena.
The duality properties between the time and frequency domain representations
of discrete time signals and systems are very similar to those of their
continuous time counterparts. In the following, we study three striking
dualities of the discrete time Fourier transform:

• Duality Between Time and Frequency Shifts: A shift in time do-
main corresponds to multiplication in the frequency domain. Similarly,
multiplication in the time domain corresponds to a shift in frequency
domain.
Time shift: x[n − n0] ←→ Multiplication: e^{−jωn0} X(e^{jω})
Multiplication: e^{jω0 n} x[n] ←→ Frequency shift: X(e^{j(ω−ω0)})
In summary, whenever we need a shift in one of the domains, the corre-
sponding function in the other domain is just multiplied by a complex
exponential function.
• Duality Between the Convolution and Multiplication Oper-
ations: As in the continuous case, convolution in the time domain
corresponds to multiplication in the frequency domain and vice versa.
However, the convolution operation is replaced by the circular convo-
lution for the discrete time Fourier transform:

x[n] ∗ h[n] ←→ X(e^{jω}) H(e^{jω}),
x[n] h[n] ←→ (1/2π) X(e^{jω}) ⊗ H(e^{jω}). (9.135)

(A brief numerical check of this duality is given at the end of this duality discussion.)
• Duality Between Discrete Time Fourier Transform and Con-
tinuous Time Fourier Series: There is an interesting duality be-
tween the discrete time Fourier transform and continuous time
Fourier series, as explained below.
Suppose that we are given a continuous time periodic signal, x(t) with
its Fourier series representation and a discrete time aperiodic signal
x[n], with its Fourier transform representation;

x(t) ←→ ak and x[n] ←→ X(ejw ). (9.136)


Comparison of the continuous time Fourier series synthesis equation
to the discrete time Fourier transform analysis equation shows that
they have the same analytical forms. The continuous time periodic
signal x(t) and discrete time Fourier transform, X(ejw ) of the aperiodic
signal x[−n] are the same, for kw0 = w. Both functions are periodic
and continuous. However, x(t) is in the time domain, whereas X(ejw )
is in the frequency domain.
Similarly, the spectral coefficients, ak , of the continuous time function
x(t) correspond to the discrete time aperiodic signal x[n]. This duality
states that if we are given a continuous time periodic signal, we can
find a dual discrete time signal x[−n], which corresponds to the spec-
tral coefficients of the continuous time function x(t). Surprisingly, the
Fourier transform, X(ejw ), of the aperiodic signal x[n] corresponds to
the continuous time periodic signal x(t). This fact is depicted in Figure 9.14.

Figure 9.14: If we replace w by kw0 in the discrete time Fourier transform of x[n], the reversed signal, x[−n], becomes the spectral coefficients of the continuous time signal x(t).

Since the Fourier transform X(e^{jω}) of a discrete time signal, x[n], is a
periodic and continuous function, we can expand it into a Fourier series
as follows:

X(e^{jω}) = Σ_{n=−∞}^{∞} c_n e^{jωn}. (9.137)

The corresponding Fourier transform is

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}. (9.138)

Equating the right hand sides of the above Fourier series and Fourier
transform equations, we get a relationship between the Fourier series
coefficients, {cn } of the discrete time Fourier transform, X(ejω ) and its
inverse time domain signal, x[n],

cn = x[−n] . (9.139)
Therefore, the Fourier series coefficients of a discrete time Fourier transform
are the time domain signal itself, with the time direction reversed.
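Before moving on, here is the brief numerical check, promised above, of the convolution–multiplication duality stated in Equation 9.135. The sketch is an added illustration (the finite-length sequences and grid density are arbitrary): it evaluates the DTFT of x[n] ∗ h[n] on a dense frequency grid and compares it with the product X(e^{jω}) H(e^{jω}).

import numpy as np

# Illustrative check that convolution in time corresponds to multiplication
# of the DTFTs. The example sequences are arbitrary finite-length signals.
def dtft(x, n, w):
    """Evaluate the DTFT of a finite-length x defined at sample indices n."""
    return np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

x = np.array([1.0, 2.0, 0.5, -1.0]); nx = np.arange(len(x))
h = np.array([0.5, 0.25, 0.125]);    nh = np.arange(len(h))

y = np.convolve(x, h); ny = np.arange(len(y))    # y[n] = (x * h)[n]

w = np.linspace(-np.pi, np.pi, 512)
lhs = dtft(y, ny, w)                              # Y(e^{jw})
rhs = dtft(x, nx, w) * dtft(h, nh, w)             # X(e^{jw}) H(e^{jw})

print(np.max(np.abs(lhs - rhs)))                  # ~1e-14, confirming the duality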

9.5. Discrete Time Linear Time Invariant Systems in Frequency Domain
Recall that a discrete time LTI system can be represented by the following
constant coefficient difference equation in time domain,
Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]. (9.140)

Also, recall that if the eigenfunction x[n] = e^{jω0 n} is fed as an input to
an LTI system, then the corresponding output is

y[n] = h[n] ∗ x[n] = H(e^{jω0}) e^{jω0 n}, (9.141)

where

H(e^{jω0}) = Σ_{n=−∞}^{∞} h[n] e^{−jω0 n} (9.142)

is called the eigenvalue of the system corresponding to the eigenfunction
x[n] = e^{jω0 n}.
Note that, as ω0 ranges over all frequencies ω, the above eigenvalue becomes the discrete
time Fourier transform of the impulse response. As in the continuous time case, the discrete time Fourier transform of
the impulse response uniquely represents the LTI system in the frequency domain,
as defined below.
Definition: Frequency response of a discrete time LTI system is defined
as the Fourier transform of the impulse response. Mathematically, frequency
response of a discrete time LTI system is,

H(e^{jω}) = Σ_{n=−∞}^{∞} h[n] e^{−jωn}. (9.143)

Therefore, the impulse response and the frequency response,

h[n] ↔ H(e^{jω}), (9.144)

are one-to-one and onto representations of the same LTI system in two different
domains, namely, the time and frequency domains. While the time dependent
properties of the LTI system are investigated through its impulse response or the
corresponding difference equation, the frequency shaping properties of the LTI
system are investigated by analyzing the frequency response.
Note that, the eigen values H(ejkω0 ) of a discrete time LTI system for each
harmonic frequency kω0 for all integer values of k are specific instances of the
frequency response H(e^{jω}). In other words, the eigenvalues are the values of
the frequency response at ω = kω0 , ∀k.
Frequency response of a discrete time LTI system is represented by the
following polar coordinate form:

H(e^{jω}) = |H(e^{jω})| e^{j∡H(e^{jω})}, (9.145)

where the real-valued functions |H(e^{jω})| and ∡H(e^{jω}) are called the magnitude
and phase spectrum, respectively. Analysis of the Fourier transform of a function
requires the analysis of both the magnitude and the phase spectrum.
Generally speaking, the magnitude and phase spectrum of the frequency re-
sponse H(ejw ) indicate the frequency content of the impulse response function,
h[n].
Taking the discrete time Fourier transform of both sides of the
Nth order difference equation given above, we obtain the following equation,
which represents a discrete time LTI system in the frequency domain:

Σ_{k=0}^{N} a_k e^{−jωk} Y(e^{jω}) = Σ_{k=0}^{M} b_k e^{−jωk} X(e^{jω}). (9.146)

Thus, an LTI system represented by a difference equation in the time do-


main is equivalently represented by an algebraic equation, in the frequency
domain.
Let us find the frequency response of an LTI system by using the above
algebraic equation: Recall, the Fourier transform of the impulse function is,

x[n] = δ[n] ←→ X(ejω ) = 1. (9.147)


When the input is a discrete time impulse function in the time domain,
the output becomes the impulse response and the Fourier transform of the output
becomes the frequency response. Therefore, replacing the input by X(e^{jω}) = 1,
the output becomes the frequency response,

Σ_{k=0}^{N} a_k e^{−jωk} H(e^{jω}) = Σ_{k=0}^{M} b_k e^{−jωk}. (9.148)

The above equation provides us the frequency response of an LTI system,


represented by an ordinary constant coefficient difference equation in time do-
main and an algebraic equation in frequency domain. Arranging this equation,
we obtain the frequency response, as follows:
H(e^{jω}) = (Σ_{k=0}^{M} b_k e^{−jωk}) / (Σ_{k=0}^{N} a_k e^{−jωk}). (9.149)

The right hand side of the above equation is equal to the ratio of the
Fourier transform of the output to that of the input:

H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = (Σ_{k=0}^{M} b_k e^{−jωk}) / (Σ_{k=0}^{N} a_k e^{−jωk}). (9.150)

Taking the inverse Fourier transform of the frequency response directly
gives us the impulse response, without solving the difference equation, because

h[n] ←→ H(e^{jω}). (9.151)
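In numerical work, the frequency response in Equation 9.150 can be evaluated directly from the difference equation coefficients. The sketch below is an added illustration (the coefficient values are arbitrary); it uses scipy.signal.freqz, whose b and a arguments are exactly the {b_k} and {a_k} of the difference equation.

import numpy as np
from scipy.signal import freqz

# Frequency response H(e^{jw}) of an LTI system described by
#   a0 y[n] + a1 y[n-1] = b0 x[n] + b1 x[n-1].
# freqz evaluates (b0 + b1 e^{-jw}) / (a0 + a1 e^{-jw}) on a frequency grid.
b = [1.0, 1.0]        # example numerator coefficients {b_k} (arbitrary)
a = [1.0, -0.5]       # example denominator coefficients {a_k} (arbitrary)

w, H = freqz(b, a, worN=1024)          # w in [0, pi)
magnitude = np.abs(H)                  # |H(e^{jw})|
phase = np.angle(H)                    # angle of H(e^{jw})

print(magnitude[0], phase[0])          # value at w = 0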


In the following examples, let us study the representation of a discrete time
LTI system in time and frequency domains.

Exercise 9.16: Consider a discrete time LTI system given by the following
block diagram:

Figure 9.15: A discrete time linear time invariant system with two parallel
impulse responses, h1[n] and h2[n], joined by an adder.

Given that h1[n] = (1/3)^n u[n], and the frequency response of the overall
system is H(e^{jω}) = (5e^{−jω} − 12) / (e^{−2jω} − 7e^{−jω} + 12),
a) Find H2 (ejw ) and h2 [n].
b) Find the difference equation, which represents this system.

Solution:
a) The overall impulse response of this system is h[n] = h1 [n] + h2 [n].
Taking the Fourier transform of both sides of the above equation we obtain,

H(e^{jω}) = (5e^{−jω} − 12) / (e^{−2jω} − 7e^{−jω} + 12) = H1(e^{jω}) + H2(e^{jω}). (9.152)
The Fourier transform of h1[n] is

H1(e^{jω}) = Σ_{n=0}^{∞} (1/3)^n e^{−jωn} = Σ_{n=0}^{∞} ((1/3)e^{−jω})^n = 1/(1 − (1/3)e^{−jω}) = 3/(3 − e^{−jω}). (9.153)
We insert H1(e^{jω}) into the overall frequency response equation to find the
Fourier transform of h2[n]. Noting that e^{−2jω} − 7e^{−jω} + 12 = (3 − e^{−jω})(4 − e^{−jω}), we obtain

H2(e^{jω}) = H(e^{jω}) − H1(e^{jω}) = (5e^{−jω} − 12)/((3 − e^{−jω})(4 − e^{−jω})) − 3/(3 − e^{−jω})
= (8e^{−jω} − 24)/((3 − e^{−jω})(4 − e^{−jω})) = −8/(4 − e^{−jω}), (9.154)

that is,

H2(e^{jω}) = −2 · 1/(1 − (1/4)e^{−jω}).

Take the inverse Fourier transform of H2(e^{jω}) to obtain

h2[n] = −2 (1/4)^n u[n]. (9.155)
b) The difference equation of this system can be obtained from the frequency
response relation

H(e^{jω}) = (5e^{−jω} − 12) / (e^{−2jω} − 7e^{−jω} + 12) = Y(e^{jω}) / X(e^{jω}), (9.156)

which, after cross-multiplying and taking the inverse Fourier transform, yields

y[n − 2] − 7y[n − 1] + 12y[n] = 5x[n − 1] − 12x[n]. (9.157)

9.6. Representation of Discrete Time LTI Systems
Until now we used the word “representation” frequently to formally describe
a physical phenomenon.
Q: What does representation mean?
Representation is a general concept in mathematics. In system theory, rep-
resentation means expressing or describing a system by some mathematical
objects, such as, equations, relations, functions, graphs, trees, matri-
ces, vectors, groups, sets, manifolds etc. Representation of a system is
not unique and depends on the design goal(s) of systems.
So far, we have seen a variety of representations for LTI systems, as sum-
marized below:
A Discrete time LTI system can be represented by:
1. Impulse Response, h[n]
2. Unit Step Response, s[n]
3. Frequency Response, H(ejω )
4. Difference Equation
5. Block Diagram
The above representations are all related and one-to-one, except the block
diagram representation. Since the realization of an LTI system in a physical
environment requires a set of hardware components together with some driving
softwares, it is possible to implement it in a variety of design forms. In the
following, we summarize the relationships among the representations of an
LTI system.

9.6.1. Impulse Response


Recall the definition of impulse response, which is the response of an LTI
system when the input signal is a unit impulse function:

x[n] = δ[n] → y[n] = δ[n] ∗ h[n] = h[n] (9.158)

Exercise 9.17: Given the following impulse response of a discrete time LTI
system,
h[n] = K0 δ[n] + K1 δ[n − 1] (9.159)
a) Find the frequency response.
b) Find the difference equation, which represents this system.
c) Find the unit step response.

Solution:
a) Frequency response is just the Fourier transform of the impulse response,

H(ejω ) = K0 + K1 e−jω (9.160)

b) Recall that
Y (ejω )
H(ejω ) = . (9.161)
X(ejω )
Hence,
Y (ejω ) = X(ejω )[K0 + K1 e−jω ]. (9.162)
Taking the inverse Fourier transform of both sides of the above equation,
we obtain,
y[n] = K0 x[n] + K1 x[n − 1] (9.163)

c) Unit step response of this LTI system is

s[n] = u[n] ∗ h[n] = u[n] ∗ [K0 δ[n] + K1 δ[n − 1]]. (9.164)

Hence,

s[n] = K0 u[n] + K1 u[n − 1]. (9.165)
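As a quick numerical check of this exercise (an added sketch; the gains K0 and K1 are arbitrary example values), the system y[n] = K0 x[n] + K1 x[n − 1] can be simulated with scipy.signal.lfilter:

import numpy as np
from scipy.signal import lfilter

# Two-tap FIR system from Exercise 9.17 with example gains.
K0, K1 = 2.0, -1.0
b, a = [K0, K1], [1.0]          # numerator [K0, K1], denominator 1 (FIR)

N = 8
impulse = np.zeros(N); impulse[0] = 1.0
step = np.ones(N)

h = lfilter(b, a, impulse)      # impulse response: K0, K1, 0, 0, ...
s = lfilter(b, a, step)         # step response:   K0, K0+K1, K0+K1, ...

print(h)   # [ 2. -1.  0.  0.  0.  0.  0.  0.]
print(s)   # [ 2.  1.  1.  1.  1.  1.  1.  1.]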

9.6.2. Unit Step Response


Recall the definition of the unit step response, which is the response of an LTI
system when the input signal is a unit step function:

x[n] = u[n] → y[n] = s[n] = u[n] ∗ h[n]. (9.166)


The relationship between the impulse response and unit step response is,

h[n] = s[n] − s[n − 1]. (9.167)

Exercise 9.18: Consider a discrete time LTI system represented by the fol-
lowing unit step response,
s[n] = Σ_{k=0}^{n} (1/2)^k. (9.168)

a) Find the impulse response.


b) Find the frequency response.
c) Find the difference equation, which represents this system.

Solution:
a) The impulse response is

h[n] = s[n] − s[n − 1] = Σ_{k=0}^{n} (1/2)^k − Σ_{k=0}^{n−1} (1/2)^k = (1/2)^n u[n]. (9.169)

b) The frequency response is

H(e^{jω}) = F[h[n]] = 1/(1 − (1/2)e^{−jω}). (9.170)

c) Recall that,
Y (ejω )
H(ejω ) = . (9.171)
X(ejω )
Hence,
Y(e^{jω}) [1 − (1/2)e^{−jω}] = X(e^{jω}). (9.172)
Taking the inverse Fourier transform of both sides of the above equation,
we find the following difference equation;
y[n] − (1/2) y[n − 1] = x[n]. (9.173)
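A small numerical check of this exercise (an added illustration) builds s[n] by accumulating (1/2)^k and recovers h[n] through the first difference h[n] = s[n] − s[n − 1]:

import numpy as np

# Exercise 9.18 check: recover h[n] from the unit step response.
N = 10
n = np.arange(N)
s = np.cumsum(0.5 ** n)              # s[n] = sum_{k=0}^{n} (1/2)^k

h = np.empty(N)
h[0] = s[0]                          # s[-1] = 0 for a causal system
h[1:] = s[1:] - s[:-1]               # first difference of the step response

print(np.allclose(h, 0.5 ** n))      # True: h[n] = (1/2)^n u[n]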

9.6.3. Frequency Response


Recall the definition of the frequency response, which is the Fourier transform
of the impulse response:

H(e^{jω}) = F{h[n]},
y[n] = h[n] ∗ x[n] ↔ Y(e^{jω}) = X(e^{jω}) H(e^{jω}), (9.174)
H(e^{jω}) = Y(e^{jω}) / X(e^{jω}).

Exercise 9.19: Consider the following frequency response of a discrete time


LTI system;
H(ejω ) = e−jωn0 . (9.175)
a) Find the impulse response of this system.
b) Find the system equation, in the time domain.
c) Find the unit step response.

Solution:
a) Frequency response of this system can be written as,

H(e^{jω}) = e^{−jωn0} = Σ_{n=−∞}^{∞} δ[n − n0] e^{−jωn}. (9.176)

The above equation is the Fourier transform of the shifted impulse function,
δ[n − n0 ]. Thus, the impulse response is

h[n] = δ[n − n0 ]. (9.177)

b) Using the convolution equation, we get,

y[n] = x[n] ∗ h[n] = x[n] ∗ δ[n − n0 ]. (9.178)

Thus, the system equation is,

y[n] = x[n] ∗ h[n] = x[n − n0 ]. (9.179)

c) The unit step response of this LTI system is,

s[n] = u[n] ∗ h[n] = u[n − n0 ]. (9.180)

9.6.4. Difference Equation


A discrete time LTI system can be uniquely represented by a difference equa-
tion:
Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]. (9.181)

The relationship between the constant coefficients of the above difference


equation and impulse response can be obtained by simply replacing the input
signal by the unit impulse function x[n] = δ[n] and the output signal by the
impulse response, y[n] = h[n], as follows:
Σ_k a_k h[n − k] = Σ_k b_k δ[n − k]. (9.182)
If we take the Fourier transform of both sides of the above difference equa-
tion, we obtain the relationship between the frequency response and the con-
stant coefficients of the difference equation, as follows:

H(e^{jω}) = (Σ_k b_k e^{−jωk}) / (Σ_k a_k e^{−jωk}) = Y(e^{jω}) / X(e^{jω}). (9.183)
Note: The coefficients ak and bk of the difference equation determine the
structure of the filter represented by the frequency response, H(ejw ).

Exercise 9.20: Consider the following second order difference equation, which
represents a discrete time LTI system:

y[n] − (2b cos β)y[n − 1] + b2 y[n − 2] = x[n], (9.184)


a) Find the frequency response of this system.
b) Find the impulse response for b = 0.5 and β = π/4

Solution:
a) Taking the Fourier transform of both sides, we get the frequency response
as follows;

H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = 1 / (1 − (2b cos β) e^{−jω} + b² e^{−2jω}). (9.185)
b) In order to find the impulse response, we need to take the inverse Fourier
transform of the frequency response.
Inserting the Euler formula,

cos β = (e^{jβ} + e^{−jβ}) / 2,

into Equation 9.185 and arranging it, we can factorize the denominator as
follows:
follows:

H(e^{jω}) = 1 / (1 − b(e^{jβ} + e^{−jβ}) e^{−jω} + b² e^{−2jω}) = 1 / ((1 − b e^{−j(ω−β)})(1 − b e^{−j(ω+β)})). (9.186)
Inserting the values for b = 0.5 and β = π/4 and using partial fraction
expansion, we obtain;

H(e^{jω}) = A / (1 − (0.5e^{jπ/4}) e^{−jω}) + B / (1 − (0.5e^{−jπ/4}) e^{−jω}), (9.187)

where A = −(j/√2) e^{jπ/4} and B = (j/√2) e^{−jπ/4}.
Taking the inverse Fourier transform of Equation 9.187, we obtain the
impulse response as follows:

h[n] = [A (0.5e^{jπ/4})^n + B (0.5e^{−jπ/4})^n] u[n], (9.188)
or equivalently,

h[n] = (1/(j√2)) [(e^{jπ/4})(0.5e^{jπ/4})^n − (e^{−jπ/4})(0.5e^{−jπ/4})^n] u[n]. (9.189)

The above impulse response consists of complex exponential terms. However,
reorganizing the above equation and using the Euler formula, we obtain

h[n] = ((0.5)^n/(j√2)) [e^{j(n+1)π/4} − e^{−j(n+1)π/4}] u[n] = √2 (0.5)^n sin((π/4)(n + 1)) u[n]. (9.190)

Note that, in the above exercise, the coefficients A and B are complex. How-
ever, all the complex derivations yield a real discrete time impulse response.
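The closed form in Equation 9.190 can be verified by simply running the difference equation as a recursion. The sketch below is an added numerical check, not part of the original solution:

import numpy as np

# Exercise 9.20 check (b = 0.5, beta = pi/4): run
#   y[n] - (2b cos(beta)) y[n-1] + b^2 y[n-2] = x[n]
# with a unit impulse input and compare with sqrt(2)*0.5^n*sin(pi/4*(n+1)).
b_par, beta = 0.5, np.pi / 4
N = 20

h = np.zeros(N)
for n in range(N):
    x = 1.0 if n == 0 else 0.0                      # unit impulse input
    y1 = h[n - 1] if n >= 1 else 0.0                # y[n-1] (system at rest)
    y2 = h[n - 2] if n >= 2 else 0.0                # y[n-2]
    h[n] = x + (2 * b_par * np.cos(beta)) * y1 - (b_par ** 2) * y2

n = np.arange(N)
closed_form = np.sqrt(2) * 0.5 ** n * np.sin(np.pi / 4 * (n + 1))
print(np.allclose(h, closed_form))                   # True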

9.6.5. Block Diagram Representation


Block diagram representation of an LTI system enables us to realize the system
in a real life application environment. Depending on the quality and the cost of
the individual components in the diagram, we can design a variety of versions
of the same LTI system.
A popular way of realizing an Nth order difference equation is the direct
form, shown in Figure 9.16. In this representation, N + M delay operators
together with M adders are used.
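A direct form realization can also be written in a few lines of code. The sketch below is an added illustration (the example coefficients are arbitrary); it implements the recursion implied by the difference equation, which is essentially what library routines such as scipy.signal.lfilter compute:

import numpy as np

# Direct-form realization of  sum_k a[k] y[n-k] = sum_k b[k] x[n-k]
# (a[0] != 0, system initially at rest), as an explicit recursion.
def direct_form(b, a, x):
    b, a, x = np.asarray(b, float), np.asarray(a, float), np.asarray(x, float)
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

x = np.zeros(8); x[0] = 1.0                        # unit impulse
print(direct_form(b=[1.0], a=[1.0, -0.5], x=x))    # 0.5^n: [1, 0.5, 0.25, ...]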

Exercise 9.21: Let us study different representations of a general first order
LTI system, given by the following difference equation,

y[n] − ay[n − 1] = x[n], 0 < a < 1,


assuming that the system is initially at rest.
a) Find and plot the frequency response.
b) Find and plot the impulse response.
c) Find and plot the unit step response.
d) Find a block diagram representation.

Solution:
a) Frequency response can be obtained directly by taking the Fourier trans-
form of both sides of the difference equation and arranging it, as follows:

(1 − a e^{−jω}) Y(e^{jω}) = X(e^{jω})

H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = 1/(1 − a e^{−jω}) = (1 − a cos ω)/(1 − 2a cos ω + a²) − j (a sin ω)/(1 − 2a cos ω + a²). (9.191)
The magnitude and the phase spectra of the frequency response are as follows:

Figure 9.16: A block diagram representation of discrete time LTI systems, with
unit delay operators, D, and adders.

|H(e^{jω})| = 1/√(1 − 2a cos ω + a²),   ∡H(e^{jω}) = −tan^{−1}( a sin ω / (1 − a cos ω) ). (9.192)

Note that the frequency response is a continuous and periodic function,
with period 2π.

Figure 9.17: Magnitude and phase spectrum of the frequency response of a first
order difference equation.

ω = 0:    |H(e^{jω})| = 1/(1 − a),     ∡H(e^{jω}) = 0
ω = π/2:  |H(e^{jω})| = 1/√(1 + a²),   ∡H(e^{jω}) = −tan^{−1} a (9.193)
ω = ±π:   |H(e^{jω})| = 1/(1 + a),     ∡H(e^{jω}) = 0
b) Since there is a one to one correspondence between the impulse response
and frequency response, h[n] ←→ H(ejω ), the impulse response of this system
can be obtained by taking the inverse Fourier transform of the frequency re-
sponse, using Table 9.2, h[n] = an u[n].
c) The unit step response of this system is

s[n] = h[n] ∗ u[n] = Σ_{k=0}^{n} a^{n−k} = Σ_{k=0}^{n} a^k,  n ≥ 0, (9.194)

which gives

s[n] = ((1 − a^{n+1})/(1 − a)) u[n].
d) A block Diagram Representation of this system is given in Figure 9.20.

Figure 9.18: Impulse response of a first order difference equation for 0 < a < 1.

Figure 9.19: Plot of the unit step response for a = 0.5.

Figure 9.20: Block diagram representation of a feedback control system, represented by a first order difference equation, using a unit delay operator, D, and
an adder.
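The values computed in this exercise can be cross-checked numerically. The sketch below is an added illustration for a = 0.5: it evaluates the frequency response at ω = 0, π/2, π and verifies the impulse and step responses.

import numpy as np
from scipy.signal import freqz, lfilter

# Exercise 9.21 check for a = 0.5:  y[n] - a y[n-1] = x[n].
a_par = 0.5
b, a = [1.0], [1.0, -a_par]

w, H = freqz(b, a, worN=[0.0, np.pi / 2, np.pi])
print(np.abs(H))          # [1/(1-a), 1/sqrt(1+a^2), 1/(1+a)] = [2.0, 0.894, 0.667]

N = 10
n = np.arange(N)
impulse = np.zeros(N); impulse[0] = 1.0
h = lfilter(b, a, impulse)                 # a^n
s = lfilter(b, a, np.ones(N))              # (1 - a^(n+1)) / (1 - a)

print(np.allclose(h, a_par ** n))                              # True
print(np.allclose(s, (1 - a_par ** (n + 1)) / (1 - a_par)))    # True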

INTERACTIVE: How to reconstruct a 2D image using only sine functions @ https://384book.net/i0902

9.7. Z-Transforms as an Extension of Discrete Time Fourier Transforms
Although the Dirichlet conditions are relaxed in the discrete time signals, we
still need a sufficient condition for the existence of the discrete time Fourier
transform. Recall that the existence of the discrete time Fourier transform is
assured, when a time domain function is absolutely summable. If this condition
is violated the discrete time Fourier transform may or may not exits. In this
case, it may not be possible to find a finite Fourier transform, in the frequency
domain.
Motivating Question: Can we generalize the discrete time Fourier trans-
form in such a way that the transform domain representation of a time domain
function exists in some predefined values of the new variable of this domain?
As we did in the continuous time Fourier transform, we define a new do-
main, called z-domain, where a complex variable,

z = r e^{jω} = Re{z} + j Im{z},

is defined as an alternative to the complex exponential function with magnitude


one.
Recall that the discrete time Fourier transform of a function x[n] is defined
as a weighted summation of complex exponentials,

F{x[n]} = X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}. (9.195)

The z-transform can be obtained by extending the discrete time Fourier transform:
we simply replace the complex exponential e^{jω} of the discrete time Fourier transform
with the complex variable z = r e^{jω}.
Formally, the z-transform of a function x[n] is defined as

Z{x[n]} = X(z) = Σ_{n=−∞}^{∞} x[n] (r e^{jω})^{−n}. (9.196)

Writing z = r e^{jω} in the above summation, we obtain

Z{x[n]} = Σ_{n=−∞}^{∞} x[n] (r e^{jω})^{−n} = Σ_{n=−∞}^{∞} x[n] z^{−n}, (9.197)

which yields a relationship between z-transform and discrete time Fourier


transforms, as follows;

Z{x[n]} = F {x[n]r−n }. (9.198)


Note that the z-transform reduces to the discrete time Fourier transform for r = 1.
Therefore, the z-transform is considered an extension of the discrete time Fourier
transform, where the complex variable z has magnitude r ∈ R. In the two
dimensional complex plane, z traces a circle of radius r. Comparing the definition
of the discrete time Fourier transform with that of the z-transform, we observe that
the discrete time Fourier transform, X(e^{jω}), is defined only on the unit circle,
of radius r = 1. In other words, while the discrete time Fourier transform maps a time
domain function onto the unit circle of the complex plane, the z-transform maps
the discrete time function onto uncountably many circles with radius r ∈ R in the
two dimensional complex plane.
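The relationship Z{x[n]} = F{x[n] r^{−n}} in Equation 9.198 can be observed numerically. The sketch below is an added illustration with arbitrary choices a = 2 and r = 3: it evaluates the elementary geometric-series closed form X(z) = 1/(1 − a z^{−1}) on a circle of radius r > a and compares it with the (truncated) DTFT of the weighted sequence x[n] r^{−n}.

import numpy as np

# Illustration of Z{x[n]} = F{x[n] r^{-n}} for x[n] = a^n u[n] (a = 2).
# X(z) = 1/(1 - a z^{-1}) is valid for |z| = r > a, so take r = 3.
a, r = 2.0, 3.0
N = 200                               # truncation length for the numeric sum
n = np.arange(N)
x = a ** n                            # x[n] = a^n u[n]

w = np.linspace(-np.pi, np.pi, 7)
z = r * np.exp(1j * w)

closed_form = 1.0 / (1.0 - a / z)                        # 1 / (1 - a z^{-1})
dtft_of_xr = np.array([np.sum(x * r ** (-n) * np.exp(-1j * wk * n)) for wk in w])

print(np.max(np.abs(closed_form - dtft_of_xr)))          # tiny truncation error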
Theorem: A discrete time function can be uniquely obtained from its
z-transform by the following contour integration in the complex plane:

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz, (9.199)

provided that the weighted function x[n] r^{−n} is absolutely summable,

Σ_{n=−∞}^{∞} |x[n] r^{−n}| < ∞, (9.200)

for the radii r of the contour C, so that X(z) exists on C.
Approximate proof: Recall that the relationship between the z- trans-
form and discrete time Fourier transform is given by

X(z) = X(rejω ) = F {x[n]r−n }. (9.201)


Taking the inverse discrete time Fourier transform of both sides of the
above equation, we obtain

x[n] r^{−n} = (1/2π) ∫_{2π} X(r e^{jω}) e^{jωn} dω. (9.202)

Leaving x[n] alone on the left hand side of the equation, we obtain

x[n] = (1/2π) ∫_{2π} X(r e^{jω}) r^n e^{jωn} dω. (9.203)
Now, let us change the dummy variable of integration by defining z = r e^{jω};
assuming that r is fixed, we obtain

dz = j r e^{jω} dω = jz dω.

The values of r for which Equation 9.203 exists define a region of convergence (ROC)
in the shape of a ring around the origin. Then, the integral becomes a contour
integral over a closed path in the region of convergence (ROC) of the complex plane,

x[n] = (1/2πj) ∮_C X(z) z^{−1} z^n dz, (9.204)

where C is a counter-clock-wise closed path, lying entirely in the region of


convergence. Hence, the inverse z-transform is,

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz. (9.205)

Note that finding the inverse z-transform using the above equation requires
sophisticated methods of contour integration [see: Complex Analysis:
A Modern First Course in Function Theory, Jerry R. Muir Jr., Wiley, ISBN:
978-1-118-70522-3, April 2015]. In the context of this book, it suffices to use
lookup tables and the properties of the z-transform for finding the inverse
z-transform.
The z-transform has several advantages over the discrete time Fourier transform.
It is very handy for solving difference equations. It is applicable to
functions whose discrete time Fourier transform does not exist. It is a very
powerful tool for analyzing the stability of linear or nonlinear discrete time systems.
It has a wide range of applications in developing digital systems, and in
storing, transmitting, and processing digital signals.

9.7.1. One Sided Z-Transform


The discrete time Fourier transform requires the time domain function to be defined
on the interval n ∈ (−∞, ∞). However, in most real time systems there
are no negative values of time. As with the Laplace transform of continuous time
functions, it is possible to define a one sided z-transform, for which the discrete time
function need not be defined for negative time indices. Hence,
in order to avoid negative times, we restrict the z-transform summation to
0 ≤ n < ∞, as follows:

X(z) = Σ_{n=0}^{∞} x[n] z^{−n}, (9.206)

where z = rejω is a complex variable.


The time domain function can be uniquely obtained from the one sided
z-transform by the following equation:

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz, (9.207)

where C is a counterclockwise closed path which lies in the region of convergence
specific to the one sided signal x[n].

9.7.2. Region of Convergence in Z-Transforms


The magnitude r ∈ R of the complex variable z = rejω enables us to evaluate
the z-transform for each specific value of r. In the z-domain, where the variable
z represents a circular trajectory with radius r, it is possible to find a ring of the
complex plane around the origin for some values of r0 < r < r1 , such that the
z-transform summation converges to a finite value. In some cases the lowest
value is r0 = 0; then the ROC is the inside of the circle with radius r = r1. For
some other functions r1 → ∞; then the ROC is the region outside
the circle with radius r = r0.
This capability of the z-transform creates a great advantage over the
discrete time Fourier transform when the function x[n] is not absolutely
summable, but x[n] r^{−n} is absolutely summable for some values of r. Thus, the
z-transform relaxes the absolute summability condition of the discrete time
Fourier transform, leaving us a ring like region of the complex plane where

Figure 9.21: ROC for the z-transform of finite duration signals.

the z-transform exists. The region where the existence of the z-transform is assured
is called the Region of Convergence (ROC).
Definition: Region of Convergence (ROC): The Region of Convergence
(ROC) is defined as the set of points in the complex plane where the
z-transform X(z) of the function x[n] exists, for some values of r = |z|.
Regions of convergence of the z-transform are rings centered at the origin
of the complex plane. The radius and width of the ring
depend on the type of the time domain function x[n].
There are four major forms of the ring for the ROC of z-transform:
1) If the function x[n] has finite duration, in other words,
x[n] ≠ 0 for n0 < n < n1, and x[n] = 0 otherwise, (9.208)

for some finite values of n0 < n1 , then, ROC covers the entire z-plane,
except z = 0. Since it also covers the circle with radius r = 1, the discrete
time Fourier transform of the function also exists.
2) If the function x[n] is right-sided, in other words, if there exists a finite n0
such that
x[n] = 0 for n ≤ n0,
then there exists an r0 such that the ROC is the outside of the circle, r > r0.
3) If the function x[n] is left sided, in other words, if there exists a finite n0 ,
such that
x[n] = 0 for n ≥ n0 ,
then, the ROC is inside of the circle, 0 < r < r1 .

Figure 9.22: ROC for the z-transform of right sided signals.

Figure 9.23: ROC for the z-transform of left sided signals.

Figure 9.24: ROC for the z-transform of two sided signals.

4) If the function x[n] is two sided, in other words, there exists two finite
values, n0 and n1 , such that
x[n] ≠ 0 for n < n0 and n > n1, and x[n] = 0 otherwise, (9.209)

then, the ROC is in the shape of the ring about the origin, r0 < r < r1 .
In order to observe the capabilities of the z-transform over the discrete time
Fourier transform, let us solve the following exercises and investigate the existence
of both the discrete time Fourier transform and the z-transform.

Exercise 9.22: Consider the following discrete time right sided signal:

x[n] = an u[n]. (9.210)

a) Find the discrete time Fourier transforms of this signal.


b) Find the values of a which assure the existence of the discrete time
Fourier transform.
c) Find the z-transform of this signal and the ROC which assures the
existence of the z-transform.
d) Compare the ranges of a which assure the existence of the discrete time
Fourier and z-transforms.

Solution:
a) The discrete time Fourier transform of the signal x[n] is

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = Σ_{n=0}^{∞} a^n e^{−jωn}. (9.211)

b) The above summation diverges for |a| ≥ 1. Thus, it converges only for |a| < 1, giving

X(e^{jω}) = 1/(1 − a e^{−jω}), for |a| < 1. (9.212)

Hence, the discrete time Fourier transform does not exist for |a| ≥ 1.
c) The z-transform of the signal x[n] is

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = Σ_{n=0}^{∞} a^n z^{−n} = 1/(1 − a z^{−1}). (9.213)

The above summation converges to a finite value if it is absolutely summable,

Σ_{n=0}^{∞} |a z^{−1}|^n < ∞. (9.214)

This is only possible if |a z^{−1}| < 1, which implies that |z| > |a|. Hence the
z-transform exists in the ROC r > |a|.
d) Comparison of the convergence properties of the discrete time Fourier and
z-transforms reveals that the discrete time Fourier transform exists only for |a| < 1.
On the other hand, the z-transform exists in the ROC r > |a|. Thus, the ROC
depends on the value of a. For example, given the discrete time function

x[n] = 2^n u[n],

the base is a = 2. Then the discrete time Fourier transform does not exist.
However, the z-transform exists for r > 2.

Exercise 9.23: Consider a slightly different version of the discrete time signal
of the previous example, which is a left sided function;

x[n] = an u[−n]. (9.215)

a) Find the discrete time Fourier transforms of this signal.


b) Find the values of a which assure the existence of the discrete time
Fourier transform.
c) Find the z-transform and the ROC of this signal, which assures the
existence of the z-transform.
d) Compare the discrete time Fourier and z-transforms of this signal.

Figure 9.25: ROC for the z-transform of x[n] = a^n u[n].

Solution:
a) The discrete time Fourier transform of the signal x[n] is

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = Σ_{n=−∞}^{0} a^n e^{−jωn} = Σ_{n=0}^{∞} a^{−n} e^{jωn}. (9.216)

b) The above summation diverges for |a| ≤ 1. Thus, it converges only for |a| > 1, giving

X(e^{jω}) = 1/(1 − a^{−1} e^{jω}), for |a| > 1. (9.217)

Hence, the discrete time Fourier transform does not exist for |a| ≤ 1.
c) The z-transform of the signal x[n] is

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = Σ_{n=−∞}^{0} a^n z^{−n} = Σ_{n=0}^{∞} (a^{−1}z)^n = a/(a − z). (9.218)

The above summation converges to a finite value if it is absolutely summable,

Σ_{n=−∞}^{0} |a^n z^{−n}| = Σ_{n=0}^{∞} |a^{−1}z|^n < ∞. (9.219)

This is only possible if |a^{−1}z| < 1, which implies that |z| < |a|. Hence the
z-transform exists in the ROC r < |a|.
d) Comparison of the convergence properties of the discrete time Fourier and
z-transforms reveals that the discrete time Fourier transform exists only for |a| > 1.
Figure 9.26: ROC for the z-transform of x[n] = a^n u[−n].

On the other hand, the z-transform exists in the ROC r < |a|. Thus, the ROC
depends on the value of a.

Exercise 9.24: Find the z-transform and its ROC for the following right
sided function:

x[n] = u[n]. (9.220)

Solution:
From the definition of the z-transform,

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = Σ_{n=0}^{∞} z^{−n} = 1/(1 − z^{−1}). (9.221)

The ROC is |z| > 1.

Exercise 9.25: Find the z-transform and its ROC for the following limited
time duration function:

x[n] = an (u[n] − u[n − n0 ]). (9.222)

Solution:
From the definition of the z-transform,

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = Σ_{n=0}^{n0−1} a^n z^{−n} = (1 − (a z^{−1})^{n0}) / (1 − a z^{−1}). (9.223)

Since the time duration, n ∈ [0, n0 − 1], is bounded, the summation of the
z-transform is finite for all values of n0 < ∞. Thus, the ROC is the entire complex
plane, except z = 0. This is the case when an absolutely summable function x[n] has finite
duration.

Exercise 9.26: Find the z-transform and ROC of the following two sided
function:

x[n] = a−n u[n] + an u[−n], for a > 0. (9.224)

Solution:
From the definition of the z-transform,

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = Σ_{n=0}^{∞} a^{−n} z^{−n} + Σ_{n=−∞}^{0} a^{n} z^{−n} = 1/(1 − a^{−1}z^{−1}) + 1/(1 − a^{−1}z). (9.225)
In order to find the ROC of the above z-transform, we need the ROCs of the
first and the second terms on the right hand side of the above equation.
For the first term,
1/(1 − a^{−1}z^{−1}), the ROC is |z| > 1/a.
For the second term,
1/(1 − a^{−1}z), the ROC is |z| < a.
Hence, the ROC is the ring

1/a < r < a,

which is non-empty only when a > 1.

These examples show that it is critical to determine the Region of Conver-


gence of the z-transforms in the complex plane.

Figure 9.27: ROC for the z-transform of x[n] = a^{−n} u[n] + a^{n} u[−n].

9.8. Inverse of Z-Transform


As we mentioned above, recovering the time domain signal x[n] from its z-transform
X(z) requires the following contour integration,

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz, (9.226)

which may not be easy for a large class of functions. In order to avoid contour
integration, we frequently use lookup tables and the properties of the
z-transform. Since they are quite similar to those of the discrete time Fourier
transform, we simply provide the list of properties and the lookup tables
for common transform pairs, x[n] ↔ X(z), together with their ROCs, in Tables 9.3
and 9.4. The following examples demonstrate how we utilize the tables to
compute the inverse z-transform.

Table 9.3: Properties of Z-transform.


Signal Z-transform ROC

x[n]    X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}    Rx
y[n] Y (z) Ry
ax[n] + by[n] aX(z) + bY (z) Contains Rx ∩ Ry
x[n − n0 ] z −n0 X(z) Rx , except possible ad-
dition or deletion of the
origin or ∞
e^{jω0 n} x[n]    X(e^{−jω0} z)    Rx
z0n x[n] X(z/z0 ) |z0 |Rx
x[−n]    X(z^{−1})    Inverted Rx (i.e., the set of points z^{−1}, where z is in Rx)
x∗ [n] X ∗ (z ∗ ) Rx
x∗ [−n] X ∗ (1/z ∗ ) 1/Rx
x[n] ∗ y[n] X(z)Y (z) Contains Rx ∩ Ry
x[n] − x[n − 1] (1 − z −1 )X(z) At least the intersection
of R and |z| > 0
Σ_{k=−∞}^{n} x[k]    X(z)/(1 − z^{−1})    At least the intersection of Rx and |z| > 1
n x[n]    −z dX(z)/dz    Rx, except possible addition or deletion of the origin or ∞
Re{x[n]}    (1/2)[X(z) + X*(z*)]    Contains Rx
Im{x[n]}    (1/2j)[X(z) − X*(z*)]    Contains Rx

Table 9.4: Z-transform pairs for popular functions.


x[n] X(z) ROC
δ[n] 1 All z
δ[n − n0 ] z −n0 All z, except 0(n0 > 0)
or ∞(n0 < 0)
u[n]    1/(1 − z^{−1})    |z| > 1
−u[−n − 1]    1/(1 − z^{−1})    |z| < 1
a^n u[n]    1/(1 − a z^{−1})    |z| > |a|
−a^n u[−n − 1]    1/(1 − a z^{−1})    |z| < |a|
n a^n u[n]    a z^{−1}/(1 − a z^{−1})²    |z| > |a|
−n a^n u[−n − 1]    a z^{−1}/(1 − a z^{−1})²    |z| < |a|
[cos ω0 n] u[n]    (1 − [cos ω0] z^{−1}) / (1 − [2 cos ω0] z^{−1} + z^{−2})    |z| > 1
[sin ω0 n] u[n]    ([sin ω0] z^{−1}) / (1 − [2 cos ω0] z^{−1} + z^{−2})    |z| > 1
[r^n cos ω0 n] u[n]    (1 − [r cos ω0] z^{−1}) / (1 − [2r cos ω0] z^{−1} + r² z^{−2})    |z| > r
[r^n sin ω0 n] u[n]    ([r sin ω0] z^{−1}) / (1 − [2r cos ω0] z^{−1} + r² z^{−2})    |z| > r
a^n for 0 ≤ n ≤ N − 1, 0 otherwise    (1 − a^N z^{−N}) / (1 − a z^{−1})    |z| > 0

Exercise 9.27: Find the inverse z-transform of the following function in the
z-domain;
X(z) = 0.2z / ((z − 0.5)(z − 0.3)), (9.227)
for three different region of convergences given below:
a) ROC for |z| > 0.5.
b) ROC for |z| < 0.3.
c) ROC for 0.3 < |z| < 0.5.

Solution:
Firstly, let us apply partial fraction expansion to simplify the z-domain function:

X(z) = 0.2z / ((z − 0.5)(z − 0.3)) = z/(z − 0.5) − z/(z − 0.3), (9.228)

that is,

X(z) = X1(z) − X2(z) = 1/(1 − 0.5z^{−1}) − 1/(1 − 0.3z^{−1}).

The inverse z-transform of the above function depends on the ROC, as specified
in parts a, b and c.

Figure 9.28: ROC for |z| > 0.5.

a) In order to obtain the inverse z-transform of the given function X(z) for
ROC |z| > 0.5, we need to get the ROC of X1 (z) as |z| > 0.5 and the ROC of
X2 (z) as |z| > 0.3, so that the intersection of both ROCs becomes |z| > 0.5.
Hence, we obtain the inverse z-transform of the first term as,

X1(z) = 1/(1 − 0.5z^{−1}) ←→ x1[n] = 0.5^n u[n],    ROC: |z| > 0.5. (9.229)
Similarly, the inverse z- transformation of the second term is,

X2(z) = 1/(1 − 0.3z^{−1}) ←→ x2[n] = 0.3^n u[n],    ROC: |z| > 0.3. (9.230)
Using the linearity property of z-transform, we obtain the inverse z-transform
of X(z) as follows:

x[n] = x1 [n] − x2 [n] = [0.5n − 0.3n ]u[n], (9.231)

where the ROC is the intersection of |z| > 0.5 and |z| > 0.3, which is |z| > 0.5.
Note: This is a right sided function.
b) In order to obtain the inverse z-transform of the given function X(z) for the
ROC |z| < 0.3, we need to take the ROC of X1(z) as |z| < 0.5 and the ROC
of X2(z) as |z| < 0.3, so that the intersection of both ROCs becomes |z| < 0.3.
Hence, the inverse z-transform of the first term is

Figure 9.29: ROC for |z| < 0.3.

X1(z) = 1/(1 − 0.5z^{−1}) ←→ x1[n] = −(0.5)^n u[−n − 1],    ROC: |z| < 0.5. (9.232)

The inverse z-transform of the second term is

X2(z) = 1/(1 − 0.3z^{−1}) ←→ x2[n] = −(0.3)^n u[−n − 1],    ROC: |z| < 0.3. (9.233)
Finally, the inverse z-transform of X(z) is

x[n] = x1[n] − x2[n] = [0.3^n − 0.5^n] u[−n − 1], (9.234)

where the ROC is the intersection of |z| < 0.5 and |z| < 0.3, which is |z| < 0.3.
Note: This is a left sided function.
c) In order to obtain the inverse z-transform of X(z) for the ROC 0.3 < |z| < 0.5,
we need to take the ROC of X1(z) as |z| < 0.5 and the ROC of X2(z)
as |z| > 0.3, so that we obtain a ring shaped region.
Hence, the inverse z-transform of the first term is

X1(z) = 1/(1 − 0.5z^{−1}) ←→ x1[n] = −(0.5)^n u[−n − 1],    ROC: |z| < 0.5, (9.235)

and the inverse z-transform of the second term is

Figure 9.30: ROC for 0.3 < |z| < 0.5.

X2(z) = 1/(1 − 0.3z^{−1}) ←→ x2[n] = 0.3^n u[n],    ROC: |z| > 0.3. (9.236)

Finally, the inverse z-transform of X(z) is

x[n] = x1[n] − x2[n] = −(0.5)^n u[−n − 1] − (0.3)^n u[n], (9.237)

where the ROC is the intersection of |z| < 0.5 and |z| > 0.3, which is the ring 0.3 < |z| < 0.5.
Note: This is a two sided function.
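The partial fraction expansion used in this exercise can also be obtained numerically. The sketch below is an added illustration using scipy.signal.residuez, which expands a ratio of polynomials in z^{−1}; writing X(z) = 0.2 z^{−1} / (1 − 0.8 z^{−1} + 0.15 z^{−2}) gives the coefficient arrays below.

from scipy.signal import residuez

# X(z) = 0.2 z / ((z - 0.5)(z - 0.3)) = 0.2 z^{-1} / (1 - 0.8 z^{-1} + 0.15 z^{-2})
b = [0.0, 0.2]           # numerator coefficients in powers of z^{-1}
a = [1.0, -0.8, 0.15]    # denominator coefficients in powers of z^{-1}

r, p, k = residuez(b, a)
print(r)   # residues, approximately 1 and -1 (order may vary)
print(p)   # poles, approximately 0.5 and 0.3 (same order as the residues)
print(k)   # direct terms: empty
# i.e. X(z) = 1/(1 - 0.5 z^{-1}) - 1/(1 - 0.3 z^{-1}), matching Equation 9.228.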

Exercise 9.28: Find the inverse z-transform of the following z-domain func-
tion;
X(z) = (z + 2)/z,    ROC: all z ≠ 0. (9.238)

Solution:
Let us arrange the function as follows:

X(z) = 1 + 2/z = 1 + 2z^{−1}. (9.239)

From the z-transform table, we see that the inverse z-transform of the
first term is

Z^{−1}[1] = δ[n], for all z, (9.240)

and the inverse z-transform of the second term is

Z^{−1}[2z^{−1}] = 2δ[n − 1], for all z ≠ 0. (9.241)

Using the linearity property, we obtain the inverse z-transform of X(z),
as follows:

x[n] = δ[n] + 2δ[n − 1]. (9.242)

The above exercises show that a practical method for finding the inverse
z-transform is to make algebraic manipulations on the z-domain function and
express it as a linear combination of known pairs from the transform table. Then,
use the linearity property to obtain the inverse transform.

9.9. Discrete Time Linear Time Invariant Systems in z-Domain
Recall that a discrete time LTI system is represented by a difference equation
and an impulse response in the time domain, and by the frequency
response in the frequency domain.
Motivating Question: What if the frequency response does not exist?
Can we employ the z-transform to analyze the frequency content of an LTI system
in some region of convergence of the z-plane?
The z-transform, indeed, provides us a strong tool for analyzing LTI systems
whose frequency response does not exist.

9.9.0.1. Eigenvalues and Transfer Functions in z-Domain


Recall that, when the input of a discrete time LTI system is an exponential
function, the output is just a scaled version of the input. Thus, exponential
functions are the eigenfunctions of LTI systems, and the scaling factor is
simply the eigenvalue, computed from the convolution summation:

y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} e^{λ(n−k)} h[k] = H(λ) e^{λn}, (9.243)

where

H(λ) = Σ_{k=−∞}^{∞} h[k] e^{−λk}, (9.244)

so that

x[n] = e^{λn} → LTI → y[n] = H(λ) e^{λn}. (9.245)

In the above formulation, if we set λ = jω, then the eigenfunction at
the input becomes x[n] = e^{jωn} and the eigenvalue of the LTI system becomes
the discrete time Fourier transform of the impulse response, which is called the
frequency response:

H(e^{jω}) = Σ_{n=−∞}^{∞} h[n] e^{−jωn}. (9.246)

Now let us extend the above discrete time Fourier transform to the z-transform
of the impulse response by defining z = r e^{jω}. In this case, the eigenfunction
at the input becomes x[n] = z^n = (r e^{jω})^n and the eigenvalue becomes the
z-transform of the impulse response.
Definition: Transfer Function of Discrete Time Systems: The z-transform
of the impulse response is called the transfer function,

H(z) = Σ_{n=−∞}^{∞} h[n] z^{−n}. (9.247)

When the frequency response of a discrete time LTI system does not con-
verge, we cannot represent the LTI system with an eigenvalue, in the frequency
domain. However, z-transform enables us to find the eigenvalue of the system,
which converges in some regions of the complex z-plane.
Let us now establish the relationship between the transfer function and the
difference equation of an LTI system,
Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]. (9.248)

Taking the z-transform of both sides of the above equation, we obtain

Σ_{k=0}^{N} a_k z^{−k} Y(z) = Σ_{k=0}^{M} b_k z^{−k} X(z). (9.249)

Let us find the transfer function of an LTI system by using the above
algebraic equation: Recall, the z- transform of the impulse function is,

x[n] = δ[n] ←→ X(z) = 1. (9.250)


When the input is a discrete time impulse function in the time domain, the
output becomes impulse response and the z-transform of the output becomes
transfer function. Therefore, replacing the input by X(z) = 1, the output
becomes the transfer function,
Σ_{k=0}^{N} a_k z^{−k} H(z) = Σ_{k=0}^{M} b_k z^{−k}. (9.251)

The above equation provides us the transfer function of an LTI system,
represented by an ordinary constant coefficient difference equation in time
domain and an algebraic equation in z-domain. Arranging this equation, we
obtain the transfer function, as follows:
H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{−k}) / (Σ_{k=0}^{N} a_k z^{−k}), (9.252)

which transfers an input signal to an output signal of an LTI system represented
by a difference equation. The type of this transfer is determined
by the constant coefficients, {a_k} and {b_k}, of the difference equation.
Taking the inverse z-transform of the transfer function directly gives us the
impulse response, without solving the difference equation, because

h[n] ←→ H(z). (9.253)


Therefore, impulse response and transfer function are one-to-one and onto
representation of the same LTI system in two different domains, namely, in
time and z- domains.
Note that, the eigen values H(ejkω0 ) of a discrete time LTI system for each
harmonic frequency kω0 for all integer values of k are specific instances of the
transfer function at z = ejkω0 . Furthermore, frequency response H(ejω ) is a
specific form of the transfer function for z = ejω .
Transfer function of a discrete time LTI system is represented by the fol-
lowing polar coordinate form;

H(z) = |H(z)|ej∡H(z) , (9.254)


where the real-valued functions |H(z)| and ∡H(z) are called the magnitude
and phase of the transfer function, respectively. Analysis of the z-transform of a
discrete time function requires the analysis of the magnitude and phase spectra.
The following exercises demonstrate the utilization of discrete time transfer
function for describing various properties of the LTI systems.

Exercise 9.29: Consider a discrete time LTI system, represented by the


following impulse response:

h[n] = (0.5)^n u[n] + (0.5)^{n−1} u[n − 1]. (9.255)

a) Find the transfer function of this system.


b) Comment on the Region of Convergence.
c) Find the difference equation, which represents this system

Solution:
a) The transfer function is the z-transform of the impulse response. Using
the look up table and linearity property, we obtain

H(z) = 1/(1 − 0.5z^{−1}) + z^{−1}/(1 − 0.5z^{−1}). (9.256)
b) The transfer function consists of two subsystems connected in parallel,

H(z) = H1(z) + H2(z), (9.257)

where both terms correspond to right sided (causal) sequences with a single pole at z = 0.5, so

the ROC for H1(z): |z| > 0.5, (9.258)

and

the ROC for H2(z): |z| > 0.5. (9.259)

The ROC of the overall transfer function H(z) is the intersection of both
ROCs; hence it is |z| > 0.5.
c) Recall that the transfer function provides a relationship between the
input and output of an LTI system in the z-domain, as follows:

H(z) = Y(z)/X(z) = 1/(1 − 0.5z^{−1}) + z^{−1}/(1 − 0.5z^{−1}) = (1 + z^{−1})/(1 − 0.5z^{−1}). (9.260)

Arranging the above equation we get,

[1 − 0.5z −1 ]Y (z) = [1 + z −1 ]X(z). (9.261)

Taking the inverse z-transform of both sides of the above equation, we


obtain,

y[n] − 0.5y[n − 1] = x[n] + x[n − 1]. (9.262)

Exercise 9.30: Consider an LTI system at initial rest, given by the following
difference equation;

y[n − 2] − 4y[n − 1] + 4y[n] = 2x[n − 1] (9.263)


a) Find the transfer function of this system.
b) Find the impulse response of this system.

Solution:
a) Let us set the input to the impulse function, x[n] = δ[n]; then the corresponding
output of the difference equation becomes the impulse response, h[n].
The above difference equation for the impulse response is

h[n − 2] − 4h[n − 1] + 4h[n] = 2δ[n − 1]. (9.264)


From the z-transform properties, we see that

x[n − n0 ] ↔ z −n0 X(z).


From the z-transform table, we see that

x[n] = δ[n] ↔ X(z) = 1.

Using the above transform pairs, we take the z-transform of both sides of
the difference equation,

[z −2 − 4z −1 + 4]H(z) = 2z −1 .

Arranging the above equation, we get the transfer function as follows:

H(z) = 2z^{−1}/(z^{−1} − 2)² = 0.5z^{−1}/(1 − 0.5z^{−1})².
Note that taking the z-transform of the above difference equation does
not provide the ROC for the transfer function. In fact, there are two
alternatives for the ROC of this transfer function; The first alternative is
|z| < 0.5 and the second alternative is |z| > 0.5.
b) From the z-transform table, we observe that, depending on the selection
of the ROC, there are two impulse responses which correspond to the same
difference equation. For the ROC |z| < 0.5,

h[n] = −n(0.5)^n u[−n − 1].

Since the system is causal, this impulse response is eliminated.
Hence, the impulse response is obtained for the second alternative
of the ROC, |z| > 0.5,

h[n] = n(0.5)^n u[n].
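This causal impulse response can be confirmed by simulating the difference equation. The sketch below is an added check; it feeds a unit impulse through scipy.signal.lfilter with the coefficient ordering a = [4, −4, 1], b = [0, 2] that matches 4y[n] − 4y[n−1] + y[n−2] = 2x[n−1]:

import numpy as np
from scipy.signal import lfilter

# Exercise 9.30 check: causal impulse response of
#   y[n-2] - 4 y[n-1] + 4 y[n] = 2 x[n-1]   should equal n * 0.5^n.
b = [0.0, 2.0]          # coefficients of x[n], x[n-1]
a = [4.0, -4.0, 1.0]    # coefficients of y[n], y[n-1], y[n-2]

N = 12
impulse = np.zeros(N); impulse[0] = 1.0
h = lfilter(b, a, impulse)

n = np.arange(N)
print(np.allclose(h, n * 0.5 ** n))   # True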

The above example demonstrates that we can obtain the transfer function
and impulse response of an LTI system which is initially at rest, without solving
the difference equation. This method is also available in the Fourier domain,
provided that the frequency response exists. When the frequency response is
undefined, the z-domain enables us to compute the transfer function and
impulse response using a simple algebraic method.
As observed throughout this chapter, the transform domains capture a
different view of physical phenomena than the time domain representations do.
Furthermore, the beautiful synergy created by the representations in the
time and transform domains bridges the mathematics of linear algebra and
recursive equations.

9.10. Chapter Summary


Can we extend the Fourier series representation of discrete time periodic func-
tions to the aperiodic ones? If, yes, how can we do that? What are the con-
ditions for the existence of such transformations? What is the relationship
between the Fourier transform and Fourier series representations of discrete
time periodic signals? What type of an operator is a discrete-time Fourier
transform? What are the similarities and distinctions between the continuous
time and discrete time Fourier transforms and Fourier series? What are the
relationships between the functions represented in the discrete time and con-
tinuous frequency domain? What are the properties of the discrete time signals
and systems in the frequency domain?
In this chapter, first, we studied the discrete time Fourier transforms by
extending the Fourier series representation of periodic functions to aperiodic
functions assuming that an aperiodic function can be considered as a periodic
function of infinite period. Then, we derived the Fourier analysis and synthesis
equations for discrete time functions by stretching the period of a discrete
time function to infinity. Interestingly, when we take the limit as the period
approaches to infinity, the Fourier transform of a discrete time function became
a continuous frequency function.
While taking the discrete time Fourier transform, we replace the integral op-
eration of the continuous time Fourier transform by the summation operation.
In this case, we do not have to bother the existence of the complex integrals of
the continuous time Fourier transform. This replacement relaxed the Dirichlet
conditions for existence of the discrete time Fourier transform, where the only
constraint we need is finite summability of the discrete time functions. Fur-
thermore, the fact that the superposition of periodic complex functions of the
analysis equation are periodic, makes the discrete time Fourier transformations
also periodic with period 2π. We also established the relationship between the
discrete time Fourier series and Fourier transforms for periodic functions.
Second, we investigate the power of discrete time Fourier transforms in
manipulating the frequency content of discrete time signals. We studied basic
properties of discrete time Fourier transforms, such as linearity, time shifting,
time scaling and difference properties. We observe many similarities between
the continuous and discrete time Fourier transforms. As in the continuous
time Fourier transforms, the energy of discrete time signals is also preserved

in continuous frequency domain. We also studied several duality properties
between the time and frequency domain.
We observe that difference equations become algebraic equations in the
frequency domain. Thus, solving them in the frequency domain is rather easier
compared to solving them in the time domain. We also show that there are one-to-one
correspondences between the representations of LTI systems by impulse
response, frequency response, and difference equations.

WATCH: Noise reduction using the Fourier transform @ https://384book.net/v0902

Problems
1. Find and plot the discrete time Fourier transforms of the following signals
in polar coordinate system:
a) x1[n] = (1/2)^{n+1} u[n + 1]
b) x2[n] = (1/2)^{|n+1|}

2. Find and plot the discrete time Fourier transforms of the following signals
in Cartesian coordinate system:
a) x1[n] = (0.5)^{−n} u[−n + 2]
b) x2[n] = x[n − 5], where x[n] = u[n] − u[n − 3]
c) x3[n] = (2/5)^{|n|} u[5n − 2]

3. Find and plot the even parts and odd parts of the discrete time Fourier
transforms of the following signals:
a) x1[n] = δ[n − 2]/2 + δ[n + 2]/2
b) x2[n] = δ[n + 1]/2 − δ[n − 1]/2
c) x3[n] = cos(ω0 n) + cos(2ω0 n)

4. Find and plot the discrete time Fourier transforms of the following signals
and comment on the frequency content of these signals:
a) x1[n] = sin(π/2 n)
b) x2[n] = cos(π/4 n + π/2)
c) x3[n] = 2 sin(π/6 n + π) cos(π/3 n + π/4)

5. Consider the following discrete time Fourier transform of a signal x[n], in


one full period:

X(e^{jω}) = −j for 0 < ω ≤ π, and j for −π < ω ≤ 0.

a) Find and plot the inverse Fourier transform x[n].


b) Find and plot the even and odd parts of x[n].
c) Find and plot the even and odd parts of X(ejω ).
6. Consider the following discrete time Fourier transform of a signal x[n]:


X  π   π 
X(ejω ) = πδ(ω + 2πk) − 4πδ ω + + 2πk − 4πδ ω − + 2πk
3 3
k=−∞

a) Plot this signal in the frequency domain.

b) Find and plot the inverse Fourier transform x[n].
c) Find and plot the even and odd parts of x[n].
7. Consider the following discrete time Fourier transform of a signal x[n]:

X(e^{jω}) = e^{−0.5jω} for 0 ≤ |ω| < π/3, and 0 for π/3 ≤ |ω| < π.

a) Find and plot the magnitude and the phase of this function.
b) Find and plot the real and imaginary part of this function.
c) Find and plot the inverse Fourier transform x[n].
8. Consider an LTI system represented by the following impulse response
and frequency response pair:

h[n] ←→ H(ejω ),
where the frequency response H(ejω ) ̸= 0 in 0 ≤ ω ≤ π and it is zero
otherwise.
a) Given that H(ej(ω/3) ) = π, find the frequency response H(ejω ).
b) Find the impulse response of this system.
c) When the input to this system is x[n] = (1/π)^n u[n], find the output y[n].
9. Consider an LTI system represented by the following impulse response
and frequency response pair:

h[n] ←→ H(ejω ),
with the following input-output pair;
 n
2
x[n] = u[n],
3
 n+1
2
y[n] = n u[n].
3
a) Find and plot the frequency response H(ejω ).
b) Find the difference equation relating the input x[n] and output y[n].
10. Consider a discrete-time LTI system with impulse response h[n] = ( 13 )n u[n].
Find the output signal y[n] for all the following inputs to this system:
a) x[n] = (1/4)^n u[n]
b) x[n] = (n − 2)(2/5)^n u[n]
c) x[n] = cos(πn)

11. Consider an initially at rest discrete-time LTI system with impulse re-
sponse,
h[n] = (0.5)(n+2) u[n].
a) Find the frequency response of this system.
b) Find the difference equation, which represents this system.
c) Find the discrete time Fourier transform of the output, when the
input is,
x[n] = sin(π/2 n).
12. Consider an initially at rest discrete-time LTI system with impulse re-
sponse,

h[n] = (1/3)^n cos(πn/2) u[n].
a) Find the frequency response.
b) Find the Fourier transform of the output signal y[n], when the input
signal is x[n] = cos(πn/2).
c) Find the output y[n].
13. Consider a causal LTI system whose input and output are related by the
difference equation

y[n] − 0.2y[n − 1] = x[n]

Find the outputs y1 [n] and y2 [n] for each of the following inputs defined
in one period:
a) X1(e^{jω}) = (1 − 0.2e^{jω}) / (1 + (1/2)e^{−jω})
b) X2(e^{−jω}) = 1 / ((1 + 0.3e^{−jω})(1 − 0.2e^{−jω}))

14. Find the inverse discrete time Fourier transform of the signals given in
one period, as follows:
a) X(e^{jω}) = Σ_{k=1}^{15} e^{jωk} cos(kω)
b) X(e^{jω}) = j sin(π/2 ω)

15. Find and plot the inverse discrete time Fourier transform of the following
signals:
a) X1 (ejω ) = ejω sin(2ω + π) , for −π < ω ≤ π
b) X2 (ejω ) = j tan( π3 ω) , for −π < ω ≤ π

16. Find the inverse x[n] of the following discrete time Fourier transform:

17. Find the discrete time Fourier transform of the following signal:

x[n] = [ sin(π/3 n) / (2πn) ] ∗ [ sin(π/6 n) / (2πn) ]

18. An LTI system is defined by its impulse response, which is h[n] = h′[n] +
(2/5)^n u[n]. The frequency response of this system is given as follows:

H(e^{jω}) = (60 − 20e^{−jω}) / (2 − 9e^{−jω} + 10e^{−2jω}).

a) Find and plot h[n].
b) Find and plot h′[n].
c) Find and plot H′(e^{jω}).
19. Let x[n] be a discrete-time signal defined as follows:

x[n] = − sin(ω0 n) / (πn)
a) Find the total energy of x[n].
b) If the discrete time Fourier transform of x[n] is X(ejω ) = 2/3, for
−π < ω ≤ π, find the value of ω0 .
20. Does the following discrete time function satisfy the Dirichlet conditions? Ver-
ify your answer.

x[n] = 2n u[n]. (9.265)


21. Consider the z- transform of a discrete time signal x[n] given below:

X(z) = Σ_{k=0}^{3} (0.5)^k / (4 − e^{−(π/2)k} z^{−1}).

a) Find x[n].
b) Find the z-transform of g[n] = an x[n].
22. Consider a discrete time initially at rest LTI system, represented by the fol-
lowing difference equation:

y[n] − 0.2y[n − 1] + 0.1y[n − 2] = x[n].

a) Find and plot the frequency response H(ejω ).


b) Find and plot the impulse response h[n] of this system.
c) Find a block diagram representation of this system.
23. Find the inverse discrete time Fourier transforms of the following transforms:
a) X1(e^{jω}) = Σ_{k=−∞}^{∞} (1/2)^k δ(ω − (2π/3)k)
b) X2(e^{jω}) = (1 + (1/8)e^{jω}) / (1 − (1/6)e^{jω} − (1/6)e^{2jω})
24. The discrete time Fourier transform of the signal x[n] is given in the below
figure:
(Figure: plot of X(e^{jω}) over −π ≤ ω ≤ π.)

Find and plot the Fourier transforms of the following functions:
a) x1[n] = x[n] cos(π/2 n)
b) x2[n] = x[n] sin(π/3 n)
c) x3[n] = x[n] Σ_{k=−∞}^{∞} δ(n − 3k)

25. Consider a discrete-time LTI system with the following Fourier transforms of
the input signal x[n] and the impulse response h[n] respectively:

X(ejω ) = 6e−jω + e−2jω − 2e−jω


H(ejω ) = 2 − e−jω + 4e−3jω

(a) Find and plot the discrete time Fourier transform Y (ejω ) of the out-
put.
(b) Find and plot the output y[n], in the time domain.

26. Consider a system consisting of parallel connection of two subsystems with the
following impulse responses:

h1[n] = (j/2)^n u[n + 5]
h2[n] = −(j/2)^n u[n − 5]
a) Find and plot the frequency response of h1 [n].
b) Find and plot the frequency response of h2 [n].
c) Find and plot the frequency response of the overall system h[n] =
h1 [n] + h2 [n].

27. Consider an initially at rest LTI systems represented by the following difference
equation:

y[n] + y[n − 1] + 0.5y[n − 2] = x[n] − 0.25x[n − 1]

a) Find and plot the frequency response of this system .


b) Find and plot the impulse response of this system.
c) Find and plot the inverse h−1 [n] of this system.

28. Consider a discrete-time LTI system, represented by the following impulse


response:

h[n] = (0.5)n u[n] + (0.25)n u[n].

a) Find and plot the frequency response of this system.


b) Find the difference equation, which represents this system.
c) Find and plot the output y[n], when the input is,

x[n] = (0.5)n u[n] − 0.5(0.5)(n−1) u[n − 1].

29. Find and plot the z-transform of the following signal, and specify its region of convergence:

x[n] = (1/3)^n u[n − 4]

30. Let x[n] be defined as follows:

x[n] = (−1/4)^n u[n] + α u[−n − 2]

Given that the region of convergence of X(z) is 1 < |z| < 2, find the possible
values of the magnitude of the complex value α.

31. Find the z-transforms of the following signals and their regions of convergence:

a) x[n] = (2/5)^n sin(πn/3) u[−n]

b) x[n] = (2/5)^n sin(πn/3) u[n]

32. Find the inverse z-transform of the following signal:

X(z) = (1 + (1/4)z^{−1}) / ((1 + z^{−1})(1 − (1/2)z^{−1})), |z| > 1.
33. Consider a discrete-time LTI system represented by the following equation:

y[n] = x[2 − n] − x[−n − 1]

a) Find the transfer function of this system.


b) Find the impulse response of this system.
c) Find the z-transform of the output and its region of convergence,
when the input is x[n] = δ(n).
34. Consider a discrete-time system, represented by the following equation:

y[n] = (n + 2)x[n]

a) Find the output y[n], when the discrete time Fourier transform of the
signal x[n] is

X(e^{jω}) = 2 / (2 − e^{−jω}), for −π ≤ ω ≤ π.
b) Find and plot the discrete time Fourier transform Y (ejω ).
c) Find the z-transform of the output Y (z)

35. Programming - Frequency Domain Encoding

• Introduction
We are not using text messages anymore. They are very boring. Now,
almost all mainstream messaging apps support voice messages. There-
fore, as of this moment, you and I will communicate with voice mes-
sages, but I have a problem. I am paranoid about privacy. I do not trust
any Big Tech company, so I encoded my voice message with a special
encoding that only you and I know. Your task is to decode and write
my message. Don’t worry. I will give you the decoding recipe.

• Decoding Recipe (encoding is the same, so you can send me encoded
messages using the same recipe)
(a) Transform the given voice signal to the Fourier domain using Fast
Fourier Transform.
(b) Split the Fourier domain representation into two parts, positive
and negative frequencies, reverse both parts and concatenate them
again. For example, if the Fast Fourier Transform of x is X[jw] =
[a, b, c, d, e, f, g, h], then the encoded list must be X′[jw] = [d, c, b, a, h, g, f, e].
(c) Return to the time domain using the Inverse Fast Fourier Transform
and listen to the message.
• Fast Fourier Transform (FFT)
The DTFT and CTFT are great tools for theoretical purposes and filter
design, but they are not so practical for digital signals because they
are defined over an infinite domain. On the other hand, there is a better
tool for analyzing finite sampled signals in a more elegant way, called the
Discrete Fourier Transform (DFT). The method is the bridge between
continuous time signals and discrete time signals. Also, it is not possible
to capture some frequencies in a sampled signal (see the Nyquist-Shannon
Sampling Theorem).

The DFT is one of the fundamental tools for analyzing signals in the frequency
domain. The DFT, X[k], of a signal x[n] is defined as follows:

Discrete Fourier Transform: X[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N}, k = 0, ..., N−1

Inverse Discrete Fourier Transform: x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j2πkn/N}, n = 0, ..., N−1
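As a sanity check, the two formulas above can be translated almost line by line into Python. The short reference implementation below is only a sketch (the function names naive_dft and naive_idft are illustrative, not part of the assignment); it is useful later for validating a faster FFT.

import numpy as np

def naive_dft(x):
    # Direct evaluation of X[k] = sum_n x[n] exp(-j*2*pi*k*n/N); O(N^2).
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    W = np.exp(-2j * np.pi * k * n / N)   # N x N matrix of twiddle factors
    return W @ x

def naive_idft(X):
    # Synthesis equation: x[n] = (1/N) sum_k X[k] exp(+j*2*pi*k*n/N).
    X = np.asarray(X, dtype=complex)
    N = len(X)
    k = np.arange(N)
    n = k.reshape((N, 1))
    W = np.exp(2j * np.pi * k * n / N)
    return (W @ X) / N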

The complexity of the naive DFT algorithm is O(N^2). Therefore, a lot of effort
was spent on improving the efficiency of the DFT algorithm family. The result
is the elegant divide-and-conquer Fast Fourier Transform (FFT) algorithm,
which was chosen as one of the ten most important algorithms of the
20th century. Although the algorithm was invented by Carl
Friedrich Gauss in 1805, when he needed it to interpolate the orbits of the
asteroids Pallas and Juno from sample observations, it was reinvented
and popularized during the 1960s. The complexity of the algorithm is
O(N log N). After that point, many FFT variants were proposed.
You will implement the best-known FFT algorithm. The main idea is
to divide the DFT computation into odd and even parts. It first computes
the DFTs of the even-indexed inputs (x_{2m} = x_0, x_2, ..., x_{N−2}) and of
the odd-indexed inputs (x_{2m+1} = x_1, x_3, ..., x_{N−1}), and then combines
those two results to produce the DFT of the whole sequence. This idea
can then be applied recursively to reduce the overall runtime to O(N log N).
X[k] = Σ_{n=0}^{N/2−1} x[2n] e^{−j2π(2n)k/N} + Σ_{n=0}^{N/2−1} x[2n+1] e^{−j2π(2n+1)k/N}

We can simplify this formula as follows:

X[k] = O[k] + E[k] e^{−(j2π/N)(k−1)}

where O[k] and E[k] are the discrete Fourier transforms of the elements
with odd and even indices, respectively. Moreover, since we know that
the discrete Fourier transform of a signal is periodic, we do not have to
calculate two periods in the summations; we can calculate only the first
period and then concatenate the result with itself. The only concern
is that we are multiplying E[k] with e^{−j2π/N}. However, it has the nice
property that

e^{−(j2π/N)(k−1+N/2)} = −e^{−(j2π/N)(k−1)}.

Therefore, we can write this equation as:

X[k] = O[k] + E[k] e^{−(j2π/N)(k−1)},                     if k ≤ N/2
X[k] = O[k − N/2] − E[k − N/2] e^{−(j2π/N)(k−1−N/2)},     if k > N/2

You can implement the ifft() function by using the fft() function. Think about
that.
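A minimal recursive sketch of this idea is given below. It follows the standard radix-2, zero-based even/odd decomposition (the twiddle factor e^{−j2πk/N} multiplies the DFT of the odd-indexed samples), assumes the length is a power of two as the hints allow, and implements the inverse via the conjugation trick rather than a separate recursion. Function names are illustrative.

import numpy as np

def fft(x):
    # Radix-2 decimation-in-time FFT; assumes len(x) is a power of two.
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    E = fft(x[0::2])                       # DFT of even-indexed samples
    O = fft(x[1::2])                       # DFT of odd-indexed samples
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)  # e^{-j 2 pi k / N}
    return np.concatenate([E + twiddle * O,
                           E - twiddle * O])

def ifft(X):
    # Inverse via conjugation: ifft(X) = conj(fft(conj(X))) / N.
    X = np.asarray(X, dtype=complex)
    return np.conj(fft(np.conj(X))) / len(X)

# Quick check against numpy, as suggested in the hints:
x = np.random.rand(8)
assert np.allclose(fft(x), np.fft.fft(x), atol=1e-7)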

• Hints
(a) You can check your fft() function by comparing it against numpy.fft.fft(). If
the individual differences are below 10^{−7} for our input, your function
works correctly.
(b) For simplicity, you can assume that the length of the input file is
2^n, where n ∈ N.
(c) The complexity of the Fast Fourier Transform algorithm is O(N log N).
Please be careful about the complexity.
(d) To read the sound data, you can use scipy.io. It also returns the
sample rate of the audio file, which is very useful for determining the
frequency bins.
(e) Please be careful about the frequency bins of your implementation,
even if they are not required to complete the problem. They may
be out of order.
• Regulations
(a) You should add the frequency domain magnitude plots of the encoded
and decoded signals, and the corresponding time domain plots, to your
solution, to see the difference between the two signals. That means your
solution must contain 4 different plots.
(b) You should write the secret message in your solution. You can find
the encoded message in encoded.wav.
(c) You should use Python 3 for this problem.
(d) You are not allowed to use any library other than numpy,
matplotlib.pyplot and scipy.io.
(e) You are not allowed to use numpy.fft in this problem; you should
implement your own fft() and ifft() functions.

Chapter 10
Linear Time Invariant Systems as Filters

Until now, we have studied various representations of Linear Time Invariant (LTI)
systems. We defined impulse responses in the time domain and the corresponding
frequency responses in the frequency domain,

h(t) ←→ H(jω) and h[n] ←→ H(ejω ),


for continuous time and discrete time LTI systems, respectively. Then, we
extended the Fourier transforms to Laplace transforms for continuous time systems
and z-transforms for discrete time systems to generalize the frequency
response to transfer functions,

h(t) ←→ H(s) and h[n] ←→ H(z),


for continuous time and discrete time LTI systems, respectively.
We represented the dynamic behavior of a continuous time and discrete
time LTI system by a differential equation,
Σ_{k=0}^{N} a_k d^k y(t)/dt^k = Σ_{k=0}^{M} b_k d^k x(t)/dt^k   (10.1)

and difference equations,


Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k],   (10.2)

respectively.
We studied the basic properties of LTI systems, such as memory, causality,
stability, and invertibility. We studied how an LTI system relates the input and
output signals. But, where and when do we use LTI systems? What do they
do to the input signals? How do they change the input signal to produce an
output signal?

The answers to the above questions are manifold and depend on the
application areas. One very important area where LTI systems are widely
used is filtering. An LTI system acts as a filter to change the structure of
the input signals.
Motivating Question: What is a filter?
In general, filters are devices which separate the unwanted stuff from a
pool of objects to obtain the wanted stuff. The pool may contain anything,
such as water, air, chemicals, soil, etc.
In the context of signals and systems, the "pool" contains signals. LTI
systems are considered as filters to "clean up" the input signals.
Recall that Fourier series representation enables us to decompose a periodic
input signal into harmonically related complex exponentials. The amount of
each harmonic frequency is measured by the spectral coefficients of the Fourier
Series, {ak }. We may want to eliminate some of the unwanted harmonics of
complex exponential functions. These components may correspond to a type
of noise in a speech recording or some unwanted objects in an image.
Later, we extended the Fourier series representation to Fourier transforms,
where we could represent a time domain signal by a continuous frequency
spectrum. In order to manipulate and/or reshape the frequency content of an
input signal for generating a desired signal at the output, we can design an LTI
system, which suppresses some of the frequencies or emphasizes some others.
For example, we may want to change the frequency content of a signal to isolate
some musical instruments in an orchestra or accentuate the voice of the singer.
Motivating Question: How can we design an LTI System to filter the in-
put signal, which generates an output signal in a desired form? In other words,
how can we design an LTI system, which outputs a signal with a prescribed
frequency content?
At the core of the answers to the above questions resides the frequency
response or transfer function of the LTI systems.

10.0.1. Filtering the Periodic Signals by Frequency Response
Recall that a periodic input signal of a discrete time LTI system can be
represented by its spectral coefficients, ak, using the Fourier series. The corresponding
output is also periodic and, hence, can be represented by a Fourier
series with spectral coefficients bk.
The spectral coefficients of the output signal, bk, can be uniquely obtained
from the spectral coefficients of the input signal, ak, by scaling them with the kth
eigenvalue of the discrete time LTI system,

x[n] = Σ_{k=⟨N⟩} a_k e^{jkω0 n}  →  h[n]  →  y[n] = Σ_{k=⟨N⟩} a_k H(e^{jkω0}) e^{jkω0 n}.

Figure 10.1: When the input of a discrete time LTI system is the superposition
of the eigenfunctions, the output is the superposition of the input, scaled by
the k th eigenvalue, which is the frequency response H(ejω ), for ω = kω0 .

bk = ak H(e^{jkω0}),   (10.3)

as shown in Figure 10.1. In the above equation, the kth eigenvalue is the
frequency response of the LTI system at ω = kω0 and can be calculated from the
impulse response as follows:

H(e^{jkω0}) = Σ_{n=−∞}^{∞} h[n] e^{−jkω0 n}.   (10.4)

Similarly, a continuous time LTI system scales the complex exponential
eigenfunction e^{jkω0 t} by its eigenvalue H(jkω0), which is the frequency response
of the LTI system at ω = kω0.
As in the discrete time LTI systems, we observe that the spectral parameters
of the output are the scaled versions of the spectral parameters of the input,

bk = ak H(jkω0),   (10.5)

where the scaling factor is the kth eigenvalue of the continuous time LTI system,
which is defined as the frequency response at ω = kω0,

H(jkω0) = ∫_{−∞}^{∞} h(t) e^{−jkω0 t} dt,   (10.6)

as shown in Figure 10.2.
As in the discrete time case, the kth eigenvalue of a continuous time LTI system
scales each spectral coefficient at the output, i.e., H(jkω0) multiplies ak to
generate bk.
Therefore, the frequency response of an LTI system reshapes the frequency
content of periodic signals by scaling the spectral coefficients of the input signal.
For example, the complex exponential functions e^{jkω0 n} for certain frequencies kω0
may correspond to the noise embedded in a discrete time input signal. In this
case, we design an LTI system whose frequency response satisfies H(e^{jkω0}) = 0 for
the corresponding k-values. At the output of the filter, we obtain the spectral

x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω0 t}  →  h(t)  →  y(t) = Σ_{k=−∞}^{∞} a_k H(jkω0) e^{jkω0 t}

Figure 10.2: When the input of a continuous time LTI system is the superposition
of the eigenfunctions e^{jkω0 t}, the output is the superposition of the
same eigenfunctions. However, each weight ak at the input is scaled by the
kth eigenvalue of the LTI system at the output, where the scaling factor is the
frequency response at ω = kω0.

coefficients

bk = H(e^{jkω0}) ak = 0,

so the components corresponding to the noise vanish, yielding a "clean" signal. This
process of reshaping the spectral coefficients of the input signal is called filtering.
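Eq. (10.3) can be checked numerically with a few lines of Python. The sketch below is only illustrative (the period N, the coefficients a_k and the 3-point moving-average impulse response are arbitrary choices, not taken from the text): it samples the frequency response at the harmonics kω0 and scales the input coefficients to obtain b_k, showing that the higher harmonic is attenuated far more strongly than the lower one.

import numpy as np

# One period (N = 8) of a periodic input: a slow harmonic (k = 1) plus an
# unwanted faster harmonic (k = 3), specified by its spectral coefficients a_k.
N = 8
w0 = 2 * np.pi / N
a = np.zeros(N, dtype=complex)
a[1], a[N - 1] = 0.5, 0.5        # cos(w0 n)
a[3], a[N - 3] = 0.25, 0.25      # unwanted component: 0.5 * cos(3 w0 n)

# Example LTI system: 3-point moving average, h[n] = [1/3, 1/3, 1/3].
h = np.array([1 / 3, 1 / 3, 1 / 3])
k = np.arange(N)
H = np.array([np.sum(h * np.exp(-1j * kk * w0 * np.arange(len(h)))) for kk in k])

b = a * H                        # Eq. (10.3): b_k = a_k H(e^{j k w0})

print(np.round(np.abs(H[1]), 3), np.round(np.abs(H[3]), 3))
# ~0.805 vs ~0.138: the higher harmonic is attenuated much more strongly.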

10.0.2. Filtering the Aperiodic Signals by Frequency Response
In the above Fourier series representations of the periodic input and output
signals for discrete time and continuous time systems, we observe that an
eigenvalue scales the spectral coefficients for each harmonic, kω0 , ∀k, where k
is an integer. We can relax this assumption and take the limiting case to define
the eigenfunction of a continuous time LTI system as,

x(t) = ejωt
and eigenfunction of a discrete time LTI system as,

x[n] = ejωn ,
where kω0 → ω. In this case, the output of the LTI system for the contin-
uous time LTI system becomes,

y(t) = H(jω)ejωt . (10.7)


where the eigenfunction e^{jωt} is scaled by the eigenvalue of the LTI system,
which is the frequency response,

H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt.   (10.8)

Similarly, the output of a discrete time LTI system becomes,

y[n] = H(ejω )ejωn ,


where the eigenfunction e^{jωn} is scaled by the eigenvalue of the LTI system,
which is the frequency response,

H(e^{jω}) = Σ_{n=−∞}^{∞} h[n] e^{−jωn}.   (10.9)

In the above approach, rather than defining the eigenvalue of an LTI system
for each integer multiple of the fundamental frequency, kω0, we define a
function of the continuous frequency variable ω. This generalization allows
us to represent the eigenvalues of LTI systems by the frequency response, in
the frequency domain. Hence, in the limit,

lim_{kω0→ω} H(e^{jkω0}) → H(e^{jω}),   (10.10)

the countably infinite eigenvalues of an LTI system at the harmonics kω0,
for all k, converge to the frequency response with continuous frequency ω.
Note that the eigenvalue of an LTI system, corresponding to the kth harmonic
of a periodic signal, is a special value of the frequency response at ω = kω0.
Recall that the impulse response uniquely defines an LTI system. Thus,
the frequency response also uniquely defines a discrete time or continuous
time LTI system. The mapping between the impulse response and the frequency
response of an LTI system is one-to-one and onto, both for a discrete time LTI system,

h[n] ↔ H(ejω ) (10.11)


and for a continuous time LTI systems,

h(t) ↔ H(jω). (10.12)


Recall also that the relationship between an aperiodic input and output signal
pair for the continuous time LTI system is given by

Y (jω) = H(jω)X(jω)
and the relationship between the input and output signal for the discrete time
LTI system is given by

Y (ejω ) = H(ejω )X(ejω ).

Hence, the frequency content of an aperiodic input signal can be easily scaled
by the frequency response to increase or decrease the amount of predefined

frequency range in the signal and change the frequency content of the signal
to generate a desired signal at the output. All we need to do is to design
an appropriate frequency response, H(jω) for a continuous time system and
H(ejω ) for a discrete time system, which attenuates the undesired frequency
ranges and amplifies the desired ones to shape up the frequency content of the
input signal.
We can further represent the eigenvalues of an LTI system by the trans-
fer function H(s) in Laplace domain for continuous time systems and by the
transfer functions H(z) in z-domain for discrete time systems. However, in the
rest of the chapter, we use frequency response to represent the eigenvalues of
the LTI systems, assuming that the Fourier transform of the impulse response
exists.
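The relation Y(e^{jω}) = H(e^{jω}) X(e^{jω}) can be illustrated numerically by using the FFT of a finite record as a stand-in for the DTFT. The signal, sampling rate and cutoff below are arbitrary choices made for this sketch, not values taken from the text.

import numpy as np

fs = 1000                                   # sampling rate in Hz (arbitrary)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)   # slow + fast component

X = np.fft.fft(x)
f = np.fft.fftfreq(len(x), d=1 / fs)        # frequency of each DFT bin, in Hz

H = (np.abs(f) < 50).astype(float)          # ideal low-pass mask, cutoff 50 Hz
Y = H * X                                   # Y = H * X in the frequency domain
y = np.fft.ifft(Y).real                     # back to the time domain

print(np.max(np.abs(y - np.sin(2 * np.pi * 5 * t))))   # tiny: the 200 Hz component is removed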

10.1. Frequency Ranges of Frequency Response
When an LTI system is designed as a filter, it passes some frequency ranges
and blocks the rest of the frequencies. These frequency ranges are characterized
by two major quantities, namely, the cutoff frequency and the bandwidth. These
quantities shape the analytical form of the frequency response, as defined below:

Definition 10.1: Cutoff Frequency of the Frequency Response: The


cutoff frequency ωc , is the boundary of the frequency response, at which the
energy flow of the LTI system is stopped or significantly reduced.
In the above definition, the cutoff frequency is an angular frequency, ω0 =
2π/T = 2πf, which is measured in radians per second. The fundamental period T is
measured in seconds and the fundamental frequency f is measured in Hertz
(cycles/second).

Definition 10.2: Bandwidth of the Frequency Response: The interval
of the frequency response between the lowest and highest cutoff frequencies,
ωbw = ωc2 − ωc1, is called the bandwidth.
For example, the human ear can hear sounds between 20 Hertz and 20 kilohertz.
The lowest cutoff frequency is ωc1 = 20 Hertz and the highest cutoff
frequency is ωc2 = 20,000 Hertz. Thus, the bandwidth of the human auditory
system is ωbw = 20,000 − 20 = 19,980 Hertz. The frequency response of the
human auditory system is nonzero only within the bandwidth. Outside
this bandwidth, both the magnitude and the phase of the frequency response
are zero.

Selecting the cutoff frequencies and the bandwidths of a frequency response
is an important design issue and it depends on the application domain. For
example, if we need to chop off additive noise corresponding to the high frequency
components of the input signal, we first identify the bandwidth of the
noise. Then, we set the cutoff frequency to the lowest limit of that bandwidth.
The frequency response of the LTI system with this cutoff frequency
removes the noise in the input signal by eliminating the spectral coefficients
corresponding to the noise at the output.
The cutoff frequencies of the frequency response of continuous time systems
can be selected as high as the dynamic range of the equipment allows. However,
since the frequency response of discrete time systems is periodic with period 2π,
the cutoff frequency of a discrete time system should lie within the limits
of the fundamental period, |ωc| ≤ π.

10.2. Filtering with LTI Systems


In order to filter an input signal, we design the frequency response of an
LTI system with predefined cutoff frequencies and bandwidths, such that
the output signal consists of spectral coefficients of the "desired form". During
the design of a filter, different representations reveal different properties of the
filter, which complement each other:
1. Differential equations for continuous time filters and difference equations
for discrete time filters give implicit relationship between the change of
input and output, relative to each other, in time domain.
2. Impulse response gives the output of a system, when the input is a unit
impulse function. It also provides the nonzero time duration of the filter,
which is an important design issue.
3. Frequency response shows the response of an LTI system to the frequency
components of the input signals. It is the most informative representation
of the filters about how they shape the frequency content of the input
signals. Therefore, in most practical applications, filter design starts in
the frequency domain.
There is a large variety of filters for both discrete time and continuous time
systems in the Signal Processing literature. Depending on the physical realizability
of the frequency response and/or impulse response in real life applications, the
filters can be categorized as follows:
1. Ideal Filters, where the frequency response takes only binary values,
either 0 or 1. The abrupt switch between 0 and 1 creates discontinuities at the
cutoff frequencies. These filters pose challenges in implementation, which is why
they are called ideal.

2. Real Filters, which are designed with a smooth frequency response
to avoid discontinuities. These filters gradually attenuate the undesired frequencies
and reach the cutoff frequencies smoothly. Real filters avoid most of
the problems of ideal filters, such as the Gibbs phenomenon, which creates undesired
fluctuations at discontinuities.
Depending on the frequency content, LTI filters can be classified under four
headings:
1. Low pass filters, which suppress the high frequency ranges of the input
signal and pass relatively lower frequencies. In other words, a low pass filter
has a frequency response, which has high magnitudes in low frequencies and
low magnitudes in high frequencies, to suppress or eliminate the high frequency
ranges of the input signal, at the output.
2. High pass filters, which are the complement of the low pass filters.
They suppress the low frequency ranges and pass relatively higher frequency
ranges of the input signal, at the output.
3. Band pass filters, which pass the frequency ranges of the input signal
corresponding to a desired interval of frequencies.
4. Band stop filters, which suppress the frequency ranges of the input
signal in a desired interval of frequencies.

Learn more about the effect of different types of
filters on music signals @ https://384book.net/v1001
WATCH

The filters defined above can be designed for both discrete time and
continuous time systems.
The sections below overview the ideal and real filters for low pass, high pass,
band pass and band reject filters. We illustrate how an LTI system behaves as
a filter and shapes the frequency content of an input signal for both continuous
time and discrete time cases.

10.3. Ideal Filters for Discrete Time and Continuous Time LTI Systems
The magnitude of an ideal filter takes either the value 1 or 0, to either pass
or suppress a frequency band of the input signal at the output of the filter.
Hence, an ideal filter either retains a frequency band of the input signal as
is, or removes it, depending on the value of the magnitude spectrum of the
frequency response.

Mathematically, an ideal filter outputs the following spectral coefficients
for periodic inputs, for continuous time systems,

bk = ak,  for H(jkω0) = 1,
bk = 0,   for H(jkω0) = 0.      (10.13)

Similarly, an ideal filter outputs the following spectral coefficients for
periodic inputs, for discrete time systems,

bk = ak,  for H(e^{jkω0}) = 1,
bk = 0,   for H(e^{jkω0}) = 0.      (10.14)

When the input signal is aperiodic, an ideal filter yields the following output,
for continuous time systems,

Y(jω) = X(jω),  for H(jω) = 1,
Y(jω) = 0,      for H(jω) = 0.      (10.15)

Similarly, when the input signal is aperiodic, an ideal filter yields the following
output for discrete time systems,

Y(e^{jω}) = X(e^{jω}),  for H(e^{jω}) = 1,
Y(e^{jω}) = 0,          for H(e^{jω}) = 0.      (10.16)
In the following subsections, we overview the ideal filters for discrete time
and continuous time systems.

10.3.1. Ideal Low Pass Filters


An ideal low pass filter passes all the frequencies below the cutoff frequency,
|ω| ≤ ωc, and suppresses the rest of the frequencies. Mathematically, the
frequency response of an ideal low pass filter for a continuous time system is

H(jω) = 1,  for |ω| ≤ ωc,
H(jω) = 0,  otherwise,      (10.17)

and for a discrete time system is

H(e^{jω}) = 1,  for |ω| ≤ ωc,
H(e^{jω}) = 0,  otherwise,      (10.18)

for one period, −π ≤ ω ≤ π, and repeats at every period of 2π; in other
words, H(e^{jω}) = H(e^{j(ω+2kπ)}) for all k. Therefore, the cutoff frequency should
also lie in −π < ωc < π.

Figure 10.3: Continuous time ideal low pass filter.


Figure 10.4: Discrete time ideal low pass filter.

The cutoff frequency ωc determines the number of nonzero spectral coef-


ficients of the output signal. Lower cutoff frequencies pass less spectral coef-
ficients, chopping the rest of the spectral coefficients corresponding to higher
frequency harmonics.

10.3.2. Ideal High Pass Filters


The bandwidth of a high pass filter is the complement of that of the low pass
filter. A high pass filter passes all the frequencies with |ω| ≥ ωc.
Mathematically, the frequency response of an ideal high pass filter for a
continuous time system is

Figure 10.5: Continuous time ideal high pass filter.


Figure 10.6: Discrete time ideal high pass filter.

H(jω) = 1,  for |ω| ≥ ωc,
H(jω) = 0,  otherwise,      (10.19)

and for a discrete time system is

H(e^{jω}) = 1,  for |ω| ≥ ωc,
H(e^{jω}) = 0,  otherwise,      (10.20)

for one period, −π ≤ ω ≤ π, and repeats at every period of 2π. Note that
H(e^{jω}) = H(e^{j(ω+2kπ)}) for all k.

Figure 10.7: Continuous time ideal band pass filter.

10.3.3. Ideal Band Pass and Band Reject Filters


For the band pass and band reject filters, the cutoff frequencies, (ωc1 , ωc2 )
are defined for pass band and reject band intervals for passing or suppressing
the frequencies.
A band pass filter passes the frequencies for ωc1 ≤ |ω| ≤ ωc2. Mathematically,
the frequency response of an ideal band pass filter for a continuous time
system is

H(jω) = 1,  for ωc1 ≤ |ω| ≤ ωc2,
H(jω) = 0,  otherwise,      (10.21)

and for a discrete time system is

H(e^{jω}) = 1,  for ωc1 ≤ |ω| ≤ ωc2,
H(e^{jω}) = 0,  otherwise,      (10.22)

for one period, −π ≤ ω ≤ π, and repeats at every period of 2π.
A band reject filter passes the frequencies for which |ω| ≤ ωc1 or |ω| ≥
ωc2. Mathematically, the frequency response of an ideal band stop filter for a
continuous time system is

H(jω) = 1,  for |ω| ≤ ωc1 and |ω| ≥ ωc2,
H(jω) = 0,  otherwise,      (10.23)

and for a discrete time system is

H(e^{jω}) = 1,  for |ω| ≤ ωc1 and |ω| ≥ ωc2,
H(e^{jω}) = 0,  otherwise,      (10.24)

Figure 10.8: Discrete time ideal band pass filter.

Figure 10.9: Continuous time ideal band reject filter.

for one period, in −π ≤ ω ≤ π and repeats at every period of 2π.


Unfortunately, ideal filters have discontinuities at the cutoff frequencies,
in the frequency response, while they switch between 0 and 1. Due to the
discontinuities in the frequency domain, the time domain representations of
ideal filters by impulse response, range in infinite duration. This property of
ideal filters name them as IIR (Infinite Impulse Response) filters. Although
the ideal filters have simple analytical forms, they cannot be realized perfectly
in real life applications by using physical system components.
Figure 10.11 a and b shows the magnitudes of the frequency responses for
low-pass, high-pass, band-pass and band-stop ideal filters, for continuous time
and discrete time systems, respectively.
In the following, we study the effect of the ideal filters on the input signals.

Exercise 10.1: Consider a discrete time ideal low pass filter, given by the

Figure 10.10: Discrete time ideal band reject filter.

following frequency response:

Hlp(e^{jω}) = 1,  for |ω| < ωc   (ωc is the cutoff frequency),
Hlp(e^{jω}) = 0,  otherwise.      (10.25)

Note: Hlp(e^{jω}) is periodic with 2π. Hence, |ωc| ≤ π.


a) Find the impulse response of this filter.
b) Plot the Fourier transform of the output for the input of Figure 10.12, where
the bandwidth of the signal is 2wm > 2ωc .
c) Compare the input and output signals in frequency domain.

Solution:
This filter is depicted in Figure 10.13.
In order to find the impulse response, we can easily take the inverse Fourier
transform of the frequency response:
h[n] = (1/2π) ∫_{−ωc}^{ωc} e^{jωn} dω = (1/(2πjn)) (e^{jωc n} − e^{−jωc n}) = sin(ωc n) / (πn)   (10.26)

Note: Ideal Low Pass filters are NOT causal: h[n] ̸= 0 for n < 0.
b) Recall the convolution property,

y[n] = x[n]∗h[n] ←→ Y (ejω ) = X(ejω )Hlp (ejω ) (10.27)


In order to find the output of the ideal low pass filter to the input signal given
in Figure 10.12, we multiply the frequency response with the Fourier transform
of the input,

Figure 10.11: Ideal filters, with lowpass (a), high pass (b), band pass (c) and
band reject (d) frequency responses.

Y (ejω ) = X(ejω )Hlp (ejω ). (10.28)


Therefore, at the output, the input signal is chopped for the frequencies
higher than the cutoff frequency, wc as indicated in Figure 10.15.
c) Comparison of the input and output signals in the frequency domain
reveals that the ideal low pass filters chop the frequencies higher than the
cutoff frequency |ωc |.
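The impulse response of Eq. (10.26) can also be verified numerically: truncate h[n] to a finite window, sum its DTFT on a frequency grid, and check that the magnitude is close to 1 in the passband and close to 0 in the stopband. The cutoff value and the truncation length below are arbitrary choices made only for this sketch.

import numpy as np

wc = np.pi / 4                                  # cutoff frequency (arbitrary)
n = np.arange(-200, 201)
h = (wc / np.pi) * np.sinc(wc * n / np.pi)      # = sin(wc*n)/(pi*n), with h[0] = wc/pi

# Evaluate H(e^{jw}) = sum_n h[n] e^{-jwn} on a frequency grid.
w = np.linspace(-np.pi, np.pi, 501)
H = np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])

print(np.round(np.abs(H[np.abs(w) < wc - 0.2]).mean(), 2))   # ~1 inside the passband
print(np.round(np.abs(H[np.abs(w) > wc + 0.2]).mean(), 2))   # ~0 outside (up to Gibbs ripple)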

Exercise 10.2: Find the impulse response of the discrete time high pass
filter, represented by the following frequency response:

Hhp(e^{jω}) = 1,  for |ω| ≥ ωc,
Hhp(e^{jω}) = 0,  otherwise.      (10.29)

Note: H(e^{jω}) is periodic with 2π. Hence, |ωc| < π.

Solution:
The frequency response of the above high pass filter can be represented as

Figure 10.12: One full period of the Fourier transform of the input, for a discrete
time band-limited signal with maximum frequency ωm. Keep in mind that this
Fourier transform is periodic and repeats itself at every 2π.

the complement of the low pass filter, as follows:


Hhp(e^{jω}) = 1, for |ω| ≥ ωc;  0, otherwise  =  1 − Hlp(e^{jω}).   (10.30)

We already obtained the impulse response of the low pass filter in the
previous example. Thus, taking the inverse Fourier transform of the above
frequency response, gives,

h[n] = δ[n] − sin(ωc n) / (πn).   (10.31)

Note: Ideal high pass filters are also non-causal. In fact, all ideal filters
are non-causal. Therefore, they are not realizable in real life applications.
However, they are very useful to design real filters as they form a quality
metric, when the designed filter is compared to its ideal counterpart.

Exercise 10.3: Consider the frequency response of the following continuous


time low pass filter:

H(jω) = 1,  for |ω| < ωc,
H(jω) = 0,  for |ω| > ωc.      (10.32)

a) Find the impulse response h(t) of this system.


b) What is the bandwidth of this filter?
c) Find the output of this filter, when the input is

Figure 10.13: Frequency response of an ideal low pass filter. Note that since it
is a discrete time LTI system, the frequency response is periodic with w = 2π.

X(jω) = 1,  for |ω| < ωm,
X(jω) = 0,  for |ω| > ωm,      (10.33)

where ωc < ωm.
d) Compare the input and output of this filter in time and frequency do-
mains.

Solution: a) The frequency response of this filter is illustrated in Figure


10.16(a). Using the synthesis equation, we can determine the impulse response,
h(t), as follows,
h(t) = (1/2π) ∫_{−ωc}^{ωc} e^{jωt} dω = sin(ωc t) / (πt),   (10.34)
which is a sinc function, depicted in Figure 10.16(b).
b) The bandwidth of this continuous time low pass filter is 2ωc .
c) The output of the low pass filter is,
Y(jω) = X(jω) H(jω) = X(jω), for |ω| < ωc;  0, for |ω| > ωc.   (10.35)
Since the cutoff frequency of the frequency response is smaller than that of the
input signal, i.e., ωc < ωm, the low pass filter keeps the input signal for |ω| < ωc
and chops the frequencies for |ω| > ωc.
The analysis of Figure 10.16 reveals that the frequency response consists of
low frequencies for |ω| < ωc < ωm . The impulse response, h(t) alternates and

Figure 10.14: Impulse response of a discrete time ideal low pass filter.

attenuates, as t → ±∞. Notice that as ωc approaches to 0, the impulse response


h(t) gets flatter.
The impulse response is maximum at t = 0 in time domain and ω = 0 in
frequency domain. It keeps attenuating as t → ±∞. Hence, the continuous
time low pass filter with the cutoff frequency ωc is represented in time and
frequency domain as follows:
h(t) = sin(ωc t) / (πt)  ↔  H(jω) = 1, for |ω| < ωc;  0, for |ω| > ωc.   (10.36)

d) The only difference between the input and output signals in the frequency
domain is the bandwidth. While the bandwidth of the input signal is 2ωm ,
the bandwidth of the output signal is the same as the bandwidth of the filter,
which is 2ωc < 2ωm .
In the time domain, the bandwidth of the input signal is decreased at the
output. Hence, the sinc function of the input becomes flatter around the origin
at the output signal.

Exercise 10.4:

Consider the following continuous time band pass filter:

Hbp(jω) = 1,  for ωc1 ≤ |ω| ≤ ωc2,
Hbp(jω) = 0,  otherwise.      (10.37)
a) Find the impulse response of this filter.

Figure 10.15: The frequency domain representation of the output signal, when
an input signal is filtered by a low-pass filter with cutoff frequency ωc.

b) Find the output y(t) of this filter when the input is X(jω) = 1.
c) Compare the input and output pair of this filter in the time and frequency
domains.

Solution
a) Band pass filter can be represented by a shifted low pass filter,
Hbp(jω) = Hlp(j(ω − (ωc1 + ωc2)/2)).
From the Fourier transform properties, we know that

ejω0 t h(t) ←→ H(j(ω − ω0 )).


The shift for the band pass filter is ω0 = (ωc1 + ωc2)/2. Hence,

h_bp(t) = e^{j(ωc1 + ωc2)t/2} h_lp(t).
The bandwidth of the low pass filter Hlp is,


Figure 10.16: a) Frequency response H(jω) of a continuous time low pass filter
with the cutoff frequency |ωc |, (b) the corresponding impulse response, h(t).

BW_bp = ωc2 − ωc1.

The cutoff frequency of the corresponding low pass filter is ωc = (ωc2 − ωc1)/2.
Therefore, the impulse response of the band pass filter is

h_bp(t) = e^{j(ωc1 + ωc2)t/2} · sin(((ωc2 − ωc1)/2) t) / (πt).
b) From the Fourier transform table,

x(t) = δ(t) ←→ X(jω) = 1.


Hence,

y(t) = h_bp(t) ∗ x(t) = h_bp(t) = e^{j(ωc1 + ωc2)t/2} · sin(((ωc2 − ωc1)/2) t) / (πt).
c) In the frequency domain, we note that the input signal, X(jω) = 1, is
NOT band limited. On the other hand, the output signal is,

Y(jω) = X(jω) Hbp(jω) = Hbp(jω) = 1, for ωc1 ≤ |ω| ≤ ωc2;  0, otherwise,   (10.38)

which is a band limited signal.


In the time domain the output becomes the impulse response of the filter.
As can be observed from the above examples, ideal filters contain discontinuities
at the cutoff frequencies, ωc, ωc1 and ωc2, which give non-causal
impulse responses in the time domain, ranging over (−∞, ∞). These types
of filters are called IIR (Infinite Impulse Response) filters. The discontinuities
also result in undesired noise effects when the systems and signals are
reconstructed in the time domain. Furthermore, the discontinuities are not easy to
realize in real life applications. For this reason, real life filters are mostly
designed with smooth corners, as explained in the following sections.

Filtering with low, band and high-pass @ https://384book.net/i1001
INTERACTIVE

10.4. Discrete Time Real Filters
Discrete time real filters approximate the ideal filters by smoothing the discon-
tinuities of the frequency response. The smoothing effect in frequency domain
truncates the infinite impulse response (IIR) to produce a finite impulse response
(FIR) function.
Let us study a few discrete time real filters and their frequency and impulse
responses.

10.4.1. Discrete Time Low Pass and High Pass Real Filters
Design methodologies of low pass and high pass filters are very similar to each
other. Both of them can be represented by difference equations. The cutoff
frequencies and the bandwidths can be adjusted by the constant parameters of
the difference equations. In the following exercises, we provide simple examples
of discrete time real low pass and high pass filters.

Exercise 10.5: Discrete time Low Pass FIR Filter


Suppose that we have the following discrete time LTI system, which aver-
ages two consecutive input signals, as follows:
y[n] = (1/2)(x[n] + x[n − 1]).   (10.39)
a) Find the impulse response of this filter.
b) Find the frequency response of this filter.
c) Find the spectral coefficients of the output signal in terms of the spectral
coefficients of the input signal.
d) Comment on the effect of the filter on the output signal. What type of
a filter is this?
e) Find the output, when the input is

x[n] = sin 0.005πn + 0.1 cos πn. (10.40)

Solution:
a) Impulse response, h[n] can be easily obtained by replacing the input
with the impulse function, as follows:
h[n] = (1/2)(δ[n] + δ[n − 1])   (10.41)
b) Frequency response H(ejω ) can be obtained from its definition,

H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk} = (1/2)(1 + e^{−jω}) = (1/2) e^{−jω/2} (e^{jω/2} + e^{−jω/2}) = e^{−jω/2} cos(ω/2).   (10.42)
Equivalently, we could obtain the frequency response from the difference
equation by replacing the input with the eigenfunction, x[n] = ejωn and the
corresponding output, y[n] = H(ejω )ejωn .
Then, the above difference equation becomes,
y[n] = H(e^{jω}) e^{jωn} = (1/2)(e^{jωn} + e^{jω(n−1)}).   (10.43)
Finally, arranging the above equation, we obtain the frequency response of
this LTI system, as follows;
H(e^{jω}) = e^{−jω/2} cos(ω/2).   (10.44)
The impulse response and frequency response of this LTI System is one-to-
one and onto, e.g.,
h[n] = (1/2)(δ[n] + δ[n − 1])  ↔  H(e^{jω}) = e^{−jω/2} cos(ω/2).   (10.45)
Note: Since the impulse response of this filter has finite time duration
between 0 ≤ n ≤ 1, this is a FIR (Finite Impulse Response) filter.
c) The spectral coefficients bk of the output signal y[n] are obtained from

bk = ak H(e^{jkω0}) = (ak/2)(1 + e^{−jkπ}),   (10.46)

where {ak} are the spectral coefficients and ω0 = π is the angular frequency
of the input signal x[n], corresponding to the fundamental period T = 2.
d) Now, let us investigate the effect of H(ejω ) on changing the spectral
coefficients of the input, {ak } to generate the spectral coefficients of the output,
{bk }.
In order to observe how this filter shapes the input signal, we can plot the
magnitude and phase of the frequency response, given below:
Magnitude of the frequency response: |H(e^{jω})| = cos(ω/2).
Phase of the frequency response: ∠H(e^{jω}) = −ω/2.
The plot of the magnitude and phase spectrum of this filter is given in
Figure 10.17.
Recall that frequency response scales the spectral parameters of the input
to generate the spectral parameters of the output by

bk = ak H(ejkω0 ). (10.47)


Figure 10.17: Magnitude and phase spectrum of a low pass filter, represented
by the frequency response H(e^{jω}) = e^{−jω/2} cos(ω/2). Note that both magnitude
and phase plots are periodic with period 2π. The plots show only one full period
of the magnitude and phase.

The type of the filter is, basically, specified by the magnitude of the fre-
quency response. When the magnitude of the frequency response is high, at
a particular harmonic, kω0 , the corresponding spectral parameter gets rela-
tively larger, increasing the contribution of that harmonic frequency to the
signal. Conversely, when the magnitude of the frequency response gets low,
the corresponding harmonics get attenuated.
Phase of the frequency response delays the harmonics, depending on its
value at a certain frequency. Large phase shifts result in more delays in the
corresponding harmonics.
An analysis of the magnitude plot reveals that the filter of this example
attenuates the high frequency components of the input signal, as the frequency
approaches to |π|. Therefore, this is a low pass filter. Unlike an ideal low pass
filter with a discontinuity at the cutoff frequency, ωc, this filter gradually suppresses
the high frequency components, as ω → π. For example; if the input is a voice
recording, the filter decreases the treble voices, making the sound more bass.
The phase plot of the frequency response shows simply the phase angle we
get between the output and input, as a function of frequency, ω. In Figure
10.17, we observe that the phase shift between the input and output signals
increases as the frequency increases.
e) The output of this LTI system, can be easily obtained by the convolution
of the input and impulse response, as follows;

y[n] = h[n] ∗ x[n],


where the input and impulse response are given as follows,

x[n] = sin 0.005πn + 0.1 cos πn, (10.48)

h[n] = (1/2)(δ[n] + δ[n − 1]),   (10.49)
respectively.
The input signal consists of superposition of two signals,

x[n] = x1 [n] + 0.1 x2 [n], (10.50)


where x1[n] = sin(0.005πn) and x2[n] = cos(πn) (see Figure 10.18).
Note that the fundamental period of x1 [n] is T1 = 40 and it is relatively
larger than the fundamental period of x2 [n], which is T2 = 2. As a result, x2 [n]
adds ripples to x1 [n] to generate x[n].
Inserting the input and the impulse response into the convolution operation,
we obtain,


Figure 10.18: The input signal is defined as the addition of two signals, (a) x1[n]
and (b) x2[n]. (c) The addition of x1[n] and 0.1 x2[n] yields x[n]. The signal
x1[n] has a relatively low fundamental frequency, ω0 = 0.05π, compared to the
signal x2[n], which has the fundamental frequency ω0 = π.

y[n] = ((1/2)(δ[n] + δ[n − 1])) ∗ (sin(0.005πn) + 0.1 cos(πn))
     = (1/2)(sin(0.005πn) + sin(0.005π(n − 1))).   (10.51)
Comparison of Figure 10.18 and 10.19 shows that the ripples of the input
are nicely smoothed by the low pass filter, which takes the average of the
consecutive signals, in time domain.
Loosely speaking, a low pass filter smooths the input signal, depending on
the cutoff frequency, ωc. The lower the cutoff frequency, the smoother the
signal obtained at the output.
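The smoothing effect of this two-point averager can be reproduced with a few lines of Python. The sketch below simply convolves the input of Eq. (10.40) with h[n] = [1/2, 1/2] and confirms that, apart from the first sample, the result matches Eq. (10.51); the record length of 400 samples is an arbitrary choice.

import numpy as np

n = np.arange(0, 400)
x = np.sin(0.005 * np.pi * n) + 0.1 * np.cos(np.pi * n)   # input of Eq. (10.40)
h = np.array([0.5, 0.5])                                  # h[n] = (delta[n] + delta[n-1]) / 2

y = np.convolve(x, h)[:len(n)]                            # y[n] = (x[n] + x[n-1]) / 2

y_expected = 0.5 * (np.sin(0.005 * np.pi * n) + np.sin(0.005 * np.pi * (n - 1)))
print(np.max(np.abs(y[1:] - y_expected[1:])))             # ~0: the fast cos(pi n) ripple cancels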

Exercise 10.6: Example: Discrete Time Low Pass IIR Filter


In this example, we shall investigate the filtering properties of discrete time
LTI systems, represented by a first difference equation. We assume that the

Figure 10.19: The plot of the output signal y[n] = h[n] ∗ x[n], when the input and
impulse responses are x[n] = sin(0.005πn) + 0.1 cos(πn) and h[n] = (1/2)(δ[n] + δ[n−1]),
respectively.

system is initially at rest:

y[n] − ay[n − 1] = x[n]. (10.52)


a) Find and plot the frequency response of this filter and comment on the
type of the filter.
b) Find the impulse response and unit step response of this filter.

Solution: As we did before, in the continuous time case, we need to investigate


the properties of frequency response.
a) Let us feed the eigen function of the system at the input, x[n] = ejwn . The
corresponding output is

y[n] = ejwn H(ejw ), (10.53)


Replacing the input and output in the difference equation, y[n] − ay[n − 1] =
x[n], we get,

(1 − a e^{−jω}) H(e^{jω}) e^{jωn} = e^{jωn}.   (10.54)

Arranging the above equation, we obtain the frequency response,

H(e^{jω}) = 1 / (1 − a e^{−jω}).   (10.55)
We need two plots for the frequency response:
1. Magnitude of the frequency response: |H(e^{jω})| = 1 / √(a² + 1 − 2a cos ω)
2. Phase of the frequency response: ∠H(e^{jω}) = − tan^{−1}( a sin ω / (1 − a cos ω) )
Analysis of the plot of magnitude spectrum of this filter, in Figure 10.20, reveals

Figure 10.20: One full period of the magnitude and phase plots of the frequency
response H(e^{jω}) = 1 / (1 − a e^{−jω}), for the difference equation y[n] − a y[n−1] = x[n],
in the interval −π ≤ ω ≤ π.

that the high frequency components of an input signal are gradually attenuated as
|ω| → π. Thus, this is a low pass filter.
b) Impulse response and unit step response can be easily obtained as follows;

h[n] = an u[n], (10.56)

s[n] = u[n] ∗ h[n] = Σ_{k=0}^{n} a^k = ((1 − a^{n+1}) / (1 − a)) u[n],   (10.57)

respectively.
Note: The impulse response of this filter has an infinite time duration in
0 ≤ n ≤ ∞. Hence, it is an IIR filter. However, considering the exponential
decay of the function, it approaches very close to zero for large values of n.
Furthermore, it does not have any discontinuities in one full period. Therefore,
it can be realized with practically good approximation.
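The recursion y[n] = a y[n−1] + x[n] can be simulated directly, or with scipy.signal.lfilter, whose coefficient convention matches the difference equation (numerator [1], denominator [1, −a]). The value of a and the test signal below are arbitrary choices made only for this sketch.

import numpy as np
from scipy.signal import lfilter

a = 0.9                                     # pole location; closer to 1 means a lower cutoff
n = np.arange(200)
x = np.sin(0.02 * np.pi * n) + 0.3 * np.random.randn(len(n))   # slow sinusoid + noise

# y[n] - a*y[n-1] = x[n]  ->  numerator [1], denominator [1, -a]
y = lfilter([1.0], [1.0, -a], x)

# The direct recursion gives the same result (system initially at rest):
y_rec = np.zeros_like(x)
for k in range(len(x)):
    y_rec[k] = x[k] + (a * y_rec[k - 1] if k > 0 else 0.0)
print(np.allclose(y, y_rec))                # True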

Exercise 10.7: Discrete Time High Pass FIR Filter


Consider the following difference equation:
y[n] = (1/2)(x[n] − x[n − 1]).   (10.58)
a) Find the impulse response.
b) Find and plot the frequency response.
c) Comment on this LTI system. What type of a filter is this?

Solution:
a) Impulse response is,

h[n] = (1/2)(δ[n] − δ[n − 1]).   (10.59)
This is a FIR filter, with only two nonzero values, at n = 0 and n = 1.
b) Frequency response is,
H(e^{jω}) = 1/2 − (1/2) e^{−jω} = j sin(ω/2) e^{−jω/2}.   (10.60)
Magnitude and phase spectrum of the above frequency response is as fol-
lows:
1. The magnitude spectrum: |H(ejω )| = | sin(ω/2)|
2. Phase spectrum: ∠H(ejω ) = tan−1 (cot(ω/2))

Figure 10.21: Magnitude and phase plots of the frequency response H(e^{jω}) = j sin(ω/2) e^{−jω/2}.

The analysis of Figure 10.21 reveals that this filter suppresses the low frequency
components and gradually passes the high frequency components of the
input signal. Therefore, it is a high pass filter.
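The magnitude response |H(e^{jω})| = |sin(ω/2)| can be confirmed numerically, for example with scipy.signal.freqz applied to the two-tap impulse response; this is only a quick check, not part of the derivation above.

import numpy as np
from scipy.signal import freqz

h = [0.5, -0.5]                    # h[n] = (delta[n] - delta[n-1]) / 2
w, H = freqz(h, worN=512)          # evaluates H(e^{jw}) on 0 <= w < pi

print(np.allclose(np.abs(H), np.abs(np.sin(w / 2))))   # True: |H| = |sin(w/2)|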

10.4.2. Band Stop Filters for Filtering Well-Defined Frequency Bandwidths
In the above examples, we have seen that a low-pass filter essentially passes
signals with a frequency lower than a selected cutoff frequency and attenu-
ates those with a frequency bandwidth higher than the cutoff frequency. Sim-
ilarly, a high-pass filter passes high frequency bandwidths, while attenuating
the low frequencies.
What if we want to filter out a well-defined frequency? The following example
explains a specific type of band-stop filter, called a notch filter.

Exercise 10.8: Example: Autoregressive Models as Notched Filters

The following LTI system is called first order autoregressive model:

y[n] − ay[n − 1] = x[n] − bx[n − 1], (10.61)


where the present value of the output depends on the present and previous
value of the input, and previous value of the output. The parameters a and
b indicate the degree of dependency of the current value of the output to the
past values of output and input, respectively.
Suppose that the system is initially at rest and we select particular values
of the parameters, as follows:

b = ejωc and a = 0.99ejωc (10.62)


a) Find the impulse response of this model.
b) Find the frequency response of this model.
c) What type of a filter is this?

Solution:
a) Impulse response of this model can be obtained by setting y[n] = h[n]
and x[n] = δ[n], and leaving h[n] in the left hand side of the equation alone,

h[n] = ah[n − 1] + δ[n] − bδ[n − 1]. (10.63)


Considering the fact that the system is initially at rest, h[n] = 0 for n < 0
and using the recursive method, we obtain the impulse response, as follows;

h[n] = an u[n] − ban−1 u[n − 1]. (10.64)


This is a FIR filter in 0 < n < 1.
b) From the definition of the frequency response of discrete time LTI sys-
tems, we obtain,

H(e^{jω}) = Σ_{n=0}^{∞} h[n] e^{−jωn} = (1 − b e^{−jω}) / (1 − a e^{−jω})   (10.65)

c) Let us replace,

b = ejωc and a = 0.99ejωc (10.66)

in the frequency response,

H(e^{jω}) = (1 − e^{jωc} e^{−jω}) / (1 − 0.99 e^{jωc} e^{−jω}) = (1 − e^{−j(ω−ωc)}) / (1 − 0.99 e^{−j(ω−ωc)}).   (10.67)
The above filter has a very interesting property: it eliminates the frequency
at ω = ωc. In this case, the numerator becomes

1 − e^{−j(ωc − ωc)} = 1 − 1 = 0.   (10.68)

When ω ≠ ωc, the numerator and denominator get very close to each other,

H(e^{jω}) = (1 − e^{−j(ω−ωc)}) / (1 − 0.99 e^{−j(ω−ωc)}) ≈ 1.   (10.69)
This filter is called a notch filter. It eliminates a very narrow band around
the frequency ω = ωc. Notch filters are special types of band-stop filters,
which attenuate a signal in a predefined frequency interval around ωc to very
low levels and pass the rest unaltered.
A notch filter is basically a band-stop filter with a narrow stopband.
When we take a look at its frequency response, we can see a very narrow "V"
shape. Hence the name "notch".
Suppose, we have a noisy audio recording. Suppose also that the actual
signal lies in the interval of f = 60 − 160 Hz and the noise occurs just around
90 Hz. Low pass or high pass filters do not offer a solution to remove the noise.
That is where the notch filters come in handy. If we set, ωc = 2π × 90, the
above notch filter eliminates the noise at 90 Hz.
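The notch behaviour can be observed by running the difference equation directly with the complex coefficients of Eq. (10.62). In the sketch below, the sampling rate, tone frequencies and record length are arbitrary choices: a complex tone at ωc plus a second tone at another frequency are fed to the filter, and only the ωc component is removed.

import numpy as np

fs = 1000                                   # sampling rate in Hz (arbitrary)
wc = 2 * np.pi * 90 / fs                    # notch at 90 Hz, in rad/sample
w1 = 2 * np.pi * 300 / fs                   # a second tone that should pass through
b = np.exp(1j * wc)                         # b = e^{j wc}   (Eq. 10.62)
a = 0.99 * np.exp(1j * wc)                  # a = 0.99 e^{j wc}

n = np.arange(4000)
x = np.exp(1j * wc * n) + np.exp(1j * w1 * n)

# Run y[n] = a*y[n-1] + x[n] - b*x[n-1], with the system initially at rest.
y = np.zeros(len(x), dtype=complex)
for k in range(len(x)):
    y[k] = a * (y[k - 1] if k > 0 else 0) + x[k] - b * (x[k - 1] if k > 0 else 0)

# Correlate with e^{j wc n} over the steady-state part: the 90 Hz tone has
# amplitude ~1 at the input but is essentially removed at the output.
probe = np.exp(-1j * wc * n[1000:])
print(abs(np.mean(x[1000:] * probe)), abs(np.mean(y[1000:] * probe)))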

Figure 10.22: The frequency response of a band-stop filter.

Learn more about designing a notch filter @ https://384book.net/v1002
WATCH

Figure 10.23: The frequency response of a notch filter.

10.5. Continuous Time Real Filters


When the input-output pairs of an LTI system are represented by continuous
time signals, we need to design a continuous time filter. Recall that the fre-
quency response of a continuous time system is not periodic with respect to
the frequency variable, ω.

Exercise 10.9: First Derivative Filter


Consider the continuous time LTI system represented by a simple derivative
operator:

x(t) → d/dt → y(t) = dx(t)/dt   (10.70)
a) Find the frequency response of this filter.
b) Find the spectral coefficients of the output signal in terms of the spectral
coefficients of the input signal.
c) Comment on the effect of the filter on the output signal. What type of
a filter is this?

Solution: a) A simple way of finding the frequency response is to find the


generalized eigenvalue of the LTI system for a continuous domain of frequency,
ω. This task is achieved by finding the output, y(t), of an LTI system, when
the input is x(t) = ejωt ,

y(t) = (jω)ejωt , (10.71)

which directly gives us the frequency response of the system as the scaling
factor of the complex exponential as follows;

H(jω) = jω. (10.72)


In order to analyze the effect of this filter on an input signal, we need to find
and plot the magnitude and phase of it.
1. Magnitude of the frequency response: |H(jω)| = |ω|
2. Phase of the frequency response: ∠H(jω) = ±π/2, depending on the sign of ω.

Figure 10.24: Magnitude and phase plots of the frequency response H(jω) = jω of the first derivative filter.

The analysis of the magnitude plot of the frequency response of the first deriva-
tive filter, in Figure 10.24 reveals that this filter linearly attenuates the low
frequency components of the signal. For example; if the input signal is a voice
recording, the filter will trim the low frequency components yielding a more
treble voice at the output.
The phase plot shows that there is a constant phase shift of magnitude π/2 at all
frequencies, which results in a constant delay between the input and output signals.

Exercise 10.10: Continuous Time Low Pass Filter


In this example, we shall explore the properties of a first order differential
equation as a filter. We shall see that a continuous time LTI system represented
by the following first order differential equation is a low-pass filter, provided
that it is initially at rest:

a dy(t)/dt + y(t) = x(t).   (10.73)
This filter attenuates the high frequency components of an input signal
at the output of the system. The degree of attenuation is determined by the
constant coefficient, a, of the differential equation.
Let us answer the following questions to investigate the behavior of the
above first order constant coefficient differential equation.
a) Find the block diagram representation of this filter.
b) Find the frequency response of this filter.
c) Find the real and imaginary part of the frequency response, can you
comment on the type of the filter by analyzing real and imaginary part of the
frequency response?
d) Find and plot the magnitude and phase of the frequency response and
comment on the type of filter.
e) Find the impulse response and unit step response of this filter.

Solution: a) In order to build such a filter we need an adder and an integrator,


as depicted in Figure 10.25.

Figure 10.25: Block diagram representation of a first order differential equation.

b) Let us first find the frequency response H(jω), as the scaling factor of the
eigenfunction x(t) = e^{jωt} of this system, as follows:

y(t) = H(jω)ejωt . (10.74)


Let us take the derivative of the both sides of the above equation,

dy(t)/dt = (jω) H(jω) e^{jωt}   (10.75)
and insert it in the differential equation,

(ajω + 1) H(jω) e^{jωt} = e^{jωt},   (10.76)

to find the frequency response,

H(jω) = 1 / (1 + ajω) = (1 − ajω) / (1 + a²ω²).   (10.77)
c) Real and imaginary part of this complex frequency response can be obtained,
as follows;
Re{H(jω)} = 1 / (1 + a²ω²)   (10.78)

and

Im{H(jω)} = −aω / (1 + a²ω²).   (10.79)
By analysing the real and imaginary part it is not easy to observe the type of
the filter.
d) Using the definitions, we compute the magnitude,

|H(jω)| = (Re{H(jω)}² + Im{H(jω)}²)^{1/2},   (10.80)

and the phase,

∠H(jω) = tan^{−1}( Im{H(jω)} / Re{H(jω)} ).   (10.81)

We compute the magnitude of the frequency response as follows:

|H(jω)| = √( 1/(1 + a²ω²)² + (aω)²/(1 + a²ω²)² ) = √( (1 + a²ω²)/(1 + a²ω²)² ).   (10.82)
Simplifying the above equation, we obtain the magnitude and the phase of the
frequency response, as follows,
|H(jω)| = 1 / √(1 + a²ω²)   (10.83)

and

∠H(jω) = tan^{−1}(−aω),   (10.84)


respectively.
The magnitude plot of the frequency response, in Figure 10.26 shows that a
first order differential equation is a low pass filter, which smoothly attenuates
the high frequency components of the input signal. The phase plot shows that
the signal is smoothly delayed as the frequency increases.
e) The impulse response can be obtained by solving the differential equation
for unit impulse input.

Figure 10.26: Magnitude and the phase plot of the frequency response of the
first order differential equation.

a dh(t)/dt + h(t) = δ(t)   (10.85)
Homogeneous solution:

hH (t) = Ke−t/a u(t) (10.86)

a ∫_{0−}^{0+} dh(t) + ∫_{0−}^{0+} h(t) dt = ∫_{0−}^{0+} δ(t) dt = 1   (10.87)

a h(0+) − a h(0−) = 1   (10.88)

We know h(0−) = 0 (initially at rest); therefore,

h(0+) = 1/a   (10.89)

h(t) = (1/a) e^{−t/a} u(t)   (10.90)
Figure 10.27: The impulse response and unit step response of a low pass filter
represented by a first order differential equation.

Note: This is an IIR filter with an exponential decay and it has no discon-
tinuities. Thus, it can be realized with satisfactory approximations for large
t.
The unit step response can be easily obtained by taking the integral of the
impulse response, as follows;
s(t) = ∫_{−∞}^{t} h(τ) dτ = (1/a) ∫_{0}^{t} e^{−τ/a} dτ = (1 − e^{−t/a}) u(t).   (10.91)
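Eq. (10.83) and Eq. (10.84) are easy to evaluate numerically. The short sketch below (the value of a and the frequency grid are arbitrary choices) confirms, in particular, that the gain drops to 1/√2 at ω = 1/a, the usual −3 dB point of a first order low pass filter.

import numpy as np

a = 0.01                                     # coefficient of the dy/dt term (arbitrary)
w = np.logspace(0, 4, 400)                   # frequencies in rad/s
H_mag = 1.0 / np.sqrt(1.0 + (a * w) ** 2)    # Eq. (10.83)
H_phase = np.arctan(-a * w)                  # Eq. (10.84)

print(H_mag[np.argmin(np.abs(w - 1 / a))])   # ~0.707: gain is 1/sqrt(2) at w = 1/a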

Exercise 10.11: Continuous Time High Pass Filter


This time, we shall investigate the continuous time LTI systems, repre-
sented by a first order differential equation, where in the right hand side we
have the derivative of the input signal, as given below;

dy(t)/dt + a y(t) = dx(t)/dt.   (10.92)
a) Find the block diagram representation of this filter.
b) Find and plot the frequency response of this filter and comment on the
type of the filter.
c) Find the impulse response and unit step response of this filter.

Answer: a) The block diagram representation of this filter consists of a dif-


ferentiator, an integrator and an adder, as shown in Figure 10.28.
b) The frequency response of this system can be directly obtained by replacing
the input with the complex exponential eigen function,

x(t) = ejωt , (10.93)


and obtain the corresponding output, as the scaled version of the eigen func-

Figure 10.28: We take the derivative of the input signal by a differentiator.


Then, we subtract the scaled output, ay(t) from the derivative of the input
to obtain the derivative of the output. The integrator returns the output, y(t)
from its derivative, ẏ(t).

tion, as follows;

y(t) = H(jω)ejωt . (10.94)


Taking the derivative of the input, ẋ(t) = (jw)ejwt and the derivative of the
output, ẏ(t) = (jw)H(jw)ejwt , and replacing them in the differential equation,
we get,

(jω + a) H(jω) = jω,

H(jω) = jω / (a + jω).   (10.95)
Next, we need to find and plot the magnitude and phase of the frequency response.
Magnitude of the frequency response: |H(jω)| = |ω| / √(ω² + a²).
Phase of the frequency response: ∠H(jω) = tan^{−1}(a/ω).

Figure 10.29: Magnitude and phase plots of the frequency response H(jω) = jω / (a + jω).

As observed from the magnitude and phase plots of the frequency response,
this filter attenuates the low frequency components of the input signal.
The degree of attenuation is determined by the constant coefficient, a, of the
differential equation.

10.6. Chapter Summary


Fourier series representation enables us to decompose a periodic signal into
its harmonically related frequency components. In this representation, spec-
tral coefficients can be considered as the measures, which shows the amount
of a particular frequency in the signal. Small spectral coefficients indicate rel-
atively less proportion of the corresponding frequency harmonics, contained in
the signal compared to large spectral coefficients. Fourier transforms take the
limit of harmonically related frequency components and define a continuous
spectrum of frequencies. Therefore, Fourier transform of a signal manifests the
frequency content of a time domain signal.
In LTI systems, the spectral coefficients of the output signal are just scaled
versions of the spectral coefficients of the periodic input signal. When the input
signal is aperiodic, the output signal is just a scaled version of the input signal
in the Fourier domain. The scaling factor is an eigenvalue of the system and is
called the frequency response. Therefore, it is possible to
design the frequency response of an LTI system to reshape the input signals.
This process is called filtering in the frequency domain. Depending on a pre-
defined goal it is possible to suppress the undesired frequency components or
accentuate some others. Design of the filters is an important area in digital
signal processing. In the context of this book, we just study special forms of
high pass, low pass, band pass and band stop filters.
Please, keep in mind that designing filters is not an easy task and it requires
elaborate methods, which are beyond the scope of this book.

Problems
1. A discrete time filter, which averages K consecutive input signals, is given
as follows:

y[n] = (1/K) Σ_{k=0}^{K−1} x[n − k].

a) Find and plot the magnitude and phase of the frequency response of
this filter.
b) What type of filter is this?
c) Find the output, when the input is x[n] = sin(0.2n) for K= 2 and
K=3.
d) What happens to the output y[n], as we increase K?

2. A continuous-time ideal low pass filter is represented by the following


frequency response:

H(jω) = 1 for |ω| < ωc , and 0 for |ω| ≥ ωc .

a) Find and plot the impulse response of this filter, for |ωc | = 50π radi-
ans/second. Indicate the cutoff frequency on the plot.
b) Find impulse response as |ωc | → ∞. What type of a filter is this?
c) Given that the input signal x(t) is periodic with coefficients ak and
fundamental period of the input is T = 1 seconds, what are the
spectral coefficients of the output in terms of ak , for ωc = 50π radi-
ans/second?
d) Suppose that y(t) = x(t) , find the Fourier series coefficients of the
input x(t) for ωc = 20π radians/second?
3. A continuous-time ideal high-pass filter is represented by the following
frequency response

H(jω) = 1 for |ω| ≥ ωc , and 0 for |ω| < ωc .

a) Find and plot the impulse response of this filter, for |ωc | = 50π radi-
ans/second. Indicate the cutoff frequency on the plot.
b) What happens to the impulse response of the high-pass filter, as we

increase the cutoff frequency to |ωc | = 100π radians/ second?
c) Given that the input signal x(t) is periodic with coefficients ak and
fundamental period of the input is T = 0.1, what are the spectral
coefficients of the output in terms of ak , for ωc = 60π radians/second?
d) Suppose that y(t) = x(t) , find the Fourier series coefficients of the
input x(t) for ωc = 40π radians/second?
4. A continuous time ideal band pass filter is represented by the following
frequency response:
Hbp(jω) = 1 for ωc1 ≤ |ω| ≤ ωc2 , and 0 otherwise.    (10.96)
a) Find and plot the impulse response of this filter, for |ωc1 | = 50π
radians/second and |ωc2 | = 100π radians/second . Indicate the cutoff
frequencies on the plot.
b) What happens to the analytical form of the impulse response of the
filter, as we increase the cutoff frequency to |ωc1 | = 150π radians/sec-
ond and |ωc2 | = 200π radians/second?
c) Given that the input signal x(t) is periodic with coefficients ak and
fundamental period of the input is T = 0.01, what are the spectral
coefficients of the output in terms of ak , for ωc = 50π radians/second?

5. A continuous-time real filter is represented by the following frequency


response:

H(jω) = (200 + 2ω²) / (−ω² − 110jω + 1000)

a) Find the differential equation, which represents this filter.


b) Find and plot the magnitude and phase of the frequency response of
this filter.
c) What type of a filter is this?
d) Find and plot the output Y (jω), when the input is x(t) = e−t u(t).
6. A continuous-time filter is represented by the following frequency re-
sponse:

H(jω) = (8 + 8jω) / ((4 + jω)(2 + jω))

a) Find the differential equation, which represents this filter.

b) Find and plot the magnitude and phase of the frequency response.
c) What type of a filter is this?
d) Find and plot the output y(t), when the input is X(jω) = e−2jω .
7. A continuous-time LTI system is represented by the following equation:

y(t) = 6x(t − 2π)

a) Find the Fourier transform of the output, when the input is x(t) =
sin(ω0 t + ϕ0 ).
b) Find and plot the magnitude and phase of the frequency response of
this system.
c) Comment on the behavior of the system.
d) Find and plot the output y(t), when the input is X(jω) = e−2jπω .
8. A discrete-time LTI system is represented by the following equation:

y[n] = 2x[n − 2].

a) Find and plot the magnitude and phase of the frequency response of
this system.
b) Find the spectral coefficients of the output y[n], when the input is
x[n] = cos(ω0 n + ϕ0 ).
c) Comment on the behavior of the system.
9. An initially at rest discrete time filter is represented by the following
difference equation:

y[n] + y[n − 1] = x[n] − x[n − 1].

a) Find and plot the magnitude and phase of the frequency response of
this system.
b) Find the impulse response of this filter.
c) Suggest an ideal high pass filter, which approximates this system.
10. An initially at rest discrete time LTI system is represented by the following
difference equation:

y[n] = 3y[n − 1] − 2y[n − 2] + x[n] − 4x[n − 1]

a) Find and plot the magnitude and phase of the frequency response of
this system.
b) What type of a filter is this system?
c) Suggest an ideal filter, which approximates the above filter. Be specific
about the bandwidth and cutoff frequencies.

11. Consider the impulse response of the low pass Gaussian filter, given below:
h(t) = (1/(√(2π) σ)) e^{−t²/(2σ²)},
where σ is the standard deviation of the filter.
a) Find the transfer function of this filter.
b) Find the equation, which represents this filter.
c) What type of a filter is this? Discuss about the effect of the parameter
σ on the structure of the filter.
d) Find the ideal filter, which approximates the above transfer function.
12. Consider the transfer function of a second-order Butterworth filter, given
below:
H(s) = 1 / ((s + e^{jπ/4})(s + e^{−jπ/4})).

a) Find the differential equation, which represents this filter.


b) Find the impulse response of this filter.
c) What type of a filter is this?
13. An initially at rest continuous time LTI system is represented by the
following differential equation:

d²y(t)/dt² + 5 dy(t)/dt + 4y(t) = x(t)

a) Find and plot the magnitude and phase of the frequency response of
this system.
b) Find and plot the impulse response.

c) What type of a filter is this system?
d) Suggest an ideal filter, which approximates the above system.

14. An initially at rest continuous time LTI system is represented by the


following differential equation:

5 d²y(t)/dt² + 2 dy(t)/dt + 5y(t) = 3x(t)
a) Find and plot the magnitude and phase of the frequency response of
this system.
b) Plot a block diagram representation of this filter.
c) What type of a filter is this system?
d) Suggest an ideal filter, which approximates the above filter. Be specific
about the bandwidth and cutoff frequencies.

15. An initially at rest LTI system is represented by the following differential


equation:

d²y(t)/dt² + 5 dy(t)/dt + 4y(t) = d²x(t)/dt² + 9 dx(t)/dt + 6x(t)

a) Find and plot the magnitude and phase of the frequency response of
this system.
b) Find the unit step response of this system.
c) What type of a filter is this system?
d) Suggest an ideal filter, which approximates the above filter. Be specific
about the bandwidth and cutoff frequencies.

16. An initially at rest discrete time LTI system is represented by the following
frequency response:

H(ejω ) = 1 + 0.5e−jω

a) Plot the magnitude and phase of the frequency response of this sys-
tem.

b) Find and plot the step response of this system.
c) Find the difference equation, which represents this system.
d) What type of a filter is this system?

17. An initially at rest discrete time LTI system is represented by the following
frequency response:

H(e^{jω}) = (1 + 2e^{−2jω}) / (1 + 0.5e^{−jω})

a) Plot the magnitude and phase of the frequency response of this sys-
tem.
b) Find and plot the impulse response of this system.
c) What type of a filter is this system?
d) Suggest an ideal filter, which approximates the above filter. Be specific
about the bandwidth and cutoff frequencies.

18. An initially at rest discrete time LTI system is represented by the following
frequency response:
H(e^{jω}) = 1 / ((1 − 0.25e^{−jω})(1 + 0.75e^{−jω}))

a) Plot the magnitude and phase of the frequency response of this sys-
tem.
b) Find and plot the impulse response of this system.
c) What type of a filter is this system?
d) Plot a block diagram representation of this system.

Chapter 11
Continuous Time Sampling

“I visualize a time, when we will be to robots what dogs are to


humans, and I’m rooting for the machines.”
Claude Shannon

Until now, we defined signals and systems in two types,


• Continuous time signals and systems,
• Discrete time signals and systems.
We represented and analyzed these signals and systems in time and fre-
quency domains separately. A system with its input and output, could be
either discrete or continuous, but it could not be both. There was a great wall
between the discrete time and continuous time signals and systems.
Motivating Question: Is it possible to break down this wall? Given a
continuous time signal, can we find its discrete version without loss of any
information? Or can we convert a discrete time signal into a continuous time
counterpart?
In order to answer the above questions, let us investigate the continuous
time function of Figure 11.1. No matter how close we select a set of discrete
points on this function, there are infinitely many continuous time functions
that pass through a finite set of discrete samples. Therefore, the intuitive answer
to the above questions would be “no!”
However, C. Shannon showed that it is possible to represent a continuous
time, band-limited function with finitely many discrete samples, under certain
conditions. This pioneering discovery, called the Sampling Theorem, opened
the door to the age of the digital revolution. In this chapter, we shall see that fitting a
unique continuous time function to a set of discrete samples is possible through
the Sampling Theorem.
Sampling Theorem bridges continuous time and discrete time functions. It
enables us to convert a continuous time function into a discrete time function
by using sampling techniques. Interestingly, we can also convert a discrete time function back to a continuous time function, without losing information, by using suitable reconstruction techniques.


Figure 11.1: Given finitely many discrete points, we can define infinitely many
functions, such as x1 (t), x2 (t), x3 (t), ..., passing through these discrete points.
Intuitively, it looks impossible for finitely many samples to represent a
continuous time signal without losing information.

Loosely speaking, given a continuous time signal, we can uniquely find its
discrete time counterpart, provided that the continuous time signal is band-
limited. Recall that a continuous time signal is band-limited, when the band-
width of its Fourier transform is finite. Similarly, given a discrete time signal,
we can uniquely find its continuous time counterpart, provided that the dis-
crete time signal is also band limited. In both cases sampling theorem provides
the necessary and sufficient conditions of sampling and reconstruction, without
losing any information about the signals and the systems.
Mathematically speaking, the correspondence between a continuous time signal
and its sampled counterpart is one-to-one and onto,

x(t) ←→ x[n], (11.1)


under the conditions of the Sampling theorem, which will be stated later.
In the following, first we provide the formal definition of sampling and
reconstruction. Then, we state the sampling theorem, similar to the original
paper of C. Shannon, published in 1948.


Figure 11.2: Block diagram representation of sampling process. A continuous


time signal x(t) is multiplied by an impulse train. The output signal is called
sampled signal, xp (t), with sampling period T .

11.1. Sampling
A continuous time signal x(t) is sampled by multiplying it with an impulse
train of period T ,
p(t) = Σ_{n=−∞}^{∞} δ(t − nT ),

as shown in Figure 11.2.


Sampled signal is defined as

xp (t) = x(t)p(t) = Σ_{n=−∞}^{∞} x(nT )δ(t − nT ),

where the Sampling period is T .


Since it uses an impulse train, this type of sampling is sometimes called
impulse train sampling.
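A small Python sketch of impulse train sampling is given below: the sampled signal is fully described by the weights x(nT), which can be drawn as a stem plot on top of the continuous time signal. The test signal and the period T are illustrative assumptions.

    import numpy as np
    import matplotlib.pyplot as plt

    T = 0.1                                  # sampling period (assumed)
    t = np.linspace(0, 2, 2000)              # dense grid standing in for continuous time
    x = np.cos(2*np.pi*2*t)                  # example band-limited signal

    n = np.arange(0, int(2/T) + 1)           # sample indices
    xn = np.cos(2*np.pi*2*n*T)               # x(nT): the weights of the impulses in xp(t)

    plt.plot(t, x, label='x(t)')
    plt.stem(n*T, xn, basefmt=' ', label='x(nT)')
    plt.legend(); plt.xlabel('t'); plt.show()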

11.2. Properties of the Sampled Signal


in Time and Frequency Domains
Let us investigate the properties of the signal,

x(t) → X(jω)

and its sampled version


xp (t) → Xp (jω),
in time and frequency domain, where the signal is band limited with bandwidth
2ωM , as depicted in Figure 11.4.
In the time domain, sampling operation involves the multiplication of the
signal x(t) with impulse train p(t), which gives the sampled signal xp (t);


Figure 11.3: Continuous time signal x(t) is multiplied by an impulse train


p(t) to obtain the sampled signal xp (t). That is, xp (t) = x(t)p(t). Note that
the sampled signal xp (t) skips all the points of the original continuous time
signal x(t) between two impulses, which repeats at every period nT for all
−∞ < n < ∞.


Figure 11.4: A band-limited signal x(t) ranges −∞ < t < ∞, in time domain.
However, since it is band-limited in the frequency domain, X(jω) = 0 outside
the interval −wM < w < wM .


xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} x(kT )δ(t − kT )    (11.2)

In the frequency domain, the above multiplication operation corresponds


to the convolution of the Fourier transform of the signal and that of the impulse
train;


Xp (jω) = (1/2π) X(jω) ∗ P (jω) = (1/2π) ∫_{−∞}^{∞} X(jθ)P (j(ω − θ))dθ.    (11.3)

Thus, impulse train sampling involves the convolution of the Fourier trans-
form of the signal with that of the impulse train, in the frequency domain.
Recall that the impulse train preserves its analytical form in both the time and
the frequency domain (see Exercise 8.9). Hence, the Fourier transform of the
impulse train is also an impulse train, given below;

p(t) = Σ_{k=−∞}^{∞} δ(t − kT ) ←→ P (jω) = (2π/T ) Σ_{k=−∞}^{∞} δ(ω − kωs ),    (11.4)

where T = 2π/ωs is the sampling period and ωs is the sampling frequency. Note


that time domain representation of impulse train repeats itself at every pe-
riod T , whereas the frequency domain impulse train repeats itself at every
period 2π/T . The amplitude of the impulse train is also scaled by 2π/T in the
frequency domain, as shown in Figure 11.5.
Inserting the Fourier transform of impulse train P (jω) into the convolution
integral of Equation 11.3, we obtain the Fourier transform of the sampled
signal, as follows;


Xp (jω) = (1/T ) Σ_{k=−∞}^{∞} X(j(ω − kωs )).    (11.5)


Figure 11.5: Impulse train in time domain (left) and its Fourier transform:
p(t) ↔ P (jω) (right). While the fundamental period of p(t) is T and its ampli-
tude is 1, in time domain; the fundamental frequency of P (jω) is ws = 2π/T
and the amplitude is 2π/T , in frequency domain.

Interestingly, the sampled signal Xp (jω) in the frequency domain is a periodic
function, which is generated by shifting the original function X(jω) by kωs for
all integer values of −∞ < k < ∞.
Figure 11.6 shows the impulse train sampling in time and frequency domain.
As it can be observed from this figure, the sampled signal in time domain
consists of the superposition of the impulse train, weighted with the amplitude
of the signal x(t).
On the other hand the sampled signal in the frequency domain is just the
repetition of the Fourier transform of the continuous time signal, at every
sampling frequency, ωs = 2π/T . Note that the original function, X(jω) is
scaled by 1/T , during the sampling in the frequency domain.
In the time domain, it looks as if we lose information about the signal by
skipping infinitely many values of the continuous time signal x(t) in between
each sampling period, T . However, we observe that the sampling process carries
all the information about the signal X(jω), in the frequency domain. Moreover,
the sampled signal creates a periodic signal in the frequency domain, where at
each period it carries the original signal, X(jω), scaled by 1/T.
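The replication in Equation (11.5) can be visualized with a few lines of Python. The triangular band-limited spectrum X(jω), the bandwidth ωM and the period T below are illustrative assumptions; with ωs > 2ωM the shifted copies do not overlap.

    import numpy as np
    import matplotlib.pyplot as plt

    wM = 10.0                                          # assumed band limit of X(jw)
    X = lambda w: np.clip(1 - np.abs(w)/wM, 0, None)   # assumed triangular spectrum
    T = 0.2                                            # sampling period (assumed)
    ws = 2*np.pi/T                                     # here ws > 2*wM, so no overlap
    w = np.linspace(-2*ws, 2*ws, 4001)

    Xp = sum(X(w - k*ws) for k in range(-5, 6)) / T    # Eq. (11.5), truncated to a few copies

    plt.plot(w, X(w), label='X(jw)')
    plt.plot(w, T*Xp, '--', label='T * Xp(jw)')
    plt.legend(); plt.xlabel('w (rad/s)'); plt.show()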
In the following exercise, we study the behavior of the sampled signal in
time and frequency domains.

Exercise 11.1: Consider an impulse train sampling;

xp (t) = x(t)p(t),

Figure 11.6: Sampled signal in time and frequency domains.

where the input signal is


x(t) = sin ω0 t
and the time domain impulse train is

p(t) = Σ_{k=−∞}^{∞} δ(t − k/3).

a) What is the sampling period T and sampling frequency ωs ?


b) Find xp (t), for ω0 = π/2.
c) Find Xp (jω), for ω0 = π/2.

Solution:
a) The sampling period of the impulse train is T = 1/3 second, whereas the
sampling frequency is ωs = 2π/T = 6π radians/second.
b) The sampled signal in time domain is,

xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} x(kT )δ(t − kT ).    (11.6)

Inserting the function x(t) = sin ω0 t for ω0 = π/2 and T = 1/3, we obtain

xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} sin(πk/6) δ(t − k/3).    (11.7)

x(t), p(t) and xp (t) are illustrated in Figure 11.7.


c) The sampled signal in transform domain is,

Xp (jω) = (1/T ) Σ_{k=−∞}^{∞} X(j(ω − kωs )).    (11.8)

Figure 11.7: Plots of x(t), p(t) and xp (t) in Exercise 11.1.


Figure 11.8: Plot of the magnitude and phase of Xp (jω) in Exercise 11.1.

Recall that Fourier transform pair for the function sin ω0 t is,

x(t) = sin(ω0 t) ←→ X(jω) = jπ(δ(ω + ω0 ) − δ(ω − ω0 )). (11.9)

Inserting the Fourier transform of x(t) = sin ω0 t in Equation 11.8, for ω0 = π/2
we obtain

Xp (jω) = 3πj Σ_{k=−∞}^{∞} (δ(ω + π/2 − 6kπ) − δ(ω − π/2 − 6kπ)),    (11.10)

which is plotted in Figure 11.8.

The above exercise shows that the sampled signal in time domain and
frequency domain both consist of impulse trains. The sampled signal in time
domain is weighted by the amplitude of the sine function at every sampling
instant k/3. Hence, the envelope of the sampled signal is the sinusoidal function
x(t) = sin(πt/2). In the frequency domain, the impulse train of the sampled signal
is weighted with the same scalar, which is 3πj at every sampling frequency,
kωs = 6kπ for all k ∈ (−∞, ∞) .

Exercise 11.2: Suppose that an impulse train sampling,

xp (t) = x(t)p(t),

generates the following sampled signal,



xp (t) = Σ_{k=−∞}^{∞} (−1)^k δ(t − 10^{−3} k),    (11.11)

where the input signal is,


x(t) = cos ω0 t, (11.12)
a) What is the sampling period and sampling frequency of the signal x(t)?
b) Find the smallest value of the angular frequency, ω0 of x(t) to get the
sampled signal xp (t).
c) Find the sampled signal Xp (jω), in the frequency domain.

Solution:
a) The sampled signal in time domain is,

xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} x(kT )δ(t − kT ) = Σ_{k=−∞}^{∞} (−1)^k δ(t − 10^{−3} k),    (11.13)

Thus, the sampling period is T = 10^{−3} seconds and the sampling frequency is
ωs = 2π/T = 2 × 10³ π radians/second.
b) Inserting the function x(t) = cos ω0 t and noting that the sampling period
is T = 10−3 seconds, we obtain,

xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} cos(10^{−3} ω0 k) δ(t − 10^{−3} k).    (11.14)

Hence, we need

x(kT ) = cos(kT ω0 ) = cos(10^{−3} ω0 k) = (−1)^k .    (11.15)

The smallest value should satisfy 10^{−3} ω0 = π. Hence, the angular frequency
of x(t) should be at least ω0 = 10³ π radians/second.
c) The sampled signal in transform domain is,


Xp (jω) = (1/T ) Σ_{k=−∞}^{∞} X(j(ω − kωs )).    (11.16)

Recall that Fourier transform pair for the function cos ω0 t is,

x(t) = cos(ω0 t) ←→ X(jω) = π(δ(ω + ω0 ) + δ(ω − ω0 )). (11.17)

Inserting the Fourier transform of x(t) in Equation 11.16, for ω0 = 10³π radians/second, we obtain,


Xp (jω) = 10³π Σ_{k=−∞}^{∞} (δ(ω + 10³π − 2×10³ kπ) + δ(ω − 10³π − 2×10³ kπ)).    (11.18)

Motivating Question: How can we reconstruct the original signal,

x(t) ←→ X(jω)

from its sampled version,


xp (t) ←→ Xp (jω)
without losing any information about the original continuous time signal x(t)?
Figure 11.6 gives us a clue about the answer to this question. Notice that
the central part of the sampled signal Xp (jω) in the interval of the bandwidth
(−wM , wM ) is nothing but the scaled version of the original signal, X(jω),
where the scale is 1/T . Therefore, all we need to do is to design a filter, which
re-scales and passes the central part of the periodic signal Xp (jω), in the
frequency domain by multiplying it with T and suppresses the rest of the
periodic signal Xp (jω), which is somewhat redundant and can be omitted.

11.3. Reconstruction
Reconstruction of the original signal,

x(t) ←→ X(jω)

from its sampled version,


xp (t) ←→ Xp (jω)
can be accomplished by designing a low pass filter in the frequency domain, so
that when we filter Xp (jω) we obtain X(jω).
Comparing the continuous time signal X(jω) and its sampled version Xp (jω),
in the frequency domain, we can easily design the following ideal low pass filter

for reconstruction:
H(jω) = T for |ω| < ωc , and 0 otherwise.    (11.19)
where ωc is the cutoff frequency of the filter and T is the sampling period of
the impulse train, p(t) = Σ_{k=−∞}^{∞} δ(t − kT ).


Figure 11.9: Reconstruction filter H(jω), in frequency domain.

The above reconstruction filter, H(jω) is just an ideal low pass filter, scaled
by the fundamental period of the impulse train, T , to recover the amplitude
of the continuous time signal X(jω), in the frequency domain.
Note: Selection of the cutoff frequency wc is crucial in designing the recon-
struction filter, H(jω). The cutoff frequency wc should fully cover the band-
width 2ωM of the signal X(jω), for a correct reconstruction.
Once, we design the reconstruction filter H(jω), with the parameters T and
ωc , all we need to do is to multiply the sampled signal with the reconstruction
filter, in the frequency domain, as follows;

Xr (jω) = Xp (jω)H(jω) = X(jω). (11.20)


Note that in the above theoretical derivation, the reconstructed signal Xr (jω)
is exactly equal to the original signal X(jω), provided that the reconstruction
filter has a cutoff frequency satisfying ωM < |ωc | < (ωs − ωM ).
Note also that, while the sampling process involves multiplying a continuous
time signal by an impulse train in the time domain, this corresponds to the
convolution of their Fourier transforms in the frequency domain. Conversely, the
reconstruction process involves multiplying the sampled signal by the ideal low
pass filter in the frequency domain, which corresponds to convolving the sampled
signal with the time domain low pass filter to recover the continuous time signal.


Figure 11.10: Reconstructed signal Xr (jω) = X(jω) and its inverse Fourier
transform.


Figure 11.11: Sampling a continuous time signal x(t) and reconstruction of the
sampled signal xp (t) to obtain the original continuous time signal, x(t).

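The sketch below continues the construction used above for Xp(jω) and applies the ideal reconstruction filter of Equation (11.19) with ωc = ωs/2; the triangular spectrum, ωM and T are again illustrative assumptions.

    import numpy as np

    wM = 10.0
    X = lambda w: np.clip(1 - np.abs(w)/wM, 0, None)   # assumed band-limited spectrum
    T = 0.2
    ws = 2*np.pi/T                                     # ws > 2*wM: no aliasing
    w = np.linspace(-2*ws, 2*ws, 4001)

    Xp = sum(X(w - k*ws) for k in range(-5, 6)) / T    # sampled spectrum, Eq. (11.5)
    H = np.where(np.abs(w) < ws/2, T, 0.0)             # reconstruction filter, Eq. (11.19)
    Xr = Xp * H                                        # Eq. (11.20)

    print(np.max(np.abs(Xr - X(w))))                   # ~0: Xr(jw) recovers X(jw)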

Exercise 11.3: Consider the sampling and reconstruction system, shown in


Figure 11.12, where the signal x(t) = cos(ω0 t + θ) is sampled by the sampling
period T = 10−3 second and H(jω) is a low pass filter,
H(jω) = 10^{−3} for |ω| < 1000π, and 0 otherwise.    (11.21)

Find the reconstructed signal, xr (t), for the following angular frequencies
and phases of the signal x(t):
a) ω0 = 500π and θ = π/4.
b) ω0 = 1000π and θ = π/2.

Solution: Fourier transform pair of the input signal is,


Figure 11.12: Reconstructed signal xr (t) is obtained by filtering the sampled


signal xp (t), in the frequency domain by an ideal low pass filter, H(jω) with
height T .

x(t) = cos(ω0 t + θ) ←→ X(jω) = π e^{jθ} δ(ω − ω0 ) + π e^{−jθ} δ(ω + ω0 ).    (11.22)

Fourier transform of the sampled signal is,



Xp (jω) = (1/T ) Σ_{k=−∞}^{∞} X(j(ω − kωs )),    (11.23)

where the sampling frequency of the input signal is ωs = 2000π.


a) For ω0 = 500π and θ = π/4: The Fourier transform of the sampled signal
is,

Xp (jω) = 1000π Σ_{k=−∞}^{∞} [e^{jθ} δ(ω − 500π − 2000kπ) + e^{−jθ} δ(ω + 500π − 2000kπ)].    (11.24)
The Fourier transform of the reconstructed signal is,

Xr (jω) = X(jω)H(jω). (11.25)


The low pass filter H(jω) suppresses the replicas for k ≠ 0 in Xp (jω). Hence,

Xr (jω) = π[ejθ δ(ω − 500π) + e−jθ δ(ω + 500π)]. (11.26)


Taking the inverse Fourier transform of Xr (jω) and inserting θ = π/4, we
obtain, xr (t) = cos(500πt + π/4), which is equal to the input signal.
b) For ω0 = 1000π and θ = π/2: The Fourier transform of the sampled signal
is,

Xp (jω) = 1000π Σ_{k=−∞}^{∞} [e^{jπ/2} δ(ω − 1000π − 2000kπ) + e^{−jπ/2} δ(ω + 1000π − 2000kπ)].    (11.27)

The Fourier transform of the reconstructed signal is,

Xr (jω) = X(jω)H(jω). (11.28)


The cutoff frequency of the low pass filter H(jω) is ωc < 1000π. Thus, it does
not cover the sampled signal. Hence,

Xr (jω) = 0 (11.29)
Note that selecting the cutoff frequency of the low pass filter is an important
design issue. In order to be able to reconstruct the original signal from the
sampled signal, the cutoff frequency of the low pass filter should be selected to
cover the bandwidth of the original signal.

11.4. Aliasing
In the above analysis and derivations of sampling and reconstruction, we made
a very major assumption: We assumed that the sampling period T = 2π/ws
is small enough, so that the sampling frequency ws becomes large enough
to generate the sampled signal Xp (jω) with non-overlapping original signal,
X(jω), as it repeats itself at every sampling frequency.
Mathematically speaking, we assumed that 2ωM < ωs . This assumption
assures that the sampled signal is made of non-overlapping original signals,
X(jω), scaled by 1/T and is repeated every ws , in the frequency domain.
Therefore, we can design a reconstruction filter with a cut-off frequency, which
can cover the entire bandwidth of the continuous time signal, X(jω) by a
reconstruction filter.
Motivating Question: What if ωs < 2ωM ?
When we enlarge the sampling period T = 2π/ωs , the sampling frequency
ωs gets smaller. If we keep enlarging the sampling period, at a certain point,
the sampling frequency gets so small that ωs < 2ωM . This process is called
undersampling.
In this case, the original signal X(jω) starts to overlap as it repeats at each
sampling frequency and the sampled signal cannot capture all the information
embedded in the original signal, as indicated in Figure 11.13. When ωs <
2ωM , some of the information about the signal is shaded under the overlaps.
Even if we design a low pass filter, which covers the entire bandwidth of the
original signal, the output of the filter does not provide the original signal.
This phenomenon is called as aliasing.
In summary, aliasing is an effect that causes an information loss of the
original signal, x(t), during the sampling process due to undersampling of
the original signal. It causes distortions or artifacts when a signal is reconstructed from its samples using an ideal low pass filter with any bandwidth.


Figure 11.13: Aliasing: If ωs < 2ωM , then the (1/T )X(jω) replicas in the sampled
signal Xp (jω) overlap with each other. The analytical form of the signal, in overlapped
frequencies is distorted and it becomes impossible to recover the original signal
from its sampled version by low pass filtering.

The reconstructed signal, xr (t), is no longer equal to the original continuous
time signal, x(t). In the following examples, we study the effect of aliasing on
the reconstructed signal.

Exercise 11.4: Suppose that we need to sample the following periodic signal:

x(t) = cos(ω0 t) ←→ X(jω) = π(δ(ω − ω0 ) + δ(ω + ω0 )) (11.30)

a) What is the maximum allowable sampling period Ts and the correspond-


ing sampling frequency ws , so that we can uniquely reconstruct the signal x(t)
from its sampled version xp (t)?
b) Suppose that the sampling frequency is ωs = (3/2)ω0 . Can you reconstruct
the original signal from its sampled version? Find a reconstructed signal, xr (t),
which approximates the original signal x(t), as much as possible.
c) Suppose that the signal x(t) represents the motion of a turning wheel.
What is the difference between the representations of signal x(t) and its recon-
structed version xr (t) when the signal is sampled and reconstructed with the
sampling rate, ωs = (3/2)ω0 ?

Solution:
a) The bandwidth of X(jω) is 2ω0 . In order to avoid aliasing, we need to obtain
non-overlapping X(jω)’s in the sampled signal Xp (jω). This requires that ws
should be slightly larger than 2w0 . Therefore, the sampling period should be
Ts < π/w0 .
Note: As the sampling period, Ts gets smaller, in time domain; the sampling
frequency, ws gets larger, in frequency domain. In other words, getting more


Figure 11.14: Cosine function with period T in time domain and its Fourier
transform of two impulses at |w0 | = 2π/T .

Figure 11.15: Sampling with a period Ts small enough (i.e., ωs ≥ 2ω0 ) to avoid aliasing.

samples in the time domain makes the original signal X(jω) fall far apart from
each other, in the sampled signal Xp (jω) of the frequency domain.
b) When the sampling frequency is ωs = (3/2)ω0 , the sampling period becomes
Ts = 4π/(3ω0 ). The sampled signal in time domain has the following form:

xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} x(kTs )δ(t − kTs ).    (11.31)

Fourier transform of the sampled signal has the following form:


Xp (jω) = (1/Ts ) Σ_{k=−∞}^{∞} X(j(ω − kωs )) = (3ω0 /4π) Σ_{k=−∞}^{∞} X(j(ω − k(3/2)ω0 )).    (11.32)

Note that, since ωs = (3/2)ω0 < 2ω0 , the original signal overlaps with its shifted
copies in the sampled signal (see Figure 11.18). Hence, there is aliasing. The original
signal cannot be reconstructed from the sampled signal.
Let’s now try to reconstruct the original signal by low pass filtering the sampled
signal in the frequency domain, using the following equation,

Xr (jw) = Xp (jw)H(jw),

Figure 11.16: Reconstruction of the signal from its samples, by low pass fil-
tering. In order to recover the original signal x(t), the cutoff frequency of the
filter H(jω) should be slightly greater than w0 .

when ωs = (3/2)ω0 .
In order to reconstruct the signal from its sampled version, we design an ideal
low pass filter,
H(jω) = Ts for |ω| < ωc , and 0 otherwise,    (11.33)
where the cutoff frequency is selected in |ωs − ω0 | < |ωc | < |ω0 |.
In this case, the reconstructed signal in the frequency domain will be,

Xr (jω) = Xp (jω)H(jω) = π(δ(ω − (ωs − ω0 )) + δ(ω + (ωs − ω0 ))).    (11.34)

Hence, the reconstructed signal in time domain is,

xr (t) = cos((ωs − ω0 )t). (11.35)


Note: Although the reconstructed signal is still a cosine function, its angular
frequency is not the same as that of the original signal,
xr (t) = cos((ωs − ω0 )t) = cos((1/2) ω0 t) ≠ x(t) = cos(ω0 t).
The angular frequency of the reconstructed signal is ωr = ω0 /2, and the period
of the reconstructed signal is Tr = 2π/ωr = 4π/ω0 = 2T.
The period of the reconstructed signal is two times longer than that of the original
signal.
c) Suppose that the original signal x(t) represents a wheel, turning with a
speed of angular frequency ω0 . The reconstructed signal xr (t) will represent a
turning wheel with speed, two times slower than that of the original speed of
the wheel.
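The effect described in this exercise is easy to reproduce numerically: with ωs = (3/2)ω0, the samples of cos(ω0 t) coincide with the samples of cos(ω0 t/2). The value of ω0 in the sketch below is an illustrative assumption.

    import numpy as np
    import matplotlib.pyplot as plt

    w0 = 2*np.pi*4.0                   # assumed original angular frequency (4 Hz)
    ws = 1.5*w0                        # undersampling: ws < 2*w0
    Ts = 2*np.pi/ws

    t = np.linspace(0, 1, 2000)
    n = np.arange(0, int(1/Ts) + 1)

    plt.plot(t, np.cos(w0*t), label='x(t) = cos(w0 t)')
    plt.plot(t, np.cos(0.5*w0*t), '--', label='x_r(t) = cos(w0 t / 2)')
    plt.stem(n*Ts, np.cos(w0*n*Ts), basefmt=' ', label='samples x(n Ts)')
    plt.legend(); plt.xlabel('t'); plt.show()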

The above analysis and example lead us to one of the most influential theorems
of the modern age: the Sampling Theorem.


Figure 11.17: Impulse train sampling with period Ts . Up: Original signal
x(t) = cos(ω0 t) and its sampled version xp (t) = Σ_n cos(ω0 nTs )δ(t − nTs ). Down:
Sampled signal Xp (jω) in the frequency domain and the low-pass reconstruction
filter H(jω), indicated in blue.

Learn more about aliasing @ https://384book.net/v1101 (WATCH)

11.5. Sampling Theorem


Let x(t) be a band-limited signal with X(jω) = 0 for |ω| > ωM . Then, x(t) is
uniquely recovered from its samples x(nT ), n = 0, ±1, ±2, ..., if ωs > 2ωM .
ωN = 2ωM is called the Nyquist rate (Harry Nyquist, 1894-1976). Nyquist
rate is the smallest possible sampling frequency to avoid aliasing.
The formal proof of this theorem is given in the original paper by Shannon.
Sampling theorem states that there are two important factors to sample
a continuous time signal x(t) and reconstruct this function from its sampled
version, xp (t) without losing any information:
1. The continuous time signal x(t) is to be band-limited with a finite band-


Figure 11.18: Impulse train sampling with sampling frequency ωs = (3/2)ω0 .
Up: Original signal x(t) = cos(ω0 t) and its sampled version
xp (t) = Σ_n cos(ω0 nTs )δ(t − nTs ), where Ts = 4π/(3ω0 ) seconds. Down: Sampled
signal Xp (jω) in the frequency domain.

2. The sampling frequency ωs should be greater than the Nyquist rate, ωN =
2ωM .
If the above two conditions are satisfied, then it is theoretically possible to
reconstruct the original signal exactly from its sampled version.
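The two conditions can be summarized in a tiny helper function; this is only a sketch and assumes the band limit ωM of the signal is already known.

    import math

    def nyquist_ok(wM, ws):
        """Return True if sampling at ws rad/s satisfies the sampling theorem."""
        wN = 2*wM                  # Nyquist rate
        return ws > wN

    print(nyquist_ok(1000*math.pi, 2500*math.pi))   # True: reconstruction is possible
    print(nyquist_ok(1000*math.pi, 1500*math.pi))   # False: aliasing occurs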

Sampling and reconstruction of a continuous time signal @ https://384book.net/i1101 (INTERACTIVE)

11.6. Sampling with Zero-Order Hold


Sampling Theorem is based on sampling a continuous time signal with an im-
pulse train, which consists of impulses of zero width and infinite
height. Although this theorem is quite elegant to prove that we can uniquely
reconstruct a continuous time signal from its sampled version, in practice, it is
not possible to realize the impulse train to sample a continuous time function.
Therefore, we need some type of approximations to sample a continuous time
signal.
One way of realizing the sampling theorem is to use zero-order hold. Loosely
speaking, zero order hold approximates a function with a set of piecewise con-
stant functions using a sequence of sampled points from the signal. This ap-
proximated function can be used to reconstruct the original signal. Let’s see
how.
Consider the piecewise constant signal x0 (t), shown in Figure 11.19. The
signal x0 (t), is quite easy to generate from x(t), by a simple switch, which
measures the value of x(t) at every period T and keeps the value constant in
between the measured values. This process is called sampling with zero
order hold.
Motivation Question: If we sample a continuous time signal x(t) by zero
order hold and obtain a piecewise constant function x0 (t), can we uniquely
reconstruct x(t) from the sampled signal x0 (t)?
Let’s answer this question by formalizing the sampling and reconstruction
processes with zero order hold and analyze the behavior of x0 (t), theoretically,
in time and frequency domains.
In time domain:
Let’s start by defining the zero order hold filter, which is an LTI system
represented by the impulse response, h0 (t) as follows:


Figure 11.19: Sampling with zero order hold: Generation of a piecewise


constant function x0 (t), from a continuous time function x(t).

h0 (t) = 1 for 0 < t < T , and 0 otherwise.    (11.36)
Suppose that we feed the impulse train sampled signal,

xp (t) = x(t)p(t) = Σ_{k=−∞}^{∞} x(kT )δ(t − kT ),

at the input of the zero order hold filter h0 (t). Then, the corresponding output
becomes,
y0 (t) = xp (t) ∗ h0 (t).
Motivation Question: What does y0 (t) look like?
Let’s evaluate the convolution of the input xp (t) and the impulse response
h0 (t):
" ∞ #
X
y0 (t) = x(kT )δ(t − kT ) ∗ h0 (t). (11.37)
k=−∞

Recall that δ(t − kT ) ∗ h0 (t) = h0 (t − kT ). Therefore,

y0 (t) = x(kT ),  for kT < t < (k + 1)T , for all k.


Thus, the output, y0 (t) of the zero order hold filter h0 (t) is just the piecewise
constant function, x0 (t);

y0 (t) = x0 (t). (11.38)


Figure 11.20: An LTI system represented by the impulse response h0 (t).

The above derivations establish the relationship between the impulse train
sampling, which outputs xp (t) and zero order hold sampling, which outputs
x0 (t), in the time domain. Formally speaking, zero order hold sampled signal
x0 (t) is the output of an LTI system represented by the zero order hold filter
h0 (t), when it is fed by the impulse train sampled signal, xp (t):

x0 (t) = xp (t) ∗ h0 (t). (11.39)
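In code, zero order hold simply keeps each sample x(nT) constant until the next sampling instant, which matplotlib's step plot shows directly. The signal and the period T below are illustrative assumptions.

    import numpy as np
    import matplotlib.pyplot as plt

    T = 0.1                                  # sampling period (assumed)
    t = np.linspace(0, 2, 2000)
    x = np.sin(2*np.pi*1.5*t)                # example signal

    nT = np.arange(0, 2, T)
    xn = np.sin(2*np.pi*1.5*nT)              # samples x(nT)

    plt.plot(t, x, label='x(t)')
    plt.step(nT, xn, where='post', label='x_0(t), zero order hold')
    plt.legend(); plt.xlabel('t'); plt.show()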


In frequency domain:
Let’s start by finding the Fourier transform of the zero order hold filter,

H0 (jω) = F {h0 (t)}.

Notice that zero order hold filter is the shifted version of the following
impulse response, indicated in Figure 11.21:
h(t) = 1 for −T /2 < t < T /2, and 0 otherwise.    (11.40)
Recall that the Fourier transform of h(t) is the following Sinc function:

H(jω) = 2 sin(ωT /2) / ω    (11.41)
We can use the time shift property to compute the Fourier transform of


Figure 11.21: The impulse response function h(t) is the shifted version of h0 (t).
In other words, h0 (t) = h(t − T /2)

h0 (t) directly from H(jω), as follows:

h0 (t) = h(t − T /2)  ←→  H0 (jω) = e^{−jωT /2} · 2 sin(ωT /2) / ω.    (11.42)
The above equation shows that sampling with zero order hold corresponds
to filtering the impulse train sampled signal Xp (jω) with a Sinc function,

H0 (jω) = (2 sin(ωT /2) / ω) e^{−jωT /2}.    (11.43)
There is a very elegant duality between the impulse train sampling and
zero-order hold sampling:
Impulse train sampling involves multiplication in the time domain and
convolution in the frequency domain:

xp (t) = x(t) · p(t) ←→ Xp (jω) = X(jω) ∗ P (jω).


Zero order hold sampling involves convolution in time domain and mul-
tiplication in the frequency domain:

x0 (t) = xp (t) ∗ h0 (t) ←→ X0 (jω) = Xp (jω) · H0 (jω).

11.7. Reconstruction with Zero-Order Hold


In this section, our goal is to design an LTI system, represented by the impulse
response hr (t), which reconstructs the original signal x(t) from its sampled
version x0 (t), in time domain. Equivalently, in the frequency domain, we need

Figure 11.22: Block diagram representation of sampling with zero order hold.


Figure 11.23: The reconstruction filter hr (t) receives the zero order hold sam-
pled signal x0 (t) = xp (t) ∗ h0 (t) and outputs xr (t) = x(t).

to find a filter Hr (jω) which outputs X(jω) for the input X0 (jω).
Motivating Question: How to define the LTI filter,

hr (t) ↔ Hr (jω),
so that the output of this filter is,

xr (t) = x(t) ↔ Xr (jω) = X(jω),

when the input is the zero order hold sampled signal?


The reconstructed signal can be obtained by the convolution of the sampled
signal x0 (t) with the reconstruction filter hr (t), in the time domain.

xr (t) = x0 (t) ∗ hr (t) = xp (t) ∗ h0 (t) ∗ hr (t)

Let’s use the convolution property in time domain, which corresponds to


the multiplication property in frequency domain:
xr (t) = xp (t) ∗ h0 (t) ∗ hr (t) ←→ Xr (jω) = Xp (jω)H0 (jω)Hr (jω).
Recall that reconstruction of x(t) ↔ X(jω) from xp (t) ↔ Xp (jω) in im-
pulse train sampling is accomplished by an ideal low pass filter:
H(jω) = T for −ωc < ω < ωc , and 0 otherwise,    (11.44)
where

X(jω) = Xp (jω)H(jω). (11.45)


Therefore, if we set,

H(jω) = H0 (jω)Hr (jω), (11.46)
then,

Xr (jω) = X(jω). (11.47)


The reconstruction filter of zero order hold we are looking for is, then,

Hr (jω) = H(jω) / H0 (jω) = (ω e^{jωT /2} / (2 sin(ωT /2))) H(jω).    (11.48)
Motivating Question: What type of a filter is Hr (jω)?
In order to investigate the effect of Hr (jω) on the sampled signal X0 (jω),
let’s plot the magnitude and phase of Hr (jω).
Let's set the cutoff frequency of Hr (jω) as ωc = ωs /2. Then, the magnitude
and the phase of the reconstruction filter become

|Hr (jω)| = ωT / (2 sin(ωT /2))   for −ωs /2 ≤ ω ≤ ωs /2    (11.49)

and

∠Hr (jω) = ωT /2   for −ωs /2 ≤ ω ≤ ωs /2,    (11.50)

respectively.


Figure 11.24: Magnitude and phase plots of the reconstruction filter, Hr (jω),
for zero order hold sampled signal. Hr (jω) is called the ideal compensation
filter.

Figure 11.24 shows that the reconstruction filter Hr (jω) for zero order hold
sampling is a low-pass filter. This filter slightly suppresses the lower frequencies
around the origin.
Let's compare the reconstruction filters H(jω) for impulse train sampling
and Hr (jω) for zero order hold sampling. Both of them are low pass filters.
However, H(jω) is an ideal low pass filter, whereas Hr (jω) also compensates
for the effect of the zero order hold.
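The compensation behaviour of Hr(jω) in Equations (11.49)-(11.50) can be plotted as follows; the sampling period T is an illustrative assumption, and 1/np.sinc is used to evaluate ωT/(2 sin(ωT/2)) safely at ω = 0.

    import numpy as np
    import matplotlib.pyplot as plt

    T = 1.0                                        # sampling period (assumed)
    ws = 2*np.pi/T
    w = np.linspace(-ws/2, ws/2, 1001)

    mag = 1.0/np.sinc(w*T/(2*np.pi))               # = wT/(2 sin(wT/2)); equals 1 at w = 0
    phase = w*T/2                                  # Eq. (11.50)

    plt.plot(w, mag, label='|Hr(jw)|')
    plt.plot(w, phase, label='phase of Hr(jw)')
    plt.legend(); plt.xlabel('w (rad/s)'); plt.show()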

11.8. Sampling and Reconstruction with


First Order Hold
So far, we have seen two types of sampling: Impulse train sampling and
zero order hold sampling. While impulse train sampling provides us theo-
retically exact reconstruction of the sampled signal, it was not easy to realize
it in practice. On the other hand, zero order hold sampling simply converts
the signal into a piecewise constant signal, which is more practical to realize
sampling and reconstruction. However, zero order hold filters still carry discon-
tinuities at the cutoff frequency, which result in generation of high frequency
noise due to the Gibbs phenomenon, while we take the Fourier transform.
Motivating Question: Is it possible to define a more practical sampling
method, which avoids the problems of impulse train and zero order sampling?
For example, can we somehow replace impulse train sampled signal

xp (t) ←→ Xp (jω)

by a realizable signal, so that instead of zero order hold we can use the ap-
proximated form of impulse train sampling?
Let’s start by analyzing the structure of the reconstruction filter h(t) ↔
H(jω), for impulse train sampling, in the time domain.
Formally speaking,

xr (t) = x(t) = xp (t) ∗ h(t) = Σ_{n=−∞}^{∞} x(nT )h(t − nT ),    (11.51)

where T is the sampling period.


If we take the inverse Fourier transform of the ideal low pass filter, H(jω),
we get the following Sinc function as the impulse response of an LTI system,
in time domain;

h(t) = (ωc T /π) · sin(ωc t)/(ωc t).    (11.52)
Inserting the impulse response into the convolution equation, xr (t) = xp (t)∗
h(t), we get,

X ωc T sin(ωc (t − nT ))
xr (t) = x(nT ) . (11.53)
n=−∞
π ωc (t − nT )
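For ωc = ωs/2 = π/T, the factor ωcT/π equals 1 and Equation (11.53) reduces to a sum of shifted sinc kernels; a minimal numerical sketch (with an illustrative 1 Hz cosine and an assumed T) is given below.

    import numpy as np
    import matplotlib.pyplot as plt

    T = 0.125                                    # sampling period (assumed)
    n = np.arange(-40, 41)
    xn = np.cos(2*np.pi*n*T)                     # samples x(nT) of a 1 Hz cosine

    t = np.linspace(-2, 2, 2001)
    # superposition of shifted sinc functions weighted by x(nT); np.sinc(u) = sin(pi*u)/(pi*u)
    xr = np.sum(xn[:, None] * np.sinc((t[None, :] - n[:, None]*T)/T), axis=0)

    plt.plot(t, np.cos(2*np.pi*t), label='x(t)')
    plt.plot(t, xr, '--', label='x_r(t) from Eq. (11.53)')
    plt.legend(); plt.xlabel('t'); plt.show()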

Note: The reconstructed signal xr (t) = x(t) is just the superposition of
shifted Sinc functions, each of which is weighted by the value of x(t) at nT ,
namely, x(nT ).
Rather than taking the superposition of the shifted Sinc functions, we sim-
ply connect the peak values of the reconstructed signal to obtain a linear
interpolation. This method of sampling is called first order hold.


Figure 11.25: Left: Reconstructed signal in time domain from impulse train
sampled signal, xp (t) which is obtained by the superposition of the shifted
Sinc functions. Right: Approximating the reconstructed signal by linear inter-
polation to obtain first order hold sampled signal.

First order hold sampling offers a practical method for sampling a continu-
ous time signal in time domain. At the first step, we find the bandwidth, 2ωM ,
of the signal x(t). Then, set the sampling rate as the Nyquist rate, which is

ωs = ωN = 2ωM .
Then, we set corresponding sampling period to,

T = 2π/ωN .
Finally, we simply create the sampled signal by connecting the selected
points of x(nT ) for all n by a straight line, in time domain.
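In Python, this linear interpolation between the samples x(nT) is exactly what numpy.interp computes; the signal and the period T below are illustrative assumptions.

    import numpy as np
    import matplotlib.pyplot as plt

    T = 0.1                                      # sampling period (assumed)
    t = np.linspace(0, 2, 2000)
    x = np.sin(2*np.pi*1.5*t)                    # example signal

    nT = np.arange(0, 2 + T, T)
    xn = np.sin(2*np.pi*1.5*nT)                  # samples x(nT)

    x_foh = np.interp(t, nT, xn)                 # straight lines between consecutive samples

    plt.plot(t, x, label='x(t)')
    plt.plot(t, x_foh, '--', label='first order hold')
    plt.plot(nT, xn, 'o', label='x(nT)')
    plt.legend(); plt.xlabel('t'); plt.show()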

Sampling and reconstruction with first-order hold @ https://384book.net/i1102 (INTERACTIVE)

11.9. Chapter Summary
Can we select a set of time points x(nT ) from a continuous time function x(t),
which represents the function x(t) without losing any information? As we all
know, a continuous time function is represented by uncountably many points
(t, x(t)), where t ∈ R is a real number. Thus, intuitively, neglecting infinitely
many points at every interval between x(nT ) and x((n ± 1)T ) reveals that we
lose a great amount of information about the function. Fortunately, this is not
a valid statement, provided that the signal in the frequency domain is band
limited, in other words, the signal has nonzero values only in a finite interval
of frequencies, in the transform domain.
In this chapter, we studied the famous Sampling Theorem proved by Claude
Shannon, which states that a continuous time band limited signal can be sam-
pled without losing any information. The original signal can be reconstructed
from its sampled version uniquely, provided that the sampling rate ωs = 2π/T
is at least twice the bandwidth, ωM , of the original signal, called the Nyquist
rate, ωN . Mathematically, for a unique sampling and reconstruction, the fol-
lowing inequality should be satisfied:

ωs = 2π/T > ωN = 2ωM .

We investigate sampling a continuous time signal to generate a more compact


signal using three types of sampling methods.
First, we study the theoretical foundation of the sampling theorem by defin-
ing the impulse train sampling. In this method, we multiply the signal by an
impulse train of period T , which is to be small enough to assure large enough
sampling frequency, so that the Nyquist rate is satisfied. We reconstruct the
original signal by simply using an ideal low pass filter with a cutoff frequency,
ωC , which covers the entire bandwidth of the original signal.
Secondly, we define zero order hold sampling, where we approximate the signal
x(t) by a piecewise constant function, between every sampling period T . The
reconstruction filter in the frequency domain is another low pass filter with the
cutoff frequency ωC which covers the entire bandwidth of the original signal.
However, this time, the low frequencies are slightly attenuated.
Both impulse train sampling and zero order hold sampling require filtering
in the frequency domain with a filter, which has discontinuities at the cutoff
frequencies. Then, we take the inverse Fourier transform of the filtered signal
to reconstruct the original signal from its sampled version. Taking the inverse
Fourier transform of a signal with discontinuities results in high frequency noise
due to the Gibbs phenomenon, described in Chapter 6. Thus, a more practical
method is required for sampling.
Finally, we introduce first order hold sampling. This method avoids filtering
in the transform domain and handles the sampling in time domain using the

550
fact that reconstruction from impulse train sampling is nothing but the superposition
of Sinc functions generated at every sampling instant nT . This superposition can
simply be approximated by connecting consecutive samples with straight lines.

Problems
1. Consider a signal x(t) whose Nyquist rate is ωN . Find the Nyquist rate
for each of the following signals in terms of ωN .
a) x(t − 2) + x(t + 2)
b) x(2t) + x(t − 2)
c) dx(t)/dt
d) x²(t)
e) x(t) cos(ω0 t)
2. Find the Nyquist rate for each of the following signals.
a) x(t) = 1 + cos(4000πt) + sin(8000πt)
b) x(t) = sin(8000πt)/(πt)
c) x(t) = (sin(16000πt)/(πt))²
3. A continuous time-signal,

x(t) = e−0.5t u(t)

is fed to an ideal low pass filter, h(t) ←→ H(jω) with cutoff frequency
wc = 2000π to obtain the output signal y(t) = h(t) ∗ x(t). Suppose that
impulse-train sampling is performed as yp (t) = y(t)p(t), where

p(t) = Σ_{n=−∞}^{∞} δ(t − nT ).

a) Find and plot the magnitude and phase of the Fourier transform of
the output, Y (jω).
b) Find the largest possible period T , which avoids aliasing.
c) Find and plot the reconstructed signal yr (t), when the sampling pe-
riod is T = 2 ∗ 10−3 and for T = 2 ∗ 10−4 .
d) Suppose that the sampling period is T = 2 ∗ 10−4 . What is the valid
interval of cut-off frequency of the low pass filter for reconstruction,

Yr (jω) = Yp (jω)Hr (jω),

which avoids aliasing.


4. A continuous time-signal,

x(t) = sin(3000πt)/(πt)
is fed to an ideal low pass filter, h(t) ←→ H(jω) with cutoff frequency

wc = 2000π to obtain the output signal y(t) = h(t) ∗ x(t). Suppose that
impulse-train sampling is performed as yp (t) = y(t)p(t), where

p(t) = Σ_{n=−∞}^{∞} δ(t − nT ).

a) Find the largest possible period T , which avoids aliasing of yp (t).


b) What is the cutoff frequency of the low pass filter Hr (jω) for recon-
struction,
Yr (jω) = Yp (jω)Hr (jω).
c) Find and plot the sampled signal yp (t), when the sampling period is
T = 2 ∗ 10−2 . Is there aliasing?
5. The input to a continuous time system has the following Fourier trans-
form:
X(jω) = (1/2)(δ(ω − ω0 ) + δ(ω + ω0 )).
2
The Fourier transform of the corresponding output is,

Y (jω) = 2X(j(ω − ω0 )).

a) Find the the output y(t).


b) Find an equivalent multiplicative system z(t), which satisfies,

y(t) = x(t)z(t).

6. A continuous time band-limited input signal x(t) is fed to a system, which


generates the following output:

y(t) = x(t) cos ω0 t.


a) Find and plot the magnitude and phase Y (jω), when
X(jω) = 1/(jω + 1) for |ω| < 100π, and 0 otherwise.

b) Find the interval of w0 , which guarantees the reconstruction of x(t)


from y(t).
c) Define a system, which reconstructs the input x(t) from the output
y(t). Be specific about the type and the cutoff frequency of the re-
construction filter.
d) Find and plot Y (jω), for ω0 = 75π. Can you reconstruct x(t) from
y(t)?

7. A continuous-time system is given in the following figure.

[Figure: x(t) is multiplied by cos(4000πt) and the product is passed through H(jω) to produce y(t).]

The frequency response of the ideal low pass filter is shown in the following
figure.
[Figure: ideal low pass filter H(jω) with cutoff frequencies ±4000π.]

a) Find the Fourier transform of the signal obtained at the output of


the multiplier, in terms of a band-limited input, X(jω):

g(t) = x(t) cos(4000πt)

b) Find the maximum bandwidth of X(jω), which assures the recon-


struction of it from G(jω).
c) Find and plot the output signal y(t) ←→ Y (jω) , when the input
signal is,
(
1 for |ω| ≤ 2000π.
X(jω) =
0 o.w.
8. A continuous time system is given in Figure P7.a, where the frequency
response of the ideal low pass filter is given in Figure P7.b. The input to
this system is,
x(t) = sin(400πt) + 2 sin(800πt).

[Figure P7.a: x(t) is multiplied by sin(100πt) and the product is passed through H(jω) to produce y(t).]
[Figure P7.b: ideal low pass filter H(jω) with cutoff frequencies ±800π.]
a) Find and plot the Fourier transform of the signal, obtained at the
output of the multiplier:

g(t) = x(t) sin(100πt)

b) Find the output y(t) of the low pass filter given in Figure 7.a.

9. Consider the following input signal,

x(t) = sin(100πt)/(πt),
which is fed to a system to create the following output

y(t) = x(t) cos(500πt)

a) Find and plot the Fourier transform of the input, x(t).


b) Find and plot the Fourier transform of the output, y(t). What is the
Bandwidth of Y (jω)?
c) Find the Nyquist rate for the impulse train sampled output yp (t).
d) Find and plot the sampled output yp (t).
10. Consider the impulse train sampling and reconstruction system of Figure
P10, which is fed by the following input:

x(t) = cos(1000πt) + cos(5000πt).

[Figure P10: x(t) is multiplied by p(t) = Σ_{n=−∞}^{∞} δ(t − nT ) and the product is passed through H(jω) to produce xr (t).]
a) Roughly plot the input signal, x(t).
b) Find and plot the sampled signal xp (t) ↔ Xp (jω), for the sampling
frequency ωs = 10, 000π.
c) Find the Nyquist rate of the sampled signal xp (t) = x(t)p(t).
d) Find the bandwidth of the low-pass filter H(jω) needed to reconstruct the
original signal from its sampled version without losing any information.
11. Consider the system given in Figure P11.a, which is fed by a band limited
signal,
X(jω) = 1 for |ω| ≤ ωm , and 0 otherwise.

[Figure P11.a: x(t) is multiplied by p(t) = Σ_{n=−∞}^{∞} (−1)^n δ(t − nT ) and the product is passed through the band pass filter G(jω) to produce y(t).]
The band pass filter G(jω) has the form given in Figure P11.b.
[Figure P11.b: the band pass filter G(jω); its band edges are located at multiples of π/T and its height is T.]
a) Find and plot the Fourier transform of xp (t) = x(t)p(t), when the
sampling period is T = π/(3ωm ).
b) Find and plot the Fourier transform of y(t) for T = π/(3ωm ).
c) Define a system, which reconstructs the input signal x(t) from the
output signal y(t).
12. Consider the system shown in Figure P12, where the frequency response
of the filter is given as follows,
H(jω) = j for ω > 0, and −j for ω < 0.

The input signal x(t), to this system is band-limited with the Fourier
transform X(jω) = 0, for |ω| > 1000π.

[Figure P12: in the upper branch, x(t) is filtered by h(t) and the result is multiplied by sin(ωc t); in the lower branch, x(t) is multiplied by cos(ωc t); the two products are added to form y(t).]

a) Find the Fourier transform of the output signal, y(t) in terms of the
Fourier transform of the input signal x(t).
b) Can we reconstruct x(t) from y(t), for ωc = 500π? Verify your answer.
13. A sampling system is illustrated in Figure P13.a,

[Figure P13.a: x(t) is multiplied by e^{−j2πt}, filtered by H(jω), and then multiplied by p(t) = Σ_{n=−∞}^{∞} δ(t − nT ) to produce yp (t).]
where the low pass filter H(jω) has the cutoff frequency, |ωc | = π, as
shown in Figure P13.b.
[Figure P13.b: ideal low pass filter H(jω) with cutoff frequencies ±ωc .]
The input to this system is shown in Figure P13.c.
[Figure P13.c: the input spectrum X(jω), with frequency marks at ±π and ±3π.]
a) Find the Fourier transform of the output of the multiplier, g(t) =
x(t)e−j2πt .
b) Find the Fourier transform of the output of the low pass filter, y(t) =
g(t) ∗ h(t).
c) Find the maximum sampling period T , which recovers the input signal
x(t) from the sampled signal yp (t).
d) Suggest a system, which recovers x(t) from yp (t).

14. An LTI system is represented by the following band limited frequency response:

H(jω) = 1 for ω ≥ 1000π, and 0 otherwise.

a) Find the output y(t) of this system for x(t) = sin(2000πt).
b) Find the maximum sampling period Tmax to recover the signal y(t)
from the sampled signal yp (t) = y(t)p(t), where
p(t) = Σ_n δ(t − nT ).

c) Find and plot the sampled signal yp (t) = y(t)p(t) for T = Tmax /2.

Chapter 12
Discrete Time Sampling and
Processing

“Information is the resolution of uncertainty.”


Claude Shannon

In Chapter 11, we introduced the classical sampling methods, based on the


Sampling Theorem of Claude Shannon. This pioneering theorem allows us to
convert a continuous time signal into a sampled version without losing any
information.

Learn more about Claude Shannon, a hero of the digital revolution @ https://384book.net/v1201 (WATCH)

We have seen three methods for sampling:


1. Impulse train sampling, where the sampled signal is represented by the
superposition of the shifted impulse functions, weighted by the amplitude
of the function.
2. Zero order hold sampling, where the sampled function is represented by
piecewise constant functions, weighted by the amplitude of the function at
each sampling interval.
3. First order hold sampling, where the sampled function is represented by
piecewise linear functions, fitted between consecutive sampling points.
Note: In all of the sampling methods, we convert a continuous time signal
into another continuous time signal, where we omit the points of the input
function between the sampling points. We showed that we can reconstruct
the original signal from its sampled version without losing any information,
provided that the signal is band limited and the sampling frequency is at least
twice as big as the bandwidth of the signal.

Suppose that we need to process a speech signal to reduce the noise or to
decompose orchestral music into its instruments by a digital computer. The
classical sampling theorem does not allow us to process a continuous time
sampled signal in a digital machine. Also, we cannot design a computer vi-
sion system, for example, to extract an object from a given image dataset by
classical sampling methods.
Motivating Question 1: Considering the fact that the sampled signal
is still in continuous time, how do we process a continuous-time signal by a
digital computer?
The answer to the above question requires conversion of a sampled signal
into a discrete time signal, where we define the function only at integer values
of time. This process is called C/D conversion (continuous to discrete time
conversion).
Motivating Question 2: After we process the discrete time signal by a
digital computer, how do we reconstruct the continuous time counterpart of
the discrete signal?
The answer to the above question requires a conversion of a discrete time
signal into a continuous time signal. This process is called D/C conversion
(discrete to continuous time conversion).

Figure 12.1: Block diagram representation of a general digital signal processing


system.

The continuous time signal xc (t) is converted to a discrete signal, xd [n] by


a C/D converter. The discrete time signal xd [n] is, then, processed by a digital
computer. Finally, the discrete time signal at the output of a digital computer
is reconstructed by a D/C converter.
In this chapter, we study the methods for C/D and D/C conversions. Then,
we will introduce simple methods to design discrete time filters from their
continuous counterparts using the sampling theorem. Finally, we introduce the
discrete version of the sampling theorem, where we sample a discrete time
signal for further compressing the data without losing information.

12.1. Time Normalization


Recall that impulse train sampling replaces a continuous time signal by a set
of weighted and shifted impulses, each of which is placed at every sampling

period, T , of the signal.
A discrete time signal exists only at integer values of time. Therefore, one
needs to convert a continuous time output of a sampled signal into a discrete
time signal. The conversion is simple: We replace the continuous time impulse
functions placed at every sampling period T by discrete time impulses, placed
at every integer value, as shown in Figure 12.2. Mathematically, given a con-
tinuous time signal x(t), let us define the value of this function at every point
nT as,
xc(nT) = x(nT), ∀n.    (12.1)
Then, we define the discrete time counterpart of this continuous time signal
as,

xd [n] = xc (nT ), ∀n. (12.2)


This process is known as time normalization. The discrete time function
xd[n] is called the time normalized counterpart of the continuous time function
x(t).


Figure 12.2: Converting the continuous time weighted impulse functions of the sampled signal into a discrete time impulse train by time normalization.

In the time normalization process, the value of x(t) is kept the same at
every t = nT value of time to generate the discrete time signal x[n]. However,

the time axis of x[n] is normalized by 1/T. The upper row of Figure 12.2 shows
the continuous time impulses placed at every sampling period T = T1 (left).
The corresponding discrete time impulses are placed at every integer value of
time; in other words, the time axis is relabeled with an integer at every period T.
In the bottom row of this figure, we double the sampling period to T = 2T1, so
the sampled continuous time signal (left) contains fewer impulses. Again,
the time axis is relabeled with an integer at every period T = 2T1.
Note: As can be observed in Figure 12.2, when we perform time normalization, the analytic form of the discrete time signal changes as we change the
sampling period T. Furthermore, the information about the sampling period
T disappears after time normalization: no matter what the sampling period of
the continuous time signal is, the time axis of the corresponding discrete time
function consists of integer values of n.
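The time normalization step is easy to emulate numerically. The sketch below is a minimal illustration, assuming an arbitrary example signal x(t) = cos(2π·5t) and an arbitrary sampling period T (neither comes from the text): the discrete sequence keeps the values x(nT), while the integer index n no longer carries T.

import numpy as np

# A minimal sketch of time normalization (assumed example signal and T,
# not taken from the text).
def xc(t):
    return np.cos(2 * np.pi * 5 * t)      # continuous time signal x(t)

T = 0.01                                   # assumed sampling period, in seconds
n = np.arange(-20, 21)                     # integer time index

# Time normalization: x_d[n] = x_c(nT); the amplitude at t = nT is kept,
# but the discrete time axis only carries the integer index n.
xd = xc(n * T)

# The sampling period T no longer appears in (n, xd); it must be stored
# separately if we want to return to continuous time later.
print(xd[:5])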

12.2. C/D Conversion: x(t) → x[n]


Our goal is to find a discrete counterpart x[n] of a continuous time signal
x(t), such that from this discrete function we can uniquely recover the original
continuous time function.
Motivating Question: How do we convert a continuous time signal into
a discrete time signal without losing any information?
Let's apply time normalization to convert a continuous time sampled signal
into a discrete time signal, as shown in Figure 12.3.

Figure 12.3: C/D Conversion: Conversion of a continuous time signal into a


discrete time signal. First, we apply continuous time impulse train sampling
to a continuous time signal x(t), to obtain xp (t). Then, we apply time normal-
ization to obtain the discrete version of xp (t), namely, xd [n].

Recall the continuous time sampling in the frequency domain,

xp (t) = x(t)p(t). (12.3)


Equivalently, we can express the sampled signal xp (t) as follows:

xp(t) = Σ_{n=−∞}^{∞} xc(nT) δ(t − nT),    (12.4)

where xc(nT) is the value of the continuous time signal x(t) at t = nT, for all values
of n. Recall also that F{δ(t − nT)} = e^{−jωnT}; then the Fourier transform of
xp(t) is,

Xp(jω) = Σ_{n=−∞}^{∞} xc(nT) e^{−jωnT}.    (12.5)

The Fourier transform of the discrete time signal is,

Xd(e^{jω}) = Σ_{n=−∞}^{∞} xd[n] e^{−jωn}.    (12.6)

Considering the fact that xc(nT) = xd[n] for all n, and comparing the
Fourier transforms Xp(jω) and Xd(e^{jω}), we obtain the relationship
between Xd(e^{jω}) and Xp(jω) as follows:

Xd(e^{jω}) = Xp(jω/T).    (12.7)

Therefore, the discrete time counterpart Xd(e^{jω}) of a continuous time signal,
generated by time normalization, is simply the frequency scaled version of the
continuous time sampled signal Xp(jω). In other words, the original signal
X(jω) is repeated at every sampling frequency ωs = 2π/T in Xp(jω), whereas
its discrete counterpart Xd(e^{jω}) is repeated at every 2π.
Replacing

Xp(jω) = (1/T) Σ_{n=−∞}^{∞} X(j(ω − nωs))

in the above equation, we obtain a relationship between the Fourier transforms
of continuous and discrete time functions as follows:

Xd(e^{jω}) = (1/T) Σ_{n=−∞}^{∞} X(j(ω − 2πn)/T).    (12.8)

Comparison of Xd(e^{jω}) and Xp(jω) in Figure 12.4 shows that the only difference between these two functions is the scale T in the frequency axis. While
the continuous time sampled signal Xp(jω) is periodic with 2π/T, the discrete
counterpart Xd(e^{jω}) is periodic with 2π.
Although x(t) is a continuous time signal and xd [n] is a discrete time signal,

Figure 12.4: Comparison of the continuous time sampled signal Xp(jω) and its discrete time counterpart Xd(e^{jω}), in the frequency domain.

their Fourier transforms are both continuous. Furthermore, the analytical form
of X(jω) is preserved in both Xd (ejω ) and Xp (jω) in the frequency domain.
However, due to time normalization, sampling period and/or frequency disap-
pears in Xd (ejω ). This reveals that we can recover the original continuous time
signal from its discrete version, provided that the sampling period or sampling
frequency is given.
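The relationship Xd(e^{jω}) = Xp(jω/T) of Equation 12.7 can also be checked numerically. The sketch below is a rough illustration, assuming the example band limited signal x(t) = sin(10πt)/(πt) (whose transform is 1 for |ω| < 10π) and a truncated DTFT sum; neither the signal nor the truncation length comes from the text.

import numpy as np

# A rough numerical check of X_d(e^{jw}) = X_p(j w/T), with an assumed
# band limited example signal x(t) = sin(10*pi*t)/(pi*t).
T = 0.05                                  # ws = 2*pi/T = 40*pi > 2*(10*pi)
n = np.arange(-200, 201)
xd = 10 * np.sinc(10 * n * T)             # x_d[n] = x(nT); np.sinc(z) = sin(pi z)/(pi z)

def dtft(x, n, w):
    # Truncated evaluation of X_d(e^{jw}) = sum_n x[n] e^{-jwn}
    return np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

w = np.linspace(-np.pi, np.pi, 7)
Xd = dtft(xd, n, w)

# Within one period, |X_d(e^{jw})| should be close to (1/T)|X(j w/T)|:
# about 20 for |w| < 10*pi*T = 0.5*pi, and about 0 elsewhere.
print(np.round(np.abs(Xd), 2))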

Exercise 12.1: Consider a continuous time band-limited signal xc (t) and


its Fourier transform, where Xc (jω) = 0 for |ω| ≥ 6000π. The discrete time
counterpart of this signal is obtained by the following C/D conversion:

xd[n] = xc((10⁻³/3) n)

a) What is the sampling frequency in Hertz and the corresponding angular frequency in radians/second?
b) Can we reconstruct xc (t) from its discrete time counterpart xd [n] without
losing information?

Solution
a) The sampling period is T = 10⁻³/3 seconds. Therefore, the sampling frequency fs is

fs = 1/T = 1/(10⁻³/3) = 3000 Hz.

The angular frequency corresponding to the sampling frequency is,

ωs = 2πfs = 2π · 3000 = 6000π rad/s

b) No, because there is aliasing: the sampling frequency is smaller than twice the highest frequency of the signal (the Nyquist rate),

ωs = 6000π < 2 × 6000π.
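As a quick numerical restatement of this exercise (using only the values given above), the aliasing condition can be checked directly:

import math

# Exercise 12.1, restated numerically (values from the text).
T = 1e-3 / 3                   # sampling period, seconds
fs = 1 / T                     # sampling frequency: 3000 Hz
ws = 2 * math.pi * fs          # 6000*pi rad/s
w_max = 6000 * math.pi         # X_c(jw) = 0 for |w| >= 6000*pi

print(round(fs), ws >= 2 * w_max)   # 3000 False -> aliasing, no exact recovery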

12.3. D/C Conversion


Suppose that after converting the continuous time signal x(t) = xc (t) into a
discrete time signal x[n] = xd [n] by the method described above, we perform
some operations on it in a digital computer, such that reducing the embed-
ded noise or filtering the signal etc. After all these digital signal processing
applications, we generate a discrete time output signal, y[n] = yd [n].
Motivating Question: How do we reconstruct a continuous time version
y(t) from the discrete time signal yd [n] ?
In D/C conversion, the continuous counterpart of the discrete time signal
is obtained in three steps:
In the first step, we convert the discrete time signal yd [n] into the con-
tinuous time sampled signal yp (t) by theoretically replacing the discrete time
shifted impulses to their continuous time counterpart at each integer value n,
as follows:

yp(t) = Σ_{n=−∞}^{∞} yd[n] δ(t − nT).    (12.9)

Recall that impulse train sampled signal yp (t) is obtained by multiplying


the continuous time signal y(t) with the impulse train function:

yp (t) = y(t)p(t), (12.10)


where

p(t) = Σ_{n=−∞}^{∞} δ(t − nT),    (12.11)

and T is the sampling period,

T ≤ 2π/ωN.
The Nyquist rate, ωN is the bandwidth of the signal, which is the minimum
allowable sampling rate to uniquely reconstruct the original signal from its
sampled counterpart.
Recall also that the discrete time Fourier transform Yd(e^{jω}) is the frequency
scaled and repeated version of the continuous time Fourier transform Y(jω),
with period 2π,

Yd(e^{jω}) = (1/T) Σ_{n=−∞}^{∞} Y(j(ω − 2πn)/T).    (12.12)

In the second step, we obtain the Fourier transform of yp(t),

Yp(jω) = (1/T) Σ_{n=−∞}^{∞} Y(j(ω − nωs)),

where ωs = 2π/T is the sampling frequency of the continuous time input signal
x(t). Then, we design a low pass filter H(jω), whose cutoff frequency ωc covers
the bandwidth of the discrete time signal Yd(e^{jω}), and low pass filter the
impulse train sampled signal yp(t) ↔ Yp(jω) in the frequency domain to
reconstruct the continuous counterpart of the discrete time signal Yd(e^{jω}), as
follows:

Yr (jω) = H(jω)Yp (jω). (12.13)


The reconstructed signal Yr(jω) = Y(jω), provided that the cutoff frequency
of the low pass filter H(jω) covers the bandwidth of the signal at the center
of Yp(jω).
Finally, we take the inverse Fourier transform of Yr(jω) to obtain the continuous time reconstructed signal, yr(t).
Note: In order to recover the continuous time signal yr (t) ↔ Yr (jw), from
its discrete time counterpart yd [n] ↔ Yd (ejw ) without losing any information,
the output signal Yd (ejw ) should also be band-limited and the sampled signal
Yp (jω) should be periodic with non-overlapping functions to avoid aliasing.
Interestingly, low pass filtering of the converted signal Yp (jω) yields an in-
terpolation of the discrete time signal Yd (ejw ), where we fill up the discrete

Figure 12.5: D/C Conversion: Recovering a continuous time signal from its
discrete counterpart.

time function yd [n] in between the sampled time instances to obtain the con-
tinuous time function y(t). For this reason, D/C conversion is sometimes called
interpolation.
Now, we know how to perform C/D and D/C conversion. Thus, we can
design the discrete time counterpart of a given continuous time LTI system
and vice versa. Let’s give some examples below.
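The three D/C steps (impulse train, low pass filtering, inverse transform) collapse, in the time domain, to band limited (sinc) interpolation of the samples. The sketch below is a minimal illustration of this idea, assuming an example 2 Hz cosine and a finite number of samples; it is not the book's implementation.

import numpy as np

# A minimal sketch of D/C conversion as band limited (sinc) interpolation:
# y_r(t) = sum_n y_d[n] * sinc((t - nT)/T).  The 2 Hz cosine, T, and the
# number of samples are assumed example values.
def dc_convert(yd, n, T, t):
    # np.sinc(z) = sin(pi z)/(pi z), the ideal low pass (cutoff pi/T, gain T)
    # reconstruction kernel when z = (t - nT)/T.
    return np.array([np.sum(yd * np.sinc((tk - n * T) / T)) for tk in t])

T = 0.1
n = np.arange(-50, 51)
yd = np.cos(2 * np.pi * 2 * n * T)        # discrete output: samples of a 2 Hz cosine

t = np.linspace(-1.0, 1.0, 201)           # dense time grid inside the sampled interval
yr = dc_convert(yd, n, T, t)

print(np.max(np.abs(yr - np.cos(2 * np.pi * 2 * t))))   # small truncation error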

12.3.1. Band-Limited Digital Differentiator


Suppose that we are given a block diagram representation of a continuous
time system, which consists of differentiators and adders. We can find the
discrete counterpart of this LTI system by replacing the differentiators with
their discrete counterparts using C/D conversion methods.
Recall the differentiation property of Fourier Transforms in continuous time
signals:

dx(t)
←→ jωX(jω). (12.14)
dt

Figure 12.6: A continuous LTI system with a differentiation operator.

Suppose that we have a subsystem in a block diagram representation, where


the input and output are related by a differentiation operator, as seen in Figure
12.6,

dx(t)
y(t) = ←→ Y (jω) = jωX(jω). (12.15)
dt
The frequency response of the differentiation subsystem is,

H(jω) = Y(jω)/X(jω) = jω,  for −∞ < ω < ∞.    (12.16)
Motivating Question: How can we find the discrete time counterpart
Hd (ejω ) of a continuous time differentiator H(jω)?
The continuous time differentiator is not band-limited. According to the
sampling theorem, it cannot be sampled without losing information about the
analytical shape of the frequency response function. One way of converting the
continuous time differentiator into its discrete time counterpart is to truncate
the frequency response beyond a predefined cutoff, creating a band limited
frequency response. Then, we can apply sampling to this band-limited function.
Let’s define the band-limited differentiator as follows:
Hc(jω) = jω for |ω| < ωc, and 0 otherwise,    (12.17)
where the cutoff frequency ωc is determined by considering the design
issues of the underlying physical phenomenon. Then, the magnitude of
Hc(jω) is,

|Hc(jω)| = |ω| for |ω| < ωc, and 0 otherwise,    (12.18)
and the phase is,

∢Hc(jω) = π/2 for 0 < ω < ωc,  −π/2 for −ωc < ω < 0,  and 0 otherwise.    (12.19)

Since the function Hc (jω) is now band-limited, we can sample it and re-
construct the original signal from its sampled version.
Now, we can apply the C/D conversion methods to find the discrete time
version Hd(e^{jω}) of the band limited differentiator from the continuous time
band limited frequency response Hc(jω).
In the first step, we need to find the Nyquist rate of the band limited
function Hc(jω), which is ωN = 2ωc. Then, we select a sampling frequency ωs ≥ ωN.
Let's set the sampling frequency to the Nyquist rate, ωs = ωN = 2ωc. The
corresponding sampling period is,

Ts = 2π/ωs = π/ωc.    (12.20)


Figure 12.7: Magnitude and phase plots of the continuous time band-limited
differentiator, Hc (jω).

Figure 12.8: Discrete time counterpart of a band limited continuous time dif-
ferentiator: Magnitude and Phase spectrum.

Finally, we apply Equation 12.8 to obtain the discrete version of the band limited differentiator as follows:

Hd(e^{jω}) = (1/Ts) Σ_{n=−∞}^{∞} Hc(j(ω − 2πn)/Ts).    (12.21)

Notice that the discrete version of the band limited differentiator is periodic
with 2π. For one full period, the magnitude and phase of Hd(e^{jω}) are,

|Hd(e^{jω})| = |ω|/Ts for |ω| < ωc, and 0 otherwise,    (12.22)

and

∢Hd(e^{jω}) = π/2 for 0 < ω < ωc,  −π/2 for −ωc < ω < 0,  and 0 otherwise,    (12.23)

respectively, and they repeat at every 2π.
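A convenient way to experiment with the band limited digital differentiator is to apply Hd(e^{jω}) = jω/T to the DFT of a finite, periodic, band limited sequence. The sketch below is an assumed example (the signal, T, and N are illustrative choices, not from the text):

import numpy as np

# Differentiating a band limited, periodic sequence by multiplying its DFT
# with H_d(e^{jw}) = j*w/T on |w| < pi.  Signal, T, and N are assumed examples.
T = 0.01
N = 256
k = 5                                      # 5 full cycles across the N samples
n = np.arange(N)
f = k / (N * T)                            # corresponding continuous frequency in Hz
x = np.sin(2 * np.pi * f * n * T)          # x_d[n] = x(nT)

w = 2 * np.pi * np.fft.fftfreq(N)          # discrete frequencies w in [-pi, pi)
y = np.real(np.fft.ifft((1j * w / T) * np.fft.fft(x)))   # apply H_d(e^{jw}) = j*w/T

y_true = 2 * np.pi * f * np.cos(2 * np.pi * f * n * T)   # exact dx/dt sampled at nT
print(np.max(np.abs(y - y_true)))          # ~1e-12: exact up to rounding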

Exercise 12.2: Consider the following input signal to a continuous time


differentiator:

x(t) = sin(πt/T) / (πt).    (12.24)
a) Find the discrete time counterpart, xd[n], of the input x(t).
b) Find the output, y(t) of the continuous time differentiator.
c) Find the discrete time output, yd [n] of the digital differentiator.
d) Find the discrete time impulse response of the digital differentiator.

Figure 12.9: A continuous time LTI system represented by H(jω) = jω (above) and its discrete counterpart H(e^{jω}) = j(ω/T), for |ω| < π (below).

Solution:
a) The discrete time counterpart of x(t) is simply obtained by time normal-
ization of its continuous counterpart as follows:

xd[n] = xc(nT) = sin(πn) / (πnT).    (12.25)
Note: The discrete time function xd[n] is indeterminate at n = 0, i.e., it takes the form 0/0 at n = 0.
In order to find the value of xd[n] as n → 0, we use L'Hopital's rule: we
take the derivative of the numerator and the denominator, which gives,

xd[0] = lim_{n→0} π cos(πn) / (πT) = 1/T.    (12.26)

For the rest of the values n ̸= 0, xd [n] = 0.
Therefore, the discrete time counterpart of the input is
xd[n] = (1/T) δ[n].    (12.27)

Figure 12.10: The discrete time counterpart xd[n] is the time normalized version of the continuous time input x(t).

b) The output of the continuous time differentiator is obtained by simply


taking the derivative of the input:

y(t) = dx(t)/dt = cos(πt/T)/(Tt) − sin(πt/T)/(πt²).    (12.28)
c) In order to find the discrete time counterpart of the continuous time
output all we need to do is time normalization:

yd[n] = yc(nT) = cos(πn)/(nT²) − sin(πn)/(πn²T²).    (12.29)
Note: The above expression is again indeterminate at n = 0. Taking the limit as n → 0 (for example, by expanding cos(πn) and sin(πn) in Taylor series around n = 0), we obtain,

yd[0] = lim_{n→0} [cos(πn)/(nT²) − sin(πn)/(πn²T²)] = 0,    (12.30)
and for n ̸= 0, the discrete time counterpart of the output becomes,

yd[n] = yc(nT) = (−1)ⁿ / (nT²).    (12.31)
d) Finally, we need to find the discrete time counterpart of the impulse re-

sponse, hd [n].
The discrete time output of a digital differentiator can be written in the
following compact form:
yd[n] = yc(nT) = (−1)ⁿ/(nT²) if n ≠ 0, and 0 otherwise.    (12.32)
Recall the discrete time convolution, which relates the input to the output
through the impulse response as follows:
yd[n] = xd[n] ∗ hd[n] = (1/T) δ[n] ∗ hd[n].    (12.33)
Then, the discrete time impulse response is,
hd[n] = (−1)ⁿ/(nT) if n ≠ 0, and 0 otherwise.    (12.34)
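The impulse response in Equation 12.34 can be checked numerically by evaluating its (truncated) discrete time Fourier transform, which should approximate jω/T inside |ω| < π. A rough sketch, with an assumed example value of T:

import numpy as np

# Numerical check that h_d[n] = (-1)^n/(nT) (n != 0), h_d[0] = 0, has the
# frequency response of the digital differentiator, H_d(e^{jw}) ~ j*w/T
# for |w| < pi.  T is an assumed example value.
T = 0.5
n = np.arange(-2000, 2001)
hd = np.zeros(n.shape, dtype=float)
nz = n != 0
hd[nz] = np.where(n[nz] % 2 == 0, 1.0, -1.0) / (n[nz] * T)

w = np.linspace(-0.9 * np.pi, 0.9 * np.pi, 5)
Hd = np.array([np.sum(hd * np.exp(-1j * wk * n)) for wk in w])   # truncated DTFT

print(np.round(Hd.imag, 3))     # approximately w/T (imaginary part of j*w/T)
print(np.round(w / T, 3))
print(np.round(Hd.real, 6))     # approximately 0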

12.3.2. Digital Time Shift


Recall the elementary signal operations on the time variable, such as time
shift, time scale and time reverse. These operations are very important for
manipulating and generating a wide range of signals. We studied the operations
on time variables in discrete time and continuous time signals separately. We
stated that the values of a discrete time signal in between the integer time
instants are undefined. Is it really so?
Motivating Question: Can we switch the elementary time operations
between the continuous time and discrete time signals?
The sampling theorem provides us with a rigorous methodology to answer the above
question: Yes! It is possible to switch the elementary time operations
between the continuous time and discrete time signals, provided that we satisfy
the following assumptions:
Assumption 1: The input signal X(jω) is band-limited with the cutoff
frequency ωc.
Assumption 2: The sampling frequency is set at least to the Nyquist rate,
in other words,
ωs ≥ ωN ,
where the Nyquist rate is
ωN = 2ωc .
Let us study a specific time operation, namely the time shift, in continuous
time and discrete time cases:
Given a continuous-time signal, which is the shifted version of a signal x(t),

y(t) = x(t − t0 ), (12.35)
the discrete time counterpart, yd [n] can be directly obtained by time nor-
malization,

yd [n] = y(nT ) = x(nT − t0 ), (12.36)


where T = 2π/ωs is the sampling period, and the sampling frequency ωs is at
least as large as the Nyquist rate.

Exercise 12.3:

Consider an LTI system represented by the following equation;

y(t) = x(t − t0 ), (12.37)


where the input signal x(t) is always band limited with the bandwidth, 2ωc .
a) Is y(t) also band limited?
b) Find the Frequency response, H(jω) of this system. Is the frequency
response band limited?
c) Find the discrete counterpart, Hd (ejω ) of the frequency response.

Solution:
a) Let us take the Fourier transform of y(t), which is the direct application
of time shift property,

y(t) = x(t − t0 ) ←→ Y (jω) = e−jωt0 X(jω). (12.38)


Since the Fourier transform of the shifted signal brings just a multiplicative
factor of the complex exponential, the bandwidth of Y (jω) is the same as that
of X(jω). Thus, the signal y(t) is band limited.
b) Since the complex exponential input is simply scaled by it, the multiplicative factor is the eigenvalue of the system and corresponds to the frequency response,

H(jω) = e^{−jωt₀}.    (12.39)


This system is not band limited. However, since we assume that the input
signal is always band limited, we can chop the frequency response at the cutoff
frequency of X(jω). Thus, the band limited frequency response becomes,
Hc(jω) = e^{−jωt₀} for |ω| < ωc, and 0 otherwise.    (12.40)
c) The discrete time counterpart of the continuous time frequency response


Figure 12.11: The magnitude and phase plots of frequency responses of the
band-limited continuous time system, represented by y(t) = x(t − t0 ) and its
discrete time counterpart, Hd (ejw ).

can be obtained by frequency normalization (frequency scaling by 1/T) of the band limited frequency
response, for one full period, as follows:

Hd(e^{jω}) = Hc(jω/T) for |ω| < ωc T, and 0 otherwise.    (12.41)

In the above equation, the sampling period is T = 2π/ωs. Considering the fact
that the discrete time counterpart of the band-limited frequency response is always
periodic with 2π, we can extend the cutoff frequency in the discrete domain to π. Since
Y(e^{jω}) = Hd(e^{jω})X(e^{jω}), this extension will not change the bandwidth of the
output signal, which is the same as that of the input signal. Hence,

Hd(e^{jω}) = e^{−jω t₀/T} for |ω| < π, and 0 otherwise,    (12.42)
and repeats at every 2π period.
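Equation 12.42 describes a delay of t0/T samples, which need not be an integer. The sketch below is a minimal, assumed example of applying this digital time shift in the frequency domain to a periodic band limited sequence (the signal, T, and t0 are illustrative choices, not from the text):

import numpy as np

# A digital time shift by t0 seconds (t0/T = 0.35 sample, a non-integer
# delay), applied as H_d(e^{jw}) = exp(-j*w*t0/T) in the frequency domain.
T = 0.01
t0 = 0.0035
N = 128
k = 3                                       # 3 full cycles across N samples (band limited)
n = np.arange(N)
x = np.cos(2 * np.pi * k * n / N)           # x_d[n] = x(nT) with x(t) = cos(2*pi*k*t/(N*T))

w = 2 * np.pi * np.fft.fftfreq(N)           # w in [-pi, pi)
y = np.real(np.fft.ifft(np.exp(-1j * w * t0 / T) * np.fft.fft(x)))

y_true = np.cos(2 * np.pi * k * (n * T - t0) / (N * T))   # x(t - t0) sampled at nT
print(np.max(np.abs(y - y_true)))           # ~1e-15 for this band limited input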

12.4. Sampling the Discrete Time Signals
Until now, we considered sampling the continuous time signals and systems
to find their discrete time counterpart. This operation enabled us to process a
continuous time signal in a digital computer and recover the processed contin-
uous time signal from its discrete time counterpart.

Motivating Question: What if we need to sample a discrete time signal?
If we can manage to extend the sampling theorem to discrete time, we can
further reduce the data used to represent signals and systems without losing
their information content. Sampling in discrete time signals is sometimes called
down-sampling because it results in a type of compression, when the signal
is represented by a sequence of numbers, in the form of a time series.

12.4.1. Discrete Time Impulse Train Sampling


Let us now extend the continuous time sampling theorem to the discrete world.
Impulse train sampling of a discrete time signal x[n] is defined as,

xp[n] = x[n]p[n] = x[n] if n is an integer multiple of N, and 0 otherwise,    (12.43)

where N is the integer sampling period and the discrete time impulse train is
defined as,

p[n] = Σ_{k=−∞}^{∞} δ[n − kN].    (12.44)

Note: The major difference between continuous time and discrete time
sampling is that the sampled signal xp[n] is set to 0 at the time instants
between the sampling points, i.e., between consecutive integer multiples of N.

Figure 12.12: Impulse train sampling of a discrete signal x[n].

Motivating Question: What is the output of the discrete time impulse


train sampling xp [n] ↔ Xp (ejω )?
Although the discrete time impulse train sampling is an extension of the
continuous time counterpart, there are three major restrictions, which shape
the analytical form of xp [n] ↔ Xp (ejω ) :
1. Sampling period N should be integer valued and sampling is applied at
integer multiples of N .

2. Since the Fourier transform of a discrete time function is periodic with
2π, the cutoff frequency of the band limited signal cannot exceed 2π. Hence,
the bandwidth of the discrete time signal should be small enough, i.e.,

ωM < 2π. (12.45)


This brings an upper limit to the bandwidth of the discrete time signal to
be sampled.
3. Given the band-limited signal with cutoff frequency ωM, the sampling
frequency ωs = 2π/N should be at least as big as the Nyquist frequency ωN:

ωs ≥ ωN = 2ωM . (12.46)
Otherwise, if ωs < 2ωM, then there are overlaps in Xp(e^{jω}), which results
in aliasing.
The above constraints on the sampling frequency ωs, the Nyquist frequency
ωN, and the cutoff frequency ωM of the signal shape the analytical form of the
sampled signal xp[n] ↔ Xp(e^{jω}).
Let us now investigate the effect of the above constraints on the sampling
of the discrete time signal x[n] ←→ X(e^{jω}), and on the structure of the sampled
signal, in both the time and frequency domains.
Recall that the Fourier transform pair of the discrete time impulse train was,

p[n] = Σ_{k=−∞}^{∞} δ[n − kN] ←→ P(e^{jω}) = (2π/N) Σ_{k=−∞}^{∞} δ(ω − kωs).    (12.47)

The Fourier transform of the sampled signal xp[n] = x[n]p[n] is,

Xp(e^{jω}) = (1/2π) X(e^{jω}) ∗ P(e^{jω}) = (1/N) Σ_{k=−∞}^{∞} X(e^{j(ω−kωs)}).    (12.48)

Note: There are two types of periodicity in both P(e^{jω}) and Xp(e^{jω}).
The first periodicity comes from the nature of the discrete time Fourier transform,
which is 2π. The second periodicity comes from the repetition of the spectrum
at each multiple of the sampling frequency, kωs for k = 0, ±1, ±2, .... The
relationship between these two periodicities affects the result of discrete time
sampling.
Note: We assume that

x[n] ←→ X(ejω )

Figure 12.13: Fourier transform P(e^{jω}) of the discrete time impulse train p[n] = Σ_k δ[n − kN] (above), and Fourier transform Xp(e^{jω}) of the sampled signal xp[n] (below).


Figure 12.14: Time and frequency domain representations of the discrete time
signal and its sampled version. We delete the values of x[n] in between each
sampling period N . Hence, the sampled signal in time domain becomes xp [n] =
0 in between each sampling period kN and (k + 1)N .

is a band-limited signal and that its bandwidth 2ωM is small enough so that, when
we sample the signal with a sampling frequency ωs, the repeated replicas fit
within one 2π period without any overlap.
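Discrete time impulse train sampling, Equation 12.43, is straightforward to emulate. A minimal sketch, assuming an example sequence x[n] and sampling period N = 4 (neither from the text):

import numpy as np

# A minimal sketch of discrete time impulse train sampling, Equation 12.43.
N = 4
n = np.arange(0, 32)
x = np.cos(2 * np.pi * n / 16)              # assumed example band limited sequence

p = (n % N == 0).astype(float)              # p[n] = sum_k delta[n - kN]
xp = x * p                                  # x_p[n]: equals x[n] at multiples of N,
                                            # and 0 in between
print(xp[:9])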

Exercise 12.4: A discrete time signal y[n] is obtained by impulse train sam-
pling of a signal x[n], as follows;

y[n] = Σ_{k=−∞}^{∞} x[n] δ[n − kN],

where the band limited signal x[n] has the Fourier transform,
X(e^{jω}) = 0 for π/4 ≤ |ω| ≤ π.
Find the largest value for the sampling period N , satisfying the Nyquist
rate.

Solution: The Fourier transform of the output signal, Y(e^{jω}), consists of periodic repetitions of X(e^{jω}) at every ωs = 2π/N. The Nyquist rate is achieved when

ωs = 2ωc ,
where ωc is the cutoff frequency of X(e^{jω}). Since the Fourier transform of the
input is given as,

X(e^{jω}) = 0 for π/4 ≤ |ω| ≤ π,

which is periodic with 2π, the nonzero part of the input is,

X(e^{jω}) ≠ 0 for −π/4 < ω < π/4,


with the the cutoff frequency, |ωc | = π/4. According to the sampling theorem,
the sampling frequency ωs = 2π N , should satisfy,

2π π
≥2 .
N 4
Hence, the Nyquist rate for the largest sampling interval is achieved at N = 4.

12.5. Reconstruction of a Discrete Time Signal from its Sampled Counterpart
Practically speaking, down-sampling deletes the values of the discrete time
function between each sampling period kN and (k+1)N . Instead of the deleted
values we insert 0s in the sampled signal xp [n]. In order to recover the original
signal x[n] from the sampled signal xp [n], we would like to fill up the zero
values with the original values of x[n].
Mathematically speaking, given the sampled signal,

xp [n] = x[n]p[n],

where p[n] is an impulse train with sampling period N, our goal is to reconstruct
the original discrete time signal x[n] without losing any information. This
reconstruction process is sometimes called up-sampling.
We assume that the signal is band limited with the cutoff frequency ωM
and that it is properly sampled with the sampling frequency ωs = 2π/N to satisfy the
Nyquist rate,

ωs = 2ωM . (12.49)
If these constraints are satisfied, then we can design a reconstruction filter
H(ejω ), which recovers the original signal x[n] = xr [n] from its sampled version
xp [n] without losing any information.

Figure 12.15: Reconstruction filter for discrete time sampling.

When there is no aliasing, ωM ≤ ωc ≤ ωs − ωM , then

xr [kN ] = x[kN ]. (12.50)


Recall that the impulse response of the reconstruction filter can be obtained
by taking the inverse Fourier transform of the frequency response:

h[n] = (Nωc/π) · sin(ωc n)/(ωc n) ←→ H(e^{jω}).    (12.51)

Figure 12.16: Frequency response H(e^{jω}) of the reconstruction filter for discrete time sampling. This is a low pass filter which recovers the original signal by Xr(e^{jω}) = X(e^{jω}) = Xp(e^{jω}) H(e^{jω}).
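The reconstruction filter can also be tried out numerically: sample a sequence with period N, convolve the result with a truncated version of h[n], and compare with the original. The sketch below uses an assumed example sequence and a finite filter length, so the recovery is only approximate:

import numpy as np

# A rough sketch of reconstruction from a discrete time sampled sequence:
# low pass filter x_p[n] with gain N and cutoff wc = pi/N, so that
# h[n] = (N*wc/pi) * sin(wc*n)/(wc*n).  Sequence and lengths are assumed
# example values; the truncated filter makes the recovery approximate.
N = 4
wc = np.pi / N
n = np.arange(-400, 401)
x = np.cos(2 * np.pi * n / 32)              # cutoff 2*pi/32, well below wc
xp = np.where(n % N == 0, x, 0.0)           # impulse train sampled sequence

m = np.arange(-200, 201)
h = (N * wc / np.pi) * np.sinc(wc * m / np.pi)   # = N*sin(wc*m)/(pi*m), h[0] = N*wc/pi
xr = np.convolve(xp, h, mode="same")        # x_r[n] = (x_p * h)[n]

inner = slice(200, 601)                     # compare away from the boundary effects
print(np.max(np.abs(xr[inner] - x[inner]))) # small truncation error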

Exercise 12.5: Consider the following discrete time signal,


X(e^{jω}) = e^{−jω} for |ω| ≤ 2π/9, and 0 otherwise.    (12.52)
a) What is the bandwidth of this signal?
b) Find the largest sampling period N to down-sample the signal without
losing any information.

Figure 12.17: Fourier transform of a discrete time band-limited signal, where the bandwidth is 4π/9.

Solution: a) The bandwidth of the signal is BW = 4π/9.

b) The largest sampling period corresponds to the smallest allowable sampling frequency, which is the Nyquist rate:

ωs = 4π/9.

Hence, the sampling period should satisfy,

N ≤ 2π/ωs = 9/2.

The maximum integer which satisfies the above inequality is Nmax = 4.

12.6. Discrete-Time Decimation and Interpolation
When we represent a down-sampled signal in a sequence of numbers in the
form of time series, the array of numbers consists of zero values in between the
integer multiples of the sampling period N . This representation carries a lot of
redundancy during signal processing. One way to further compress the signal
is to skip these repeated 0 values and keep only the values of the signal at n = kN for all k. The
operation of discarding the repeated zero values of the sampled signal is called
decimation.
Let’s investigate the analytical structure of the decimation operation in
time and frequency domains:
Recall discrete-time sampling: the Fourier transform of the down-sampled signal consists of repeated copies of the Fourier transform of the original signal, as follows:

Xp(e^{jω}) = (1/N) Σ_{k=−∞}^{∞} X(e^{j(ω−kωs)}).    (12.53)

If the signal is sampled above the Nyquist frequency, we obtain a plot
similar to Figure 12.18.


Figure 12.18: When a discrete time signal x[n] is down-sampled, we only keep
the values of this signal at integer multiples of N . In the frequency domain, this
process corresponds to inserting the analytical form of the original function at
every ωs .

Let's define the decimated discrete time signal xde[n] as the sequence of the
selected values of the sampled signal. Formally speaking, the decimated signal
and its discrete time Fourier transform are,

xde[n] = xp[nN] ←→ Xde(e^{jω}) = Σ_{n=−∞}^{∞} xde[n] e^{−jωn} = Σ_{n=−∞}^{∞} xp[nN] e^{−jωn}.    (12.54)
Therefore, the relationship between the decimated signal Xde(e^{jω}) and the
sampled signal Xp(e^{jω}) in the frequency domain can be written as follows:

Xde (ejω ) = Xp (ejω/N ) (12.55)


Note: The only difference between the decimated signal and the down-sampled
signal is the scale of the frequency axis. When we decimate the sampled
signal, all we need to do is scale the frequency axis by 1/N in the frequency
domain. In order to recover the original signal x[n] ↔ X(e^{jω}) from its
decimated version xde[n] ↔ Xde(e^{jω}), we need to first rescale the frequency axis
by N, and then apply a low pass filter H(e^{jω}), with the cutoff frequency of
the original signal x[n], to the rescaled signal.

Figure 12.19: Fourier transform of the decimated signal Xde(e^{jω}).
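Decimation and interpolation are easy to prototype with array slicing and the reconstruction filter of Section 12.5. The sketch below is an assumed example (the sequence, N, and filter length are illustrative choices, not from the text):

import numpy as np

# A brief sketch of decimation and interpolation for an assumed example
# sequence: x_de[n] = x_p[nN] keeps every N-th sample; interpolation
# re-inserts the zeros and low pass filters with cutoff pi/N and gain N.
N = 3
n = np.arange(0, 600)
x = np.cos(2 * np.pi * n / 30)              # slow enough to be decimated by 3

xde = x[::N]                                # decimation: x[0], x[N], x[2N], ...

xp = np.zeros_like(x)                       # up-sampling: zeros between samples
xp[::N] = xde

m = np.arange(-150, 151)
wc = np.pi / N
h = (N * wc / np.pi) * np.sinc(wc * m / np.pi)   # truncated reconstruction filter
xr = np.convolve(xp, h, mode="same")

inner = slice(200, 400)
print(np.max(np.abs(xr[inner] - x[inner]))) # small away from the edges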

Sampling and reconstruction of a discrete time signal @ https://384book.net/i1201
INTERACTIVE

12.7. Chapter Summary


In this era, where almost all technology relies on digital computing, how
can we ensure that the information content of signals and systems is preserved? How
can we efficiently store the signals in the minimum space? How can we retrieve
and transmit them without losing significant information? How can we process
them for a pre-defined goal so that we can extract the desired information from
the signals and design digital systems?
Discrete time sampling and decimation opens a door to answer the above
questions. In this chapter, we extended the continuous time sampling theorem
of C. Shannon into discrete time signals and systems. We studied the methods
for down-sampling the discrete time signals. We studied the constraints im-
posed by the discrete nature of signals in time and frequency domains: First
of all, the sampling period is restricted to be integer valued. Secondly, as we
sample the signal with period N ≥ 1, we define a new signal, where we insert
0 values in between the integer multiples of the sampling period. Finally, we
restricted the signal to be band-limited with the cutoff frequency ωc < 2π.
We showed that a discrete time signal can be down-sampled with the sam-
pling frequency ωs , which is at least equal to the Nyquist rate ωN = 2ωc ,
without losing any information.
Reconstruction of a sampled signal is very similar to that of the continuous
time signals. All we need to do is to design a low pass filter H(ejω ) with the
cutoff frequency ωc of the band limited signal x[n] ←→ X(ejω ).

Problems
1. Consider an A/D converter system, shown in Figure P1, where the Fourier
transform of the continuous time input is,
X(jω) = |ω| for |ω| < 10π, and 0 otherwise.

Figure P1

a) Find and plot the Fourier transform of the sampled signal Xp (jω),
when the sampling periods are T1 = 0.05 seconds and T2 = 0.1 sec-
onds.
b) Find and plot the Fourier transform of the discrete time signal Xd (ejω )
obtained at the output of the converter, for T = 0.05 seconds and
T = 0.1 seconds.
2. Consider a D/C converter shown in Figure P2, where the signal,

yd [n] = xd [n] ∗ hd [n]

is obtained at the output of an LTI system, when the input is,

xd[n] = sin(10⁴πn) / (πn)

and the impulse response is,

hd[n] = sin((π/4)n) / (πn).

Figure P2
a) Find the Fourier transform of the output Yd (ejω ).
b) Find the continuous counterpart yp (t) ↔ Yp (jω), obtained at the
output of the D/C converter for T = 0.05 milliseconds.

c) Find the Fourier transform of the continuous time output signal

Y (jω) = Yp (jω)H(jω),

when the cutoff frequency of the low pass filter H(jω) is ωc = 2π/T .
3. A discrete time band-limited signal x[n], defined by the following Fourier
transform:

X(e^{jω}) = A for |ω| < π/4, and 0 otherwise,
and repeats at every 2π period. We down-sample the signal x[n] to gen-
erate a signal g[n] , as follows:
g[n] = x[n/2] for n = 0, ±2, ±4, ±6, ..., and 0 otherwise.
a) Find and plot x[n] and g[n]. Comment on the differences and simi-
larities between the two signals.
b) Find and plot G(ejω ). What is the effect of down sampling on the
signal in terms of the bandwidth?

4. A discrete time LTI system gives the following output in the interval −π < ω ≤ π:

Y(e^{jω}) = X(e^{j(ω−π/3)}) for π/3 < ω ≤ π/2,  X(e^{j(ω+π/3)}) for −π/2 < ω ≤ −π/3,  and 0 otherwise,

to the input given below,


X(e^{jω}) = e^{−jω} for |ω| ≤ π/6, and 0 otherwise,
and repeats at every 2π.

(a) Find and plot the output signal Y(e^{jω}). What are the cutoff frequencies
and the bandwidth of the output Y(e^{jω})?
(b) Find and plot the frequency response of this system. What is the
bandwidth of the frequency response?
(c) Find the continuous time counterpart Hc (jω) of the discrete time
frequency response, Hd (ejω ), for the sampling periods T = 0.1 and
T = 1 seconds.

5. A continuous time signal xc (t) is band limited with the Fourier transform
Xc (jω) = 0 for |ω| ≥ 3000π radian/second . The discrete time counterpart
of this signal is,

xd[n] = xc((10⁻³/2) n).

Determine, which of the following constraints is satisfied by Xc (jω).


a) Xd (ejω ) is imaginary


b) Xd(e^{jω}) = 0 for |ω| ≥ π/2

c) Xd(e^{jω}) = Xd(e^{j(ω−π/2)})

6. A discrete time band limited signal xd[n] has the Fourier transform Xd(e^{jω}) = 0 for 2π/5 ≤ |ω| < π. Its continuous time counterpart is,

xc(t) = 2·10⁻⁴ Σ_{k=−∞}^{∞} sin(10⁴π(t − k·10⁻⁴)) / (πt − 2π·10⁻⁴ k).

a) Find the discrete time counterpart, xd [n] of this signal.


b) Find the bandwidth of Xc (jω).
c) Find the bandwidth of Xd (ejω ).
7. A continuous time LTI system is represented by the following equation:

 
y(t) = (d/dt) x(t − 1/3).

a) Find the frequency response of this system.

b) Find the discrete time counterpart of the frequency response by defin-
ing a band-limited differentiator, where the cutoff frequency is ωc =
2π/3.
c) Find the discrete time impulse response h[n] of this system.
8. A discrete time band stop filter is represented by the following frequency
response:

H(e^{jω}) = 1 for |ω| ≤ π/6 and for 5π/6 ≤ |ω| ≤ π, and 0 otherwise.

a) Find the impulse response, h[n] of this system


b) Find the frequency response of a decimated system with impulse re-
sponse h[3n].
9. Consider a system, which decimates an input x[n] to generate the output
y[n], given below:

y[n] = x[7n].

a) Find and plot the output y[n] of this system, when the input is,

x[n] = sin((π/7)n) / (πn).
b) Find a continuous counterpart of y[n].
c) Find the output g[n] = h[n] ∗ y[n], where h[n] ↔ H(ejω ) is band stop
filter with the following frequency response:

H(e^{jω}) = 1 for |ω| ≤ π/2 and for 3π/2 ≤ |ω| ≤ π, and 0 otherwise.

10. A band limited continuous time signal x(t) with the Fourier transform
X(jω) = 0 for |ω| > 2500π, is sampled by an impulse train, as follows:


xp(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)

where the sampling period is T = 10⁻³.

a) Find the Nyquist rate of this signal.

b) Can we find a discrete counterpart x[n] of this signal by C/D conver-
sion method such that we can recover the original continuous time
signal x(t)? Explain your answer.

11. A signal x(t) with Fourier transform X(jω) undergoes impulse-train sam-
pling to generate the following:


X
xp (t) = x(nT )δ(t − nT )
n=−∞

where T = 10−4 .
If it is known that X(jω) = 0 for |ω| > 7500π, does the sampling theorem
guarantee that x(t) can be recovered exactly from xp (t)?

12. Consider the following signal obtained by down sampling of an input


signal x[n];
g[n] = x[n/3] for n = 0, ±3, ±6, ±9, ..., and 0 otherwise,

where the input is,

x[n] = sin((4π/5)n) / (πn).
(a) Find and plot G(ejω ).
(b) Find the low-pass filtered output Y (ejω ) = G(ejω )H(ejω ), where
H(e^{jω}) = 1 for |ω| ≤ π/5, and 0 otherwise.

(c) Find the decimated output z[n] = y[5n].

