UG-Signals and Systems 17-05-2024
Author
Prof. Sanjay L. Nalbalwar
Dean (Academics - FoE&T),
Department of Electronics and Telecommunication Engineering (E&TC),
Dr. Babasaheb Ambedkar Technological University,
Lonere, Raigad, Maharashtra.
Reviewed by
Prof. Satyabrata Jit
Professor (HAG),
Department of Electronics Engineering, IIT(BHU), Varanasi
BOOK AUTHOR DETAILS
Prof. Sanjay L. Nalbalwar, Professor and Dean, Dept. of Electronics and Telecommunication Engineering
(E&TC), Dr. Babasaheb Ambedkar Technological University, Lonere, Raigad, Maharashtra.
Email ID: [email protected]
Prof. Satyabrata Jit, Professor (HAG), Department of Electronics Engineering IIT(BHU), Varanasi.
Email ID: [email protected]
1. Dr. Ramesh Unnikrishnan, Advisor-II, Training and Learning Bureau, All India Council for Technical
Education (AICTE), New Delhi, India.
Email ID: [email protected]
Phone Number: 011-29581215
2. Dr. Sunil Luthra, Director, Training and Learning Bureau, All India Council for Technical Education
(AICTE), New Delhi, India.
Email ID: [email protected]
Phone Number: 011-29581210
May, 2024
© All India Council for Technical Education (AICTE)
ISBN : 978-93-6027-309-5
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any
other means, without permission in writing from the All India Council for Technical Education
(AICTE).
Further information about All India Council for Technical Education (AICTE) courses may be obtained
from the Council Office at Nelson Mandela Marg, Vasant Kunj, New Delhi-110070.
Printed and published by All India Council for Technical Education (AICTE), New Delhi.
Disclaimer: The website links provided by the author in this book are placed for informational, educational
and reference purposes only. The Publisher does not endorse these website links or the views of the speakers /
content of the said weblinks. In case of any dispute, all legal matters are to be settled under Delhi Jurisdiction
only.
ACKNOWLEDGEMENT
I am deeply grateful to the authorities of AICTE, with special acknowledgment to
Prof. T. G. Sitharam, Chairman; Dr. Abhay Jere, Vice-Chairman; Prof. Rajive Kumar,
Member-Secretary; and Dr. Ramesh Unnikrishnan, Advisor-II and Dr. Sunil Luthra, Director,
Training and Learning Bureau. Their collective vision and planning were instrumental in the
publication of the book on Signals and Systems. Sincere appreciation is extended to
Prof. Satyabrata Jit, Professor, Department of Electronics Engineering IIT(BHU), Varanasi.
As the reviewer of this book, Prof. Satyabrata Jit has made invaluable contributions, ensuring
the content is not only student-friendly but also aesthetically pleasing and well-organized.
I extend my heartfelt thanks to Hon’ble Vice Chancellor Prof. Dr. Karbhari Kale for his
motivational and continuous encouragement throughout the process. My gratitude also goes
to Prof. Dr. A W. Kiwelekar, Dr. B. F. Jogi, Dr. S. M. Pore, Dr. Brijesh Iyer and other senior
colleagues in the University for their motivation and unwavering institutional support which
was crucial in the successful completion of this book. I extend my deepest appreciation to Dr.
Snehal Gaikwad, Dr. Pallavi Ingale and Prof. Prashant Mahajan for their insightful feedback
and constructive criticism that has greatly enriched the quality of this work. Their
contributions have been indispensable in refining the ideas and arguments presented herein.
Special acknowledgment goes to my wife, daughter and son for their invaluable support, their
patience, and for excusing my absence from family time during weekends. I wish to thank my parents,
whose nurturing and support have been my pillars through the significant events of my life.
This book is an outcome of various suggestions of AICTE members, experts and authors who
shared their opinions and thoughts on further developing engineering education in our country.
Acknowledgements are due to the contributors and workers in this field whose
published books, review articles, papers, photographs, footnotes, references and other
valuable information enriched me at the time of writing this book.
PREFACE
Welcome to the world of signals and systems – a cornerstone of modern engineering that lies
at the heart of countless technological innovations shaping our world today. From
telecommunications to medical imaging, from audio processing to control systems, the
principles of signals and systems permeate virtually every facet of our lives.
This book serves as a comprehensive guide to understanding the fundamental concepts,
theories, and applications of signals and systems. Whether you are a student embarking on
your academic journey in engineering or a seasoned professional seeking to deepen your
understanding, this text aims to provide you with the knowledge and tools necessary to
navigate this fascinating field.
Throughout these pages, you will embark on a journey that explores the mathematics, physics,
and engineering principles that underpin signals and systems. From the basic properties of
signals to the intricacies of system analysis and design, each chapter is carefully crafted to
build upon the previous one, offering a structured approach to learning that facilitates
comprehension and retention.
Furthermore, this book emphasizes the practical relevance of signals and systems by
incorporating numerous real-world examples and applications. By grounding theoretical
concepts in practical scenarios, readers can gain a deeper appreciation for the significance of
signals and systems in solving real-world engineering challenges. Moreover, this text is
designed to be accessible to readers with a range of backgrounds and expertise levels. Whether
you are encountering signals and systems for the first time or seeking to deepen your
understanding of advanced topics, this book strives to provide clear explanations, illustrative
examples, and helpful insights to aid your learning journey.
As an author, my goal is to provide a valuable resource that inspires curiosity, fosters
understanding, and equips readers with the knowledge and skills needed to tackle the
complexities of signals and systems. I hope that this book serves as a trusted companion on
your exploration of this captivating subject and empowers you to make meaningful
contributions to the ever-evolving landscape of engineering.
Thank you for embarking on this journey with me.
OUTCOME BASED EDUCATION
For the implementation of outcome based education, the first requirement is to develop an
outcome based curriculum and incorporate outcome based assessment into the education
system. Through outcome based assessments, evaluators will be able to judge
whether the students have achieved the outlined standard, specific and measurable outcomes.
With the proper incorporation of outcome based education, there will be a definite
commitment to achieving a minimum standard for all learners without giving up on any learner at any level.
At the end of the programme running with the aid of outcome based education, a student will
be able to arrive at the following outcomes:
PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
PO9. Individual and team work: Function effectively as an individual, and as a member
or leader in diverse teams, and in multidisciplinary settings.
PO10. Communication: Communicate effectively on complex engineering activities with
the engineering community and with society at large, such as, being able to
comprehend and write effective reports and design documentation, make effective
presentations, and give and receive clear instructions.
PO11. Project management and finance: Demonstrate knowledge and understanding of
the engineering and management principles and apply these to one’s own work, as a
member and leader in a team, to manage projects and in multidisciplinary
environments.
PO12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of
technological change.
COURSE OUTCOMES
By the end of the course the students are expected to learn:
CO-1: Analyse the spectral characteristics of continuous-time periodic and aperiodic
signals.
CO-2: Analyse LTI systems in the time domain.
CO-3: Analyse signals using Fourier series and Fourier transform.
CO-4: Apply DFT to analyse discrete-time systems.
CO-5: Analyse LTI systems using Z-Transform.
CO-6: Understand sampling theorem and its implications.
Mapping of Course Outcomes (CO) with Programme Outcomes (PO):

      PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CO-1   3   3   2   1   1   1   1   2   1   1    1    3
CO-2   3   3   3   1   2   1   1   2   1   1    1    3
CO-3   3   3   3   3   2   3   3   2   1   3    1    3
CO-4   3   3   3   3   2   1   1   2   1   3    1    3
CO-5   3   3   3   3   2   1   1   2   1   3    1    3
CO-6   3   2   2   1   1   1   1   2   1   1    1    3
GUIDELINES FOR TEACHERS
To implement Outcome Based Education (OBE), the knowledge level and skill set of the students
should be enhanced. Teachers should take a major responsibility for the proper
implementation of OBE. Some of the responsibilities (not limited to) for the teachers in the OBE
system may be as follows:
Within reasonable constraints, they should manoeuvre time to the best advantage of all
students.
They should assess the students only against the defined criteria, without considering any
other factor that could unfairly discriminate against them.
They should try to develop the learning abilities of the students to a certain level before they
leave the institute.
They should try to ensure that all the students are equipped with quality knowledge as
well as competence by the time they finish their education.
They should always encourage the students to develop their ultimate performance capabilities.
They should facilitate and encourage group work and teamwork to consolidate newer
approaches.
They should follow Bloom's taxonomy in every part of the assessment.
Bloom’s Taxonomy
Level      | Teacher should check                       | Student should be able to | Possible mode of assessment
Create     | Students' ability to create                | Design or Create          | Mini project
Evaluate   | Students' ability to justify               | Argue or Defend           | Assignment
Apply      | Students' ability to use information       | Operate or Demonstrate    | Technical Presentation / Demonstration
Understand | Students' ability to explain the ideas     | Explain or Classify       | Presentation / Seminar
Remember   | Students' ability to recall (or remember)  | Define or Recall          | Quiz
GUIDELINES FOR STUDENTS
Students should take equal responsibility for implementing the OBE. Some of the
responsibilities (not limited to) for the students in OBE system are as follows:
Students should be well aware of each UO before the start of a unit in each and every
course.
Students should be well aware of each CO before the start of the course.
Students should be well aware of each PO before the start of the programme.
Students should think critically and reasonably with proper reflection and action.
Learning of the students should be connected and integrated with practical and real life
consequences.
Students should be well aware of their competency at every level of OBE.
LIST OF FIGURES
Unit 1: Introduction to Signals and Systems
Fig. 1.1: Sinusoidal Signal 5
Fig. 1.2: ECG Signal 5
Fig. 1.3: Representation of signal and system 6
Fig. 1.4: Continuous time signal 7
Fig. 1.5: Discrete time signal 8
Fig. 1.6: Pulse signal 8
Fig. 1.7: CT Unit impulse signal 9
Fig. 1.8: DT Unit impulse sequence 10
Fig. 1.9: Unit Step Function 11
Fig. 1.10: DT unit step sequence 12
Fig. 1.11: u[n] represented by shifted impulse sequences 13
Fig. 1.12: CT Signum function 14
Fig. 1.13: DT Signum sequence 15
Fig. 1.14: CT ramp function 15
Fig. 1.15: DT ramp sequence 16
Fig. 1.16: CT real exponential signal 17
Fig. 1.17: CT complex exponential signal 17
Fig. 1.18: DT complex exponential sequence 18
Fig. 1.19: CT sinusoid signal 19
Fig. 1.20: CT sinusoid signal (a) Increasing sinusoid signal (b) Decaying sinusoid signal 19
Fig. 1.21: CT deterministic signal 20
Fig. 1.22: CT random signal 21
Fig. 1.23: CT Even signal 21
Fig. 1.24: CT Odd signal 22
Fig. 1.25: CT periodic signal 24
Fig. 1.26: CT aperiodic signal 25
Fig. 1.27: DT periodic signal 25
Fig. 1.28: Examples of finite duration signals (a) CT finite-duration signal (b) DT finite-duration signal 32
Fig. 1.29: Examples of infinite duration signals (a) CT infinite-duration signal (b) DT infinite-duration signal 32
Fig. 1.30: Examples of causal signals (a) CT causal signal (b) DT causal signal 33
Fig. 1.31: Examples of noncausal signals (a) CT noncausal signal (b) DT noncausal signal 33
Fig. 1.32 Example of application of cartesian coordinates 34
Fig. 1.33: Random (stochastic) Signal 34
Fig. 1.34: Continuous and Discrete Amplitude Signals 37
Fig. 1.35: Examples of bounded and unbounded signal 43
Fig. 2.22 Periodic input to LTI system 78
Unit 5: Z - Transform
Fig. 5.1: Z-plane 191
Fig. 5.2: Poles and zeros in the Z-plane 192
Fig. 5.3: x(n) = a^n u(n) 196
Fig. 5.4: Pole-zero plot and ROC |z| > |a| 196
Fig. 5.5: x[n] = −a^n u(−n − 1) 198
Fig. 5.6: Pole-zero plot and ROC |z| < |a| 198
Fig. 5.7: Pole-zero plot and ROC |a| < |z| < |b| 207
Fig. 5.8: Pole-zero plot and ROC: 1/3 < |z| 208
Fig. 5.9: Pole-zero plot and ROC |z| < 1/4 210
Fig. 6.12: Effect of aliasing on a sinusoidal signal. For each of the four values of ω0, the original
sinusoidal signal (solid curve), its samples, and the reconstructed signal (dashed curve) are
illustrated: (a) ω0 = (5π)/6; in (a) and (b) no aliasing occurs, whereas in (c) and (d) there
is aliasing 240
Fig. 6.13: Sinusoidal signal for example 6.1 241
Fig. 6.14: Strobe effect 242
Fig. 6.15: Discrete-time processing of Continuous Time signals 243
Fig. 6.16: Notation for A/D conversion and D/A conversion 244
Fig. 6.17: Sampling followed by conversion to a discrete-time sequence: (a) Overall system;
(b) xp(t) for two sampling rates, where the dashed envelope represents xc(t); (c) The output
sequence for two different sampling rates 245
Fig. 6.18: The relationship among Xc(jω), Xp(jω) and X(e^jΩ) with different sampling rates 246
Fig. 6.19: Conversion of discrete time sequence to a continuous time signal 247
Fig. 6.20: Frequency Response of ideal band-limited differentiator 248
Fig. 6.21: Frequency Response of discrete-time filter used to implement a continuous time band-limited differentiator 248
Fig. 6.22: Discrete time sampling 250
Fig. 6.23: Impulse-train sampling of discrete-time signal in frequency domain: (a) Original signal
spectrum; (b) Spectrum of sampling sequence; (c) Spectrum of sampled signal with ωs > 2ωM
(no aliasing occurs); (d) Spectrum of sampled signal with ωs < 2ωM (aliasing occurs) 251
Fig. 6.24: Exact recovery of discrete time signal from its samples using an ideal low pass filter: (a)
Block diagram for sampling & reconstruction; (b) spectrum of x[n] 252
Fig. 6.25: Relationship between xp[n] corresponding to sampling and xb[n] corresponding to decimation 254
Fig 6.26: Frequency domain illustration of the relationship between sampling & decimation 255
Fig 6.27: Continuous time signal that was originally sampled at Nyquist rate. After discrete time 256
filtering, the resulting sequence can be further downsampled. Here (jω) is the
continuous time Fourier Transform of (t), ( ) and ( ) are the discrete
time Fourier transforms of [ ] & [ ] respectively. And ( ) is the frequency
response of the discrete time low pass filter depicted in the block diagram.
Fig 6.28: Upsampling: (a) Overall system; (b) associated sequences and spectra for upsampling by a factor of 2 257
Fig 6.29: Spectra associated with example 6.4: (a) Spectrum of x[n]; (b) Spectrum after
downsampling by 4; (c) Spectrum after upsampling of x(n) by a factor of 2; (d) Spectrum
after upsampling x[n] by 2 then downsampling by 9 258
CONTENTS
Foreword iv
Acknowledgement v
Preface vi
Outcome Based Education vii
Course Outcomes ix
Guidelines for Teachers x
Guidelines for Students xi
List of Figures xii
1.3.11 CT Ramp Function 14
1.3.12 DT Ramp Sequence 15
1.3.13 CT Exponential signal 15
1.3.14 DT Exponential signal 17
1.3.15 CT Sinusoid Signal 18
1.3.16 DT Sinusoid Sequence 19
1.4 Classification of Continuous Time and Discrete Time signals 19
1.4.1 Deterministic and Random signals 19
1.4.2 Even and Odd signals 21
1.4.3 Periodic and Aperiodic signals 23
1.4.4 Energy and Power signals 27
1.4.5 Finite (Time-limited) and Infinite Duration signals 31
1.4.6 Causal and Noncausal signals 32
1.5 Some signal properties 33
1.5.1 Absolute integrability 33
1.5.2 Determinism 33
1.5.3 Stochastic character 33
Unit 2: Behavior of Continuous and Discrete-time LTI Systems 51-86
Unit specifics
Rationale
Pre-requisites
Unit outcomes
2.1 Introduction 55
2.1.1 Linear Systems 55
2.1.2 Time Invariant Systems 57
2.1.3 Linear Time Invariant Systems 57
2.2 Impulse response of LTI System 58
2.2.1 Discrete-Time Unit Impulse Response and the Convolution 58
2.2.2 Representation of Continuous-Time Signals in Terms of Impulses 59
2.3 The Unit Step Response of an LTI System 60
2.4 Convolution Integral 61
2.4.1 Properties of the Convolution Integral 61
2.5 Input-output behaviour with aperiodic convergent inputs 62
2.5.1 Response of a continuous time system 62
2.5.2 Response of a discrete time system 62
2.6 Cascade interconnections 64
2.7 Causality for LTI systems 66
2.8 Stability for LTI Systems 66
2.9 System Representation through Differential Equations and Difference Equations 67
2.9.1 Differential Equation Description of CT LTI systems 67
2.9.2 Difference Equation Description of DT LTI systems 68
2.10 State space representation of systems 69
2.11 State Space Analysis 70
2.11.1 State equations 70
2.11.2 Output equations 71
2.11.3 State Model 72
2.12 Transfer function of a Continuous Time System 72
2.13 State Transition Matrix 72
2.14 Multi-Input, Multi-Output Representation 75
2.15 Periodic Inputs to LTI system 78
2.16 The Notion of frequency response and its relation to impulse response 79
Unit summary 81
Exercises 82
Know more 85
References and suggested readings 86
3.5.10. Real and Even 105
3.5.11 Real and Odd 105
3.5.12 Parseval’s Relation 105
3.6 Gibb’s Phenomenon 106
3.7 Fourier Transform (FT) 111
3.7.1. Definition 111
3.7.2. Definition of Inverse Fourier Transform (IFT) 112
3.7.3 Magnitude and Phase Spectrum using Fourier Transform 112
3.8 Properties of Fourier Transform (FT) 112
3.8.1. Linearity 112
3.8.2. Time Shifting 113
3.8.3 Time Scaling 113
3.8.4 Time Reversal 114
3.8.5. Conjugation 114
3.8.6. Frequency Shifting 115
3.8.7 Time Differentiation 115
3.8.8 Time Integration 116
3.8.9 Differentiation in Frequency 116
3.8.10 Convolution 117
3.8.11 Parseval's Theorem 118
3.8.12 Duality Property 119
3.9 Fourier Transform (FT) Representation of Continuous-Time (CT) LTI System in terms of Convolution and Multiplication 120
3.9.1. Representation of Transfer Function of CT LTI System in Frequency Domain 120
3.9.2. Relation of Impulse Response and Transfer Function of CT LTI System 120
3.9.3 Response of CT LTI System in terms of Fourier Transform 121
3.9.4 Magnitude and Phase Response of CT LTI System 121
3.12 Properties of DTFT 132
3.12.1. Linearity 132
3.12.2. Time Shifting 133
3.12.3 Periodicity 134
3.12.4 Time Reversal 134
3.12.5 Conjugation 134
3.12.6 Frequency Shifting 135
3.12.7 Differentiation in Frequency 135
3.12.8 Convolution 136
3.12.9 Parseval’s Theorem 137
3.13 Discrete Fourier Transform (DFT) Representation 138
3.13.1 Definition of DFT 138
3.13.2 Definition of Inverse DFT 139
3.14 Properties of DFT 139
3.14.1. Linearity 139
3.14.2. Circular Time Shifting 140
3.14.3 Periodicity 140
3.14.4 Time Reversal 141
3.14.5 Conjugation 142
3.14.6 Circular Frequency Shifting 142
3.14.7 Multiplication 142
3.14.8 Convolution 143
3.14.9 Parseval's Theorem 144
Unit summary 144
Exercises 145
Know more 149
References and suggested readings 150
4.2.1 Complex Frequency Plane (s-Plane) 154
4.2.2 Definition of Laplace Transform (LT) 154
4.2.3 Definition of Inverse Laplace Transform (ILT) 155
4.3 Region of Convergence (RoC) 155
4.4 Properties of Laplace Transform 168
4.4.1 Scaling of amplitude 168
4.4.2 Linearity 168
4.4.3 Time differentiation 168
4.4.4 Integration in time domain 169
4.4.5 Shifting in frequency domain 170
4.4.6 Shifting in time domain 170
4.4.7 Differentiation in frequency 170
4.4.8 Time Scaling 171
4.4.9 Initial Value theorem 172
4.4.10 Final Value theorem 172
4.4.11 Convolution Property 172
4.5 Poles and Zeros of System Functions and Signals 173
4.5.1 Poles and Zeros 173
4.6 ROC Properties for Laplace Transform 174
4.7 Inverse Laplace Transform by Partial Fraction Expansion Method 175
4.8 Laplace domain analysis 178
4.8.1 Transfer Function of LTI Continuous Time System 178
4.8.2 Impulse Response and Transfer Function 178
4.9 Solving Differential Equations by Using Laplace Transform 179
Unit summary 179
Exercises 180
Know more 186
References and suggested readings 187
5.1 Introduction 190
5.2 Need of Z-transform 190
5.3 Types of Z-transforms 191
5.3.1 Unilateral Z-Transform 191
5.3.2 Bilateral Z-Transform 191
5.4 The Z-plane 192
5.4.1 Poles 193
5.4.2 Zeros 193
5.5 Region of Convergence (ROC) for Z-Transform 194
5.6 Properties of ROC 199
5.7 Properties of the Z-transform 200
5.7.1. Linearity 200
5.7.2. Time Shifting 201
5.7.3 Time Reversal 202
5.7.4 Scaling in z-domain 202
5.7.5 Time scaling property 203
5.7.6 Convolution 204
5.7.7 Differentiation in z-domain 205
5.7.8 Conjugation 205
5.7.9 Initial Value Theorem 206
5.7.10 Final Value Theorem 206
5.7.11 Accumulation 207
5.8 Relationship between DTFT and z-transform 211
5.9 Inverse Z- transform 212
5.9.1 Power series expansion method 213
5.9.2 Partial fraction method 216
5.10 Z – Domain Causality and stability analysis 219
Unit summary 220
Exercises 221
Know more 225
References and suggested readings 226
Rationale
Pre-requisites
Unit outcomes
6.1 Introduction 230
6.2 Sampling Theorem 230
6.2.1 Impulse train sampling 231
6.2.2 Sampling with Zero-Order Hold 234
6.3 Reconstruction of a signal from its samples using interpolation 236
6.3.1 The effect of under sampling: Aliasing 238
6.4 Discrete Time Processing of Continuous Time Signals 245
6.4.1 Digital differentiator 249
6.5 Sampling of discrete time signals 251
6.5.1 Impulse train sampling 251
6.5.2 Discrete time decimation and interpolation 255
Unit summary 261
Exercises 275
Know more 282
References and suggested readings 283
1 Introduction to Signals and Systems
UNIT SPECIFICS
This unit presents information related to the following topics:
Explore fundamentals of signals and systems; understand their significance in engineering
and science;
Examine properties of signals: periodicity, integrability, determinism, stochastic character;
Study special signals: unit step, unit impulse, sinusoid, complex exponential, time-limited
signals;
Differentiate between continuous-time and discrete-time signals; analyze their
characteristics and representations;
Compare continuous and discrete amplitude signals; understand their applications;
Explore system properties: linearity, shift-invariance, causality, stability, realizability;
Discuss practical applications of signals and systems in everyday life and various fields;
Learning outcomes: clear understanding of concepts; ability to analyze signals and
comprehend system properties; recognition of signal processing importance;
Unit provides foundational knowledge for further studies and applications in engineering
and science.
This unit provides an introduction to signals and systems, focusing on their fundamental
concepts and applications in engineering and science. Students will explore the properties of
signals, including periodicity, integrability, determinism, and stochastic character, and
understand how these properties relate to real-world scenarios. Special signals such as the unit
step, unit impulse, sinusoid, and complex exponential will be studied, along with their
characteristics and applications.
The unit will differentiate between continuous-time and discrete-time signals, as well as
continuous and discrete amplitude signals, and analyze their representations and
characteristics. Students will also explore system properties such as linearity, shift-invariance,
causality, stability, and realizability, and understand their impact on signal processing.
The practical applications of signals and systems in various fields will be discussed,
highlighting their relevance in everyday life. Students will be encouraged to apply their
knowledge to solve practical problems and develop critical thinking and problem-solving skills.
Effective communication of concepts using appropriate terminology and notation will be
emphasized.
By completing this unit, students will have a strong foundation in signals and systems, enabling
them to further their studies and apply their knowledge in engineering and scientific contexts.
RATIONALE
The unit on "Introduction to Signals and Systems" is to provide students with a solid foundation
in understanding the fundamental concepts of signals and systems. This knowledge is essential
as signals and systems are pervasive in various branches of engineering and science.
Overall, this unit aims to equip students with a strong foundational understanding of signals
and systems, enabling them to pursue more advanced topics and apply their knowledge in
engineering and scientific contexts.
PRE-REQUISITES
These prerequisites are essential for students to effectively engage with the unit on
"Introduction to Signals and Systems" and to ensure a smooth transition into the study of the
subject matter.
UNIT OUTCOMES
After studying this unit students will be able to:
U1-O1: Understand fundamental concepts of signals and systems.
U1-O2: Identify and analyze different signal types: periodic, deterministic, stochastic.
U1-O3: Recognize the importance of signal properties in real-world applications.
U1-O4: Apply special signals (unit step, unit impulse, sinusoid, complex exponential) in
problem-solving and system analysis.
U1-O5: Differentiate and understand representations and characteristics of continuous-time
and discrete-time signals; analyze system properties and their impact on signal
processing.
1.1 Signals and systems as seen in everyday life, and in various branches of
engineering and science
Signals and systems are ubiquitous in everyday life and have significant applications in engineering and
science. The term signal is generally applied to something that conveys information. Signals may, for
example, convey information about the state or behavior of a physical system. As another class of
examples, signals are synthesized for the purpose of communicating information between humans or
between humans and machines. Although signals can be represented in many ways, in all cases, the
information is contained in some pattern of variations. Signals are represented mathematically as functions
of one or more independent variables.
Signals can be any physical quantity that varies with time, space, or other independent variables. They can
be represented in either the time domain or frequency domain. Examples of signals include human speech,
electric current, and voltage. Signals can be dependent on one or more independent variables, such as time,
temperature, position, pressure, or distance. If a signal depends on only one independent variable, it is
called a one-dimensional signal, while a signal dependent on two independent variables is called a two-
dimensional signal. Examples include audio signals (speech, music), image and video signals,
communication signals, biomedical signals (ECG, EEG), control systems, digital signal processing,
electrical circuits, mechanical systems, and feedback systems. These concepts find practical use in areas
such as audio processing, image recognition, telecommunications, medical diagnostics, robotics, power
systems, and scientific research. Signals and systems provide the framework for analyzing, manipulating,
and understanding information in various fields.
1.1.1 Signal
A signal can be realized as a physical quantity which conveys information related to some physical
phenomenon: for example, a voltage signal, or an electromagnetic wave transmitted over the air from a base
station to a mobile station, which carries the information exchanged between two individuals. A signal can
also be perceived in terms of a video or an image carrying information. In this way, a signal is a carrier of
data or information.
Examples of signals include the following: when we power on a mobile handset, an electromagnetic field
gets associated with the antenna and we receive a signal from a base station. Another example is the whistle
of a train passing nearby, which is a kind of signal that is identified in our brain. Basically,
signals arise in many forms, e.g. acoustic, light, pressure, flow, mechanical, thermal, electrical, etc.
To study signal theory, we consider a signal as a variation or change of an entity with respect to
time, space, or any other independent variable. When time signals are considered, they are represented
using x(t), y(t), etc., and such 1D signals which vary in time can also be extended to 2D and 3D signals.
Signals occurring in many different physical forms are often converted into electrical form by a transducer
for ease of processing. For example, a microphone converts sound waves into an electrical signal which is
convenient for further processing.
To study signals, we must see how signals look! Below is one signal which is very commonly used,
i.e., the sinusoidal signal.
Fig. 1.1: Sinusoidal Signal (amplitude on the y-axis versus time on the x-axis)
Graphically, the independent variable is represented by the horizontal axis (x-axis) and the dependent
variable is represented by the vertical axis (y-axis). In Fig. 1.1, we must notice that as the time value
changes, the instantaneous value of the amplitude (the height of the signal waveform) also changes, so the
values plotted on the y-axis are dependent on the values on the x-axis.
The general expression for a sinusoidal signal can be written as A sin(ωt), where ω is a constant and t is time,
which is the value on the x-axis. Thus, the quantity on the y-axis is the dependent variable and the quantity
on the x-axis is the independent variable, as we all know that time depends on nothing and varies
independently. So, from this discussion, the definition of a signal is any physical quantity that varies with
time, space or any other independent variable, and which conveys some information.
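As a quick illustration of these ideas, the short Python sketch below (an illustrative aid, not part of the original text; the amplitude, frequency and time grid are arbitrary assumptions) evaluates a sinusoid A sin(ωt) on a grid of time instants, making the roles of the dependent and independent variables explicit.

import numpy as np

A = 2.0                           # assumed amplitude (arbitrary)
omega = 2 * np.pi * 5             # assumed angular frequency: a 5 Hz sinusoid
t = np.linspace(0.0, 1.0, 1000)   # independent variable: time instants on the x-axis
x = A * np.sin(omega * t)         # dependent variable: signal amplitude at each instant

print(x[:5])                      # amplitude values corresponding to the first few time instants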
Similarly, the ECG signal shown in Fig. 1.2 is also a biological signal; it depicts the functioning of the heart
and is used to find or study any abnormalities in the heart.
Other examples of the signals are speech signal, EEG signal, Image/Video signal, Radar signal, AM/FM
signal etc.
1.1.2 System
For the examples of the signals given in section 1.1.1, a system is always associated with their generation
and extraction of useful information. For example, any waveform like sinusoid can be generated by a
function generator and displayed on a cathode-ray oscilloscope (CRO). Similarly, other signals like ECG,
EEG are generated by our heart and brain, respectively. These signals are analyzed using the biological
equipment with suitable software systems.
From this discussion, a continuous time system accepts an input signal x(t) and produces an output signal
y(t). A system is often represented as an operator on the input signal. Hence, a system can be defined as
any physical device that performs a certain operation or a set of operations on the input signal x(t) to produce
a new signal y(t) as its output. Fig. 1.3 shows a system to which an input signal x(t) is provided, which
is then processed to give the output y(t).
Eq. (1.1) above is that of a sinusoidal signal, which is a continuous time signal as it is defined continuously
over time from -∞ to +∞, or over any continuous time interval, as shown in Fig. 1.4.
So, a stem plot contains values at discrete time instants, also called a sequence of numbers. From Fig. 1.5
it can be seen that at time instant 0 the instance is x(0), which gives the amplitude value at time 0; at time
instant 1 the instance is x(1), and so on. Hence, x(-2), x(-1), x(0), x(1), x(2) are called a time series or a
sequence of numbers.
Consider a pulse δ_Δ(t) extending from -Δ/2 to Δ/2; the pulse has a width of Δ and a height equal to
1/Δ. So the area under the pulse equals Δ × (1/Δ) = 1; that is, for each pulse δ_Δ(t), for every Δ, the area
under the pulse is equal to 1.
δ(t) = lim_{Δ→0} δ_Δ(t)   (1.3)
As Δ tends to 0, the width goes to 0 and the height to infinity, but the area still remains unity (constant).
∫_{t1}^{t2} δ(t) dt = 1 for t1 < 0 < t2, and 0 otherwise   (1.4)
The CT unit impulse signal is represented in Fig. 1.7. It can also be written as
δ(t) = ∞ for t = 0, and 0 otherwise, with ∫_{-∞}^{∞} δ(t) dt = 1
The arrow on top of the impulse indicates that it has infinite amplitude, as shown in Fig. 1.7.
1. ∫_{-∞}^{∞} x(t) δ(t) dt = x(0)
i.e., if we multiply the impulse function by any signal x(t) and integrate, it picks out the value of the
function x(t) at t = 0.
2. ∫_{-∞}^{∞} x(t) δ(t - t0) dt = x(t0)
Here, δ(t - t0) is basically the impulse shifted to t = t0.
3. Scaling property: δ(at) = (1/|a|) δ(t), a ≠ 0
1. Product property:
x[n] δ[n] = x[0] δ[n]   (1.6)
2. Sifting property:
The product property leads to the sifting property,
∑_n x[n] δ[n - n0] = x[n0]   (1.7)
Eq. (1.7) is true when δ[n - n0] lies within the given summation limits; otherwise the RHS of the
equation becomes zero.
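A minimal numerical check of the discrete-time product and sifting properties is sketched below (illustrative only; the test sequence x[n], the index range, and the shift n0 = 3 are arbitrary assumptions).

import numpy as np

n = np.arange(-5, 6)                  # time indices -5 ... 5
x = n**2 + 1.0                        # an arbitrary test sequence x[n]

delta = (n == 0).astype(float)        # unit impulse delta[n]
delta_shift = (n == 3).astype(float)  # shifted impulse delta[n - 3], i.e. n0 = 3

# Product property, Eq. (1.6): x[n]*delta[n] equals x[0]*delta[n]
print(np.allclose(x * delta, x[n == 0] * delta))   # True

# Sifting property, Eq. (1.7): sum_n x[n]*delta[n - n0] equals x[n0]
print(np.sum(x * delta_shift), x[n == 3][0])       # both equal x[3] = 10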
For example, consider a scenario where a light switch is turned on at time t = 0. The unit step signal
can be used to represent this event. Before t = 0, the signal is 0 (light off), and after t = 0, the signal
is 1 (light on).
Let us consider the relationship between the unit impulse and the unit step function:
δ(t) = du(t)/dt   (1.9)
By differentiating the step response, the impulse response can be obtained; by integrating the step
response, the ramp response can be obtained.
Application of a step signal is equivalent to the application of numerous sinusoidal signals with a wide
range of frequencies. Like white noise, the step is therefore a drastic test signal: if the system
response is satisfactory for a step signal, it is likely to give a satisfactory response to other types of
signals.
The DT unit impulse sequence in terms of the DT unit step sequence can be represented by
δ[n] = u[n] - u[n - 1]   (1.11)
Conversely, the DT unit step sequence can be represented by the DT unit impulse sequence as a running
sum of impulses:
u[n] = ∑_{k=0}^{∞} δ[n - k]   (1.12)
u[n] = δ[n] + δ[n - 1] + δ[n - 2] + ⋯   (1.13)
That means u[n] can be recognized as a linear combination of shifted impulse sequences, as shown
in Fig. 1.11.
Solution:
x[n] = ∑_{k=0}^{∞} x[k] δ[n - k]   (1.14)
x[n] = ∑_k x[k] δ[n - k]   (1.15)
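The impulse-step relations in Eqs. (1.11)-(1.13) can be verified numerically; the sketch below (an illustrative aid with an arbitrary index range, not part of the original text) builds u[n] as a running sum of shifted impulses and recovers δ[n] as the first difference of u[n].

import numpy as np

n = np.arange(-5, 11)
delta = (n == 0).astype(float)             # delta[n]
u = (n >= 0).astype(float)                 # u[n]

# Eqs. (1.12)/(1.13): u[n] is the running (cumulative) sum of the impulse
u_from_delta = np.cumsum(delta)
print(np.array_equal(u, u_from_delta))     # True

# Eq. (1.11): delta[n] = u[n] - u[n-1]
u_prev = (n - 1 >= 0).astype(float)        # u[n-1]
print(np.array_equal(delta, u - u_prev))   # True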
sgn(t) = 2u(t) - 1   (1.18)
Fig. 1.12 shows the CT Signum function.
x(t) = A e^{αt}   (1.25)
Depending upon the values of the parameters A and α, the exponential signal is categorized
into two types.
1. Real Exponential Signal
2. Complex Exponential Signal
For this signal, A and α are real. Depending upon the value of α, the signal increases or decreases
exponentially.
The complex exponential signal can be represented graphically in terms of sine and cosine waves
depending upon value of σ in Eq. (1.28). If value of σ=0, real and imaginary parts of Eq. (1.28)
become sinusoidal signals as shown by Fig. 1.17.
x[n] = |A| |α|^n e^{j(Ωn + ∅)}   (1.31)
x[n] = |A| |α|^n [cos(Ωn + ∅) + j sin(Ωn + ∅)]   (1.32)
x(t) = |A| [cos(ωt + ∅) + j sin(ωt + ∅)]   (1.33)
( ) σ> 0
( )
σ< 0
-t 0 t -t t
(a) (b)
Fig 1.20 CT sinusoid signal (a) Increasing sinusoid signal (b) Decaying sinusoid signal
For example, the sound produced by a tuning fork is a sinusoidal signal. As the prongs of the
fork vibrate back and forth, they create a pure tone that can be represented by a sinusoidal
waveform.
|A| cos(ωn + ∅) and |A| sin(ωn + ∅) are discrete time sinusoidal sequences, where |A|
represents the amplitude, ω is the frequency in radians/sample and ∅ is the phase angle in radians.
In Fig. 1.21, x(t) = A sin(ωt) for -∞ < t < ∞, which determines the signal values at all time instants.
Other examples of deterministic signals include voltage signals, current signals, etc.
Random signals, in contrast, are random in nature; that means they take random values at
various time instants. Mathematically, the behaviour of random signals cannot be predicted.
For example, the outcome of a coin toss experiment can lead to a random signal. If the outcome
is a head, it is represented by +1, and if the outcome is a tail, it is represented by -1. That means
the signal x[n] = +1 if the outcome is heads, or x[n] = -1 if the outcome is tails. Since
the outcome of the coin toss experiment is random, the signal itself is random in nature and
this is a discrete time random signal. On the other hand, a continuous time random signal
includes speech signal, record of temperature of the city in a particular month over a time or
it can be interpreted as noise with too many variations along the time axis t. This noise tends
to limit the performance of a system and hence it is important to understand the behavior of
the noise to characterize the performance and behavior of any system. Fig. 1.22 shows the
CT random signal.
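The coin-toss example can be simulated directly; the sketch below (illustrative only; the sequence length, frequency and random seed are arbitrary assumptions) contrasts a deterministic sinusoid, whose value at any instant follows from a formula, with a random ±1 sequence whose values cannot be predicted in advance.

import numpy as np

rng = np.random.default_rng(0)             # fixed seed so the run is repeatable

n = np.arange(20)
deterministic = np.sin(0.2 * np.pi * n)    # value at every n is known from the formula

# coin toss: heads -> +1, tails -> -1
tosses = rng.integers(0, 2, size=20)       # 0 or 1 with equal probability
random_signal = np.where(tosses == 1, 1, -1)

print(deterministic[:5])
print(random_signal[:5])                   # the +1/-1 pattern depends on the random draws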
For example, sin(2t) is an odd signal since sin(-2t) = -sin(2t), as shown in Fig. 1.24.
Hence, it can be seen that the even signals are symmetric about 0 that means they have even
symmetry and the odd signals are anti-symmetric about the vertical axis that means they have
odd symmetry.
Any real valued CT or DT unsymmetric signal can be represented in terms of its even and
odd parts as
x(t) = x_e(t) + x_o(t)   (1.40)
and
x[n] = x_e[n] + x_o[n]   (1.41)
where the subscript e denotes the even part of the signal and the subscript o denotes the odd part.
x_e(t) = Ev{x(t)} = (1/2){x(t) + x(-t)}   (1.42)
x_o(t) = Od{x(t)} = (1/2){x(t) - x(-t)}   (1.43)
x_e[n] = Ev{x[n]} = (1/2){x[n] + x[-n]}   (1.44)
x_o[n] = Od{x[n]} = (1/2){x[n] - x[-n]}   (1.45)
Indeed, x_e(-t) = (1/2){x(-t) + x(t)} = x_e(t), so the even part has even symmetry, and
x_o(-t) = (1/2){x(-t) - x(t)} = -(1/2){x(t) - x(-t)} = -x_o(t), so the odd part has odd symmetry.
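Eqs. (1.44)-(1.45) translate directly into code; the sketch below (illustrative only; the finite-length test sequence centred on n = 0 is an arbitrary assumption) splits a DT signal into its even and odd parts and checks that they add back to the original.

import numpy as np

n = np.arange(-4, 5)              # symmetric index range about n = 0
x = 0.5 * n + n**2                # an arbitrary unsymmetric test signal

x_rev = x[::-1]                   # x[-n] on this symmetric grid
xe = 0.5 * (x + x_rev)            # even part, Eq. (1.44)
xo = 0.5 * (x - x_rev)            # odd part,  Eq. (1.45)

print(np.allclose(x, xe + xo))    # True: x[n] = xe[n] + xo[n]
print(np.allclose(xe, xe[::-1]))  # True: even symmetry
print(np.allclose(xo, -xo[::-1])) # True: odd symmetry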
Now consider two periodic signals, x1(t) and x2(t), with fundamental periods T1 and T2,
respectively. The sum of two periodic signals may or may not be periodic.
x1(t) = x1(t + T1),   x2(t) = x2(t + T2)   (1.49)
T1/T2 = n/m   (1.52)
or
T = mT1 = nT2   (1.53)
where m and n are integers. The sum of two periodic signals is periodic only if the ratio of their
respective periods can be expressed as a rational number. We can say that the fundamental period of
x(t) is the smallest positive value of T that is an integer multiple of both T1 and T2; this value is the
least common multiple (LCM) of T1 and T2. If T1/T2 is an irrational number, then x1(t) and x2(t) do
not have a common period and x(t) is aperiodic. Fig. 1.26 shows an example of an aperiodic
signal, which does not satisfy the conditions met by a periodic signal; that is, aperiodic
signals are not periodic with respect to any defined fundamental period.
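The rational-ratio test can be automated with exact fractions; the sketch below (illustrative only; the two periods are assumed values taken from Example 1.2, and the tolerance/denominator limit are arbitrary choices) checks whether T1/T2 is rational and, if so, returns the fundamental period of the sum as the smallest common multiple of T1 and T2.

import math
from fractions import Fraction

def fundamental_period_of_sum(T1, T2, max_den=1000, tol=1e-9):
    """Fundamental period of x1(t) + x2(t), or None if T1/T2 looks irrational.

    With floating-point periods this is necessarily a heuristic: the ratio is
    accepted as rational only if a small-denominator fraction reproduces it closely.
    """
    ratio = Fraction(T1 / T2).limit_denominator(max_den)
    if abs(T1 / T2 - float(ratio)) > tol:
        return None                               # no small rational ratio -> aperiodic sum
    p, q = ratio.numerator, ratio.denominator     # T1/T2 = p/q in lowest terms
    return q * T1                                 # = p * T2, the smallest common period

print(fundamental_period_of_sum(math.pi / 5, math.pi / 2))  # ~3.1416 (ratio 2/5, periodic sum)
print(fundamental_period_of_sum(math.pi / 2, 2.0))          # None   (ratio pi/4 is irrational)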
x[n] = x[n + N] = x[n + 2N] = x[n + 3N] = ⋯ = x[n + mN]   (1.55)
for all n and any integer m. If x[n] is periodic with period N, then it is also periodic with
periods 2N, 3N, ….
Example 1.2:
Determine whether the following continuous time signals are periodic or not. If periodic, find
the fundamental period T.
1. x(t) = sin(50πt)
2. x(t) = 20 cos(10πt + π/6)
3. x(t) = 2 cos(10t + 1) - sin(4t - 1)
4. x(t) = cos(60πt) + sin(50πt)
5. x(t) = 3 cos(4t) + 2 sin(πt)
Solution:
1. Given, x(t) = sin(50πt), where ω = 50π.
T = 2π/ω = 2π/(50π) = 1/25 sec, so the signal is periodic with fundamental period T = 1/25 sec.
2. Given, x(t) = 20 cos(10πt + π/6)
ω = 10π
T = 2π/ω = 2π/(10π) = 1/5 sec
3. Given, x(t) = 2 cos(10t + 1) - sin(4t - 1)
Let x1(t) = 2 cos(10t + 1)
x2(t) = sin(4t - 1)
T1 = 2π/10 = π/5 and T2 = 2π/4 = π/2
Since the ratio T1/T2 = (π/5)/(π/2) = 2/5 is a rational number, the signal x(t) is periodic with
fundamental period T = 5T1 = 2T2 = π sec.
4. Given, x(t) = cos(60πt) + sin(50πt)
T1 = 2π/(60π) = 1/30 sec
T2 = 2π/(50π) = 1/25 sec
T1/T2 = (1/30)/(1/25) = 5/6
T = 6T1 = 5T2
T = 1/5 sec.
5. Given, x(t) = 3 cos(4t) + 2 sin(πt)
T1 = 2π/4 = π/2 sec
T2 = 2π/π = 2 sec
T1/T2 = (π/2)/2 = π/4, which is irrational; hence x(t) is aperiodic.
p(t) = v(t) i(t)
Hence, for any continuous signal x(t), the energy E can be written as
E = ∫_{-∞}^{∞} |x(t)|² dt   (1.58)
and for a discrete time signal x[n],
E = ∑_{n=-∞}^{∞} |x[n]|²   (1.59)
A signal is called an energy signal if the energy is finite, i.e., 0 < E < ∞. For example,
x(t) = e^{-at} u(t) with a > 0, or x[n] = aⁿ u[n] with |a| < 1, are energy signals.
The average power of a continuous signal x(t) can be written as
P = lim_{T→∞} (1/2T) ∫_{-T}^{T} |x(t)|² dt   (1.60)
and for a discrete time signal x[n],
P = lim_{N→∞} (1/(2N+1)) ∑_{n=-N}^{N} |x[n]|²   (1.61)
A signal is called a power signal if the power is finite (0 < P < ∞) and non-zero (P ≠ 0).
A signal with finite energy has zero power, and a signal with finite non-zero power has infinite energy.
A signal is neither an energy signal nor a power signal if its energy is infinite and its average power
is either zero or infinite.
For example, the CT and DT ramp signals, i.e., r(t) and r[n], are neither energy nor power signals.
Example 1.3: Determine whether the following signals are energy signals, power signals, or neither.
1. x(t) = 2e^{-t} u(t)
2. x(t) = t^{-1/4} u(t - 2)
3. x[n] = 3(0.5)ⁿ u[n]
Solution:
1. Given, x(t) = 2e^{-t}, t > 0
E = ∫_{-∞}^{∞} |x(t)|² dt = ∫_{0}^{∞} 4 e^{-2t} dt
  = 4 [e^{-2t}/(-2)]_{0}^{∞}
  = (4/(-2)) [e^{-∞} - e^{0}]
  = (4/(-2)) [0 - 1] = 4/2 = 2
Since E is finite, x(t) is an energy signal.
P = lim_{T→∞} (1/2T) ∫_{0}^{T} 4 e^{-2t} dt
  = lim_{T→∞} (4/2T) [e^{-2t}/(-2)]_{0}^{T}
  = lim_{T→∞} (4/(-4T)) [e^{-2T} - 1]
  = lim_{T→∞} (1/T) [1 - e^{-2T}]
  = 0
Since P = 0, x(t) is not a power signal.
2. Given, x(t) = t^{-1/4} u(t - 2)
E = ∫_{-∞}^{∞} |x(t)|² dt = ∫_{2}^{∞} t^{-1/2} dt = [2√t]_{2}^{∞}
  = ∞
E is infinite, so x(t) is not an energy signal.
P = lim_{T→∞} (1/2T) ∫_{2}^{T} |x(t)|² dt
  = lim_{T→∞} (1/2T) ∫_{2}^{T} t^{-1/2} dt
  = lim_{T→∞} (1/2T) [2√t]_{2}^{T}
  = lim_{T→∞} (1/T) [√T - √2]
  = 0
Since P = 0, x(t) is also not a power signal.
3. Given, x[n] = 3(0.5)ⁿ u[n]
E = ∑_{n=-∞}^{∞} |x[n]|²
  = ∑_{n=0}^{∞} |3(0.5)ⁿ|²
  = ∑_{n=0}^{∞} 9(0.5)^{2n}
  = 9 ∑_{n=0}^{∞} (0.25)ⁿ
∴ E = 9 · 1/(1 - 0.25)
    = 9 · 4/3
    = 12
E has a finite value. Hence, x[n] is an energy signal.
Now,
P = lim_{N→∞} (1/(2N+1)) ∑_{n=0}^{N} |x[n]|²
  = lim_{N→∞} (1/(2N+1)) ∑_{n=0}^{N} 9(0.25)ⁿ
Using ∑_{n=0}^{N} aⁿ = (1 - a^{N+1})/(1 - a), a ≠ 1:
P = lim_{N→∞} (9/(2N+1)) · (1 - (0.25)^{N+1})/(1 - 0.25)
  = 0
Since the power P = 0, x[n] is not a power signal.
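The classifications in Example 1.3 can be cross-checked numerically; the sketch below (illustrative only; the truncation horizons and step size are arbitrary assumptions) approximates E and P for x(t) = 2e^{-t}u(t) and for x[n] = 3(0.5)^n u[n].

import numpy as np

# CT signal x(t) = 2*exp(-t)*u(t): approximate the integrals on a truncated grid
T = 50.0
dt = 1e-3
t = np.arange(0.0, T, dt)
x = 2.0 * np.exp(-t)
E_ct = np.sum(x**2) * dt              # -> about 2 (finite energy -> energy signal)
P_ct = E_ct / (2 * T)                 # -> small, and shrinks as T grows (P = 0)

# DT signal x[n] = 3*(0.5)^n * u[n]
N = 200
n = np.arange(N + 1)
xn = 3.0 * 0.5**n
E_dt = np.sum(xn**2)                  # -> about 12 (finite energy -> energy signal)
P_dt = E_dt / (2 * N + 1)             # -> small, and shrinks as N grows (P = 0)

print(E_ct, P_ct, E_dt, P_dt)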
Fig 1.28: Examples of finite duration signals (a) CT finite-duration signal (b) DT finite-duration signal
Infinite duration signals are signals that exist from -∞ to ∞, i.e., over an infinite interval of time.
The examples of CT and DT infinite duration signals are shown in Fig 1.29.
Fig 1.29: Examples of infinite duration signals (a) CT infinite-duration signal (b) DT infinite-duration
signal
Fig 1.30: Examples of causal signals (a) CT causal signal (b) DT causal signal
The signals that are zero for t ≥ 0 (for CT signals) or n ≥ 0 (for DT signals) are known as
noncausal (left-sided) signals. Examples of such signals are shown in Fig. 1.31.
Fig 1.31: Examples of noncausal signals (a) CT noncausal signal (b) DT noncausal signal
Suppose we have a strictly time limited signal that is a rectangular pulse, so obviously this
curve has a finite area under it. Therefore, we can say this signal is absolutely integrable.
1.5.2 Determinism:
Determinism refers to the property of a signal that follows a predictable and deterministic
pattern.
A signal is said to be deterministic if there is no uncertainty with respect to its value at any
instant of time or the signals which can be defined exactly by a mathematical formula are
known as deterministic signals. Examples of deterministic signals include most mathematical
functions, such as polynomial functions or exponential functions.
Example 1.4: Represent the following discrete time signals graphically. (The arrow indicates the
position of n = 0.)
x1[n] = {1, 2, 3, 4, 5}
         ↑
x2[n] = {-6, -3, 2, 5, 1, 3, 7, 8}
                     ↑
x3[n] = {4, 3, 1, 0, 5, 3}
         ↑
Solution:
1. The given sequence is x1[n] = {1, 2, 3, 4, 5}
                                  ↑
As the arrow position indicates, n = 0,
i.e. x1[0] = 1, x1[1] = 2, x1[2] = 3, x1[3] = 4, x1[4] = 5.
So, the plot of x1[n] (amplitude) versus time is,
2. The given sequence is x2[n] = {-6, -3, 2, 5, 1, 3, 7, 8}
                                              ↑
i.e. x2[0] = 5, x2[1] = 1, x2[2] = 3, x2[3] = 7, x2[4] = 8, x2[-1] = 2, x2[-2] = -3, x2[-3] = -6
amplitude values. Analog audio signals and continuous waveform signals are examples of
continuous amplitude signals.
Continuous-time discrete amplitude signals are essentially digitized in amplitude. A discrete amplitude
is one which we obtain through the quantization process, and it depends on how many levels of
quantization one wants. In simple words, quantization means assigning the amplitude values
of an analog signal to certain discrete levels, equidistant from each other, based on certain
criteria. A square wave is a continuous-time discrete amplitude signal. For binary signals,
there are only two quantization levels (0, 1). Some examples of continuous-time discrete
amplitude signals are shown in Fig. 1.34 (a) and (b).
Fig. 1.34: Continuous and Discrete Amplitude Signals: (a), (b)
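Quantization as described above can be sketched in a few lines; the example below (illustrative only; the number of levels and the input range are arbitrary assumptions) maps a continuous-amplitude sinusoid onto a small set of equidistant amplitude levels, i.e., the discrete-amplitude signal the text refers to.

import numpy as np

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 2 * t)              # continuous-amplitude signal in [-1, 1]

levels = 4                                 # assumed number of quantization levels
grid = np.linspace(-1.0, 1.0, levels)      # equidistant amplitude levels
# map every sample to the nearest allowed level
xq = grid[np.argmin(np.abs(x[:, None] - grid[None, :]), axis=1)]

print(np.unique(np.round(xq, 3)))          # only `levels` distinct amplitudes remain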
A system is linear if it follows the two principles that are additivity and homogeneity.
1. Additivity property: Additivity means that the response of the system to the sum of two
inputs is equal to the sum of the individual responses to each input.
Mathematically, if input ( ) produces output ( ) and input ( ) produces output ( ),
then ( ) + ( ) must produce ( ) + ( ).
2. Homogeneity/scaling property: Homogeneity means that scaling the input signal by a
constant scales the output response by the same constant.
i.e., a x1(t) + b x2(t) → a y1(t) + b y2(t)   (1.64)
i.e., a x1[n] + b x2[n] → a y1[n] + b y2[n]   (1.65)
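A numerical sanity check of additivity and homogeneity is easy to set up; the sketch below (illustrative only; the test inputs and scale factors are arbitrary assumptions, and passing one test pair does not prove linearity in general) compares y(t) = 5x(t), which satisfies superposition, with y(t) = x(t) + 2, which does not.

import numpy as np

t = np.linspace(0, 1, 200)
x1, x2 = np.sin(2 * np.pi * t), np.cos(6 * np.pi * t)   # arbitrary test inputs
a, b = 2.0, -3.0                                        # arbitrary scale factors

def passes_superposition(system):
    lhs = system(a * x1 + b * x2)              # response to the weighted sum of inputs
    rhs = a * system(x1) + b * system(x2)      # weighted sum of the individual responses
    return np.allclose(lhs, rhs)

print(passes_superposition(lambda x: 5 * x))   # True : y(t) = 5 x(t) obeys Eq. (1.64)
print(passes_superposition(lambda x: x + 2))   # False: y(t) = x(t) + 2 violates it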
To test whether a system is time invariant, proceed as follows:
Delay the input signal by t0 and find the corresponding response of the system, y1(t).
Delay the output of the system for the unshifted input by t0; let this delayed response be y2(t).
Check whether y1(t) = y2(t). If they are equal, the system is time invariant; otherwise it is time
variant.
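The shift-then-compare procedure above can be coded directly for discrete-time systems; the sketch below (illustrative only; the test input, the shift n0 and the two example systems are assumed choices) compares the response to a delayed input with the delayed response, mirroring the method used in Example 1.5.

import numpy as np

n = np.arange(0, 50)
x = np.sin(0.1 * np.pi * n)               # arbitrary test input
n0 = 3                                    # assumed delay

def shift(sig, k):
    """Delay a finite sequence by k samples, padding with zeros."""
    return np.concatenate([np.zeros(k), sig[:-k]]) if k > 0 else sig

def is_time_invariant(system):
    y1 = system(shift(x, n0))             # response to the delayed input
    y2 = shift(system(x), n0)             # delayed response to the original input
    return np.allclose(y1, y2)

print(is_time_invariant(lambda s: 5 * s))                        # True : y[n] = 5 x[n]
print(is_time_invariant(lambda s: s * np.sin(0.2 * np.pi * n)))  # False: time-varying gain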
Example 1.5
Check whether the following systems are time invariant or not.
1. y(t) = 5x(t)
2. y(t) = x(t) sin(10πt)
3. y(t) = 3x(t²)
4. y(t) = 4^{x(t)}
5. y(t) = x²(t)
Solution:
1. Given, y(t) = 5x(t)
Response to the delayed input: y1(t) = 5x(t - t0). Output delayed by t0: y2(t) = y(t - t0) = 5x(t - t0).
Here, y1(t) = y2(t), so the system is time invariant.
2. Given, y(t) = x(t) sin(10πt)
Response to the delayed input: y1(t) = x(t - t0) sin(10πt). Output delayed by t0: y2(t) = x(t - t0) sin 10π(t - t0).
Here, y1(t) ≠ y2(t), so the system is time variant.
3. Given, y(t) = 3x(t²)
Response to the delayed input: y1(t) = 3x(t² - t0). Output delayed by t0: y2(t) = 3x((t - t0)²).
Here, y1(t) ≠ y2(t), so the system is time variant.
4. Given, y(t) = 4^{x(t)}
Response to the delayed input: y1(t) = 4^{x(t - t0)}. Output delayed by t0: y2(t) = 4^{x(t - t0)}.
Here, y1(t) = y2(t), so the system is time invariant.
5. Given, y(t) = x²(t)
Response to the delayed input: y1(t) = x(t - t0)². Output delayed by t0: y2(t) = x(t - t0)².
Here, y1(t) = y2(t), so the system is time invariant.
A causal system produces an output response that depends only on present and past values of
the input signal. In other words, causality means that the output of the system does not depend
on future inputs, but only on past and present inputs. On the other hand, the output of a noncausal
system depends upon present, past as well as future values of the input signal. All the physical
systems operating in real time in the real world are causal systems.
Mathematically, the output response y(t) at time t is determined solely by the input signal
x(τ) for τ ≤ t.
Example 1.6: Determine whether the given CT and DT systems are causal or noncausal.
1. y(t) = x(t + 1)
2. y(t) = x(t) + x(t - 1)
3. y[n] = n x[n] + x[n - 3]
Solution:
1. Given, y(t) = x(t + 1)
When t = 0, y(0) = x(1), which implies that the response at t = 0, i.e., y(0), depends
on the future value of the input, x(1).
When t = 1, y(1) = x(2), which implies that the response at t = 1, i.e., y(1), depends
on the future value of the input, x(2).
From the above analysis we can say that for any value of t, the system output depends on
future inputs. Hence the system is noncausal.
2. Given, y(t) = x(t) + x(t - 1)
When t = 0, y(0) = x(0) + x(-1), which implies that the response at t = 0, i.e., y(0),
depends on the present input x(0) and the past input x(-1).
When t = 1, y(1) = x(1) + x(0), which implies that the response at t = 1, i.e., y(1),
depends on the present input x(1) and the past input x(0).
From the above analysis we can say that for any value of t, the system output depends on
present and past values of the input. Hence the system is causal.
3. Given, y[n] = n x[n] + x[n - 3]
When n = 0, y[0] = 0·x[0] + x[-3], which implies that the response at n = 0, i.e., y[0],
depends on the present input x[0] and the past input x[-3].
When n = 1, y[1] = 1·x[1] + x[-2], which implies that the response at n = 1, i.e., y[1],
depends on the present input x[1] and the past input x[-2].
From the above analysis we can say that for any value of n, the DT system output depends
on present and past values of the input. Hence the system is causal.
Stability refers to the boundedness of the system's response. A system is considered stable if,
for bounded input signals, the output response remains bounded i.e., small inputs lead to
output that do not diverge. For example, if we apply only little pressure to push the object, it
will move only a little bit. In other words, if the input signal is finite, the output signal should
also be finite.
On the other hand, if a small signal causes the output signal to be arbitrarily large then that
system is called as unstable system. The examples of bounded and unbounded signals are
shown by Fig 1.35.
Stability is also defined through boundedness. The input signal x(t) is said to be bounded if
there exists a constant Mx (0 < Mx < ∞) such that |x(t)| ≤ Mx < ∞ for all t.
Similarly, the output signal is bounded if it satisfies the condition |y(t)| ≤ My < ∞.
Some of the examples of bounded input signal are step signal, decaying exponential signal
and impulse signal. Examples of unbounded input signal are ramp signal and increasing
exponential signal.
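Boundedness can be probed numerically with a bounded test input; the sketch below (illustrative only; the unit-step input and the horizon length are arbitrary assumptions) shows that a unit delay keeps a bounded input bounded, while a running-sum (integrator-like) system lets the output grow without bound.

import numpy as np

n = np.arange(0, 1000)
x = np.ones_like(n, dtype=float)             # bounded input: unit step, |x[n]| <= 1

y_delay = np.concatenate([[0.0], x[:-1]])    # y[n] = x[n-1]
y_accum = np.cumsum(x)                       # y[n] = sum of x[k] for k <= n (running sum)

print(np.max(np.abs(y_delay)))   # 1.0    -> output stays bounded (stable behaviour)
print(np.max(np.abs(y_accum)))   # 1000.0 -> grows with the horizon (unstable behaviour)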
Example 1.7: Determine whether the following systems are stable or unstable.
1. y(t) = cos(x(t))
2. y[n] = x[n - 1]
3. y(t) = t x(t)
4. y(t) = ∫_{-∞}^{t} x(τ) dτ
Solution:
1. Given, y(t) = cos(x(t))
The value of cos θ lies between -1 and +1 for any value of θ. Therefore, the output y(t) is
bounded for any value of the input x(t). Hence the given system is stable.
2. Given, y[n] = x[n - 1]
For an arbitrary bounded signal x[n], |x[n]| ≤ Mx for all n.
Then the delayed input x[n - 1] is also bounded by Mx,
|x[n - 1]| ≤ Mx for all n.
The output is
y[n] = x[n - 1], so |y[n]| ≤ Mx for all n.
Hence, the DT system is stable.
3. Given, y(t) = t x(t)
The given system is a time variant system, and so the test for stability should be performed
for specific inputs. There can be two cases for the behaviour of y(t).
Case 1: Let x(t) tend to ∞ or to a non-zero constant as t tends to infinity.
In this case, y(t) = t x(t) will be infinite as t tends to infinity, and so the system is unstable.
Case 2: Let x(t) tend to 0 (faster than 1/t) as t tends to infinity. In this case y(t) = t x(t) will tend to
zero as t tends to infinity, and so the system is stable.
4. Given, y(t) = ∫_{-∞}^{t} x(τ) dτ
For a bounded input such as the unit step x(t) = u(t), the output is the ramp r(t), which is
unbounded because the amplitude of the ramp is not finite and tends to infinity as t → ∞.
Hence the system is unstable.
1.8.5 Realizability:
Consider the first system, y(t) = x(t - 1); it is a causal system, because its output is a time-
delayed version of the original signal.
On the other hand, consider the second system, y(t) = x(t + 1); it is non-causal, because
its output is a time-advanced version of the input signal. This means, for example, that at
the output time t = 0, the system requires access to the value of the input signal at time t = 1.
Clearly, this is impossible in a realizable system, as nobody can predict the future.
UNIT SUMMARY
In this chapter, we have seen an introduction to signals and systems covering the
fundamental concepts and properties of CT and DT signals. It explores different signal types
and properties, including periodicity, determinism, and stochastic character. Special signals such as the unit
step, impulse, and ramp are discussed. Signals can exist in continuous or discrete domains with
continuous or discrete amplitudes. System properties, including linearity, time-invariance,
causality, stability, and realizability are also covered. Understanding signals and systems is
crucial for various engineering and scientific applications.
EXERCISES
Multiple Choice Questions and Answers
1. Signals and systems are relevant in which areas?
a) Everyday life
b) Engineering
c) Science
d) All of the above
Answer: d) All of the above
b) Discrete
c) Both
d) None
Answer: a) Continuous
7. Which property ensures that a system's output remains bounded for bounded input signals?
a) Linearity
b) Shift-invariance
c) Causality
d) Stability
Answer: d) Stability
10. Discuss the differences between continuous and discrete amplitude signals.
Numerical Problems
1. Consider a periodic signal with a period of T = 4 seconds. Find the frequency of the signal.
2. Determine if the following signal is absolutely integrable: x(t) = e^(-2t) for t ≥ 0.
3. Calculate the average value of the signal x(t) = 3sin(2πt) over the interval 0 ≤ t ≤ 2 seconds.
4. Given a system with the impulse response h(t) = 2e^(-t)u(t), where u(t) is the unit step
function, find the response of the system to the input signal x(t) = 3u(t).
5. Determine if the system described by the difference equation y[n] = 0.5y[n-1] + x[n] is
linear or nonlinear.
6. Consider a discrete-time system with the input signal x[n] = {1, 2, 3, 4} and the impulse
response h[n] = {1, -1, 2, -2}. Calculate the output signal y[n] using the convolution sum.
7. Determine if the system described by the following difference equation is time-invariant
or time-varying: y[n] = x[n] + x[n-1].
8. Test the stability of the continuous-time system with the transfer function H(s) = 1/(s + 2).
9. Determine if the system with the transfer function H(z) = (1 - z^(-1))/(1 + z^(-1)) is stable
in the discrete-time domain.
10. Given a system with the input signal x(t) = 4sin(3πt) and the output signal y(t) = 2sin(3πt
+ π/4), calculate the gain of the system.
KNOW MORE
Signals and systems are fundamental concepts that permeate our daily lives and play a vital role
in various fields of engineering and science. Signals exhibit different properties such as
periodicity, which describes their repetitive nature, and absolute integrability, which quantifies
the energy or power content of a signal. Signals can be deterministic, meaning they have a
predictable behavior, or stochastic, displaying random characteristics. Special signals of
significance include the unit step, representing abrupt changes, the unit impulse, denoting
instantaneous events, sinusoids, fundamental periodic waveforms, and complex exponentials,
integral to signal processing. Time-limited signals have finite duration, and they can be
continuous or discrete in both the time and amplitude domains. Systems, on the other hand,
possess distinct properties that govern their behavior. Linearity signifies that the response of a
system to a sum of inputs is the sum of their individual responses, while additivity and
homogeneity describe their scaling behavior. Shift-invariance indicates that shifting the input
signal leads to a corresponding shift in the output response. Causality denotes that the output
depends only on past and present inputs. Stability ensures that the system produces bounded
output responses for bounded inputs, and realizability signifies the practical implementability
of the system using realizable components or algorithms. Understanding these properties and
concepts is vital in comprehending the behavior and characteristics of signals and systems,
enabling their analysis, manipulation, and design in numerous applications across engineering
and scientific disciplines.
2 Behavior of Continuous and Discrete-time LTI Systems
UNIT SPECIFICS
Through this unit we have discussed the following aspects:
Impulse response and step response provide information about a system's characteristics
and its response to specific inputs.
Convolution is used to compute the output of a system by integrating the product of the input
and the shifted impulse response.
LTI systems can process aperiodic and convergent inputs, and their output can be computed
through convolution or other techniques.
Cascade interconnections involve connecting multiple LTI systems in series by convolving
their impulse responses.
Causality refers to a system's output depending only on past or present input values, while
stability means the output remains bounded for any bounded input.
Differential equations represent continuous-time LTI systems, while difference equations
represent discrete-time LTI systems.
State-space representation describes a system using first-order differential or difference
equations, including state variables and input-output relationships.
State-space analysis allows for studying the behavior of systems in terms of their state
variables and can handle multi-input, multi-output systems.
The state transition matrix relates the initial state to the state at any given time and is
essential in solving state-space equations.
Frequency response describes how an LTI system responds to different frequencies in the
input, and it is related to the impulse response through Fourier analysis.
Periodic inputs, such as sinusoidal waves, can be analyzed using the notion of frequency
response to understand the system's behavior in the frequency domain.
This unit focuses on the behavior of Continuous and Discrete-time Linear Time-Invariant (LTI)
Systems. It covers key topics including impulse response, step response, and convolution. The
unit explores how LTI systems respond to aperiodic and convergent inputs, emphasizing
techniques such as convolution to determine the output. Cascade interconnections, where
multiple systems are connected in series, are examined by convolving their impulse responses.
Causality and stability in LTI systems are characterized, where causality refers to the past and
present dependency between input and output, and stability ensures bounded output for any
bounded input.
The unit addresses system representation through differential equations for continuous-time
systems and difference equations for discrete-time systems. It introduces state-space
representation, which describes systems using first-order differential or difference equations,
facilitating state-space analysis and the study of multi-input and multi-output systems. The role
of the state transition matrix is explored, connecting the initial state to the state at any given
time. The unit also covers periodic inputs applied to LTI systems, investigating the notion of
frequency response and its relationship with the impulse response. This understanding offers
insights into how LTI systems respond to different frequencies in the input signal.
Overall, this unit provides comprehensive coverage of impulse response, step response,
convolution, input-output behavior, causality, stability, system representation, state-space
analysis, state transition matrix, periodic inputs, and the relationship between frequency
response and impulse response. It equips learners with a solid foundation in analyzing and
understanding the behavior of Continuous and Discrete-time LTI Systems.
RATIONALE
Understanding the behavior of Continuous and Discrete-time Linear Time-Invariant (LTI)
Systems is crucial in various engineering and scientific disciplines. This 8-hour unit on
“Behavior of Continuous and Discrete-time LTI Systems” is designed to provide students with a
comprehensive understanding of LTI systems and their characteristics.
Impulse response and step response are fundamental concepts in LTI systems. By studying these
responses, students gain insights into how a system reacts to specific inputs and determine its
dynamic behavior. Convolution is a key operation used to compute the output of a system, and
it plays a vital role in analyzing LTI systems.
The unit focuses on the input-output behavior of LTI systems with a particular emphasis on
aperiodic and convergent inputs. Students learn how to analyze and determine the system's
response using techniques such as convolution. Cascade interconnections of LTI systems are
explored, as they are commonly encountered in real-world applications. By understanding the
convolution of impulse responses, students can analyze and predict the behavior of
interconnected systems.
Signals and Systems | 53
Characterizing causality and stability is essential for assessing the reliability and predictability
of LTI systems. Students examine the concepts of causality, where the output depends on past
and present inputs, and stability, which ensures bounded output for any bounded input. These
characterizations provide valuable insights into system behavior and performance.
System representation is an important aspect covered in the unit. Students learn to represent
LTI systems using differential equations for continuous-time systems and difference equations
for discrete-time systems. State-space representation is introduced as a powerful method to
describe and analyze complex systems. It provides a framework for studying multi-input, multi-
output systems and understanding their interactions.
The role of the state transition matrix is explored, emphasizing its significance in relating the
initial state to the state at any given time. This matrix plays a crucial role in solving state-space
equations and analyzing system behavior over time.
Periodic inputs and the notion of frequency response are examined to understand how LTI
systems respond to different frequencies in the input. The frequency response is related to the
impulse response through Fourier analysis, enabling students to analyze system behavior in the
frequency domain.
Overall, this unit equips students with the necessary knowledge and skills to analyze and
understand the behavior of Continuous and Discrete-time LTI Systems. The concepts covered,
such as impulse response, step response, convolution, input-output behavior, causality,
stability, system representation, state-space analysis, state transition matrix, periodic inputs,
and frequency response, are essential for successful engineering and scientific applications.
PRE-REQUISITES
UNIT OUTCOMES
List of outcomes of this unit is as follows:
U2-O1: Understand the concept of impulse response and step response.
U2-O2: Apply convolution to analyze the behavior of LTI systems.
U2-O3: Analyze the input-output behavior of LTI systems with aperiodic convergent inputs.
U2-O4: Understand and apply cascade interconnections of LTI systems.
U2-O5: Characterize the causality and stability of LTI systems.
U2-O6: Represent LTI systems through differential equations and difference equations.
U2-O7: Understand and apply state-space representation of systems.
U2-O8: Perform state-space analysis of LTI systems.
U2-O9: Analyze multi-input, multi-output systems.
U2-O10: Understand the role and application of the State Transition Matrix.
U2-O11: Analyze the behavior of LTI systems with periodic inputs.
U2-O12: Understand the concept of frequency response and its relation to the impulse
response.
"Education is the most powerful weapon which you can use to change the world."
- Nelson Mandela
Signals and Systems | 55
2.1 Introduction
In chapter 1, we have discussed types of systems and their properties. Two properties namely
linearity and time-invariance play very important roles in the analysis of signals and systems
since most of the practical systems possess these two properties. We call such systems as linear
time-invariant (LTI) systems. In our study of signals and systems, we will be especially
interested in systems that demonstrate both properties, which together allow the use of some of
the most powerful tools of signal processing.
In Figure 2.1(a) above, the input x(t) to the linear system gives the output y(t). If x(t) is scaled by a
value α and passed through the same system, as in Figure 2.1(b), the output will also be scaled
by α.
Superposition Principle: The linear system also obeys the principle of superposition. This
means that if two inputs are added together and passed through a linear system, the output will
be the sum of the outputs corresponding to their individual inputs. It is demonstrated in Figure
2.2.
56 | Behavior of Continuous & Discrete-time LTI Systems
That is, if the cases in Figure 2.2 (a) and (b) are true, then Figure 2.2 (c) is also true for a linear
system. The scaling property mentioned above still holds in conjunction with the superposition
principle. Therefore, if the inputs x1(t) and x2(t) are scaled by factors α and β, respectively, then the
sum of these scaled inputs gives the sum of the individual scaled outputs: α x1(t) + β x2(t) → α y1(t) + β y2(t).
In this figure, x(t) and x(t − t0) are passed through the system TI. Because the system TI is
time-invariant, the shifted input x(t − t0) produces the correspondingly shifted output y(t − t0).
Whether a system is time-invariant or time-variant can be seen in the differential equation (or
difference equation) describing it. Time-invariant systems are modelled with constant-coefficient
equations. A constant-coefficient differential (or difference) equation means that the parameters of
the system are not changing over time, and an input applied now will give the same result as the
same input applied later.
As LTI systems are a subset of linear systems, they obey the principle of superposition. In the
figure below, we see the effect of applying time-invariance to the superposition definition in
the linear systems section above.
y[n] = Σ_{k=−∞}^{+∞} x[k] h_k[n]                                   (2.1)
If the linear system is time invariant, then the responses to time-shifted unit impulses are all
time-shifted versions of the same impulse response:
h_k[n] = h_0[n − k]                                                 (2.2)
y[n] = Σ_{k=−∞}^{+∞} x[k] h[n − k]                                  (2.3)
This result is referred to as the convolution sum or superposition sum, and the operation on the
right-hand side of the equation is known as the convolution of the sequences x[n] and h[n].
The convolution operation is usually represented symbolically as
y[n] = x[n] ∗ h[n]                                                  (2.4)
Recall the definition of the unit pulse δ_Δ(t); we can define a signal x(t) as a linear combination of
delayed pulses of height x(kΔ):
x(t) = Σ_{k=−∞}^{+∞} x(kΔ) δ_Δ(t − kΔ) Δ                            (2.6)
Taking the limit as Δ → 0, we obtain the integral form of Eq. (2.6), in which, as Δ → 0:
(1) the summation approaches an integral,
(2) kΔ → τ and x(kΔ) → x(τ),
(3) Δ → dτ,
(4) δ_Δ(t − kΔ) → δ(t − τ),
so that x(t) = ∫_{−∞}^{+∞} x(τ) δ(t − τ) dτ                          (2.7)
Eq. (2.7) can also be obtained by using the sampling property of the impulse function. If we
consider t fixed and τ as the variable of integration, then we have
∫_{−∞}^{+∞} x(τ) δ(t − τ) dτ = ∫_{−∞}^{+∞} x(t) δ(t − τ) dτ = x(t) ∫_{−∞}^{+∞} δ(t − τ) dτ = x(t)   (2.8)
s[n] = Σ_{k=−∞}^{n} h[k]                                            (2.9)
It can be seen that the step response of a discrete-time LTI system is the running sum of its
impulse response. Conversely, the impulse response of a discrete-time LTI system is the first
difference of its step response.
Similarly, in continuous time, the step response of an LTI system is the running integral of its
impulse response,
s(t) = ∫_{−∞}^{t} h(τ) dτ                                           (2.11)
and the unit impulse response is the first derivative of the unit step response,
h(t) = ds(t)/dt = s′(t)                                             (2.12)
Therefore, in both continuous and discrete time, the unit step response can also be used to
characterize an LTI system.
y(t) = x(t) ∗ h(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ                   (2.13)
Equation (2.13) is commonly called the convolution integral. Thus, we have the fundamental
result that the output of any continuous-time LTI system is the convolution of the input x(t)
with the impulse response h(t) of the system. Fig. 2.10 illustrates the definition of the impulse
response h(t) and the relationship of Eq. (2.13).
1. Commutative:
x(t) ∗ h(t) = h(t) ∗ x(t)
2. Associative:
{x(t) ∗ h1(t)} ∗ h2(t) = x(t) ∗ {h1(t) ∗ h2(t)}
3. Distributive:
x(t) ∗ {h1(t) + h2(t)} = x(t) ∗ h1(t) + x(t) ∗ h2(t)
Instead of an arbitrary input x(t), suppose we apply the standard elementary signal known as the
impulse signal δ(t). The response of the system to this input is, by definition, the impulse response
h(t). The convolution integral of Eq. (2.13) is therefore very useful for calculating the response of
the system to any input once its impulse response is known.
If the input is an impulse signal δ[n], then the system produces the output corresponding to the
impulse signal; this output is called the impulse response h[n].
If the input impulse signal is delayed by k, i.e., δ[n − k], and is applied to the same discrete-time
system DTS, then the output is also delayed by k, i.e., h[n − k].
Expressing the input as a weighted sum of shifted impulses from −∞ to +∞ and passing it through
the system, the output becomes the corresponding weighted sum of shifted impulse responses on the
right-hand side, which is called the convolution sum. The convolution sum is important for
calculating the output of discrete-time systems.
y[n] = Σ_{k=−∞}^{+∞} x[k] h[n − k]                                  (2.15)
y[n] = x[n] ∗ h[n]                                                  (2.16)
64 | Behavior of Continuous & Discrete-time LTI Systems
Figure 2.18: Equivalences for the series interconnection of continuous-time LTI systems.
(a) First equivalence; (b) second equivalence.
Consider two LTI systems with impulse responses h1 and h2 that are connected in a series
configuration, as shown on the left side of Fig. 2.18(a). From the block diagram on the left side
of Fig. 2.18(a), we have
y = (x ∗ h1) ∗ h2                                                   (2.17)
Due to the associativity of convolution, however, this is equivalent to
y = x ∗ (h1 ∗ h2)                                                   (2.18)
Thus, the series interconnection of two LTI systems behaves as a single LTI system with
impulse response h1 ∗ h2. In other words, we have the equivalence shown in Fig. 2.18(a).
Consider two LTI systems with impulse responses h1 and h2 that are connected in a series
configuration, as shown on the left side of Figure 2.18(b). From the block diagram on the left
side of Figure 2.18(b), we have
y = (x ∗ h1) ∗ h2                                                   (2.19)
y = x ∗ (h1 ∗ h2) = x ∗ (h2 ∗ h1)
y = (x ∗ h2) ∗ h1                                                   (2.20)
Thus, interchanging the two LTI systems does not change the behaviour of the overall system
with input x and output y. In other words, we have the equivalence shown in Figure 2.18(b).
Consider two LTI systems with impulse responses h1 and h2 that are connected in a parallel
configuration, as shown on the left side of Figure 2.19. From the block diagram on the left side
of Figure 2.19, we have
y = x ∗ h1 + x ∗ h2                                                 (2.21)
y = x ∗ (h1 + h2)                                                   (2.22)
Thus, the parallel interconnection of two LTI systems behaves as a single LTI system with
impulse response h1 + h2. In other words, we have the equivalence shown in Figure 2.19.
Similarly, from the right half of the block diagram of Figure 2.20, we can write
y(t) = w(t) ∗ h3(t)                                                 (2.23)
Substituting the expression for w(t) into Eq. (2.23), we obtain
y = (x ∗ [δ + h1 + h2]) ∗ h3
  = x ∗ [h3 + h1 ∗ h3 + h2 ∗ h3]                                    (2.24)
Thus, from Eq. (2.24) the impulse response h of the overall system is
h(t) = h3 + h1 ∗ h3 + h2 ∗ h3                                       (2.25)
|y[n]| ≤ Σ_{k=−∞}^{+∞} |h[k]| |x[n − k]|                            (2.28)
so y[n] is bounded in magnitude, and hence the system is stable, if
Σ_{k=−∞}^{+∞} |h[k]| < ∞                                            (2.29)
So a discrete-time LTI system is stable if Eq. (2.29) is satisfied. A similar analysis applies to
continuous-time LTI systems, for which stability is equivalent to
∫_{−∞}^{+∞} |h(τ)| dτ < ∞                                           (2.30)
Example: consider a system that is pure time shift in either continuous time or discrete time.
In discrete time,
between the system's input, output, and their derivatives with respect to time. The general
structure of such a representation is given by Eq. (2.31),
Σ_{k=0}^{N} a_k (d^k y(t)/dt^k) = Σ_{k=0}^{M} b_k (d^k x(t)/dt^k)   (2.31)
Any general output y(t) can also be represented in terms of two signals, y_p(t) and y_h(t), which are
the particular solution and the homogeneous solution respectively. The homogeneous solution
satisfies
Σ_{k=0}^{N} a_k (d^k y_h(t)/dt^k) = 0                               (2.32)
The system described by the above differential equation is linear if all the initial conditions are
equal to 0. Similarly, the system is time invariant if it is at initial rest, i.e., if x(t) = 0 for t ≤ t0
then y(t) = 0 for t ≤ t0. Hence, the initial conditions become
y(t0) = dy(t0)/dt = … = d^{N−1} y(t0)/dt^{N−1} = 0                  (2.33)
That means the value of the output at t0 and its derivatives up to order (N − 1) are 0. So, if the
system satisfies both the conditions of linearity and time invariance, then it is a linear time-
invariant (LTI) system.
For discrete-time LTI systems, the corresponding representation is the linear constant-coefficient
difference equation
Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]                 (2.34)
ẋ(t) = A x(t) + B u(t)                                              (2.36)
y(t) = C x(t) + D u(t)                                              (2.37)
Eq. (2.36) and Eq. (2.37) represent the state equations and output equations respectively.
The state equation can be written in expanded matrix form as in Eq. (2.38),
[ẋ1(t)]   [a11 … a1q] [x1(t)]   [b11 … b1m] [u1(t)]
[  ⋮  ] = [ ⋮   ⋱  ⋮ ] [  ⋮  ] + [ ⋮   ⋱  ⋮ ] [  ⋮  ]               (2.38)
[ẋq(t)]   [aq1 … aqq] [xq(t)]   [bq1 … bqm] [um(t)]
Where
A = System matrix
B = Input matrix
x(t) = State vector
u(t) = Input vector
The output equations can be written in expanded matrix form as in Eq. (2.40),
[y1(t)]   [c11 … c1q] [x1(t)]   [d11 … d1m] [u1(t)]
[  ⋮  ] = [ ⋮   ⋱  ⋮ ] [  ⋮  ] + [ ⋮   ⋱  ⋮ ] [  ⋮  ]               (2.40)
[yp(t)]   [cp1 … cpq] [xq(t)]   [dp1 … dpm] [um(t)]
Where
C = Output matrix
D = Feedforward (direct transmission) matrix; note that the state transition matrix, discussed
below, is a different quantity.
72 | Behavior of Continuous & Discrete-time LTI Systems
φ(s) = (sI − A)⁻¹                                                   (2.49)
X(s) = φ(s) B U(s) + φ(s) x(0)
Taking the inverse Laplace transform,
x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ                     (2.51)
Eq. (2.51) is the time-domain solution of the state equations of the CT system.
The matrix φ(t) = e^{At} = L⁻¹{(sI − A)⁻¹} is called the state transition matrix of the CT system.
Solution:
For t > 0:
y(t) = 2 ∫_0^t e^(−τ) e^(2(t−τ)) dτ
     = 2 ∫_0^t e^(2t − 3τ) dτ
     = 2 e^(2t) ∫_0^t e^(−3τ) dτ
     = (2/3) e^(2t) (1 − e^(−3t)) = (2/3)(e^(2t) − e^(−t))
For t ≤ 0:
y(t) = 0
x[n] = [1, 2, 3] and h[n] = [2, −1, 3]
Solution:
For n = 0:
y[0] = (1 × 2) = 2
For n = 1:
y[1] = (1 × −1) + (2 × 2) = 3
For n = 2:
y[2] = (1 × 3) + (2 × −1) + (3 × 2) = 5
For n = 3:
y[3] = (2 × 3) + (3 × −1) = 3
For n = 4:
y[4] = (3 × 3) = 9
y[n] = [2, 3, 5, 3, 9]
MIMO refers to systems that have multiple inputs and multiple outputs. These systems are
characterized by their ability to handle multiple control inputs and generate multiple outputs
simultaneously. The state-space representation for a MIMO system is an extension of the single-
input, single-output (SISO) case. In the MIMO representation, the state equations and output
equations are modified to accommodate multiple inputs and outputs.
State equations:
[ẋ1]   [−2   0] [x1]   [1  0] [u1]
[ẋ2] = [ 1  −3] [x2] + [0  2] [u2]                                  (2.52)
Output equations:
[y1]   [ 1  2] [x1]   [1   0] [u1]
[y2] = [−1  0] [x2] + [0  −1] [u2]                                  (2.53)
In this representation:
x = [x1, x2]ᵀ is a 2-dimensional state vector representing the internal state of the system.
u = [u1, u2]ᵀ is a 2-dimensional input vector representing the inputs to the system.
y = [y1, y2]ᵀ is a 2-dimensional output vector representing the outputs from the system.
The state matrix A is [[−2, 0], [1, −3]].
The input matrix B is [[1, 0], [0, 2]].
The output matrix C is [[1, 2], [−1, 0]].
The feedforward matrix D is [[1, 0], [0, −1]].
To analyze this system, we can perform various analyses such as stability analysis,
controllability analysis, observability analysis and system response analysis.
For stability analysis, we can determine the eigenvalues of the state matrix A. In this case, the
eigenvalues are -2 and -3, indicating that the system is stable. For controllability analysis, we
can check if the controllability matrix Co has full rank. The controllability matrix Co is formed
by concatenating the input matrix B with powers of the state matrix A. For observability
analysis, we can examine if the observability matrix Ob has full rank. The observability matrix
Ob is formed by concatenating the output matrix C with powers of the state matrix A. To
analyze the system's response, we can simulate the state equations and output equations with
appropriate inputs. For example, if we apply a step input u1(t) = 1 and u2(t) = 0, we can
numerically solve the state equations and output equations to obtain the time-domain response
of the system.
By analyzing the state variables x1(t), x2(t) and the outputs y1(t), y2(t), we can observe the
behavior of the system, including transient response, steady-state behavior, and the interaction
between inputs and outputs.
Example 2.4: Find the Laplace-domain and time-domain state transition matrix if
A = [[0, 1], [−2, −3]]
Solution:
The Laplace-domain state transition matrix is φ(s) = (sI − A)⁻¹. To find φ(t) we must take the
inverse Laplace transform of every term in the matrix.
When a periodic input signal is applied to a linear time-invariant (LTI) system, the response of
the system also becomes periodic. In this scenario, the output of the LTI system exhibits the
same periodicity as the input signal, but with potentially different amplitudes and phases.
When a periodic signal with period N is passed through a linear, time-invariant system, the
output is also periodic and can be expressed as a sum of complex exponential signals.
Let's consider the input signal x_k[n] = e^{j(2πk/N)n},
where n is the discrete-time index and k ranges from 0 to N − 1. This signal represents a complex
exponential with a frequency of 2πk/N.
According to the properties of linear, time-invariant systems, the output signal can be obtained
by multiplying the frequency response of the system, denoted by H(2πk/N), with the input
signal's complex exponential component x_k[n].
Hence, the output for each component signal can be written as:
y_k[n] = H(2πk/N) e^{j(2πk/N)n}                                     (2.54)
Since H(2πk/N) is a constant value for each component signal, we can combine it with the
exponential term:
y_k[n] = |H(2πk/N)| e^{j((2πk/N)n + θ_k)}                           (2.55)
Here, θ_k represents the phase shift introduced by the frequency response of the system.
When a periodic input signal is passed through a linear, time-invariant system, the output signal
remains periodic and can be represented as a sum of complex exponential signals. Each
component signal is multiplied by the frequency response H(2πk/N) evaluated at its frequency,
resulting in a modified exponential term in the output.
The linearity property of the system then allows us to express the total output signal as
y[n] = Σ_{k=0}^{N−1} c_k H(2πk/N) e^{j(2πk/N)n}                     (2.56)
where c_k are the coefficients of the periodic input's exponential components.
2.16 The Notion of frequency response and its relation to impulse response
The notion of frequency response is an important concept in the field of signal processing. It
provides information about how a system or a filter responds to different frequencies present in
the input signal. It is obtained by taking the Fourier transform of the impulse response. The
80 | Behavior of Continuous & Discrete-time LTI Systems
frequency response reveals how the system responds to different frequencies, while the impulse
response shows its response to an impulse signal.
y(n) = Σ_{k} x(n − k) h(k)                                          (2.57)
Where x(n) is the input signal, h(n) is the impulse response of the system, and y(n) is the
output signal. The convolution sum represents the mathematical operation of convolving the
input signal with the impulse response to obtain the output signal.
Now, let's consider a specific input signal in the form of a complex exponential function:
x(n) = e^{jωn}
Where j represents the imaginary unit, ω is the angular frequency, and n belongs to the set of
integers.
By substituting this input signal into the convolution sum equation, we get:
y[n] = Σ_{k} h[k] e^{jω(n − k)}                                     (2.58)
y[n] = e^{jωn} Σ_{k} h[k] e^{−jωk}                                  (2.59)
Notice that the term inside the summation, e^{−jωk}, is the complex conjugate of e^{jωk}.
Now, if we compare this expression to the form of the output signal when the input is a complex
exponential with frequency ω, given as y(n) = H(ω) e^{jωn}, we can conclude that:
H(ω) = Σ_{k} h[k] e^{−jωk}                                          (2.60)
The term ( ) in this equation is referred to as the frequency response. It represents the
relationship between the input signal's frequency ω and the output signal's amplitude and phase
shift. The frequency response provides information about how the system or filter amplifies or
attenuates specific frequencies.
Therefore, the frequency response H (ω) is obtained by taking the discrete-time Fourier
transform (DTFT) of the impulse response h (n). The frequency response describes how the
system or filter responds to different frequencies present in the input signal, and it is a
fundamental concept in signal processing.
UNIT SUMMARY
In this unit on the behavior of continuous and discrete-time LTI systems, we covered various
important topics. These include understanding impulse and step responses, analysing input-
output behavior with aperiodic convergent inputs, cascade interconnections, causality and
stability characterization, system representation through differential equations and difference
equations, state-space representation, state-space analysis, multi-input multi-output systems,
the role of the state transition matrix, periodic inputs and the notion of frequency response.
Overall, this unit provided a comprehensive overview of the behavior and analysis of LTI
systems in both continuous and discrete-time domains.
82 | Behavior of Continuous & Discrete-time LTI Systems
EXERCISES
3. Causality in LTI systems implies that the output of the system depends on:
a) Future inputs
b) Past inputs
c) Present inputs only
d) Both past and future inputs
Answer: b) Past inputs
c) Z-transform
d) Convolution
Answer: b) Laplace transform
10. The relationship between the frequency response and impulse response of an LTI system is
given by:
a) Convolution theorem
b) Parseval's theorem
c) Nyquist criterion
d) Plancherel's theorem
Answer: a) Convolution theorem
7. What are difference equations, and how are they used to represent LTI systems?
10. How do periodic inputs affect LTI systems, and what is the relationship between the
frequency response and the impulse response?
Numerical Problems
1. Given the impulse response h(t) = 3e^(-2t), calculate the step response of the system.
2. Find the convolution of the two sequences x[n] = {1, 2, 3} and h[n] = {2, -1, 0}.
Signals and Systems | 85
3. Consider an LTI system with the input x(t) = 2cos(3t). If the impulse response is h(t) = e^(-
t), determine the output y(t).
5. Represent an LTI system through a differential equation: y''(t) + 3y'(t) + 2y(t) = x(t), where
y(0) = 0 and y'(0) = 1.
6. Calculate the state-space representation of a multi-input, multi-output LTI system with two
state variables and two inputs.
7. Find the State Transition Matrix for a second-order LTI system described by the state-space
equations: ẋ(t) = Ax(t) + Bu(t) and y(t) = Cx(t) + Du(t).
8. Determine the response of a continuous-time LTI system with the impulse response h(t) =
sin(2πt) to a periodic input signal with a frequency of 10 Hz and an amplitude of 3.
9. Analyze the frequency response of an LTI system with the impulse response h(t) = e^(-
t)cos(2πt).
10. Investigate the relationship between the impulse response and the frequency response of an
LTI system using a specific example.
KNOW MORE
The study of continuous and discrete-time LTI systems involves analyzing their behavior using
concepts like impulse response, step response, convolution, and input-output behavior with
periodic inputs. Understanding causality and stability is crucial for characterizing these
systems. System representation can be done through differential equations or difference
equations, with state-space analysis providing a comprehensive approach. Multi-input, multi-
output systems and the role of the state transition matrix are also studied. Additionally, the
notion of frequency response and its relation to the impulse response is explored. These
concepts form the foundation for analyzing and designing LTI systems in various applications.
In summary, the study of continuous and discrete-time LTI systems includes analyzing impulse
86 | Behavior of Continuous & Discrete-time LTI Systems
and step responses, convolution, input-output behavior with periodic inputs, causality,
stability, system representation through differential or difference equations, state-space
analysis, multi-input multi-output systems, state transition matrix, and the relation between
frequency response and impulse response. These concepts are essential for understanding and
designing LTI systems in diverse applications.
3 Fourier Series and
Transform
UNIT SPECIFICS
Through this unit we have discussed the following aspects:
What is the Fourier series, and why was it developed?
Fourier series representation of periodic signals, waveform symmetries, calculation of Fourier
coefficients.
Fourier transform, convolution/multiplication and their effect in the frequency domain.
Magnitude and phase response, Fourier domain duality.
The Discrete-Time Fourier Transform (DTFT) and the Discrete Fourier Transform (DFT).
RATIONALE
The unit on “Fourier Series and Transform" helps students understand the behavior of
Continuous and Discrete-time signals. This 6-hour unit is designed to provide students with a
comprehensive understanding of periodic and non-periodic signals along with their frequency
domain representation.
The unit focuses on Fourier series representation of periodic signals along with the properties.
The students can analyze the behavior of signal by extracting the frequency components from
the signal. The behavior will be found for both CT and DT signals.
The concepts covered include the Fourier series representation of periodic signals, waveform
symmetries, calculation of Fourier coefficients, the Fourier transform, convolution/multiplication
and their effect in the frequency domain, magnitude and phase response, Fourier domain
duality, the Discrete-Time Fourier Transform (DTFT) and the Discrete Fourier Transform
(DFT).
PRE-REQUISITES
1. Strong understanding of mathematics, including algebra, calculus, and complex numbers.
2. Familiarity with basic concepts in signals and systems, such as time-domain and periodic,
non-periodic signals
UNIT OUTCOMES
List of outcomes of this unit is as follows:
U3-O1: Be able to compute the frequency components of the signal.
U3-O2: Be able to predict how the signal will interact with linear systems and circuits using
frequency response methods.
U3-O3: Be able to extract the Fourier coefficients from the signal.
U3-O4: Be able to analyze the signal directly from its properties.
U3-O5: Be able to analyze the magnitude and phase response of the given signal.
3.1 Introduction
Earlier we have discussed the time domain representation of signals and systems. We have
also seen the types of signals that are periodic and non-periodic. The French mathematician
Jean Baptiste Joseph Fourier showed that any periodic non-sinusoidal signal could be
represented in terms of linear weighted sum of harmonic sinusoidal signals. This
representation is called as Fourier series representation.
On the other hand, the Fourier representation of the aperiodic or non-periodic signals is
performed by treating them as periodic signals with an infinite fundamental period. In this
case, the non-periodic signals are represented as a function of frequency called Fourier
Transform. Fourier domain representation of the signal is another name for Fourier
Transform.
The Fourier representations of the signals are used for the frequency domain analysis of
signals. It enables us to extract the amplitude and phase of various frequency components
present in the signal.
Figure 3.1 shows the types of signals and their corresponding Fourier representations.
It may be noted that cos mΩ0t and sin nΩ0t are orthogonal functions over one period for
all integer values of m and n. This implies that ∫_T cos mΩ0t sin nΩ0t dt = 0 for any
arbitrary values of m and n.
Equation (3.1) can be extended as,
x(t) = a0/2 + a1 cos Ω0t + a2 cos 2Ω0t + ⋯ + b1 sin Ω0t + b2 sin 2Ω0t + ⋯        (3.2)
a0 = (2/T) ∫_0^T x(t) dt  or  a0 = (2/T) ∫_{−T/2}^{T/2} x(t) dt                  (3.3)
2. The signal x(t) must have finite number of maxima and minima in one period.
3. The signal x(t) must have finite number of discontinuities in one period.
If any signal x(t) satisfies the above Dirichlet’s conditions, the Fourier series represented by
equation (3.1) converges i.e., the sum becomes equal to x(t) except at the point of
discontinuities.
Signals and Systems | 91
Evaluation of a0: Integrate both sides of Eq. (3.1) over one period T,
∫_0^T x(t) dt = ∫_0^T (a0/2) dt + Σ_n a_n ∫_0^T cos nΩ0t dt + Σ_n b_n ∫_0^T sin nΩ0t dt
Over a full period the integrals of cos nΩ0t and sin nΩ0t are zero. Hence,
∫_0^T x(t) dt = a0 T/2
∴ a0 = (2/T) ∫_0^T x(t) dt
Evaluation of a_n: Multiply both sides of Eq. (3.1) by cos mΩ0t and integrate over one period,
∫_0^T x(t) cos mΩ0t dt = ∫_0^T (a0/2) cos mΩ0t dt + Σ_n a_n ∫_0^T cos nΩ0t cos mΩ0t dt
                         + Σ_n b_n ∫_0^T sin nΩ0t cos mΩ0t dt
By orthogonality, all the definite integrals on the right-hand side become zero except
∫_0^T a_m cos² mΩ0t dt. Therefore,
∫_0^T x(t) cos mΩ0t dt = a_m ∫_0^T (1 + cos 2mΩ0t)/2 dt = a_m T/2
∴ a_n = (2/T) ∫_0^T x(t) cos nΩ0t dt
Evaluation of b_n: Multiply both sides of Eq. (3.1) by sin mΩ0t and integrate over one period,
∫_0^T x(t) sin mΩ0t dt = ∫_0^T (a0/2) sin mΩ0t dt + Σ_n a_n ∫_0^T cos nΩ0t sin mΩ0t dt
                         + Σ_n b_n ∫_0^T sin nΩ0t sin mΩ0t dt
By orthogonality, all the definite integrals on the right-hand side become zero except
∫_0^T b_m sin² mΩ0t dt. Therefore,
∫_0^T x(t) sin mΩ0t dt = b_m ∫_0^T (1 − cos 2mΩ0t)/2 dt = b_m T/2
∴ b_n = (2/T) ∫_0^T x(t) sin nΩ0t dt
c_n = (1/T) ∫_0^T x(t) e^{−jnΩ0t} dt  or  c_n = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jnΩ0t} dt     (3.9)
Proof: Consider the exponential form of the Fourier series of x(t) given by Eq. (3.8),
x(t) = Σ_{n=−∞}^{+∞} c_n e^{jnΩ0t} = ⋯ + c_{−1} e^{−jΩ0t} + c_0 + c_1 e^{jΩ0t} + ⋯ + c_m e^{jmΩ0t} + ⋯
Multiply the above equation by e^{−jmΩ0t} and integrate over one period 0 to T:
∫_0^T x(t) e^{−jmΩ0t} dt = ⋯ + c_0 ∫_0^T e^{−jmΩ0t} dt + ⋯ + c_m ∫_0^T dt + ⋯
In the above equation all the definite integrals become zero except c_m ∫_0^T dt. Hence,
∫_0^T x(t) e^{−jmΩ0t} dt = c_m [t]_0^T = c_m T
∴ c_n = (1/T) ∫_0^T x(t) e^{−jnΩ0t} dt
3.3.3 Relationship between the trigonometric and exponential forms of the Fourier coefficients
The relationship between the trigonometric and exponential Fourier coefficients can be
written as,
c_0 = a_0 / 2                                                       (3.10)
c_{±n} = (a_n ∓ j b_n) / 2   for n = ±1, ±2, ±3, …                  (3.11)
96 | Fourier Series and Transform
a0 = (4/T) ∫_0^{T/2} x(t) dt                                        (3.13)
a_n = (4/T) ∫_0^{T/2} x(t) cos nΩ0t dt                              (3.14)
b_n = 0                                                             (3.15)
Proof 1: Consider Eq. (3.3),
a0 = (2/T) ∫_{−T/2}^{T/2} x(t) dt = (2/T) ∫_{−T/2}^{0} x(t) dt + (2/T) ∫_{0}^{T/2} x(t) dt
In the first integral let t = −τ, so that dt = −dτ:
a0 = (2/T) ∫_{0}^{T/2} x(−τ) dτ + (2/T) ∫_{0}^{T/2} x(t) dt
Since x(t) is even, x(−τ) = x(τ), hence
a0 = (4/T) ∫_{0}^{T/2} x(t) dt
Proof 2: Consider Eq. (3.4),
a_n = (2/T) ∫_{−T/2}^{T/2} x(t) cos nΩ0t dt
    = (2/T) ∫_{−T/2}^{0} x(t) cos nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) cos nΩ0t dt
With the substitution t = −τ in the first integral and using x(−τ) = x(τ) and cos(−nΩ0τ) = cos nΩ0τ,
a_n = (4/T) ∫_{0}^{T/2} x(t) cos nΩ0t dt
Proof 3: Consider Eq. (3.5),
b_n = (2/T) ∫_{−T/2}^{T/2} x(t) sin nΩ0t dt
    = (2/T) ∫_{−T/2}^{0} x(t) sin nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt
With the substitution t = −τ in the first integral and using x(−τ) = x(τ) and sin(−nΩ0τ) = −sin nΩ0τ,
b_n = −(2/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt = 0
3.4.1.1 Some even symmetry signals with their Fourier series expansion
Figure 3.2 shows a waveform with even symmetry as well as half-wave and quarter-wave
symmetry. Hence, for this waveform,
a0 = 0
b_n = 0
and a_n exists for odd harmonics only, so that
x(t) = (4A/π) [cos Ω0t − (1/3) cos 3Ω0t + (1/5) cos 5Ω0t − (1/7) cos 7Ω0t + ⋯]      (3.16)
=0
( )= + − + − +⋯ (3.17)
3.4.2 Odd Symmetry
If x(t) is an odd signal, then it should satisfy the condition x(−t) = −x(t). The waveform
of an odd-symmetry signal is anti-symmetric about the vertical axis, i.e., it is anti-symmetric at
t = 0.
A waveform can be tested for odd symmetry by folding it about the vertical axis. After folding,
if the shape is exactly the negative of the shape on the other side of the vertical axis, then the
waveform possesses odd symmetry.
The Fourier coefficients a0 and a_n are zero, but b_n exists. Hence the Fourier
coefficients are given by,
a0 = 0                                                              (3.18)
a_n = 0                                                             (3.19)
b_n = (4/T) ∫_0^{T/2} x(t) sin nΩ0t dt                              (3.20)
Proof 1: Consider Eq. (3.3),
a0 = (2/T) ∫_{−T/2}^{T/2} x(t) dt = (2/T) ∫_{−T/2}^{0} x(t) dt + (2/T) ∫_{0}^{T/2} x(t) dt
With the substitution t = −τ in the first integral and using x(−τ) = −x(τ),
a0 = −(2/T) ∫_{0}^{T/2} x(t) dt + (2/T) ∫_{0}^{T/2} x(t) dt = 0
Proof 2: Consider Eq. (3.4),
a_n = (2/T) ∫_{−T/2}^{T/2} x(t) cos nΩ0t dt
    = (2/T) ∫_{−T/2}^{0} x(t) cos nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) cos nΩ0t dt
With the substitution t = −τ in the first integral and using x(−τ) = −x(τ) and cos(−nΩ0τ) = cos nΩ0τ,
a_n = −(2/T) ∫_{0}^{T/2} x(t) cos nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) cos nΩ0t dt = 0
Proof 3: Consider Eq. (3.5),
b_n = (2/T) ∫_{−T/2}^{T/2} x(t) sin nΩ0t dt
    = (2/T) ∫_{−T/2}^{0} x(t) sin nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt
With the substitution t = −τ in the first integral and using x(−τ) = −x(τ) and sin(−nΩ0τ) = −sin nΩ0τ,
b_n = (2/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt + (2/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt
    = (4/T) ∫_{0}^{T/2} x(t) sin nΩ0t dt
102 | Fourier Series and Transform
3.4.2.1 Some odd symmetry signals with their Fourier series expansion
Figure 3.4 shows a waveform with odd symmetry as well as half-wave and quarter-wave
symmetry. Hence, for this waveform,
a0 = 0
a_n = 0
x(t) = (4A/π) [sin Ω0t + (1/3) sin 3Ω0t + (1/5) sin 5Ω0t + (1/7) sin 7Ω0t + ⋯]      (3.21)
Figure 3.5 shows a waveform with odd symmetry only. Hence, for this waveform,
a0 = 0
a_n = 0
b_n exists for all values of n, and the Fourier series consists of both even and
odd harmonics of sine terms:
x(t) = (2A/π) [sin Ω0t − (1/2) sin 2Ω0t + (1/3) sin 3Ω0t − (1/4) sin 4Ω0t + ⋯]      (3.22)
If ( ) has even and half wave symmetry, then (− ) = ( ) and Fourier series will have odd
harmonics of only cosine terms.
If ( ) has odd and half wave symmetry, then (− ) = − ( ) and Fourier series will have
odd harmonics of only sine terms.
The waveforms shown in Fig. 3.2, 3.4 are the examples of quarter wave symmetry.
( )
In terms of Fourier coefficients,
3.5.5 Multiplication
For continuous-time periodic signals x(t) and y(t) with the same period,
x(t) y(t)
In terms of Fourier coefficients, the product has coefficients Σ_{k=−∞}^{+∞} c_k d_{n−k}, the
discrete convolution of the two coefficient sequences c_n and d_n.
3.5.6 Conjugation
For a continuous-time periodic signal,
x*(t)
In terms of Fourier coefficients, conjugation gives c*_{−n}.
3.5.8 Differentiation
For a continuous-time periodic signal,
dx(t)/dt
In terms of Fourier coefficients, differentiation gives (jnΩ0) c_n.
3.5.9 Integration
For a continuous-time periodic signal,
∫_{−∞}^{t} x(τ) dτ
In terms of Fourier coefficients, integration gives c_n / (jnΩ0), which is finite-valued and
periodic only if c_0 = 0.
Parseval's relation: for a continuous-time periodic signal,
(1/T) ∫_T |x(t)|² dt = Σ_{n=−∞}^{+∞} |c_n|²
The above expansion consists of a sum of an infinite series of harmonic frequency
components. When the signal x(t) is reconstructed using only N terms of this infinite
series of harmonic frequency components, the reconstructed signal displays oscillations, also
called ripples, near its discontinuities.
Fig. 3.7 Approximation of square waveform using some harmonic frequency components
Signals and Systems | 107
Consider Fig. 3.7(a), a periodic square waveform. Figures 3.7(b), (c) and (d) show the
reconstruction using N harmonic frequency components. It can easily be seen that the
reconstructed signals have ripples near the edge points. It can also be observed
that, at the points of discontinuity, the Fourier series converges to the average of the signal values
on either side of the discontinuity. This behavior is called the Gibbs phenomenon, after
Josiah Gibbs, who showed that as the value of N increases, the peak
overshoot moves closer to the point of discontinuity while its size does not die out.
Example 3.1 Determine the trigonometric form of the Fourier series of the waveform shown in
Fig. 3.1.1.
Solution: The waveform has even symmetry, half-wave symmetry and quarter-wave
symmetry. Hence,
b_n = 0,  a0 = 0,  a_n = (4/T) ∫_0^{T/2} x(t) cos nΩ0t dt
The equation for the square wave is written as,
x(t) = A    for 0 ≤ t ≤ T/4
     = −A   for T/4 ≤ t ≤ T/2
Now,
a_n = (4/T) [ ∫_0^{T/4} A cos nΩ0t dt + ∫_{T/4}^{T/2} (−A) cos nΩ0t dt ]
    = (4A/(nΩ0T)) [ sin nΩ0t ]_0^{T/4} − (4A/(nΩ0T)) [ sin nΩ0t ]_{T/4}^{T/2}
Using Ω0T = 2π,
a_n = (2A/(nπ)) [ 2 sin(nπ/2) − sin(nπ) ] = (4A/(nπ)) sin(nπ/2)
Since sin(nπ/2) = 0 for even values of n
and sin(nπ/2) = ±1 for odd values of n,
∴ a_n = 0 for even values of n
a_n = ±4A/(nπ) for odd values of n
∴ x(t) = Σ_n a_n cos nΩ0t
x(t) = (4A/π) [ cos Ω0t / 1 − cos 3Ω0t / 3 + cos 5Ω0t / 5 − cos 7Ω0t / 7 + ⋯ ]
Example 3.2 Determine the trigonometric form of Fourier series of the waveform shown in
fig 3.1.2.
Fig. 3.1.2.
Signals and Systems | 109
= ∫ ( ) ; = ∫ ( ) ; =0
( ) ( )
ℎ = = = =
( ) ( )
( , ( )) = [0, 0] and ( , ( )) = [ , ]
( ) ( )
∴ = => = => ( ) =
for = 0
Now,
= ∫ ( ) = ∫ = ∫ = = −0 =
4 4 2
= ( ) =
= +
= sin + cos −
= +1;
for even values of
= −1;
for odd values of
∴ = 0;
for even values of
= cos[ − 1] = −
; for odd values of
( )= +
2
4A 3 5 7
( )= − + + + +⋯
2 1 3 5 7
110 | Fourier Series and Transform
Example 3.3 Given the Fourier series coefficients as a determine the signalx(t):
1 1
= [ − 1] + [ + 1] + [ + 3] + [ − 3] + [ + 4] + [ − 4]
2 3 6
Solution:
1 1
= [ − 1] + [ + 1] + [ + 3] + [ − 3] + [ + 4] + [ − 4]
2 3 6
The Fourier series coefficients are identified as
1
= 0, = , = 0, = , =1
6
1
= , = 0, = , =1
2 3
( )=
∫_{−∞}^{+∞} |x(t)| dt < ∞
2. The signal x(t) should have a finite number of maxima and minima within any finite
duration of interval.
3. The signal x(t) can have only a finite number of discontinuities within any finite interval.
112 | Fourier Series and Transform
X(jΩ) = ∫_{−∞}^{+∞} x(t) e^{−jΩt} dt = Ƒ{x(t)}                     (3.25)
The signal x(t) and its Fourier transform X(jΩ) make the Fourier transform pair, and the relation
is given by,
x(t) ⟷ X(jΩ)
X(jΩ) = X_R(jΩ) + j X_I(jΩ)
where X_R(jΩ) is the real part and X_I(jΩ) is the imaginary part of X(jΩ).
|X(jΩ)| = √( X_R²(jΩ) + X_I²(jΩ) )                                  (3.26)
If x1(t) ⟶ X1(jΩ),
x2(t) ⟶ X2(jΩ)
Then,
Ƒ{x1(t) + x2(t)} = X1(jΩ) + X2(jΩ)                                  (3.28)
Signals and Systems | 113
If x(t) ⟶ X(jΩ)
Then,
Ƒ{x(t − t0)} = e^{−jΩt0} X(jΩ)                                      (3.29)
Proof: By the definition of the FT,
X(jΩ) = ∫_{−∞}^{+∞} x(t) e^{−jΩt} dt
∴ Ƒ{x(t − t0)} = ∫_{−∞}^{+∞} x(t − t0) e^{−jΩt} dt
Let t − t0 = τ, so that t = τ + t0 and dt = dτ:
∴ Ƒ{x(t − t0)} = ∫_{−∞}^{+∞} x(τ) e^{−jΩ(τ + t0)} dτ
             = ∫_{−∞}^{+∞} x(τ) e^{−jΩτ} e^{−jΩt0} dτ
             = e^{−jΩt0} ∫_{−∞}^{+∞} x(τ) e^{−jΩτ} dτ
             = e^{−jΩt0} X(jΩ)
If x(t) ⟶ X(jΩ)
then,
Ƒ{x(at)} = (1/|a|) X(jΩ/a)                                          (3.30)
Proof: By the definition of the FT,
X(jΩ) = ∫_{−∞}^{+∞} x(t) e^{−jΩt} dt
Ƒ{x(at)} = ∫_{−∞}^{+∞} x(at) e^{−jΩt} dt
Let at = τ, so that t = τ/a and dt = dτ/a (for a > 0):
∴ Ƒ{x(at)} = (1/a) ∫_{−∞}^{+∞} x(τ) e^{−j(Ω/a)τ} dτ
           = (1/a) X(jΩ/a)
Combining the cases a > 0 and a < 0 gives Ƒ{x(at)} = (1/|a|) X(jΩ/a).
3.8.5 Conjugation:
If ( ) ⃗ ( )
then,
{ ∗ ( )} = ∗ (− ) (3.32)
( )= ( )
∴ { ∗ ( )} = ∗( )
Signals and Systems | 115
∗
= ( )
∗
( ) ( )
=∫
∗( )
= −
( )= ( )
∴ ( ) = ( )
( ) ( )
=∫
= ( − )
3.8.7 Time differentiation
If ( ) ⃗ ( )
then,
( ) = ( ) (3.34)
( )= ( )
∴ ( ) =∫ ( )
= ( )
, = −
116 | Fourier Series and Transform
∴ ( ) = ( )| − (− ) ( )
= 0+ ( )
= ( )
3.8.8 Time Integration
( ) ⃗ ( )
then,
∫ ( ) = ( ) (3.35)
Proof: Let,
( )= ( )
{ ( )} = ( )
( )= ( )
( )=( ) ( )
1
∴ ( ) = ( )
If ( ) ⃗ ( )
then,
{ ( )} = ( ) (3.36)
Proof: By definition of FT
Signals and Systems | 117
( )= ( )
( )= ( )
=∫ ( )
( ) (− )
As, − =
1
( )= [ ( )]
1
= { ( )}
∴ { ( )} = ( )
3.8.10 Convolution
If ( ) ⃗ ( )
( ) ⃗ ( )
then,
{ ( )∗ ( )} = ( ) ( ) (3.37)
Proof: By definition of FT,
( )= ( )
( )= ( )
118 | Fourier Series and Transform
∴ { ( )∗ ( )} = [ ( )∗ ( )]
( )∗ ( − )
= − ,
= ,
= + ,
=
∴ { ( )∗ ( )} = ( ) ( )
∴ { ( )∗ ( )} = ( ) ( )
Replace ,
∴ { ( )∗ ( )} = ( ) ( )
If ( ) ⃗ ( )
then,
∫ | ( )| = ∫ | ( | (3.38)
Proof:
Let | ( )| = ( ) ∗ ( ) (3.39)
∴ ∫ | ( )| = ∫ ( ) ∗( ) (3.40)
Recall the inverse FT definition,
( )= ∫ ( )
∗( )= ∗( )
∴ ∫ (3.41)
R.H.S. of Eq. (3.40) becomes,
Signals and Systems | 119
1 ∗(
∴ ( ) )
2
1 ∗(
∴ | ( )| = ) ( )
2
1 ∗(
= ) ( )
2
1
| ( )| = | ( |
2
If ( ) ⃗ ( )
then,
( ) ↔2 (− ) (3.42)
(− ) = ∫ ( ) (3.43)
Put t= j in Eq. (3.43)
(− )= ∫ ( ) d
∴2 (− )= ( )
Where R.H.S. is FT of ( ).
120 | Fourier Series and Transform
( ) +⋯+ ( ) (3.45)
The transfer function of a CT LTI system can be obtained by taking the FT of the above equation
and arranging it as the ratio of Y(jΩ) to X(jΩ).
( ) = ( ) ∗ ℎ( ) = ( )ℎ( − )
Let,
( )=FT of ℎ( )
( )=FT of ( )
( )=FT of ( )
Using the convolution property,
Signals and Systems | 121
Ƒ{ ( ) ∗ ℎ( )} = Ƒ{ ( )} = ( ) ( )
∴ ( )= ( ) ( )
( )
( )= (3.46)
( )
Equation (3.46) shows that the transfer function in frequency domain is given by Fourier
transform of impulse response which is the ratio of Fourier transform of output to input.
The output of an LTI system in terms of the convolution operation is written as,
y(t) = x(t) ∗ h(t) = h(t) ∗ x(t) = ∫_{−∞}^{+∞} h(τ) x(t − τ) dτ
Let,
( )= (3.51)
( − )= ( )
∴ (3.52)
( )=∫ ( )
ℎ( ) (3.53)
122 | Fourier Series and Transform
( )=∫ ℎ( ) (3.54)
( )= ∫ ℎ( ) (3.55)
y(t) = H(jΩ0) e^{jΩ0t}                                              (3.57)
Equation (3.57) says that if a complex exponential signal e^{jΩ0t} is given as the input to a CT LTI
system, then the output is the same complex exponential, at the same frequency as the input,
multiplied by H(jΩ0). Hence, H(jΩ) is called the frequency response of the CT LTI system.
Solution:
− ; | |<
1. Given, ( ) =
; | |>
That means
( )=1− ; for = -1 to +1
Signals and Systems | 123
( )= ( )
( )= (1 − )
( )= −
, = −
= − − 2
− − −
− 2
= − +
−
, = −
− 2
= − + − 1
− − −
2
= − − + − −
− ( )
2
= − − − − − +
( ) −
2 2
= − − − + +
2 2 2 2
=− + − − + + + + −
Using, = and =
2 2
=− + + +
124 | Fourier Series and Transform
2 2
=− 2 + 2
4
= −
2. Given, ( ) = ( )
( )= ( )
=∫ =∫
= ∫ + ∫
( ) ( )
= ∫ + ∫
( ) ( )
= +
( ) ( )
1
= −
2 −( − + ) −( − + )
1
+ −
2 −( + + ) −( + + )
1 1 1 1
= 0+ + 0+
2 ( − + ) 2 ( + + )
1 1 1
= +
2 ( + )− ( + )+
1 ( + )+ +( + )−
=
2 ( + ) +
1 2( + ) ( + )
= =
2 ( + ) + ( + ) +
Signals and Systems | 125
Example 3.6 Determine the Fourier transform of the rectangular pulse shown in fig 3.10.1.
Fig. 3.10.1
Solution:
From fig. 3.10.1, the equation can be written as,
x(t) = 1 ;  for −T1 ≤ t ≤ +T1
Using the definition of the FT,
X(jΩ) = ∫_{−∞}^{+∞} x(t) e^{−jΩt} dt = ∫_{−T1}^{+T1} 1 · e^{−jΩt} dt
      = [ e^{−jΩt} / (−jΩ) ]_{−T1}^{+T1} = (e^{jΩT1} − e^{−jΩT1}) / (jΩ)
      = 2 sin(ΩT1) / Ω = 2T1 · sin(ΩT1)/(ΩT1) = 2T1 Sa(ΩT1) ……… as Sa(x) = sin(x)/x
Example 3.7 Determine the inverse Fourier transform of the following functions.
( )
1. ( )=
2. ( )=
( )
Solution:
( )
1.Given, ( )=
3( ) + 14
( )=
( + 3)( + 4)
Using partial fraction,
( )= +
( + 3) ( + 4)
126 | Fourier Series and Transform
3( ) + 14
= ( + 3) =5
( + 3)( + 4)
3( ) + 14
= ( + 4) = −2
( + 3)( + 4)
( )= −( (3.60)
( ) )
( )=5 ( )−2 ( )
2. Given, ( )=
( )
X(jΩ) = +
(jΩ + 3) jΩ + 3
+7
= ( + 3) = −3 + 7 = 4
( + 3)
Using rule of repeating poles,
+7
= ( + 3) = ( + 7) =1
( ) ( + 3) ( )
( )= + (3.61)
( )
( )=4 ( )+ ( )
Signals and Systems | 127
x(t)δ(t − t ) dt = x(t )
∴ x ( t) = e
For this time–domain signal, the Fourier transform given as,
e ↔ 2πδ(Ω − Ω )
A discrete-time signal which is periodic with fundamental period N can be decomposed into
harmonics of frequency components (a frequency spectrum). When these related
frequency components are combined, they give the Fourier series representation of that particular
periodic discrete-time signal as a function of the angular frequency ω. This representation
of the Fourier series of a discrete-time signal is called the Discrete Time Fourier Series (DTFS).
3.10.1 Representation of Discrete Time Fourier Series
The Discrete Time Fourier Series (DTFS) of a discrete-time periodic signal x[n] with period N
is defined as follows, where ω0 = 2π/N is the fundamental frequency of x[n]:
x[n] = Σ_{k=0}^{N−1} c_k e^{jkω0 n}                                 (3.62)
c_k = (1/N) Σ_{n=0}^{N−1} x[n] e^{−jkω0 n} ;  for k = 0, 1, 2, …, N−1      (3.63)
3 Frequency shifting
[ ]
∗ ∗
4 Conjugation [ ]
5 Time reversal [− ]
6 Time scaling where is 1
multiple of
7 Multiplication [ ] [ ]
8 Convolution
[ ] [
− ]
9 Symmetry If [ ] is real = ∗
| |=| |
∠ = −∠
If [ ] is real and are real and even
even
If [ ] is real and are imaginary and
odd odd
10 Parseval’s theorem
1 = | |
= | [ ]|
Solution:
1. Given, [ ] = √
= ∑ [ ] ; for = 0, 1, 2, . . −1
Here, N=8 and [ ] = 8
= ∑ 8 ; for = 0, 1, 2,3, 4, 5, 6, 7
8
= = −
8 4 4 4 4
Signals and Systems | 131
= 0( 0− 0) + − + −
4 4 4 2 2 2
3 3 3
+ − + ( − )
4 4 4
5 5 5 3 3 3
+ − + −
4 4 4 2 2 2
7 7 7
+ −
4 4 4
√ √
As, 0 = 1, = −1, = = = , =− , = 0, =
√ √
1, = 0, = =− , = 0, = −1, = , =
√
−
√2 √2 √2 √2 √2 √2 √2 √2 √2
=1+ − − − − + − − +
2 2 2 2 2 2 2 2 2
√2 √2 √2
+ +
2 2 2
2 2 2 2
=1+ + + + +
4 4 4 4
When k=0, = =1
When k=1, = = 1+ + +1+ + = 4
When k=2, = = 1+1+1+2+1+1= 7
When k=3, = = 1 + + + 3 + + = 10
When k=4, = = 1 + 2 + 2 + 4 + 2 + 2 = 13
When k=5, = = 1 + + + 5 + + = 16
When k=6, = = 1 + 3 + 3 + 6 + 3 + 3 = 19
When k=7, = = 1 + + + 7 + + = 22
The Fourier series representation of [ ] is,
[ ]= = =
= + + + + + + +
[ ]=1+4 +7 + 10 + 13 + 16 + 19 + 22
132 | Fourier Series and Transform
The Fourier transform (FT) of discrete-time signals is called the Discrete Time Fourier Transform
(DTFT).
Let x[n] be a finite-energy discrete-time signal and X(ω) its discrete-time Fourier transform:
X(ω) = Ƒ{x[n]} = Σ_{n=−∞}^{+∞} x[n] e^{−jωn}                        (3.64)
The discrete-time Fourier transform exists only for absolutely summable signals. That means
the Fourier transform exists for the signal x[n] if,
Σ_{n=−∞}^{+∞} |x[n]| < ∞
= { [ ]} = [ ]
Signals and Systems | 133
+∞
∴ = [ ] −
=−∞
+∞
∴ = [ ] −
=−∞
= ∑+∞
=−∞[ [ ] − + [ ] − ]
= ∑+∞
=−∞ [ ] − + ∑+∞
=−∞ [ ] −
= [ ]+ [ ]
= { [ ]} = [ ]
∴ { [ − ]} = ∑+∞
=−∞ [ − ] −
Let − = , ∴ = +
+∞
∴ { [ − ]} = [ ] − ( )
=−∞
+∞
= [ ] − −
=−∞
+∞
= − [ ] −
=−∞
Replace m by n
=
134 | Fourier Series and Transform
3.12.3 Periodicity
Where is an integer.
Proof:
By definition of FT,
= { [ ]} = [ ]
∴ { [− ]} = ∑+∞
=−∞ [− ]
−
∴ { [− ]} = [ ]
=−∞
+∞
{ [− ]} = [ ] (− )−
=−∞
{ [− ]} = [ ]
3.12.5 Conjugation:
If [ ] ⃗ [ ], then
{ ∗ [ ]} = ∗
[ ] (3.70)
= { [ ]} = [ ]
Signals and Systems | 135
{ ∗ [ ]} = ∗
[ ]
[ ] ( )
=
∗
=
∗
= [ ]
If [ ] ⃗ [ ], then
( )
[ ] = [ ] (3.71)
= { [ ]} = [ ]
[ ] = [ ]
( )
= [ ]
= [ ( − 0)]
= { [ ]} = [ ]
136 | Fourier Series and Transform
{ [ ]} = [ ]
{ [ ]} = [ ] (− )
= [ ](− )
As, (− ) =
= [ ]
= [ ]
= [ ]
3.12.8 Convolution
If [ ] ⃗ [ ]
[ ] ⃗ [ ] then,
{ [ ]∗ [ ]} = [ ] [ ] (3.73)
= { [ ]} = [ ]
+∞
∴ = [ ] −
=−∞
+∞
∴ = [ ] −
=−∞
Signals and Systems | 137
∴ { [ ]∗ [ ]} = ( [ ]∗ [ ])
{ [ ]∗ [ ]} = ( [ ]∗ [ − ])
= ( [ ]∗ [ − ])
[ ] [ − ] ( )
=
Let, n-k=m
= [ ] [ ]
Replace ,
∴ { [ ]∗ [ ]} = [ ] [ ]
∑ ( [ ]∗ [ ]) = ∫ [ ] (3.74)
Proof: By definition of FT,
= { [ ]} = [ ]
Using inverse FT
1
[ ]= [ ]
2
1 1 ∗
[ ] = 1[ ] 2 [ ]
2 2
1 ∗
= 1[ ] 2 [ ]
2
1 ∗
= 1[ ] [ ]
2
=∑ ( [ ]∗ [ ])
The above discrete time Fourier transform (DTFT) concept provides analysis for a discrete time
signal in frequency domain where it is a continuous function of and so it cannot be processed
by digital system. Hence, we have to represent this into a discrete function of , so that
frequency analysis of discrete time signals can be presented using digital system.
Basically, the DFT of a discrete time signal is obtained by sampling the DTFT of the signal at
uniform frequency intervals. These samples must be sufficient to avoid aliasing effect. DFT is
represented as a sequence of complex numbers represented as ( ) for k = 0, 1, 2, 3, …. The
magnitude and phase of each sample of ( ) can also be computed.
The plot of magnitude versus is called magnitude spectrum and the plot of phase versus is
called phase spectrum (frequency spectrum).
Let X(ω) be the DTFT of the discrete-time signal x[n]. The discrete Fourier transform (DFT)
X[k] is calculated by sampling one period of the DTFT X(ω) at a finite number of frequency
points.
Let one period consist of N equally spaced points in 0 ≤ ω < 2π.
Each frequency point is given by,
ω_k = 2πk/N ;  for k = 0, 1, 2, …, N−1
Hence, the sampling of the DTFT at these frequency points is written as,
X[k] = X(ω)|_{ω = 2πk/N} = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N} ;  for k = 0, 1, 2, …, N−1   (3.75)
DFT is also called as N point DFT, where the number of samples N for a finite duration
sequence x[n] of length L should be such that, N ≥ L, in order to avoid aliasing.
Signals and Systems | 139
[ ]= { [ ]} = [ ]
−1
−2
∴ [ ]= [ ]
=0
−1
−2
∴ [ ]= [ ]
=0
Consider linear combination, [ ]+ [ ]
−2
−1
∴ { [ ]+ [ ]} = ∑ =0 ( [ ]+ [ ])
−2 −2
−1
=∑ =0 [ [ ] + [ ] ]
140 | Fourier Series and Transform
−2 −2
−1 −1
= ∑ =0 [ ] + ∑ =0 [ ]
= [ ]+ [ ]
[ ]= { [ ]} = [ ]
−2
−1
∴ { [( − ) ]} = ∑ =0 [( − ) ]
Let − = , ∴ = +
−1
−2 ( )
∴ { [ − ]} = [( − ) ]
=0
−1
−2 ( )
= [( ) ]
=0
−1
−2 −2
= [ ]
=0
Replace p by n
−2
= [ ]
3.14.3 Periodicity
If [ ] ⃗
[ ] , i. e., If a sequence [ ] is periodic with periodicity of N samples then
N-point DFT, X(k) is also periodic with periodicity of N samples.
Hence, if [ ] and [ ] are N point DFT pair then,
[ + ] = [ ] ; for all (3.80)
[ + ] = [ ] ; for all (3.81)
[ ]= { [ ]} = [ ]
−2 ( + )
−1
∴ [ + ]=∑ =0 [ ]
−1
−2 −2
= [ ]
=0
−1
−2
= [ ] −2
=0
−1
−2
= [ ]
=0
∴ [ + ]= [ ]
{ [ − ]} = [ − ] (3.82)
If a signal is folded about the origin in discrete time, its magnitude spectrum does not changes
and the phase spectrum changes in sign i.e., phase reversal happens.
Proof:
By definition of DFT,
[ ] = { [ − ]} = [ − ]
Let, − = ,∴ = −
( − )
= [ ]
( − )
= [ ]
= [ − ]
142 | Fourier Series and Transform
3.14.5 Conjugation:
If [ ] ⃗ [ ], then
{ ∗ [ ]} = ∗
[ − ] (3.83)
{ ∗ [ ]} = ∗
[ ]
= [ ]
∗
= ∑ [ ] As, =1
( ) ∗
= ∑ [ ]
∗
= [ − ]
If [ ] ⃗ [ ], then
[ ] = [( − ) ] (3.84)
[ ]= [ ] = [ ]
( )
= [ ]
= [( − ) ]
3.14.7 Multiplication
If [ ] ⃗ [ ],
[ ] ⃗ [ ]
Signals and Systems | 143
Then,
{ [ ] [ ]} = [ [ ]⊛ [ ]] (3.85)
Proof: By definition of FT,
[ ]= { [ ]} = [ ]
{ [ ] [ ]} = [ ] [ ]
1
= [ ] [ ]
1
= [ ] [ ]
1 ( ( ))
= [ ] [ ]
1
= [ ] [( − )]
1
= [ [ ]⊛ [ ]]
3.14.8 Convolution
If [ ] ⃗ [ ],
[ ] ⃗ [ ]
Then,
{ [ ]⊛ [ ]} = [ ] [ ] (3.86)
[ ]= { [ ]} = [ ]
144 | Fourier Series and Transform
−1
−2
∴ [ ]= [ ]
=0
−1
−2
∴ [ ]= [ ]
=0
Considering the product of X1[k] and X2[k] and taking the inverse DFT of the product, the
convolution property can be proved.
Hence,
[ ] [ ]= { [ ]⊛ [ ]}
If [ ] ⃗ [ ],
[ ] ⃗ [ ]
then, Parseval’s relation says that,
∑ [ ] ∗ [ ]= ∑ [ ] ∗ [ ] (3.87)
Unit Summary
Fourier series and Fourier transform are fundamental tools in signal processing, mathematics,
physics, and engineering. They are used to analyze and represent periodic and non-periodic
functions in terms of sinusoidal or complex exponential functions. Fourier series decomposes
a periodic function into a sum of sinusoidal functions (sine and cosine). It's applicable to
functions with periodicity, allowing representation in terms of a discrete set of harmonics. The
series comprises a constant term (DC component) and an infinite sum of harmonic terms, each
with its own amplitude and phase. Fourier transform extends the concept of Fourier series to
non-periodic functions or signals. It transforms a function from the time or spatial domain into
the frequency domain. It decomposes a function into its constituent frequencies, represented
by a continuous spectrum. Fourier series and Fourier transform are indispensable tools in
various fields for analyzing, synthesizing, and processing signals and functions. Understanding
these concepts facilitates advanced analysis and manipulation of signals in diverse
applications.
Signals and Systems | 145
Exercises
1. Find the Fourier series coefficients for the signals shown in figure.
0≤x≤π
8. Find a sine series for
π
f(x) = x; 0<x<
2
= 0; <x<π
9. Show that in the range (0, π), the sine series for πx − x is sin x + sin3x +
sin5x + ⋯
10. Find a Fourier cosine series corresponding to the function f(x) = x, defined in the
interval (0,π).
11. Find the Fourier sine series and the Fourier cosine series corresponding to the function ,
x(n) = 1, for n = 2 to 6
x(n) = 0, for n = 0,1,7,8,9
Assume x(n) is periodic beyond this interval 0 − 9
15. Find the response of the following system to the input:
nπ 2πn π
x(n) = 2 + 2cos + cos +
4 3 2
System: H(ω) = e cos( )
16. Show that the Hilbert Transform of exp (jωt)is(−sgn f)exp (jωt)
Signals and Systems | 147
17. Without using a calculator or computer find the dot products of (a) w1 and w-1, (b) w1
and w-2 (c) w11 and w37, where
w
⎡ ⎤
⎢w ⎥
w =⎢ ⎥ and w /
⎢w ⎥
⎣w ⎦
to show that they are orthogonal.
18. Find the DTFS harmonic function of a signal x[n]with period 4 for which x[0]= 3,
x[1]=1, x[2] = -5, and x[3]=0 using the matrix multiplication X =
19. One period of a periodic function with period 4 is described by x[n] = δ[n] −
δ[n − 2], 0 ≤ n < 4. Using the summation formula for the DTFS harmonic function and
not using the tables or properties, find the harmonic function X[k].
20. Find the DTFS harmonic function of
x [ n] = δ [ m ] − δ [ m − 1]
with N = N = 3.
21. A periodic signal x[n] is exactly described for all discrete time by its DTFS
/
X[k] = (δ [k − 1] + δ [k + 1] + j2δ [k + 2] − j2δ [k − 2])e
Using one fundamental period as the representation time.
a) Write a correct analytical expression for x[n] in which √1 (j) does not appear
22. Based on a representation time N = 4, the DTFS harmonic function X[k] of a signal
x[n] has the following values.
Multiple-Choice Questions
1. A CT periodic signal x(t) is represented in Fourier series representation as
a. x(t) = ∑ a e Ω
b. x(t) = ∑ a e Ω
c. x(t) = ∑ a e Ω
d. x(t) = ∑ a e Ω /k
4. If the function f(x) is odd, then which of the only coefficient is present?
a. a
b. b
c. a
d. Everything is present
6. The Fourier series coefficients of a continuous periodic signal x(t) are a . Fourier series
coefficients of x(−t) are
a. a
b. −a
Signals and Systems | 149
c. a
d.
d. Ω
KNOW MORE
To delve deeper into the realm of Fourier series and Fourier transform is to embark on a
journey of profound mathematical elegance and practical utility. One can uncover the
secrets of convergence theorems, unravel the mysteries of orthogonality in function
150 | Fourier Series and Transform
spaces, and master the art of manipulating signals in both time and frequency domains.
Advanced analytical techniques open doors to solving complex differential equations,
paving the way for applications in fields as diverse as physics, engineering, and finance.
Moreover, the realm of Fourier transform beckons with promises of understanding the
very essence of signals, be they audio waves, images, or quantum phenomena. From fast
algorithms powering digital signal processors to cutting-edge applications in medical
imaging and quantum computing, Fourier analysis continues to shape our modern world.
As we journey forward, the exploration of Fourier series and transform promises not only
a deeper understanding of mathematics and science but also an endless stream of
possibilities for innovation and discovery.
4 Laplace Transform
UNIT SPECIFICS
RATIONALE
The unit on “Laplace Transform" helps students understand the behavior of Continuous
and Discrete-time signals and systems in the s-domain or Laplace domain. The Laplace transform
is one of the most important tools used for solving ODEs and specifically, PDEs as it converts
partial differentials to regular differentials.
Laplace transform can convert complex differential equations that describe the dynamic
behavior of a system into simpler algebraic equations that describe the frequency response of
a system
PRE-REQUISITES
UNIT OUTCOMES
List of outcomes of this unit is as follows:
U4-O1: Understand the need for bilateral and unilateral Laplace transform for continuous-
time signals and systems.
U4-O2: Understand the relationship between continuous-time Fourier transform and Laplace
transform.
U4-O3: Learn the properties of Laplace transform.
U3-O4: Learn the applications of Laplace transform in the analysis of CT LTI systems.
U3-O5: Learn to solve the differential equations using Laplace transform.
4.1 Introduction
In the 3rd chapter we have discussed the Fourier series and Fourier transform along with their
magnitude and phase spectra. In this chapter, the Laplace transform is discussed, which
is used to transform a time signal to the complex frequency domain; this complex frequency
domain is called the Laplace domain or s-domain. The transformation was introduced by
Pierre-Simon Laplace around 1780 and hence carries his name. In the time
domain the equations that represent a system are written in terms of differential equations,
whereas in s-domain, the differential equations are transformed to algebraic equations for
easier analysis. In this chapter a brief discussion about Laplace transform, its properties and
applications for analysis of signals and systems are presented.
Let us analyze the signal of Eq. (4.1), x(t) = A e^{st} with s = σ + jΩ, for various choices of σ
and Ω.
When σ = 0,
x(t) = A e^{st} = A e^{jΩt}                                         (4.2)
The real part of Eq. (4.2) represents a cosine signal and the imaginary part represents a
sinusoidal signal.
When Ω = 0,
x(t) = A e^{st} = A e^{σt}                                          (4.3)
L{x(t)} = X(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt                        (4.5)
For the causal signal x(t) = e^{−bt} u(t),
X(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt
     = ∫_{−∞}^{+∞} e^{−bt} u(t) e^{−st} dt
     = ∫_{0}^{∞} e^{−(b+s)t} dt
     = [ e^{−(b+s)t} / (−(b+s)) ]_{0}^{∞}
     = (1 / (−(s+b))) [ e^{−∞} − e^{0} ]
     = (1 / (−(s+b))) [ 0 − 1 ] = 1 / (s + b)
Therefore, for causal signal ROC is the right side of pole at σ = −b as shown in following fig
4.2.
x(s) = x(t)e dt
= e u ( t) e dt
( )
e e dt = e dt
Signals and Systems | 157
( )
=
−(s + b)
1 ( )×
= e − e( )×
−(s + b)
Let = + ,
1 ( )
∴ X(S) = − −
( + )
( ) ×
1
=− +
+ +
Let = +
If + > 0 , > − , . ., = ∞
If + < 0 , < − . ., = 0
Hence, ( ) converges when σ < −b
X(s) = x(t)e dt
158 | Laplace Transform
= [ ( )+ (− )]
= +
( ) ( )
= +
( ) ( )
= +
−( + ) −( + )
Here also,
When, s + a > 0, s > −a → e =0
s + a < 0, s < −a → e = ∞
s + b > 0, s > −b → e = ∞
s + b < 0, s < −b → e =0
1 1
∴ X(s) = −
s+a s+b
Therefore, for two sided signal, ROC includes all points on s-plane lies between poles –a to –b
as shown in figure.
Example 4.1
Determine the Laplace transform of the following continuous time signals & find their ROC
1) x (t) = A u (t)
Signals and Systems | 159
2) x (t) = t u (t)
3) x (t) = e u ( t)
4) x (t) = e u (−t)
||
5) x (t) = e
Solution:
1.Given, ( ) = ()
X(s) = x(t)e dt
A u ( t) e dt
x(s) = Ae dt = A e dt
e A
=A = [e ]
−s −s
A A
∴ X(s) = [e − e ] = − [ 0 − 1]
s s
∴ X(s) = where for s > 0 the X(s) converges.
2. Given, ( ) = ( )
X(s) = x ( t) e dt
= t u ( t) e dt
= t e dt
= t −
= [e − 0] − [e −e ]
=
When, > 0, ( ) converges and ROC lies to the right of line passing through σ = 0.
3. Given , ( ) = ()
= e u ( t) e dt
= e e dt
( )
( )
=∫ e dt = ( )
1 1
[e − e ] = = −
s+8 s+8
For, + 8 > 0, X (s) converges for > −8 and ROC lies to the right of line passing through
σ = −8
4. Given, ( ) = (− )
X (s) = x(t)e dt
= e u (−t) e dt
( )
= e e dt = e dt
( )
( )
= = − e
( )
||
5. Given ( ) =
x(s) = x(t) e dt
||
= e e dt
e e dt + e e dt
( ) ( )
e dt + e dt
( ) ( )
= + −
( ) ( )
=− [e − e ] − [e −e ]
1 1 8
X(s) = − + = −
s−4 s+4 s − 16
||
Fig 4.8 ROC of ( ) =
Solution:
1. Given ( )= ( )= ; ≥
=∫ sin Ω t u(t)e dt
= ∫ sin Ω t e dt
∅ ∅
Using formula, sin∅ =
X(s) = ∫ e dt
164 | Laplace Transform
= ∫ e −e e dt
= ∫ e e −e e dt
( ) ( )
= ∫ e −e dt
( ) ( )
= ( )
− ( )
= ( )
− ( )
− ( )
+ ( )
= 0−0+ −
= ( )( )
as j² = −1
∴ L{sin Ω0t u(t)} = Ω0 / (s² + Ω0²)
2. Given that x(t) = cos(Ω₀ t) u(t) = cos(Ω₀ t); t ≥ 0.
X(s) = ∫_0^∞ x(t) e^(−st) dt
     = ∫_0^∞ cos(Ω₀ t) u(t) e^(−st) dt = ∫_0^∞ cos(Ω₀ t) e^(−st) dt
Using the formula cos∅ = ( e^(j∅) + e^(−j∅) ) / 2,
X(s) = (1/2) ∫_0^∞ ( e^(jΩ₀t) + e^(−jΩ₀t) ) e^(−st) dt
     = (1/2) ∫_0^∞ ( e^(−(s−jΩ₀)t) + e^(−(s+jΩ₀)t) ) dt
     = (1/2) [ 0 + 0 + 1/(s − jΩ₀) + 1/(s + jΩ₀) ]
     = (1/2) · 2s / ( (s − jΩ₀)(s + jΩ₀) )
∴ L{ cos(Ω₀ t) u(t) } = s / (s² + Ω₀²)
3. Given that x(t) = e^(−at) sin(Ω₀ t) u(t)
             = e^(−at) sin(Ω₀ t); t ≥ 0.
Using the definition of the Laplace transform,
X(s) = ∫_0^∞ x(t) e^(−st) dt
     = ∫_0^∞ e^(−at) sin(Ω₀ t) e^(−st) dt
     = (1/2j) ∫_0^∞ ( e^(−(s+a−jΩ₀)t) − e^(−(s+a+jΩ₀)t) ) dt
     = (1/2j) [ e^(−(s+a−jΩ₀)t)/(−(s + a − jΩ₀)) − e^(−(s+a+jΩ₀)t)/(−(s + a + jΩ₀)) ]_0^∞
     = Ω₀ / ( (s + a)² + Ω₀² )
∴ L{ e^(−at) sin(Ω₀ t) u(t) } = Ω₀ / ( (s + a)² + Ω₀² )
4. Given that x(t) = e^(−at) cos(Ω₀ t) u(t)
             = e^(−at) cos(Ω₀ t); t ≥ 0.
Using the definition of the Laplace transform,
X(s) = ∫_0^∞ x(t) e^(−st) dt
     = ∫_0^∞ e^(−at) cos(Ω₀ t) e^(−st) dt
     = (1/2) ∫_0^∞ ( e^(−(s+a−jΩ₀)t) + e^(−(s+a+jΩ₀)t) ) dt
     = (1/2) [ e^(−(s+a−jΩ₀)t)/(−(s + a − jΩ₀)) + e^(−(s+a+jΩ₀)t)/(−(s + a + jΩ₀)) ]_0^∞
     = (1/2) [ 0 + 0 + 1/(s + a − jΩ₀) + 1/(s + a + jΩ₀) ]
     = (s + a) / ( (s + a)² + Ω₀² )
∴ L{ e^(−at) cos(Ω₀ t) u(t) } = (s + a) / ( (s + a)² + Ω₀² )
Some standard Laplace transform pairs:

x(t)                              X(s)                         ROC
δ(t)                              1                            entire s-plane
u(t)                              1/s                          Re{s} > 0
t u(t)                            1/s²                         Re{s} > 0
t^(n−1)/(n−1)! u(t)               1/s^n                        Re{s} > 0
e^(−at) u(t)                      1/(s + a)                    Re{s} > −a
−e^(−at) u(−t)                    1/(s + a)                    Re{s} < −a
t^n u(t)                          n!/s^(n+1)                   Re{s} > 0
t e^(−at) u(t)                    1/(s + a)²                   Re{s} > −a
t^n e^(−at) u(t)                  n!/(s + a)^(n+1)             Re{s} > −a
cos(Ω₀ t) u(t)                    s/(s² + Ω₀²)                 Re{s} > 0
sin(Ω₀ t) u(t)                    Ω₀/(s² + Ω₀²)                Re{s} > 0
e^(−at) sin(Ω₀ t) u(t)            Ω₀/((s + a)² + Ω₀²)          Re{s} > −a
e^(−at) cos(Ω₀ t) u(t)            (s + a)/((s + a)² + Ω₀²)     Re{s} > −a
If a signal is scaled by a constant K, its Laplace transform is scaled by the same constant:
L{K x(t)} = ∫_(−∞)^(∞) K x(t) e^(−st) dt
          = K ∫_(−∞)^(∞) x(t) e^(−st) dt
          = K X(s)          using Eq. (4.7)
4.4.2 Linearity
This property states that the Laplace transform of a weighted sum of two or more signals equals
the same weighted sum of their individual Laplace transforms.
i.e. if L{x₁(t)} = X₁(s) and
        L{x₂(t)} = X₂(s), then
        L{a x₁(t) + b x₂(t)} = a X₁(s) + b X₂(s)
If the derivative of x(t) is taken in the time domain, then its Laplace transform is s X(s) − x(0).
∴ L{ d/dt x(t) } = ∫_(−∞)^(∞) ( d x(t)/dt ) e^(−st) dt
                = ∫_0^∞ ( d x(t)/dt ) e^(−st) dt          … for causal x(t)
Using integration by parts,
L{ d/dt x(t) } = [ e^(−st) x(t) ]_0^∞ − ∫_0^∞ ( −s e^(−st) ) x(t) dt
              = e^(−s∞) x(∞) − e^0 x(0) + s ∫_0^∞ x(t) e^(−st) dt
              = s ∫_0^∞ x(t) e^(−st) dt − x(0)
              = s X(s) − x(0)                              using Eq. (4.10)
Similarly, for integration in the time domain,
L{ ∫ x(t) dt } = X(s)/s + [ ∫ x(t) dt ]|_(t=0) / s
For the time-shifting property, L{x(t ± a)} = ∫_(−∞)^(∞) x(t ± a) e^(−st) dt.
Let τ = t ± a, so that t = τ ∓ a and dt = dτ. Then
L{x(t ± a)} = ∫_(−∞)^(∞) x(τ) e^(−s(τ ∓ a)) dτ
            = ∫_(−∞)^(∞) x(τ) e^(−sτ) e^(±as) dτ
            = e^(±as) ∫_(−∞)^(∞) x(τ) e^(−sτ) dτ = e^(±as) X(s)
The differentiation in s-domain property states that
L{ t x(t) } = − d/ds X(s)
Proof: Using the definition of the Laplace transform,
d/ds X(s) = d/ds ∫_(−∞)^(∞) x(t) e^(−st) dt
          = ∫_(−∞)^(∞) x(t) d/ds( e^(−st) ) dt
          = ∫_(−∞)^(∞) x(t) ( −t e^(−st) ) dt
          = ∫_(−∞)^(∞) [ −t x(t) ] e^(−st) dt
          = L{ −t x(t) } = −L{ t x(t) }
∴ L{ t x(t) } = − d/ds X(s)
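A quick symbolic check (not from the book) of the s-domain differentiation property, using x(t) = e^(−at) u(t) as an illustrative test signal.

```python
# Hedged sketch: verify L{t x(t)} = -dX(s)/ds for x(t) = e^{-a t} u(t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

X = sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True)        # 1/(s + a)
lhs = sp.laplace_transform(t*sp.exp(-a*t), t, s, noconds=True)    # L{t x(t)}
rhs = -sp.diff(X, s)                                              # -dX/ds
print(sp.simplify(lhs - rhs))    # 0, so the property holds for this signal
```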
4.4.8 Time Scaling
X(s) = ∫_(−∞)^(∞) x(t) e^(−st) dt
∴ L{x(at)} = ∫_(−∞)^(∞) x(at) e^(−st) dt
Let at = τ, so that t = τ/a and dt = dτ/a. Then
L{x(at)} = ∫_(−∞)^(∞) x(τ) e^(−sτ/a) dτ/a
         = (1/a) ∫_(−∞)^(∞) x(τ) e^(−(s/a)τ) dτ
L{x(at)} = (1/a) X(s/a)                                            (4.12)
The above equation is true when a is positive; for negative a,
L{x(at)} = −(1/a) X(s/a)                                           (4.13)
∴ Combining Eq. (4.12) and Eq. (4.13),
L{x(at)} = (1/|a|) X(s/a)
4.4.9 Convolution
The convolution theorem of the Laplace transform states that the convolution of two signals in the
time domain is equivalent to multiplication of their Laplace transforms in the s-domain.
i.e. if L{x₁(t)} = X₁(s) and
        L{x₂(t)} = X₂(s), then
        L{x₁(t) ∗ x₂(t)} = X₁(s) X₂(s)
In fact, X(s) will be rational whenever x(t) is a linear combination of real or complex
exponentials. Rational transforms also arise when we consider LTI systems specified in
terms of linear, constant coefficient differential equations.
We can mark the roots of the numerator polynomial N(s) and the denominator polynomial D(s) in
the s-plane along with the ROC.
Property-1: The ROC of X(s) consists of strips parallel to the jΩ-axis in the s-plane.
Property-2: If x(t) is of finite duration and is absolutely integrable, then the ROC is the entire
s- plane.
Property-3: If x(t) is right sided, and if the line passing through Re(s) = s0 is in ROC, then all
values of s for which Re(s) > s0 will also be in ROC.
Property-4: If x(t) is left sided, and if the line passing through Re(s) = s0 is in ROC, then all
values of s for which Re(s) < s0 will also be in ROC.
Property-5: If x(t) is two sided, and if the line passing through Re(s) = s0 is in the ROC, then the
ROC will consist of a strip in the s-plane that includes the line passing through Re(s) = s0.
Property-6: If X(s) is rational, (where X(s) is Laplace transform of x(t)), then its ROC is
bounded by poles or extends to infinity.
Property-7: If X(s) is rational, (where X(s) is Laplace transform of x(t)), then ROC does not
include any poles of X(s).
Property-8: If X(s) is rational, (where X(s) is Laplace transform of x(t)), and if x(t) is right
sided, then ROC is the region in s-plane to the right of the rightmost pole.
Property-9: If X(s) is rational, (where X(s) is Laplace transform of x(t)), and if x(t) is left sided,
then ROC is the region in s-plane to the left of the leftmost pole.
Let the Laplace transform of x(t) be X(s). The s-domain signal X(s) will be a ratio of two
polynomials in s (i.e., a rational function of s). The roots of the denominator polynomial are called
poles, and the roots of the numerator polynomial are called zeros. In signals and systems, three
different types of s-domain signals are encountered: those with distinct (simple) poles, those with
multiple (repeated) poles, and those with complex conjugate poles.
The inverse Laplace transform (ILT) by the partial fraction expansion method is explained for all
three cases with an example.

Case 1 (distinct poles): Let X(s) = N(s) / ( s (s + p1)(s + p2) ) = A1/s + A2/(s + p1) + A3/(s + p2).
The residues A1, A2, A3 are found by evaluating s X(s), (s + p1) X(s) and (s + p2) X(s) at
s = 0, s = −p1 and s = −p2 respectively.

Case 2 (multiple poles): Let X(s) = N(s) / ( s (s + p1)(s + p2)² )
      = A1/s + A2/(s + p1) + A3/(s + p2) + A4/(s + p2)².
The residues A1, A2, A4 are found for s = 0, s = −p1, s = −p2, and
A3 = d/ds [ X(s) (s + p2)² ] evaluated at s = −p2.

Case 3 (complex conjugate poles): Let X(s) = N(s) / ( s (s² + bs + c) ) = A1/s + (A2 s + A3)/(s² + bs + c).
The residue A1 is found as before; the coefficients A2 and A3 are solved by cross multiplying the
above equation and then equating the coefficients of like powers of s.
For example, let
X(s) = 1 / ( (s − 1)(s − 2) ),   Re{s} > 2
Expanding in partial fractions, X(s) = 1/(s − 2) − 1/(s − 1), and using the standard pairs
1/(s − 1)  ↔  e^(t) u(t),    Re{s} > 1
1/(s − 2)  ↔  e^(2t) u(t),   Re{s} > 2
we obtain
x(t) = ( e^(2t) − e^(t) ) u(t),   Re{s} > 2
Solution: Given X(s) = 4 / ( (s + 2)(s + 4) ) = A1/(s + 2) + A2/(s + 4).
Residue A1 is
A1 = X(s)(s + 2)|_(s=−2) = [ 4/((s + 2)(s + 4)) ] (s + 2)|_(s=−2) = 4/(s + 4)|_(s=−2) = 2
Residue A2 is
A2 = X(s)(s + 4)|_(s=−4) = [ 4/((s + 2)(s + 4)) ] (s + 4)|_(s=−4) = 4/(s + 2)|_(s=−4) = −2
∴ X(s) = 2/(s + 2) − 2/(s + 4)

i. For −4 < Re{s} < −2
The ROC lies between the lines passing through σ = −2 and σ = −4. Hence x(t) will be a two-sided
signal:
x(t) = −2 e^(−2t) u(−t) − 2 e^(−4t) u(t)

ii. For Re{s} < −4
The ROC lies to the left of the line passing through σ = −4. Hence x(t) will be an anti-causal signal:
x(t) = −2 e^(−2t) u(−t) + 2 e^(−4t) u(−t)

iii. For Re{s} > −2
The ROC lies to the right of the line passing through σ = −2. Hence x(t) will be a causal signal:
x(t) = 2 e^(−2t) u(t) − 2 e^(−4t) u(t)
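A minimal numerical sketch (not part of the text) that reproduces the partial-fraction expansion of X(s) = 4/((s+2)(s+4)) with SciPy; the pole/residue ordering printed by SciPy may differ from the worked example.

```python
# Hedged sketch: residues of X(s) = 4/((s+2)(s+4)) via scipy.signal.residue.
import numpy as np
from scipy.signal import residue

num = [4]                            # numerator of X(s)
den = np.polymul([1, 2], [1, 4])     # (s+2)(s+4) = s^2 + 6s + 8
r, p, k = residue(num, den)
print(r)   # residues, approximately 2 and -2 (order may vary)
print(p)   # poles, approximately -2 and -4
# With ROC Re{s} > -2 both terms are causal:
#   x(t) = 2 e^{-2t} u(t) - 2 e^{-4t} u(t)
```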
An N-th order LTI continuous time system is described by the linear constant-coefficient
differential equation
a_N d^N y(t)/dt^N + a_(N−1) d^(N−1) y(t)/dt^(N−1) + … + a_1 dy(t)/dt + a_0 y(t)
    = b_M d^M x(t)/dt^M + b_(M−1) d^(M−1) x(t)/dt^(M−1) + … + b_1 dx(t)/dt + b_0 x(t)      (4.14)
The transfer function of a continuous time system is defined as the ratio of the Laplace transform
of the output to the Laplace transform of the input (with zero initial conditions). Taking the
Laplace transform of Eq. (4.14) and factoring the numerator and denominator polynomials gives
Eq. (4.15), the transfer function of an LTI continuous time system:
H(s) = Y(s)/X(s) = K (s − z₁)(s − z₂)(s − z₃) … (s − z_M) / ( (s − p₁)(s − p₂)(s − p₃) … (s − p_N) )      (4.15)
The transfer function of an LTI continuous time system is also given by the Laplace transform of its
impulse response.
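A hypothetical illustration (not from the book): representing a transfer function H(s) = (s + 2)/(s² + 4s + 4) in SciPy and computing its impulse response, which is the inverse Laplace transform of H(s). The particular H(s) is only an example.

```python
# Hedged sketch: impulse response of an example transfer function with SciPy.
import numpy as np
from scipy import signal

H = signal.TransferFunction([1, 2], [1, 4, 4])        # H(s) = (s+2)/(s^2+4s+4)
t, h = signal.impulse(H, T=np.linspace(0, 5, 200))    # h(t) = e^{-2t} u(t) here
print(h[:5])                                          # first few samples of h(t)
```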
Unit Summary
The Laplace transform, a powerful mathematical tool, transcends mere computation to offer
profound insights into the behavior of dynamic systems across various disciplines. Beginning
with its historical roots and fundamental properties, the journey into Laplace transform theory
elucidates its superiority over conventional methods in solving differential equations, thanks to
its ability to convert complex time-domain problems into simpler algebraic ones. Techniques
such as partial fraction decomposition and theorems for initial and final values amplify its
practicality in diverse applications, from control systems analysis and electrical circuit design
to signal processing and probability theory. Advanced topics unveil its versatility in handling
generalized functions and partial differential equations, while its integration into modern
engineering workflows underscores its indispensable role in modeling and analysis. As we
reflect on its enduring impact, it becomes evident that the Laplace transform not only
revolutionizes problem-solving but also fosters a deeper understanding of dynamic systems,
paving the way for innovation and discovery across scientific and engineering frontiers.
Exercise
1. Using final value theorem, find the final value of the signal x(t) given
a. X(s) = ( )
b. X(s) =
c. X(s) =
d. X(s) =
e. X(s) =
a. X (s) = ( )( )
; ROC: −2 < Re{s} < 1
4. Sketch the pole-zero plot and ROC (if exists) for the following signals:
a. x (t) = e u(t)
b. x (t) = e u(−t)
c. x (t) = te u(t)
│ │
d. x (t) = 3e
e. x (t) = e( ⁄ )
u(−t)
8. Find the Laplace transform of the signal and its corresponding ROC.
( )= [ ( ) − ( − 4)]
9. Find the Laplace transform of ( ) = ( )+ ( )
10. Find the Laplace transform of the signal ( ) = ( )+ (− )
18. Find the Laplace transform of ( ) = − ( ). Using the differentiation in time domain
property.
19. Find the Laplace transform of the output of an LTI system ( ) = 3 ( ) by using the
differentiation in s-domain property. Given
+2
( )=
+4 +4
20. Find the initial value and final value of ( ) with Laplace transform
2
( )=
( + 3 + 5)
Find the inverse Laplace transform of
X(s) = 1 / ( (s + 5)(s − 3) )
for the following ROCs:
(a) −5 < Re{s} < 3
(b) Re{s} > 3
(c) Re{s} < −5
24. Find the inverse Laplace transform of
( )
( )= ,
( )
ROC: { } > −3
(c) ( ) = ( )
; ROC: { } > −1
Multiple-Choice Questions
1. For a causal signal x(t), the ROC of X(s) is
a) Right-half of s-plane
b) Left-half of s-plane
c) Entire s-plane
d) jΩ-axis
b)
c) ( )
d) ( )
a) Re{s} > 4
b) Re{s} < −4
c) Re{s} < 4
d) Re{s} > −4
d) 2 ⁄
14. A causal LTI system has a transfer function H(s), whose ROC will be
a) Right-sided in the s-plane
KNOW MORE
The Laplace transform is a mathematical technique that is used to simplify solving differential
equations, particularly those with initial conditions. It transforms functions of time into
functions of complex frequency. This transformation makes it easier to solve linear differential
equations by turning them into algebraic equations. Understanding the Laplace transform's
definition and its properties is crucial. This includes linearity, time-shifting, scaling, and
differentiation properties. Additionally, understanding the region of convergence is important
for ensuring convergence of the transformed function. Knowing how to convert a Laplace-
transformed function back into the time domain is essential. Techniques such as partial fraction
decomposition, contour integration, and the use of tables of Laplace transforms are commonly
employed. Laplace transform is extensively used in solving ordinary and partial differential
equations, including those arising in engineering, physics, and other fields. It simplifies solving
differential equations with initial conditions, boundary conditions, and forcing functions.
Laplace transform finds applications in signal processing for analyzing and manipulating
continuous-time signals. It aids in filtering, modulation, demodulation, and system
identification tasks, contributing to various fields such as telecommunications, audio
processing, and medical imaging. We have to stay updated on recent advancements and ongoing
research in Laplace transform theory, applications, and computational methods. Investigating
emerging trends, challenges, and potential interdisciplinary collaborations in areas such as data
science, machine learning, and quantum computing along with Laplace transform is needed.
5 Z - Transform
UNIT SPECIFICS
Through this unit we have discussed the following aspects:
What is z- Transform, why it was developed?
z-Transform and ROC of finite and infinite duration sequences
Relation between Discrete Time Fourier Transform (DTFT) and z-Transform
Properties of z-Transform.
Inverse z-Transform and methods of analysis
RATIONALE
The unit on “z-Transform" enables students to understand the relationship between the DTFT and
z-Transform. The students will understand the conversion of a discrete-time signal, which is a
sequence of real or complex numbers, into a complex valued frequency-domain (the z-domain
or z-plane) representation.
The unit focuses on z-Transform and ROC of finite and infinite duration sequences along with
the properties. The students can analyse the behaviour of the linear time-invariant (LTI) system
using the Z transform.
PRE-REQUISITES
2. Familiarity with basic concepts in signals and systems, such as periodic, non-periodic
signals, unilateral and bilateral sequences.
3. Proficiency in solving ordinary differential equations and understanding linear algebra
concepts.
UNIT OUTCOMES
List of outcomes of this unit is as follows:
U5-O1: Be able to understand the need for bilateral and unilateral z-transforms to analyze
discrete-time (DT) signals and systems.
U5-O2: Be able to understand the relationship between DT Fourier Transform (DTFT) and z-
transform.
U5-O3: Be able to learn the properties of z-transform.
U5-O4: Be able to learn the applications of bilateral and unilateral z-transform.
5.1 Introduction
The Z-transform is a mathematical technique used in signal processing and control theory to
analyze and process discrete-time signals and systems. It is the discrete-time counterpart of the
Laplace transform, which is used for continuous-time signals and systems.
There are some signals which are not absolutely summable, and their Fourier transform does
not exist. Instead of taking the transform of x[n] directly, we can make a small change in the signal
so that it becomes absolutely summable and then apply the transform. Let us multiply the signal
x[n] by r^(−n), an exponential weighting sequence. The resulting transform is named the Z-transform.
i.e.  F.T.{ x[n] r^(−n) } = X(r e^(jω)) = Σ_(n=−∞)^(∞) { x[n] r^(−n) } e^(−jωn)
                          = Σ_(n=−∞)^(∞) x[n] ( r e^(jω) )^(−n)
Substituting z = r e^(jω) into the above equation, the difficulty is resolved by generalizing the DTFT:
the signal x[n] is expressed as a sum of the complex exponentials z^n, giving
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n)                                       (5.1)
The Z-transform reduces to the DTFT for the value r = 1.
The inverse Z-transform is given by
x[n] = (1/2πj) ∮ X(z) z^(n−1) dz
It is denoted as
x[n]  ↔  X(z)
Z.T.{ x[n] } = X(z)
I.Z.T.{ X(z) } = x[n]
In practical applications, the unilateral Z-transform is often more commonly used, especially in
the context of causal systems and signals. The choice between unilateral and bilateral Z-
transform depends on the nature of the problem and the characteristics of the signals involved.
The point z = 1 on the unit circle corresponds to the DC (zero-frequency) component of the signal.
The unit circle in the Z-plane is the set of points where the magnitude of z is equal to 1; points on
the unit circle, z = e^(jω), are associated with the frequencies of the DTFT, from zero frequency up
to the sampling frequency (ω from 0 to 2π). The unit circle is particularly important for analyzing
frequency response characteristics.
Poles and zeros of the Z-transform are represented as points in the Z-plane. Zeros are locations
where the Z-transform is zero, and poles are locations where the Z-transform becomes infinite.
The distribution of poles and zeros in the Z-plane provides insights into the stability and
frequency response of a discrete-time system.
5.4.1 Poles:
Poles are the values of Z for which the transfer function becomes infinite (the denominator of
the transfer function becomes zero). The poles are denoted with the cross sign in the above
figure. They represent the natural frequencies of the system and provide information about the
system's stability and response to input signals.
The locations of the poles in the Z-plane determine how the system responds to different
frequencies. Poles closer to the origin (Z = 0) correspond to faster decaying modes, while poles
farther from the origin may represent dominant resonant frequencies.
The system is considered stable if all poles are inside the unit circle in the Z-plane. If any poles
are outside the unit circle, the system is unstable.
5.4.2 Zeros:
Zeros are the values of Z for which the transfer function becomes zero (the numerator of the
transfer function becomes zero). The zeros are denoted with a small circle sign in the above
figure. They represent the frequencies at which the system's response is zero, indicating points
in the Z-plane where the system does not respond to certain input frequencies.
Zeros can affect the system's frequency response, leading to resonant peaks or notches in the
frequency domain. The location of zeros in the Z-plane indicates the frequencies at which the
system has no response.
In summary, poles and zeros in the Z-transform provide valuable information about the
frequency response and stability of a discrete-time system. By analyzing the distribution of
poles and zeros in the Z-plane, one can understand how a system responds to different
frequencies and make design decisions to achieve desired system performance.
The Z-plane is also associated with the concept of the Region of Convergence (ROC), which is
the set of values of z for which the Z-transform converges. The ROC is often specified to ensure
the convergence of the Z-transform. We will discuss about this later in this chapter. The
location of poles in the Z-plane is crucial for stability analysis. For a discrete-time system to be
stable, all poles must lie inside the unit circle.
The Z-plane is a valuable tool for visualizing and analyzing the properties of discrete-time
systems in the context of Z-transforms. It provides insights into the frequency response,
stability, and convergence properties of these systems, making it an essential concept in the
field of digital signal processing and control system analysis.
Example 5.1 Determine the Z.T. and ROC of the following finite duration signals:
(a) x[n] = {1, 2, 3, −1, 0, 1},   −2 ≤ n ≤ 3
(b) x[n] = {0, 0, 1, 2, 1},       0 ≤ n ≤ 4
(c) x[n] = {1, 2, 3, −1, 0},      −4 ≤ n ≤ 0
Solution:
(a) x[n] = {1, 2, 3, −1, 0, 1}, −2 ≤ n ≤ 3.
By definition,
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n) = Σ_(n=−2)^(3) x[n] z^(−n)
     = z² + 2z + 3 − z^(−1) + z^(−3)
Since x[n] has samples for both n < 0 and n > 0, the ROC is the entire z-plane except z = 0 and z = ∞.
(b) x[n] = {0, 0, 1, 2, 1}, 0 ≤ n ≤ 4.
By definition,
X(z) = Σ_(n=0)^(4) x[n] z^(−n)
     = z^(−2) + 2z^(−3) + z^(−4)
Since x[n] is a finite-duration causal sequence, the ROC is the entire z-plane except z = 0.
(c) x[n] = {1, 2, 3, −1, 0}, −4 ≤ n ≤ 0.
The signal x[n] has its samples at n = −4, −3, −2, −1, 0, i.e. −4 ≤ n ≤ 0.
X(z) = Σ_(n=−4)^(0) x[n] z^(−n)
     = x(−4) z⁴ + x(−3) z³ + x(−2) z² + x(−1) z + x(0)
     = 1·z⁴ + 2·z³ + 3·z² + (−1)·z + 0
X(z) = z⁴ + 2z³ + 3z² − z
The ROC for X(z) is the entire z-plane except z = ∞.
Example 5.2
Consider the signal x[n] = aⁿ u(n). Find the Z-transform of x[n], where |a| < 1.
Given: x[n] = aⁿ u(n), |a| < 1.
Solution: The signal x[n] is right sided, i.e. causal, and of infinite duration.
The Z.T. of x[n] can be found by using the equation below:
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n) = Σ_(n=−∞)^(∞) aⁿ u(n) z^(−n)
We know
u(n) = 1 for n ≥ 0, and 0 otherwise.
∴ X(z) = Σ_(n=0)^(∞) aⁿ · 1 · z^(−n) = Σ_(n=0)^(∞) ( a z^(−1) )ⁿ
We know
Σ_(n=0)^(∞) αⁿ = 1/(1 − α),   |α| < 1
Applying this to X(z) we get
X(z) = 1/(1 − a z^(−1)) = z/(z − a)
Observe here that the signal x[n] is causal; therefore the ROC of X(z) is the region outside the circle
of radius |a|, i.e. |z| > |a|.
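A small numerical sketch (not from the book): truncating the infinite sum and checking it against the closed form z/(z − a) at an arbitrary point with |z| > |a|. The numbers chosen are purely illustrative.

```python
# Hedged sketch: numerical check of Z{a^n u[n]} = z/(z - a), |z| > |a|.
import numpy as np

a = 0.5
z = 1.2 * np.exp(1j * 0.7)           # any point with |z| > |a|
n = np.arange(0, 200)                # truncate the infinite sum
X_sum = np.sum(a**n * z**(-n))       # partial sum of a^n z^{-n}
X_closed = z / (z - a)
print(np.allclose(X_sum, X_closed))  # True (to truncation accuracy)
```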
Example 5.3: Determine the Z.T. of another signal, given below:
x[n] = −aⁿ u(−n − 1),   |a| < 1
Solution: This is an infinite duration, left sided, anti-causal signal.
The Z.T. of x[n] can be evaluated as
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n) = Σ_(n=−∞)^(∞) ( −aⁿ u(−n − 1) ) z^(−n)
∴ X(z) = − Σ_(n=−∞)^(−1) aⁿ z^(−n)
Replacing n by −n we get
X(z) = − Σ_(n=1)^(∞) a^(−n) z^(n) = − Σ_(n=1)^(∞) ( a^(−1) z )ⁿ
As the lower limit starts from 1, we cannot directly apply the geometric series formula. We make an
adjustment so that the sum starts from 0:
X(z) = − [ Σ_(n=0)^(∞) ( a^(−1) z )ⁿ − 1 ]
     = 1 − 1/(1 − a^(−1) z)
     = z/(z − a)
∴ X(z) = z/(z − a);   |z| < |a|
x[n] = −aⁿ u(−n − 1) is a non-causal (left sided), infinite duration signal.
The ROC is the region inside the circle of radius |a|, as shown below.
3. If x[n] is of finite duration, then the ROC is the entire z-plane, except possibly z = 0 and/or
z = ∞.
4. If x[n] is right sided and of infinite duration sequence, then ROC is the region of the z-plane
outside the outermost pole.
5. If x[n] is a left sided and of infinite duration sequence, then ROC is the region of the z-plane
inside the innermost pole.
6. If x[n] is two sided and if the circle |z| = r0 is in the ROC, then the ROC will consist of a ring
in the z-plane that includes the circle |z| = r0.
7. If the z-transform X(z) of x[n], is rational, then its ROC is bounded by poles or extends to
infinity.
8. If the Z.T. of X(z) of x[n] is rational and if x[n] is right sided, then ROC is the region in the
z-plane outside the outermost pole. i.e. outside the circle of radius equal to the largest magnitude
of the pole of X(z).
If the signal x[n] is causal, then the ROC includes z = ∞.
9. If the Z.T. X(z) of x[n] is rational and if x[n] is left sided, then the ROC is the region in the z-plane
inside the innermost nonzero pole, i.e. inside the circle of radius equal to the smallest nonzero pole
magnitude. In particular, if x[n] is anti-causal then the ROC also includes z = 0.
5.7.1 Linearity:
The Z-transform is a linear operation, which means it satisfies the superposition principle. If
you have a linear combination of signals, you can compute the Z-transform of each signal
separately and then sum them to find the Z-transform of the combined signal.
Linearity property states that if
x₁(n) ↔ X₁(z)   with ROC : R₁
x₂(n) ↔ X₂(z)   with ROC : R₂
then y(n) = a x₁(n) + b x₂(n) ↔ Y(z) = a X₁(z) + b X₂(z), with ROC at least R₁ ∩ R₂      (5.5)
Proof: The Z.T. of y(n) is given by
Y(z) = Σ_(n=−∞)^(∞) y(n) z^(−n)
     = Σ_(n=−∞)^(∞) { a x₁(n) + b x₂(n) } z^(−n)
     = a Σ_(n=−∞)^(∞) x₁(n) z^(−n) + b Σ_(n=−∞)^(∞) x₂(n) z^(−n)
∴ Y(z) = a X₁(z) + b X₂(z), with ROC R₁ ∩ R₂
The linearity property can be generalized for any number of arbitrary signals.
It implies that the Z.T. of a linear combination of signals is the same as linear combination of
their Z.T.
Time shifting: if x(n) ↔ X(z) with ROC R, then x(n − k) ↔ z^(−k) X(z). Proof:
Y(z) = Σ_(n=−∞)^(∞) y[n] z^(−n) = Σ_(n=−∞)^(∞) x(n − k) z^(−n)
Let n − k = m, so that n = m + k. Then
Y(z) = Σ_(m=−∞)^(∞) x[m] z^(−(m+k))                                   (5.7)
     = z^(−k) Σ_(m=−∞)^(∞) x[m] z^(−m)                                 (5.8)
     = z^(−k) X(z)                                                     (5.9)
∴ Y(z) = z^(−k) X(z), with ROC : R ∩ {0 < |z| < ∞}
Hence proved.
Time reversal: if x(n) ↔ X(z) with ROC R, then x(−n) ↔ X(z^(−1)) with ROC 1/R. Proof:
Y(z) = Σ_(n=−∞)^(∞) x(−n) z^(−n)                                       (5.11)
Substituting −n = m,
Y(z) = Σ_(m=−∞)^(∞) x[m] z^(m) = Σ_(m=−∞)^(∞) x[m] ( z^(−1) )^(−m)      (5.12)
     = X(z^(−1))
∴ Y(z) = X(z^(−1)), with ROC 1/R,
which is the same as the R.H.S. Hence proved.
Multiplication by an exponential (scaling in the z-domain): let y[n] = aⁿ x[n]. Then
Y(z) = Σ_(n=−∞)^(∞) aⁿ x[n] z^(−n) = Σ_(n=−∞)^(∞) x[n] ( z/a )^(−n)
If X(z) has ROC : |z| ∈ R, then Y(z) = X(z/a) with ROC : |a|·R, i.e. the region |a|R is a scaled
version of R.
If X(z) has a pole or zero at z = b, then X(z/a) has a pole or zero at z = ab.
If a is a positive number, the scaling can be interpreted as a shrinking or expanding of the z-plane.
Time expansion: the sequence x_m[n] is obtained from x[n] by inserting (m − 1) zeros between
successive sample values of x[n].
If x(n) ↔ X(z) with ROC R,
then x_m(n) ↔ X_m(z) = X(z^m), with ROC : R^(1/m).                     (5.14)
Proof: We know that
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n)
X_m(z) = Σ_(k=−∞)^(∞) x_m[k] z^(−k)
Only the samples at k = mn are non-zero, and x_m[mn] = x[n]; substituting k = mn (n = k/m),
X_m(z) = Σ_(n=−∞)^(∞) x[n] z^(−mn)                                      (5.15)
       = Σ_(n=−∞)^(∞) x[n] ( z^m )^(−n) = X(z^m)
5.7.6 Convolution:
Convolution in the time domain corresponds to multiplication in the Z-domain. This property
is particularly useful for analyzing the behavior of linear time-invariant systems. Convolution
property states that,
x₁(n) ↔ X₁(z)   with ROC R₁
x₂(n) ↔ X₂(z)   with ROC R₂
then y(n) = x₁(n) ∗ x₂(n) ↔ Y(z) = X₁(z)·X₂(z), with ROC : at least R₁ ∩ R₂           (5.16)
Proof: By the definition of the Z.T. we have
Y(z) = Σ_(n=−∞)^(∞) y(n) z^(−n) = Σ_(n=−∞)^(∞) [ x₁(n) ∗ x₂(n) ] z^(−n)
But we know
y(n) = x₁(n) ∗ x₂(n) = Σ_(k=−∞)^(∞) x₁(k) x₂(n − k)                                     (5.17)
Substituting y(n) in Y(z),
Y(z) = Σ_(n=−∞)^(∞) Σ_(k=−∞)^(∞) x₁(k) x₂(n − k) z^(−n)                                 (5.18)
     = Σ_(k=−∞)^(∞) x₁(k) Σ_(n=−∞)^(∞) x₂(n − k) z^(−n)                                 (5.19)
Using the time shifting property,
     = Σ_(k=−∞)^(∞) x₁(k) z^(−k) X₂(z)                                                  (5.20)
Putting Σ_(k) x₁(k) z^(−k) = X₁(z) into the above equation,
Y(z) = X₁(z)·X₂(z), with ROC : at least R₁ ∩ R₂
Hence proved.
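A minimal sketch (not from the book): for finite-length sequences the z-transform is a polynomial in z^(−1), so the convolution property reduces to polynomial multiplication, which can be checked numerically.

```python
# Hedged sketch: Z{x1 * x2} = X1(z) X2(z) for finite sequences.
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])      # X1(z) = 1 + 2 z^-1 + 3 z^-2
x2 = np.array([4.0, 5.0])           # X2(z) = 4 + 5 z^-1
y = np.convolve(x1, x2)             # time-domain convolution
print(y)                            # [ 4. 13. 22. 15.]
# Multiplying the two polynomials gives the same coefficients, i.e. Y(z) = X1(z) X2(z).
print(np.polymul(x1, x2))           # [ 4. 13. 22. 15.]
```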
5.7.8 Conjugation:
The conjugation property states that, if x[n] ↔ X(z) with ROC R,
then y(n) = x*(n) ↔ Y(z) = X*(z*), with ROC R.                          (5.26)
Proof: The Z.T. of x[n] is
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n)
Taking the conjugate of the above equation we get
X*(z) = ( Σ_(n=−∞)^(∞) x[n] z^(−n) )*                                    (5.27)
      = Σ_(n=−∞)^(∞) x*(n) ( z* )^(−n)                                   (5.28)
Replacing z by z* we get
X*(z*) = Σ_(n=−∞)^(∞) x*(n) z^(−n) = Z.T.{ x*(n) }
Hence, x*(n) ↔ X*(z*).
( )= { [ ] − ( − 1)}
= →∞ { [ ]− ( − 1)} −
=0
Hence proved.
5.7.11 Accumulation:
If x(n) ↔ X(z), and y[n] = Σ_(k=−∞)^(n) x[k],
then
Y(z) = X(z) / (1 − z^(−1))                                              (5.31)
Proof:
y(n) − y(n − 1) = Σ_(k=−∞)^(n) x[k] − Σ_(k=−∞)^(n−1) x[k] = x[n]
Taking the Z.T. on both sides, we get
Y(z) − z^(−1) Y(z) = X(z)
Y(z) (1 − z^(−1)) = X(z)
Y(z) = X(z) / (1 − z^(−1))
This is the Z.T. of the accumulator. It adds a pole at z = 1, and the ROC becomes R ∩ {|z| > 1}.
These properties make the Z-transform a powerful tool for analyzing and solving problems in
discrete-time signal processing, control theory, and other related fields. They simplify the
analysis of discrete-time systems and help in understanding their behavior in the Z-domain.
Example 5.4
Determine the Z.T. of the following signal: x[n] = aⁿ u[n] − bⁿ u[−n − 1],
for |a| < |b|.
Solution: Given x[n] = aⁿ u[n] − bⁿ u[−n − 1].
The given signal is two sided; the Z.T. is given by
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n)
     = Σ_(n=−∞)^(∞) ( aⁿ u(n) − bⁿ u(−n − 1) ) z^(−n)
     = Σ_(n=0)^(∞) ( a z^(−1) )ⁿ − Σ_(n=−∞)^(−1) ( b z^(−1) )ⁿ
The first sum equals z/(z − a) for |z| > |a| (Example 5.2), and the second, exactly as in
Example 5.3, equals z/(z − b) for |z| < |b|.
∴ X(z) = z/(z − a) + z/(z − b)
The two terms have different regions of convergence; X(z) converges on their common region:
X(z) = z/(z − a) + z/(z − b),   |a| < |z| < |b|
Fig. 5.7 Pole-zero plot and ROC |a| < |z| < |b|
For an infinite duration two-sided sequence, the ROC is a ring in the z-plane.
From this example, it is clear that a single transform cannot have two separate ROCs; the ROC is the
one common region in which all terms converge.
Example 5.5
Determine the Z.T. of the signal
x[n] = (1/3)ⁿ u(n) + (1/4)ⁿ u(n).
Solution: We know that
aⁿ u(n) ↔ z/(z − a),   |z| > |a|.
Hence,
(1/3)ⁿ u(n) ↔ z/(z − 1/3),   |z| > 1/3
(1/4)ⁿ u(n) ↔ z/(z − 1/4),   |z| > 1/4
Therefore, the z-transform of x[n] is given by
X(z) = z/(z − 1/3) + z/(z − 1/4)
     = 3z/(3z − 1) + 4z/(4z − 1)
The first series converges for |z| > 1/3 and the second series converges for |z| > 1/4. The common
region of z for which both series converge is |z| > 1/3. This is shown in the figure below.
Fig. 5.8 Pole-zero plot and ROC : |z| > 1/3
Example 5.6
Determine the Z.T. of x[n],
x[n] = (1/4)ⁿ u(−n − 1) + (1/3)ⁿ u(−n − 1)
Solution: The Z.T. of x[n] is given by
X(z) = Σ_(n=−∞)^(∞) [ (1/4)ⁿ u(−n − 1) + (1/3)ⁿ u(−n − 1) ] z^(−n)
     = Σ_(n=−∞)^(−1) (1/4)ⁿ z^(−n) + Σ_(n=−∞)^(−1) (1/3)ⁿ z^(−n)
     = Σ_(m=1)^(∞) (4z)^m + Σ_(m=1)^(∞) (3z)^m
X(z) = 4z/(1 − 4z) + 3z/(1 − 3z)
The first series converges for |z| < 1/4 because the signal is non-causal, and the second series
converges for |z| < 1/3. Therefore X(z) converges on the common ROC of R₁ and R₂,
i.e. |z| < 1/4.
The ROC of X(z) is the region inside the circle of radius 1/4.
The pole-zero plot and ROC are shown below.
Fig. 5.9: Pole-zero plot and ROC |z| < 1/4
5.8 Relationship between DTFT and z-transform:
We have seen the relationship between the CTFT and the Laplace transform; the DTFT and the Z.T.
exhibit a similar relationship.
Let z = r e^(jω).
Here r is the magnitude of z and ω is its phase (angle).
Since z = r e^(jω),
X(z) = Σ_(n=−∞)^(∞) x[n] z^(−n)
     = Σ_(n=−∞)^(∞) x[n] r^(−n) e^(−jωn)                                (5.32)
     = Σ_(n=−∞)^(∞) { r^(−n) x[n] } e^(−jωn)                            (5.33)
and we have the DTFT given by
X(e^(jω)) = Σ_(n=−∞)^(∞) x[n] e^(−jωn)                                  (5.34)
On comparing Eq. (5.33) and Eq. (5.34), it is clear that the DTFT of r^(−n) x[n] is the Z.T. of x[n]
evaluated at z = r e^(jω):
r^(−n) x[n]  ↔  X(r e^(jω))        (DTFT pair)
When r = 1,
(1)^(−n) x[n] = x[n]  ↔  X(e^(jω))
X(z)|_(z = e^(jω)) = D.T.F.T.{ x[n] }
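A small numerical sketch (not from the book): for a finite sequence, evaluating the z-transform on the unit circle gives its DTFT; here it is compared against NumPy's FFT at the same frequencies.

```python
# Hedged sketch: X(z) on the unit circle equals the DTFT (FFT at bin frequencies).
import numpy as np

x = np.array([1.0, 2.0, 3.0, -1.0])        # finite-duration x[n], n = 0..3
N = len(x)
omega = 2 * np.pi * np.arange(N) / N       # FFT bin frequencies
z = np.exp(1j * omega)
X_on_circle = np.array([np.sum(x * zk**(-np.arange(N))) for zk in z])
print(np.allclose(X_on_circle, np.fft.fft(x)))   # True
```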
X(r e^(jω)) = Σ_(n=−∞)^(∞) x[n] ( r e^(jω) )^(−n)
            = Σ_(n=−∞)^(∞) { x[n] r^(−n) } e^(−jωn)
            = F.T.{ x[n] r^(−n) }                                        (5.35)
Applying the inverse F.T. to the above equation we get
x[n] r^(−n) = I.F.T.{ X(r e^(jω)) }
Using the I.F.T. expression, we have
x[n] r^(−n) = (1/2π) ∫_(2π) X(r e^(jω)) e^(jωn) dω                       (5.36)
x[n] = (1/2π) ∫_(2π) X(r e^(jω)) ( r e^(jω) )^n dω
If we put
z = r e^(jω),
then dz = j r e^(jω) dω, i.e. dω = dz/(jz).
Substituting in the above equation, we obtain
x[n] = (1/2πj) ∮ X(z) z^(n−1) dz
This expression for the I.Z.T. indicates integration around a counterclockwise closed circular
contour centered at the origin with radius r.
This is the formal definition of the I.Z.T. and a direct method of computing it.
There are other methods to find the time domain sequence when its z-transform is known.
These are:
X(z) = Σ_(n=−∞)^(∞) c_n z^(−n),
where c_n = x[n] are the coefficients in the power series. When X(z) is rational, the expansion can
be performed by long division; thus it is also called the long division method.
Example 5.7
Using the long division method, find the I.Z.T. of
X(z) = (1 + z^(−1)) / (1 + (1/3) z^(−1)),   assuming the ROC to be |z| > 1/3.
Solution:
The ROC is outside the circle of radius 1/3, so the corresponding time domain signal is causal.
Causal signals have a power series expansion in negative powers of z.
Carrying out the long division of (1 + z^(−1)) by (1 + (1/3) z^(−1)) gives
X(z) = 1 + (2/3) z^(−1) − (2/9) z^(−2) + (2/27) z^(−3) − …
Taking the I.Z.T.,
x[n] = { 1, 2/3, −2/9, 2/27, … },   n ≥ 0.
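A minimal sketch (not from the book) of the long-division (power-series) method as a short loop: it generates successive coefficients of the causal expansion of X(z) = (1 + z^(−1))/(1 + (1/3) z^(−1)).

```python
# Hedged sketch: power-series (long-division) inverse z-transform.
import numpy as np

num = np.array([1.0, 1.0])        # 1 + z^-1
den = np.array([1.0, 1.0 / 3.0])  # 1 + (1/3) z^-1
n_terms = 5
x = np.zeros(n_terms)
rem = np.concatenate([num, np.zeros(n_terms)])   # working remainder
for n in range(n_terms):
    x[n] = rem[n] / den[0]                       # next quotient coefficient x[n]
    rem[n:n + len(den)] -= x[n] * den            # subtract x[n]*den shifted by n
print(x)   # approximately [1, 0.667, -0.222, 0.074, -0.025], matching Example 5.7
```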
Example 5.8
Using the long division method, find the I.Z.T. of X(z) = z/(z − a);  ROC |z| < |a|.
Solution:
The ROC is inside the circle of radius |a|, so the corresponding time domain signal is anti-causal.
Anti-causal signals have a power series expansion in positive powers of z.
So the I.Z.T. of z/(z − a) can be evaluated by long division with the divisor written in the opposite
order (−a + z), so as to obtain an expansion in positive powers of z:
X(z) = −(1/a) z − (1/a²) z² − (1/a³) z³ − …
We can therefore write
x[n] = −aⁿ u(−n − 1)
and the relation between X(z) and x[n] as
−aⁿ u(−n − 1)  ↔  z/(z − a);   |z| < |a|
The second method is partial fraction expansion. Write
X(z)/z = A₀/z + Σ_i A_i/(z − p_i)                                        (5.39)
where the p_i are the poles of X(z). In general,
X(z) = N(z)/D(z) = b₀ (z − z₁)(z − z₂) … (z − z_M) / ( (z − p₁)(z − p₂) … (z − p_N) )      (5.40)
and, for distinct poles,
X(z)/z = A₀/z + A₁/(z − p₁) + A₂/(z − p₂) + … = A₀/z + Σ_(i=1)^(N) A_i/(z − p_i)            (5.41)
with
A₀ = X(z)|_(z=0),    A_i = (z − p_i) X(z)/z |_(z = p_i)
so that
X(z) = A₀ + A₁ z/(z − p₁) + A₂ z/(z − p₂) + …                             (5.42)
If p_i is a pole location with multiplicity r, then X(z)/z will contain terms of the form
C₁/(z − p_i) + C₂/(z − p_i)² + … + C_r/(z − p_i)^r                        (5.43)
with the coefficients obtained from
C_(r−k) = (1/k!) d^k/dz^k [ (z − p_i)^r X(z)/z ] |_(z = p_i)               (5.44)
From the above equation we can find the coefficients of poles which are located at the same
location.
Example 5.9
Determine the inverse Z.T. of the following X(z) by the partial fraction method:
X(z) = (z + 2) / (2z² − 7z + 3)
with ROC (a) |z| > 3,  (b) |z| < 1/2,  (c) 1/2 < |z| < 3.
Solution: Let
X(z)/z = (z + 2) / ( z (2z² − 7z + 3) ) = (z + 2) / ( 2 z (z − 1/2)(z − 3) )
       = A₁/z + A₂/(z − 1/2) + A₃/(z − 3)
A₁ = z · X(z)/z |_(z=0) = (0 + 2) / ( 2 (0 − 1/2)(0 − 3) ) = 2/3
A₂ = (z − 1/2) · X(z)/z |_(z=1/2) = (1/2 + 2) / ( 2 · (1/2) · (1/2 − 3) ) = (5/2)/(−5/2) = −1
A₃ = (z − 3) · X(z)/z |_(z=3) = (3 + 2) / ( 2 · 3 · (3 − 1/2) ) = 5/15 = 1/3
∴ X(z) = 2/3 − z/(z − 1/2) + (1/3) · z/(z − 3)

(a) ROC |z| > 3
The ROC is outside the outermost pole, so every term is causal:
2/3            ↔  (2/3) δ[n]
z/(z − 1/2)    ↔  (1/2)ⁿ u(n)
z/(z − 3)      ↔  (3)ⁿ u(n)
x[n] = (2/3) δ[n] − (1/2)ⁿ u(n) + (1/3)(3)ⁿ u(n)

(b) ROC |z| < 1/2
The ROC is inside the circle of radius 1/2, so both pole terms are anti-causal:
x[n] = (2/3) δ[n] + (1/2)ⁿ u(−n − 1) − (1/3)(3)ⁿ u(−n − 1)

(c) ROC : 1/2 < |z| < 3
The ROC is the ring between the poles at z = 1/2 and z = 3.
The term z/(z − 3) corresponds to an anti-causal signal and z/(z − 1/2) to a causal one:
x[n] = (2/3) δ[n] − (1/2)ⁿ u(n) − (1/3)(3)ⁿ u(−n − 1)
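A minimal sketch (not from the book) reproducing the partial-fraction expansion of Example 5.9 with SciPy: X(z) = (z + 2)/(2z² − 7z + 3) is first rewritten in powers of z^(−1) as (z^(−1) + 2z^(−2))/(2 − 7z^(−1) + 3z^(−2)).

```python
# Hedged sketch: residues of Example 5.9 via scipy.signal.residuez.
from scipy.signal import residuez

b = [0.0, 1.0, 2.0]        # numerator coefficients in z^-1
a = [2.0, -7.0, 3.0]       # denominator coefficients in z^-1
r, p, k = residuez(b, a)
print(r)   # residues, approximately -1 and 1/3 (order may vary)
print(p)   # poles, approximately 0.5 and 3
print(k)   # direct term, approximately 2/3  ->  (2/3) delta[n]
# With ROC |z| > 3 every pole term is causal:
#   x[n] = (2/3) delta[n] - (1/2)^n u[n] + (1/3) 3^n u[n]
```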
An LTI system is BIBO stable if and only if its impulse response is absolutely summable:
S = Σ_(n=−∞)^(∞) |h[n]| < ∞                                              (5.48)
In turn, this condition implies that H(z) must contain the unit circle within its ROC.
Example 5.10
A linear time-invariant system is characterized by the system function
H(z) = (3 − 4z^(−1)) / (1 − 3.5 z^(−1) + 1.5 z^(−2))
Specify the ROC of H(z) and determine h(n) for the following conditions:
(a) The system is stable.
(b) The system is causal.
(c) The system is anticausal.
Solution:
H(z) = (3 − 4z^(−1)) / (1 − 3.5 z^(−1) + 1.5 z^(−2))
By applying partial fractions,
H(z) = 1/(1 − 0.5 z^(−1)) + 2/(1 − 3 z^(−1))
The system has poles at z = 0.5 and z = 3.
(a) Since the system is stable, its ROC must include the unit circle and hence
it is 0.5 < |z| < 3. Consequently, h(n) is non-causal and is given as
h(n) = (0.5)ⁿ u(n) − 2(3)ⁿ u(−n − 1)
(b) Since the system is causal, its ROC is |z| > 3. Hence,
h(n) = (0.5)ⁿ u(n) + 2(3)ⁿ u(n)
and the system is unstable in this case.
(c) If the system is anti-causal, its ROC is |z| < 0.5. Hence,
h(n) = −(0.5)ⁿ u(−n − 1) − 2(3)ⁿ u(−n − 1)
and the system is unstable in this case.
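A small sketch (not from the book): checking the pole locations of H(z) from Example 5.10 numerically, and hence which ROC can contain the unit circle as required by the stability condition of Eq. (5.48).

```python
# Hedged sketch: pole magnitudes of H(z) = (3 - 4 z^-1)/(1 - 3.5 z^-1 + 1.5 z^-2).
import numpy as np

den = [1.0, -3.5, 1.5]        # 1 - 3.5 z^-1 + 1.5 z^-2
poles = np.roots(den)         # same coefficients as z^2 - 3.5 z + 1.5
print(poles)                  # approximately [3.  0.5]
print(np.abs(poles))          # one pole lies outside the unit circle, so only the
                              # two-sided choice 0.5 < |z| < 3 gives a stable h[n]
```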
Unit Summary
The Z-transform, a powerful mathematical tool in discrete-time signal processing, offers a
comprehensive framework for analyzing discrete signals and systems. Understanding its
definition and properties, including linearity, time-shifting, scaling, and convolution properties,
forms the foundation of its application. The inverse Z-transform facilitates the conversion of
transformed signals back into the time domain, employing techniques such as partial fraction
decomposition and contour integration. In practical terms, the Z-transform finds extensive use
in digital filter design, system analysis, and control theory, allowing engineers to analyze and
design discrete-time systems with precision. Its applications extend to fields such as
telecommunications, audio processing, and image processing, where it enables efficient
manipulation of discrete signals. Advanced topics, including the relationship between Z-
transform and Laplace transform, as well as multirate signal processing, deepen the
understanding of its theoretical underpinnings and broaden its scope of application. As
technology continues to evolve, the Z-transform remains a vital tool for digital signal
processing, offering insights into the behavior of discrete systems and paving the way for
innovative solutions in a digital world.
Exercise
1) Determine the Z-transform of the following signals, also mention the ROC.
a) ( ) = {1, 2, 3, 3, 2, 1} 0≤ ≤5
c) ( ) = {3, 2, 1, 1, 2, 3} −5≤ ≤0
2) Determine the z-transform of the following signals, also plot the ROC.
a) ( )= sin( ). ( )
b) ( ) = (0.3) [ ( ) − ( − 2)]
X(z) = (2 − 1.5 z^(−1)) / (1 − 1.5 z^(−1) + 0.5 z^(−2))
4) Determine the Z-transform of the signal ( ) = − (− − 1) and plot the ROC.
a) ( )= |z|<0.5
.
b) ( )=( . )( . )
. .
Specify the ROC of H(z) and determine h(n) for the following
convolutions
a) ⁄
b) ⁄
c)
d)
a) |z|>1
b) |z|<1
c) no ROC
d) -1<|z|<1
c) ( ) =
( )
d) none of the above
6. If all the poles of the system function H(z) have magnitude smaller than 1, then the system will
be,
a) Stable
b) unstable
c) BIBO stable
d) a and c
d)
8. The ROC of the z-transform X(z) of a finite-duration signal x(n), non-zero for −5 < n < 5, is
a) Entire z-plane
b) entire z-plane except z=0 and z=∞
c) Entire z-plane except z=0
d) entire z-plane except z=∞
9. If the Ƶ−transform of x(n) is X(z), then the Ƶ−transform of x(−n) is
a) − ( )
b) (− )
c) − ( )
d) ( )
10. The inverse Ƶ−transform x(n) is given by
a) ( ) = ∮ ( )
b) ( ) = ∮ ( )
c) ( ) = ∮ ( )
d) ( ) = ∮ ( )
11. ℎ Ƶ − ,
a) finite series
b) infinite power series
c) geometric series
d) both a and c
12. ℎ Ƶ − ℎ ( ) ( ) ,
∗( ) ∗
a) ( ) b) ( ) ( ) c) ( ) ∗ ( ) d) ( ) ( )
13. For a stable LTI discrete time system poles should lie_______ and unit circle should be_______
a) Outside unit circle, included in ROC
b) inside unit circle, outside of ROC
c) inside unit circle, included in ROC
d) outside unit circle, outside of ROC
14. ℎ , ℎ ( ) = (− ) ( ) − < −1 ,
a) stable system
b) unstable system
c) anticausal system
d) neither stable nor causal
15. ( )ℎ ℎ , ℎ , ( ) ,
a) signed constant sequence
b) signed decaying sequence
c) signed growing sequence
d) constant sequence
Ƶ
16. ( )ℎ − ( ) ℎ → ℎ ( )↔ ,
a) b) a c) d)
17. ℎ Ƶ − ( )= ( ) ,
a) ( ) = ; | |>1 b) ( ) = ; | |>1
( ) ( )
c) ( ) = ( )
; | |<1
d) ( ) = ; | |<1
( )
KNOW MORE
Z-transform reveals a rich tapestry of mathematical intricacies and practical applications.
Beyond its fundamental properties lie advanced techniques and insights that amplify its utility
in discrete signal analysis and system design. Understanding the intricacies of Z-transform
inversion methods, such as residue calculus and contour integration, empowers engineers and
researchers to unravel complex system behaviors with precision and accuracy. The Z-
transform's role extends far beyond mere signal processing; it serves as a cornerstone in areas
ranging from digital control theory to communication systems design, enabling the
development of robust algorithms and efficient data processing techniques. Exploring the
connections between the Z-transform and other mathematical tools, such as Fourier analysis
and Laplace transform, unveils deeper insights into the interplay between time and frequency
domains in discrete systems. Moreover, ongoing research in areas such as multirate signal
processing and adaptive filtering continues to push the boundaries of Z-transform theory,
paving the way for innovative applications in emerging technologies. As we delve deeper into
the intricacies of the Z-transform, we unlock a world of possibilities, where mathematical
abstraction converges with real-world engineering challenges, driving forward progress and
innovation in the digital age.
6 Sampling & Reconstruction
UNIT SPECIFICS
Through this unit we have discussed following aspects:
• The necessity of sampling theorem
• Sampling theorem for Continuous Time & Discrete time signal
• Understanding of discrete time processing of continuous time signals
• Frequency domain spectra of the sampled signals
• Interpolation methods for reconstruction of sampled signals
• Zero-order & first order hold interpolation methods
• Effects of under sampling and oversampling on the signals
• Using spectra to understand aliasing and its effects
• Understanding continuous and discrete time systems
RATIONALE
The unit “Sampling and Reconstruction” is not only important to understand signals but also
will be helpful in Communications. We can call on simple intuition to motivate and describe the
processes of sampling and reconstruction from samples, because many communication systems
are closely related to sampling or rely fundamentally on using a sampled version of the signal to
be transmitted.
This unit focuses on sampling of continuous and discrete time signals. The effects of under
sampling and oversampling are discussed in detail with various examples. Frequency domain
analysis i.e. Fourier transform of signals is extensively used for the better understanding of the
topic. The effects of ‘aliasing’ are explained in lucid manner. Various methods to avoid aliasing
are also discussed in this topic. Both continuous time and discrete time signals are considered
for discussions. For reconstruction of the sampled signal, various interpolation methods are
discussed with detailed mathematical analysis and distinct examples. Different filtering
techniques are studied for proper reconstruction of the signal.
The discrete and continuous time systems are also discussed to give an overview of how such
systems work.
PRE-REQUISITES
1. Strong understanding of mathematics, including algebra, calculus, and complex
numbers.
2. Familiarity with basic concepts in signals and systems, such as time-domain and
frequency-domain representations, Fourier analysis, and convolution.
3. Proficiency in solving ordinary differential equations and understanding linear
algebra concepts.
4. Basic knowledge of electronics and circuit analysis for understanding continuous-
time LTI systems.
5. Knowledge of digital signal processing concepts for understanding discrete-time
LTI systems.
UNIT OUTCOMES
List of outcomes of this unit is as follows:
U6-O1: Understand need of sampling theorem
U6-O2: Apply sampling theorem to continuous and discrete time signals
U6-O3: Study the effects of under sampling and oversampling
U6-O4: Perform Zero-order and first order hold interpolation
U6-O5: Study aliasing and its effects
U6-O6: Understand aliasing and its effects through Fourier analysis
U6-O7: Study relationship between continuous time & discrete time systems
6.1 Introduction
A continuous time signal can entirely be represented by its samples which are equally spaced
in time. The sampling theorem is associated with these samples. This theorem is widely used in
applications where digital data is preferred over analog. Sampling theorem is one of the most
important theorems in signals & systems as it acts bridge between continuous time signals and
discrete time signals.
Nowadays, technically advanced digital systems are developed to effectively process
continuous time signals. Hence, there is need to convert these continuous time signals into
discrete time signals. Sampling process provides some insight, to deal with the problem of
conversion mentioned above. Sampling is the process which involves conversion of continuous
time signal to discrete time signal. The sampling is performed by taking samples of continuous
time signal at definite intervals of time. This sampled (discrete time) signal is easily processed
by discrete time systems. The original continuous time signal is reconstructed from this discrete
time signal.
In the following discussion, we introduce and develop the concept of sampling and the process of
reconstructing CT signals from their samples. We will analyze the
conditions under which the sampling rate is sufficient to exactly reconstruct the original
continuous time signal, and we will also observe what happens when the sampling rate is too low
and reconstruction of the original continuous time signal from its samples is attempted. Finally, we
examine the sampling of discrete time signals and the related concepts of decimation and interpolation.
6.2 Sampling Theorem
First, we need to clearly see some examples of continuous time signals which can be uniquely
specified by a sequence of equally spaced samples. For example, figure 6.1 illustrates three
different continuous time signals, all of which have identical values at integer multiples of T (T
is the sampling time/sampling period/sampling interval); that is,
x₁(kT) = x₂(kT) = x₃(kT)
The sampling function, a periodic impulse train, is
p(t) = Σ_(n=−∞)^(∞) δ(t − nT)                                            (6.2)
Now, by using the sampling property of the unit impulse function, we know that multiplying x(t) by a
unit impulse samples the value of the signal at the point at which the impulse is located, i.e.
x(t) δ(t − t₀) = x(t₀) δ(t − t₀). Applying this to Eq. (6.1), we see the result, which is illustrated
in figure 6.2: x_p(t) is an impulse train with the amplitudes of the impulses equal to the samples of
x(t) at intervals spaced by the time interval T; that is,
x_p(t) = Σ_(n=−∞)^(∞) x(nT) δ(t − nT)                                     (6.3)
Since multiplication in time corresponds to convolution in frequency, and convolution with an
impulse simply shifts a signal, it follows that
X_p(jω) = (1/T) Σ_(k=−∞)^(∞) X( j(ω − kω_s) )                             (6.6)
Fig. 6.3: Frequency domain representation of sampling in the time domain: (a) Spectrum of
original signal; (b) Spectrum of sampling function; (c) Spectrum of sampled signal with
ω_s > 2ω_M; (d) Spectrum of sampled signal with ω_s < 2ω_M
Sampling Theorem:
Let x(t) be a band limited signal with X(jω) = 0 for |ω| > ω_M. Then x(t) is uniquely
determined by its samples x(nT), n = 0, ±1, ±2, …, if
ω_s > 2ω_M
where
ω_s = 2π/T
Given these samples, we can reconstruct x(t) by generating a periodic impulse train in which
successive impulses have amplitudes that are the successive sample values. This impulse train is
then processed through an ideal lowpass filter with gain T and cutoff frequency greater than ω_M
and less than ω_s − ω_M. The resulting output will be exactly equal to x(t).
We have seen the sampling theorem, where the impulse train sampling method was discussed.
The minimum sampling frequency, 2ω_M, which the sampling frequency must exceed for the
original continuous time signal to satisfy the sampling theorem, is referred to as the 'Nyquist rate'.
In real life applications a non-ideal lowpass filter is used instead of an ideal lowpass filter, as
shown in figure 6.4. The non-ideal filter has a characteristic |H(jω)| with |H(jω)| ≅ 1 for |ω| < ω_M
and |H(jω)| ≅ 0 for |ω| > ω_s − ω_M. For understanding the basic principles of the sampling
theorem, for convenience we will regularly use ideal filters throughout this chapter.
generate and transmit. So, it is often more convenient to generate the sampled signal in a form
referred to as a 'zero-order hold'. Such a system samples the given continuous time signal at a given
instant and holds that value until the next instant at which a sample is taken, as shown in figure
6.5.
Fig 6.6: Zero-order hold as impulse-train sampling followed by an LTI system with rectangular
impulse response
x_r(t) = Σ_(n=−∞)^(∞) x(nT) h(t − nT)                                     (6.7)
Eq. (6.7) describes how to fit a continuous curve between the sample points x(nT) and
consequently represents an interpolation formula. For the ideal lowpass filter H(jω) in Figure
6.4, the impulse response h(t) is
h(t) = ω_c T sin(ω_c t) / (π ω_c t)                                       (6.8)
so we get
x_r(t) = Σ_(n=−∞)^(∞) x(nT) h(t − nT)                                     (6.9)
x_r(t) = Σ_(n=−∞)^(∞) x(nT) (ω_c T/π) · sin( ω_c (t − nT) ) / ( ω_c (t − nT) )      (6.10)
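A minimal numerical sketch (not from the book) of the band-limited (sinc) interpolation of Eq. (6.10) with ω_c = ω_s/2: a band-limited cosine is reconstructed from its samples; the specific frequencies and grid are illustrative only.

```python
# Hedged sketch: sinc (band-limited) interpolation of a sampled cosine.
import numpy as np

T = 0.1                                  # sampling period, w_s = 2*pi/T
n = np.arange(-50, 51)                   # sample indices (finite window)
w0 = 2 * np.pi * 2.0                     # 2 Hz cosine, well below w_s/2
x_samples = np.cos(w0 * n * T)

t = np.linspace(-1, 1, 401)              # dense grid for the reconstruction
# x_r(t) = sum_n x(nT) sinc((t - nT)/T), with np.sinc(x) = sin(pi x)/(pi x)
x_r = np.array([np.sum(x_samples * np.sinc((tk - n * T) / T)) for tk in t])
print(np.max(np.abs(x_r - np.cos(w0 * t))))   # small, truncation-limited error
```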
The reconstruction according to Eq. (6.10) with ω_c = ω_s/2 is illustrated in Figure 6.8. Figure
6.8(a) represents the original band-limited signal x(t), and Figure 6.8(b) represents x_p(t), the
impulse train of samples. In figure 6.8(c), the superposition of the individual terms in Eq. (6.10)
is illustrated.
Interpolation using the impulse response of an ideal lowpass filter as in Eq. (6.10) is commonly
referred to as band-limited interpolation, since it implements exact reconstruction if x(t) is
band limited and the sampling frequency satisfies the conditions of the sampling theorem. As
we have indicated, in many cases it is preferable to use a less accurate, but simpler, filter or,
equivalently, a simpler interpolating function than the function in Eq. (6.8). For example, the
zero-order hold can be viewed as a form of interpolation between sample values in which the
interpolating function h(t) is the impulse response h₀(t) depicted in figure 6.6. In that sense,
with x₀(t) in the figure corresponding to the approximation to x(t), the system h₀(t) represents
an approximation to the ideal lowpass filter required for exact interpolation. Figure 6.9
shows the magnitude of the transfer function of the zero-order-hold interpolating filter,
superimposed on the desired transfer function of the exact interpolating filter.
Fig 6.8: Band-limited interpolation using Sinc function: (a) Band-limited signal x(t) (b) Impulse
train sampling of x(t) (c) Ideal band-limited interpolation in which impulse train is
replaced by superposition of Sinc functions.
If the interpolation provided by zero-order hold is insufficient then we can opt for interpolation
strategies which are of higher order holds. We know from figure 6.5 that the zero-order hold
produces an output signal that is not continuous in time. On the other hand, the linear
interpolation, as shown in figure 6.7 gives us reconstructions which are continuous. The linear
interpolations are sometimes also known as first-order hold and can also be viewed as in figure
6.6. The associated transfer function is also shown in figure and is given by
H(jω) = (1/T) [ sin(ωT/2) / (ω/2) ]²                                     (6.11)
The transfer function of the first-order hold is shown superimposed on the transfer function for
the ideal interpolating filter. Now, we can define second- and higher order holds that produce
reconstructions with a higher degree of smoothness. For example, the output of a second-order
hold provides an interpolation of the sample values that is continuous and has a continuous first
derivative and discontinuous second derivative.
Till now we assumed that the sampling frequency was sufficiently high that the conditions of
the sampling theorem were satisfied. As illustrated in figure 6.3, with ω_s > 2ω_M the spectrum
of the sampled signal consists of scaled replications of the spectrum of x(t), and this forms the
basis for the sampling theorem. When ω_s < 2ω_M, X(jω), the spectrum of x(t), is no longer
replicated in X_p(jω) and thus it is not possible to recover the original continuous time signal by
lowpass filtering. This effect is known as aliasing, and in this section we explore its effect and
consequences.
Clearly, if the system of figure 6.4 is applied to a signal with ω_s < 2ω_M, the reconstructed
signal x_r(t) will no longer be equal to x(t). However, as explored in the earlier section, the original
signal and the signal x_r(t) that is reconstructed using band limited interpolation will always
be equal at the sampling instants; that is, for any choice of ω_s,
x_r(nT) = x(nT),   n = 0, ±1, ±2, …                                       (6.12)
We will try to understand the relationship between x(t) and x_r(t) when ω_s < 2ω_M for the
simple case of a sinusoid. Thus let
x(t) = cos(ω₀ t)                                                          (6.13)
with Fourier transform X(jω) as indicated in Figure 6.11(a). In this figure, we have graphically
distinguished the impulse at ω₀ from that at −ω₀ for convenience. Let us consider X_p(jω), the
spectrum of the sampled signal, and focus on the effect of a change in the frequency ω₀ with
the sampling frequency ω_s fixed. In figure 6.11(b)-(e), we illustrate X_p(jω) for several values
of ω₀. Also indicated by a dashed line is the passband of the lowpass filter of figure 6.4 with
ω_c = ω_s/2. Note that no aliasing occurs in (b) and (c), since ω₀ < ω_s/2, whereas aliasing does
occur in (d) and (e). For each of the four cases, the lowpass filtered output x_r(t) is given as
follows:
(a) ω₀ = ω_s/6 :      x_r(t) = cos(ω₀ t) = x(t)
(b) ω₀ = 2ω_s/6 :     x_r(t) = cos(ω₀ t) = x(t)
(c) ω₀ = 4ω_s/6 :     x_r(t) = cos( (ω_s − ω₀) t ) ≠ x(t)
(d) ω₀ = 5ω_s/6 :     x_r(t) = cos( (ω_s − ω₀) t ) ≠ x(t)
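A small numerical sketch (not from the book): when a cosine is sampled below the Nyquist rate, its samples are indistinguishable from those of the aliased frequency ω_s − ω₀, exactly as in cases (c) and (d) above. The numbers are illustrative.

```python
# Hedged sketch: samples of cos(w0 t) and cos((ws - w0) t) coincide when aliasing occurs.
import numpy as np

ws = 2 * np.pi * 6.0          # sampling frequency (6 Hz)
T = 2 * np.pi / ws
n = np.arange(0, 20)

w0 = 5 * ws / 6               # case (d): w0 > ws/2, so aliasing occurs
alias = ws - w0               # frequency of the reconstructed signal
print(np.allclose(np.cos(w0 * n * T), np.cos(alias * n * T)))   # True
```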
When aliasing occurs, the original frequency ω₀ takes on the identity of the lower frequency
ω_s − ω₀. For ω_s/2 < ω₀ < ω_s, as ω₀ increases relative to ω_s, the output frequency ω_s − ω₀
decreases. When ω₀ = ω_s, for example, the reconstructed signal is a constant. This is consistent
with the fact that, when sampling once per cycle, the samples are all equal and would be
identical to those obtained by sampling a constant signal (ω₀ = 0). In figure 6.12, we have
depicted, for each of the four cases in Figure 6.11, the signal x(t), its samples, and the
reconstructed signal x_r(t). From the figure, we can see how the lowpass filter interpolates
between samples. Consider another sinusoidal signal given by Eq. (6.14) as
x(t) = cos(ω₀ t + φ)                                                      (6.14)
In this case, the Fourier transform of x(t) is essentially the same as Figure 6.11(a), except that
the impulse indicated with a solid line now has amplitude (1/2)e^(jφ), while the impulse indicated
with a dashed line has amplitude with the opposite phase, namely (1/2)e^(−jφ). If we now consider
the same set of choices for ω₀ as in Figure 6.11, the resulting spectra for the sampled versions
of cos(ω₀ t + φ) are exactly as in the figure, with all solid impulses having amplitude (1/2)e^(jφ)
and all dashed ones having amplitude (1/2)e^(−jφ). Again, in cases (b) and (c) the condition of the
sampling theorem is met, so that x_r(t) = cos(ω₀ t + φ) = x(t), while in (d) and (e) we again
have aliasing, but we now see that there has been a reversal of the solid and dashed impulses
appearing in the passband of the lowpass filter. As a result, we find that in these cases x_r(t) =
cos[ (ω_s − ω₀) t − φ ], where we have a change in the sign of the phase, i.e., a phase reversal.
It is important to note that the sampling theorem explicitly requires that the sampling frequency
be greater than twice the highest frequency in the signal, rather than greater than or equal to
twice the highest frequency. The next example illustrates that sampling a sinusoidal signal at
exactly twice its frequency (i.e., exactly two samples per cycle) is not sufficient.
Fig 6.11: Effect of oversampling and undersampling: (a) Spectrum of original sinusoidal signal;
(b), (c) Spectrum of sampled signal with ω₀ < ω_s/2; (d), (e) Spectrum of sampled signal
with ω₀ > ω_s/2
Fig 6.12: Effect of aliasing on a sinusoidal signal. For each of four values of ω₀, the original
sinusoidal signal (solid curve), its samples, and the reconstructed signal (dashed curve) are
illustrated; in (a) and (b) no aliasing occurs, whereas in (c) and (d) there is
aliasing
Example 6.1
Consider the signal
x(t) = cos( ω_s t/2 + φ ),
and suppose that this signal is sampled, using impulse sampling, at exactly twice the frequency of
the sinusoid, that is, with sampling frequency ω_s. If this impulse sampled signal is applied as input
to an ideal low pass filter with cut-off frequency ω_s/2, the resulting output is
x_r(t) = (cos φ) cos( ω_s t/2 )
It is observed that perfect reconstruction of x(t) is possible only when sin φ = 0, i.e. when φ is zero
or an integer multiple of π. Otherwise x_r(t) ≠ x(t). So the perfect reconstruction of the original
continuous time signal becomes conditional, which is not desirable.
As an extreme case, consider
x(t) = sin( ω_s t/2 )
This signal is sketched in figure 6.12. The values of the signal at integer multiples of the sampling
period 2π/ω_s are zero, so sampling at this rate produces a signal which is identically zero. When
this zero input is given to the ideal low pass filter, the resulting output x_r(t) will also be zero.
The stroboscopic effect, observed under undersampling, is based on the principle that higher
frequencies are reflected into lower frequencies. Consider, for example, the situation depicted in
Figure 6.13, in which we have a disc rotating at a constant rate with a single radial line marked on
the disc.
The flashing strobe can be considered to act as a sampling system, since it illuminates the disc
for extremely brief time intervals at a periodic rate. It is observed that when the rotational speed
of the disc is less than the strobe frequency then the speed of the rotation of the disc is perceived
correctly. Now, when the strobe frequency is less than twice the rotational frequency of the
disc, then rotation appears to be at lower frequency than the actual. Sometimes because of phase
reversal the disc will appear to be rotating in the wrong direction. Now, if we track the position of
the line on the disc at successive samples, then when ω₀ < ω_s < 2ω₀, i.e. when we sample
somewhat more frequently than once per
revolution, the samples of the disc will show the fixed line in positions that are successively
displaced in a counterclockwise direction, opposite to the clockwise rotation of the disc itself.
At one flash per revolution, corresponding to ω_s = ω₀, the radial line appears stationary (i.e.,
the rotational frequency of the disc and its harmonics have been aliased to zero frequency). A
similar effect is commonly observed in western movies, where the wheels of a stagecoach
appear to be rotating more slowly than would be consistent with the coach's forward motion,
and sometimes in the wrong direction. In this case, the sampling process corresponds to the fact
that moving pictures are a sequence of individual frames with a rate (usually between 18 and
24 frames per second) corresponding to the sampling frequency.
The preceding discussion suggests interpreting the stroboscopic effect as an example of a useful
application of aliasing due to under sampling. Another practical application of aliasing arises
in a measuring instrument referred to as a sampling oscilloscope. This instrument is intended
for observing very high-frequency waveforms and exploits the principles of sampling to alias
these frequencies into ones that are more easily displayed.
In broad terms, this approach to continuous-time signal processing can be viewed as the cascade
of three operations, as indicated in Figure 6.13, where x_c(t) and y_c(t) are continuous-time
signals and x_d(n) and y_d(n) are the discrete-time signals corresponding to x_c(t) and y_c(t).
The overall system is, of course, a continuous-time system in the sense that its input and output
are both continuous-time signals. The theoretical basis for converting a continuous-time signal
to a discrete-time signal and reconstructing a continuous-time signal from its discrete-time
representation lies in the sampling theorem. By satisfying the simple conditions of the sampling
theorem, i.e. through the process of periodic sampling with a sampling frequency consistent
with the conditions of the sampling theorem, the continuous-time signal x_c(t) is exactly
represented by a sequence of instantaneous sample values x_c(nT); that is, the discrete-time
sequence x_d(n) is related to x_c(t) by
x_d(n) = x_c(nT)                                                          (6.15)
The continuous time signal x_c(t) is applied to the first block of figure 6.13 and is converted
to the discrete time signal x_d(n); this conversion is abbreviated as C/D. The third block in
figure 6.13 converts the discrete time signal y_d(n) to a continuous-time signal; this conversion
is abbreviated as D/C. The D/C operation uses interpolation between the sample values provided
to it as input. The continuous time signal produced satisfies
y_c(nT) = y_d(n)
This notation is made explicit in Figure 6.14. In systems such as digital computers and digital
systems for which the discrete-time signal is represented in digital form, the device commonly
used to implement the C/D conversion is referred to as an analog-to-digital (A-to-D) converter,
and the device used to implement the D/C conversion is referred to as a digital-to-analog
(D-to-A) converter.
To understand further the relationship between the continuous-time signal x_c(t) and its
discrete-time representation x_d(n), it is helpful to represent the C/D operation as a process of
periodic sampling followed by a mapping of the impulse train to a sequence. These two steps are
illustrated in Figure 6.15. In the first step, representing the sampling process, the impulse train
x_p(t) corresponds to a sequence of impulses with amplitudes corresponding to the samples of
x_c(t) and with a time spacing equal to the sampling period T. In the conversion from
the impulse train to the discrete-time sequence, we obtain x_d(n), corresponding to the same
sequence of samples of x_c(t), but with unity spacing in terms of the new independent variable
n. Thus, in effect, the conversion from the impulse-train sequence of samples to the discrete-time
sequence of samples can be thought of as a normalization in time. This normalization is
evident in Figures 6.15 (b) and (c).
It is also instructive to examine the processing stages in Figure 6.13 in the frequency domain.
Since we will be dealing with Fourier transforms in both continuous and discrete time, in this
section only we distinguish the continuous-time and discrete-time frequency variables by using
ω in continuous time and Ω in discrete time. For example, the continuous-time Fourier
Fig 6.17: Sampling followed by conversion to a discrete-time sequence: (a) Overall system;
(b) x_p(t) for two sampling rates (the dashed envelope represents x_c(t)); (c) The output
sequence for the two different sampling rates.

x_p(t) = Σ_(n=−∞)^(∞) x_c(nT) δ(t − nT)                                   (6.16)
X_p(jω) = Σ_(n=−∞)^(∞) x_c(nT) e^(−jωnT)                                  (6.17)
X_d(e^(jΩ)) = Σ_(n=−∞)^(∞) x_d(n) e^(−jΩn)                                (6.18)
            = Σ_(n=−∞)^(∞) x_c(nT) e^(−jΩn)                               (6.19)
Comparing Eq. (6.17) and Eq. (6.19), we see that X_d(e^(jΩ)) and X_p(jω) are related as
X_d(e^(jΩ)) = X_p(jΩ/T)                                                   (6.20)
Also,
X_p(jω) = (1/T) Σ_(k=−∞)^(∞) X_c( j(ω − kω_s) )                           (6.22)
Similarly,
X_d(e^(jΩ)) = (1/T) Σ_(k=−∞)^(∞) X_c( j(Ω − 2πk)/T )                      (6.23)
In the overall system of Figure 6.13, after processing with a discrete-time system, the resulting
sequence is converted back to a continuous-time signal. This process is the reverse of the steps
in Figure 6.15. Specifically, from the sequence ( ), a continuous time impulse train ( )
can be generated. Recovery of the continuous-time signal ( ) from this impulse train is then
accomplished by means of lowpass filtering, as illustrated in Figure 6.17.
Fig 6.18: The relationship among X_c(jω), X_p(jω) and X_d(e^(jΩ)) for different sampling rates
For the above filters, the magnitude and phase responses are shown in figure 6.19 and figure
6.20.
Fig 6.21: Frequency response of the discrete-time filter used to implement a continuous-time
band-limited differentiator
Example 6.2
By considering the output of the digital differentiator for the continuous-time input given in
Eq. (6.27), we may conveniently determine the impulse response h_d(n) of the discrete-time filter
in the implementation of the digital differentiator. Consider
x_c(t) = sin(πt/T) / (πt)                                                 (6.27)
whose derivative is
y_c(t) = d x_c(t)/dt = cos(πt/T)/(Tt) − sin(πt/T)/(πt²)                   (6.28)
Discrete-time sampling is defined analogously. The sampled sequence is
x_p(n) = { x(n),  n an integer multiple of N;   0, otherwise }            (6.31)
Using the multiplication property developed earlier in this section, discrete-time sampling in the
frequency domain can be written as
x_p(n) = x(n) p(n) = Σ_(k=−∞)^(∞) x(kN) δ(n − kN)                         (6.32)
X_p(e^(jω)) = (1/2π) ∫_(2π) P(e^(jθ)) X( e^(j(ω−θ)) ) dθ                  (6.33)
            = (1/N) Σ_(k=0)^(N−1) X( e^(j(ω − 2πk/N)) )                   (6.34)
Equation (6.34) is the counterpart for discrete-time sampling of eq. (6.6) for continuous-time sampling, and is illustrated in Figure 6.21. In Figure 6.21(c), with ω_s - ω_M > ω_M, or equivalently ω_s > 2ω_M, there is no aliasing (i.e., the nonzero portions of the replicas of X(e^{jω}) do not overlap), whereas with ω_s < 2ω_M, as in Figure 6.21(d), frequency-domain aliasing results. In the absence of aliasing, X(e^{jω}) is faithfully reproduced around ω = 0 and at integer multiples of 2π. Consequently, x[n] can be recovered from x_p[n] by means of a lowpass filter with gain N and cutoff frequency greater than ω_M and less than ω_s - ω_M; here we take the cutoff frequency of the lowpass filter to be ω_s/2. If the overall system of Figure 6.22(a) is applied to a sequence for which ω_s < 2ω_M, so that aliasing results, x_r[n] will no longer be equal to x[n]. However, as with continuous-time sampling, the two sequences are equal at integer multiples of the sampling period. That is,

x_r[kN] = x[kN], \qquad k = 0, \pm 1, \pm 2, \ldots        (6.35)

independently of whether aliasing occurs.
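Equation (6.35) is easy to verify numerically. The sketch below is an added illustration (the factor N, the test frequency and the sequence length are arbitrary choices): it deliberately samples a sequence that is not band limited to π/N, reconstructs it with the gain-N lowpass (sinc) kernel, and confirms that the reconstruction matches the original exactly at the multiples of N even though aliasing corrupts the values in between.

import numpy as np

N = 4                                      # assumed sampling factor
n = np.arange(200)
x = np.cos(0.4 * np.pi * n)                # 0.4*pi > pi/N, so sampling by N = 4 aliases

k = np.arange(0, 200, N)                   # the sampling instants

def h_r(m):
    """Impulse response of the gain-N ideal lowpass filter with cutoff pi/N."""
    m = np.asarray(m, dtype=float)
    out = np.ones_like(m)
    nz = m != 0
    out[nz] = np.sin(np.pi * m[nz] / N) / (np.pi * m[nz] / N)
    return out

x_r = np.array([np.sum(x[k] * h_r(m - k)) for m in n])   # band-limited interpolation

print(np.max(np.abs(x_r[k] - x[k])))       # ~0: equal at the sampling instants, eq. (6.35)
print(np.max(np.abs(x_r - x)))             # large: aliasing distorts the values in between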
Fig 6.23: Impulse-train sampling of a discrete-time signal in the frequency domain: (a) spectrum of the original signal; (b) spectrum of the sampling sequence; (c) spectrum of the sampled signal with ω_s > 2ω_M (no aliasing); (d) spectrum of the sampled signal with ω_s < 2ω_M (aliasing occurs).
Fig 6.24: Exact recovery of a discrete-time signal from its samples using an ideal lowpass filter: (a) block diagram for sampling and reconstruction; (b) spectrum of x[n].
Example 6.3
Consider a sequence x[n] whose Fourier transform X(e^{jω}) has the property that

X(e^{j\omega}) = 0 \quad \text{for } \frac{2\pi}{9} \le |\omega| \le \pi

To determine the lowest rate at which x[n] may be sampled without the possibility of aliasing, we must find the largest N such that

\frac{2\pi}{N} \ge 2\left(\frac{2\pi}{9}\right) \quad\Longrightarrow\quad N \le \frac{9}{2}

We conclude that N = 4, and the corresponding sampling frequency is 2π/4 = π/2.
x_r[n] = \sum_{k=-\infty}^{\infty} x[kN]\,h_r[n - kN]        (6.38)

(with h_r[n] the impulse response of the ideal lowpass reconstruction filter). Denoting the decimated sequence by x_b[n] = x_p[nN], its Fourier transform is

X_b(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x_b[n]\,e^{-j\omega n}        (6.41)

X_b(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x_p[nN]\,e^{-j\omega n}        (6.42)

X_b(e^{j\omega}) = \sum_{k\,=\,\text{multiple of }N} x_p[k]\,e^{-j\omega k/N}        (6.43)

Since x_p[k] = 0 unless k is a multiple of N,

X_b(e^{j\omega}) = \sum_{k=-\infty}^{\infty} x_p[k]\,e^{-j\omega k/N}        (6.44)

Also, the right-hand side of equation (6.44) can be recognized as the Fourier transform of x_p[n] evaluated at ω/N; so

X_b(e^{j\omega}) = X_p(e^{j\omega/N})        (6.45)
This relationship is illustrated in Figure 6.24, and from it, we observe that the spectra for the
sampled sequence and the decimated sequence differ only in a frequency scaling or
normalization. If the original spectrum X(e^{jω}) is appropriately band limited, so that there is no aliasing present in X_b(e^{jω}), then, as shown in the figure, the effect of decimation is to spread
the spectrum of the original sequence over a larger portion of the frequency band.
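The frequency-scaling relation X_b(e^{jω}) = X_p(e^{jω/N}) of eq. (6.45) is purely algebraic, so it can be confirmed directly. The sketch below is an added illustration (the decimation factor, the test signal and the frequency grid are arbitrary choices): it builds the zero-stuffed sampled sequence and the decimated sequence from the same x[n] and compares their transforms.

import numpy as np

N = 3
n = np.arange(120)
x = np.sinc(0.2 * (n - 60))                # roughly band limited to |w| < 0.2*pi < pi/N

x_p = np.where(n % N == 0, x, 0.0)         # sampled sequence: zeros kept in place
x_b = x[::N]                               # decimated sequence: zeros discarded

def dtft(sig, w):
    m = np.arange(len(sig))
    return np.array([np.sum(sig * np.exp(-1j * wk * m)) for wk in w])

w = np.linspace(-np.pi, np.pi, 9)
lhs = dtft(x_b, w)                         # X_b(e^{jw})
rhs = dtft(x_p, w / N)                     # X_p(e^{jw/N})
print(np.max(np.abs(lhs - rhs)))           # ~1e-13: decimation only rescales the frequency axis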
Fig 6.26: Frequency domain illustration of the relationship between sampling & decimation
Fig 6.27: Continuous-time signal that was originally sampled at the Nyquist rate. After discrete-time filtering, the resulting sequence can be further downsampled. Here X_c(jω) is the continuous-time Fourier transform of x_c(t), X_d(e^{jΩ}) and Y_d(e^{jΩ}) are the discrete-time Fourier transforms of x_d[n] and y_d[n] respectively, and H_d(e^{jΩ}) is the frequency response of the discrete-time lowpass filter depicted in the block diagram.
Just as in some applications it is useful to downsample, there are situations in which it is useful to convert a sequence to a higher equivalent sampling rate, a process referred to as upsampling or interpolation. Upsampling is basically the reverse of decimation or downsampling. As illustrated in Figures 6.23 and 6.24, in decimation we first sample and then retain only the sequence values at the sampling instants. To upsample, we reverse the process. For example, referring to Figure 6.23, consider upsampling the sequence x_b[n] to obtain x[n]. From x_b[n], we form the sequence x_p[n] by inserting N - 1 points with zero amplitude between each pair of consecutive values of x_b[n]. The interpolated sequence x[n] is then obtained from x_p[n] by lowpass filtering. The overall procedure is summarized in Figure 6.26.
Fig 6.28: Upsampling: (a) Overall system; (b) associated sequences and spectra for
upsampling by a factor of 2.
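The two upsampling steps just described, zero insertion followed by lowpass filtering, can be sketched as follows. This is an added illustration, not from the text: the factor N, the test sequence and the truncated sinc filter length are arbitrary choices, and a practical system would use a designed FIR or polyphase interpolation filter rather than a truncated ideal one.

import numpy as np

N = 3                                      # upsampling factor
x_b = np.cos(0.25 * np.pi * np.arange(40)) # low-rate test sequence

# Step 1: insert N-1 zero-amplitude points between consecutive samples
x_p = np.zeros(N * len(x_b))
x_p[::N] = x_b

# Step 2: lowpass filter with gain N and cutoff pi/N (truncated ideal/sinc kernel)
m = np.arange(-60, 61)
h = np.where(m == 0, 1.0, np.sin(np.pi * m / N) / (np.pi * m / N))
x_up = np.convolve(x_p, h, mode="same")

# The interpolated sequence passes through the original samples exactly
print(np.max(np.abs(x_up[::N] - x_b)))     # ~0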
Example 6.4
In this example, we illustrate how a combination of interpolation and decimation may be used
to further downsample a sequence without incurring aliasing. It should be noted that maximum
possible downsampling is achieved once the non-zero portion of one period of the discrete-time
spectrum has expanded to fill the entire band from -π to +π.
Fig 6.29: Spectra associated with Example 6.4: (a) spectrum of x[n]; (b) spectrum after downsampling by 4; (c) spectrum after upsampling x[n] by a factor of 2; (d) spectrum after upsampling x[n] by 2 and then downsampling by 9.
From Example 6.3, x[n] can be sampled without aliasing by retaining every 4th value of x[n]. If the result of such sampling is decimated by a factor of 4, we obtain a sequence x_b[n] whose spectrum is shown in Figure 6.27(b). Clearly, there is still no aliasing of the original spectrum. However, this spectrum is zero for 8π/9 ≤ |ω| ≤ π, which suggests that there is room for further downsampling.
Specifically, examining Figure 6.27(a), we see that if we could scale frequency by a factor of 9/2, the resulting spectrum would have nonzero values over the entire frequency interval from -π to +π. However, since 9/2 is not an integer, we cannot achieve this purely by downsampling. Rather, we must first upsample x[n] by a factor of 2 and then downsample by a factor of 9. In particular, the spectrum of the signal x_u[n] obtained when x[n] is upsampled by a factor of 2 is displayed in Figure 6.27(c). When x_u[n] is then downsampled by a factor of 9, the spectrum of the resulting sequence is as shown in Figure 6.27(d). This combined result effectively corresponds to downsampling x[n] by a noninteger amount, 9/2. Assuming that x[n] represents unaliased samples of a continuous-time signal x_c(t), our interpolated and decimated sequence represents the maximum possible (aliasing-free) downsampling of x_c(t).
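A noninteger rate change such as the 9/2 of this example is implemented exactly as described: upsample by 2 (zero insertion plus a gain-2 lowpass filter with cutoff π/2), then keep every 9th sample. A minimal sketch follows; it is an added illustration in which the test signal, filter length and sequence length are arbitrary choices, and the original band limit of 2π/9 is taken from Example 6.3 so that the final decimation is alias-free.

import numpy as np

n = np.arange(600)
x = np.sinc((2 / 9) * (n - 300))           # band limited to |w| <= 2*pi/9, as in Example 6.3

L, M = 2, 9                                # upsample by 2, then downsample by 9 (net 9/2)

# Upsampling: zero insertion, then gain-L lowpass with cutoff pi/L (truncated sinc)
x_p = np.zeros(L * len(x)); x_p[::L] = x
m = np.arange(-200, 201)
h = np.where(m == 0, 1.0, np.sin(np.pi * m / L) / (np.pi * m / L))
x_u = np.convolve(x_p, h, mode="same")

# Downsampling: the content now occupies |w| <= pi/9, so plain decimation causes no aliasing
y = x_u[::M]
print(len(x), len(y))                      # 600 -> 134 samples, i.e., about 2/9 of the original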
Unit Summary
In this chapter we have developed the concept of sampling, whereby a continuous-time or
discrete-time signal is represented by a sequence of equally spaced samples. The conditions
under which the signal is exactly recoverable from the samples are embodied in the sampling
theorem. For exact reconstruction, this theorem requires that the signal to be sampled be band
limited and that the sampling frequency be greater than twice the highest frequency in the signal
to be sampled. Under these conditions, exact reconstruction of the original signal is carried out
by means of ideal lowpass filtering. The time-domain interpretation of this ideal reconstruction
procedure is often referred to as ideal band limited interpolation. In practical implementations,
the lowpass filter is approximated and the interpolation in the time domain is no longer exact.
In some instances, simple interpolation procedures such as a zero-order hold or linear
interpolation (a first-order hold) suffice.
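The difference between the simple interpolators just mentioned is easy to see numerically. The sketch below is an added illustration (the 3 Hz sinusoid, 50 ms sampling period and dense evaluation grid are arbitrary choices): it compares the worst-case error of a zero-order hold and of linear (first-order hold) interpolation against the true signal.

import numpy as np

T = 0.05                                   # sampling period
t_fine = np.arange(0, 1, 0.001)            # dense grid standing in for continuous time
n = np.arange(0, 1, T)                     # sampling instants
x = lambda t: np.sin(2 * np.pi * 3 * t)

zoh = x(T * np.floor(t_fine / T))          # zero-order hold: hold the most recent sample
foh = np.interp(t_fine, n, x(n))           # linear interpolation (first-order hold)

print(np.max(np.abs(zoh - x(t_fine))))     # larger error
print(np.max(np.abs(foh - x(t_fine))))     # noticeably smaller error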
If a signal is undersampled (i.e., if the sampling frequency is less than that required by the
sampling theorem), then the signal reconstructed by ideal band-limited interpolation will be
related to the original signal through a form of distortion referred to as aliasing. In many
instances, it is important to choose the sampling rate to avoid aliasing. However, there are a
variety of important examples, such as the stroboscope, in which aliasing is exploited.
Sampling has several important applications. One particularly significant set of applications
relates to using sampling to process continuous-time signals with discrete time systems, by
means of minicomputers, microprocessors, or any of a variety of devices specifically oriented
toward discrete-time signal processing.
The basic theory of sampling is similar for both continuous-time and discrete time signals. In
the discrete-time case there is the closely related concept of decimation, whereby the decimated
sequence is obtained by extracting values of the original sequence at equally spaced intervals.
The difference between sampling and decimation lies in the fact that, for the sampled sequence,
values of zero lie in between the sample values, whereas in the decimated sequence these zero
values are discarded, thereby compressing the sequence in time. The inverse of decimation is
interpolation.
The sampling signal p(t), the Fourier transform of the signal x(t), and the frequency response of the filter are shown below.
(b) For Δ < π/(2ω_M), where ω_M is the highest frequency present in x(t), determine a system that will recover x(t) from x_p(t) and another that will recover x(t) from y(t).
(c) What is the maximum value of Δ, in relation to ω_M, for which x(t) can be recovered from either x_p(t) or y(t)?
Solution:
We know that x_p(t) = x(t)·p(t); by the dual of the convolution property,

X_p(j\omega) = \frac{1}{2\pi}\, X(j\omega) * P(j\omega)

so we first find the Fourier transform of p(t), as follows.
The Fourier transform of a periodic function is an impulse train at intervals of ω₀ = 2π/(2Δ) = π/Δ, with the strength of each impulse equal to 2π times the corresponding Fourier series coefficient:

a_k = \frac{1}{2\Delta}\left(1 - e^{-jk\omega_0\Delta}\right) = \frac{1}{2\Delta}\left(1 - e^{-jk\pi}\right) = \frac{1}{2\Delta}\left(1 - (-1)^k\right)

Thus, we can sketch P(jω): it has impulses of strength 2π/Δ at the odd multiples of π/Δ and none at the even multiples.
cos(πt/Δ) has a spectrum with impulses of equal strength at π/Δ and -π/Δ. Thus, the new signal x_p(t)·cos(πt/Δ) will have copies of the original spectrum (scaled by a constant, of course) at all even multiples of π/Δ, including one at ω = 0.
Now, an appropriate lowpass filter can extract the original spectrum.
To recover x(t) from y(t):
Here too, notice from the figures that modulation with cos(πt/Δ) will do the job; here too the modulated signal will have copies of the original spectrum at all even multiples of π/Δ.
(c) So long as adjacent copies of the original spectrum do not overlap in X_p(jω), one can theoretically reconstruct the original signal. Since adjacent copies are spaced 2π/Δ apart, the condition is

2\omega_M < \frac{2\pi}{\Delta} \quad\Longrightarrow\quad \Delta < \frac{\pi}{\omega_M}
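The recovery method described in this example, modulation by cos(πt/Δ) followed by a lowpass filter, can be checked numerically by emulating continuous time on a dense grid. The sketch below is an added illustration: the Gaussian test pulse (effectively band limited), the grid step, the value of Δ and the FFT brick-wall filter are all assumptions made for the demonstration, not part of the original problem.

import numpy as np

dt = 1e-4
t = np.arange(0, 4, dt)                    # dense grid emulating continuous time
x = np.exp(-((t - 2) ** 2) / (2 * 0.35**2))          # effectively band limited test pulse
w_M = 4 * np.pi                            # frequency beyond which its spectrum is negligible

Delta = 0.05                               # sample spacing; Delta < pi/w_M, so recoverable
step = int(round(Delta / dt))
p = np.zeros_like(t)
p[::step] = (-1.0) ** np.arange(len(t[::step])) / dt  # alternating-sign impulse train (area +-1)
x_p = x * p

# Modulate by cos(pi*t/Delta), then apply an ideal lowpass filter of gain Delta (FFT brick wall)
y = x_p * np.cos(np.pi * t / Delta)
w = 2 * np.pi * np.fft.fftfreq(len(t), dt)
X_rec = np.fft.fft(y) * Delta * (np.abs(w) < np.pi / Delta)
x_rec = np.real(np.fft.ifft(X_rec))

print(np.max(np.abs(x_rec - x)))           # small: x(t) is recovered up to numerical error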
Example 6.2:
The signal y(t) is obtained by convolving the signals x₁(t) and x₂(t), where

|X_1(j\omega)| = 0 \quad \text{for } |\omega| > 1000\pi, \text{ and}

|X_2(j\omega)| = 0 \quad \text{for } |\omega| > 2000\pi

Impulse-train sampling is performed on y(t) to get

y_p(t) = \sum_{n=-\infty}^{\infty} y(nT)\,\delta(t - nT)
To obtain this result, consider the following equation, which expresses x_r(t) in terms of the samples x(nT):

x_r(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\frac{\sin[\omega_c(t - nT)]}{\omega_c(t - nT)}

With the cutoff frequency chosen as ω_c = π/T, this becomes

x_r(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\frac{\sin\!\left[\frac{\pi}{T}(t - nT)\right]}{\frac{\pi}{T}(t - nT)}

By considering the values of t for which \sin\!\left[\frac{\pi}{T}(t - nT)\right] = 0, show that, without any restrictions on x(t), x_r(kT) = x(kT) for any integer value of k.
Solution:
To show that x_r(t) and x(t) are equal at the sampling instants, consider

\lim_{t \to kT} x_r(t) = \lim_{t \to kT} \sum_{n=-\infty}^{\infty} x(nT)\,\frac{\sin\!\left[\frac{\pi}{T}(t - nT)\right]}{\frac{\pi}{T}(t - nT)}

Separating the n = k term from the rest of the sum,

\lim_{t \to kT} x_r(t) = \sum_{n \ne k} x(nT)\,\frac{\sin\!\left[\frac{\pi}{T}(kT - nT)\right]}{\frac{\pi}{T}(kT - nT)} \;+\; x(kT)\,\lim_{t \to kT}\frac{\sin\!\left[\frac{\pi}{T}(t - kT)\right]}{\frac{\pi}{T}(t - kT)}

In the first term, sin[π(k - n)] = 0 for every integer n ≠ k, so that term vanishes; in the second term, the limit of sin(θ)/θ as θ → 0 is 1. Hence

x_r(kT) = x(kT)

for every integer k, independently of whether aliasing occurs.
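The same conclusion can be observed numerically. In the sketch below, an added illustration in which the 8 Hz sinusoid, the 10 Hz sampling rate and the truncation of the interpolation sum are arbitrary choices, the signal is badly undersampled, yet the band-limited interpolation still reproduces the sample values exactly while differing from the true signal between samples.

import numpy as np

T = 0.1                                    # 10 Hz sampling of an 8 Hz cosine: undersampled
f0 = 8.0
n = np.arange(-200, 201)
samples = np.cos(2 * np.pi * f0 * n * T)   # x(nT)

def x_r(t):
    """Band-limited interpolation of the samples (truncated version of the sum above)."""
    return np.sum(samples * np.sinc((t - n * T) / T))

k = 5
print(x_r(k * T), np.cos(2 * np.pi * f0 * k * T))   # equal: x_r(kT) = x(kT) despite aliasing
mid = 5.5 * T
print(x_r(mid), np.cos(2 * np.pi * f0 * mid))       # differ: in between, x_r follows the 2 Hz alias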
Solution:
(a) Multiplication by a complex exponential in the time domain is equivalent to shifting the Fourier transform in the frequency domain by the frequency of that exponential. Therefore, the resultant transform looks as shown below.
Now, sampling the signal amounts to making copies of the Fourier transform, with the centres of adjacent copies separated by the sampling frequency ω_s = 2π/T. Thus, X_p(jω) has the following form.
(b) x(t) is recoverable from x_p(t) only if the copies of the Fourier transform obtained by sampling do not overlap with each other. For this to happen, the condition set down by the Shannon-Nyquist theorem for a band-limited signal must be satisfied, i.e., the sampling frequency should be greater than twice the highest frequency ω_M remaining in the shifted spectrum. Mathematically,

\omega_s > 2\omega_M \quad\Longrightarrow\quad \frac{2\pi}{T} > 2\omega_M \quad\Longrightarrow\quad T < \frac{\pi}{\omega_M}

Hence the maximum sampling period for x(t) to be recoverable from x_p(t) is T_max = π/ω_M (equivalently 2π/(ω₂ - ω₁) if ω₁ and ω₂ denote the original band edges, so that ω_M = (ω₂ - ω₁)/2).
(c) The system to recover x(t) from x_p(t) is outlined below:
Example 6.5:
Shown in the figures is a system in which the sampling signal is an impulse train with alternating
sign. The Fourier transform of the input signal is as indicated in figures below.
(i) For Δ < π/(2ω_M), where ω_M is the highest frequency present in the input, sketch the Fourier transforms of x_p(t) and y(t).
Solution:
(a) As x_p(t) = x(t)·p(t), by the dual of the convolution property we have X_p(jω) = (1/2π) X(jω) * P(jω), so we first find the Fourier transform of p(t), as follows.
The Fourier transform of a periodic function is an impulse train at intervals of ω₀ = 2π/(2Δ) = π/Δ, each impulse being of magnitude

2\pi a_k = \frac{2\pi}{2\Delta}\left(1 - e^{-jk\pi}\right) = \frac{\pi}{\Delta}\left(1 - \cos(k\pi)\right)
Here, we see that the impulses on the ω axis vanish at even values of k.
Hence, the Fourier transform of x_p(t) is as shown in figure (a). In the frequency domain, the output signal Y(jω) can be found by multiplying the input spectrum with the frequency response of the filter. Hence Y(jω) is as shown below in figure (b).
Example 6.6:
A signal x(t) with Fourier transform X(jω) undergoes impulse-train sampling to generate

x_p(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)

where T = 10⁻⁴ s. For each of the following sets of constraints on x(t) and/or X(jω), does the sampling theorem guarantee that x(t) can be recovered exactly from x_p(t)?
(a) X(jω) = 0 for |ω| > 5000π
(b) X(jω) = 0 for |ω| > 15000π
(c) ℜ{X(jω)} = 0 for |ω| > 5000π
(d) x(t) real and X(jω) = 0 for ω > 5000π
(e) x(t) real and X(jω) = 0 for ω < -15000π
(f) X(jω) * X(jω) = 0 for |ω| > 15000π
(g) |X(jω)| = 0 for ω > 5000π
Solution:
We have T = 10⁻⁴ s, so

\omega_s = \frac{2\pi}{T} = 20000\pi \text{ rad/s}
(a) X(jω) = 0 for |ω| > 5000π
Here ω_M = 5000π, so 2ω_M = 10000π. Obviously 2ω_M < ω_s; hence x(t) can be recovered exactly from x_p(t).
(b) X(jω) = 0 for |ω| > 15000π
Here ω_M = 15000π, so 2ω_M = 30000π. Now 2ω_M > ω_s, so the sampling theorem does not guarantee that x(t) can be recovered exactly from x_p(t).
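The remaining parts are checked in the same way, by comparing ω_s with twice the highest frequency that the stated constraint allows. A trivial helper sketch, added here only to automate the comparison used in parts (a) and (b), is:

import numpy as np

T = 1e-4
w_s = 2 * np.pi / T                        # = 20000*pi rad/s

def guaranteed(w_max):
    """Sampling-theorem guarantee: True iff w_s exceeds 2*w_max."""
    return w_s > 2 * w_max

print(guaranteed(5000 * np.pi))            # part (a): True  -> recovery guaranteed
print(guaranteed(15000 * np.pi))           # part (b): False -> no guarantee from the theorem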
Example 6.7:
Figure I shows the overall system for filtering a continuous-time signal using a discrete-time filter. If X_c(jω) and H(e^{jΩ}) are as shown in Figure II, with 1/T = 20 kHz, sketch X_p(jω), X_d(e^{jΩ}), Y_d(e^{jΩ}), Y_p(jω) and Y_c(jω).
Solution:
EXERCISES
( )= (−10) ( − 0.5 10 )
The resulting signal is then passed through an ideal low pass filter with bandwidth 1KHz.
Write the output of the low pass filter.
6) A 1KHz signal is ideally sampled at 1500 samples/sec and the sampled signal is passed
through an ideal low pass filter with cut off frequency of 800 Hz. Calculate frequency of
the output signal
7) A signal x(t) = 100 cos(24π × 10³ t) is ideally sampled with a sampling period of 50 μs and then passed through an ideal low pass filter with cutoff frequency of 15 kHz. What will be the frequencies present at the output?
8) A 4 GHz carrier is DSB-SC modulated by a low pass message signal with maximum
frequency of 2 MHz. The resultant signal is to be ideally sampled. Find the minimum
frequency of the sampling impulse train.
9) Find the Nyquist sampling interval for the signal sinc(700t) + sinc(500t).
10) A low pass signal x(t) band-limited to B Hz is sampled by a periodic rectangular pulse train p(t) of period T_s = 1/3 sec. Assuming natural sampling and that the pulse amplitude and pulse width are A volts and 1/30 sec, respectively, obtain an expression for the frequency spectrum of the sampled signal x_s(t).
11) A real-valued signal x(t) is known to be uniquely determined by its samples when the sampling frequency is ω_s = 10000π. For what values of ω is X(jω) guaranteed to be zero?
12) A continuous-time signal x(t) is obtained at the output of an ideal low pass filter with cutoff frequency ω_c = 1000π. If impulse-train sampling is performed on x(t), which of the following sampling periods would guarantee that x(t) can be recovered from its sampled version using an appropriate low pass filter?
(a) T = 0.5 × 10⁻³ s
(b) T = 2 × 10⁻³ s
(c) T = 10⁻⁴ s
13) The following figure shows a system consisting of a continuous-time LTI system followed by a sampler, conversion to a sequence, and an LTI discrete-time system. The continuous-time LTI system is causal and satisfies the linear, constant-coefficient differential equation

\frac{dy_c(t)}{dt} + y_c(t) = x_c(t)

The input x_c(t) is the unit impulse δ(t).
(a) Determine y_c(t).
(b) Determine the frequency response H(e^{jΩ}) and the impulse response h[n] such that w[n] = δ[n], where w[n] denotes the output of the discrete-time system.
14) A signal x_p(t) is obtained through impulse-train sampling of a sinusoidal signal x(t) whose frequency is equal to half the sampling frequency:

x(t) = \cos\!\left(\frac{\omega_s t}{2} + \phi\right)

and

x_p(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)

where ω_s = 2π/T.
(a) Find g(t) such that

x(t) = \cos(\phi)\,\cos\!\left(\frac{\omega_s t}{2}\right) + g(t)

(b) Show that

g(nT) = 0 \quad \text{for } n = 0, \pm 1, \pm 2, \ldots
15) Suppose x[n] = cos(πn/4 + φ) with 0 ≤ φ ≤ 2π, and let g[n] = x[n] \sum_{k=-\infty}^{\infty} δ[n - 4k]. What additional constraints must be imposed on φ to ensure that

g[n] * \frac{\sin(\pi n/4)}{\pi n/4} = x[n]\;?
16) With reference to the discrete-time filtering approach, assume that the sampling period used is T and the input x_c(t) is band limited, so that X_c(jω) = 0 for |ω| ≥ π/T. If the overall system has the property that y_c(t) = x_c(t - 2T), determine the impulse response h[n] of the discrete-time filter.
17) Repeat the previous problem except this time assume that
( )= − .
18) Impulse-train sampling of x[n] is used to obtain

g[n] = \sum_{k=-\infty}^{\infty} x[kN]\,\delta[n - kN]
Multiple-Choice Questions
1. Sampling can be done by:
a) Impulse train sampling
b) Natural sampling
c) Flat-top sampling
d) All of above
Ans: d)
a) 200 Hz
b) 500 Hz
c) 400 Hz
d) 450Hz
Ans: c)
9. Find Nyquist rate and Nyquist interval of sinc[t]
a) 1 Hz, 1 sec
b) 2 Hz, 2 sec
c) ½ Hz, 2 sec
d) 2 Hz, ½ sec
Ans: a)
10. Determine the Nyquist rate of the signal x(t) = 1 + cos(2000πt) + sin(4000πt)
a) 2000 Hz
b) 4000 Hz
c) 500 Hz
d) 3000 Hz
Ans: b)
11. Which of the following requires interpolation filtering?
a) UP-Sampler
b) D to A Converter
c) Both (a) & (b)
d) None of these
Ans: c)
12. Which process requires a low pass filter?
a) UP-sampling
b) Down-sampling
c) Up-sampling & Down-sampling
d) None of the above mentioned
Ans: c)
13. Which device is needed for the reconstruction of signal?
a) Low pass filter
b) Equalizer
Ans: a)
19. How is sampling rate conversion by a factor I/D achieved?
a) By increase in sampling rate with (I)
b) By filtering the sequence to remove unwanted images of spectra of original signal.
c) By decimation of filtered signal with factor D
d) All of the above
Ans: d)
20. The first step required to convert Analog signal to digital is:
a) Aliasing
b) Holding
c) Quantization
d) Sampling
Ans: d)
KNOW MORE
Our treatment of sampling is concerned primarily with the sampling theorem and its
implications. However, to place this subject in perspective, we begin by discussing the general
concepts of representing a continuous-time signal in terms of its samples and the reconstruction
of signals using interpolation. After using frequency-domain methods to derive the sampling
theorem, we consider both the frequency and time domains to provide intuition concerning the
phenomenon of aliasing resulting from under sampling. One of the very important uses of
sampling is in the discrete-time processing of continuous-time signals, which is discussed thoroughly. Following this, we turn to the sampling of discrete-time signals. The basic result
underlying discrete-time sampling is developed in a manner that parallels that used in
continuous time, and the applications of this result to problems of decimation and interpolation
are described. Again, a variety of other applications, in both continuous and discrete time, are
addressed in the problems.
Both the sampling and reconstruction are critical in maintaining the integrity and fidelity of
signals as they transition between the continuous and discrete domains. Careful consideration
of the sampling rate, anti-aliasing, and reconstruction techniques is essential to avoid signal
degradation and ensure accurate representation. These concepts are foundational in various
fields, including telecommunications, audio processing, image processing, and more.
Course outcomes (COs) for this course can be mapped with the programme outcomes (POs) after the completion of the course, and a correlation can be made for the attainment of POs to analyze the gap. After proper analysis of the gap in the attainment of POs, necessary measures can be taken to overcome the gaps.
CO-1
CO-2
CO-3
CO-4
CO-5
CO-6
INDEX