Signals and Systems Rev2021
2021 REVISION
LEARNING MATERIAL
SIGNALS
&
SYSTEMS
GOVERNMENT OF KERALA
DEPARTMENT OF TECHNICAL EDUCATION
REVISION 2021
SIGNALS &
SYSTEMS
Prepared By
Published By
PREFACE
Welcome to the world of Signals and Systems! This learning material has been carefully crafted
to provide you with a comprehensive understanding of this fundamental field, specifically tailored for
students pursuing a diploma program. Whether you are embarking on a career in engineering,
telecommunications, or any other field that deals with the analysis and processing of signals, this
material will serve as your guide.
Signals and Systems is an essential subject that forms the backbone of modern technology. It
encompasses the study of how information is captured, manipulated, and transmitted in various
systems. From audio and image processing to telecommunications and control systems, Signals and
Systems underlies the design and analysis of a wide range of technologies that shape our world.
Signals and Systems is an Engineering Science course in Revision 2021 diploma syllabus. This
course is offered in the fifth semester of the Electronics & Communication Engineering program as a program
core course and in the sixth semester of the Biomedical Engineering program as a program elective course. The
course material is prepared in accordance with the syllabus prescribed by the Department of Technical
Education, Government of Kerala and State Institute of Technical Teachers’ Training & Research (SITTTR).
This learning material aims to present the key concepts, theories, and techniques in a clear and
concise manner, allowing you to grasp the fundamental principles of Signals and Systems. Each chapter
builds upon the previous ones, providing a structured learning experience that gradually increases in
complexity. The material includes numerous examples, illustrations, and practical applications to
enhance your understanding and demonstrate the real-world relevance of the subject matter.
This material is divided into four modules. Each module covers a course outcome. The course
material is prepared in such a way that the students find it easy to understand the concepts and get the
basic idea of mathematical representation and analysis of signals and systems. The faculty members who
prepared the material have taken great care to make the topics simple even for beginners. At the
end of each module, sample questions are included which will help students to prepare for the board
examination.
Suggestions to improve this material in future revised editions are welcome. We hope this material will
benefit both students and teachers.
CONTRIBUTORS
ACKNOWLEDGEMENT
We would like to extend our heartfelt gratitude and appreciation to Ms. Geethadevi R., Joint
Director in-charge and Ms. Chandrakantha, Deputy Director (Rtd.) for their visionary leadership and
guidance throughout the development of this learning material. Their valuable insights and
encouragement have been instrumental in shaping this learning material and ensuring its relevance to
the diploma program.
We would also like to acknowledge the diligent efforts of Dr. Ajitha S. and Ms. Swapna K. K.,
Project Officers, SITTTR Kalamassery who have worked tirelessly behind the scenes. Their expertise,
organizational skills, and attention to detail have been invaluable in coordinating the various aspects of
this endeavour, from content development to editing and formatting. Their commitment to excellence
has played a pivotal role in ensuring the quality of this learning material.
Furthermore, we extend our appreciation to the team of subject matter experts, curriculum
designers, and educators who have generously shared their knowledge and expertise. Their
contributions have helped to create a comprehensive and well-rounded resource that caters to the
learning needs of the students in the diploma program.
We are grateful to all the reviewers who provided valuable feedback and suggestions during the
development stages. Their critical insights and constructive criticism have greatly enhanced the clarity
and effectiveness of the material.
Last but not least, we would like to acknowledge the students who have been our ultimate
motivation. Your eagerness to learn and succeed has been the driving force behind this endeavour. It is
our hope that this learning material will serve as a valuable resource to enhance your understanding and
mastery of the subject.
To all those mentioned above, and to anyone who has contributed in ways both seen and unseen,
we extend our deepest appreciation. Your dedication, expertise, and unwavering support have been
instrumental in the creation of this learning material, and we are truly grateful for your invaluable
contributions.
With heartfelt thanks,
Sheeja Rajan S.
Sreeraj K. P.
Alen James
Dwija M. Divakaran
CONTENTS
Sl. No. Topics
Preface
Acknowledgement
Table Of Contents
Syllabus
MODULE 1
1.1 Signals
1.2 Basic Elementary Signals
1.3 Classification Of Signals
1.4 Mathematical Operations On Signals
1.5 Convolution Sum
1.6 Convolution Integral
1.7 Problems
MODULE 2
2.1 Continuous & Discrete Time Systems
2.2 Representation Of Discrete Time Signals Using Impulse
2.3 Output Of Discrete Time System Using Convolution Sum
2.4 Representation Of Continuous Time Signals Using Impulse
2.5 Output Of Continuous Time System Using Convolution Integral
2.6 Representation Of Systems - Differential Equation & Difference Equation
2.7 Properties Of Systems
2.8 Problems
MODULE 3
3.1 Fourier Representation Of Signals
3.2 Continuous Time Fourier Series
3.3 Frequency Spectrum Of Continuous Time Periodic Signals
3.4 Conditions For Existence Of Fourier Series
3.5 Properties Of Continuous Time Fourier Series
3.6 Discrete Time Fourier Series
3.7 Difference Between Continuous Time And Discrete Time Fourier Series
3.8 Frequency Spectrum Of Discrete Time Periodic Signals
3.9 Properties Of Discrete Time Fourier Series
3.10 Sampling Of Continuous Time (Analog) Signals
3.11 Sampling Theorem
3.12 Aliasing And Signal Reconstruction
3.13 Continuous Time Fourier Transform
3.14 Existence Of Fourier Transform
3.15 Properties Of Continuous-Time Fourier Transform
3.16 Discrete Time Fourier Transform
3.17 Properties Of Discrete Time Fourier Transform
3.18 Problems
MODULE 4
4.1 Laplace Transform
SYLLABUS
Course Objectives:
• To introduce students to the idea of signals and systems and their characteristics in the time
and frequency domains.
• To provide basic knowledge of Fourier representation and the Laplace transform and their
applications to signals and systems.
Course Prerequisites:
Topic: Differentiation, Integration
Course name: Mathematics I & II (course codes 1002, 2002)
Semester: 1 & 2
Course Outcomes:
COn | Description | Duration (Hours) | Cognitive level
Series Test | 2
CO – PO Mapping:
Course Outline:
Module Outcomes | Description | Duration (Hours) | Cognitive Level
CO1 Summarize the basic concepts, classifications and mathematical properties of signals
Contents:
• Define signals: state the importance of signals and systems in the field of science and
engineering.
• Basic elementary signals: Unit step function, unit impulse function, ramp, parabolic,
signum, exponential, rectangular, triangular, and sinusoidal.
• Classification of signals: continuous time and discrete time, deterministic and non-
deterministic, even and odd, periodic and aperiodic, energy and power, real and
imaginary.
• Mathematical operations on signals: amplitude scaling, time scaling, time shifting,
time reversal, addition, subtraction, multiplication and convolution.
CO2 Classify and compare continuous time and discrete time systems
Series Test – I 1
Contents:
• Representation of systems: Differential equation representation, Difference equation
representation
• Continuous time and discrete time systems – Impulse response, examples
• Properties of systems – linearity, time invariant system, invertible, causal and non-causal,
stable and unstable.
Contents:
• Fourier representation of four classes of signals
▪ Continuous time periodic signal : Fourier series (FS)
▪ Discrete time periodic signal : Discrete time Fourier series (DTFS)
▪ Continuous time non-periodic signal : Fourier transform (FT)
▪ Discrete time non-periodic signal : Discrete time Fourier transform (DTFT)
• Properties of Fourier representation – linearity, symmetry, time shift, frequency shift,
scaling, differentiation and integration, convolution and modulation
• Sampling theorem, aliasing, reconstruction
CO4 Apply Laplace transform to demonstrate the concepts of signals and systems
Series Test – II 1
Contents:
• Need of Laplace transform
• Region of Convergence (ROC)
• Advantages and limitations of Laplace transform
• Laplace transform of some commonly used signals - impulse, step, ramp, parabolic,
exponential, sine and cosine functions
• Properties of Laplace transform: Linearity, time shifting, time scaling, time reversal,
transform of derivatives and integrals, initial value theorem, final value theorem.
• Inverse Laplace transform: simple problems (no derivation required)
Text/Reference:
T3 Nagoor Kani: Signals and Systems, Tata McGraw-Hill, 3/e, 2011
1 https://nptel.ac.in/courses/108/104/108104100/
2 https://nptel.ac.in/courses/117/101/117101055/
Revision 2021
SIGNALS &
SYSTEMS
MODULE 1 NOTES
Contents:
Define signals: state the importance of signals and systems in the field of science and
engineering.
Basic elementary signals: Unit step function, unit impulse function, ramp, parabolic,
signum, exponential, rectangular, triangular, and sinusoidal.
Classification of signals: continuous time and discrete time, deterministic and
nondeterministic, even and odd, periodic and aperiodic, energy and power, real and
imaginary.
Mathematical operations on signals: amplitude scaling, time scaling, time shifting,
time reversal, addition, subtraction, multiplication and convolution.
SIGNALS
❖ A function of one or more independent variables which contains some information is
called a signal.
❖ Eg.: speech signal, E.C.G. signals, temperature variations, etc.
❖ A voltage or current is an example of a signal which is a function of time as an
independent variable.
❖ An AC signal is generally represented in the form:
𝑥(𝑡) = 𝐴cos(𝜔𝑡 + 𝜙)
where
x(t) = value of the signal at time t (t is the independent variable)
A = amplitude
ω = angular frequency in radians/second
ϕ = phase angle in radians
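The expression x(t) = A cos(ωt + ϕ) can be evaluated at any instant; the short Python sketch below uses illustrative values for A, ω and ϕ (a 50 Hz signal is assumed only for the example):

```python
import math

def ac_signal(t, A=2.0, omega=2 * math.pi * 50, phi=math.pi / 4):
    """Evaluate x(t) = A*cos(omega*t + phi) at time t seconds (illustrative 50 Hz signal)."""
    return A * math.cos(omega * t + phi)

# At t = 0 the instantaneous value is A*cos(phi) = 2*cos(pi/4), about 1.414
x0 = ac_signal(0.0)
```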
1. Unit Step Function
Continuous time
Discrete time
2. Unit Impulse Function
Continuous time
Discrete time
Discrete time unit impulse function is also called the unit sample sequence.
It is defined only at n=0.
3. Ramp Function
Continuous time
Discrete time
4. Parabolic Function
Continuous time
Discrete time
5. Signum Function
Continuous time
Discrete time
6. Exponential Signal
Continuous time
x(t) = Ae^(at)
'A' and 'a' are real numbers. 'A' is the amplitude of the exponential signal measured
at t = 0.
'a' can be either positive or negative.
For a > 0 (growing exponential):
x(t) → 0 as t → -∞
x(t) = Ae^0 = A at t = 0
x(t) → ∞ as t → +∞
For a < 0 (decaying exponential):
x(t) → ∞ as t → -∞
x(t) = Ae^0 = A at t = 0
x(t) → 0 as t → +∞
Discrete time
x[n] = Ae^(an)
'A' and 'a' are real numbers. 'A' is the amplitude of the exponential signal measured
at n = 0.
'a' can be either positive or negative.
For a > 0 (growing exponential):
x[n] → 0 as n → -∞
x[n] = Ae^0 = A at n = 0
x[n] → ∞ as n → +∞
For a < 0 (decaying exponential):
x[n] → ∞ as n → -∞
x[n] = Ae^0 = A at n = 0
x[n] → 0 as n → +∞
7. Rectangular Function
Continuous time
Discrete time
8. Triangular Function
Continuous time
Discrete time
9. Sinusoidal Signal
Continuous time
Discrete time
CLASSIFICATION OF SIGNALS
In the above example, the sampling time interval was T=1sec, which means samples are
taken after every 1 sec to form the discrete time signal.
3. Deterministic and non-deterministic
❖ A signal is said to be deterministic if there is no uncertainty with respect to its value at
any instant of time. Or, signals which can be defined exactly by a mathematical formula
are known as deterministic signals.
4. Even and odd
❖ A signal is said to be even when it satisfies the condition x(t) = x(-t) or x[n] = x[-n]
Example: As shown in the following diagram, the rectangle function satisfies x(t) = x(-t), so it is
an even function.
❖ A signal is said to be odd when it satisfies the condition x(t) = -x(-t) or x[n] = - x[-n]
Example: As shown in the following diagram, the signum function satisfies x(t) = -x(-t), so it is
an odd function.
Any function ƒ(t) can be expressed as the sum of its even function ƒe(t) and odd function
ƒo(t).
ƒ(t) = ƒe(t) + ƒo(t)
where
ƒe(t ) = ½[ƒ(t ) +ƒ(-t )] and ƒo(t ) = ½[ƒ(t ) - ƒ(-t )]
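The even/odd decomposition above is easy to check numerically; a minimal Python sketch (the example function f(t) = t + t² is purely illustrative):

```python
def even_odd_parts(f, t):
    """Return (fe(t), fo(t)) where fe(t) = 0.5*[f(t) + f(-t)] and fo(t) = 0.5*[f(t) - f(-t)]."""
    fe = 0.5 * (f(t) + f(-t))
    fo = 0.5 * (f(t) - f(-t))
    return fe, fo

# f(t) = t + t^2 has even part t^2 and odd part t
fe, fo = even_odd_parts(lambda t: t + t * t, 3.0)
# fe = 9.0 and fo = 3.0; fe + fo reconstructs f(3) = 12.0
```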
5. Periodic and aperiodic
❖ A signal is said to be periodic signal if it has a definite pattern and repeats itself at a
regular interval of time.
❖ A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n +
N).
where
T = fundamental time period,
1/T = f = fundamental frequency.
The above signal will repeat for every time interval T0 hence it is periodic with period T0.
❖ A signal is said to be aperiodic signal if it does not have a definite pattern and does not
repeat itself at a regular interval of time.
6. Energy and power
❖ A signal is a power signal if its average power is finite and non-zero.
For a continuous time signal:
P_av = lim (T→∞) (1/2T) ∫ from -T to T of |x(t)|² dt, with 0 < P_av < ∞
For a discrete time signal:
P_av = lim (N→∞) (1/(2N+1)) Σ from n=-N to N of |x(n)|²
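The discrete time average power can be estimated by truncating the limit at a large N; a short Python sketch (the sinusoid and its parameters are illustrative):

```python
import math

def average_power(x, N):
    """Approximate P_av = lim (1/(2N+1)) * sum over n = -N..N of |x(n)|^2."""
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

# A sinusoid A*cos(w0*n) is a power signal with P_av = A^2 / 2.
p = average_power(lambda n: 3.0 * math.cos(0.1 * math.pi * n), 10000)
# p is close to 3^2 / 2 = 4.5, so 0 < P_av < infinity: a power signal
```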
Time scaling
x(At) is time scaled version of the signal x(t). where A is always positive.
Note: u(at) = u(t) for a > 0; time scaling has no effect on the unit step function.
Time shifting
x(t ± t0) is time shifted version of the signal x(t).
x (t + t0) → negative shift
x (t - t0) → positive shift
Time reversal
x(-t) is the time reversal of the signal x(t).
Addition
Addition of two signals is nothing but addition of their corresponding amplitudes. This can
be best explained by using the following example:
Subtraction
Subtraction of two signals is nothing but subtraction of their corresponding amplitudes. This
can be best explained by the following example:
Multiplication
Multiplication of two signals is nothing but multiplication of their corresponding amplitudes.
This can be best explained by the following example:
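These pointwise operations can also be written out for sampled signals; a small Python sketch (the two sample sequences below are illustrative):

```python
# Two discrete-time signals defined over the same sample instants
x1 = [0, 1, 2, 1, 0]
x2 = [1, 1, 1, 1, 1]

# Addition, subtraction and multiplication act on corresponding amplitudes
added      = [a + b for a, b in zip(x1, x2)]
subtracted = [a - b for a, b in zip(x1, x2)]
multiplied = [a * b for a, b in zip(x1, x2)]
# added -> [1, 2, 3, 2, 1]
```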
Convolution
A continuous time system, as shown below, accepts a continuous time signal x(t)
and gives out a transformed continuous time signal y(t).
Some of the different methods of representing the continuous time system are:
i. Differential equation
ii. Block diagram
iii. Impulse response
iv. Frequency response
v. Laplace-transform
vi. Pole-zero plot
It is possible to switch from one form of representation to another, and each of
the representations is complete. Moreover, from each of the above representations, it is
possible to obtain system properties such as stability, causality, linearity,
invertibility, etc. We now attempt to develop the convolution integral.
Impulse Response
The impulse response of a continuous time system is defined as the output of the
system when its input is a unit impulse, δ(t). Usually, the impulse response is denoted by
h(t).
Convolution Sum:
We now attempt to obtain the output of a digital system for an arbitrary input
x[n], from the knowledge of the system impulse response h[n].
Problem:
To obtain the digital system output y[n], given the system impulse response h[n], and the
system input x[n] as:
Convolution Integral
Example 1:
Consider the convolution of the delta impulse signal and any other regular signal f(t).
Example 2:
Consider the convolution of e-t u(t) and sin(t)
Graphical convolution
The graphical presentation of the convolution integral helps in the understanding of every
step in the convolution procedure. According to the definition integral, the convolution
procedure involves the following steps:
Step 1: Apply the convolution duration property to identify intervals in which the
convolution is equal to zero.
Step 2: Flip about the vertical axis one of the signals (the one that has a simpler form
(shape) since the commutativity holds), that is, represent one of the signals in the time scale
-τ.
Step 3: Vary the parameter t from -infinity to +infinity, that is, slide the flipped signal
from the left to the right, look for the intervals where it overlaps with the other
signal, and evaluate the integral of the product of two signals in the corresponding
intervals.
In the above steps one can also incorporate (if applicable) the convolution time shifting
property such that all signals start at the origin. In such a case, after the final convolution
result is obtained the convolution time shifting formula should be applied appropriately. In
addition, the convolution continuity property may be used to check the obtained
convolution result, which requires that at the boundaries of adjacent intervals the
convolution remains a continuous function of the parameter.
We present several graphical convolution problems starting with the simplest one.
Example:
Consider two rectangular pulses given below
Since the durations of the signals f1(t) and f2(t) are respectively given by [t1, T1] = [0,3] and
[t2, T2] = [0,1], we conclude that the convolution of these two signals is zero in the following
intervals (Step 1).
In Step 3, we shift the signal 𝑓2 (−𝜏) to the left and to the right, that is, we form the signal
𝑓2 (𝑡 − 𝜏) for 𝑡 ∈ (−∞, 0] and 𝑡 ∈ [0, +∞). A shift of the signal 𝑓2 (𝑡 − 𝜏) to the left (t<0)
produces no overlapping between the signals 𝑓1 (𝜏) and 𝑓2 (𝑡 − 𝜏), thus the convolution
integral is equal to zero.
Let us start shifting the signal 𝑓2 (𝑡 − 𝜏) to the right (t>0). Consider first the interval 0 ≤ t ≤ 1.
It can be seen from the figure that in the interval from zero to t the signals overlap; hence their
product is different from zero in this interval, which implies that the convolution integral is
given by
By shifting the signal 𝑓2 (𝑡 − 𝜏) further to the right, we get the same “kind of overlap” for 1 ≤
t ≤ 3,
By shifting 𝑓2 (𝑡 − 𝜏) further to the right, for 3 ≤ t ≤ 4, we get the situation presented below.
For t > 4, the convolution is equal to zero as determined in Step 1. This can be justified by the
fact that the signals 𝑓1 (𝜏) and 𝑓2 (𝑡 − 𝜏) do not overlap for t > 4, that is, their product is equal to
zero for t > 4, which implies that the corresponding integral is equal to zero in the same
interval.
Example 2
Let us convolve the signals
Since both signals have the duration intervals from zero to two, we conclude that the
convolution integral is zero for t≤0 and t≥4.
In the next step we flip about the vertical axis the rectangular signal since it apparently has a
simpler shape. In Step 3, we slide the rectangular signal to the right for 𝑡 ∈ [0,2], Figure b,
and for 𝑡 ∈ [2,4], Figure c.
The convolution integral in these two intervals, evaluated according to information given in
Figures b and c, is respectively given by
SOLVED PROBLEMS
1. What is a gate function?
Rectangular function A rect(t/T) is called a gate function.
2. What is a Dirac delta function?
The impulse function 𝛿(𝑡) is called the Dirac delta function.
3. If u(t) is unit step function, draw
a) u(t-3)
b) u(t+2)
c) u(-t)
(Always do scaling first, then shifting. In this case, since it is a unit step function, scaling has no
effect.) i.e., u(2(t + 1/2))
(Always do reversal first, then scaling, then shifting.) i.e., 3u(-4(t - 3/4))
UNSOLVED PROBLEMS
1. According to the product property of the impulse function x(t) δ(t-t0) = x(t0) δ(t-t0),
then what is the value of
a) cos t · δ(t − π/4)
b) (t² + 1) · δ(t − 2)
c) t · δ(t)
2. Prove the sampling property ∫ from t1 to t2 of x(t) δ(t-t0) dt = x(t0); t1 ≤ t0 ≤ t2
3. According to the sampling property of the impulse function
∫ from t1 to t2 of x(t) δ(t-t0) dt = x(t0); t1 ≤ t0 ≤ t2, then what is the value of
a) ∫ from 0 to ∞ of (t² + 1) δ(t+1) dt
b) ∫ from 0 to ∞ of (t² + 1) δ(t) dt
c) ∫ from 0 to 2π of sin t · δ(π/2 − t) dt
d) ∫ from 2 to 4 of e^(−(2t−1)) δ(3t−9) dt
4. Find the convolution of x(t) = e^(-2t) u(t) and y(t) = u(t-3)
5. For the given signal, sketch
a. y(t) – y(t-1)
b. y(-2t)
c. y(t-1)u(1-t)
d. y(2t) + y(-3t)
e. y(3t-1)
6. Determine if the following signals are power signals or energy signals
a. x(t)=sin(2t) u(t)
b. x(t)=tu(t)
c. x(t) = 5e^(-3t) u(t)
7. Sketch the even and odd part of the signal shown.
8. Determine which of the following signals are bounded, and specify a smallest bound.
a. x(t) = e^(3t) u(t)
b. x(t)=4e-6|t|
9. Determine if the following signals are periodic, and if so compute the fundamental
period.
a. x[n] = e^(j(20π/3)n)
b. x[n] = 1 + e^(j(4π/5)n)
Revision 2021
Semester 5
SIGNALS &
SYSTEMS
MODULE 2 NOTES
CO2 Classify and compare continuous time and discrete time systems
M2.01 Show time domain representation of a system | 3 Hours | Understanding
M2.02 Compare continuous time and discrete time systems | 3 Hours | Understanding
M2.03 Interpret impulse response of a continuous time and discrete time system | 3 Hours | Understanding
M2.04 Identify various properties of systems | 3 Hours | Applying
Contents:
Representation of systems: Differential equation representation, Difference equation
representation
Continuous time and discrete time systems – Impulse response, examples
Properties of systems – linearity, time invariant system, invertible, causal and non-causal,
stable and unstable.
Here x[n] is the discrete-time input and y[n] is the discrete-time output. An example
of discrete-time system is a simple model for the balance in a bank account from month-to-
month. Discrete-time systems are described by difference equations.
𝑥[𝑛] = Σ (k = −∞ to +∞) 𝑥[𝑘]𝛿[𝑛 − 𝑘]
where x [k] can be taken as weights.
❖ This is called the convolution sum and h[n] is the impulse response of the system.
❖ h[n-k] is the response of the system to time shifted impulse.
❖ Hence if we know the impulse response of the LTI system, then we can find the
response of the system to any other input.
Example:
What is the output of an LTI system with impulse response h[n] = {1, 2, 1} (with the arrow under
the 2, marking n = 0) to the input x[n] = {2, 3, -2}? Use graphical method.
y[n] will range from n= -1 to n=3 (sum of lower limits of x[n] & h[n] to sum of upper limits of
x[n] & h[n] )
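The result can be checked numerically; the sketch below evaluates the convolution sum directly (the start index of x[n] is assumed to be n = 0, which matches the stated output range n = -1 to 3):

```python
def convolution_sum(x, h):
    """Direct evaluation of y[n] = sum over k of x[k] * h[n-k] for finite sequences."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            y[i + j] += xv * hv
    return y

# h[n] = {1, 2, 1} starting at n = -1 (arrow under the 2);
# x[n] = {2, 3, -2} assumed to start at n = 0.
y = convolution_sum([2, 3, -2], [1, 2, 1])
# y = [2, 7, 6, -1, -2], supported on n = -1 .. 3
```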
❖ As discussed in the case of discrete time LTI systems, a continuous time signal can
also be expressed in terms of impulses.
❖ In general
❖ This is called the convolution integral and h(t) is the impulse response of the system.
❖ Hence if we know the impulse response of the LTI system, then we can find the
response of the system to any other input.
Example:
Find the output of a system whose impulse response is given by h(t) = u(t-3) for an input
signal x(t) = e^(-2t) u(t).
Change of axis
x(t) = e^(-2t) u(t) and h(t) = u(t - 3) become
x(τ) = e^(-2τ) u(τ) and h(τ) = u(τ - 3)
On sliding the input along the impulse response from -infinity to +infinity
So on integration
y(t) = ∫ from 0 to t-3 of x(τ) h(t-τ) dτ
y(t) = ∫ from 0 to t-3 of e^(-2τ) dτ = [e^(-2τ)/(-2)] evaluated from 0 to t-3 = (1 - e^(-2(t-3)))/2
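The closed-form result y(t) = (1 - e^(-2(t-3)))/2 for t ≥ 3 can be checked against a direct numerical evaluation of the integral; a Python sketch (the step size dt is an assumption of the sketch):

```python
import math

def y_closed(t):
    """Closed-form output: (1 - exp(-2(t-3)))/2 for t >= 3, zero before t = 3."""
    return 0.0 if t < 3 else (1 - math.exp(-2 * (t - 3))) / 2

def y_numeric(t, dt=1e-4):
    """Left Riemann-sum approximation of the integral of exp(-2*tau) from 0 to t-3."""
    if t < 3:
        return 0.0
    steps = int(round((t - 3) / dt))
    return sum(math.exp(-2 * k * dt) for k in range(steps)) * dt

# At t = 5 the two values agree to within the discretization error
err = abs(y_closed(5.0) - y_numeric(5.0))
```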
REPRESENTATION OF SYSTEMS
DIFFERENTIAL EQUATION REPRESENTATION
The general representation of a continuous time LTI system is given by the differential
equation
Differential equation provides an implicit specification of the system. The implicit expression
describes a relationship between the input and the output rather than an explicit expression
for the system output as a function of the input.
DIFFERENCE EQUATION REPRESENTATION
The general representation of a discrete time LTI system is given by the difference Equation
Example:
Obtain the output of an LTI causal discrete- time system described by difference equation
y[n] − (1/5) y[n−1] = x[n] to the input x[n] = Kδ[n]
To calculate the present output y[n], we need the past value of the output y[n-1]. But the
system is causal that means there is no output before applying the input. Since
x[n] =K δ[n]
i.e., x[n]=0 for n ≤ -1 implies that y[n] = 0 for n ≤ -1, so that we have as an initial condition
y[- 1] = 0. Thus, to begin the recursion, with this initial condition, we can solve for successive
values of y[n] for n ≥ 0 as follows:
y[0] = x[0] + (1/5) y[-1] = K + 0 = K (since δ[n] = 1 if n = 0)
y[1] = x[1] + (1/5) y[0] = 0 + (1/5) K = (1/5) K
y[2] = x[2] + (1/5) y[1] = 0 + (1/5)·(1/5) K = (1/5)² K
In general, y[n] = x[n] + (1/5) y[n-1] = (1/5)ⁿ K for n ≥ 0
Since this system is an LTI, its input-output behaviour is completely characterized by its
impulse response.
If K = 1, i.e. x[n] = δ[n], the output y[n] becomes the impulse response:
h[n] = (1/5)ⁿ u[n]
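The recursion is straightforward to reproduce in Python; with K = 1 the iteration yields the impulse response directly (a sketch, not part of the prescribed material):

```python
def difference_eq_output(n_max, K=1.0):
    """Iterate y[n] = x[n] + (1/5)*y[n-1] with x[n] = K*delta[n] and y[-1] = 0."""
    y_prev = 0.0
    out = []
    for n in range(n_max + 1):
        x = K if n == 0 else 0.0   # K*delta[n]: nonzero only at n = 0
        y = x + 0.2 * y_prev
        out.append(y)
        y_prev = y
    return out

h = difference_eq_output(3)
# h is approximately [1.0, 0.2, 0.04, 0.008], i.e. h[n] = (1/5)^n for n >= 0
```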
PROPERTIES OF SYSTEMS
Systems are classified into the following categories:
Hence,
𝑇[𝑎1 𝑥1 (𝑡) + 𝑎2 𝑥2 (𝑡)] = 𝑎1 𝑦1 (𝑡) + 𝑎2 𝑦2 (𝑡)
From the above expression, it is clear that the response to a weighted sum of inputs equals
the same weighted sum of the individual responses; hence the system is linear.
Example:
y(t) = x2(t)
Solution:
This is not equal to a1y1(t) + a2y2(t). Hence, the system is said to be non-linear.
For present value t=0, the system output is y(0) = 2x(0). Here, the output is only
dependent upon the present input. Hence the system is memoryless or static.
Example 2:
y(t) = 2 x(t) + 3 x(t-3)
For present value t=0, the system output is y(0) = 2x(0) + 3x(-3).
Here x(-3) is past value for the present input for which the system requires memory
to get this output. Hence, the system is a dynamic system.
5. Causal and Non-Causal Systems
A system is said to be causal if its output depends upon present and past inputs, and
does not depend upon future inputs: 𝑦[𝑛] = 𝑓(𝑥[𝑛], 𝑥[𝑛 − 1], … ). All memoryless
systems are causal.
For non-causal system, the output depends upon future inputs also.
Example 1:
y(t) = 2 x(t) + 3 x(t-3)
For present value t=1, the system output is y(1) = 2x(1) + 3x(-2).
Here, the system output only depends upon present and past inputs. Hence, the system
is causal.
Example 2:
y(t) = 2 x(t) + 3 x(t-3) + 6 x(t + 3)
For present value t=1, the system output is y(1) = 2x(1) + 3x(-2) + 6x(4) Here, the
system output depends upon future input. Hence the system is non-causal system.
For real time systems, where n actually denotes time, causality is important.
Causality is not an essential constraint in applications where n is not time, for example,
image processing. If we are processing recorded data, then also causality may
not be required.
6. Invertible and Non-Invertible systems
A system is said to be invertible if the input signal {x[n]} can be recovered from the
output signal {y[n]}. For this to be true, two different input signals should produce two
different outputs. If different input signals produce the same output signal, then by
processing the output we cannot say which input produced it.
If distinct inputs always produce distinct outputs, the system is invertible; otherwise, the
system is said to be non-invertible.
Example 1
𝑦[𝑛] = Σ (k = −∞ to n) 𝑥[𝑘]
Then 𝑥[𝑛] = 𝑦[𝑛] − 𝑦[𝑛 − 1]. Hence it is an invertible system.
An example of a non-invertible system is 𝑦[𝑛] = 0. That is, the system produces an all-zero
sequence for any input sequence. Since every input sequence gives all zero sequence,
we cannot find out which input produced the output. The system which produces the
sequence {x[n]} from sequence {y[n]} is called the inverse system. In communication
system, decoder is an inverse of the encoder.
7. Stable and Unstable Systems
The system is said to be stable only when the output is bounded for bounded input.
For a bounded input, if the output is unbounded in the system, then it is said to be
unstable.
A system is said to be BIBO stable if every bounded input produces a bounded
output. We say that a signal {x[n]} is bounded if
|𝑥[𝑛]| < 𝑀 < ∞
Note: For a bounded signal, amplitude is finite.
Example 1:
y(t) = x2(t)
Let the input be u(t) (unit step, a bounded input); then the output y(t) = u²(t) = u(t), which is a
bounded output. Hence, the system is stable.
Example 2: y(t) = ∫x(t)dt
Let the input be u(t) (unit step, a bounded input); then the output y(t) = ∫u(t)dt is a ramp
signal, which is unbounded because the amplitude of the ramp is not finite; it goes to infinity
as t → infinity. Hence, the system is unstable.
Example 3:
Consider a moving average system: 𝑦[𝑛] = (1/(2N+1)) Σ (k = −N to N) 𝑥[𝑛 − 𝑘].
This is stable, as y[n] is a sum of a finite number of bounded terms and so it is bounded.
Example 4:
Consider the running sum 𝑦[𝑛] = Σ (k = −∞ to n) 𝑥[𝑘].
This system is unstable since if we take {𝑥[𝑛]} = {𝑢[𝑛]}, the unit step, then 𝑦[0] = 1,
𝑦[1] = 2, 𝑦[2] = 3, and in general 𝑦[𝑛] = 𝑛 + 1 for 𝑛 ≥ 0, so y[n] grows without bound.
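The unbounded growth can be seen directly by iterating the running sum; a brief Python sketch:

```python
def accumulator(x):
    """Running sum y[n] = sum of x[k] for k <= n, evaluated over a finite input list."""
    y, total = [], 0
    for v in x:
        total += v
        y.append(total)
    return y

# A bounded input (unit step: all ones) produces y[n] = n + 1,
# which grows without bound as n increases; the system is not BIBO stable.
y = accumulator([1] * 5)
# y = [1, 2, 3, 4, 5]
```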
UNSOLVED PROBLEMS
1. Determine if each of the following systems is causal, memoryless, time invariant,
linear, or stable. Justify your answer.
a. 𝑦(𝑡) = 2𝑥(𝑡) + 3𝑥 2 (𝑡 − 1)
b. 𝑦(𝑡) = 𝑐𝑜𝑠 2 (𝑡)𝑥(𝑡)
c. 𝑦(𝑡) = 𝑥(−𝑡)
d. 𝑦(𝑡) = 𝑥(3𝑡)
2. Determine if each of the following systems is causal, memoryless, time invariant,
linear, or stable. Justify your answer.
a. 𝑦[𝑛] = 3𝑥[𝑛]𝑥[𝑛 − 1]
b. 𝑦[𝑛] = 4𝑥[3𝑛 − 2]
c. 𝑦[𝑛] = Σ (k = n−2 to n+2) 𝑥[𝑘]
3. Using the graphical method, compute and sketch 𝑦[𝑛] = 𝑥[𝑛] ∗ ℎ[𝑛] for
a. 𝑥[𝑛] = 𝑢[𝑛] and ℎ[𝑛] = (1/2)^𝑛 𝑢[𝑛 − 1]
b. 𝑥[𝑛] = 1 and ℎ[𝑛] = 𝛿[𝑛] − 2𝛿[𝑛 − 1] + 𝛿[𝑛 − 2]
c. 𝑥[𝑛] = 𝑢[𝑛 − 1] − 𝑢[𝑛 − 3] and ℎ[𝑛] = −𝑢[𝑛] + 𝑢[𝑛 − 3]
4. Using the graphical method, compute and sketch 𝑦(𝑡) = 𝑥(𝑡) ∗ ℎ(𝑡) for
a. h(t) = e^(−t) u(t) and x(t) = 2u(t) − 2u(t − 1)
b. h(t) = e^(−|t|) and x(t) = u(t)
c. h(t) = e^(t) u(−t) and x(t) = u(t − 2)
5. Suppose the input to LTI system is x(t) and impulse response is h(t), find the output of the
system
Revision 2021
Semester 5
SIGNALS &
SYSTEMS
MODULE 3 NOTES
Contents:
Fourier representation of four classes of signals
• Continuous time periodic signal: Fourier series (FS)
• Discrete time periodic signal: Discrete time Fourier series (DTFS)
• Continuous time non-periodic signal: Fourier transform (FT)
• Discrete time non-periodic signal: Discrete time Fourier transform (DTFT)
Properties of Fourier representation – linearity, symmetry, time shift, frequency
shift, scaling, differentiation and integration, convolution and modulation
Sampling theorem, aliasing, reconstruction
The Fourier representation of a signal refers to expressing the signal in terms of its
frequency components using Fourier analysis. It allows us to decompose a signal into a sum
of sinusoidal functions of different frequencies, amplitudes, and phases. The basic idea
behind the Fourier representation is that any periodic or non-periodic signal can be
represented as a combination of sine and cosine functions with different frequencies. These
sinusoidal functions are referred to as Fourier components or Fourier harmonics.
There are four distinct Fourier representations, each applicable to a different class of
signals. These four classes are defined by the periodicity properties of a signal and whether it
is continuous or discrete time:
❖ Periodic signals have Fourier series representations. The Fourier series (FS) applies to
continuous-time periodic signals and the discrete time Fourier series (DTFS) applies to
discrete time periodic signals.
❖ Non-periodic signals have Fourier transform representations. If the signal is
continuous time and non-periodic, the representation is termed the Fourier transform
(FT). If the signal is discrete time and non-periodic, then the representation is termed
the discrete time Fourier transform (DTFT).
The two basic periodic signals are the sinusoidal signal and the complex exponential
signal:
x(t) = cos(ω₀t)
x(t) = e^(jω₀t)
Both of these signals are periodic with fundamental period T and fundamental
frequency ω₀ = 2π/T. The Fourier series is a decomposition of such periodic signals into the
sum of a (possibly infinite) number of complex exponentials whose frequencies are
harmonically related:
φₖ(t) = e^(jkω₀t) = e^(jk(2π/T)t), k = 0, ±1, ±2, ...
Then, writing x(t) = Σ (n = −∞ to ∞) cₙ e^(jnω₀t), multiplying both sides by e^(−jkω₀t) and
integrating over one period,
∫ from t₀ to t₀+T of x(t) e^(−jkω₀t) dt = ∫ from t₀ to t₀+T of [ Σ (n = −∞ to ∞) cₙ e^(jnω₀t) ] e^(−jkω₀t) dt
= Σ (n = −∞ to ∞) cₙ ∫ from t₀ to t₀+T of e^(jnω₀t) e^(−jkω₀t) dt
Substituting the relation
∫ from t₀ to t₀+T of e^(jnω₀t) e^(−jkω₀t) dt = { 0 for k ≠ n ; T for k = n }
in the above equation, we get
∫ from t₀ to t₀+T of x(t) e^(−jkω₀t) dt = T cₖ
Hence,
cₖ = (1/T) ∫ from t₀ to t₀+T of x(t) e^(−jkω₀t) dt
or
cₙ = (1/T) ∫ from t₀ to t₀+T of x(t) e^(−jnω₀t) dt
Thus, the Fourier series of a continuous time periodic signal is given by the following
equations:
x(t) = Σ (n = −∞ to ∞) cₙ e^(jnω₀t) = Σ (n = −∞ to ∞) cₙ e^(jn(2π/T)t)    (Synthesis Equation)
cₙ = (1/T) ∫ from 0 to T of x(t) e^(−jnω₀t) dt = (1/T) ∫ from 0 to T of x(t) e^(−jn(2π/T)t) dt    (Analysis Equation)
The term |c_n| represents the magnitude of the nth harmonic component and ∠c_n
denotes its phase. Therefore, we can plot two spectra: the magnitude spectrum
(|c_n| versus n) and the phase spectrum (∠c_n versus n). The plot of harmonic
magnitude/phase of a signal versus n is called the frequency spectrum (or line
spectrum). The two plots together are known as the
Fourier frequency spectra of x(t). This is also known as the frequency domain representation. The
Fourier spectrum exists only at the discrete frequencies nω₀, where n = 0, ±1, ±2, ..., hence it is also
known as a discrete spectrum.
The spectra can be plotted for both positive and negative frequencies; hence they are called
two-sided spectra. The magnitude spectrum is symmetrical about the vertical axis passing
through the origin, and the phase spectrum is anti-symmetrical about the vertical axis passing
through the origin. So, the magnitude spectrum exhibits even symmetry and the phase spectrum
exhibits odd symmetry.
If x(t ) is represented by the complex exponential Fourier series, then we can compute
the power of the signal from the Fourier coefficients:
(1/T) ∫_T |x(t)|² dt = Σ_{n=−∞}^{+∞} |c_n|²
This equation is called Parseval’s identity (or Parseval’s Theorem) for the Fourier
series. The significance of this result is that the power of a signal is directly dependent on the
magnitude square of its Fourier coefficients. Hence, the energy or power of a signal in the
time domain is equal to the energy or power in the frequency domain.
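Parseval's identity can be checked numerically; the sketch below uses our own test signal, x(t) = 1 + cos(2πt) with T = 1, whose only nonzero coefficients are c₀ = 1 and c₊₁ = c₋₁ = 1/2 (this example is not from the text).

```python
import numpy as np

# Test signal (our own example): x(t) = 1 + cos(2*pi*t), period T = 1.
T = 1.0
w0 = 2 * np.pi / T
N = 20000
t = np.arange(N) * (T / N)              # one period, endpoint excluded
dt = T / N
x = 1.0 + np.cos(2 * np.pi * t)

def coeff(n):
    """Analysis equation: c_n = (1/T) * integral of x(t) e^{-j n w0 t} over one period."""
    return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

# Power in the time domain: (1/T) * integral of |x(t)|^2 over one period
power_time = np.sum(np.abs(x) ** 2) * dt / T

# Power in the frequency domain: sum of |c_n|^2, truncated to |n| <= 10
power_freq = sum(abs(coeff(n)) ** 2 for n in range(-10, 11))

print(power_time, power_freq)   # both ~1.5, as Parseval's identity predicts
```

Here 1 + 2·(1/2)² = 1.5, which matches the time-domain power (1/T)∫|x(t)|² dt of the chosen signal.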
Every signal x(t ) of period T satisfying following conditions known as Dirichlet’s conditions,
can be expressed in the form of Fourier series
1. The signal should have only a finite number of maxima and minima over a given period.
2. The signal must possess only a finite number of discontinuities over a given period.
3. The signal must be absolutely integrable over a given period, i.e.,
∫₀^T |x(t)| dt < ∞
Example 1.
Consider a periodic signal x(t) with fundamental period T = 1 (so ω₀ = 2π) whose Fourier
series coefficients are
a₀ = 1,  a₁ = a₋₁ = 1/4,  a₂ = a₋₂ = 1/2,  a₃ = a₋₃ = 1/3,
with all other coefficients zero.
Rewriting the synthesis equation and collecting the harmonic components which have
the same frequency, we obtain
x(t) = 1 + (1/4)(e^(j2πt) + e^(−j2πt)) + (1/2)(e^(j4πt) + e^(−j4πt)) + (1/3)(e^(j6πt) + e^(−j6πt))
x(t) = 1 + (1/2) cos 2πt + cos 4πt + (2/3) cos 6πt
Figure below gives the graphical illustration of how the signal x(t ) is built up from its
harmonic components
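The closed form in Example 1 can also be verified numerically: synthesising x(t) directly from the coefficients must agree with the cosine form obtained by pairing the ±k terms (T = 1 is assumed, as the harmonic frequencies suggest).

```python
import numpy as np

# Coefficients from Example 1, stored as {harmonic index k: a_k}
a = {0: 1.0, 1: 0.25, -1: 0.25, 2: 0.5, -2: 0.5, 3: 1/3, -3: 1/3}

t = np.linspace(0.0, 1.0, 1001)
# Synthesis equation: sum of a_k * e^{j 2 pi k t}
x_synth = sum(ak * np.exp(1j * 2 * np.pi * k * t) for k, ak in a.items())

# Closed form obtained by pairing +-k terms into cosines
x_closed = (1 + 0.5 * np.cos(2 * np.pi * t) + np.cos(4 * np.pi * t)
            + (2 / 3) * np.cos(6 * np.pi * t))

print(np.max(np.abs(x_synth - x_closed)))   # ~0: the two forms agree
```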
Example 2.
Find the Fourier series of sin(ω₀t).
Using Euler's identity, sin(ω₀t) = (1/2j) e^(jω₀t) − (1/2j) e^(−jω₀t), so the only nonzero
coefficients are c₁ = 1/2j and c₋₁ = −1/2j.

PROPERTIES OF CONTINUOUS TIME FOURIER SERIES
Consider a signal x(t) which is periodic with period T and fundamental frequency
ω₀ = 2π/T. Let its Fourier series coefficients be c_n. This pair is denoted
x(t) ⎯FS→ c_n
1. Linearity
If x(t) ⎯FS→ c_n and y(t) ⎯FS→ d_n,
then ax(t) + by(t) ⎯FS→ ac_n + bd_n
i.e., the Fourier series is a linear operation.
2. Time Shifting
If x(t) ⎯FS→ c_n, then according to the time shifting property,
x(t − t₀) ⎯FS→ e^(−jnω₀t₀) c_n
i.e., the magnitude of the Fourier series coefficients remains unchanged when the signal is
shifted in time; only the phase changes.
3. Frequency Shifting
If x(t) ⎯FS→ c_n, then according to the frequency shifting property,
e^(jmω₀t) x(t) ⎯FS→ c_(n−m)
4. Time Scaling
If x(t) is periodic with period T, then x(at) is periodic with period T/a (a > 0).
If x(t) ⎯FS→ c_n, then x(at) ⎯FS→ c_n
Thus, the FS coefficients are unchanged after time scaling, but the spacing between the
frequency components changes from ω₀ to aω₀, or from 1/T to a/T.
5. Time Inversion
The time inversion property states that
if x(t) ⎯FS→ c_n, then x(−t) ⎯FS→ c_(−n)
6. Differentiation in Time
According to this property, if x(t) ⎯FS→ c_n, then
(d/dt) x(t) ⎯FS→ (jnω₀) c_n
7. Integration
If x(t) ⎯FS→ c_n, then
∫ x(t) dt ⎯FS→ (1/(jnω₀)) c_n
8. Convolution
If x(t) ⎯FS→ c_n and y(t) ⎯FS→ d_n, then
x(t) * y(t) ⎯FS→ T c_n d_n
Hence, convolution in the time domain leads to multiplication of the Fourier series
coefficients (scaled by T) in the Fourier series domain.
9. Symmetry
The symmetry properties state that
if x(t) is real, then c_n = c*_(−n)
if x(t) is purely imaginary, then c_n = −c*_(−n)
DISCRETE TIME FOURIER SERIES (DTFS)
A discrete time signal x(n) is periodic with period N if
x(n) = x(n + N)
Such a signal can be represented by the discrete time Fourier series
x(n) = Σ_{k=0}^{N−1} c_k e^(j2πkn/N)
where the c_k are the Fourier coefficients, given by
c_k = (1/N) Σ_{n=0}^{N−1} x(n) e^(−j2πkn/N) = (1/N) Σ_{n=0}^{N−1} x(n) e^(−jkω₀n),  for k = 0, 1, 2, ..., N−1
The Fourier coefficient ck represents the amplitude and phase associated with the kth
frequency component. Hence, we can say that the Fourier coefficients provide the description
of x(n) in the frequency domain.
To derive the analysis equation, multiply the expression for x(n) by e^(−j2πmn/N) and sum
over n over one period:
(1/N) Σ_{n=0}^{N−1} x(n) e^(−j2πmn/N) = (1/N) Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} c_k e^(j2πkn/N) e^(−j2πmn/N)
Here the right-hand side becomes
(1/N) Σ_{k=0}^{N−1} c_k Σ_{n=0}^{N−1} e^(j2π(k−m)n/N)
The inner sum equals N when k = m and is zero otherwise. Hence,
(1/N) Σ_{k=0}^{N−1} Σ_{n=0}^{N−1} c_k e^(j2π(k−m)n/N) = c_m
and
(1/N) Σ_{n=0}^{N−1} x(n) e^(−j2πmn/N) = c_m
Or,
c_k = (1/N) Σ_{n=0}^{N−1} x(n) e^(−j2πkn/N)
Thus, the Fourier series (DTFS) of a discrete time periodic signal is given by the following
equations:
x(n) = Σ_{k=0}^{N−1} c_k e^(jkω₀n) = Σ_{k=0}^{N−1} c_k e^(j2πkn/N)            (Synthesis Equation)

c_k = (1/N) Σ_{n=0}^{N−1} x(n) e^(−j2πkn/N) = (1/N) Σ_{n=0}^{N−1} x(n) e^(−jkω₀n)   (Analysis Equation)
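The DTFS analysis and synthesis equations can be computed with numpy's FFT; note that `np.fft.fft` evaluates the sum Σₙ x[n]e^(−j2πkn/N), which is N times the analysis equation here, so we divide by N. The test signal below is our own example.

```python
import numpy as np

# One period of a discrete-time periodic signal (our own example)
N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5          # period N = 8

# Analysis equation: c_k = (1/N) * sum_n x[n] e^{-j 2 pi k n / N}
# numpy's FFT computes the same sum without the 1/N factor.
c = np.fft.fft(x) / N

# Synthesis equation: x[n] = sum_k c_k e^{j 2 pi k n / N}
k = np.arange(N)
x_rec = np.array([np.sum(c * np.exp(1j * 2 * np.pi * k * m / N)) for m in n])

print(np.allclose(x_rec.real, x))   # True: synthesis recovers the signal
```

For this signal, c₀ = 0.5 (the DC term) and c₁ = c₇ = 0.5 (the cosine splits equally between bins 1 and N−1), with all other coefficients zero.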
1. The frequency range of a continuous time signal is from −∞ to +∞, and so it has an
infinite frequency spectrum.
2. The frequency range of a discrete time signal is 0 to 2π (or −π to +π), and so it has a
finite frequency spectrum. A discrete time signal with fundamental period N will
have N frequency components, whose frequencies are
ω_k = 2πk/N for k = 0, 1, 2, ..., N − 1
PROPERTIES OF DISCRETE TIME FOURIER SERIES
Consider a signal x(n) which is periodic with period N and fundamental frequency
ω₀ = 2π/N. Let the Fourier coefficients of x(n) be denoted by c_k. This pair is denoted
x(n) ⎯DTFS→ c_k
1. Linearity
If x(n) ⎯DTFS→ c_k and y(n) ⎯DTFS→ d_k,
then ax(n) + by(n) ⎯DTFS→ ac_k + bd_k
i.e., the Fourier series is a linear operation.
2. Time Shifting
If x(n) ⎯DTFS→ c_k, then according to the time shifting property,
x(n − m) ⎯DTFS→ e^(−jkω₀m) c_k
i.e., the magnitude of the Fourier series coefficients remains unchanged when the signal is
shifted in time; only the phase changes.
3. Frequency Shifting
If x(n) ⎯DTFS→ c_k, then according to the frequency shifting property,
e^(jmω₀n) x(n) ⎯DTFS→ c_(k−m)
4. Time Scaling
If x(n) is periodic with period N, then x(n/m) (where N is a multiple of m) is periodic
with period mN.
If x(n) ⎯DTFS→ c_k, then x(n/m) ⎯DTFS→ (1/m) c_k
5. Time Reversal
The time reversal property states that
if x(n) ⎯DTFS→ c_k, then x(−n) ⎯DTFS→ c_(−k)
6. Multiplication
If x(n) ⎯DTFS→ c_k and y(n) ⎯DTFS→ d_k, then
x(n) y(n) ⎯DTFS→ Σ_{m=0}^{N−1} c_m d_(k−m)
7. Convolution
If x(n) ⎯DTFS→ c_k and y(n) ⎯DTFS→ d_k, then for the periodic convolution
Σ_{m=0}^{N−1} x(m) y((n − m))_N ⎯DTFS→ N c_k d_k
8. Symmetry
The symmetry properties state that
if x(n) is real, then c_k = c*_(−k)
if x(n) is purely imaginary, then c_k = −c*_(−k)
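The conjugate-symmetry property for real signals can be checked directly, using the fact that the index −k corresponds to N − k modulo N. A minimal sketch with an arbitrary real signal of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)      # one period of an arbitrary real signal
c = np.fft.fft(x) / N           # DTFS coefficients (analysis equation)

# For real x[n]: c_{-k} = c_k*, i.e. c[N-k] = conj(c[k]) for k = 1..N-1
ok = all(np.isclose(c[N - k], np.conj(c[k])) for k in range(1, N))
print(ok)   # True: conjugate symmetry holds for a real signal
```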
SAMPLING
A discrete time signal x(n) is obtained from a continuous time signal x(t) by taking its
values at the time instants t = nT_s:
x(n) = x(nT_s);  −∞ < n < ∞
The individual values x(n) are called the samples of the continuous time signal x(t).
Usually, the time interval between successive samples will be the same, and such sampling
is called periodic or uniform sampling. The time interval T_s between successive samples is
called the sampling period. The inverse of the sampling period is called the sampling
frequency or sampling rate and is denoted by f_s:
f_s = 1/T_s
The following figure shows a continuous-time signal x(t ) and the corresponding
sampled signal xs (t ) . When x(t ) is multiplied by a periodic impulse train, the sampled
signal xs (t ) is obtained.
SAMPLING THEOREM
A bandlimited continuous time signal with maximum frequency f m can be
represented in its samples and can be recovered back when sampling frequency f s is
greater than or equal to the twice the highest frequency component f m of message signal.
i.e.,
f_s ≥ 2f_m
In other words, Bandlimited continuous time signals when sampled properly, can be
represented as discrete-time signals with no loss of information. This remarkable result is
known as the Sampling Theorem.
Sampling theorem is the bridge between continuous-time and discrete-time signals. It states
how often we must sample in order not to lose any information.
NYQUIST RATE
When the sampling frequency f s is equal to twice the maximum frequency of the given
signal, the sampling rate is called Nyquist rate. It is the minimum sampling frequency needed
to reconstruct the analog signal from sampled waveform. The corresponding sampling
interval, T_s = 1/(2f_m), is called the Nyquist interval.
Proof: Consider a continuous time signal x(t). The spectrum of x(t) is band limited to f_m
Hz, i.e., the spectrum of x(t) is zero for |ω| > ω_m.
Sampling of input signal x(t ) can be obtained by multiplying x(t ) with an impulse
train p (t ) of period Ts . The output of multiplier is a discrete signal called sampled signal
which is represented with xs (t ) in the following diagrams:
Here, you can observe that the sampled signal takes the period of impulse. The
process of sampling can be explained by the following mathematical expression:
The expression for the impulse train is given by
p(t) = Σ_{n=−∞}^{∞} δ(t − nT_s)
Being periodic with period T_s, p(t) has a Fourier series whose coefficients are
c_k = (1/T_s) ∫_{−T_s/2}^{T_s/2} p(t) e^(−jkω_s t) dt = (1/T_s) ∫_{−T_s/2}^{T_s/2} δ(t) e^(−jkω_s t) dt = (1/T_s) e^(−jkω_s t) |_{t=0} = 1/T_s
so that
p(t) = (1/T_s) Σ_{k=−∞}^{∞} e^(jkω_s t)
Recalling the Fourier transform properties of linearity (the transform of a sum is the
sum of the transforms) and modulation (multiplication by a complex exponential produces a
shift in the frequency domain), we can write an expression for the Fourier transform of our
sampled signal:
X_s(jω) = FT{x_s(t)}
        = FT{p(t) x(t)} = FT{ (1/T_s) Σ_{k=−∞}^{∞} x(t) e^(jkω_s t) }
        = (1/T_s) Σ_{k=−∞}^{+∞} FT{ x(t) e^(jkω_s t) }
        = (1/T_s) Σ_{k=−∞}^{+∞} X(j(ω − kω_s))
Thus, the Fourier transform of the sampled signal is given by an infinite sum of
frequency shifted and amplitude scaled replicas of the spectrum of original continuous time
signal.
Since the original signal, x(t ) is bandlimited, the highest frequency component
present in it is f m .
i.e.,
X(jω) = 0 for |ω| > ω_m
1. f_s ≥ 2f_m: The replicas of the original spectrum do not overlap and the signal can be
reconstructed from the sampled spectrum. Here the signal is perfectly sampled without
any information loss.
ALIASING
Aliasing can be referred to as “the phenomenon of a high-frequency component in the
spectrum of a signal, taking on the identity of a low-frequency component in the spectrum of
its sampled version.”
• A low pass anti-aliasing filter is employed, before the sampler, to eliminate the high
frequency components, which are unwanted.
• The signal which is sampled after filtering is sampled at a rate slightly higher than the
Nyquist rate, i.e., f_s > 2f_m.
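Aliasing is easy to demonstrate numerically: a 9 Hz sine sampled at only 10 Hz produces exactly the same samples as a −1 Hz (i.e. 1 Hz, inverted) sine, so the two tones are indistinguishable after sampling. The frequencies below are our own illustrative choices.

```python
import numpy as np

fs = 10.0                       # sampling rate (Hz), below Nyquist for a 9 Hz tone
n = np.arange(50)
t = n / fs

x_high = np.sin(2 * np.pi * 9 * t)       # 9 Hz sine: fs < 2*9 Hz, so it aliases
x_low = np.sin(2 * np.pi * (-1) * t)     # alias frequency: 9 - fs = -1 Hz

print(np.allclose(x_high, x_low))   # True: the samples are identical
```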
SIGNAL RECONSTRUCTION
When the spectrum of sampled signal has no aliasing then it is possible to recover
the original signal from the sampled signal.
In order to reconstruct the original signal x(t ) , we can use an ideal lowpass filter on
the sampled spectrum which has a bandwidth B of any value between f m and ( f s − f m ) . The
filter will pass only the portion of the sampled spectrum, X_s(f), centred at f = 0 and will reject
all its replicas at f = nf_s, for n ≠ 0. This implies that the shape of the original continuous time
signal x(t) will be retained at the output of the ideal filter, whose frequency response is
H(jω) = { T_s,  −B ≤ ω ≤ B
        { 0,    elsewhere
The reconstruction process is possible only if the shaded parts do not overlap. This
means that f_s must be greater than twice f_m.
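In the time domain, the ideal lowpass filter corresponds to Whittaker-Shannon (sinc) interpolation: x(t) = Σₙ x[n]·sinc((t − nT_s)/T_s). A minimal numerical sketch, with our own choice of f_m and f_s and a truncated (finite) sum, so the reconstruction is only approximate:

```python
import numpy as np

fm, fs = 3.0, 8.0                  # signal frequency and sampling rate (fs > 2*fm)
Ts = 1.0 / fs
n = np.arange(-400, 401)           # many samples so truncation error stays small
x_samp = np.cos(2 * np.pi * fm * n * Ts)   # samples of a bandlimited signal

def reconstruct(t):
    """Whittaker-Shannon interpolation from the stored samples.
    np.sinc is the normalized sinc: sin(pi*x)/(pi*x)."""
    return np.sum(x_samp * np.sinc((t - n * Ts) / Ts))

t_test = 0.123
print(reconstruct(t_test), np.cos(2 * np.pi * fm * t_test))  # approximately equal
```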
The main drawback of the Fourier series is that it is only applicable to periodic signals.
Naturally produced signals are often non-periodic (aperiodic), and such signals cannot be
represented using a Fourier series. To overcome this shortcoming, Fourier developed a
mathematical model to transform signals between the time (or spatial) domain and the
frequency domain and vice versa, which is called the 'Fourier transform'.
Fourier transform has many applications in physics and engineering such as analysis
of LTI systems, RADAR, astronomy, signal processing etc.
PURPOSE
Suppose that we are given a signal x(t) that is aperiodic. As a concrete example,
suppose that x(t) is a square pulse, with x(t) = 1 if −T1 ≤ t ≤ T1, and zero elsewhere. Clearly
x(t) is not periodic.
Now define a new signal 𝑥̃(𝑡),which is a periodic extension of x(t) with period T. In
other words, 𝑥̃(𝑡) is obtained by repeating x(t), where each copy is shifted T units in time.
This 𝑥̃(𝑡) has a Fourier series representation, which we found in the last section to be
𝑥̃(𝑡) = Σ_{k=−∞}^{∞} a_k e^(jkω₀t),  where ω₀ = 2π/T
Now recall that the Fourier series coefficients are calculated as follows:
a_k = (1/T) ∫_{−T/2}^{T/2} 𝑥̃(𝑡) e^(−jkω₀t) dt
However, we note that x(t) = 𝑥̃(𝑡) in the interval of integration, and thus
a_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^(−jkω₀t) dt
Furthermore, since x(t) is zero for all t outside the interval of integration, we can
expand the limits of the integral to obtain
a_k = (1/T) ∫_{−∞}^{∞} x(t) e^(−jkω₀t) dt
Let us define
X(jω) = ∫_{−∞}^{∞} x(t) e^(−jωt) dt
This is called the Fourier transform of the signal x(t), and the Fourier series
coefficients can be viewed as samples of the Fourier transform, scaled by 1/T, i.e.,
a_k = (1/T) X(jkω₀)
Since ω₀ = 2π/T, this becomes
a_k = (ω₀/2π) X(jkω₀)
Now consider what happens as the period T gets bigger. In this case, 𝑥̃(𝑡)
approaches x(t), and the synthesis equation becomes
𝑥̃(𝑡) = (1/2π) Σ_{k=−∞}^{∞} X(jkω₀) e^(jkω₀t) ω₀
As T → ∞, we have ω₀ → 0. Since each term in the summand can be viewed as the area of a
rectangle whose height is X(jkω₀) e^(jkω₀t) and whose base goes from kω₀ to (k + 1)ω₀,
we see that as ω₀ → 0, the sum on the right-hand side approaches the area underneath the
curve X(jω) e^(jωt) (where t is held fixed). Thus, as T → ∞, we have
x(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^(jωt) dω
The Fourier transform 𝑋(𝑗𝜔) is also called the spectrum of the signal, as it
represents the contribution of the complex exponential of frequency ω to the signal x(t).
Example 1
Consider the signal x(t) = e^(−at) u(t), a > 0. Evaluate the Fourier transform of this signal.
X(jω) = ∫₀^∞ e^(−at) e^(−jωt) dt = 1/(a + jω)
To visualize X(jω), we plot its magnitude and phase on separate plots (since X(jω)
is complex-valued in general). We have
|X(jω)| = 1/√(a² + ω²),  ∠X(jω) = −tan⁻¹(ω/a)
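The transform pair X(jω) = 1/(a + jω) for x(t) = e^(−at)u(t) can be checked by approximating the Fourier transform integral numerically; a sketch with our own choice of a and ω:

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, 50.0, 500001)     # t >= 0 only, since x(t) = 0 for t < 0
dt = t[1] - t[0]
x = np.exp(-a * t)

def X_numeric(w):
    """Riemann-sum approximation of the Fourier transform integral."""
    return np.sum(x * np.exp(-1j * w * t)) * dt

w = 3.0
print(X_numeric(w))         # ~ 1/(a + jw)
print(1 / (a + 1j * w))     # closed form: 1/(2 + 3j)
```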
Example 2
Find the Fourier transform of the unit impulse x(t) = δ(t).
X(jω) = ∫_{−∞}^{∞} δ(t) e^(−jωt) dt = 1
In other words, the spectrum of the impulse function has an equal contribution at all frequencies.
Example 3
Find the Fourier transform of the signal x(t) which is equal to 1 for −T₁ ≤ t ≤ T₁ and zero elsewhere.
X(jω) = ∫_{−T₁}^{T₁} e^(−jωt) dt = 2 sin(ωT₁)/ω
Example 4
Example 5
The previous two examples showed the following. When x(t) is a square pulse, then
X(jω) = 2 sin(ωT₁)/ω, and when X(jω) is a square pulse (of bandwidth W), then
x(t) = sin(Wt)/(πt).
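The square-pulse pair from Example 3 can be verified numerically by comparing a Riemann-sum approximation of the transform integral against the closed form 2 sin(ωT₁)/ω (T₁ and the test frequencies are our own choices):

```python
import numpy as np

T1 = 1.0
t = np.linspace(-T1, T1, 200001)   # x(t) = 1 on [-T1, T1], zero elsewhere
dt = t[1] - t[0]

def X_numeric(w):
    """Approximate integral of e^{-j w t} over [-T1, T1]."""
    return np.sum(np.exp(-1j * w * t)) * dt

for w in (0.5, 1.0, 2.0):
    closed = 2 * np.sin(w * T1) / w
    print(w, X_numeric(w).real, closed)   # numeric matches 2 sin(w T1)/w
```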
There are a set of sufficient conditions called Dirichlet conditions under which a
continuous-time signal x(t) is guaranteed to have a Fourier transform:
1. x(t) is absolutely integrable: ∫_{−∞}^{∞} |x(t)| dt < ∞
2. x(t) has a finite number of maxima and minima in any finite interval.
3. x(t) has a finite number of discontinuities in any finite interval, and each of these
discontinuities is finite.
Suppose x(t) is a time-domain signal and 𝑋(𝑗𝜔) is its Fourier transform, written 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔).
1. Linearity
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔) 𝑎𝑛𝑑 𝐹[𝑦(𝑡)] = 𝑌(𝑗𝜔), then for any constant a and b
𝐹[𝑎𝑥(𝑡) + 𝑏𝑦(𝑡)] = 𝑎𝑋(𝑗𝜔) + 𝑏𝑌(𝑗𝜔)
Meaning: The Fourier transform of linear combination of signals is equal to their
linear combination of their Fourier transforms.
2. Time Shifting
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔), then for any constant t0,
𝐹[𝑥(𝑡 − 𝑡0 )] = ⅇ −𝑗𝜔𝑡0 𝑋(𝑗𝜔)
Meaning: A shift of t0 in time domain is equivalent to introducing a phase shift of
−𝜔𝑡0 . Amplitude remains the same
3. Frequency shifting
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔), then
𝐹[ⅇ 𝑗𝛽𝑡 𝑥(𝑡)] = 𝑋(𝑗(𝜔 − 𝛽))
Meaning: Multiplying the time domain signal by ⅇ 𝑗𝛽𝑡 is equivalent to shifting the
spectrum by β in the frequency domain.
4. Time scaling
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔), then for any constant a ≠ 0,
𝐹[𝑥(𝑎𝑡)] = (1/|𝑎|) 𝑋(𝑗𝜔/𝑎)
5. Time differentiation
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔),then
𝐹[𝑑𝑥(𝑡)/𝑑𝑡] = 𝑗𝜔𝑋(𝑗𝜔)
Meaning: Differentiation in time domain corresponds to multiplying by 𝑗𝜔 in
frequency domain.
6. Frequency differentiation
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔),then
𝐹[−𝑗𝑡 𝑥(𝑡)] = 𝑑𝑋(𝑗𝜔)/𝑑𝜔
Meaning: Differentiating the frequency spectrum is equivalent to multiplying the
time domain signal by the complex number -jt.
7. Integration
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔),then
𝐹[ ∫_{−∞}^{𝑡} 𝑥(𝜏)𝑑𝜏 ] = (1/𝑗𝜔) 𝑋(𝑗𝜔) + 𝜋𝑋(0)𝛿(𝜔)
Meaning: Integration in time domain represents smoothing in frequency domain.
8. Convolution
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔) 𝑎𝑛𝑑 𝐹[ℎ(𝑡)] = 𝐻(𝑗𝜔), then
𝐹[𝑥(𝑡) ∗ ℎ(𝑡)] = 𝑋(𝑗𝜔)𝐻(𝑗𝜔)
Meaning: Fourier transform of convolution of two signals in time domain is equal to
the product of individual Fourier transforms
9. Modulation
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔) 𝑎𝑛𝑑 𝐹[𝑧(𝑡)] = 𝑍(𝑗𝜔), then
𝐹[𝑥(𝑡)𝑧(𝑡)] = (1/2𝜋) ∫_{−∞}^{∞} 𝑋(𝑗𝜉)𝑍(𝑗(𝜔 − 𝜉))𝑑𝜉 = (1/2𝜋) [𝑋(𝑗𝜔) ∗ 𝑍(𝑗𝜔)]
Meaning: Fourier transform of product of two signals is equal to the convolution of
their individual Fourier transforms.
10. Duality
If 𝐹[𝑥(𝑡)] = 𝑋(𝑗𝜔), then
𝐹[𝑋(𝑡)] = 2𝜋 𝑥(−𝜔)
11. Symmetry
Let x(t) be a real signal with xe(t) and xo(t) as its even and odd part and 𝑋(𝑗𝜔) =
𝑋𝑅 (𝑗𝜔) + 𝑗𝑋𝐼 (𝑗𝜔), then
𝐹[𝑥𝑒 (𝑡)] = 𝑋𝑅 (𝑗𝜔) and 𝐹[𝑥𝑜 (𝑡)] = 𝑗𝑋𝐼 (𝑗𝜔)
We saw that the Fourier transform for continuous-time aperiodic signals can be
obtained by taking the Fourier series of an appropriately defined periodic signal (and letting
the period go to infinity); we will follow an identical argument for discrete-time aperiodic
signals. The differences between the continuous-time and discrete-time Fourier series will
be reflected as differences between the continuous-time and discrete-time Fourier
transforms as well.
𝑥̃[𝑛] = Σ_{k=0}^{N−1} 𝑎_k ⅇ^(𝑗𝑘𝜔₀𝑛), where 𝜔₀ = 2𝜋/𝑁. The Fourier series coefficients are given by
𝑎_k = (1/𝑁) Σ_{n=n₀}^{n₀+N−1} 𝑥̃[𝑛] ⅇ^(−𝑗𝑘𝜔₀𝑛)
where n0 is any integer. Suppose we choose n0 so that the interval [−N1, N2] is contained in
[n0, n0+N−1]. Then since 𝑥̃[𝑛] = x[n] in this interval, we have
𝑎_k = (1/𝑁) Σ_{n=−N₁}^{N₂} 𝑥[𝑛] ⅇ^(−𝑗𝑘𝜔₀𝑛). Defining 𝑋(ⅇ^𝑗𝜔) = Σ_{n=−∞}^{∞} 𝑥[𝑛] ⅇ^(−𝑗𝜔𝑛),
we see that 𝑎_k = (1/𝑁) 𝑋(ⅇ^(𝑗𝑘𝜔₀)), i.e., the discrete-time Fourier series coefficients are
obtained by sampling the discrete-time Fourier transform at periodic intervals of 𝜔₀. Also
note that 𝑋(ⅇ^𝑗𝜔) is periodic in ω with period 2π (since ⅇ^(−𝑗𝜔𝑛) is 2π-periodic in ω).
Once again, we see that each term in the summand represents the area of a
rectangle of width 𝜔₀ obtained from the curve 𝑋(ⅇ^𝑗𝜔)ⅇ^(𝑗𝜔𝑛). As 𝑁 → ∞, we have 𝜔₀ → 0. In
this case, the sum of the areas of the rectangles approaches the integral of the curve
𝑋(ⅇ^𝑗𝜔)ⅇ^(𝑗𝜔𝑛), and since the sum was over only N samples of the function, the integral is only
over one interval of length 2π. Since 𝑥̃[𝑛] approaches x[n] as 𝑁 → ∞, we have
𝑥[𝑛] = (1/2𝜋) ∫_{2𝜋} 𝑋(ⅇ^𝑗𝜔) ⅇ^(𝑗𝜔𝑛) 𝑑𝜔
The main differences between the discrete-time and continuous-time Fourier transforms
are the following.
1) The discrete-time Fourier transform 𝑋(ⅇ 𝑗𝜔 ) is periodic in ω with period 2π, whereas
the continuous-time Fourier transform is not necessarily periodic.
2) The synthesis equation for the discrete-time Fourier transform only involves an
integral over an interval of length 2π, whereas the one for the continuous-time
Fourier transform is over the entire ω axis. Both of these are due to the fact that
ⅇ 𝑗𝜔𝑛 is 2π-periodic in ω, whereas the continuous-time complex exponential is not.
Example 1
If we plot the magnitude of 𝑋(ⅇ 𝑗𝜔 ), we see an illustration of the “high” versus ”low”
frequency effect. Specifically, if a > 0 then the signal x[n] does not have any oscillations and
|𝑋(ⅇ 𝑗𝜔 )| has its highest magnitude around even multiples of π. However, if a < 0, then the
signal x[n] oscillates between positive and negative values at each time-step; this “high-
frequency” behaviour is captured by the fact that |𝑋(ⅇ 𝑗𝜔 )| has its largest magnitude near
odd multiples of π.
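The DTFT discussed in Example 1 can be checked numerically. Assuming, as the discussion suggests, that the signal is x[n] = aⁿu[n] with |a| < 1, its DTFT is 1/(1 − a e^(−jω)); truncating the infinite sum at a large n gives a negligible tail error.

```python
import numpy as np

a = 0.8
n = np.arange(0, 200)        # terms beyond n = 200 are negligible (0.8^200 ~ 1e-19)
x = a ** n

def X(w):
    """Truncated DTFT sum: sum_n x[n] e^{-j w n}."""
    return np.sum(x * np.exp(-1j * w * n))

for w in (0.0, np.pi / 2, np.pi):
    closed = 1.0 / (1.0 - a * np.exp(-1j * w))
    print(w, abs(X(w) - closed))   # ~0: matches 1/(1 - a e^{-jw})
```

Evaluating |X| at ω = 0 and ω = π also confirms the "low frequency" behaviour described above: for a > 0 the magnitude peaks at even multiples of π.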
Example 2
Let x[n] and y[n] be two signals with DTFTs 𝑋(ⅇ 𝑗𝜔 ) and 𝑌(ⅇ 𝑗𝜔 ). In each property
below, the left-hand side is a signal and the right-hand side is its DTFT.
1. Periodicity
The discrete-time Fourier transform is always periodic in ω with period 2π.
2. Linearity of DTFT
5. Time reversal
6. Time expansion
Let us define a signal with k a positive integer,
7. Differentiation in frequency
8. Convolution
If 𝐹[𝑥(𝑛)] = 𝑋(ⅇ 𝑗𝜔 ) 𝑎𝑛𝑑 𝐹[ℎ(𝑛)] = 𝐻(ⅇ 𝑗𝜔 ), then
𝐹[𝑥(𝑛) ∗ ℎ(𝑛)] = 𝑋(ⅇ 𝑗𝜔 )𝐻(ⅇ 𝑗𝜔 )
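The convolution property can be demonstrated with the DFT (a sampled DTFT), provided the sequences are zero-padded so that circular convolution equals linear convolution. The short sequences below are our own example.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
L = len(x) + len(h) - 1        # length needed for linear convolution

# Convolution property: multiply the transforms, then invert
X = np.fft.fft(x, L)
H = np.fft.fft(h, L)
y_freq = np.fft.ifft(X * H).real

# Direct time-domain convolution for comparison
y_time = np.convolve(x, h)

print(y_freq, y_time)   # identical: [ 1.  1.  1. -3.]
```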
UNSOLVED PROBLEMS
1. Compute the complex-form Fourier series coefficients and sketch the magnitude and
phase spectra for
a. the signal x(t) that has fundamental period T₀ = 1, with x(t) = e^(−t), 0 ≤ t ≤ 1.
b. the signal shown below
2. Compute the discrete-time Fourier series coefficients for the signals below and
sketch the magnitude and phase spectra.
a.
b. 𝑥[𝑛] = Σ_{k=−∞}^{∞} 𝛿(𝑛 − 4𝑘 − 1)
3. From the basic definition, compute the Fourier transforms of the signals.
a. 𝑥(𝑡) = ⅇ^(−(𝑡−2)) 𝑢(𝑡 − 3)
b. 𝑥(𝑡) = ⅇ^(−|𝑡+1|)
4. Use the linearity of the DTFT to determine the DTFT of the following sum of two
right-sided exponential signals: x[n] = (0.8)nu[n] + 2(−0.5)nu[n].
5. Find the DTFT of 𝑥[𝑛] = 7𝑢[𝑛 − 1] − 7𝑢[𝑛 − 9]
6. Determine the inverse DTFT of 𝑌(ⅇ 𝑗𝜔 ) = 6cos (3𝜔)
Revision 2021
Semester 5
SIGNALS &
SYSTEMS
MODULE 4 NOTES
Contents:
Need of Laplace transform
Region of Convergence (ROC)
Advantages and limitation of Laplace transform
Laplace transform of some commonly used signals - impulse, step, ramp, parabolic,
exponential, sine and cosine functions
Properties of Laplace transform: Linearity, time shifting, time scaling, time reversal,
transform of derivatives and integrals, initial value theorem, final value theorem.
Inverse Laplace transform: simple problems (no derivation required)
LAPLACE TRANSFORM
The Laplace transform is named after Pierre Simon De Laplace, a famous French mathematician
(1749-1827) who formulated a transformation that can convert one signal into another using a set of
laws or equations.
The Laplace transformation is the most effective method for converting differential equations
to algebraic equations. In electronics engineering, the Laplace transformation is very important to
solve problems related to signal and system, digital signal processing, and control system. In studying
the dynamic control system, the characteristics of the Laplace transform and the inverse Laplace
transformation are both employed.
A piecewise continuous function is a function that has a finite number of breaks, and this
consistency remains till the function reaches infinity. The Laplace transform of such a
function f(t) is defined as
F(s) = ∫₀^∞ f(t) e^(−st) dt
provided that the integral exists, where the Laplace operator s = σ + jω will be real or complex,
with j = √(−1).
The Laplace transform will change the differential equation into an easy-to-solve algebraic
function. This transform converts any signal into the frequency domain 's', where the complexity of the
problem reduces. Whenever you encounter any function written inside the capital letter L, instantly
identify that it is the Laplace transform of a function. It can also be represented as the capital letter of
the function related to frequency 's'. For instance, F (s), A (s), G (s), X (s), etc.
Laplace transform is formulated as the integration of the product of the function with e -st , with
limits ranging from 0 to infinity.
There are certain steps which need to be followed in order to do a Laplace transform of a time
function. In order to transform a given function of time f(t) into its corresponding Laplace transform,
we have to follow the following steps:
The time function f(t) is obtained back from the Laplace transform by a process called inverse
Laplace transformation and denoted by £-1
The Laplace transform is performed on a number of functions, which are – impulse, unit
impulse, step, unit step, shifted unit step, ramp, exponential decay, sine, cosine, hyperbolic sine,
hyperbolic cosine, natural logarithm, Bessel function. But the greatest advantage of applying the
Laplace transform is solving higher order differential equations easily by converting into algebraic
equations.
One-sided Laplace transformation: The Laplace transformation with limits 0 to infinity is known as
one-sided. This is also known as the unilateral Laplace transformation:
F(s) = ∫₀^∞ f(t) e^(−st) dt
Two-sided Laplace transformation: The Laplace transformation with limits ranging from −infinity to
+infinity is considered two-sided. This transformation is also known as the bilateral
Laplace transformation:
F(s) = ∫_{−∞}^{∞} f(t) e^(−st) dt
The Laplace transform has no physical significance except that it transforms the time domain signal
to a complex frequency domain. It is useful to simplify mathematical computations and it can be
used for the easy analysis of signals and systems.
Transforms are not necessary to deal with the signal and system, but
❖ They make signal formation easy and convenient.
❖ These are the best ways to deal with the system.
❖ Computation and analysis have become interesting and convenient.
REGION OF CONVERGENCE
Region of Convergence (ROC) is defined as the set of points in s-plane for which the Laplace
transform of a function x(t) converges. In other words, the range of Re(s) (i.e. σ) for which the
function X(s)converges is called the region of convergence.
For a right-sided signal x(t), the ROC of the Laplace transform X(s) is Re(s)>σ1, where σ1 is a
constant. Thus, the ROC of the Laplace transform of the right-sided signal is to the right of the line σ=σ1.
A causal signal is an example of a right-sided signal.
For a left-sided signal x(t), the ROC of the Laplace transform X(s) is Re(s)<σ2, where σ2 is a
constant. Therefore, the ROC of the Laplace transform of a left-sided signal is to the left of the line σ =
σ2. An anti-causal signal is an example of a left-sided signal.
The range variation of σ for which the Laplace transform converges is called region of convergence.
❖ If x(t) is absolutely integral and it is of finite duration, then ROC is entire s-plane.
❖ If x(t) is a right sided sequence then ROC: Re{s} > σo.
❖ If x(t) is a left sided sequence then ROC: Re{s} < σo.
❖ If x(t) is a two-sided sequence then ROC is the combination of two regions.
Example 1:
Find the Laplace transform and ROC of 𝑥(𝑡) = ⅇ −𝑎𝑡 𝑢(𝑡)
𝐿[𝑥(𝑡)] = 𝐿[ⅇ^(−𝑎𝑡) 𝑢(𝑡)] = ∫₀^∞ ⅇ^(−𝑎𝑡) ⅇ^(−𝑠𝑡) 𝑑𝑡 = 1/(𝑠 + 𝑎)
ROC: Re{s} > −a
Example 2:
Find the Laplace transform and ROC of 𝑥(𝑡) = ⅇ 𝑎𝑡 𝑢(−𝑡).
𝐿[𝑥(𝑡)] = 𝐿[ⅇ^(𝑎𝑡) 𝑢(−𝑡)] = −1/(𝑠 − 𝑎)
ROC: Re{s} < a
Example 3:
Find the Laplace transform and ROC of 𝑥(𝑡) = ⅇ −𝑎𝑡 𝑢(𝑡) + ⅇ 𝑎𝑡 𝑢(−𝑡)
𝐿[𝑥(𝑡)] = 𝐿[ⅇ^(−𝑎𝑡) 𝑢(𝑡) + ⅇ^(𝑎𝑡) 𝑢(−𝑡)] = 1/(𝑠 + 𝑎) − 1/(𝑠 − 𝑎)
For 1/(s + a): Re{s} > −a
For −1/(s − a): Re{s} < a
ROC: −a < Re{s} < a
❖ A system is said to be stable when all poles of its transfer function lie in the left half of the s-plane.
❖ A system is said to be unstable when at least one pole of its transfer function lies in the
right half of the s-plane.
❖ A system is said to be marginally stable when at least one pole of its transfer function lies on the
jω axis of the s-plane (and no pole lies in the right half).
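These three cases can be checked mechanically by finding the roots of the transfer function's denominator polynomial; a minimal sketch (the helper name `classify` and the tolerance are our own choices):

```python
import numpy as np

def classify(den_coeffs, tol=1e-9):
    """Classify stability from the denominator polynomial of H(s).

    den_coeffs: coefficients, highest power of s first.
    """
    poles = np.roots(den_coeffs)
    if np.all(poles.real < -tol):
        return "stable"                 # all poles in the left half-plane
    if np.any(poles.real > tol):
        return "unstable"               # at least one pole in the right half-plane
    return "marginally stable"          # pole(s) on the jw axis, none to the right

print(classify([1, 3, 2]))    # poles at -1, -2          -> stable
print(classify([1, 0, 4]))    # poles at +-2j (jw axis)  -> marginally stable
print(classify([1, -1]))      # pole at +1               -> unstable
```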
Now we apply the shifting property of the impulse. Since the impulse is 0 everywhere but
t=0, we can change the upper limit of the integral to 0+.
Since e-st is continuous at t=0, that is the same as saying it is constant from t=0- to t=0+. So,
we can replace e-st by its value evaluated at t=0.
So, the Laplace Transform of the unit impulse is just one. Therefore, the impulse function,
which is difficult to handle in the time domain, becomes easy to handle in the Laplace
domain. It will turn out that the unit impulse will be important to much of what we do.
3. The Ramp
So far (with the exception of the impulse), all the functions have been closely related to the
exponential. It is also possible to find the Laplace Transform of other functions. For example,
the ramp function:
y(t) = t, t >0;
= 0, elsewhere
4. The Parabolic
The parabolic function is y(t) = t²/2 for t > 0, and 0 elsewhere. Integrating by parts,
Y(s) = ∫₀^∞ (t²/2) e^(−st) dt = (1/s) ∫₀^∞ t e^(−st) dt = (1/s)(1/s²) = 1/s³
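The ramp and parabolic transforms can be sanity-checked by evaluating the defining integral numerically at a single value of s (s = 2 below is our own choice; the infinite upper limit is truncated at t = 40, where e^(−st) is negligible):

```python
import numpy as np

s = 2.0
t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]

# L{t} at s: should be 1/s^2 = 0.25
ramp = np.sum(t * np.exp(-s * t)) * dt

# L{t^2/2} at s: should be 1/s^3 = 0.125
parab = np.sum(0.5 * t**2 * np.exp(-s * t)) * dt

print(ramp, 1 / s**2)
print(parab, 1 / s**3)
```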
5. The Exponential
Consider the causal (i.e., defined only for t>0) exponential:
Since γ(t) is equal to one for all positive t, we can remove it from the integral
6. The Sine
so
But we've already done this integral (the exponential function, above)
7.The Cosine
The cosine can be found in much the same way, but using Euler's identity for the cosine.
2. Time Shifting
3. Time Scaling:
4. Shift in S-domain
5. Time-reversal
6. Differentiation in S-domain
We will use the differentiation property of the Laplace Transform widely. It is repeated
below (for first, second and nth order derivatives).
7. Integration
The integration theorem states that
8. Convolution in Time
The initial value theorem is applied when the degree of the numerator of the Laplace
transform is less than the degree of the denominator.
2. Time Delay
The time delay property is not much harder to prove, but there are some subtleties involved
in understanding how to apply it. We'll start with the statement of the property, followed by
the proof, and then followed by some examples. The time shift property states
The correct one is exactly like the original function but shifted.
Important: To apply the time delay property you must multiply a delayed version of your
function by a delayed step. If the original function is g(t)·γ(t), then the shifted function is
g(t−td)·γ(t−td), where td is the time delay.
3. First Derivative
The first derivative property of the Laplace Transform states
To prove this we start with the definition of the Laplace Transform and integrate by parts
The first term in the brackets goes to zero (as long as f(t) doesn't grow faster than an
exponential which was a condition for existence of the transform). In the next term, the
exponential goes to one. The last term is simply the definition of the Laplace Transform
multiplied by s. So the theorem is proved.
• We have taken a derivative in the time domain, and turned it into an algebraic equation in the
Laplace domain. This means that we can take differential equations in time, and turn them into
algebraic equations in the Laplace domain. We can solve the algebraic equations, and then
convert back into the time domain (this is called the Inverse Laplace Transform, and is
described later).
• The initial conditions are taken at t=0-. This means that we only need to know the initial
conditions before our input starts. This is often much easier than finding them at t=0+.
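The first derivative property L{f′(t)} = sF(s) − f(0⁻) can be checked numerically; a sketch using f(t) = e^(−t) (our own choice), for which F(s) = 1/(s+1), f′(t) = −e^(−t) and f(0) = 1:

```python
import numpy as np

s = 3.0
t = np.linspace(0.0, 40.0, 400001)
dt = t[1] - t[0]

f = np.exp(-t)
fprime = -np.exp(-t)           # derivative of f

# Numerical Laplace transform of f'(t) at s
L_fprime = np.sum(fprime * np.exp(-s * t)) * dt

# Derivative property: s*F(s) - f(0), with F(s) = 1/(s+1)
predicted = s * (1 / (s + 1)) - 1.0

print(L_fprime, predicted)     # both ~ -0.25
```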
Second Derivative
Similarly for the second derivative we can show:
where
or
where
4. Integration
The integration theorem states that
The first term in the parentheses goes to zero if f(t) grows more slowly than an exponential
(one of our requirements for existence of the Laplace Transform), and the second term goes to
zero because the limits on the integral are equal. So the theorem is proven
Example: Find Laplace Transform of Step and Ramp using Integration Property
Given that the Laplace Transform of the impulse δ(t) is Δ(s)=1, find the Laplace Transform of the
step and ramp.
Solution:
We know that
so that
Likewise:
5. Convolution
The convolution theorem states (if you haven't studied convolution, you can skip this
theorem)
We then invoke the definition of the Laplace Transform, and split the integral into two parts:
Several simplifications are in order. In the left hand expression, we can take the second term
out of the limit, since it doesn't depend on 's.' In the right hand expression, we can take the
first term out of the limit for the same reason, and if we substitute infinity for 's' in the second
term, the exponential term goes to zero:
The two f(0-) terms cancel each other, and we are left with the Initial Value Theorem
This theorem only works if F(s) is a strictly proper fraction in which the numerator
polynomial is of lower order than the denominator polynomial. In other words, it will work for
F(s) = 1/(s+1) but not for F(s) = s/(s+1).
However, we can only use the final value if the value exists (function like sine, cosine and the
ramp function don't have final values). To prove the final value theorem, we start as we did for
the initial value theorem, with the Laplace Transform of the derivative,
We let s→0,
As s→0 the exponential term disappears from the integral. Also, we can take f(0-) out of the
limit (since it doesn't depend on s)
Neither term on the left depends on s, so we can remove the limit and simplify, resulting in the
final value theorem
Examples of functions for which this theorem can't be used are increasing exponentials
(like eat where a is a positive number) that go to infinity as t increases, and oscillating functions
like sine and cosine that don't have a final value.
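The final value theorem, f(∞) = lim_{s→0} sF(s), can be illustrated with a transform whose time function is known; F(s) = 1/(s(s+1)), corresponding to f(t) = 1 − e^(−t), is our own example:

```python
import numpy as np

def F(s):
    """F(s) = 1/(s(s+1)), the transform of f(t) = 1 - e^{-t}."""
    return 1.0 / (s * (s + 1.0))

# Final value theorem: evaluate s*F(s) at a very small s
s_small = 1e-8
final_value = s_small * F(s_small)

# Compare against f(t) at a large t
f_large_t = 1.0 - np.exp(-20.0)

print(final_value, f_large_t)   # both ~1
```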
Definition
Linearity
First Derivative
Second Derivative
nth Derivative
Integration
Multiplication by time
Time Shift
Complex Shift
Time Scaling
Convolution
('*' denotes convolution
of functions)
Initial Value Theorem
(if F(s) is a strictly proper
fraction)
Final Value Theorem
(if final value exists,
e.g., decaying exponentials )
• Inversion formula
• Use of tables of Laplace transform pairs
• Partial fraction expansion method
Inversion formula
This is a procedure that applies to all classes of transform function and involves the evaluation of
a line integral in the complex s-plane:
f(t) = (1/2πj) ∫_{C−j∞}^{C+j∞} X(s) e^(st) ds
In the integral, the real constant C is to be selected such that σ1 < C < σ2 when the ROC of X(s) is σ1 < Re{s} < σ2.
Case a: When X(s) is a proper rational function (m < n) and all the poles of X(s) are simple or
distinct, X(s) can be written as
X(s) = C₁/(s − p₁) + C₂/(s − p₂) + ... + Cₙ/(s − pₙ)
where each coefficient is found as Cᵢ = (s − pᵢ) X(s) |_{s=pᵢ}.
Case b: When X(s) has a repeated pole pᵢ of multiplicity r, the corresponding terms are
X(s) = d₁/(s − pᵢ) + d₂/(s − pᵢ)² + ... + d_r/(s − pᵢ)^r
where d_{r−k} = (1/k!) (d^k/ds^k) {(s − pᵢ)^r X(s)} |_{s=pᵢ}
Case c: When X(s) is an improper rational function (m ≥ n), we first divide to obtain
X(s) = N(s)/D(s) = Q(s) + R(s)/D(s)
where
N(s) is the numerator polynomial in s of X(s),
D(s) is the denominator polynomial in s of X(s),
R(s) is the remainder polynomial in s with degree less than n, and
Q(s) is the quotient polynomial in s with degree (m − n).
Then inverse Laplace transform is computed.
Problems
1. Find the inverse Laplace transform using partial fraction expansion method
a. X(s) = 1/((s + 1)(s + 2))
X(s) = 1/((s + 1)(s + 2)) = A/(s + 1) + B/(s + 2)
1 = A(s + 2) + B(s + 1)
Setting s = −1 gives A = 1; setting s = −2 gives B = −1. Hence
X(s) = 1/(s + 1) − 1/(s + 2)  and  x(t) = (e^(−t) − e^(−2t)) u(t)
c. X(s) = 1/(s(s + 1)(s + 2))
X(s) = 1/(s(s + 1)(s + 2)) = A/s + B/(s + 1) + C/(s + 2)
1 = A(s + 2)(s + 1) + Bs(s + 2) + Cs(s + 1)
Setting s = 0, −1, −2 in turn gives A = 1/2, B = −1, C = 1/2. Hence
X(s) = (1/2)/s − 1/(s + 1) + (1/2)/(s + 2)
x(t) = [1/2 − e^(−t) + (1/2) e^(−2t)] u(t)
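For simple poles, the partial fraction coefficients can also be computed by the cover-up (residue) rule, Cᵢ = N(pᵢ)/∏_{j≠i}(pᵢ − pⱼ); a sketch for the X(s) = 1/(s(s+1)(s+2)) example above:

```python
import numpy as np

# Poles of X(s) = 1 / (s (s+1) (s+2)); numerator N(s) = 1
poles = np.array([0.0, -1.0, -2.0])

def residue(i):
    """Cover-up rule: C_i = 1 / product of (p_i - p_j) over the other poles."""
    others = np.delete(poles, i)
    return 1.0 / np.prod(poles[i] - others)

C = [residue(i) for i in range(len(poles))]
print(C)   # [0.5, -1.0, 0.5] -> X(s) = 0.5/s - 1/(s+1) + 0.5/(s+2)
```

These match the coefficients A = 1/2, B = −1, C = 1/2 found by substitution.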
UNSOLVED PROBLEMS
1. Find the Laplace transform of
a. 2 − 2ⅇ 𝑡 + 0.5 sin (4𝑡)
b. ⅇ −𝑡 sin (5𝑡)
c. ⅇ 2𝑡 + 2ⅇ −2𝑡 − 𝑡 2
2. Determine the inverse Laplace transform of
a. (s − 1) / (s(s + 1))
b. (s³ + 1) / (s(s + 1)(s + 2))
c. (s − 1) / ((s + 1)(s² + 2s + 5))
3. Find the initial value of the continuous time signal whose Laplace transform is X(s) = (2s + 1)/(s² − 1)
2021 Curriculum