Signals and Systems
A Course Manual
July, 2015
Contents
1 Introduction
1.1 Signals
1.2 Classification of Signals
1.2.1 Continuous-Time and Discrete-Time Signals
1.2.2 Even and Odd Signals
1.2.3 Periodic and Nonperiodic Signals
1.2.4 Deterministic and Random Signals
1.2.5 Real and Complex Signals
1.2.6 Energy and Power Signals
1.2.7 Analog and Digital Signals
1.2.8 Causal, Anti-Causal, and Non-Causal Signals
1.3 Transformation of the Independent Variable
1.3.1 Time Shifting
1.3.2 Time Scaling
1.3.3 Time Reversal
1.3.4 Precedence Rule for Time Shifting and Time Scaling
1.4 Elementary Signals
1.4.1 Sinusoidal Signals
1.4.2 Exponential Signals
1.4.3 Unit Impulse Function
1.4.4 Unit Step Function
1.4.5 Ramp Function
1.4.6 Relationship Between Unit Impulse Function, Unit Step Function, and Ramp Function
1.5 Some Other Useful Signals
1.5.1 Signum Function
1.5.2 Rectangular Pulse or Gate Function
1.5.3 Sinc Function
1.6 Systems
Introduction
1.1 Signals
A signal is defined as a function of one or more independent variables that con-
tains information about the nature of some phenomenon. Voltages and currents as
a function of time in an electrical circuit are examples of signals. When the function
depends on a single variable, the signal is said to be one dimensional. A speech
signal is an example of a one-dimensional signal whose amplitude varies with time.
When the function depends on two or more variables, the signal is said to be multi-
dimensional. An image is an example of a two-dimensional signal whose brightness
depends on the x- and y-coordinates. Other examples of signals are music, video, and text. Throughout this course, we will study signals with time t as the independent variable.
Figure 1.1: Example of (a) continuous-time signal and (b) discrete-time signal.
If the sequence does not have an arrow, the first element corresponds to the value
at n = 0.
A signal x(t) or x[n] is said to be even if
x(−t) = x(t)
x[−n] = x[n],
and odd if
x(−t) = −x(t)
x[−n] = −x[n].
Any signal can be decomposed into even and odd parts. That is,
x(t) = x_e(t) + x_o(t)  (1.1)
or,
x(−t) = x_e(t) − x_o(t).  (1.2)
Solving Equations (1.1) and (1.2), we get
x_e(t) = (1/2)[x(t) + x(−t)]  (1.3)
and
x_o(t) = (1/2)[x(t) − x(−t)].  (1.4)
Similarly, for a discrete-time signal x[n], we have
x_e[n] = (1/2)(x[n] + x[−n])  (1.5)
and
x_o[n] = (1/2)(x[n] − x[−n]).  (1.6)
Note that the product of two even signals or of two odd signals is an even signal
and the product of an even signal and an odd signal is an odd signal.
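As a quick numerical check of Equations (1.5) and (1.6), the following sketch decomposes a sample sequence into its even and odd parts (the signal values are illustrative, not from the text):

```python
import numpy as np

# Even-odd decomposition of a discrete-time signal per Eqs. (1.5)-(1.6).
# The signal is defined on symmetric indices n = -3..3 so x[-n] is a simple flip.
n = np.arange(-3, 4)
x = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0])  # illustrative values

x_flip = x[::-1]              # x[-n]
xe = 0.5 * (x + x_flip)       # even part, Eq. (1.5)
xo = 0.5 * (x - x_flip)       # odd part, Eq. (1.6)

print(np.allclose(xe + xo, x))        # parts recombine to x[n] -> True
print(np.allclose(xe, xe[::-1]))      # xe[-n] = xe[n] -> True
print(np.allclose(xo, -xo[::-1]))     # xo[-n] = -xo[n] -> True
```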
A continuous-time signal x(t) is said to be periodic with period T if
x(t + T) = x(t)
for all t. It then follows that
x(t + mT) = x(t),
where m is an integer. Figure 1.4 is an example of a continuous-time periodic signal.
The fundamental period T0 of x(t) is the smallest positive value of T for which
x(t + T) = x(t) holds. A signal that does not satisfy this condition is called a nonperiodic or aperiodic signal.
For discrete time, a signal or sequence x[n] is periodic with period N if there is
a positive integer N for which
x[n + N ] = x[n]
p(t) = v(t)i(t) = i²(t)R = v²(t)/R.  (1.7)
If R = 1 Ω, then the instantaneous power is p(t) = x²(t), where x(t) denotes either the current or the voltage. The total energy of the signal is then
E = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt = ∫_{−∞}^{+∞} x²(t) dt.  (1.10)
In general, the following equation is used to calculate the total energy of both real and complex signals:
E = ∫_{−∞}^{+∞} |x(t)|² dt.  (1.11)
We can determine the average power of a periodic signal x(t) of fundamental period T by
P = (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt.  (1.13)
A signal is referred to as an energy signal if and only if its total energy satisfies
0 < E < ∞.
The signal is referred to as a power signal if and only if the average power of the signal satisfies
0 < P < ∞.
A signal satisfying neither of the above conditions is referred to as neither an energy nor a power signal.
Note that an energy signal has zero average power and a power signal has infinite
energy. Periodic signals are usually viewed as power signals, whereas nonperiodic
signals are usually viewed as energy signals.
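These definitions can be illustrated numerically. The sketch below (with arbitrarily chosen signals) approximates Equation (1.11) for a decaying exponential, which has finite energy, and Equation (1.13) for a sinusoid, which has finite average power:

```python
import numpy as np

# Energy of x(t) = e^{-|t|}: E = ∫ e^{-2|t|} dt = 1 (an energy signal).
t = np.linspace(-50, 50, 2_000_001)
dt = t[1] - t[0]
E = np.sum(np.exp(-np.abs(t)) ** 2) * dt   # Riemann-sum approximation of Eq. (1.11)

# Average power of cos(t) over one period T = 2π: P = 1/2 (a power signal).
T = 2 * np.pi
tp = np.arange(0, T, 1e-5)
P = np.sum(np.cos(tp) ** 2) * 1e-5 / T     # approximation of Eq. (1.13)

print(round(E, 3))   # 1.0
print(round(P, 3))   # 0.5
```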
Note that the discrete-time signal obtained by sampling an analog signal is not necessarily digital. A digital signal is obtained by quantizing the sampled discrete-time signal to a finite number of amplitude levels and then encoding those levels.
Signals that appear at both positive and negative time instants are called non-causal signals. Figures 1.7(a), 1.7(b), and 1.8 show examples of a causal signal, an anti-causal signal, and a non-causal signal, respectively.
Figure 1.9: (a) Original signal x(t). (b) Time-shifted signal x(t − t0 ) for t0 = 2. (c)
Time-shifted signal x(t − t0 ) for t0 = −2.
Let us consider x(t) to be an audio tape recording, then x(2t) is the recording
played at twice the speed and x(t/2) is the recording played at half-speed.
Figure 1.10: (a) Original signal x(t). (b) Time-scaled signal x(αt) for α = 2. (c)
Time-scaled signal x(αt) for α = 1/2.
Considering the signal x(t) to be an audio tape recording, x(−t) represents the
same tape recording played backward.
Figure 1.11: (a) Original signal x(t). (b) Time-reversed version x(−t).
y(t) = x(αt − t0 )
then, to obtain y(t) from x(t), the time-shifting and time-scaling operations must be
performed in the correct order. We know that the scaling operation replaces t by
αt and time-shifting operation replaces t by t − t0 . So, the time-shifting operation
is performed first on x(t), resulting in an intermediate signal x(t − t0 ) and then
the time-scaling operation is performed on the intermediate signal resulting in the
desired output
y(t) = x(αt − t0 ).
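The precedence rule can be checked numerically. In this sketch (the triangular test signal is an arbitrary choice), shifting first and then scaling reproduces x(αt − t0), while the reverse order yields x(α(t − t0)) instead:

```python
import numpy as np

def x(t):
    # illustrative signal: a triangular pulse centered at t = 0
    return np.maximum(0.0, 1.0 - np.abs(t))

alpha, t0 = 2.0, 1.0
t = np.linspace(-5.0, 5.0, 1001)
y = x(alpha * t - t0)                      # target: y(t) = x(αt − t0)

v = lambda s: x(s - t0)                    # step 1: time shift -> x(t − t0)
shift_then_scale = v(alpha * t)            # step 2: time scale -> x(αt − t0)

w = lambda s: x(alpha * s)                 # wrong order: scale first -> x(αt)
scale_then_shift = w(t - t0)               # then shift -> x(α(t − t0)) = x(αt − αt0)

print(np.allclose(shift_then_scale, y))    # True
print(np.allclose(scale_then_shift, y))    # False (operations do not commute)
```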
A continuous-time sinusoidal signal has the form
x(t) = A cos(ωt + φ),
where A is the amplitude, ω is the angular frequency in radians per second, and φ is the phase angle in radians. A sinusoidal signal is an example of a periodic signal with period
T = 2π/ω.
A discrete-time sinusoidal signal is periodic with period N if
x[n + N] = x[n].
That is,
ωN = 2πm,
or,
ω = 2πm/N,
where m is an integer. That means, for a discrete-time sinusoidal signal to be periodic, the angular frequency ω must be a rational multiple of 2π.
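This condition can be verified by direct computation; in the sketch below, ω = 2π(3/10) gives a sequence with period N = 10, while ω = 1 rad/sample (an irrational multiple of 2π) never repeats exactly:

```python
import numpy as np

n = np.arange(0, 200)

# ω = 2π·(3/10) is a rational multiple of 2π -> periodic with N = 10
x_per = np.cos(2 * np.pi * 3 / 10 * n)
periodic = np.allclose(x_per[:100], x_per[10:110])   # x[n + 10] == x[n]

# ω = 1 rad/sample: ω/(2π) is irrational -> no integer period exists
x_ap = np.cos(1.0 * n)
repeats = any(np.allclose(x_ap[:100], x_ap[N:N + 100]) for N in range(1, 100))

print(periodic)   # True
print(repeats)    # False
```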
x(t) = Ce^{at},
where C and a are real numbers. Depending on the sign of a, we have two types of real exponential signals: an exponentially growing signal for a > 0 and an exponentially decaying signal for a < 0.
Figure 1.13: (a) Exponentially growing signal. (b) Exponentially decaying signal.
x[n] = Cα^n,
where C and α are real numbers. If |α| > 1, the magnitude of the signal grows exponentially with n, whereas the signal decays exponentially for |α| < 1. Moreover, if α is positive, all the values of x[n] are of the same sign, whereas if α is negative, the sign of x[n] alternates.
x(t) = e^{jω0 t},
where ω0 is the fundamental frequency, and the fundamental period is
T0 = 2π/ω0.
By using Euler's relation, we have
e^{jω0 t} = cos(ω0 t) + j sin(ω0 t).
Similarly, in discrete time,
x[n] = e^{jω0 n},
and Euler's relation gives
e^{jω0 n} = cos(ω0 n) + j sin(ω0 n).
For a general complex exponential signal x(t) = Ce^{at}, let
C = |C|e^{jθ}
and
a = r + jω0.
So, the general complex exponential signal has the form
x(t) = |C|e^{rt} e^{j(ω0 t + θ)} = |C|e^{rt} cos(ω0 t + θ) + j|C|e^{rt} sin(ω0 t + θ).  (1.21)
Depending upon the value of r in Equation (1.21), we have the following three cases:
For r = 0, the real and imaginary parts of a complex exponential are sinusoidal.
For r > 0, both the real and imaginary parts are growing sinusoids.
For r < 0, both the real and imaginary parts are decaying sinusoids.
A growing sinusoid and a decaying sinusoid are shown in Figures 1.14(a) and 1.14(b), respectively.
Figure 1.14: (a) Growing sinusoidal signal. (b) Decaying sinusoidal signal.
For a discrete-time complex exponential x[n] = Cα^n, let
C = |C|e^{jθ}
and
α = |α|e^{jω0}.
So,
x[n] = |C|e^{jθ} |α|^n e^{jω0 n} = |C||α|^n e^{j(ω0 n + θ)} = |C||α|^n cos(ω0 n + θ) + j|C||α|^n sin(ω0 n + θ).  (1.23)
Therefore, from Equation (1.23), we see the following cases:
For |α| = 1, the real and imaginary parts of a complex exponential signal are sinusoidal.
For |α| > 1, the real and imaginary parts are growing sinusoidal sequences.
For |α| < 1, the real and imaginary parts are decaying sinusoidal sequences.
The discrete-time unit impulse function can be used to sample the value of a signal at n = 0. That is,
x[n]δ[n] = x[0]δ[n].
Moreover, if we consider a unit impulse δ[n − n0] located at n = n0, then
x[n]δ[n − n0] = x[n0]δ[n − n0].
The continuous-time unit impulse δ(t) is defined by δ(t) = 0 for t ≠ 0, together with
∫_{−∞}^{+∞} δ(t) dt = 1.  (1.26)
From these equations, we can say that the impulse δ(t) is zero everywhere except
at the origin and the total area under the unit impulse is unity. The impulse δ(t) is
also referred to as the Dirac delta function. The graphical representation is shown
in the Figure 1.16(a).
Figure 1.16: (a) Continuous-time unit impulse function. (b) Rectangular pulse of
unit area.
The continuous-time unit impulse δ(t) is viewed as the limiting form of a rectan-
gular pulse of unit area as shown in Figure 1.16(b). As the duration of the pulse is
decreased, its amplitude is increased such that the area of the pulse remains unity.
So, for infinitesimally small duration, the rectangular pulse approximates the impulse more closely. That is,
δ(t) = lim_{Δ→0} δ_Δ(t),
where δ_Δ(t) denotes the rectangular pulse of duration Δ and amplitude 1/Δ. The impulse function is an even function of time; that is,
δ(−t) = δ(t).
3. The impulse function δ(t) has the time-scaling property defined by
δ(at) = (1/a) δ(t), a > 0.
The continuous-time ramp function is defined as
r(t) = t for t ≥ 0, and 0 for t < 0.
The discrete-time ramp function is defined as
r[n] = n for n ≥ 0, and 0 for n < 0.
The continuous-time and discrete-time ramp functions of unit slope are shown in Figures 1.19(a) and 1.19(b), respectively.
Figure 1.19: (a) Continuous-time ramp function. (b) Discrete-time ramp function.
Figure 1.20: Running sum of Equation (1.27): (a) n < 0; (b) n > 0 .
On the other hand, the discrete-time unit impulse is the first difference of the discrete-time unit step. That is,
δ[n] = u[n] − u[n − 1].
Figure 1.21: Running integral of Equation (1.28): (a) t < 0; (b) t > 0 .
On the other hand, the continuous-time unit impulse is the first derivative of the continuous-time unit step. That is,
δ(t) = du(t)/dt.
The integral of the unit step function u(t) is a ramp function of unit slope. That is,
r(t) = ∫_{−∞}^{t} u(τ) dτ.
Conversely, the continuous-time unit step function is the first derivative of the continuous-time unit ramp function. That is,
u(t) = dr(t)/dt.
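The discrete-time counterparts of these relations are easy to verify with running sums and first differences (a minimal numerical sketch):

```python
import numpy as np

n = np.arange(-5, 6)
delta = (n == 0).astype(float)              # unit impulse δ[n]
u = (n >= 0).astype(float)                  # unit step u[n]
r = np.where(n >= 0, n, 0).astype(float)    # unit ramp r[n]
u_prev = (n - 1 >= 0).astype(float)         # u[n − 1]

print(np.array_equal(delta, u - u_prev))    # δ[n] = u[n] − u[n−1] -> True
print(np.array_equal(u, np.cumsum(delta)))  # u[n] = running sum of δ[n] -> True
print(np.array_equal(r, np.cumsum(u_prev))) # r[n] = running sum of u[n−1] -> True
```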
rect(t/T) = 1 for |t| < T/2, and 0 for |t| > T/2,
1.6 Systems
A system can be defined as an interconnection of components, devices, or subsystems that processes particular input signals to produce desired output signals. Examples include amplifiers and filters.
A system is said to be memoryless if its output at any time depends only on the input at that same time. A resistor is an example: taking the current as input x(t) and the voltage as output y(t),
y(t) = Rx(t),
where R is the resistance. Another example is the identity system; i.e., y[n] = x[n].
On the other hand, a system is said to be with memory if its output signal
depends on the past or future values of the input signal. A capacitor is an example
of a continuous-time system with memory, since the voltage (taken as output y(t))
across it at any time depends on the past values of the current (taken as input x(t)).
That is,
y(t) = (1/C) ∫_{−∞}^{t} x(τ) dτ,
where C is the capacitance. An example of a discrete-time system with memory is an accumulator or summer:
y[n] = Σ_{k=−∞}^{n} x[k].
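The accumulator's memory shows up directly in simulation: two inputs that agree at the present index can produce different outputs there, because the output also depends on the accumulated past (the input values are arbitrary):

```python
import numpy as np

x1 = np.array([1.0, 0.0, 0.0, 5.0])   # differs from x2 only in the past
x2 = np.array([0.0, 0.0, 0.0, 5.0])

y1 = np.cumsum(x1)    # accumulator: y[n] = sum of x[k] for k <= n
y2 = np.cumsum(x2)

print(x1[3] == x2[3])   # present inputs agree at n = 3 -> True
print(y1[3], y2[3])     # 6.0 5.0: outputs differ, so the system has memory
```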
1.8.2 Invertibility
A system is said to be invertible if the input of the system can be recovered from the output. For an invertible system, there exists an inverse system which, when cascaded with the original system, produces an output identical to the input of the original system, as illustrated in Figure 1.30. The following systems are not invertible:
1. y[n] = 0: This system always produces zero output for any input signal.
2. y(t) = x²(t): The output signal is the same whether the input signal is positive or negative.
1.8.3 Causality
A system is said to be causal if the present value of the output signal depends only on the present and/or past values of the input signal. If the present value of the output signal also depends on future values of the input signal, then the system is referred to as a non-causal system.
All real-time systems and memoryless systems are causal, e.g., the resistor, capacitor, inductor, delay, and accumulator. The moving-average system described by
y[n] = (1/3)(x[n + 1] + x[n] + x[n − 1])
is non-causal, since the output signal y[n] depends also on the future value of the
input signal, i.e., x[n + 1].
1.8.4 Stability
A system is said to be bounded-input, bounded-output (BIBO) stable if the output is bounded for every bounded input. That means the output of such a system does not diverge or grow without bound as long as the input does not diverge. For example, the system
y(t) = e^{x(t)}
is BIBO stable, whereas
y(t) = t x(t)
is BIBO unstable.
A system is said to be time invariant if a time shift in the input signal results in an identical time shift in the output signal. That is, if
x(t) → y(t),
then
x(t − t0) → y(t − t0).
Example:
Determine whether the system y(t) = sin[x(t)] is time variant or time invariant.
Solution:
Let x1(t) be an input signal to the given system, with output y1(t) = sin[x1(t)]. Consider a second input
x2(t) = x1(t − t0).
Then, the system output becomes
y2(t) = sin[x2(t)] = sin[x1(t − t0)] = y1(t − t0).
Hence the given system is time invariant.
Example:
Determine whether the system y(t) = tx(t) is time variant or time invariant.
Solution:
Let x1(t) be an input signal to the given system, with output y1(t) = t x1(t). Consider a second input
x2(t) = x1(t − t0),
for which the system output becomes
y2(t) = t x2(t) = t x1(t − t0).  (1.38)
The time-shifted version of the original output is
y1(t − t0) = (t − t0) x1(t − t0).  (1.39)
Comparing Equations (1.38) and (1.39), we get
y2(t) ≠ y1(t − t0).
Hence the given system is time variant.
1.8.6 Linearity
A system is said to be linear if it satisfies the superposition principle. That means, if an input consists of the weighted sum of several signals, then the output is the weighted sum of the responses of the system to each of those signals. Let x1(t) and x2(t) be the input signals and y1(t) and y2(t) be the corresponding output signals, i.e.,
x1(t) → y1(t)
x2(t) → y2(t).
Then, for a linear system,
ax1(t) + bx2(t) → ay1(t) + by2(t)
for any constants a and b.
Example:
Is the system y(t) = tx(t) linear?
Solution:
Let x1(t) and x2(t) be the input signals and y1(t) and y2(t) be the corresponding output signals. Then, for the input ax1(t) + bx2(t), the output is
t[ax1(t) + bx2(t)] = a t x1(t) + b t x2(t) = a y1(t) + b y2(t).
That is, the system satisfies superposition, and hence it is linear.
Example:
Is the system y(t) = x2 (t) linear?
Solution:
Let x1(t) and x2(t) be the input signals and y1(t) and y2(t) be the corresponding output signals. Then, for the input ax1(t) + bx2(t), the output is
[ax1(t) + bx2(t)]² = a²x1²(t) + 2ab x1(t)x2(t) + b²x2²(t).
Since this is not equal to ay1(t) + by2(t) = ax1²(t) + bx2²(t), the system is nonlinear.
Many practical systems possess the properties of linearity and time invariance. Systems with both of these properties are known as linear time-invariant (LTI) systems. An LTI system can be completely characterized in terms of its impulse response, which is the output of the system when its input is the impulse function.
Linear Time-Invariant Systems
x[n] = Σ_{k=−∞}^{+∞} x[k] δ[n − k].  (2.2)
Let h_k[n] denote the response of the system to the shifted impulse δ[n − k]; that is,
δ[n − k] → h_k[n].
Then, by linearity, the output for input x[n] is
y[n] = Σ_{k=−∞}^{+∞} x[k] h_k[n].  (2.3)
If the system is also time invariant, then
h_k[n] = h_0[n − k],
where h_0[n − k] is the time-shifted version of the unit impulse response h_0[n]. For simplicity, let
h_0[n] = h[n].
Hence the output of a discrete-time LTI system for input x[n] becomes
y[n] = Σ_{k=−∞}^{+∞} x[k] h[n − k].  (2.4)
Example:
Determine the output y[n] of a discrete-time LTI system for the following input x[n]
and impulse response h[n].
Solution:
We have
y[n] = Σ_{k=−∞}^{+∞} x[k] h[n − k].  (2.5)
The functions x[k] and h[k] are shown in Figure 2.2 (a).
y[0] = 0 + 2 × 1 + 0 = 2.
y[3] = Σ_{k=−∞}^{+∞} x[k] h[3 − k] = 0 + 1 × 1 + 0 = 1.
y[4] = Σ_{k=−∞}^{+∞} x[k] h[4 − k] = 0.
7. Similarly, for n > 4, there is no nonzero overlap between x[k] and h[n − k]. So,
y[n] = 0.
8. For n = −1, h[−1 − k] is the time-shifted version of h[−k] to the left by 1 time unit, as shown in Figure 2.2(g). There is no nonzero overlap between x[k] and h[−1 − k]. So,
y[−1] = Σ_{k=−∞}^{+∞} x[k] h[−1 − k] = 0.
9. Similarly, for n < −1, there is no nonzero overlap between x[k] and h[n − k]. So,
y[n] = 0.
10. Hence, the overall output of the given LTI system for the given input sequence
x[n] is
y[n] = {0, 2, 3, 2, 1, 0}.
Figure 2.2: Figures for determining the output of the given discrete-time LTI system
for the given input.
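The convolution sum (2.4) can also be evaluated directly in code. The sequences below are illustrative finite-length choices starting at n = 0 (not necessarily the exact signals of Figure 2.2); with this choice the nonzero part of the output happens to agree with the sequence obtained above:

```python
import numpy as np

# Illustrative sequences (x[0], x[1], ...) and (h[0], h[1], ...)
x = np.array([2.0, 1.0, 1.0])
h = np.array([1.0, 1.0])

# Direct evaluation of y[n] = Σ_k x[k] h[n − k] over the nonzero support
N = len(x) + len(h) - 1
y = np.zeros(N)
for n in range(N):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]

print(y)                                   # [2. 3. 2. 1.]
print(np.allclose(y, np.convolve(x, h)))   # matches library convolution -> True
```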
Let h(t) be defined as the impulse response of the system for an impulse input δ(t). That is,
δ(t) → h(t).
Then using the time invariance property, we have
δ(t − τ ) → h(t − τ ).
Also, using the linearity property, the output of an LTI system is given as a linear combination of time-shifted impulse responses:
y(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ.  (2.6)
Equation (2.6) is termed the convolution integral. Symbolically, we can write
y(t) = x(t) ∗ h(t).
Example:
Determine the output of an LTI system with unit impulse response
h(t) = u(t),
for the input signal x(t) = e^{−at} u(t), a > 0.
Solution:
We have
y(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ.
As functions of τ,
h(τ) = u(τ)
and
x(τ) = e^{−aτ} u(τ), a > 0.
The functions x(τ ) and h(τ ) are shown in Figure 2.3(a).
3. For t < 0, h(t − τ ) is the time-shifted version of h(−τ ) to the left by t time
unit as shown in Figure 2.3(c). We see that for t < 0, the product of x(τ ) and
h(t − τ ) is zero. Therefore, output y(t) is zero.
4. For t > 0, h(t − τ ) is the time-shifted version of h(−τ ) to the right by t time
unit as shown in Figure 2.3(d). The output for t > 0 is then determined by
y(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ = ∫_0^t e^{−aτ} dτ = [−(1/a) e^{−aτ}]_0^t = (1/a)(1 − e^{−at}).
Combining both cases, the output for all t is
y(t) = (1/a)(1 − e^{−at}) u(t).
Figure 2.3: Figures for determining the output of the given LTI system.
This property tells us that the output of an LTI system with input x[n] and unit
impulse response h[n] is identical to the output of an LTI system with input h[n]
and unit impulse response x[n].
A discrete-time LTI system is memoryless if its output at any time depends only on the input at the same time, which requires
h[n] = 0, for n ≠ 0.
Here, the impulse response has the form
h[n] = Kδ[n],
where K = h[0] is a constant, and the output of the system is
y[n] = Kx[n].
Similarly, a continuous-time LTI system is memoryless if
h(t) = 0, for t ≠ 0.
The impulse response has the form
h(t) = Kδ(t),
and the system output is
y(t) = Kx(t).
If K = 1, then these systems become identity systems, with output equal to the input and unit impulse response equal to the unit impulse. Then, we have
y[n] = x[n] ∗ δ[n] = x[n] and y(t) = x(t) ∗ δ(t) = x(t).
For a discrete-time LTI system to be causal, y[n] must not depend on x[k] for k > n. This requires
h[n] = 0, for n < 0.  (2.11)
According to Equation (2.11), the impulse response of a causal LTI system must be
zero before the impulse occurs. Therefore, the output of a causal discrete-time LTI
system becomes
y[n] = Σ_{k=−∞}^{n} x[k] h[n − k].
Alternatively,
y[n] = Σ_{k=0}^{+∞} h[k] x[n − k].
or,
|y[n]| ≤ Σ_{k=−∞}^{+∞} |h[k]| |x[n − k]|.
This equation implies that if the impulse response is absolutely summable, that is, if
Σ_{k=−∞}^{+∞} |h[k]| < ∞,
then the output y[n] is bounded in magnitude, and hence the system is stable.
Similarly, a continuous-time LTI system is stable only if its impulse response is
absolutely integrable, that is,
∫_{−∞}^{+∞} |h(τ)| dτ < ∞.
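The summability test can be demonstrated numerically. In this sketch, h1[n] = (1/2)^n u[n] is absolutely summable (its sum is 2), so a bounded input yields a bounded output; h2[n] = u[n] (the accumulator) is not, and the same bounded input drives the output without bound:

```python
import numpy as np

n = np.arange(0, 200)
h1 = 0.5 ** n                       # absolutely summable: sum of |h1| = 2
h2 = np.ones_like(n, dtype=float)   # not absolutely summable (accumulator)

x = np.ones(200)                    # bounded input u[n]
y1 = np.convolve(x, h1)[:200]
y2 = np.convolve(x, h2)[:200]

print(round(np.sum(np.abs(h1)), 6))   # 2.0 (partial sum has converged)
print(y1.max() <= 2.0)                # bounded output -> True
print(y2.max())                       # 200.0: output grows linearly with n
```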
s[n] = u[n] ∗ h[n] = h[n] ∗ u[n] = Σ_{k=−∞}^{+∞} h[k] u[n − k] = Σ_{k=−∞}^{n} h[k].  (2.12)
Therefore, the unit step response of a discrete-time LTI system is the running sum
of its impulse response. Again,
s[n] = Σ_{k=−∞}^{n−1} h[k] + h[n] = s[n − 1] + h[n],
or,
h[n] = s[n] − s[n − 1].
Similarly, for a continuous-time LTI system, the unit step response is the run-
ning integral of its unit impulse response and the unit impulse response is the first
derivative of its unit step response. That is,
s(t) = ∫_{−∞}^{t} h(τ) dτ
and
h(t) = ds(t)/dt.
Hence, the unit step response can also be used to characterize an LTI system,
since the unit impulse response can be determined from it.
x(t) = i(t)R + y(t) = RC (dy(t)/dt) + y(t),
or,
y(t) + RC (dy(t)/dt) = x(t).  (2.13)
So, the input and output of an RC circuit are related through a first-order differential equation.
To determine the output, this differential equation must be solved, which requires an initial condition for the system. The complete solution y(t) consists of a homogeneous solution yh(t) and a particular solution yp(t), i.e., y(t) = yh(t) + yp(t). The homogeneous solution satisfies
y(t) + RC (dy(t)/dt) = 0.  (2.15)
Let the homogeneous solution be of the form
yh(t) = c1 e^{r1 t}.  (2.16)
Substituting into Equation (2.15), we get
1 + RCr1 = 0.  (2.17)
So, r1 = −1/(RC), and the homogeneous solution becomes
yh(t) = c1 e^{−t/RC}.  (2.18)
Consider the input signal
x(t) = K e^{2t} u(t),
and assume a particular solution of the form yp(t) = Y e^{2t} for t > 0. Substituting into Equation (2.13), we get
Y + 2RCY = K,
or,
Y = K/(1 + 2RC).
Then the particular solution becomes
yp(t) = (K/(1 + 2RC)) e^{2t}, t > 0.  (2.22)
Now, for t > 0, the complete solution becomes
y(t) = c1 e^{−t/RC} + (K/(1 + 2RC)) e^{2t}.  (2.23)
To determine the constant c1, we need an initial condition. For a system to be causal and LTI, it must satisfy the condition of initial rest. This means that, for a causal LTI system, if x(t) = 0 for t < t0, then y(t) must also be 0 for t < t0. Therefore y(t) = 0 for t < 0, and using y(0) = 0 in Equation (2.23), we get
0 = c1 + K/(1 + 2RC),
or,
c1 = −K/(1 + 2RC).
So, for t > 0,
y(t) = −(K/(1 + 2RC)) e^{−t/RC} + (K/(1 + 2RC)) e^{2t}.
And, for all t,
y(t) = (K/(1 + 2RC)) (e^{2t} − e^{−t/RC}) u(t).
This is the required solution of the given problem.
y(t0) = dy(t0)/dt = ··· = d^{N−1}y(t0)/dt^{N−1} = 0.  (2.25)
Equations of this type can be solved in a manner analogous to differential equations.
The complete solution is the sum of a homogeneous solution and a particular solution.
The homogeneous solution is the solution of the homogeneous equation
Σ_{k=0}^{N} a_k y[n − k] = 0.
For a causal LTI system, we must have the condition of initial rest. That is, if
x[n] = 0 for n < n0 , then y[n] = 0 for n < n0 .
y[n] = (1/a_0) { Σ_{k=0}^{M} b_k x[n − k] − Σ_{k=1}^{N} a_k y[n − k] }.  (2.27)
In order to calculate y[n], we need to know the auxiliary conditions y[n − 1], y[n − 2], ..., y[n − N]. An equation of the form of Equation (2.26) is called a recursive equation, since we can calculate the output in terms of the input and the previous outputs.
Example:
Solve the following difference equation
y[n] − (1/2) y[n − 1] = x[n]  (2.28)
for
x[n] = Kδ[n].
Solution:
Equation (2.28) can be expressed as
y[n] = x[n] + (1/2) y[n − 1].
Using the condition of initial rest, since x[n] = 0 for n < 0, then y[n] = 0 for n < 0 as well. Now, for n ≥ 0, iterating the recursion gives y[0] = K, y[1] = K/2, y[2] = K/4, and in general
y[n] = K (1/2)^n u[n].
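Iterating this recursion in code confirms the closed-form output K(1/2)^n u[n] that follows from repeatedly substituting the previous outputs (K = 3 here is an arbitrary choice):

```python
# Recursive solution of y[n] = x[n] + (1/2) y[n−1] with x[n] = K δ[n]
# and the condition of initial rest (y[n] = 0 for n < 0).
K = 3.0
N = 10
x = [K] + [0.0] * (N - 1)   # K δ[n]

y = []
y_prev = 0.0                # initial rest
for n in range(N):
    y_n = x[n] + 0.5 * y_prev
    y.append(y_n)
    y_prev = y_n

print(y[:4])   # [3.0, 1.5, 0.75, 0.375]
print(all(y[n] == K * 0.5 ** n for n in range(N)))   # y[n] = K(1/2)^n -> True
```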
Figure 2.8: Block diagram representation for the discrete-time system described by
Equation (2.29).
dy(t)/dt + a y(t) = b x(t).  (2.30)
Let's rearrange Equation (2.30) in the form
y(t) = −(1/a)(dy(t)/dt) + (b/a) x(t).  (2.31)
According to Equation (2.31), the output y(t) of the system can be determined by performing three operations: addition, multiplication by a coefficient, and differentiation, as shown by the block diagram in Figure 2.9.
Alternatively, rewriting Equation (2.30) as
dy(t)/dt = b x(t) − a y(t)  (2.32)
and then integrating from −∞ to t, we get
y(t) = ∫_{−∞}^{t} [b x(τ) − a y(τ)] dτ.  (2.33)
−∞
The impulse response of the RC circuit is
h(t) = (1/RC) e^{−t/RC} u(t).  (2.34)
We have
y(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ.  (2.35)
1. As a function of τ, we have
h(τ) = (1/RC) e^{−τ/RC} u(τ).
The functions x(τ ) and h(τ ) are shown in Figures 2.12 (a).
3. For t < 0, h(t − τ ) is the time-shifted version of h(−τ ) to the left by t time
unit as shown in Figure 2.12 (c). We see that for t < 0, the product of x(τ )
and h(t − τ ) is zero. Therefore, output y(t) is zero.
5. For t > 2, the overlapping always occurs from τ = 0 to 2. From Figure 2.12
(e), the output is obtained as
y(t) = ∫_0^2 (1/RC) e^{−(t−τ)/RC} dτ
= (1/RC) e^{−t/RC} ∫_0^2 e^{τ/RC} dτ
= (1/RC) e^{−t/RC} [RC e^{τ/RC}]_0^2
= e^{−t/RC} (e^{2/RC} − 1).
Figure 2.12: Figures for determining the output of an RC filter for a rectangular
pulse.
Fourier Analysis for Continuous-Time and Discrete-Time Signals
Consider the continuous-time complex exponential
x(t) = e^{jω0 t},
where
ω0 = 2π/T.
A periodic signal x(t) with fundamental period T can be represented as a linear combination of harmonically related complex exponentials as follows.
x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t} = Σ_{k=−∞}^{+∞} a_k e^{jk(2π/T)t}.  (3.1)
This representation is known as the Fourier series (or exponential Fourier series) representation, and the set of coefficients {a_k} are called the Fourier series coefficients. The term for k = 0 is a constant. The terms for k = −1 and k = +1 both have fundamental frequency equal to ω0 and are collectively called the fundamental components or the first harmonic components. The terms for k = −2 and k = +2 are referred to as the second harmonic components. In general, the terms for k = −N and k = +N are collectively referred to as the Nth harmonic components.
∫_0^T e^{j(k−n)ω0 t} dt = ∫_0^T cos((k − n)ω0 t) dt + j ∫_0^T sin((k − n)ω0 t) dt.  (3.4)
For k ≠ n, cos((k − n)ω0 t) and sin((k − n)ω0 t) are periodic signals with fundamental period T/|k − n|. Since T is an integer multiple of T/|k − n|, for k ≠ n both of the integrals on the right-hand side of Equation (3.4) are zero. For k = n, the integral on the left-hand side of Equation (3.4) equals T. Therefore,
∫_0^T e^{j(k−n)ω0 t} dt = T for k = n, and 0 for k ≠ n.
And the right-hand side of Equation (3.3) reduces to T a_n. That is,
∫_0^T x(t) e^{−jnω0 t} dt = T a_n.
So,
a_n = (1/T) ∫_0^T x(t) e^{−jnω0 t} dt,
or,
a_k = (1/T) ∫_0^T x(t) e^{−jkω0 t} dt,
or,
a_k = (1/T) ∫_T x(t) e^{−jkω0 t} dt.
The equation
x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t} = Σ_{k=−∞}^{+∞} a_k e^{jk(2π/T)t}  (3.5)
is referred to as the synthesis equation of the Fourier series.
Example
Determine the Fourier series coefficients of the following periodic sawtooth wave.
Also, plot the magnitude and phase spectrum.
Solution
2π
Here, fundamental period, T = 1 sec., and hence, fundamental frequency, w0 = T
=
2π rad./sec..
Over one period (0 < t ≤ 1), the signal can be written as
x(t) = 1 − 2t.
The Fourier series coefficients can be determined using
a_k = (1/T) ∫_T x(t) e^{−jkω0 t} dt
= ∫_0^1 (1 − 2t) e^{−jk2πt} dt
= ∫_0^1 e^{−jk2πt} dt − 2 ∫_0^1 t e^{−jk2πt} dt.  (3.8)
Integrating by parts,
∫_0^1 t e^{−jk2πt} dt = [−t e^{−jk2πt}/(jk2π)]_0^1 + (1/(jk2π)) [−e^{−jk2πt}/(jk2π)]_0^1
= −e^{−jk2π}/(jk2π) + e^{−jk2π}/(k2π)² − 1/(k2π)²
= −1/(jk2π) + 1/(k2π)² − 1/(k2π)²
= −1/(jk2π).  (3.9)
Also,
∫_0^1 e^{−jk2πt} dt = [−e^{−jk2πt}/(jk2π)]_0^1
= −(1/(jk2π)) (e^{−jk2π} − 1)
= −(1/(jk2π)) (1 − 1)
= 0.  (3.10)
Therefore, for k ≠ 0,
a_k = 0 − 2 × (−1/(jk2π)) = 1/(jkπ) = −j/(kπ).
For k = 0, we have
a_0 = (1/T) ∫_T x(t) dt = ∫_0^1 (1 − 2t) dt = 0.
The magnitude and phase spectra of the given sawtooth wave are shown in Figures 3.2(a) and 3.2(b), respectively.
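The coefficients a_k = −j/(kπ) derived above can be cross-checked by numerical integration of the analysis formula (a rough Riemann-sum sketch):

```python
import numpy as np

T = 1.0
dt = 1e-5
t = np.arange(0.0, T, dt)
x = 1 - 2 * t                      # sawtooth over one period, x(t) = 1 − 2t

def a(k):
    # a_k = (1/T) ∫_T x(t) e^{−jk(2π/T)t} dt, approximated by a Riemann sum
    return np.sum(x * np.exp(-1j * 2 * np.pi * k * t / T)) * dt / T

print(abs(a(0)) < 1e-3)                         # a_0 = 0 -> True
print(abs(a(1) - (-1j / np.pi)) < 1e-3)         # a_1 = −j/π -> True
print(abs(a(3) - (-1j / (3 * np.pi))) < 1e-3)   # a_3 = −j/(3π) -> True
```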
The following Dirichlet conditions are required for the convergence of Fourier
series.
1. Condition 1: Over any period, x(t) must be absolutely integrable; that is,
∫_T |x(t)| dt < ∞.
2. Condition 2: There must not be more than a finite number of maxima and
minima during any single period of the signal. A signal that violates this
condition is
x(t) = sin(2π/t), 0 < t ≤ 1.
3. Condition 3: There must be only a finite number of discontinuities over any
single period.
Evaluation of b0 :
Integrating both sides of Equation (3.11) over one period, we get
∫_T x(t) dt = ∫_T b0 dt + ∫_T Σ_{k=1}^{+∞} [b_k cos(kω0 t) + c_k sin(kω0 t)] dt
= b0 T + Σ_{k=1}^{+∞} b_k ∫_T cos(kω0 t) dt + Σ_{k=1}^{+∞} c_k ∫_T sin(kω0 t) dt.
Since
∫_T cos(kω0 t) dt = 0
and
∫_T sin(kω0 t) dt = 0,
therefore,
∫_T x(t) dt = b0 T
and
b0 = (1/T) ∫_T x(t) dt.
Evaluation of bk :
Multiplying both sides of Equation (3.11) by cos(nω0 t) and integrating over a single period, we get
∫_T x(t) cos(nω0 t) dt = ∫_T b0 cos(nω0 t) dt + ∫_T Σ_{k=1}^{+∞} [b_k cos(kω0 t) + c_k sin(kω0 t)] cos(nω0 t) dt
= b0 ∫_T cos(nω0 t) dt + Σ_{k=1}^{+∞} b_k ∫_T cos(kω0 t) cos(nω0 t) dt + Σ_{k=1}^{+∞} c_k ∫_T sin(kω0 t) cos(nω0 t) dt
= b0 ∫_T cos(nω0 t) dt + (1/2) Σ_{k=1}^{+∞} b_k ∫_T cos((k + n)ω0 t) dt + (1/2) Σ_{k=1}^{+∞} b_k ∫_T cos((k − n)ω0 t) dt
+ (1/2) Σ_{k=1}^{+∞} c_k ∫_T sin((k + n)ω0 t) dt + (1/2) Σ_{k=1}^{+∞} c_k ∫_T sin((k − n)ω0 t) dt.
Since
∫_T cos(nω0 t) dt = 0,
∫_T cos((k + n)ω0 t) dt = 0, for all k, n,
∫_T cos((k − n)ω0 t) dt = 0 for k ≠ n and T for k = n,
∫_T sin((k + n)ω0 t) dt = 0, for all k, n,
∫_T sin((k − n)ω0 t) dt = 0, for all k, n,
therefore,
∫_T x(t) cos(nω0 t) dt = (1/2) b_n T,
or,
b_n = (2/T) ∫_T x(t) cos(nω0 t) dt,
or,
b_k = (2/T) ∫_T x(t) cos(kω0 t) dt.
Evaluation of ck :
Multiplying both sides of Equation (3.11) by sin(nω0 t) and integrating over a single period, we get
∫_T x(t) sin(nω0 t) dt = ∫_T b0 sin(nω0 t) dt + ∫_T Σ_{k=1}^{+∞} [b_k cos(kω0 t) + c_k sin(kω0 t)] sin(nω0 t) dt
= b0 ∫_T sin(nω0 t) dt + Σ_{k=1}^{+∞} b_k ∫_T cos(kω0 t) sin(nω0 t) dt + Σ_{k=1}^{+∞} c_k ∫_T sin(kω0 t) sin(nω0 t) dt
= b0 ∫_T sin(nω0 t) dt + (1/2) Σ_{k=1}^{+∞} b_k ∫_T sin((n + k)ω0 t) dt + (1/2) Σ_{k=1}^{+∞} b_k ∫_T sin((n − k)ω0 t) dt
+ (1/2) Σ_{k=1}^{+∞} c_k ∫_T cos((k − n)ω0 t) dt − (1/2) Σ_{k=1}^{+∞} c_k ∫_T cos((k + n)ω0 t) dt.
Since
∫_T sin(nω0 t) dt = 0,
∫_T sin((n − k)ω0 t) dt = 0, for all k, n,
∫_T sin((n + k)ω0 t) dt = 0, for all k, n,
∫_T cos((k − n)ω0 t) dt = 0 for k ≠ n and T for k = n,
∫_T cos((k + n)ω0 t) dt = 0, for all k, n,
therefore,
∫_T x(t) sin(nω0 t) dt = (1/2) c_n T,
or,
c_n = (2/T) ∫_T x(t) sin(nω0 t) dt,
or,
c_k = (2/T) ∫_T x(t) sin(kω0 t) dt.
Symmetry Conditions
1. If x(t) is even, then
b0 = (2/T) ∫_0^{T/2} x(t) dt,
b_k = (4/T) ∫_0^{T/2} x(t) cos(kω0 t) dt,
c_k = 0.
2. If x(t) is odd, then
b0 = 0,
b_k = 0,
c_k = (4/T) ∫_0^{T/2} x(t) sin(kω0 t) dt.
x(t) = d0 + Σ_{k=1}^{+∞} d_k cos(kω0 t − θ_k).
This is the compact trigonometric Fourier series representation. Here, the relationships between the coefficients of the trigonometric Fourier series and the compact trigonometric Fourier series are:
b0 = d0, b_k = d_k cos θ_k, and c_k = d_k sin θ_k.
x(t) = b0 + Σ_{k=1}^{+∞} [b_k cos(kω0 t) + c_k sin(kω0 t)]
= b0 + Σ_{k=1}^{+∞} [b_k (e^{jkω0 t} + e^{−jkω0 t})/2 + c_k (e^{jkω0 t} − e^{−jkω0 t})/(2j)]
= b0 + Σ_{k=1}^{+∞} [((b_k − jc_k)/2) e^{jkω0 t} + ((b_k + jc_k)/2) e^{−jkω0 t}].
Let a0 = b0, a_k = (b_k − jc_k)/2, and a_{−k} = (b_k + jc_k)/2. Then,
x(t) = a0 + Σ_{k=1}^{+∞} (a_k e^{jkω0 t} + a_{−k} e^{−jkω0 t})
= a0 + Σ_{k=1}^{+∞} a_k e^{jkω0 t} + Σ_{k=1}^{+∞} a_{−k} e^{−jkω0 t}
= a0 + Σ_{k=1}^{+∞} a_k e^{jkω0 t} + Σ_{k=−∞}^{−1} a_k e^{jkω0 t}
= Σ_{k=−∞}^{+∞} a_k e^{jkω0 t}.
Example
Determine the trigonometric Fourier series representation of the following periodic
sawtooth wave.
Solution
A periodic signal x(t) can be represented by trigonometric Fourier series as,
x(t) = b0 + Σ_{k=1}^{+∞} [b_k cos(kω0 t) + c_k sin(kω0 t)].
The given signal has fundamental period T = 1 sec and fundamental frequency ω0 = 2π rad/sec. The given signal x(t) is odd. Therefore, b0 = 0 and b_k = 0. And,
c_k = (4/T) ∫_0^{T/2} x(t) sin(kω0 t) dt = 4 ∫_0^{1/2} x(t) sin(kω0 t) dt.
Over 0 < t < 1/2, the signal is the straight line through (0, 1) and (1/2, 0):
x(t) − 1 = ((0 − 1)/(1/2 − 0)) (t − 0),
∴ x(t) = 1 − 2t.
Now,
c_k = 4 ∫_0^{1/2} (1 − 2t) sin(kω0 t) dt
= 4 ∫_0^{1/2} sin(k2πt) dt − 8 ∫_0^{1/2} t sin(k2πt) dt
= −(4/(k2π)) [cos(k2πt)]_0^{1/2} − 8 ( [−(t/(k2π)) cos(k2πt)]_0^{1/2} + (1/(k2π)) ∫_0^{1/2} cos(k2πt) dt )
= −(4/(k2π)) (cos(kπ) − 1) − 8 ( −(1/(k4π)) cos(kπ) + (1/(k2π)²) [sin(k2πt)]_0^{1/2} )
= −(2/(kπ)) ((−1)^k − 1) + (2/(kπ)) (−1)^k
= 2/(kπ) − (2/(kπ)) (−1)^k + (2/(kπ)) (−1)^k
= 2/(kπ).
Hence, the trigonometric Fourier series representation of the given periodic signal is
x(t) = Σ_{k=1}^{+∞} (2/(kπ)) sin(kω0 t).
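Summing the first several thousand harmonics of this series numerically reproduces the sawtooth 1 − 2t away from its discontinuity at t = 0 (a convergence sketch; the sample points and number of terms are arbitrary choices):

```python
import numpy as np

t = np.array([0.1, 0.25, 0.4, 0.6, 0.9])     # points away from the jump at t = 0
k = np.arange(1, 20001)                       # harmonics k = 1..20000

# partial sum of Σ (2/(kπ)) sin(k·2π·t) at each sample point
partial = (2 / (k * np.pi) * np.sin(2 * np.pi * np.outer(t, k))).sum(axis=1)
target = 1 - 2 * t

print(np.allclose(partial, target, atol=1e-3))   # pointwise convergence -> True
```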
1. Linearity
Let x(t) and y(t) be two periodic signals with period T and Fourier series coefficients
x(t) ←FS→ a_k,
y(t) ←FS→ b_k.
Then,
z(t) = Ax(t) + By(t) ←FS→ c_k = Aa_k + Bb_k.
Proof:
c_k = (1/T) ∫_T z(t) e^{−jkω0 t} dt
= (1/T) ∫_T (Ax(t) + By(t)) e^{−jkω0 t} dt
= A (1/T) ∫_T x(t) e^{−jkω0 t} dt + B (1/T) ∫_T y(t) e^{−jkω0 t} dt
= Aa_k + Bb_k.
2. Time Shifting
If
x(t) ←FS→ a_k,
then
y(t) = x(t − t0) ←FS→ b_k = e^{−jkω0 t0} a_k.
Proof:
b_k = (1/T) ∫_T y(t) e^{−jkω0 t} dt = (1/T) ∫_T x(t − t0) e^{−jkω0 t} dt.
Let t − t0 = τ; then
b_k = (1/T) ∫_T x(τ) e^{−jkω0 (τ + t0)} dτ
= e^{−jkω0 t0} (1/T) ∫_T x(τ) e^{−jkω0 τ} dτ
= e^{−jkω0 t0} a_k.
When a periodic signal is shifted in time, the magnitudes of its Fourier series coefficients remain the same.
3. Frequency Shifting
If
x(t) ←FS→ a_k,
then
y(t) = e^{jMω0 t} x(t) ←FS→ b_k = a_{k−M}.
Proof:
b_k = (1/T) ∫_T y(t) e^{−jkω0 t} dt
= (1/T) ∫_T e^{jMω0 t} x(t) e^{−jkω0 t} dt
= (1/T) ∫_T x(t) e^{−j(k−M)ω0 t} dt
= a_{k−M}.
4. Time Reversal
If
x(t) ←FS→ a_k,
then
y(t) = x(−t) ←FS→ b_k = a_{−k}.
Proof:
We have
x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t}.
So, substituting k = −m,
y(t) = x(−t) = Σ_{k=−∞}^{+∞} a_k e^{−jkω0 t} = Σ_{m=−∞}^{+∞} a_{−m} e^{jmω0 t}.
Hence,
b_k = a_{−k}.
Note that if x(t) is even, then a_{−k} = a_k, and if x(t) is odd, then a_{−k} = −a_k.
5. Time Scaling
If
x(t) ←FS→ a_k,
then
x(αt) ←FS→ a_k.
We can prove this in a straightforward manner. Since
x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t},
then
x(αt) = Σ_{k=−∞}^{+∞} a_k e^{jk(αω0)t}.
Hence, the time-scaled version x(αt) has the same Fourier series coefficients a_k. However, it has fundamental period T/α and fundamental frequency αω0.
6. Conjugation
If
x(t) ←FS→ a_k,
then
x*(t) ←FS→ a*_{−k}.
Proof:
We have
x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t}.
Then,
x*(t) = [Σ_{k=−∞}^{+∞} a_k e^{jkω0 t}]* = Σ_{k=−∞}^{+∞} a*_k e^{−jkω0 t}.
Put k = −m; then
x*(t) = Σ_{m=−∞}^{+∞} a*_{−m} e^{jmω0 t}.
Hence,
x*(t) ←FS→ a*_{−k}.
Note that:
(a) If x(t) is real, then
a_k = a*_{−k} ⇒ a*_k = a_{−k}.
7. Multiplication
Let x(t) and y(t) be two periodic signals with period T and
x(t) ←FS→ a_k,
y(t) ←FS→ b_k.
Then,
z(t) = x(t)y(t) ←FS→ c_k = Σ_{l=−∞}^{+∞} a_l b_{k−l}.
Proof:
We have
x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t},
y(t) = Σ_{k=−∞}^{+∞} b_k e^{jkω0 t}.
Then,
z(t) = x(t)y(t) = Σ_{l=−∞}^{+∞} a_l e^{jlω0 t} Σ_{m=−∞}^{+∞} b_m e^{jmω0 t}
= Σ_{l=−∞}^{+∞} Σ_{m=−∞}^{+∞} a_l b_m e^{j(l+m)ω0 t}.
Let l + m = k; then
z(t) = Σ_{k=−∞}^{+∞} ( Σ_{l=−∞}^{+∞} a_l b_{k−l} ) e^{jkω0 t}.
Hence,
c_k = Σ_{l=−∞}^{+∞} a_l b_{k−l}.
8. Periodic Convolution
Let x(t) and y(t) be two periodic signals with period T and
$$x(t) \overset{FS}{\longleftrightarrow} a_k, \qquad y(t) \overset{FS}{\longleftrightarrow} b_k.$$
Then,
$$z(t) = \int_T x(\tau)y(t-\tau)\,d\tau \overset{FS}{\longleftrightarrow} c_k = T a_k b_k.$$
The left-hand side is referred to as the periodic convolution between x(t) and y(t).
Proof
$$c_k = \frac{1}{T}\int_T z(t)e^{-jk\omega_0 t}\,dt = \frac{1}{T}\int_T\int_T x(\tau)y(t-\tau)\,d\tau\,e^{-jk\omega_0 t}\,dt.$$
Let $t - \tau = t'$; then
$$c_k = \frac{1}{T}\int_T\int_T x(\tau)y(t')e^{-jk\omega_0(t'+\tau)}\,d\tau\,dt' = \frac{1}{T}\int_T x(\tau)\left[\int_T y(t')e^{-jk\omega_0 t'}\,dt'\right]e^{-jk\omega_0 \tau}\,d\tau = \frac{1}{T}\,Tb_k\int_T x(\tau)e^{-jk\omega_0 \tau}\,d\tau = Ta_k b_k.$$
9. Parseval's Relation
If
$$x(t) \overset{FS}{\longleftrightarrow} a_k,$$
then the average power of x(t) is
$$P = \frac{1}{T}\int_T |x(t)|^2\,dt = \sum_{k=-\infty}^{+\infty} |a_k|^2.$$
This relation states that the total average power in a periodic signal equals the sum of the average powers in all of its harmonic components.
10. Differentiation
If $x(t) \overset{FS}{\longleftrightarrow} a_k$, then
$$y(t) = \frac{dx(t)}{dt} \overset{FS}{\longleftrightarrow} b_k = jk\omega_0 a_k.$$
Proof
We have
$$x(t) = \sum_{k=-\infty}^{+\infty} a_k e^{jk\omega_0 t}.$$
Differentiating term by term,
$$y(t) = \frac{dx(t)}{dt} = \sum_{k=-\infty}^{+\infty} jk\omega_0\,a_k e^{jk\omega_0 t}.$$
Hence, we get
$$b_k = jk\omega_0 a_k.$$
11. Integration
If $x(t) \overset{FS}{\longleftrightarrow} a_k$, then
$$y(t) = \int_{-\infty}^{t} x(t)\,dt \overset{FS}{\longleftrightarrow} b_k = \frac{1}{jk\omega_0}a_k.$$
Proof
We have
$$x(t) = \sum_{k=-\infty}^{+\infty} a_k e^{jk\omega_0 t}.$$
Integrating term by term,
$$y(t) = \sum_{k=-\infty}^{+\infty} \frac{a_k}{jk\omega_0}e^{jk\omega_0 t}.$$
Hence, we get
$$b_k = \frac{1}{jk\omega_0}a_k.$$
(This result is finite and periodic only if $a_0 = 0$.)
or,
$$\omega_0 T = 2\pi,$$
or,
$$\omega_0 = \frac{2\pi}{T}.$$
Here, ω₀ is the fundamental frequency and T is the fundamental period. Note that the signals $e^{j\omega_0 t}$ are distinct for different values of ω₀; if ω₀ is increased, the rate of oscillation also increases.
In discrete time, however, $e^{j(\omega_0+2\pi)n} = e^{j\omega_0 n}$. This means that the signal at frequency ω₀ + 2π is the same as that at frequency ω₀, so the signals $e^{j\omega_0 n}$ are not distinct for different values of ω₀.
Consider the discrete-time periodic signal
$$x[n] = e^{j\omega_0 n},$$
where
$$\omega_0 = \frac{2\pi}{N}.$$
A set of harmonically related complex exponentials can be represented as
$$\phi_k[n] = e^{jk\omega_0 n} = e^{jk\frac{2\pi}{N}n}, \qquad k = 0, \pm 1, \pm 2, \cdots$$
Unlike in the continuous-time case, there are only N distinct signals in the set $\phi_k[n]$, since
$$\phi_{k+N}[n] = e^{j(k+N)\frac{2\pi}{N}n} = e^{jk\frac{2\pi}{N}n}e^{jN\frac{2\pi}{N}n} = e^{jk\frac{2\pi}{N}n} = \phi_k[n].$$
In general,
$$\phi_0[n] = 1,\quad \phi_1[n] = e^{j\frac{2\pi}{N}n},\quad \phi_2[n] = e^{j2\frac{2\pi}{N}n},\quad \cdots,\quad \phi_{N-1}[n] = e^{j(N-1)\frac{2\pi}{N}n},$$
and any other $\phi_k[n]$ is identical to one of these signals. Therefore, the sequences $\phi_k[n]$ are distinct only over a range of N successive values of k.
The Fourier series representation of a periodic sequence x[n] with period N is
$$x[n] = \sum_{k=\langle N\rangle} a_k \phi_k[n] = \sum_{k=\langle N\rangle} a_k e^{jk\omega_0 n} = \sum_{k=\langle N\rangle} a_k e^{jk\frac{2\pi}{N}n}. \tag{3.12}$$
Multiplying both sides by $e^{-jr\frac{2\pi}{N}n}$ and summing over N successive values of n,
$$\sum_{n=\langle N\rangle} x[n]e^{-jr\frac{2\pi}{N}n} = \sum_{n=\langle N\rangle}\sum_{k=\langle N\rangle} a_k e^{j(k-r)\frac{2\pi}{N}n} = \sum_{k=\langle N\rangle} a_k\sum_{n=\langle N\rangle} e^{j(k-r)\frac{2\pi}{N}n}.$$
Since
$$\sum_{n=\langle N\rangle} e^{j(k-r)\frac{2\pi}{N}n} = \begin{cases} 0, & k \neq r \\ N, & k = r \end{cases}$$
therefore,
$$\sum_{n=\langle N\rangle} x[n]e^{-jr\frac{2\pi}{N}n} = a_r N,$$
or,
$$a_r = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]e^{-jr\frac{2\pi}{N}n},$$
or,
$$a_k = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]e^{-jk\frac{2\pi}{N}n}. \tag{3.13}$$
Equation (3.12) is referred to as the synthesis equation and (3.13) as the analysis equation. The Fourier series coefficients are also known as the spectral coefficients.
Also,
$$a_k = a_{k+N},$$
and we conclude that the values $a_k$ repeat periodically with period N.
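The analysis and synthesis equations can be checked numerically: computing the coefficients of an arbitrary N-point sequence and resynthesizing it should return the original samples exactly. The test sequence below is an arbitrary illustrative choice:

```python
import numpy as np

def dtfs_analysis(x):
    # a_k = (1/N) * sum_n x[n] e^{-jk(2 pi/N)n},  k = 0..N-1
    N = len(x)
    n = np.arange(N)
    return np.array([np.mean(x * np.exp(-2j * np.pi * k * n / N))
                     for k in range(N)])

def dtfs_synthesis(a):
    # x[n] = sum_k a_k e^{jk(2 pi/N)n}
    N = len(a)
    k = np.arange(N)
    return np.array([np.sum(a * np.exp(2j * np.pi * k * n / N))
                     for n in range(N)])

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
a = dtfs_analysis(x)
xr = dtfs_synthesis(a)
assert np.allclose(xr.real, x) and np.allclose(xr.imag, 0, atol=1e-12)
```

Because only N exponentials are distinct, the coefficients also satisfy $a_{k+N} = a_k$ automatically.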
Example
Determine and plot the magnitude and phase spectra of the periodic signal $x[n] = \sin(\omega_0 n)$.
Solution
Here, $\omega_0 = 2\pi/N$ is the fundamental frequency and N is the fundamental period of the given signal. Let's take the value of N to be 5. Writing $x[n] = \frac{1}{2j}e^{j\omega_0 n} - \frac{1}{2j}e^{-j\omega_0 n}$ and comparing with the synthesis equation, we get
$$a_1 = \frac{1}{2j} \;\Rightarrow\; |a_1| = \frac{1}{2},\quad \angle a_1 = -\frac{\pi}{2},$$
$$a_{-1} = -\frac{1}{2j} \;\Rightarrow\; |a_{-1}| = \frac{1}{2},\quad \angle a_{-1} = \frac{\pi}{2},$$
and the remaining coefficients over the interval of summation are zero. The magnitude and phase spectra are shown in Figures 3.4 (a) and (b) respectively.
Example
Determine and plot the magnitude and phase spectra of the following periodic signal:
$$x[n] = 2 + \sin\left(\frac{4\pi}{N}n\right) + 2\cos\left(\frac{4\pi}{N}n + \frac{\pi}{2}\right).$$
Solution
Expanding the given signal x[n] using Euler's identity, we get
$$x[n] = 2 + \frac{1}{2j}e^{j\frac{4\pi}{N}n} - \frac{1}{2j}e^{-j\frac{4\pi}{N}n} + e^{j\left(\frac{4\pi}{N}n+\frac{\pi}{2}\right)} + e^{-j\left(\frac{4\pi}{N}n+\frac{\pi}{2}\right)}$$
$$= 2 - j\frac{1}{2}e^{j\frac{4\pi}{N}n} + j\frac{1}{2}e^{-j\frac{4\pi}{N}n} + je^{j\frac{4\pi}{N}n} - je^{-j\frac{4\pi}{N}n}$$
$$= 2 + j\frac{1}{2}e^{j\frac{4\pi}{N}n} - j\frac{1}{2}e^{-j\frac{4\pi}{N}n}.$$
Comparing this equation with the synthesis equation, we get
$$a_0 = 2 \;\Rightarrow\; |a_0| = 2,\quad \angle a_0 = 0,$$
$$a_2 = j\frac{1}{2} \;\Rightarrow\; |a_2| = \frac{1}{2},\quad \angle a_2 = \frac{\pi}{2},$$
$$a_{-2} = -j\frac{1}{2} \;\Rightarrow\; |a_{-2}| = \frac{1}{2},\quad \angle a_{-2} = -\frac{\pi}{2},$$
and the other coefficients $a_k$ are zero over the interval of summation in the synthesis equation. For N = 5, the magnitude and phase spectra are shown in Figures 3.5 (a) and (b) respectively.
Example
Determine and plot the spectrum of the following discrete-time periodic square wave.
Solution
We have
$$a_k = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]e^{-jk\frac{2\pi}{N}n}.$$
For the given signal x[n], we can express the above equation as
$$a_k = \frac{1}{N}\sum_{n=-N_1}^{N_1} e^{-jk\frac{2\pi}{N}n}.$$
Let $m = n + N_1$; then
$$a_k = \frac{1}{N}\sum_{m=0}^{2N_1} e^{-jk\frac{2\pi}{N}(m-N_1)} = \frac{1}{N}e^{jk\frac{2\pi}{N}N_1}\sum_{m=0}^{2N_1} e^{-jk\frac{2\pi}{N}m} = \frac{1}{N}e^{jk\frac{2\pi}{N}N_1}\left(\frac{1 - e^{-jk\frac{2\pi}{N}(2N_1+1)}}{1 - e^{-jk\frac{2\pi}{N}}}\right)$$
$$= \frac{1}{N}\,\frac{e^{jk\frac{\pi}{N}(2N_1+1)} - e^{-jk\frac{\pi}{N}(2N_1+1)}}{e^{jk\frac{\pi}{N}} - e^{-jk\frac{\pi}{N}}} = \frac{1}{N}\,\frac{\sin\left(k\frac{\pi}{N}(2N_1+1)\right)}{\sin\left(\frac{k\pi}{N}\right)},$$
and
$$a_0 = \frac{2N_1+1}{N}.$$
For N = 10 and $N_1 = 2$,
$$a_k = \frac{1}{10}\,\frac{\sin\left(k\frac{\pi}{10}(2\times 2+1)\right)}{\sin\left(\frac{k\pi}{10}\right)} = \frac{1}{10}\,\frac{\sin\left(\frac{k\pi}{2}\right)}{\sin\left(\frac{k\pi}{10}\right)},$$
and
$$a_0 = \frac{2\times 2+1}{10} = \frac{1}{2}.$$
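The closed-form coefficients of the periodic square wave can be compared against the direct analysis sum, using N = 10 and N₁ = 2 as in the example:

```python
import numpy as np

N, N1 = 10, 2
n = np.arange(-N1, N1 + 1)      # the ones-samples of the square wave

def a_direct(k):
    # a_k = (1/N) * sum_{n=-N1}^{N1} e^{-jk(2 pi/N)n}
    return np.sum(np.exp(-2j * np.pi * k * n / N)) / N

def a_closed(k):
    # a_k = sin(k pi (2N1+1)/N) / (N sin(k pi/N)), with a_0 = (2N1+1)/N
    if k % N == 0:
        return (2 * N1 + 1) / N
    return np.sin(k * np.pi * (2 * N1 + 1) / N) / (N * np.sin(k * np.pi / N))

for k in range(-9, 10):
    assert abs(a_direct(k) - a_closed(k)) < 1e-12
assert abs(a_closed(0) - 0.5) < 1e-15
```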
1. Linearity
If
$$x[n] \overset{FS}{\longleftrightarrow} a_k, \qquad y[n] \overset{FS}{\longleftrightarrow} b_k,$$
then
$$z[n] = Ax[n] + By[n] \overset{FS}{\longleftrightarrow} c_k = Aa_k + Bb_k.$$
2. Time Shifting
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then
$$y[n] = x[n - n_0] \overset{FS}{\longleftrightarrow} b_k = e^{-jk\omega_0 n_0}a_k.$$
3. Frequency Shifting
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then
$$y[n] = e^{jM\omega_0 n}x[n] \overset{FS}{\longleftrightarrow} b_k = a_{k-M}.$$
4. Time Reversal
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then
$$y[n] = x[-n] \overset{FS}{\longleftrightarrow} b_k = a_{-k}.$$
5. Time Scaling
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then
$$x[\alpha n] \overset{FS}{\longleftrightarrow} \alpha a_k, \qquad \alpha > 0.$$
The scaling operation changes the fundamental frequency to αω₀ and the fundamental period to N/α. Also, the Fourier series coefficients are scaled by α.
6. Conjugation
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then
$$x^*[n] \overset{FS}{\longleftrightarrow} a^*_{-k}.$$
7. Multiplication
If
$$x[n] \overset{FS}{\longleftrightarrow} a_k, \qquad y[n] \overset{FS}{\longleftrightarrow} b_k,$$
then
$$z[n] = x[n]y[n] \overset{FS}{\longleftrightarrow} c_k = \sum_{l=\langle N\rangle} a_l b_{k-l}.$$
8. Periodic Convolution
If
$$x[n] \overset{FS}{\longleftrightarrow} a_k, \qquad y[n] \overset{FS}{\longleftrightarrow} b_k,$$
then
$$z[n] = \sum_{r=\langle N\rangle} x[r]y[n-r] \overset{FS}{\longleftrightarrow} c_k = Na_k b_k.$$
The left-hand side is referred to as the periodic convolution between x[n] and y[n].
9. Parseval's Relation
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then the average power of x[n] is
$$P = \frac{1}{N}\sum_{n=\langle N\rangle} |x[n]|^2 = \sum_{k=\langle N\rangle} |a_k|^2.$$
Proof
$$P = \frac{1}{N}\sum_{n=\langle N\rangle} |x[n]|^2 = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]x^*[n] = \frac{1}{N}\sum_{n=\langle N\rangle} x[n]\sum_{k=\langle N\rangle} a^*_k e^{-jk\frac{2\pi}{N}n} = \sum_{k=\langle N\rangle} a^*_k\left[\frac{1}{N}\sum_{n=\langle N\rangle} x[n]e^{-jk\frac{2\pi}{N}n}\right] = \sum_{k=\langle N\rangle} a^*_k a_k = \sum_{k=\langle N\rangle} |a_k|^2.$$
This relation states that the total average power in a periodic signal equals the sum of the average powers in all of its harmonic components.
10. First Difference
If $x[n] \overset{FS}{\longleftrightarrow} a_k$, then
$$y[n] = x[n] - x[n-1] \overset{FS}{\longleftrightarrow} b_k = \left(1 - e^{-jk\frac{2\pi}{N}}\right)a_k.$$
This follows directly from the linearity and time-shifting properties.
The major concept behind the development of the Fourier transform from the Fourier series representation is that an aperiodic signal can be viewed as a periodic signal with an infinite period. In the Fourier series representation of a periodic signal, we have ω₀ = 2π/T. As the period increases, the fundamental frequency decreases and the components become closer in frequency. If the period becomes infinite, then the components form a continuum and the Fourier series sum becomes an integral.
Let $\tilde{x}(t)$ be the periodic extension of x(t) with period T. Its Fourier series coefficients are
$$a_k = \frac{1}{T}\int_{-T/2}^{T/2} \tilde{x}(t)e^{-jk\omega_0 t}\,dt. \tag{3.17}$$
Since $\tilde{x}(t) = x(t)$ for $-T/2 < t < T/2$ and $x(t) = 0$ outside this interval, Equation (3.17) can be rewritten as
$$a_k = \frac{1}{T}\int_{-T/2}^{T/2} x(t)e^{-jk\omega_0 t}\,dt = \frac{1}{T}\int_{-\infty}^{+\infty} x(t)e^{-jk\omega_0 t}\,dt.$$
Let us define
$$X(jk\omega_0) = Ta_k = \int_{-\infty}^{+\infty} x(t)e^{-jk\omega_0 t}\,dt, \tag{3.18}$$
then
$$a_k = \frac{1}{T}X(jk\omega_0). \tag{3.19}$$
Here, x(t) and X(jω) are a Fourier transform pair and we can write the short-hand notation
$$x(t) \overset{FT}{\longleftrightarrow} X(j\omega).$$
Equation (3.22) is termed the Fourier transform of x(t) and Equation (3.21) the inverse Fourier transform of X(jω). Equations (3.21) and (3.22) are also referred to as the synthesis equation and analysis equation respectively.
2. x(t) should have only a finite number of maxima and minima within any finite interval.
3. x(t) should have only a finite number of discontinuities within any finite interval, and each of these discontinuities must be finite.
Example
Determine the Fourier transform of the following signal and plot the spectrum:
$$x(t) = e^{-at}u(t), \qquad a > 0.$$
Solution
We have
$$X(j\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt = \int_{0}^{+\infty} e^{-at}e^{-j\omega t}\,dt = \int_{0}^{+\infty} e^{-(a+j\omega)t}\,dt = \left.-\frac{1}{a+j\omega}e^{-(a+j\omega)t}\right|_{0}^{\infty}$$
$$\therefore\; X(j\omega) = \frac{1}{a+j\omega}.$$
For the magnitude spectrum and phase spectrum,
$$X(j\omega) = \frac{1}{a+j\omega}\cdot\frac{a-j\omega}{a-j\omega} = \frac{a}{a^2+\omega^2} - j\frac{\omega}{a^2+\omega^2}.$$
Therefore, the magnitude spectrum is
$$|X(j\omega)| = \frac{1}{\sqrt{a^2+\omega^2}}$$
and the phase spectrum is
$$\angle X(j\omega) = -\tan^{-1}\left(\frac{\omega}{a}\right).$$
The magnitude spectrum and the phase spectrum of the given signal are plotted in the corresponding figures.
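The closed form $X(j\omega) = 1/(a+j\omega)$ can be checked against a direct numerical evaluation of the analysis integral. The value a = 2 and the test frequencies are arbitrary illustrative choices:

```python
import numpy as np

a = 2.0
t = np.linspace(0, 40, 400_000)   # e^{-at} is negligible well before t = 40
dt = t[1] - t[0]
x = np.exp(-a * t)

for w in (0.0, 1.0, 5.0):
    X_num = np.sum(x * np.exp(-1j * w * t)) * dt   # approx of the FT integral
    X_cf = 1.0 / (a + 1j * w)
    assert abs(X_num - X_cf) < 1e-3
    # |X(jw)| = 1 / sqrt(a^2 + w^2)
    assert abs(abs(X_cf) - 1 / np.hypot(a, w)) < 1e-12

# phase spectrum: angle X(jw) = -arctan(w / a)
w = 3.0
assert abs(np.angle(1 / (a + 1j * w)) + np.arctan(w / a)) < 1e-12
```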
Example
Determine the Fourier transform of the rectangular pulse $x(t) = 1$ for $|t| < T_1$ and $x(t) = 0$ for $|t| > T_1$.
Solution
For the given signal,
$$X(j\omega) = \int_{-T_1}^{T_1} e^{-j\omega t}\,dt = \left.-\frac{1}{j\omega}e^{-j\omega t}\right|_{-T_1}^{T_1} = -\frac{1}{j\omega}\left(e^{-j\omega T_1} - e^{j\omega T_1}\right) = \frac{2}{\omega}\cdot\frac{e^{j\omega T_1} - e^{-j\omega T_1}}{2j}$$
$$\therefore\; X(j\omega) = 2T_1\,\frac{\sin(\omega T_1)}{\omega T_1}.$$
The spectrum X(jω) is as shown in Figure 3.11.
Example
Determine the inverse Fourier transform of
$$X(j\omega) = \begin{cases} 1, & |\omega| < W \\ 0, & |\omega| > W \end{cases}.$$
Solution
Using the synthesis equation for the given spectrum,
$$x(t) = \frac{1}{2\pi}\int_{-W}^{W} e^{j\omega t}\,d\omega = \frac{\sin(Wt)}{\pi t}.$$
In general, if
$$X(j\omega) = \sum_{k=-\infty}^{\infty} 2\pi a_k\,\delta(\omega - k\omega_0), \tag{3.25}$$
then
$$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}. \tag{3.26}$$
Equation (3.26) is actually the Fourier series representation of a periodic signal with Fourier series coefficients {a_k}. Therefore, the Fourier transform of a periodic signal can be determined as a train of impulses occurring at the harmonically related frequencies; the area of the impulse occurring at ω = kω₀ is 2π times the Fourier series coefficient $a_k$.
Example
Find the Fourier transform of the following periodic square wave for T = 4T₁.
Solution
We know that the Fourier transform of a continuous-time periodic signal x(t) can be evaluated as
$$X(j\omega) = \sum_{k=-\infty}^{\infty} 2\pi a_k\,\delta(\omega - k\omega_0).$$
1. Linearity
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$ and $y(t) \overset{FT}{\longleftrightarrow} Y(j\omega)$, then
$$z(t) = Ax(t) + By(t) \overset{FT}{\longleftrightarrow} Z(j\omega) = AX(j\omega) + BY(j\omega).$$
Proof
$$Z(j\omega) = \int_{-\infty}^{\infty} z(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} [Ax(t)+By(t)]e^{-j\omega t}\,dt = A\int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt + B\int_{-\infty}^{\infty} y(t)e^{-j\omega t}\,dt$$
$$\therefore\; Z(j\omega) = AX(j\omega) + BY(j\omega).$$
2. Time Shifting
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$y(t) = x(t-t_0) \overset{FT}{\longleftrightarrow} Y(j\omega) = e^{-j\omega t_0}X(j\omega).$$
Proof
$$Y(j\omega) = \int_{-\infty}^{\infty} y(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(t-t_0)e^{-j\omega t}\,dt.$$
Put $t - t_0 = \tau$; then
$$Y(j\omega) = \int_{-\infty}^{\infty} x(\tau)e^{-j\omega(\tau+t_0)}\,d\tau = e^{-j\omega t_0}\int_{-\infty}^{\infty} x(\tau)e^{-j\omega\tau}\,d\tau$$
$$\therefore\; Y(j\omega) = e^{-j\omega t_0}X(j\omega).$$
3. Frequency Shifting
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$y(t) = e^{j\omega_0 t}x(t) \overset{FT}{\longleftrightarrow} X(j(\omega - \omega_0)).$$
Proof
$$Y(j\omega) = \int_{-\infty}^{\infty} y(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} e^{j\omega_0 t}x(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(t)e^{-j(\omega-\omega_0)t}\,dt$$
$$\therefore\; Y(j\omega) = X(j(\omega - \omega_0)).$$
4. Time Reversal
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$y(t) = x(-t) \overset{FT}{\longleftrightarrow} Y(j\omega) = X(-j\omega).$$
Proof
$$Y(j\omega) = \int_{-\infty}^{\infty} y(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(-t)e^{-j\omega t}\,dt.$$
Put $-t = \tau$; then
$$Y(j\omega) = \int_{-\infty}^{\infty} x(\tau)e^{j\omega\tau}\,d\tau = X(-j\omega).$$
5. Time Scaling
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$y(t) = x(at) \overset{FT}{\longleftrightarrow} Y(j\omega) = \frac{1}{|a|}X\left(\frac{j\omega}{a}\right),$$
where a is a real constant.
Proof
$$Y(j\omega) = \int_{-\infty}^{\infty} y(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(at)e^{-j\omega t}\,dt.$$
Put $at = \tau$; then
$$Y(j\omega) = \begin{cases} \dfrac{1}{a}\displaystyle\int_{-\infty}^{\infty} x(\tau)e^{-j\frac{\omega}{a}\tau}\,d\tau, & a > 0 \\[2mm] -\dfrac{1}{a}\displaystyle\int_{-\infty}^{\infty} x(\tau)e^{-j\frac{\omega}{a}\tau}\,d\tau, & a < 0 \end{cases}$$
$$\therefore\; Y(j\omega) = \frac{1}{|a|}X\left(\frac{j\omega}{a}\right).$$
Therefore, time-scaling a signal x(t) by a factor of 'a' corresponds to frequency-scaling by '1/a' and amplitude-scaling by '1/|a|' of its frequency spectrum.
6. Conjugation
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$x^*(t) \overset{FT}{\longleftrightarrow} X^*(-j\omega).$$
Proof
We have
$$X(j\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt.$$
Taking the complex conjugate of both sides and replacing ω by −ω,
$$X^*(-j\omega) = \int_{-\infty}^{\infty} x^*(t)e^{-j\omega t}\,dt.$$
Hence,
$$x^*(t) \overset{FT}{\longleftrightarrow} X^*(-j\omega).$$
Note
(a) If x(t) is real, then $X^*(-j\omega) = X(j\omega)$, and we have
$$\operatorname{Re}\{X(j\omega)\} = \operatorname{Re}\{X(-j\omega)\},$$
$$\operatorname{Im}\{X(j\omega)\} = -\operatorname{Im}\{X(-j\omega)\},$$
$$|X(j\omega)| = |X(-j\omega)|,$$
$$\angle X(j\omega) = -\angle X(-j\omega).$$
(b) If x(t) is real and even, then X(jω) is also real and even.
(c) If x(t) is real and odd, then X(jω) is purely imaginary and odd.
(d) x(t) and X(jω) can be decomposed into even and odd parts.
7. Multiplication
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$ and $y(t) \overset{FT}{\longleftrightarrow} Y(j\omega)$, then
$$z(t) = x(t)y(t) \overset{FT}{\longleftrightarrow} Z(j\omega) = \frac{1}{2\pi}[X(j\omega) * Y(j\omega)].$$
Proof
$$Z(j\omega) = \int_{-\infty}^{\infty} z(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(t)y(t)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(t)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} Y(j\omega')e^{j\omega' t}\,d\omega'\right]e^{-j\omega t}\,dt$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty} x(t)\int_{-\infty}^{\infty} Y(j\omega')e^{-j(\omega-\omega')t}\,d\omega'\,dt.$$
Put $\omega - \omega' = \sigma$; then
$$Z(j\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} x(t)\int_{-\infty}^{\infty} Y(j(\omega-\sigma))e^{-j\sigma t}\,d\sigma\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} Y(j(\omega-\sigma))\left[\int_{-\infty}^{\infty} x(t)e^{-j\sigma t}\,dt\right]d\sigma = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\sigma)Y(j(\omega-\sigma))\,d\sigma$$
$$\therefore\; Z(j\omega) = \frac{1}{2\pi}[X(j\omega) * Y(j\omega)].$$
The multiplication property tells us that multiplication in the time domain corresponds to convolution in the frequency domain. The multiplication property is sometimes also referred to as the modulation property.
8. Convolution
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$ and $h(t) \overset{FT}{\longleftrightarrow} H(j\omega)$, then
$$y(t) = x(t)*h(t) \overset{FT}{\longleftrightarrow} Y(j\omega) = X(j\omega)H(j\omega).$$
Proof
We have
$$Y(j\omega) = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} x(\tau)h(t-\tau)\,d\tau\right]e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(\tau)\left[\int_{-\infty}^{\infty} h(t-\tau)e^{-j\omega t}\,dt\right]d\tau.$$
By the time-shifting property, the inner integral equals $e^{-j\omega\tau}H(j\omega)$. Therefore,
$$Y(j\omega) = \int_{-\infty}^{\infty} x(\tau)e^{-j\omega\tau}H(j\omega)\,d\tau = H(j\omega)\int_{-\infty}^{\infty} x(\tau)e^{-j\omega\tau}\,d\tau$$
$$\therefore\; Y(j\omega) = H(j\omega)X(j\omega).$$
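The discrete analogue of the convolution property is easy to verify numerically: a linear convolution of two finite sequences equals the inverse FFT of the product of their (sufficiently zero-padded) FFTs. The random test sequences are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
h = rng.standard_normal(30)

y = np.convolve(x, h)          # direct linear convolution, length 50+30-1 = 79
M = len(y)                     # pad so circular convolution equals linear
Y = np.fft.fft(x, M) * np.fft.fft(h, M)   # convolution property in frequency
assert np.allclose(np.fft.ifft(Y).real, y)
```

The zero-padding to length M = 79 is what makes the FFT-based (circular) convolution coincide with the linear one.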
9. Parseval's Relation
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$E = \int_{-\infty}^{\infty} |x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |X(j\omega)|^2\,d\omega.$$
Proof
The total energy of a continuous-time aperiodic signal is determined as
$$E = \int_{-\infty}^{\infty} |x(t)|^2\,dt = \int_{-\infty}^{\infty} x(t)x^*(t)\,dt = \int_{-\infty}^{\infty} x(t)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} X^*(j\omega)e^{-j\omega t}\,d\omega\right]dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} X^*(j\omega)\left[\int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt\right]d\omega.$$
Therefore,
$$\int_{-\infty}^{\infty} |x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |X(j\omega)|^2\,d\omega.$$
Parseval's relation tells us that the total energy may be determined either by computing the energy per unit time, $|x(t)|^2$, and integrating over all time, or by computing the energy per unit frequency, $|X(j\omega)|^2/2\pi$, and integrating over all frequencies. For this reason, $|X(j\omega)|^2$ is often referred to as the energy-density spectrum or the energy spectral density of the signal x(t).
10. Differentiation
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$\frac{dx(t)}{dt} \overset{FT}{\longleftrightarrow} j\omega X(j\omega).$$
Proof
We have
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)e^{j\omega t}\,d\omega.$$
Differentiating both sides with respect to t, we get
$$\frac{dx(t)}{dt} = \frac{1}{2\pi}\int_{-\infty}^{\infty} j\omega X(j\omega)e^{j\omega t}\,d\omega.$$
Therefore,
$$\frac{dx(t)}{dt} \overset{FT}{\longleftrightarrow} j\omega X(j\omega).$$
11. Integration
If $x(t) \overset{FT}{\longleftrightarrow} X(j\omega)$, then
$$\int_{-\infty}^{t} x(\tau)\,d\tau \overset{FT}{\longleftrightarrow} \frac{1}{j\omega}X(j\omega) + \pi X(0)\delta(\omega).$$
Proof
We have
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)e^{j\omega t}\,d\omega.$$
Integrating both sides with respect to t, we get
$$\int_{-\infty}^{t} x(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{1}{j\omega}X(j\omega)e^{j\omega t}\,d\omega.$$
Therefore,
$$\int_{-\infty}^{t} x(\tau)\,d\tau \overset{FT}{\longleftrightarrow} \frac{1}{j\omega}X(j\omega) + \pi X(0)\delta(\omega).$$
The impulse term on the right-hand side is the dc component that can result from integration.
12. Duality
We have the Fourier transform pair equations
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)e^{j\omega t}\,d\omega, \tag{3.27}$$
$$X(j\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt. \tag{3.28}$$
For $\tilde{x}[n]$, we have the Fourier series representation
$$\tilde{x}[n] = \sum_{k=\langle N\rangle} a_k e^{jk\frac{2\pi}{N}n}. \tag{3.31}$$
Since $x[n] = \tilde{x}[n]$ over a period extending from $-N/2$ to $N/2 - 1$ and x[n] is zero outside this interval, let
$$X(e^{jk\omega_0}) = Na_k = \sum_{n=-\infty}^{\infty} x[n]e^{-jk\frac{2\pi}{N}n}, \tag{3.33}$$
then
$$a_k = \frac{1}{N}X(e^{jk\omega_0}). \tag{3.34}$$
Using Equation (3.34) in (3.31), we get
$$\tilde{x}[n] = \sum_{k=\langle N\rangle} \frac{1}{N}X(e^{jk\omega_0})e^{jk\omega_0 n}. \tag{3.35}$$
In the limit as N grows, this leads to
$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]e^{-j\omega n}. \tag{3.38}$$
Here, Equation (3.37) is the inverse discrete-time Fourier transform (I-DTFT) and is called the synthesis equation, whereas Equation (3.38) is the discrete-time Fourier transform (DTFT) and is called the analysis equation.
Example
Consider the signal $x[n] = a^n u[n]$, $0 < a < 1$. Find the Fourier transform of x[n].
Solution
The given signal x[n] is shown in Figure 3.18. The Fourier transform of x[n] can be determined as
$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]e^{-j\omega n} = \sum_{n=0}^{\infty} a^n e^{-j\omega n} = \sum_{n=0}^{\infty}\left(ae^{-j\omega}\right)^n$$
$$\therefore\; X(e^{j\omega}) = \frac{1}{1 - ae^{-j\omega}}.$$
For the magnitude and phase spectra,
$$X(e^{j\omega}) = \frac{1}{1 - ae^{-j\omega}} = \frac{1}{1 - a\cos(\omega) + ja\sin(\omega)} = \frac{1 - a\cos(\omega) - ja\sin(\omega)}{(1 - a\cos(\omega))^2 + (a\sin(\omega))^2} = \frac{1 - a\cos(\omega)}{(1 - a\cos(\omega))^2 + (a\sin(\omega))^2} - j\frac{a\sin(\omega)}{(1 - a\cos(\omega))^2 + (a\sin(\omega))^2}.$$
Therefore,
$$|X(e^{j\omega})| = \frac{1}{\sqrt{(1 - a\cos(\omega))^2 + (a\sin(\omega))^2}} = \frac{1}{\sqrt{1 - 2a\cos(\omega) + a^2\cos^2(\omega) + a^2\sin^2(\omega)}} = \frac{1}{\sqrt{1 - 2a\cos(\omega) + a^2}},$$
and
$$\angle X(e^{j\omega}) = -\tan^{-1}\left(\frac{a\sin(\omega)}{1 - a\cos(\omega)}\right).$$
The magnitude spectrum and the phase spectrum are plotted in Figures 3.19 (a) and (b) respectively.
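The geometric-series closed form and its magnitude expression can be checked numerically; the value a = 0.7 and the test frequencies are arbitrary illustrative choices, and the infinite sum is truncated where $a^n$ is negligible:

```python
import numpy as np

a = 0.7
n = np.arange(0, 200)          # 0.7**200 is far below machine precision
x = a ** n                     # samples of a^n u[n]

for w in (0.0, 0.5, np.pi / 2, np.pi):
    X_num = np.sum(x * np.exp(-1j * w * n))        # truncated DTFT sum
    X_cf = 1.0 / (1.0 - a * np.exp(-1j * w))       # closed form
    assert abs(X_num - X_cf) < 1e-10

# |X(e^{jw})| = 1 / sqrt(1 - 2a cos(w) + a^2)
w = 0.5
assert abs(abs(1 / (1 - a * np.exp(-1j * w)))
           - 1 / np.sqrt(1 - 2 * a * np.cos(w) + a * a)) < 1e-12
```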
There is no convergence issue associated with the synthesis equation, since the integration is over a finite interval.
Let us consider a periodic signal x[n] with period N and Fourier series representation
$$x[n] = \sum_{k=\langle N\rangle} a_k e^{jk\omega_0 n}.$$
Example
Determine the Fourier transform of the periodic signal $x[n] = \cos(\omega_0 n)$.
Solution
We know
$$x[n] = \frac{1}{2}e^{j\omega_0 n} + \frac{1}{2}e^{-j\omega_0 n}.$$
Since
$$e^{j\omega_0 n} \overset{FT}{\longleftrightarrow} \sum_{l=-\infty}^{\infty} 2\pi\delta(\omega - \omega_0 - 2\pi l),$$
therefore
$$X(e^{j\omega}) = \sum_{l=-\infty}^{\infty} \pi\delta(\omega - \omega_0 - 2\pi l) + \sum_{l=-\infty}^{\infty} \pi\delta(\omega + \omega_0 - 2\pi l).$$
That is,
$$X(e^{j\omega}) = \pi\delta(\omega - \omega_0) + \pi\delta(\omega + \omega_0), \qquad -\pi \le \omega < \pi,$$
and $X(e^{j\omega})$ repeats periodically with period 2π. The spectrum is shown in Figure 3.20.
1. Periodicity
The discrete-time Fourier transform is periodic in ω with period 2π. That is,
$$X\left(e^{j(\omega+2\pi)}\right) = X(e^{j\omega}).$$
2. Linearity
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$ and $y[n] \overset{FT}{\longleftrightarrow} Y(e^{j\omega})$, then
$$Ax[n] + By[n] \overset{FT}{\longleftrightarrow} AX(e^{j\omega}) + BY(e^{j\omega}).$$
3. Time Shifting
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$x[n-n_0] \overset{FT}{\longleftrightarrow} e^{-j\omega n_0}X(e^{j\omega}).$$
4. Frequency Shifting
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$e^{j\omega_0 n}x[n] \overset{FT}{\longleftrightarrow} X\left(e^{j(\omega-\omega_0)}\right).$$
5. Time Reversal
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$x[-n] \overset{FT}{\longleftrightarrow} X(e^{-j\omega}).$$
6. Time Scaling
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$y[n] = x[pn] \overset{FT}{\longleftrightarrow} Y(e^{j\omega}) = X(e^{j\omega/p}).$$
Proof
$$Y(e^{j\omega}) = \sum_{n=-\infty}^{\infty} y[n]e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x[pn]e^{-j\omega n}.$$
Put $pn = m$; then
$$Y(e^{j\omega}) = \sum_{m} x[m]e^{-j\frac{\omega}{p}m}$$
$$\therefore\; Y(e^{j\omega}) = X(e^{j\omega/p}).$$
Similarly, if $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$y[n] = x[n/p] \overset{FT}{\longleftrightarrow} Y(e^{j\omega}) = X(e^{jp\omega}),$$
where y[n] is taken as zero when n is not a multiple of p.
Proof
$$Y(e^{j\omega}) = \sum_{n=-\infty}^{\infty} y[n]e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x[n/p]e^{-j\omega n} = \sum_{m=-\infty}^{\infty} x[m]e^{-j\omega pm} = X(e^{jp\omega}).$$
7. Conjugation
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$x^*[n] \overset{FT}{\longleftrightarrow} X^*(e^{-j\omega}).$$
Note:
(a) If x[n] is real, then
$$X^*(e^{-j\omega}) = X(e^{j\omega}),$$
and we have
i. $\operatorname{Re}\{X(e^{j\omega})\} = \operatorname{Re}\{X(e^{-j\omega})\}$
ii. $\operatorname{Im}\{X(e^{j\omega})\} = -\operatorname{Im}\{X(e^{-j\omega})\}$
iii. $|X(e^{j\omega})| = |X(e^{-j\omega})|$
iv. $\angle X(e^{j\omega}) = -\angle X(e^{-j\omega})$
v. $x_e[n] \overset{FT}{\longleftrightarrow} \operatorname{Re}\{X(e^{j\omega})\}$
vi. $x_o[n] \overset{FT}{\longleftrightarrow} j\operatorname{Im}\{X(e^{j\omega})\}$
(b) If x[n] is real and even, then $X(e^{j\omega})$ is also real and even.
(c) If x[n] is real and odd, then $X(e^{j\omega})$ is odd and purely imaginary.
8. Multiplication
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$ and $y[n] \overset{FT}{\longleftrightarrow} Y(e^{j\omega})$, then
$$z[n] = x[n]y[n] \overset{FT}{\longleftrightarrow} Z(e^{j\omega}) = \frac{1}{2\pi}\int_{2\pi} X(e^{j\theta})Y(e^{j(\omega-\theta)})\,d\theta.$$
Proof
$$Z(e^{j\omega}) = \sum_{n=-\infty}^{\infty} z[n]e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x[n]y[n]e^{-j\omega n} = \sum_{n=-\infty}^{\infty} y[n]\left[\frac{1}{2\pi}\int_{2\pi} X(e^{j\theta})e^{j\theta n}\,d\theta\right]e^{-j\omega n}$$
$$= \frac{1}{2\pi}\int_{2\pi} X(e^{j\theta})\left[\sum_{n=-\infty}^{\infty} y[n]e^{-j(\omega-\theta)n}\right]d\theta$$
$$\therefore\; Z(e^{j\omega}) = \frac{1}{2\pi}\int_{2\pi} X(e^{j\theta})Y(e^{j(\omega-\theta)})\,d\theta.$$
9. Convolution
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$ and $h[n] \overset{FT}{\longleftrightarrow} H(e^{j\omega})$, then
$$y[n] = x[n]*h[n] \overset{FT}{\longleftrightarrow} Y(e^{j\omega}) = X(e^{j\omega})H(e^{j\omega}).$$
Proof
$$Y(e^{j\omega}) = \sum_{n=-\infty}^{\infty} y[n]e^{-j\omega n} = \sum_{n=-\infty}^{\infty}\left[\sum_{k=-\infty}^{\infty} x[k]h[n-k]\right]e^{-j\omega n} = \sum_{k=-\infty}^{\infty} x[k]\left[\sum_{n=-\infty}^{\infty} h[n-k]e^{-j\omega n}\right].$$
So,
$$Y(e^{j\omega}) = \sum_{k=-\infty}^{\infty} x[k]e^{-j\omega k}H(e^{j\omega}) = H(e^{j\omega})\sum_{k=-\infty}^{\infty} x[k]e^{-j\omega k} = H(e^{j\omega})X(e^{j\omega})$$
$$\therefore\; Y(e^{j\omega}) = X(e^{j\omega})H(e^{j\omega}).$$
10. Parseval's Relation
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$E = \sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi}\int_{2\pi} |X(e^{j\omega})|^2\,d\omega.$$
Proof
$$E = \sum_{n=-\infty}^{\infty} |x[n]|^2 = \sum_{n=-\infty}^{\infty} x[n]x^*[n] = \sum_{n=-\infty}^{\infty} x[n]\left[\frac{1}{2\pi}\int_{2\pi} X^*(e^{j\omega})e^{-j\omega n}\,d\omega\right] = \frac{1}{2\pi}\int_{2\pi} X^*(e^{j\omega})\left[\sum_{n=-\infty}^{\infty} x[n]e^{-j\omega n}\right]d\omega = \frac{1}{2\pi}\int_{2\pi} X^*(e^{j\omega})X(e^{j\omega})\,d\omega$$
$$\therefore\; E = \frac{1}{2\pi}\int_{2\pi} |X(e^{j\omega})|^2\,d\omega.$$
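Parseval's relation for the DTFT can be checked numerically using $x[n] = a^n u[n]$, whose energy is $\sum a^{2n} = 1/(1-a^2)$; the frequency-side integral uses the closed-form spectrum from the earlier example. The value a = 0.6 is an arbitrary illustrative choice:

```python
import numpy as np

a = 0.6
n = np.arange(0, 200)                         # tail beyond n=200 is negligible
E_time = np.sum((a ** n) ** 2)                # sum of a^{2n} = 1/(1 - a^2)
assert abs(E_time - 1 / (1 - a * a)) < 1e-12

# (1/2 pi) * integral over one period of |X(e^{jw})|^2 dw, Riemann sum
w = np.linspace(-np.pi, np.pi, 200_001)
dw = w[1] - w[0]
X = 1.0 / (1.0 - a * np.exp(-1j * w))
E_freq = np.sum(np.abs(X[:-1]) ** 2) * dw / (2 * np.pi)
assert abs(E_freq - E_time) < 1e-6
```

A plain Riemann sum is accurate here because the integrand is smooth and periodic over exactly one period.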
12. Accumulation
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$\sum_{m=-\infty}^{n} x[m] \overset{FT}{\longleftrightarrow} \frac{1}{1-e^{-j\omega}}X(e^{j\omega}) + \pi X(e^{j0})\sum_{k=-\infty}^{\infty} \delta(\omega - 2\pi k).$$
13. Differentiation in Frequency
If $x[n] \overset{FT}{\longleftrightarrow} X(e^{j\omega})$, then
$$nx[n] \overset{FT}{\longleftrightarrow} j\frac{dX(e^{j\omega})}{d\omega}.$$
CHAPTER 4. DISCRETE FOURIER TRANSFORM (DFT)
where
$$x_p[n] = \sum_{l=-\infty}^{\infty} x[n - lN] \tag{4.4}$$
is obtained by the periodic repetition of x[n] every N samples and hence is periodic with fundamental period N. The sequence $x_p[n]$ can be represented by a Fourier series as
$$x_p[n] = \sum_{k=0}^{N-1} a_k e^{jk\frac{2\pi}{N}n}, \qquad n = 0, 1, 2, \cdots, N-1. \tag{4.5}$$
Figure 4.2: (a) Aperiodic signal of length L = 4. (b) Periodic version $x_p[n]$ of x[n] for N ≥ L; here N = 6. (c) Periodic version $x_p[n]$ of x[n] for N < L; here N = 3.
And, x[n] can be reconstructed from its frequency samples by using Equation (4.8). In Equation (4.9), we see that zeros have been padded for L ≤ n ≤ N − 1. It is to be noted that zero-padding does not add any information about the spectrum of the sequence.
For a finite-duration sequence x[n] of length L (i.e., x[n] = 0 for n < 0 and n ≥ L), the Fourier transform can be determined as
$$X(e^{j\omega}) = \sum_{n=0}^{L-1} x[n]e^{-j\omega n}, \qquad 0 \le \omega \le 2\pi. \tag{4.10}$$
The upper index in the sum can be increased from L − 1 to N − 1, since x[n] = 0 for n ≥ L. The relation in Equation (4.11) is called the discrete Fourier transform (DFT) of x[n]. Equation (4.8) can be rewritten as
$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]e^{jk\frac{2\pi}{N}n}, \qquad n = 0, 1, 2, \cdots, N-1. \tag{4.12}$$
The relation in Equation (4.12) is called the inverse DFT (IDFT). Hence, we have the DFT pair equations
$$\text{DFT:}\quad X[k] = \sum_{n=0}^{N-1} x[n]e^{-jk\frac{2\pi}{N}n}, \qquad k = 0, 1, 2, \cdots, N-1, \tag{4.13}$$
$$\text{IDFT:}\quad x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]e^{jk\frac{2\pi}{N}n}, \qquad n = 0, 1, 2, \cdots, N-1. \tag{4.14}$$
In terms of the twiddle factor $W_N = e^{-j\frac{2\pi}{N}}$, the IDFT is
$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]W_N^{-kn}, \qquad n = 0, 1, 2, \cdots, N-1. \tag{4.16}$$
We can view the DFT and IDFT as linear transformations on the sequences {x[n]} and {X[k]}, respectively. Let us define an N-point vector $\mathbf{x}_N$ of the signal x[n], an N-point vector $\mathbf{X}_N$ of the frequency samples X[k], and an N × N matrix $\mathbf{W}_N$ as
$$\mathbf{x}_N = \begin{bmatrix} x[0] \\ x[1] \\ x[2] \\ \vdots \\ x[N-1] \end{bmatrix}, \qquad \mathbf{X}_N = \begin{bmatrix} X[0] \\ X[1] \\ X[2] \\ \vdots \\ X[N-1] \end{bmatrix},$$
$$\mathbf{W}_N = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & W_N^1 & W_N^2 & \cdots & W_N^{N-1} \\ 1 & W_N^2 & W_N^4 & \cdots & W_N^{2(N-1)} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & W_N^{N-1} & W_N^{2(N-1)} & \cdots & W_N^{(N-1)(N-1)} \end{bmatrix}.$$
Then the DFT can be expressed as
$$\mathbf{X}_N = \mathbf{W}_N\mathbf{x}_N,$$
and the IDFT as
$$\mathbf{x}_N = \frac{1}{N}\mathbf{W}_N^*\mathbf{X}_N,$$
where the matrix $\mathbf{W}_N^*$ is the complex conjugate of the matrix $\mathbf{W}_N$.
Example
Compute the 4-point DFT of the sequence x[n] = {0, 1, 2, 3} with and without the linear transformation method.
Solution
Without the linear transformation method,
$$X[k] = \sum_{n=0}^{N-1} x[n]e^{-jk\frac{2\pi}{N}n} = \sum_{n=0}^{3} x[n]e^{-jk\frac{2\pi}{4}n}, \qquad k = 0, 1, 2, 3.$$
With the linear transformation method,
$$\begin{bmatrix} X[0] \\ X[1] \\ X[2] \\ X[3] \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -j & -1 & j \\ 1 & -1 & 1 & -1 \\ 1 & j & -1 & -j \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 1\times 0 + 1\times 1 + 1\times 2 + 1\times 3 \\ 1\times 0 - j\times 1 - 1\times 2 + j\times 3 \\ 1\times 0 - 1\times 1 + 1\times 2 - 1\times 3 \\ 1\times 0 + j\times 1 - 1\times 2 - j\times 3 \end{bmatrix} = \begin{bmatrix} 6 \\ -2+j2 \\ -2 \\ -2-j2 \end{bmatrix}.$$
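The matrix form of the 4-point DFT in this example can be reproduced directly, and the result compared against a library FFT:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
N = len(x)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # the DFT matrix W_N
X = W @ x                                      # X_N = W_N x_N
assert np.allclose(X, [6, -2 + 2j, -2, -2 - 2j])
assert np.allclose(X, np.fft.fft(x))           # agrees with numpy's FFT
# IDFT: x_N = (1/N) * conj(W_N) X_N
assert np.allclose((np.conj(W) @ X) / N, x)
```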
Consider a continuous-time periodic signal x(t) with Fourier series representation
$$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t},$$
where {a_k} are the Fourier series coefficients. If x(t) is sampled at a uniform rate with $t = nT_s = n/F_s$, where $T_s$ is the sampling interval and $F_s$ is the sampling frequency, we obtain the discrete-time sequence
$$x[n] = x(nT_s) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 nT_s} = \sum_{k=-\infty}^{\infty} a_k e^{jk2\pi\frac{f_0}{F_s}n} = \sum_{k=-\infty}^{\infty} a_k e^{jk2\pi fn},$$
that is, with $N = F_s/f_0$,
$$x[n] = \sum_{k=-\infty}^{\infty} a_k e^{jk\frac{2\pi}{N}n}, \tag{4.17}$$
where
$$\tilde{a}_k = \sum_{l=-\infty}^{\infty} a_{k-lN}$$
is an aliased version of {a_k}. Comparing Equation (4.18) with the formula of the IDFT in Equation (4.14), we get
$$X[k] = N\tilde{a}_k.$$
If there is no aliasing among the coefficients, then $\tilde{a}_k = a_k$ and
$$X[k] = Na_k.$$
The spectral components X[k] are the DFT values of the periodic sequence $x_p[n]$ of period N, given by
$$x_p[n] = \sum_{l=-\infty}^{\infty} x[n - lN].$$
The sequence $x_p[n]$ is the periodic repetition of x[n]. For an infinite-duration sequence x[n], $x_p[n]$ is a time-aliased version of x[n]. But, for a finite-duration sequence x[n], if
$$x[n] = \begin{cases} x_p[n], & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases},$$
the original sequence x[n] can be reconstructed from the DFT coefficients {X[k]}.
4.2.1 Periodicity
If
$$x[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X[k],$$
then
$$x[n+N] = x[n] \quad \text{and} \quad X[k+N] = X[k].$$
4.2.2 Linearity
If
$$x_1[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_1[k] \quad \text{and} \quad x_2[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_2[k],$$
then
$$Ax_1[n] + Bx_2[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} AX_1[k] + BX_2[k].$$
If we shift the periodic sequence $x_p[n]$ by $n_0$ units to the right, we get another periodic sequence
$$x'_p[n] = x_p[n - n_0] = \sum_{l=-\infty}^{\infty} x[n - n_0 - lN],$$
which is related to the original sequence x[n] by a circular shift. Here x'[n] is the sequence that results from circularly shifting x[n] by $n_0$ time units. We take the counterclockwise direction as the positive direction. Therefore, the circular shift of an N-point sequence is equivalent to the linear shift of its periodic extension, and vice versa. This is illustrated in Figure 4.3 for a 4-point sequence x[n]. The circular shift of the sequence can be represented using the index modulo N. That is,
$$x'[n] = x[(n - n_0)]_N, \qquad 0 \le n \le N-1.$$
Circular Convolution
Let $x_1[n]$ and $x_2[n]$ be two sequences of length N. If
$$x_1[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_1[k] \quad \text{and} \quad x_2[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_2[k],$$
then
$$x_3[m] = x_1[n] \,\textcircled{N}\, x_2[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_3[k] = X_1[k]X_2[k].$$
Here, $x_3[m]$ is called the circular convolution between $x_1[n]$ and $x_2[n]$ and is expressed as
$$x_3[m] = \sum_{n=0}^{N-1} x_1[n]x_2[(m-n)]_N, \qquad m = 0, 1, 2, \cdots, N-1.$$
This property tells us that circular convolution of two sequences in the time domain results in multiplication of their corresponding DFTs in the frequency domain.
Example
Perform the circular convolution between $x_1[n] = \{1, 2, 3, 4\}$ and $x_2[n] = \{2, 3, 4, 5\}$.
Solution
The circular convolution between $x_1[n]$ and $x_2[n]$ can be computed by using the expression
$$x_3[m] = \sum_{n=0}^{N-1} x_1[n]x_2[(m-n)]_N = \sum_{n=0}^{3} x_1[n]x_2[(m-n)]_4, \qquad m = 0, 1, 2, 3.$$
Note
In circular convolution, if the lengths of the sequences $x_1[n]$ and $x_2[n]$ are not equal, zeros are padded to the smaller sequence to make its length equal to that of the larger one. Furthermore, we can compute the linear convolution of two finite-length sequences by using circular convolution. For that, the length of both sequences must be made equal to $l_1 + l_2 - 1$ by padding zeros, where $l_1$ is the length of sequence $x_1[n]$ and $l_2$ is the length of sequence $x_2[n]$.
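The circular convolution of the sequences in the example above can be evaluated directly from the defining sum and cross-checked against the DFT property:

```python
import numpy as np

def circ_conv(x1, x2):
    # x3[m] = sum_n x1[n] * x2[(m - n) mod N]
    N = len(x1)
    return np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N))
                     for m in range(N)])

x1 = np.array([1, 2, 3, 4])
x2 = np.array([2, 3, 4, 5])
x3 = circ_conv(x1, x2)
assert np.array_equal(x3, [36, 38, 36, 30])

# DFT property: circular convolution <-> X1[k] * X2[k]
via_fft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real
assert np.allclose(via_fft, x3)
```

For m = 0, for instance, $x_3[0] = 1\cdot 2 + 2\cdot 5 + 3\cdot 4 + 4\cdot 3 = 36$, matching both computations.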
If $x[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X[k]$, then
$$x[(n - n_0)]_N \overset{DFT}{\underset{N}{\longleftrightarrow}} e^{-jk\frac{2\pi}{N}n_0}X[k] \qquad \text{(circular time shifting)},$$
$$e^{jl\frac{2\pi}{N}n}x[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X[(k - l)]_N \qquad \text{(circular frequency shifting)},$$
$$x[(-n)]_N \overset{DFT}{\underset{N}{\longleftrightarrow}} X[(-k)]_N \qquad \text{(circular time reversal)}.$$
4.2.7 Conjugation
If $x[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X[k]$, then
$$x^*[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X^*[(-k)]_N.$$
4.2.8 Multiplication
If
$$x_1[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_1[k] \quad \text{and} \quad x_2[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X_2[k],$$
then
$$x_1[n]x_2[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} \frac{1}{N}\,X_1[k] \,\textcircled{N}\, X_2[k].$$
Parseval's theorem: if $x[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X[k]$, then
$$E = \sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N}\sum_{k=0}^{N-1} |X[k]|^2.$$
If
$$x[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} X[k] \quad \text{and} \quad y[n] \overset{DFT}{\underset{N}{\longleftrightarrow}} Y[k],$$
then
$$\tilde{r}_{xy}[l] = \sum_{n=0}^{N-1} x[n]y^*[(n-l)]_N \overset{DFT}{\underset{N}{\longleftrightarrow}} \tilde{R}_{XY}[k] = X[k]Y^*[k].$$
Here, $\tilde{r}_{xy}[l]$ is the circular cross-correlation sequence of x[n] and y[n].
The DFT and IDFT pair is
$$X[k] = \sum_{n=0}^{N-1} x[n]W_N^{kn}, \qquad k = 0, 1, 2, \cdots, N-1,$$
and
$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]W_N^{-kn}, \qquad n = 0, 1, 2, \cdots, N-1.$$
Since the formulas for the DFT and IDFT involve similar operations, the efficient algorithms for the DFT apply also to the IDFT. From the formula of the DFT, we can see that for each value of k, direct computation of the DFT requires N complex multiplications and N − 1 complex additions. Therefore, to compute all N DFT values, N² complex multiplications and N(N − 1) complex additions are required. The FFT algorithms are used to reduce the number of arithmetic operations for faster computation of the DFT.
$$N = r_1 r_2 r_3 \cdots r_v$$
Consider splitting the N-point sequence into the even-numbered and odd-numbered samples of x[n]. Since the two N/2-point sequences are obtained by decimating x[n] in time, the algorithm is referred to as the decimation-in-time FFT (DIT-FFT) algorithm. The DFT computations are done separately for these two sequences and combined to get the overall N-point DFT. That is,
$$X[k] = \sum_{n=0}^{N-1} x[n]W_N^{kn} = \sum_{n\ \text{even}} x[n]W_N^{kn} + \sum_{n\ \text{odd}} x[n]W_N^{kn} = \sum_{m=0}^{N/2-1} x[2m]W_N^{2km} + \sum_{m=0}^{N/2-1} x[2m+1]W_N^{k(2m+1)}$$
$$= \sum_{m=0}^{N/2-1} x[2m]W_N^{2km} + W_N^k\sum_{m=0}^{N/2-1} x[2m+1]W_N^{2km}.$$
We know that $W_N^2 = e^{-j\frac{2\pi}{N}2} = e^{-j\frac{2\pi}{N/2}} = W_{N/2}$. Therefore,
$$X[k] = \sum_{m=0}^{N/2-1} x[2m]W_{N/2}^{km} + W_N^k\sum_{m=0}^{N/2-1} x[2m+1]W_{N/2}^{km} = G[k] + W_N^k H[k], \qquad k = 0, 1, 2, \cdots, N-1, \tag{4.21}$$
where
$$G[k] = \sum_{m=0}^{N/2-1} x[2m]W_{N/2}^{km}$$
is the N/2-point DFT of the even-numbered samples, and H[k] is the N/2-point DFT of the odd-numbered samples. Since X[k] is periodic with period N, G[k] and H[k] will also be periodic, with period N/2. Also, $W_N^{k+N/2} = -W_N^k$, so Equation (4.21) can be expressed in terms of the first N/2 values of G[k] and H[k]. Repeating the decimation on g[m] = x[2m] gives
$$I[k] = \sum_{b=0}^{N/4-1} g[2b]W_{N/4}^{kb},$$
the N/4-point DFT of the even-numbered samples of g[m], and J[k], the N/4-point DFT of the odd-numbered samples. I[k] and J[k] are periodic with period N/4, and $W_{N/2}^{k+N/4} = -W_{N/2}^k$. Then Equation (4.24) can be expressed as
$$G[k] = I[k] + W_{N/2}^k J[k], \qquad k = 0, 1, 2, \cdots, N/4-1, \tag{4.25}$$
$$G[k + N/4] = I[k] - W_{N/2}^k J[k], \qquad k = 0, 1, 2, \cdots, N/4-1. \tag{4.26}$$
Similarly,
$$H[k] = \sum_{b=0}^{N/4-1} h[2b]W_{N/4}^{kb} + W_{N/2}^k\sum_{b=0}^{N/4-1} h[2b+1]W_{N/4}^{kb}, \qquad k = 0, 1, 2, \cdots, N/2-1.$$
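The decimation-in-time recursion above translates directly into code: split into even and odd samples, transform each half, then combine with the twiddle factors using $X[k] = G[k] + W_N^k H[k]$ and $X[k+N/2] = G[k] - W_N^k H[k]$. This recursive sketch trades the iterative butterfly structure of the signal flow graphs for clarity:

```python
import numpy as np

def dit_fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = dit_fft(x[0::2])                  # N/2-point DFT of even samples
    H = dit_fft(x[1::2])                  # N/2-point DFT of odd samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([G + W * H,     # X[k]       = G[k] + W_N^k H[k]
                           G - W * H])    # X[k + N/2] = G[k] - W_N^k H[k]

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
assert np.allclose(dit_fft(x), np.fft.fft(x))
```

The recursion depth is log₂N, giving the familiar O(N log N) operation count instead of the O(N²) of direct computation.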
Figure 4.5: Block diagram of radix-2 DIT-FFT algorithm for computing 8-point DFT.
Figure 4.6: Signal flow graph of radix-2 DIT-FFT algorithm for computing 8-point
DFT.
For 4−point DFT, radix-2 DIT-FFT algorithm can be described by the block
diagram and signal flow graph as shown in Figures 4.7 and 4.8 respectively.
Figure 4.7: Block diagram of radix-2 DIT-FFT algorithm for computing 4−point
DFT.
Figure 4.8: Signal flow graph of radix-2 DIT-FFT algorithm for computing 4-point DFT.
Splitting the DFT formula into two summations, one of which involves the sum over the first N/2 data points and the second of which involves the last N/2 data points, we get
$$X[k] = \sum_{n=0}^{N/2-1} x[n]W_N^{kn} + \sum_{n=N/2}^{N-1} x[n]W_N^{kn} = \sum_{n=0}^{N/2-1} x[n]W_N^{kn} + W_N^{kN/2}\sum_{n=0}^{N/2-1} x[n+N/2]W_N^{kn}.$$
Since $W_N^{kN/2} = (-1)^k$,
$$X[k] = \sum_{n=0}^{N/2-1}\left[x[n] + (-1)^k x[n+N/2]\right]W_N^{kn}.$$
Let us split X[k] into even- and odd-numbered samples. Thus, we obtain
$$X[2k] = \sum_{n=0}^{N/2-1}\left[x[n] + x[n+N/2]\right]W_{N/2}^{kn}, \qquad k = 0, 1, 2, \cdots, N/2-1,$$
and
$$X[2k+1] = \sum_{n=0}^{N/2-1}\left\{\left[x[n] - x[n+N/2]\right]W_N^n\right\}W_{N/2}^{kn}, \qquad k = 0, 1, 2, \cdots, N/2-1.$$
Let
$$g[n] = x[n] + x[n+N/2], \qquad n = 0, 1, 2, \cdots, \frac{N}{2}-1,$$
and
$$h[n] = \left[x[n] - x[n+N/2]\right]W_N^n, \qquad n = 0, 1, 2, \cdots, \frac{N}{2}-1;$$
then,
Figure 4.9: First stage of DIF-FFT algorithm for computing 8-point DFT.
$$X[2k] = \sum_{n=0}^{N/2-1} g[n]W_{N/2}^{kn}, \qquad k = 0, 1, 2, \cdots, \frac{N}{2}-1,$$
and
$$X[2k+1] = \sum_{n=0}^{N/2-1} h[n]W_{N/2}^{kn}, \qquad k = 0, 1, 2, \cdots, \frac{N}{2}-1.$$
Figure 4.10: Signal flow graph of radix-2 DIF-FFT algorithm for computing 8-point DFT.
Applications of FFT
1. Filtering
2. Correlation
3. Spectrum analysis
CHAPTER 5. ENERGY AND POWER
Let us consider a power signal x(t) as shown in Figure 5.1. We can treat the power signal as a limiting case of an energy signal. Let us consider the truncated version $x_T(t)$ of the signal x(t) as shown in Figure 5.2. As long as T is finite, $x_T(t)$ has finite energy. The average power in terms of $x_T(t)$ is
$$P = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} |x_T(t)|^2\,dt.$$
Let $X_T(j\omega)$ be the Fourier transform of $x_T(t)$; then, using Parseval's theorem,
$$\int_{-\infty}^{\infty} |x_T(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} |X_T(j\omega)|^2\,d\omega = \int_{-\infty}^{\infty} |X_T(f)|^2\,df.$$
Therefore,
$$P = \lim_{T\to\infty}\frac{1}{T}\int_{-\infty}^{\infty} |X_T(f)|^2\,df = \int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{|X_T(f)|^2}{T}\,df = \int_{-\infty}^{\infty} S(f)\,df,$$
where
$$S(f) = \lim_{T\to\infty}\frac{|X_T(f)|^2}{T}$$
is called the power spectral density (PSD) or power density spectrum of the power signal x(t). Therefore, the total average power of a power signal x(t) is the sum of the average powers contributed by all the spectral components.
Properties of PSD
1. The PSD of a power signal x(t) is a non-negative real-valued function of frequency. That is,
$$S(f) \ge 0.$$
2. The PSD of a real-valued power signal x(t) is an even function of frequency. That is,
$$S(f) = S(-f).$$
3. The total area under the curve of the PSD of a power signal x(t) is equal to the average power of the signal. That is,
$$P = \int_{-\infty}^{\infty} S(f)\,df.$$
4. For a power signal, the auto-correlation function and the PSD function form a Fourier transform pair. That is,
$$R(\tau) \overset{FT}{\longleftrightarrow} S(f).$$
Let us consider an energy signal x(t) with Fourier transform X(jω). Then, according to Parseval's theorem,
$$E = \frac{1}{2\pi}\int_{-\infty}^{\infty} |X(j\omega)|^2\,d\omega = \int_{-\infty}^{\infty} |X(f)|^2\,df = \int_{-\infty}^{\infty} \psi(f)\,df,$$
where
$$\psi(f) = |X(f)|^2$$
is called the energy spectral density (ESD) of x(t). Therefore, the total energy of a signal x(t) is the sum of the energies contributed by all the spectral components of the signal x(t).
Properties of ESD
1. The ESD of an energy signal x(t) is a non-negative real-valued function of frequency. That is,
$$\psi(f) \ge 0.$$
2. The ESD of a real-valued energy signal x(t) is an even function of frequency. That is,
$$\psi(f) = \psi(-f).$$
3. The total area under the curve of the ESD of an energy signal x(t) is equal to the total energy of the signal.
4. For an energy signal x(t), the auto-correlation function and the ESD function form a Fourier transform pair. That is,
$$R(\tau) \overset{FT}{\longleftrightarrow} \psi(f).$$
5.4 Correlation
Correlation is defined as a measure of similarity between two signals. Correlation is of two types:
1. Auto-correlation
2. Cross-correlation.
5.4.1 Auto-Correlation
Auto-correlation is defined as the measure of similarity between a signal x(t) and its delayed version x(t − τ). For an energy signal x(t), it is defined by
$$R(\tau) = \int_{-\infty}^{\infty} x(t)x^*(t-\tau)\,dt.$$
Also,
$$R(\tau) = \int_{-\infty}^{\infty} x(t+\tau)x^*(t)\,dt.$$
1. The auto-correlation function exhibits conjugate symmetry. That is,
$$R^*(-\tau) = R(\tau).$$
Proof
$$R(\tau) = \int_{-\infty}^{\infty} x(t)x^*(t-\tau)\,dt.$$
Taking the complex conjugate on both sides, we get
$$R^*(\tau) = \int_{-\infty}^{\infty} x^*(t)x(t-\tau)\,dt.$$
Replacing τ by −τ, then
$$R^*(-\tau) = \int_{-\infty}^{\infty} x^*(t)x(t+\tau)\,dt = \int_{-\infty}^{\infty} x(t+\tau)x^*(t)\,dt$$
$$\therefore\; R^*(-\tau) = R(\tau).$$
2. The value of the auto-correlation function at τ = 0 (the origin) is equal to the total energy of the signal. That is,
$$R(0) = \int_{-\infty}^{\infty} |x(t)|^2\,dt = E.$$
Proof
For τ = 0,
$$R(0) = \int_{-\infty}^{\infty} x(t)x^*(t)\,dt = \int_{-\infty}^{\infty} |x(t)|^2\,dt = E.$$
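Both the definition of R(τ) and the property R(0) = E can be checked numerically. The signal $x(t) = e^{-t}u(t)$ below is an arbitrary illustrative choice; for it, $E = 1/2$ and $R(\tau) = e^{-\tau}/2$ for τ ≥ 0:

```python
import numpy as np

t = np.linspace(0, 30, 300_000)     # e^{-t} is negligible well before t = 30
dt = t[1] - t[0]
x = np.exp(-t)                      # energy signal; E = integral of e^{-2t} = 1/2

def R(tau):
    # R(tau) = integral of x(t) x(t - tau) dt, via shifted sample products
    k = int(round(tau / dt))
    return np.sum(x[k:] * x[:len(x) - k]) * dt

assert abs(R(0.0) - 0.5) < 1e-3                   # R(0) = E
assert abs(R(1.0) - 0.5 * np.exp(-1.0)) < 1e-3    # R(tau) = e^{-tau}/2, tau >= 0
```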
4. The auto-correlation function and the ESD form a Fourier transform pair. That is,
$$R(\tau) \overset{FT}{\longleftrightarrow} \psi(f).$$
Proof
$$R(\tau) = \int_{-\infty}^{\infty} x(t)x^*(t-\tau)\,dt.$$
We have
$$x^*(t-\tau) = \int_{-\infty}^{\infty} X^*(f)e^{-j2\pi f(t-\tau)}\,df.$$
So, interchanging the order of integration,
$$R(\tau) = \int_{-\infty}^{\infty} x(t)\int_{-\infty}^{\infty} X^*(f)e^{-j2\pi f(t-\tau)}\,df\,dt = \int_{-\infty}^{\infty} X^*(f)e^{j2\pi f\tau}\left[\int_{-\infty}^{\infty} x(t)e^{-j2\pi ft}\,dt\right]df = \int_{-\infty}^{\infty} |X(f)|^2 e^{j2\pi f\tau}\,df = \int_{-\infty}^{\infty} \psi(f)e^{j2\pi f\tau}\,df.$$
Hence,
$$R(\tau) \overset{FT}{\longleftrightarrow} \psi(f).$$
For a power signal x(t), the auto-correlation function is defined by
$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x(t)x^*(t-\tau)\,dt.$$
Also,
$$R(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x(t+\tau)x^*(t)\,dt.$$
As for energy signals,
$$R(\tau) = R^*(-\tau), \qquad R(0) = P.$$
Furthermore, the auto-correlation function and the PSD form a Fourier transform pair. That is,
$$R(\tau) \overset{FT}{\longleftrightarrow} S(f).$$
5.5 Cross-Correlation
Cross-correlation is defined as the measure of similarity between a signal and the delayed version of another signal. The cross-correlation function between two energy signals $x_1(t)$ and $x_2(t)$ is defined by
$$R_{12}(\tau) = \int_{-\infty}^{\infty} x_1(t)x_2^*(t-\tau)\,dt.$$
And, the cross-correlation function between two power signals $x_1(t)$ and $x_2(t)$ is defined by
$$R_{12}(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x_1(t)x_2^*(t-\tau)\,dt.$$
Transmission of Signals
Let
$$x(t) \overset{FT}{\longleftrightarrow} X(j\omega), \qquad h(t) \overset{FT}{\longleftrightarrow} H(j\omega), \qquad y(t) \overset{FT}{\longleftrightarrow} Y(j\omega);$$
then, using the convolution property of the Fourier transform for Equation (6.1), we get
$$Y(j\omega) = X(j\omega)H(j\omega).$$
Now,
$$H(j\omega) = \frac{Y(j\omega)}{X(j\omega)}$$
is the transfer function or frequency response of the system. So, the frequency response of a system is defined as the ratio of the Fourier transform of the output signal to the Fourier transform of the input signal.
Let
$$Y(j\omega) = |Y(j\omega)|e^{j\angle Y(j\omega)},$$
CHAPTER 6. TRANSMISSION OF SIGNALS
$$X(j\omega) = |X(j\omega)|e^{j\angle X(j\omega)},$$
and
$$H(j\omega) = |H(j\omega)|e^{j\angle H(j\omega)};$$
then
$$|H(j\omega)|e^{j\angle H(j\omega)} = \frac{|Y(j\omega)|e^{j\angle Y(j\omega)}}{|X(j\omega)|e^{j\angle X(j\omega)}} = \frac{|Y(j\omega)|}{|X(j\omega)|}e^{j(\angle Y(j\omega)-\angle X(j\omega))}.$$
That is, the magnitude response of the system is given by
$$|H(j\omega)| = \frac{|Y(j\omega)|}{|X(j\omega)|},$$
and the phase response is given by
$$\angle H(j\omega) = \angle Y(j\omega) - \angle X(j\omega).$$
For distortionless transmission, the output must be a scaled, delayed replica of the input; that is,
$$Y(j\omega) = KX(j\omega)e^{-j\omega t_d}.$$
We know
$$H(j\omega) = \frac{Y(j\omega)}{X(j\omega)} = Ke^{-j\omega t_d}.$$
Here, the magnitude response is
$$|H(j\omega)| = K, \tag{6.3}$$
and the phase response is $\angle H(j\omega) = -\omega t_d$. Therefore, for distortionless transmission,
1. the magnitude response |H(jω)| must be constant, as shown in Figure 6.1 (a). This means that all the frequency components must be equally attenuated or amplified and the system bandwidth must be infinite, and
2. the phase response must vary linearly with frequency, as shown in Figure 6.1 (b).
Figure 6.1: Distortionless transmission. (a) Magnitude response. (b) Phase response.
Thus, for distortionless transmission, the system bandwidth must be infinite. But,
no practical system has infinite bandwidth and therefore distortionless transmission
can not be achieved practically.
A. Amplitude Distortion
Amplitude distortion occurs when the magnitude response is not constant within the
frequency band of interest and the frequency components present in the input signal
are transmitted with different gain or attenuation.
B. Phase Distortion
Phase distortion occurs when the phase response does not vary linearly with frequency, so the different frequency components in the input signal are subjected to different time delays during transmission.
The frequency response of an ideal low pass filter is

H(jω) = Ke^{−jωt_d} for |ω| ≤ ω_c, and 0 for |ω| > ω_c, (6.5)

where t_d is the transmission delay and ω_c is the cut-off frequency of the low pass filter. The magnitude response and the phase response of the ideal low pass filter are shown in Figures 6.2 (a) and (b), respectively.

We see that the ideal low pass filter passes the frequency components from −ω_c to ω_c without distortion and rejects the components with |ω| > ω_c.
Figure 6.2: Ideal low pass filter. (a) Magnitude response. (b) Phase response.
We know that the inverse Fourier transform of the frequency response is the impulse response of the LTI system. Taking the inverse Fourier transform of Equation (6.5), we get

h(t) = (1/2π) ∫_{−ω_c}^{ω_c} Ke^{−jωt_d} e^{jωt} dω

= (K/2π) ∫_{−ω_c}^{ω_c} e^{jω(t−t_d)} dω

= [K / (2πj(t − t_d))] [e^{jω(t−t_d)}]_{−ω_c}^{ω_c}

= [K / (π(t − t_d))] · (e^{jω_c(t−t_d)} − e^{−jω_c(t−t_d)}) / (2j)

∴ h(t) = (Kω_c/π) · sin(ω_c(t − t_d)) / (ω_c(t − t_d)).
The impulse response of an ideal LPF is the sinc function delayed by t_d, as shown in Figure 6.3. Since the impulse response is non-causal, the ideal LPF cannot be realized practically.
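The non-causality can be seen numerically by evaluating the closed-form h(t) above on a time grid; the values of K, ω_c, and t_d below are illustrative, not from the text.

```python
import numpy as np

# Illustrative parameters: gain, cutoff (rad/s), and delay (s).
K, wc, td = 1.0, 2 * np.pi * 100.0, 0.01

def h_ideal_lpf(t):
    """Impulse response h(t) = (K*wc/pi) * sin(wc(t-td)) / (wc(t-td))."""
    arg = wc * (t - td)
    # np.sinc(x) = sin(pi x)/(pi x), so sin(arg)/arg = np.sinc(arg/pi).
    return (K * wc / np.pi) * np.sinc(arg / np.pi)

t = np.linspace(-0.05, 0.05, 2001)
h = h_ideal_lpf(t)

peak_time = t[np.argmax(h)]                   # main lobe peaks at t = td
noncausal = np.any(np.abs(h[t < 0]) > 1e-6)   # response exists before t = 0
```

The peak sits at t = t_d, but the sidelobes extend to negative time, which is exactly why the ideal LPF cannot be built as a causal system.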
We have

e^{−at}u(t) ←FT→ 1/(jω + a).

Therefore, the inverse Fourier transform of H(jω) is

h(t) = (1/RC) e^{−t/RC} u(t),

which is the impulse response of the given RC circuit.
vin(t) = L di(t)/dt + vo(t)

or, vin(t) = L d(vo(t)/R)/dt + vo(t)

or, vin(t) = (L/R) dvo(t)/dt + vo(t)

or, vo(t) + (L/R) dvo(t)/dt = vin(t). (6.8)

Taking the Fourier transform on both sides of Equation (6.8), we get

Vo(jω) + jω(L/R)Vo(jω) = Vin(jω)

or, Vo(jω)[1 + jω(L/R)] = Vin(jω).

Therefore, the frequency response of the given RL circuit is

H(jω) = Vo(jω)/Vin(jω) = 1 / (1 + jωL/R) = (R/L) / (jω + R/L). (6.9)
Since

e^{−at}u(t) ←FT→ 1/(jω + a),

the inverse Fourier transform of Equation (6.9) gives

h(t) = (R/L) e^{−(R/L)t} u(t),
which is the impulse response of the given RL circuit.
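As a quick numerical check of Equation (6.9), with illustrative R and L values not taken from the text, the RL circuit behaves as a first-order low-pass filter: unit gain at DC and gain 1/√2 at the cutoff ω = R/L.

```python
# Illustrative component values (ohms, henries), assumed for this sketch.
R, L = 1000.0, 0.1
wc = R / L                    # the pole at R/L sets the 3 dB cutoff (rad/s)

def H(w):
    """Frequency response of Equation (6.9): (R/L) / (jw + R/L)."""
    return (R / L) / (1j * w + R / L)

dc_gain = abs(H(0.0))         # DC passes with unit gain
cutoff_gain = abs(H(wc))      # at w = R/L the gain falls to 1/sqrt(2)
```

The 1/√2 point is the conventional −3 dB bandwidth, so this RL circuit (like the RC circuit above) passes low frequencies and attenuates high ones.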
Information Source
The information source originates a message, such as a human voice, a television picture, or data. The message can be analog or digital in nature. Analog communication sources produce continuously varying message signals; a microphone is a good example of an analog source. Digital communication sources, on the other hand, produce a finite set of possible messages; a digital computer is a good example of a digital source, since there is a finite number of characters (or messages) that can be emitted by such a source.
Input Transducer
The function of a transducer is to change one form of energy into another. The message produced by the information source, such as a human voice or a television picture, may or may not be in electrical form. If the message is not in electrical form, an input transducer is used to convert it into an electrical waveform referred to as the baseband signal or message signal. A microphone is an example of an input transducer: it converts sound waves (e.g., the human voice) into a corresponding electrical waveform.
Transmitter
The transmitter converts the message signal into a form suitable for reliable communication over a channel. Different processes are performed in the transmitter depending on the type of communication system. Modulation is the most important of these: the message signal is superimposed on a high-frequency carrier signal to make it suitable for transmission. Amplification and filtering may also be required before transmission.
Communication Channel
The communication channel is the medium through which the signal travels from the transmitter to the receiver. During transmission, the signal is distorted by several factors: channel imperfections, additive noise, and interfering signals (originating from other sources). As a consequence, the received signal is a corrupted version of the transmitted signal.
Receiver
The receiver receives the signal from the channel and recovers the original message signal in electrical form by the process known as demodulation. Due to distortions in the received signal, the demodulated signal may be degraded to some extent. Equalizers are devices that can be used to reduce the effect of distortions. Other operations performed in the receiver are filtering and amplification.
Output Transducer
The output transducer converts the electrical message signal back into its original form. For example, a speaker is an output transducer that converts the electrical signal into a corresponding sound wave.
Destination
The destination is the final stage of the system, where the recovered message is delivered to the intended user.
Equalization
To compensate for linear distortion, we may use a network known as an equalizer connected in cascade with the system, as shown in Figure 6.7. The equalizer is designed in such a way that, inside the frequency band of interest, the overall magnitude response and phase response of the cascade connection approximate the conditions for distortionless transmission within prescribed limits. The overall frequency response of the cascade must therefore satisfy

Hc(jω)Heq(jω) = Ke^{−jωt_d},

where t_d is a constant time delay and K is some constant. Therefore, the frequency response of the equalizer is inversely related to that of the channel as

Heq(jω) = Ke^{−jωt_d} / Hc(jω), (6.11)

where Hc(jω) ≠ 0.

In practice, the equalizer is designed so that its frequency response approximates the ideal value of Equation (6.11) closely enough for the linear distortion to be reduced to a satisfactory level.
Transmission of Signals in
Discrete-Time Systems
Let

x[n] ←FT→ X(e^{jω}),

h[n] ←FT→ H(e^{jω}),

and

y[n] ←FT→ Y(e^{jω}),

then, applying the convolution property of the Fourier transform to Equation (7.1), we get

Y(e^{jω}) = X(e^{jω})H(e^{jω}).

Therefore,

H(e^{jω}) = Y(e^{jω})/X(e^{jω})

is the frequency response of a discrete-time LTI system.
Similarly, taking the Z-transform of Equation (7.1), we get

Y(z) = X(z)H(z).

Therefore,

H(z) = Y(z)/X(z)

is the transfer function (system function) of a discrete-time LTI system in terms of the Z-transform.
The output of a discrete-time LTI system is given by the convolution sum

y[n] = Σ_{k=−∞}^{∞} h[k] x[n − k].

For an FIR system, the impulse response must be of finite length. Let M be the length of the impulse response. Then a causal FIR system is defined by

y[n] = Σ_{k=0}^{M−1} h[k] x[n − k]

or, y[n] = h[0]x[n] + h[1]x[n − 1] + h[2]x[n − 2] + · · · + h[M − 1]x[n − M + 1]. (7.2)

This difference equation tells us that the output of a causal FIR system at any instant depends on the present and past inputs. Thus, a causal FIR system has a finite memory of length M − 1, where M is the length of the impulse response.
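Equation (7.2) translates directly into code. The sketch below (illustrative, assuming the system is initially at rest, so inputs before n = 0 are zero) computes the output of a causal FIR system:

```python
def fir_filter(h, x):
    """Causal FIR system of Equation (7.2):
    y[n] = h[0]x[n] + h[1]x[n-1] + ... + h[M-1]x[n-M+1].
    Inputs before n = 0 are taken as zero (system initially at rest)."""
    M = len(h)
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(M):
            if n - k >= 0:          # present and past inputs only: causal
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

# A length-3 FIR system applied to a short input sequence.
y = fir_filter([1.0, 2.0, 3.0], [1.0, 0.0, 0.0, 1.0])
```

Feeding a unit impulse into `fir_filter` returns the coefficients h[k] themselves, which is exactly the statement that the impulse response of an FIR system is its coefficient sequence.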
Proof
A moving average system is expressed as

y[n] = (1/M) Σ_{k=0}^{M−1} x[n − k].

For M = 4, applying a unit impulse x[n] = δ[n] gives

h[0] = (1/4) Σ_{k=0}^{3} δ[−k] = 1/4,

h[1] = (1/4) Σ_{k=0}^{3} δ[1 − k] = 1/4,

h[2] = (1/4) Σ_{k=0}^{3} δ[2 − k] = 1/4,

h[3] = (1/4) Σ_{k=0}^{3} δ[3 − k] = 1/4,

and h[n] = 0 for n ≥ 4. Since the impulse response is of finite duration, the moving average system is an FIR system.
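The proof can also be checked numerically: feeding a unit impulse into a 4-point moving average (a small sketch, not from the text) recovers h[n] = 1/4 for n = 0, …, 3 and zero afterwards.

```python
def moving_average(x, M=4):
    """M-point moving average: y[n] = (1/M) * sum_{k=0}^{M-1} x[n-k].
    Inputs before n = 0 are taken as zero."""
    y = []
    for n in range(len(x)):
        y.append(sum(x[n - k] for k in range(M) if n - k >= 0) / M)
    return y

# A unit impulse followed by zeros exposes the impulse response.
delta = [1.0] + [0.0] * 7
h = moving_average(delta, M=4)
```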
Taking the Z-transform of Equation (7.2), we get

Y(z) = Σ_{k=0}^{M−1} h[k] X(z) z^{−k}.

Therefore,

H(z) = Y(z)/X(z) = Σ_{k=0}^{M−1} h[k] z^{−k}.
Here, the impulse response is of infinite duration and the IIR system requires infinite
memory length.
Proof
The cumulative average system can be expressed as

y[n] = (1/(n + 1)) Σ_{k=0}^{n} x[k].

Applying a unit impulse x[n] = δ[n] gives

h[0] = 1,

h[1] = 1/2,

h[2] = 1/3,

...

h[n] = 1/(n + 1).

Since the impulse response is of infinite duration, the cumulative average system is an IIR system.
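The same impulse-response computation can be run numerically (a small sketch, not from the text): every output sample of the cumulative average depends on the entire input history, and the impulse response 1/(n + 1) never dies out.

```python
def cumulative_average(x):
    """Cumulative average: y[n] = (1/(n+1)) * sum_{k=0}^{n} x[k]."""
    y, running = [], 0.0
    for n, xn in enumerate(x):
        running += xn            # running sum of all inputs so far
        y.append(running / (n + 1))
    return y

# The impulse response h[n] = 1/(n+1) decays but never reaches zero.
h = cumulative_average([1.0, 0.0, 0.0, 0.0, 0.0])
```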
Or,

a_0 y[n] + Σ_{k=1}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k].

Normalizing both sides by a_0 and rearranging the terms, we get the recursive formula

y[n] = Σ_{k=0}^{M} b_k x[n − k] − Σ_{k=1}^{N} a_k y[n − k]. (7.4)

Taking the Z-transform on both sides of Equation (7.4), we get

Y(z) = Σ_{k=0}^{M} b_k X(z) z^{−k} − Σ_{k=1}^{N} a_k Y(z) z^{−k}

or, Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{−k}) / (1 + Σ_{k=1}^{N} a_k z^{−k}).

Therefore,

H(z) = (Σ_{k=0}^{M} b_k z^{−k}) / (1 + Σ_{k=1}^{N} a_k z^{−k})

is the transfer function of an IIR system.
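The recursive formula of Equation (7.4) translates directly into code. The sketch below (illustrative, with a_0 normalized to 1 and the system initially at rest) computes the output sample by sample; feeding an impulse into a first-order example shows the infinite-duration response, truncated only by the simulation length.

```python
def iir_filter(b, a, x):
    """Recursive realization of Equation (7.4) with a0 normalized to 1:
    y[n] = sum_k b[k] x[n-k] - sum_k a[k] y[n-k], where a = [a1, ..., aN].
    The system starts at rest: x[n] = y[n] = 0 for n < 0."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k - 1] * y[n - k]
                   for k in range(1, len(a) + 1) if n - k >= 0)
        y.append(acc)
    return y

# First-order example: y[n] = x[n] + 0.5 y[n-1]  (b0 = 1, a1 = -0.5).
h = iir_filter([1.0], [-0.5], [1.0, 0.0, 0.0, 0.0])
```

Each output of an IIR system feeds back into later outputs, which is why the impulse response (here 0.5ⁿ) keeps going for as long as the recursion runs.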
We can implement this causal FIR system in the direct form as shown in Figure 7.1.
y[n] = Σ_{k=0}^{M} b_k x[n − k] − Σ_{k=1}^{N} a_k y[n − k] (7.5)

Since the direct form II structure minimizes the number of memory locations (i.e., max(M, N)), it is also known as the canonic structure. The direct form II realization is better than direct form I in terms of memory requirement.
Example
Realize the following difference equation in direct form I and direct form II structures.
Solution
Rearranging the given difference equation in recursive form, we get
Example
Realize the following IIR system in cascade-form structure.
y[n] − 2y[n − 1] − 3y[n − 2] = x[n] + 4x[n − 1].
Solution
Taking the Z-transform on both sides, we get

Y(z)(1 − 2z^{−1} − 3z^{−2}) = X(z)(1 + 4z^{−1})

or, H(z) = Y(z)/X(z) = (1 + 4z^{−1}) / (1 − 2z^{−1} − 3z^{−2})

= (1 + 4z^{−1}) / (1 − 3z^{−1} + z^{−1} − 3z^{−2})

= (1 + 4z^{−1}) / [(1 + z^{−1})(1 − 3z^{−1})]

= [(1 + 4z^{−1}) / (1 + z^{−1})] · [1 / (1 − 3z^{−1})]

= H1(z)H2(z),

where

H1(z) = (1 + 4z^{−1}) / (1 + z^{−1})

and

H2(z) = 1 / (1 − 3z^{−1}).
Now, we can realize the cascade-form structure as shown in Figure 7.7.
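The factorization can be verified numerically (a quick check, not from the text) by comparing H(z) against the product H1(z)H2(z) at a few arbitrary points in the z-plane, written here in terms of z^{−1}:

```python
def H(zi):
    """Overall transfer function; zi stands for z^{-1}."""
    return (1 + 4 * zi) / (1 - 2 * zi - 3 * zi**2)

def H1(zi):
    return (1 + 4 * zi) / (1 + zi)

def H2(zi):
    return 1 / (1 - 3 * zi)

# A few test points (real and complex), avoiding the poles.
points = [0.1, 0.2 + 0.3j, -0.15j]
ok = all(abs(H(zi) - H1(zi) * H2(zi)) < 1e-12 for zi in points)
```

Because the factorization is an algebraic identity, the two evaluations agree to rounding error at every point where the denominators are nonzero.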
Example
Realize the following IIR system in parallel-form structure.

y[n] − 2y[n − 1] − 3y[n − 2] = x[n] + 2x[n − 1] + 3x[n − 2].

Solution

Taking the Z-transform on both sides, we get
Y(z)(1 − 2z^{−1} − 3z^{−2}) = X(z)(1 + 2z^{−1} + 3z^{−2})

∴ H(z) = Y(z)/X(z) = (1 + 2z^{−1} + 3z^{−2}) / (1 − 2z^{−1} − 3z^{−2})

= (z² + 2z + 3) / (z² − 2z − 3)

= (z² + 2z + 3) / (z² − 3z + z − 3)

= (z² + 2z + 3) / [(z + 1)(z − 3)]

or, H(z)/z = (z² + 2z + 3) / [z(z + 1)(z − 3)].

Performing the partial-fraction expansion, we have

(z² + 2z + 3) / [z(z + 1)(z − 3)] = A/z + B/(z + 1) + C/(z − 3).
Put z = 0: 3 = A(1)(−3) ⇒ A = −1.

Put z = −1: 1 − 2 + 3 = B(−1)(−4) ⇒ B = 1/2.

Put z = 3: 9 + 6 + 3 = C(3)(4) ⇒ C = 3/2.

Therefore,

H(z)/z = −1/z + (1/2)/(z + 1) + (3/2)/(z − 3)

or, H(z) = −1 + (1/2)/(1 + z^{−1}) + (3/2)/(1 − 3z^{−1}).

Now, we can implement the given IIR system in parallel-form structure as shown in Figure 7.9.
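The residues A = −1, B = 1/2, C = 3/2 can be double-checked numerically (a quick check, not from the text): the parallel sum −1 + (1/2)/(1 + z^{−1}) + (3/2)/(1 − 3z^{−1}) must agree with the original rational form at every point where both are defined.

```python
def H(zi):
    """Original rational form; zi stands for z^{-1}."""
    return (1 + 2 * zi + 3 * zi**2) / (1 - 2 * zi - 3 * zi**2)

def H_parallel(zi):
    """Parallel form built from the partial-fraction residues."""
    return -1 + 0.5 / (1 + zi) + 1.5 / (1 - 3 * zi)

# Compare at a few arbitrary points away from the poles.
points = [0.05, 0.1 + 0.2j, -0.25]
ok = all(abs(H(zi) - H_parallel(zi)) < 1e-12 for zi in points)
```

A mismatch at even one test point would mean an arithmetic slip in A, B, or C, so this is a cheap sanity check on any hand partial-fraction expansion.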
The complete solution y[n] of a difference equation is the sum of the homogeneous (or complementary) solution yh[n] and the particular solution yp[n]. That is,

y[n] = yh[n] + yp[n].

The complete solution y[n] can also be determined as the sum of the zero-input response yzi[n] and the zero-state response yzs[n]. That is,

y[n] = yzi[n] + yzs[n].
Example 1.5.1

If a discrete-time LTI system is given by

y[n] − 0.5y[n − 1] = 0.25^n u[n], (7.13)

then determine the complete solution y[n].

Solution

Homogeneous Solution

The homogeneous solution of the given difference equation is the solution of the homogeneous equation:
y[n] − 0.5y[n − 1] = 0. (7.14)
Let yh[n] = λ^n, then

λ^n − 0.5λ^{n−1} = 0

or, λ^{n−1}(λ − 0.5) = 0

∴ λ = 0.5.

Therefore,

yh[n] = A(0.5)^n.
Particular Solution

Let yp[n] = B(0.25)^n u[n], then Equation (7.13) becomes

B(0.25)^n u[n] − 0.5B(0.25)^{n−1} u[n − 1] = 0.25^n u[n]. (7.16)

For n = 1 (choose the value of n such that none of the terms of Equation (7.16) vanishes), Equation (7.16) becomes

0.25B − 0.5B = 0.25 ⇒ B = −1.

Therefore,

yp[n] = −0.25^n u[n]. (7.17)
Complete Solution

Now, the complete solution is

y[n] = yh[n] + yp[n] = A(0.5)^n − 0.25^n for n ≥ 0.

We know that an LTI system is initially at rest; in this example, y[n] = 0 for n < 0. Putting n = 0 in Equation (7.13), we get

y[0] − 0.5y[−1] = 0.25^0 ⇒ y[0] = 1.

Then A(0.5)^0 − 0.25^0 = 1 ⇒ A = 2.
Therefore, for n ≥ 0,

y[n] = 2(0.5)^n − 0.25^n,

and, for all n,

y[n] = [2(0.5)^n − 0.25^n] u[n].
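The closed-form answer can be cross-checked by simply running the recursion y[n] = 0.5 y[n − 1] + 0.25ⁿ from rest (a quick numerical check, not from the text):

```python
# Simulate the difference equation of this example from rest (y[-1] = 0)
# and compare with the closed form 2(0.5)^n - 0.25^n.
N = 10
y = []
for n in range(N):
    y_prev = y[n - 1] if n > 0 else 0.0   # initially at rest
    y.append(0.5 * y_prev + 0.25**n)      # y[n] = 0.5 y[n-1] + x[n]

closed_form = [2 * 0.5**n - 0.25**n for n in range(N)]
ok = all(abs(a - b) < 1e-12 for a, b in zip(y, closed_form))
```

This kind of brute-force simulation is a useful habit: any error in the homogeneous constants or the particular solution shows up immediately as a mismatch.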
Example 1.5.2

If a discrete-time LTI system is given by

y[n] − 0.5y[n − 1] = x[n], (7.19)

where x[n] = 0.25^n u[n], then determine y[n] as the sum of the zero-input response yzi[n] and the zero-state response yzs[n].
Solution
Zero-Input Response or Natural Response

The zero-input response is the response of the difference equation determined for x[n] = 0. In this example, the zero-input response yzi[n] is the solution of

y[n] − 0.5y[n − 1] = 0.

Since the system is initially at rest (y[−1] = 0), the zero-input response is yzi[n] = 0.

Zero-State Response or Forced Response

The zero-state response is the response to the input alone, with zero initial conditions. Let yh[n] = λ^n, then

λ^n − 0.5λ^{n−1} = 0

or, λ^{n−1}(λ − 0.5) = 0

∴ λ = 0.5.

Therefore,

yh[n] = B(0.5)^n. (7.22)
To determine the particular solution, let yp[n] = C(0.25)^n u[n]; then Equation (7.19) becomes

C(0.25)^n u[n] − 0.5C(0.25)^{n−1} u[n − 1] = 0.25^n u[n].

For n = 1,

C(0.25)^1 − 0.5C(0.25)^0 = 0.25^1

or, 0.25C − 0.5C = 0.25 ⇒ C = −1.

Therefore,

yp[n] = −0.25^n u[n].
Considering the zero initial condition (i.e., y[n] = 0 for n < 0), we have, for n ≥ 0,

yzs[n] = B(0.5)^n − 0.25^n. (7.23)

Put n = 0 in Equation (7.19) and consider the initial condition y[−1] = 0; then we get

y[0] − 0.5y[−1] = 0.25^0

or, y[0] = 1.

Putting n = 0 in Equation (7.23), we get

yzs[0] = B(0.5)^0 − 0.25^0 = 1 ⇒ B = 2.

Therefore, for n ≥ 0,

yzs[n] = 2(0.5)^n − 0.25^n,

and, for all n,

yzs[n] = [2(0.5)^n − 0.25^n] u[n].
Complete Solution

Therefore, the complete solution is

y[n] = yzi[n] + yzs[n] = [2(0.5)^n − 0.25^n] u[n].
Example 1.5.3
Determine the impulse response of a discrete-time LTI system described by
y[n] − 2y[n − 1] − 3y[n − 2] = x[n] + 2x[n − 1]. (7.24)
Solution

To determine the homogeneous solution yh[n], let yh[n] = λ^n, then

λ^n − 2λ^{n−1} − 3λ^{n−2} = 0

or, λ^{n−2}(λ² − 2λ − 3) = 0

or, λ² − 2λ − 3 = 0

or, λ² − 3λ + λ − 3 = 0

or, (λ + 1)(λ − 3) = 0

∴ λ = −1, 3.

Therefore,

yh[n] = C1(−1)^n + C2(3)^n.
For x[n] = δ[n], the particular solution yp[n] is zero. Also, for this LTI system, y[n] = 0 for n < 0. Putting n = 0 and n = 1 in Equation (7.24) gives h[0] = 1 and h[1] = 2h[0] + 2 = 4. Therefore, C1 + C2 = 1 and −C1 + 3C2 = 4, so C1 = −1/4 and C2 = 5/4, and the impulse response is

h[n] = [−(1/4)(−1)^n + (5/4)(3)^n] u[n].
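The constants can be fixed from the recursion itself: with x[n] = δ[n] and the system at rest, Equation (7.24) gives h[0] = 1 and h[1] = 2h[0] + 2 = 4, from which C1 = −1/4 and C2 = 5/4. A quick numerical check (not from the text) compares the recursion against this closed form:

```python
# Impulse response of y[n] - 2y[n-1] - 3y[n-2] = x[n] + 2x[n-1],
# computed by recursion from rest with x[n] = delta[n].
N = 8
x = [1.0] + [0.0] * (N - 1)              # unit impulse
h = []
for n in range(N):
    h1 = h[n - 1] if n >= 1 else 0.0     # h[n-1], zero before n = 0
    h2 = h[n - 2] if n >= 2 else 0.0     # h[n-2], zero before n = 0
    x1 = x[n - 1] if n >= 1 else 0.0     # x[n-1]
    h.append(2 * h1 + 3 * h2 + x[n] + 2 * x1)

# Closed form with C1 = -1/4, C2 = 5/4.
closed_form = [-0.25 * (-1)**n + 1.25 * 3**n for n in range(N)]
ok = all(abs(a - b) < 1e-9 for a, b in zip(h, closed_form))
```

The response grows as 3ⁿ because the root λ = 3 lies outside the unit circle, so this system is unstable even though the comparison above confirms the algebra.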