Digital Signals and System
CHAPTER 1
1.12.4 Stability
1.12.5 Time Invariance
1.12.6 Linearity
1.0 OBJECTIVE
1.1 INTRODUCTION
Signals are represented mathematically as functions of one or more independent variables. Here we focus attention on signals involving a single independent variable. For convenience, we will generally refer to the independent variable as time.
There are two types of signals: continuous-time signals and discrete-time signals.
Continuous-time signal: the independent variable is continuous; a voltage waveform as a function of time is an example.
Discrete-time signal: the independent variable is discrete. The weekly Dow Jones stock market index is an example of a discrete-time signal.
A discrete-time signal x [n]may represent a phenomenon for which the independent variable is
inherently discrete. A discrete-time signal x [n]may represent successive samples of an
underlying phenomenon for which the independent variable is continuous. For example, the
processing of speech on a digital computer requires the use of a discrete time sequence
representing the values of the continuous-time speech signal at discrete points of time.
If v(t) and i(t) are respectively the voltage and current across a resistor with resistance R, then the instantaneous power is

p(t) = v(t) i(t) = (1/R) v²(t)

The total energy expended over the time interval t₁ ≤ t ≤ t₂ is

∫_{t₁}^{t₂} p(t) dt = ∫_{t₁}^{t₂} (1/R) v²(t) dt

and the average power over this interval is

(1/(t₂ − t₁)) ∫_{t₁}^{t₂} p(t) dt = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} (1/R) v²(t) dt
With this as motivation, for any continuous-time signal x(t), the total energy over the time interval t₁ ≤ t ≤ t₂ is defined as

∫_{t₁}^{t₂} |x(t)|² dt

and the average power over this interval as

(1/(t₂ − t₁)) ∫_{t₁}^{t₂} |x(t)|² dt
Similarly, the total energy in a discrete-time signal x[n] over the time interval n₁ ≤ n ≤ n₂ is defined as

∑_{n=n₁}^{n₂} |x[n]|²

The average power is

(1/(n₂ − n₁ + 1)) ∑_{n=n₁}^{n₂} |x[n]|²
In many systems, we will be interested in examining the power and energy in signals over an infinite time interval, that is, for −∞ < t < +∞ or −∞ < n < +∞.
The total energy in continuous time is then defined as

E∞ = lim_{T→∞} ∫_{−T}^{T} |x(t)|² dt = ∫_{−∞}^{∞} |x(t)|² dt

and in discrete time as

E∞ = lim_{N→∞} ∑_{n=−N}^{N} |x[n]|² = ∑_{n=−∞}^{∞} |x[n]|²

For some signals the integral in continuous time or the sum in discrete time might not converge, for example, if x(t) or x[n] equals a nonzero constant value for all time. Such signals have infinite energy, while signals with E∞ < ∞ have finite energy.
The time-averaged power over an infinite interval is

P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt

P∞ = lim_{N→∞} (1/(2N + 1)) ∑_{n=−N}^{N} |x[n]|²
With these definitions, we can identify three classes of signals:
Type 1: signals with finite total energy, E∞ < ∞, and zero average power,

P∞ = lim_{T→∞} E∞/(2T) = 0

Type 2: signals with finite average power, P∞ > 0; since the energy keeps accumulating over time, such signals have infinite total energy. A constant signal is an example.
Type 3: signals for which neither P∞ nor E∞ is finite. An example of such a signal is x(t) = t.
1.3 Transformations of the independent variable
Fig. 1.5 (a) A discrete-time signal x [n]; (b) its reflection, x [-n] about n = 0
A periodic continuous-time signal x(t) has the property that there is a positive value of T for which

x(t) = x(t + T) for all t

From this definition, we can deduce that if x(t) is periodic with period T, then x(t) = x(t + mT) for all t and any integer m, so x(t) is also periodic with period 2T, 3T, . . . .
A broad and important class of signals is the complex exponential,

x(t) = Ce^{at}

where C and a are in general complex numbers.
Real exponential signals correspond to C and a real. Periodic complex exponential signals correspond to a purely imaginary, that is, x(t) = e^{jω₀t}. An important property of this signal is that it is periodic. We know x(t) is periodic with period T if

e^{jω₀T} = 1

For ω₀ ≠ 0, the fundamental period T₀ is

T₀ = 2π/|ω₀|

Thus, the signals e^{jω₀t} and e^{−jω₀t} have the same fundamental period.
A signal closely related to the periodic complex exponential is the sinusoidal signal

x(t) = A cos(ω₀t + θ)

With seconds as the unit of t, the units of θ and ω₀ are radians and radians per second. It is also common to write ω₀ = 2πf₀, where f₀ has the unit of cycles per second, or hertz (Hz). The sinusoidal signal is also a periodic signal with fundamental period T₀ = 2π/|ω₀|.
Periodic signals, such as sinusoidal signals, provide important examples of signals with infinite total energy but finite average power. For example, consider x(t) = A cos(ω₀t + θ): since there are an infinite number of periods as t ranges from −∞ to +∞, the total energy integrated over all time is infinite. The average power is finite, since the average power over a single period is

(1/T₀) ∫₀^{T₀} A² cos²(ω₀t + θ) dt = A²/2
For the general complex exponential Ce^{at} with a = r + jω₀: for r = 0, the real and imaginary parts of a complex exponential are sinusoidal; for r > 0, they correspond to sinusoidal signals multiplied by a growing exponential; and for r < 0, to sinusoidal signals multiplied by a decaying exponential.
The discrete-time complex exponential is of the form

x[n] = Cαⁿ

where C and α are in general complex numbers. This can alternatively be expressed as

x[n] = Ce^{βn}

where α = e^{β}.
Real Exponential Signals
Real exponential signals correspond to C and α real.
The discrete-time unit step is the running sum of the unit sample:

u[n] = ∑_{m=−∞}^{n} δ[m]

It can be seen that for n < 0 the running sum is zero, and for n ≥ 0 the running sum is 1. If we change the variable of summation from m to k = n − m, we have

u[n] = ∑_{k=0}^{∞} δ[n − k]

The unit impulse sequence can be used to sample the value of a signal at n = 0. Since it is nonzero only for n = 0, it follows that

x[n] δ[n] = x[0] δ[n]
The continuous-time unit step is the running integral of the unit impulse:

u(t) = ∫_{−∞}^{t} δ(τ) dτ

The continuous-time unit impulse can also be considered as the first derivative of the continuous-time unit step:

δ(t) = du(t)/dt

u(t) is discontinuous at t = 0 and consequently is formally not differentiable. The derivative can be interpreted, however, by considering an approximation to the unit step, u_Δ(t), as illustrated in the figure below, which rises from the value 0 to the value 1 in a short time interval of length Δ.
Fig. 1.20 (a) Continuous approximation to the unit step uΔ (t) ; (b) Derivative of uΔ (t) .
The derivative is

δ_Δ(t) = du_Δ(t)/dt

Note that δ_Δ(t) is a short pulse, of duration Δ and with unit area for any value of Δ. As Δ → 0, δ_Δ(t) becomes narrower and higher, maintaining its unit area. In the limit,

δ(t) = lim_{Δ→0} δ_Δ(t)

and, as in discrete time, the continuous-time impulse samples a signal at t = 0:

x(t) δ(t) = x(0) δ(t)

Or more generally,

x(t) δ(t − t₀) = x(t₀) δ(t − t₀)
Example:
Fig. 1.23 Examples of systems. (a) A system with input voltage vₛ(t) and output voltage v₀(t). (b) A system with input equal to the force f(t) and output equal to the velocity v(t).
A continuous-time system is a system in which continuous-time input signals are applied, resulting in continuous-time output signals.
A discrete-time system is a system in which discrete-time input signals are applied, resulting in discrete-time output signals.
Consider the RC circuit in Fig. 1.23(a). The current i(t) is proportional to the voltage drop across the resistor:

i(t) = [vₛ(t) − v₀(t)] / R

The current through the capacitor is

i(t) = C dv₀(t)/dt

Equating the right-hand sides of both the above equations, we obtain a differential equation describing the relationship between the input and output:

dv₀(t)/dt + (1/RC) v₀(t) = (1/RC) vₛ(t)
Example 2: Consider the system in Fig. 1.23(b), with the force f(t) as the input and the velocity v(t) as the output. Let m denote the mass of the car and ρv(t) the resistance due to friction. Equating the acceleration with the net force divided by mass, we obtain

dv(t)/dt = (1/m)[f(t) − ρv(t)]
Example 3: Consider a simple model for the balance in a bank account from month to month. Let y[n] denote the balance at the end of the nth month, and suppose that y[n] evolves from month to month according to the equation

y[n] = 1.01 y[n − 1] + x[n]

where x[n] is the net deposit (deposits minus withdrawals) during the nth month, and the term 1.01 y[n − 1] models the fact that we accrue 1% interest each month.
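As a minimal sketch of how such a difference equation can be evaluated, the recursion can be run directly in a few lines of Python (assuming a zero starting balance, which the model above does not specify):

```python
# Minimal simulation of the bank-balance recursion y[n] = 1.01*y[n-1] + x[n],
# assuming the balance starts at zero before the first month.
def balance(deposits, rate=0.01):
    y, history = 0.0, []
    for x in deposits:               # x is the net deposit for this month
        y = (1 + rate) * y + x       # accrue 1% interest, then add the deposit
        history.append(y)
    return history

print(balance([100, 100, -50, 100]))  # [100.0, 201.0, 153.01, 254.5401]
```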
Some conclusions:
· Mathematical descriptions of systems have a great deal in common;
· A particular class of systems is referred to as linear, time-invariant systems.
· Any model used in describing and analyzing a physical system represents an idealization of
the system.
Fig. 1.24 Interconnection of systems. (a) A series or cascade interconnection of two systems;
(b) A parallel interconnection of two systems;
1.12.1 Systems with and without Memory
A system is memoryless if its output for each value of the independent variable at a given time depends only on the input at that same time. A resistor is a memoryless system, since the input current and output voltage have the relationship

v(t) = R i(t)
An example of a system with memory is the accumulator,

y[n] = ∑_{k=−∞}^{n} x[k],  or equivalently  y[n] − y[n − 1] = x[n]

Another example of a system with memory is a delay:

y[n] = x[n − 1]

1.12.2 Invertibility
A system is said to be invertible if distinct inputs lead to distinct outputs. An example of a noninvertible system is y[n] = 0, in which the system produces the zero output sequence for any input sequence. Another example is

y(t) = x²(t)

in which case one cannot determine the sign of the input from knowledge of the output.
An encoder in a communication system is an example of an invertible system; that is, the input to the encoder must be exactly recoverable from the output.
1.12.3 Causality
A system is causal if the output at any time depends only on the values of the input at present
time and in the past. Such a system is often referred to as being nonanticipative, as the system
output does not anticipate future values of the input.
The RC circuit in Fig. 23 (a) is causal, since the capacitor voltage responds only to the present
and past values of the source voltage. The motion of a car is causal, since it does not anticipate
future actions of the driver.
The following expressions describe systems that are not causal:

(1) y[n] = x[−n]
(2) y(t) = x(t + 1)

All memoryless systems are causal, since the output responds only to the current value of the input. In contrast, the system y(t) = x(t) cos(t + 1) is causal: the output at any time equals the input at the same time multiplied by a number that varies with time.
1.12.4 Stability
A stable system is one in which small inputs lead to responses that do not diverge. More formally, if the input to a stable system is bounded, then the output must also be bounded and therefore cannot diverge.
Examples of stable systems and unstable systems:
The accumulator

y[n] = ∑_{k=−∞}^{n} x[k]

is not stable, since the sum can grow without bound even if x[n] is bounded.
• S1: y(t) = t x(t). S1 is not stable, since the constant input x(t) = 1 yields y(t) = t, which is not bounded; no matter what finite constant we pick, |y(t)| will exceed that constant for some t.
• S2: y(t) = e^{x(t)}. S2 is stable. Assume the input is bounded, |x(t)| < B, or −B < x(t) < B for all t; then e^{−B} < y(t) < e^{B}, so the output is also bounded.
1.12.5 Time Invariance
A system is time invariant if a time shift in the input signal results in an identical time shift in the output signal. Mathematically, if the system output is y(t) when the input is x(t), a time-invariant system will have an output of y(t − t₀) when the input is x(t − t₀).
1.12.6 Linearity
A system is linear if it possesses the superposition property: if an input x₁ produces an output y₁ and an input x₂ produces an output y₂, then the response to a x₁ + b x₂ is a y₁ + b y₂, which holds for linear systems in both continuous and discrete time. For a linear system, zero input leads to zero output.
Examples:
• The system y(t) = t x(t) is a linear system.
• The system y(t) = x²(t) is not a linear system.
• The system y[n] = Re{x[n]} is additive, but it does not satisfy homogeneity, so it is not a linear system.
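These claims are easy to spot-check numerically. The sketch below tests superposition on random sampled inputs for the two continuous-time examples; the time grid and test signals are illustrative choices, not part of the original examples:

```python
# Numerical superposition check: y(t) = t*x(t) passes, y(t) = x(t)**2 fails.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 101)
x1, x2 = rng.standard_normal(t.size), rng.standard_normal(t.size)
a, b = 2.0, -3.0

def is_linear(system):
    # Compare the response to a*x1 + b*x2 with a*y1 + b*y2.
    return np.allclose(system(a*x1 + b*x2), a*system(x1) + b*system(x2))

print(is_linear(lambda x: t * x))   # True  -> linear
print(is_linear(lambda x: x**2))    # False -> not linear
```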
• The system y[n] = 2x[n] + 3 is not linear. Since y[n] = 3 if x[n] = 0, the system violates the "zero-in/zero-out" property. However, the system can be represented as the sum of the output of a linear system and another signal equal to the zero-input response of the system. For the system y[n] = 2x[n] + 3, the linear system is
x[n] → 2 x[n]
and the zero-input response is
y0[n]=3
as shown in Fig. 1.29.
The system represented in Fig. 1.29 is called incrementally linear system. The system
responds linearly to the changes in the input.
The overall system output consists of the superposition of the response of a linear
system with a zero-input response.
SUMMARY
A periodic continuous-time signal x (t) has the property that there is a positive value of T for
which x (t) = x (t + T) for all t
Any signal can be decomposed into a sum of two signals, one of which is even and one of
which is odd.
The sinusoidal signal is also a periodic signal with a fundamental period of T0 .
The continuous-time unit impulse can also be considered as the first derivative of the
continuous time unit step.
The continuous-time unit step is the running integral of the unit impulse.
A continuous-time system is a system in which continuous-time input signals are applied and
results in continuous-time output signals.
A discrete-time system is a system in which discrete-time input signals are applied and results
in discrete-time output signals.
A system is memoryless if its output for each value of the independent variable at a given time is dependent only on the input at that same time.
A system is said to be invertible if distinct inputs lead to distinct outputs.
A system is causal if the output at any time depends only on the values of the input at present
time and in the past. Such a system is often referred to as being nonanticipative, as the system
output does not anticipate future values of the input.
A stable system is one in which small inputs lead to responses that do not diverge. More formally, if the input to a stable system is bounded, then the output must also be bounded and
therefore cannot diverge.
A system is time invariant if a time shift in the input signal results in an identical time shift in
the output signal. Mathematically, if the system output is y (t) when the input is x( t) , a time
invariant system will have an output of y(t-t0) when input is x(t-t0).
CHAPTER 2
2.7.6 Duality
2.7.7 Parseval's Relation
2.8 The Convolution Properties
From the study of the heat equation and wave equation, we have found that there are infinite
series expansions over other functions, such as sine functions. We now turn to such expansions
and in the next chapter we will find out that expansions over special sets of functions are not
uncommon in physics. But, first we turn to Fourier trigonometric series.
We will begin with the study of the Fourier trigonometric series expansion

f(x) ∼ a₀/2 + ∑_{n=1}^{∞} [aₙ cos(nπx/L) + bₙ sin(nπx/L)]
We will find expressions useful for determining the Fourier coefficients {an, bn} given a
function f(x) defined on [−L, L]. We will also see if the resulting infinite series reproduces f(x).
However, we first begin with some basic ideas involving simple sums of sinusoidal functions.
There is a natural appearance of such sums over sinusoidal functions in music. A pure note can
be represented as
y(t) = A sin(2π f t)
where A is the amplitude, f is the frequency in hertz (Hz), and t is time in seconds. The
amplitude is related to the volume of the sound. The larger the amplitude, the louder the sound.
In Figure 2.1 we show plots of two such tones with f = 2 Hz in the top plot and f = 5 Hz in the
bottom one.
In these plots you should notice the difference due to the amplitudes and the frequencies. You
can easily reproduce these plots and others in your favorite plotting utility.
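A minimal sketch of such a plot, assuming NumPy and Matplotlib, reproduces Figure 2.1:

```python
# Plot pure tones y(t) = A*sin(2*pi*f*t) on [0, 5] for f = 2 Hz and f = 5 Hz.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 5, 1001)
fig, axes = plt.subplots(2, 1, sharex=True)
for ax, f in zip(axes, (2, 5)):
    ax.plot(t, np.sin(2 * np.pi * f * t))
    ax.set_ylabel(f'f = {f} Hz')
axes[-1].set_xlabel('t (s)')
plt.show()
```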
As an aside, you should be cautious when plotting functions, or sampling data. The plots you
get might not be what you expect, even for a simple sine function.
Figure 2.1: Plots of y(t) = A sin(2π f t) on [0, 5] for f = 2 Hz and f = 5 Hz.
In Figure 2.2 we show four plots of the function y(t) = 2 sin(4πt). In the top left you see a proper
rendering of this function. However, if you use a different number of points to plot this
function, the results may be surprising. In this example we show what happens if you use N =
200, 100, 101 points instead of the 201 points used in the first plot. Such disparities are not
only possible when plotting functions, but are also present when collecting data. Typically,
when you sample a set of data, you only gather a finite amount of information at a fixed rate.
This could happen when getting data on ocean wave heights, digitizing music and other audio
to put on your computer, or any other process when you attempt to analyze a continuous signal.
Figure 2.2: Problems can occur while plotting. Here we plot the function y(t) = 2 sin 4πt
using N = 201, 200, 100, 101 points.
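The pitfall is easy to reproduce. The sketch below renders the same tone with the four point counts used in Figure 2.2; the undersampled versions look nothing like the true 2 Hz sinusoid:

```python
# y(t) = 2*sin(4*pi*t) sampled with N = 201, 200, 100, 101 points on [0, 5].
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2)
for ax, N in zip(axes.flat, (201, 200, 100, 101)):
    t = np.linspace(0, 5, N)
    ax.plot(t, 2 * np.sin(4 * np.pi * t))
    ax.set_title(f'N = {N}')
plt.tight_layout()
plt.show()
```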
Next, we consider what happens when we add several pure tones. After all, most of the sounds
that we hear are in fact a combination of pure tones with different amplitudes and frequencies.
In Figure 2.3 we see what happens when we add several sinusoids. Note that as one adds more
and more tones with different characteristics, the resulting signal gets more complicated.
However, we still have a function of time.
Given a signal f(t), we would like to determine its frequency content by finding out what
combinations of sines and cosines of varying frequencies and amplitudes will sum to the given
function. This is called Fourier Analysis.
Notice that we have opted to drop the references to the time-frequency form of the phase. This
will lead to a simpler discussion for now, and one can always make the transformation nx = 2πfₙt when applying these ideas to applications.
The series representation

f(x) ∼ a₀/2 + ∑_{n=1}^{∞} [aₙ cos nx + bₙ sin nx]

is called a Fourier trigonometric series. We will simply refer to this as a Fourier series for now.
Figure 2.4: Plot of the function f(t) defined on [0, 2π] and its periodic extension.
The set of constants a0, an, bn, n = 1, 2, . . . are called the Fourier coefficients. The constant
term is chosen in this form to make later computations simpler, though some other authors
choose to write the constant term as a0. Our goal is to find the Fourier series representation
given f(x). Having found the Fourier series representation, we will be interested in determining
when the Fourier series converges and to what function it converges.
Figure 2.5: Superposition of several sinusoids.
Looking at the superpositions in Figure 2.5, we see that the sums yield functions that appear to
be periodic. This is not to be unexpected. We recall that a periodic function is one in which the
function values repeat over the domain of the function. The length of the smallest part of the
domain which repeats is called the period. We can define this more precisely: A function is
said to be periodic with period T if f(t + T) = f(t) for all t and the smallest such positive number
T is called the period. For example, we consider the functions used in Figure 2.3. We began with y(t) = 2 sin(4πt). Recall from your first studies of trigonometric functions that one can determine the period by dividing the coefficient of t into 2π. In this case we have

T = 2π/4π = 1/2
From our discussion in the last section, we see that the Fourier series is periodic. The period of cos nx and sin nx is 2π/n. Thus, the largest period, T = 2π, comes from the n = 1 terms, and the Fourier series has period 2π. This means that the series should be able to represent functions
that are periodic of period 2π. While this appears restrictive, we could also consider functions
that are defined over one period. In Figure 2.4 we show a function defined on [0, 2π]. In the same figure, we
we show its periodic extension. These are just copies of the original function shifted by the
period and glued together. The extension can now be represented by a Fourier series and
restricting the Fourier series to [0, 2π] will give a representation of the original function.
Therefore, we will first consider Fourier series representations of functions defined on this
interval. Note that we could just as easily have considered functions defined on [−π, π] or any
interval of length 2π. We will consider more general intervals later in the chapter.
Fourier Coefficients
Theorem 2.1. The Fourier series representation of f(x) defined on [0, 2π], when it exists, is given by the series above with Fourier coefficients

a₀ = (1/π) ∫₀^{2π} f(x) dx
aₙ = (1/π) ∫₀^{2π} f(x) cos nx dx, n = 1, 2, . . .
bₙ = (1/π) ∫₀^{2π} f(x) sin nx dx, n = 1, 2, . . .
These expressions for the Fourier coefficients are obtained by considering special integrations
of the Fourier series. We will now derive the an integrals in equation. We begin with the
computation of a₀. Integrating the Fourier series term by term, we have

∫₀^{2π} f(x) dx = ∫₀^{2π} (a₀/2) dx + ∑_{n=1}^{∞} ∫₀^{2π} [aₙ cos nx + bₙ sin nx] dx

We will assume that we can integrate the infinite sum term by term. Then we will need to compute

∫₀^{2π} (a₀/2) dx = πa₀,  ∫₀^{2π} cos nx dx = 0,  ∫₀^{2π} sin nx dx = 0

From these results we see that only one term in the integrated sum does not vanish, leaving

∫₀^{2π} f(x) dx = πa₀

This confirms the value for a₀. Next, we will find the expression for aₙ. We multiply the
Fourier series above by cos mx for some positive integer m. This is like multiplying by cos
2x, cos 5x, etc. We are multiplying by all possible cos mx functions for different integers m all
at the same time. We will see that this will allow us to solve for the an’s.
We have already established that ∫₀^{2π} cos mx dx = 0, which implies that the first term vanishes. Next we need to compute integrals of products of sines and cosines. This requires that we make use of some trigonometric identities. For quick reference, we list these here.
Useful Trigonometric Identities

sin(A ± B) = sin A cos B ± cos A sin B
cos(A ± B) = cos A cos B ∓ sin A sin B
cos²A = ½(1 + cos 2A),  sin²A = ½(1 − cos 2A)
cos A cos B = ½[cos(A + B) + cos(A − B)]
sin A cos B = ½[sin(A + B) + sin(A − B)]

We first want to evaluate ∫₀^{2π} cos nx cos mx dx. We do this by using the product identity above, which gives

∫₀^{2π} cos nx cos mx dx = ½ ∫₀^{2π} [cos((m + n)x) + cos((m − n)x)] dx = 0,  m ≠ n
There is one caveat when doing such integrals. What if one of the denominators m ± n vanishes?
For this problem m + n ≠ 0, since both m and n are positive integers. However, it is possible
for m = n. This means that the vanishing of the integral can only happen when m ≠ n. So, what
can we do about the m = n case? One way is to start from scratch with our integration. (Another
way is to compute the limit as n approaches m in our result and use L’Hopital’s Rule.)
For n = m we have to compute ∫₀^{2π} cos² mx dx. This can also be handled using a trigonometric identity. Using the half angle formula with θ = mx, we find

∫₀^{2π} cos² mx dx = ½ ∫₀^{2π} (1 + cos 2mx) dx = π
This holds true for m, n = 0, 1, . . . . [Why did we include m, n = 0?] When we have such a set of functions, they are said to be an orthogonal set over the integration interval. A set of (real) functions {φₙ(x)} is said to be orthogonal on [a, b] if

∫ₐᵇ φₙ(x) φₘ(x) dx = 0 for n ≠ m

The set of functions {cos nx}_{n=0}^{∞} is orthogonal on [0, 2π]. Actually, they are orthogonal on any interval of length 2π. We can make them orthonormal by dividing each function by √π.
This is sometimes referred to as normalization of the set of functions. The notion of orthogonality is actually a generalization of the orthogonality of vectors in finite-dimensional vector spaces. The integral ∫ₐᵇ f(x) g(x) dx is the generalization of the dot product, and is called the scalar product of f(x) and g(x), which are thought of as vectors in an infinite-dimensional vector space spanned by a set of orthogonal functions.
We still have to evaluate ∫₀^{2π} sin nx cos mx dx. We can use the trigonometric identity involving products of sines and cosines. Setting A = nx and B = mx, we find that

∫₀^{2π} sin nx cos mx dx = ½ ∫₀^{2π} [sin((n + m)x) + sin((n − m)x)] dx = 0,  n ≠ m

For these integrals we also should be careful about setting n = m. In this special case, we have

∫₀^{2π} sin mx cos mx dx = ½ ∫₀^{2π} sin 2mx dx = 0
Finally, we can finish evaluating the expression for aₙ. We have determined that all but one integral vanishes, namely the one with n = m. This leaves us with

∫₀^{2π} f(x) cos mx dx = aₘπ

Since this is true for all m = 1, 2, . . . , we have proven this part of the theorem. The only part left is finding the bₙ's; this will be left as an exercise for the reader.
We now consider examples of finding Fourier coefficients for given functions. In all of these cases we define f(x) on [0, 2π].
Example 2.1. f(x) = 3 cos 2x, x ∈ [0, 2π]. We first compute the integrals for the Fourier
coefficients.
The integrals for a0, an, n ≠ 2, and bn are the result of orthogonality. For a2, the integral can
be computed as follows:
Therefore, we have that the only nonvanishing coefficient is a2 = 3. So there is one term and
f(x) = 3 cos 2x.
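The coefficient integrals can also be evaluated numerically; a quick quadrature check (using SciPy, an illustrative tool choice) confirms that a₂ = 3 is the only surviving coefficient:

```python
# Numerically evaluate a_n = (1/pi)*int_0^{2pi} f(x) cos(nx) dx (and b_n likewise)
# for f(x) = 3*cos(2x); everything vanishes except a_2 = 3.
import numpy as np
from scipy.integrate import quad

f = lambda x: 3 * np.cos(2 * x)
a = [quad(lambda x: f(x) * np.cos(n * x), 0, 2 * np.pi)[0] / np.pi for n in range(5)]
b = [quad(lambda x: f(x) * np.sin(n * x), 0, 2 * np.pi)[0] / np.pi for n in range(1, 5)]
print(np.round(a, 10))   # [0. 0. 3. 0. 0.]
print(np.round(b, 10))   # [0. 0. 0. 0.]
```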
Well, we should have known the answer to the last example before doing all of those integrals.
If we have a function expressed simply in terms of sums of simple sines and cosines, then it
should be easy to write down the Fourier coefficients without much work. This is seen by
writing out the Fourier series,
For the last problem, f(x) = 3 cos 2x. Comparing this to the expanded Fourier series, one can
immediately read off the Fourier coefficients without doing any integration. In the next
example we emphasize this point.
So, half of the bn’s are zero. While we could write the Fourier series representation as
we could let n = 2k − 1 in order to capture the odd numbers only. The answer can be written
as
Having determined the Fourier representation of a given function, we would like to know if
the infinite series can be summed; i.e., does the series converge? Does it converge to f(x)?
We will discuss this question later in the chapter after we generalize the Fourier series to
intervals other than for x ∈ [0, 2π].
Figure 2.7: A sketch of the transformation between intervals x ∈ [0, 2π] and t ∈ [0, L].
We define x ∈ [0, 2π] and t ∈ [0, L]. A linear transformation relating these intervals is simply x = 2πt/L, as shown in Figure 2.7. So, t = 0 maps to x = 0 and t = L maps to x = 2π. Furthermore,
this transformation maps f(x) to a new function g(t) = f(x(t)), which is defined on [0, L]. We
will determine the Fourier series representation of this function using the representation for
f(x) from the last section. Recall the form of the Fourier representation for f(x) in Equation
This gives the form of the series expansion for g(t) with t ∈ [0, L]. But, we still need to
determine the Fourier coefficients. Recall, that
We need to make the substitution x = 2πt/L in the integral. We also will need to transform the differential, dx = (2π/L) dt. Thus, the resulting form for the Fourier coefficients is

aₙ = (2/L) ∫₀^{L} g(t) cos(2nπt/L) dt,  bₙ = (2/L) ∫₀^{L} g(t) sin(2nπt/L) dt

We note first that when L = 2π we get back the series representation that we first studied. Also, the period of cos(2nπt/L) is L/n, which means that the representation for g(t) has a period of L
corresponding to n = 1. At the end of this section we present the derivation of the Fourier series
representation for a general interval for the interested reader.
At this point we need to remind the reader about the integration of even and odd functions on
symmetric intervals. We first recall that f(x) is an even function if f(−x) = f(x) for all x. One
can recognize even functions as they are symmetric with respect to the y-axis as shown in
Figure 2.8
Figure 2.8: Area under an even function on a symmetric interval, [−a, a].
If one integrates an even function over a symmetric interval, then one has that

∫_{−a}^{a} f(x) dx = 2 ∫₀^{a} f(x) dx

One can prove this by splitting off the integration over negative values of x, using the substitution x = −y, and employing the evenness of f(x). Thus,

∫_{−a}^{a} f(x) dx = ∫_{−a}^{0} f(x) dx + ∫₀^{a} f(x) dx = ∫₀^{a} f(−y) dy + ∫₀^{a} f(x) dx = 2 ∫₀^{a} f(x) dx
This can be visually verified by looking at Figure 2.8. A similar computation could be done for
odd functions. f(x) is an odd function if f(−x) = −f(x) for all x. The graphs of such functions
are symmetric with respect to the origin as shown in Figure 2.9. If one integrates an odd function over a symmetric interval, then one has that

∫_{−a}^{a} f(x) dx = 0
Odd Functions
Figure 2.9: Area under an odd function on a symmetric interval, [−a, a].
Example 2.4.
Let f(x) = |x| on [−π, π]. We compute the coefficients, beginning as usual with a₀. We have, using the fact that |x| is an even function,

a₀ = (1/π) ∫_{−π}^{π} |x| dx = (2/π) ∫₀^{π} x dx = π

We continue with the computation of the general Fourier coefficients for f(x) = |x| on [−π, π]. We have

aₙ = (1/π) ∫_{−π}^{π} |x| cos nx dx = (2/π) ∫₀^{π} x cos nx dx
Here we have made use of the fact that |x| cos nx is an even function. In order to compute the resulting integral, we need to use integration by parts,

∫₀^{π} x cos nx dx = [x (1/n) sin nx]₀^{π} − (1/n) ∫₀^{π} sin nx dx

by letting u = x and dv = cos nx dx. Thus, du = dx and v = ∫ dv = (1/n) sin nx. Continuing with the computation, we have

aₙ = (2/π) [ (π/n) sin nπ + (1/n²)(cos nπ − 1) ] = (2/(πn²)) ((−1)ⁿ − 1)

Here we have used the fact that cos nπ = (−1)ⁿ for any integer n. This leads to a factor (1 − (−1)ⁿ). This factor can be simplified as

1 − (−1)ⁿ = 2 for n odd, 0 for n even

So, aₙ = 0 for n even and aₙ = −4/(πn²) for n odd. Computing the bₙ's is simpler: we have to integrate |x| sin nx from x = −π to π. The integrand is an odd function and this is a symmetric interval, so the result is that bₙ = 0 for all n. Putting this all together, the Fourier series representation of f(x) = |x| on [−π, π] is given as

f(x) ∼ π/2 − (4/π) ∑_{n odd} (cos nx)/n²
While this is correct, we can rewrite the sum over only odd n by reindexing. We let n = 2k − 1 for k = 1, 2, 3, . . . ; then we only get the odd integers. The series can then be written as

f(x) ∼ π/2 − (4/π) ∑_{k=1}^{∞} cos((2k − 1)x) / (2k − 1)²
Throughout our discussion we have referred to such results as Fourier representations. We have
not looked at the convergence of these series. Here is an example of an infinite series of
functions. What does this series sum to? We show in Figure 2.10 the first few partial sums.
They appear to be converging to f(x) = |x| fairly quickly. Even though f(x) was defined on [−π,
π] we can still evaluate the Fourier series at values of x outside this interval. In Figure 2.11, we
see that the representation agrees with f(x) on the interval [−π, π]. Outside this interval we have
a periodic extension of f(x) with period 2π. Another example is the Fourier series representation of f(x) = x on [−π, π]. This is determined to be

f(x) ∼ 2 ∑_{n=1}^{∞} (−1)^{n+1} (sin nx)/n
As seen in Figure 2.12 we again obtain the periodic extension of the function. In this case we needed many more terms. Also, the vertical segments at the jumps are artifacts of the plotting; the series itself converges to the average of the left and right limits at each jump discontinuity.
Figure 2.10: Plot of the first partial sums of the Fourier series representation for f(x) = |x|.
Figure 2.11: Plot of the first 10 terms of the Fourier series representation for f(x) = |x| on the
interval [−2π, 4π].
Figure 2.12: Plot of the first 10 terms and 200 terms of the Fourier series representation for
f(x) = x on the interval [−2π, 4π].
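The partial sums in Figure 2.10 can be reproduced with a short script; this sketch sums the first K odd-index terms of the series derived above:

```python
# Partial sums S_K(x) = pi/2 - (4/pi) * sum_{k=1}^{K} cos((2k-1)x)/(2k-1)^2
# for f(x) = |x| on [-pi, pi].
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 400)
for K in (1, 2, 5, 20):
    S = np.pi/2 - (4/np.pi) * sum(np.cos((2*k - 1)*x) / (2*k - 1)**2
                                  for k in range(1, K + 1))
    plt.plot(x, S, label=f'K = {K}')
plt.plot(x, np.abs(x), 'k--', label='|x|')
plt.legend()
plt.show()
```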
Suppose a signal x(t) has finite duration, that is, x(t) = 0 for |t| > T₁, as illustrated in the figure below.
• From this aperiodic signal, we construct a periodic signal x̃(t), shown in the figure below.
Condition 1: x(t) is absolutely integrable, that is,

∫_{−∞}^{∞} |x(t)| dt < ∞

Condition 2: In any finite interval of time, x(t) has a finite number of maxima and minima.
Condition 3: In any finite interval of time, there are only a finite number of discontinuities. Furthermore, each of these discontinuities is finite.
Example: For x(t) = δ(t),

X(jω) = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = 1

That is, the impulse has a Fourier transform consisting of equal contributions at all frequencies.
Example: Calculate the Fourier transform of the rectangular pulse signal

x(t) = 1 for |t| < T₁, and 0 for |t| > T₁

X(jω) = ∫_{−T₁}^{T₁} e^{−jωt} dt = 2 sin(ωT₁)/ω

The inverse Fourier transform is

x̂(t) = (1/2π) ∫_{−∞}^{∞} X(jω) e^{jωt} dω

x̂(t) converges to x(t) everywhere except at the discontinuities t = ±T₁, where x̂(t) converges to ½, which is the average value of x(t) on both sides of the discontinuity.
In addition, the convergence of x̂(t) to x(t) exhibits the Gibbs phenomenon. Specifically, consider the integral over a finite-length interval of frequencies,

x_W(t) = (1/2π) ∫_{−W}^{W} X(jω) e^{jωt} dω

As W → ∞, this signal converges to x(t) everywhere except at the discontinuities. Moreover, the signal exhibits ripples near the discontinuities. The peak values of these ripples do not decrease as W increases, although the ripples do become compressed toward the discontinuity, and the energy in the ripples converges to zero.
Comparing the results in the preceding example and this example, we see that a square pulse in the time domain has a Fourier transform that is a sinc function, while a sinc function in the time domain has a Fourier transform that is a square pulse in frequency. This property is referred to as the duality property.
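The transform of the rectangular pulse can be checked by direct numerical integration; by symmetry the imaginary part vanishes, so only the cosine integral is needed (T₁ = 1 is an arbitrary test value):

```python
# Check X(jw) = 2*sin(w*T1)/w for the pulse x(t) = 1, |t| < T1, by quadrature.
import numpy as np
from scipy.integrate import quad

T1 = 1.0
def X(w):
    return quad(lambda t: np.cos(w * t), -T1, T1)[0]   # real part of the transform

for w in (0.5, 1.0, 2.0):
    print(X(w), 2 * np.sin(w * T1) / w)   # the two values agree
```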
We also note that when the width of X( jw) increases, its inverse Fourier transform x(t) will be
compressed. When W → ∞ , X( jw) converges to an impulse. The transform pair with several
different values of W is shown in the figure below.
Example: If the Fourier series coefficients for the square wave below are given
The Fourier transform of this signal is
Example:
The Fourier transforms for x (t ) = sin ω0t and x(t ) = cosω0t are shown in the figure below.
Example: Calculate the Fourier transform for signal
The Fourier transform of a periodic impulse train in the time domain with period T is a periodic impulse train in the frequency domain with period 2π/T, as sketched in the figure below.
2.7 Properties of The Continuous-Time Fourier Transform
2.7.1 Linearity
If x₁(t) ↔ X₁(jω) and x₂(t) ↔ X₂(jω), then

a x₁(t) + b x₂(t) ↔ a X₁(jω) + b X₂(jω)

2.7.2 Time Shifting
If x(t) ↔ X(jω), then

x(t − t₀) ↔ e^{−jωt₀} X(jω)

Thus, the effect of a time shift on a signal is to introduce into its transform a phase shift, namely, −ωt₀.
Example: To evaluate the Fourier transform of the signal x(t) shown in the figure below.
The signal x(t) can be expressed as the linear combination
x 1(t) and x2( t) are rectangular pulse signals and their Fourier transforms are
Using the linearity and time-shifting properties of the Fourier transform yields
2.7.3 Conjugation and Conjugate Symmetry
If x(t) ↔ X(jω), then

x*(t) ↔ X*(−jω)

and if x(t) is real, X(jω) has conjugate symmetry: X(−jω) = X*(jω). We can also prove that if x(t) is both real and even, then X(jω) will also be real and even. Similarly, if x(t) is both real and odd, then X(jω) will be purely imaginary and odd.
A real function x(t) can be expressed as the sum of an even function x_e(t) = Ev{x(t)} and an odd function x_o(t) = Od{x(t)}, that is, x(t) = x_e(t) + x_o(t). From the preceding discussion, F{x_e(t)} is a real function and F{x_o(t)} is purely imaginary. Thus we conclude that with x(t) real,

Ev{x(t)} ↔ Re{X(jω)},  Od{x(t)} ↔ j Im{X(jω)}
Example: Using the symmetry properties of the Fourier transform and the result

e^{−at} u(t) ↔ 1/(a + jω)

evaluate the Fourier transform of the signal x(t) = e^{−a|t|}, where a > 0. Since

e^{−a|t|} = e^{−at} u(t) + e^{at} u(−t) = 2 Ev{e^{−at} u(t)}

we have

X(jω) = 2 Re{1/(a + jω)} = 2a/(a² + ω²)
2.7.4 Differentiation and Integration
If x(t) ↔ X(jω), then

dx(t)/dt ↔ jω X(jω)

∫_{−∞}^{t} x(τ) dτ ↔ (1/jω) X(jω) + π X(0) δ(ω)

Example: Consider the Fourier transform of the unit step x(t) = u(t). It is known that u(t) is the running integral of g(t) = δ(t), whose transform is G(jω) = 1. Using the integration property,

U(jω) = (1/jω) G(jω) + π G(0) δ(ω) = 1/(jω) + π δ(ω)

where G(0) = 1.
Example: Consider the Fourier transform of the function x(t) shown in the figure below.
From the above figure we can see that g(t) is the sum of a rectangular pulse and two impulses.
It can be found X( jw) is purely imaginary and odd, which is consistent with the fact that x(t)
is real and odd.
2.7.5 Time and Frequency Scaling
If x(t) ↔ X(jω), then

x(at) ↔ (1/|a|) X(jω/a)

From this equation we see that if a signal is compressed in the time domain, its spectrum is stretched in the frequency domain, and vice versa. For a = −1,

x(−t) ↔ X(−jω)

That is, reversing a signal in time also reverses its Fourier transform.
2.7.6 Duality
The duality of the Fourier transform can be demonstrated using the following example.
The symmetry exhibited by these two examples extends to Fourier transform in general. For
any transform pair, there is a dual pair with the time and frequency variables interchanged.
Example: Consider using duality and the result
Based on the duality property we can get some other properties of Fourier transform:
2.7.7 Parseval's Relation
For x(t) ↔ X(jω), we have

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|² dω

Parseval's relation states that the total energy may be determined either by computing the energy per unit time, |x(t)|², and integrating over all time, or by computing the energy per unit frequency, |X(jω)|²/2π, and integrating over all frequencies. For this reason, |X(jω)|² is often referred to as the energy-density spectrum.
2.8 The Convolution Properties
If y(t) = h(t) * x(t), then

Y(jω) = H(jω) X(jω)

This equation shows that the Fourier transform maps the convolution of two signals into the product of their Fourier transforms.
H( jw), the transform of the impulse response, is the frequency response of the LTI system,
which also completely characterizes an LTI system.
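On sampled data the property can be verified with the DFT, provided both signals are zero-padded so that the linear convolution fits within the transform length; the test signals below are arbitrary:

```python
# DFT of the (zero-padded) linear convolution equals the product of the DFTs.
import numpy as np

x = np.random.default_rng(1).standard_normal(64)
h = np.exp(-0.1 * np.arange(64))
N = 127                                   # >= len(x) + len(h) - 1
lhs = np.fft.fft(np.convolve(x, h), N)
rhs = np.fft.fft(x, N) * np.fft.fft(h, N)
print(np.allclose(lhs, rhs))              # True
```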
Example: The frequency response of a differentiator, y(t) = dx(t)/dt, follows from the differentiation property:

H(jω) = jω

The impulse response of an integrator is the unit step, and therefore the frequency response of the system is

H(jω) = 1/(jω) + π δ(ω)
Example: Consider an LTI system with impulse response h(t) = e^{−at} u(t) and input x(t) = e^{−bt} u(t), with a, b > 0. Using the convolution property, we have

Y(jω) = 1/[(a + jω)(b + jω)] = [1/(b − a)] [1/(a + jω) − 1/(b + jω)]

The inverse transform for each of the two terms can be written directly. Using the linearity property, we have

y(t) = [1/(b − a)] [e^{−at} − e^{−bt}] u(t)

We should note that when a = b, the above partial fraction expansion is not valid. However, with a = b, we have

Y(jω) = 1/(a + jω)²  and therefore  y(t) = t e^{−at} u(t)
2.9 The Multiplication Property
Multiplication of one signal by another can be thought of as using one signal to scale or modulate the amplitude of the other; consequently, the multiplication of two signals is often referred to as amplitude modulation.
Example: Let s(t) be a signal whose spectrum S(jω) is depicted in the figure below, and let p(t) be the modulating signal with spectrum P(jω). The spectrum of r(t) = s(t) p(t) is obtained by using the multiplication property,

R(jω) = (1/2π) [S(jω) * P(jω)]

If we use a lowpass filter with frequency response H(jω) that is constant at low frequencies and zero at high frequencies, then the output will be a scaled replica of S(jω), and therefore the output will be a scaled version of s(t): the modulated signal is recovered.
2.10 Summary of Fourier Transform Properties and Basic Fourier Transform Pairs
System Characterized by Linear Constant-Coefficient Differential Equations
Consider an LTI system described by the following differential equation:

∑_{k=0}^{N} aₖ dᵏy(t)/dtᵏ = ∑_{k=0}^{M} bₖ dᵏx(t)/dtᵏ

Applying the Fourier transform to both sides and using the linearity and differentiation properties, we have

H(jω) = Y(jω)/X(jω) = ∑_{k=0}^{M} bₖ (jω)ᵏ / ∑_{k=0}^{N} aₖ (jω)ᵏ

where X(jω), Y(jω) and H(jω) are the Fourier transforms of the input x(t), output y(t) and the impulse response h(t), respectively.
Example: Consider a stable LTI system that is characterized by the differential equation
The frequency response of this system is
SUMMARY
If one integrates an even function over a symmetric interval, then ∫_{−a}^{a} f(x) dx = 2∫₀^{a} f(x) dx.
Over any period, x(t) must be absolutely integrable, that is
In any finite interval of time, x(t) has a finite number of maxima and minima.
In any finite interval of time, there are only a finite number of discontinuities.
The Fourier transform can be plotted in terms of the magnitude and phase, as
The impulse has a Fourier transform consisting of equal contributions at all frequencies.
The Fourier series representation of the signal x(t) is
The Fourier transform of a periodic impulse train in the time domain with period T is a periodic impulse train in the frequency domain with period 2π/T.
The properties of the Fourier transform include linearity, time shifting, conjugation and conjugate symmetry, differentiation and integration, time and frequency scaling, duality, Parseval's relation, the convolution property, and the multiplication property.
Questions:
Q1) Explain Trigonometric Fourier Series with example.
Q2) Explain Exponential Fourier Series with example.
Q3) Explain Convergence Of Fourier Transform.
Q4) Explain Fourier Transform For Periodic Signals
Q5) Explain Properties of Fourier Transform .
Q6) Explain Convolution properties of Fourier Transform.
Q7) Explain Multiplication properties of Fourier Transform.
Books
1. Digital Signal Processing by S. Salivahanan, C. Gnanapriya, Second Edition, TMH
References
3. Signals and Systems by Alan V. Oppenheim and Alan S. Willsky with S. Hamid Nawab,
Second Edition, PHI (EEE)
4. Digital Signal Processing by Apte, Second Edition, Wiley India.
Unit 2
Chapter 3
LAPLACE TRANSFORM
3.0 Objectives
3.1 Introduction
3.2 Definition of Laplace Transform
3.3 Convergence of Laplace Transform
3.4 Properties of ROC
3.5 Properties of Laplace Transform
3.5.1 Linearity
3.5.2 Time Shifting (Translation in Time Domain)
3.5.3 Shifting in s- Domain (Complex Translation)
3.5.4 Time Scaling
3.5.5 Differentiation in Time Domain
3.5.6 Differentiation in s- Domain
3.5.7 Convolution in Time Domain
3.5.8 Integration in Time domain
3.5.9 Integration in s- Domain
3.6 Examples of Laplace Transform
3.7 Unilateral Laplace Transform
3.7.1 Differentiation in Time Domain
3.7.2 Initial Value Theorem
3.7.3 Final Value Theorem
3.0 OBJECTIVES
• Understand Laplace transform of basic signals.
• To understand and apply properties of Laplace transform.
• To understand and apply unilateral Laplace transforms.
3.1 INTRODUCTION
The Laplace transform represents continuous-time signals in terms of complex exponentials, i.e. e^{−st}. Continuous-time systems are also analyzed more effectively using the Laplace transform, and the Laplace transform can be applied to the analysis of unstable systems as well.
Types of Laplace Transform
i) Bilateral or two sided Laplace transform
ii) Unilateral or one sided Laplace transform
3.2 Definition of Laplace Transform
X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt
The range of values of σ for which Laplace transform converges is called region of convergence
or ROC.
Example 3.1: Calculate the Laplace transform of the following function and plot its ROC:
i) x(t) = e^{at} u(t)
Solution:

X(s) = ∫₀^{∞} e^{at} e^{−st} dt = ∫₀^{∞} e^{−(s−a)t} dt = 1/(s − a),  ROC: Re(s) > a
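Computer algebra gives the same pair; SymPy's laplace_transform is unilateral and reports the convergence abscissa, so the ROC Re(s) > a can be read off directly (declaring a positive only simplifies the symbolic result):

```python
# Symbolic check of Example 3.1 with SymPy's unilateral Laplace transform.
import sympy as sp

t, s = sp.symbols('t s')
a = sp.Symbol('a', positive=True)

F, abscissa, cond = sp.laplace_transform(sp.exp(a * t), t, s)
print(F)          # 1/(s - a)
print(abscissa)   # a  -> ROC: Re(s) > a
```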
3.5.1 Linearity
Statement: Laplace transform follows superposition principle ,i.e. it is linear
Proof:
Here ROC: R₁ ∩ R₂ indicates the intersection of R₁ and R₂
3.5.2 Time Shifting (Translation in Time Domain)
Statement: A shift of t₀ in the time domain multiplies the transform by e^{−st₀}:

x(t − t₀) ↔ e^{−st₀} X(s)

Proof:
3.5.3 Shifting in s-Domain (Complex Translation)
Statement: Multiplication by e^{s₀t} in the time domain shifts the transform in the s-domain:

e^{s₀t} x(t) ↔ X(s − s₀)

Proof:
3.5.4 Time Scaling
Statement: Expansion in time domain is equivalent to compression in frequency domain and
vice versa
x(at) ↔ (1/|a|) X(s/a),  ROC: R/a
Proof:
This result shows that inverting the time axis inverts frequency axis as well as ROC.
3.5.5 Differentiation in Time Domain
Statement: dx(t)/dt ↔ s X(s)
Proof:
Differentiate both sides of the inverse Laplace transform relation with respect to t, i.e.
3.5.6 Differentiation in s-Domain
Statement: t x(t) ↔ −dX(s)/ds
Proof:

3.5.7 Convolution in Time Domain
Statement: x₁(t) * x₂(t) ↔ X₁(s) X₂(s), ROC: containing R₁ ∩ R₂
Proof:
Proof:
Hence above equation becomes
Proof:
Changing the order of integration and rearranging the terms,
Here e^{−s·∞} = e^{−∞} = 0 if Re(s) > 0. Then the above equation becomes
ROC
Thus
Here use,
Similarly,
We know that ,
The above equation can be written as ,
Therefore,
3.7 Unilateral Laplace Transform
The unilateral Laplace transform is given as ,
X(s) = ∫_{0⁻}^{∞} x(t) e^{−st} dt
The lower limit is taken as 0- to indicate that initial conditions at t=0 are also considered.
Note that the unilateral Laplace transform will always be convergent, since its ROC is always the right-hand side of the s-plane.
3.7.1 Differentiation in Time Domain
Let x(t) ↔ X(s) be a Laplace transform pair. Then

dx(t)/dt ↔ s X(s) − x(0⁻)
x(0-) indicates the value of x(t) just before t=0 and x(0+) indicates value of x(t) just after t=0 .
If the function x(t) is continuous at t = 0, then its value just before and after t = 0 will be the same, i.e., x(0⁻) = x(0⁺) = x(0).
3.7.2 Initial Value Theorem
Let x(t) ↔ X(s) be a Laplace transform pair. Then the initial value of x(t) is given as

x(0⁺) = lim_{s→∞} s X(s)

This equation is used to determine the initial value of x(t) and its derivative.
3.7.3 Final Value Theorem
Let x(t) ↔ X(s) be a Laplace transform pair. Then the final value of x(t) is given as

x(∞) = lim_{s→0} s X(s)
Example 3.6: Use the s-domain shift property and the transform pair

cos(ω₁t) u(t) ↔ s/(s² + ω₁²)

to derive the unilateral Laplace transform of x(t) = e^{−at} cos(ω₁t) u(t).
Solution:
Applying the s-domain shift with s₀ = −a,

X(s) = (s + a)/[(s + a)² + ω₁²]
Example 3.7 : Determine initial and final values of signal x(t) whose unilateral Laplace
transform :
Solution:
Initial value is given by,
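Since the transform in Example 3.7 is not reproduced here, the sketch below applies both theorems to a stand-in X(s) = (2s + 1)/(s(s + 1)), a hypothetical transform chosen so that the final value exists (its nonzero pole lies in the left half-plane):

```python
# Initial value x(0+) = lim_{s->oo} s*X(s); final value x(oo) = lim_{s->0} s*X(s).
import sympy as sp

s = sp.symbols('s')
X = (2*s + 1) / (s * (s + 1))      # hypothetical X(s), not from the text
print(sp.limit(s * X, s, sp.oo))   # 2  -> initial value
print(sp.limit(s * X, s, 0))       # 1  -> final value
```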
SUMMARY
The Laplace transform represents continuous-time signals in terms of complex exponentials, i.e. e^{−st}. Continuous-time systems are also analyzed more effectively using the Laplace transform, and the Laplace transform can be applied to the analysis of unstable systems as well.

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt

If the Fourier transform of x(t) e^{−σt} exists, then the Laplace transform of x(t) exists. For the Fourier transform to exist, x(t) e^{−σt} must be absolutely integrable:

∫_{−∞}^{∞} |x(t) e^{−σt}| dt < ∞
The range of values of σ for which Laplace transform converges is called region of convergence
or ROC.
No poles lie in ROC.
The ROC of a causal signal is right-sided; it is of the form Re(s) > a.
The ROC of a noncausal signal is left-sided; it is of the form Re(s) < a.
The system is stable if its ROC includes jω axis of s-plane.
Properties of Laplace Transform are Linearity, Time Shifting, Shifting in s- Domain, Time
Scaling, Differentiation in Time Domain, Differentiation in s- Domain, Convolution in Time
Domain, Integration in Time domain, and Integration in s- Domain.
The lower limit is taken as 0- to indicate that initial conditions at t=0 are also considered.
Note that the unilateral Laplace transform will always be convergent, since its ROC is always the right-hand side of the s-plane.
Questions:
1) Calculate the Laplace transform of e^{−at} u(t).
[Ans. 1/(s + a), ROC: Re(s) > −a]
2) Calculate the Laplace transform of −e^{−at} u(t).
[Ans. −1/(s + a), ROC: Re(s) > −a]
3) Calculate the Laplace transform of −e^{−3t} u(−t).
[Ans. 1/(s + 3), ROC: Re(s) < −3]
4) Obtain Laplace transform of the following signals.
i. x(t) = sin (3t) u(t)
ii. x(t) = e-2t u (t + 1)
Books
1. Digital Signal Processing by S. Salivahanan, C. Gnanapriya, Second Edition, TMH
References
1. Digital Signal Processing by Sanjit K. Mitra, Third Edition, TMH
2. Signals and systems by A Anand Kumar (PHI) 2011
3. Signals and Systems by Alan V. Oppenheim and Alan S. Willsky with S. Hamid Nawab,
Second Edition, PHI (EEE)
4. Digital Signal Processing by Apte, Second Edition, Wiley India.
Unit 3
Chapter 4 : Z-transform
Unit Structure
4.0 Objective
4.1 Introduction
4.2 Definition of z-transform
4.2.1.1 Region of Convergence (ROC)
4.3 Properties of z-transform
4.4 Evaluation of the Inverse of z-transform
4.5 Summary
4.6 Exercise
4.7 List of References
4.0 OBJECTIVE
By the end of this chapter, student will be able to understand Z-transform as a tool for the
solution of linear constant difference equations. Also one can analyse discrete time systems in
the frequency domain.
4.1 INTRODUCTION
The z-transform simplifies signal analysis by describing a discrete-time system through a finite number of poles and zeros in the z-plane. The z-transform has real and imaginary parts, whose plot is called the z-plane. The z-transform maps (transforms) any point s = ±σ ± jω in the s-plane to a corresponding point z = re^{jθ} in the z-plane using the relationship

z = e^{sT}, where T is the sampling period
The poles and zeros of discrete time system are plotted in the complex z-plane.
Figure 4.1 shows Mapping of s-plane into z-plane for 𝑧 = 𝑒 𝑗𝑤𝑇
Fig 4.1 Mapping of s-plane into z-plane for 𝑧 = 𝑒 𝑗𝑤𝑇
The stability of the system can be checked using pole-zero plot. Also z-transform can be used
to analyse discrete time systems for finding system transfer function and digital network
realisation.
The (two-sided) z-transform of a sequence x(n) is defined as

X(z) = ∑_{n=−∞}^{∞} x(n) z⁻ⁿ

where z is a complex variable. This equation is also called the two-sided z-transform. The one-sided z-transform is given as

X(z) = ∑_{n=0}^{∞} x(n) z⁻ⁿ
Inverse Z-transform
Inverse z-transform is defined as :
𝑥(𝑛) = 𝑍 −1 [𝑋(𝑧)]
The inverse z-transform is applied to recover the original time-domain discrete signal from its frequency-domain representation.
If the output signal magnitude of the digital system x(n) is to be finite, then the magnitude of its z-transform must be finite. The values of z in the z-plane for which the magnitude of X(z) is finite form the Region of Convergence (ROC).
For example, the z-transform of the unit step u(n) is X(z) = z/(z − 1), which has a zero at z = 0 and a pole at z = 1; the ROC is |z| > 1, the area outside the unit circle, extending to ∞ as shown in Fig 4.2
Fig 4.2 Pole-zero plot and ROC of the Unit-Step response 𝑢(𝑛)
Properties of ROC :
1) ROC does not contain any poles
2) System stability can be checked with ROC
3) ROC also determines the types of sequence as
a) Causal or Non-causal signal
b) Finite or Infinite signal
Table 4.1) Finite Duration causal, anti-causal and two-sided signals with their ROCs.
Finite Duration Signals and their ROCs
Table 4.2) Infinite Duration causal, anti-causal and two-sided signals with their ROCs
Infinite Duration Signals and their ROCs
Table 4.3) Some z-transform pairs
Sl | Signal x(t) | Sequence x(n) | Laplace Transform X(s) | z-transform X(z) | ROC
1. | δ(t)        | δ(n)          | 1                      | 1                | All z-plane
Example 4.1) Determine the z-transform of the following finite-duration sequences and plot their ROCs:
(a) x(n) = {4, 2↑, 3, 1, 3}, where ↑ marks the n = 0 sample
(b) x(n) = {4, 3, 1, 4, 4↑, 0, 2}
Solution:
(a) x(n) = {4, 2↑, 3, 1, 3}
Taking the z-transform,

X(z) = 4z + 2 + 3z⁻¹ + z⁻² + 3z⁻³

ROC: entire z-plane except z = 0 and z = ∞
(b) x(n) = {4, 3, 1, 4, 4↑, 0, 2}
Taking the z-transform,

X(z) = 4z⁴ + 3z³ + z² + 4z + 4 + 2z⁻²

ROC: entire z-plane except z = 0 and z = ∞
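For finite sequences like these, X(z) is just a Laurent polynomial and can be evaluated anywhere except the excluded points; a small sketch for part (b):

```python
# Evaluate X(z) = 4z^4 + 3z^3 + z^2 + 4z + 4 + 2z^-2 at a test point.
def X(z):
    terms = {4: 4, 3: 3, 2: 1, 1: 4, 0: 4, -2: 2}   # exponent -> coefficient
    return sum(c * z**k for k, c in terms.items())

print(X(2.0))   # 64 + 24 + 4 + 8 + 4 + 0.5 = 104.5
```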
(c) 𝑥(𝑛) = 𝛿(𝑛), hence 𝑋 (𝑧) = 1, ROC : Entire z-plane
(d) x(n) = {4↑, 2, 3, 1, 3}, for which

X(z) = 4 + 2z⁻¹ + 3z⁻² + z⁻³ + 3z⁻⁴

ROC: entire z-plane except z = 0
Example 4.3) By applying the time shifting property, determine the inverse z-transform of the signal

X(z) = z⁻¹/(1 − 2z⁻¹)

Solution: Since Z⁻¹{1/(1 − 2z⁻¹)} = (2)ⁿ u(n), applying the time shifting property with k = 1 gives
x(n) = (2)^{n−1} u(n − 1) (Ans)
y(n) = {6↑, 4, 5, 6, 1, 2} (Ans)
The final value theorem states that x(∞) = lim_{|z|→1} (1 − z⁻¹) X(z), if x(∞) exists.
Example 4.5) Find the initial and final values of x(n) for X(z) = 1 + 2z⁻¹ + 3z⁻²
Solution:

x(0) = lim_{|z|→∞} [1 + 2z⁻¹ + 3z⁻²] = 1 + 2/∞ + 3/∞ = 1

x(∞) = lim_{|z|→1} (1 − z⁻¹) X(z) = lim_{|z|→1} [1 + z⁻¹ + z⁻² − 3z⁻³] = 1 + 1 + 1 − 3 = 0

(Ans)
Various methods are available for taking the inverse z-transform. We will use the partial fraction method.
For example, consider X(z) = 1/[(1 + z⁻¹)(1 − z⁻¹)]. Writing X(z) = a/(1 + z⁻¹) + b/(1 − z⁻¹) and equating numerators,
1 = 𝑎(1 − 𝑧 −1 ) + 𝑏(1 + 𝑧 −1 )
1 = (𝑎 − 𝑎𝑧 −1 + 𝑏 + 𝑏𝑧 −1 )
1 = 𝑎 + 𝑏 + 𝑏𝑧 −1 − 𝑎𝑧 −1
1 = (𝑎 + 𝑏) + (𝑏 − 𝑎)𝑧 −1
Equating like terms,
(𝑎 + 𝑏) = 1, (𝑏 − 𝑎 ) = 0
Solving simultaneously,
a = 1/2,  b = 1/2

X(z) = (1/2)/(1 + z⁻¹) + (1/2)/(1 − z⁻¹)

Taking the inverse z-transform,

x(n) = [½(−1)ⁿ + ½(1)ⁿ] u(n)
Similarly, for

X(z) = (2 + 3z⁻¹)/[(1 + z⁻¹)(1 + ½z⁻¹)(1 − ¼z⁻¹)] = a/(1 + z⁻¹) + b/(1 + ½z⁻¹) + c/(1 − ¼z⁻¹)

a = (2 + 3z⁻¹)/[(1 + ½z⁻¹)(1 − ¼z⁻¹)] |_{z⁻¹=−1} = −8/5

b = (2 + 3z⁻¹)/[(1 + z⁻¹)(1 − ¼z⁻¹)] |_{z⁻¹=−2} = 8/3

c = (2 + 3z⁻¹)/[(1 + z⁻¹)(1 + ½z⁻¹)] |_{z⁻¹=4} = 14/15

X(z) = (−8/5)/(1 + z⁻¹) + (8/3)/(1 + ½z⁻¹) + (14/15)/(1 − ¼z⁻¹)

x(n) = Z⁻¹[X(z)]

x(n) = [−(8/5)(−1)ⁿ + (8/3)(−½)ⁿ + (14/15)(¼)ⁿ] u(n)

(Ans)
Example 4.9) Determine the inverse z-transform of X(z) = z²/(z − a)² for ROC |z| > |a|
Solution: Using the residue method,

x(n) = ∑ᵢ Res[X(z) z^{n−1}]_{z=pᵢ}

For the double pole at z = a,

x(n) = {d/dz [(z − a)² · z^{n+1}/(z − a)²]}_{z=a} = {d/dz z^{n+1}}_{z=a} = (n + 1) aⁿ

so x(n) = (n + 1) aⁿ u(n). (Ans)
4.5 SUMMARY
4.6 EXERCISE
1) Define z-transform
2) Explain shift property of z-transform
3) Explain what discrete convolution is
4) Explain inverse z-transform
5) Determine the z-transform of u(n − 5)
6) Determine the inverse z-transform of X(z) = 1/(z + 2)², for |z| < 2
7) Determine the causal signal x(n) having z-transform X(z) = (z² + z)/[(z − ½)³(z − ¼)], for |z| > ½
5.0 OBJECTIVE
This chapter deals with the analysis of linear time-invariant systems using tools like the z-transform. Important methods like convolution and the response of discrete systems to standard digital inputs like impulse and unit step signals are discussed in detail. The concept of the difference equation is introduced for signal analysis.
5.1 INTRODUCTION
A typical digital system has an input x(n) and produces an output y(n) as a response to that input. The system output depends on the input as well as on the system parameters. Some of the inputs used to study digital systems are impulse and unit step signals, among others. Similarly, systems can be classified as linear, time variant, time invariant, and others.
Various system properties as discussed in this chapter are linearity, time invariance, causality
and stability.
Linearity
𝐹 [𝑎1 𝑥1 (𝑛) ± 𝑎2 𝑥2 (𝑛)] = 𝑎1 𝐹[𝑥1 (𝑛)] ± 𝑎2 𝐹[𝑥2 (𝑛)] = 𝑎1 𝑦1 (𝑛) ± 𝑎2 𝑦2 (𝑛)
Where F is an operator
Example 5.1) Check if the system 𝑭[𝒙(𝒏)] = 𝟑 𝒏 𝒙(𝒏) + 𝟒 is linear :
Solution :
F[x₁(n) + x₂(n)] = 3n[x₁(n) + x₂(n)] + 4
Since F[x₁(n) + x₂(n)] ≠ F[x₁(n)] + F[x₂(n)], the system is non-linear.
Time Invariance
A system is said to be time-invariant if the relationship between the input and output does not
change with time.
If 𝑦(𝑛) = 𝐹[𝑥(𝑛)], then 𝑦(𝑛 − 𝑘) = 𝐹 [𝑥(𝑛 − 𝑘)] = 𝑧 −𝑘 𝐹[𝑥(𝑛)]
𝑧 −𝑘 represents a single delay of 𝑘 samples.
Causality
The systems in which changes in the output depend only on the present and past values of the input and/or previous output values, and not on future input values, are called causal systems.
The Causality condition for linear time invariant systems is given as :
ℎ(𝑛) = 0 𝑓𝑜𝑟 𝑛 < 0
Example 5.3) Check if system 𝑦(𝑛) = 𝑥(𝑛 − 2) + 𝑥(𝑛 − 3) is causal or not.
Solution :
In this system, since the output is computed only from past sample values, i.e. x(n − 2) and x(n − 3), the system is causal.
Stability
A DSP system is said to be stable if its poles lie inside the unit circle,

|pᵢ| < 1, equivalently ∑_{n=−∞}^{∞} |h(n)| < ∞
The pole-zero plot in Fig. 5.1 shows pole position of stable and unstable systems.
Example 5.5) Check the stability of the system

H(z) = (z² − z + 1)/(z² − z + ½)

Solution:

H(z) = (z² − z + 1)/(z² − z + ½) = (z² − z + 1)/[(z − ½ − j½)(z − ½ + j½)]

Since |½ ± j½| = 1/√2 < 1, the given system is stable. (Ans)
A system is BIBO stable if and only if

∑_{k=0}^{∞} |h(k)| < ∞

For the given system,

∑_{k=0}^{∞} |h(k)| = |h(0)| + |h(1)| + . . . = 8 + 5 + 5 + . . . + 5 + . . .

This is a diverging series, hence the given system is BIBO unstable. (Ans)
5.3 DISCRETE CONVOLUTION
Convolving two signals in the time domain is the same as multiplying their transforms in the frequency domain. Convolution is useful in studying and analysing the response of a given system to an input signal. The convolution of two signals is given as:

y(n) = x(n) * h(n) = ∑_{k=−∞}^{∞} x(k) h(n − k)
Properties of Convolution :
a) Commutative law :
𝑥 (𝑛) ∗ ℎ(𝑛) = ℎ(𝑛) ∗ 𝑥(𝑛)
b) Associative law :
For 𝑦1 (𝑛) = 𝑥(𝑛) ∗ ℎ1 (𝑛)
and 𝑦(𝑛) = 𝑦1 (𝑛) ∗ ℎ2 (𝑛)
𝑦(𝑛) = [𝑥(𝑛) ∗ ℎ1 (𝑛)] ∗ ℎ2 (𝑛)
𝑦(𝑛) = 𝑥 (𝑛) ∗ [ℎ1 (𝑛) ∗ ℎ2 (𝑛)]
c) Distributive law :
𝑥(𝑛) ∗ [ℎ1 (𝑛) + ℎ2 (𝑛)] = 𝑥(𝑛) ∗ ℎ1 (𝑛) + 𝑥(𝑛) ∗ ℎ2 (𝑛)
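These laws can be confirmed numerically on short test sequences:

```python
# Commutative and distributive laws of discrete convolution, checked numerically.
import numpy as np

x  = np.array([1.0, 2.0, 3.0])
h1 = np.array([1.0, -1.0])
h2 = np.array([0.5, 0.5])

print(np.allclose(np.convolve(x, h1), np.convolve(h1, x)))        # commutative
print(np.allclose(np.convolve(x, h1 + h2),
                  np.convolve(x, h1) + np.convolve(x, h2)))       # distributive
```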
Example 5.7) Plot the signal given by the sequence {1, 1, 0↑, 1, 1}
Solution:
5.4 SOLUTION OF LINEAR CONSTANT COEFFICIENT DIFFERENCE
EQUATION
A discrete time system transforms an input sequence into an output sequence according to the
recursion formula that represents the solution of a difference equation.
The general form of the difference equation is:

∑_{k=0}^{N} aₖ y(n − k) = ∑_{k=0}^{M} bₖ x(n − k)

The total solution is y(n) = y_h(n) + y_p(n),
Where, 𝑦ℎ (𝑛) is the solution to the homogeneous difference equation and 𝑦𝑝 (𝑛) represents
the particular solution to the difference equation.
In a discrete time-invariant system, if the input is of the form e^{jωn}, the output is H(e^{jω}) e^{jωn}. H(e^{jω}) is a function of ω which denotes the frequency response of the system:

y(n) = H(e^{jω}) e^{jωn}
Taking the z-transform of

∑_{k=0}^{N} aₖ y(n − k) = ∑_{k=0}^{M} bₖ x(n − k)

the system function is

H(z) = Y(z)/X(z) = ∑_{k=0}^{M} bₖ z⁻ᵏ / ∑_{k=0}^{N} aₖ z⁻ᵏ
Example 5.9) Determine the impulse response of the systems described by the difference
equation :
𝑦(𝑛) = 0.7𝑦(𝑛 − 1) − 0.1𝑦(𝑛 − 2) + 2𝑥(𝑛) − 𝑥(𝑛 − 2)
Solution :
The given difference equation is :
𝑦(𝑛) = 0.7𝑦(𝑛 − 1) − 0.1𝑦(𝑛 − 2) + 2𝑥(𝑛) − 𝑥(𝑛 − 2)
This equation can be written as:

y(n) − 0.7y(n − 1) + 0.1y(n − 2) = 2x(n) − x(n − 2)

The input impulse is given as x(n) = δ(n); hence y(n) = h(n), the impulse response. Taking the z-transform,

H(z)[1 − 0.7z⁻¹ + 0.1z⁻²] = 2 − z⁻²

H(z)/z = (2z² − 1)/[z(z − 0.5)(z − 0.2)] = A₁/z + A₂/(z − 0.5) + A₃/(z − 0.2)

Evaluating the residues gives A₁ = −10, A₂ = −10/3, A₃ = 46/3, so

H(z) = −10 − (10/3) z/(z − 0.5) + (46/3) z/(z − 0.2)

h(n) = −10δ(n) − (10/3)(0.5)ⁿ u(n) + (46/3)(0.2)ⁿ u(n)
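The closed form can be checked against a direct recursion of the difference equation; scipy.signal.lfilter applied to an impulse gives h(n) sample by sample:

```python
# Verify h(n) = -10*delta(n) - (10/3)*(0.5)**n + (46/3)*(0.2)**n for n >= 0.
import numpy as np
from scipy.signal import lfilter

b, a = [2, 0, -1], [1, -0.7, 0.1]        # from y(n)-0.7y(n-1)+0.1y(n-2)=2x(n)-x(n-2)
n = np.arange(10)
delta = (n == 0).astype(float)
h_filter = lfilter(b, a, delta)
h_closed = -10*delta - (10/3)*0.5**n + (46/3)*0.2**n
print(np.allclose(h_filter, h_closed))   # True
```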
For a unit-step input x(n) = u(n), the output transform is

Y(z) = 1/[(1 − z⁻¹)(1 − 0.6z⁻¹ + 0.08z⁻²)] = 1/[(1 − z⁻¹)(1 − 0.4z⁻¹)(1 − 0.2z⁻¹)]

Expanding in partial fractions,

Y(z) = A₁/(1 − z⁻¹) + A₂/(1 − 0.4z⁻¹) + A₃/(1 − 0.2z⁻¹)

Y(z) = (25/12)/(1 − z⁻¹) − (4/3)/(1 − 0.4z⁻¹) + (1/4)/(1 − 0.2z⁻¹)

y(n) = [25/12 − (4/3)(0.4)ⁿ + (1/4)(0.2)ⁿ] u(n)
Frequency response describes the magnitude and phase shift over a range of frequencies.
When a system is composed of L subsystems connected in parallel, the overall impulse response is the sum of the individual impulse responses,

h(n) = ∑_{k=1}^{L} hₖ(n)

Using the linearity property of the z-transform, the frequency response of the complete system is

H(e^{jω}) = ∑_{k=1}^{L} Hₖ(e^{jω}),  where z = e^{jω}
5.8 SUMMARY
1) Linear time invariant systems have properties like linearity, time-invariance and causality
which can be used in system analysis
2) A DSP is said to be stable, if system poles are given as :
|𝑝𝑖 | < 1 i.e. ∑∞
𝑛=−∞|ℎ(𝑛)| < ∞
3) A system is said to BIBO stable, if and only if every bounded input gives bounded output.
4) Convolution is useful in studying and analysing the response of a given system to an input signal.
5) A discrete time system transforms an input sequence into an output sequence according to
the recursion formula that represents the solution of a difference equation.
6) Frequency response describes the magnitude and phase shift over a range of frequencies.
5.9 EXERCISE
There are only N unique frequency components because, for integer k, n, and r,

e^{j(2π/N)(k+rN)n} = e^{j(2π/N)kn}
Comments:
1. Both the time function x˜[n] and the Fourier series coefficients X˜[k] are
periodic with period N, so they are represented by only N distinct (possibly
complex) numbers.
2. The series can be evaluated over any consecutive set of values of n or k of length N.
4. The frequency components are N equally-spaced samples of the frequencies
of the DTFT, or alternatively they represent N equally-spaced locations around
the unit circle of the z-plane.
Note that in the previous two properties, shift in one domain corresponds to
multiplication in the other by a complex exponential, both of which may be
easier to evaluate when you apply the definition W_N = e^{−j2π/N}.
The sum on the right hand side is very important for our work and is referred to
as periodic convolution or circular convolution. The operator for circular
convolution is normally written as an asterisk with a circle around it, possibly
with an accompanying number that indicates the size of the convolution (which
matters).
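A direct implementation of the modular-index sum, compared against the DFT route (multiply the transforms, then invert), makes the definition concrete:

```python
# N-point circular convolution two ways: the defining sum and via the DFT.
import numpy as np

def circconv(x, h):
    N = len(x)
    return np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                     for n in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 1.0])
print(circconv(x, h))                                        # [3. 5. 7. 5.]
print(np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h))))   # same values
```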
It is easy to realise that the same values of W_N^{nk} are calculated many times as the computation proceeds. Firstly, the integer product nk repeats for different combinations of n and k; secondly, W_N^{nk} is a periodic function with only N distinct values.
5.4 Inverse DFT
The inverse transform of

X[k] = ∑_{n=0}^{N−1} x[n] W_N^{nk}

is given as

x[n] = (1/N) ∑_{k=0}^{N−1} X[k] W_N^{−nk}
i.e. the inverse matrix is 1/N times the complex conjugate of the original
(symmetric) matrix.
Cost
• Direct
➢ N² complex multiplies.
➢ N (N - 1) complex adds.
Figure 5.3
Choose shortest convenient N (usually smallest power-of-two greater than or
equal to L +M - 1).
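Putting the pieces together, a minimal fast linear convolution pads both sequences to such an N and keeps the first L + M − 1 output samples:

```python
# Linear convolution via the FFT: pad to N >= L + M - 1 (next power of two).
import numpy as np

def fastconv(x, h):
    L, M = len(x), len(h)
    N = 1 << (L + M - 2).bit_length()    # smallest power of two >= L + M - 1
    y = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N))
    return np.real(y[:L + M - 1])

x = np.arange(1.0, 6.0)
h = np.array([1.0, -1.0])
print(fastconv(x, h))                    # matches np.convolve(x, h)
```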
5.7 Correlation
The concept of correlation can best be presented with an example. Figure 5.6 shows the key
elements of a radar system. A specially designed antenna transmits a short burst of radio wave
energy in a selected direction. If the propagating wave strikes an object, such as the helicopter
in this illustration, a small fraction of the energy is reflected back toward a radio receiver
located near the transmitter. The transmitted pulse is a specific shape that we have selected,
such as the triangle shown in this example. The received signal will consist of two parts: (1) a
shifted and scaled version of the transmitted pulse, and (2) random noise, resulting from
interfering radio waves, thermal noise in the electronics, etc. Since radio signals travel at a
known rate, the speed of light, the shift between the transmitted and received pulse is a direct
measure of the distance to the object being detected. This is the problem: given a signal of
some known shape, what is the best way to determine where (or if) the signal occurs in another
signal. Correlation is the answer. Correlation is a mathematical operation that is very similar
to convolution. Just as with convolution, correlation uses two signals to produce a third signal.
This third signal is called the cross-correlation of the two input signals. If a signal is correlated
with itself, the resulting signal is instead called the autocorrelation. The convolution machine
was presented in the last chapter to show how convolution is performed. Figure 5.6 is a similar
illustration of a correlation machine. The received signal, x[n], and the cross-correlation signal,
y[n], are fixed on the page. The waveform we are looking for, t[n], commonly called the target
signal, is contained within the correlation machine. Each sample in y[n] is calculated by
moving the correlation machine left or right until it points to the sample being worked on. Next,
the indicated samples from the received signal fall into the correlation machine, and are
multiplied by the corresponding points in the target signal. The sum of these products then
moves into the proper sample in the cross correlation signal.
Figure 5.6
The amplitude of each sample in the cross-correlation signal is a measure of how much the
received signal resembles the target signal, at that location. This means that a peak will occur
in the cross-correlation signal for every target signal that is present in the received signal. In
other words, the value of the cross-correlation is maximized when the target signal is aligned
with the same features in the received signal. What if the target signal contains samples with a
negative value? Nothing changes. Imagine that the correlation machine is positioned such that
the target signal is perfectly aligned with the matching waveform in the received signal. As
samples from the received signal fall into the correlation machine, they are multiplied by their
matching samples in the target signal. Neglecting noise, a positive sample will be multiplied
by itself, resulting in a positive number. Likewise, a negative sample will be multiplied by
itself, also resulting in a positive number. Even if the target signal is completely negative, the
peak in the cross-correlation will still be positive. If there is noise on the received signal, there
will also be noise on the cross-correlation signal. It is an unavoidable fact that random noise
looks a certain amount like any target signal you can choose. The noise on the cross-correlation
signal is simply measuring this similarity. Except for this noise, the peak generated in the cross-
correlation signal is symmetrical between its left and right. This is true even if the target signal
isn't symmetrical. In addition, the width of the peak is twice the width of the target signal.
Remember, the cross-correlation is trying to detect the target signal, not recreate it. There is no
reason to expect that the peak will even look like the target signal. Correlation is the optimal
technique for detecting a known waveform in random noise. That is, the peak is higher above
the noise using correlation than can be produced by any other linear system. (To be perfectly
correct, it is only optimal for random white noise). Using correlation to detect a known
waveform is frequently called matched filtering.
The correlation machine and the convolution machine are identical, except for one small difference. As discussed in the last chapter, the signal inside the convolution machine is flipped left-for-right, so that sample numbers 1, 2, 3, … run from right to left. In the correlation machine this flip doesn't take place, and the samples run in the normal direction.
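As a rough sketch of the correlation machine in code (our own example; np.correlate is a standard NumPy routine, and the signal values here are invented for illustration):

import numpy as np

# Target signal t[n]: the known triangular pulse we are looking for.
target = np.array([0.0, 0.5, 1.0, 0.5, 0.0])

# Received signal x[n]: a shifted copy of the target buried in random noise.
rng = np.random.default_rng(0)
received = rng.normal(0.0, 0.2, 100)
received[40:45] += target                      # echo placed at sample 40

# Cross-correlation: unlike convolution, the target is NOT flipped.
y = np.correlate(received, target, mode='valid')
print("peak at sample", np.argmax(y))          # expected near 40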
References:
• Digital Signal Processing by S. Salivahanan and C. Gnanapriya, Second Edition, TMH
• Digital Signal Processing by Sanjit K. Mitra, Third Edition, TMH
• Signals and systems by A Anand Kumar (PHI) 2011
• Signals and Systems by Alan V. Oppenheim and Alan S. Willsky with S. Hamid
Nawab, Second Edition, PHI (EEE)
• Digital Signal Processing by Apte, Second Edition, Wiley India
Unit 6
6.0 Objectives
6.1.1 Introduction
6.2.1 Introduction
6.3 Conclusion
6.4 List of References
6.5 Bibliography
6.0 OBJECTIVES
6.1.1 Introduction
A filter is a frequency selective system. Digital filters are classified as finite duration unit impulse response (FIR) filters or infinite duration unit impulse response (IIR) filters. In signal processing, a finite impulse response filter is a filter whose impulse response is of finite duration, i.e. it has a finite number of non-zero terms. An FIR filter has no feedback in its equation: the response of an FIR filter depends only on past and present input samples. FIR filters are usually implemented using non-recursive structures; however, they can be realized in both recursive and non-recursive structures.
y(n) = Σ_{k=0}^{N−1} h(k) x(n − k)        (6.1a)
H(z) = Σ_{k=0}^{N−1} h(k) z^{−k}          (6.1b)
where h(k), k = 0, 1, ..., N − 1, are the impulse response coefficients of the filter, H(z) is the transfer function of the filter, and N is the filter length, i.e. the number of filter coefficients. Equation 6.1a is the FIR difference equation: y(n) is a function only of past and present values of the input x(n). FIR filters are always stable if they are implemented by direct evaluation as shown in equation 6.1a. Equation 6.1b represents the transfer function of the filter, which provides a means of analyzing the filter.
All available DSP processors have architectures suited to FIR filtering, and FIR filters are very simple to implement.
H(z) = Σ_{n=0}^{N−1} h(n) z^{−n}
where h(n) is the impulse response of the filter. The frequency response [Fourier transform
of h(n)] is given by
H(ω) = Σ_{n=0}^{N−1} h(n) e^{−jωn}
which is periodic in frequency with period 2π, i.e. H(ω + 2πk) = H(ω).
The frequency response of the filter is the Fourier transform of its impulse response. If h(n) is the impulse response of the system, then the frequency response is denoted by H(ω) or H(e^{jω}). H(ω) is a complex function of frequency ω, and so it can be expressed in terms of a magnitude function and a phase function: H(ω) = |H(ω)| e^{jθ(ω)}.
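As a quick numerical check of these formulas (our own sketch; scipy.signal.freqz is a standard SciPy routine, and the 3-tap filter is an invented example), H(ω) can be evaluated both by the summation above and by library code:

import numpy as np
from scipy.signal import freqz

h = np.array([0.25, 0.5, 0.25])              # a simple 3-tap FIR impulse response
w = np.linspace(0, np.pi, 256)

# Direct evaluation of H(w) = sum_n h(n) e^{-jwn}
n = np.arange(len(h))
H_direct = np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])

# Library evaluation
_, H_lib = freqz(h, worN=w)
assert np.allclose(H_direct, H_lib)
print("magnitude at dc:", abs(H_direct[0]))  # |H(0)| = sum of the coefficients = 1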
Linear phase filters have four possible types of impulse response, depending on whether N is odd or even and on the type of symmetry:
6.1.3.1 Frequency response of linear phase FIR filter when impulse response is symmetrical and N
is odd.
The equation for the frequency response of a linear phase FIR filter when the impulse response is symmetrical and N is odd is given by:
H(ω) = e^{−jω(N−1)/2} [ h((N−1)/2) + 2 Σ_{n=1}^{(N−1)/2} h((N−1)/2 − n) cos ωn ]
From the figure it can be observed that the magnitude function |H(ω)| is symmetric about ω = π when the impulse response is symmetric and N is an odd number.
6.1.3.2 Frequency response of linear phase FIR filter when impulse response is symmetrical and N
is even.
The expression for the frequency response of a linear phase FIR filter when the impulse response is symmetrical and N is even is given by:
H(ω) = e^{−jω(N−1)/2} [ 2 Σ_{n=1}^{N/2} h(N/2 − n) cos(ω(n − 1/2)) ]
The following figure (a) shows a symmetrical impulse response for N = 8, and figure (b) shows the corresponding magnitude function of the frequency response. From these figures it can be observed that the magnitude function of H(ω) is antisymmetric about ω = π when the impulse response is symmetric and N is an even number.
Fig (a) Symmetrical impulse response (N = 8); Fig (b) Magnitude function of the frequency response
6.1.3.3 Frequency response of linear phase FIR filter when impulse response is anti symmetric and
N is odd.
The equation for the frequency response of a linear phase FIR filter when the impulse response is antisymmetric and N is odd is:
H(ω) = e^{−j[ω(N−1)/2 − π/2]} [ 2 Σ_{n=1}^{(N−1)/2} h((N−1)/2 − n) sin ωn ]
Figure (a) shows an antisymmetric impulse response and figure (b) shows the corresponding magnitude function of the frequency response. From these figures, it can be observed that the magnitude function is antisymmetric about ω = π when the impulse response is antisymmetric and N is an odd number.
6.1.3.4 Frequency response of linear phase FIR filter when impulse response is anti symmetric and
N is even.
The equation for the frequency response of a linear phase FIR filter when the impulse response is antisymmetric and N is even is:
H(ω) = e^{−j[ω(N−1)/2 − π/2]} [ 2 Σ_{n=1}^{N/2} h(N/2 − n) sin(ω(n − 1/2)) ]
Figure (a) shows an antisymmetric impulse response for N = 8, and figure (b) shows its corresponding magnitude function of the frequency response. It can be observed that the magnitude function of H(ω) is symmetric about ω = π when the impulse response is antisymmetric and N is an even number.
Fig (a) Antisymmetric impulse response for N = 8; Fig (b) Magnitude function of H(ω)
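The constant delay implied by these linear-phase forms is easy to verify numerically. A small sketch (our own example, using the standard SciPy routine scipy.signal.group_delay) for a symmetric impulse response with N odd:

import numpy as np
from scipy.signal import group_delay

h = np.array([1.0, 4.0, 8.0, 4.0, 1.0])    # symmetric h(n), N = 5
assert np.allclose(h, h[::-1])             # h(n) = h(N - 1 - n)

w, gd = group_delay((h, [1.0]), w=256)
print(np.round(gd[:5], 6))                 # constant group delay of (N - 1)/2 = 2 samples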
The design of a digital filter proceeds in five steps:
1. Filter specification: This may include stating the type of filter (for example, a lowpass filter), the desired amplitude and/or phase responses and the tolerances, the sampling frequency, and the word length of the input data.
2. Coefficient calculation: At this step, we determine the coefficients of a transfer function which satisfies the specification.
3. Realization: This involves converting the transfer function into a suitable filter network or structure.
4. Analysis of finite wordlength effects: Here, we analyze the effect of quantizing the filter coefficients and the input data, as well as the effect of carrying out the filtering operations with finite precision.
5. Implementation: This involves producing the software and/or hardware and performing the actual filtering.
The procedure for designing FIR filters by the Fourier series method is as follows:
Step 1: Choose the desired ideal frequency response H_d(ω) of the filter.
Step 2: Evaluate the Fourier series coefficients of H_d(ω), which gives the desired impulse response h_d(n).
Step 3: Truncate the infinite impulse response to a finite duration sequence h(n) of length N.
Step 4: Take the Z-transform of h(n) to get a non-causal filter transfer function H(z).
Step 5: Multiply H(z) by z^{−(N−1)/2} to convert the non-causal transfer function into a realizable causal transfer function.
We know that any periodic function can be expressed as a linear combination of complex exponentials. The frequency response of a digital filter is periodic, with period equal to the sampling frequency. Therefore, the desired frequency response of an FIR filter can be represented by the Fourier series
H_d(ω) = Σ_{n=−∞}^{∞} h_d(n) e^{−jωn}
where the Fourier coefficients h_d(n) are the desired impulse response sequence of the filter. The samples of h_d(n) can be determined using the equation
h_d(n) = (1/2π) ∫_{−π}^{π} H_d(ω) e^{jωn} dω
The impulse response obtained from the above equation is an infinite duration sequence. For FIR filters, we truncate this infinite impulse response to a finite duration sequence of length N, where N is odd:
h(n) = h_d(n) for |n| ≤ (N − 1)/2, and h(n) = 0 otherwise.
The transfer function H(z) obtained from this h(n) represents a non-causal filter. Hence the transfer function represented by the above equation for H(z) is multiplied by z^{−α}, where α = (N − 1)/2, to make it causal. This modification does not affect the amplitude response of the filter; however, the abrupt truncation of the Fourier series results in oscillations in the pass band and stop band. These oscillations are due to the slow convergence of the Fourier series, and the effect is known as the Gibbs phenomenon.
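For an ideal low-pass response with cutoff ω_c, the integral above evaluates to h_d(n) = (ω_c/π) sinc(ω_c n/π), with sinc(x) = sin(πx)/(πx). A minimal NumPy sketch of the method (our own code) for the 11-tap, 1 kHz/4 kHz low-pass filter posed as an exercise at the end of this unit:

import numpy as np

fc, fs = 1000.0, 4000.0             # cutoff 1 kHz, sampling 4 kHz
wc = 2 * np.pi * fc / fs            # digital cutoff frequency (rad/sample)
N = 11                              # odd filter length
M = (N - 1) // 2

n = np.arange(-M, M + 1)
hd = (wc / np.pi) * np.sinc(wc * n / np.pi)   # truncated ideal impulse response

# Shifting by (N-1)/2 samples (multiplying H(z) by z^{-(N-1)/2}) makes it causal:
h = hd
print(np.round(h, 4))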
A finite weighting sequence w(n), with which the infinite impulse response is multiplied to obtain a finite impulse response, is called a window. This is necessary because abrupt truncation of the infinite impulse response leads to oscillations in the pass band and stop band; these oscillations can be reduced by using a window that truncates the Fourier series less abruptly. A good window should satisfy the following:
1. The central lobe of the frequency response of the window should contain most of the energy and should be narrow.
2. The highest side lobe level of the frequency response should be small.
3. The side lobes of the frequency response should decrease in energy rapidly as ω tends to π.
Step 1: For the desired frequency response H_d(ω), find the impulse response h_d(n).
Step 2: Multiply the infinite impulse response by a chosen window sequence w(n).
The commonly used window sequences are:
1. Rectangular
2. Bartlett
3. Hanning
4. Hamming
5. Blackmann
Rectangular window:
The weighting function (window function) for an N-point rectangular window is given by
w_R(n) = 1 for 0 ≤ n ≤ N − 1, and 0 otherwise
The spectrum (frequency response) W_R(ω) of the rectangular window is given by the Fourier transform of w_R(n):
W_R(ω) = e^{−jω(N−1)/2} sin(ωN/2) / sin(ω/2)
Its main properties are:
(i) The width of the main lobe is 4π/N.
(ii) The first side lobe is about 13 dB below the main lobe peak.
(iii) The side lobe magnitude does not decrease significantly with increasing ω.
With a rectangular window, the width of the transition region is related to the width of the main lobe of the window spectrum. Gibbs oscillations are noticed in the pass band and stop band, and the attenuation in the stop band is constant and cannot be varied.
Bartlett window:
The Bartlett window is also called a triangular window. It is chosen such that the sequence tapers from the middle towards either side. The window function w_T(n) is defined as
w_T(n) = 1 − |2n − (N − 1)|/(N − 1) for 0 ≤ n ≤ N − 1, and 0 otherwise
In the magnitude response of the triangular window, the side lobe level is smaller than that of the rectangular window, being reduced from −13 dB to −25 dB. However, the main lobe width is now 8π/N, or twice that of the rectangular window. The triangular window produces a smooth magnitude response in both pass band and stop band, but compared with the magnitude response obtained using a rectangular window it has the disadvantage of a much wider transition region, since the main lobe width is doubled. Because of these characteristics, the triangular window is not usually a good choice.
Hanning window:
The Hanning window function is given by
w_Hn(n) = 0.5 − 0.5 cos(2πn/(N − 1)) for 0 ≤ n ≤ N − 1, and 0 otherwise
In its magnitude response, the first side lobe is about 31 dB below the main lobe peak, and the main lobe width is 8π/N.
Hamming window:
The Hamming window function is given by
w_H(n) = 0.54 − 0.46 cos(2πn/(N − 1)) for 0 ≤ n ≤ N − 1, and 0 otherwise
In the magnitude response of the Hamming window for N = 31, the magnitude of the first side lobe is down about 41 dB from the main lobe peak, an improvement of 10 dB relative to the Hanning window. But this improvement is achieved at the expense of the side lobe magnitudes at higher frequencies, which are almost constant with frequency. The width of the main lobe is 8π/N. In the magnitude response of a low-pass filter designed using the Hamming window, the first side lobe peak is at −51 dB, which is 7 dB lower than that of the Hanning window filter. However, at higher frequencies the stop band attenuation is low compared with that of the Hanning window. Because the Hamming window generates fewer oscillations in the side lobes than the Hanning window for the same main lobe width, the Hamming window is generally preferred.
Blackman window:
The Blackman window function is another type of cosine window, given by the equation
w_B(n) = 0.42 − 0.5 cos(2πn/(N − 1)) + 0.08 cos(4πn/(N − 1)) for 0 ≤ n ≤ N − 1, and 0 otherwise
In the magnitude response, the width of the main lobe is 12π/N, which is the widest among the windows discussed here. The peak of the first side lobe is at −58 dB, and the side lobe magnitude decreases with frequency. This desirable feature is achieved at the expense of an increased main lobe width; however, the main lobe width can be reduced by increasing the value of N. The side lobe attenuation of a low-pass filter designed using the Blackman window is −78 dB.
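The trade-offs quoted above (main lobe width versus side lobe attenuation) can be compared directly in code. A short sketch (our own example, using the standard scipy.signal.windows module) applies three of the windows to the same truncated ideal low-pass response:

import numpy as np
from scipy.signal import freqz
from scipy.signal.windows import boxcar, hamming, blackman

N = 31
n = np.arange(N) - (N - 1) / 2
wc = 0.4 * np.pi
hd = (wc / np.pi) * np.sinc(wc * n / np.pi)     # truncated ideal low-pass response

for name, win in [("rectangular", boxcar(N)), ("Hamming", hamming(N)), ("Blackman", blackman(N))]:
    w, H = freqz(hd * win, worN=2048)
    stop = np.abs(H[w > wc + 0.2 * np.pi])      # samples well inside the stop band
    print(name, "worst stopband level (dB):", round(20 * np.log10(stop.max()), 1))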
In the frequency sampling method, the ideal frequency response is sampled at a sufficient number of points (i.e. N points). These samples are the DFT coefficients H(k) of the impulse response of the filter; hence the impulse response is determined by taking the IDFT:
h(n) = (1/N) Σ_{k=0}^{N−1} H(k) e^{j2πnk/N}
The samples of the impulse response obtained in this way should be real. For this, each term H(k) e^{j(2πnk/N)} must be matched by a complex-conjugate term, which requires H(N − k) = H*(k). There are two types of design:
1. type-I design
2. type-II design.
In the type-I design, the set of frequency samples includes the sample at frequency ω = 0, i.e. the samples are taken at ω_k = 2πk/N, k = 0, 1, ..., N − 1. When the other set of samples, offset by half a sampling interval (ω_k = 2π(k + 1/2)/N), is used instead, the design procedure is referred to as the type-II design.
When N is odd,
When N is even,
4. Take Z-transform of the impulse response h(n) to get the transfer function 𝐻(𝑧)
Procedure for type-II design
When N is odd,
When N is even,
4. Take Z-transform of the impulse response ℎ(𝑛) to get the transfer function 𝐻(𝑧).
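A minimal NumPy sketch of a type-I design (our own example): sample an ideal low-pass response at ω_k = 2πk/N with linear-phase samples chosen so that H(N − k) = H*(k), then take the IDFT:

import numpy as np

N = 15
k = np.arange(N)
# Desired magnitude samples |H(k)| of an ideal low-pass response (5 passband samples):
Hmag = np.where(np.minimum(k, N - k) <= 2, 1.0, 0.0)
# Linear-phase samples: H(k) = |H(k)| e^{-j pi k (N-1)/N} satisfies H(N-k) = H*(k) for odd N.
H = Hmag * np.exp(-1j * np.pi * k * (N - 1) / N)

h = np.fft.ifft(H)
assert np.allclose(h.imag, 0, atol=1e-12)   # conjugate symmetry -> real h(n)
print(np.round(h.real, 4))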
In the optimum filter design method, the weighted approximation error between the desired frequency response and the actual frequency response is spread evenly across the pass band and evenly across the stop band of the filter. This results in the reduction of the maximum error, and the resulting filter has ripples in both the pass band and the stop band. The optimal method is thus based on the concept of an equiripple pass band and stop band. In the pass band, the practical response oscillates between 1 − δ_p and 1 + δ_p; in the stop band, the filter response lies between 0 and δ_s. The difference between the ideal response and the practical response can be viewed as an error function
E(ω) = W(ω) [H_d(ω) − H(ω)]
where W(ω) is a weighting function.
The main problem in the optimal method is to find the locations of the extremal frequencies. A powerful technique which employs the Remez exchange algorithm to find the extremal frequencies has been developed. For a given set of specifications (that is, the band edge frequencies, the filter length N, and the ratio between the passband and stopband ripples), the optimal method involves the following key steps:
1. Use the Remez exchange algorithm to find the optimum set of extremal frequencies.
2. Determine the frequency response using the extremal frequencies.
3. Obtain the impulse response coefficients.
The heart of the optimal method is the first step, where an iterative process is used to determine the extremal frequencies of a filter whose amplitude-frequency response satisfies the optimality condition. This step relies on the alternation theorem, which specifies the number of extremal frequencies that can exist for a given value of N.
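SciPy's scipy.signal.remez implements this Parks-McClellan procedure. A minimal low-pass sketch (the band edges, length and sampling rate are our own example values):

import numpy as np
from scipy.signal import remez, freqz

fs = 8000.0
# 33-tap equiripple low-pass: pass band 0-1500 Hz, stop band 2000-4000 Hz.
h = remez(33, [0, 1500, 2000, 0.5 * fs], [1, 0], fs=fs)

w, H = freqz(h, worN=2048, fs=fs)
print("max passband deviation:", np.abs(np.abs(H[w <= 1500]) - 1).max())
print("max stopband magnitude:", np.abs(H[w >= 2000]).max())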
6.2.1 Introduction
The types of filters which make use of feedback connections to obtain the desired filter implementation are known as recursive filters. Their impulse response is of infinite duration, so they are called IIR filters. IIR filters are designed by considering all the infinite samples of the impulse response, which is obtained by taking the inverse Fourier transform of the ideal frequency response.
There are several techniques available for the design of digital filters having an infinite duration unit impulse response. The popular methods first design the filter in the analog domain and then transform the analog filter into an equivalent digital filter, because analog filter design techniques are well developed.
IIR filters normally require fewer coefficients than FIR filters. These filters are mainly used when high throughput and a sharp cutoff are the important requirements. A physically realizable and stable IIR filter cannot have a linear phase: for a filter to have a linear phase, the condition h(n) = h(N − 1 − n) must be satisfied, where N is the length of the filter, and the filter would then have a mirror-image pole outside the unit circle for every pole inside the unit circle. This results in an unstable filter. As a result, a causal and stable IIR filter cannot have linear phase. In the design of IIR filters, only the desired magnitude response is specified: the IIR filter specifications include the desired characteristics for the magnitude response only.
6.2.2 IIR filter design by approximation of derivatives
The approximation of derivatives method is also known as the backward difference method. An analog filter having the rational system function H(s) can also be described by a linear constant-coefficient differential equation relating x(t) and y(t). In this method of IIR filter design, the analog filter is converted into a digital filter by approximating that differential equation by an equivalent difference equation.
The backward difference formula is substituted for the derivative dy(t)/dt at time t = nT. Thus,
dy(t)/dt |_{t=nT} = [y(nT) − y(nT − T)] / T
or, in terms of the sample index n,
dy(t)/dt = [y(n) − y(n − 1)] / T
Since differentiation in the analog domain corresponds to multiplication by s, comparing the two sides shows that the frequency-domain equivalent of this relationship is
s = (1 − z^{−1}) / T
Thus s = (1 − z^{−1})/T is the analog-domain to digital-domain transformation. Solving for z and setting s = jΩ gives
z = 1/(1 − sT) = 1/(1 − jΩT) = 1/(1 + Ω²T²) + jΩT/(1 + Ω²T²)
It can be observed that the mapping s = (1 − z^{−1})/T takes the left half of the s-plane into the points inside the circle of radius 0.5 centred at z = 0.5, and the right half of the s-plane to points outside that circle. Because of this, the mapping transforms a stable analog filter into a stable digital filter. However, since the possible locations of the poles in the z-domain are confined to a small region around z = 1 (i.e. to small frequencies), this design method can be used only for transforming analog low-pass filters and band pass filters which have small resonant frequencies.
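A minimal sketch of the substitution for a first-order analog low-pass filter H_a(s) = a/(s + a) (our own example values):

import numpy as np

a = 2 * np.pi * 100.0        # analog 3 dB cutoff at 100 Hz (rad/s)
T = 1e-3                     # sampling period (fs = 1 kHz)

# Substituting s = (1 - z^{-1})/T into H_a(s) = a/(s + a) gives
#   H(z) = aT / ((1 + aT) - z^{-1})
b = [a * T / (1 + a * T)]               # numerator coefficient
A = [1.0, -1.0 / (1 + a * T)]           # denominator 1 - z^{-1}/(1 + aT)
print("digital pole at z =", -A[1])     # real, inside the unit circle -> stable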
In the impulse invariant method, the desired impulse response of the digital filter is obtained by uniformly sampling the impulse response of the equivalent analog filter: h(n) = h_a(nT). The main idea behind this is to preserve the frequency response characteristics of the analog filter. For the digital filter to possess the frequency response characteristics of the corresponding analog filter, the sampling period T should be sufficiently small (or the sampling frequency should be sufficiently high) to minimize (or completely avoid) the effects of aliasing.
Here T is the sampling period. Let the analog transfer function be expressed in partial fractions as
H_a(s) = Σ_{i=1}^{N} A_i / (s − p_i)
The relationship between the transfer functions of the digital filter and the analog filter is then given by
H(z) = Σ_{i=1}^{N} A_i / (1 − e^{p_i T} z^{−1})
Comparing the above expressions for H_a(s) and H(z), we can say that the impulse invariant transformation performs the mapping
1/(s − p_i)  →  1/(1 − e^{p_i T} z^{−1})
This mapping shows that the analog pole at s = p_i is mapped into a digital pole at z = e^{p_i T}. Therefore, the analog poles and the digital poles are related by
z = e^{sT}
Fig : Mapping of (a) s-plane into (b) z-plane by impulse invariant transformation.
The mapping from the analog frequency 𝛺 to the digital frequency 𝜔 by impulse invariant
transformation is many-to-one which simply reflects the effects of aliasing due to sampling of the
impulse response.
The stability of a filter (or system) is related to the location of the poles. For a stable analog filter the
poles should lie on the left half of the s-plane. That means for a stable digital filter the poles should
lie inside the unit circle in the z-plane.
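A minimal SciPy sketch of this pole mapping (our own example; scipy.signal.residue is a standard routine for partial-fraction expansion):

import numpy as np
from scipy.signal import residue

# Analog filter H_a(s) = 1/(s^2 + 3s + 2), poles at s = -1 and s = -2.
b_a, a_a = [1.0], [1.0, 3.0, 2.0]
T = 0.1

A, p, _ = residue(b_a, a_a)      # coefficients A_i and poles p_i of H_a(s)
z_poles = np.exp(p * T)          # impulse invariant mapping z = e^{p_i T}
print("analog poles:", p)                       # left half of the s-plane
print("digital poles:", np.round(z_poles, 4))   # inside the unit circle -> stable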
The IIR filter design using impulse invariant as well as approximation of derivatives methods is
appropriate only for the design of low-pass filters and band pass filters whose resonant frequencies
are small. These techniques are not suitable for high-pass or band reject filters. The limitation is
overcome in the mapping technique called the bilinear transformation. This transformation is a one-
to-one mapping from the s-domain to the z-domain. That is, the bilinear transformation is a
conformal mapping that transforms the imaginary axis of s-plane into the unit circle in the z-plane
only once, thus avoiding aliasing
of frequency components. In this mapping, all points in the left half of s-plane are mapped
inside the unit circle in the z-plane, and all points in the right half of s-plane are mapped
outside the unit circle in the z-plane. So the transformation of a stable analog filter results in
a stable digital filter. The bilinear transformation can be obtained by using the trapezoidal formula for numerical integration. Consider a first-order analog filter with system function H_a(s) = b/(s + a); the differential equation describing this analog filter is
dy(t)/dt + a y(t) = b x(t)
Integrating this equation between the limits (nT − T) and nT using the trapezoidal rule, taking the z-transform of the resulting difference equation, and comparing with the analog system function H_a(s), we get, on rearranging,
s = (2/T) (1 − z^{−1}) / (1 + z^{−1})
This is the relation between the analog variable s and the digital variable z (and hence between the analog and digital poles) in the bilinear transformation.
So, to convert an analog filter transfer function into an equivalent digital filter transfer function, we simply substitute s = (2/T)(1 − z^{−1})/(1 + z^{−1}). Expressing the complex variable z in polar form as z = re^{jω} in the above equation for s, and evaluating on the unit circle (r = 1, s = jΩ), we obtain
Ω = (2/T) tan(ω/2), i.e. ω = 2 tan^{−1}(ΩT/2)
The mapping is non-linear: the lower frequencies in the analog domain are expanded in the digital domain, whereas the higher frequencies are compressed. This is due to the nonlinearity of the arctangent function and is usually known as frequency warping.
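SciPy implements this substitution as scipy.signal.bilinear. A minimal sketch (our own example values) that also confirms the warping relation ω = 2 tan^{−1}(ΩT/2):

import numpy as np
from scipy.signal import bilinear, freqz

fs = 1000.0                               # sampling frequency (T = 1/fs)
Wc = 2 * np.pi * 100.0                    # analog cutoff, 100 Hz in rad/s
b, a = bilinear([Wc], [1.0, Wc], fs=fs)   # H_a(s) = Wc/(s + Wc) -> H(z)

w_warp = 2 * np.arctan(Wc / (2 * fs))     # digital frequency where the analog cutoff lands
_, H = freqz(b, a, worN=[w_warp])
print("gain at warped cutoff:", np.abs(H[0]))   # ~0.707, i.e. the 3 dB point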
To design a Butterworth IIR digital filter, first an analog Butterworth filter transfer function is determined using the given specifications. Then the analog filter transfer function is converted into a digital filter transfer function using either the impulse invariant transformation or the bilinear transformation.
The analog Butterworth filter is designed by approximating the ideal frequency response using an error function. The error function is selected such that the magnitude is maximally flat in the passband and monotonically decreasing in the stopband. (Strictly speaking, the magnitude is maximally flat at the origin, i.e. at Ω = 0, and monotonically decreasing with increasing Ω.) The magnitude-squared response of an N-th order low-pass Butterworth filter with cutoff Ω_c is
|H_a(jΩ)|² = 1 / [1 + (Ω/Ω_c)^{2N}]
The frequency response of the Butterworth filter thus depends on the order N. The magnitude responses for different values of N are shown in the figure, from which it can be observed that the approximated magnitude response approaches the ideal response as the value of N increases. However, the phase response of the Butterworth filter becomes more nonlinear with increasing N.
The low-pass digital Butterworth filter is designed as per the following steps:
Step 1: Choose the type of transformation, i.e. either the bilinear or the impulse invariant transformation.
Step 2: Calculate the ratio of the analog edge frequencies, depending upon the transformation chosen.
Step 3: Decide the order N of the filter; choose N as the integer just greater than or equal to the value obtained.
Step 4: Calculate the analog cutoff frequency Ω_c.
Step 5: Determine the analog filter transfer function H_a(s).
Step 6: Using the chosen transformation, transform the analog filter transfer function H_a(s) into the digital transfer function H(z).
Step 7: Realize the digital filter transfer function H(z) by a suitable structure.
Properties of Butterworth filters
1. The Butterworth filters are all-pole designs (i.e. the zeros of the filter exist at ∞). The magnitude response approaches the ideal response as the value of N increases.
2. The normalized magnitude function has a value of 1/√2 at the cutoff frequency; hence the dB magnitude at the cutoff frequency will be 3 dB less than the maximum value.
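SciPy packages these steps into scipy.signal.buttord (order estimation) and scipy.signal.butter (coefficient calculation, which uses the bilinear transformation internally). A minimal sketch with our own example specifications:

import numpy as np
from scipy.signal import buttord, butter, freqz

fs = 8000.0
# Pass band edge 1 kHz (<= 1 dB ripple), stop band edge 2 kHz (>= 40 dB attenuation).
N, wn = buttord(1000, 2000, gpass=1, gstop=40, fs=fs)
b, a = butter(N, wn, btype='low', fs=fs)

w, H = freqz(b, a, worN=2048, fs=fs)
print("order N =", N)
print("gain at 2 kHz (dB):", round(20 * np.log10(abs(H[np.argmin(abs(w - 2000))])), 1))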
For designing a Chebyshev IIR digital filter, first an analog filter is designed using the given specifications. Then the analog filter transfer function is transformed into a digital filter transfer function using the chosen transformation. The analog Chebyshev filter is designed by approximating the ideal frequency response using an error function. There are two types of Chebyshev approximations. In the type-1 approximation, the error function is selected such that the magnitude response is equiripple in the passband and monotonic in the stopband. In the type-2 approximation, the error function is selected such that the magnitude function is monotonic in the passband and equiripple in the stopband. The type-2 magnitude response is also called the inverse Chebyshev response. The response approaches the ideal response as the order N increases. The phase response of the Chebyshev filter is more nonlinear than that of the Butterworth filter for a given filter length N.
The low-pass Chebyshev IIR digital filter is designed following steps similar to those given above for the Butterworth filter; in particular:
Step 6: Determine the analog transfer function H_a(s) of the filter, according to whether the order N is odd or even.
Step 7: Using the chosen transformation, transform H_a(s) to H(z), where H(z) is the transfer function of the digital filter.
Properties of Chebyshev type-1 filters:
1. The magnitude response is equiripple in the passband and monotonic in the stopband.
2. The Chebyshev type-1 filters are all-pole designs.
3. The normalized magnitude function has a value of 1/√(1 + ε²) at the cutoff frequency Ω_c.
4. The magnitude response approaches the ideal response as the value of N increases.
Inverse Chebyshev filters
Inverse Chebyshev filters are also called type-2 Chebyshev filters. A low-pass inverse Chebyshev filter has the magnitude-squared response
|H_a(jΩ)|² = ε² C_N²(Ω_c/Ω) / [1 + ε² C_N²(Ω_c/Ω)]
where ε is a constant and Ω_c is the 3 dB cutoff frequency. The Chebyshev polynomial C_N(x) is given by
C_N(x) = cos(N cos^{−1} x) for |x| ≤ 1, and cosh(N cosh^{−1} x) for |x| > 1
The magnitude response has a maximally flat passband and an equiripple stopband, just the opposite of the type-1 Chebyshev response. That is why type-2 Chebyshev filters are called inverse Chebyshev filters.
The elliptic filter is sometimes called the Cauer filter. This filter has an equiripple passband and stopband. Among the filters discussed so far, for a given filter order and given pass band and stop band deviations, elliptic filters have the minimum transition bandwidth. The magnitude-squared response is given by
|H_a(jΩ)|² = 1 / [1 + ε² U_N²(Ω/Ω_c)]
where U_N(x) is the Jacobian elliptic function of order N and ε is a constant related to the passband ripple.
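All three approximations are available in SciPy (scipy.signal.cheby1, cheby2 and ellip). A minimal sketch with our own example values, comparing them at the same order N:

import numpy as np
from scipy.signal import cheby1, cheby2, ellip, freqz

fs, N = 8000.0, 4
designs = {
    "Chebyshev type-1": cheby1(N, 1, 1000, fs=fs),    # 1 dB equiripple passband
    "Chebyshev type-2": cheby2(N, 40, 2000, fs=fs),   # 40 dB equiripple stopband
    "elliptic":         ellip(N, 1, 40, 1000, fs=fs), # equiripple in both bands
}
for name, (b, a) in designs.items():
    w, H = freqz(b, a, worN=2048, fs=fs)
    k = np.argmin(abs(w - 1500))          # probe inside the transition band
    print(name, "gain at 1.5 kHz (dB):", round(20 * np.log10(abs(H[k])), 1))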
In the design techniques discussed so far, we have considered only low-pass filters. This low-pass filter can be considered as a prototype filter, and its system function H_p(s) can be determined. The high-pass, band pass or band stop filters are then designed by first designing the low-pass prototype and transforming its transfer function into that of the required filter. Basically there are four types of frequency selective filters, viz. low-pass, high-pass, band pass and band stop; in the figure, the frequency response of the ideal case is shown in solid lines. The transformation of the prototype can be carried out in either of two ways: analog frequency transformation or digital frequency transformation.
In the analog frequency transformation, the analog system function 𝐻𝑝 (𝑠) of the prototype
filter is converted into another analog system function 𝐻(𝑠) of the desired filter (a low-pass
filter with another cutoff frequency or a high-pass filter or a band pass filter or a band stop
filter). Then using any of the mapping techniques (impulse invariant transformation or
bilinear transformation) this analog filter is converted into the digital filter with a system
function 𝐻(𝑧).
The frequency transformation formulae used to convert a prototype low-pass filter into a low-pass (with a different cutoff frequency), high-pass, band pass or band stop filter are given in the table, where Ω_c is the cutoff frequency of the low-pass prototype filter, Ω_c* is the cutoff frequency of the new low-pass or high-pass filter, and Ω_1 and Ω_2 are the cutoff frequencies of the band pass or band stop filters.
As in the analog domain, frequency transformation is possible in the digital domain also. The
frequency transformation is done in the digital domain by replacing the variable 𝑧 −1 by a function of
z^{−1}, i.e. f(z^{−1}). This mapping must take the stability criterion into account: all the poles lying within the unit circle must map to points within the unit circle, and the unit circle must also map onto itself.
The following table gives the formulae for the transformation of the prototype low-pass digital filter into a digital low-pass, high-pass, band pass or band stop filter.
The frequency transformation may be accomplished by either of the two available techniques; however, care must be taken in choosing which technique to use. For example, the impulse invariant transformation is not suitable for high-pass or band pass filters whose resonant frequencies are high. Suppose a low-pass prototype filter is converted into a high-pass filter using the analog frequency transformation and then transformed into a digital filter using the impulse invariant technique: this will result in aliasing problems. However, if the same prototype low-pass filter is first transformed into a digital filter using the impulse invariant technique and later converted into a high-pass filter using the digital frequency transformation, then it will not have any aliasing problem. Whenever the bilinear transformation is used, it makes no difference whether the analog or the digital frequency transformation is applied; in this case, both techniques will give the same result.
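SciPy exposes the analog transformations as scipy.signal.lp2hp, lp2bp and lp2bs. A minimal sketch (our own example) converting a prototype low-pass into a high-pass and then digitizing it with the bilinear transformation, in line with the caution above:

import numpy as np
from scipy.signal import lp2hp, bilinear

# Prototype analog low-pass: H_p(s) = 1/(s + 1), cutoff 1 rad/s.
b_p, a_p = [1.0], [1.0, 1.0]

# Analog frequency transformation: low-pass -> high-pass, cutoff 2*pi*500 rad/s.
b_hp, a_hp = lp2hp(b_p, a_p, wo=2 * np.pi * 500.0)

# Digitize with the bilinear transformation (safe for high-pass filters,
# unlike the impulse invariant technique).
fs = 8000.0
b, a = bilinear(b_hp, a_hp, fs=fs)
print("digital high-pass b =", np.round(b, 4), " a =", np.round(a, 4))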
6.3 Conclusion
● Based on impulse response, filters are of two types: (i) IIR filters and (ii) FIR filters. IIR filters are designed using an infinite number of samples of the impulse response. They are of the recursive type, whereby the present output depends on the present input, past inputs and past outputs. FIR filters are designed using only a finite number of samples of the impulse response. They are of the non-recursive type, whereby the present output depends only on the present input and past inputs.
● The necessary and sufficient condition for the linear phase characteristic of an FIR filter is that its phase function should be a linear function of ω, which in turn requires constant phase delay, or constant phase and group delay.
● The transformation of analog filter to digital filter without modifying the impulse response of
the filter is called impulse invariant transformation (i.e. in this transformation, the impulse
response of the digital filter will be the sampled version of the impulse response of the
analog filter).
● FIR filter is always stable because all its poles are at the origin.
● The two concepts that lead to the design of FIR filters by the Fourier series method are: (i) the frequency response of a digital filter is periodic, with period equal to the sampling frequency; (ii) any periodic function can be expressed as a linear combination of complex exponentials.
● A finite weighting sequence w(n), with which the infinite impulse response is multiplied to obtain a finite impulse response, is called a window. This is necessary because abrupt truncation of the infinite impulse response will lead to oscillations in the pass band and stop band, and these oscillations can be reduced through the use of a less abrupt truncation of the Fourier series.
● Chebyshev approximation is one in which the approximation function is selected such that
the error is minimized over a prescribed band of frequencies.
● Type-1 Chebyshev approximation is one in which the error function is selected such that the
magnitude response is equiripple in the passband and monotonic in the stopband.
● Type-2 Chebyshev approximation is one in which the error function is selected such that the
magnitude response is monotonic in the passband and equiripple in the stopband. The type-
2 Chebyshev response is called inverse Chebyshev response.
6.4 List of References
6.5 Bibliography
5. Design an FIR digital filter to approximate an ideal low-pass filter with a pass band gain of unity, a cutoff frequency of 1 kHz, and a sampling frequency of f_s = 4 kHz. The length of the impulse response should be 11. Use the Fourier series method.
6. Compare analog and digital filters. State the advantages of digital filters over
analog filters.
7. Define infinite impulse response and finite impulse response filters and compare.
8. Justify the statement that an IIR filter is less stable than an FIR filter, and give the reason for it.