Unit 1

The document discusses analog and digital signal processing. It begins by introducing different types of signals like electrical, mechanical, acoustic signals. Most real-world signals are analog and processed by analog circuits using components like resistors and capacitors. It then discusses limitations of analog processing like accuracy issues. Digital signal processing represents signals as sequences of numbers by sampling analog signals. This allows processing using digital processors and reconstruction of analog signals. Key advantages of digital signal processing are accuracy, repeatability, flexibility and ability to do non-linear operations. The document also introduces concepts of discrete-time signals, periodic sampling, unit sample sequence, impulse function and unit step sequence important for digital signal processing.

UNIT-1

Introduction

1
Signal Processing
• Humans are the most advanced signal processors
– speech and pattern recognition, speech synthesis,…

• We encounter many types of signals in various


applications
– Electrical signals: voltage, current, magnetic and electric fields,…
– Mechanical signals: velocity, force, displacement,…
– Acoustic signals: sound, vibration,…
– Other signals: pressure, temperature,…

• Most real-world signals are analog


– They are continuous in time and amplitude
– Convert to voltage or currents using sensors and transducers

• Analog circuits process these signals using


– Resistors, Capacitors, Inductors, Amplifiers,…

• Analog signal processing examples


– Audio processing in FM radios
– Video processing in traditional TV sets

2
Limitations of Analog Signal Processing
• Accuracy limitations due to
– Component tolerances
– Undesired nonlinearities

• Limited repeatability due to


– Tolerances
– Changes in environmental conditions
• Temperature
• Vibration
• Sensitivity to electrical noise
• Limited dynamic range for voltage and currents
• Inflexibility to changes
• Difficulty of implementing certain operations
– Nonlinear operations
– Time-varying operations

• Difficulty of storing information


3
Digital Signal Processing
• Represent signals by a sequence of numbers
– Sampling or analog-to-digital conversions
• Perform processing on these numbers with a digital processor
– Digital signal processing
• Reconstruct analog signal from processed numbers
– Reconstruction or digital-to-analog conversion
analog signal → [A/D] → digital signal → [DSP] → digital signal → [D/A] → analog signal

• Analog input – analog output


– Digital recording of music
• Analog input – digital output
– Touch tone phone dialing
• Digital input – analog output
– Text to speech
• Digital input – digital output
– Compression of a file on computer
4
Pros and Cons of Digital Signal Processing
• Pros
– Accuracy can be controlled by choosing word length
– Repeatable
– Sensitivity to electrical noise is minimal
– Dynamic range can be controlled using floating point numbers
– Flexibility can be achieved with software implementations
– Non-linear and time-varying operations are easier to implement
– Digital storage is cheap
– Digital information can be encrypted for security
– Price/performance and reduced time-to-market

• Cons
– Sampling causes loss of information
– A/D and D/A require mixed-signal hardware
– Limited speed of processors
– Quantization and round-off errors
5
(Slides 6–18 are figure-only: analog, digital, and mixed-signal processing; digital signal processing; sampling and reconstruction; the sample-and-hold (S/H) circuit; the A/D converter; quantization noise; D/A conversion; and reconstruction.)
Signals
• Continuous-time signals are functions of a real argument
x(t) where t can take any real value
x(t) may be 0 for a given range of values of t
• Discrete-time signals are functions of an argument that
takes values from a discrete set
x[n] where n ∈ {...-3,-2,-1,0,1,2,3...}
Integer index n instead of time t for discrete-time systems
• x may be an array of values (a multichannel signal)
• Values for x may be real or complex

19
Discrete-time Signals and Systems
• Continuous-time signals are defined over a
continuum of times and thus are represented by a
continuous independent variable.
• Discrete-time signals are defined at discrete times
and thus the independent variable has discrete
values.
• Analog signals are those for which both time and
amplitude are continuous.
• Digital signals are those for which both time and
amplitude are discrete.

20
Analog vs. Digital
• The amplitude of an analog signal can take any real or complex value at each
time/sample

• The amplitude of a digital signal takes values from a discrete set

21
Periodic (Uniform) Sampling
• Sampling is a continuous-time to discrete-time conversion

• The most common sampling is periodic:

x[n] = xc(nT),  −∞ < n < ∞

• T is the sampling period in seconds
• fs = 1/T is the sampling frequency in Hz
• The sampling frequency in radians per second is Ωs = 2πfs rad/sec
• Use [·] for discrete-time and (·) for continuous-time signals
• This is the ideal case, not the practical one, but close enough
  – In practice it is implemented with an analog-to-digital converter
  – We get digital signals that are quantized in amplitude and time
22
Periodic Sampling
• Sampling is, in general, not reversible
• Given a sampled signal, one could fit infinitely many continuous signals through the samples

• This is a fundamental issue in digital signal processing
  – If we lose information during sampling we cannot recover it
• Under certain conditions an analog signal can be sampled without loss so that it can be reconstructed perfectly

23
Representation of Sampling
• Mathematically convenient to represent in two stages
– Impulse train modulator
– Conversion of impulse train to a sequence

xc(t) → [× s(t)] → [convert impulse train to discrete-time sequence] → x[n] = xc(nT)

where s(t) is a periodic impulse train with period T; the product xc(t)s(t) carries the sample values xc(nT), which are then indexed by the integer n.

24
Unit Sample Sequence

δ[n] = 0, n ≠ 0
= 1, n = 0.


The unit sample sequence plays the same role for discrete-time sequences and
systems that the unit impulse (Dirac delta function) does for continuous-time
signals and systems.

25
Impulse Function
The impulse function, also known as Dirac’s delta function, is used to
represented quantities that are highly localized in space. Examples include
point optical sources and electrical charges.

The impulse function can be visualized as a narrow spike having infinite


height and zero width, such that its area is equal to unity.

26
Definition of Impulse Function
The impulse function may be defined from its basic properties.

δ(x − x0) = 0,  x ≠ x0

∫_{x1}^{x2} f(x) δ(x − x0) dx = f(x0),  x1 < x0 < x2

Where f(x) is any complex-valued function of x. If f(x) is discontinuous at the


point x0, the value of f(x0) is taken as the average of the limiting values as x
approaches x0 from above and below.

This property is called the sifting property.

27
Graphical Representation
On graphs we will represent δ(x-x0) as a spike of unit
height located at the point x0.


28
Sampling Operation
The delta function samples the function f(x).


The function f(x) δ(x-x0) is graphed as a spike of height f(x0) located at the point x0.

29
Unit Step Sequence
u[n] = 1, n ≥ 0
= 0, n < 0.

u[n] = δ[n] + δ[n − 1] + δ[n − 2] + ⋯

u[n] = ∑_{k=0}^{∞} δ[n − k]   or   u[n] = ∑_{k=−∞}^{n} δ[k]

Conversely, the impulse sequence can be expressed as the first backward difference of the unit step sequence:

δ[n] = u[n] − u[n − 1]
30
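As a quick sanity check (an illustrative sketch, not from the slides), a few lines of Python can verify both identities over a finite range; truncating the infinite sum is harmless since δ[n − k] vanishes for k > n:

```python
# Check u[n] = sum_{k>=0} delta[n-k] and delta[n] = u[n] - u[n-1]
def delta(n):
    return 1 if n == 0 else 0

def u(n):
    return 1 if n >= 0 else 0

for n in range(-5, 6):
    assert u(n) == sum(delta(n - k) for k in range(0, 11))
    assert delta(n) == u(n) - u(n - 1)
```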
Exponential Sequence
x[n] = Aα^n

If we want an exponential sequence that is zero for n < 0, we can write this as:

x[n] = Aα^n u[n]

31
Geometric Series
A one-sided exponential sequence of the form

α^n, for n ≥ 0, with α an arbitrary constant,

is called a geometric series. The series converges for |α| < 1, and its sum is

∑_{n=0}^{∞} α^n = 1 / (1 − α)

The sum of a finite number of terms (n = 0 to N) is

∑_{n=0}^{N} α^n = (1 − α^{N+1}) / (1 − α)

A general form can also be written:

∑_{n=N1}^{N2} α^n = (α^{N1} − α^{N2+1}) / (1 − α)

32
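The three sums can be spot-checked numerically; this is an illustrative sketch, and α = 0.8 and the index limits are arbitrary test values:

```python
# Numerically verify the geometric-series formulas for a sample alpha
import math

alpha = 0.8
# infinite sum (|alpha| < 1) approaches 1 / (1 - alpha)
assert math.isclose(sum(alpha**n for n in range(400)), 1 / (1 - alpha))
# finite sum from n = 0 to N
N = 12
assert math.isclose(sum(alpha**n for n in range(N + 1)),
                    (1 - alpha**(N + 1)) / (1 - alpha))
# general form from n = N1 to N2
N1, N2 = 3, 9
assert math.isclose(sum(alpha**n for n in range(N1, N2 + 1)),
                    (alpha**N1 - alpha**(N2 + 1)) / (1 - alpha))
```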
Sinusoidal Sequence
x[n] = A cos(ω0 n + φ)

33
Sequence as a sum of scaled, delayed
impulses
p[n] = a_{−3} δ[n + 3] + a_1 δ[n − 1] − a_2 δ[n − 2] − a_7 δ[n − 7]

(Figure: impulses of heights a_{−3}, a_1, −a_2, and −a_7 at n = −3, 1, 2, and 7.)

34
Sequence Operations
• The product and sum of two sequences are defined as the sample-by-
sample product and sum, respectively.
• Multiplication of a sequence by a number is defined as multiplication
of each sample value by this number.
• A sequence y[n] is said to be a delayed or shifted version of a sequence
x[n] if
y[n] = x[n – nd]
where nd is an integer.
• Combination of Basic Sequences
  Ex 1:  x[n] = Kα^n,  n ≥ 0,
              = 0,     n < 0,
  or x[n] = Kα^n u[n].

35
Systems

36
Systems

x[n] → T{·} → y[n]

A discrete-time system is a transformation that maps an


input sequence x[n] into an output sequence y[n].

System Characteristics

1. Linear vs. non-linear


2. Causal vs. non-causal
3. Time-invariant vs. time-variant
37
System Characteristics

x[n] → T{·} → y[n]

1. Linear vs. non-linear


2. Time invariant vs. time variant
3. Causal vs. non-causal
4. Stable vs. unstable
5. Memoryless vs. state-sensitive
6. Invertible vs. non-invertible
38
Discrete-Time Systems
• A discrete-time system is a mathematical operation that maps a given input sequence x[n] into an output sequence y[n]

y[n] = T{x[n]}   (block diagram: x[n] → T{·} → y[n])

• Example Discrete-Time Systems


– Moving (Running) Average
y[n] = x[n] + x[n − 1] + x[n − 2] + x[n − 3]

– Maximum

y[n] = max{x[n], x[n − 1], x[n − 2]}


– Ideal Delay System
y[n] = x[n − no ]

39
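The three example systems above can be sketched in plain Python; the helper x_at and the convention that x[n] = 0 outside the stored range are illustrative assumptions, not part of the slides:

```python
# Plain-Python sketches of the moving (running) average, maximum,
# and ideal delay systems from this slide
def x_at(x, n):
    """Treat the list x as a sequence that is zero outside 0..len(x)-1."""
    return x[n] if 0 <= n < len(x) else 0

def running_sum4(x, n):       # y[n] = x[n] + x[n-1] + x[n-2] + x[n-3]
    return sum(x_at(x, n - k) for k in range(4))

def maximum3(x, n):           # y[n] = max{x[n], x[n-1], x[n-2]}
    return max(x_at(x, n - k) for k in range(3))

def ideal_delay(x, n, n0):    # y[n] = x[n - n0]
    return x_at(x, n - n0)

x = [1, 2, 3, 4]
assert running_sum4(x, 3) == 10   # 4 + 3 + 2 + 1
assert maximum3(x, 3) == 4        # max(4, 3, 2)
assert ideal_delay(x, 3, 2) == 2  # x[1]
```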
Linearity
A linear system is one that obeys the principle of
superposition,
T {a1 x1[n ] + a2 x2 [n ]} = a1 y1[n ] + a2 y2 [n ]

where the output of a linear combination of inputs is the


same linear combination applied to the individual outputs.
This result means that a complicated system can be
decomposed into a linear combination of elementary
functions whose transformation is known, and then taking
the same linear combination of the results. Linearity also
implies that the behavior of the system is independent of
the magnitude of the input.
40
Linear Systems
• Linear System: A system is linear if and only if
T{x1[n] + x2[n]} = T{x1[n]} + T{x2[n]} (additivity)
and
T{ax[n]} = aT{x[n]} (scaling)

• Examples
– Ideal Delay System
y[n] = x[n − no ]

T{x1[n] + x2[n]} = x1[n − no] + x2[n − no]
T{x1[n]} + T{x2[n]} = x1[n − no] + x2[n − no]

T{a x[n]} = a x[n − no]
a T{x[n]} = a x[n − no]

41
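The two conditions can be brute-forced numerically; this sketch (not from the slides) uses finite sequences taken as zero outside the stored range, and the test values are arbitrary. The delay passes both tests, while squaring fails scaling, so it is non-linear:

```python
# Numerically test additivity and scaling on two example systems
def delay(x, n0):
    return [x[n - n0] if 0 <= n - n0 < len(x) else 0 for n in range(len(x))]

def square(x):
    return [v * v for v in x]

add = lambda u, v: [p + q for p, q in zip(u, v)]
scale = lambda a, u: [a * p for p in u]

x1, x2, a = [1, 2, 3], [4, 5, 6], 3
# ideal delay: additivity and scaling both hold
assert delay(add(x1, x2), 1) == add(delay(x1, 1), delay(x2, 1))
assert delay(scale(a, x1), 1) == scale(a, delay(x1, 1))
# square: T{a x} = a^2 T{x}, not a T{x}, so scaling fails
assert square(scale(a, x1)) != scale(a, square(x1))
```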
Time (Shift) Invariance
A system is said to be shift invariant if the only effect caused by
a shift in the position of the input is an equal shift in the position
of the output, that is

T {x[n − n0 ]} = y[n − n0 ]

The magnitude and shape of the output are unchanged, only the
location of the output is changed.

42
Time-Invariant Systems
• Time-Invariant (shift-invariant) Systems
– A time shift at the input causes corresponding time-shift at output
y[n] = T{x[n]} ⇒ y[n − no ] = T{x[n − no ]}

• Example
  – Square: y[n] = (x[n])^2
    Delaying the input gives y1[n] = (x[n − no])^2
    Delaying the output gives y[n − no] = (x[n − no])^2
    These agree, so the system is time-invariant.

• Counter Example
  – Compressor System: y[n] = x[Mn]
    Delaying the input gives y1[n] = x[Mn − no]
    Delaying the output gives y[n − no] = x[M(n − no)]
    These differ, so the system is not time-invariant.

43
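The same comparison can be run numerically; this is an illustrative sketch with x taken as zero outside the stored range, and M = 2 and n0 = 1 are arbitrary test values:

```python
# Shift-invariance check: squaring commutes with a delay,
# the compressor y[n] = x[Mn] does not
def x_at(x, n):
    return x[n] if 0 <= n < len(x) else 0

def square_sys(x, n):        # y[n] = (x[n])^2
    return x_at(x, n) ** 2

def compressor(x, n, M=2):   # y[n] = x[Mn]
    return x_at(x, M * n)

x = [1, 2, 3, 4, 5, 6]
n0 = 1
xd = [0] * n0 + x            # the delayed input x[n - n0]
# square: response to the delayed input equals the delayed output
assert all(square_sys(xd, n) == square_sys(x, n - n0) for n in range(8))
# compressor: x[Mn - n0] differs from x[M(n - n0)] in general
assert any(compressor(xd, n) != compressor(x, n - n0) for n in range(8))
```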
Impulse Response
When the input to a system is a single impulse, the output is called
the impulse response. Let h[n] be the impulse response, given by

T {δ [n ]} = h[n ]

A general sequence f[n] can be represented as a linear combination of impulses, since

f(x) = f(x) ∗ δ(x) = ∫_{−∞}^{∞} f(u) δ(x − u) du

f[n] = f[n] ∗ δ[n] = ∑_{k=−∞}^{∞} f[k] δ[n − k]

44
Linear Shift-Invariant Systems
Suppose that T{} is a linear, shift-invariant system with h[n] as its
impulse response.
Then, using the principle of superposition,

T{s[n]} = T{ ∑_{k=−∞}^{∞} s[k] δ[n − k] } = ∑_{k=−∞}^{∞} s[k] T{δ[n − k]}

and finally, after invoking shift invariance,

T{s[n]} = ∑_{k=−∞}^{∞} s[k] T{δ[n − k]} = ∑_{k=−∞}^{∞} s[k] h[n − k]

T{s[n]} = s[n] ∗ h[n]

This very important result says that the output of any linear, shift-invariant system is given by the convolution of the input with the impulse response of the system. 45
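The convolution sum above can be sketched directly for finite sequences that both start at n = 0; conv is an illustrative helper name:

```python
# Direct evaluation of y[n] = sum_k x[k] h[n - k]
def conv(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm  # the x[k] h[n - k] term lands at n = k + m
    return y

# convolving with a delayed impulse delta[n - 1] just delays the input
assert conv([1, 2, 3], [0, 1]) == [0, 1, 2, 3]
# a rectangular pulse convolved with itself gives a triangle
assert conv([1, 1, 1], [1, 1, 1]) == [1, 2, 3, 2, 1]
```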
Causality
A system is causal if, for every choice of n0, the output sequence value at index n = n0 depends only on the input sequence values for n ≤ n0.

All physical time-based systems are causal because they are


unable to look into the future and anticipate a signal value
that will occur later.

46
Causal System
• Causality
  – A system is causal if its output is a function of only the current and previous samples

• Examples
– Backward Difference
y[n] = x[n] − x[n − 1]

• Counter Example
– Forward Difference
y[n] = x[n + 1] + x[n]

47
Stability
A system is stable in the bounded-input, bounded-output (BIBO) sense if and only if every bounded input produces a bounded output sequence.

The input x[n] is bounded if there exists a fixed positive finite value Bx such that

|x[n]| ≤ Bx < ∞  for all n

Stability requires that for any possible input sequence there exist a fixed positive finite value By such that

|y[n]| ≤ By < ∞  for all n
48
Stable System
• Stability (in the sense of bounded-input bounded-output BIBO)
– A system is stable if and only if every bounded input produces a bounded
output

|x[n]| ≤ Bx < ∞ ⇒ |y[n]| ≤ By < ∞

• Example
  – Square: y[n] = (x[n])^2
    If the input is bounded by |x[n]| ≤ Bx < ∞, the output is bounded by |y[n]| ≤ Bx^2 < ∞

• Counter Example
  – Log: y[n] = log10(|x[n]|)
    Even if the input is bounded by |x[n]| ≤ Bx < ∞, the output is not bounded: x[n] = 0 ⇒ y[n] = log10(|x[n]|) = −∞
49
Memory (State)

A system is referred to as memoryless if the output y[n] at every


value of n depends only on the input x[n] at the same value of n.

If the system has no memory, it is called a static system. Otherwise


it is a dynamic system.

50
Memoryless System
• Memoryless System
– A system is memoryless if the output y[n] at every value of n depends
only on the input x[n] at the same value of n

• Example Memoryless Systems
  – Square: y[n] = (x[n])^2
  – Sign: y[n] = sign{x[n]}

• Counter Example
  – Ideal Delay System: y[n] = x[n − no]

51
Invertible System
A system is invertible if for each output sequence we can find a
unique input sequence. If two or more input sequences produce
the same output sequence, the system is not invertible.

52
Passive and Lossless Systems
A system is said to be passive if, for every finite-energy input sequence x[n], the output sequence has at most the same energy:

∑_{n=−∞}^{∞} |y[n]|^2 ≤ ∑_{n=−∞}^{∞} |x[n]|^2 < ∞

If the above inequality is satisfied with an equal sign for every input signal, the system is said to be lossless.

53
Examples of Systems

Ideal Delay System: y[n] = x[n − nd]

Moving Average System: y[n] = (1 / (M1 + M2 + 1)) ∑_{k=−M1}^{M2} x[n − k]

Memoryless non-linear System: y[n] = (x[n])^2

Accumulator System: y[n] = ∑_{k=−∞}^{n} x[k]

Compressor System: y[n] = x[Mn], where M is a positive integer

Backward Difference System: y[n] = x[n] − x[n − 1]
54
Impulse Response of LTI Systems

Find the impulse response by computing the response to δ[n].

Systems whose impulse responses have only a finite number of


nonzero samples are called finite-duration impulse response
(FIR) systems.

Systems whose impulse responses are of infinite duration are


called infinite-duration impulse response (IIR) systems.

If h[n] = 0 for n < 0, the system is causal.

55
Impulse Response for Examples
Find the impulse response by computing the response to δ[n]:

Ideal Delay System: h[n] = δ[n − nd]   (FIR)

Moving Average System: h[n] = 1 / (M2 + M1 + 1) for −M1 ≤ n ≤ M2, and 0 otherwise   (FIR)

Accumulator System: h[n] = ∑_{k=−∞}^{n} δ[k] = 1 for n ≥ 0 and 0 for n < 0, i.e. h[n] = u[n]   (IIR)

Backward Difference System: h[n] = δ[n] − δ[n − 1]   (FIR)
56
Stability Condition for LTI Systems
An LTI system is BIBO stable if and only if its impulse response is absolutely summable, that is,

S = ∑_{k=−∞}^{∞} |h[k]| < ∞

57
Stable and Causal LTI Systems
• An LTI system is (BIBO) stable if and only if its impulse response is absolutely summable:

∑_{k=−∞}^{∞} |h[k]| < ∞

  – Write the output of the system as

|y[n]| = |∑_{k=−∞}^{∞} h[k] x[n − k]| ≤ ∑_{k=−∞}^{∞} |h[k]| |x[n − k]|

  – If the input is bounded, |x[n]| ≤ Bx, then the output is bounded by

|y[n]| ≤ Bx ∑_{k=−∞}^{∞} |h[k]|

  – The output is bounded if the absolute sum is finite

• An LTI system is causal if and only if h[k] = 0 for k < 0
58
Difference Equations
An important subclass of LTI systems is defined by an Nth-order linear constant-coefficient difference equation:

∑_{k=0}^{N} a_k y[n − k] = ∑_{m=0}^{M} b_m x[n − m]

Often the leading coefficient a0 = 1. Then the output y[n] can be computed recursively from

y[n] = −∑_{k=1}^{N} a_k y[n − k] + ∑_{m=0}^{M} b_m x[n − m]

A causal LTI system of this form can be simulated in MATLAB using the function filter:

y = filter(b,a,x);
59
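The recursion can be sketched in plain Python, assuming a0 = 1 and zero initial conditions (mirroring MATLAB's filter(b,a,x) convention); lccde_filter is an illustrative name, not a library function:

```python
# Recursive evaluation of an Nth-order LCCDE with a[0] = 1
def lccde_filter(b, a, x):
    assert a[0] == 1
    y = []
    for n in range(len(x)):
        acc = sum(b[m] * x[n - m] for m in range(len(b)) if n - m >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# accumulator y[n] = x[n] + y[n-1]:  a = [1, -1], b = [1]
assert lccde_filter([1], [1, -1], [1, 1, 1, 1]) == [1, 2, 3, 4]
# backward difference y[n] = x[n] - x[n-1]:  a = [1], b = [1, -1]
assert lccde_filter([1, -1], [1], [1, 2, 4]) == [1, 1, 2]
```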
Accumulator Example
Accumulator System: y[n] = ∑_{k=−∞}^{n} x[k]

y[n] = ∑_{k=−∞}^{n} x[k] = x[n] + ∑_{k=−∞}^{n−1} x[k] = x[n] + y[n − 1]

As a difference equation: b0 = 1, a0 = 1, a1 = −1. The block diagram adds x[n] to a one-sample-delayed copy of y[n]: a positive feedback system.

60
Total Solution Calculation
∑_{k=0}^{N} a_k y[n − k] = ∑_{m=0}^{M} b_m x[n − m]

The output sequence y[n] consists of a homogeneous solution yh[n] and a particular solution yp[n]:

y[n] = yh[n] + yp[n]

where the homogeneous solution yh[n] is obtained from the homogeneous equation

∑_{k=0}^{N} a_k yh[n − k] = 0

61
Homogeneous Solution
Given the homogeneous equation

∑_{k=0}^{N} a_k yh[n − k] = 0

assume that the homogeneous solution is of the form

yh[n] = λ^n

Then

∑_{k=0}^{N} a_k λ^{n−k} = λ^{n−N} (a0 λ^N + a1 λ^{N−1} + ⋯ + aN) = 0

which defines an Nth-order characteristic polynomial with roots λ1, λ2, …, λN.

The general solution is then the sequence

yh[n] = ∑_{m=1}^{N} Am λm^n

(if the roots are all distinct). The coefficients Am may be found from the initial conditions. 62
Particular Solution
The particular solution is required to satisfy the difference equation for a specific
input signal x[n], n ≥ 0.
∑_{k=0}^{N} a_k y[n − k] = ∑_{m=0}^{M} b_m x[n − m]

To find the particular solution we assume for the solution yp[n] a form that depends on the form of the specific input signal x[n].

y[n] = yh[n] + yp[n]

63
General Form of Particular Solution

Input Signal x[n]                 Particular Solution yp[n]
A (constant)                      K
A M^n                             K M^n
A n^M                             K0 n^M + K1 n^{M−1} + ⋯ + KM
A^n n^M                           A^n (K0 n^M + K1 n^{M−1} + ⋯ + KM)
A cos(ω0 n) or A sin(ω0 n)        K1 cos(ω0 n) + K2 sin(ω0 n)

64
Example (1/3)
Determine the homogeneous solution for

y[n] + y[n − 1] − 6 y[n − 2] = 0

Substitute yh[n] = λ^n:

λ^n + λ^{n−1} − 6 λ^{n−2} = λ^{n−2} (λ^2 + λ − 6) = λ^{n−2} (λ + 3)(λ − 2) = 0

The homogeneous solution is then

yh[n] = A1 λ1^n + A2 λ2^n = A1 (−3)^n + A2 (2)^n

65
Example (2/3)
Determine the particular solution for

y[n] + y[n − 1] − 6 y[n − 2] = x[n]

with x[n] = 8u[n] and y[-1] = 1 and y[-2] = -1

The particular solution has the form yp[n] = β:

β + β − 6β = 8

which is satisfied by β = −2.

66
Example (3/3)
Determine the total solution for

y[n] + y[n − 1] − 6 y[n − 2] = x[n]

with x[n] = 8u[n] and y[-1] = 1 and y[-2] = -1

The total solution has the form

y[n] = yh[n] + yp[n] = A1 (−3)^n + A2 (2)^n − 2

Applying the initial conditions,

y[−1] = −(1/3) A1 + (1/2) A2 − 2 = 1
y[−2] = (1/9) A1 + (1/4) A2 − 2 = −1

which gives A1 = −1.8 and A2 = 4.8, so

y[n] = −1.8 (−3)^n + 4.8 (2)^n − 2
67
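As a check (an illustrative sketch, not from the slides), the closed form can be compared with direct recursive evaluation of the difference equation:

```python
# Verify y[n] = -1.8(-3)^n + 4.8(2)^n - 2 against the recursion
# y[n] = -y[n-1] + 6 y[n-2] + x[n], x[n] = 8 for n >= 0,
# with initial conditions y[-1] = 1, y[-2] = -1
import math

y = {-2: -1.0, -1: 1.0}
for n in range(0, 8):
    y[n] = -y[n - 1] + 6 * y[n - 2] + 8       # recursive evaluation
for n in range(0, 8):
    closed = -1.8 * (-3) ** n + 4.8 * 2 ** n - 2
    assert math.isclose(y[n], closed)
```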
Initial-Rest Conditions

The output for a given input is not uniquely specified.


Auxiliary information or conditions are required.

Linearity, time invariance, and causality of the system will


depend on the auxiliary conditions. If an additional condition is
that the system is initially at rest (called initial-rest conditions),
then the system will be linear, time invariant, and causal.

68
Zero-input, Zero-state Response
An alternate approach for determining the total solution y[n] of a difference equation
is by computing its zero-input response yzi[n], and zero-state response yzs[n]. Then
the total solution is given by y[n] = yzi[n] + yzs[n].

The zero-input response is obtained by setting the input x[n] = 0 and satisfying the
initial conditions. The zero-state response is obtained by applying the specified input
with all initial conditions set to zero.

69
Example (1/2)
y[n] + y[n − 1] − 6 y[n − 2] = 0,  y[−1] = 1,  y[−2] = −1

yh[n] = A1 λ1^n + A2 λ2^n = A1 (−3)^n + A2 (2)^n

Zero-input response (x[n] = 0, initial conditions satisfied):

yzi[0] = A1 + A2 = −y[−1] + 6 y[−2] = −1 − 6 = −7
yzi[1] = −3 A1 + 2 A2 = −yzi[0] + 6 y[−1] = 7 + 6 = 13

giving A1 = −27/5 = −5.4 and A2 = −8/5 = −1.6.

Zero-state response (all initial conditions zero) for y[n] + y[n − 1] − 6 y[n − 2] = x[n] with x[n] = 8u[n]:

yzs[0] = A1 + A2 − 2 = x[0] = 8
yzs[1] = −3 A1 + 2 A2 − 2 = x[1] − yzs[0] = 8 − 8 = 0

giving A1 = 18/5 = 3.6 and A2 = 32/5 = 6.4.  70
Example (2/2)
Zero-input response:

yzi[n] = −5.4 (−3)^n − 1.6 (2)^n

Zero-state response:

yzs[n] = 3.6 (−3)^n + 6.4 (2)^n − 2

The total solution is

y[n] = yzi[n] + yzs[n] = −1.8 (−3)^n + 4.8 (2)^n − 2

This is the same as before.

71
Impulse Response

The impulse response h[n] of a causal system is the output


observed with input x[n] = δ[n].
For such a system, x[n] = 0 for n >0, so the particular solution is
zero, yp[n]=0. Thus the impulse response can be generated from the
homogeneous solution by determining the coefficients Am to satisfy
the zero initial conditions (for a causal system).

72
Example
Determine the impulse response for

y[n] + y[n − 1] − 6 y[n − 2] = x[n]


The impulse response is obtained from the homogeneous solution:

h[n] = A1 (−3)^n + A2 (2)^n,  n ≥ 0

For n = 0: y[0] + y[−1] − 6 y[−2] = x[0] with h[0] = δ[0] = 1 gives A1 + A2 = 1.

For n = 1: y[1] + y[0] − 6 y[−1] = x[1] with h[1] + h[0] = δ[1] = 0 gives (−3 A1 + 2 A2) + (A1 + A2) = 0.

Solving: A1 = 3/5 and A2 = 2/5, so

h[n] = (3/5) (−3)^n + (2/5) (2)^n,  n ≥ 0

73
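The result can be checked (an illustrative sketch, not from the slides) by driving the recursion with an impulse and zero initial conditions:

```python
# Generate h[n] from y[n] = -y[n-1] + 6 y[n-2] + x[n] with x = delta
# and compare with the closed form h[n] = (3/5)(-3)^n + (2/5)(2)^n
import math

h = {-2: 0.0, -1: 0.0}
for n in range(0, 8):
    x = 1 if n == 0 else 0
    h[n] = -h[n - 1] + 6 * h[n - 2] + x
for n in range(0, 8):
    assert math.isclose(h[n], (3 / 5) * (-3) ** n + (2 / 5) * 2 ** n)
```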
DSP Applications
• Image Processing
  – Pattern recognition
  – Robotic vision
  – Image enhancement
  – Facsimile
  – Satellite weather maps
  – Animation
• Instrumentation/Control
  – Spectrum analysis
  – Position and rate control
  – Noise reduction
  – Data compression
• Speech/Audio
  – Speech recognition/synthesis
  – Text to speech
  – Digital audio
  – Equalization
• Military
  – Secure communication
  – Radar processing
  – Sonar processing
  – Missile guidance
• Telecommunications
  – Echo cancellation
  – Adaptive equalization
  – ADPCM transcoders
  – Spread spectrum
  – Video conferencing
  – Data communication
• Biomedical
  – Patient monitoring
  – Scanners
  – EEG brain mappers
  – ECG analysis
  – X-ray storage/enhancement
74
75
Image enhancement

76
More Examples of Applications
• Sound Recording Applications
  – Compressors and Limiters
  – Expanders and Noise Gates
  – Equalizers and Filters
  – Noise Reduction Systems
  – Delay and Reverberation Systems
  – Special Effect Circuits
• Speech Processing
  – Speech Recognition
  – Speech Communication
• Telephone Dialing Applications
• FM Stereo Applications
• Electronic Music Synthesis
  – Subtractive Synthesis
  – Additive Synthesis
• Echo Cancellation in Telephone Networks
• Interference Cancellation in Wireless Telecommunication
77
Reasons for Using DSP
• Signals and data of many types are increasingly stored in digital computers and transmitted in digital form from place to place. In many cases it makes sense to process them digitally as well.
• Digital processing is inherently stable and reliable. It also offers certain technical possibilities not available with analog methods.
• Rapid advances in IC design and manufacture are producing ever more powerful DSP devices at decreasing cost.
• Flexibility in reconfiguring
• Better control of accuracy requirements
• Easily transportable, and offline processing is possible
• Cheaper hardware in some cases
• In many cases DSP is used to process a number of signals simultaneously. This may be done by interlacing samples (a technique known as TDM, time-division multiplexing) obtained from the various signal channels.
• The main limitations are processing speed and the larger bandwidth requirement.

78
(Slides 79–88 are figure-only: system analysis; frequency response; an ideal lowpass filter example; converting a non-causal filter to a causal one; phase distortion and delay; phase delay; and group delay.)
The Z-Transform

89
Z-Transform Definition
- Counterpart of the Laplace transform for discrete-time signals
- Generalization of the Fourier transform
- The Fourier transform does not exist for all signals

The z-transform of a sequence x[n] is defined as

Z{x[n]} = ∑_{n=−∞}^{∞} x[n] z^{−n} = X(z)

The inverse z-transform is given by

x[n] = (1 / 2πj) ∮_C X(z) z^{n−1} dz

This expression denotes a closed contour integral taken counterclockwise about the origin and within the region of convergence (ROC). It incorporates the Cauchy integral theorem from the theory of complex variables. This result is not given directly in Oppenheim, but may be found in Proakis.
90
Relationship to Fourier Transform
The z-transform of a sequence x[n] is defined as

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n}

The Fourier transform of a sequence x[n] is defined as

X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}

For z = e^{jω} the z-transform reduces to the Fourier transform.

This domain is a circle of unit radius in the complex plane, that is, |z| = 1.
91
Convergence of the z-Transform
• The DTFT does not always converge:

X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn}

  – The infinite sum is not finite if x[n] is not absolutely summable
  – Example: x[n] = a^n u[n] for |a| > 1 does not have a DTFT

• The complex variable z can be written as r e^{jω}, so the z-transform is

X(r e^{jω}) = ∑_{n=−∞}^{∞} x[n] (r e^{jω})^{−n} = ∑_{n=−∞}^{∞} (x[n] r^{−n}) e^{−jωn}

• This is the DTFT of x[n] multiplied by the exponential sequence r^{−n}
  – For certain choices of r the sum may be made finite:

∑_{n=−∞}^{∞} |x[n] r^{−n}| < ∞

92
Unit Circle in Complex Z-Plane
- The z-transform is a function of the complex variable z
- It is convenient to describe it on the complex z-plane
- If we plot z = e^{jω} for ω = 0 to 2π we get the unit circle

(Figure: the unit circle |z| = 1 in the z-plane, with the angle ω measured from the positive real axis.)

93
Region of Convergence (ROC)
For any given sequence, the set of values of z for which the z-transform converges is called the region of convergence (ROC). The criterion for convergence is that the z-transform be absolutely summable:

∑_{n=−∞}^{∞} |x[n] z^{−n}| < ∞

If some value of z, say, z = z1, is in the ROC, then all values of z on the circle defined
by |z| = |z1| will also be in the ROC. So, the ROC will consist of a ring in the z-plane
centered about the origin. Its outer boundary will be a circle (or the ROC may extend
outward to infinity), and its inner boundary will be a circle (or it may extend inward
to include the origin).

94
Region of Convergence
(Figure: the ROC as a ring r2 < |z| < r1 in the z-plane.)

The region of convergence (ROC) is a ring in the z-plane. For specific cases, the inner boundary can extend inward to the origin, and the ROC becomes a disk. In other cases, the outer boundary can extend outward to infinity. 95
Laurent Series

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n}

A power series of this form is a Laurent series. A Laurent series, and therefore
a z-transform, represents an analytic function at every point within the region
of convergence. This means that the z-transform and all its derivatives must be
continuous functions of z within the region of convergence.
X(z) = P(z) / Q(z)

Among the most useful z-transforms are those for which X(z) is a rational function inside the region of convergence, as above, where P(z) and Q(z) are polynomials. The values of z for which X(z) is zero are the zeros of X(z), and the values for which X(z) is infinite are the poles of X(z).
96
Properties of the ROC
1. The ROC is a ring or disk in the z-plane centered at the origin
0 ≤ rR < z < rL ≤ ∞

2. The Fourier transform of x[n] converges absolutely if and only if the ROC of the z-
transform of x[n] includes the unit circle.
3. The ROC cannot contain any poles.
4. If x[n] is a finite-duration sequence, then the ROC is the entire z-plane except possibly
z=0 or z=∞.
5. If x[n] is a right-sided sequence (zero for n < N1 < ∞), the ROC extends outward from the outermost (largest-magnitude) finite pole in X(z) to (and possibly including) infinity.
6. If x[n] is a left-sided sequence (zero for n > N2 > −∞), the ROC extends inward from the innermost (smallest-magnitude) nonzero pole in X(z) to (and possibly including) zero.
7. A two-sided sequence is an infinite-duration sequence that is neither right-sided nor left-sided. If x[n] is a two-sided sequence, the ROC will consist of a ring in the z-plane, bounded on the interior and exterior by a pole and, consistent with property 3, not containing any poles.
8. The ROC must be a connected region.
97
Properties of ROC Shown Graphically
Finite-Duration Signals

Causal: entire z-plane except z = 0

Anticausal: entire z-plane except z = ∞

Two-sided: entire z-plane except z = 0 and z = ∞

Infinite-Duration Signals

Causal: |z| > r2

Anticausal: |z| < r1

Two-sided: r2 < |z| < r1
98
Example: Right-Sided Sequence

x[n] = a^n u[n]

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n} = ∑_{n=0}^{∞} (a z^{−1})^n = 1 / (1 − a z^{−1})

ROC: |a z^{−1}| < 1, or |z| > |a|

99
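This pair can be spot-checked numerically; the sketch below picks the arbitrary test values a = 0.5 and z = 2, a point inside the ROC |z| > |a|, where the truncated sum approaches 1 / (1 − a z^{−1}):

```python
# Truncated z-transform sum of a^n u[n] vs. the closed form
import math

a, z = 0.5, 2.0
partial = sum(a**n * z**(-n) for n in range(60))
assert math.isclose(partial, 1 / (1 - a / z))
```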
Example: Left-Sided Sequence

x[n] = −a^n u[−n − 1]   (nonzero for n ≤ −1)

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n} = −∑_{n=−∞}^{−1} (a z^{−1})^n = 1 − ∑_{n=0}^{∞} (a^{−1} z)^n

     = 1 − 1 / (1 − a^{−1} z) = 1 / (1 − a z^{−1})

ROC: |a^{−1} z| < 1, or |z| < |a|

100
100
Example: Sum of Two Exponential
Sequences
x[n] = (1/2)^n u[n] + (−1/3)^n u[n]

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n} = 1 / (1 − (1/2) z^{−1}) + 1 / (1 + (1/3) z^{−1})

ROC: |(1/2) z^{−1}| < 1 and |(1/3) z^{−1}| < 1, i.e. |z| > 1/2

X(z) = (2 − (1/6) z^{−1}) / ((1 − (1/2) z^{−1})(1 + (1/3) z^{−1})) = 2z (z − 1/12) / ((z − 1/2)(z + 1/3))

Poles at 1/2 and −1/3, zeros at 0 and 1/12.  101


Example: Two-Sided Sequence
x[n] = (−1/3)^n u[n] − (1/2)^n u[−n − 1]

(−1/3)^n u[n]  ⇒  1 / (1 + (1/3) z^{−1}),  |z| > 1/3

−(1/2)^n u[−n − 1]  ⇒  1 / (1 − (1/2) z^{−1}),  |z| < 1/2

X(z) = 2z (z − 1/12) / ((z − 1/2)(z + 1/3))

ROC: 1/3 < |z| < 1/2
102
Example: Finite Length Sequence

x[n] = a^n for 0 ≤ n ≤ N − 1, and 0 otherwise

X(z) = ∑_{n=−∞}^{∞} x[n] z^{−n} = ∑_{n=0}^{N−1} (a z^{−1})^n = (1 − (a z^{−1})^N) / (1 − a z^{−1})

The sum is finite, so the ROC is |a| < ∞ and z ≠ 0.

(Figure: pole-zero plot for N = 16.)
103
Z-transforms with the same pole-zero locations
illustrating the different possibilities for the ROC.
Each ROC corresponds to a different sequence.

b) A right-sided sequence c) A left-sided sequence

d) A two-sided sequence e) A two-sided sequence


104
Common Z-Transform Pairs
Sequence → Transform, ROC:

δ[n] → 1, all z
u[n] → 1 / (1 − z^{−1}), |z| > 1
−u[−n − 1] → 1 / (1 − z^{−1}), |z| < 1
δ[n − m] → z^{−m}, all z except 0 (if m > 0) or ∞ (if m < 0)
a^n u[n] → 1 / (1 − a z^{−1}), |z| > |a|
−a^n u[−n − 1] → 1 / (1 − a z^{−1}), |z| < |a|
n a^n u[n] → a z^{−1} / (1 − a z^{−1})^2, |z| > |a|
−n a^n u[−n − 1] → a z^{−1} / (1 − a z^{−1})^2, |z| < |a|
[cos ω0 n] u[n] → (1 − [cos ω0] z^{−1}) / (1 − [2 cos ω0] z^{−1} + z^{−2}), |z| > 1
[sin ω0 n] u[n] → ([sin ω0] z^{−1}) / (1 − [2 cos ω0] z^{−1} + z^{−2}), |z| > 1
[r^n cos ω0 n] u[n] → (1 − [r cos ω0] z^{−1}) / (1 − [2r cos ω0] z^{−1} + r^2 z^{−2}), |z| > r
[r^n sin ω0 n] u[n] → ([r sin ω0] z^{−1}) / (1 − [2r cos ω0] z^{−1} + r^2 z^{−2}), |z| > r
a^n for 0 ≤ n ≤ N − 1 (0 otherwise) → (1 − a^N z^{−N}) / (1 − a z^{−1}), |z| > 0

105
Z-Transform Properties (1/2)

3.4.1 Linearity
  a x1[n] + b x2[n]  ⇔  a X1(z) + b X2(z),   ROC contains Rx1 ∩ Rx2

3.4.2 Time Shifting
  x[n - n0]  ⇔  z^-n0 X(z),   ROC = Rx (except for possible addition or deletion of 0 or ∞)

3.4.3 Multiplication by an Exponential Sequence
  z0^n x[n]  ⇔  X(z/z0),   ROC = |z0| Rx

3.4.4 Differentiation of X(z)
  n x[n]  ⇔  -z dX(z)/dz,   ROC = Rx

106
Z-Transform Properties (2/2)

3.4.5 Conjugation of a Complex Sequence
  x*[n]  ⇔  X*(z*),   ROC = Rx

3.4.6 Time Reversal
  x*[-n]  ⇔  X*(1/z*),   ROC = 1/Rx

3.4.7 Convolution of Sequences
  x1[n] * x2[n]  ⇔  X1(z) X2(z),   ROC contains Rx1 ∩ Rx2

3.4.8 Initial-Value Theorem
  x[0] = lim_{z→∞} X(z), provided that x[n] is zero for n < 0, i.e. that x[n] is causal.

107
Inverse z-Transform

Method of inspection – recognize certain transform pairs.

Partial Fraction Expansion

X(z) = Σ_{k=0}^{M} b_k z^-k / Σ_{k=0}^{N} a_k z^-k

Factor to

X(z) = (b0/a0) Π_{k=1}^{M} (1 - c_k z^-1) / Π_{k=1}^{N} (1 - d_k z^-1)

For M < N and first-order poles:

X(z) = Σ_{k=1}^{N} A_k / (1 - d_k z^-1)   where   A_k = (1 - d_k z^-1) X(z) |_{z = d_k}
108
Example: Second-Order Z-Transform

X(z) = 1 / ((1 - (1/4)z^-1)(1 - (1/2)z^-1)),   |z| > 1/2

X(z) = A1/(1 - (1/4)z^-1) + A2/(1 - (1/2)z^-1)

A1 = 1 / (1 - (1/2)(1/4)^-1) = -1
A2 = 1 / (1 - (1/4)(1/2)^-1) = 2

X(z) = -1/(1 - (1/4)z^-1) + 2/(1 - (1/2)z^-1)
109
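A quick numerical check (plain Python, written for this sketch) that the expansion above is an identity: evaluate both forms of X(z) at an arbitrary point in the ROC.

```python
def X_direct(z):
    # X(z) in product form
    return 1.0 / ((1 - 0.25 / z) * (1 - 0.5 / z))

def X_partial(z):
    # Partial-fraction form with A1 = -1 at pole 1/4 and A2 = 2 at pole 1/2
    return -1.0 / (1 - 0.25 / z) + 2.0 / (1 - 0.5 / z)

z0 = 2.0 + 0.3j          # |z0| > 1/2, inside the ROC
print(abs(X_direct(z0) - X_partial(z0)) < 1e-12)   # True
```

The same check holds at any other point of the ROC, since both expressions are the same rational function.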
Partial Fraction Expansion

If M ≥ N:

X(z) = Σ_{r=0}^{M-N} B_r z^-r + Σ_{k=1}^{N} A_k / (1 - d_k z^-1)

B_r can be obtained by long division of the numerator by the denominator, stopping when the remainder is of lower degree than the denominator.

If M ≥ N and X(z) has multiple-order poles, specifically a pole of order s at z = d_i:

X(z) = Σ_{r=0}^{M-N} B_r z^-r + Σ_{k=1, k≠i}^{N} A_k / (1 - d_k z^-1) + Σ_{m=1}^{s} C_m / (1 - d_i z^-1)^m

C_m = 1/[(s-m)! (-d_i)^{s-m}] · { d^{s-m}/dw^{s-m} [ (1 - d_i w)^s X(w^-1) ] } |_{w = d_i^-1}
110
Example

X(z) = (1 + z^-1)^2 / ((1 - (1/2)z^-1)(1 - z^-1)) = (1 + 2z^-1 + z^-2) / (1 - (3/2)z^-1 + (1/2)z^-2),   |z| > 1

Since M = N, one step of long division gives B0:

X(z) = B0 + A1/(1 - (1/2)z^-1) + A2/(1 - z^-1)

B0 = 2   (ratio of the z^-2 coefficients: 1 / (1/2))

A1 = (1 + z^-1)^2 / (1 - z^-1) |_{z^-1 = 2} = (3)^2 / (1 - 2) = -9

A2 = (1 + z^-1)^2 / (1 - (1/2)z^-1) |_{z^-1 = 1} = (2)^2 / (1 - 1/2) = 8

X(z) = 2 - 9/(1 - (1/2)z^-1) + 8/(1 - z^-1)

x[n] = 2δ[n] - 9(1/2)^n u[n] + 8u[n]
111
Power Series Expansion

X(z) = Σ_{n=-∞}^{∞} x[n] z^-n

Note that δ[n - m]  ⇔  z^-m

Example:

X(z) = z^2 (1 - (1/2)z^-1)(1 - z^-2) = z^2 - (1/2)z - 1 + (1/2)z^-1

x[n] = δ[n+2] - (1/2)δ[n+1] - δ[n] + (1/2)δ[n-1]
112
Example

X(z) = log(1 + a z^-1),   |z| > |a|

Expand in a power series:

log(1 + a z^-1) = Σ_{n=1}^{∞} (-1)^{n+1} a^n z^-n / n

x[n] = (-1)^{n+1} a^n / n  for n ≥ 1,
       0                   for n ≤ 0
113
Contour Integration

Cauchy integral theorem:

(1/2πj) ∮_C z^-k dz = 1 for k = 1,  0 for k ≠ 1

where C is a counterclockwise contour that encircles the origin.

Then one can show that

x[n] = (1/2πj) ∮_C X(z) z^{n-1} dz

x[n] = Σ [ residues of X(z) z^{n-1} at the poles inside C ]
114
Residue Calculation

If X(z) is a rational function of z, we can write

X(z) z^{n-1} = ψ(z) / (z - d0)^s

Then one can show that

Res[ X(z) z^{n-1} at z = d0 ] = (1/(s-1)!) ψ^(s-1)(d0)

where ψ^(s-1) denotes the (s-1)th derivative of ψ (for a first-order pole, s = 1, this is simply ψ(d0)).
115
Quick Review of LTI Systems

• LTI systems are uniquely determined by their impulse response:
  y[n] = Σ_{k=-∞}^{∞} x[k] h[n-k] = x[n] * h[n]
• We can write the input-output relation also in the z-domain:
  Y(z) = H(z) X(z)
• Or we can define an LTI system with its frequency response:
  Y(e^jω) = H(e^jω) X(e^jω)
• H(e^jω) defines the magnitude and phase change at each frequency
• We can define a magnitude response
  |Y(e^jω)| = |H(e^jω)| |X(e^jω)|
• And a phase response
  ∠Y(e^jω) = ∠H(e^jω) + ∠X(e^jω)
116
Phase Distortion and Delay

• Remember the ideal delay system
  h_id[n] = δ[n - nd]   ⇔   H_id(e^jω) = e^{-jωnd}   (DTFT)
• In terms of magnitude and phase response:
  |H_id(e^jω)| = 1
  ∠H_id(e^jω) = -ω nd,   |ω| < π
• Delay distortion is a generally acceptable form of distortion
  – Translates into a simple delay in time
• Also called a linear phase response
  – Generally used as the target phase response in system design
• Ideal lowpass or highpass filters have zero phase response
  – Not implementable in practice
117
System Functions for Difference Equations

• Ideal systems are conceptually useful but not implementable
• Constant-coefficient difference equations are
  – general enough to represent most useful systems
  – implementable
  – LTI and causal with zero initial conditions

  Σ_{k=0}^{N} a_k y[n-k] = Σ_{k=0}^{M} b_k x[n-k]

• The z-transform is useful in analyzing difference equations
• Taking the z-transform of both sides:

  Σ_{k=0}^{N} a_k z^-k Y(z) = Σ_{k=0}^{M} b_k z^-k X(z)

  ( Σ_{k=0}^{N} a_k z^-k ) Y(z) = ( Σ_{k=0}^{M} b_k z^-k ) X(z)
118
System Function

• Systems described by difference equations have system functions of the form

  H(z) = Y(z)/X(z)
       = Σ_{k=0}^{M} b_k z^-k / Σ_{k=0}^{N} a_k z^-k
       = (b0/a0) Π_{k=1}^{M} (1 - c_k z^-1) / Π_{k=1}^{N} (1 - d_k z^-1)

• Example

  H(z) = (1 + z^-1)^2 / ((1 - (1/2)z^-1)(1 + (3/4)z^-1))
       = (1 + 2z^-1 + z^-2) / (1 + (1/4)z^-1 - (3/8)z^-2) = Y(z)/X(z)

  (1 + (1/4)z^-1 - (3/8)z^-2) Y(z) = (1 + 2z^-1 + z^-2) X(z)

  y[n] + (1/4)y[n-1] - (3/8)y[n-2] = x[n] + 2x[n-1] + x[n-2]

119
Stability and Causality
• A system function does not uniquely specify a system
– Need to know the ROC
• Properties of system gives clues about the ROC
• Causal systems must be right sided
– ROC is outside the outermost pole
• A stable system requires an absolutely summable impulse response

  Σ_{n=-∞}^{∞} |h[n]| < ∞

– Absolute summability implies existence of DTFT


– DTFT exists if unit circle is in the ROC
– Therefore, stability implies that the ROC includes the unit circle
• Causal AND stable systems have all poles inside unit circle
– Causal hence the ROC is outside outermost pole
– Stable hence unit circle included in ROC
– This means outermost pole is inside unit circle
– Hence all poles are inside unit circle
120
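The causal-and-stable test above (all poles inside the unit circle) can be checked numerically. A minimal sketch using `numpy.roots` on the denominator coefficients; the function name and the example coefficient sets are illustrative choices:

```python
import numpy as np

def is_stable_causal(a):
    """a = denominator coefficients [1, a1, ..., aN] of H(z) in powers of z^-1.
    For a causal system, stability <=> all poles strictly inside |z| = 1."""
    poles = np.roots(a)
    return bool(np.all(np.abs(poles) < 1.0))

print(is_stable_causal([1.0, -0.75, 0.125]))  # poles at 0.5 and 0.25 -> True
print(is_stable_causal([1.0, -2.5, 1.0]))     # poles at 2 and 0.5    -> False
```

The second denominator is the system y[n] - (5/2)y[n-1] + y[n-2] = x[n] discussed nearby: causal implies ROC |z| > 2, which excludes the unit circle.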
Example

• Consider the following LTI system

  y[n] - (5/2)y[n-1] + y[n-2] = x[n]

• The system function can be written as

  H(z) = 1 / ((1 - (1/2)z^-1)(1 - 2z^-1))

• Three possibilities for the ROC:
  – Causal but not stable:       ROC1: |z| > 2
  – Stable but not causal:       ROC2: 1/2 < |z| < 2
  – Neither causal nor stable:   ROC3: |z| < 1/2
121
Structures for Discrete-Time Systems

• Block Diagram Representation of Linear Constant-Coefficient


Difference Equations
• Signal Flow Graph Representation of Linear Constant-Coefficient
Difference Equations
• Basic Structures for IIR Systems
• Transposed Forms
• Basic Network Structures for FIR Systems
• Lattice Structures

122
Introduction

• Example: The system function of a discrete-time system is

  H(z) = (b0 + b1 z^-1) / (1 - a z^-1),   |z| > |a|

• Its impulse response will be

  h[n] = b0 a^n u[n] + b1 a^{n-1} u[n-1]

• Its difference equation will be

  y[n] - a y[n-1] = b0 x[n] + b1 x[n-1]

Since this system has an infinite-duration impulse response, it is not possible to implement the system by discrete convolution. However, it can be rewritten in a form that provides the basis for an algorithm for recursive computation:

  y[n] = a y[n-1] + b0 x[n] + b1 x[n-1]
123
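A minimal sketch of the recursive computation above (the coefficient values a, b0, b1 are arbitrary choices for illustration), checked against the closed-form impulse response h[n] = b0·aⁿu[n] + b1·aⁿ⁻¹u[n-1]:

```python
a, b0, b1 = 0.5, 1.0, 2.0
N = 10
x = [1.0] + [0.0] * (N - 1)          # unit impulse input
y = [0.0] * N

# Recursive computation: y[n] = a*y[n-1] + b0*x[n] + b1*x[n-1]
for n in range(N):
    y_prev = y[n - 1] if n >= 1 else 0.0
    x_prev = x[n - 1] if n >= 1 else 0.0
    y[n] = a * y_prev + b0 * x[n] + b1 * x_prev

# Closed-form impulse response for comparison
h = [b0 * a**n + (b1 * a**(n - 1) if n >= 1 else 0.0) for n in range(N)]
print(max(abs(u - v) for u, v in zip(y, h)) < 1e-12)   # True
```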
Block Diagram Representation of Linear Constant-coefficient
Difference Equations

x[n] a ax[n] Multiplication of a


sequence by a constant

z-1 Unit delay


x[n] x[n-1]

+
Addition of two sequences
x1[n] x1[n] + x2[n]

x2[n]

124
Block Diagram Representation of Linear Constant-coefficient Difference Equations

  H(z) = Y(z)/X(z) = ( Σ_{k=0}^{M} b_k z^-k ) / ( 1 - Σ_{k=1}^{N} a_k z^-k )

H(z) can be decomposed in two ways:

(1) H(z) = H2(z) H1(z), with H1(z) = Σ_{k=0}^{M} b_k z^-k (zeros first) and
    H2(z) = 1 / (1 - Σ_{k=1}^{N} a_k z^-k) (poles second):

      V(z) = H1(z) X(z),   Y(z) = H2(z) V(z)

(2) H(z) = H1(z) H2(z), applying the poles first:

      W(z) = H2(z) X(z),   Y(z) = H1(z) W(z)
125
Block diagram representation for a general Nth-order
difference equation:
Direct Form I

x[n] b0 v[n] y[n]


+ +
z-1 z-1
b1 a1
x[n-1]
+ + y[n-1]

z-1 z-1
x[n-2] y[n-2]
bM-1 aN-1
+ +
z-1 z-1
bM aN
x[n-M] y[n-N]

126
Block diagram representation for a general Nth-order
difference equation:
Direct Form II

x[n] w[n] b0 y[n]


+ +
z-1 z-1
a1 b1
+ w[n-1] w[n-1]
+
z-1 z-1
w[n-2] w[n-2]
aN-1 bM-1
+ +
z-1 z-1
aN bM
w[n-N] w[n-M]

127
Combination of delay units (in case N = M)

x[n] w[n] b0 y[n]


+ +
z-1
a1 b1
+ +
z-1

aN-1 bN-1
+ +
z-1
aN bN

128
Block Diagram Representation of Linear Constant-
coefficient Difference Equations 2
• An implementation with the minimum number of delay elements is
commonly referred to as a canonic form implementation.
• The direct form I is a direct realization of the difference equation
satisfied by the input x[n] and the output y[n], which in turn can be
written directly from the system function by inspection.
• The direct form II, or canonic direct form, is a rearrangement of the
  direct form I that combines the delay units.

129
Signal Flow Graph Representation of Linear Constant-
coefficient Difference Equations

a
Attenuator
x[n] d e y[n]
a z-1
z-1 c
Delay Unit b
z-1

Node: Adder, Separator, Source, or Sink

130
Basic Structures for IIR Systems

• Direct Forms
• Cascade Form
• Parallel Form
• Feedback in IIR Systems

131
Basic Structures for IIR Systems

• Direct Forms

  y[n] - Σ_{k=1}^{N} a_k y[n-k] = Σ_{k=0}^{M} b_k x[n-k]

  H(z) = Σ_{k=0}^{M} b_k z^-k / ( 1 - Σ_{k=1}^{N} a_k z^-k )
132
Direct Form I (M = N)

133
Direct Form II (M = N)

134
Direct Form II

135
x[n] y[n]
+ +
z-1 z-1
2 0.75
+ +
z-1 z-1
-0.125

x[n] y[n]
+ +
z-1
0.75 2
+ +
z-1 1 + 2 z −1 + z −2
H ( z) =
-0.125 1 − 0.75z −1 + 0.125z −2

136
x[n] y[n]

z-1 z-1
2 0.75

z-1 z-1
-0.125

x[n] y[n]

z-1
0.75 2

z-1
-0.125

137
Basic Structures for IIR Systems 2

• Cascade Form

  H(z) = A · Π_{k=1}^{M1} (1 - g_k z^-1) · Π_{k=1}^{M2} (1 - h_k z^-1)(1 - h_k* z^-1)
           / [ Π_{k=1}^{N1} (1 - c_k z^-1) · Π_{k=1}^{N2} (1 - d_k z^-1)(1 - d_k* z^-1) ]

  where M = M1 + 2M2 and N = N1 + 2N2.

• A modular structure that is advantageous for many types of implementations is obtained by combining pairs of real factors and complex conjugate pairs into second-order factors:

  H(z) = Π_{k=1}^{Ns} (b0k + b1k z^-1 + b2k z^-2) / (1 - a1k z^-1 - a2k z^-2)

  where Ns is the largest integer contained in (N+1)/2.
138
w1[n] y1[n] w2[n] y2[n] w3[n] y3[n]
x[n] b01 b02 b03 y[n]

z-1 z-1 z-1


a11 b11 a12 b12 a13 b13
z-1 z-1 z-1
a21 b21 a22 b22 a23 b23

1 + 2 z −1 + z −2 (1 + z −1 )(1 + z −1 )
H ( z) = =
1 − 0.75z −1 + 0.125z −2 (1 − 0.5z −1 )(1 − 0.25z −1 )
x[n] y[n]

z-1 z-1 z-1 z-1


0.5 0.25

x[n] y[n]

z-1 z-1
0.5 0.25
139
Basic Structures for IIR Systems 3

• Parallel Form

  H(z) = Σ_{k=0}^{NP} C_k z^-k + Σ_{k=1}^{N1} A_k / (1 - c_k z^-1)
         + Σ_{k=1}^{N2} B_k (1 - e_k z^-1) / [(1 - d_k z^-1)(1 - d_k* z^-1)]

  where N = N1 + 2N2. If M ≥ N, then NP = M - N; otherwise, the first summation on the right-hand side is not included.

• Alternatively, the real poles of H(z) can be grouped in pairs:

  H(z) = Σ_{k=0}^{NP} C_k z^-k + Σ_{k=1}^{NS} (e0k + e1k z^-1) / (1 - a1k z^-1 - a2k z^-2)

  where NS is the largest integer contained in (N+1)/2, and if NP = M - N is negative, the first sum is not present.
140
C0

w1[n] b y1[n]
01

a11 z-1 b
11

a21 z-1

x[n] w2[n] b y2[n] y[n]


02

a12 z-1 b
12

a22 z-1

w3[n] b y3[n]
03

a13 z-1 b
13

a23 z-1
Parallel form structure for
sixth order system (M=N=6).

141
H(z) = (1 + 2z^-1 + z^-2) / (1 - 0.75z^-1 + 0.125z^-2)
     = 8 + (-7 + 8z^-1) / (1 - 0.75z^-1 + 0.125z^-2)
     = 8 + 18/(1 - 0.5z^-1) - 25/(1 - 0.25z^-1)
8

x[n] y[n]

-7 8
z-1
0.75 8 x[n] 18 y[n]

z-1 z-1
-0.125 0.5

-25
z-1
0.25
142
Transposed Forms

• Transposition (or flow graph reversal) of a flow graph is accomplished by


reversing the directions of all branches in the network while keeping the
branch transmittances as they were and reversing the roles of the input and
output so that source nodes become sink nodes and vice versa.

x[n] y[n] y[n] x[n]

z-1 z-1
a a

x[n] y[n]

z-1
a

143
x[n] b0 y[n]

z-1 z-1
b1 a1

z-1 z-1
b2 a2

bN-1 aN-1

z-1 z-1
bN aN

x[n] b0 y[n]

z-1 z-1
a1 b1

z-1 z-1
a2 b2

aN-1 bN-1

z-1 z-1
aN bN

144
x[n] b0 y[n]
z-1
a1 b1
z-1
a2 b2

aN-1 bN-1
z-1 x[n] b0 y[n]
aN bN
z-1
b1 a1
z-1
b2 a2

bN-1 aN-1
z-1
bN aN

145
Basic Network Structures for FIR Systems

• Direct Form
  – Also referred to as a tapped delay line structure or a transversal filter structure.
• Transposed Form
• Cascade Form

  H(z) = Σ_{n=0}^{M} h[n] z^-n = Π_{k=1}^{MS} (b0k + b1k z^-1 + b2k z^-2)

  where MS is the largest integer contained in (M+1)/2. If M is odd, one of the coefficients b2k will be zero.

146
Direct Form

• For a causal FIR system, the system function has only zeros (except for poles at z = 0), with the difference equation:

  y[n] = Σ_{k=0}^{M} b_k x[n-k]

• It can be interpreted as the discrete convolution of x[n] with the impulse response

  h[n] = b_n for n = 0, 1, ..., M,
         0   otherwise.
147
Direct Form (Tapped Delay Line or Transversal Filter)

x[n] b0 y[n]

z-1
b1

z-1 b2

bN-1

z-1 bN

x[n] z-1 z-1 z-1

h[0] h[1] h[2] h[M-1] h[M] y[n]

148
Transposed Form of FIR Network

z-1 z-1 z-1 z-1 y[n]

h[M] h[M-1] h[M-2] h[2] h[1] h[0]

x[n]

149
Cascade Form Structure of a FIR System

x[n] b01 b02 b0Ms y[n]

z-1 z-1 z-1


b11 b12 b1Ms

z-1 b21 z-1 b22 z-1 b2Ms

150
Structures for Linear-Phase FIR Systems

• Symmetric impulse response: h[M-n] = h[n] for n = 0, 1, ..., M
  – M even (Type I):
    y[n] = Σ_{k=0}^{M/2-1} h[k](x[n-k] + x[n-M+k]) + h[M/2] x[n-M/2]
  – M odd (Type II):
    y[n] = Σ_{k=0}^{(M-1)/2} h[k](x[n-k] + x[n-M+k])
• Antisymmetric impulse response: h[M-n] = -h[n] for n = 0, 1, ..., M
  – M even (Type III):
    y[n] = Σ_{k=0}^{M/2-1} h[k](x[n-k] - x[n-M+k])
  – M odd (Type IV):
    y[n] = Σ_{k=0}^{(M-1)/2} h[k](x[n-k] - x[n-M+k])
151
Direct form structure for an FIR linear-phase when M is even.

x[n] z-1 z-1 z-1

z-1 z-1 z-1

y[n] h[0] h[1] h[2] h[(M/2)-1] h[M/2]

Direct form structure for an FIR linear-phase when M is odd.

x[n] z-1 z-1 z-1

z-1

z-1 z-1 z-1

y[n] h[0] h[1] h[2] h[(M-3)/2] h[(M-1)/2]

152
Lattice Structures

• Theory of autoregressive signal modeling :


– Lattice Structure
• Development of digital filter structures that are analogous to analog
filter structures :
– Wave Digital Filters
• Another structure development approach is based on state-variable
representations and linear transformations.

153
Lattice Structures 2

• FIR Lattice

  H(z) = Y(z)/X(z) = A(z) = 1 - Σ_{m=1}^{N} a_m z^-m

The coefficients {k_i} in the lattice structure are referred to as the k-parameters, which are called reflection coefficients or PARCOR coefficients.
  – When used in signal modeling, the k-parameters are estimated from a data signal.
  – Given a set of k-parameters, the system function (and therefore the impulse response) can be found by a recurrence formula that computes A(z) in terms of intermediate system functions.
154
Reflection coefficients or PARCOR coefficients structure

x[n] e0[n] e1[n] e2[n] eN-1[n] eN[n] y[n]

-k1 -k2 -kN


-k1 -k2 -kN
z-1 z-1 z-1

e~0[n] e~1[n] e~2[n] e~N-1[n] e~N[n]

Signal flow graph of an FIR lattice system

e0[n] = e~0[n] = x[n]


ei[n] = ei-1[n] – kie~i-1[n-1], i = 1, 2, …, N,
e~i[n] = -kiei-1[n] + e~i-1[n-1]
y[n] = eN[n]

155
A recurrence formula for computing A(z) = H(z) = Y(z)/X(z) can be obtained in terms of intermediate system functions:

  A_i(z) = E_i(z)/E_0(z) = 1 - Σ_{m=1}^{i} a_m^(i) z^-m

By the recursive technique:

  a_i^(i) = k_i
  a_m^(i) = a_m^(i-1) - k_i a_{i-m}^(i-1),   m = 1, 2, ..., i-1

Or by the reverse recursive technique:

  k_i = a_i^(i)
  a_m^(i-1) = [a_m^(i) + k_i a_{i-m}^(i)] / [1 - k_i^2],   m = 1, 2, ..., i-1
156
Example:

A(z) = (1 - 0.8j z^-1)(1 + 0.8j z^-1)(1 - 0.9 z^-1) = 1 - 0.9z^-1 + 0.64z^-2 - 0.576z^-3

Then a1(3) = 0.9, a2(3) = -0.64, and a3(3) = 0.576.
The k-parameters can be computed as follows:
  k3 = a3(3) = 0.576
  a1(2) = [a1(3) + k3 a2(3)]/[1 - k3^2] = 0.79518245
  a2(2) = [a2(3) + k3 a1(3)]/[1 - k3^2] = -0.18197491
  k2 = a2(2) = -0.18197491
  a1(1) = [a1(2) + k2 a1(2)]/[1 - k2^2] = 0.67275747
  k1 = a1(1) = 0.67275747
157
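The reverse recursion above is mechanical enough to sketch in a few lines. A minimal implementation (the function name `kparams` is an illustrative choice), run on the example coefficients:

```python
def kparams(a):
    """Reverse recursion from direct-form coefficients to k-parameters.
    a = [a1, ..., aN] with A(z) = 1 - sum_{m} a_m z^-m."""
    a = list(a)
    ks = []
    while a:
        N = len(a)
        k = a[-1]                 # k_i = a_i^(i)
        ks.append(k)
        if N == 1:
            break
        # a_m^(i-1) = [a_m^(i) + k_i a_{i-m}^(i)] / (1 - k_i^2)
        a = [(a[m] + k * a[N - 2 - m]) / (1 - k * k) for m in range(N - 1)]
    return list(reversed(ks))     # [k1, ..., kN]

# A(z) = 1 - 0.9 z^-1 + 0.64 z^-2 - 0.576 z^-3  ->  a = [0.9, -0.64, 0.576]
k = kparams([0.9, -0.64, 0.576])
print([round(v, 4) for v in k])   # [0.6728, -0.182, 0.576]
```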
x[n] y[n]
-0.6728 +0.182 -0.576
-0.6728 +0.182 -0.576

z-1 z-1 z-1

z–1 z–1 z-1


x[n]
-0.9 0.64 -0.576
y[n]

158
All-Pole Lattice

• A lattice system with an all-pole system function H(z) = 1/A(z) can be developed from the FIR lattice.
  – All roots of A(z) must be inside the unit circle: |k_i| < 1, i = 1, ..., N

  eN[n] = x[n]
  ei-1[n] = ei[n] + ki e~i-1[n-1],   i = N, N-1, ..., 1
  e~i[n] = -ki ei-1[n] + e~i-1[n-1]
  y[n] = e0[n] = e~0[n]

x[n] e0[n] e1[n] e2[n] eN-1[n] eN[n] y[n]

kN kN-2 k1
-kN -k
-1 N-2
-k1
z z-1 z-1

e~N[n] e~N-1[n] e~1[n] e~0[n] 159


Basic all-pole lattice structures

• Three-multiplier form
• Four-multiplier, normalized form:

  H(z) = Π_{i=1}^{N} cos θ_i / A(z)

• Four-multiplier, Kelly-Lochbaum form (first derived as an acoustic tube model for speech synthesis):

  H(z) = Π_{i=1}^{N} (1 + k_i) / A(z)

160
ei[n] ei-1[n]

-ki ki
Three-multiplier form
e’i[n] (1 - ki2) e’i-1[n]

ei[n] cos qi ei-1[n]

-sin qi sin qi
Four-multiplier, normalized
form
e’i[n] cos qi e’i-1[n]

ei[n] (1 + ki) ei-1[n]

-ki ki Four-multiplier, Kelly-


Lochbaum form
e’i[n] (1 - ki) e’i-1[n]

161
Lattice Systems with Poles and Zeros

  H(z) = Y(z)/X(z) = Σ_{i=0}^{N} c_i z^-i A_i(z^-1) / A(z) = B(z)/A(z)

where

  B(z) = Σ_{m=0}^{N} b_m z^-m,   b_m = c_m - Σ_{i=m+1}^{N} c_i a_{i-m}^(i)

x[n] = eN[n] eN-1[n] eN-2[n] e1[n] e0[n]

Section Section Section


e’N[n] N e’N-1[n] N - 1e’N-2[n] e’1[n] e’01[n]

cN cN-1 cN-2 c1 c0

y[n]
162
Example of lattice IIR filter with poles and zeros

x[n] y[n]

z-1
0.9 3

z-1
-0.64 3

z-1
0.576

x[n]

0.576 -0.182 0.6728


-0.576 0.182 -0.6728

z-1 z-1 z-1


3.9 5.4612 4.5404

y[n]

163
UNIT-2
DFS , DFT & FFT

164
Fourier representation of signals

165
Fourier representation of signals

166
Fourier representation of signals

167
Discrete complex exponentials

168
Discrete Fourier Series

• Given a periodic sequence x~[n] with period N so that

  x~[n] = x~[n + rN]

• The Fourier series representation can be written as

  x~[n] = (1/N) Σ_k X~[k] e^{j(2π/N)kn}

• The Fourier series representation of continuous-time periodic signals requires infinitely many complex exponentials
• Note that for discrete-time periodic signals we have

  e^{j(2π/N)(k+mN)n} = e^{j(2π/N)kn} e^{j2πmn} = e^{j(2π/N)kn}

• Due to the periodicity of the complex exponential we only need N exponentials for the discrete-time Fourier series:

  x~[n] = (1/N) Σ_{k=0}^{N-1} X~[k] e^{j(2π/N)kn}
169
Discrete Fourier Series Pair

• A periodic sequence in terms of Fourier series coefficients:

  x~[n] = (1/N) Σ_{k=0}^{N-1} X~[k] e^{j(2π/N)kn}

• The Fourier series coefficients can be obtained via

  X~[k] = Σ_{n=0}^{N-1} x~[n] e^{-j(2π/N)kn}

• For convenience we define W_N = e^{-j(2π/N)}

• Analysis equation:

  X~[k] = Σ_{n=0}^{N-1} x~[n] W_N^{kn}

• Synthesis equation:

  x~[n] = (1/N) Σ_{k=0}^{N-1} X~[k] W_N^{-kn}
170
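A numerical sketch of the DFS pair above (numpy assumed; the test signal is an arbitrary choice): analysis followed by synthesis recovers one period exactly, and the coefficients match numpy's DFT of that period.

```python
import numpy as np

N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5            # one period of x~[n]

W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # matrix of W_N^{kn}
X = W @ x                                      # analysis:  X~[k]
x_rec = (np.conj(W) @ X) / N                   # synthesis: x~[n]

print(np.allclose(x_rec.real, x))              # True
print(np.allclose(X, np.fft.fft(x)))           # True: same convention as numpy's DFT
```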
Fourier series for discrete-time periodic
signals

171
Discrete-time Fourier series
(DTFS)

172
Fourier representation of aperiodic
signals

173
Discrete-time Fourier transform
(DTFT)

174
Discrete Fourier Transform

175
Discrete Fourier Transform

176
Discrete Fourier Transform

177
Discrete Fourier Transform

178
Discrete Fourier Transform

179
Summary of properties

180
DFT Pair & Properties

• The Discrete Fourier Transform pair:

  X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn}

  x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)kn}
181
Circular convolution

182
Modulo Indices and Periodic
Repetition

183
Overlap During Periodic
Repetition

184
Periodic repetition: N=4

185
Periodic repetition: N=4

186
Modulo Indices and the Periodic
Repetition

187
Modulo Indices and the Periodic
Repetition

188
Modulo Indices and the Periodic
Repetition

189
Modulo Indices and the Periodic
Repetition

190
Circular convolution

191
Circular convolution

192
Circular convolution-another
interpretation

193
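The modulo-index interpretation of circular convolution in the preceding slides can be sketched directly and checked against the DFT multiplication property X3[k] = X1[k]·X2[k] (numpy assumed; the two length-4 sequences are arbitrary examples):

```python
import numpy as np

def circ_conv(x1, x2):
    """N-point circular convolution by direct modulo indexing."""
    N = len(x1)
    return np.array([sum(x1[m] * x2[(n - m) % N] for m in range(N))
                     for n in range(N)])

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0])
direct = circ_conv(x1, x2)
via_dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real

print(direct)                          # [4. 6. 4. 6.]
print(np.allclose(direct, via_dft))    # True
```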
Using DFT for Linear
Convolution

194
Using DFT for Linear Convolution

195
Using DFT for Linear Convolution

196
Using DFT for Linear
Convolution

197
Using DFT for Linear
Convolution

198
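The idea of the preceding slides, computing a linear convolution with the DFT, requires zero-padding both sequences to N ≥ L + P - 1 so the circular convolution has no time aliasing. A minimal sketch with arbitrary example sequences:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])        # length L = 3
h = np.array([1.0, 1.0])             # length P = 2
N = len(x) + len(h) - 1              # N = L + P - 1 = 4

# Zero-pad to N points, multiply DFTs, invert
y = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(np.allclose(y, np.convolve(x, h)))   # True
```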
Using DFT for Circular Convolution

199
Using DFT for Circular Convolution

200
Using DFT for Circular Convolution

201
Filtering of Long Data Sequences

202
Filtering of Long Data Sequences

203
Over-lap Add

204
Over-lap Add

205
Over-lap Add

206
Over-lap Add

207
Over-lap Add

208
Over-lap Add

209
Over-lap Add

210
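A minimal overlap-add sketch consistent with the slides above (block length L, FFT size N = L + P - 1; the signal, filter, and default L are illustrative choices): each input segment is filtered via the DFT and the overlapping tails are added.

```python
import numpy as np

def overlap_add(x, h, L=8):
    P = len(h)
    N = L + P - 1                       # FFT size: no time aliasing per block
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + P - 1)
    for start in range(0, len(x), L):
        seg = x[start:start + L]        # last segment may be shorter
        yb = np.fft.ifft(np.fft.fft(seg, N) * H).real
        # each block contributes len(seg)+P-1 samples, overlapping the next block
        y[start:start + len(seg) + P - 1] += yb[:len(seg) + P - 1]
    return y

x = np.arange(20, dtype=float)
h = np.array([1.0, -1.0, 0.5])
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True
```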
Over-lap save

211
Over-lap save

212
Over-lap save

213
Over-lap save input segment stage

214
Over-lap save input segment stage

215
Over-lap save input segment stage

216
Over-lap save filtering stage

217
Over-lap save output blocks

218
Over-lap save output blocks

219
Over-lap save output blocks

220
Over-lap save

221
Over-lap save

222
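A minimal overlap-save sketch consistent with the slides above (FFT size N, segments overlapping by P - 1 samples; the signal, filter, and default N are illustrative choices): the first P - 1 outputs of each circular convolution are time-aliased and are discarded.

```python
import numpy as np

def overlap_save(x, h, N=16):
    P = len(h)
    step = N - (P - 1)                            # new samples per segment
    H = np.fft.fft(h, N)
    xp = np.concatenate([np.zeros(P - 1), x])     # prepend P-1 zeros
    out = []
    for start in range(0, len(x), step):
        seg = xp[start:start + N]
        if len(seg) < N:                          # zero-pad the final segment
            seg = np.concatenate([seg, np.zeros(N - len(seg))])
        yb = np.fft.ifft(np.fft.fft(seg) * H).real
        out.append(yb[P - 1:])                    # discard the aliased samples
    return np.concatenate(out)[:len(x) + P - 1]

x = np.arange(20, dtype=float)
h = np.array([1.0, -1.0, 0.5])
print(np.allclose(overlap_save(x, h), np.convolve(x, h)))   # True
```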
Relationships between CTFT,
DTFT, & DFT

223
Relationships between CTFT,
DTFT, & DFT

224
Relationships between CTFT, DTFT,
& DFT

225
Fast Fourier Transform

226
Discrete Fourier Transform

• The DFT pair was given as

  X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn}
  x[n] = (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)kn}

• Baseline for computational complexity:
  – Each DFT coefficient requires
    • N complex multiplications
    • N-1 complex additions
  – All N DFT coefficients require
    • N^2 complex multiplications
    • N(N-1) complex additions

• Complexity in terms of real operations:
  • 4N^2 real multiplications
  • 2N(N-1) real additions

• Most fast methods are based on symmetry properties:
  – Conjugate symmetry: e^{-j(2π/N)k(N-n)} = e^{-j(2π/N)kN} e^{-j(2π/N)k(-n)} = e^{j(2π/N)kn}
  – Periodicity in n and k: e^{-j(2π/N)kn} = e^{-j(2π/N)k(n+N)} = e^{-j(2π/N)(k+N)n}
227
Direct computation of DFT

228
Direct computation of DFT

229
230
FFT

231
Decimation-In-Time FFT Algorithms

• Makes use of both symmetry and periodicity
• Consider the special case of N an integer power of 2
• Separate x[n] into two sequences of length N/2
  – Even-indexed samples in the first sequence
  – Odd-indexed samples in the other sequence

  X[k] = Σ_{n=0}^{N-1} x[n] e^{-j(2π/N)kn}
       = Σ_{n even} x[n] e^{-j(2π/N)kn} + Σ_{n odd} x[n] e^{-j(2π/N)kn}

• Substitute n = 2r for n even and n = 2r+1 for n odd:

  X[k] = Σ_{r=0}^{N/2-1} x[2r] W_N^{2rk} + Σ_{r=0}^{N/2-1} x[2r+1] W_N^{(2r+1)k}
       = Σ_{r=0}^{N/2-1} x[2r] W_{N/2}^{rk} + W_N^k Σ_{r=0}^{N/2-1} x[2r+1] W_{N/2}^{rk}
       = G[k] + W_N^k H[k]

• G[k] and H[k] are the N/2-point DFTs of the two subsequences
232
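The even/odd split above maps directly onto a compact recursive radix-2 decimation-in-time FFT. A sketch assuming N is a power of 2 (numpy used for the twiddle factors and for a reference check); X[k+N/2] = G[k] - W_N^k H[k] follows from W_N^{k+N/2} = -W_N^k and the N/2-periodicity of G and H:

```python
import numpy as np

def fft_dit(x):
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = fft_dit(x[0::2])     # N/2-point DFT of even-indexed samples
    H = fft_dit(x[1::2])     # N/2-point DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([G + W * H, G - W * H])

x = np.random.rand(8)
print(np.allclose(fft_dit(x), np.fft.fft(x)))   # True
```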
Decimation In Time

• 8-point DFT example using


decimation-in-time
• Two N/2-point DFTs
– 2(N/2)2 complex multiplications
– 2(N/2)2 complex additions
• Combining the DFT outputs
– N complex multiplications
– N complex additions
• Total complexity
– N2/2+N complex multiplications
– N2/2+N complex additions
– More efficient than direct DFT
• Repeat same process
– Divide N/2-point DFTs into
– Two N/4-point DFTs
– Combine outputs

233
Decimation In Time Cont’d
• After two steps of decimation in time

• Repeat until we’re left with two-point DFT’s

234
Decimation-In-Time FFT Algorithm
• Final flow graph for 8-point decimation in time

• Complexity:
– Nlog2N complex multiplications and additions

235
Butterfly Computation
• Flow graph constitutes of butterflies

• We can implement each butterfly with one multiplication

• Final complexity for decimation-in-time FFT


– (N/2)log2N complex multiplications and additions

236
In-Place Computation
• Decimation-in-time flow graphs require two sets of registers
– Input and output for each stage
• Note the arrangement of the input indices
– Bit reversed indexing

X0 [0] = x[0] ↔ X0 [000] = x[000]


X0 [1] = x[4] ↔ X0 [001] = x[100]
X0 [2] = x[2] ↔ X0 [010] = x[010]
X0 [3] = x[6] ↔ X0 [011] = x[110]
X0 [4] = x[1] ↔ X0 [100] = x[001]
X0 [5] = x[5] ↔ X0 [101] = x[101]
X0 [6] = x[3] ↔ X0 [110] = x[011]
X0 [7] = x[7] ↔ X0 [111] = x[111]

237
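The bit-reversed input ordering in the table above can be generated mechanically: index n, written with log2(N) bits, maps to the index with those bits reversed. A small sketch:

```python
def bit_reverse(n, bits):
    """Reverse the low `bits` bits of n."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (n & 1)
        n >>= 1
    return r

# For N = 8 (3 bits) this reproduces the input ordering in the table
order = [bit_reverse(n, 3) for n in range(8)]
print(order)   # [0, 4, 2, 6, 1, 5, 3, 7]
```

Note that bit reversal is its own inverse, which is why the same permutation converts between natural and bit-reversed order in both directions.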
Decimation-In-Frequency FFT Algorithm

• The DFT equation

  X[k] = Σ_{n=0}^{N-1} x[n] W_N^{nk}

• Split the DFT equation into even and odd frequency indexes:

  X[2r] = Σ_{n=0}^{N-1} x[n] W_N^{2rn}
        = Σ_{n=0}^{N/2-1} x[n] W_N^{2rn} + Σ_{n=N/2}^{N-1} x[n] W_N^{2rn}

• Substitute variables to get

  X[2r] = Σ_{n=0}^{N/2-1} x[n] W_N^{2rn} + Σ_{n=0}^{N/2-1} x[n+N/2] W_N^{2r(n+N/2)}
        = Σ_{n=0}^{N/2-1} (x[n] + x[n+N/2]) W_{N/2}^{rn}

• Similarly for the odd-numbered frequencies:

  X[2r+1] = Σ_{n=0}^{N/2-1} (x[n] - x[n+N/2]) W_N^n W_{N/2}^{rn}
238
Decimation-In-Frequency FFT Algorithm
• Final flow graph for 8-point decimation in frequency

239
UNIT-3
IIR filters

240
Filter Design Techniques
• Any discrete-time system that modifies certain frequencies
• Frequency-selective filters pass only certain frequencies
• Filter Design Steps
– Specification
• Problem or application specific
– Approximation of specification with a discrete-time system
• Our focus is to go from spec to discrete-time system
– Implementation
• Realization of discrete-time systems depends on target technology
• We already studied the use of discrete-time systems to implement a
continuous-time system
  – If our specifications are given in continuous time we can use

    xc(t) → C/D → x[n] → H(e^jω) → y[n] → D/C → yr(t)

    H(e^jω) = Hc(jω/T),   |ω| < π
241
Digital Filter Specifications

• Only the magnitude approximation problem is considered
• Four basic types of ideal filters with piecewise-flat magnitude responses: lowpass H_LP(e^jω) and highpass H_HP(e^jω) (band edge ωc), bandpass H_BP(e^jω) and bandstop H_BS(e^jω) (band edges ωc1, ωc2)
242
Digital Filter Specifications

• These filters are unrealisable because (either one of the following is sufficient):
  – their impulse responses are infinitely long and non-causal
  – their amplitude responses cannot be equal to a constant over a band of frequencies
Another perspective that provides some understanding can be obtained by looking at the ideal amplitude squared.
243
Digital Filter Specifications

• The realisable squared amplitude response transfer function (and its differential) is continuous in ω
• Such functions
  – if IIR, can be infinite at a point but around that point cannot be zero
  – if FIR, cannot be infinite anywhere
• Hence the previous differential of the ideal response is unrealisable

244
Digital Filter Specifications
• For example the magnitude response of a digital
lowpass filter may be given as indicated below

245
Digital Filter Specifications

• In the passband 0 ≤ ω ≤ ωp we require |G(e^jω)| ≅ 1 with a deviation ±δp:

  1 - δp ≤ |G(e^jω)| ≤ 1 + δp,   |ω| ≤ ωp

• In the stopband ωs ≤ ω ≤ π we require |G(e^jω)| ≅ 0 with a deviation δs:

  |G(e^jω)| ≤ δs,   ωs ≤ |ω| ≤ π
246
Digital Filter Specifications
Filter specification parameters
• ωp - passband edge frequency
• ωs - stopband edge frequency
• δp - peak ripple value in the passband
• δs - peak ripple value in the stopband

247
Digital Filter Specifications

• Practical specifications are often given in terms of a loss function (in dB):

  G(ω) = -20 log10 |G(e^jω)|

• Peak passband ripple:

  αp = -20 log10(1 - δp) dB

• Minimum stopband attenuation:

  αs = -20 log10(δs) dB
248
Digital Filter Specifications

• In practice, the passband edge frequency Fp and the stopband edge frequency Fs are specified in Hz
• For digital filter design, normalized band-edge frequencies need to be computed from specifications in Hz using the sampling rate FT:

  ωp = Ωp/FT = 2πFp/FT = 2πFpT
  ωs = Ωs/FT = 2πFs/FT = 2πFsT

249
Digital Filter Specifications

• Example – Let Fp = 7 kHz, Fs = 3 kHz, and FT = 25 kHz
• Then

  ωp = 2π(7 × 10^3)/(25 × 10^3) = 0.56π
  ωs = 2π(3 × 10^3)/(25 × 10^3) = 0.24π

250
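The normalization above is a one-line computation. A minimal sketch (the function name is an illustrative choice), reproducing the example's numbers:

```python
from math import pi

def normalized_edge(F_hz, FT_hz):
    # omega = Omega / FT = 2*pi*F / FT  (radians per sample)
    return 2 * pi * F_hz / FT_hz

wp = normalized_edge(7e3, 25e3)
ws = normalized_edge(3e3, 25e3)
print(round(wp / pi, 2), round(ws / pi, 2))   # 0.56 0.24
```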
IIR Digital Filter Design

Standard approach
(1) Convert the digital filter specifications into
an analogue prototype lowpass filter
specifications
(2) Determine the analogue lowpass filter
transfer function H a (s )
(3) Transform H a (s ) by replacing the complex
variable to the digital transfer function
G (z )
251
IIR Digital Filter Design
• This approach has been widely used for the
following reasons:
(1) Analogue approximation techniques are
highly advanced
(2) They usually yield closed-form
solutions
(3) Extensive tables are available for
analogue filter design
(4) Very often applications require digital
simulation of analogue systems
252
IIR Digital Filter Design

• Let an analogue transfer function be


Pa ( s )
H a (s) =
Da ( s )
where the subscript “a” indicates the
analogue domain
• A digital transfer function derived from this
is denoted as
P( z )
G( z) =
D( z )
253
IIR Digital Filter Design
• Basic idea behind the conversion of H a (s ) intoG (z )
is to apply a mapping from the s-domain to the z-
domain so that essential properties of the analogue
frequency response are preserved
• Thus mapping function should be such that
– Imaginary (jΩ ) axis in the s-plane be
mapped onto the unit circle of the z-plane
– A stable analogue transfer function be mapped
into a stable digital transfer function

254
Specification for the effective frequency response of a continuous-time lowpass filter and the corresponding specifications for the discrete-time system.

  δp or δ1    passband ripple
  δs or δ2    stopband ripple
  Ωp, ωp      passband edge frequency
  Ωs, ωs      stopband edge frequency
  ε^2         passband ripple parameter:  1 - δp = 1/√(1 + ε^2)
  BW          bandwidth = ωu - ωl
  ωc          3-dB cutoff frequency
  ωu, ωl      upper and lower 3-dB cutoff frequencies
  Δω          transition band = |ωp - ωs|
  Ap          passband ripple in dB = ±20 log10(1 ± δp)
  As          stopband attenuation in dB = -20 log10(δs)

255
Design of Discrete-Time IIR Filters

• From Analog (Continuous-Time) Filters


– Approximation of Derivatives
– Impulse Invariance
– the Bilinear Transformation

256
Reasons of Design of Discrete-Time IIR Filters from
Continuous-Time Filters

• The art of continuous-time IIR filter design is highly advanced and,


since useful results can be achieved, it is advantageous to use the
design procedures already developed for continuous-time filters.

• Many useful continuous-time IIR design methods have relatively


simple closed-form design formulas. Therefore, discrete-time IIR
filter design methods based on such standard continuous-time design
formulas are rather simple to carry out.

• The standard approximation methods that work well for continuous-


time IIR filters do not lead to simple closed-form design formulas
when these methods are applied directly to the discrete-time IIR case.

257
Characteristics of Commonly Used Analog Filters

• Butterworth Filter
• Chebyshev Filter
– Chebyshev Type I
– Chebyshev Type II of Inverse Chebyshev Filter

258
Butterworth Filter

• Lowpass Butterworth filters are all-pole filters characterized by the magnitude-squared frequency response

  |H(Ω)|^2 = 1/[1 + (Ω/Ωc)^2N] = 1/[1 + ε^2 (Ω/Ωp)^2N]

  where N is the order of the filter, Ωc is its -3 dB (cutoff) frequency, Ωp is the passband edge frequency, and 1/(1 + ε^2) is the band-edge value of |H(Ω)|^2.

• At Ω = Ωs (where Ωs is the stopband edge frequency) we have

  1/[1 + ε^2 (Ωs/Ωp)^2N] = δ2^2

  and

  N = (1/2) log10[(1/δ2^2) - 1] / log10(Ωs/Ωc) = log10(δ/ε) / log10(Ωs/Ωp)

  where δ = √(1/δ2^2 - 1).

• Thus the Butterworth filter is completely characterized by the parameters N, δ2, ε, and the ratio Ωs/Ωp.

259
Butterworth Lowpass Filters

• The passband is designed to be maximally flat
• The magnitude-squared function is of the form

  |Hc(jΩ)|^2 = 1/[1 + (jΩ/jΩc)^2N]
  Hc(s)Hc(-s) = 1/[1 + (s/jΩc)^2N]

• The 2N poles of Hc(s)Hc(-s) are

  s_k = (-1)^{1/2N} (jΩc) = Ωc e^{(jπ/2N)(2k+N-1)}   for k = 0, 1, ..., 2N-1

260
Frequency response of lowpass Butterworth filters

261
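A numerical sketch of the pole formula above (numpy assumed; N and Ωc are arbitrary example values): compute all 2N poles of Hc(s)Hc(-s), keep the N left-half-plane poles for a stable Hc(s), and check the -3 dB property |Hc(jΩc)|² = 1/2.

```python
import numpy as np

N, Wc = 4, 1.0
k = np.arange(2 * N)
sk = Wc * np.exp(1j * np.pi * (2 * k + N - 1) / (2 * N))   # all 2N poles
lhp = sk[sk.real < 0]                                      # stable (LHP) poles

def H(s):
    # All-pole Hc(s) normalized to unity DC gain: prod(-p) / prod(s - p)
    return np.prod(-lhp) / np.prod(s - lhp)

print(len(lhp))                            # 4 poles kept
print(round(abs(H(1j * Wc)) ** 2, 6))      # 0.5, i.e. -3 dB at the cutoff
```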
Chebyshev Filters

• The magnitude-squared response of the analog lowpass Type I Chebyshev filter of Nth order is given by:

  |H(Ω)|^2 = 1/[1 + ε^2 T_N^2(Ω/Ωp)]

  where T_N(Ω) is the Chebyshev polynomial of order N:

  T_N(Ω) = cos(N cos^-1 Ω),   |Ω| ≤ 1
         = cosh(N cosh^-1 Ω), |Ω| > 1

  The polynomial can be derived via the recurrence relation

  T_r(Ω) = 2Ω T_{r-1}(Ω) - T_{r-2}(Ω),   r ≥ 2,   with T_0(Ω) = 1 and T_1(Ω) = Ω.

• The magnitude-squared response of the analog lowpass Type II or inverse Chebyshev filter of Nth order is given by:

  |H(Ω)|^2 = 1/[1 + ε^2 {T_N(Ωs/Ωp)/T_N(Ωs/Ω)}^2]

262
Chebyshev Filters
• Equiripple in the passband and monotonic in the stopband
• Or equiripple in the stopband and monotonic in the passband
|Hc(jΩ)|² = 1/[1 + ε²·VN²(Ω/Ωc)]

VN(x) = cos(N·cos⁻¹x)

263
Frequency response of
lowpass Type I Chebyshev filter

|H(Ω)|² = 1/[1 + ε²·TN²(Ω/Ωp)]

Frequency response of
lowpass Type II Chebyshev filter

|H(Ω)|² = 1/[1 + ε²·{TN²(Ωs/Ωp)/TN²(Ωs/Ω)}]

264
N = log10[(√(1 − δ2²) + √(1 − δ2²(1 + ε²)))/(ε·δ2)]/log10[(Ωs/Ωp) + √((Ωs/Ωp)² − 1)]
  = [cosh⁻¹(δ/ε)]/[cosh⁻¹(Ωs/Ωp)]

for both Type I and II Chebyshev filters, and where

δ2 = 1/√(1 + δ²).

• The poles of a Type I Chebyshev filter lie on an ellipse in the s-plane with major
axis r1 = Ωp[(β² + 1)/2β] and minor axis r2 = Ωp[(β² − 1)/2β], where β is related to
ε according to
β = {[√(1 + ε²) + 1]/ε}^(1/N)
• The zeros of a Type II Chebyshev filter are located on the imaginary axis.

265
Type I: pole positions are
xk = r2·cosφk
yk = r1·sinφk
φk = [π/2] + [(2k + 1)π/2N]
r1 = Ωp[β² + 1]/2β
r2 = Ωp[β² − 1]/2β
β = {[√(1 + ε²) + 1]/ε}^(1/N)

Type II: zero positions are
sk = jΩs/sinφk
and pole positions are
vk = Ωs·xk/√(xk² + yk²)
wk = Ωs·yk/√(xk² + yk²)
β = {[1 + √(1 − δ2²)]/δ2}^(1/N)

Determination of the pole locations for a Chebyshev filter, k = 0, 1, ..., N − 1.  266
Approximation of Derivative Method

• Approximation of derivative method is the simplest one for converting an


analog filter into a digital filter by approximating the differential equation by
an equivalent difference equation.
– For the derivative dy(t)/dt at time t = nT, we substitute the backward difference
[y(nT) − y(nT − T)]/T. Thus

dy(t)/dt|t=nT = [y(nT) − y(nT − T)]/T ≅ [y[n] − y[n − 1]]/T

where T represents the sampling period. Then, s = (1 − z⁻¹)/T

– The second derivative d²y(t)/dt² is converted into a second difference as follows:

d²y(t)/dt²|t=nT ≅ [y[n] − 2y[n − 1] + y[n − 2]]/T²

which gives s² = [(1 − z⁻¹)/T]². So, for the kth derivative of y(t), s^k = [(1 − z⁻¹)/T]^k.

267
Approximation of Derivative Method
• Hence, the system function for the digital IIR filter obtained as a result of the
approximation of the derivatives by finite difference is
H(z) = Ha(s)|s = (1 − z⁻¹)/T

• Points in the LHP of the s-plane are mapped into points inside the unit circle in the
z-plane: the jΩ axis maps onto the circle |z − 1/2| = 1/2, and the LHP maps into the
interior of that circle, which lies entirely within the unit circle.
– Consequently, a stable analog filter is transformed into a stable digital filter due
to this mapping property.

jW
Unit circle

s-plane
z-plane
268
Example: Approximation of derivative method

Convert the analog bandpass filter with system function


Ha(s) = 1/[(s + 0.1)2 + 9]
Into a digital IIR filter by use of the backward difference for the derivative.

Substituting s = (1 − z⁻¹)/T into Ha(s) yields

H(z) = 1/{[((1 − z⁻¹)/T) + 0.1]² + 9}

H(z) = [T²/(1 + 0.2T + 9.01T²)] / {1 − [2(1 + 0.1T)/(1 + 0.2T + 9.01T²)]·z⁻¹ + [1/(1 + 0.2T + 9.01T²)]·z⁻²}

T can be selected to satisfy the specification of the designed filter. For example, if T = 0.1,
the poles are located at
p1,2 = 0.91 ± j0.27 = 0.949·exp[±j16.5°]

269
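The algebra in the example above is easy to mis-transcribe, so a quick numerical check is useful. A sketch in pure Python (variable names illustrative) that substitutes s = (1 − z⁻¹)/T into Ha(s) = 1/[(s + 0.1)² + 9] for T = 0.1 and locates the resulting poles:

```python
import cmath

T = 0.1
# Common denominator constant from the worked example: 1 + 0.2T + 9.01T^2
c = 1 + 0.2 * T + 9.01 * T ** 2

# H(z) = (T^2/c) / (1 + a1 z^-1 + a2 z^-2)
b0 = T ** 2 / c
a1 = -2 * (1 + 0.1 * T) / c
a2 = 1 / c

# Poles are the roots of z^2 + a1 z + a2
disc = cmath.sqrt(a1 ** 2 - 4 * a2)
p1 = (-a1 + disc) / 2
p2 = (-a1 - disc) / 2

# magnitude ~ 0.949, angle ~ 0.289 rad (~16.5 degrees), as quoted on the slide
print(p1, abs(p1), cmath.phase(p1))
```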
Filter Design by Impulse Invariance
• Remember impulse invariance
– Mapping a continuous-time impulse response to discrete-time
– Mapping a continuous-time frequency response to discrete-time
h[n] = Td·hc(nTd)

H(e^jω) = Σ_{k=−∞}^{∞} Hc(j·(ω/Td) + j·(2πk/Td))

• If the continuous-time filter is bandlimited to
Hc(jΩ) = 0,  |Ω| ≥ π/Td
then
H(e^jω) = Hc(jω/Td),  |ω| ≤ π
• If we start from discrete-time specifications Td cancels out
– Start with discrete-time spec in terms of ω
– Go to continuous-time Ω= ω /T and design continuous-time filter
– Use impulse invariance to map it back to discrete-time ω= ΩT
• Works best for bandlimited filters due to possible aliasing

270
Impulse Invariance of System Functions
• Develop impulse invariance relation between system functions
• Partial fraction expansion of transfer function
Hc(s) = Σ_{k=1}^{N} Ak/(s − sk)

• Corresponding impulse response

hc(t) = Σ_{k=1}^{N} Ak·e^(sk·t), t ≥ 0;  hc(t) = 0, t < 0

• Impulse response of discrete-time filter

h[n] = Td·hc(nTd) = Σ_{k=1}^{N} Td·Ak·e^(sk·nTd)·u[n] = Σ_{k=1}^{N} Td·Ak·(e^(sk·Td))ⁿ·u[n]

• System function

H(z) = Σ_{k=1}^{N} Td·Ak/(1 − e^(sk·Td)·z⁻¹)

• A pole at s = sk in the s-domain transforms into a pole at z = e^(sk·Td)

271
Impulse Invariant Algorithm

• Step 1: define specifications of filter


– Ripple in frequency bands
– Critical frequencies: passband edge, stopband edge, and/or cutoff frequencies.
– Filter band type: lowpass, highpass, bandpass, bandstop.
• Step 2: linearly transform the critical frequencies as follows:
Ω = ω/Td
• Step 3: select filter structure type and its order: Bessel, Butterworth, Chebyshev
type I, Chebyshev type II or inverse Chebyshev, elliptic.
• Step 4: convert Ha(s) to H(z) using linear transform in step 2.
• Step 5: verify the result. If it does not meet requirement, return to step 3.

272
Example: Impulse invariant method

Convert the analog filter with system function


Ha(s) = [s + 0.1]/[(s + 0.1)2 + 9]
into a digital IIR filter by means of the impulse invariance method.

The analog filter has a zero at s = −0.1 and a pair of complex-conjugate poles at pk = −0.1 ± j3.
Thus,

Ha(s) = (1/2)/(s + 0.1 − j3) + (1/2)/(s + 0.1 + j3)

Then

H(z) = (1/2)/(1 − e^(−0.1T)·e^(j3T)·z⁻¹) + (1/2)/(1 − e^(−0.1T)·e^(−j3T)·z⁻¹)

273
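The two complex-conjugate terms of H(z) above combine into one real second-order section. A sketch in pure Python (T is a free design choice here; names illustrative):

```python
import math

T = 1.0                      # sampling period (free parameter in this example)
r = math.exp(-0.1 * T)       # digital pole radius e^{-0.1T}
th = 3.0 * T                 # digital pole angle 3T

# H(z) = (1/2)/(1 - r e^{j th} z^-1) + (1/2)/(1 - r e^{-j th} z^-1)
#      = (1 - r cos(th) z^-1) / (1 - 2 r cos(th) z^-1 + r^2 z^-2)
b = [1.0, -r * math.cos(th)]
a = [1.0, -2.0 * r * math.cos(th), r * r]

# The digital poles sit at e^{s_k T} for the analog poles s_k = -0.1 +/- j3
print(b, a)
```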
Frequency response
of digital filter.

Frequency response
of analog filter.

274
Disadvantage of previous
techniques: frequency
warping  aliasing effect
and error in specifications
of designed filter (frequencies)
So, prewarping of frequency
is considered.

275
Example
• Impulse invariance applied to Butterworth
0.89125 ≤ |H(e^jω)| ≤ 1,  0 ≤ |ω| ≤ 0.2π
|H(e^jω)| ≤ 0.17783,  0.3π ≤ |ω| ≤ π
• Since sampling rate Td cancels out we can assume Td=1
• Map spec to continuous time

0.89125 ≤ |H(jΩ)| ≤ 1,  0 ≤ |Ω| ≤ 0.2π
|H(jΩ)| ≤ 0.17783,  0.3π ≤ |Ω| ≤ π
• Butterworth filter is monotonic so spec will be satisfied if
|Hc(j0.2π)| ≥ 0.89125 and |Hc(j0.3π)| ≤ 0.17783

|Hc(jΩ)|² = 1/[1 + (jΩ/jΩc)^2N]

• Determine N and Ωc to satisfy these conditions

276
Example Cont’d
• Satisfy both constraints

1 + (0.2π/Ωc)^2N = (1/0.89125)²  and  1 + (0.3π/Ωc)^2N = (1/0.17783)²

• Solve these equations to get
N = 5.8858 ≅ 6  and  Ωc = 0.70474
• N must be an integer so we round it up to meet the spec
• Poles of transfer function
sk = (−1)^(1/12)·(jΩc) = Ωc·e^(jπ/12)(2k+11)  for k = 0, 1, ..., 11
• The transfer function
H(s) = 0.12093/[(s² + 0.364s + 0.4945)(s² + 0.9945s + 0.4945)(s² + 1.3585s + 0.4945)]
• Mapping to z-domain

H(z) = (0.2871 − 0.4466z⁻¹)/(1 − 1.2971z⁻¹ + 0.6949z⁻²)
     + (−2.1428 + 1.1455z⁻¹)/(1 − 1.0691z⁻¹ + 0.3699z⁻²)
     + (1.8557 − 0.6303z⁻¹)/(1 − 0.9972z⁻¹ + 0.257z⁻²)
277
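The quoted values N = 5.8858 and Ωc = 0.70474 can be reproduced directly from the two constraint equations. A sketch in pure Python (here Ωc is taken from the passband constraint using the exact, unrounded order, which is one reading of how the quoted 0.70474 arises):

```python
import math

dp, ds = 0.89125, 0.17783            # passband / stopband magnitude bounds
wp, ws = 0.2 * math.pi, 0.3 * math.pi

ep2 = 1.0 / dp ** 2 - 1.0            # (1/dp)^2 - 1
es2 = 1.0 / ds ** 2 - 1.0            # (1/ds)^2 - 1

# Exact order from the ratio of the two edge constraints
N_exact = math.log10(es2 / ep2) / (2.0 * math.log10(ws / wp))
N = math.ceil(N_exact)               # must be an integer, so round up

# Cutoff from the passband constraint with the exact order
Wc = wp / ep2 ** (1.0 / (2.0 * N_exact))

print(N_exact, N, Wc)
```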
Example Cont’d

278
Filter Design by Bilinear Transformation
• Get around the aliasing problem of impulse invariance
• Map the entire s-plane onto the unit-circle in the z-plane
– Nonlinear transformation
– Frequency response subject to warping
• Bilinear transformation
s = (2/Td)·[(1 − z⁻¹)/(1 + z⁻¹)]

• Transformed system function

H(z) = Hc((2/Td)·[(1 − z⁻¹)/(1 + z⁻¹)])

• Again Td cancels out so we can ignore it
• We can solve the transformation for z as

z = [1 + (Td/2)s]/[1 − (Td/2)s] = [1 + σTd/2 + jΩTd/2]/[1 − σTd/2 − jΩTd/2],  s = σ + jΩ
• Maps the left-half s-plane into the inside of the unit-circle in z
– Stable in one domain would stay in the other

279
Bilinear Transformation
• On the unit circle the transform becomes
z = (1 + jΩTd/2)/(1 − jΩTd/2) = e^jω

• To derive the relation between ω and Ω

s = (2/Td)·[(1 − e^−jω)/(1 + e^−jω)] = (2/Td)·[2e^(−jω/2)·j·sin(ω/2)]/[2e^(−jω/2)·cos(ω/2)] = (2j/Td)·tan(ω/2) = σ + jΩ

• Which yields

Ω = (2/Td)·tan(ω/2)  or  ω = 2·arctan(ΩTd/2)

280
Bilinear Transformation

281
Example
• Bilinear transform applied to Butterworth
0.89125 ≤ |H(e^jω)| ≤ 1,  0 ≤ |ω| ≤ 0.2π
|H(e^jω)| ≤ 0.17783,  0.3π ≤ |ω| ≤ π
• Apply bilinear transformation to specifications
0.89125 ≤ |H(jΩ)| ≤ 1,  0 ≤ Ω ≤ (2/Td)·tan(0.1π)
|H(jΩ)| ≤ 0.17783,  (2/Td)·tan(0.15π) ≤ Ω < ∞
• We can assume Td=1 and apply the specifications to

|Hc(jΩ)|² = 1/[1 + (Ω/Ωc)^2N]

• To get

1 + (2·tan(0.1π)/Ωc)^2N = (1/0.89125)²  and  1 + (2·tan(0.15π)/Ωc)^2N = (1/0.17783)²

282
Example Cont'd
• Solve for N and Ωc

N = log10{[(1/0.17783)² − 1]/[(1/0.89125)² − 1]}/{2·log10[tan(0.15π)/tan(0.1π)]} = 5.305 ≅ 6

Ωc = 0.766

• The resulting transfer function has the following poles
sk = (−1)^(1/12)·(jΩc) = Ωc·e^(jπ/12)(2k+11)  for k = 0, 1, ..., 11

• Resulting in

Hc(s) = 0.20238/[(s² + 0.3996s + 0.5871)(s² + 1.0836s + 0.5871)(s² + 1.4802s + 0.5871)]

• Applying the bilinear transform yields

H(z) = 0.0007378·(1 + z⁻¹)⁶/[(1 − 1.2686z⁻¹ + 0.7051z⁻²)(1 − 1.0106z⁻¹ + 0.3583z⁻²)(1 − 0.9044z⁻¹ + 0.2155z⁻²)]
283
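With N = 6 and Ωc = 0.766 chosen this way, the discrete-time specs can be spot-checked by mapping each band edge through Ω = 2·tan(ω/2) (Td = 1) and evaluating the analog Butterworth magnitude. A sketch:

```python
import math

N, Wc = 6, 0.766

def H_mag(w):
    """|H(e^{jw})| of the bilinear-transformed Butterworth prototype:
    evaluate the analog magnitude at the prewarped frequency 2 tan(w/2)."""
    W = 2.0 * math.tan(w / 2.0)
    return 1.0 / math.sqrt(1.0 + (W / Wc) ** (2 * N))

pb = H_mag(0.2 * math.pi)   # passband edge: should be >= 0.89125
sb = H_mag(0.3 * math.pi)   # stopband edge: should be <= 0.17783
print(pb, sb)
```

Because Ωc was picked to meet the stopband constraint exactly, the stopband edge sits right at its bound while the passband spec is exceeded.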
Example Cont’d

284
IIR Digital Filter: The bilinear
transformation
• To obtain G(z) replace s by f(z) in H(s)
• Start with requirements on G(z)
G(z)                       Available H(s)
Stable                     Stable
Real and rational in z     Real and rational in s
Order n                    Order n
L.P. (lowpass) cutoff ωc   L.P. cutoff Ωc
285
Bilinear Transformation
• Mapping of s-plane into the z-plane

286
Bilinear Transformation

• For z = e^jω, with unity scale factor (2/Td = 1), we have

jΩ = (1 − e^−jω)/(1 + e^−jω) = j·tan(ω/2)

or Ω = tan(ω/2)

287
Bilinear Transformation
• Mapping is highly nonlinear
• Complete negative imaginary axis in the s-
plane from Ω = −∞ to Ω = 0 is mapped into
the lower half of the unit circle in the z-plane
from z = −1 to z = 1
• Complete positive imaginary axis in the s-
plane from Ω = 0 to Ω = ∞ is mapped into the
upper half of the unit circle in the z-plane
from z = 1 to z = −1
288
Bilinear Transformation
• Nonlinear mapping introduces a distortion
in the frequency axis called frequency
warping
• Effect of warping shown below

289
Spectral Transformations
• To transform GL (z ) a given lowpass transfer
function to another transfer function GD (zˆ )
that may be a lowpass, highpass, bandpass or
bandstop filter (solutions given by
Constantinides)
• z⁻¹ has been used to denote the unit delay
in the prototype lowpass filter GL(z) and ẑ⁻¹
to denote the unit delay in the transformed
filter GD(ẑ), to avoid confusion
290
Spectral Transformations
• Unit circles in z- and ẑ -planes defined by
z = e jω , zˆ = e jω̂
• Transformation from z-domain to
ẑ-domain given by
z = F(ẑ)
• Then
GD(ẑ) = GL{F(ẑ)}
291
Spectral Transformations
• From z = F(ẑ), thus |z| = |F(ẑ)|, hence

          > 1, if |z| > 1
|F(ẑ)| =  = 1, if |z| = 1
          < 1, if |z| < 1

• Therefore 1/F(ẑ) must be a stable allpass function

1/F(ẑ) = ±∏_{ℓ=1}^{L} [(1 − αℓ*·ẑ)/(ẑ − αℓ)],  |αℓ| < 1

292
Lowpass-to-Lowpass
Spectral Transformation
• To transform a lowpass filterGL (z ) with a cutoff
frequency ω c to another lowpass filter GD (zˆ )
with a cutoff frequency ω̂ c, the transformation is
z⁻¹ = 1/F(ẑ) = (1 − α·ẑ)/(ẑ − α)

• On the unit circle we have

e^−jω = (e^−jω̂ − α)/(1 − α·e^−jω̂)

which yields

tan(ω/2) = [(1 + α)/(1 − α)]·tan(ω̂/2)
293
Lowpass-to-Lowpass
Spectral Transformation
• Solving we get

α = sin((ωc − ω̂c)/2)/sin((ωc + ω̂c)/2)

• Example - Consider the lowpass digital filter

GL(z) = 0.0662(1 + z⁻¹)³/[(1 − 0.2593z⁻¹)(1 − 0.6763z⁻¹ + 0.3917z⁻²)]
which has a passband from dc to 0.25π with
a 0.5 dB ripple
• Redesign the above filter to move the
passband edge to 0.35π
294
Lowpass-to-Lowpass
Spectral Transformation
• Here

α = −sin(0.05π)/sin(0.3π) = −0.1934

• Hence, the desired lowpass transfer function is

GD(ẑ) = GL(z)|z⁻¹ = (ẑ⁻¹ + 0.1934)/(1 + 0.1934·ẑ⁻¹)

[Figure: gain responses (dB) of GL(z) and GD(z) versus ω/π]
295
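Both the value α = −0.1934 and the frequency mapping it induces can be verified numerically. A sketch in pure Python (variable names illustrative):

```python
import math

wc, wc_hat = 0.25 * math.pi, 0.35 * math.pi   # old and new passband edges

alpha = math.sin((wc - wc_hat) / 2.0) / math.sin((wc + wc_hat) / 2.0)

# The transformation warps frequencies so that
#   tan(w/2) = ((1 + alpha)/(1 - alpha)) * tan(w_hat/2);
# with this alpha the old edge wc maps exactly from the new edge wc_hat:
lhs = math.tan(wc / 2.0)
rhs = (1.0 + alpha) / (1.0 - alpha) * math.tan(wc_hat / 2.0)
print(alpha, lhs, rhs)
```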
Lowpass-to-Lowpass
Spectral Transformation
• The lowpass-to-lowpass transformation
z⁻¹ = 1/F(ẑ) = (1 − α·ẑ)/(ẑ − α)
can also be used as highpass-to-highpass,
bandpass-to-bandpass and bandstop-to-
bandstop transformations

296
Lowpass-to-Highpass
Spectral Transformation
• Desired transformation
z⁻¹ = −(ẑ⁻¹ + α)/(1 + α·ẑ⁻¹)

• The transformation parameter α is given by

α = −cos((ωc + ω̂c)/2)/cos((ωc − ω̂c)/2)
where ω c is the cutoff frequency of the lowpass
filter and ω̂ c is the cutoff frequency of the desired
highpass filter
297
Lowpass-to-Highpass
Spectral Transformation
• Example - Transform the lowpass filter

GL(z) = 0.0662(1 + z⁻¹)³/[(1 − 0.2593z⁻¹)(1 − 0.6763z⁻¹ + 0.3917z⁻²)]

• with a passband edge at 0.25π to a highpass
filter with a passband edge at 0.55π
• Here α = −cos(0.4π)/cos(0.15π) = −0.3468
• The desired transformation is

z⁻¹ = −(ẑ⁻¹ − 0.3468)/(1 − 0.3468·ẑ⁻¹)
298
Lowpass-to-Highpass
Spectral Transformation

• The desired highpass filter is

GD(ẑ) = GL(z)|z⁻¹ = −(ẑ⁻¹ − 0.3468)/(1 − 0.3468·ẑ⁻¹)

[Figure: gain response (dB) of the highpass filter versus normalized frequency]
299
Lowpass-to-Highpass
Spectral Transformation
• The lowpass-to-highpass transformation can
also be used to transform a highpass filter with
a cutoff at ω c to a lowpass filter with a cutoff
at ω̂ c
• and transform a bandpass filter with a center
frequency at ω o to a bandstop filter with a
center frequency at ω̂ o

300
Lowpass-to-Bandpass
Spectral Transformation
• Desired transformation

z⁻¹ = −[ẑ⁻² − (2αβ/(β + 1))·ẑ⁻¹ + (β − 1)/(β + 1)] / [((β − 1)/(β + 1))·ẑ⁻² − (2αβ/(β + 1))·ẑ⁻¹ + 1]

301
Lowpass-to-Bandpass
Spectral Transformation
• The parameters α and β are given by

α = cos((ω̂c2 + ω̂c1)/2)/cos((ω̂c2 − ω̂c1)/2)
β = cot((ω̂c2 − ω̂c1)/2)·tan(ωc/2)

where ωc is the cutoff frequency of the lowpass
filter, and ω̂c1 and ω̂c2 are the desired lower and
upper cutoff frequencies of the bandpass filter

302
Lowpass-to-Bandpass
Spectral Transformation
• Special Case - The transformation can be
simplified if ω c = ωˆ c 2 − ωˆ c1
• Then the transformation reduces to

z⁻¹ = −ẑ⁻¹·(ẑ⁻¹ − α)/(1 − α·ẑ⁻¹)

where α = cos ω̂o, with ω̂o denoting the
desired center frequency of the bandpass filter

303
Lowpass-to-Bandstop
Spectral Transformation
• Desired transformation

z⁻¹ = [ẑ⁻² − (2αβ/(1 + β))·ẑ⁻¹ + (1 − β)/(1 + β)] / [((1 − β)/(1 + β))·ẑ⁻² − (2αβ/(1 + β))·ẑ⁻¹ + 1]

304
Lowpass-to-Bandstop
Spectral Transformation
• The parameters α and β are given by

α = cos((ω̂c2 + ω̂c1)/2)/cos((ω̂c2 − ω̂c1)/2)
β = tan((ω̂c2 − ω̂c1)/2)·tan(ωc/2)

where ωc is the cutoff frequency of the
lowpass filter, and ω̂c1 and ω̂c2 are the desired
lower and upper cutoff frequencies of the
bandstop filter

305
UNIT-4
FIR Filters

306
Selection of Filter Type

• The transfer function H(z) meeting the


specifications must be a causal transfer
function
• For an IIR real digital filter the transfer
function is a real rational function of z⁻¹

H(z) = (p0 + p1·z⁻¹ + p2·z⁻² + ... + pM·z⁻M)/(d0 + d1·z⁻¹ + d2·z⁻² + ... + dN·z⁻N)
• H(z) must be stable and of lowest order N or
M for reduced computational complexity 307
Selection of Filter Type

• An FIR real digital filter transfer function is a
polynomial in z⁻¹ (order N) with real
coefficients

H(z) = Σ_{n=0}^{N} h[n]·z⁻ⁿ
• For reduced computational complexity, degree N
of H(z) must be as small as possible
• If a linear phase is desired then we must have:

h[n] = ± h[ N − n]
308
Selection of Filter Type
• Advantages in using an FIR filter -
(1) Can be designed with exact linear phase
(2) Filter structure always stable with quantised
coefficients
• Disadvantages in using an FIR filter - Order of an
FIR filter is considerably higher than that of an
equivalent IIR filter meeting the same
specifications; this leads to higher computational
complexity for FIR
309
FIR Filter Design
Digital filters with finite-duration impulse response (all-zero, or FIR filters)
have both advantages and disadvantages compared to infinite-duration
impulse response (IIR) filters.
FIR filters have the following primary advantages:

•They can have exactly linear phase.


•They are always stable.
•The design methods are generally linear.
•They can be realized efficiently in hardware.
•The filter startup transients have finite duration.

The primary disadvantage of FIR filters is that they often require a much
higher filter order than IIR filters to achieve a given level of performance.
Correspondingly, the delay of these filters is often much greater than for an
equal performance IIR filter.
FIR Design
FIR Digital Filter Design
Three commonly used approaches to FIR
filter design -
(1) Windowed Fourier series approach
(2) Frequency sampling approach
(3) Computer-based optimization methods

311
Finite Impulse Response Filters
• The transfer function is given by

H(z) = Σ_{n=0}^{N−1} h(n)·z⁻ⁿ

• The length of the impulse response is N
• All poles are at z = 0.
• Zeros can be placed anywhere on the z-
plane
312
FIR: Linear phase

For phase linearity the zeros of the FIR
transfer function must occur in mirror-image
pairs: a zero at z0 implies a zero at 1/z0*

313
Linear Phase
• What is linear phase?
• Ans: The phase is a straight line in the passband of
the system.
• Example: linear phase (all pass system)
• Group delay is given by the negative of the slope
of the line

314
Linear phase
• linear phase (low pass system)
• The linear characteristic need only hold over
the passband frequencies.

315
FIR: Linear phase
• For a linear phase t.f. (order N − 1)
• h(n) = ±h(N − 1 − n)
• so that for N even:

H(z) = Σ_{n=0}^{N/2−1} h(n)·z⁻ⁿ ± Σ_{n=N/2}^{N−1} h(n)·z⁻ⁿ

     = Σ_{n=0}^{N/2−1} h(n)·z⁻ⁿ ± Σ_{n=0}^{N/2−1} h(N − 1 − n)·z^−(N−1−n)

     = Σ_{n=0}^{N/2−1} h(n)·[z⁻ⁿ ± z⁻ᵐ],  m = N − 1 − n
316
FIR: Linear phase
• for N odd:

H(z) = Σ_{n=0}^{(N−1)/2 − 1} h(n)·[z⁻ⁿ ± z⁻ᵐ] + h((N − 1)/2)·z^−(N−1)/2

• I) On C: |z| = 1 we have, for N even and
+ve sign,

H(e^jωT) = e^−jωT(N−1)/2 · Σ_{n=0}^{N/2−1} 2h(n)·cos[ωT(n − (N − 1)/2)]
317
FIR: Linear phase
• II) While for −ve sign

H(e^jωT) = e^−jωT(N−1)/2 · Σ_{n=0}^{N/2−1} j2h(n)·sin[ωT(n − (N − 1)/2)]

• [Note: antisymmetric case adds π/2 rads to
phase, with discontinuity at ω = 0]
• III) For N odd with +ve sign

H(e^jωT) = e^−jωT(N−1)/2 · {h((N − 1)/2) + Σ_{n=0}^{(N−3)/2} 2h(n)·cos[ωT(n − (N − 1)/2)]}
 318
FIR: Linear phase
• IV) While with a −ve sign

H(e^jωT) = e^−jωT(N−1)/2 · Σ_{n=0}^{(N−3)/2} 2j·h(n)·sin[ωT(n − (N − 1)/2)]

• [Notice that for the antisymmetric case to have
linear phase we require

h((N − 1)/2) = 0.

The phase discontinuity is as for N even]
The phase discontinuity is as for N even]


319
FIR: Linear phase
• The cases most commonly used in filter
design are (I) and (III), for which the
amplitude characteristic can be written as a
polynomial in
cos(ωT/2)

320
Summary of Properties
H(ω) = e^jω₀ · e^−jωN/2 · F(ω) · Σ_{k=0}^{K} ak·cos(kω)

Type I II III IV
Order N even odd even odd
Symmetry symmetric symmetric anti-symmetric anti-symmetric
Period 2π 4π 2π 4π
ω0 0 0 π/2 π/2
F(ω) 1 cos(ω/2) sin(ω) sin(ω/2)
K N/2 (N-1)/2 (N-2)/2 (N-1)/2
H(0) arbitrary arbitrary 0 0
H(π) arbitrary 0 0 arbitrary
Design of FIR filters: Windows
(i) Start with ideal infinite duration {h(n)}
(ii) Truncate to finite length. (This produces
unwanted ripples increasing in height near
discontinuity.)
(iii) Modify to ĥ(n) = h(n)·w(n)
Weight w(n) is the window

322
Design of FIR filters: Windows
• Simplest way of designing FIR filters
• Method is all discrete-time no continuous-time involved
• Start with ideal frequency response
Hd(e^jω) = Σ_{n=−∞}^{∞} hd[n]·e^−jωn    hd[n] = (1/2π)·∫_{−π}^{π} Hd(e^jω)·e^jωn dω

• Choose ideal frequency response as desired response
• Most ideal impulse responses are of infinite length
• The easiest way to obtain a causal FIR filter from the ideal one is

h[n] = hd[n] for 0 ≤ n ≤ M;  h[n] = 0 else

• More generally

h[n] = hd[n]·w[n]  where  w[n] = 1 for 0 ≤ n ≤ M, 0 else

323
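The truncate-and-window recipe above is only a few lines of code. A sketch with numpy (a length-(M+1) Hamming-windowed ideal lowpass with cutoff ωc; all names are illustrative):

```python
import numpy as np

M = 50                 # filter order (M + 1 taps)
wc = np.pi / 2         # ideal lowpass cutoff

n = np.arange(M + 1)
k = n - M / 2.0
# Ideal (delayed) lowpass impulse response sin(wc*k)/(pi*k), written with
# np.sinc so the k = 0 sample (= wc/pi) is handled automatically
hd = (wc / np.pi) * np.sinc(wc * k / np.pi)

w = 0.54 - 0.46 * np.cos(2 * np.pi * n / M)   # Hamming window
h = hd * w

# DC gain should be ~1; the response at w = pi should be strongly attenuated
H0 = h.sum()
Hpi = (h * (-1.0) ** n).sum()
print(H0, Hpi)
```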
Properties of Windows
• Prefer windows that concentrate around DC in frequency
– Less smearing, closer approximation
• Prefer window that has minimal span in time
– Fewer coefficients in the designed filter, computationally efficient
• So we want concentration in time and in frequency
– Contradictory requirements
• Example: Rectangular window
W(e^jω) = Σ_{n=0}^{M} e^−jωn = [1 − e^−jω(M+1)]/[1 − e^−jω] = e^−jωM/2 · sin[ω(M + 1)/2]/sin[ω/2]

324
Windowing distortion
• increasing window length generally reduces the
width of the main lobe
• peak of sidelobes is generally independent of M

325
Windows
Commonly used windows (centered, |n| ≤ (N − 1)/2):

• Rectangular: w(n) = 1
• Bartlett: w(n) = 1 − 2|n|/N
• Hanning: w(n) = (1/2)·[1 + cos(2πn/N)]
• Hamming: w(n) = 0.54 + 0.46·cos(2πn/N)
• Blackman: w(n) = 0.42 + 0.5·cos(2πn/N) + 0.08·cos(4πn/N)
• Kaiser: w(n) = I0(β·√(1 − (2n/(N − 1))²))/I0(β)

where I0(·) is the zeroth-order modified Bessel function of the first kind.
326
Rectangular Window
• Narrowest main lobe
– 4π/(M + 1)
– Sharpest transitions at
discontinuities in frequency

• Large sidelobes
– −13 dB
– Large oscillation around
discontinuities

• Simplest window possible

w[n] = 1 for 0 ≤ n ≤ M;  0 else

327
Bartlett (Triangular) Window
• Medium main lobe
– 8π/M

• Sidelobes
– −25 dB

• Hamming window
performs better
• Simple equation

w[n] = 2n/M for 0 ≤ n ≤ M/2;  2 − 2n/M for M/2 ≤ n ≤ M;  0 else

328
Hanning Window
• Medium main lobe
– 8π/M

• Sidelobes
– −31 dB

• Hamming window performs
better

• Same complexity as
Hamming

w[n] = (1/2)·[1 − cos(2πn/M)] for 0 ≤ n ≤ M;  0 else

329
Hamming Window
• Medium main lobe
– 8π/M

• Good sidelobes
– −41 dB

• Simpler than Blackman

w[n] = 0.54 − 0.46·cos(2πn/M) for 0 ≤ n ≤ M;  0 else

330
Blackman Window
• Large main lobe
– 12π/M

• Very good sidelobes
– −57 dB

• Complex equation

w[n] = 0.42 − 0.5·cos(2πn/M) + 0.08·cos(4πn/M) for 0 ≤ n ≤ M;  0 else

331
Kaiser Window Filter Design Method
• Parameterized equation
forming a set of windows
– Parameter to change main-lobe
width and sidelobe area trade-off

w[n] = I0(β·√(1 − ((n − M/2)/(M/2))²))/I0(β) for 0 ≤ n ≤ M;  0 else

– I0(·) represents the zeroth-order
modified Bessel function of the 1st
kind

332
Comparison of windows

333
Kaiser window
• Kaiser window
β Transition Min. stop
width (Hz) attn dB
2.12 1.5/N 30
4.54 2.9/N 50
6.76 4.3/N 70
8.96 5.7/N 90
334
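The table above is a coarse version of Kaiser's empirical design formulas, which give β and the required order directly from the desired stopband attenuation A (in dB) and the transition width Δω (in rad). A sketch of those standard formulas in pure Python (the function name is illustrative):

```python
import math

def kaiser_params(A, dw):
    """Kaiser's empirical formulas: window shape parameter beta and
    filter order M from stopband attenuation A (dB) and transition
    width dw (rad)."""
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0                     # rectangular window suffices
    M = math.ceil((A - 8) / (2.285 * dw))
    return beta, M

beta, M = kaiser_params(60.0, 0.2 * math.pi)
print(beta, M)
```

For 60 dB attenuation and a 0.2π transition band this gives β ≈ 5.65 and order M = 37, consistent with the β ≈ 6.76 row of the table applying to the stronger 70 dB case.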
Example
• Lowpass filter of length 51 and ωc = π/2

[Figures: gain responses (dB) versus ω/π of lowpass filters designed
using the Hann, Hamming, and Blackman windows]
335
Frequency Sampling Method
• In this approach we are given H(k) and
need to find H(z)
• This is an interpolation problem and the
solution is given in the DFT part of the
course

H(z) = (1/N)·Σ_{k=0}^{N−1} H(k)·(1 − z⁻ᴺ)/(1 − e^(j2πk/N)·z⁻¹)
• It has similar problems to the windowing
approach 336
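Equivalently, h(n) can be obtained as the inverse DFT of the N frequency samples, after which the DFT of the designed filter interpolates exactly through the given H(k). A sketch with numpy (the sample values below are an illustrative lowpass choice, not from the slides):

```python
import numpy as np

N = 15
# Desired lowpass samples on the DFT grid, chosen symmetric (H[k] = H[N-k])
# so that h[n] comes out real
Hk = np.zeros(N)
Hk[[0, 1, 2, 13, 14]] = 1.0

h = np.fft.ifft(Hk)            # impulse response (imaginary part ~ 0)
H_check = np.fft.fft(h.real)   # DFT of the designed h[n]

# The design interpolates exactly through the specified samples;
# between the samples the response is free to ripple, which is the
# method's weakness noted above
print(np.max(np.abs(H_check - Hk)))
```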
FIR Digital Filter Order Estimation

Kaiser’s Formula:
− 20 log10 ( δ pδ s ) − 13
N≅ +1
14.6(ωs − ω p ) / 2π
• ie N is inversely proportional to transition
band width and not on transition band
location

337
UNIT-5
Multirate signal processing &
Finite Word length Effects

338
Single vs Multirate Processing

339
Basic Multirate operations: Decimation
and Interpolation

340
M-fold Decimator

341
Sampling Rate Reduction by an Integer Factor:
Downsampling
• We reduce the sampling rate of a sequence by "sampling" it

xd[n] = x[nM] = xc(nMT)

• This is accomplished with a sampling rate compressor

• We obtain xd[n] that is identical to what we would get by
reconstructing the signal and resampling it with T' = MT
• There will be no aliasing if

π/T' = π/(MT) > ΩN

342
Frequency Domain Representation of Downsampling
• Recall the DTFT of x[n] = xc(nT)

X(e^jω) = (1/T)·Σ_{k=−∞}^{∞} Xc(j[(ω/T) − (2πk/T)])

• The DTFT of the downsampled signal can similarly be written as

Xd(e^jω) = (1/T')·Σ_{r=−∞}^{∞} Xc(j[(ω/T') − (2πr/T')]) = (1/MT)·Σ_{r=−∞}^{∞} Xc(j[(ω/MT) − (2πr/MT)])

• Let's represent the summation index as

r = i + kM  where −∞ < k < ∞ and 0 ≤ i < M

Xd(e^jω) = (1/M)·Σ_{i=0}^{M−1} {(1/T)·Σ_{k=−∞}^{∞} Xc(j[(ω/MT) − (2πk/T) − (2πi/MT)])}

• And finally

Xd(e^jω) = (1/M)·Σ_{i=0}^{M−1} X(e^j(ω − 2πi)/M)
343
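The final identity can be spot-checked numerically on a finite sequence: the DTFT of x[nM] equals the average of M shifted-and-stretched copies of X. A sketch with numpy (M = 2 and a short arbitrary x, both illustrative):

```python
import numpy as np

def dtft(x, w):
    """DTFT of a finite causal sequence x at radian frequency w."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * w * n))

x = np.array([1.0, 0.5, -0.25, 0.8, 0.3, -0.1])
M = 2
xd = x[::M]                    # downsampled sequence x[nM]

w = 0.7                        # arbitrary test frequency
lhs = dtft(xd, w)
rhs = sum(dtft(x, (w - 2 * np.pi * i) / M) for i in range(M)) / M
print(lhs, rhs)
```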
Frequency Domain Representation of Downsampling

344
Aliasing

345
Frequency Domain Representation of Downsampling w/ Prefilter

346
Decimation filter

347
L-fold Interpolator

348
Increasing the Sampling Rate by an Integer Factor:
Upsampling
• We increase the sampling rate of a sequence by interpolating it

xi[n] = x[n/L] = xc(nT/L)

• This is accomplished with a sampling rate expander

• We obtain xi[n] that is identical to what we would get by
reconstructing the signal and resampling it with T' = T/L
• Upsampling consists of two steps
– Expanding

xe[n] = { x[n/L], n = 0, ±L, ±2L, ...;  0, else } = Σ_{k=−∞}^{∞} x[k]·δ[n − kL]

– Interpolating

349
Frequency Domain Representation of Expander
• The DTFT of xe[n] can be written as

Xe(e^jω) = Σ_{n=−∞}^{∞} (Σ_{k=−∞}^{∞} x[k]·δ[n − kL])·e^−jωn = Σ_{k=−∞}^{∞} x[k]·e^−jωLk = X(e^jωL)
• The output of the expander is frequency-scaled

350
Input-output relation on the Spectrum

351
Periodicity and spectrum images

352
Frequency Domain Representation of Interpolator
• The DTFT of the desired interpolated signals is

• The extrapolator output is given as

• To get interpolated signal we apply the following LPF

353
Interpolation filters

354
Fractional sampling rate convertor

355
Fractional sampling rate convertor

356
Changing the Sampling Rate by Non-Integer Factor

• Combine decimation and interpolation for non-integer factors


Interpolator Decimator

x[n] xe[n] Lowpass filter xi[n] Lowpass filter xo[n] xd[n]


L Gain = L Gain = 1 M
Cutoff = π/L Cutoff = π/M

T T/L T/L T/L TM/L


• The two low-pass filters can be combined into a single one

Lowpass filter
x[n] xe[n] Gain = L xo[n] xd[n]
L Cutoff = M
min(π/L, π/M)

T T/L T/L TM/L 357


Time Domain
• xi[n] is a low-pass filtered version of x[n]
• The low-pass filter impulse response is

hi[n] = sin(πn/L)/(πn/L)

• Hence the interpolated signal is written as

xi[n] = Σ_{k=−∞}^{∞} x[k]·sin(π(n − kL)/L)/(π(n − kL)/L)

• Note that
hi[0] = 1
hi[n] = 0 for n = ±L, ±2L, ...

• Therefore the filter output can be written as

xi[n] = x[n/L] = xc(nT/L) = xc(nT') for n = 0, ±L, ±2L, ...

358
Sampling of bandpass signals

359
Sampling of bandpass signals

360
Over sampling -ADC

361
362
363
364
Sub band coding

365
Sub band coding

366
Digital filter banks

367
Finite Word length Effects

368
Finite Wordlength Effects
• Finite register lengths and A/D converters
cause errors in:-
(i) Input quantisation.
(ii) Coefficient (or multiplier)
quantisation
(iii) Products of multiplication truncated
or rounded due to machine length

369
Finite Wordlength Effects
• Quantisation

[Figure: quantiser characteristic — input ei(k), output eo(k)]

−Q/2 ≤ ei,o(k) ≤ Q/2

370
Finite Wordlength Effects
• The pdf for e using rounding is uniform:

p(e) = 1/Q,  −Q/2 ≤ e ≤ Q/2

• Noise power

σ² = ∫_{−Q/2}^{Q/2} e²·p(e)·de = E{e²}

or

σ² = Q²/12
371
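The Q²/12 result is easy to confirm by simulation: quantise a random signal with step Q and measure the error variance. A sketch with numpy (seeded for repeatability):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 2.0 ** -8                          # quantisation step

x = rng.uniform(-1.0, 1.0, 200_000)    # test signal
xq = np.round(x / Q) * Q               # rounding quantiser
e = xq - x                             # quantisation error

measured = np.var(e)
predicted = Q ** 2 / 12.0
print(measured / predicted)            # ~1.0
```

The uniform-error model holds well here because the signal traverses many quantisation steps; for very low-level or strongly correlated signals the model breaks down.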
Finite Wordlength Effects
• Let input signal be sinusoidal of unity
amplitude. Then total signal power P = 1/2
• If b bits are used for the binary representation then Q = 2·2⁻ᵇ
so that σ² = 2⁻²ᵇ/3
• Hence

P/σ² = (3/2)·2²ᵇ

or SNR = 1.8 + 6b dB
372
Finite Wordlength Effects
• Consider a simple example of finite
precision on the coefficients a, b of a second
order system with poles ρe^±jθ

H(z) = 1/(1 − a·z⁻¹ + b·z⁻²)

H(z) = 1/(1 − 2ρcosθ·z⁻¹ + ρ²·z⁻²)

• where a = 2ρcosθ and b = ρ²
373
Finite Wordlength Effects
bit pattern    2ρcosθ, ρ²    ρ
000            0             0
001            0.125         0.354
010            0.25          0.5
011            0.375         0.611
100            0.5           0.707
101            0.625         0.791
110            0.75          0.866
111            0.875         0.935
               1.0           1.0
374
Finite Wordlength Effects
• Finite wordlength computations

[Figure: filter computation with quantisation after the multipliers, INPUT → + → OUTPUT]
375
Limit-cycles; "Effective Pole"
Model; Deadband

• Observe that for

H(z) = 1/(1 + b1·z⁻¹ + b2·z⁻²)

• instability occurs when b2 → 1
• i.e. poles are
• (i) either on unit circle when complex
• (ii) or one real pole is outside unit
circle.
• Instability under the "effective pole" model
is considered as follows 376
Finite Wordlength Effects
• In the time domain, with H(z) = Y(z)/X(z),

y(n) = x(n) − b1·y(n − 1) − b2·y(n − 2)

• With b2 → 1 for instability we have

Q[b2·y(n − 2)] indistinguishable from y(n − 2)

• Where Q[·] is quantisation
377
Finite Wordlength Effects
• With rounding, therefore, we have
b2·y(n − 2) ± 0.5 and y(n − 2)
are indistinguishable (for integers)
or b2·y(n − 2) ± 0.5 = y(n − 2)
• Hence
y(n − 2) = ±0.5/(1 − b2)
• With both positive and negative numbers
y(n − 2) = ±0.5/(1 − |b2|) 378
Finite Wordlength Effects
• The range of integers ±0.5/(1 − |b2|)
constitutes a set of integers that cannot be
individually distinguished as separate or from the
asymptotic system behaviour.
• The band of integers

[−0.5/(1 − |b2|), +0.5/(1 − |b2|)]

is known as the "deadband".
• In the second order system, under rounding, the
output assumes a cyclic set of values of the
deadband. This is a limit-cycle.
379
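A zero-input limit cycle is easy to reproduce in simulation. The sketch below uses a first-order version of the same argument, y(n) = Q[b·y(n − 1)] with rounding to integers, whose deadband is |y| ≤ 0.5/(1 − |b|) by identical reasoning (this first-order setup is an illustrative simplification, not the slide's second-order system). With b = 0.9 the deadband edge is 5, and the rounded recursion sticks there instead of decaying to zero:

```python
import math

def q_round(v):
    """Round to nearest integer, ties away from zero (models the quantiser)."""
    return math.floor(v + 0.5) if v >= 0 else math.ceil(v - 0.5)

b = 0.9
y = 10.0
history = []
for _ in range(30):
    y = q_round(b * y)          # quantised first-order recursion, zero input
    history.append(y)

deadband = 0.5 / (1 - abs(b))   # = 5 for b = 0.9
print(history[-5:], deadband)   # output is stuck at the deadband edge
```

With infinite precision the same recursion decays geometrically to zero; the quantiser is what traps the output inside the deadband.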
Finite Wordlength Effects
• Consider the transfer function

G(z) = 1/(1 + b1·z⁻¹ + b2·z⁻²)

yk = xk − b1·yk−1 − b2·yk−2

• if poles are complex then the impulse response
hk is given by

hk = (ρᵏ/sinθ)·sin[(k + 1)θ]
380
Finite Wordlength Effects
• Where ρ = √b2 and θ = cos⁻¹(−b1/(2√b2))
• If b2 = 1 then the response is sinusoidal
with frequency

ω = (1/T)·cos⁻¹(−b1/2)

• Thus product quantisation causes instability
implying an "effective" b2 = 1.

381
Finite Wordlength Effects

• Notice that with infinite precision the


response converges to the origin

• With finite precision the reponse does not


converge to the origin but assumes
cyclically a set of values –the Limit Cycle

382
Finite Wordlength Effects
• Assume {e1(k)}, {e2(k)}, ... are uncorrelated,
random processes etc.

σ0i² = σe²·Σ_{k=0}^{∞} hi²(k),   σe² = Q²/12

Hence total output noise power

σ0² = σ01² + σ02² = 2·(2⁻²ᵇ/12)·Σ_{k=0}^{∞} ρ²ᵏ·sin²[(k + 1)θ]/sin²θ

• Where Q = 2⁻ᵇ and

h1(k) = h2(k) = ρᵏ·sin[(k + 1)θ]/sinθ;  k ≥ 0
383
Finite Wordlength Effects

• i.e.

σ0² = (2⁻²ᵇ/6)·[(1 + ρ²)/(1 − ρ²)]·[1/(1 + ρ⁴ − 2ρ²·cos2θ)]

384
Finite Wordlength Effects

• For the FFT butterfly

A(n + 1) = A(n) + W(n)·B(n)
B(n + 1) = A(n) − W(n)·B(n)

[Figure: butterfly signal-flow graph with inputs A(n), B(n) and outputs A(n + 1), B(n + 1)]

385
Finite Wordlength Effects
• FFT

|A(n + 1)|² + |B(n + 1)|² = 2·(|A(n)|² + |B(n)|²)

so the mean-square level doubles each pass

• AVERAGE GROWTH: 1/2 BIT/PASS

386
Finite Wordlength Effects
• FFT

[Figure: unit square |REAL| ≤ 1.0, |IMAG| ≤ 1.0 in the complex plane]

Ax(n + 1) = Ax(n) + Bx(n)·C(n) − By(n)·S(n)
|Ax(n + 1)| < |Ax(n)| + |Bx(n)|·|C(n)| + |By(n)|·|S(n)|
|Ax(n + 1)|/|Ax(n)| < 1.0 + |C(n)| + |S(n)| = 2.414...

• PEAK GROWTH: 1.21... BITS/PASS


387
Finite Wordlength Effects
• Linear modelling of product quantisation

x(n) → Q[·] → x̃(n)

• Modelled as

x̃(n) = x(n) + q(n)

388
Finite Wordlength Effects
• For rounding operations q(n) is uniformly
distributed between −Q/2 and Q/2, where Q is
the quantisation step (i.e. for a wordlength of b
bits with sign magnitude representation or
mod 2, Q = 2⁻ᵇ).
• A discrete-time system with quantisation at
the output of each multiplier may be
considered as a multi-input linear system

389
Finite Wordlength Effects

q1(n), q2(n), ..., qp(n)

{x(n)} → h(n) → {y(n)}

• Then

y(n) = Σ_{r=0}^{∞} x(r)·h(n − r) + Σ_{λ=1}^{p} [Σ_{r=0}^{∞} qλ(r)·hλ(n − r)]

• where hλ(n) is the impulse response of the
system from the output of the λth multiplier
to y(n).

390
Finite Wordlength Effects
• For zero input, i.e. x(n) = 0 ∀n, we can write

|y(n)| ≤ Σ_{λ=1}^{p} q̂λ·Σ_{r=0}^{∞} |hλ(n − r)|

• where q̂λ is the maximum of |qλ(r)|, ∀λ, r,
which is not more than Q/2

• i.e.

|y(n)| ≤ (Q/2)·Σ_{λ=1}^{p} [Σ_{n=0}^{∞} |hλ(n)|]

391
Finite Wordlength Effects
• However

Σ_{n=0}^{∞} |hλ(n)| ≤ Σ_{n=0}^{∞} |h(n)|

• And hence

|y(n)| ≤ (pQ/2)·Σ_{n=0}^{∞} |h(n)|

• i.e. we can estimate the maximum swing at
the output from the system parameters and
quantisation level
392
Finite Precision Numerical Effects

393
Quantization in Implementing Systems
• Consider the following system

• A more realistic model would be

• In order to analyze it we would prefer

394
Effects of Coefficient Quantization in IIR Systems

• When the parameters of a rational system are quantized


– The poles and zeros of the system function move
• If the structure of the system is sensitive to
perturbation of its coefficients
– The resulting system may no longer be stable
– The resulting system may no longer meet the original specs
• We need to do a detailed sensitivity analysis
– Quantize the coefficients and analyze frequency response
– Compare frequency response to original response
• We would like to have a general sense of the effect of
quantization
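The sensitivity analysis described above (quantise the coefficients, then compare the frequency response against the original) can be sketched as follows; the example coefficients and the 8-bit wordlength are illustrative assumptions, not values from the slides:

```python
import cmath
import math

def freq_resp(b_coeffs, a_coeffs, w):
    """H(e^jw) for H(z) = sum b_k z^-k / (1 - sum a_k z^-k)."""
    z_inv = cmath.exp(-1j * w)
    num = sum(bk * z_inv**k for k, bk in enumerate(b_coeffs))
    den = 1 - sum(ak * z_inv**(k + 1) for k, ak in enumerate(a_coeffs))
    return num / den

def quantize(coeffs, bits):
    """Round each coefficient to the nearest multiple of 2^-bits."""
    q = 2.0**(-bits)
    return [round(c / q) * q for c in coeffs]

b_coeffs = [0.0675, 0.1349, 0.0675]   # example numerator (hypothetical filter)
a_coeffs = [1.1430, -0.4128]          # example denominator coefficients a1, a2

bq = quantize(b_coeffs, 8)
aq = quantize(a_coeffs, 8)

# Maximum magnitude-response deviation over a frequency grid
worst = max(abs(abs(freq_resp(b_coeffs, a_coeffs, w)) -
                abs(freq_resp(bq, aq, w)))
            for w in (i * math.pi / 256 for i in range(257)))
print(worst)
```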

395
Effects on Roots
M M

∑b z k
−k

Quantiza ∑ b̂ z k
−k

H(z ) = k =0
N
Ĥ(z ) = k =0
N
1 − ∑ ak z −k tion 1 − ∑ âk z −k
k =1 k =1

• Each root is affected by quantization errors in ALL coefficients


• Tightly clustered roots can be significantly affected
  – Narrow-bandwidth lowpass or bandpass filters can be very
    sensitive to quantization noise
• The larger the number of roots in a cluster, the more sensitive they
  become
• This is why second-order cascade structures are less sensitive to
  quantization error than higher-order systems
  – Each second-order section is independent of the others

396
Poles of Quantized Second-Order Sections
• Consider a 2nd order system with complex-conjugate pole pair

• The pole locations after quantization will lie on a grid of realizable
points

  [figure: realizable pole positions in the z-plane for 3-bit and 7-bit
  coefficient quantization]
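The grid is non-uniform because the quantised coefficients are a1 = 2r·cos θ and a2 = −r², which fix the pole's real part and radius respectively. A sketch enumerating realizable pole positions for the 3-bit case (an illustrative choice):

```python
import math

bits = 3
Q = 2.0**(-bits)

# Realizable complex-conjugate pole positions when a1 = 2 r cos(theta)
# and a2 = r^2 are each quantised to multiples of Q:
grid = []
for m in range(int(2 / Q) + 1):          # quantised a1 values in [0, 2]
    for k in range(int(1 / Q) + 1):      # quantised a2 values in [0, 1]
        r = math.sqrt(k * Q)             # pole radius fixed by a2
        x = m * Q / 2                    # pole real part fixed by a1
        if r > 0 and x <= r < 1:         # valid pole pair inside |z| < 1
            y = math.sqrt(r * r - x * x) # imaginary part
            grid.append((x, y))

# Radii take only the values sqrt(k*Q): sparse near the real axis,
# denser toward the unit circle
print(sorted({round(math.hypot(x, y), 6) for (x, y) in grid}))
```

Note how the realizable radii √(k·Q) crowd toward |z| = 1, which makes accurate placement of low-frequency narrow-band poles particularly difficult.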

397
Coupled-Form Implementation of Complex-Conjugate Pair

• Equivalent implementation of the second-order system

• But the quantization grid this time is uniform over the z-plane
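For comparison, in the coupled form the multiplier coefficients are the pole's real and imaginary parts r·cos θ and r·sin θ directly, so the realizable poles form a uniform rectangular grid; a sketch with the same illustrative 3-bit step:

```python
bits = 3
Q = 2.0**(-bits)

# Coupled form: coefficients are x = r cos(theta) and y = r sin(theta),
# so realizable poles are simply all quantised (x, y) pairs inside |z| < 1
grid = [(m * Q, k * Q)
        for m in range(int(1 / Q))
        for k in range(int(1 / Q))
        if (m * Q)**2 + (k * Q)**2 < 1]

# Spacing is Q in both directions, independent of position in the z-plane
print(len(grid), Q)
```

The uniform grid is why the coupled form places poles near z = 1 more accurately, at the cost of extra multipliers per section.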

398
Effects of Coefficient Quantization in FIR Systems
• No poles to worry about, only zeros
• Direct form is commonly used for FIR systems
  H(z) = Σ_{n=0..M} h[n]·z^(−n)

• Suppose the coefficients are quantized:

  Ĥ(z) = Σ_{n=0..M} ĥ[n]·z^(−n) = H(z) + ∆H(z)

  ∆H(z) = Σ_{n=0..M} ∆h[n]·z^(−n)

• Quantized system is linearly related to the quantization error

• Again quantization noise is higher for clustered zeros


• However, most FIR filters have spread zeros
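The linear relation Ĥ(z) = H(z) + ∆H(z) can be confirmed numerically; the example impulse response, wordlength and test frequency are arbitrary choices:

```python
import cmath

def H(coeffs, w):
    """Frequency response of an FIR filter H(z) = sum h[n] z^-n."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(coeffs))

def quantize(coeffs, bits):
    q = 2.0**(-bits)
    return [round(c / q) * q for c in coeffs]

h = [0.05, 0.25, 0.4, 0.25, 0.05]        # example linear-phase FIR
hq = quantize(h, 8)
dh = [a - b for a, b in zip(hq, h)]      # coefficient errors delta-h[n]

w = 0.3
# Quantised response = ideal response + response of the error sequence
err = abs(H(hq, w) - (H(h, w) + H(dh, w)))
print(err)   # ~0 up to floating-point round-off
```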
399
Round-Off Noise in Digital Filters
• Difference equations implemented with finite-precision arithmetic
are non-linear systems
• Second-order direct form I system
• Model with quantization effects
• Density function of the error terms for rounding

400
Analysis of Quantization Error
• Combine all error terms at a single location to get

  e[n] = e0[n] + e1[n] + e2[n] + e3[n] + e4[n]

• The variance of e[n] in the general case is

  σe² = (M + 1 + N) · 2^(−2B) / 12

• The contribution of e[n] to the output is

  f[n] = Σ_{k=1..N} ak·f[n−k] + e[n]

• The variance of the output error term f[n] is

  σf² = (M + 1 + N) · (2^(−2B)/12) · Σ_{n=−∞..∞} |hef[n]|² ,
  where Hef(z) = 1/A(z)

401
Round-Off Noise in a First-Order System
• Suppose we want to implement the following stable system
  H(z) = b / (1 − a·z^(−1)) ,  |a| < 1

• The quantization error noise variance is (here M = 0, N = 1)

  σf² = (M + 1 + N) · (2^(−2B)/12) · Σ_{n=−∞..∞} |hef[n]|²
      = 2 · (2^(−2B)/12) · Σ_{n=0..∞} a^(2n)
      = 2 · (2^(−2B)/12) · 1/(1 − a²)
• Noise variance increases as |a| gets closer to the unit circle
• As |a| gets closer to 1 we have to use more bits to compensate for the
increasing error
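A small sketch of how the first-order noise variance grows as the pole approaches the unit circle (B = 8 is an arbitrary choice):

```python
# Round-off noise variance of H(z) = b / (1 - a z^-1):
# sigma_f^2 = 2 * 2^(-2B)/12 * 1/(1 - a^2)
B = 8

def sigma_f_sq(a, B):
    return 2 * 2.0**(-2 * B) / 12 / (1 - a * a)

for a in (0.5, 0.9, 0.99, 0.999):
    print(a, sigma_f_sq(a, B))   # variance blows up as |a| -> 1
```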

402
Zero-Input Limit Cycles in Fixed-Point Realization of IIR Filters

• For stable IIR systems the output will decay to zero when the input
becomes zero
• A finite-precision implementation, however, may continue to oscillate
indefinitely
• Nonlinear behaviour is very difficult to analyze, so we will study it by example
• Example: Limit Cycle Behavior in First-Order Systems
  y[n] = a·y[n−1] + x[n] ,  |a| < 1

• Assume x[n] and y[n−1] are implemented with 4-bit registers

403
Example Cont’d
  y[n] = a·y[n−1] + x[n] ,  |a| < 1

• Assume that a = 1/2 = 0.100b and the input is

  x[n] = (7/8)·δ[n] = (0.111b)·δ[n]

• If we calculate the output for successive values of n:

  n    y[n]                Q(y[n])
  0    7/8  = 0.111b       7/8 = 0.111b
  1    7/16 = 0.011100b    1/2 = 0.100b
  2    1/4  = 0.010000b    1/4 = 0.010b
  3    1/8  = 0.001000b    1/8 = 0.001b
  4    1/16 = 0.000100b    1/8 = 0.001b

• A finite-duration input has caused an oscillation with period 1: the output sticks at 1/8 indefinitely
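The dead-band in the table can be reproduced by simulating the rounded recursion; rounding to 3 fractional bits with ties away from zero is assumed, consistent with the table entries:

```python
from fractions import Fraction

Q = Fraction(1, 8)     # 3 fractional bits

def round_away(x):
    """Round to the nearest multiple of Q, ties away from zero."""
    n = x / Q
    k = int(n + Fraction(1, 2)) if n >= 0 else -int(-n + Fraction(1, 2))
    return k * Q

a = Fraction(1, 2)
y = Fraction(0)
outputs = []
for n in range(8):
    x = Fraction(7, 8) if n == 0 else Fraction(0)
    y = round_away(a * y + x)    # quantise after each update
    outputs.append(y)

print(outputs)   # 7/8, 1/2, 1/4, 1/8, then 1/8 forever
```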

404
Example: Limit Cycles due to Overflow
• Consider a second-order system realized by
ŷ[n] = x[n] + Q(a1ŷ[n − 1]) + Q(a2ŷ[n − 2])
– Where Q() represents two’s complement rounding
– Word length is chosen to be 4 bits
• Assume a1=3/4=0.110b and a2=-3/4=1.010b
• Also assume
ŷ[− 1] = 3 / 4 = 0.110b and ŷ[− 2] = −3 / 4 = 1.010b
• The output at sample n=0 is
ŷ[0] = 0.110b × 0.110b + 1.010b × 1.010b
= 0.100100b + 0.100100b
• After rounding each product we get
  ŷ[0] = 0.101b + 0.101b = 1.010b = −3/4
• The binary carry overflows into the sign bit, changing the sign
• Repeating for n = 1 (both rounded products are now 1.011b = −5/8):
  ŷ[1] = 1.011b + 1.011b = 0.110b = 3/4
• The output continues to oscillate between −3/4 and +3/4
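The overflow oscillation can be reproduced by simulating the rounded products with two's-complement wraparound; rounding ties away from zero is assumed so that Q(±9/16) = ±5/8:

```python
from fractions import Fraction

Q = Fraction(1, 8)     # 3 fractional bits plus sign

def round_away(x):
    """Round to the nearest multiple of Q, ties away from zero."""
    n = x / Q
    k = int(n + Fraction(1, 2)) if n >= 0 else -int(-n + Fraction(1, 2))
    return k * Q

def wrap(x):
    """Two's-complement wraparound into [-1, 1)."""
    return ((x + 1) % 2) - 1

a1, a2 = Fraction(3, 4), Fraction(-3, 4)
y1, y2 = Fraction(3, 4), Fraction(-3, 4)   # y[-1], y[-2]

outputs = []
for n in range(6):
    y = wrap(round_away(a1 * y1) + round_away(a2 * y2))
    outputs.append(y)
    y1, y2 = y, y1

print(outputs)   # oscillates: -3/4, +3/4, -3/4, +3/4, ...
```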

405
Avoiding Limit Cycles
• Desirable to get zero output for zero input: Avoid limit-cycles
• Generally adding more bits would avoid overflow
• Using double-length accumulators at addition points would
decrease likelihood of limit cycles
• Trade-off between limit-cycle avoidance and complexity
• FIR systems cannot support zero-input limit cycles

406
