Chapter 2 Linear Signal Models

Chapter 2 discusses linear signal models, focusing on both nonparametric and parametric signal modeling techniques. It covers various methods for designing filters, estimating parameters, and minimizing errors, including least squares, Padé approximation, and Prony's method. The chapter also addresses stochastic signal modeling and the characteristics of autoregressive moving-average models, all-pole models, and pole-zero models.


Chapter 2: Linear Signal Models

Motivation
• Signal modeling is the representation of a
signal in an efficient manner.
• Signal modeling has several applications.
– Compression,
• Represent the signal with small number of
parameters.
– Signal prediction,
• A model may be used to estimate future data.
– Signal interpolation,
• A model may be used to estimate missing data.

• Nonparametric signal model: when the LTI
filter is specified by its impulse response.
– Nonparametric because there is no restriction
on the form of the model, and the number of
parameters is infinite.
• Parametric: when the filter is represented by a
finite-order rational system function.
– Limited to
• Systems with rational system functions: all-pole, all-zero,
and pole-zero systems.
• Minimum-phase systems
– Described by a finite number of parameters.
• The two major topics we address in this
chapter are
– Design of an all-pole, all-zero, or pole-zero
system that produces a random signal with a
given autocorrelation sequence or PSD function, and
– Derivation of the second-order moments of such
signals, given the parameters of their system function.

Nonparametric Signal Models
• The model given below is a nonparametric
model:

x(n) = Σk h(k) w(n-k)

• If w(n) is a zero-mean white noise process with
variance σw², the autocorrelation, complex PSD, and
PSD of the output are given as

rx(l) = σw² rh(l),  Rx(z) = σw² H(z) H*(1/z*),  Rx(e^jω) = σw² |H(e^jω)|²

where rh(l) is the autocorrelation of the impulse
response, i.e., of the output for a unit sample input.

• Notice that when the input is a white noise
process, the shape of the autocorrelation and
the power spectrum (second-order moments) of
the output signal are completely characterized
by the system.
• We use the term system-based signal model to
refer to the signal generated by a system with a
white noise input.
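This shaping effect can be checked numerically. The sketch below (my own example, not from the slides) drives a hypothetical first-order all-pole system h(n) = aⁿu(n) with unit-variance white noise; theory then predicts rx(l) = σw² a^|l|/(1-a²), so the normalized lag-1 autocorrelation of the output should come out close to a.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 100_000)   # zero-mean, unit-variance white noise

# hypothetical first-order all-pole system: x(n) = a*x(n-1) + w(n)
a = 0.5
x = np.zeros_like(w)
for n in range(len(w)):
    x[n] = w[n] + a * (x[n - 1] if n > 0 else 0.0)

# theory: r_x(l) = sigma_w^2 * a^|l| / (1 - a^2), so r_x(1)/r_x(0) = a
r0 = np.mean(x * x)
r1 = np.mean(x[1:] * x[:-1])
print(r1 / r0)   # close to a = 0.5
```

The shape of the estimated autocorrelation depends only on the system, not on the particular noise realization.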

Parametric Signal Modeling
• Two steps in parametric signal modeling:
– Choose an appropriate model, and
– Determine the parameters of the model.
• In a linear signal model, the signal is modeled
as the output of a causal, stable LTI filter:

x(n) = -Σk=1..p ap(k) x(n-k) + Σk=0..q bq(k) w(n-k)

• The parameters are ap(k) and bq(k).


• The parameters should provide the best
approximation to the given signal.
• What do causality and stability imply for
this system function?

Deterministic Signal Modeling
• The signal is modeled as the output of an LTI
system driven by a unit impulse.
• What do we mean by best approximation?
– Error or implementation cost.
• The error in this modeling is given as

e(n) = x(n) - h(n)

where h(n) is the impulse response of the model.
• There are different methods of minimizing this error.


Least Squares Method
• Minimizes the sum of squared errors.
• Taking the partial derivatives of this error
with respect to ap(k) and bq(k) and setting them to
zero leads to nonlinear equations that are
not mathematically tractable.

Padé Approximation
• Reformulates the problem so that filter
coefficients may be found that force
the error to be zero for p+q+1 values of n.

• In matrix form,

• The solution differs depending on whether
the matrix is invertible or not.
– Reading assignment.
• Limitation
– No guarantee on how accurate the model is for
n>p+q since no data for n>p+q is used.
– The model generated is not guaranteed to be
stable.
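The construction can be sketched in a few lines. The function below is my own minimal illustration (names and indexing are assumptions, not the textbook code); it assumes x has at least p+q+1 samples and that the p×p data matrix is invertible, the case discussed above.

```python
import numpy as np

def pade(x, p, q):
    """Pade approximation sketch: force zero error for n = 0..p+q.

    Assumes len(x) >= p+q+1 and an invertible p x p data matrix.
    Returns (a, b) with a[0] = 1.
    """
    x = np.asarray(x, dtype=float)
    # denominator: rows n = q+1 .. q+p of  x(n) + sum_k a(k) x(n-k) = 0
    X = np.array([[x[n - k] if n - k >= 0 else 0.0 for k in range(1, p + 1)]
                  for n in range(q + 1, q + p + 1)])
    a = np.concatenate(([1.0], np.linalg.solve(X, -x[q + 1:q + p + 1])))
    # numerator: b(n) = sum_k a(k) x(n-k) for n = 0..q
    b = np.array([sum(a[k] * (x[n - k] if n - k >= 0 else 0.0)
                      for k in range(min(n, p) + 1)) for n in range(q + 1)])
    return a, b
```

For x(n) = 0.5ⁿ, pade(x, 1, 0) recovers a = [1, -0.5], b = [1]; for signals that are not exactly rational of order (p, q), nothing constrains the fit beyond n = p+q.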

Prony’s Method
• Modifies the least-squares error definition.
• The new error is then given as

e'(n) = x(n) + Σk=1..p ap(k) x(n-k),  for n > q

• The sum of squares of this error is then minimized
with respect to ap(k).
• Setting the partial derivatives to zero yields the
orthogonality principle.

• Substituting this into the modified error
equation and manipulating leads to:

• Where

• In matrix form,

Conjugate symmetric matrix


• The minimum error is then

• The minimum error does not depend on the
numerator coefficients.
• The bq(k) are obtained by setting the error
to zero for n=0,…,q, the same as in the Padé approximation.

• Limitation
– Requires knowledge of data for all n.
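Prony's denominator step can be sketched as a least-squares problem over all available n > q, with the numerator then set exactly as in Padé. The helper below is my own minimal version, not the textbook code.

```python
import numpy as np

def prony(x, p, q):
    """Prony sketch: least-squares denominator over n > q, then b(k)
    chosen to force zero error for n = 0..q (same as Pade)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # rows n = q+1 .. N-1 of  x(n) + sum_k a(k) x(n-k) = e'(n)
    X = np.array([[x[n - k] if n - k >= 0 else 0.0 for k in range(1, p + 1)]
                  for n in range(q + 1, N)])
    a_tail, *_ = np.linalg.lstsq(X, -x[q + 1:], rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # numerator from the first q+1 samples, as in Pade
    b = np.array([sum(a[k] * (x[n - k] if n - k >= 0 else 0.0)
                      for k in range(min(n, p) + 1)) for n in range(q + 1)])
    return a, b
```

Unlike Padé, every sample with n > q enters the fit, so the model accounts for the whole record; the minimized error is generally nonzero.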

Padé vs Prony
• Model a signal consisting of single pulse of
length N=21 with p=q=1.

[Plots: signal models obtained with Prony’s method and the Padé method]

Filter Design: Padé vs Prony
• Design a linear phase lowpass filter for
q=p=5.
[Figures: error and magnitude response of the Padé and Prony designs]
Shanks’ Method
• Instead of forcing the error to zero to obtain
bq(k), Shanks’ method minimizes the
squared error.

• Reading assignment.

Reading assignment
• FIR least square inverse filter
– Hayes: pp 166-174
• Iterative prefiltering
– Hayes: pp 174-177

Finite Data Records
• If x(n) is known only over a finite interval
0≤n≤N, Prony’s method cannot be used directly.
• Assumptions have to be made regarding x(n)
outside this interval.
Autocorrelation Method
• In this method, the data is assumed to be
zero outside 0≤n≤N.
• This is basically using a rectangular window
to obtain a new signal from x(n).

• Then solve for ap(k) using Prony’s method,
except that the autocorrelation is computed from the windowed signal.

• Limitation
– Since the window forces x(n) to be zero outside
0≤n≤N, the accuracy of the model outside this
range is compromised.
– For 0≤n≤P the prediction is based on fewer data
points, so the error may be greater.
• A non-rectangular window (e.g., Hamming,
Hanning) may be used.
• Advantage
– The model will be stable; that is, the poles of
H(z) will be inside the unit circle.
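A sketch of the method (my own minimal implementation, no data window applied beyond the implicit rectangular one):

```python
import numpy as np

def autocorrelation_method(x, p):
    """All-pole model, autocorrelation method (minimal sketch).

    The data is treated as zero outside 0 <= n < N, so the normal
    equations use r(k) = sum_n x(n) x(n-k) and the matrix is Toeplitz.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = np.array([np.dot(x[k:], x[:N - k]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.concatenate(([1.0], np.linalg.solve(R, -r[1:])))
    return a
```

Because R is a valid autocorrelation matrix, the resulting poles lie inside the unit circle, which is the stability advantage noted above; the Toeplitz structure also admits fast Levinson-type solvers.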
Covariance Method
• Does not make any assumptions about the
data outside 0≤n≤N.
• The error is calculated over P≤n≤N.

• The solution is then given as

• Where
• The normal equations are identical to those of
Prony’s method, except for how the
autocorrelation is obtained.
• Note that the autocorrelation matrix is not
Toeplitz.
– Computationally much more costly than the
autocorrelation method.
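A corresponding sketch of the covariance method (again my own minimal version); the matrix is built from inner products of shifted data segments and is not Toeplitz:

```python
import numpy as np

def covariance_method(x, p):
    """All-pole model, covariance method (minimal sketch).

    No assumption about the data outside the record: the error is
    summed over n = p..N-1 only, so the matrix is not Toeplitz.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # C[i-1, j-1] = sum_{n=p}^{N-1} x(n-i) x(n-j)
    C = np.array([[np.dot(x[p - i:N - i], x[p - j:N - j])
                   for j in range(1, p + 1)] for i in range(1, p + 1)])
    c = np.array([np.dot(x[p - i:N - i], x[p:]) for i in range(1, p + 1)])
    a = np.concatenate(([1.0], np.linalg.solve(C, -c)))
    return a
```

On a finite record of an exact all-pole signal this recovers the true coefficients, while the autocorrelation method's rectangular window introduces a small bias; the price is a non-Toeplitz system that cannot use fast Levinson-type solvers.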

Example: Autocorr. vs Covariance
• Obtain a first order all-pole model for

[Plots: first-order all-pole models from the autocorrelation and covariance methods]

Stochastic Signal Modeling
• Analyze the properties of a special class of
stationary random sequences that are
obtained by driving a linear, time-invariant
system with white noise.
• Previous methods not applicable for
stochastic signal modeling.
– Only probabilistic information is available.
– Errors are also described probabilistically.

• If white noise is filtered with a stable LTI
filter, random signals with almost any
arbitrary aperiodic correlation structure or
continuous PSD can be obtained.

Autoregressive Moving-Average
(Pole-Zero) Models
• A pole-zero stochastic model is given as

• Its system function is:

• Its short-term memory decays exponentially.

• If a parametric model is excited with white
noise w(n) ~ IID(0, σw²), the second-order moments
of the random signal are given as

All-pole Models
• An all-pole model is obtained when Q=0, that is

• Its impulse response is given by

• Multiplying by h*(n-l) and summing over all n
leads to

• Since h(-l)=0 for l>0, two sets of equations
result: one for l=1,…,P and one for l=0.
• Combining these two equations:
• In matrix notation, these are the Yule-Walker equations:

Rx a = -rx

• where Rx is the autocorrelation matrix, a is the all-pole
parameter vector, and rx is the vector of
autocorrelations.
• So the first P+1 autocorrelation values determine
the coefficients.
• Conversely, the coefficients determine the first P+1
autocorrelation values.
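Both directions can be illustrated numerically. In the sketch below (the coefficient values are my own choice, not from the slides), a hypothetical AR(2) process is simulated, its first autocorrelation values are estimated, and solving the Yule-Walker system recovers the coefficients.

```python
import numpy as np

# hypothetical AR(2): x(n) + a1*x(n-1) + a2*x(n-2) = w(n)
a1_true, a2_true = -0.9, 0.2          # poles at 0.5 and 0.4 (stable)
rng = np.random.default_rng(1)
w = rng.normal(size=200_000)
x = np.zeros_like(w)
for n in range(2, len(w)):
    x[n] = -a1_true * x[n - 1] - a2_true * x[n - 2] + w[n]

# estimate r(0), r(1), r(2) from the realization
r = [np.mean(x[k:] * x[:len(x) - k]) for k in range(3)]

# Yule-Walker: [r(0) r(1); r(1) r(0)] [a1; a2] = -[r(1); r(2)]
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a_hat = np.linalg.solve(R, [-r[1], -r[2]])
print(a_hat)   # close to (-0.9, 0.2)
```

Only r(0), r(1), r(2) are needed to recover the two coefficients, matching the statement above that the first P+1 autocorrelation values determine the model.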

• Normalizing the autocorrelations by dividing
by r(0),

• It can be re-written as:

• Given the coefficients, the autocorrelation


can be obtained.
• Second Order all-pole model

– Representing a1 and a2 in terms of the poles

– To be stable and causal

All-zero Models
• The output of the all-zero model is the
weighted average of delayed versions of the
input signal

• Its impulse response is:

• Its autocorrelation is given as

• Note that these equations are nonlinear, so the
solution is complicated.
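In the forward direction the all-zero autocorrelation is easy to evaluate: r(l) = σw² Σk b(k) b(k+l) for |l| ≤ Q, and zero beyond. A sketch with a hypothetical MA(2) model (coefficient values are my own choice):

```python
import numpy as np

# hypothetical MA(2): x(n) = w(n) + 0.5 w(n-1) + 0.25 w(n-2)
b = np.array([1.0, 0.5, 0.25])
sigma2_w = 1.0
Q = len(b) - 1

# r_x(l) = sigma_w^2 * sum_k b(k) b(k+l) for |l| <= Q, zero otherwise
r = [float(sigma2_w * np.dot(b[:len(b) - l], b[l:])) for l in range(Q + 1)]
print(r)   # [1.3125, 0.625, 0.25]
```

Going the other way (coefficients from a given autocorrelation) requires solving equations that are quadratic in the b(k), which is the nonlinearity referred to above.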
• The spectrum is

• Reading Assignment:
– Low order model of all-zero models

Pole-Zero Models
• A general pole-zero model is given as

• Its impulse response is given as

• For n>Q, it can be shown that

• Therefore, the first P+Q+1 values of the impulse
response completely specify the pole-zero
model.
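The recursion for n > Q can be verified numerically. In the sketch below (coefficient values are my own choice, P = Q = 1), the impulse response is generated from the difference equation, and for n > Q it satisfies h(n) = -Σk ak h(n-k).

```python
import numpy as np

# my own coefficient choices, P = Q = 1: A = [1, -0.5], D = [1, 0.4]
a = [1.0, -0.5]
d = [1.0, 0.4]

# impulse response from the difference equation with a unit-sample input
L = 10
h = np.zeros(L)
for n in range(L):
    h[n] = (d[n] if n < len(d) else 0.0) \
           - sum(a[k] * h[n - k] for k in range(1, len(a)) if n - k >= 0)

# for n > Q the numerator no longer contributes: h(n) = -sum_k a(k) h(n-k)
for n in range(2, L):
    assert abs(h[n] + a[1] * h[n - 1]) < 1e-12

print(h[:4].tolist())   # [1.0, 0.9, 0.45, 0.225]
```

Here h(0) and h(1) pin down the numerator, and the single recursion coefficient is fixed by any later sample, illustrating why P+Q+1 values suffice.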
• The autocorrelation can be represented as

• Noting that h(n) is causal,

• Therefore, the ak can be obtained from the
equations for l=Q+1,…,Q+P.
• Note that this is not a Toeplitz matrix.
• Even after the ak are obtained, the solution for
the dk is still nonlinear.
– Reading Assignment:
• Manolakis: pp 178-179
• Hayes: pp 190-193
Reading Assignment
• Models with poles on the unit circle.
• Cepstrum of pole-zero models.

Assignment 2
2.1 Implement the second-order filter with b0=1, a1=0.6 and a2=0.2, and
drive it with IID(0, 0.2). Then obtain a sample realization of length
40. From this sample realization, obtain a second-order deterministic
all-pole model with Prony’s method.
– Plot the obtained sample realization,
– Show the coefficients of the all-pole model obtained with Prony’s method,
– Comment on the result.
2.2 Show that the autocorrelation matrix used to obtain the
denominator coefficients in the pole-zero stochastic model is not
Toeplitz.
2.3 Implement the autocorrelation and covariance methods in
MATLAB. Record 10 seconds of your speech, break it into 32.5-
millisecond windows, and represent each window by an all-pole
model with P=14. Submit:
• The MATLAB code,
• The recorded speech,
• The model parameters and
• Discussion points.
• Submission format
– Via Email: [email protected]
– Email title: SDSP_[ID]_Assignment2
– File format: SDSP_[ID]_Assignment2.rar
• Report in .pdf only
– Submission date: 5PM, May 2, 2022 (previously September 28, 2020)
– Individual submission
