Chapter 2: Linear Signal Models

Electrical and Computer Engineering Masters Program at Addis Ababa University, Communication Engineering. Course: Linear Systems

Linear Signal Models

Motivation
• Signal modeling is the representation of a signal in an efficient manner.
• Signal modeling has several applications.
– Compression,
• Represent the signal with a small number of parameters.
– Signal prediction,
– Signal interpolation,
• A model may be used to estimate missing data.
• Nonparametric signal model: when the LTI filter is specified
by its impulse response.
• Parametric: when the filter is represented by a finite-order
rational system function.
– Limited to
• Systems with rational system functions: all-pole (AP), all-zero (AZ), and pole-zero (PZ) systems.
• Minimum-phase systems
• The two major topics we address in this chapter are
– Derivation of the second-order moments, given the parameters of
their system function, and
– Design of an AP, AZ, or PZ system that produces a random signal
with a given autocorrelation sequence or PSD function.
Nonparametric Signal Models
• The model given below is a nonparametric model: the LTI filter is specified directly by its impulse response h(n).
• The autocorrelation, complex PSD, and PSD of the output for a unit sample input are given below.
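For reference, a sketch of the standard relations for this model, following the conventions of the course references (Manolakis, Hayes); the exact notation on the original slide may differ:

x(n) = \sum_{k=0}^{\infty} h(k)\, w(n-k)

r_x(l) = \sum_{n=-\infty}^{\infty} h(n)\, h^{*}(n-l), \qquad R_x(z) = H(z)\, H^{*}(1/z^{*}), \qquad R_x(e^{j\omega}) = |H(e^{j\omega})|^{2}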
• It can also be represented recursively with

• Where hI(m) is the impulse response of the inverse system of h(n).


• Another way to look at it is with the following
equation.
Parametric Signal Modeling
• Two steps in parametric signal modeling:
– Choose an appropriate model and
– Determine the parameters of the model.
• In a linear signal model, the signal is the output of a causal, stable, linear shift-invariant filter.

• The model parameters are the coefficients ak and bk.
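A sketch of the model form implied here (standard rational system function; the slide's own equation is not reproduced):

H(z) = \frac{B(z)}{A(z)} = \frac{\sum_{k=0}^{q} b_k z^{-k}}{1 + \sum_{k=1}^{p} a_k z^{-k}}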


Deterministic Signal Modeling
• The signal is modeled as the output of an LTI system driven by a unit impulse, so the model output is the impulse response h(n).

• The error in this modeling is the difference between the signal x(n) and the model output h(n).
• Different methods of minimizing this error.


Least Square Method
• It minimizes the sum of the squared errors.

• Taking the partial derivatives of this error with respect to ap(k) and bq(k) and setting them to zero leads to nonlinear equations that are not mathematically tractable.
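A sketch of the least-squares criterion being described, with e(n) = x(n) − h(n) the modeling error from the previous slide (standard form):

\mathcal{E}_{LS} = \sum_{n=0}^{\infty} |e(n)|^{2} = \sum_{n=0}^{\infty} |x(n) - h(n)|^{2}

Since h(n) depends nonlinearly on the denominator coefficients ap(k), the resulting equations are nonlinear in ap(k).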
Padé Approximation
• Reformulates the problem so that the filter coefficients can be found by forcing the error to zero for the first p+q+1 values of n (n = 0, …, p+q).

• In matrix form, this is a set of p+q+1 linear equations in the p+q+1 unknown coefficients.
• The solution differs depending on whether the coefficient matrix is invertible.
• Limitation
– No guarantee on how accurate the model is for
n>p+q since no data for n>p+q is used.
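A sketch of the Padé conditions (standard form, following Hayes): forcing e(n) = 0 for n = 0, …, p+q is equivalent to

x(n) + \sum_{k=1}^{p} a_p(k)\, x(n-k) = b_q(n), \qquad 0 \le n \le q

x(n) + \sum_{k=1}^{p} a_p(k)\, x(n-k) = 0, \qquad q+1 \le n \le q+p

The second set of p equations is solved first for the ap(k); the first set then gives the bq(n) directly.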
Prony’s Method
• Modifies the least square error definition,

• The new error is then given as

• Then minimizing this error with respect to ap(k).


• Setting the partial derivative to zero gives the orthogonality principle.
• Substituting this into the modified error equation and manipulating leads to the Prony normal equations.
• In matrix form, the coefficient matrix is Hermitian.
• The minimum error is then obtained by substituting this solution back into the error.
• The bq(k) are obtained by setting the error to zero for n = 0, …, q, the same as in the Padé approximation.

• Limitation
– Requires knowledge of data for all n.
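A sketch of Prony's quantities (standard form, following Hayes; notation may differ slightly from the original slides):

e(n) = x(n) + \sum_{k=1}^{p} a_p(k)\, x(n-k) - b_q(n), \qquad \mathcal{E}_{p,q} = \sum_{n=q+1}^{\infty} |e(n)|^{2}

Setting the partial derivatives to zero (orthogonality principle) gives the normal equations

\sum_{k=1}^{p} a_p(k)\, r_x(k,l) = -\, r_x(0,l), \qquad l = 1, \dots, p, \qquad \text{where } r_x(k,l) = \sum_{n=q+1}^{\infty} x(n-k)\, x^{*}(n-l),

and the minimum error is \mathcal{E}_{\min} = r_x(0,0) + \sum_{k=1}^{p} a_p(k)\, r_x(k,0).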
Padé vs Prony
• Model a signal consisting of a single pulse of length N = 21 with p = q = 1; a code sketch follows at the end of this slide.

Figure: resulting models from Prony's method and the Padé method.
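As an illustration, a minimal Python sketch of the two methods as described above (not the original course code; convmtx, pade, and prony are my own helper names, and numpy/scipy are assumed available), applied to the pulse example on this slide:

import numpy as np
from scipy.linalg import toeplitz

def convmtx(x, p):
    # Convolution (data) matrix with p columns: entry (n, k) = x[n - k], zero outside the data.
    x = np.asarray(x, dtype=float)
    col = np.concatenate([x, np.zeros(p - 1)])
    row = np.concatenate([[x[0]], np.zeros(p - 1)])
    return toeplitz(col, row)

def pade(x, p, q):
    # Pade approximation: force the modeling error to zero for n = 0, ..., p + q.
    X = convmtx(x, p + 1)
    Xq = X[q + 1:q + p + 1, 1:p + 1]                 # p x p block used to solve for a
    a = np.concatenate([[1.0], np.linalg.solve(Xq, -X[q + 1:q + p + 1, 0])])
    b = X[:q + 1, :p + 1] @ a                        # numerator from the first q + 1 equations
    return a, b

def prony(x, p, q):
    # Prony's method: least-squares fit of the denominator over all n > q.
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = convmtx(x, p + 1)
    Xq = X[q:N + p - 1, :p]                          # past samples for n = q+1, ..., N+p-1
    a1, *_ = np.linalg.lstsq(Xq, -X[q + 1:N + p, 0], rcond=None)
    a = np.concatenate([[1.0], a1])
    b = X[:q + 1, :p + 1] @ a                        # numerator chosen as in the Pade step
    return a, b

# Single pulse of length N = 21, modeled with p = q = 1 (data taken as zero beyond the pulse)
x = np.ones(21)
print("Pade :", pade(x, 1, 1))
print("Prony:", prony(x, 1, 1))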
Filter Design: Padé vs Prony
• Design a linear phase lowpass filter.

Figures: modeling error and magnitude response for the Padé and Prony designs.
Shanks’ Method
• Instead of forcing the error to zero to obtain the bq(k), Shanks' method minimizes the squared error; see the sketch below.
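A brief sketch of the idea (hedged; see Hayes for the full development): keep the ap(k) found by Prony's method, let g(n) be the impulse response of 1/A_p(z), and choose the bq(k) to minimize

\mathcal{E}_S = \sum_{n=0}^{\infty} \Big| x(n) - \sum_{k=0}^{q} b_q(k)\, g(n-k) \Big|^{2},

which is a linear least-squares problem in the bq(k).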

• Reading assignment.
Reading assignment
• FIR least-squares inverse filter
– Hayes: pp 166-174
• Iterative prefiltering
– Hayes: pp 174-177
Finite Data Records
• If x(n) is known only over the finite interval 0≤n≤N, Prony's method cannot be used as given.
• Assumptions have to be made about x(n) outside this interval.
Autocorrelation Method
• In this method, the data is assumed to be zero
outside 0≤n≤N.
• This is basically using a rectangular window to
obtain a new signal from x(n).

• Then solve for ap(k) using Prony's method, except that the autocorrelation-like quantities are now computed from the windowed signal, as sketched below.
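A sketch of how the Prony quantities change under this assumption, for a real-valued signal (standard result): with the windowed signal, r_x(k, l) depends only on the lag difference k − l, so the normal equations become Toeplitz:

r_x(k,l) = \sum_{n} x(n-k)\, x(n-l) = r_x(k-l), \qquad r_x(j) = \sum_{n=0}^{N-|j|} x(n+|j|)\, x(n)

This Toeplitz structure also underlies the stability advantage noted below.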
• Limitation
– Since the window forces x(n) to be zero outside
0≤n≤N, the accuracy of the model outside this range
is compromised.
– For 0≤n≤P the prediction is based on fewer data, so
the error may be greater.
• A non-rectangular window may be used. Hamming,
Hanning, …
• Advantage
– The model will be stable; that is, the poles of H(z) will be inside the unit circle.
Covariance Method
• Does not make any assumptions about the data outside the interval 0≤n≤N.
• The error is calculated only over P≤n≤N.

• The solution is given by normal equations identical in form to those of Prony's method; only the computation of the autocorrelation-like term changes, as sketched below.
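A sketch of the corresponding quantity, again for a real-valued signal: since the error is summed only over P ≤ n ≤ N,

r_x(k,l) = \sum_{n=P}^{N} x(n-k)\, x(n-l),

which depends on k and l individually rather than only on k − l.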
• Note that the autocorrelation matrix is not
Toeplitz.
– Computationally much more costly than the autocorrelation method.
Example: Autocorr. vs Covariance
• Obtain a first-order all-pole model for a given signal.
• The autocorrelation-method and covariance-method solutions are then compared.
Stochastic Signal Modeling
• Analyze the properties of a special class of
stationary random sequences that are
obtained by driving a linear, time-invariant
system with white noise.
• Previous methods not applicable for stochastic
signal modeling.
– Only probabilistic information is available. Errors
are also described probabilistically.
• If white noise is filtered with a stable LTI filter, random signals with almost any arbitrary aperiodic correlation structure or continuous PSD can be obtained.
Autoregressive Moving-Average (Pole-Zero)
Models
• A pole-zero stochastic model is a difference equation driven by white noise; its system function is a rational function H(z) = D(z)/A(z) (sketched at the end of this slide).
• Its short-term memory behavior is exponentially decaying.
• If a parametric model is excited with white noise w(n) ~ IID(0, σw²), the second moments of the random signal are as given below.
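A sketch of the standard ARMA(P, Q) relations being referred to (numerator coefficients written dk as in Manolakis; the slide's own equations are not reproduced):

x(n) = -\sum_{k=1}^{P} a_k\, x(n-k) + \sum_{k=0}^{Q} d_k\, w(n-k), \qquad H(z) = \frac{D(z)}{A(z)} = \frac{\sum_{k=0}^{Q} d_k z^{-k}}{1 + \sum_{k=1}^{P} a_k z^{-k}}

r_x(l) = \sigma_w^{2} \sum_{n} h(n)\, h^{*}(n-l), \qquad R_x(e^{j\omega}) = \sigma_w^{2}\, |H(e^{j\omega})|^{2}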
All-pole Models
• An all-pole model is obtained when Q = 0, that is, when the system function reduces to H(z) = d0/A(z).
• Its impulse response satisfies the same recursion, driven by a unit impulse d0·δ(n).
• Multiplying by h*(n−l) and summing over all n leads to a recursion for the autocorrelation.
• Since h(−l) = 0 for l > 0, the recursion takes one form for l = 1, …, P and another for l = 0.
• Combining these two sets of equations gives a set of linear equations in the model parameters.
• In matrix notation, these are the Yule-Walker equations.
• Here Rx is the autocorrelation matrix and a is the all-pole parameter vector (see the sketch below).
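A sketch of the Yule-Walker equations in the form implied here, assuming a real-valued signal and d0 = 1:

r_x(l) + \sum_{k=1}^{P} a_k\, r_x(l-k) = 0, \quad l = 1, \dots, P, \qquad\qquad r_x(0) + \sum_{k=1}^{P} a_k\, r_x(k) = \sigma_w^{2}

In matrix form, R_x a = -r_x, with R_x = [\, r_x(l-k) \,]_{l,k=1}^{P} (Toeplitz), a = [a_1, \dots, a_P]^{T} and r_x = [r_x(1), \dots, r_x(P)]^{T}.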
• So the first P+1 autocorrelation values determine the
coefficients.
• Conversely, the coefficients determine the first P+1
autocorrelation values.
• Normalizing the autocorrelations by dividing by r(0) gives the normalized autocorrelation ρ(l) = r(l)/r(0).
• The Yule-Walker equations can be re-written in terms of ρ(l).
• Given the coefficients, the autocorrelation can then be obtained.
• Second-order all-pole model
– Representing a1 and a2 in terms of the poles (sketched after the assignment below)
– To be stable and causal, both poles must lie inside the unit circle

Assignment 2.1: Implement the second-order filter with a1 = 0.5 and a2 = 0.25 and drive it with IID(0, 0.1). Then obtain a sample realization of length 20. From this sample realization, obtain a second-order deterministic all-pole model with Prony's method. Comment on the result.
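A sketch of the standard second-order relations, assuming poles p1 and p2 so that A(z) = (1 - p_1 z^{-1})(1 - p_2 z^{-1}):

a_1 = -(p_1 + p_2), \qquad a_2 = p_1 p_2

For a stable and causal model both poles must lie inside the unit circle, which for real coefficients is equivalent to

|a_2| < 1, \qquad |a_1| < 1 + a_2

(The assignment values a1 = 0.5, a2 = 0.25 satisfy these conditions.)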
All-zero Models
• The output of the all-zero model is the
weighted average of delayed versions of the
input signal

• Its impulse response is simply the sequence of coefficients: h(n) = dn for 0 ≤ n ≤ Q and zero otherwise.
• Its autocorrelation and spectrum are as given below.
• Note that these equations are nonlinear in the coefficients, so solving them for the dk is complicated.
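A sketch of the standard expressions, assuming the all-zero model x(n) = \sum_{k=0}^{Q} d_k\, w(n-k) with white-noise variance \sigma_w^{2}:

r_x(l) = \sigma_w^{2} \sum_{k=l}^{Q} d_k\, d^{*}_{k-l} \quad (0 \le l \le Q), \qquad r_x(l) = 0 \quad (l > Q), \qquad r_x(-l) = r_x^{*}(l)

R_x(e^{j\omega}) = \sigma_w^{2}\, |D(e^{j\omega})|^{2}

The products d_k d^{*}_{k-l} are what make solving for the dk from a given autocorrelation nonlinear.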
• Reading Assignment:
– Low-order all-zero models
Pole-Zero Models
• A general pole-zero model is given as

• Its impulse response is given as

• For n > Q, it can be shown that the impulse response satisfies the homogeneous all-pole recursion (the dk no longer contribute).
• Therefore, the first P+Q+1 values of the impulse response completely specify the pole-zero model.
• The autocorrelation can be represented in terms of the impulse response and the model coefficients.
• Noting that h(n) is causal, the relation simplifies for lags beyond Q.
• Therefore, the ak can be obtained from the equations for l = Q+1, …, Q+P, as sketched below.
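A sketch of the relation being used (standard result): because h(n) is causal and the dk no longer contribute for lags beyond Q, the autocorrelation satisfies

r_x(l) + \sum_{k=1}^{P} a_k\, r_x(l-k) = 0, \qquad l \ge Q+1

Writing this out for l = Q+1, …, Q+P gives P linear equations from which the ak can be solved.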

• Note that this is not a Toeplitz matrix.

Assignment 2.2: Show that this autocorrelation matrix is not Toeplitz.
• Even after the ak are obtained, solving for the dk is still nonlinear.
– Reading Assignment:
• Manolakis: pp 178-179
• Hayes: pp 190 -193
Reading Assignment
• Models with poles on the unit circle.
• Cepstrum of pole-zero models.

Assignment 2.3: Implement the autocorrelation and covariance methods in MATLAB. Record 1 minute of your speech, break it into 40-millisecond windows, and represent each of the windows by an all-pole model with P = 20.
Submit:
• The MATLAB code,
• The recorded speech and
• The model parameters.
Submission deadline: May 6, 2017
Via [email protected]
Title: SDSP, yourname
