Linear Prediction

Optimum Linear Filters

• In this chapter, we present the theory and application of optimum linear filters and predictors.

• We concentrate on linear filters that are optimum in the sense of minimizing the mean square error (MSE).

• The general theory is applied to the design of optimum FIR filters and linear predictors for nonstationary and stationary processes (Wiener filters).

Optimum signal estimation

• In the linear prediction problem we use the M past samples x(n−1), x(n−2), ..., x(n−M) of a signal to estimate the current sample x(n).

• Problem statement: Given a set of data $x_k(n)$ for $1 \le k \le M$, determine an estimate $\hat{y}(n)$ of the desired response $y(n)$ using the rule (estimator)
  $\hat{y}(n) = H\{x_k(n),\; 1 \le k \le M\}$
  which, in general, is a nonlinear function of the data.

• We want to find an estimator whose output approximates the desired response as closely as possible according to a certain performance criterion.

• Optimum here means the best under the given set of assumptions and conditions.

• The sensitivity of the performance to deviations from the assumed statistics is very important in practical applications of optimum estimators.


• Steps involved in the design of an optimum estimator:
  1. Select a computational structure (i.e. $H\{\cdot\}$) for the implementation of the estimator.
  2. Select a performance criterion.
  3. Optimize the performance criterion to determine the parameters of $H\{\cdot\}$.
  4. Evaluate the optimal performance and validate the estimator.

• In practice, we focus on performance criteria that
  1. depend only on the estimation error $e(n) = y(n) - \hat{y}(n)$,
  2. provide a sufficient measure of user satisfaction, and
  3. lead to a mathematically tractable problem.

• Typical performance criteria include

  Performance criterion                 Definition
  Mean square error criterion (MSE)     $E\{e(n)^2\}$
  Mean α-th-order error criterion       $E\{|e(n)|^{\alpha}\}$
  Sum of squared errors (SSE)           $\sum_{n=n_1}^{n_2} e(n)^2$
  others                                ...

• The MSE and the SSE criteria are the most widely used.

Linear mean square error estimation

• Problem statement: Design an estimator $\hat{y}(n)$ that provides an estimate of the desired response $y(n)$ using a linear combination of the data $x_k(n)$ for $1 \le k \le M$, such that the MSE $E\{e(n)^2\} = E\{(y(n) - \hat{y}(n))^2\}$ is minimized.
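As a concrete illustration (a hypothetical toy example, not from the original notes), the sketch below forms a fixed linear estimate from M = 2 data channels and evaluates the MSE and SSE criteria defined above using sample averages:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data channels x_1(n), x_2(n) and desired response y(n)
N = 1000
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
y = 0.7 * x1 - 0.4 * x2 + 0.05 * rng.standard_normal(N)

# A fixed (not yet optimized) linear estimate y_hat(n) = c1*x1(n) + c2*x2(n)
c1, c2 = 0.5, -0.5
y_hat = c1 * x1 + c2 * x2

e = y - y_hat          # estimation error e(n) = y(n) - y_hat(n)
mse = np.mean(e**2)    # sample estimate of the MSE criterion E{e(n)^2}
sse = np.sum(e**2)     # SSE criterion: sum of e(n)^2 over the block
print("MSE =", mse, " SSE =", sse)
```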

• Solution: linear estimator
  $\hat{y}(n) = \sum_{k=1}^{M} c_k(n)\, x_k(n)$

• Let $y = y(n)$, $e = e(n)$, $\mathbf{c} = (c_1(n), c_2(n), \ldots, c_M(n))^T$ and $\mathbf{x} = (x_1(n), x_2(n), \ldots, x_M(n))^T$.

• Objective function: $J = E\{e^2\} = E\{(y - \mathbf{c}^T \mathbf{x})^2\}$

• J is minimized where $\partial J / \partial \mathbf{c} = 0$:
  $\frac{\partial J}{\partial \mathbf{c}} = E\left\{\frac{\partial}{\partial \mathbf{c}} (y - \mathbf{c}^T \mathbf{x})^2\right\} = E\{-2 (y - \mathbf{c}^T \mathbf{x})\, \mathbf{x}\} = 0$
  $\Rightarrow E\{\mathbf{x} y\} = E\{\mathbf{x} \mathbf{x}^T\}\, \mathbf{c}$
  $\Rightarrow R_x \mathbf{c} = \mathbf{r}_{xy}$
  $\Rightarrow \mathbf{c} = R_x^{-1} \mathbf{r}_{xy}$
  where $R_x = E\{\mathbf{x} \mathbf{x}^T\}$ and $\mathbf{r}_{xy} = E\{\mathbf{x} y\}$.

• The equation $R_x \mathbf{c} = \mathbf{r}_{xy}$ is known as the normal equations. Written out,
  $\begin{pmatrix} E\{x_1(n) x_1(n)\} & E\{x_1(n) x_2(n)\} & \cdots & E\{x_1(n) x_M(n)\} \\ E\{x_2(n) x_1(n)\} & E\{x_2(n) x_2(n)\} & \cdots & E\{x_2(n) x_M(n)\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{x_M(n) x_1(n)\} & E\{x_M(n) x_2(n)\} & \cdots & E\{x_M(n) x_M(n)\} \end{pmatrix} \begin{pmatrix} c_1(n) \\ c_2(n) \\ \vdots \\ c_M(n) \end{pmatrix} = \begin{pmatrix} E\{y(n) x_1(n)\} \\ E\{y(n) x_2(n)\} \\ \vdots \\ E\{y(n) x_M(n)\} \end{pmatrix}$

• $R_x$ and $\mathbf{r}_{xy}$ are, respectively, the autocorrelation matrix of the data $x_k(n)$ and the cross-correlation vector between the data $x_k(n)$ and the desired response $y(n)$.

• The number M of data components used is called the order of the estimator.

• All random variables are assumed to have zero mean.

• The parameter vector $\mathbf{c}$ that minimizes J, say $\mathbf{c}_m$, is known as the linear MMSE (LMMSE) estimator, and the corresponding $\hat{y}(n)$ as the LMMSE estimate.

• The minimum of J is given by
  $J_{\min} = E\{(y - \mathbf{c}_m^T \mathbf{x})^2\} = E\{(y - \mathbf{c}_m^T \mathbf{x})\, y\} - E\{(y - \mathbf{c}_m^T \mathbf{x})\, \mathbf{x}^T\}\, \mathbf{c}_m$
  The second term vanishes because the optimum error is orthogonal to the data, so
  $J_{\min} = E\{y^2\} - \mathbf{c}_m^T E\{\mathbf{x} y\} = E\{y^2\} - \mathbf{r}_{xy}^T R_x^{-1} \mathbf{r}_{xy}$
  or, equivalently,
  $J_{\min} = E\{y^2\} - \mathbf{c}_m^T \mathbf{r}_{xy} = E\{y^2\} - \mathbf{c}_m^T R_x \mathbf{c}_m$
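The normal equations translate directly into a few lines of numerical code. Below is a minimal sketch (a hypothetical toy problem, not from the original notes) that replaces the expectations with sample averages, solves $R_x \mathbf{c} = \mathbf{r}_{xy}$, and evaluates $J_{\min}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: M = 3 zero-mean data channels x_k(n) and a desired response y(n)
# that is a noisy linear combination of them.
M, N = 3, 10_000
X = rng.standard_normal((M, N))                 # rows are x_1(n), ..., x_M(n)
y = 0.5 * X[0] - 1.2 * X[1] + 0.3 * X[2] + 0.1 * rng.standard_normal(N)

# Sample estimates of R_x = E{x x^T} and r_xy = E{x y}
R_x = X @ X.T / N
r_xy = X @ y / N

# Normal equations: R_x c = r_xy  =>  c = R_x^{-1} r_xy
c_m = np.linalg.solve(R_x, r_xy)

# Minimum MSE: J_min = E{y^2} - c_m^T r_xy
J_min = np.mean(y**2) - c_m @ r_xy

print("LMMSE coefficients:", c_m)   # close to (0.5, -1.2, 0.3)
print("J_min:", J_min)              # close to the noise variance 0.01
```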


Optimum FIR filters

• In this section, we apply the theory of general LMMSE estimators to the design of optimum linear filters whose performance is optimum in the MSE sense.

• Linear MMSE filters are often referred to as Wiener filters.

• The FIR filter output is
  $\hat{y}(n) = \sum_{k=0}^{M-1} h(k)\, x(n-k)$

• Let $y = y(n)$, $e = e(n)$, $\mathbf{c} = (h(0), h(1), \ldots, h(M-1))^T$ and $\mathbf{x} = (x(n), x(n-1), \ldots, x(n-M+1))^T$.

• Objective function: $J = E\{e^2\} = E\{(y - \mathbf{c}^T \mathbf{x})^2\}$

• Exactly as in the general case, setting $\partial J / \partial \mathbf{c} = 0$ gives the normal equations
  $R_x \mathbf{c} = \mathbf{r}_{xy} \;\Rightarrow\; \mathbf{c} = R_x^{-1} \mathbf{r}_{xy}$
  where $R_x = E\{\mathbf{x} \mathbf{x}^T\}$ and $\mathbf{r}_{xy} = E\{\mathbf{x} y\}$.

• The parameter vector that minimizes J, say $\mathbf{c}_m$, is the desired coefficient vector of the filter, and
  $J_{\min} = E\{y^2\} - \mathbf{c}_m^T R_x \mathbf{c}_m$

• We prefer FIR over IIR filters because
  (1) any stable IIR filter can be approximated to any desirable degree by an FIR filter, and
  (2) optimum FIR filters are easily obtained by solving a linear system of equations.

• When the input and desired response stochastic processes are jointly wide-sense stationary, the matrix $R_x$ is Toeplitz, and it is positive definite unless the components of the data vector are linearly dependent.

• Summary of the properties of the optimum filter:

                  x(n) is stationary                         x(n) is nonstationary
  $R_x$           Hermitian, nonnegative definite,           Hermitian and nonnegative definite
                  and Toeplitz
  $\mathbf{c}$    Time-invariant                             Time-varying

• If M = ∞, we have a causal IIR filter.
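To make the Wiener-filter construction concrete, here is a minimal sketch (a hypothetical noise-reduction setup, not from the original notes) that builds the Toeplitz matrix $R_x$ from an assumed AR(1)-plus-white-noise model and solves the normal equations for the filter taps:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical model: s(n) = a*s(n-1) + w(n) with Var{w} = 1, observed as
# x(n) = s(n) + v(n) with white noise v(n), Var{v} = sigma_v2, s and v independent.
a, sigma_v2, M = 0.9, 1.0, 8

# Theoretical autocorrelations for lags 0..M-1:
#   r_s(l) = a^|l| / (1 - a^2),  r_x(l) = r_s(l) + sigma_v2 * delta(l)
lags = np.arange(M)
r_s = a**lags / (1.0 - a**2)
r_x = r_s + sigma_v2 * (lags == 0)

# Normal (Wiener-Hopf) equations R_x h = r_xy with desired response y(n) = s(n),
# so r_xy(k) = E{x(n-k) s(n)} = r_s(k); the WSS assumption makes R_x Toeplitz.
R_x = toeplitz(r_x)
h = np.linalg.solve(R_x, r_s)

# Minimum MSE: J_min = E{s^2} - h^T r_xy, compared with no filtering (MSE = sigma_v2)
J_min = r_s[0] - h @ r_s
print("Wiener filter taps:", h)
print("J_min =", J_min, "versus raw noise power", sigma_v2)
```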

Linear prediction

• Linear signal estimation is another application of optimum filtering.

• Problem statement: Given a set of values x(n), x(n−1), ..., x(n−M) of a stochastic process, we wish to estimate the value of x(n−i) using a linear combination of the remaining samples.

• The estimate and the corresponding estimation error are given by
  $\hat{x}(n-i) = \sum_{k=0,\, k \ne i}^{M} c_k(n)\, x(n-k)$
  $e(n) = x(n-i) - \hat{x}(n-i)$

• The choice of the index i determines the type of problem:

  Case                          Condition
  Forward linear prediction     i = 0
  Linear signal estimation      i ∈ {1, 2, ..., M−1}
  Backward linear prediction    i = M

• Let $y = x(n-i)$, $e = e(n)$,
  $\mathbf{c} = (c_0(n), c_1(n), \ldots, c_{i-1}(n), c_{i+1}(n), \ldots, c_M(n))^T$ and
  $\mathbf{x} = (x(n), x(n-1), \ldots, x(n-i+1), x(n-i-1), \ldots, x(n-M))^T$.

• Objective function: $J = E\{e^2\} = E\{(y - \mathbf{c}^T \mathbf{x})^2\}$

• As before, setting $\partial J / \partial \mathbf{c} = 0$ gives the normal equations
  $R_x \mathbf{c} = \mathbf{r}_{xy} \;\Rightarrow\; \mathbf{c} = R_x^{-1} \mathbf{r}_{xy}$
  where $R_x = E\{\mathbf{x} \mathbf{x}^T\}$ and $\mathbf{r}_{xy} = E\{\mathbf{x} y\}$.

• The parameter vector that minimizes J, say $\mathbf{c}_m$, is the desired vector of prediction coefficients, and
  $J_{\min} = E\{y^2\} - \mathbf{c}_m^T R_x \mathbf{c}_m$

• If the process x(n) is stationary, then the correlation matrix $R_x(n)$ does not depend on n and it is Toeplitz.
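As an illustration (a hypothetical example, not from the original notes), the sketch below performs forward linear prediction (i = 0) of a stationary AR(2) process: because $R_x$ is Toeplitz, the normal equations can be built from a single estimated autocorrelation sequence and solved efficiently:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(1)

# Synthesize a stationary AR(2) process: x(n) = 0.75 x(n-1) - 0.5 x(n-2) + w(n)
N = 50_000
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + w[n]

# Estimate the autocorrelation r(l) = E{x(n) x(n-l)} for lags 0..M
M = 2                                   # predictor order
r = np.array([np.mean(x[l:] * x[:N - l]) for l in range(M + 1)])

# Forward prediction (i = 0): estimate x(n) from x(n-1), ..., x(n-M).
# Normal equations R_x c = r_xy with Toeplitz R_x built from r(0..M-1), r_xy = r(1..M).
c_m = solve_toeplitz(r[:M], r[1:M + 1])

# Minimum MSE: J_min = r(0) - c_m^T r_xy
J_min = r[0] - c_m @ r[1:M + 1]

print("predictor coefficients:", c_m)   # close to (0.75, -0.5)
print("J_min:", J_min)                  # close to the driving-noise variance 1.0
```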

