Linear Prediction
• Steps involved in the design of an optimum estimator:
  1. Select a computational structure (i.e. H{·}) for the implementation of the estimator.
  2. Select a performance criterion.
  3. Optimize the performance criterion to determine the parameters of H{·}.
  4. Evaluate the optimal performance and validate the estimator.
• The MSE and the SSE criteria are the most widely used criteria.

Linear mean square error estimation

• Problem statement:
  Design an estimator ŷ(n) that provides an estimate of the desired response y(n) using a linear combination of the data x_k(n) for 1 ≤ k ≤ M, such that the MSE E{e(n)²} = E{(y(n) − ŷ(n))²} is minimized.
• Solution: linear estimator
  ŷ(n) = Σ_{k=1}^{M} c_k(n) x_k(n)
• Let y = y(n), e = e(n),
  c = (c_1(n), c_2(n), ..., c_M(n))^T and
  x = (x_1(n), x_2(n), ..., x_M(n))^T
• Objective function J = E{e²} = E{(y − c'x)²}
• J is minimized at ∂J/∂c = 0 ⇒
  ∂J/∂c = E{∂/∂c (y − c'x)²} = E{−2(y − c'x) x'} = 0
  ⇒ E{y x'} = E{c' x x'} = c' E{x x'}
  ⇒ E{x y} = E{x x'} c
  ⇒ R_x c = r_xy
  ⇒ c = R_x^{−1} r_xy
  where R_x = E{x x'} and r_xy = E{x y}.
• R_x and r_xy are, respectively, the autocorrelation matrix of the data x_k(n) and the cross-correlation vector between the data x_k(n) and the desired response y(n).
• The number M of data components used is called the order of the estimator.
• All random variables are assumed to have zero-mean values.
• The equation R_x c = r_xy is known as the normal equations; written out,
  [ E{x_1(n)x_1(n)}  E{x_1(n)x_2(n)}  ...  E{x_1(n)x_M(n)} ] [ c_1(n) ]   [ E{y(n)x_1(n)} ]
  [ E{x_2(n)x_1(n)}  E{x_2(n)x_2(n)}  ...  E{x_2(n)x_M(n)} ] [ c_2(n) ] = [ E{y(n)x_2(n)} ]
  [       ...              ...        ...        ...       ] [  ...   ]   [      ...      ]
  [ E{x_M(n)x_1(n)}  E{x_M(n)x_2(n)}  ...  E{x_M(n)x_M(n)} ] [ c_M(n) ]   [ E{y(n)x_M(n)} ]
• The parameter vector c that optimizes J, say c_m, is known as the linear MMSE (LMMSE) estimator, and ŷ(n) as the LMMSE estimate.
• The minimum J is given by
  J_min = E{(y − c_m'x)²} = E{(y − c_m'x)(y − c_m'x)}
        = E{(y − c_m'x) y} − E{(y − c_m'x) x'} c_m
  The second term vanishes because the normal equations imply E{(y − c_m'x) x'} = 0 (orthogonality principle), so
  J_min = E{y²} − c_m' E{x y} = E{y²} − c_m' r_xy = E{y²} − r_xy' R_x^{−1} r_xy
  or, equivalently,
  J_min = E{y²} − c_m' R_x c_m
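A minimal numerical sketch of this result, assuming synthetic zero-mean data and the variable names below (none of which come from the slides): the correlations R_x and r_xy are estimated by sample averages, the normal equations R_x c = r_xy are solved, and the resulting J_min is checked against the expressions above.

import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 100_000                        # estimator order and number of samples (assumed)

# Assumed zero-mean data: y(n) is a linear mix of the x_k(n) plus noise.
X = rng.standard_normal((N, M))          # rows are data vectors (x_1(n), ..., x_M(n))
c_true = np.array([0.9, -0.5, 0.3, 0.1])
y = X @ c_true + 0.1 * rng.standard_normal(N)

# Sample estimates of R_x = E{x x'} and r_xy = E{x y}.
R_x = X.T @ X / N
r_xy = X.T @ y / N

# Normal equations R_x c = r_xy  =>  c_m = R_x^{-1} r_xy.
c_m = np.linalg.solve(R_x, r_xy)

# Minimum MSE, computed in the two equivalent closed forms and directly.
Ey2 = np.mean(y ** 2)
J_min_a = Ey2 - c_m @ r_xy               # E{y^2} - c_m' r_xy
J_min_b = Ey2 - c_m @ R_x @ c_m          # E{y^2} - c_m' R_x c_m
J_direct = np.mean((y - X @ c_m) ** 2)   # E{(y - c_m' x)^2}
print(c_m, J_min_a, J_min_b, J_direct)   # c_m close to c_true; all three J values close to 0.01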
• Linear MMSE filters are often referred to as Wiener filters.
• For an FIR filter, the estimate is
  ŷ(n) = Σ_{k=0}^{M−1} h(k) x(n − k)
• Let y = y(n), e = e(n),
  c = (h(0), h(1), ..., h(M−1))^T and
  x = (x(n), x(n−1), ..., x(n−M+1))^T
• Objective function J = E{e²} = E{(y − c'x)²}
• The parameter vector c that optimizes J, say c_m, is the desired coefficient vector of the filter, with
  J_min = E{y²} − c_m' R_x c_m
• We prefer FIR over IIR filters because
  (1) any stable IIR filter can be approximated to any desired degree of accuracy by an FIR filter, and
  (2) optimum FIR filters are easily obtained by solving a linear system of equations.
• When the input and desired response stochastic processes are jointly wide-sense stationary, the matrix R_x is Toeplitz, and it is positive definite unless the components of the data vector are linearly dependent (see the sketch below).
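As an illustration of the last point, the following sketch computes an optimum FIR Wiener filter for an assumed toy noise-reduction scenario (the AR(1) signal, noise level, filter order, and variable names are example choices, not from the slides). Because the input is WSS, R_x is Toeplitz, so only its first column, the autocorrelation sequence r_x(0), ..., r_x(M−1), is needed, and the system can be solved with scipy.linalg.solve_toeplitz.

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N, M = 100_000, 8                        # sample count and filter order (assumed)

# Assumed scenario: desired signal y(n) is an AR(1) process, and the
# observed input is x(n) = y(n) + white noise.
y = lfilter([1.0], [1.0, -0.95], rng.standard_normal(N))
x = y + 0.5 * rng.standard_normal(N)

# Sample autocorrelation r_x(l) = E{x(n) x(n-l)} and cross-correlation
# r_xy(l) = E{x(n-l) y(n)} for lags l = 0, ..., M-1.
r_x  = np.array([np.mean(x[l:] * x[:N - l]) for l in range(M)])
r_xy = np.array([np.mean(x[:N - l] * y[l:]) for l in range(M)])

# For a WSS input R_x is Toeplitz, so its first column r_x determines it;
# solve the normal equations R_x h = r_xy.
h = solve_toeplitz(r_x, r_xy)

# Apply yhat(n) = sum_k h(k) x(n-k) and compare mean squared errors.
yhat = np.convolve(x, h)[:N]
print("MSE before filtering:", np.mean((y - x) ** 2))
print("MSE after filtering: ", np.mean((y - yhat) ** 2))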
Linear prediction

• Linear signal estimation is another application of optimum filtering.
• Problem statement:
  Given a set of values x(n), x(n−1), ..., x(n−M) of a stochastic process, we wish to estimate the value of x(n−i) using a linear combination of the remaining samples.
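To make the setup concrete, the fragment below (an illustrative sketch; the helper name prediction_data, the window length, and the choice i = 2 are assumptions for the example) shows how this maps onto the general framework: the desired response is y(n) = x(n−i) and the data vector collects the remaining M samples; the normal equations derived next then give the optimal coefficients.

import numpy as np

def prediction_data(x, M, i):
    """For each n, return the data vector (the M remaining samples of the
    window [x(n), x(n-1), ..., x(n-M)]) and the desired response y(n) = x(n-i)."""
    data, desired = [], []
    for n in range(M, len(x)):
        window = x[n - M:n + 1][::-1]       # [x(n), x(n-1), ..., x(n-M)]
        desired.append(window[i])           # desired response y(n) = x(n-i)
        data.append(np.delete(window, i))   # remaining M samples -> data vector
    return np.array(data), np.array(desired)

# Assumed example: windows of M + 1 = 5 samples, estimate the middle one (i = 2).
x = np.random.default_rng(2).standard_normal(1000)
X, y = prediction_data(x, M=4, i=2)         # feed these into R_x c = r_xy, derived next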
• Objective function J = E{e²} = E{(y − c'x)²}
• As before, J is minimized at ∂J/∂c = 0 ⇒
  ∂J/∂c = E{∂/∂c (y − c'x)²} = E{−2(y − c'x) x'} = 0
  ⇒ E{y x'} = E{c' x x'} = c' E{x x'}
  ⇒ E{x y} = E{x x'} c
  ⇒ R_x c = r_xy
  ⇒ c = R_x^{−1} r_xy
  where R_x = E{x x'} and r_xy = E{x y}.
• The parameter vector c that optimizes J, say c_m, is the desired prediction coefficient vector.
• J_min = E{y²} − c_m' R_x c_m
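For the common special case of one-step forward prediction, estimating x(n) from the M previous samples, the sketch below (with an assumed AR(2) test signal and predictor order M = 2) solves the normal equations and checks J_min = E{y²} − c_m' R_x c_m against the measured prediction error.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
N, M = 100_000, 2                         # sample count and predictor order (assumed)

# Assumed test signal: AR(2) process x(n) = 0.75 x(n-1) - 0.5 x(n-2) + w(n),
# for which the optimal one-step predictor is (0.75, -0.5) and J_min = var(w) = 1.
x = lfilter([1.0], [1.0, -0.75, 0.5], rng.standard_normal(N))

# Desired response y(n) = x(n); data vector = (x(n-1), ..., x(n-M)).
Xd = np.column_stack([x[M - 1 - k:N - 1 - k] for k in range(M)])
y = x[M:]

R_x  = Xd.T @ Xd / len(y)                 # sample autocorrelation matrix (Toeplitz here)
r_xy = Xd.T @ y / len(y)                  # sample cross-correlation vector
c_m  = np.linalg.solve(R_x, r_xy)         # normal equations -> prediction coefficients

J_min = np.mean(y ** 2) - c_m @ R_x @ c_m # E{y^2} - c_m' R_x c_m
J_emp = np.mean((y - Xd @ c_m) ** 2)      # measured prediction error
print(c_m, J_min, J_emp)                  # c_m close to (0.75, -0.5); both J values close to 1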