Wiener Filters - Chapters 5 & 6 (2020)

This chapter discusses the Wiener filter and linearly constrained minimum variance (LCMV) filter. The Wiener filter is an optimal linear filter that minimizes the mean squared error between the desired signal and estimated signal. It is derived using the orthogonality principle and solved using the Wiener-Hopf equations. The error performance surface of the Wiener filter is bowl-shaped, with the minimum mean squared error occurring at the bottom of the bowl. The LCMV filter incorporates linear constraints and is solved using Lagrange multipliers.

Chapter 5:

Wiener Filter
• Wiener filter
• Principle of orthogonality
• Wiener-Hopf equations
• Error performance surface
• Linearly Constrained Minimum Variance (LCMV) Filter
• Examples: Fixed Weight Beamforming

References

[1] Simon Haykin, Adaptive Filter Theory, Prentice Hall, 1996.
[2] Steven M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.
[3] Alan V. Oppenheim and Ronald W. Schafer, Discrete-Time Signal Processing, Prentice Hall, 1989.
[4] Athanasios Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, 1991.

5. Wiener Filter: Linear Optimum Filtering
• Block diagram representation of the optimum filtering problem: [diagram omitted: the input u(n) drives a linear discrete-time filter w whose output y(n) estimates the desired response d(n); the difference forms the estimation error e(n)]

• The goal of the optimum filter is to provide an estimate of the desired response that is "as close as possible".

• Questions:
  - Is the filter impulse response finite or infinite?
  - What statistical criterion is used for optimization?
5. Wiener Filter: Statistical Criteria for Optimization
• Typical choices:
  - Mean-squared value of the error
  - Expectation of the absolute value of the error
  - Expectation of third or higher powers of the absolute value of the error

• The first choice is preferred because it leads to a mathematically tractable solution.

• Minimum Mean-Squared Error (MMSE) criterion:
  - Design the linear discrete-time filter such that the mean-squared value of the estimation error is minimized.
5. WF: Mean-Squared Error (Cost Function)
Mean-Squared Error (Cost Function):

$$J = E\{e(n)e^*(n)\} = E\{|e(n)|^2\}$$

where

$$e(n) = d(n) - y(n), \qquad y(n) = \sum_{k=0}^{M-1} w_k^*\, u(n-k) = \mathbf{w}^H \mathbf{u}(n)$$

$$\mathbf{u}(n) = [u(n),\, u(n-1),\, \ldots,\, u(n-M+1)]^T$$

Question: What is the condition for J = J_min?

5. WF: Principle of Orthogonality (1)
• Taking the derivative of J with respect to w*:

$$\frac{\partial J}{\partial \mathbf{w}^*} = \frac{\partial}{\partial \mathbf{w}^*} E\{e(n)e^*(n)\} = -E\{\mathbf{u}(n)\,e^*(n)\}$$

Setting

$$\frac{\partial J}{\partial \mathbf{w}^*} = \mathbf{0} \;\Rightarrow\; E\{\mathbf{u}(n)\,e_{opt}^*(n)\} = \mathbf{0} \quad \text{: Principle of orthogonality}$$

• Moreover, $E\{y(n)e^*(n)\} = \mathbf{w}^H E\{\mathbf{u}(n)e^*(n)\}$. With the optimum condition

$$E\{\mathbf{u}(n)\,e_{opt}^*(n)\} = \mathbf{0}$$

then

$$E\{y_{opt}(n)\,e_{opt}^*(n)\} = 0$$

and $y_{opt}(n) + e_{opt}(n) = d(n)$.

5. WF: Principle of Orthogonality (2)

• The principle of orthogonality states that:
  - The necessary and sufficient condition for the cost function J to attain its minimum value is that the estimation error and the input values are orthogonal to each other.

• The corollary:
  - When the filter operates in its optimum condition, the output of the filter is orthogonal to the estimation error.

5. WF: Wiener-Hopf Equations (1)
• The MMSE criterion for designing the optimal filter leads to the set of equations:

$$E\{u(n-k)\,e^*(n)\} = 0, \quad k = 0, 1, 2, \ldots, M-1$$

then

$$\sum_{i=0}^{M-1} w_{opt,i}\, E\{u(n-k)\,u^*(n-i)\} = E\{u(n-k)\,d^*(n)\}$$

i.e., $\sum_{i=0}^{M-1} w_{opt,i}\, r(i-k) = p(-k)$ for $k = 0, 1, \ldots, M-1$, where r(k) is the autocorrelation function of the input, p(k) is the cross-correlation between the input and the desired response, and w_opt,i is the ith optimal weight value.

• These equations are known as the Wiener-Hopf equations and form the basis for adaptive filtering algorithms.
5. WF: Wiener-Hopf Equations (2)
• In matrix form:

$$\mathbf{R}\mathbf{w}_{opt} = \mathbf{p}$$

where

$$\mathbf{R} = E\{\mathbf{u}(n)\mathbf{u}^H(n)\} \in \mathbb{C}^{M\times M}, \quad \mathbf{R} = \mathbf{R}^H$$

$$\mathbf{p} = E\{\mathbf{u}(n)\,d^*(n)\} \in \mathbb{C}^{M}$$

then

$$\mathbf{w}_{opt} = \mathbf{R}^{-1}\mathbf{p}$$

5. WF: Error Performance Surface
• The cost function J for the transversal filter can be written as:

$$J(\mathbf{w}) = E\{e(n)e^*(n)\} = \sigma_d^2 - \mathbf{w}^H\mathbf{p} - \mathbf{p}^H\mathbf{w} + \mathbf{w}^H\mathbf{R}\mathbf{w}$$

where σ_d² is the variance of the desired signal.

• In the optimum condition, we obtain the MMSE:

$$J_{min} = J(\mathbf{w}_{opt}) = \sigma_d^2 - \mathbf{p}^H\mathbf{w}_{opt} = \sigma_d^2 - \mathbf{p}^H\mathbf{R}^{-1}\mathbf{p}$$

• The cost function is a second-order function of the tap weights. It can be visualized as a bowl-shaped (M+1)-dimensional surface with M degrees of freedom.

5. WF: Example 1 (1)
• In a Wiener filtering problem, the correlation matrix and the cross-correlation vector are:

$$\mathbf{R} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad \mathbf{p} = [0.5 \;\; 0.25]^T$$

  - Evaluate the tap weights of the Wiener filter.
  - Express the cost function in terms of the weights.
  - Plot the error performance surface, assuming that the variance of the desired signal is 0.5. What is the minimum mean squared error?
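
As a quick numerical check, the sketch below (my addition, not part of the original slides) solves R w_opt = p and evaluates J_min = σ_d² − p^H w_opt for this example:

```python
import numpy as np

# Example 1 data
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])   # correlation matrix of the input
p = np.array([0.5, 0.25])    # cross-correlation vector
var_d = 0.5                  # variance of the desired signal

# Wiener solution: R w_opt = p
w_opt = np.linalg.solve(R, p)

# Minimum mean squared error: J_min = var_d - p^H w_opt
J_min = var_d - p @ w_opt

print("w_opt =", w_opt)      # [0.5, 0.0]
print("J_min =", J_min)      # 0.25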

5. WF: Example 1 (2)-(4)

[Worked-solution slides omitted: tap-weight computation, the cost function expressed in the weights, and the error-performance-surface plot for Example 1.]
5. WF: Example 2
• Consider the system identification problem shown below: [block diagram omitted: the input u(n) drives the unknown system H(z); its output, corrupted by additive noise v(n), serves as the desired response d(n) for the Wiener filter]

Both u(n) and v(n) are zero-mean, white noise sequences with variances 0.5 and 0.1, respectively. If H(z) = 1 − 0.5z⁻¹ + 0.25z⁻², find the optimum Wiener solution for filter lengths of 1, 2, 3, and 4. In each case, calculate the minimum mean squared error.
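
Because u(n) is white, R = σ_u² I and p(k) = σ_u² h_k, so the Wiener solution is simply the first M plant coefficients (zero-padded for M > 3). A minimal sketch (my addition), assuming the standard setup d(n) = Σ_k h_k u(n−k) + v(n):

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])   # plant: H(z) = 1 - 0.5 z^-1 + 0.25 z^-2
var_u, var_v = 0.5, 0.1           # input and measurement-noise variances

var_d = var_u * np.sum(h**2) + var_v   # variance of the desired response d(n)

for M in (1, 2, 3, 4):
    hM = np.concatenate([h, np.zeros(max(0, M - len(h)))])[:M]
    R = var_u * np.eye(M)          # white input: R = var_u * I
    p = var_u * hM                 # p(k) = var_u * h_k
    w_opt = np.linalg.solve(R, p)  # equals hM here
    J_min = var_d - p @ w_opt
    print(f"M={M}: w_opt={w_opt}, J_min={J_min:.5f}")
# J_min drops to var_v = 0.1 once M >= 3 (the plant is fully modeled)
```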
5. WF: Constrained Optimization

• The Wiener-Hopf equations are sometimes referred to as unconstrained optimization.

• In some situations, a linear constraint is placed on the solution that minimizes the mean squared error.

• The resulting solution, termed the Linearly Constrained Minimum Variance (LCMV) filter, is found using Lagrange multipliers.

5. WF: LCMV Filter (1)
• Output of the filter: $y(n) = \mathbf{w}^H\mathbf{u}(n)$.

Assume a complex sinusoidal excitation $u(n) = e^{j\Omega_0 T n}$; then

$$y(n) = \mathbf{w}^H e^{j\omega_0 n}\,\mathbf{a}(\omega_0) = e^{j\omega_0 n}\,\mathbf{w}^H\mathbf{a}(\omega_0)$$

where $\omega_0 = \Omega_0 T$ and

$$\mathbf{a}(\omega_0) = \left[1,\; e^{-j\omega_0},\; \ldots,\; e^{-j\omega_0(M-1)}\right]^T$$

• The linear constraint $\mathbf{w}^H\mathbf{a}(\omega_0) = g^*$ enforces:
  - a certain value g* of the transfer function of the filter at frequency Ω₀ = ω₀/T (case of temporal frequency), or
  - a certain antenna gain at an angle of arrival θ₀, with ω₀ = 2πΔ sin θ₀/λ, where Δ is the inter-element spacing and λ is the wavelength of the incoming signal (case of spatial frequency),

while seeking the weight vector w_opt that minimizes the output power E[y(n)y*(n)].

5. WF: LCMV Filter (2)
• That is:

$$J(\mathbf{w}) = E\{y(n)y^*(n)\} = E\{\mathbf{w}^H\mathbf{u}(n)\mathbf{u}^H(n)\mathbf{w}\} = \mathbf{w}^H\mathbf{R}\mathbf{w} \to J_{min}$$

subject to $\mathbf{w}^H\mathbf{a}(\omega_0) = g^*$.

• Using Lagrange multipliers (see Reference [1]), we finally obtain:

$$\mathbf{w}_{opt} = \frac{g\,\mathbf{R}^{-1}\mathbf{a}(\omega_0)}{\mathbf{a}^H(\omega_0)\,\mathbf{R}^{-1}\mathbf{a}(\omega_0)}$$

• When g = 1, this becomes the Minimum Variance Distortionless Response (MVDR) filter, with

$$\mathbf{w}_{opt} = \frac{\mathbf{R}^{-1}\mathbf{a}(\omega_0)}{\mathbf{a}^H(\omega_0)\,\mathbf{R}^{-1}\mathbf{a}(\omega_0)}$$

5. Smart Antennas (1)
• Traditional array antennas, where the main beam is steered to directions of interest, are called phased arrays, beamsteered arrays, or scanned arrays. The beam is steered via phase shifters, often implemented at RF frequencies. This general approach to phase shifting has been referred to as electronic beamsteering because of the attempt to change the phase of the current directly at each antenna element.

• Modern beamsteered array antennas, where the pattern is shaped according to certain optimum criteria, are called smart antennas. Smart antennas have alternatively been called digital beamformed (DBF) arrays or adaptive arrays (when adaptive algorithms are employed). The term smart implies the use of digital signal processing in order to shape the beam pattern according to certain conditions. Since an antenna pattern (or beam) is formed by digital signal processing, this process is often referred to as digital beamforming.
5. Smart Antennas (2)
• Smart antennas can be applied to improve radar systems, to increase system capacity in mobile wireless networks, and to improve wireless communications through the implementation of space division multiple access (SDMA).

• Smart antenna patterns are controlled via algorithms based upon certain criteria. These criteria could be maximizing the signal-to-interference ratio (SIR), minimizing the variance, minimizing the mean-square error (MSE), steering toward a signal of interest, or nulling the interfering signals.

• The implementation of these algorithms can be performed electronically through analog devices, but it is generally more easily performed using digital signal processing. This requires that the array outputs be digitized through the use of an A/D converter. This digitization can be performed at either IF or baseband frequencies.

5. Smart Antennas (3)
• The main advantage of digital beamforming is that phase shifting and array weighting can be performed on the digitized data rather than being implemented in hardware. If the parameters of operation are changed or the detection criteria are modified, the beamforming can be changed by simply changing an algorithm rather than by replacing hardware.

5. Smart Antennas (4)
• There are two types of digital beamforming:
  - Fixed weight beamforming
  - Adaptive beamforming

In this chapter, we consider algorithms for fixed weight beamforming. Algorithms for adaptive beamforming will be considered in Chapter 8.

5. Fixed Weight Beamforming (1)
• Maximum signal-to-interference ratio:
One criterion which can be applied to enhance the received signal and minimize the interfering signals is based upon maximizing the SIR.
Example: a 3-element array with one fixed, known desired source and two fixed undesired interferers, all assumed to operate at the same carrier frequency. [Array geometry figure omitted.]

5. Fixed Weight Beamforming (2)
The required complex weights w1, w2, and w3 can be determined as

$$\mathbf{w}^H = [w_1 \;\; w_2 \;\; w_3] = \mathbf{u}_1^T\,\mathbf{A}^H\left(\mathbf{A}\mathbf{A}^H + \sigma_n^2\mathbf{I}\right)^{-1}$$

where

$$\mathbf{A} = [\mathbf{a}_0 \;\; \mathbf{a}_1 \;\; \mathbf{a}_2]$$

is the matrix of steering vectors and u₁ = [1 0 … 0]^T. The steering vector for each source is given by

$$\mathbf{a}_i = \left[e^{-jkd\sin\theta_i},\; 1,\; e^{jkd\sin\theta_i}\right]^T$$

and σn² is the noise variance (noise power).

Example: if the desired signal is arriving from θ0 = 0°, while θ1 = −45° and θ2 = 60°, the necessary weights can be calculated to be

$$\begin{bmatrix} w_1^* \\ w_2^* \\ w_3^* \end{bmatrix} = \begin{bmatrix} 0.28 + j0.07 \\ 0.45 \\ 0.28 - j0.07 \end{bmatrix}$$
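
A numerical sanity check (my addition), assuming half-wavelength spacing (kd = π); as σn² → 0 the formula above reduces to w^H = u1^T A^{-1}, i.e., unit gain at θ0 and nulls at θ1 and θ2:

```python
import numpy as np

def steer(theta_deg, kd=np.pi):
    """3-element steering vector [e^{-jkd sin(theta)}, 1, e^{jkd sin(theta)}]^T."""
    ph = kd * np.sin(np.deg2rad(theta_deg))
    return np.array([np.exp(-1j * ph), 1.0, np.exp(1j * ph)])

A = np.column_stack([steer(t) for t in (0.0, -45.0, 60.0)])   # [a0 a1 a2]
u1 = np.array([1.0, 0.0, 0.0])
var_n = 1e-6                                  # small noise regularization

wH = u1 @ A.conj().T @ np.linalg.inv(A @ A.conj().T + var_n * np.eye(3))
print(np.round(wH, 2))      # entries of w^H are [w1*, w2*, w3*]:
                            # [0.28+0.07j, 0.45, 0.28-0.07j]
print([abs(wH @ steer(t)) for t in (0.0, -45.0, 60.0)])   # ~[1, 0, 0]
```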
5. Fixed Weight Beamforming (3)
The array factor is plotted below. [Plot omitted: unit gain toward θ0 = 0°, with pattern nulls at θ1 = −45° and θ2 = 60°.]
5. Fixed Weight Beamforming (4)
The general case for max SIR: one desired signal arrives from the angle θ0 and N interferers arrive from angles θ1, …, θN. The signal and the interferers are received by an array of M elements with M potential weights. Each received signal at element m also includes additive Gaussian noise. Time is represented by the kth time sample. [Array configuration figure omitted.]

5. Fixed Weight Beamforming (5)
The array output y can be given in the following form:

$$y = \mathbf{w}^H\mathbf{x}(k)$$

where

$$\mathbf{x}(k) = \mathbf{a}_0 s(k) + [\mathbf{a}_1 \;\; \mathbf{a}_2 \;\; \cdots \;\; \mathbf{a}_N]\begin{bmatrix} i_1(k) \\ i_2(k) \\ \vdots \\ i_N(k) \end{bmatrix} + \mathbf{n}(k) = \mathbf{x}_s(k) + \mathbf{x}_i(k) + \mathbf{n}(k)$$

and w = [w1 w2 … wM]^T is the vector of array weights; x_s(k) is the desired signal vector; x_i(k) is the interfering signals vector; n(k) is zero-mean Gaussian noise for each channel; and a_i is the M-element array steering vector for the ith direction of arrival.

5. Fixed Weight Beamforming (6)
Therefore, the array output can be rewritten as

$$y = \mathbf{w}^H\mathbf{x}(k) = \mathbf{w}^H\left[\mathbf{x}_s(k) + \mathbf{x}_i(k) + \mathbf{n}(k)\right] = \mathbf{w}^H\left[\mathbf{x}_s(k) + \mathbf{u}(k)\right]$$

where u(k) = x_i(k) + n(k) is the undesired signal.

The SIR is defined as the ratio of the desired signal power to the undesired signal power:

$$\mathrm{SIR} = \frac{\sigma_s^2}{\sigma_u^2} = \frac{\mathbf{w}^H\mathbf{R}_{ss}\mathbf{w}}{\mathbf{w}^H\mathbf{R}_{uu}\mathbf{w}}$$

where R_ss = E[x_s x_s^H] is the signal correlation matrix, R_ii = E[x_i x_i^H] is the correlation matrix for the interferers, R_nn = E[nn^H] is the correlation matrix for the noise, and R_uu = R_ii + R_nn. The SIR can be maximized by taking the derivative with respect to w and setting the result equal to zero; this yields

$$\mathbf{R}_{ss}\mathbf{w} = \mathrm{SIR}\cdot\mathbf{R}_{uu}\mathbf{w}$$
5. Fixed Weight Beamforming (7)
This equation is an eigenvector equation with the SIR being the eigenvalue. The maximum SIR (SIR_max) is equal to the largest eigenvalue λ_max of the matrix R_uu^{−1} R_ss. The eigenvector associated with the largest eigenvalue is the optimum weight vector w_opt. Thus

$$\mathbf{R}_{uu}^{-1}\mathbf{R}_{ss}\,\mathbf{w}_{SIR} = \lambda_{max}\,\mathbf{w}_{SIR} = \mathrm{SIR}_{max}\,\mathbf{w}_{SIR}$$

The final optimized weight for max SIR is

$$\mathbf{w}_{SIR} = \beta\,\mathbf{R}_{uu}^{-1}\mathbf{a}_0$$

where

$$\beta = \frac{E[|s|^2]}{\mathrm{SIR}_{max}}\,\mathbf{a}_0^H\mathbf{w}_{SIR}$$

Example: an M = 3-element array with spacing d = 0.5λ has a noise variance σn² = 0.001, a desired received signal arriving at θ0 = 30°, and two interferers arriving at angles θ1 = −30° and θ2 = 45°. Assume that the signal and interferer amplitudes are constant.
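
A sketch of this example (my addition; signal and interferer amplitudes taken as unity, so R_ss = a0 a0^H and R_uu = a1 a1^H + a2 a2^H + σn² I):

```python
import numpy as np

def steer(M, theta_deg, d_over_lambda=0.5):
    """Uniform-linear-array steering vector."""
    ph = 2 * np.pi * d_over_lambda * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * ph * np.arange(M))

M, var_n = 3, 0.001
a0, a1, a2 = steer(M, 30.0), steer(M, -30.0), steer(M, 45.0)

Rss = np.outer(a0, a0.conj())                          # desired signal
Ruu = (np.outer(a1, a1.conj()) + np.outer(a2, a2.conj())
       + var_n * np.eye(M))                            # interferers + noise

# Largest eigenvalue/eigenvector of Ruu^-1 Rss gives SIR_max and w_SIR
lam, V = np.linalg.eig(np.linalg.solve(Ruu, Rss))
k = np.argmax(lam.real)
w_sir = V[:, k]
print("SIR_max =", lam[k].real)

# w_SIR is collinear with Ruu^-1 a0 (up to the scalar beta)
w_dir = np.linalg.solve(Ruu, a0)
c = (w_dir.conj() @ w_sir) / (w_dir.conj() @ w_dir)
print("collinear with Ruu^-1 a0:", np.allclose(w_sir, c * w_dir))
```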
5. Fixed Weight Beamforming (8)

[Resulting array pattern plot for this example omitted: main beam toward θ0 = 30°, with nulls near θ1 = −30° and θ2 = 45°.]
5. Fixed Weight Beamforming (9)
• Minimum mean-square error: [beamformer block diagram omitted: the array output w^H x(k) is subtracted from a reference signal d(k) to form the error signal ε(k)]
5. Fixed Weight Beamforming (10)
The signal d(k) is the reference signal. Preferably, the reference signal is either identical to the desired signal s(k) or highly correlated with s(k) and uncorrelated with the interfering signals i_n(k). If s(k) is not distinctly different from the interfering signals, the minimum mean-square technique will not work properly. The signal ε(k) is the error signal, such that

$$\varepsilon(k) = d(k) - \mathbf{w}^H\mathbf{x}(k)$$

The mean-square error is given by

$$E\left[|\varepsilon|^2\right] = E\left[|d|^2\right] - 2\mathbf{w}^H\mathbf{r} + \mathbf{w}^H\mathbf{R}_{xx}\mathbf{w}$$

where

$$\mathbf{r} = E[d^*\mathbf{x}] = E[d^*(\mathbf{x}_s + \mathbf{x}_i + \mathbf{n})]$$

$$\mathbf{R}_{xx} = E[\mathbf{x}\mathbf{x}^H] = \mathbf{R}_{ss} + \mathbf{R}_{uu}, \quad \mathbf{R}_{ss} = E[\mathbf{x}_s\mathbf{x}_s^H], \quad \mathbf{R}_{uu} = \mathbf{R}_{ii} + \mathbf{R}_{nn}$$
5. Fixed Weight Beamforming (11)
The optimum weights provide the minimum mean-square error; the optimum Wiener solution is given as

$$\mathbf{w}_{MSE} = \mathbf{R}_{xx}^{-1}\mathbf{r}$$

If we let the reference signal d equal the desired signal s, and if s is uncorrelated with all interferers, we may simplify:

$$\mathbf{r} = E[s^*\mathbf{x}] = S\,\mathbf{a}_0$$

where

$$S = E\left[|s|^2\right]$$

The optimum weights can then be identified as

$$\mathbf{w}_{MSE} = S\,\mathbf{R}_{xx}^{-1}\mathbf{a}_0$$
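
A compact sketch (my addition, with a toy scenario of my choosing) of w_MSE = S R_xx^{-1} a0:

```python
import numpy as np

def steer(M, theta_deg):
    """Half-wavelength ULA steering vector."""
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(M))

M, S, var_n = 3, 1.0, 0.01
a0, a1 = steer(M, 30.0), steer(M, -30.0)     # desired and interferer directions

Rxx = (S * np.outer(a0, a0.conj())           # Rxx = Rss + Ruu
       + np.outer(a1, a1.conj()) + var_n * np.eye(M))
w_mse = S * np.linalg.solve(Rxx, a0)         # w_MSE = S Rxx^-1 a0
print(abs(w_mse.conj() @ a0))                # large gain toward the desired signal
print(abs(w_mse.conj() @ a1))                # small gain toward the interferer
```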

5. Fixed Weight Beamforming (12)
• Minimum variance, MV (or minimum variance distortionless response, MVDR):
The goal of the minimum variance method is to minimize the array output noise variance. The weighted array output is given by

$$y = \mathbf{w}^H\mathbf{x} = \mathbf{w}^H\mathbf{a}_0 s + \mathbf{w}^H\mathbf{u}$$

In order to ensure a distortionless response, we must also add the constraint that

$$\mathbf{w}^H\mathbf{a}_0 = 1$$

Finally, the minimum variance optimum weights can be obtained as

$$\mathbf{w}_{MV} = \frac{\mathbf{R}_{uu}^{-1}\mathbf{a}_0}{\mathbf{a}_0^H\mathbf{R}_{uu}^{-1}\mathbf{a}_0}$$

where R_uu = R_ii + R_nn is the correlation matrix of the unwanted signals (interference plus noise).
Chapter 6:
Linear Prediction

• Forward Linear Prediction
• Backward Linear Prediction
• Levinson-Durbin Algorithm

6. Linear Prediction: Definitions (1)
• (M+1) samples of a time series (stationary discrete-time stochastic process):

$$u(n),\, u(n-1),\, \ldots,\, u(n-M)$$

• Linear prediction of order M (forward prediction): predicting (estimating) the future value u(n) using the M samples u(n−1), u(n−2), …, u(n−M):

$$\hat{u}(n) = w_1^* u(n-1) + w_2^* u(n-2) + \ldots + w_M^* u(n-M) = \sum_{k=1}^{M} w_k^*\, u(n-k) = \mathbf{w}^H\mathbf{u}(n-1)$$

u(n−1): vector of tap inputs.

• Predictor vector (tap-weight vector) of order M (forward predictor):

$$\mathbf{w} = [w_1,\, w_2,\, \ldots,\, w_M]^T$$

$$\mathbf{a}_M = [1,\, -w_1,\, -w_2,\, \ldots,\, -w_M]^T = [a_{M,0},\, a_{M,1},\, \ldots,\, a_{M,M}]^T$$

6. Linear Prediction: Definitions (2)
• Prediction error of order M (forward prediction error):

$$f_M(n) = u(n) - \hat{u}(n) = u(n) - \mathbf{w}^H\mathbf{u}(n-1) = \mathbf{a}_M^H\begin{bmatrix} u(n) \\ \mathbf{u}(n-1) \end{bmatrix}$$

• Mean-squared prediction error:

$$P_M = E\left[|f_M(n)|^2\right]$$

If the tap inputs u(n−1) have zero mean, then P_M is called the forward prediction error power.

• Correlation matrix of the tap inputs:

$$\mathbf{R} = E\{\mathbf{u}(n-1)\mathbf{u}^H(n-1)\} = \begin{bmatrix} r(0) & r(1) & \cdots & r(M-1) \\ r^*(1) & r(0) & \cdots & r(M-2) \\ \vdots & \vdots & \ddots & \vdots \\ r^*(M-1) & r^*(M-2) & \cdots & r(0) \end{bmatrix}$$
6. Linear Prediction: Definitions (3)
• Cross-correlation vector between the tap inputs u(n−1) and the future value u(n):

$$\mathbf{r} = E\{\mathbf{u}(n-1)\,u^*(n)\} = \begin{bmatrix} r^*(1) \\ r^*(2) \\ \vdots \\ r^*(M) \end{bmatrix} = \begin{bmatrix} r(-1) \\ r(-2) \\ \vdots \\ r(-M) \end{bmatrix}$$

• The variance of u(n) equals r(0), since u(n) has zero mean.
• The reversed (backward) vector:

$$\mathbf{r}^B = \begin{bmatrix} r^*(M) \\ r^*(M-1) \\ \vdots \\ r^*(1) \end{bmatrix}$$
6. LP: Optimal Forward Linear Prediction
• Optimality criterion:

$$J(\mathbf{w}) = E\left[|f_M(n)|^2\right] = E\left[\left|u(n) - \mathbf{w}^H\mathbf{u}(n-1)\right|^2\right]$$

$$J(\mathbf{a}_M) = E\left[|f_M(n)|^2\right] = E\left[\left|\mathbf{a}_M^H\begin{bmatrix} u(n) \\ \mathbf{u}(n-1) \end{bmatrix}\right|^2\right]$$

• Optimal solution: based on the optimal Wiener filter design, with
  - filter output: w^H u(n−1)
  - desired signal: d(n) = u(n)
• Therefore, the optimal predictor vector is $\mathbf{w}_{opt} = \mathbf{R}^{-1}\mathbf{r}$.
• As in Chapter 5, we obtain the forward prediction error power:

$$P_M = J_{min}(\mathbf{w}) = r(0) - \mathbf{r}^H\mathbf{w}_{opt}$$
6. LP: Augmented Wiener-Hopf Equations (1)
• The optimum predictor vector and the optimal prediction error power satisfy:

$$r(0) - \mathbf{r}^H\mathbf{w}_{opt} = P_M, \qquad \mathbf{R}\mathbf{w}_{opt} - \mathbf{r} = \mathbf{0}$$

or in matrix form:

$$\begin{bmatrix} r(0) & \mathbf{r}^H \\ \mathbf{r} & \mathbf{R} \end{bmatrix}\begin{bmatrix} 1 \\ -\mathbf{w}_{opt} \end{bmatrix} = \begin{bmatrix} P_M \\ \mathbf{0} \end{bmatrix}$$

but $\mathbf{a}_M = [1,\, -\mathbf{w}_{opt}^T]^T$. Then the augmented Wiener-Hopf equations for the optimal forward prediction error filter are:

$$\begin{bmatrix} r(0) & r(1) & \cdots & r(M) \\ r^*(1) & r(0) & \cdots & r(M-1) \\ \vdots & \vdots & \ddots & \vdots \\ r^*(M) & r^*(M-1) & \cdots & r(0) \end{bmatrix}\begin{bmatrix} a_{M,0} \\ a_{M,1} \\ \vdots \\ a_{M,M} \end{bmatrix} = \begin{bmatrix} P_M \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

• When R_{M+1} is nonsingular and a_{M,0} = 1, there is a unique solution for a_M and P_M. See Example 2, p. 247, [1].
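
A small sketch (my addition; the autocorrelation lags are an arbitrary real-valued example) that solves for w_opt and P_M and verifies the augmented system R_{M+1} a_M = [P_M, 0, …, 0]^T:

```python
import numpy as np
from scipy.linalg import toeplitz

r = np.array([1.0, 0.5, 0.25])         # lags r(0), r(1), r(2); M = 2
M = len(r) - 1

R = toeplitz(r[:M])                    # M x M correlation matrix of u(n-1)
rv = r[1:]                             # [r(1), ..., r(M)] (real case)
w_opt = np.linalg.solve(R, rv)         # forward predictor
P_M = r[0] - rv @ w_opt                # forward prediction error power

a = np.concatenate(([1.0], -w_opt))    # prediction-error filter a_M
print(toeplitz(r) @ a)                 # [P_M, 0, ..., 0] = [0.75, 0, 0]
```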
6. LP: Optimal Backward Linear Prediction (1)
• Linear prediction of order M (backward prediction): predicting (estimating) the past value u(n−M) using the M samples u(n), u(n−1), …, u(n−M+1):

$$\hat{u}_b(n-M) = g_1^* u(n) + g_2^* u(n-1) + \ldots + g_M^* u(n-M+1) = \sum_{k=1}^{M} g_k^*\, u(n-k+1) = \mathbf{g}^H\mathbf{u}(n)$$

where g is the backward predictor vector.

• Prediction error of order M (backward prediction error):

$$b_M(n) = u(n-M) - \hat{u}_b(n-M) = u(n-M) - \mathbf{g}^H\mathbf{u}(n)$$

• Optimality criterion:

$$J^b(\mathbf{g}) = E\left[|b_M(n)|^2\right] = E\left[\left|u(n-M) - \mathbf{g}^H\mathbf{u}(n)\right|^2\right]$$


6. LP: Optimal Backward Linear Prediction (2)
• Optimal backward predictor vector:

$$\mathbf{g}_{opt} = \mathbf{R}^{-1}\mathbf{r}^B = \mathbf{w}_{opt}^{B*}$$

(the forward predictor, reversed and complex-conjugated).

• Optimal backward prediction error power:

$$P_M = J^b_{min}(\mathbf{g}) = r(0) - (\mathbf{r}^B)^H\mathbf{g}_{opt}$$

• The optimal backward prediction error filter solution and the optimal backward prediction error power satisfy:

$$\mathbf{R}\mathbf{g}_{opt} - \mathbf{r}^B = \mathbf{0}, \qquad r(0) - (\mathbf{r}^B)^H\mathbf{g}_{opt} = P_M$$

or in matrix form:

$$\begin{bmatrix} \mathbf{R} & \mathbf{r}^B \\ (\mathbf{r}^B)^H & r(0) \end{bmatrix}\begin{bmatrix} -\mathbf{g}_{opt} \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{0} \\ P_M \end{bmatrix}$$

• Let $\mathbf{c}_M = \left[-\mathbf{g}_{opt}^T,\; 1\right]^T$.

6. LP: Optimal Backward Linear Prediction (3)
• Finally, we obtain the augmented Wiener-Hopf equations for the optimal backward prediction error filter:

$$\begin{bmatrix} r(0) & r(1) & \cdots & r(M) \\ r^*(1) & r(0) & \cdots & r(M-1) \\ \vdots & \vdots & \ddots & \vdots \\ r^*(M) & r^*(M-1) & \cdots & r(0) \end{bmatrix}\begin{bmatrix} c_{M,0} \\ c_{M,1} \\ \vdots \\ c_{M,M} \end{bmatrix} = \begin{bmatrix} r(0) & r(1) & \cdots & r(M) \\ r^*(1) & r(0) & \cdots & r(M-1) \\ \vdots & \vdots & \ddots & \vdots \\ r^*(M) & r^*(M-1) & \cdots & r(0) \end{bmatrix}\begin{bmatrix} a_{M,M} \\ a_{M,M-1} \\ \vdots \\ a_{M,0} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ P_M \end{bmatrix}$$

(For complex-valued data, the reversed coefficients are also conjugated: $\mathbf{c}_M = \mathbf{a}_M^{B*}$.)

6. LP: Levinson-Durbin Algorithm (1)
• Purpose: a computationally efficient recursive method for computing the prediction error filter vectors (a_M, c_M) and the prediction error power (P_M) by solving the augmented Wiener-Hopf equations.

• It uses the solution of the augmented Wiener-Hopf equations for a prediction error filter of order m−1 to compute the corresponding solution for a prediction error filter of order m (m = 1, 2, …, M, where M is the final order of the filter).

• All variables carry a subscript expressing the order of the predictor: R_m, r_m, a_m, w_opt,m.

6. LP: Levinson-Durbin Algorithm (2)
• Some order-recursive relations can be written:

$$\mathbf{r}_{m+1} = [r(1),\, r(2),\, \ldots,\, r(m),\, r(m+1)]^T = \begin{bmatrix} \mathbf{r}_m \\ r(m+1) \end{bmatrix}$$

$$\mathbf{R}_{m+1} = \begin{bmatrix} r(0) & \mathbf{r}_m^H \\ \mathbf{r}_m & \mathbf{R}_m \end{bmatrix} = \begin{bmatrix} \mathbf{R}_m & \mathbf{r}_m^B \\ (\mathbf{r}_m^B)^H & r(0) \end{bmatrix}$$

See more in [1].

6. LP: Levinson-Durbin Algorithm (3)
Summary of the First Form:
Given the values of the autocorrelation function r(0), r(1), …, r(M) (for lags k = 0, 1, …, M). These values can be estimated from the input data u(1), u(2), …, u(N) (N is the total length of the input data, N ≫ M) by the time average:

$$\hat{r}(k) = \frac{1}{N}\sum_{n=k+1}^{N} u(n)\,u^*(n-k)$$

1. Initialize $\Delta_0 = r(1)$, $P_0 = r(0)$.
2. For m = 1, …, M:

2.1 $\kappa_m = -\dfrac{\Delta_{m-1}}{P_{m-1}}$ : reflection coefficient

2.2 $a_{m,0} = 1; \quad a_{m,k} = a_{m-1,k} + \kappa_m\, a^*_{m-1,m-k}, \quad k = 1, \ldots, m$

2.3 $\Delta_m = r(m+1) + \sum_{k=1}^{m} a_{m,k}\, r(m+1-k)$

2.4 $P_m = P_{m-1}\left(1 - |\kappa_m|^2\right)$
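
A direct transcription of the first form (my addition; real-valued case), with a check against the lags used in the earlier augmented Wiener-Hopf sketch:

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion (first form) for real-valued autocorrelations.

    r      : lags [r(0), r(1), ..., r(M)]
    returns: prediction-error filter a = [1, a_M1, ..., a_MM],
             reflection coefficients kappa[0..M-1], error power P_M
    """
    r = np.asarray(r, dtype=float)
    M = len(r) - 1
    a = np.array([1.0])
    P, delta = r[0], r[1]                             # P_0 = r(0), Delta_0 = r(1)
    kappa = np.zeros(M)
    for m in range(1, M + 1):
        kappa[m - 1] = -delta / P                     # step 2.1
        a = np.append(a, 0.0)
        a = a + kappa[m - 1] * a[::-1]                # step 2.2 (real case)
        P = P * (1.0 - kappa[m - 1] ** 2)             # step 2.4
        if m < M:
            delta = r[m + 1] + a[1:] @ r[m:0:-1]      # step 2.3
    return a, kappa, P

a, kappa, P = levinson_durbin([1.0, 0.5, 0.25])
print(a, kappa, P)    # [1. -0.5  0.]  [-0.5  0.]  0.75
```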


6. LP: Levinson-Durbin Algorithm (4)
Summary of the Second Form:
Given r(0) and κ₁, κ₂, …, κ_M:

1. Initialize $P_0 = r(0)$.
2. For m = 1, …, M:

2.1 $a_{m,0} = 1; \quad a_{m,k} = a_{m-1,k} + \kappa_m\, a^*_{m-1,m-k}, \quad k = 1, \ldots, m$

2.2 $P_m = P_{m-1}\left(1 - |\kappa_m|^2\right)$

See Example 2, p. 260, [1].

6. LP: Inverse Levinson-Durbin Algorithm
• Inverse problem: given the prediction error filter coefficients a_{M,1}, a_{M,2}, …, a_{M,M}, solve for the corresponding set of reflection coefficients κ₁, κ₂, …, κ_M.
• Starting with the set {a_{M,k}}, use

$$a_{m-1,k} = \frac{a_{m,k} - a_{m,m}\,a^*_{m,m-k}}{1 - |a_{m,m}|^2}, \quad k = 1, \ldots, m-1$$

with m = M, M−1, …, 2, to compute the prediction error filter vectors of orders M−1, M−2, …, 1.
• Finally, use $\kappa_m = a_{m,m}$, m = M, M−1, …, 1, to determine the desired set κ_M, κ_{M−1}, …, κ₁.

See Example 3, p. 261, [1]
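
A matching sketch of the inverse recursion (my addition; real-valued case, consistent with the forward routine above):

```python
import numpy as np

def inverse_levinson_durbin(a):
    """Recover kappa_1..kappa_M from the final filter a = [1, a_M1, ..., a_MM]."""
    a = np.asarray(a, dtype=float)
    M = len(a) - 1
    kappa = np.zeros(M)
    for m in range(M, 0, -1):
        k = a[m]
        kappa[m - 1] = k                                  # kappa_m = a_{m,m}
        if m > 1:
            a = (a[:m] - k * a[::-1][:m]) / (1.0 - k**2)  # step down one order
    return kappa

print(inverse_levinson_durbin([1.0, -0.5, 0.0]))          # [-0.5  0.]
```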

