Wiener Filters-Chapter56-2020 PDF
Wiener Filter
Wiener filter
Principle of orthogonality
Wiener-Hopf equations
Error performance surface
Linearly Constrained Minimum Variance Filter
(LCMV Filter)
Examples: Fixed Weight Beamforming
u(n) = [u(n), u(n−1), …, u(n−M+1)]^T
The corollary:
When the filter operates in its optimum condition, the
output of the filter is orthogonal to the estimation
error.
where
R = E[u(n) u^H(n)] ∈ C^{M×M}, R = R^H
p = E[u(n) d*(n)] ∈ C^M
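As a quick numerical sanity check of the corollary, the sketch below builds a hypothetical two-tap example from synthetic data (all signal values and coefficients are assumed, not from the slides), solves R w = p, and verifies that the filter output is orthogonal to the estimation error.

```python
import numpy as np

# Hypothetical 2-tap example with synthetic data (all values assumed).
rng = np.random.default_rng(0)
N = 10_000
u = rng.standard_normal((2, N))          # tap inputs (real, white: R ≈ I)
d = 0.8 * u[0] + 0.3 * u[1] + 0.1 * rng.standard_normal(N)

R = (u @ u.conj().T) / N                 # sample correlation matrix
p = (u @ d.conj()) / N                   # sample cross-correlation vector
w = np.linalg.solve(R, p)                # Wiener solution: R w = p

y = w.conj() @ u                         # filter output w^H u(n)
e = d - y                                # estimation error
print(np.mean(y * e.conj()))             # ≈ 0: the output is orthogonal to the error
```

With the sample-moment Wiener solution, the sample average of y(n) e*(n) is zero up to floating-point error, matching the corollary.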
Evaluate the tap weights of the Wiener filter.
Express the cost function in terms of the weights.
Plot the error performance surface assuming that the
variance of the desired input is 0.5. What is the
minimum mean squared error?
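The exercise's specific correlation values are not recoverable here, but the computation it asks for can be sketched with assumed numbers (R, p, and the choice σ_d² = 0.5 below are hypothetical stand-ins): solve the Wiener-Hopf equations for the tap weights, then evaluate J_min = σ_d² − p^H w_o.

```python
import numpy as np

# Hypothetical exercise data (assumed, not the slide's actual values):
sigma_d2 = 0.5                           # variance of the desired response
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # tap-input correlation matrix
p = np.array([0.5, 0.25])               # cross-correlation vector

w_o = np.linalg.solve(R, p)             # Wiener-Hopf solution: R w_o = p
J_min = sigma_d2 - p.conj() @ w_o       # minimum mean-squared error
print(w_o, J_min)                       # [0.5, 0.0] and 0.25
```

The error performance surface J(w) = σ_d² − w^H p − p^H w + w^H R w is a bowl over the weight plane with its minimum J_min at w_o; plotting J on a grid of (w_0, w_1) values reproduces the surface the exercise asks for.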
where
A = [a_0  a_1  a_2]
is the matrix of steering vectors, and u_1 = [1 0 … 0]^T. The steering vector
for each source is given by
a = [e^{−jkd sinθ}  1  e^{+jkd sinθ}]^T
and σ_n² is the noise variance (noise power).
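A minimal sketch of the fixed-weight (null-steering) beamformer built from these quantities: the weights solve w^H A = u_1^T, giving unit response toward the desired source and nulls toward the interferers. The half-wavelength spacing (kd = π) and the three source angles below are assumptions for illustration.

```python
import numpy as np

kd = np.pi                                 # assumed element spacing: kd = pi

def steer(theta):
    # a(θ) = [e^{-jkd sinθ}, 1, e^{+jkd sinθ}]^T for the 3-element array
    return np.array([np.exp(-1j * kd * np.sin(theta)),
                     1.0,
                     np.exp(+1j * kd * np.sin(theta))])

thetas = np.deg2rad([0.0, 30.0, -45.0])    # desired source + two interferers (assumed)
A = np.column_stack([steer(t) for t in thetas])
u1 = np.array([1.0, 0.0, 0.0])

# Null-steering constraint w^H A = u1^T, i.e. A^H w = u1:
w = np.linalg.solve(A.conj().T, u1)
print(np.abs(w.conj() @ A))                # ≈ [1, 0, 0]
```

With three elements and three distinct angles, A is square and invertible, so the constraints determine w exactly; with more elements than sources the LCMV solution would be used instead.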
y = w^H x(k) = w^H [x_s(k) + x_i(k) + n(k)] = w^H [x_s(k) + u(k)]
The SIR is defined as the ratio of the desired signal power to the
undesired signal power:
SIR = σ_s²/σ_u² = (w^H R_ss w) / (w^H R_uu w)
where R_ss = E[x_s x_s^H] is the signal correlation matrix, R_ii = E[x_i x_i^H] is the
correlation matrix of the interferers, and R_nn = E[n n^H] is the noise correlation
matrix, so that R_uu = R_ii + R_nn. The SIR is maximized by taking the derivative
with respect to w and setting the result equal to zero; this yields the generalized
eigenvalue problem R_ss w = SIR · R_uu w.
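A sketch of this maximization with hypothetical correlation matrices: the maximizing w is the principal eigenvector of R_uu^{-1} R_ss, and the achieved SIR equals the corresponding eigenvalue. The steering vector and interference model below are assumed for illustration.

```python
import numpy as np

# Hypothetical correlation matrices (all values assumed):
a_s = np.array([1.0, 1.0j, -1.0])                 # desired-source steering vector
Rss = np.outer(a_s, a_s.conj())                   # rank-1 signal correlation matrix
v = np.array([1.0, -1.0, 1.0])                    # interferer direction
Ruu = np.eye(3) + 0.5 * np.outer(v, v)            # interference-plus-noise matrix

# Stationary points of SIR(w) satisfy Rss w = SIR * Ruu w:
# take the principal eigenvector of Ruu^{-1} Rss.
vals, vecs = np.linalg.eig(np.linalg.solve(Ruu, Rss))
w = vecs[:, np.argmax(vals.real)]

sir = (w.conj() @ Rss @ w).real / (w.conj() @ Ruu @ w).real
print(sir)                                        # equals the largest eigenvalue
```

Because R_ss is rank one here, R_uu^{-1} R_ss has a single nonzero eigenvalue, a_s^H R_uu^{-1} a_s, which is the maximum attainable SIR.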
Levinson-Durbin Algorithm.
J(a_M) = E[|f_M(n)|²] = E[| a_M^H [u(n); u(n−1)] |²]
where [u(n); u(n−1)] stacks the current sample u(n) on top of the delayed
tap-input vector u(n−1).
Optimum criterion:
J(g) = E[|b_M(n)|²] = E[| u(n−M) − g^H u(n) |²]
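As a concrete check of the backward predictor g_opt = R⁻¹ r^B, a real-valued sketch with an assumed autocorrelation r(k) = ρ^{|k|} (hypothetical AR(1)-style values); for this process the backward predictor is simply the forward predictor [ρ, 0, 0] with its elements reversed.

```python
import numpy as np

# Assumed autocorrelation r(k) = rho^|k| (hypothetical AR(1)-like process)
rho, M = 0.8, 3
r = rho ** np.arange(M + 1)                       # r(0), ..., r(M)

R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
rB = r[1:][::-1]                                  # [r(M), ..., r(1)]

g = np.linalg.solve(R, rB)                        # g_opt = R^{-1} r^B
Jb = r[0] - g @ rB                                # minimum backward error power
print(g, Jb)                                      # [0, 0, rho] and 1 - rho^2
```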
Dept. of Telecomm. Eng., Faculty of EEE, HCMUT (AdSP2020, DHT, slide 41)
6. LP: Optimal Backward Linear Prediction (2)
Optimal backward predictor vector:
g_opt = R⁻¹ r^{B*} = w_opt^{B*}
i.e., the optimal forward predictor with its elements reversed and complex-conjugated.
Let c_M = [−g_opt^T  1]^T. Then:
⎡ r(0)     r(1)      …  r(M)   ⎤ ⎡ c_{M,0} ⎤   ⎡ 0   ⎤
⎢ r*(1)    r(0)      …  r(M−1) ⎥ ⎢ c_{M,1} ⎥ = ⎢ ⋮   ⎥
⎢ ⋮        ⋮         ⋱  ⋮      ⎥ ⎢ ⋮       ⎥   ⎢ 0   ⎥
⎣ r*(M)    r*(M−1)   …  r(0)   ⎦ ⎣ c_{M,M} ⎦   ⎣ P_M ⎦
⎡ r(0)     r(1)      …  r(M)   ⎤ ⎡ a_{M,M}   ⎤   ⎡ 0   ⎤
⎢ r*(1)    r(0)      …  r(M−1) ⎥ ⎢ a_{M,M−1} ⎥ = ⎢ 0   ⎥
⎢ ⋮        ⋮         ⋱  ⋮      ⎥ ⎢ ⋮         ⎥   ⎢ ⋮   ⎥
⎣ r*(M)    r*(M−1)   …  r(0)   ⎦ ⎣ a_{M,0}   ⎦   ⎣ P_M ⎦
R_{m+1} = ⎡ r(0)  r_m^H ⎤ = ⎡ R_m         r_m^B ⎤
          ⎣ r_m   R_m   ⎦   ⎣ (r_m^B)^H   r(0)  ⎦
2. For m=1, …, M
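The recursion outlined above can be sketched as follows; a minimal real-valued implementation, with a hypothetical r(k) = ρ^{|k|} autocorrelation used for the check.

```python
import numpy as np

def levinson_durbin(r, M):
    """Solve the order-M augmented normal equations by the Levinson-Durbin
    recursion (real-valued sketch). Returns the prediction-error filter a
    and the final prediction-error power P."""
    a = np.array([1.0])                           # order-0 prediction-error filter
    P = r[0]                                      # P_0 = r(0)
    for m in range(1, M + 1):
        k = -np.dot(a, r[m:0:-1]) / P             # reflection coefficient k_m
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + k * a_ext[::-1]               # order-update of the filter
        P *= 1.0 - k * k                          # P_m = P_{m-1}(1 - k_m^2)
    return a, P

# Check with a hypothetical r(k) = rho^|k|: expect a = [1, -rho, 0, 0], P = 1 - rho^2
rho = 0.8
r = rho ** np.arange(4)
a, P = levinson_durbin(r, 3)
print(a, P)
```

Each pass reuses the order-(m−1) solution, so the full order-M predictor costs O(M²) operations instead of the O(M³) of a direct solve of the Toeplitz system.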