Wiener Filter - LMS Based Algorithms
Advanced Digital Signal Processing
General Introduction
Adaptive Linear Combiner
[Figure: adaptive linear combiner, showing the input vector, the weight vector, and the output signal.]
Filtering
Adaptive Linear Combiner
[Figure: adaptive linear combiner in transversal-filter form. The delayed inputs x_k, x_{k-1}, ..., x_{k-L} (obtained through z^{-1} delay elements) are weighted by w_{0k}, w_{1k}, ..., w_{Lk} and summed to form the output y_k; the error e is the difference between the desired response d and the output y.]
Performance Surface contd...
Performance Surface Contd...
Gradient and Minimum Mean Square Error
w* = R^{-1} p    (4)
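As a sanity check, the Wiener solution of eq. (4) can be computed numerically. The correlation matrix R and cross-correlation vector p below are illustrative values, not taken from the slides:

```python
import numpy as np

# Toy 2-tap problem (assumed values, for illustration only).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])   # input autocorrelation matrix R
p = np.array([0.7, 0.3])     # cross-correlation vector p = E[x(n) d(n)]

# Solving R w = p is numerically preferable to forming R^{-1} explicitly.
w_opt = np.linalg.solve(R, p)
```

At the optimum, R w* = p holds by construction, which is easy to verify on the computed `w_opt`.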
Minimum Mean Square Error
Gradient Based Adaptation
Method of Steepest Descent: Gradient-Based Adaptation
[Figure: transversal adaptive filter. The input u(n) passes through a z^{-1} delay line; the taps are weighted by w_0(n), ..., w_{M-1}(n) and summed to give y(n). The error e(n) = d(n) − y(n) drives the weight-update mechanism.]
Gradient based adaptation
Gradient based adaptation contd...
Signal Flow Graph Diagram for Steepest Descent Algorithm
[Figure: signal-flow graph of the steepest-descent recursion. The input µp enters a summing node whose output w(n+1) feeds back through z^{-1} I as w(n) and through the branch gain −µR.]
Stability of Steepest Descent Algorithm
Stability of Steepest Descent Algorithm contd....
Defining the weight error vector c(n) as the deviation of the estimated weight vector from the optimal one,

c(n) = w(n) − w_0    (11)

where w_0 is the optimal weight vector given by the Wiener-Hopf equation (w_0 = R^{-1} p). Using eqs. (10) and (11) together with w_0 = R^{-1} p, the update can be written directly in terms of the weight error as c(n + 1) = (I − µR) c(n).
Signal Flow Graph Diagram for Weight Error Vector
[Figure: signal-flow graph of the weight-error recursion. c(n+1) feeds back through z^{-1} I as c(n) and through the branch gain −µR.]
The square input correlation matrix R can be represented as R = Q S Q^T using a unitary similarity transformation. The unitary matrix Q contains an orthonormal set of eigenvectors of R as its columns, and S is the diagonal matrix containing the corresponding eigenvalues, S = diag[λ_1, λ_2, ..., λ_M]. Substituting this into the weight-error recursion gives

c(n + 1) = [I − µ Q S Q^T] c(n)    (15)
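The decomposition R = Q S Q^T can be verified numerically on a toy correlation matrix (values assumed for illustration):

```python
import numpy as np

# Toy symmetric correlation matrix (illustrative values).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# eigh is for symmetric matrices; it returns orthonormal eigenvectors
# as the columns of Q, so that R = Q S Q^T with S = diag(eigenvalues).
eigvals, Q = np.linalg.eigh(R)
S = np.diag(eigvals)

reconstructed = Q @ S @ Q.T   # should equal R
orthonormal = Q.T @ Q         # should equal the identity (Q is unitary)
```

The diagonalization is what decouples the weight-error recursion into M independent scalar modes, one per eigenvalue.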
Stability of Steepest Descent Algorithm contd....
Stability of Steepest Descent Algorithm contd....
0 < µ < 2/λ_max    (22)
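A minimal sketch of the steepest-descent recursion, with a toy R and p assumed for illustration, shows convergence to the Wiener solution when µ respects the bound in eq. (22):

```python
import numpy as np

# Toy R and p (assumed values, not from the slides).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.3])

lam_max = np.linalg.eigvalsh(R).max()
mu = 1.0 / lam_max                 # safely inside the bound (0, 2/lambda_max)

w = np.zeros(2)
for _ in range(200):
    w = w + mu * (p - R @ w)       # steepest-descent step on the MSE surface

# w approaches the Wiener solution w* = R^{-1} p
```

With µ outside the bound, the mode associated with λ_max has |1 − µλ_max| > 1 and the same loop diverges.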
Least Mean Square Algorithm (LMS)
Structure of LMS algorithm
Block Diagram of Adaptive Filtering
[Figure: block diagram of adaptive filtering. The input x(n) feeds a transversal adaptive filter producing y(n); the error e(n) = d(n) − y(n) is fed back to the adaptive weight-control mechanism.]
Adaptive Weight Control
[Figure: adaptive weight-control mechanism. Each delayed input x(n), x(n − 1), ..., x(n − M + 1) is multiplied by µ e(n) to form the weight increments δw_0(n), δw_1(n), ..., δw_{M-1}(n).]
LMS Algorithm
Derivation of LMS algorithm
LMS contd....
Signal Flow Diagram of LMS
[Figure: signal-flow graph of the LMS algorithm. The error e(n) = d(n) − x^T(n) ŵ(n) is scaled by µ x(n) and accumulated through z^{-1}, giving ŵ(n+1) = ŵ(n) + µ x(n) e(n).]
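The LMS recursion ŵ(n+1) = ŵ(n) + µ x(n) e(n) can be sketched in a short system-identification experiment; the unknown system, data length, and step size below are all assumed for illustration:

```python
import numpy as np

# Identify an unknown 3-tap FIR system from noise-free input/output data.
rng = np.random.default_rng(0)
M = 3
w_true = np.array([0.5, -0.3, 0.1])       # unknown system (illustrative)
x = rng.standard_normal(5000)             # white input
mu = 0.01                                 # small step size

w_hat = np.zeros(M)
for n in range(M - 1, len(x)):
    x_vec = x[n - M + 1:n + 1][::-1]      # tap vector [x(n), x(n-1), x(n-2)]
    d = w_true @ x_vec                    # desired response d(n)
    e = d - w_hat @ x_vec                 # error e(n) = d(n) - y(n)
    w_hat = w_hat + mu * x_vec * e        # LMS weight update
```

Unlike steepest descent, no R or p is needed: the instantaneous product x(n) e(n) serves as a noisy estimate of the true gradient.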
Stability Analysis of LMS Algorithm
Statistical Analysis of LMS
Stability Analysis contd..
Convergence Criteria
Weight Error Correlation Matrix
Weight Error Correlation Matrix Contd...
Excess Mean Squared Error Contd..
K(n + 1) = (I − µR) K(n) (I − µR) + µ^2 J_min R    (46)

Using R = Q S Q^T (so that Q^T R Q = S) and defining X(n) = Q^T K(n) Q,

X(n + 1) = (I − µS) X(n) (I − µS) + µ^2 J_min S    (47)

For the diagonal elements x_i(n) of X(n),

x_i(n + 1) = (1 − µλ_i)^2 x_i(n) + µ^2 J_min λ_i,  i = 1, 2, ..., M    (48)

Define the M × 1 vectors x(n) and λ as follows:

x(n) = [x_1(n), x_2(n), ..., x_M(n)]^T
λ = [λ_1, λ_2, ..., λ_M]^T

Based on the definition of the above two vectors, we can rewrite eq. (48) as

x(n + 1) = B x(n) + µ^2 J_min λ    (49)

where B is an M × M matrix with elements

b_ij = (1 − µλ_i)^2  if i = j,
b_ij = µ^2 λ_i λ_j   if i ≠ j    (50)
The matrix B can be represented in terms of its eigenvalues and eigenvectors as B = G C G^T, where C = diag[c_1, c_2, ..., c_M] is the diagonal matrix of eigenvalues and G = [g_1, ..., g_M] holds the corresponding eigenvectors. Using this, the solution of eq. (49) can be represented as

x(n) = Σ_{i=1}^{M} c_i^n g_i g_i^T [x(0) − x(∞)] + x(∞)    (51)

where x(0) and x(∞) are the initial and final values of x(n). The excess mean squared error can be represented as

J_ex(n) = Σ_{i=1}^{M} λ_i x_i(n) = λ^T x(n)    (52)

J_ex(n) = Σ_{i=1}^{M} c_i^n λ^T g_i g_i^T [x(0) − x(∞)] + λ^T x(∞)    (53)
J_ex(n) = Σ_{i=1}^{M} c_i^n λ^T g_i g_i^T [x(0) − x(∞)] + J_ex(∞)    (54)

The term Σ_{i=1}^{M} c_i^n λ^T g_i g_i^T [x(0) − x(∞)] denotes the transient behavior of the mean squared error, whereas the second term denotes the final value of the excess mean squared error.
Transient Behavior of the Mean Squared Error
Transient Behavior contd...
For property 2 to hold, all the eigenvalues of the matrix B have to be less than 1. Then, by the definition of the eigenvalues of the matrix B,

B = G C G^T
B g = c g

Σ_{j=1}^{M} b_ij g_j = c g_i,  i = 1, 2, ..., M    (59)
(1 − µλ_i)^2 g_i + µ^2 λ_i Σ_{j=1, j≠i}^{M} λ_j g_j = c g_i,  i = 1, 2, ..., M    (61)

From this it can be concluded that for g_i to be positive for all i, the step-size parameter µ has to be upper bounded as 0 < µ < 2/λ_max.
Property 3: The final value of the excess mean squared error is less than the minimum mean squared error if the step-size parameter µ satisfies the condition

Σ_{i=1}^{M} µλ_i / (2 − µλ_i) ≤ 1    (64)

For J_ex(∞) to be less than J_min, the step-size parameter µ has to satisfy the condition given in (64).
Property 4: The misadjustment, defined as the ratio of the steady-state value J_ex(∞) of the excess mean squared error to the minimum mean squared error J_min, equals

Φ = J_ex(∞) / J_min = Σ_{i=1}^{M} µλ_i / (2 − µλ_i)    (70)

which is less than unity if the step-size parameter µ satisfies the condition given in (64).
0 < µ < 2/λ_max    (71)

The condition for the LMS algorithm to be convergent in the mean square requires knowledge of the largest eigenvalue λ_max of the correlation matrix R. Since λ_max may not always be available, the trace tr[R], which upper-bounds λ_max, is used in its place:

0 < µ < 2/tr[R]    (72)
tr[R] = M r(0) = Σ_{k=0}^{M−1} E[|x(n − k)|^2]    (73)

The tap-input power can be defined as the sum of the mean-square values of the tap inputs [x(n), x(n − 1), ..., x(n − M + 1)]. Therefore the condition on µ can be further specified as

0 < µ < 2 / (tap-input power)    (74)
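The bound of eq. (74) can be estimated directly from data. The input statistics and tap count below are assumptions for illustration; only the formula µ < 2/tr[R] comes from the text:

```python
import numpy as np

# Assumed setup: unit-variance white input and an 8-tap filter.
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)

M = 8                        # number of taps (illustrative)
r0 = np.mean(x ** 2)         # estimate of r(0) = E[|x(n)|^2]
tap_input_power = M * r0     # tr[R] = M r(0) for a stationary input
mu_max = 2.0 / tap_input_power
```

In practice this estimate is convenient because the tap-input power can be tracked with a running average of |x(n)|^2, with no eigenvalue computation.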
If the step-size parameter µ is small compared to 2/λ_max, the misadjustment Φ can be written as

Φ = J_ex(∞) / J_min = Σ_{i=1}^{M} µλ_i / (2 − µλ_i)    (75)

Φ ≈ (µ/2) Σ_{i=1}^{M} λ_i = (µ/2) (tap-input power)    (76)
Defining an average eigenvalue as

λ_av = (1/M) Σ_{i=1}^{M} λ_i    (77)
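For a toy correlation matrix (assumed values), the small-µ approximation Φ ≈ (µ/2) M λ_av from eqs. (76)-(77) can be checked against the exact sum of eq. (75):

```python
import numpy as np

# Toy correlation matrix (illustrative values, not from the slides).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
lam = np.linalg.eigvalsh(R)               # eigenvalues of R
M = len(lam)
mu = 0.01                                 # small step size

lam_av = lam.mean()                       # average eigenvalue, eq. (77)
phi_exact = np.sum(mu * lam / (2 - mu * lam))   # eq. (75)
phi_small_mu = 0.5 * mu * M * lam_av            # eq. (76): (mu/2) tr[R]
```

For small µ the two values agree closely, which is why the misadjustment is often quoted simply as µ times half the tap-input power.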
Comparison of LMS Algorithm with Steepest Descent
Comparison of LMS Algorithm with Steepest Descent Contd...