P6 Adaptive Filtering LMS
FILTERING (II)
Stochastic Gradient-based Algorithms: (Least-Mean-Square [LMS])
$w(n+1) = w(n) + \mu\,[p(n) - R(n)\,w(n)]$
We note two main problems with this approach:
$\hat{R}(n) = u(n)\,u^H(n)$   (3)
$\hat{p}(n) = u(n)\,d^*(n)$   (4)
By using (3) and (4) in (1):
$\hat{\nabla} J(n) = -2\,u(n)\,d^*(n) + 2\,u(n)\,u^H(n)\,\hat{w}(n)$   (5)
1. Filter output:
$y(n) = \hat{w}^H(n)\,u(n)$   (7a)
2. Estimation error:
$e(n) = d(n) - y(n)$   (7b)
3. Tap-weight adaptation:
$\hat{w}(n+1) = \hat{w}(n) + \mu\,u(n)\,e^*(n)$   (7c)
where $\mu\,u(n)\,e^*(n)$ is the correction term.
The algorithm is initialized with $\hat{w}(0) = 0$.
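To make steps (7a)-(7c) concrete, here is a minimal NumPy sketch of the LMS recursion with the initialization $\hat{w}(0) = 0$; the function name lms, its arguments, and the system-identification test at the end are illustrative assumptions, not part of the original material.

```python
import numpy as np

def lms(u, d, M, mu):
    """Minimal complex LMS sketch implementing (7a)-(7c)."""
    w = np.zeros(M, dtype=complex)            # initialization: w_hat(0) = 0
    e = np.zeros(len(u), dtype=complex)
    for n in range(M - 1, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]        # tap-input vector u(n) = [u(n), ..., u(n-M+1)]
        y = np.vdot(w, u_n)                   # (7a): y(n) = w_hat^H(n) u(n)
        e[n] = d[n] - y                       # (7b): e(n) = d(n) - y(n)
        w = w + mu * u_n * np.conj(e[n])      # (7c): correction term mu * u(n) * e*(n)
    return w, e

# Illustrative use: identify an unknown FIR response w0 from noisy measurements.
rng = np.random.default_rng(0)
w0 = np.array([0.5, -0.4, 0.2, 0.1], dtype=complex)
u = rng.standard_normal(5000).astype(complex)
d = np.convolve(u, np.conj(w0))[:len(u)] + 0.01 * rng.standard_normal(5000)
w_hat, e = lms(u, d, M=4, mu=0.01)            # w_hat approaches w0 as n grows
```

With a unit-variance white input and a small step size, the estimated weights converge toward w0, consistent with the convergence discussion that follows.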
Notes on the LMS algorithm from the above figure:
Rewrite the adaptation (7c) in terms of the weight-error vector $\varepsilon(n) = \hat{w}(n) - w_0$:

$\hat{w}(n+1) - w_0 = \hat{w}(n) - w_0 + \mu\,u(n)\,[d^*(n) - u^H(n)\,\hat{w}(n)]$

that is,

$\varepsilon(n+1) = \varepsilon(n) + \mu\,u(n)\,[d^*(n) - u^H(n)\,\hat{w}(n)]$

where the bracketed term equals $e_0^*(n) - u^H(n)\,\varepsilon(n)$, with $e_0(n)$ the estimation error of the optimum (Wiener) filter. By the principle of orthogonality,

$E[u(n)\,e_0^*(n)] = 0$

so taking expectations (for small step size), the mean weight-error vector $c(n) = E[\varepsilon(n)]$ obeys

$c(n+1) = [I - \mu R]\,c(n)$   (A)
We therefore conclude from (A) that the mean of $\varepsilon(n)$ converges to zero as $n \to \infty$, provided:

$0 < \mu < \dfrac{2}{\lambda_{\max}}$   (B)

Conclusion: if $\mu$ is set as in (B),

$E[\hat{w}(n)] \to w_0$ as $n \to \infty$
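As a rough illustration of bound (B), the sketch below estimates R from data, computes $\lambda_{\max}$, and picks a step size well inside the bound; the white-noise input and the factor 0.1 are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8
u = rng.standard_normal(10_000)

# Time-averaged estimate of the M x M correlation matrix of the tap-input vector.
U = np.lib.stride_tricks.sliding_window_view(u, M)[:, ::-1]
R_hat = (U.T @ U) / U.shape[0]

lam_max = np.linalg.eigvalsh(R_hat).max()
mu = 0.1 * (2.0 / lam_max)                    # a choice comfortably inside (B)
print(f"lambda_max ~ {lam_max:.3f}, bound 2/lambda_max ~ {2.0/lam_max:.3f}, mu = {mu:.4f}")
```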
Start from:

$e(n) = d(n) - \hat{w}^H(n)\,u(n)$
$\quad\;\;\, = d(n) - w_0^H\,u(n) - \varepsilon^H(n)\,u(n)$
$\quad\;\;\, = e_0(n) - \varepsilon^H(n)\,u(n)$
Now:

$J(n) = E[\,|e(n)|^2\,]$
$\quad\;\;\; = E[(e_0(n) - \varepsilon^H(n)\,u(n))(e_0^*(n) - u^H(n)\,\varepsilon(n))]$
Expanding this and invoking the small-step-size (independence) assumptions, the excess mean-squared error can be written as

$J_{ex}(n) = \lambda^T x(n)$

where $\lambda$ is the vector of eigenvalues of $R$ and $x(n)$ collects the diagonal elements of the weight-error correlation matrix expressed in the eigenvector basis of $R$. The vector $x(n)$ obeys the first-order vector difference equation

$x(n+1) = B\,x(n) + \mu^2 J_{\min}\,\lambda$   (F)

with

$b_{ij} = \begin{cases} (1-\mu\lambda_i)^2 + \mu^2\lambda_i^2, & i = j \\ \mu^2\lambda_i\lambda_j, & i \neq j \end{cases}$
The matrix B is real, positive and symmetric.
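As a small sketch, B can be assembled element-wise from an illustrative eigenvalue spectrum of R and checked numerically; the values of lam and mu below are assumptions for the example.

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 2.0])   # illustrative eigenvalues of R
mu = 0.05                              # illustrative step size

B = mu**2 * np.outer(lam, lam)                            # off-diagonal: mu^2 lam_i lam_j
np.fill_diagonal(B, (1 - mu * lam)**2 + mu**2 * lam**2)   # diagonal: (1 - mu lam_i)^2 + mu^2 lam_i^2

assert np.allclose(B, B.T)                   # B is symmetric
assert np.all(np.linalg.eigvalsh(B) > 0)     # ... and positive
assert np.all(np.linalg.eigvalsh(B) < 1)     # transient modes c_i^n die out for this mu
```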
Solving the first-order vector difference equation (F) gives:

$x(n) = B^n x(0) + \mu^2 J_{\min} \displaystyle\sum_{i=0}^{n-1} B^i \lambda$   (G)
$\displaystyle\sum_{i=0}^{n-1} B^i = (I - B)^{-1}(I - B^n)$
Then equation (G) becomes:

$x(n) = \underbrace{B^n\,[x(0) - \mu^2 J_{\min}(I - B)^{-1}\lambda]}_{\text{transient component}} + \underbrace{\mu^2 J_{\min}(I - B)^{-1}\lambda}_{\text{steady-state component}}$
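The split into transient and steady-state components can be checked numerically: iterating (F) for n steps should reproduce the closed form above. The values of lam, mu, J_min and x(0) below are illustrative.

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 2.0])   # illustrative eigenvalues of R
mu, J_min = 0.05, 0.1                  # illustrative step size and minimum MSE
B = mu**2 * np.outer(lam, lam)
np.fill_diagonal(B, (1 - mu * lam)**2 + mu**2 * lam**2)

x0 = np.full(len(lam), 0.3)            # illustrative x(0)
x = x0.copy()
n = 200
for _ in range(n):                     # iterate (F): x <- B x + mu^2 J_min lam
    x = B @ x + mu**2 * J_min * lam

x_inf = mu**2 * J_min * np.linalg.solve(np.eye(len(lam)) - B, lam)
x_closed = np.linalg.matrix_power(B, n) @ (x0 - x_inf) + x_inf   # transient + steady state
assert np.allclose(x, x_closed)
```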
Since $B$ is symmetric, we can apply an orthogonal similarity transformation:

$G^T B\,G = C$

where $G$ is an orthogonal matrix ($G^T G = I$), so that

$B^n = G\,C^n G^T$
$C$: diagonal matrix with entries $c_i$, $i = 1, 2, \ldots, M$
$g_i$: eigenvectors of $B$ associated with the eigenvalues $c_i$
We can then manipulate the equation for $x(n)$ into:

$x(n) = \displaystyle\sum_{i=1}^{M} c_i^n\, g_i g_i^T\,[x(0) - x(\infty)] + x(\infty)$   (H)

where

$x(\infty) = \mu^2 J_{\min}\,(I - B)^{-1}\lambda$
And:

$G\,C^n G^T = \displaystyle\sum_{i=1}^{M} c_i^n\, g_i g_i^T$
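A quick numerical check of this spectral decomposition of $B^n$, reusing the illustrative B from the earlier sketch.

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 2.0])
mu = 0.05
B = mu**2 * np.outer(lam, lam)
np.fill_diagonal(B, (1 - mu * lam)**2 + mu**2 * lam**2)

c, G = np.linalg.eigh(B)               # eigenvalues c_i and orthonormal eigenvectors g_i
n = 25
lhs = np.linalg.matrix_power(B, n)
rhs = sum(c[i]**n * np.outer(G[:, i], G[:, i]) for i in range(len(c)))
assert np.allclose(lhs, rhs)           # B^n = G C^n G^T = sum_i c_i^n g_i g_i^T
```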
$J_{ex}(\infty) = J_{\min}\,\dfrac{\displaystyle\sum_{i=1}^{M} \dfrac{\mu\lambda_i}{2-\mu\lambda_i}}{1 - \displaystyle\sum_{i=1}^{M} \dfrac{\mu\lambda_i}{2-\mu\lambda_i}}$   (J)
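Equation (J) is straightforward to evaluate; the sketch below does so for the same illustrative eigenvalues and step size used earlier.

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 2.0])   # illustrative eigenvalues of R
mu, J_min = 0.05, 0.1

s = np.sum(mu * lam / (2 - mu * lam))
J_ex_inf = J_min * s / (1 - s)         # equation (J)
print(f"J_ex(inf) = {J_ex_inf:.5f}")
```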
Now, since:

$J_{ex}(n) = J(n) - J_{\min} = \mathrm{tr}\{R\,K(n)\}$
putting equation (J) in (I) gives the time evolution of the mean squared error for the LMS algorithm:

$J(n) = \displaystyle\sum_{i=1}^{M} r_i\, c_i^n + \dfrac{J_{\min}}{1 - \displaystyle\sum_{i=1}^{M} \dfrac{\mu\lambda_i}{2-\mu\lambda_i}}$   (K)
where:

$r_i = \lambda^T g_i g_i^T\,[x(0) - x(\infty)]$

and $\lambda_i$, $i = 1, 2, \ldots, M$, are the eigenvalues of $R$.
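Putting the pieces of (K) together, the theoretical learning curve can be computed directly from $\lambda$, $\mu$, $J_{\min}$ and $x(0)$ (all illustrative values below): the transient terms $r_i c_i^n$ decay, and $J(n)$ settles at $J_{\min}/(1 - \sum_i \mu\lambda_i/(2-\mu\lambda_i))$.

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 2.0])   # illustrative eigenvalues of R
mu, J_min = 0.05, 0.1
M = len(lam)

B = mu**2 * np.outer(lam, lam)
np.fill_diagonal(B, (1 - mu * lam)**2 + mu**2 * lam**2)

c, G = np.linalg.eigh(B)                                   # c_i and g_i of B
x0 = np.full(M, 0.3)                                       # illustrative x(0)
x_inf = mu**2 * J_min * np.linalg.solve(np.eye(M) - B, lam)
r = np.array([lam @ np.outer(G[:, i], G[:, i]) @ (x0 - x_inf) for i in range(M)])

s = np.sum(mu * lam / (2 - mu * lam))
n = np.arange(500)
J = (r * c**n[:, None]).sum(axis=1) + J_min / (1 - s)      # equation (K)
print(J[0], J[-1])                                         # decays toward J_min / (1 - s)
```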
The LMS algorithm is convergent in the mean
square.
c. The mean squared error produced by the LMS algorithm has the final value:

$J(\infty) = \dfrac{J_{\min}}{1 - \displaystyle\sum_{i=1}^{M} \dfrac{\mu\lambda_i}{2-\mu\lambda_i}}$
The average time constant of the MSE learning curve is approximately

$(\tau)_{mse,av} \approx \dfrac{1}{2\mu\lambda_{av}}$

where $\lambda_{av}$ is the average eigenvalue of $R$.
The misadjustment is then

$\mathcal{M} = \dfrac{\mu M \lambda_{av}}{2} \approx \dfrac{M}{4\,(\tau)_{mse,av}}$

where $M$ is the filter length.
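A short numerical check of these small-step-size approximations, with $\lambda_{av}$ taken as the mean of the illustrative eigenvalues used above; by construction $\mu M \lambda_{av}/2$ coincides with $M/(4\,(\tau)_{mse,av})$.

```python
import numpy as np

lam = np.array([0.2, 0.5, 1.0, 2.0])    # illustrative eigenvalues of R
mu = 0.05
M = len(lam)
lam_av = lam.mean()

tau_mse_av = 1 / (2 * mu * lam_av)       # average MSE time constant
misadjustment = mu * M * lam_av / 2      # small-step-size misadjustment
print(tau_mse_av, misadjustment, M / (4 * tau_mse_av))   # last two agree by construction
```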