Adaptive Filter With LMS

An adaptive filter is a digital filter that can automatically adjust its parameters in response to an optimization algorithm and changing input signals. It is used to process a signal to extract or enhance certain aspects. The LMS algorithm is commonly used for adaptive filtering as it does not require knowing the gradient of the error surface, instead estimating it at each iteration to minimize the mean squared error between the filter output and desired response. An example application is noise cancellation, where an adaptive filter can remove noise from a signal like a pilot's voice by learning the noise pattern from a reference microphone.

Adaptive filter

Overview
Before we start, we must understand some concepts:

 The term filter refers to a "black box" that takes an input signal, processes it, and then returns an output signal that in some way modifies the input. For example, if the input signal is noisy, one would want a filter that removes the noise but otherwise leaves the signal unchanged.

 We may use a filter to perform three basic information-processing tasks:

1. Filtering, which means the extraction of information about a quantity of interest at time t by using data measured up to and including time t.
2. Smoothing, which differs from filtering in that information about the quantity of interest need not be available at time t. This means that in the case of smoothing there is a delay in producing the result of interest.

3. Prediction, which is the forecasting side of information processing. The aim here is to derive information about what the quantity of interest will be like at some time t + T in the future, for some T > 0, by using data measured up to and including time t.
 We may classify filters into linear and nonlinear.

A filter is said to be linear if the filtered, smoothed, or predicted quantity at the output of the device is a linear function of the observations applied to the filter input. Otherwise, the filter is nonlinear.

[Figure: filters classified into linear and nonlinear]
Fixed versus Adaptive Filter Design
Fixed: determine the values of the coefficients of the digital filter so that they meet the desired specifications; the values are not changed once they are implemented.

Fixed: W0, W1, W2, ..., W(N-1)

Adaptive: the coefficient values are not fixed; they are adjusted to optimize some measure of the filter performance using the incoming input data and the error signal.

Adaptive: W0(n), W1(n), W2(n), ..., W(N-1)(n)
Introduction To adaptive filter

[Figure: adaptive filter block diagram. The input signal passes through the adaptive filter (A.F.) to produce the output signal, which is subtracted from the desired signal to form the error signal.]

 The figure shows a filter, emphasizing the way it is used in typical problems.

 The filter is used to reshape certain input signals in such a way that its output is a good estimate of the given desired signal.
Introduction To adaptive filter

 An adaptive filter is a digital filter with self-adjusting characteristics.

 It adapts automatically to changes in its input signals.

 A variety of adaptive algorithms have been developed for the operation of adaptive filters, e.g., LMS and RLS.

LMS: Least Mean Square
RLS: Recursive Least Squares

An adaptive filter contains two main components:
1. A digital filter (with adjustable coefficients).
2. An adaptive algorithm.
Noise Cancelling and Power
 We have two input signals to our adaptive filter:

Y_k = s_k + n_k

where

s_k = the desired signal,
n_k = the noise.

x_k is a measure of the contaminating signal, which is in some way correlated with n_k.
x_k gives us an estimate of n_k; call it n̂_k.

We try to find an estimate of s_k by subtracting our best estimate of the noise signal. Let ŝ_k be our estimate of the desired signal s_k:

ŝ_k = Y_k − n̂_k = (s_k + n_k) − n̂_k

Main objective: produce the optimum n̂_k.
ŝ_k = Y_k − n̂_k = (s_k + n_k) − n̂_k

 Theorem: minimizing the total power at the output of the canceller maximizes the output signal-to-noise ratio.

Proof:
ŝ_k = Y_k − n̂_k = (s_k + n_k) − n̂_k

Squaring:
ŝ_k² = s_k² + (n_k − n̂_k)² + 2 s_k (n_k − n̂_k)

Taking the mean:
E(ŝ_k²) = E(s_k²) + E((n_k − n̂_k)²) + 2 E(s_k (n_k − n̂_k))
Since the desired signal s_k is uncorrelated with n_k and n̂_k, the last term becomes zero:

E(ŝ_k²) = E(s_k²) + E((n_k − n̂_k)²)

E(s_k²) represents the signal power, E(ŝ_k²) represents the power of the signal estimate, and E((n_k − n̂_k)²) represents the remnant noise power.

Since E(s_k²) is fixed, minimizing the total output power minimizes the remnant noise power:

min E(ŝ_k²) = E(s_k²) + min E((n_k − n̂_k)²)
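To make the decomposition concrete, here is a minimal numerical sketch (not from the slides; the sinusoidal signal, the Gaussian noise, and the imperfect noise estimate are all assumptions) verifying that E(ŝ_k²) ≈ E(s_k²) + E((n_k − n̂_k)²) when s_k is uncorrelated with the noise:

import numpy as np

rng = np.random.default_rng(0)
K = 100_000

s = np.sin(2 * np.pi * 0.01 * np.arange(K))   # desired signal s_k
n = rng.normal(0, 1.0, K)                     # noise n_k, uncorrelated with s_k
n_hat = 0.8 * n + rng.normal(0, 0.1, K)       # an imperfect noise estimate n̂_k

s_hat = (s + n) - n_hat                       # canceller output ŝ_k

# E(ŝ²) should be close to E(s²) + E((n − n̂)²), since the cross term vanishes
print(np.mean(s_hat**2))
print(np.mean(s**2) + np.mean((n - n_hat)**2))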
Applications of Adaptive Filters:

1) Identification
Used to provide a linear model of an unknown plant.

Applications:
System identification

Applications of Adaptive Filters:

2) Inverse Modeling
Used to provide an inverse model of an unknown plant

Applications :
Channel Equalization

Applications of Adaptive Filters:

3) Prediction
Used to provide a prediction of the present value of a random signal

Applications :
Signal detection

Applications of Adaptive Filters:

4) Echo (Noise) cancellation


Subtracts noise from the received signal adaptively to improve the SNR.
A good example that illustrates the principles of adaptive noise cancelling is the removal of noise from the pilot's microphone in an airplane. Due to the high environmental noise produced by the airplane engines, the pilot's voice in the microphone is corrupted by a large amount of noise and can be very difficult to understand. To overcome this problem, an adaptive filter can be used.
Approaches to Adaptive Filtering

[Figure: taxonomy of adaptive filtering approaches]
 Nonlinear: neural networks.
 Linear:
   Stochastic gradient approach (Least Mean Square algorithms): LMS, NLMS, TVLMS, VSLMS, SSNLMS.
   Least squares estimation (Recursive Least Squares algorithms): RLS, FTRLS.
Stochastic Gradient
 The most commonly used type of adaptive filters.

 The cost function is defined as the mean-squared error: the difference between the filter output and the desired response.

 Based on the method of steepest descent: move towards the minimum on the error surface, which requires the gradient of the error surface to be known.

 The most popular adaptation algorithm is LMS:
   derived from steepest descent;
   does not require the gradient to be known: it is estimated at every iteration.
Least-Mean-Square (LMS) Algorithm.

(updated value of tap-weight vector) = (old value of tap-weight vector) + (learning-rate parameter) × (tap-input vector) × (error signal)

 In the family of stochastic gradient algorithms.
 An approximation of the steepest-descent method.
 Based on the MMSE (Minimum Mean Square Error) criterion.
 The adaptive process contains two important signals:
1. The filtering process, producing the output signal.
2. The desired signal (training sequence).
 Adaptive process: recursive adjustment of the filter tap weights.

The LMS algorithm consists of two basic processes that are followed in adaptive equalization:
 Training: adapting to the training sequence.
 Tracking: keeping track of the changing characteristics of the channel.
LMS Algorithm Steps:
M 1
Filter output zn  u  n  k  *
w
k
 n 
k 0

Estimation error en   d  n  


zn
Tap-weight adaptation

wk n 1 wk n  un  ke n *

 
10/13/2016
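As a concrete illustration of these three steps, here is a minimal real-valued NumPy sketch (the function name, the toy 4-tap plant, and the step size are assumptions, not from the slides):

import numpy as np

def lms(u, d, M, mu):
    """Real-valued LMS: returns output z, error e, and final tap weights w."""
    w = np.zeros(M)                        # tap weights w_k(n), initialized to zero
    z = np.zeros(len(u))
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]       # tap-input vector [u(n), ..., u(n-M+1)]
        z[n] = w @ x                       # filter output z(n)
        e[n] = d[n] - z[n]                 # estimation error e(n)
        w = w + mu * e[n] * x              # tap-weight adaptation
    return z, e, w

# Toy usage: identify an unknown 4-tap FIR plant from its input and output.
rng = np.random.default_rng(1)
u = rng.normal(size=5000)
d = np.convolve(u, [0.5, -0.3, 0.2, 0.1])[:len(u)]   # desired response d(n)
z, e, w = lms(u, d, M=4, mu=0.01)
print(w)   # close to [0.5, -0.3, 0.2, 0.1] after convergence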
Transversal Filter

[Figure: transversal (tapped-delay-line) filter structure]
Stability of LMS:
 The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies

0 < μ < 1 / λ_max

where λ_max is the largest eigenvalue of the correlation matrix of the input data.

 A more practical test for stability is

0 < μ < 1 / (input signal power)

 Larger values of the step size:
   increase the adaptation rate (faster adaptation);
   increase the residual mean-squared error.
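Both bounds can be estimated directly from data. A small sketch (the estimation approach and the helper name are assumptions):

import numpy as np

def lms_step_size_bounds(u, M):
    """Estimate 1/lambda_max and 1/(tap-input power) for an M-tap filter."""
    # Sample autocorrelation r(k) for lags 0..M-1, assembled into a Toeplitz R
    r = np.array([np.mean(u[k:] * u[:len(u) - k]) for k in range(M)])
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    lam_max = np.max(np.linalg.eigvalsh(R))
    power = M * np.mean(u * u)             # total tap-input power
    return 1.0 / lam_max, 1.0 / power

rng = np.random.default_rng(0)
u = rng.normal(size=10_000)
print(lms_step_size_bounds(u, M=4))        # eigenvalue bound, practical bound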
LMS – Advantages:
 Simplicity of implementation.

 Does not neglect the noise, unlike the zero-forcing equalizer.

 Stable and robust performance against different signal conditions.

LMS – Disadvantages:
 Slow convergence.
 Demands the use of a training sequence as a reference, thus decreasing the communication bandwidth.
Wiener filter

 Many adaptive algorithms can be viewed as approximations to the discrete Wiener filter.

 It tries to minimize the mean of the square of the error (Least Mean Square).

 The design of a Wiener filter requires a priori information about the statistics of the data to be processed.

 The filter is optimum only when the statistical characteristics of the input data match the a priori information on which the design of the filter is based.
Assuming an FIR filter structure with N coefficients (weights), the filter output (the noise estimate) is given by:

n̂_k = W^T X_k

where X_k is the vector correlated with the noise at the k-th sample and W is the set of adjustable weights.
The error is e_k = Y_k − W^T X_k. Squaring the error:

e_k² = Y_k² − 2 Y_k W^T X_k + W^T X_k X_k^T W

Taking the mean:

E(e_k²) = E(Y_k²) − 2 P^T W + W^T R W

where P = E(Y_k X_k) (the N-length cross-correlation vector)
and R = E(X_k X_k^T) (the N × N autocorrelation matrix).

The mean-square error is therefore a quadratic function of the weight vector W, so the error surface is a paraboloid with a single minimum.
The Wiener-Hopf Solution

Setting the gradient of E(e_k²) with respect to W to zero gives:

W_opt = R⁻¹ P

where P = E(Y_k X_k) and R = E(X_k X_k^T).
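A minimal numerical sketch of the Wiener-Hopf solution in the noise-cancelling configuration above (the reference-to-noise path and all signal parameters are assumptions):

import numpy as np

rng = np.random.default_rng(2)
K, N = 50_000, 4

s = np.sin(2 * np.pi * 0.01 * np.arange(K))      # desired signal s_k
x = rng.normal(size=K)                           # reference input x_k
n = np.convolve(x, [0.9, -0.4, 0.2, 0.1])[:K]    # noise n_k, correlated with x_k
y = s + n                                        # primary input Y_k = s_k + n_k

# Build tap-input vectors X_k and estimate R and P from the data
X = np.array([x[k - N + 1:k + 1][::-1] for k in range(N - 1, K)])
Y = y[N - 1:K]
R = (X.T @ X) / len(X)                           # R = E(X_k X_k^T)
P = (X.T @ Y) / len(X)                           # P = E(Y_k X_k)

W_opt = np.linalg.solve(R, P)                    # W_opt = R^{-1} P
print(W_opt)                                     # close to [0.9, -0.4, 0.2, 0.1]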
Issues with the Wiener-Hopf Solution

1. It requires knowledge of R and P, neither of which is known beforehand.

2. Matrix inversion is expensive (O(n³)).

3. If the signals are non-stationary, then both R and P will change with time, and so W_opt will have to be computed repeatedly.
The Widrow-Hoff LMS Algorithm

Based on the steepest-descent algorithm, the weights are updated as:

W_{k+1} = W_k + 2μ e_k X_k

where μ determines the stability and the rate of convergence:
 If μ is too large, we observe too much fluctuation.
 If μ is too small, the rate of convergence is too slow.
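A brief sketch (toy system-identification setup assumed) that makes the two failure modes visible by comparing the tail mean-squared error for several step sizes:

import numpy as np

def lms_tail_mse(mu, K=20_000, N=4, seed=3):
    """Run Widrow-Hoff LMS on a toy plant; return MSE over the last 2000 samples."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=K)
    d = np.convolve(x, [0.9, -0.4, 0.2, 0.1])[:K]    # desired response
    w = np.zeros(N)
    e = np.zeros(K)
    for k in range(N - 1, K):
        X = x[k - N + 1:k + 1][::-1]                 # tap-input vector X_k
        e[k] = d[k] - w @ X                          # error e_k
        w = w + 2 * mu * e[k] * X                    # W_{k+1} = W_k + 2 mu e_k X_k
    return np.mean(e[-2000:] ** 2)

for mu in (0.1, 0.01, 0.0001):
    # too large: fluctuation raises the residual error; too small: still converging
    print(mu, lms_tail_mse(mu))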
Least Square Estimation

Recursive Least Square (RLS) Algorithm
γ (typically between 0.98 and 1) is referred to as the "forgetting factor".

 The previous samples contribute less and less to the new weights.

 When γ = 1, we have "infinite memory", and this weighting scheme reduces to the exact least-squares solution.
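For concreteness, here is a compact sketch of a standard exponentially-weighted RLS update (the variable names, the initialization constant delta, and the toy usage are assumptions; the slides' own derivation was not preserved in this transcript):

import numpy as np

def rls(u, d, M, gamma=0.99, delta=100.0):
    """Exponentially-weighted RLS with forgetting factor gamma."""
    w = np.zeros(M)
    P = delta * np.eye(M)                      # inverse correlation matrix estimate
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]           # tap-input vector
        k = P @ x / (gamma + x @ P @ x)        # gain vector
        e = d[n] - w @ x                       # a priori error
        w = w + k * e                          # weight update
        P = (P - np.outer(k, x @ P)) / gamma   # update inverse correlation matrix
    return w

# Toy usage: identify the same 4-tap plant as in the LMS example.
rng = np.random.default_rng(4)
u = rng.normal(size=2000)
d = np.convolve(u, [0.5, -0.3, 0.2, 0.1])[:len(u)]
print(rls(u, d, M=4))   # converges in far fewer samples than LMS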
Comparison against LMS

 RLS has a rapid rate of convergence compared to LMS.

 RLS is computationally more expensive than LMS.

Thank YOU ^_^
