Adaptive Filter With LMS
Overview
Before we start, we must understand some concepts:
A filter is a "black box" that takes an input signal, processes it, and then
returns an output signal that in some way modifies the input. For example, if
the input signal is noisy, one would want a filter that removes the noise but
otherwise leaves the signal unchanged.
1. Filtering, which means extracting information about a quantity of interest
at time t by using data measured up to and including time t.
2. Smoothing, which differs from filtering in that information about the
quantity of interest need not be available at time t. This means that in the
case of smoothing there is a delay in producing the result of interest.
We may classify filters into linear and nonlinear.
Fixed versus Adaptive Filter Design
Fixed: determine the values of the coefficients of the digital filter so that
they meet the desired specifications; the values are not changed once they
are implemented.
Adaptive: the coefficient values are not fixed. They are adjusted to optimize
some measure of the filter's performance using the incoming input data and
the error signal.
[Block diagram: the input signal passes through the adaptive filter (A.F.);
the output signal is subtracted from the desired signal to form the error
signal.]
The filter is used to reshape certain input signals in such a way that its
output is a good estimate of the given desired signal.
Introduction to Adaptive Filters
$$y_k = s_k + n_k$$
where
$s_k$ = desired signal,
$n_k$ = noise,
$x_k$ = a measure of the contaminating signal, which is in some way
correlated with $n_k$.
Filtering $x_k$ gives us an estimate of $n_k$, call it $\hat{n}_k$.
We try to find an estimate of $s_k$ by subtracting our best estimate of the
noise signal. Let $\hat{s}_k$ be our estimate of the desired signal $s_k$:
$$\hat{s}_k = y_k - \hat{n}_k = (s_k + n_k) - \hat{n}_k$$
Main objective: produce the optimum $\hat{n}_k$.
Proof
$$\hat{s}_k = y_k - \hat{n}_k = (s_k + n_k) - \hat{n}_k$$
Squaring:
$$\hat{s}_k^2 = s_k^2 + (n_k - \hat{n}_k)^2 + 2 s_k (n_k - \hat{n}_k)$$
Taking the mean:
$$E[\hat{s}_k^2] = E[s_k^2] + E[(n_k - \hat{n}_k)^2] + 2 E[s_k (n_k - \hat{n}_k)]$$
Since the desired signal $s_k$ is uncorrelated with $n_k$ and $\hat{n}_k$,
the last term becomes zero:
$$E[\hat{s}_k^2] = E[s_k^2] + E[(n_k - \hat{n}_k)^2]$$
$E[s_k^2]$ represents the signal power, $E[\hat{s}_k^2]$ represents the power
of the signal estimate, and $E[(n_k - \hat{n}_k)^2]$ represents the remnant
noise power.
Since $E[s_k^2]$ is fixed, minimizing the output power $E[\hat{s}_k^2]$ is
equivalent to minimizing the remnant noise power.
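A minimal numeric sketch of this idea in Python (the signals, the 0.8
correlation coefficient, and the single least-squares gain are illustrative
assumptions, not from the slides): estimating the noise from the correlated
reference and subtracting it leaves the output power close to the signal
power, exactly as the identity above predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10_000
s = np.sin(2 * np.pi * 0.01 * np.arange(K))   # desired signal s_k
x = rng.standard_normal(K)                     # reference, correlated with the noise
n = 0.8 * x + 0.1 * rng.standard_normal(K)     # contaminating noise n_k
y = s + n                                      # primary input y_k = s_k + n_k

# Estimate the noise from the reference with a single least-squares gain g:
# g minimizes E[(y - g x)^2]; since s is uncorrelated with x, g -> E[n x]/E[x^2].
g = np.dot(y, x) / np.dot(x, x)
n_hat = g * x                                  # noise estimate \hat{n}_k
s_hat = y - n_hat                              # signal estimate \hat{s}_k

print("power of y     :", np.mean(y ** 2))
print("power of s_hat :", np.mean(s_hat ** 2))  # close to the signal power
print("signal power   :", np.mean(s ** 2))
```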
Applications of Adaptive Filters
1) Identification
Used to provide a linear model of an unknown plant.
Applications: system identification.
2) Inverse Modeling
Used to provide an inverse model of an unknown plant.
Applications: channel equalization.
3) Prediction
Used to provide a prediction of the present value of a random signal.
Applications: signal detection.
A good example that illustrates the principles of adaptive noise cancelling
is removing the noise from the pilot's microphone in an airplane.
Due to the high environmental noise produced by the airplane engines, the
pilot's voice in the microphone is corrupted by a large amount of noise and
can be very difficult to understand.
To overcome this problem, an adaptive filter can be used: a reference
microphone near the engines can provide the correlated input $x_k$.
Approaches to Adaptive Filtering
[Diagram: approaches to adaptive linear filtering.]
Least-Mean-Square (LMS) Algorithm
$$w(n+1) = w(n) + \mu\, u(n)\, e(n)$$
(updated tap-weight vector = old tap-weight vector + learning-rate parameter
× tap-input vector × error signal)
In the family of stochastic gradient algorithms.
An approximation of the steepest-descent method.
Based on the MMSE (Minimum Mean-Square Error) criterion.
The adaptive process contains two important signals:
1) the filtering process, producing the output signal;
2) the desired signal (training sequence).
Adaptive process: recursive adjustment of the filter tap weights.
The LMS algorithm consists of two basic processes in adaptive equalization:
Training: adapting to a known training sequence.
Tracking: keeping track of the changing characteristics of the channel.
LMS Algorithm Steps:
1. Filter output: $z(n) = \sum_{k=0}^{M-1} w_k(n)\, u(n-k)$
2. Error estimation: $e(n) = d(n) - z(n)$
3. Tap-weight update: $w_k(n+1) = w_k(n) + \mu\, u(n-k)\, e(n)$
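A minimal sketch of these three steps in Python (the function name, tap
count, and step size are illustrative assumptions): the same loop serves the
noise-cancelling setup above if u is the noise reference $x_k$, d is the
primary input $y_k$, and the error e plays the role of the signal estimate
$\hat{s}_k$.

```python
import numpy as np

def lms_filter(u, d, M=8, mu=0.01):
    """Adaptive FIR filter trained with the LMS algorithm.

    u  : input (reference) signal, shape (N,)
    d  : desired signal, shape (N,)
    M  : number of filter taps
    mu : learning-rate (step-size) parameter
    Returns the filter output z, the error e, and the final tap weights w.
    """
    N = len(u)
    w = np.zeros(M)                    # tap-weight vector w(n)
    z = np.zeros(N)
    e = np.zeros(N)
    for n in range(M, N):
        x = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
        z[n] = w @ x                   # step 1: filter output
        e[n] = d[n] - z[n]             # step 2: error estimation
        w = w + mu * x * e[n]          # step 3: tap-weight update
    return z, e, w
```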
Transversal Filter
[Figure: transversal (tapped-delay-line) filter structure.]
Stability of LMS:
The LMS algorithm is convergent in the mean square if and only if the
step-size parameter satisfies
$$0 < \mu < \frac{1}{\lambda_{\max}}$$
where $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix of
the input data.
A more practical test for stability is
$$0 < \mu < \frac{1}{\text{input signal power}}$$
Larger values of the step size:
increase the adaptation rate (faster adaptation), but
increase the residual mean-squared error.
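A small sketch of how these bounds might be checked numerically (the helper
name and the use of the trace of R as a conservative stand-in for the input
signal power are assumptions):

```python
import numpy as np

def stable_mu_bounds(u, M=8):
    """Step-size bounds for LMS on input signal u (illustrative helper).

    Returns (mu_eig, mu_power): the bound 1/lambda_max from the input
    correlation matrix, and the cruder 1/(tap-input power) bound.
    """
    # Estimate the M x M input correlation matrix R = E[x x^T] from data.
    X = np.lib.stride_tricks.sliding_window_view(u, M)
    R = (X.T @ X) / len(X)
    lam_max = np.linalg.eigvalsh(R).max()
    # trace(R) = M * E[u^2] = sum of eigenvalues >= lambda_max, so
    # 1/trace(R) is a safe, more conservative choice than 1/lambda_max.
    power = np.mean(u ** 2) * M
    return 1.0 / lam_max, 1.0 / power
```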
LMS – Advantage:
Simplicity of implementation.
LMS – Disadvantage:
Slow convergence.
Demands the use of a training sequence as a reference, thus decreasing the
communication bandwidth.
Wiener filter
The design of a Wiener filter requires a priori information about the statistics of
the data to be processed.
The filter is optimum only when the statistical characteristics of the input data
match the a priori information on which the design of the filter is based.
Assuming an FIR filter structure with N coefficients (weights), the output
signal, i.e. the noise estimate, is given by:
$$\hat{n}_k = W^T X_k$$
where $X_k$ is the vector correlated with the noise at the $k$-th sample and
$W$ is the set of adjustable weights.
The error is $\varepsilon_k = y_k - W^T X_k$. Squaring the error:
$$\varepsilon_k^2 = y_k^2 - 2\, y_k\, W^T X_k + W^T X_k X_k^T W$$
Taking the mean:
$$E[\varepsilon_k^2] = E[y_k^2] - 2\, W^T P + W^T R W$$
with $P$ and $R$ as defined below.
The Wiener-Hopf solution
Minimizing $E[\varepsilon_k^2]$ with respect to $W$ gives
$$W_{\mathrm{opt}} = R^{-1} P$$
where
$$P = E[\, y_k X_k \,]$$
and
$$R = E[\, X_k X_k^T \,]$$
is the correlation matrix of the input data.
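A minimal numeric sketch of computing this solution from data (the function
name, tap count, and the use of sample averages in place of the expectations
are assumptions):

```python
import numpy as np

def wiener_weights(x_ref, y, M=8):
    """Estimate the Wiener-Hopf solution W_opt = R^{-1} P from data.

    x_ref : reference signal correlated with the noise
    y     : primary signal y_k = s_k + n_k
    M     : number of filter taps
    Sample averages stand in for the expectations E[.].
    """
    X = np.lib.stride_tricks.sliding_window_view(x_ref, M)[:, ::-1]  # rows are X_k
    yk = y[M - 1:]                     # align y_k with each tap-input vector X_k
    R = (X.T @ X) / len(X)             # R = E[X_k X_k^T]
    P = (X.T @ yk) / len(X)            # P = E[y_k X_k]
    return np.linalg.solve(R, P)       # W_opt = R^{-1} P (solve, don't invert)
```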
Issues with the Wiener-Hopf solution
Computing $W_{\mathrm{opt}}$ requires a priori knowledge of the statistics
$R$ and $P$, and inverting $R$ is computationally expensive; this motivates
iterative approximations such as the LMS algorithm.
The Widrow-Hoff LMS algorithm
$$W_{k+1} = W_k + 2\mu\, \varepsilon_k X_k$$
where
$$\varepsilon_k = y_k - W_k^T X_k$$
Each update replaces the expectations in the Wiener-Hopf solution with
instantaneous estimates, descending the error surface one noisy gradient
step at a time.
Recursive Least Squares (RLS) Algorithm
The exponentially weighted RLS recursion updates the tap weights at every
sample:
$$k_n = \frac{P_{n-1} X_n}{\gamma + X_n^T P_{n-1} X_n}$$
$$\varepsilon_n = y_n - W_{n-1}^T X_n$$
$$W_n = W_{n-1} + k_n\, \varepsilon_n$$
$$P_n = \gamma^{-1}\left(P_{n-1} - k_n X_n^T P_{n-1}\right)$$
where $P_n$ tracks the inverse of the exponentially weighted input
correlation matrix.
Gamma ($\gamma$, typically between 0.98 and 1) is referred to as the
"forgetting factor".
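A minimal sketch of this recursion in Python (the function name, tap count,
and the P = delta*I initialization are illustrative assumptions):

```python
import numpy as np

def rls_filter(u, d, M=8, gamma=0.99, delta=100.0):
    """Adaptive FIR filter trained with exponentially weighted RLS.

    u     : input (reference) signal
    d     : desired signal
    M     : number of taps
    gamma : forgetting factor (typically between 0.98 and 1)
    delta : initial value for P = delta * I (a common regularizing choice)
    """
    N = len(u)
    w = np.zeros(M)
    P = delta * np.eye(M)              # inverse correlation matrix estimate
    e = np.zeros(N)
    for n in range(M, N):
        x = u[n - M + 1:n + 1][::-1]             # tap-input vector X_n
        k = P @ x / (gamma + x @ P @ x)          # gain vector k_n
        e[n] = d[n] - w @ x                      # a priori error
        w = w + k * e[n]                         # weight update
        P = (P - np.outer(k, x @ P)) / gamma     # update inverse correlation
    return e, w
```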
Comparison against LMS
RLS converges considerably faster than LMS and has no step-size parameter to
tune, but each update costs $O(N^2)$ operations (versus $O(N)$ for LMS) and
it is more sensitive to numerical round-off errors.
Thank YOU ^_^