Lecture 6
Adaptive Filters
Why Adaptive?
Classical approach: lowpass, highpass, bandstop and
bandpass filters with fixed coefficients. These filters only work
for cases where the spectra of the signal of interest and the noise do not
overlap and do not vary.
Optimal approach: filter coefficients can be adapted and
are usually selected as the optimum values which minimise a
cost function in terms of the mean squared error (the difference
between the actual output and the desired output).
When no a priori information is available, we need adaptive
filters which can adjust their own parameters based on the
incoming signal.
The plant is unknown and we use the adaptive filter to obtain a mathematical
model that best fits, in some sense, the unknown plant. The input to the plant and
to the filter is the same, and the filter is supposed to adapt so that its output y is
close to the plant's output d.
u = input of adaptive filter and plant
y = output of adaptive filter
d = desired response = output of plant
e = d-y = estimation error
The adaptive filter is to provide an inverse model that best represents (in some
sense) the unknown noisy plant. Ideally, the inverse model has a transfer function
equal to the reciprocal of the transfer function of the plant, such that the
combination of the two gives an ideal transmission medium.
u = output of plant and input to adaptive filter
y = output of adaptive filter
d = desired response = delayed system input
e = d - y = estimation error
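In transfer-function terms (a brief note; $C(z)$, $H(z)$ and the delay $\Delta$ are notation introduced here for illustration, not from the slides): if the plant has transfer function $H(z)$, the ideal inverse model $C(z)$ satisfies $C(z)\,H(z) = z^{-\Delta}$, i.e. $C(z) = z^{-\Delta}/H(z)$, so the cascade of plant and adaptive filter behaves as a pure delay — an ideal transmission medium, consistent with the delayed system input used as the desired response above.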
[Block diagram: adaptive noise cancellation. The far (reference) microphone provides x(n) = noise', which is the input to the adaptive filter; the filter output y(n) is the noise estimate, and the error e(n) is the speech output.]
Example: headset for pilots. The cockpit has lots of noise, e.g. engine noise, which
interferes with the pilot's voice. The noise in the far and near microphones may be
slightly different, but the two are correlated. The filter produces an output y(n) which
is the best estimate of the noise picked up by the near microphone. This is
subtracted from d(n) to give the desired speech.
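A minimal sketch of this noise-cancelling structure, assuming synthetic signals and the LMS weight update introduced later in this lecture; the filter length, learning rate and signal names are illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 5000, 8, 0.01                                # samples, filter length, learning rate (illustrative)

speech = np.sin(2 * np.pi * 0.03 * np.arange(N))        # stand-in for the pilot's voice
noise = rng.standard_normal(N)                          # noise at the far (reference) microphone
near_noise = np.convolve(noise, [0.6, 0.3, 0.1])[:N]    # correlated noise reaching the near microphone
d = speech + near_noise                                 # primary signal d(n) from the near microphone

w = np.zeros(M)                                         # adaptive filter tap weights
e = np.zeros(N)                                         # error signal = recovered speech estimate
for n in range(M - 1, N):
    u = noise[n - M + 1:n + 1][::-1]                    # latest M reference samples (tap-input vector)
    y = w @ u                                           # filter output y(n) = noise estimate
    e[n] = d[n] - y                                     # e(n) = d(n) - y(n), approaches the speech
    w += 2 * mu * e[n] * u                              # LMS weight update (derived later in the lecture)

The error e(n) tends towards the speech because the filter can only cancel the part of d(n) that is correlated with the reference noise.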
The adaptive filter is used to estimate future values of a signal based on past
values of the signal.
u = input of adaptive filter=delayed version of random signal
y = output of adaptive filter
d = desired response=random signal
e = d-y = estimation error=system output
PROBLEM STATEMENT
Suppose for the identification problem we have the input samples going into the
unknown system, $u_n$, and the output (desired signal), $d_n$, where

Input signal: $U_n = [\,u_0(n)\;\; u_1(n)\;\; \cdots\;\; u_{M-1}(n)\,]^T$

Desired signal: $d_n$ (samples $d_1, d_2, \ldots$)

Filter output: $y_n = \sum_{m=0}^{M-1} w_m u_m(n)$

We seek the weight vector $W^T = [\,w_0\;\; w_1\;\; \cdots\;\; w_{M-2}\;\; w_{M-1}\,]$
such that the mean square error (the power of the error signal), defined below, is
minimised:

$J = E[e_n^2], \qquad e_n = d_n - y_n$
The solution for the Wiener filter coefficients requires estimates of the
autocorrelation of the input signal and the cross-correlation of the input signal and
the desired signal.
In vector form: $y_n = U_n^T W$

Error signal: $e_n = d_n - y_n = d_n - U_n^T W$

Cost function:

$J = E[e_n^2] = E[(d_n - U_n^T W)^2]$
$\;\;= E[d_n^2] - 2\,E[d_n U_n^T]\,W + W^T E[U_n U_n^T]\,W$
$\;\;= E[d_n^2] - 2\,P^T W + W^T R W$
$R = E[U_n U_n^T] = E\begin{bmatrix}
u_0(n)u_0(n) & u_0(n)u_1(n) & \cdots & u_0(n)u_{M-1}(n) \\
u_1(n)u_0(n) & u_1(n)u_1(n) & \cdots & u_1(n)u_{M-1}(n) \\
\vdots & \vdots & \ddots & \vdots \\
u_{M-1}(n)u_0(n) & u_{M-1}(n)u_1(n) & \cdots & u_{M-1}(n)u_{M-1}(n)
\end{bmatrix}$

$P = E[d_n U_n] = E\begin{bmatrix}
d_n u_0(n) \\
d_n u_1(n) \\
\vdots \\
d_n u_{M-1}(n)
\end{bmatrix}$
To minimise J, set the gradient of the cost function with respect to W to zero:

$\frac{\partial J}{\partial W} = \frac{\partial E[e_n^2]}{\partial W} = 2RW - 2P = 0$

so that at the minimum, $J = J_{\min}$,

$RW = P \quad\Rightarrow\quad W = R^{-1}P \qquad \text{(Wiener-Hopf equation)}$
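As a sketch of how the Wiener-Hopf solution can be computed in practice, the code below estimates R and P by time-averaging over a finite record and solves W = R^{-1}P; the tapped-delay-line interpretation u_m(n) = u(n-m) and the synthetic "unknown plant" are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(1)
N, M = 10000, 4                                          # record length and filter length (illustrative)

u = rng.standard_normal(N)                               # input signal u(n)
w_true = np.array([0.5, -0.3, 0.2, 0.1])                 # hypothetical unknown plant
d = np.convolve(u, w_true)[:N] + 0.01 * rng.standard_normal(N)   # desired signal d(n)

# Tap-input vectors U_n = [u(n), u(n-1), ..., u(n-M+1)]^T stacked as rows
U = np.array([u[n - M + 1:n + 1][::-1] for n in range(M - 1, N)])
D = d[M - 1:]

R = (U.T @ U) / len(U)                                   # sample estimate of E[U_n U_n^T]
P = (U.T @ D) / len(U)                                   # sample estimate of E[d_n U_n]
W = np.linalg.solve(R, P)                                # Wiener-Hopf: W = R^{-1} P
print(W)                                                 # close to w_true for a long enough record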
Points to note:
For pseudo-stationary signals such as speech, the filter
coefficients can be periodically recalculated and updated for
every block of, say, N input samples (a sketch of this follows below).
Choice of filter length? A compromise must be made: too small
and the filter may not be able to function properly; too large
and the computational complexity becomes excessive.
Depending upon the relationship between the input u(n) and the
desired signal d(n), the filter can be used in various
applications.
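A sketch of the periodic-recalculation idea in the first point above, assuming the block Wiener estimate from the previous slides is recomputed for each block of samples; the block size, filter length and small diagonal loading are illustrative choices.

import numpy as np

def wiener_weights(u_block, d_block, M):
    """Estimate R and P over one block and solve W = R^{-1} P."""
    U = np.array([u_block[n - M + 1:n + 1][::-1]
                  for n in range(M - 1, len(u_block))])
    D = d_block[M - 1:]
    R = (U.T @ U) / len(U) + 1e-8 * np.eye(M)   # small diagonal load for numerical safety
    P = (U.T @ D) / len(U)
    return np.linalg.solve(R, P)

def blockwise_filter(u, d, M=8, block=512):
    """Recompute the weights for every block of samples (pseudo-stationary signals)."""
    weights = []
    for start in range(0, len(u) - block + 1, block):
        sl = slice(start, start + block)
        weights.append(wiener_weights(u[sl], d[sl], M))
    return weights                               # one weight vector per block, tracking slow changes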
$e_n = d_n - U_n^T W(n)$

$W(n+1) = W(n) + 2\mu\, e_n\, U_n$

where $W(n+1)$ is the updated value of the tap-weight vector, $W(n)$ is the old value, $\mu$ is the learning-rate parameter, $e_n$ is the error and $U_n$ is the tap-input vector.
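A minimal, generic implementation of this update rule, assuming an FIR tapped-delay-line structure with u_m(n) = u(n-m); the default filter length and learning rate are placeholders:

import numpy as np

def lms(u, d, M=4, mu=0.05):
    """LMS adaptation: W(n+1) = W(n) + 2*mu*e(n)*U(n)."""
    w = np.zeros(M)                        # initial tap-weight vector
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        U = u[n - M + 1:n + 1][::-1]       # tap-input vector U(n)
        y = w @ U                          # filter output y(n)
        e[n] = d[n] - y                    # error e(n) = d(n) - y(n)
        w = w + 2 * mu * e[n] * U          # weight update
    return w, e

For a sufficiently small learning rate, the weights converge in the mean towards the Wiener solution W = R^{-1}P.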
$y(n) = \sum_{m=0}^{M-1} w_m u_m(n)$
The filter has two taps, w(0) and w(1), and the following initial values for the
primary signal and reference signal: d(0)=3, d(1)=-2, d(2)=1, u(0)=3, u(1)=1, u(2)=2. Using a learning rate of 0.1,
(i) write the equations required for the weight updates:
y(n), e(n), w(0) and w(1);
(ii) hence determine the values of w(0) and w(1) for the first 2 iterations.
(A similar question, except for part (ii) which requires two iterations only,
appears in Example Sheet 4. Please see the solution posted on Moodle.)
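A short numerical sketch of part (ii), assuming the taps start at zero, that u_m(n) = u(n-m) so the first full update occurs at n = 1, and the 2μ factor from the update rule above; these assumptions should be checked against the posted solution.

import numpy as np

d = np.array([3.0, -2.0, 1.0])       # primary signal d(0), d(1), d(2)
u = np.array([3.0, 1.0, 2.0])        # reference signal u(0), u(1), u(2)
mu = 0.1                             # learning rate
w = np.zeros(2)                      # [w(0), w(1)], assumed initialised to zero

for n in (1, 2):                     # first two iterations
    U = np.array([u[n], u[n - 1]])   # tap-input vector [u(n), u(n-1)]
    y = w @ U                        # y(n) = w(0)u(n) + w(1)u(n-1)
    e = d[n] - y                     # e(n) = d(n) - y(n)
    w = w + 2 * mu * e * U           # w <- w + 2*mu*e(n)*[u(n), u(n-1)]
    print(f"n={n}: y={y:.2f}, e={e:.2f}, w={w}")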
Summary
Adaptive filters are time-variant filters which adapt their weights to
minimise a cost function, most commonly the power of an error signal: the
difference between the desired signal and the actual output.
Adaptive filters are applicable for processing dynamically changing signals.
Optimal Wiener filter requires statistical properties of signals which are
usually not available in practice.
The LMS algorithm only requires samples of the input signal and the desired
signal. It seeks the minimum-cost optimal weights by estimating the gradient
of the instantaneous error and moving in the negative direction of the
gradient towards the minimum cost. If the learning-rate parameter is carefully
chosen, the algorithm will approach the optimal Wiener solution.
FIR-based LMS filters are guaranteed to have unimodal performance (error)
surfaces, i.e. only one minimum.
In general, with other adaptive filters, the cost function may have local
minima. If it does, reaching the global minimum depends on appropriate
selection of the learning-rate parameter and the initial weight values. However,
what we have learnt serves as a basis for more practical variants of the
algorithm.
Recommended Reading
See papers on Examples of Adaptive Filter
Applications in Moodle: Recommended Reading.
See also a Simulink program (dspanc) for an example
of adaptive noise cancellation. (Type dspanc at the
Matlab command prompt and experiment with different
values of the learning-rate parameter.) The block diagram is
also given on the next slide.