
SGN 2206 Adaptive Signal Processing


Lecturer: Ioan Tabus
Office: TF 414
E-mail: [email protected]

Contents of the course:


Basic adaptive signal processing methods
Linear adaptive filters
Supervised training

Requirements:
Project work: Exercises and programs for algorithm implementation
Final examination

Lecture 1

Textbook: Simon Haykin, Adaptive Filter Theory, Prentice Hall International, 2002.

Chapters:

0. Background and Preview
1. Stationary Processes and Models
2. Wiener Filters
3. Linear Prediction
4. Method of Steepest Descent
5. Least-Mean-Square Adaptive Filters
6. Normalized Least-Mean-Square Adaptive Filters
7. Frequency-Domain Adaptive Filters
8. Method of Least Squares
9. Recursive Least-Squares Algorithm
10. Kalman Filters
11. Square-Root Adaptive Filters
12. Order-Recursive Adaptive Filters
13. Finite-Precision Effects
14. Tracking of Time-Varying Systems
15. Adaptive Filters Using Infinite-Duration Impulse Response Structures
16. Blind Deconvolution
17. Back-Propagation Learning
Epilogue


1. Introduction to Adaptive Filtering

1.1 Example: Adaptive noise cancelling

Found in many applications:
- Cancelling 50 Hz interference in electrocardiography (Widrow, 1975);
- Reduction of acoustic noise in speech (cockpit of a military aircraft: 10-15 dB reduction).

Two measured inputs, d(n) and v_1(n):
- d(n) comes from a primary sensor: d(n) = s(n) + v_0(n),
  where s(n) is the information-bearing signal and v_0(n) is the corrupting noise;
- v_1(n) comes from a reference sensor.

Hypothesis:
* The ideal signal s(n) is not correlated with the noise sources v_0(n) and v_1(n):

  E[s(n) v_0(n-k)] = 0,   E[s(n) v_1(n-k)] = 0,   for all k.

* The reference noise v_1(n) and the noise v_0(n) are correlated, with unknown cross-correlation p(k):

  E[v_0(n) v_1(n-k)] = p(k).
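As an illustration of this signal model (not part of the original notes; the sinusoidal s(n), the moving-average coefficients, and the signal length are arbitrary choices), the following Python sketch generates signals satisfying both hypotheses: s(n) is independent of the common white-noise source that drives v_0(n) and v_1(n), so the two noises are correlated with each other but uncorrelated with s(n).

import numpy as np

rng = np.random.default_rng(0)
N = 5000                                       # number of samples (arbitrary)
n = np.arange(N)

# Information-bearing signal s(n): here a sinusoid.
s = np.sin(2 * np.pi * 0.01 * n)

# A common white-noise source drives both sensor noises, so v0(n) and
# v1(n) are mutually correlated but uncorrelated with s(n).
w_src = rng.standard_normal(N)
v0 = np.convolve(w_src, [1.0, 0.6, 0.2])[:N]   # noise reaching the primary sensor
v1 = np.convolve(w_src, [0.8, -0.4])[:N]       # noise measured by the reference sensor

d = s + v0                                     # primary sensor signal d(n) = s(n) + v0(n)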


Description of adaptive filtering operations, at any time instant n:

* The reference noise v_1(n) is processed by an adaptive filter, with time-varying parameters w_0(n), w_1(n), ..., w_{M-1}(n), to produce the output signal

  y(n) = Σ_{k=0}^{M-1} w_k(n) v_1(n-k).

* The error signal is computed as e(n) = d(n) - y(n).
* The parameters of the filter are modified in an adaptive manner, for example using the LMS algorithm (the simplest adaptive algorithm):

  w_k(n+1) = w_k(n) + μ v_1(n-k) e(n),        (LMS)

  where μ is the adaptation constant. (A code sketch of these three operations follows below.)
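A minimal Python sketch of these three operations (an illustration, not the course's reference implementation; the filter length M, step size mu, and function name are arbitrary), using the synthetic d and v1 from the previous sketch:

import numpy as np

def lms_noise_canceller(d, v1, M=8, mu=0.01):
    # d  : primary sensor signal d(n) = s(n) + v0(n)
    # v1 : reference noise signal
    # Returns the error signal e(n), which should approximate s(n).
    N = len(d)
    w = np.zeros(M)                      # filter weights w_0(n), ..., w_{M-1}(n)
    e = np.zeros(N)
    for i in range(M, N):
        u = v1[i - np.arange(M)]         # [v1(n), v1(n-1), ..., v1(n-M+1)]
        y = w @ u                        # filter output y(n)
        e[i] = d[i] - y                  # error e(n) = d(n) - y(n)
        w = w + mu * u * e[i]            # LMS update: w_k(n+1) = w_k(n) + mu*v1(n-k)*e(n)
    return e

# Example: e = lms_noise_canceller(d, v1); after convergence e should resemble s.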


Rationale of the method:

* e(n) = d(n) - y(n) = s(n) + v_0(n) - y(n)
* E[e^2(n)] = E[s^2(n)] + E[(v_0(n) - y(n))^2]  (follows from the hypothesis: Exercise; expanded in the sketch below)
* E[e^2(n)] depends on the parameters w_0(n), w_1(n), ..., w_{M-1}(n)
* The algorithm in equation (LMS) modifies w_0(n), w_1(n), ..., w_{M-1}(n) such that E[e^2(n)] is minimized
* Since E[s^2(n)] does not depend on the parameters {w_k(n)}, the algorithm (LMS) minimizes E[(v_0(n) - y(n))^2]; thus statistically v_0(n) will be close to y(n), and therefore e(n) ≈ s(n) (e(n) will be close to s(n)).
* Sketch of proof for Equation (LMS):

  e^2(n) = (d(n) - y(n))^2 = (d(n) - w_0 v_1(n) - w_1 v_1(n-1) - ... - w_{M-1} v_1(n-M+1))^2
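The decomposition of E[e^2(n)] claimed in the second bullet (left as an Exercise) follows by expanding the square and using the hypothesis. This is only a sketch, treating the weights as fixed, so that y(n), being a linear combination of the v_1(n-k), is also uncorrelated with s(n):

  E[e^2(n)] = E[(s(n) + v_0(n) - y(n))^2]
            = E[s^2(n)] + 2 E[s(n)(v_0(n) - y(n))] + E[(v_0(n) - y(n))^2]
            = E[s^2(n)] + E[(v_0(n) - y(n))^2],

since E[s(n) v_0(n-k)] = 0 and E[s(n) v_1(n-k)] = 0 for all k make the cross term vanish.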
[Figure: "Error Surface" - the squared error plotted as a paraboloid over two of the filter weights.]

The squared error surface

  e^2(n) = F(w_0, ..., w_{M-1})

is a paraboloid.

The gradient of the squared error is

  ∇_{w_k} e^2(n) = d e^2(n) / d w_k = -2 e(n) v_1(n-k).
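The gradient expression above is just the chain rule applied to e(n) = d(n) - Σ_{k} w_k v_1(n-k) (a one-line verification, not in the original slides):

  d e^2(n) / d w_k = 2 e(n) · d e(n) / d w_k = 2 e(n) · (-v_1(n-k)) = -2 e(n) v_1(n-k).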

The method of gradient-descent minimization:

  w_k(n+1) = w_k(n) - (μ/2) ∇_{w_k} e^2(n) = w_k(n) + μ v_1(n-k) e(n)
* Checking the effectiveness of Equation (LMS) in reducing the error: the a posteriori error ε(n), i.e. the error recomputed with the updated parameters w_k(n+1), is

  ε(n) = d(n) - Σ_{k=0}^{M-1} w_k(n+1) v_1(n-k)
       = d(n) - Σ_{k=0}^{M-1} (w_k(n) + μ v_1(n-k) e(n)) v_1(n-k)
       = d(n) - Σ_{k=0}^{M-1} w_k(n) v_1(n-k) - μ e(n) Σ_{k=0}^{M-1} v_1^2(n-k)
       = e(n) - μ e(n) Σ_{k=0}^{M-1} v_1^2(n-k)
       = e(n) (1 - μ Σ_{k=0}^{M-1} v_1^2(n-k))

  In order to reduce the error by using the new parameters w(n+1), we require |ε(n)| < |e(n)|, which holds when

  0 < μ < 2 / Σ_{k=0}^{M-1} v_1^2(n-k)
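A quick numerical check of this bound (a sketch with arbitrary data, not from the notes; the three step sizes are chosen on both sides of the bound): for μ·Σ v_1^2 equal to 0.5 and 1.9 the a posteriori error is smaller in magnitude than e(n), while for 2.5 it is not.

import numpy as np

rng = np.random.default_rng(1)
M = 8
u = rng.standard_normal(M)            # snapshot [v1(n), v1(n-1), ..., v1(n-M+1)]
energy = np.sum(u**2)                 # sum_{k=0}^{M-1} v1^2(n-k)

w = rng.standard_normal(M)            # current weights w_k(n)
d_n = rng.standard_normal()           # current desired sample d(n)
e_n = d_n - w @ u                     # a priori error e(n)

for mu in (0.5 / energy, 1.9 / energy, 2.5 / energy):
    w_new = w + mu * u * e_n          # LMS update with step mu
    eps_n = d_n - w_new @ u           # a posteriori error eps(n) = e(n)*(1 - mu*energy)
    print(round(mu * energy, 2), abs(eps_n) < abs(e_n))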
