

SGN 21006 Advanced Signal Processing

Ioan Tabus

Department of Signal Processing


Tampere University of Technology
Finland


Organization of the course


I Lecturer: Ioan Tabus (office: TF 419, e-mail [email protected])
I Lectures: Tuesdays 12:15-14:00, TB214 (30.08, 6.09, 20.09, 27.09, 4.10, 11.10, 25.10, 1.11, 8.11, 15.11, 22.11, 29.11)
I Exercises:
1. Group 1: Mondays 12:15-14:00, TC 303; first exercise 5.09.2016
2. Group 2: Tuesdays 14:15-16:00, TC 303; first exercise 6.09.2016
I Content of the course:
1. Deterministic and random signals
2. Optimal filter design
3. Adaptive filter design
4. Application areas of Optimal filter design and Adaptive filter design
5. Spectrum estimation
6. Nonlinear filters
I Requirements:
1. Exercises; programs for implementation of various algorithms;
project work.
2. Final examination

Organization of the course


I Ioan Tabus. Lecture slides for the course. http://www.cs.tut.fi/~tabus/
I Text books:
1. Sophocles J. Orfanidis, "Optimum Signal Processing: An Introduction", Second Edition, http://www.ece.rutgers.edu/~orfanidi/osp2e
2. Simon Haykin, "Adaptive Filter Theory", Prentice Hall International, 2002.
3. Peter Stoica, Randolph Moses, "Spectral Analysis of Signals", Prentice Hall, 2005. Full book available at http://user.it.uu.se/~ps/SAS-new.pdf
   The lecture slides for the full book are available at http://www.prenhall.com/~stoica
I Additional materials:
1. Danilo Mandic. Slides of the courses "Advanced Signal Processing" and "Spectral Estimation and Adaptive Filtering", given at the Department of Electrical and Electronic Engineering, Imperial College, UK.
   http://www.commsp.ee.ic.ac.uk/~mandic/courses.htm

Signal Processing: The Science Behind Our Digital Life

I YouTube.com. What is signal processing? [Online]. Available: https://youtu.be/EErkgr1MWw0
I What is signal processing? https://www.youtube.com/watch?v=YmSvQe2FDKs
I Publications of the IEEE Signal Processing Society:
  https://signalprocessingsociety.org/publications-resources/publications


Preview of the course

I Discuss generic applications in the following terms:
1. Problem formulation, scheme of the application, signal flow
2. Derive or choose the proper algorithm for solving the problem
3. Check the performance of the solution by simulation
4. Find a theoretical justification of the performance
I Derivation of the main algorithms
I Whenever needed, the necessary mathematical notions are reviewed


Adaptive noise cancelling


I A problem appearing in many applications:
I Cancelling 50 Hz interference in electrocardiography (Widrow, 1975);
I Reduction of acoustic noise in speech (cockpit of a military aircraft: 10-15 dB reduction).
I Two measured inputs, d(n) and v1(n):
I d(n) comes from a primary sensor: d(n) = s(n) + v0(n),
  where s(n) is the information-bearing signal and v0(n) is the corrupting noise;
I v1(n) comes from a reference sensor.
I Hypotheses:
I The ideal signal s(n) is not correlated with the noise sources v0(n) and v1(n):
  E s(n)v1(n − k) = 0,  E s(n)v0(n − k) = 0,  for all k
I The reference noise v1(n) and the noise v0(n) are correlated, with unknown cross-correlation p(k):
  E v0(n)v1(n − k) = p(k)
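A minimal Python sketch of signals satisfying these hypotheses; the sinusoid, the white reference noise, and the coupling filter h are illustrative assumptions, not prescribed by the course.

import numpy as np

# Illustrative signals satisfying the hypotheses (all concrete choices assumed):
rng = np.random.default_rng(0)
N = 10_000
n = np.arange(N)

s = np.sin(2 * np.pi * 0.01 * n)              # information-bearing signal s(n)
v1 = rng.standard_normal(N)                   # reference noise v1(n), white
h = np.array([0.8, -0.4, 0.2])                # unknown coupling path (assumed)
v0 = np.convolve(v1, h, mode="full")[:N]      # corrupting noise v0(n), correlated with v1(n)
d = s + v0                                    # primary sensor signal d(n) = s(n) + v0(n)
# By construction, s is independent of v0 and v1, while v0 and v1 are correlated.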

Adaptive noise cancelling

[Two figure-only slides; the figures are not available in this text version.]

Adaptive noise cancelling

• Description of the adaptive filtering operations, at any time instant n:

* The reference noise v1(n) is processed by an adaptive filter with time-varying parameters w_0(n), w_1(n), . . . , w_{M−1}(n), to produce the output signal

    y(n) = Σ_{k=0}^{M−1} w_k(n) v1(n − k).

* The error signal is computed as e(n) = d(n) − y(n).
* The parameters of the filter are modified in an adaptive manner, for example using the LMS algorithm (the simplest adaptive algorithm):

    w_k(n + 1) = w_k(n) + µ v1(n − k) e(n)        (LMS)

where µ is the adaptation constant.
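A minimal sketch of these operations, assuming the signals d(n) and v1(n) from the earlier signal sketch; the filter length M and step size µ are illustrative choices.

import numpy as np

def lms_noise_canceller(d, v1, M=8, mu=0.01):
    """Adaptive noise canceller driven by the LMS update
    w_k(n+1) = w_k(n) + mu * v1(n-k) * e(n).  M and mu are illustrative."""
    N = len(d)
    w = np.zeros(M)                 # parameters w_0(n), ..., w_{M-1}(n)
    y = np.zeros(N)                 # filter output, tracks v0(n)
    e = np.zeros(N)                 # error signal, tracks s(n)
    for n in range(N):
        # regressor [v1(n), v1(n-1), ..., v1(n-M+1)], zero-padded for n < M-1
        u = np.array([v1[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = w @ u                # y(n) = sum_k w_k(n) v1(n-k)
        e[n] = d[n] - y[n]          # e(n) = d(n) - y(n)
        w = w + mu * e[n] * u       # LMS update of the parameters
    return e, y, w

# Continuing the signal sketch above, e(n) should approach s(n) after adaptation:
# e, y, w = lms_noise_canceller(d, v1, M=8, mu=0.01)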



Rationale of the method

* e(n) = d(n) − y(n) = s(n) + v0(n) − y(n)
* E e²(n) = E s²(n) + E (v0(n) − y(n))²  (follows from the hypotheses: Exercise; the cross term E s(n)(v0(n) − y(n)) vanishes because y(n) is built from v1(n) only)
* E e²(n) depends on the parameters w_0(n), w_1(n), . . . , w_{M−1}(n)
* The algorithm in equation (LMS) modifies w_0(n), w_1(n), . . . , w_{M−1}(n) such that E e²(n) is minimized
* Since E s²(n) does not depend on the parameters {w_k(n)}, the algorithm (LMS) minimizes E (v0(n) − y(n))²; thus, statistically, y(n) will be close to v0(n), and therefore e(n) ≈ s(n) (a numerical check is sketched below).
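A quick numerical check of the decomposition, continuing the signal sketch above (s, v0, v1, d); the fixed filter w_fixed is an arbitrary illustrative choice, since the identity holds for any filter output built from v1 alone.

import numpy as np

# Check E e^2 = E s^2 + E (v0 - y)^2 for an arbitrary fixed filter on v1.
w_fixed = np.array([0.5, -0.1, 0.3])
y = np.convolve(v1, w_fixed, mode="full")[:len(v1)]   # y(n) = sum_k w_k v1(n-k)
e = d - y
print(np.mean(e ** 2))                                # E e^2
print(np.mean(s ** 2) + np.mean((v0 - y) ** 2))       # E s^2 + E (v0 - y)^2
# The two values nearly coincide: the cross term 2 E[s(n)(v0(n) - y(n))]
# is (approximately) zero by the hypotheses.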


Rationale of the method

* Sketch of proof for equation (LMS)

· e²(n) = (d(n) − y(n))² = (d(n) − w_0 v1(n) − w_1 v1(n − 1) − . . . − w_{M−1} v1(n − M + 1))²

· The square error surface

    e²(n) = F(w_0, . . . , w_{M−1})

is a paraboloid in the parameters.

[Figure: "Error Surface", a surface plot of the square error over the parameter plane (w_0, w_1).]

· The gradient of the square error is

    ∇_{w_k} e²(n) = de²(n)/dw_k = −2 e(n) v1(n − k)

· The method of gradient descent minimization (with the factor 2 absorbed into µ):

    w_k(n + 1) = w_k(n) − µ ∇_{w_k} e²(n) = w_k(n) + µ v1(n − k) e(n)
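A small finite-difference check of the gradient expression above, on arbitrary illustrative values.

import numpy as np

# Verify d e^2(n) / d w_k = -2 e(n) v1(n-k) numerically (illustrative values).
rng = np.random.default_rng(1)
M = 4
w = rng.standard_normal(M)          # filter parameters
u = rng.standard_normal(M)          # stands for [v1(n), ..., v1(n-M+1)]
d_n = 0.7                           # stands for d(n)

def sq_err(w_vec):
    return (d_n - w_vec @ u) ** 2   # e^2(n) as a function of the parameters

e_n = d_n - w @ u
analytic = -2.0 * e_n * u           # -2 e(n) v1(n-k), for k = 0 .. M-1
eps = 1e-6
numeric = np.array([(sq_err(w + eps * np.eye(M)[k]) - sq_err(w - eps * np.eye(M)[k])) / (2 * eps)
                    for k in range(M)])
print(np.max(np.abs(analytic - numeric)))   # should be tiny (~1e-9)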


Rationale of the method


* Checking the effectiveness of equation (LMS) in reducing the errors:

    ε(n) = d(n) − Σ_{k=0}^{M−1} w_k(n + 1) v1(n − k)
         = d(n) − Σ_{k=0}^{M−1} (w_k(n) + µ v1(n − k) e(n)) v1(n − k)
         = d(n) − Σ_{k=0}^{M−1} w_k(n) v1(n − k) − e(n) µ Σ_{k=0}^{M−1} v1²(n − k)
         = e(n) − e(n) µ Σ_{k=0}^{M−1} v1²(n − k)
         = e(n) (1 − µ Σ_{k=0}^{M−1} v1²(n − k))

In order to reduce the error by using the new parameters w(n + 1), the following inequality must hold:

    |ε(n)| < |e(n)|,   or equivalently   0 < µ < 2 / Σ_{k=0}^{M−1} v1²(n − k)
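A small numerical illustration of this bound; the regressor, the current parameters, d(n), and the three step sizes are arbitrary assumed values.

import numpy as np

# One LMS update reduces the error, |eps(n)| < |e(n)|,
# exactly when 0 < mu < 2 / sum_k v1^2(n-k).
rng = np.random.default_rng(2)
M = 8
u = rng.standard_normal(M)              # [v1(n), ..., v1(n-M+1)]
w = rng.standard_normal(M)              # current parameters w_k(n)
d_n = 1.3                               # d(n), arbitrary value

e_n = d_n - w @ u                       # a priori error e(n)
energy = np.sum(u ** 2)                 # sum_k v1^2(n-k)
for mu in (0.5 / energy, 1.5 / energy, 2.5 / energy):
    w_new = w + mu * e_n * u            # LMS update
    eps_n = d_n - w_new @ u             # error recomputed with w(n+1)
    print(f"mu*energy = {mu * energy:.1f}: |eps| < |e| ? {abs(eps_n) < abs(e_n)}")
# The first two step sizes satisfy the bound and shrink the error;
# the third violates it and the error grows.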

Optimal or Adaptive Linear Filtering Module


* In optimal filtering (also called batch or frame-wise filtering) the input and desired signals are available for a given time window 1, . . . , N; the optimal parameters of the linear filter, and subsequently the filter output for that time window, are computed only once.
* In adaptive filtering the input and desired signals are provided to the algorithm sequentially; at every time instant the parameters of the linear filter are computed or updated, and they are used to compute the output of the filter for that time instant only.
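A minimal sketch contrasting the two modes of operation; the function names are illustrative, and plain least squares stands in for the batch (optimal) solution.

import numpy as np

def batch_filter(u, d, M):
    """Optimal (batch) mode: solve once for the M-tap filter minimizing
    sum_n (d(n) - w^T u(n))^2 over the whole window, then filter the window."""
    N = len(d)
    U = np.zeros((N, M))
    for k in range(M):
        U[k:, k] = u[:N - k]                       # column k holds u(n - k)
    w, *_ = np.linalg.lstsq(U, d, rcond=None)      # one solve for the whole window
    return U @ w, w

def adaptive_filter(u, d, M, mu):
    """Adaptive mode: update the parameters at every instant (LMS here) and
    use the current parameters only for the current output sample."""
    N = len(d)
    w = np.zeros(M)
    y = np.zeros(N)
    for n in range(N):
        un = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = w @ un
        w = w + mu * (d[n] - y[n]) * un
    return y, w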


Optimal or Adaptive Linear Filtering Modules in Applications
I In an application one has to figure out the correspondence between the signals in the application and the conventional names of the signals in an optimal/adaptive module:
I which signal is the input to the filter, u(t)
I which signal is the desired signal of the filter, d(t)
I which signal is the output of the filter, y(t)
I The filter output y(t) is rarely the signal of interest in itself
I Sometimes the error e(t) is the signal of interest
I Sometimes the vector of parameters w(t) is of interest
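I Example: in the adaptive noise canceller discussed earlier, the input to the filter is u(t) = v1(n) (the reference noise), the desired signal is d(t) = d(n) = s(n) + v0(n) (the primary sensor), the filter output y(t) approximates the corrupting noise v0(n), and the signal of interest is the error e(t) = d(t) − y(t) ≈ s(n).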


Optimal or Adaptive Linear Filtering Modules in Applications

[Two figure-only slides; the figures are not available in this text version.]
