DSP Slides
Markus Kuhn
https://www.cl.cam.ac.uk/teaching/1920/{DSP,DSP2,L314}/
Michaelmas 2019
CST Part II (75% Unit) / Part II (50% Unit) / Part III / MPhil ACS
2 / 231
Signal processing
3 / 231
Analog electronics
Advantages:
▶ passive networks are highly linear over a very large dynamic range and large bandwidths
▶ analog signal-processing circuits require little or no power
▶ analog circuits cause little additional interference
[Figures: passive RLC network with input Uin and output Uout; frequency response over ω (= 2πf) peaking at 1/√(LC); time-domain response Uout(t)]
(Uin − Uout)/R = (1/L)·∫_{−∞}^{t} Uout dτ + C·dUout/dt
4 / 231
Digital signal processing
Advantages:
▶ noise is easy to control after initial quantization
▶ highly linear (within limited dynamic range)
▶ complex algorithms fit into a single chip
▶ flexibility, parameters can easily be varied in software
▶ digital processing is insensitive to component tolerances, aging, environmental conditions, electromagnetic interference
But:
▶ discrete-time processing artifacts (aliasing)
▶ can require significantly more power (battery, cooling)
▶ digital clock and switching cause interference
5 / 231
Some DSP applications
music: synthetic instruments, audio effects, noise reduction
security: steganography, digital watermarking, biometric identification, surveillance systems, signals intelligence, electronic warfare
medical diagnostics: magnetic-resonance and ultrasonic imaging, X-ray computed tomography, ECG, EEG, MEG, AED, audiology
engineering: control systems, feature extraction for pattern recognition, sensor-data evaluation
6 / 231
Objectives
7 / 231
Textbooks
8 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Sequences and systems
. . . , x−2 , x−1 , x0 , x1 , x2 , . . .
xn = x(ts · n) = x(n/fs ),
9 / 231
Some simple sequences
Unit-step sequence:
un = { 0, n < 0
       1, n ≥ 0 }
Impulse sequence:
δn = { 1, n = 0
       0, n ≠ 0 }
   = un − un−1
[Stem plots of un and δn over n = …, −3, −2, −1, 0, 1, 2, 3, …]
10 / 231
Sinusoidal sequences
A cosine wave, amplitude A, frequency f, phase offset ϕ:
x(t) = A · cos(2πft + ϕ),  where 2πft + ϕ is the phase
[Plot of the sampled sequence for f = 400 Hz, fs = 8 kHz, n = 0, …, 40; MATLAB: f=400; x=cos(2*pi*f*n/fs);]
11 / 231
Properties of sequences
A sequence {xn} is
periodic ⇔ ∃k > 0: ∀n ∈ Z: xn = xn+k
absolutely summable ⇔ Σ_{n=−∞}^{∞} |xn| < ∞
square summable ⇔ Σ_{n=−∞}^{∞} |xn|² < ∞ (the sum is the “energy”; such an {xn} is an “energy signal”)
0 < lim_{k→∞} (1/(1+2k)) · Σ_{n=−k}^{k} |xn|² < ∞ (the limit is the “average power”; such an {xn} is a “power signal”)
Is a continuous function with period tp still periodic after sampling? Only if tp /ts ∈ Q.
Weber’s law
Difference limit ∆φ is proportional to the intensity φ of the stimulus (except for a small correction constant a, to describe deviation of experimental results near the sensation limit):
∆φ = c · (φ + a)
Fechner’s scale
Define a perception intensity scale ψ using the sensation limit φ0 as the origin and the respective difference limit ∆φ = c · φ as a unit step. The result is a logarithmic relationship between stimulus intensity and scale value:
ψ = log_c (φ/φ0)
14 / 231
Fechner’s scale matches older subjective intensity scales that follow
differentiability of stimuli, e.g. the astronomical magnitude numbers for
star brightness introduced by Hipparchos (≈150 BC).
15 / 231
Units and decibel
16 / 231
Decibel
Where P is some power and P0 a 0 dB reference power, or equally where F is a field quantity and F0 the corresponding reference level:
10 dB · log10(P/P0) = 20 dB · log10(F/F0)
Common reference values are indicated with a suffix after “dB”:
0 dBW = 1 W
0 dBm = 1 mW = −30 dBW
0 dBµV = 1 µV
0 dBu = 0.775 V = √(600 Ω × 1 mW)
0 dBSPL = 20 µPa (sound pressure level)
0 dBSL = perception threshold (sensation limit)
0 dBFS = full scale (clipping limit of analog/digital converter)
Remember:
3 dB = 2× power, 6 dB = 2× voltage/pressure/etc.
10 dB = 10× power, 20 dB = 10× voltage/pressure/etc.
W.H. Martin: Decibel – the new name for the transmission unit. Bell Syst. Tech. J., Jan. 1929.
ITU-R Recommendation V.574-4: Use of the decibel and neper in telecommunication.
17 / 231
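These conversion rules follow directly from the definition; a quick MATLAB check (the ratios are chosen arbitrarily for illustration):
10*log10(2)    % = 3.01 dB: doubling power adds ~3 dB
20*log10(2)    % = 6.02 dB: doubling a field quantity adds ~6 dB
10*log10(10)   % = 10 dB: 10x power
20*log10(10)   % = 20 dB: 10x voltage/pressure/etc.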
Types of discrete systems
A discrete system T maps an input sequence …, x2, x1, x0, x−1, … onto an output sequence …, y2, y1, y0, y−1, …
Example: M-point moving average system
yn = (1/M) · Σ_{k=0}^{M−1} xn−k = (xn−M+1 + ··· + xn−1 + xn)/M
[Plot: input {xn} and smoothed output {yn}]
19 / 231
Example: exponential averaging system
yn = α · xn + (1 − α) · yn−1 = α · Σ_{k=0}^{∞} (1 − α)^k · xn−k
[Plot: input {xn} and exponentially averaged output {yn}]
20 / 231
Example: accumulator system
yn = Σ_{k=−∞}^{n} xk
[Plot: input {xn} and accumulated output {yn}]
21 / 231
Example: backward difference system
yn = xn − xn−1
[Plot: input {xn} and differenced output {yn}]
22 / 231
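Each of the four example systems above is a one-liner in MATLAB; a small sketch (the input vector, M and α are arbitrary choices for illustration):
x = randn(1, 20);                                   % arbitrary test input
M = 4;       y_avg  = filter(ones(1,M)/M, 1, x);    % M-point moving average
alpha = 0.5; y_exp  = filter(alpha, [1 alpha-1], x);% exponential averaging
y_acc  = cumsum(x);                                 % accumulator
y_diff = filter([1 -1], 1, x);                      % backward difference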
Other examples
Time-invariant non-linear memory-less systems:
23 / 231
Constant-coefficient difference equations
Of particular practical interest are causal linear time-invariant systems of the form
yn = b0 · xn − Σ_{k=1}^{N} ak · yn−k
[Block diagram: xn scaled by b0 feeds an adder producing yn; feedback taps −a1, −a2, −a3 act on the delayed outputs yn−1, yn−2, yn−3, each via a z^{−1} register]
The ak and bm are constant coefficients.
Block diagram representation of sequence operations:
Addition: xn, x′n → adder → xn + x′n
Multiplication by constant: xn → a → a·xn
Delay: xn → z^{−1} → xn−1
24 / 231
or
yn = Σ_{m=0}^{M} bm · xn−m
[Block diagram: delay chain xn, xn−1, xn−2, xn−3 (one z^{−1} each), taps b0, b1, b2, b3 summed into yn]
or the combination of both:
Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m
[Block diagram, direct form I: feed-forward taps b0 … b3 on xn … xn−3 followed by feedback taps −a1 … −a3 on yn−1 … yn−3]
26 / 231
Convolution examples
[Figure: example sequences A, B, C, D, E, F and the convolution results A∗B, A∗C, …]
27 / 231
Convolution
Another example of an LTI system is
yn = Σ_{k=−∞}^{∞} ak · xn−k
Properties of convolution
For arbitrary sequences {pn}, {qn}, {rn} and scalars a, b:
▶ Convolution is associative: ({pn} ∗ {qn}) ∗ {rn} = {pn} ∗ ({qn} ∗ {rn})
▶ Convolution is commutative: {pn} ∗ {qn} = {qn} ∗ {pn}
▶ Convolution is linear: {pn} ∗ {a·qn + b·rn} = a·({pn} ∗ {qn}) + b·({pn} ∗ {rn})
If f and g are continuous functions, their convolution is defined similarly as the integral
(f ∗ g)(t) = ∫_{−∞}^{∞} f(s)·g(t − s) ds.
But what is the continuous equivalent of {δn}? More on that later …
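In MATLAB, conv implements exactly this finite discrete convolution; a toy example:
h = [1 2 1]/4;           % short smoothing impulse response
x = [0 0 1 0 0 2 0 0];   % input containing two scaled impulses
y = conv(x, h)           % length(y) = length(x) + length(h) - 1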
All LTI systems just apply convolution
Proof:
Any sequence {xn} can be decomposed into a weighted sum of shifted impulse sequences:
{xn} = Σ_{k=−∞}^{∞} xk · {δn−k}
29 / 231
Direct form I and II implementations
[Block diagrams: direct form I applies the feed-forward taps b0 … b3 before the feedback taps −a1 … −a3 (scaled by a0^{−1}); swapping the two halves and sharing a single delay chain gives the equivalent direct form II]
Convolution: optics example
A blurred image equals the sharp original convolved with the aperture shape, e.g. a filled disc of radius r:
h(x, y) = { 1/(r²π), x² + y² ≤ r²
            0,       x² + y² > r² }
Original image I, blurred image B = I ∗ h, i.e.
B(x, y) = ∬ I(x−x′, y−y′) · h(x′, y′) · dx′ dy′
[Figure: point-spread disc; blurred star image; lens geometry with aperture a, distance s, focal length f]
31 / 231
Convolution: electronics example
[Circuit: RC low-pass with input Uin across R and C in series, output Uout across C; plots of Uin and Uout over t]
Any passive network (resistors, capacitors, inductors) convolves its input voltage Uin with an impulse response function h, leading to Uout = Uin ∗ h, that is
Uout(t) = ∫_{−∞}^{∞} Uin(t − τ) · h(τ) · dτ
32 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Adding sine waves
Adding together sine waves of equal frequency, but arbitrary amplitude and phase, results in another sine wave of the same frequency:
A1 · sin(ωt + ϕ1) + A2 · sin(ωt + ϕ2) = A · sin(ωt + ϕ)
Why?
[Phasor diagram: A · sin(ωt + ϕ) is the height of an arrow of length A, rotating with angular frequency ω, starting at angle ϕ; sine waves of equal frequency add like the corresponding arrows]
But adding sine waves as vectors (A1, ϕ1) and (A2, ϕ2) in polar coordinates is cumbersome:
A = √(A1² + A2² + 2·A1·A2·cos(ϕ2 − ϕ1)),  tan ϕ = (A1·sin ϕ1 + A2·sin ϕ2)/(A1·cos ϕ1 + A2·cos ϕ2)
33 / 231
Cartesian coordinates for sine waves
Sine waves of any amplitude A and phase (start angle) ϕ can be represented as linear combinations of sin(ωt) and cos(ωt):
A · sin(ωt + ϕ) = x · sin(ωt) + y · cos(ωt)
where
x = A · cos(ϕ),  y = A · sin(ϕ)
and
A = √(x² + y²),  tan ϕ = y/x.
[Phasor diagram: arrow of length A at start angle ϕ with components A·cos(ϕ) and A·sin(ϕ)]
Base: two rotating arrows with start angles 0° [height = sin(ωt)] and 90° [height = cos(ωt)].
[Plots over t ∈ [0, 2π]: the products sin(t)·sin(2t), sin(2t)·sin(3t), sin(3t)·sin(4t), sin(2t)·sin(4t) and sin(t)·cos(t), each shown with its two factors; every such product of sinusoids of different frequency, or of sin and cos at the same frequency, averages to zero over a full period]
Why are exponential functions useful?
Adding together two exponential functions with the same base z, but different scale factor and offset, results in another exponential function with the same base:
A1 · z^{t+ϕ1} + A2 · z^{t+ϕ2} = A1 · z^t · z^{ϕ1} + A2 · z^t · z^{ϕ2} = (A1 · z^{ϕ1} + A2 · z^{ϕ2}) · z^t = A · z^t
…, z^{−3}, z^{−2}, z^{−1}, 1, z, z², z³, …
Notation: Re(a + jb) := a, Im(a + jb) := b and (a + jb)* := a − jb, where j² = −1 and a, b ∈ R.
Then Re(x) = (x + x*)/2 and Im(x) = (x − x*)/(2j) for all x ∈ C.
39 / 231
We can now represent sine waves as projections of a rotating complex vector. This allows us to represent sine-wave sequences as exponential sequences with basis e^{jω̇}.
A phase shift in such a sequence corresponds to a rotation of a complex vector.
3) Complex multiplication allows us to modify the amplitude and phase of a complex rotating vector using a single operation and value.
Rotation of a 2D vector in (x, y)-form is notationally slightly messy, but fortunately j² = −1 does exactly what is required here:
(x3, y3) = (x1·x2 − y1·y2, x1·y2 + x2·y1), i.e. the matrix [x2 −y2; y2 x2] applied to (x1, y1)
z1 = x1 + jy1, z2 = x2 + jy2
z1 · z2 = x1·x2 − y1·y2 + j(x1·y2 + x2·y1)
[Diagram: vectors (x1, y1), (x2, y2), (−y2, x2) and the product vector (x3, y3)]
Complex phasors
Amplitude and phase are two distinct characteristics of a sine function
that are inconvenient to keep separate notationally.
Complex functions (and discrete sequences) of the form
xn = e jω̇n
In the notation of slide 38, where the argument of H is the base, we would write H(e jω̇ ).
41 / 231
Recall: Fourier transform
Many equivalent forms of the Fourier transform are used in the literature. There is no strong consensus on whether the forward transform uses e^{−2πjft} and the backwards transform e^{2πjft}, or vice versa. The above form uses the ordinary frequency f, whereas some authors prefer the angular frequency ω = 2πf:
F{h(t)}(ω) = H(ω) = α · ∫_{−∞}^{∞} h(t) · e^{∓jωt} dt
F^{−1}{H(ω)}(t) = h(t) = β · ∫_{−∞}^{∞} H(ω) · e^{±jωt} dω
This substitution introduces factors α and β such that αβ = 1/(2π). Some authors set α = 1 and β = 1/(2π), to keep the convolution theorem free of a constant prefactor; others prefer the unitary form α = β = 1/√(2π), in the interest of symmetry.
42 / 231
Properties of the Fourier transform
If
x(t) •−◦ X(f ) and y(t) •−◦ Y (f )
are pairs of functions that are mapped onto each other by the Fourier
transform, then so are the following pairs.
Linearity:
a · x(t) + b · y(t) •−◦ a · X(f) + b · Y(f)
Time scaling:
x(at) •−◦ (1/|a|) · X(f/a)
Frequency scaling:
(1/|a|) · x(t/a) •−◦ X(af)
43 / 231
Time shifting:
x(t − ∆t) •−◦ X(f) · e^{−2πjf∆t}
Frequency shifting:
x(t) · e^{2πj∆f·t} •−◦ X(f − ∆f)
44 / 231
Fourier transform example: rect and sinc
[Derivation of the convolution theorem; the key step factors the double integral:]
∬ x(s)·e^{−jωs} · y(t)·e^{−jωt} dt ds = ∫ x(s)·e^{−jωs} ds · ∫ y(t)·e^{−jωt} dt.
This second form is also called “modulation theorem”, as it describes what happens in the frequency domain with amplitude modulation of a signal (see slide 53).
The proof is very similar to the one above.
Both equally work for the inverse Fourier transform:
F^{−1}{(F ∗ G)(f)} = F^{−1}{F(f)} · F^{−1}{G(f)}
F^{−1}{F(f) · G(f)} = F^{−1}{F(f)} ∗ F^{−1}{G(f)}
46 / 231
Dirac delta function
The continuous equivalent of the impulse sequence {δn} is known as Dirac delta function δ(x). It is a generalized function, defined such that
δ(x) = { 0, x ≠ 0
         ∞, x = 0 }
∫_{−∞}^{∞} δ(x) dx = 1
[Plot: unit-area impulse at x = 0]
and can be thought of as the limit of function sequences such as
δ(x) = lim_{n→∞} { 0,   |x| ≥ 1/n
                   n/2, |x| < 1/n }
or
δ(x) = lim_{n→∞} (n/√π) · e^{−n²x²}
The delta function is mathematically speaking not a function, but a distribution, that is an expression that is only defined when integrated.
47 / 231
Some properties of the Dirac delta function:
∫_{−∞}^{∞} f(x)·δ(x − a) dx = f(a)
∫_{−∞}^{∞} e^{±2πjxa} dx = δ(a)
Σ_{i=−∞}^{∞} e^{±2πjixa} = (1/|a|) · Σ_{i=−∞}^{∞} δ(x − i/a)
δ(ax) = (1/|a|) · δ(x)
Fourier transform:
F{δ(t)}(f) = ∫_{−∞}^{∞} δ(t) · e^{−2πjft} dt = e⁰ = 1
F^{−1}{1}(t) = ∫_{−∞}^{∞} 1 · e^{2πjft} df = δ(t)
48 / 231
Linking the Dirac delta with the Fourier transform
The Fourier transform of 1 follows from the Dirac delta’s ability to sample inside an integral:
g(t) = F^{−1}(F(g))(t)
     = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} g(s) · e^{−2πjfs} · ds ) · e^{2πjft} · df
     = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} e^{−2πjfs} · e^{2πjft} · df ) · g(s) · ds
     = ∫_{−∞}^{∞} ( ∫_{−∞}^{∞} e^{−2πjf(t−s)} · df ) · g(s) · ds
where the bracketed inner integral must act like δ(t − s).
[Plots: partial-sum approximations of Dirac impulses]
∫_{−∞}^{∞} e^{2πjtf} df = δ(t)     Σ_{i=1}^{100} cos(2πfi·t) ≈ δ(t)
with f1, …, f100 ∈ [0, 10] chosen uniformly at random
Σ_{n=−∞}^{∞} e^{±2πjnt} = Σ_{n=−∞}^{∞} δ(t − n)     Σ_{n=1}^{5} cos(2πnt) ≈ Σ_{n=−∞}^{∞} δ(t − n)
[Plots over t ∈ [−4, 4]: the random-frequency cosine sum peaks only near t = 0; the harmonic cosine sum peaks at every integer t]
Sine and cosine in the frequency domain
[Figure: spectra of cos(2πf0·t) and sin(2πf0·t), each a pair of Dirac impulses at ±f0]
As any x(t) ∈ R can be decomposed into sine and cosine functions, the spectrum of any real-valued signal will show the symmetry X(−f) = [X(f)]*, where * denotes the complex conjugate (i.e., negated imaginary part).
51 / 231
Fourier transform symmetries
52 / 231
Example: amplitude modulation
[Figure: baseband spectrum X(f) convolved with a pair of carrier impulses yields the modulated spectrum Y(f)]
53 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Sampling using a Dirac comb
xn = x(ts · n) = x(n/fs)
We represent the sampled signal as a weighted Dirac comb: x̂(t) = ts · Σ_{n=−∞}^{∞} xn · δ(t − ts·n)
The function x̂(t) now contains exactly the same information as the discrete sequence {xn}, but is still in a form that can be analysed using the Fourier transform on continuous functions.
54 / 231
The Fourier transform of a Dirac comb
s(t) = ts · Σ_{n=−∞}^{∞} δ(t − ts·n) = Σ_{n=−∞}^{∞} e^{2πjnt/ts}
[Figure: Dirac comb s(t) in the time domain and its Fourier transform S(f), a Dirac comb with spacing fs]
55 / 231
Sampling and aliasing
[Plot: the samples of cos(2πtf) coincide with those of cos(2πt(k·fs ± f)) — after sampling, these frequencies are indistinguishable]
[Figure: x(t) · s(t) = x̂(t) in the time domain corresponds to X(f) ∗ S(f) = X̂(f) in the frequency domain]
Sampling a signal in the time domain corresponds in the frequency domain to convolving its spectrum with a Dirac comb. The resulting copies of the original signal spectrum in the spectrum of the sampled signal are called “images”.
57 / 231
Discrete-time Fourier transform (DTFT)
The Fourier transform of a sampled signal
x̂(t) = ts · Σ_{n=−∞}^{∞} xn · δ(t − ts·n)
is
F{x̂(t)}(f) = X̂(f) = ∫_{−∞}^{∞} x̂(t) · e^{−2πjft} dt = ts · Σ_{n=−∞}^{∞} xn · e^{−2πj(f/fs)·n}
[Plots: several example time-domain sample sequences and one period of their DTFT (real and imaginary part)]
Properties of the DTFT
The DTFT is periodic: X̂(f) = X̂(f + k·fs) for all k ∈ Z
Beyond that, the DTFT is just the Fourier transform applied to a discrete sequence, and inherits the properties of the continuous Fourier transform, e.g.
▶ Linearity
▶ Symmetries
▶ Convolution and modulation theorem:
{xn} ∗ {yn} = {zn} ⇐⇒ X(e^{jω̇}) · Y(e^{jω̇}) = Z(e^{jω̇})
and
xn · yn = zn ⇐⇒ (1/2π) · ∫_{−π}^{π} X(e^{jω̇′}) · Y(e^{j(ω̇−ω̇′)}) dω̇′ = Z(e^{jω̇})
63 / 231
Nyquist limit and anti-aliasing filters
64 / 231
Nyquist limit and anti-aliasing filters
[Figures: a signal spectrum X(f) and its sampled spectrum X̂(f); an anti-aliasing filter limits the double-sided bandwidth to fs before sampling, and a reconstruction filter selects the baseband image afterwards]
65 / 231
Reconstruction of a continuous band-limited waveform
The ideal anti-aliasing filter for eliminating any frequency content above fs/2 before sampling with a frequency of fs has the Fourier transform
H(f) = { 1 if |f| < fs/2
         0 if |f| > fs/2 }  = rect(ts·f).
Note that sampling h(t) gives the impulse function: h(t) · s(t) = δ(t).
66 / 231
Impulse response of ideal low-pass filter with cut-off frequency fs/2: h(t) = (1/ts) · sinc(t/ts)
[Plot: sampled signal, interpolation result, and the scaled/shifted sin(x)/x pulses whose sum reconstructs the waveform]
68 / 231
If before being sampled with xn = x(n/fs) the signal x(t) satisfied the Nyquist limit
F{x(t)}(f) = ∫_{−∞}^{∞} x(t) · e^{−2πjft} dt = 0  for all |f| ≥ fs/2
then it can be reconstructed by interpolation with h(t) = (1/ts) · sinc(t/ts):
x(t) = ∫_{−∞}^{∞} h(s) · x̂(t − s) · ds
     = ∫_{−∞}^{∞} (1/ts) · sinc(s/ts) · ts · Σ_{n=−∞}^{∞} xn · δ(t − s − ts·n) · ds
     = Σ_{n=−∞}^{∞} xn · ∫_{−∞}^{∞} sinc(s/ts) · δ(t − s − ts·n) · ds
     = Σ_{n=−∞}^{∞} xn · sinc((t − ts·n)/ts) = Σ_{n=−∞}^{∞} xn · sinc(t/ts − n)
     = Σ_{n=−∞}^{∞} xn · sin(π(t/ts − n)) / (π(t/ts − n))
69 / 231
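This interpolation formula translates directly into a short MATLAB sketch (sinc here is the Signal Processing Toolbox function sinc(u) = sin(πu)/(πu); the tone frequency and rates below are arbitrary choices for illustration):
fs = 8; ts = 1/fs;
n  = -32:32;
xn = cos(2*pi*1.5*n*ts);              % samples of a 1.5 Hz tone (< fs/2 = 4 Hz)
t  = -2:0.001:2;
x  = zeros(size(t));
for k = 1:length(n)
  x = x + xn(k) * sinc(t/ts - n(k));  % sum of scaled/shifted sin(x)/x pulses
end
plot(t, x, n*ts, xn, 'o')             % reconstruction passes through the samples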
Reconstruction filters
70 / 231
Band-pass signal sampling
Sampled signals can also be reconstructed if their spectral components
remain entirely within the interval n · fs /2 < |f | < (n + 1) · fs /2 for some
n ∈ N. (The baseband case discussed so far is just n = 0.)
[Figure (n = 2): spectrum X(f) centred at ±(5/4)·fs with an anti-aliasing filter passing fs < |f| < (3/2)·fs; after sampling, X̂(f) contains interleaved images, removed by a matching reconstruction filter]
In this case, the aliasing copies of the positive and the negative
frequencies will interleave instead of overlap, and can therefore be
removed again later by a reconstruction filter.
The ideal reconstruction filter for this sampling technique will only allow frequencies in the interval
[n · fs /2, (n + 1) · fs /2] to pass through. The impulse response of such a band-pass filter can be
obtained by amplitude modulating a low-pass filter, or by subtracting two low-pass filters:
h(t) = fs · (sin(πtfs/2))/(πtfs/2) · cos(2πtfs·(2n+1)/4) = (n+1)·fs · (sin(πt(n+1)fs))/(πt(n+1)fs) − n·fs · (sin(πtnfs))/(πtnfs).
71 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Spectrum of a periodic signal
A signal x(t) that is periodic with frequency fp can be factored into a single period ẋ(t) convolved with an impulse comb p(t). This corresponds in the frequency domain to multiplying the spectrum of the single period with an impulse comb of spacing fp:
x(t) = ẋ(t) ∗ p(t)   •−◦   X(f) = Ẋ(f) · P(f)
[Figure: time domain ẋ(t) ∗ p(t); frequency domain Ẋ(f) · P(f)]
72 / 231
Spectrum of a sampled signal
A signal x(t) that is sampled with frequency fs has a spectrum that is periodic with period fs:
x̂(t) = x(t) · s(t)   •−◦   X̂(f) = X(f) ∗ S(f)
[Figure: time domain x(t) · s(t); frequency domain X(f) ∗ S(f)]
73 / 231
Continuous vs discrete Fourier transform
74 / 231
If x(t) has period tp = n · ts, then after sampling it at rate ts we have
ẍ(t) = x(t) · s(t) = ts · Σ_{i=−∞}^{∞} xi · δ(t − ts·i) = ts · Σ_{l=−∞}^{∞} Σ_{i=0}^{n−1} xi · δ(t − ts·(i + nl))
Recall that Σ_{i=−∞}^{∞} e^{±2πjixa} = (1/|a|) · Σ_{i=−∞}^{∞} δ(x − i/a) and map x = f, a = 1/fp and i = l.
After substituting k := f/fp = f·n/fs, i.e. f/fs = k/n and f = k·fp:
Ẍ(k·fp) = (1/n) · Σ_{l=−∞}^{∞} δ(k·fp − l·fp) · Σ_{i=0}^{n−1} xi · e^{−2πj·ki/n}
The first factor equals δ(0) if k ∈ Z, and 0 if k ∉ Z; the second factor is the discrete sequence Xk.
Show that Xk = Xk±n for all k ∈ Z.
75 / 231
Discrete Fourier Transform (DFT)
Xk = Σ_{i=0}^{n−1} xi · e^{−2πj·ik/n}        xk = (1/n) · Σ_{i=0}^{n−1} Xi · e^{2πj·ik/n}
The n-point DFT multiplies a vector with an n × n matrix
Fn = ( 1  1                1                ⋯  1
       1  e^{−2πj·1/n}     e^{−2πj·2/n}     ⋯  e^{−2πj(n−1)/n}
       1  e^{−2πj·2/n}     e^{−2πj·4/n}     ⋯  e^{−2πj·2(n−1)/n}
       1  e^{−2πj·3/n}     e^{−2πj·6/n}     ⋯  e^{−2πj·3(n−1)/n}
       ⋮  ⋮                ⋮                ⋱  ⋮
       1  e^{−2πj(n−1)/n}  e^{−2πj·2(n−1)/n} ⋯ e^{−2πj(n−1)(n−1)/n} )
Fn · (x0, x1, x2, …, xn−1)ᵀ = (X0, X1, X2, …, Xn−1)ᵀ
(1/n) · Fn* · (X0, X1, X2, …, Xn−1)ᵀ = (x0, x1, x2, …, xn−1)ᵀ
76 / 231
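The matrix form can be verified in MATLAB against the built-in fft (a small n chosen for illustration):
n = 8;
[k, i] = meshgrid(0:n-1);       % row/column index grids
F = exp(-2i*pi*i.*k/n);         % the n x n DFT matrix Fn
x = randn(n, 1);
norm(F*x - fft(x))              % ~1e-15: Fn*x is the DFT of x
norm(F'*fft(x)/n - x)           % F' is the conjugate transpose Fn*, so Fn*/n inverts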
Discrete Fourier Transform visualized
[Figure: the 8×8 DFT matrix, each entry drawn as a phasor, multiplying (x0, …, x7)ᵀ to give (X0, …, X7)ᵀ]
77 / 231
Inverse DFT visualized
[Figure: (1/8) · F8* · (X0, …, X7)ᵀ = (x0, …, x7)ᵀ]
78 / 231
Fast Fourier Transform (FFT)
Fn{xi}_{i=0}^{n−1}[k] = Σ_{i=0}^{n−1} xi · e^{−2πj·ik/n}
  = Σ_{i=0}^{n/2−1} x_{2i} · e^{−2πj·ik/(n/2)} + e^{−2πj·k/n} · Σ_{i=0}^{n/2−1} x_{2i+1} · e^{−2πj·ik/(n/2)}
  = { F_{n/2}{x_{2i}}_{i=0}^{n/2−1}[k] + e^{−2πj·k/n} · F_{n/2}{x_{2i+1}}_{i=0}^{n/2−1}[k],                 if k < n/2
      F_{n/2}{x_{2i}}_{i=0}^{n/2−1}[k − n/2] + e^{−2πj·k/n} · F_{n/2}{x_{2i+1}}_{i=0}^{n/2−1}[k − n/2],   if k ≥ n/2 }
The DFT over n-element vectors can be reduced to two DFTs over n/2-element vectors plus n multiplications and n additions, leading to log2 n rounds and n·log2 n additions and multiplications overall, compared to n² for the equivalent matrix multiplication.
A high-performance FFT implementation in C with many processor-specific optimizations and support for non-power-of-2 sizes is available at http://www.fftw.org/.
79 / 231
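The decomposition above can be written as a recursive MATLAB function — a didactic sketch, not an optimized implementation (the input length must be a power of two):
function X = myfft(x)
  n = length(x);
  if n == 1
    X = x;
  else
    E = myfft(x(1:2:n));               % DFT of the even-indexed samples
    O = myfft(x(2:2:n));               % DFT of the odd-indexed samples
    w = exp(-2i*pi*(0:n/2-1).'/n);     % twiddle factors e^{-2 pi j k/n}
    X = [E(:) + w.*O(:); ...           % k < n/2
         E(:) - w.*O(:)];              % k >= n/2: e^{-2 pi j(k+n/2)/n} = -e^{-2 pi jk/n}
  end
end
A quick check: with x = randn(16,1), norm(myfft(x) - fft(x)) should be on the order of 1e-15.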
Efficient real-valued FFT
The symmetry properties of the Fourier transform applied to the discrete Fourier transform {Xi}_{i=0}^{n−1} = Fn{xi}_{i=0}^{n−1} have the form
∀i: xi = Re(xi)   ⇐⇒   ∀i: X_{n−i} = X_i*   (indices mod n)
∀i: xi = j·Im(xi) ⇐⇒   ∀i: X_{n−i} = −X_i*
These two symmetries, combined with the linearity of the DFT, allow us to calculate two real-valued n-point DFTs with a single complex-valued n-point DFT.
A complex multiplication (a + jb)·(c + jd) = (ac − bd) + j(ad + bc) normally requires four multiplications; rewriting the imaginary part as (a + b)(c + d) − ac − bd provides the same result with three multiplications and five additions.
The latter may perform faster on CPUs where multiplications take three
or more times longer than additions.
This “Karatsuba multiplication” is most helpful on simpler microcontrollers. Specialized
signal-processing CPUs (DSPs) feature 1-clock-cycle multipliers. High-end desktop processors use
pipelined multipliers that stall where operations depend on each other.
81 / 231
FFT-based convolution
Calculating the convolution of two finite sequences {xi}_{i=0}^{m−1} and {yi}_{i=0}^{n−1} of lengths m and n via
zi = Σ_{j=max{0, i−(n−1)}}^{min{m−1, i}} xj · yi−j,   0 ≤ i < m + n − 1
takes mn multiplications.
Can we apply the FFT and the convolution theorem to calculate the convolution faster, in just O(m·log m + n·log n) multiplications?
[Figure: zero-padded sequences A′, B′ and the result F^{−1}[F(A′)·F(B′)]]
83 / 231
Zero padding is usually applied to extend both sequence lengths to the next higher power of two (2^⌈log2(m+n−1)⌉), which facilitates the FFT.
With a causal sequence, simply append the padding zeros at the end.
With a non-causal sequence, values with a negative index number are
wrapped around the DFT block boundaries and appear at the right end.
In this case, zero-padding is applied in the center of the block, between
the last and first element of the sequence.
Thanks to the periodic nature of the DFT, zero padding at both ends has
the same effect as padding only at one end.
If both sequences can be loaded entirely into RAM, the FFT can be
applied to them in one step. However, one of the sequences might be too
large for that. It could also be a realtime waveform (e.g., a telephone
signal) that cannot be delayed until the end of the transmission.
In such cases, the sequence has to be split into shorter blocks that are
separately convolved and then added together with a suitable overlap.
84 / 231
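A minimal sketch of the whole procedure (sequence lengths chosen arbitrarily):
x = randn(1, 100); y = randn(1, 16);
N = 2^nextpow2(length(x) + length(y) - 1);  % zero-pad to the next power of two
z = ifft(fft(x, N) .* fft(y, N));           % convolution theorem
z = real(z(1:length(x)+length(y)-1));       % keep m+n-1 samples; imag part is rounding noise
norm(z - conv(x, y))                        % ~1e-13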
Each block is zero-padded at both ends and then convolved as before:
∗ ∗ ∗
= = =
The regions originally added as zero padding are, after convolution, aligned to
overlap with the unpadded ends of their respective neighbour blocks. The
overlapping parts of the blocks are then added together.
85 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Deconvolution
A signal u(t) was distorted by convolution with a known impulse
response h(t) (e.g., through a transmission channel or a sensor problem).
The “smeared” result s(t) was recorded.
Can we undo the damage and restore (or at least estimate) u(t)?
[Figure: two examples of a sharp signal convolved with a known impulse response, giving a “smeared” recording]
86 / 231
The convolution theorem turns the problem into one of multiplication:
s(t) = ∫ u(t − τ) · h(τ) · dτ
s = u ∗ h
F{s} = F{u} · F{h}
F{u} = F{s}/F{h}
u = F^{−1}{F{s}/F{h}}
In practice, we can only record a noisy version c(t) = s(t) + n(t) of the distorted signal, and deconvolution amplifies the noise wherever |F{h}(f)| is small:
ũ = F^{−1}{F{c}/F{h}} = u + F^{−1}{F{n}/F{h}}
87 / 231
Typical workarounds:
▶ Modify the Fourier transform of the impulse response, such that |F{h}(f)| > ε for some experimentally chosen threshold ε.
▶ If estimates of the signal spectrum |F{s}(f)| and the noise spectrum |F{n}(f)| can be obtained, then we can apply the “Wiener filter” (“optimal filter”)
W(f) = |F{s}(f)|² / ( |F{s}(f)|² + |F{n}(f)|² )
before deconvolution:
ũ = F^{−1}{W · F{c}/F{h}}
88 / 231
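A small 1-D illustration of the threshold workaround (all signals here are invented for the sketch; ε is the threshold from the first bullet point):
N  = 256;
u  = double(rand(1, N) > 0.95);                      % sparse "true" signal
h  = exp(-(0:N-1)/5); h = h/sum(h);                  % known impulse response
c  = real(ifft(fft(u).*fft(h))) + 1e-3*randn(1, N);  % noisy recording c = u*h + n
Hf = fft(h);
ep = 0.05;                                           % experimentally chosen threshold
Hf(abs(Hf) < ep) = ep;                               % keep |F{h}| above the threshold
ut = real(ifft(fft(c)./Hf));                         % estimate of u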
Exercise 13: Use MATLAB to deconvolve the blurred stars from slide 31.
The files stars-blurred.png with the blurred-stars image and stars-psf.png
with the impulse response (point-spread function) are available on the course-
material web page. You may find the MATLAB functions imread, double,
imagesc, circshift, fft2, ifft2 of use.
Try different ways to control the noise (slide 88) and distortions near the margins
(windowing). [The MATLAB image processing toolbox provides ready-made
“professional” functions deconvwnr, deconvreg, deconvlucy, edgetaper, for
such tasks. Do not use these, except perhaps to compare their outputs with the
results of your own attempts.]
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Vowel "A" sung at varying pitch
8000
7000
6000
Frequency (Hz)
5000
4000
3000
2000
1000
0
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5
Time
[w,fs, bits] = auread('sing.au');
specgram(w,2048,fs);
ylim([0 8e3]); xlim([0 4.5]);
saveas(gcf, 'sing.eps', 'eps2c');
89 / 231
Different vowels at constant pitch
[Spectrogram: frequency 0–8000 Hz versus time 0–4 s]
90 / 231
[Spectrogram of an IQ recording of the FM radio band: frequency 0–3.5 MHz versus time 0–120 ms, power/frequency −155 to −105 dB/Hz]
f = fopen('iq-fm-97M-3.6M.dat', 'r', 'ieee-le');
c = fread(f, [2,inf], '*float32');
fclose(f);
z = c(1,:) + j*c(2,:);
fs = 3.6e6; % IQ sampling frequency
fciq = 97e6; % center frequency of IQ downconverter
spectrogram(double(z(1:5e5)), 1024, 512, 1024, fs, 'yaxis');
colormap(gray)
Spectral estimation
[Plots: the 16-sample sequences cos(2*pi*[0:15]/16*4) and cos(2*pi*[0:15]/16*4.2), each with DTFT magnitude (continuous) and 16-point DFT magnitude (samples); the 4-cycle tone falls exactly onto a DFT bin, the 4.2-cycle tone does not]
91 / 231
We introduced the DFT as a special case of the continuous Fourier
transform, where the input is sampled and periodic.
If the input is sampled, but not periodic, the DFT can still be used to
calculate an approximation of the Fourier transform of the original
continuous signal. However, there are two effects to consider. They are
particularly visible when analysing pure sine waves.
Sine waves whose frequency is a multiple of the base frequency (fs /n) of
the DFT are identical to their periodic extension beyond the size of the
DFT. They are, therefore, represented exactly by a single sharp peak in
the DFT. All their energy falls into one single frequency “bin” in the
DFT result.
Sine waves with other frequencies, which do not match exactly one of the
output frequency bins of the DFT, are still represented by a peak at the
output bin that represents the nearest integer multiple of the DFT’s base
frequency. However, such a peak is distorted in two ways:
▶ Its amplitude is lower (down to 63.7%).
▶ Much signal energy has “leaked” to other frequencies.
92 / 231
[Figure: 16-point DFT magnitude as the input frequency sweeps from bin 15 to bin 16; between the two bins, the peak amplitude dips and energy spreads into neighbouring bins]
The leakage of energy to other frequency bins not only blurs the estimated spectrum.
The peak amplitude also changes significantly as the frequency of a tone changes from
that associated with one output bin to the next, a phenomenon known as scalloping.
In the above graphic, an input sine wave gradually changes from the frequency of bin
15 to that of bin 16 (only positive frequencies shown).
93 / 231
Windowing
[Plots: a sine wave and its discrete Fourier transform; the same sine wave multiplied with a window function, and its discrete Fourier transform]
94 / 231
The reason for the leakage and scalloping losses is easy to visualize with the
help of the convolution theorem:
The operation of cutting a sequence of the size of the DFT input vector out of
a longer original signal (the one whose continuous Fourier spectrum we try to
estimate) is equivalent to multiplying this signal with a rectangular function.
This destroys all information and continuity outside the “window” that is fed
into the DFT.
Multiplication with a rectangular window of length T in the time domain is
equivalent to convolution with sin(πf T )/(πf T ) in the frequency domain.
The subsequent interpretation of this window as a periodic sequence by the
DFT leads to sampling of this convolution result (sampling meaning
multiplication with a Dirac comb whose impulses are spaced fs /n apart).
Where the window length was an exact multiple of the original signal period,
sampling of the sin(πf T )/(πf T ) curve leads to a single Dirac pulse, and the
windowing causes no distortion. In all other cases, the effects of the convolution
become visible in the frequency domain as leakage and scalloping losses.
95 / 231
Some better window functions
[Plot: rectangular, triangular, Hann and Hamming window shapes over the sample index]
[Plots: cos(2*pi*[0:15]/16*4.2) multiplied with hann(16) — time-domain samples, DTFT magnitude and DFT magnitude; the leakage is much reduced compared to the rectangular window]
97 / 231
[Plots: log-magnitude DTFT (dB over normalized frequency ×π rad/sample) of the 64-point rectangular, triangular, Hann and Hamming windows, showing their different main-lobe widths and side-lobe levels]
98 / 231
Numerous alternatives to the rectangular window have been proposed
that reduce leakage and scalloping in spectral estimation. These are
vectors multiplied element-wise with the input vector before applying the
DFT to it. They all force the signal amplitude smoothly down to zero at
the edge of the window, thereby avoiding the introduction of sharp jumps
in the signal when it is extended periodically by the DFT.
Three examples of such window vectors {wi}_{i=0}^{n−1} are:
Triangular window (Bartlett window):
wi = 1 − |1 − i/(n/2)|
Hann window (raised-cosine window):
wi = 0.5 − 0.5 × cos(2π·i/(n−1))
Hamming window:
wi = 0.54 − 0.46 × cos(2π·i/(n−1))
99 / 231
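All three windows are one-liners in MATLAB; a sketch of their effect on the off-bin tone from the earlier example:
n = 16; i = (0:n-1)';
w_tri  = 1 - abs(1 - i/(n/2));           % triangular (Bartlett)
w_hann = 0.5 - 0.5*cos(2*pi*i/(n-1));    % Hann
w_hamm = 0.54 - 0.46*cos(2*pi*i/(n-1));  % Hamming
x = cos(2*pi*i/16*4.2);                  % tone between two DFT bins
abs(fft(x))                              % rectangular window: strong leakage
abs(fft(x .* w_hann))                    % Hann window: side lobes suppressed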
Does zero padding increase DFT resolution?
The two figures below show two spectra of the 16-element sequence
xi = si · wi,  i ∈ {0, …, 15}.
The left plot shows the DFT of this windowed sequence; the right plot shows the DFT of the zero-padded windowed sequence
x′i = { si · wi, i ∈ {0, …, 15}
        0,       i ∈ {16, …, 63} }
[Plots: 16-point DFT of {xi} and 64-point DFT of {x′i}]
100 / 231
[Plots: the 16-sample sequence cos(2*pi*[0:15]/16*3.3) + cos(2*pi*[0:15]/16*4) with DTFT and DFT magnitudes, and the same sequence zero-padded to 64 samples — the 64-point DFT samples the same DTFT more densely, but the two tones remain just as blurred]
102 / 231
Applying the discrete Fourier transform (DFT) to an n-element long
real-valued sequence samples the DTFT of that sequence at n/2 + 1
discrete frequencies.
The DTFT spectrum has already been distorted by multiplying the
(hypothetically longer) signal with a windowing function that limits its
length to n non-zero values and forces the waveform down to zero
outside the window. Therefore, appending further zeros outside the
window will not affect the DTFT.
The frequency resolution of the DFT is the sampling frequency divided by
the block size of the DFT. Zero padding can therefore be used to increase
the frequency resolution of the DFT, to sample the DTFT at more
places. But that does not change the limit imposed on the frequency
resolution (i.e., blurriness) of the DTFT by the length of the window.
Note that zero padding does not add any additional information to the
signal. The DTFT has already been “low-pass filtered” by being
convolved with the spectrum of the windowing function. Zero padding in
the time domain merely causes the DFT to sample the same underlying
DTFT spectrum at a higher resolution, thereby making it easier to
visually distinguish spectral lines and to locate their peak more precisely.
103 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
FIR filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Digital filters
104 / 231
Window-based design of FIR filters
Recall that the ideal continuous low-pass filter with cut-off frequency fc has the frequency characteristic
H(f) = { 1 if |f| < fc
         0 if |f| > fc }  = rect(f/(2fc))
and the impulse response
h(t) = 2fc · sinc(2fc·t) = 2fc · sin(2πfc·t)/(2πfc·t).
105 / 231
Solutions:
▶ Make the impulse response finite by multiplying the sampled h(t) with a windowing function
▶ Make the impulse response causal by adding a delay of half the window size
The impulse response of an n-th order low-pass filter is then chosen as
hi = (2fc/fs) · sinc((2fc/fs)·(i − n/2)) · wi,  0 ≤ i ≤ n
where {wi} is a window sequence, such as the Hamming window.
106 / 231
FIR low-pass filter design example
[Plots: z-plane zeros, impulse response (truncated, delayed sinc), magnitude response (dB) and linear phase response over normalized frequency]
order: n = 30, cutoff frequency (−6 dB): fc = 0.25 × fs/2, window: Hamming
107 / 231
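This design can be reproduced with the Signal Processing Toolbox function fir1, or built by hand from the windowed-sinc formula above; a sketch (Wn is the cut-off as a fraction of fs/2):
n  = 30; Wn = 0.25;                            % order and cut-off, as above
b  = fir1(n, Wn, hamming(n+1));                % windowed-sinc FIR design
i  = 0:n;                                      % essentially the same by hand:
h  = Wn*sinc(Wn*(i - n/2)) .* hamming(n+1)';   % delayed, windowed sinc
h  = h/sum(h);                                 % normalize gain at f = 0
freqz(b, 1)                                    % magnitude and phase response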
Filter performance
An ideal filter has a gain of 1 in the pass-band and a gain of 0 in the stop
band, and nothing in between.
A practical filter will have
▶ frequency-dependent gain near 1 in the passband
▶ frequency-dependent gain below a threshold in the stopband
▶ a transition band between the pass and stop bands
We truncate the ideal, infinitely-long impulse response by multiplication
with a window sequence.
In the frequency domain, this will convolve the rectangular frequency
response of the ideal low-pass filter with the frequency characteristic of
the window.
The width of the main lobe determines the width of the transition band,
and the side lobes cause ripples in the passband and stopband.
108 / 231
Low-pass to band-pass filter conversion (modulation)
[Figure: a band-pass response H(f) covering fl < |f| < fh equals a low-pass response of bandwidth (fh − fl)/2 convolved with Dirac impulses at ±(fh + fl)/2]
109 / 231
Low-pass to high-pass filter conversion (freq. inversion)
In order to turn the spectrum X(f) of a real-valued signal xi sampled at fs into an inverted spectrum X′(f) = X(fs/2 − f), we merely have to shift the periodic spectrum by fs/2:
[Figure: X′(f) equals the periodic spectrum X(f) convolved with impulses at ±fs/2]
This can be accomplished by multiplying the sampled sequence xi with yi = cos(πfs·t) = cos(πi) (at t = i/fs), which is nothing but multiplication with the sequence
…, 1, −1, 1, −1, 1, −1, 1, −1, …
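In MATLAB this frequency inversion is a single element-wise multiplication:
x  = randn(1, 64);                % an arbitrary sampled sequence
xi = x .* (-1).^(0:length(x)-1);  % multiply with 1, -1, 1, -1, ... to invert the spectrum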
A filter where the Fourier transform H(f) of its impulse response h(t) is real-valued will not affect the phase of the filtered signal at any frequency. Only the amplitudes will be affected.
Filters that delay the phase of a signal at each frequency by the time ∆t therefore add to the phase angle a value −2πf∆t, which increases linearly with f. They are therefore called linear-phase filters.
111 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Finite impulse response (FIR) filter
yn = Σ_{m=0}^{M} bm · xn−m
M = 3:
[Block diagram: delay chain xn, xn−1, xn−2, xn−3; taps b0, b1, b2, b3 summed to yn (see slide 25)]
Transposed implementation:
[Block diagram: xn feeds the taps b3, b2, b1, b0 in parallel; the partial sums pass through the z^{−1} chain towards yn]
112 / 231
Infinite impulse response (IIR) filter
Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m      Usually normalize: a0 = 1
yn = ( Σ_{m=0}^{M} bm · xn−m − Σ_{k=1}^{N} ak · yn−k ) / a0
[Block diagram, direct form I: feed-forward taps b1, b2, b3 on xn−1, xn−2, xn−3 and feedback taps −a1, −a2, −a3 on yn−1, yn−2, yn−3]
113 / 231
Infinite impulse response (IIR) filter – direct form II
yn = ( Σ_{m=0}^{M} bm · xn−m − Σ_{k=1}^{N} ak · yn−k ) / a0
[Block diagram, direct form II: the feedback taps −a1, −a2, −a3 precede the feed-forward taps b1, b2, b3, sharing a single z^{−1} delay chain]
114 / 231
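MATLAB's filter(b, a, x) implements exactly this difference equation (internally as a transposed direct form II structure), with coefficient vectors b = [b0 … bM] and a = [a0 … aN]; e.g. to compute an IIR impulse response:
b = [1 0.5]; a = [1 -0.9];   % example coefficients, chosen arbitrarily
x = [1 zeros(1, 30)];        % unit impulse
h = filter(b, a, x);         % first 31 samples of the impulse response
stem(0:30, h)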
Polynomial representation of sequences
We can represent sequences {xn} as polynomials:
X(v) = Σ_{n=−∞}^{∞} xn · v^n
Example of polynomial multiplication:
(1 + 2v + 3v²) · (2 + 1v)
= 2 + 4v + 6v² + 1v + 2v² + 3v³
= 2 + 5v + 8v² + 3v³
115 / 231
Convolution of sequences is equivalent to polynomial multiplication:
{hn} ∗ {xn} = {yn}  ⇒  yn = Σ_{k=−∞}^{∞} hk · xn−k
H(v) · X(v) = ( Σ_{n=−∞}^{∞} hn·v^n ) · ( Σ_{n=−∞}^{∞} xn·v^n )
            = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} hk · xn−k · v^n
116 / 231
Example of polynomial division:
1/(1 − av) = 1 + av + a²v² + a³v³ + ··· = Σ_{n=0}^{∞} a^n·v^n
Long division:
(1) ÷ (1 − av) = 1 + av + a²v² + ···
 1 − av
      av
      av − a²v²
           a²v²
           a²v² − a³v³
           ···
[Block diagram: the one-pole feedback filter yn = xn + a·yn−1 realizes this infinite impulse response]
117 / 231
The z-transform
The z-transform of a sequence {xn} is defined as:
X(z) = Σ_{n=−∞}^{∞} xn · z^{−n}
Note that this differs only in the sign of the exponent from the polynomial representation discussed on the preceding slides.
Recall that the above X(z) is exactly the factor with which an exponential sequence {z^n} is multiplied, if it is convolved with {xn}:
{z^n} ∗ {xn} = {yn}
⇒ yn = Σ_{k=−∞}^{∞} z^{n−k}·xk = z^n · Σ_{k=−∞}^{∞} z^{−k}·xk = z^n · X(z)
118 / 231
The z-transform defines for each sequence a continuous complex-valued surface over the complex plane C.
For finite sequences, its value is defined across the entire complex plane (except possibly at z = 0 or |z| = ∞).
For infinite sequences, it can be shown that the z-transform converges only for the region
lim_{n→∞} |xn+1/xn| < |z| < lim_{n→−∞} |xn+1/xn|
The z-transform identifies a sequence unambiguously only in conjunction with a given region of convergence. In other words, there exist different sequences, that have the same expression as their z-transform, but that converge for different amplitudes of z.
On the unit circle z = e^{jω̇}, the z-transform of a sampled signal equals its discrete-time Fourier transform, where ω̇ = 2π·f/fs.
119 / 231
Properties of the z-transform
If X(z) is the z-transform of {xn }, we write here {xn } •−◦ X(z).
If {xn } •−◦ X(z) and {yn } •−◦ Y (z), then:
Linearity:
{a·xn + b·yn} •−◦ a·X(z) + b·Y(z)
Convolution:
{xn} ∗ {yn} •−◦ X(z) · Y(z)
Time shift:
{xn+k} •−◦ z^k · X(z)
120 / 231
Time reversal:
{x−n} •−◦ X(1/z)
Complex conjugate:
{x*n} •−◦ X*(z*)
Real/imaginary value:
{Re(xn)} •−◦ (X(z) + X*(z*))/2
{Im(xn)} •−◦ (X(z) − X*(z*))/(2j)
Initial value:
x0 = lim_{z→∞} X(z)  if xn = 0 for all n < 0
121 / 231
Some example sequences and their z-transforms:
xn                                  X(z)
δn                                  1
un                                  z/(z−1) = 1/(1−z^{−1})
a^n·un                              z/(z−a) = 1/(1−a·z^{−1})
n·un                                z/(z−1)²
n²·un                               z(z+1)/(z−1)³
e^{an}·un                           z/(z−e^a)
(n−1 choose k−1)·e^{a(n−k)}·un−k    1/(z−e^a)^k
sin(ω̇n+ϕ)·un                       (z²·sin(ϕ) + z·sin(ω̇−ϕ)) / (z² − 2z·cos(ω̇) + 1)
122 / 231
Example:
What is the z-transform of the impulse response {hn} of the discrete system yn = xn + a·yn−1?
[Block diagram: feedback loop with delay z^{−1} and gain a producing yn−1]
yn = xn + a·yn−1
Y(z) = X(z) + a·z^{−1}·Y(z)
Y(z) − a·z^{−1}·Y(z) = X(z)
Y(z)·(1 − a·z^{−1}) = X(z)
Y(z)/X(z) = 1/(1 − a·z^{−1}) = z/(z − a)
Since {yn} = {hn} ∗ {xn}, we have Y(z) = H(z) · X(z) and therefore
H(z) = Y(z)/X(z) = z/(z − a) = 1 + a·z^{−1} + a²·z^{−2} + ···
h0 = 1, h1 = a, h2 = a², …, hn = a^n for all n ≥ 0
We have applied here the linearity of the z-transform, and its time-shift and convolution properties.
123 / 231
z-transform of recursive filter structures
Consider the discrete system defined by
Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l
or equivalently
a0·yn + Σ_{l=1}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l
yn = a0^{−1} · ( Σ_{l=0}^{m} bl · xn−l − Σ_{l=1}^{k} al · yn−l )
[Block diagram, direct form I, with feed-forward taps b1 … bm and feedback taps −a1 … −ak]
124 / 231
Using the linearity and time-shift property of the z-transform:
Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l
Σ_{l=0}^{k} al·z^{−l} · Y(z) = Σ_{l=0}^{m} bl·z^{−l} · X(z)
Y(z) · Σ_{l=0}^{k} al·z^{−l} = X(z) · Σ_{l=0}^{m} bl·z^{−l}
H(z) = Y(z)/X(z) = ( Σ_{l=0}^{m} bl·z^{−l} ) / ( Σ_{l=0}^{k} al·z^{−l} )
H(z) = (b0 + b1·z^{−1} + b2·z^{−2} + ··· + bm·z^{−m}) / (a0 + a1·z^{−1} + a2·z^{−2} + ··· + ak·z^{−k})
125 / 231
The z-transform of the impulse response {hn} of the causal LTI system defined by
Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l
is the rational function
H(z) = (b0 + b1·z^{−1} + b2·z^{−2} + ··· + bm·z^{−m}) / (a0 + a1·z^{−1} + a2·z^{−2} + ··· + ak·z^{−k})
(bm ≠ 0, ak ≠ 0), which can also be written as
H(z) = ( z^k · Σ_{l=0}^{m} bl·z^{m−l} ) / ( z^m · Σ_{l=0}^{k} al·z^{k−l} )
     = (z^k/z^m) · (b0·z^m + b1·z^{m−1} + b2·z^{m−2} + ··· + bm) / (a0·z^k + a1·z^{k−1} + a2·z^{k−2} + ··· + ak).
H(z) has m zeros and k poles at non-zero locations in the z plane, plus k − m zeros (if k > m) or m − k poles (if m > k) at z = 0.
126 / 231
This function can be converted into the form
H(z) = (b0/a0) · ( Π_{l=1}^{m} (1 − cl·z^{−1}) ) / ( Π_{l=1}^{k} (1 − dl·z^{−1}) )
     = (b0/a0) · z^{k−m} · ( Π_{l=1}^{m} (z − cl) ) / ( Π_{l=1}^{k} (z − dl) )
where the cl are the non-zero positions of zeros (H(cl) = 0) and the dl are the non-zero positions of the poles (i.e., z → dl ⇒ |H(z)| → ∞) of H(z). Except for a constant factor, H(z) is entirely characterized by the position of these zeros and poles.
On the unit circle z = e^{jω̇}, H(e^{jω̇}) is the discrete-time Fourier transform of {hn} (ω̇ = πf/(fs/2)). The DTFT amplitude can also be expressed in terms of the relative position of e^{jω̇} to the zeros and poles:
|H(e^{jω̇})| = |b0/a0| · ( Π_{l=1}^{m} |e^{jω̇} − cl| ) / ( Π_{l=1}^{k} |e^{jω̇} − dl| )
127 / 231
Example: a single-pole filter
Consider this IIR filter: yn = 0.8·xn + 0.2·yn−1. Its z-transform H(z) defines a surface |H(z)| over the z plane.
[Plots: impulse response 0.8, 0.16, 0.032, … and the surface |H(z)| over the complex plane, with a pole at z = 0.2]
128 / 231
H(z) = 0.8/(1 − 0.2·z^{−1}) = 0.8z/(z − 0.2)  (cont’d)
[Plots: |H(z)| surface and the magnitude response |H(e^{jω̇})|, falling from 1 at ω̇ = 0 towards about 0.67 at ω̇ = π]
Run this LTI filter at sampling frequency fs and test it with sinusoidal input (frequency f, amplitude 1): xn = cos(2πf·n/fs)
Output: yn = A(f) · cos(2πf·n/fs + θ(f))
What are the gain A(f) and phase delay θ(f) at frequency f?
Answer: A(f) = |H(e^{j2πf/fs})|
θ(f) = ∠H(e^{j2πf/fs}) = tan^{−1}( Im{H(e^{j2πf/fs})} / Re{H(e^{j2πf/fs})} )
Example: fs = 8 kHz, f = 2 kHz (normalized frequency f/(fs/2) = 0.5) ⇒ gain A(2 kHz) = |H(e^{jπ/2})| = |H(j)| = |0.8j/(j − 0.2)| = |0.8j·(−j − 0.2)/((j − 0.2)(−j − 0.2))| = |(0.8 − 0.16j)/(1 + 0.04)| = √(0.8² + 0.16²)/1.04 = 0.784…
129 / 231
Visual verification in MATLAB:
n = 0:15;
fs = 8000;
f = 1500;
x = cos(2*pi*f*n/fs);
b = [0.8]; a = [1 -0.2];
y1 = filter(b,a,x);
z = exp(j*2*pi*f/fs);
H = 0.8*z/(z-0.2);
A = abs(H);
theta = atan(imag(H)/real(H));
y2 = A*cos(2*pi*f*n/fs+theta);
plot(n, x, 'bx-', ...
     n, y1, 'go-', ...
     n, y2, 'r+-')
legend('x', ...
       'y (time domain)', ...
       'y (z-transform)')
ylim([-1.1 1.8])
[Plot: x, y1 and y2 over n = 0 … 15; after the initial transient, y1 and y2 coincide]
130 / 231
How do poles affect the time domain? Pole positions and impulse responses:
H(z) = z/(z − 0.7) = 1/(1 − 0.7·z^{−1})   [real pole inside the unit circle: decaying impulse response 0.7^n]
H(z) = z/(z − 0.9) = 1/(1 − 0.9·z^{−1})   [pole closer to the unit circle: slower decay]
H(z) = z/(z − 1) = 1/(1 − z^{−1})          [pole on the unit circle: unit step, no decay]
H(z) = z/(z − 1.1) = 1/(1 − 1.1·z^{−1})   [pole outside the unit circle: exponential growth]
H(z) = z²/((z − 0.9·e^{jπ/6})(z − 0.9·e^{−jπ/6})) = 1/(1 − 1.8·cos(π/6)·z^{−1} + 0.9²·z^{−2})   [complex pole pair inside the unit circle: decaying oscillation]
H(z) = z²/((z − e^{jπ/6})(z − e^{−jπ/6})) = 1/(1 − 2·cos(π/6)·z^{−1} + z^{−2})   [complex pole pair on the unit circle: sustained oscillation]
H(z) = z²/((z − 0.9·e^{jπ/2})(z − 0.9·e^{−jπ/2})) = 1/(1 − 1.8·cos(π/2)·z^{−1} + 0.9²·z^{−2}) = 1/(1 + 0.9²·z^{−2})   [pole pair at ±j·0.9: decaying oscillation at a quarter of the sampling rate]
H(z) = z/(z + 1) = 1/(1 + z^{−1})          [pole at z = −1: alternating sequence 1, −1, 1, −1, …]
[Plots for each case: z-plane pole positions and impulse response over 0 … 30 samples]
134 / 231
IIR filter design goals
135 / 231
IIR filter design techniques
The designer can then trade off conflicting goals such as: small transition
band, low order, low ripple amplitude or absence of ripples.
Design techniques for making these tradeoffs for analog filters (involving
capacitors, resistors, coils) can also be used to design digital IIR filters:
Butterworth filters: Have no ripples, gain falls monotonically across the pass and transition band. Within the passband, the gain drops slowly down to √(1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor 2^N per octave (N·20 dB/decade).
Chebyshev type I filters: Distribute the gain error uniformly throughout the
passband (equiripples) and drop off monotonically outside.
Chebyshev type II filters: Distribute the gain error uniformly throughout the
stopband (equiripples) and drop off monotonically in the passband.
Elliptic filters (Cauer filters): Distribute the gain error as equiripples both in
the passband and stopband. This type of filter is optimal in terms of the
combination of the passband-gain tolerance, stopband-gain tolerance, and
transition-band width that can be achieved at a given filter order.
136 / 231
IIR filter design in MATLAB
MATLAB fdatool
137 / 231
Cascade of filter sections
Higher-order IIR filters can be numerically unstable (quantization noise).
A commonly used trick is to split a higher-order IIR filter design into a cascade of l second-order (biquad) filter sections of the form:
H(z) = (b0 + b1·z^{−1} + b2·z^{−2}) / (1 + a1·z^{−1} + a2·z^{−2})
[Block diagram of one biquad section]
Filter sections H1, H2, …, Hl are then applied sequentially to the input sequence, resulting in a filter
H(z) = Π_{k=1}^{l} Hk(z) = Π_{k=1}^{l} (bk,0 + bk,1·z^{−1} + bk,2·z^{−2}) / (1 + ak,1·z^{−1} + ak,2·z^{−2})
Each section implements one pair of poles and one pair of zeros. Jackson’s algorithm for pairing poles and zeros into sections: pick the pole pair closest to the unit circle, and place it into a section along with the nearest pair of zeros; repeat until no poles are left.
138 / 231
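In MATLAB, higher-order designs are typically converted into such second-order sections via the zero-pole-gain form; a sketch:
[z, p, k] = butter(6, 0.3);        % 6th-order low-pass in zero-pole-gain form
sos = zp2sos(z, p, k);             % pair poles and zeros into biquad sections
y = sosfilt(sos, randn(1, 1000));  % numerically robust cascade filtering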
Butterworth filter design example
[Plots for several IIR design examples, one per slide: z-plane pole-zero diagram, impulse response, magnitude response (dB) and phase response over normalized frequency; parameters of the final example: order 5, cutoff frequency 0.5 × fs/2, pass-band ripple −3 dB, stop-band ripple −20 dB]
143 / 231
Notch filter design example
[Plots: z-plane pole-zero diagrams, impulse responses, magnitude responses (dB) and phase responses of notch filter designs]
A random variable is a mapping X : S → E from an ensemble S of possible outcomes into a set E of values; a random sequence …, x−2, x−1, x0, x1, x2, … is a sequence of such random variables.
The derivative pxn(a) = P′xn(a) of the cumulative distribution function Pxn(a) is called the probability density function.
This helps us to define quantities such as the
▶ expected value E(xn) = ∫ a·pxn(a) da
▶ mean-square value (average power) E(|xn|²) = ∫ |a|²·pxn(a) da
148 / 231
Stationary processes
A random process is called stationary if all its joint probability distributions are invariant under a time shift, i.e.
Prob(xn1 ≤ a1, …, xnk ≤ ak) = Prob(xn1+l ≤ a1, …, xnk+l ≤ ak)
for any shift l and any number k.
If the above condition holds at least for k = 1, then the mean
E(xn) = mx,
and variance
E(|xn − mx|²) = σx²
are constant over all n. (σx is also called standard deviation.)
149 / 231
A wide-sense stationary random process {xn } can not only be
characterized by its mean mx = E(xn ) and variance σx2 = E(|xn − mx |2 )
over all sample positions n.
It can, in addition, also be characterized by its autocorrelation sequence
φxx (k) = E(xn+k · x∗n )
If the time averages of a single sample sequence,
mx = lim_{L→∞} (1/(2L+1)) · Σ_{n=−L}^{L} xn
φxx(k) = lim_{L→∞} (1/(2L+1)) · Σ_{n=−L}^{L} xn+k·x*n,
match the ensemble averages mx = E(xn) and φxx(k) = E(xn+k·x*n),
then we call the process mean ergodic and correlation ergodic, resp.
Ergodicity means that single-sample-sequence time averages are identical to averages over the
entire ensemble for a random process, or, in other words, variation along the time axis looks similar
to variation across the ensemble.
151 / 231
Deterministic crosscorrelation sequence
For deterministic finite-energy sequences {xn} and {yn}, we can define their crosscorrelation sequence as
cxy(k) = Σ_{i=−∞}^{∞} xi+k · yi* = Σ_{i=−∞}^{∞} xi · y*_{i−k}.
If {xn } is similar to {yn }, but lags l elements behind (xn ≈ yn−l ), then cxy (l) will be a peak in
the crosscorrelation sequence. It can therefore be used to locate shifted versions of a known
sequence in another one.
MATLAB’s xcorr function calculates the crosscorrelation sequence for two finite sequences
(vectors) of equal length (or zero-pads the shorter one). Option unbiased divides for each lag
∗
through the length of the overlap, to estimate E(xn+k yn ) from a pair of sample sequences {xn }
and {yn } from jointly correlation-ergodic WSS processes {xn } and {yn }.
153 / 231
Deterministic autocorrelation sequence
Equivalently, we define the deterministic autocorrelation sequence in the time domain as
cxx(k) = Σ_{i=−∞}^{∞} xi+k · xi*.
155 / 231
The autocorrelation of a sequence {xn} with power spectrum Φxx(e^{jω̇}) is
φxx(k) = (1/2π) · ∫_{−π}^{π} Φxx(e^{jω̇}) · e^{jkω̇} dω̇
We can therefore interpret
(1/π) · ∫_{2π·fl/fs}^{2π·fh/fs} Φxx(e^{jω̇}) dω̇
as the contribution of the frequency band fl ≤ |f| ≤ fh to the average power of the signal.
156 / 231
Filtered random sequences
Let {xn} be a random sequence from a WSS random process. The output of an LTI filter applied to it,
yn = Σ_{k=−∞}^{∞} hk · xn−k = Σ_{k=−∞}^{∞} hn−k · xk,
is again a random sequence from a WSS random process, with
φyy(k) = Σ_{i=−∞}^{∞} φxx(k − i) · chh(i),   where φxx(k) = E(xn+k·x*n) and chh(k) = Σ_{i=−∞}^{∞} h_{i+k}·h_i*.
In other words:
{yn} = {hn} ∗ {xn}  ⇒  {φyy(n)} = {chh(n)} ∗ {φxx(n)},   Φyy(e^{jω̇}) = |H(e^{jω̇})|² · Φxx(e^{jω̇})
Similarly:
{yn} = {hn} ∗ {xn}  ⇒  {φyx(n)} = {hn} ∗ {φxx(n)},   Φyx(e^{jω̇}) = H(e^{jω̇}) · Φxx(e^{jω̇})
157 / 231
Summary:
{yn} = {hn} ∗ {xn}  ⇒  {φxx(n)} —∗{hn}→ {φyx(n)} —∗{h*−n}→ {φyy(n)}
Proofs:
φyx(l) = E(x*n · yn+l) = E( x*n · Σ_{k=−∞}^{∞} hk · xn+l−k )
       = Σ_{k=−∞}^{∞} hk · E(x*n · xn+l−k)  (WSS)  = Σ_{k=−∞}^{∞} hk · φxx(l − k)
φyy(l) = E(y*n · yn+l) = E( Σ_{k=−∞}^{∞} h*k·x*n−k · Σ_{m=−∞}^{∞} hm·xn+l−m )
       = Σ_{k=−∞}^{∞} Σ_{m=−∞}^{∞} h*k · hm · E(x*n−k · xn+l−m)
       (WSS) = Σ_{k=−∞}^{∞} Σ_{m=−∞}^{∞} h*k · hm · φxx(l + k − m)
       (substituting i := m − k) = Σ_{k=−∞}^{∞} Σ_{i=−∞}^{∞} h*k · h_{k+i} · φxx(l − i)
       = Σ_{i=−∞}^{∞} φxx(l − i) · Σ_{k=−∞}^{∞} h*k · h_{k+i} = Σ_{i=−∞}^{∞} φxx(l − i) · chh(i)
158 / 231
White noise
A random sequence {xn} is a white noise signal, if mx = 0 and φxx(k) = σx²·δk.
The power spectrum of a white noise signal is flat:
Φxx(e^{jω̇}) = σx².
A commonly used form of white noise is white Gaussian noise (WGN), where each random variable xn is independent and identically distributed (i.i.d.) according to the normal-distribution probability density function
pxn(a) = (1/√(2πσx²)) · e^{−(a−mx)²/(2σx²)}
Application example:
Where an LTI {yn} = {hn} ∗ {xn} can be observed to operate on white noise {xn} with φxx(k) = σx²·δk, the crosscorrelation between input and output will reveal the impulse response of the system:
φyx(k) = σx² · hk
where φyx(k) = φ*xy(−k) = E(yn+k · x*n).
159 / 231
Demonstration of covert spread-spectrum radar:
x = randn(1,10000);
h = [0 0 0.4 0 0 0.3 0 0 0.2 0 0];
y = conv(x, h);
figure(1)
plot(1:length(x), x, 1:length(y), y-5);
figure(2)
c = conv(fliplr(x),y);
stem(c(length(c)/2-20:length(c)/2+20));
[Plots: the waveforms x and y (offset); the crosscorrelation c, with peaks of height ≈ 4000, 3000, 2000 at the three delays of h]
160 / 231
Spectral estimation: periodogram
Estimate the amplitude spectrum of a noisy discrete sequence {xn} containing sine tones in white noise, comparing four estimators:
s1 = abs(fft(x(1:n))/n);
s2 = abs(fft(x(1:8*n))/(8*n));
s3 = mean(abs(fft(xx, n, 2)/n),1);
s4 = abs(mean(fft(xx, n, 2)/n,1));
s1: DFT of a single n = 64 sample window; s2: DFT of a single 512-sample window.
s3: {xn}_{n=1}^{64000} cut into 1000 consecutive 64-sample windows (the rows of xx), showing the average of the absolute values of the DFT of each window. Non-coherent averaging: discard phase information first. This better approximates the shape of the power spectrum: with a flat noise floor.
s4: Same 1000 windows, but this time the complex values of the DFTs averaged before the absolute value was taken ⇒ coherent averaging. Because DFT is linear, this is identical to first averaging all 1000 windows and then applying a single DFT and taking its absolute value.
The windows start 64 samples apart. Only periodic waveforms with a period length that divides 64 are not averaged away. This periodic averaging step suppresses both the noise and the second sine wave.
162 / 231
Welch’s method for estimating PSD
“Periodogram”: single-rectangular-window DTFT power spectrum of a random sequence {xn}: |X(ω̇)|² with X(ω̇) = Σ_{n=0}^{N−1} xn · e^{−2πjnω̇}.
Problem: Var[|X(ω̇)|²] / E[|X(ω̇)|²] does not drop with increasing window length N.
“Welch’s method” for estimating the PSD makes three improvements:
▶ Reduce leakage using a non-rectangular window sequence {wi} (“modified periodogram”)
▶ To reduce the variance, average K periodograms of length N.
▶ Triangular, Hamming, Hanning, etc. windows can be used with 50% overlap (L = N/2), such that all samples contribute with equal weight.
xk,n = x_{k·L+n} · wn,   0 ≤ k < K, 0 ≤ n < N
Xk(ω̇) = Σ_{n=0}^{N−1} xk,n · e^{−2πjnω̇}
P(ω̇) = (1/K) · Σ_{k=0}^{K−1} |Xk(ω̇)|²
163 / 231
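MATLAB's pwelch implements this procedure; a sketch with an arbitrary test signal:
fs = 8000;
x  = sin(2*pi*440*(0:8*fs-1)/fs) + randn(1, 8*fs);  % tone buried in white noise
pwelch(x, hann(1024), 512, 1024, fs)                % K averaged modified periodograms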
Periodic averaging
If a signal x(t) has a periodic component with period length tp, then we can isolate this periodic component from the discrete sequence xn = x(n/fs) by periodic averaging
x̄n = lim_{L→∞} (1/(2L+1)) · Σ_{i=−L}^{L} xn+pi ≈ (1/N) · Σ_{i=1}^{N} xn+pi,   n ∈ {0, …, p − 1}
but only if the period length in samples p = tp · fs is an integer.
164 / 231
Parametric models of the power spectrum
If we understand the physical process that generates a random sequence, we may be able to model and estimate its power spectrum more accurately, with fewer parameters.
If {xn} can be modeled as white noise filtered by an LTI system H(e^{jω̇}), then
Φxx(e^{jω̇}) = σw² · |H(e^{jω̇})|².
Often such an LTI can be modeled as an IIR filter with
H(e^{jω̇}) = (b0 + b1·z^{−1} + b2·z^{−2} + ··· + bm·z^{−m}) / (a0 + a1·z^{−1} + a2·z^{−2} + ··· + ak·z^{−k}),  z = e^{jω̇},
where {wn} is stationary white noise with variance σw².
There is also the simpler AR(k) model xn = wn − Σ_{l=1}^{k} al · xn−l.
165 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
IQ sampling
12 Audiovisual data compression
IQ sampling / downconversion / complex baseband signal
A real signal x(t) whose information lies in the band fl < |f| < fh is downconverted to a complex baseband signal z(t), which is then sampled: zn = z(n/fs)
166 / 231
[Figure: the spectrum X(f) is convolved with δ(f + fc), shifting it down by the centre frequency fc; after low-pass filtering, the baseband copy is sampled]
167 / 231
IQ upconversion / interpolation
Given a discrete sequence of downconverted samples zn ∈ C recorded with sampling frequency fs at centre frequency fc (as on slide 166), how can we reconstruct a continuous waveform x̃(t) ∈ R that matches the original signal x(t) within the frequency interval fl to fh?
Reconstruction steps:
▶ Interpolation of complex baseband signal (remove aliases):
z̃(t) = Σ_{n=−∞}^{∞} zn · sinc(t·fs − n)
▶ Upconversion: multiply with the complex phasor e^{2πjfc·t} and take the real part: x̃(t) = 2·Re( z̃(t) · e^{2πjfc·t} )
169 / 231
Software-defined radio (SDR) front end
[Block diagram of the IQ downconverter in an SDR receiver: the antenna signal is mixed with cos(2πfc·t) and its 90°-shifted copy, each product is low-pass filtered and sampled, giving the I and Q sample streams]
The real part Re(z(t)) is also known as the “in-phase” signal (I) and the imaginary part Im(z(t)) as the “quadrature” signal (Q).
171 / 231
Visualization of IQ representation of sine waves
[Block diagram and phasor plot: the input x(t) is mixed with cos(2πfc·t) and its −90° shifted copy to produce I and Q, the coordinates of the complex phasor z(t)]
Recall these products of sine and cosine functions:
▶ cos(x) · cos(y) = ½·cos(x − y) + ½·cos(x + y)
▶ sin(x) · sin(y) = ½·cos(x − y) − ½·cos(x + y)
▶ sin(x) · cos(y) = ½·sin(x − y) + ½·sin(x + y)
Consider (with x = 2πfc·t):
▶ sin(x) = cos(x − ½π)
▶ cos(x) · cos(x) = ½ + ½·cos 2x
▶ sin(x) · sin(x) = ½ − ½·cos 2x
▶ sin(x) · cos(x) = 0 + ½·sin 2x
▶ cos(x) · cos(x − ϕ) = ½·cos(ϕ) + ½·cos(2x − ϕ)
▶ sin(x) · cos(x − ϕ) = ½·sin(ϕ) + ½·sin(2x − ϕ)
172 / 231
IQ representation of amplitude-modulated signals
Assume voice signal s(t) contains only frequencies below B/2.
Antenna signal amplitude-modulated with carrier frequency fc:
x(t) = s(t) · A · cos(2πfc·t + ϕ)
After IQ downconversion with centre frequency f′c ≈ fc:
z(t) = (A/2) · s(t) · e^{2πj(fc−f′c)·t + jϕ}
[Phasor plot: Im[z(t)] versus Re[z(t)]]
With perfect receiver tuning f′c = fc:
z(t) = (A/2) · s(t) · e^{jϕ}
Reception techniques:
▶ Non-coherent demodulation (requires s(t) ≥ 0): s(t) = (2/A) · |z(t)|
174 / 231
Frequency demodulating IQ samples
Determine s(t) from the downconverted signal

z(t) = (A/2) · e^{2πjk·∫₀ᵗ s(τ) dτ + jϕ} .

First idea: measure the angle ∠z(t), where the angle operator ∠ is
defined such that ∠(a · e^{jφ}) = φ (a, φ ∈ R, a > 0). Then take its derivative:

s(t) = (1/(2πk)) · (d/dt) ∠z(t)
Problem: angle ambiguity, ∠ works only for −π ≤ φ < π.
Ugly hack: MATLAB function unwrap removes 2π jumps from sample sequences
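A cleaner MATLAB sketch differentiates the phase via the angle between consecutive IQ samples, which stays in (−π, π] and needs no unwrapping (assumes a complex sample vector z, sample rate fs, and modulation constant k):

ds = angle(z(2:end) .* conj(z(1:end-1)));  % phase increment per sample
s  = ds * fs / (2*pi*k);                   % discrete (d/dt angle(z))/(2 pi k)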
[Figure residue: bit-labelled constellation diagrams of digital modulation schemes]
176 / 231
Basic model of a modem
[Block diagram: the transmitted symbol pulses pass through an LTI channel ∗hc (t) with additive noise +n(t); the receiver applies IQ downconversion, a receive filter ∗hr (t), sampling, and a data slicer to recover the bits]
177 / 231
Pulse Amplitude Modulation (PAM)
Baseband transmission (e.g., for wires), no IQ up/down conversion
I binary PAM: ai ∈ S = {−A, A} ⊂ R
1 bit/symbol ⇒ bit rate (bit/s) = symbol rate (baud)
I m-ary PAM: ai ∈ S = {A1 , . . . , Am } ⊂ R
k = log2 m bit/symbol ⇒ bit rate (bit/s) > symbol rate (baud)
I bit sequence {bj } → symbol sequence {ai },
ai = f (b_{k·i} , . . . , b_{k·i+k−1} )
Pulse generator (symbol rate fs = ts⁻¹):

x̂(t) = Σ_i ai · δ(t − i·ts )
Channel:

z(t) = ∫₀^∞ hc (s) · x(t − s) ds + n(t)
178 / 231
PAM reception
Receive filter applied to channel output z(t):

y(t) = ∫₀^∞ hr (s) · z(t − s) ds
Initial symbol pulses x̂(t) have now passed through three LTIs:
y = h ∗ x̂
h = ht ∗ hc ∗ hr
H(f ) = Ht (f ) · Hc (f ) · Hr (f )
Ideally, we want

h(i·ts ) = { 1, i = 0 ;  0, i ≠ 0 }

otherwise yn will depend on other (mainly previous) symbols, not just on
an ⇒ intersymbol interference. (See also: interpolation function)
Nyquist ISI criterion

yn = an + vn  ⇔  h(t) · Σ_i δ(t − i·ts ) = δ(t)

             ⇔  H(f ) ∗ ( fs · Σ_i δ(f − i·fs ) ) = 1

             ⇔  Σ_i H(f − i·fs ) = ts
181 / 231
Some possible pulse-shape choices
I h(t) = rect(t/ts ) •−◦ H(f ) = ts sinc(f /fs )
Rectangular pulses may be practical on fibre optics and short cables, where there are no
bandwidth restrictions. Not suitable for radio: bandwidth high compared to symbol rate.
I h(t) = sinc(t/ts ) •−◦ H(f ) = ts rect(f /fs )
Most bandwidth efficient pulse shape, but very long symbol waveform, very sensitive to
clock synchronization errors.
I Raised-cosine filter: rectangle with half-period cosine transitions
(a MATLAB sketch of this pulse follows after this list)

H(f ) = { ts ,                                     |f | ≤ fs/2 − β
          ts · cos²( (π/(4β)) · (|f | − fs/2 + β) ),  fs/2 − β < |f | ≤ fs/2 + β
          0,                                       |f | > fs/2 + β }

h(t) = sinc(t/ts ) · cos(2πβt) / (1 − (4βt)²)

Transition width (roll-off) β with 0 ≤ β ≤ fs/2; for β = 0 this is H(f ) = ts rect(f /fs ).
I Gaussian filter: both h(t) and H(f ) are Gaussians (self-Fourier)
Fastest transition without overshoot in either time or frequency domain, but does not satisfy
the Nyquist ISI criterion.
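A minimal MATLAB sketch of the raised-cosine pulse referenced above (assumes the Signal Processing Toolbox sinc; the removable singularity at 4βt = ±1 is filled in with its limit value, obtained by l'Hôpital's rule):

fs = 1; ts = 1/fs; beta = fs/4;        % hypothetical rate and roll-off
t = (-8 : 1/16 : 8) * ts;
h = sinc(t/ts) .* cos(2*pi*beta*t) ./ (1 - (4*beta*t).^2);
sing = abs(1 - (4*beta*t).^2) < eps;   % removable singularity points
h(sing) = sinc(t(sing)/ts) * pi/4;     % limit value at 4*beta*t = +-1
plot(t, h);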
182 / 231
Optimal transmit and receive filtering
The Nyquist ISI criterion constrains the end-to-end response H(f ) = Ht (f ) · Hc (f ) · Hr (f ).
Bandwidth limits guide the choice of Ht (f ), and the channel dictates Hc (f ) and
N (f ).
How should we then choose Ht (f ) and Hr (f )?
Select a received pulse spectrum Pr (f ), e.g. raised cosine. Then for some
arbitrary gain factor k > 0:
H(f ) = Ht (f ) · Hc (f ) · Hr (f ) = k · Pr (f )
Optimal filters
Minimize the noise variance Var(vn ) = ∫ N (f ) · |Hr (f )|² df at the slicer,
relative to the symbol distance:

|Hr (f )| = ( Pr (f ) / ( √(N (f )) · |Hc (f )| ) )^{1/2}

|Ht (f )| = k · ( Pr (f ) · √(N (f )) / |Hc (f )| )^{1/2}
If N (f ) and Hc (f ) are flat: |Hr (f )| = |Ht (f )|/k′, e.g. root raised cosine.
183 / 231
Outline
1 Sequences and systems
2 Convolution
3 Fourier transform
4 Sampling
5 Discrete Fourier transform
6 Deconvolution
7 Spectral estimation
8 Digital filters
9 IIR filters
10 Random signals
11 Digital communication
12 Audiovisual data compression
Audiovisual data compression
[Block diagram: signal path from the sensed stimulus through perceptual coding, entropy coding and channel coding, over a noisy channel, then through channel decoding, entropy decoding and perceptual decoding to a display and the human senses]
184 / 231
Audio-visual lossy coding today typically consists of these steps:
I A transducer converts the original stimulus into a voltage.
I This analog signal is then sampled and quantized.
The digitization parameters (sampling frequency, quantization levels) are preferably chosen
generously beyond the ability of human senses or output devices.
I The digitized sensor-domain signal is then transformed into a
perceptual domain.
This step often mimics some of the first neural processing steps in humans.
I This signal is quantized again, based on a perceptual model of what level
of quantization-noise humans can still sense.
I The resulting quantized levels may still be highly statistically dependent.
A prediction or decorrelation transform exploits this and produces a less
dependent symbol sequence of lower entropy.
I An entropy coder turns that into an apparently-random bit string, whose
length approximates the remaining entropy.
The first neural processing steps in humans are in effect often a kind of decorrelation transform;
our eyes and ears were optimized like any other AV communications system. This allows us to use
the same transform for decorrelating and transforming into a perceptually relevant domain.
185 / 231
Outline of the remaining lectures
186 / 231
Entropy coding review – Huffman
Entropy:  H = Σ_{α∈A} p(α) · log₂(1/p(α)) = 2.3016 bit

[Figure: Huffman code tree, built by pairwise merging of the two least probable nodes, for the alphabet A = {u, v, w, x, y, z} with p(u) = 0.35, p(v) = p(w) = 0.20, p(x) = 0.15, p(y) = p(z) = 0.05; internal node weights include 0.25, 0.40, 0.60, 1.00, with edges labelled 0 and 1]
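The entropy value can be checked with a two-line MATLAB computation over the example probabilities:

p = [0.35 0.20 0.20 0.15 0.05 0.05];   % p(u) ... p(z)
H = sum(p .* log2(1./p))               % = 2.3016 bit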
187 / 231
Entropy coding review – arithmetic coding
Partition [0,1] according to symbol probabilities:
u: [0, 0.35), v: [0.35, 0.55), w: [0.55, 0.75), x: [0.75, 0.9), y: [0.9, 0.95), z: [0.95, 1.0)

Encode text wuvw . . . as numeric value (0.58. . . ) in nested intervals:

[Figure: nested intervals for “wuvw”: [0.55, 0.75) → [0.55, 0.62) → [0.5745, 0.5885) → [0.5822, 0.5850)]
188 / 231
Arithmetic coding
Several advantages:
I Length of output bitstring can approximate the theoretical
information content of the input to within 1 bit.
I Performs well with probabilities > 0.5, where the information per
symbol is less than one bit.
I Interval arithmetic makes it easy to change symbol probabilities (no
need to modify code-word tree) ⇒ convenient for adaptive coding
Can be implemented efficiently with fixed-length arithmetic by rounding
probabilities and shifting out leading digits as soon as leading zeros
appear in interval size. Usually combined with adaptive probability
estimation.
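An idealized MATLAB sketch of the interval nesting shown on the previous slide (floating-point intervals for clarity; a practical coder uses the fixed-length arithmetic just described):

alphabet = 'uvwxyz'; p = [0.35 0.20 0.20 0.15 0.05 0.05];
c = [0 cumsum(p)];                  % cumulative interval boundaries
lo = 0; hi = 1;
for ch = 'wuvw'
    i = find(alphabet == ch);
    width = hi - lo;
    hi = lo + width * c(i+1);       % shrink to the symbol's sub-interval
    lo = lo + width * c(i);
end
disp((lo + hi)/2)                   % 0.5836; any value in [lo,hi) works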
Huffman coding remains popular because of its simplicity and lack of patent-licence issues.
189 / 231
Coding of sources with memory and correlated symbols
Run-length coding: encode runs of identical symbols by their lengths,
e.g. a bit string consisting of runs of 5, 7, 12, 3, and 3 identical bits
→ 5 7 12 3 3
Predictive coding:

encoder:  g(t) = f(t) − P(f(t−1), f(t−2), . . .)
decoder:  f(t) = g(t) + P(f(t−1), f(t−2), . . .)
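A minimal MATLAB sketch with the simplest predictor P(f(t−1)) = f(t−1) (hypothetical sample values):

f  = [100 102 105 105 104 101];
g  = [f(1) diff(f)];     % encoder output: prediction residuals
fr = cumsum(g);          % decoder: running sum restores the signal
isequal(fr, f)           % -> 1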
190 / 231
Old (Group 3 MH) fax code
I Run-length encoding plus modified Huffman code
I Fixed code table (from eight sample pages)
I separate codes for runs of white and black pixels
I termination code in the range 0–63 switches between black and white code
I makeup code can extend length of a run by a multiple of 64
I termination run length 0 needed where run length is a multiple of 64
I single white column added on left side before transmission
I makeup codes above 1728 equal for black and white
I 12-bit end-of-line marker: 000000000001 (can be prefixed by up to seven zero-bits to reach next byte boundary)

Example: line with 2 w, 4 b, 200 w, 3 b, EOL →
1000|011|010111|10011|10|000000000001

pixels  white code  black code
0       00110101    0000110111
1       000111      010
2       0111        11
3       1000        10
4       1011        011
5       1100        0011
6       1110        0010
7       1111        00011
8       10011       000101
9       10100       000100
10      00111       0000100
11      01000       0000101
12      001000      0000111
13      000011      00000100
14      110100      00000111
15      110101      000011000
16      101010      0000010111
...     ...         ...
63      00110100    000001100111
64      11011       0000001111
128     10010       000011001000
192     010111      000011001001
...     ...         ...
1728    010011011   0000001100101
191 / 231
Modern (JBIG) fax code
Based on the counted numbers nblack and nwhite of how often each pixel value
has been encountered so far in each of the 1024 contexts, the probability for
the next pixel being black is estimated as
pblack = (nblack + 1) / (nwhite + nblack + 2)
The encoder updates its estimate only after the newly counted pixel has been
encoded, such that the decoder knows the exact same statistics.
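A minimal MATLAB sketch of this adaptive estimator for a single context (hypothetical pixel stream; the arithmetic-coding step itself is elided):

nb = 0; nw = 0;
for pix = [0 0 1 0 1 1]                 % 1 = black, 0 = white
    pblack = (nb + 1) / (nw + nb + 2);  % estimate used to code pix
    % ... arithmetic-code pix with probability pblack ...
    if pix, nb = nb + 1; else, nw = nw + 1; end  % update after coding
end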
Joint Bi-level Expert Group: International Standard ISO 11544, 1993.
Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/
192 / 231
Statistical dependence
Random variables X, Y are dependent iff ∃x, y:
P (X = x ∧ Y = y) ≠ P (X = x) · P (Y = y).

⇒ ∃x, y :  P (X = x | Y = y) ≠ P (X = x)  ∨  P (Y = y | X = x) ≠ P (Y = y)

⇒ H(X|Y) < H(X)  ∨  H(Y|X) < H(Y)
Application
Where x is the value of the next symbol to be transmitted and y is the
vector of all symbols transmitted so far, accurate knowledge of the
conditional probability P (X = x | Y = y) will allow a transmitter to
remove all redundancy.
An application example of this approach is JBIG, but there y is limited to
10 past single-bit pixels and P (X = x | Y = y) is only an estimate.
193 / 231
Practical limits of measuring conditional probabilities
If we recorded the conditional probability of each 24-bit colour pixel
value given just the 10 previous pixels, the (2^24)^10 registers needed
for storing each probability will exceed the estimated number of
particles in this universe. (Neither will we encounter enough pixels to
record statistically significant occurrences in all (2^24)^10 contexts.)
This example is far from excessive. It is easy to show that in colour
images, pixel values show statistically significant dependence across
colour channels, and across locations more than eight pixels apart.
A simpler approximation of dependence is needed: correlation.
194 / 231
Correlation
195 / 231
Correlation of neighbour pixels
[Scatter plots: values of neighbour pixels at distances 1, 2, 4, and 8 (pixel values 0–256 on both axes)]
196 / 231
Covariance and correlation
197 / 231
Covariance Matrix
198 / 231
Decorrelation by coordinate transform
[Scatter plots: neighbour-pixel value pairs (axes 0–256) and decorrelated neighbour-pixel value pairs (axes −64 to 320)]
Probability distribution and entropy
Proof: The first equation follows from the linearity of the expected-value
operator E(·), as does E(A · X⃗ · B) = A · E(X⃗) · B for matrices A, B.
With that, we can transform

Cov(Y⃗) = E[ (Y⃗ − E(Y⃗)) · (Y⃗ − E(Y⃗))ᵀ ]
        = E[ (AX⃗ − A·E(X⃗)) · (AX⃗ − A·E(X⃗))ᵀ ]
        = E[ A · (X⃗ − E(X⃗)) · (X⃗ − E(X⃗))ᵀ · Aᵀ ]
        = A · E[ (X⃗ − E(X⃗)) · (X⃗ − E(X⃗))ᵀ ] · Aᵀ
        = A · Cov(X⃗) · Aᵀ
200 / 231
Quick review: eigenvectors and eigenvalues
Ax = λx.
Spectral decomposition
Any real, symmetric matrix A = AT ∈ Rn×n can be diagonalized into the
form
A = U ΛU T ,
where Λ = diag(λ1 , λ2 , . . . , λn ) is the diagonal matrix of the ordered
eigenvalues of A (with λ1 ≥ λ2 ≥ · · · ≥ λn ), and the columns of U are
the n corresponding orthonormal eigenvectors of A.
201 / 231
Karhunen-Loève transform (KLT)
We are given a random vector variable X⃗ ∈ Rⁿ. The correlation of the
elements of X⃗ is described by the covariance matrix Cov(X⃗).
How can we find a transform matrix A that decorrelates X⃗, i.e. that
turns Cov(AX⃗) = A · Cov(X⃗) · Aᵀ into a diagonal matrix? A would
provide us the transformed representation Y⃗ = AX⃗ of our random
vector, in which all elements are mutually uncorrelated.
Note that Cov(X⃗) is symmetric. It therefore has n real eigenvalues
λ1 ≥ λ2 ≥ · · · ≥ λn and a set of associated mutually orthogonal
eigenvectors b1 , b2 , . . . , bn of length 1 with

Cov(X⃗) · bi = λi · bi .

We convert this set of equations into matrix notation using the matrix
B = (b1 , b2 , . . . , bn ) that has these eigenvectors as columns and the
diagonal matrix D = diag(λ1 , λ2 , . . . , λn ) that consists of the
corresponding eigenvalues:

Cov(X⃗) · B = B · D
202 / 231
B is orthonormal, that is B·Bᵀ = I.
Multiplying the above from the right with Bᵀ leads to the spectral
decomposition

Cov(X⃗) = B·D·Bᵀ

of the covariance matrix. Similarly multiplying instead from the left with
Bᵀ leads to

Bᵀ · Cov(X⃗) · B = D

and therefore shows, with

Cov(Bᵀ X⃗) = D,

that the transform A = Bᵀ decorrelates X⃗.
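A minimal MATLAB sketch of this construction on sample data (assuming an n × m matrix X with one sample vector per column):

C = cov(X');                       % n x n sample covariance matrix
[B, D] = eig(C);                   % orthonormal eigenvector columns
[~, i] = sort(diag(D), 'descend');
A = B(:, i)';                      % KLT matrix A = B', ordered by eigenvalue
Y = A * (X - mean(X, 2));          % mutually uncorrelated components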
203 / 231
Karhunen-Loève transform example I
The three rightmost images show each of these colour planes separately
as a black/white image.

We want to apply the KLT on a set of such colour vectors ∈ Rⁿ.
Therefore, we reformat the image I into an n × m matrix of the form

    ( r1,1 r1,2 r1,3 · · · rr,r )
S = ( g1,1 g1,2 g1,3 · · · gr,r )
    ( b1,1 b1,2 b1,3 · · · br,r )

and estimate the covariance matrix

Cc,d = (1/(m−1)) · Σ_{i=1}^{m} (Sc,i − S̄c ) · (Sd,i − S̄d )

    ( 0.0328 0.0256 0.0160 )
C = ( 0.0256 0.0216 0.0140 )
    ( 0.0160 0.0140 0.0109 )

[“m − 1” because S̄c only estimates the mean]
204 / 231
Karhunen-Loève transform example I
The resulting covariance matrix C has three eigenvalues 0.0622, 0.0025, and 0.0006:

( 0.0328 0.0256 0.0160 )   (  0.7167 )            (  0.7167 )
( 0.0256 0.0216 0.0140 ) · (  0.5833 ) = 0.0622 · (  0.5833 )
( 0.0160 0.0140 0.0109 )   (  0.3822 )            (  0.3822 )

( 0.0328 0.0256 0.0160 )   ( −0.5509 )            ( −0.5509 )
( 0.0256 0.0216 0.0140 ) · (  0.1373 ) = 0.0025 · (  0.1373 )
( 0.0160 0.0140 0.0109 )   (  0.8232 )            (  0.8232 )

( 0.0328 0.0256 0.0160 )   ( −0.4277 )            ( −0.4277 )
( 0.0256 0.0216 0.0140 ) · (  0.8005 ) = 0.0006 · (  0.8005 )
( 0.0160 0.0140 0.0109 )   ( −0.4198 )            ( −0.4198 )
205 / 231
Karhunen-Loève transform example I
206 / 231
Spatial correlation
207 / 231
Karhunen-Loève transform example II
Matrix columns of S filled with samples of 1/f filtered noise
...
Covariance matrix C; matrix U with eigenvector columns
208 / 231
Matrix U′ with normalised KLT eigenvector columns; matrix with Discrete Cosine Transform base vector columns
209 / 231
Discrete cosine transform (DCT)
The 1-dimensional DCT of a block s(0), . . . , s(N−1):

S(u) = (C(u)/√(N/2)) · Σ_{x=0}^{N−1} s(x) · cos( (2x+1)uπ / (2N) )

with

C(u) = { 1/√2,  u = 0 ;  1,  u > 0 }

is an orthonormal transform:

Σ_{x=0}^{N−1} (C(u)/√(N/2)) · cos( (2x+1)uπ / (2N) ) · (C(u′)/√(N/2)) · cos( (2x+1)u′π / (2N) )  =  { 1, u = u′ ;  0, u ≠ u′ }
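A minimal MATLAB sketch that builds this transform as an N × N matrix and checks its orthonormality (s is an assumed input column vector):

N = 8;
[u, x] = ndgrid(0:N-1, 0:N-1);
A = sqrt(2/N) * cos((2*x + 1) .* u * pi / (2*N));
A(1,:) = A(1,:) / sqrt(2);        % C(0) = 1/sqrt(2)
norm(A*A' - eye(N))               % ~ 1e-15, i.e. A is orthonormal
S = A * s(:);                     % forward DCT; inverse: s = A' * S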
210 / 231
DCT base vectors for N = 8:
[Plots of the eight base vectors, u = 0, . . . , 7, over x]
211 / 231
Discrete cosine transform – 2D
The 2-dimensional variant of the DCT applies the 1-D transform on both
rows and columns of an image:
S(u, v) = (C(u)/√(N/2)) · (C(v)/√(N/2)) · Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} s(x, y) · cos( (2x+1)uπ / (2N) ) · cos( (2y+1)vπ / (2N) )

s(x, y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} (C(u)/√(N/2)) · (C(v)/√(N/2)) · S(u, v) · cos( (2x+1)uπ / (2N) ) · cos( (2y+1)vπ / (2N) )
A range of fast algorithms have been found for calculating 1-D and 2-D
DCTs (e.g., Ligtenberg/Vetterli).
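Separability also gives a simple matrix formulation: with the orthonormal DCT matrix A from the sketch above, an N × N block blk transforms as

S   = A * blk * A';    % forward 2-D DCT
blk = A' * S * A;      % inverse 2-D DCT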
212 / 231
Whole-image DCT
213 / 231
Whole-image DCT, 80% coefficient cutoff
214 / 231
Whole-image DCT, 90% coefficient cutoff
215 / 231
Whole-image DCT, 95% coefficient cutoff
216 / 231
Whole-image DCT, 99% coefficient cutoff
217 / 231
Base vectors of 8×8 DCT
[Figure: the 8 × 8 array of 2-D DCT base images, indexed by u, v = 0, . . . , 7]
218 / 231
RGB video colour coordinates
219 / 231
YUV video colour coordinates
221 / 231
Y versus UV sensitivity example
224 / 231
Example of a linear quantizer (resolution R, peak value V ):
y = max( −V, min( V, R · ⌊x/R + 1/2⌋ ) )
Adding a noise signal that is uniformly distributed on [0, 1] instead of adding 1/2 helps to spread the
frequency spectrum of the quantization noise more evenly. This is known as dithering.
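A MATLAB sketch of both variants (hypothetical resolution and peak value):

R = 0.25; V = 1;
q  = @(x) max(-V, min(V, R * floor(x/R + 1/2)));           % plain
qd = @(x) max(-V, min(V, R * floor(x/R + rand(size(x))))); % dithered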
225 / 231
Logarithmic quantization
European digital telephone networks use A-law quantization (A = 87.6), North American ones use
µ-law (µ=255), both with 8-bit resolution and 8 kHz sampling frequency (64 kbit/s). [ITU-T
G.711]
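A minimal MATLAB sketch of µ-law companding (the continuous curve, assuming input x normalized to [−1, 1]; G.711 itself uses a piecewise-linear 8-bit approximation):

mu = 255;                                          % North American value
y  = sign(x) .* log(1 + mu*abs(x)) / log(1 + mu);  % compress
xr = sign(y) .* ((1 + mu).^abs(y) - 1) / mu;       % expand back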
226 / 231
[Plot: µ-law (US) and A-law (Europe) compression characteristics over the input signal voltage range −V . . . V]
227 / 231
Joint Photographic Experts Group – JPEG
Working group “ISO/TC97/SC2/WG8 (Coded representation of picture and audio information)”
was set up in 1982 by the International Organization for Standardization.
Goals:
I continuous tone gray-scale and colour images
I recognizable images at 0.083 bit/pixel
I useful images at 0.25 bit/pixel
I excellent image quality at 0.75 bit/pixel
I indistinguishable images at 2.25 bit/pixel
I feasibility of 64 kbit/s (ISDN fax) compression with late 1980s
hardware (16 MHz Intel 80386).
I workload equal for compression and decompression
The JPEG standard (ISO 10918) was finally published in 1994.
William B. Pennebaker, Joan L. Mitchell: JPEG still image compression standard. Van Nostrand
Reinhold, New York, ISBN 0442012721, 1993.
Gregory K. Wallace: The JPEG Still Picture Compression Standard. Communications of the ACM
34(4)30–44, April 1991, http://doi.acm.org/10.1145/103085.103089
228 / 231
Summary of the baseline JPEG algorithm
The most widely used lossy method from the JPEG standard:
I Colour component transform: 8-bit RGB → 8-bit YCrCb
I Reduce resolution of Cr and Cb by a factor 2
I For the rest of the algorithm, process Y , Cr and Cb components
independently (like separate gray-scale images)
The above steps are obviously skipped where the input is a gray-scale image.
I Split each image component into 8 × 8 pixel blocks
Partial blocks at the right/bottom margin may have to be padded by repeating the last
column/row until a multiple of eight is reached. The decoder will remove these padding
pixels.
I Apply the 8 × 8 forward DCT on each block
On unsigned 8-bit input, the resulting DCT coefficients will be signed 11-bit integers.
229 / 231
I Quantization: divide each DCT coefficient by the corresponding
value from an 8 × 8 table, then round to the nearest integer
(see the sketch at the end of this slide):
The two standard quantization-matrix examples for luminance and chrominance are:
Luminance:                          Chrominance:
16  11  10  16  24  40  51  61      17  18  24  47  99  99  99  99
12  12  14  19  26  58  60  55      18  21  26  66  99  99  99  99
14  13  16  24  40  57  69  56      24  26  56  99  99  99  99  99
14  17  22  29  51  87  80  62      47  66  99  99  99  99  99  99
18  22  37  56  68 109 103  77      99  99  99  99  99  99  99  99
24  35  55  64  81 104 113  92      99  99  99  99  99  99  99  99
49  64  78  87 103 121 120 101      99  99  99  99  99  99  99  99
72  92  95  98 112 100 103  99      99  99  99  99  99  99  99  99
I apply DPCM coding to quantized DC coefficients from DCT
I read remaining quantized values from DCT in zigzag pattern
I locate sequences of zero coefficients (run-length coding)
I apply Huffman coding on zero run-lengths and magnitude of AC
values
I add standard header with compression parameters
http://www.jpeg.org/
Example implementation: http://www.ijg.org/
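A two-line MATLAB sketch of the quantization step referenced above (S is an 8 × 8 DCT coefficient block, Q one of the tables above):

Sq = round(S ./ Q);    % encoder: quantize coefficients
Sr = Sq .* Q;          % decoder: dequantize (with quantization error)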
230 / 231
Outlook
Further topics that we have not covered in this brief introductory tour
through DSP, but for the understanding of which you should now have a
good theoretical foundation:
I multirate systems
I effects of rounding errors
I adaptive filters
I DSP hardware architectures
I sound effects
If you find any typo or mistake in these lecture notes, please email [email protected].
231 / 231
Some final thoughts about redundancy . . .
Aoccdrnig to rsceearh at Cmabrigde Uinervtisy, it deosn’t
mttaer in waht oredr the ltteers in a wrod are, the olny
iprmoetnt tihng is taht the frist and lsat ltteer be at
the rghit pclae. The rset can be a total mses and you can
sitll raed it wouthit porbelm. Tihs is bcuseae the huamn
mnid deos not raed ervey lteter by istlef, but the wrod as
a wlohe.
. . . and perception
Count how many Fs there are in this text:
FINISHED FILES ARE THE RE-
SULT OF YEARS OF SCIENTIF-
IC STUDY COMBINED WITH THE
EXPERIENCE OF YEARS