Digital Signal Processing
Markus Kuhn
Computer Laboratory
http://www.cl.cam.ac.uk/teaching/0809/DSP/
Signals
→ flow of information
Electronics (unlike optics) can only deal easily with time-dependent signals, therefore spatial
signals, such as images, are typically first converted into a time signal with a scanning process
(TV, fax, etc.).
Signal processing
Signals may have to be transformed in order to
→ amplify or filter out embedded information
→ detect patterns
→ prepare the signal to survive a transmission channel
→ prevent interference with other signals sharing a medium
→ undo distortions contributed by a transmission channel
→ compensate for sensor deficiencies
→ find information encoded in a different domain
To do so, we also need
→ methods to measure, characterise, model and simulate transmission channels
→ mathematical tools that split common channels and transformations into easily manipulated building blocks
Analog electronics
• analog circuits cause little additional interference

Example (RC/LC network):

  (Uin − Uout)/R = (1/L) ∫_{−∞}^{t} Uout dτ + C · dUout/dt
Digital signal processing
Analog/digital and digital/analog converter, CPU, DSP, ASIC, FPGA.
Advantages:
→ noise is easy to control after initial quantization
→ highly linear (within limited dynamic range)
→ complex algorithms fit into a single chip
→ flexibility, parameters can easily be varied in software
→ digital processing is insensitive to component tolerances, aging,
environmental conditions, electromagnetic interference
But:
→ discrete-time processing artifacts (aliasing)
→ can require significantly more power (battery, cooling)
→ digital clock and switching cause interference
Typical DSP applications:
→ communication systems: modulation/demodulation, channel equalization, echo cancellation
→ consumer electronics: perceptual coding of audio and video on DVDs, speech synthesis, speech recognition
→ music: synthetic instruments, audio effects, noise reduction
→ medical diagnostics: magnetic-resonance and ultrasonic imaging, computer tomography, ECG, EEG, MEG, AED, audiology
→ geophysics: seismology, oil exploration
→ experimental physics: sensor-data evaluation
→ aviation: radar, radio navigation
→ security: steganography, digital watermarking, biometric identification, surveillance systems, signals intelligence, electronic warfare
→ engineering: control systems, feature extraction for pattern recognition
Syllabus
Signals and systems. Discrete sequences and systems, their types and properties. Linear time-invariant systems, convolution. Harmonic phasors are the eigenfunctions of linear time-invariant systems. Review of complex arithmetic. Some examples from electronics, optics and acoustics.
MATLAB. Use of MATLAB on PWF machines to perform numerical experiments and visualise the results in homework exercises.
Fourier transform. Harmonic phasors as orthogonal base functions. Forms of the Fourier transform, convolution theorem, Dirac's delta function, impulse combs in the time and frequency domain.
Discrete sequences and spectra. Periodic sampling of continuous signals, periodic signals, aliasing, sampling and reconstruction of low-pass and band-pass signals, spectral inversion.
Discrete Fourier transform. Continuous versus discrete Fourier transform, symmetry, linearity, review of the FFT, real-valued FFT.
Spectral estimation. Leakage and scalloping phenomena, windowing, zero padding.
Objectives
By the end of the course, you should be able to
→ apply basic properties of time-invariant linear systems
→ understand sampling, aliasing, convolution, filtering, the pitfalls of
spectral estimation
→ explain the above in time and frequency domain representations
→ use filter-design software
→ visualise and discuss digital filters in the z-domain
→ use the FFT for convolution, deconvolution, filtering
→ implement, apply and evaluate simple DSP applications in
MATLAB
→ apply transforms that reduce correlation between several signal
sources
→ understand and explain limits in human perception that are exploited by lossy compression techniques
→ provide a good overview of the principles and characteristics of several widely-used compression techniques and standards for audio-visual signals
Textbooks
→ R.G. Lyons: Understanding digital signal processing. Prentice-Hall, 2004. (£45)
→ A.V. Oppenheim, R.W. Schafer: Discrete-time signal processing. 2nd ed., Prentice-Hall, 1999. (£47)
→ J. Stein: Digital signal processing – a computer science perspective. Wiley, 2000. (£74)
→ S.W. Smith: Digital signal processing – a practical guide for engineers and scientists. Newnes, 2003. (£40)
→ K. Steiglitz: A digital signal processing primer – with applications to digital audio and computer music. Addison-Wesley, 1996. (£40)
Sequences and systems
A discrete sequence {x_n}_{n=−∞}^{∞} is a sequence of numbers
  . . . , x_{−2}, x_{−1}, x_0, x_1, x_2, . . .
If the sequence was obtained by sampling a continuous signal x(t) at sampling rate f_s = 1/t_s, then
  x_n = x(t_s · n) = x(n/f_s).
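A minimal MATLAB sketch of this sampling relation (the signal, f_s and the number of samples are arbitrary choices for illustration):

  % sample x(t) = sin(2*pi*1000*t) at fs = 8 kHz
  fs = 8000;                    % sampling frequency (Hz)
  ts = 1/fs;                    % sampling interval
  n  = 0:31;                    % sample indices
  xn = sin(2*pi*1000*ts*n);     % x_n = x(ts*n)
  stem(n, xn);                  % plot the discrete sequence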
Properties of sequences
A sequence {x_n} is
  absolutely summable  ⇔  Σ_{n=−∞}^{∞} |x_n| < ∞
  square summable      ⇔  Σ_{n=−∞}^{∞} |x_n|² < ∞
  periodic             ⇔  ∃k > 0: ∀n ∈ Z: x_n = x_{n+k}
A square-summable sequence is also called an energy signal, and Σ_{n=−∞}^{∞} |x_n|² its energy. A sequence for which the average power
  lim_{k→∞} 1/(1 + 2k) · Σ_{n=−k}^{k} |x_n|²
exists is also called a power signal.
Special sequences
Unit-step sequence:
  u_n = { 0, n < 0;  1, n ≥ 0 }
Impulse sequence:
  δ_n = { 1, n = 0;  0, n ≠ 0 }
      = u_n − u_{n−1}
y_n = f(x_n, x_{n−1}, x_{n−2}, . . .)   (causal system)
y_n = f(x_n)                            (memory-less system)
y_n = x_{n−d}                           (delay system)
T is a linear system if for any pair of sequences {x_n} and {x′_n}
  T(a · {x_n} + b · {x′_n}) = a · T{x_n} + b · T{x′_n}.
Examples:
The accumulator system

  y_n = Σ_{k=−∞}^{n} x_k

Block diagram representation of sequence operations:

  Addition:                   x_n, x′_n → x_n + x′_n
  Multiplication by constant: x_n → a · x_n
  Delay (z^{−1}):             x_n → x_{n−1}

Constant-coefficient difference equations (shown as block diagrams on the slides):

  y_n = b_0 · x_n − Σ_{k=1}^{N} a_k · y_{n−k}

or

  y_n = Σ_{m=0}^{M} b_m · x_{n−m}

or

  Σ_{k=0}^{N} a_k · y_{n−k} = Σ_{m=0}^{M} b_m · x_{n−m}

The a_k and b_m are constant coefficients.
The MATLAB function filter is an efficient implementation of the last variant.
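A hedged usage sketch of filter (coefficients chosen arbitrarily for illustration; a(1) corresponds to a_0):

  b = [0.5 0.5];                % feed-forward coefficients b_0, b_1
  a = [1 -0.9];                 % feedback coefficients a_0, a_1 (a_0 = 1)
  x = [1 zeros(1, 19)];         % unit-impulse input sequence
  y = filter(b, a, x);          % y_n = 0.5*x_n + 0.5*x_{n-1} + 0.9*y_{n-1}
  stem(0:19, y);                % impulse response of the system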
Convolution
All linear time-invariant (LTI) systems can be represented in the form
  y_n = Σ_{k=−∞}^{∞} a_k · x_{n−k}
where {a_k} is a suitably chosen sequence. This operation over sequences is called convolution:
  {p_n} ∗ {q_n} = {r_n}  ⇐⇒  ∀n ∈ Z: r_n = Σ_{k=−∞}^{∞} p_k · q_{n−k}.
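For finite sequences, MATLAB's conv implements exactly this sum; a small illustration with arbitrary example sequences:

  p = [1 2 3];                  % finite sequence {p_n}
  q = [1 1 1 1];                % finite sequence {q_n}
  r = conv(p, q)                % r_n = sum_k p_k * q_{n-k}  ->  [1 3 6 6 5 3]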
Convolution examples
[Figure: example signals A–F and some of their pairwise convolutions, e.g. A∗B and A∗C.]
Properties of convolution
For arbitrary sequences {p n }, {q n }, { r n } and scalars a, b:
→ Convolution is associative
( { p n } ∗ {q n }) ∗ { r n } = { p n } ∗ ({q n } ∗ { r n } )
→ Convolution is commutative
{pn } ∗ {qn } = {qn } ∗ {pn }
→ Convolution is linear
  {p_n} ∗ {a · q_n + b · r_n} = a · ({p_n} ∗ {q_n}) + b · ({p_n} ∗ {r_n})
Any passive network (R, L, C) convolves its input voltage Uin with an impulse response function h, leading to Uout = Uin ∗ h, that is

  Uout(t) = ∫_{−∞}^{∞} Uin(t − τ) · h(τ) · dτ

In this example (RC low-pass filter):

  (Uin − Uout)/R = C · dUout/dt,    h(t) = { (1/RC) · e^{−t/RC}, t ≥ 0;  0, t < 0 }

[Figure: step response Uout(t) and frequency response of the RC low-pass filter with corner frequency 1/RC.]
Adding two sine waves of the same frequency ω yields another sine wave of frequency ω. Its amplitude A and phase ϕ follow from the phasor diagram:

  tan ϕ = (A_1 sin ϕ_1 + A_2 sin ϕ_2) / (A_1 cos ϕ_1 + A_2 cos ϕ_2)

Sine waves of any phase can be formed from sin and cos alone:

  A · sin(ωt + ϕ) = a · sin(ωt) + b · cos(ωt)

with a = A · cos(ϕ), b = A · sin(ϕ) and A = √(a² + b²), tan ϕ = b/a.
Note: Convolution of a discrete sequence { x n } with another sequence
{ y n } is nothing but adding together scaled and delayed copies of { x n } .
(Think of { y n } decomposed into a sum of impulses.)
If { x n } is a sampled sine wave of frequency f , so is { x n } ∗ { y n } !
=⇒ Sine-wave sequences form a family of discrete sequences
that is closed under convolution with arbitrary sequences.
The same applies for continuous sine waves and convolution.
Sine waves are orthogonal to each other if their frequencies differ, or if their phases differ by an odd multiple of π/2:

  ∫ sin(ω_1 t + ϕ_1) · sin(ω_2 t + ϕ_2) dt = 0  ⇐⇒  ω_1 ≠ ω_2 ∨ ϕ_1 − ϕ_2 = (2k + 1)π/2  (k ∈ Z)

They can be used to form an orthogonal function basis for a transform.
The term “orthogonal” is used here in the context of an (infinitely dimensional) vector space, where the “vectors” are functions of the form f: R → R (or f: R → C) and the scalar product is defined as f · g = ∫_{−∞}^{∞} f(t) · g(t) dt.
  y_n = Σ_{k=−∞}^{∞} x_{n−k} · h_k = Σ_{k=−∞}^{∞} z^{n−k} · h_k = z^n · Σ_{k=−∞}^{∞} z^{−k} · h_k = z^n · H(z)
Complex phasors
Amplitude and phase are two distinct characteristics of a sine function
that are inconvenient to keep separate notationally.
Complex functions (and discrete sequences) of the form

  A · e^{j(ωt + ϕ)} = A · [cos(ωt + ϕ) + j · sin(ωt + ϕ)]

(where j² = −1) are able to represent both amplitude and phase in one single algebraic object.
Thanks to complex multiplication, we can also incorporate in one single factor both a multiplicative change of amplitude and an additive change of phase of such a function. This makes discrete sequences of the form x_n = e^{jωn} eigensequences with respect to an LTI system T, because for each ω there is a complex number (eigenvalue) H(ω) such that T{x_n} = H(ω) · {x_n}.
Time scaling:
  x(at) •−◦ (1/|a|) · X(f/a)
Frequency scaling:
  (1/|a|) · x(t/a) •−◦ X(af)
Time shifting:
  x(t − ∆t) •−◦ X(f) · e^{−2πjf∆t}
Frequency shifting:
  x(t) · e^{2πj∆ft} •−◦ X(f − ∆f)
Parseval's theorem (total power):
  ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df
Convolution theorem
Continuous form:
  F{(f ∗ g)(t)} = F{f(t)} · F{g(t)}
  F{f(t) · g(t)} = F{f(t)} ∗ F{g(t)}
Discrete form:
  {x_n} ∗ {y_n} = {z_n}  ⇐⇒  X(e^{jω}) · Y(e^{jω}) = Z(e^{jω})

As any real-valued signal x(t) can be represented as a combination of sine and cosine functions, the spectrum of any real-valued signal will show the symmetry X(e^{jω}) = [X(e^{−jω})]*, where ∗ denotes the complex conjugate (i.e., negated imaginary part).
Fourier transform symmetries
We call a function x(t)
  odd  if x(−t) = −x(t)
  even if x(−t) = x(t)
and ·∗ is the complex conjugate, such that (a + jb)∗ = (a − jb). Then
  x(t) is real                ⇔  X(−f) = [X(f)]∗
  x(t) is imaginary           ⇔  X(−f) = −[X(f)]∗
  x(t) is even                ⇔  X(f) is even
  x(t) is odd                 ⇔  X(f) is odd
  x(t) is real and even       ⇔  X(f) is real and even
  x(t) is real and odd        ⇔  X(f) is imaginary and odd
  x(t) is imaginary and even  ⇔  X(f) is imaginary and even
  x(t) is imaginary and odd   ⇔  X(f) is real and odd
Sampling a continuous signal x(t) at intervals t_s = 1/f_s gives the sequence
  x_n = x(t_s · n) = x(n/f_s)
and can be modelled as multiplication with the Dirac comb
  s(t) = t_s · Σ_{n=−∞}^{∞} δ(t − t_s · n) = Σ_{n=−∞}^{∞} e^{2πjnt/t_s}.
[Figures: multiplying x(t) with the Dirac comb s(t) in the time domain corresponds to convolving X(f) with S(f) in the frequency domain, which replicates the spectrum X̂(f) at all integer multiples of f_s; an anti-aliasing filter before sampling and a reconstruction filter after sampling each limit the double-sided bandwidth.]
Anti-aliasing and reconstruction filters both suppress frequencies outside |f| < f_s/2.
Reconstruction of a continuous band-limited waveform
The ideal anti-aliasing filter for eliminating any frequency content above f_s/2 before sampling with a frequency of f_s has the Fourier transform

  H(f) = { 1 if |f| < f_s/2;  0 if |f| > f_s/2 } = rect(t_s f).

This leads, after an inverse Fourier transform, to the impulse response

  h(t) = f_s · sin(πtf_s)/(πtf_s) = (1/t_s) · sinc(t/t_s).

The original band-limited signal can be reconstructed by convolving this with the sampled signal x̂(t), which eliminates the periodicity of the frequency domain introduced by the sampling process:

  x(t) = h(t) ∗ x̂(t)

Note that sampling h(t) gives the impulse function: h(t) · s(t) = δ(t).
[Figure: interpolation of a sampled signal — the sampled values, the scaled/shifted sin(x)/x pulses, and the resulting interpolation.]
Reconstruction filters
The mathematically ideal form of a reconstruction filter for suppressing aliasing frequencies interpolates the sampled signal x_n = x(t_s · n) back into the continuous waveform

  x(t) = Σ_{n=−∞}^{∞} x_n · sin(π(t − t_s · n)f_s) / (π(t − t_s · n)f_s).
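A minimal MATLAB sketch of this interpolation formula (example signal and time grid chosen for illustration):

  fs = 8;  ts = 1/fs;
  n  = 0:15;                          % sample indices
  xn = cos(2*pi*1.5*ts*n);            % band-limited example signal (1.5 Hz < fs/2)
  t  = 0:0.001:n(end)*ts;             % dense time grid for reconstruction
  xt = zeros(size(t));
  for k = 1:length(n)                 % sum of scaled, shifted sin(x)/x pulses
    arg = pi*(t - ts*n(k))*fs;
    s = ones(size(arg));
    s(arg ~= 0) = sin(arg(arg ~= 0)) ./ arg(arg ~= 0);
    xt = xt + xn(k)*s;
  end
  plot(t, xt, n*ts, xn, 'o');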
For a band-pass signal occupying the band between n·f_s/2 and (n + 1)·f_s/2, the corresponding reconstruction filter has the impulse response

  h(t) = f_s · (sin(πtf_s/2))/(πtf_s/2) · cos(2πtf_s(2n + 1)/4)
       = (n + 1)f_s · (sin(πt(n + 1)f_s))/(πt(n + 1)f_s) − n·f_s · (sin(πtnf_s))/(πtnf_s)

[Figure: impulse response and frequency response of this band-pass reconstruction filter for n = 2.]
Exercise 9 Reconstructing a sampled baseband signal:
Why should the first filter have a lower cut-off frequency than the
second?
[Figures: a signal with period 1/f_p can be written as the convolution of one period with a Dirac comb of spacing 1/f_p; its spectrum Ẋ(f) is therefore the product of the spectrum of one period with a Dirac comb of spacing f_p, i.e. a sampled (line) spectrum. Conversely, multiplying a signal with a Dirac comb of spacing 1/f_s (sampling) convolves its spectrum with a Dirac comb of spacing f_s, making the spectrum periodic.]
Continuous vs discrete Fourier transform
• Sampling a continuous signal makes its spectrum periodic
• A periodic signal has a sampled spectrum
[Figure: a signal that is both sampled (interval 1/f_s) and periodic (period n/f_s) has a spectrum that is both periodic (period f_s) and sampled (line spacing f_s/n).]
A signal of n samples is therefore described completely by n spectral values. With the matrix F_n, whose elements are (F_n)_{ik} = e^{−2πjik/n}, the discrete Fourier transform and its inverse can be written as matrix–vector products:

  F_n · (x_0, x_1, . . . , x_{n−1})^T = (X_0, X_1, . . . , X_{n−1})^T
  (1/n) · F_n^* · (X_0, X_1, . . . , X_{n−1})^T = (x_0, x_1, . . . , x_{n−1})^T
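A MATLAB sketch of these matrix–vector forms (illustrative only; in practice one uses fft):

  n = 8;
  [i, k] = meshgrid(0:n-1);           % index grids
  F = exp(-2j*pi*i.*k/n);             % (F_n)_{ik} = e^{-2*pi*j*i*k/n}
  x = randn(n, 1);
  X = F * x;                          % forward DFT as a matrix product
  max(abs(X - fft(x)))                % agrees with fft() up to rounding
  xr = (1/n) * F' * X;                % inverse DFT via the conjugate matrix
  max(abs(xr - x))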
Discrete Fourier Transform visualized
[Figures: the 8-point DFT written out as the multiplication of the input vector (x_0, . . . , x_7) with an 8×8 matrix of sampled complex phasors, giving (X_0, . . . , X_7); the inverse DFT multiplies (X_0, . . . , X_7) with 1/8 times the complex-conjugate matrix to recover (x_0, . . . , x_7).]
Fast Fourier Transform (FFT)

  F_n{x_i}_{i=0}^{n−1}(k) = Σ_{i=0}^{n−1} x_i · e^{−2πjik/n}
    = Σ_{i=0}^{n/2−1} x_{2i} · e^{−2πjik/(n/2)} + e^{−2πjk/n} · Σ_{i=0}^{n/2−1} x_{2i+1} · e^{−2πjik/(n/2)}
    = { F_{n/2}{x_{2i}}_{i=0}^{n/2−1}(k) + e^{−2πjk/n} · F_{n/2}{x_{2i+1}}_{i=0}^{n/2−1}(k),               if k < n/2
        F_{n/2}{x_{2i}}_{i=0}^{n/2−1}(k − n/2) + e^{−2πjk/n} · F_{n/2}{x_{2i+1}}_{i=0}^{n/2−1}(k − n/2),   if k ≥ n/2

The DFT over n-element vectors can be reduced to two DFTs over n/2-element vectors plus n multiplications and n additions, leading to log₂ n rounds and n log₂ n additions and multiplications overall, compared to n² for the equivalent matrix multiplication.
A high-performance FFT implementation in C with many processor-specific optimizations and support for non-power-of-2 sizes is available at http://www.fftw.org/.
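A minimal recursive radix-2 MATLAB sketch of this decomposition (the function name myfft is hypothetical; the built-in fft is of course far more efficient):

  function X = myfft(x)
  % Decimation-in-time FFT sketch; length of x must be a power of two.
    x = x(:).';                          % work with a row vector
    n = length(x);
    if n == 1
      X = x;
    else
      E = myfft(x(1:2:n));               % DFT of even-indexed samples
      O = myfft(x(2:2:n));               % DFT of odd-indexed samples
      w = exp(-2j*pi*(0:n/2-1)/n);       % twiddle factors e^{-2*pi*j*k/n}
      X = [E + w.*O, E - w.*O];          % the two half-ranges k < n/2 and k >= n/2
    end
  end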
These two symmetries, combined with the linearity of the DFT, allow us to calculate two real-valued n-point DFTs

  {X′_i}_{i=0}^{n−1} = F_n{x′_i}_{i=0}^{n−1},    {X′′_i}_{i=0}^{n−1} = F_n{x′′_i}_{i=0}^{n−1}

simultaneously in a single complex-valued n-point DFT, by composing its input as

  x_i = x′_i + j · x′′_i

and decomposing its output as

  X′_i = (1/2)·(X_i + X*_{n−i}),    X′′_i = (1/(2j))·(X_i − X*_{n−i})

To optimize the calculation of a single real-valued FFT, use this trick to calculate the two half-size real-value FFTs that occur in the first round.
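A MATLAB sketch of this trick (sequences chosen at random for illustration):

  n  = 16;
  x1 = randn(1, n);  x2 = randn(1, n);   % two real-valued sequences
  X  = fft(x1 + 1j*x2);                  % one complex n-point DFT
  Xs = conj(X([1, n:-1:2]));             % X*_{n-i} (index 0 maps onto itself)
  X1 = (X + Xs)/2;                       % DFT of x1
  X2 = (X - Xs)/(2j);                    % DFT of x2
  max(abs(X1 - fft(x1))), max(abs(X2 - fft(x2)))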
Fast complex multiplication
Calculating the product of two complex numbers as
  (a + jb) · (c + jd) = (ac − bd) + j·(ad + bc)
involves four (real-valued) multiplications and two additions. The alternative calculation
  α = a·(c + d),  β = d·(a + b),  γ = c·(b − a);    (a + jb)·(c + jd) = (α − β) + j·(α + γ)
provides the same result with three multiplications and five additions.
The latter may perform faster on CPUs where multiplications take three or more times longer than additions.
This trick is most helpful on simpler microcontrollers. Specialized signal-processing CPUs (DSPs)
feature 1-clock-cycle multipliers. High-end desktop processors use pipelined multipliers that stall
where operations depend on each other.
FFT-based convolution
Calculating the convolution of two finite sequences {x_i}_{i=0}^{m−1} and {y_i}_{i=0}^{n−1} of lengths m and n via

  z_i = Σ_{j=max{0, i−(n−1)}}^{min{m−1, i}} x_j · y_{i−j},    0 ≤ i < m + n − 1

takes mn multiplications.
Can we apply the FFT and the convolution theorem to calculate the convolution faster, in just O(m log m + n log n) multiplications?

  {z_i} = F^{−1}(F{x_i} · F{y_i})

[Figure: blocks A′ and B′ are zero-padded and convolved via F^{−1}[F(A′)·F(B′)].]
The regions originally added as zero padding are, after convolution, aligned to overlap with the unpadded ends of their respective neighbour blocks. The overlapping parts of the blocks are then added together.
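A MATLAB sketch of FFT-based convolution with zero padding to the full output length (sequence lengths chosen arbitrarily):

  x = randn(1, 20);  y = randn(1, 8);    % lengths m = 20 and n = 8
  L = length(x) + length(y) - 1;         % output has m + n - 1 samples
  z1 = conv(x, y);                       % direct convolution
  z2 = ifft(fft(x, L) .* fft(y, L));     % zero-pad, multiply spectra, invert
  max(abs(z1 - real(z2)))                % agreement up to rounding errors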
Deconvolution
A signal u(t) was distorted by convolution with a known impulse re-
sponse h(t) (e.g., through a transmission channel or a sensor problem).
The “smeared” result s(t) was recorded.
Can we undo the damage and restore (or at least estimate) u(t)?
The convolution theorem turns the problem into one of multiplication:

  s(t) = ∫ u(t − τ) · h(τ) · dτ
  s = u ∗ h
  F{s} = F{u} · F{h}
  u = F^{−1}{F{s}/F{h}}

In practice, the recorded signal c(t) = u(t) ∗ h(t) + n(t) also contains noise n(t), and at frequencies where F{h}(f) approaches zero the division amplifies that noise enormously.
Typical workarounds:
→ Modify the Fourier transform of the impulse response, such that |F{h}(f)| > ǫ for some experimentally chosen threshold ǫ.
→ If estimates of the signal spectrum |F{s}(f)| and the noise spectrum |F{n}(f)| can be obtained, then we can apply the “Wiener filter” (“optimal filter”)

  W(f) = |F{s}(f)|² / (|F{s}(f)|² + |F{n}(f)|²)

before deconvolution:

  ũ = F^{−1}{W · F{c}/F{h}}

Exercise 11  Use MATLAB to deconvolve the blurred stars from slide 26.
The files stars-blurred.png with the blurred-stars image and stars-psf.png with the impulse response (point-spread function) are available on the course-material web page. You may find the MATLAB functions imread, double, imagesc, circshift, fft2, ifft2 of use.
Try different ways to control the noise (see above) and distortions near the margins (windowing). [The MATLAB image processing toolbox provides ready-made “professional” functions deconvwnr, deconvreg, deconvlucy, edgetaper, for such tasks. Do not use these, except perhaps to compare their outputs with the results of your own attempts.]
Spectral estimation
[Figures: a sampled sine wave of frequency 4f_s/32 and its 32-point DFT, in which all energy falls into a single frequency bin; and a sine wave of frequency 4.61f_s/32, whose DFT shows energy leaking into all bins.]
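A minimal MATLAB sketch reproducing this comparison:

  n = 32;  i = 0:n-1;
  X1 = abs(fft(sin(2*pi*4.00*i/n)));     % frequency exactly on DFT bin 4
  X2 = abs(fft(sin(2*pi*4.61*i/n)));     % frequency between bins: energy leaks
  subplot(1,2,1); stem(i, X1);
  subplot(1,2,2); stem(i, X2);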
The leakage of energy to other frequency bins not only blurs the estimated spec-
trum. The peak amplitude also changes significantly as the frequency of a tone
changes from that associated with one output bin to the next, a phenomenon
known as scalloping. In the above graphic, an input sine wave gradually changes
from the frequency of bin 15 to that of bin 16 (only positive frequencies shown).
Windowing
[Figures: a sine wave and its DFT; the same sine wave multiplied with a window function, and the resulting DFT.]
The reason for the leakage and scalloping losses is easy to visualize with
the help of the convolution theorem:
The operation of cutting a sequence of the size of the DFT input vector
out of a longer original signal (the one whose continuous Fourier spectrum
we try to estimate) is equivalent to multiplying this signal with a
rectangular function. This destroys all information and continuity outside
the “window” that is fed into the DFT.
Multiplication with a rectangular window of length T in the time domain is
equivalent to convolution with sin(πf T ) / ( π f T ) in the frequency
domain.
The subsequent interpretation of this window as a periodic sequence by
the DFT leads to sampling of this convolution result (sampling meaning
multiplication with a Dirac comb whose impulses are spaced f s / n apart).
Where the window length was an exact multiple of the original signal period, sampling of the sin(πfT)/(πfT) curve leads to a single Dirac pulse, and the windowing causes no distortion. In all other cases, the effects of the convolution become visible in the frequency domain as leakage and scalloping losses.
[Figures: the shapes of commonly used window functions (rectangular, triangular, Hann, Hamming) and the magnitude (dB) of their Fourier transforms over normalized frequency.]
Hamming window:  w_i = 0.54 − 0.46 × cos(2πi/(n − 1))
Zero padding increases DFT resolution
The two figures below show two spectra of the 16-element sequence

  s_i = cos(2π · 3i/16) + cos(2π · 4i/16),   i ∈ {0, . . . , 15}.

The left plot shows the DFT of the windowed sequence

  x_i = s_i · w_i,   i ∈ {0, . . . , 15}

and the right plot shows the DFT of the zero-padded windowed sequence

  x′_i = { s_i · w_i,  i ∈ {0, . . . , 15};   0,  i ∈ {16, . . . , 63} }

where w_i = 0.54 − 0.46 × cos(2πi/15) is the Hamming window.
[Figures: the 16-point DFT of x_i and the 64-point DFT of x′_i.]
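A MATLAB sketch reproducing these two plots:

  n  = 16;  i = 0:n-1;
  s  = cos(2*pi*3*i/16) + cos(2*pi*4*i/16);   % the 16-element example sequence
  w  = 0.54 - 0.46*cos(2*pi*i/15);            % Hamming window
  x  = s .* w;
  X  = abs(fft(x));                           % 16-point DFT of windowed sequence
  Xp = abs(fft([x zeros(1, 48)]));            % 64-point DFT after zero padding
  subplot(1,2,1); stem(0:15, X);
  subplot(1,2,2); stem(0:63, Xp);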
[Figure: an FIR low-pass filter designed by windowing the ideal impulse response; pole–zero plot, impulse response, magnitude (dB) and phase response. Order: n = 30, cutoff frequency (−6 dB): f_c = 0.25 × f_s/2, window: Hamming.]
We truncate the ideal, infinitely-long impulse response by multiplication with a window sequence.
In the frequency domain, this will convolve the rectangular frequency response of the ideal low-
pass filter with the frequency characteristic of the window. The width of the main lobe determines
the width of the transition band, and the side lobes cause ripples in the passband and stopband.
A band-pass filter that attenuates all frequencies outside f_l < |f| < f_h can be obtained by shifting the windowed impulse response of a low-pass filter of bandwidth (f_h − f_l)/2 up to the band centre (f_h + f_l)/2:

  h_i = (f_h − f_l)/f_s · (sin[π(i − n/2)(f_h − f_l)/f_s]) / (π(i − n/2)(f_h − f_l)/f_s) · cos[π(i − n/2)(f_h + f_l)/f_s] · w_i

[Figure: the resulting frequency response H(f) is the convolution of an ideal low-pass response of width (f_h − f_l)/2 with Dirac impulses at ±(f_h + f_l)/2.]
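As a hedged usage sketch: the Signal Processing Toolbox function fir1 performs window-method FIR design (Hamming window by default); the band edges below are arbitrary example values, normalized to f_s/2.

  n = 30;                            % filter order
  h = fir1(n, [0.2 0.4]);            % band-pass: 0.2*fs/2 < f < 0.4*fs/2
  freqz(h, 1, 512);                  % magnitude and phase response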
A sequence {x_n} can be represented as a polynomial (power series) in a formal variable v:

  X(v) = Σ_{n=−∞}^{∞} x_n v^n
Multiplying the polynomial representations of two sequences corresponds to convolving the sequences:

  H(v) · X(v) = (Σ_{n=−∞}^{∞} h_n v^n) · (Σ_{n=−∞}^{∞} x_n v^n) = Σ_{n=−∞}^{∞} (Σ_{k=−∞}^{∞} h_k · x_{n−k}) · v^n

The Fourier transform of a sequence is obtained by evaluating its polynomial representation on the unit circle:

  X(e^{−jω}) = Σ_{n=−∞}^{∞} x_n e^{−jωn}
Example of polynomial division:

  1/(1 − av) = 1 + av + a²v² + a³v³ + · · · = Σ_{n=0}^{∞} a^n v^n

[Long division of 1 by (1 − av), with successive remainders av, a²v², a³v³, . . . ]
The z-transform
The z-transform of a sequence {x_n} is defined as:

  X(z) = Σ_{n=−∞}^{∞} x_n z^{−n}

Note that it differs only in the sign of the exponent from the polynomial representation discussed on the preceding slides.
Convolving a sequence {x_n} with the exponential sequence {z^n} again shows why such sequences are eigensequences:

  {z^n} ∗ {x_n} = {y_n}  ⇒  y_n = Σ_{k=−∞}^{∞} z^{n−k} x_k = z^n · Σ_{k=−∞}^{∞} z^{−k} x_k = z^n · X(z)
The z-transform defines for each sequence a continuous complex-valued surface over the complex plane C. For finite sequences, its value is always defined across the entire complex plane.
For infinite sequences, it can be shown that the z-transform converges only for the region

  lim_{n→∞} |x_{n+1}/x_n| < |z| < lim_{n→−∞} |x_{n+1}/x_n|

The z-transform identifies a sequence unambiguously only in conjunction with a given region of convergence. In other words, there exist different sequences that have the same expression as their z-transform, but that converge for different amplitudes of z.
On the unit circle, the z-transform reduces to the discrete-time Fourier transform:

  F{x_n}(ω) = X(e^{jω}) = Σ_{n=−∞}^{∞} x_n e^{−jωn}
The z-transform of the impulse response {h_n} of the causal LTI system defined by

  Σ_{l=0}^{k} a_l · y_{n−l} = Σ_{l=0}^{m} b_l · x_{n−l}

with {y_n} = {h_n} ∗ {x_n} is the rational function

  H(z) = (b_0 + b_1 z^{−1} + b_2 z^{−2} + · · · + b_m z^{−m}) / (a_0 + a_1 z^{−1} + a_2 z^{−2} + · · · + a_k z^{−k})

(b_m ≠ 0, a_k ≠ 0) which can also be written as

  H(z) = (z^k / z^m) · (Σ_{l=0}^{m} b_l z^{m−l}) / (Σ_{l=0}^{k} a_l z^{k−l}).

H(z) has m zeros and k poles at non-zero locations in the z plane, plus k − m zeros (if k > m) or m − k poles (if m > k) at z = 0.
This function can be converted into the form

  H(z) = (b_0/a_0) · (∏_{l=1}^{m} (1 − c_l · z^{−1})) / (∏_{l=1}^{k} (1 − d_l · z^{−1}))
       = (b_0/a_0) · z^{k−m} · (∏_{l=1}^{m} (z − c_l)) / (∏_{l=1}^{k} (z − d_l))

where the c_l are the non-zero positions of zeros (H(c_l) = 0) and the d_l are the non-zero positions of the poles (i.e., z → d_l ⇒ |H(z)| → ∞) of H(z). Except for a constant factor, H(z) is entirely characterized by the position of these zeros and poles.
As with the Fourier transform, convolution in the time domain corresponds to complex multiplication in the z-domain:

  {x_n} •−◦ X(z), {y_n} •−◦ Y(z)  ⇒  {x_n} ∗ {y_n} •−◦ X(z) · Y(z)

Delaying a sequence by one corresponds in the z-domain to multiplication with z^{−1}:

  {x_{n−∆n}} •−◦ X(z) · z^{−∆n}
[Figure: |H(z)| plotted as a surface over the complex z-plane for the first example below.]

z-transform examples (each shown on the slides with its pole–zero plot in the z-plane and its impulse response):

  H(z) = z/(z − 0.9) = 1/(1 − 0.9·z^{−1})
  H(z) = z/(z − 1) = 1/(1 − z^{−1})
  H(z) = z/(z − 1.1) = 1/(1 − 1.1·z^{−1})
  H(z) = z²/[(z − 0.9·e^{jπ/6})·(z − 0.9·e^{−jπ/6})] = 1/(1 − 1.8 cos(π/6)·z^{−1} + 0.9²·z^{−2})
  H(z) = z²/[(z − e^{jπ/6})·(z − e^{−jπ/6})] = 1/(1 − 2 cos(π/6)·z^{−1} + z^{−2})
  H(z) = z²/[(z − 0.9·e^{jπ/2})·(z − 0.9·e^{−jπ/2})] = 1/(1 − 1.8 cos(π/2)·z^{−1} + 0.9²·z^{−2}) = 1/(1 + 0.9²·z^{−2})
  H(z) = z/(z + 1) = 1/(1 + z^{−1})
IIR Filter design techniques
The design of a filter starts with specifying the desired parameters:
The designer can then trade off conflicting goals such as a small transition band, a low order, a low ripple amplitude, or even an absence of ripples.
Design techniques for making these tradeoffs for analog filters (involving capacitors, resistors, coils) can also be used to design digital IIR filters:
Butterworth filters
Have no ripples, gain falls monotonically across the pass and transition band. Within the passband, the gain drops slowly down to √(1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor 2^N per octave (N · 20 dB/decade).
Chebyshev type I filters
Distribute the gain error uniformly throughout the passband (equirip-
ples) and drop off monotonically outside.
Chebyshev type II filters
Distribute the gain error uniformly throughout the stopband (equirip-
ples) and drop off monotonically in the passband.
Elliptic filters (Cauer filters)
Distribute the gain error as equiripples both in the passband and stop-
band. This type of filter is optimal in terms of the combination of the
passband-gain tolerance, stopband-gain tolerance, and transition-band
width that can be achieved at a given filter order.
All these filter design techniques are implemented in the MATLAB Signal Processing Toolbox in the functions butter, cheby1, cheby2, and ellip, which output the coefficients a_n and b_n of the difference equation that describes the filter. These can be applied with filter to a sequence, or can be visualized with zplane as poles/zeros in the z-domain, with impz as an impulse response, and with freqz as an amplitude and phase spectrum. The commands sptool and fdatool provide interactive GUIs to design digital filters.
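A hedged usage sketch of one of these functions (order and cutoff are arbitrary example values; butter expects the cutoff normalized to f_s/2):

  [b, a] = butter(5, 0.25);          % 5th-order low-pass, cutoff 0.25 x fs/2
  zplane(b, a);                      % poles and zeros in the z-domain
  figure; impz(b, a, 30);            % impulse response
  figure; freqz(b, a);               % amplitude and phase spectrum
  x = randn(1, 100);                 % some example input sequence
  y = filter(b, a, x);               % apply the filter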
[Figure: filter design example — pole–zero plot, impulse response, magnitude (dB) and phase response. Order: 1, cutoff frequency (−3 dB): 0.25 × f_s/2.]
Butterworth filter design example
[Figure: pole–zero plot, impulse response, magnitude (dB) and phase response over normalized frequency.]

Chebyshev type I filter design example
[Figure: pole–zero plot, impulse response, magnitude (dB) and phase response. Order: 5, cutoff frequency: 0.5 × f_s/2, pass-band ripple: −3 dB.]
Chebyshev type II filter design example
[Figure: pole–zero plot, impulse response, magnitude (dB) and phase response over normalized frequency.]

Elliptic filter design example
[Figure: pole–zero plot, impulse response, magnitude (dB) and phase response. Order: 5, cutoff frequency: 0.5 × f_s/2, pass-band ripple: −3 dB, stop-band ripple: −20 dB.]
Exercise 14  Draw the direct form II block diagrams of the causal infinite-impulse response filters described by the following z-transforms and write down a formula describing their time-domain impulse responses:

  (a) H(z) = 1 / (1 − (1/2)·z^{−1})
  (b) H′(z) = (1 − (1/4)·z^{−4}) / (1 − (1/4)·z^{−1})
  (c) H′′(z) = 1/2 + (1/4)·z^{−1} + (1/2)·z^{−2}

Exercise 15  (a) Perform the polynomial division of the rational function given in exercise 14 (a) until you have found the coefficient of z^{−5} in the result.
(b) Perform the polynomial division of the rational function given in exercise 14 (b) until you have found the coefficient of z^{−10} in the result.
(c) Use its z-transform to show that the filter in exercise 14 (b) has actually a finite impulse response and draw the corresponding block diagram.
for all l, that is, if the probability distributions are time invariant. The derivative p_{x_n}(a) = P′_{x_n}(a) is called the probability density function, and helps us to define quantities such as the
→ expected value E(x_n) = ∫ a·p_{x_n}(a) da
→ mean-square value (average power) E(|x_n|²) = ∫ |a|²·p_{x_n}(a) da

The autocorrelation sequence of a (deterministic) sequence {x_n} is

  c_{xx}(k) = Σ_{i=−∞}^{∞} x_{i+k}·x_i

and its Fourier transform is

  C_{xx}(f) = X(f) · X*(f) = |X(f)|².
  y_n = Σ_{k=−∞}^{∞} h_k · x_{n−k} = Σ_{k=−∞}^{∞} h_{n−k} · x_k

and, with φ_{xx}(k) = E(x_{n+k} · x*_n),

  φ_{yy}(k) = Σ_{i=−∞}^{∞} φ_{xx}(k − i)·c_{hh}(i),   where  c_{hh}(k) = Σ_{i=−∞}^{∞} h_{i+k}·h_i.
In other words:

  {y_n} = {h_n} ∗ {x_n}  ⇒  {φ_{yy}(n)} = {c_{hh}(n)} ∗ {φ_{xx}(n)},   Φ_{yy}(f) = |H(f)|² · Φ_{xx}(f)

Similarly:

  {y_n} = {h_n} ∗ {x_n}  ⇒  {φ_{yx}(n)} = {h_n} ∗ {φ_{xx}(n)},   Φ_{yx}(f) = H(f) · Φ_{xx}(f)

White noise
A random sequence {x_n} is a white noise signal, if m_x = 0 and

  φ_{xx}(k) = σ_x² δ_k.

Its power spectrum is flat: Φ_{xx}(f) = σ_x².
Application example:
Where an LTI {y_n} = {h_n} ∗ {x_n} can be observed to operate on white noise {x_n} with φ_{xx}(k) = σ_x² δ_k, the crosscorrelation between input and output will reveal the impulse response of the system:

  φ_{yx}(k) = σ_x² · h_k

where φ_{yx}(k) = φ_{xy}(−k) = E(y_{n+k} · x*_n).
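A MATLAB sketch of this identification technique (the impulse response h is an arbitrary example, assumed unknown to the observer):

  h = [0.5 1 0.8 0.3 0.1];                 % example impulse response
  x = randn(1, 1e5);                       % white noise input, sigma_x = 1
  y = filter(h, 1, x);                     % observed output
  N = length(x);
  phi_yx = zeros(1, 8);
  for k = 0:7                              % estimate phi_yx(k) = E(y_{n+k} x_n)
    phi_yx(k+1) = mean(y(1+k:N) .* x(1:N-k));
  end
  phi_yx                                   % approximately sigma_x^2 * h_k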
DFT averaging

The rightmost figure was generated from the same set of 1000 windows, but this time the complex values of the DFTs were averaged before the absolute value was taken. This is called coherent averaging and, because of the linearity of the DFT, identical to first averaging the 1000 windows and then applying a single DFT and taking its absolute value. The windows start 64 samples apart. Only periodic waveforms with a period that divides 64 are not averaged away. This periodic averaging step suppresses both the noise and the second sine wave.

Periodic averaging
If a zero-mean signal {x_i} has a periodic component with period p, the periodic component can be isolated by periodic averaging:

  x̄_i = lim_{k→∞} 1/(2k + 1) · Σ_{n=−k}^{k} x_{i+pn}
[Figure: decoding chain of an audiovisual communication system — channel decoding, entropy decoding, perceptual decoding, display, human senses.]
Literature
→ D. Salomon: A guide to data compression methods.
ISBN 0387952608, 2002.
Entropy coding review – Huffman

Entropy:  H = Σ_{α∈A} p(α) · log₂(1/p(α)) = 2.3016 bit

[Figure: Huffman code tree over the six-symbol alphabet {u, v, w, x, y, z}; mean codeword length: 2.35 bit.]

Huffman's algorithm constructs an optimal code-word tree for a set of symbols with known probability distribution. It iteratively picks the two elements of the set with the smallest probability and combines them into a tree by adding a common root. The resulting tree goes back into the set, labeled with the sum of the probabilities of the elements it combines. The algorithm terminates when less than two elements are left.
Encode text wuvw . . . as a numeric value (0.58 . . .) in nested intervals:
[Figure: the interval [0, 1.0] is divided among the symbols u, v, w, x, y, z in proportion to their probabilities; encoding w, u, v, w, . . . selects the corresponding sub-interval at each step, narrowing the interval from [0.0, 1.0] to [0.55, 0.75], [0.55, 0.62], [0.5745, 0.5885], [0.5822, 0.5850], . . .]
Arithmetic coding
Several advantages:
Predictive coding:
[Figure: the encoder transmits the prediction error g(t) = f(t) − P(f(t−1), f(t−2), . . .); the decoder adds the same prediction P(f(t−1), f(t−2), . . .) back to recover f(t).]
Based on the counted numbers n_black and n_white of how often each pixel value has been encountered so far in each of the 1024 contexts, the probability for the next pixel being black is estimated as

  p_black = (n_black + 1) / (n_white + n_black + 2)

The encoder updates its estimate only after the newly counted pixel has been encoded, such that the decoder knows the exact same statistics.
Joint Bi-level Image Experts Group: International Standard ISO 11544, 1993.
Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/
Statistical dependence
Random variables X, Y are dependent iff ∃x, y:

  P(X = x ∧ Y = y) ≠ P(X = x) · P(Y = y).

Application
Where x is the value of the next symbol to be transmitted and y is the vector of all symbols transmitted so far, accurate knowledge of the conditional probability P(X = x | Y = y) will allow a transmitter to remove all redundancy.
An application example of this approach is JBIG, but there y is limited to 10 past single-bit pixels and P(X = x | Y = y) is only an estimate.
Two random variables X and Y are correlated iff

  E{[X − E(X)] · [Y − E(Y)]} ≠ 0

[Figures: scatter plots of the values of neighbouring pixels at various distances (e.g. 1, 2, 4 and 8) in an example image; the correlation decreases with distance.]

Covariance and correlation
We define the covariance of two random variables X and Y as

  Cov(X, Y) = E{[X − E(X)] · [Y − E(Y)]} = E(X · Y) − E(X) · E(Y)
Covariance Matrix
For a random vector X = (X_1, X_2, . . . , X_n) ∈ R^n we define the covariance matrix

  Cov(X) = E((X − E(X)) · (X − E(X))^T) = (Cov(X_i, X_j))_{i,j} =

    ( Cov(X_1, X_1)  Cov(X_1, X_2)  Cov(X_1, X_3)  · · ·  Cov(X_1, X_n) )
    ( Cov(X_2, X_1)  Cov(X_2, X_2)  Cov(X_2, X_3)  · · ·  Cov(X_2, X_n) )
    ( Cov(X_3, X_1)  Cov(X_3, X_2)  Cov(X_3, X_3)  · · ·  Cov(X_3, X_n) )
    (      ...             ...            ...       ...        ...     )
    ( Cov(X_n, X_1)  Cov(X_n, X_2)  Cov(X_n, X_3)  · · ·  Cov(X_n, X_n) )

The elements of a random vector X are uncorrelated if and only if Cov(X) is a diagonal matrix.
Cov(X, Y) = Cov(Y, X), so all covariance matrices are symmetric: Cov(X) = Cov^T(X).
Decorrelation by coordinate transform
[Figures: scatter plots of neighbour-pixel value pairs before and after decorrelation, and the probability distributions and entropies: correlated value pair (H = 13.90 bit), decorrelated value 1 (H = 7.12 bit), decorrelated value 2 (H = 4.75 bit).]
Idea: Take the values of a group of correlated symbols (e.g., neighbour pixels) as a random vector. Find a coordinate transform (multiplication with an orthonormal matrix) that leads to a new random vector whose covariance matrix is diagonal. The vector components in this transformed coordinate system will no longer be correlated. This will hopefully reduce the entropy of some of these components.
For a constant matrix A, a constant vector b, and Y = A·X + b:

  E(Y) = A · E(X) + b
  Cov(Y) = A · Cov(X) · A^T

Proof: The first equation follows from the linearity of the expected-value operator E(·), as does E(A · X · B) = A · E(X) · B for matrices A, B. With that, we can transform

  Cov(Y) = E((Y − E(Y)) · (Y − E(Y))^T)
         = E((AX − AE(X)) · (AX − AE(X))^T)
         = E(A(X − E(X)) · (X − E(X))^T A^T)
         = A · E((X − E(X)) · (X − E(X))^T) · A^T
         = A · Cov(X) · A^T
Quick review: eigenvectors and eigenvalues
We are given a square matrix A ∈ R^{n×n}. The vector x ∈ R^n is an eigenvector of A if there exists a scalar value λ ∈ R such that

  Ax = λx.

The covariance matrix Cov(X) is symmetric and therefore has n real eigenvalues λ_1, . . . , λ_n with mutually orthogonal eigenvectors b_1, . . . , b_n. We convert this set of equations into matrix notation using the matrix B = (b_1, b_2, . . . , b_n) that has these eigenvectors as columns and the diagonal matrix D = diag(λ_1, λ_2, . . . , λ_n) that consists of the corresponding eigenvalues:

  Cov(X)B = BD
B is orthonormal, that is B B T = I.
Multiplying the above from the right with B T leads to the spectral
decomposition
Cov(X) = B D B T
of the covariance matrix. Similarly multiplying instead from the left
with B T leads to
B T Cov(X)B = D
and therefore shows with
Cov(B T X) = D
that the eigenvector matrix B T is the wanted transform.
The Karhunen-Loève transform (also known as Hotelling transform or Principal Component Analysis) is the multiplication of a correlated random vector X with the orthonormal eigenvector matrix B^T from the spectral decomposition Cov(X) = BDB^T of its covariance matrix. This leads to a decorrelated random vector B^T X whose covariance matrix is diagonal.
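A minimal MATLAB sketch of the KLT (the mixing matrix M is an arbitrary example used only to generate correlated data):

  M = [1 0.9 0 0; 0 1 0.5 0; 0 0 1 0.2; 0 0 0 1];
  X = randn(1000, 4) * M;          % 1000 samples of a correlated random vector
  C = cov(X);                      % estimated covariance matrix
  [B, D] = eig(C);                 % columns of B: orthonormal eigenvectors
  Y = X * B;                       % apply B^T to each (row-vector) sample
  cov(Y)                           % approximately diagonal (close to D)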
[Figures: the covariance matrix C of an example image, the matrix U with its eigenvector columns, the matrix U′ with normalised KLT eigenvector columns, and the matrix with Discrete Cosine Transform base vector columns.]
is
C(u) NΣ− s(x) (2x + c
S (u) = √
cos 1)uπ x = 0
1
2N r
N/2 e
NΣ− C (u) (2x +
s(x ) = 1 √ S(u)
u = 0 cos
1)uπ2N t
N/2 e
with . 1
C(u) =
√
2
u= c
0 o
is an orthonormal transform: 1 u>
0
si
.
NΣ− C(u) (2x + 1)uπ C(u ′ ) (2x + 1)u π ′
1 u n
1
√
2N ·√N / 2 2N
= = u′
e
x = 0 cos
N/2 cos 0 u
ƒ= u′ tr
150
a
The 2-dimensional variant of the DCT applies the 1-D transform on
both rows and columns of an image:
151
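A MATLAB sketch that builds this transform matrix and checks its orthonormality (N and the test vector are arbitrary):

  N = 8;
  C = ones(N, 1);  C(1) = 1/sqrt(2);            % C(u)
  T = zeros(N);                                 % row u, column x
  for u = 0:N-1
    for x = 0:N-1
      T(u+1, x+1) = C(u+1)/sqrt(N/2) * cos((2*x+1)*u*pi/(2*N));
    end
  end
  max(max(abs(T*T' - eye(N))))                  % orthonormal: T*T' = I
  s = randn(N, 1);
  S = T * s;                                    % forward DCT
  max(abs(T' * S - s))                          % inverse via the transpose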
Whole-image DCT
[Figures: the two-dimensional DCT of a complete example image (coefficient magnitudes on a logarithmic colour scale), and the reconstructed images after 80% and 95% of the smallest coefficients have been cut off.]
Base vectors of 8×8 DCT
[Figure: the 64 base vectors (u, v ∈ {0, . . . , 7}) of the two-dimensional 8×8 DCT.]
The n-point Discrete Fourier Transform (DFT) can be viewed as a device that
sends an input signal through a bank of n non-overlapping band-pass filters, each
reducing the bandwidth of the signal to 1/ n of its original bandwidth.
According to the sampling theorem, after a reduction of the bandwidth by 1/n,
the number of samples needed to reconstruct the original signal can equally be
reduced by 1/n. The DFT splits a wide-band signal represented by n input samples
into n separate narrow-band signals, each represented by a single sample.
A Discrete Wavelet Transform (DWT) can equally be viewed as such a frequency-
band splitting device. However, with the DWT, the bandwidth of each output
signal is proportional to the highest input frequency that it contains. High-
frequency components are represented in output signals with a high bandwidth,
and therefore a large number of samples. Low-frequency signals end up in output
signals with low bandwidth, and are correspondingly represented with a low
number of samples. As a result, high-frequency information is preserved with
higher spatial resolution than low-frequency information.
Both the DFT and the DWT are linear orthogonal transforms that preserve all
input information in their output without adding anything redundant.
As with the DFT, the 1-dimensional DWT can be extended to 2-D images by transforming both rows and columns (which of the two happens first is not relevant).
  c_3 − c_2 + c_1 − c_0 = 0
  0c_3 − 1c_2 + 2c_1 − 3c_0 = 0

This leads to the solution

  c_0 = (1 + √3)/(4√2),   c_1 = (3 + √3)/(4√2),
  c_2 = (3 − √3)/(4√2),   c_3 = (1 − √3)/(4√2)

Daubechies tabulated also similar filters with more coefficients.
Discrete Wavelet Transform compression
[Figures: reconstructions of an example image after truncating 80% and 90% of the coefficients of a 2D DAUB8 DWT.]
Psychophysics of perception
Sensation limit (SL) = lowest intensity stimulus that can still be perceived
Difference limit (DL) = smallest perceivable stimulus difference at given
intensity level
Weber's law
Difference limit ∆φ is proportional to the intensity φ of the stimulus (except for a small correction constant a, to describe deviation of experimental results near SL):

  ∆φ = c · (φ + a)
Fechner's scale
Define a perception intensity scale ψ using the sensation limit φ_0 as the origin and the respective difference limit ∆φ = c · φ as a unit step. The result is a logarithmic relationship between stimulus intensity and scale value:

  ψ = log_c (φ/φ_0)
Fechner’s scale matches older subjective intensity scales that follow
differentiability of stimuli, e.g. the astronomical magnitude numbers
for star brightness introduced by Hipparchos (≈150 BC).
Stevens’ law
A sound that is 20 DL over SL is perceived as more than twice as loud
as one that is 10 DL over SL, i.e. Fechner’s scale does not describe
well perceived intensity. A rational scale attempts to reflect subjective
relations perceived between different values of stimulus intensity φ.
Stevens observed that such rational scales ψ follow a power law:
ψ = k · (φ − φ 0 )a
Decibel
Communications engineers often use logarithmic units:
→ Quantities often vary over many orders of magnitude → difficult to agree on a common SI prefix
→ Quotient of quantities (amplification/attenuation) usually more interesting than difference
→ Signal strength usefully expressed as field quantity (voltage, current, pressure, etc.) or power, but quadratic relationship between these two (P = U²/R = I²R) rather inconvenient
→ Weber/Fechner: perception is logarithmic
A power ratio is expressed as 10 · log₁₀(P₁/P₀) dB, a ratio of field quantities as 20 · log₁₀(F₁/F₀) dB, so that both agree for P ∝ F².
Plus: Using magic special-purpose units has its own odd attractions (→ typographers, navigators)
[Figures: the (Cb, Cr) chrominance plane of the YCrCb colour space, shown for several luminance values (Y = 0.7, 0.9, 0.99, . . .); Cb and Cr are offset, scaled versions of the colour differences U = B − Y and V = R − Y (Cb = U/2.0 + 0.5, Cr = V/1.6 + 0.5).]
Each curve represents a loudness level in phon. At 1 kHz, the loudness unit
phon is identical to dBSPL and 0 phon is the sensation limit.
Sound waves cause vibration in the eardrum. The three smallest human bones in
the middle ear (malleus, incus, stapes) provide an “impedance match” between air
and liquid and conduct the sound via a second membrane, the oval window, to
the cochlea. Its three chambers are rolled up into a spiral. The basilar membrane
that separates the two main chambers decreases in stiffness along the spiral, such
that the end near the stapes vibrates best at the highest frequencies, whereas for
lower frequencies that amplitude peak moves to the far end.
masking.wav
Twelve sequences, each with twelve probe-tone pulses and a 1200 Hz
masking tone during pulses 5 to 8.
Probing-tone frequency and relative masking-tone amplitude:
  probing-tone frequencies: 1300 Hz, 1900 Hz, 700 Hz
  relative masking-tone amplitudes: 10 dB, 20 dB, 30 dB, 40 dB
Audio demo: loudness.wav
[Figure: sound pressure level (dB SPL) versus frequency for the 0 dBA curve (sensation limit) and the test tones of the first and second series.]

Audio demo: masking.wav
[Figure: sound pressure level (dB SPL) versus frequency for the 0 dBA curve (SL), the masking tones, the probing tones, and the resulting masking thresholds.]
Quantization
Uniform/linear quantization versus non-uniform quantization:
[Figure: staircase characteristics of a uniform quantizer and of a non-uniform quantizer.]

A-law:

  y = sgn(x) · (A|x|)/(1 + log A)                     for 0 ≤ |x| ≤ V/A
  y = sgn(x) · V·(1 + log(A|x|/V))/(1 + log A)        for V/A ≤ |x| ≤ V

European digital telephone networks use A-law quantization (A = 87.6), North American ones use µ-law (µ = 255), both with 8-bit resolution and 8 kHz sampling frequency (64 kbit/s). [ITU-T G.711]
[Figure: µ-law (US) and A-law (Europe) companding curves, output value versus signal voltage over the range −V to V.]
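A MATLAB sketch of the A-law compressor characteristic (V = 1 chosen for illustration; log denotes the natural logarithm, as in the formula above):

  A = 87.6;  V = 1;
  x = linspace(-V, V, 1001);
  y = zeros(size(x));
  small = abs(x) <= V/A;                    % linear segment near zero
  y(small)  = sign(x(small)) .* (A*abs(x(small))) / (1 + log(A));
  y(~small) = sign(x(~small)) .* V .* (1 + log(A*abs(x(~small))/V)) / (1 + log(A));
  plot(x, y);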
Joint Photographic Experts Group – JPEG
Working group “ISO/TC97/SC2/WG8 (Coded representation of picture and audio information)”
was set up in 1982 by the International Organization for Standardization.
Goals:
→ continuous tone gray-scale and colour images
→ recognizable images at 0.083 bit/pixel
→ useful images at 0.25 bit/pixel
→ excellent image quality at 0.75 bit/pixel
→ indistinguishable images at 2.25 bit/pixel
→ feasibility of 64 kbit/s (ISDN fax) compression with late 1980s
hardware (16 MHz Intel 80386).
→ workload equal for compression and decompression
The JPEG standard (ISO 10918) was finally published in 1994.
William B. Pennebaker, Joan L. Mitchell: JPEG still image compression standard. Van Nostrand Reinhold, New York, ISBN 0442012721, 1993.
Gregory K. Wallace: The JPEG Still Picture Compression Standard. Communications of the ACM 34(4)30–44, April 1991, http://doi.acm.org/10.1145/103085.103089
→ Quantization: divide each DCT coefficient by the corresponding value from an 8×8 table, then round to the nearest integer:
The two standard quantization-matrix examples for luminance and chrominance
are:
16 11 10 16 24 40 51 61 17 18 24 47 99 99 99 99
12 12 14 19 26 58 60 55 18 21 26 66 99 99 99 99
14 13 16 24 40 57 69 56 24 26 56 99 99 99 99 99
14 17 22 29 51 87 80 62 47 66 99 99 99 99 99 99
18 22 37 56 68 109 103 77 99 99 99 99 99 99 99 99
24 35 55 64 81 104 113 92 99 99 99 99 99 99 99 99
49 64 78 87 103 121 120 101 99 99 99 99 99 99 99 99
72 92 95 98 112 100 103 99 99 99 99 99 99 99 99 99
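A minimal MATLAB sketch of this quantization step, using the luminance table above (the 8×8 block here is a random stand-in, not real DCT output):

  Q = [16 11 10 16 24 40 51 61; 12 12 14 19 26 58 60 55; ...
       14 13 16 24 40 57 69 56; 14 17 22 29 51 87 80 62; ...
       18 22 37 56 68 109 103 77; 24 35 55 64 81 104 113 92; ...
       49 64 78 87 103 121 120 101; 72 92 95 98 112 100 103 99];
  block = round(randn(8, 8) * 50);   % stand-in for an 8x8 block of DCT coefficients
  q = round(block ./ Q);             % quantization (encoder)
  r = q .* Q;                        % dequantization (decoder)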
   0  1  5  6 14 15 27 28
   2  4  7 13 16 26 29 42
   3  8 12 17 25 30 41 43
   9 11 18 24 31 40 44 53
  10 19 23 32 39 45 52 54
  20 22 33 38 46 51 55 60
  21 34 37 47 50 56 59 61
  35 36 48 49 57 58 62 63
(horizontal frequency increases to the right, vertical frequency downwards)
After the 8×8 coefficients produced by the discrete cosine transform
have been quantized, the values are processed in the above zigzag
order by a run-length encoding step.
The idea is to group all higher-frequency coefficients together at the end of the sequence. As
many image blocks contain little high-frequency information, the bottom-right corner of the
quantized DCT matrix is often entirely zero. The zigzag scan helps the run-length coder to make
best use of this observation.
Huffman coding in JPEG

  s     value range
  0     0
  1     −1, 1
  2     −3, −2, 2, 3
  3     −7 . . . −4, 4 . . . 7
  4     −15 . . . −8, 8 . . . 15
  5     −31 . . . −16, 16 . . . 31
  6     −63 . . . −32, 32 . . . 63
  . . .
  i     −(2^i − 1) . . . −2^{i−1}, 2^{i−1} . . . 2^i − 1

DCT coefficients have 11-bit resolution and would lead to huge Huffman tables (up to 2048 code words). JPEG therefore uses a Huffman table only to encode the magnitude category s = ⌈log₂(|v| + 1)⌉ of a DCT value v. A sign bit plus the (s − 1)-bit binary value |v| − 2^{s−1} are appended to each Huffman code word, to distinguish between the 2^s different values within magnitude category s.
When storing DCT coefficients in zigzag order, the symbols in the Huffman tree are actually tuples (r, s), where r is the number of zero coefficients preceding the coded value (run-length).
JPEG-2000 (JP2)
Processing steps:
→ The bit streams for the independently encoded code blocks are
then truncated (lossy mode only), to achieve the required
compression rate.
Features:
→ progressive recovery by fidelity or resolution
→ lower compression for specified region-of-interest
→ CrCb subsampling can be handled via DWT quantization
ISO 15444-1, example implementation: http://www.ece.uvic.ca/ ~mdadams/jasper/
JPEG2000 examples (DWT)
[Figures: example images compressed with JPEG2000 (DWT-based).]
[Figure: motion-compensated prediction — macroblocks of the current picture are predicted from a backward and a forward reference picture.]
Each MPEG image is split into 16×16-pixel large macroblocks. The predic-
tor forms a linear combination of the content of one or two other blocks of
the same size in a preceding (and following) reference image. The relative
positions of these reference blocks are encoded along with the differences.
MPEG reordering of reference images
Display order of frames (time →):
  I B B B P B B B P B B B P
Coding order (time →):
  I P B B B P B B B P B B B
MPEG distinguishes between I-frames that encode an image independent of any others, P-frames
that encode differences to a previous P- or I-frame, and B-frames that interpolate between the
two neighbouring B- and/or I-frames. A frame has to be transmitted before the first B-frame
that makes a forward reference to it. This requires the coding order to differ from the display
order.
[Figures: encoder and decoder buffer content over time.]
MPEG can be used both with variable-bitrate (e.g., file, DVD) and fixed-bitrate (e.g., ISDN)
channels. The bitrate of the compressed data stream varies with the complexity of the input data
and the current quantization values. Buffers match the short-term variability of the encoder
bitrate with the channel bitrate. A control loop continuously adjusts the average bitrate via the
quantization values to prevent under- or overflow of the buffer.
The MPEG system layer can interleave many audio and video streams in a single data stream.
Buffers match the bitrate required by the codecs with the bitrate available in the multiplex and
encoders can dynamically redistribute bitrate among different streams.
MPEG encoders implement a 27 MHz clock counter as a timing reference and add its value as a
system clock reference (SCR) several times per second to the data stream. Decoders synchronize
with a phase-locked loop their own 27 MHz clock with the incoming SCRs.
Each compressed frame is annotated with a presentation time stamp (PTS) that determines when its samples need to be output. Decoding timestamps specify when data needs to be available to the decoder.
MPEG audio coding
Three different algorithms are specified, each increasing the processing
power required in the decoder.
Supported sampling frequencies: 32, 44.1 or 48 kHz.
Layer I
→ Waveforms are split into segments of 384 samples each (8 ms at 48 kHz).
→ Each segment is passed through an orthogonal filter bank that splits the
signal into 32 subbands, each 750 Hz wide (for 48 kHz).
This approximates the critical bands of human hearing.
Layer II
Uses better encoding of scale factors and bit allocation information.
Unless there is significant change, only one out of three scale factors is transmitted. Explicit zero
code leads to odd numbers of quantization levels and wastes one codeword. Layer II combines
several quantized values into a granule that is encoded via a lookup table (e.g., 3 × 5 levels: 125
values require 7 instead of 9 bits). Layer II is used in Digital Audio Broadcasting (DAB).
Layer III
→ Modified DCT step decomposes subbands further into 18 or 6 frequencies
→ dynamic switching between MDCT with 36-samples (28 ms, 576 freq.)
and 12-samples (8 ms, 192 freq.)
enables control of pre-echos before sharp percussive sounds (Heisenberg)
→ non-uniform quantization
MPEG audio layer III is the widely used “MP3” music compression format.
Psychoacoustic models
MPEG audio encoders use a psychoacoustic model to estimate the spectral
and temporal masking that the human ear will apply. The subband quan-
tization levels are selected such that the quantization noise remains below
the masking threshold in each subband.
The masking model is not standardized and each encoder developer can
chose a different one. The steps typically involved are:
→ Fourier transform for spectral analysis
→ Group the resulting frequencies into “critical bands” within which
masking effects will not vary significantly
→ Distinguish tonal and non-tonal (noise-like) components
→ Apply masking function
→ Calculate threshold per subband
→ Calculate signal-to-mask ratio (SMR) for each subband
Masking is not linear and can be estimated accurately only if the actual sound pressure levels
reaching the ear are known. Encoder operators usually cannot know the sound pressure level
selected by the decoder user. Therefore the model must use worst-case SMRs.
→ multirate systems
→ adaptive filters
→ sound effects
If you find a typo or mistake in these lecture notes, please notify [email protected].
. . . and perception
Count how many Fs there are in this text:
FINISHED FILES ARE THE RE-
SULT OF YEARS OF SCIENTIF-
IC STUDY COMBINED WITH THE
EXPERIENCE OF YEARS