Computationally Efficient Music Synthesis - Methods and Sound Design
Signals

Continuous-time signals: voltage, current, temperature, speed, . . .
Discrete-time signals: lap intervals in races, sampled continuous signals, . . .

Electronics (unlike optics) can only deal easily with time-dependent signals, therefore spatial signals, such as images, are typically first converted into a time signal with a scanning process (TV, fax, etc.).

Analog signal processing

• analog signal-processing circuits work at large bandwidths
• analog circuits cause little additional interference

A passive R, L, C network relates its input voltage Uin to its output voltage Uout through an equation such as

  (Uin − Uout)/R = (1/L) ∫_{−∞}^{t} Uout dτ + C · dUout/dt
Digital signal processing

Analog/digital and digital/analog converter, CPU, DSP, ASIC, FPGA.

Advantages:

→ noise is easy to control after initial quantization
→ highly linear (within limited dynamic range)
→ complex algorithms fit into a single chip
→ flexibility, parameters can easily be varied in software
→ digital processing is insensitive to component tolerances, aging, environmental conditions, electromagnetic interference

But:

→ discrete-time processing artifacts (aliasing)
→ can require significantly more power (battery, cooling)
→ digital clock and switching cause interference
Typical DSP applications

→ communication systems: modulation/demodulation, channel equalization, echo cancellation
→ consumer electronics: perceptual coding of audio and video on DVDs, speech synthesis, speech recognition
→ music: synthetic instruments, audio effects, noise reduction
→ medical diagnostics: magnetic-resonance and ultrasonic imaging, computer tomography, ECG, EEG, MEG, AED, audiology
→ geophysics: seismology, oil exploration
→ astronomy: VLBI, speckle interferometry
→ experimental physics: sensor-data evaluation
→ aviation: radar, radio navigation
→ security: steganography, digital watermarking, biometric identification, surveillance systems, signals intelligence, electronic warfare
→ engineering: control systems, feature extraction for pattern recognition

Syllabus

Signals and systems. Discrete sequences and systems, their types and properties. Linear time-invariant systems, convolution. Harmonic phasors are the eigenfunctions of linear time-invariant systems. Review of complex arithmetic. Some examples from electronics, optics and acoustics.

MATLAB. Use of MATLAB on PWF machines to perform numerical experiments and visualise the results in homework exercises.

Fourier transform. Harmonic phasors as orthogonal base functions. Forms of the Fourier transform, convolution theorem, Dirac's delta function, impulse combs in the time and frequency domain.

Discrete sequences and spectra. Periodic sampling of continuous signals, periodic signals, aliasing, sampling and reconstruction of low-pass and band-pass signals, spectral inversion.

Discrete Fourier transform. Continuous versus discrete Fourier transform, symmetry, linearity, review of the FFT, real-valued FFT.

Spectral estimation. Leakage and scalloping phenomena, windowing, zero padding.

Finite and infinite impulse-response filters. Properties of filters, implementation forms, window-based FIR design, use of frequency-inversion to obtain high-pass filters, use of modulation to obtain band-pass filters, FFT-based convolution, polynomial representation, z-transform, zeros and poles, use of analog IIR design techniques (Butterworth, Chebyshev I/II, elliptic filters).

Random sequences and noise. Random variables, stationary processes, autocorrelation, crosscorrelation, deterministic crosscorrelation sequences, filtered random sequences, white noise, exponential averaging.

Correlation coding. Random vectors, dependence versus correlation, covariance, decorrelation, matrix diagonalisation, eigen decomposition, Karhunen-Loève transform, principal/independent component analysis. Relation to orthogonal transform coding using fixed basis vectors, such as DCT.

Lossy versus lossless compression. What information is discarded by human senses and can be eliminated by encoders? Perceptual scales, masking, spatial resolution, colour coordinates, some demonstration experiments.

Quantization, image and audio coding standards. A/µ-law coding, delta coding, JPEG photographic still-image compression, motion compensation, MPEG video encoding, MPEG audio encoding.

Note: The last three lectures on audio-visual coding were previously part of the course “Information Theory and Coding”. A brief introduction to MATLAB was given in “Unix Tools”.
Objectives

By the end of the course, you should be able to

→ apply basic properties of time-invariant linear systems
→ understand sampling, aliasing, convolution, filtering, the pitfalls of spectral estimation
→ explain the above in time and frequency domain representations
→ use filter-design software
→ visualise and discuss digital filters in the z-domain
→ use the FFT for convolution, deconvolution, filtering
→ implement, apply and evaluate simple DSP applications in MATLAB
→ apply transforms that reduce correlation between several signal sources
→ understand and explain limits in human perception that are exploited by lossy compression techniques
→ provide a good overview of the principles and characteristics of several widely-used compression techniques and standards for audio-visual signals

Sequences and systems

A discrete sequence {xn}∞n=−∞ is a sequence of numbers

  . . . , x−2, x−1, x0, x1, x2, . . .

where xn denotes the n-th number in the sequence (n ∈ Z). A discrete sequence maps integer numbers onto real (or complex) numbers.

We normally abbreviate {xn}∞n=−∞ to {xn}, or to {xn}n if the running index is not obvious. The notation is not well standardized. Some authors write x[n] instead of xn, others x(n).

Where a discrete sequence {xn} samples a continuous function x(t) as

  xn = x(ts · n) = x(n/fs),

we call ts the sampling period and fs = 1/ts the sampling frequency.

A discrete system T receives as input a sequence {xn} and transforms it into an output sequence {yn} = T{xn}:

  . . . , x2, x1, x0, x−1, . . .  →  discrete system T  →  . . . , y2, y1, y0, y−1, . . .
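To make the sampling notation concrete, here is a minimal MATLAB sketch (the sine wave, its frequency and fs are arbitrary illustration values, not from the notes):

  fs = 100;              % sampling frequency (Hz), arbitrary choice
  ts = 1/fs;             % sampling period
  n  = 0:99;             % sample indices
  f  = 5;                % frequency of the example sine wave (Hz)
  xn = sin(2*pi*f*n*ts); % xn = x(ts*n) = x(n/fs)
  stem(n, xn);           % plot the discrete sequence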
Constant-coefficient difference equations

Of particular practical interest are causal linear time-invariant systems of the form

  yn = b0 · xn − Σ_{k=1}^{N} ak · yn−k

[Block diagram: xn is scaled by b0; the delayed outputs yn−1, yn−2, yn−3, obtained through z−1 delay elements, are scaled by −a1, −a2, −a3 and added.]

The ak and bm are constant coefficients.

Block diagram representation of sequence operations:

  Addition:                   xn, x′n → xn + x′n
  Multiplication by constant: xn → a → a·xn
  Delay:                      xn → z−1 → xn−1

or

  yn = Σ_{m=0}^{M} bm · xn−m

[Block diagram: a delay chain xn, xn−1, xn−2, xn−3 whose taps are scaled by b0, b1, b2, b3 and summed to give yn.]

or

  Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m

[Block diagram: feed-forward taps b0 . . . b3 on xn . . . xn−3 and feedback taps −a1 . . . −a3 on yn−1 . . . yn−3.]

The MATLAB function filter is an efficient implementation of the last variant.

Convolution

All linear time-invariant (LTI) systems can be represented in the form

  yn = Σ_{k=−∞}^{∞} ak · xn−k

where {ak} is a suitably chosen sequence of coefficients.

This operation over sequences is called convolution and defined as

  {pn} ∗ {qn} = {rn}  ⇐⇒  ∀n ∈ Z: rn = Σ_{k=−∞}^{∞} pk · qn−k.

If {yn} = {an} ∗ {xn} is a representation of an LTI system T, with {yn} = T{xn}, then we call the sequence {an} the impulse response of T, because {an} = T{δn}.

Convolution examples

[Figure: example sequences A, B, C, D, E, F and the convolution results C∗A, A∗E, D∗E, A∗F.]
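The last variant maps directly onto MATLAB's filter(b, a, x). A minimal sketch with arbitrarily chosen example coefficients (not from the notes):

  b = [1 0.5];           % feed-forward coefficients b0, b1
  a = [1 -0.9];          % feedback coefficients a0, a1 (a0 = 1)
  x = [1 zeros(1, 19)];  % impulse sequence {delta_n}
  h = filter(b, a, x);   % impulse response of the difference equation
  % For an FIR filter (a = 1), filter(b, 1, x) computes the same samples
  % as conv(b, x), truncated to the length of x:
  y_fir  = filter(b, 1, x);
  y_conv = conv(b, x);
  max(abs(y_fir - y_conv(1:length(x))))   % should be 0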
Properties of convolution

For arbitrary sequences {pn}, {qn}, {rn} and scalars a, b, convolution is commutative and associative, distributes over scaled addition, and has the impulse sequence {δn} as its neutral element:

  {pn} ∗ {qn} = {qn} ∗ {pn}
  ({pn} ∗ {qn}) ∗ {rn} = {pn} ∗ ({qn} ∗ {rn})
  {pn} ∗ (a·{qn} + b·{rn}) = a·({pn} ∗ {qn}) + b·({pn} ∗ {rn})
  {pn} ∗ {δn} = {pn}

Exercise 1 What type of discrete system (linear/non-linear, time-invariant/non-time-invariant, causal/non-causal, memory-less, etc.) is: . . .

Can all LTI systems be represented by convolution?

Any sequence {xn} can be decomposed into a weighted sum of shifted impulse sequences:

  {xn} = Σ_{k=−∞}^{∞} xk · {δn−k}

Let's see what happens if we apply a linear(∗) time-invariant(∗∗) system T to such a decomposed sequence:

  T{xn} = T( Σ_{k=−∞}^{∞} xk · {δn−k} )
        =(∗)  Σ_{k=−∞}^{∞} xk · T{δn−k}
        =(∗∗) Σ_{k=−∞}^{∞} xk · ({δn−k} ∗ T{δn}) = ( Σ_{k=−∞}^{∞} xk · {δn−k} ) ∗ T{δn}
        = {xn} ∗ T{δn}    q.e.d.

⇒ The impulse response T{δn} fully characterizes an LTI system.

Exercise 3 A finite-length sequence is non-zero only at a finite number of positions. If m and n are the first and last non-zero positions, respectively, then we call n − m + 1 the length of that sequence. What maximum length can the result of convolving two sequences of length k and l have?

Exercise 4 The length-3 sequence a0 = −3, a1 = 2, a2 = 1 is convolved with a second sequence {bn} of length 5.
(a) Write down this linear operation as a matrix multiplication involving a matrix A, a vector ~b ∈ R5, and a result vector ~c.
(b) Use MATLAB to multiply your matrix by the vector ~b = (1, 0, 0, 2, 2) and compare the result with that of using the conv function.
(c) Use the MATLAB facilities for solving systems of linear equations to undo the above convolution step.

Exercise 5 (a) Find a pair of sequences {an} and {bn}, where each one contains at least three different values and where the convolution {an} ∗ {bn} results in an all-zero sequence.
(b) Does every LTI system T have an inverse LTI system T−1 such that {xn} = T−1 T{xn} for all sequences {xn}? Why?
Direct form I and II implementations

[Block diagrams: direct form I uses separate delay chains for xn (feed-forward taps b0 . . . b3) and yn (feedback taps −a1 . . . −a3); direct form II scales by a0−1 and shares a single delay chain.]

The block diagram representation of the constant-coefficient difference equation on slide 18 is called the direct form I implementation.

The number of delay elements can be halved by using the commutativity of convolution to swap the two feedback loops, leading to the direct form II implementation of the same LTI system.

These two forms are only equivalent with ideal arithmetic (no rounding errors and range limits).

Convolution: electronics example

[Figure: RC low-pass filter with input Uin and output Uout across C; step response of Uout and amplitude response dropping to Uin/√2 at ω = 1/RC (= 2πf).]

Any passive network (R, L, C) convolves its input voltage Uin with an impulse response function h, leading to Uout = Uin ∗ h, that is

  Uout(t) = ∫_{−∞}^{∞} Uin(t − τ) · h(τ) · dτ

In this example:

  (Uin − Uout)/R = C · dUout/dt,    h(t) = (1/RC) · e^{−t/RC} for t ≥ 0,  h(t) = 0 for t < 0.
Convolution: optics example

[Figure: lens geometry with aperture a, focal length f, image plane and focal plane at distance s.]

Point-spread function h (disk, r = as/(2f)):

  h(x, y) = 1/(r²π)  if x² + y² ≤ r²
            0        if x² + y² > r²

Original image I, blurred image B = I ∗ h, i.e.

  B(x, y) = ∫∫ I(x − x′, y − y′) · h(x′, y′) · dx′ dy′

Adding sine waves

Sine waves of any phase can be formed from sin and cos alone:

  A · sin(ωt + ϕ) = a · sin(ωt) + b · cos(ωt)

with a = A · cos(ϕ), b = A · sin(ϕ), A = √(a² + b²) and tan ϕ = b/a.

[Phasor diagram: adding two sine waves with amplitudes A1, A2 and phases ϕ1, ϕ2 gives another sine wave of the same frequency with

  tan ϕ = (A1 sin ϕ1 + A2 sin ϕ2) / (A1 cos ϕ1 + A2 cos ϕ2).]
Note: Convolution of a discrete sequence {xn} with another sequence {yn} is nothing but adding together scaled and delayed copies of {xn}. (Think of {yn} decomposed into a sum of impulses.) If {xn} is a sampled sine wave of frequency f, so is {xn} ∗ {yn}!

  ⇒ Sine-wave sequences form a family of discrete sequences that is closed under convolution with arbitrary sequences.

The same applies for continuous sine waves and convolution.

Sine waves are orthogonal to each other:

  ∫_{−∞}^{∞} sin(ω1 t + ϕ1) · sin(ω2 t + ϕ2) dt "=" 0   ⇐⇒   ω1 ≠ ω2  ∨  ϕ1 − ϕ2 = (2k + 1)π/2  (k ∈ Z)

They can be used to form an orthogonal function basis for a transform.

The term “orthogonal” is used here in the context of an (infinitely dimensional) vector space, where the “vectors” are functions of the form f : R → R (or f : R → C) and the scalar product is defined as f · g = ∫_{−∞}^{∞} f(t) · g(t) dt.

Why are complex numbers so useful?

1) They give us all n solutions (“roots”) of equations involving polynomials up to degree n (the “√−1 = j” story).

2) They give us the “great unifying theory” that combines sine and exponential functions:

  cos(ωt) = (1/2) · (e^{jωt} + e^{−jωt})
  sin(ωt) = (1/2j) · (e^{jωt} − e^{−jωt})

or

  cos(ωt + ϕ) = (1/2) · (e^{j(ωt+ϕ)} + e^{−j(ωt+ϕ)})

or

  cos(ωn + ϕ) = ℜ(e^{j(ωn+ϕ)}) = ℜ[(e^{jω})^n · e^{jϕ}]
  sin(ωn + ϕ) = ℑ(e^{j(ωn+ϕ)}) = ℑ[(e^{jω})^n · e^{jϕ}]

Notation: ℜ(a + jb) := a and ℑ(a + jb) := b where j² = −1 and a, b ∈ R.
Why are exponential functions useful?

Adding together two exponential functions with the same base z, but different scale factor and offset, results in another exponential function with the same base:

  A1 · z^{t+ϕ1} + A2 · z^{t+ϕ2} = A1 · z^t · z^{ϕ1} + A2 · z^t · z^{ϕ2}
                                = (A1 · z^{ϕ1} + A2 · z^{ϕ2}) · z^t = A · z^t

Likewise, if we convolve a sequence {xn} of values

  . . . , z^{−3}, z^{−2}, z^{−1}, 1, z, z², z³, . . .

that is xn = z^n, with an arbitrary sequence {hn}, we get {yn} = {z^n} ∗ {hn},

  yn = Σ_{k=−∞}^{∞} xn−k · hk = Σ_{k=−∞}^{∞} z^{n−k} · hk = z^n · Σ_{k=−∞}^{∞} z^{−k} · hk = z^n · H(z)

where H(z) is independent of n.

  Exponential sequences are closed under convolution with arbitrary sequences. The same applies in the continuous case.

We can now represent sine waves as projections of a rotating complex vector. This allows us to represent sine-wave sequences as exponential sequences with basis e^{jω}. A phase shift in such a sequence corresponds to a rotation of a complex vector.

3) Complex multiplication allows us to modify the amplitude and phase of a complex rotating vector using a single operation and value.

Rotation of a 2D vector in (x, y)-form is notationally slightly messy, but fortunately j² = −1 does exactly what is required here:

  (x3)   (x2  −y2) (x1)
  (y3) = (y2   x2) (y1)

  z1 = x1 + jy1, z2 = x2 + jy2
  z1 · z2 = x1x2 − y1y2 + j(x1y2 + x2y1)

[Diagram: the vectors (x1, y1), (x2, y2), (−y2, x2) and the product vector (x3, y3).]
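This closure property is easy to verify numerically. A minimal MATLAB sketch, with an arbitrarily chosen base z and a short example {hn}:

  z = 0.9 * exp(1j*0.3);          % arbitrary complex base
  n = 0:49;
  x = z.^n;                       % exponential sequence x_n = z^n
  h = [1 -0.5 0.25];              % arbitrary finite impulse response
  y = conv(x, h);                 % {y_n} = {z^n} * {h_n}
  H = sum(h .* z.^(-(0:2)));      % H(z) = sum_k h_k * z^(-k)
  % away from the edges of the finite x, y_n equals z^n * H(z):
  max(abs(y(3:50) - H * x(3:50)))   % ~ 0 up to rounding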
Complex phasors

Amplitude and phase are two distinct characteristics of a sine function that are inconvenient to keep separate notationally. Complex functions (and discrete sequences) of the form

  A · e^{j(ωt+ϕ)} = A · [cos(ωt + ϕ) + j · sin(ωt + ϕ)]

(where j² = −1) are able to represent both amplitude and phase in one single algebraic object.

Thanks to complex multiplication, we can also incorporate in one single factor both a multiplicative change of amplitude and an additive change of phase of such a function. This makes discrete sequences of the form

  xn = e^{jωn}

eigensequences with respect to an LTI system T, because for each ω, there is a complex number (eigenvalue) H(ω) such that

  T{xn} = H(ω) · {xn}

In the notation of slide 30, where the argument of H is the base, we would write H(e^{jω}).

Another notation is, in the continuous case,

  F{h(t)}(ω) = H(e^{jω}) = ∫_{−∞}^{∞} h(t) · e^{−jωt} dt

  F^{−1}{H(e^{jω})}(t) = h(t) = (1/2π) ∫_{−∞}^{∞} H(e^{jω}) · e^{jωt} dω

and in the discrete-sequence case

  F{hn}(ω) = H(e^{jω}) = Σ_{n=−∞}^{∞} hn · e^{−jωn}

  F^{−1}{H(e^{jω})}(n) = hn = (1/2π) ∫_{−π}^{π} H(e^{jω}) · e^{jωn} dω

which treats the Fourier transform as a special case of the z-transform (to be introduced shortly).
Time shifting:

  x(t − ∆t) •−◦ X(f) · e^{−2πjf∆t}

Frequency shifting:

  x(t) · e^{2πj∆f t} •−◦ X(f − ∆f)

As any real-valued signal x(t) can be represented as a combination of sine and cosine functions, the spectrum of any real-valued signal will show the symmetry X(e^{jω}) = [X(e^{−jω})]∗, where ∗ denotes the complex conjugate (i.e., negated imaginary part).

Convolution theorem

Continuous form:

  F{(f ∗ g)(t)} = F{f(t)} · F{g(t)}
  F{f(t) · g(t)} = F{f(t)} ∗ F{g(t)}

Discrete form:

  {xn} ∗ {yn} = {zn}  ⇐⇒  X(e^{jω}) · Y(e^{jω}) = Z(e^{jω})

Modulation shifts the spectrum of the baseband signal from the interval −fl < f < fl to the intervals ±fc − fl < f < ±fc + fl. How can such a signal be demodulated?
Sampling using a Dirac comb

The loss of information in the sampling process that converts a continuous function x(t) into a discrete sequence {xn} defined by

  xn = x(ts · n) = x(n/fs)

can be modelled by multiplying x(t) with a comb of Dirac impulses, which results in the sampled function x̂(t).

The function x̂(t) now contains exactly the same information as the discrete sequence {xn}, but is still in a form that can be analysed using the Fourier transform on continuous functions.

Sampling and aliasing

[Figure: cos(2πtf) and cos(2πt(k·fs ± f)) pass through the same sample points.]

Sampled at frequency fs, the function cos(2πtf) cannot be distinguished from cos[2πt(kfs ± f)] for any k ∈ Z.
Nyquist limit and anti-aliasing filters

[Figure: spectra X(f) and X̂(f) before and after sampling; an anti-aliasing filter limits the double-sided bandwidth of the input to less than fs, and a reconstruction filter recovers the baseband copy of the periodic spectrum.]

Choice of sampling frequency

Due to causality and economic constraints, practical analog filters can only approximate such an ideal low-pass filter. Instead of a sharp transition between the “pass band” (< fs/2) and the “stop band” (> fs/2), they feature a “transition band” in which their signal attenuation gradually increases.

The sampling frequency is therefore usually chosen somewhat higher than twice the highest frequency of interest in the continuous signal (e.g., 4×). On the other hand, the higher the sampling frequency, the higher are CPU, power and memory requirements. Therefore, the choice of sampling frequency is a tradeoff between signal quality, analog filter cost and digital subsystem expenses.

Impulse response of an ideal low-pass filter with cut-off frequency fs/2, modulated so that it reconstructs the band between n·fs/2 and (n + 1)·fs/2 (the figure shows n = 2):

  h(t) = fs · [sin(πtfs/2) / (πtfs/2)] · cos(2πtfs·(2n + 1)/4)
       = (n + 1)fs · [sin(πt(n + 1)fs) / (πt(n + 1)fs)] − n·fs · [sin(πtnfs) / (πtnfs)]

[Figure: this reconstruction filter and the corresponding spectra for n = 2, centred at ±(5/4)fs.]
Exercise 9 Reconstructing a sampled baseband signal:

• Generate a one second long Gaussian noise sequence {rn} (using MATLAB function randn) with a sampling rate of 300 Hz.
• Use the fir1(50, 45/150) function to design a finite impulse response low-pass filter with a cut-off frequency of 45 Hz. Use the filtfilt function in order to apply that filter to the generated noise signal, resulting in the filtered noise signal {xn}.
• Then sample {xn} at 100 Hz by setting all but every third sample value to zero, resulting in sequence {yn}.
• Generate another low-pass filter with a cut-off frequency of 50 Hz and apply it to {yn}, in order to interpolate the reconstructed filtered noise signal {zn}. Multiply the result by three, to compensate the energy lost during sampling.
• Plot {xn}, {yn}, and {zn}. Finally compare {xn} and {zn}.

Why should the first filter have a lower cut-off frequency than the second?

Why does the reconstructed waveform differ much more from the original if you reduce the cut-off frequencies of both band-pass filters by 5 Hz?

Spectrum of a periodic signal

A signal x(t) that is periodic with frequency fp can be factored into a single period ẋ(t) convolved with an impulse comb p(t). This corresponds in the frequency domain to the multiplication of the spectrum of the single period with a comb of impulses spaced fp apart.

[Figure: x(t) = ẋ(t) ∗ p(t) in the time domain and X(f) = Ẋ(f) · P(f) in the frequency domain, with impulses spaced 1/fp apart in time and fp apart in frequency.]
Continuous vs discrete Fourier transform

• Sampling a continuous signal makes its spectrum periodic

[Figure: a sampled and periodic signal ẍ(t) with period n/fs and sample spacing 1/fs, and its sampled and periodic spectrum Ẍ(f) with impulses spaced fs/n apart and period fs.]

Discrete Fourier Transform visualized

[Figure: the n-point DFT written as a matrix–vector product mapping (x0, . . . , xn−1) to (X0, . . . , Xn−1).]

The n-point DFT of a signal {xi} sampled at frequency fs contains in the elements X0 to Xn/2 of the resulting frequency-domain vector the frequency components 0, fs/n, 2fs/n, 3fs/n, . . . , fs/2, and contains in Xn−1 down to Xn/2 the corresponding negative frequencies. Note that for a real-valued input vector, both X0 and Xn/2 will be real, too.

Why is there no phase information recovered at fs/2?
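A minimal MATLAB sketch of this bin layout (the sampling frequency, block size and test frequency are arbitrary illustration values): an n = 64 point DFT of a signal sampled at fs = 8000 Hz places a 1000 Hz sine wave into bins X8 and X56 (MATLAB indices 9 and 57), because 1000 Hz = 8 · fs/n:

  fs = 8000; n = 64;
  i  = 0:n-1;
  x  = sin(2*pi*1000*i/fs);      % 1000 Hz is an exact bin frequency
  X  = fft(x);
  [m, k] = max(abs(X(1:n/2+1)));
  (k-1) * fs/n                   % frequency of the strongest bin: 1000 Hz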
Fast Fourier Transform (FFT)

  Fn{xi}_{i=0}^{n−1}(k) = Σ_{i=0}^{n−1} xi · e^{−2πj·ik/n}

    = Σ_{i=0}^{n/2−1} x2i · e^{−2πj·ik/(n/2)} + e^{−2πj·k/n} · Σ_{i=0}^{n/2−1} x2i+1 · e^{−2πj·ik/(n/2)}

    = F_{n/2}{x2i}(k) + e^{−2πj·k/n} · F_{n/2}{x2i+1}(k),                     if k < n/2
    = F_{n/2}{x2i}(k − n/2) + e^{−2πj·k/n} · F_{n/2}{x2i+1}(k − n/2),         if k ≥ n/2

The DFT over n-element vectors can be reduced to two DFTs over n/2-element vectors plus n multiplications and n additions, leading to log2 n rounds and n·log2 n additions and multiplications overall, compared to n² for the equivalent matrix multiplication.

A high-performance FFT implementation in C with many processor-specific optimizations and support for non-power-of-2 sizes is available at http://www.fftw.org/.

Fast complex multiplication

Calculating the product of two complex numbers as

  (a + jb) · (c + jd) = (ac − bd) + j(ad + bc)

involves four (real-valued) multiplications and two additions.

The alternative calculation

  (a + jb) · (c + jd) = (α − β) + j(α + γ)   with   α = a(c + d),  β = d(a + b),  γ = c(b − a)

provides the same result with three multiplications and five additions.

The latter may perform faster on CPUs where multiplications take three or more times longer than additions.

This trick is most helpful on simpler microcontrollers. Specialized signal-processing CPUs (DSPs) feature 1-clock-cycle multipliers. High-end desktop processors use pipelined multipliers that stall where operations depend on each other.
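The decimation step above can be verified numerically. A minimal MATLAB sketch (random test data):

  n  = 8; x = randn(1, n);
  Xe = fft(x(1:2:n));                 % DFT of even-indexed samples x_0, x_2, ...
  Xo = fft(x(2:2:n));                 % DFT of odd-indexed samples x_1, x_3, ...
  k  = 0:n/2-1;
  w  = exp(-2j*pi*k/n);               % twiddle factors e^(-2*pi*j*k/n)
  X  = [Xe + w.*Xo, Xe - w.*Xo];      % combine the two half-size DFTs
  max(abs(X - fft(x)))                % ~ 0 up to rounding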
FFT-based convolution

[Figure: two sequences A′ and B′, zero-padded, with F^{−1}[F(A′) · F(B′)] = A′ ∗ B′.]

Both sequences are padded with zero values to a length of at least m + n − 1. This ensures that the start and end of the resulting sequence do not overlap.

Zero padding is usually applied to extend both sequence lengths to the next higher power of two (2^⌈log2(m+n−1)⌉), which facilitates the FFT.

With a causal sequence, simply append the padding zeros at the end. With a non-causal sequence, values with a negative index number are wrapped around the DFT block boundaries and appear at the right end. In this case, zero-padding is applied in the center of the block, between the last and first element of the sequence. Thanks to the periodic nature of the DFT, zero padding at both ends has the same effect as padding only at one end.

If both sequences can be loaded entirely into RAM, the FFT can be applied to them in one step. However, one of the sequences might be too large for that. It could also be a realtime waveform (e.g., a telephone signal) that cannot be delayed until the end of the transmission. In such cases, the sequence has to be split into shorter blocks that are separately convolved and then added together with a suitable overlap.

[Figure: overlap-add block convolution.]

The regions originally added as zero padding are, after convolution, aligned to overlap with the unpadded ends of their respective neighbour blocks. The overlapping parts of the blocks are then added together.

Deconvolution

A signal u(t) was distorted by convolution with a known impulse response h(t) (e.g., through a transmission channel or a sensor problem). The “smeared” result s(t) was recorded. Can we undo the damage and restore (or at least estimate) u(t)?

[Figure: example images convolved with point-spread functions, u ∗ h = s.]
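A minimal MATLAB sketch of FFT-based convolution with zero padding, compared against conv (random test data):

  x = randn(1, 100); h = randn(1, 31);     % m = 100, n = 31
  L = 2^nextpow2(length(x) + length(h) - 1);
  y = ifft(fft(x, L) .* fft(h, L));         % circular convolution of padded blocks
  y = real(y(1:length(x) + length(h) - 1)); % keep the m+n-1 valid samples
  max(abs(y - conv(x, h)))                  % ~ 0 up to rounding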
The convolution theorem turns the problem into one of multiplication:

  s(t) = ∫ u(t − τ) · h(τ) · dτ
  s = u ∗ h
  F{s} = F{u} · F{h}
  u = F^{−1}{F{s}/F{h}}

In practice, we also record some noise n(t) (quantization, etc.):

  c(t) = s(t) + n(t) = ∫ u(t − τ) · h(τ) · dτ + n(t)

Problem – At frequencies f where F{h}(f) approaches zero, the noise will be amplified (potentially enormously) during deconvolution:

  ũ = F^{−1}{F{c}/F{h}} = u + F^{−1}{F{n}/F{h}}

Spectral estimation

[Figure: a sine wave of frequency 4 × fs/32 and its discrete Fourier transform (a single sharp peak), and a sine wave of frequency 4.61 × fs/32 and its discrete Fourier transform (a broadened, lower peak).]
Typical workarounds:

→ Modify the Fourier transform of the impulse response, such that |F{h}(f)| > ǫ for some experimentally chosen threshold ǫ.

→ If estimates of the signal spectrum |F{s}(f)| and the noise spectrum |F{n}(f)| can be obtained, then we can apply the “Wiener filter” (“optimal filter”)

    W(f) = |F{s}(f)|² / (|F{s}(f)|² + |F{n}(f)|²)

  before deconvolution:

    ũ = F^{−1}{W · F{c}/F{h}}

Exercise 11 Use MATLAB to deconvolve the blurred stars from slide 26.
The files stars-blurred.png with the blurred-stars image and stars-psf.png with the impulse response (point-spread function) are available on the course-material web page. You may find the MATLAB functions imread, double, imagesc, circshift, fft2, ifft2 of use.
Try different ways to control the noise (see above) and distortions near the margins (windowing). [The MATLAB image processing toolbox provides ready-made “professional” functions deconvwnr, deconvreg, deconvlucy, edgetaper, for such tasks. Do not use these, except perhaps to compare their outputs with the results of your own attempts.]

We introduced the DFT as a special case of the continuous Fourier transform, where the input is sampled and periodic.

If the input is sampled, but not periodic, the DFT can still be used to calculate an approximation of the Fourier transform of the original continuous signal. However, there are two effects to consider. They are particularly visible when analysing pure sine waves.

Sine waves whose frequency is a multiple of the base frequency (fs/n) of the DFT are identical to their periodic extension beyond the size of the DFT. They are, therefore, represented exactly by a single sharp peak in the DFT. All their energy falls into one single frequency “bin” in the DFT result.

Sine waves with other frequencies, which do not match exactly one of the output frequency bins of the DFT, are still represented by a peak at the output bin that represents the nearest integer multiple of the DFT's base frequency. However, such a peak is distorted in two ways:

→ Its amplitude is lower (down to 63.7%).
→ Much signal energy has “leaked” to other frequencies.
[Figure: DFT magnitude as the input sine frequency sweeps from the frequency of bin 15 to that of bin 16, plotted over DFT index and input frequency.]

The leakage of energy to other frequency bins not only blurs the estimated spectrum. The peak amplitude also changes significantly as the frequency of a tone changes from that associated with one output bin to the next, a phenomenon known as scalloping. In the above graphic, an input sine wave gradually changes from the frequency of bin 15 to that of bin 16 (only positive frequencies shown).

The reason for the leakage and scalloping losses is easy to visualize with the help of the convolution theorem:

The operation of cutting a sequence of the size of the DFT input vector out of a longer original signal (the one whose continuous Fourier spectrum we try to estimate) is equivalent to multiplying this signal with a rectangular function. This destroys all information and continuity outside the “window” that is fed into the DFT.

Multiplication with a rectangular window of length T in the time domain is equivalent to convolution with sin(πfT)/(πfT) in the frequency domain.

The subsequent interpretation of this window as a periodic sequence by the DFT leads to sampling of this convolution result (sampling meaning multiplication with a Dirac comb whose impulses are spaced fs/n apart).

Where the window length was an exact multiple of the original signal period, sampling of the sin(πfT)/(πfT) curve leads to a single Dirac pulse, and the windowing causes no distortion. In all other cases, the effects of the convolution become visible in the frequency domain as leakage and scalloping losses.
[Figures: a sine wave, the same sine wave multiplied with a window function, and the discrete Fourier transforms of both; magnitude responses (dB over normalized frequency ×π rad/sample) of the rectangular, triangular, Hann and Hamming windows.]

  si = cos(2π · 3i/16) + cos(2π · 4i/16),  i ∈ {0, . . . , 15}.

The left plot shows the DFT of the windowed sequence

  xi = si · wi,  i ∈ {0, . . . , 15}

and the right plot shows the DFT of the zero-padded windowed sequence

  x′i = si · wi  for i ∈ {0, . . . , 15},   x′i = 0  for i ∈ {16, . . . , 63}

where wi = 0.54 − 0.46 × cos(2πi/15) is the Hamming window.

[Plots: DFT without zero padding (16 points) and DFT with 48 zeros appended to the windowed sequence (64 points).]
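The example above can be reproduced directly in MATLAB; the following sketch is just a transcription of the formulas given:

  i  = 0:15;
  si = cos(2*pi*3*i/16) + cos(2*pi*4*i/16);
  wi = 0.54 - 0.46*cos(2*pi*i/15);        % Hamming window, length 16
  xi = si .* wi;
  X1 = abs(fft(xi));                       % 16-point DFT of the windowed sequence
  X2 = abs(fft([xi zeros(1, 48)]));        % 64-point DFT after zero padding
  subplot(1,2,1); stem(0:15, X1);
  subplot(1,2,2); stem(0:63, X2);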
Numerous alternatives to the rectangular window have been proposed that reduce leakage and scalloping in spectral estimation. These are vectors multiplied element-wise with the input vector before applying the DFT to it. They all force the signal amplitude smoothly down to zero at the edge of the window, thereby avoiding the introduction of sharp jumps in the signal when it is extended periodically by the DFT.

Three examples of such window vectors {wi}_{i=0}^{n−1} are:

Triangular window (Bartlett window):

  wi = 1 − |1 − i/(n/2)|

Hann window (raised-cosine window, Hanning window):

  wi = 0.5 − 0.5 × cos(2π · i/(n − 1))

Hamming window:

  wi = 0.54 − 0.46 × cos(2π · i/(n − 1))

Applying the discrete Fourier transform to an n-element long real-valued sequence leads to a spectrum consisting of only n/2 + 1 discrete frequencies.

Since the resulting spectrum has already been distorted by multiplying the (hypothetically longer) signal with a windowing function that limits its length to n non-zero values and forces the waveform smoothly down to zero at the window boundaries, appending further zeros outside the window will not distort the signal further.

The frequency resolution of the DFT is the sampling frequency divided by the block size of the DFT. Zero padding can therefore be used to increase the frequency resolution of the DFT.

Note that zero padding does not add any additional information to the signal. The spectrum has already been “low-pass filtered” by being convolved with the spectrum of the windowing function. Zero padding in the time domain merely samples this spectrum blurred by the windowing step at a higher resolution, thereby making it easier to visually distinguish spectral lines and to locate their peak more precisely.
Window-based FIR filter design

The ideal low-pass filter with cut-off frequency fc has the frequency response

  H(f) = 1 if |f| < fc,  0 if |f| > fc  =  rect(f/(2fc))

and the impulse response

  h(t) = 2fc · [sin(2πtfc) / (2πtfc)] = 2fc · sinc(2fc · t).

Sampling this impulse response with the sampling frequency fs of the signal to be processed will lead to a periodic frequency characteristic, that matches the periodic spectrum of the sampled signal.

There are two problems though:

→ the impulse response is infinitely long
→ this filter is not causal, that is h(t) ≠ 0 for t < 0

Solutions:

→ Make the impulse response finite by multiplying the sampled h(t) with a windowing function
→ Make the impulse response causal by adding a delay of half the window size

The impulse response of an n-th order low-pass filter is then chosen as

  hi = 2fc/fs · [sin[2π(i − n/2)fc/fs] / [2π(i − n/2)fc/fs]] · wi

where {wi} is a windowing sequence, such as the Hamming window

  wi = 0.54 − 0.46 × cos(2πi/n)

with wi = 0 for i < 0 and i > n.

Note that for fc = fs/4, we have hi = 0 for all even values of i. Therefore, this special case requires only half the number of multiplications during the convolution. Such “half-band” FIR filters are used, for example, as anti-aliasing filters wherever a sampling rate needs to be halved.

[Figure: FIR low-pass design example – z-plane zeros, impulse response, magnitude and phase response; order: n = 30, cutoff frequency (−6 dB): fc = 0.25 × fs/2, window: Hamming.]

Frequency inversion

In order to turn the spectrum X(f) of a real-valued signal xi sampled at fs into an inverted spectrum X′(f) = X(fs/2 − f), we merely have to shift the periodic spectrum by fs/2:

[Figure: X′(f) obtained by convolving X(f) with impulses at ±fs/2.]

This can be accomplished by multiplying the sampled sequence xi with yi = cos(πfs t) = cos(πi), which is nothing but multiplication with the sequence

  . . . , 1, −1, 1, −1, 1, −1, 1, −1, . . .

So in order to design a discrete high-pass filter that attenuates all frequencies f outside the range fc < |f| < fs/2, we merely have to design a low-pass filter that attenuates all frequencies outside the range −fc < f < fc, and then multiply every second value of its impulse response with −1.
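A minimal MATLAB sketch of the windowed-sinc design described above, including the sign flip from the frequency-inversion section that turns the low-pass into a high-pass (n = 30 and fc = 0.25 × fs/2 follow the example figure; everything else is a direct transcription of the formulas):

  n  = 30; fs = 2; fc = 0.25*fs/2;              % normalized so that fs/2 = 1
  i  = 0:n;
  t  = 2*pi*(i - n/2)*fc/fs;
  h  = 2*fc/fs * sin(t)./t;                      % sampled ideal low-pass
  h(i == n/2) = 2*fc/fs;                         % limit value of sin(t)/t at t = 0
  w  = 0.54 - 0.46*cos(2*pi*i/n);                % Hamming window
  hl = h .* w;                                   % windowed low-pass FIR filter
  hh = hl .* (-1).^i;                            % high-pass: flip every second sample
  freqz(hl, 1); figure; freqz(hh, 1);            % amplitude/phase responses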
We truncate the ideal, infinitely-long impulse response by multiplication with a window sequence. In the frequency domain, this will convolve the rectangular frequency response of the ideal low-pass filter with the frequency characteristic of the window. The width of the main lobe determines the width of the transition band, and the side lobes cause ripples in the passband and stopband.

Converting a low-pass into a band-pass filter

To obtain a band-pass filter that attenuates all frequencies f outside the range fl < f < fh, we first design a low-pass filter with a cut-off frequency (fh − fl)/2 and multiply its impulse response with a sine wave of frequency (fh + fl)/2, before applying the usual windowing:

  hi = (fh − fl)/fs · [sin[π(i − n/2)(fh − fl)/fs] / [π(i − n/2)(fh − fl)/fs]] · sin[π(i − n/2)(fh + fl)/fs] · wi

[Figure: the band-pass frequency response H(f) as the convolution of a low-pass response of width fh − fl with impulses at ±(fh + fl)/2.]

Polynomial representation of sequences

We can represent sequences {xn} as polynomials:

  X(v) = Σ_{n=−∞}^{∞} xn · v^n

Example of polynomial multiplication:

    (1 + 2v + 3v²) · (2 + 1v)
  =  2 + 4v + 6v²
         + 1v + 2v² + 3v³
  =  2 + 5v + 8v² + 3v³

Compare this with the convolution of two sequences (in MATLAB):

  conv([1 2 3], [2 1]) equals [2 5 8 3]
Exercise 12 Explain the difference between the DFT, FFT, and FFTW.

Exercise 13 Push-button telephones use a combination of two sine tones to signal, which button is currently being pressed:

           1209 Hz  1336 Hz  1477 Hz  1633 Hz
  697 Hz      1        2        3        A
  770 Hz      4        5        6        B
  852 Hz      7        8        9        C
  941 Hz      *        0        #        D

(a) You receive a digital telephone signal with a sampling frequency of 8 kHz. You cut a 256-sample window out of this sequence, multiply it with a windowing function and apply a 256-point DFT. What are the indices where the resulting vector (X0, X1, . . . , X255) will show the highest amplitude if button 9 was pushed at the time of the recording?

(b) Use MATLAB to determine, which button sequence was typed in the touch tones recorded in the file touchtone.wav on the course-material web page.

Convolution of sequences is equivalent to polynomial multiplication:

  {hn} ∗ {xn} = {yn}  ⇒  yn = Σ_{k=−∞}^{∞} hk · xn−k

  H(v) · X(v) = ( Σ_{n=−∞}^{∞} hn v^n ) · ( Σ_{n=−∞}^{∞} xn v^n )
              = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} hk · xn−k · v^n

Note how the Fourier transform of a sequence can be accessed easily from its polynomial form:

  X(e^{−jω}) = Σ_{n=−∞}^{∞} xn e^{−jωn}
Example of polynomial division:

  1/(1 − av) = 1 + av + a²v² + a³v³ + · · · = Σ_{n=0}^{∞} a^n v^n

           1 + av + a²v² + · · ·
  1 − av | 1
           1 − av
               av
               av − a²v²
                    a²v²
                    · · ·

The z-transform defines for each sequence a continuous complex-valued surface over the complex plane C. For finite sequences, its value is always defined across the entire complex plane.

For infinite sequences, it can be shown that the z-transform converges only for the region

  lim_{n→∞} |xn+1/xn| < |z| < lim_{n→−∞} |xn+1/xn|

The z-transform identifies a sequence unambiguously only in conjunction with a given region of convergence. In other words, there exist different sequences, that have the same expression as their z-transform, but that converge for different amplitudes of z.
The z-transform

The z-transform of a sequence {xn} is defined as:

  X(z) = Σ_{n=−∞}^{∞} xn · z^{−n}

Note that it differs only in the sign of the exponent from the polynomial representation discussed on the preceding slides.

Recall that the above X(z) is exactly the factor with which an exponential sequence {z^n} is multiplied, if it is convolved with {xn}:

  {z^n} ∗ {xn} = {yn}
  ⇒ yn = Σ_{k=−∞}^{∞} z^{n−k} xk = z^n · Σ_{k=−∞}^{∞} z^{−k} xk = z^n · X(z)

The z-transform of the impulse response {hn} of the causal LTI system defined by

  Σ_{l=0}^{k} al · yn−l = Σ_{l=0}^{m} bl · xn−l

with {yn} = {hn} ∗ {xn} is the rational function

  H(z) = (b0 + b1 z^{−1} + b2 z^{−2} + · · · + bm z^{−m}) / (a0 + a1 z^{−1} + a2 z^{−2} + · · · + ak z^{−k})

(bm ≠ 0, ak ≠ 0) which can also be written as

  H(z) = [ z^k · Σ_{l=0}^{m} bl z^{m−l} ] / [ z^m · Σ_{l=0}^{k} al z^{k−l} ].

H(z) has m zeros and k poles at non-zero locations in the z plane, plus k − m zeros (if k > m) or m − k poles (if m > k) at z = 0.

[Block diagram: direct form I implementation of this system, with feed-forward coefficients b0 . . . bm (scaled by a0^{−1}) and feedback coefficients −a1 . . . −ak.]
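A minimal MATLAB sketch relating the coefficient vectors to zeros, poles and the impulse response (the coefficient values are arbitrary illustration values):

  b = [1 0.5 0.2]; a = [1 -0.6 0.08];      % arbitrary example coefficients
  zeros_H = roots(b)                        % the c_l: zeros of H(z)
  poles_H = roots(a)                        % the d_l: poles of H(z)
  h = filter(b, a, [1 zeros(1, 19)]);       % impulse response {h_n}
  % evaluate H(z) at some z and compare with the z-transform of {h_n}:
  z0 = 1.5;
  polyval(b, z0)/polyval(a, z0) - sum(h .* z0.^(-(0:19)))   % small (h was truncated)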
This function can be converted into the form

  H(z) = (b0/a0) · [ ∏_{l=1}^{m} (1 − cl·z^{−1}) ] / [ ∏_{l=1}^{k} (1 − dl·z^{−1}) ]
       = (b0/a0) · z^{k−m} · [ ∏_{l=1}^{m} (z − cl) ] / [ ∏_{l=1}^{k} (z − dl) ]

where the cl are the non-zero positions of zeros (H(cl) = 0) and the dl are the non-zero positions of the poles (i.e., z → dl ⇒ |H(z)| → ∞) of H(z). Except for a constant factor, H(z) is entirely characterized by the position of these zeros and poles.

As with the Fourier transform, convolution in the time domain corresponds to complex multiplication in the z-domain:

  {xn} •−◦ X(z), {yn} •−◦ Y(z)  ⇒  {xn} ∗ {yn} •−◦ X(z) · Y(z)

[Figure: amplitude plot of |H(z)| over the z plane for

  H(z) = 0.8 / (1 − 0.2·z^{−1}) = 0.8z / (z − 0.2),

the one-pole filter yn = 0.8·xn + 0.2·yn−1, which features a zero at 0 and a pole at 0.2.]

Example transfer functions (z-plane pole/zero plots and impulse responses):

  H(z) = z/(z − 0.7) = 1/(1 − 0.7·z^{−1})   (pole at 0.7: decaying impulse response)
  H(z) = z/(z − 0.9) = 1/(1 − 0.9·z^{−1})   (pole at 0.9: slower decay)
  H(z) = z/(z − 1)   = 1/(1 − z^{−1})       (pole on the unit circle: step-like impulse response)
  H(z) = z/(z − 1.1) = 1/(1 − 1.1·z^{−1})   (pole outside the unit circle: growing impulse response)
Further example transfer functions (z-plane pole/zero plots and impulse responses):

  H(z) = z² / [(z − 0.9·e^{jπ/6})·(z − 0.9·e^{−jπ/6})] = 1 / (1 − 1.8·cos(π/6)·z^{−1} + 0.9²·z^{−2})
         (complex-conjugate pole pair inside the unit circle: decaying oscillation)

  H(z) = z² / [(z − e^{jπ/6})·(z − e^{−jπ/6})] = 1 / (1 − 2·cos(π/6)·z^{−1} + z^{−2})
         (pole pair on the unit circle: sustained oscillation)

  H(z) = z² / [(z − 0.9·e^{jπ/2})·(z − 0.9·e^{−jπ/2})] = 1 / (1 − 1.8·cos(π/2)·z^{−1} + 0.9²·z^{−2}) = 1 / (1 + 0.9²·z^{−2})

  H(z) = z/(z + 1) = 1/(1 + z^{−1})
         (pole at −1: alternating impulse response)

IIR Filter design techniques

The design of a filter starts with specifying the desired parameters:

→ The passband is the frequency range where we want to approximate a gain of one.
→ The stopband is the frequency range where we want to approximate a gain of zero.
→ The order of a filter is the number of poles it uses in the z-domain, and equivalently the number of delay elements necessary to implement it.
→ Both passband and stopband will in practice not have gains of exactly one and zero, respectively, but may show several deviations from these ideal values, and these ripples may have a specified maximum quotient between the highest and lowest gain.
→ There will in practice not be an abrupt change of gain between passband and stopband, but a transition band where the frequency response will gradually change from its passband to its stopband value.

The designer can then trade off conflicting goals such as a small transition band, a low order, a low ripple amplitude, or even an absence of ripples.

Design techniques for making these tradeoffs for analog filters (involving capacitors, resistors, coils) can also be used to design digital IIR filters:

Butterworth filters

Have no ripples, gain falls monotonically across the pass and transition band. Within the passband, the gain drops slowly down to √(1 − 1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor 2^N per octave (N · 20 dB/decade).
Chebyshev type I filters

Distribute the gain error uniformly throughout the passband (equiripples) and drop off monotonically outside.

Chebyshev type II filters

Distribute the gain error uniformly throughout the stopband (equiripples) and drop off monotonically in the passband.

Elliptic filters

Distribute the gain error as equiripples in both the passband and the stopband, achieving the best combination of passband-gain tolerance, stopband-gain tolerance, and transition-band width that can be achieved at a given filter order.

All these filter design techniques are implemented in the MATLAB Signal Processing Toolbox in the functions butter, cheby1, cheby2, and ellip, which output the coefficients an and bn of the difference equation that describes the filter. These can be applied with filter to a sequence, or can be visualized with zplane as poles/zeros in the z-domain, with impz as an impulse response, and with freqz as an amplitude and phase spectrum. The commands sptool and fdatool provide interactive GUIs to design digital filters.

Butterworth filter design example

[Figure: z-plane poles/zeros, impulse response, magnitude and phase response; order: 5, cutoff frequency (−3 dB): 0.25 × fs/2.]
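A minimal MATLAB sketch using the toolbox functions listed above (the order and cut-off frequency follow the Butterworth example figure):

  [b, a] = butter(5, 0.25);         % order 5, cutoff 0.25 x fs/2 (-3 dB)
  freqz(b, a);                      % amplitude and phase response
  figure; zplane(b, a);             % poles and zeros in the z-plane
  figure; impz(b, a, 32);           % impulse response
  y = filter(b, a, randn(1, 1000)); % apply the filter to a test sequence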
[Figures: further IIR design examples showing z-plane poles/zeros, impulse responses, and magnitude/phase responses; one of them: order 5, cutoff frequency 0.5 × fs/2, stop-band ripple −20 dB.]

Exercise 14 Consider the causal filters described by the following z-transforms:

  (a) H(z) = 1 / (1 − (1/2)·z^{−1})

  (b) H′(z) = (1 − (1/4)⁴·z^{−4}) / (1 − (1/4)·z^{−1})

  (c) H′′(z) = 1/2 + (1/4)·z^{−1} + (1/2)·z^{−2}

Exercise 15 (a) Perform the polynomial division of the rational function given in exercise 14 (a) until you have found the coefficient of z^{−5} in the result.
(b) Perform the polynomial division of the rational function given in exercise 14 (b) until you have found the coefficient of z^{−10} in the result.
(c) Use its z-transform to show that the filter in exercise 14 (b) has actually a finite impulse response and draw the corresponding block diagram.
Elliptic filter design example

[Figure: z-plane poles/zeros, impulse response, magnitude and phase response; order: 5, cutoff frequency: 0.5 × fs/2, pass-band ripple: −3 dB, stop-band ripple: −20 dB.]

Exercise 16 Consider the system h : {xn} → {yn} with yn + yn−1 = xn − xn−4.
(a) Draw the direct form I block diagram of a digital filter that realises h.
(b) What is the impulse response of h?
(g) How many poles does H have in the complex plane?
(h) Write H as a fraction using the position of its poles and zeros and draw their location in relation to the complex unit circle.
(i) If h is applied to a sound file with a sampling frequency of 8000 Hz, sine waves of what frequency will be eliminated and sine waves of what frequency will be quadrupled in their amplitude?
Random sequences and noise

A discrete random sequence {xn} is a sequence of numbers

  . . . , x−2, x−1, x0, x1, x2, . . .

where each value xn is the outcome of a random variable xn in a corresponding sequence of random variables

  . . . , x−2, x−1, x0, x1, x2, . . .

Such a collection of random variables is called a random process. Each individual random variable xn is characterized by its probability distribution function

  Pxn(a) = Prob(xn ≤ a)

and the entire random process is characterized completely by all joint probability distribution functions.

Two random variables xn and xm are called independent if

  Pxn,xm(a, b) = Pxn(a) · Pxm(b)

and a random process is called stationary if

  Pxn1+l,...,xnk+l(a1, . . . , ak) = Pxn1,...,xnk(a1, . . . , ak)

for all l, that is, if the probability distributions are time invariant.

The derivative pxn(a) = P′xn(a) is called the probability density function, and helps us to define quantities such as the

→ expected value E(xn) = ∫ a · pxn(a) da
→ mean-square value (average power) E(|xn|²) = ∫ |a|² · pxn(a) da

A stationary random process {xn} can be characterized by its mean value

  mx = E(xn),

its variance

  σx² = E(|xn − mx|²) = γxx(0)

(σx is also called standard deviation), its autocorrelation sequence

  φxx(k) = E(xn+k · x∗n)

and its autocovariance sequence

  γxx(k) = E[(xn+k − mx) · (xn − mx)∗] = φxx(k) − |mx|²

A pair of stationary random processes {xn} and {yn} can, in addition, be characterized by its crosscorrelation sequence

  φxy(k) = E(xn+k · yn∗)

where φyx(k) = φxy(−k) = E(yn+k · x∗n).

Deterministic crosscorrelation sequence

For deterministic sequences {xn} and {yn}, the crosscorrelation sequence is

  cxy(k) = Σ_{i=−∞}^{∞} xi+k · yi.

After dividing through the overlapping length of the finite sequences involved, cxy(k) can be used to estimate, from a finite sample of a stationary random sequence, the underlying φxy(k). MATLAB's xcorr function does that with option unbiased.

If {xn} is similar to {yn}, but lags l elements behind (xn ≈ yn−l), then cxy(l) will be a peak in the crosscorrelation sequence. It is therefore widely calculated to locate shifted versions of a known sequence in another one.

The deterministic crosscorrelation sequence is a close cousin of the convolution, with just the second input sequence mirrored:

  {cxy(n)} = {xn} ∗ {y−n}

Filtered random sequences: the output {yn} = {hn} ∗ {xn} of an LTI system applied to a stationary random sequence {xn} will then be another random sequence, characterized by

  my = mx · Σ_{k=−∞}^{∞} hk

and

  φyy(k) = Σ_{i=−∞}^{∞} φxx(k − i) · chh(i),   where   chh(k) = Σ_{i=−∞}^{∞} hi+k · hi.
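A minimal MATLAB sketch of using the crosscorrelation sequence to locate a shifted copy of a known sequence (arbitrary test data; xcorr is the toolbox function mentioned above):

  x = randn(1, 1000);               % known stationary random sequence
  d = 37;                           % unknown lag to be recovered
  y = [zeros(1, d) x(1:end-d)];     % y_n ~ x_(n-d): x delayed by d samples
  [c, lags] = xcorr(y, x);          % c_yx(k) = sum_i y_(i+k) x_i
  [m, k] = max(c);
  lags(k)                           % peak position: recovers d = 37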
DFT averaging

[Figures: magnitude spectra obtained from 1000 windows of a test signal (two sine waves plus noise), averaged in different ways.]

The rightmost figure was generated from the same set of 1000 windows, but this time the complex values of the DFTs were averaged before the absolute value was taken. This is called coherent averaging and, because of the linearity of the DFT, identical to first averaging the 1000 windows and then applying a single DFT and taking its absolute value. The windows start 64 samples apart. Only periodic waveforms with a period that divides 64 are not averaged away. This periodic averaging step suppresses both the noise and the second sine wave.

Periodic averaging

If a zero-mean signal {xi} has a periodic component with period p, the periodic component can be isolated by periodic averaging:

  x̄i = lim_{k→∞} [1/(2k + 1)] · Σ_{n=−k}^{k} xi+pn

Periodic averaging corresponds in the time domain to convolution with a Dirac comb Σ_n δi−pn. In the frequency domain, this means multiplication with a Dirac comb that eliminates all frequencies but multiples of 1/p.

Image, video and audio compression

Structure of modern audiovisual communication systems:

[Figure: block diagram of a modern audiovisual communication system.]

Audio-visual lossy coding today typically consists of these steps:

→ A transducer converts the original stimulus into a voltage.
→ This analog signal is then sampled and quantized.
  The digitization parameters (sampling frequency, quantization levels) are preferably chosen generously beyond the ability of human senses or output devices.
→ The digitized sensor-domain signal is then transformed into a perceptual domain.
  This step often mimics some of the first neural processing steps in humans.
→ This signal is quantized again, based on a perceptual model of what level of quantization-noise humans can still sense.
→ The resulting quantized levels may still be highly statistically dependent. A prediction or decorrelation transform exploits this and produces a less dependent symbol sequence of lower entropy.
→ An entropy coder turns that into an apparently-random bit string, whose length approximates the remaining entropy.

The first neural processing steps in humans are in effect often a kind of decorrelation transform; our eyes and ears were optimized like any other AV communications system. This allows us to use the same transform for decorrelating and transforming into a perceptually relevant domain.
Outline of the remaining lectures

→ Quick review of entropy coding
→ Transform coding: techniques for converting sequences of highly-dependent symbols into less-dependent lower-entropy sequences.
  • run-length coding
  • decorrelation, Karhunen-Loève transform (PCA)
  • other orthogonal transforms (especially DCT)
→ Introduction to some characteristics and limits of human senses
  • perceptual scales and sensitivity limits
  • colour vision
  • human hearing limits, critical bands, audio masking
→ Image and audio coding standards
  • A/µ-law coding (digital telephone network)

Literature

→ D. Salomon: A guide to data compression methods.

Entropy coding review – Huffman

  Entropy: H = Σ_{α∈A} p(α) · log2(1/p(α)) = 2.3016 bit

[Figure: Huffman code tree for the symbols u, v, w, x, y, z with probabilities 0.35, 0.20, 0.20, 0.15, 0.05, 0.05; intermediate node probabilities 0.10, 0.25, 0.40, 0.60 and root 1.00.]

  Mean codeword length: 2.35 bit

Huffman's algorithm constructs an optimal code-word tree for a set of symbols with known probability distribution. It iteratively picks the two elements of the set with the smallest probability and combines them into a new element whose probability is the sum of the two; this repeats until only a single tree remains.

Entropy coding review – arithmetic coding

[Figure: the interval [0, 1] is partitioned according to the symbol probabilities (here at 0.0, 0.35, 0.55, 0.75, 0.9, 0.95, 1.0) and recursively subdivided as each symbol of the message is encoded; the example narrows to the interval [0.5745, 0.5822].]
Arithmetic coding

Several advantages:

→ Length of output bitstring can approximate the theoretical information content of the input to within 1 bit.
→ Performs well with probabilities > 0.5, where the information per symbol is less than one bit.
→ Interval arithmetic makes it easy to change symbol probabilities (no need to modify code-word tree) ⇒ convenient for adaptive coding

Can be implemented efficiently with fixed-length arithmetic by rounding probabilities and shifting out leading digits as soon as leading zeros appear in interval size. Usually combined with adaptive probability estimation.

Huffman coding remains popular because of its simplicity and lack of patent-licence issues.

Old (Group 3 MH) fax code

• Run-length encoding plus modified Huffman code
• Fixed code table (from eight sample pages)
• separate codes for runs of white and black pixels
• termination code in the range 0–63 switches between black and white code
• makeup code can extend length of a run by a multiple of 64
• termination run length 0 needed where run length is a multiple of 64
• single white column added on left side before transmission
• makeup codes above 1728 equal for black and white
• 12-bit end-of-line marker: 000000000001 (can be prefixed by up to seven zero-bits to reach next byte boundary)

Example: line with 2 w, 4 b, 200 w, 3 b, EOL →
1000|011|010111|10011|10|000000000001

  pixels   white code   black code
       0   00110101     0000110111
       1   000111       010
       2   0111         11
       3   1000         10
       4   1011         011
       5   1100         0011
       6   1110         0010
       7   1111         00011
       8   10011        000101
       9   10100        000100
      10   00111        0000100
      11   01000        0000101
      12   001000       0000111
      13   000011       00000100
      14   110100       00000111
      15   110101       000011000
      16   101010       0000010111
     ...   ...          ...
      63   00110100     000001100111
      64   11011        0000001111
     128   10010        000011001000
     192   010111       000011001001
     ...   ...          ...
    1728   010011011    0000001100101
Predictive coding:

[Block diagram: the encoder transmits the prediction error g(t) = f(t) − P(f(t−1), f(t−2), . . .); the decoder recovers f(t) by adding the same prediction P(f(t−1), f(t−2), . . .) back to g(t).]

Delta coding (DPCM):            P(x) = x

Linear predictive coding:       P(x1, . . . , xn) = Σ_{i=1}^{n} ai · xi

[Figure: JBIG context template of ten previously transmitted neighbour pixels; “?” marks the pixel to be predicted.]

Based on the counted numbers nblack and nwhite of how often each pixel value has been encountered so far in each of the 1024 contexts, the probability for the next pixel being black is estimated as

  pblack = (nblack + 1) / (nwhite + nblack + 2)

The encoder updates its estimate only after the newly counted pixel has been encoded, such that the decoder knows the exact same statistics.

Joint Bi-level Expert Group: International Standard ISO 11544, 1993.
Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/
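A minimal MATLAB sketch of delta coding (DPCM) with the trivial predictor P(x) = x (arbitrary test signal):

  f = cumsum(randn(1, 50));        % example signal with correlated samples
  prev = [0 f(1:end-1)];           % previous sample, P(x) = x
  g = f - prev;                    % encoder: transmit prediction error
  f2 = cumsum(g);                  % decoder: add predictor output back
  max(abs(f - f2))                 % ~ 0: lossless reconstruction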
Statistical dependence

Random variables X, Y are dependent iff ∃x, y:

  P(X = x ∧ Y = y) ≠ P(X = x) · P(Y = y).

If X, Y are dependent, then

  ⇒ ∃x, y : P(X = x | Y = y) ≠ P(X = x)  ∨  P(Y = y | X = x) ≠ P(Y = y)
  ⇒ H(X|Y) < H(X)  ∨  H(Y|X) < H(Y)

Application

Where x is the value of the next symbol to be transmitted and y is the vector of all symbols transmitted so far, accurate knowledge of the conditional probability P(X = x | Y = y) will allow a transmitter to remove all redundancy.

An application example of this approach is JBIG, but there y is limited to 10 past single-bit pixels and P(X = x | Y = y) is only an estimate.

Modelling these conditional probabilities in their most general form, based on statistical measurements of example signals, quickly reaches practical limits. JBIG needs an array of only 2^11 = 2048 counting registers to maintain estimator statistics for its 10-bit context. If we wanted to encode each 24-bit pixel of a colour image based on its statistical dependence of the full colour information from just ten previous neighbour pixels, the required number of

  (2^24)^11 ≈ 3 × 10^80

registers for storing each probability will exceed the estimated number of particles in this universe. (Neither will we encounter enough pixels to gather statistically significant counts for that many contexts.)

Correlation

Two random variables X ∈ R and Y ∈ R are correlated iff

  E{[X − E(X)] · [Y − E(Y)]} ≠ 0

where E(· · ·) denotes the expected value of a random-variable term.

Correlation implies dependence, but dependence does not always lead to correlation (see the example in the figure: a distribution that is dependent but not correlated). However, most dependency in audiovisual data is a consequence of correlation, which is algorithmically much easier to exploit.

  Positive correlation: higher X ⇔ higher Y, lower X ⇔ lower Y
  Negative correlation: lower X ⇔ higher Y, higher X ⇔ lower Y

[Figure: scatter plots of the values of neighbour pixels at various distances (e.g., 4 and 8).]

Covariance and correlation

We define the covariance of two random variables X and Y as

  Cov(X, Y) = E{[X − E(X)] · [Y − E(Y)]} = E(X·Y) − E(X) · E(Y)

The Pearson correlation coefficient is

  ρX,Y = Cov(X, Y) / √(Var(X) · Var(Y))

Decorrelation by coordinate transform

[Figure: neighbour-pixel value pairs before and after decorrelation, and the resulting probability distribution and entropy.]
Quick review: eigenvectors and eigenvalues

We are given a square matrix A ∈ R^{n×n}. The vector x ∈ R^n is an eigenvector of A if there exists a scalar value λ ∈ R such that

  Ax = λx.

The corresponding λ is the eigenvalue of A associated with x.

The length of an eigenvector is irrelevant, as any multiple of it is also an eigenvector. Eigenvectors are in practice normalized to length 1.

Spectral decomposition

Any real, symmetric matrix A = A^T ∈ R^{n×n} can be diagonalized into the form

  A = UΛU^T,

where Λ = diag(λ1, λ2, . . . , λn) is the diagonal matrix of the ordered eigenvalues of A (with λ1 ≥ λ2 ≥ · · · ≥ λn), and the columns of U are the n corresponding orthonormal eigenvectors of A.

Karhunen-Loève transform example: the pixel values of the three colour planes of an image are arranged as the rows of a matrix

  S = ( r1,1 r1,2 r1,3 · · · rr,r )
      ( g1,1 g1,2 g1,3 · · · gr,r )
      ( b1,1 b1,2 b1,3 · · · br,r )

whose estimated covariance matrix is

  C = ( 0.0328 0.0256 0.0160 )
      ( 0.0256 0.0216 0.0140 )
      ( 0.0160 0.0140 0.0109 )

Let B be the matrix whose columns are the orthonormal eigenvectors of Cov(X) and D the diagonal matrix of the corresponding eigenvalues:

  Cov(X)B = BD

B is orthonormal, that is BB^T = I.

Multiplying the above from the right with B^T leads to the spectral decomposition

  Cov(X) = BDB^T

of the covariance matrix. Similarly multiplying instead from the left with B^T leads to

  B^T Cov(X) B = D

and therefore shows with

  Cov(B^T X) = D

that the eigenvector matrix B^T is the wanted transform.

The Karhunen-Loève transform (also known as Hotelling transform or Principal Component Analysis) is the multiplication of a correlated random vector X with the orthonormal eigenvector matrix B^T from the spectral decomposition Cov(X) = BDB^T of its covariance matrix. This leads to a decorrelated random vector B^T X whose covariance matrix is diagonal.
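A minimal MATLAB sketch of the Karhunen-Loève transform as defined above, using randomly generated correlated sample vectors (the mixing matrix is an arbitrary illustration value):

  X = randn(1000, 3) * [1 0.8 0.6; 0 1 0.5; 0 0 1];  % 1000 correlated sample vectors (rows)
  C = cov(X);                      % estimated covariance matrix (divides by n-1)
  [B, D] = eig(C);                 % Cov(X) B = B D, columns of B are eigenvectors
  Y = X * B;                       % decorrelated vectors: B' applied to each sample
  cov(Y)                           % approximately diagonal (entries of D)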
[When estimating a covariance from a number of samples, the sum is divided by the number of samples minus one. This takes into account the variance of the mean S̄_c, which is not the exact expected value, but only an estimate of it.]

The resulting covariance matrix has three eigenvalues 0.0622, 0.0025, and 0.0006:

  ( 0.0328 0.0256 0.0160 ) ( 0.7167 )          ( 0.7167 )
  ( 0.0256 0.0216 0.0140 ) ( 0.5833 ) = 0.0622 ( 0.5833 )
  ( 0.0160 0.0140 0.0109 ) ( 0.3822 )          ( 0.3822 )

  ( 0.0328 0.0256 0.0160 ) ( −0.5509 )          ( −0.5509 )
  ( 0.0256 0.0216 0.0140 ) (  0.1373 ) = 0.0025 (  0.1373 )
  ( 0.0160 0.0140 0.0109 ) (  0.8232 )          (  0.8232 )

  ( 0.0328 0.0256 0.0160 ) ( −0.4277 )          ( −0.4277 )
  ( 0.0256 0.0216 0.0140 ) (  0.8005 ) = 0.0006 (  0.8005 )
  ( 0.0160 0.0140 0.0109 ) ( −0.4198 )          ( −0.4198 )

and can therefore be diagonalized as

  ( 0.0328 0.0256 0.0160 )
  ( 0.0256 0.0216 0.0140 ) = C = U · D · U^T =
  ( 0.0160 0.0140 0.0109 )

  ( 0.7167 −0.5509 −0.4277 ) ( 0.0622 0      0      ) (  0.7167 0.5833  0.3822 )
  ( 0.5833  0.1373  0.8005 ) ( 0      0.0025 0      ) ( −0.5509 0.1373  0.8232 )
  ( 0.3822  0.8232 −0.4198 ) ( 0      0      0.0006 ) ( −0.4277 0.8005 −0.4198 )

(e.g. using MATLAB’s singular-value decomposition function svd).

Spatial correlation

The previous example used the Karhunen-Loève transform in order to eliminate correlation between colour planes. While this is of some relevance for image compression, far more correlation can be found between neighbour pixels within each colour plane.

In order to exploit such correlation using the KLT, the sample set has to be extended from individual pixels to entire images. The underlying calculation is the same as in the preceding example, but this time the columns of S are entire (monochrome) images. The rows are the different images found in the set of test images that we use to examine typical correlations between neighbour pixels.

In other words, we use the same formulas as in the previous example, but this time n is the number of pixels per image and m is the number of sample images. The Karhunen-Loève transform is here no longer a rotation in a 3-dimensional colour space, but it operates now in a much larger vector space that has as many dimensions as an image has pixels.

To keep things simple, we look in the next experiment only at m = 9000 1-dimensional “images” with n = 32 pixels each. As a further simplification, we use not real images, but random noise that was filtered such that its amplitude spectrum is proportional to 1/f, where f is the frequency. The result would be similar in a sufficiently large collection of real test images.
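The following MATLAB fragment is a minimal sketch of this kind of experiment (all variable and file names are illustrative, and the 1/f spectral shaping is only approximate): it generates m = 9000 random sample vectors of n = 32 values, estimates their covariance matrix, and applies the KLT obtained from its eigenvectors.

  n = 32;  m = 9000;
  X = fft(randn(n, m));                    % white Gaussian noise columns, frequency domain
  f = [1, 1:n/2, n/2-1:-1:1]';             % |frequency| of each FFT bin (DC left unscaled)
  S = real(ifft(X ./ repmat(f, 1, m)));    % amplitude spectrum shaped roughly like 1/f
  C = cov(S');                             % n x n sample covariance (divides by m-1)
  [B, D] = eig(C);                         % columns of B are the eigenvectors of C
  [d, k] = sort(diag(D), 'descend');       % order by decreasing eigenvalue
  B = B(:, k);
  Y = B' * S;                              % decorrelated samples: cov(Y') is approx. diag(d)

The 3×3 colour example above can be reproduced in the same way, with S holding one RGB pixel per column.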
Breakthrough: Ahmed/Natarajan/Rao discovered the DCT as an excellent approximation of the KLT for typical photographic images, but far more efficient to calculate.

Ahmed, Natarajan, Rao: Discrete Cosine Transform. IEEE Transactions on Computers, Vol. 23, January 1974, pp. 90–93.
The 1-dimensional DCT

  s(x) = Σ_{u=0}^{N−1} C(u)/√(N/2) · S(u) · cos((2x+1)uπ / (2N))

with

  C(u) = 1/√2  for u = 0
  C(u) = 1     for u > 0

is an orthonormal transform:

  Σ_{x=0}^{N−1} C(u)/√(N/2) · cos((2x+1)uπ / (2N)) · C(u′)/√(N/2) · cos((2x+1)u′π / (2N)) = 1 for u = u′,  0 for u ≠ u′

The corresponding 2-dimensional DCT pair for an N×N block s(x, y) is

  S(u, v) = C(u)/√(N/2) · C(v)/√(N/2) · Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} s(x, y) · cos((2x+1)uπ / (2N)) · cos((2y+1)vπ / (2N))

  s(x, y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} C(u)/√(N/2) · C(v)/√(N/2) · S(u, v) · cos((2x+1)uπ / (2N)) · cos((2y+1)vπ / (2N))

A range of fast algorithms have been found for calculating 1-D and 2-D DCTs (e.g., Ligtenberg/Vetterli).
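As a quick numerical check of this orthonormality, the following minimal MATLAB sketch builds the N-point transform matrix T directly from the formula above and verifies that T·Tᵀ is the identity matrix:

  N = 8;
  T = zeros(N, N);
  for u = 0:N-1
      Cu = 1; if u == 0, Cu = 1/sqrt(2); end
      for x = 0:N-1
          T(u+1, x+1) = Cu / sqrt(N/2) * cos((2*x+1)*u*pi/(2*N));
      end
  end
  max(max(abs(T*T' - eye(N))))             % numerically close to zero
  % a block s of N samples transforms as S = T*s and is recovered as s = T'*S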
Whole-image DCT, 80%/90%/95%/99% coefficient cutoff

[Figures: truncated 2D DCT coefficients (log10 scale) and the corresponding reconstructed images for 80%, 90%, 95% and 99% coefficient cutoff.]
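The cutoff experiment can be reproduced along these lines. This is a minimal MATLAB sketch, assuming the Image Processing Toolbox functions dct2/idct2 are available and that a gray-scale test image is stored in the (hypothetical) file test.png:

  x  = double(imread('test.png'));         % hypothetical gray-scale test image
  X  = dct2(x);                            % whole-image 2D DCT
  c  = sort(abs(X(:)), 'descend');
  t  = c(round(0.05 * numel(c)));          % keep only the largest 5% of coefficients
  Xt = X .* (abs(X) >= t);                 % 95% coefficient cutoff
  y  = idct2(Xt);                          % reconstruction from the remaining coefficients
  imagesc(y), colormap(gray)               % compare visually with the original x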
Base vectors of 8×8 DCT

[Figure: the 64 base vectors of the 8×8 DCT, indexed by u, v = 0 … 7.]

The n-point Discrete Fourier Transform (DFT) can be viewed as a device that sends an input signal through a bank of n non-overlapping band-pass filters, each reducing the bandwidth of the signal to 1/n of its original bandwidth.

According to the sampling theorem, after a reduction of the bandwidth by 1/n, the number of samples needed to reconstruct the original signal can equally be reduced by 1/n. The DFT splits a wide-band signal represented by n input samples into n separate narrow-band signals, each represented by a single sample.

A Discrete Wavelet Transform (DWT) can equally be viewed as such a frequency-band splitting device. However, with the DWT, the bandwidth of each output signal is proportional to the highest input frequency that it contains. High-frequency components are represented in output signals with a high bandwidth, and therefore a large number of samples. Low-frequency signals end up in output signals with low bandwidth, and are correspondingly represented with a low number of samples. As a result, high-frequency information is preserved with higher spatial resolution than low-frequency information.

Both the DFT and the DWT are linear orthogonal transforms that preserve all input information in their output without adding anything redundant.

As with the DFT, the 1-dimensional DWT can be extended to 2-D images by transforming both rows and columns (the order of which happens first is not relevant).
Discrete Wavelet Transform

A DWT is defined by a combination of a low-pass filter, which smoothes a signal by allowing only the bottom half of all frequencies to pass through, and a high-pass filter, which preserves only the upper half of the spectrum. These two filters must be chosen to be “orthogonal” to each other, in the sense that together they split up the information content of their input signal without any mutual information in their outputs.

A widely used 1-D filter pair is DAUB4 (by Ingrid Daubechies). The low-pass filter convolves a signal with the 4-point sequence c0, c1, c2, c3, and the matching high-pass filter convolves with c3, −c2, c1, −c0. Written as a transformation matrix, DAUB4 has the form

  ( c0   c1   c2   c3                              )
  ( c3  −c2   c1  −c0                              )
  (           c0   c1   c2   c3                    )
  (           c3  −c2   c1  −c0                    )
  (                      .    .                    )
  (                        .    .                  )
  (                           c0   c1   c2   c3    )
  (                           c3  −c2   c1  −c0    )
  ( c2   c3                             c0   c1    )
  ( c1  −c0                             c3  −c2    )

An orthogonal matrix multiplied with itself transposed is the identity matrix, which is fulfilled for the above one when

  c0² + c1² + c2² + c3² = 1   and   c0·c2 + c1·c3 = 0.
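A minimal MATLAB sketch that builds this matrix with the commonly quoted DAUB4 coefficient values (an assumption here; any c0, …, c3 satisfying the two conditions above would work) and checks its orthogonality:

  c = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)] / (4*sqrt(2));   % c0, c1, c2, c3
  n = 32;                                   % transform size (must be even)
  T = zeros(n, n);
  for k = 1:n/2
      idx = mod((2*k-2):(2*k+1), n) + 1;    % four consecutive columns, wrapping around
      T(2*k-1, idx) = [c(1), c(2), c(3), c(4)];    % low-pass (approximation) row
      T(2*k,   idx) = [c(4), -c(3), c(2), -c(1)];  % high-pass (detail) row
  end
  max(max(abs(T*T' - eye(n))))              % close to zero: T is orthogonal
  y = T * randn(n, 1);                      % odd entries = approximation, even entries = detail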
In an n-point DWT, an input vector is convolved separately with a low-pass and a high-pass filter. The results are two output sequences of n numbers. But as each sequence has now only half the input bandwidth, each second value is redundant, can be reconstructed by interpolation with the same filter, and can therefore be discarded.

The remaining output values of the high-pass filter (“detail”) are part of the final output of the DWT. The remaining values of the low-pass filter (“approximation”) are recursively treated the same way, until they consist – after log2 n steps – of only a single value, namely the average of the entire input.

Like with the DFT and DCT, for many real-world input signals, the DWT accumulates most energy into only a fraction of its output values. A commonly used approach for wavelet-based compression of signals is to replace all coefficients below an adjustable threshold with zero and encode only the values and positions of the remaining ones.

Discrete Wavelet Transform compression

[Figures: 80% and 90% truncated 2D DAUB8 DWT.]

Psychophysics of perception

Sensation limit (SL) = lowest intensity stimulus that can still be perceived

Difference limit (DL) = smallest perceivable stimulus difference at given intensity level

Weber’s law

Difference limit ∆φ is proportional to the intensity φ of the stimulus (except for a small correction constant a, to describe deviation of experimental results near SL):

  ∆φ = c · (φ + a)

Fechner’s scale

Define a perception intensity scale ψ using the sensation limit φ0 as the origin and the respective difference limit ∆φ = c · φ as a unit step. The result is a logarithmic relationship between stimulus intensity and scale value:

  ψ = log_c (φ / φ0)
Fechner’s scale matches older subjective intensity scales that follow differentiability of stimuli, e.g. the astronomical magnitude numbers for star brightness introduced by Hipparchos (≈150 BC).

Stevens’ law

A sound that is 20 DL over SL is perceived as more than twice as loud as one that is 10 DL over SL, i.e. Fechner’s scale does not describe well perceived intensity. A rational scale attempts to reflect subjective relations perceived between different values of stimulus intensity φ. Stevens observed that such rational scales ψ follow a power law:

  ψ = k · (φ − φ0)^a

Example coefficients a: temperature 1.6, weight 1.45, loudness 0.6, brightness 0.33.

Decibel

Where P is some power and P0 a 0 dB reference power, or equally where F is a field quantity and F0 the corresponding reference level:

  10 dB · log10(P / P0) = 20 dB · log10(F / F0)

Common reference values are indicated with an additional letter after the “dB”:

  0 dBW   = 1 W
  0 dBm   = 1 mW = −30 dBW
  0 dBµV  = 1 µV
  0 dBSPL = 20 µPa (sound pressure level)
  0 dBSL  = perception threshold (sensation limit)

3 dB = double power, 6 dB = double pressure/voltage/etc.
10 dB = 10× power, 20 dB = 10× pressure/voltage/etc.

W.H. Martin: Decibel – the new name for the transmission unit. Bell System Technical Journal, January 1929.
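For example, doubling a power adds about 3 dB, and the corresponding field quantity (e.g. voltage or sound pressure) gives the same decibel value via the 20 dB · log10 form. A minimal MATLAB check:

  P0 = 1e-3;  P = 2e-3;            % reference power (1 mW, i.e. 0 dBm) and test power
  F0 = sqrt(P0);  F = sqrt(P);     % corresponding field quantities (P proportional to F^2)
  10 * log10(P / P0)               % approx. 3.01 dB
  20 * log10(F / F0)               % same value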
[Figure: Cr over Cb colour planes of the YCrCb colour space, shown for luminance values Y = 0.7, 0.9 and 0.99.]
Equiloudness curves and the unit “phon”

[Figure: equiloudness curves, sound pressure level in dBSPL over frequency (40 Hz – 16 kHz).]

Frequency discrimination and critical bands

A pair of pure tones (sine functions) cannot be distinguished as two separate frequencies if both are in the same frequency group (“critical band”). Their loudness adds up, and both are perceived with their average frequency.

The human ear has about 24 critical bands whose width grows non-linearly with the center frequency.

Each audible frequency can be expressed on the “Bark scale” with values in the range 0–24. A good closed-form approximation is

  b ≈ 26.81 / (1 + 1960 Hz / f) − 0.53

where f is the frequency in Hz.

For the study of masking effects, pure tones therefore need to be distinguished from narrowband noise.

→ Temporal masking: SL rises shortly before and after a masker.

[Figure: masking demonstration (masking.wav) with tones at 700 Hz and 1900 Hz, levels in dBSPL over frequency.]
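A minimal MATLAB sketch evaluating the closed-form Bark approximation above for a few example frequencies:

  f = [100 250 1000 4000 16000];           % frequencies in Hz
  b = 26.81 ./ (1 + 1960 ./ f) - 0.53      % approximate critical-band rate in Bark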
Quantization

[Figures: uniform/linear and non-uniform quantization characteristics (output level over input value).]

Quantization is the mapping from a continuous or large set of values (e.g., analog voltage, floating-point number) to a smaller set of (typically 2^8, 2^12 or 2^16) values.

This introduces two types of error:

→ the amplitude of quantization noise reaches up to half the maximum difference between neighbouring quantization levels

→ clipping occurs where the input amplitude exceeds the value of the highest (or lowest) quantization level

Logarithmic quantization

Rounding the logarithm of the signal amplitude makes the quantization error scale-invariant and is used where the signal level is not very predictable. Two alternative schemes are widely used to make the logarithm function odd and linearize it across zero before quantization:

µ-law:

  y = V · log(1 + µ|x|/V) / log(1 + µ) · sgn(x)      for −V ≤ x ≤ V

A-law:

  y = A|x| / (1 + log A) · sgn(x)                    for 0 ≤ |x| ≤ V/A
  y = V · (1 + log(A|x|/V)) / (1 + log A) · sgn(x)   for V/A ≤ |x| ≤ V

European digital telephone networks use A-law quantization (A = 87.6), North American ones use µ-law (µ = 255), both with 8-bit resolution and 8 kHz sampling frequency (64 kbit/s). [ITU-T G.711]
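A minimal MATLAB sketch of µ-law companding followed by a uniform quantizer; this only illustrates the formula above, not the exact segmented 8-bit code format of ITU-T G.711:

  V = 1;  mu = 255;
  x  = linspace(-V, V, 1001);                                  % test amplitudes
  y  = V * log(1 + mu*abs(x)/V) / log(1 + mu) .* sign(x);      % compress
  q  = round(y * 127) / 127;                                   % uniform quantization to 8-bit levels
  xr = V/mu * (exp(abs(q) * log(1 + mu) / V) - 1) .* sign(q);  % expand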
Variant with even number of output values (no zero):

  y = max(−V, min(V, R · (⌊x/R⌋ + 1/2)))

Goals of the JPEG still image compression standard:

→ continuous tone gray-scale and colour images
→ recognizable images at 0.083 bit/pixel
→ useful images at 0.25 bit/pixel
→ excellent image quality at 0.75 bit/pixel
→ indistinguishable images at 2.25 bit/pixel
→ feasibility of 64 kbit/s (ISDN fax) compression with late 1980s hardware (16 MHz Intel 80386)
→ workload equal for compression and decompression

The JPEG standard (ISO 10918) was finally published in 1994.

William B. Pennebaker, Joan L. Mitchell: JPEG still image compression standard. Van Nostrand Reinhold, New York, ISBN 0442012721, 1993.

Gregory K. Wallace: The JPEG Still Picture Compression Standard. Communications of the ACM 34(4):30–44, April 1991, http://doi.acm.org/10.1145/103085.103089
Summary of the baseline JPEG algorithm

The most widely used lossy method from the JPEG standard:

→ Colour component transform: 8-bit RGB → 8-bit YCrCb
→ Reduce resolution of Cr and Cb by a factor 2
→ For the rest of the algorithm, process Y, Cr and Cb components independently (like separate gray-scale images)
  The above steps are obviously skipped where the input is a gray-scale image.
→ Split each image component into 8 × 8 pixel blocks
  Partial blocks at the right/bottom margin may have to be padded by repeating the last column/row until a multiple of eight is reached. The decoder will remove these padding pixels.
→ Apply the 8 × 8 forward DCT on each block
  On unsigned 8-bit input, the resulting DCT coefficients will be signed 11-bit integers.
→ Quantization: each DCT coefficient is divided by the corresponding entry of a quantization table and rounded to the nearest integer. Example quantization tables for the luminance (left) and chrominance (right) components:

  16 11 10 16  24  40  51  61      17 18 24 47 99 99 99 99
  12 12 14 19  26  58  60  55      18 21 26 66 99 99 99 99
  14 13 16 24  40  57  69  56      24 26 56 99 99 99 99 99
  14 17 22 29  51  87  80  62      47 66 99 99 99 99 99 99
  18 22 37 56  68 109 103  77      99 99 99 99 99 99 99 99
  24 35 55 64  81 104 113  92      99 99 99 99 99 99 99 99
  49 64 78 87 103 121 120 101      99 99 99 99 99 99 99 99
  72 92 95 98 112 100 103  99      99 99 99 99 99 99 99 99

→ apply DPCM coding to quantized DC coefficients from DCT
→ read remaining quantized values from DCT in zigzag pattern
→ locate sequences of zero coefficients (run-length coding)
→ apply Huffman coding on zero run-lengths and magnitude of AC values
→ add standard header with compression parameters

http://www.jpeg.org/
Example implementation: http://www.ijg.org/

Storing DCT coefficients in zigzag order

  horizontal frequency →          (vertical frequency ↓)

   0  1  5  6 14 15 27 28
   2  4  7 13 16 26 29 42
   3  8 12 17 25 30 41 43
   9 11 18 24 31 40 44 53
  10 19 23 32 39 45 52 54
  20 22 33 38 46 51 55 60
  21 34 37 47 50 56 59 61
  35 36 48 49 57 58 62 63

After the 8×8 coefficients produced by the discrete cosine transform have been quantized, the values are processed in the above zigzag order by a run-length encoding step.

The idea is to group all higher-frequency coefficients together at the end of the sequence. As many image blocks contain little high-frequency information, the bottom-right corner of the quantized DCT matrix is often entirely zero. The zigzag scan helps the run-length coder to make best use of this observation.
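The scan order above can also be generated programmatically. A minimal MATLAB sketch that orders the 64 positions by anti-diagonal, alternating the scan direction within each diagonal:

  N = 8;
  [h, v] = meshgrid(0:N-1, 0:N-1);          % h = horizontal, v = vertical frequency
  d = h + v;                                % index of the anti-diagonal
  key = d*N + (mod(d,2)==0).*h + (mod(d,2)==1).*v;   % alternate direction per diagonal
  [~, idx] = sort(key(:));
  zz = zeros(N);  zz(idx) = 0:N*N-1         % reproduces the table above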
Huffman coding in JPEG

  s   value range
  0   0
  1   −1, 1
  2   −3, −2, 2, 3
  3   −7 … −4, 4 … 7
  4   −15 … −8, 8 … 15
  5   −31 … −16, 16 … 31
  6   −63 … −32, 32 … 63
  …   …
  i   −(2^i − 1) … −2^(i−1), 2^(i−1) … 2^i − 1

DCT coefficients have 11-bit resolution and would lead to huge Huffman tables (up to 2048 code words). JPEG therefore uses a Huffman table only to encode the magnitude category s = ⌈log2(|v| + 1)⌉ of a DCT value v. A sign bit plus the (s − 1)-bit binary value |v| − 2^(s−1) are appended to each Huffman code word, to distinguish between the 2^s different values within magnitude category s.

When storing DCT coefficients in zigzag order, the symbols in the Huffman tree are actually tuples (r, s), where r is the number of zero coefficients preceding the coded value (run-length).

Advanced JPEG features

Beyond the baseline and lossless modes already discussed, JPEG provides these additional features:

→ 8 or 12 bits per pixel input resolution for DCT modes
→ 2–16 bits per pixel for lossless mode
→ progressive mode permits the transmission of more-significant DCT bits or lower-frequency DCT coefficients first, such that a low-quality version of the image can be displayed early during a transmission
→ the transmission order of colour components, lines, as well as DCT coefficients and their bits can be interleaved in many ways
→ the hierarchical mode first transmits a low-resolution image, followed by a sequence of differential layers that code the difference to the next higher resolution

Not all of these features are widely used today.

[Figure: JPEG example images at compression ratios 1:5 (1.6 bit/pixel), 1:10 (0.8 bit/pixel), 1:20 (0.4 bit/pixel) and 1:50 (0.16 bit/pixel).]
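A minimal MATLAB illustration of the magnitude-category formula from the Huffman coding description above, shown for a single positive coefficient value for brevity:

  v = 13;                                   % example DCT coefficient value
  s = ceil(log2(abs(v) + 1))                % magnitude category, here 4
  dec2bin(abs(v) - 2^(s-1), s-1)            % (s-1)-bit value appended after the Huffman code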
JPEG2000 examples (DWT)

MPEG video coding

→ Uses YCrCb colour transform, 8×8-pixel DCT, quantization, zigzag scan, run-length and Huffman encoding, similar to JPEG
→ the zigzag scan pattern is adapted to handle interlaced fields
→ Huffman coding with fixed code tables defined in the standard
  MPEG has no arithmetic coder option.
→ adaptive quantization
→ SNR and spatially scalable coding (enables separate transmission of a moderate-quality video signal and an enhancement signal to reduce noise or improve resolution)
→ Asymmetric workload: Encoder needs significantly more computational power than decoder (for bit-rate adjustment, motion estimation, perceptual modeling, etc.)

http://www.chiariglione.org/mpeg/

Each MPEG image is split into 16×16-pixel large macroblocks. The predictor forms a linear combination of the content of one or two other blocks of the same size in a preceding (and following) reference image. The relative positions of these reference blocks are encoded along with the differences.
MPEG reordering of reference images

Display order of frames:

  I B B B P B B B P B B B P    (time →)

Coding order:

[Figure: the same frames rearranged into coding order, together with the encoder and decoder buffer content over time.]

MPEG can be used both with variable-bitrate (e.g., file, DVD) and fixed-bitrate (e.g., ISDN) channels. The bitrate of the compressed data stream varies with the complexity of the input data and the current quantization values. Buffers match the short-term variability of the encoder bitrate with the channel bitrate. A control loop continuously adjusts the average bitrate via the quantization values to prevent under- or overflow of the buffer.

The MPEG system layer can interleave many audio and video streams in a single data stream. Buffers match the bitrate required by the codecs with the bitrate available in the multiplex and encoders can dynamically redistribute bitrate among different streams.

MPEG encoders implement a 27 MHz clock counter as a timing reference and add its value as a system clock reference (SCR) several times per second to the data stream. Decoders synchronize with a phase-locked loop their own 27 MHz clock with the incoming SCRs.

Each compressed frame is annotated with a presentation time stamp (PTS) that determines when its samples need to be output. Decoding timestamps specify when data needs to be available to the decoder.

MPEG audio coding

Three different algorithms are specified, each increasing the processing power required in the decoder.

Supported sampling frequencies: 32, 44.1 or 48 kHz.

Layer I

→ Waveforms are split into segments of 384 samples each (8 ms at 48 kHz).
→ Each segment is passed through an orthogonal filter bank that splits the signal into 32 subbands, each 750 Hz wide (for 48 kHz).
  This approximates the critical bands of human hearing.
→ Each subband is then sampled at 1.5 kHz (for 48 kHz).
  12 samples per window → again 384 samples for all 32 bands

Layer III

→ Modified DCT step decomposes subbands further into 18 or 6 frequencies
→ dynamic switching between MDCT with 36-samples (28 ms, 576 freq.) and 12-samples (8 ms, 192 freq.) enables control of pre-echos before sharp percussive sounds (Heisenberg)
→ non-uniform quantization
→ Huffman entropy coding
→ buffer with short-term variable bitrate
→ joint stereo processing

MPEG audio layer III is the widely used “MP3” music compression format.
Psychoacoustic models

MPEG audio encoders use a psychoacoustic model to estimate the spectral and temporal masking that the human ear will apply. The subband quantization levels are selected such that the quantization noise remains below the masking threshold in each subband.

The masking model is not standardized and each encoder developer can choose a different one. The steps typically involved are:

Exercise 17  Compare the quantization techniques used in the digital telephone network and in audio compact disks. Which factors do you think led to the choice of different techniques and parameters here?

Exercise 18  Which steps of the JPEG (DCT baseline) algorithm cause a loss of information? Distinguish between accidental loss due to rounding errors and information that is removed for a purpose.

Exercise 19  How can you rotate by multiples of ±90° or mirror a DCT-JPEG compressed image without losing any further information? Why might the resulting JPEG file not have the exact same file length?

Exercise 20  Decompress this G3-fax encoded line:
1101011011110111101100110100000000000001

Exercise 21  You adjust the volume of your 16-bit linearly quantizing soundcard, such that you can just about hear a 1 kHz sine wave with a peak amplitude of 200. What peak amplitude do you expect a 90 Hz sine wave will need to have, to appear equally loud (assuming ideal headphones)?

Outlook

Further topics that we have not covered in this brief introductory tour through DSP, but for the understanding of which you should now have a good theoretical foundation:

→ multirate systems

Some final thoughts about redundancy . . .

Aoccdrnig to rsceearh at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a total mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

. . . and perception

Count how many Fs there are in this text:

  FINISHED FILES ARE THE RE-
  SULT OF YEARS OF SCIENTIF-
  IC STUDY COMBINED WITH THE
  EXPERIENCE OF YEARS