Topic 3 - Source Coding
Encoding
Waveform Coding - Pulse Modulation (PM) Techniques
PAM, PCM, Companding
Source Coding
Introduction
Source Coder
Analog electronic information source and preprocessing:
[Block diagram: Information Source -> Signal Conditioner -> A/D Converter -> Source Coder]
Source Coder
Digital electronic information source and preprocessing:
[Block diagram: Information Source -> Signal Conditioner -> Source Coder]
Quantization
Approximate instantaneous (still continuous) values with a finite set of values: quantization.
[Figure: original signal before sampling and the resulting quantized signal]
Sampling
The electronic analog source is tapped at regular intervals to acquire its instantaneous value.
But why sample?
It is the first step in digitizing the source output: discretizing the source output in time.
The original (source) signal can be approximated at the receiver by interpolating between successive restored sample values.
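The interpolation idea above can be sketched numerically. This is a minimal illustration, not from the slides: the 100 Hz tone, the 800 Hz sampling rate, and the truncated sinc interpolator are all assumed for the example.

```python
import numpy as np

# Hypothetical example: sample a 100 Hz sinusoid at fs = 800 Hz (well above
# the Nyquist rate of 200 Hz) and rebuild it between samples.
f0, fs = 100.0, 800.0
Ts = 1.0 / fs
n = np.arange(200)                         # sample indices
samples = np.sin(2 * np.pi * f0 * n * Ts)  # x(nTs)

def reconstruct(t, samples, Ts):
    """Ideal interpolation: x(t) = sum_n x(nTs) sinc((t - nTs)/Ts).
    np.sinc is the normalized sinc, sin(pi x)/(pi x)."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * Ts) / Ts)))

# Evaluate between two samples, away from the edges of the finite record
t = 100.5 * Ts
error = abs(reconstruct(t, samples, Ts) - np.sin(2 * np.pi * f0 * t))
```

The residual `error` is small but nonzero because the sinc sum is truncated to a finite number of samples; an infinite record would reconstruct the bandlimited signal exactly.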
Sampling in the frequency domain
The ideally sampled signal is

x_\delta(t) = \sum_n x(nT_s)\,\delta(t - nT_s), \quad \text{or} \quad x_\delta(t) = x(t)\sum_n \delta(t - nT_s)

Taking Fourier transforms,

X_\delta(f) = X(f) * \mathcal{F}\Big\{\sum_n \delta(t - nT_s)\Big\} = X(f) * \frac{1}{T_s}\sum_n \delta\Big(f - \frac{n}{T_s}\Big), \quad \text{thus}

X_\delta(f) = \frac{1}{T_s}\sum_n X\Big(f - \frac{n}{T_s}\Big)
[Figures: sampling as multiplication of x(t) by an impulse train; the Fourier transform of the multiplier pulse train; the replicated spectrum and the folding frequency f_s/2]
Effect of Aliasing
To reconstruct the signal at the receiver, a single copy of the signal spectrum is selected by band filtering with an effectively ideal low-pass filter of bandwidth f_s/2 Hz.
Example 1
A bandlimited signal has a bandwidth of 3400 Hz. What sampling rate should be used to guarantee a guard band of 1200 Hz?
Let fs = sampling frequency, B = signal bandwidth, and BG = guard band. Then
fs = 2B + BG
fs = 2(3400) + 1200 = 8000 Hz, or samples/sec.
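The arithmetic can be checked in a couple of lines, using the numbers from the example:

```python
# Guard-band sampling rate: fs = 2B + BG
B, BG = 3400, 1200       # signal bandwidth and guard band, in Hz
fs = 2 * B + BG          # 8000 samples/sec, the classic telephone-speech rate
```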
Example 2
A source with bandwidth 4000 Hz is sampled at the Nyquist rate. Assuming that the resulting sampled sequence can be approximately modeled by a DMS with alphabet A = {-2, -1, 0, 1, 2} and corresponding probabilities {1/2, 1/4, 1/8, 1/16, 1/16}, determine the rate at which information is generated by this source in bits per second.
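A worked sketch of the answer: the information rate is the sampling rate times the entropy of the DMS, H(X) = 1.875 bits/sample for the given probabilities.

```python
import math

fs = 2 * 4000                                # Nyquist rate: 8000 samples/sec
probs = [1/2, 1/4, 1/8, 1/16, 1/16]          # DMS probabilities from Example 2
H = -sum(p * math.log2(p) for p in probs)    # entropy in bits/sample
rate = fs * H                                # information rate in bits/sec
# H = 1.875 bits/sample, so rate = 15000 bits/sec
```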
Quantization error
The quantizer maps input amplitudes in the range (-m_p, m_p), a total range of 2m_p, onto L levels. The quantization error is

q = x - \hat{x}

where \hat{x} is the level to which the actual value of x is quantized.
Max. error q_max = \Delta V/2, where \Delta V is the quantization interval.
Quantization noise
The mean-square distortion within one quantization interval is

D_i = E[q^2] = \int q^2 f_X(q)\,dq

With the error uniformly distributed over (-\Delta V/2, \Delta V/2),

N_q = E[q^2] = \frac{1}{\Delta V}\int_{-\Delta V/2}^{\Delta V/2} q^2\,dq = \frac{(\Delta V)^2}{12} = \frac{m_p^2}{3L^2}
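The (ΔV)²/12 result can be checked numerically. This is a minimal simulation; the interval ΔV = 0.25 and the sample count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dV = 0.25                                     # quantization interval (assumed)
q = rng.uniform(-dV / 2, dV / 2, 1_000_000)   # error uniform over one interval
Nq_sim = np.mean(q ** 2)                      # simulated noise power E[q^2]
Nq_theory = dV ** 2 / 12                      # closed-form result
```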
Performance in Quantization
The quality of the quantized signal is described quantitatively in terms of the Signal to Quantization Noise Ratio (SQNR): the ratio of signal power to quantization noise power. Thus, for uniform quantization with equal-sized intervals \Delta V = 2m_p/L, the SQNR is

\mathrm{SQNR} = \frac{S_0}{N_q} = \frac{3L^2\,\overline{m^2(t)}}{m_p^2}
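The formula can be verified by quantizing a full-scale test tone. This is a sketch: the mid-rise quantizer, m_p = 1, L = 256, and the 5 Hz tone are assumed for illustration.

```python
import numpy as np

mp, L = 1.0, 256                       # peak amplitude and number of levels
dV = 2 * mp / L                        # uniform quantization interval
t = np.linspace(0, 1, 100_000, endpoint=False)
m = mp * np.sin(2 * np.pi * 5 * t)     # full-scale test tone
mq = dV * np.floor(m / dV) + dV / 2    # mid-rise uniform quantizer
sqnr_sim = np.mean(m ** 2) / np.mean((m - mq) ** 2)
sqnr_theory = 3 * L ** 2 * np.mean(m ** 2) / mp ** 2   # 3 L^2 m2(t) / mp^2
```

For a full-scale sinusoid the measured SQNR tracks the 3L²·(1/2) prediction (about 49.9 dB for L = 256) to within a few percent.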
Uniform Quantization: Observations
SQNR increases with
Increasing signal power
Decreasing quantization noise power
Increasing SQNR
In uniform quantization schemes, SQNR can be increased by decreasing quantization noise at lower amplitudes. How do we do this?
N_q is decreased by increasing the number of quantization levels.
To equalize SQNR across the signal's dynamic range, we may
Reduce the quantization interval for small signal amplitudes: non-uniform quantization, or
Nonlinearly expand the signal's dynamic range so that small amplitudes see the same N_q as large ones: Companding.
Non-uniform Quantization
One scheme designed to compensate for the low SQNR of uniform quantization:
Lower m(t) amplitudes are divided into more quantization levels.
Lower amplitudes are subject to lower quantization noise.
SQNR is equalized over the dynamic range of m(t).
Companding
Another scheme designed to compensate for the low SQNR of uniform quantization:
The signal is passed through a non-linear element that compresses large amplitudes.
Common compression characteristics
To read the graphs:
As the actual input signal m approaches its maximum value (x axis), less gain is applied to it, according to an amount of compression set by \mu or A (y axis).
For low signal levels m, more gain is applied, according to the values of \mu or A.
The \mu-law characteristic:

y = \frac{\ln\!\left(1 + \mu\,\dfrac{|m|}{m_p}\right)}{\ln(1 + \mu)}, \qquad \frac{|m|}{m_p} \le 1

The A-law characteristic:

y = \begin{cases} \dfrac{A\,|m|/m_p}{1 + \ln A}, & 0 \le \dfrac{|m|}{m_p} \le \dfrac{1}{A} \\[1ex] \dfrac{1 + \ln\!\left(A\,|m|/m_p\right)}{1 + \ln A}, & \dfrac{1}{A} \le \dfrac{|m|}{m_p} \le 1 \end{cases}

NOTE: The greater \mu or A, the more the larger amplitudes of m are compressed in the output y.
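The μ-law characteristic above can be coded directly. μ = 255 is the standard North American value; the function names and test points are illustrative.

```python
import numpy as np

def mu_law_compress(m, mp=1.0, mu=255.0):
    """mu-law compressor: y = ln(1 + mu|m|/mp) / ln(1 + mu), sign preserved."""
    return np.sign(m) * np.log1p(mu * np.abs(m) / mp) / np.log1p(mu)

def mu_law_expand(y, mp=1.0, mu=255.0):
    """Inverse characteristic (expander), applied at the receiver."""
    return np.sign(y) * (mp / mu) * np.expm1(np.abs(y) * np.log1p(mu))

m = np.array([-1.0, -0.01, 0.0, 0.01, 1.0])
y = mu_law_compress(m)
# Small amplitudes get more gain (|y| > |m|), large ones are compressed toward 1
roundtrip_err = np.max(np.abs(mu_law_expand(y) - m))
```

In a real companded PCM system the compressed signal y is what gets uniformly quantized; the expander at the receiver undoes the compression.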
[Figure: S_0/N_q versus signal power \overline{m^2(t)}, with and without compression. S_0/N_q is often denoted S_0/N_0.]
Sampling and Quantization: recap
Quantization introduces error, q, and noise, N_q, which increase with the quantization interval \Delta V:

N_q = E[q^2] = \frac{1}{\Delta V}\int_{-\Delta V/2}^{\Delta V/2} q^2\,dq

[Figure: original signal before sampling and the quantized signal; N_q grows with \Delta V = 2m_p/L]
Alternatives: non-uniform quantization and companding.
Encoding
In the encoding process, a sequence of bits is assigned to different quantization values.
Encoding may be independent of the quantization process or integral to it.
Encoding Independent of Quantization
Quantized levels are mapped to fixed-length binary code words.
[Figure: original signal; quantization rounds continuous values to a set of discrete values; the resulting 4-bit PCM signal]
PCM Observations
The output of the PCM process is a bit stream.
Each quantized, encoded sample generates v bits per sample.
Sampling is carried out at a minimum of 2B samples/sec.
v bits per sample therefore generates a 2vB bits per second output.
2vB bits/sec is referred to as the source-coded data or bit rate.
It represents the minimum coding rate for uniform PCM.
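The 2vB arithmetic can be sketched with assumed telephone-speech numbers (B = 4000 Hz, v = 8 bits/sample; these values are not from the slides but are the standard voice-PCM choices):

```python
# PCM bit rate: 2vB bits per second
B = 4000                 # signal bandwidth in Hz (assumed)
fs = 2 * B               # minimum (Nyquist) sampling rate, samples/sec
v = 8                    # bits per quantized sample, so L = 2**v = 256 levels
bit_rate = fs * v        # 2vB = 64000 bits/sec, the standard PCM voice rate
```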
DM (Delta Modulation) performance
If the analog source output changes too slowly or too rapidly, distortion is introduced:
If the input changes too slowly, quantization error becomes significant.
If it changes too rapidly, slope overload occurs.
Either condition introduces distortion noise.
Improving DM performance
Achieved by selecting the optimum step size, or by using adaptive DM:
For rapid changes in the input, using large steps eliminates slope overload.
For slow changes, using smaller steps significantly reduces quantization noise.
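A linear delta modulator can be sketched in a few lines. This is an illustration, not the slides' implementation; the step size and test tone are chosen so the slope condition (step per sample exceeding the maximum per-sample change of the input) holds, avoiding slope overload.

```python
import numpy as np

def dm_encode(m, step):
    """Linear delta modulation: transmit one bit per sample, the sign of the
    difference between the input and the staircase approximation."""
    approx, bits = 0.0, []
    for sample in m:
        bit = 1 if sample >= approx else 0
        approx += step if bit else -step   # staircase tracks the input
        bits.append(bit)
    return bits

def dm_decode(bits, step):
    """Receiver rebuilds the staircase from the bit stream alone."""
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return np.array(out)

t = np.linspace(0, 1, 200)
m = np.sin(2 * np.pi * 2 * t)            # slow test tone
bits = dm_encode(m, step=0.1)
m_hat = dm_decode(bits, step=0.1)        # stays within ~2 steps of the input
```

Shrinking the step below the input's per-sample slope would reproduce the slope-overload distortion described above; enlarging it increases granular (quantization) noise.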
Encoding efficiency
The source output is represented by H(X) bits per quantized sample.
Thus the encoder must map each quantized sample to a binary code of length v >= H(X) bits to maintain representation accuracy.
Binary codes can only be of integer lengths (e.g. 4 bits/sample, 8 bits/sample, etc.), but the value of H(X) is not always an integer (Why?).
The coding process is most efficient for v = H(X), i.e. H(X) being an integer value.
Efficiency is lower if more bits are encoded than are required to accurately represent the quantized source output, i.e. for v > H(X).
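A quick check using the five-symbol DMS from Example 2: a fixed-length code needs v = ceil(log2 5) = 3 bits/sample while H(X) = 1.875, so the fixed-length encoder is only 62.5% efficient.

```python
import math

probs = [1/2, 1/4, 1/8, 1/16, 1/16]          # a DMS with 5 output values
H = -sum(p * math.log2(p) for p in probs)    # H(X) = 1.875 bits/sample
v = math.ceil(math.log2(len(probs)))         # fixed-length code: 3 bits/sample
efficiency = H / v                           # 0.625; v > H(X) wastes bits
```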
Improving encoding efficiency
Efficiency can be improved by compressing the output of the source. How?
Huffman Coding
The probability of occurrence of each quantized sample is used to construct a matching codeword. This is accomplished using the Huffman Coding Algorithm:
1. Sort the source outputs in decreasing order of their probabilities.
2. Merge the two least probable outputs into a single output whose probability is the sum of the corresponding probabilities.
3. Is the number of remaining outputs 2? If no, go back to step 2; otherwise go on to step 4.
4. Arbitrarily assign 0 and 1 as codewords for the two remaining outputs.
5. If an output is the result of the merger of two outputs from a preceding step, append the current codeword with a 0 and a 1 to obtain the codewords for the preceding outputs; then repeat step 5. If no output is preceded by another output in a preceding step, then stop.
Implementation example:

Output  Probability  Codeword
a1      1/2          0
a2      1/4          10
a3      1/8          110
a4      1/16         1110
a5      1/16         1111

[Figure: the merge tree, with 0 and 1 assigned at each merge]
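The merge procedure can be implemented with a priority queue. This is a sketch (the symbol names follow the example above); ties may be broken differently than in a hand construction, but the codeword lengths, and hence the average length, come out the same.

```python
import heapq
from itertools import count

def huffman(probs):
    """Build a Huffman code from {symbol: probability} by repeatedly
    merging the two least probable outputs, prefixing 0/1 as we go."""
    tiebreak = count()  # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least probable outputs
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"a1": 1/2, "a2": 1/4, "a3": 1/8, "a4": 1/16, "a5": 1/16}
code = huffman(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)  # R-bar = 1.875 bits
```

For this source the average length equals H(X) exactly, because every probability is a power of 1/2.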
The average codeword length is

\bar{R} = \sum_i p_i \ell_i

\bar{R} satisfies the inequality

H(X) \le \bar{R} < H(X) + 1

Compared to standard PCM, \bar{R} < v: on average, fewer bits are required to represent the source without sacrificing accuracy.
Limitations of Huffman coding
Huffman coding requires advance knowledge of the source's probability distribution.
It cannot accommodate sources with differing statistics.
Thus implementation can become quite complicated in practice.
Summary
Encoding
Waveform coding
Various techniques for improving coding efficiency
For Quantization: [formula summary slide]
Reading
Mandatory reading:
Proakis, J. and Masoud Salehi, Fundamentals of Communication Systems: Sections 7.1-7.2.1, 7.3, 7.4, 12.2, 12.3-12.3.1
Recommended reading:
Proakis, J. and Masoud Salehi, Fundamentals of Communication Systems: Section 12.3.2
Proakis, J. and Masoud Salehi, Communication Systems Engineering: Section 1.