Data Compression: Huffman Codes
Prof. Ja-Ling Wu
The recursive procedure in step 2 can be viewed as
the construction of a binary tree, since at each step we
are merging two symbols.
At the end of the recursion, all the symbols S1, S2, …,
SN will be leaf nodes of the tree.
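As a sketch, the merging procedure can be written directly in Python: a heap always exposes the two least probable nodes, exactly as in step 2. The probabilities are taken from the example below; tie-breaking, and hence the exact codewords, may differ from the table.

    import heapq
    from itertools import count

    def huffman_code(probs):
        # Each heap entry carries a partial code table; merging two nodes
        # prepends 0 to one side's codewords and 1 to the other's.
        ties = count()                       # breaks probability ties
        heap = [(p, next(ties), {s: ""}) for s, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p0, _, c0 = heapq.heappop(heap)  # the two smallest probabilities
            p1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c0.items()}
            merged.update({s: "1" + w for s, w in c1.items()})
            heapq.heappush(heap, (p0 + p1, next(ties), merged))
        return heap[0][2]                    # symbol -> codeword

    code = huffman_code({"a": 0.05, "b": 0.2, "c": 0.1, "d": 0.05,
                         "e": 0.3, "f": 0.2, "g": 0.1})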
Example: seven symbols with probabilities p(a)=0.05, p(b)=0.2, p(c)=0.1, p(d)=0.05, p(e)=0.3, p(f)=0.2, p(g)=0.1. The composite symbols h, i, j, k, l are created by the successive merges:

Step 1 (sorted):      e .30  b .20  f .20  c .10  g .10  a .05  d .05
Step 2.1 (a+d -> h):  e .30  b .20  f .20  c .10  g .10  h .10
Step 2.2 (g+h -> i):  e .30  b .20  f .20  i .20  c .10
Step 2.3 (c+i -> j):  e .30  j .30  b .20  f .20
Step 2.4 (b+f -> k):  k .40  e .30  j .30
Step 2.5 (e+j -> l):  l .60  k .40
Final merge (root):   1.0
[Figure: the resulting Huffman tree, built from the merges above, with 0/1 labels on the branches from the root]

The resulting codewords C(w):
  a  10101
  b  01
  c  100
  d  10100
  e  11
  f  00
  g  1011
Average codeword length:
$l_{ave} = \sum_i l_i p_i$
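For the codewords above (lengths a:5, b:2, c:3, d:5, e:2, f:2, g:4):

$l_{ave} = 2(0.3) + 2(0.2) + 2(0.2) + 3(0.1) + 4(0.1) + 5(0.05) + 5(0.05) = 2.6$ bits/symbol.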
Properties of Huffman codes
Fixed-length symbols are mapped to variable-length codewords: a single bit error can cause error propagation (loss of codeword synchronization in the decoded stream).
The code construction process has a complexity of $O(N \log_2 N)$. With presorting of the input symbol probabilities, a code construction method with complexity $O(N)$ has been presented in IEEE Trans. ASSP-38, pp. 1619-1626, Sept. 1990.
Huffman codes satisfy the prefix condition (no codeword is a prefix of another codeword) and are therefore uniquely decodable.
If the lengths $l_i$ satisfy the Kraft constraint $\sum_i 2^{-l_i} \le 1$, then a prefix code with those codeword lengths exists.
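As a quick check (a sketch in Python), the lengths of the example code meet the Kraft constraint with equality, as the leaves of any full binary tree do:

    lengths = [5, 2, 3, 5, 2, 2, 4]           # a, b, c, d, e, f, g
    assert sum(2 ** -l for l in lengths) == 1.0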
Decoding Processes:
1. From the compressed input bit stream, we read L bits into a buffer.
2. We use the L-bit word in the buffer as an address into the lookup table and obtain the corresponding symbol, say sk. Let the codeword length be lk. We have now decoded one symbol.
3. We discard the first lk bits from the buffer and append the next lk bits from the input, so that the buffer again holds L bits.
4. Repeat steps 2 and 3 until all of the symbols have been decoded. A sketch of this decoder appears below.
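A sketch of steps 1-4 in Python, assuming a table built from a codeword map such as the huffman_code output above (L is the longest codeword length):

    def build_lut(code, L):
        # 2**L entries: every L-bit word maps to the (symbol, length) of
        # the unique codeword that prefixes it.
        lut = [None] * (1 << L)
        for sym, cw in code.items():
            pad = L - len(cw)
            base = int(cw, 2) << pad
            for tail in range(1 << pad):       # all L-bit words starting with cw
                lut[base + tail] = (sym, len(cw))
        return lut

    def decode(bits, lut, L, nsymbols):
        bits += "0" * L                        # pad so the final refills succeed
        buf, pos, out = int(bits[:L], 2), 0, []
        while len(out) < nsymbols:
            sym, lk = lut[buf]                 # step 2: one table access
            out.append(sym)
            nxt = bits[L + pos:L + pos + lk]   # step 3: next l_k input bits
            buf = ((buf << lk) & ((1 << L) - 1)) | int(nxt, 2)
            pos += lk
        return out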
Memory Efficient and High-Speed Search Huffman Coding, R. Hashemian, IEEE Trans. Communications, Oct. 1995, pp. 2576-2581
Due to variable-length coding, the Huffman tree gets progressively sparser as it grows from the root:
– waste of memory space
– a lengthy search procedure for locating a symbol
Ex: if K bits is the longest Huffman code assigned to a set of symbols, the memory for the symbols may easily reach 2^K words in size.
It is desirable to reduce the memory size from the typical value of 2^K to a size proportional to the number of actual symbols:
– reduced memory size
– quicker access
Ordering and clustering based Huffman Coding
groups the codewords (tree nodes) within specified codeword lengths.
Characteristics of the proposed coding scheme:
1. The search time for more frequent symbols (shorter codes) is substantially reduced compared to less frequent symbols, resulting in an overall faster response.
2. For long codewords the search for the symbol is also sped up. This is achieved through a specific partitioning technique that groups the code bits in a codeword; the search for a symbol is conducted by jumping over groups of bits rather than going through the bits individually.
3. The growth of the Huffman tree is directed toward one side of the tree:
– single-side growing Huffman tree (SGH-tree)
Ex: H = (S, P), S = {S1, S2, …, Sn}, P = {P1, P2, …, Pn} (numbers of occurrence)

TABLE I
Reduction process in the source list (merge the bottom pair, then insert the composite symbol in descending order)

s1 48   s1 48   s1 48   s1 48   s1 48   a5 52
s2 31   s2 31   s2 31   s2 31   s2 31   s1 48
s3  7   s3  7   a2  8   a3 13   a4 21
s4  6   s4  6   s3  7   a2  8
s5  5   s5  5   s4  6
s6  2   a1  3
s7  1
CL: codeword length
Algorithm 1: Creating a Table of Codeword Lengths
1. The symbols are listed with their probabilities in descending order (the ordering of symbols with equal probabilities is arbitrary).
Next, the pair of symbols at the bottom of the ordered list is merged, creating a new symbol a1. The symbol a1, with probability equal to the sum of the probabilities of the pair, is then inserted at the proper location in the ordered list.
To record this operation, a codeword length recording (CLR) table is created, consisting of three columns: columns 1 and 2 hold the last pair of symbols before being merged, and column 3, initially empty, is the codeword length (CL) column (Table II).
In order to keep the CLR table small and the hardware design simple, the new symbol a1 (in general aj) is encoded such that its binary inverse ā1 (or āj) represents the associated table address.
e.g. For an 8-bit address word, the composite symbol a1, the first row in the CLR table, is given the value 1111 1110 (ā1 = 0000 0001); a2 = 1111 1101 (ā2 = 0000 0010).
2. Continue applying the same procedure, developed for a single row in step 1, to construct the entire CLR table. Note that Table II contains both the original symbols si and the composite ones aj (carrying opposite signs).
3. The third column in Table II, designated CL, holds the codeword lengths. To fill this column we start from the last row of the CLR table and enter 1; this is the codeword length for both s1 and a5.
Next, we check the sign of each entry in the row: if positive (MSB = 0) we skip it; otherwise the symbol is a composite one (a5 here), and its binary inverse (ā5 = 00…0101) is a row address into Table II. We increment the CL value and assign the new value (2 in this example) to the CL column of that row (row 5 in this example), and proceed applying the same operation to the other rows as we move toward the top, until the CL column is completely filled.
TABLE II: the CLR table with the CL column filled (each x+1 annotation shows the CL value propagated to that composite symbol's own row)

Row No.   Si          Si-1        CL
1         s7          s6          5
2         a1 (4+1)    s5          4
3         s4          s3          4
4         a2 (3+1)    a3 (3+1)    3
5         a4 (2+1)    s2          2
6         s1          a5 (1+1)    1
Rule:
(i) If one of Si, Si-1 is a composite symbol, that composite entry is the basis for incrementing CL and for forming the new row address, as sketched below.
(ii) If both Si and Si-1 are original symbols, skip the row; CL is unchanged.
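The two passes of Algorithm 1 can be sketched in Python. This is a minimal illustration of the logic: the hardware-oriented inverse-address trick is replaced here by plain negative indices for the composite symbols aj.

    import heapq

    def codeword_lengths(freqs):
        # Forward pass (steps 1-2): merge the two least frequent entries
        # and record each merged pair, one CLR row per merge.  Original
        # symbols are indices >= 0; composite a_j is stored as -j.
        heap = [(f, i) for i, f in enumerate(freqs)]
        heapq.heapify(heap)
        clr = []
        for j in range(1, len(freqs)):
            f0, s0 = heapq.heappop(heap)
            f1, s1 = heapq.heappop(heap)
            clr.append((s0, s1))
            heapq.heappush(heap, (f0 + f1, -j))
        # Backward pass (step 3): sweep the CLR from the last row up;
        # a composite entry's CL becomes the base depth for its own row.
        depth = {-(len(freqs) - 1): 0}         # the final composite is the root
        cl = {}
        for j in range(len(freqs) - 1, 0, -1):
            d = depth[-j] + 1                  # CL for both entries of row j
            for s in clr[j - 1]:
                if s >= 0:
                    cl[s] = d                  # original symbol: record its CL
                else:
                    depth[s] = d               # composite: propagate (rule (i))
        return [cl[i] for i in range(len(freqs))]

    print(codeword_lengths([48, 31, 7, 6, 5, 2, 1]))   # [1, 2, 4, 4, 4, 5, 5]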
A single-side growing Huffman table (SGHT, Table IV) and a single-side growing Huffman tree (SGH-Tree, Fig. 1) are adopted.
[Figure: the SGH-Tree of the example, with the composite nodes a1-a5 along one side]
For a uniformly distributed source the SGH-Tree becomes full.
Algorithm 2: Constructing a SGHT from a TOCL
– Start from the first row of the table and assign an
“all zero” codeword C1 = 00…0 to the symbol S1.
– Next we increment this codeword and assign the
new value to the next symbol in the table. Similarly,
we proceed creating codewords for the rest of the
symbols in the same row of the TOCL.
– When we change rows, we have to expand the last
codeword, after being incremented, by placing
extra zeros to the right, until the codeword length
matches the level (CL).
In general we can write:
$C_1 = 00\cdots0$,
$C_{i+1} = (C_i + 1)\cdot 2^{\,p-q}$ when moving from a row with CL $q$ to a row with CL $p$ (a plain increment while $p = q$), and
$C_n = 11\cdots1$.
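A sketch of Algorithm 2 in Python: given the codeword lengths from the TOCL in non-decreasing order, it applies the increment-and-expand rule above (here fed the lengths computed for Table II):

    def assign_codewords(lengths):
        # lengths: CL values in non-decreasing order, e.g. from Algorithm 1.
        # Implements C_1 = 0...0 and C_{i+1} = (C_i + 1) * 2**(p - q).
        codes, c, prev = [], 0, lengths[0]
        for l in lengths:
            c <<= l - prev                # changing rows: append zeros on the right
            codes.append(f"{c:0{l}b}")
            c += 1                        # increment for the next symbol
            prev = l
        return codes

    print(assign_codewords([1, 2, 4, 4, 4, 5, 5]))
    # ['0', '10', '1100', '1101', '1110', '11110', '11111']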
Super-tree (S-tree)
[Figures: the SGH-Tree of Fig. 2 partitioned by the cut lines x-x, y-y, and z-z into clusters (Fig. 3), and the resulting super-tree (Fig. 4)]
TABLE VII
Memory (RAM) space associated with Table VI and Figs. 3 and 4
(addresses 00H-3fH; each row is one 16-word block, columns 0-f)
Row 0x: 00 01 02 03 04 05 06 07
Row 1x: 08 09 0a 0b 0c 0d 0e 0f 10 11
Row 2x: 12 13 14 15 16 17 18 19 1a
Row 3x: 1b 1c 1d 1e 1f
Storage Allocation
For a non-uniform source, the SGH-Tree becomes sparse.
How to
i) optimize the storage space
ii) provide quick access to the symbols (data)
Key idea:
Break the SGH-Tree down into smaller clusters (subtrees) so that the memory efficiency increases.
Ex: [Figure: the SGH-Tree of the example, nodes numbered level by level (2, 3; 6, 7; 14, 15; 28-31; 62, 63), holding S1, a5; S2, a4; a3, a2; S3-S5, a1; S6, S7; each level is annotated with its memory efficiency]
level 1: 2/2^1 = 100%
level 2: 3/2^2 = 75%
level 3: 4/2^3 = 50%
level 4: 6/2^4 = 37%
level 5: 7/2^5 = 22%
(The numerator counts the LUT entries actually used if the table is cut at that level: the leaves reached so far plus the internal nodes crossing the cut.)
Remarks:
1. The efficiency changes only when we switch to a new level
(or equivalently to a new CL), and it decreases as we
proceed to the higher levels.
2. Memory efficiency can be interpreted as a measure of the
performance of the system in terms of memory space
requirement; and it is directly related to the sparsity of the
Huffman tree.
3. Higher memory efficiency for the top levels (with smaller CL)
is a clear indication that partitioning the tree into smaller and
less sparse clusters will reduce the memory size. In addition,
clustering also helps to reduce the search time for a symbol.
Definition:
A cluster (subtree) Ti has minimum memory efficiency (MME) Bi if there is no level in Ti with memory efficiency less than Bi.
SGH-Tree Clustering
Given a SGH-tree, as shown in Fig. 2, depending on
the MME (or CL) assigned, the tree is partitioned by a
cut line, x-x, at the Lth level (L=4 for the choice of
MME=50%, in this example).
The first cluster (subtree), as shown in Fig.3(a), is
formed by removing the remainder of the tree beyond
the cut-line x-x.
The cluster length is defined as the maximum path length from the root to a node within the cluster; the cluster length of the first cluster is 4.
Associated with each cluster a lookup table (LUT) is assigned, as shown at the bottom of Fig. 3(a), providing the addressing information for the corresponding terminal node (symbol) within the cluster, or beyond it.
To identify the other clusters in the tree, we draw more cut lines, y-y and z-z, each L levels apart.
More clusters are generated, each starting from a single node (the root of the cluster) and expanding until it is terminated either by terminal nodes or by nodes intercepted by the next cut line.
Next, we construct a super-tree (s-tree) corresponding to the SGH-Tree. In an s-tree each cluster is represented by a node, and the links connecting these nodes represent the branching nodes of the SGH-tree shared between two clusters.
The super-table (ST) associated with the s-tree is shown at the bottom of the tree.
Note that the s-tree has 7 nodes, one for each cluster, while its ST has 6 entries. This is because the root cluster a is left out and the table starts from cluster b.
Entries in the ST and the LUTs
There are two numbers in each location of the ST:
– the first number identifies the cluster length
– the second number is the offset memory address for that cluster
Ex: the ST entry for cluster f holds 11 in binary and 2aH in hexadecimal:
cluster length: 11 + 1 = 100 in binary, i.e. 4
2aH: the starting address of the corresponding LUT in the memory (see Table VII); cluster f starts at address 2aH of the memory in Table VII (i.e. at symbol 18).
Each entry in a LUT is an integer in sign/magnitude format.
Positive integers (sign 0) correspond to nodes existing within the cluster, while negative numbers (sign 1) represent nodes cut off by the next cut line.
The magnitude of a negative integer specifies a location in the ST for further search. For example, consider cluster c:

[Figure: cluster c of the SGH-Tree, spanning nodes 2, 4-7, 10-15, and 24-28 (terminal symbols 0b-12), with the cut line y-y separating the child clusters d, e, f]

LUT for cluster c (sign/magnitude):
entry:      1  2  3  4   5   6   7   8   9  10  11  12  13  14  15  16
sign:       0  0  0  0   0   0   0   0   0   0   0   0   0   1   1   1
magnitude:  4  4  4  4  10  10  11  11  24  25  26  27  28   3   4   5
In the 15th entry of the above LUT we find 1/4 as the sign/magnitude value at that location:
the negative sign (1) indicates that we have to move to another cluster, and the magnitude 4 refers to the 4th entry in the ST, which corresponds to cluster e and contains the numbers 01 and 26H.
For example, the LUT for Fig. 3(a) (sign/magnitude entries):

entry:      1  2  3  4  5  6  7  8   9  10  11  12  13  14  15  16
sign:       0  0  0  0  0  0  0  0   0   0   0   0   0   0   1   1
magnitude:  4  4  4  4  5  5  5  5  24  25  26  27  28  29   1   2

A sign of 0 means the symbol is found in this cluster. Take entry 14, with magnitude 29 = 0001 1101 in binary (bit positions 7 6 5 4 3 2 1 0): the most significant 1-bit, at position 4, indicates the tree depth of the node, i.e. the cluster length (codeword length); the bits to its right, 1101 = dH, give the memory location (address). Codeword length = 4, and symbol 07 is located at address dH (see Table VII).

The 1st row of Table VII:
address:  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
symbol:  00 01  .  .  .  .  .  . 02 03 04 05 06 07  .  .
Huffman Decoding
The decoding procedure starts by receiving an L-bit code cj, where L is the length of the top cluster in the associated SGH-Tree (or the SGHT). This L-bit code cj is then used as the address into the associated lookup table [Fig. 3(a)].
Example:
1. Received codeword: 01100 1011…
The first L = 4 bits, 0110 = 6, serve as an address into the LUT given in Fig. 3(a). The content of the table at this location is 0/5, as sign/magnitude:
0: the symbol is located in this cluster.
5 = 0000 0101; the MS1B is at position 2, so CL = 2. Next to the MS1B we have 01, which represents the codeword:
the symbol (01H) is found at address 01 in the memory (Table VII).
We move to the 15th location in the LUT for cluster f. Here we find 1/6 and refer to the 6th item in the ST.
The data there read 00 in binary and 3aH:
00: the CL of cluster g is 00 + 1 = 01, i.e. 1.
3aH: the memory location offset of cluster g.
Take 1 bit from the bit stream, which is "0"; referring to the LUT of cluster g, location "0" gives 0/2.
So the symbol is in cluster g, and 2 = 0000 0010:
(i) the MS1B is at position 1, so CL = 1;
(ii) to the right of the MS1B is a single 0, identifying the codeword; and
(iii) the symbol (1e) is at location 3a + 0 = 3a in the memory (Table VII).
Remarks:
1. For highly probable symbols with short codewords (4 bits or less) the search for the symbol is very fast and is completed on the first try. For longer codewords, however, the search time grows almost proportionally to the codeword length.
If CL is the codeword length and L is the maximum level selected for each cluster (L = 4 in the above example), then the search time is closely proportional to 1 + CL/L.
2. Increasing L:
i. decreases the search time, speeding up decoding;
ii. grows the memory space requirement.
Trade-off.
Huffman codes with constrained length
Due to lookup table size constraints, no codeword longer than L bits is allowed.
Let the source symbols be $S_1, S_2, \ldots, S_N$ with probabilities $P_1 \ge P_2 \ge \cdots \ge P_N$.
The design procedure for a shortened Huffman code:
1. Partition S into two sets S1 and S2 as
$S_1 = \{ s_i \mid p_i \ge \frac{1}{2^L} \}$
$S_2 = \{ s_i \mid p_i < \frac{1}{2^L} \}$
2. Create a special symbol Q whose frequency of occurrence is
$q = \sum_{i \in S_2} p_i$
3. Augment S1 by Q to form a new set W. Construct an optimal prefix code for W using the design procedure for unconstrained-length Huffman codewords. This yields
codewords cs for the symbols in the set S1, and
a codeword cq for the symbol Q;
cq is the shortened-code prefix for the symbols in S2. A sketch of this construction appears below.
If li is the length of the i-th codeword of S1, then
$\max_{s_i \in S_1} l_i \le \max_{s_i \in S_1} \left\lceil \log_2 \frac{1}{p_i} \right\rceil \le L$
($\lceil x \rceil$ denotes the smallest integer larger than or equal to x.)
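A sketch of this construction in Python, reusing the huffman_code sketch from earlier; symbols are assumed to be integer indices, so the fixed-length tail is simply the L-bit symbol number, as in the worked example below:

    def shortened_huffman(probs, L):
        # probs: {symbol index: probability}.  Symbols with p < 2**-L are
        # folded into the escape symbol Q; a plain Huffman code is designed
        # for S1 + {Q}; each S2 symbol is sent as c_q plus an L-bit tail.
        T = 2.0 ** -L
        s2 = [s for s, p in probs.items() if p < T]
        w = {s: p for s, p in probs.items() if p >= T}
        w["Q"] = sum(probs[s] for s in s2)       # the special symbol Q
        code = huffman_code(w)                   # unconstrained design
        cq = code.pop("Q")
        for s in s2:
            code[s] = cq + f"{s:0{L}b}"          # L-bit symbol number
        return code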
Encoding: input message string m1, m2, …, mk.
For all mi ∈ S1, output the corresponding codeword from Cs.
For all mi ∈ S2, output cq followed by the fixed-length binary representation of mi.
Let lsh be the average codeword length of the shortened code, and lw the average codeword length for W.
$H_w = \sum_{i \in S_1} p_i \log_2 \frac{1}{p_i} + q \log_2 \frac{1}{q}$
$H_s = \sum_{i \in S_1} p_i \log_2 \frac{1}{p_i} + \sum_{i \in S_2} p_i \log_2 \frac{1}{p_i} \ge H_w$
$l_{sh} \le l_w + qL$
$H_w \le l_w \le H_w + 1$
$H_s \le l_{sh} \le H_w + qL + 1$
$qL = \sum_{i \in S_2} p_i L \le \sum_{i \in S_2} p_i \log_2 \frac{1}{p_i}$, since $p_i < 2^{-L}$ for $i \in S_2$.
Symbol i   pi       li   Codeword
2          0.1419   3    011
3          0.1389   3    010
4          0.0514   4    0011
5          0.0513   4    0010
6          0.0153   5    00011
7          0.0153   5    00010
8          0.0072   6    000011
9          0.0068   6    000010
10         0.0038   7    0000011
11         0.0032   7    0000010
12         0.0019   7    0000001
13         0.0013   8    00000001
14         0.0007   9    000000001
15         0.0004   9    000000000

The longest codeword is 9 bits, so a 2^9 = 512-entry table is needed for lookup-table-based decoding.
Now suppose only a 128-entry lookup table can be permitted: a 7-bit shortened Huffman code is required.
Code construction
1. $S_1 = \{ s_0, s_1, \ldots, s_7 \}$
$S_2 = \{ s_8, s_9, \ldots, s_{15} \}$, since $P_i < \frac{1}{128}$ for $i = 8$ to $15$.
$q = \sum_{i=8}^{15} P_i = 0.0253$
Symbol i   pi       li   Codeword   Additional
0          0.2820   2    11
1          0.2786   2    01
2          0.1419   3    101
3          0.1389   3    100
4          0.0514   4    0011
5          0.0513   4    0010
6          0.0153   5    00011
7          0.0153   5    00010
8          0.0072   11   0000       0001000
9          0.0068   11   0000       0001001
10         0.0038   11   0000       0001010
11         0.0032   11   0000       0001011
12         0.0019   11   0000       0001100
13         0.0013   11   0000       0001101
14         0.0007   11   0000       0001110
15         0.0004   11   0000       0001111
2. For all symbols in S2 we need the 4-bit prefix code cq = 0000 followed by a 7-bit representation of the specific symbol in S2.
$l_{sh} = \sum_{i=0}^{7} l_i p_i + \sum_{i=8}^{15} 11 p_i = 2.8057$ bits
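The average length can be checked directly (a sketch using the table values above):

    p = [0.2820, 0.2786, 0.1419, 0.1389, 0.0514, 0.0513, 0.0153, 0.0153,
         0.0072, 0.0068, 0.0038, 0.0032, 0.0019, 0.0013, 0.0007, 0.0004]
    l = [2, 2, 3, 3, 4, 4, 5, 5] + [11] * 8   # lengths from the table
    print(round(sum(li * pi for li, pi in zip(l, p)), 4))   # 2.8057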
Decoding:
1. We first construct a lookup table as described above.
2. From the input bit stream, we fetch bits into a buffer until the buffer holds 7 bits. We access the lookup table location, using the 7 bits as an address. This lookup table location contains (mk, lk).
3. The first lk bits in the buffer are discarded by shifting the buffer contents to the left by lk bit positions.
If mk ≠ Q, then mk ∈ {S0, S1, …, S7}, and we have correctly decoded this symbol.
If mk = Q, additional bits from the input bit stream are needed for decoding. We fetch lk bits from the bit stream to fill up the buffer. The buffer now contains the binary representation of one of the symbols S8, S9, …, S15, and thus we have correctly decoded a symbol from S2.
4. Repeat steps 2 and 3 until the complete message has been decoded, as sketched below.
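A sketch of this decoder in Python, assuming a lookup table built from the code for W (the S1 symbols plus the escape symbol Q, e.g. via the build_lut sketch above):

    def decode_shortened(bits, lut, L, nsymbols):
        # lut: 2**L entries of (symbol, codeword length); escape entries
        # map to ("Q", len(c_q)).
        bits += "0" * L                        # padding for the final refill
        out, pos = [], 0
        while len(out) < nsymbols:
            sym, lk = lut[int(bits[pos:pos + L], 2)]
            pos += lk                          # discard the first l_k bits
            if sym == "Q":                     # escape: the next L bits are
                out.append(int(bits[pos:pos + L], 2))   # the symbol number
                pos += L
            else:
                out.append(sym)
        return out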
Lookup table size (entries)   Worst case lsh - lave (bits/symbol)
16                            0.4213
32                            0.2326
64                            0.2326
128                           0.1342
256                           0.0731
512                           0.0338
Constrained-length Huffman codes: prefix-free, with a constant output decoding rate using a table-lookup decoder
For a maximum codeword length of L bits, we define a threshold T = 2^-L.
Sort the si, i = 1, 2, …, N, so that pk ≥ pk+1.
For each pi, if pi < T, set pi = T.
Design the codebook using the modified pi values and the unconstrained-length Huffman code table design approach.
Since every modified pi is at least 2^-L, no codeword length will exceed L bits.
Codeword lengths tracking log2(1/pi) are not guaranteed, however. This is due to the fact that some of the probabilities were set to the threshold T, and hence the ordering among the probabilities is obscured.
Reordering is done by simply sorting the codeword lengths in ascending order of magnitude and re-associating the corresponding codewords with the probability-ordered list of symbols, as sketched below.
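A sketch of the whole procedure in Python, reusing the huffman_code and assign_codewords sketches from earlier; canonical reassignment of the sorted lengths stands in for "associating the sorted lengths with the codewords":

    def constrained_length_code(probs, L):
        # Clamp small probabilities to T = 2**-L, design as usual, then
        # re-sort the lengths so shorter codewords go to the more
        # probable symbols.
        T = 2.0 ** -L
        code = huffman_code({s: max(p, T) for s, p in probs.items()})
        lengths = sorted(len(c) for c in code.values())
        by_prob = sorted(probs, key=probs.get, reverse=True)
        return dict(zip(by_prob, assign_codewords(lengths)))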
Symbol i   pi       l   Codeword   Reordered l   Reordered codeword
0          0.2820   2   11         2             11
1          0.2786   2   01         2             01
2          0.1419   3   101        3             101
3          0.1389   3   100        3             100
4          0.0514   4   0010       4             0010
5          0.0513   4   0001       4             0001
6          0.0153   6   001100     6             001100
7          0.0153   6   001101     6             001101
8          0.0072   7   0011110    6             000010
9          0.0068   7   0011111    6             000011
10         0.0038   7   0011100    6             000000
11         0.0032   7   0011101    6             000001
12         0.0019   6   000010     7             0011110
13         0.0013   6   000011     7             0011111
14         0.0007   6   000000     7             0011100
15         0.0004   6   000001     7             0011101
lave                2.7308                       2.7141
This yields single-layer decoding.

Constrained-length Huffman codes: the Voorhis method [1974, IEEE Trans. IT] gives near-optimum codeword lengths.
Given $S_1, S_2, \ldots, S_N$ with $P_1 \ge P_2 \ge \cdots \ge P_N$, the codeword lengths $l_k$ must satisfy $\sum_{k=1}^{N} 2^{-l_k} \le 1$.
Symbol Si   Voorhis code
0           11
1           10
2           011
3           010
4           0011
5           0010
6           00011
7           00010
8           0000111
9           0000110
10          0000101
11          0000100
12          0000011
13          0000010
14          0000001
15          0000000
lave        2.7045
Homework:
1. Consider codes that satisfy the suffix condition, which says that no codeword is a suffix of any other codeword. Show that a suffix condition code is uniquely decodable, and show that the minimum average length over all codes satisfying the suffix condition is the same as the average length of the Huffman code for that random variable. (Suffix codes.)
2. Suppose that X = i with probability pi, i = 1, 2, …, m. Let li be the number of binary symbols in the codeword associated with X = i, and let ci denote the cost per letter of the codeword when X = i. Thus the average cost C of the description of X is
$C = \sum_{i=1}^{m} p_i c_i l_i$
a) Minimize C over all l1, l2, …, lm such that $\sum_i 2^{-l_i} \le 1$. Ignore any implied integer constraints on li. Exhibit the minimizing l1*, l2*, …, lm* and the associated minimum value C*.
b) How would you use the Huffman code procedure to minimize C over all uniquely decodable codes? Let CHuffman denote this minimum. Show that
$C^* \le C_{Huffman} \le C^* + \sum_{i=1}^{m} p_i c_i$
3. A computer generates a number X according to a known probability mass function p(x), x ∈ {1, 2, …, 100}. The player asks arbitrary yes-no questions sequentially until X is determined. If he is right (i.e., X is determined), he receives a prize of value v(x).
a) How should the player proceed to maximize his expected winnings? What is his expected return?
b) Continuing (a), what if v(x) is fixed, but p(x) can be chosen by the computer (and then announced to the player)? The computer wishes to minimize the player's expected return. What should p(x) be? What is the expected return to the player?
(The game of Hi-Lo.)
4. Although the codeword lengths of an optimal variable-length code are complicated functions of the message probabilities {p1, p2, …, pm}, it can be said that less probable symbols are encoded into longer codewords. Suppose that the message probabilities are given in decreasing order p1 ≥ p2 ≥ … ≥ pm.
a) Prove that for any binary Huffman code, if the most probable message symbol has probability p1 > 2/5, then that symbol must be assigned a codeword of length 1.
b) Prove that for any binary Huffman code, if the most probable message symbol has probability p1 < 1/3, then that symbol must be assigned a codeword of length at least 2.