Tree-Structured Vector Quantizers
If we introduce some structure into the codebook, it becomes easy to identify which
part of the codebook contains the desired output vector.
Consider the two-dimensional vector quantizer shown in the figure.
For a given input to this vector quantizer, we can reduce the number of comparisons
necessary for finding the closest output point by using the sign on the components of the
input.
The sign on the components of the input vector will tell us in which quadrant the input
lies.
Since all the quadrants are mirror images of the neighboring quadrants, the closest output
point to a given input will lie in the same quadrant as the input itself.
This reduces the number of required comparisons by a factor of four.
This approach can be extended to L dimensions, where the signs of the L components of
the input vector tell us in which of the 2^L hyperquadrants the input lies, which in
turn reduces the number of comparisons by a factor of 2^L.
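The sign test above can be sketched as follows. The bit convention used here (bit i of the index is set when component i is negative) is an assumption for illustration; any fixed sign-to-bit mapping works as long as the codebook is partitioned the same way.

```python
import numpy as np

def hyperquadrant_index(x):
    """Map the signs of the L components of x to one of 2**L hyperquadrant
    indices. Bit i of the index is 1 when component i is negative
    (an illustrative convention, not a fixed standard)."""
    bits = (np.asarray(x) < 0).astype(int)
    return int(sum(b << i for i, b in enumerate(bits)))

# With a symmetric codebook stored per quadrant, the nearest-neighbor
# search then touches only the codewords filed under this index.
print(hyperquadrant_index([0.3, -1.2]))  # → 2 (x >= 0, y < 0)
```

For L = 2 this is exactly the four-quadrant test in the text; the same L sign checks replace a full scan over the symmetric portions of the codebook.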
This approach works well when the output points are distributed in a symmetrical manner.
However, it breaks down as the distribution of the output points becomes less
symmetrical.
Consider the vector quantizer shown in the figure below.
Here we divide the output points into two groups, assign a test vector to each group,
and first compare the input against the two test vectors to select the closer group;
the input is then compared only against the output points in that group.
If the total number of output points is K, with this approach we have to make K/2 + 2
comparisons instead of K comparisons.
This process can be continued by splitting the output points in each group into two
groups and assigning a test vector to the subgroups.
So group0 would be split into group00 and group01, with associated test vectors labeled
00 and 01, and group1 would be split into group10 and group11, with associated test
vectors labeled 10 and 11.
In this way the number of comparisons required to obtain the final output point would be
2 log2 K instead of K.
Thus, for a codebook of size 4096 we would need 24 vector comparisons instead of 4096
vector comparisons.
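The resulting search can be sketched as a descent through the tree of test vectors, two distance computations per level. The `test_vectors` dictionary keyed by binary-string labels ('0', '1', '00', '01', ...) is a hypothetical layout chosen for illustration.

```python
import numpy as np

def tsvq_encode(x, test_vectors, depth):
    """Descend a binary TSVQ tree: at each level, compare x to the two
    test vectors of the current node's children and follow the closer one.
    The leaves at the given depth are the codebook output points, so the
    returned label is the binary index of the selected output point."""
    x = np.asarray(x, dtype=float)
    label = ""
    for _ in range(depth):
        d0 = np.linalg.norm(x - np.asarray(test_vectors[label + "0"]))
        d1 = np.linalg.norm(x - np.asarray(test_vectors[label + "1"]))
        label += "0" if d0 <= d1 else "1"
    return label
```

For a codebook of size K = 4096 the tree has depth log2 4096 = 12, so the loop performs 2 × 12 = 24 distance computations, matching the count in the text.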
The comparisons that must be made at each step are shown in the figure below.
First, obtain the average of all the training vectors, perturb it to obtain a second vector,
and use these vectors to form a two-level vector quantizer.
Let us label these two vectors 0 and 1, and the groups of training set vectors that would
be quantized to each of these two vectors group0 and group1.
We will later use these vectors as test vectors.
We perturb these output points to get the initial vectors for a four-level vector quantizer.
At this point, the design procedure for the tree-structured vector quantizer deviates from
the splitting technique.
Instead of using the entire training set to design a four-level vector quantizer, we use the
training set vectors in group0 to design a two-level vector quantizer with output points
labeled 00 and 01.
We use the training set vectors in group1 to design a two-level vector quantizer with
output points labeled 10 and 11.
We also split the training set vectors in group0 and group1 into two groups each.
The vectors in group0 are split, based on their proximity to the vectors labeled 00 and 01,
into group00 and group01, and the vectors in group1 are divided in a like manner into the
groups group10 and group11.
The vectors labeled 00, 01, 10, and 11 will act as test vectors at this level.
To get an eight-level quantizer, we use the training set vectors in each of the four groups
to obtain four two-level vector quantizers.
We continue in this manner until we have the required number of output points.
Notice that in the process of obtaining the output points, we have also obtained the test
vectors required for the quantization process.
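The design procedure above can be sketched recursively. The `two_level_vq` helper below is a minimal LBG-style stand-in (group average, small perturbation, a few Lloyd iterations), not the exact procedure from the text, and the labeling scheme mirrors the group0/group1 naming used above.

```python
import numpy as np

def two_level_vq(train, iters=10, eps=1e-3):
    """Design a two-level quantizer: start from the average of the training
    vectors and a perturbed copy, then refine with a few Lloyd iterations
    (a minimal LBG-style sketch)."""
    mean = train.mean(axis=0)
    codes = np.stack([mean, mean + eps])
    for _ in range(iters):
        dists = np.linalg.norm(train[:, None] - codes[None], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):  # guard against an empty group
                codes[k] = train[assign == k].mean(axis=0)
    return codes, assign

def build_tsvq(train, levels, label=""):
    """Recursively split each group with a two-level design. The node
    vectors stored in the tree double as the test vectors used during
    encoding; the leaves are the final output points."""
    tree = {}
    if levels == 0:
        return tree
    codes, assign = two_level_vq(train)
    for k in range(2):
        child = label + str(k)
        tree[child] = codes[k]
        group = train[assign == k]
        if len(group) > 0:
            tree.update(build_tsvq(group, levels - 1, child))
    return tree
```

Calling `build_tsvq(train, 12)` would produce the test vectors and the 4096 output points for the example above in one pass, since each recursion level performs exactly the split-and-redesign step described in the text.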
Once we have built a tree-structured codebook, we can sometimes improve its rate
distortion performance by removing carefully selected subgroups.
Removal of a subgroup, referred to as pruning, will reduce the size of the codebook and
hence the rate.
It may also result in an increase in distortion.
Therefore, the objective of pruning is to remove those subgroups whose removal gives the
best trade-off between rate and distortion.
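The selection rule can be sketched as follows: among candidate subgroups, remove the one whose distortion increase per bit of rate saved is smallest (a generalized-BFOS-style criterion). The per-node statistics here are assumed to be precomputed and are purely illustrative.

```python
def best_prune(candidates):
    """Pick the subgroup whose removal costs the least distortion per bit
    of rate saved. `candidates` maps a node label to a pair
    (delta_distortion, delta_rate): the increase in distortion and the
    decrease in rate if that subtree is pruned (hypothetical precomputed
    statistics)."""
    return min(candidates, key=lambda n: candidates[n][0] / candidates[n][1])

# Illustrative numbers: pruning "01" costs only 1.5 distortion per bit saved.
stats = {"00": (4.0, 1.0), "01": (1.5, 1.0), "1": (9.0, 2.0)}
print(best_prune(stats))  # → '01'
```

Applying this rule repeatedly traces out a sequence of progressively smaller codebooks, each giving a different point on the rate-distortion trade-off.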