Tree-Structured Vector Quantizers

This document discusses tree-structured vector quantizers (TSVQ), which introduce a tree structure into the vector quantization codebook. The tree reduces the number of comparisons needed to find the closest output vector from K to 2 log₂ K. TSVQ works by partitioning the output vectors into groups, assigning a test vector to each group, and making a decision at each tree node to direct the search. The design process recursively splits the training vectors into subgroups, and pruning subgroups can improve the rate-distortion trade-off by removing unnecessary parts of the tree.


Signal Compression

Dr. Waquar Ahmad

National Institute of Technology Calicut


[email protected]

November 15, 2021



TREE-STRUCTURED VECTOR QUANTIZERS

If we introduce some structure into the codebook, it becomes easy to identify which part of the codebook contains the desired output vector.
Consider the two-dimensional vector quantizer shown in the figure.



TSVQ

For a given input to this vector quantizer, we can reduce the number of comparisons needed to find the closest output point by using the signs of the components of the input.
The signs of the components of the input vector tell us in which quadrant the input lies.
Because all the quadrants are mirror images of the neighboring quadrants, the closest output point to a given input will lie in the same quadrant as the input itself.
This reduces the number of required comparisons by a factor of four.
The approach extends to L dimensions: the signs of the L components of the input vector tell us in which of the 2^L hyperquadrants the input lies, which in turn reduces the number of comparisons by a factor of 2^L.
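
As a quick illustration of this idea (a minimal Python sketch, not part of the lecture; the bit convention for the signs is an arbitrary assumption), the hyperquadrant containing an input can be found with L sign tests:

import numpy as np

def hyperquadrant(x):
    # One bit per component: 0 if the component is non-negative, 1 if it is
    # negative. L sign tests identify one of the 2^L hyperquadrants, so only
    # the output points in that hyperquadrant need to be searched.
    bits = (np.asarray(x) < 0).astype(int)
    return int("".join(map(str, bits)), 2)

# In two dimensions, (1.3, -0.4) has signs (+, -), giving quadrant index 0b01.
print(hyperquadrant([1.3, -0.4]))  # prints 1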


This approach works well when the output points are distributed in a symmetrical manner.
However, it breaks down as the distribution of the output points becomes less
symmetrical.
Consider the vector quantizer shown in the figure below



The output points are shown as filled circles, and the input point is marked by an X.
Observe that although the input lies in the first quadrant, the closest output point lies in the fourth quadrant.
The situation gets worse as we lose more and more of the symmetry.
Consider the figure below.



In this quantizer, not only will we get an incorrect output point when the input is close to the boundaries of the first quadrant, but there is also no significant reduction in the amount of computation required.
Most of the output points are in the first quadrant.
Therefore, whenever the input falls in the first quadrant, which it will do most of the time because the quantizer design reflects the distribution of the input, we do not obtain a great reduction in the number of comparisons.
The idea of using the L-dimensional equivalents of quadrants to partition the output
points in order to reduce the computational load can be extended to nonsymmetrical
situations.

Divide the set of output points into two groups, group0 and group1.
Assign each group a test vector such that output points in each group are closer to the
test vector assigned to that group than to the test vector assigned to the other group.

Label the two test vectors 0 and 1.


When we get an input vector, we first compare it against the two test vectors.
Depending on the outcome, the input is then compared only against the output points associated with the test vector closer to the input.

If the total number of output points is K, with this approach we make K/2 + 2 comparisons instead of K: two comparisons against the test vectors, followed by K/2 comparisons against the output points of the selected group.
This process can be continued by splitting the output points in each group into two subgroups and assigning a test vector to each subgroup.
So group0 would be split into group00 and group01, with associated test vectors labeled 00 and 01, and group1 would be split into group10 and group11, with associated test vectors labeled 10 and 11.
Continuing in this way, the number of comparisons required to obtain the final output point becomes 2 log₂ K instead of K: we descend log₂ K levels of the tree and make two test-vector comparisons at each level.
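
A minimal Python sketch of this search (the Node layout and names are illustrative assumptions, not notation from the lecture): each internal node holds the two test vectors labeled 0 and 1, each leaf holds an output point, and each level of the tree costs two distance computations.

class Node:
    # Internal nodes carry a pair of test vectors and a pair of subtrees;
    # leaves carry only an output point (a codebook vector).
    def __init__(self, tests=None, children=None, output=None):
        self.tests = tests
        self.children = children
        self.output = output

def sq_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def tsvq_encode(root, x):
    # Descend the tree, appending one bit per level; log₂ K levels with two
    # comparisons each give the 2 log₂ K figure quoted above.
    node, bits = root, []
    while node.output is None:
        b = 0 if sq_dist(x, node.tests[0]) <= sq_dist(x, node.tests[1]) else 1
        bits.append(b)
        node = node.children[b]
    return bits, node.output

# A depth-1 example: two output points, with test vectors equal to the outputs.
leaf0, leaf1 = Node(output=(-1.0, 0.0)), Node(output=(1.0, 0.0))
root = Node(tests=[(-1.0, 0.0), (1.0, 0.0)], children=[leaf0, leaf1])
print(tsvq_encode(root, (0.7, 0.2)))  # -> ([1], (1.0, 0.0))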

Thus, for a codebook of size K = 4096, we need only 2 log₂ 4096 = 2 × 12 = 24 vector comparisons instead of 4096.
The comparisons that must be made at each step are shown in the figure below.

The label inside each node is the label of the test vector that we compare the input
against.
This tree of decisions is what gives tree-structured vector quantizers (TSVQ) their name.
As we progress down the tree, we also build a binary string.
As the leaves of the tree are the output points, by the time we reach a particular leaf or,
in other words, select a particular output point, we have obtained the binary codeword
corresponding to that output point.
Building the binary codeword as we progress through the series of decisions required to find the final output gives tree-structured vector quantizers some other interesting properties. For instance, even if only a partial codeword is transmitted, we can still obtain an approximation of the input vector.
If the quantized value was codebook vector 5, the binary codeword would be 011.
However, if only the first two bits, 01, were received by the decoder, the input could still be approximated by the test vector labeled 01.
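
Continuing the earlier sketch (same hypothetical Node layout), progressive decoding is a walk down the tree that stops wherever the bits run out:

def tsvq_decode(root, bits):
    # Follow the received bits down the tree. A complete codeword reaches a
    # leaf; a truncated one stops at an internal node, and the last test
    # vector passed serves as a coarse approximation of the input.
    node, approx = root, None
    for b in bits:
        if node.output is not None:
            break
        approx = node.tests[b]
        node = node.children[b]
    return node.output if node.output is not None else approx

With the full codeword 011 the decoder reaches codebook vector 5; with only the prefix 01 it returns the test vector labeled 01 instead.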
DESIGN OF TREE-STRUCTURED VECTOR QUANTIZERS

First, obtain the average of all the training vectors, perturb it to obtain a second vector, and use these two vectors to form a two-level vector quantizer.
Let us label these two vectors 0 and 1, and the groups of training set vectors that would
be quantized to each of these two vectors group0 and group1.
We will later use these vectors as test vectors.
We perturb these output points to get the initial vectors for a four-level vector quantizer.
At this point, the design procedure for the tree-structured vector quantizer deviates from
the splitting technique.
Instead of using the entire training set to design a four-level vector quantizer, we use the
training set vectors in group0 to design a two-level vector quantizer with output points
labeled 00 and 01.


We use the training set vectors in group1 to design a two-level vector quantizer with
output points labeled 10 and 11.
We also split the training set vectors in group0 and group1 into two groups each.
The vectors in group0 are split, based on their proximity to the vectors labeled 00 and 01, into group00 and group01, and the vectors in group1 are divided likewise into group10 and group11.
The vectors labeled 00, 01, 10, and 11 will act as test vectors at this level.
To get an eight-level quantizer, we use the training set vectors in each of the four groups
to obtain four two-level vector quantizers.
We continue in this manner until we have the required number of output points.
Notice that in the process of obtaining the output points, we have also obtained the test
vectors required for the quantization process.
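
The whole design procedure can be condensed into a short sketch (assumptions: the hypothetical Node layout from the earlier encoder sketch, a multiplicative perturbation constant, and a fixed number of Lloyd iterations for each two-level design; none of these specifics come from the slides):

import numpy as np

def two_level_design(vectors, iters=20, eps=1e-3):
    # Average the training vectors, perturb the average to get a second
    # vector, then refine both output points with a few Lloyd iterations.
    c = vectors.mean(axis=0)
    codebook = np.stack([c * (1 - eps), c * (1 + eps)])
    for _ in range(iters):
        d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook, labels

def build_tsvq(vectors, depth):
    # Design a two-level quantizer for this group; its two output points
    # become the test vectors at this level, and each subgroup is split
    # recursively until the required number of output points is reached.
    if depth == 0 or len(vectors) < 2:
        return Node(output=vectors.mean(axis=0))
    tests, labels = two_level_design(vectors)
    children = [build_tsvq(vectors[labels == k], depth - 1)
                if np.any(labels == k) else Node(output=tests[k])
                for k in (0, 1)]
    return Node(tests=tests, children=children)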



PRUNED TREE-STRUCTURED VECTOR QUANTIZERS

Once we have built a tree-structured codebook, we can sometimes improve its rate-distortion performance by removing carefully selected subgroups.
Removal of a subgroup, referred to as pruning, will reduce the size of the codebook and
hence the rate.
It may also result in an increase in distortion.
Therefore, the objective of pruning is to remove those subgroups that result in the best trade-off between rate and distortion.
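
The slides do not name a specific pruning rule. A standard choice (the generalized BFOS algorithm of Chou, Lookabaugh, and Gray) is to repeatedly remove the subtree whose removal costs the least distortion per bit of rate saved, that is, the subtree minimizing

    lambda = (D_pruned - D) / (R - R_pruned)

where D and R are the distortion and rate of the current tree, and to stop pruning once the target rate is reached.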
