CABAC Encoder: Context Based Adaptive Binary Arithmetic Coding Encoder

CABAC Encoder consists of 3 main steps: binarization, context modelling, and binary arithmetic coding. Context modelling assigns probability distributions to binary symbols based on previous symbols. Binary arithmetic coding encodes each bin based on its probability. Two CABAC implementations are described: basic CABAC uses frequency tables that are updated based on encoded symbols, while CABAC with PPM uses a cache of previous context bits to select the appropriate probability table. Experimental results show that compression ratio and space savings increase with larger context sizes for both implementations.


CABAC Encoder: Context-Based Adaptive Binary Arithmetic Coding Encoder


Sanket Panchal (2019EET2347)
Shubham Jain (2019EET2351)

July 25, 2020

1 Introduction
Context-based Adaptive Binary Arithmetic Coding (CABAC) is presented as a normative
part of the ITU-T/ISO/IEC standard H.264/AVC for video compression.
By combining an adaptive binary arithmetic coding technique with context
modeling, a high degree of adaptation and redundancy reduction is achieved.
The CABAC framework also includes a novel low-complexity method for binary
arithmetic coding and probability estimation that is well suited for efficient
hardware and software implementations. CABAC significantly outperforms the
baseline entropy coding method of H.264/AVC for the typical range of envisaged
target applications.

Figure 1: CABAC Encoder Diagram

CABAC encoding mainly consists of 3 elementary steps:

1. Binarization: To fulfill the requirements of "fast and accurate estimation"
and "reduction of computational complexity", this preprocessing step reduces
the alphabet size by mapping each non-binary symbol to a string of binary
symbols (bins).

2. Context Modelling: One of the most important properties of arithmetic
coding is the clean interface between modeling and coding: in the modeling
stage, a model probability distribution is assigned to the given symbols;
in the subsequent coding stage, this distribution drives the actual coding
engine to generate a sequence of bits as a coded representation of the
symbols.

3. Binary Arithmetic Coding: Binary arithmetic coding is based on the
principle of recursive interval subdivision: in each step, the current
interval width R is split in proportion to the probability of the least
probable symbol, which involves the elementary multiplication
R_LPS = R * p_LPS.
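As a concrete illustration of the binarization step, the following sketch (in Python, with function names of my own choosing) shows two binarization schemes of the kind used in CABAC: a unary code and a zeroth-order Exp-Golomb code with a unary prefix.

```python
def unary_binarize(value):
    """Unary binarization: N is mapped to N ones followed by a terminating zero."""
    return [1] * value + [0]

def exp_golomb_binarize(value):
    """Zeroth-order Exp-Golomb binarization with a unary prefix:
    write value + 1 in binary; the prefix is (bit length - 1) ones and a zero,
    the suffix is the binary representation without its leading 1."""
    bits = bin(value + 1)[2:]
    prefix = [1] * (len(bits) - 1) + [0]
    suffix = [int(b) for b in bits[1:]]
    return prefix + suffix
```

For example, unary_binarize(3) yields [1, 1, 1, 0]; in both schemes small values get short bin strings, and the alphabet seen by the arithmetic coder is strictly binary.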

2 Integer Implementation Algorithm for Binary Arithmetic Coding
Recall that the rationale for using numbers in the interval [0, 1) as a tag was
that there are infinitely many numbers in this interval. In practice, however,
the set of numbers that can be uniquely represented on a machine is limited by
the maximum number of digits (or bits) available to represent them. Consider
the lower and upper interval limits l(n) and u(n): as n grows, these values
come closer and closer together, so representing all of the subintervals
uniquely requires increasing precision as the sequence gets longer. In a
system with finite precision the two values are bound to converge, and we
would lose all information about the sequence from the point at which they
converged. To avoid this, we need to rescale the interval, but in a way that
preserves the information being transmitted. We would also like to perform the
encoding incrementally, that is, to transmit portions of the code as the
sequence is being observed, rather than waiting until the entire sequence has
been observed before transmitting the first bit. The algorithm described in
this section takes care of both problems: synchronized rescaling and
incremental encoding.
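The rescaling logic can be sketched in Python as follows. This is a minimal sketch with a static frequency model; all names are my own, and the tag termination at the end is simplified:

```python
def arithmetic_encode(symbols, freq, precision=16):
    """Integer arithmetic encoder with incremental rescaling.
    freq maps each symbol to its count; the model is static here."""
    # Build the cumulative frequency table.
    total = sum(freq.values())
    cum, running = {}, 0
    for s in sorted(freq):
        cum[s] = (running, running + freq[s])
        running += freq[s]

    full = (1 << precision) - 1          # largest representable code value
    half = 1 << (precision - 1)
    quarter = 1 << (precision - 2)

    low, high = 0, full
    pending = 0                          # bits deferred by the E3 condition
    out = []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)  # flush deferred complementary bits
        pending = 0

    for s in symbols:
        span = high - low + 1
        lo_c, hi_c = cum[s]
        high = low + span * hi_c // total - 1
        low = low + span * lo_c // total
        while True:
            if high < half:              # E1: interval in the lower half
                emit(0)
            elif low >= half:            # E2: interval in the upper half
                emit(1)
                low -= half
                high -= half
            elif low >= quarter and high < half + quarter:  # E3: straddles middle
                pending += 1
                low -= quarter
                high -= quarter
            else:
                break
            low = 2 * low                # rescale: double the interval
            high = 2 * high + 1
    # Termination: emit enough bits to pin down the final interval.
    pending += 1
    emit(0 if low < quarter else 1)
    return out
```

The E1/E2 branches shift out a decided bit and double the interval; the E3 branch defers a bit whose value is only known once the interval leaves the middle straddle.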

Figure 2: Integer Implementation Algorithm

3 CABAC Algorithm
Coding a data symbol involves the following stages.
• Binarization: CABAC uses binary arithmetic coding, which means that
only binary decisions (1 or 0) are encoded. A non-binary-valued symbol
(e.g. a transform coefficient or motion vector) is "binarized", i.e. converted
into a binary code, prior to arithmetic coding. This process is similar
to converting a data symbol into a variable-length code, but the binary
code is further encoded (by the arithmetic coder) prior to transmission.

The following stages are repeated for each bit (or "bin") of the binarized symbol.
• Context model selection: A "context model" is a probability model for one
or more bins of the binarized symbol. This model may be chosen from a
selection of available models depending on the statistics of recently coded
data symbols. The context model stores the probability of each bin being
"1" or "0".
• Arithmetic encoding: An arithmetic coder encodes each bin according to
the selected probability model. Note that there are just two sub-ranges
for each bin (corresponding to "0" and "1").
• Probability update: The selected context model is updated based on the
actual coded value (e.g. if the bin value was "1", the frequency count of
"1"s is increased).
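The stages above can be sketched as follows. This is a toy model of my own for illustration; real CABAC uses a finite-state probability estimator rather than explicit counts:

```python
class ContextModel:
    """Adaptive probability model for one bin: frequency counts of 0s and 1s."""
    def __init__(self):
        self.counts = [1, 1]                 # start from a uniform estimate

    def p_one(self):
        return self.counts[1] / sum(self.counts)

    def update(self, bit):
        self.counts[bit] += 1                # probability update stage

def encode_bin(low, high, bit, model):
    """One arithmetic-encoding stage: split [low, high) into two sub-ranges
    proportional to P(0) and P(1) and keep the sub-range of the coded bit."""
    split = low + int((high - low) * (1.0 - model.p_one()))
    low, high = (low, split) if bit == 0 else (split, high)
    model.update(bit)                        # the model adapts to the coded value
    return low, high
```

Each call narrows the interval by the selected model's probability and then updates that model, which is exactly the selection/encode/update loop listed above.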

4 Implementation
Two types of Implementation has been performed, One of them basic CABAC
and another is CABAC with prediction by partial matching(PPM). Difference
between them is in the creation and selection of Frequency tables. Before that
flow goes in same direction for both like : Decompressing and Image from
jpg/png, converting to binary image, save them as flatten numpy array and call
respective Encoding function.

1. CABAC: In this implementation, according to the chosen context order, all
possible contexts have their respective symbol frequency tables initialized
with 1, and the initial context bits are 00...0 (r zeros, where r is the
context order). When a new symbol 0/1 is encountered, the symbol frequency
table of its respective context (i.e. the previous bits up to the context
order) is updated, and the context is also updated by appending the current
symbol and dropping the first symbol. Context modelling for CABAC is shown
below.

Figure 3: Context Model for order 2
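The table handling described above can be sketched as follows (function names are my own; the call to the actual arithmetic coder is omitted):

```python
from itertools import product

def build_tables(order):
    """One symbol frequency table per possible context, initialized with 1."""
    return {''.join(c): [1, 1] for c in product('01', repeat=order)}

def model_stream(bits, order):
    """Update each context's frequency table while sliding the context
    window: append the current symbol, drop the oldest one."""
    tables = build_tables(order)
    context = '0' * order                    # initial context bits 00...0
    for b in bits:
        tables[context][b] += 1              # count b under its context
        # here the symbol would also be arithmetic-coded with this table
        context = (context + str(b))[-order:] if order else ''
    return tables
```

For a context order of r there are 2^r tables, and each incoming bit updates exactly the table of the r bits that preceded it.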

2. CABAC with PPM: In this implementation, according to the chosen context
order, a list is kept whose entries hold the contexts of each order together
with their respective frequency tables, and a cache of the previous r bits
(r = context order) is maintained. When a symbol is encountered, if its
frequency in the table of its current context is > 0, the symbol is encoded
using that table, and that table as well as the lower-order context tables are
updated. If its frequency in the context table is 0, we reduce the context
length by one and look it up in the next lower-order context, repeating the
same procedure until a usable table is found. The cache of previous bits is
also updated. The context model is shown below.

Figure 4: Context Model for order 2
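A minimal sketch of the escape-to-lower-order logic (my own naming; as a simplifying assumption, every order's table is updated on each step so that higher-order contexts can accumulate counts and be used later):

```python
def ppm_step(tables, cache, bit):
    """One PPM modelling step. tables[k] maps a length-k context string to
    [count0, count1]; cache holds the previous r bits. Returns the order
    whose table would be used to encode `bit`."""
    r = len(cache)
    k = r
    # Escape: shorten the context while the bit's count there is zero.
    while k > 0 and tables[k].get(cache[r - k:], [0, 0])[bit] == 0:
        k -= 1
    # Update the tables of every order (order 0 uses the empty context).
    for j in range(r, -1, -1):
        ctx = cache[r - j:]
        tables[j].setdefault(ctx, [0, 0])[bit] += 1
    return k
```

The caller then slides the cache with cache = (cache + str(bit))[1:]. On the first occurrence of a bit the search escapes all the way to order 0, whose table is always usable; afterwards the full-order context is selected directly.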

5 Results
Compression Ratio = Uncompressed File size/Compressed file size Space Saved
= (1-1/Compression Ratio)*100
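These two formulas, in Python:

```python
def compression_metrics(uncompressed_size, compressed_size):
    """Compression ratio and percentage of space saved."""
    ratio = uncompressed_size / compressed_size          # ratio > 1 means smaller output
    space_saved = (1 - 1 / ratio) * 100                  # percentage of the original size removed
    return ratio, space_saved
```

For example, compression_metrics(1000, 250) returns (4.0, 75.0): a 4:1 ratio saves 75% of the space.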

1. CABAC: The following figure shows the compression ratio and the percentage
of space saved for context lengths from 0 to 16. The figure shows that both
the compression ratio and the space saved increase with context length.

Figure 5: CABAC Results

2. CABAC with PPM: The following figure shows the compression ratio and the
percentage of space saved for context lengths from 0 to 16. Here the results
align closely with plain CABAC, because the skewness in the frequencies
achieved by the two methods is very similar.

Figure 6: CABAC with PPM Results

6 Experiments That Did Not Work

We tried to implement CABAC by creating static tables only for the chosen
context order and the orders below it. For example, for context order 2 we
create a table of 8 entries for it, then a table of 4 entries for order 1 and
a table of 2 entries for order 0. This approach was not successful because the
skewness in the symbol probabilities was confined to the lower-order contexts
only, and the statistics of the higher-order contexts were effectively
discarded. As a result, the compression ratio and the space saved decreased
as the context size increased.

Figure 7: Static Table approach

7 Conclusion
The entropy coding method of Context-based Adaptive Binary Arithmetic Cod-
ing (CABAC) is part of the Main profile of H.264/AVC [1] and may find its
way into video streaming, broadcast, or storage applications within this pro-
file. Experimental results have shown the superior performance of CABAC in
comparison to the baseline entropy coding method of VLC/CAVLC.

References
[1] D. Marpe, H. Schwarz, and T. Wiegand, "Context-Based Adaptive Binary
Arithmetic Coding in the H.264/AVC Video Compression Standard," IEEE
Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7,
2003.
[2] https://en.wikipedia.org/wiki/Context-adaptive_binary_arithmetic_coding
[3] https://en.wikipedia.org/wiki/Prediction_by_partial_matching
[4] https://github.com/nayuki/Reference-arithmetic-coding
[5] https://www.nayuki.io/page/reference-arithmetic-coding
