Arithmetic Coding: Implementation Details and Examples
Arithmetic coding
Arithmetic coding is a form of variable-length entropy encoding used in lossless data compression. Normally, a string of characters such as the words "hello there" is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently occurring characters are stored with fewer bits and rarely occurring characters with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding such as Huffman coding in that, rather than separating the input into component symbols and replacing each with a code, it encodes the entire message into a single number, a fraction n in the half-open interval 0.0 ≤ n < 1.0.
Defining a model
In general, arithmetic coders can produce near-optimal output for any given set of symbols and probabilities (the optimal value is −log2(P) bits for each symbol of probability P; see the source coding theorem). Compression algorithms that use arithmetic coding start by determining a model of the data: essentially, a prediction of what patterns will be found in the symbols of the message. The more accurate this prediction is, the closer to optimal the output will be.
Example: a simple, static model for describing the output of a particular monitoring instrument over time might be: 60% chance of symbol NEUTRAL, 20% chance of symbol POSITIVE, 10% chance of symbol NEGATIVE, and 10% chance of symbol END-OF-DATA. (The presence of this symbol means that the stream will be 'internally terminated', as is fairly common in data compression; when this symbol appears in the data stream, the decoder will know that the entire stream has been decoded.)
Models can also handle alphabets other than the simple four-symbol set chosen for this example. More sophisticated models are also possible: higher-order modelling changes its estimation of the current probability of a symbol based on the symbols that precede it (the context), so that in a model for English text, for example, the percentage chance of "u" would be much higher when it follows a "Q" or a "q". Models can even be adaptive, so that they continuously change their prediction of the data based on what the stream actually contains. The decoder must have the same model as the encoder.
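Encoding works by starting with the interval [0, 1) and narrowing it, for each symbol in turn, to the sub-interval that the model assigns to that symbol. The following minimal Python sketch illustrates this with the static model above; the dictionary layout and names are illustrative, not taken from the original text or any particular implementation:

    # A minimal sketch, assuming the static four-symbol model above.
    # The dictionary layout and function names are illustrative only.
    MODEL = {                 # symbol: (low, high) ends of its probability range
        "NEUTRAL":     (0.00, 0.60),
        "POSITIVE":    (0.60, 0.80),
        "NEGATIVE":    (0.80, 0.90),
        "END-OF-DATA": (0.90, 1.00),
    }

    def encode(symbols, model=MODEL):
        """Start with [0, 1) and narrow it once per symbol; return the final interval."""
        low, high = 0.0, 1.0
        for s in symbols:
            width = high - low
            s_low, s_high = model[s]
            low, high = low + width * s_low, low + width * s_high
        return low, high

    print(encode(["NEUTRAL", "NEGATIVE", "END-OF-DATA"]))
    # -> approximately (0.534, 0.54); the fraction 0.538 used below lies inside it

The final interval produced here, [0.534, 0.540), is the one from which the fraction 0.538 in the decoding example below is drawn.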
When all symbols have been encoded, the resulting interval unambiguously identifies the sequence of symbols that produced it. Anyone who has the same final interval and model that is being used can reconstruct the symbol sequence that must have entered the encoder to result in that final interval. It is not necessary to transmit the final interval, however; it is only necessary to transmit one fraction that lies within that interval. In particular, it is only necessary to transmit enough digits (in whatever base) of the fraction so that all fractions that begin with those digits fall into the final interval.
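As an illustration of transmitting just enough digits, the sketch below shows one possible way (a hypothetical helper, not from the original text; plain floating point) to find a shortest decimal fraction all of whose digit extensions fall inside a given final interval:

    # A hypothetical helper (not from the original text): find a shortest
    # decimal fraction all of whose digit extensions stay inside [low, high).
    import math

    def shortest_decimal(low, high):
        k = 1
        while True:
            m = math.ceil(low * 10 ** k)         # smallest k-digit numerator >= low
            if (m + 1) / 10 ** k <= high:        # every extension stays below high
                return f"0.{m:0{k}d}"
            k += 1

    print(shortest_decimal(0.534, 0.540))
    # -> '0.534' or '0.535' (floating-point rounding may pick either);
    #    any such three-digit fraction, including 0.538, identifies the interval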
For example, consider decoding a message encoded with the four-symbol model given above. The message is encoded in the fraction 0.538 (using decimal for clarity). The decoder starts with the interval [0, 1) and divides it into sub-intervals in proportion to the symbol probabilities: the interval for NEUTRAL would be [0, 0.6), the interval for POSITIVE would be [0.6, 0.8), the interval for NEGATIVE would be [0.8, 0.9), and the interval for END-OF-DATA would be [0.9, 1.0). Since 0.538 falls within [0, 0.6), the first symbol of the message must have been NEUTRAL. Dividing that interval in the same proportions gives: the interval for NEUTRAL would be [0, 0.36), the interval for POSITIVE would be [0.36, 0.48), the interval for NEGATIVE would be [0.48, 0.54), and the interval for END-OF-DATA would be [0.54, 0.6).
Since 0.538 is within the interval [0.48, 0.54), the second symbol of the message must have been NEGATIVE. Again divide our current interval into sub-intervals: the interval for NEUTRAL would be [0.48, 0.516); the interval for POSITIVE would be [0.516, 0.528); the interval for NEGATIVE would be [0.528, 0.534); and the interval for END-OF-DATA would be [0.534, 0.540).
Now 0.538 falls within the interval of the END-OF-DATA symbol; therefore, this must be the next symbol. Since it is also the internal termination symbol, it means the decoding is complete. If the stream is not internally terminated, there needs to be some other way to indicate where the stream stops. Otherwise, the decoding process could continue forever, mistakenly reading more symbols from the fraction than were in fact encoded into it.
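A decoder following the steps of this example might look like the sketch below; it assumes the same illustrative model table as the encoder sketch above and stops when the terminating symbol is reached:

    # A decoding sketch matching the worked example; MODEL is the same
    # illustrative table used in the encoder sketch above.
    MODEL = {"NEUTRAL": (0.00, 0.60), "POSITIVE": (0.60, 0.80),
             "NEGATIVE": (0.80, 0.90), "END-OF-DATA": (0.90, 1.00)}

    def decode(fraction, model=MODEL):
        """Find which sub-interval contains the fraction, emit that symbol,
        zoom into that sub-interval, and stop at the terminating symbol."""
        low, high = 0.0, 1.0
        decoded = []
        while True:
            width = high - low
            for symbol, (s_low, s_high) in model.items():
                if low + width * s_low <= fraction < low + width * s_high:
                    decoded.append(symbol)
                    low, high = low + width * s_low, low + width * s_high
                    break
            if decoded[-1] == "END-OF-DATA":
                return decoded

    print(decode(0.538))      # -> ['NEUTRAL', 'NEGATIVE', 'END-OF-DATA']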
Sources of inefficiency
The message 0.538 in the previous example could have been encoded by the equally short fractions 0.534, 0.535, 0.536, 0.537 or 0.539. This suggests that the use of decimal instead of binary introduced some inefficiency. This is correct: the information content of a three-digit decimal is approximately 9.966 bits; the same message could have been encoded in the binary fraction 0.10001010 (equivalent to 0.5390625 decimal) at a cost of only 8 bits. (The final zero must be specified in the binary fraction, or else the message would be ambiguous without external information such as the compressed stream size.) This 8-bit output is larger than the information content, or entropy, of the message, which is about 1.57 bits per symbol, or 4.71 bits for the three-symbol message. The large difference between the example's 8 bits of output (or 7 bits, given external knowledge of the compressed data size) and the entropy of 4.71 bits is caused by the short example message not being able to exercise the coder effectively. The claimed symbol probabilities were [0.6, 0.2, 0.1, 0.1], but the actual frequencies in this example are [0.33, 0, 0.33, 0.33]. If the intervals are readjusted for these frequencies, the information content of the message would be about 1.58 bits per symbol, or roughly 4.75 bits in total, and the same NEUTRAL NEGATIVE END-OF-DATA message could be encoded as the intervals [0, 1/3); [1/9, 2/9); [5/27, 6/27); corresponding to the binary interval [0.00101111011, 0.00111000111). This could yield an output message of 111, or just 3 bits. This is also an example of how statistical coding methods like arithmetic encoding can produce an output message that is larger than the input message, especially if the probability model is off.
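The figures quoted above are straightforward to reproduce; the short Python check below simply evaluates them (3·log2(10) for a three-digit decimal, the per-symbol entropy of the claimed model, and the information content under the adjusted frequencies):

    # Reproducing the figures quoted above.
    from math import log2

    print(3 * log2(10))                          # ~9.966 bits in a three-digit decimal

    model = [0.6, 0.2, 0.1, 0.1]                 # claimed symbol probabilities
    h = -sum(q * log2(q) for q in model)
    print(h, 3 * h)                              # ~1.57 bits/symbol, ~4.71 bits total

    # With intervals readjusted to the actual frequencies, each of the
    # three symbols has probability 1/3.
    print(3 * log2(3))                           # ~4.75 bits (~1.58 per symbol)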
Precision and renormalization

In practice, encoders and decoders operate with a fixed limit of precision rather than with exact fractions. For example, for a three-symbol alphabet with equal probabilities of 1/3, reducing the intervals to eight-bit precision gives:

Symbol    Probability    Interval (as fractions)    Interval (in binary, eight-bit precision)
A         1/3            [0, 85/256)                [0.00000000, 0.01010101)
B         1/3            [85/256, 171/256)          [0.01010101, 0.10101011)
C         1/3            [171/256, 1)               [0.10101011, 1.00000000)
A process called renormalization keeps the finite precision from becoming a limit on the total number of symbols that can be encoded. Whenever the range is reduced to the point where all values in the range share certain beginning digits, those digits are sent to the output. The computer is then using fewer digits of precision than it can handle, so the existing digits are shifted left and new digits are added at the right to expand the range as widely as possible. Note that this result occurs in two of the three cases from our previous example.
Symbol    Digits that can be sent to output    Range after renormalization
A         0                                    00000000 - 10101001
B         None                                 01010101 - 10101010
C         1                                    01010110 - 11111111
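A minimal sketch of this renormalization step for the eight-bit example above is shown below. The register width and names are illustrative, and a production coder would also handle the underflow case where the range straddles the midpoint without sharing a leading bit:

    # A sketch of renormalization with eight-bit registers, matching the
    # example above. Constants and names are illustrative; a real coder
    # also handles the underflow case where the range straddles the
    # midpoint without sharing a leading bit.
    PRECISION = 8
    TOP = 1 << PRECISION          # 256
    HALF = TOP >> 1               # 128

    def renormalize(low, high, output):
        """While low and high share their leading bit, emit that bit and
        shift both bounds left to restore full precision."""
        while True:
            if high < HALF:               # shared leading bit is 0
                output.append(0)
            elif low >= HALF:             # shared leading bit is 1
                output.append(1)
                low -= HALF
                high -= HALF
            else:                         # no shared leading bit: nothing to send
                return low, high
            low = 2 * low                 # shift in a 0 on the right of low
            high = 2 * high + 1           # shift in a 1 on the right of high

    out = []
    print(renormalize(0b00000000, 0b01010100, out), out)   # A: emits 0, range widens
    out = []
    print(renormalize(0b10101011, 0b11111111, out), out)   # C: emits 1, range widens

Running the two calls reproduces the ranges in the table: (0, 169), i.e. 00000000 - 10101001, and (86, 255), i.e. 01010110 - 11111111, while the middle case B would emit nothing.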
Arithmetic coding as a generalized change of radix

Arithmetic coding can also be viewed as a generalized change of radix. For example, the sequence of symbols DABDDB may be looked at as a number in a certain base, presuming that the involved symbols form an ordered set and each symbol in the ordered set denotes a sequential integer: A = 0, B = 1, C = 2, D = 3, and so on. This results in the following frequencies and cumulative frequencies:
Symbol    Frequency of occurrence    Cumulative frequency
A         1                          0
B         2                          1
D         3                          3
The cumulative frequency of a symbol is the total of the frequencies of all symbols below it in the frequency distribution (a running total of frequencies). In a positional numeral system the radix, or base, is numerically equal to the number of different symbols used to express the number. For example, in the decimal system the number of symbols is 10, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The radix is used as a presumed multiplier to express any finite integer in polynomial form. For example, the number 457 is actually 4·10^2 + 5·10^1 + 7·10^0, where base 10 is presumed but not shown explicitly. Initially, we will convert DABDDB into a base-6 numeral, because 6 is the length of the string. The string is first mapped into the digit string 301331, which then maps to an integer by the polynomial:

6^5·3 + 6^4·0 + 6^3·1 + 6^2·3 + 6^1·3 + 6^0·1 = 23671
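This plain change of radix is only a few lines of Python; the sketch below (with illustrative names) maps each symbol of DABDDB to its digit and evaluates the base-6 polynomial:

    # A sketch of the plain change of radix (illustrative names): each
    # symbol becomes a digit and the string is read as a base-6 numeral.
    DIGIT = {"A": 0, "B": 1, "C": 2, "D": 3}

    def to_integer(message):
        base = len(message)              # the base equals the message length here
        value = 0
        for symbol in message:
            value = value * base + DIGIT[symbol]
        return value

    print(to_integer("DABDDB"))          # -> 23671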
The result 23671 has a length of 15 bits, which is not very close to the theoretical limit (the entropy of the message), which is approximately 9 bits. To encode a message with a length closer to the theoretical limit imposed by information theory, we need to slightly generalize the classic formula for changing the radix. We will compute lower and upper bounds L and U and choose a number between them. For the computation of L we multiply each term in the above expression by the product of the frequencies of all previously occurring symbols:

L = 6^5·3 + 6^4·0·(3) + 6^3·1·(3·1) + 6^2·3·(3·1·2) + 6^1·3·(3·1·2·3) + 6^0·1·(3·1·2·3·3)
  = 23328 + 0 + 648 + 648 + 324 + 54 = 25002
The difference between this polynomial and the polynomial above is that each term is multiplied by the product of the frequencies of all previously occurring symbols. More generally, L may be computed as:

L = Σ_{i=1}^{n} n^(n-i) · C_i · Π_{k=1}^{i-1} f_k

where n is the length of the message (which also serves as the base), C_i is the cumulative frequency of the i-th symbol of the message, and f_k are the frequencies of occurrences; indexes denote the position of the symbol in the message. When all frequencies f_k are 1, this is the change-of-base formula.
The upper bound U will be L plus the product of all frequencies; in this case U = L + (3·1·2·3·3·2) = 25002 + 108 = 25110. In general, U is given by:

U = L + Π_{k=1}^{n} f_k
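Both bounds follow directly from the frequency tables; the sketch below (illustrative names) reproduces L = 25002 and U = 25110 for DABDDB:

    # A sketch computing the bounds L and U for DABDDB (illustrative names).
    FREQ = {"A": 1, "B": 2, "D": 3}      # frequencies of occurrence
    CUM  = {"A": 0, "B": 1, "D": 3}      # cumulative frequencies

    def bounds(message):
        base = len(message)              # base equals message length, as above
        low, prod = 0, 1                 # prod = product of frequencies seen so far
        for i, symbol in enumerate(message):
            low += base ** (base - 1 - i) * CUM[symbol] * prod
            prod *= FREQ[symbol]
        return low, low + prod           # U = L + product of all frequencies

    print(bounds("DABDDB"))              # -> (25002, 25110)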
Now we can choose any number from the interval [L, U) to represent the message; one convenient choice is the value with the longest possible trail of zeroes, 25100, since it allows us to achieve compression by representing the result as 251·10^2. The zeroes can also be truncated, giving 251, if the length of the message is stored separately. Longer messages will tend to have longer trails of zeroes. To decode the integer 25100, the polynomial computation can be reversed as shown in the table below. At each stage the current symbol is identified, then the corresponding term is subtracted from the result.
Remainder    Identification         Identified symbol    Corrected remainder
25100        25100 / 6^5 = 3        D                    (25100 - 6^5·3) / 3 = 590
590          590 / 6^4 = 0          A                    (590 - 6^4·0) / 1 = 590
590          590 / 6^3 = 2          B                    (590 - 6^3·1) / 2 = 187
187          187 / 6^2 = 5          D                    (187 - 6^2·3) / 3 = 26
26           26 / 6^1 = 4           D                    (26 - 6^1·3) / 3 = 2
2            2 / 6^0 = 2            B
During decoding we take the floor after dividing by the corresponding power of 6. The result is then matched against the cumulative intervals and the appropriate symbol is selected from a lookup table. When the symbol is identified, the result is corrected. The process is continued for the known length of the message, or while the remaining result is positive. The only difference compared to the classical change of base is that there may be a range of values associated with each symbol. In this example, A is always 0, B is either 1 or 2, and D is any of 3, 4, 5. This is in exact accordance with our intervals, which are determined by the frequencies. When all intervals are equal to 1 we have a special case of the classic base change.
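The decoding procedure in the table can be written out the same way; this sketch assumes the message length (here 6) is known and reuses frequency and cumulative-frequency tables like those above:

    # A sketch reversing the generalized change of radix for a known
    # message length (illustrative names; same tables as above).
    FREQ = {"A": 1, "B": 2, "D": 3}
    CUM  = {"A": 0, "B": 1, "D": 3}

    def decode_radix(value, length):
        decoded = []
        for i in range(length):
            power = length ** (length - 1 - i)
            digit = value // power                      # floor of value / 6^(remaining)
            for symbol in FREQ:                         # match against cumulative intervals
                if CUM[symbol] <= digit < CUM[symbol] + FREQ[symbol]:
                    break
            decoded.append(symbol)
            value = (value - power * CUM[symbol]) // FREQ[symbol]   # corrected remainder
        return "".join(decoded)

    print(decode_radix(25100, 6))                       # -> 'DABDDB'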
The theoretical limit of the compressed message

The lower bound L never exceeds n^n, where n is the size of the message, and so can be represented in log2(n^n) = n·log2(n) bits. After the computation of the upper bound U and the reduction of the message by selecting from the interval [L, U) a number with the longest trail of zeros, this length can be reduced by log2(Π_{k=1}^{n} f_k) bits. Since each frequency appears in this product exactly as many times as the value of this frequency, we can use the size of the alphabet A for the computation of the product:

Π_{k=1}^{n} f_k = Π_{i=1}^{A} f_i^{f_i}
Applying log2 for the estimated number of bits in the message, the final message (not counting a logarithmic overhead for the message length and frequency tables) will match the number of bits given by entropy, which for long messages is very close to optimal:

n·log2(n) - Σ_{i=1}^{A} f_i·log2(f_i) = -Σ_{i=1}^{A} f_i·log2(f_i / n)
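For the DABDDB example this evaluates to about 8.75 bits, the "approximately 9 bits" quoted earlier; a quick check:

    # Quick check of the bound for DABDDB (n = 6, frequencies 1, 2, 3).
    from math import log2

    n, freqs = 6, {"A": 1, "B": 2, "D": 3}
    bits = -sum(f * log2(f / n) for f in freqs.values())
    print(bits)        # ~8.75 bits, versus 15 bits for the plain change of base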
Comparison with Huffman coding

Because Huffman coding must assign each symbol a whole number of bits, it can never use less than one bit per symbol for a two-symbol alphabet, no matter how skewed the probabilities are. When the symbol 0 has a high probability of 0.95, the difference is much greater: Huffman coding still spends one bit per symbol, while the entropy of the source is only about 0.286 bits per symbol (-0.95·log2(0.95) - 0.05·log2(0.05)), so the encoded message is roughly three and a half times longer than the theoretical minimum.
One simple way to address this weakness is to concatenate symbols to form a new alphabet in which each symbol represents a sequence of symbols in the original alphabet. In the above example, grouping sequences of three symbols before encoding would produce new "super-symbols" with the following frequencies:

000: 85.7%
001, 010, 100: 4.5% each
011, 101, 110: 0.24% each
111: 0.0125%
With this grouping, Huffman coding averages 1.3 bits for every three symbols, or 0.433 bits per symbol, compared with one bit per symbol in the original encoding.
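These figures can be verified with a short calculation; the sketch below computes the entropy of the p = 0.95 source, the super-symbol probabilities, and the expected length of a Huffman code over the eight groups (the Huffman tree is built implicitly, tracking only expected lengths):

    # Checking the p = 0.95 example: source entropy, super-symbol
    # probabilities, and the expected length of a Huffman code over them.
    import heapq
    from itertools import product
    from math import log2

    p = 0.95
    print(-(p * log2(p) + (1 - p) * log2(1 - p)))       # ~0.286 bits per symbol

    groups = {"".join(g): p ** g.count("0") * (1 - p) ** g.count("1")
              for g in product("01", repeat=3)}
    print(groups["000"], groups["001"], groups["011"], groups["111"])
    # -> 0.857375, 0.045125, 0.002375, 0.000125

    # Each heap entry is (probability, tie-breaker, expected bits so far);
    # merging two subtrees adds one bit to every leaf beneath them.
    heap = [(prob, i, 0.0) for i, prob in enumerate(groups.values())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, b1 = heapq.heappop(heap)
        p2, _, b2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, counter, b1 + b2 + p1 + p2))
        counter += 1
    print(heap[0][2], heap[0][2] / 3)                   # ~1.3 per group, ~0.43 per symbol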
Range encoding
Range encoding is regarded by some engineers as a distinct technique and by others as merely a different name for arithmetic coding. There is no settled definition, but one common view is that it is range coding when processing is applied one step per symbol, and arithmetic coding when one step is required per bit. Another view is that arithmetic coding computes the two boundaries of a sub-interval of [0, 1) and chooses the shortest fraction from it, whereas range encoding computes boundaries on an integer interval and chooses from within it the number with the longest trail of zeros. Many researchers believe that this slight difference in approach makes range encoding patent-free. To support this view they cite the article by G. Nigel N. Martin [1], which is not reader friendly and is open to interpretation. It is cited in Glen Langdon's article "An Introduction to Arithmetic Coding", IBM J. Res. Develop., Vol. 28, No. 2, March 1984 [2], which establishes the method suggested by Martin as prior art recognized by an industry expert. Martin's method is close to the approach described at the beginning of this article, with the differences that both the LOW and HIGH limits are computed at every step and that probabilities, rather than frequencies, are still used to narrow the interval. Martin's article escaped the attention of many researchers who filed patents on arithmetic coding and described their algorithms as building a long proper fraction, which puts those patents at risk of being circumvented by implementations that proceed differently, because a patent is a very formal document and its language must be precise. It does not follow that all patents on arithmetic coding are now void in the light of Martin's article, but it opens the ground for debates that could have been avoided had the authors at least mentioned the approach.
US patents
A variety of specific techniques for arithmetic coding have historically been covered by US patents, although various well-known methods have since passed into the public domain as the patents have expired. Techniques covered by patents may be essential for implementing the algorithms for arithmetic coding that are specified in some formal international standards. When this is the case, such patents are generally available for licensing under what is called "reasonable and non-discriminatory" (RAND) licensing terms (at least as a matter of standards-committee policy). In some well-known instances (including some involving IBM patents that have since expired) such licenses were available for free, and in other instances, licensing fees have been required. The availability of licenses under RAND terms does not necessarily satisfy everyone who might want to use the technology, as what may seem "reasonable" for a company preparing a proprietary software product may seem much less reasonable for a free software or open source project. At least one significant compression software program, bzip2, deliberately discontinued the use of arithmetic coding in favor of Huffman coding because of the perceived patent situation at the time. Also, encoders and decoders of the JPEG file format, which has options for both Huffman encoding and arithmetic coding, typically support only the Huffman encoding option, originally because of patent concerns; the result is that nearly all JPEG images in use today use Huffman encoding,[3] although JPEG's arithmetic coding patents[4] have expired due to the age of the JPEG standard (the design of which was approximately completed by 1990).[5] Some US patents relating to arithmetic coding are listed below.

U.S. Patent 4122440 [6] (IBM) Filed 4 March 1977, granted 24 October 1978 (now expired)
U.S. Patent 4286256 [7] (IBM) Granted 25 August 1981 (now expired)
U.S. Patent 4467317 [8] (IBM) Granted 21 August 1984 (now expired)
U.S. Patent 4652856 [9] (IBM) Granted 4 February 1986 (now expired)
U.S. Patent 4891643 [10] (IBM) Filed 15 September 1986, granted 2 January 1990 (now expired)
U.S. Patent 4905297 [11] (IBM) Filed 18 November 1988, granted 27 February 1990 (now expired)
U.S. Patent 4933883 [12] (IBM) Filed 3 May 1988, granted 12 June 1990 (now expired)
U.S. Patent 4935882 [13] (IBM) Filed 20 July 1988, granted 19 June 1990 (now expired)
U.S. Patent 4989000 [14] Filed 19 June 1989, granted 29 January 1991 (now expired)
U.S. Patent 5099440 [15] (IBM) Filed 5 January 1990, granted 24 March 1992 (now expired)
U.S. Patent 5272478 [16] (Ricoh) Filed 17 August 1992, granted 21 December 1993 (expires on August 17, 2012)
Note: This list is not exhaustive; see the following link for a list of more patents.[17] The Dirac codec uses arithmetic coding and is not patent pending.[18] Patents on arithmetic coding may exist in other jurisdictions; see software patents for a discussion of the patentability of software around the world.
Teaching aid
An interactive visualization tool for teaching arithmetic coding, dasher.tcl [19], was also the first prototype of the assistive communication system, Dasher.
References
[1] http://www.compressconsult.com/rangecoder/rngcod.pdf.gz
[2] http://eprints.kfupm.edu.sa/26648/1/26648.pdf
[3] What is JPEG? comp.compression Frequently Asked Questions (part 1/3) (http://www.faqs.org/faqs/compression-faq/part1/section-17.html)
[4] "Recommendation T.81 (1992) Corrigendum 1 (01/04)" (http://www.itu.int/rec/T-REC-T.81-200401-I!Cor1/dologin.asp?lang=e&id=T-REC-T.81-200401-I!Cor1!PDF-E&type=items). Recommendation T.81 (1992). International Telecommunication Union. 2004-11-09. Retrieved 3 February 2011.
[5] JPEG Still Image Data Compression Standard, W. B. Pennebaker and J. L. Mitchell, Kluwer Academic Press, 1992. ISBN 0-442-01272-1
[6] http://www.google.com/patents?vid=4122440
[7] http://www.google.com/patents?vid=4286256
[8] http://www.google.com/patents?vid=4467317
[9] http://www.google.com/patents?vid=4652856
[10] http://www.google.com/patents?vid=4891643
[11] http://www.google.com/patents?vid=4905297
[12] http://www.google.com/patents?vid=4933883
[13] http://www.google.com/patents?vid=4935882
[14] http://www.google.com/patents?vid=4989000
[15] http://www.google.com/patents?vid=5099440
[16] http://www.google.com/patents?vid=5272478
[17] comp.compression Frequently Asked Questions (part 1/3) (http://www.faqs.org/faqs/compression-faq/part1/)
[18] Dirac video codec 1.0 released (http://lwn.net/Articles/272520/)
[19] http://www.inference.phy.cam.ac.uk/mackay/itprnn/softwareI.html
MacKay, David J.C. (September 2003). "Chapter 6: Stream Codes" (http://www.inference.phy.cam.ac.uk/mackay/itila/book.html). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. ISBN 0-521-64298-1. Retrieved 2007-12-30.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 22.6. Arithmetic Coding" (http://apps.nrbook.com/empanel/index.html#pg=1181). Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Rissanen, Jorma (May 1976). "Generalized Kraft Inequality and Arithmetic Coding" (http://domino.watson.ibm.com/tchjr/journalindex.nsf/4ac37cf0bdc4dd6a85256547004d47e1/53fec2e5af172a3185256bfa0067f7a0?OpenDocument). IBM Journal of Research and Development 20 (3): 198-203. doi:10.1147/rd.203.0198. Retrieved 2007-09-21.
Rissanen, J.J.; Langdon, G.G., Jr. (March 1979). "Arithmetic coding" (http://researchweb.watson.ibm.com/journal/rd/232/ibmrd2302G.pdf). IBM Journal of Research and Development 23 (2): 149-162. doi:10.1147/rd.232.0149. Retrieved 2007-09-22.
Witten, Ian H.; Neal, Radford M.; Cleary, John G. (June 1987). "Arithmetic Coding for Data Compression" (http://www.stanford.edu/class/ee398a/handouts/papers/WittenACM87ArithmCoding.pdf). Communications of the ACM 30 (6): 520-540. doi:10.1145/214762.214771. Retrieved 2007-09-21.
External links
Paul E. Black, arithmetic coding (http://www.nist.gov/dads/HTML/arithmeticCoding.html) at the NIST Dictionary of Algorithms and Data Structures.
Newsgroup posting (http://www.gtoal.com/wordgames/documents/arithmetic-encoding.mai) with a short worked example of arithmetic encoding (integer-only).
PlanetMath article on arithmetic coding (http://planetmath.org/encyclopedia/ArithmeticEncoding.html)
Anatomy of Range Encoder (http://ezcodesample.com/reanatomy.html). The article explains both range and arithmetic coding. It also has code samples for three different arithmetic encoders along with a performance comparison.
Introduction to Arithmetic Coding (http://hpl.hp.com/techreports/2004/HPL-2004-76.pdf). 60 pages.
Eric Bodden, Malte Clasen and Joachim Kneis: Arithmetic Coding revealed (http://www.sable.mcgill.ca/publications/techreports/#report2007-5). Technical Report 2007-5, Sable Research Group, McGill University.
Arithmetic Coding + Statistical Modeling = Data Compression (http://dogma.net/markn/articles/arith/part1.htm) by Mark Nelson.
License
Creative Commons Attribution-Share Alike 3.0 Unported //creativecommons.org/licenses/by-sa/3.0/