Arithmetic coding
Arithmetic coding is a form of entropy encoding used in lossless data compression. Normally, a string of characters such as the words "hello there" is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters will be stored with fewer bits and not-so-frequently occurring characters will be stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, a fraction ''n'' where 0.0 ≤ ''n'' < 1.0.
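The interval-narrowing idea can be illustrated in a few lines of code. The following is a minimal sketch, not a production codec: it assumes the fixed three-symbol model used in the figure below, it uses exact fractions where a real coder would use renormalized integer arithmetic, and the names <code>MODEL</code>, <code>encode</code>, and <code>decode</code> are illustrative rather than taken from any standard library.

<syntaxhighlight lang="python">
from fractions import Fraction

# Hypothetical fixed model: symbol -> (cumulative low, cumulative high),
# matching the probabilities in the figure: P(A)=1/2, P(B)=1/3, P(C)=1/6.
MODEL = {
    "A": (Fraction(0), Fraction(1, 2)),
    "B": (Fraction(1, 2), Fraction(5, 6)),
    "C": (Fraction(5, 6), Fraction(1)),
}

def encode(message):
    """Narrow [low, high) once per symbol; any number in the
    final interval encodes the entire message."""
    low, high = Fraction(0), Fraction(1)
    for symbol in message:
        s_low, s_high = MODEL[symbol]
        width = high - low
        high = low + width * s_high
        low = low + width * s_low
    return low, high

def decode(value, length):
    """Invert the narrowing: find which symbol's subinterval
    contains the value, emit that symbol, then rescale."""
    out = []
    for _ in range(length):
        for symbol, (s_low, s_high) in MODEL.items():
            if s_low <= value < s_high:
                out.append(symbol)
                value = (value - s_low) / (s_high - s_low)
                break
    return "".join(out)

low, high = encode("BAC")   # encoding "B" alone gives [1/2, 5/6), as in the figure
assert decode(low, 3) == "BAC"
</syntaxhighlight>

Note that the decoder needs the message length (or an explicit end-of-message symbol) to know when to stop, which is why the figure assumes the recursion depth is known at each step.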
[[File:Arithmetic coding example.svg|400px|thumb|right|An arithmetic coding example assuming a fixed probability distribution of three symbols "A", "B", and "C". The probability of "A" is 50%, the probability of "B" is 33%, and the probability of "C" is 17%. Furthermore, we assume that the recursion depth is known in each step. In step one we code "B", which lies inside the interval [0.5, 0.83): the binary number "0.10x" is the shortest code that represents an interval entirely inside [0.5, 0.83). "x" means an arbitrary bit sequence. There are two extreme cases: the smallest x stands for an infinite number of zeros, which represents the left side of the interval, so the left side of the interval is dec(0.10) = 0.5. The greatest x stands for an infinite number of ones, which gives a number that converges towards dec(0.11) = 0.75. Therefore, "0.10x" represents the interval [0.5, 0.75), which is entirely inside [0.5, 0.83).
Now we can leave out the "0." part, since all intervals begin with "0.", and we can ignore the "x" part because no matter what bit sequence it represents, we will stay inside [0.5, 0.75).]]
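The first step in the figure's caption, finding the shortest bit string whose interval fits entirely inside the symbol's interval, can also be sketched directly. The helper below, <code>shortest_binary_code</code>, is a hypothetical name for illustration only: it searches for the smallest number of bits ''k'' such that some dyadic interval [''m''/2<sup>''k''</sup>, (''m''+1)/2<sup>''k''</sup>) lies inside the target interval.

<syntaxhighlight lang="python">
import math
from fractions import Fraction

def shortest_binary_code(low, high):
    """Return the shortest bit string b such that the dyadic interval
    [0.b, 0.b + 2**-len(b)) lies entirely within [low, high)."""
    k = 0
    while True:
        k += 1
        m = math.ceil(low * 2**k)           # first dyadic boundary at or above low
        if Fraction(m + 1, 2**k) <= high:   # the cell of width 2**-k fits
            return format(m, "b").zfill(k)

# Step one of the figure: "B" occupies [1/2, 5/6).
print(shortest_binary_code(Fraction(1, 2), Fraction(5, 6)))  # -> "10", i.e. 0.10 covering [0.5, 0.75)
</syntaxhighlight>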