Lossless compression
Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. This can be contrasted to lossy data compression, which does not allow the exact original data to be reconstructed from the compressed data.
Lossless data compression is used in many applications. For example, it is used in the popular ZIP file format and in the Unix tool gzip. It is also often used as a component within lossy data compression technologies.
Lossless compression is used when it is important that the original and the decompressed data be identical, or when it cannot be assumed that a certain amount of deviation from the original is harmless. Typical examples are executable programs and source code. Some image file formats, such as PNG and GIF, use only lossless compression, while others, such as TIFF and MNG, may use either lossless or lossy methods.
Lossless compression techniques
Lossless compression methods may be categorized according to the type of data they are designed to compress. The three main types of targets for compression algorithms are text, images, and sound. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can accept any binary input) can be used on any type of data, many are unable to achieve significant compression on data that is not of the type they were designed for. Sound data, for instance, cannot be compressed well with conventional text compression algorithms.
Most lossless compression programs use two different kinds of algorithms: one which generates a statistical model for the input data, and another which maps the input data to bit strings using this model in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data. Often, only the former algorithm is named, while the latter is implied (through common use, standardization etc.) or unspecified.
Statistical modeling algorithms for text (or text-like binary data such as executables) include:
- Burrows-Wheeler transform (block sorting preprocessing that makes compression more efficient)
- LZ77 (used by DEFLATE)
- LZW
Encoding algorithms that produce bit sequences from such a model include:
- Huffman coding (also used by DEFLATE)
- Arithmetic coding
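To make the split between model and coder concrete, here is a minimal Huffman-coding sketch in Python (an illustrative example only, not the implementation used by DEFLATE or any other real compressor): the symbol-frequency table acts as the statistical model, and the prefix code built from it assigns shorter bit strings to more frequent symbols.

    import heapq
    from collections import Counter

    def huffman_code(data):
        # The frequency table is the "statistical model" of the input.
        freq = Counter(data)
        # Heap entries are (weight, tie_breaker, tree); a tree is a symbol or a pair.
        heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                       # degenerate input with one distinct symbol
            return {heap[0][2]: "0"}
        counter = len(heap)
        while len(heap) > 1:
            w1, _, t1 = heapq.heappop(heap)
            w2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, counter, (t1, t2)))
            counter += 1
        codes = {}
        def walk(tree, prefix):                  # derive codewords by walking the tree
            if isinstance(tree, tuple):
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:
                codes[tree] = prefix
        walk(heap[0][2], "")
        return codes

    text = "abracadabra"
    codes = huffman_code(text)
    encoded = "".join(codes[c] for c in text)
    print(codes)                                 # frequent symbols get shorter codewords
    print(len(encoded), "bits, versus", 8 * len(text), "bits uncompressed")

Real compressors pair such a coder with a far more elaborate model (and often arithmetic coding instead of Huffman coding), but the division of labour is the same.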
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries, and their legal use requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source activists encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing image files in favor of Portable Network Graphics (PNG), which combines the LZ77-based DEFLATE algorithm with a selection of domain-specific prediction filters. However, the patents on LZW have now expired.[1]
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work on typical text yet are useful for some images (particularly simple bitmaps), and still others that take advantage of the specific characteristics of images, such as the common phenomenon of contiguous two-dimensional areas of similar tones and the fact that color images usually use only a small fraction of the colors representable in the color space.
As mentioned previously, lossless sound compression is a somewhat specialised area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data – essentially using models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the "error") tends to be small, then certain difference values (like 0, +1, -1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
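A minimal Python sketch of this predict-and-encode-the-difference idea (using the simplest possible predictor, the previous sample, purely for illustration; real lossless audio codecs use more sophisticated predictors and then entropy-code the residuals):

    def encode_residuals(samples):
        prev = 0
        residuals = []
        for s in samples:
            residuals.append(s - prev)   # error between the prediction and the actual value
            prev = s
        return residuals

    def decode_residuals(residuals):
        samples, prev = [], 0
        for r in residuals:
            prev += r
            samples.append(prev)
        return samples

    wave = [0, 3, 6, 8, 9, 9, 8, 6, 3, 0]   # a smooth, wave-like signal
    print(encode_residuals(wave))            # [0, 3, 3, 2, 1, 0, -1, -2, -3, -3] -- clustered near zero
    assert decode_residuals(encode_residuals(wave)) == wave

Because the residuals cluster around zero, a subsequent entropy coder can represent them in far fewer bits than the raw samples.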
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of an image). This is called delta compression (from the Greek letter Δ which is commonly used in mathematics to denote a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta compression from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
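As a toy illustration of delta compression between two versions of a file (a made-up position/byte format, not that of any real tool, and simplified to versions of equal length):

    def delta_encode(old, new):
        assert len(old) == len(new)               # simplification: equal-length versions
        return [(i, b) for i, (a, b) in enumerate(zip(old, new)) if a != b]

    def delta_decode(old, delta):
        patched = bytearray(old)
        for i, b in delta:
            patched[i] = b
        return bytes(patched)

    v1 = b"lossless compression article, draft 1"
    v2 = b"lossless compression article, draft 2"
    delta = delta_encode(v1, v2)
    print(delta)                                   # [(36, 50)] -- a single changed byte
    assert delta_decode(v1, delta) == v2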
Lossless compression methods
For a complete list, see Category:Lossless compression algorithms
General purpose
- Run-length encoding – a simple scheme that provides good compression of data containing many runs of the same value (see the sketch after this list)
- LZW – used by GIF and the Unix compress utility, among others
- Deflate – used by gzip, modern versions of ZIP, and as part of the compression process of PNG
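A minimal run-length encoding sketch in Python (the pair representation and the cap of 255 on run length are arbitrary choices for this example, not part of any particular format):

    def rle_encode(data):
        out = []
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out.append((run, data[i]))    # store each run as (count, value)
            i += run
        return out

    def rle_decode(pairs):
        return b"".join(bytes([value]) * count for count, value in pairs)

    sample = b"AAAAAABBBCCCCCCCCD"
    encoded = rle_encode(sample)
    assert rle_decode(encoded) == sample  # lossless round trip
    print(encoded)                        # [(6, 65), (3, 66), (8, 67), (1, 68)]

On data without long runs such a scheme produces output larger than its input, which is why run-length encoding is usually combined with, or replaced by, the more general methods above.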
Audio compression
- Apple Lossless – ALAC (Apple Lossless Audio Codec)
- Audio Lossless Coding – also known as MPEG-4 ALS
- Direct Stream Transfer – DST
- Dolby TrueHD
- DTS-HD Master Audio
- Free Lossless Audio Codec – FLAC
- Meridian Lossless Packing – MLP
- Monkey's Audio – Monkey's Audio APE
- OptimFROG
- RealPlayer – RealAudio Lossless
- Shorten – SHN
- TTA – True Audio Lossless
- WavPack – WavPack lossless
- WMA Lossless – Windows Media Lossless
Graphic compression
- ABO – Adaptive Binary Optimization
- GIF – (lossless, but limited to a palette of at most 256 colors per image)
- JBIG2 – (lossless or lossy compression of B&W images)
- JPEG-LS – (lossless/near-lossless compression standard)
- JPEG 2000 – (includes a lossless compression method)
- PGF – Progressive Graphics File (lossless or lossy compression)
- PNG – Portable Network Graphics
- Qbit Lossless Codec – Focuses on intra-frame (single-image) lossless compression
- TIFF
- WMPhoto – (includes a lossless compression method)
Video compression
- Animation codec
- CamStudio Video Codec
- CorePNG
- FFV1
- H.264/MPEG-4 AVC
- Huffyuv
- Lagarith
- LCL
- MSU Lossless Video Codec
- Qbit Lossless Codec
- SheerVideo
- TSCC – TechSmith Screen Capture Codec
Lossless data compression must always make some files longer
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any (lossless) data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics using a counting argument, as follows:
- Assume that each file is represented as a string of bits of some arbitrary length.
- Suppose that there is a compression algorithm that transforms every file into a distinct file which is no longer than the original file, and that at least one file will be compressed into something that is shorter than itself.
- Let M be the least number such that there is a file F of length M bits that compresses to something shorter. Let N be the length (in bits) of the compressed version of F.
- Because N < M, every file of length N keeps its size during compression. There are 2^N such files. Together with F, this makes 2^N + 1 files which all compress into one of the 2^N files of length N.
- But 2^N is smaller than 2^N + 1, so by the pigeonhole principle there must be some file of length N which is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should that yield?), which contradicts the assumption that the algorithm was lossless.
- We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.
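A closely related counting fact can be checked numerically in a few lines of Python (an aside, not part of the proof above): for every n there are 2^n bit strings of length n but only 2^n - 1 strictly shorter ones, so no lossless compressor can shorten every n-bit file.

    for n in range(1, 9):
        files_of_length_n = 2 ** n
        strictly_shorter_files = sum(2 ** k for k in range(n))   # equals 2**n - 1
        assert strictly_shorter_files < files_of_length_n
        print(n, files_of_length_n, strictly_shorter_files)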
Any lossless compression algorithm that makes some files shorter must necessarily make some files longer, but it is not necessary that those files become very much longer. Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. Then the only increase in size is a few bits to tell the decoder that the normal coding has been turned off for the entire input. For example, DEFLATE compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
In fact, if we consider files of length N, if all files were equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is only useful when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithm are designed to act on all have some form of easily-modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm: indeed, this result is used to define the concept of randomness in algorithmic complexity theory.
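These behaviours are easy to observe with Python's standard zlib module, which implements DEFLATE (a quick illustration, not a rigorous benchmark): random bytes typically come back slightly larger, while highly redundant data shrinks dramatically.

    import os
    import zlib

    random_data = os.urandom(65536)                     # effectively incompressible
    redundant_data = b"lossless compression " * 3000    # highly repetitive

    print(len(random_data), "->", len(zlib.compress(random_data)), "bytes")        # slight growth
    print(len(redundant_data), "->", len(zlib.compress(redundant_data)), "bytes")  # large reduction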
There have been many claims through the years of companies achieving "perfect compression", in which any N random bits can always be compressed to N - 1 bits. This is, of course, impossible: if such an algorithm existed, it could be applied repeatedly to losslessly reduce any file to length 0. Claims of this kind can be safely discarded without even looking at any further details of the purported compression scheme.
See also
- Audio data compression
- David A. Huffman
- Information entropy
- Kolmogorov complexity
- Data compression
- Precompressor
- Lossy data compression
- Lossless Transform Audio Compression (LTAC)
- List of codecs
References
External links
- Lossless data compression Benchmarks and Tests
- Comparison of Lossless Audio Compressors at Hydrogenaudio Wiki
- Comparing lossless and lossy audio formats for music archiving
- Links to data compression topics and tutorials