Wavelets: 1. What Is Wavelet Compression

The document discusses wavelet compression. It begins by defining wavelets and explaining how the discrete wavelet transform (DWT) works by filtering and subsampling a signal into approximation and detail coefficients. Thresholding is then used to zero out small coefficients, compacting the signal's energy and allowing efficient entropy encoding. Wavelet compression provides multi-resolution analysis and compacts information, allowing the majority of coefficients to be discarded with minimal information loss.

Uploaded by Deepna Khattri
© Attribution Non-Commercial (BY-NC)

1. What is Wavelet Compression

This section provides a very brief description of compression using wavelets.

WAVELETS
1.1 What are Wavelets

The most basic definition of a wavelet is a function with a well defined temporal support that oscillates about the x-axis: it has equal area above and below the axis, so its mean is zero. This definition on its own does not help much; a better approach is to explain what the wavelet transform and wavelet analysis are.

The basic wavelet transform is similar to the well known Fourier transform. As in the Fourier transform, the coefficients are calculated by inner products of the input signal with a set of orthonormal basis functions that span L2(R) (this describes only a small subset of the available wavelet transforms). The difference lies in the way these basis functions are constructed and, more importantly, in the types of analysis they allow.

The key difference is that the wavelet transform is a multi-resolution transform: it allows a form of time-frequency analysis (translation-scale in wavelet terminology). The Fourier transform gives a very precise analysis of the frequencies contained in a signal, but no information about when those frequencies occurred. The wavelet transform tells us when certain features occurred as well as about the scale characteristics of the signal. Scale is analogous to inverse frequency and is a measure of the amount of detail in the signal: a small scale corresponds to fine details, while a large scale corresponds to a stretched wavelet and captures coarse, global features.

1.1.1 Sampling and the Discrete Wavelet Series

For wavelet transforms to be calculated on a computer, the data must be discretised. A continuous signal can be sampled so that a value is recorded at discrete time intervals; if the Nyquist sampling rate is used then no information should be lost. With Fourier transforms and STFTs the sampling rate is uniform, but with wavelets the sampling rate can change when the scale changes: higher scales use a smaller sampling rate. According to Nyquist sampling theory, the new sampling rate N2 can be calculated from the original rate N1 as

    N2 = (s1 / s2) * N1

where s1 and s2 are the scales. So every scale has a different sampling rate.

After sampling, the Discrete Wavelet Series can be used; however, this can still be very slow to compute, because the information calculated by the wavelet series is highly redundant and requires a large amount of computation time. To reduce the computation a different strategy was developed, and the Discrete Wavelet Transform (DWT) was born.

1.1.2 DWT and subsignal encoding

The DWT provides sufficient information for both the analysis and synthesis of a signal, while being much more efficient. Discrete wavelet analysis is computed using the concept of filter banks. Filters of different cut-off frequencies analyse the signal at different scales: resolution is changed by the filtering, and the scale is changed by subsampling (downsampling during analysis, upsampling during synthesis). If a signal is put through two filters:

(i) a high-pass filter, which keeps the high frequency information and discards the low frequencies;
(ii) a low-pass filter, which keeps the low frequency information and discards the high frequencies;

then the signal is effectively decomposed into two parts: a detail part (high frequency) and an approximation part (low frequency). The subsignal produced by the low-pass filter has a highest frequency equal to half that of the original. According to Nyquist sampling, this change in frequency range means that only half of the original samples need to be kept in order to perfectly reconstruct the signal; more specifically, subsampling can be used to remove every second sample. The scale has now been doubled. The resolution has also changed: the filtering improved the frequency resolution but reduced the time resolution. The approximation subsignal can then be put through the filter bank again, and this is repeated until the required level of decomposition has been reached. The ideas are shown in figure 2.8.

Figure 2.8
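The single decomposition level described above can be sketched in a few lines of Python. The orthonormal Haar filters are used here purely as an illustrative choice; the text does not fix a particular wavelet.

```python
import math

def haar_level(signal):
    """One level of the filter bank: low-pass and high-pass filtering of an
    even-length signal, each followed by subsampling by a factor of 2.
    Uses the orthonormal Haar filters (an assumed, illustrative choice)."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]  # low-pass
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]  # high-pass
    return approx, detail

x = [4, 6, 10, 12, 8, 6, 5, 5]
a, d = haar_level(x)
# each subsignal has half the samples; together they allow perfect reconstruction
```

Synthesis reverses the step: each original pair of samples is recovered as s*(a_i + d_i) and s*(a_i - d_i).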

The Discrete Wavelet Transform can be described as a series of filtering and subsampling (decimating in time) steps, as depicted in Figure 1 and described in more detail below. Overall the filters have the effect of separating out finer and finer detail; if all the details are added together (along with the final approximation) then the original signal is reproduced. Using a further analogy from Hubbard[4], this decomposition is like decomposing the ratio 87/7 into parts of increasing detail, such that:

    87 / 7 = 10 + 2 + 0.4 + 0.02 + 0.008 + 0.0005 + ...

The detail parts can then be summed to form 12.4285, an approximation of the original number 87/7 = 12.428571...

1.1.3 Conservation and Compaction of Energy

An important property of wavelet analysis is the conservation of energy. Energy here is defined as the sum of the squares of the values: the energy of an image is the sum of the squares of its pixel values, and the energy of the wavelet transform of an image is the sum of the squares of the transform coefficients. During wavelet analysis the energy of a signal is divided between the approximation and detail signals, but the total energy does not change. During compression, however, energy is lost, because thresholding changes the coefficient values and the compressed version therefore contains less energy. The compaction of energy describes how much energy has been compacted into the approximation signal during wavelet analysis. Compaction occurs wherever the magnitudes of the detail coefficients are significantly smaller than those of the approximation coefficients. Compaction is important when compressing signals: the more energy that has been compacted into the approximation signal, the less energy can be lost during compression.
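Conservation and compaction can be checked numerically. A minimal sketch, again assuming a single orthonormal Haar analysis step (an illustrative choice, not prescribed by the text):

```python
import math

def haar_level(signal):
    # one orthonormal Haar analysis step (illustrative choice of wavelet)
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def energy(values):
    # energy = sum of the squares of the values
    return sum(v * v for v in values)

x = [4, 6, 10, 12, 8, 6, 5, 5]
a, d = haar_level(x)
# total energy is unchanged by the analysis step, and because neighbouring
# samples are similar, almost all of it is compacted into the approximation
print(energy(x), energy(a) + energy(d), energy(a) / energy(x))
```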

WAVELET COMPRESSION

1.2.1 Compression techniques

There are many different forms of data compression. This investigation concentrates on transform coding, and more specifically on wavelet transforms. Image data can be represented by the coefficients of discrete image transforms; coefficients that make only small contributions to the information content can be omitted. Usually the image is split into blocks (subimages) of 8x8 or 16x16 pixels and each block is transformed separately. However, this does not take into account any correlation between blocks, and it creates "blocking artefacts", which are not good if a smooth image is required. The wavelet transform, in contrast, is applied to entire images rather than subimages, so it produces no blocking artefacts. This is a major advantage of wavelet compression over other transform compression methods.

Thresholding in Wavelet Compression

For some signals, many of the wavelet coefficients are close to or equal to zero, and thresholding can modify the coefficients to produce more zeros. In hard thresholding, any coefficient whose magnitude is below a threshold is set to zero. This produces many consecutive zeros, which can be stored in much less space and transmitted more quickly using entropy coding. An important point about wavelet compression is explained by Aboufadel[3]: "The use of wavelets and thresholding serves to process the original signal, but, to this point, no actual compression of data has occurred". In other words, wavelet analysis does not itself compress a signal; it provides a representation of the signal which allows the data to be compressed by standard entropy coding

techniques, such as Huffman coding. Huffman coding works well on a signal processed by wavelet analysis and thresholding because it relies on most of the data values being small, and in particular zero: it assigns shorter codes to frequently occurring values (here the zeros and small coefficients) and longer codes to rare ones, so long strings of zeros can be encoded very efficiently. An actual percentage compression figure can therefore only be stated in conjunction with an entropy coding technique. To compare different wavelets, the number of zeros produced is used: more zeros allow a higher compression rate, and many consecutive zeros give an excellent compression rate.
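Hard thresholding as described above is a one-line operation; a minimal sketch (the threshold value and coefficient list are made-up illustrations):

```python
def hard_threshold(coeffs, t):
    """Hard thresholding: zero out any coefficient whose magnitude falls
    below the threshold t; larger coefficients are kept unchanged."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

w = [7.07, -1.41, 15.56, 0.02, -0.3, 9.9, 0.0, 1.41]
wt = hard_threshold(w, 1.0)
# the extra zeros (and runs of zeros) are what the entropy coder exploits
zeros = wt.count(0.0)
```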

Figure 1: Discrete Wavelet Transform (from http://www.public.iastate.edu/~rpolikar/WAVELETS/WTpart4.html)

At each level in this series, a set of 2^(j-1) wavelet coefficients is calculated, where j <= J is the scale and N = 2^J is the number of samples in the input signal. The coefficients are calculated by applying a high-pass wavelet filter to the signal and down-sampling the result by a factor of 2. At the same level, a low-pass scale filtering is also performed (followed by down-sampling) to produce the signal for the next level. Both the wavelet and scale filters can be obtained from a single Quadrature Mirror Filter (QMF) function that defines the wavelet. Each set of scale coefficients corresponds to a smoothing of the signal and the removal of detail, whereas the wavelet coefficients correspond to the differences between the scales. Wavelet theory shows that the original signal can be reconstructed from the coarsest scale coefficients together with the series of wavelet coefficients. The total number of coefficients (scale + wavelet) equals the number of samples in the signal.
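The coefficient counts can be verified with a full decomposition sketch; the Haar filters below are again an assumed, illustrative choice:

```python
import math

def haar_dwt(signal):
    """Full Haar DWT of a length-2^J signal: repeatedly filter and subsample
    the approximation, keeping the wavelet (detail) coefficients of each level."""
    s = 1 / math.sqrt(2)
    approx = list(signal)
    details = []                       # finest scale first
    while len(approx) > 1:
        low = [s * (approx[i] + approx[i + 1]) for i in range(0, len(approx), 2)]
        high = [s * (approx[i] - approx[i + 1]) for i in range(0, len(approx), 2)]
        details.append(high)
        approx = low
    return approx, details

N = 16                                 # N = 2^J with J = 4
a, ds = haar_dwt(list(range(N)))
sizes = [len(d) for d in ds]           # detail counts halve at every level
total = len(a) + sum(sizes)            # scale + wavelet coefficients = N
```

For N = 16 the detail levels hold 8, 4, 2 and 1 coefficients, and together with the single coarsest scale coefficient the total is exactly N.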

1.2.2 Wavelet Compression

The distribution of values for the wavelet coefficients is usually centred around 0, with very few large coefficients. This means that almost all of the information is concentrated in a small fraction of the coefficients, which can be efficiently compressed. This is done by quantizing the values based on the histogram and encoding the result in an efficient way, e.g. with Huffman encoding. For this homework we will use a simpler method: instead of quantizing, we discard all but the M largest coefficients. This provides a compression ratio of roughly 2M/N (the factor of 2 accounts for storing both the coefficient value and its index).
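The keep-the-M-largest scheme just described can be sketched directly; the coefficient values here are made-up illustrations:

```python
def keep_largest(coeffs, M):
    """Discard all but the M largest-magnitude coefficients, storing
    (index, value) pairs -- roughly 2M numbers against the original N."""
    ranked = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    return sorted((i, coeffs[i]) for i in ranked[:M])

def reconstruct(kept, N):
    # put the surviving coefficients back, with zeros everywhere else
    out = [0.0] * N
    for i, v in kept:
        out[i] = v
    return out

w = [9.2, 0.1, -0.4, 6.5, 0.0, -7.8, 0.2, 0.3]
kept = keep_largest(w, 3)
ratio = 2 * len(kept) / len(w)   # compression ratio ~ 2M/N
```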

1.2.3 Multiresolution and Wavelets

The power of wavelets comes from the use of multiresolution. Rather than examining the entire signal through the same window, different parts of the signal are viewed through windows of different size (or resolution). High frequency parts of the signal use a small window to give good time resolution; low frequency parts use a large window to give good frequency information.

An important thing to note is that in wavelet analysis the windows all have equal area, even though their height and width may vary. The area of the window is constrained by Heisenberg's uncertainty principle: as the frequency resolution gets better, the time resolution must get worse.

Figure 2.3

The different transforms provide different resolutions of time and frequency. In Fourier analysis a signal is broken up into sine and cosine waves of different frequencies; it effectively re-writes the signal in terms of those sines and cosines. Wavelet analysis does a similar thing: it takes a "mother wavelet" and re-writes the signal in terms of shifted and scaled versions of this mother wavelet.

1.2.4 The Continuous Wavelet Transform (CWT)

The continuous wavelet transform is the sum over all time of scaled and shifted versions of the mother wavelet ψ. Calculating the CWT results in many coefficients C, which are functions of scale s and translation τ:

    C(s, τ) = ∫ f(t) ψ(s, τ, t) dt

The translation τ is proportional to time information, and the scale s is proportional to the inverse of the frequency information. To find the constituent wavelets of the signal, the coefficients should be multiplied by the relevant version of the mother wavelet. The scale of a wavelet simply describes how stretched it is along the x-axis; larger scales are more stretched:

Figure 2.4: The db8 wavelet shown at two different scales

The translation is how far the wavelet has been shifted along the x-axis. Figure 2.5 shows a wavelet; figure 2.6 shows the same mother wavelet translated by k:

Figure 2.5

Figure 2.6

The same wavelet as in figure 2.5, but translated by k
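The CWT coefficients described in section 1.2.4 can be approximated on sampled data by replacing the integral with a Riemann sum. A rough sketch, using a (non-normalised) Mexican-hat mother wavelet as an assumed illustrative choice, not one prescribed by the text:

```python
import math

def mexican_hat(t):
    # Mexican-hat mother wavelet (illustrative choice; zero mean, wiggles
    # about the x-axis with equal area above and below)
    return (1 - t * t) * math.exp(-t * t / 2)

def cwt_coeff(f, times, s, tau):
    """Approximate C(s, tau) = integral of f(t) * psi((t - tau) / s) / sqrt(s) dt
    by a Riemann sum over the uniformly sampled signal f."""
    dt = times[1] - times[0]
    return sum(f[i] * mexican_hat((t - tau) / s) / math.sqrt(s)
               for i, t in enumerate(times)) * dt

times = [i * 0.01 for i in range(1000)]
f = [math.sin(2 * math.pi * 1.0 * t) for t in times]   # a 1 Hz tone
# scan a few scales at a fixed translation tau = 5.0
row = [cwt_coeff(f, times, s, 5.0) for s in (0.1, 0.5, 1.0)]
```

Scanning s and tau over grids produces the full two-dimensional map of coefficients C(s, tau).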

USE OF WAVELET COMPRESSION


1.3.1 Wavelets and Compression

Wavelets are useful for compressing signals, but they also have far more extensive uses. They can be used to process and improve signals, and are of particular use in fields such as medical imaging, where image degradation is not tolerated. They can also be used to remove noise from an image: for example, if the noise lives at very fine scales, wavelets can be used to cut out those fine scales, effectively removing the noise.

1.3.2 The Fingerprint example

The FBI have been using wavelet techniques to store and process fingerprint images more efficiently. The problem the FBI faced was that they had over 200 million sets of fingerprints, with up to 30,000 new ones arriving each day, so searching through them was taking too long. The FBI thought that computerising the fingerprint images would be a better solution; however, it was estimated that checking each fingerprint would use 600 Kbytes of memory and, even worse, 2000 terabytes of storage space would be required to hold all the image data. The FBI then turned to wavelets for help, adapting a technique to compress each image into just 7% of the original space. Even more impressively, according to Kiernan[8], when the images are decompressed they show "little distortion". Using wavelets the police hope to check fingerprints within 24 hours.

Earlier attempts to compress the images used the JPEG format, which breaks an image into blocks eight pixels square, transforms the data with a Fourier-type transform, and then compresses it. This was unsatisfactory: compressing images this way to less than 10% of their original size caused "tiling artefacts", leaving marked boundaries in the image. As the fingerprint matching algorithm relies on accurate data to match images, using JPEG would weaken the success of the process. Wavelets, by contrast, do not create these "tiles" or "blocks"; they work on the image as a whole, collecting detail at certain levels across the entire image.
Therefore wavelets offered brilliant compression ratios and little image degradation; overall they outperformed the techniques based on Fourier transforms.

The basic steps used in the fingerprint compression were:

(1) Digitise the source image into a signal s.
(2) Decompose the signal s into wavelet coefficients w.
(3) Modify the coefficients of w, using thresholding, to obtain a sequence w'.
(4) Use quantisation to convert w' to q.
(5) Apply entropy encoding to compress q into e.
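The five steps above can be sketched end-to-end on a single scan line. Everything here is an assumed illustration: one Haar level stands in for the full decomposition, uniform rounding stands in for the quantiser, and a simple run-length pass over zeros stands in for the entropy coder.

```python
import math

def haar_level(signal):
    # step (2): one level of wavelet decomposition (illustrative Haar filters)
    s = 1 / math.sqrt(2)
    return ([s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)],
            [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)])

def compress(signal, threshold, step):
    a, d = haar_level(signal)
    w = a + d
    # step (3): hard-threshold small coefficients to zero
    w2 = [c if abs(c) >= threshold else 0.0 for c in w]
    # step (4): uniform quantisation to integer levels
    q = [round(c / step) for c in w2]
    # step (5): stand-in for entropy coding -- run-length encode the zeros
    e, run = [], 0
    for v in q:
        if v == 0:
            run += 1
        else:
            if run:
                e.append(('Z', run))
                run = 0
            e.append(('V', v))
    if run:
        e.append(('Z', run))
    return e

s = [100, 102, 98, 99, 150, 151, 90, 91]   # step (1): a digitised scan line
e = compress(s, threshold=2.0, step=1.0)
```

On this input all four detail coefficients fall below the threshold, so the tail of the quantised sequence collapses into a single zero-run token.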
