Chapter 6
Image Compression
Outline
Data redundancy
Huffman coding
Arithmetic coding
The reduced file is called the compressed file and is used to reconstruct
the image, resulting in the decompressed image.
The ratio of the original (uncompressed) image file size to the compressed
file size is referred to as the compression ratio.
Cont’d..
EXAMPLE: The original image is 256x256 pixels at 8 bits per pixel, so the file is
65,536 bytes. After compression the image file is 6,554 bytes.
The compression ratio is: SIZE_U/SIZE_C = 65,536/6,554 = 9.999, or about 10.
This can also be written as 10:1.
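The ratio in the example can be checked with a short Python sketch (the file sizes are the ones given above):

```python
# Compression ratio for the 256x256, 8-bit example.
width, height, bits_per_pixel = 256, 256, 8
uncompressed_bytes = width * height * bits_per_pixel // 8  # 65,536 bytes
compressed_bytes = 6554                                    # size after compression

ratio = uncompressed_bytes / compressed_bytes
print(f"Compression ratio: {ratio:.3f}, i.e. about {round(ratio)}:1")
```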
The reduction in file size is necessary to meet the bandwidth requirements of
many transmission systems and the storage requirements of computer databases.
The main goals of such a system are to reduce the storage quantity as much as
possible, and
to make the decoded image displayed on the monitor as similar to the original
image as possible.
Trade-offs
A compression scheme for video may require expensive hardware for the video
to be decompressed fast enough to be viewed as it is being decompressed.
Trade-offs include:
• Degree of compression,
• Amount of distortion introduced (if using a lossy compression scheme),
• Computational resources required to compress and decompress the data, and
• Speed of compression and decompression
Probability
All compression algorithms assume that there is some bias in the input
messages, so that some inputs are more likely than others, e.g.
an assumption that repeated data are more likely than random data, or that
large white patches occur in “typical” images.
Cont’d..
Before encoding, preprocessing is performed to prepare the image for the
encoding process; it consists of any number of operations that are
application specific.
After the compressed file has been decoded, postprocessing can be
performed to eliminate some of the undesirable artifacts brought about by
the compression process.
Cont’d..
The compressor consists of
1. Preprocessing stage: preprocessing is performed to prepare the image
for the encoding process, and consists of any number of operations that
are application specific.
The mapping process, which maps the original image data into another
mathematical space where it is easier to compress the data.
Cont’d..
2. Encoding stage.
The quantization stage (part of the encoding process) takes the
potentially continuous data from the mapping stage and puts it into discrete
form.
The final stage of encoding involves coding the resulting data, which maps
the discrete data from the quantizer onto a code in an optimal manner.
Decompression consists of
1. A decoding process, divided into two stages:
The decoding stage takes the compressed file and reverses the original
coding by mapping the codes back to the original, quantized values.
Next, these values are processed by a stage that performs an inverse
mapping to reverse the original mapping process.
2. A postprocessing stage: after the compressed file has been decoded,
postprocessing can be performed to eliminate some of the potentially
undesirable artifacts brought about by the compression process.
The image is postprocessed to enhance the look of the final image.
Often, practical compression algorithms are a combination of a
number of different individual compression techniques.
Image Compression Methods
Information is the interpretation of data in a meaningful way.
1. Lossless Compression
2. Lossy Compression.
Lossless Compression
This compression is called lossless because no data are lost, and
the original image can be recreated exactly from the compressed data.
Huffman Coding
Run-Length Coding
Huffman Coding
The Huffman code, developed by D. Huffman in 1952, is a minimum-length
code.
This means that, given the statistical distribution of the gray levels (the
histogram), the algorithm generates a code whose average length is as close
as possible to the minimum.
For example, Huffman coding alone will typically reduce the file by 10
to 50%, but this ratio can be improved to 2:1 or 3:1 by preprocessing
to remove irrelevant information.
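As an illustration, a minimal Huffman coder for a short list of gray levels can be sketched with the standard-library heap (the pixel values below are made up for the example):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table from symbol frequencies."""
    freq = Counter(data)
    # Heap items: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # single-symbol edge case
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

pixels = [0, 0, 0, 0, 1, 1, 2, 3]            # hypothetical gray levels
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)
print(codes, len(encoded), "bits vs", len(pixels) * 2, "fixed-length bits")
```

Frequent gray levels get short codes (here, level 0 gets 1 bit), so the encoded stream is shorter than a fixed-length encoding.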
Run-Length Coding
Run-length coding (RLC) is an image compression method that
works by counting the number of adjacent pixels with the same
gray-level value.
This count, called the run length, is then coded and stored.
Here we will explore several methods of run-length coding: basic
methods that are used primarily for binary (two-valued) images,
and extended versions for gray-scale images.
Basic RLC is used primarily for binary images, but can work with
complex images that have been preprocessed by thresholding to
reduce the number of gray levels to two.
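A minimal sketch of basic RLC on one row of a binary image (the row shown is a made-up example):

```python
def rle_encode(row):
    """Run-length encode one row as (value, run_length) pairs."""
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([pixel, 1])  # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Reverse the encoding to recover the original row exactly (lossless)."""
    row = []
    for value, length in runs:
        row.extend([value] * length)
    return row

row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
runs = rle_encode(row)
print(runs)  # [(0, 3), (1, 2), (0, 1), (1, 4)]
assert rle_decode(runs) == row
```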
Lossy Compression
These compression methods are called lossy because they allow a loss
in the actual image data, so the original image cannot be recreated exactly.
Information Theory
Entropy is a numerical measure of the uncertainty of an outcome
Cont’d..
The information in a message with probability p is I = log2(1/p) = -log2(p) bits;
the equation says that messages with higher probability contain less information.
Computing entropy
Entropy is simply a weighted average of the information of each
message, H = -sum(p_i * log2(p_i)), and therefore the average number of
bits of information in the set of messages.
The more random a set of messages (the more even the probabilities),
the more information they contain on average.
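This weighted average can be computed directly; the two example messages below are made up to contrast even and uneven probabilities:

```python
from collections import Counter
from math import log2

def entropy(data):
    """Average bits of information per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

uniform = [0, 1, 2, 3] * 4   # all four gray levels equally likely
skewed = [0] * 14 + [1, 2]   # one dominant gray level

print(f"uniform: {entropy(uniform):.3f} bits")  # 2.000 (maximum for 4 symbols)
print(f"skewed:  {entropy(skewed):.3f} bits")   # much lower
```

The skewed message has lower entropy, which is exactly the redundancy that a coder like Huffman exploits.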
Measure of performance
The performance of a compression algorithm can be measured in a
number of ways
• Complexity of algorithm
• Memory requirement
• Speed
• Amount of compression and,
• Similarity of the decompressed and original data (for lossy compression)
Arithmetic coding
Arithmetic coding is another kind of lossless compression algorithm. Instead of
assigning a separate codeword to each symbol, it represents an entire message
as a single number in the interval [0, 1).
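Arithmetic coding can be illustrated with a toy floating-point sketch: the interval [0, 1) is narrowed once per symbol, and any number in the final interval identifies the whole message. Real coders use integer arithmetic to avoid precision loss; the symbol probabilities below are assumed for illustration.

```python
def symbol_ranges(probs):
    """Assign each symbol a sub-interval of [0, 1) by cumulative probability."""
    ranges, cum = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p
    return ranges

def arith_encode(message, probs):
    """Narrow [low, high) once per symbol; return one number in the result."""
    ranges = symbol_ranges(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        lo_f, hi_f = ranges[sym]
        low, high = low + span * lo_f, low + span * hi_f
    return (low + high) / 2  # a single number encodes the whole message

def arith_decode(code, probs, length):
    """Find which sub-interval the code falls in, then rescale and repeat."""
    ranges = symbol_ranges(probs)
    out = []
    for _ in range(length):
        for sym, (lo_f, hi_f) in ranges.items():
            if lo_f <= code < hi_f:
                out.append(sym)
                code = (code - lo_f) / (hi_f - lo_f)
                break
    return "".join(out)

probs = {"a": 0.5, "b": 0.3, "c": 0.2}  # hypothetical symbol probabilities
code = arith_encode("abac", probs)
assert arith_decode(code, probs, 4) == "abac"
```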
Bit plane encoding
Every pixel value can be written in binary; bit-plane encoding leverages this
binary representation by splitting the image into separate binary planes (one
per bit) and compressing each plane, offering a lossless method to reduce
storage space without sacrificing any image quality.
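A minimal sketch of splitting 8-bit pixel values into bit planes and recombining them losslessly (the pixel list is a made-up example):

```python
def bit_planes(pixels, bits=8):
    """Decompose pixel values into `bits` binary planes (plane 0 = LSB)."""
    return [[(p >> b) & 1 for p in pixels] for b in range(bits)]

def from_planes(planes):
    """Recombine the planes; the original pixels come back exactly."""
    n = len(planes[0])
    return [sum(planes[b][i] << b for b in range(len(planes)))
            for i in range(n)]

pixels = [0, 255, 128, 7]
planes = bit_planes(pixels)
assert from_planes(planes) == pixels
# The most significant plane alone looks like a thresholded binary image:
print(planes[7])  # [0, 1, 1, 0]
```

Each binary plane can then be compressed with a binary method such as basic RLC.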
Reading Assignment
video compression
Quiz 5%
1. What is image restoration? (2 pts)
2. Write the image compression methods. (2 pts)
3. If entropy is high, what happens to the redundancy? (1 pt)