
Chapter Six

Image Compression
Outline

 Basic definition of image compression

 Data redundancy

 Elements of information theory

 General mechanism and types of data compression

 Huffman coding

 Arithmetic coding

 Bit plane encoding


Image compression
 Image compression is an application of data compression that encodes the original image with fewer bits.

 The objective of image compression is to reduce the redundancy of the image and to store or transmit data in an efficient form.

 In general, image compression involves reducing the size of image data files while retaining the necessary information.

 The reduced file is called the compressed file and is used to reconstruct the image, resulting in the decompressed image.

 Compression helps to reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth.
Cont’d..
 The original image, before any compression is performed, is called the uncompressed image file.

 The ratio of the size of the original (uncompressed) file to the size of the compressed file is referred to as the compression ratio.

 It is often written as SIZE_U:SIZE_C. The compression ratio is given by:

   Compression Ratio = SIZE_U / SIZE_C
Cont’d..
EXAMPLE: The original image is 256x256 pixels, 8 bits per pixel. This file is 65,536 bytes. After compression the image file is 6,554 bytes.
 The compression ratio is: SIZE_U/SIZE_C = 65536/6554 = 9.999 ≈ 10.
 This can also be written as 10:1.
 The reduction in file size is necessary to meet the bandwidth requirements of many transmission systems and the storage requirements of computer databases.
 The main goal of such a system is to reduce the stored quantity as much as possible, and
 the decoded image displayed on the monitor should be as similar to the original image as possible.
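The arithmetic above is simple enough to check directly. A minimal Python sketch, using the file sizes from the example (RD is the relative data redundancy, defined later in these notes):

def compression_ratio(uncompressed_bytes, compressed_bytes):
    # Compression ratio CR = SIZE_U / SIZE_C
    return uncompressed_bytes / compressed_bytes

size_u = 256 * 256                # 256x256 pixels, 1 byte per pixel = 65,536 bytes
size_c = 6554                     # file size after compression
cr = compression_ratio(size_u, size_c)
rd = 1 - 1 / cr                   # relative data redundancy (defined later)
print(f"CR = {cr:.3f}, about 10:1; RD = {rd:.2f}")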
Trade-offs
 A compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed while it is being decompressed.

 Trade-offs include:
• Degree of compression,
• Amount of distortion introduced (if using a lossy compression scheme),
• Computational resources required to compress and decompress the data, and
• Speed of compression and decompression.
Probability
 All compression algorithms assume that there is some bias on the input messages, so that some inputs are more likely than others, i.e.,

 there is some unbalanced probability distribution over the possible messages.

 Most compression algorithms base this "bias" on the structure of the messages, e.g.,

 an assumption that repeated data are more likely than random data, or that large white patches occur in "typical" images.

 Compression is therefore all about probability.
Compression System Model
 The compression system model consists of two parts: the compressor and the decompressor.

 Compressor: consists of a preprocessing stage and an encoding stage.

 Decompressor: consists of a decoding stage followed by a postprocessing stage.
Cont’d..
The compressor consists of:
1. Preprocessing stage: preprocessing is performed to prepare the image for the encoding process, and consists of any number of operations that are application specific.

 Data reduction: here, the image data can be reduced by gray-level and/or spatial quantization, or they can undergo any desired image enhancement (for example, noise removal) process.

 The mapping process, which maps the original image data into another mathematical space where it is easier to compress the data.
Cont’d..
2. Encoding stage:

 The quantization stage (as part of the encoding process), which takes the potentially continuous data from the mapping stage and puts it in discrete form.

 The final stage of encoding involves coding the resulting data, which maps the discrete data from the quantizer onto a code in an optimal manner.

 A compression algorithm may consist of all the stages, or it may consist of only one or two of the stages.
Decompression consists of:
 1. A decoding process, divided into two stages:
 The decoding stage takes the compressed file and reverses the original coding by mapping the codes back to the original, quantized values.
 Next, these values are processed by a stage that performs an inverse mapping to reverse the original mapping process.
 2. A postprocessing stage: after the compressed file has been decoded, postprocessing can be performed to eliminate some of the potentially undesirable artifacts brought about by the compression process.
 The image is postprocessed to enhance the look of the final image.
 Often, many practical compression algorithms are a combination of a number of different individual compression techniques. A runnable sketch of the whole pipeline follows.
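The stage structure above can be made concrete with a small sketch. Every stage here is a hypothetical stand-in (16-level gray-level quantization for data reduction, horizontal differencing for the mapping, raw bytes for the coding stage), chosen only to make the pipeline runnable end to end:

import numpy as np

def preprocess(img):
    # Data reduction: quantize 256 gray levels down to 16
    return (img // 16).astype(np.uint8)

def mapping(img):
    # Mapping: horizontal pixel differences are easier to compress
    return np.diff(img.astype(np.int16), axis=1, prepend=0)

def encode(data):
    # Coding stage stand-in: raw bytes (a real coder would use Huffman, etc.)
    return data.astype(np.int16).tobytes()

def decode(blob, shape):
    return np.frombuffer(blob, dtype=np.int16).reshape(shape)

def inverse_mapping(diffs):
    # A cumulative sum undoes the difference mapping
    return np.cumsum(diffs, axis=1).astype(np.uint8)

def postprocess(img):
    # Restore the original gray-level range
    return img * 16

img = np.random.randint(0, 256, (4, 8), dtype=np.uint8)
restored = postprocess(inverse_mapping(decode(encode(mapping(preprocess(img))), img.shape)))

Because the quantization stage discards information, this sketch is lossy: restored matches img only up to the 16-level quantization.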
Images Compression Methods
 Information is the interpretation of data in a meaningful way.

 There are two primary types of image compression methods, and they are:

 1. Lossless compression

 2. Lossy compression
Lossless Compression
 This compression is called lossless because no data are lost, and

 the original image can be recreated exactly from the compressed data.

 It is suited to simple images, such as text-only images.

There are two methods of lossless compression:

 Huffman coding

 Run-length coding
Huffman Coding
 The Huffman code, developed by D. Huffman in 1952, is a minimum-length code.

 This means that, given the statistical distribution of the gray levels (the histogram),

 the Huffman algorithm will generate a code that is as close as possible to the minimum bound, the entropy.

 For example, Huffman coding alone will typically reduce the file size by 10 to 50%, but this ratio can be improved to 2:1 or 3:1 by preprocessing to remove irrelevant information.
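The classic construction repeatedly merges the two least probable symbols and prefixes one code bit to each side of the merge. A compact Python sketch (the gray-level histogram at the bottom is hypothetical):

import heapq
from collections import Counter

def huffman_code(symbol_counts):
    # Each heap entry: (weight, tiebreaker, {symbol: codeword so far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(symbol_counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # the two least probable subtrees...
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # ...each gain a prefix bit
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical gray-level histogram of a small image
hist = Counter({0: 40, 64: 30, 128: 20, 255: 10})
print(huffman_code(hist))   # frequent gray levels receive the shorter codewords

For this histogram the average codeword length works out to 1.9 bits/pixel, close to the entropy bound of about 1.85 bits.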
Run-Length Coding
 Run-length coding (RLC) is an image compression method that works by counting the number of adjacent pixels with the same gray-level value.
 This count, called the run length, is then coded and stored.
 Here we will explore several methods of run-length coding: basic methods that are used primarily for binary (two-valued) images, and extended versions for gray-scale images.
 Basic RLC is used primarily for binary images, but can work with complex images that have been preprocessed by thresholding to reduce the number of gray levels to two. A minimal sketch follows.
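A minimal sketch of basic RLC on one image row, in pure Python (no assumptions beyond a list of pixel values):

def rle_encode(row):
    # Run-length encode one image row as (value, run_length) pairs
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1              # extend the current run
        else:
            runs.append([pixel, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    # Expand each (value, length) pair back into pixels
    return [value for value, length in runs for _ in range(length)]

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]      # one row of a binary image
print(rle_encode(row))                     # [(0, 4), (255, 2), (0, 3)]
assert rle_decode(rle_encode(row)) == row  # RLC is lossless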
Lossy Compression
 These compression methods are called lossy because they allow a loss of actual image data,

 so the original uncompressed image cannot be recreated exactly from the compressed file.

 For complex images these techniques can achieve compression ratios of 100 or 200 and still retain high-quality visual information.

 For simple images, or for lower-quality results, compression ratios as high as 100 to 200 can be attained.
Information Theory
Entropy is a numerical measure of the uncertainty of an outcome.

 Shannon borrowed the definition of entropy from statistical physics to capture the notion of how much information is contained in a message, given the probabilities of the possible messages. For a set of possible messages S, Shannon defined entropy as:

   H(S) = Σ p(s) · log2(1/p(s)), summed over all messages s in S

 where p(s) is the probability of message s, and

 H(S) is the minimum average number of bits/symbol possible.
Cont’d..

 Shannon defined the self-information of a message s ∈ S as:

   i(s) = log2(1/p(s)) = −log2 p(s)

 It represents the number of bits of information contained in a message and, roughly speaking, the number of bits we should use to send that message.

 The equation says that messages with higher probability contain less information.
Computing entropy
 Entropy is simply a weighted average of the information of each message, and therefore the average number of bits of information in the set of messages.

 Larger entropies represent more information and, perhaps counter-intuitively,

 the more random a set of messages (the more even the probabilities), the more information they contain on average.

 If the entropy is low, it means there is a lot of redundancy in the message (data that does not convey much information) that can be removed by compression. A small numeric sketch follows.
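To make the averaging concrete, a short sketch computing H(S) and the self-information of individual symbols (the probabilities below are a hypothetical normalized histogram):

import numpy as np

def entropy_bits(probabilities):
    # H(S) = sum over s of p(s) * log2(1/p(s)), in bits per symbol
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]                         # p = 0 terms contribute nothing
    return float(np.sum(p * np.log2(1.0 / p)))

p = [0.4, 0.3, 0.2, 0.1]                 # hypothetical 4-gray-level histogram
print(entropy_bits(p))                   # about 1.85 bits/symbol on average
print(np.log2(1 / 0.1))                  # rarest symbol: about 3.32 bits of self-information
print(entropy_bits([0.25] * 4))          # even probabilities maximize entropy: 2.0 bits

The last line shows the counter-intuitive point above: the perfectly even distribution has the highest entropy, so it is the hardest to compress.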
Data Redundancy
 Data compression removes data redundancy.

 Let n1 and n2 denote the number of information-carrying units in two data sets that represent the same information.

 The relative data redundancy RD is RD = 1 − 1/CR, where CR is the compression ratio, CR = n1/n2.

 If n1 = n2, then CR = 1 and RD = 0: no data redundancy.

 If n1 >> n2, then CR >> 1 and RD ≅ 1: highly redundant data.
Measure of performance
 The performance of a compression algorithm can be measured in a number of ways:
• Complexity of the algorithm,
• Memory requirements,
• Speed,
• Amount of compression, and
• Similarity of the compressed and original data (for lossy methods).
Arithmetic coding
 Arithmetic coding is another kind of lossless compression algorithm.

 Like any digital file, digital images are represented at lower computational levels as a string of characters.

 Arithmetic coding encodes frequently used characters in an image file with fewer bits and less-used characters with more bits.

 The result is fewer bits overall compared to the original string of characters.
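Unlike Huffman coding, which assigns a whole number of bits to each symbol, arithmetic coding narrows a single sub-interval of [0, 1) for the entire message. A toy float-based sketch, usable only for short messages (real coders use integer arithmetic with renormalization; the symbol model here is hypothetical):

def cumulative(model):
    # Turn {symbol: prob} into {symbol: (low, high)} sub-intervals of [0, 1)
    intervals, low = {}, 0.0
    for sym, p in model.items():
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def encode(message, model):
    low, high = 0.0, 1.0
    intervals = cumulative(model)
    for sym in message:                   # each symbol narrows the interval
        span = high - low
        s_low, s_high = intervals[sym]
        low, high = low + span * s_low, low + span * s_high
    return (low + high) / 2               # any number in the final interval

def decode(code, length, model):
    intervals = cumulative(model)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in intervals.items():
            if s_low <= code < s_high:    # which sub-interval holds the code?
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)  # rescale and repeat
                break
    return "".join(out)

model = {"a": 0.6, "b": 0.3, "c": 0.1}    # frequent symbols narrow the interval less
assert decode(encode("aabac", model), 5, model) == "aabac"

Frequent symbols shrink the interval only slightly, so they cost fewer bits to specify, which is exactly the behavior described above.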
Bit plane encoding
 Bit plane encoding leverages the binary representation of pixel values for image compression, offering a lossless method to reduce storage space without sacrificing any image quality.

 Bit plane encoding offers a powerful and versatile method for compressing images without compromising their quality.

 One way to reduce an image's inter-pixel redundancy is to process the image's bit planes individually.

 First, decompose the original image into bit planes, as sketched below.
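A minimal sketch of the decomposition for an 8-bit grayscale image; each resulting binary plane could then be compressed with, for example, the run-length coding shown earlier:

import numpy as np

def bit_planes(image):
    # planes[0] is the least significant bit, planes[7] the most significant
    return [(image >> k) & 1 for k in range(8)]

def reassemble(planes):
    # Summing the weighted planes rebuilds every pixel exactly (lossless)
    return sum(plane << k for k, plane in enumerate(planes)).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(img)
assert np.array_equal(reassemble(planes), img)   # no image quality sacrificed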
Reading Assignment
Video compression

Quiz 5%
1. What is image restoration? (2 pts)
2. Write the image compression methods. (2 pts)
3. If entropy is high, what happens to the redundancy? (1 pt)
