MOD 7 DIP

This module covers image compression: the sources of data redundancy in images, how to measure information and fidelity for evaluating compression, lossless methods (variable-length coding, LZW, bit-plane coding, lossless predictive coding), and lossy compression via lossy predictive coding.


• Image compression

– Data vs. information


– Entropy
– Data redundancy
• Coding redundancy
• Interpixel redundancy
• Psycho-visual redundancy
– Fidelity measurement
• Lossless compression
– Variable length coding
• Huffman coding
– LZW
– Bitplane coding
– Binary image compression
• Run length coding
– Lossless predictive coding
• Lossy compression
– Lossy predictive coding
Questions
• What is the difference between data and information?
• How to measure data redundancy?
• How many different types of data redundancy? Explain each.
• What is entropy? What is the first/second order estimate of entropy?
• Understand the two criteria that all coding mechanisms should satisfy.
• Understand Huffman coding
• Why is the average code length produced by Huffman coding always greater
than or equal to the entropy?
• What image format uses which coding scheme?
• Understand RLC, differential coding
Data and Information
• Data and information
– Different data set can represent the same kind of information
• Data redundancy
– Relative data redundancy

$R_D = 1 - \frac{1}{C_R}, \qquad C_R = \frac{n_1}{n_2}$

– $C_R$: compression ratio
– $n_1$, $n_2$: number of information-carrying units in two data sets that
represent the same information
• Data compression
– Reducing the amount of data required to represent a given quantity of
information
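A quick numeric check of these definitions, as a minimal Python sketch (the function name redundancy is ours):

    def redundancy(n1, n2):
        """Relative data redundancy R_D of data set 1 relative to data set 2."""
        cr = n1 / n2        # compression ratio C_R = n1 / n2
        return 1 - 1 / cr   # R_D = 1 - 1/C_R

    # A 10:1 compression ratio means 90% of the first data set is redundant.
    print(redundancy(100_000, 10_000))   # 0.9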
Data Redundancy
• Coding redundancy
• Interpixel redundancy
• Psychovisual redundancy
Coding Redundancy
• In general, coding redundancy is present when
the codes assigned to a set of events (such as
gray-level values) have not been selected to
take full advantage of the probabilities of the
events.
• In most images, certain gray levels are more
probable than others
Coding Redundancy - Example

$p_r(r_k) = \frac{n_k}{n} \qquad\qquad L_{avg} = \sum_{k=0}^{L-1} l(r_k)\, p_r(r_k)$

r_k        p_r(r_k)   Code 1   l_1(r_k)   Code 2    l_2(r_k)
r0 = 0     0.19       000      3          11        2
r1 = 1/7   0.25       001      3          01        2
r2 = 2/7   0.21       010      3          10        2
r3 = 3/7   0.16       011      3          001       3
r4 = 4/7   0.08       100      3          0001      4
r5 = 5/7   0.06       101      3          00001     5
r6 = 6/7   0.03       110      3          000001    6
r7 = 1     0.02       111      3          000000    6
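The two average code lengths follow directly from the $L_{avg}$ formula; a small Python sketch using the probabilities and code lengths from the table:

    p  = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]   # p_r(r_k)
    l1 = [3] * 8                                             # Code 1 (fixed length)
    l2 = [2, 2, 2, 3, 4, 5, 6, 6]                            # Code 2 (variable length)

    lavg1 = sum(pk * lk for pk, lk in zip(p, l1))   # 3.0 bits/pixel
    lavg2 = sum(pk * lk for pk, lk in zip(p, l2))   # 2.7 bits/pixel
    print(lavg1, lavg2, lavg1 / lavg2)              # compression ratio C_R ≈ 1.11

Code 2 therefore removes coding redundancy: R_D = 1 - 1/1.11 ≈ 0.1, i.e. about 10% of the data under Code 1 is redundant.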
Interpixel Redundancy
• Because the value of any pixel can be reasonably predicted from the
values of its neighbors, much of the visual contribution of a single
pixel to an image is redundant; it could have been guessed on the
basis of its neighbors’ values.
• Includes spatial redundancy, geometric redundancy, and interframe
redundancy
Interpixel Redundancy - Example
Psychovisual Redundancy
• The eye does not respond with equal sensitivity to all
visual information. Certain information simply has
less relative importance than other information in
normal visual processing
• In general, an observer searches for distinguishing
features such as edges or textural regions and
mentally combines them into recognizable
groupings.
Psychovisual Redundancy

Improved Gray Scale Quantization (IGS)
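A minimal sketch of IGS as it is commonly described for quantizing 8-bit pixels to 4-bit codes (the sample pixel values are illustrative): the low-order bits of a running sum are added to each pixel before truncation, so quantization errors no longer line up as false contours.

    def igs_quantize(pixels, bits=4):
        """Improved Gray Scale quantization of 8-bit pixels to `bits`-bit codes."""
        shift = 8 - bits
        low_mask = (1 << shift) - 1    # low-order bits dropped by truncation
        high_mask = 0xFF ^ low_mask    # high-order bits that survive
        prev_sum = 0
        codes = []
        for p in pixels:
            if p & high_mask == high_mask:
                s = p                            # high bits all 1s: skip the add
            else:
                s = p + (prev_sum & low_mask)    # carry forward previous residue
            codes.append(s >> shift)             # keep the high `bits` bits
            prev_sum = s
        return codes

    print(igs_quantize([108, 139, 135, 244, 172]))   # [6, 9, 8, 15, 11]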


Evaluation Metrics
• Fidelity criteria
• Measure information
– Entropy
Fidelity Criteria

• Objective fidelity criteria
  – Root-mean-square error:
    $e_{rms} = \left[ \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x,y) - f(x,y) \right]^2 \right]^{1/2}$
  – Mean-square signal-to-noise ratio:
    $SNR_{ms} = \frac{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \hat{f}(x,y)^2}{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left[ \hat{f}(x,y) - f(x,y) \right]^2}$
  – Root-mean-square SNR: $SNR_{rms} = \sqrt{SNR_{ms}}$
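A small pure-Python sketch of these criteria (the 2×2 images are illustrative):

    def fidelity(f, f_hat):
        """Return (e_rms, SNR_ms) for images stored as lists of rows."""
        err2 = [(fh - fo) ** 2 for ro, rh in zip(f, f_hat) for fo, fh in zip(ro, rh)]
        sig2 = [fh * fh for rh in f_hat for fh in rh]
        e_rms = (sum(err2) / len(err2)) ** 0.5
        snr_ms = sum(sig2) / sum(err2)
        return e_rms, snr_ms

    f     = [[21, 95], [169, 243]]     # original image
    f_hat = [[20, 96], [168, 244]]     # decompressed approximation
    e, snr = fidelity(f, f_hat)
    print(e, snr ** 0.5)               # e_rms = 1.0; SNR_rms = sqrt(SNR_ms)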
Measure Information
• A random event E that occurs with probability P(E) is
said to contain

$I(E) = \log \frac{1}{P(E)} = -\log P(E)$
– I(E): self-information of E
– The amount of self-information attributed to event E is
inversely related to the probability of E
– If base 2 is used, the unit of information is called a bit
– Example: if P(E) = ½, then I(E) = -log₂P(E) = 1 bit.
Entropy
• Average information per source output, or
uncertainty, or entropy of the source

$H(z) = -\sum_{j=1}^{J} P(a_j)\, \log P(a_j)$
• How to interpret the increase of entropy?
• What happens if the events are equally probable?
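Both questions are easy to explore with a one-line entropy function (a minimal sketch; the function name is ours):

    import math

    def entropy(probs):
        """H = -sum p * log2(p), skipping zero-probability events."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))    # 1.0 bit: equally probable events maximize H
    print(entropy([0.9, 0.1]))    # ~0.47 bits: a skewed source is more predictable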
Example
• Estimate the information content (entropy) of
the following image (8 bit image):
21  21  21  95  169  243  243  243
21  21  21  95  169  243  243  243
21  21  21  95  169  243  243  243
21  21  21  95  169  243  243  243

• First-order estimate of the entropy (bits/pixel):

Gray level   Count   Probability
21           12      3/8
95            4      1/8
169           4      1/8
243          12      3/8

$-\frac{3}{8}\log_2\frac{3}{8} - \frac{1}{8}\log_2\frac{1}{8} - \frac{1}{8}\log_2\frac{1}{8} - \frac{3}{8}\log_2\frac{3}{8} = 1.81$

• Second-order estimate of the entropy (bits per pixel pair), using
horizontally adjacent pairs:

Gray-level pair   Count   Probability
(21, 21)          8       2/7
(21, 95)          4       1/7
(95, 169)         4       1/7
(169, 243)        4       1/7
(243, 243)        8       2/7

$-\frac{2}{7}\log_2\frac{2}{7} \times 2 - \frac{1}{7}\log_2\frac{1}{7} \times 3 = 2.2$

What do these two numbers tell you?
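Both estimates can be reproduced over the 4×8 image with a short sketch:

    from collections import Counter
    import math

    row = [21, 21, 21, 95, 169, 243, 243, 243]
    image = [row] * 4                     # the 4x8 image above

    def estimate(blocks):
        counts = Counter(blocks)
        n = len(blocks)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    pixels = [p for r in image for p in r]                          # single pixels
    pairs = [(r[i], r[i + 1]) for r in image for i in range(7)]     # horizontal pairs
    print(round(estimate(pixels), 2))   # 1.81 bits/pixel (first order)
    print(round(estimate(pairs), 2))    # 2.2 bits per pixel pair (second order)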
Compression Approaches
• Error-free compression or lossless
compression
– Variable-length coding
– Bit-plane coding
– Lossless predictive coding
• Lossy compression
– Lossy predictive coding
– Transform coding
Variable-length Coding
• Only reduces coding redundancy
• Assign the shortest possible code words to the
most probable gray levels
• Huffman coding
– Can obtain the optimal coding scheme
Revisit Example

r_k        p_r(r_k)   Code 1   l_1(r_k)   Code 2    l_2(r_k)
r0 = 0     0.19       000      3          11        2
r1 = 1/7   0.25       001      3          01        2
r2 = 2/7   0.21       010      3          10        2
r3 = 3/7   0.16       011      3          001       3
r4 = 4/7   0.08       100      3          0001      4
r5 = 5/7   0.06       101      3          00001     5
r6 = 6/7   0.03       110      3          000001    6
r7 = 1     0.02       111      3          000000    6
Huffman Coding
• Uniquely decodable
• Instantaneous coding
• Source reduction
– Order the probabilities of symbols
– Combine the lowest probability symbols into a single
symbol that replaces them in the next source reduction
• Code each reduced source, starting with the smallest
source and working back to the original source
Example
R1 0.25 (01) 0.25 (01) 0.25 (01) 0.25 (01) 0.35 (00) 0.4 (1) 0.6 (0)
R2 0.21 (10) 0.21 (10) 0.21 (10) 0.21 (10) 0.25 (01) 0.35 (00) 0.4 (1)
R0 0.19 (11) 0.19 (11) 0.19 (11) 0.19 (11) 0.21 (10) 0.25 (01)
R3 0.16 (001) 0.16 (001) 0.16 (001) 0.19 (000) 0.19 (11)
R4 0.08 (0001) 0.08 (0001) 0.11 (0000) 0.16 (001)
R5 0.06 (00000) 0.06 (00000) 0.08 (0001)
R6 0.03 (000010) 0.05 (00001)
R7 0.02 (000011)
Entropy of the source:
-0.25*log2(0.25) - 0.21*log2(0.21) - 0.19*log2(0.19) - 0.16*log2(0.16)
-0.08*log2(0.08) - 0.06*log2(0.06) - 0.03*log2(0.03) - 0.02*log2(0.02)
= 2.6508 bits/pixel

Average length of Huffman code:


2*0.25 + 2*0.21 + 2*0.19 + 3*0.16 + 4*0.08 + 5*0.06 + 6*0.03 + 6*0.02
= 2.7 bits/pixel
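A compact Huffman coder using Python's heapq reproduces this result. The individual codewords may differ from those above (Huffman codes are not unique), but the code lengths and the 2.7 bits/pixel average match:

    import heapq
    from itertools import count

    def huffman(probs):
        """Build a Huffman code {symbol: codeword} from {symbol: probability}."""
        tiebreak = count()   # prevents heapq from ever comparing the dicts
        heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)   # the two least probable groups...
            p2, _, c2 = heapq.heappop(heap)   # ...are merged (source reduction)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
        return heap[0][2]

    probs = {"r0": 0.19, "r1": 0.25, "r2": 0.21, "r3": 0.16,
             "r4": 0.08, "r5": 0.06, "r6": 0.03, "r7": 0.02}
    code = huffman(probs)
    print(round(sum(probs[s] * len(code[s]) for s in probs), 2))   # 2.7 bits/pixel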
LZW
• Lempel-Ziv-Welch
• Attacks interpixel redundancy (spatial redundancy)
• Assigns fixed-length code words to variable-length sequences of source
symbols
• Requires no a priori knowledge of the probabilities of occurrence of
the symbols
• Used in GIF, TIFF, PDF
• The code book is created while the data are being encoded
• Example: the 4×4 image below, scanned row by row

  39 39 126 126
  39 39 126 126
  39 39 126 126
  39 39 126 126
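A minimal LZW encoder, with the code book initialized to the 256 single-pixel symbols, reproduces the classic coding of this image:

    def lzw_encode(pixels):
        """LZW: fixed-length codes for variable-length pixel sequences."""
        book = {(i,): i for i in range(256)}   # start with all 8-bit symbols
        next_code = 256
        current = ()
        out = []
        for p in pixels:
            candidate = current + (p,)
            if candidate in book:
                current = candidate            # keep growing the recognized run
            else:
                out.append(book[current])      # emit code for the recognized run
                book[candidate] = next_code    # learn the new sequence on the fly
                next_code += 1
                current = (p,)
        if current:
            out.append(book[current])
        return out

    image = [39, 39, 126, 126] * 4             # the 4x4 image, scanned row by row
    print(lzw_encode(image))   # [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]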
Bit-plane Coding
• Attack inter-pixel redundancy
• First decompose the original image into bit planes
• Binary image compression approach
– run-length coding (RLC)
Bit-plane Decomposition
• Bit-plane slicing
• Problem: small changes in gray level can have a significant
impact on the complexity of the bit plane
– 127 vs. 128 → 0111 1111 vs. 1000 0000
• Solution: Gray code the pixel values first
  $g_i = a_i \oplus a_{i+1}, \quad 0 \le i \le m-2$
  $g_{m-1} = a_{m-1}$
• Example:
  – 127 → 0111 1111 → 0100 0000
  – 128 → 1000 0000 → 1100 0000
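The mapping a → a XOR (a >> 1) computes exactly these equations bitwise; a short check on the slide's 127/128 example:

    def to_gray(a):
        """Binary-to-Gray: g_i = a_i XOR a_{i+1}, g_{m-1} = a_{m-1}."""
        return a ^ (a >> 1)

    for v in (127, 128):
        print(v, format(to_gray(v), "08b"))
    # 127 -> 01000000, 128 -> 11000000: the neighbors now differ in one bit plane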
Binary Image Compression - RLC
• Developed in 1950s
• Standard compression approach in FAX coding
• Approach
– Code each contiguous group of 0’s or 1’s encountered in
a left to right scan of a row by its length
– Establish a convention for determining the value of the run
– Code black and white run lengths separately using
variable-length coding
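A minimal sketch of the run-length step itself (coding the resulting lengths with a variable-length code, as in FAX, would follow as a separate step):

    def run_lengths(row):
        """Code a binary row as (value, length) pairs, scanning left to right."""
        runs = []
        for bit in row:
            if runs and runs[-1][0] == bit:
                runs[-1][1] += 1           # extend the current run
            else:
                runs.append([bit, 1])      # start a new run
        return [tuple(r) for r in runs]

    print(run_lengths([0, 0, 0, 1, 1, 0, 1, 1, 1, 1]))
    # [(0, 3), (1, 2), (0, 1), (1, 4)]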
Lossless Predictive Coding
• Do not need to decompose image into bit planes
• Eliminate interpixel redundancy
• Code only the new information in each pixel
• The new information is the difference between the actual and
predicted value of that pixel

$e(x, y) = f(x, y) - \hat{f}(x, y)$

$\hat{f}(x, y) = \mathrm{round}\!\left[ \sum_{i=1}^{m} \alpha_i\, f(x, y - i) \right]$
Previous Pixel Coding (Differential Coding)

$\hat{f}(x, y) = \mathrm{round}\left[ \alpha f(x, y - 1) \right]$
• The first-order linear (previous pixel) predictor implements
differential coding

• Example (α = 1): input row 126 127 128 → coded output 126 1 1
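A minimal sketch of previous-pixel coding with α = 1, reproducing the row above and showing that decoding recovers the input exactly:

    def encode(row):
        """Store the first pixel, then each prediction error e = f - f_hat."""
        return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

    def decode(codes):
        out = [codes[0]]
        for e in codes[1:]:
            out.append(out[-1] + e)    # prediction (previous pixel) + stored error
        return out

    row = [126, 127, 128]
    print(encode(row))              # [126, 1, 1]
    print(decode(encode(row)))      # [126, 127, 128]: lossless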
