MOD 7 DIP
• Relative data redundancy

      RD = 1 - 1/CR,   where CR = n1/n2

– CR: compression ratio
– n1, n2: number of information carrying units in two data sets that
represent the same information
• Data compression
– Reducing the amount of data required to represent a given quantity of
information
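The two quantities defined above are straightforward to compute; a minimal Python sketch (the function names are my own, not from the slides):

```python
def compression_ratio(n1, n2):
    """CR = n1 / n2: size of the original representation over the compressed one."""
    return n1 / n2

def relative_redundancy(cr):
    """RD = 1 - 1/CR: fraction of the original data that is redundant."""
    return 1.0 - 1.0 / cr

# E.g. a 262144-byte image compressed to 26214 bytes:
cr = compression_ratio(262144, 26214)   # ~10, i.e. a 10:1 compression ratio
rd = relative_redundancy(cr)            # ~0.9, i.e. 90% of the data is redundant
```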
Data Redundancy
• Coding redundancy
• Interpixel redundancy
• Psychovisual redundancy
Coding Redundancy
• In general, coding redundancy is present when
the codes assigned to a set of events (such as
gray-level values) have not been selected to
take full advantage of the probabilities of the
events.
• In most images, certain gray levels are more
probable than others
Fidelity Criteria
• Root-mean-square error between an input image f(x,y) and its
compressed-then-decompressed approximation f̂(x,y):

      e_rms = [ (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f̂(x,y) - f(x,y)]^2 ]^{1/2}

– Mean-square signal-to-noise ratio (SNR_ms):

      SNR_ms = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f̂(x,y)^2 / Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f̂(x,y) - f(x,y)]^2

– Root-mean-square SNR (SNR_rms): the square root of SNR_ms
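These fidelity criteria translate directly into code; a small sketch in plain Python, with images represented as 2-D lists of gray levels (the helper names are mine):

```python
def rms_error(f, fhat):
    """Root-mean-square error between original image f and approximation fhat."""
    mn = sum(len(row) for row in f)                     # total number of pixels, M*N
    sq = sum((fh - fo) ** 2
             for row_f, row_fh in zip(f, fhat)
             for fo, fh in zip(row_f, row_fh))
    return (sq / mn) ** 0.5

def snr_ms(f, fhat):
    """Mean-square signal-to-noise ratio of the approximation fhat."""
    signal = sum(fh ** 2 for row in fhat for fh in row)
    noise = sum((fh - fo) ** 2
                for row_f, row_fh in zip(f, fhat)
                for fo, fh in zip(row_f, row_fh))
    return signal / noise

f = [[10, 20], [30, 40]]
fhat = [[11, 19], [31, 39]]      # every pixel off by +/-1
# rms_error(f, fhat) -> 1.0;  snr_ms(f, fhat) -> 2964/4 = 741.0
```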
Measure Information
• A random event E that occurs with probability P(E) is said to contain

      I(E) = -log P(E)

units of information (bits when the logarithm is base 2)
• Entropy: the average information per source symbol, for a source with
symbols a_j, j = 1, ..., J:

      H = -Σ_{j=1}^{J} P(a_j) log P(a_j)

• How should an increase in entropy be interpreted?
• What happens if the events are equally probable?
Example
• Estimate the information content (entropy) of
the following image (8 bit image):
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
• First-order estimate of the entropy (bits/pixel):

      Gray level   Count   Probability
      21           12      3/8
      95           4       1/8
      169          4       1/8
      243          12      3/8

      H1 = -3/8·log2(3/8) - 1/8·log2(1/8) - 1/8·log2(1/8) - 3/8·log2(3/8) = 1.81

• Second-order estimate of the entropy (bits every two pixels), using pairs
of horizontally adjacent pixels (7 pairs per row, 28 pairs in total):

      Gray-level pair   Count   Probability
      (21,21)           8       2/7
      (21,95)           4       1/7
      (95,169)          4       1/7
      (169,243)         4       1/7
      (243,243)         8       2/7

      H2 = -2/7·log2(2/7)·2 - 1/7·log2(1/7)·3 = 2.2

• What do these two numbers tell you?
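Both entropy estimates can be reproduced with a few lines of Python (a sketch; the pairing follows the horizontally-adjacent-pixels convention of the example):

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (base 2) of the empirical distribution of the symbols."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

row = [21, 21, 21, 95, 169, 243, 243, 243]
image = [row] * 4                      # the 4x8 example image

pixels = [p for r in image for p in r]
h1 = entropy(pixels)                   # first-order estimate, ~1.81 bits/pixel

pairs = [(r[i], r[i + 1]) for r in image for i in range(len(r) - 1)]
h2 = entropy(pairs)                    # second-order estimate, ~2.2 bits per pair
```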
Compression Approaches
• Error-free compression or lossless
compression
– Variable-length coding
– Bit-plane coding
– Lossless predictive coding
• Lossy compression
– Lossy predictive coding
– Transform coding
Variable-length Coding
• Only reduce code redundancy
• Assign the shortest possible code words to the
most probable gray levels
• Huffman coding
– Can obtain the optimal coding scheme
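Huffman's procedure repeatedly merges the two least probable symbols into one node and assigns shorter codes to the more probable ones. A compact sketch using Python's heapq (this is the standard algorithm; the exact codewords depend on tie-breaking, but the code lengths are optimal):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code from {symbol: frequency}; returns {symbol: bitstring}."""
    # Heap entries are (weight, tiebreak, tree); a tree is a symbol or a (left, right) pair.
    heap = [(w, i, s) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)     # two least probable subtrees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix or "0"      # single-symbol source still needs 1 bit
    walk(heap[0][2], "")
    return code

# Gray-level counts from the earlier example:
code = huffman_code({21: 12, 95: 4, 169: 4, 243: 12})
```

For these counts the average code length comes out to 1.875 bits/pixel, close to the 1.81 bits/pixel first-order entropy estimate of the example.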
Revisit Example
• The m-bit Gray code g_{m-1}...g_0 of a binary number a_{m-1}...a_0
(applied before bit-plane coding so that small gray-level changes affect
fewer bit planes):

      g_i = a_i ⊕ a_{i+1},   0 ≤ i ≤ m-2
      g_{m-1} = a_{m-1}

• Example (binary → Gray):
– 127: 0111 1111 → 0100 0000
– 128: 1000 0000 → 1100 0000
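For integers the two equations collapse into a single XOR. Note that 127 and 128, which differ in every bit of their natural binary codes, differ in only one bit of their Gray codes, which is what makes Gray-coded bit planes easier to compress:

```python
def binary_to_gray(n):
    """Gray code of integer n: g = n XOR (n >> 1) implements g_i = a_i XOR a_{i+1}."""
    return n ^ (n >> 1)

assert binary_to_gray(127) == 0b01000000   # 0111 1111 -> 0100 0000
assert binary_to_gray(128) == 0b11000000   # 1000 0000 -> 1100 0000
```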
Binary Image Compression - RLC
• Developed in 1950s
• Standard compression approach in FAX coding
• Approach
– Code each contiguous group of 0’s or 1’s encountered in
a left to right scan of a row by its length
– Establish a convention for determining the value of the run
– Code black and white run lengths separately using
variable-length coding
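The scanning step of the approach above can be sketched as follows (a minimal version; a real FAX coder would then code the run lengths with variable-length codes):

```python
def rle_row(row):
    """Run-length code one row of a binary image.

    Convention for the value of the run: the first run is always a run
    of 0s, so a row starting with 1 gets a leading zero-length run.
    """
    runs = []
    current, length = 0, 0
    for bit in row:
        if bit == current:
            length += 1                 # extend the current run
        else:
            runs.append(length)         # close it and start a run of the other value
            current, length = bit, 1
    runs.append(length)
    return runs

# [0,0,0,1,1,1,1,0,0,1] -> [3, 4, 2, 1]
# [1,1,0]               -> [0, 2, 1]   (leading zero-length run of 0s)
```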
Lossless Predictive Coding
• Do not need to decompose image into bit planes
• Eliminate interpixel redundancy
• Code only the new information in each pixel
• The new information is the difference between the actual and
predicted value of that pixel
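With the simplest predictor, f̂(n) = f(n-1), the scheme reduces to transmitting successive differences; a sketch, using the first row of the earlier example image:

```python
def predictive_encode(row):
    """First-order lossless predictive coding: send f(0), then e(n) = f(n) - f(n-1)."""
    e = [row[0]]                       # first pixel transmitted as-is
    for prev, cur in zip(row, row[1:]):
        e.append(cur - prev)           # only the new information in each pixel
    return e

def predictive_decode(e):
    """Invert the encoder: f(n) = e(n) + f(n-1), i.e. a running sum."""
    out, total = [], 0
    for d in e:
        total += d
        out.append(total)
    return out

row = [21, 21, 21, 95, 169, 243, 243, 243]
e = predictive_encode(row)             # [21, 0, 0, 74, 74, 74, 0, 0]
assert predictive_decode(e) == row     # exact reconstruction: lossless
```

The residual sequence is dominated by zeros and a few repeated values, so its entropy is lower than that of the raw pixels, which is where the compression comes from.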