DIP: Image Compression
Chapter 8: Image Compression
The size of a typical still image (1200×1600)
Data, Information, and Redundancy
Data is used to represent information.
Redundancy in the data representation of information provides no relevant information or repeats information that is already stated.
Let $n_1$ and $n_2$ be the number of information-carrying units in two data sets that represent the same information. The relative data redundancy $R$ of the set with $n_1$ units is defined as
$$R = 1 - \frac{1}{C}, \qquad C = \frac{n_1}{n_2}$$
where $C$ is the compression ratio.
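As a quick illustration (not from the slides, with made-up sizes), a minimal Python sketch computing the compression ratio and relative redundancy:

```python
def compression_ratio(n1, n2):
    """Compression ratio C = n1 / n2 (original size over compressed size)."""
    return n1 / n2

def relative_redundancy(n1, n2):
    """Relative data redundancy R = 1 - 1/C of the n1-unit representation."""
    return 1.0 - 1.0 / compression_ratio(n1, n2)

# Hypothetical example: a 5.76 MB raw image compressed to 0.72 MB.
n1, n2 = 5_760_000, 720_000
print(compression_ratio(n1, n2))    # C = 8.0
print(relative_redundancy(n1, n2))  # R = 0.875 -> 87.5% of the data is redundant
```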
Coding Redundancy
Spatial Redundancy
Irrelevant Information
Coding Redundancy
Assume the gray levels are values of a discrete random variable $r_k$ in the interval $[0, 1]$, where each $r_k$ occurs with probability $p(r_k)$.
If $l(r_k)$ is the number of bits used to represent each value of $r_k$, then the average code length is
$$L_{avg} = \sum_{k=0}^{L-1} l(r_k)\, p(r_k)$$
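A small sketch (illustrative probabilities and code lengths, not taken from the slides) computing $L_{avg}$ for a fixed-length and a variable-length code and the resulting coding redundancy:

```python
import numpy as np

# Hypothetical 8-level source: probabilities p(r_k) and two code-length
# assignments -- a 3-bit fixed-length code vs. a variable-length code.
p = np.array([0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02])
l_fixed = np.full(8, 3)                       # natural 3-bit binary code
l_var   = np.array([2, 2, 2, 3, 4, 5, 6, 6])  # shorter codes for likely levels

L_avg_fixed = np.sum(l_fixed * p)   # 3.0 bits/pixel
L_avg_var   = np.sum(l_var * p)     # 2.7 bits/pixel
C = L_avg_fixed / L_avg_var         # compression ratio of the variable-length code
R = 1 - 1 / C                       # relative coding redundancy removed
print(L_avg_var, C, R)
```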
Spatial/Temporal Redundancy
Internal correlation between pixels results from:
autocorrelation among neighboring pixels
structural relationships
geometric relationships
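A minimal sketch (my own, hypothetical test image) of how interpixel correlation can be measured with a normalized autocorrelation along image rows:

```python
import numpy as np

def row_autocorrelation(img, delta_n):
    """Normalized autocorrelation gamma(delta_n), averaged over image rows:
    gamma = A(delta_n) / A(0), with A the mean product of pixels delta_n apart."""
    img = img.astype(float)
    M, N = img.shape
    gammas = np.zeros(M)
    for y in range(M):
        row = img[y]
        num = (row[:N - delta_n] * row[delta_n:]).mean()   # A(delta_n)
        den = (row * row).mean()                           # A(0)
        gammas[y] = num / den
    return gammas.mean()

# Hypothetical smooth test image: neighboring pixels are highly correlated.
x = np.linspace(0, 2 * np.pi, 256)
img = 127.5 * (1 + np.sin(x))[None, :].repeat(64, axis=0)
print(row_autocorrelation(img, 1))    # close to 1 -> strong spatial redundancy
print(row_autocorrelation(img, 64))   # farther apart -> weaker correlation
```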
Measuring Information
A random event $E$ that occurs with probability $P(E)$ is said to contain $I(E)$ units of information, where
$$I(E) = \log\frac{1}{P(E)} = -\log P(E)$$
An event with $P(E) = 1$ contains no information; with a base-2 logarithm, an event with $P(E) = 1/2$ carries one bit of information.
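For concreteness, a small sketch (not in the slides) evaluating the self-information of a few hypothetical event probabilities in bits:

```python
import math

def self_information(p, base=2):
    """I(E) = -log_base P(E); base 2 gives information in bits."""
    return -math.log(p, base)

for p in (1.0, 0.5, 0.25, 1 / 256):
    print(p, self_information(p))   # 0, 1, 2, and 8 bits respectively
```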
Measuring Information
For a source of events $a_0, a_1, a_2, \ldots, a_k$ with associated probabilities $P(a_0), P(a_1), P(a_2), \ldots, P(a_k)$, the average information per source symbol (the entropy) is
$$H = -\sum_{j=0}^{k} P(a_j)\log P(a_j)$$
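A minimal sketch (my own, not from the slides) estimating this first-order entropy for an 8-bit image from its normalized gray-level histogram:

```python
import numpy as np

def image_entropy(img):
    """First-order entropy estimate in bits/pixel from the gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins: 0 * log(0) -> 0
    return -np.sum(p * np.log2(p))

# Hypothetical test: uniform noise approaches 8 bits/pixel,
# while a constant image has zero entropy.
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
flat = np.zeros((64, 64), dtype=np.uint8)
print(image_entropy(noisy), image_entropy(flat))
```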
Fidelity Criteria
Objective Fidelity Criteria
The information loss can be expressed as a function of the original and decoded images.
For an image $I(x,y)$ and its decoded approximation $I'(x,y)$, the error at any $(x,y)$ is defined as
$$e(x,y) = I'(x,y) - I(x,y)$$
and, over the entire $M \times N$ image, the total error is
$$\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\bigl[I'(x,y) - I(x,y)\bigr]$$
Fidelity Criteria
The root-mean-square error $e_{rms}$ is
$$e_{rms} = \left[\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\bigl[I'(x,y) - I(x,y)\bigr]^2\right]^{1/2}$$
and the mean-square signal-to-noise ratio is
$$SNR_{ms} = \frac{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} I'(x,y)^2}{\displaystyle\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\bigl[I'(x,y) - I(x,y)\bigr]^2}$$
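A short sketch (my own, with a hypothetical quantized image as the "decoded" approximation) computing both objective fidelity measures:

```python
import numpy as np

def fidelity_metrics(original, decoded):
    """Root-mean-square error and mean-square SNR between an image and
    its decoded approximation (arrays of the same shape)."""
    original = original.astype(float)
    decoded = decoded.astype(float)
    err = decoded - original
    e_rms = np.sqrt(np.mean(err ** 2))
    snr_ms = np.sum(decoded ** 2) / np.sum(err ** 2)
    return e_rms, snr_ms

# Hypothetical example: an 8-bit image coarsely quantized to 16 levels.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128))
approx = (img // 16) * 16 + 8      # 4-bit quantization as the lossy "decoder"
print(fidelity_metrics(img, approx))
```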
Three approximations of the same image (figure).
Huffman coding
Assignment procedure
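As an illustration (my own compact sketch, not the book's tabular procedure), a Huffman code built with a heap over hypothetical symbol probabilities:

```python
import heapq

def huffman_code(probabilities):
    """Build a Huffman code table {symbol: bitstring} from {symbol: probability}."""
    # Each heap entry: (group probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)    # merge the two least probable groups
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

# Hypothetical source
probs = {"a1": 0.4, "a2": 0.3, "a3": 0.1, "a4": 0.1, "a5": 0.06, "a6": 0.04}
codes = huffman_code(probs)
print(codes)
print(sum(len(codes[s]) * p for s, p in probs.items()))  # average code length
```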
Compression Algorithms
Symbol-based compression
This approach determines a set of symbols that make up the image and takes advantage of their repeated appearance. Each symbol is converted into a token, a token table is generated, and the compressed image is represented as a list of tokens.
This approach works well for document images.
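A toy sketch (my own construction, not the book's algorithm) of the token idea: tile a binary document-like image into fixed-size blocks, store each unique block once in a token table, and represent the image as a grid of token indices:

```python
import numpy as np

def symbol_encode(img, block=8):
    """Tile a binary image into block x block symbols, build a token table of
    unique symbols, and return (token_table, token_map)."""
    H, W = img.shape
    table = {}
    tokens = np.empty((H // block, W // block), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            sym = img[by*block:(by+1)*block, bx*block:(bx+1)*block].tobytes()
            if sym not in table:
                table[sym] = len(table)
            tokens[by, bx] = table[sym]
    return table, tokens

# Hypothetical document-like image: the same 8x8 "glyph" repeated many times.
glyph = (np.random.default_rng(2).random((8, 8)) > 0.5).astype(np.uint8)
img = np.tile(glyph, (16, 16))          # 128 x 128 image built from one symbol
table, tokens = symbol_encode(img)
print(len(table), tokens.shape)         # one table entry covers all 256 blocks
```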
DFT and DCT
The periodicity implicit in the 1-D DFT and DCT: the DCT's implied (even-symmetric) periodic extension provides better boundary continuity than that of the DFT.
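A small numerical sketch (assuming SciPy's scipy.fft is available; my own example, not the book's figure) comparing reconstruction error when only a few low-frequency DFT vs. DCT coefficients of a ramp signal are kept; the DCT's symmetric extension avoids the boundary jump that hurts the DFT:

```python
import numpy as np
from scipy.fft import dct, idct, fft, ifft

n = 64
x = np.linspace(0, 1, n)     # ramp: very different values at the two ends
keep = 8                     # number of low-frequency coefficients retained

# Truncated DFT reconstruction (keep the lowest 'keep' frequency bins).
X = fft(x)
Xt = np.zeros_like(X)
Xt[:keep] = X[:keep]
Xt[-(keep - 1):] = X[-(keep - 1):]     # keep the conjugate partners as well
x_dft = np.real(ifft(Xt))

# Truncated DCT reconstruction.
C = dct(x, norm="ortho")
Ct = np.zeros_like(C)
Ct[:keep] = C[:keep]
x_dct = idct(Ct, norm="ortho")

print(np.sqrt(np.mean((x - x_dft) ** 2)))   # larger RMS error (ringing at the ends)
print(np.sqrt(np.mean((x - x_dct) ** 2)))   # smaller RMS error
```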
Block Size vs. Reconstruction Error
The DCT gives the least error at almost any sub-image size.
The error reaches its minimum for sub-images of sizes between 16 and 32.
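A rough experiment scaffold (my own, with a hypothetical smooth test image) measuring RMS reconstruction error versus DCT block size when only 25% of the coefficients per block are retained:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_error(img, block, keep_frac=0.25):
    """RMS error after keeping only the lowest-frequency keep_frac of the DCT
    coefficients in each block x block sub-image."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    k = max(1, int(block * np.sqrt(keep_frac)))   # keep a k x k low-frequency corner
    for y in range(0, H, block):
        for x in range(0, W, block):
            c = dctn(img[y:y+block, x:x+block].astype(float), norm="ortho")
            mask = np.zeros_like(c)
            mask[:k, :k] = 1
            out[y:y+block, x:x+block] = idctn(c * mask, norm="ortho")
    return np.sqrt(np.mean((img - out) ** 2))

# Hypothetical smooth test image (low-frequency sinusoids).
y, x = np.mgrid[0:256, 0:256]
img = 128 + 60 * np.sin(x / 17.0) * np.cos(y / 23.0)
for b in (4, 8, 16, 32, 64):
    print(b, block_dct_error(img, b))
```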
In lossless predictive coding, each sample is predicted from the previous $m$ samples:
$$\hat{f}(n) = \operatorname{round}\!\left[\sum_{i=1}^{m} \alpha_i f(n-i)\right]$$
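A minimal sketch (assuming a simple first-order predictor, not the book's exact configuration) of a lossless predictive encode/decode round trip on a 1-D sequence:

```python
import numpy as np

def predictive_encode(f, alpha=(1.0,)):
    """Prediction error e(n) = f(n) - round(sum_i alpha_i * f(n-i)); the first
    len(alpha) samples are passed through unchanged."""
    m = len(alpha)
    e = f.astype(int)
    for n in range(m, len(f)):
        pred = round(sum(alpha[i] * f[n - 1 - i] for i in range(m)))
        e[n] = int(f[n]) - pred
    return e

def predictive_decode(e, alpha=(1.0,)):
    """Invert predictive_encode exactly (lossless reconstruction)."""
    m = len(alpha)
    f = e.astype(int)
    for n in range(m, len(e)):
        pred = round(sum(alpha[i] * f[n - 1 - i] for i in range(m)))
        f[n] = e[n] + pred
    return f

f = np.array([100, 102, 103, 103, 105, 110, 111, 111])
e = predictive_encode(f)
print(e)                                        # small residuals cluster near 0
print(np.array_equal(predictive_decode(e), f))  # True: lossless round trip
```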
In delta modulation, the prediction error is quantized to one of two levels:
$$\dot{e}(n) = \begin{cases} +\zeta & \text{if } e(n) > 0 \\ -\zeta & \text{otherwise} \end{cases}$$
Prediction Error
The following images show the prediction error of the predictors
$$\hat{f}(x,y) = 0.97\, f(x,y-1)$$
$$\hat{f}(x,y) = 0.5\, f(x,y-1) + 0.5\, f(x-1,y)$$
$$\hat{f}(x,y) = 0.75\, f(x,y-1) + 0.75\, f(x-1,y) - 0.5\, f(x-1,y-1)$$
$$\hat{f}(x,y) = \begin{cases} 0.97\, f(x,y-1) & \text{if } \Delta h \le \Delta v \\ 0.97\, f(x-1,y) & \text{otherwise} \end{cases}$$
where $\Delta h = |f(x-1,y) - f(x-1,y-1)|$ and $\Delta v = |f(x,y-1) - f(x-1,y-1)|$.
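A short sketch (my own, hypothetical image) of the last, adaptive predictor and its prediction-error image:

```python
import numpy as np

def adaptive_prediction_error(f):
    """Prediction error for the adaptive predictor: predict from the left or
    upper neighbor, whichever local direction varies less."""
    f = f.astype(float)
    e = np.zeros_like(f)
    for x in range(1, f.shape[0]):
        for y in range(1, f.shape[1]):
            dh = abs(f[x - 1, y] - f[x - 1, y - 1])
            dv = abs(f[x, y - 1] - f[x - 1, y - 1])
            pred = 0.97 * (f[x, y - 1] if dh <= dv else f[x - 1, y])
            e[x, y] = f[x, y] - round(pred)
    return e   # first row/column left unpredicted (zero error) in this sketch

# Hypothetical smooth gradient image: most of the error energy is near zero.
img = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)
e = adaptive_prediction_error(img)
print(np.abs(e).mean(), np.abs(e).max())
```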
Optimal Predictors
What are the coefficients of a linear predictor that minimize the mean-square prediction error?
$$E\{e^2(n)\} = E\left\{\left[f(n) - \sum_{i=1}^{m} \alpha_i f(n-i)\right]^2\right\}$$
The optimal coefficients are
$$\boldsymbol{\alpha} = R^{-1}\mathbf{r}$$
where
$$R = \begin{bmatrix}
E\{f(n-1)f(n-1)\} & E\{f(n-1)f(n-2)\} & \cdots & E\{f(n-1)f(n-m)\} \\
E\{f(n-2)f(n-1)\} & E\{f(n-2)f(n-2)\} & \cdots & E\{f(n-2)f(n-m)\} \\
\vdots & \vdots & & \vdots \\
E\{f(n-m)f(n-1)\} & E\{f(n-m)f(n-2)\} & \cdots & E\{f(n-m)f(n-m)\}
\end{bmatrix}$$
$$\mathbf{r} = \begin{bmatrix} E\{f(n)f(n-1)\} \\ E\{f(n)f(n-2)\} \\ \vdots \\ E\{f(n)f(n-m)\} \end{bmatrix},
\qquad
\boldsymbol{\alpha} = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{bmatrix}$$
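A sketch (my own, using sample autocorrelations in place of the expectations) that solves $R\boldsymbol{\alpha} = \mathbf{r}$ for an order-$m$ predictor on a 1-D signal:

```python
import numpy as np

def optimal_predictor(f, m=2):
    """Estimate alpha = R^{-1} r from sample autocorrelations of the signal f,
    assuming stationarity so that E{f(n-i) f(n-j)} depends only on |i - j|."""
    f = f.astype(float)
    N = len(f)
    acf = np.array([np.mean(f[k:] * f[:N - k]) for k in range(m + 1)])
    R = np.array([[acf[abs(i - j)] for j in range(m)] for i in range(m)])
    r = acf[1:m + 1]
    return np.linalg.solve(R, r)

# Hypothetical correlated test signal (first-order autoregressive, rho = 0.95).
rng = np.random.default_rng(3)
f = np.zeros(5000)
for n in range(1, len(f)):
    f[n] = 0.95 * f[n - 1] + rng.normal()
print(optimal_predictor(f, m=2))   # first coefficient close to 0.95
```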