
Digital Image Processing, 3rd ed.

Gonzalez & Woods


www.ImageProcessingPlace.com

Chapter 8
Image Compression

The size of a typical still image (1200 × 1600):

1200 × 1600 pixels × 3 bytes/pixel = 5,760,000 bytes ≈ 5,760 KB ≈ 5.76 MB

The size of a two-hour standard-television (720 × 480) movie:

30 frames/sec × (720 × 480) pixels/frame × 3 bytes/pixel = 31,104,000 bytes/sec

31,104,000 bytes/sec × (60 × 60) sec/hour × 2 hours ≈ 2.24 × 10^11 bytes ≈ 224 GB
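
These figures can be reproduced in a few lines of Python; a minimal sketch using the sizes and frame rate stated above:

# Raw storage of an uncompressed RGB still image and a two-hour video stream.
width, height = 1200, 1600
bytes_per_pixel = 3                         # 8-bit R, G and B samples

still_bytes = width * height * bytes_per_pixel
print(still_bytes)                          # 5,760,000 bytes, about 5.76 MB

fps = 30                                    # standard-television frame rate
video_bytes_per_sec = fps * 720 * 480 * bytes_per_pixel
video_total = video_bytes_per_sec * 60 * 60 * 2
print(video_bytes_per_sec, video_total)     # 31,104,000 bytes/sec; about 2.24e11 bytes (224 GB)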


Data, Information, and Redundancy

• Data are the means by which information is represented and conveyed.
• Redundancy: parts of a data representation that provide no relevant information, or that repeat information already stated.
• Let n1 and n2 denote the number of information-carrying units in two data sets that represent the same information. The relative data redundancy R of the first set (n1) is defined as

R = 1 − 1/C, where C = n1/n2 is the compression ratio
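
A quick numerical illustration (the byte counts are hypothetical, not from the book):

# Compression ratio C and relative data redundancy R.
n1 = 5_760_000        # size of representation 1 in bytes (hypothetical raw image)
n2 = 1_440_000        # size of representation 2 in bytes (hypothetical compressed image)

C = n1 / n2           # compression ratio
R = 1 - 1 / C         # relative data redundancy of representation 1
print(C, R)           # 4.0, 0.75 -> 75% of the first representation is redundant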

• Redundancy in Digital Images

– Coding redundancy
usually arises from the uniform (fixed-length) representation of each pixel.
– Spatial/temporal redundancy
arises because adjacent pixels tend to be similar in practice.
– Irrelevant information
images contain information that is ignored by the human visual system.

Examples: coding redundancy, spatial redundancy, and irrelevant information.


Coding Redundancy
• Assume a discrete random variable rk in the interval [0, 1] that represents the gray levels, where each rk occurs with probability pr(rk).
• If l(rk) is the number of bits used to represent the value rk, the average number of bits assigned to the gray-level values is

Lavg = Σ_{k=0}^{L−1} l(rk) pr(rk)

• The length of each code word should be inversely proportional to its probability of occurrence.
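
To make the definition concrete, a small sketch with made-up probabilities and code lengths (not the book's example):

# Average code length L_avg = sum_k l(r_k) * p_r(r_k)
probs   = [0.4, 0.3, 0.2, 0.1]    # p_r(r_k) for a 4-level image (illustrative)
lengths = [1, 2, 3, 3]            # l(r_k): bits assigned to each level by a variable-length code

L_avg = sum(l * p for l, p in zip(lengths, probs))
print(L_avg)                      # 1.9 bits/pixel, versus 2 bits/pixel for a fixed-length code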


Examples of variable-length encoding


Spatial/Temporal Redundancy
• Correlation between pixels results from
– structural relationships
– geometric relationships
between the objects in the image.
• The value of a pixel can be reasonably predicted from the values of its neighbors.
• To reduce this inter-pixel redundancy, the 2-D pixel array is transformed (mapped) into a more efficient format (frequency domain, etc.), as in the sketch below.
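
A minimal sketch of one such mapping: replacing each pixel in a row by its difference from the previous pixel, which concentrates values near zero when neighbors are similar (assumes NumPy):

import numpy as np

row = np.array([100, 101, 101, 102, 104, 104, 103, 103], dtype=np.int16)

diff = np.empty_like(row)
diff[0] = row[0]                 # keep the first pixel as-is
diff[1:] = row[1:] - row[:-1]    # small values wherever neighbors are similar
print(diff)                      # [100 1 0 1 2 0 -1 0]

restored = np.cumsum(diff)       # exact reconstruction, so nothing is lost
assert np.array_equal(restored, row)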

Irrelevant Information and Psycho-Visual Redundancy

• The perceived brightness of a region depends on factors other than the light reflected from it.
• The intensity response of the eye is limited and nonlinear.
• Certain information has less relative importance than other information in normal visual processing.
• In general, an observer searches for distinguishing features such as edges and textural regions.

Measuring Information
• A random event E that occurs with probability P(E) is said to contain I(E) units of information, where I(E) is defined as

I(E) = log(1/P(E)) = −log P(E)

• An event with P(E) = 1 contains no information.
• An event with P(E) = 1/2 carries one bit of information (using base-2 logarithms).
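
A one-line check of the two cases above (base-2 logarithm, so the unit is bits):

import math

def self_information(p):
    # I(E) = -log2 P(E), in bits
    return -math.log2(p)

print(self_information(1.0))   # 0.0 bits: a certain event carries no information
print(self_information(0.5))   # 1.0 bit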


Measuring Information
• For a source of events a0, a1, a2, …, ak with associated probabilities P(a0), P(a1), P(a2), …, P(ak), the average information per source output (the entropy) is

H = −Σ_{j=0}^{k} P(aj) log P(aj)

• For an image, the normalized histogram is used to estimate the source probabilities, which leads to the entropy estimate

H̃ = −Σ_{i=0}^{L−1} pr(ri) log pr(ri)
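
A sketch of this estimate for an 8-bit grayscale image stored as a NumPy array (toy data; any uint8 image works):

import numpy as np

def entropy_bits(image):
    # H~ = -sum_k p_r(r_k) * log2 p_r(r_k), from the normalized histogram
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                          # treat 0 * log(0) as 0
    return -np.sum(p * np.log2(p))

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(entropy_bits(img))                  # close to 8 bits/pixel for uniform noise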

Fidelity Criteria
• Objective fidelity criteria
– The information loss is expressed as a function of the original (encoded) and decoded images.
– For an image I(x, y) and its decoded approximation I′(x, y), the error at any (x, y) is defined as

e(x, y) = I′(x, y) − I(x, y)

– For the entire M × N image, the total error is

Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [ I′(x, y) − I(x, y) ]

Fidelity Criteria
• The root-mean-square error erms is

erms = [ (1/(MN)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} ( I′(x, y) − I(x, y) )² ]^{1/2}

• The mean-square signal-to-noise ratio SNRms is

SNRms = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} I′(x, y)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} ( I′(x, y) − I(x, y) )²

Three approximations of the same image


Huffman coding is an entropy-encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the variable-length code table has been derived in a particular way, based on the estimated probability of occurrence of each possible value of the source symbol.
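
A compact construction of such a code table using Python's heapq, offered as an illustrative sketch (the symbol probabilities are made up):

import heapq

def huffman_code(probs):
    # Build {symbol: bitstring} from {symbol: probability}.
    # Heap entries: (probability, tie-breaker, {symbol: code-so-far})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)        # the two least probable groups
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_code({"a": 0.4, "b": 0.3, "c": 0.1, "d": 0.1, "e": 0.1}))
# more probable symbols receive shorter code words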


Huffman coding: code assignment procedure


Arithmetic coding is a form of variable-length entropy encoding. When a string is converted to arithmetic encoding, frequently used characters are stored with fewer bits. Arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.
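
An idealized floating-point sketch of the interval-narrowing idea (real coders use integer arithmetic and incremental output; the three-symbol model here is invented):

probs = {"a": 0.6, "b": 0.3, "!": 0.1}           # source model (illustrative)

# Cumulative ranges: a -> [0.0, 0.6), b -> [0.6, 0.9), ! -> [0.9, 1.0)
ranges, start = {}, 0.0
for sym, p in probs.items():
    ranges[sym] = (start, start + p)
    start += p

def encode(message):
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_low, s_high = ranges[sym]
        low, high = low + span * s_low, low + span * s_high
    return (low + high) / 2                      # any number in [low, high) identifies the message

print(encode("aab!"))                            # a single fraction in [0, 1)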


Compression Algorithms


Symbol compression
This approach determines a set of symbols that make up the image and takes advantage of their repeated appearance. Each symbol occurrence is converted into a token, a token table is generated, and the compressed image is represented as a list of tokens. This approach works well for document images.
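
A toy sketch of the token idea, treating repeated "glyphs" as symbols (purely illustrative; real symbol-based coders match bitmaps of characters in a scanned page):

glyphs = ["the", "cat", "sat", "on", "the", "mat", "the", "cat"]

table = {}                 # symbol -> token id
tokens = []                # compressed representation: one small integer per occurrence
for g in glyphs:
    if g not in table:
        table[g] = len(table)
    tokens.append(table[g])

print(table)    # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(tokens)   # [0, 1, 2, 3, 0, 4, 0, 1] -- repeated symbols cost only a token each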


DFT and DCT

The periodicity implicit in the 1-D DFT and DCT (figure). The DCT provides better continuity across block boundaries than the DFT, which is why it produces fewer blocking artifacts.
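
For reference, a direct NumPy implementation of the 1-D DCT-II used in block transform coding (library routines such as SciPy's dct compute the same thing faster):

import numpy as np

def dct_1d(f):
    # Orthonormal 1-D DCT-II of a length-N signal.
    N = len(f)
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))   # C[u, x]
    alpha = np.full(N, np.sqrt(2.0 / N))
    alpha[0] = np.sqrt(1.0 / N)
    return alpha * (C @ f)

block = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)
print(dct_1d(block))        # energy is concentrated in the low-order coefficients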


Block Size vs. Reconstruction Error

The DCT gives the least reconstruction error at almost any sub-image size. The error reaches its minimum for sub-image sizes between 16 × 16 and 32 × 32.


Lossless Predictive Coding

The encoder receives discrete samples of a signal f(n):
1. A predictor is applied and its output is rounded to the nearest integer, f̂(n).
2. The prediction error is formed as e(n) = f(n) − f̂(n).
3. The compressed stream consists of the first sample followed by the errors, encoded using variable-length coding.

The decoder uses the predictor and the error stream to reconstruct the original signal f(n):
1. The predictor is initialized using the first sample.
2. Each received error is added to the predictor output:

f(n) = f̂(n) + e(n)
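
A minimal sketch of this loop with a first-order predictor f̂(n) = round(a · f(n−1)); the coefficient 0.97 is borrowed from the 2-D example later, and any sensible value works for the illustration:

def encode(f, a=0.97):
    errors = [int(f[0])]                       # transmit the first sample as-is
    for n in range(1, len(f)):
        f_hat = int(round(a * f[n - 1]))       # predictor output, rounded
        errors.append(int(f[n]) - f_hat)       # e(n) = f(n) - f_hat(n)
    return errors

def decode(errors, a=0.97):
    f = [errors[0]]
    for e in errors[1:]:
        f_hat = int(round(a * f[-1]))
        f.append(f_hat + e)                    # f(n) = f_hat(n) + e(n)
    return f

signal = [100, 102, 104, 103, 101, 100]
assert decode(encode(signal)) == signal        # the round trip is lossless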

Lossless Predictive Coding

Linear predictors usually have the form:

f̂(n) = round[ Σ_{i=1}^{m} ai f(n − i) ]

Original image (a view of the Earth), the prediction error image, and its histogram:
1. The error is small in uniform regions.
2. The error is large close to edges and sharp changes in pixel intensity.


Lossy Predictive Coding

The encoder receives discrete samples of a signal f(n):
1. A predictor is applied and its output is rounded to the nearest integer, f̂(n).
2. The prediction error is mapped into a limited range of values (quantized), ė(n).
3. The compressed stream consists of the first sample and the quantized errors, encoded using variable-length coding.
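
A sketch of the lossy variant with a crude uniform quantizer on the error; the encoder predicts from the reconstructed samples so that encoder and decoder stay in step (the step size is an arbitrary choice):

def lossy_encode(f, a=0.97, step=4):
    recon = [int(f[0])]                          # decoder-visible reconstruction
    q_errors = [int(f[0])]
    for n in range(1, len(f)):
        f_hat = int(round(a * recon[-1]))        # predict from the reconstruction, not the input
        e_q = int(round((f[n] - f_hat) / step)) * step   # uniform quantizer
        q_errors.append(e_q)
        recon.append(f_hat + e_q)                # mirror what the decoder will compute
    return q_errors

def lossy_decode(q_errors, a=0.97):
    f = [q_errors[0]]
    for e_q in q_errors[1:]:
        f.append(int(round(a * f[-1])) + e_q)
    return f

signal = [100, 102, 107, 103, 96, 100]
print(lossy_decode(lossy_encode(signal)))        # close to, but not exactly, the input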


Prediction Error
The following images show the prediction error for these predictors:

f̂(x, y) = 0.97 f(x, y−1)

f̂(x, y) = 0.5 f(x, y−1) + 0.5 f(x−1, y)

f̂(x, y) = 0.75 f(x, y−1) + 0.75 f(x−1, y) − 0.5 f(x−1, y−1)

f̂(x, y) = 0.97 f(x, y−1) if Δh ≤ Δv, and 0.97 f(x−1, y) otherwise,
where Δh = | f(x−1, y) − f(x−1, y−1) | and Δv = | f(x, y−1) − f(x−1, y−1) |.

© 1992–2008 R. C. Gonzalez & R. E. Woods
