Lecture 17
$$R_D = 1 - \frac{1}{C_R}, \quad \text{where } C_R = \frac{n_1}{n_2}$$
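For concreteness, here is a minimal sketch of these two quantities; the bit counts n1 and n2 below are hypothetical values chosen only for illustration.

```python
# Sketch: compression ratio C_R = n1/n2 and relative redundancy R_D = 1 - 1/C_R.

def compression_ratio(n1: float, n2: float) -> float:
    return n1 / n2

def relative_redundancy(n1: float, n2: float) -> float:
    return 1.0 - 1.0 / compression_ratio(n1, n2)

n1, n2 = 8_000_000, 1_000_000        # hypothetical original vs. compressed size, in bits
print(compression_ratio(n1, n2))     # 8.0
print(relative_redundancy(n1, n2))   # 0.875
```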
f(m,n): Original image
f̂(m,n): Reconstructed image
e(m,n) = f̂(m,n) − f(m,n): Error image
$$e_{rms} = \left[ \frac{1}{MN} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \big[\hat{f}(m,n) - f(m,n)\big]^2 \right]^{1/2}$$

$$SNR_{ms} = \frac{\displaystyle\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \big[\hat{f}(m,n)\big]^2}{\displaystyle\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \big[\hat{f}(m,n) - f(m,n)\big]^2}$$
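A short sketch of these fidelity criteria, assuming the original and reconstructed images are available as equal-sized NumPy arrays:

```python
import numpy as np

def rms_error(f: np.ndarray, f_hat: np.ndarray) -> float:
    """Root-mean-square error between the original f and the reconstruction f_hat."""
    err = f_hat.astype(float) - f.astype(float)
    return float(np.sqrt(np.mean(err ** 2)))

def snr_ms(f: np.ndarray, f_hat: np.ndarray) -> float:
    """Mean-square signal-to-noise ratio (assumes the reconstruction error is nonzero)."""
    f_hat = f_hat.astype(float)
    err = f_hat - f.astype(float)
    return float(np.sum(f_hat ** 2) / np.sum(err ** 2))

f = np.array([[10, 20], [30, 40]])
f_hat = np.array([[11, 19], [30, 42]])
print(rms_error(f, f_hat), snr_ms(f, f_hat))
```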
• The first block “Mapper” transforms the input data into a (usually
nonvisual) format, designed to reduce interpixel redundancy. This
block is reversible and may or may not reduce the amount of data.
Example: run-length encoding, image transform.
• The Quantizer reduces the accuracy of the mapper output in
accordance with some fidelity criterion. This block reduces
psychovisual redundancy and is usually not invertible.
• The Symbol Encoder creates a fixed or variable length codeword
to represent the quantizer output and maps the output in
accordance with this code. This block is reversible and reduces
coding redundancy.
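As a concrete illustration of the first two blocks, the sketch below pairs run-length encoding (the mapper example named above) with a uniform quantizer; the function names and the quantizer step size are illustrative assumptions, not part of the lecture.

```python
from itertools import groupby

# Mapper example from the text: run-length encoding. It reduces interpixel
# redundancy in runs of equal values and is fully reversible.
def rle_encode(pixels):
    return [(value, len(list(run))) for value, run in groupby(pixels)]

# Quantizer example (illustrative uniform quantizer): it reduces the accuracy
# of its input and is not invertible -- many inputs share one output level.
def quantize(value, step=16):
    return (value // step) * step

row = [255, 255, 255, 0, 0, 7, 7, 7, 7]
print(rle_encode(row))             # [(255, 3), (0, 2), (7, 4)]
print([quantize(v) for v in row])  # [240, 240, 240, 0, 0, 0, 0, 0, 0]
```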
[Figure: Source decoder — the compressed image arriving from the channel passes through the Symbol Decoder and then the Inverse Mapper to produce the reconstructed image f̂(m,n).]
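A minimal sketch of the decoder side, continuing the run-length example above: the inverse mapper expands the (value, run length) pairs back into pixels. The symbol decoder stage, which would first undo the variable-length code, is omitted here.

```python
# Inverse mapper for the run-length example: expand (value, run_length) pairs
# back into the original pixel sequence (the mapper is reversible).
def rle_decode(pairs):
    return [value for value, length in pairs for _ in range(length)]

print(rle_decode([(255, 3), (0, 2), (7, 4)]))  # [255, 255, 255, 0, 0, 7, 7, 7, 7]
```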
• Naturally, $p_i \ge 0$, and $\sum_{i=0}^{N-1} p_i = 1$.
• The base for the logarithm depends on the units for measuring
information. Usually, we use base 2, which gives the information
in units of “binary digits” or “bits.” Using a base 10 logarithm
would give the entropy in the units of decimal digits.
• The amount of information attributed to an event E is inversely
related to the probability of that event: I(E) = log(1/P(E)) = −log P(E).
• Examples:
Certain event: P(E) = 1.0. In this case I(E) = log(1/1) = 0. This
agrees with intuition, since if the event E is certain to occur (has
probability 1), knowing that it has occurred has not led to any
gain of information.
Coin toss: P(E = Heads) = 0.5. In this case
I(E) = log(1/0.5) = log(2) = 1 bit. This again agrees with
intuition.
Rare event: P(E) = 0.001. In this case
I(E) = log(1/0.001) = log(1000) ≈ 9.97 bits. This again agrees
with intuition, since knowing that a rare event has occurred
leads to a significant gain of information.
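The three examples can be checked numerically; a small sketch using the base-2 logarithm, so the result is in bits:

```python
import math

def self_information(p: float) -> float:
    """I(E) = log2(1/P(E)), in bits."""
    return math.log2(1.0 / p)

print(self_information(1.0))    # 0.0   -- certain event
print(self_information(0.5))    # 1.0   -- coin toss
print(self_information(0.001))  # ~9.97 -- rare event
```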
• The entropy H(z) of a source is defined as the average amount of
information gained by observing a single source symbol:
$$H(z) = -\sum_{i=0}^{N-1} p_i \log p_i$$
• By convention, in the above formula, we set 0 log 0 = 0.
Example:
Source entropy:
$$H(z) = -\sum_{i=0}^{6} p_i \log p_i = -\left[ \tfrac{1}{2}\log\tfrac{1}{2} + \tfrac{1}{4}\log\tfrac{1}{4} + \cdots + \tfrac{1}{64}\log\tfrac{1}{64} \right]$$
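Numerically, with the 0·log 0 = 0 convention handled explicitly; the seven dyadic probabilities below (1/2, 1/4, ..., 1/64, 1/64) are an assumption read off from the first and last terms of the sum above:

```python
import math

def entropy(probs):
    """H(z) = -sum p_i * log2(p_i), skipping zero-probability symbols (0 log 0 = 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed seven-symbol dyadic source for the example above.
probs = [1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/64]
print(entropy(probs))  # 1.96875 bits
```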
• Is this the best we can do (in terms of Lavg)? For a fixed-length
codeword scheme, yes. How about if we employ a variable-length
scheme?
• Idea: Since the symbols are not all equally likely, assign shorter
codewords to symbols with higher probability and longer
codewords to symbols with lower probability, such that the
average length is smaller.
• Consider the following scheme:
$$L_{avg} = \sum_{i=0}^{6} p_i l_i = \left[ \tfrac{1}{2} + \tfrac{1}{2} + \cdots + \tfrac{3}{32} \right] = \tfrac{63}{32} = 1.96875 \text{ bits}$$
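The same assumed probabilities, paired with codeword lengths of 1, 2, ..., 6, 6 bits, reproduce this average; the length assignment is inferred from the 1/2 + 1/2 + ... + 3/32 terms and is an assumption, not stated explicitly in the notes.

```python
# Average codeword length L_avg = sum p_i * l_i for the assumed source
# and the assumed variable-length code with lengths 1, 2, 3, 4, 5, 6, 6.
probs   = [1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/64]
lengths = [1, 2, 3, 4, 5, 6, 6]
print(sum(p * l for p, l in zip(probs, lengths)))  # 1.96875 = 63/32 bits/symbol
```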
Shannon's noiseless coding theorem states that, for any code that losslessly represents the source,
$$L_{avg} \ge H(z)$$
In other words, no codes exist that can losslessly represent the source
if Lavg < H(z).
• Note that Shannon’s theorem is quite general in that it refers to any
code, not a particular coding scheme.
• Also, it does not specify a scheme for constructing codes whose
average length approaches H(z), nor does it claim that a code
satisfying Lavg = H(z) exists.
However, codes can always be constructed for which
$$H(z) \le L_{avg} < H(z) + 1$$
and, by coding blocks of n source symbols at a time (with L'avg denoting the average codeword length per block),
$$H(z) \le \frac{L'_{avg}}{n} \equiv L_{avg} < H(z) + \frac{1}{n}$$
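One constructive way to see why such codes exist is to assign Shannon code lengths l_i = ⌈log2(1/p_i)⌉, which obey the Kraft inequality and therefore give H(z) ≤ Lavg < H(z) + 1; applying the same assignment to blocks of n symbols illustrates the 1/n bound. The binary source with P = (0.9, 0.1) below is an arbitrary assumption chosen for illustration.

```python
import math
from itertools import product

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def shannon_avg_length(probs):
    """Average length with Shannon code lengths l_i = ceil(log2(1/p_i))."""
    return sum(p * math.ceil(math.log2(1.0 / p)) for p in probs if p > 0)

p = [0.9, 0.1]                 # assumed binary source
print("H(z) =", entropy(p))    # ~0.469 bits/symbol

# Coding n symbols per block: the per-symbol average length approaches H(z).
for n in (1, 2, 4, 8):
    block_probs = [math.prod(block) for block in product(p, repeat=n)]
    print(n, shannon_avg_length(block_probs) / n)
```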
The coding efficiency of a code is then
$$\eta = \frac{H(z)}{L_{avg}} \le 1$$
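For the running example, the efficiency of the two codes can be compared; the 3-bit fixed-length code for the seven-symbol source is an assumption about the earlier fixed-length scheme.

```python
h     = 1.96875   # entropy of the assumed dyadic source, in bits/symbol
l_fix = 3.0       # assumed fixed-length code: 3 bits for 7 symbols
l_var = 1.96875   # variable-length code computed above

print(h / l_fix)  # ~0.656 -- fixed-length efficiency
print(h / l_var)  # 1.0    -- the variable-length code achieves the entropy
```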