An Introduction To Image Compression

The document provides an overview of image compression fundamentals and algorithms. It discusses how image compression works by first reducing correlation between pixels using transforms like DCT and DWT. It then describes quantization and source coding techniques like Huffman coding. Popular image compression standards like JPEG and JPEG 2000 are also introduced. The goal of image compression is to reduce the file size of images while maintaining quality during reconstruction.
An Introduction to
Image Compression

Speaker: Wei-Yi Wei


Advisor: Jian-Jiun Ding
Digital Image and Signal Processing Lab
GICE, National Taiwan University
Outline
Image Compression Fundamentals
General Image Storage System
Color Space
Reduce correlation between pixels
Karhunen-Loeve Transform
Discrete Cosine Transform
Discrete Wavelet Transform
Differential Pulse Code Modulation
Differential Coding
Quantization and Source Coding
Huffman Coding
Arithmetic Coding
Run Length Coding
Lempel Ziv 77 Algorithm
Lempel Ziv 78 Algorithm
Overview of Image Compression Algorithms
JPEG
JPEG 2000
Shape-Adaptive Image Compression
NTU, GICE, MD531, DISP Lab An Introduction to Image Compression Wei-Yi Wei 2
Outline
Image Compression Fundamentals
Reduce correlation between pixels
Quantization and Source Coding
Overview of Image Compression Algorithms

General Image Storage System

[Block diagram] Camera → transform from the R-G-B coordinate to the Y-Cb-Cr coordinate → downsample chrominance → encoder → storage (HDD).
For display: decoder → upsample chrominance → transform back to the R-G-B coordinate → monitor.
Performance is evaluated with RMSE and PSNR between the object and the displayed image.
Color Specification
Luminance
Received brightness of the light, which is proportional to the total energy in the visible band.
Chrominance
Describes the perceived color tone of a light, which depends on the wavelength composition of the light.
Chrominance is in turn characterized by two attributes:
Hue
Specifies the color tone, which depends on the peak wavelength of the light.
Saturation
Describes how pure the color is, which depends on the spread (bandwidth) of the light spectrum.

YUV Color Space
In many applications, it is desirable to describe a color in terms of its luminance and chrominance content separately, to enable more efficient processing and transmission of color signals.
One such coordinate is the YUV (YCbCr) color space:
Y is the luminance component.
Cb and Cr are the chrominance components.
The values in the YCbCr coordinate are related to the values in the RGB coordinate by

[ Y  ]   [  0.299  0.587  0.114 ] [ R ]   [   0 ]
[ Cb ] = [ -0.169 -0.334  0.500 ] [ G ] + [ 128 ]
[ Cr ]   [  0.500 -0.419 -0.081 ] [ B ]   [ 128 ]

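The matrix above can be applied directly; a minimal sketch (the function name and sample values are illustrative, and the slide's rounded coefficients are used as-is):

```python
def rgb_to_ycbcr(r, g, b):
    """Apply the slide's RGB -> YCbCr matrix (8-bit values assumed)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.334 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

# For a gray input (r = g = b), Y equals the gray level and Cr = 128 exactly;
# Cb comes out ~127.4 rather than 128 because the slide's Cb row is rounded.
y, cb, cr = rgb_to_ycbcr(200, 200, 200)
```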
Spatial Sampling of Color Components
The three chrominance downsampling formats:
(a) 4:4:4: Y, Cb, and Cr are all W×H (no chrominance downsampling).
(b) 4:2:2: Y is W×H; Cb and Cr are (W/2)×H (chrominance halved horizontally).
(c) 4:2:0: Y is W×H; Cb and Cr are (W/2)×(H/2) (chrominance halved in both directions).
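A 4:2:0 downsampling of a chrominance plane can be sketched with 2×2 block averaging (averaging is one common choice; the filter itself is not fixed by the format):

```python
import numpy as np

def downsample_420(chroma):
    """Average each 2x2 block: a W x H plane becomes (W/2) x (H/2),
    as in format (c) above."""
    return (chroma[0::2, 0::2] + chroma[0::2, 1::2] +
            chroma[1::2, 0::2] + chroma[1::2, 1::2]) / 4.0

cb = np.arange(16, dtype=float).reshape(4, 4)
small = downsample_420(cb)   # shape (2, 2)
```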
The Flow of Image Compression (1/2)
What is so-called image compression coding?
To store the image as a bit-stream that is as compact as possible, and to display the decoded image on the monitor as exactly as possible.
Flow of compression:
The image file is converted into a series of binary data, which is called the bit-stream.
The decoder receives the encoded bit-stream and decodes it to reconstruct the image.
The total data quantity of the bit-stream is less than the total data quantity of the original image.

[Diagram] Original Image → Encoder → Bitstream (0101100111...) → Decoder → Decoded Image
The Flow of Image Compression (2/2)
Measures to evaluate the performance of image compression:

Root mean square error:
RMSE = sqrt( (1 / (W·H)) · Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} [ f(x, y) − f'(x, y) ]² )

Peak signal-to-noise ratio:
PSNR = 20 log10( 255 / RMSE )

Compression ratio:
Cr = n1 / n2
where n1 is the data rate of the original image and n2 is that of the encoded bit-stream.

The flow of encoding:
Reduce the correlation between pixels
Quantization
Source coding

[Diagram] Original Image → Reduce the correlation between pixels → Quantization → Source Coding → Bitstream
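The three measures can be computed directly; a small sketch (the 255 peak assumes 8-bit images):

```python
import numpy as np

def rmse(f, f_rec):
    """Root mean square error between original and reconstructed images."""
    return np.sqrt(np.mean((f.astype(float) - f_rec.astype(float)) ** 2))

def psnr(f, f_rec):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    return 20 * np.log10(255.0 / rmse(f, f_rec))

def compression_ratio(n1_bits, n2_bits):
    return n1_bits / n2_bits

f = np.zeros((8, 8))
noisy = f + 5.0          # a constant error of 5 gray levels
# RMSE is 5, and PSNR = 20*log10(255/5), about 34.15 dB
```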
Outline
Image Compression Fundamentals
Reduce correlation between pixels
Quantization and Source Coding
Overview of Image Compression Algorithms

Reduce the Correlation between Pixels
Orthogonal Transform Coding
KLT (Karhunen-Loeve Transform)
The maximal decorrelation process.
DCT (Discrete Cosine Transform)
JPEG is a DCT-based image compression standard; it is a lossy coding method and may result in some loss of details and unrecoverable distortion.
Subband Coding
DWT (Discrete Wavelet Transform)
Subband coding divides the spectrum of an image into lowpass and highpass components; the DWT is a famous example.
JPEG 2000 is a two-dimensional DWT-based image compression standard.
Predictive Coding
DPCM
Removes the mutual redundancy between successive pixels and encodes only the new information.
Covariance
The covariance between two random variables X and Y, with expected values E[X] = μX and E[Y] = μY, is defined as

cov(X, Y) = E[(X − μX)(Y − μY)] = E[XY] − μX μY
          = Σ_X Σ_Y (X − μX)(Y − μY) f(X, Y)    (discrete case)

If the entries of the column vector X = [x1, x2, …, xn]^T are random variables, each with finite variance, then the covariance matrix C is the matrix whose (i, j) entry is the covariance Cij:

                                [ C11 C12 ... C1n ]
C = E[(X − μx)(X − μx)^T] =     [ C21 C22 ... C2n ]
                                [  :   :  ...  :  ]
                                [ Cn1 Cn2 ... Cnn ]

μx = [μ1 μ2 ... μn]^T = [E(x1) E(x2) ... E(xn)]^T
The Orthogonal Transform
The Linear Transform
Forward transform: y = Ax
Inverse transform: x = Vy
To obtain the inverse transform, we need to compute the inverse of the transform matrix, since
V = A^(-1),  VA = A^(-1)A = I,  x = Vy = V(Ax) = (A^(-1)A)x = x

The Orthogonal Transform
Forward transform: y = V^T x
Inverse transform: x = Vy
To obtain the inverse transform, we need not compute the inverse of the transform matrix, since
V V^T = V^T V = I,  x = Vy = V(V^T x) = (V^T V)x = x
Transform Coding (1/2)
Original sequence X = (x0, x1) and transformed sequence Y = (y0, y1), with Y = AX (values rounded to two decimals):

Height  Weight  |    y0       y1
  65     170    |  181.97    3.42
  75     188    |  202.41    0.89
  60     150    |  161.55    0.56
  70     170    |  183.84   -1.22
  56     130    |  141.51   -3.22
  80     203    |  206.13   -3.00
  68     160    |  173.82   -3.11
  50     110    |  120.72   -5.15
  40      80    |   89.16   -7.12

The original sequence tends to cluster around the line x1 = 2.5 x0. We rotate the sequence by the transform Y = AX with the rotation matrix

A = [  cos θ   sin θ ]
    [ -sin θ   cos θ ],   θ = 68°

[Scatter plot: the (x0, x1) data points lie close to the line x1 = 2.5 x0.]
Transform Coding (2/2)
Throw away the value of weight y1 (set it to 0), then apply the inverse transform X̂ = A^(-1)Y (values rounded to two decimals):

   y0     y1  |  Height  Weight       Original:  Height  Weight
 181.97   0   |  68.17   168.72                    65     170
 202.41   0   |  75.82   187.67                    75     188
 161.55   0   |  60.52   149.79                    60     150
 183.84   0   |  68.87   170.46                    70     170
 141.51   0   |  53.01   131.21                    56     130
 206.13   0   |  77.22   191.12                    80     203
 173.82   0   |  65.12   161.17                    68     160
 120.72   0   |  45.22   111.93                    50     110
  89.16   0   |  33.40    82.67                    40      80

[ x̂0 ]   [ cos θ  -sin θ ] [ y0 ]
[ x̂1 ] = [ sin θ   cos θ ] [  0 ]

Because the other element of the pair contains very little information, we can discard it without a significant effect on the fidelity of the reconstructed sequence.
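The two slides' computation can be reproduced numerically; a sketch with θ = 68° showing that discarding y1 changes the reconstructed pairs only slightly:

```python
import numpy as np

theta = np.deg2rad(68)
A = np.array([[ np.cos(theta), np.sin(theta)],   # rotation matrix from the slide
              [-np.sin(theta), np.cos(theta)]])

X = np.array([[65, 170], [75, 188], [60, 150], [70, 170], [56, 130],
              [80, 203], [68, 160], [50, 110], [40, 80]], dtype=float).T

Y = A @ X              # transformed sequence; almost all energy moves into y0
Y[1, :] = 0            # throw away the small y1 component
X_hat = A.T @ Y        # inverse transform (A is orthogonal, so A^-1 = A^T)

max_err = np.abs(X - X_hat).max()
# The worst per-entry reconstruction error is only a few units
```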
Karhunen-Loeve Transform
The KLT is the optimum transform coder: it minimizes the mean square distortion of the reproduced data for a given total number of bits.
The KLT
X: the input vector, of size N×1
A: the transform matrix, of size N×N
Y: the transformed vector, of size N×1, whose components y(k) are mutually uncorrelated
Cxx: the covariance matrix of X
Cyy: the covariance matrix of Y
The transform matrix A is composed of the eigenvectors of the covariance matrix Cxx, which makes the output covariance matrix Cyy diagonal, with the eigenvalues λ0, λ1, …, λ(N−1) along the diagonal. That is,

Cyy = E[(Y − E(Y))(Y − E(Y))^T]
    = E[Y Y^T]                         (zero-mean assumption)
    = E[(AX)(AX)^T] = E[A X X^T A^T]
    = A E[X X^T] A^T = A Cxx A^T
    = diag(λ0, λ1, …, λ(N−1))
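The KLT can be sketched with an eigendecomposition of the sample covariance matrix; the synthetic data below (an assumption, echoing the earlier height/weight example) is correlated along x1 ≈ 2.5 x0:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(40, 80, 500)
data = np.stack([x0, 2.5 * x0 + rng.normal(0, 3, 500)])   # shape (2, 500)

def klt(data):
    """Return the KLT matrix A, whose rows are the eigenvectors of Cxx
    sorted by decreasing eigenvalue."""
    centered = data - data.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / (data.shape[1] - 1)      # Cxx
    eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending order
    order = np.argsort(eigvals)[::-1]                      # largest first
    return eigvecs[:, order].T                             # A: N x N

A = klt(data)
Y = A @ (data - data.mean(axis=1, keepdims=True))
Cyy = np.cov(Y)
# Cyy is (numerically) diagonal: the transformed components are uncorrelated
```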
Discrete Cosine Transform (1/2)
Why is the DCT more appropriate for image compression than the DFT?
The DCT can concentrate the energy of the transformed signal in the low frequencies, whereas the DFT cannot.
For image compression, the DCT causes less blocking effect than the DFT.

Forward DCT:
F(u, v) = (2/N) C(u) C(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) cos[ (2x+1)uπ / 2N ] cos[ (2y+1)vπ / 2N ]
for u = 0, …, N−1 and v = 0, …, N−1, where N = 8 and
C(k) = 1/√2 for k = 0;  C(k) = 1 otherwise.

Inverse DCT:
f(x, y) = (2/N) Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} C(u) C(v) F(u, v) cos[ (2x+1)uπ / 2N ] cos[ (2y+1)vπ / 2N ]
for x = 0, …, N−1 and y = 0, …, N−1, where N = 8.
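The two formulas translate line by line into code; a slow but direct sketch (real codecs use fast factorizations instead of the double sum):

```python
import numpy as np

N = 8

def C(k):
    return 1 / np.sqrt(2) if k == 0 else 1.0

def dct2(f):
    """2-D forward DCT of an 8x8 block, straight from the formula."""
    F = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            s = sum(f[x, y] *
                    np.cos((2 * x + 1) * u * np.pi / (2 * N)) *
                    np.cos((2 * y + 1) * v * np.pi / (2 * N))
                    for x in range(N) for y in range(N))
            F[u, v] = (2 / N) * C(u) * C(v) * s
    return F

def idct2(F):
    """2-D inverse DCT of an 8x8 block."""
    f = np.zeros((N, N))
    for x in range(N):
        for y in range(N):
            s = sum(C(u) * C(v) * F[u, v] *
                    np.cos((2 * x + 1) * u * np.pi / (2 * N)) *
                    np.cos((2 * y + 1) * v * np.pi / (2 * N))
                    for u in range(N) for v in range(N))
            f[x, y] = (2 / N) * s
    return f

block = np.arange(64, dtype=float).reshape(8, 8)   # a smooth ramp block
coeffs = dct2(block)
```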
Discrete Cosine Transform (2/2)
The 8-by-8 DCT basis
[Figure: the 64 basis images of the 8×8 DCT, indexed by (u, v).]
Discrete Wavelet Transform (1/2)
Subband Coding
The spectrum of the input data is decomposed into a set of band-limited components, which are called subbands.
Ideally, the subbands can be assembled back to reconstruct the original spectrum without any error.
The input signal is filtered into lowpass and highpass components through analysis filters.
The human perception system has different sensitivity to different frequency bands:
The human eyes are less sensitive to high-frequency-band color components.
The human ears are less sensitive to frequencies below about 20 Hz and above about 20 kHz.

[Two-channel filter bank] x(n) passes through the analysis filters h0(n) (lowpass) and h1(n) (highpass), each followed by ↓2 downsampling, giving the subbands y0(n) and y1(n). For synthesis, each subband is upsampled ↑2, filtered by g0(n) or g1(n), and the results are summed to reconstruct x̂(n). In the frequency domain, H0(ω) covers the low band [0, π/2) and H1(ω) the high band [π/2, π).
Discrete Wavelet Transform (2/2)
The 1-D DWT is applied alternately in the vertical and horizontal directions, line by line.
The LL band is recursively decomposed, first vertically and then horizontally.

1-D scaling function: φ(x), φ(y);  2-D scaling function: φ(x, y) = φ(x)φ(y)
1-D wavelet function: ψ(x), ψ(y);  2-D wavelet functions:
ψ_H(x, y) = ψ(x)φ(y)
ψ_V(x, y) = φ(x)ψ(y)
ψ_D(x, y) = ψ(x)ψ(y)

[Analysis filter bank] W_φ(j+1, m, n) is filtered along the rows by h_ψ(n) and h_φ(n) with ↓2, then along the columns by h_ψ(m) and h_φ(m) with ↓2, producing the subband coefficients W_ψ^D(j, m, n), W_ψ^V(j, m, n), W_ψ^H(j, m, n), and W_φ(j, m, n). One decomposition level splits the image into the LL, LH, HL, and HH subbands.
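One decomposition level can be sketched with the Haar wavelet, the simplest choice (an assumption for illustration; JPEG 2000 itself uses the 5/3 and 9/7 wavelets):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: filter + downsample vertically,
    then horizontally. Returns the LL, LH, HL, HH subbands."""
    img = img.astype(float)
    # vertical direction: sums (lowpass) and differences (highpass), down by 2
    lo = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    # horizontal direction on each result
    LL = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    LH = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    HL = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    HH = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

# A constant image has all of its energy in LL; the detail subbands are zero
flat = np.full((8, 8), 100.0)
LL, LH, HL, HH = haar_dwt2(flat)
```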
DPCM (1/3)
DPCM CODEC
[Encoder] The input u[n] is predicted as û[n]; the prediction error e[n] = u[n] − û[n] is quantized to e′[n] and sent over the communication channel. The predictor (with delay) operates on the reconstructed samples u′[n] = û[n] + e′[n].
[Decoder] The received e′[n] is added to the same prediction û[n] to reconstruct u′[n].

u[n] = û[n] + e[n]
u[n] − u′[n] = (û[n] + e[n]) − (û[n] + e′[n]) = e[n] − e′[n] = q[n]
so the reconstruction error equals the quantization error q[n].

There are two components to design in a DPCM system:
The predictor
The predictor output: Ŝ0 = a1 S1 + a2 S2 + … + an Sn
The quantizer
A-law quantizer
μ-law quantizer
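The codec loop can be sketched with the simplest choices: a first-order predictor (previous reconstructed sample) and a uniform quantizer, both assumptions for illustration:

```python
def dpcm_encode(u, q_step=4):
    """First-order DPCM: predict each sample as the previous reconstructed
    sample, quantize the error uniformly, and feed the reconstruction
    back into the predictor (as in the encoder diagram)."""
    residuals, u_hat = [], 0.0
    for sample in u:
        e = sample - u_hat                    # prediction error e[n]
        e_q = q_step * round(e / q_step)      # quantized error e'[n]
        residuals.append(e_q)
        u_hat = u_hat + e_q                   # reconstructed u'[n] drives prediction
    return residuals

def dpcm_decode(residuals):
    """Mirror of the encoder: accumulate the quantized errors."""
    out, u_hat = [], 0.0
    for e_q in residuals:
        u_hat = u_hat + e_q
        out.append(u_hat)
    return out

signal = [100, 102, 105, 104, 108, 110]
rec = dpcm_decode(dpcm_encode(signal))
# The reconstruction error never exceeds half the quantizer step
```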
DPCM (2/3)
Design of the Linear Predictor
Ŝ0 = a1 S1 + a2 S2 + … + an Sn    (the predictor output)
e0 = S0 − Ŝ0                       (the prediction error)

Minimize the mean square prediction error over the coefficients ai:
∂E[(S0 − Ŝ0)²]/∂ai = ∂E[(S0 − (a1 S1 + a2 S2 + … + an Sn))²]/∂ai
                   = −2 E[(S0 − (a1 S1 + a2 S2 + … + an Sn)) Si] = 0,   for i = 1, 2, …, n
⇒ E[(S0 − Ŝ0) Si] = 0,   for i = 1, 2, …, n
⇒ E[S0 Si] = E[Ŝ0 Si]

Define Rij = E[Si Sj]. Then
R0i = E[a1 S1 Si + a2 S2 Si + … + an Sn Si] = a1 R1i + a2 R2i + … + an Rni,   i = 1, …, n

Stacking these n equations gives the linear system [R0i] = [Rij][a], so the optimal coefficients are
[a] = [Rij]^(-1) [R0i]
DPCM (3/3)
When Ŝ0 is formed with these optimized coefficients ai, the mean square error is

σe² = E[(S0 − Ŝ0)²] = E[(S0 − Ŝ0)S0 − (S0 − Ŝ0)Ŝ0]
    = E[(S0 − Ŝ0)S0] − E[(S0 − Ŝ0)Ŝ0]

But E[(S0 − Ŝ0)Ŝ0] = 0 (orthogonality principle), so

σe² = E[(S0 − Ŝ0)S0] = E[(S0)²] − E[Ŝ0 S0]
    = R00 − (a1 R01 + a2 R02 + … + an R0n)

σe²: the variance of the difference signal
R00: the variance of the original signal

The variance of the error signal is less than the variance of the original signal.
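The predictor design [a] = [Rij]^(-1)[R0i] can be sketched on a synthetic correlated source (the AR(1) signal below is an illustrative assumption); the error variance comes out far below the signal variance, as the slide states:

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.zeros(20000)
for n in range(1, len(s)):            # synthetic source with s[n] ~ 0.9 s[n-1]
    s[n] = 0.9 * s[n - 1] + rng.normal()

def acf(x, lag):
    """Estimate R(lag) = E[S_n S_{n-lag}] from the data."""
    return np.mean(x * x) if lag == 0 else np.mean(x[lag:] * x[:-lag])

order = 2
R = np.array([[acf(s, abs(i - j)) for j in range(order)] for i in range(order)])
r = np.array([acf(s, i + 1) for i in range(order)])
a = np.linalg.solve(R, r)             # [a] = [Rij]^(-1) [R0i]

pred = a[0] * s[1:-1] + a[1] * s[:-2] # S^0 = a1 S1 + a2 S2
err = s[2:] - pred
# var(err) << var(s): the prediction error is much easier to encode
```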
Differential Coding - JPEG (1/2)
Transform Coefficients
DC coefficient
Because there is usually strong correlation between the DC coefficients of adjacent 8×8 blocks, the quantized DC coefficient is encoded as the difference from the DC term of the previous block.
AC coefficients
The other 63 entries are the AC components. They are treated separately from the DC coefficient in the entropy coding process.

ZigZag scan order [6]:
 0  1  5  6 14 15 27 28
 2  4  7 13 16 26 29 42
 3  8 12 17 25 30 41 43
 9 11 18 24 31 40 44 53
10 19 23 32 39 45 52 54
20 22 33 38 46 51 55 60
21 34 37 47 50 56 59 61
35 36 48 49 57 58 62 63
Differential Coding - JPEG (2/2)
We set DC0 = 0.
The DC of the current block is DCi = DCi-1 + Diffi.
Therefore, in the JPEG file, the first coefficient is actually the difference of DCs. The difference is then encoded with the Huffman coding algorithm, together with the encoding of the AC coefficients.

Differential coding: Diffi = DCi − DCi-1

[Diagram] … Block(i−1) with DC(i−1) | Block(i) with DC(i) …
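Both steps, reordering by the zig-zag table and differencing the DC terms, can be sketched directly (the table values are transcribed from the slide):

```python
# Zig-zag index of each coefficient, in row-major 8x8 order
ZIGZAG = [
     0,  1,  5,  6, 14, 15, 27, 28,
     2,  4,  7, 13, 16, 26, 29, 42,
     3,  8, 12, 17, 25, 30, 41, 43,
     9, 11, 18, 24, 31, 40, 44, 53,
    10, 19, 23, 32, 39, 45, 52, 54,
    20, 22, 33, 38, 46, 51, 55, 60,
    21, 34, 37, 47, 50, 56, 59, 61,
    35, 36, 48, 49, 57, 58, 62, 63,
]

def zigzag_scan(block64):
    """Reorder a flattened (row-major) 8x8 block into zig-zag order."""
    out = [0] * 64
    for pos, z in enumerate(ZIGZAG):
        out[z] = block64[pos]
    return out

def dc_differences(dcs):
    """Diff_i = DC_i - DC_{i-1}, with DC_0 = 0 as on the slide."""
    prev, diffs = 0, []
    for dc in dcs:
        diffs.append(dc - prev)
        prev = dc
    return diffs

zz = zigzag_scan(list(range(64)))
```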
Outline
Image Compression Fundamentals
Reduce correlation between pixels
Quantization and Source Coding
Overview of Image Compression Algorithms

Quantization and Source Coding
Quantization
The objective of quantization is to reduce the precision of the coefficients and to achieve a higher compression ratio.
It is a lossy operation, which results in loss of precision and unrecoverable distortion.
Source Coding
Aims at a smaller average number of bits per pixel of the image.
Assigns short descriptions to the more frequent outcomes and long descriptions to the less frequent outcomes.
Entropy Coding Methods
Huffman Coding
Arithmetic Coding
Run Length Coding
Dictionary Codes
Lempel-Ziv 77
Lempel-Ziv 78
Source Coding
A source encoder maps a sequence of source symbols ui into a sequence of code symbols ai.

Source alphabet: U = {u1, u2, …, uM}, with probabilities P = {p1, p2, …, pM}
Code alphabet:  A = {a1, a2, …, an}

Each message ui is mapped to a codeword
X^(i) = ( X1^(i), X2^(i), …, X_Ni^(i) ),
where Xk^(i) ∈ A for k = 1, 2, …, Ni and i = 1, …, M, and Ni is the length of the codeword.

Average codeword length:
N̄ = Σ_{i=1}^{M} p(X^(i)) Ni = Σ_{i=1}^{M} p(ui) Ni = Σ_{i=1}^{M} pi Ni
Huffman Coding (1/2)
The code construction process has a complexity of O(N log2 N).
Huffman codes satisfy the prefix condition:
Uniquely decodable: no codeword is a prefix of another codeword.

Huffman Coding Algorithm
(1) Order the symbols according to their probabilities.
Alphabet set: S1, S2, …, SN
Probabilities: P1, P2, …, PN
The symbols are arranged so that P1 ≥ P2 ≥ … ≥ PN.
(2) Apply a contraction process to the two symbols with the smallest probabilities: replace the last two symbols SN and SN-1 with a new symbol HN-1 whose probability is PN + PN-1.
The new set of symbols has N−1 members: S1, S2, …, SN-2, HN-1.
(3) Repeat step 2 until the final set has only one member.
(4) The codeword for each symbol Si is obtained by traversing the binary tree from its root to the leaf node corresponding to Si.
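The contraction process can be sketched with a heap; for the five-symbol example used in this section ({0.25, 0.25, 0.2, 0.15, 0.15}) it yields code lengths (2, 2, 2, 3, 3), though the particular bit assignments of an equally optimal code may differ:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code from {symbol: probability} by repeatedly
    merging the two smallest-probability entries."""
    heap = [(p, i, [s]) for i, (s, p) in enumerate(sorted(probs.items()))]
    codes = {s: "" for s in probs}
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)     # two smallest probabilities
        p2, i2, syms2 = heapq.heappop(heap)
        for s in syms1:                        # prepend one bit to every symbol
            codes[s] = "0" + codes[s]          # in each merged subtree
        for s in syms2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p1 + p2, i2, syms1 + syms2))
    return codes

codes = huffman_code({1: 0.25, 2: 0.25, 3: 0.2, 4: 0.15, 5: 0.15})
lengths = {s: len(c) for s, c in codes.items()}
```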
Huffman Coding (2/2)

X | Probability | Codeword | Codeword length
1 | 0.25 | 01  | 2
2 | 0.25 | 10  | 2
3 | 0.20 | 11  | 2
4 | 0.15 | 000 | 3
5 | 0.15 | 001 | 3

(Contraction stages: 0.15 + 0.15 = 0.3;  0.2 + 0.25 = 0.45;  0.25 + 0.3 = 0.55;  0.45 + 0.55 = 1.)
Arithmetic Coding (1/4)
Shannon-Fano-Elias Coding
We take X = {1, 2, …, m}, with p(x) > 0 for all x.
Modified cumulative distribution function:
F̄(x) = Σ_{a<x} p(a) + p(x)/2
The codeword of symbol x has l(x) = ⌈log2(1/p(x))⌉ + 1 bits.
The codeword is the binary expansion of F̄(x) truncated to l(x) bits, denoted ⌊F̄(x)⌋_l(x).

x | p(x) | F(x) | F̄(x)  | F̄(x) in binary | l(x) | codeword
1 | 0.25 | 0.25 | 0.125 | 0.001          | 3    | 001
2 | 0.25 | 0.50 | 0.375 | 0.011          | 3    | 011
3 | 0.20 | 0.70 | 0.600 | 0.1001…        | 4    | 1001
4 | 0.15 | 0.85 | 0.775 | 0.1100011…     | 4    | 1100
5 | 0.15 | 1.00 | 0.925 | 0.1110110…     | 4    | 1110
Arithmetic Coding (2/4)
Arithmetic coding is a direct extension of Shannon-Fano-Elias coding:
calculate the probability mass function p(x^n) and the cumulative distribution function F(x^n) for the source sequence x^n.
It is a lossless compression technique.
It treats multiple symbols as a single data unit.

Arithmetic Coding Algorithm
Let previous_low = 0, previous_high = 1
WHILE (input symbol != EOF)
    get input symbol s
    range = previous_high − previous_low
    new_low  = previous_low + range × interval_low(s)
    new_high = previous_low + range × interval_high(s)
    previous_low = new_low;  previous_high = new_high
END
where interval_low(s) and interval_high(s) are the bounds of the sub-interval assigned to symbol s.
Arithmetic Coding (3/4)

Symbol | Probability | Sub-interval
k | 0.05 | [0.00, 0.05)
l | 0.20 | [0.05, 0.25)
u | 0.10 | [0.25, 0.35)
w | 0.05 | [0.35, 0.40)
e | 0.30 | [0.40, 0.70)
r | 0.20 | [0.70, 0.90)
? | 0.10 | [0.90, 1.00)

Input string: l l u u r e ?
Encoding narrows the interval step by step:
[0, 1) → l → [0.05, 0.25) → l → [0.06, 0.10) → u → [0.07, 0.074) → u → [0.071, 0.0714) → r → [0.07128, 0.07136) → e → [0.071312, 0.071336) → ? → [0.0713336, 0.071336)

Any number inside the final interval identifies the message. Choosing
0.0713348389… = 2^(−4) + 2^(−7) + 2^(−10) + 2^(−15) + 2^(−16)
gives the codeword 0001001001000011.
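The interval narrowing can be sketched directly with the symbol table of this example (sub-interval widths equal the symbol probabilities):

```python
intervals = {
    'k': (0.00, 0.05), 'l': (0.05, 0.25), 'u': (0.25, 0.35),
    'w': (0.35, 0.40), 'e': (0.40, 0.70), 'r': (0.70, 0.90),
    '?': (0.90, 1.00),
}

def arith_encode_interval(message):
    """Narrow [0, 1) symbol by symbol; any number in the final
    interval identifies the message."""
    low, high = 0.0, 1.0
    for s in message:
        rng = high - low
        low, high = (low + rng * intervals[s][0],
                     low + rng * intervals[s][1])
    return low, high

low, high = arith_encode_interval("lluure?")
codeword_value = 2**-4 + 2**-7 + 2**-10 + 2**-15 + 2**-16  # 0.0713348389...
# codeword_value lies inside [low, high), so the 16-bit codeword
# 0001001001000011 decodes back to the message
```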
Arithmetic Coding (4/4)

Symbol | Probability | Huffman codeword
k | 0.05 | 10101
l | 0.20 | 01
u | 0.10 | 100
w | 0.05 | 10100
e | 0.30 | 11
r | 0.20 | 00
? | 0.10 | 1011

Input string: l l u u r e ?
Huffman coding (18 bits): 01, 01, 100, 100, 00, 11, 1011
Arithmetic coding (16 bits): 0001001001000011

Arithmetic coding yields better compression because it encodes the message as a whole instead of as separate symbols.
Most of the computations in arithmetic coding use floating-point arithmetic, yet most hardware supports only finite precision:
As each new symbol is coded, the precision required to represent the range grows.
There is a potential for overflow and underflow.
If the fractional values are not scaled appropriately, encoding errors occur.
Zero-Run-Length Coding - JPEG (1/2)
The notation (L, F)
L zeros in front of the nonzero value F.
EOB (End of Block)
A special coded value meaning that the remaining elements are all zeros.
If the last element of the vector is not zero, the EOB marker is not added.
An example:
1. 57, 45, 0, 0, 0, 0, 23, 0, -30, -16, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, ..., 0
2. (0,57) ; (0,45) ; (4,23) ; (1,-30) ; (0,-16) ; (2,1) ; EOB
3. (0,57) ; (0,45) ; (4,23) ; (1,-30) ; (0,-16) ; (2,1) ; (0,0)
4. (0,6,111001) ; (0,6,101101) ; (4,5,10111) ; (1,5,00001) ; (0,4,0111) ; (2,1,1) ; (0,0)
5. 1111000 111001 , 1111000 101101 , 1111111110011000 10111 , 11111110110 00001 , 1011 0111 , 11100 1 , 1010

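Step 2 of the example can be sketched as follows (a simplification: the ZRL handling that JPEG uses for zero runs longer than 15 is omitted):

```python
def zero_run_length(coeffs):
    """Encode a zig-zag-ordered AC coefficient list as (L, F) pairs:
    L zeros preceding the nonzero value F, with a trailing EOB marker
    when the vector ends in zeros."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    if run > 0:                     # trailing zeros collapse into EOB
        pairs.append("EOB")
    return pairs

seq = [57, 45, 0, 0, 0, 0, 23, 0, -30, -16, 0, 0, 1] + [0] * 51
encoded = zero_run_length(seq)
```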
Zero-Run-Length Coding - JPEG (2/2)
Huffman table of luminance AC coefficients:

run/category | code length | code word
0/0 (EOB)    |  4 | 1010
15/0 (ZRL)   | 11 | 11111111001
0/1          |  2 | 00
...          | ...| ...
0/6          |  7 | 1111000
...          | ...| ...
0/10         | 16 | 1111111110000011
1/1          |  4 | 1100
1/2          |  5 | 11011
...          | ...| ...
1/10         | 16 | 1111111110001000
2/1          |  5 | 11100
...          | ...| ...
4/5          | 16 | 1111111110011000
...          | ...| ...
15/10        | 16 | 1111111111111110
Dictionary Codes
Dictionary-based data compression algorithms substitute a repeated pattern with a shorter token.
Dictionary codes are compression codes that dynamically
construct their own coding and decoding tables “on the fly” by
looking at the data stream itself
It is not necessary for us to know the symbol probabilities
beforehand. These codes take advantage of the fact that, quite
often, certain strings of symbols are “frequently repeated” and
these strings can be assigned code words that represent the
“entire string of symbols”
Two series
Lempel-Ziv 77: LZ77, LZSS, LZBW
Lempel-Ziv 78: LZ78, LZW, LZMW

Lempel Ziv 77 Algorithm (1/4)
Search buffer: contains a portion of the recently encoded sequence.
Look-ahead buffer: contains the next portion of the sequence to be encoded.
Once the longest match has been found, the encoder encodes it with a triple <Cp, Cl, Cs>:
Cp: the offset (position) of the longest match, measured back from the look-ahead buffer
Cl: the length of the longest matching string
Cs: the codeword corresponding to the symbol in the look-ahead buffer that follows the match

LZ77 Compression Algorithm
Search the search buffer for the longest match with the look-ahead buffer.
IF a match is found
    output <Cp, Cl, Cs>
    shift the window Cl + 1 characters
ELSE
    output <0, 0, Cs>
    shift the window 1 character
END

[Sliding window] A window of size N slides over the text: a search buffer of size N−F holds the coded text, and a look-ahead buffer of size F holds the text still to be read, e.g.
… b a b a a c | a a c a b …   →  (Cp, Cl, Cs)
Lempel Ziv 77 Algorithm (2/4)
Advantages of LZ77
Does not require knowing the probabilities of the symbols beforehand.
This class of dictionary codes is known to asymptotically approach the source entropy for long messages; that is, the larger the sliding window, the better the compression performance.
The receiver does not require prior knowledge of the coding table constructed by the transmitter.
Disadvantage of LZ77
A straightforward implementation requires up to (N−F)·F character comparisons per fragment produced. If the sliding window becomes very large, the comparison complexity becomes very large as well.
Lempel Ziv 77 Algorithm (3/4)
Codeword = (15, 4, I) for "LZ77I"
[Sliding window] already encoded: "LZ77TypeIsOldest" | look-ahead buffer: "LZ77Isvx…"
The longest match, "LZ77", starts 15 positions back in the search buffer and has length 4; the symbol that follows it in the look-ahead buffer is "I".

Shift 5 characters. Codeword = (6, 1, v) for "sv"
[Sliding window] already encoded: "ypeIsOldestLZ77I" | look-ahead buffer: "svxOldes…"
The longest match, "s", lies 6 positions back with length 1, followed by "v".
Lempel Ziv 77 Algorithm (4/4)
Shift 2 characters. Codeword = (0, 0, x) for "x"
[Sliding window] already encoded: "eIsOldestLZ77Isv" | look-ahead buffer: "xOldestZ…"
No match for "x" exists in the search buffer, so the literal triple (0, 0, x) is emitted and the window shifts 1 character.
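The whole LZ77 loop can be sketched as below (a minimal, unoptimized version; the quadratic search is exactly the complexity issue noted earlier):

```python
def lz77_encode(data, window=16, lookahead=8):
    """Emit (Cp, Cl, Cs) triples: Cp is the backward offset of the longest
    match, Cl its length, Cs the literal that follows the match."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):        # candidate match starts
            length = 0
            while (length < lookahead - 1 and i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        nxt = data[i + best_len]                      # literal after the match
        out.append((best_off, best_len, nxt))
        i += best_len + 1                             # shift Cl + 1 characters
    return out

def lz77_decode(triples):
    s = ""
    for off, length, ch in triples:
        start = len(s) - off
        for k in range(length):                       # copies may overlap s
            s += s[start + k]
        s += ch
    return s

msg = "babaacaacabcab"
triples = lz77_encode(msg)
```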
Lempel Ziv 78 Algorithm (1/3)
The LZ78 algorithm parses a string into phrases, where each phrase is the shortest phrase not seen so far.
The multi-character patterns are of the form C0 C1 … Cn-1 Cn. The prefix of a pattern consists of all the pattern characters except the last: C0 C1 … Cn-1.
The algorithm can be viewed as building a dictionary in the form of a tree, where the nodes correspond to the phrases seen so far.

Lempel Ziv 78 Algorithm
Step 1: In the parsing context, search for the longest previously parsed phrase P matching the next substring to be encoded.
Step 2: Identify this phrase P by its index L in the list of phrases, and place the index on the code string. Go to the innovative context.
Step 3: In the innovative context, concatenate the next character C to the code string, forming the new parsed phrase P·C.
Step 4: Add the phrase P·C to the end of the list of parsed phrases as (L, C). Return to Step 1.
Lempel Ziv 78 Algorithm (2/3)
Advantages
Asymptotically, the average codeword length per source symbol is not greater than the entropy rate of the information source.
The encoder does not need to know the probabilities of the source symbols beforehand.
Disadvantage
Compression approaches the source entropy only as the size of the input goes to infinity, and most practical texts are considerably shorter than that. Moreover, due to the limited memory of modern computers, memory would be exhausted before compression becomes optimal. This is a bottleneck of LZ78 that needs to be overcome.

Lempel Ziv 78 Algorithm (3/3)
Input string:  ABBABBABBBAABABAA
Parsed string: A, B, BA, BB, AB, BBA, ABA, BAA
Output codes:  (0,A), (0,B), (2,A), (2,B), (1,B), (4,A), (5,A), (3,A)

Index | Phrase | (L, C)
1 | A   | (0,A)
2 | B   | (0,B)
3 | BA  | (2,A)
4 | BB  | (2,B)
5 | AB  | (1,B)
6 | BBA | (4,A)
7 | ABA | (5,A)
8 | BAA | (3,A)

[Dictionary tree] Root 0 has children A (1) and B (2); node 1 has child B (5); node 2 has children A (3) and B (4); node 3 has child A (8); node 4 has child A (6); node 5 has child A (7).
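The parsing above can be sketched with a dictionary keyed by phrase; running it on the slide's input reproduces the slide's output codes exactly:

```python
def lz78_encode(s):
    """Parse s into (index, char) pairs; index points into the growing
    phrase dictionary, with 0 denoting the empty phrase."""
    phrases = {"": 0}
    out, p = [], ""
    for ch in s:
        if p + ch in phrases:                 # keep extending the matched phrase
            p += ch
        else:
            out.append((phrases[p], ch))      # emit (L, C)
            phrases[p + ch] = len(phrases)    # new phrase gets the next index
            p = ""
    if p:                                     # flush a trailing, already-seen phrase
        out.append((phrases[p[:-1]], p[-1]))
    return out

codes = lz78_encode("ABBABBABBBAABABAA")
```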
Outline
Image Compression Fundamentals
Reduce correlation between pixels
Quantization and Source Coding
Overview of Image Compression Algorithms

JPEG
The JPEG Encoder
[Block diagram] Image (R, G, B) → YCbCr color coordinate → chrominance downsampling (4:2:2 or 4:2:0) → 8×8 FDCT → quantizer (using the quantization table) → DC coefficient: differential coding; AC coefficients: zigzag scan → Huffman encoding (using the Huffman tables) → bit-stream.

The JPEG Decoder
[Block diagram] Bit-stream → Huffman decoding → DC: inverse differential coding; AC: de-zigzag → dequantizer (using the quantization table) → 8×8 IDCT → chrominance upsampling (4:2:2 or 4:2:0) → RGB color coordinate → decoded image.
Quantization in JPEG
Quantization is the step where we actually throw away data.
The luminance and chrominance quantization tables have
lower numbers toward the upper left (low frequencies)
larger numbers toward the lower right (high frequencies)
The performance is close to the optimal condition.

Quantization:    F(u, v)_quantized = round( F(u, v) / Q(u, v) )
Dequantization:  F(u, v)_deQ = F(u, v)_quantized × Q(u, v)

Luminance table QY:               Chrominance table QC:
16 11 10 16  24  40  51  61       17 18 24 47 99 99 99 99
12 12 14 19  26  58  60  55       18 21 26 66 99 99 99 99
14 13 16 24  40  57  69  56       24 26 56 99 99 99 99 99
14 17 22 29  51  87  80  62       47 66 99 99 99 99 99 99
18 22 37 56  68 109 103  77       99 99 99 99 99 99 99 99
24 35 55 64  81 104 113  92       99 99 99 99 99 99 99 99
49 64 78 87 103 121 120 101       99 99 99 99 99 99 99 99
72 92 95 98 112 100 103  99       99 99 99 99 99 99 99 99
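Quantization and dequantization with the luminance table are one line each; a sketch (the constant input coefficients are an illustrative assumption):

```python
import numpy as np

Q_Y = np.array([   # luminance quantization table from the slide
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(F):
    return np.round(F / Q_Y).astype(int)

def dequantize(Fq):
    return Fq * Q_Y

F = np.full((8, 8), 30.0)   # hypothetical DCT coefficients, all equal
Fq = quantize(F)
# High-frequency coefficients quantize to 0, which is exactly what makes
# the subsequent zero-run-length coding effective
```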
JPEG 2000
The JPEG 2000 Encoder
[Block diagram] Image (R, G, B) → forward component transform → 2-D DWT → quantization → EBCOT (Tier-1: context modeling and arithmetic coding; Tier-2: rate-distortion control) → JPEG 2000 bit-stream.

The JPEG 2000 Decoder
[Block diagram] Bit-stream → EBCOT decoder → dequantization → 2-D IDWT → inverse component transform → decoded image (R, G, B).
Quantization in JPEG 2000
Quantized coefficients:
q_b(u, v) = sign( a_b(u, v) ) · floor( |a_b(u, v)| / Δ_b )
Quantization step size:
Δ_b = 2^(R_b − ε_b) · ( 1 + μ_b / 2^11 )

a_b(u, v): the wavelet coefficients of subband b
R_b: the nominal dynamic range of subband b
ε_b: the number of bits allotted to the exponent of the subband's coefficients
μ_b: the number of bits allotted to the mantissa of the subband's coefficients

Reversible wavelets
Uniform deadzone scalar quantization with a step size of Δ_b = 1 must be used.
Irreversible wavelets
The step size is specified in terms of an exponent ε_b, 0 ≤ ε_b < 2^5, and a mantissa μ_b, 0 ≤ μ_b < 2^11.
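The deadzone quantizer is a direct transcription of the two formulas; the parameter values below are arbitrary illustrations, not values mandated by the standard:

```python
import math

def jpeg2000_quantize(a, R_b=10, eps_b=8, mu_b=0):
    """q_b = sign(a) * floor(|a| / delta_b), with
    delta_b = 2^(R_b - eps_b) * (1 + mu_b / 2^11)."""
    delta_b = 2.0 ** (R_b - eps_b) * (1 + mu_b / 2 ** 11)
    return int(math.copysign(math.floor(abs(a) / delta_b), a))

# With R_b = 10, eps_b = 8, mu_b = 0: delta_b = 4, so coefficients with
# magnitude below 4 fall into the deadzone and quantize to 0
```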
Bitplane Scanning
The decimal DWT coefficients are converted into signed binary format, so each DWT coefficient is decomposed into many 1-bit planes.
Within one bit-plane:
Significant
A bit is called significant once the first ‘1’ bit has been met, scanning from MSB to LSB.
Insignificant
The ‘0’ bits before the first ‘1’ bit are insignificant.

[Figure] A coefficient consists of a sign bit followed by magnitude bits coded in order from MSB to LSB; the leading 0 bits are insignificant, and all bits from the first 1 onward are significant.
Scanning Sequence
The scanning order of the bit-plane:
Sample
Each element of the bit-plane is called a sample.
Column stripe
Four vertical samples form one column stripe.
Full stripe
The 16 column stripes in the horizontal direction form a full stripe.

[Figure] With a stripe height of 4, samples 1-4 run down the first column, samples 5-8 down the second, and so on through sample 64 at the end of the full stripe; scanning then continues with samples 65, 66, … in the next full stripe.
The Context Window
Current sample
"curr" is the sample to be coded; the other 8 samples are its neighbor samples.
The diagonal samples: the samples in the diagonal directions.
The vertical samples: the samples in the vertical direction.
The horizontal samples: the samples in the horizontal direction.

[Figure] The 3×3 context window around "curr": diagonal neighbors 1d, 3d (in stripe i−1) and 6d, 8d (in stripe i+1) at the corners; vertical neighbors 2v (above) and 7v (below); horizontal neighbors 4h and 5h in stripe i.
Arithmetic Coder in JPEG 2000
The decision and context data generated by context formation are coded in the arithmetic encoder.
The arithmetic encoder used by the JPEG 2000 standard is a binary coder:
More Probable Symbol (MPS): the more likely of the two input values
Less Probable Symbol (LPS): the less likely of the two input values

[Figure] The present interval has probability width A and base C and is split into an MPS sub-interval and an LPS sub-interval. Coding an MPS makes the MPS sub-interval the new interval (new A); coding an LPS makes the LPS sub-interval the new interval.
Rate Distortion Optimization
To meet a target bit-rate or transmission time, the packaging process imposes a particular organization of the coding-pass data in the output code-stream.
Rate control ensures that the code-stream uses the desired number of bytes while achieving the highest possible image quality.
Shape-Adaptive Image Compression
Both the JPEG and JPEG 2000 image compression standards can achieve a great compression ratio; however, neither takes effective advantage of the local characteristics of the given image.
Instead of taking the whole image as one object and applying transform coding, quantization, and entropy coding to it, the SAIC algorithm segments the whole image into several objects, each with its own local characteristics and color.
Because of the high correlation of the color values in each image segment, SAIC can achieve a better compression ratio and quality than conventional image compression algorithms.

[Block diagram] Image segmentation splits the image into objects. For each object, the boundary is coded by a boundary descriptor (boundary transform coding → quantization and entropy coding), and the internal texture by arbitrary-shape transform coding (→ quantization and entropy coding, together with the coefficients of the transform bases); the resulting streams form the bit-stream.
Reference
[1] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", 2nd Ed., Prentice Hall, 2004.
[2] Liu Chien-Chih and Hang Hsueh-Ming, "Acceleration and Implementation of JPEG 2000 Encoder on TI DSP Platform", IEEE International Conference on Image Processing (ICIP), Vol. 3, pp. III-329-339, 2005.
[3] ISO/IEC 15444-1:2000(E), "Information Technology - JPEG 2000 Image Coding System - Part 1: Core Coding System", 2000.
[4] Jian-Jiun Ding and Jiun-De Huang, "Image Compression by Segmentation and Boundary Description", Master's Thesis, National Taiwan University, Taipei, 2007.
[5] Jian-Jiun Ding and Tzu-Heng Lee, "Shape-Adaptive Image Compression", Master's Thesis, National Taiwan University, Taipei, 2008.
[6] G. K. Wallace, "The JPEG Still Picture Compression Standard", Communications of the ACM, Vol. 34, Issue 4, pp. 30-44, 1991.
[7] 張得浩, "The Core Coding of the Next-Generation JPEG 2000: Algorithm and Architecture (Part 1)", IC Design Monthly, Aug. 2003.
[8] 酒井善則 and 吉田俊之 (translated and edited by 白執善), "Image Compression Techniques", 全華, 2004.
[9] Subhasis Saha, "Image Compression - from DCT to Wavelets: A Review", available at http://www.acm.org/crossroads/xrds6-3/sahaimgcoding.html

Thank You
Q&A

Question 1
In slide DPCM (3/3), what is the orthogonality principle?

Orthogonality principle: E[(S0 − Ŝ0) Ŝ0] = 0

Proof:
Given E[(S0 − Ŝ0) Si] = 0 for i = 1, 2, …, n:
E[(S0 − Ŝ0) Ŝ0]
 = E[(S0 − Ŝ0)(a1 S1 + a2 S2 + … + an Sn)]
 = a1 E[(S0 − Ŝ0) S1] + a2 E[(S0 − Ŝ0) S2] + … + an E[(S0 − Ŝ0) Sn]
 = 0 + 0 + … + 0
 = 0

