Object Recognition
Pattern and Pattern Classifier
Digital Image Processing
Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall, 2008
University of Ioannina - Department of Computer Science
C. Nikou – Digital Image Processing
23/5/2013
• The loss for a wrong decision is generally assigned to a non-zero value (e.g. 1) and the loss for a correct decision is 0:

  Lij = 1 − δij

• The classifier that minimizes the total average loss is called the Bayes classifier.
• It assigns an unknown pattern x to class ωi if:

  ri(x) < rj(x), for j = 1, 2, ..., W

  Therefore,

  Σk=1..W Lki p(x/ωk)P(ωk) < Σk=1..W Lkj p(x/ωk)P(ωk)

  or

  p(x/ωi)P(ωi) > p(x/ωj)P(ωj)

  which is the computation of decision functions:

  dj(x) = p(x/ωj)P(ωj), j = 1, 2, ..., W

• It assigns pattern x to the class whose decision function yields the largest numerical value.
• The probability densities of the patterns in each class p(x/ωj) must be known.
  − More difficult problem (especially for multidimensional variables) which requires methods from pdf estimation.
  − Generally, we assume:
    • Analytic expressions for the pdf.
    • The pdf parameters may be estimated from sample patterns.
    • The Gaussian is the most common pdf.
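As an illustrative sketch (not part of the slides), the rule above — assign x to the class whose decision function dj(x) = p(x/ωj)P(ωj) is largest — can be written directly. The function names and the toy 1-D Gaussian parameters are assumptions for the example:

```python
import numpy as np
from math import exp, sqrt, pi

def bayes_classify(x, densities, priors):
    """Assign pattern x to the class with the largest decision
    function d_j(x) = p(x|w_j) * P(w_j)."""
    d = [p(x) * P for p, P in zip(densities, priors)]
    return int(np.argmax(d))

def gauss(m, s):
    """1-D Gaussian density with mean m and standard deviation s."""
    return lambda x: exp(-(x - m) ** 2 / (2 * s ** 2)) / (sqrt(2 * pi) * s)

# Two hypothetical classes with equal priors.
densities = [gauss(0.0, 1.0), gauss(3.0, 1.0)]
priors = [0.5, 0.5]
print(bayes_classify(0.2, densities, priors))  # 0 (closer to the m=0 class)
print(bayes_classify(2.5, densities, priors))  # 1 (closer to the m=3 class)
```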
Bayes classifier for Gaussian pattern classes

• We first consider the 1-D case for W = 2 classes:

  p(x/ωj) = (1 / (√(2π) σj)) exp(−(x − mj)² / (2σj²)), j = 1, 2

• For P(ωj) = 1/2 the decision functions reduce to dj(x) = p(x/ωj)/2.

Bayes classifier for Gaussian pattern classes (cont.)

• In the n-D case:

  p(x/ωj) = (1 / ((2π)^(n/2) |Cj|^(1/2))) exp(−½ (x − mj)^T Cj^(−1) (x − mj))

  with mean vector and covariance matrix:

  mj = Ej[x]
  Cj = Ej[(x − mj)(x − mj)^T]
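As a small sketch (assumed helper name, not from the slides), the n-D Gaussian density can be evaluated directly with NumPy:

```python
import numpy as np

def gaussian_pdf(x, m, C):
    """Evaluate the n-D Gaussian density with mean vector m and
    covariance matrix C."""
    n = m.size
    diff = x - m
    norm = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(C)))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(C) @ diff)

# At the mean of a 2-D unit-covariance Gaussian the density is 1/(2*pi).
m = np.array([0.0, 0.0])
C = np.eye(2)
print(gaussian_pdf(np.array([0.0, 0.0]), m, C))  # ~0.1592
```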
Bayes classifier for Gaussian pattern classes (cont.)

• Approximation of the mean vector and covariance matrix from samples of the classes:

  mj = (1/Nj) Σx∈ωj x

  Cj = (1/Nj) Σx∈ωj (x − mj)(x − mj)^T

Bayes classifier for Gaussian pattern classes (cont.)

• It is more convenient to work with the natural logarithm of the decision function: the logarithm is monotonically increasing, so it does not change the order of the decision functions:

  dj(x) = ln[p(x/ωj)P(ωj)] = ln p(x/ωj) + ln P(ωj)

• Substituting the Gaussian pdf and dropping the constant term (n/2)ln(2π), which is the same for all classes:

  dj(x) = ln P(ωj) − ½ ln|Cj| − ½ (x − mj)^T Cj^(−1) (x − mj)
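The two steps above — estimating mj and Cj from samples, then evaluating the log decision function — can be sketched as follows (function names and the sample data are assumptions for illustration):

```python
import numpy as np

def estimate_params(samples):
    """Sample estimates mj = (1/Nj) sum(x) and
    Cj = (1/Nj) sum((x - mj)(x - mj)^T), as on the slide
    (1/Nj normalisation, not the unbiased 1/(Nj-1))."""
    m = samples.mean(axis=0)
    diff = samples - m
    C = diff.T @ diff / len(samples)
    return m, C

def log_decision(x, m, C, prior):
    """d_j(x) = ln P(wj) - 0.5 ln|Cj| - 0.5 (x-mj)^T Cj^{-1} (x-mj)."""
    diff = x - m
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(C))
            - 0.5 * diff @ np.linalg.inv(C) @ diff)

# Hypothetical training samples for one class.
samples = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
m, C = estimate_params(samples)
print(m)  # [1. 1.] ; here C is the identity matrix
print(log_decision(np.array([1.0, 1.0]), m, C, prior=0.5))  # ln(0.5)
```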
Bayes classifier for Gaussian pattern classes (cont.)

• If all the classes have the same covariance, Cj = C, j = 1, 2, ..., W, the decision functions are linear (hyperplanes):

  dj(x) = ln P(ωj) + x^T C^(−1) mj − ½ mj^T C^(−1) mj

• Moreover, if P(ωj) = 1/W and Cj = I:

  dj(x) = x^T mj − ½ mj^T mj

  which is the minimum distance classifier decision function.

Bayes classifier for Gaussian pattern classes (cont.)

• The minimum distance classifier is optimum in the Bayes sense if:
  − The pattern classes are Gaussian.
  − All classes are equally likely to occur.
  − All covariance matrices are equal to (the same multiple of) the identity matrix.
• Gaussian pattern classes satisfying these conditions are spherical clouds (hyperspheres).
• The classifier establishes a hyperplane between every pair of classes.
  − It is the perpendicular bisector of the line segment joining the centers of the classes.
Application to remotely sensed images

• 4-D vectors.
• Three classes:
  − Water
  − Urban development
  − Vegetation
• Mean vectors and covariance matrices learnt from samples whose class is known.
  − Here, we will use samples from the image to learn the pdf parameters.
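The workflow described above can be sketched on synthetic data (all arrays, class labels, and parameter values below are hypothetical stand-ins for the real multispectral bands and training masks): each pixel of a 4-band image is a 4-D pattern vector, class statistics are learnt from labelled samples, and every pixel is assigned by the Gaussian log decision function.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 8, 8, 4                        # tiny image, 4 spectral bands
image = rng.normal(size=(H, W, B))       # hypothetical 4-band image
train = {0: rng.normal(0, 1, (50, B)),   # e.g. samples of one class
         1: rng.normal(3, 1, (50, B))}   # e.g. samples of another class

# Learn mean, inverse covariance, and log-determinant per class.
params = {}
for c, s in train.items():
    m = s.mean(axis=0)
    C = (s - m).T @ (s - m) / len(s)
    params[c] = (m, np.linalg.inv(C), np.log(np.linalg.det(C)))

def d(x, m, Cinv, logdet, prior=0.5):
    """Gaussian log decision function d_j(x)."""
    diff = x - m
    return np.log(prior) - 0.5 * logdet - 0.5 * diff @ Cinv @ diff

# Assign every pixel to the class with the largest d_j(x).
labels = np.array([[max(params, key=lambda c: d(image[i, j], *params[c]))
                    for j in range(W)] for i in range(H)])
print(labels.shape)  # (8, 8)
```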
• Let a and b denote two closed shapes which are represented by 4-directional chain codes, and s(a) and s(b) their shape numbers.
• The shapes have a degree of similarity, k, if their shape numbers coincide up to order k (and differ for larger orders).
• Alternatively, the distance between two shapes a and b is defined as the inverse of their degree of similarity:

  D(a, b) = 1/k
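A minimal sketch of this distance, assuming shape numbers are stored as a mapping from order to the shape-number string (a hypothetical representation chosen for the example):

```python
def degree_of_similarity(shape_numbers_a, shape_numbers_b):
    """Largest order k for which the shape numbers of the two
    shapes still coincide (0 if they never coincide)."""
    k = 0
    for order in sorted(shape_numbers_a):
        if shape_numbers_b.get(order) == shape_numbers_a[order]:
            k = order
        else:
            break
    return k

def distance(a_sn, b_sn):
    """D(a, b) = 1/k, the inverse of the degree of similarity."""
    k = degree_of_similarity(a_sn, b_sn)
    return float('inf') if k == 0 else 1.0 / k

# Hypothetical shape numbers for two shapes, indexed by order.
a = {4: '0321', 6: '003221', 8: '00332211'}
b = {4: '0321', 6: '003221', 8: '00300303'}
print(distance(a, b))  # 1/6: the shapes agree up to order k = 6
```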
String matching