
Object Recognition

Image → Pattern → Find the respective pattern class → Place the pattern in it

Dr. Ali Raza


UET Lahore, Pakistan
Outline
• Introduction
• Patterns and Pattern Classes
• Decision-Theoretic Methods
– Matching
• Minimum Distance Classifier
• Matching by Correlation
– Optimum statistical classifiers
• Bayes Classifier
– Neural Networks
• Structural Methods
– Matching Shape Numbers
– String Matching
• Syntactic Recognition of Strings: String Grammars
• Syntactic Recognition of Trees: Tree Grammars
• Conclusions
Introduction
• The approaches to pattern recognition are divided into two principal areas: decision-theoretic and structural

• The first category deals with patterns described by quantitative descriptors, such as length, area, and texture

• The second category deals with patterns best described by qualitative descriptors, such as relational descriptors
Introduction
• Basic pattern recognition flowchart

Sensor → Feature generation → Feature selection → Classifier design → System evaluation
Applications: Computational photography
Applications: Assisted driving
Pedestrian and car detection (figure: detected pedestrians and cars annotated with distance in meters)

Lane detection

• Collision warning systems with adaptive cruise control
• Lane departure warning systems
• Rear object detection systems
Applications: Image Search and Similarity
Challenges: viewpoint variation

Michelangelo 1475-1564
Challenges: illumination variation

slide credit: S. Ullman


Challenges: occlusion

Magritte, 1957
Challenges: scale
Challenges: deformation

Xu, Beihong 1943


Challenges: background clutter

Klimt, 1913
Challenges: intra-class variation
Let’s start simple

Face Detection using skin detection


Face detection

• How to tell if a face is present?


One simple method: skin detection


• Skin pixels have a distinctive range of colors


– Corresponds to region(s) in RGB color space
• for visualization, only R and G components are shown above
Skin classifier
• A pixel X = (R,G,B) is skin if it is in the skin region
• But how to find this region?
Skin detection

• Learn the skin region from examples


– Manually label pixels in one or more “training images” as skin or not skin
– Plot the training data in RGB space
• skin pixels shown in orange, non-skin pixels shown in blue
• some skin pixels may be outside the region, non-skin pixels inside. Why?
Skin classifier
• Given X = (R,G,B): how to determine if it is skin or not?
Skin classification techniques

Skin classifier
• Given X = (R,G,B): how to determine if it is skin or not?
• Nearest neighbor
– find labeled pixel closest to X
– choose the label for that pixel
• Data modeling
– fit a model (curve, surface, or volume) to each class
• Probabilistic data modeling
– fit a probability model to each class
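A minimal sketch of the nearest-neighbor option above, assuming labeled training pixels are already available as NumPy arrays; the pixel values, labels, and function name are made up for illustration:

import numpy as np

def nn_skin_label(pixel, train_pixels, train_labels):
    """Label one (R, G, B) pixel with the label of its closest training pixel."""
    # Euclidean distance from the query pixel to every labeled training pixel
    dists = np.linalg.norm(train_pixels - pixel, axis=1)
    return train_labels[np.argmin(dists)]

# Illustrative training data: rows are (R, G, B); labels: 1 = skin, 0 = not skin
train_pixels = np.array([[220, 170, 140], [200, 150, 120],
                         [40, 60, 200], [30, 120, 40]], dtype=float)
train_labels = np.array([1, 1, 0, 0])

print(nn_skin_label(np.array([210.0, 160.0, 130.0]), train_pixels, train_labels))  # -> 1 (skin)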
Probability
• Basic probability
– X is a random variable
– P(X) is the probability that X achieves a certain value
– p(X) is called a PDF (probability distribution/density function)
• a 2D PDF is a surface, a 3D PDF is a volume
– ∫ p(X) dX = 1 for continuous X, or Σ P(X) = 1 for discrete X

– Conditional probability: P(X | Y)


• probability of X given that we already know Y
Probabilistic skin classification

• Now we can model uncertainty


– Each pixel has a probability of being skin or not skin

Skin classifier
• Given X = (R,G,B): how to determine if it is skin or not?
• Choose interpretation of highest probability
– set X to be a skin pixel if and only if P(skin | X) > P(~skin | X)

Where do we get P(skin | X) and P(~skin | X)?


Learning conditional PDF’s

• We can calculate P(R | skin) from a set of training images


– It is simply a histogram over the pixels in the training images
• each bin Ri contains the proportion of skin pixels with color Ri
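A small sketch of this histogram estimate, assuming the R values of skin-labeled training pixels are in a NumPy array; the bin count, sample values, and function names are placeholders:

import numpy as np

def learn_likelihood(skin_values, n_bins=32):
    """Histogram estimate of P(R | skin): proportion of skin pixels falling in each R bin."""
    counts, edges = np.histogram(skin_values, bins=n_bins, range=(0, 256))
    return counts / counts.sum(), edges

def likelihood(r, hist, edges):
    """Look up P(R | skin) for a single value r."""
    bin_idx = np.clip(np.searchsorted(edges, r, side='right') - 1, 0, len(hist) - 1)
    return hist[bin_idx]

skin_r = np.array([200, 210, 190, 220, 205, 195])   # illustrative R values of skin pixels
hist, edges = learn_likelihood(skin_r)
print(likelihood(205, hist, edges))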

This doesn’t work as well in higher-dimensional spaces. Why not?


Approach: fit parametric PDF functions
• common choice is a rotated Gaussian
– center μ
– covariance Σ
» orientation and size defined by the eigenvectors and eigenvalues of Σ
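A minimal sketch of this parametric alternative: fitting a single Gaussian (mean and covariance) to skin pixels in, say, (R, G) space; the sample values and function names are made up:

import numpy as np

def fit_gaussian(samples):
    """Fit the mean and covariance of a Gaussian to the rows of `samples`."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)   # orientation/size come from its eigenvectors/eigenvalues
    return mean, cov

def gaussian_pdf(x, mean, cov):
    """Evaluate the fitted Gaussian density at x."""
    d = x - mean
    n = len(mean)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm

skin_rg = np.array([[200, 150], [210, 165], [190, 140], [205, 148]], dtype=float)  # illustrative (R, G) skin pixels
mean, cov = fit_gaussian(skin_rg)
print(gaussian_pdf(np.array([202.0, 152.0]), mean, cov))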


Learning conditional PDF’s

• We can calculate P(R | skin) from a set of training images


– It is simply a histogram over the pixels in the training images
• each bin Ri contains the proportion of skin pixels with color Ri
But this isn’t quite what we want
• Why not? How to determine if a pixel is skin?
• We want P(skin | R) not P(R | skin)
• How can we get it?
Bayes rule

• In terms of our problem:

P(skin | R) = P(R | skin) P(skin) / P(R)

– P(R | skin): what we measure (likelihood)
– P(skin): domain knowledge (prior)
– P(skin | R): what we want (posterior)
– P(R): normalization term

The prior: P(skin)


• Could use domain knowledge
– P(skin) may be larger if we know the image contains a person
– for a portrait, P(skin) may be higher for pixels in the center
• Could learn the prior from the training set. How?
– P(skin) may be proportion of skin pixels in training set
Bayesian estimation

(figure: likelihood P(R | skin) and unnormalized posterior P(R | skin) P(skin))

• Bayesian estimation = minimize probability of misclassification

– Goal is to choose the label (skin or ~skin) that maximizes the posterior
• this is called Maximum A Posteriori (MAP) estimation
• Suppose the prior is uniform: P(skin) = P(~skin) = 0.5
– in this case P(skin | R) ∝ P(R | skin) and P(~skin | R) ∝ P(R | ~skin)
– maximizing the posterior is equivalent to maximizing the likelihood
» P(skin | R) > P(~skin | R) if and only if P(R | skin) > P(R | ~skin)
– this is called Maximum Likelihood (ML) estimation
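A hedged sketch of the MAP rule described above; the likelihood values and prior below are illustrative placeholders (in practice they would come from the learned histograms and the training-set proportion):

def map_is_skin(p_r_given_skin, p_r_given_notskin, p_skin=0.5):
    """MAP decision: label the pixel skin iff P(skin | R) > P(~skin | R).

    By Bayes' rule this reduces to comparing likelihood * prior for each class,
    since the normalization term P(R) is common to both posteriors.
    """
    return p_r_given_skin * p_skin > p_r_given_notskin * (1.0 - p_skin)

# Illustrative likelihood values for one pixel, e.g. read from the learned histograms
print(map_is_skin(p_r_given_skin=0.12, p_r_given_notskin=0.03, p_skin=0.3))  # True -> skin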
Skin detection results
General classification
• This same procedure applies in more general circumstances
– More than two classes
– More than one dimension

Example: face detection


• Here, X is an image region
– dimension = # pixels
– each face can be thought
of as a point in a high
dimensional space
H. Schneiderman and T. Kanade, "A Statistical Method for 3D Object Detection Applied to Faces and Cars", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000).
http://www-2.cs.cmu.edu/afs/cs.cmu.edu/user/hws/www/CVPR00.pdf
But!

It's just face detection … not recognition!

Object Recognition

PATTERNS AND PATTERN CLASSES

Patterns and Pattern Classes
• Pattern
– An arrangement of descriptors
– Features
• Quantitative: length, area, texture, etc.
• Qualitative: relational descriptors
– Pattern vector with descriptors: x = (x1, x2, …, xn)T
• Pattern Class
– Family of patterns sharing some common properties
Discriminant Analysis [Fisher 1936]
Descriptors: 02
Classes: 03

Class Separability!

From Fisher 1936


Signatures:

Pattern vector: amplitude values of the signature r(θ) sampled at angles θ1, …, θn:

x1 = r(θ1), x2 = r(θ2), …, xn = r(θn)
There are other ways to define pattern vectors!
e.g. the first n statistical moments of a given signature.
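As a small, hypothetical illustration of forming x = (r(θ1), …, r(θn))T from a boundary signature by sampling the radius at n equally spaced angles; the boundary coordinates, centroid, and function name are made up for this sketch:

import numpy as np

def signature_pattern_vector(boundary_xy, centroid, n=8):
    """Form a pattern vector by sampling the signature r(theta) at n equally spaced angles."""
    d = boundary_xy - centroid
    r = np.hypot(d[:, 0], d[:, 1])                           # radius of each boundary point
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)  # angle of each boundary point
    order = np.argsort(theta)
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # take r from the first boundary sample at or beyond each requested angle
    idx = np.searchsorted(theta[order], angles).clip(0, len(order) - 1)
    return r[order][idx]

# Illustrative 8-point boundary around the origin
boundary = np.array([[1, 0], [1, 1], [0, 1], [-1, 1],
                     [-1, 0], [-1, -1], [0, -1], [1, -1]], dtype=float)
print(signature_pattern_vector(boundary, centroid=np.array([0.0, 0.0]), n=4))  # [1. 1. 1. 1.]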

The selection of descriptors largely determines the performance of an object recognition approach

Now the Qualitative Descriptors!

String of symbols: … abababab…

Minutiae (For Fingerprints):


• Endings
• Branching
• Merging
• Disconnected segments …
Tree Descriptions:
Object Recognition

DECISION-THEORETIC METHODS

Minimum Distance Classifier
• Suppose that we define the prototype of each pattern class to
be the mean vector of the patterns of that class:
mj = (1/Nj) Σx∈wj x,   j = 1, 2, …, W   (1)

• Using the Euclidean distance to determine closeness


– reduces the problem to computing the distance measures

Dj(x) = ||x − mj||,   j = 1, 2, …, W   (2)
Minimum Distance Classifier

• Selecting the smallest distance is equivalent to evaluating the functions

dj(x) = xT mj − (1/2) mjT mj,   j = 1, 2, …, W   (3)

• The decision boundary between classes wi and wj for a minimum distance classifier is

dij(x) = di(x) − dj(x) = xT(mi − mj) − (1/2)(mi − mj)T(mi + mj) = 0   (4)
Compute with: m1 = (4.3, 1.3)T, m2 = (1.5, 0.3)T
The boundary is a line for n = 2, a plane for n = 3, and a hyperplane for n > 3.
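A brief sketch of classifying a pattern with the decision functions of equation (3), using the two example means above; the query point and helper names are made up:

import numpy as np

def decision_value(x, m):
    """d_j(x) = x^T m_j - 0.5 * m_j^T m_j, as in equation (3)."""
    return x @ m - 0.5 * m @ m

def classify_min_distance(x, means):
    """Assign x to the class with the largest decision value (smallest Euclidean distance)."""
    return int(np.argmax([decision_value(x, m) for m in means]))

m1 = np.array([4.3, 1.3])
m2 = np.array([1.5, 0.3])
x = np.array([3.5, 1.0])                     # illustrative query pattern
print(classify_min_distance(x, [m1, m2]))    # 0 -> class w1 (d1 = 6.26 > d2 = 4.38)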
Minimum Distance Classifier
• Advantages:
1. Simple and intuitive
2. Can handle rotation
3. Robust to intensity changes
4. With a suitable choice of features, can handle the mirror problem

• Disadvantages:
1. Collecting and averaging samples takes time; many samples are needed for high accuracy (more samples, more accuracy)
2. Sensitive to displacement
3. Typically uses only a few features, so accuracy is lower than other methods
4. Sensitive to scaling
Matching by Correlation
• Correlation is the basis for finding matches of a sub-image w(s, t) of size J × K within an image f(x, y) of size M × N, where we assume that J ≤ M and K ≤ N

c(x, y) = Σs Σt w(s, t) f(x + s, y + t),   for x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1   (5)
Matching by Correlation
• The correlation function has the disadvantage of being sensitive to
changes in the amplitude of f and w
– For example, doubling all values of f doubles the value of c ( x, y )

• An approach frequently used to overcome this difficulty is to perform


matching via the normalized correlation coefficient

γ(x, y) = Σs Σt [w(s, t) − w̄] [f(x + s, y + t) − f̄] / { Σs Σt [w(s, t) − w̄]² · Σs Σt [f(x + s, y + t) − f̄]² }^(1/2)

where w̄ is the average value of the template w, and f̄ is the average value of f in the region coincident with w
• The correlation coefficient is scaled in the range -1 to 1, independent of
scale changes in the amplitude of f and w
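A minimal sketch of evaluating γ(x, y) at a single offset, assuming f and w are NumPy arrays; the small test arrays, function name, and (row, column) indexing convention are illustrative:

import numpy as np

def ncc_at(f, w, x, y):
    """Normalized correlation coefficient between template w and the patch of f at (x, y)."""
    patch = f[x:x + w.shape[0], y:y + w.shape[1]].astype(float)
    wz = w - w.mean()                       # w(s,t) - w_bar
    pz = patch - patch.mean()               # f(x+s, y+t) - f_bar over the coincident region
    denom = np.sqrt((wz ** 2).sum() * (pz ** 2).sum())
    return (wz * pz).sum() / denom

f = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 8, 7, 6]], dtype=float)
w = np.array([[6, 7],
              [8, 7]], dtype=float)
print(ncc_at(f, w, 1, 1))   # gamma in [-1, 1]; here 1.0, since the patch matches the template exactly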
Matching by Correlation
• Arrangement for obtaining the correlation of f and w at point ( x, y )
Matching by Correlation
• Advantages:
1. Fast
2. Convenient
3. Handles displacement (translation)
• Disadvantages:
1. Sensitive to scaling
2. Sensitive to rotation
3. Confused by similar shapes
4. Sensitive to intensity changes
5. Mirror problem
6. Color is not used in the matching
Matching by Correlation: Example 1
Matching by Correlation: Example 2
Optimum statistical classifiers

• The probability that a particular pattern x comes from class wi is denoted p(wi | x)

• If the pattern classifier decides that x came from wj when it actually came from wi, it incurs a loss, denoted Lij

• The conditional average risk (or loss):

rj(x) = Σk=1..W Lkj p(wk | x)
Optimum statistical classifiers

• From basic probability theory, we know that

p(A | B) = [p(A) p(B | A)] / p(B)

⇒ rj(x) = (1/p(x)) Σk=1..W Lkj p(x | wk) P(wk)

⇒ rj(x) = Σk=1..W Lkj p(x | wk) P(wk)

since 1/p(x) is positive and common to all the rj(x), it can be dropped
Optimum statistical classifiers

• Thus the Bayes classifier assigns an unknown pattern x to class wi if ri(x) < rj(x) for all j ≠ i

⇒ Σk=1..W Lki p(x | wk) P(wk) < Σq=1..W Lqj p(x | wq) P(wq)   ∀j; j ≠ i

• The loss for a correct decision is generally assigned a value of zero, and the loss for an incorrect decision a value of 1:

⇒ Lij = 1 − δij

⇒ rj(x) = Σk=1..W (1 − δkj) p(x | wk) P(wk) = p(x) − p(x | wj) P(wj)
Optimum statistical classifiers

• The Bayes classifier then assigns a pattern x to class wi if

p(x) − p(x | wi) P(wi) < p(x) − p(x | wj) P(wj)   ∀j; j ≠ i

• or, equivalently, if

p(x | wi) P(wi) > p(x | wj) P(wj)   ∀j; j ≠ i

• This implies a decision function of the form:

dj(x) = p(x | wj) P(wj)
Remember the decision-theoretic classification!
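A tiny worked example of this decision function for two classes, with made-up likelihoods and priors:

# Two classes w1, w2 with illustrative class-conditional densities evaluated at one x
p_x_given_w = [0.40, 0.10]     # p(x | w1), p(x | w2)
P_w = [0.30, 0.70]             # priors P(w1), P(w2)

d = [p * P for p, P in zip(p_x_given_w, P_w)]   # d_j(x) = p(x | w_j) P(w_j); d1 ≈ 0.12, d2 ≈ 0.07
winner = max(range(len(d)), key=lambda j: d[j])
print("assign x to class w%d" % (winner + 1))   # w1, since 0.12 > 0.07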
Optimum statistical classifiers

• Bayes Classifier for Gaussian Pattern Classes


• Let us consider a 1-D problem (n=1) involving
two pattern classes (W=2) governed by
Gaussian densities
dj(x) = p(x | wj) P(wj) = [1 / (√(2π) σj)] exp(−(x − mj)² / (2σj²)) P(wj),   j = 1, 2
Consider a 1-D problem involving two pattern classes (W=2) governed
by Gaussian densities with means m1 and m2.

Decision boundary: the point x0 where d1(x0) = d2(x0)


Because the classifier is trying to minimize the loss of classification
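For example, if the two classes have equal priors P(w1) = P(w2) and equal variances σ1 = σ2, setting d1(x0) = d2(x0) and cancelling the common factors gives x0 = (m1 + m2)/2, i.e. the boundary is simply the midpoint between the two means; unequal priors shift x0 toward the mean of the less likely class.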
Optimum statistical classifiers

• In the n-dimensional case, the Gaussian


density of the vectors in the jth pattern class
has the form

p(x | wj) = [1 / ((2π)^(n/2) |Cj|^(1/2))] exp(−(1/2)(x − mj)T Cj^(-1) (x − mj))

where Cj is the covariance matrix of class wj

⇒ dj(x) = ln P(wj) − (1/2) ln|Cj| − (1/2)(x − mj)T Cj^(-1) (x − mj)
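A compact sketch of evaluating this decision function for two classes; the means, covariances, priors, and query point below are illustrative placeholders:

import numpy as np

def gaussian_decision(x, mean, cov, prior):
    """d_j(x) = ln P(w_j) - 0.5 ln|C_j| - 0.5 (x - m_j)^T C_j^{-1} (x - m_j)."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return np.log(prior) - 0.5 * logdet - 0.5 * d @ np.linalg.solve(cov, d)

# Illustrative 2-D classes
m1, C1, P1 = np.array([0.0, 0.0]), np.eye(2), 0.5
m2, C2, P2 = np.array([3.0, 3.0]), 2.0 * np.eye(2), 0.5
x = np.array([0.5, 1.0])

scores = [gaussian_decision(x, m1, C1, P1), gaussian_decision(x, m2, C2, P2)]
print("assign x to class w%d" % (int(np.argmax(scores)) + 1))   # w1 for this query point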
Optimum statistical classifiers

• Advantages:
1. Can be used as a stand-alone classifier
2. Can also be combined with other approaches

• Disadvantages:
1. Estimating the densities requires many training samples, which takes time

Consult your book to see how the minimum distance classifier can be derived from an optimum statistical classifier!
Find out the decision surface/boundary between two pattern classes

1. Means
2. Covariances
3. Decision functions
4. Decision boundary/surface
Bayes Classification for Multispectral Data (example figures)
Questions
