Object Recognition
Minimum distance classifier
d1(x) = 4.3x1 + 1.3x2 - 10.1
d2(x) = 1.5x1 + 0.3x2 - 1.17
Decision boundary: d12(x) = d1(x) - d2(x) = 2.8x1 + 1.0x2 - 8.9 = 0 (constant rounded from -8.93)
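A minimal sketch of this decision rule in Python; the coefficients are taken from the example above, and a pattern x is assigned to class 1 when d12(x) > 0 and to class 2 when d12(x) < 0:

```python
# discriminant functions from the example above
def d1(x):
    return 4.3 * x[0] + 1.3 * x[1] - 10.1

def d2(x):
    return 1.5 * x[0] + 0.3 * x[1] - 1.17

def classify(x):
    # decision boundary: d12(x) = d1(x) - d2(x) = 0
    return 1 if d1(x) - d2(x) > 0 else 2

print(classify([4.5, 1.5]))  # d12 = 2.8*4.5 + 1.0*1.5 - 8.93 = 5.17 > 0 -> class 1
```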
Matching by Correlation
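Matching by correlation slides a template over the image and scores every position; a common score is the normalized cross-correlation coefficient, whose peak marks the best match. A minimal NumPy sketch, assuming a grayscale image and template (the function name is illustrative):

```python
import numpy as np

def match_template_ncc(image, template):
    """Return the normalized cross-correlation coefficient at every
    position where the template fits fully inside the image."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            out[r, c] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

# scores = match_template_ncc(img, tmpl)
# best = np.unravel_index(scores.argmax(), scores.shape)  # (row, col) of best match
```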
Other classification algorithms
• K-nearest neighbor
• K-means clustering
• Multilayer perceptron (MLP)
• Support vector machine (SVM)
• Self-organizing maps (SOM)
• Neural networks
K-nearest neighbor (KNN)
• Step-1: Select the number K of neighbors.
• Step-2: Calculate the Euclidean distance from the new data point to every point in the training set.
• Step-3: Take the K nearest neighbors according to the calculated Euclidean distances.
• Step-4: Among these K neighbors, count the number of data points in each category.
• Step-5: Assign the new data point to the category with the largest count among the K neighbors.
• Step-6: Our model is ready (a minimal sketch follows).
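A minimal sketch of these steps, assuming NumPy arrays X_train (feature vectors) and y_train (labels); the function name knn_predict is illustrative:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Step-2: Euclidean distance from x_new to every training point
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Step-3: indices of the K nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Steps 4-5: majority vote among the K neighbors' labels
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# example: X_train = np.array([[1, 1], [1, 2], [5, 5]]), y_train = np.array([0, 0, 1])
# knn_predict(X_train, y_train, np.array([1.2, 1.1]), k=3)  -> 0
```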
K-means clustering
• Unsupervised learning algorithm
• Requires the number of clusters, K, as input.
• Step-1: Select the number K of clusters.
• Step-2: Select K random points as initial centroids (they need not come from the input dataset).
• Step-3: Assign each data point to its closest centroid, which forms the K clusters.
• Step-4: Recompute the centroid of each cluster as the mean of the points assigned to it.
• Step-5: Repeat Step-3, i.e. reassign each data point to the new closest centroid.
• These two alternating stages correspond to the E (expectation) and M (maximization) steps of the EM algorithm: Expectation asks "what is the expected class of each point?"; Maximization asks "what is the maximum likelihood estimate (MLE) of each cluster mean?"
• Step-6: If any reassignment occurred, go back to Step-4; otherwise finish.
• Step-7: The model is ready (a minimal sketch follows).
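A minimal sketch of the loop above in NumPy; the function name and defaults are illustrative, and empty clusters simply keep their old centroid:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step-2: pick K random data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step-3 (E-step): assign each point to its closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step-4 (M-step): move each centroid to the mean of its cluster
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Step-6: stop when no centroid moves, i.e. no reassignment occurred
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```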
Limitations of K-means clustering
• Every data point is assigned uniquely to one and only one cluster
• A point may be equidistant from two cluster centers
• A probabilistic approach will have a 'soft' assignment of data points reflecting the level of uncertainty
K-means in image segmentation
• Goal: partition the image into regions
– each of which has a homogeneous visual appearance
– or corresponds to objects
– or parts of objects
• Each pixel is a point in (R, G, B) space
• K-means clustering is used with a palette of K colors
• The method does not take the proximity of different pixels into account (see the sketch below)
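A minimal sketch of segmentation by colour quantization, assuming an (H, W, 3) RGB image stored as a NumPy array; pixels are clustered in RGB space and repainted with their cluster's mean colour, ignoring pixel proximity exactly as noted above (function name illustrative):

```python
import numpy as np

def quantize_image(img, k=8, n_iter=20, seed=0):
    h, w, _ = img.shape
    X = img.reshape(-1, 3).astype(float)   # each pixel is a point in (R, G, B) space
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # final assignment with the converged palette of K colours
    labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
    return centroids[labels].reshape(h, w, 3).astype(img.dtype)
```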
Support vector classification
• SVM picks the best separating hyperplane according to some criterion
– e.g. maximum margin
• Training is an optimization process
• The training set is effectively reduced to a relatively small number of support vectors
Feature spaces
• We may separate data by mapping it to a higher-dimensional feature space
– the feature space may even have an infinite number of dimensions!
• We need not explicitly construct the new feature space
Kernels
• We may use kernel functions to implicitly map to a new feature space
• Kernel function: K(x, z) = φ(x) · φ(z)
• The kernel must be equivalent to an inner product in some feature space (a numeric check follows)
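A small numeric check of this equivalence: for 2-D inputs the polynomial kernel K(x, y) = (x·y)² equals an ordinary inner product after the explicit map φ(x) = (x1², √2·x1x2, x2²), so the kernel computes the feature-space inner product without ever constructing φ:

```python
import numpy as np

def phi(x):
    # explicit feature map for the degree-2 polynomial kernel
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def kernel(x, y):
    return np.dot(x, y) ** 2

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
assert np.isclose(kernel(x, y), np.dot(phi(x), phi(y)))  # both equal 16.0
```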
Linear separator
Selecting best separator
Support vector classification
• Find the closest points between the convex hulls of the two classes
• The separating plane bisects the segment joining these closest points
Classification margin
Support vector classification
• Maximum margin
Support vector classification
• Misclassification error and function complexity bound the generalization error.
• Maximizing the margin minimizes complexity.
• This "eliminates" over-fitting.
• The solution depends only on the support vectors, not on the number of attributes (a minimal sketch follows).
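A minimal sketch using scikit-learn's SVC on toy data (assuming scikit-learn is available; the data is illustrative), showing that only the support vectors define the fitted separator:

```python
import numpy as np
from sklearn.svm import SVC

# two linearly separable 2-D classes (illustrative toy data)
X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear', C=1e3)  # large C approximates a hard margin
clf.fit(X, y)

print(clf.support_vectors_)        # the few points that define the solution
print(clf.predict([[3.0, 3.0]]))   # classify a new point
```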
Linear SVM
Self study
• Introduction to Neural networks
Thank you