Naïve Bayes Classifier
▪ The prior probabilities p(Ci) for the individual classes Ci can be easily
computed from the training set as p(Ci) = si / s, where si is the number of
training samples belonging to class Ci and s is the total number of training samples.
▪ Then, for the following sample, assign the appropriate class label
based on the above training data.
▪ Thus, the naïve Bayes classifier predicts buy = yes for the given
sample with feature vector f.
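The prediction procedure above can be sketched in code. The tiny "buy" training set below is hypothetical (the actual training table from the slides is not reproduced here); it only illustrates how the priors and the class-conditional probabilities combine under the naive independence assumption.

```python
from collections import Counter

# Hypothetical training data: (age, income, buy).
# The real table from the slides is not shown; these rows are illustrative only.
train = [
    ("young",  "high",   "no"),
    ("young",  "medium", "no"),
    ("middle", "high",   "yes"),
    ("senior", "medium", "yes"),
    ("senior", "low",    "yes"),
    ("middle", "low",    "yes"),
]

def predict(sample):
    """Return the class maximizing p(C) * prod_j p(f_j | C)."""
    labels = [row[-1] for row in train]
    n = len(train)
    best_label, best_score = None, -1.0
    for c, count in Counter(labels).items():
        score = count / n                       # prior p(C) = s_c / s
        for j, value in enumerate(sample):      # naive independence assumption
            match = sum(1 for row in train if row[-1] == c and row[j] == value)
            score *= match / count              # conditional p(f_j | C)
        if score > best_score:
            best_label, best_score = c, score
    return best_label

print(predict(("senior", "low")))  # prints "yes"
```

For the feature vector ("senior", "low"), class "no" scores zero (no "senior, no" row), while class "yes" scores 4/6 · 2/4 · 2/4 = 1/6, so the classifier predicts yes.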
▪ With the above training data, predict the class (evade = 'Yes' or
evade = 'No') to which the following instance belongs:
▪ In the given table, for the class "No", the mean and sample variance
of Income are computed as:
• Mean (µ) = 110K
• Sample variance (σ²) = 2975
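For a continuous attribute such as Income, Gaussian naive Bayes estimates a mean and sample variance per class and evaluates a normal density as the likelihood. The Income values below are an assumed sample (the slide's table is not reproduced), chosen so that the class-"No" statistics match the stated figures of mean 110K and sample variance 2975.

```python
import math

# Hypothetical Income values (in K) for the class "No"; chosen so that
# the computed statistics match those stated on the slide.
income_no = [125, 100, 70, 120, 60, 220, 75]

n = len(income_no)
mean = sum(income_no) / n
# Sample variance divides by n - 1, not n.
var = sum((x - mean) ** 2 for x in income_no) / (n - 1)

def gaussian_pdf(x, mu, sigma2):
    """Normal density used as p(Income = x | class) in Gaussian naive Bayes."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

print(mean, var)                    # 110.0 2975.0
print(gaussian_pdf(120, mean, var)) # likelihood of Income = 120K given class "No"
```

Note the density value can exceed probabilities' usual scale for small variances; it is used only relatively, to compare classes.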
▪ Given the feature vector f for a particular pattern, we have already seen that the
posterior probability of class C given the feature vector f is:
p(C|f) = p(f|C) p(C) / p(f)
▪ We assign the pattern to class C if, among all the classes, C has the maximum
posterior probability. This is known as the maximum a posteriori (MAP) hypothesis.
▪ If we assume the prior probabilities of all the classes are equal, then the given
pattern is assigned to the class having maximum likelihood (i.e., the largest p(f|C)).
This is known as the maximum likelihood hypothesis.
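The two decision rules can be contrasted with a small sketch; the likelihood and prior values below are hypothetical numbers, not taken from the slides.

```python
# Hypothetical likelihoods p(f|C) and priors p(C) for two classes.
likelihood = {"C1": 0.30, "C2": 0.20}
prior      = {"C1": 0.25, "C2": 0.75}

# Maximum likelihood: ignore priors, pick the class with the largest p(f|C).
ml_class = max(likelihood, key=likelihood.get)

# MAP: weight each likelihood by its prior, pick the largest p(f|C) * p(C).
map_class = max(likelihood, key=lambda c: likelihood[c] * prior[c])

print(ml_class)   # "C1", since 0.30 > 0.20
print(map_class)  # "C2", since 0.20 * 0.75 = 0.15 > 0.30 * 0.25 = 0.075
```

The example shows the two hypotheses can disagree: with equal priors they coincide, but an unequal prior can flip the MAP decision away from the maximum-likelihood one.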