BAYES OPTIMAL CLASSIFIER

• The Bayes optimal classifier is a probabilistic model that makes the most probable prediction for a new example.
• It is closely related to the Maximum A Posteriori (MAP) hypothesis.
• MAP is a probabilistic framework that finds the single most probable hypothesis given a training dataset.
• What is the most probable classification of the new instance
given the training data?
• The most probable classification of the new instance is obtained by combining the predictions of all hypotheses, weighted by their posterior probabilities.
• The equation below shows how to calculate P(vj|D), the probability of each candidate classification vj from a set V for the new instance, given the training data D and a space of hypotheses H.
• The optimal classification of the new instance is the value vj for which P(vj|D) is maximum.
• Bayes optimal classification:

  argmax_{vj ∈ V} Σ_{hi ∈ H} P(vj|hi) P(hi|D)
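A minimal Python sketch of this rule may help; the function and argument names below are illustrative and not part of the original slides. It simply returns the classification whose posterior-weighted vote over all hypotheses is largest.

```python
# Sketch of Bayes optimal classification (illustrative names, not from the slides).
def bayes_optimal_classification(values, hypotheses, posterior, likelihood):
    """Return the classification v in `values` that maximizes
    the weighted vote: sum over h of P(v | h) * P(h | D).

    posterior[h]       -- P(h | D), posterior of hypothesis h given the training data
    likelihood[(v, h)] -- P(v | h), probability that hypothesis h assigns classification v
    """
    def weighted_vote(v):
        return sum(likelihood[(v, h)] * posterior[h] for h in hypotheses)

    return max(values, key=weighted_vote)
```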

• Let's consider an example: a hypothesis space containing three hypotheses h1, h2 and h3.
• The posterior probabilities of these hypotheses given the training data are P(h1|D) = 0.4, P(h2|D) = 0.3 and P(h3|D) = 0.3.
• Thus h1 is the MAP hypothesis.
• Suppose a new instance x is to be classified using this optimal classifier, and that x is classified positive by h1 but negative by h2 and h3.
• Taking all hypotheses into consideration,
• the probability that x is positive is 0.4 (from h1), and
• the probability that it is negative is 0.6 (0.3 from h2 + 0.3 from h3).
• Hence, the most probable classification is negative.
• The set of possible classifications of the new instance is V = { +, - }.
• P(h1|D) = 0.4,  P(-|h1) = 0,  P(+|h1) = 1
• P(h2|D) = 0.3,  P(-|h2) = 1,  P(+|h2) = 0
• P(h3|D) = 0.3,  P(-|h3) = 1,  P(+|h3) = 0
• Therefore,

  Σ_{hi ∈ H} P(+|hi) P(hi|D) = 0.4
  Σ_{hi ∈ H} P(-|hi) P(hi|D) = 0.6

• Hence,

  argmax_{vj ∈ {+,-}} Σ_{hi ∈ H} P(vj|hi) P(hi|D) = -
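As a check on the arithmetic above, a short Python snippet (hypothesis and class labels are written as plain strings here for illustration) reproduces the weighted votes of 0.4 for + and 0.6 for -.

```python
# Worked example: posteriors P(h|D) and per-hypothesis predictions P(v|h).
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}   # P(h | D)
likelihood = {                                   # P(v | h)
    ("+", "h1"): 1, ("-", "h1"): 0,              # h1 predicts positive
    ("+", "h2"): 0, ("-", "h2"): 1,              # h2 predicts negative
    ("+", "h3"): 0, ("-", "h3"): 1,              # h3 predicts negative
}

for v in ("+", "-"):
    vote = sum(likelihood[(v, h)] * posterior[h] for h in posterior)
    print(v, vote)   # "+" -> 0.4, "-" -> 0.6

# The negative class has the larger weighted vote, so the Bayes optimal
# classification of x is negative, even though the MAP hypothesis h1 says positive.
```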

• The most probable classification in this case (negative) is different from the classification generated by the MAP hypothesis (positive).
• Any model that classifies new instances
according to the above equation is called a
Bayes optimal classifier.
• This method maximizes the probability that the new instance is classified correctly, given the available data, hypothesis space and prior probabilities over the hypotheses.
