
AI 202: Trends & Techniques in Artificial Intelligence

Lecture 10 – Naïve Bayes Classifier

Instructor: Dr. Hashim Ali


Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi
[Spring 2024]

Machine Learning

§ Up until now: how to use a model to make optimal decisions

§ Machine learning: how to acquire a model from data / experience
§ Learning parameters (e.g. probabilities)
§ Learning structure (e.g. BN graphs)
§ Learning hidden concepts (e.g. clustering)

§ Today: model-based classification with Naive Bayes



Classification


Example: Spam Filter

§ Input: an email
§ Output: spam/ham

§ Setup:
§ Get a large collection of example emails, each labeled "spam" or "ham"
§ Note: someone has to hand label all this data!
§ Want to learn to predict labels of new, future emails

§ Features: The attributes used to make the ham / spam decision
§ Words: FREE!
§ Text Patterns: $dd, CAPS
§ Non-text: SenderInContacts
§ …

Example emails shown on the slide:
Spam: "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …"
Spam: "TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT. 99 MILLION EMAIL ADDRESSES FOR ONLY $99"
Ham: "Ok, Iknow this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened."

Example: Digit Recognition

§ Input: images / pixel grids
§ Output: a digit 0-9

§ Setup:
§ Get a large collection of example images, each labeled with a digit
§ Note: someone has to hand label all this data!
§ Want to learn to predict labels of new, future digit images

§ Features: The attributes used to make the digit decision
§ Pixels: (6,8)=ON
§ Shape Patterns: NumComponents, AspectRatio, NumLoops
§ …

[Slide shows example digit images labeled 0, 1, 2, 1 and unlabeled query images marked "?"]

Other Classification Tasks


§ Classification: given inputs x, predict labels (classes) y

§ Examples:
§ Spam detection (input: document,
classes: spam / ham)
§ OCR (input: images, classes: characters)
§ Medical diagnosis (input: symptoms,
classes: diseases)
§ Automatic essay grading (input: document,
classes: grades)
§ Fraud detection (input: account activity,
classes: fraud / no fraud)
§ Customer service email routing
§ … many more

§ Classification is an important commercial technology!


Model-Based Classification


Model-Based Classification

§ Model-based approach
§ Build a model (e.g. a Bayes' net) where both the label and the features are random variables
§ Instantiate any observed features
§ Query for the distribution of the label conditioned on the features

Naïve Bayes for Digits

§ Naïve Bayes: Assume all features are independent effects of the label
[Model: a label node Y with feature nodes F1, F2, …, Fn as its children]

§ Simple digit recognition version:
§ One feature (variable) Fij for each grid position <i,j>
§ Feature values are on / off, based on whether intensity is more or less than 0.5 in the underlying image
§ Each input maps to a feature vector, e.g. an image becomes a long vector of on/off pixel values

§ Here: lots of features, each is binary valued

§ Naïve Bayes model:
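For reference, the Naive Bayes model factorizes the joint distribution of the label and the features as:

P(Y, F1, …, Fn) = P(Y) · P(F1|Y) · P(F2|Y) · … · P(Fn|Y)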


§ What do we need to learn?

General Naïve Bayes

§ A general Naive Bayes model: a label node Y with feature nodes F1, F2, …, Fn as its children

§ Parameter counts:
§ The full joint distribution over Y and the features would need |Y| x |F|^n values
§ The Naive Bayes model needs only |Y| parameters for P(Y) plus n x |F| x |Y| parameters for the conditional tables P(Fi|Y)

§ We only have to specify how each feature depends on the class
§ Total number of parameters is linear in n

Inference for Naïve Bayes

§ Goal: compute the posterior distribution over the label variable Y

§ Step 1: get the joint probability of the label and the evidence, for each label
§ Step 2: sum these to get the probability of the evidence
§ Step 3: normalize by dividing Step 1 by Step 2 (a small sketch follows below)
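A minimal sketch of these three steps in Python; the probability values are made up purely for illustration:

import math

# Hypothetical model: P(Y) and P(F_i = observed value | Y) for three observed features
prior = {"spam": 0.5, "ham": 0.5}
likelihoods = {
    "spam": [0.8, 0.1, 0.6],
    "ham":  [0.2, 0.4, 0.1],
}

# Step 1: joint probability of each label with the evidence
joint = {y: prior[y] * math.prod(likelihoods[y]) for y in prior}

# Step 2: probability of the evidence (sum over labels)
z = sum(joint.values())

# Step 3: normalize to get the posterior P(Y | F1 ... Fn)
posterior = {y: joint[y] / z for y in joint}
print(posterior)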


General Naïve Bayes

§ What do we need in order to use Naïve Bayes?

§ Inference method (we just saw this part)
§ Start with a bunch of probabilities: P(Y) and the P(Fi|Y) tables
§ Use standard inference to compute P(Y|F1…Fn)
§ Nothing new here

§ Estimates of local conditional probability tables
§ P(Y), the prior over labels
§ P(Fi|Y) for each feature (evidence variable)
§ These probabilities are collectively called the parameters of the model and denoted by θ
§ Up until now, we assumed these appeared by magic, but…
§ …they typically come from training data counts: we'll look at this soon


Example: Conditional Probabilities

y    P(Y=y)    P(F1=on | Y=y)    P(F2=on | Y=y)
1    0.1       0.01              0.05
2    0.1       0.05              0.01
3    0.1       0.05              0.90
4    0.1       0.30              0.80
5    0.1       0.80              0.90
6    0.1       0.90              0.90
7    0.1       0.05              0.25
8    0.1       0.60              0.85
9    0.1       0.50              0.60
0    0.1       0.80              0.80

(F1 and F2 stand for two example pixel features.)


A Spam Filter

§ Naïve Bayes spam filter

§ Data:
§ Collection of emails, labeled spam or ham
§ Note: someone has to hand label all this data!
§ Split into training, held-out, test sets

§ Classifiers:
§ Learn on the training set
§ (Tune it on a held-out set)

(The slide repeats the example spam and ham emails shown earlier.)

Naïve Bayes for Text

§ Bag-of-words Naïve Bayes:


§ Features: Wi is the word at position i
§ As before: predict the label conditioned on the feature variables (spam vs. ham)
§ As before: assume features are conditionally independent given the label
§ New: each Wi is identically distributed (note: Wi is the word at position i, not the ith word in the dictionary!)

§ Generative model:
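In the bag-of-words setting, this generative model is:

P(Y, W1, …, Wn) = P(Y) · P(W1|Y) · P(W2|Y) · … · P(Wn|Y), where every position i shares the same table P(W|Y)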

§ “Tied” distributions and bag-of-words


§ Usually, each variable gets its own conditional probability distribution P(F|Y)
§ In a bag-of-words model
§ Each position is identically distributed
§ All positions share the same conditional probs P(W|Y)
§ Why make this assumption?
§ Called “bag-of-words” because model is insensitive to word order or reordering

Example: Spam Filtering


§ Model: a prior P(Y) and one word distribution P(W|Y) per class
§ What are the parameters?

P(Y):   ham : 0.66    spam : 0.33

P(W|Y), one table per class (estimated word frequencies):
  the : 0.0156    to : 0.0153    and : 0.0115    of : 0.0095    you : 0.0093    a : 0.0086    with : …
  the : 0.0210    to : 0.0133    of : 0.0119     2002 : 0.0110   with : 0.0108   from : 0.0107   and : …

§ Where do these tables come from?

Spam Example

Word        P(w|spam)   P(w|ham)   Running log-prob (spam)   Running log-prob (ham)


(prior) 0.33333 0.66666 -1.1 -0.4
Gary 0.00002 0.00021 -11.8 -8.9
would 0.00069 0.00084 -19.1 -16.0
you 0.00881 0.00304 -23.8 -21.8
like 0.00086 0.00083 -30.9 -28.9
to 0.01517 0.01339 -35.1 -33.2
lose 0.00008 0.00002 -44.5 -44.0
weight 0.00016 0.00002 -53.3 -55.0
while 0.00027 0.00027 -61.5 -63.2
you 0.00881 0.00304 -66.2 -69.0
sleep 0.00006 0.00001 -76.0 -80.5

P(spam | w) = 98.9%
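A minimal sketch of how the running totals above are computed: we add log P(w | class) word by word instead of multiplying many tiny probabilities (the values are taken from the table above):

import math

log_prior = {"spam": math.log(0.33333), "ham": math.log(0.66666)}

# (P(w | spam), P(w | ham)) from the table above
p_word = {
    "Gary":   (0.00002, 0.00021), "would": (0.00069, 0.00084),
    "you":    (0.00881, 0.00304), "like":  (0.00086, 0.00083),
    "to":     (0.01517, 0.01339), "lose":  (0.00008, 0.00002),
    "weight": (0.00016, 0.00002), "while": (0.00027, 0.00027),
    "sleep":  (0.00006, 0.00001),
}
words = ["Gary", "would", "you", "like", "to", "lose",
         "weight", "while", "you", "sleep"]

# Running log-probabilities, as in the two right-hand columns
log_spam = log_prior["spam"] + sum(math.log(p_word[w][0]) for w in words)
log_ham  = log_prior["ham"]  + sum(math.log(p_word[w][1]) for w in words)

# Normalize to recover the posterior over {spam, ham}
p_spam = 1.0 / (1.0 + math.exp(log_ham - log_spam))
print(p_spam)   # close to the 98.9% reported on the slide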


Training and Testing


Important Concepts
§ Data: labeled instances, e.g. emails marked spam/ham
§ Training set
§ Held-out set
§ Test set

§ Features: attribute-value pairs which characterize each x

§ Experimentation cycle
§ Learn parameters (e.g. model probabilities) on the training set
§ (Tune hyperparameters on the held-out set)
§ Compute accuracy on the test set
§ Very important: never "peek" at the test set!

§ Evaluation
§ Accuracy: fraction of instances predicted correctly

§ Overfitting and generalization
§ Want a classifier which does well on test data
§ Overfitting: fitting the training data very closely, but not generalizing well
§ We'll investigate overfitting and generalization formally in a few lectures


Generalization and Overfitting


Overfitting

[Plot: a degree-15 polynomial fit to the training points; it tracks the data closely but oscillates wildly between them, illustrating overfitting.]


Example: Overfitting

[Image: posteriors over digit labels for a test image under the overfit model — "2 wins!!"]


Example: Overfitting

§ Posteriors determined by relative probabilities (odds ratios):

south-west : inf        screens : inf
nation : inf            minute : inf
morally : inf           guaranteed : inf
nicely : inf            $205.00 : inf
extent : inf            delivery : inf

What went wrong here?

Generalization and Overfitting


§ Relative frequency parameters will overfit the training data!
§ Just because we never saw a 3 with pixel (15,15) on during training doesn’t mean we won’t see it at
test time
§ Unlikely that every occurrence of “minute” is 100% spam
§ Unlikely that every occurrence of “seriously” is 100% ham
§ What about all the words that don’t occur in the training set at all?
§ In general, we can’t go around giving unseen events zero probability
§ As an extreme case, imagine using the entire email as the only
feature
§ Would get the training data perfect (if deterministic labeling)
§ Wouldn’t generalize at all
§ Just making the bag-of-words assumption gives us some generalization, but isn’t enough
§ To generalize better: we need to smooth or regularize the estimates

Parameter Estimation


Parameter Estimation
§ Estimating the distribution of a random variable

§ Elicitation: ask a human (why is this hard?)

§ Empirically: use training data (learning!)
§ E.g.: for each outcome x, look at the empirical rate of that value in the training sample (the slide's running example: a bag of red and blue balls, with an observed sample r, r, b)
§ This is the estimate that maximizes the likelihood of the data
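Written out, the maximum likelihood estimate the slide refers to is the relative frequency:

P_ML(x) = count(x) / N

For the sample (r, r, b) with N = 3: P_ML(r) = 2/3 and P_ML(b) = 1/3.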

Smoothing


Maximum Likelihood?
§ Relative frequencies are the maximum likelihood estimates

§ Another option is to consider the most likely parameter value given the data (the maximum a posteriori, or MAP, estimate)


Unseen Events


Laplace Smoothing

§ Laplace's estimate:
§ Pretend you saw every outcome once more than you actually did
(Running example: the observed sample is r, r, b)


Laplace Smoothing

§ Laplace's estimate (extended):
§ Pretend you saw every outcome k extra times

§ What's Laplace with k = 0? (Just the maximum likelihood estimate.)

§ k is the strength of the prior

§ Laplace for conditionals: smooth each conditional distribution P(X|Y=y) independently
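For reference, the standard Laplace-smoothed estimates (N observations, |X| possible outcomes, smoothing strength k):

P_LAP(x) = (count(x) + 1) / (N + |X|)
P_LAP,k(x) = (count(x) + k) / (N + k·|X|)
P_LAP,k(x|y) = (count(x, y) + k) / (count(y) + k·|X|)

With k = 0 this reduces to the maximum likelihood estimate. For the r, r, b sample and k = 1: P_LAP(r) = 3/5 and P_LAP(b) = 2/5.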

Estimation: Linear Interpolation*

§ In practice, Laplace often performs poorly for P(X|Y):
§ When |X| is very large
§ When |Y| is very large

§ Another option: linear interpolation
§ Also get the empirical P(X) from the data
§ Make sure the estimate of P(X|Y) isn't too different from the empirical P(X)
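Written out, the interpolated estimate described above is (with mixing weight α between 0 and 1):

P_LIN(x|y) = α · P̂(x|y) + (1 − α) · P̂(x)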


Real NB: Smoothing


§ For real classification problems, smoothing is critical
§ New odds ratios:

helvetica : 11.4        verdana : 28.8
seems : 10.8            Credit : 28.4
group : 10.2            ORDER : 27.2
ago : 8.4               <FONT> : 26.9
areas : …               money : …

Do these make more sense?

Tuning


Tuning on Held-Out Data

§ Now we've got two kinds of unknowns
§ Parameters: the probabilities P(X|Y), P(Y)
§ Hyperparameters: e.g. the amount / type of smoothing to do, k, α

§ What should we learn where?
§ Learn parameters from the training data
§ Tune hyperparameters on held-out data (a minimal sketch follows below)
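A minimal sketch of the idea, assuming scikit-learn's MultinomialNB (its alpha parameter plays the role of the smoothing strength k); the toy data below are purely illustrative:

from sklearn.naive_bayes import MultinomialNB
import numpy as np

# Toy word-count data: rows are documents, columns are word counts (illustrative only)
X_train = np.array([[2, 0, 1], [0, 3, 0], [1, 0, 2], [0, 2, 1]])
y_train = np.array([1, 0, 1, 0])
X_held  = np.array([[1, 0, 1], [0, 1, 0]])   # held-out documents
y_held  = np.array([1, 0])

# Try several smoothing strengths; keep the one with the best held-out accuracy
best_alpha, best_acc = None, -1.0
for alpha in [0.1, 0.5, 1.0, 2.0, 5.0]:
    model = MultinomialNB(alpha=alpha).fit(X_train, y_train)
    acc = model.score(X_held, y_held)
    if acc > best_acc:
        best_alpha, best_acc = alpha, acc

print(best_alpha, best_acc)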

Features


Errors, and What to Do

§ Examples of errors
Dear GlobalSCAPE Customer,
GlobalSCAPE has partnered with ScanSoft to
offer you the latest version of OmniPage Pro,
for just $99.99* - the regular list price is
$499! The most common question we've received
about this offer is - Is this genuine? We
would like to assure you that this offer is
authorized by ScanSoft, is genuine and valid.
You can get the . . .

. . . To receive your $30 Amazon.com


promotional certificate, click through to
http://www.amazon.com/apparel
and see the prominent link for the $30 offer.
All details are there. We hope you enjoyed
receiving this message. However, if you'd
rather not receive future e-mails announcing
new store launches, please click . . .


What to Do About Errors?


§ Need more features – words aren't enough!
§ Have you emailed the sender before?
§ Have 1K other people just gotten the same email?
§ Is the sending information consistent?
§ Is the email in ALL CAPS?
§ Do inline URLs point where they say they point?
§ Does the email address you by (your) name?

§ Can add these information sources as new features in the Naive Bayes model

Baselines
§ First step: get a baseline
§ Baselines are very simple “straw man” procedures
§ Help determine how hard the task is
§ Help know what a “good” accuracy is

§ Weak baseline: most frequent label classifier


§ Gives all test instances whatever label was most common in the training set
§ E.g. for spam filtering, might label everything as ham
§ Accuracy might be very high if the problem is skewed
§ E.g. calling everything “ham” gets 66%, so a classifier that gets 70% isn’t very good…

§ For real research, usually use previous work as a (strong) baseline


Confidences from a Classifier

§ The confidence of a probabilistic classifier:
§ Posterior over the top label

§ Represents how sure the classifier is of the classification
§ Any probabilistic model will have confidences
§ No guarantee confidence is correct

§ Calibration
§ Weak calibration: higher confidences mean higher accuracy
§ Strong calibration: confidence predicts accuracy

Summary
§ Bayes rule lets us do diagnostic queries with causal probabilities

§ The naïve Bayes assumption takes all features to be independent given the class label

§ We can build classifiers out of a naïve Bayes model using training data

§ Smoothing estimates is important in real systems

§ Classifier confidences are useful, when you can get them



Introduction
§ You are working on a classification problem and have generated your set of hypotheses, created features and discussed the importance of variables.
§ Within an hour, stakeholders want to see the first cut of the model.
§ What will you do? You have hundreds of thousands of data points and quite a few variables in your training data set.
§ In such a situation, if I were in your place, I would have used 'Naive Bayes', which can be extremely fast relative to other classification algorithms.

Topics covered
§ What is the Naive Bayes algorithm?
§ How does the Naive Bayes algorithm work?
§ What are the Pros and Cons of using Naive Bayes?
§ 4 Applications of Naive Bayes Algorithm
§ Steps to build a basic Naive Bayes Model in Python
§ Tips to improve the power of Naive Bayes Model


What is Naïve Bayes algorithm?


§ It is a classification technique based on Bayes’
Theorem with an assumption of independence
among predictors.
§ In simple terms, a Naive Bayes classifier
assumes that the presence of a particular
feature in a class is unrelated to the presence of
any other feature.
§ For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter.
§ Even if these features depend on each other or on the existence of the other features, each of them independently contributes to the probability that this fruit is an apple, which is why the approach is called "naive".

Bayes theorem
§ Bayes theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x) and P(x|c).
§ Look at the equation below:

P(c|x) = P(x|c) · P(c) / P(x)

§ P(c|x) is the posterior probability of the class (c, target) given the predictor (x, attributes)
§ P(c) is the prior probability of the class
§ P(x|c) is the likelihood: the probability of the predictor given the class
§ P(x) is the prior probability of the predictor

How Naïve Bayes algorithm works?


§ Below I have a training data set of weather and a corresponding target variable 'Play' (suggesting possibilities of playing).
§ Now, we need to classify whether players will play or not based on the weather condition.
§ Let's follow the steps below to perform it:
§ Step 1: Convert the data set into a frequency table
§ Step 2: Create a likelihood table by finding the probabilities (e.g. P(Sunny) = 0.36, P(Yes) = 0.64)
§ Step 3: Use the Naive Bayes equation to compute the posterior probability for each class; the class with the highest posterior is the outcome of the prediction

How Naïve Bayes algorithm works?


§ Problem: Players will play if the weather is sunny. Is this statement correct?
§ We can solve it using the method of posterior probability discussed above.
§ P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
§ Here we have P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P(Yes) = 9/14 = 0.64
§ So P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability, so players will play if the weather is sunny (a small sketch of the calculation follows below).
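A minimal sketch of the same calculation in Python, using the counts from the frequency table described above:

# Counts from the weather / Play frequency table (14 days total)
total_days    = 14
yes_days      = 9    # days on which the players played
sunny_days    = 5    # days that were sunny
sunny_and_yes = 3    # sunny days on which the players played

p_sunny_given_yes = sunny_and_yes / yes_days   # P(Sunny | Yes) = 3/9 ~ 0.33
p_yes             = yes_days / total_days      # P(Yes) = 9/14 ~ 0.64
p_sunny           = sunny_days / total_days    # P(Sunny) = 5/14 ~ 0.36

# Bayes theorem: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))   # 0.6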

Pros and Cons of Naive Bayes

§ Pros:
§ It is easy and fast to predict the class of a test data set. It also performs well in multi-class prediction.
§ When the assumption of independence holds, a Naive Bayes classifier performs better compared to other models like logistic regression, and you need less training data.
§ It performs well in the case of categorical input variables compared to numerical variable(s). For numerical variables, a normal distribution is assumed (bell curve, which is a strong assumption).

Pros and Cons of Naive Bayes


§ Cons:
§ If a categorical variable has a category (in the test data set) which was not observed in the training data set, then the model will assign a 0 (zero) probability and will be unable to make a prediction. This is often known as "Zero Frequency". To solve this, we can use a smoothing technique; one of the simplest is Laplace estimation.
§ On the other side, Naive Bayes is also known as a bad estimator, so the probability outputs from predict_proba are not to be taken too seriously.
§ Another limitation of Naive Bayes is the assumption of independent predictors. In real life, it is almost impossible to get a set of predictors which are completely independent.

Applications of Naive Bayes Algorithms


§ Real-time prediction: Naive Bayes is an eager learning classifier and it is fast, so it can be used for making predictions in real time.
§ Multi-class prediction: This algorithm is also well known for its multi-class prediction capability; we can predict the probability of multiple classes of the target variable.
§ Text classification / spam filtering / sentiment analysis: Naive Bayes classifiers are widely used in text classification (due to better results in multi-class problems and the independence assumption) and have a higher success rate compared to other algorithms. As a result, they are widely used in spam filtering and sentiment analysis.

Build a basic model using Naive Bayes in Python

§ Scikit-learn (a Python library) helps to build a Naive Bayes model in Python. There are three types of Naive Bayes model under the scikit-learn library:
§ Gaussian: It is used in classification and it assumes that features follow a normal distribution.
§ Multinomial: It is used for discrete counts. For example, in a text classification problem, instead of "word occurring in the document" we track "how often the word occurs in the document"; you can think of it as the number of times outcome x_i is observed over n trials.
§ Bernoulli: The binomial model is useful if your feature vectors are binary (i.e. zeros and ones), e.g. "word occurs in the document" vs. "word does not occur in the document".
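For the multinomial case, a minimal sketch assuming scikit-learn's CountVectorizer and MultinomialNB; the toy documents and labels are purely illustrative:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy documents and labels (illustrative only)
docs = ["win free money now", "meeting agenda attached",
        "claim your free prize", "project status update attached"]
labels = ["spam", "ham", "spam", "ham"]

# Turn each document into a vector of word counts (the multinomial features)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# alpha is the Laplace smoothing strength discussed earlier
clf = MultinomialNB(alpha=1.0)
clf.fit(X, labels)

# Classify a new message
print(clf.predict(vectorizer.transform(["free money meeting"])))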

Build a basic model using Naive Bayes in Python

# Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np

# Assigning predictor and target variables
x = np.array([[-3,7], [1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])

# Create a Gaussian Naive Bayes classifier
model = GaussianNB()

# Train the model using the training sets
model.fit(x, y)

# Predict output for new observations
predicted = model.predict([[1,2], [3,4]])
print(predicted)
# Output: [3 4]


References & Acknowledgements

§ Contents from George F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th Ed.

