
Text Classification and Naive Bayes

The Task of Text Classification
Is this spam?
Who wrote which Federalist Papers?
1787-8: essays anonymously written by:
Alexander Hamilton, James Madison, and John Jay
to convince New York to ratify the U.S. Constitution
Authorship of 12 of the letters unclear between:

Alexander Hamilton James Madison

1963: solved by Mosteller and Wallace using Bayesian methods


Positive or negative movie review?

unbelievably disappointing
Full of zany characters and richly applied satire, and
some great plot twists
this is the greatest screwball comedy ever filmed
It was pathetic. The worst part about it was the
boxing scenes.
Classification
Assigning subject categories, topics, or genres
Spam detection
Authorship identification (who wrote this?)
Language Identification (is this Portuguese?)
Sentiment analysis

Text Classification: definition

Input:
◦ an instance d
◦ a fixed set of classes C = {c1, c2, …, cJ}

Output: a predicted class c ∈ C


Basic Classification Method:
Hand-coded rules

Rules based on combinations of words or other features


◦ spam: black-list-address OR (“dollars” AND “have been selected”)
Accuracy can be high
• In very specific domains
• If rules are carefully refined by experts
But:
• building and maintaining rules is expensive
• they are too literal and specific: "high-precision, low-recall"
Classification Method:
Supervised Machine Learning
Input:
◦ an instance x
◦ a fixed set of classes C = {c1, c2, …, cJ} or Y = {y1, y2, …, yJ}
◦ a training set of m hand-labeled instances (x1, y1), …, (xm, ym)
Output:
◦ a learned classifier γ: x → y

Classification Methods:
Supervised Machine Learning

Many kinds of classifiers!


• Naïve Bayes (this lecture)
• Logistic regression
• Neural networks
• k-nearest neighbors
• …

We can also use pretrained large language models (LLMs)!


• Fine-tuned as classifiers
• Prompted to give a classification
Text Classification and Naive Bayes

The Naive Bayes Classifier
Naive Bayes Intuition

Simple ("naive") classification method based on


Bayes rule
Relies on very simple representation of instance
◦ I.I.D.
Bayes' Rule Applied to Instances and Classes

• For an instance d and a class c:

P(c|d) = P(d|c) P(c) / P(d)
Naive Bayes Classifier (I)

c_MAP = argmax_{c ∈ C} P(c|d)          MAP is "maximum a posteriori" = most likely class

      = argmax_{c ∈ C} P(d|c) P(c) / P(d)          Bayes rule

      = argmax_{c ∈ C} P(d|c) P(c)          Dropping the denominator (P(d) is the same for every class)
Naive Bayes Classifier (II)

c_MAP = argmax_{c ∈ C} P(x1, x2, …, xn | c) P(c)
                        "Likelihood"        "Prior"

with the instance d represented as features x1 … xn
Naïve Bayes Classifier (IV)

Estimating P(x1, x2, …, xn | c) directly: O(|X|^n · |C|) parameters
◦ Could only be estimated if a very, very large number of training examples was available.
Estimating P(c): How often does this class occur?
◦ We can just count the relative frequencies in a corpus.
Multinomial Naive Bayes Independence
Assumptions

Bag of Words assumption: Assume position doesn’t matter


Conditional Independence: Assume the feature probabilities P(xi|cj) are independent given the class cj:

P(x1, …, xn | c) = P(x1|c) · P(x2|c) · … · P(xn|c)
Multinomial Naive Bayes Classifier

c_NB = argmax_{c ∈ C} P(c) ∏_i P(xi|c)
Applying Multinomial Naive Bayes Classifiers to Text
Classification

positions ← all word positions in the test instance

c_NB = argmax_{c ∈ C} P(c) ∏_{i ∈ positions} P(wi|c)
Problems with multiplying lots of probs
There's a problem with this:

Multiplying lots of probabilities can result in floating-point underflow!


.0006 * .0007 * .0009 * .01 * .5 * .000008….
Idea: Use logs, because log(ab) = log(a) + log(b)
We'll sum logs of probabilities instead of multiplying probabilities!
We actually do everything in log space
Instead of this:

c_NB = argmax_{c ∈ C} P(c) ∏_{i ∈ positions} P(wi|c)

This:

c_NB = argmax_{c ∈ C} [ log P(c) + Σ_{i ∈ positions} log P(wi|c) ]

Notes:
1) Taking the log doesn't change the ranking of classes!
The class with the highest probability also has the highest log probability!
2) It's a linear model:
Just a max of a sum of weights: a linear function of the inputs
So Naive Bayes is a linear classifier
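To make the log-space scoring concrete, here is a minimal Python sketch; the names nb_score and classify and the dict-based model representation are illustrative choices, not from the slides:

def nb_score(doc_words, log_prior, log_likelihood):
    # Log-space score for one class: log P(c) + sum of log P(w|c).
    # Unknown words are assumed to have been removed upstream.
    return log_prior + sum(log_likelihood[w] for w in doc_words)

def classify(doc_words, log_priors, log_likelihoods):
    # Argmax over classes of a sum of weights: a linear classifier.
    return max(log_priors,
               key=lambda c: nb_score(doc_words, log_priors[c], log_likelihoods[c]))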
Text Classification and Naive Bayes

The Naive Bayes Classifier
Text Classification and Naive Bayes

Naive Bayes: Learning
Sec. 13.3

Learning the Multinomial Naive Bayes Model

First attempt: maximum likelihood estimates


◦ simply use the frequencies in the data

P̂(cj) = Ncj / Ntotal
Parameter estimation

fraction of times word wi appears among all words in instances of topic cj:

P̂(wi|cj) = count(wi, cj) / Σ_w count(w, cj)

Create a mega-instance for topic j by concatenating all docs in this topic
◦ Use the frequency of wi in the mega-instance

Problem with Maximum Likelihood

What if we have seen no training instance with the word fantastic labeled with the topic positive (thumbs-up)? Then the maximum likelihood estimate is zero:

P̂("fantastic" | positive) = count("fantastic", positive) / Σ_w count(w, positive) = 0

Zero probabilities cannot be conditioned away, no matter the other


evidence!
Laplace (add-1) smoothing for Naïve Bayes:

P̂(wi|c) = (count(wi, c) + 1) / (Σ_{w ∈ V} count(w, c) + |V|)
Multinomial Naïve Bayes: Learning

• From the training corpus, extract the Vocabulary
• Calculate the P(cj) terms:
◦ For each cj in C do
  docsj ← all docs with class = cj
  P(cj) ← |docsj| / |total # of documents|
• Calculate the P(wk|cj) terms:
◦ Textj ← single doc containing all docsj
◦ For each word wk in Vocabulary
  nk ← # of occurrences of wk in Textj
  P(wk|cj) ← (nk + 1) / (n + |Vocabulary|), where n = total # of tokens in Textj
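A minimal training sketch under the same assumptions (train_multinomial_nb is an illustrative name; it returns the dict-based model used by the scoring sketch earlier):

import math
from collections import Counter

def train_multinomial_nb(docs, labels):
    # docs: list of token lists; labels: parallel list of class labels.
    vocab = {w for doc in docs for w in doc}
    log_priors, log_likelihoods = {}, {}
    for c in set(labels):
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        # Prior: fraction of training documents with this class.
        log_priors[c] = math.log(len(class_docs) / len(docs))
        # Mega-document: concatenate all docs of class c, then count tokens.
        counts = Counter(w for d in class_docs for w in d)
        n = sum(counts.values())
        # Add-1 smoothed likelihood: (n_k + 1) / (n + |V|).
        log_likelihoods[c] = {w: math.log((counts[w] + 1) / (n + len(vocab)))
                              for w in vocab}
    return log_priors, log_likelihoods, vocab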
Unknown words
What about unknown words
◦ that appear in our test data
◦ but not in our training data or vocabulary?
We ignore them
◦ Remove them from the test instance!
◦ Pretend they weren't there!
◦ Don't include any probability for them at all!
Why don't we build an unknown word model?
◦ It doesn't help: knowing which class has more unknown words is
not generally helpful!
Stop words
Some systems ignore stop words
◦ Stop words: very frequent words like the and a.
◦ Sort the vocabulary by word frequency in training set
◦ Call the top 10 or 50 words the stopword list.
◦ Remove all stop words from both training and test sets
◦ As if they were never there!

But removing stop words doesn't usually help


• So in practice most NB algorithms use all words and don't
use stopword lists
Text Classification and Naive Bayes

Naive Bayes: Learning
Text Classification and Naive Bayes

Sentiment and Binary Naive Bayes
Let's do a worked sentiment example!
A worked sentiment example with add-1 smoothing
1. Priors from training:

P̂(cj) = Ncj / Ntotal:   P(−) = 3/5,   P(+) = 2/5
2. Drop "with": it doesn't appear in the training vocabulary
3. Likelihoods from training:

P̂(wi|c) = (count(wi, c) + 1) / (Σ_{w ∈ V} count(w, c) + |V|)
4. Scoring the test set:
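The slide's training and test tables did not survive extraction; assuming they follow the textbook's standard worked example (SLP3 ch. 4), which matches the priors P(−) = 3/5 and P(+) = 2/5 and the dropped word "with" above, the scoring step can be sketched with the helpers defined earlier:

# Assumed reconstruction of the slide's table (SLP3 ch. 4 worked example).
train = [
    ("-", "just plain boring"),
    ("-", "entirely predictable and lacks energy"),
    ("-", "no surprises and very few laughs"),
    ("+", "very powerful"),
    ("+", "the most fun film of the summer"),
]
docs = [text.split() for _, text in train]
labels = [c for c, _ in train]
log_priors, log_likelihoods, vocab = train_multinomial_nb(docs, labels)

test = "predictable with no fun".split()
known = [w for w in test if w in vocab]               # drops the unknown "with"
print(classify(known, log_priors, log_likelihoods))   # -> '-' (negative wins)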
Optimizing for sentiment analysis
For tasks like sentiment, word occurrence seems to
be more important than word frequency.
◦ The occurrence of the word fantastic tells us a lot
◦ The fact that it occurs 5 times may not tell us much more.
Binary multinomial naive Bayes, or binary NB
◦ Clip our word counts at 1
◦ Note: this is different from Bernoulli naive Bayes; see the textbook at the end of the chapter.
Binary Multinomial Naïve Bayes: Learning

• From the training corpus, extract the Vocabulary
• Calculate the P(cj) terms:
◦ For each cj in C do
  docsj ← all docs with class = cj
• Calculate the P(wk|cj) terms:
◦ Remove duplicates in each doc:
  For each word type w in docj, retain only a single instance of w
◦ Textj ← single doc containing all docsj
◦ For each word wk in Vocabulary
  nk ← # of occurrences of wk in Textj
Binary Multinomial Naive Bayes on a test instance d

First remove all duplicate words from d
Then compute NB using the same equation:

c_NB = argmax_{c ∈ C} P(c) ∏_{i ∈ positions} P(wi|c)

Counts can still be 2! Binarization is within-doc: a word counts at most once per document, but can occur in several documents of a class.
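A minimal sketch of within-document binarization (helper names are illustrative):

from collections import Counter

def binarize(doc):
    # Keep one instance of each word type within a single document.
    return set(doc)

def binary_counts(class_docs):
    # Deduplicate within each doc, then count across the class's docs:
    # a word appearing in two different docs still gets count 2.
    return Counter(w for doc in class_docs for w in binarize(doc))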


Text Classification and Naive Bayes

Sentiment and Binary Naive Bayes
Text Classification and Naive Bayes

More on Sentiment Classification
Sentiment Classification: Dealing with Negation
I really like this movie
I really don't like this movie

Negation changes the meaning of "like" to negative.


Negation can also change negative to positive-ish
◦ Don't dismiss this film
◦ Doesn't let us get bored
Sentiment Classification: Dealing with Negation
Das, Sanjiv and Mike Chen. 2001. Yahoo! for Amazon: Extracting market sentiment from stock message boards. In Proceedings of the Asia Pacific Finance Association Annual Conference (APFA).
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment Classification using Machine Learning Techniques. EMNLP-2002, 79–86.

Simple baseline method:


Add NOT_ to every word between negation and following punctuation:

didn’t like this movie , but I

didn’t NOT_like NOT_this NOT_movie but I
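A sketch of this baseline; the negation-cue list is an assumed stand-in, not the exact list from Das and Chen (2001) or Pang et al. (2002):

import re

NEGATIONS = {"not", "no", "never", "didn't", "doesn't", "don't", "isn't"}  # assumed cues

def mark_negation(tokens):
    # Prefix NOT_ to every token between a negation cue and the next punctuation.
    out, in_scope = [], False
    for tok in tokens:
        if re.fullmatch(r"[.,;:!?]", tok):
            in_scope = False
            out.append(tok)
        elif in_scope:
            out.append("NOT_" + tok)
        else:
            out.append(tok)
            if tok.lower() in NEGATIONS:
                in_scope = True
    return out

print(mark_negation("didn't like this movie , but I".split()))
# ["didn't", 'NOT_like', 'NOT_this', 'NOT_movie', ',', 'but', 'I']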


Sentiment Classification: Lexicons
Sometimes we don't have enough labeled training data
In that case, we can make use of pre-built word lists, called lexicons
There are various publicly available lexicons
MPQA Subjectivity Cues Lexicon
Theresa Wilson, Janyce Wiebe, and Paul Hoffmann (2005). Recognizing Contextual Polarity in
Phrase-Level Sentiment Analysis. Proc. of HLT-EMNLP-2005.

Riloff and Wiebe (2003). Learning extraction patterns for subjective expressions. EMNLP-2003.

Home page: https://mpqa.cs.pitt.edu/lexicons/subj_lexicon/


6885 words from 8221 lemmas, annotated for intensity (strong/weak)
◦ 2718 positive
◦ 4912 negative
+ : admirable, beautiful, confident, dazzling, ecstatic, favor, glee, great
− : awful, bad, bias, catastrophe, cheat, deny, envious, foul, harsh, hate
The General Inquirer
Philip J. Stone, Dexter C Dunphy, Marshall S. Smith, Daniel M. Ogilvie. 1966. The General
Inquirer: A Computer Approach to Content Analysis. MIT Press

◦ Home page: http://www.wjh.harvard.edu/~inquirer


◦ List of Categories: http://www.wjh.harvard.edu/~inquirer/homecat.htm
◦ Spreadsheet: http://www.wjh.harvard.edu/~inquirer/inquirerbasic.xls
Categories:
◦ Positiv (1915 words) and Negativ (2291 words)
◦ Strong vs Weak, Active vs Passive, Overstated versus Understated
◦ Pleasure, Pain, Virtue, Vice, Motivation, Cognitive Orientation, etc.
Free for Research Use
Using Lexicons in Sentiment Classification
Add a feature that gets a count whenever a word
from the lexicon occurs
◦ E.g., a feature called "this word occurs in the positive
lexicon" or "this word occurs in the negative lexicon"
Now all positive words (good, great, beautiful,
wonderful) or negative words count for that feature.
Using 1-2 features isn't as good as using all the words.
• But when training data is sparse or not representative of the
test set, dense lexicon features can help
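A minimal sketch of such dense lexicon features; the two tiny word sets are stand-ins for a real lexicon like MPQA:

POSITIVE = {"good", "great", "beautiful", "wonderful", "admirable", "dazzling"}
NEGATIVE = {"awful", "bad", "catastrophe", "harsh", "hate"}

def lexicon_features(tokens):
    # Two features: how many tokens appear in each lexicon.
    return {
        "n_positive_lexicon_words": sum(t in POSITIVE for t in tokens),
        "n_negative_lexicon_words": sum(t in NEGATIVE for t in tokens),
    }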
Naive Bayes in Other tasks: Spam Filtering
SpamAssassin Features:
◦ Mentions millions of dollars ($NN,NNN,NNN.NN)
◦ From: starts with many numbers
◦ Subject is all capitals
◦ HTML has a low ratio of text to image area
◦ "One hundred percent guaranteed"
◦ Claims you can be removed from the list
Naive Bayes in Language ID
Determining what language a piece of text is written in.
Features based on character n-grams do very well
Important to train on lots of varieties of each language
(e.g., American English varieties like African-American English,
or English varieties around the world like Indian English)
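A sketch of character n-gram extraction (the space padding and the trigram choice are illustrative assumptions):

def char_ngrams(text, n=3):
    # Character n-grams (here trigrams) over the lowercased, space-padded text.
    padded = f" {text.lower()} "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("isto é português?")[:4])  # [' is', 'ist', 'sto', 'to ']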
Summary: Naive Bayes is Not So Naive
Very fast, low storage requirements
Works well with very small amounts of training data
Robust to irrelevant features
◦ Irrelevant features cancel each other out without affecting results
Very good in domains with many equally important features
◦ Decision trees suffer from fragmentation in such cases, especially with little data
Optimal if the independence assumptions hold
◦ If the assumed independence is correct, then it is the Bayes Optimal Classifier for the problem
A good dependable baseline for text classification
◦ But we will see other classifiers that give better accuracy
Slide from Chris Manning
Text Classification and Naive Bayes

More on Sentiment Classification
Text Classification
and Naïve Bayes

Naïve Bayes:
Relationship to
Language Modeling

Generative Model for Multinomial Naïve Bayes

Example: the class c = China generates the words X1 = Shanghai, X2 = and, X3 = Shenzhen, X4 = issue, X5 = bonds

Naïve Bayes and Language Modeling


• Naïve Bayes classifiers can use any sort of feature
  • URL, email address, dictionaries, network features
• But if, as in the previous slides,
  • we use only word features
  • and we use all of the words in the text (not a subset)
• Then
  • Naïve Bayes has an important similarity to language modeling.
Sec. 13.2.1

Each class = a unigram language model


• Assigning each word: P(word | c)
• Assigning each sentence: P(s|c) = ∏ P(word|c)

Class pos:
  P(I|pos) = 0.1, P(love|pos) = 0.1, P(this|pos) = 0.01, P(fun|pos) = 0.05, P(film|pos) = 0.1

  I love this fun film
  0.1 × 0.1 × 0.01 × 0.05 × 0.1

P(s | pos) = 0.0000005

Sec. 13.2.1

Naïve Bayes as a Language Model


• Which class assigns the higher probability to s?

Model pos:  I 0.1,  love 0.1,    this 0.01,  fun 0.05,   film 0.1
Model neg:  I 0.2,  love 0.001,  this 0.01,  fun 0.005,  film 0.1

I love this fun film
P(s|pos) = 0.1 × 0.1 × 0.01 × 0.05 × 0.1 = 5 × 10⁻⁷
P(s|neg) = 0.2 × 0.001 × 0.01 × 0.005 × 0.1 = 1 × 10⁻⁹

P(s|pos) > P(s|neg)
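Reproducing the slide's numbers in a short sketch:

# Per-word probabilities taken from the two unigram models above.
model_pos = {"I": 0.1, "love": 0.1, "this": 0.01, "fun": 0.05, "film": 0.1}
model_neg = {"I": 0.2, "love": 0.001, "this": 0.01, "fun": 0.005, "film": 0.1}

def sentence_prob(words, model):
    # P(s|c) = product of the per-word unigram probabilities P(word|c).
    p = 1.0
    for w in words:
        p *= model[w]
    return p

s = "I love this fun film".split()
print(sentence_prob(s, model_pos))  # ~5e-07
print(sentence_prob(s, model_neg))  # ~1e-09
assert sentence_prob(s, model_pos) > sentence_prob(s, model_neg)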
Text Classification
and Naïve Bayes

Naïve Bayes:
Relationship to
Language Modeling
Text Classification and Naive Bayes

Precision, Recall, and F1
Evaluating Classifiers: How well does our classifier work?
Let's first address binary classifiers:
• Is this email spam?
spam (+) or not spam (-)
• Is this post about Delicious Pie Company?
about Del. Pie Co (+) or not about Del. Pie Co (−)

We'll need to know


1. What did our classifier say about each email or post?
2. What should our classifier have said, i.e., the correct
answer, usually as defined by humans ("gold label")
First step in evaluation: The confusion matrix

                   gold positive     gold negative
system positive    true positive     false positive
system negative    false negative    true negative

Accuracy on the confusion matrix:

accuracy = (tp + tn) / (tp + fp + fn + tn)
Why don't we use accuracy?
Accuracy doesn't work well when we're dealing with
uncommon or imbalanced classes
Suppose we look at 1,000,000 social media posts to find
Delicious Pie-lovers (or haters)
• 100 of them talk about our pie
• 999,900 are posts about something unrelated
Imagine the following simple classifier: every post is "not about pie"
What's its accuracy, given that 100 posts are about pie and 999,900 aren't?
Why don't we use accuracy?
Accuracy of our "nothing is pie" classifier
999,900 true negatives and 100 false negatives
Accuracy is 999,900/1,000,000 = 99.99%!
But useless at finding pie-lovers (or haters)!!
Which was our goal!
Accuracy doesn't work well for unbalanced classes
Most tweets are not about pie!
Instead of accuracy we use precision and recall

Precision: % of selected items that are correct
  Precision = tp / (tp + fp)

Recall: % of correct items that are selected
  Recall = tp / (tp + fn)
Precision/Recall aren't fooled by the "just call everything negative" classifier!

Stupid classifier: just say no: every tweet is "not about pie"
• 100 tweets talk about pie, 999,900 tweets don't
• Accuracy = 999,900/1,000,000 = 99.99%
But the recall and precision for this classifier are terrible:
• Recall = 0 / (0 + 100) = 0%: it finds none of the pie tweets
• Precision = 0 / (0 + 0): undefined, since it never selects anything
A combined measure: F1

F1 is a combination of precision and recall:
  F1 = 2PR / (P + R)
F1 is a special case of the general "F-measure"
F-measure is the (weighted) harmonic mean of precision and recall:
  F = 1 / (α/P + (1 − α)/R) = (β² + 1)PR / (β²P + R)
F1 is F-measure with β = 1 (α = ½)
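A minimal sketch of these metrics, applied to the "nothing is pie" classifier from earlier (returning 0 when a metric is undefined is a convention chosen here):

def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1(p, r):
    # Harmonic mean of precision and recall (F-measure with beta = 1).
    return 2 * p * r / (p + r) if (p + r) else 0.0

# "Nothing is pie": tp = 0, fp = 0, fn = 100, tn = 999_900.
p, r = precision(0, 0), recall(0, 100)
print(p, r, f1(p, r))  # 0.0 0.0 0.0 -- terrible, despite 99.99% accuracy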


Suppose we have more than 2 classes?
Lots of text classification tasks have more than two classes.
◦ Sentiment analysis (positive, negative, neutral) , named entities (person, location,
organization)

We can define precision and recall for multiple classes one-vs-rest (treating each class in turn as positive and the rest as negative), e.g., for a 3-way email task:
How to combine P/R values for different classes: microaveraging vs macroaveraging
◦ Macroaveraging: compute performance for each class separately, then average over classes
◦ Microaveraging: pool the decisions for all classes into one confusion matrix, then compute precision and recall from it
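A sketch contrasting the two averaging schemes; the input format and the class names in the example are illustrative assumptions:

def micro_macro_precision(per_class):
    # per_class maps class -> (tp, fp) from its one-vs-rest confusion table.
    # Macro: average each class's precision, so all classes weigh equally.
    macro = sum(tp / (tp + fp) for tp, fp in per_class.values()) / len(per_class)
    # Micro: pool the counts first, so frequent classes dominate.
    tp_sum = sum(tp for tp, _ in per_class.values())
    fp_sum = sum(fp for _, fp in per_class.values())
    micro = tp_sum / (tp_sum + fp_sum)
    return micro, macro

# One frequent class handled well, one rare class handled badly:
print(micro_macro_precision({"normal": (980, 20), "urgent": (1, 9)}))
# (0.971..., 0.54): micro follows the frequent class, macro exposes the rare one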
Text Classification and Naive Bayes

Precision, Recall, and F1
Text Classification and Naive Bayes

Avoiding Harms in Classification
Harms of classification
Classifiers, like any NLP algorithm, can cause harms
This is true for any classifier, whether Naive Bayes or
other algorithms
Representational Harms
• Harms caused by a system that demeans a social group
• Such as by perpetuating negative stereotypes about them.
• Kiritchenko and Mohammad (2018) study
  • Examined 200 sentiment analysis systems on pairs of sentences
  • Identical except for the names:
    • a common African American (Shaniqua) or European American (Stephanie) name
    • like "I talked to Shaniqua yesterday" vs. "I talked to Stephanie yesterday"
• Result: systems assigned lower sentiment and more negative
emotion to sentences with African American names
• Downstream harm:
• Perpetuates stereotypes about African Americans
• African Americans treated differently by NLP tools like sentiment (widely
used in marketing research, mental health studies, etc.)
Harms of Censorship
• Toxicity detection is the text classification task of detecting hate speech,
abuse, harassment, or other kinds of toxic language.
• Widely used in online content moderation
• Toxicity classifiers incorrectly flag non-toxic sentences that simply mention
minority identities (like the words "blind" or "gay")
• women (Park et al., 2018),
• disabled people (Hutchinson et al., 2020)
• gay people (Dixon et al., 2018; Oliva et al., 2021)
• Downstream harms:
• Censorship of speech by disabled people and other groups
• Speech by these groups becomes less visible online
• Writers might be nudged by these algorithms to avoid these words
Performance Disparities
1. Text classifiers perform worse on many languages
of the world due to lack of data or labels
2. Text classifiers perform worse on varieties of
even high-resource languages like English
• Example task: language identification, a first step in
NLP pipeline ("Is this post in English or not?")
• English language detection performance worse for
writers who are African American (Blodgett and
O'Connor 2017) or from India (Jurgens et al., 2017)
Harms in text classification
• Causes:
• Issues in the data; NLP systems amplify biases in training data
• Problems in the labels
• Problems in the algorithms (like what the model is trained to
optimize)
• Prevalence: The same problems occur throughout NLP
(including large language models)
• Solutions: There are no general mitigations or solutions
• But harm mitigation is an active area of research
• And there are standard benchmarks and tools that we can use for
measuring some of the harms
Text Classification and Naive Bayes

Avoiding Harms in Classification
