cs229-notes2
Andrew Ng
Part IV
Generative Learning algorithms
So far, we’ve mainly been talking about learning algorithms that model
p(y|x; θ), the conditional distribution of y given x. For instance, logistic
regression modeled p(y|x; θ) as hθ (x) = g(θ T x) where g is the sigmoid func-
tion. In these notes, we’ll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants (y = 1) and dogs (y = 0), based on some features of
an animal. Given a training set, an algorithm like logistic regression or
the perceptron algorithm (basically) tries to find a straight line—that is, a
decision boundary—that separates the elephants and dogs. Then, to classify
a new animal as either an elephant or a dog, it checks on which side of the
decision boundary it falls, and makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a
model of what elephants look like. Then, looking at dogs, we can build a
separate model of what dogs look like. Finally, to classify a new animal, we
can match the new animal against the elephant model, and match it against
the dog model, to see whether the new animal looks more like the elephants
or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X
to the labels {0, 1}, (such as the perceptron algorithm) are called discrim-
inative learning algorithms. Here, we’ll talk about algorithms that instead
try to model p(x|y) (and p(y)). These algorithms are called generative
learning algorithms. For instance, if y indicates whether an example is a dog
(0) or an elephant (1), then p(x|y = 0) models the distribution of dogs’
features, and p(x|y = 1) models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p(x|y), our algorithm
can then use Bayes rule to derive the posterior distribution on y given x:
$$
p(y|x) = \frac{p(x|y)\,p(y)}{p(x)}.
$$
Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y =
0)p(y = 0) (you should be able to verify that this is true from the standard
properties of probabilities), and thus can also be expressed in terms of the
quantities p(x|y) and p(y) that we’ve learned. Actually, if we were calculating
p(y|x) in order to make a prediction, then we don’t actually need to calculate
the denominator, since
$$
\arg\max_y p(y|x) = \arg\max_y \frac{p(x|y)\,p(y)}{p(x)} = \arg\max_y p(x|y)\,p(y).
$$
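As a quick numerical check of that last step (the numbers below are made up for illustration, not taken from any model in these notes), the normalizer p(x) rescales both posteriors by the same amount, so it cannot change which class wins:

```python
import numpy as np

# Hypothetical learned quantities for a single test input x
# (these numbers are illustrative, not from any fitted model).
p_x_given_y = np.array([0.02, 0.05])   # p(x|y=0), p(x|y=1)
p_y = np.array([0.7, 0.3])             # class priors p(y=0), p(y=1)

joint = p_x_given_y * p_y              # p(x|y) p(y) for y = 0, 1
p_x = joint.sum()                      # p(x) = sum_y p(x|y) p(y)
posterior = joint / p_x                # p(y|x) by Bayes rule

# The normalizer p(x) does not depend on y, so it cannot change the arg max.
assert joint.argmax() == posterior.argmax()
print(posterior)                       # approximately [0.483, 0.517]
```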
If X ∼ N (µ, Σ), then Cov(X) = Σ.
[Figure: surface plots of three zero-mean Gaussian densities, with Σ = I, Σ = 0.6I, and Σ = 2I; axes range from −3 to 3.]
The left-most figure shows a Gaussian with mean zero (that is, the 2×1
zero-vector) and covariance matrix Σ = I (the 2×2 identity matrix). A Gaus-
sian with zero mean and identity covariance is also called the standard nor-
mal distribution. The middle figure shows the density of a Gaussian with
zero mean and Σ = 0.6I; the rightmost figure shows one with Σ = 2I.
We see that as Σ becomes larger, the Gaussian becomes more “spread-out,”
and as it becomes smaller, the distribution becomes more “compressed.”
Let’s look at some more examples.
[Figure: surface plots of three zero-mean Gaussian densities with the covariance matrices listed below; axes range from −3 to 3.]
The figures above show Gaussians with mean 0, and with covariance
matrices respectively
$$
\Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}; \qquad
\Sigma = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}; \qquad
\Sigma = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix}.
$$
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
“compressed” towards the 45◦ line (given by x1 = x2 ). We can see this more
clearly when we look at the contours of the same three densities:
[Figure: contour plots of the same three densities; axes range from −3 to 3.]
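Contour plots like these can be reproduced by evaluating the multivariate Gaussian density on a grid; the density is the same formula used for p(x|y) in the GDA model below. A minimal NumPy sketch (the grid resolution and the particular Σ are arbitrary choices of mine, not values from the notes):

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Multivariate normal density: (2*pi)^(-n/2) |Sigma|^(-1/2) exp(-(x-mu)' Sigma^{-1} (x-mu) / 2)."""
    n = mu.shape[0]
    diff = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma))
    return np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)) / norm

mu = np.zeros(2)
sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])          # the right-most covariance matrix above

# Evaluate the density on a grid; feeding this array to a contour plotter
# gives pictures like the ones shown.
grid = np.linspace(-3, 3, 61)
density = np.array([[gaussian_density(np.array([x1, x2]), mu, sigma)
                     for x1 in grid] for x2 in grid])
```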
$$
\begin{aligned}
y &\sim \mathrm{Bernoulli}(\phi) \\
x \mid y = 0 &\sim \mathcal{N}(\mu_0, \Sigma) \\
x \mid y = 1 &\sim \mathcal{N}(\mu_1, \Sigma)
\end{aligned}
$$
i.e.,
$$
\begin{aligned}
p(y) &= \phi^{y} (1-\phi)^{1-y} \\
p(x \mid y=0) &= \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x-\mu_0)^T \Sigma^{-1} (x-\mu_0) \right) \\
p(x \mid y=1) &= \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x-\mu_1)^T \Sigma^{-1} (x-\mu_1) \right)
\end{aligned}
$$
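As a sketch of this generative story (the parameter values below are illustrative assumptions, not quantities from the notes), one can sample a dataset by first flipping y and then drawing x from the corresponding Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed for this sketch, not fitted values).
phi = 0.4
mu0 = np.array([1.0, 1.0])
mu1 = np.array([3.0, 3.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])          # shared covariance matrix

def sample_gda(m):
    """Draw m (x, y) pairs from the GDA generative model."""
    y = rng.binomial(1, phi, size=m)                     # y ~ Bernoulli(phi)
    means = np.where(y[:, None] == 1, mu1, mu0)          # pick mu_y for each example
    x = np.array([rng.multivariate_normal(mean, Sigma) for mean in means])
    return x, y

X, y = sample_gda(500)
```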
Here, the parameters of our model are φ, Σ, µ0 and µ1 . (Note that while
there’re two different mean vectors µ0 and µ1 , this model is usually applied
using only one covariance matrix Σ.) The log-likelihood of the data is given
by
$$
\ell(\phi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^{m} p\!\left(x^{(i)}, y^{(i)}; \phi, \mu_0, \mu_1, \Sigma\right)
= \log \prod_{i=1}^{m} p\!\left(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma\right) p\!\left(y^{(i)}; \phi\right).
$$
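Maximizing this log-likelihood has a closed-form solution: φ is the fraction of examples with y = 1, µ0 and µ1 are the per-class means of the features, and Σ is the average outer product of the residuals x(i) − µy(i). A minimal NumPy sketch of these standard estimates (the function and variable names are mine):

```python
import numpy as np

def fit_gda(X, y):
    """Maximum likelihood estimates of the GDA parameters phi, mu0, mu1, Sigma.

    A sketch using the standard closed-form solution obtained by maximizing
    the log-likelihood above; X is an (m, n) array, y an (m,) array of 0/1 labels."""
    X, y = np.asarray(X), np.asarray(y)
    m, n = X.shape
    phi = np.mean(y == 1)                      # fraction of examples with y = 1
    mu0 = X[y == 0].mean(axis=0)               # mean of the negative class
    mu1 = X[y == 1].mean(axis=0)               # mean of the positive class
    residuals = X - np.where(y[:, None] == 1, mu1, mu0)
    Sigma = residuals.T @ residuals / m        # shared covariance matrix
    return phi, mu0, mu1, Sigma
```

Prediction then uses the arg max rule from earlier: evaluate p(x|y = 0)p(y = 0) and p(x|y = 1)p(y = 1) at the new point and pick the larger.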
[Figure: training examples from the two classes, together with the contours of the two fitted Gaussians and the straight-line decision boundary.]
Shown in the figure are the training set, as well as the contours of the
two Gaussian distributions that have been fit to the data in each of the
two classes. Note that the two Gaussians have contours that are the same
shape and orientation, since they share a covariance matrix Σ, but they have
different means µ0 and µ1 . Also shown in the figure is the straight line
giving the decision boundary at which p(y = 1|x) = 0.5. On one side of
the boundary, we’ll predict y = 1 to be the most likely outcome, and on the
other side, we’ll predict y = 0.
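For intuition about why this boundary is a straight line, note (a standard calculation, sketched here rather than taken from the figure) that when both classes share the covariance matrix Σ, the log-odds of the posterior is linear in x:
$$
\log\frac{p(y=1|x)}{p(y=0|x)}
= \log\frac{\phi}{1-\phi} - \frac{1}{2}(x-\mu_1)^T \Sigma^{-1} (x-\mu_1) + \frac{1}{2}(x-\mu_0)^T \Sigma^{-1} (x-\mu_0)
= \theta^T x + \theta_0,
$$
where θ = Σ⁻¹(µ1 − µ0) and θ0 collects the terms that do not depend on x; the quadratic terms cancel precisely because Σ is shared. Setting the log-odds to zero recovers the linear boundary p(y = 1|x) = 0.5, and equivalently p(y = 1|x) takes the logistic form 1/(1 + exp(−θ^T x − θ0)).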
When the modeling assumptions are not correct, logistic regression will
almost always do better than GDA. For this reason, in practice logistic
regression is used more often than GDA. (Some related considerations about
discriminative vs. generative models also apply for the Naive Bayes algo-
rithm that we discuss next, but the Naive Bayes algorithm is still considered
a very good, and is certainly also a very popular, classification algorithm.)
2 Naive Bayes
In GDA, the feature vectors x were continuous, real-valued vectors. Let’s now
talk about a different learning algorithm in which the xi ’s are discrete-valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether
they are unsolicited commercial (spam) email, or non-spam email. After
learning to do this, we can then have our mail reader automatically filter
out the spam messages and perhaps place them in a separate mail folder.
Classifying emails is one example of a broader set of problems called text
classification.
Let’s say we have a training set (a set of emails labeled as spam or non-
spam). We’ll begin our construction of our spam filter by specifying the
features xi used to represent an email.
We will represent an email via a feature vector whose length is equal to
the number of words in the dictionary. Specifically, if an email contains the
i-th word of the dictionary, then we will set xi = 1; otherwise, we let xi = 0.
For instance, the vector
$$
x = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}
\quad
\begin{matrix} \text{a} \\ \text{aardvark} \\ \text{aardwolf} \\ \vdots \\ \text{buy} \\ \vdots \\ \text{zygmurgy} \end{matrix}
$$
is used to represent an email that contains the words “a” and “buy,” but not
“aardvark,” “aardwolf” or “zygmurgy.”² The set of words encoded into the feature vector is called the vocabulary, so the dimension of x is equal to the size of the vocabulary.
²Actually, rather than looking through an English dictionary for the list of all English
words, in practice it is more common to look through our training set and encode in our
feature vector only the words that occur at least once there. Apart from reducing the
number of words modeled and hence reducing our computational and space requirements,
this also has the advantage of allowing us to model/include as a feature many words
that may appear in your email (such as “cs229”) but that you won’t find in a dictionary.
Sometimes (as in the homework), we also exclude the very high frequency words (which
will be words like “the,” “of,” and “and”; these high frequency, “content free” words are
called stop words) since they occur in so many documents and do little to indicate whether
an email is spam or non-spam.
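As a concrete sketch of this representation (the tiny vocabulary and the whitespace tokenization are hypothetical simplifications of mine, not the notes’ setup):

```python
# Sketch of the multi-variate Bernoulli feature representation.
vocabulary = ["a", "aardvark", "aardwolf", "buy", "zygmurgy"]

def email_to_features(text):
    """Return x with x[i] = 1 iff the i-th vocabulary word appears in the email."""
    words = set(text.lower().split())
    return [1 if word in words else 0 for word in vocabulary]

print(email_to_features("Buy a gift today"))   # [1, 0, 0, 1, 0]
```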
The first equality simply follows from the usual properties of probabilities,
and the second equality used the NB assumption. We note that even though
the Naive Bayes assumption is an extremely strong assumption, the resulting
algorithm works well on many problems.
Our model is parameterized by φi|y=1 = p(xi = 1|y = 1), φi|y=0 = p(xi =
1|y = 0), and φy = p(y = 1). As usual, given a training set {(x(i) , y (i) ); i =
1, . . . , m}, we can write down the joint likelihood of the data
$$
\mathcal{L}(\phi_y, \phi_{j|y=0}, \phi_{j|y=1}) = \prod_{i=1}^{m} p\!\left(x^{(i)}, y^{(i)}\right).
$$
Maximizing this with respect to φy , φi|y=0 and φi|y=1 gives the maximum
likelihood estimates:
$$
\begin{aligned}
\phi_{j|y=1} &= \frac{\sum_{i=1}^{m} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 1\}}{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}} \\
\phi_{j|y=0} &= \frac{\sum_{i=1}^{m} 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 0\}}{\sum_{i=1}^{m} 1\{y^{(i)} = 0\}} \\
\phi_{y} &= \frac{\sum_{i=1}^{m} 1\{y^{(i)} = 1\}}{m}
\end{aligned}
$$
In the equations above, the “∧” symbol means “and.” The parameters have
a very natural interpretation. For instance, φj|y=1 is just the fraction of the
spam (y = 1) emails in which word j does appear.
Having fit all these parameters, to make a prediction on a new example
with features x, we then simply calculate
$$
\begin{aligned}
p(y=1 \mid x) &= \frac{p(x \mid y=1)\, p(y=1)}{p(x)} \\
&= \frac{\left( \prod_{i=1}^{n} p(x_i \mid y=1) \right) p(y=1)}{\left( \prod_{i=1}^{n} p(x_i \mid y=1) \right) p(y=1) + \left( \prod_{i=1}^{n} p(x_i \mid y=0) \right) p(y=0)},
\end{aligned}
$$
and pick whichever class has the higher posterior probability.
(In practice, it usually doesn’t matter much whether we apply Laplace smooth-
ing to φy or not, since we will typically have a fair fraction each of spam and
non-spam messages, so φy will be a reasonable estimate of p(y = 1) and will
be quite far from 0 anyway.)
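Putting the pieces together, here is a minimal sketch of the multi-variate Bernoulli Naive Bayes classifier in NumPy. The parameter estimates use the usual add-one (Laplace) smoothing mentioned above, and prediction works with sums of logs to avoid numerical underflow when the vocabulary is large; the function names and data layout are my own choices, not the notes’.

```python
import numpy as np

def fit_naive_bayes(X, y):
    """Laplace-smoothed estimates of the Bernoulli Naive Bayes parameters.

    X is an (m, n) 0/1 matrix of word-occurrence features, y an (m,) 0/1 label vector."""
    X, y = np.asarray(X), np.asarray(y)
    phi_y = np.mean(y == 1)
    # Add-one (Laplace) smoothing: +1 in each numerator, +2 in each denominator.
    phi_j_y1 = (X[y == 1].sum(axis=0) + 1) / ((y == 1).sum() + 2)
    phi_j_y0 = (X[y == 0].sum(axis=0) + 1) / ((y == 0).sum() + 2)
    return phi_y, phi_j_y1, phi_j_y0

def predict(x, phi_y, phi_j_y1, phi_j_y0):
    """Return 1 if p(y=1|x) > p(y=0|x); sums of logs avoid numerical underflow."""
    x = np.asarray(x)
    log_p1 = np.log(phi_y) + np.sum(x * np.log(phi_j_y1) + (1 - x) * np.log(1 - phi_j_y1))
    log_p0 = np.log(1 - phi_y) + np.sum(x * np.log(phi_j_y0) + (1 - x) * np.log(1 - phi_j_y0))
    return int(log_p1 > log_p0)
```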
While Naive Bayes as we’ve presented it will work well for many classification problems, for text
classification, there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses
what is called the multi-variate Bernoulli event model. In this model,
we assumed that the way an email is generated is that first it is randomly
determined (according to the class priors p(y)) whether a spammer or non-
spammer will send you your next message. Then, the person sending the
email runs through the dictionary, deciding whether to include each word i
in that email independently and according to the probabilities p(xi = 1|y) =
φi|y . Thus, the probability of a message is given by $p(y)\prod_{i=1}^{n} p(x_i|y)$.
Here’s a different model, called the multinomial event model. To de-
scribe this model, we will use a different notation and set of features for
representing emails. We let xi denote the identity of the i-th word in the
email. Thus, xi is now an integer taking values in {1, . . . , |V |}, where |V |
is the size of our vocabulary (dictionary). An email of n words is now rep-
resented by a vector (x1 , x2 , . . . , xn ) of length n; note that n can vary for
different documents. For instance, if an email starts with “A NIPS . . . ,”
then x1 = 1 (“a” is the first word in the dictionary), and x2 = 35000 (if
“nips” is the 35000th word in the dictionary).
In the multinomial event model, we assume that the way an email is
generated is via a random process in which spam/non-spam is first deter-
mined (according to p(y)) as before. Then, the sender of the email writes the
email by first generating x1 from some multinomial distribution over words
(p(x1 |y)). Next, the second word x2 is chosen independently of x1 but from
the same multinomial distribution, and similarly for x3 , x4 , and so on, until
all n words of the email have been generated. Thus, the overall probability of
a message is given by $p(y)\prod_{i=1}^{n} p(x_i|y)$. Note that this formula looks like the
one we had earlier for the probability of a message under the multi-variate
Bernoulli event model, but that the terms in the formula now mean very dif-
ferent things. In particular, xi |y is now a multinomial, rather than a Bernoulli
distribution.
The parameters for our new model are φy = p(y) as before, φi|y=1 =
p(xj = i|y = 1) (for any j) and φi|y=0 = p(xj = i|y = 0). Note that we have
assumed that p(xj |y) is the same for all values of j (i.e., that the distribution
according to which a word is generated does not depend on its position j
within the email).
If we are given a training set {(x(i) , y (i) ); i = 1, . . . , m} where
$x^{(i)} = (x_1^{(i)}, x_2^{(i)}, \ldots, x_{n_i}^{(i)})$ (here, ni is the number of words in the
i-th training example), we can write down the likelihood of the data and maximize it to obtain
the maximum likelihood estimates of the parameters, just as before.
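As with GDA and the Bernoulli model, the resulting estimates are essentially frequency counts. A minimal sketch of the (unsmoothed) maximum likelihood estimates for the multinomial event model, assuming each document is given as a list of word indices (the data format and names are my own assumptions):

```python
import numpy as np

def fit_multinomial_nb(docs, labels, vocab_size):
    """Maximum likelihood estimates for the multinomial event model.

    docs is a list of token-index lists (each x^(i) = (x_1, ..., x_{n_i})),
    labels is a 0/1 sequence; phi[k, c] estimates p(x_j = k | y = c) for any position j."""
    word_counts = np.zeros((vocab_size, 2))   # count of word k appearing in class c
    total_words = np.zeros(2)                 # total number of word positions per class
    for tokens, y in zip(docs, labels):
        for k in tokens:
            word_counts[k, y] += 1
        total_words[y] += len(tokens)
    phi_y = np.mean(np.asarray(labels) == 1)
    # (Adding 1 to each count and vocab_size to each total would give the
    #  usual Laplace-smoothed estimates.)
    phi = word_counts / total_words
    return phi_y, phi
```

Prediction again compares p(y) ∏j p(xj |y) for y = 0 and y = 1, most conveniently as a sum of logarithms, and in practice one would also apply Laplace smoothing to these estimates.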
While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good “first thing
to try,” given its simplicity and ease of implementation.