CS229 Lecture notes
Andrew Ng
Supervised learning
Let's start by talking about a few examples of supervised learning problems.
Suppose we have a dataset giving the living areas and prices of 47 houses
from Portland, Oregon:
Living area (feet²)   Price (1000$s)
2104                  400
1600                  330
2400                  369
1416                  232
3000                  540
...                   ...
We can plot this data:
[Figure: scatter plot of the housing data; x-axis: square feet (living area), y-axis: price (in $1000s).]
Given data like this, how can we learn to predict the prices of other houses
in Portland, as a function of the size of their living areas?
To establish notation for future use, we’ll use x(i) to denote the “input”
variables (living area in this example), also called input features, and y (i)
to denote the “output” or target variable that we are trying to predict
(price). A pair (x(i) , y (i) ) is called a training example, and the dataset
that we’ll be using to learn—a list of m training examples {(x(i) , y (i) ); i =
1, . . . , m}—is called a training set. Note that the superscript “(i)” in the
notation is simply an index into the training set, and has nothing to do with
exponentiation. We will also use X to denote the space of input values, and Y
the space of output values. In this example, X = Y = R.
To describe the supervised learning problem slightly more formally, our
goal is, given a training set, to learn a function h : X → Y so that h(x) is a
“good” predictor for the corresponding value of y. For historical reasons, this
function h is called a hypothesis. Seen pictorially, the process is therefore
like this:
[Diagram: a training set is fed into a learning algorithm, which outputs a hypothesis h; a new input x (living area of a house) is fed into h, which outputs the predicted y (predicted price of the house).]
When the target variable that we’re trying to predict is continuous, such
as in our housing example, we call the learning problem a regression prob-
lem. When y can take on only a small number of discrete values (such as
if, given the living area, we wanted to predict if a dwelling is a house or an
apartment, say), we call it a classification problem.
Part I
Linear Regression
To make our housing example more interesting, let's consider a slightly richer
dataset in which we also know the number of bedrooms in each house:
Living area (feet²)   #bedrooms   Price (1000$s)
2104                  3           400
1600                  3           330
2400                  3           369
1416                  2           232
3000                  4           540
...                   ...         ...
Here, the x's are two-dimensional vectors in R². For instance, x_1^{(i)} is the
living area of the i-th house in the training set, and x_2^{(i)} is its number of
bedrooms. (In general, when designing a learning problem, it will be up to
you to decide what features to choose, so if you are out in Portland gathering
housing data, you might also decide to include other features such as whether
each house has a fireplace, the number of bathrooms, and so on. We’ll say
more about feature selection later, but for now let's take the features as given.)
To perform supervised learning, we must decide how we’re going to rep-
resent functions/hypotheses h in a computer. As an initial choice, let's say
we decide to approximate y as a linear function of x:
hθ (x) = θ0 + θ1 x1 + θ2 x2
Here, the θi ’s are the parameters (also called weights) parameterizing the
space of linear functions mapping from X to Y. When there is no risk of
confusion, we will drop the θ subscript in hθ (x), and write it more simply as
h(x). To simplify our notation, we also introduce the convention of letting
x0 = 1 (this is the intercept term), so that
h(x) = Σ_{i=0}^{n} θ_i x_i = θ^T x,
where on the right-hand side above we are viewing θ and x both as vectors,
and here n is the number of input variables (not counting x0 ).
Now, given a training set, how do we pick, or learn, the parameters θ?
One reasonable method seems to be to make h(x) close to y, at least for
the training examples we have. To formalize this, we define the cost function

J(θ) = (1/2) Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)})².
If you’ve seen linear regression before, you may recognize this as the familiar
least-squares cost function that gives rise to the ordinary least squares
regression model. Whether or not you have seen it previously, let's keep
going, and we’ll eventually show this to be a special case of a much broader
family of algorithms.
1 LMS algorithm
We want to choose θ so as to minimize J(θ). To do so, let's use a search
algorithm that starts with some “initial guess” for θ, and that repeatedly
changes θ to make J(θ) smaller, until hopefully we converge to a value of
θ that minimizes J(θ). Specifically, let's consider the gradient descent
algorithm, which starts with some initial θ, and repeatedly performs the
update:
θ_j := θ_j − α (∂/∂θ_j) J(θ).
(This update is simultaneously performed for all values of j = 0, . . . , n.)
Here, α is called the learning rate. This is a very natural algorithm that
repeatedly takes a step in the direction of steepest decrease of J.
In order to implement this algorithm, we have to work out what is the
partial derivative term on the right hand side. Let's first work it out for the
case where we have only one training example (x, y), so that we can neglect
the sum in the definition of J. We have:
(∂/∂θ_j) J(θ) = (∂/∂θ_j) (1/2)(h_θ(x) − y)²
             = 2 · (1/2)(h_θ(x) − y) · (∂/∂θ_j)(h_θ(x) − y)
             = (h_θ(x) − y) · (∂/∂θ_j)(Σ_{i=0}^{n} θ_i x_i − y)
             = (h_θ(x) − y) x_j
For a single training example, this gives the update rule:

θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}.

This rule is called the LMS update rule (LMS stands for “least mean squares”),
and is also known as the Widrow-Hoff learning rule. This rule has several
properties that seem natural and intuitive. For instance, the magnitude of
the update is proportional to the error term (y (i) − hθ (x(i) )); thus, for in-
stance, if we are encountering a training example on which our prediction
nearly matches the actual value of y (i) , then we find that there is little need
to change the parameters; in contrast, a larger change to the parameters will
be made if our prediction hθ (x(i) ) has a large error (i.e., if it is very far from
y (i) ).
We derived the LMS rule for when there was only a single training
example. There are two ways to modify this method for a training set of
more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {
    θ_j := θ_j + α Σ_{i=1}^{m} (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}    (for every j)
}
The reader can easily verify that the quantity in the summation in the update
rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this is
simply gradient descent on the original cost function J. This method looks
at every example in the entire training set on every step, and is called batch
gradient descent. Note that, while gradient descent can be susceptible
to local minima in general, the optimization problem we have posed here
for linear regression has only one global, and no other local, optimum; thus
gradient descent always converges (assuming the learning rate α is not too
large) to the global minimum. Indeed, J is a convex quadratic function.
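To make this concrete, here is a minimal NumPy sketch of batch gradient descent for linear regression. The function name, learning rate, and iteration count are illustrative assumptions, and the design matrix X is assumed to already contain the intercept column x_0 = 1.

import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for linear regression.

    X is an (m, n+1) array whose first column is all ones (the intercept
    term x0 = 1); y is a length-m array of targets.
    """
    m, d = X.shape
    theta = np.zeros(d)                      # initial guess: theta = 0
    for _ in range(num_iters):
        errors = y - X @ theta               # y(i) - h_theta(x(i)) for every i
        theta = theta + alpha * (X.T @ errors)   # simultaneous update of all theta_j
    return theta

In practice one typically scales the features or shrinks the learning rate so that the updates do not diverge.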
Here is an example of gradient descent as it is run to minimize a quadratic
function.
¹ We use the notation “a := b” to denote an operation (in a computer program) in
which we set the value of a variable a to be equal to the value of b. In other words, this
operation overwrites a with the value of b. In contrast, we will write “a = b” when we are
asserting a statement of fact, that the value of a is equal to the value of b.
[Figure: contours of a quadratic function, with the trajectory of gradient descent overlaid.]
The ellipses shown above are the contours of a quadratic function. Also
shown is the trajectory taken by gradient descent, which was initialized at
(48,30). The x’s in the figure (joined by straight lines) mark the successive
values of θ that gradient descent went through.
When we run batch gradient descent to fit θ on our previous dataset,
to learn to predict housing price as a function of living area, we obtain
θ0 = 71.27, θ1 = 0.1345. If we plot hθ (x) as a function of x (area), along
with the training data, we obtain the following figure:
[Figure: the training data together with the fitted straight line h_θ(x); x-axis: square feet, y-axis: price (in $1000s).]
If the number of bedrooms were included as one of the input features as well,
we get θ0 = 89.60, θ1 = 0.1392, θ2 = −8.738.
The above results were obtained with batch gradient descent. There is
an alternative to batch gradient descent that also works very well. Consider
the following algorithm:
Loop {
    for i = 1 to m, {
        θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}    (for every j)
    }
}
In this algorithm, we repeatedly run through the training set, and each time
we encounter a training example, we update the parameters according to
the gradient of the error with respect to that single training example only.
This algorithm is called stochastic gradient descent (also incremental
gradient descent). Whereas batch gradient descent has to scan through
the entire training set before taking a single step—a costly operation if m is
large—stochastic gradient descent can start making progress right away, and
continues to make progress with each example it looks at. Often, stochastic
gradient descent gets θ “close” to the minimum much faster than batch gra-
dient descent. (Note however that it may never “converge” to the minimum,
and the parameters θ will keep oscillating around the minimum of J(θ); but
in practice most of the values near the minimum will be reasonably good
approximations to the true minimum.2 ) For these reasons, particularly when
the training set is large, stochastic gradient descent is often preferred over
batch gradient descent.
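As a companion to the batch version above, here is a minimal sketch of stochastic gradient descent under the same assumptions (intercept column included in X; names and hyperparameters are illustrative).

import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, num_epochs=10):
    """Stochastic (incremental) gradient descent for linear regression."""
    m, d = X.shape
    theta = np.zeros(d)
    for _ in range(num_epochs):          # the outer "Loop { ... }"
        for i in range(m):               # "for i = 1 to m"
            error = y[i] - X[i] @ theta  # uses the i-th example only
            theta = theta + alpha * error * X[i]
    return theta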
We now state without proof some facts of matrix derivatives (we won’t
need some of these until later this quarter). Equation (4) applies only to
non-singular square matrices A, where |A| denotes the determinant of A. We
have:
∇_A tr AB = B^T                                   (1)
∇_{A^T} f(A) = (∇_A f(A))^T                       (2)
∇_A tr ABA^T C = CAB + C^T AB^T                   (3)
∇_A |A| = |A|(A^{−1})^T.                          (4)
To make our matrix notation more concrete, let us now explain in detail the
meaning of the first of these equations. Suppose we have some fixed matrix
B ∈ R^{n×m}. We can then define a function f : R^{m×n} → R according to
f (A) = trAB. Note that this definition makes sense, because if A ∈ Rm×n ,
then AB is a square matrix, and we can apply the trace operator to it; thus,
f does indeed map from Rm×n to R. We can then apply our definition of
matrix derivatives to find ∇_A f(A), which will itself be an m-by-n matrix.
Equation (1) above states that the (i, j) entry of this matrix will be given by
the (i, j)-entry of B T , or equivalently, by Bji .
The proofs of Equations (1-3) are reasonably simple, and are left as an
exercise to the reader. Equation (4) can be derived using the adjoint representation of the inverse of a matrix.
Given a training set, define the design matrix X to be the m-by-(n + 1) matrix whose i-th row is (x^{(i)})^T, the input vector of the i-th training example (including the intercept term x_0 = 1).
Also, let ~y be the m-dimensional vector containing all the target values from
the training set:
~y = [ y^{(1)}, y^{(2)}, . . . , y^{(m)} ]^T.
Now, since hθ (x(i) ) = (x(i) )T θ, we can easily verify that
Xθ − ~y = [ (x^{(1)})^T θ, . . . , (x^{(m)})^T θ ]^T − [ y^{(1)}, . . . , y^{(m)} ]^T
        = [ h_θ(x^{(1)}) − y^{(1)}, . . . , h_θ(x^{(m)}) − y^{(m)} ]^T.
Thus, using the fact that for a vector z we have z^T z = Σ_i z_i²:

(1/2)(Xθ − ~y)^T (Xθ − ~y) = (1/2) Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)})²
                           = J(θ).
Hence,
∇_θ J(θ) = ∇_θ (1/2)(Xθ − ~y)^T (Xθ − ~y)
         = (1/2) ∇_θ (θ^T X^T Xθ − θ^T X^T ~y − ~y^T Xθ + ~y^T ~y)
         = (1/2) ∇_θ tr(θ^T X^T Xθ − θ^T X^T ~y − ~y^T Xθ + ~y^T ~y)
         = (1/2) ∇_θ (tr θ^T X^T Xθ − 2 tr ~y^T Xθ)
         = (1/2)(X^T Xθ + X^T Xθ − 2X^T ~y)
         = X^T Xθ − X^T ~y
In the third step, we used the fact that the trace of a real number is just the
real number; the fourth step used the fact that trA = trAT , and the fifth
step used Equation (5) with AT = θ, B = B T = X T X, and C = I, and
Equation (1). To minimize J, we set its derivatives to zero, and obtain the
normal equations:
X^T Xθ = X^T ~y
Thus, the value of θ that minimizes J(θ) is given in closed form by the
equation
θ = (X^T X)^{−1} X^T ~y.
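As a sketch, the closed-form solution can be computed directly with NumPy; solving the linear system is generally preferable to forming the explicit inverse. The small dataset below just reuses the first rows of the housing table for illustration.

import numpy as np

def normal_equations(X, y):
    # Solve X^T X theta = X^T y for theta (the least-squares fit).
    return np.linalg.solve(X.T @ X, X.T @ y)

# Illustrative data: columns are [intercept, living area]; targets are prices.
X = np.array([[1.0, 2104.0], [1.0, 1600.0], [1.0, 2400.0], [1.0, 1416.0]])
y = np.array([400.0, 330.0, 369.0, 232.0])
theta = normal_equations(X, y)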
3 Probabilistic interpretation
When faced with a regression problem, why might linear regression, and
specifically why might the least-squares cost function J, be a reasonable
choice? In this section, we will give a set of probabilistic assumptions, under
which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the
equation
y (i) = θT x(i) + ǫ(i) ,
where ǫ(i) is an error term that captures either unmodeled effects (such as
if there are some features very pertinent to predicting housing price, but
that we’d left out of the regression), or random noise. Let us further assume
that the ǫ(i) are distributed IID (independently and identically distributed)
according to a Gaussian distribution (also called a Normal distribution) with
mean zero and some variance σ². We can write this assumption as “ǫ^{(i)} ∼
N(0, σ²).” I.e., the density of ǫ^{(i)} is given by

p(ǫ^{(i)}) = (1/(√(2π) σ)) exp( −(ǫ^{(i)})² / (2σ²) ).

This implies that

p(y^{(i)} | x^{(i)}; θ) = (1/(√(2π) σ)) exp( −(y^{(i)} − θ^T x^{(i)})² / (2σ²) ).
The notation “p(y (i) |x(i) ; θ)” indicates that this is the distribution of y (i)
given x(i) and parameterized by θ. Note that we should not condition on θ
(“p(y^{(i)} | x^{(i)}, θ)”), since θ is not a random variable. We can also write the
distribution of y^{(i)} as y^{(i)} | x^{(i)}; θ ∼ N(θ^T x^{(i)}, σ²).
Given X (the design matrix, which contains all the x^{(i)}’s) and θ, what
is the distribution of the y^{(i)}’s? The probability of the data is given by
p(~y|X; θ). This quantity is typically viewed as a function of ~y (and perhaps X),
for a fixed value of θ. When we wish to explicitly view this as a function of
θ, we will instead call it the likelihood function:

L(θ) = L(θ; X, ~y) = p(~y | X; θ).
Note that by the independence assumption on the ǫ(i) ’s (and hence also the
y (i) ’s given the x(i) ’s), this can also be written
L(θ) = ∏_{i=1}^{m} p(y^{(i)} | x^{(i)}; θ)
     = ∏_{i=1}^{m} (1/(√(2π) σ)) exp( −(y^{(i)} − θ^T x^{(i)})² / (2σ²) ).
Now, given this probabilistic model relating the y^{(i)}’s and the x^{(i)}’s, what
is a reasonable way of choosing our best guess of the parameters θ? The
principle of maximum likelihood says that we should choose θ so
as to make the data as high probability as possible. I.e., we should choose θ
to maximize L(θ).
Instead of maximizing L(θ), we can also maximize any strictly increasing
function of L(θ). In particular, the derivations will be a bit simpler if we
instead maximize the log likelihood ℓ(θ); doing so leads to the same choice of
θ as minimizing the least-squares cost function J(θ).
[Figure: three panels plotting y against x, each showing a different fit to the same dataset.]
In the original linear regression algorithm, to make a prediction at a query
point x (i.e., to evaluate h(x)), we would:

1. Fit θ to minimize Σ_i (y^{(i)} − θ^T x^{(i)})².

2. Output θ^T x.
In contrast, the locally weighted linear regression algorithm does the fol-
lowing:
1. Fit θ to minimize Σ_i w^{(i)} (y^{(i)} − θ^T x^{(i)})².
2. Output θT x.
Here, the w(i) ’s are non-negative valued weights. Intuitively, if w(i) is large
for a particular value of i, then in picking θ, we’ll try hard to make (y (i) −
θT x(i) )2 small. If w(i) is small, then the (y (i) − θT x(i) )2 error term will be
pretty much ignored in the fit.
A fairly standard choice for the weights is⁴

w^{(i)} = exp( −(x^{(i)} − x)² / (2τ²) ).
Note that the weights depend on the particular point x at which we’re trying
to evaluate h. Moreover, if |x^{(i)} − x| is small, then w^{(i)} is close to 1; and
if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much
higher “weight” to the (errors on) training examples close to the query point
x. (Note also that while the formula for the weights takes a form that is
cosmetically similar to the density of a Gaussian distribution, the w(i) ’s do
not directly have anything to do with Gaussians, and in particular the w(i)
are not random variables, normally distributed or otherwise.) The parameter
τ controls how quickly the weight of a training example falls off with distance
of its x(i) from the query point x; τ is called the bandwidth parameter, and
is also something that you’ll get to experiment with in your homework.
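Here is a minimal sketch of making one locally weighted linear regression prediction. It fits θ by solving the weighted normal equations (not derived above) for the weighted least-squares objective; the intercept column in X and all names are assumptions for illustration.

import numpy as np

def lwr_predict(x_query, X, y, tau=1.0):
    """Predict y at x_query with locally weighted linear regression."""
    diffs = X - x_query
    # w(i) = exp(-||x(i) - x||^2 / (2 tau^2)) for every training example
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * tau ** 2))
    W = np.diag(w)
    # Fit theta to minimize sum_i w(i) (y(i) - theta^T x(i))^2 via the
    # weighted normal equations X^T W X theta = X^T W y.
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return x_query @ theta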
Locally weighted linear regression is the first example we’re seeing of a
non-parametric algorithm. The (unweighted) linear regression algorithm
that we saw earlier is known as a parametric learning algorithm, because
it has a fixed, finite number of parameters (the θi ’s), which are fit to the
data. Once we’ve fit the θi ’s and stored them away, we no longer need to
keep the training data around to make future predictions. In contrast, to
make predictions using locally weighted linear regression, we need to keep
the entire training set around. The term “non-parametric” (roughly) refers
to the fact that the amount of stuff we need to keep in order to represent the
hypothesis h grows linearly with the size of the training set.
⁴ If x is vector-valued, this is generalized to be w^{(i)} = exp(−(x^{(i)} − x)^T (x^{(i)} − x)/(2τ²)),
or w^{(i)} = exp(−(x^{(i)} − x)^T Σ^{−1} (x^{(i)} − x)/2), for an appropriate choice of τ or Σ.
Part II
Classification and logistic regression
Let's now talk about the classification problem. This is just like the regression
problem, except that the values y we now want to predict take on only
a small number of discrete values. For now, we will focus on the binary
classification problem in which y can take on only two values, 0 and 1.
(Most of what we say here will also generalize to the multiple-class case.)
For instance, if we are trying to build a spam classifier for email, then x(i)
may be some features of a piece of email, and y may be 1 if it is a piece
of spam mail, and 0 otherwise. 0 is also called the negative class, and 1
the positive class, and they are sometimes also denoted by the symbols “-”
and “+.” Given x(i) , the corresponding y (i) is also called the label for the
training example.
5 Logistic regression
We could approach the classification problem ignoring the fact that y is
discrete-valued, and use our old linear regression algorithm to try to predict
y given x. However, it is easy to construct examples where this method
performs very poorly. Intuitively, it also doesn’t make sense for hθ (x) to take
values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}.
To fix this, let's change the form for our hypotheses h_θ(x). We will choose
h_θ(x) = g(θ^T x) = 1 / (1 + e^{−θ^T x}),

where

g(z) = 1 / (1 + e^{−z})
is called the logistic function or the sigmoid function. Here is a plot
showing g(z):
[Figure: plot of the sigmoid function g(z); g(z) tends towards 1 as z → ∞, tends towards 0 as z → −∞, and g(0) = 0.5.]
So, given the logistic regression model, how do we fit θ for it? Follow-
ing how we saw least squares regression could be derived as the maximum
likelihood estimator under a set of assumptions, let's endow our classification
model with a set of probabilistic assumptions, and then fit the parameters
via maximum likelihood.
Let us assume that

P(y = 1 | x; θ) = h_θ(x)
P(y = 0 | x; θ) = 1 − h_θ(x).

Assuming that the m training examples were generated independently, we can
then write down the likelihood of the parameters as

L(θ) = p(~y | X; θ)
     = ∏_{i=1}^{m} p(y^{(i)} | x^{(i)}; θ)
     = ∏_{i=1}^{m} (h_θ(x^{(i)}))^{y^{(i)}} (1 − h_θ(x^{(i)}))^{1 − y^{(i)}}
As before, it will be easier to maximize the log likelihood ℓ(θ) = log L(θ).
Taking derivatives for a single training example (x, y), one finds

(∂/∂θ_j) ℓ(θ) = (y − h_θ(x)) x_j
Above, we used the fact that g ′ (z) = g(z)(1 − g(z)). This therefore gives us
the stochastic gradient ascent rule
θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}
If we compare this to the LMS update rule, we see that it looks identical; but
this is not the same algorithm, because hθ (x(i) ) is now defined as a non-linear
function of θT x(i) . Nonetheless, it’s a little surprising that we end up with
the same update rule for a rather different algorithm and learning problem.
Is this coincidence, or is there a deeper reason behind this? We’ll answer this
when we get to GLM models. (See also the extra credit problem on Q3 of
problem set 1.)
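As a sketch, the stochastic gradient ascent rule above can be implemented directly; X is assumed to contain the intercept column, y holds 0/1 labels, and the hyperparameters are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_sga(X, y, alpha=0.1, num_epochs=100):
    """Fit logistic regression by stochastic gradient ascent on the log likelihood."""
    m, d = X.shape
    theta = np.zeros(d)
    for _ in range(num_epochs):
        for i in range(m):
            h = sigmoid(X[i] @ theta)                  # h_theta(x(i))
            theta = theta + alpha * (y[i] - h) * X[i]  # (y(i) - h) x(i)
    return theta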
Digression: consider modifying g to be the threshold function, g(z) = 1 if
z ≥ 0 and g(z) = 0 otherwise. If we then let h_θ(x) = g(θ^T x) as before but
using this modified definition of g, and if we use the update rule

θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)},

then we have the perceptron learning algorithm.
Another algorithm for maximizing ℓ(θ) is Newton’s method. To find a value
of θ so that f(θ) = 0 for some function f, Newton’s method performs the update
θ := θ − f(θ)/f′(θ).

[Figure: three plots of a function f(x), showing successive iterations of Newton’s method.]
In the leftmost figure, we see the function f plotted along with the line
y = 0. We’re trying to find θ so that f (θ) = 0; the value of θ that achieves this
is about 1.3. Suppose we initialized the algorithm with θ = 4.5. Newton’s
method then fits a straight line tangent to f at θ = 4.5, and solves for where
that line evaluates to 0. (Middle figure.) This gives us the next guess
for θ, which is about 2.8. The rightmost figure shows the result of running
one more iteration, which updates θ to about 1.8. After a few more
iterations, we rapidly approach θ = 1.3.
Newton’s method gives a way of getting to f (θ) = 0. What if we want to
use it to maximize some function ℓ? The maxima of ℓ correspond to points
where its first derivative ℓ′ (θ) is zero. So, by letting f (θ) = ℓ′ (θ), we can use
the same algorithm to maximize ℓ, and we obtain the update rule:

θ := θ − ℓ′(θ)/ℓ′′(θ).
(Something to think about: How would this change if we wanted to use
Newton’s method to minimize rather than maximize a function?)
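For the one-dimensional case described above, Newton's method can be sketched in a few lines; the function names and the example are hypothetical illustrations.

def newtons_method_maximize(l_prime, l_double_prime, theta, num_iters=10):
    """Maximize l(theta) by applying theta := theta - l'(theta)/l''(theta)."""
    for _ in range(num_iters):
        theta = theta - l_prime(theta) / l_double_prime(theta)
    return theta

# Hypothetical example: maximize l(theta) = -(theta - 1.3)^2, starting at 4.5.
theta_star = newtons_method_maximize(lambda t: -2.0 * (t - 1.3),  # l'(theta)
                                     lambda t: -2.0,              # l''(theta)
                                     theta=4.5)
# Since l is quadratic here, a single Newton step already lands on theta = 1.3.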
Part III
Generalized Linear Models⁵
So far, we’ve seen a regression example, and a classification example. In the
regression example, we had y|x; θ ∼ N (µ, σ 2 ), and in the classification one,
y|x; θ ∼ Bernoulli(φ), for some appropriate definitions of µ and φ as
functions of x and θ. In this section, we will show that both of these methods
are special cases of a broader family of models, called Generalized Linear
Models (GLMs). We will also show how other models in the GLM family
can be derived and applied to other classification and regression problems.
We say that a class of distributions is in the exponential family if it can be
written in the form

p(y; η) = b(y) exp(η^T T(y) − a(η)).    (6)

Here, η is called the natural parameter (also called the canonical param-
eter) of the distribution; T (y) is the sufficient statistic (for the distribu-
tions we consider, it will often be the case that T (y) = y); and a(η) is the log
partition function. The quantity e−a(η) essentially plays the role of a nor-
malization constant, that makes sure the distribution p(y; η) sums/integrates
over y to 1.
A fixed choice of T , a and b defines a family (or set) of distributions that
is parameterized by η; as we vary η, we then get different distributions within
this family.
We now show that the Bernoulli and the Gaussian distributions are ex-
amples of exponential family distributions. The Bernoulli distribution with
mean φ, written Bernoulli(φ), specifies a distribution over y ∈ {0, 1}, so that
p(y = 1; φ) = φ; p(y = 0; φ) = 1 − φ. As we vary φ, we obtain Bernoulli
distributions with different means. We now show that this class of Bernoulli
distributions, ones obtained by varying φ, is in the exponential family; i.e.,
that there is a choice of T , a and b so that Equation (6) becomes exactly the
class of Bernoulli distributions.
⁵ The presentation of the material in this section takes inspiration from Michael I.
Jordan, Learning in graphical models (unpublished book draft), and also McCullagh and
Nelder, Generalized Linear Models (2nd ed.).
p(y; φ) = φ^y (1 − φ)^{1−y}
        = exp(y log φ + (1 − y) log(1 − φ))
        = exp( (log(φ/(1 − φ))) y + log(1 − φ) ).
Thus, the natural parameter is given by η = log(φ/(1 − φ)); inverting this
gives φ = 1/(1 + e^{−η}). Further, we have

T(y) = y
a(η) = − log(1 − φ) = log(1 + e^η)
b(y) = 1
This shows that the Bernoulli distribution can be written in the form of
Equation (6), using an appropriate choice of T , a and b.
Let's now move on to consider the Gaussian distribution. Recall that,
when deriving linear regression, the value of σ 2 had no effect on our final
choice of θ and hθ (x). Thus, we can choose an arbitrary value for σ 2 without
changing anything. To simplify the derivation below, let's set σ² = 1.⁶ We
then have:
p(y; µ) = (1/√(2π)) exp( −(1/2)(y − µ)² )
        = (1/√(2π)) exp( −(1/2) y² ) · exp( µy − (1/2) µ² )
⁶ If we leave σ² as a variable, the Gaussian distribution can also be shown to be in the
exponential family, where η ∈ R² is now a 2-dimensional vector that depends on both µ and
σ. For the purposes of GLMs, however, the σ² parameter can also be treated by considering
a more general definition of the exponential family: p(y; η, τ) = b(y, τ) exp((η^T T(y) −
a(η))/c(τ)). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ) = σ²;
but given our simplification above, we won’t need the more general definition for the
examples we will consider here.
Thus, we see that the Gaussian is in the exponential family, with

η = µ
T(y) = y
a(η) = µ²/2 = η²/2
b(y) = (1/√(2π)) exp(−y²/2).
9 Constructing GLMs
Suppose you would like to build a model to estimate the number y of cus-
tomers arriving in your store (or number of page-views on your website) in
any given hour, based on certain features x such as store promotions, recent
advertising, weather, day-of-week, etc. We know that the Poisson distribu-
tion usually gives a good model for numbers of visitors. Knowing this, how
can we come up with a model for our problem? Fortunately, the Poisson is an
exponential family distribution, so we can apply a Generalized Linear Model
(GLM). In this section, we will describe a method for constructing
GLM models for problems such as these.
More generally, consider a classification or regression problem where we
would like to predict the value of some random variable y as a function of
x. To derive a GLM for this problem, we will make the following three
assumptions about the conditional distribution of y given x and about our
model:
1. y | x; θ ∼ ExponentialFamily(η). I.e., given x and θ, the distribution of
y follows some exponential family distribution, with parameter η.
2. Given x, our goal is to predict the expected value of T (y) given x.
In most of our examples, we will have T (y) = y, so this means we
would like the prediction h(x) output by our learned hypothesis h to
satisfy h(x) = E[T(y)|x].

3. The natural parameter η and the inputs x are related linearly: η = θ^T x.
(Or, if η is vector-valued, then η_i = θ_i^T x.)
The third of these assumptions might seem the least well justified of
the above, and it might be better thought of as a “design choice” in our
recipe for designing GLMs, rather than as an assumption per se. These
three assumptions/design choices will allow us to derive a very elegant class
of learning algorithms, namely GLMs, that have many desirable properties
such as ease of learning. Furthermore, the resulting models are often very
effective for modelling different types of distributions over y; for example, we
will shortly show that both logistic regression and ordinary least squares can
be derived as GLMs.
For ordinary least squares, we model the conditional distribution of y given x
as a Gaussian N(µ, σ²), where µ may depend on x. Following the recipe above,
we have

h_θ(x) = E[y|x; θ] = µ = η = θ^T x.
The first equality follows from Assumption 2, above; the second equality
follows from the fact that y|x; θ ∼ N (µ, σ 2 ), and so its expected value is given
by µ; the third equality follows from Assumption 1 (and our earlier derivation
showing that µ = η in the formulation of the Gaussian as an exponential
family distribution); and the last equality follows from Assumption 3.
Similarly, for logistic regression, where y|x; θ ∼ Bernoulli(φ), we have

h_θ(x) = E[y|x; θ] = φ = 1/(1 + e^{−η}) = 1/(1 + e^{−θ^T x}).
So, this gives us hypothesis functions of the form h_θ(x) = 1/(1 + e^{−θ^T x}). If
you were previously wondering how we came up with the form of the logistic
function 1/(1 + e−z ), this gives one answer: Once we assume that y condi-
tioned on x is Bernoulli, it arises as a consequence of the definition of GLMs
and exponential family distributions.
To introduce a little more terminology, the function g giving the distri-
bution’s mean as a function of the natural parameter (g(η) = E[T (y); η])
is called the canonical response function. Its inverse, g −1 , is called the
canonical link function. Thus, the canonical response function for the
Gaussian family is just the identity function; and the canonical response
function for the Bernoulli is the logistic function.7
Unlike our previous examples, here we do not have T (y) = y; also, T (y) is
now a k − 1 dimensional vector, rather than a real number. We will write
(T (y))i to denote the i-th element of the vector T (y).
We introduce one more very useful piece of notation. An indicator func-
tion 1{·} takes on a value of 1 if its argument is true, and 0 otherwise
(1{True} = 1, 1{False} = 0). For example, 1{2 = 3} = 0, and 1{3 =
5 − 2} = 1. So, we can also write the relationship between T (y) and y as
(T (y))i = 1{y = i}. (Before you continue reading, please make sure you un-
derstand why this is true!) Further, we have that E[(T (y))i ] = P (y = i) = φi .
We are now ready to show that the multinomial is a member of the exponential family.
This function mapping from the η’s to the φ’s is called the softmax function.
To complete our model, we use Assumption 3, given earlier, that the ηi ’s
are linearly related to the x's. So, we have η_i = θ_i^T x (for i = 1, . . . , k − 1),
where θ1 , . . . , θk−1 ∈ Rn+1 are the parameters of our model. For notational
convenience, we can also define θk = 0, so that ηk = θkT x = 0, as given
previously. Hence, our model assumes that the conditional distribution of y
given x is given by
p(y = i | x; θ) = φ_i
               = e^{η_i} / Σ_{j=1}^{k} e^{η_j}
               = e^{θ_i^T x} / Σ_{j=1}^{k} e^{θ_j^T x}    (8)
In other words, our hypothesis will output the estimated probability
p(y = i|x; θ) for every value of i = 1, . . . , k. (Even though h_θ(x) as defined
above is only k − 1 dimensional, clearly p(y = k|x; θ) can be obtained as
1 − Σ_{i=1}^{k−1} φ_i.)
To obtain the second line above, we used the definition for p(y|x; θ) given
in Equation (8). We can now obtain the maximum likelihood estimate of
the parameters by maximizing ℓ(θ) in terms of θ, using a method such as
gradient ascent or Newton’s method.
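A minimal sketch of evaluating the softmax hypothesis of Equation (8): Theta stacks θ_1, ..., θ_{k−1} as rows, θ_k is taken to be 0 as in the convention above, and x includes the intercept term. Subtracting the max is only for numerical stability and does not change the probabilities.

import numpy as np

def softmax_hypothesis(x, Theta):
    """Return the estimated probabilities p(y = i | x; theta) for i = 1..k."""
    eta = np.append(Theta @ x, 0.0)   # eta_i = theta_i^T x, with eta_k = 0
    eta = eta - np.max(eta)           # numerical stability; probabilities unchanged
    exp_eta = np.exp(eta)
    return exp_eta / exp_eta.sum()    # phi_i = e^{eta_i} / sum_j e^{eta_j}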
CS229 Lecture notes
Andrew Ng
Part IV
Generative Learning algorithms
So far, we’ve mainly been talking about learning algorithms that model
p(y|x; θ), the conditional distribution of y given x. For instance, logistic
regression modeled p(y|x; θ) as hθ (x) = g(θ T x) where g is the sigmoid func-
tion. In these notes, we’ll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants (y = 1) and dogs (y = 0), based on some features of
an animal. Given a training set, an algorithm like logistic regression or
the perceptron algorithm (basically) tries to find a straight line—that is, a
decision boundary—that separates the elephants and dogs. Then, to classify
a new animal as either an elephant or a dog, it checks on which side of the
decision boundary it falls, and makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a
model of what elephants look like. Then, looking at dogs, we can build a
separate model of what dogs look like. Finally, to classify a new animal, we
can match the new animal against the elephant model, and match it against
the dog model, to see whether the new animal looks more like the elephants
or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X
to the labels {0, 1}, (such as the perceptron algorithm) are called discrim-
inative learning algorithms. Here, we’ll talk about algorithms that instead
try to model p(x|y) (and p(y)). These algorithms are called generative
learning algorithms. For instance, if y indicates whether an example is a dog
(0) or an elephant (1), then p(x|y = 0) models the distribution of dogs’
features, and p(x|y = 1) models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p(x|y), our algorithm
can then use Bayes rule to derive the posterior distribution on y given x:
p(y|x) = p(x|y) p(y) / p(x).
Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y =
0)p(y = 0) (you should be able to verify that this is true from the standard
properties of probabilities), and thus can also be expressed in terms of the
quantities p(x|y) and p(y) that we’ve learned. Actually, if we were calculating
p(y|x) in order to make a prediction, then we don’t actually need to calculate
the denominator, since
arg max_y p(y|x) = arg max_y p(x|y) p(y) / p(x)
                 = arg max_y p(x|y) p(y).
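As a small sketch of the prediction rule above: since p(x) does not depend on y, we can score each class by p(x|y)p(y) and return the arg max. The dictionary-based interface is purely illustrative.

def generative_predict(x, class_priors, class_conditionals):
    """Return arg max_y p(x|y) p(y), ignoring the shared denominator p(x)."""
    scores = {c: class_conditionals[c](x) * class_priors[c] for c in class_priors}
    return max(scores, key=scores.get)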
For a random variable X distributed as a multivariate Gaussian N(µ, Σ), we have
Cov(X) = Σ.

[Figure: three surface plots of the density of a two-dimensional Gaussian.]
The left-most figure shows a Gaussian with mean zero (that is, the 2x1
zero-vector) and covariance matrix Σ = I (the 2x2 identity matrix). A Gaus-
sian with zero mean and identity covariance is also called the standard nor-
mal distribution. The middle figure shows the density of a Gaussian with
zero mean and Σ = 0.6I; and the rightmost figure shows one with Σ = 2I.
We see that as Σ becomes larger, the Gaussian becomes more “spread-out,”
and as it becomes smaller, the distribution becomes more “compressed.”
Let's look at some more examples.
[Figure: three surface plots of zero-mean Gaussian densities.]
The figures above show Gaussians with mean 0, and with covariance
matrices respectively
Σ = [1, 0; 0, 1];    Σ = [1, 0.5; 0.5, 1];    Σ = [1, 0.8; 0.8, 1].
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
“compressed” towards the 45◦ line (given by x1 = x2 ). We can see this more
clearly when we look at the contours of the same three densities:
[Figures: contour plots of the same three densities, followed by further examples of Gaussian densities and their contours for other covariance matrices and means.]
In the Gaussian discriminant analysis (GDA) model, we assume:

y ∼ Bernoulli(φ)
x | y = 0 ∼ N(µ_0, Σ)
x | y = 1 ∼ N(µ_1, Σ)
Writing out the distributions, this is:

p(y) = φ^y (1 − φ)^{1−y}
p(x | y = 0) = (1/((2π)^{n/2} |Σ|^{1/2})) exp( −(1/2)(x − µ_0)^T Σ^{−1} (x − µ_0) )
p(x | y = 1) = (1/((2π)^{n/2} |Σ|^{1/2})) exp( −(1/2)(x − µ_1)^T Σ^{−1} (x − µ_1) )
Here, the parameters of our model are φ, Σ, µ0 and µ1 . (Note that while
there’re two different mean vectors µ0 and µ1 , this model is usually applied
using only one covariance matrix Σ.) The log-likelihood of the data is given
by
ℓ(φ, µ_0, µ_1, Σ) = log ∏_{i=1}^{m} p(x^{(i)}, y^{(i)}; φ, µ_0, µ_1, Σ)
                  = log ∏_{i=1}^{m} p(x^{(i)} | y^{(i)}; µ_0, µ_1, Σ) p(y^{(i)}; φ).
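Maximizing this log-likelihood gives closed-form estimates (the derivation is omitted in the excerpt above): φ is the fraction of positive examples, µ_0 and µ_1 are the per-class means, and Σ is the pooled covariance. A minimal NumPy sketch, with names chosen for illustration:

import numpy as np

def fit_gda(X, y):
    """Maximum likelihood parameters of the GDA model (shared covariance)."""
    m, n = X.shape
    phi = np.mean(y == 1)
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # Pooled covariance: deviations of each example from its own class mean.
    mus = np.where((y == 1)[:, None], mu1, mu0)
    diffs = X - mus
    Sigma = (diffs.T @ diffs) / m
    return phi, mu0, mu1, Sigma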
[Figure: a training set of two classes, the contours of the two fitted Gaussians, and the straight-line decision boundary at which p(y = 1|x) = 0.5.]
Shown in the figure are the training set, as well as the contours of the
two Gaussian distributions that have been fit to the data in each of the
two classes. Note that the two Gaussians have contours that are the same
shape and orientation, since they share a covariance matrix Σ, but they have
different means µ0 and µ1 . Also shown in the figure is the straight line
giving the decision boundary at which p(y = 1|x) = 0.5. On one side of
the boundary, we’ll predict y = 1 to be the most likely outcome, and on the
other side, we’ll predict y = 0.
When GDA’s modeling assumptions are not correct, then in the limit of large
training sets, logistic regression will almost always do better than GDA. For
this reason, in practice logistic regression is used more often than GDA. (Some
related considerations about
discriminative vs. generative models also apply for the Naive Bayes algo-
rithm that we discuss next, but the Naive Bayes algorithm is still considered
a very good, and is certainly also a very popular, classification algorithm.)
2 Naive Bayes
In GDA, the feature vectors x were continuous, real-valued vectors. Let's now
talk about a different learning algorithm in which the xi ’s are discrete-valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether
they are unsolicited commercial (spam) email, or non-spam email. After
learning to do this, we can then have our mail reader automatically filter
out the spam messages and perhaps place them in a separate mail folder.
Classifying emails is one example of a broader set of problems called text
classification.
Let's say we have a training set (a set of emails labeled as spam or non-
spam). We’ll begin our construction of our spam filter by specifying the
features xi used to represent an email.
We will represent an email via a feature vector whose length is equal to
the number of words in the dictionary. Specifically, if an email contains the
i-th word of the dictionary, then we will set xi = 1; otherwise, we let xi = 0.
For instance, the vector
x = [ 1   (a)
      0   (aardvark)
      0   (aardwolf)
      ⋮
      1   (buy)
      ⋮
      0   (zygmurgy) ]
is used to represent an email that contains the words “a” and “buy,” but not
“aardvark,” “aardwolf” or “zygmurgy.”2 The set of words encoded into the
² Actually, rather than looking through an English dictionary for the list of all English
words, in practice it is more common to look through our training set and encode in our
feature vector only the words that occur at least once there. Apart from reducing the
number of words modeled and hence reducing our computational and space requirements,
The first equality simply follows from the usual properties of probabilities,
and the second equality used the NB assumption. We note that even though
the Naive Bayes assumption is an extremely strong assumption, the resulting
algorithm works well on many problems.
Our model is parameterized by φi|y=1 = p(xi = 1|y = 1), φi|y=0 = p(xi =
1|y = 0), and φ_y = p(y = 1). As usual, given a training set {(x^{(i)}, y^{(i)}); i =
1, . . . , m}, we can write down the joint likelihood of the data,
L(φ_y, φ_{j|y=0}, φ_{j|y=1}) = ∏_{i=1}^{m} p(x^{(i)}, y^{(i)}).
this also has the advantage of allowing us to model/include as a feature many words
that may appear in your email (such as “cs229”) but that you won’t find in a dictionary.
Sometimes (as in the homework), we also exclude the very high frequency words (which
will be words like “the,” “of,” “and,”; these high frequency, “content free” words are called
stop words) since they occur in so many documents and do little to indicate whether an
email is spam or non-spam.
Maximizing this with respect to φy , φi|y=0 and φi|y=1 gives the maximum
likelihood estimates:
φ_{j|y=1} = Σ_{i=1}^{m} 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1} / Σ_{i=1}^{m} 1{y^{(i)} = 1}

φ_{j|y=0} = Σ_{i=1}^{m} 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0} / Σ_{i=1}^{m} 1{y^{(i)} = 0}

φ_y = Σ_{i=1}^{m} 1{y^{(i)} = 1} / m
In the equations above, the “∧” symbol means “and.” The parameters have
a very natural interpretation. For instance, φj|y=1 is just the fraction of the
spam (y = 1) emails in which word j does appear.
Having fit all these parameters, to make a prediction on a new example
with features x, we then simply calculate
p(y = 1|x) = p(x|y = 1) p(y = 1) / p(x)
           = (∏_{i=1}^{n} p(x_i|y = 1)) p(y = 1)
             / [ (∏_{i=1}^{n} p(x_i|y = 1)) p(y = 1) + (∏_{i=1}^{n} p(x_i|y = 0)) p(y = 0) ],

and pick whichever class has the higher posterior probability.
(In practice, it usually doesn’t matter much whether we apply Laplace smooth-
ing to φy or not, since we will typically have a fair fraction each of spam and
non-spam messages, so φy will be a reasonable estimate of p(y = 1) and will
be quite far from 0 anyway.)
While the Naive Bayes algorithm as we’ve presented it will work well for many classification problems, for text
classification, there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses
what’s called the multi-variate Bernoulli event model. In this model,
we assumed that the way an email is generated is that first it is randomly
determined (according to the class priors p(y)) whether a spammer or non-
spammer will send you your next message. Then, the person sending the
email runs through the dictionary, deciding whether to include each word i
in that email independently and according to the probabilities p(x_i = 1|y) =
φ_{i|y}. Thus, the probability of a message was given by p(y) ∏_{i=1}^{n} p(x_i|y).
Here’s a different model, called the multinomial event model. To de-
scribe this model, we will use a different notation and set of features for
representing emails. We let xi denote the identity of the i-th word in the
email. Thus, xi is now an integer taking values in {1, . . . , |V |}, where |V |
is the size of our vocabulary (dictionary). An email of n words is now rep-
resented by a vector (x1 , x2 , . . . , xn ) of length n; note that n can vary for
different documents. For instance, if an email starts with “A NIPS . . . ,”
then x1 = 1 (“a” is the first word in the dictionary), and x2 = 35000 (if
“nips” is the 35000th word in the dictionary).
In the multinomial event model, we assume that the way an email is
generated is via a random process in which spam/non-spam is first deter-
mined (according to p(y)) as before. Then, the sender of the email writes the
email by first generating x1 from some multinomial distribution over words
(p(x1 |y)). Next, the second word x2 is chosen independently of x1 but from
the same multinomial distribution, and similarly for x3 , x4 , and so on, until
all n words of the email have been generated. Thus, the overall probability of
a message is given by p(y) ∏_{i=1}^{n} p(x_i|y). Note that this formula looks like the
one we had earlier for the probability of a message under the multi-variate
Bernoulli event model, but that the terms in the formula now mean very dif-
ferent things. In particular xi |y is now a multinomial, rather than a Bernoulli
distribution.
The parameters for our new model are φy = p(y) as before, φi|y=1 =
p(xj = i|y = 1) (for any j) and φi|y=0 = p(xj = i|y = 0). Note that we have
assumed that p(xj |y) is the same for all values of j (i.e., that the distribution
according to which a word is generated does not depend on its position j
within the email).
If we are given a training set {(x^{(i)}, y^{(i)}); i = 1, . . . , m} where x^{(i)} =
(x_1^{(i)}, x_2^{(i)}, . . . , x_{n_i}^{(i)}) (here, n_i is the number of words in the i-th training example),
While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good “first thing
to try,” given its simplicity and ease of implementation.
CS229 Lecture notes
Andrew Ng
Part V
Support Vector Machines
This set of notes presents the Support Vector Machine (SVM) learning al-
gorithm. SVMs are among the best (and many believe are indeed the best)
“off-the-shelf” supervised learning algorithms. To tell the SVM story, we’ll
need to first talk about margins and the idea of separating data with a large
“gap.” Next, we’ll talk about the optimal margin classifier, which will lead
us into a digression on Lagrange duality. We’ll also see kernels, which give
a way to apply SVMs efficiently in very high dimensional (such as infinite-
dimensional) feature spaces, and finally, we’ll close off the story with the
SMO algorithm, which gives an efficient implementation of SVMs.
1 Margins: Intuition
We’ll start our story on SVMs by talking about margins. This section will
give the intuitions about margins and about the “confidence” of our predic-
tions; these ideas will be made formal in Section 3.
Consider logistic regression, where the probability p(y = 1|x; θ) is mod-
eled by hθ (x) = g(θ T x). We would then predict “1” on an input x if and
only if hθ (x) ≥ 0.5, or equivalently, if and only if θ T x ≥ 0. Consider a
positive training example (y = 1). The larger θ T x is, the larger also is
h_θ(x) = p(y = 1|x; θ), and thus also the higher our degree of “confidence”
that the label is 1. Thus, informally we can think of our prediction as being
a very confident one that y = 1 if θ^T x ≫ 0. Similarly, we think of logistic
regression as making a very confident prediction of y = 0, if θ^T x ≪ 0. Given
a training set, again informally it seems that we’d have found a good fit to
the training data if we can find θ so that θ^T x^{(i)} ≫ 0 whenever y^{(i)} = 1, and
θ^T x^{(i)} ≪ 0 whenever y^{(i)} = 0, since this would reflect a very confident (and
correct) set of classifications for all the training examples. This seems to be
a nice goal to aim for, and we’ll soon formalize this idea using the notion of
functional margins.
For a different type of intuition, consider the following figure, in which x’s
represent positive training examples, o’s denote negative training examples,
a decision boundary (this is the line given by the equation θ T x = 0, and
is also called the separating hyperplane) is also shown, and three points
have also been labeled A, B and C.
[Figure: positive (x) and negative (o) training examples, a separating hyperplane, and three labeled points A, B, and C at decreasing distances from the decision boundary.]
Notice that the point A is very far from the decision boundary. If we are
asked to make a prediction for the value of y at A, it seems we should be
quite confident that y = 1 there. Conversely, the point C is very close to
the decision boundary, and while it’s on the side of the decision boundary
on which we would predict y = 1, it seems likely that just a small change to
the decision boundary could easily have caused our prediction to be y = 0.
Hence, we’re much more confident about our prediction at A than at C. The
point B lies in-between these two cases, and more broadly, we see that if
a point is far from the separating hyperplane, then we may be significantly
more confident in our predictions. Again, informally we think it’d be nice if,
given a training set, we manage to find a decision boundary that allows us
to make all correct and confident (meaning far from the decision boundary)
predictions on the training examples. We’ll formalize this later using the
notion of geometric margins.
2 Notation
To make our discussion of SVMs easier, we’ll first need to introduce a new
notation for talking about classification. We will be considering a linear
classifier for a binary classification problem with labels y and features x.
From now, we’ll use y ∈ {−1, 1} (instead of {0, 1}) to denote the class labels.
Also, rather than parameterizing our linear classifier with the vector θ, we
will use parameters w, b, and write our classifier as

h_{w,b}(x) = g(w^T x + b).

Here, g(z) = 1 if z ≥ 0, and g(z) = −1 otherwise. Given a training example
(x^{(i)}, y^{(i)}), we define the functional margin of (w, b) with respect to the
training example as

γ̂^{(i)} = y^{(i)} (w^T x^{(i)} + b).
Note that if y (i) = 1, then for the functional margin to be large (i.e., for our
prediction to be confident and correct), then we need w T x + b to be a large
positive number. Conversely, if y (i) = −1, then for the functional margin to
be large, then we need w T x + b to be a large negative number. Moreover,
if y (i) (wT x + b) > 0, then our prediction on this example is correct. (Check
this yourself.) Hence, a large functional margin represents a confident and a
correct prediction.
For a linear classifier with the choice of g given above (taking values in
{−1, 1}), there’s one property of the functional margin that makes it not a
very good measure of confidence, however. Given our choice of g, we note that
if we replace w with 2w and b with 2b, then since g(w T x + b) = g(2w T x + 2b),
4
this would not change hw,b (x) at all. I.e., g, and hence also hw,b (x), depends
only on the sign, but not on the magnitude, of w T x + b. However, replacing
(w, b) with (2w, 2b) also results in multiplying our functional margin by a
factor of 2. Thus, it seems that by exploiting our freedom to scale w and b,
we can make the functional margin arbitrarily large without really changing
anything meaningful. Intuitively, it might therefore make sense to impose
some sort of normalization condition such as that ||w||2 = 1; i.e., we might
replace (w, b) with (w/||w||2 , b/||w||2 ), and instead consider the functional
margin of (w/||w||2 , b/||w||2 ). We’ll come back to this later.
Given a training set S = {(x^{(i)}, y^{(i)}); i = 1, . . . , m}, we also define the
functional margin of (w, b) with respect to S as the smallest of the functional
margins of the individual training examples. Denoted by γ̂, this can therefore
be written:

γ̂ = min_{i=1,...,m} γ̂^{(i)}.
Next, let's talk about geometric margins. Consider the picture below:

[Figure: a separating hyperplane with normal vector w, a training point A = x^{(i)},
its projection B onto the hyperplane, and the distance γ^{(i)} between them.]

The point A represents the input x^{(i)} of some training example, and its distance
to the decision boundary, γ^{(i)}, is given by the segment AB. Since w/||w|| is a
unit-length vector pointing in the same direction as w, we
find that the point B is given by x^{(i)} − γ^{(i)} · w/||w||. But this point lies on
the decision boundary, and all points x on the decision boundary satisfy the
equation w T x + b = 0. Hence,
w^T ( x^{(i)} − γ^{(i)} · w/||w|| ) + b = 0.

Solving for γ^{(i)} yields γ^{(i)} = (w/||w||)^T x^{(i)} + b/||w||. More generally, we
define the geometric margin of (w, b) with respect to a training example
(x^{(i)}, y^{(i)}) to be

γ^{(i)} = y^{(i)} ( (w/||w||)^T x^{(i)} + b/||w|| ).
Note that if ||w|| = 1, then the functional margin equals the geometric
margin—this thus gives us a way of relating these two different notions of
margin. Also, the geometric margin is invariant to rescaling of the parame-
ters; i.e., if we replace w with 2w and b with 2b, then the geometric margin
does not change. This will in fact come in handy later. Specifically, because
of this invariance to the scaling of the parameters, when trying to fit w and b
to training data, we can impose an arbitrary scaling constraint on w without
changing anything important; for instance, we can demand that ||w|| = 1, or
|w1 | = 5, or |w1 + b| + |w2 | = 2, and any of these can be satisfied simply by
rescaling w and b.
Finally, given a training set S = {(x(i) , y (i) ); i = 1, . . . , m}, we also define
the geometric margin of (w, b) with respect to S to be the smallest of the
geometric margins on the individual training examples:
γ = min_{i=1,...,m} γ^{(i)}.
on the training set and a good “fit” to the training data. Specifically, this
will result in a classifier that separates the positive and the negative training
examples with a “gap” (geometric margin).
For now, we will assume that we are given a training set that is linearly
separable; i.e., that it is possible to separate the positive and negative ex-
amples using some separating hyperplane. How do we find the one that
achieves the maximum geometric margin? We can pose the following opti-
mization problem:
max_{γ,w,b}  γ
s.t.  y^{(i)}(w^T x^{(i)} + b) ≥ γ,  i = 1, . . . , m
      ||w|| = 1.
Here, we’re going to maximize γ̂/||w||, subject to the functional margins all
being at least γ̂. Since the geometric and functional margins are related by
γ = γ̂/||w||, this will give us the answer we want. Moreover, we’ve gotten rid
of the constraint ||w|| = 1 that we didn’t like. The downside is that we now
have a nasty (again, non-convex) objective function γ̂/||w||; and, we still don’t
have any off-the-shelf software that can solve this form of an optimization
problem.
Let's keep going. Recall our earlier discussion that we can add an arbitrary
scaling constraint on w and b without changing anything. This is the key idea
we’ll use now. We will introduce the scaling constraint that the functional
margin of w, b with respect to the training set must be 1:
γ̂ = 1.
5 Lagrange duality
Let's temporarily put aside SVMs and maximum margin classifiers, and talk
about solving constrained optimization problems.
Consider a problem of the following form:
min_w  f(w)
s.t.  h_i(w) = 0,  i = 1, . . . , l.
Some of you may recall how the method of Lagrange multipliers can be used
to solve it. (Don’t worry if you haven’t seen it before.) In this method, we
define the Lagrangian to be
L(w, β) = f(w) + Σ_{i=1}^{l} β_i h_i(w)
¹ You may be familiar with linear programming, which solves optimization problems
that have linear objectives and linear constraints. QP software is also widely available,
which allows convex quadratic objectives and linear constraints.
Here, the βi ’s are called the Lagrange multipliers. We would then find
and set L’s partial derivatives to zero:
∂L/∂w_i = 0;    ∂L/∂β_i = 0,
and solve for w and β.
In this section, we will generalize this to constrained optimization prob-
lems in which we may have inequality as well as equality constraints. Due to
time constraints, we won’t really be able to do the theory of Lagrange duality
justice in this class,2 but we will give the main ideas and results, which we
will then apply to our optimal margin classifier’s optimization problem.
Consider the following, which we’ll call the primal optimization problem:
min_w  f(w)
s.t.  g_i(w) ≤ 0,  i = 1, . . . , k
      h_i(w) = 0,  i = 1, . . . , l.
To solve it, we start by defining the generalized Lagrangian
L(w, α, β) = f(w) + Σ_{i=1}^{k} α_i g_i(w) + Σ_{i=1}^{l} β_i h_i(w).
Here, the αi ’s and βi ’s are the Lagrange multipliers. Consider the quantity
θ_P(w) = max_{α,β : α_i ≥ 0} L(w, α, β).
Here, the “P” subscript stands for “primal.” Let some w be given. If w
violates any of the primal constraints (i.e., if either g_i(w) > 0 or h_i(w) ≠ 0
for some i), then you should be able to verify that
θ_P(w) = max_{α,β : α_i ≥ 0} [ f(w) + Σ_{i=1}^{k} α_i g_i(w) + Σ_{i=1}^{l} β_i h_i(w) ]    (1)
       = ∞.    (2)
Conversely, if the constraints are indeed satisfied for a particular value of w,
then θP (w) = f (w). Hence,
θ_P(w) = { f(w)   if w satisfies the primal constraints
           ∞      otherwise.
² Readers interested in learning more about this topic are encouraged to read, e.g., R.
T. Rockafellar (1970), Convex Analysis, Princeton University Press.
Thus, θP takes the same value as the objective in our problem for all val-
ues of w that satisfy the primal constraints, and is positive infinity if the
constraints are violated. Hence, if we consider the minimization problem
min_w θ_P(w) = min_w max_{α,β : α_i ≥ 0} L(w, α, β),
we see that it is the same problem as (i.e., it has the same solutions as) our
original, primal problem. For later use, we also define the optimal value of
the objective to be p∗ = minw θP (w); we call this the value of the primal
problem.
Now, lets look at a slightly different problem. We define
θ_D(α, β) = min_w L(w, α, β).
Here, the “D” subscript stands for “dual.” Note also that whereas in the
definition of θP we were optimizing (maximizing) with respect to α, β, here
we are minimizing with respect to w.
We can now pose the dual optimization problem:
max_{α,β : α_i ≥ 0} θ_D(α, β) = max_{α,β : α_i ≥ 0} min_w L(w, α, β).
This is exactly the same as our primal problem shown above, except that the
order of the “max” and the “min” are now exchanged. We also define the
optimal value of the dual problem’s objective to be d* = max_{α,β : α_i ≥ 0} θ_D(α, β).
How are the primal and the dual problems related? It can easily be shown
that
d* = max_{α,β : α_i ≥ 0} min_w L(w, α, β) ≤ min_w max_{α,β : α_i ≥ 0} L(w, α, β) = p*.
(You should convince yourself of this; this follows from the “max min” of a
function always being less than or equal to the “min max.”) However, under
certain conditions, we will have
d∗ = p ∗ ,
so that we can solve the dual problem in lieu of the primal problem. Let's
see what these conditions are.
Suppose f and the gi ’s are convex,3 and the hi ’s are affine.4 Suppose
further that the constraints gi are (strictly) feasible; this means that there
exists some w so that gi (w) < 0 for all i.
³ When f has a Hessian, then it is convex if and only if the Hessian is positive semi-
definite. For instance, f(w) = w^T w is convex; similarly, all linear (and affine) functions
are also convex. (A function f can also be convex without being differentiable, but we
won’t need those more general definitions of convexity here.)
⁴ I.e., there exist a_i, b_i, so that h_i(w) = a_i^T w + b_i. “Affine” means the same thing as
linear, except that we also allow the extra intercept term b_i.
We have one such constraint for each training example. Note that from the
KKT dual complementarity condition, we will have αi > 0 only for the train-
ing examples that have functional margin exactly equal to one (i.e., the ones corresponding to constraints that hold with equality, g_i(w) = 0).
The points with the smallest margins are exactly the ones closest to the
decision boundary; here, these are the three points (one negative and two pos-
itive examples) that lie on the dashed lines parallel to the decision boundary.
Thus, only three of the αi ’s—namely, the ones corresponding to these three
training examples—will be non-zero at the optimal solution to our optimiza-
tion problem. These three points are called the support vectors in this
problem. The fact that the number of support vectors can be much smaller
than the size of the training set will be useful later.
Let's move on. Looking ahead, as we develop the dual form of the problem,
one key idea to watch out for is that we’ll try to write our algorithm in terms
of only the inner product ⟨x^{(i)}, x^{(j)}⟩ (think of this as (x^{(i)})^T x^{(j)}) between
points in the input feature space. The fact that we can express our algorithm
in terms of these inner products will be key when we apply the kernel trick.
When we construct the Lagrangian for our optimization problem we have:
L(w, b, α) = (1/2)||w||² − Σ_{i=1}^{m} α_i [ y^{(i)}(w^T x^{(i)} + b) − 1 ].    (8)
Note that there’re only “αi ” but no “βi ” Lagrange multipliers, since the
problem has only inequality constraints.
Let's find the dual form of the problem. To do so, we need to first minimize
L(w, b, α) with respect to w and b (for fixed α), to get θ_D, which we'll do by
setting the derivatives of L with respect to w and b to zero. Doing so gives

w = Σ_{i=1}^{m} α_i y^{(i)} x^{(i)}    (9)

Σ_{i=1}^{m} α_i y^{(i)} = 0.    (10)
If we take the definition of w in Equation (9) and plug that back into the
Lagrangian (Equation 8), and simplify, we get
L(w, b, α) = Σ_{i=1}^{m} α_i − (1/2) Σ_{i,j=1}^{m} y^{(i)} y^{(j)} α_i α_j (x^{(i)})^T x^{(j)} − b Σ_{i=1}^{m} α_i y^{(i)}.

But from Equation (10), the last term must be zero, so we obtain

L(w, b, α) = Σ_{i=1}^{m} α_i − (1/2) Σ_{i,j=1}^{m} y^{(i)} y^{(j)} α_i α_j (x^{(i)})^T x^{(j)}.
You should also be able to verify that the conditions required for p∗ =
d∗ and the KKT conditions (Equations 3–7) to hold are indeed satisfied in
our optimization problem. Hence, we can solve the dual in lieu of solving
the primal problem. Specifically, in the dual problem above, we have a
maximization problem in which the parameters are the αi ’s. We’ll talk later
about the specific algorithm that we’re going to use to solve the dual problem,
but if we are indeed able to solve it (i.e., find the α’s that maximize W (α)
subject to the constraints), then we can use Equation (9) to go back and find
the optimal w’s as a function of the α’s. Having found w ∗ , by considering
the primal problem, it is also straightforward to find the optimal value for
the intercept term b as
b* = − ( max_{i: y^{(i)}=−1} w*^T x^{(i)} + min_{i: y^{(i)}=1} w*^T x^{(i)} ) / 2.    (11)
(Check for yourself that this is correct.)
Before moving on, let's also take a more careful look at Equation (9), which
gives the optimal value of w in terms of (the optimal value of) α. Suppose
we’ve fit our model’s parameters to a training set, and now wish to make a
prediction at a new input point x. We would then calculate w^T x + b, and
predict y = 1 if and only if this quantity is bigger than zero. But using (9),
this quantity can also be written:
w^T x + b = ( Σ_{i=1}^{m} α_i y^{(i)} x^{(i)} )^T x + b    (12)
          = Σ_{i=1}^{m} α_i y^{(i)} ⟨x^{(i)}, x⟩ + b.    (13)
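A minimal sketch of evaluating this prediction rule from a fitted set of α's and b; only the support vectors (α_i > 0) contribute, and the inner product can later be swapped for any kernel. The function signature is an assumption for illustration.

import numpy as np

def svm_predict(x, X_train, y_train, alphas, b, kernel=np.dot):
    """Predict the label in {-1, 1} via sum_i alpha_i y(i) <x(i), x> + b."""
    value = b
    for x_i, y_i, a_i in zip(X_train, y_train, alphas):
        if a_i > 0:                       # only support vectors contribute
            value += a_i * y_i * kernel(x_i, x)
    return 1 if value > 0 else -1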
7 Kernels
Back in our discussion of linear regression, we had a problem in which the
input x was the living area of a house, and we considered performing regres-
Rather than applying SVMs using the original input attributes x, we may
instead want to learn using some features φ(x). To do so, we simply need to
go over our previous algorithm, and replace x everywhere in it with φ(x).
Since the algorithm can be written entirely in terms of the inner prod-
ucts ⟨x, z⟩, this means that we would replace all those inner products with
⟨φ(x), φ(z)⟩. Specifically, given a feature mapping φ, we define the corresponding
kernel to be

K(x, z) = \phi(x)^T \phi(z).

Then, everywhere we previously had ⟨x, z⟩ in our algorithm, we could simply replace it
with K(x, z), and our algorithm would now be learning using the features φ. Often,
K(x, z) is very inexpensive to calculate even when φ(x) itself is expensive to
represent (perhaps because it is an extremely high dimensional vector), so the
algorithm can work in the high dimensional feature space given by φ without ever
explicitly computing vectors φ(x).

Lets see an example. Suppose x, z ∈ Rn, and consider

K(x, z) = (x^T z)^2.

We can also write this as

K(x, z) = \left( \sum_{i=1}^n x_i z_i \right) \left( \sum_{j=1}^n x_j z_j \right) = \sum_{i=1}^n \sum_{j=1}^n x_i x_j z_i z_j = \sum_{i,j=1}^n (x_i x_j)(z_i z_j).
Thus, we see that K(x, z) = φ(x)T φ(z), where the feature mapping φ is given
(shown here for the case of n = 3) by
\phi(x) = \begin{bmatrix} x_1 x_1 \\ x_1 x_2 \\ x_1 x_3 \\ x_2 x_1 \\ x_2 x_2 \\ x_2 x_3 \\ x_3 x_1 \\ x_3 x_2 \\ x_3 x_3 \end{bmatrix}.
Note that whereas calculating the high-dimensional φ(x) requires O(n2 ) time,
finding K(x, z) takes only O(n) time—linear in the dimension of the input
attributes.
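To see the computational point concretely, here is a small sketch (the helper names
phi_quadratic and k_quadratic are our own) comparing the explicit O(n²) feature map
with the O(n) kernel evaluation; the two agree because φ(x)ᵀφ(z) = Σ_{i,j} x_i x_j z_i z_j = (xᵀz)²:

    import numpy as np

    def phi_quadratic(x):
        """Explicit feature map: all n^2 products x_i x_j (O(n^2) time and space)."""
        return np.outer(x, x).ravel()

    def k_quadratic(x, z):
        """The same inner product computed directly: K(x, z) = (x^T z)^2, in O(n) time."""
        return np.dot(x, z) ** 2

    x = np.array([1.0, 2.0, 3.0])
    z = np.array([0.5, -1.0, 2.0])
    assert np.isclose(phi_quadratic(x) @ phi_quadratic(z), k_quadratic(x, z))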
For a related kernel, also consider

K(x, z) = (x^T z + c)^2 = \sum_{i,j=1}^n (x_i x_j)(z_i z_j) + \sum_{i=1}^n (\sqrt{2c}\, x_i)(\sqrt{2c}\, z_i) + c^2.

(Check this yourself.) This corresponds to the feature mapping (again shown
for n = 3)
\phi(x) = \begin{bmatrix} x_1 x_1 \\ x_1 x_2 \\ x_1 x_3 \\ x_2 x_1 \\ x_2 x_2 \\ x_2 x_3 \\ x_3 x_1 \\ x_3 x_2 \\ x_3 x_3 \\ \sqrt{2c}\, x_1 \\ \sqrt{2c}\, x_2 \\ \sqrt{2c}\, x_3 \\ c \end{bmatrix},
and the parameter c controls the relative weighting between the xi (first
order) and the xi xj (second order) terms.
More broadly, the kernel K(x, z) = (xT z + c)d corresponds to a feature
mapping to a \binom{n+d}{d}-dimensional feature space, consisting of all monomials of the
form xi1 xi2 . . . xik that are up to order d. However, despite working in this
O(n^d)-dimensional space, computing K(x, z) still takes only O(n) time, and
hence we never need to explicitly represent feature vectors in this very high
dimensional feature space.
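A quick numerical illustration of this gap (our own toy example, not from the notes):
for modest n and d the implicit feature space is already enormous, yet evaluating the
kernel itself only ever touches the n input coordinates.

    import math
    import numpy as np

    def poly_kernel(x, z, c=1.0, d=5):
        """K(x, z) = (x^T z + c)^d, computed in O(n) time."""
        return (np.dot(x, z) + c) ** d

    n, d = 100, 5
    print(math.comb(n + d, d))   # dimension of the implicit feature space: 96560646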
Now, lets talk about a slightly different view of kernels. Intuitively, (and
there are things wrong with this intuition, but nevermind), if φ(x) and φ(z)
are close together, then we might expect K(x, z) = φ(x)T φ(z) to be large.
Conversely, if φ(x) and φ(z) are far apart—say nearly orthogonal to each
other—then K(x, z) = φ(x)T φ(z) will be small. So, we can think of K(x, z)
as some measurement of how similar are φ(x) and φ(z), or of how similar are
x and z.
Given this intuition, suppose that for some learning problem that you’re
working on, you’ve come up with some function K(x, z) that you think might
be a reasonable measure of how similar x and z are. For instance, perhaps
you chose
K(x, z) = \exp\left( -\frac{\|x - z\|^2}{2\sigma^2} \right).
This is a reasonable measure of x and z's similarity, and is close to 1 when
x and z are close, and near 0 when x and z are far apart. Can we use this
definition of K as the kernel in an SVM? In this particular example, the
answer is yes. (This kernel is called the Gaussian kernel, and corresponds
to an infinite dimensional feature mapping φ.) But more broadly, given some
function K, how can we tell if it’s a valid kernel; i.e., can we tell if there is
some feature mapping φ so that K(x, z) = φ(x)T φ(z) for all x, z?
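A minimal sketch of this kernel as a function (Python/NumPy; the bandwidth argument
sigma is our name for σ), showing the near-1 / near-0 behavior described above:

    import numpy as np

    def gaussian_kernel(x, z, sigma=1.0):
        """K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
        return np.exp(-np.linalg.norm(x - z) ** 2 / (2.0 * sigma ** 2))

    print(gaussian_kernel(np.array([0.0, 0.0]), np.array([0.1, 0.0])))   # close to 1
    print(gaussian_kernel(np.array([0.0, 0.0]), np.array([5.0, 5.0])))   # close to 0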
Suppose for now that K is indeed a valid kernel corresponding to some
feature mapping φ. Now, consider some finite set of m points (not necessarily
the training set) {x(1) , . . . , x(m) }, and let a square, m-by-m matrix K be
defined so that its (i, j)-entry is given by Kij = K(x(i) , x(j) ). This matrix
is called the Kernel matrix. Note that we’ve overloaded the notation and
used K to denote both the kernel function K(x, z) and the kernel matrix K,
due to their obvious close relationship.
Now, if K is a valid Kernel, then Kij = K(x(i) , x(j) ) = φ(x(i) )T φ(x(j) ) =
φ(x(j) )T φ(x(i) ) = K(x(j) , x(i) ) = Kji , and hence K must be symmetric. More-
over, letting φk (x) denote the k-th coordinate of the vector φ(x), we find that
for any vector z, we have
z^T K z = \sum_i \sum_j z_i K_{ij} z_j
        = \sum_i \sum_j z_i \phi(x^{(i)})^T \phi(x^{(j)}) z_j
        = \sum_i \sum_j z_i \sum_k \phi_k(x^{(i)}) \phi_k(x^{(j)}) z_j
        = \sum_k \sum_i \sum_j z_i \phi_k(x^{(i)}) \phi_k(x^{(j)}) z_j
        = \sum_k \left( \sum_i z_i \phi_k(x^{(i)}) \right)^2
        \geq 0.
The second-to-last step above used the same trick as you saw in Problem
set 1 Q1. Since z was arbitrary, this shows that K is positive semi-definite
(K ≥ 0).
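This property is easy to sanity-check numerically; the sketch below (our own helper
names) builds the kernel matrix for a handful of points and verifies that it is
symmetric with non-negative eigenvalues, up to round-off:

    import numpy as np

    def kernel_matrix(points, kernel):
        """K_ij = kernel(x(i), x(j)) for a list of points."""
        m = len(points)
        return np.array([[kernel(points[i], points[j]) for j in range(m)]
                         for i in range(m)])

    pts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 2.0])]
    K = kernel_matrix(pts, lambda x, z: (np.dot(x, z) + 1.0) ** 2)
    assert np.allclose(K, K.T)                        # symmetric
    assert np.all(np.linalg.eigvalsh(K) >= -1e-10)    # positive semi-definite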
Hence, we’ve shown that if K is a valid kernel (i.e., if it corresponds to
some feature mapping φ), then the corresponding Kernel matrix K ∈ Rm×m
is symmetric positive semidefinite. More generally, this turns out to be not
only a necessary, but also a sufficient, condition for K to be a valid kernel
(also called a Mercer kernel). The following result is due to Mercer.5
5 Many texts present Mercer's theorem in a slightly more complicated form involving
L² functions, but when the input attributes take values in Rn, the version given here is
equivalent.
Theorem (Mercer). Let K : Rn × Rn → R be given. Then for K to be a valid
(Mercer) kernel, it is necessary and sufficient that for any {x(1) , . . . , x(m) } (m < ∞),
the corresponding kernel matrix is symmetric positive semi-definite.

The idea of kernels applies more broadly than just to SVMs: many of the other
algorithms that we'll see later in this class will also be amenable to this
method, which has come to be known as the “kernel trick.”
8 Regularization and the non-separable case

The derivation of the SVM as presented so far assumed that the data is linearly
separable. To make the algorithm work for non-linearly separable datasets as well
(and to make it less sensitive to outliers), we reformulate our optimization using
ℓ1 regularization:

\min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^m \xi_i
\text{s.t.} \;\; y^{(i)}(w^T x^{(i)} + b) \geq 1 - \xi_i, \; i = 1, \ldots, m
\qquad \xi_i \geq 0, \; i = 1, \ldots, m.

Thus, examples are now permitted to have (functional) margin less than 1, and if an
example has functional margin 1 − ξi (with ξi > 0), we pay a cost of the objective
function being increased by Cξi. The parameter C controls the relative weighting
between the twin goals of making ||w||2 small (which we saw earlier makes the margin
large) and of ensuring that most examples have functional margin at least 1. As
before, forming the Lagrangian and deriving the dual, the only change turns out to be
that the constraints αi ≥ 0 become 0 ≤ αi ≤ C (the resulting dual problem is restated
in Equations 17–19 below).
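To make the role of C concrete, here is a small sketch (illustrative names only) that
evaluates this objective for a candidate (w, b); for fixed (w, b) the smallest feasible
slack is ξi = max(0, 1 − y(i)(wT x(i) + b)), which is what the code uses:

    import numpy as np

    def soft_margin_objective(w, b, X, y, C):
        """(1/2)||w||^2 + C * sum_i xi_i, with each slack at its smallest feasible value."""
        margins = y * (X @ w + b)               # functional margins y(i)(w^T x(i) + b)
        xi = np.maximum(0.0, 1.0 - margins)     # xi_i = max(0, 1 - margin_i)
        return 0.5 * np.dot(w, w) + C * np.sum(xi)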
Now, all that remains is to give an algorithm for actually solving the dual
problem, which we will do in the next section.
9 The SMO algorithm

The SMO (sequential minimal optimization) algorithm, due to John Platt, gives an
efficient way of solving the dual problem arising from the derivation
of the SVM. Partly to motivate the SMO algorithm, and partly because it's
interesting in its own right, lets first take another digression to talk about
the coordinate ascent algorithm.
9.1 Coordinate ascent

Consider trying to solve the unconstrained optimization problem

\max_\alpha \; W(\alpha_1, \alpha_2, \ldots, \alpha_m).
Here, we think of W as just some function of the parameters αi ’s, and for now
ignore any relationship between this problem and SVMs. We’ve already seen
two optimization algorithms, gradient ascent and Newton’s method. The
new algorithm we’re going to consider here is called coordinate ascent:
Loop until convergence: {

    For i = 1, . . . , m, {
        αi := arg maxα̂i W (α1 , . . . , αi−1 , α̂i , αi+1 , . . . , αm ).
    }
}
Thus, in the innermost loop of this algorithm, we will hold all the vari-
ables except for some αi fixed, and reoptimize W with respect to just the
parameter αi . In the version of this method presented here, the inner-loop
reoptimizes the variables in order α1 , α2 , . . . , αm , α1 , α2 , . . .. (A more sophis-
ticated version might choose other orderings; for instance, we may choose
the next variable to update according to which one we expect to allow us to
make the largest increase in W (α).)
When the function W happens to be of such a form that the “arg max”
in the inner loop can be performed efficiently, then coordinate ascent can be
a fairly efficient algorithm. Here’s a picture of coordinate ascent in action:
[Figure: contours of a quadratic function, together with the path taken by coordinate
ascent from its initialization to the global maximum.]
The ellipses in the figure are the contours of a quadratic function that
we want to optimize. Coordinate ascent was initialized at (2, −2), and also
plotted in the figure is the path that it took on its way to the global maximum.
Notice that on each step, coordinate ascent takes a step that’s parallel to one
of the axes, since only one variable is being optimized at a time.
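Here is a small self-contained sketch of the method on a toy problem of our own
choosing (it is not the quadratic from the figure): we maximize the concave quadratic
W(α1, α2) = −α1² − 2α2² + 2α1α2 + 2α2, whose maximum is at (1, 1), by alternating
exact one-dimensional updates, starting from (2, −2) as in the figure.

    def coordinate_ascent(num_iters=25):
        """Coordinate ascent on W(a1, a2) = -a1^2 - 2*a2^2 + 2*a1*a2 + 2*a2."""
        a1, a2 = 2.0, -2.0                # initialization
        for _ in range(num_iters):
            a1 = a2                       # argmax over a1 (a2 fixed): dW/da1 = -2*a1 + 2*a2 = 0
            a2 = (a1 + 1.0) / 2.0         # argmax over a2 (a1 fixed): dW/da2 = -4*a2 + 2*a1 + 2 = 0
        return a1, a2

    print(coordinate_ascent())            # approaches the global maximum (1.0, 1.0)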
9.2 SMO
We close off the discussion of SVMs by sketching the derivation of the SMO
algorithm. Some details will be left to the homework, and for others you
may refer to the paper excerpt handed out in class.
Here’s the (dual) optimization problem that we want to solve:
\max_\alpha \; W(\alpha) = \sum_{i=1}^m \alpha_i - \frac{1}{2} \sum_{i,j=1}^m y^{(i)} y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)} \rangle    (17)
\text{s.t.} \;\; 0 \leq \alpha_i \leq C, \; i = 1, \ldots, m    (18)
\qquad \sum_{i=1}^m \alpha_i y^{(i)} = 0.    (19)
Lets say we have a set of αi 's that satisfy the constraints (18-19). Now,
suppose we want to hold α2 , . . . , αm fixed, and take a coordinate ascent step
and reoptimize the objective with respect to α1 . Can we make any progress?
The answer is no, because the constraint (19) ensures that
\alpha_1 y^{(1)} = - \sum_{i=2}^m \alpha_i y^{(i)},

or, equivalently, multiplying both sides by y (1) ,

\alpha_1 = - y^{(1)} \sum_{i=2}^m \alpha_i y^{(i)}.
(This step used the fact that y (1) ∈ {−1, 1}, and hence (y (1) )2 = 1.) Hence,
α1 is exactly determined by the other αi ’s, and if we were to hold α2 , . . . , αm
fixed, then we can’t make any change to α1 without violating the con-
straint (19) in the optimization problem.
Thus, if we want to update some subset of the αi 's, we must update at
least two of them simultaneously in order to keep satisfying the constraints.
This motivates the SMO algorithm, which simply does the following:
Repeat till convergence {
1. Select some pair αi and αj to update next (using a heuristic that
tries to pick the two that will allow us to make the biggest progress
towards the global maximum).
2. Reoptimize W (α) with respect to αi and αj , while holding all the
other αk ’s (k 6= i, j) fixed.
}
To test for convergence of this algorithm, we can check whether the KKT
conditions (Equations 14-16) are satisfied to within some tol. Here, tol is
the convergence tolerance parameter, and is typically set to around 0.01 to
0.001. (See the paper and pseudocode for details.)
The key reason that SMO is an efficient algorithm is that the update to
αi , αj can be computed very efficiently. Lets now briefly sketch the main
ideas for deriving the efficient update.
Lets say we currently have some setting of the αi ’s that satisfy the con-
straints (18-19), and suppose we’ve decided to hold α3 , . . . , αm fixed, and
want to reoptimize W (α1 , α2 , . . . , αm ) with respect to α1 and α2 (subject to
the constraints). From (19), we require that
\alpha_1 y^{(1)} + \alpha_2 y^{(2)} = - \sum_{i=3}^m \alpha_i y^{(i)}.
Since the right hand side is fixed (as we’ve fixed α3 , . . . αm ), we can just let
it be denoted by some constant ζ:
α1 y (1) + α2 y (2) = ζ. (20)
We can thus picture the constraints on α1 and α2 as follows:
[Figure: the box [0, C] × [0, C] in the (α1 , α2 ) plane, together with the line
α1 y (1) + α2 y (2) = ζ; the feasible values of α2 on this line lie between a lower
bound L and an upper bound H.]
From the constraints (18), we know that α1 and α2 must lie within the box
[0, C] × [0, C] shown. Also plotted is the line α1 y (1) + α2 y (2) = ζ, on which we
know α1 and α2 must lie. Note also that, from these constraints, we know
L ≤ α2 ≤ H; otherwise, (α1 , α2 ) can’t simultaneously satisfy both the box
and the straight line constraint. In this example, L = 0. But depending on
what the line α1 y (1) + α2 y (2) = ζ looks like, this won’t always necessarily be
the case; but more generally, there will be some lower-bound L and some
upper-bound H on the permissible values for α2 that will ensure that α1 , α2
lie within the box [0, C] × [0, C].
Using Equation (20), we can also write α1 as a function of α2 :
α1 = (ζ − α2 y (2) )y (1) .
(Check this derivation yourself; we again used the fact that y (1) ∈ {−1, 1} so
that (y (1) )2 = 1.) Hence, the objective W (α) can be written

W(\alpha_1, \alpha_2, \ldots, \alpha_m) = W\big( (\zeta - \alpha_2 y^{(2)}) y^{(1)}, \alpha_2, \ldots, \alpha_m \big).

Treating α3 , . . . , αm as constants, you should be able to verify that this is just
some quadratic function in α2 ; i.e., it can also be expressed in the form
aα2² + bα2 + c for appropriate a, b, and c. If we ignore the box constraints (18)
(equivalently, that L ≤ α2 ≤ H), then we can easily maximize this quadratic by setting
its derivative to zero and solving; we'll let α2 new,unclipped denote the resulting
value. If we instead want to maximize W with respect to α2 subject to the box
constraint, we can simply take α2 new,unclipped and “clip” it to lie in the [L, H]
interval:

\alpha_2^{new} = \begin{cases} H & \text{if } \alpha_2^{new,unclipped} > H \\ \alpha_2^{new,unclipped} & \text{if } L \leq \alpha_2^{new,unclipped} \leq H \\ L & \text{if } \alpha_2^{new,unclipped} < L. \end{cases}
Finally, having found the α2new , we can use Equation (20) to go back and find
the optimal value of α1new .
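Putting the pieces of this subsection together, here is a sketch of a single SMO pair
update in Python/NumPy. The explicit formulas for the bounds L and H, the error terms
E, and the unclipped maximizer are taken from Platt's paper rather than derived in
these notes, so treat them as assumptions of this sketch; updating b (also covered in
Platt's paper) is omitted.

    import numpy as np

    def smo_pair_update(i, j, alpha, y, b, C, K):
        """Reoptimize (alpha_i, alpha_j) with all other alphas held fixed.
           K is the precomputed kernel (Gram) matrix K[p, q] = K(x(p), x(q))."""
        # Errors E_p = f(x(p)) - y(p), where f(x) = sum_l alpha_l y(l) K(x(l), x) + b.
        f = (alpha * y) @ K + b
        E_i, E_j = f[i] - y[i], f[j] - y[j]

        # Bounds keeping (alpha_i, alpha_j) in the box [0, C]^2 and on the line (20).
        if y[i] != y[j]:
            L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
        else:
            L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
        if L == H:
            return alpha                      # no room to move along the constraint line

        eta = K[i, i] + K[j, j] - 2.0 * K[i, j]
        if eta <= 0:
            return alpha                      # skip the degenerate case in this sketch

        new = alpha.copy()
        # Unclipped maximizer of the quadratic in alpha_j, then clip to [L, H].
        new[j] = np.clip(alpha[j] + y[j] * (E_i - E_j) / eta, L, H)
        # Recover alpha_i from Equation (20), so alpha_i y(i) + alpha_j y(j) stays equal to zeta.
        new[i] = alpha[i] + y[i] * y[j] * (alpha[j] - new[j])
        return new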
There’re a couple more details that are quite easy but that we’ll leave you
to read about yourself in Platt’s paper: One is the choice of the heuristics
used to select the next αi , αj to update; the other is how to update b as the
SMO algorithm is run.