
Machine Learning

Stanford, California
Contents

Acknowledgments viii

Part I Supervised Learning 1

1 Linear Regression 3
1.1 Least mean squares (LMS) algorithm 4
1.2 The normal equations 8
1.2.1 Matrix derivatives 9
1.2.2 Least squares revisited 9
1.3 Probabilistic interpretation 11
1.4 Locally weighted linear regression 13

2 Classification and Logistic Regression 16


2.1 Logistic regression 16
2.2 Digression: The perceptron learning algorithm 19
2.3 Another algorithm for maximizing ℓ(θ) 20

3 Generalized Linear Models 22


3.1 The exponential family 22

3.2 Constructing GLMs 24


3.2.1 Ordinary Least Squares 25
3.2.2 Logistic Regression 26
3.2.3 Softmax Regression 26

Part II Generative Learning Algorithms 31

4 Gaussian discriminant analysis 32


4.1 The Gaussian Discriminant Analysis model 34
4.2 Discussion: GDA and logistic regression 36

5 Naive Bayes 38
5.1 Laplace smoothing 41
5.2 Event models for text classification 43

Part III Kernel Methods 46

6 Kernel methods 46
6.1 Feature maps 46
6.2 LMS (least mean squares) with features 47
6.3 LMS with the kernel trick 47
6.4 Properties of kernels 51

Part IV Support Vector Machines 57

7 Support vector machines 57


7.1 Margins: Intuition 57


7.2 Notation 58
7.3 Functional and geometric margins 59
7.4 The optimal margin classifier 61
7.5 Lagrange duality (optional reading) 62
7.6 Optimal margin classifiers 65
7.7 Regularization and the non-separable case (optional reading) 69
7.8 The SMO algorithm (optional reading) 70
7.8.1 Coordinate ascent 71
7.9 SMO 71

Part V Deep Learning 75

8 Supervised Learning with Non-Linear Models 75

9 Neural Networks 78

10 Backpropagation 87
10.1 Preliminary: chain rule 88
10.2 Backpropagation for two-layer neural networks 88
10.2.1 Computing ∂J/∂W[2] 89
10.2.2 Computing ∂J/∂W[1] 89
10.2.3 Computing ∂J/∂z 90
10.2.4 Computing ∂J/∂a 91
10.2.5 Summary for two-layer neural networks 92
10.3 Multi-layer neural networks 92

11 Vectorization Over Training Examples 95


Part VI Regularization and Model Selection 98

12 Cross validation 98

13 Feature Selection 100

14 Bayesian statistics and regularization 103

15 Some calculations from bias variance 105

16 Bias-variance and error analysis 108


16.1 The bias-variance tradeoff 108
16.2 Error analysis 110
16.3 Ablative analysis 111
16.3.1 Analyze your mistakes 112

Part VII Unsupervised Learning 114

17 The k-means Clustering Algorithm 114

18 Mixtures of Gaussians and the EM Algorithm 115

Part VIII The EM Algorithm 119

19 Jensen’s inequality 119

20 The EM algorithm 120


20.1 Other interpretation of ELBO 126


21 Mixture of Gaussians revisited 126

22 Variational inference and variational auto-encoder 128

Part IX Factor Analysis 133

23 Restrictions of Σ 134

24 Marginals and conditionals of Gaussians 135

25 The factor analysis model 136

26 EM for factor analysis 138

Part X Principal Components Analysis 142

Part XI Independent Components Analysis 147

27 ICA ambiguities 148

28 Densities and linear transformations 149

29 ICA algorithm 150

Part XII Reinforcement Learning and Control 154

30 Markov decision processes 155

31 Value iteration and policy iteration 158


32 Learning a model for an MDP 160

33 Continuous state MDPs 162


33.1 Discretization 162
33.2 Value function approximation 163
33.2.1 Using a model or simulator 164
33.2.2 Fitted value iteration 165

34 Connections between Policy and Value Iteration (Optional) 169

35 Derivations for Bellman Equations 171

A Lagrange Multipliers 172

B Boosting 175
B.1 Boosting 175
B.1.1 The boosting algorithm 176
B.2 The convergence of Boosting 178
B.3 Implementing weak-learners 180
B.3.1 Decision stumps 180
B.3.2 Other strategies 181
B.4 Proof of lemma B.1 183

References 184



Acknowledgments

This work is taken from the lecture notes for the course Machine Learning at Stan-
ford University, CS 229 (cs229.stanford.edu). The contributors to the content
of this work are Andrew Ng, Christopher Ré, Moses Charikar, Tengyu Ma, Anand
Avati, Kian Katanforoosh, Yoann Le Calonnec, and John Duchi—this collection
is simply a typesetting of existing lecture notes with minor modifications. We
would like to thank the original authors for their contribution. In addition, we
wish to thank Mykel Kochenderfer and Tim Wheeler for their contribution to the
Tufte-Algorithms LaTeX template, based off of Algorithms for Optimization.[1]

[1] M. J. Kochenderfer and T. A. Wheeler, Algorithms for Optimization. MIT Press, 2019.

Robert J. Moss
Stanford, Calif.
May 23, 2021

Ancillary material is available on the template’s webpage:


https://github.com/sisl/textbook_template
Part I: Supervised Learning

(From CS229 Fall 2020, Tengyu Ma, Andrew Ng, Moses Charikar, & Christopher Ré, Stanford University.)

Let's start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon:

Table 1. Housing prices in Portland, OR.

    Living area (feet^2)    Price (1000$s)
    2104                    400
    1600                    330
    2400                    369
    1416                    232
    3000                    540
    ...                     ...

We can plot this data:

[Figure 1. Housing prices in Portland, OR: price (in $1000) versus living area (square feet).]

Given data like this, how can we learn to predict the prices of other houses in
Portland, as a function of the size of their living areas?
To establish notation for future use, we’ll use x (i) to denote the ‘‘input’’ variables
(living area in this example), also called input features, and y(i) to denote the
‘‘output’’ or target variable that we are trying to predict (price). A pair ( x (i) , y(i) )
is called a training example, and the dataset that we’ll be using to learn—a list
of n training examples {( x (i) , y(i) ); i = 1, . . . , n}—is called a training set. Note
that the superscript ‘‘(i )’’ in the notation is simply an index into the training set,
and has nothing to do with exponentiation. We will also use X to denote the space
of input values, and Y the space of output values. In this example, X = Y = R.
To describe the supervised learning problem slightly more formally, our goal
is, given a training set, to learn a function h : X 7→ Y so that h( x ) is a ‘‘good’’
predictor for the corresponding value of y. For historical reasons, this function h
is called a hypothesis. Seen pictorially, the process is therefore like this:

[Figure 2. Hypothesis diagram: a training set is fed to a learning algorithm, which produces a hypothesis h; given x (living area of house), h outputs a predicted y (predicted price of house).]

When the target variable that we're trying to predict is continuous, such as in our housing example, we call the learning problem a regression[2] problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.

[2] The term regression was originally coined due to ‘‘regressing’’ to the mean (Francis Galton, 1886).



1 Linear Regression

To make our housing example more interesting, let’s consider a slightly richer
dataset in which we also know the number of bedrooms in each house:

Table 1.1. Housing prices with bedrooms in Portland, OR.

    Living area (feet^2)    # Bedrooms    Price (1000$s)
    2104                    3             400
    1600                    3             330
    2400                    3             369
    1416                    2             232
    3000                    4             540
    ...                     ...           ...
Here, the x's are two-dimensional vectors in R^2. For instance, x_1^{(i)} is the living area of the i-th house in the training set, and x_2^{(i)} is its number of bedrooms.[1]

[1] In general, when designing a learning problem, it will be up to you to decide what features to choose, so if you are out in Portland gathering housing data, you might also decide to include other features such as whether each house has a fireplace, the number of bathrooms, and so on. We'll say more about feature selection later, but for now let's take the features as given.

To perform supervised learning, we must decide how we're going to represent functions/hypotheses h in a computer. As an initial choice, let's say we decide to approximate y as a linear function of x:

    h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_2    (1.1)

Here, the θ_i's are the parameters (also called weights) parameterizing the space of linear functions mapping from X to Y. When there is no risk of confusion,

we will drop the θ subscript in h_θ(x), and write it more simply as h(x). To simplify our notation, we also introduce the convention of letting x_0 = 1 (this is the intercept term), so that

    h(x) = ∑_{i=0}^{d} θ_i x_i = θ^T x,    (1.2)

where on the right-hand side above we are viewing θ and x both as vectors, and here d is the number of input variables (not counting x_0).

Now, given a training set, how do we pick, or learn, the parameters θ? One
reasonable method seems to be to make h( x ) close to y, at least for the training
examples we have. To formalize this, we will define a function that measures, for
each value of the θ’s, how close the h( x (i) )’s are to the corresponding y(i) ’s. We
define the cost function:

    J(θ) = (1/2) ∑_{i=1}^{n} (h_θ(x^{(i)}) − y^{(i)})^2.    (1.3)

If you’ve seen linear regression before, you may recognize this as the familiar
least-squares cost function that gives rise to the ordinary least squares regression
model. Whether or not you have seen it previously, let’s keep going, and we’ll
eventually show this to be a special case of a much broader family of algorithms.

1.1 Least mean squares (LMS) algorithm

We want to choose θ so as to minimize J (θ ). To do so, let’s use a search algorithm


that starts with some ‘‘initial guess’’ for θ, and that repeatedly changes θ to
make J (θ ) smaller, until hopefully we converge to a value of θ that minimizes
J (θ ). Specifically, let’s consider the gradient descent algorithm, which starts with
some initial θ, and repeatedly performs the update:[2]

    θ_j ← θ_j − α ∂/∂θ_j J(θ)    (1.4)

[2] This update is simultaneously performed for all values of j = 0, . . . , d.

Here, α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J.
In order to implement this algorithm, we have to work out the partial derivative term on the right-hand side. Let's first work it out for the case where we have only one training example (x, y), so that we can neglect the sum in the definition of J. We have:


    ∂/∂θ_j J(θ) = ∂/∂θ_j (1/2)(h_θ(x) − y)^2
                = 2 · (1/2) · (h_θ(x) − y) · ∂/∂θ_j (h_θ(x) − y)
                = (h_θ(x) − y) · ∂/∂θ_j (∑_{i=0}^{d} θ_i x_i − y)
                = (h_θ(x) − y) x_j

For a single training example, this gives the update rule:[3]

    θ_j ← θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}.    (1.5)

[3] We use the notation ‘‘a ← b’’ to denote an operation (in a computer program) in which we set the value of a variable a to be equal to the value of b (sometimes ‘‘:=’’ is used). In other words, this operation overwrites a with the value of b. In contrast, we will write ‘‘a = b’’ when we are asserting a statement of fact, that the value of a is equal to the value of b.

The rule is called the LMS update rule (LMS stands for ‘‘least mean squares’’), and is also known as the Widrow-Hoff learning rule. This rule has several properties that seem natural and intuitive. For instance, the magnitude of the update is proportional to the error term (y^{(i)} − h_θ(x^{(i)})); thus, for instance, if we are encountering a training example on which our prediction nearly matches the actual value of y^{(i)}, then we find that there is little need to change the parameters; in contrast, a larger change to the parameters will be made if our prediction h_θ(x^{(i)}) has a large error (i.e., if it is very far from y^{(i)}).
We've derived the LMS rule for when there was only a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Algorithm 1.1. Gradient descent.

    repeat
        for every j do
            θ_j ← θ_j + α ∑_{i=1}^{n} (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}
        end for
    until convergence

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite the update in algorithm 1.1 in a slightly more succinct way:

Algorithm 1.2. Gradient descent, vectorized.

    repeat
        θ ← θ + α ∑_{i=1}^{n} (y^{(i)} − h_θ(x^{(i)})) x^{(i)}
    until convergence

The reader can easily verify that the quantity in the summation in the update rule above is just ∂J(θ)/∂θ_j (for the original definition of J). So, this is simply gradient descent on the original cost function J. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optimum; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function.
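To make algorithm 1.2 concrete, here is a minimal NumPy sketch of batch gradient descent for linear regression. It is illustrative only and not part of the original notes; the learning rate and iteration count are arbitrary assumptions, and the design matrix is assumed to already include the intercept column of ones.

    import numpy as np

    def batch_gradient_descent(X, y, alpha=0.01, num_iters=1000):
        # X: n-by-(d+1) design matrix whose first column is all ones (x_0 = 1);
        # y: n-vector of targets. alpha and num_iters are illustrative choices.
        theta = np.zeros(X.shape[1])
        for _ in range(num_iters):
            predictions = X @ theta                           # h_theta(x^{(i)}) for every example
            theta = theta + alpha * X.T @ (y - predictions)   # the algorithm 1.2 update
        return theta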

Example 1.1. Gradient descent on a quadratic function.

Here is an example of gradient descent as it is run to minimize a quadratic function.

[Figure: contours of a quadratic function, with the trajectory taken by gradient descent overlaid.]

The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48,30). The arrows in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through.


Example 1.2. Best fit line using batch gradient descent on Portland, Oregon housing prices.

When we run batch gradient descent to fit θ on our previous dataset, to learn to predict housing price as a function of living area, we obtain:

    θ_0 = 71.27 (intercept)
    θ_1 = 0.1345 (slope)

If we plot h_θ(x) as a function of x (area), along with the training data, we obtain the following figure:

[Figure: the training data (price in $1000 versus square feet) with the fitted line h_θ(x) overlaid.]

If the number of bedrooms were included as one of the input features as well, we get θ_0 = 89.60, θ_1 = 0.1392, θ_2 = −8.738.


The results in example 1.2 were obtained with batch gradient descent. There is
an alternative to batch gradient descent that also works very well. Consider the
following algorithm:

Algorithm 1.3. Stochastic gradient descent.

    repeat
        for i = 1 to n do
            for every j do
                θ_j ← θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}
            end for
        end for
    until convergence

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite the update in algorithm 1.3 in a slightly more succinct way:

    θ ← θ + α (y^{(i)} − h_θ(x^{(i)})) x^{(i)}    (1.6)
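(A minimal NumPy sketch of this per-example update appears below; it is an illustration rather than code from the notes, and the learning rate and epoch count are arbitrary assumptions.)

    import numpy as np

    def stochastic_gradient_descent(X, y, alpha=0.01, num_epochs=10):
        # X: n-by-(d+1) design matrix (first column all ones), y: n targets.
        n = X.shape[0]
        theta = np.zeros(X.shape[1])
        for _ in range(num_epochs):
            for i in range(n):
                error = y[i] - X[i] @ theta           # y^{(i)} - h_theta(x^{(i)})
                theta = theta + alpha * error * X[i]  # the update in equation (1.6)
        return theta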
In this algorithm, we repeatedly run through the training set, and each time we encounter a training example, we update the parameters according to the gradient of the error with respect to that single training example only. This algorithm is called stochastic gradient descent (also incremental gradient descent). Whereas batch gradient descent has to scan through the entire training set before taking a single step—a costly operation if n is large—stochastic gradient descent can start making progress right away, and continues to make progress with each example it looks at. Often, stochastic gradient descent gets θ ‘‘close’’ to the minimum much faster than batch gradient descent.[4] For these reasons, particularly when the training set is large, stochastic gradient descent is often preferred over batch gradient descent.

[4] Note, however, that it may never ‘‘converge’’ to the minimum, and the parameters θ will keep oscillating around the minimum of J(θ); but in practice most of the values near the minimum will be reasonably good approximations to the true minimum. By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also possible to ensure that the parameters will converge to the global minimum rather than merely oscillate around the minimum.

1.2 The normal equations

Gradient descent gives one way of minimizing J. Let's discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In this method, we will minimize J by explicitly taking its derivatives with respect to the θ_j's, and setting them to zero. To enable us to


do this without having to write reams of algebra and pages full of matrices of
derivatives, let’s introduce some notation for doing calculus with matrices.

1.2.1 Matrix derivatives


For a function f : R^{n×d} → R mapping from n-by-d matrices to the real numbers, we define the derivative of f with respect to A to be:

    ∇_A f(A) = [ ∂f/∂A_11  ···  ∂f/∂A_1d
                    ⋮       ⋱      ⋮
                 ∂f/∂A_n1  ···  ∂f/∂A_nd ]    (1.7)

Thus, the gradient ∇ A f ( A) is itself an n-by-d matrix, whose (i, j)-element is


∂ f /∂Aij .

Example 1.3. Matrix derivative.

For example, suppose A = [A_11, A_12; A_21, A_22] is a 2-by-2 matrix, and the function f : R^{2×2} → R is given by

    f(A) = (3/2) A_11 + 5 A_12^2 + A_21 A_22.

Here, A_ij denotes the (i, j) entry of the matrix A. We then have:

    ∇_A f(A) = [ 3/2      10 A_12
                 A_22     A_21    ]
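As a quick numerical sanity check of example 1.3 (an illustration, not part of the notes), one can compare the analytic gradient against central finite differences:

    import numpy as np

    def f(A):
        # f(A) = (3/2) A_11 + 5 A_12^2 + A_21 A_22, as in example 1.3
        return 1.5 * A[0, 0] + 5.0 * A[0, 1] ** 2 + A[1, 0] * A[1, 1]

    def grad_f(A):
        # the gradient computed in example 1.3
        return np.array([[1.5, 10.0 * A[0, 1]],
                         [A[1, 1], A[1, 0]]])

    A = np.random.randn(2, 2)
    eps = 1e-6
    numeric = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2))
            E[i, j] = eps
            numeric[i, j] = (f(A + E) - f(A - E)) / (2 * eps)  # central difference
    print(np.allclose(numeric, grad_f(A)))  # expected: True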

1.2.2 Least squares revisited


Armed with the tools of matrix derivatives, let us now proceed to find in closed-
form the value of θ that minimizes J (θ ). We begin by re-writing J in matrix-vector
notation.


Given a training set, define the design matrix X to be the n-by-d matrix (actually n-by-(d + 1), if we include the intercept term) that contains the training examples' input values in its rows:

    X = [ (x^{(1)})^T
          (x^{(2)})^T
              ⋮
          (x^{(n)})^T ]    (1.8)

Also, let y be the n-dimensional vector containing all the target values from the training set:

    y = [ y^{(1)}
          y^{(2)}
            ⋮
          y^{(n)} ]    (1.9)

Now, since h_θ(x^{(i)}) = (x^{(i)})^T θ, we can easily verify that

    Xθ − y = [ (x^{(1)})^T θ, . . . , (x^{(n)})^T θ ]^T − [ y^{(1)}, . . . , y^{(n)} ]^T
           = [ h_θ(x^{(1)}) − y^{(1)}, . . . , h_θ(x^{(n)}) − y^{(n)} ]^T

Thus, using the fact that for a vector z, we have that z^T z = ∑_i z_i^2:

    (1/2)(Xθ − y)^T (Xθ − y) = (1/2) ∑_{i=1}^{n} (h_θ(x^{(i)}) − y^{(i)})^2
                             = J(θ)


Finally, to minimize J, let's find its derivative with respect to θ. Hence:

    ∇_θ J(θ) = ∇_θ (1/2)(Xθ − y)^T (Xθ − y)
             = (1/2) ∇_θ ((Xθ)^T Xθ − (Xθ)^T y − y^T (Xθ) + y^T y)
             = (1/2) ∇_θ (θ^T (X^T X)θ − y^T (Xθ) − y^T (Xθ))        (a^T b = b^T a)
             = (1/2) ∇_θ (θ^T (X^T X)θ − 2(X^T y)^T θ)
             = (1/2) (2 X^T Xθ − 2 X^T y)        (∇_x b^T x = b and ∇_x x^T Ax = 2Ax for sym. A)
             = X^T Xθ − X^T y
To minimize J, we set its derivatives to zero, and obtain the normal equations:

    X^T X θ = X^T y    (1.10)

Thus, the value of θ that minimizes J(θ) is given in closed form by the equation:[5]

    θ = (X^T X)^{−1} X^T y    (1.11)

[5] Note that in this step, we are implicitly assuming that X^T X is an invertible matrix. This can be checked before calculating the inverse. If either the number of linearly independent examples is fewer than the number of features, or if the features are not linearly independent, then X^T X will not be invertible. Even in such cases, it is possible to ‘‘fix’’ the situation with additional techniques, which we skip here for the sake of simplicity.
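Equation (1.11) translates directly into a few lines of NumPy. The sketch below is illustrative rather than from the notes; it solves the linear system (1.10) instead of explicitly forming the inverse, which is the numerically preferable route.

    import numpy as np

    def normal_equations(X, y):
        # Solve X^T X theta = X^T y (equation 1.10); assumes X^T X is invertible,
        # as discussed in footnote [5] above.
        return np.linalg.solve(X.T @ X, X.T @ y)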
1.3 Probabilistic interpretation

When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function J, be a reasonable choice? In this section, we will give a set of probabilistic assumptions, under which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the
equation
y (i ) = θ > x (i ) + e (i ) , (1.12)
where e(i) is an error term that captures either unmodeled effects (such as if
there are some features very pertinent to predicting housing price, but that we’d
left out of the regression), or random noise. Let us further assume that the e(i)
are distributed IID (independently and identically distributed) according to a
Gaussian distribution (also called a Normal distribution) with mean zero and
some variance σ^2. We can write this assumption as e^{(i)} ∼ N(0, σ^2), i.e. the density of e^{(i)} is given by

    p(e^{(i)}) = (1/(√(2π) σ)) exp(−(e^{(i)})^2 / (2σ^2)).    (1.13)


This implies that

    p(y^{(i)} | x^{(i)}; θ) = (1/(√(2π) σ)) exp(−(y^{(i)} − θ^T x^{(i)})^2 / (2σ^2)).    (1.14)

The notation ‘‘p(y(i) | x (i) ; θ )’’ indicates that this is the distribution of y(i) given
x (i) and parameterized by θ. Note that we should not condition on θ (i.e. ‘‘p(y(i) |
x (i) , θ )’’), since θ is not a random variable. We can also write the distribution of
y(i) as (y(i) | x (i) ; θ ) ∼ N (θ > x (i) , σ2 ).
Given X (the design matrix, which contains all the x (i) ’s) and θ, what is the
distribution of the y(i) ’s? The probability of the data is given by p(y | X; θ ). This
quantity is typically viewed as a function of y (and perhaps X), for a fixed value of
θ. When we wish to explicitly view this as a function of θ, we will instead call it
the likelihood function:

L(θ ) = L(θ; X, y) = p(y | X; θ ) (1.15)

Note that by the independence assumption on the e^{(i)}'s (and hence also the y^{(i)}'s given the x^{(i)}'s), this can also be written as

    L(θ) = ∏_{i=1}^{n} p(y^{(i)} | x^{(i)}; θ)    (1.16)
         = ∏_{i=1}^{n} (1/(√(2π) σ)) exp(−(y^{(i)} − θ^T x^{(i)})^2 / (2σ^2)).    (1.17)

Now, given this probabilistic model relating the y(i) ’s and the x (i) ’s, what is a
reasonable way of choosing our best guess of the parameters θ? The principle
of maximum likelihood says that we should choose θ so as to make the data as
high probability as possible—i.e. we should choose θ to maximize L(θ ).
Instead of maximizing L(θ ), we can also maximize any strictly increasing
function of L(θ ). In particular, the derivations will be a bit simpler if we instead


maximize the log likelihood ℓ(θ):

    ℓ(θ) = log L(θ)
         = log ∏_{i=1}^{n} [ (1/(√(2π) σ)) exp(−(y^{(i)} − θ^T x^{(i)})^2 / (2σ^2)) ]
         = ∑_{i=1}^{n} log [ (1/(√(2π) σ)) exp(−(y^{(i)} − θ^T x^{(i)})^2 / (2σ^2)) ]
         = n log (1/(√(2π) σ)) − (1/σ^2) · (1/2) ∑_{i=1}^{n} (y^{(i)} − θ^T x^{(i)})^2

Hence, maximizing ℓ(θ) gives the same answer as minimizing

    (1/2) ∑_{i=1}^{n} (y^{(i)} − θ^T x^{(i)})^2,

which we recognize to be J(θ), our original least-squares cost function.

To summarize. Under the previous probabilistic assumptions on the data, least-squares regression corresponds to finding the maximum likelihood estimate of θ. This is thus one set of assumptions under which least-squares regression can be justified as a very natural method that's just doing maximum likelihood estimation.[6]

[6] Note however that the probabilistic assumptions are by no means necessary for least-squares to be a perfectly good and rational procedure, and there may—and indeed there are—other natural assumptions that can also be used to justify it.

Note also that, in our previous discussion, our final choice of θ did not depend on what was σ^2, and indeed we'd have arrived at the same result even if σ^2 were unknown. We will use this fact again later, when we talk about the exponential family and generalized linear models.

1.4 Locally weighted linear regression

Consider the problem of predicting y from x ∈ R. The leftmost figure below shows the result of fitting y = θ_0 + θ_1 x to a dataset. We see that the data doesn't really lie on a straight line, and so the fit is not very good.

[Figure 1.1. Polynomial regression with different k-order fits: 1st-order, 2nd-order, and 5th-order polynomial fits of y versus x on the same dataset.]

Instead, if we had added an extra feature x^2, and fit y = θ_0 + θ_1 x + θ_2 x^2, then we obtain a slightly better fit to the data (see the middle figure). Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: The rightmost figure is the result of fitting a 5-th order polynomial y = ∑_{j=0}^{5} θ_j x^j. We see that even though the fitted curve passes
through the data perfectly, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). Without formally defining what these terms mean, we'll say the figure on the left shows an instance of underfitting—in which the data clearly shows structure not captured by the model—and the figure on the right is an example of overfitting.[7]

[7] Later in this class, when we talk about learning theory we'll formalize some of these notions, and also define more carefully just what it means for a hypothesis to be good or bad.

As discussed previously, and as shown in figure 1.1, the choice of features is important to ensuring good performance of a learning algorithm. (When we talk about model selection, we'll also see algorithms for automatically choosing a good set of features.) In this section, let us briefly talk about the locally weighted linear regression (LWR) algorithm which, assuming there is sufficient training data, makes the choice of features less critical. This treatment will be brief, since you'll get a chance to explore some of the properties of the LWR algorithm yourself in the homework.
In the original linear regression algorithm, to make a prediction at a query point x (i.e. to evaluate h(x)), we would:

1. Fit θ to minimize ∑_i (y^{(i)} − θ^T x^{(i)})^2.
2. Output θ^T x.

In contrast, the locally weighted linear regression algorithm does the following:

1. Fit θ to minimize ∑_i w^{(i)} (y^{(i)} − θ^T x^{(i)})^2.
2. Output θ^T x.


Here, the w^{(i)}'s are non-negative valued weights. Intuitively, if w^{(i)} is large for a particular value of i, then in picking θ we'll try hard to make (y^{(i)} − θ^T x^{(i)})^2 small. If w^{(i)} is small, then the (y^{(i)} − θ^T x^{(i)})^2 error term will be pretty much ignored in the fit.
A fairly standard choice for the weights is:[8]

    w^{(i)} = exp(−(x^{(i)} − x)^2 / (2τ^2))    (1.18)

[8] If x is vector-valued, the weights w^{(i)} can be generalized to exp(−(x^{(i)} − x)^T (x^{(i)} − x) / (2τ^2)) or exp(−(x^{(i)} − x)^T Σ^{−1} (x^{(i)} − x) / (2τ^2)) for appropriate choices of τ or Σ.

Note that the weights depend on the particular point x at which we're trying to evaluate h(x). Moreover, if |x^{(i)} − x| is small, then w^{(i)} is close to 1; and if |x^{(i)} − x| is large, then w^{(i)} is small. Hence, θ is chosen giving a much higher ‘‘weight’’ to the (errors on) training examples close to the query point x.[9] The parameter τ controls how quickly the weight of a training example falls off with distance of its x^{(i)} from the query point x; τ is called the bandwidth parameter, and is also something that you'll get to experiment with in your homework.

[9] Note also that while the formula for the weights takes a form that is cosmetically similar to the density of a Gaussian distribution, the w^{(i)}'s do not directly have anything to do with Gaussians, and in particular the w^{(i)} are not random variables, normally distributed or otherwise.

Locally weighted linear regression is the first example we're seeing of a non-parametric algorithm. The (unweighted) linear regression algorithm that we saw earlier is known as a parametric learning algorithm, because it has a fixed, finite number of parameters (the θ_i's), which are fit to the data. Once we've fit the θ_i's and stored them away, we no longer need to keep the training data around to make future predictions. In contrast, to make predictions using locally weighted linear regression, we need to keep the entire training set around. The term ‘‘non-parametric’’ (roughly) refers to the fact that the amount of stuff we need to keep in order to represent the hypothesis h grows linearly with the size of the training set.
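As a rough illustration of how the pieces fit together (not code from the notes), here is a minimal NumPy sketch of locally weighted linear regression at a single query point. The bandwidth value is an arbitrary assumption, and the weighted fit uses the standard closed-form weighted least-squares solution, which this section does not spell out.

    import numpy as np

    def lwr_predict(X, y, x_query, tau=1.0):
        # X: n-by-(d+1) design matrix (first column all ones), y: n targets,
        # x_query: (d+1)-vector for the query point, tau: bandwidth parameter.
        # Weights from equation (1.18); the intercept coordinate contributes
        # zero to the distance, since it equals 1 for every example.
        diffs = X - x_query
        w = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * tau ** 2))
        W = np.diag(w)
        # Fit theta to minimize sum_i w^{(i)} (y^{(i)} - theta^T x^{(i)})^2;
        # this weighted least-squares problem has the closed form
        # theta = (X^T W X)^{-1} X^T W y.
        theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        return x_query @ theta  # output theta^T x at the query point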



2 Classification and Logistic Regression

Let’s now talk about the classification problem. This is just like the regression
problem, except that the values y we now want to predict take on only a small
number of discrete values. For now, we will focus on the binary classification
problem in which y can take on only two values, 0 and 1. (Most of what we say
here will also generalize to the multiple-class case.) For instance, if we are trying
to build a spam classifier for email, then x (i) may be some features of a piece of
email, and y may be 1 if it is a piece of spam mail, and 0 otherwise. The class 0 is
also called the negative class, and 1 the positive class, and they are sometimes
also denoted by the symbols ‘‘−’’ and ‘‘+’’. Given x (i) , the corresponding y(i) is
also called the label for the training example.

2.1 Logistic regression

We could approach the classification problem ignoring the fact that y is discrete-
valued, and use our old linear regression algorithm to try to predict y given x.
However, it is easy to construct examples where this method performs very poorly.
Intuitively, it also doesn’t make sense for hθ ( x ) to take values larger than 1 or
smaller than 0 when we know that y ∈ {0, 1}.
To fix this, let’s change the form for our hypotheses hθ ( x ). We will choose

    h_θ(x) = g(θ^T x) = 1 / (1 + e^{−θ^T x})

where

    g(z) = 1 / (1 + e^{−z})

is called the logistic function or the sigmoid function. Here is a plot showing g(z):

[Figure 2.1. The sigmoid (logistic) function g(z) = 1/(1 + e^{−z}), plotted for z from −6 to 6.]

Notice that g(z) tends towards 1 as z → ∞, and g(z) tends towards 0 as z → −∞. Moreover, g(z), and hence also h(x), is always bounded between 0 and 1. As before, we are keeping the convention of letting x_0 = 1, so that θ^T x = θ_0 + ∑_{j=1}^{d} θ_j x_j.

For now, let’s take the choice of g as given. Other functions that smoothly
increase from 0 to 1 can also be used, but for a couple of reasons that we’ll see
later (when we talk about GLMs, and when we talk about generative learning
algorithms), the choice of the logistic function is a fairly natural one. Before
moving on, here’s a useful property of the derivative of the sigmoid function,
which we write as g′:

    g′(z) = d/dz [1 / (1 + e^{−z})]    (2.1)
          = (1 / (1 + e^{−z})^2) (e^{−z})    (2.2)
          = (1 / (1 + e^{−z})) · (1 − 1 / (1 + e^{−z}))    (2.3)
          = g(z)(1 − g(z))    (2.4)

So, given the logistic regression model, how do we fit θ for it? Following how
we saw least squares regression could be derived as the maximum likelihood
estimator under a set of assumptions, let’s endow our classification model with
a set of probabilistic assumptions, and then fit the parameters via maximum
likelihood.


Let us assume that

P(y = 1 | x; θ ) = hθ ( x )
P(y = 0 | x; θ ) = 1 − hθ ( x )

Note that this can be written more compactly as

    p(y | x; θ) = (h_θ(x))^y (1 − h_θ(x))^{1−y}    (2.5)

Assuming that the n training examples were generated independently, we can


then write down the likelihood of the parameters as

    L(θ) = p(y | X; θ)    (2.6)
         = ∏_{i=1}^{n} p(y^{(i)} | x^{(i)}; θ)    (2.7)
         = ∏_{i=1}^{n} (h_θ(x^{(i)}))^{y^{(i)}} (1 − h_θ(x^{(i)}))^{1−y^{(i)}}    (2.8)

As before, it will be easier to maximize the log likelihood:

    ℓ(θ) = log L(θ)    (2.9)
         = ∑_{i=1}^{n} y^{(i)} log h(x^{(i)}) + (1 − y^{(i)}) log(1 − h(x^{(i)}))    (2.10)

How do we maximize the likelihood? Similar to our derivation in the case of


linear regression, we can use gradient ascent. Written in vectorial notation, our
updates will therefore be given by θ := θ + α∇_θ ℓ(θ). (Note the positive rather
than negative sign in the update formula, since we’re maximizing, rather than
minimizing, a function now.) Let’s start by working with just one training example
( x, y), and take derivatives to derive the stochastic gradient ascent rule:
 
    ∂/∂θ_j ℓ(θ) = (y · 1/g(θ^T x) − (1 − y) · 1/(1 − g(θ^T x))) · ∂/∂θ_j g(θ^T x)    (2.11)
                = (y · 1/g(θ^T x) − (1 − y) · 1/(1 − g(θ^T x))) · g(θ^T x)(1 − g(θ^T x)) · ∂/∂θ_j θ^T x    (2.12)
                = (y(1 − g(θ^T x)) − (1 − y) g(θ^T x)) x_j    (2.13)
                = (y − h_θ(x)) x_j    (2.14)


Above, we used the fact that g′(z) = g(z)(1 − g(z)). This therefore gives us the stochastic gradient ascent rule

    θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}    (2.15)

If we compare this to the LMS update rule, we see that it looks identical; but
this is not the same algorithm, because hθ ( x (i) ) is now defined as a non-linear
function of θ > x (i) . Nonetheless, it’s a little surprising that we end up with the
same update rule for a rather different algorithm and learning problem. Is this
coincidence, or is there a deeper reason behind this? We’ll answer this when we
get to GLM models.
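For concreteness, here is a minimal NumPy sketch (not part of the notes) of batch gradient ascent on ℓ(θ) for logistic regression, obtained by summing the per-example update (2.15) over the training set; the learning rate and iteration count are arbitrary assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_logistic_regression(X, y, alpha=0.1, num_iters=1000):
        # X: n-by-(d+1) design matrix (first column all ones), y: labels in {0, 1}.
        theta = np.zeros(X.shape[1])
        for _ in range(num_iters):
            h = sigmoid(X @ theta)                 # h_theta(x^{(i)}) for all i
            theta = theta + alpha * X.T @ (y - h)  # theta := theta + alpha * grad l(theta)
        return theta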

2.2 Digression: The perceptron learning algorithm

We now digress to talk briefly about an algorithm that’s of some historical interest,
and that we will also return to later when we talk about learning theory. Consider
modifying the logistic regression method to ‘‘force’’ it to output values that are
either 0 or 1 exactly. To do so, it seems natural to change the definition of g to be the threshold function:

    g(z) = { 1 if z ≥ 0
             0 if z < 0 }    (2.16)

If we then let hθ ( x ) = g(θ > x ) as before but using this modified definition of g,
and if we use the update rule
 
    θ_j := θ_j + α (y^{(i)} − h_θ(x^{(i)})) x_j^{(i)}    (2.17)

then we have the perceptron learning algorithm.


In the 1960s, this ‘‘perceptron’’ was argued to be a rough model for how
individual neurons in the brain work. Given how simple the algorithm is, it
will also provide a starting point for our analysis when we talk about learning
theory later in this class. Note however that even though the perceptron may
be cosmetically similar to the other algorithms we talked about, it is actually a
very different type of algorithm than logistic regression and least squares linear
regression; in particular, it is difficult to endow the perceptron’s predic- tions with
meaningful probabilistic interpretations, or derive the perceptron as a maximum
likelihood estimation algorithm.


2.3 Another algorithm for maximizing ℓ(θ)

Returning to logistic regression with g(z) being the sigmoid function, let's now talk about a different algorithm for maximizing ℓ(θ).
To get us started, let’s consider Newton’s method for finding a zero of a function.
Specifically, suppose we have some function f : R 7→ R, and we wish to find
a value of θ so that f (θ ) = 0. Here, θ ∈ R is a real number. Newton’s method
performs the following update:

    θ := θ − f(θ)/f′(θ)    (2.18)

This method has a natural interpretation in which we can think of it as approxi-


mating the function f via a linear function that is tangent to f at the current guess
θ, solving for where that linear function equals to zero, and letting the next guess
for θ be where that linear function is zero.
Here's a picture of Newton's method in action:

[Figure 2.2. Newton's method for two steps: the function f(x) plotted in three panels, showing successive tangent-line approximations and their zero crossings.]
In the leftmost figure, we see the function f plotted along with the line y = 0. We're trying to find θ so that f(θ) = 0; the value of θ that achieves this is about 1.3. Suppose we initialized the algorithm with θ = 4.5. Newton's method then fits a straight line tangent to f at θ = 4.5, and solves for where that line evaluates to 0 (middle figure). This gives us the next guess for θ, which is about 2.8. The rightmost figure shows the result of running one more iteration, which updates θ to about 1.8. After a few more iterations, we rapidly approach θ = 1.3.


Newton's method gives a way of getting to f(θ) = 0. What if we want to use it to maximize some function ℓ? The maxima of ℓ correspond to points where its first derivative ℓ′(θ) is zero. So, by letting f(θ) = ℓ′(θ), we can use the same algorithm to maximize ℓ, and we obtain the update rule:

    θ := θ − ℓ′(θ)/ℓ″(θ).    (2.19)

(Something to think about: How would this change if we wanted to use Newton's method to minimize rather than maximize a function?)
Lastly, in our logistic regression setting, θ is vector-valued, so we need to generalize Newton's method to this setting. The generalization of Newton's method to this multidimensional setting (also called the Newton-Raphson method) is given by:

    θ := θ − H^{−1} ∇_θ ℓ(θ).    (2.20)
Here, ∇_θ ℓ(θ) is, as usual, the vector of partial derivatives of ℓ(θ) with respect to the θ_i's; and H is a d-by-d matrix (actually, (d+1)-by-(d+1), assuming that we include the intercept term) called the Hessian, whose entries are given by

    H_ij = ∂^2 ℓ(θ) / (∂θ_i ∂θ_j).    (2.21)

Newton's method typically enjoys faster convergence than (batch) gradient descent, and requires many fewer iterations to get very close to the minimum. One iteration of Newton's method can, however, be more expensive than one iteration of gradient descent, since it requires finding and inverting a d-by-d Hessian; but so long as d is not too large, it is usually much faster overall. When Newton's method is applied to maximize the logistic regression log likelihood function ℓ(θ), the resulting method is also called Fisher scoring.
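To make the Newton-Raphson update (2.20) concrete, here is a minimal NumPy sketch (an illustration, not the notes' code) applied to the logistic regression log likelihood. It uses the standard expressions ∇_θ ℓ(θ) = X^T(y − h) and H = −X^T diag(h(1 − h)) X, which are not derived in this section.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def newton_logistic_regression(X, y, num_iters=10):
        # X: n-by-(d+1) design matrix (first column all ones), y: labels in {0, 1}.
        theta = np.zeros(X.shape[1])
        for _ in range(num_iters):
            h = sigmoid(X @ theta)
            grad = X.T @ (y - h)                      # gradient of l(theta)
            H = -X.T @ np.diag(h * (1.0 - h)) @ X     # Hessian of l(theta)
            theta = theta - np.linalg.solve(H, grad)  # theta := theta - H^{-1} grad
        return theta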



3 Generalized Linear Models

(The presentation of the material in this section takes inspiration from Michael I. Jordan, Learning in graphical models (unpublished book draft), and also McCullagh and Nelder, Generalized Linear Models (2nd ed.).)

So far, we've seen a regression example, and a classification example. In the regression example, we had y | x; θ ∼ N(µ, σ^2), and in the classification one, y | x; θ ∼ Bernoulli(φ), for some appropriate definitions of µ and φ as functions of x and θ. In this section, we will show that both of these methods are special cases of a broader family of models, called Generalized Linear Models (GLMs). We will also show how other models in the GLM family can be derived and applied to other classification and regression problems.

3.1 The exponential family

To work our way up to GLMs, we will begin by defining exponential family


distributions. We say that a class of distributions is in the exponential family if it
can be written in the form:

p(y; η ) = b(y) exp(η > T (y) − a(η ))

Here, η is called the natural parameter (also called the canonical parameter) of
the distribution; T (y) is the sufficient statistic (for the distributions we consider,
it will often be the case that T (y) = y); and a(η ) is the log partition function.
The quantity e^{−a(η)} essentially plays the role of a normalization constant, that makes sure the distribution p(y; η) sums/integrates over y to 1. (Margin note: MLE with respect to η is concave, i.e., the negative log-likelihood is convex.)
A fixed choice of T, a and b defines a family (or set) of distributions that is
parameterized by η; as we vary η, we then get different distributions within this
family.
We now show that the Bernoulli and the Gaussian distributions are examples of
exponential family distributions. The Bernoulli distribution with mean φ, written
Bernoulli(φ), specifies a distribution over y ∈ {0, 1}, so that p(y = 1; φ) =

φ; p(y = 0; φ) = 1 − φ. As we vary φ, we obtain Bernoulli distributions with


different means. We now show that this class of Bernoulli distributions, ones
obtained by varying φ, is in the exponential family; i.e., that there is a choice of T,
a and b so that 3.1 becomes exactly the class of Bernoulli distributions.
We write the Bernoulli distribution as:

    p(y; φ) = φ^y (1 − φ)^{1−y}    (3.1)
            = exp(y log φ + (1 − y) log(1 − φ))    (3.2)
            = exp(y log(φ/(1 − φ)) + log(1 − φ)).    (3.3)

Thus, the natural parameter is given by η = log(φ/(1 − φ)). Interestingly, if we


invert this definition for η by solving for φ in terms of η, we obtain φ = 1/(1 +
e−η ). This is the familiar sigmoid function! This will come up again when we
derive logistic regression as a GLM. To complete the formulation of the Bernoulli
distribution as an exponential family distribution, we also have:

    T(y) = y
    a(η) = −log(1 − φ) = log(1 + e^η)
    b(y) = 1

This shows that the Bernoulli distribution can be written in the form of 3.1, using
an appropriate choice of T, a and b.
Let's now move on to consider the Gaussian distribution. Recall that, when deriving linear regression, the value of σ^2 had no effect on our final choice of θ and h_θ(x). Thus, we can choose an arbitrary value for σ^2 without changing anything. To simplify the derivation below, let's set σ^2 = 1.[1] We then have:

    p(y; µ) = (1/√(2π)) exp(−(1/2)(y − µ)^2)    (3.4)
            = (1/√(2π)) exp(−(1/2) y^2) · exp(µy − (1/2) µ^2)    (3.5)

[1] If we leave σ^2 as a variable, the Gaussian distribution can also be shown to be in the exponential family, where η ∈ R^2 is now a 2-dimensional vector that depends on both µ and σ. For the purposes of GLMs, however, the σ^2 parameter can also be treated by considering a more general definition of the exponential family: p(y; η, τ) = b(a, τ) exp((η^T T(y) − a(η))/c(τ)). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ) = σ^2; but given our simplification above, we won't need the more general definition for the examples we will consider here.

Thus, we see that the Gaussian is in the exponential family, with

    η = µ
    T(y) = y
    a(η) = µ^2/2 = η^2/2
    b(y) = (1/√(2π)) exp(−y^2/2).

There are many other distributions that are members of the exponential family: the
multinomial (which we’ll see later), the Poisson (for modelling count-data; also
see the problem set); the gamma and the exponential (for modelling continuous,
non-negative random variables, such as time-intervals); the beta and the Dirichlet
(for distributions over probabilities); and many more. In the next section, we will
describe a general ‘‘recipe’’ for constructing models in which y (given x and θ)
comes from any of these distributions.

3.2 Constructing GLMs

(Margin note: Inference is easy: E[y; η] = ∂a(η)/∂η, the derivative of the log partition function of the exponential family.)

Suppose you would like to build a model to estimate the number y of customers arriving in your store (or number of page-views on your website) in any given hour, based on certain features x such as store promotions, recent advertising, weather, day-of-week, etc. We know that the Poisson distribution usually gives a good model for numbers of visitors. Knowing this, how can we come up with a model for our problem? Fortunately, the Poisson is an exponential family distribution, so we can apply a Generalized Linear Model (GLM). In this section, we will describe a method for constructing GLM models for problems such as these.
More generally, consider a classification or regression problem where we would
like to predict the value of some random variable y as a function of x. To derive a
GLM for this problem, we will make the following three assumptions about the
conditional distribution of y given x and about our model:

1. y | x; θ ∼ ExponentialFamily(η ). I.e., given x and θ, the distribution of y


follows some exponential family distribution, with parameter η.


2. Given x, our goal is to predict the expected value of T (y) given x. In most
of our examples, we will have T (y) = y, so this means we would like the
prediction h( x ) output by our learned hypothesis h to satisfy h( x ) = E[y | x ].
(Note that this assumption is satisfied in the choices for hθ ( x ) for both logistic
regression and linear regression. For instance, in logistic regression, we had
hθ ( x ) = p(y = 1 | x; θ ) = 0 · p(y = 0 | x; θ ) + 1 · p(y = 1 | x; θ ) = E[y | x; θ ].)

3. The natural parameter η and the inputs x are related linearly: η = θ > x. (Or, if
η is vector-valued, then ηi = θi> x.)

The third of these assumptions might seem the least well justified of the above,
and it might be better thought of as a ‘‘design choice’’ in our recipe for designing
GLMs, rather than as an assumption per se. These three assumptions/design
choices will allow us to derive a very elegant class of learning algorithms, namely
GLMs, that have many desirable properties such as ease of learning. Furthermore,
the resulting models are often very effective for modelling different types of
distributions over y; for example, we will shortly show that both logistic regression and ordinary least squares can be derived as GLMs.

3.2.1 Ordinary Least Squares


To show that ordinary least squares is a special case of the GLM family of models,
consider the setting where the target variable y (also called the response variable
in GLM terminology) is continuous, and we model the conditional distribution
of y given x as a Gaussian N(µ, σ^2). (Here, µ may depend on x.) So, we let the
ExponentialFamily(η ) distribution above be the Gaussian distribution. As we
saw previously, in the formulation of the Gaussian as an exponential family
distribution, we had µ = η. So, we have

    h_θ(x) = E[y | x; θ]
           = µ
           = η
           = θ^T x.


The first equality follows from Assumption 2, above; the second equality follows
from the fact that y | x; θ ∼ N (µ, σ2 ), and so its expected value is given by µ; the
third equality follows from Assumption 1 (and our earlier derivation showing that
µ = η in the formulation of the Gaussian as an exponential family distribution);
and the last equality follows from Assumption 3.

3.2.2 Logistic Regression


We now consider logistic regression. Here we are interested in binary classification,
so y ∈ {0, 1}. Given that y is binary-valued, it therefore seems natural to choose
the Bernoulli family of distributions to model the conditional distribution of
y given x. In our formulation of the Bernoulli distribution as an exponential
family distribution, we had φ = 1/(1 + e−η ). Furthermore, note that if y | x; θ ∼
Bernoulli(φ), then E[y | x; θ ] = φ. So, following a similar derivation as the one
for ordinary least squares, we get:

    h_θ(x) = E[y | x; θ]
           = φ
           = 1/(1 + e^{−η})
           = 1/(1 + e^{−θ^T x})

So, this gives us hypothesis functions of the form h_θ(x) = 1/(1 + e^{−θ^T x}). If you were previously wondering how we came up with the form of the logistic function 1/(1 + e^{−z}), this gives one answer: Once we assume that y conditioned on x is
Bernoulli, it arises as a consequence of the definition of GLMs and exponential
family distributions.
To introduce a little more terminology, the function g giving the distribution's mean as a function of the natural parameter (g(η) = E[T(y); η]) is called the canonical response function. Its inverse, g^{−1}, is called the canonical link function. Thus, the canonical response function for the Gaussian family is just the identity function; and the canonical response function for the Bernoulli is the logistic function.[2]

[2] Many texts use g to denote the link function, and g^{−1} to denote the response function; but the notation we're using here, inherited from the early machine learning literature, will be more consistent with the notation used in the rest of the class.

3.2.3 Softmax Regression

Let's look at one more example of a GLM. Consider a classification problem in which the response variable y can take on any one of k values, so y ∈ {1, 2, . . . , k}.


For example, rather than classifying email into the two classes spam or not-
spam—which would have been a binary classification problem— we might want
to classify it into three classes, such as spam, personal mail, and work-related mail.
The response variable is still discrete, but can now take on more than two values.
We will thus model it as distributed according to a multinomial distribution.
Let’s derive a GLM for modelling this type of multinomial data. To do so, we
will begin by expressing the multinomial as an exponential family distribution.
To parameterize a multinomial over k possible outcomes, one could use k parameters φ_1, . . . , φ_k specifying the probability of each of the outcomes. However, these parameters would be redundant, or more formally, they would not be independent (since knowing any k − 1 of the φ_i's uniquely determines the last one, as they must satisfy ∑_{i=1}^{k} φ_i = 1). So, we will instead parameterize the multinomial with only k − 1 parameters, φ_1, . . . , φ_{k−1}, where φ_i = p(y = i; φ), and p(y = k; φ) = 1 − ∑_{i=1}^{k−1} φ_i. For notational convenience, we will also let φ_k = 1 − ∑_{i=1}^{k−1} φ_i, but we should keep in mind that this is not a parameter, and that it is fully specified by φ_1, . . . , φ_{k−1}.
To express the multinomial as an exponential family distribution, we will define T(y) ∈ R^{k−1} as follows:

    T(1) = [1, 0, 0, . . . , 0]^T,  T(2) = [0, 1, 0, . . . , 0]^T,  . . . ,  T(k − 1) = [0, 0, . . . , 0, 1]^T,  T(k) = [0, 0, . . . , 0, 0]^T,

Unlike our previous examples, here we do not have T (y) = y; also, T (y) is now
a k − 1 dimensional vector, rather than a real number. We will write ( T (y))i to
denote the i-th element of the vector T (y). We introduce one more very useful
piece of notation. An indicator function 1{·} takes on a value of 1 if its argument is
true, and 0 otherwise (1{True} = 1, 1{False} = 0). For example, 1{2 = 3} = 0,
and 1{3 = 5 − 2} = 1. So, we can also write the relationship between T (y) and
y as ( T (y))i = 1{y = i }. (Before you continue reading, please make sure you
understand why this is true!) Further, we have that E[( T (y))i ] = P(y = i ) = φi .


We are now ready to show that the multinomial is a member of the exponential family. We have:

    p(y; φ) = φ_1^{1{y=1}} φ_2^{1{y=2}} · · · φ_k^{1{y=k}}
            = φ_1^{1{y=1}} φ_2^{1{y=2}} · · · φ_k^{1 − ∑_{i=1}^{k−1} 1{y=i}}
            = φ_1^{(T(y))_1} φ_2^{(T(y))_2} · · · φ_k^{1 − ∑_{i=1}^{k−1} (T(y))_i}
            = exp((T(y))_1 log(φ_1) + (T(y))_2 log(φ_2) + · · · + (1 − ∑_{i=1}^{k−1} (T(y))_i) log(φ_k))
            = exp((T(y))_1 log(φ_1/φ_k) + (T(y))_2 log(φ_2/φ_k) + · · · + (T(y))_{k−1} log(φ_{k−1}/φ_k) + log(φ_k))
            = b(y) exp(η^T T(y) − a(η))

where

    η = [log(φ_1/φ_k), log(φ_2/φ_k), . . . , log(φ_{k−1}/φ_k)]^T,
    a(η) = −log(φ_k)
    b(y) = 1.

This completes our formulation of the multinomial as an exponential family distribution.
The link function is given (for i = 1, . . . , k) by:

    η_i = log(φ_i/φ_k)

For convenience, we have also defined η_k = log(φ_k/φ_k) = 0. To invert the link function and derive the response function, we therefore have that

    e^{η_i} = φ_i/φ_k    (3.6)
    φ_k e^{η_i} = φ_i    (3.7)
    φ_k ∑_{i=1}^{k} e^{η_i} = ∑_{i=1}^{k} φ_i = 1    (3.8)


This implies that φ_k = 1/∑_{i=1}^{k} e^{η_i}, which can be substituted back into equation (3.7) to give the response function

    φ_i = e^{η_i} / ∑_{j=1}^{k} e^{η_j}

This function mapping from the η’s to the φ’s is called the softmax function.
To complete our model, we use Assumption 3, given earlier, that the η_i's are linearly related to the x's. So, we have η_i = θ_i^T x (for i = 1, . . . , k − 1), where θ_1, . . . , θ_{k−1} ∈ R^{d+1} are the parameters of our model. For notational convenience, we can also define θ_k = 0, so that η_k = θ_k^T x = 0, as given previously. Hence, our model assumes that the conditional distribution of y given x is given by:

    p(y = i | x; θ) = φ_i    (3.9)
                    = e^{η_i} / ∑_{j=1}^{k} e^{η_j}    (3.10)
                    = e^{θ_i^T x} / ∑_{j=1}^{k} e^{θ_j^T x}    (3.11)

This model, which applies to classification problems where y ∈ {1, ..., k}, is called
softmax regression. It is a generalization of logistic regression.
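As a small illustration (not from the notes), the softmax response function can be computed as follows; subtracting the maximum η_i first is a common numerical-stability trick and an added assumption here, not something the text requires.

    import numpy as np

    def softmax(eta):
        # Map natural parameters (eta_1, ..., eta_k) to probabilities (phi_1, ..., phi_k).
        eta = eta - np.max(eta)           # numerical-stability shift; the result is unchanged
        exp_eta = np.exp(eta)
        return exp_eta / np.sum(exp_eta)  # phi_i = e^{eta_i} / sum_j e^{eta_j}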


Our hypothesis will output:

    h_θ(x) = E[T(y) | x; θ]    (3.12)
           = E[ [1{y = 1}, 1{y = 2}, . . . , 1{y = k − 1}]^T | x; θ ]    (3.13)
           = [φ_1, φ_2, . . . , φ_{k−1}]^T    (3.14)
           = [ exp(θ_1^T x)/∑_{j=1}^{k} exp(θ_j^T x),  exp(θ_2^T x)/∑_{j=1}^{k} exp(θ_j^T x),  . . . ,  exp(θ_{k−1}^T x)/∑_{j=1}^{k} exp(θ_j^T x) ]^T    (3.15)

In other words, our hypothesis will output the estimated probability p(y = i | x; θ), for every value of i = 1, . . . , k. (Even though h_θ(x) as defined above is only (k − 1)-dimensional, clearly p(y = k | x; θ) can be obtained as 1 − ∑_{i=1}^{k−1} φ_i.)
Lastly, let’s discuss parameter fitting. Similar to our original derivation of
ordinary least squares and logistic regression, if we have a training set of n
examples {( x (i) , y(i) ); i = 1, . . . , n} and would like to learn the parameters θi of
this model, we would begin by writing down the log-likelihood
    ℓ(θ) = ∑_{i=1}^{n} log p(y^{(i)} | x^{(i)}; θ)    (3.16)
         = ∑_{i=1}^{n} log ∏_{l=1}^{k} ( e^{θ_l^T x^{(i)}} / ∑_{j=1}^{k} e^{θ_j^T x^{(i)}} )^{1{y^{(i)} = l}}    (3.17)

To obtain the second line above, we used the definition for p(y | x; θ) given in equation (3.11). We can now obtain the maximum likelihood estimate of the parameters by maximizing ℓ(θ) in terms of θ, using a method such as gradient ascent or Newton's method.
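As a sketch of what the resulting fitting procedure could look like (illustrative only; the notes leave the optimizer unspecified beyond gradient ascent or Newton's method), here is batch gradient ascent on the log-likelihood (3.16). It learns one parameter vector per class, an over-parameterization of the k − 1 vectors used in the text, which is an added simplification rather than the notes' formulation.

    import numpy as np

    def fit_softmax_regression(X, y, k, alpha=0.1, num_iters=1000):
        # X: n-by-(d+1) design matrix, y: labels in {0, ..., k-1}, k: number of classes.
        n, d_plus_1 = X.shape
        Theta = np.zeros((k, d_plus_1))
        Y = np.eye(k)[y]                      # one-hot labels: Y[i, l] = 1{y^{(i)} = l}
        for _ in range(num_iters):
            logits = X @ Theta.T              # eta_l = theta_l^T x for each example and class
            logits = logits - logits.max(axis=1, keepdims=True)          # numerical stability
            exp_logits = np.exp(logits)
            probs = exp_logits / exp_logits.sum(axis=1, keepdims=True)   # phi for each example
            # gradient of l(theta) w.r.t. theta_l: sum_i (1{y^{(i)} = l} - phi_l(x^{(i)})) x^{(i)}
            Theta = Theta + alpha * (Y - probs).T @ X
        return Theta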



Part II: Generative Learning Algorithms

(From CS229 Spring 2021, Andrew Ng, Moses Charikar, & Christopher Ré, Stanford University.)

So far, we've mainly been talking about learning algorithms that model p(y | x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y | x; θ) as h_θ(x) = g(θ^T x) where g is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants (y = 1) and dogs (y = 0), based on some features of an
animal. Given a training set, an algorithm like logistic regression or the perceptron
algorithm (basically) tries to find a straight line—that is, a decision boundary—
that separates the elephants and dogs. Then, to classify a new animal as either an
elephant or a dog, it checks on which side of the decision boundary it falls, and
makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a model
of what elephants look like. Then, looking at dogs, we can build a separate model
of what dogs look like. Finally, to classify a new animal, we can match the new
animal against the elephant model, and match it against the dog model, to see
whether the new animal looks more like the elephants or more like the dogs we
had seen in the training set.
Algorithms that try to learn p(y | x ) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X to
the labels {0, 1}, (such as the perceptron algorithm) are called discriminative
learning algorithms. Here, we’ll talk about algorithms that instead try to model
p( x | y) (and p(y)). These algorithms are called generative learning algorithms.
For instance, if y indicates whether an example is a dog (0) or an elephant (1),
then p( x | y = 0) models the distribution of dogs’ features, and p( x | y = 1)
models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p( x | y), our algorithm can
then use Bayes rule to derive the posterior distribution on y given x:

    p(y | x) = p(x | y) p(y) / p(x)                                          (3.18)

Here, the denominator is given by p(x) = p(x | y = 1) p(y = 1) + p(x | y = 0) p(y = 0)
(you should be able to verify that this is true from the standard prop-
erties of probabilities), and thus can also be expressed in terms of the quantities
p(x | y) and p(y) that we've learned. Actually, if we're calculating p(y | x) in order
to make a prediction, then we don't actually need to calculate the denominator,
since

    arg max_y p(y | x) = arg max_y [ p(x | y) p(y) / p(x) ]
                       = arg max_y p(x | y) p(y).

4 Gaussian discriminant analysis

The first generative learning algorithm that we’ll look at is Gaussian discrim-
inant analysis (GDA). In this model, we’ll assume that p( x | y) is distributed
according to a multivariate normal distribution. Let’s talk briefly about the prop-
erties of multivariate normal distributions before moving on to the GDA model
itself.
The multivariate normal distribution in d-dimensions, also called the multi-
variate Gaussian distribution, is parameterized by a mean vector µ ∈ Rd and a
covariance matrix Σ ∈ Rd×d , where Σ ≥ 0 is symmetric and positive semi-definite.
Also written ‘‘N (µ, Σ)’’, its density is given by:
 
    p(x; µ, Σ) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp( −(1/2) (x − µ)^⊤ Σ^{−1} (x − µ) )    (4.1)

In the equation above, ‘‘|Σ|’’ denotes the determinant of the matrix Σ.


For a random variable X distributed N (µ, Σ), the mean is (unsurprisingly)
given by µ:

    E[X] = ∫_x x p(x; µ, Σ) dx = µ                                           (4.2)
The covariance of a vector-valued random variable Z is defined as Cov( Z ) =
E[( Z − E[ Z ])( Z − E[ Z ])> ]. This generalizes the notion of the variance of a real-
valued random variable. The covariance can also be defined as Cov( Z ) = E[ ZZ > ] −
(E[ Z ])(E[ Z ])> . (You should be able to prove to yourself that these two definitions
are equivalent.) If X ∼ N (µ, Σ), then

Cov( X ) = Σ. (4.3)

Here are some examples of what the density of a Gaussian distribution looks
like:
The left-most figure shows a Gaussian with mean zero (that is, the 2 × 1 zero-
vector) and covariance matrix Σ = I (the 2 × 2 identity matrix). A Gaussian with
zero mean and identity covariance is also called the standard normal distribution.
The middle figure shows the density of a Gaussian with zero mean and Σ = 0.6I;
and the rightmost figure shows one with Σ = 2I. We see that as Σ becomes
larger, the Gaussian becomes more ‘‘spread-out,’’ and as it becomes smaller, the
distribution becomes more ‘‘compressed.’’
Let’s look at some more examples.
The figures above show Gaussians with mean 0, and with covariance matrices
respectively:
" # " # " #
1 0 1 0.5 1 0.8
Σ= ; Σ= ; Σ= . (4.4)
0 1 0.5 1 0.8 1
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
‘‘compressed’’ towards the 45◦ line (given by x1 = x2 ). We can see this more
clearly when we look at the contours of the same three densities:

[Figure: contour plots of the three densities above, each with µ = [0, 0]^⊤ and Σ as given in (4.4).]

Here’s one last set of examples generated by varying Σ:


The plots above used, respectively,
" # " # " #
3 0.8 1 −0.5 1 −0.8
Σ= ; Σ= ; Σ= . (4.5)
0.8 1 −0.5 1 −0.8 1

toc 2021-05-23 00:18:27-07:00, draft: send comments to [email protected]


34 c hapter 4. gaussian discriminant analysis

[Figure: contour plots corresponding to the three covariance matrices in (4.5), each with µ = [0, 0]^⊤.]

From the leftmost and middle figures, we see that by decreasing the off-diagonal
elements of the covariance matrix, the density now becomes ‘‘compressed’’ again,
but in the opposite direction. Lastly, as we vary the parameters, more generally
the contours will form ellipses (the rightmost figure showing an example).
As our last set of examples, fixing Σ = I, by varying µ, we can also move the
mean of the density around.
The figures above were generated using Σ = I, and respectively
" # " # " #
1 −0.5 −1
µ= ; µ= ; µ= . (4.6)
0 0 −1.5

4.1 The Gaussian Discriminant Analysis model

When we have a classification problem in which the input features x are continuous-
valued random variables, we can then use the Gaussian Discriminant Analysis
(GDA) model, which models p( x | y) using a multivariate normal distribution.
The model is:

y ∼ Bernoulli(φ) (4.7)
x | y = 0 ∼ N ( µ0 , Σ ) (4.8)
x | y = 1 ∼ N ( µ1 , Σ ) (4.9)

2021-05-23 00:18:27-07:00, draft: send comments to [email protected] toc


4.1. the gaussian discriminant analysis model 35

Writing out the distributions, this is:

    p(y) = φ^y (1 − φ)^{1−y}                                                              (4.10)

    p(x | y = 0) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp( −(1/2) (x − µ_0)^⊤ Σ^{−1} (x − µ_0) )   (4.11)

    p(x | y = 1) = 1 / ((2π)^{d/2} |Σ|^{1/2}) · exp( −(1/2) (x − µ_1)^⊤ Σ^{−1} (x − µ_1) )   (4.12)

Here, the parameters of our model are φ, Σ, µ_0 and µ_1. (Note that while there're
two different mean vectors µ_0 and µ_1, this model is usually applied using only
one covariance matrix Σ.) The log-likelihood of the data is given by

    ℓ(φ, µ_0, µ_1, Σ) = log ∏_{i=1}^n p(x^{(i)}, y^{(i)}; φ, µ_0, µ_1, Σ)               (4.13)

                      = log ∏_{i=1}^n p(x^{(i)} | y^{(i)}; µ_0, µ_1, Σ) p(y^{(i)}; φ).   (4.14)

By maximizing ` with respect to the parameters, we find the maximum likelihood


estimate of the parameters (see problem set 1) to be:
    φ   = (1/n) ∑_{i=1}^n 1{y^{(i)} = 1}                                     (4.15)

    µ_0 = ∑_{i=1}^n 1{y^{(i)} = 0} x^{(i)} / ∑_{i=1}^n 1{y^{(i)} = 0}         (4.16)

    µ_1 = ∑_{i=1}^n 1{y^{(i)} = 1} x^{(i)} / ∑_{i=1}^n 1{y^{(i)} = 1}         (4.17)

    Σ   = (1/n) ∑_{i=1}^n (x^{(i)} − µ_{y^{(i)}})(x^{(i)} − µ_{y^{(i)}})^⊤    (4.18)

Pictorially, what the algorithm is doing can be seen as follows:


Shown in the figure are the training set, as well as the contours of the two
Gaussian distributions that have been fit to the data in each of the two classes. Note
that the two Gaussians have contours that are the same shape and orientation,
since they share a covariance matrix Σ, but they have different means µ0 and
µ1 . Also shown in the figure is the straight line giving the decision boundary at
which p(y = 1 | x ) = 0.5. On one side of the boundary, we’ll predict y = 1 to be
the most likely outcome, and on the other side, we’ll predict y = 0.
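A minimal NumPy sketch of these closed-form estimates and the resulting prediction rule follows; it is illustrative only (variable names are assumptions), with X an n × d data matrix and y in {0, 1}:

    import numpy as np

    def fit_gda(X, y):
        # Closed-form MLE from (4.15)-(4.18): class prior, per-class means, shared covariance.
        n, d = X.shape
        phi = np.mean(y == 1)
        mu0 = X[y == 0].mean(axis=0)
        mu1 = X[y == 1].mean(axis=0)
        centered = X - np.where((y == 1)[:, None], mu1, mu0)
        Sigma = centered.T @ centered / n
        return phi, mu0, mu1, Sigma

    def predict_gda(X, phi, mu0, mu1, Sigma):
        # Pick the class maximizing log p(x | y) + log p(y); the shared normalizer cancels.
        Sinv = np.linalg.inv(Sigma)
        def score(mu, prior):
            diff = X - mu
            return -0.5 * np.einsum('ij,jk,ik->i', diff, Sinv, diff) + np.log(prior)
        return (score(mu1, phi) > score(mu0, 1 - phi)).astype(int)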


4.2 Discussion: GDA and logistic regression

The GDA model has an interesting relationship to logistic regression. If we view
the quantity p(y = 1 | x; φ, µ_0, µ_1, Σ) as a function of x, we'll find that it can be
expressed in the form

    p(y = 1 | x; φ, Σ, µ_0, µ_1) = 1 / (1 + exp(−θ^⊤ x)),                    (4.19)

where θ is some appropriate function of φ, Σ, µ_0, µ_1.1 This is exactly the form that
logistic regression—a discriminative algorithm—used to model p(y = 1 | x).

    1. This uses the convention of redefining the x^{(i)}'s on the right-hand side to
    be (d + 1)-dimensional vectors by adding the extra coordinate x_0^{(i)} = 1; see
    problem set 1.

    When would we prefer one model over another? GDA and logistic regression
will, in general, give different decision boundaries when trained on the same
dataset. Which is better?

We just argued that if p( x | y) is multivariate Gaussian (with shared Σ), then


p(y | x ) necessarily follows a logistic function. The converse, however, is not
true; i.e., p(y | x ) being a logistic function does not imply p( x | y) is multivariate
Gaussian. This shows that GDA makes stronger modeling assumptions about
the data than does logistic regression. It turns out that when these modeling
assumptions are correct, then GDA will find better fits to the data, and is a better
model. Specifically, when p( x | y) is indeed Gaussian (with shared Σ), then GDA
is asymptotically efficient. Informally, this means that in the limit of very large
training sets (large n), there is no algorithm that is strictly better than GDA (in
terms of, say, how accurately they estimate p(y | x )). In particular, it can be shown
that in this setting, GDA will be a better algorithm than logistic regression; and
more generally, even for small training set sizes, we would generally expect GDA
to be better.
In contrast, by making significantly weaker assumptions, logistic regression
is also more robust and less sensitive to incorrect modeling assumptions. There
are many different sets of assumptions that would lead to p(y | x ) taking the
form of a logistic function. For example, if x | y = 0 ∼ Poisson(λ0 ), and x | y =
1 ∼ Poisson(λ1 ), then p(y | x ) will be logistic. Logistic regression will also work
well on Poisson data like this. But if we were to use GDA on such data—and fit
Gaussian distributions to such non-Gaussian data—then the results will be less
predictable, and GDA may (or may not) do well.
To summarize: GDA makes stronger modeling assumptions, and is more data
efficient (i.e., requires less training data to learn ‘‘well’’) when the modeling


assumptions are correct or at least approximately correct. Logistic regression


makes weaker assumptions, and is significantly more robust to deviations from
modeling assumptions. Specifically, when the data is indeed non-Gaussian, then
in the limit of large datasets, logistic regression will almost always do better than
GDA. For this reason, in practice logistic regression is used more often than GDA.
(Some related considerations about discriminative vs. generative models also
apply for the Naive Bayes algorithm that we discuss next, but the Naive Bayes
algorithm is still considered a very good, and is certainly also a very popular,
classification algorithm.)



5 Naive Bayes

In GDA, the feature vectors x were continuous, real-valued vectors. Let’s now
talk about a different learning algorithm in which the x j ’s are discrete-valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether they
are unsolicited commercial (spam) email, or non-spam email. After learning
to do this, we can then have our mail reader automatically filter out the spam
messages and perhaps place them in a separate mail folder. Classifying emails is
one example of a broader set of problems called text classification.
Let's say we have a training set (a set of emails labeled as spam or non-spam).
We’ll begin our construction of our spam filter by specifying the features x j used
to represent an email.
We will represent an email via a feature vector whose length is equal to the
number of words in the dictionary. Specifically, if an email contains the j-th word
of the dictionary, then we will set x j = 1; otherwise, we let x j = 0. For instance,
the vector

          | 1 |   a
          | 0 |   aardvark
          | 0 |   aardwolf
          | . |   ..
    x  =  | 1 |   buy
          | . |   ..
          | 0 |   zygmurgy

is used to represent an email that contains the words ‘‘a'' and ‘‘buy,'' but not
‘‘aardvark,'' ‘‘aardwolf'' or ‘‘zygmurgy.''1 The set of words encoded into the
feature vector is called the vocabulary, so the dimension of x is equal to the size
of the vocabulary.
    Having chosen our feature vector, we now want to build a generative model.
So, we have to model p(x | y). But if we have, say, a vocabulary of 50000 words,
then x ∈ {0, 1}^50000 (x is a 50000-dimensional vector of 0's and 1's), and if we
were to model x explicitly with a multinomial distribution over the 2^50000 possible
outcomes, then we'd end up with a (2^50000 − 1)-dimensional parameter vector.
This is clearly too many parameters.

    1. Actually, rather than looking through an English dictionary for the list of all
    English words, in practice it is more common to look through our training set and
    encode in our feature vector only the words that occur at least once there. Apart
    from reducing the number of words modeled and hence reducing our computational
    and space requirements, this also has the advantage of allowing us to model/include
    as a feature many words that may appear in your email (such as ‘‘cs229'') but that
    you won't find in a dictionary. Sometimes (as in the …
To model p( x | y), we will therefore make a very strong assumption. We will
assume that the xi ’s are conditionally independent given y. This assumption is
called the Naive Bayes (NB) assumption, and the resulting algorithm is called
the Naive Bayes classifier. For instance, if y = 1 means spam email; ‘‘buy’’ is
word 2087 and ‘‘price’’ is word 39831; then we are assuming that if I tell you y = 1
(that a particular piece of email is spam), then knowledge of x2087 (knowledge
of whether ‘‘buy’’ appears in the message) will have no effect on your beliefs
about the value of x39831 (whether ‘‘price’’ appears). More formally, this can
be written p( x2087 | y) = p( x2087 | y, x39831 ). (Note that this is not the same as
saying that x2087 and x39831 are independent, which would have been written
‘‘p( x2087 ) = p( x2087 | x39831 )’’; rather, we are only assuming that x2087 and x39831
are conditionally independent given y.)
We now have:

p( x1 , . . . , x50000 | y) (5.1)
= p( x1 | y) p( x2 | y, x1 ) p( x3 | y, x1 , x2 ) · · · p( x50000 | y, x1 , . . . , x49999 ) (5.2)
= p( x1 | y) p( x2 | y) p( x3 | y) · · · p( x50000 | y) (5.3)
    = ∏_{j=1}^d p(x_j | y)                                                   (5.4)

The first equality simply follows from the usual properties of probabilities, and
the second equality used the NB assumption. We note that even though the Naive
Bayes assumption is an extremely strong assumption, the resulting algorithm
works well on many problems.
Our model is parameterized by φj|y=1 = p( x j = 1 | y = 1), φj|y=0 = p( x j = 1 |
y = 0), and φy = p(y = 1). As usual, given a training set {( x (i) , y(i) ); i = 1, . . . , n},
we can write down the joint likelihood of the data:
    L(φ_y, φ_{j|y=0}, φ_{j|y=1}) = ∏_{i=1}^n p(x^{(i)}, y^{(i)})             (5.5)


Maximizing this with respect to φy , φj|y=0 and φj|y=1 gives the maximum likeli-
hood estimates:
    φ_{j|y=1} = ∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1} / ∑_{i=1}^n 1{y^{(i)} = 1}    (5.6)

    φ_{j|y=0} = ∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0} / ∑_{i=1}^n 1{y^{(i)} = 0}    (5.7)

    φ_y = ∑_{i=1}^n 1{y^{(i)} = 1} / n                                                 (5.8)
In the equations above, the ‘‘∧’’ symbol means ‘‘and.’’ The parameters have a
very natural interpretation. For instance, φj|y=1 is just the fraction of the spam
(y = 1) emails in which word j does appear.
Having fit all these parameters, to make a prediction on a new example with
features x, we then simply calculate

    p(y = 1 | x) = p(x | y = 1) p(y = 1) / p(x)                                        (5.9)

                 = ( ∏_{j=1}^d p(x_j | y = 1) ) p(y = 1)
                   / [ ( ∏_{j=1}^d p(x_j | y = 1) ) p(y = 1) + ( ∏_{j=1}^d p(x_j | y = 0) ) p(y = 0) ],   (5.10)

and pick whichever class has the higher posterior probability.


Lastly, we note that while we have developed the Naive Bayes algorithm mainly
for the case of problems where the features x j are binary-valued, the generalization
to where x j can take values in {1, 2, . . . , k j } is straightforward. Here, we would
simply model p( x j | y) as multinomial rather than as Bernoulli. Indeed, even if
some original input attribute (say, the living area of a house, as in our earlier
example) were continuous valued, it is quite common to discretize it—that is, turn
it into a small set of discrete values—and apply Naive Bayes. For instance, if we
use some feature x j to represent living area, we might discretize the continuous
values as follows:

Table 5.1. Discretized living area.

    Living area (ft²)   < 400   400−800   800−1200   1200−1600   > 1600
    x_j                   1        2          3           4         5


Thus, for a house with living area 890 square feet, we would set the value of the
corresponding feature x j to 3. We can then apply the Naive Bayes algorithm, and
model p( x j | y) with a multinomial distribution, as described previously. When
the original, continuous-valued attributes are not well-modeled by a multivariate
normal distribution, discretizing the features and using Naive Bayes (instead of
GDA) will often result in a better classifier.

5.1 Laplace smoothing

The Naive Bayes algorithm as we have described it will work fairly well for many
problems, but there is a simple change that makes it work much better, especially
for text classification. Let’s briefly discuss a problem with the algorithm in its
current form, and then talk about how we can fix it.
Consider spam/email classification, and let's suppose that we are in the year
of 20xx, after completing CS229 and having done excellent work on the project,
you decide around May 20xx to submit work you did to the NeurIPS conference
for publication. (NeurIPS is one of the top machine learning conferences; the
deadline for submitting a paper is typically in May-June.) Because you end up
discussing the conference in your emails, you also start getting messages with
the word ‘‘neurips'' in it. But this is your first NeurIPS paper, and until this
time, you had not previously seen any emails containing the word ‘‘neurips''; in
particular ‘‘neurips'' did not ever appear in your training set of spam/non-spam
emails. Assuming that ‘‘neurips'' was the 35000th word in the dictionary, your
Naive Bayes spam filter therefore had picked its maximum likelihood estimates
of the parameters φ_{35000|y} to be

    φ_{35000|y=1} = ∑_{i=1}^n 1{x_{35000}^{(i)} = 1 ∧ y^{(i)} = 1} / ∑_{i=1}^n 1{y^{(i)} = 1} = 0    (5.11)

    φ_{35000|y=0} = ∑_{i=1}^n 1{x_{35000}^{(i)} = 1 ∧ y^{(i)} = 0} / ∑_{i=1}^n 1{y^{(i)} = 0} = 0,   (5.12)

i.e., because it has never seen ‘‘neurips’’ before in either spam or non-spam
training examples, it thinks the probability of seeing it in either type of email is
zero. Hence, when trying to decide if one of these messages containing ‘‘neurips’’


is spam, it calculates the class posterior probabilities, and obtains

    p(y = 1 | x) = ( ∏_{j=1}^d p(x_j | y = 1) ) p(y = 1)
                   / [ ( ∏_{j=1}^d p(x_j | y = 1) ) p(y = 1) + ( ∏_{j=1}^d p(x_j | y = 0) ) p(y = 0) ]   (5.13)

                 = 0/0.                                                                                  (5.14)

This is because each of the terms ‘‘∏dj=1 p( x j | y)’’ includes a term p( x35000 | y) = 0
that is multiplied into it. Hence, our algorithm obtains 0/0, and doesn’t know
how to make a prediction.
Stating the problem more broadly, it is statistically a bad idea to estimate the
probability of some event to be zero just because you haven’t seen it before in your
finite training set. Take the problem of estimating the mean of a multinomial ran-
dom variable z taking values in {1, . . . , k}. We can parameterize our multinomial
with φj = p(z = j). Given a set of n independent observations {z(1) , . . . , z(n) },
the maximum likelihood estimates are given by

    φ_j = ∑_{i=1}^n 1{z^{(i)} = j} / n.                                      (5.15)
As we saw previously, if we were to use these maximum likelihood estimates,
then some of the φj ’s might end up as zero, which was a problem. To avoid this,
we can use Laplace smoothing, which replaces the above estimate with

    φ_j = ( 1 + ∑_{i=1}^n 1{z^{(i)} = j} ) / (k + n).                        (5.16)
Here, we’ve added 1 to the numerator, and k to the denominator. Note that
∑kj=1 φj = 1 still holds (check this yourself!), which is a desirable property since
the φj ’s are estimates for probabilities that we know must sum to 1. Also, φj 6= 0
for all values of j, solving our problem of probabilities being estimated as zero.
Under certain (arguably quite strong) conditions, it can be shown that the Laplace
smoothing actually gives the optimal estimator of the φj ’s.


Returning to our Naive Bayes classifier, with Laplace smoothing, we therefore


obtain the following estimates of the parameters:
    φ_{j|y=1} = ( 1 + ∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1} ) / ( 2 + ∑_{i=1}^n 1{y^{(i)} = 1} )    (5.17)

    φ_{j|y=0} = ( 1 + ∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0} ) / ( 2 + ∑_{i=1}^n 1{y^{(i)} = 0} )    (5.18)

(In practice, it usually doesn’t matter much whether we apply Laplace smoothing
to φy or not, since we will typically have a fair fraction each of spam and non-spam
messages, so φy will be a reasonable estimate of p(y = 1) and will be quite far
from 0 anyway.)
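To make the procedure concrete, here is a minimal NumPy sketch of a Bernoulli-event-model Naive Bayes classifier with Laplace smoothing, following (5.17), (5.18) and (5.9). The function and variable names are illustrative assumptions, and log-probabilities are used to avoid numerical underflow:

    import numpy as np

    def fit_bernoulli_nb(X, y):
        # X: (n, d) binary matrix, X[i, j] = 1 if word j appears in email i; y in {0, 1}.
        phi_y = np.mean(y == 1)
        phi_j_1 = (1 + X[y == 1].sum(axis=0)) / (2 + np.sum(y == 1))   # (5.17)
        phi_j_0 = (1 + X[y == 0].sum(axis=0)) / (2 + np.sum(y == 0))   # (5.18)
        return phi_y, phi_j_0, phi_j_1

    def predict_bernoulli_nb(X, phi_y, phi_j_0, phi_j_1):
        # log p(x | y) + log p(y) for each class; the shared p(x) denominator cancels.
        log_p1 = X @ np.log(phi_j_1) + (1 - X) @ np.log(1 - phi_j_1) + np.log(phi_y)
        log_p0 = X @ np.log(phi_j_0) + (1 - X) @ np.log(1 - phi_j_0) + np.log(1 - phi_y)
        return (log_p1 > log_p0).astype(int)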

5.2 Event models for text classification

To close off our discussion of generative learning algorithms, let’s talk about one
more model that is specifically for text classification. While Naive Bayes as we’ve
presented it will work well for many classification problems, for text classification,
there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses
what's called the Bernoulli event model (or sometimes multi-variate Bernoulli
event model). In this model, we assumed that the way an email is generated is
that first it is randomly determined (according to the class priors p(y)) whether
a spammer or non-spammer will send you your next message. Then, the person
sending the email runs through the dictionary, deciding whether to include each
word j in that email independently and according to the probabilities p( x j = 1 |
y) = φ_{j|y}. Thus, the probability of a message is given by p(y) ∏_{j=1}^d p(x_j | y).
Here’s a different model, called the Multinomial event model. To describe
this model, we will use a different notation and set of features for representing
emails. We let x j denote the identity of the j-th word in the email. Thus, x j is now
an integer taking values in {1, . . . , |V |}, where |V | is the size of our vocabulary
(dictionary). An email of d words is now represented by a vector ( x1 , x2 , . . . , xd )
of length d; note that d can vary for different documents. For instance, if an email
starts with ‘‘A NeurIPS …,’’ then x1 = 1 (‘‘a’’ is the first word in the dictionary),
and x2 = 35000 (if ‘‘neurips’’ is the 35000th word in the dictionary).


In the multinomial event model, we assume that the way an email is generated
is via a random process in which spam/non-spam is first determined (according
to p(y)) as before. Then, the sender of the email writes the email by first generating
x1 from some multinomial distribution over words (p( x1 | y)). Next, the second
word x2 is chosen independently of x1 but from the same multinomial distribution,
and similarly for x3 , x4 , and so on, until all d words of the email have been
generated. Thus, the overall probability of a message is given by p(y) ∏dj=1 p( x j |
y). Note that this formula looks like the one we had earlier for the probability of a
message under the Bernoulli event model, but that the terms in the formula now
mean very different things. In particular x j | y is now a multinomial, rather than
a Bernoulli distribution.
The parameters for our new model are φy = p(y) as before, φk|y=1 = p( x j =
k | y = 1) (for any j) and φk|y=0 = p( x j = k | y = 0). Note that we have assumed
that p( x j | y) is the same for all values of j (i.e., that the distribution according to
which a word is generated does not depend on its position j within the email).
If we are given a training set {(x^{(i)}, y^{(i)}); i = 1, . . . , n} where x^{(i)} =
(x_1^{(i)}, x_2^{(i)}, . . . , x_{d_i}^{(i)}) (here, d_i is the number of words in the i-th training
example), the likelihood of the
data is given by
    L(φ_y, φ_{k|y=0}, φ_{k|y=1}) = ∏_{i=1}^n p(x^{(i)}, y^{(i)})                                          (5.19)

                                 = ∏_{i=1}^n ( ∏_{j=1}^{d_i} p(x_j^{(i)} | y; φ_{k|y=0}, φ_{k|y=1}) ) p(y^{(i)}; φ_y).   (5.20)

Maximizing this yields the maximum likelihood estimates of the parameters:

    φ_{k|y=1} = ∑_{i=1}^n ∑_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 1} / ∑_{i=1}^n 1{y^{(i)} = 1} d_i    (5.21)

    φ_{k|y=0} = ∑_{i=1}^n ∑_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 0} / ∑_{i=1}^n 1{y^{(i)} = 0} d_i    (5.22)

    φ_y = ∑_{i=1}^n 1{y^{(i)} = 1} / n.                                                                  (5.23)
If we were to apply Laplace smoothing (which is needed in practice for good
performance) when estimating φk|y=0 and φk|y=1 , we add 1 to the numerators and


|V| to the denominators, and obtain:

    φ_{k|y=1} = ( 1 + ∑_{i=1}^n ∑_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 1} ) / ( |V| + ∑_{i=1}^n 1{y^{(i)} = 1} d_i )    (5.24)

    φ_{k|y=0} = ( 1 + ∑_{i=1}^n ∑_{j=1}^{d_i} 1{x_j^{(i)} = k ∧ y^{(i)} = 0} ) / ( |V| + ∑_{i=1}^n 1{y^{(i)} = 0} d_i )    (5.25)

While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good ‘‘first thing to
try,’’ given its simplicity and ease of implementation.



Part III: Kernel Methods
(From CS229 Fall 2020, Tengyu Ma, Moses Charikar, Andrew Ng & Christopher Ré, Stanford University.)

6 Kernel methods

6.1 Feature maps

Recall that in our discussion about linear regression, we considered the problem
of predicting the price of a house (denoted by y) from the living area of the house
(denoted by x), and we fit a linear function of x to the training data. What if the
price y can be more accurately represented as a non-linear function of x? In this
case, we need a more expressive family of models than linear models.
We start by considering fitting cubic functions y = θ3 x3 + θ2 x2 + θ1 x + θ0 . It
turns out that we can view the cubic function as a linear function over a different
set of feature variables (defined below). Concretely, let the function φ : R → R⁴
be defined as

    φ(x) = [1, x, x², x³]^⊤ ∈ R⁴.                                            (6.1)

Let θ ∈ R4 be the vector containing θ0 , θ1 , θ2 , θ3 as entries. Then we can rewrite


the cubic function in x as:

θ3 x 3 + θ2 x 2 + θ1 x + θ0 = θ > φ ( x )

Thus, a cubic function of the variable x can be viewed as a linear function over the
variables φ( x ). To distinguish between these two sets of variables, in the context
of kernel methods, we will call the ‘‘original’’ input value the input attributes of
a problem (in this case, x, the living area). When the original input is mapped to
some new set of quantities φ(x), we will call those new quantities the feature
variables. (Unfortunately, different authors use different terms to describe these
two things in different contexts.) We will call φ a feature map, which maps the
attributes to the features.

6.2 LMS (least mean squares) with features

We will derive the gradient descent algorithm for fitting the model θ > φ( x ). First
recall that for ordinary least square problem where we were to fit θ > x, the batch
gradient descent update is (see the first lecture note for its derivation):
    θ := θ + α ∑_{i=1}^n ( y^{(i)} − h_θ(x^{(i)}) ) x^{(i)}                  (6.2)

      := θ + α ∑_{i=1}^n ( y^{(i)} − θ^⊤ x^{(i)} ) x^{(i)}.                  (6.3)

Let φ : R^d → R^p be a feature map that maps attribute x (in R^d) to the features
φ(x) in R^p. (In the motivating example in the previous subsection, we have d = 1
and p = 4.) Now our goal is to fit the function θ^⊤ φ(x), with θ being a vector in R^p
instead of R^d. We can replace all the occurrences of x^{(i)} in the algorithm above by
φ(x^{(i)}) to obtain the new update:

    θ := θ + α ∑_{i=1}^n ( y^{(i)} − θ^⊤ φ(x^{(i)}) ) φ(x^{(i)}).            (6.4)

Similarly, the corresponding stochastic gradient descent update rule is:

    θ := θ + α ( y^{(i)} − θ^⊤ φ(x^{(i)}) ) φ(x^{(i)}).                      (6.5)
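A minimal sketch of the batch update (6.4) for the cubic feature map (6.1), in NumPy; the names and the learning rate are illustrative assumptions and not part of the original notes:

    import numpy as np

    def phi(x):
        # Cubic feature map from (6.1): maps a scalar attribute to [1, x, x^2, x^3].
        return np.array([1.0, x, x**2, x**3])

    def lms_with_features(xs, ys, alpha=1e-3, iters=5000):
        # Batch gradient descent on theta^T phi(x), following update (6.4).
        Phi = np.stack([phi(x) for x in xs])      # (n, p) matrix of feature vectors
        theta = np.zeros(Phi.shape[1])
        for _ in range(iters):
            theta += alpha * Phi.T @ (ys - Phi @ theta)
        return theta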

6.3 LMS with the kernel trick

The gradient descent update, or stochastic gradient update above becomes compu-
tationally expensive when the features φ( x ) is high-dimensional. For example, con-
sider the direct extension of the feature map in equation 6.1 to high-dimensional
input x: suppose x ∈ Rd , and let φ( x ) be the vector that contains all the monomials


of x with degree ≤ 3:

    φ(x) = [ 1, x_1, x_2, . . . , x_1², x_1 x_2, x_1 x_3, . . . , x_2 x_1, . . . ,
             x_1³, x_1² x_2, . . . ]^⊤                                       (6.6)

The dimension of the features φ(x) is on the order of d³.1 This is a prohibitively
long vector for computational purposes — when d = 1000, each update requires at
least computing and storing a 1000³ = 10⁹ dimensional vector, which is 10⁶ times
slower than the update rule for ordinary least squares updates in equation 6.3.

    1. Here, for simplicity, we include all the monomials with repetitions (so that,
    e.g., x_1 x_2 x_3 and x_2 x_3 x_1 both appear in φ(x)). Therefore, there are totally
    1 + d + d² + d³ entries in φ(x).

    It may appear at first that such d³ runtime per update and memory usage are
inevitable, because the vector θ itself is of dimension p ≈ d³, and we may need
to update every entry of θ and store it. However, we will introduce the kernel
trick with which we will not need to store θ explicitly, and the runtime can be
significantly improved.
    For simplicity, we assume we initialize the value θ = 0, and we focus on the
iterative update in equation 6.4. The main observation is that at any time, θ can
be represented as a linear combination of the vectors φ( x (1) ), . . . , φ( x (n) ). Indeed,
we can show this inductively as follows. At initialization, θ = 0 = ∑in=1 0 · φ( x (i) ).
Assume at some point, θ can be represented as
    θ = ∑_{i=1}^n β_i φ(x^{(i)})                                             (6.7)


for some β 1 , . . . , β n ∈ R. Then we claim that in the next round, θ is still a linear
combination of φ( x (1) ), . . . , φ( x (n) ) because
    θ := θ + α ∑_{i=1}^n ( y^{(i)} − θ^⊤ φ(x^{(i)}) ) φ(x^{(i)})                              (6.8)

      = ∑_{i=1}^n β_i φ(x^{(i)}) + α ∑_{i=1}^n ( y^{(i)} − θ^⊤ φ(x^{(i)}) ) φ(x^{(i)})        (6.9)

      = ∑_{i=1}^n ( β_i + α ( y^{(i)} − θ^⊤ φ(x^{(i)}) ) ) φ(x^{(i)})                         (6.10)

(the quantity multiplying φ(x^{(i)}) in (6.10) is the new β_i).

You may realize that our general strategy is to implicitly represent the p-dimensional
vector θ by a set of coefficients β 1 , . . . , β n . Towards doing this, we derive the up-
date rule of the coefficients β 1 , . . . , β n . Using the equation above, we see that the
new β i depends on the old one via:
 
    β_i := β_i + α ( y^{(i)} − θ^⊤ φ(x^{(i)}) )                              (6.11)

Here we still have the old θ on the RHS of the equation. Replacing θ by θ =
∑_{j=1}^n β_j φ(x^{(j)}) gives:

    ∀i ∈ {1, . . . , n},   β_i := β_i + α ( y^{(i)} − ∑_{j=1}^n β_j φ(x^{(j)})^⊤ φ(x^{(i)}) )

We often rewrite φ( x ( j) )> φ( x (i) ) as hφ( x ( j) ), φ( x (i) )i to emphasize that it’s the
inner product of the two feature vectors. Viewing β i ’s as the new representation
of θ, we have successfully translated the batch gradient descent algorithm into
an algorithm that updates the value of β iteratively. It may appear that at every
iteration, we still need to compute the values of hφ( x ( j) ), φ( x (i) )i for all pairs of
i, j, each of which may take roughly O( p) operation. However, two important
properties come to rescue:

1. We can pre-compute the pairwise inner products hφ( x ( j) ), φ( x (i) )i for all pairs
of i, j before the loop starts.


2. For the feature map φ defined in 6.6 (or many other interesting feature maps),
computing hφ( x ( j) ), φ( x (i) )i can be efficient and does not necessarily require
computing φ( x (i) ) explicitly. This is because:
       ⟨φ(x), φ(z)⟩ = 1 + ∑_{i=1}^d x_i z_i + ∑_{i,j∈{1,...,d}} x_i x_j z_i z_j + ∑_{i,j,k∈{1,...,d}} x_i x_j x_k z_i z_j z_k   (6.12)

                    = 1 + ∑_{i=1}^d x_i z_i + ( ∑_{i=1}^d x_i z_i )² + ( ∑_{i=1}^d x_i z_i )³                                  (6.13)

                    = 1 + ⟨x, z⟩ + ⟨x, z⟩² + ⟨x, z⟩³                                                                           (6.14)

   Therefore, to compute ⟨φ(x), φ(z)⟩, we can first compute ⟨x, z⟩ with O(d) time
   and then take another constant number of operations to compute 1 + ⟨x, z⟩ +
   ⟨x, z⟩² + ⟨x, z⟩³.
As you will see, the inner products between the features ⟨φ(x), φ(z)⟩ are essen-
tial here. We define the Kernel corresponding to the feature map φ as a function
that maps X × X → R satisfying:2

    K(x, z) ≜ ⟨φ(x), φ(z)⟩                                                   (6.15)

    2. Recall that X is the space of the input x. In our running example, X = R^d.

To wrap up the discussion, we write down the final algorithm as follows:

1. Compute all the values K(x^{(i)}, x^{(j)}) ≜ ⟨φ(x^{(i)}), φ(x^{(j)})⟩ using equation 6.14 for
   all i, j ∈ {1, . . . , n}. Set β := 0.

2. Loop:

       ∀i ∈ {1, . . . , n},   β_i := β_i + α ( y^{(i)} − ∑_{j=1}^n β_j K(x^{(i)}, x^{(j)}) )    (6.16)

   Or in vector notation, letting K be the n × n matrix with K_{ij} = K(x^{(i)}, x^{(j)}), we
   have:

       β := β + α(y − Kβ)

With the algorithm above, we can update the representation β of the vector θ
efficiently with O(n2 ) time per update. Finally, we need to show that the knowl-
edge of the representation β suffices to compute the prediction θ > φ( x ). Indeed,
we have:
    θ^⊤ φ(x) = ∑_{i=1}^n β_i φ(x^{(i)})^⊤ φ(x) = ∑_{i=1}^n β_i K(x^{(i)}, x)    (6.17)


You may realize that fundamentally all we need to know about the feature map
φ(·) is encapsulated in the corresponding kernel function K (·, ·). We will expand
on this in the next section.
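The following NumPy sketch implements the kernelized update (6.16) in its vector form and the prediction rule (6.17) for the polynomial kernel (6.14); the names and step size are illustrative assumptions:

    import numpy as np

    def poly3_kernel(X, Z):
        # K(x, z) = 1 + <x, z> + <x, z>^2 + <x, z>^3, computed for all pairs of rows.
        G = X @ Z.T
        return 1 + G + G**2 + G**3

    def fit_kernel_lms(X, y, alpha=0.01, iters=200):
        K = poly3_kernel(X, X)               # precompute the n x n kernel matrix
        beta = np.zeros(len(y))
        for _ in range(iters):
            beta += alpha * (y - K @ beta)   # vectorized form of update (6.16)
        return beta

    def predict_kernel_lms(x_new, X_train, beta):
        # theta^T phi(x) = sum_i beta_i K(x^(i), x), as in (6.17).
        k = poly3_kernel(X_train, x_new.reshape(1, -1)).ravel()
        return beta @ k

Note that each iteration costs O(n²) rather than O(np), and the feature map φ is never materialized.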

6.4 Properties of kernels

In the last subsection, we started with an explicitly defined feature map φ, which
induces the kernel function K(x, z) ≜ ⟨φ(x), φ(z)⟩. Then we saw that the kernel
function is so intrinsic that, as long as the kernel function is defined, the whole
training algorithm can be written entirely in the language of the kernel without
referring to the feature map φ, and so can the prediction of a test example x
(equation 6.17).
Therefore, it would be tempting to define other kernel functions K (·, ·) and
run the algorithm 6.16. Note that the algorithm 6.16 does not need to explicitly
access the feature map φ, and therefore we only need to ensure the existence of
the feature map φ, but do not necessarily need to be able to explicitly write φ
down.
What kinds of functions K (·, ·) can correspond to some feature map φ? In other
words, can we tell if there is some feature mapping φ so that K ( x, z) = φ( x )> φ(z)
for all x, z?
If we can answer this question by giving a precise characterization of valid
kernel functions, then we can completely change the interface of selecting feature
maps φ to the interface of selecting kernel function K. Concretely, we can pick a
function K, verify that it satisfies the characterization (so that there exists a feature
map φ that K corresponds to), and then we can run update rule 6.16. The benefit
here is that we don’t have to be able to compute φ or write it down analytically,
and we only need to know its existence. We will answer this question at the end
of this subsection after we go through several concrete examples of kernels.
Suppose x, z ∈ Rd , and let’s first consider the function K (·, ·) defined as:

K ( x, z) = ( x > z)2


We can also write this as

    K(x, z) = ( ∑_{i=1}^d x_i z_i ) ( ∑_{j=1}^d x_j z_j )
            = ∑_{i=1}^d ∑_{j=1}^d x_i x_j z_i z_j
            = ∑_{i,j=1}^d (x_i x_j)(z_i z_j)

Thus, we see that K(x, z) = ⟨φ(x), φ(z)⟩ is the kernel function that corresponds
to the feature mapping φ given (shown here for the case of d = 3) by

    φ(x) = [ x_1 x_1, x_1 x_2, x_1 x_3, x_2 x_1, x_2 x_2, x_2 x_3, x_3 x_1, x_3 x_2, x_3 x_3 ]^⊤.

Revisiting the computational efficiency perspective of kernels, note that whereas
calculating the high-dimensional φ(x) requires O(d²) time, finding K(x, z) takes
only O(d) time—linear in the dimension of the input attributes.
    For another related example, also consider K(·, ·) defined by

    K(x, z) = (x^⊤ z + c)²
            = ∑_{i,j=1}^d (x_i x_j)(z_i z_j) + ∑_{i=1}^d (√(2c) x_i)(√(2c) z_i) + c².


(Check this yourself.) This function K is a kernel function that corresponds to
the feature mapping (again shown for d = 3)

    φ(x) = [ x_1 x_1, x_1 x_2, x_1 x_3, x_2 x_1, x_2 x_2, x_2 x_3, x_3 x_1, x_3 x_2, x_3 x_3,
             √(2c) x_1, √(2c) x_2, √(2c) x_3, c ]^⊤,

and the parameter c controls the relative weighting between the xi (first order)
and the xi x j (second order) terms.
More broadly, the kernel K(x, z) = (x^⊤ z + c)^k corresponds to a feature mapping
to a (d+k choose k)-dimensional feature space, consisting of all monomials of the
form x_{i_1} x_{i_2} · · · x_{i_k} that are up to order k. However, despite working in this
O(d^k)-dimensional space, computing K(x, z) still takes only O(d) time, and hence we
never need to explicitly represent feature vectors in this very high dimensional
feature space.

Kernels as similarity metrics. Now, let’s talk about a slightly different view
of kernels. Intuitively, (and there are things wrong with this intuition, but nev-
ermind), if φ( x ) and φ(z) are close together, then we might expect K ( x, z) =
φ( x )> φ(z) to be large. Conversely, if φ( x ) and φ(z) are far apart— say nearly
orthogonal to each other—then K ( x, z) = φ( x )> φ(z) will be small. So, we can
think of K ( x, z) as some measurement of how similar are φ( x ) and φ(z), or of
how similar are x and z.
Given this intuition, suppose that for some learning problem that you’re work-
ing on, you’ve come up with some function K ( x, z) that you think might be a


reasonable measure of how similar x and z are. For instance, perhaps you chose

    K(x, z) = exp( −‖x − z‖² / (2σ²) ).
This is a reasonable measure of x and z’s similarity, and is close to 1 when x and z
are close, and near 0 when x and z are far apart. Does there exist a feature map
φ such that the kernel K defined above satisfies K ( x, z) = φ( x )> φ(z)? In this
particular example, the answer is yes. This kernel is called the Gaussian kernel,
and corresponds to an infinite dimensional feature mapping φ. We will give a
precise characterization about what properties a function K needs to satisfy so
that it can be a valid kernel function that corresponds to some feature map φ.
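As an aside, a short NumPy sketch of the Gaussian kernel applied to full data matrices (the names and the bandwidth value are illustrative assumptions):

    import numpy as np

    def gaussian_kernel(X, Z, sigma=1.0):
        # K(x, z) = exp(-||x - z||^2 / (2 sigma^2)) for every pair of rows of X and Z.
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Z**2, axis=1)[None, :]
                    - 2 * X @ Z.T)
        return np.exp(-sq_dists / (2 * sigma**2))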

Necessary conditions for valid kernels. Suppose for now that K is indeed a
valid kernel corresponding to some feature mapping φ, and we will first see what
properties it satisfies. Now, consider some finite set of n points (not necessarily
the training set) { x (1) , . . . , x (n) }, and let a square, n-by-n matrix K be defined
so that its (i, j)-entry is given by Kij = K ( x (i) , x ( j) ). This matrix is called the
kernel matrix. Note that we’ve overloaded the notation and used K to denote
both the kernel function K ( x, z) and the kernel matrix K, due to their obvious
close relationship.
Now, if K is a valid kernel, then Kij = K ( x (i) , x ( j) ) = φ( x (i) )> φ( x ( j) ) =
φ( x ( j) )> φ( x (i) ) = K ( x ( j) , x (i) ) = K ji , and hence K must be symmetric. More-
over, letting φk ( x ) denote the k-th coordinate of the vector φ( x ), we find that for
any vector z, we have

    z^⊤ K z = ∑_i ∑_j z_i K_{ij} z_j
            = ∑_i ∑_j z_i φ(x^{(i)})^⊤ φ(x^{(j)}) z_j
            = ∑_i ∑_j z_i ∑_k φ_k(x^{(i)}) φ_k(x^{(j)}) z_j
            = ∑_k ∑_i ∑_j z_i φ_k(x^{(i)}) φ_k(x^{(j)}) z_j
            = ∑_k ( ∑_i z_i φ_k(x^{(i)}) )²
            ≥ 0.


The second-to-last step uses the fact that ∑i,j ai a j = (∑i ai )2 for ai = zi φk ( x (i) ).
Since z was arbitrary, this shows that K is positive semi-definite (K ≥ 0).
Hence, we’ve shown that if K is a valid kernel (i.e., if it corresponds to some
feature mapping φ), then the corresponding kernel matrix K ∈ Rn×n is symmetric
positive semidefinite.

Sufficient conditions for valid kernels. More generally, the condition above
turns out to be not only a necessary, but also a sufficient, condition for K to
be a valid kernel (also called a Mercer kernel). The following result is due to
Mercer.3

Theorem (Mercer). Let K : R^d × R^d → R be given. Then for K to be a valid
(Mercer) kernel, it is necessary and sufficient that for any {x^{(1)}, . . . , x^{(n)}}, (n <
∞), the corresponding kernel matrix is symmetric positive semi-definite.

    3. Many texts present Mercer's theorem in a slightly more complicated form
    involving L² functions, but when the input attributes take values in R^d, the
    version given here is equivalent.
Given a function K, apart from trying to find a feature mapping φ that corre-
sponds to it, this theorem therefore gives another way of testing if it is a valid
kernel. You’ll also have a chance to play with these ideas more in problem set 2.
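A quick numerical sanity check of this condition (build the kernel matrix on a sample of points, then verify symmetry and positive semi-definiteness) might look like the sketch below; it is illustrative only, and the small negative tolerance absorbs floating-point error:

    import numpy as np

    def looks_like_valid_kernel(kernel_fn, X, tol=1e-8):
        # Builds K_ij = K(x^(i), x^(j)) and checks the Mercer condition on this sample.
        n = X.shape[0]
        K = np.array([[kernel_fn(X[i], X[j]) for j in range(n)] for i in range(n)])
        symmetric = np.allclose(K, K.T)
        psd = np.all(np.linalg.eigvalsh((K + K.T) / 2) >= -tol)
        return symmetric and psd

Passing the check on one sample does not prove validity, but a failure does rule the function out as a kernel.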
In class, we also briefly talked about a couple of other examples of kernels.
For instance, consider the digit recognition problem, in which given an image
(16 × 16 pixels) of a handwritten digit (0-9), we have to figure out which digit it
was. Using either a simple polynomial kernel K ( x, z) = ( x > z)k or the Gaussian
kernel, support vector machines (SVMs) were able to obtain extremely good
performance on this problem. This was particularly surprising since the input
attributes x were just 256-dimensional vectors of the image pixel intensity values,
and the system had no prior knowledge about vision, or even about which pixels
are adjacent to which other ones. Another example that we briefly talked about
in lecture was that if the objects x that we are trying to classify are strings (say, x
is a list of amino acids, which strung together form a protein), then it seems hard
to construct a reasonable, ‘‘small’’ set of features for most learning algorithms,
especially if different strings have different lengths. However, consider letting
φ( x ) be a feature vector that counts the number of occurrences of each length-k
substring in x. If we're considering strings of English letters, then there are 26^k
such strings. Hence, φ(x) is a 26^k-dimensional vector; even for moderate values
of k, this is probably too big for us to efficiently work with. (e.g., 26⁴ ≈ 460000.)
However, using (dynamic programming-ish) string matching algorithms, it is


possible to efficiently compute K ( x, z) = φ( x )> φ(z), so that we can now implicitly


work in this 26k -dimensional feature space, but without ever explicitly computing
feature vectors in this space.

Application of kernel methods. We’ve seen the application of kernels to linear


regression. In the next part, we will introduce the support vector machines to
which kernels can be directly applied. We won’t dwell too much longer on it
here. In fact, the idea of kernels has significantly broader applicability than linear
regression and SVMs. Specifically, if you have any learning algorithm that you can
write in terms of only inner products h x, zi between input attribute vectors, then
by replacing this with K ( x, z) where K is a kernel, you can ‘‘magically’’ allow your
algorithm to work efficiently in the high dimensional feature space corresponding
to K. For instance, this kernel trick can be applied with the perceptron to derive a
kernel perceptron algorithm. Many of the algorithms that we’ll see later in this
class will also be amenable to this method, which has come to be known as the
‘‘kernel trick.’’



Part IV: Support Vector Machines
(From CS229 Fall 2020, Tengyu Ma, Andrew Ng, Moses Charikar, & Christopher Ré, Stanford University.)

7 Support vector machines

This set of notes presents the Support Vector Machine (SVM) learning algorithm.
SVMs are among the best (and many believe are indeed the best) ‘‘off-the-shelf’’
supervised learning algorithms. To tell the SVM story, we’ll need to first talk
about margins and the idea of separating data with a large ‘‘gap.’’ Next, we’ll
talk about the optimal margin classifier, which will lead us into a digression
on Lagrange duality. We’ll also see kernels, which give a way to apply SVMs
efficiently in very high dimensional (such as infinite-dimensional) feature spaces,
and finally, we’ll close off the story with the SMO algorithm, which gives an
efficient implementation of SVMs.

7.1 Margins: Intuition

We’ll start our story on SVMs by talking about margins. This section will give the
intuitions about margins and about the ‘‘confidence’’ of our predictions; these
ideas will be made formal in Section 7.3.
Consider logistic regression, where the probability p(y = 1 | x; θ ) is modeled
by hθ ( x ) = g(θ > x ). We then predict ‘‘1’’ on an input x if and only if hθ ( x ) ≥ 0.5,
or equivalently, if and only if θ > x ≥ 0. Consider a positive training example
(y = 1). The larger θ > x is, the larger also is hθ ( x ) = p(y = 1 | x; θ ), and thus also
the higher our degree of ‘‘confidence’’ that the label is 1. Thus, informally we can
think of our prediction as being very confident that y = 1 if θ^⊤ x ≫ 0. Similarly,
we think of logistic regression as confidently predicting y = 0, if θ^⊤ x ≪ 0. Given
a training set, again informally it seems that we'd have found a good fit to the
training data if we can find θ so that θ^⊤ x^{(i)} ≫ 0 whenever y^{(i)} = 1, and θ^⊤ x^{(i)} ≪ 0
whenever y(i) = 0, since this would reflect a very confident (and correct) set of
classifications for all the training examples. This seems to be a nice goal to aim
for, and we’ll soon formalize this idea using the notion of functional margins.

For a different type of intuition, consider the following figure, in which x’s
represent positive training examples, o’s denote negative training examples, a
decision boundary (this is the line given by the equation θ > x = 0, and is also
called the separating hyperplane) is also shown, and three points have also been
labeled A, B and C.
Notice that the point A is very far from the decision boundary. If we are asked
to make a prediction for the value of y at A, it seems we should be quite confident
that y = 1 there. Conversely, the point C is very close to the decision boundary,
and while it’s on the side of the decision boundary on which we would predict
y = 1, it seems likely that just a small change to the decision boundary could
easily have caused our prediction to be y = 0. Hence, we're much more confident
about our prediction at A than at C. The point B lies in-between these two cases,
and more broadly, we see that if a point is far from the separating hyperplane,
then we may be significantly more confident in our predictions. Again, informally
we think it would be nice if, given a training set, we manage to find a decision
boundary that allows us to make all correct and confident (meaning far from the
decision boundary) predictions on the training examples. We’ll formalize this
later using the notion of geometric margins.

7.2 Notation

To make our discussion of SVMs easier, we’ll first need to introduce a new no-
tation for talking about classification. We will be considering a linear classifier
for a binary classification problem with labels y and features x. From now, we’ll
use y ∈ {−1, 1} (instead of {0, 1}) to denote the class labels. Also, rather than
parameterizing our linear classifier with the vector θ, we will use parameters w, b,
and write our classifier as

hw,b ( x ) = g(w> x + b).

Here, g(z) = 1 if z ≥ 0, and g(z) = −1 otherwise. This ‘‘w, b’’ notation allows us
to explicitly treat the intercept term b separately from the other parameters. (We
also drop the convention we had previously of letting x0 = 1 be an extra coordinate
in the input feature vector.) Thus, b takes the role of what was previously θ0 , and
w takes the role of [θ1 . . . θd ]> .


Note also that, from our definition of g above, our classifier will directly predict
either 1 or −1 (cf. the perceptron algorithm), without first going through the
intermediate step of estimating p(y = 1) (which is what logistic regression does).

7.3 Functional and geometric margins

Let’s formalize the notions of the functional and geometric margins. Given a train-
ing example ( x (i) , y(i) ), we define the functional margin of (w, b) with respect to
the training example as
γ̂(i) = y(i) (w> x (i) + b).
Note that if y(i) = 1, then for the functional margin to be large (i.e., for our
prediction to be confident and correct), we need w> x (i) + b to be a large positive
number. Conversely, if y(i) = −1, then for the functional margin to be large, we
need w> x (i) + b to be a large negative number. Moreover, if y(i) (w> x (i) + b) > 0,
then our prediction on this example is correct. (Check this yourself.) Hence, a
large functional margin represents a confident and a correct prediction.
For a linear classifier with the choice of g given above (taking values in {−1, 1}),
there’s one property of the functional margin that makes it not a very good
measure of confidence, however. Given our choice of g, we note that if we replace
w with 2w and b with 2b, then since g(w> x + b) = g(2w> x + 2b), this would not
change hw,b ( x ) at all. I.e., g, and hence also hw,b ( x ), depends only on the sign,
but not on the magnitude, of w> x + b. However, replacing (w, b) with (2w, 2b)
also results in multiplying our functional margin by a factor of 2. Thus, it seems
that by exploiting our freedom to scale w and b, we can make the functional
margin arbitrarily large without really changing anything meaningful. Intuitively,
it might therefore make sense to impose some sort of normalization condition
such as that kwk2 = 1; i.e., we might replace (w, b) with (w/kwk2 , b/kwk2 ), and
instead consider the functional margin of (w/kwk2 , b/kwk2 ). We’ll come back to
this later.
Given a training set S = {(x^{(i)}, y^{(i)}); i = 1, . . . , n}, we also define the functional
margin of (w, b) with respect to S as the smallest of the functional margins of the
individual training examples. Denoted by γ̂, this can therefore be written:

    γ̂ = min_{i=1,...,n} γ̂^{(i)}

Next, let’s talk about geometric margins. Consider the picture below:


The decision boundary corresponding to (w, b) is shown, along with the vector
w. Note that w is orthogonal (at 90◦ ) to the separating hyperplane. (You should
convince yourself that this must be the case.) Consider the point at A, which
represents the input x (i) of some training example with label y(i) = 1. Its distance
to the decision boundary, γ(i) , is given by the line segment AB.
How can we find the value of γ(i) ? Well, w/kwk is a unit-length vector pointing
in the same direction as w. Since A represents x (i) , we therefore find that the point
B is given by x (i) − γ(i) · w/kwk. But this point lies on the decision boundary, and
all points x on the decision boundary satisfy the equation w> x + b = 0. Hence,
 
    w^⊤ ( x^{(i)} − γ^{(i)} · w/‖w‖ ) + b = 0.

Solving for γ^{(i)} yields

    γ^{(i)} = (w^⊤ x^{(i)} + b) / ‖w‖ = (w/‖w‖)^⊤ x^{(i)} + b/‖w‖.
This was worked out for the case of a positive training example at A in the figure,
where being on the ‘‘positive’’ side of the decision boundary is good. More
generally, we define the geometric margin of (w, b) with respect to a training
example ( x (i) , y(i) ) to be

    γ^{(i)} = y^{(i)} ( (w/‖w‖)^⊤ x^{(i)} + b/‖w‖ ).
Note that if kwk = 1, then the functional margin equals the geometric margin—
this thus gives us a way of relating these two different notions of margin. Also,
the geometric margin is invariant to rescaling of the parameters; i.e., if we replace
w with 2w and b with 2b, then the geometric margin does not change. This will
in fact come in handy later. Specifically, because of this invariance to the scaling
of the parameters, when trying to fit w and b to training data, we can impose
an arbitrary scaling constraint on w without changing anything important; for
instance, we can demand that kwk = 1, or |w1 | = 5, or |w1 + b| + |w2 | = 2, and
any of these can be satisfied simply by rescaling w and b.
Finally, given a training set S = {( x (i) , y(i) ); i = 1, . . . , n}, we also define the
geometric margin of (w, b) with respect to S to be the smallest of the geometric
margins on the individual training examples:

    γ = min_{i=1,...,n} γ^{(i)}.
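A small NumPy sketch that computes these two margins for a dataset (names are illustrative assumptions; labels are in {−1, 1}):

    import numpy as np

    def functional_margin(w, b, X, y):
        # gamma_hat^(i) = y^(i) (w^T x^(i) + b); the set margin is the minimum over examples.
        return np.min(y * (X @ w + b))

    def geometric_margin(w, b, X, y):
        # Same quantity after normalizing by ||w||, so it is invariant to rescaling (w, b).
        return functional_margin(w, b, X, y) / np.linalg.norm(w)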


7.4 The optimal margin classifier

Given a training set, it seems from our previous discussion that a natural desider-
atum is to try to find a decision boundary that maximizes the (geometric) margin,
since this would reflect a very confident set of predictions on the training set and
a good ‘‘fit’’ to the training data. Specifically, this will result in a classifier that
separates the positive and the negative training examples with a ‘‘gap’’ (geometric
margin).
For now, we will assume that we are given a training set that is linearly sep-
arable; i.e., that it is possible to separate the positive and negative examples
using some separating hyperplane. How will we find the one that achieves the
maximum geometric margin? We can pose the following optimizationproblem:

    max_{γ,w,b}   γ
    s. t.   y^{(i)} (w^⊤ x^{(i)} + b) ≥ γ,   i = 1, . . . , n
            ‖w‖ = 1.

I.e., we want to maximize γ, subject to each training example having functional


margin at least γ. The kwk = 1 constraint moreover ensures that the functional
margin equals to the geometric margin, so we are also guaranteed that all the
geometric margins are at least γ. Thus, solving this problem will result in (w, b)
with the largest possible geometric margin with respect to the training set.
If we could solve the optimization problem above, we’d be done. But the
‘‘kwk = 1’’ constraint is a nasty (non-convex) one, and this problem certainly
isn’t in any format that we can plug into standard optimization software to solve.
So, let’s try transforming the problem into a nicer one. Consider:
    max_{γ̂,w,b}   γ̂ / ‖w‖
    s. t.   y^{(i)} (w^⊤ x^{(i)} + b) ≥ γ̂,   i = 1, . . . , n

Here, we're going to maximize γ̂/‖w‖, subject to the functional margins all being
at least γ̂. Since the geometric and functional margins are related by γ = γ̂/‖w‖,
this will give us the answer we want. Moreover, we've gotten rid of the constraint
‖w‖ = 1 that we didn't like. The downside is that we now have a nasty (again,
non-convex) objective γ̂/‖w‖ function; and, we still don't have any off-the-shelf
software that can solve this form of an optimization problem.


Let’s keep going. Recall our earlier discussion that we can add an arbitrary
scaling constraint on w and b without changing anything. This is the key idea
we’ll use now. We will introduce the scaling constraint that the functional margin
of w, b with respect to the training set must be 1:

γ̂ = 1

Since multiplying w and b by some constant results in the functional margin being
multiplied by that same constant, this is indeed a scaling constraint, and can be
satisfied by rescaling w, b. Plugging this into our problem above, and noting that
maximizing γ̂/‖w‖ = 1/‖w‖ is the same thing as minimizing ‖w‖², we now
have the following optimization problem:

    min_{w,b}   (1/2) ‖w‖²
    s. t.   y^{(i)} (w^⊤ x^{(i)} + b) ≥ 1,   i = 1, . . . , n

We’ve now transformed the problem into a form that can be efficiently solved.
The above is an optimization problem with a convex quadratic objective and
only linear constraints. Its solution gives us the optimal margin classifier. This
optimization problem can be solved using commercial quadratic programming
(QP) code.1

    1. You may be familiar with linear programming, which solves optimization
    problems that have linear objectives and linear constraints. QP software is also
    widely available, which allows convex quadratic objectives and linear constraints.

    While we could call the problem solved here, what we will instead do is make
a digression to talk about Lagrange duality. This will lead us to our optimization
problem's dual form, which will play a key role in allowing us to use kernels to
get optimal margin classifiers to work efficiently in very high dimensional spaces.
The dual form will also allow us to derive an efficient algorithm for solving the
above optimization problem that will typically do much better than generic QP
software.

7.5 Lagrange duality (optional reading)

Let’s temporarily put aside SVMs and maximum margin classifiers, and talk about
solving constrained optimization problems. Consider a problem of the following
form:

min f (w)
w
s. t. hi (w) = 0, i = 1, . . . , l.


Some of you may recall how the method of Lagrange multipliers can be used to
solve it. (Don’t worry if you haven’t seen it before.) In this method, we define the
Lagrangian to be
l
L(w, β) = f (w) + ∑ β i hi (w)
i =1
Here, the β i ’s are called the Lagrange multipliers. We would then find and set
L’s partial derivatives to zero:
\[
\frac{\partial \mathcal{L}}{\partial w_i} = 0; \qquad \frac{\partial \mathcal{L}}{\partial \beta_i} = 0,
\]
and solve for w and β.
In this section, we will generalize this to constrained optimization problems
in which we may have inequality as well as equality constraints. Due to time
constraints, we won’t really be able to do the theory of Lagrange duality justice in
this class,² but we will give the main ideas and results, which we will then apply
to our optimal margin classifier's optimization problem.

² Readers interested in learning more about this topic are encouraged to read, e.g., R. T. Rockafellar (1970), Convex Analysis, Princeton University Press.

Consider the following, which we'll call the primal optimization problem:

\[
\begin{aligned}
\min_{w} \quad & f(w) \\
\text{s.t.} \quad & g_i(w) \le 0, \quad i = 1, \ldots, k \\
& h_i(w) = 0, \quad i = 1, \ldots, l.
\end{aligned}
\]

To solve it, we start by defining the generalized Lagrangian


k l
L(w, α, β) = f (w) + ∑ αi gi (w) + ∑ β i hi (w).
i =1 i =1

Here, the αi ’s and β i ’s are the Lagrange multipliers. Consider the quantity

θP (w) = max L(w, α, β).


α,β:αi ≥0

Here, the ‘‘P ’’ subscript stands for ‘‘primal.’’ Let some w be given. If w violates
any of the primal constraints (i.e., if either gi (w) > 0 or hi (w) 6= 0 for some i),
then you should be able to verify that
k l
θP (w) = max f (w) + ∑ αi gi (w) + ∑ β i hi (w)
α,β:αi ≥0 i =1 i =1
= ∞.


Conversely, if the constraints are indeed satisfied for a particular value of w, then
θP (w) = f (w). Hence,

\[
\theta_{\mathcal{P}}(w) =
\begin{cases}
f(w) & \text{if } w \text{ satisfies primal constraints} \\
\infty & \text{otherwise.}
\end{cases}
\]

Thus, θP takes the same value as the objective in our problem for all values of w
that satisfies the primal constraints, and is positive infinity if the constraints are
violated. Hence, if we consider the minimization problem

min θP (w) = min max L(w, α, β),


w w α,β:αi ≥0

we see that it is the same problem as (i.e., has the same solutions as) our original,
primal problem. For later use, we also define the optimal value of the objective to
be p∗ = minw θP (w); we call this the value of the primal problem.
Now, let’s look at a slightly different problem. We define

θD (α, β) = min L(w, α, β).


w

Here, the ‘‘D ’’ subscript stands for ‘‘dual.’’ Note also that whereas in the defi-
nition of θP we were optimizing (maximizing) with respect to α, β, here we are
minimizing with respect to w.
We can now pose the dual optimization problem:

max θD (α, β) = max min L(w, α, β).


α,β:αi ≥0 α,β:αi ≥0 w

This is exactly the same as our primal problem shown above, except that the order
of the ‘‘max’’ and the ‘‘min’’ are now exchanged. We also define the optimal value
of the dual problem’s objective to be d∗ = maxα,β:αi ≥0 θD (w).
How are the primal and the dual problems related? It can easily be shown that

d∗ = max min L(w, α, β) ≤ min max L(w, α, β) = p∗ .


α,β:αi ≥0 w w α,β:αi ≥0

(You should convince yourself of this; this follows from the ‘‘max min’’ of a
function always being less than or equal to the ‘‘min max.’’) However, under
certain conditions, we will have

d∗ = p∗ ,


so that we can solve the dual problem in lieu of the primal problem. Let’s see
what these conditions are.
Suppose f and the gi 's are convex,³ and the hi 's are affine.⁴ Suppose further
that the constraints gi are (strictly) feasible; this means that there exists some w
so that gi (w) < 0 for all i.

³ When f has a Hessian, then it is convex if and only if the Hessian is positive semi-definite. For instance, f (w) = w> w is convex; similarly, all linear (and affine) functions are also convex. (A function f can also be convex without being differentiable, but we won't need those more general definitions of convexity here.)
⁴ I.e., there exist ai , bi , so that hi (w) = ai> w + bi . ''Affine'' means the same thing as linear, except that we also allow the extra intercept term bi .

Under our above assumptions, there must exist w∗ , α∗ , β∗ so that w∗ is the
solution to the primal problem, α∗ , β∗ are the solution to the dual problem, and
moreover p∗ = d∗ = L(w∗ , α∗ , β∗ ). Moreover, w∗ , α∗ and β∗ satisfy the Karush-
Kuhn-Tucker (KKT) conditions, which are as follows:

\[
\begin{aligned}
\frac{\partial}{\partial w_i} \mathcal{L}(w^*, \alpha^*, \beta^*) &= 0, \quad i = 1, \ldots, d && (7.1) \\
\frac{\partial}{\partial \beta_i} \mathcal{L}(w^*, \alpha^*, \beta^*) &= 0, \quad i = 1, \ldots, l && (7.2) \\
\alpha_i^* g_i(w^*) &= 0, \quad i = 1, \ldots, k && (7.3) \\
g_i(w^*) &\le 0, \quad i = 1, \ldots, k && (7.4) \\
\alpha_i^* &\ge 0, \quad i = 1, \ldots, k && (7.5)
\end{aligned}
\]

Moreover, if some w∗ , α∗ , β∗ satisfy the KKT conditions, then they are also a solution
to the primal and dual problems.
We draw attention to Equation (7.3), which is called the KKT dual comple-
mentarity condition. Specifically, it implies that if αi∗ > 0, then gi (w∗ ) = 0. (I.e.,
the ‘‘gi (w) ≤ 0’’ constraint is active, meaning it holds with equality rather than
with inequality.) Later on, this will be key for showing that the SVM has only a
small number of ‘‘support vectors’’; the KKT dual complementarity condition
will also give us our convergence test when we talk about the SMO algorithm.

7.6 Optimal margin classifiers

Note: The equivalence of optimization problem 7.6 and the optimization problem 7.11,
and the relationship between the primal and dual variables in equation 7.8 are the most
important take home messages of this section.


Previously, we posed the following (primal) optimization problem for finding
the optimal margin classifier:

\[
\begin{aligned}
\min_{w, b} \quad & \frac{1}{2}\|w\|^2 && (7.6) \\
\text{s.t.} \quad & y^{(i)}(w^\top x^{(i)} + b) \ge 1, \quad i = 1, \ldots, n && (7.7)
\end{aligned}
\]

We can write the constraints as

gi (w) = −y(i) (w> x (i) + b) + 1 ≤ 0.

We have one such constraint for each training example. Note that from the KKT
dual complementarity condition, we will have αi > 0 only for the training exam-
ples that have functional margin exactly equal to one (i.e., the ones corresponding
to constraints that hold with equality, gi (w) = 0). Consider the figure below, in
which a maximum margin separating hyperplane is shown by the solid line.
The points with the smallest margins are exactly the ones closest to the deci-
sion boundary; here, these are the three points (one negative and two positive
examples) that lie on the dashed lines parallel to the decision boundary. Thus,
only three of the αi ’s—namely, the ones corresponding to these three training
examples—will be non-zero at the optimal solution to our optimization problem.
These three points are called the support vectors in this problem. The fact that
the number of support vectors can be much smaller than the size of the training set
will be useful later.
Let’s move on. Looking ahead, as we develop the dual form of the problem,
one key idea to watch out for is that we’ll try to write our algorithm in terms of
only the inner product h x (i) , x ( j) i (think of this as ( x (i) )> x ( j) ) between points in
the input feature space. The fact that we can express our algorithm in terms of
these inner products will be key when we apply the kernel trick.
When we construct the Lagrangian for our optimization problem we have:
\[
\mathcal{L}(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{n} \alpha_i \left[ y^{(i)}(w^\top x^{(i)} + b) - 1 \right].
\]

Note that there’re only ‘‘αi ’’ but no ‘‘β i ’’ Lagrange multipliers, since the problem
has only inequality constraints.


Let’s find the dual form of the problem. To do so, we need to first minimize
L(w, b, α) with respect to w and b (for fixed α), to get θD , which we’ll do by setting
the derivatives of L with respect to w and b to zero. We have:
n
∇w L(w, b, α) = w − ∑ αi y(i) x (i) = 0
i =1

This implies that


n
w= ∑ α i y (i ) x (i ) . (7.8)
i =1

As for the derivative with respect to b, we obtain


\[
\frac{\partial}{\partial b} \mathcal{L}(w, b, \alpha) = \sum_{i=1}^{n} \alpha_i y^{(i)} = 0. \tag{7.9}
\]

If we take the definition of w in Equation (7.8) and plug that back into the
Lagrangian above, and simplify, we get
\[
\mathcal{L}(w, b, \alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^\top x^{(j)} - b \sum_{i=1}^{n} \alpha_i y^{(i)}. \tag{7.10}
\]

But from Equation (7.9), the last term must be zero, so we obtain
\[
\mathcal{L}(w, b, \alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^\top x^{(j)}.
\]

Recall that we got to the equation above by minimizing L with respect to w and
b. Putting this together with the constraints αi ≥ 0 (that we always had) and
the constraint from equation (7.9), we obtain the following dual optimization
problem:
\[
\begin{aligned}
\max_{\alpha} \quad & W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y^{(i)} y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)} \rangle && (7.11) \\
\text{s.t.} \quad & \alpha_i \ge 0, \quad i = 1, \ldots, n && (7.12) \\
& \sum_{i=1}^{n} \alpha_i y^{(i)} = 0. && (7.13)
\end{aligned}
\]


You should also be able to verify that the conditions required for p∗ = d∗ and
the KKT conditions (Equations (7.1) to (7.5)) to hold are indeed satisfied in our
optimization problem. Hence, we can solve the dual in lieu of solving the primal
problem. Specifically, in the dual problem above, we have a maximization problem
in which the parameters are the αi ’s. We’ll talk later about the specific algorithm
that we’re going to use to solve the dual problem, but if we are indeed able to
solve it (i.e., find the α’s that maximize W (α) subject to the constraints), then we
can use Equation (7.8) to go back and find the optimal w’s as a function of the α’s.
Having found w∗ , by considering the primal problem, it is also straightforward
to find the optimal value for the intercept term b as

\[
b^* = -\frac{\max_{i : y^{(i)} = -1} w^{*\top} x^{(i)} + \min_{i : y^{(i)} = 1} w^{*\top} x^{(i)}}{2}. \tag{7.14}
\]
(Check for yourself that this is correct.)
Before moving on, let’s also take a more careful look at Equation (7.8), which
gives the optimal value of w in terms of (the optimal value of) α. Suppose we’ve
fit our model’s parameters to a training set, and now wish to make a prediction at
a new point input x. We would then calculate w> x + b, and predict y = 1 if and
only if this quantity is bigger than zero. But using equation (7.8), this quantity
can also be written:
\[
\begin{aligned}
w^\top x + b &= \left( \sum_{i=1}^{n} \alpha_i y^{(i)} x^{(i)} \right)^\top x + b && (7.15) \\
&= \sum_{i=1}^{n} \alpha_i y^{(i)} \langle x^{(i)}, x \rangle + b. && (7.16)
\end{aligned}
\]

Hence, if we’ve found the αi ’s, in order to make a prediction, we have to calculate
a quantity that depends only on the inner product between x and the points in
the training set. Moreover, we saw earlier that the αi ’s will all be zero except for
the support vectors. Thus, many of the terms in the sum above will be zero, and
we really need to find only the inner products between x and the support vectors
(of which there is often only a small number) in order to calculate equation (7.16)
and make our prediction.
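
To make the role of the support vectors concrete, here is a minimal Python/numpy sketch of the prediction rule in equation (7.16). It assumes the dual variables alphas, the training data X and y, and the intercept b have already been produced by some solver (e.g., the SMO algorithm discussed later); all names are illustrative, not part of any particular library.

import numpy as np

def svm_predict(x, X, y, alphas, b):
    # X: (n, d) training inputs; y: (n,) labels in {-1, +1};
    # alphas: (n,) dual variables from the solver; b: intercept.
    sv = alphas > 1e-8                        # only the support vectors contribute
    # w^T x + b = sum_i alpha_i y^(i) <x^(i), x> + b, restricted to support vectors
    score = np.sum(alphas[sv] * y[sv] * (X[sv] @ x)) + b
    return 1 if score > 0 else -1

Because all the other αi 's are zero, the sum only ever touches the (typically few) support vectors.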
By examining the dual form of the optimization problem, we gained significant
insight into the structure of the problem, and were also able to write the entire
algorithm in terms of only inner products between input feature vectors. In the


next section, we will exploit this property to apply kernels to our classification
problem. The resulting algorithm, support vector machines, will be able to
efficiently learn in very high dimensional spaces.

7.7 Regularization and the non-separable case (optional reading)

The derivation of the SVM as presented so far assumed that the data is linearly
separable. While mapping data to a high dimensional feature space via φ does
generally increase the likelihood that the data is separable, we can’t guarantee
that it always will be so. Also, in some cases it is not clear that finding a separating
hyperplane is exactly what we’d want to do, since that might be susceptible to
outliers. For instance, the left figure below shows an optimal margin classifier,
and when a single outlier is added in the upper-left region (right figure), it causes
the decision boundary to make a dramatic swing, and the resulting classifier has
a much smaller margin.
To make the algorithm work for non-linearly separable datasets as well as be less
sensitive to outliers, we reformulate our optimization (using `1 regularization)
as follows:
\[
\begin{aligned}
\min_{w, b, \xi} \quad & \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & y^{(i)}(w^\top x^{(i)} + b) \ge 1 - \xi_i, \quad i = 1, \ldots, n \\
& \xi_i \ge 0, \quad i = 1, \ldots, n.
\end{aligned}
\]

Thus, examples are now permitted to have (functional) margin less than 1, and
if an example has functional margin 1 − ξ i (with ξ i > 0), we would pay a cost
of the objective function being increased by Cξ i . The parameter C controls the
relative weighting between the twin goals of making the kwk2 small (which we
saw earlier makes the margin large) and of ensuring that most examples have
functional margin at least 1.
As before, we can form the Lagrangian:
\[
\mathcal{L}(w, b, \xi, \alpha, r) = \frac{1}{2} w^\top w + C \sum_{i=1}^{n} \xi_i - \sum_{i=1}^{n} \alpha_i \left[ y^{(i)}(x^{(i)\top} w + b) - 1 + \xi_i \right] - \sum_{i=1}^{n} r_i \xi_i.
\]


Here, the αi ’s and ri ’s are our Lagrange multipliers (constrained to be ≥ 0). We


won’t go through the derivation of the dual again in detail, but after setting the
derivatives with respect to w and b to zero as before, substituting them back in,
and simplifying, we obtain the following dual form of the problem:
\[
\begin{aligned}
\max_{\alpha} \quad & W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y^{(i)} y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)} \rangle \\
\text{s.t.} \quad & 0 \le \alpha_i \le C, \quad i = 1, \ldots, n \\
& \sum_{i=1}^{n} \alpha_i y^{(i)} = 0.
\end{aligned}
\]

As before, we also have that w can be expressed in terms of the αi ’s as given


in equation (7.8), so that after solving the dual problem, we can continue to use
equation (7.16) to make our predictions. Note that, somewhat surprisingly, in
adding `1 regularization, the only change to the dual problem is that what was
originally a constraint that 0 ≤ αi has now become 0 ≤ αi ≤ C. The calculation for
b∗ also has to be modified (equation (7.14) is no longer valid); see the comments
in the next section/Platt’s paper.
Also, the KKT dual-complementarity conditions (which in the next section
will be useful for testing for the convergence of the SMO algorithm) are:

\[
\begin{aligned}
\alpha_i = 0 &\implies y^{(i)}(w^\top x^{(i)} + b) \ge 1 && (7.17) \\
\alpha_i = C &\implies y^{(i)}(w^\top x^{(i)} + b) \le 1 && (7.18) \\
0 < \alpha_i < C &\implies y^{(i)}(w^\top x^{(i)} + b) = 1. && (7.19)
\end{aligned}
\]

Now, all that remains is to give an algorithm for actually solving the dual
problem, which we will do in the next section.

7.8 The SMO algorithm (optional reading)

The SMO (sequential minimal optimization) algorithm, due to John Platt, gives
an efficient way of solving the dual problem arising from the derivation of the
SVM. Partly to motivate the SMO algorithm, and partly because it’s interesting in
its own right, let’s first take another digression to talk about the coordinate ascent
algorithm.


7.8.1 Coordinate ascent


Consider trying to solve the unconstrained optimization problem

max W (α1 , α2 , . . . , αn ).
α

Here, we think of W as just some function of the parameters αi ’s, and for now
ignore any relationship between this problem and SVMs. We’ve already seen
two optimization algorithms, gradient ascent and Newton’s method. The new
algorithm we’re going to consider here is called coordinate ascent:

Algorithm 7.1. Coordinate ascent.

repeat
    for i = 1, . . . , n do
        αi := arg maxα̂i W (α1 , . . . , αi−1 , α̂i , αi+1 , . . . , αn ).
    end for
until convergence

Thus, in the innermost loop of this algorithm, we will hold all the variables
except for some αi fixed, and reoptimize W with respect to just the parameter
αi . In the version of this method presented here, the inner-loop reoptimizes the
variables in order α1 , α2 , . . . , αn , α1 , α2 , . . . (A more sophisticated version might
choose other orderings; for instance, we may choose the next variable to update
according to which one we expect to allow us to make the largest increase in
W (α).)
When the function W happens to be of such a form that the ‘‘arg max’’ in the
inner loop can be performed efficiently, then coordinate ascent can be a fairly
efficient algorithm. Here’s a picture of coordinate ascent in action:
The ellipses in the figure are the contours of a quadratic function that we want
to optimize. Coordinate ascent was initialized at (2, −2), and also plotted in the
figure is the path that it took on its way to the global maximum. Notice that on
each step, coordinate ascent takes a step that’s parallel to one of the axes, since
only one variable is being optimized at a time.
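
As a purely illustrative sketch of coordinate ascent in code, consider maximizing a concave quadratic W(α) = −½ α⊤Aα + b⊤α, for which each inner-loop ''arg max'' has a closed form. The matrix A and vector b below are made-up values, and the initialization (2, −2) mirrors the description above.

import numpy as np

# Maximize W(alpha) = -0.5 * alpha^T A alpha + b^T alpha, with A positive definite,
# by exact coordinate ascent: each inner step solves dW/d(alpha_i) = 0 in closed form.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
alpha = np.array([2.0, -2.0])                # initialization

for _ in range(20):                          # "repeat until convergence"
    for i in range(len(alpha)):
        # dW/d(alpha_i) = b_i - sum_j A_ij alpha_j = 0, solved for alpha_i
        rest = A[i] @ alpha - A[i, i] * alpha[i]
        alpha[i] = (b[i] - rest) / A[i, i]

print(alpha)                                 # approaches the global maximizer A^{-1} b

Each inner update changes exactly one coordinate, which is why the path in the figure moves parallel to the axes.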

7.9 SMO

We close off the discussion of SVMs by sketching the derivation of the SMO
algorithm.


Here’s the (dual) optimization problem that we want to solve:


\[
\begin{aligned}
\max_{\alpha} \quad & W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y^{(i)} y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)} \rangle. && (7.20) \\
\text{s.t.} \quad & 0 \le \alpha_i \le C, \quad i = 1, \ldots, n && (7.21) \\
& \sum_{i=1}^{n} \alpha_i y^{(i)} = 0. && (7.22)
\end{aligned}
\]

Let’s say we have set of αi ’s that satisfy the constraints in equations (7.21)
and (7.22). Now, suppose we want to hold α2 , . . . , αn fixed, and take a coordinate
ascent step and reoptimize the objective with respect to α1 . Can we make any
progress? The answer is no, because the constraint 7.22 ensures that
n
α 1 y (1) = − ∑ α i y ( i ) .
i =2

Or, by multiplying both sides by y(1) , we equivalently have


n
α 1 = − y (1) ∑ α i y (i ) .
i =2

(This step used the fact that y(1) ∈ {−1, 1}, and hence (y(1) )2 = 1.) Hence, α1
is exactly determined by the other αi ’s, and if we were to hold α2 , . . . , αn fixed,
then we can’t make any change to α1 without violating the constraint 7.22 in the
optimization problem.
Thus, if we want to update some subset of the αi 's, we must update at least two
of them simultaneously in order to keep satisfying the constraints. This motivates
the SMO algorithm, which simply does the following:
To test for convergence of this algorithm, we can check whether the KKT
conditions (equations (7.17) to (7.19)) are satisfied to within some tol. Here, tol is
the convergence tolerance parameter, and is typically set to around 0.01 to 0.001.
(See the paper and pseudocode for details.)
The key reason that SMO is an efficient algorithm is that the update to αi , α j can
be computed very efficiently. Let’s now briefly sketch the main ideas for deriving
the efficient update.


Algorithm 7.2. SMO algorithm.

repeat
    1. Select some pair αi and α j to update next (using a heuristic that tries to
       pick the two that will allow us to make the biggest progress towards
       the global maximum).
    2. Reoptimize W (α) with respect to αi and α j , while holding all the other
       αk 's (k ≠ i, j) fixed.
until convergence

Let’s say we currently have some setting of the αi ’s that satisfy the constraints
7.21–7.22, and suppose we’ve decided to hold α3 , . . . , αn fixed, and want to re-
optimize W (α1 , α2 , . . . , αn ) with respect to α1 and α2 (subject to the constraints).
From equation (7.22), we require that
n
α 1 y (1) + α 2 y (2) = − ∑ α i y ( i ) .
i =3

Since the right hand side is fixed (as we’ve fixed α3 , . . . αn ), we can just let it be
denoted by some constant ζ:

α1 y(1) + α2 y(2) = ζ.

We can thus picture the constraints on α1 and α2 as follows:


From the constraints 7.21, we know that α1 and α2 must lie within the box
[0, C ] × [0, C ] shown. Also plotted is the line α1 y(1) + α2 y(2) = ζ, on which we
know α1 and α2 must lie. Note also that, from these constraints, we know L ≤
α2 ≤ H; otherwise, (α1 , α2 ) can’t simultaneously satisfy both the box and the
straight line constraint. In this example, L = 0. But depending on what the line
α1 y(1) + α2 y(2) = ζ looks like, this won’t always necessarily be the case; but
more generally, there will be some lower-bound L and some upper-bound H
on the permissible values for α2 that will ensure that α1 , α2 lie within the box
[0, C ] × [0, C ].
Using α1 y(1) + α2 y(2) = ζ, we can also write α1 as a function of α2 :

α 1 = ( ζ − α 2 y (2) ) y (1) .


(Check this derivation yourself; we again used the fact that y(1) ∈ {−1, 1} so that
(y(1) )2 = 1.) Hence, the objective W (α) can be written

W (α1 , α2 , . . . , αn ) = W ((ζ − α2 y(2) )y(1) , α2 , . . . , αn ).

Treating α3 , . . . , αn as constants, you should be able to verify that this is just some
quadratic function in α2 . I.e., this can also be expressed in the form aα22 + bα2 + c
for some appropriate a, b, and c. If we ignore the ‘‘box’’ constraints 7.21 (or,
equivalently, that L ≤ α2 ≤ H), then we can easily maximize this quadratic
function by setting its derivative to zero and solving. We'll let α2^{new,unclipped} denote
the resulting value of α2 . You should also be able to convince yourself that if we had
instead wanted to maximize W with respect to α2 but subject to the box constraint,
then we can find the resulting optimal value simply by taking α2^{new,unclipped} and
''clipping'' it to lie in the [ L, H ] interval, to get

\[
\alpha_2^{\text{new}} =
\begin{cases}
H & \text{if } \alpha_2^{\text{new,unclipped}} > H \\
\alpha_2^{\text{new,unclipped}} & \text{if } L \le \alpha_2^{\text{new,unclipped}} \le H \\
L & \text{if } \alpha_2^{\text{new,unclipped}} < L
\end{cases}
\]

Finally, having found α2^{new} , we can use α1 = (ζ − α2 y(2) )y(1) to go back and find the
optimal value α1^{new} .
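
The clipping step translates directly into code. The sketch below uses the standard bounds L and H from Platt's paper for the two cases y(1) ≠ y(2) and y(1) = y(2); the function name and arguments are illustrative.

import numpy as np

def clip_alpha2(alpha2_unclipped, alpha1_old, alpha2_old, y1, y2, C):
    # Box-clip the unconstrained optimum of alpha_2 so that (alpha_1, alpha_2) stays
    # inside [0, C] x [0, C] while remaining on the line alpha_1 y1 + alpha_2 y2 = zeta.
    if y1 != y2:                              # the constraint line has slope +1
        L = max(0.0, alpha2_old - alpha1_old)
        H = min(C, C + alpha2_old - alpha1_old)
    else:                                     # the constraint line has slope -1
        L = max(0.0, alpha1_old + alpha2_old - C)
        H = min(C, alpha1_old + alpha2_old)
    return float(np.clip(alpha2_unclipped, L, H))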

There’re a couple more details that are quite easy but that we’ll leave you to
read about yourself in Platt’s paper: One is the choice of the heuristics used to
select the next αi , α j to update; the other is how to update b as the SMO algorithm
is run.



Part V: Deep Learning

From CS229 Fall 2020, Tengyu Ma, Anand Avati, Kian Katanforoosh, Andrew Ng, Moses Charikar, & Christopher Ré, Stanford University.

We now begin our study of deep learning. In this set of notes, we give an overview
of neural networks, discuss vectorization and discuss training neural networks
with backpropagation.

8 Supervised Learning with Non-Linear Models

In the supervised learning setting (predicting y from the input x), suppose our
model/hypothesis is hθ ( x ). In the past lectures, we have considered the cases
when hθ ( x ) = θ > x (in linear regression or logistic regression) or hθ ( x ) = θ > φ( x )
(where φ( x ) is the feature map). A commonality of these two models is that they
are linear in the parameters θ. Next we will consider learning general family of
models that are non-linear in both the parameters θ and the inputs x. The most
common non-linear models are neural networks, which we will define starting from
the next section. For this section, it suffices to think of hθ ( x ) as an abstract non-linear
model.¹

¹ If a concrete example is helpful, perhaps think about the model hθ ( x ) = θ1²x1² + θ2²x2² + · · · + θd²xd² in this subsection, even though it's not a neural network.

Suppose {( x (i) , y(i) )}_{i=1}^{n} are the training examples. For simplicity, we start with
the case where y(i) ∈ R and hθ ( x ) ∈ R.

Cost/loss function. We define the least squares cost function for the i-th example
( x (i) , y(i) ) as

\[
J^{(i)}(\theta) = \frac{1}{2} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 \tag{8.1}
\]
and define the mean-square cost function for the dataset as
\[
J(\theta) = \frac{1}{n} \sum_{i=1}^{n} J^{(i)}(\theta) \tag{8.2}
\]

which is the same as in linear regression except that we introduce a constant 1/n in


front of the cost function to be consistent with the convention. Note that multi-
plying the cost function with a scalar will not change the local minima or global
minima of the cost function. Also note that the underlying parameterization for
hθ ( x ) is different from the case of linear regression, even though the form of the
cost function is the same mean-squared loss. Throughout the notes, we use the
words ‘‘loss’’ and ‘‘cost’’ interchangeably. 2
Recall that, as defined in the pre-
vious lecture notes, we use the no-
tation ‘‘a := b’’ to denote an oper-
Optimizers (SGD). Commonly, people use gradient descent (GD), stochastic ation (in a computer program) in
gradient (SGD), or their variants to optimize the loss function J (θ ). GD’s update which we set the value of a variable
a to be equal to the value of b. In
rule can be written as2 other words, this operation over-
θ := θ − α∇θ J (θ ) (8.3) writes a with the value of b. In con-
trast, we will write ‘‘a = b’’ when
where α > 0 is often referred to as the learning rate or step size. Next, we introduce we are asserting a statement of fact,
that the value of a is equal to the
a version of the SGD (algorithm 8.1), which is slightly different from that in the
value of b.
first lecture notes. Oftentimes computing the gradient of B examples simultane-

Algorithm 8.1. Stochastic gradient descent.

Hyperparameters: learning rate α, number of total iterations niter .
Initialize θ randomly.
for i = 1 to niter do
    Sample j uniformly from {1, . . . , n}, and update θ by
        θ := θ − α∇θ J ( j) (θ )
end for

Oftentimes computing the gradient of B examples simultaneously for the parameter θ can be faster than computing B gradients separately
due to hardware parallelization. Therefore, a mini-batch version of SGD is most
commonly used in deep learning, as shown in algorithm 8.2. There are also other
variants of the SGD or mini-batch SGD with slightly different sampling schemes.
With these generic algorithms, a typical deep learning model is learned with
the following steps:

1. Define a neural network parametrization hθ ( x ), which we will introduce in
   chapter 9.

2. Write the backpropagation algorithm to compute the gradient of the loss
   function J ( j) (θ ) efficiently, which will be covered in chapter 10.

3. Run SGD or mini-batch SGD (or other gradient-based optimizers) with the
loss function J (θ ).


Algorithm 8.2. Mini-batch stochastic gradient descent.

Hyperparameters: learning rate α, batch size B, number of total iterations niter .
Initialize θ randomly.
for i = 1 to niter do
    Sample B examples j1 , . . . , jB (without replacement) uniformly from
    {1, . . . , n}, and update θ by
        θ := θ − (α/B) ∑_{k=1}^{B} ∇θ J ( jk ) (θ )
end for
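
As a rough illustration of algorithm 8.2, here is a minimal numpy sketch. The function grad_J_single, which returns ∇θ J ( j) (θ ) for one example, is a hypothetical stand-in for whatever model is being trained (e.g., a neural network with backpropagation); the other names are illustrative too.

import numpy as np

def minibatch_sgd(grad_J_single, theta0, n, alpha=0.01, B=32, n_iter=1000, seed=0):
    # grad_J_single(theta, j) should return the gradient of J^(j) at theta.
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for _ in range(n_iter):
        batch = rng.choice(n, size=B, replace=False)    # B indices without replacement
        grad = np.mean([grad_J_single(theta, j) for j in batch], axis=0)
        theta -= alpha * grad                           # theta := theta - (alpha/B) * sum of gradients
    return theta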



9 Neural Networks

Neural networks refer to a broad type of non-linear models/parametrizations hθ ( x )
that involve combinations of matrix multiplications and other entrywise non-
linear operations. We will start small and slowly build up a neural network, step
by step.

A neural network with a single neuron. Recall the housing price prediction
problem from before: given the size of the house, we want to predict the price.
We will use it as a running example in this subsection.
Previously, we fit a straight line to the graph of size vs. housing price. Now,
instead of fitting a straight line, we wish to prevent negative housing prices by
setting the absolute minimum price as zero. This produces a ‘‘kink’’ in the graph
as shown in figure 9.1. How do we represent such a function with a single kink as
hθ ( x ) with unknown parameter? (After doing so, we can invoke the machinery
in part V.)
We define a parameterized function hθ ( x ) with input x, parameterized by θ,
which outputs the price of the house y. Formally, hθ : x 7→ y. Perhaps one of the
simplest parametrization would be

hθ ( x ) = max(wx + b, 0), where θ = (w, b) ∈ R2 (9.1)

Here hθ ( x ) returns a single value: (wx + b) or zero, whichever is greater. In the
context of neural networks, the function max{t, 0} is called a ReLU (pronounced
''ray-lu''), or rectified linear unit, and often denoted by ReLU(t) ≜ max{t, 0}.
Generally, a one-dimensional non-linear function that maps R to R such as
ReLU is often referred to as an activation function. The model hθ ( x ) is said to
have a single neuron partly because it has a single non-linear activation function.
(We will discuss more about why a non-linear activation function is called a neuron.)
When the input x ∈ Rd has multiple dimensions, a neural network with a
single neuron can be written as

hθ ( x ) = ReLU(w> x + b), where w ∈ Rd , b ∈ R, and θ = (w, b) (9.2)



[Figure 9.1. Housing prices with a ''kink'' in the graph: price (in $1000) plotted against house size in square feet.]

The term b is often referred to as the ‘‘bias’’, and the vector w is referred to
as the weight vector. Such a neural network has 1 layer. (We will define what
multiple layers mean in the sequel.)
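
As a quick illustration (with made-up numbers), the single-neuron model of equation (9.2) is one line of numpy:

import numpy as np

def h_theta(x, w, b):
    # Single-neuron model h_theta(x) = ReLU(w^T x + b) from equation (9.2)
    return np.maximum(w @ x + b, 0.0)

w = np.array([0.5, -1.0]); b = 0.2
print(h_theta(np.array([1.0, 0.3]), w, b))   # ReLU(0.5 - 0.3 + 0.2) = 0.4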

Stacking neurons. A more complex neural network may take the single neuron
described above and ‘‘stack’’ them together such that one neuron passes its output
as input into the next neuron, resulting in a more complex function.
Let us now deepen the housing prediction example. In addition to the size
of the house, suppose that you know the number of bedrooms, the zip code
and the wealth of the neighborhood. Building neural networks is analogous to
Lego bricks: you take individual bricks and stack them together to build complex
structures. The same applies to neural networks: we take individual neurons
and stack them together to create complex neural networks. Given these features
(size, number of bedrooms, zip code, and wealth), we might then decide that
the price of the house depends on the maximum family size it can accommodate.
Suppose the family size is a function of the size of the house and number of
bedrooms (see figure 9.2). The zip code may provide additional information such
as how walkable the neighborhood is (i.e., can you walk to the grocery store or


do you need to drive everywhere). Combining the zip code with the wealth of the
neighborhood may predict the quality of the local elementary school. Given these
three derived features (family size, walkable, school quality), we may conclude
that the price of the home ultimately depends on these three features.

[Figure 9.2. Diagram of a small neural network for predicting housing prices: the inputs (size, # bedrooms, zip code, wealth) feed into the intermediate units ''family size'', ''walkable'', and ''school quality'', which in turn feed into the output price y.]

Formally, the input to a neural network is a set of input features x1 , x2 , x3 , x4 .
We denote the intermediate variables for ''family size'', ''walkable'', and ''school
quality’’ by a1 , a2 , a3 (these ai ’s are often referred to as ‘‘hidden units’’ or ‘‘hid-
den neurons’’). We represent each of the ai ’s as a neural network with a single
neuron with a subset of x1 , . . . , x4 as inputs. Then as in figure 9.1, we will have
the parameterization:

a1 = ReLU(θ1 x1 + θ2 x2 + θ3 )
a2 = ReLU(θ4 x3 + θ5 )
a3 = ReLU(θ6 x3 + θ7 x4 + θ8 )

where (θ1 , . . . , θ8 ) are parameters. Now we represent the final output hθ ( x ) as


another linear function with a1 , a2 , a3 as inputs, and we get¹

hθ ( x ) = θ9 a1 + θ10 a2 + θ11 a3 + θ12     (9.3)

where θ contains all the parameters (θ1 , . . . , θ12 ).

¹ Typically, for a multi-layer neural network, at the end, near the output, we don't apply ReLU, especially when the output is not necessarily a positive number.
Now we represent the output as a quite complex function of x with parameters
θ. Then you can use this parametrization hθ with the machinery of part V to learn
the parameters θ.

Inspiration from biological neural networks. As the name suggests, artificial


neural networks were inspired by biological neural networks. The hidden units
a1 , . . . , am correspond to the neurons in a biological neural network, and the


parameters θi ’s correspond to the synapses. However, it’s unclear how similar the
modern deep artificial neural networks are to the biological ones. For example,
perhaps not many neuroscientists think biological neural networks could have
1000 layers, while some modern artificial neural networks do (we will elaborate
more on the notion of layers.) Moreover, it’s an open question whether human
brains update their neural networks in a way similar to the way that computer
scientists learn artificial neural networks (using backpropagation, which we will
introduce in the next section.).

Two-layer fully-connected neural networks. We constructed the neural net-


work in equation (9.3) using a significant amount of prior knowledge/belief
about how the ‘‘family size’’, ‘‘walkable’’, and ‘‘school quality’’ are determined by
the inputs. We implicitly assumed that we know the family size is an important
quantity to look at and that it can be determined by only the ‘‘size’’ and ‘‘# bed-
rooms’’. Such a prior knowledge might not be available for other applications. It
would be more flexible and general to have a generic parameterization. A simple
way would be to write the intermediate variable a1 as a function of all x1 , . . . , x4 :

a1 = ReLU(w1> x + b1 ), where w1 ∈ R4 and b1 ∈ R


a2 = ReLU(w2> x + b2 ), where w2 ∈ R4 and b2 ∈ R
a3 = ReLU(w3> x + b3 ), where w3 ∈ R4 and b3 ∈ R

We still define hθ ( x ) using equation (9.3) with a1 , a2 , a3 being defined as above.


Thus we have a so-called fully-connected neural network as visualized in the
dependency graph in figure 9.3 because all the intermediate variables ai ’s depend
on all the inputs xi ’s.

[Figure 9.3. Diagram of a two-layer fully connected neural network. Each edge from node xi to node a j indicates that a j depends on xi . The edge from xi to a j is associated with the weight (w_j^{[1]})_i , which denotes the i-th coordinate of the vector w_j^{[1]} . The activation a j can be computed by taking the ReLU of the weighted sum of the xi 's, with the weights being those associated with the incoming edges, that is, a j = ReLU( ∑_{i=1}^{d} (w_j^{[1]})_i xi ).]

For full generality, a two-layer fully-connected neural network with m hidden


units and d dimensional input x ∈ Rd is defined as

\[
\begin{aligned}
\forall j \in [1, \ldots, m], \quad z_j &= w_j^{[1]\top} x + b_j^{[1]}, \quad \text{where } w_j^{[1]} \in \mathbb{R}^d, \; b_j^{[1]} \in \mathbb{R} && (9.4) \\
a_j &= \mathrm{ReLU}(z_j) && (9.5) \\
a &= [a_1, \ldots, a_m]^\top \in \mathbb{R}^m && (9.6) \\
h_\theta(x) &= w^{[2]\top} a + b^{[2]}, \quad \text{where } w^{[2]} \in \mathbb{R}^m, \; b^{[2]} \in \mathbb{R} && (9.7)
\end{aligned}
\]

Note that by default the vectors in Rd are viewed as column vectors, and in
particular a is a column vector with components a1 , a2 , . . . , am . The indices [1] and [2]
are used to distinguish two sets of parameters: the w_j^{[1]} 's (each of which is a
vector in Rd ) and w^{[2]} (which is a vector in Rm ). We will have more of these later.

Vectorization. Before we introduce neural networks with more layers and more
complex structures, we will simplify the expressions for neural networks with
more matrix and vector notations. Another important motivation of vectorization
is the speed perspective in the implementation. In order to implement a neural
network efficiently, one must be careful when using for loops. The most natural
way to implement equation (9.4) in code is perhaps to use a for loop. In practice,
the dimensionalities of the inputs and hidden units are high. As a result, code
will run very slowly if you use for loops. Leveraging the parallelism in GPUs
is/was crucial for the progress of deep learning.
This gave rise to vectorization. Instead of using for loops, vectorization takes
advantage of matrix algebra and highly optimized numerical linear algebra pack-
ages (e.g., BLAS) to make neural network computations run quickly. Before the
deep learning era, a for loop may have been sufficient on smaller datasets, but
modern deep networks and state-of-the-art datasets will be infeasible to run with
for loops.
We vectorize the two-layer fully-connected neural network as below. We define
a weight matrix W [1] ∈ Rm×d as the concatenation of all the vectors w_j^{[1]} 's in the


following way:
\[
W^{[1]} = \begin{bmatrix} w_1^{[1]\top} \\ w_2^{[1]\top} \\ \vdots \\ w_m^{[1]\top} \end{bmatrix} \in \mathbb{R}^{m \times d}
\]

Now by the definition of matrix-vector multiplication, we can write z = [z1 , . . . , zm ]> ∈ Rm as:

\[
\underbrace{\begin{bmatrix} z_1 \\ \vdots \\ z_m \end{bmatrix}}_{z \,\in\, \mathbb{R}^{m \times 1}}
=
\underbrace{\begin{bmatrix} w_1^{[1]\top} \\ w_2^{[1]\top} \\ \vdots \\ w_m^{[1]\top} \end{bmatrix}}_{W^{[1]} \,\in\, \mathbb{R}^{m \times d}}
\underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{bmatrix}}_{x \,\in\, \mathbb{R}^{d \times 1}}
+
\underbrace{\begin{bmatrix} b_1^{[1]} \\ b_2^{[1]} \\ \vdots \\ b_m^{[1]} \end{bmatrix}}_{b^{[1]} \,\in\, \mathbb{R}^{m \times 1}}
\]

Or succinctly,
z = W [1] x + b [1] (9.8)
We remark again that a vector in Rd
in these notes, following the conventions
previously established, is automatically viewed as a column vector, and can also
be viewed as a d × 1 dimensional matrix. (Note that this is different from numpy
where a vector is viewed as a row vector in broadcasting.)
Computing the activations a ∈ Rm from z ∈ Rm involves an element-wise
non-linear application of the ReLU function, which can be computed in parallel
efficiently. Overloading ReLU for element-wise application of ReLU (meaning, for
a vector t ∈ Rd , ReLU(t) is a vector such that ReLU(t)i = ReLU(ti )), we have:
a = ReLU(z) (9.9)
Define W [2] = [w^{[2]\top}] ∈ R1×m similarly. Then, the model in equation (9.7) can
be summarized as:

a = ReLU(W [1] x + b[1] )     (9.10)
hθ ( x ) = W [2] a + b[2]     (9.11)
Here θ consists of W [1] , W [2] (often referred to as the weight matrices) and b[1] , b[2]
(referred to as the biases). The collection of W [1] , b[1] is referred to as the first layer,
and W [2] , b[2] the second layer. The activation a is referred to as the hidden layer.
A two-layer neural network is also called one-hidden-layer neural network.
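
Putting equations (9.10) and (9.11) into code, a forward pass for a single input might look like the following sketch (array shapes follow the conventions above; the function name is illustrative):

import numpy as np

def forward_two_layer(x, W1, b1, W2, b2):
    # W1: (m, d), b1: (m,), W2: (1, m), b2: (1,); x: (d,)
    z = W1 @ x + b1                  # equation (9.8)
    a = np.maximum(z, 0.0)           # element-wise ReLU, equation (9.9)
    return W2 @ a + b2               # equation (9.11): the output h_theta(x)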


Multi-layer fully-connected neural networks. With these succinct notations, we
can stack more layers to get a deeper fully-connected neural network. Let r be the
number of layers (weight matrices). Let W [1] , . . . , W [r] , b[1] , . . . , b[r] be the weight
matrices and biases of all the layers. Then a multi-layer neural network can be
written as:

a[1] = ReLU(W [1] x + b[1] ) (9.12)


a[2] = ReLU(W [2] a[1] + b[2] ) (9.13)
··· (9.14)
a[r−1] = ReLU(W [r−1] a[r−2] + b[r−1] ) (9.15)
hθ ( x ) = W [r] a[r−1] + b[r]     (9.16)

We note that the weight matrices and biases need to have compatible dimen-
sions for the equations above to make sense. If a[k] has dimension mk , then the
weight matrix W [k] should be of dimension mk × mk−1 , and the bias b[k] ∈ Rmk .
Moreover, W [1] ∈ Rm1 ×d and W [r] ∈ R1×mr−1 .
The total number of neurons in the network is m1 + · · · + mr , and the total
number of parameters in this network is (d + 1)m1 + (m1 + 1)m2 + · · · + (mr−1 +
1) mr .
Sometimes for notational consistency we also write a[0] = x, and a[r] = hθ ( x ).
Then we have simple recursion that

a[k] = ReLU(W [k] a[k−1] + b[k] ), ∀k = 1, . . . , r − 1 (9.17)

Note that this would have been true for k = r if there were an additional ReLU in
equation (9.16), but often people like to make the last layer linear (aka without a
ReLU) so that negative outputs are possible and it’s easier to interpret the last
layer as a linear model. (More on the interpretability at the ‘‘connection to kernel
method’’ paragraph of this section.)

Other activation functions. The activation function ReLU can be replaced by
many other non-linear functions σ (·) that map R to R, such as:

\[
\sigma(z) = \frac{1}{1 + e^{-z}} \qquad \text{(sigmoid)}
\]
\[
\sigma(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \qquad \text{(tanh)}
\]


Why do we not use the identity function for σ(z)? That is, why not use σ (z) = z?
Assume for sake of argument that b[1] and b[2] are zeros. Suppose σ (z) = z, then
for two-layer neural network, we have that

hθ ( x ) = W [2] a[1]
        = W [2] σ (z[1] )            (by definition)
        = W [2] z[1]                 (since σ (z) = z)
        = W [2] W [1] x              (from equation (9.8), with b[1] = 0)
        = W̃x                        (where W̃ = W [2] W [1] )

Notice how W [2] W [1] collapsed into W̃.


This is because applying a linear function to another linear function will result
in a linear function over the original input (i.e., you can construct a W̃ such
that W̃x = W [2] W [1] x). This loses much of the representational power of the
neural network as often times the output we are trying to predict has a non-linear
relationship with the inputs. Without non-linear activation functions, the neural
network will simply perform linear regression.
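
A quick numerical check of this collapse (with random, made-up matrices and zero biases):

import numpy as np

# With the identity activation, two layers collapse into one:
# W2 (W1 x) equals (W2 W1) x for every x, so the model is still linear in x.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((1, 4))
x = rng.standard_normal(3)

two_layer = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x                   # W-tilde = W2 W1
print(np.allclose(two_layer, collapsed))    # True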

Connection to the Kernel Method. In the previous lectures, we covered the


concept of feature maps. Recall that the main motivation for feature maps is to
represent functions that are non-linear in the input x by θ > φ( x ), where θ are the
parameters and φ( x ), the feature map, is a handcrafted function non-linear in the
raw input x. The performance of the learning algorithms can depend significantly
on the choice of the feature map φ( x ). Oftentimes people use domain knowledge
to design the feature map φ( x ) that suits the particular applications. The process
of choosing the feature maps is often referred to as feature engineering.
We can view deep learning as a way to automatically learn the right feature
map (sometimes also referred to as ‘‘the representation’’) as follows. Suppose we
denote by β the collection of the parameters in a fully-connected neural networks
(equation (9.16)) except those in the last layer. Then we can abstractly write a[r−1]
as a function of the input x and the parameters in β : a[r−1] = φβ ( x ). Now we can
write the model as:
h θ ( x ) = W [r ] φ β ( x ) + b [r ] (9.18)

toc 2021-05-23 00:18:27-07:00, draft: send comments to [email protected]


86 c hapter 9. neural networks

When β is fixed, then φβ (·) can be viewed as a feature map, and therefore hθ ( x ) is just
a linear model over the features φβ ( x ). However, when we train the neural network,
both the parameters in β and the parameters W [r] , b[r] are optimized, and therefore
we are not learning a linear model in the feature space, but also learning a good
feature map φβ (·) itself so that it’s possible to predict accurately with a linear
model on top of the feature map. Therefore, deep learning tends to depend less
on the domain knowledge of the particular applications and requires often less
feature engineering. The penultimate layer a[r−1] is often (informally) referred to
as the learned features or representations in the context of deep learning.
In the example of house price prediction, a fully-connected neural network
does not need us to specify intermediate quantities such as ''family size'', and
may automatically discover some useful features in the penultimate layer
(the activation a[r−1] ), and use them to linearly predict the housing price. Often
the feature map / representation obtained from one dataset (that is, the function
φβ (·)) can also be useful for other datasets, which indicates they contain essential
information about the data. However, oftentimes, the neural network will discover
complex features which are very useful for predicting the output but may be
difficult for a human to understand or interpret. This is why some people refer to
neural networks as a black box, as it can be difficult to understand the features it
has discovered.



10 Backpropagation

In this section, we introduce backpropagation or auto-differentiation, which computes
the gradient of the loss ∇ J ( j) (θ ) efficiently. We will start with an informal theorem
that states that as long as a real-valued function f can be efficiently computed/eval-
uated by a differentiable network or circuit, then its gradient can be efficiently
computed in a similar time. We will then show how to do this concretely for
fully-connected neural networks.
Because the formality of the general theorem is not the main focus here, we
will introduce the terms with informal definitions. By a differentiable circuit or
a differentiable network, we mean a composition of a sequence of differentiable
arithmetic operations (additions, subtraction, multiplication, divisions, etc) and
elementary differentiable functions (ReLU, exp, log, sin, cos, etc.). Let the size of
the circuit be the total number of such operations and elementary functions. We
assume that each of the operations and functions, and their derivatives or partial
derivatives can be computed in O(1) time in the computer.

Theorem 1 (backpropagation or auto-differentiation, informally stated). Suppose


a differentiable circuit of size N computes a real-valued function f : R` 7→ R. Then, the
gradient ∇ f can be computed in time O( N ), by a circuit of size O( N ).¹

¹ We note that if the output of the function f does not depend on some of the input coordinates, then we set by default the gradient w.r.t. that coordinate to zero. Setting it to zero does not count towards the total runtime here in our accounting scheme. This is why when N ≤ `, we can compute the gradient in O( N ) time, which might be potentially even less than `.

We note that the loss function J ( j) (θ ) for the j-th example can indeed be computed
by a sequence of operations and functions involving additions, subtractions,
multiplications, and non-linear activations. Thus the theorem suggests that we
should be able to compute ∇ J ( j) (θ ) in a similar time to that for computing J ( j) (θ )
itself. This applies not only to the fully-connected neural network introduced
in chapter 9, but also to many other types of neural networks.
In the rest of the section, we will showcase how to compute the gradient of
the loss efficiently for fully-connected neural networks using backpropagation.
Even though auto-differentiation or backpropagation is implemented in all the
deep learning packages such as TensorFlow and PyTorch, understanding it is
very helpful for gaining insights into the workings of deep learning.

10.1 Preliminary: chain rule

We first recall the chain rule in calculus. Suppose the variable J depends on the
variables θ1 , . . . , θ p via the intermediate variables g1 , . . . , gk :

g j = g j (θ1 , . . . , θ p ), ∀ j ∈ {1, . . . , k} J = J ( g1 , . . . , g k ) (10.1)

Here we overload the meaning of g j ’s: they denote both the intermediate variables
but also the functions used to compute the intermediate variables. Then, by the
chain rule, we have that ∀i:
\[
\frac{\partial J}{\partial \theta_i} = \sum_{j=1}^{k} \frac{\partial J}{\partial g_j} \frac{\partial g_j}{\partial \theta_i} \tag{10.2}
\]

For the ease of invoking the chain rule in the following subsections in various
ways, we will call J the output variable, g1 , . . . , gk intermediate variables, and
θ1 , . . . , θ p the input variables in the chain rule.

10.2 Backpropagation for two-layer neural networks

Now we consider the two-layer neural network defined in equation (9.11). Our
general approach is to first unpack the vectorized notation to scalar form to apply
the chain rule, but as soon as we finish the derivation, we will pack the scalar
equations back to a vectorized form to keep the notations succinct.
Recall the following equations are used for the computation of the loss J:

\[
\begin{aligned}
z &= W^{[1]} x + b^{[1]} && (10.3) \\
a &= \mathrm{ReLU}(z) && (10.4) \\
h_\theta(x) \triangleq o &= W^{[2]} a + b^{[2]} && (10.5) \\
J &= \frac{1}{2}(y - o)^2 && (10.6)
\end{aligned}
\]

Recall that W [1] ∈ Rm×d , W [2] ∈ R1×m , and b[1] , z, a ∈ Rm , and o, y, b[2] ∈ R.
Recall that a vector in Rd is automatically interpreted as a column vector (like a
matrix in Rd×1 ) if need be.²

² We also note that even though this is the convention in math, it's different from the convention in numpy, where a one-dimensional array will automatically be interpreted as a row vector.

10.2.1 Computing ∂J/∂W [2]

Suppose W [2] = [W_1^{[2]} , . . . , W_m^{[2]} ]. We start by computing ∂J/∂W_i^{[2]} using the chain rule
(equation (10.2)) with o as the intermediate variable:

\[
\begin{aligned}
\frac{\partial J}{\partial W_i^{[2]}} &= \frac{\partial J}{\partial o} \cdot \frac{\partial o}{\partial W_i^{[2]}} \\
&= (o - y) \cdot \frac{\partial o}{\partial W_i^{[2]}} \\
&= (o - y) \cdot a_i \qquad \text{(because } o = \textstyle\sum_{i=1}^{m} W_i^{[2]} a_i + b^{[2]}\text{)}
\end{aligned}
\]

Vectorized notation. The equation above in vectorized notation becomes:


\[
\frac{\partial J}{\partial W^{[2]}} = (o - y) \cdot a^\top \in \mathbb{R}^{1 \times m} \tag{10.7}
\]

Similarly, we leave the reader to verify that:

\[
\frac{\partial J}{\partial b^{[2]}} = (o - y) \in \mathbb{R} \tag{10.8}
\]

Clarification for the dimensionality of the partial derivative notation. We will


use the notation ∂J/∂A frequently in the rest of the lecture notes. We note that here we
only use this notation for the case when J is a real-valued variable,³ but A can be
a vector or a matrix. Moreover, ∂J/∂A has the same dimensionality as A. For example,
when A is a matrix, the (i, j)-th entry of ∂J/∂A is equal to ∂J/∂Aij . If you are familiar with
the notion of total derivatives, we note that the convention for dimensionality
here is different from that for total derivatives.

³ There is an extension of this notation to vector or matrix variables J. However, in practice, it's often impractical to compute the derivatives of high-dimensional outputs. Thus, we will avoid using the notation ∂J/∂A for J that is not a real-valued variable.
10.2.2 Computing ∂J/∂W [1]

Next we compute ∂J/∂W [1] . We first unpack the vectorized notation: let W_{ij}^{[1]} denote
the (i, j)-th entry of W [1] , where i ∈ [m] and j ∈ [d]. We compute ∂J/∂W_{ij}^{[1]} using the
chain rule (equation (10.2)) with zi as the intermediate variable:

\[
\begin{aligned}
\frac{\partial J}{\partial W_{ij}^{[1]}} &= \frac{\partial J}{\partial z_i} \cdot \frac{\partial z_i}{\partial W_{ij}^{[1]}} \\
&= \frac{\partial J}{\partial z_i} \cdot x_j \qquad \text{(because } z_i = \textstyle\sum_{k=1}^{d} W_{ik}^{[1]} x_k + b_i^{[1]}\text{)}
\end{aligned}
\]


Vectorized notation. The equation above can be written compactly as:

\[
\frac{\partial J}{\partial W^{[1]}} = \frac{\partial J}{\partial z} \cdot x^\top \tag{10.9}
\]

We can verify that the dimensions match: ∂J/∂W [1] ∈ Rm×d , ∂J/∂z ∈ Rm×1 , and x> ∈ R1×d .

Abstraction. For future usage, the computations for ∂J/∂W [1] and ∂J/∂W [2] above can
be abstracted into the following claim:

Claim 1. Suppose J is a real-valued output variable, z ∈ Rm is the intermediate
variable, and W ∈ Rm×d , u ∈ Rd , b ∈ Rm are the input variables, and suppose
they satisfy the following:

z = Wu + b
J = J (z)

Then ∂J/∂W and ∂J/∂b satisfy:

\[
\frac{\partial J}{\partial W} = \frac{\partial J}{\partial z} \cdot u^\top, \qquad \frac{\partial J}{\partial b} = \frac{\partial J}{\partial z}
\]

10.2.3 Computing ∂J/∂z

Equation (10.9) tells us that to compute ∂J/∂W [1] , it suffices to compute ∂J/∂z, which is
the goal of the next few derivations.
We invoke the chain rule with J as the output variable, ai as the intermediate
variable, and zi as the input variable:

\[
\begin{aligned}
\frac{\partial J}{\partial z_i} &= \frac{\partial J}{\partial a_i} \frac{\partial a_i}{\partial z_i} \\
&= \frac{\partial J}{\partial a_i} \cdot 1\{z_i \ge 0\}
\end{aligned}
\]

Vectorization and abstraction. The computation above can be summarized


into:


Claim 2. Suppose the real-valued output variable J and vectors z, a ∈ Rm satisfy


the following:

a = σ(z) (where σ is an element-wise activation, z, a ∈ Rm )


J = J ( a)

Then, we have that


\[
\frac{\partial J}{\partial z} = \frac{\partial J}{\partial a} \odot \sigma'(z)
\]

where σ' (·) is the element-wise derivative of the activation function σ, and ⊙
denotes the element-wise product of two vectors of the same dimensionality.

10.2.4 Computing ∂J/∂a

Now it suffices to compute ∂J/∂a. We invoke the chain rule with J as the output
variable, o as the intermediate variable, and ai as the input variable:

\[
\begin{aligned}
\frac{\partial J}{\partial a_i} &= \frac{\partial J}{\partial o} \frac{\partial o}{\partial a_i} \\
&= (o - y) \cdot W_i^{[2]} \qquad \text{(because } o = \textstyle\sum_{i=1}^{m} W_i^{[2]} a_i + b^{[2]}\text{)}
\end{aligned}
\]

Vectorization. In vectorized notation, we have:


\[
\frac{\partial J}{\partial a} = W^{[2]\top} \cdot (o - y) \tag{10.10}
\]

Abstraction. We now present a more general form of the computation above.

Claim 3. Suppose J is a real-valued output variable, v ∈ Rm is the intermediate


variable, and W ∈ Rm×d , u ∈ Rd , b ∈ Rm are the input variables, and suppose
they satisfy the following:

v = Wu + b
J = J (v)

Then,
\[
\frac{\partial J}{\partial u} = W^\top \frac{\partial J}{\partial v}. \tag{10.11}
\]


10.2.5 Summary for two-layer neural networks


Now combining the equations above, we arrive at algorithm 10.1 which computes
the gradients for two-layer neural networks.

Algorithm 10.1. Back-propagation for two-layer neural networks.

Compute the values of z ∈ Rm , a ∈ Rm , and o ∈ R.
Compute:

\[
\begin{aligned}
\delta^{[2]} &\triangleq \frac{\partial J}{\partial o} = (o - y) \in \mathbb{R} \\
\delta^{[1]} &\triangleq \frac{\partial J}{\partial z} = \left(W^{[2]\top}(o - y)\right) \odot 1\{z \ge 0\} \in \mathbb{R}^{m \times 1} \qquad \text{(by claim 2 and equation (10.10))}
\end{aligned}
\]

Compute:

\[
\begin{aligned}
\frac{\partial J}{\partial W^{[2]}} &= \delta^{[2]} a^\top \in \mathbb{R}^{1 \times m} && \text{(by equation (10.7))} \\
\frac{\partial J}{\partial b^{[2]}} &= \delta^{[2]} \in \mathbb{R} && \text{(by equation (10.8))} \\
\frac{\partial J}{\partial W^{[1]}} &= \delta^{[1]} x^\top \in \mathbb{R}^{m \times d} && \text{(by equation (10.9))} \\
\frac{\partial J}{\partial b^{[1]}} &= \delta^{[1]} \in \mathbb{R}^{m} && \text{(as an exercise)}
\end{aligned}
\]
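
For concreteness, here is a numpy sketch of algorithm 10.1 for a single example ( x, y); shapes follow the conventions above and the function name is illustrative. One can check its output against numerical differentiation of J.

import numpy as np

def backprop_two_layer(x, y, W1, b1, W2, b2):
    # Shapes: W1 (m, d), b1 (m,), W2 (1, m), b2 (1,), x (d,), y scalar.
    # Forward pass (equations (10.3)-(10.6))
    z = W1 @ x + b1
    a = np.maximum(z, 0.0)
    o = (W2 @ a + b2)[0]

    # Backward pass
    delta2 = o - y                                   # dJ/do
    delta1 = (W2.T * delta2).ravel() * (z >= 0)      # dJ/dz, claim 2 + equation (10.10)

    dW2 = delta2 * a.reshape(1, -1)                  # equation (10.7), shape (1, m)
    db2 = np.array([delta2])                         # equation (10.8)
    dW1 = np.outer(delta1, x)                        # equation (10.9), shape (m, d)
    db1 = delta1                                     # dJ/db1
    return dW1, db1, dW2, db2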

10.3 Multi-layer neural networks

In this section, we will derive the backpropagation algorithms for the model
defined in equation (9.16). With the notation a[0] = x, recall that we have:

a[1] = ReLU(W [1] a[0] + b[1] )


a[2] = ReLU(W [2] a[1] + b[2] )
···
a[r−1] = ReLU(W [r−1] a[r−2] + b[r−1] )
a [r ] = z [r ] = W [r ] a [r −1] + b [r ]
\[
J = \frac{1}{2} \left( a^{[r]} - y \right)^2
\]


Here we define both a[r] and z[r] as hθ ( x ) for notational simplicity.


First, we note that we have the following local abstraction for k ∈ {1, . . . , r }:

z [ k ] = W [ k ] a [ k −1] + b [ k ]
J = J (z[k] )

Invoking Claim 1, we have that

\[
\begin{aligned}
\frac{\partial J}{\partial W^{[k]}} &= \frac{\partial J}{\partial z^{[k]}} \cdot a^{[k-1]\top} && (10.12) \\
\frac{\partial J}{\partial b^{[k]}} &= \frac{\partial J}{\partial z^{[k]}} && (10.13)
\end{aligned}
\]

Therefore, it suffices to compute ∂J/∂z[k] . For simplicity, let's define δ[k] ≜ ∂J/∂z[k] . We
compute δ[k] from k = r to 1 inductively. First we have that:

\[
\delta^{[r]} \triangleq \frac{\partial J}{\partial z^{[r]}} = (z^{[r]} - y) \tag{10.14}
\]
Next for k ≤ r − 1, suppose we have computed the value of δ[k+1] , then we will
compute δ[k] . First, using claim 2, we have that:

\[
\delta^{[k]} \triangleq \frac{\partial J}{\partial z^{[k]}} = \frac{\partial J}{\partial a^{[k]}} \odot \mathrm{ReLU}'(z^{[k]}) \tag{10.15}
\]
Then we note that the relationship between a[k] and z[k+1] can be abstractly written
as:

z [ k +1] = W [ k +1] a [ k ] + b [ k +1] (10.16)


J = J ( z [ k +1] ) (10.17)

Therefore by claim 3 we have that:

\[
\frac{\partial J}{\partial a^{[k]}} = W^{[k+1]\top} \frac{\partial J}{\partial z^{[k+1]}} \tag{10.18}
\]
It follows that:
 
\[
\begin{aligned}
\delta^{[k]} &= \left( W^{[k+1]\top} \frac{\partial J}{\partial z^{[k+1]}} \right) \odot \mathrm{ReLU}'(z^{[k]}) \\
&= \left( W^{[k+1]\top} \delta^{[k+1]} \right) \odot \mathrm{ReLU}'(z^{[k]})
\end{aligned}
\]


Algorithm 10.2. Back-propagation for multi-layer neural networks.

Compute and store the values of the a[k] 's and z[k] 's for k = 1, . . . , r, and J.
    (This is often called the ''forward pass''.)
for k = r to 1 do    (This is often called the ''backward pass''.)
    if k = r then
        Compute δ[r] ≜ ∂J/∂z[r]
    else
        Compute:
        \[
        \delta^{[k]} \triangleq \frac{\partial J}{\partial z^{[k]}} = \left( W^{[k+1]\top} \delta^{[k+1]} \right) \odot \mathrm{ReLU}'(z^{[k]})
        \]
    end if
    Compute:
    \[
    \frac{\partial J}{\partial W^{[k]}} = \delta^{[k]} a^{[k-1]\top}, \qquad \frac{\partial J}{\partial b^{[k]}} = \delta^{[k]}
    \]
end for
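
A compact numpy sketch of algorithm 10.2 is below, again for a single example and with illustrative names; it stores the a[k] 's and z[k] 's in the forward pass and then accumulates δ[k] backwards.

import numpy as np

def backprop_multilayer(x, y, Ws, bs):
    # Ws = [W1, ..., Wr], bs = [b1, ..., br]; ReLU hidden layers, linear last layer.
    r = len(Ws)
    # Forward pass: store a^[k] and z^[k]
    a_s, z_s = [x], []
    for k in range(r):
        z = Ws[k] @ a_s[-1] + bs[k]
        z_s.append(z)
        a_s.append(np.maximum(z, 0.0) if k < r - 1 else z)

    # Backward pass
    dWs, dbs = [None] * r, [None] * r
    delta = a_s[-1] - y                                     # delta^[r] = z^[r] - y (eq. (10.14))
    for k in reversed(range(r)):
        dWs[k] = np.outer(delta, a_s[k])                    # dJ/dW^[k] = delta^[k] a^[k-1]^T
        dbs[k] = delta                                      # dJ/db^[k] = delta^[k]
        if k > 0:
            delta = (Ws[k].T @ delta) * (z_s[k - 1] >= 0)   # delta^[k-1]
    return dWs, dbs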



11 Vectorization Over Training Examples

As we discussed in chapter 9, in the implementation of neural networks, we will


leverage the parallelism across multiple examples. This means that we will need to
write the forward pass (the evaluation of the outputs) of the neural network and
the backward pass (backpropagation) for multiple training examples in matrix
notation.

The basic idea. The basic idea is simple. Suppose you have a training set with
three examples x (1) , x (2) , x (3) . The first-layer activations for each example are as
follows:

z[1](1) = W [1] x (1) + b[1]


z[1](2) = W [1] x (2) + b[1]
z[1](3) = W [1] x (3) + b[1]

Note the difference between square brackets [·], which refer to the layer number,
and parenthesis (·), which refer to the training example number. Intuitively,
one would implement this using a for loop. It turns out, we can vectorize these
operations as well. First, define:
 

\[
X = \begin{bmatrix} \vert & \vert & \vert \\ x^{(1)} & x^{(2)} & x^{(3)} \\ \vert & \vert & \vert \end{bmatrix} \in \mathbb{R}^{d \times 3} \tag{11.1}
\]

Note that we are stacking training examples in columns and not rows. We can
then combine this into a single unified formulation:
 

Z [1] = z[1](1) z[1](2) z[1](3)  = W [1] X + b[1] (11.2)


 

You may notice that we are attempting to add b^{[1]} ∈ ℝ^{4×1} to W^{[1]}X ∈ ℝ^{4×3}. Strictly following the rules of linear algebra, this is not allowed. In practice, however, this addition is performed using broadcasting. We create an intermediate b̃^{[1]} ∈ ℝ^{4×3}:

    b̃^{[1]} = [ b^{[1]}  b^{[1]}  b^{[1]} ]    (11.3)

We can then perform the computation Z^{[1]} = W^{[1]}X + b̃^{[1]}. Oftentimes it is not necessary to explicitly construct b̃^{[1]}: by inspecting the dimensions, you can assume b^{[1]} ∈ ℝ^{4×1} is correctly broadcast and added to W^{[1]}X ∈ ℝ^{4×3}.
The matricization approach as above can easily generalize to multiple layers,
with one subtlety though, as discussed below.

Complications/subtlety in the implementation. All the deep learning packages or implementations put the data points in the rows of a data matrix. (If the data point itself is a matrix or tensor, then the data are concatenated along the zero-th dimension.) However, most deep learning papers use a notation similar to these notes, where the data points are treated as column vectors.¹

    ¹ The instructor suspects that this is mostly because in mathematics we naturally multiply a matrix to a vector on the left hand side.

There is a simple conversion to deal with the mismatch: in the implementation, all the columns become row vectors, row vectors become column vectors, all the matrices are transposed, and the orders of the matrix multiplications are flipped. In the example above, using the row-major convention, the data matrix is X ∈ ℝ^{3×d}, the first-layer weight matrix has dimensionality d × m (instead of m × d as in the two-layer neural net section), and the bias vector is b^{[1]} ∈ ℝ^{1×m}. The computation for the hidden activation becomes:

Z [1] = XW [1] + b[1] ∈ R3×m (11.4)
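As a rough illustration of the row-major convention and of broadcasting, here is what the first-layer computation might look like in NumPy; the sizes n = 3, d = 5, m = 4 and the random initialization are placeholders chosen only for this sketch.

import numpy as np

n, d, m = 3, 5, 4
X = np.random.randn(n, d)      # one training example per row
W1 = np.random.randn(d, m)     # row-major convention: W^[1] has shape d x m
b1 = np.random.randn(1, m)     # bias stored as a row vector

# b1 (shape (1, m)) is broadcast across the n rows of X @ W1 (shape (n, m)),
# so there is no need to materialize the tiled matrix b-tilde explicitly.
Z1 = X @ W1 + b1               # shape (n, m)
A1 = np.maximum(Z1, 0)         # elementwise ReLU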



Part VI: Regularization and Model Selection
From CS229 Spring 2021, Andrew Ng, Moses Charikar, Christopher Ré & Yoann Le Calonnec, Stanford University.

Suppose we are trying to select among several different models for a learning problem. For instance, we might be using a polynomial regression model hθ(x) = g(θ₀ + θ₁x + θ₂x² + · · · + θ_k x^k), and wish to decide if k should be 0, 1, . . . , or 10. How can we automatically select a model that represents a good tradeoff between the twin evils of bias and variance?²

    ² Given that we said in the previous set of notes that bias and variance are two very different beasts, some readers may be wondering if we should be calling them ‘‘twin'' evils here. Perhaps it'd be better to think of them as non-identical twins. The phrase ‘‘the fraternal twin evils of bias and variance'' doesn't have the same ring to it, though.

Alternatively, suppose we want to automatically choose the bandwidth parameter τ for locally weighted regression, or the parameter C for our ℓ₁-regularized SVM. How can we do that?
For the sake of concreteness, in these notes we assume we have some finite set of models M = {M₁, . . . , M_d} that we're trying to select among. For instance, in our first example above, the model M_i would be an i-th order polynomial regression model. (The generalization to infinite M is not hard.³) Alternatively, if we are trying to decide between using an SVM, a neural network or logistic regression, then M may contain these models.

    ³ If we are trying to choose from an infinite set of models, say corresponding to the possible values of the bandwidth τ ∈ ℝ⁺, we may discretize τ and consider only a finite number of possible values for it. More generally, most of the algorithms described here can all be viewed as performing optimization search in the space of models, and we can perform this search over infinite model classes as well.

12 Cross validation

Let's suppose we are, as usual, given a training set S. Given what we know about empirical risk minimization, here's what might initially seem like an algorithm, resulting from using empirical risk minimization for model selection:
1. Train each model Mi on S, to get some hypothesis hi .

2. Pick the hypothesis with the smallest training error.


This algorithm does not work. Consider choosing the order of a polynomial.
The higher the order of the polynomial, the better it will fit the training set S,
and thus the lower the training error. Hence, this method will always select a
high-variance, high-degree polynomial model, which we saw previously is often a poor choice.
Here’s an algorithm that works better. In hold-out cross validation (also called
simple cross validation), we do the following:

1. Randomly split S into Strain (say, 70% of the data) and Scv (the remaining 30%).
Here, Scv is called the hold-out cross validation set.

2. Train each model Mi on Strain only, to get some hypothesis hi .

3. Select and output the hypothesis hi that had the smallest error ε̂ Scv (hi ) on the
hold out cross validation set. (Recall, ε̂ Scv (h) denotes the empirical error of h
on the set of examples in Scv .)
By testing on a set of examples Scv that the models were not trained on, we
obtain a better estimate of each hypothesis hi ’s true generalization error, and
can then pick the one with the smallest estimated generalization error. Usually,
somewhere between 1/4 − 1/3 of the data is used in the hold out cross validation
set, and 30% is a typical choice.
Optionally, step 3 in the algorithm may also be replaced with selecting the
model Mi according to arg mini ε̂ Scv (hi ), and then retraining Mi on the entire
training set S. (This is often a good idea, with one exception being learning algorithms that are very sensitive to perturbations of the initial conditions and/or data. For these methods, M_i doing well on S_train does not necessarily mean it will also do well on S_cv, and it might be better to forgo this retraining step.)
The disadvantage of using hold-out cross validation is that it ‘‘wastes'' about 30% of the data. Even if we were to take the optional step of retraining the model on the entire training set, it's still as if we're trying to find a good model for a learning problem in which we had 0.7n training examples, rather than n training examples, since we're testing models that were trained on only 0.7n examples each time. While this is fine if data is abundant and/or cheap, in learning problems in which data is scarce (consider a problem with n = 20, say), we'd like to do something better.
Here is a method, called k-fold cross validation, that holds out less data each
time:
1. Randomly split S into k disjoint subsets of n/k training examples each. Let's call these subsets S₁, . . . , S_k.

2. For each model Mi , we evaluate it as follows:

• For j = 1, . . . , k:
– Train the model M_i on S₁ ∪ · · · ∪ S_{j−1} ∪ S_{j+1} ∪ · · · ∪ S_k (i.e., train on all the data except S_j) to get some hypothesis h_ij.



– Test the hypothesis hij on S j , to get ε̂ S j (hij ).
• The estimated generalization error of model Mi is then calculated as the
average of the ε̂ S j (hij )’s (averaged over j).

3. Pick the model Mi with the lowest estimated generalization error, and retrain
that model on the entire training set S. The resulting hypothesis is then output
as our final answer.

A typical choice for the number of folds to use here would be k = 10. While the fraction of data held out each time is now 1/k—much smaller than before—this procedure may also be more computationally expensive than hold-out cross validation, since we now need to train each model k times.
While k = 10 is a commonly used choice, in problems in which data is really scarce, sometimes we will use the extreme choice of k = n in order to leave out as little data as possible each time. In this setting, we would repeatedly train on all but one of the training examples in S, and test on that held-out example. The resulting n = k errors are then averaged together to obtain our estimate of
the generalization error of a model. This method has its own name; since we’re
holding out one training example at a time, this method is called leave-one-out
cross validation.
Finally, even though we have described the different versions of cross validation
as methods for selecting a model, they can also be used more simply to evaluate a
single model or algorithm. For example, if you have implemented some learning
algorithm and want to estimate how well it performs for your application (or if
you have invented a novel learning algorithm and want to report in a technical
paper how well it performs on various test sets), cross validation would give a
reasonable way of doing so.
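A minimal Python sketch of k-fold cross validation is given below; the callables train and error stand in for whatever learning algorithm and error measure you are using, and are assumptions of this sketch rather than anything defined in the notes.

import numpy as np

def k_fold_error(train, error, X, y, k=10, seed=0):
    """Estimate the generalization error of one model by k-fold cross validation.

    train(X_tr, y_tr) -> hypothesis h;  error(h, X_te, y_te) -> scalar error."""
    n = X.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)                 # k disjoint subsets S_1, ..., S_k
    errs = []
    for j in range(k):
        test_idx = folds[j]
        train_idx = np.concatenate([folds[l] for l in range(k) if l != j])
        h = train(X[train_idx], y[train_idx])      # train on all data except S_j
        errs.append(error(h, X[test_idx], y[test_idx]))
    return float(np.mean(errs))                    # average of the k per-fold errors

# Model selection: compute k_fold_error for each candidate model, pick the one with
# the lowest estimated error, then retrain it on the entire training set S.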

13 Feature Selection

One special and important case of model selection is called feature selection.
To motivate this, imagine that you have a supervised learning problem where the number of features d is very large (perhaps d ≫ n), but you suspect that there is only a small number of features that are ‘‘relevant'' to the learning task. Even if

you use a simple linear classifier (such as the perceptron) over the d input features, the VC dimension of your hypothesis class would still be O(d), and thus overfitting would be a potential problem unless the training set is fairly large.
In such a setting, you can apply a feature selection algorithm to reduce the
number of features. Given d features, there are 2d possible feature subsets (since
each of the d features can either be included or excluded from the subset), and
thus feature selection can be posed as a model selection problem over 2d possible
models. For large values of d, it’s usually too expensive to explicitly enumerate
over and compare all 2d models, and so typically some heuristic search procedure
is used to find a good feature subset. The following search procedure is called
forward search:

Algorithm 13.1. Forward search.

Initialize F = ∅.
repeat
for i = 1, . . . , d do
if i 6∈ F then
Fi = F ∪ {i }
Use some version of cross validation to evaluate features Fi .
(i.e., train your learning algorithm using only the features in Fi ,
and estimate its generalization error.)
end for
Set F to be the best feature subset found in the previous step.
until convergence
Select and output the best feature subset that was evaluated during the
entire search procedure.

The outer loop of the algorithm can be terminated either when F = {1, . . . , d}
is the set of all features, or when |F | exceeds some pre-set threshold (correspond-
ing to the maximum number of features that you want the algorithm to consider
using).
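As a rough Python rendering of Algorithm 13.1, the sketch below assumes a helper cv_error(F) that trains the learner using only the features in F and returns its cross-validation error; that helper is hypothetical and not something defined in the notes.

def forward_search(d, cv_error, max_features=None):
    """Greedy wrapper feature selection over features {0, ..., d-1}."""
    F = set()
    best_subset, best_err = set(), float("inf")
    limit = max_features if max_features is not None else d
    while len(F) < limit:
        candidates = [F | {i} for i in range(d) if i not in F]
        # Evaluate each single-feature addition by cross validation.
        errs = [cv_error(Fi) for Fi in candidates]
        j = min(range(len(candidates)), key=lambda t: errs[t])
        F = candidates[j]                       # keep the best subset from this pass
        if errs[j] < best_err:
            best_subset, best_err = F, errs[j]
    # Output the best subset evaluated during the entire search.
    return best_subset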
The algorithm described above is one instantiation of wrapper model feature selection, since it is a procedure that ‘‘wraps'' around your learning algorithm, and repeatedly makes calls to the learning algorithm to evaluate how well it does using different feature subsets. Aside from forward search, other search
procedures can also be used. For example, backward search starts off with F =


{1, . . . , d} as the set of all features, and repeatedly deletes features one at a time
(evaluating single-feature deletions in a similar manner to how forward search
evaluates single-feature additions) until F = ∅.
Wrapper feature selection algorithms often work quite well, but can be computationally expensive given that they need to make many calls to the learning algorithm. Indeed, complete forward search (terminating when F = {1, . . . , d}) would take about O(d²) calls to the learning algorithm.
Filter feature selection methods give heuristic, but computationally much
cheaper, ways of choosing a feature subset. The idea here is to compute some
simple score S(i ) that measures how informative each feature xi is about the class
labels y. Then, we simply pick the k features with the largest scores S(i ).
One possible choice of the score would be to define S(i) to be (the absolute value of) the correlation between x_i and y, as measured on the training data. This would result in our choosing the features that are the most strongly correlated with the class labels. In practice, it is more common (particularly for discrete-valued features x_i) to choose S(i) to be the mutual information MI(x_i, y) between x_i and y:

    MI(x_i, y) = ∑_{x_i∈{0,1}} ∑_{y∈{0,1}} p(x_i, y) log [ p(x_i, y) / (p(x_i) p(y)) ]    (13.1)

(The equation above assumes that xi and y are binary-valued; more generally
the summations would be over the domains of the variables.) The probabilities
above p( xi , y), p( xi ) and p(y) can all be estimated according to their empirical
distributions on the training set.
To gain intuition about what this score does, note that the mutual information
can also be expressed as a Kullback-Leibler (KL) divergence:

MI( xi , y) = KL( p( xi , y) || p( xi ) p(y)) (13.2)

You’ll get to play more with KL-divergence in the problem sets, but informally,
this gives a measure of how different the probability distributions p( xi , y) and
p( xi ) p(y) are. If xi and y are independent random variables, then we would have
p( xi , y) = p( xi ) p(y), and the KL-divergence between the two distributions will
be zero. This is consistent with the idea that if x_i and y are independent, then x_i is
clearly very ‘‘non-informative’’ about y, and thus the score S(i ) should be small.
Conversely, if xi is very ‘‘informative’’ about y, then their mutual information
MI( xi , y) would be large.



One final detail: Now that you’ve ranked the features according to their scores
S(i ), how do you decide how many features k to choose? Well, one standard way
to do so is to use cross validation to select among the possible values of k. For
example, when applying naive Bayes to text classification— a problem where d,
the vocabulary size, is usually very large—using this method to select a feature
subset often results in increased classifier accuracy.
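For binary-valued features and labels, the score in equation (13.1) can be estimated from empirical counts as in the sketch below; the small constant eps is an implementation detail added to avoid log(0), not part of the formula.

import numpy as np

def mi_score(xi, y, eps=1e-12):
    """Empirical mutual information between a binary feature xi and binary labels y."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((xi == a) & (y == b)) + eps   # p(xi = a, y = b)
            p_a = np.mean(xi == a) + eps                 # p(xi = a)
            p_b = np.mean(y == b) + eps                  # p(y = b)
            mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

# Filter selection: rank features by mi_score and keep the k highest-scoring ones,
# choosing k itself by cross validation.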

14 Bayesian statistics and regularization

In this section, we will talk about one more tool in our arsenal for our battle
against overfitting.
At the beginning of the quarter, we talked about parameter fitting using maxi-
mum likelihood estimation (MLE), and chose our parameters according to
    θ_MLE = arg max_θ ∏_{i=1}^{n} p(y^{(i)} | x^{(i)}; θ).    (14.1)

Throughout our subsequent discussions, we viewed θ as an unknown parameter of the world. This view of θ as being constant-valued but unknown is taken in frequentist statistics. In the frequentist view of the world, θ is not random—it just happens to be unknown—and it's our job to come up with statistical procedures (such as maximum likelihood) to try to estimate this parameter.
An alternative way to approach our parameter estimation problems is to take
the Bayesian view of the world, and think of θ as being a random variable whose
value is unknown. In this approach, we would specify a prior distribution p(θ )
on θ that expresses our ‘‘prior beliefs’’ about the parameters. Given a training set
S = {(x^{(i)}, y^{(i)})}_{i=1}^{n}, when we are asked to make a prediction on a new value of
x, we can then compute the posterior distribution on the parameters:

    p(θ | S) = p(S | θ) p(θ) / p(S)    (14.2)
             = [ ∏_{i=1}^{n} p(y^{(i)} | x^{(i)}, θ) ] p(θ) / ∫_θ [ ∏_{i=1}^{n} p(y^{(i)} | x^{(i)}, θ) ] p(θ) dθ    (14.3)

In the equation above, p(y^{(i)} | x^{(i)}, θ) comes from whatever model you're using for your learning problem. For example, if you are using Bayesian logistic regression, then you might choose p(y^{(i)} | x^{(i)}, θ) = h_θ(x^{(i)})^{y^{(i)}} (1 − h_θ(x^{(i)}))^{(1−y^{(i)})}, where h_θ(x^{(i)}) = 1/(1 + exp(−θ^⊤ x^{(i)})).¹

    ¹ Since we are now viewing θ as a random variable, it is okay to condition on its value, and write ‘‘p(y | x, θ)'' instead of ‘‘p(y | x; θ).''

When we are given a new test example x and asked to make a prediction on it, we can compute our posterior distribution on the class label using the posterior distribution on θ:

    p(y | x, S) = ∫_θ p(y | x, θ) p(θ | S) dθ    (14.4)

In the equation above, p(θ | S) comes from equation (14.2). Thus, for example, if the goal is to predict the expected value of y given x, then we would output:²

    ² The integral below would be replaced by a summation if y is discrete-valued.

    E[y | x, S] = ∫_y y p(y | x, S) dy    (14.5)

The procedure that we’ve outlined here can be thought of as doing ‘‘fully
Bayesian’’ prediction, where our prediction is computed by taking an average
with respect to the posterior p(θ | S) over θ. Unfortunately, in general it is com-
putationally very difficult to compute this posterior distribution. This is because
it requires taking integrals over the (usually high-dimensional) θ as in equa-
tion (14.2), and this typically cannot be done in closed-form.
Thus, in practice we will instead approximate the posterior distribution for θ.
One common approximation is to replace our posterior distribution for θ (as in
equation (14.4)) with a single point estimate. The MAP (maximum a posteriori)
estimate for θ is given by:

    θ_MAP = arg max_θ ∏_{i=1}^{n} p(y^{(i)} | x^{(i)}, θ) p(θ)    (14.6)

Note that this is the same formula as for the MLE (maximum likelihood) estimate for θ, except for the prior p(θ) term at the end.
In practical applications, a common choice for the prior p(θ ) is to assume
that θ ∼ N (0, τ 2 I ). Using this choice of prior, the fitted parameters θ MAP will
have smaller norm than that selected by maximum likelihood. In practice, this
causes the Bayesian MAP estimate to be less susceptible to overfitting than the ML
estimate of the parameters. For example, Bayesian logistic regression turns out to
be an effective algorithm for text classification, even though in text classification we usually have d ≫ n.
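For a concrete connection to regularization: with the prior θ ∼ N(0, τ²I), taking logs in equation (14.6) shows that MAP estimation amounts to minimizing the negative log-likelihood plus the penalty (1/(2τ²))‖θ‖². A minimal gradient-descent sketch for MAP logistic regression follows; the learning rate and iteration count are arbitrary illustrative choices.

import numpy as np

def map_logistic_regression(X, y, tau2=1.0, lr=0.1, iters=1000):
    """MAP estimate for logistic regression with prior theta ~ N(0, tau^2 I).

    Equivalent to minimizing the negative log-likelihood plus (1/(2 tau^2)) ||theta||^2."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        h = 1.0 / (1.0 + np.exp(-X @ theta))     # h_theta(x^(i)) for all i
        grad_nll = X.T @ (h - y)                 # gradient of the negative log-likelihood
        grad_prior = theta / tau2                # gradient of the Gaussian-prior penalty
        theta -= lr / n * (grad_nll + grad_prior)
    return theta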

15 Some calculations from bias variance


From CS229 Fall 2020, Christopher Ré, Stanford University.

This section contains a reprise of the eigenvalue arguments to understand how variance is reduced by regularization. We also describe different ways regulariza-
tion can occur including from the algorithm or initialization. This note contains
some additional calculations from the lecture and Piazza, just so that we have
typeset versions of them. They contain no new information over the lecture, but
they do supplement the previous sections.
Recall we have a design matrix X ∈ Rn×d and labels y ∈ Rn . We are interested
in the underdetermined case n < d so that rank( X ) ≤ n < d. We consider the
following optimization problem for least squares with a regularization parameter
λ ≥ 0:
    ℓ(θ; λ) = min_{θ∈ℝ^d} (1/2)‖Xθ − y‖² + (λ/2)‖θ‖²    (15.1)
Normal equations. Computing derivatives as we did for the normal equations,
we see that:

∇θ `(θ; λ) = X > ( Xθ − y) + λθ = ( X > X + λI )θ − X > y (15.2)

By setting ∇θ `(θ, λ) = 0 we can solve for the θ̂ that minimizes the above problem.
Explicitly, we have:
θ̂ = ( X > X + λI )−1 X > y (15.3)
To see that the inverse in equation (15.3) exists, we observe that X > X is a sym-
metric, real d × d matrix so it has d eigenvalues (some may be 0). Moreover, it is
positive semidefinite, and we capture this by writing eig( X > X ) = {σ12 , . . . , σd2 }.
Now, inspired by the regularized problem, we examine:
n o
eig( X > X + λI ) = σ12 + λ, . . . , σd2 + λ (15.4)

Since σi2 ≥ 0 for all i ∈ [d], if we set λ > 0 then X > X + λI is full rank, and the
inverse of ( X > X + λI ) exists. In turn, this means there is a unique such θ̂.
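Equation (15.3) is one line of NumPy. The sketch below, with arbitrary problem sizes, also checks the eigenvalue claim that X^⊤X + λI has all eigenvalues at least λ > 0 even when n < d.

import numpy as np

n, d, lam = 20, 50, 0.1                     # underdetermined: n < d
rng = np.random.default_rng(0)
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)

# theta_hat = (X^T X + lambda I)^{-1} X^T y; solve() is preferred to an explicit inverse.
theta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Eigenvalues of X^T X + lambda I are sigma_i^2 + lambda >= lambda > 0, so it is full rank.
print(np.linalg.eigvalsh(X.T @ X + lam * np.eye(d)).min())   # >= lam (up to round-off)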


Variance. Recall that in bias-variance, we are concerned with the variance of θ̂ as


we sample the training set. We want to argue that as the regularization parameter
λ increases, the variance in the fitted θ̂ decreases. We won’t carry out the full
formal argument, but it suffices to make one observation that is immediate from
equation (15.3): the variance of θ̂ is proportional to the eigenvalues of ( X > X + λI )−1 .
To see this, observe that the eigenvalues of an inverse are just the inverse of the
eigenvalues:
    eig((X^⊤X + λI)^{−1}) = { 1/(σ₁² + λ), . . . , 1/(σ_d² + λ) }    (15.5)

Now, condition on the points we draw, namely X. Then, recall that randomness
is in the label noise (recall the linear regression model y ∼ Xθ ∗ + N (0, τ 2 I ) =
N ( Xθ ∗ , τ 2 I )).
Recall a fact about the multivariate normal distribution:

if y ∼ N (µ, Σ) then Ay ∼ N ( Aµ, AΣA> ) (15.6)

Using linearity, we can verify that the expectation of θ̂ is:

    E[θ̂] = E[(X^⊤X + λI)^{−1} X^⊤ y]    (15.7)
         = E[(X^⊤X + λI)^{−1} X^⊤ (Xθ* + N(0, τ²I))]    (15.8)
         = E[(X^⊤X + λI)^{−1} X^⊤ (Xθ*)]    (15.9)
         = (X^⊤X + λI)^{−1} (X^⊤X) θ*    (essentially a ‘‘shrunk'' θ*)

The last line above suggests that the more regularization we add (larger the λ),
the more the estimated θ̂ will be shrunk towards 0. In other words, regularization
adds bias (towards zero in this case). Though we paid the cost of higher bias, we
gain by reducing the variance of θ̂. To see this bias-variance tradeoff concretely,
observe the covariance matrix of θ̂:

    C := Cov[θ̂]    (15.10)
      = ((X^⊤X + λI)^{−1} X^⊤) (τ²I) (X (X^⊤X + λI)^{−1})    (15.11)

and

    eig(C) = { τ²σ₁²/(σ₁² + λ)², . . . , τ²σ_d²/(σ_d² + λ)² }    (15.12)



Notice that the entire spectrum of the covariance is a decreasing function of λ. By decomposing in the eigenvalue basis, we can see that actually E[‖θ̂ − θ*‖²] is a decreasing function of λ, as desired.

Gradient descent. We show that you can initialize gradient descent in a way that effectively regularizes underdetermined least squares—even with no regularization penalty (λ = 0). Our first observation is that any point x ∈ ℝ^d can be decomposed
into two orthogonal components x0 , x1 such that:

x = x0 + x1 and x0 ∈ Null( X ) and x1 ∈ Range( X > ) (15.13)

Recall that Null(X) and Range(X^⊤) are orthogonal subspaces by the fundamental theorem of linear algebra. We write P₀ for the projection onto the null space and P₁ for the projection onto the range, so x₀ = P₀(x) and x₁ = P₁(x).
If one initializes at a point θ, then we observe that the gradient is orthogonal to the null space. That is, if g(θ) = X^⊤(Xθ − y), then g^⊤ P₀(v) = 0 for any v ∈ ℝ^d. But, then:

    P₀(θ^{(t+1)}) = P₀(θ^{(t)} − α g(θ^{(t)})) = P₀(θ^{(t)}) − α P₀(g(θ^{(t)})) = P₀(θ^{(t)})    (15.14)

That is, no learning happens in the null. Whatever portion is in the null that we
initialize stays there throughout execution.
A key property of the Moore-Penrose pseudoinverse is that if θ̂ = (X^⊤X)⁺ X^⊤ y, then P₀(θ̂) = 0. Hence, the gradient descent solution initialized at θ₀ can be written θ̂ + P₀(θ₀). Two immediate observations:

• Using the Moore-Penrose inverse acts as regularization, because it selects the


solution θ̂.

• So does gradient descent—provided that we initialize at θ0 = 0. This is partic-


ularly interesting, as many modern machine learning techniques operate in
these underdetermined regimes.

We’ve argued that there are many ways to find equivalent solutions, and
that this allows us to understand the effect on the model fitting procedure as
regularization. Thus, there are many ways to find that equivalent solution. Many
modern methods of machine learning including dropout and data augmentation
are not penalty, but their effect is understood as regularization. One contrast with
the above methods is that they often depend on some property of the data or for
108 chapter 16. bias-variance and error analysis

how much they effectively regularization. In some sense, they adapt to the data.
A final comment is that in the same sense above, adding more data regularizes
the model as well!
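The claim that gradient descent with λ = 0, initialized at θ₀ = 0, recovers the minimum-norm (pseudoinverse) solution can be checked numerically; the problem sizes, step size, and iteration count below are arbitrary choices for this sketch.

import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 30                                   # underdetermined least squares
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)

theta_pinv = np.linalg.pinv(X) @ y              # Moore-Penrose solution: P0(theta_hat) = 0

theta = np.zeros(d)                             # initialize at theta_0 = 0 (no null-space component)
alpha = 1.0 / np.linalg.norm(X, 2) ** 2         # step size below 1 / (largest eigenvalue of X^T X)
for _ in range(20000):
    theta -= alpha * X.T @ (X @ theta - y)      # gradient step; never enters Null(X)

print(np.linalg.norm(theta - theta_pinv))       # small: the two solutions coincide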

16 Bias-variance and error analysis


From CS229 Fall 2017, Yoann Le Calonnec, Stanford University.

16.1 The bias-variance tradeoff

Assume you are given a well fitted machine learning model fˆ that you want to
apply on some test dataset. For instance, the model could be a linear regression
whose parameters were computed using some training set different from your
test set. For each point x in your test set, you want to predict the associated target
y ∈ ℝ, and compute the mean squared error (MSE):

    E_{(x,y)∼test set} [ |f̂(x) − y|² ]    (16.1)

You now realize that this MSE is too high, and try to find an explanation to this
result:

• Overfitting: the model is too closely related to the examples in the training set
and doesn’t generalize well to other examples.

• Underfitting: the model didn’t gather enough information from the training set,
and doesn’t capture the link between the features x and the target y.

• The data is simply noisy, that is, the model is neither overfitting nor underfitting, and the high MSE is simply due to the amount of noise in the dataset.

Our intuition can be formalized by the bias-variance tradeoff.


Assume that the points in your training/test set are all taken from a similar
distribution, with

y i = f ( x i ) + ei , where the noise ei satisfies E(ei ) = 0, Var(ei ) = σ2 (16.2)


and your goal is to compute f . By looking at your training set, you obtain an
estimate fˆ. Now use this estimate with your test set, meaning that for each example
j in the test set, your prediction for y j = f ( x j ) + e j is fˆ( x j ). Here, x j is a fixed
real number (or vector if the feature space is multi-dimensional) thus f ( x j ) is
fixed, and e j is a real random variable with mean 0 and variance σ2 . The crucial
observation is that fˆ( x j ) is random since it depends on the values ei from the
training set. That’s why talking about the bias E[ fˆ( x ) − f ( x )] and the variance of
fˆ makes sense.
We can now compute our MSE on the test set by computing the following expectation with respect to the possible training sets (since f̂ is a random variable, being a function of the choice of the training set):
    test MSE = E[(y − f̂(x))²]    (16.3)
             = E[(e + f(x) − f̂(x))²]    (16.4)
             = E[e²] + E[(f(x) − f̂(x))²]    (16.5)
             = σ² + (E[f(x) − f̂(x)])² + Var(f(x) − f̂(x))    (16.6)
             = σ² + Bias(f̂(x))² + Var(f̂(x))    (16.7)

There is nothing we can do about the first term σ², as we cannot predict the noise e by definition. The bias term is due to underfitting, meaning that on average, f̂ does not predict f. The last term is closely related to overfitting: the prediction f̂ is too close to the training values y and varies a lot with the choice of our training set.
To sum up, we can understand our MSE as follows:

High Bias ←→ Underfitting


High Variance ←→ Overfitting
Large σ2 ←→ Noisy data

Hence, when analyzing the performance of a machine learning algorithm, we


must always ask ourselves how to reduce the bias without increasing the variance,
and respectively how to reduce the variance without increasing the bias. Most
of the time, reducing one will increase the other, and there is a tradeoff between
bias and variance.
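One way to see the tradeoff empirically is to simulate it: draw many training sets from the same distribution, fit f̂ on each, and examine the spread of the predictions at a fixed test point. The sketch below does this for polynomial regression; the true function, noise level, and polynomial degrees are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * x)                     # "true" function (illustrative)
x_test, sigma, n_train, n_trials = 0.5, 0.3, 20, 500

for degree in (1, 3, 9):
    preds = []
    for _ in range(n_trials):
        x = rng.uniform(-1, 1, n_train)
        y = f(x) + sigma * rng.standard_normal(n_train)   # y_i = f(x_i) + e_i
        coeffs = np.polyfit(x, y, degree)                  # fit f-hat on this training set
        preds.append(np.polyval(coeffs, x_test))
    preds = np.array(preds)
    bias2 = (preds.mean() - f(x_test)) ** 2                # squared bias at x_test
    var = preds.var()                                      # variance across training sets
    print(degree, bias2, var)   # low degree: high bias; high degree: high variance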


16.2 Error analysis

Even though understanding whether our poor test error is due to high bias or high
variance is important, knowing which parts of the machine learning algorithm
lead to this error or score is crucial. Consider the following machine learning pipeline, divided into several steps:

1. The inputs are taken from a camera image

2. Preprocessing to remove the background on the image. For instance, if the images are taken from a security camera, the background is always the same, and we could remove it easily by keeping the pixels that changed on the image.

3. Detect the position of the face.

4. Detect the eyes - Detect the nose - Detect the mouth

5. Final logistic regression step to predict the label

If you build a complicated system like this one, you might want to figure out how much error is attributable to each of the components, and how good each of these components is. Indeed, if one of these components is really problematic, you might want to spend more time trying to improve the performance of that one component. How do you decide what part to focus on?
One thing we can do is plug in the ground-truth for each component, and
see how accuracy changes. Let’s say the overall accuracy of the system is 85%
(pretty bad). You can now take your development set and manually give it the
perfect background removal, that is, instead of using your background removal
algorithm, manually specify the perfect background removal yourself (using
photoshop for instance), and look at how much that affects the performance of the overall system.
Now let's say the accuracy only improves by 0.1%. This gives us an upper bound: even if we worked for years on background removal, it wouldn't help our system by more than 0.1%.


Table 16.1. Accuracy when providing the system with the perfect component.

    Component                         Accuracy
    Overall system                    85%
    Preprocess (remove background)    85.1%
    Face detection                    91%
    Eyes segmentation                 95%
    Nose segmentation                 96%
    Mouth segmentation                97%
    Logistic regression               100%

Now let’s give the pipeline the perfect face detection by specifying the position
of the face manually, see how much we improve the performance, and so on. The
results are specified in the table 16.1.
Looking at the table, we know that working on the background removal won’t
help much. It also tells us where the biggest jumps are. We notice that having an accurate face detection mechanism really improves the performance, and similarly, the eyes really help make the prediction more accurate.
Error analysis is also useful when publishing a paper, since it’s a convenient
way to analyze the error of an algorithm and explain which parts should be
improved.

16.3 Ablative analysis

While error analysis tries to explain the difference between current performance
and perfect performance, ablative analysis tries to explain the difference between
some baseline (much poorer) performance and current performance.
For instance, suppose you have built a good anti-spam classifier by adding lots
of clever features to logistic regression

• Spelling correction

• Sender host features

• Email header features

• Email text parser features

• Javascript parser

toc 2021-05-23 00:18:27-07:00, draft: send comments to [email protected]


112 chapter 16. bias-variance and error analysis

• Features from embedded images

and your question is: How much did each of these components really help?
In this example, let’s say that simple logistic regression without any clever
features gets 94% performance, but when adding these clever features, we get
99.9% performance. In ablative analysis, what we do is start from the current level of performance 99.9%, and slowly take away all of these features to see how it affects performance. The results are provided in table 16.2.

Table 16.2. Accuracy when removing features from logistic regression.

    Component                      Accuracy
    Overall system                 99.9%
    Spelling correction            99.0%
    Sender host features           98.9%
    Email header features          98.9%
    Email text parser features     95%
    Javascript parser              94.5%
    Features from images           94.0%
When presenting the results in a paper, ablative analysis really helps in analyzing which features helped decrease the misclassification rate. Instead of simply giving the loss/error rate of the algorithm, we can provide evidence that some specific features are actually more important than others.

16.3.1 Analyze your mistakes


Assume you are given a dataset with pictures of animals, and your goal is to identify pictures of cats that you would eventually send to the members of a community of cat lovers. You notice that there are many pictures of dogs in the original dataset, and wonder whether you should build a special algorithm to identify the pictures of dogs and avoid sending dog pictures to cat lovers.
One thing you can do is take 100 examples from your development set that are misclassified, and count up how many of these 100 mistakes are dogs. If 5% of them are dogs, then even if you come up with a solution to identify the dogs, your error would only go down by 5%, that is, your accuracy would go up from 90% to 90.5%. However, if 50 of these 100 errors are dogs, then you could improve your accuracy to reach 95%.


By analyzing your mistakes, you can focus on what’s really important. If you
notice that 80 out of your 100 mistakes are blurry images, then work hard on
classifying correctly these blurry images. If you notice that 70 out of the 100 errors
are great cats, then focus on this specific task of identifying great cats.
In brief, do not waste your time improving parts of your algorithm that won't really help decrease your error rate, and focus on what really matters.



Part VII: Unsupervised Learning

From CS229 Spring 2021, Andrew Ng, Moses Charikar, Christopher Ré & Tengyu Ma, Stanford University.
17 The k-means Clustering Algorithm

In the clustering problem, we are given a training set { x (1) , . . . , x (n) }, and want
to group the data into a few cohesive ‘‘clusters.’’ Here, x (i) ∈ Rd as usual; but no
labels y(i) are given. So, this is an unsupervised learning problem. The k-means
clustering algorithm is as follows:

1. Initialize cluster centroids µ1 , µ2 , . . . , µk ∈ Rd randomly.

2. Repeat until convergence:

• For every i, set:

    c^{(i)} := arg min_j ‖x^{(i)} − μ_j‖²

• For each j, set:

    μ_j := ∑_{i=1}^{n} 1{c^{(i)} = j} x^{(i)} / ∑_{i=1}^{n} 1{c^{(i)} = j}
In the algorithm above, k (a parameter of the algorithm) is the number of
clusters we want to find; and the cluster centroids µ j represent our current guesses
for the positions of the centers of the clusters. To initialize the cluster centroids
(in step 1 of the algorithm above), we could choose k training examples randomly,
and set the cluster centroids to be equal to the values of these k examples. (Other
initialization methods are also possible.)
The inner-loop of the algorithm repeatedly carries out two steps: (i) ‘‘Assign-
ing’’ each training example x (i) to the closest cluster centroid µ j , and (ii) Moving
each cluster centroid µ j to the mean of the points assigned to it. Figure 1 shows
an illustration of running k-means.
Is the k-means algorithm guaranteed to converge? Yes it is, in a certain sense.
In particular, let us define the distortion function to be:

    J(c, μ) = ∑_{i=1}^{n} ‖x^{(i)} − μ_{c^{(i)}}‖²

Thus, J measures the sum of squared distances between each training example
x (i) and the cluster centroid µc(i) to which it has been assigned. It can be shown
that k-means is exactly coordinate descent on J. Specifically, the inner-loop of
k-means repeatedly minimizes J with respect to c while holding µ fixed, and then
minimizes J with respect to µ while holding c fixed. Thus, J must monotonically
decrease, and the value of J must converge. (Usually, this implies that c and µ
will converge too. In theory, it is possible for k-means to oscillate between a few
different clusterings—i.e., a few different values for c and/or µ—that have exactly
the same value of J, but this almost never happens in practice.)
The distortion function J is a non-convex function, and so coordinate descent on
J is not guaranteed to converge to the global minimum. In other words, k-means
can be susceptible to local optima. Very often k-means will work fine and come
up with very good clusterings despite this. But if you are worried about getting
stuck in bad local minima, one common thing to do is run k-means many times
(using different random initial values for the cluster centroids µ j ). Then, out of
all the different clusterings found, pick the one that gives the lowest distortion
J (c, µ).
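The two inner-loop steps translate almost directly into NumPy, as in the following sketch; initializing the centroids at k randomly chosen training examples and stopping when the centroids stop moving are the simple choices mentioned above.

import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]       # init centroids at k examples
    for _ in range(iters):
        # Assignment step: c^(i) = arg min_j ||x^(i) - mu_j||^2
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        c = d2.argmin(axis=1)
        # Update step: move each centroid to the mean of the points assigned to it.
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):                          # centroids stopped moving
            break
        mu = new_mu
    return c, mu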

18 Mixtures of Gaussians and the EM Algorithm

In this chapter, we discuss the EM (Expectation-Maximization) algorithm for


density estimation.
Suppose that we are given a training set { x (1) , . . . , x (n) } as usual. Since we are
in the unsupervised learning setting, these points do not come with any labels.
We wish to model the data by specifying a joint distribution p( x (i) , z(i) ) = p( x (i) |
z(i) ) p(z(i) ). Here, z(i) ∼ Multinomial(φ) (where φj ≥ 0, ∑kj=1 φj = 1, and the
parameter φj gives p(z(i) = j)), and x (i) | z(i) = j ∼ N (µ j , Σ j ). We let k denote
the number of values that the z(i) ’s can take on. Thus, our model posits that each

x (i) was generated by randomly choosing z(i) from {1, . . . , k}, and then x (i) was
drawn from one of k Gaussians depending on z(i) . This is called the mixture of
Gaussians model. Also, note that the z(i) ’s are latent random variables, meaning
that they’re hidden/unobserved. This is what will make our estimation problem
difficult.
The parameters of our model are thus φ, µ and Σ. To estimate them, we can
write down the likelihood of our data:

    ℓ(φ, μ, Σ) = ∑_{i=1}^{n} log p(x^{(i)}; φ, μ, Σ)
               = ∑_{i=1}^{n} log ∑_{z^{(i)}=1}^{k} p(x^{(i)} | z^{(i)}; μ, Σ) p(z^{(i)}; φ)

However, if we set to zero the derivatives of this formula with respect to the
parameters and try to solve, we’ll find that it is not possible to find the maximum
likelihood estimates of the parameters in closed form. (Try this yourself at home.)
The random variables z(i) indicate which of the k Gaussians each x (i) had come
from. Note that if we knew what the z(i) ’s were, the maximum likelihood problem
would have been easy. Specifically, we could then write down the likelihood as:

    ℓ(φ, μ, Σ) = ∑_{i=1}^{n} [ log p(x^{(i)} | z^{(i)}; μ, Σ) + log p(z^{(i)}; φ) ]

Maximizing this with respect to φ, μ and Σ gives the parameters:

    φ_j = (1/n) ∑_{i=1}^{n} 1{z^{(i)} = j}
    μ_j = ∑_{i=1}^{n} 1{z^{(i)} = j} x^{(i)} / ∑_{i=1}^{n} 1{z^{(i)} = j}
    Σ_j = ∑_{i=1}^{n} 1{z^{(i)} = j} (x^{(i)} − μ_j)(x^{(i)} − μ_j)^⊤ / ∑_{i=1}^{n} 1{z^{(i)} = j}

Indeed, we see that if the z^{(i)}'s were known, then maximum likelihood estimation becomes nearly identical to what we had when estimating the parameters of the Gaussian discriminant analysis model, except that here the z^{(i)}'s play the role of the class labels.¹

    ¹ There are other minor differences in the formulas here from what we'd obtained in PS1 with Gaussian discriminant analysis, first because we've generalized the z^{(i)}'s to be multinomial rather than Bernoulli, and second because here we are using a different Σ_j for each Gaussian.

However, in our density estimation problem, the z^{(i)}'s are not known. What can we do? The EM algorithm is an iterative algorithm that has two main steps.

Applied to our problem, in the E-step, it tries to ‘‘guess’’ the values of the z(i) ’s.
In the M-step, it updates the parameters of our model based on our guesses. Since
in the M-step we are pretending that the guesses in the first part were correct, the
maximization becomes easy. Here’s the algorithm:
• Repeat until convergence:

    – (E-step) For each i, j, set:

        w_j^{(i)} := p(z^{(i)} = j | x^{(i)}; φ, μ, Σ)

    – (M-step) Update the parameters:

        φ_j = (1/n) ∑_{i=1}^{n} w_j^{(i)}
        μ_j = ∑_{i=1}^{n} w_j^{(i)} x^{(i)} / ∑_{i=1}^{n} w_j^{(i)}
        Σ_j = ∑_{i=1}^{n} w_j^{(i)} (x^{(i)} − μ_j)(x^{(i)} − μ_j)^⊤ / ∑_{i=1}^{n} w_j^{(i)}
In the E-step, we calculate the posterior probability of the z^{(i)}'s, given x^{(i)} and using the current setting of our parameters. I.e., using Bayes' rule, we obtain:

    p(z^{(i)} = j | x^{(i)}; φ, μ, Σ) = p(x^{(i)} | z^{(i)} = j; μ, Σ) p(z^{(i)} = j; φ) / ∑_{l=1}^{k} p(x^{(i)} | z^{(i)} = l; μ, Σ) p(z^{(i)} = l; φ)

Here, p(x^{(i)} | z^{(i)} = j; μ, Σ) is given by evaluating the density of a Gaussian with mean μ_j and covariance Σ_j at x^{(i)}; p(z^{(i)} = j; φ) is given by φ_j, and so on. The values w_j^{(i)} calculated in the E-step represent our ‘‘soft'' guesses² for the values of z^{(i)}.

    ² The term ‘‘soft'' refers to our guesses being probabilities and taking values in [0, 1]; in contrast, a ‘‘hard'' guess is one that represents a single best guess (such as taking values in {0, 1} or {1, . . . , k}).

Also, you should contrast the updates in the M-step with the formulas we had when the z^{(i)}'s were known exactly. They are identical, except that instead of the indicator functions ‘‘1{z^{(i)} = j}'' indicating from which Gaussian each datapoint had come, we now instead have the w_j^{(i)}'s.
The EM algorithm is also reminiscent of the k-means clustering algorithm, except that instead of the ‘‘hard'' cluster assignments c^{(i)}, we instead have the ‘‘soft'' assignments w_j^{(i)}. Similar to k-means, it is also susceptible to local optima, so reinitializing at several different initial parameters may be a good idea.
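The E- and M-steps above translate nearly line for line into NumPy. The sketch below uses scipy.stats for the Gaussian densities and adds a small ridge to each covariance for numerical stability; both are implementation details of this sketch, not part of the derivation.

import numpy as np
from scipy.stats import multivariate_normal

def gmm_em(X, k, iters=100, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    phi = np.full(k, 1.0 / k)
    mu = X[rng.choice(n, size=k, replace=False)]
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    for _ in range(iters):
        # E-step: w[i, j] = p(z^(i) = j | x^(i); phi, mu, Sigma), via Bayes' rule.
        w = np.column_stack([phi[j] * multivariate_normal.pdf(X, mu[j], Sigma[j])
                             for j in range(k)])
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the reweighted versions of the fully-observed MLE formulas.
        nj = w.sum(axis=0)
        phi = nj / n
        mu = (w.T @ X) / nj[:, None]
        for j in range(k):
            diff = X - mu[j]
            Sigma[j] = (w[:, j, None] * diff).T @ diff / nj[j] + 1e-6 * np.eye(d)
    return phi, mu, Sigma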


It’s clear that the EM algorithm has a very natural interpretation of repeatedly
trying to guess the unknown z(i) ’s; but how did it come about, and can we make
any guarantees about it, such as regarding its convergence? In the next set of
notes, we will describe a more general view of EM, one that will allow us to easily
apply it to other estimation problems in which there are also latent variables, and
which will allow us to give a convergence guarantee.



Part VIII: The EM Algorithm

From CS229 Spring 2021, Andrew Ng, Moses Charikar, Christopher Ré & Tengyu Ma, Stanford University.

In the previous set of notes, we talked about the EM algorithm as applied to fitting a mixture of Gaussians. In this set of notes, we give a broader view of the EM algorithm, and show how it can be applied to a large family of estimation problems with latent variables. We begin our discussion with a very useful result called Jensen's inequality.

19 Jensen’s inequality

Let f be a function whose domain is the set of real numbers. Recall that f is
a convex function if f 00 ( x ) ≥ 0 (for all x ∈ R). In the case of f taking vector-
valued inputs, this is generalized to the condition that its hessian H is positive
semi-definite (H ≥ 0). If f 00 ( x ) > 0 for all x, then we say f is strictly convex (in
the vector-valued case, the corresponding statement is that H must be positive
definite, written H > 0). Jensen’s inequality can then be stated as follows:

Theorem. Let f be a convex function, and let X be a random variable. Then:

E[ f ( X )] ≥ f (E[ X ]). (19.1)

Moreover, if f is strictly convex, then E[ f ( X )] = f (E[ X ]) holds true if and only if


X = E[ X ] with probability 1 (i.e., if X is a constant).
Recall our convention of occasionally dropping the parentheses when writing
expectations, so in the theorem above, f (EX ) = f (E[ X ]).
For an interpretation of the theorem, consider the figure below.
Here, f is a convex function shown by the solid line. Also, X is a random
variable that has a 0.5 chance of taking the value a, and a 0.5 chance of taking the
value b (indicated on the x-axis). Thus, the expected value of X is given by the
midpoint between a and b.
We also see the values f ( a), f (b) and f (E[ X ]) indicated on the y-axis. Moreover,
the value E[ f ( X )] is now the midpoint on the y-axis between f ( a) and f (b).
From our example, we see that because f is convex, it must be the case that
E[ f ( X )] ≥ f ( EX ).
Incidentally, quite a lot of people have trouble remembering which way the
inequality goes, and remembering a picture like this is a good way to quickly
figure out the answer.

Remark. Recall that f is [strictly] concave if and only if − f is [strictly] convex


(i.e., f 00 ( x ) ≤ 0 or H ≤ 0). Jensen’s inequality also holds for concave functions f ,
but with the direction of all the inequalities reversed (E[ f ( X )] ≤ f ( EX ), etc.).
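A quick numerical sanity check of the inequality, and of the concave case used later with f(x) = log x, is below; the choice of distribution for X is arbitrary.

import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=100000)      # a positive random variable

# Convex f(x) = x^2: E[f(X)] >= f(E[X]).
print(np.mean(X ** 2) >= np.mean(X) ** 2)        # True

# Concave f(x) = log x: the direction is reversed, E[f(X)] <= f(E[X]).
print(np.mean(np.log(X)) <= np.log(np.mean(X)))  # True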

20 The EM algorithm

Suppose we have an estimation problem in which we have a training set


{ x (1) , . . . , x ( n ) }
consisting of n independent examples. We have a latent variable model p(x, z; θ) with z being the latent variable (which for simplicity is assumed to take a finite number of values). The density for x can be obtained by marginalizing over the latent variable z:

p( x; θ ) = ∑ p(x, z; θ ) (20.1)
z

We wish to fit the parameters θ by maximizing the log-likelihood of the data, defined by:

    ℓ(θ) = ∑_{i=1}^{n} log p(x^{(i)}; θ)    (20.2)

We can rewrite the objective in terms of the joint density p(x, z; θ) by:

    ℓ(θ) = ∑_{i=1}^{n} log p(x^{(i)}; θ)    (20.3)
         = ∑_{i=1}^{n} log ∑_{z^{(i)}} p(x^{(i)}, z^{(i)}; θ)    (20.4)

But explicitly finding the maximum likelihood estimates of the parameters θ may be hard since it will result in difficult non-convex optimization problems.¹ Here, the z^{(i)}'s are the latent random variables; and it is often the case that if the z^{(i)}'s were observed, then maximum likelihood estimation would be easy.

    ¹ It's mostly an empirical observation that the optimization problem is difficult to optimize.

In such a setting, the EM algorithm gives an efficient method for maximum likelihood estimation. Maximizing ℓ(θ) explicitly might be difficult, and our strategy will be to instead repeatedly construct a lower-bound on ℓ (E-step), and then optimize that lower-bound (M-step).²

    ² Empirically, the E-step and M-step can often be computed more efficiently than optimizing the function ℓ(·) directly. However, it doesn't necessarily mean that alternating the two steps can always converge to the global optimum of ℓ(·). Even for mixture of Gaussians, the EM algorithm can either converge to a global optimum or get stuck, depending on the properties of the training data. Empirically, for real-world data, often EM can converge to a solution with relatively high likelihood (if not the optimum), and the theory behind it is still largely not understood.

It turns out that the summation ∑_{i=1}^{n} is not essential here, and towards a simpler exposition of the EM algorithm, we will first consider optimizing the likelihood log p(x) for a single example x. After we derive the algorithm for optimizing log p(x), we will convert it to an algorithm that works for n examples by adding back the sum to each of the relevant equations. Thus, now we aim to optimize log p(x; θ), which can be rewritten as:

    log p(x; θ) = log ∑_z p(x, z; θ)    (20.5)

Let Q be a distribution over the possible values of z. That is, ∑_z Q(z) = 1, Q(z) ≥ 0. Consider the following:³

    ³ If z were continuous, then Q would be a density, and the summations over z in our discussion are replaced with integrals over z.

    log p(x; θ) = log ∑_z p(x, z; θ)    (20.6)
                = log ∑_z Q(z) p(x, z; θ)/Q(z)    (20.7)
                ≥ ∑_z Q(z) log [p(x, z; θ)/Q(z)]    (20.8)

The last step of this derivation used Jensen's inequality. Specifically, f(x) = log x is a concave function, since f″(x) = −1/x² < 0 over its domain x ∈ ℝ⁺. Also, the term

    ∑_z Q(z) [ p(x, z; θ)/Q(z) ]

in the summation is just an expectation of the quantity [p(x, z; θ)/Q(z)] with respect to z drawn according to the distribution given by Q.⁴

    ⁴ We note that the expression p(x, z; θ)/Q(z) only makes sense if Q(z) ≠ 0 whenever p(x, z; θ) ≠ 0. Here we implicitly assume that we only consider those Q with such a property.

By Jensen's inequality, we have

    f( E_{z∼Q}[ p(x, z; θ)/Q(z) ] ) ≥ E_{z∼Q}[ f( p(x, z; θ)/Q(z) ) ],


where the ‘‘z ∼ Q’’ subscripts above indicate that the expectations are with respect
to z drawn from Q. This allowed us to go from equation (20.7) to equation (20.8).
Now, for any distribution Q, the formula 20.8 gives a lower-bound on log p( x; θ ).
There are many possible choices for the Q’s. Which should we choose? Well, if we
have some current guess θ of the parameters, it seems natural to try to make the
lower-bound tight at that value of θ. I.e., we will make the inequality above hold
with equality at our particular value of θ. To make the bound tight for a particular
value of θ, we need for the step involving Jensen’s inequality in our derivation
above to hold with equality. For this to be true, we know it is sufficient that the
expectation be taken over a ‘‘constant’’-valued random variable. I.e., we require
that
p( x, z; θ )
=c
Q(z)
for some constant c that does not depend on z. This is easily accomplished by
choosing
Q(z) ∝ p( x, z; θ ).
Actually, since we know ∑_z Q(z) = 1 (because it is a distribution), this further tells us that

    Q(z) = p(x, z; θ) / ∑_z p(x, z; θ)    (20.9)
         = p(x, z; θ) / p(x; θ)           (20.10)
         = p(z | x; θ)                     (20.11)

Thus, we simply set the Q’s to be the posterior distribution of the z’s given x and
the setting of the parameters θ.


Indeed, we can directly verify that when Q(z) = p(z | x; θ ), then equa-
tion (20.8) is an equality because:

    ∑_z Q(z) log [p(x, z; θ)/Q(z)] = ∑_z p(z | x; θ) log [p(x, z; θ)/p(z | x; θ)]
                                   = ∑_z p(z | x; θ) log [p(z | x; θ) p(x; θ)/p(z | x; θ)]
                                   = ∑_z p(z | x; θ) log p(x; θ)
                                   = log p(x; θ) ∑_z p(z | x; θ)
                                   = log p(x; θ)    (because ∑_z p(z | x; θ) = 1)

For convenience, we call the expression in equation (20.8) the evidence lower
bound (ELBO) and we denote it by:

    ELBO(x; Q, θ) = ∑_z Q(z) log [p(x, z; θ)/Q(z)]    (20.12)

With this equation, we can re-write equation (20.8) as:

∀ Q, θ, x, log p( x; θ ) ≥ ELBO( x; Q, θ ) (20.13)

Intuitively, the EM algorithm alternately updates Q and θ by a) setting Q(z) = p(z | x; θ) following equation (20.11), so that ELBO(x; Q, θ) = log p(x; θ) for x and the current θ, and b) maximizing ELBO(x; Q, θ) w.r.t. θ while fixing the choice of Q.
Recall that all the discussion above was under the assumption that we aim to
optimize the log-likelihood log p( x; θ ) for a single example x. It turns out that
with multiple training examples, the basic idea is the same and we only need to
take a sum over examples at relevant places. Next, we will build the evidence
lower bound for multiple training examples and make the EM algorithm formal.
Recall we have a training set { x (1) , . . . , x (n) }. Note that the optimal choice of Q
is p(z | x; θ ), and it depends on the particular example x. Therefore here we will
introduce n distributions Q1 , . . . , Qn , one for each example x (i) . For each example
x (i) , we can build the evidence lower bound:

    log p(x^{(i)}; θ) ≥ ELBO(x^{(i)}; Q_i, θ) = ∑_{z^{(i)}} Q_i(z^{(i)}) log [p(x^{(i)}, z^{(i)}; θ)/Q_i(z^{(i)})]


Taking a sum over all the examples, we obtain a lower bound for the log-likelihood:

    ℓ(θ) ≥ ∑_i ELBO(x^{(i)}; Q_i, θ)    (20.14)
         = ∑_i ∑_{z^{(i)}} Q_i(z^{(i)}) log [p(x^{(i)}, z^{(i)}; θ)/Q_i(z^{(i)})]    (20.15)

For any set of distributions Q1 , . . . , Qn , the formula 20.14 gives a lower-bound


on `(θ ), and analogous to the argument around equation (20.11), the Qi that
attains equality satisfies:

Q i ( z (i ) ) = p ( z (i ) | x (i ) ; θ )

Thus, we simply set the Qi ’s to be the posterior distribution of the z(i) ’s given x (i)
with the current setting of the parameters θ.
Now, for this choice of the Qi ’s, equation (20.14) gives a lower-bound on the
log-likelihood ` that we’re trying to maximize. This is the E-step. In the M-step of
the algorithm, we then maximize our formula in equation (20.14) with respect to
the parameters to obtain a new setting of the θ’s. Repeatedly carrying out these
two steps gives us the EM algorithm, which is as follows:
• Repeat until convergence:

    – (E-step) For each i, set:

        Q_i(z^{(i)}) := p(z^{(i)} | x^{(i)}; θ)

    – (M-step) Set:

        θ := arg max_θ ∑_{i=1}^{n} ELBO(x^{(i)}; Q_i, θ)    (20.16)
           = arg max_θ ∑_i ∑_{z^{(i)}} Q_i(z^{(i)}) log [p(x^{(i)}, z^{(i)}; θ)/Q_i(z^{(i)})].    (20.17)

How do we know if this algorithm will converge? Well, suppose θ (t) and θ (t+1)
are the parameters from two successive iterations of EM. We will now prove
that `(θ (t) ) ≤ `(θ (t+1) ), which shows EM always monotonically improves the log-
likelihood. The key to showing this result lies in our choice of the Qi ’s. Specifically,
on the iteration of EM in which the parameters had started out as θ (t) , we would


have chosen Q_i^{(t)}(z^{(i)}) := p(z^{(i)} | x^{(i)}; θ^{(t)}). We saw earlier that this choice ensures that Jensen's inequality, as applied to get equation (20.14), holds with equality, and hence:

    ℓ(θ^{(t)}) = ∑_{i=1}^{n} ELBO(x^{(i)}; Q_i^{(t)}, θ^{(t)})    (20.18)

The parameters θ^{(t+1)} are then obtained by maximizing the right hand side of the equation above. Thus,

    ℓ(θ^{(t+1)}) ≥ ∑_{i=1}^{n} ELBO(x^{(i)}; Q_i^{(t)}, θ^{(t+1)})    (because inequality (20.14) holds for all Q and θ)
                ≥ ∑_{i=1}^{n} ELBO(x^{(i)}; Q_i^{(t)}, θ^{(t)})      (see reason below)
                = ℓ(θ^{(t)})                                          (by equation (20.18))

where the last inequality follows from the fact that θ^{(t+1)} is chosen explicitly to be:

    arg max_θ ∑_{i=1}^{n} ELBO(x^{(i)}; Q_i^{(t)}, θ)

Hence, EM causes the likelihood to converge monotonically. In our description


of the EM algorithm, we said we’d run it until convergence. Given the result that
we just showed, one reasonable convergence test would be to check if the increase
in `(θ ) between successive iterations is smaller than some tolerance parameter,
and to declare convergence if EM is improving `(θ ) too slowly.

Remark. If we define (by overloading ELBO(·))

    ELBO(Q, θ) = ∑_{i=1}^{n} ELBO(x^{(i)}; Q_i, θ) = ∑_i ∑_{z^{(i)}} Q_i(z^{(i)}) log [p(x^{(i)}, z^{(i)}; θ)/Q_i(z^{(i)})]    (20.19)

then we know `(θ ) ≥ ELBO( Q, θ ) from our previous derivation. The EM can
also be viewed an alternating maximization algorithm on ELBO( Q, θ ), in which
the E-step maximizes it with respect to Q (check this yourself), and the M-step
maximizes it with respect to θ.



20.1 Other interpretation of ELBO

Let ELBO(x; Q, θ) = ∑_z Q(z) log [p(x, z; θ)/Q(z)] be defined as in equation (20.12). There are several other forms of ELBO. First, we can rewrite:

ELBO( x; Q, θ ) = Ez∼Q [log p( x, z; θ )] − Ez∼Q [log Q(z)] (20.20)


= Ez∼Q [log p( x | z; θ )] − DKL ( Q || pz ) (20.21)

where we use pz to denote the marginal distribution of z (under the distribution


p( x, z; θ )), and DKL () denotes the KL divergence:

    D_KL(Q ‖ p_z) = ∑_z Q(z) log [Q(z)/p(z)]    (20.22)

In many cases, the marginal distribution of z does not depend on the parameter θ.
In this case, we can see that maximizing ELBO over θ is equivalent to maximizing
the first term in 20.21. This corresponds to maximizing the conditional likelihood
of x conditioned on z, which is often a simpler question than the original question.
Another form of ELBO(·) is (please verify yourself):

ELBO( x; Q, θ ) = log p( x ) − DKL ( Q || pz| x ) (20.23)

where p_{z|x} is the conditional distribution of z given x under the parameter θ. This form shows that the maximizer of ELBO(Q, θ) over Q is obtained when Q = p_{z|x}, which was shown in equation (20.11) before.

21 Mixture of Gaussians revisited

Armed with our general definition of the EM algorithm, let’s go back to our
old example of fitting the parameters φ, µ and Σ in a mixture of Gaussians. For
the sake of brevity, we carry out the derivations for the M-step updates only for φ
and µ j , and leave the updates for Σ j as an exercise for the reader.
The E-step is easy. Following our algorithm derivation above, we simply calcu-
late:

    w_j^{(i)} = Q_i(z^{(i)} = j) = P(z^{(i)} = j | x^{(i)}; φ, μ, Σ)

Here, ‘‘Qi (z(i) = j)’’ denotes the probability of z(i) taking the value j under the
distribution Qi .
Next, in the M-step, we need to maximize, with respect to our parameters φ, μ, Σ, the quantity:

    ∑_{i=1}^{n} ∑_{z^{(i)}} Q_i(z^{(i)}) log [ p(x^{(i)}, z^{(i)}; φ, μ, Σ) / Q_i(z^{(i)}) ]
    = ∑_{i=1}^{n} ∑_{j=1}^{k} Q_i(z^{(i)} = j) log [ p(x^{(i)} | z^{(i)} = j; μ, Σ) p(z^{(i)} = j; φ) / Q_i(z^{(i)} = j) ]
    = ∑_{i=1}^{n} ∑_{j=1}^{k} w_j^{(i)} log [ (1/((2π)^{d/2} |Σ_j|^{1/2})) exp(−(1/2)(x^{(i)} − μ_j)^⊤ Σ_j^{−1} (x^{(i)} − μ_j)) · φ_j / w_j^{(i)} ]

Let’s maximize this with respect to µl . If we take the derivative with respect to µl ,
we find:
 
1
exp − 1 (i )
( x − µ ) > Σ −1 ( x ( i ) − µ ) · φ
n k (2π )d/2 |Σ j |1/2 2 j j j j
∇µl ∑ ∑ w j log
(i )
(i )
i =1 j =1 wj
n k
(i ) 1
= −∇µl ∑ ∑ wj 2
( x (i ) − µ j ) > Σ − 1 (i )
j (x − µj )
i =1 j =1
n
1
2 i∑
(i )
= wl ∇µl 2µ> −1 ( i )
l Σl x − µ> −1
l Σl µl
=1
n  
= ∑ wl Σ−
(i ) 1 (i )
l x − Σ− 1
l µl
i =1

Setting this to zero and solving for µl therefore yields the update rule
(i )
∑in=1 wl x (i)
µl := (i )
,
∑in=1 wl
which was what we had in the previous set of notes.
Let’s do one more example, and derive the M-step update for the parameters
φj . Grouping together only the terms that depend on φj , we find that we need to
maximize:
\[
\sum_{i=1}^{n} \sum_{j=1}^{k} w_j^{(i)} \log \phi_j
\]



However, there is an additional constraint that the φj’s sum to 1, since they represent the probabilities φj = p(z(i) = j; φ). To deal with the constraint that $\sum_{j=1}^{k} \phi_j = 1$, we construct the Lagrangian

\[
\mathcal{L}(\phi) = \sum_{i=1}^{n} \sum_{j=1}^{k} w_j^{(i)} \log \phi_j + \beta \left( \sum_{j=1}^{k} \phi_j - 1 \right),
\]

where β is the Lagrange multiplier.¹ Taking derivatives, we find:

\[
\frac{\partial}{\partial \phi_j} \mathcal{L}(\phi) = \sum_{i=1}^{n} \frac{w_j^{(i)}}{\phi_j} + \beta
\]

Setting this to zero and solving, we get:

\[
\phi_j = \frac{\sum_{i=1}^{n} w_j^{(i)}}{-\beta}
\]

I.e., $\phi_j \propto \sum_{i=1}^{n} w_j^{(i)}$. Using the constraint that $\sum_j \phi_j = 1$, we easily find that $-\beta = \sum_{i=1}^{n} \sum_{j=1}^{k} w_j^{(i)} = \sum_{i=1}^{n} 1 = n$. (This used the fact that $w_j^{(i)} = Q_i(z^{(i)} = j)$, and since probabilities sum to 1, $\sum_j w_j^{(i)} = 1$.) We therefore have our M-step updates for the parameters φj:

\[
\phi_j := \frac{1}{n} \sum_{i=1}^{n} w_j^{(i)} \tag{21.1}
\]

¹ We don’t need to worry about the constraint that φj ≥ 0, because as we’ll shortly see, the solution we’ll find from this derivation will automatically satisfy that anyway.
The derivation for the M-step updates to Σj is also entirely straightforward.
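For concreteness, here is a minimal NumPy sketch of one EM iteration for a mixture of Gaussians, combining the E-step with the M-step updates derived above (including the Σj update left as an exercise). The function name, array shapes, and the use of SciPy's multivariate_normal are our own illustrative choices, not part of the notes.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, phi, mu, Sigma):
    """One EM iteration for a mixture of k Gaussians.
    X: (n, d) data; phi: (k,) mixing weights; mu: (k, d) means; Sigma: (k, d, d) covariances."""
    n, d = X.shape
    k = phi.shape[0]

    # E-step: W[i, j] = Q_i(z^(i) = j) = P(z^(i) = j | x^(i); phi, mu, Sigma)
    W = np.zeros((n, k))
    for j in range(k):
        W[:, j] = phi[j] * multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j])
    W /= W.sum(axis=1, keepdims=True)

    # M-step: update phi_j, mu_j, Sigma_j using the responsibilities
    phi_new = W.mean(axis=0)                      # phi_j = (1/n) sum_i w_j^(i)
    mu_new = (W.T @ X) / W.sum(axis=0)[:, None]   # mu_j  = sum_i w_j^(i) x^(i) / sum_i w_j^(i)
    Sigma_new = np.zeros_like(Sigma)
    for j in range(k):
        diff = X - mu_new[j]
        Sigma_new[j] = (W[:, j, None] * diff).T @ diff / W[:, j].sum()
    return phi_new, mu_new, Sigma_new
```

Iterating this step until the log likelihood stops improving implements the full algorithm discussed above.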

22 Variational inference and variational auto-encoder

Loosely speaking, variational auto-encoder¹ generally refers to a family of algorithms that extend the EM algorithm to more complex models parameterized by neural networks. It extends the technique of variational inference with the additional ‘‘re-parametrization trick,’’ which will be introduced below. Variational auto-encoders may not give the best performance for many datasets, but they contain several central ideas about how to extend the EM algorithm to high-dimensional continuous latent variables with non-linear models. Understanding it will likely give you the language and background to understand various recent papers related to it.

¹ D. P. Kingma and M. Welling, ‘‘Auto-Encoding Variational Bayes,’’ ArXiv Preprint ArXiv:1312.6114, 2013.
As a running example, we will consider the following parameterization of p(x, z; θ) by a neural network. Let θ be the collection of the weights of a neural network g(z; θ) that maps z ∈ R^k to R^d. Let:

\[
\begin{aligned}
z &\sim N(0, I_{k \times k}) && (22.1) \\
x \mid z &\sim N(g(z; \theta), \sigma^2 I_{d \times d}) && (22.2)
\end{aligned}
\]

Here $I_{k \times k}$ denotes the identity matrix of dimension k by k, and σ is a scalar that we assume to be known for simplicity.
For the Gaussian mixture model in section 21, the optimal choice of Q(z) = p(z | x; θ) for each fixed θ, that is, the posterior distribution of z, can be analytically computed. In many more complex models such as the model in equation (22.2), it’s intractable to compute the exact posterior distribution p(z | x; θ).
Recall that from equation (20.13), ELBO is always a lower bound for any choice
of Q, and therefore, we can also aim for finding an approximation of the true
posterior distribution. Often, one has to use some particular form to approximate
the true posterior distribution. Let Q be a family of Q’s that we are considering,
and we will aim to find a Q within the family of Q that is closest to the true
posterior distribution. To formalize, recall the definition of the ELBO lower bound
as a function of Q and θ defined in equation (20.19):
\[
\mathrm{ELBO}(Q, \theta) = \sum_{i=1}^{n} \mathrm{ELBO}(x^{(i)}; Q_i, \theta) = \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}
\]

Recall that EM can be viewed as alternating maximization of ELBO(Q, θ). Here instead, we optimize the ELBO over Q ∈ Q:

\[
\max_{Q \in \mathcal{Q}} \max_{\theta} \; \mathrm{ELBO}(Q, \theta) \tag{22.3}
\]

Now the next question is what form of Q (or what structural assumptions to make about Q) allows us to efficiently maximize the objective above. When the latent variables z are high-dimensional discrete variables, one popular assumption is the mean field assumption, which assumes that Qi(z) gives a distribution with independent coordinates, or in other words, that Qi can be decomposed into $Q_i(z) = Q_i^1(z_1) \cdots Q_i^k(z_k)$. There are tremendous applications of mean field assumptions to learning generative models with discrete latent variables, and we refer to Blei, Kucukelbir, and McAuliffe for a survey of these models and their impact on a wide range of applications including computational biology, computational neuroscience, and the social sciences. We will not get into the details of the discrete latent variable case; our main focus is to deal with continuous latent variables, which requires not only mean field assumptions, but additional techniques.
When z ∈ R^k is a continuous latent variable, there are several decisions to make towards successfully optimizing equation (22.3). First we need to give a succinct representation of the distribution Qi because it is over an infinite number of points. A natural choice is to assume Qi is a Gaussian distribution with some mean and variance. We would also like to have a more succinct representation of the means of the Qi’s across all the examples. Note that Qi(z(i)) is supposed to approximate p(z(i) | x(i); θ). It would make sense to let all the means of the Qi’s be some function of x(i). Concretely, let q(·; φ), v(·; ψ) be two functions that map from dimension d to k, parameterized by φ and ψ respectively. We assume that:

\[
Q_i = N\!\left(q(x^{(i)}; \phi), \operatorname{diag}(v(x^{(i)}; \psi))^2\right) \tag{22.4}
\]

Here diag(w) means the k × k matrix with the entries of w ∈ R^k on the diagonal. In other words, the distribution Qi is assumed to be a Gaussian distribution with independent coordinates, and the mean and standard deviations are governed by q and v. Often in variational auto-encoders, q and v are chosen to be neural networks.² In the recent deep learning literature, q and v are often called the encoder (in the sense of encoding the data into a latent code), whereas g(z; θ) is often referred to as the decoder.
We remark that a Qi of such form is in many cases very far from a good approximation of the true posterior distribution. However, some approximation is necessary for feasible optimization. In fact, the form of Qi needs to satisfy other requirements (which happen to be satisfied by the form (22.4)).

² q and v can also share parameters. We sweep this level of detail under the rug in this note.
Before optimizing the ELBO, let’s first verify whether we can efficiently evaluate
the value of the ELBO for fixed Q of the form 22.4 and θ. We rewrite the ELBO as
a function of φ, ψ, θ by:
" #
n
p ( x (i ) , z (i ) ; θ )
ELBO(φ, ψ, θ ) = ∑ Ez(i) ∼Q log , (22.5)
i =1
i Q i ( z (i ) )


where $Q_i = N(q(x^{(i)}; \phi), \operatorname{diag}(v(x^{(i)}; \psi))^2)$. Note that to evaluate $Q_i(z^{(i)})$ inside the expectation, we should be able to compute the density of $Q_i$. To estimate the expectation $\mathbb{E}_{z^{(i)} \sim Q_i}$, we should be able to sample from the distribution $Q_i$ so that we can build an empirical estimator with samples. It happens that for the Gaussian distribution $Q_i = N(q(x^{(i)}; \phi), \operatorname{diag}(v(x^{(i)}; \psi))^2)$, we are able to do both efficiently.
Now let’s optimize the ELBO. It turns out that we can run gradient ascent over φ, ψ, θ instead of alternating maximization. There is no strong need to compute the maximum over each variable at a much greater cost. (For the Gaussian mixture model in section 21, computing the maximum is analytically feasible and relatively cheap, and therefore we did alternating maximization.) Mathematically, letting η be the learning rate, the gradient ascent step is:

\[
\begin{aligned}
\theta &:= \theta + \eta \nabla_\theta \mathrm{ELBO}(\phi, \psi, \theta) \\
\phi &:= \phi + \eta \nabla_\phi \mathrm{ELBO}(\phi, \psi, \theta) \\
\psi &:= \psi + \eta \nabla_\psi \mathrm{ELBO}(\phi, \psi, \theta)
\end{aligned}
\]

Computing the gradient over θ is simple because:

\[
\begin{aligned}
\nabla_\theta \mathrm{ELBO}(\phi, \psi, \theta) &= \nabla_\theta \sum_{i=1}^{n} \mathbb{E}_{z^{(i)} \sim Q_i}\left[\log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right] && (22.6) \\
&= \nabla_\theta \sum_{i=1}^{n} \mathbb{E}_{z^{(i)} \sim Q_i}\left[\log p(x^{(i)}, z^{(i)}; \theta)\right] && (22.7) \\
&= \sum_{i=1}^{n} \mathbb{E}_{z^{(i)} \sim Q_i}\left[\nabla_\theta \log p(x^{(i)}, z^{(i)}; \theta)\right] && (22.8)
\end{aligned}
\]

But computing the gradient over φ and ψ is tricky because the sampling distribution Qi depends on φ and ψ. (Abstractly speaking, the issue we face can be simplified as the problem of computing the gradient of $\mathbb{E}_{z \sim Q_\phi}[f(\phi)]$ with respect to the variable φ. We know that in general, $\nabla \mathbb{E}_{z \sim Q_\phi}[f(\phi)] \neq \mathbb{E}_{z \sim Q_\phi}[\nabla f(\phi)]$ because the dependency of $Q_\phi$ on φ has to be taken into account as well.)
The idea that comes to the rescue is the so-called re-parameterization trick: we rewrite $z^{(i)} \sim Q_i = N(q(x^{(i)}; \phi), \operatorname{diag}(v(x^{(i)}; \psi))^2)$ in an equivalent way:

\[
z^{(i)} = q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)} \quad \text{where } \xi^{(i)} \sim N(0, I_{k \times k}) \tag{22.9}
\]


Here $x \odot y$ denotes the entry-wise product of two vectors of the same dimension. Here we used the fact that $x \sim N(\mu, \sigma^2)$ is equivalent to $x = \mu + \xi\sigma$ with $\xi \sim N(0, 1)$. We simply used this fact in every dimension simultaneously for the random variable $z^{(i)} \sim Q_i$.
With this re-parameterization, we have that:

\[
\mathbb{E}_{z^{(i)} \sim Q_i}\left[\log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right]
= \mathbb{E}_{\xi^{(i)} \sim N(0, I_{k \times k})}\left[\log \frac{p(x^{(i)}, q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)}; \theta)}{Q_i(q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)})}\right] \tag{22.10}
\]

It follows that:

\[
\begin{aligned}
&\nabla_\phi \mathbb{E}_{z^{(i)} \sim Q_i}\left[\log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right] && (22.11) \\
&= \nabla_\phi \mathbb{E}_{\xi^{(i)} \sim N(0, I_{k \times k})}\left[\log \frac{p(x^{(i)}, q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)}; \theta)}{Q_i(q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)})}\right] && (22.12) \\
&= \mathbb{E}_{\xi^{(i)} \sim N(0, I_{k \times k})}\left[\nabla_\phi \log \frac{p(x^{(i)}, q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)}; \theta)}{Q_i(q(x^{(i)}; \phi) + v(x^{(i)}; \psi) \odot \xi^{(i)})}\right] && (22.13)
\end{aligned}
\]

We can now sample multiple copies of the $\xi^{(i)}$'s to estimate the expectation on the RHS of the equation above.³ We can estimate the gradient with respect to ψ similarly, and with these, we can implement the gradient ascent algorithm to optimize the ELBO over φ, ψ, θ.

³ Empirically, people sometimes just use one sample to estimate it for maximum computational efficiency.

Not many high-dimensional distributions with analytically computable density functions are known to be re-parameterizable. We refer to Kingma and Welling for a few other choices that can replace the Gaussian distribution.
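To make the re-parameterization trick concrete, here is a minimal PyTorch sketch of a single-sample Monte Carlo estimate of one term of the ELBO. The callables encoder_mean, encoder_std, and decoder are hypothetical stand-ins for q(·; φ), v(·; ψ), and g(·; θ), and dropping the additive constants from the log densities is an assumption of this sketch, not something prescribed by the notes.

```python
import torch

def elbo_estimate(x, encoder_mean, encoder_std, decoder, sigma=1.0):
    """Single-sample Monte Carlo estimate of ELBO(x; Q, theta) via the
    re-parameterization trick (gradients flow through the sampled z)."""
    q = encoder_mean(x)                       # q(x; phi), shape (k,)
    v = encoder_std(x)                        # v(x; psi), shape (k,), assumed positive
    xi = torch.randn_like(q)                  # xi ~ N(0, I_k)
    z = q + v * xi                            # z = q + v ⊙ xi, differentiable in phi, psi

    # log p(x, z; theta) = log p(z) + log p(x | z; theta), up to additive constants
    log_p_z = -0.5 * (z ** 2).sum()
    log_p_x_given_z = -0.5 * ((x - decoder(z)) ** 2).sum() / sigma ** 2
    # log Q(z) for the Gaussian Q = N(q, diag(v)^2), up to the same kind of constant
    log_q_z = (-0.5 * ((z - q) / v) ** 2 - torch.log(v)).sum()

    return log_p_z + log_p_x_given_z - log_q_z

# Gradient ascent on phi, psi, theta would call elbo_estimate(x, ...) for each
# example, sum the results, and backpropagate through the sampled z.
```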



Part IX: Factor Analysis

From CS229 Spring 2021, Andrew Ng, Moses Charikar & Christopher Ré, Stanford University.

When we have data x(i) ∈ R^d that comes from a mixture of several Gaussians, the EM algorithm can be applied to fit a mixture model. In this setting, we usually imagine problems where we have sufficient data to be able to discern the multiple-Gaussian structure in the data. For instance, this would be the case if our training set size n was significantly larger than the dimension d of the data.
Now, consider a setting in which d ≫ n. In such a problem, it might be difficult to model the data even with a single Gaussian, much less a mixture of Gaussians. Specifically, since the n data points span only a low-dimensional subspace of R^d, if we model the data as Gaussian, and estimate the mean and covariance using the usual maximum likelihood estimators,

\[
\mu = \frac{1}{n} \sum_{i=1}^{n} x^{(i)}
\]
\[
\Sigma = \frac{1}{n} \sum_{i=1}^{n} (x^{(i)} - \mu)(x^{(i)} - \mu)^\top,
\]

we would find that the matrix Σ is singular. This means that Σ^{-1} does not exist, and 1/|Σ|^{1/2} = 1/0. But both of these terms are needed in computing the usual density of a multivariate Gaussian distribution. Another way of stating this difficulty is that maximum likelihood estimates of the parameters result in a Gaussian that places all of its probability in the affine space spanned by the data,⁴ and this corresponds to a singular covariance matrix.

⁴ This is the set of points x satisfying $x = \sum_{i=1}^{n} \alpha_i x^{(i)}$, for some αi’s so that $\sum_{i=1}^{n} \alpha_i = 1$.
More generally, unless n exceeds d by some reasonable amount, the maximum
likelihood estimates of the mean and covariance may be quite poor. Nonetheless,
we would still like to be able to fit a reasonable Gaussian model to the data, and
perhaps capture some interesting covariance structure in the data. How can we
do this?
In the next section, we begin by reviewing two possible restrictions on Σ that
allow us to fit Σ with small amounts of data but neither will give a satisfactory
solution to our problem. We next discuss some properties of Gaussians that will
be needed later; specifically, how to find marginal and conditional distributions of
Gaussians. Finally, we present the factor analysis model, and EM for it.
23 Restrictions of Σ

If we do not have sufficient data to fit a full covariance matrix, we may place
some restrictions on the space of matrices Σ that we will consider. For instance,
we may choose to fit a covariance matrix Σ that is diagonal. In this setting, the
reader may easily verify that the maximum likelihood estimate of the covariance
matrix is given by the diagonal matrix Σ satisfying
\[
\Sigma_{jj} = \frac{1}{n} \sum_{i=1}^{n} (x_j^{(i)} - \mu_j)^2.
\]

Thus, Σ jj is just the empirical estimate of the variance of the j-th coordinate of the
data.
Recall that the contours of a Gaussian density are ellipses. A diagonal Σ corre-
sponds to a Gaussian where the major axes of these ellipses are axis-aligned.
Sometimes, we may place a further restriction on the covariance matrix that not
only must it be diagonal, but its diagonal entries must all be equal. In this setting,
we have Σ = σ2 I, where σ2 is the parameter under our control. The maximum
likelihood estimate of σ2 can be found to be:
\[
\sigma^2 = \frac{1}{nd} \sum_{j=1}^{d} \sum_{i=1}^{n} (x_j^{(i)} - \mu_j)^2.
\]

This model corresponds to using Gaussians whose densities have contours that
are circles (in 2 dimensions; or spheres/hyperspheres in higher dimensions).
If we are fitting a full, unconstrained, covariance matrix Σ to data, it is necessary
that n ≥ d + 1 in order for the maximum likelihood estimate of Σ not to be singular.
Under either of the two restrictions above, we may obtain non-singular Σ when
n ≥ 2.
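As a small sketch, the two restricted maximum likelihood estimates above can be computed directly from the data; the function and variable names below are our own.

```python
import numpy as np

def restricted_covariance_mles(X):
    """Diagonal and spherical (sigma^2 I) maximum likelihood covariance
    estimates for data X of shape (n, d), following the formulas above."""
    n, d = X.shape
    mu = X.mean(axis=0)
    centered = X - mu

    Sigma_diag = np.diag((centered ** 2).mean(axis=0))  # Sigma_jj = (1/n) sum_i (x_j^(i) - mu_j)^2
    sigma_sq = (centered ** 2).mean()                    # sigma^2 = (1/(nd)) sum_j sum_i (x_j^(i) - mu_j)^2
    Sigma_spherical = sigma_sq * np.eye(d)
    return Sigma_diag, Sigma_spherical
```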
However, restricting Σ to be diagonal also means modeling the different coor-
dinates xi , x j of the data as being uncorrelated and independent. Often, it would
be nice to be able to capture some interesting correlation structure in the data. If
we were to use either of the restrictions on Σ described above, we would therefore
fail to do so. In this set of notes, we will describe the factor analysis model, which
uses more parameters than the diagonal Σ and captures some correlations in the
data, but also without having to fit a full covariance matrix.

24 Marginals and conditionals of Gaussians

Before describing factor analysis, we digress to talk about how to find conditional
and marginal distributions of random variables with a joint multivariate Gaussian
distribution.
Suppose we have a vector-valued random variable
" #
x
x= 1 ,
x2

where x1 ∈ Rr , x2 ∈ Rs , and x ∈ Rr+s . Suppose x ∼ N (µ, Σ), where


" # " #
µ1 Σ11 Σ12
µ= , Σ= .
µ2 Σ21 Σ22

Here, µ1 ∈ R^r, µ2 ∈ R^s, Σ11 ∈ R^{r×r}, Σ12 ∈ R^{r×s}, and so on. Note that since covariance matrices are symmetric, Σ12 = Σ21^⊤.

Under our assumptions, x1 and x2 are jointly multivariate Gaussian. What is the marginal distribution of x1? It is not hard to see that E[x1] = µ1, and that Cov(x1) = E[(x1 − µ1)(x1 − µ1)^⊤] = Σ11. To see that the latter is true, note that by definition of the joint covariance of x1 and x2, we have that:

\[
\begin{aligned}
\mathrm{Cov}(x) &= \Sigma \\
&= \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \\
&= E[(x - \mu)(x - \mu)^\top] \\
&= E\left[ \begin{bmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{bmatrix} \begin{bmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{bmatrix}^\top \right] \\
&= E\begin{bmatrix} (x_1 - \mu_1)(x_1 - \mu_1)^\top & (x_1 - \mu_1)(x_2 - \mu_2)^\top \\ (x_2 - \mu_2)(x_1 - \mu_1)^\top & (x_2 - \mu_2)(x_2 - \mu_2)^\top \end{bmatrix}.
\end{aligned}
\]

Matching the upper-left subblocks in the matrices in the second and the last lines above gives the result.



Since marginal distributions of Gaussians are themselves Gaussian, we there-
fore have that the marginal distribution of x1 is given by x1 ∼ N (µ1 , Σ11 ). Also,
we can ask, what is the conditional distribution of x1 given x2 ? By referring to
the definition of the multivariate Gaussian distribution, it can be shown that
x1 | x2 ∼ N (µ1|2 , Σ1|2 ), where:

\[
\begin{aligned}
\mu_{1|2} &= \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_2) && (24.1) \\
\Sigma_{1|2} &= \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} && (24.2)
\end{aligned}
\]

When we work with the factor analysis model in the next section, these formulas
for finding conditional and marginal distributions of Gaussians will be very
useful.
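A small NumPy sketch of these formulas, assuming x is partitioned into x1 (the first r coordinates) and x2; the helper name is hypothetical.

```python
import numpy as np

def gaussian_marginal_and_conditional(mu, Sigma, r, x2):
    """Given x ~ N(mu, Sigma) with x = [x1; x2] and dim(x1) = r, return the
    marginal N(mu1, Sigma11) of x1 and the conditional N(mu_{1|2}, Sigma_{1|2})
    of x1 given x2, per equations (24.1)-(24.2)."""
    mu1, mu2 = mu[:r], mu[r:]
    S11, S12 = Sigma[:r, :r], Sigma[:r, r:]
    S21, S22 = Sigma[r:, :r], Sigma[r:, r:]

    mu_cond = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)   # eq. (24.1)
    Sigma_cond = S11 - S12 @ np.linalg.solve(S22, S21)     # eq. (24.2)
    return (mu1, S11), (mu_cond, Sigma_cond)
```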

25 The factor analysis model

In the factor analysis model, we posit a joint distribution on ( x, z) as follows,


where z ∈ Rk is a latent random variable:

z ∼ N (0, I )
x | z ∼ N (µ + Λz, Ψ)

Here, the parameters of our model are the vector µ ∈ Rd , the matrix Λ ∈ Rd×k ,
and the diagonal matrix Ψ ∈ Rd×d . The value of k is usually chosen to be smaller
than d.
Thus, we imagine that each datapoint x(i) is generated by sampling a k-dimensional multivariate Gaussian z(i). Then, it is mapped to a d-dimensional affine space of R^d by computing µ + Λz(i). Lastly, x(i) is generated by adding covariance-Ψ noise to µ + Λz(i).
Equivalently (convince yourself that this is the case), we can therefore also
define the factor analysis model according to

z ∼ N (0, I )
e ∼ N (0, Ψ)
x = µ + Λz + e

where e and z are independent.


Let’s work out exactly what distribution our model defines. Our random variables z and x have a joint Gaussian distribution

\[
\begin{bmatrix} z \\ x \end{bmatrix} \sim N(\mu_{zx}, \Sigma).
\]

We will now find µ_{zx} and Σ.
We know that E[z] = 0, from the fact that z ∼ N(0, I). Also, we have that:

\[
E[x] = E[\mu + \Lambda z + e] = \mu + \Lambda E[z] + E[e] = \mu
\]

Putting these together, we obtain

\[
\mu_{zx} = \begin{bmatrix} 0 \\ \mu \end{bmatrix}
\]

Next, to find Σ, we need to calculate:

\[
\begin{aligned}
\Sigma_{zz} &= E[(z - E[z])(z - E[z])^\top] && \text{(the upper-left block of } \Sigma\text{)} \\
\Sigma_{zx} &= E[(z - E[z])(x - E[x])^\top] && \text{(upper-right block)} \\
\Sigma_{xx} &= E[(x - E[x])(x - E[x])^\top] && \text{(lower-right block)}
\end{aligned}
\]

Now, since z ∼ N(0, I), we easily find that Σ_{zz} = Cov(z) = I. Also,

\[
\begin{aligned}
E[(z - E[z])(x - E[x])^\top] &= E[z(\mu + \Lambda z + e - \mu)^\top] \\
&= E[zz^\top]\Lambda^\top + E[ze^\top] \\
&= \Lambda^\top
\end{aligned}
\]

In the last step, we used the fact that E[zz^⊤] = Cov(z) (since z has zero mean), and E[ze^⊤] = E[z]E[e^⊤] = 0 (since z and e are independent, and hence the expectation of their product is the product of their expectations). Similarly, we can find Σ_{xx} as follows:

\[
\begin{aligned}
E[(x - E[x])(x - E[x])^\top] &= E[(\mu + \Lambda z + e - \mu)(\mu + \Lambda z + e - \mu)^\top] \\
&= E[\Lambda zz^\top \Lambda^\top + e z^\top \Lambda^\top + \Lambda z e^\top + e e^\top] \\
&= \Lambda E[zz^\top]\Lambda^\top + E[ee^\top] \\
&= \Lambda\Lambda^\top + \Psi
\end{aligned}
\]



Putting everything together, we therefore have that

\[
\begin{bmatrix} z \\ x \end{bmatrix} \sim N\left( \begin{bmatrix} 0 \\ \mu \end{bmatrix}, \begin{bmatrix} I & \Lambda^\top \\ \Lambda & \Lambda\Lambda^\top + \Psi \end{bmatrix} \right). \tag{25.1}
\]

Hence, we also see that the marginal distribution of x is given by x ∼ N(µ, ΛΛ^⊤ + Ψ). Thus, given a training set {x(i); i = 1, . . . , n}, we can write down the log likelihood of the parameters:

\[
\ell(\mu, \Lambda, \Psi) = \log \prod_{i=1}^{n} \frac{1}{(2\pi)^{d/2} |\Lambda\Lambda^\top + \Psi|^{1/2}} \exp\left( -\frac{1}{2} (x^{(i)} - \mu)^\top (\Lambda\Lambda^\top + \Psi)^{-1} (x^{(i)} - \mu) \right) \tag{25.2}
\]
To perform maximum likelihood estimation, we would like to maximize this
quantity with respect to the parameters. But maximizing this formula explicitly
is hard (try it yourself), and we are aware of no algorithm that does so in closed-
form. So, we will instead use the EM algorithm. In the next section, we derive
EM for factor analysis.

26 EM for factor analysis

The derivation for the E-step is easy. We need to compute Qi(z(i)) = p(z(i) | x(i); µ, Λ, Ψ). By substituting the distribution given in equation (25.1) into the formulas (24.1)–(24.2) used for finding the conditional distribution of a Gaussian, we find that z(i) | x(i); µ, Λ, Ψ ∼ N(µ_{z(i)|x(i)}, Σ_{z(i)|x(i)}), where

\[
\begin{aligned}
\mu_{z^{(i)}|x^{(i)}} &= \Lambda^\top (\Lambda\Lambda^\top + \Psi)^{-1}(x^{(i)} - \mu) \\
\Sigma_{z^{(i)}|x^{(i)}} &= I - \Lambda^\top (\Lambda\Lambda^\top + \Psi)^{-1}\Lambda
\end{aligned}
\]

So, using these definitions for µ_{z(i)|x(i)} and Σ_{z(i)|x(i)}, we have:

\[
Q_i(z^{(i)}) = \frac{1}{(2\pi)^{k/2} |\Sigma_{z^{(i)}|x^{(i)}}|^{1/2}} \exp\left( -\frac{1}{2} (z^{(i)} - \mu_{z^{(i)}|x^{(i)}})^\top \Sigma_{z^{(i)}|x^{(i)}}^{-1} (z^{(i)} - \mu_{z^{(i)}|x^{(i)}}) \right)
\]

Let’s now work out the M-step. Here, we need to maximize

\[
\sum_{i=1}^{n} \int_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \mu, \Lambda, \Psi)}{Q_i(z^{(i)})} \, dz^{(i)} \tag{26.1}
\]

with respect to the parameters µ, Λ, Ψ. We will work out only the optimization
with respect to Λ, and leave the derivations of the updates for µ and Ψ as an
exercise to the reader.
We can simplify equation (26.1) as follows:
\[
\begin{aligned}
&\sum_{i=1}^{n} \int_{z^{(i)}} Q_i(z^{(i)}) \left[ \log p(x^{(i)} \mid z^{(i)}; \mu, \Lambda, \Psi) + \log p(z^{(i)}) - \log Q_i(z^{(i)}) \right] dz^{(i)} && (26.2) \\
&= \sum_{i=1}^{n} \mathbb{E}_{z^{(i)} \sim Q_i}\left[ \log p(x^{(i)} \mid z^{(i)}; \mu, \Lambda, \Psi) + \log p(z^{(i)}) - \log Q_i(z^{(i)}) \right] && (26.3)
\end{aligned}
\]

Here, the ‘‘z(i) ∼ Qi ’’ subscript indicates that the expectation is with respect to
z(i) drawn from Qi . In the subsequent development, we will omit this subscript
when there is no risk of ambiguity. Dropping terms that do not depend on the
parameters, we find that we need to maximize:
\[
\begin{aligned}
\sum_{i=1}^{n} &\mathbb{E}\left[\log p(x^{(i)} \mid z^{(i)}; \mu, \Lambda, \Psi)\right] \\
&= \sum_{i=1}^{n} \mathbb{E}\left[\log \frac{1}{(2\pi)^{d/2} |\Psi|^{1/2}} \exp\left(-\frac{1}{2}(x^{(i)} - \mu - \Lambda z^{(i)})^\top \Psi^{-1} (x^{(i)} - \mu - \Lambda z^{(i)})\right)\right] \\
&= \sum_{i=1}^{n} \mathbb{E}\left[-\frac{1}{2}\log|\Psi| - \frac{n}{2}\log(2\pi) - \frac{1}{2}(x^{(i)} - \mu - \Lambda z^{(i)})^\top \Psi^{-1}(x^{(i)} - \mu - \Lambda z^{(i)})\right]
\end{aligned}
\]
Let’s maximize this with respect to Λ. Only the last term above depends on Λ.
Taking derivatives, and using the facts that tr( a) = a (for a ∈ R), tr( AB) =
tr( BA), and ∇ A tr( ABA> C ) = CAB + C > AB> , we get:
\[
\begin{aligned}
\nabla_\Lambda &\sum_{i=1}^{n} -\mathbb{E}\left[\frac{1}{2}(x^{(i)} - \mu - \Lambda z^{(i)})^\top \Psi^{-1}(x^{(i)} - \mu - \Lambda z^{(i)})\right] \\
&= \sum_{i=1}^{n} \nabla_\Lambda \mathbb{E}\left[-\mathrm{tr}\left(\tfrac{1}{2} z^{(i)\top} \Lambda^\top \Psi^{-1} \Lambda z^{(i)}\right) + \mathrm{tr}\left(z^{(i)\top} \Lambda^\top \Psi^{-1} (x^{(i)} - \mu)\right)\right] \\
&= \sum_{i=1}^{n} \nabla_\Lambda \mathbb{E}\left[-\mathrm{tr}\left(\tfrac{1}{2} \Lambda^\top \Psi^{-1} \Lambda z^{(i)} z^{(i)\top}\right) + \mathrm{tr}\left(\Lambda^\top \Psi^{-1} (x^{(i)} - \mu) z^{(i)\top}\right)\right] \\
&= \sum_{i=1}^{n} \mathbb{E}\left[-\Psi^{-1} \Lambda z^{(i)} z^{(i)\top} + \Psi^{-1}(x^{(i)} - \mu) z^{(i)\top}\right]
\end{aligned}
\]

Setting this to zero and simplifying, we get:


\[
\sum_{i=1}^{n} \Lambda\, \mathbb{E}_{z^{(i)} \sim Q_i}\!\left[z^{(i)} z^{(i)\top}\right] = \sum_{i=1}^{n} (x^{(i)} - \mu)\, \mathbb{E}_{z^{(i)} \sim Q_i}\!\left[z^{(i)\top}\right]
\]


Hence, solving for Λ, we obtain


\[
\Lambda = \left( \sum_{i=1}^{n} (x^{(i)} - \mu)\, \mathbb{E}_{z^{(i)} \sim Q_i}\!\left[z^{(i)\top}\right] \right) \left( \sum_{i=1}^{n} \mathbb{E}_{z^{(i)} \sim Q_i}\!\left[z^{(i)} z^{(i)\top}\right] \right)^{-1} \tag{26.4}
\]

It is interesting to note the close relationship between this equation and the normal
equation that we’d derived for least squares regression,

‘‘$\theta^\top = (y^\top X)(X^\top X)^{-1}$.’’

The analogy is that here, the x’s are a linear function of the z’s (plus noise). Given
the ‘‘guesses’’ for z that the E-step has found, we will now try to estimate the
unknown linearity Λ relating the x’s and z’s. It is therefore no surprise that we
obtain something similar to the normal equation. There is, however, one important
difference between this and an algorithm that performs least squares using just
the ‘‘best guesses’’ of the z’s; we will see this difference shortly.
To complete our M-step update, let’s work out the values of the expectations
in equation (26.4). From our definition of Qi being Gaussian with mean µz(i) | x(i)
and covariance Σz(i) | x(i) , we easily find
\[
\begin{aligned}
\mathbb{E}_{z^{(i)} \sim Q_i}\!\left[z^{(i)\top}\right] &= \mu_{z^{(i)}|x^{(i)}}^\top \\
\mathbb{E}_{z^{(i)} \sim Q_i}\!\left[z^{(i)} z^{(i)\top}\right] &= \mu_{z^{(i)}|x^{(i)}} \mu_{z^{(i)}|x^{(i)}}^\top + \Sigma_{z^{(i)}|x^{(i)}}
\end{aligned}
\]

The latter comes from the fact that, for a random variable Y, Cov(Y ) = E[YY > ] −
E[Y ]E[Y ]> , and hence E[YY > ] = E[Y ]E[Y ]> + Cov(Y ). Substituting this back
into equation (26.4), we get the M-step update for Λ:
\[
\Lambda = \left( \sum_{i=1}^{n} (x^{(i)} - \mu)\, \mu_{z^{(i)}|x^{(i)}}^\top \right) \left( \sum_{i=1}^{n} \mu_{z^{(i)}|x^{(i)}} \mu_{z^{(i)}|x^{(i)}}^\top + \Sigma_{z^{(i)}|x^{(i)}} \right)^{-1} \tag{26.5}
\]

It is important to note the presence of the Σz(i) | x(i) on the right hand side of this
equation. This is the covariance in the posterior distribution p(z(i) | x (i) ) of z(i)
given x (i) , and the M-step must take into account this uncertainty about z(i) in the
posterior. A common mistake in deriving EM is to assume that in the E-step, we
need to calculate only expectation E[z] of the latent random variable z, and then
plug that into the optimization in the M-step everywhere z occurs. While this
worked for simple problems such as the mixture of Gaussians, in our derivation


for factor analysis, we needed E[zz> ] as well as E[z]; and as we saw, E[zz> ] and
E[z]E[z]> differ by the quantity Σz| x . Thus, the M-step update must take into
account the covariance of z in the posterior distribution p(z(i) | x (i) ).
Lastly, we can also find the M-step optimizations for the parameters µ and Ψ.
It is not hard to show that the first is given by
\[
\mu = \frac{1}{n} \sum_{i=1}^{n} x^{(i)}.
\]

Since this doesn’t change as the parameters are varied (i.e., unlike the update for Λ, the right hand side does not depend on Qi(z(i)) = p(z(i) | x(i); µ, Λ, Ψ), which in turn depends on the parameters), this can be calculated just once and need not be further updated as the algorithm is run. Similarly, the diagonal Ψ can be found by calculating

\[
\Phi = \frac{1}{n} \sum_{i=1}^{n} x^{(i)} x^{(i)\top} - x^{(i)} \mu_{z^{(i)}|x^{(i)}}^\top \Lambda^\top - \Lambda \mu_{z^{(i)}|x^{(i)}} x^{(i)\top} + \Lambda \left( \mu_{z^{(i)}|x^{(i)}} \mu_{z^{(i)}|x^{(i)}}^\top + \Sigma_{z^{(i)}|x^{(i)}} \right) \Lambda^\top,
\]

and setting Ψii = Φii (i.e., letting Ψ be the diagonal matrix containing only the
diagonal entries of Φ).
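Putting the E-step and the M-step updates for Λ and Ψ together, here is a rough NumPy sketch of one EM iteration for factor analysis. We take µ to be the data mean (computed once, as noted above) and work with centered data throughout, which is a common implementation choice rather than something the notes prescribe; the names and shapes are our own.

```python
import numpy as np

def factor_analysis_em_step(X, mu, Lam, Psi):
    """One EM iteration for factor analysis. X: (n, d) data, mu: (d,) data mean,
    Lam: (d, k), Psi: (d,) diagonal entries of the noise covariance."""
    n, d = X.shape
    k = Lam.shape[1]
    Xc = X - mu

    # E-step: posterior Q_i = N(mu_{z|x}, Sigma_{z|x}) for each example
    G = Lam @ Lam.T + np.diag(Psi)               # Lambda Lambda^T + Psi
    Ginv = np.linalg.inv(G)
    mu_z = Xc @ Ginv @ Lam                       # row i is (Lam^T G^{-1} (x^(i) - mu))^T, shape (n, k)
    Sigma_z = np.eye(k) - Lam.T @ Ginv @ Lam     # shared across examples

    # M-step for Lambda, following eq. (26.5)
    A = Xc.T @ mu_z                              # sum_i (x^(i) - mu) mu_{z|x}^T
    B = mu_z.T @ mu_z + n * Sigma_z              # sum_i (mu_{z|x} mu_{z|x}^T + Sigma_{z|x})
    Lam_new = A @ np.linalg.inv(B)

    # M-step for Psi: diagonal of the Phi matrix above (on centered data)
    Phi = (Xc.T @ Xc - Xc.T @ mu_z @ Lam_new.T - Lam_new @ mu_z.T @ Xc
           + Lam_new @ B @ Lam_new.T) / n
    Psi_new = np.diag(Phi)
    return Lam_new, Psi_new
```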



Part X: Principal Components Analysis

From CS229 Spring 2021, Andrew Ng, Moses Charikar & Christopher Ré, Stanford University.

In our discussion of factor analysis, we gave a way to model data x ∈ R^d as ‘‘approximately’’ lying in some k-dimensional subspace, where k ≪ d. Specifically, we imagined that each point x(i) was created by first generating some z(i) lying in the k-dimensional affine space {Λz + µ; z ∈ R^k}, and then adding Ψ-covariance noise. Factor analysis is based on a probabilistic model, and parameter estimation used the iterative EM algorithm.
In this set of notes, we will develop a method, Principal Components Analysis
(PCA), that also tries to identify the subspace in which the data approximately
lies. However, PCA will do so more directly, and will require only an eigenvector
calculation (easily done with the eig function in Matlab), and does not need to
resort to EM.
Suppose we are given a dataset {x(i); i = 1, . . . , n} of attributes of n different types of automobiles, such as their maximum speed, turn radius, and so on. Let x(i) ∈ R^d for each i (d ≪ n). But unknown to us, two different attributes—some xi and xj—respectively give a car’s maximum speed measured in miles per hour, and the maximum speed measured in kilometers per hour. These two attributes are therefore almost linearly dependent, up to only small differences introduced by rounding off to the nearest mph or kph. Thus, the data really lies approximately on a (d − 1)-dimensional subspace. How can we automatically detect, and perhaps remove, this redundancy?
For a less contrived example, consider a dataset resulting from a survey of
(i )
pilots for radio-controlled helicopters, where x1 is a measure of the piloting
(i )
skill of pilot i, and x2 captures how much he/she enjoys flying. Because RC
helicopters are very difficult to fly, only the most committed students, ones that
truly enjoy flying, become good pilots. So, the two attributes x1 and x2 are strongly
correlated. Indeed, we might posit that the data actually lies along some
diagonal axis (the u1 direction) capturing the intrinsic piloting ‘‘karma’’ of a
person, with only a small amount of noise lying off this axis. (See figure.) How
can we automatically compute this u1 direction?
We will shortly develop the PCA algorithm. But prior to running PCA per se,
typically we first preprocess the data by normalizing each feature to have mean 0

and variance 1. We do this by subtracting the mean and dividing by the empirical standard deviation:

\[
x_j^{(i)} \leftarrow \frac{x_j^{(i)} - \mu_j}{\sigma_j}
\]

where $\mu_j = \frac{1}{n}\sum_{i=1}^{n} x_j^{(i)}$ and $\sigma_j^2 = \frac{1}{n}\sum_{i=1}^{n} (x_j^{(i)} - \mu_j)^2$ are the mean and variance of feature j, respectively.
Subtracting µ j zeros out the mean and may be omitted for data known to have
zero mean (for instance, time series corresponding to speech or other acoustic
signals). Dividing by the standard deviation σj rescales each coordinate to have
unit variance, which ensures that different attributes are all treated on the same
‘‘scale.’’ For instance, if x1 was cars’ maximum speed in mph (taking values in the
high tens or low hundreds) and x2 were the number of seats (taking values around
2-4), then this renormalization rescales the different attributes to make them more
comparable. This rescaling may be omitted if we had a priori knowledge that the
different attributes are all on the same scale. One example of this is if each data point represented a grayscale image, and each $x_j^{(i)}$ took a value in {0, 1, . . . , 255}
corresponding to the intensity value of pixel j in image i.
Now, having normalized our data, how do we compute the ‘‘major axis of vari-
ation’’ u—that is, the direction on which the data approximately lies? One way is
to pose this problem as finding the unit vector u so that when the data is projected
onto the direction corresponding to u, the variance of the projected data is maxi-
mized. Intuitively, the data starts off with some amount of variance/information
in it. We would like to choose a direction u so that if we were to approximate the
data as lying in the direction/subspace corresponding to u, as much as possible
of this variance is still retained. Consider the following dataset, on which we have
already carried out the normalization steps:
Now, suppose we pick u to correspond to the direction shown in the figure
below. The circles denote the projections of the original data onto this line.
We see that the projected data still has a fairly large variance, and the points
tend to be far from zero. In contrast, suppose we had instead picked the following
direction:
Here, the projections have a significantly smaller variance, and are much closer
to the origin.
We would like to automatically select the direction u corresponding to the first
of the two figures shown above. To formalize this, note that given a unit vector u

toc 2021-05-23 00:18:27-07:00, draft: send comments to [email protected]



and a point x, the length of the projection of x onto u is given by x^⊤u. I.e., if x(i) is a point in our dataset (one of the crosses in the plot), then its projection onto u (the corresponding circle in the figure) is distance x^⊤u from the origin. Hence, to maximize the variance of the projections, we would like to choose a unit-length u so as to maximize:

\[
\frac{1}{n} \sum_{i=1}^{n} (x^{(i)\top} u)^2 = \frac{1}{n} \sum_{i=1}^{n} u^\top x^{(i)} x^{(i)\top} u = u^\top \left( \frac{1}{n} \sum_{i=1}^{n} x^{(i)} x^{(i)\top} \right) u
\]

We easily recognize that maximizing this subject to ‖u‖₂ = 1 gives the principal eigenvector of $\Sigma = \frac{1}{n}\sum_{i=1}^{n} x^{(i)} x^{(i)\top}$, which is just the empirical covariance matrix of the data (assuming it has zero mean).¹
To summarize, we have found that if we wish to find a 1-dimensional subspace with which to approximate the data, we should choose u to be the principal eigenvector of Σ. More generally, if we wish to project our data into a k-dimensional subspace (k < d), we should choose u1, . . . , uk to be the top k eigenvectors of Σ. The ui’s now form a new, orthogonal basis for the data.²

¹ If you haven’t seen this before, try using the method of Lagrange multipliers to maximize u^⊤Σu subject to u^⊤u = 1. You should be able to show that Σu = λu, for some λ, which implies u is an eigenvector of Σ, with eigenvalue λ.
² Because Σ is symmetric, the ui’s will (or always can be chosen to be) orthogonal to each other.

Then, to represent x(i) in this basis, we need only compute the corresponding vector

\[
y^{(i)} = \begin{bmatrix} u_1^\top x^{(i)} \\ u_2^\top x^{(i)} \\ \vdots \\ u_k^\top x^{(i)} \end{bmatrix} \in \mathbb{R}^k.
\]

Thus, whereas x (i) ∈ Rd , the vector y(i) now gives a lower, k-dimensional, approx-
imation/representation for x (i) . PCA is therefore also referred to as a dimension-
ality reduction algorithm. The vectors u1 , . . . , uk are called the first k principal
components of the data.
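A minimal NumPy sketch of this procedure, assuming the data matrix X has already been normalized as described above; the function name and the use of np.linalg.eigh are our own choices.

```python
import numpy as np

def pca(X, k):
    """Project data X (shape (n, d), zero mean, unit variance) onto its first k
    principal components."""
    n, d = X.shape
    Sigma = X.T @ X / n                       # empirical covariance of zero-mean data
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # symmetric eigendecomposition, ascending eigenvalues
    U = eigvecs[:, ::-1][:, :k]               # top-k eigenvectors u_1, ..., u_k as columns
    Y = X @ U                                 # y^(i) = [u_1^T x^(i), ..., u_k^T x^(i)]
    return Y, U
```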

Remark. Although we have shown it formally only for the case of k = 1, using
well-known properties of eigenvectors it is straightforward to show that of all
possible orthogonal bases u1 , . . . , uk , the one that we have chosen maximizes
∑i ky(i) k22 . Thus, our choice of a basis preserves as much variability as possible in
the original data.


In problem set 4, you will see that PCA can also be derived by picking the basis
that minimizes the approximation error arising from projecting the data onto the
k-dimensional subspace spanned by them.
PCA has many applications; we will close our discussion with a few examples.
First, compression—representing x (i) ’s with lower dimension y(i) ’s—is an obvious
application. If we reduce high dimensional data to k = 2 or 3 dimensions, then
we can also plot the y(i) ’s to visualize the data. For instance, if we were to reduce
our automobiles data to 2 dimensions, then we can plot it (one point in our plot
would correspond to one car type, say) to see what cars are similar to each other
and what groups of cars may cluster together.
Another standard application is to preprocess a dataset to reduce its dimension before running a supervised learning algorithm with the x(i)’s as inputs.
Apart from computational benefits, reducing the data’s dimension can also reduce
the complexity of the hypothesis class considered and help avoid overfitting
(e.g., linear classifiers over lower dimensional input spaces will have smaller VC
dimension).
Lastly, as in our RC pilot example, we can also view PCA as a noise reduction algorithm. In our example, it estimates the intrinsic ‘‘piloting karma’’ from the noisy measures of piloting skill and enjoyment. In class, we also saw the application of this idea to face images, resulting in the eigenfaces method. Here, each point x(i) ∈ R^{100×100} was a 10000-dimensional vector, with each coordinate corresponding to a pixel intensity value in a 100 × 100 image of a face. Using PCA, we represent each image x(i) with a much lower-dimensional y(i). In doing so, we
hope that the principal components we found retain the interesting, systematic
variations between faces that capture what a person really looks like, but not the
‘‘noise’’ in the images introduced by minor lighting variations, slightly different
imaging conditions, and so on. We then measure distances between faces i and j
by working in the reduced dimension, and computing ky(i) − y( j) k2 . This resulted
in a surprisingly good face-matching and retrieval algorithm.



Part XI: Independent Components Analysis

From CS229 Spring 2021, Andrew Ng, Moses Charikar & Christopher Ré, Stanford University.

Our next topic is Independent Components Analysis (ICA). Similar to PCA, this will find a new basis in which to represent our data. However, the goal is very different.
As a motivating example, consider the ‘‘cocktail party problem.’’ Here, d
speakers are speaking simultaneously at a party, and any microphone placed in
the room records only an overlapping combination of the d speakers’ voices. But let’s say we have d different microphones placed in the room, and because each
microphone is a different distance from each of the speakers, it records a different
combination of the speakers’ voices. Using these microphone recordings, can we
separate out the original d speakers’ speech signals?
To formalize this problem, we imagine that there is some data s ∈ Rd that is
generated via d independent sources. What we observe is

x = As,

where A is an unknown square matrix called the mixing matrix. Repeated observations give us a dataset {x(i); i = 1, . . . , n}, and our goal is to recover the sources s(i) that had generated our data (x(i) = As(i)).
In our cocktail party problem, s(i) is a d-dimensional vector, and $s_j^{(i)}$ is the sound that speaker j was uttering at time i. Also, x(i) is a d-dimensional vector, and $x_j^{(i)}$ is the acoustic reading recorded by microphone j at time i.
Let W = A−1 be the unmixing matrix. Our goal is to find W, so that given our
microphone recordings x (i) , we can recover the sources by computing s(i) = Wx (i) .
For notational convenience, we also let $w_i^\top$ denote the i-th row of W, so that

\[
W = \begin{bmatrix} \text{---}\; w_1^\top \;\text{---} \\ \vdots \\ \text{---}\; w_d^\top \;\text{---} \end{bmatrix}.
\]

Thus, $w_i \in \mathbb{R}^d$, and the j-th source can be recovered as $s_j^{(i)} = w_j^\top x^{(i)}$.

27 ICA ambiguities

To what degree can W = A−1 be recovered? If we have no prior knowledge


about the sources and the mixing matrix, it is easy to see that there are some
inherent ambiguities in A that are impossible to recover, given only the x (i) ’s.
Specifically, let P be any d-by-d permutation matrix. This means that each row
and each column of P has exactly one ‘‘1.’’ Here are some examples of permutation
matrices:

\[
P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}; \qquad
P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}; \qquad
P = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\]
If z is a vector, then Pz is another vector that contains a permuted version of z’s
coordinates. Given only the x (i) ’s, there will be no way to distinguish between W
and PW. Specifically, the permutation of the original sources is ambiguous, which
should be no surprise. Fortunately, this does not matter for most applications.
Further, there is no way to recover the correct scaling of the wi ’s. For instance,
if A were replaced with 2A, and every s(i) were replaced with (0.5)s(i) , then our
observed x (i) = 2A · (0.5)s(i) would still be the same. More broadly, if a single
column of A were scaled by a factor of α, and the corresponding source were
scaled by a factor of 1/α, then there is again no way to determine that this had
happened given only the x (i) ’s. Thus, we cannot recover the ‘‘correct’’ scaling of
the sources. However, for the applications that we are concerned with—including
the cocktail party problem—this ambiguity also does not matter. Specifically,
(i )
scaling a speaker’s speech signal s j by some positive factor α affects only the
(i )
volume of that speaker’s speech. Also, sign changes do not matter, and s j and
(i )
−s j sound identical when played on a speaker. Thus, if the wi found by an
algorithm is scaled by any non-zero real number, the corresponding recovered
source si = wi> x will be scaled by the same factor; but this usually does not matter.
(These comments also apply to ICA for the brain/MEG data that we talked about
in class.)
Are these the only sources of ambiguity in ICA? It turns out that they are,
so long as the sources si are non-Gaussian. To see what the difficulty is with Gaussian data, consider an example in which d = 2, and s ∼ N(0, I). Here, I is
the 2 × 2 identity matrix. Note that the contours of the density of the standard
normal distribution N (0, I ) are circles centered on the origin, and the density is
rotationally symmetric.
Now, suppose we observe some x = As, where A is our mixing matrix. Then,
the distribution of x will be Gaussian, x ∼ N (0, AA> ), since

\[
\begin{aligned}
\mathbb{E}_{s \sim N(0,I)}[x] &= \mathbb{E}[As] = A\,\mathbb{E}[s] = 0 \\
\mathrm{Cov}[x] &= \mathbb{E}_{s \sim N(0,I)}[xx^\top] = \mathbb{E}[Ass^\top A^\top] = A\,\mathbb{E}[ss^\top]A^\top = A \cdot \mathrm{Cov}[s] \cdot A^\top = AA^\top
\end{aligned}
\]

Now, let R be an arbitrary orthogonal (less formally, a rotation/reflection) matrix,


so that RR> = R> R = I, and let A0 = AR. Then if the data had been mixed
according to A0 instead of A, we would have instead observed x 0 = A0 s. The
distribution of x 0 is also Gaussian, x 0 ∼ N (0, AA> ), since Es∼N (0,I ) [ x 0 ( x 0 )> ] =
E[ A0 ss> ( A0 )> ] = E[ ARss> ( AR)> ] = ARR> A> = AA> . Hence, whether the
mixing matrix is A or A0 , we would observe data from a N (0, AA> ) distribution.
Thus, there is no way to tell if the sources were mixed using A and A0 . There is an
arbitrary rotational component in the mixing matrix that cannot be determined
from the data, and we cannot recover the original sources.
Our argument above was based on the fact that the multivariate standard
normal distribution is rotationally symmetric. Despite the bleak picture that this
paints for ICA on Gaussian data, it turns out that, so long as the data is not
Gaussian, it is possible, given enough data, to recover the d independent sources.

28 Densities and linear transformations

Before moving on to derive the ICA algorithm proper, we first digress briefly
to talk about the effect of linear transformations on densities.
Suppose a random variable s is drawn according to some density ps (s). For
simplicity, assume for now that s ∈ R is a real number. Now, let the random
variable x be defined according to x = As (here, x ∈ R, A ∈ R). Let p x be the
density of x. What is p x ?
Let W = A−1 . To calculate the ‘‘probability’’ of a particular value of x, it
is tempting to compute s = Wx, then evaluate ps at that point, and con-
clude that ‘‘p x ( x ) = ps (Wx ).’’ However, this is incorrect. For example, let s ∼
Uniform[0, 1], so ps (s) = 1{0 ≤ s ≤ 1}. Now, let A = 2, so x = 2s. Clearly,
x is distributed uniformly in the interval [0, 2]. Thus, its density is given by
p x ( x ) = (0.5)1{0 ≤ x ≤ 2}. This does not equal ps (Wx ), where W = 0.5 = A−1 .
Instead, the correct formula is p x ( x ) = ps (Wx )|W |.
More generally, if s is a vector-valued distribution with density ps , and x = As
for a square, invertible matrix A, then the density of x is given by

p x ( x ) = ps (Wx ) · |W |,

where W = A−1 .
Remark. If you’ve seen the result that A maps [0, 1]d to a set of volume | A|,
then here’s another way to remember the formula for p x given above, that also
generalizes our previous 1-dimensional example. Specifically, let A ∈ Rd×d be
given, and let W = A−1 as usual. Also let C1 = [0, 1]d be the d-dimensional
hypercube, and define C2 = { As : s ∈ C1 } ⊆ Rd to be the image of C1 under the
mapping given by A. Then it is a standard result in linear algebra (and, indeed,
one of the ways of defining determinants) that the volume of C2 is given by | A|.
Now, suppose s is uniformly distributed in [0, 1]d , so its density is ps (s) = 1{s ∈
C1 }. Then clearly x will be uniformly distributed in C2 . Its density is therefore
found to be p x ( x ) = 1{ x ∈ C2 }/ vol(C2 ) (since it must integrate over C2 to
1). But using the fact that the determinant of the inverse of a matrix is just the
inverse of the determinant, we have 1/ vol(C2 ) = 1/| A| = | A−1 | = |W |. Thus,
p x ( x ) = 1{ x ∈ C2 }|W | = 1{Wx ∈ C1 }|W | = ps (Wx )|W |.
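As a quick sanity check of the formula p_x(x) = p_s(Wx)|W|, the following sketch compares it against a case where p_x is known in closed form (a linearly transformed standard Gaussian); the specific random matrix and the SciPy helper are our own choices for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

# If s ~ N(0, I) and x = As, then x ~ N(0, A A^T), so p_x is known exactly.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))          # almost surely invertible
W = np.linalg.inv(A)
x = rng.normal(size=3)

lhs = multivariate_normal.pdf(x, mean=np.zeros(3), cov=A @ A.T)            # p_x(x) directly
rhs = multivariate_normal.pdf(W @ x, mean=np.zeros(3), cov=np.eye(3)) \
      * abs(np.linalg.det(W))                                              # p_s(Wx) |W|
print(lhs, rhs)  # the two values agree up to floating point error
```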

29 ICA algorithm

We are now ready to derive an ICA algorithm. We describe an algorithm


by Bell and Sejnowski, and we give an interpretation of their algorithm as a
method for maximum likelihood estimation. (This is different from their original
interpretation involving a complicated idea called the infomax principle, which is
no longer necessary given the modern understanding of ICA.)

We suppose that the distribution of each source sj is given by a density ps, and that the joint distribution of the sources s is given by

\[
p(s) = \prod_{j=1}^{d} p_s(s_j).
\]

Note that by modeling the joint distribution as a product of marginals, we capture


the assumption that the sources are independent. Using our formulas from the
previous section, this implies the following density on x = As = W −1 s:

\[
p(x) = \prod_{j=1}^{d} p_s(w_j^\top x) \cdot |W|.
\]

All that remains is to specify a density for the individual sources ps. Recall that, given a real-valued random variable z, its cumulative distribution function (cdf) F is defined by $F(z_0) = P(z \le z_0) = \int_{-\infty}^{z_0} p_z(z)\,dz$, and the density is the derivative of the cdf: $p_z(z) = F'(z)$.
Thus, to specify a density for the si ’s, all we need to do is to specify some cdf for
it. A cdf has to be a monotonic function that increases from zero to one. Following
our previous discussion, we cannot choose the Gaussian cdf, as ICA doesn’t work
on Gaussian data. What we’ll choose instead as a reasonable ‘‘default’’ cdf that
slowly increases from 0 to 1, is the sigmoid function g(s) = 1/(1 + e^{−s}). Hence, p_s(s) = g′(s).¹

¹ If you have prior knowledge that the sources’ densities take a certain form, then it is a good idea to substitute that in here. But in the absence of such knowledge, the sigmoid function can be thought of as a reasonable default that seems to work well for many problems. Also, the presentation here assumes that either the data x(i) has been preprocessed to have zero mean, or that it can naturally be expected to have zero mean (such as acoustic signals). This is necessary because our assumption that p_s(s) = g′(s) implies E[s] = 0 (the derivative of the logistic function is a symmetric function, and hence gives a density corresponding to a random variable with zero mean), which implies E[x] = E[As] = 0.

The square matrix W is the parameter in our model. Given a training set {x(i); i = 1, . . . , n}, the log likelihood is given by

\[
\ell(W) = \sum_{i=1}^{n} \left( \sum_{j=1}^{d} \log g'(w_j^\top x^{(i)}) + \log |W| \right).
\]

We would like to maximize this in terms of W. By taking derivatives and using the fact (from the first set of notes) that $\nabla_W |W| = |W| (W^{-1})^\top$, we easily derive a stochastic gradient ascent learning rule. For a training example x(i), the update rule is:

\[
W := W + \alpha \left( \begin{bmatrix} 1 - 2g(w_1^\top x^{(i)}) \\ 1 - 2g(w_2^\top x^{(i)}) \\ \vdots \\ 1 - 2g(w_d^\top x^{(i)}) \end{bmatrix} x^{(i)\top} + (W^\top)^{-1} \right),
\]
plies E[ x ] = E[ As] = 0.


where α is the learning rate.


After the algorithm converges, we then compute s(i) = Wx (i) to recover the
original sources.
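Here is a rough NumPy sketch of the stochastic gradient ascent update above, assuming zero-mean data; the learning rate, number of epochs, and function name are illustrative choices, not prescriptions from the notes.

```python
import numpy as np

def ica(X, alpha=0.01, n_epochs=10, seed=0):
    """Stochastic gradient ascent for the ICA update rule above.
    X: (n, d) array of mixed observations x^(i), assumed to have zero mean."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.eye(d)
    g = lambda s: 1.0 / (1.0 + np.exp(-s))    # sigmoid cdf

    for _ in range(n_epochs):
        for i in rng.permutation(n):          # visit examples in a randomly permuted order
            x = X[i]
            # W := W + alpha * ( (1 - 2 g(W x)) x^T + (W^T)^{-1} )
            W += alpha * (np.outer(1.0 - 2.0 * g(W @ x), x) + np.linalg.inv(W.T))
    S = X @ W.T                               # recovered sources s^(i) = W x^(i)
    return W, S
```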
Remark. When writing down the likelihood of the data, we implicitly assumed
that the x (i) ’s were independent of each other (for different values of i; note this
issue is different from whether the different coordinates of x (i) are independent),
so that the likelihood of the training set was given by ∏i p( x (i) ; W ). This assump-
tion is clearly incorrect for speech data and other time series where the x (i) ’s are
dependent, but it can be shown that having correlated training examples will not
hurt the performance of the algorithm if we have sufficient data. However, for
problems where successive training examples are correlated, when implementing
stochastic gradient ascent, it sometimes helps accelerate convergence if we visit
training examples in a randomly permuted order (i.e., run stochastic gradient
ascent on a randomly shuffled copy of the training set.)



Part XII: Reinforcement Learning and Control

From CS229 Spring 2021, Andrew Ng, Moses Charikar & Christopher Ré, Stanford University.

We now begin our study of reinforcement learning and adaptive control.
In supervised learning, we saw algorithms that tried to make their outputs

mimic the labels y given in the training set. In that setting, the labels gave an
unambiguous ‘‘right answer’’ for each of the inputs x. In contrast, for many
sequential decision making and control problems, it is very difficult to provide
this type of explicit supervision to a learning algorithm. For example, if we have
just built a four-legged robot and are trying to program it to walk, then initially
we have no idea what the ‘‘correct’’ actions to take are to make it walk, and so do
not know how to provide explicit supervision for a learning algorithm to try to
mimic.
In the reinforcement learning framework, we will instead provide our algo-
rithms only a reward function, which indicates to the learning agent when it is
doing well, and when it is doing poorly. In the four-legged walking example, the
reward function might give the robot positive rewards for moving forwards, and
negative rewards for either moving backwards or falling over. It will then be the
learning algorithm’s job to figure out how to choose actions over time so as to
obtain large rewards.
Reinforcement learning has been successful in applications as diverse as au-
tonomous helicopter flight, robot legged locomotion, cell-phone network routing,
marketing strategy selection, factory control, and efficient web-page indexing.
Our study of reinforcement learning will begin with a definition of the Markov
decision processes (MDP), which provides the formalism in which RL problems
are usually posed.
30 Markov decision processes

A Markov decision process is a tuple ⟨S, A, {Psa}, γ, R⟩, where:

• S is a set of states. (For example, in autonomous helicopter flight, S might be


the set of all possible positions and orientations of the helicopter.)

• A is a set of actions. (For example, the set of all possible directions in which
you can push the helicopter’s control sticks.)

• Psa are the state transition probabilities. For each state s ∈ S and action a ∈ A,
Psa is a distribution over the state space. We’ll say more about this later, but
briefly, Psa gives the distribution over what states we will transition to if we
take action a in state s.

• γ ∈ [0, 1) is called the discount factor.

• R : S × A → R is the reward function. (Rewards are sometimes also written as a function of a state S only, in which case we would have R : S → R.)

The dynamics of an MDP proceeds as follows: We start in some state s0 , and


get to choose some action a0 ∈ A to take in the MDP. As a result of our choice, the
state of the MDP randomly transitions to some successor state s1 , drawn according
to s1 ∼ Ps0 a0 . Then, we get to pick another action a1 . As a result of this action,
the state transitions again, now to some s2 ∼ Ps1 a1 . We then pick a2 , and so on.
Pictorially, we can represent this process as follows:
\[
s_0 \xrightarrow{a_0} s_1 \xrightarrow{a_1} s_2 \xrightarrow{a_2} s_3 \xrightarrow{a_3} \cdots
\]

Upon visiting the sequence of states s0 , s1 , . . . with actions a0 , a1 , . . ., our total


payoff is given by

R(s0 , a0 ) + γR(s1 , a1 ) + γ2 R(s2 , a2 ) + · · · .

Or, when we are writing rewards as a function of the states only, this becomes

R(s0 ) + γR(s1 ) + γ2 R(s2 ) + · · · .



For most of our development, we will use the simpler state-rewards R(s), though
the generalization to state-action rewards R(s, a) offers no special difficulties.
Our goal in reinforcement learning is to choose actions over time so as to
maximize the expected value of the total payoff:
\[
\mathbb{E}\left[ R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \right]
\]

Note that the reward at timestep t is discounted by a factor of γt . Thus, to make


this expectation large, we would like to accrue positive rewards as soon as possible
(and postpone negative rewards as long as possible). In economic applications
where R(·) is the amount of money made, γ also has a natural interpretation
in terms of the interest rate (where a dollar today is worth more than a dollar
tomorrow).
A policy is any function π : S 7→ A mapping from the states to the actions. We
say that we are executing some policy π if, whenever we are in state s, we take
action a = π (s). We also define the value function for a policy π according to
\[
V^\pi(s) = \mathbb{E}\left[ R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \mid s_0 = s, \pi \right].
\]

V^π(s) is simply the expected sum of discounted rewards upon starting in state s, and taking actions according to π.¹

¹ This notation in which we condition on π isn’t technically correct because π isn’t a random variable, but this is quite standard in the literature.

Given a fixed policy π, its value function V^π satisfies the Bellman equations:

\[
V^\pi(s) = R(s) + \gamma \sum_{s' \in S} P_{s\pi(s)}(s') V^\pi(s').
\]

This says that the expected sum of discounted rewards V π (s) for starting in s
consists of two terms: First, the immediate reward R(s) that we get right away
simply for starting in state s, and second, the expected sum of future discounted
rewards. Examining the second term in more detail, we see that the summation
term above can be rewritten as $\mathbb{E}_{s' \sim P_{s\pi(s)}}[V^\pi(s')]$. This is the expected sum of discounted rewards for starting in state s′, where s′ is distributed according to $P_{s\pi(s)}$,
which is the distribution over where we will end up after taking the first action
π (s) in the MDP from state s. Thus, the second term above gives the expected
sum of discounted rewards obtained after the first step in the MDP.


Bellman’s equations can be used to efficiently solve for V π . Specifically, in a


finite-state MDP (|S| < ∞), we can write down one such equation for V π (s)
for every state s. This gives us a set of |S| linear equations in |S| variables (the
unknown V π (s)’s, one for each state), which can be efficiently solved for the
V π (s)’s.
We also define the optimal value function according to

\[
V^*(s) = \max_\pi V^\pi(s). \tag{30.1}
\]

In other words, this is the best possible expected sum of discounted rewards that
can be attained using any policy. There is also a version of Bellman’s equations
for the optimal value function:

\[
V^*(s) = R(s) + \max_{a \in A} \gamma \sum_{s' \in S} P_{sa}(s') V^*(s'). \tag{30.2}
\]

The first term above is the immediate reward as before. The second term is the
maximum over all actions a of the expected future sum of discounted rewards
we’ll get upon after action a. You should make sure you understand this equation
and see why it makes sense. (A derivation for equation (30.2) and equation (30.3) below is given in chapter 35.) We also define a policy π*: S → A as follows:

\[
\pi^*(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s') V^*(s'). \tag{30.3}
\]
Note that π ∗ (s) gives the action a that attains the maximum in the ‘‘max’’ in
equation (30.2).
It is a fact that for every state s and every policy π, we have

\[
V^*(s) = V^{\pi^*}(s) \geq V^\pi(s).
\]

The first equality says that V^{π*}, the value function for π*, is equal to the optimal value function V* for every state s. Further, the inequality above says that π*’s value is at least as large as the value of any other policy. In other words, π* as defined in equation (30.3) is the optimal policy.
Note that π ∗ has the interesting property that it is the optimal policy for all
states s. Specifically, it is not the case that if we were starting in some state s then
there’d be some optimal policy for that state, and if we were starting in some other
state s0 then there’d be some other policy that’s optimal policy for s0 . The same
policy π ∗ attains the maximum in equation (30.1) for all states s. This means that
we can use the same policy π ∗ no matter what the initial state of our MDP is.



31 Value iteration and policy iteration

We now describe two efficient algorithms for solving finite-state MDPs. For now,
we will consider only MDPs with finite state and action spaces (|S| < ∞, | A| < ∞).
In this section, we will also assume that we know the state transition probabilities
{ Psa } and the reward function R.
The first algorithm, value iteration, is as follows:

Algorithm 31.1. Value iteration.

For each state s, initialize V(s) := 0.
repeat
    for every state s, update do
        \[
        V(s) := R(s) + \max_{a \in A} \gamma \sum_{s'} P_{sa}(s') V(s'). \tag{31.1}
        \]
    end for
until convergence

This algorithm can be thought of as repeatedly trying to update the estimated


value function using the Bellman equation (30.2).
There are two possible ways of performing the updates in the inner loop of
the algorithm. In the first, we can first compute the new values for V (s) for every
state s, and then overwrite all the old values with the new values. This is called a
synchronous update. In this case, the algorithm can be viewed as implementing
a ‘‘Bellman backup operator’’ that takes a current estimate of the value function,
and maps it to a new estimate. (See homework problem for details.) Alternatively,
we can also perform asynchronous updates. Here, we would loop over the states
(in some order), updating the values one at a time.
Under either synchronous or asynchronous updates, it can be shown that
value iteration will cause V to converge to V ∗ . Having found V ∗ , we can then use
equation (30.3) to find the optimal policy.
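A minimal NumPy sketch of synchronous value iteration, assuming the transition probabilities are stored as a (|S|, |A|, |S|) array and rewards are a function of the state only; the tolerance-based stopping rule is our own choice.

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-6):
    """Synchronous value iteration. P[s, a, s'] are transition probabilities,
    R is a (|S|,) vector of state rewards, gamma is the discount factor."""
    V = np.zeros(R.shape[0])
    while True:
        # Bellman backup: V(s) = R(s) + max_a gamma * sum_{s'} P_sa(s') V(s')
        V_new = R + gamma * (P @ V).max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Having found V in this way, the greedy policy of equation (30.3) can be read off as `(P @ V).argmax(axis=1)`.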
Apart from value iteration, there is a second standard algorithm for finding an
optimal policy for an MDP. The policy iteration algorithm proceeds as follows:

Algorithm 31.2. Policy iteration.

Initialize π randomly.
repeat
    Let V := V^π.    ▷ typically computed by a linear system solver
    for every state s, update do
        \[
        \pi(s) := \arg\max_{a \in A} \sum_{s'} P_{sa}(s') V(s'). \tag{31.2}
        \]
    end for
until convergence

Thus, the inner loop repeatedly computes the value function for the current policy, and then updates the policy using the current value function. (The policy π found by the update (31.2) is also called the policy that is greedy with respect to V.) Note that the policy evaluation step, V := V^π, can be done by solving Bellman’s equations as described earlier, which in the case of a fixed policy is just a set of |S| linear equations in |S| variables.
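For comparison, here is a sketch of policy iteration under the same assumed array representation; the policy evaluation step solves the |S| linear equations (I − γP_π)V = R with a standard linear solver, and the policy update implements equation (31.2).

import numpy as np

def policy_iteration(P, R, gamma):
    """P: array (|S|, |A|, |S|); R: array (|S|,). Returns V close to V* and the optimal policy."""
    n_states, n_actions, _ = P.shape
    pi = np.zeros(n_states, dtype=int)                      # arbitrary initial policy
    while True:
        # Policy evaluation: V^pi solves V = R + gamma * P_pi V, a linear system.
        P_pi = P[np.arange(n_states), pi]                   # rows P_{s, pi(s)}, shape (|S|, |S|)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
        # Policy improvement (31.2): act greedily with respect to V.
        pi_new = P.dot(V).argmax(axis=1)
        if np.array_equal(pi_new, pi):                      # policy stopped changing: done
            return V, pi
        pi = pi_new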
After at most a finite number of iterations of this algorithm, V will converge to V∗, and π will converge to π∗.¹
Both value iteration and policy iteration are standard algorithms for solving MDPs, and there isn’t currently universal agreement over which algorithm is better. For small MDPs, policy iteration is often very fast and converges with very few iterations. However, for MDPs with large state spaces, solving for V^π explicitly would involve solving a large system of linear equations, and could be difficult (note also that one has to solve the linear system multiple times in policy iteration). In these problems, value iteration may be preferred. For this reason, in practice value iteration seems to be used more often than policy iteration. For some more discussion on the comparison and connection of value iteration and policy iteration, please see chapter 34.

¹ Note that value iteration cannot reach the exact V∗ in a finite number of iterations, whereas policy iteration with an exact linear system solver can. This is because when the action space and policy space are discrete and finite, once the policy reaches the optimal policy in policy iteration, it will not change at all. On the other hand, even though value iteration converges to V∗, there is always some non-zero error in the learned value function after any finite number of iterations.



32 Learning a model for an MDP

So far, we have discussed MDPs and algorithms for MDPs assuming that the
state transition probabilities and rewards are known. In many realistic problems,
we are not given state transition probabilities and rewards explicitly, but must
instead estimate them from data. (Usually, S, A, and γ are known.)
For example, suppose that, for the inverted pendulum problem (see problem
set 4), we had a number of trials in the MDP that proceeded as follows:
s_0^(1) −a_0^(1)→ s_1^(1) −a_1^(1)→ s_2^(1) −a_2^(1)→ s_3^(1) −a_3^(1)→ · · ·
s_0^(2) −a_0^(2)→ s_1^(2) −a_1^(2)→ s_2^(2) −a_2^(2)→ s_3^(2) −a_3^(2)→ · · ·
. . .
Here, s_i^(j) is the state we were in at time i of trial j, and a_i^(j) is the corresponding
action that was taken from that state. In practice, each of the trials above might
be run until the MDP terminates (such as if the pole falls over in the inverted
pendulum problem), or it might be run for some large but finite number of
timesteps.
Given this ‘‘experience’’ in the MDP consisting of a number of trials, we can
then easily derive the maximum likelihood estimates for the state transition
probabilities:
P_sa(s′) = (# times we took action a in state s and got to s′) / (# times we took action a in state s)    (32.1)
Or, if the ratio above is ‘‘0/0’’—corresponding to the case of never having taken action a in state s before—then we might simply estimate P_sa(s′) to be 1/|S| (i.e., estimate P_sa to be the uniform distribution over all states).
Note that, if we gain more experience (observe more trials) in the MDP, there
is an efficient way to update our estimated state transition probabilities using the
new experience. Specifically, if we keep around the counts for both the numerator
and denominator terms of equation (32.1), then as we observe more trials, we
can simply keep accumulating those counts. Computing the ratio of these counts
then gives our estimate of P_sa.

Using a similar procedure, if R is unknown, we can also pick our estimate of


the expected immediate reward R(s) in state s to be the average reward observed
in state s.
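As an illustrative sketch, the counts behind equation (32.1) and the average-reward estimate can be accumulated from recorded trials as follows; the trajectory format (a list of (s, a, r, s′) tuples per trial) is an assumption made only for this example.

import numpy as np

def estimate_model(trials, n_states, n_actions):
    """trials: list of trajectories, each a list of (s, a, r, s_next) tuples."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sums = np.zeros(n_states)
    reward_counts = np.zeros(n_states)
    for trajectory in trials:
        for (s, a, r, s_next) in trajectory:
            counts[s, a, s_next] += 1           # numerator/denominator counts of (32.1)
            reward_sums[s] += r
            reward_counts[s] += 1
    totals = counts.sum(axis=2, keepdims=True)
    # Maximum likelihood transition estimates, with a uniform fallback for unvisited (s, a) pairs.
    P = np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)
    # Estimated R(s): the average reward observed in state s (0 if the state was never visited).
    R = np.where(reward_counts > 0, reward_sums / np.maximum(reward_counts, 1), 0.0)
    return P, R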
Having learned a model for the MDP, we can then use either value iteration or
policy iteration to solve the MDP using the estimated transition probabilities and
rewards. For example, putting together model learning and value iteration, here
is one possible algorithm for learning in an MDP with unknown state transition
probabilities:

1. Initialize π randomly.

2. Repeat:

(a) Execute π in the MDP for some number of trials.


(b) Using the accumulated experience in the MDP, update our estimates for Psa
(and R, if applicable).
(c) Apply value iteration with the estimated state transition probabilities and
rewards to get a new estimated value function V.
(d) Update π to be the greedy policy with respect to V.

We note that, for this particular algorithm, there is one simple optimization
that can make it run much more quickly. Specifically, in the inner loop of the
algorithm where we apply value iteration, if instead of initializing value iteration
with V = 0, we initialize it with the solution found during the previous iteration
of our algorithm, then that will provide value iteration with a much better initial
starting point and make it converge more quickly.



33 Continuous state MDPs

So far, we’ve focused our attention on MDPs with a finite number of states. We
now discuss algorithms for MDPs that may have an infinite number of states.
For example, for a car, we might represent the state as ( x, y, θ, ẋ, ẏ, θ̇ ), compris-
ing its position ( x, y); orientation θ; velocity in the x and y directions ẋ and ẏ;
and angular velocity θ̇. Hence, S = R6 is an infinite set of states, because there
is an infinite number of possible positions and orientations for the car.¹ Similarly, the inverted pendulum you saw in PS4 has states (x, θ, ẋ, θ̇), where θ is the angle of the pole. And, a helicopter flying in 3d space has states of the form (x, y, z, φ, θ, ψ, ẋ, ẏ, ż, φ̇, θ̇, ψ̇), where here the roll φ, pitch θ, and yaw ψ angles specify the 3d orientation of the helicopter.

¹ Technically, θ is an orientation and so the range of θ is better written θ ∈ [−π, π) than θ ∈ R; but for our purposes, this distinction is not important.
In this section, we will consider settings where the state space is S = Rd , and
describe ways for solving such MDPs.

33.1 Discretization

Perhaps the simplest way to solve a continuous-state MDP is to discretize the


state space, and then to use an algorithm like value iteration or policy iteration,
as described previously.
For example, if we have 2d states (s1 , s2 ), we can use a grid to discretize the
state space:
Here, each grid cell represents a separate discrete state s̄. We can then approxi-
mate the continuous-state MDP via a discrete-state one (S̄, A, { Ps̄a }, γ, R), where
S̄ is the set of discrete states, { Ps̄a } are our state transition probabilities over the
discrete states, and so on. We can then use value iteration or policy iteration to
solve for the V ∗ (s̄) and π ∗ (s̄) in the discrete state MDP (S̄, A, { Ps̄a }, γ, R). When
our actual system is in some continuous-valued state s ∈ S and we need to pick an
action to execute, we compute the corresponding discretized state s̄, and execute
action π ∗ (s̄).
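As a small sketch of the bookkeeping this involves, a continuous state can be mapped to the index of its grid cell s̄ as follows; the grid bounds and resolution in the example call are arbitrary illustrative choices.

import numpy as np

def discretize(s, lows, highs, bins_per_dim):
    """Map a continuous state s in R^d to a single discrete cell index s_bar.

    lows, highs: per-dimension bounds of the region being discretized.
    bins_per_dim: number of cells k along each dimension (k^d cells in total).
    """
    s = np.clip(s, lows, highs - 1e-9)                       # keep the state inside the grid
    ratios = (s - lows) / (highs - lows)                      # in [0, 1) along each dimension
    cell = np.floor(ratios * bins_per_dim).astype(int)        # per-dimension cell index
    # Flatten the d-dimensional cell index into one integer id.
    return int(np.ravel_multi_index(cell, (bins_per_dim,) * len(s)))

# Example: a 2d state space on [0, 1]^2 with a 10 x 10 grid.
s_bar = discretize(np.array([0.32, 0.75]), np.array([0.0, 0.0]), np.array([1.0, 1.0]), 10)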
This discretization approach can work well for many problems. However, there
are two downsides. First, it uses a fairly naive representation for V ∗ (and π ∗ ).
Specifically, it assumes that the value function takes a constant value over each

of the discretization intervals (i.e., that the value function is piecewise constant
in each of the gridcells).
To better understand the limitations of such a representation, consider a su-
pervised learning problem of fitting a function to this dataset:
Clearly, linear regression would do fine on this problem. However, if we instead
discretize the x-axis, and then use a representation that is piecewise constant in
each of the discretization intervals, then our fit to the data would look like this:
This piecewise constant representation just isn’t a good representation for
many smooth functions. It results in little smoothing over the inputs, and no
generalization over the different grid cells. Using this sort of representation, we
would also need a very fine discretization (very small grid cells) to get a good
approximation.
A second downside of this representation is called the curse of dimensionality.
Suppose S = Rd , and we discretize each of the d dimensions of the state into
k values. Then the total number of discrete states we have is k^d. This grows
exponentially quickly in the dimension of the state space d, and thus does not
scale well to large problems. For example, with a 10d state, if we discretize each
state variable into 100 values, we would have 100^10 = 10^20 discrete states, which
is far too many to represent even on a modern desktop computer.
As a rule of thumb, discretization usually works extremely well for 1d and
2d problems (and has the advantage of being simple and quick to implement).
Perhaps with a little bit of cleverness and some care in choosing the discretization
method, it often works well for problems with up to 4d states. If you’re extremely
clever, and somewhat lucky, you may even get it to work for some 6d problems.
But it very rarely works for problems any higher dimensional than that.

33.2 Value function approximation

We now describe an alternative method for finding policies in continuous-state


MDPs, in which we approximate V ∗ directly, without resorting to discretiza-
tion. This approach, called value function approximation, has been successfully
applied to many RL problems.




33.2.1 Using a model or simulator


To develop a value function approximation algorithm, we will assume that we
have a model, or simulator, for the MDP. Informally, a simulator is a black-box
that takes as input any (continuous-valued) state st and action at , and outputs a
next-state s_{t+1} sampled according to the state transition probabilities P_{s_t a_t}.
There are several ways that one can get such a model. One is to use physics
simulation. For example, the simulator for the inverted pendulum in PS4 was
obtained by using the laws of physics to calculate what position and orientation
the cart/pole will be in at time t + 1, given the current state at time t and the
action a taken, assuming that we know all the parameters of the system such as
the length of the pole, the mass of the pole, and so on. Alternatively, one can also
use an off-the-shelf physics simulation software package which takes as input a
complete physical description of a mechanical system, the current state st and
action at , and computes the state st+1 of the system a small fraction of a second
into the future.²

² Open Dynamics Engine (http://www.ode.com) is one example of a free/open-source physics simulator that can be used to simulate systems like the inverted pendulum, and that has been a reasonably popular choice among RL researchers.

An alternative way to get a model is to learn one from data collected in the MDP. For example, suppose we execute n trials in which we repeatedly take actions in an MDP, each trial for T timesteps. This can be done by picking actions at random, executing some specific policy, or via some other way of choosing actions. We would then observe n state sequences like the following:

s_0^(1) −a_0^(1)→ s_1^(1) −a_1^(1)→ s_2^(1) −a_2^(1)→ s_3^(1) −a_3^(1)→ · · · −a_{T−1}^(1)→ s_T^(1)
s_0^(2) −a_0^(2)→ s_1^(2) −a_1^(2)→ s_2^(2) −a_2^(2)→ s_3^(2) −a_3^(2)→ · · · −a_{T−1}^(2)→ s_T^(2)
. . .
s_0^(n) −a_0^(n)→ s_1^(n) −a_1^(n)→ s_2^(n) −a_2^(n)→ s_3^(n) −a_3^(n)→ · · · −a_{T−1}^(n)→ s_T^(n)

We can then apply a learning algorithm to predict st+1 as a function of st and at .


For example, one may choose to learn a linear model of the form

st+1 = Ast + Bat , (33.1)

using an algorithm similar to linear regression. Here, the parameters of the model
are the matrices A and B, and we can estimate them using the data collected from




our n trials, by picking


arg min_{A,B} ∑_{i=1}^{n} ∑_{t=0}^{T−1} ‖ s_{t+1}^{(i)} − ( A s_t^{(i)} + B a_t^{(i)} ) ‖₂² .
We could also potentially use other loss functions for learning the model. For
example, it has been found in recent work [?] that using the ‖·‖₂ norm (without the
square) may be helpful in certain cases.
Having learned A and B, one option is to build a deterministic model, in
which given an input st and at , the output st+1 is exactly determined. Specifically,
we always compute st+1 according to equation (33.1). Alternatively, we may also
build a stochastic model, in which st+1 is a random function of the inputs, by
modeling it as
st+1 = Ast + Bat + et ,
where here et is a noise term, usually modeled as et ∼ N (0, Σ). (The covariance
matrix Σ can also be estimated from data in a straightforward way.)
Here, we’ve written the next-state st+1 as a linear function of the current state
and action; but of course, non-linear functions are also possible. Specifically, one
can learn a model st+1 = Aφs (st ) + Bφa ( at ), where φs and φa are some non-linear
feature mappings of the states and actions. Alternatively, one can also use non-
linear learning algorithms, such as locally weighted linear regression, to learn
to estimate st+1 as a function of st and at . These approaches can also be used to
build either deterministic or stochastic simulators of an MDP.
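As a sketch of how A and B in equation (33.1) might be estimated, one can stack [s_t; a_t] into a single regressor and solve one least-squares problem; the single-trajectory data layout below is an assumption for illustration, and the residuals give one simple estimate of the covariance Σ for the stochastic model.

import numpy as np

def fit_linear_dynamics(states, actions):
    """states: array (T+1, d_s) of visited states; actions: array (T, d_a) of actions taken.

    Fits s_{t+1} ~ A s_t + B a_t by least squares; returns (A, B) and a noise covariance estimate.
    """
    X = np.hstack([states[:-1], actions])            # regressors [s_t, a_t], shape (T, d_s + d_a)
    Y = states[1:]                                    # targets s_{t+1}, shape (T, d_s)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)         # W has shape (d_s + d_a, d_s)
    d_s = states.shape[1]
    A, B = W[:d_s].T, W[d_s:].T                       # so that s_{t+1} is approximately A s_t + B a_t
    residuals = Y - X @ W
    Sigma = np.cov(residuals, rowvar=False)           # for the stochastic model s_{t+1} = A s_t + B a_t + eps
    return A, B, Sigma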

33.2.2 Fitted value iteration


We now describe the fitted value iteration algorithm for approximating the value
function of a continuous state MDP. In the sequel, we will assume that the problem
has a continuous state space S = Rd , but that the action space A is small and
discrete.³ Recall that in value iteration, we would like to perform the update

V(s) := R(s) + γ max_a ∫_{s′} P_sa(s′) V(s′) ds′    (33.2)
      = R(s) + γ max_a E_{s′∼P_sa}[V(s′)]           (33.3)

(In chapter 31, we had written the value iteration update with a summation V(s) := R(s) + γ max_a ∑_{s′} P_sa(s′)V(s′) rather than an integral over states; the new notation reflects that we are now working in continuous states rather than discrete states.)

³ In practice, most MDPs have much smaller action spaces than state spaces. E.g., a car has a 6d state space and a 2d action space (steering and velocity controls); the inverted pendulum has a 4d state space and a 1d action space; a helicopter has a 12d state space and a 4d action space. So, discretizing this set of actions is usually less of a problem than discretizing the state space would have been.




The main idea of fitted value iteration is that we are going to approximately
carry out this step, over a finite sample of states s(1) , . . . , s(n) . Specifically, we
will use a supervised learning algorithm—linear regression in our description
below—to approximate the value function as a linear or non-linear function of
the states:
V ( s ) = θ > φ ( s ).
Here, φ is some appropriate feature mapping of the states.
For each state s^(i) in our finite sample of n states, fitted value iteration will first compute a quantity y^(i), which will be our approximation to R(s^(i)) + γ max_a E_{s′∼P_{s^(i)a}}[V(s′)] (the right hand side of equation (33.3)). Then, it will apply a supervised learning algorithm to try to get V(s^(i)) close to R(s^(i)) + γ max_a E_{s′∼P_{s^(i)a}}[V(s′)] (or, in other words, to try to get V(s^(i)) close to y^(i)).
In detail, the algorithm is as follows:

1. Randomly sample n states s^(1), s^(2), . . . , s^(n) ∈ S.

2. Initialize θ := 0.

3. Repeat:

   For i = 1, . . . , n
       For each action a ∈ A
           Sample s′_1, . . . , s′_k ∼ P_{s^(i)a} (using a model of the MDP).
           Set q(a) = (1/k) ∑_{j=1}^k [ R(s^(i)) + γ V(s′_j) ]
           // Hence, q(a) is an estimate of R(s^(i)) + γ E_{s′∼P_{s^(i)a}}[V(s′)].
       Set y^(i) = max_a q(a).
       // Hence, y^(i) is an estimate of R(s^(i)) + γ max_a E_{s′∼P_{s^(i)a}}[V(s′)].
       // In the original value iteration algorithm (over discrete states)
       // we updated the value function according to V(s^(i)) := y^(i).
       // In this algorithm, we want V(s^(i)) ≈ y^(i), which we’ll achieve
       // using supervised learning (linear regression).
   Set θ := arg min_θ (1/2) ∑_{i=1}^n ( θ⊤φ(s^(i)) − y^(i) )²




Above, we had written out fitted value iteration using linear regression as
the algorithm to try to make V (s(i) ) close to y(i) . That step of the algorithm is
completely analogous to a standard supervised learning (regression) problem in
which we have a training set ( x (1) , y(1) ), ( x (2) , y(2) ), . . . , ( x (n) , y(n) ), and want to
learn a function mapping from x to y; the only difference is that here s plays the
role of x. Even though our description above used linear regression, clearly other
regression algorithms (such as locally weighted linear regression) can also be
used.
Unlike value iteration over a discrete set of states, fitted value iteration cannot
be proved to always converge. However, in practice, it often does converge (or
approximately converge), and works well for many problems. Note also that if we
are using a deterministic simulator/model of the MDP, then fitted value iteration
can be simplified by setting k = 1 in the algorithm. This is because the expectation
in equation (33.3) becomes an expectation over a deterministic distribution, and
so a single example is sufficient to exactly compute that expectation. Otherwise, in
the algorithm above, we had to draw k samples, and average to try to approximate
that expectation (see the definition of q( a), in the algorithm pseudo-code).
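A compact Python sketch of this loop is given below; simulator(s, a), which draws a sample s′ ∼ P_sa, and the feature map phi are assumed to be supplied by the user, and the supervised step is the least-squares fit from the description above.

import numpy as np

def fitted_value_iteration(samples, actions, R, simulator, phi, gamma, k=20, n_iters=50):
    """samples: list of n sampled states s^(i); actions: the finite action set A.
    R(s): reward function; simulator(s, a): samples s' ~ P_sa; phi(s): feature vector."""
    Phi = np.array([phi(s) for s in samples])             # design matrix, shape (n, p)
    theta = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        y = []
        for s in samples:
            # q(a) estimates R(s) + gamma * E_{s' ~ P_sa}[V(s')] by averaging k sampled next states.
            q = [np.mean([R(s) + gamma * phi(simulator(s, a)).dot(theta) for _ in range(k)])
                 for a in actions]
            y.append(max(q))                               # y^(i) = max_a q(a)
        # Supervised step: choose theta so that theta^T phi(s^(i)) is close to y^(i).
        theta, *_ = np.linalg.lstsq(Phi, np.array(y), rcond=None)
    return theta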
Finally, fitted value iteration outputs V, which is an approximation to V ∗ . This
implicitly defines our policy. Specifically, when our system is in some state s, and
we need to choose an action, we would like to choose the action

arg max_a E_{s′∼P_sa}[V(s′)]    (33.4)

The process for computing/approximating this is similar to the inner-loop of fitted


value iteration, where for each action a, we sample s′_1, . . . , s′_k ∼ P_sa to approximate
the expectation. (And again, if the simulator is deterministic, we can set k = 1.)
In practice, there are often other ways to approximate this step as well. For
example, one very common case is if the simulator is of the form st+1 = f (st , at ) +
et , where f is some deterministic function of the states (such as f (st , at ) = Ast +
Bat ), and e is zero-mean Gaussian noise. In this case, we can pick the action given
by
arg max_a V(f(s, a)).

In other words, here we are just setting et = 0 (i.e., ignoring the noise in the
simulator), and setting k = 1. Equivalently, this can be derived from equation (33.4)




using the approximation

E_{s′}[V(s′)] ≈ V(E_{s′}[s′])    (33.5)
             = V(f(s, a)),       (33.6)

where here the expectation is over the random s0 ∼ Psa . So long as the noise terms
et are small, this will usually be a reasonable approximation.
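In code, this deterministic approximation reduces action selection to a single evaluation of V per action, as in the sketch below; f is the assumed deterministic part of the learned simulator (for example f(s, a) = As + Ba) and V(s) = θ⊤φ(s) is the fitted value function.

import numpy as np

def greedy_action(s, actions, f, phi, theta):
    """Pick arg max_a V(f(s, a)), i.e. equation (33.4) with the noise term e_t set to zero (k = 1)."""
    values = [phi(f(s, a)).dot(theta) for a in actions]
    return actions[int(np.argmax(values))]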
However, for problems that don’t lend themselves to such approximations, hav-
ing to sample k| A| states using the model, in order to approximate the expectation
above, can be computationally expensive.



34 Connections between Policy and Value Iteration (Optional)

In the policy iteration, line 3 of algorithm 31.2, we typically use a linear system solver to compute V^π. Alternatively, one can also use the iterative Bellman updates,
similarly to the value iteration, to evaluate V π , as in the Procedure VE(·) in line 1
of algorithm 34.1 below. Here if we take option 1 in line 2 of the Procedure VE, then
the difference between the Procedure VE from the value iteration (algorithm 31.1)
is that on line 4, the procedure is using the action from π instead of the greedy
action.
Using the Procedure VE, we can build algorithm 34.1, which is a variant of
policy iteration that serves as an intermediate algorithm that connects policy itera-
tion and value iteration. Here we are going to use option 2 in VE to maximize the
re-use of knowledge learned before. One can verify indeed that if we take k = 1
and use option 2 in line 2 in algorithm 34.1, then algorithm 34.1 is semantically equivalent to value iteration (algorithm 31.1). In other words, both algorithm 34.1
and value iteration interleave the updates in equation (34.2) and equation (34.1).
Algorithm 34.1 alternates between k steps of update equation (34.1) and one step of equation (34.2), whereas value iteration alternates between 1 step of update equation (34.1) and one step of equation (34.2). Therefore, generally algorithm 34.1 should not be faster than value iteration, because, assuming that updates (34.1) and (34.2) are equally useful and equally time-consuming, the optimal balance of the update frequencies would be just k = 1 or k ≈ 1.
On the other hand, if k steps of update equation (34.1) can be done much
faster than k times a single step of equation (34.1), then taking additional steps of equation (34.1) in a group might be useful. This is what policy iteration is
leveraging—the linear system solver can give us the result of Procedure VE with
k = ∞ much faster than using the Procedure VE for a large k. On the flip side,
when such a speeding-up effect no longer exists, e.g., when the state space is large
and a linear system solver is also not fast, then value iteration is preferable.

function VE(π, k)    ▷ to evaluate V^π    Algorithm 34.1. Variant of policy iteration.
Option 1: Initialize V (s) := 0
Option 2: Initialize from the current V in the main algorithm.
for i = 0 to k − 1 do
for every state s, update do

V(s) := R(s) + γ ∑_{s′} P_{sπ(s)}(s′) V(s′).    (34.1)

end for
end for
return V
Require: hyperparameter k.
Initialize π randomly.
repeat
Let V = VE(π, k).
for every state s, update do

π(s) := arg max_{a∈A} ∑_{s′} P_sa(s′) V(s′).    (34.2)

end for
until convergence



35 Derivations for Bellman Equations

Here we give a derivation for the Bellman Equation given in chapter 30. Recall
that the value function for a policy π is defined as
V^π(s) = E[ R(s_0) + γR(s_1) + γ²R(s_2) + · · · | s_0 = s, π ].

Therefore, we have

V^π(s) = E[ R(s_0) + γR(s_1) + γ²R(s_2) + · · · | s_0 = s, π ]
       = R(s) + γ E[ R(s_1) + γR(s_2) + · · · ]
       = R(s) + γ E_{s_1∼P_{sπ(s)}}[ R(s_1) + γR(s_2) + · · · ]
       = R(s) + γ E_{s_1∼P_{sπ(s)}}[ V^π(s_1) ].

Now we derive the Bellman Equation for the optimal value function.

V∗(s) = max_π V^π(s)
      = max_π ( R(s) + γ ∑_{s′∈S} P_{sπ(s)}(s′) V^π(s′) )
      = R(s) + max_π γ ∑_{s′∈S} P_{sπ(s)}(s′) V^π(s′)
      = R(s) + max_a γ ∑_{s′∈S} P_sa(s′) max_π V^π(s′)
      = R(s) + max_{a∈A} γ ∑_{s′∈S} P_sa(s′) V∗(s′).

Here the fourth equality holds because, for an MDP, the optimal action at a later
state is independent of actions at previous states, hence the optimal policy at the
current state can be decomposed to an action followed by the optimal policy at
the new state.
A Lagrange Multipliers
(From CS229 Spring 2021, Andrew Ng, Moses Charikar & Christopher Ré, Stanford University.)

We consider a special case of Lagrange Multipliers for constrained optimization. The class quickly sketched the ‘‘geometric’’ intuition for Lagrange multipliers, and this note considers a short algebraic derivation.


In order to minimize or maximize a function with linear constraints, we con-
sider finding the critical points (which may be local maxima, local minima, or
saddle points) of
f ( x ) subject to Ax = b
Here f : Rd 7→ R is a convex (or concave) function, x ∈ Rd , A ∈ Rn×d , and
b ∈ Rn . To find the critical points, we cannot just set the derivative of the objective
equal to 0.¹ The technique we consider is to turn the problem from a constrained problem into an unconstrained problem using the Lagrangian,

L(x, µ) = f(x) + µ⊤(Ax − b)    in which µ ∈ R^n

We’ll show that the critical points of the constrained function f are critical points of L(x, µ).

¹ See the example at the end of this chapter.

Finding the Space of Solutions. Assume the constraints are satisfiable, then let
x0 be such that Ax0 = b. Let rank( A) = r, then let {u1 , . . . , uk } be an orthonormal
basis for the null space of A in which k = d − r. Note if k = 0, then x0 is uniquely
defined. So we consider k > 0. We write this basis as a matrix:

U = [u_1, . . . , u_k] ∈ R^{d×k}

Since U is a basis, any solution for f ( x ) can be written as x = x0 + Uy. This


captures all the free parameters of the solution. Thus, we consider the function:

g(y) = f ( x0 + Uy) in which g : Rk 7→ R

The critical points of g are critical points of f . Notice that g is unconstrained, so


we can use standard calculus to find its critical points.

∇y g(y) = 0 equivalently U > ∇ f ( x0 + Uy) = 0.



To make sure the types are clear: ∇y g(y) ∈ Rk , ∇ f (z) ∈ Rd and U ∈ Rd×k . In
both cases, 0 is the 0 vector in Rk .
The above condition says that if y is a critical point for g, then ∇ f ( x ) must
be orthogonal to U. However, U forms a basis for the null space of A and the
rowspace is orthogonal to it. In particular, any element of the rowspace can be
written z = A> µ ∈ Rd . We verify that z and u = Uy are orthogonal since:

z> u = µ> Au = µ> 0 = 0

Since we can decompose Rd as a direct sum of null( A) and the rowspace of A, we


know that any vector orthogonal to U must be in the rowspace. We can rewrite
this orthogonality condition as follows: there is some µ ∈ Rn (depending on x)
such that
∇ f ( x ) + A> µ = 0
for a certain x such that Ax = A( x0 + Uy) = Ax0 = b.

The Clever Lagrangian. We now observe that the critical points of the Lagrangian
are (by differentiating and setting to 0)

∇ x L( x, µ) = ∇ f ( x ) + A> µ = 0
and
∇µ L( x, µ) = Ax − b = 0

The first condition is exactly the condition that x be a critical point in the way
we derived it above, and the second condition says that the constraint be satisfied.
Thus, if x is a critical point, there exists some µ as above, and ( x, µ) is a critical
point for L.

Generalizing to Nonlinear Equality Constraints. Lagrange multipliers are a


much more general technique. If you want to handle non-linear equality con-
straints, then you will need a little extra machinery: the implicit function theorem.
However, the key idea is that you find the space of solutions and you optimize.
In that case, finding the critical points of

f ( x ) s.t. g( x ) = c leads to L( x, µ) = f ( x ) + µ> ( g( x ) − c).




The gradient condition here is ∇ f ( x ) + J > µ = 0, where J is the Jacobian matrix of


g. For the case where we have a single constraint, the gradient condition reduces
to ∇ f ( x ) = −µ1 ∇ g1 ( x ), which we can view as saying, ‘‘at a critical point, the
gradient of the surface must be parallel to the gradient of the function.’’ This connects
us back to the picture that we drew during lecture.

Example A.1 (Need for constrained optimization). We give a simple example to show that you cannot just set the derivatives to 0. Consider f(x_1, x_2) = x_1 and g(x_1, x_2) = x_1² + x_2², and so:

max_x f(x) subject to g(x) = 1.

This is just a linear functional over the circle, and it is compact, so the function
must achieve a maximum value. Intuitively, we can see that (1, 0) is the
maximizer (and hence a critical point). Here, we have:
   
∇f(x) = (1, 0)⊤   and   ∇g(x) = 2 (x_1, x_2)⊤.

Notice that ∇ f ( x ) is not zero anywhere on the circle—it’s constant! For x ∈


{(1, 0), (−1, 0)}, ∇ f ( x ) = λ∇ g( x ) (take λ ∈ {1/2, −1/2}, respectively). On
the other hand, for any other point on the circle x_2 ≠ 0, and so the gradients of f and g are not parallel. Thus, such points are not critical points.
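As a purely illustrative numerical check of this example, one can scan points on the constraint circle and confirm that the maximizer is (1, 0), where ∇f(x) = λ∇g(x) with λ = 1/2:

import numpy as np

# f(x) = x1 on the unit circle g(x) = x1^2 + x2^2 = 1.
thetas = np.linspace(0.0, 2.0 * np.pi, 10000)
points = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # points satisfying g(x) = 1
best = points[np.argmax(points[:, 0])]                         # maximize f(x) = x1 over the circle
print(best)                                                    # approximately [1, 0]

grad_f = np.array([1.0, 0.0])
grad_g = 2.0 * best                                            # approximately [2, 0]
print(grad_f - 0.5 * grad_g)                                   # approximately [0, 0]: grad f = (1/2) grad g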



B Boosting
(From CS229 Spring 2021, John Duchi, Stanford University.)

B.1 Boosting

We have seen so far how to solve classification (and other) problems when we have
a data representation already chosen. We now talk about a procedure, known as
boosting, which was originally discovered by Rob Schapire, and further developed
by Schapire and Yoav Freund, that automatically chooses feature representations.
We take an optimization-based perspective, which is somewhat different from
the original interpretation and justification of Freund and Schapire, but which
lends itself to our approach of (1) choose a representation, (2) choose a loss, and
(3) minimize the loss.
Before formulating the problem, we give a little intuition for what we are going
to do. Roughly, the idea of boosting is to take a weak learning algorithm—any
learning algorithm that gives a classifier that is slightly better than random—
and transforms it into a strong classifier, which does much much better than
random. To build a bit of intuition for what this means, consider a hypothetical
digit recognition experiment, where we wish to distinguish 0s from 1s, and we
receive images we must classify. Then a natural weak learner might be to take
the middle pixel of the image, and if it is colored, call the image a 1, and if it is
blank, call the image a 0. This classifier may be far from perfect, but it is likely
better than random. Boosting procedures proceed by taking a collection of such
weak classifiers, and then reweighting their contributions to form a classifier with
much better accuracy than any individual classifier.
With that in mind, let us formulate the problem. Our interpretation of boosting
is as a coordinate descent method in an infinite dimensional space, which—while
it sounds complex—is not so bad as it seems. First, we assume we have raw input
examples x ∈ Rn with labels y ∈ {−1, 1}, as is usual in binary classification. We
also assume we have an infinite collection of feature functions φj : Rn 7→ {−1, 1}
and an infinite vector θ = [θ1 θ2 · · · ]> , but which we assume always has only a
finite number of non-zero entries. For our classifier we use

h_θ(x) = sign( ∑_{j=1}^∞ θ_j φ_j(x) ).

We will abuse notation, and define θ⊤φ(x) = ∑_{j=1}^∞ θ_j φ_j(x).


In boosting, one usually calls the features φj weak hypotheses. Given a training set
{( x (1) , y(1) ), . . . , ( x (m) , y(m) )}, we call a vector p = ( p(1) , . . . , p(m) ) a distribution
on the examples if p(i) ≥ 0 for all i and
∑_{i=1}^m p^(i) = 1.

Then we say that there is a weak learner with margin γ > 0 if for any distribution p
on the m training examples there exists one weak hypothesis φj such that
∑_{i=1}^m p^(i) 1{ y^(i) ≠ φ_j(x^(i)) } ≤ 1/2 − γ.    (B.1)

That is, we assume that there is some classifier that does slightly better than
random guessing on the dataset. The existence of a weak learning algorithm is an
assumption, but the surprising thing is that we can transform any weak learning
algorithm into one with perfect accuracy.
In more generality, we assume we have access to a weak learner, which is an
algorithm that takes as input a distribution (weights) p on the training examples
and returns a classifier doing slightly better than random. We will show how,
given access to a weak learning algorithm, boosting can return a classifier with
perfect accuracy on the training data. (Admittedly, we would like the classifier to
generalize well to unseen data, but for now, we ignore this issue.)

Algorithm B.1. Weak learning algorithm.

(i) Input: A distribution p^(1), . . . , p^(m) and training set {(x^(i), y^(i))}_{i=1}^m with ∑_{i=1}^m p^(i) = 1 and p^(i) ≥ 0.

(ii) Return: A weak classifier φ_j : R^n → {−1, 1} such that

    ∑_{i=1}^m p^(i) 1{ y^(i) ≠ φ_j(x^(i)) } ≤ 1/2 − γ.

B.1.1 The boosting algorithm


Roughly, boosting begins by assigning each training example equal weight in the
dataset. It then receives a weak-hypothesis that does well according to the current




weights on training examples, which it incorporates into its current classification


model. It then reweights the training examples so that examples on which it makes
mistakes receive higher weight—so that the weak learning algorithm focuses
on a classifier doing well on those examples—while examples with no mistakes
receive lower weight. This repeated reweighting of the training data coupled with
a weak learner doing well on examples for which the classifier currently does
poorly yields classifiers with good performance.
The boosting algorithm specifically performs coordinate descent on the exponen-
tial loss for classification problems, where the objective is
J(θ) = (1/m) ∑_{i=1}^m exp( −y^(i) θ⊤φ(x^(i)) ).

We first show how to compute the exact form of the coordinate descent update
for the risk J (θ ). Coordinate descent iterates as follows:

(i) Choose a coordinate j ∈ N.

(ii) Update θ_j to θ_j = arg min_{θ_j} J(θ)

while leaving θ_k identical for all k ≠ j.

We iterate the above procedure until convergence.


In the case of boosting, the coordinate updates are not too challenging to derive
because of the analytic convenience of the exp function. We now show how to
derive the update. Suppose we wish to update coordinate k. Define
w^(i) = exp( −y^(i) ∑_{j≠k} θ_j φ_j(x^(i)) )

to be a weight, and note that optimizing coordinate k corresponds to minimizing


∑_{i=1}^m w^(i) exp( −y^(i) φ_k(x^(i)) α )

in α = θk . Now, define

W⁺ := ∑_{i: y^(i)φ_k(x^(i))=1} w^(i)    and    W⁻ := ∑_{i: y^(i)φ_k(x^(i))=−1} w^(i)




to be the sums of the weights of examples that φk classifies correctly and incorrectly,
respectively. Then finding θk is the same as choosing

α = arg min_α { W⁺ e^{−α} + W⁻ e^{α} } = (1/2) log(W⁺/W⁻).

To see the final equality, take derivatives and set the resulting equation to zero, so we have −W⁺ e^{−α} + W⁻ e^{α} = 0. That is, W⁻ e^{2α} = W⁺, or α = (1/2) log(W⁺/W⁻).
What remains is to choose the particular coordinate to perform coordinate
descent on. We assume we have access to a weak-learning algorithm as in algo-
rithm B.1, which at iteration t takes as input a distribution p on the training set and
returns a weak hypothesis φt satisfying the margin condition in equation (B.1).
We present the full boosting algorithm in algorithm B.2. It proceeds in iterations
t = 1, 2, 3, . . .. We represent the set of hypotheses returned by the weak learning
algorithm at time t by {φ1 , . . . , φt }.
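A minimal sketch of algorithm B.2 in Python (an illustrative choice of language) is given below; it assumes a weak_learner(p, X, y) routine that returns a {−1, +1}-valued hypothesis satisfying the margin condition (B.1).

import numpy as np

def boost(X, y, weak_learner, n_rounds):
    """X: (m, n) inputs; y: (m,) labels in {-1, +1}.
    weak_learner(p, X, y) returns a function x -> {-1, +1} doing better than chance under p."""
    m = X.shape[0]
    hypotheses, thetas = [], []
    margins = np.zeros(m)                      # running y^(i) * sum_tau theta_tau phi_tau(x^(i))
    for _ in range(n_rounds):
        w = np.exp(-margins)                   # step (i): weights w^(i)
        p = w / w.sum()                        # distribution on the training examples
        phi = weak_learner(p, X, y)            # step (ii): weak hypothesis for this distribution
        preds = np.array([phi(x) for x in X])
        W_plus = w[preds == y].sum()           # step (iii): weight on correctly classified examples
        W_minus = w[preds != y].sum()          #             weight on mistakes
        thetas.append(0.5 * np.log(W_plus / W_minus))
        hypotheses.append(phi)
        margins += thetas[-1] * preds * y
    # Final classifier h_theta(x) = sign(sum_t theta_t phi_t(x)).
    return lambda x: int(np.sign(sum(t * h(x) for t, h in zip(thetas, hypotheses))))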

B.2 The convergence of Boosting

We now argue that the boosting procedure achieves 0 training error, and we
also provide a rate of convergence to zero. To do so, we present a lemma that
guarantees progress is made.

Lemma B.1. Let

    J(θ^(t)) = (1/m) ∑_{i=1}^m exp( −y^(i) ∑_{τ=1}^t θ_τ φ_τ(x^(i)) ).

Then

    J(θ^(t)) ≤ √(1 − 4γ²) J(θ^(t−1)).

As the proof of the lemma is somewhat involved and not the central focus of
these notes—though it is important to know one’s algorithm will converge!— we
defer the proof to appendix B.4. Let us describe how it guarantees convergence of
the boosting procedure to a classifier with zero training error.
We initialize the procedure at θ (0) = 0, so that the initial empirical risk J (θ (0) ) =
1. Now, we note that for any θ, the misclassification error satisfies

    1{ sign(θ⊤φ(x)) ≠ y } = 1{ yθ⊤φ(x) ≤ 0 } ≤ exp( −yθ⊤φ(x) )




For each iteration t = 1, 2, . . .:    Algorithm B.2. Boosting algorithm.

(i) Define weights

    w^(i) = exp( −y^(i) ∑_{τ=1}^{t−1} θ_τ φ_τ(x^(i)) )

    and distribution p^(i) = w^(i) / ∑_{j=1}^m w^(j).

(ii) Construct a weak hypothesis φ_t : R^n → {−1, 1} from the distribution p = (p^(1), . . . , p^(m)) on the training set.

(iii) Compute W_t⁺ = ∑_{i: y^(i)φ_t(x^(i))=1} w^(i) and W_t⁻ = ∑_{i: y^(i)φ_t(x^(i))=−1} w^(i) and set

    θ_t = (1/2) log( W_t⁺ / W_t⁻ ).

because e^z ≥ 1 for all z ≥ 0. Thus, we have that the misclassification error rate has upper bound

    (1/m) ∑_{i=1}^m 1{ sign(θ⊤φ(x^(i))) ≠ y^(i) } ≤ J(θ),

and so if J(θ) < 1/m then the vector θ makes no mistakes on the training data. After
t iterations of boosting, we find that the empirical risk satisfies

J (θ (t) ) ≤ (1 − 4γ2 )t/2 J (θ (0) ) = (1 − 4γ2 )t/2 .


To find how many iterations are required to guarantee J(θ^(t)) < 1/m, we take logarithms to find that J(θ^(t)) < 1/m if

    (t/2) log(1 − 4γ²) < log(1/m),   or   t > 2 log m / (− log(1 − 4γ²)).

Using a first order Taylor expansion, that is, that log(1 − 4γ²) ≤ −4γ², we see that if the number of rounds of boosting—the number of weak classifiers we use—satisfies

    t > log m / (2γ²) ≥ 2 log m / (− log(1 − 4γ²)),

then J(θ^(t)) < 1/m.




B.3 Implementing weak-learners

One of the major advantages of boosting algorithms is that they automatically


generate features from raw data for us. Moreover, because the weak hypotheses
always return values in {−1, 1}, there is no need to normalize features to have
similar scales when using learning algorithms, which in practice can make a large
difference. Additionally, and while this is not theoretically well-understood, many
types of weak-learning procedures introduce non-linearities intelligently into our
classifiers, which can yield much more expressive models than the simpler linear
models of the form θ > x that we have seen so far.

B.3.1 Decision stumps


There are a number of strategies for weak learners, and here we focus on one,
known as decision stumps. For concreteness in this description, let us suppose
that the input variables x ∈ Rn are real-valued. A decision stump is a function f ,
which is parameterized by a threshold s and index j ∈ {1, 2, . . . , n}, and returns

    φ_{j,s}(x) = sign(x_j − s) = { +1 if x_j ≥ s,  −1 otherwise }.    (B.2)

These classifiers are simple enough that we can fit them efficiently even to a
weighted dataset, as we now describe.
Indeed, a decision stump weak learner proceeds as follows. We begin with a
distribution—set of weights p(1) , . . . , p(m) summing to 1—on the training set, and
we wish to choose a decision stump of the form of equation (B.2) to minimize
the error on the training set. That is, we wish to find a threshold s ∈ R and index
j such that
Êrr(φ_{j,s}, p) = ∑_{i=1}^m p^(i) 1{ φ_{j,s}(x^(i)) ≠ y^(i) } = ∑_{i=1}^m p^(i) 1{ y^(i)(x_j^(i) − s) ≤ 0 }    (B.3)

is minimized. Naively, this could be an inefficient calculation, but a more intelli-


gent procedure allows us to solve this problem in roughly O(nm log m) time. For
each feature j = 1, 2, . . . , n, we sort the raw input features so that
x_j^(i_1) ≥ x_j^(i_2) ≥ · · · ≥ x_j^(i_m).




As the only values s for which the error of the decision stump can change are the values x_j^(i), a bit of clever book-keeping allows us to compute

    ∑_{i=1}^m p^(i) 1{ y^(i)(x_j^(i) − s) ≤ 0 } = ∑_{k=1}^m p^(i_k) 1{ y^(i_k)(x_j^(i_k) − s) ≤ 0 }

efficiently by incrementally modifying the sum in sorted order, which takes time O(m) after we have already sorted the values x_j^(i). (We do not describe the algorithm in detail here, leaving that to the interested reader.) Thus, performing this calculation for each of the n input features takes total time O(nm log m), and we may choose the index j and threshold s that give the best decision stump for the error in equation (B.3).
One very important issue to note is that by flipping the sign of the thresholded decision stump φ_{j,s}, we achieve error 1 − Êrr(φ_{j,s}, p); that is, the error of the flipped stump is

    Êrr(−φ_{j,s}, p) = 1 − Êrr(φ_{j,s}, p).

(You should convince yourself that this is true.) Thus, it is important to also track the smallest value of 1 − Êrr(φ_{j,s}, p) over all thresholds, because this may be smaller than Êrr(φ_{j,s}, p), which gives a better weak learner. Using this procedure for our weak learner (algorithm B.1) gives the basic, but extremely useful, boosting classifier.
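Below is a simple illustrative sketch of such a decision stump weak learner in Python; for brevity it scans all candidate thresholds directly (roughly O(nm²)) rather than using the sorted O(nm log m) book-keeping described above, and it also tracks the sign-flipped stump.

import numpy as np

def fit_decision_stump(p, X, y):
    """Weighted decision stump: returns a function x -> {-1, +1} with small weighted error (B.3).

    p: (m,) example weights summing to 1; X: (m, n) inputs; y: (m,) labels in {-1, +1}.
    """
    m, n = X.shape
    best = (np.inf, 0, 0.0, 1)                        # (error, feature j, threshold s, sign)
    for j in range(n):
        for s in X[:, j]:                             # only thresholds at observed values matter
            preds = np.where(X[:, j] >= s, 1, -1)     # phi_{j,s}(x) = sign(x_j - s)
            err = p[preds != y].sum()                 # weighted error, equation (B.3)
            for sign, e in ((1, err), (-1, 1.0 - err)):   # the flipped stump has error 1 - err
                if e < best[0]:
                    best = (e, j, s, sign)
    _, j, s, sign = best
    return lambda x: sign * (1 if x[j] >= s else -1)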

B.3.2 Other strategies


There are a huge number of variations on the basic boosted decision stumps idea.
First, we do not require that the input features x j be real-valued. Some of them
may be categorical, meaning that x j ∈ {1, 2, . . . , k} for some k, in which case
natural decision stumps are of the form

    φ_j(x) = { +1 if x_j = l,  −1 otherwise },

as well as variants setting φj ( x ) = 1 if x j ∈ C for some set C ⊂ {1, . . . , k} of


categories.
Another natural variation is the boosted decision tree, in which instead of a single
level decision for the weak learners, we consider conjunctions of features or trees
of decisions. Google can help you find examples and information on these types
of problems.




We now give an example showing the behavior of boosting on a simple


dataset. In particular, we consider a problem with data points x ∈ R², where the optimal classifier is

    y = { +1 if x_1 < 0.6 and x_2 < 0.6,  −1 otherwise }.    (B.4)

This is a simple non-linear decision rule, but it is impossible for standard


linear classifiers, such as logistic regression, to learn. In ??, we show the best
decision line that logistic regression learns, where positive examples are
circles and negative examples are x’s. It is clear that logistic regression is not
fitting the data particularly well.
With boosted decision stumps, however, we can achieve a much better
fit for the simple nonlinear classification problem B.4. ?? shows the boosted
classifiers we have learned after different numbers of iterations of boosting,
using a training set of size m = 150. From the figure, we see that the first
decision stump is to threshold the feature x1 at the value s ≈ 0.23, that is,
φ( x ) = sign( x1 − s) for s ≈ 0.23.




B.4 Proof of lemma B.1

We now return to prove the progress lemma. We prove this result by directly
showing the relationship of the weights at time t to those at time t − 1. In particular,
we note by inspection that
J(θ^(t)) = min_α { W_t⁺ e^{−α} + W_t⁻ e^{α} } = 2 √( W_t⁺ W_t⁻ ),

while

J(θ^(t−1)) = (1/m) ∑_{i=1}^m exp( −y^(i) ∑_{τ=1}^{t−1} θ_τ φ_τ(x^(i)) ) = W_t⁺ + W_t⁻.

We know by the weak-learning assumption that

∑_{i=1}^m p^(i) 1{ y^(i) ≠ φ_t(x^(i)) } ≤ 1/2 − γ,   or   ( ∑_{i: y^(i)φ_t(x^(i))=−1} w^(i) ) / ( W_t⁺ + W_t⁻ ) ≤ 1/2 − γ.

Rewriting this expression by noting that the sum on the right is nothing but W_t⁻, we have

W_t⁻ ≤ (1/2 − γ)(W_t⁺ + W_t⁻),   or   W_t⁺ ≥ ((1 + 2γ)/(1 − 2γ)) W_t⁻.

By substituting α = (1/2) log((1 + 2γ)/(1 − 2γ)) in the minimum defining J(θ^(t)), we obtain

J(θ^(t)) ≤ W_t⁺ √((1 − 2γ)/(1 + 2γ)) + W_t⁻ √((1 + 2γ)/(1 − 2γ))
         = W_t⁺ √((1 − 2γ)/(1 + 2γ)) + W_t⁻ √((1 + 2γ)/(1 − 2γ)) (1 − 2γ + 2γ)
         ≤ W_t⁺ √((1 − 2γ)/(1 + 2γ)) + W_t⁻ √((1 + 2γ)/(1 − 2γ)) (1 − 2γ) + 2γ √((1 − 2γ)/(1 + 2γ)) W_t⁺
         = W_t⁺ [ √((1 − 2γ)/(1 + 2γ)) + 2γ √((1 − 2γ)/(1 + 2γ)) ] + W_t⁻ √(1 − 4γ²),

where we used that W_t⁻ ≤ ((1 − 2γ)/(1 + 2γ)) W_t⁺. Performing a few algebraic manipulations, we see that the final expression is equal to √(1 − 4γ²) (W_t⁺ + W_t⁻). That is, J(θ^(t)) ≤ √(1 − 4γ²) J(θ^(t−1)).





