Notes MSM
Course webpage:
http://www.statslab.cam.ac.uk/~rds37/modern_stat_methods.html
In this course we will study a selection of important modern statistical methods. This
selection is heavily biased towards my own interests, but I hope it will nevertheless give you
a flavour of some of the most important recent methodological developments in statistics.
Over the last 25 years, the sorts of datasets that statisticians have been challenged to
study have changed greatly. Where in the past, we were used to datasets with many observations with a few carefully chosen variables, we are now seeing datasets where the number of variables can run into the thousands and greatly exceed the number of observations. For example, with microarray data, we typically have gene expression values measured for several thousands of genes, but only for a few hundred tissue samples. The classical statistical
methods are often simply not applicable in these “high-dimensional” situations.
The course is divided into 4 chapters (of unequal size). Our first chapter will start by
introducing ridge regression, a simple generalisation of ordinary least squares. Our study
of this will lead us to some beautiful connections with functional analysis and ultimately
one of the most successful and flexible classes of learning algorithms: kernel machines.
The second chapter concerns the Lasso and its extensions. The Lasso has been at the centre of many of the developments that have occurred in high-dimensional statistics, and
will allow us to perform regression in the seemingly hopeless situation when the number
of parameters we are trying to estimate is larger than the number of observations.
In the third chapter we will study graphical modelling and provide an introduction to
the exciting field of causal inference. Where the previous chapters consider methods for
relating a particular response to a large collection of (explanatory) variables, graphical
modelling will give us a way of understanding relationships between the variables themselves. Ultimately we would like to infer causal relationships between variables based on
(observational) data. This may seem like a fundamentally impossible task, yet we will show
how by developing the graphical modelling framework further, we can begin to answer such
causal questions.
Statistics is not only about developing methods that can predict well in the presence
of noise, but also about assessing the uncertainty in our predictions and estimates. In
the final chapter we will tackle the problem of how to handle performing thousands of
hypothesis tests at the same time and more generally the task of quantifying uncertainty
in high-dimensional settings.
Before we begin the course proper, we will briefly review two key classical statistical
methods: ordinary least squares and maximum likelihood estimation. This will help to set
the scene and provide a warm-up for the modern methods to come later.
Classical statistics
Ordinary least squares
Imagine data are available in the form of observations (Yi , xi ) ∈ R × Rp , i = 1, . . . , n, and
the aim is to infer a simple regression function relating the average value of a response, Yi ,
and a collection of predictors or variables, xi . This is an example of regression analysis,
one of the most important tasks in statistics.
A linear model for the data assumes that it is generated according to
Y = Xβ^0 + ε,    (0.0.1)

where Y ∈ R^n is the vector of responses; X ∈ R^{n×p} is the predictor matrix (or design matrix) with ith row x_i^T; ε ∈ R^n represents random error; and β^0 ∈ R^p is the unknown
vector of coefficients.
Provided p ≤ n, a sensible way to estimate β^0 is by ordinary least squares (OLS). This yields the estimator

β̂^{OLS} := \arg\min_{β ∈ R^p} ‖Y − Xβ‖_2^2 = (X^T X)^{-1} X^T Y.    (0.0.2)
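A minimal numpy sketch (an illustration, not part of the notes; the simulated data and dimensions are arbitrary) checking the closed form (0.0.2) against a generic least-squares solver:

```python
# Compare the closed-form OLS estimator with numpy's least-squares solver.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta0 = np.arange(1.0, p + 1.0)               # arbitrary "true" coefficients
Y = X @ beta0 + rng.normal(size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)  # (X^T X)^{-1} X^T Y
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_ols, beta_lstsq))      # True
```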
Maximum likelihood estimation
A very useful quantity in the context of maximum likelihood estimation is the Fisher information matrix, with jkth entry (1 ≤ j, k ≤ d)

i_{jk}(θ) := −E_θ\Big(\frac{∂^2}{∂θ_j ∂θ_k} ℓ(θ)\Big).

One can show that i(θ) is positive semi-definite.
A remarkable fact about maximum likelihood estimators (MLEs) is that (under quite
general conditions) they are asymptotically normally distributed, asymptotically unbiased
and asymptotically achieve the Cramér–Rao lower bound.
Assume that the Fisher information matrix when there are n observations, i^{(n)}(θ) (where we have made the dependence on n explicit), satisfies i^{(n)}(θ)/n → I(θ) for some positive definite matrix I(θ). Then, denoting the maximum likelihood estimator of θ when there are n observations by θ̂^{(n)}, under regularity conditions we have, as the number of observations n → ∞,

\sqrt{n}(θ̂^{(n)} − θ) \overset{d}{→} N_d(0, I^{-1}(θ)).
Returning to our linear model, if we assume in addition that εi ∼ N (0, σ 2 ), then the
log-likelihood for (β, σ 2 ) is
ℓ(β, σ^2) = −\frac{n}{2}\log(σ^2) − \frac{1}{2σ^2}\sum_{i=1}^n (y_i − x_i^T β)^2.
We see that the maximum likelihood estimate of β and OLS coincide. It is easy to check that

i(β, σ^2) = \begin{pmatrix} σ^{-2} X^T X & 0 \\ 0 & nσ^{-4}/2 \end{pmatrix}.
The general theory for MLEs would suggest that approximately \sqrt{n}(β̂ − β) ∼ N_p(0, nσ^2(X^T X)^{-1}); in fact it is straightforward to show that this distributional result is exact.
Contents

1 Kernel machines
  1.1 Ridge regression
    1.1.1 The singular value decomposition and principal components analysis
  1.2 v-fold cross-validation
  1.3 The kernel trick
  1.4 Kernels
    1.4.1 Examples of kernels
    1.4.2 Reproducing kernel Hilbert spaces
    1.4.3 The representer theorem
  1.5 Kernel ridge regression
  1.6 Other kernel machines
    1.6.1 The support vector machine
    1.6.2 Logistic regression
  1.7 Large-scale kernel machines
    3.3.1 Normal conditionals
    3.3.2 Nodewise regression
    3.3.3 The precision matrix and conditional independence
    3.3.4 The Graphical Lasso
  3.4 Structural equation models
  3.5 Interventions
  3.6 The Markov properties on DAGs
  3.7 Causal structure learning
    3.7.1 Three obstacles
    3.7.2 The PC algorithm
Chapter 1
Kernel machines

1.1 Ridge regression

Ridge regression estimates (μ̂_λ^R, β̂_λ^R) by solving

(μ̂_λ^R, β̂_λ^R) = \arg\min_{(μ, β) ∈ R × R^p} \{‖Y − μ1 − Xβ‖_2^2 + λ‖β‖_2^2\}.

Here 1 is an n-vector of 1's. We see that the usual OLS objective is penalised by an additional term proportional to ‖β‖_2^2. The parameter λ ≥ 0, which controls the severity of the penalty and therefore the degree of shrinkage towards 0, is known as a regularisation
parameter or tuning parameter. We have explicitly included an intercept term which is not
penalised. The reason for this is that were the variables to have their origins shifted so
e.g. a variable representing temperature is given in units of Kelvin rather than Celsius, the
fitted values would not change. However, X β̂ is not invariant under scale transformations of the variables, so it is standard practice to centre each column of X (hence making them orthogonal to the intercept term) and then scale them to have ℓ_2-norm \sqrt{n}.
It is straightforward to show that after this standardisation of X, μ̂_λ^R = Ȳ := \sum_{i=1}^n Y_i/n, so we may assume that \sum_{i=1}^n Y_i = 0 by replacing Y_i with Y_i − Ȳ; we can then remove μ from our objective function. In this case

β̂_λ^R = (X^T X + λI)^{-1} X^T Y.
In this form, we can see how the addition of the λI term helps to stabilise the estimator.
Note that when X does not have full column rank (such as in high-dimensional situations),
we can still compute this estimator. On the other hand, when X does have full column
rank, we have the following theorem.
Theorem 1. For λ sufficiently small (depending on β 0 and σ 2 ),
E(β̂ OLS − β 0 )(β̂ OLS − β 0 )T − E(β̂λR − β 0 )(β̂λR − β 0 )T
is positive definite.
Proof. First we compute the bias of β̂λR . We drop the subscript λ and superscript R for
convenience.
E(β̂) − β 0 = (X T X + λI)−1 X T Xβ 0 − β 0
= (X T X + λI)−1 (X T X + λI − λI)β 0 − β 0
= −λ(X T X + λI)−1 β 0 .
Now we look at the variance of β̂.
Var(β̂) = E{(X T X + λI)−1 X T ε}{(X T X + λI)−1 X T ε}T
= σ 2 (X T X + λI)−1 X T X(X T X + λI)−1 .
Thus E(β̂^{OLS} − β^0)(β̂^{OLS} − β^0)^T − E(β̂ − β^0)(β̂ − β^0)^T is equal to

σ^2(X^T X)^{-1} − σ^2(X^T X + λI)^{-1} X^T X (X^T X + λI)^{-1} − λ^2(X^T X + λI)^{-1} β^0 β^{0T} (X^T X + λI)^{-1}.

After some simplification, we see that this is equal to

λ(X^T X + λI)^{-1}\big[σ^2\{2I + λ(X^T X)^{-1}\} − λ β^0 β^{0T}\big](X^T X + λI)^{-1}.

Thus E(β̂^{OLS} − β^0)(β̂^{OLS} − β^0)^T − E(β̂ − β^0)(β̂ − β^0)^T is positive definite for λ > 0 if and only if

σ^2\{2I + λ(X^T X)^{-1}\} − λ β^0 β^{0T}

is positive definite, which is true for λ > 0 sufficiently small (we can take 0 < λ < 2σ^2/‖β^0‖_2^2).
The theorem says that β̂λR outperforms β̂ OLS provided λ is chosen appropriately. To
be able to use ridge regression effectively, we need a way of selecting a good λ—we will
come to this very shortly. What the theorem doesn’t really tell us is in what situations
we expect ridge regression to perform well. To understand that, we will turn to one of the
key matrix decompositions used in statistics, the singular value decomposition (SVD).
X = U DV T .
Here the U ∈ Rn×n and V ∈ Rp×p are orthogonal matrices and D ∈ Rn×p has D11 ≥ D22 ≥
· · · ≥ Dmm ≥ 0 where m := min(n, p) and all other entries of D are zero. To compute
such a decomposition requires O(np min(n, p)) operations. The rth columns of U and V
are known as the rth left and right singular vectors of X respectively, and Drr is the rth
singular value.
When n > p, we can replace U by its first p columns and D by its first p rows to produce
another version of the SVD (sometimes known as the thin SVD). Then X = U DV T where
U ∈ Rn×p has orthonormal columns (but is no longer square) and D is square and diagonal.
There is an equivalent version for when p > n.
Let us take X ∈ R^{n×p} as our matrix of predictors and suppose n ≥ p. Using the (thin) SVD we may write the fitted values from ridge regression as follows:

X β̂_λ^R = X(X^T X + λI)^{-1} X^T Y = \sum_{j=1}^p U_j \frac{D_{jj}^2}{D_{jj}^2 + λ} U_j^T Y.
Here we have used the notation (that we shall use throughout the course) that Uj is the
jth column of U . For comparison, the fitted values from OLS (when X has full column
rank) are
X β̂ OLS = X(X T X)−1 X T Y = U U T Y.
Both OLS and ridge regression compute the coordinates of Y with respect to the columns of U. Ridge regression then shrinks these coordinates by the factors D_{jj}^2/(D_{jj}^2 + λ); if D_{jj}^2 is small, the amount of shrinkage will be larger.
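The following numpy sketch (illustrative only; the centred data and λ are arbitrary) computes the ridge estimator directly and recovers the same fitted values through the thin SVD, making the shrinkage factors explicit:

```python
# Ridge regression directly and via the thin SVD; the factors d_j^2/(d_j^2 + lam)
# shrink the coordinates U_j^T Y of the response.
import numpy as np

def ridge(X, Y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

def ridge_fitted_via_svd(X, Y, lam):
    U, d, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD: X = U diag(d) V^T
    shrink = d**2 / (d**2 + lam)                       # shrinkage per component
    return U @ (shrink * (U.T @ Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5)); X -= X.mean(axis=0)      # centred columns
Y = rng.normal(size=50); Y -= Y.mean()
lam = 1.0                                              # arbitrary example value
print(np.allclose(X @ ridge(X, Y, lam), ridge_fitted_via_svd(X, Y, lam)))  # True
```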
To interpret this further, note that the SVD is intimately connected with principal components analysis (PCA). Consider v ∈ R^p with ‖v‖_2 = 1. Since the columns of X have had their means subtracted, the sample variance of Xv ∈ R^n is

\frac{1}{n} v^T X^T X v = \frac{1}{n} v^T V D^2 V^T v.

Writing a = V^T v, so ‖a‖_2 = 1, we have

\frac{1}{n} v^T V D^2 V^T v = \frac{1}{n} a^T D^2 a = \frac{1}{n}\sum_j a_j^2 D_{jj}^2 ≤ \frac{1}{n} D_{11}^2 \sum_j a_j^2 = \frac{1}{n} D_{11}^2.

As ‖XV_1‖_2^2/n = D_{11}^2/n, V_1 determines the linear combination of the columns of X which has the largest sample variance, when the coefficients of the linear combination are constrained to have ℓ_2-norm 1. XV_1 = D_{11}U_1 is known as the first principal component of X. Subsequent principal components D_{22}U_2, …, D_{pp}U_p have maximum variance D_{jj}^2/n, subject to being orthogonal to all earlier ones—see example sheet 1 for details.
Returning to ridge regression, we see that it shrinks Y most in the smaller principal
components of X. Thus it will work well when most of the signal is in the large principal
components of X. We now turn to the problem of choosing λ.
where compared with (1.2.1), we have taken a further expectation over the training set.
We still have no way of computing (1.2.2) directly, but we can attempt to estimate it.
The idea of v-fold cross-validation is to split the data into v groups or folds of roughly equal
size: (X (1) , Y (1) ), . . . , (X (v) , Y (v) ). Let (X (−k) , Y (−k) ) be all the data except that in the kth
fold. For each λ on a grid of values, we compute β̂_λ^R(X^{(-k)}, Y^{(-k)}): the ridge regression estimate based on all the data except the kth fold. Writing κ(i) for the fold to which (x_i, Y_i) belongs, we choose the value of λ that minimises

CV(λ) = \frac{1}{n}\sum_{i=1}^n \{Y_i − x_i^T β̂_λ^R(X^{(-κ(i))}, Y^{(-κ(i))})\}^2.    (1.2.3)
Writing λ_{CV} for the minimiser, our final estimate of β^0 can then be β̂^R_{λ_{CV}}(X, Y).
Note that for each i,

E\{Y_i − x_i^T β̂_λ^R(X^{(-κ(i))}, Y^{(-κ(i))})\}^2 = E\big[E\{Y_i − x_i^T β̂_λ^R(X^{(-κ(i))}, Y^{(-κ(i))})\}^2 \,\big|\, X^{(-κ(i))}, Y^{(-κ(i))}\big].    (1.2.4)
This is precisely the expected prediction error in (1.2.2) but with the training data X, Y
replaced with a training data set of smaller size. If all the folds have the same size, then
CV(λ) is an average of n identically distributed quantities, each with expected value as
in (1.2.4). However, the quantities being averaged are not independent as they share the
same data.
Thus cross-validation gives a biased estimate of the expected prediction error. The amount of the bias depends on the size of the folds, with the case v = n giving the least bias—this is known as leave-one-out cross-validation. The quality of the estimate,
though, may be worse as the quantities being averaged in (1.2.3) will be highly positively
correlated. Typical choices of v are 5 or 10.
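A short sketch of v-fold cross-validation for the ridge tuning parameter, following (1.2.3); the random fold assignment and the λ grid are arbitrary choices made for illustration:

```python
# v-fold cross-validation for ridge regression over a grid of lambda values.
import numpy as np

def cv_ridge(X, Y, lams, v=5, seed=0):
    n, p = X.shape
    folds = np.random.default_rng(seed).integers(0, v, size=n)  # kappa(i) for each i
    cv_err = np.zeros(len(lams))
    for k in range(v):
        train, test = folds != k, folds == k
        XtX = X[train].T @ X[train]
        XtY = X[train].T @ Y[train]
        for j, lam in enumerate(lams):
            beta = np.linalg.solve(XtX + lam * np.eye(p), XtY)
            cv_err[j] += np.sum((Y[test] - X[test] @ beta) ** 2)
    cv_err /= n
    return lams[np.argmin(cv_err)], cv_err
```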
Cross-validation aims to allow us to choose the single best λ (or more generally regres-
sion procedure); we could instead aim to find the best weighted combination of regression
procedures. Returning to our ridge regression example, suppose λ is restricted to a grid of
values λ_1 > λ_2 > ⋯ > λ_L. We can then minimise

\frac{1}{n}\sum_{i=1}^n \Big\{Y_i − \sum_{l=1}^L w_l x_i^T β̂_{λ_l}^R(X^{(-κ(i))}, Y^{(-κ(i))})\Big\}^2
• Note that while X^T X is p × p, XX^T is n × n. Computing fitted values using (1.3.1) would require roughly O(np^2 + p^3) operations. If p ≫ n this could be extremely costly. However, our alternative formulation would only require roughly O(n^2 p + n^3) operations, which could be substantially smaller.
• We see that the fitted values of ridge regression depend only on the inner products K = XX^T between observations (note K_{ij} = x_i^T x_j).
Now suppose that we believe the signal depends quadratically on the predictors:

Y_i = x_i^T β + \sum_{k,l} x_{ik} x_{il} θ_{kl} + ε_i.
We can still use ridge regression provided we work with an enlarged set of predictors
xi1 , . . . , xip , xi1 xi1 , . . . , xi1 xip , xi2 xi1 , . . . , xi2 xip , . . . , xip xip .
This will give us O(p2 ) predictors. Our new approach to computing fitted values would
therefore have complexity O(n2 p2 + n3 ), which could be rather costly if p is large.
However, rather than first creating all the additional predictors and then computing
the new K matrix, we can attempt to directly compute K. To this end consider
(1 + x_i^T x_j)^2 = \Big(1 + \sum_k x_{ik} x_{jk}\Big)^2 = 1 + 2\sum_k x_{ik} x_{jk} + \sum_{k,l} x_{ik} x_{il} x_{jk} x_{jl}.
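A quick numerical sketch (illustrative data only) confirming that the entrywise quantity (1 + x_i^T x_j)^2 equals the Gram matrix of an explicit quadratic feature expansion, so the enlarged predictors never need to be formed:

```python
# The quadratic-kernel Gram matrix equals the Gram matrix of explicit features
# (1, sqrt(2) x_k, x_k x_l), but costs only O(n^2 p) to compute directly.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))

K_direct = (1.0 + X @ X.T) ** 2               # entrywise, no feature expansion

def quad_features(x):
    return np.concatenate(([1.0], np.sqrt(2.0) * x, np.outer(x, x).ravel()))

Phi = np.array([quad_features(x) for x in X]) # O(p^2) features per observation
print(np.allclose(K_direct, Phi @ Phi.T))     # True
```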
1.4 Kernels
We have seen how a model with quadratic effects can be fitted very efficiently by replacing
the inner product matrix (known as the Gram matrix ) XX T in (1.3.2) with the matrix
in (1.3.4). It is then natural to ask what other non-linear models can be fitted efficiently
using this sort of approach.
We won’t answer this question directly, but instead we will try to understand the sorts
of similarity measures k that can be represented as inner products between transformations
of the original data.
That is, we will study the similarity measures k : X × X → R from the input space X
to R for which there exists a feature map φ : X → H where H is some (real) inner product
space with
k(x, x0 ) = hφ(x), φ(x0 )i. (1.4.1)
Recall that an inner product space is a real vector space H endowed with a map h·, ·i :
H × H → R that obeys the following properties.
A kernel is a little like an inner product, but need not be bilinear in general. However,
a form of the Cauchy–Schwarz inequality does hold for kernels.
Proposition 2. k(x, x')^2 ≤ k(x, x)\,k(x', x').

Proof. The matrix

\begin{pmatrix} k(x, x) & k(x, x') \\ k(x', x) & k(x', x') \end{pmatrix}

must be positive semi-definite, so in particular its determinant must be non-negative.
First we show that any inner product of feature maps will give rise to a kernel.
Proof. Let x_1, …, x_n ∈ X and α_1, …, α_n ∈ R, and consider

\sum_{i,j} α_i k(x_i, x_j) α_j = \sum_{i,j} α_i ⟨φ(x_i), φ(x_j)⟩ α_j = \Big\langle \sum_i α_i φ(x_i), \sum_j α_j φ(x_j) \Big\rangle ≥ 0.
Showing that every kernel admits a representation of the form (1.4.1) is slightly more
involved, and we delay this until after we have studied some examples.
and using (i) of Proposition 4 shows that k2 is a kernel. Finally observing that k = k1 k2
and using (ii) shows that the Gaussian kernel is indeed a kernel.
Sobolev kernel. Take X to be [0, 1] and let k(x, x0 ) = min(x, x0 ). Note this is the
covariance function of Brownian motion so it must be positive definite.
Jaccard similarity kernel. Take X to be the set of all subsets of {1, …, p}. For x, x' ∈ X with x ∪ x' ≠ ∅ define

k(x, x') = \frac{|x ∩ x'|}{|x ∪ x'|},

and if x ∪ x' = ∅ then set k(x, x') = 1. Showing that this is a kernel is left to the example sheet.
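A small sketch (with an assumed bandwidth σ for the Gaussian kernel) building Gram matrices for two of the kernels above and checking positive semi-definiteness numerically:

```python
# Gram matrices for the Gaussian kernel and the Sobolev (Brownian-motion) kernel.
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-d2 / (2 * sigma**2))

def sobolev_kernel(x):
    """k(x, x') = min(x, x') for one-dimensional inputs in [0, 1]."""
    return np.minimum(x[:, None], x[None, :])

x = np.linspace(0.01, 1.0, 20)
print(np.min(np.linalg.eigvalsh(sobolev_kernel(x))) >= -1e-10)  # True (PSD)
```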
The first equality shows that the inner product does not depend on the particular expansion
of g whilst the second equality shows that it also does not depend on the expansion of f .
Thus the inner product is well-defined.
First we check that with φ defined as in (1.4.4) we do have relationship (1.4.2). Observe that

⟨k(·, x), f⟩ = \sum_{i=1}^n α_i k(x_i, x) = f(x),    (1.4.8)

so in particular we have ⟨k(·, x), k(·, x')⟩ = k(x, x').
It remains to show that it is indeed an inner product. It is clearly symmetric and (1.4.7)
shows linearity. We now need to show positive definiteness.
First note that

⟨f, f⟩ = \sum_{i,j} α_i k(x_i, x_j) α_j ≥ 0.    (1.4.9)
If we could use the Cauchy–Schwarz inequality on the right-hand side, we would have

f(x)^2 = ⟨k(·, x), f⟩^2 ≤ k(x, x)⟨f, f⟩,    (1.4.10)

which would show that if ⟨f, f⟩ = 0 then necessarily f = 0; the final property we need to show that ⟨·, ·⟩ is an inner product. However, in order to use the traditional Cauchy–Schwarz inequality we need to first know we're dealing with an inner product, which is precisely what we're trying to show!
Although we haven’t shown that h·, ·i is an inner product, we do have enough infor-
mation to show that it is itself a kernel. We may then appeal to Proposition 2 to obtain
(1.4.10). With this in mind, we argue as follows. Given functions f1 , . . . , fm and coefficients
γ1 , . . . , γm ∈ R, we have
X X X
γi hfi , fj iγj = γi fi , γj fj ≥ 0
i,j i j
By adding the limits of Cauchy sequences to H (from Theorem 5) we can make H a Hilbert space. Indeed, note that if (f_m)_{m=1}^∞ is a Cauchy sequence in H then, since by (1.4.10) we have

|f_m(x) − f_n(x)| ≤ \sqrt{k(x, x)}\,‖f_m − f_n‖_H,

we may define a function f^* : X → R by f^*(x) = \lim_{m→∞} f_m(x). We can check that all such f^* can be added to H to create a Hilbert space.
In fact, the completion of H is a special type of Hilbert space known as a reproducing
kernel Hilbert space (RKHS).
The function

k : X × X → R,  (x, x') ↦ ⟨k_x, k_{x'}⟩ = k_{x'}(x)

is called the reproducing kernel of H.
By Proposition 3 the reproducing kernel of any RKHS is a (positive definite) kernel, and
Theorem 5 shows that to any kernel k is associated a (unique) RKHS that has reproducing
kernel k.
Examples
Linear kernel. Here H = {f : f (x) = β T x, β ∈ Rp } and if f (x) = β T x then kf k2H =
kβk22 .
Sobolev kernel. It can be shown that H is roughly the space of continuous functions f : [0, 1] → R with f(0) = 0 that are differentiable almost everywhere and for which \int_0^1 f'(x)^2\,dx < ∞. It contains the class of Lipschitz functions (functions f : [0, 1] → R for which there exists some L with |f(x) − f(y)| ≤ L|x − y| for all x, y ∈ [0, 1]) that are 0 at the origin. The norm is

‖f‖_H = \Big(\int_0^1 f'(x)^2\,dx\Big)^{1/2}.
Though the construction of the RKHS from a kernel is explicit, it can be challenging to
understand precisely the space and the form of the norm.
1.4.3 The representer theorem
To recap, what we have shown so far is that replacing the matrix XX^T in the definition of an algorithm by K derived from a positive definite kernel is essentially equivalent to running
the same algorithm on some mapping of the original data, though with the modification
that instances of xTi xj become hφ(xi ), φ(xj )i.
But what exactly is the optimisation problem we are solving when performing kernel
ridge regression? Clearly it is determined by the kernel or equivalently by the RKHS. Note
we know that an alternative way of writing the usual ridge regression optimisation is

\arg\min_{f ∈ H} \sum_{i=1}^n \{Y_i − f(x_i)\}^2 + λ‖f‖_H^2    (1.4.11)
where H is the RKHS corresponding to the linear kernel. The following theorem shows in
particular that kernel ridge regression (i.e. ridge regression replacing XX T with K) with
kernel k is equivalent to the above with H now being the RKHS corresponding to k.
Theorem 6 (Representer theorem, [Kimeldorf and Wahba, 1970, Schölkopf et al., 2001]).
Let c : Rn × X n × Rn → R be an arbitrary loss function, and let J : [0, ∞) → R be strictly
increasing. Let x1 , . . . , xn ∈ X , Y ∈ Rn . Finally, let f ∈ H where H is an RKHS with
reproducing kernel k, and let K_{ij} = k(x_i, x_j) for i, j = 1, …, n. Then f̂ minimises Q_1 over f ∈ H iff. f̂(·) = \sum_{i=1}^n α̂_i k(·, x_i) and α̂ ∈ R^n minimises Q_2 over α ∈ R^n, where
Now observe that if f̂ takes this form, then ‖f̂‖_H^2 = α^T K α, so Q_1(f̂) = Q_2(α). Then by optimality of f̂, we have that α must minimise Q_2.
Now suppose α̂ minimises Q_2 and f̂(·) = \sum_{i=1}^n α̂_i k(·, x_i). Note that Q_1(f̂) = Q_2(α̂). If f̃ ∈ H has Q_1(f̃) ≤ Q_1(f̂), by the argument above, writing f̃ = u + v with u ∈ V, v ∈ V^⊥, we know that Q_1(u) ≤ Q_1(f̃). But by optimality of α̂ we have Q_1(f̂) ≤ Q_1(u), so Q_1(f̂) = Q_1(f̃).
Consider the result specialised to the ridge regression objective. We see that (1.4.11) is essentially equivalent to minimising
and you may check (see example sheet 1) that the minimiser α̂ satisfies K α̂ = K(K +
λI)−1 Y . Thus (1.4.11) is indeed an alternative way of expressing kernel ridge regression.
Viewing the result in the opposite direction gives a more “sensational” perspective. If
you had set out trying to minimise Q1 , it might appear completely hopeless as H could be
infinite-dimensional. However, somewhat remarkably we see that this reduces to finding
the coefficients α̂i which solve the simple(r) optimisation problem Q2 .
The result also tells us how to form predictions: given a new observation x, our prediction for f(x) is

f̂(x) = \sum_{i=1}^n α̂_i k(x, x_i).
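A minimal sketch of kernel ridge regression in this form: α̂ = (K + λI)^{-1}Y is one valid choice (it satisfies K α̂ = K(K + λI)^{-1}Y), and predictions use the formula above. The kernel and λ are left as user inputs.

```python
# Kernel ridge regression: fit the coefficients alpha-hat and predict at new points.
import numpy as np

def fit_krr(K, Y, lam):
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), Y)  # alpha-hat

def predict_krr(alpha, k_new):
    """k_new[i] = k(x, x_i) for a new observation x."""
    return k_new @ alpha
```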
Theorem 7. The mean squared prediction error (MSPE) may be bounded above in the following way:

E\,\frac{1}{n}\sum_{i=1}^n \{f^0(x_i) − f̂_λ(x_i)\}^2 ≤ \frac{σ^2}{n}\sum_{i=1}^n \frac{d_i^2}{(d_i + λ)^2} + \frac{λ}{4n}    (1.5.1)

≤ \frac{σ^2}{nλ}\sum_{i=1}^n \min(d_i/4, λ) + \frac{λ}{4n}.
for some α ∈ R^n, and moreover that ‖f^0‖_H^2 ≥ α^T K α. Let the eigendecomposition of K be given by K = U D U^T with D_{ii} = d_i, and define θ = U^T K α. We see that n times the LHS of (1.5.1) is
Now as θ = D U^T α, note that θ_i = 0 when d_i = 0. Let D^+ be the diagonal matrix with ith diagonal entry equal to D_{ii}^{-1} if D_{ii} > 0 and 0 otherwise. Then

\sum_{i : d_i > 0} \frac{θ_i^2}{d_i} = ‖\sqrt{D^+}\,θ‖_2^2 = α^T K U D^+ U^T K α = α^T U D D^+ D U^T α = α^T K α ≤ 1,

using the inequality (a + b)^2 ≥ 4ab in the final line. Finally note that

\frac{d_i^2}{(d_i + λ)^2} ≤ \min\{1, d_i^2/(4 d_i λ)\} = \min(λ, d_i/4)/λ.
Here we have treated the xi as fixed, but we could equally well think of them as
random. Consider a setup where the xi are i.i.d. and independent of ε. If we take a
further expectation on the RHS of (1.5.2), our result still holds true (the µ̂i are random in
this setting). Ideally we would like to then replace E min(µ̂i /4, λn ) with a quantity more
directly related to the kernel k.
Mercer’s theorem is helpful in this regard. This guarantees (under some mild conditions)
an eigendecomposition for kernels, which are somewhat like infinite-dimensional analogues
of symmetric positive semi-definite matrices. Under certain technical conditions, we may write

k(x, x') = \sum_{j=1}^∞ μ_j e_j(x) e_j(x'),

where, given some density p(x) on X, the eigenfunctions e_j and corresponding eigenvalues μ_j obey the integral equation

μ_j e_j(x') = \int_X k(x, x') e_j(x) p(x)\,dx.
When k is the Sobolev kernel and p(x) is the uniform density on [0, 1], we find the eigenvalues satisfy

μ_j/4 = \frac{1}{π^2(2j − 1)^2}.

Thus

\sum_{i=1}^∞ \min(μ_i/4, λ_n) ≤ \frac{λ_n}{2}\Big(\frac{1}{\sqrt{π^2 λ_n}} + 1\Big) + \frac{1}{π^2}\int_{\{(π^2 λ_n)^{-1/2} + 1\}/2}^∞ \frac{1}{(2x − 1)^2}\,dx = \sqrt{λ_n}/π + λ_n/2 = O(\sqrt{λ_n})
In fact, one can show that this is the best error rate one can achieve with any estimator
in this problem. More generally, Yang et al. [2015] show that for essentially any RKHS H we have

\inf_{f̂}\; \sup_{f^0 : ‖f^0‖_H ≤ 1} E\,\frac{1}{n}\sum_{i=1}^n \{f^0(x_i) − f̂(x_i)\}^2 ≥ c \inf_{λ_n} δ_n(λ_n),
where c > 0 is a constant and fˆ is allowed to range over all (measurable) functions of
the data Y, X. The conclusion is that kernel ridge regression is the optimal regression
procedure up to a constant factor in terms of MSPE when the true signal f 0 is from an
RKHS.
1.6 Other kernel machines

1.6.1 The support vector machine

Consider first the simple case where the data in the two classes {x_i}_{i : Y_i = 1} and {x_i}_{i : Y_i = −1}
are separable by a hyperplane through the origin, so there exists β ∈ Rp with kβk2 = 1
such that Yi β T xi > 0 for all i. Note β would then be a unit normal vector to a plane that
separates the two classes.
There may be an infinite number of planes that separate the classes, in which case
it seems sensible to use the plane that maximises the margin between the two classes.
Consider therefore the following optimisation problem:

\max_{β ∈ R^p,\, M ≥ 0} M \quad \text{subject to} \quad Y_i x_i^T β/‖β‖_2 ≥ M \text{ for all } i.
Note that by normalising β above we need not impose the constraint that kβk2 = 1.
Suppose now that the classes are not separable. One way to handle this is to replace
the constraint Yi xTi β/kβk2 ≥ M with a penalty for how far over the margin boundary
xi is. This penalty should be zero if xi is on the correct side of the boundary (i.e. when
Yi xTi β/kβk2 ≥ M ), and should be equal to the distance over the boundary, M −Yi xTi β/kβk2
otherwise. It will in fact be more convenient to penalise according to 1 − Yi xTi β/(kβk2 M )
in the latter case, which is the distance measured in units of M . This penalty is invariant
to β undergoing any positive scaling, so we may set kβk2 = 1/M , thus eliminating M from
the objective function. Switching max 1/‖β‖_2 with min ‖β‖_2^2 and adding the penalty we arrive at

\arg\min_{β ∈ R^p} ‖β‖_2^2 + λ\sum_{i=1}^n (1 − Y_i x_i^T β)_+,
where (·)+ denotes the positive part. Replacing λ with 1/λ we can write the objective in
the more familiar-looking form

\arg\min_{β ∈ R^p} \sum_{i=1}^n (1 − Y_i x_i^T β)_+ + λ‖β‖_2^2.
Thus far we have restricted ourselves to hyperplanes through the origin but we would more
generally want to consider any translate of these i.e. any hyperplane. This can be achieved
by allowing ourselves to translate the x_i by an arbitrary vector b, giving

\arg\min_{β ∈ R^p,\, b ∈ R^p} \sum_{i=1}^n \{1 − Y_i(x_i − b)^T β\}_+ + λ‖β‖_2^2,
or equivalently

(μ̂, β̂) = \arg\min_{(μ, β) ∈ R × R^p} \sum_{i=1}^n \{1 − Y_i(x_i^T β + μ)\}_+ + λ‖β‖_2^2.    (1.6.1)
This final objective defines the support vector classifier; given a new observation x, predictions are obtained via sgn(μ̂ + x^T β̂).
Note that the objective in (1.6.1) may be re-written as

(μ̂, f̂) = \arg\min_{(μ, f) ∈ R × H} \sum_{i=1}^n [1 − Y_i\{f(x_i) + μ\}]_+ + λ‖f‖_H^2,    (1.6.2)
where H is the RKHS corresponding to the linear kernel. The representer theorem (more
specifically the variant in question 10 of example sheet 1) shows that (1.6.2) for an arbitrary
RKHS with kernel k and kernel matrix K is equivalent to the support vector machine
(μ̂, α̂) = \arg\min_{(μ, α) ∈ R × R^n} \sum_{i=1}^n [1 − Y_i\{K_i^T α + μ\}]_+ + λ\,α^T K α.
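A hedged sketch using scikit-learn's SVC with a precomputed kernel matrix; note sklearn parametrises the penalty through C rather than λ (larger C corresponds to weaker regularisation), so the mapping to (1.6.1) is illustrative rather than exact, and the data here are simulated:

```python
# Support vector machine with a precomputed (quadratic) kernel via scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
Y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))   # labels in {-1, +1}

K = (1.0 + X @ X.T) ** 2                           # quadratic kernel from Section 1.3
clf = SVC(kernel="precomputed", C=1.0).fit(K, Y)

X_new = rng.normal(size=(5, 3))
K_new = (1.0 + X_new @ X.T) ** 2                   # entries k(x_new, x_i)
print(clf.predict(K_new))                          # predictions in {-1, +1}
```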
and picking β̂ to maximise the log-likelihood. This leads to (see example sheet) the following optimisation problem:

\arg\min_{β ∈ R^p} \sum_{i=1}^n \log\{1 + \exp(−Y_i x_i^T β)\}.
where H is an RKHS. As in the case of the SVM, the representer theorem gives a finite-
dimensional optimisation that is equivalent to the above.
1.7 Large-scale kernel machines
We introduced the kernel trick as a computational device that avoided performing cal-
culations in a high or infinite dimensional feature space and, in the case of kernel ridge
regression reduced computation down to forming the n × n matrix K and then inverting
K + λI. This can be a huge saving, but when n is very large, this can present serious
computational difficulties. Even if p is small, the O(n3 ) cost of inverting K + λI may cause
problems. What’s worse, the fitted regression function is a sum over n terms:
n
X
fˆ(·) = α̂i k(xi , ·).
i=1
φ̂ : X → R^b

with b small such that E\{φ̂(x)^T φ̂(x')\} = k(x, x'). In a sense we are trying to reverse the kernel trick by approximating the kernel using a random feature map. To increase the quality of the approximation of the kernel, we can consider

x ↦ \frac{1}{\sqrt{L}}(φ̂_1(x), …, φ̂_L(x)) ∈ R^{Lb}
with the (φ̂_l(x))_{l=1}^L being i.i.d. for each x. Let Φ be the matrix with ith row given by (φ̂_1(x_i), …, φ̂_L(x_i))/\sqrt{L}. We may then run our learning algorithm replacing the initial matrix of predictors X with Φ. For example, when performing ridge regression, we can compute

(Φ^T Φ + λI)^{-1} Φ^T Y,

which would require O(nL^2 b^2 + L^3 b^3) operations: a cost linear in n. Predicting a new observation would cost O(Lb).
The work of Rahimi and Recht [2007] proposes a construction of such a random map-
ping φ̂ for shift-invariant kernels, that is kernels for which there exists a function h with
k(x, x0 ) = h(x − x0 ) for all x, x0 ∈ X = Rp . A useful property of such kernels is given by
Bochner’s theorem.
To make use of this theorem, first observe the following. Let u ∼ U [−π, π], x, y ∈ R.
Then
2E cos(x + u) cos(y + u) = 2E{(cos x cos u − sin x sin u)(cos y cos u − sin y sin u)}.
Now as u \overset{d}{=} −u, E(\cos u \sin u) = E\{\cos(−u)\sin(−u)\} = −E(\cos u \sin u) = 0. Also of course \cos^2 u + \sin^2 u = 1, so E\cos^2 u = E\sin^2 u = 1/2. Thus
Then
As a concrete example of this approach, let us take the Gaussian kernel k(x, x') = \exp\{−‖x − x'‖_2^2/(2σ^2)\}. Note that if W ∼ N(0, σ^{-2}I), it has characteristic function E(e^{i t^T W}) = e^{−‖t‖_2^2/(2σ^2)}, so we may take φ̂(x) = \sqrt{2}\cos(W^T x + u).
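A sketch of this random feature construction for the Gaussian kernel (here b = 1; the data, L and σ are arbitrary choices for illustration); the averaged features approximate the kernel matrix:

```python
# Random Fourier features: phi_hat(x) = sqrt(2) cos(W^T x + u) with
# W ~ N(0, sigma^{-2} I) and u ~ U[-pi, pi], scaled by 1/sqrt(L).
import numpy as np

def random_fourier_features(X, L, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(p, L))   # L independent draws of W
    u = rng.uniform(-np.pi, np.pi, size=L)
    return np.sqrt(2.0 / L) * np.cos(X @ W + u)      # row i = scaled features of x_i

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Phi = random_fourier_features(X, L=5000, sigma=1.0)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)                                # exact Gaussian kernel, sigma = 1
print(np.max(np.abs(Phi @ Phi.T - K)))               # small (shrinks like 1/sqrt(L))
```

Ridge regression on Φ then costs roughly O(nL^2 + L^3) rather than the O(n^3) needed to invert K + λI.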
Chapter 2
Forward selection
This can be seen as a greedy way of performing best subsets regression. Given a target
model size m (the tuning parameter), this works as follows.
2. Add to the current model the predictor variable that reduces the residual sum of
squares the most.
Like ridge regression, β̂_λ^L shrinks the OLS estimate towards the origin, but there is an important difference. The ℓ_1 penalty can force some of the estimated coefficients to be exactly 0. In this way the Lasso can perform simultaneous variable selection and parameter estimation. As we did with ridge regression, we can centre and scale the X matrix, and also centre Y and thus remove μ from the objective. Define

Q_λ(β) = \frac{1}{2n}‖Y − Xβ‖_2^2 + λ‖β‖_1.    (2.2.2)
Now the minimiser(s) of Qλ (β) will also be the minimiser(s) of
Similarly, with the Ridge regression objective, we know that β̂λR minimises kY − Xβk22
subject to kβk2 ≤ kβ̂λR k2 .
Now the contours of the OLS objective kY − Xβk22 are ellipsoids centred at β̂ OLS ,
while the contours of kβk22 are spheres centred at the origin, and the contours of kβk1 are
‘diamonds’ centred at 0.
The important point to note is that the `1 ball {β ∈ Rp : kβk1 ≤ kβ̂λL k1 } has corners
where some of the components are zero, and it is likely that the OLS contours will intersect
the `1 ball at such a corner.
2.2.1 Prediction error of the Lasso with no assumptions on the
design
A remarkable property of the Lasso is that even when p ≫ n, it can still perform well in terms of prediction error. Suppose the columns of X have been centred and scaled (as we will always assume from now on unless stated otherwise) and assume the normal linear model (where we have already centred Y),

Y = Xβ^0 + ε − ε̄1.    (2.2.3)
Now XjT ε/n ∼ N (0, σ 2 /n), so if we obtain a bound on the tail probabilities of normal
distributions, the argument above will give a bound for P(Ω).
Motivated by the need to bound normal tail probabilities, we will briefly discuss the
topic of concentration inequalities that provide such bounds for much wider classes of
random variables. Concentration inequalities are vital for the study of many modern
algorithms and in our case here, they will reveal that the attractive properties of the Lasso
presented in Theorem 9 hold true for a variety of non-normal errors.
We begin our discussion with the simplest tail bound, Markov's inequality, which states that given a non-negative random variable W,

P(W ≥ t) ≤ \frac{E(W)}{t}.
This immediately implies that given a strictly increasing function φ : R → [0, ∞) and any random variable W,

P(W ≥ t) = P\{φ(W) ≥ φ(t)\} ≤ \frac{E(φ(W))}{φ(t)}.
Applying this with ϕ(t) = eαt (α > 0) yields the so-called Chernoff bound :
Thus
P(W ≥ t) ≤ \inf_{α > 0} e^{α^2 σ^2/2 − αt} = e^{−t^2/(2σ^2)}.
Note that to arrive at this bound, all we required was (an upper bound on) the moment
generating function (mgf) of W (2.2.4).
Sub-Gaussian variables

Definition 3. We say a random variable W with mean μ = E(W) is sub-Gaussian if there exists σ > 0 such that

E\,e^{α(W − μ)} ≤ e^{α^2 σ^2/2}

for all α ∈ R. We then say that W is sub-Gaussian with parameter σ.
As well as Gaussian random variables, the sub-Gaussian class includes bounded random
variables.
Lemma 11 (Hoeffding’s lemma). If W is mean-zero and takes values in [a, b], then W is
sub-Gaussian with parameter (b − a)/2.
The following proposition shows that analogously to how a linear combination of jointly
Gaussian random variables is Gaussian, a linear combination of sub-Gaussian random
variables is also sub-Gaussian.
Proposition 12. Let (W_i)_{i=1}^n be a sequence of independent mean-zero sub-Gaussian random variables with parameters (σ_i)_{i=1}^n and let γ ∈ R^n. Then γ^T W is sub-Gaussian with parameter \big(\sum_i γ_i^2 σ_i^2\big)^{1/2}.
Proof.

E\exp\Big(α\sum_{i=1}^n γ_i W_i\Big) = \prod_{i=1}^n E\exp(αγ_i W_i) ≤ \prod_{i=1}^n \exp(α^2 γ_i^2 σ_i^2/2) = \exp\Big(α^2 \sum_{i=1}^n γ_i^2 σ_i^2/2\Big).
We can now prove a more general version of the probability bound required for Theorem 9.

Lemma 13. Suppose (ε_i)_{i=1}^n are independent, mean-zero and sub-Gaussian with common parameter σ. Note that this includes ε ∼ N_n(0, σ^2 I). Let λ = Aσ\sqrt{\log(p)/n}. Then

P(‖X^T ε‖_∞/n ≤ λ) ≥ 1 − 2p^{−(A^2/2 − 1)}.
Proof.

P(‖X^T ε‖_∞/n > λ) ≤ \sum_{j=1}^p P(|X_j^T ε|/n > λ).

But ±X_j^T ε/n are both sub-Gaussian with parameter (σ^2‖X_j‖_2^2/n^2)^{1/2} = σ/\sqrt{n}. Thus the RHS is at most

2p\exp(−A^2\log(p)/2) = 2p^{1 − A^2/2}.
When trying to understand the impact of the design matrix X on properties of the
Lasso, it will be helpful to have a tail bound for a product of sub-Gaussian random variables.
Bernstein’s inequality, which applies to random variables satisfying the condition below, is
helpful in this regard.
Definition 4 (Bernstein’s condition). We say that the random variable W with EW = µ
satisfies Bernstein’s condition with parameter (σ, b) where σ, b > 0 if
1
E(|W − µ|k ) ≤ k!σ 2 bk−2 for k = 2, 3, . . . .
2
Proposition 14 (Bernstein’s inequality). Let W1 , W2 , . . . be independent random variables
with E(Wi ) = µ. Suppose each Wi satisfies Bernstein’s condition with parameter (σ, b).
Then
2 2
α(Wi −µ) α σ /2
E(e ) ≤ exp for all |α| < 1/b
1 − b|α|
n
nt2
X
1
P Wi − µ ≥ t ≤ exp − for all t ≥ 0.
n i=1 2(σ 2 + bt)
provided |α| < 1/b, and using the inequality e^u ≥ 1 + u in the final line. For the probability bound, first note that

E\exp\Big(\sum_{i=1}^n α(W_i − μ)/n\Big) = \prod_{i=1}^n E\exp\{α(W_i − μ)/n\} ≤ \exp\Big(n\frac{(α/n)^2 σ^2/2}{1 − b|α/n|}\Big)

for |α|/n < 1/b. Then we use the Chernoff method and set α/n = t/(bt + σ^2) ∈ [0, 1/b).
Lemma 15. Let W, Z be mean-zero and sub-Gaussian with parameters σ_W and σ_Z respectively. Then the product WZ satisfies Bernstein's condition with parameter (8σ_W σ_Z, 4σ_W σ_Z).
Proof. In order to use Bernstein’s inequality (Proposition 14) we first obtain bounds on
the moments of W and Z. Note that W^{2k} = \int_0^∞ 1_{\{x < W^{2k}\}}\,dx. Thus by Fubini's theorem

E(W^{2k}) = \int_0^∞ P(W^{2k} > x)\,dx
 = 2k\int_0^∞ t^{2k−1} P(|W| > t)\,dt    (substituting t^{2k} = x)
 ≤ 4k\int_0^∞ t^{2k−1} \exp\{−t^2/(2σ_W^2)\}\,dt    (by Proposition 10)
 = 4kσ_W^2 \int_0^∞ (2σ_W^2 x)^{k−1} e^{−x}\,dx    (substituting t^2/(2σ_W^2) = x)
 = 2^{k+1} σ_W^{2k} k!.
|E(Y − EY )k | ≤ E|Y − EY |k
= 2k E|Y /2 − EY /2|k
≤ 2k−1 (E|Y |k + |EY |k ) by Jensen’s inequality applied to t 7→ |t|k ,
≤ 2k E|Y |k .
Therefore
Convexity
A set A ⊆ Rd is convex if
In certain settings it will be convenient to consider functions that take, in addition to real
values, the value ∞. Denote R̄ = R ∪ {∞}. A function f : Rd → R̄ is convex if
f((1 − t)x + ty) ≤ (1 − t)f(x) + t f(y)
for all x, y ∈ Rd and t ∈ (0, 1), and f (x) < ∞ for at least one x. [This is in fact known in
the literature as a proper convex function]. It is strictly convex if the inequality is strict for
all x, y ∈ Rd , x 6= y and t ∈ (0, 1). Define the domain of f , to be dom f = {x : f (x) < ∞}.
Note that when f is convex, dom f must be a convex set.
Proposition 16. (i) Let f1 , . . . , fm : Rd → R̄ be convex functions with dom f1 ∩ · · · ∩
dom fm 6= ∅. Then if c1 , . . . , cm ≥ 0, c1 f1 + · · · cm fm is a convex function.
(ii) If f : Rd → R is twice continuously differentiable then
(a) f is convex iff. its Hessian H(x) is positive semi-definite for all x,
(b) f is strictly convex if H(x) is positive definite for all x.
where g : Rd → Rb . Suppose the optimal value is c∗ ∈ R. The Lagrangian for this problem
is defined as
L(x, θ) = f (x) + θT g(x)
where θ ∈ Rb . Note that
for all θ. The Lagrangian method involves finding a θ∗ such that the minimising x∗ on the
LHS satisfies g(x∗ ) = 0. This x∗ must then be a minimiser in the original problem (2.2.5).
Subgradients
Definition 5. A vector v ∈ R^d is a subgradient of a convex function f : R^d → R̄ at x if

f(y) ≥ f(x) + v^T(y − x)  for all y ∈ R^d.
The following easy (but key) result is often referred to in the statistical literature as the
Karush–Kuhn–Tucker (KKT) conditions, though it is actually a much simplified version
of them.
Proposition 19. x^* ∈ \arg\min_{x ∈ R^d} f(x) if and only if 0 ∈ ∂f(x^*).
Proof.
Let us now compute the subdifferential of the `1 -norm. First note that k · k1 : Rd → R
is convex. Indeed it is a norm so the triangle inequality gives ktx + (1 − t)yk1 ≤ tkxk1 +
(1 − t)kyk1 . We introduce some notation that will be helpful here and throughout the rest
of the course.
For x ∈ Rd and A = {k1 , . . . , km } ⊆ {1, . . . , d} with k1 < · · · < km , by xA we will mean
(xk1 , . . . , xkm )T . Similarly if X has d columns we will write XA for the matrix
XA = (Xk1 · · · Xkm ).
Further in this context, by Ac , we will mean {1, . . . , d}\A. Additionally, when in subscripts
we will use the shorthand −j = {j}c and −jk = {j, k}c . Note these column and component
extraction operations will always be considered to have taken place first before any further
operations on the matrix, so for example X_A^T = (X_A)^T. Finally, define

\mathrm{sgn}(x_1) = \begin{cases} −1 & \text{if } x_1 < 0 \\ 0 & \text{if } x_1 = 0 \\ 1 & \text{if } x_1 > 0, \end{cases}

and \mathrm{sgn}(x) = (\mathrm{sgn}(x_1), …, \mathrm{sgn}(x_d))^T.
Proposition 20. For x ∈ R^d let A = {j : x_j ≠ 0}. Then

∂‖x‖_1 = \{v ∈ R^d : ‖v‖_∞ ≤ 1 \text{ and } v_A = \mathrm{sgn}(x_A)\}.

Proof. Define g_j : R^d → R by g_j(x) = |x_j|. Then ‖·‖_1 = \sum_j g_j(·), so by Proposition 18, ∂‖x‖_1 = \sum_j ∂g_j(x). When x has x_j ≠ 0, g_j is differentiable at x, so by Proposition 17, ∂g_j(x) = \{\mathrm{sgn}(x_j) e_j\}, where e_j is the jth unit vector. When x_j = 0, if v ∈ ∂g_j(x) we must have
so

|y_j| ≥ v^T(y − x)  for all y ∈ R^d.    (2.2.6)

We claim that the above holds iff. v_j ∈ [−1, 1] and v_{−j} = 0. For the 'if' direction, note that v^T(y − x) = v_j y_j ≤ |y_j|. Conversely, set y_{−j} = x_{−j} + v_{−j} and y_j = 0 in (2.2.6) to get 0 ≥ ‖v_{−j}‖_2^2, so v_{−j} = 0. Then take y with y_{−j} = x_{−j} to get |y_j| ≥ v_j y_j for all y_j ∈ R, so |v_j| ≤ 1. Forming the set sum of the subdifferentials then gives the result.
Theorem 22. Let λ > 0 and define Δ = X_N^T X_S(X_S^T X_S)^{-1}\mathrm{sgn}(β_S^0). If ‖Δ‖_∞ ≤ 1 and for k ∈ S,

|β_k^0| > λ\big|\mathrm{sgn}(β_S^0)^T\big[\{\tfrac{1}{n}X_S^T X_S\}^{-1}\big]_k\big|,    (2.2.7)

then there exists a Lasso solution β̂_λ^L with \mathrm{sgn}(β̂_λ^L) = \mathrm{sgn}(β^0). As a partial converse, if there exists a Lasso solution β̂_λ^L with \mathrm{sgn}(β̂_λ^L) = \mathrm{sgn}(β^0), then ‖Δ‖_∞ ≤ 1.
Remark 1. We can interpret k∆k∞ as the maximum in absolute value over k ∈ N of the
dot product of sgn(βS0 ) and (XST XS )−1 XST Xk , the coefficient vector obtained by regressing
Xk on XS . The condition k∆k∞ ≤ 1 is known as the irrepresentable condition.
Proof. Fix λ > 0 and write β̂ = β̂_λ^L and Ŝ = {k : β̂_k ≠ 0} for convenience. The KKT conditions for the Lasso give

\frac{1}{n}X^T X(β^0 − β̂) = λν̂,

where ‖ν̂‖_∞ ≤ 1 and ν̂_Ŝ = \mathrm{sgn}(β̂_Ŝ). We can expand this into

\frac{1}{n}\begin{pmatrix} X_S^T X_S & X_S^T X_N \\ X_N^T X_S & X_N^T X_N \end{pmatrix}\begin{pmatrix} β_S^0 − β̂_S \\ −β̂_N \end{pmatrix} = λ\begin{pmatrix} ν̂_S \\ ν̂_N \end{pmatrix}.    (2.2.8)
We prove the converse first. If sgn(β̂) = sgn(β 0 ) then ν̂S = sgn(βS0 ) and β̂N = 0. The
top block of (2.2.8) gives
satisfies (2.2.8). We only need to check that sgn(βS0 ) = sgn(β̂S ), but this follows from
(2.2.7).
Definition 6. Given a matrix of predictors X ∈ R^{n×p} and support set S, define

φ^2 = \inf_{β ∈ R^p : β_S ≠ 0,\, ‖β_N‖_1 ≤ 3‖β_S‖_1} \frac{\frac{1}{n}‖Xβ‖_2^2}{‖β_S‖_1^2/s},

where s = |S|.
Note that if X^T X/n has minimum eigenvalue c_{\min} > 0 (so necessarily p ≤ n), then φ^2 ≥ c_{\min}. Indeed by the Cauchy–Schwarz inequality,

‖β_S‖_1 = \mathrm{sgn}(β_S)^T β_S ≤ \sqrt{s}‖β_S‖_2 ≤ \sqrt{s}‖β‖_2.

Thus

φ^2 ≥ \inf_{β ≠ 0} \frac{\frac{1}{n}‖Xβ‖_2^2}{‖β‖_2^2} = c_{\min}.
Although in the high-dimensional setting we would have cmin = 0, the fact that the infimum
in the definition of φ2 is over a restricted set of β can still allow φ2 to be positive even in
this case, as we discuss after the presentation of the theorem.
Theorem 23. Suppose the compatibility condition holds and let β̂ be the Lasso solution with λ = Aσ\sqrt{\log(p)/n} for A > 0. Then with probability at least 1 − 2p^{−(A^2/8 − 1)}, we have

\frac{1}{n}‖X(β̂ − β^0)‖_2^2 + λ‖β̂ − β^0‖_1 ≤ \frac{16λ^2 s}{φ^2}.
Returning to the actual proof, write a = ‖X(β̂ − β^0)‖_2^2/(nλ). Then from (2.2.9) we can derive the following string of inequalities:
the final inequality coming from adding ‖β_S^0 − β̂_S‖_1 to both sides.
Now using the compatibility condition with β = β̂ − β^0 we have

\frac{1}{n}‖X(β̂ − β^0)‖_2^2 + λ‖β^0 − β̂‖_1 ≤ 4λ‖β_S^0 − β̂_S‖_1 ≤ \frac{4λ\sqrt{s}}{φ}\,\frac{1}{\sqrt{n}}‖X(β̂ − β^0)‖_2.    (2.2.10)

From this we get

\frac{1}{\sqrt{n}}‖X(β̂ − β^0)‖_2 ≤ \frac{4λ\sqrt{s}}{φ},

and substituting this into the RHS of (2.2.10) gives the result.
where Σ ∈ Rp×p . Note then our φ2 = φ2Σ̂ (S) where Σ̂ := X T X/n and S is the support set
of β 0 . The following result shows that if Σ̂ is close to a matrix Σ̌ for which φ2Σ̌ (S) > 0,
then also φ2Σ̂ (S) > 0.
Lemma 24. Suppose φ2Σ̌ (S) > 0 and maxjk |Σ̂jk − Σ̌jk | ≤ φ2Σ̌ (S)/(32|S|). Then φ2Σ̂ (S) ≥
φ2Σ̌ (S)/2.
Proof. In the following we suppress dependence on S. Let s = |S| and let t = φ_{Σ̌}^2/(32s). We have
If ‖β_N‖_1 ≤ 3‖β_S‖_1 then

‖β‖_1 = ‖β_N‖_1 + ‖β_S‖_1 ≤ 4‖β_S‖_1 ≤ 4\frac{\sqrt{β^T Σ̌ β}}{φ_{Σ̌}/\sqrt{s}}.

Thus

β^T Σ̌ β − \frac{φ_{Σ̌}^2}{32s}\,\frac{16\,β^T Σ̌ β}{φ_{Σ̌}^2/s} = \frac{1}{2}β^T Σ̌ β ≤ β^T Σ̂ β.
We may now apply this with Σ̌ = Σ^0. To make the result more readily interpretable,
we shall state it in an asymptotic framework. Imagine a sequence of design matrices with
n and p growing, each with their own compatibility condition. We will however suppress
the asymptotic regime in the notation.
Theorem 25. Suppose the rows of X are i.i.d. and each entry of X is mean-zero sub-Gaussian with parameter v. Suppose s\sqrt{\log(p)/n} → 0 (and p > 1) as n → ∞. Let

and suppose the latter is bounded away from 0. Then P(φ_{Σ̂,s}^2 ≥ φ_{Σ^0,s}^2/2) → 1 as n → ∞.
Corollary 26. Suppose the rows of X are independent with distribution N_p(0, Σ^0). Suppose the diagonal entries of Σ^0 are bounded above and the minimum eigenvalue of Σ^0, c_{\min}, is bounded away from 0. Then P(φ_{Σ̂,s}^2 ≥ c_{\min}/2) → 1 provided s\sqrt{\log(p)/n} → 0.
2.2.7 Computation
One of the most efficient ways of computing Lasso solutions is to use an optimisation technique called coordinate descent. This is a quite general way of minimising a function f : R^d → R and works particularly well for functions of the form

f(x) = g(x) + \sum_{j=1}^d h_j(x_j)
Tseng [2001] proves that provided A0 = {x : f (x) ≤ f (x(0) )} is compact, then every
converging subsequence of x(m) will converge to a minimiser of f .
Corollary 27. Suppose A0 is compact. Then
(i) There exists a minimiser of f , x∗ and f (x(m) ) → f (x∗ ).
with g convex and differentiable and each hb : Rdb → R convex, then block coordinate
descent can be used.
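A short sketch of coordinate descent applied to the Lasso objective Q_λ(β) in (2.2.2), assuming centred, scaled X and centred Y; each coordinate update is an exact soft-thresholding step. The number of sweeps and the warm-start argument are illustrative choices.

```python
# Coordinate descent for the Lasso: cycle through coordinates, soft-thresholding
# the univariate least-squares solution for each one in turn.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, Y, lam, n_sweeps=200, beta=None):
    n, p = X.shape
    beta = np.zeros(p) if beta is None else beta.copy()   # warm start if supplied
    col_sq = (X ** 2).sum(axis=0) / n                      # equals 1 after standardisation
    resid = Y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * beta[j]                     # remove jth contribution
            zj = X[:, j] @ resid / n
            beta[j] = soft_threshold(zj, lam) / col_sq[j]  # exact coordinate minimiser
            resid -= X[:, j] * beta[j]
    return beta
```

Passing the previous solution through the beta argument gives the warm start used when solving along a grid of λ values, as described next.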
We often want to solve the Lasso on a grid of λ values λ0 > · · · > λL (for the purposes
of cross-validation for example). To do this, we can first solve for λ0 , and then solve at
subsequent grid points by using the solution at the previous grid points as an initial guess
(known as a warm start). An active set strategy can further speed up computation. This
works as follows: For l = 1, . . . , L
3. Let V = {k : |X_k^T(Y − Xβ̂)|/n > λ_l}, the set of coordinates which violate the KKT conditions when β̂ is taken as a candidate solution.
Group Lasso

Suppose we have a partition G_1, …, G_q of {1, …, p} (so ∪_{k=1}^q G_k = {1, …, p}, G_j ∩ G_k = ∅ for j ≠ k). The group Lasso penalty [Yuan and Lin, 2006] is given by

λ\sum_{j=1}^q m_j ‖β_{G_j}‖_2.

The multipliers m_j > 0 serve to balance cases where the groups are of very different sizes; typically we choose m_j = \sqrt{|G_j|}. This penalty encourages either an entire group G to have β̂_G = 0 or β̂_k ≠ 0 for all k ∈ G. Such a property is useful when groups occur through coding for categorical predictors or when expanding predictors using basis functions.
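As an illustration (not from the notes), the building block of proximal or block coordinate descent methods for the group Lasso is block soft-thresholding, the proximal operator of one group's penalty t‖·‖_2:

```python
# Block soft-thresholding: argmin_b 0.5*||b - z||_2^2 + t*||b||_2 = (1 - t/||z||_2)_+ z.
import numpy as np

def block_soft_threshold(z, t):
    nz = np.linalg.norm(z)
    return np.zeros_like(z) if nz <= t else (1.0 - t / nz) * z
```

Either the whole block is set exactly to zero or every entry is shrunk but kept nonzero, mirroring the all-or-nothing selection behaviour of the penalty.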
Fused Lasso

If there is a sense in which the coefficients are ordered, so β_j^0 is expected to be close to β_{j+1}^0, a fused Lasso penalty [Tibshirani et al., 2005] may be appropriate. This takes the form

λ_1\sum_{j=1}^{p−1} |β_j − β_{j+1}| + λ_2‖β‖_1,

where the second term may be omitted depending on whether shrinkage towards 0 is desired. As an example, consider the simple setting where Y_i = μ_i^0 + ε_i, and it is thought that the (μ_i^0)_{i=1}^n form a piecewise constant sequence. Then one option is to minimise, over μ ∈ R^n, the following objective:

\frac{1}{n}‖Y − μ‖_2^2 + λ\sum_{i=1}^{n−1} |μ_i − μ_{i+1}|.
A prominent example is the minimax concave penalty (MCP) [Zhang, 2010], which takes

p_λ'(u) = \Big(λ − \frac{u}{γ}\Big)_+.
One disadvantage of using a non-convex penalty is that there may be multiple local minima
which can make optimisation problematic. However, typically if the non-convexity is not
too severe, coordinate descent can produce reasonable results.
Chapter 3
So far we have considered the problem of relating a particular response to a large collection
of explanatory variables.
In some settings however, we do not have a distinguished response variable and instead
we would like to better understand relationships between all the variables. In other situations, rather than being able to predict variables, we would like to understand causal
relationships between them. Representing relationships between random variables through
graphs will be an important tool in tackling these problems.
3.1 Graphs
Definition 7. A graph is a pair G = (V, E) where V is a set of vertices or nodes and E ⊆ V × V, with (v, v) ∉ E for any v ∈ V, is a set of edges.
• We say there is an edge between j and k and that j and k are adjacent if either
(j, k) ∈ E or (k, j) ∈ E.
• If all edges in the graph are (un)directed we call it an (un)directed graph. We can
represent graphs as pictures: for example, we can draw the graph when p = 4 and
E = {(2, 1), (3, 4), (2, 3)} as
[Figure: nodes Z1, Z2, Z3, Z4 with directed edges 2 → 1, 2 → 3 and 3 → 4.]
If instead we have E = {(1, 2), (2, 1), (2, 4), (4, 2)} we get the undirected graph
[Figure: nodes Z1, Z2, Z3, Z4 with undirected edges 1 — 2 and 2 — 4.]
• A set of three nodes is called a v-structure if one node is a child of the two other
nodes, and these two nodes are not adjacent.
• A directed cycle is (almost) a directed path but with the start and end points the
same. A partially directed acyclic graph (PDAG) is a graph containing no directed
cycles. A directed acyclic graph (DAG) is a directed graph containing no directed
cycles.
• If G is a DAG, given a triple of subsets of nodes A, B, S, we say S d-separates A from
B if S blocks every path from A to B.
• The moralised graph of a DAG G is the undirected graph obtained by adding edges
between (marrying) the parents of each node and removing all edge directions.
Proposition 28. Given a DAG G with V = {1, . . . , p}, we say that a permutation π of V
is a topological (or causal) ordering of the variables if it satisfies
Proof. We use induction on the number of nodes p. Clearly the result is true when p = 1.
Now we show that in any DAG, we can find a node with no parents. Pick any node
and move to one of its parents, if possible. Then move to one of the new node’s parents,
and continue in this fashion. This procedure must terminate since no node can be visited
twice, or we would have found a cycle. The final node we visit must therefore have no
parents, which we call a source node.
Suppose then that p ≥ 2, and we know that all DAGs with p−1 nodes have a topological
ordering. Find a source s (wlog s = p) and form a new DAG G̃ with p−1 nodes by removing
the source (and all edges emanating from it). Note we keep the labelling of the nodes in
this new DAG the same. This smaller DAG must have a topological order π̃. A topological
ordering π for our original DAG is then given by π(s) = 1 and π(k) = π̃(k)+1 for k 6= s.
Definition 8. If X, Y and Z are random vectors with a joint density fXY Z (w.r.t. a
product measure µ) then we say X is conditionally independent of Y given Z, and write
X ⊥⊥ Y |Z
if
fXY |Z (x, y|z) = fX|Z (x|z)fY |Z (y|z).
Equivalently
X ⊥⊥ Y |Z ⇐⇒ fX|Y Z (x|y, z) = fX|Z (x|z).
We will first look at how undirected graphs can be used to visualise conditional independencies between random variables; thus in the next few subsections by graph we will
mean undirected graph.
Let Z = (Z1 , . . . , Zp )T be a collection of random variables with joint law P and consider
a graph G = (V, E) where V = {1, . . . , p}. A reminder of our notation: −k and −jk when
in subscripts denote the sets {1, . . . , p} \ {k} and {1, . . . , p} \ {j, k} respectively.
Definition 9. We say that P satisfies the pairwise Markov property w.r.t. G if for any pair j, k ∈ V with j ≠ k and (j, k), (k, j) ∉ E,

Z_j ⊥⊥ Z_k | Z_{−jk}.
Note that the complete graph that has edges between every pair of vertices will satisfy
the pairwise Markov property for any P . The minimal graph satisfying the pairwise Markov
property w.r.t. a given P is called the conditional independence graph (CIG) for P .
Definition 10. We say P satisfies the global Markov property w.r.t. G if for any triple
(A, B, S) of disjoint subsets of V such that S separates A from B, we have
ZA ⊥⊥ ZB |ZS .
Proposition 29. If P has a positive density (w.r.t. some product measure) then if it
satisfies the pairwise Markov property w.r.t. a graph G, it also satisfies the global Markov
property w.r.t. G and vice versa.
Proposition 30.
Proof. Idea: write Z_A = M Z_B + (Z_A − M Z_B) with matrix M ∈ R^{|A|×|B|} such that Z_A − M Z_B and Z_B are independent, i.e. such that
This occurs when we take M^T = Σ_{B,B}^{-1}Σ_{B,A}. Because Z_A − M Z_B and Z_B are independent, the distribution of Z_A − M Z_B conditional on Z_B = z_B is equal to its unconditional distribution. Now
where

m_k = μ_k − Σ_{k,−k}Σ_{−k,−k}^{-1}μ_{−k},
ε_k | Z_{−k} = z_{−k} ∼ N(0, Σ_{k,k} − Σ_{k,−k}Σ_{−k,−k}^{-1}Σ_{−k,k}).
Note that if the jth element of the vector of coefficients Σ_{−k,−k}^{-1}Σ_{−k,k} is zero, then the distribution of Z_k conditional on Z_{−k} will not depend at all on the jth component of Z_{−k}. Then if that jth component was Z_{j'}, we would have that Z_k | Z_{−k} = z_{−k} has the same distribution as Z_k | Z_{−j'k} = z_{−j'k}, so Z_k ⊥⊥ Z_{j'} | Z_{−j'k}.
Thus given x_1, …, x_n \overset{i.i.d.}{∼} Z and writing

X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix},
for the selected set of variables when regressing X_k on X_{\{k\}^c}, we can use the "OR" rule and put an edge between vertices j and k if and only if k ∈ Ŝ_j or j ∈ Ŝ_k. An alternative is the "AND" rule, where we put an edge between j and k if and only if k ∈ Ŝ_j and j ∈ Ŝ_k.
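A sketch of neighbourhood selection using scikit-learn's Lasso (whose objective coincides with (2.2.2) for centred data); the regularisation level is an assumed input, and the "OR"/"AND" combination rules are implemented at the end:

```python
# Nodewise Lasso regressions to estimate the conditional independence graph.
import numpy as np
from sklearn.linear_model import Lasso

def neighbourhood_selection(X, lam):
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for k in range(p):
        others = [j for j in range(p) if j != k]
        fit = Lasso(alpha=lam).fit(X[:, others], X[:, k])
        selected = np.array(others)[fit.coef_ != 0]
        adj[k, selected] = True            # estimated neighbourhood S_hat_k
    return adj | adj.T                     # "OR" rule; use adj & adj.T for "AND"
```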
Another popular approach to estimating the CIG works by first directly estimating Ω,
as we’ll now see.
M^{-1} = \begin{pmatrix} S^{-1} & −S^{-1}Q^T R^{-1} \\ −R^{-1}QS^{-1} & R^{-1} + R^{-1}QS^{-1}Q^T R^{-1} \end{pmatrix}.

Thus

Z_k ⊥⊥ Z_j | Z_{−jk} ⇔ Ω_{jk} = 0.
This motivates another approach to estimating the CIG.
Write

X̄ = \frac{1}{n}\sum_{i=1}^n x_i,   S = \frac{1}{n}\sum_{i=1}^n (x_i − X̄)(x_i − X̄)^T.

Then

\sum_{i=1}^n (x_i − μ)^T Ω (x_i − μ) = \sum_{i=1}^n (x_i − X̄ + X̄ − μ)^T Ω (x_i − X̄ + X̄ − μ)
 = \sum_{i=1}^n (x_i − X̄)^T Ω (x_i − X̄) + n(X̄ − μ)^T Ω (X̄ − μ) + 2\sum_{i=1}^n (x_i − X̄)^T Ω (X̄ − μ).

Also,

\sum_{i=1}^n (x_i − X̄)^T Ω (x_i − X̄) = \sum_{i=1}^n \mathrm{tr}\{(x_i − X̄)^T Ω (x_i − X̄)\} = \sum_{i=1}^n \mathrm{tr}\{(x_i − X̄)(x_i − X̄)^T Ω\} = n\,\mathrm{tr}(SΩ).

Thus

ℓ(μ, Ω) = −\frac{n}{2}\{\mathrm{tr}(SΩ) − \log\det(Ω) + (X̄ − μ)^T Ω (X̄ − μ)\}

and

\max_{μ ∈ R^p} ℓ(μ, Ω) = −\frac{n}{2}\{\mathrm{tr}(SΩ) − \log\det(Ω)\}.

Hence the maximum likelihood estimate of Ω, Ω̂_{ML}, can be obtained by solving

\min_{Ω : Ω ≻ 0} \{−\log\det(Ω) + \mathrm{tr}(SΩ)\},
where Ω ≻ 0 means Ω is positive definite. One can show that the objective is convex and we are minimising over a convex set. As

\frac{∂}{∂Ω_{jk}}\log\det(Ω) = (Ω^{-1})_{kj} = (Ω^{-1})_{jk},   \frac{∂}{∂Ω_{jk}}\mathrm{tr}(SΩ) = S_{kj} = S_{jk},

if X has full column rank so that S is positive definite, then Ω̂_{ML} = S^{-1}.
The graphical Lasso [Yuan and Lin, 2007] penalises the log-likelihood for Ω and solves

\min_{Ω : Ω ≻ 0} \{−\log\det(Ω) + \mathrm{tr}(SΩ) + λ‖Ω‖_1\},

where ‖Ω‖_1 = \sum_{j,k} |Ω_{jk}|; this results in a sparse estimate of the precision matrix from which an estimate of the CIG can be constructed. Often the ‖Ω‖_1 penalty is modified so that the diagonal elements are not penalised.
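A hedged sketch using scikit-learn's GraphicalLasso, whose tuning parameter alpha plays the role of λ; the threshold used to declare an entry nonzero is an arbitrary numerical tolerance:

```python
# Sparse precision matrix and CIG estimate via the graphical Lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

def estimate_cig(X, alpha=0.1, tol=1e-8):
    model = GraphicalLasso(alpha=alpha).fit(X)   # X is the n x p data matrix
    Omega = model.precision_                     # estimated precision matrix
    adj = np.abs(Omega) > tol                    # nonzero entries give the edges
    np.fill_diagonal(adj, False)
    return adj, Omega
```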
3.4 Structural equation models
Conditional independence graphs give us some understanding of the relationships between
variables. However, they do not tell us how, if we were to set the kth variable to a particular value, say 0.5, the distribution of the other variables would be altered. Yet this is often the sort of question that we would like to answer.
In order to reach this more ambitious goal, we introduce the notion of structural equation
models (SEMs). These give a way of representing the data generating process. We will now
have to make use of not just undirected graphs but other sorts of graphs (and particularly
DAGs), so by graph we will now mean any sort of graph satisfying definition 7.
• Pk ⊆ {1, . . . , p} \ {k} are such that the graph with edges given by Pk being pa(k) is
a DAG.
Example 3.4.1. Consider the following (totally artificial) SEM which has whether you
are taking this course (Z1 = 1) depending on whether you went to the statistics catch up
lecture (Z2 = 1) and whether you have heard about machine learning (Z3 = 1). Suppose
Z_3 = ε_3,  ε_3 ∼ Bern(1/4)
Z_2 = 1_{\{ε_2(1 + Z_3) > 1/2\}},  ε_2 ∼ U[0, 1]
Z_1 = 1_{\{ε_1(Z_2 + Z_3) > 1/2\}},  ε_1 ∼ U[0, 1].

[Figure: the associated DAG, with edges Z3 → Z2, Z3 → Z1 and Z2 → Z1.]
Note that an SEM for Z determines its law. Indeed, using a topological ordering π for the associated DAG, we can write each Z_k as a function of ε_{π^{-1}(1)}, ε_{π^{-1}(2)}, …, ε_{π^{-1}(π(k))}.
Importantly, though, we can use it to tell us much more than simply the law of Z: for
example we can query properties of the distribution of Z after having set a particular
component to any given value. This is what we study next.
3.5 Interventions
Given an SEM S, we can replace one (or more) of the structural equations by a new
structural equation, for example for a chosen variable k we could replace the structural
equation Zk = hk (ZPk , εk ) by Zk = h̃k (Z̃P̃k , ε̃k ). This gives us a new structural equation
model S̃ which in turn determines a new joint law for Z.
When we have h̃k (Z̃P̃k , ε̃k ) = a for some a ∈ R, so we are setting the value of Zk to
be a, we call this a (perfect) intervention. Expectations and probabilities under this new
law for Z are written by adding |do(Zk = a) e.g. E(Zj |do(Zk = a)). Note that this will in
general be different from the conditional expectation E(Zj |Zk = a).
Z3 = ε3 ∼ Bern(1/4)
Z2 = 1
Z1 = 1{ε1 (1+Z3 )>1/2} ε1 ∼ U [0, 1].
Thus

P(Z_1 = 1 | do(Z_2 = 1)) = \frac{1}{4}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{1}{2} = \frac{9}{16}.

On the other hand,

P(Z_1 = 1 | Z_2 = 1) = \sum_{j ∈ \{0,1\}} P(Z_1 = 1 | Z_2 = 1, Z_3 = j)\,P(Z_3 = j | Z_2 = 1)
 = \frac{1}{P(Z_2 = 1)}\sum_{j ∈ \{0,1\}} P(Z_1 = 1 | Z_2 = 1, Z_3 = j)\,P(Z_2 = 1 | Z_3 = j)\,P(Z_3 = j)
 = \frac{1}{\frac{1}{4}\cdot\frac{3}{4} + \frac{3}{4}\cdot\frac{1}{2}}\Big(\frac{3}{4}\cdot\frac{3}{4}\cdot\frac{1}{4} + \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{3}{4}\Big)
 = \frac{7}{12} ≠ \frac{9}{16}.
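A quick Monte Carlo sketch of Example 3.4.1 (an illustration, with an arbitrary sample size) that simulates both the observational law and the intervened SEM, confirming the two quantities above differ:

```python
# Simulate the SEM and its do(Z2 = 1) intervention to check 7/12 vs 9/16.
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
eps3 = (rng.random(N) < 0.25).astype(float)        # Z3 ~ Bern(1/4)
eps2, eps1 = rng.random(N), rng.random(N)

Z3 = eps3
Z2 = (eps2 * (1 + Z3) > 0.5).astype(float)
Z1 = (eps1 * (Z2 + Z3) > 0.5).astype(float)
print(Z1[Z2 == 1].mean())                          # approx 7/12 = 0.5833...

Z2_do = np.ones(N)                                 # intervention do(Z2 = 1)
Z1_do = (eps1 * (Z2_do + Z3) > 0.5).astype(float)
print(Z1_do.mean())                                # approx 9/16 = 0.5625
```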
(ii) global Markov property w.r.t. the DAG G if for all disjoint A, B, S ⊆ {1, . . . , p},
A, B d-separated by S ⇒ ZA ⊥⊥ ZB |ZS .
Theorem 32. If P has a density f (with respect to a product measure), then all Markov
properties in definition 12 are equivalent.
In view of this, we will henceforth use the term Markov to mean global Markov.
Proposition 33. Let P be the law of an SEM with DAG G. Then P obeys the Markov
factorisation property w.r.t. G.
Thus we can read off from the DAG of an SEM a great deal of information concerning the
distribution it generates. We can use this to help us calculate the effects of interventions.
We have seen now how an SEM can be used to not only query properties of the joint
distribution, but also to determine the effects of certain perturbations to the system. In
many settings, we may not have a prespecified SEM to work with, but instead we’d like to
learn the DAG from observational data. This is the problem we turn to next.
Causal minimality
We know that if P is generated by an SEM with DAG G, then P will be Markov w.r.t. G.
Conversely, one can show that if P is Markov w.r.t. a DAG G, then there is also an SEM
with DAG G that could have generated P . But P will be Markov w.r.t. a great number of
DAGs, e.g. Z1 and Z2 being independent can be represented by
Z1 = 0 × Z2 + ε1 = ε1 , Z2 = ε2 .
This motivates the following definition.
Definition 13. P satisfies causal minimality with respect to G if it is (global) Markov w.r.t. G but not w.r.t. any proper subgraph of G with the same nodes.
Markov equivalent DAGs
It is possible for two different DAGs to satisfy the same collection of d-separations e.g.
[Two DAGs on the same nodes: Z1 → Z2 and Z1 ← Z2 .]
Definition 14. We say two DAGs G1 and G2 are Markov equivalent if M(G1 ) = M(G2 ).
Proposition 34. Two DAGs are Markov equivalent if and only if they have the same
skeleton and v-structures.
The set of all DAGs that are Markov equivalent to a DAG can be represented by a
completed PDAG (CPDAG) which contains an edge (j, k) if and only if one member of the
Markov equivalence class does. From observational data we can only ever hope to recover the Markov equivalence class, i.e. the CPDAG, of a DAG with which P satisfies causal minimality (unless we place restrictions on the functional forms of the SEM equations).
Faithfulness
Consider the following two SEMs; write P 0 for the law of (Z1 , Z2 , Z3 ) generated by the first. For appropriate choices of α, β, γ and the distributions of the εj , the second SEM generates the same law P 0 .
Z1 = ε1
Z2 = αZ1 + ε2
Z3 = βZ1 + γZ2 + ε3 ,
Z̃1 = ε̃1
Z̃2 = Z̃1 + α̃Z̃3 + ε̃2
Z̃3 = ε̃3 .
Here the ε̃j are independent with ε̃1 ∼ N (0, 1), ε̃2 ∼ N (0, 2), α̃ = 1/2 and ε̃3 ∼ N (0, 1/2).
Writing the DAGs for the two SEMs above as G and G̃, note that P 0 satisfies causal
minimality w.r.t. both G and G̃.
Definition 15. We say P is faithful to the DAG G if it is Markov w.r.t. G and for all
disjoint A, B, S ⊆ {1, . . . , p},
A, B d-separated by S ⇐ ZA ⊥⊥ ZB |ZS .
Proposition 36. Suppose we have a triple of nodes j, k, l in a DAG and the only non-adjacent pair is j, k (i.e. in the skeleton j − l − k).
(i) If the nodes are in a v-structure (j → l ← k) then no S that d-separates j and k can contain l.
(ii) If some S that d-separates j and k does not contain l, then we must have the v-structure j → l ← k.
Proof. For (i) note that any set containing l cannot block the path j, l, k. For (ii) note we
know that the path j, l, k is blocked by S, so we must have j → l ← k.
This last result then allows us to find the v-structures given the skeleton and a d-
separating set S(j, k) corresponding to each absent edge. Given a skeleton and v-structures,
it may be possible to orient further edges by making use of the acyclicity of DAGs; we do
not cover this here.
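As an illustration of this orientation step, the short sketch below (a hypothetical helper, not from the notes) takes a skeleton and the separating sets S(j, k) recorded for each absent edge and returns the v-structures implied by Proposition 36(ii).

```python
from itertools import combinations

def find_v_structures(skeleton, sepsets):
    """skeleton: dict mapping each node to its set of neighbours (undirected).
    sepsets[(j, k)]: a set S that d-separates the non-adjacent pair j, k.
    Returns triples (j, l, k) to be oriented as j -> l <- k."""
    v_structures = []
    for j, k in combinations(sorted(skeleton), 2):
        if k in skeleton[j]:
            continue                         # j, k adjacent: nothing to orient
        for l in skeleton[j] & skeleton[k]:  # common neighbours of j and k
            if l not in sepsets[(j, k)]:
                v_structures.append((j, l, k))
    return v_structures

# Skeleton of 1 -> 3 <- 2 together with the separating set S(1, 2) = {} found earlier.
print(find_v_structures({1: {3}, 2: {3}, 3: {1, 2}}, {(1, 2): set()}))  # [(1, 3, 2)]
```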
Algorithm 1 First part of the PC algorithm: finding the skeleton.
Set Ĝ to be the complete undirected graph. Set ℓ = −1.
repeat
Increment ℓ → ℓ + 1.
repeat
Select a (new) ordered pair of nodes j, k that are adjacent in Ĝ and such that |adj(Ĝ, j) \ {k}| ≥ ℓ.
repeat
Choose a (new) S ⊆ adj(Ĝ, j) \ {k} with |S| = ℓ.
If Zj ⊥⊥ Zk |ZS then delete edges (j, k) and (k, j) and set S(j, k) = S(k, j) = S.
until edges (j, k), (k, j) are deleted or all relevant subsets have been chosen.
until all relevant ordered pairs have been chosen.
until for every ordered pair j, k that are adjacent in Ĝ we have |adj(Ĝ, j) \ {k}| < ℓ.
Population version
The PC algorithm, named after its inventors Peter Spirtes and Clark Glymour [Spirtes et al., 2000], exploits, for efficiency, the fact that we need not search over all sets S but only over subsets of either pa(j) or pa(k). The population version assumes that P is known, so conditional independencies can be queried directly; a sample version that is applicable in practice is given in the following subsection. We denote the set of nodes that are adjacent to a node j in a graph G by adj(G, j).
Suppose P is faithful to the DAG G 0 . At each stage of Algorithm 1, the skeleton of G 0 must be a subgraph of Ĝ. By the end of the algorithm, for each pair j, k adjacent in Ĝ, we will have searched through all subsets of adj(Ĝ, j) \ {k} and of adj(Ĝ, k) \ {j} without finding a set S such that Zj ⊥⊥ Zk |ZS . Since P is faithful to G 0 , it follows that j and k must be adjacent in G 0 ; that is, the output of Algorithm 1 is the skeleton of G 0 .
Sample version
The sample version of the PC algorithm replaces the querying of conditional independence
with a conditional independence test applied to data x1 , . . . , xn . The level α of the tests will be a tuning parameter of the method. If the data are assumed to be multivariate
normal, the (sample) partial correlation can be used to test conditional independence since
if Zj ⊥⊥ Zk |ZS then
Corr(Zj , Zk |ZS ) := ρjk·S = 0.
To compute the sample partial correlation, we regress Xj and Xk on XS and compute the
correlation between the resulting residuals.
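A minimal sketch of this residual-based computation is given below (my own illustration; including an intercept in the regressions is an incidental choice). In practice one would turn the sample partial correlation into a test, for example via Fisher's z-transform, and compare with the level α.

```python
import numpy as np

def partial_correlation(X, j, k, S):
    """Sample partial correlation of columns j and k of X given the columns in S:
    regress both on X_S (plus an intercept) and correlate the residuals."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n)] + [X[:, s] for s in S])
    beta_j = np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
    beta_k = np.linalg.lstsq(Z, X[:, k], rcond=None)[0]
    r_j = X[:, j] - Z @ beta_j
    r_k = X[:, k] - Z @ beta_k
    return np.corrcoef(r_j, r_k)[0, 1]

# Example: chain Z1 -> Z2 -> Z3, so Z1 _||_ Z3 | Z2 and rho_{13.2} should be near 0.
rng = np.random.default_rng(2)
Z1 = rng.normal(size=2000)
Z2 = Z1 + rng.normal(size=2000)
Z3 = Z2 + rng.normal(size=2000)
X = np.column_stack([Z1, Z2, Z3])
print(partial_correlation(X, 0, 2, [1]))   # close to 0
print(partial_correlation(X, 0, 2, []))    # clearly non-zero
```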
Chapter 4
Suppose we wish to test m hypotheses H1 , . . . , Hm with associated p-values p1 , . . . , pm . Let I0 ⊆ {1, . . . , m} be the set of indices of the true null hypotheses, with m0 = |I0 |, and assume that when Hi is true, P(pi ≤ t) ≤ t for all t ∈ [0, 1]. Write R for the total number of rejections made by a given procedure and N01 for the number of true hypotheses that are rejected (the false rejections). A classical aim is to control the familywise error rate
FWER = P(N01 ≥ 1)
at a prescribed level α; i.e. find procedures for which FWER ≤ α. The simplest such procedure is the Bonferroni correction, which rejects Hi if pi ≤ α/m.
The Bonferroni correction controls the FWER at level α; in fact
FWER ≤ E(N01 ) ≤ m0 α/m ≤ α.
Proof. The first inequality comes from Markov's inequality. Next,
E(N01 ) = E( Σ_{i∈I0} 1{pi ≤α/m} ) = Σ_{i∈I0} P(pi ≤ α/m) ≤ m0 α/m,
using that P(pi ≤ α/m) ≤ α/m when Hi is true.
A more sophisticated approach is the closed testing procedure. Given hypotheses H1 , . . . , Hm , form the closure
{HI : I ⊆ {1, . . . , m}, I ≠ ∅},
where HI = ∩i∈I Hi is known as an intersection hypothesis (HI is the hypothesis that all Hi , i ∈ I, are true).
Suppose that for each I, we have an α-level test φI taking values in {0, 1} for testing HI (we reject if φI = 1), so under HI ,
PHI (φI = 1) ≤ α.
These tests φI are known as local tests. The closed testing procedure rejects HI if and only if φJ = 1 for all J ⊇ I. Typically we only make use of the individual hypotheses that are rejected by the procedure, i.e. those rejected HI where I is a singleton.
We consider the case of 4 hypotheses as an example. Suppose the underlined hypotheses
are rejected by the local tests.
H1234
H123 H124 H134 H234
H12 H13 H14 H23 H24 H34
H1 H2 H3 H4
• H23 is rejected by the closed testing procedure.
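The procedure is easy to carry out by brute force when m is small. The sketch below (with hypothetical p-values, not those of the underlined example) uses Bonferroni local tests; an individual hypothesis Hi is rejected precisely when every intersection hypothesis containing it is rejected by its local test.

```python
from itertools import combinations

def bonferroni_local(I, pvals, alpha):
    """Bonferroni test of the intersection hypothesis H_I."""
    return min(pvals[i] for i in I) <= alpha / len(I)

def closed_testing(pvals, alpha, local_test=bonferroni_local):
    """Reject H_i iff phi_J = 1 for every J containing i."""
    m = len(pvals)
    subsets = [I for r in range(1, m + 1) for I in combinations(range(m), r)]
    locally_rejected = {I for I in subsets if local_test(I, pvals, alpha)}
    return [i for i in range(m)
            if all(I in locally_rejected for I in subsets if i in I)]

# Four hypothetical p-values; hypotheses are indexed 0, ..., 3 here.
print(closed_testing([0.001, 0.2, 0.01, 0.04], alpha=0.05))  # [0, 2]
```

With Bonferroni local tests this is exactly Holm's procedure, discussed below.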
Theorem 38. With probability at least 1 − α, the closed testing procedure makes no false rejections. In particular, it controls the FWER at level α.
Proof. Assume I0 is not empty (as otherwise no rejection can be false anyway). Define the events
A = {at least one false rejection is made},   B = {φI0 = 1}.
In order for there to be a false rejection, i.e. a rejection of some Hi with i ∈ I0 , the procedure requires φJ = 1 for all J ⊇ {i}; in particular we must have rejected HI0 with the local test. Thus B ⊇ A, so
FWER = P(A) ≤ P(φI0 = 1) ≤ α,
since HI0 is true.
Different choices for the local tests give rise to different testing procedures. Holm's procedure takes φI to be the Bonferroni test, i.e.
φI = 1 if min_{i∈I} pi ≤ α/|I|, and φI = 0 otherwise.
It can be shown (see example sheet) that Holm's procedure amounts to ordering the p-values p1 , . . . , pm as p(1) ≤ · · · ≤ p(m) , with corresponding hypotheses H(1) , . . . , H(m) (so (i) is the index of the ith smallest p-value), and then performing the following.
Step 1. If p(1) ≤ α/m reject H(1) , and go to step 2. Otherwise accept H(1) , . . . , H(m) and
stop.
Step i. If p(i) ≤ α/(m−i+1), reject H(i) and go to step i+1. Otherwise accept H(i) , . . . , H(m) .
The p-values are visited in ascending order and rejected until the first time a p-value exceeds
a given critical value. This sort of approach is known (slightly confusingly) as a step-down
procedure.
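A direct implementation of this step-down description (a sketch; α = 0.05 is just a default) is:

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm's step-down procedure; returns a boolean array of rejections."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)              # indices of p-values in increasing order
    reject = np.zeros(m, dtype=bool)
    for step, i in enumerate(order):       # step = 0 corresponds to Step 1 above
        if pvals[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break                          # accept this and all remaining hypotheses
    return reject

print(holm([0.001, 0.2, 0.01, 0.04]))      # [ True False  True False]
```

On the p-values used in the closed testing sketch above this gives the same rejections, as it must.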
An alternative to controlling the FWER is to control the false discovery rate
FDR = E(FDP),   FDP = N01 / max(R, 1),
where FDP is the false discovery proportion. Note the maximum in the denominator is
to ensure division by zero does not occur. The FDR was introduced in Benjamini and
Hochberg [1995], and it is now widely used across science, particularly biostatistics.
The Benjamini–Hochberg procedure attempts to control the FDR at level α and works
as follows. Let
k̂ = max{ i : p(i) ≤ iα/m }.
Reject H(1) , . . . , H(k̂) (or perform no rejections if k̂ is not defined).
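The procedure is again only a few lines; the sketch below uses hypothetical p-values.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg: reject the hypotheses with the k_hat smallest p-values,
    where k_hat = max{i : p_(i) <= i * alpha / m}, or nothing if no such i exists."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    ok = np.nonzero(pvals[order] <= alpha * np.arange(1, m + 1) / m)[0]
    reject = np.zeros(m, dtype=bool)
    if ok.size > 0:
        reject[order[:ok[-1] + 1]] = True  # reject the k_hat smallest p-values
    return reject

print(benjamini_hochberg([0.001, 0.2, 0.01, 0.03]))  # [ True False  True  True]
```

On these p-values the procedure also rejects the hypothesis with p-value 0.03, which Holm at the same level would not: controlling the FDR is less stringent than controlling the FWER.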
Theorem 39. Suppose that the pi , i ∈ I0 , are independent, and independent of {pi : i ∉ I0 }. Then the Benjamini–Hochberg procedure controls the FDR at level α; in fact FDR ≤ αm0 /m.
Proof. For each i ∈ I0 , let Ri denote the number of rejections we get by applying a modified Benjamini–Hochberg procedure to the collection of p-values p\i := {pj : j ≠ i} with cutoff
k̂i = max{ j : p^{\i}_{(j)} ≤ α(j + 1)/m },
where p^{\i}_{(j)} is the jth smallest p-value in the set p\i .
For r = 1, . . . , m and i ∈ I0 , note that
{pi ≤ αr/m, R = r} = {pi ≤ αr/m, p(r) ≤ αr/m, p(s) > αs/m for all s > r}
= {pi ≤ αr/m, p^{\i}_{(r−1)} ≤ αr/m, p^{\i}_{(s−1)} > αs/m for all s > r}
= {pi ≤ αr/m, Ri = r − 1}.
Thus
FDR = E( N01 / max(R, 1) )
= Σ_{r=1}^m E( (N01 /r) 1{R=r} )
= Σ_{r=1}^m (1/r) E( Σ_{i∈I0} 1{pi ≤αr/m} 1{R=r} )
= Σ_{r=1}^m (1/r) Σ_{i∈I0} P(pi ≤ αr/m, R = r)
= Σ_{r=1}^m (1/r) Σ_{i∈I0} P(pi ≤ αr/m) P(Ri = r − 1)
≤ (α/m) Σ_{i∈I0} Σ_{r=1}^m P(Ri = r − 1)
= αm0 /m.
The penultimate equality uses the set identity above together with the independence assumptions (Ri is a function of p\i alone); the inequality uses P(pi ≤ αr/m) ≤ αr/m for i ∈ I0 ; and the final equality holds since Σ_{r=1}^m P(Ri = r − 1) = 1.
The Lasso solution β̂ satisfies the KKT conditions X T (Y − X β̂)/n = λν̂, where ν̂ is a subgradient of the ℓ1 -norm evaluated at β̂. Writing Σ̂ = X T X/n and substituting Y = Xβ 0 + ε, on rearranging we have
Σ̂(β̂ − β 0 ) + λν̂ = X T ε/n.
The key idea is now to form an approximate inverse Θ̂ of Σ̂. Then we have
β̂ + λΘ̂ν̂ − β 0 = Θ̂X T ε/n + ∆/√n,
where ∆ = √n(Θ̂Σ̂ − I)(β 0 − β̂). Define
b̂ = β̂ + λΘ̂ν̂,
which we shall refer to as the debiased Lasso. If we choose Θ̂ such that ∆ is small, we will
have b̂ − β 0 ≈ Θ̂X T ε/n, which can be used as a basis for performing inference.
We already know that under a compatibility condition on the design matrix X, kβ̂ − β 0 k1 is small (Theorem 23) with high probability. If we can also show that the ℓ∞ -norms of the rows of Θ̂Σ̂ − I are small, we can leverage this fact using Hölder's inequality to show that k∆k∞ is small. Let θ̂j be the jth row of Θ̂. Then k(Σ̂Θ̂T − I)j k∞ ≤ η is equivalent to
kX_{−j}^T X θ̂j k∞ /n ≤ η and |Xj^T X θ̂j /n − 1| ≤ η.
The first of these inequalities is somewhat reminiscent of the KKT conditions for the Lasso.
Let
γ̂ (j) = arg min_{γ∈R^{p−1}} { kXj − X−j γk2^2 /(2n) + λj kγk1 }. (4.3.1)
Further let
τ̂j^2 = Xj^T (Xj − X−j γ̂ (j) )/n = kXj − X−j γ̂ (j) k2^2 /n + λj kγ̂ (j) k1 ;
see the example sheet for the final equality. Then set
θ̂j = −(1/τ̂j^2 ) (γ̂1^{(j)} , . . . , γ̂_{j−1}^{(j)} , −1, γ̂_j^{(j)} , . . . , γ̂_{p−1}^{(j)} )^T ,
with the −1 in the jth position. Then
X θ̂j = (Xj − X−j γ̂ (j) ) / ( Xj^T (Xj − X−j γ̂ (j) )/n ).
Thus Xj^T X θ̂j /n = 1 and, by the KKT conditions of the Lasso optimisation (4.3.1), we have
τ̂j^2 kX_{−j}^T X θ̂j k∞ /n ≤ λj .
Thus with the choice of Θ̂ defined as above, we have
k∆k∞ ≤ √n kβ̂ − β 0 k1 max_j (λj /τ̂j^2 ).
When can we expect λj /τ̂j2 to be small? One way of answering this is to consider a random
design setting. Let us assume that each row of X is independent and distributed as Np (0, Σ)
where Σ is positive definite. Write Ω = Σ−1 . From Proposition 30 and our study of the
neighbourhood selection procedure (see also Section 3.3.3), we know that for each j, we
can write
Xj = X−j γ (j) + ε(j) , (4.3.2)
where the ε_i^{(j)} | X_{−j} are i.i.d. N (0, Ω_{jj}^{−1} ) and γ (j) = −Ω_{jj}^{−1} Ω_{−j,j} . Theorem 23 can therefore be used to understand the properties of γ̂ (j) and hence of the τ̂j^2 . In order to apply this result, however, we need the γ (j) to be sparse. To this end let
sj = Σ_{k≠j} 1{Ωkj ≠ 0}
and smax = max(maxj sj , s). In order to make the following result more easily interpretable,
we will consider an asymptotic regime where X, s, smax etc. are all allowed to change as
n → ∞, though we suppress this in the notation. We will consider σ as constant.
Theorem 40. Suppose the minimum eigenvalue of Σ is always at least cmin > 0 and maxj Σjj ≤ 1. Suppose further that smax √(log(p)/n) → 0. Then there exist constants A1 , A2 such that, setting λ = λj = A1 √(log(p)/n), we have
√n(b̂ − β 0 ) = W + ∆,   W |X ∼ Np (0, σ^2 Θ̂Σ̂Θ̂T ),
and as n, p → ∞,
P(k∆k∞ > A2 s log(p)/√n) → 0.
Proof. Consider the sequence of events Λn described by the following properties:
• φ^2_{Σ̂,s} ≥ cmin /2 and φ^2_{Σ̂_{−j,−j} ,sj} ≥ cmin /2 for all j,
We now seek a lower bound for the τ̂j^2 . Consider the linear models in (4.3.2). Note that the maximum eigenvalue of Ω is at most c_{min}^{−1} , so Ωjj ≤ c_{min}^{−1} . Also, Ω_{jj}^{−1} = Var(Xij |Xi,−j ) ≤ Var(Xij ) = Σjj ≤ 1. Thus applying Theorem 23 to the linear models (4.3.2), we know that
kγ (j) − γ̂ (j) k1 ≤ c2 sj √(log(p)/n).
Then
τ̂j^2 ≥ (1/n)kXj − X−j γ̂ (j) k2^2 ≥ (1/n)kε(j) k2^2 − (2/n)kX_{−j}^T ε(j) k∞ kγ (j) − γ̂ (j) k1
≥ Ω_{jj}^{−1} (1 − 4√(log(p)/n)) − c4 smax log(p)/n
≥ cmin /2
for all j when n is sufficiently large. Putting things together we see that on Λn ,
k∆k∞ ≤ λ√n kβ̂ − β 0 k1 max_j τ̂_j^{−2} ≤ 2A1 √(log(p)) · c1 s√(log(p)/n) / cmin ≤ A2 s log(p)/√n.
In view of Theorem 40, we have b̂j ≈ βj^0 + Wj /√n, where Wj ∼ N (0, σ^2 (Θ̂Σ̂Θ̂T )jj ). Let (Θ̂Σ̂Θ̂T )jj = dj . This approximate equality suggests constructing (1 − α)-level confidence intervals of the form
[ b̂j − zα/2 σ√(dj )/√n, b̂j + zα/2 σ√(dj )/√n ],
where zα is the upper α point of a standard normal. The only unknown quantity in the
confidence interval above is σ: this can be estimated [Sun and Zhang, 2012].
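Putting the pieces together, here is a compact sketch of the whole construction (my own illustration, using scikit-learn's Lasso solver, treating σ as known and with arbitrary tuning parameters; in practice σ would be estimated as just described). It computes b̂ via the equivalent expression b̂ = β̂ + Θ̂X T (Y − X β̂)/n, which equals β̂ + λΘ̂ν̂ by the KKT conditions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso    # assumes scikit-learn is installed

def debiased_lasso(X, Y, lam, lam_j=None, sigma=1.0, level=0.95):
    """Debiased Lasso with the nodewise-regression construction of Theta_hat.
    Returns b_hat and confidence intervals at the given level (sigma treated as known)."""
    n, p = X.shape
    lam_j = lam if lam_j is None else lam_j
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, Y).coef_
    Sigma_hat = X.T @ X / n
    Theta_hat = np.zeros((p, p))
    for j in range(p):
        X_mj = np.delete(X, j, axis=1)
        gamma_j = Lasso(alpha=lam_j, fit_intercept=False).fit(X_mj, X[:, j]).coef_
        resid = X[:, j] - X_mj @ gamma_j
        tau2_j = X[:, j] @ resid / n
        # theta_hat_j from the notes, with the overall minus sign distributed through
        Theta_hat[j] = np.insert(-gamma_j, j, 1.0) / tau2_j
    b_hat = beta_hat + Theta_hat @ X.T @ (Y - X @ beta_hat) / n
    d = np.diag(Theta_hat @ Sigma_hat @ Theta_hat.T)
    hw = norm.ppf(0.5 + level / 2) * sigma * np.sqrt(d / n)
    return b_hat, np.column_stack([b_hat - hw, b_hat + hw])

# Small simulated example with three non-zero coefficients equal to 1.
rng = np.random.default_rng(3)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta0 = np.zeros(p)
beta0[:3] = 1.0
Y = X @ beta0 + rng.normal(size=n)
b_hat, cis = debiased_lasso(X, Y, lam=0.1)
print(cis[:5])   # intervals for the first five coefficients
```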
Bibliography
Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: a practical and
powerful approach to multiple testing. Journal of the Royal Statistical Society, Series
B, 57:289–300, 1995.
L. Breiman. Stacked regressions. Machine Learning, 24:49–64, 1996.
C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297,
1995.
A. Hoerl and R. Kennard. Ridge regression: Biased estimation for nonorthogonal problems.
Technometrics, pages 55–67, 1970.
G. S. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. The Annals of Mathematical Statistics, 41(2):
495–502, 1970.
R. Marcus, E. Peritz, and K. R. Gabriel. On closed testing procedures with special reference
to ordered analysis of variance. Biometrika, 63(3):655–660, 1976.
N. Meinshausen. Relaxed lasso. Computational Statistics and Data Analysis, 52:374–393,
2007.
N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with
the lasso. Annals of Statistics, 34:1436–1462, 2006.
A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in
Neural Information Processing Systems, pages 1177–1184, 2007.
B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In Inter-
national Conference on Computational Learning Theory, pages 416–426. Springer, 2001.
P. Spirtes, C. N. Glymour, and R. Scheines. Causation, prediction, and search. MIT press,
2000.
T. Sun and C.-H. Zhang. Scaled sparse linear regression. Biometrika, 2012.
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society, Series B, 58:267–288, 1996.
R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smooth-
ness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 67(1):91–108, 2005.
Y. Yang, M. Pilanci, and M. J. Wainwright. Randomized sketches for kernels: Fast and
optimal non-parametric regression. arXiv preprint arXiv:1501.06195, 2015.
M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables.
Journal of the Royal Statistical Society: Series B, 68:49–67, 2006.
M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model.
Biometrika, 94(1):19–35, 2007.
C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The
Annals of Statistics, pages 894–942, 2010.
C.-H. Zhang and S. S. Zhang. Confidence intervals for low dimensional parameters in high
dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 76(1):217–242, 2014.
H. Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical
Association, 101:1418–1429, 2006.