
The Risk of Machine Learning

Alberto Abadie (MIT)          Maximilian Kasy (Harvard University)

arXiv:1703.10935v1 [stat.ML] 31 Mar 2017

April 3, 2017

Abstract

Many applied settings in empirical economics involve simultaneous estimation of a large number of parameters. In particular, applied economists are often interested in estimating the effects of many-valued treatments (like teacher effects or location effects), treatment effects for many groups, and prediction models with many regressors. In these settings, machine learning methods that combine regularized estimation and data-driven choices of regularization parameters are useful to avoid over-fitting. In this article, we analyze the performance of a class of machine learning estimators that includes ridge, lasso and pretest in contexts that require simultaneous estimation of many parameters. Our analysis aims to provide guidance to applied researchers on (i) the choice between regularized estimators in practice and (ii) data-driven selection of regularization parameters. To address (i), we characterize the risk (mean squared error) of regularized estimators and derive their relative performance as a function of simple features of the data generating process. To address (ii), we show that data-driven choices of regularization parameters, based on Stein's unbiased risk estimate or on cross-validation, yield estimators with risk uniformly close to the risk attained under the optimal (unfeasible) choice of regularization parameters. We use data from recent examples in the empirical economics literature to illustrate the practical applicability of our results.

Alberto Abadie, Department of Economics, Massachusetts Institute of Technology, [email protected]. Maximilian Kasy, Department of Economics, Harvard University, [email protected]. We thank Gary Chamberlain, Ellora Derenoncourt, Jiaying Gu, Jérémy L'Hour, José Luis Montiel Olea, Jann Spiess, Stefan Wager and seminar participants at several institutions for helpful comments and discussions.
1 Introduction

Applied economists often confront problems that require estimation of a large number
of parameters. Examples include (a) estimation of causal (or predictive) effects for a
large number of treatments such as neighborhoods or cities, teachers, workers and firms,
or judges; (b) estimation of the causal effect of a given treatment for a large number
of subgroups; and (c) prediction problems with a large number of predictive covariates or
transformations of covariates. The machine learning literature provides a host of estimation
methods, such as ridge, lasso, and pretest, which are particularly well adapted to high-
dimensional problems. In view of the variety of available methods, the applied researcher
faces the question of which of these procedures to adopt in any given situation. This article
provides guidance on this choice based on the study of the risk properties (mean squared
error) of a class of regularization-based machine learning methods.
A practical concern that generally motivates the adoption of machine learning proce-
dures is the potential for severe over-fitting in high-dimensional settings. To avoid over-
fitting, most machine learning procedures for “supervised learning” (that is, regression and
classification methods for prediction) involve two key features, (i) regularized estimation
and (ii) data-driven choice of regularization parameters. These features are also central to
more familiar non-parametric estimation methods in econometrics, such as kernel or series
regression.

Setup In this article, we consider the canonical problem of estimating the unknown
means, µ1 , . . . , µn , of a potentially large set of observed random variables, X1 , . . . , Xn .
After some transformations, our setup covers applications (a)-(c) mentioned above and
many others. For example, in the context of a randomized experiment with n subgroups,
Xi is the difference in the sample averages of an outcome variable between treated and
non-treated for subgroup i, and µi is the average treatment effect on the same outcome
and subgroup. Moreover, as we discuss in Section 2.1, the many means problem analyzed in
this article encompasses the problem of nonparametric estimation of a regression function.
We consider componentwise estimators of the form µ̂i = m(Xi, λ), where λ is a non-negative regularization parameter. Typically, m(x, 0) = x, so that λ = 0 corresponds to the unregularized estimator µ̂i = Xi. Positive values of λ typically correspond to regularized estimators, which shrink towards zero, |µ̂i| ≤ |Xi|. The value λ = ∞ typically implies maximal shrinkage: µ̂i = 0 for i = 1, . . . , n. Shrinkage towards zero is a convenient normalization but it is not essential. Shifting Xi by a constant to Xi − c, for i = 1, . . . , n, results in shrinkage towards c.

The risk function of regularized estimators Our article is structured according to


the two mentioned features of machine learning procedures, regularization and data-driven
choice of regularization parameters. We first focus on feature (i) and study the risk prop-
erties (mean squared error) of regularized estimators with fixed and with oracle-optimal
regularization parameters. We show that for any given data generating process there is an
(infeasible) risk-optimal regularized componentwise estimator. This estimator has the form
of the posterior mean of µI given XI and given the empirical distribution of µ1 , . . . , µn ,
where I is a random variable with uniform distribution on the set of indices {1, 2, . . . , n}.
The optimal regularized estimator is useful to characterize the risk properties of machine
learning estimators. It turns out that, in our setting, the risk function of any regularized
estimator can be expressed as a function of the distance between that regularized estimator
and the optimal one.
Instead of conditioning on µ1 , . . . , µn , one can consider the case where each (Xi , µi ) is
a realization of a random vector (X, µ) with distribution π and a notion of risk that is
integrated over the distribution of µ in the population. For this alternative definition of
risk, we derive results analogous to those of the previous paragraph.
We next turn to a family of parametric models for π. We consider models that allow
for a probability mass at zero in the distribution of µ, corresponding to the notion of
sparsity, while conditional on µ ≠ 0 the distribution of µ is normal around some grand
mean. For these parametric models we derive analytic risk functions under oracle choices
of risk minimizing values for λ, which allow for an intuitive discussion of the relative
performance of alternative estimators. We focus our attention on three estimators that are widespread in the empirical machine learning literature: ridge, lasso, and pretest.
When the point-mass of true zeros is small, ridge tends to perform better than lasso or
pretest. When there is a sizable share of true zeros, the ranking of the estimators depends
on the other characteristics of the distribution of µ: (a) if the non-zero parameters are
smoothly distributed in a vicinity of zero, ridge still performs best; (b) if most of the
distribution of non-zero parameters assigns large probability to a set well-separated from
zero, pretest estimation tends to perform well; and (c) lasso tends to do comparatively well
in intermediate cases that fall somewhere between (a) and (b), and overall is remarkably
robust across the different specifications. This characterization of the relative performance
of ridge, lasso, and pretest is consistent with the results that we obtain for the empirical
applications discussed later in the article.

Data-driven choice of regularization parameters The second part of the article turns
to feature (ii) of machine learning estimators and studies the data-driven choice of reg-
ularization parameters. We consider choices of regularization parameters based on the
minimization of a criterion function that estimates risk. Ideally, a machine learning esti-
mator evaluated at a data-driven choice of the regularization parameter would have a risk
function that is uniformly close to the risk function of the infeasible estimator using an
oracle-optimal regularization parameter (which minimizes true risk). We show this type of
uniform consistency can be achieved under fairly mild conditions whenever the dimension
of the problem under consideration is large. This is in stark contrast to well-known results
in Leeb and Pötscher (2006) for low-dimensional settings. We further provide fairly weak
conditions under which machine learning estimators with data-driven choices of the regular-
ization parameter, based on Stein’s unbiased risk estimate (SURE) and on cross-validation
(CV), attain uniform risk consistency. In addition to allowing data-driven selection of reg-
ularization parameters, uniformly consistent estimation of the risk of shrinkage estimators
can be used to select among alternative shrinkage estimators on the basis of their estimated
risk in specific empirical settings.

Applications We illustrate our results in the context of three applications taken from the
empirical economics literature. The first application uses data from Chetty and Hendren
(2015) to study the effects of locations on intergenerational earnings mobility of children.
The second application uses data from the event-study analysis in Della Vigna and La Ferrara (2010), who investigate whether the stock prices of weapon-producing companies react to changes in the intensity of conflicts in countries under arms trade embargoes. The third
application considers nonparametric estimation of a Mincer equation using data from the
Current Population Survey (CPS), as in Belloni and Chernozhukov (2011). The presence
of many neighborhoods in the first application, many weapon producing companies in the
second one, and many series regression terms in the third one makes these estimation
problems high-dimensional.
These examples showcase how simple features of the data generating process affect
the relative performance of machine learning estimators. They also illustrate the way
in which consistent estimation of the risk of shrinkage estimators can be used to choose
regularization parameters and to select among different estimators in practice. For the
estimation of location effects in Chetty and Hendren (2015) we find estimates that are
not overly dispersed around their mean and no evidence of sparsity. In this setting, ridge
outperforms lasso and pretest in terms of estimated mean squared error. In the setting of
the event-study analysis in Della Vigna and La Ferrara (2010), our results suggest that a
large fraction of values of parameters are closely concentrated around zero, while a smaller
but non-negligible fraction of parameters are positive and substantially separated from zero.
In this setting, pretest dominates. Similarly to the result for the setting in Della Vigna and
La Ferrara (2010), the estimation of the parameters of a Mincer equation in Belloni and
Chernozhukov (2011) suggests a sparse approximation to the distribution of parameters.
Substantial shrinkage at the tails of the distribution is still helpful in this setting, so that
lasso dominates.

Roadmap The rest of this article is structured as follows. Section 2 introduces our setup:
the canonical problem of estimating a vector of means under quadratic loss. Section 2.1 discusses a series of examples from empirical economics that are covered by our setup. Section 2.2 discusses the setup of this article in the context of the machine learning literature
and of the older literature on estimation of normal means. Section 3 provides character-
izations of the risk function of regularized estimators in our setting. We derive a general
characterization in Section 3.1. Sections 3.2 and 3.3 provide analytic formulas for risk un-
der additional assumptions. In particular, in Section 3.3 we derive analytic formulas for
risk in a spike-and-normal model. These characterizations allow for a comparison of the
mean squared error of alternative procedures and yield recommendations for the choice of
an estimator. Section 4 turns to data-driven choices of regularization parameters. We show
uniform risk consistency results for Stein’s unbiased risk estimate and for cross-validation.
Section 5 discusses extensions and explains the apparent contradiction between our results
and those in Leeb and Pötscher (2005). Section 6 reports simulation results. Section 7 dis-
cusses several empirical applications. Section 8 concludes. The appendix contains proofs
and supplemental materials.

2 Setup

Throughout this paper, we consider the following setting. We observe a realization of an n-vector of real-valued random variables, X = (X1, . . . , Xn)′, where the components of X are mutually independent with finite mean µi and finite variance σi², for i = 1, . . . , n. Our goal is to estimate µ1, . . . , µn.
In many applications, the Xi arise as preliminary least squares estimates of the coeffi-
cients of interest, µi . Consider, for instance, a randomized controlled trial where random-
ization of treatment assignment is carried out separately for n non-overlapping subgroups.
Within each subgroup, the difference in the sample averages between treated and control
units, Xi , has mean equal to the average treatment effect for that group in the population,
µi . Further examples are discussed in Section 2.1 below.
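As a concrete illustration of this setup, the following minimal simulation sketch (with made-up group sizes, unit-variance outcomes, and an arbitrary effect distribution) generates Xi as treated-minus-control differences in sample means, so that E[Xi] = µi for each subgroup:

```python
import numpy as np

rng = np.random.default_rng(5)

n_groups = 200          # number of subgroups (illustrative)
m_per_arm = 50          # units per treatment arm within each subgroup (illustrative)
mu = rng.normal(0.5, 1.0, size=n_groups)     # true average treatment effects

# Within each subgroup, X_i is the difference in sample averages between
# treated and control units, so that E[X_i] = mu_i.
y_treated = mu[:, None] + rng.normal(size=(n_groups, m_per_arm))
y_control = rng.normal(size=(n_groups, m_per_arm))
x = y_treated.mean(axis=1) - y_control.mean(axis=1)

print(x[:3], mu[:3])
```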

Componentwise estimators We restrict our attention to componentwise estimators of
µi ,
µ̂i = m(Xi, λ),

where m : R × [0, ∞] → R defines an estimator of µi as a function of Xi and a non-negative regularization parameter, λ. The parameter λ is common across the components i but might depend on the vector X. We study data-driven choices λ̂ in Section 4 below, focusing in particular on Stein's unbiased risk estimate (SURE) and cross-validation (CV).
Popular estimators of this componentwise form are ridge, lasso, and pretest. They are
defined as follows:

$$\begin{aligned}
m_R(x, \lambda) &= \operatorname*{argmin}_{m \in \mathbb{R}} \; (x - m)^2 + \lambda m^2 && \text{(ridge)} \\
&= \frac{1}{1 + \lambda}\, x, \\
m_L(x, \lambda) &= \operatorname*{argmin}_{m \in \mathbb{R}} \; (x - m)^2 + 2\lambda |m| && \text{(lasso)} \\
&= 1(x < -\lambda)(x + \lambda) + 1(x > \lambda)(x - \lambda), \\
m_{PT}(x, \lambda) &= \operatorname*{argmin}_{m \in \mathbb{R}} \; (x - m)^2 + \lambda^2\, 1(m \neq 0) && \text{(pretest)} \\
&= 1(|x| > \lambda)\, x,
\end{aligned}$$

where 1(A) denotes the indicator function, which equals 1 if A holds and 0 otherwise.
Figure 1 plots mR (x, λ), mL (x, λ) and mP T (x, λ) as functions of x. For reasons apparent in
Figure 1, ridge, lasso, and pretest estimators are sometimes referred to as linear shrinkage,
soft thresholding, and hard thresholding, respectively. As we discuss below, the problem
of determining the optimal choice among these estimators in terms of minimizing mean
squared error is equivalent to the problem of determining which of these estimators best
approximates a certain optimal estimating function, m∗ .
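The closed forms above translate directly into code; the following is a minimal NumPy sketch of the three estimating functions (the function names are ours, chosen for illustration):

```python
import numpy as np

def m_ridge(x, lam):
    # Linear shrinkage: x / (1 + lambda).
    return x / (1.0 + lam)

def m_lasso(x, lam):
    # Soft thresholding: shrink |x| by lambda, truncate at zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def m_pretest(x, lam):
    # Hard thresholding: keep x if |x| > lambda, set to zero otherwise.
    return np.where(np.abs(x) > lam, x, 0.0)
```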
Let µ = (µ1, . . . , µn)′ and µ̂ = (µ̂1, . . . , µ̂n)′, where for simplicity we leave the dependence of µ̂ on λ implicit in our notation. Let P1, . . . , Pn be the distributions of X1, . . . , Xn, and let P = (P1, . . . , Pn).

Loss and risk  We evaluate estimates based on the squared error loss function, or compound loss,
$$L_n(X, m(\cdot, \lambda), P) = \frac{1}{n} \sum_{i=1}^n \big( m(X_i, \lambda) - \mu_i \big)^2,$$
where Ln depends on P via µ. We will use expected loss to rank estimators. There are different ways of taking this expectation, resulting in different risk functions, and the distinction between them is conceptually important.
Componentwise risk fixes Pi and considers the expected squared error of µ̂i as an estimator of µi,
$$R(m(\cdot, \lambda), P_i) = E\big[ (m(X_i, \lambda) - \mu_i)^2 \mid P_i \big].$$
Compound risk averages componentwise risk over the empirical distribution of Pi across the components i = 1, . . . , n. Compound risk is given by the expectation of compound loss Ln given P,
$$R_n(m(\cdot, \lambda), P) = E\big[ L_n(X, m(\cdot, \lambda), P) \mid P \big] = \frac{1}{n} \sum_{i=1}^n E\big[ (m(X_i, \lambda) - \mu_i)^2 \mid P_i \big] = \frac{1}{n} \sum_{i=1}^n R(m(\cdot, \lambda), P_i).$$
Finally, integrated (or empirical Bayes) risk considers P1, . . . , Pn to be themselves draws from some population distribution, Π. This induces a joint distribution, π, for (Xi, µi). Throughout the article, we will often use a subscript π to denote characteristics of the joint distribution of (Xi, µi). Integrated risk refers to loss integrated over π or, equivalently, componentwise risk integrated over Π,
$$\bar{R}(m(\cdot, \lambda), \pi) = E_\pi\big[ L_n(X, m(\cdot, \lambda), P) \big] = E_\pi\big[ (m(X_i, \lambda) - \mu_i)^2 \big] = \int R(m(\cdot, \lambda), P_i) \, d\Pi(P_i). \qquad (1)$$

Notice the similarity between compound risk and integrated risk: they differ only by re-
placing an empirical (sample) distribution by a population distribution. For large n, the
difference between the two vanishes, as we will explore in Section 4.
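For a fixed λ and a fixed parameter vector µ, compound loss is a simple average, and compound risk can be approximated by averaging that loss over simulated draws of X. A small sketch (ours, using ridge and normal noise for concreteness):

```python
import numpy as np

rng = np.random.default_rng(0)

def m_ridge(x, lam):
    return x / (1.0 + lam)

def compound_loss(x, mu, lam, m=m_ridge):
    # L_n = (1/n) * sum_i (m(X_i, lambda) - mu_i)^2
    return np.mean((m(x, lam) - mu) ** 2)

# Fix mu (the unknown means) and approximate compound risk R_n by
# averaging compound loss over independent draws of X given mu.
n = 1000
mu = rng.normal(0.0, 2.0, size=n)      # illustrative choice of mu
sigma = 1.0
lam = 0.5
losses = [compound_loss(mu + sigma * rng.normal(size=n), mu, lam)
          for _ in range(200)]
print("simulated compound risk:", np.mean(losses))
```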

Regularization parameter  Throughout, we will use Rn(m(·, λ), P) to denote the risk function of the estimator m(·, λ) with fixed (non-random) λ, and similarly for R̄(m(·, λ), π). In contrast, Rn(m(·, λ̂n), P) is the risk function taking into account the randomness of λ̂n, where the latter is chosen in a data-dependent manner, and similarly for R̄(m(·, λ̂n), π).
For a given P, we define the "oracle" selector of the regularization parameter as the value of λ that minimizes compound risk,
$$\lambda^*(P) = \operatorname*{argmin}_{\lambda \in [0,\infty]} R_n(m(\cdot, \lambda), P),$$
whenever the argmin exists. We use λ∗R(P), λ∗L(P) and λ∗PT(P) to denote the oracle selectors for ridge, lasso, and pretest, respectively. Analogously, for a given π, we define
$$\bar{\lambda}^*(\pi) = \operatorname*{argmin}_{\lambda \in [0,\infty]} \bar{R}(m(\cdot, \lambda), \pi) \qquad (2)$$
whenever the argmin exists, with λ̄∗R(π), λ̄∗L(π), and λ̄∗PT(π) for ridge, lasso, and pretest, respectively. In Section 3, we characterize compound and integrated risk for fixed λ and for the oracle-optimal λ. In Section 4 we show that data-driven choices λ̂n are, under certain conditions, as good as the oracle-optimal choice, in a sense to be made precise.
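Although λ∗(P) is infeasible in practice, for a known data generating process it can be approximated numerically by minimizing simulated compound risk over a grid of regularization parameters. The following sketch (ours, for ridge, with an arbitrary truncated grid standing in for [0, ∞]) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def m_ridge(x, lam):
    return x / (1.0 + lam)

def simulated_compound_risk(mu, sigma, lam, reps=500):
    # Monte Carlo approximation of R_n(m_R(., lambda), P) for fixed mu.
    draws = mu + sigma * rng.normal(size=(reps, mu.size))
    return np.mean((m_ridge(draws, lam) - mu) ** 2)

mu = np.concatenate([np.zeros(500), rng.normal(2.0, 1.0, size=500)])
sigma = 1.0
grid = np.linspace(0.0, 5.0, 101)        # truncated grid used for illustration
risks = [simulated_compound_risk(mu, sigma, lam) for lam in grid]
lam_oracle = grid[int(np.argmin(risks))]
print("approximate oracle lambda for ridge:", lam_oracle)
```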

2.1 Empirical examples

Our setup describes a variety of settings often encountered in empirical economics, where
X1 , . . . , Xn are unbiased or close-to-unbiased but noisy least squares estimates of a set of
parameters of interest, µ1 , . . . , µn . As mentioned in the introduction, examples include (a)
studies estimating causal or predictive effects for a large number of treatments such as
neighborhoods, cities, teachers, workers, firms, or judges; (b) studies estimating the causal
effect of a given treatment for a large number of subgroups; and (c) prediction problems
with a large number of predictive covariates or transformations of covariates.

Large number of treatments Examples in the first category include Chetty and Hen-
dren (2015), who estimate the effect of geographic locations on intergenerational mobility
for a large number of locations. Chetty and Hendren use differences between the outcomes of siblings whose parents move during their childhood in order to identify these effects. The
problem of estimating a large number of parameters also arises in the teacher value-added
literature when the objects of interest are individual teachers’ effects, see, for instance,
Chetty, Friedman, and Rockoff (2014). In labor economics, estimation of firm and worker
effects in studies of wage inequality has been considered in Abowd, Kramarz, and Margo-
lis (1999). Another example within the first category is provided by Abrams, Bertrand,
and Mullainathan (2012), who estimate differences in the effects of defendant’s race on
sentencing across individual judges.

Treatment for large number of subgroups Within the second category, which con-
sists of estimating the effect of a treatment for many sub-populations, our setup can be
applied to the estimation of heterogeneous causal effects of class size on student outcomes
across many subgroups. For instance, project STAR (Krueger, 1999) involved experimental
assignment of students to classes of different sizes in 79 schools. Causal effects for many
subgroups are also of interest in medical contexts or for active labor market programs,
where doctors / policy makers have to decide on treatment assignment based on individual
characteristics. In some empirical settings, treatment impacts are individually estimated
for each sample unit. This is often the case in empirical finance, where event studies are
used to estimate reactions of stock market prices to newly available information. For ex-
ample, Della Vigna and La Ferrara (2010) estimate the effects of changes in the intensity
of armed conflicts in countries under arms trade embargoes on the stock market prices of
arms-manufacturing companies.

Prediction with many regressors The third category is prediction with many regres-
sors. This category fits in the setting of this article after orthogonalization of the regressors.
Prediction with many regressors arises, in particular, in macroeconomic forecasting. Stock
and Watson (2012), in an analysis complementing the present article, evaluate various pro-
cedures in terms of their forecast performance for a number of macroeconomic time series
for the United States. Regression with many predictors also arises in series regression, where series terms are transformations of a set of predictors. Series regression and its
asymptotic properties have been widely studied in econometrics (see for instance Newey,
1997). Wasserman (2006, Sections 7.2-7.3) provides an illuminating discussion of the equiv-
alence between the normal means model studied in this article and nonparametric regres-
sion estimation. For that setting, X1 , . . . , Xn and µ1 , . . . , µn correspond to the estimated
and true regression coefficients on an orthogonal basis of functions. Application of lasso
and pretesting to series regression is discussed, for instance, in Belloni and Chernozhukov
(2011). Appendix A.1 further discusses the relationship between the normal means model
and prediction models.
In Section 7, we return to three of these applications, revisiting the estimation of lo-
cation effects on intergenerational mobility, as in Chetty and Hendren (2015), the effect
of changes in the intensity of conflicts in arms-embargo countries on the stock prices of
arms manufacturers, as in Della Vigna and La Ferrara (2010), and nonparametric series
estimation of a Mincer equation, as in Belloni and Chernozhukov (2011).

2.2 Statistical literature

Machine learning methods are becoming widespread in econometrics – see, for instance,
Athey and Imbens (2015) and Kleinberg, Ludwig, Mullainathan, and Obermeyer (2015).
A large number of estimation procedures are available to the applied researcher. Textbooks
such as Hastie, Tibshirani, and Friedman (2009) or Murphy (2012) provide an introduction
to machine learning. Lasso, which was first introduced by Tibshirani (1996), is becoming
particularly popular in applied economics. Belloni and Chernozhukov (2011) provide a
review of lasso including theoretical results and applications in economics.
Much of the research on machine learning focuses on algorithms and computational
issues, while the formal statistical properties of machine learning estimators have received
less attention. However, an older and superficially unrelated literature in mathematical
statistics and statistical decision theory on the estimation of the normal means model has
produced many deep results which turn out to be relevant for understanding the behavior of
estimation procedures in non-parametric statistics and machine learning. A foundational article in this literature is James and Stein (1961), who study the case Xi ∼ N(µi, 1). They show that the estimator µ̂ = X is inadmissible whenever n ≥ 3. That is, there exists a (shrinkage) estimator that has mean squared error smaller than the mean squared error of µ̂ = X for all values of µ. Brown (1971) provides more general characterizations
of admissibility and shows that this dependence on dimension is deeply connected to the
recurrence or transience of Brownian motion. Stein et al. (1981) characterizes the risk
function of arbitrary estimators, µ̂, and based on this characterization proposes an unbiased
estimator of the mean squared error of a given estimator, labeled “Stein’s unbiased risk
estimator” or SURE. We return to SURE in Section 4.2 as a method to produce data-
driven choices of regularization parameters. In section 4.3, we discuss cross-validation as
an alternative method to obtain data-driven choices of regularization parameters in the
context studied in this article.1
A general approach for the construction of regularized estimators, such as the one
proposed by James and Stein (1961), is provided by the empirical Bayes framework, first
proposed in Robbins (1956) and Robbins (1964). A key insight of the empirical Bayes
framework, and the closely related compound decision problem framework, is that trying
to minimize squared error in higher dimensions involves a trade-off across components of the
estimand. The data are informative about which estimators and regularization parameters
perform well in terms of squared error and thus allow one to construct regularized estimators
that dominate the unregularized µ̂ = X. This intuition is elaborated on in Stigler (1990).
The empirical Bayes framework was developed further by Efron and Morris (1973) and
Morris (1983), among others. Good reviews and introductions can be found in Zhang
(2003) and Efron (2010).
In Section 4 we consider data-driven choices of regularization parameters and emphasize
uniform validity of asymptotic approximations to the risk function of the resulting estima-
tors. Lack of uniform validity of standard asymptotic characterizations of risk (as well as of
test size) in the context of pretest and model-selection based estimators in low-dimensional
settings has been emphasized by Leeb and Pötscher (2005).
1 See, e.g., Arlot and Celisse (2010) for a survey on cross-validation methods for model selection.

While in this article we study risk-optimal estimation of µ, a related literature has
focused on the estimation of confidence sets for the same parameter. Wasserman (2006,
Section 7.8) and Casella and Hwang (2012) survey some results in this literature. Efron (2010) studies hypothesis testing in high dimensional settings from an empirical Bayes
perspective.

3 The risk function

We now turn to our first set of formal results, which pertain to the mean squared error
of regularized estimators. Our goal is to guide the researcher’s choice of estimator by
describing the conditions under which each of the alternative machine learning estimators
performs better than the others.
We first derive a general characterization of the mean squared error of regularized
estimators. This characterization is based on the geometry of estimating functions m as
depicted in Figure 1. It is a priori not obvious which of these functions is best suited for
estimation. We show that for any given data generating process there is an optimal function
m∗P that minimizes mean squared error. Moreover, we show that the mean squared error
for an arbitrary m is equal, up to a constant, to the L2 distance between m and m∗P . A
function m thus yields a good estimator if it is able to approximate the shape of m∗P well.
In Section 3.2, we provide analytic expressions for the componentwise risk of ridge,
lasso, and pretest estimators, imposing the additional assumption of normality. Summing
or integrating componentwise risk over some distribution for (µi , σi ) delivers expressions
for compound and integrated risk.
In Section 3.3, we turn to a specific parametric family of data generating processes where
each µi is equal to zero with probability p, reflecting the notion of sparsity, and is otherwise
drawn from a normal distribution with some mean µ0 and variance σ02 . For this parametric
family indexed by (p, µ0 , σ0 ), we provide analytic risk functions and visual comparisons of
the relative performance of alternative estimators. This allows us to identify key features of
the data generating process which affect the relative performance of alternative estimators.

3.1 General characterization

Recall the setup introduced in Section 2, where we observe n jointly independent random
variables X1 , . . . , Xn , with means µ1 , . . . , µn . We are interested in the mean squared error
for the compound problem of estimating all µ1 , . . . , µn simultaneously. In this formulation
of the problem, µ1 , . . . , µn are fixed unknown parameters.
Let I be a random variable with a uniform distribution over the set {1, 2, . . . , n} and consider the random component (XI, µI) of (X, µ). This construction induces a mixture distribution for (XI, µI) (conditional on P),
$$(X_I, \mu_I) \mid P \;\sim\; \frac{1}{n} \sum_{i=1}^n P_i \, \delta_{\mu_i},$$
where δµ1, . . . , δµn are Dirac measures at µ1, . . . , µn. Based on this mixture distribution, define the conditional expectation
$$m^*_P(x) = E[\mu_I \mid X_I = x, P]$$
and the average conditional variance
$$v^*_P = E\big[ \operatorname{var}(\mu_I \mid X_I, P) \mid P \big].$$
The next theorem characterizes the compound risk of an estimator in terms of the average squared discrepancy relative to m∗P, which implies that m∗P is optimal (lowest mean squared error) for the compound problem.

Theorem 1 (Characterization of risk functions)
Under the assumptions of Section 2 and sup_{λ∈[0,∞]} E[(m(XI, λ))² | P] < ∞, the compound risk function Rn of µ̂i = m(Xi, λ) can be written as
$$R_n(m(\cdot, \lambda), P) = v^*_P + E\big[ (m(X_I, \lambda) - m^*_P(X_I))^2 \mid P \big],$$
which implies
$$\lambda^*(P) = \operatorname*{argmin}_{\lambda \in [0,\infty]} E\big[ (m(X_I, \lambda) - m^*_P(X_I))^2 \mid P \big]$$
whenever λ∗(P) is well defined.

The proof of this theorem and all further results can be found in the appendix.
The statement of this theorem implies that the risk of componentwise estimators is equal
to an irreducible part vP∗ , plus the L2 distance of the estimating function m(., λ) to the
infeasible optimal estimating function m∗P . A given data generating process P maps into
an optimal estimating function m∗P , and the relative performance of alternative estimators
m depends on how well they approximate m∗P .
We can easily write m∗P explicitly because the conditional expectation defining m∗P is a
weighted average of the values taken by µi . Suppose, for example, that Xi ∼ N (µi , 1) for
i = 1, . . . , n. Let φ be the standard normal probability density function. Then,
$$m^*_P(x) = \frac{\sum_{i=1}^n \mu_i \, \phi(x - \mu_i)}{\sum_{i=1}^n \phi(x - \mu_i)}.$$
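This expression is easy to evaluate numerically; the sketch below (ours, assuming unit variances as in the display above) computes m∗P on a grid of x values for a given vector µ1, . . . , µn:

```python
import numpy as np
from scipy.stats import norm

def m_star_P(x, mu):
    # m*_P(x) = sum_i mu_i * phi(x - mu_i) / sum_i phi(x - mu_i), with X_i ~ N(mu_i, 1).
    w = norm.pdf(np.subtract.outer(x, mu))   # shape (len(x), len(mu))
    return (w * mu).sum(axis=1) / w.sum(axis=1)

mu = np.array([0.0, 0.0, 0.0, 2.0, 5.0])
x_grid = np.linspace(-3.0, 8.0, 5)
print(m_star_P(x_grid, mu))
```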

Theorem 1 conditions on the empirical distribution of µ1, . . . , µn, which corresponds to the notion of compound risk. Replacing this empirical distribution by the population distribution π, so that (Xi, µi) ∼ π, results analogous to those in Theorem 1 are obtained for the integrated risk and the integrated oracle selectors in equations (1) and (2). That is, let
$$\bar{m}^*_\pi(x) = E_\pi[\mu_i \mid X_i = x]$$
and
$$\bar{v}^*_\pi = E_\pi[\operatorname{var}_\pi(\mu_i \mid X_i)],$$
and assume sup_{λ∈[0,∞]} Eπ[(m(Xi, λ) − µi)²] < ∞. Then
$$\bar{R}(m(\cdot, \lambda), \pi) = \bar{v}^*_\pi + E_\pi\big[ (m(X_i, \lambda) - \bar{m}^*_\pi(X_i))^2 \big]$$
and
$$\bar{\lambda}^*(\pi) = \operatorname*{argmin}_{\lambda \in [0,\infty]} E_\pi\big[ (m(X_i, \lambda) - \bar{m}^*_\pi(X_i))^2 \big]. \qquad (3)$$

The proof of these assertions is analogous to the proof of Theorem 1. m∗P and m̄∗π are
optimal componentwise estimators or “shrinkage functions” in the sense that they minimize
the compound and integrated risk, respectively.

3.2 Componentwise risk

The characterization of the risk of componentwise estimators in the previous section relies
only on the existence of second moments. Explicit expressions for compound risk and
integrated risk can be derived under additional structure. We shall now consider a setting
in which the Xi are normally distributed,

Xi ∼ N (µi , σi2 ).

This is a particularly relevant scenario in applied research, where the Xi are often unbiased
estimators with a normal distribution in large samples (as in examples (a) to (c) in Sec-
tions 1 and 2.1). For concreteness, we will focus on the three widely used componentwise
estimators introduced in Section 2, ridge, lasso, and pretest, whose estimating functions
m were plotted in Figure 1. The following lemma provides explicit expressions for the
componentwise risk of these estimators.

Lemma 1 (Componentwise risk)
Consider the setup of Section 2. Then, for i = 1, . . . , n, the componentwise risk of ridge is
$$R(m_R(\cdot,\lambda), P_i) = \left(\frac{1}{1+\lambda}\right)^2 \sigma_i^2 + \left(1 - \frac{1}{1+\lambda}\right)^2 \mu_i^2.$$
Assume in addition that Xi has a normal distribution. Then, the componentwise risk of lasso is
$$\begin{aligned}
R(m_L(\cdot,\lambda), P_i) ={}& \left( 1 + \Phi\!\Big(\tfrac{-\lambda-\mu_i}{\sigma_i}\Big) - \Phi\!\Big(\tfrac{\lambda-\mu_i}{\sigma_i}\Big) \right) (\sigma_i^2 + \lambda^2) \\
&+ \left( \tfrac{-\lambda-\mu_i}{\sigma_i}\,\phi\!\Big(\tfrac{\lambda-\mu_i}{\sigma_i}\Big) + \tfrac{-\lambda+\mu_i}{\sigma_i}\,\phi\!\Big(\tfrac{-\lambda-\mu_i}{\sigma_i}\Big) \right) \sigma_i^2 \\
&+ \left( \Phi\!\Big(\tfrac{\lambda-\mu_i}{\sigma_i}\Big) - \Phi\!\Big(\tfrac{-\lambda-\mu_i}{\sigma_i}\Big) \right) \mu_i^2.
\end{aligned}$$
Under the same conditions, the componentwise risk of pretest is
$$\begin{aligned}
R(m_{PT}(\cdot,\lambda), P_i) ={}& \left( 1 + \Phi\!\Big(\tfrac{-\lambda-\mu_i}{\sigma_i}\Big) - \Phi\!\Big(\tfrac{\lambda-\mu_i}{\sigma_i}\Big) \right) \sigma_i^2 \\
&+ \left( \tfrac{\lambda-\mu_i}{\sigma_i}\,\phi\!\Big(\tfrac{\lambda-\mu_i}{\sigma_i}\Big) - \tfrac{-\lambda-\mu_i}{\sigma_i}\,\phi\!\Big(\tfrac{-\lambda-\mu_i}{\sigma_i}\Big) \right) \sigma_i^2 \\
&+ \left( \Phi\!\Big(\tfrac{\lambda-\mu_i}{\sigma_i}\Big) - \Phi\!\Big(\tfrac{-\lambda-\mu_i}{\sigma_i}\Big) \right) \mu_i^2.
\end{aligned}$$
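These risk expressions can be coded up directly; the following sketch (ours, using scipy.stats.norm for Φ and φ) evaluates the componentwise risk of ridge, lasso, and pretest at given (µi, σi, λ):

```python
import numpy as np
from scipy.stats import norm

def risk_ridge(mu, sigma, lam):
    return (sigma / (1.0 + lam)) ** 2 + (1.0 - 1.0 / (1.0 + lam)) ** 2 * mu ** 2

def risk_lasso(mu, sigma, lam):
    a = (-lam - mu) / sigma          # lower standardized threshold
    b = (lam - mu) / sigma           # upper standardized threshold
    return ((1.0 + norm.cdf(a) - norm.cdf(b)) * (sigma ** 2 + lam ** 2)
            + (a * norm.pdf(b) + (-lam + mu) / sigma * norm.pdf(a)) * sigma ** 2
            + (norm.cdf(b) - norm.cdf(a)) * mu ** 2)

def risk_pretest(mu, sigma, lam):
    a = (-lam - mu) / sigma
    b = (lam - mu) / sigma
    return ((1.0 + norm.cdf(a) - norm.cdf(b)) * sigma ** 2
            + (b * norm.pdf(b) - a * norm.pdf(a)) * sigma ** 2
            + (norm.cdf(b) - norm.cdf(a)) * mu ** 2)

# Example: componentwise risk at mu_i = 3, sigma_i = 1.
print(risk_ridge(3.0, 1.0, 1.0), risk_lasso(3.0, 1.0, 2.0), risk_pretest(3.0, 1.0, 4.0))
```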

Figure 2 plots the componentwise risk functions in Lemma 1 as functions of µi (with λ = 1 for ridge, λ = 2 for lasso, and λ = 4 for pretest). It also plots the componentwise risk of the unregularized maximum likelihood estimator, µ̂i = Xi, which is equal to σi². As
Figure 2 suggests, componentwise risk is large for ridge when |µi | is large. The same is true
for lasso, except that risk remains bounded. For pretest, componentwise risk is large when
|µi | is close to λ.
Notice that these functions are plotted for a fixed value of the regularization parameter.
If λ is chosen optimally, then the componentwise risks of ridge, lasso, and pretest are no greater than the componentwise risk of the unregularized maximum likelihood estimator µ̂i = Xi, which is σi². The reason is that ridge, lasso, and pretest nest the unregularized estimator (as the case λ = 0).

3.3 Spike and normal data generating process

If we take the expressions for componentwise risk derived in Lemma 1 and average them
over some population distribution of (µi , σi2 ), we obtain the integrated, or empirical Bayes,
risk. For parametric families of distributions of (µi , σi2 ), this might be done analytically.
We shall do so now, considering a family of distributions that is rich enough to cover
common intuitions about data generating processes, but simple enough to allow for analytic
expressions. Based on these expressions, we characterize scenarios that favor the relative
performance of each of the estimators considered in this article.
We consider a family of distributions for (µi , σi ) such that: (i) µi takes value zero with
probability p and is otherwise distributed as a normal with mean value µ0 and standard deviation σ0, and (ii) σi² = σ². The following proposition derives the optimal estimating
function m̄∗π , as well as integrated risk functions for this family of distributions.

Proposition 1 (Spike and normal data generating process)
Assume π is such that (i) µ1, . . . , µn are drawn independently from a distribution with probability mass p at zero, and normal with mean µ0 and variance σ0² elsewhere, and (ii) conditional on µi, Xi follows a normal distribution with mean µi and variance σ². Then, the optimal shrinkage function is
$$\bar{m}^*_\pi(x) = \frac{(1-p)\,\dfrac{1}{\sqrt{\sigma_0^2+\sigma^2}}\,\phi\!\left(\dfrac{x-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\dfrac{\mu_0\sigma^2 + x\sigma_0^2}{\sigma_0^2+\sigma^2}}{p\,\dfrac{1}{\sigma}\,\phi\!\left(\dfrac{x}{\sigma}\right) + (1-p)\,\dfrac{1}{\sqrt{\sigma_0^2+\sigma^2}}\,\phi\!\left(\dfrac{x-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)}.$$
The integrated risk of ridge is
$$\bar{R}(m_R(\cdot,\lambda),\pi) = \left(\frac{1}{1+\lambda}\right)^2 \sigma^2 + \left(\frac{\lambda}{1+\lambda}\right)^2 (1-p)(\mu_0^2+\sigma_0^2),$$
with
$$\bar{\lambda}^*_R(\pi) = \frac{\sigma^2}{(1-p)(\mu_0^2+\sigma_0^2)}.$$
The integrated risk of lasso is given by
$$\bar{R}(m_L(\cdot,\lambda),\pi) = p\,\bar{R}_0(m_L(\cdot,\lambda),\pi) + (1-p)\,\bar{R}_1(m_L(\cdot,\lambda),\pi),$$
where
$$\bar{R}_0(m_L(\cdot,\lambda),\pi) = 2\Phi\!\left(\frac{-\lambda}{\sigma}\right)(\sigma^2+\lambda^2) - 2\,\frac{\lambda}{\sigma}\,\phi\!\left(\frac{\lambda}{\sigma}\right)\sigma^2,$$
and
$$\begin{aligned}
\bar{R}_1(m_L(\cdot,\lambda),\pi) ={}& \left(1 + \Phi\!\left(\frac{-\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right) - \Phi\!\left(\frac{\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\right)(\sigma^2+\lambda^2) \\
&+ \left(\Phi\!\left(\frac{\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right) - \Phi\!\left(\frac{-\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\right)(\mu_0^2+\sigma_0^2) \\
&- \frac{1}{\sqrt{\sigma_0^2+\sigma^2}}\,\phi\!\left(\frac{\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)(\lambda+\mu_0)(\sigma_0^2+\sigma^2) \\
&- \frac{1}{\sqrt{\sigma_0^2+\sigma^2}}\,\phi\!\left(\frac{-\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)(\lambda-\mu_0)(\sigma_0^2+\sigma^2).
\end{aligned}$$
Finally, the integrated risk of pretest is given by
$$\bar{R}(m_{PT}(\cdot,\lambda),\pi) = p\,\bar{R}_0(m_{PT}(\cdot,\lambda),\pi) + (1-p)\,\bar{R}_1(m_{PT}(\cdot,\lambda),\pi),$$
where
$$\bar{R}_0(m_{PT}(\cdot,\lambda),\pi) = 2\Phi\!\left(\frac{-\lambda}{\sigma}\right)\sigma^2 + 2\,\frac{\lambda}{\sigma}\,\phi\!\left(\frac{\lambda}{\sigma}\right)\sigma^2$$
and
$$\begin{aligned}
\bar{R}_1(m_{PT}(\cdot,\lambda),\pi) ={}& \left(1 + \Phi\!\left(\frac{-\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right) - \Phi\!\left(\frac{\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\right)\sigma^2 \\
&+ \left(\Phi\!\left(\frac{\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right) - \Phi\!\left(\frac{-\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\right)(\mu_0^2+\sigma_0^2) \\
&- \frac{1}{\sqrt{\sigma_0^2+\sigma^2}}\,\phi\!\left(\frac{\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\big(\lambda(\sigma_0^2-\sigma^2) + \mu_0(\sigma_0^2+\sigma^2)\big) \\
&- \frac{1}{\sqrt{\sigma_0^2+\sigma^2}}\,\phi\!\left(\frac{-\lambda-\mu_0}{\sqrt{\sigma_0^2+\sigma^2}}\right)\big(\lambda(\sigma_0^2-\sigma^2) - \mu_0(\sigma_0^2+\sigma^2)\big).
\end{aligned}$$
Notice that, even under substantial sparsity (that is, if p is large), the optimal shrinkage
function, m̄∗π , never shrinks all the way to zero (unless, of course, µ0 = σ0 = 0 or p =
1). This could in principle cast some doubts about the appropriateness of thresholding
estimators, such as lasso or pretest, which induce sparsity in the estimated parameters.
However, as we will see below, despite this stark difference between thresholding estimators
and m̄∗π , lasso and, to a certain extent, pretest are able to approximate the integrated risk
of m̄∗π in the spike and normal model when the degree of sparsity in the parameters of
interest is substantial.
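To illustrate how the formulas in Proposition 1 are used, the sketch below (ours) evaluates the optimal shrinkage function and the ridge expressions, including the closed-form oracle parameter λ̄∗R(π), for an arbitrary spike-and-normal specification:

```python
import numpy as np
from scipy.stats import norm

def ridge_risk(lam, p, mu0, sigma0, sigma):
    # Integrated risk of ridge under the spike-and-normal model (Proposition 1).
    second_moment = (1.0 - p) * (mu0 ** 2 + sigma0 ** 2)   # E[mu^2]
    return (sigma / (1.0 + lam)) ** 2 + (lam / (1.0 + lam)) ** 2 * second_moment

def ridge_lambda_star(p, mu0, sigma0, sigma):
    # Oracle regularization parameter: sigma^2 / ((1 - p)(mu0^2 + sigma0^2)).
    return sigma ** 2 / ((1.0 - p) * (mu0 ** 2 + sigma0 ** 2))

def m_star(x, p, mu0, sigma0, sigma):
    # Optimal shrinkage function (posterior mean of mu given X = x).
    tau = np.sqrt(sigma0 ** 2 + sigma ** 2)
    f0 = p * norm.pdf(x, 0.0, sigma)                 # spike component
    f1 = (1.0 - p) * norm.pdf(x, mu0, tau)           # normal component
    post_mean = (mu0 * sigma ** 2 + x * sigma0 ** 2) / tau ** 2
    return f1 * post_mean / (f0 + f1)

p, mu0, sigma0, sigma = 0.5, 3.0, 1.0, 1.0
lam = ridge_lambda_star(p, mu0, sigma0, sigma)
print(lam, ridge_risk(lam, p, mu0, sigma0, sigma), m_star(2.0, p, mu0, sigma0, sigma))
```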

Visual representations While it is difficult to directly interpret the risk formulas in


Proposition 1, plotting these formulas as functions of the parameters governing the data
generating process elucidates some crucial aspects of the risk of the corresponding estima-
tors. Figure 3 does so, plotting the minimal integrated risk function of the different estima-
tors. Each of the four subplots in Figure 3 is based on a fixed value of p ∈ {0, 0.25, 0.5, 0.75},

18
with µ0 and σ02 varying along the bottom axes. For each value of the triple (p, µ0 , σ0 ), Fig-
ure 3 reports minimal integrated risk of each estimator (minimized over λ ∈ [0, ∞]). As
a benchmark, Figure 3 reports the risk of the optimal shrinkage function, m̄∗π , simulated
over 10 million repetitions. Figure 4 maps the regions of parameter values over which each
of the three estimators, ridge, lasso, or pretest, performs best in terms of integrated risk.
Figures 3 and 4 provide some useful insights on the performance of shrinkage estimators.
With no true zeros, ridge performs better than lasso or pretest. A clear advantage of ridge
in this setting is that, in contrast to lasso or pretest, ridge allows shrinkage without shrink-
ing some observations all the way to zero. As the share of true zeros increases, the relative
performance of ridge deteriorates for pairs (µ0 , σ0 ) away from the origin. Intuitively, lin-
ear shrinkage imposes a disadvantageous trade-off on ridge. Using ridge to heavily shrink
towards the origin in order to fit potential true zeros produces large expected errors for
observations with µi away from the origin. As a result, ridge performance suffers consid-
erably unless much of the probability mass of the distribution of µi is tightly concentrated
around zero. In the absence of true zeros, pretest performs particularly poorly unless the
distribution of µi has much of its probability mass tightly concentrated around zero, in
which case shrinking all the way to zero produces low risk. However, in the presence of
true zeros, pretest performs well when much of the probability mass of the distribution of
µi is located in a set that is well-separated from zero, which facilitates the detection of
true zeros. Intermediate values of µ0 coupled with moderate values of σ0 produce settings where the conditional distributions Xi|µi = 0 and Xi|µi ≠ 0 greatly overlap, inducing sub-
stantial risk for pretest estimation. The risk performance of lasso is particularly robust.
It out-performs ridge and pretest for values of (µ0 , σ0 ) at intermediate distances to the
origin, and uniformly controls risk over the parameter space. This robustness of lasso may
explain its popularity in empirical practice. Despite the fact that, unlike optimal shrink-
age, thresholding estimators impose sparsity, lasso – and to a certain extent – pretest are
able to approximate the integrated risk of the optimal shrinkage function over much of the
parameter space.

All in all, the results in Figures 3 and 4 for the spike and normal case support the
adoption of ridge in empirical applications where there are no reasons to presume the
presence of many true zeros among the parameters of interest. In empirical settings where
many true zeros may be expected, Figures 3 and 4 show that the choice among estimators
in the spike and normal model depends on how well separated the distributions Xi|µi = 0 and Xi|µi ≠ 0 are. Pretest is preferred in the well-separated case, while lasso is preferred
in the non-separated case.

4 Data-driven choice of regularization parameters

In Section 3.3 we adopted a parametric model for the distribution of µi to study the risk
properties of regularized estimators under an oracle choice of the regularization parameter,
λ̄∗(π). In this section, we return to a nonparametric setting and show that it is possible to consistently estimate λ̄∗(π) from the data, X1, . . . , Xn, under some regularity conditions on π. We consider estimates λ̂n of λ̄∗(π) based on Stein's unbiased risk estimate and based on cross-validation. The resulting estimators m(Xi, λ̂n) have risk functions which are uniformly close to those of the infeasible estimators m(Xi, λ̄∗(π)).
The uniformity part of this statement is important and not obvious. Absent uniformity,
asymptotic approximations might misleadingly suggest good behavior, while in fact the fi-
nite sample behavior of proposed estimators might be quite poor for plausible sets of data
generating processes. The uniformity results in this section contrast markedly with other
oracle approximations to risk, most notably approximations which assume that the true
zeros, that is the components i for which µi = 0, are known. Asymptotic approximations
of this latter form are often invoked when justifying the use of lasso and pretest estima-
tors. Such approximations are in general not uniformly valid, as emphasized by Leeb and
Pötscher (2005) and others.

4.1 Uniform loss and risk consistency

For the remainder of the paper we adopt the following short-hand notation:
$$L_n(\lambda) = L_n(X, m(\cdot, \lambda), P) \qquad \text{(compound loss)}$$
$$R_n(\lambda) = R_n(m(\cdot, \lambda), P) \qquad \text{(compound risk)}$$
$$\bar{R}_\pi(\lambda) = \bar{R}(m(\cdot, \lambda), \pi) \qquad \text{(empirical Bayes or integrated risk)}$$
We will now consider estimators λ̂n of λ̄∗(π) that are obtained by minimizing some empirical estimate of the risk function R̄π (possibly up to a constant that depends only on π). The resulting λ̂n is then used to obtain regularized estimators of the form µ̂i = m(Xi, λ̂n). We will show that for large n the compound loss, the compound risk, and the

integrated risk functions of the resulting estimators are uniformly close to the corresponding
functions of the same estimators evaluated at oracle-optimal values of λ. As n → ∞, the
differences between Ln , Rn , and R̄π vanish, so compound loss optimality, compound risk
optimality, and integrated risk optimality become equivalent.
The following theorem establishes our key result for this section. Let Q be a set of
probability distributions for (Xi , µi ). Theorem 2 provides sufficient conditions for uniform
loss consistency over π ∈ Q, namely that (i) the supremum of the difference between the
loss, Ln (λ), and the empirical Bayes risk, R̄π (λ), vanishes in probability uniformly over
π ∈ Q and (ii) that λ̂n is chosen to minimize a uniformly consistent estimator, rn(λ), of the risk function, R̄π(λ) (possibly up to a constant v̄π). Under these conditions, the difference between loss Ln(λ̂n) and the infeasible minimal loss inf_{λ∈[0,∞]} Ln(λ) vanishes in probability uniformly over π ∈ Q.

Theorem 2 (Uniform loss consistency)
Assume
$$\sup_{\pi \in Q} P_\pi\left( \sup_{\lambda \in [0,\infty]} \big| L_n(\lambda) - \bar{R}_\pi(\lambda) \big| > \epsilon \right) \to 0, \qquad \forall \epsilon > 0. \qquad (4)$$
Assume also that there are functions, r̄π(λ), v̄π, and rn(λ) (of (π, λ), π, and ({X_i}_{i=1}^n, λ), respectively) such that R̄π(λ) = r̄π(λ) + v̄π, and
$$\sup_{\pi \in Q} P_\pi\left( \sup_{\lambda \in [0,\infty]} \big| r_n(\lambda) - \bar{r}_\pi(\lambda) \big| > \epsilon \right) \to 0, \qquad \forall \epsilon > 0. \qquad (5)$$
Then,
$$\sup_{\pi \in Q} P_\pi\left( L_n(\hat{\lambda}_n) - \inf_{\lambda \in [0,\infty]} L_n(\lambda) > \epsilon \right) \to 0, \qquad \forall \epsilon > 0,$$
where λ̂n = argmin_{λ∈[0,∞]} rn(λ).

The sufficient conditions given by this theorem, as stated in equations (4) and (5), are
rather high-level. We shall now give more primitive conditions for these requirements to
hold. In Sections 4.2 and 4.3 below, we propose suitable choices of rn (λ) based on Stein’s
unbiased risk estimator (SURE) and cross-validation (CV), and show that equation (5)
holds for these choices of rn (λ).
The following Theorem 3 provides a set of conditions under which equation (4) holds, so
the difference between compound loss and integrated risk vanishes uniformly. Aside from
a bounded moment assumption, the conditions in Theorem 3 impose some restrictions on
the estimating functions, m(x, λ). Lemma 2 below shows that those conditions hold, in
particular, for ridge, lasso, and pretest estimators.

Theorem 3 (Uniform L2-convergence)
Suppose that

1. m(x, λ) is monotonic in λ for all x in R,

2. m(x, 0) = x and lim_{λ→∞} m(x, λ) = 0 for all x in R,

3. sup_{π∈Q} Eπ[X⁴] < ∞.

4. For any ε > 0 there exists a set of regularization parameters 0 = λ0 < . . . < λk = ∞, which may depend on ε, such that
$$E_\pi\big[ (|X - \mu| + |\mu|)\,\big|m(X, \lambda_j) - m(X, \lambda_{j-1})\big| \big] \le \epsilon$$
for all j = 1, . . . , k and all π ∈ Q.

Then,
$$\sup_{\pi \in Q} E_\pi\left[ \sup_{\lambda \in [0,\infty]} \big( L_n(\lambda) - \bar{R}_\pi(\lambda) \big)^2 \right] \to 0. \qquad (6)$$
Notice that finiteness of sup_{π∈Q} Eπ[X⁴] is equivalent to finiteness of sup_{π∈Q} Eπ[µ⁴] and sup_{π∈Q} Eπ[(X − µ)⁴] via Jensen's and Minkowski's inequalities.

Lemma 2
If supπ∈Q Eπ [X 4 ] < ∞, then equation (6) holds for ridge and lasso. If, in addition, X is
continuously distributed with a bounded density, then equation (6) holds for pretest.

Theorem 2 provides sufficient conditions for uniform loss consistency. The following corollary shows that under the same conditions we obtain uniform risk consistency, that is, the integrated risk of the estimator based on the data-driven choice λ̂n becomes uniformly close to the risk of the oracle-optimal λ̄∗(π). For the statement of this corollary, recall that R̄(m(·, λ̂n), π) is the integrated risk of the estimator m(·, λ̂n) using the stochastic (data-dependent) λ̂n.

Corollary 1 (Uniform risk consistency)
Under the assumptions of Theorem 3,
$$\sup_{\pi \in Q} \left( \bar{R}(m(\cdot, \hat{\lambda}_n), \pi) - \inf_{\lambda \in [0,\infty]} \bar{R}_\pi(\lambda) \right) \to 0. \qquad (7)$$

In this section, we have shown that approximations to the risk function of machine
learning estimators based on oracle-knowledge of λ are uniformly valid over π ∈ Q under
mild assumptions. It is worth pointing out that such uniformity is not a trivial result.
This is made clear by comparison to an alternative approximation, sometimes invoked to
motivate the adoption of machine learning estimators, based on oracle-knowledge of true
zeros among µ1 , . . . , µn (see, e.g., Fan and Li 2001). As shown in Appendix A.2, assuming
oracle knowledge of zeros does not yield a uniformly valid approximation.

4.2 Stein’s unbiased risk estimate

Theorem 2 provides sufficient conditions for uniform loss consistency using a general estimator rn of risk. We shall now establish that our conditions apply to a particular choice of rn, known as Stein's unbiased risk estimate (SURE), which was first proposed by Stein
et al. (1981). SURE leverages the assumption of normality to obtain an elegant expression
of risk as an expected sum of squared residuals plus a penalization term.
SURE as originally proposed requires that m be piecewise differentiable as a function
of x, which excludes discontinuous estimators such as the pretest estimator mP T (x, λ). We
provide a generalization in Lemma 3 that allows for discontinuities. This lemma is stated in
terms of integrated risk; with the appropriate modifications, the same result holds verbatim
for compound risk.

Lemma 3 (SURE for piecewise differentiable estimators)


Suppose that µ ∼ ϑ and
X|µ ∼ N (µ, 1).

Let fπ = ϑ ∗ φ be the marginal density of X, where φ is the standard normal density.


Consider an estimator m(X) of µ, and suppose that m(x) is differentiable everywhere in
R\{x1 , . . . , xJ }, but might be discontinuous at {x1 , . . . , xJ }. Let ∇m be the derivative of
m (defined arbitrarily at {x1 , . . . , xJ }), and let ∆mj = limx↓xj m(x) − limx↑xj m(x) for
j ∈ {1, . . . , J}. Assume that Eπ [(m(X) − X)2 ] < ∞, Eπ [∇m(X)] < ∞, and (m(x) −
x)φ(x − µ) → 0 as |x| → ∞ ϑ-a.s. Then,
J
!
X
R̄(m(.), π) = Eπ [(m(X) − X)2 ] + 2 Eπ [∇m(X)] + ∆mj fπ (xj ) − 1.
j=1

The result of this lemma yields an objective function for the choice of λ of the general form we considered in Section 4.1, with v̄π = −1 and
$$\bar{r}_\pi(\lambda) = E_\pi[(m(X, \lambda) - X)^2] + 2 \left( E_\pi[\nabla_x m(X, \lambda)] + \sum_{j=1}^J \Delta m_j(\lambda) \, f_\pi(x_j) \right), \qquad (8)$$
where ∇x m(x, λ) is the derivative of m(x, λ) with respect to its first argument, and {x1, . . . , xJ} may depend on λ. The expression in equation (8) can be estimated using its sample analog,
$$r_n(\lambda) = \frac{1}{n} \sum_{i=1}^n (m(X_i, \lambda) - X_i)^2 + 2 \left( \frac{1}{n} \sum_{i=1}^n \nabla_x m(X_i, \lambda) + \sum_{j=1}^J \Delta m_j(\lambda) \, \hat{f}(x_j) \right), \qquad (9)$$
where f̂(x) is an estimator of fπ(x). This expression can be thought of as a penalized least squares objective function. The following are explicit expressions for the penalty for the cases of ridge, lasso, and pretest:
$$\text{ridge:} \quad \frac{2}{1+\lambda}$$
$$\text{lasso:} \quad \frac{2}{n} \sum_{i=1}^n 1(|X_i| > \lambda)$$
$$\text{pretest:} \quad \frac{2}{n} \sum_{i=1}^n 1(|X_i| > \lambda) + 2\lambda\big(\hat{f}(-\lambda) + \hat{f}(\lambda)\big)$$
The lasso penalty was previously derived in Donoho and Johnstone (1995). Our results allow one to apply SURE estimation of risk to any machine learning estimator, as long as the
conditions of Lemma 3 are satisfied.
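To make the penalized least squares form concrete, the following sketch (ours) evaluates the SURE criterion rn(λ) of equation (9) for ridge and lasso on a grid and selects λ̂n as its minimizer; it assumes unit known variance, as in Lemma 3, and drops the constant v̄π = −1, which does not affect the minimizer:

```python
import numpy as np

def sure_ridge(x, lam):
    # r_n(lambda) for ridge: average squared residual plus penalty 2 / (1 + lambda).
    resid = x / (1.0 + lam) - x
    return np.mean(resid ** 2) + 2.0 / (1.0 + lam)

def sure_lasso(x, lam):
    # r_n(lambda) for lasso: penalty is (2/n) * sum_i 1(|X_i| > lambda).
    fit = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return np.mean((fit - x) ** 2) + 2.0 * np.mean(np.abs(x) > lam)

rng = np.random.default_rng(2)
mu = np.concatenate([np.zeros(500), rng.normal(2.0, 1.0, size=500)])
x = mu + rng.normal(size=mu.size)

grid = np.linspace(0.0, 5.0, 201)
lam_ridge = grid[int(np.argmin([sure_ridge(x, l) for l in grid]))]
lam_lasso = grid[int(np.argmin([sure_lasso(x, l) for l in grid]))]
print("SURE-minimizing lambda:", lam_ridge, lam_lasso)
```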
To apply the uniform risk consistency in Theorem 2, we need to show that equation
(5) holds. That is, we have to show that rn (λ) is uniformly consistent as an estimator of
r̄π (λ). The following lemma provides the desired result.

Lemma 4
Assume the conditions of Theorem 3. Then, equation (5) holds for m(·, λ) equal to mR(·, λ), mL(·, λ). If, in addition,
$$\sup_{\pi \in Q} P_\pi\left( \sup_{x \in \mathbb{R}} \big| |x| \hat{f}(x) - |x| f_\pi(x) \big| > \epsilon \right) \to 0 \qquad \forall \epsilon > 0,$$
then equation (5) holds for m(·, λ) equal to mPT(·, λ).

Identification of m̄∗π Under the conditions of Lemma 3 the optimal regularization pa-
rameter λ̄∗ (π) is identified. In fact, under the same conditions, the stronger result holds
that m̄∗π as defined in Section 3.1 is identified as well (see, e.g., Brown, 1971; Efron, 2011).
The next lemma states the identification result for m̄∗π .

Lemma 5
Under the conditions of Lemma 3, the optimal shrinkage function is given by

m̄∗π (x) = x + ∇ log(fπ (x)). (10)

Several nonparametric empirical Bayes estimators (NPEB) that target m̄∗π (x) have been
proposed (see Brown and Greenshtein, 2009; Jiang and Zhang, 2009, Efron, 2011, and
Koenker and Mizera, 2014). In particular, Jiang and Zhang (2009) derive asymptotic op-
timality results for nonparametric estimation of m̄∗π and provide an estimator based on
the EM-algorithm. The estimator proposed in Koenker and Mizera (2014), which is based
on convex optimization techniques, is particularly attractive, both in terms of computa-
tional properties and because it sidesteps the selection of a smoothing parameter (cf., e.g.,
Brown and Greenshtein, 2009). Both estimators, in Jiang and Zhang (2009) and Koenker
and Mizera (2014), use a discrete distribution over a finite number of values to approximate
the true distribution of µ. In sections 6 and 7, we will use the Koenker-Mizera estimator to
visually compare the shape of this estimated m̄∗π (x) to the shape of ridge, lasso and pretest
estimating functions and to assess the performance of ridge, lasso and pretest relative to
the performance of a nonparametric estimator of m̄∗π .
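Equation (10) suggests a simple plug-in construction: estimate the marginal density fπ of X and differentiate its logarithm numerically. The sketch below is only a rough illustration of this idea based on a Gaussian kernel density estimate; the NPEB estimators cited above, such as Koenker and Mizera (2014), are the more refined implementations used in Sections 6 and 7:

```python
import numpy as np
from scipy.stats import gaussian_kde

def m_star_plugin(x, data, eps=1e-3):
    # Plug-in version of equation (10): m(x) = x + d/dx log f_hat(x),
    # with f_hat a Gaussian kernel density estimate of the marginal of X
    # and the derivative taken by central finite differences.
    f_hat = gaussian_kde(data)
    log_f = lambda t: np.log(f_hat(t))
    return x + (log_f(x + eps) - log_f(x - eps)) / (2.0 * eps)

rng = np.random.default_rng(3)
mu = np.concatenate([np.zeros(500), rng.normal(2.0, 1.0, size=500)])
data = mu + rng.normal(size=mu.size)
print(m_star_plugin(np.array([0.0, 2.0, 4.0]), data))
```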

4.3 Cross-validation

A popular alternative to SURE is cross-validation, which chooses tuning parameters to


optimize out-of-sample prediction. In this section, we investigate data-driven choices of the
regularization parameter in a panel data setting, where multiple observations are available
for each value of µ in the sample.
For i = 1, . . . , n, consider i.i.d. draws, (x1i, . . . , xki, µi, σi), of a random variable (x1, . . . , xk, µ, σ) with distribution π ∈ Q. Assume that the components of (x1, . . . , xk) are i.i.d. conditional on (µ, σ²) and that for each j = 1, . . . , k,
$$E[x_j \mid \mu, \sigma] = \mu, \qquad \operatorname{var}(x_j \mid \mu, \sigma) = \sigma^2.$$
Let
$$X_k = \frac{1}{k} \sum_{j=1}^k x_j \qquad \text{and} \qquad X_{ki} = \frac{1}{k} \sum_{j=1}^k x_{ji}.$$

For concreteness and to simplify notation, we will consider an estimator based on the first k − 1 observations for each group i = 1, . . . , n,
$$\hat{\mu}_{k-1,i} = m(X_{k-1,i}, \lambda),$$
and will use observations xki, for i = 1, . . . , n, as a hold-out sample to choose λ. Similar results hold for alternative sample partitioning choices. The loss function and empirical Bayes risk function of this estimator are given by
$$L_{n,k}(\lambda) = \frac{1}{n} \sum_{i=1}^n \big( m(X_{k-1,i}, \lambda) - \mu_i \big)^2$$
and
$$\bar{R}_{\pi,k}(\lambda) = E_\pi\big[ (m(X_{k-1}, \lambda) - \mu)^2 \big].$$
Consider the following cross-validation estimator,
$$r_{n,k}(\lambda) = \frac{1}{n} \sum_{i=1}^n \big( m(X_{k-1,i}, \lambda) - x_{ki} \big)^2.$$

Lemma 6
Assume Conditions 1 and 2 of Theorem 3 and Eπ[xj²] < ∞, for j = 1, . . . , k. Then,
$$E_\pi[r_{n,k}(\lambda)] = \bar{R}_{\pi,k}(\lambda) + E_\pi[\sigma^2].$$
That is, cross-validation yields an (up to a constant) unbiased estimator for the risk of the estimating function m(Xk−1, λ). The following theorem shows that this result can be strengthened to a uniform consistency result.
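In this panel setting the cross-validation criterion is straightforward to compute; the sketch below (ours, using lasso for concreteness) evaluates rn,k(λ) from an n × k data matrix, averaging the first k − 1 draws for each unit and scoring against the held-out k-th draw:

```python
import numpy as np

def m_lasso(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def cv_criterion(data, lam, m=m_lasso):
    # data: (n, k) array; rows are units, columns are repeated draws x_{1i}, ..., x_{ki}.
    x_train = data[:, :-1].mean(axis=1)     # mean of the first k - 1 draws per unit
    x_hold = data[:, -1]                    # held-out k-th draw
    return np.mean((m(x_train, lam) - x_hold) ** 2)

rng = np.random.default_rng(4)
n, k, sigma = 1000, 5, 1.0
mu = np.concatenate([np.zeros(n // 2), rng.normal(2.0, 1.0, size=n // 2)])
data = mu[:, None] + sigma * rng.normal(size=(n, k))

grid = np.linspace(0.0, 3.0, 121)
lam_cv = grid[int(np.argmin([cv_criterion(data, lam) for lam in grid]))]
print("cross-validated lambda:", lam_cv)
```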

Theorem 4
Assume Conditions 1 and 2 of Theorem 3 and supπ Eπ[xj⁴] < ∞, for j = 1, . . . , k. Let v̄π = −Eπ[σ²],
$$\bar{r}_{\pi,k}(\lambda) = E_\pi[r_{n,k}(\lambda)] = \bar{R}_{\pi,k}(\lambda) - \bar{v}_\pi,$$
and λ̂n = argmin_{λ∈[0,∞]} r_{n,k}(λ). Then, for ridge, lasso, and pretest,
$$\sup_{\pi \in Q} E_\pi\left[ \sup_{\lambda \in [0,\infty]} \big( r_{n,k}(\lambda) - \bar{r}_{\pi,k}(\lambda) \big)^2 \right] \to 0,$$
and
$$\sup_{\pi \in Q} P_\pi\left( L_{n,k}(\hat{\lambda}_n) - \inf_{\lambda \in [0,\infty]} L_{n,k}(\lambda) > \epsilon \right) \to 0, \qquad \forall \epsilon > 0.$$

Cross-validation has advantages as well as disadvantages relative to SURE. On the


positive side, cross-validation does not rely on normal errors, while SURE does. Normality
is less of an issue if k is large, so Xki is approximately normal. On the negative side, however,
cross-validation requires holding out part of the data from the second step estimation of µ,
once the value of the regularization parameter has been chosen in a first step. This affects
the essence of the cross-validation efficiency results, which apply to estimators of the form
m(Xk−1i , λ), rather than to feasible estimators that use the entire sample in the second
step, m(Xki , λ). Finally, cross-validation imposes greater data availability requirements, as
it relies on availability of data on repeated realizations, x1i , . . . , xki , of a random variable
centered at µi , for each sample unit i = 1, . . . , n. This may hinder the practical applicability
of cross-validation selection of regularization parameters in the context considered in this
article.

5 Discussion and Extensions


5.1 Mixed estimators and estimators of the optimal shrinkage function

We have discussed criteria such as SURE and CV as means to select the regularization
parameter, λ. In principle, these same criteria might also be used to choose among al-
ternative estimators, such as ridge, lasso, and pretest, in specific empirical settings. Our
uniform risk consistency results imply that such a mixed-estimator approach dominates
each of the estimators which are being mixed, for n large enough. Going even further, one
might aim to estimate the optimal shrinkage function, m̄∗π , using the result of Lemma 5,
as in Jiang and Zhang (2009), Koenker and Mizera (2014)) and others. Under suitable
consistency conditions, this approach will dominate all other componentwise estimators for

28
large enough n (Jiang and Zhang, 2009). In practice, these results should be applied with
some caution, as they are based on neglecting the variability in the choice of estimation
procedure or in the estimation of m̄∗π . For small and moderate values of n, procedures with
fewer degrees of freedom may perform better in practice. We return to this issue in section
6, where we compare the finite sample risk of the machine learning estimators considered
in this article (ridge, lasso and pretest) to the finite sample risk of the NPEB estimator of
Koenker and Mizera (2014).

5.2 Heteroskedasticity

While for simplicity many of our results are stated for the homoskedastic case, where
var(Xi ) = σ for all i, they easily generalize to heteroskedasticity.
The general characterization of compound risk in Theorem 1 does not use homoskedas-
ticity, nor does the derivation of componentwise risk in Lemma 1. The analytical derivations
of empirical Bayes risk for the spike and normal data generating process in Proposition 1,
and the corresponding comparisons of risk in Figures 3 and 4 do rely on homoskedastic-
ity. Similar formulas to those of Proposition 1 might be derived for other data generating
processes with heteroskedasticity, but the rankings of estimators might change.
As for our proofs of uniform risk consistency, our general results (Theorem 2 and 3)
do not require homoskedasticity, nor does the validity or consistency of crossvalidation, cf.
Theorem 4. SURE, in the form we introduced in Lemma 3, does require homoskedastic-
ity. However, the definition of SURE, and the corresponding consistency results, can be
extended to the heteroskedastic case (see Xie, Kou, and Brown, 2012).

5.3 Comparison with Leeb and Pötscher (2006)

Our results on the uniform consistency of estimators of risk such as SURE or CV appear
to stand in contradiction to those of Leeb and Pötscher (2006). They consider the same
setting as we do – estimation of normal means – and the same types of estimators, including
ridge, lasso, and pretest. In this setting, Leeb and Pötscher (2006) show that no uniformly
consistent estimator of risk exists for such estimators.

29
The apparent contradiction between our results and the results in Leeb and Pötscher
(2006) is explained by the different nature of the asymptotic sequence adopted in this article
to study the properties of machine learning estimators, relative to the asymptotic sequence
adopted in Leeb and Pötscher (2006) for the same purpose. In this article, we consider
the problem of estimating a large number of parameters, such as location effects for many
locations or group-level treatment effects for many groups. This motivates the adoption
of an asymptotic sequence along which the number of estimated parameters increases as
n → ∞. In contrast, Leeb and Pötscher (2006) study the risk properties of regularized
estimators embedded in a sequence along which the number of estimated parameters stays
fixed as n → ∞ and the estimation variance is of order 1/n. We expect our approximation
to work well when the dimension of the estimated parameter is large; the approximation
of Leeb and Pötscher (2006) is likely to be more appropriate when the dimension of the
estimated parameter is small while sample size is large.
In the simplest version of the setting in Leeb and Pötscher (2006) we observe a (k ×
1) vector X n with distribution X n ∼ N (µn , I k /n), where I k is the identity matrix of
dimension k. Let Xni and µni be the i-components of X n and µn , respectively. Consider
the componentwise estimator mn (Xni ) of µni . Leeb and Pötscher (2006) study consistent
estimation of the normalized risk

R̄nLP = nEkmn (X n ) − µn k2 ,

where mn (X n ) is a (k × 1) vector with i-th element equal to mn (Xni ).


√ √
Adopting the re-parametrization, Y n = nX n and hn = nµn , we obtain Y n −
hn ∼ N (0, I k ). Notice that, for the maximum likelihood estimator, mn (X n ) − µn =

(Y n − hn )/ n and R̄nLP = E|km(Y n ) − hn k2 = k, so the risk of the maximum likelihood
estimator does not depend on the sequence hn and, therefore, can be consistently estimated.
This is not the case for shrinkage estimators, however. Choosing hn = h for some fixed h,
the problem becomes invariant in n,

Y n ∼ N (h, I k ).

30
In this setting, it is easy to show that the risk of machine learning estimators, such as
ridge, lasso, and pretest depends on h, and therefore it cannot be estimated consistently.

For instance, consider the lasso estimator, mn (x) = mL (x, λn ), where nλn → c with
0 < c < ∞, as in Leeb and Pötscher (2006). Then, Lemma 1 implies that R̄nLP is constant
in n and dependent on h. As a result, R̄nLP cannot be estimated consistently.2
Contrast the setting in Leeb and Pötscher (2006) to the one adopted in this article,
where we consider a high dimensional setting, such that X and µ have dimension equal
to n. The pairs (Xi , µi ) follow a distribution π which may vary with n. As n increases, π
becomes identified and so does the average risk, Eπ [(mn (Xi ) − µi )2 ], of any componentwise
estimator, mn (·).
Whether the asymptotic approximation in Leeb and Pötscher (2006) or ours provides
a better description of the performance of SURE, CV, or other estimators of risk in actual
applications depends on the dimension of µ. If this dimension is large, as typical in the
applications we consider in this article, we expect our uniform consistency result to apply:
a “blessing of dimesionality”. As demonstrated by Leeb and Pötscher, however, precise
estimation of a fixed number of parameters does not ensure uniformly consistent estimation
of risk.

6 Simulations

Designs To gauge the relative performance of the estimators considered in this article,
we next report the results of a set of simulations that employ the spike and normal data
generating process of Section 3.3. As in Proposition 1, we consider distributions π of
(X, µ) such that µ is degenerate at zero with probability p and normal with mean µ0 and
variance σ02 with probability (1 − p). We consider all combinations of parameter values
p = 0.00, 0.25, 0.50, 0.75, 0.95, µ0 = 0, 2, 4, σ0 = 2, 4, 6, and sample sizes n = 50, 200, 1000.
Given a set of values µ1 , . . . , µn , the values for X1 , . . . , Xn are generated as follows.
To evaluate the performance of estimators based on SURE selectors and of the NPEB
2
This result holds more generally outside the normal error model. Let mL (X n , λ) be the (n × 1) vector

with i-th element equal√ to mL√(Xi , λ). Consider the sequence of regularization parameters λn = c/ n,
then mL (x, λn ) = mL ( nx, c)/ n. This implies R̄nLP = E|kmL (Y n , c) − hk2 , which is invariant in n.

31
estimator of Koenker and Mizera (2014), we generate the data as

Xi = µi + Ui , (11)

where the Ui follow a standard normal distribution, independent of other components. To


evaluate the performance of cross-validation estimators, we generate

xji = µi + kuji

for j = 1, . . . , k, where the uji are draws from independent standard normal distributions.
As a result, the averages
k
1X
Xki = xji
k j=1

have the same distributions as the Xi in equation (11), which makes the comparison of
between the cross-validation estimators and the SURE and NPEB estimators a meaninful
one. For cross-validation estimators we consider k = 4, 20.

Estimators The SURE criterion function employed in the simulations is the one in equa-
tion (9) where, for the pretest estimator, the density of X is estimated with a normal kernel
and the bandwidth implied by “Silverman’s rule of thumb”.3 The cross-validation criterion
function employed in the simulations is a leave-one-out version of the one considered in
Section 4.3, !
k n
X 1X
rn,k (λ) = (m(X−ji , λ) − xji )2 , (12)
j=1
n i=1

where X−ji is the average of {x1i , . . . , xki }\xji . Notice that because of the result in Theorem
4 applies to each of the k terms on the right-hand-side of equation (12) it also applies to
rn,k (λ) as defined on the left-hand-side of the same equation. The cross validation estimator
employed in our simulations is m(Xki , λ), with λ evaluated at the minimizer of (12).

Results Tables 1, 2, and 3 report average compound risk across 1000 simulations for
n = 50, n = 200 and n = 1000, respectively. Each row corresponds to a particular value of
3
See Silverman (1986) equation (3.31).

32
(p, µ0 , σ0 ), and each column corresponds to a particular estimator/regularization criterion.
The results are coded row-by-row on a continuous color scale which varies from dark blue
(minimum row value) to light yellow (maximum row value).
Several clear patterns emerge from the simulation results. First, even for a dimension-
ality as modest as n = 50, the patterns in Figure 3, which were obtained for oracle choices
of regularization parameters, are reproduced in Tables 1 to 3 for the same estimators but
using data-driven choices of regularization parameters. As in Figure 3, among ridge, lasso
and pretest, ridge dominates when there is little or no sparsity in the parameters of interest,
pretest dominates when the distribution of non-zero parameters is substantially separated
from zero, and lasso dominates in the intermediate cases. Second, while the results in Jiang
and Zhang (2009) suggest good performance of nonparametric estimators of m̄∗π for large
n, the simulation results in Tables 1 and 2 indicate that the performance of NPEB may be
substantially worse than the performance of the other machine learning estimators in the
table, for moderate and small n. In particular, the performance of the NPEB estimator
suffers in the settings with low or no sparsity, especially when the distribution of the non-
zero values of µ1 , . . . , µn has considerable dispersion. This is explained by the fact that, in
practice, the NPEB estimator approximates the distribution of µ using a discrete distri-
bution supported on a small number of values. When most of the probability mass of the
true distribution of µ is also concentrated around a small number of values (that is, when
p is large or σ0 is small), the approximation employed by the NPEB estimator is accurate
and the performance of the NPEB estimator is good. This is not the case, however, when
the true distribution of µ cannot be closely approximated with a small number of values
(that is, when p is small and σ0 is large). Lasso shows a remarkable degree of robustness
to the value of (p, µ0 , σ0 ), which makes it an attractive estimator in practice. For large n,
as in Table 3, NPEB dominates except in settings with no sparsity and a large dispersion
in µ (p = 0 and σ0 large).

33
7 Applications

In this section, we apply our results to three data sets from the empirical economics liter-
ature. The first application, based on Chetty and Hendren (2015), estimates the effect of
living in a given commuting zone during childhood on intergenerational income mobility.
The second application, based on Della Vigna and La Ferrara (2010), estimates changes in
the stock prices of arms manufacturers following changes in the intensity of conflicts in coun-
tries under arms trade embargoes. The third application uses data from the 2000 census
of the US, previously employed in Angrist, Chernozhukov, and Fernández-Val (2006) and
Belloni and Chernozhukov (2011), to estimate a nonparametric Mincer regression equation
of log wages on education and potential experience.
For all applications we normalize the observed Xi by their estimated standard error.
Note that this normalization (i) defines the implied loss function, which is quadratic error
loss for estimation of the normalized latent parameter µi , and (ii) defines the class of esti-
mators considered, which are componentwise shrinkage estimators based on the normalized
Xi .

7.1 Neighborhood Effects: Chetty and Hendren (2015)

Chetty and Hendren (2015) use information on income at age 26 for individuals who moved
between commuting zones during childhood to estimate the effects of location on income.
Identification comes from comparing differently aged children of the same parents, who are
exposed to different locations for different durations in their youth. In the context of this
application, Xi is the (studentized) estimate of the effect of spending an additional year
of childhood in commuting zone i, conditional on parental income rank, on child income
rank relative to the national household income distribution at age 26.4 In this setting, the
point zero has no special role; it is just defined, by normalization, to equal the average of
commuting zone effects. We therefore have no reason to expect sparsity, nor the presence
4
The data employed in this section were obtained from http://www.equality-of-opportunity.org/
images/nbhds_online_data_table3.xlsx. We focus on the estimates for children with parents at the
25th percentile of the national income distribution among parents with children in the same birth cohort.

34
of a set of effects well separated from zero. Our discussion in Section 3 would thus lead us
to expect that ridge will perform well, and this is indeed what we find.
Figure 5 reports SURE estimates of risk for ridge, lasso, and pretest estimators, as
functions of λ. Among the three estimators, minimal estimated risk is equal to 0.29, and it
is attained by ridge for λ
bR,n = 2.44. Minimal estimated risk for lasso and pretest are 0.31

and 0.41, respectively. The relative performance of the three shrinkage estimators reflects
the characteristics of the example and, in particular, the very limited evidence of sparsity
in the data.
The first panel of Figure 6 shows the Koenker-Mizera NPEB estimator (solid line)
along with the ridge, lasso, and pretest estimators (dashed lines) evaluated at SURE-
minimizing values of the regularization parameters. The identity of the estimators can
be easily recognized from their shape. The ridge estimator is linear, with positive slope
equal to estimated risk, 0.29. Lasso has the familiar piecewise linear shape, with kinks
at the positive and negative versions of the SURE-minimizing value of the regularization
parameter, λ
bL,n = 1.34. Pretest is flat at zero, because SURE is minimized for values of λ

higher than the maximum absolute value of X1 , . . . , Xn . The second panel shows a kernel
estimate of the distribution of X.5 Among ridge, lasso, and pretest, ridge best approximates
the optimal shrinkage estimator over most of the estimated distribution of X. Lasso comes
a close second, as evidenced in the minimal SURE values for the three estimators, and
pretest is way off. Despite substantial shrinkage, these estimates suggest considerable
heterogeneity in the effects of childhood neighborhood on earnings. In addition, as expected
given the nature of this application, we do not find evidence of sparsity in the location effects
estimates.

7.2 Detecting Illegal Arms Trade: Della Vigna and La Ferrara (2010)

Della Vigna and La Ferrara (2010) use changes in stocks prices of arms manufacturing
companies at the time of large changes in the intensity of conflicts in countries under arms-
5
To produce a smooth depiction of densities, for the panels reporting densities in this section we use
the normal reference rule to choose the bandwidth. See, e.g., Silverman (1986) equation (3.28).

35
trade embargoes to detect illegal arms trade. In this section, we apply the estimators in
Section 4 to data from the Della Vigna and La Ferrara study.6
In contrast to the location effects example in Section 7.1, in this application there are
reasons to expect a certain amount of sparsity, if changes in the intensity of the conflicts
in arms-embargo areas do not affect the stock prices of arms manufacturers that comply
with the embargoes.7 Economic theory would suggest this to be the case if there are fixed
costs for violating the embargo. In this case, our discussion of Section 3 would lead us to
expect that pretest might be optimal, which is again what we find.
Figure 7 shows SURE estimates for ridge, lasso, and pretest. Pretest has the lowest
bP T,n = 2.39,8 followed by lasso, for λ
estimated risk, for λ bL,n = 1.50.

Figure 8 depicts the different shrinkage estimators and shows that lasso and especially
pretest closely approximate the NPEB estimator over a large part of the distribution of
X. The NPEB estimate suggests a substantial amount of sparsity in the distribution of
µ. There is, however, a subset of the support of X around x = 3 where the estimate of
the optimal shrinkage function implies only a small amount of shrinkage. Given the shapes
of the optimal shrinkage function estimate and of the estimate of the distribution of X,
it is not surprising that the minimal values of SURE in Figure 7 for lasso and pretest are
considerably lower than for ridge.
6
Della Vigna and La Ferrara (2010) divide their sample of arms manufacturers in two groups, depending
on whether the company is head-quartered in a country with a high or low level of corruption. They also
divide the events of changes in the intensity of the conflicts in embargo areas in two groups, depending
on whether the intensity of the conflict increased or decreased at the time of the event. For concreteness,
we use the 214 event study estimates for events of increase in the intensity of conflicts in arms embargo
areas and for companies in high-corruption countries. The data for this application is available at http:
//eml.berkeley.edu/~sdellavi/wp/AEJDataPostingZip.zip.
7
In the words of Della Vigna and La Ferrara (2010): “If a company is not trading or trading legally, an
event increasing the hostilities should not affect its stock price or should affect it adversely, since it delays
the removal of the embargo and hence the re-establishment of legal sales. Conversely, if a company is trading
illegally, the event should increase its stock price, since it increases the demand for illegal weapons.”
8
Notice that the pretest’s SURE estimate attains a negative minimum value. This could be a matter of
estimation variability, of inappropriate choice of bandwidth for the estimation of the density of X in small
samples, or it could reflect misspecification of the model (in particular, Gaussianity of X given µ).

36
7.3 Nonparametric Mincer equation: Belloni and Chernozhukov (2011)

In our third application, we use data from the 2000 US Census in order to estimate a non-
parametric regression of log wages on years of education and potential experience, similar
to the example considered in Belloni and Chernozhukov (2011).9 We construct a set of 66
regressors by taking a saturated basis of linear splines in education, fully interacted with the
terms of a 6-th order polynomial in potential experience. We orthogonalize these regressors
and take the coefficients Xi of an OLS regression of log wages on these orthogonalized
regressors as our point of departure. We exclude three coefficients of very large magnitude,10
which results in n = 63. In this application, economics provides less intuition as to what
distribution of coefficients to expect. Based on functional analysis considerations, Belloni
and Chernozhukov (2011) argue that for plausible families of functions containing the
true conditional expectation function, sparse approximations of the coefficients of series
regression as induced by the lasso penalty, have low mean squared error.
Figure 9 reports SURE estimates of risk for ridge, lasso and pretest. In this application,
estimated risk for lasso is substantially smaller than for ridge or pretest.
The top panel of Figure 10 reports the three regularized estimators, ridge, lasso, and
pretest, evaluated at the data-driven choice of regularization parameter, along with the
Koenker-Mizera NPEB estimator. In order to visualize the differences between the esti-
mates close to the origin, where most of the coefficients are, we report the value of the
estimates for x ∈ [−10, 10]. The bottom panel of Figure 10 reports an estimate of the
density of X. Locally, the shape of the NPEB estimate looks similar to a step function.
This behavior is explained by the fact that the NPEB estimator is based on an approxima-
tion to the distribution of µ that is supported on a finite number of values. However, over
the whole range of x in the Figure 10, the NPEB estimate is fairly linear. In view of this
close-to-linear behavior of NPEB in the [10, 10] interval, the very poor risk performance
9
The data for this application are available at http://economics.mit.edu/files/384.
10
The three excluded coefficients have values, 2938.04 (the intercept), 98.19, and -77.35. The largest
absolute value among the included coefficients is -21.06. Most of the included coefficients are small in
absolute value. About 40 percent of them have absolute values smaller than one, and about 60 percent of
them have absolute value smaller than two.

37
of ridge relative to lasso and pretest, as evidenced in Figure 9, may appear surprising.
This is explained by the fact that in this application, some of the values in X1 , . . . , Xn
fall exceedingly far from the origin. Linearly shrinking those values towards zero induces
severe loss. As a result, ridge attains minimal risk for a close-to-zero value of the regular-
ization parameter, λ
bR,n = 0.04, resulting in negligible shrinkage. Among ridge, lasso, and

pretest, minimal estimated risk is attained by lasso for λ


bL,n = 0.59, which shrinks about

24 percent of the regression coefficients all the way to zero. Pretest induces higher sparsity

bP T,n = 1.14, shrinking about 49 percent of the coefficients all the way to zero) but does

not improve over lasso in terms of risk.

8 Conclusion

The interest in adopting machine learning methods in economics is growing rapidly. Two
common features of machine learning algorithms are regularization and data-driven choice
of regularization parameters. We study the properties of such procedures. We consider,
in particular, the problem of estimating many means µi based on observations Xi . This
problem arises often in economic applications. In such applications, the “observations” Xi
are usually equal to preliminary least squares coefficient estimates, like fixed effects.
Our goal is to provide guidance for applied researchers on the use of machine learning
estimators. Which estimation method should one choose in a given application? And how
should one choose regularization parameters? To the extent that researchers care about the
squared error of their estimates, procedures are preferable if they have lower mean squared
errors than the competitors.
Based on our results, ridge appears to dominate the alternatives considered when the
true effects µi are smoothly distributed, and there is no point mass of true zeros. This is
likely to be the case in applications where the objects of interests are the effects of many
treatments, such as locations or teachers, and applications that estimate effects for many
subgroups. Pretest appears to dominate if there are true zeros and non-zero effects are well
separated from zero. This happens in economic applications when there are fixed costs for
agents who engage in non-zero behavior. Lasso finally dominates for intermediate cases

38
and appears to do well for series regression, in particular.
Regarding the choice of regularization parameters, we prove a series of results which
show that data-driven choices are almost optimal (in a uniform sense) for large-dimensional
problems. This is the case, in particular, for choices of regularization parameters that mini-
mize Stein’s Unbiased Risk Estimate (SURE), when observations are normally distributed,
and for Cross Validation (CV), when repeated observations for a given effect are available.
Although not explicitly analyzed in this article, equation (3) suggests a new empirical se-
lector of regularization parameters based on the minimization of the sample mean square
discrepancy between m(Xi , λ) and NPEB estimates of m̄∗π (Xi ).
There are, of course, some limitations to our analysis. First, we focus on a restricted
class of estimators, those which can be written in the componentwise shrinkage form µ
bi =
m(Xi , λ).
b This covers many estimators of interest for economists, most notably ridge, lasso,

and pretest estimation. Many other estimators in the machine learning literature, such as
random forests or neural nets, do not have this tractable form. The analysis of the risk
properties of such estimators constitutes an interesting avenue of future research.
Finally, we focus on mean square error. This loss function is analytically quite conve-
nient and amenable to tractable results. Other loss functions might be of practical interest,
however, and might be studied using numerical methods. In this context, it is also worth
emphasizing again that we were focusing on point estimation, where all coefficients µi are
simultaneously of interest. This is relevant for many practical applications such as those
discussed above. In other cases, however, one might instead be interested in the estimates
µ
bi solely as input for a lower-dimensional decision problem, or in (frequentist) testing of
hypotheses on the coefficients µi . Our analysis of mean squared error does not directly
speak to such questions.

39
Appendix

A.1 Relating prediction problems to the normal means model setup


We have introduced our setup in the canonical form of the problem of estimating many means. Machine
learning methods are often discussed in terms of the problem of minimizing out-of-sample prediction error.
The two problems are closely related. Consider the linear prediction model
Y = W 0 β + ,
where Y is a scalar random variable, W is an (n×1) vector of covariates (features), and |W ∼ N (0, σ 2 ).11
The machine learning literature is often concerned with the problem of predicting the value of Y of a draw
of (Y, W ) using
Yb = W 0 β,
b
where β
b is an estimator of β based on N (N ≥ n) previous independent draws, (Y1 , W1 ), . . . , (YN , WN ),
from the distribution of (Y, W ), so βb is independent of (Y, W ). We evaluate out-of-sample predictions
based on the squared prediction error,
 2  
L̃ = (Yb − Y )2 = W 0 (βb − β) + 2 + 2 W 0 (βb − β) .

Suppose that the features W for prediction are drawn from the empirical distribution of W1 , . . . , WN ,12
and that Y is drawn from the conditional population distribution of Y given W . The expected squared
prediction error, R̃ = E[L̃], is then equal to
 
R̃ = tr Ω · E[(β b − β)0 ] + E[2 ],
b − β)(β

where
N
1 X
Ω= W j W 0j .
N j=1
In the special case where the components of W are orthonormal in the sample, Ω = I n , this immediately
yields
Xn
R̃ = E[(βbi − βi )2 ] + E[2 ],
i=1

where βbi and βi are the i-th components of β b and β, respectively. In this special case, we thus get that
the risk function for out of sample prediction and the mean squared error for coefficient estimation are the
same, up to a constant.

More generally, assume that Ω has full rank, define V = Ω−1/2 W , µ = Ω1/2 β, and let X be the coefficients
of an ordinary least squares regression of Y1 , . . . , YN on V 1 , . . . , V N . This change of coordinates yields,
conditional on W 1 , . . . , W N ,
σ2
 
X ∼ N µ, I n ,
N
so that the assumptions of our setup regarding X and µ hold. Regularized estimators µ b of µ can be formed
by componentwise shrinkage of X. For any estimator µ b of µ we can furthermore write the corresponding
risk for out of sample prediction as
µ − µ)0 (b
R̃ = E[(b µ − µ)] + E[2 ].
11
Linearity of the conditional expectation and normality are assumed here for ease of exposition; both
could in principle be dropped in an asymptotic version of the following argument.
12
This assumption is again made for convenience, to sidestep asymptotic approximations

40
To summarize: After orthogonalizing the regressors for a linear regression problem, the assumptions of the
many means setup apply to the vector of ordinary least squares coefficients. The risk function for out of
sample prediction is furthermore the same as the risk function of the many means problem, if we assume
the features for prediction are drawn from the empirical distribution of observed features.

A.2 Assuming oracle knowledge of zeros is not uniformly valid


Consider the pretest estimator, mP T (Xi , λ bn ). An alternative approximation to the risk of the pretest
estimator is given by the risk of the infeasible estimator based on oracle-knowledge of true zeros,

m0,µ
P T (Xi ) = 1(µi 6= 0)Xi .

As we show now, this approximation is not uniformly valid, which illustrates that uniformity is not a trivial
requirement. Consider the following family Q of data generating processes,

X|µ ∼ N (µ, 1),


P (µ = 0) = p,
P (µ = µ0 ) = 1 − p.

It is easy to check that


R̄(m0,µ
P T (·), π) = 1 − p,

for all π ∈ Q. By Proposition 1, for π ∈ Q, the integrated risk of the pretest estimator is
 
R̄(mP T (·, λ), π) = 2 Φ(−λ) + λφ(λ) p

+ 1 + Φ(−λ − µ0 ) − Φ(λ − µ0 ) + (Φ(λ − µ0 ) − Φ(−λ − µ0 ))µ20
 
− φ(λ − µ0 ) − λ + µ0 − φ(−λ − µ0 ) − λ − µ0 (1 − p).

We have shown above that data-driven choices of λ are uniformly risk consistent, so their integrated risk is
asymptotically equal to minλ∈[0,∞] R(mP T (·, λ), π). It follows that the risk of m0,µ
P T (·) provides a uniformaly
valid approximation to the risk of mP T (·, λ) if and only if
b

min R̄(mP T (·, λ), π) = 1 − p, ∀π ∈ Q. (A.1)


λ∈[0,∞]


It is easy to show that equation (A.1) is violated. Consider, for example, (p, µ0 ) = (1/2, 2). Then, the
minimum value of R̄(mP T (, λ), π) is equal to one (achieved at λ = 0 and λ = ∞). Therefore,

min R̄(mP T (, λ), π) = 1 > 0.5 = R̄(m0,µ


P T (·), π).
λ∈[0,∞]

Moreover, equation (A.1) is also violated in the opposite direction. Notice that

lim R̄(mP T (·, λ), π) = (1 − p)µ20 .


λ→∞

As a result, if |µ0 | < 1 we obtain

min R̄(mP T (, λ), π) < 1 − p = R̄(m0,µ


P T (·), π),
λ∈[0,∞]

which violates equation (A.1).

41
A.3 Proofs
Proof of Theorem 1:
n
1X
Rn (m(., λ), P ) = E[(m(Xi , λ) − µi )2 |Pi ]
n i=1
= E (m(XI , λ) − µI )2 |P
 

= E E[(m∗P (XI ) − µI )2 |XI , P ]|P + E (m(XI , λ) − m∗P (XI ))2 |P


   


+ E (m(XI , λ) − m∗P (XI ))2 |P .
 
= vP

The second equality in this proof is termed the fundamental theorem of compound decisions in Jiang
and Zhang (2009), who credit Robbins (1951). Finiteness of µ1 , . . . , µn , and supλ∈[0,∞] E[(m(XI , λ))2 |P ]
implies that all relevant expectations are finite. 
Proof of Lemma 1: Notice that
   
1 λ
mR (x, λ) − µi = (x − µi ) − µi .
1+λ 1+λ
The result for ridge equals the second moment of this expression. For pretest, notice that

mP T (x, λ) − µi = 1(|x| > λ)(x − µi ) − 1(|x| ≤ λ)µi .

Therefore,
R(mP T (·, λ), Pi ) = E (Xi − µi )2 1(|Xi | > λ) + µ2i Pr |Xi | ≤ λ .
  
(A.2)
0
Using the fact that φ (v) = −vφ(v) and integrating by parts, we obtain
Z b Z b h i
2
v φ(v) dv = φ(v) dv − bφ(b) − aφ(a)
a
ha i h i
= Φ(b) − Φ(a) − bφ(b) − aφ(a) .

Now,
" !2 #
Xi − µi
E (Xi − µi )2 1(|Xi | > λ) = σi2 E
 
1(|Xi | > λ)
σi
!
 −λ − µ  λ − µ 
i i
= 1+Φ −Φ σi2
σi σi
!
 λ − µ   λ − µ   −λ − µ   −λ − µ 
i i i i
+ φ − φ σi2 . (A.3)
σi σi σi σi

The result for the pretest estimator now follows easily from equations (A.2) and (A.3). For lasso, notice
that

mL (x, λ) − µi = 1(x < −λ)(x + λ − µi ) + 1(x > λ)(x − λ − µi ) − 1(|x| ≤ λ)µi


= 1(|x| > λ)(x − µi ) + (1(x < −λ) − 1(x > λ))λ − 1(|x| ≤ λ)µi .

Therefore,

R(mL (·, λ), Pi ) = E (Xi − µi )2 1(|Xi | > λ) + λ2 E[1(|Xi | > λ)] + µ2i E[1(|Xi | ≤ λ)]
 
    
+ 2λ E (Xi − µi )1(Xi < −λ) − E (Xi − µi )1(Xi > λ)

42
= R(mP T (·, λ), Pi ) + λ2 E[1(|Xi | > λ)]
    
+ 2λ E (Xi − µi )1(Xi < −λ) − E (Xi − µi )1(Xi > λ) . (A.4)

Notice that Z b
vφ(v)dv = φ(a) − φ(b).
a
As a result,
   λ − µ 
    −λ − µi  i
E (Xi − µi )1(Xi < −λ) − E (Xi − µi )1(Xi > λ) = −σi φ +φ . (A.5)
σi σi
Now, the result for lasso follows from equations (A.4) and (A.5). 
Proof of Proposition 1: The results for ridge are trivial. For lasso, first notice that the integrated risk
at zero is:  −λ  λ λ
R0 (mL (·, λ), π) = 2Φ (σ 2 + λ2 ) − 2 φ σ2 .
σ σ σ
Next, notice that !
−λ − µ  1  µ0 − µ  −λ − µ0
Z 
Φ φ dµ = Φ p 2 ,
σ σ0 σ0 σ0 + σ 2
!
λ − µ  1  µ0 − µ  λ − µ0
Z 
Φ φ dµ = Φ p 2 ,
σ σ0 σ0 σ0 + σ 2
!
 λ−µ   µ0 σ 2 + λσ02

−λ − µ   λ − µ  1  µ0 − µ  1
Z 
0
φ φ dµ = − φ λ+
σ02 + σ 2
p p
σ σ σ0 σ0 σ02 + σ 2 σ02 + σ 2
!
 −λ − µ   µ0 σ 2 − λσ02

−λ + µ   −λ − µ  1  µ0 − µ  1
Z 
0
φ φ dµ = − p 2 φ p 2 λ− .
σ σ σ0 σ0 σ0 + σ 2 σ0 + σ 2 σ02 + σ 2
The integrals involving µ2 are more involved. Let v be a Standard normal variable independent of µ.
Notice that,
Z λ − µ 1 µ − µ  Z Z  1 µ − µ 
0 0
µ2 Φ φ dµ = µ2 I[v≤(λ−µ)/σ] φ(v)dv φ dµ
σ σ0 σ0 σ0 σ0
1  µ − µ0  
Z Z
= µ2 I[µ≤λ−σv] φ dµ φ(v)dv.
σ0 σ0
Using the change of variable u = (µ − µ0 )/σ0 , we obtain,
1  µ − µ0 
Z Z
µ2 I[µ≤λ−σv] φ dµ = (µ0 + σ0 u)2 I[u≤(λ−µ0 −σv)/σ0 ] φ(u)du
σ0 σ0
 λ − µ − σv   λ − µ − σv 
0 0
=Φ µ20 − 2φ σ0 µ0
σ0 σ0
!
 λ − µ − σv   λ − µ − σv   λ − µ − σv 
0 0 0
+ Φ − φ σ02
σ0 σ0 σ0
 λ − µ − σv   λ − µ − σv 
0 0
=Φ (µ20 + σ02 ) − φ σ0 (λ + µ0 − σv).
σ0 σ0
Therefore,
!
Z λ − µ 1 µ − µ  λ − µ0
2 0
µ Φ φ dµ = Φ p 2 (µ20 + σ02 )
σ σ0 σ0 σ0 + σ 2

43
!
1 λ − µ0
−p 2 φ p 2 (λ + µ0 )σ02
σ0 + σ 2 σ0 + σ 2
! !
2 2 1 λ − µ0 λ − µ0
+ σ0 σ p 2 φ p 2 .
σ0 + σ 2 σ0 + σ 2 σ02 + σ 2

Similarly,
!
Z  −λ − µ  1  µ − µ  −λ − µ0
2 0
µ Φ φ dµ = Φ p 2 (µ20 + σ02 )
σ σ0 σ0 σ0 + σ 2
!
1 −λ − µ0
−p 2 φ p 2 (−λ + µ0 )σ02
σ0 + σ 2 σ0 + σ 2
! !
2 2 1 −λ − µ0 −λ − µ0
+ σ0 σ p 2 φ p 2 .
σ0 + σ 2 σ0 + σ 2 σ02 + σ 2

The integrated risk conditional on µ 6= 0 is


! !!
−λ − µ0 λ − µ0
R1 (mL (·, λ), π) = 1+Φ p 2 −Φ p 2 (σ 2 + λ2 )
σ0 + σ 2 σ0 + σ 2
! !!
λ − µ0 −λ − µ0
+ Φ p 2 −Φ p 2 (µ20 + σ02 )
σ0 + σ 2 σ0 + σ 2
!
1 λ − µ0
−p 2 φ p 2 (λ + µ0 )(σ02 + σ 2 )
σ0 + σ 2 σ0 + σ 2
!
1 −λ − µ0
−p 2 φ p 2 (λ − µ0 )(σ02 + σ 2 ).
σ0 + σ 2 σ0 + σ 2

The results for pretest follow from similar calculations. 


The next lemma is used in the proof of Theorem 2.

Lemma A.1
For any two real-valued functions, f and g,

inf f − inf g ≤ sup |f − g|.

Proof: The result of the lemma follows directly from

inf f ≥ inf g − sup |f − g|,

and

inf g ≥ inf f − sup |f − g|.


Proof of Theorem 2: Because v̄π does not depend on λ, we obtain
       
Ln (λ) − Ln (λ
bn ) − rn (λ) − rn (λ
bn ) = Ln (λ) − R̄π (λ) − Ln (λ bn ) − R̄π (λ
bn )
   
+ r̄π (λ) − rn (λ) − r̄π (λ
bn ) − rn (λ
bn ) .

44
Applying Lemma A.1 we obtain
   
inf Ln (λ) − Ln (λ
bn ) − inf bn ) ≤ 2 sup Ln (λ) − R̄π (λ)
rn (λ) − rn (λ


λ∈[0,∞] λ∈[0,∞] λ∈[0,∞]

+ 2 sup r̄π (λ) − rn (λ) .

λ∈[0,∞]

Given that λ
bn is the value of λ at which rn (λ) attains its minimum, the result of the theorem follows. 
The following preliminary lemma will be used in the proof of Theorem 3.

Lemma A.2
For any finite set of regularization parameters, 0 = λ0 < . . . < λk = ∞, let

uj = sup L(λ)
λ∈[λj−1 ,λj ]

lj = inf L(λ),
λ∈[λj−1 ,λj ]

where L(λ) = (µ − m(X, λ))2 . Suppose that for any  > 0 there is a finite set of regularization parameters,
0 = λ0 < . . . < λk = ∞ (where k may depend on ), such that

sup max Eπ [uj − lj ] ≤  (A.6)


π∈Q 1≤j≤k

and
sup max max{varπ (lj ), varπ (uj )} < ∞. (A.7)
π∈Q 1≤j≤k

Then, equation (6) holds.

Proof: We will use En to indicate averages over (µ1 , X1 ), . . . , (µn , Xn ). Let λ ∈ [λj−1 , λj ]. By construction

En [L(λ)] − Eπ [L(λ)] ≤ En [uj ] − Eπ [lj ] ≤ En [uj ] − Eπ [uj ] + Eπ [uj − lj ]


En [L(λ)] − Eπ [L(λ)] ≥ En [lj ] − Eπ [uj ] ≥ En [lj ] − Eπ [lj ] − Eπ [uj − lj ]

and thus

sup (En [L(λ)]−Eπ [L(λ)])2


λ∈[0,∞]
 2
≤ max max{(En [uj ] − Eπ [uj ])2 , (En [lj ] − Eπ [lj ])2 } + max Eπ [uj − lj ]
1≤j≤k 1≤j≤k

+ 2 max max{|En [uj ] − Eπ [uj ]|, |En [lj ] − Eπ [lj ]|} max Eπ [uj − lj ]
1≤j≤k 1≤j≤k
k 
X 
≤ (En [uj ] − Eπ [uj ])2 + (En [lj ] − Eπ [lj ])2 + 2
j=1
k 
X 
+ 2 |En [uj ] − Eπ [uj ]| + |En [lj ] − Eπ [lj ]| .
j=1

Therefore,
h i
Eπ sup (En [L(λ)] − Eπ [L(λ)])2
λ∈[0,∞]

45
k 
X 
≤ Eπ [(En [uj ] − Eπ [uj ])2 ] + Eπ [En [lj ] − Eπ [lj ])2 ] + 2
j=1
k
X
+ 2 Eπ [|En [uj ] − Eπ [uj ]| + |En [lj ] − Eπ [lj ]|]
j=1
k 
X 
≤ varπ (uj )/n + varπ (lj )/n + 2
j=1
k q
X q 
+ 2 varπ (uj )/n + varπ (lj )/n .
j=1

Now, the result of the lemma follows from the assumption of uniformly bounded variances. 
Proof of Theorem 3: We will show that the conditions of the theorem imply equations (A.6) and (A.7)
and, therefore, the uniform convergence result in equation (6). Using conditions 1 and 2, along with
the convexity of 4th powers, we immediately get bounded variances. Because the maximum of a convex
function is achieved at the boundary,

varπ (uj ) ≤ Eπ [u2j ] ≤ Eπ [max{(X − µ)4 , µ4 }] ≤ Eπ [(X − µ)4 ] + Eπ [µ4 ].

Notice also that


varπ (lj ) ≤ Eπ [lj2 ] ≤ Eπ [u2j ].
Now, condition 3 implies equation (A.7) in Lemma A.2.
It remains to find a set of regularization parameters such that Eπ [uj − lj ] <  for all j. Using again the
monotonicity of m(X, λ) in λ and convexity of the square function, we have that the supremum defining
uj is achieved at the boundary,
uj = max{L(λj−1 ), L(λj )},
while
lj = min{L(λj−1 ), L(λj )}
if µ ∈
/ [m(X, λj−1 ), m(X, λj )] and lj = 0, otherwise. In the former case,

uj − lj = |L(λj ) − L(λj−1 )|,

and in the latter case, uj −lj = max{L(λj−1 ), L(λj )}. Consider first the case of µ ∈
/ [m(X, λj−1 ), m(X, λj )].
Using the formula a2 − b2 = (a + b)(a − b) and the shorthand mj = m(X, λj ), we obtain

uj − lj = (mj − µ)2 − (mj−1 − µ)2



 
= (mj − µ) + (mj−1 − µ) mj − mj−1
≤ (|mj − µ| + |mj−1 − µ|)|mj − mj−1 |.

To check that the same bound applies to the case µ ∈ [m(X, λj−1 ), m(X, λj )], notice that

max {|mj − µ|, |mj−1 − µ|} ≤ |mj − µ| + |mj−1 − µ|

and because µ ∈ [m(X, λj−1 ), m(X, λj )],

max {|mj − µ|, |mj−1 − µ|} ≤ |mj − mj−1 |.

Monotonicity, boundary conditions, and the convexity of absolute values allow one to bound further,

uj − lj ≤ 2(|X − µ| + |µ|)|mj − mj−1 |.

46
Now, condition 4 in Theorem 3 implies equation (A.6) in Lemma A.2 and, therefore, the result of the
theorem. 
Proof of Lemma 2: Conditions 1 and 2 of Theorem 3 are easily verified to hold for ridge, lasso, and the
pretest estimator. Let us thus discuss condition 4.
Let ∆mj = m(X, λj ) − m(X, λj−1 ), and ∆λj = λj − λj−1 . For ridge, ∆mj is given by
 
1 1
∆mj = − X
1 + λj 1 + λj−1

so that the requirement follows from finite variances if we choose a finite set of regularization parameters
such that
1 1  

1 + λj − sup E (|X − µ| + |µ|)|X| < 
1 + λj−1 π∈Q

for all j = 1, . . . , k, which is possible by the uniformly bounded moments condition.


For lasso, notice that |∆mk | = (|X| − λk−1 ) 1(|X| > λk−1 ) ≤ |X| 1(|X| > λk−1 ), and |∆mj | ≤ ∆λj for
j = 1, . . . , k − 1. We will first verify that for any  > 0 there is a finite λk−1 such that condition 4 of
the lemma holds for j = k. Notice that for any pair of non-negative random variables (ξ, ζ) such that
E[ξ ζ] < ∞ and for any positive constant, c, we have that

E[ξζ] ≥ E[ξζ 1(ζ > c)] ≥ cE[ξ1(ζ > c)]

and, therefore,
E[ξζ]
E[ξ1(ζ > c)] ≤ .
c
As a consequence of this inequality, and because supπ∈Q Eπ [(|X − µ| + |µ|)|X|2 ] < ∞ (implied by condition
3), then for any  > 0 there exists a finite positive constant, λk−1 such that condition 4 of the lemma holds
for j = k. Given that λk−1 is finite, supπ∈Q Eπ [|X − µ| + |µ|] < ∞ and |∆mj | ≤ ∆λj imply condition 4
for j = 1, . . . , k − 1.
For pretest,
|∆mj | = |X| 1(|X| ∈ (λj−1 , λj ]),
so that we require that for any  > 0 we can find a finite number of regularization parameters, 0 = λ0 <
λ1 < . . . < λk−1 < λk = ∞, such that

Eπ [(|X − µ| + |µ|)|X| 1(|X| ∈ (λj−1 , λj ])] < ,

for j = 1, . . . , k. Applying the Cauchy-Schwarz inequality and uniform boundedness of fourth moments,
this condition is satisfied if we can choose uniformly bounded Pπ (|X| ∈ (λj−1 , λj ]), which is possible
under the assumption that X is continuously distributed with a (version of the) density that is uniformly
bounded. 
Proof of Corollary 1: From Theorem 2 and Lemma A.1, it follows immediately that
 

sup Pπ Ln (λn ) − inf R̄π (λ) >  → 0.
b
π∈Q λ∈[0,∞]

By definition,
R̄(m(., λ
bn ), π) = Eπ [Ln (λ
bn )].

Equation (7) thus follows if we can strengthen uniform convergence in probability to uniform L1 conver-
gence. To do so, we need to show uniform integrability of Ln (λ
bn ), as per Theorem 2.20 in van der Vaart
(1998).

47
Monotonicity, convexity of loss, and boundary conditions imply
n
bn ) ≤ 1
X 
Ln (λ µ2i + (Xi − µi )2 .
n i=1

Uniform integrability along arbitrary sequences πn , and thus L1 convergence, follows from the assumed
bounds on moments. 
Proof of Lemma 3: Recall the definition R̄(m(.), π) = Eπ [(m(X) − µ)2 ]. Expanding the square yields

Eπ [(m(X) − µ)2 ] = Eπ [(m(X) − X + X − µ)2 ]


= Eπ [(X − µ)2 ] + Eπ [(m(X) − X)2 ] + 2Eπ [(X − µ)(m(X) − X)].

By the form of the standard normal density,

∇x φ(x − µ) = −(x − µ)φ(x − µ).

Partial integration over the intervals ]xj , xj+1 [ (where we let x0 = −∞ and xJ+1 = ∞) yields
Z Z
Eπ [(X − µ)(m(X) − X)] = (x − µ) (m(x) − x) φ(x − µ) dx dπ(µ)
R R
J Z Z
X xj+1
=− (m(x) − x) ∇x φ(x − µ) dx dπ(µ)
j=0 R xj

J Z
"Z
X xj+1
= (∇m(x) − 1) φ(x − µ) dx
j=0 R xj

+ lim (m(x) − x)φ(x − µ) − lim (m(x) − x)φ(x − µ) dπ(µ)
x↓xj x↑xj+1
J
X
= Eπ [∇m(X)] − 1 + ∆mj f (xj ).
j=1


Proof of Lemma 4: Uniform convergence of the first term follows by the exact same arguments we used
to show uniform convergence of Ln (λ) to R̄π (λ) in Theorem 3. We thus focus on the second term, and
discuss its convergence on a case-by-case basis for our leading examples.
For ridge, this second term is equal to the constant
2
2 ∇x mR (x, λ) = ,
1+λ
and uniform convergence holds trivially.
For lasso, the second term is equal to

2 En [∇x mL (X, λ)] = 2 Pn (|X| > λ).

To prove uniform convergence of this term we slightly modify the proof of the Glivenko-Cantelly Theorem
(e.g., van der Vaart (1998), Theorem 19.1). Let Fn be the cumulative distribution function of X1 , . . . , Xn ,
and let Fπ be its population counterpart. It is enough to prove uniform convergence of Fn (λ),
!
sup Pπ sup |Fn (λ) − Fπ (λ)| >  →0 ∀ > 0.
π∈Q λ∈[0∞]

48
Using Chebyshev’s inequality and supπ∈Q varπ (1(X ≤ λ)) ≤ 1/4 for every λ ∈ [0, ∞], we obtain
p
sup |Fn (λ) − Fπ (λ)| → 0,
π∈Q

for every λ ∈ [0, ∞]. Next, we will establish that for any  > 0, it is possible to find a finite set of
regularization parameters 0 = λ0 < λ1 < · · · < λk = ∞ such that

sup max {Fπ (λj ) − Fπ (λj−1 )} < .


π∈Q 1≤j≤k

This assertion follows from the fact that fπ (x) is uniformly bounded by φ(0). The rest of the proof proceeds
as in the proof of Theorem 19.1 in van der Vaart (1998).
Let us finally turn to pre-testing. The objective function for pre-testing is equal to the one for lasso, plus
additional terms for the jumps at ±λ; the penalty term equals

2Pn (|X| > λ) + 2λ(fb(−λ) + fb(λ)).

Uniform convergence of the SURE criterion for pre-testing thus holds if (i) the conditions for lasso are
satisfied, and (ii) we have a uniformly consistent estimator of |x|fb(x). 
Proof of Lemma 6: First, notice that the assumptions of the lemma plus convexity of the square function
make Eπ [rn,k (λ)] finite. Now, i.i.d.-ness of (x1i , . . . , xki , µi , σi ) and mutual independence of (x1 , . . . , xk )
conditional on (µ, σ 2 ) imply,
h i
2
Eπ [rn,k (λ)] = Eπ (m(Xk−1 , λ) − xk )
h i h i
2 2
= Eπ (m(Xk−1 , λ) − µ) + Eπ (xk − µ)
= R̄π,k (λ) + Eπ [σ 2 ].


Proof of Theorem 4: We can decompose
n
1 Xh 2
i
rn,k (λ) = (m(Xk−1i , λ) − µi ) + (xki − µi )2 + 2 (m(Xk−1i , λ) − µi ) (xki − µi )
n i=1
n n
1X 2X
= Ln,k (λ) + (xki − µi )2 − (m(Xk−1i , λ) − µi ) (xki − µi ) . (A.8)
n i=1 n i=1

Theorem 3 and Lemma 2 imply that the first term on the last line of equation (A.8) converges uniformly
in quadratic mean to R̄π,k (λ). The second term does not depend on λ. Uniform convergence in quadratic
mean of this term to −v̄π = Eπ [σi2 ] follows immediately from the assumption that supπ∈Q Eπ [x4k ] < ∞.
To prove uniform convergence to zero in quadratic mean of the third term, notice that,
" n
!2 # n
1X 1 X 
Eπ (m(Xk−1i , λ) − µi )2 (xki − µi )2

Eπ (m(Xk−1i , λ) − µi )(xki − µi ) = 2
n i=1 n i=1
1 1/2
Eπ (m(Xk−1 , λ) − µ)4 Eπ (xk − µ)4
  

n
1 1/2
Eπ (Xk−1 − µ)4 + µ4 Eπ (xk − µ)4
  
≤ .
n
The condition supπ∈Q E[x4j ] < ∞ for j = 1, . . . k guarantees that the two expectations on the last line of
the last equation are uniformly bounded in π ∈ Q, which yields the first result of the theorem.
The second result follows from Theorem 2. 

49
Figure 1: Estimators

2
m(x; 6)

-2

-4
ridge
-6 lasso
pretest

-8
-8 -6 -4 -2 0 2 4 6 8
x

This graph plots mR (x, λ), mL (x, λ), and mP T (x, λ) as functions of x. The regularization parameters are
λ = 1 for ridge, λ = 2 for lasso, and λ = 4 for pretest.

50
Figure 2: Componentwise risk functions

11
ridge
10
lasso
pretest
9
mle
8
R(m("; 6); Pi )

0
-6 -4 -2 0 2 4 6
7i

This figure displays componentwise risk, R(m(·, λ)), as a function of µi for componentwise estimators,
where σi2 = 2. “mle” refers to the maximum likelihood (unregularized) estimator, µ bi = Xi , which has risk
equal to σi2 = 2. The regularization parameters are λ = 1 for ridge, λ = 2 for lasso, and λ = 4 for pretest,
as in Figure 1.

51
Figure 3: Risk for estimators in spike and normal setting

p = 0.00 p = 0.25

1 1 1 1

0.8 0.8 0.8 0.8

0.6 0.6 0.6 0.6

0.4 0.4 0.4 0.4

0.2 0.2 0.2 0.2

0 0 0 0

4 4 4 4
4 4 4 4
2 2 2 2
2 2 2 2
0 0 0 0 0 0 0 0
σ0 µ0 σ0 µ0 σ0 µ0 σ0 µ0
ridge lasso ridge lasso

1 1 1 1

0.8 0.8 0.8 0.8

0.6 0.6 0.6 0.6

0.4 0.4 0.4 0.4

0.2 0.2 0.2 0.2

0 0 0 0

4 4 4 4
4 4 4 4
2 2 2 2
2 2 2 2
0 0 0 0 0 0 0 0
σ0 µ0 σ0 µ0 σ0 µ0 σ0 µ0
pretest optimal pretest optimal

p = 0.50 p = 0.75

1 1 1 1

0.8 0.8 0.8 0.8

0.6 0.6 0.6 0.6

0.4 0.4 0.4 0.4

0.2 0.2 0.2 0.2

0 0 0 0

4 4 4 4
4 4 4 4
2 2 2 2
2 2 2 2
0 0 0 0 0 0 0 0
σ0 µ0 σ0 µ0 σ0 µ0 σ0 µ0
ridge lasso ridge lasso

1 1 1 1

0.8 0.8 0.8 0.8

0.6 0.6 0.6 0.6

0.4 0.4 0.4 0.4

0.2 0.2 0.2 0.2

0 0 0 0

4 4 4 4
4 4 4 4
2 2 2 2
2 2 2 2
0 0 0 0 0 0 0 0
σ0 µ0 σ0 µ0 σ0 µ0 σ0 µ0
pretest optimal pretest optimal

52
Figure 4: Best estimator in spike and normal setting

p = 0.00 p = 0.25
5 5

4 4

3 3
σ0

σ0
2 2

1 1

0 0
0 1 2 3 4 5 0 1 2 3 4 5
µ0 µ0

p = 0.50 p = 0.75
5 5

4 4

3 3
σ0

σ0

2 2

1 1

0 0
0 1 2 3 4 5 0 1 2 3 4 5
µ0 µ0

This figure compares integrated risk values attained by ridge, lasso, and pretest for different parameter
values of the spike and normal specification in Section 3.3. Blue circles are placed at parameters values
for which ridge minimizes integrated risk, green crosses at values for which lasso minimizes integrated risk,
and red dots are parameters values for which pretest minimizes integrated risk.

53
Figure 5: Neighborhood Effects: SURE Estimates

SURE as function of 6
1
ridge
lasso
pretest

0.8
SURE(6)

0.6

0.4

0.2
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5
6

54
Figure 6: Neighborhood Effects: Shrinkage Estimators

Shrinkage estimators
2.5

1.5

0.5
m(x)

0
b

-0.5

-1

-1.5

-2

-2.5
-3 -2 -1 0 1 2 3
x
Kernel estimate of the density of X
fb(x)

-3 -2 -1 0 1 2 3
x

The first panel shows the Koenker-Mizera NPEB estimator (solid line) along with the ridge, lasso, and
pretest estimators (dashed lines) evaluated at SURE-minimizing values of the regularization parameters.
The ridge estimator is linear, with positive slope equal to estimated risk, 0.29. Lasso is piecewise linear, with
kinks at the positive and negative versions of the SURE-minimizing value of the regularization parameter,
λ
bL,n = 1.34. Pretest is flat at zero, because SURE is minimized for values of λ higher than the maximum
absolute value of X1 , . . . , Xn . The second panel shows a kernel estimate of the distribution of X.

55
Figure 7: Arms Event Study: SURE Estimates

SURE as function of 6
1
ridge
lasso
pretest
0.8

0.6
SURE(6)

0.4

0.2

0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5


6

56
Figure 8: Arms Event Study: Shrinkage Estimators

Shrinkage estimators
4

1
m(x)

0
b

-1

-2

-3

-4
-3 -2 -1 0 1 2 3
x
Kernel estimate of the density of X
fb(x)

-3 -2 -1 0 1 2 3
x

The first panel shows the Koenker-Mizera NPEB estimator (solid line) along with the ridge, lasso, and
pretest estimators (dashed lines) evaluated at SURE-minimizing values of the regularization parameters.
The ridge estimator is linear, with positive slope equal to estimated risk, 0.50. Lasso is piecewise linear, with
kinks at the positive and negative versions of the SURE-minimizing value of the regularization parameter,
λ
bL,n = 1.50. Pretest is discontinuous at λ bP T,n = 2.39 and −λ bP T,n = −2.39.

57
Figure 9: Nonparametric Mincer Equation: SURE Estimates

SURE as function of 6
1.2
ridge
lasso
pretest

1.1
SURE(6)

0.9

0.8
0 0.5 1 1.5 2
6

58
Figure 10: Nonparametric Mincer Equation: Shrinkage Estimators

Shrinkage estimators
10

2
m(x)

0
b

-2

-4

-6

-8

-10
-10 -8 -6 -4 -2 0 2 4 6 8 10
x
Kernel estimate of the density of X
fb(x)

-10 -8 -6 -4 -2 0 2 4 6 8 10
x

The first panel shows the Koenker-Mizera NPEB estimator (solid line) along with the ridge, lasso, and
pretest estimators (dashed lines) evaluated at SURE-minimizing values of the regularization parameters.
The ridge estimator is linear, with positive slope equal to estimated risk, 0.996. Lasso is piecewise linear,
with kinks at the positive and negative versions of the SURE-minimizing value of the regularization pa-
rameter, λ = 0.59. Pretest is discontinuous at λbP T,n = 1.14 and −λ
bP T,n = −1.14. The second panel shows
a kernel estimate of the distribution of X.

59
TableTable
1: Average Compound
1: Average CompoundLoss
Loss Across 1000Simulations
Across 1000 Simulations
withwith
N =N50 = 50

SURE Cross-Validation Cross-Validation NPEB


(k = 4) (k = 20)
p µ0 σ0 ridge lasso pretest ridge lasso pretest ridge lasso pretest
0.00 0 2 0.80 0.89 1.02 0.83 0.90 1.12 0.81 0.88 1.12 0.94
0.00 0 4 0.95 0.98 1.02 0.96 0.98 1.08 0.94 0.97 1.08 1.15
0.00 0 6 0.97 0.99 1.01 0.97 0.99 1.05 0.97 0.99 1.07 1.21
0.00 2 2 0.89 0.96 1.01 0.90 0.95 1.06 0.89 0.95 1.09 0.93
0.00 2 4 0.96 0.99 1.01 0.96 0.98 1.06 0.96 0.98 1.09 1.13
0.00 2 6 0.97 0.99 1.01 0.99 1.00 1.06 0.97 0.98 1.07 1.21
0.00 4 2 0.95 1.00 1.01 0.95 0.99 1.02 0.95 1.00 1.04 0.93
0.00 4 4 0.97 1.00 1.01 0.99 1.01 1.06 0.97 0.99 1.07 1.15
0.00 4 6 0.99 1.00 1.02 0.99 1.00 1.05 0.99 1.00 1.07 1.21
0.25 0 2 0.75 0.78 1.01 0.77 0.79 1.12 0.77 0.78 1.08 0.85
0.25 0 4 0.92 0.90 1.00 0.94 0.90 1.07 0.92 0.88 1.05 1.04
0.25 0 6 0.97 0.93 0.99 0.97 0.92 1.02 0.96 0.92 1.02 1.09
0.25 2 2 0.87 0.88 1.01 0.87 0.86 1.06 0.87 0.86 1.07 0.88
0.25 2 4 0.93 0.90 0.99 0.94 0.89 1.04 0.95 0.90 1.04 1.03
0.25 2 6 0.97 0.93 0.98 0.98 0.93 1.03 0.97 0.93 1.02 1.09
0.25 4 2 0.94 0.95 0.99 0.95 0.95 1.03 0.95 0.95 1.04 0.92
0.25 4 4 0.97 0.94 0.99 0.97 0.93 1.03 0.97 0.93 1.02 1.04
0.25 4 6 0.98 0.94 0.98 0.98 0.93 1.02 0.98 0.93 1.00 1.09
0.50 0 2 0.67 0.64 0.94 0.69 0.64 0.96 0.67 0.62 0.90 0.69
0.50 0 4 0.89 0.75 0.92 0.91 0.76 0.92 0.89 0.75 0.89 0.82
0.50 0 6 0.95 0.80 0.90 0.95 0.79 0.87 0.96 0.78 0.84 0.84
0.50 2 2 0.80 0.72 0.96 0.82 0.72 0.96 0.81 0.72 0.93 0.73
0.50 2 4 0.92 0.77 0.94 0.93 0.76 0.90 0.90 0.75 0.87 0.83
0.50 2 6 0.96 0.80 0.92 0.95 0.77 0.83 0.95 0.78 0.82 0.86
0.50 4 2 0.91 0.82 0.95 0.92 0.81 0.90 0.92 0.81 0.87 0.75
0.50 4 4 0.94 0.80 0.93 0.94 0.79 0.87 0.94 0.78 0.83 0.81
0.50 4 6 0.97 0.81 0.93 0.97 0.79 0.83 0.96 0.78 0.79 0.85
0.75 0 2 0.51 0.43 0.61 0.51 0.42 0.57 0.50 0.41 0.57 0.46
0.75 0 4 0.77 0.50 0.59 0.80 0.51 0.58 0.78 0.50 0.57 0.52
0.75 0 6 0.88 0.54 0.55 0.90 0.54 0.55 0.88 0.53 0.52 0.51
0.75 2 2 0.66 0.49 0.65 0.67 0.49 0.63 0.67 0.49 0.62 0.47
0.75 2 4 0.81 0.53 0.59 0.86 0.54 0.58 0.82 0.52 0.56 0.51
0.75 2 6 0.90 0.56 0.54 0.91 0.56 0.53 0.90 0.55 0.52 0.51
0.75 4 2 0.84 0.59 0.64 0.85 0.57 0.60 0.84 0.58 0.58 0.49
0.75 4 4 0.88 0.56 0.57 0.89 0.55 0.53 0.89 0.55 0.52 0.50
0.75 4 6 0.92 0.57 0.53 0.92 0.55 0.49 0.92 0.56 0.51 0.50
0.95 0 2 0.18 0.15 0.17 0.17 0.12 0.15 0.18 0.13 0.19 0.17
0.95 0 4 0.37 0.19 0.17 0.37 0.17 0.17 0.37 0.18 0.20 0.18
0.95 0 6 0.49 0.21 0.16 0.51 0.19 0.16 0.49 0.19 0.19 0.16
0.95 2 2 0.26 0.17 0.18 0.27 0.16 0.18 0.27 0.17 0.23 0.17
0.95 2 4 0.40 0.19 0.17 0.43 0.18 0.16 0.40 0.18 0.20 0.17
0.95 2 6 0.53 0.21 0.15 0.53 0.19 0.15 0.53 0.20 0.18 0.16
0.95 4 2 0.44 0.21 0.18 0.45 0.20 0.18 0.45 0.20 0.22 0.18
0.95 4 4 0.51 0.21 0.16 0.51 0.19 0.17 0.52 0.20 0.19 0.17
0.95 4 6 0.57 0.21 0.15 0.58 0.19 0.14 0.57 0.20 0.18 0.16

1
60
TableTable
2: Average Compound
1: Average CompoundLoss
Loss Across 1000Simulations
Across 1000 Simulations
withwith
N =N200= 200

SURE Cross-Validation Cross-Validation NPEB


(k = 4) (k = 20)
p µ0 σ0 ridge lasso pretest ridge lasso pretest ridge lasso pretest
0.00 0 2 0.80 0.87 1.01 0.82 0.88 1.04 0.80 0.87 1.04 0.86
0.00 0 4 0.94 0.96 1.00 0.95 0.97 1.02 0.94 0.96 1.03 1.03
0.00 0 6 0.98 0.99 1.01 0.98 0.99 1.02 0.98 0.99 1.03 1.09
0.00 2 2 0.89 0.95 1.00 0.90 0.95 1.02 0.89 0.94 1.03 0.86
0.00 2 4 0.95 0.97 1.00 0.96 0.98 1.02 0.96 0.97 1.03 1.03
0.00 2 6 0.98 1.00 1.01 0.98 0.99 1.02 0.98 0.99 1.03 1.10
0.00 4 2 0.95 1.00 1.00 0.96 1.00 1.01 0.95 1.00 1.02 0.86
0.00 4 4 0.97 0.99 1.00 0.97 0.99 1.01 0.97 0.98 1.02 1.03
0.00 4 6 0.98 0.99 1.01 0.98 0.99 1.01 0.99 0.99 1.03 1.09
0.25 0 2 0.75 0.77 1.00 0.77 0.78 1.07 0.75 0.75 1.04 0.78
0.25 0 4 0.92 0.88 0.99 0.93 0.88 1.02 0.93 0.88 1.02 0.95
0.25 0 6 0.96 0.91 0.99 0.97 0.91 1.01 0.96 0.91 1.00 0.98
0.25 2 2 0.86 0.86 1.00 0.87 0.86 1.03 0.86 0.85 1.03 0.80
0.25 2 4 0.94 0.90 1.00 0.95 0.90 1.02 0.93 0.88 1.01 0.95
0.25 2 6 0.97 0.92 0.99 0.97 0.92 1.00 0.97 0.91 1.00 0.98
0.25 4 2 0.94 0.95 1.00 0.94 0.93 1.00 0.94 0.94 1.01 0.83
0.25 4 4 0.96 0.92 0.99 0.97 0.92 1.01 0.95 0.91 0.99 0.94
0.25 4 6 0.97 0.92 0.98 0.97 0.92 0.99 0.97 0.92 0.98 0.98
0.50 0 2 0.67 0.61 0.90 0.69 0.62 0.93 0.67 0.61 0.90 0.63
0.50 0 4 0.89 0.74 0.90 0.90 0.74 0.89 0.89 0.73 0.86 0.76
0.50 0 6 0.94 0.77 0.86 0.95 0.76 0.82 0.95 0.77 0.83 0.77
0.50 2 2 0.80 0.70 0.94 0.82 0.71 0.93 0.80 0.69 0.91 0.65
0.50 2 4 0.92 0.75 0.92 0.92 0.75 0.87 0.91 0.74 0.86 0.76
0.50 2 6 0.95 0.78 0.88 0.96 0.78 0.83 0.95 0.77 0.82 0.77
0.50 4 2 0.91 0.80 0.94 0.92 0.81 0.87 0.91 0.80 0.87 0.67
0.50 4 4 0.94 0.78 0.94 0.95 0.78 0.83 0.94 0.77 0.82 0.74
0.50 4 6 0.96 0.79 0.92 0.97 0.79 0.81 0.97 0.78 0.80 0.76
0.75 0 2 0.50 0.39 0.55 0.51 0.40 0.57 0.50 0.39 0.55 0.40
0.75 0 4 0.80 0.50 0.55 0.81 0.50 0.57 0.80 0.49 0.56 0.48
0.75 0 6 0.90 0.53 0.49 0.91 0.53 0.52 0.89 0.52 0.50 0.47
0.75 2 2 0.67 0.47 0.59 0.68 0.47 0.61 0.67 0.46 0.59 0.42
0.75 2 4 0.83 0.50 0.53 0.84 0.51 0.56 0.83 0.51 0.55 0.46
0.75 2 6 0.91 0.54 0.50 0.91 0.54 0.52 0.91 0.53 0.51 0.47
0.75 4 2 0.83 0.56 0.55 0.85 0.57 0.58 0.83 0.55 0.56 0.42
0.75 4 4 0.89 0.54 0.50 0.90 0.54 0.52 0.88 0.53 0.50 0.45
0.75 4 6 0.93 0.55 0.47 0.93 0.55 0.49 0.92 0.53 0.48 0.46
0.95 0 2 0.17 0.12 0.14 0.17 0.12 0.14 0.17 0.12 0.15 0.12
0.95 0 4 0.43 0.17 0.16 0.43 0.17 0.16 0.42 0.17 0.16 0.14
0.95 0 6 0.61 0.18 0.14 0.62 0.18 0.14 0.61 0.18 0.14 0.14
0.95 2 2 0.28 0.16 0.17 0.29 0.16 0.18 0.28 0.15 0.17 0.14
0.95 2 4 0.46 0.17 0.15 0.48 0.17 0.16 0.47 0.17 0.16 0.14
0.95 2 6 0.63 0.19 0.14 0.64 0.19 0.14 0.63 0.18 0.14 0.13
0.95 4 2 0.49 0.20 0.17 0.50 0.20 0.17 0.48 0.19 0.17 0.14
0.95 4 4 0.58 0.19 0.14 0.59 0.19 0.14 0.59 0.18 0.15 0.14
0.95 4 6 0.68 0.19 0.13 0.70 0.19 0.13 0.67 0.19 0.14 0.13

61
Table Table
3: Average Compound
1: Average Loss
Compound LossAcross 1000Simulations
Across 1000 Simulations with
with N =N = 1000
200

SURE Cross-Validation Cross-Validation NPEB


(k = 4) (k = 20)
p µ0 σ0 ridge lasso pretest ridge lasso pretest ridge lasso pretest
0.00 0 2 0.80 0.87 1.01 0.81 0.87 1.01 0.80 0.86 1.01 0.82
0.00 0 4 0.94 0.96 1.00 0.95 0.97 1.01 0.94 0.96 1.00 0.97
0.00 0 6 0.97 0.98 1.00 0.98 0.98 1.00 0.97 0.98 1.01 1.02
0.00 2 2 0.89 0.94 1.00 0.90 0.95 1.00 0.89 0.94 1.01 0.82
0.00 2 4 0.95 0.97 1.00 0.96 0.97 1.00 0.95 0.97 1.01 0.98
0.00 2 6 0.97 0.98 1.00 0.98 0.99 1.00 0.97 0.98 1.01 1.02
0.00 4 2 0.95 1.00 1.00 0.96 1.00 1.00 0.95 0.99 1.00 0.82
0.00 4 4 0.97 0.99 1.00 0.97 0.99 1.01 0.97 0.99 1.01 0.97
0.00 4 6 0.98 0.99 1.00 0.98 0.99 1.00 0.98 0.99 1.01 1.02
0.25 0 2 0.75 0.76 1.00 0.76 0.77 1.02 0.75 0.75 1.01 0.74
0.25 0 4 0.92 0.88 0.99 0.93 0.88 1.00 0.92 0.87 1.00 0.89
0.25 0 6 0.97 0.91 0.99 0.97 0.91 0.99 0.96 0.91 0.99 0.92
0.25 2 2 0.86 0.85 1.00 0.87 0.86 1.01 0.86 0.84 1.01 0.76
0.25 2 4 0.94 0.89 1.00 0.94 0.89 1.00 0.94 0.89 1.00 0.89
0.25 2 6 0.97 0.91 0.99 0.97 0.91 0.99 0.97 0.91 0.99 0.92
0.25 4 2 0.94 0.94 0.99 0.94 0.94 0.99 0.94 0.93 0.99 0.79
0.25 4 4 0.96 0.92 0.99 0.96 0.91 0.99 0.96 0.91 0.99 0.88
0.25 4 6 0.98 0.92 0.99 0.98 0.92 0.98 0.97 0.92 0.98 0.91
0.50 0 2 0.67 0.60 0.87 0.68 0.61 0.90 0.67 0.60 0.87 0.60
0.50 0 4 0.89 0.73 0.85 0.90 0.73 0.86 0.89 0.72 0.85 0.71
0.50 0 6 0.95 0.77 0.81 0.95 0.77 0.82 0.95 0.76 0.81 0.72
0.50 2 2 0.80 0.70 0.90 0.81 0.71 0.90 0.80 0.69 0.89 0.62
0.50 2 4 0.91 0.74 0.85 0.92 0.75 0.85 0.91 0.74 0.84 0.70
0.50 2 6 0.95 0.77 0.80 0.96 0.78 0.81 0.95 0.77 0.80 0.71
0.50 4 2 0.91 0.80 0.87 0.92 0.80 0.84 0.91 0.80 0.84 0.63
0.50 4 4 0.94 0.77 0.88 0.95 0.78 0.81 0.94 0.77 0.80 0.68
0.50 4 6 0.96 0.78 0.87 0.97 0.78 0.79 0.96 0.78 0.78 0.70
0.75 0 2 0.50 0.38 0.54 0.51 0.40 0.56 0.50 0.38 0.54 0.38
0.75 0 4 0.80 0.49 0.53 0.81 0.50 0.55 0.80 0.48 0.53 0.44
0.75 0 6 0.90 0.52 0.49 0.91 0.53 0.51 0.90 0.52 0.49 0.43
0.75 2 2 0.67 0.46 0.57 0.68 0.47 0.59 0.67 0.46 0.58 0.40
0.75 2 4 0.83 0.50 0.52 0.85 0.51 0.55 0.83 0.50 0.53 0.44
0.75 2 6 0.91 0.53 0.48 0.92 0.53 0.50 0.91 0.52 0.48 0.43
0.75 4 2 0.83 0.55 0.53 0.85 0.56 0.55 0.83 0.55 0.54 0.39
0.75 4 4 0.89 0.53 0.49 0.90 0.54 0.51 0.89 0.52 0.49 0.41
0.75 4 6 0.93 0.54 0.46 0.94 0.54 0.48 0.93 0.53 0.47 0.42
0.95 0 2 0.17 0.11 0.14 0.17 0.12 0.14 0.17 0.11 0.14 0.11
0.95 0 4 0.44 0.16 0.15 0.45 0.16 0.16 0.44 0.16 0.15 0.13
0.95 0 6 0.63 0.18 0.13 0.65 0.18 0.14 0.64 0.17 0.14 0.12
0.95 2 2 0.28 0.15 0.16 0.29 0.15 0.18 0.29 0.14 0.17 0.12
0.95 2 4 0.49 0.16 0.14 0.50 0.17 0.16 0.50 0.16 0.15 0.12
0.95 2 6 0.66 0.18 0.13 0.67 0.18 0.14 0.66 0.18 0.13 0.12
0.95 4 2 0.50 0.19 0.16 0.51 0.19 0.17 0.50 0.19 0.16 0.12
0.95 4 4 0.61 0.18 0.14 0.62 0.18 0.14 0.61 0.18 0.14 0.12
0.95 4 6 0.72 0.18 0.13 0.73 0.19 0.13 0.71 0.18 0.13 0.12
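
To make the construction of these numbers concrete, the sketch below shows how one design point in the SURE columns could be simulated. It is a minimal illustration, not the authors' code: it assumes the spike-and-normal design of the simulations (each µi is zero with probability p and otherwise drawn from N(µ0, σ0²), observed as Xi ~ N(µi, 1)), applies componentwise ridge, lasso, and pretest shrinkage with the regularization parameter chosen by minimizing Stein's unbiased risk estimate over an illustrative grid, and averages the compound loss (1/N) Σ(µ̂i − µi)² across replications. The cross-validation (k = 4, k = 20) and NPEB columns would replace the SURE step with sample splitting or nonparametric empirical Bayes and are omitted here; the function names, grid, and example call are illustrative choices.

```python
# Minimal sketch (not the authors' code) of one entry in the SURE columns:
# average compound loss of ridge, lasso, and pretest with SURE-chosen
# regularization under the spike-and-normal design.
import numpy as np


def ridge(x, lam):
    # Componentwise linear shrinkage toward zero.
    return x / (1.0 + lam)


def lasso(x, lam):
    # Soft thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)


def pretest(x, lam):
    # Hard thresholding.
    return x * (np.abs(x) > lam)


def sure(x, lam, estimator):
    # Stein's unbiased risk estimate per observation for unit-variance noise.
    # Uses the almost-everywhere derivative; for the discontinuous pretest
    # rule this is only a heuristic.
    m = estimator(x, lam)
    if estimator is ridge:
        divergence = len(x) / (1.0 + lam)
    else:  # lasso and pretest: derivative is 1{|x| > lam} almost everywhere
        divergence = np.sum(np.abs(x) > lam)
    return np.mean((m - x) ** 2) + 2.0 * divergence / len(x) - 1.0


def average_compound_loss(p, mu0, sigma0, N=1000, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 10.0, 201)  # illustrative grid for lambda
    losses = {f.__name__: [] for f in (ridge, lasso, pretest)}
    for _ in range(reps):
        zero = rng.random(N) < p
        mu = np.where(zero, 0.0, rng.normal(mu0, sigma0, N))
        x = mu + rng.normal(0.0, 1.0, N)
        for f in (ridge, lasso, pretest):
            lam = grid[np.argmin([sure(x, l, f) for l in grid])]
            losses[f.__name__].append(np.mean((f(x, lam) - mu) ** 2))
    return {name: np.mean(vals) for name, vals in losses.items()}


# One design point from the table above, with fewer replications to keep
# the example quick.
print(average_compound_loss(p=0.50, mu0=2.0, sigma0=4.0, N=1000, reps=100))
```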

