Econometrics
Bruce E. Hansen
© 2000, 2014¹
University of Wisconsin
Department of Economics
¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents

Preface

1 Introduction
  1.1 What is Econometrics?
  1.2 The Probability Approach to Econometrics
  1.3 Econometric Terms and Notation
  1.4 Observational Data
  1.5 Standard Data Structures
  1.6 Sources for Economic Data
  1.7 Econometric Software
  1.8 Reading the Manuscript
  1.9 Common Symbols

15 Endogeneity
  15.1 Instrumental Variables
  15.2 Reduced Form
  15.3 Identification
  15.4 Estimation
  15.5 Special Cases: IV and 2SLS
  15.6 Bekker Asymptotics
  15.7 Identification Failure
  Exercises

B Probability
  B.1 Foundations
  B.2 Random Variables
  B.3 Expectation
  B.4 Gamma Function
  B.5 Common Distributions
  B.6 Multivariate Random Variables
  B.7 Conditional Distributions and Expectation
  B.8 Transformations
  B.9 Normal and Related Distributions
  B.10 Inequalities
  B.11 Maximum Likelihood

Preface
This book is intended to serve as the textbook for a first-year graduate course in econometrics.
It can be used as a stand-alone text, or be used as a supplement to another text.
Students are assumed to have an understanding of multivariate calculus, probability theory,
linear algebra, and mathematical statistics. A prior course in undergraduate econometrics would
be helpful, but not required. Two excellent undergraduate textbooks are Wooldridge (2009) and
Stock and Watson (2010).
For reference, some of the basic tools of matrix algebra, probability, and statistics are reviewed
in the Appendix.
For students wishing to deepen their knowledge of matrix algebra in relation to their study of
econometrics, I recommend Matrix Algebra by Abadir and Magnus (2005).
An excellent introduction to probability and statistics is Statistical Inference by Casella and
Berger (2002). For those wanting a deeper foundation in probability, I recommend Ash (1972)
or Billingsley (1995). For more advanced statistical theory, I recommend Lehmann and Casella
(1998), van der Vaart (1998), Shao (2003), and Lehmann and Romano (2005).
For further study in econometrics beyond this text, I recommend Davidson (1994) for asymp-
totic theory, Hamilton (1994) for time-series methods, Wooldridge (2002) for panel data and discrete
response models, and Li and Racine (2007) for nonparametrics and semiparametric econometrics.
Beyond these texts, the Handbook of Econometrics series provides advanced summaries of contem-
porary econometric methods and theory.
The end-of-chapter exercises are important parts of the text and are meant to help teach students
of econometrics. Answers are not provided, and this is intentional.
I would like to thank Ying-Ying Lee for providing research assistance in preparing some of the
empirical examples presented in the text.
As this is a manuscript in progress, some parts are quite incomplete, and there are many topics
which I plan to add. In general, the earlier chapters are the most complete while the later chapters
need significant work and revision.
Chapter 1
Introduction
This definition remains valid today, although some terms have evolved somewhat in their usage.
Today, we would say that econometrics is the unified study of economic models, mathematical
statistics, and economic data.
Within the field of econometrics there are sub-divisions and specializations. Econometric the-
ory concerns the development of tools and methods, and the study of the properties of econometric
methods. Applied econometrics is a term describing the development of quantitative economic
models and the application of econometric methods to these models using economic data.
The probability approach to econometrics was articulated by Trygve Haavelmo in his seminal
paper “The probability approach in econometrics”, Econometrica (1944). Haavelmo argued that
quantitative economic models must necessarily be probability models (by which today we would
mean stochastic). Deterministic models are blatantly inconsistent with observed economic quan-
tities, and it is incoherent to apply deterministic models to non-deterministic data. Economic
models should be explicitly designed to incorporate randomness; stochastic errors should not be
simply added to deterministic models to make them random. Once we acknowledge that an eco-
nomic model is a probability model, it follows naturally that the appropriate way to quantify,
estimate, and conduct inferences about the economy is through the powerful theory of mathe-
matical statistics. The appropriate method for a quantitative economic analysis follows from the
probabilistic construction of the economic model.
Haavelmo’s probability approach was quickly embraced by the economics profession. Today no
quantitative work in economics shuns its fundamental vision.
While all economists embrace the probability approach, there has been some evolution in its
implementation.
The structural approach is the closest to Haavelmo’s original idea. A probabilistic economic
model is specified, and the quantitative analysis performed under the assumption that the economic
model is correctly specified. Researchers often describe this as “taking their model seriously.” The
structural approach typically leads to likelihood-based analysis, including maximum likelihood and
Bayesian estimation.
A criticism of the structural approach is that it is misleading to treat an economic model
as correctly specified. Rather, it is more accurate to view a model as a useful abstraction or
approximation. In this case, how should we interpret structural econometric analysis? The quasi-
structural approach to inference views a structural economic model as an approximation rather
than the truth. This theory has led to the concepts of the pseudo-true value (the parameter value
defined by the estimation problem), the quasi-likelihood function, quasi-MLE, and quasi-likelihood
inference.
Closely related is the semiparametric approach. A probabilistic economic model is partially
specified but some features are left unspecified. This approach typically leads to estimation methods
such as least-squares and the Generalized Method of Moments. The semiparametric approach
dominates contemporary econometrics, and is the main focus of this textbook.
Another branch of quantitative structural economics is the calibration approach. Similar
to the quasi-structural approach, the calibration approach interprets structural models as approx-
imations and hence inherently false. The difference is that the calibrationist literature rejects
mathematical statistics (deeming classical theory as inappropriate for approximate models) and
instead selects parameters by matching model and data moments using non-statistical ad hoc
methods.
Economists typically denote variables by the italicized roman characters y, x, and/or z. The
convention in econometrics is to use the character y to denote the variable to be explained, while
the characters x and z are used to denote the conditioning (explaining) variables.
Following mathematical convention, real numbers (elements of the real line R) are written using
lower case italics such as y, and vectors (elements of Rk ) by lower case bold italics such as x, e.g.
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}.$$
The i’th observation is the set (yi, xi, zi). The sample is the set {(yi, xi, zi) : i = 1, ..., n}.
It is proper mathematical practice to use upper case X for random variables and lower case x for
realizations or specific values. Since we use upper case to denote matrices, the distinction between
random variables and their realizations is not rigorously followed in econometric notation. Thus the
notation yi will in some places refer to a random variable, and in other places a specific realization.
This is undesirable, but there is little to be done about it without terrifically complicating the
notation. Hopefully there will be no confusion as the use should be evident from the context.
We typically use Greek letters such as β, θ and σ 2 to denote unknown parameters of an econo-
metric model, and will use boldface, e.g. β or θ, when these are vector-valued. Estimates are
typically denoted by putting a hat “^”, tilde “~” or bar “-” over the corresponding letter, e.g. β̂
and β̃ are estimates of β.
The covariance matrix of an econometric estimator will typically be written using the capital boldface $V$, often with a subscript to denote the estimator, e.g. $V_{\hat\beta} = \mathrm{var}(\hat\beta)$ as the covariance matrix for $\hat\beta$. Hopefully without causing confusion, we will use the notation $V_\beta = \mathrm{avar}(\hat\beta)$ to denote the asymptotic covariance matrix of $\sqrt{n}(\hat\beta - \beta)$ (the variance of the asymptotic distribution). Estimates will be denoted by appending hats or tildes, e.g. $\hat{V}_\beta$ is an estimate of $V_\beta$.
Instead, most economic data is observational. To continue the above example, through data
collection we can record the level of a person’s education and their wage. With such data we
can measure the joint distribution of these variables, and assess the joint dependence. But from
observational data it is difficult to infer causality, as we are not able to manipulate one variable to
see the direct effect on the other. For example, a person’s level of education is (at least partially)
determined by that person’s choices. These factors are likely to be affected by their personal abilities
and attitudes towards work. The fact that a person is highly educated suggests a high level of ability,
which suggests a high relative wage. This is an alternative explanation for an observed positive
correlation between educational levels and wages. High ability individuals do better in school,
and therefore choose to attain higher levels of education, and their high ability is the fundamental
reason for their high wages. The point is that multiple explanations are consistent with a positive
correlation between schooling levels and wages. Knowledge of the joint distribution alone may
not be able to distinguish between these explanations.
Most economic data sets are observational, not experimental. This means
that all variables must be treated as random and possibly jointly deter-
mined.
This discussion means that it is difficult to infer causality from observational data alone. Causal
inference requires identification, and this is based on strong assumptions. We will discuss these
issues on occasion throughout the text.
Data Structures
• Cross-section
• Time-series
• Panel
Sources for Economic Data

• US Census
• CompuStat
Another good source of data is from authors of published empirical studies. Most journals
in economics require authors of published papers to make their datasets generally available. For
example, in its instructions for submission, Econometrica states:
Econometrica has the policy that all empirical, experimental and simulation results must
be replicable. Therefore, authors of accepted papers must submit data sets, programs,
and information on empirical analysis, experiments and simulations that are needed for
replication and some limited sensitivity analysis.
The American Economic Review states:

All data used in analysis must be made available to any researcher for purposes of
replication.
It is the policy of the Journal of Political Economy to publish papers only if the data
used in the analysis are clearly and precisely documented and are readily available to
any researcher for purposes of replication.
If you are interested in using the data from a published paper, first check the journal’s website,
as many journals archive data and replication programs online. Second, check the website(s) of
the paper’s author(s). Most academic economists maintain webpages, and some make available
replication files complete with data and programs. If these investigations fail, email the author(s),
politely requesting the data. You may need to be persistent.
As a matter of professional etiquette, all authors absolutely have the obligation to make their
data and programs available. Unfortunately, many fail to do so, and typically for poor reasons.
The irony of the situation is that it is typically in the best interests of a scholar to make as much of
their work (including all data and programs) freely available, as this only increases the likelihood
of their work being cited and having an impact.
Keep this in mind as you start your own empirical project. Remember that as part of your end
product, you will need (and want) to provide all data and programs to the community of scholars.
The greatest form of flattery is to learn that another scholar has read your paper, wants to extend
your work, or wants to use your empirical methods. In addition, public openness provides a healthy
incentive for transparency and integrity in empirical analysis.
High-level matrix programming languages such as Gauss, Matlab, and R are very flexible for statistical
analysis, and it is easier to program new methods in them than in STATA. Some disadvantages are that
you have to do much of the programming yourself, programming complicated procedures takes
significant time, and programming errors are hard to prevent and difficult to detect and eliminate.
Of these languages, Gauss used to be quite popular among econometricians, but now Matlab is
more popular. A smaller but growing group of econometricians are enthusiastic fans of R, which,
uniquely among these languages, is open-source, user-contributed, and, best of all, completely free!
For highly-intensive computational tasks, some economists write their programs in a standard
programming language such as Fortran or C. This can lead to major gains in computational speed,
at the cost of increased time in programming and debugging.
As these different packages have distinct advantages, many empirical economists end up using
more than one package. As a student of econometrics, you will learn at least one of these packages,
and probably more than one.
Chapter 2

Conditional Expectation and Projection

2.1 Introduction
The most commonly applied econometric tool is least-squares estimation, also known as regres-
sion. As we will see, least-squares is a tool to estimate an approximate conditional mean of one
variable (the dependent variable) given another set of variables (the regressors, conditioning
variables, or covariates).
In this chapter we abstract from estimation, and focus on the probabilistic foundation of the
conditional expectation model and its projection approximation.
When we say that a person’s wage is random we mean that we do not know their wage before it is
measured, and we treat observed wage rates as realizations from the distribution F. Treating un-
observed wages as random variables and observed wages as realizations is a powerful mathematical
abstraction which allows us to use the tools of mathematical probability.
A useful thought experiment is to imagine dialing a telephone number selected at random, and
then asking the person who responds to tell us their wage rate. (Assume for simplicity that all
workers have equal access to telephones, and that the person who answers your call will respond
honestly.) In this thought experiment, the wage of the person you have called is a single draw from
the distribution F of wages in the population. By making many such phone calls we can learn the
distribution F of the entire population.
When a distribution function F is differentiable we define the probability density function

$$f(u) = \frac{d}{du} F(u).$$
The density contains the same information as the distribution function, but the density is typically
easier to visually interpret.
Figure 2.1: Wage Distribution and Density. All full-time U.S. workers. (Left panel: wage distribution function; right panel: wage density.)
In Figure 2.1 we display estimates¹ of the probability distribution function (on the left) and
density function (on the right) of U.S. wage rates in 2009. We see that the density is peaked around
$15, and most of the probability mass appears to lie between $10 and $40. These are ranges for
typical wage rates in the U.S. population.
Important measures of central tendency are the median and the mean. The median m of a continuous² distribution F is the unique solution to

$$F(m) = \frac{1}{2}.$$

The median U.S. wage ($19.23) is indicated in the left panel of Figure 2.1 by the arrow. The median
is a robust³ measure of central tendency, but it is tricky to use for many calculations as it is not a
linear operator.
The expectation or mean of a random variable y with density f is

$$\mu = E(y) = \int_{-\infty}^{\infty} u f(u)\, du.$$
A general definition of the mean is presented in Section 2.31. The mean U.S. wage ($23.90) is
indicated in the right panel of Figure 2.1 by the arrow. Here we have used the common and
convenient convention of using the single character y to denote a random variable, rather than the
more cumbersome label wage.
We sometimes use the notation Ey instead of E(y) when the variable whose
expectation is being taken is clear from the context. There is no distinction in meaning.
The mean is a convenient measure of central tendency because it is a linear operator and
arises naturally in many economic models. A disadvantage of the mean is that it is not robust⁴,
especially in the presence of substantial skewness or thick tails, which are both features of the wage
distribution as can be seen easily in the right panel of Figure 2.1. Another way of viewing this
is that 64% of workers earn less than the mean wage of $23.90, suggesting that it is incorrect to
describe the mean as a “typical” wage rate.

¹ The distribution and density are estimated nonparametrically from the sample of 50,742 full-time non-military wage-earners reported in the March 2009 Current Population Survey. The wage rate is constructed as annual individual wage and salary earnings divided by hours worked.
² If F is not continuous the definition is m = inf{u : F(u) ≥ 1/2}.
³ The median is not sensitive to perturbations in the tails of the distribution.
⁴ The mean is sensitive to perturbations in the tails of the distribution.
In this context it is useful to transform the data by taking the natural logarithm. Figure 2.2
shows the density of log hourly wages log(wage) for the same population, with its mean 2.95 drawn
in with the arrow. The density of log wages is much less skewed and fat-tailed than the density of
the level of wages, so its mean
E (log(wage)) = 2.95
is a much better (more robust) measure of central tendency of the distribution. For this reason,
wage regressions typically use log wages as a dependent variable rather than the level of wages.
Another useful way to summarize the probability distribution F (u) is in terms of its quantiles.
For any α ∈ (0, 1), the α’th quantile of the continuous distribution F is the real number qα which
satisfies
F (qα ) = α.
The quantile function qα , viewed as a function of α, is the inverse of the distribution function F.
The most commonly used quantile is the median, that is, q0.5 = m. We sometimes refer to quantiles
by the percentile representation of α, and in this case they are often called percentiles, e.g. the
median is the 50th percentile.
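To make the quantile definition concrete, here is a minimal numerical sketch. The data are simulated from a right-skewed (log-normal) distribution purely for illustration; they are not the CPS wage sample used in the text. The call to np.quantile is the sample analog of the population quantile qα.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a right-skewed "wage-like" sample (log-normal); purely illustrative
wages = np.exp(rng.normal(loc=3.0, scale=0.6, size=100_000))

median = np.quantile(wages, 0.5)            # q_0.5 = m, the median
q10, q90 = np.quantile(wages, [0.1, 0.9])   # 10th and 90th percentiles
mean = wages.mean()

print(f"median = {median:.2f}, mean = {mean:.2f}")  # mean exceeds median under right skew
print(f"10th percentile = {q10:.2f}, 90th percentile = {q90:.2f}")
```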
Figure 2.3: Log Wage Density by Gender and Race. (Legend: white men, white women, black men, black women.)
The values 3.05 and 2.81 are the mean log wages in the subpopulations of men and women
workers. They are called the conditional means (or conditional expectations) of log wages
given gender. We can write their specific values as

E(log(wage) | gender = man) = 3.05
E(log(wage) | gender = woman) = 2.81
A difference in expected log wages of 0.24 implies an average 24% difference between the wages
of men and women, which is quite substantial. (For an explanation of logarithmic and percentage
differences see Section 2.4.)
Consider further splitting the men and women subpopulations by race, dividing the population
into whites, blacks, and other races. We display the log wage density functions of four of these
groups on the right in Figure 2.3. Again we see that the primary difference between the four density
functions is their central tendency.
Focusing on the means of these distributions, Table 2.1 reports the mean log wage for each of
the six sub-populations.

Table 2.1: Mean log(wage) by gender and race

          men    women
  white   3.07   2.82
  black   2.86   2.73
  other   3.03   2.86
The entries in Table 2.1 are the conditional means of log(wage) given gender and race. For
example
E (log(wage) | gender = man, race = white) = 3.07
and
E (log(wage) | gender = woman, race = black) = 2.73
One benefit of focusing on conditional means is that they reduce complicated distributions
to a single summary measure, and thereby facilitate comparisons across groups. Because of this
simplifying property, conditional means are the primary interest of regression analysis and are a
major focus in econometrics.
Table 2.1 allows us to easily calculate average wage differences between groups. For example,
we can see that the wage gap between men and women continues after disaggregation by race, as
the average gap between white men and white women is 25%, and that between black men and
black women is 13%. We also can see that there is a race gap, as the average wages of blacks are
substantially less than the other race categories. In particular, the average wage gap between white
men and black men is 21%, and that between white women and black women is 9%.
We display in Figure 2.5 the conditional means of log(wage) for white men and white women as a
function of education. The plot is quite revealing. We see that the conditional mean is increasing in
years of education, but at a different rate for schooling levels above and below nine years. Another
striking feature of Figure 2.5 is that the gap between men and women is roughly constant for all
education levels. As the variables are measured in logs this implies a constant average percentage
gap between men and women regardless of educational attainment.
In many cases it is convenient to simplify the notation by writing variables using single charac-
ters, typically y, x and/or z. It is conventional in econometrics to denote the dependent variable
(e.g. log(wage)) by the letter y, a conditioning variable (such as gender ) by the letter x, and
multiple conditioning variables (such as race, education and gender ) by the subscripted letters
x1 , x2 , ..., xk .
Conditional expectations can be written with the generic notation

E(y | x1, x2, ..., xk) = m(x1, x2, ..., xk).

We call this the conditional expectation function (CEF). The CEF is a function of (x1, x2, ..., xk)
as it varies with the variables. For example, the conditional expectation of y = log(wage) given
(x1 , x2 ) = (gender , race) is given by the six entries of Table 2.1. The CEF is a function of (gender ,
race) as it varies across the entries.
⁸ Here, education is defined as years of schooling beyond kindergarten. A high school graduate has education=12, a college graduate has education=16, a Master’s degree has education=18, and a professional degree (medical, law or PhD) has education=20.

Figure 2.5: Mean log wage as a function of years of education, white men and white women. (Axes: Years of Education; Log Dollars per Hour.)
For greater compactness, we will typically write the conditioning variables as a vector in $\mathbb{R}^k$:

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \qquad (2.5)$$
Here we follow the convention of using lower case bold italics x to denote a vector. Given this
notation, the CEF can be compactly written as
E (y | x) = m (x) .
Figure 2.6: Log wages and labor market experience for white men with education=12. Left panel: joint density with the conditional mean, linear projection, and quadratic projection; right panel: conditional densities of log(wage) for experience = 5, 10, 25, and 40 years. (Axes: Labor Market Experience (Years); Log Dollars per Hour.)
For any x such that fx(x) > 0 the conditional density of y given x is defined as

$$f_{y|x}(y \mid x) = \frac{f(y, x)}{f_x(x)}. \qquad (2.6)$$
The conditional density is a slice of the joint density f (y, x) holding x fixed. We can visualize this
by slicing the joint density function at a specific value of x parallel with the y-axis. For example,
take the density contours on the left side of Figure 2.6 and slice through the contour plot at a
specific value of experience. This gives us the conditional density of log(wage) for white men with
12 years of education and this level of experience. We do this for four levels of experience (5, 10,
25, and 40 years), and plot these densities on the right side of Figure 2.6. We can see that the
distribution of wages shifts to the right and becomes more diffuse as experience increases from 5 to
10 years, and from 10 to 25 years, but there is little change from 25 to 40 years experience.
The CEF of y given x is the mean of the conditional density (2.6)

$$m(x) = E(y \mid x) = \int_{\mathbb{R}} y f_{y|x}(y \mid x)\, dy. \qquad (2.7)$$
Intuitively, m (x) is the mean of y for the idealized subpopulation where the conditioning variables
are fixed at x. This is idealized since x is continuously distributed so this subpopulation is infinitely
small.
In Figure 2.6 the CEF of log(wage) given experience is plotted as the solid line. We can see
that the CEF is a smooth but nonlinear function. The CEF is initially increasing in experience,
flattens out around experience = 30, and then decreases for high levels of experience.
E (E (y | x)) = E (y)
The simple law states that the expectation of the conditional expectation is the unconditional
expectation. In other words, the average of the conditional averages is the unconditional average.
When x is discrete

$$E(E(y \mid x)) = \sum_{j=1}^{\infty} E(y \mid x_j) \Pr(x = x_j).$$
Or numerically,

3.05 × 0.57 + 2.81 × 0.43 = 2.95.
The general law of iterated expectations allows two sets of conditioning variables.
E (E (y | x1 , x2 ) | x1 ) = E (y | x1 )
Notice the way the law is applied. The inner expectation conditions on x1 and x2 , while
the outer expectation conditions only on x1 . The iterated expectation yields the simple answer
E (y | x1 ) , the expectation conditional on x1 alone. Sometimes we phrase this as: “The smaller
information set wins.”
As an example, take the mean log wages for men by race from Table 2.1:

E(E(log(wage) | gender, race) | gender = man) = E(log(wage) | gender = man)

or numerically,

3.07 × 0.84 + 2.86 × 0.08 + 3.03 × 0.08 = 3.05.
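As a quick check on the arithmetic of the two iterated-expectation examples, the sketch below reproduces both calculations from the conditional means in Table 2.1 and the population shares quoted in the text (0.57/0.43 by gender, and 0.84/0.08/0.08 by race among men).

```python
# Simple law: overall mean = gender-conditional means weighted by population shares
mean_men, mean_women = 3.05, 2.81
share_men, share_women = 0.57, 0.43
print(round(mean_men * share_men + mean_women * share_women, 2))   # 2.95

# General law: mean for men = race-conditional means among men, weighted by race shares
means_men_by_race = {"white": 3.07, "black": 2.86, "other": 3.03}  # Table 2.1
race_shares_men = {"white": 0.84, "black": 0.08, "other": 0.08}
print(round(sum(means_men_by_race[r] * race_shares_men[r] for r in race_shares_men), 2))  # 3.05
```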
A property of conditional expectations is that when you condition on a random vector x you
can effectively treat it as if it is constant. For example, E (x | x) = x and E (g (x) | x) = g (x) for
any function g(·). The general property is known as the conditioning theorem.
The proofs of Theorems 2.7.1, 2.7.2 and 2.7.3 are given in Section 2.34.
y = m(x) + e. (2.11)
In (2.11) it is useful to understand that the error e is derived from the joint distribution of
(y, x), and so its properties are derived from this construction.
A key property of the CEF error is that it has a conditional mean of zero. To see this, by the
linearity of expectations, the definition m(x) = E (y | x) and the Conditioning Theorem
E (e | x) = E ((y − m(x)) | x)
= E (y | x) − E (m(x) | x)
= m(x) − m(x)
= 0.
This fact can be combined with the law of iterated expectations to show that the unconditional
mean is also zero.
E (e) = E (E (e | x)) = E (0) = 0.
We state this and some other results formally.
1. E (e | x) = 0.
2. E (e) = 0.
4. For any function h (x) such that E |h (x) e| < ∞ then E (h (x) e) = 0.
Figure 2.7: Joint density of CEF error e and experience for white men with education=12.
The equations
y = m(x) + e
E (e | x) = 0
together imply that m(x) is the CEF of y given x. It is important to understand that this is not
a restriction. These equations hold true by definition.
The condition E (e | x) = 0 is implied by the definition of e as the difference between y and the
CEF m (x) . The equation E (e | x) = 0 is sometimes called a conditional mean restriction, since
the conditional mean of the error e is restricted to equal zero. The property is also sometimes called
mean independence, for the conditional mean of e is 0 and thus independent of x. However,
it does not imply that the distribution of e is independent of x. Sometimes the assumption “e is
independent of x” is added as a convenient simplification, but it is not a generic feature of the con-
ditional mean. Typically and generally, e and x are jointly dependent, even though the conditional
mean of e is zero.
As an example, the contours of the joint density of e and experience are plotted in Figure 2.7
for the same population as Figure 2.6. The error e has a conditional mean of zero for all values of
experience, but the shape of the conditional distribution varies with the level of experience.
As a simple example of a case where x and e are mean independent yet dependent, let e = xε
where x and ε are independent N(0, 1). Then conditional on x, the error e has the distribution
N(0, x2 ). Thus E (e | x) = 0 and e is mean independent of x, yet e is not fully independent of x.
Mean independence does not imply full independence.
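A minimal simulation of this example: with e = xε and x, ε independent standard normal, the sample conditional mean of e is near zero on any slice of x, while the conditional spread of e clearly varies with x.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
e = x * eps                     # mean independent of x, but not fully independent

small = np.abs(x) < 0.5         # slice with x near zero
large = np.abs(x) > 1.5         # slice with x far from zero
print(e[small].mean(), e[large].mean())   # both close to 0: E(e | x) = 0
print(e[small].var(), e[large].var())     # very different: var(e | x) = x^2 depends on x
```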
y = μ + e
E(e) = 0
We can call σ 2 the regression variance or the variance of the regression error. The magnitude
of σ 2 measures the amount of variation in y which is not “explained” or accounted for in the
conditional mean E (y | x) .
The regression variance depends on the regressors x. Consider two regressions
y = E (y | x1 ) + e1
y = E (y | x1 , x2 ) + e2 .
We write the two errors distinctly as e1 and e2 as they are different — changing the conditioning
information changes the conditional mean and therefore the regression error as well.
In our discussion of iterated expectations, we have seen that by increasing the conditioning
set, the conditional expectation reveals greater detail about the distribution of y. What is the
implication for the regression error?
It turns out that there is a simple relationship. We can think of the conditional mean E (y | x)
as the “explained portion” of y. The remainder e = y − E (y | x) is the “unexplained portion”. The
simple relationship we now derive shows that the variance of this unexplained portion decreases
when we condition on more variables. This relationship is monotonic in the sense that increasing
the amount of information always decreases the variance of the unexplained portion.
Theorem 2.10.2 says that the variance of the difference between y and its conditional mean
(weakly) decreases whenever an additional variable is added to the conditioning information.
The proof of Theorem 2.10.2 is given in Section 2.34.
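The following simulation sketch illustrates Theorem 2.10.2 under an assumed data-generating process in which the true conditional means are known, so no estimation is involved: conditioning on (x1, x2) leaves a smaller error variance than conditioning on x1 alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
u = rng.standard_normal(n)
y = x1 + 0.5 * x2 + u           # assumed model, so E(y | x1, x2) = x1 + 0.5*x2

e1 = y - x1                     # error when conditioning on x1 alone: E(y | x1) = x1
e2 = y - (x1 + 0.5 * x2)        # error when conditioning on (x1, x2)
print(e1.var(), e2.var())       # approximately 1.25 >= 1.00, as the theorem predicts
```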
E (y − g (x))2 . (2.12)
We can define the best predictor as the function g (x) which minimizes (2.12). What function
is the best predictor? It turns out that the answer is the CEF m(x). This holds regardless of the
joint distribution of (y, x).
To see this, note that the mean squared error of a predictor g(x) is

$$\begin{aligned}
E(y - g(x))^2 &= E(e + m(x) - g(x))^2 \\
&= E(e^2) + 2E\big(e(m(x) - g(x))\big) + E(m(x) - g(x))^2 \\
&= E(e^2) + E(m(x) - g(x))^2 \\
&\geq E(e^2) \\
&= E(y - m(x))^2
\end{aligned}$$

where the first equality makes the substitution y = m(x) + e and the third equality uses Theorem 2.8.1.4. The right-hand-side after the third equality is minimized by setting g(x) = m(x), yielding the inequality in the fourth line. The minimum is finite under the assumption Ey² < ∞ as shown by Theorem 2.10.1.
We state this formally in the following result.
Theorem 2.11.1 Conditional Mean as Best Predictor. If Ey² < ∞, then for any predictor g(x),

E(y − g(x))² ≥ E(y − m(x))²

where m(x) = E(y | x).

It may be helpful to consider this result in the context of the intercept-only model

y = μ + e
E(e) = 0.
Theorem 2.11.1 shows that the best predictor for y (in the class of constant parameters) is the
unconditional mean μ = E(y), in the sense that the mean minimizes the mean squared prediction
error.
$$\sigma^2(x) = \mathrm{var}(y \mid x) = E\big((y - E(y \mid x))^2 \mid x\big) = E(e^2 \mid x).$$
Generally, σ 2 (x) is a non-trivial function of x and can take any form subject to the restriction
that it is non-negative. One way to think about σ 2 (x) is that it is the conditional mean of e2 given
x.
The variance is in a different unit of measurement than the original variable. To convert the
variance back to the same unit of measure we define the conditional standard deviation as its
square root σ(x) = √(σ²(x)).
As an example of how the conditional variance depends on observables, compare the conditional
log wage densities for men and women displayed in Figure 2.3. The difference between the densities
is not purely a location shift, but is also a difference in spread. Specifically, we can see that the
density for men’s log wages is somewhat more spread out than that for women, while the density
for women’s wages is somewhat more peaked. Indeed, the conditional standard deviation for men’s
wages is 3.05 and that for women is 2.81. So while men have higher average wages, they are also
somewhat more dispersed.
The unconditional error variance and the conditional variance are related by the law of iterated expectations:

$$\sigma^2 = E(e^2) = E(E(e^2 \mid x)) = E(\sigma^2(x)).$$
That is, the unconditional error variance is the average conditional variance.
Given the conditional variance, we can define a rescaled error

$$\varepsilon = \frac{e}{\sigma(x)}. \qquad (2.13)$$

Since σ(x) is a function of x, the conditioning theorem implies

$$E(\varepsilon \mid x) = \frac{1}{\sigma(x)} E(e \mid x) = 0$$

and

$$\mathrm{var}(\varepsilon \mid x) = E(\varepsilon^2 \mid x) = E\left(\frac{e^2}{\sigma^2(x)} \,\Big|\, x\right) = \frac{1}{\sigma^2(x)} E(e^2 \mid x) = \frac{\sigma^2(x)}{\sigma^2(x)} = 1.$$

Thus ε has a conditional mean of zero, and a conditional variance of 1.
Notice that (2.13) can be rewritten as

e = σ(x)ε
and substituting this for e in the CEF equation (2.11), we find that

y = m(x) + σ(x)ε.  (2.14)

Definition 2.13.1 The error is homoskedastic if E(e² | x) = σ² does not depend on x.

In the general case where σ²(x) depends on x we say that the error e is heteroskedastic.

Definition 2.13.2 The error is heteroskedastic if E(e² | x) = σ²(x) depends on x.
We can unify the continuous and discrete cases with the notation

$$\nabla_1 m(x) = \begin{cases} \dfrac{\partial}{\partial x_1} m(x_1, \ldots, x_k), & \text{if } x_1 \text{ is continuous} \\[1ex] m(1, x_2, \ldots, x_k) - m(0, x_2, \ldots, x_k), & \text{if } x_1 \text{ is binary.} \end{cases}$$

Collecting the k effects into one k × 1 vector, we define the regression derivative with respect to x:

$$\nabla m(x) = \begin{bmatrix} \nabla_1 m(x) \\ \nabla_2 m(x) \\ \vdots \\ \nabla_k m(x) \end{bmatrix}$$

When all elements of x are continuous, then we have the simplification $\nabla m(x) = \dfrac{\partial}{\partial x} m(x)$, the vector of partial derivatives.
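The unified definition translates directly into a small numerical routine. The sketch below is hypothetical (the function m and the evaluation point are made up for illustration): continuous regressors get a numerical partial derivative, binary regressors get the discrete difference.

```python
import numpy as np

def regression_derivative(m, x, binary, h=1e-5):
    """Numerical regression derivative of a CEF m at the point x.

    m      : callable taking a 1-D array and returning E(y | x)
    x      : 1-D array, the evaluation point
    binary : list of booleans, True if the corresponding regressor is binary
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for j in range(len(x)):
        if binary[j]:
            x1, x0 = x.copy(), x.copy()
            x1[j], x0[j] = 1.0, 0.0
            grad[j] = m(x1) - m(x0)                  # discrete difference for a binary regressor
        else:
            xp, xm = x.copy(), x.copy()
            xp[j] += h
            xm[j] -= h
            grad[j] = (m(xp) - m(xm)) / (2 * h)      # central difference for a continuous regressor
    return grad

# Hypothetical CEF: m(x) = 1 + 2*x1 + 0.5*x2 with x2 binary
m = lambda x: 1 + 2 * x[0] + 0.5 * x[1]
print(regression_derivative(m, [1.0, 1.0], binary=[False, True]))   # approximately [2.0, 0.5]
```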
There are two important points to remember concerning our definition of the regression deriv-
ative.
First, the effect of each variable is calculated holding the other variables constant. This is the
ceteris paribus concept commonly used in economics. But in the case of a regression derivative,
the conditional mean does not literally hold all else constant. It only holds constant the variables
included in the conditional mean. This means that the regression derivative depends on which
regressors are included. For example, in a regression of wages on education, experience, race and
gender, the regression derivative with respect to education shows the marginal effect of education
on mean wages, holding constant experience, race and gender. But it does not hold constant an
individual’s unobservable characteristics (such as ability), or variables not included in the regression
(such as the quality of education).
Second, the regression derivative is the change in the conditional expectation of y, not the
change in the actual value of y for an individual. It is tempting to think of the regression derivative
as the change in the actual value of y, but this is not a correct interpretation. The regression
derivative ∇m(x) is the change in the actual value of y only if the error e is unaffected by the
change in the regressor x. We return to a discussion of causal effects in Section 2.30.
m(x) = x1 β1 + x2 β2 + · · · + xk βk + βk+1 .
Notationally it is convenient to write this as a simple function of the vector x. An easy way to do
so is to augment the regressor vector x by listing the number “1” as an element. We call this the
“constant” and the corresponding coefficient is called the “intercept”. Equivalently, specify that
the final element¹⁰ of the vector x is xk = 1. Thus (2.5) has been redefined as the k × 1 vector

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{k-1} \\ 1 \end{pmatrix}. \qquad (2.15)$$

¹⁰ The order doesn’t matter. It could be any element.

With this redefinition, the CEF is

$$m(x) = x_1\beta_1 + x_2\beta_2 + \cdots + x_k\beta_k = x'\beta \qquad (2.16)$$

where

$$\beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \qquad (2.17)$$
is a k × 1 coefficient vector. This is the linear CEF model. It is also often called the linear
regression model, or the regression of y on x.
In the linear CEF model, the regression derivative is simply the coefficient vector. That is
∇m(x) = β.
This is one of the appealing features of the linear CEF model. The coefficients have simple and
natural interpretations as the marginal effects of changing one variable, holding the others constant.
y = x′β + e
E(e | x) = 0

If in addition the error is homoskedastic, we call this the homoskedastic linear CEF model.

y = x′β + e
E(e | x) = 0
E(e² | x) = σ²

For example, suppose that for two scalar variables x1 and x2 the CEF takes the quadratic form

m(x1, x2) = x1β1 + x2β2 + x1²β3 + x2²β4 + x1x2β5 + β6.  (2.18)

By defining the regressor vector x = (x1, x2, x1², x2², x1x2, 1)′, this can be written as

m(x1, x2) = x′β

which is linear in β. For most econometric purposes (estimation and inference on β) the linearity
in β is all that is important.
An exception is in the analysis of regression derivatives. In nonlinear equations such as (2.18),
the regression derivative should be defined with respect to the original variables, not with respect
to the transformed variables. Thus
$$\frac{\partial}{\partial x_1} m(x_1, x_2) = \beta_1 + 2x_1\beta_3 + x_2\beta_5$$
$$\frac{\partial}{\partial x_2} m(x_1, x_2) = \beta_2 + 2x_2\beta_4 + x_1\beta_5$$
We see that in the model (2.18), the regression derivatives are not a simple coefficient, but are
functions of several coefficients plus the levels of (x1, x2 ). Consequently it is difficult to interpret
the coefficients individually. It is more useful to interpret them as a group.
We typically call β5 the interaction effect. Notice that it appears in both regression derivative
equations, and has a symmetric interpretation in each. If β5 > 0 then the regression derivative
with respect to x1 is increasing in the level of x2 (and the regression derivative with respect to x2
is increasing in the level of x1 ), while if β5 < 0 the reverse is true. It is worth noting that this
symmetry is an artificial implication of the quadratic equation (2.18), and is not a general feature
of nonlinear conditional means m(x1 , x2 ).
To facilitate a mathematical treatment, we typically record dummy variables with the values {0, 1}.
For example

$$x_1 = \begin{cases} 0 & \text{if gender=man} \\ 1 & \text{if gender=woman} \end{cases} \qquad (2.19)$$

Given this notation, write μ0 = E(y | x1 = 0) and μ1 = E(y | x1 = 1) for the two conditional means. We can then write the conditional mean as a linear function of the dummy variable x1, that is

E(y | x1) = β1x1 + β2

where β1 = μ1 − μ0 and β2 = μ0. In this simple regression equation the intercept β2 is equal to
the conditional mean of y for the x1 = 0 subpopulation (men) and the slope β1 is equal to the
difference in the conditional means between the two subpopulations.

Now suppose the definition had been reversed:

$$x_1 = \begin{cases} 1 & \text{if gender=man} \\ 0 & \text{if gender=woman} \end{cases} \qquad (2.20)$$

In this case, the regression intercept is the mean for women (rather than for men) and the regression
slope has switched signs. The two regressions are equivalent but the interpretation of the coefficients
has changed. Therefore it is always important to understand the precise definitions of the variables,
and illuminating labels are helpful. For example, labelling x1 as “gender” does not help distinguish
between definitions (2.19) and (2.20). Instead, it is better to label x1 as “women” or “female” if
definition (2.19) is used, or as “men” or “male” if (2.20) is used.
Now suppose we have two dummy variables x1 and x2 . For example, x2 = 1 if the person is
married, else x2 = 0. The conditional mean given x1 and x2 takes at most four possible values:
$$E(y \mid x_1, x_2) = \begin{cases} \mu_{00} & \text{if } x_1 = 0 \text{ and } x_2 = 0 \quad \text{(unmarried men)} \\ \mu_{01} & \text{if } x_1 = 0 \text{ and } x_2 = 1 \quad \text{(married men)} \\ \mu_{10} & \text{if } x_1 = 1 \text{ and } x_2 = 0 \quad \text{(unmarried women)} \\ \mu_{11} & \text{if } x_1 = 1 \text{ and } x_2 = 1 \quad \text{(married women)} \end{cases}$$
In this case we can write the conditional mean as a linear function of x1 , x2 and their product
x1 x2 :
E (y | x1 , x2 ) = β1 x1 + β2 x2 + β3 x1 x2 + β4
where β1 = μ10 − μ00 , β2 = μ01 − μ00 , β3 = μ11 − μ10 − μ01 + μ00 , and β4 = μ00 .
We can view the coefficient β1 as the effect of gender on expected log wages for unmarried
wage earners, the coefficient β2 as the effect of marriage on expected log wages for men wage
earners, and the coefficient β3 as the difference between the effects of marriage on expected log
wages among women and among men. Alternatively, it can also be interpreted as the difference
between the effects of gender on expected log wages among married and non-married wage earners.
Both interpretations are equally valid. We often describe β3 as measuring the interaction between
the two dummy variables, or the interaction effect, and describe β3 = 0 as the case when the
interaction effect is zero.
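Because the saturated model simply re-parameterizes the four cell means, the coefficients can be recovered directly from them, as the following sketch shows. The cell means are hypothetical, not values from the text.

```python
# Hypothetical cell means E(y | x1, x2), indexed by (x1, x2)
mu = {(0, 0): 3.15,   # unmarried men
      (0, 1): 3.35,   # married men
      (1, 0): 2.95,   # unmarried women
      (1, 1): 3.00}   # married women

beta1 = mu[(1, 0)] - mu[(0, 0)]                             # gender effect among the unmarried
beta2 = mu[(0, 1)] - mu[(0, 0)]                             # marriage effect among men
beta3 = mu[(1, 1)] - mu[(1, 0)] - mu[(0, 1)] + mu[(0, 0)]   # interaction effect
beta4 = mu[(0, 0)]                                          # intercept

# Check: the linear function reproduces every cell mean exactly
for (x1, x2), cell_mean in mu.items():
    assert abs(beta1 * x1 + beta2 * x2 + beta3 * x1 * x2 + beta4 - cell_mean) < 1e-12
```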
In this setting we can see that the CEF is linear in the three variables (x1 , x2 , x1 x2 ). Thus to
put the model in the framework of Section 2.15, we would define the regressor x3 = x1 x2 and the
regressor vector as

$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ 1 \end{pmatrix}.$$
So even though we started with only 2 dummy variables, the number of regressors (including the
intercept) is 4.
If there are 3 dummy variables x1, x2, x3, then E(y | x1, x2, x3) takes at most 2³ = 8 distinct
values and can be written as the linear function
E (y | x1 , x2 , x3 ) = β1 x1 + β2 x2 + β3 x3 + β4 x1 x2 + β5 x1 x3 + β6 x2 x3 + β7 x1 x2 x3 + β8
Conditioning variables may also be categorical, such as race. For example, we earlier divided race into three categories. We can record categorical
variables using numbers to indicate each category, for example
$$x_3 = \begin{cases} 1 & \text{if white} \\ 2 & \text{if black} \\ 3 & \text{if other} \end{cases}$$
When doing so, the values of x3 have no meaning in terms of magnitude, they simply indicate the
relevant category.
When the regressor is categorical the conditional mean of y given x3 takes a distinct value for
each possibility:

$$E(y \mid x_3) = \begin{cases} \mu_1 & \text{if } x_3 = 1 \\ \mu_2 & \text{if } x_3 = 2 \\ \mu_3 & \text{if } x_3 = 3 \end{cases}$$
This is not a linear function of x3 itself, but it can be made a linear function by constructing
dummy variables for two of the three categories. For example

$$x_4 = \begin{cases} 1 & \text{if black} \\ 0 & \text{if not black} \end{cases}$$
$$x_5 = \begin{cases} 1 & \text{if other} \\ 0 & \text{if not other} \end{cases}$$
In this case, the categorical variable x3 is equivalent to the pair of dummy variables (x4 , x5 ). The
explicit relationship is

$$x_3 = \begin{cases} 1 & \text{if } x_4 = 0 \text{ and } x_5 = 0 \\ 2 & \text{if } x_4 = 1 \text{ and } x_5 = 0 \\ 3 & \text{if } x_4 = 0 \text{ and } x_5 = 1 \end{cases}$$
Given these transformations, we can write the conditional mean of y as a linear function of x4 and
x5
E (y | x3 ) = E (y | x4 , x5 ) = β1 x4 + β2 x5 + β3
We can write the CEF as either E (y | x3 ) or E (y | x4 , x5 ) (they are equivalent), but it is only linear
as a function of x4 and x5 .
This setting is similar to the case of two dummy variables, with the difference that we have not
included the interaction term x4 x5 . This is because the event {x4 = 1 and x5 = 1} is empty by
construction, so x4 x5 = 0 by definition.
Assumption 2.18.1

1. Ey² < ∞.
2. E‖x‖² < ∞.
3. Qxx = E(xx′) is positive definite.

In Assumption 2.18.1.2 we use the notation ‖x‖ = (x′x)^{1/2} to denote the Euclidean length of
the vector x.
The first two parts of Assumption 2.18.1 imply that the variables y and x have finite means,
variances, and covariances. The third part of the assumption is more technical, and its role will
become apparent shortly. It is equivalent to imposing that the columns of Qxx = E(xx′) are
linearly independent, or equivalently that the matrix Qxx is invertible.
A linear predictor for y is a function of the form x′β for some β ∈ R^k. The mean squared prediction error is

S(β) = E(y − x′β)².

The best linear predictor of y given x, written P(y | x) = x′β, is found by selecting the vector β to minimize S(β). The minimizer

$$\beta = \underset{\beta \in \mathbb{R}^k}{\mathrm{argmin}}\ S(\beta) \qquad (2.21)$$

is called the linear projection coefficient.
The quadratic structure of S(β) means that we can solve explicitly for the minimizer. The first-order condition for minimization (from Appendix A.9) is

$$0 = \frac{\partial}{\partial \beta} S(\beta) = -2E(xy) + 2E(xx')\beta. \qquad (2.22)$$

Rewriting (2.22) as

2E(xy) = 2E(xx′)β

and dividing by 2, this equation takes the form

Qxy = Qxx β  (2.23)

where Qxy = E(xy) is k × 1 and Qxx = E(xx′) is k × k. The solution is found by inverting the matrix Qxx, and is written

β = Qxx⁻¹ Qxy

or

β = (E(xx′))⁻¹ E(xy).  (2.24)
It is worth taking the time to understand the notation involved in the expression (2.24). Qxx is a
k × k matrix and Qxy is a k × 1 column vector. Therefore, alternative expressions such as E(xy)/E(xx′)
or E(xy)(E(xx′))⁻¹ are incoherent and incorrect. We also can now see the role of Assumption
2.18.1.3. It is necessary in order for the solution (2.24) to exist. Otherwise, there would be multiple
solutions to the equation (2.23).
We now have an explicit expression for the best linear predictor:

P(y | x) = x′(E(xx′))⁻¹ E(xy).
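In practice the population moments E(xx′) and E(xy) are replaced by sample averages. A minimal sketch with simulated data (not the CPS sample used in the text) and made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
educ = rng.normal(13, 2, n)
exper = rng.normal(20, 10, n)
y = 0.1 * educ + 0.02 * exper + rng.normal(0, 0.5, n)   # assumed linear model

X = np.column_stack([educ, exper, np.ones(n)])          # regressors, constant last as in (2.15)
Qxx = X.T @ X / n                                       # sample analog of E(xx')
Qxy = X.T @ y / n                                       # sample analog of E(xy)
beta = np.linalg.solve(Qxx, Qxy)                        # solves Qxx beta = Qxy
print(beta)                                             # approximately [0.10, 0.02, 0.00]
```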
The projection error is

e = y − x′β.  (2.25)

This equals the error from the regression equation when (and only when) the conditional mean is
linear in x; otherwise they are distinct. Rewriting, we obtain a decomposition of y into linear
predictor and error

y = x′β + e.  (2.26)

In general we call equation (2.26) or x′β the best linear predictor of y given x, or the linear
projection of y on x. Equation (2.26) is also often called the regression of y on x but this can
sometimes be confusing as economists use the term regression in many contexts. (Recall that we
said in Section 2.15 that the linear CEF model is also called the linear regression model.)
An important property of the projection error e is
E (xe) = 0. (2.27)
To see this, using the definitions (2.25) and (2.24) and the matrix properties AA⁻¹ = I and Ia = a,

$$E(xe) = E\big(x(y - x'\beta)\big) = E(xy) - E(xx')\left(E(xx')\right)^{-1}E(xy) = 0 \qquad (2.28)$$
as claimed.
Equation (2.27) is a set of k equations, one for each regressor. In other words, (2.27) is equivalent
to
E (xj e) = 0 (2.29)
for j = 1, ..., k. As in (2.15), the regressor vector x typically contains a constant, e.g. xk = 1. In
this case (2.29) for j = k is the same as
E (e) = 0. (2.30)
Thus the projection error has a mean of zero when the regressor vector contains a constant. (When
x does not have a constant, (2.30) is not guaranteed. As it is desirable for e to have a zero mean,
this is a good reason to always include a constant in any regression model.)
CHAPTER 2. CONDITIONAL EXPECTATION AND PROJECTION 31
It is also useful to observe that since cov(xj , e) = E (xj e) − E (xj ) E (e) , then (2.29)-(2.30)
together imply that the variables xj and e are uncorrelated.
This completes the derivation of the model. We summarize some of the most important prop-
erties. The linear projection model is

y = x′β + e
E(xe) = 0

with the linear projection coefficient

β = (E(xx′))⁻¹ E(xy),

and, when x contains a constant, E(e) = 0.
We illustrate projection using three log wage equations introduced in earlier sections.
For our first example, we consider a model with the two dummy variables for gender and race
similar to Table 2.1. As we learned in Section 2.17, the entries in this table can be equivalently
expressed by a linear CEF. For simplicity, let’s consider the CEF of log(wage) as a function of
Black and Female.
E(log(wage) | Black, Female) = −0.20 Black − 0.24 Female + 0.10 Black × Female + 3.06. (2.31)

This is a CEF as the variables are dummies and all interactions are included.
Now consider a simpler model omitting the interaction effect. This is the linear projection on
the variables Black and Female.
What is the difference? The full CEF (2.31) shows that the race gap is differentiated by gender: it
is 20% for black men (relative to non-black men) and 10% for black women (relative to non-black
women). The projection model (2.32) simplifies this analysis, calculating an average 15% wage gap
for blacks, ignoring the role of gender. Notice that this is despite the fact that the gender variable
is included in (2.32).
Figure 2.8: Mean log wage as a function of years of education for white men, with linear and linear-spline projections superimposed. (Axes: Years of Education; Log Dollars per Hour.)
For our second example we consider the CEF of log wages as a function of years of education
for white men which was illustrated in Figure 2.5 and is repeated in Figure 2.8. Superimposed on
the figure are two projections. The first (given by the dashed line) is the linear projection of log
wages on years of education
This simple equation indicates an average 11% increase in wages for every year of education. An
inspection of the Figure shows that this approximation works well for education≥ 9, but under-
predicts for individuals with lower levels of education. To correct this imbalance we use a linear
spline equation which allows different rates of return above and below 9 years of education:
This equation is displayed in Figure 2.8 using the solid line, and appears to fit much better. It
indicates a 2% increase in mean wages for every year of education below 9, and a 12% increase in
CHAPTER 2. CONDITIONAL EXPECTATION AND PROJECTION 33
4.0
Conditional Mean
Linear Projection
Quadratic Projection
3.5
Log Dollars per Hour
3.0
2.5
2.0
0 10 20 30 40 50
mean wages for every year of education above 9. It is still an approximation to the conditional
mean but it appears to be fairly reasonable.
For our third example we take the CEF of log wages as a function of years of experience for
white men with 12 years of education, which was illustrated in Figure 2.6 and is repeated as the
solid line in Figure 2.9. Superimposed on the figure are two projections. The first (given by the
dot-dashed line) is the linear projection on experience
and the second (given by the dashed line) is the linear projection on experience and its square
It is fairly clear from an examination of Figure 2.9 that the first linear projection is a poor approx-
imation. It over-predicts wages for young and old workers, and under-predicts for the rest. Most
importantly, it misses the strong downturn in expected wages for older wage-earners. The second
projection fits much better. We can call this equation a quadratic projection since the function
is quadratic in experience.
CHAPTER 2. CONDITIONAL EXPECTATION AND PROJECTION 34
When Qxx is not invertible there are multiple solutions to (2.33), all of
which yield an equivalent best linear predictor x0 β. In this case the coeffi-
cient β is not identified as it does not have a unique value. Even so, the
best linear predictor x0 β still identified. One solution is to set
¡ ¡ ¢¢−
β = E xx0 E (xy)
One useful feature of this formula is that it shows that Qyy·x = Qyy − Qyx Q−1
xx Qxy equals the
variance of the error from the linear projection of y on x.
Taking expectations of the equation y = x′β + α + e, we obtain

Ey = E(x′β) + E(α) + E(e)

or

μy = μx′β + α

where μy = Ey and μx = Ex, since E(e) = 0 from (2.30). (While x does not contain a constant,
the equation does, so (2.30) still applies.) Rearranging, we find

α = μy − μx′β.
Subtracting μy = μx′β + α from the equation y = x′β + α + e yields

y − μy = (x − μx)′β + e,  (2.36)

a linear equation between the centered variables y − μy and x − μx. (They are centered at their
means, so are mean-zero random variables.) Because x − μx is uncorrelated with e, (2.36) is also
a linear projection, thus by the formula for the linear projection model,

β = (E((x − μx)(x − μx)′))⁻¹ E((x − μx)(y − μy)) = var(x)⁻¹ cov(x, y).

We have shown that in the linear projection model with an intercept,

y = x′β + α + e,

then

α = μy − μx′β  (2.37)

and

β = var(x)⁻¹ cov(x, y).  (2.38)
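A short numerical check of (2.37)-(2.38), using simulated data with assumed parameter values: the slope computed from the covariance matrix of x and the intercept computed from the means recover the coefficients of the assumed linear model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.3], [0.3, 2.0]], size=n)
y = x @ np.array([0.5, -0.2]) + 1.7 + rng.standard_normal(n)

beta = np.linalg.solve(np.cov(x, rowvar=False),             # var(x)
                       np.cov(x, y, rowvar=False)[:2, 2])   # cov(x, y), as in (2.38)
alpha = y.mean() - x.mean(axis=0) @ beta                    # intercept, as in (2.37)
print(beta, alpha)    # approximately [0.5, -0.2] and 1.7
```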
y = x′β + e = x₁′β₁ + x₂′β₂ + e  (2.40)
E(xe) = 0.

Partition Qxx and Qxy conformably with (x₁, x₂), and define

$$Q_{11\cdot 2} \stackrel{\mathrm{def}}{=} Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}, \qquad Q_{22\cdot 1} \stackrel{\mathrm{def}}{=} Q_{22} - Q_{21}Q_{11}^{-1}Q_{12}.$$

Thus

$$\beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
= \begin{bmatrix} Q_{11\cdot 2}^{-1} & -Q_{11\cdot 2}^{-1}Q_{12}Q_{22}^{-1} \\ -Q_{22\cdot 1}^{-1}Q_{21}Q_{11}^{-1} & Q_{22\cdot 1}^{-1} \end{bmatrix}
\begin{bmatrix} Q_{1y} \\ Q_{2y} \end{bmatrix}
= \begin{pmatrix} Q_{11\cdot 2}^{-1}\left(Q_{1y} - Q_{12}Q_{22}^{-1}Q_{2y}\right) \\ Q_{22\cdot 1}^{-1}\left(Q_{2y} - Q_{21}Q_{11}^{-1}Q_{1y}\right) \end{pmatrix}
= \begin{pmatrix} Q_{11\cdot 2}^{-1}Q_{1y\cdot 2} \\ Q_{22\cdot 1}^{-1}Q_{2y\cdot 1} \end{pmatrix}$$

where Q_{1y·2} = Q_{1y} − Q_{12}Q_{22}⁻¹Q_{2y} and Q_{2y·1} = Q_{2y} − Q_{21}Q_{11}⁻¹Q_{1y}. We have shown that

β₁ = Q_{11·2}⁻¹ Q_{1y·2}
β₂ = Q_{22·1}⁻¹ Q_{2y·1}
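The block formulas can be verified numerically against a direct solution of Qxx β = Qxy. The moment matrices below are randomly generated and purely hypothetical, partitioned with k1 = 2 and k2 = 3.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
Qxx = A @ A.T / 8                       # a random positive definite moment matrix
Qxy = rng.standard_normal(5)
Q11, Q12 = Qxx[:2, :2], Qxx[:2, 2:]
Q21, Q22 = Qxx[2:, :2], Qxx[2:, 2:]
Q1y, Q2y = Qxy[:2], Qxy[2:]

Q11_2 = Q11 - Q12 @ np.linalg.solve(Q22, Q21)
Q22_1 = Q22 - Q21 @ np.linalg.solve(Q11, Q12)
Q1y_2 = Q1y - Q12 @ np.linalg.solve(Q22, Q2y)
Q2y_1 = Q2y - Q21 @ np.linalg.solve(Q11, Q1y)

beta_blocks = np.concatenate([np.linalg.solve(Q11_2, Q1y_2),   # beta_1
                              np.linalg.solve(Q22_1, Q2y_1)])  # beta_2
beta_direct = np.linalg.solve(Qxx, Qxy)
print(np.allclose(beta_blocks, beta_direct))   # True
```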
y = x₁β₁ + x₂′β₂ + e.  (2.42)

x₁ = x₂′γ₂ + u₁
E(x₂u₁) = 0.

y = x₁′γ₁ + u  (2.43)
E(x₁u) = 0.
Notice that we have written the coefficient on x1 as γ 1 rather than β1 and the error as u rather
than e. This is because (2.43) is different than (2.40). Goldberger (1991) introduced the catchy
labels long regression for (2.40) and short regression for (2.43) to emphasize the distinction.
Typically, β₁ ≠ γ₁, except in special cases. To see this, we calculate

$$\begin{aligned}
\gamma_1 &= \left(E(x_1 x_1')\right)^{-1} E(x_1 y) \\
&= \left(E(x_1 x_1')\right)^{-1} E\left(x_1\left(x_1'\beta_1 + x_2'\beta_2 + e\right)\right) \\
&= \beta_1 + \left(E(x_1 x_1')\right)^{-1} E(x_1 x_2')\,\beta_2 \\
&= \beta_1 + \Gamma\beta_2
\end{aligned}$$

where

$$\Gamma = \left(E(x_1 x_1')\right)^{-1} E(x_1 x_2')$$

is the coefficient matrix from a projection of x₂ on x₁.
Observe that γ₁ = β₁ + Γβ₂ ≠ β₁ unless Γ = 0 or β₂ = 0. Thus the short and long regressions
have different coefficients on x1 . They are the same only under one of two conditions. First, if the
projection of x2 on x1 yields a set of zero coefficients (they are uncorrelated), or second, if the
coefficient on x2 in (2.40) is zero. In general, the coefficient in (2.43) is γ 1 rather than β1 . The
difference Γβ2 between γ 1 and β1 is known as omitted variable bias. It is the consequence of
omission of a relevant correlated variable.
To avoid omitted variables bias the standard advice is to include all potentially relevant variables
in estimated models. By construction, the general model will be free of such bias. Unfortunately
in many cases it is not feasible to completely follow this advice as many desired variables are
not observed. In this case, the possibility of omitted variables bias should be acknowledged and
discussed in the course of an empirical investigation.
For example, suppose y is log wages, x1 is education, and x2 is intellectual ability. It seems
reasonable to suppose that education and intellectual ability are positively correlated (highly able
individuals attain higher levels of education) which means Γ > 0. It also seems reasonable to
suppose that conditional on education, individuals with higher intelligence will earn higher wages
on average, so that β2 > 0. This implies that Γβ2 > 0 and γ1 = β1 + Γβ2 > β1 . Therefore, it seems
reasonable to expect that in a regression of wages on education with ability omitted, the coefficient
on education is higher than in a regression where ability is included. In other words, in this context
the omitted variable biases the regression coefficient upwards.
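A simulation sketch of this story with made-up parameter values: ability raises both education and wages, so the short-regression coefficient on education exceeds the long-regression coefficient by Γβ2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
ability = rng.standard_normal(n)
educ = 12 + 2 * ability + rng.standard_normal(n)               # education correlated with ability
logwage = 0.10 * educ + 0.20 * ability + rng.normal(0, 0.4, n)

def proj(y, X):
    """Sample analog of the linear projection coefficient (E[xx'])^{-1} E[xy]."""
    return np.linalg.solve(X.T @ X, X.T @ y)

X_long = np.column_stack([educ, ability, np.ones(n)])
X_short = np.column_stack([educ, np.ones(n)])
print(proj(logwage, X_long)[0])    # approximately 0.10 = beta_1
print(proj(logwage, X_short)[0])   # approximately 0.18 = beta_1 + Gamma*beta_2 > beta_1
```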
Similar to the best linear predictor we are measuring accuracy by expected squared error. The
difference is that the best linear predictor (2.21) selects β to minimize the expected squared predic-
tion error, while the best linear approximation (2.45) selects β to minimize the expected squared
approximation error.
Despite the different definitions, it turns out that the best linear predictor and the best linear
approximation are identical. By the same steps as in (2.18) plus an application of conditional
expectations we can find that
$$\begin{aligned}
\beta &= \left(E(xx')\right)^{-1} E(x\, m(x)) \qquad &(2.46) \\
&= \left(E(xx')\right)^{-1} E(xy) \qquad &(2.47)
\end{aligned}$$
(see Exercise 2.19). Thus (2.45) equals (2.21). We conclude that the definition (2.45) can be viewed
as an alternative motivation for the linear projection coefficient.
and

E(e² | x) = E(e²) = σ²

which are properties of a homoskedastic linear CEF.
We have shown that when (y, x) are jointly normally distributed, they satisfy a normal linear CEF

y = x′β + e

where e ~ N(0, σ²) is independent of x.
This is an alternative (and traditional) motivation for the linear CEF model. This motivation
has limited merit in econometric applications since economic data is typically non-normal.
$$y = x\beta + \alpha + e \qquad (2.48)$$
where $y$ equals the height of the child and $x$ equals the height of the parent. Assume that $y$ and $x$ have the same mean, so that $\mu_y = \mu_x = \mu$. Then from (2.37),
$$\alpha = (1 - \beta)\mu,$$
so the linear projection (2.48) can be written as
$$P(y \mid x) = (1 - \beta)\mu + x\beta.$$
This shows that the projected height of the child is a weighted average of the population average
height μ and the parent’s height x, with the weight equal to the regression slope β. When the
height distribution is stable across generations, so that var(y) = var(x), then this slope is the
simple correlation of y and x. Using (2.38)
$$\beta = \frac{\mathrm{cov}(x, y)}{\mathrm{var}(x)} = \mathrm{corr}(x, y).$$
By the properties of correlation (e.g. equation (B.7) in the Appendix), −1 ≤ corr(x, y) ≤ 1, with
corr(x, y) = 1 only in the degenerate case y = x. Thus if we exclude degeneracy, β is strictly less
than 1.
This means that on average a child’s height is more mediocre (closer to the population average)
than the parent’s.
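As a numerical illustration (a hypothetical simulation, not Galton's data), the following R lines generate parent and child heights with equal means and variances and confirm that the fitted slope equals the correlation and is below one.

# Regression to the mean: equal means and variances imply slope = corr(x, y) < 1
set.seed(2)
n      <- 50000
parent <- rnorm(n, mean = 68, sd = 3)
child  <- 0.5 * (parent - 68) + 68 + rnorm(n, sd = sqrt(0.75) * 3)  # var(child) = var(parent)
coef(lm(child ~ parent))["parent"]   # approximately 0.5
cor(parent, child)                   # also approximately 0.5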
Sir Francis Galton (1822-1911) of England was one of the leading figures in
late 19th century statistics. In addition to inventing the concept of regres-
sion, he is credited with introducing the concepts of correlation, the standard
deviation, and the bivariate normal distribution. His work on heredity made
a significant intellectual advance by examining the joint distributions of ob-
servables, allowing the application of the tools of mathematical statistics to
the social sciences.
A common error — known as the regression fallacy — is to infer from β < 1 that the population
is converging, meaning that its variance is declining towards zero. This is a fallacy because we
derived the implication β < 1 under the assumption of constant means and variances. So certainly
β < 1 does not imply that the variance of y is less than the variance of x.
Another way of seeing this is to examine the conditions for convergence in the context of equation
(2.48). Since x and e are uncorrelated, it follows that
$$\beta^* = \mathrm{corr}(x, y) = \beta$$
$$\alpha^* = (1 - \beta)\mu = \alpha$$
which are exactly the same as in the projection of y on x! The intercept and slope have exactly the
same values in the forward and reverse projections!
While this algebraic discovery is quite simple, it is counter-intuitive. Instead, a common yet
mistaken guess for the form of the reverse regression is to take the equation (2.48), divide through
by β and rewrite to find the equation
$$x = \frac{1}{\beta}\, y - \frac{\alpha}{\beta} - \frac{1}{\beta}\, e \qquad (2.50)$$
suggesting that the projection of x on y should have a slope coefficient of 1/β instead of β, and
intercept of −α/β rather than α. What went wrong? Equation (2.50) is perfectly valid, because
it is a simple manipulation of the valid equation (2.48). The trouble is that (2.50) is neither a
CEF nor a linear projection. Inverting a projection (or CEF) does not yield a projection (or CEF).
Instead, (2.49) is a valid projection, not (2.50).
In any event, Galton’s finding was that when the variables are standardized, the slope in both
projections (y on x, and x on y) equals the correlation, and both equations exhibit regression to
the mean. It is not a causal relation, but a natural feature of all joint distributions.
$$y = x'\eta$$
[Footnote 12: The $x$ in Group 1 are $\mathrm{N}(2,1)$ and those in Group 2 are $\mathrm{N}(4,1)$, and the conditional distribution of $y$ given $x$ is $\mathrm{N}(m(x), 1)$ where $m(x) = 2x - x^2/6$.]
where the coefficient vector $\eta$ is random. Define
$$\beta = E(\eta), \qquad \Sigma = \mathrm{var}(\eta),$$
and write
$$\eta = \beta + u$$
where u is distributed independently of x with mean zero and covariance matrix Σ. Then we can
write
$$E(y \mid x) = x' E(\eta \mid x) = x' E(\eta) = x'\beta,$$
so the CEF is linear in x, and the coefficients β equal the mean of the random coefficient η.
We can thus write the equation as a linear CEF
$$y = x'\beta + e \qquad (2.51)$$
$$E(e \mid x) = 0.$$
Furthermore,
$$E(y \mid x) = x'\beta, \qquad \mathrm{var}(y \mid x) = x'\Sigma x.$$
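To make the random coefficient model concrete, here is a short R sketch (hypothetical simulated values, not part of the original text) that generates data from $y = x'\eta$ with $\eta = \beta + u$ and checks that least squares recovers $\beta = E(\eta)$.

# Random coefficient model: y = x'eta with eta = beta + u, u independent of x
set.seed(3)
n    <- 200000
beta <- c(1, 2)
x    <- cbind(1, rnorm(n))                              # regressors: intercept and one scalar
u    <- cbind(rnorm(n, sd = 0.5), rnorm(n, sd = 0.5))   # Sigma = diag(0.25, 0.25)
y    <- rowSums(x * (matrix(beta, n, 2, byrow = TRUE) + u))
coef(lm(y ~ x[, 2]))                                    # approximately (1, 2), the mean of eta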
$$y = h(x_1, x_2, u) \qquad (2.52)$$
where $x_1$ and $x_2$ are the observed variables, $u$ is a vector of unobserved random factors, and $h$ is a functional relationship. This framework includes as a special case the random coefficient model
(2.29) studied earlier. We define the causal effect of x1 within this model as the change in y due to
a change in x1 holding the other variables x2 and u constant.
To understand this concept, imagine taking a single individual. As far as our structural model is
concerned, this person is described by their observables x1 and x2 and their unobservables u. In a
wage regression the unobservables would include characteristics such as the person’s abilities, skills,
work ethic, interpersonal connections, and preferences. The causal effect of x1 (say, education) is
the change in the wage as x1 changes, holding constant all other observables and unobservables.
It may be helpful to understand that (2.53) is a definition, and does not necessarily describe
causality in a fundamental or experimental sense. Perhaps it would be more appropriate to label
(2.53) as a structural effect (the effect within the structural model).
Sometimes it is useful to write this relationship as a potential outcome function
$$y(x_1) = h(x_1, x_2, u).$$
For example, when $x_1$ is binary (no treatment versus treatment), the two potential outcomes are
$$y(0) = h(0, x_2, u), \qquad y(1) = h(1, x_2, u).$$
In the literature on treatment effects, it is common to refer to y(0) and y(1) as the latent outcomes
associated with non-treatment and treatment, respectively. That is, for a given individual, y(0) is
the health outcome if there is no treatment, and y(1) is the health outcome if there is treatment.
The causal effect of treatment for the individual is the change in their health outcome due to
treatment — the change in y as we hold both x2 and u constant: $y(1) - y(0) = h(1, x_2, u) - h(0, x_2, u)$.
This is random (a function of x2 and u) as both potential outcomes y(0) and y(1) are different
across individuals.
In a sample, we cannot observe both outcomes from the same individual; we only observe the realized value
$$y = \begin{cases} y(0) & \text{if } x_1 = 0 \\ y(1) & \text{if } x_1 = 1. \end{cases}$$
As the causal effect varies across individuals and is not observable, it cannot be measured on
the individual level. We therefore focus on aggregate causal effects, in particular what is known as
the average causal effect.
We can think of the average causal effect ACE(x1 , x2 ) as the average effect in the general
population. In our Jennifer & George schooling example given earlier, supposing that half of the
population are Jennifer’s and the other half George’s, then the average causal effect of college is
(10+4)/2 = $7 an hour. This is not the individual causal effect, it is the average of the causal effect
across all individuals in the population. Given data on only educational attainment and wages, the
ACE of $7 is the best we can hope to learn.
When we conduct a regression analysis (that is, consider the regression of observed wages
on educational attainment) we might hope that the regression reveals the average causal effect.
Technically, that the regression derivative (the coefficient on education) equals the ACE. Is this the
case? In other words, what is the relationship between the average causal effect ACE(x1 , x2 ) and
the regression derivative $\nabla_1 m(x_1, x_2)$? Equation (2.52) implies that the CEF is
$$m(x_1, x_2) = E\left(h(x_1, x_2, u) \mid x_1, x_2\right) = \int h(x_1, x_2, u)\, f(u \mid x_1, x_2)\, du,$$
the average causal equation, averaged over the conditional distribution of the unobserved component u.
Applying the marginal effect operator, the regression derivative is
$$\nabla_1 m(x_1, x_2) = \int \nabla_1 h(x_1, x_2, u)\, f(u \mid x_1, x_2)\, du + \int h(x_1, x_2, u)\, \nabla_1 f(u \mid x_1, x_2)\, du$$
$$\qquad\qquad\quad\; = ACE(x_1, x_2) + \int h(x_1, x_2, u)\, \nabla_1 f(u \mid x_1, x_2)\, du. \qquad (2.55)$$
Equation (2.55) shows that in general, the regression derivative does not equal the average
causal effect. The difference is the second term on the right-hand-side of (2.55). The regression
derivative and ACE are equal in the special case when this term equals zero, which occurs when
∇1 f (u | x1 , x2 ) = 0, that is, when the conditional density of u given (x1 , x2 ) does not depend on
x1 . When this condition holds then the regression derivative equals the ACE, which means that
regression analysis can be interpreted causally, in the sense that it uncovers average causal effects.
The condition is sufficiently important that it has a special name in the treatment effects
literature.
$$\nabla_1 m(x_1, x_2) = ACE(x_1, x_2),$$
that is, the regression derivative equals the average causal effect for $x_1$ on $y$ conditional on $x_2$.
This is a fascinating result. It shows that whenever the unobservable is independent of the
treatment variable (after conditioning on appropriate regressors) the regression derivative equals the
average causal effect. In this case, the CEF has causal economic meaning, giving strong justification
to estimation of the CEF. Our derivation also shows the critical role of the CIA. If CIA fails, then
the equality of the regression derivative and ACE fails.
This theorem is quite general. It applies equally to the treatment-effects model where x1 is
binary or to more general settings where x1 is continuous.
It is also helpful to understand that the CIA is weaker than full independence of u from the
regressors (x1 , x2 ). The CIA was introduced precisely as a minimal sufficient condition to obtain
the desired result. Full independence implies the CIA and implies that each regression derivative
equals that variable’s average causal effect, but full independence is not necessary in order to
causally interpret a subset of the regressors.
To illustrate, let’s return to our education example involving a population with equal numbers
of Jennifer’s and George’s. Recall that Jennifer earns $10 as a high-school graduate and $20 as a
college graduate (and so has a causal effect of $10) while George earns $8 as a high-school graduate
and $12 as a college graduate (so has a causal effect of $4). Given this information, the average
causal effect of college is $7, which is what we hope to learn from a regression analysis.
Now suppose that while in high school all students take an aptitude test, and if a student gets
a high (H) score he or she goes to college with probability 3/4, and if a student gets a low (L)
score he or she goes to college with probability 1/4. Suppose further that Jennifer’s get an aptitude
score of H with probability 3/4, while George’s get a score of H with probability 1/4. Given this
situation, 62.5% of Jennifer’s will go to college13 , while 37.5% of George’s will go to college14 .
An econometrician who randomly samples 32 individuals and collects data on educational attainment and wages will find the following wage distribution:

                              $8    $10    $12    $20     Mean
  High-School Graduate        10      6      0      0     $8.75
  College Graduate             0      0      6     10    $17.00

Let college denote a dummy variable taking the value of 1 for a college graduate, otherwise 0.
Thus the regression of wages on college attendance takes the form
$$E(wage \mid college) = 8.25\, college + 8.75.$$
The coefficient on the college dummy, $8.25, is the regression derivative, and the implied wage effect of college attendance. But $8.25 overstates the average causal effect of $7. The reason is because
[Footnote 13: Pr(College | Jennifer) = Pr(College | H) Pr(H | Jennifer) + Pr(College | L) Pr(L | Jennifer) = (3/4)² + (1/4)² = 0.625]
[Footnote 14: Pr(College | George) = Pr(College | H) Pr(H | George) + Pr(College | L) Pr(L | George) = (3/4)(1/4) + (1/4)(3/4) = 0.375]
the CIA fails. In this model the unobservable u is the individual’s type (Jennifer or George) which
is not independent of the regressor x1 (education), since Jennifer is more likely to go to college than
George. Since Jennifer’s causal effect is higher than George’s, the regression derivative overstates
the ACE. The coefficient $8.25 is not the average benefit of college attendance; rather, it is the
observed difference in realized wages in a population whose decision to attend college is correlated
with their individual causal effect. At the risk of repeating myself, in this example, $8.25 is the true
regression derivative, it is the difference in average wages between those with a college education and
those without. It is not, however, the average causal effect of college education in the population.
This does not mean that it is impossible to estimate the ACE. The key is conditioning on the
appropriate variables. The CIA says that we need to find a variable x2 such that conditional on
x2 , u and x1 (type and education) are independent. In this example a variable which will achieve
this is the aptitude test score. The decision to attend college was based on the test score, not on
an individual’s type. Thus educational attainment and type are independent once we condition on
the test score.
This also alters the ACE. Notice that Definition 2.30.2 is a function of x2 (the test score).
Among the students who receive a high test score, 3/4 are Jennifer’s and 1/4 are George’s. Thus
the ACE for students with a score of H is (3/4) × 10 + (1/4) × 4 = $8.50. Among the students who
receive a low test score, 1/4 are Jennifer’s and 3/4 are George’s. Thus the ACE for students with
a score of L is (1/4) × 10 + (3/4) × 4 = $5.50. The ACE varies between these two observable groups
(those with high test scores and those with low test scores). Again, we would hope to be able to
learn the ACE from a regression analysis, this time from a regression of wages on education and
test scores.
To see this in the wage distribution, suppose that the econometrician collects data on the
aptitude test score as well as education and wages. Given a random sample of 32 individuals we
would expect to find the following wage distribution:
$8 $10 $12 $20 Mean
High-School Graduate + High Test Score 1 3 0 0 $9.50
College Graduate + High Test Score 0 0 3 9 $18.00
High-School Graduate + Low Test Score 9 3 0 0 $8.50
College Graduate + Low Test Score 0 0 3 1 $14.00
Define the dummy variable highscore which takes the value 1 for students who received a
high test score, else zero. The regression of wages on college attendance and test scores (with
interactions) takes the form
E (wage | college, highscore) = 1.00highscore + 5.50college + 3.00highscore × college + 8.50.
The coefficient on college, $5.50, is the regression derivative of college attendance for those with low test scores, and the sum of this coefficient with the interaction coefficient, $8.50, is the regression derivative for college attendance for those with high test scores. These equal the average causal effects for the two groups.
In this example, by conditioning on the aptitude test score, the average causal effect of education
on wages can be learned from a regression analysis. What this shows is that by conditioning on the
proper variables, it may be possible to achieve the CIA, in which case regression analysis measures
average causal effects.
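The 32-observation example can be coded directly; the following R lines (a literal transcription of the counts in the table above, not an external data set) reproduce the regression coefficients reported in the text.

# The 32-person example: wages by college attendance and aptitude test score
wage      <- c(rep(c(8, 10), c(1, 3)),    # high-school graduates with high scores
               rep(c(12, 20), c(3, 9)),   # college graduates with high scores
               rep(c(8, 10), c(9, 3)),    # high-school graduates with low scores
               rep(c(12, 20), c(3, 1)))   # college graduates with low scores
college   <- rep(c(0, 1, 0, 1), c(4, 12, 12, 4))
highscore <- rep(c(1, 1, 0, 0), c(4, 12, 12, 4))
coef(lm(wage ~ college))              # slope 8.25: the regression derivative ignoring the score
coef(lm(wage ~ highscore * college))  # 8.50 + 1.00 highscore + 5.50 college + 3.00 interaction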
We can unify these definitions by writing the expectation as the Lebesgue integral with respect to the distribution function F:
$$Ey = \int_{-\infty}^{\infty} y\, dF(y). \qquad (2.56)$$
In the event that the integral (2.56) is not finite, separately evaluate the two integrals
$$I_1 = \int_0^{\infty} y\, dF(y) \qquad (2.57)$$
$$I_2 = -\int_{-\infty}^{0} y\, dF(y). \qquad (2.58)$$
By Liapunov’s Inequality (B.20), (2.59) implies E |y|s < ∞ for all s ≤ r. Thus, for example, if the
fourth moment is finite then the first, second and third moments are also finite.
It is common in econometric theory to assume that the variables, or certain transformations of
the variables, have finite moments of a certain order. How should we interpret this assumption?
How restrictive is it?
One way to visualize the importance is to consider the class of Pareto densities given by
$$f(y) = a\, y^{-a-1}, \qquad y > 1, \quad a > 0.$$
The parameter a of the Pareto distribution indexes the rate of decay of the tail of the density.
Larger a means that the tail declines to zero more quickly. See Figure 2.11 below where we show
the Pareto density for a = 1 and a = 2. The parameter a also determines which moments are finite.
We can calculate that
$$E|y|^r = \begin{cases} a \displaystyle\int_1^{\infty} y^{r-a-1}\, dy = \dfrac{a}{a - r} & \text{if } r < a \\ \infty & \text{if } r \geq a. \end{cases}$$
This shows that if y is Pareto distributed with parameter a, then the r’th moment of y is finite if
and only if r < a. Higher a means higher finite moments. Equivalently, the faster the tail of the
density declines to zero, the more moments are finite.
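A quick numerical check of the Pareto moment formula in R (the value a = 2 is chosen only for illustration):

# Pareto density f(y) = a * y^(-a-1) on y >= 1: the r'th moment is a/(a-r) for r < a
a <- 2
pareto_moment <- function(r) integrate(function(y) y^r * a * y^(-a - 1), 1, Inf)$value
pareto_moment(1)      # finite: equals a/(a - 1) = 2
pareto_moment(1.9)    # finite but large: a/(a - r) = 20
# pareto_moment(2)    # would diverge: the second moment is infinite when r >= a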
This connection between tail decay and finite moments is not limited to the Pareto distribution.
We can make a similar analysis using a tail bound. Suppose that y has density f(y) which satisfies the bound $f(y) \leq A|y|^{-a-1}$ for some $A < \infty$ and $a > 0$. Since f(y) is bounded by a multiple of a Pareto-type density, its tail behavior is similarly bounded. This means that for $r < a$,
$$E|y|^r = \int_{-\infty}^{\infty} |y|^r f(y)\, dy \leq \int_{-1}^{1} f(y)\, dy + 2A\int_1^{\infty} y^{r-a-1}\, dy \leq 1 + \frac{2A}{a - r} < \infty.$$
Thus if the tail of the density declines at the rate $|y|^{-a-1}$ or faster, then y has finite moments up to (but not including) a. Broadly speaking, the restriction that y has a finite r'th moment means that the tail of y's density declines to zero faster than $|y|^{-r-1}$. The faster the tail declines, the rarer it is to observe an extreme value of y.
We complete this section by adding an alternative representation of expectation in terms of the
distribution function.
Proof of Theorem 2.31.1: Let $F^*(x) = \Pr(y > x) = 1 - F(x)$, where $F(x)$ is the distribution function. By integration by parts,
$$Ey = \int_0^{\infty} y\, dF(y) = -\int_0^{\infty} y\, dF^*(y) = -\left[y F^*(y)\right]_0^{\infty} + \int_0^{\infty} F^*(y)\, dy = \int_0^{\infty} \Pr(y > u)\, du$$
as stated. $\blacksquare$
these are the situations where the conditional mean is easiest to describe and understand. However,
the conditional mean exists quite generally without appealing to the properties of either discrete
or continuous random variables.
To justify this claim we now present a deep result from probability theory. What it says is that
the conditional mean exists for all joint distributions (y, x) for which y has a finite mean.
The conditional mean m(x) defined by (2.60) specializes to (2.7) when (y, x) have a joint density.
The usefulness of definition (2.60) is that Theorem 2.32.1 shows that the conditional mean m(x)
exists for all finite-mean distributions. This definition allows y to be discrete or continuous, for x to
be scalar or vector-valued, and for the components of x to be discrete or continuously distributed.
2.33 Identification*
A critical and important issue in structural econometric modeling is identification, meaning that
a parameter is uniquely determined by the distribution of the observed variables. It is relatively
straightforward in the context of the unconditional and conditional mean, but it is worthwhile to
introduce and explore the concept at this point for clarity.
Let F denote the distribution of the observed data, for example the distribution of the pair
(y, x). Let F be a collection of distributions F. Let θ be a parameter of interest (for example, the
mean Ey).
Equivalently, θ is identified if we can write it as a mapping θ = g(F ) on the set F. The restriction
to the set F is important. Most parameters are identified only on a strict subset of the space of all
distributions.
Take, for example, the mean $\mu = Ey$. It is uniquely determined if $E|y| < \infty$, so it is clear that $\mu$ is identified for the set
$$\mathcal{F} = \left\{ F : \int_{-\infty}^{\infty} |y|\, dF(y) < \infty \right\}.$$
However, $\mu$ is also well defined when it is either positive or negative infinity. Hence, defining $I_1$ and $I_2$ as in (2.57) and (2.58), we can deduce that $\mu$ is identified on the set
$$\mathcal{F} = \left\{ F : \{I_1 < \infty\} \cup \{I_2 < \infty\} \right\}.$$
Next, consider the conditional mean. Theorem 2.32.1 demonstrates that E |y| < ∞ is a sufficient
condition for identification.
That is, y∗ is capped at the value τ. A common example is income surveys, where income responses
are “top-coded”, meaning that incomes above the top code τ are recorded as equalling the top
code. The observed variable $y^*$ has distribution
$$F^*(u) = \begin{cases} F(u) & \text{for } u \leq \tau \\ 1 & \text{for } u \geq \tau. \end{cases}$$
We are interested in features of the distribution F not the censored distribution F ∗ . For example,
we are interested in the mean wage μ = E (y) . The difficulty is that we cannot calculate μ from
F ∗ except in the trivial case where there is no censoring Pr (y ≥ τ ) = 0. Thus the mean μ is not
generically identified from the censored distribution.
A typical solution to the identification problem is to assume a parametric distribution. For
example, let F be the set of normal distributions y ∼ N(μ, σ2 ). It is possible to show that the
parameters (μ, σ 2 ) are identified for all F ∈ F. That is, if we know that the uncensored distribution
is normal, we can uniquely determine the parameters from the censored distribution. This is often
called parametric identification as identification is restricted to a parametric class of distribu-
tions. In modern econometrics this is generally viewed as a second-best solution, as identification
has been achieved only through the use of an arbitrary and unverifiable parametric assumption.
A pessimistic conclusion might be that it is impossible to identify parameters of interest from
censored data without parametric assumptions. Interestingly, this pessimism is unwarranted. It
turns out that we can identify the quantiles qα of F for α ≤ Pr (y ≤ τ ) . For example, if 20%
of the distribution is censored, we can identify all quantiles for α ∈ (0, 0.8). This is often called
nonparametric identification as the parameters are identified without restriction to a parametric
class.
What we have learned from this little exercise is that in the context of censored data, moments
can only be parametrically identified, while (non-censored) quantiles are nonparametrically identi-
fied. Part of the message is that a study of identification can help focus attention on what can be
learned from the data distributions available.
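The point can be illustrated with a short simulation in R (hypothetical, using a log-normal "wage" distribution): after top-coding, quantiles below the censoring probability are unchanged, while the mean is not recoverable.

# Top-coding: quantiles below the censoring point remain identified, the mean does not
set.seed(4)
y     <- exp(rnorm(100000, mean = 3, sd = 0.7))   # uncensored "wages"
tau   <- quantile(y, 0.80)                        # top-code at the 80th percentile
ystar <- pmin(y, tau)                             # observed, censored variable
quantile(y, c(0.25, 0.50, 0.75)) - quantile(ystar, c(0.25, 0.50, 0.75))  # essentially zero
mean(y) - mean(ystar)                             # strictly positive: the mean is distorted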
Proof of Theorem 2.7.1: For convenience, assume that the variables have a joint density $f(y, x)$. Since $E(y \mid x)$ is a function of the random vector $x$ only, to calculate its expectation we integrate with respect to the density $f_x(x)$ of $x$, that is,
$$E\left(E(y \mid x)\right) = \int_{\mathbb{R}^k} E(y \mid x)\, f_x(x)\, dx.$$
Substituting in (2.7) and noting that $f_{y|x}(y|x) f_x(x) = f(y, x)$, we find that the above expression equals
$$\int_{\mathbb{R}^k} \left( \int_{\mathbb{R}} y\, f_{y|x}(y|x)\, dy \right) f_x(x)\, dx = \int_{\mathbb{R}^k} \int_{\mathbb{R}} y\, f(y, x)\, dy\, dx = E(y),$$
the unconditional mean of y. $\blacksquare$
Proof of Theorem 2.7.2: Again assume that the variables have a joint density. It is useful to observe that
$$f(y \mid x_1, x_2)\, f(x_2 \mid x_1) = \frac{f(y, x_1, x_2)}{f(x_1, x_2)} \frac{f(x_1, x_2)}{f(x_1)} = f(y, x_2 \mid x_1), \qquad (2.61)$$
the density of $(y, x_2)$ given $x_1$. Here, we have abused notation and used a single symbol $f$ to denote the various unconditional and conditional densities to reduce notational clutter.
Note that
$$E(y \mid x_1, x_2) = \int_{\mathbb{R}} y\, f(y \mid x_1, x_2)\, dy. \qquad (2.62)$$
Integrating (2.62) with respect to the conditional density of $x_2$ given $x_1$, and applying (2.61), we find that
$$E\left(E(y \mid x_1, x_2) \mid x_1\right) = \int_{\mathbb{R}^{k_2}} E(y \mid x_1, x_2)\, f(x_2 \mid x_1)\, dx_2$$
$$= \int_{\mathbb{R}^{k_2}} \left( \int_{\mathbb{R}} y\, f(y \mid x_1, x_2)\, dy \right) f(x_2 \mid x_1)\, dx_2$$
$$= \int_{\mathbb{R}^{k_2}} \int_{\mathbb{R}} y\, f(y \mid x_1, x_2)\, f(x_2 \mid x_1)\, dy\, dx_2$$
$$= \int_{\mathbb{R}^{k_2}} \int_{\mathbb{R}} y\, f(y, x_2 \mid x_1)\, dy\, dx_2$$
$$= E(y \mid x_1)$$
as stated. $\blacksquare$
This is (2.9). The assumption that E |g (x) y| < ∞ is required for the first equality to be well-
defined. Equation (2.10) follows by applying the Simple Law of Iterated Expectations to (2.9).
$\blacksquare$
Proof of Theorem 2.10.2: The assumption that Ey2 < ∞ implies that all the conditional
expectations below exist.
Set $z = E(y \mid x_1, x_2)$. By the conditional Jensen's inequality (B.13),
$$\left(E(z \mid x_1)\right)^2 \leq E\left(z^2 \mid x_1\right).$$
Similarly,
$$\left(Ey\right)^2 \leq E\left(\left(E(y \mid x_1)\right)^2\right) \leq E\left(\left(E(y \mid x_1, x_2)\right)^2\right). \qquad (2.63)$$
The variables y, E(y | x1 ) and E(y | x1 , x2 ) all have the same mean Ey, so the inequality (2.63)
implies that the variances are ranked monotonically:
so the decomposition
y − μ = y − E(y | x) + E(y | x) − μ
satisfies
var (y) = var (y − E(y | x)) + var (E(y | x)) . (2.65)
The monotonicity of the variances of the conditional mean (2.64) applied to the variance decom-
position (2.65) implies the reverse monotonicity of the variances of the differences, completing the
proof. $\blacksquare$
where the two parts on the right-hand side are finite since $E|y|^r < \infty$ by assumption and $E|m(x)|^r < \infty$ by the Conditional Expectation Inequality (B.14). The fact that $(E|e|^r)^{1/r} < \infty$ implies $E|e|^r < \infty$. $\blacksquare$
Proof of Theorem 2.18.1. For part 1, by the Expectation Inequality (B.15), (A.9) and Assumption 2.18.1,
$$\left\| E\left(xx'\right) \right\| \leq E\left\| xx' \right\| = E\|x\|^2 < \infty.$$
Similarly, using the Expectation Inequality (B.15), the Cauchy-Schwarz Inequality (B.17) and Assumption 2.18.1,
$$\left\| E(xy) \right\| \leq E\|xy\| \leq \left(E\|x\|^2\right)^{1/2}\left(Ey^2\right)^{1/2} < \infty.$$
Thus the moments E (xy) and E (xx0 ) are finite and well defined.
For part 2, the coefficient β = (E (xx0 ))−1 E (xy) is well defined since (E (xx0 ))−1 exists under
Assumption 2.18.1.
Part 3 follows from Definition 2.18.1 and part 2.
For part 4, first note that
$$Ee^2 = E\left(y - x'\beta\right)^2 = Ey^2 - 2E\left(yx'\right)\beta + \beta' E\left(xx'\right)\beta = Ey^2 - E\left(yx'\right)\left(E\left(xx'\right)\right)^{-1} E(xy) \leq Ey^2 < \infty.$$
The inequality holds because $E(yx')\left(E(xx')\right)^{-1}E(xy)$ is a quadratic form and therefore necessarily non-negative. Second, by the Expectation Inequality (B.15), the Cauchy-Schwarz Inequality (B.17) and Assumption 2.18.1,
$$\|E(xe)\| \leq E\|xe\| \leq \left(E\|x\|^2\right)^{1/2}\left(Ee^2\right)^{1/2} < \infty.$$
It follows that the expectation $E(xe)$ is finite, and is zero by the calculation (2.28).
Exercises
Exercise 2.1 Find E (E (E (y | x1 , x2 , x3 ) | x1 , x2 ) | x1 ) .
Exercise 2.3 Prove Theorem 2.8.1.4 using the law of iterated expectations.
Exercise 2.4 Suppose that the random variables y and x only take the values 0 and 1, and have the following joint probability distribution:

            x = 0    x = 1
  y = 0      0.1      0.2
  y = 1      0.4      0.3

Find $E(y \mid x)$, $E(y^2 \mid x)$ and $\mathrm{var}(y \mid x)$ for x = 0 and x = 1.
(c) Show that $\sigma^2(x)$ minimizes the mean-squared error and is thus the best predictor, where
$$\sigma^2(x) = E\left(y^2 \mid x\right) - \left(E(y \mid x)\right)^2.$$
Exercise 2.8 Suppose that y is discrete-valued, taking values only on the non-negative integers,
and the conditional distribution of y given x is Poisson:
Compute $E(y \mid x)$ and $\mathrm{var}(y \mid x)$. Does this justify a linear regression model of the form $y = x'\beta + e$?
Hint: If $\Pr(y = j) = \dfrac{\exp(-\lambda)\lambda^j}{j!}$, then $Ey = \lambda$ and $\mathrm{var}(y) = \lambda$.
Exercise 2.9 Suppose you have two regressors: x1 is binary (takes values 0 and 1) and x2 is
categorical with 3 categories (A, B, C). Write E (y | x1 , x2 ) as a linear regression.
Exercise 2.10 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(e \mid x) = 0$, then $E(x^2 e) = 0$.
Exercise 2.11 True or False. If $y = x\beta + e$, $x \in \mathbb{R}$, and $E(xe) = 0$, then $E(x^2 e) = 0$.
Exercise 2.15 Consider the intercept-only model y = α + e defined as the best linear predictor.
Show that α = E(y).
Exercise 2.16 Let x and y have the joint density $f(x, y) = \frac{3}{2}\left(x^2 + y^2\right)$ on $0 \leq x \leq 1$, $0 \leq y \leq 1$. Compute the coefficients of the best linear predictor $y = \alpha + \beta x + e$. Compute the conditional mean $m(x) = E(y \mid x)$. Are the best linear predictor and conditional mean different?
(b) Use a linear transformation of x to find an expression for the best linear predictor of y given
x. (Be explicit, do not just use the generalized inverse formula.)
then
$$\beta = \mathop{\mathrm{argmin}}_{\beta \in \mathbb{R}^k} d(\beta) = \left(E\left(xx'\right)\right)^{-1} E\left(x\, m(x)\right) = \left(E\left(xx'\right)\right)^{-1} E(xy).$$
Exercise 2.20 Verify that (2.60) holds with m(x) defined in (2.7) when (y, x) have a joint density
f (y, x).
Chapter 3
The Algebra of Least Squares
3.1 Introduction
In this chapter we introduce the popular least-squares estimator. Most of the discussion will be algebraic, with questions of distribution and inference deferred to later chapters.
Assumption 3.2.1 The observations {(y1 , x1 ), ..., (yi , xi ), ..., (yn , xn )} are a
random sample.
With a random sample, the ordering of the data is irrelevant. There is nothing special about any
specific observation or ordering. You can permute the order of the observations and no information
is gained or lost.
As most economic data sets are not literally the result of a random experiment, the random
sampling framework is best viewed as an approximation rather than being literally true.
The linear projection model applies to the random observations (yi , xi ) . This means that the
probability model for the observations is the same as that described in Section 2.18. We can write
the model as
yi = x0i β + ei (3.1)
where the linear projection coefficient β is defined as
$$\beta = \left(E\left(x_i x_i'\right)\right)^{-1} E\left(x_i y_i\right).$$
yi = μ + ei
E(ei ) = 0.
The sample mean is the empirical analog of the population mean, and is the conventional estimator in the absence of other information about μ or the distribution of y. We call $\hat{\mu}$ the moment estimator for μ.
for μ.
Indeed, whenever we have a parameter which can be written as the expectation of a function of
random variables, a natural estimator of the parameter is the moment estimator, which is the sample
mean of the corresponding function of the observations. For example, for $\mu_2 = E(y_i^2)$ the moment estimator is $\hat{\mu}_2 = \frac{1}{n}\sum_{i=1}^n y_i^2$, and for $\theta = E(y_{1i} y_{2i})$ the moment estimator is $\hat{\theta} = \frac{1}{n}\sum_{i=1}^n y_{1i} y_{2i}$.
Alternatively, as $S_n(\beta)$ is a scale multiple of $SSE_n(\beta)$, we may equivalently define $\hat{\beta}$ as the minimizer of $SSE_n(\beta)$. Hence $\hat{\beta}$ is commonly called the least-squares (LS) (or ordinary least squares (OLS)) estimator of β. Here, as is common in econometrics, we put a hat "^" over the parameter β to indicate that $\hat{\beta}$ is a sample estimate of β. This is a helpful convention: just by seeing the symbol $\hat{\beta}$ we can immediately interpret it as an estimator (because of the hat), and as an estimator of a parameter labelled β. Sometimes when we want to be explicit about the estimation method, we will write $\hat{\beta}_{ols}$ to signify that it is the OLS estimator. It is also common to see the notation $\hat{\beta}_n$, where the subscript "n" indicates that the estimator depends on the sample size n.
It is important to understand the distinction between population parameters such as β and sample estimates such as $\hat{\beta}$. The population parameter β is a non-random feature of the population, while the sample estimate $\hat{\beta}$ is a random feature of a random sample. β is fixed, while $\hat{\beta}$ varies across samples.
To visualize the quadratic function $S_n(\beta)$, Figure 3.1 displays an example sum-of-squared errors function $SSE_n(\beta)$ for the case $k = 2$. The least-squares estimator $\hat{\beta}$ is the pair $(\hat{\beta}_1, \hat{\beta}_2)$ minimizing this function.
$$SSE_n(\beta) = \sum_{i=1}^n (y_i - x_i\beta)^2 = \left(\sum_{i=1}^n y_i^2\right) - 2\beta\left(\sum_{i=1}^n x_i y_i\right) + \beta^2\left(\sum_{i=1}^n x_i^2\right).$$
The OLS estimator $\hat{\beta}$ minimizes this function. From elementary algebra we know that the minimizer of the quadratic function $a - 2bx + cx^2$ is $x = b/c$. Thus the minimizer of $SSE_n(\beta)$ is
$$\hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}. \qquad (3.6)$$
The intercept-only model is the special case $x_i = 1$. In this case we find
$$\hat{\beta} = \frac{\sum_{i=1}^n 1 \cdot y_i}{\sum_{i=1}^n 1^2} = \frac{1}{n}\sum_{i=1}^n y_i = \bar{y}, \qquad (3.7)$$
the sample mean of $y_i$. Here, as is common, we put a bar "$\,\bar{}\,$" over $y$ to indicate that the quantity is a sample mean. This calculation shows that the OLS estimator in the intercept-only model is the sample mean.
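A minimal R check of (3.6) and (3.7) on simulated data (hypothetical values, used only for illustration):

# Single-regressor least squares: beta-hat = sum(x*y)/sum(x^2); with x = 1 it is the sample mean
set.seed(5)
x <- rnorm(20)
y <- 2 * x + rnorm(20)
sum(x * y) / sum(x^2)      # formula (3.6)
coef(lm(y ~ x - 1))        # lm() with the intercept suppressed gives the same value
mean(y)                    # intercept-only model: the OLS estimate is the sample mean
coef(lm(y ~ 1))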
The moment estimator of β replaces the population moments in (3.4) with the sample moments:
$$\hat{\beta} = \hat{Q}_{xx}^{-1}\hat{Q}_{xy} = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_i y_i\right) = \left(\sum_{i=1}^n x_i x_i'\right)^{-1}\left(\sum_{i=1}^n x_i y_i\right)$$
Definition 3.6.1 The least-squares estimator $\hat{\beta}$ is
$$\hat{\beta} = \mathop{\mathrm{argmin}}_{\beta \in \mathbb{R}^k} S_n(\beta)$$
where
$$S_n(\beta) = \frac{1}{n}\sum_{i=1}^n \left(y_i - x_i'\beta\right)^2.$$
Adrien-Marie Legendre
The method of least-squares was first published in 1805 by the French math-
ematician Adrien-Marie Legendre (1752-1833). Legendre proposed least-
squares as a solution to the algebraic problem of solving a system of equa-
tions when the number of equations exceeded the number of unknowns. This
was a vexing and common problem in astronomical measurement. As viewed
by Legendre, (3.1) is a set of n equations with k unknowns. As the equations
cannot be solved exactly, Legendre’s goal was to select β to make the set of
errors as small as possible. He proposed the sum of squared error criterion,
and derived the algebraic solution presented above. As he noted, the first-
order conditions (3.8) are a system of k equations with k unknowns, which
can be solved by “ordinary” methods. Hence the method became known
as Ordinary Least Squares and to this day we still use the abbreviation
OLS to refer to Legendre’s estimation method.
3.7 Illustration
We illustrate the least-squares estimator in practice with the data set used to generate the
estimates from Chapter 2. This is the March 2009 Current Population Survey, which has extensive
information on the U.S. population. This data set is described in more detail in Section 3.19. For
this illustration, we use the sub-sample of married (spouse present) black female wage earners with
12 years potential work experience. This sub-sample has 20 observations. Let yi be log wages and
xi be years of education and an intercept. Then
$$\sum_{i=1}^n x_i y_i = \begin{pmatrix} 995.86 \\ 62.64 \end{pmatrix},$$
$$\sum_{i=1}^n x_i x_i' = \begin{pmatrix} 5010 & 314 \\ 314 & 20 \end{pmatrix},$$
and
$$\left(\sum_{i=1}^n x_i x_i'\right)^{-1} = \begin{pmatrix} 0.0125 & -0.196 \\ -0.196 & 3.124 \end{pmatrix}.$$
Thus
$$\hat{\beta} = \begin{pmatrix} 0.0125 & -0.196 \\ -0.196 & 3.124 \end{pmatrix}\begin{pmatrix} 995.86 \\ 62.64 \end{pmatrix} = \begin{pmatrix} 0.155 \\ 0.698 \end{pmatrix}. \qquad (3.10)$$
$$\widehat{\log(Wage)} = 0.155\, education + 0.698. \qquad (3.11)$$
An interpretation of the estimated equation is that each year of education is associated with a 16% increase in mean wages.
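As a check on the arithmetic, the reported moment matrices can be inverted directly; a minimal R sketch using only the sums printed above:

# Reproduce (3.10) from the reported sums for the 20-observation subsample
Sxy <- c(995.86, 62.64)                      # sum of x_i * y_i  (education, intercept)
Sxx <- matrix(c(5010, 314, 314, 20), 2, 2)   # sum of x_i * x_i'
solve(Sxx, Sxy)                              # approximately (0.155, 0.698)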
Equation (3.11) is called a bivariate regression as there are only two variables. A multivari-
ate regression has two or more regressors, and allows a more detailed investigation. Let’s take
an example similar to (3.11) but include all levels of experience. This time, we use the sub-sample
of single (never married) asian men, which has 268 observations. Including as regressors years
of potential work experience (experience) and its square (experience 2 /100) (we divide by 100 to
simplify reporting), we obtain the estimates
$$\widehat{\log(Wage)} = 0.143\, education + 0.036\, experience - 0.071\, experience^2/100 + 0.575. \qquad (3.12)$$
These estimates suggest a 14% increase in mean wages per year of education, holding experience
constant.
Sometimes ŷi is called the predicted value, but this is a misleading label. The fitted value ŷi is a
function of the entire sample, including yi , and thus cannot be interpreted as a valid prediction of
yi . It is thus more accurate to describe ŷi as a fitted rather than a predicted value.
Note that $y_i = \hat{y}_i + \hat{e}_i$ and
$$y_i = x_i'\hat{\beta} + \hat{e}_i. \qquad (3.14)$$
We make a distinction between the error ei and the residual êi . The error ei is unobservable while
the residual êi is a by-product of estimation. These two variables are frequently mislabeled, which
can cause confusion.
Equation (3.8) implies that
$$\sum_{i=1}^n x_i \hat{e}_i = 0. \qquad (3.15)$$
Thus the residuals have a sample mean of zero and the sample correlation between the regressors
and the residual is zero. These are algebraic results, and hold true for all linear regression estimates.
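These orthogonality properties are easy to verify numerically; a short R sketch with simulated data (hypothetical values):

# Least-squares residuals are exactly orthogonal to the regressors
set.seed(6)
n   <- 100
X   <- cbind(1, rnorm(n), rnorm(n))
y   <- drop(X %*% c(1, 2, 3)) + rnorm(n)
fit <- lm.fit(X, y)
crossprod(X, fit$residuals)     # X'e-hat: zero up to machine precision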
$$y_1 = x_1'\beta + e_1$$
$$y_2 = x_2'\beta + e_2$$
$$\vdots$$
$$y_n = x_n'\beta + e_n.$$
Now define
$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \qquad X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}, \qquad e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}.$$
Observe that y and e are n × 1 vectors, and X is an n × k matrix. Then the system of n equations
can be compactly written in the single equation
y = Xβ + e. (3.17)
$$X'\hat{e} = 0. \qquad (3.20)$$
Using matrix notation we have simple expressions for most estimators. This is particularly
convenient for computer programming, as most languages allow matrix notation and manipulation.
$$y = X\beta + e$$
$$\hat{\beta} = \left(X'X\right)^{-1}X'y$$
$$\hat{e} = y - X\hat{\beta}$$
$$X'\hat{e} = 0.$$
As an important example, if we partition the matrix X into two matrices X 1 and X 2 so that
X = [X 1 X 2] ,
$$PP = P X\left(X'X\right)^{-1}X' = X\left(X'X\right)^{-1}X' = P.$$
The matrix P has the property that it creates the fitted values in a least-squares regression:
$$Py = X\left(X'X\right)^{-1}X'y = X\hat{\beta} = \hat{y}.$$
The $i$'th diagonal element of $P = X\left(X'X\right)^{-1}X'$ is
$$h_{ii} = x_i'\left(X'X\right)^{-1}x_i. \qquad (3.21)$$
Theorem 3.10.1
$$\sum_{i=1}^n h_{ii} = \mathrm{tr}\, P = k \qquad (3.22)$$
and
$$0 \leq h_{ii} \leq 1. \qquad (3.23)$$
To show (3.22),
$$\mathrm{tr}\, P = \mathrm{tr}\left(X\left(X'X\right)^{-1}X'\right) = \mathrm{tr}\left(\left(X'X\right)^{-1}X'X\right) = \mathrm{tr}\left(I_k\right) = k.$$
See Appendix A.4 for the definition and properties of the trace operator. The proof of (3.23) is deferred to Section 3.21.
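A numerical illustration of Theorem 3.10.1 with simulated regressors (hypothetical values):

# Leverage values h_ii: the diagonal of P = X (X'X)^{-1} X'
set.seed(7)
X <- cbind(1, rnorm(50), runif(50))
h <- diag(X %*% solve(crossprod(X), t(X)))
range(h)    # all values lie between 0 and 1
sum(h)      # equals k = 3, the number of regressors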
$$M = I_n - P = I_n - X\left(X'X\right)^{-1}X'$$
so that
$$MX = (I_n - P)X = X - PX = X - X = 0.$$
More generally, for any matrix Z whose columns are linear combinations of the columns of X,
$$MZ = Z - PZ = 0.$$
Also,
$$\mathrm{tr}\, M = n - k. \qquad (3.24)$$
(See Exercise 3.9.) While P creates fitted values, M creates least-squares residuals:
$$My = y - Py = y - X\hat{\beta} = \hat{e}. \qquad (3.25)$$
As discussed in the previous section, a special example of a projection matrix occurs when X = 1
is an n-vector of ones, so that $P_1 = 1\left(1'1\right)^{-1}1'$. Similarly, set
$$M_1 = I_n - P_1 = I_n - 1\left(1'1\right)^{-1}1'.$$
While $P_1$ creates a vector of sample means, $M_1$ creates demeaned values:
$$M_1 y = y - 1\bar{y}.$$
For simplicity we will often write the right-hand-side as y − ȳ. The i’th element is yi − ȳ, the
demeaned value of yi .
We can also use (3.25) to write an alternative expression for the residual vector. Substituting
y = Xβ + e into ê = M y and using M X = 0 we find
ê = M y = M (Xβ + e) = M e (3.26)
which is free of dependence on the regression coefficient β.
However, this is infeasible as ei is not observed. In this case it is common to take a two-step
approach to estimation. The residuals êi are calculated in the first step, and then we substitute êi
for ei in expression (3.27) to obtain the feasible estimator
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n \hat{e}_i^2. \qquad (3.28)$$
$$y = Py + My = \hat{y} + \hat{e}. \qquad (3.30)$$
The two components are orthogonal:
$$\hat{y}'\hat{e} = (Py)'(My) = y'PMy = 0.$$
It follows that
$$y'y = \hat{y}'\hat{y} + 2\hat{y}'\hat{e} + \hat{e}'\hat{e} = \hat{y}'\hat{y} + \hat{e}'\hat{e}$$
or
$$\sum_{i=1}^n y_i^2 = \sum_{i=1}^n \hat{y}_i^2 + \sum_{i=1}^n \hat{e}_i^2.$$
$$y - 1\bar{y} = \hat{y} - 1\bar{y} + \hat{e}$$
or
$$\sum_{i=1}^n (y_i - \bar{y})^2 = \sum_{i=1}^n (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^n \hat{e}_i^2.$$
This is commonly called the analysis-of-variance formula for least squares regression.
A commonly reported statistic is the coefficient of determination or R-squared:
$$R^2 = \frac{\sum_{i=1}^n (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^n (y_i - \bar{y})^2} = 1 - \frac{\sum_{i=1}^n \hat{e}_i^2}{\sum_{i=1}^n (y_i - \bar{y})^2}.$$
It is often described as the fraction of the sample variance of yi which is explained by the least-
squares fit. R2 is a crude measure of regression fit. We have better measures of fit, but these require
a statistical (not just algebraic) analysis and we will return to these issues later. One deficiency
with R2 is that it increases when regressors are added to a regression (see Exercise 3.16) so the
“fit” can always be increased by increasing the number of regressors.
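The two expressions for R² can be verified in R on simulated data (hypothetical values); the built-in summary() reports the same quantity:

# R-squared two ways: explained variation over total, or one minus the residual share
set.seed(8)
x   <- rnorm(100)
y   <- 1 + x + rnorm(100)
fit <- lm(y ~ x)
sum((fitted(fit) - mean(y))^2) / sum((y - mean(y))^2)
1 - sum(residuals(fit)^2) / sum((y - mean(y))^2)
summary(fit)$r.squared          # identical value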
$$y = X_1\beta_1 + X_2\beta_2 + e. \qquad (3.31)$$
where $\hat{Q}_{11\cdot2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21}$ and $\hat{Q}_{22\cdot1} = \hat{Q}_{22} - \hat{Q}_{21}\hat{Q}_{11}^{-1}\hat{Q}_{12}$.
Thus
$$\hat{\beta} = \begin{pmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot2}^{-1} & -\hat{Q}_{11\cdot2}^{-1}\hat{Q}_{12}\hat{Q}_{22}^{-1} \\ -\hat{Q}_{22\cdot1}^{-1}\hat{Q}_{21}\hat{Q}_{11}^{-1} & \hat{Q}_{22\cdot1}^{-1} \end{pmatrix}\begin{pmatrix} \hat{Q}_{1y} \\ \hat{Q}_{2y} \end{pmatrix} = \begin{pmatrix} \hat{Q}_{11\cdot2}^{-1}\hat{Q}_{1y\cdot2} \\ \hat{Q}_{22\cdot1}^{-1}\hat{Q}_{2y\cdot1} \end{pmatrix}.$$
Now
$$\hat{Q}_{11\cdot2} = \hat{Q}_{11} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{21} = \frac{1}{n}X_1'X_1 - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'X_1 = \frac{1}{n}X_1'M_2X_1$$
where
$$M_2 = I_n - X_2\left(X_2'X_2\right)^{-1}X_2'$$
is the orthogonal projection matrix for $X_2$. Similarly $\hat{Q}_{22\cdot1} = \frac{1}{n}X_2'M_1X_2$ where
$$M_1 = I_n - X_1\left(X_1'X_1\right)^{-1}X_1'$$
is the orthogonal projection matrix for $X_1$. Also,
$$\hat{Q}_{1y\cdot2} = \hat{Q}_{1y} - \hat{Q}_{12}\hat{Q}_{22}^{-1}\hat{Q}_{2y} = \frac{1}{n}X_1'y - \frac{1}{n}X_1'X_2\left(\frac{1}{n}X_2'X_2\right)^{-1}\frac{1}{n}X_2'y = \frac{1}{n}X_1'M_2y$$
and $\hat{Q}_{2y\cdot1} = \frac{1}{n}X_2'M_1y$.
Therefore
$$\hat{\beta}_1 = \left(X_1'M_2X_1\right)^{-1}\left(X_1'M_2y\right) \qquad (3.34)$$
and
$$\hat{\beta}_2 = \left(X_2'M_1X_2\right)^{-1}\left(X_2'M_1y\right). \qquad (3.35)$$
These are algebraic expressions for the sub-coefficient estimates from (3.32).
where
$$\tilde{X}_2 = M_1 X_2$$
and
$$\tilde{e}_1 = M_1 y.$$
Thus the coefficient estimate $\hat{\beta}_2$ is algebraically equal to the least-squares regression of $\tilde{e}_1$ on $\tilde{X}_2$. Notice that these two are $y$ and $X_2$, respectively, premultiplied by $M_1$. But we know that multiplication by $M_1$ is equivalent to creating least-squares residuals. Therefore $\tilde{e}_1$ is simply the least-squares residual from a regression of $y$ on $X_1$, and the columns of $\tilde{X}_2$ are the least-squares residuals from the regressions of the columns of $X_2$ on $X_1$.
We have proven the following theorem.
In some contexts, the FWL theorem can be used to speed computation, but in most cases there
is little computational advantage to using the two-step algorithm.
This result is a direct analogy of the coefficient representation obtained in Section 2.22. The
result obtained in that section concerned the population projection coefficients, the result obtained
here concern the least-squares estimates. The key message is the same. In the least-squares
regression (3.32), the estimated coefficient β b 2 numerically equals the regression of y on the regressors
X 2 , only after the regressors X 1 have been linearly projected out. Similarly, the coefficient estimate
b numerically equals the regression of y on the regressors X 1 , after the regressors X 2 have been
β 1
linearly projected out. This result can be very insightful when interpreting regression coefficients.
A common application of the FWL theorem, which you may have seen in an introductory
econometrics course, is the demeaning formula for regression. Partition X = [X 1 X 2 ] where
X 1 = 1 is a vector of ones and X 2 is a matrix of observed regressors. In this case,
$$M_1 = I_n - 1\left(1'1\right)^{-1}1'.$$
Observe that
$$\tilde{X}_2 = M_1 X_2 = X_2 - \bar{X}_2$$
and
$$\tilde{y} = M_1 y = y - \bar{y}$$
are the "demeaned" variables. The FWL theorem says that $\hat{\beta}_2$ is the OLS estimate from a regression of $y_i - \bar{y}$ on $x_{2i} - \bar{x}_2$:
$$\hat{\beta}_2 = \left(\sum_{i=1}^n (x_{2i} - \bar{x}_2)(x_{2i} - \bar{x}_2)'\right)^{-1}\left(\sum_{i=1}^n (x_{2i} - \bar{x}_2)(y_i - \bar{y})\right).$$
Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
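A numerical verification of the FWL theorem in R on simulated data (hypothetical values): the coefficient on x2 from the full regression equals the coefficient from regressing residualized y on residualized x2.

# Frisch-Waugh-Lovell: partial out X1 (here, an intercept and x1), then regress residuals on residuals
set.seed(9)
n  <- 200
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(n)
coef(lm(y ~ x1 + x2))["x2"]        # full regression
e_y  <- residuals(lm(y ~ x1))      # y with the intercept and x1 projected out
e_x2 <- residuals(lm(x2 ~ x1))     # x2 with the intercept and x1 projected out
coef(lm(e_y ~ e_x2 - 1))           # identical coefficient on x2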
Ragnar Frisch
Ragnar Frisch (1895-1973) was co-winner with Jan Tinbergen of the first
Nobel Memorial Prize in Economic Sciences in 1969 for their work in devel-
oping and applying dynamic models for the analysis of economic problems.
Frisch made a number of foundational contributions to modern economics
beyond the Frisch-Waugh-Lovell Theorem, including formalizing consumer
theory, production theory, and business cycle theory.
Here, $X_{(-i)}$ and $y_{(-i)}$ are the data matrices omitting the $i$'th row. The leave-one-out predicted value for $y_i$ is
$$\tilde{y}_i = x_i'\hat{\beta}_{(-i)},$$
and the leave-one-out prediction error is
$$\tilde{e}_i = y_i - \tilde{y}_i.$$
A convenient alternative expression for $\hat{\beta}_{(-i)}$ (derived in Section 3.21) is
$$\hat{\beta}_{(-i)} = \hat{\beta} - (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i \qquad (3.37)$$
[Figure: scatter plot of the example sample with the full-sample OLS fitted line and the leave-one-out OLS fitted line, illustrating the impact of an influential observation]
fitted line towards the 26th observation. In fact, the slope coefficient decreases from 0.97 (which
is close to the true value of 1.00) to 0.56, which is substantially reduced. Neither y26 nor x26 are
unusual values relative to their marginal distributions, so this outlier would not have been detected
from examination of the marginal distributions of the data. The change in the slope coefficient of
−0.41 is meaningful and should raise concern to an applied economist.
From (3.37)-(3.38) we know that
$$\hat{\beta} - \hat{\beta}_{(-i)} = (1 - h_{ii})^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i = \left(X'X\right)^{-1}x_i\tilde{e}_i. \qquad (3.42)$$
By direct calculation of this quantity for each observation i, we can directly discover if a specific
observation i is influential for a coefficient estimate of interest.
For a general assessment, we can focus on the predicted values. The difference between the full-sample and leave-one-out predicted values is
$$\hat{y}_i - \tilde{y}_i = x_i'\hat{\beta} - x_i'\hat{\beta}_{(-i)} = x_i'\left(X'X\right)^{-1}x_i\tilde{e}_i = h_{ii}\tilde{e}_i,$$
which is a simple function of the leverage values hii and prediction errors ẽi . Observation i is
influential for the predicted value if |hii ẽi | is large, which requires that both hii and |ẽi | are large.
One way to think about this is that a large leverage value hii gives the potential for observation
i to be influential. A large hii means that observation i is unusual in the sense that the regressor xi
is far from its sample mean. We call an observation with large hii a leverage point. A leverage
point is not necessarily influential as the latter also requires that the prediction error ẽi is large.
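The leave-one-out quantities can be computed without running n separate regressions, using the leverage values; a short R sketch on simulated data (hypothetical values; the summary statistic in the last lines follows the influence measure discussed below):

# Leave-one-out prediction errors and influence via the leverage values
set.seed(10)
x   <- rnorm(30)
y   <- 1 + x + rnorm(30)
fit <- lm(y ~ x)
h   <- hatvalues(fit)            # leverage values h_ii
e   <- residuals(fit)
etilde <- e / (1 - h)            # leave-one-out prediction errors
max(abs(h * etilde))             # largest change in a fitted value from dropping one observation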
To determine if any individual observations are influential in this sense, several diagnostics have
been proposed (some names include DFITS, Cook’s Distance, and Welsch Distance). Unfortunately,
from a statistical perspective it is difficult to recommend these diagnostics for applications as they
are not based on statistical theory. Probably the most relevant measure is the change in the
coefficient estimates given in (3.42). The ratio of these changes to the coefficient’s standard error
is called its DFBETA, and is a postestimation diagnostic available in STATA. While there is no
magic threshold, the concern is whether or not an individual observation meaningfully changes an estimated coefficient of interest. A useful summary measure is
$$Influence = \max_{1 \leq i \leq n} \left| h_{ii}\tilde{e}_i \right|.$$
This is the largest (absolute) change in the predicted value due to a single observation. If this diag-
nostic is large relative to the distribution of yi , it may indicate that the observation is influential.
If an observation is determined to be influential, what should be done? As a common cause
of influential observations is data entry error, the influential observations should be examined for
evidence that the observation was mis-recorded. Perhaps the observation falls outside of permitted
ranges, or some observables are inconsistent (for example, a person is listed as having a job but
receives earnings of $0). If it is determined that an observation is incorrectly recorded, then the
observation is typically deleted from the sample. This process is often called “cleaning the data”.
The decisions made in this process involve a fair amount of individual judgement. When this is
done it is proper empirical practice to document such choices. (It is useful to keep the source data
in its original form, a revised data file after cleaning, and a record describing the revision process.
This is especially useful when revising empirical work at a later date.)
It is also possible that an observation is correctly measured, but unusual and influential. In
this case it is unclear how to proceed. Some researchers will try to alter the specification to
properly model the influential observation. Other researchers will delete the observation from the
sample. The motivation for this choice is to prevent the results from being skewed or determined
by individual observations, but this practice is viewed skeptically by many researchers who believe
it reduces the integrity of reported empirical results.
For an empirical illustration, consider the log wage regression (3.12) for single asian males.
This regression, which has 268 observations, has Inf luence = 0.29. This means that the most
influential observation, when deleted, changes the predicted (fitted) value of the dependent variable
log(W age) by 0.29, or equivalently the wage by 29%. This is a meaningful change and suggests
further investigation. We examine the influential observation, and find that its leverage hii is 0.33,
which is disturbingly large. (Recall from Theorem 3.10.1 that the leverage values are all non-negative and sum to k, here 4, so the average leverage in this sample of 268 observations is only about 0.015. This single observation carries more than twenty times the average leverage!)
Examining further, we find that this individual is 65 years old with 8 years education, so that his
potential experience is 51 years. This is the highest experience in the subsample — the next highest
is 41 years. The large leverage is due to his unusual characteristics (very low education and
very high experience) within this sample. Essentially, regression (3.12) is attempting to estimate
the conditional mean at experience= 51 with only one observation, so it is not surprising that this
observation determines the fit and is thus influential. A reasonable conclusion is the regression
function can only be estimated over a smaller range of experience. We restrict the sample to
individuals with less than 45 years experience, re-estimate, and obtain the following estimates.
$$\widehat{\log(Wage)} = 0.144\, education + 0.043\, experience - 0.095\, experience^2/100 + 0.531. \qquad (3.43)$$
For this regression, we calculate that Inf luence = 0.11, which is greatly reduced relative to the
regression (3.12). Comparing (3.43) with (3.12), the slope coefficient for education is essentially
unchanged, but the coefficients on experience and its square have slightly increased.
By eliminating the influential observation, equation (3.43) can be viewed as a more robust
estimate of the conditional mean for most levels of experience. Whether to report (3.12) or (3.43)
in an application is largely a matter of judgment.
The maximum likelihood estimator (MLE) $(\hat{\beta}_{mle}, \hat{\sigma}^2_{mle})$ maximizes $\log L(\beta, \sigma^2)$. Since the latter is a function of $\beta$ only through the sum of squared errors $SSE_n(\beta)$, maximizing the likelihood is identical to minimizing $SSE_n(\beta)$. Hence
$$\hat{\beta}_{mle} = \hat{\beta}_{ols},$$
the MLE for $\beta$ equals the OLS estimator. Due to this equivalence, the least squares estimator $\hat{\beta}_{ols}$ is often called the MLE.
We can also find the MLE for $\sigma^2$. Plugging $\hat{\beta}$ into the log-likelihood we obtain
$$\log L\left(\hat{\beta}_{mle}, \sigma^2\right) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{SSE_n(\hat{\beta}_{mle})}{2\sigma^2}.$$
Maximization with respect to $\sigma^2$ yields the first-order condition
$$\frac{\partial}{\partial\sigma^2}\log L\left(\hat{\beta}_{mle}, \hat{\sigma}^2\right) = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\left(\hat{\sigma}^2\right)^2}SSE_n(\hat{\beta}_{mle}) = 0.$$
Solving for $\hat{\sigma}^2$ yields $\hat{\sigma}^2_{mle} = SSE_n(\hat{\beta}_{mle})/n = \frac{1}{n}\sum_{i=1}^n \hat{e}_i^2$, which equals the moment estimator (3.28).
4. education
0 Less than 1st grade
4 1st, 2nd, 3rd, or 4th grade
6 5th or 6th grade
8 7th or 8th grade
9 9th grade
10 10th grade
11 11th grade or 12th grade with no high school diploma
12 High school graduate, high school diploma or equivalent
13 Some college but no degree
14 Associate degree in college, including occupation/vocation programs
16 Bachelor’s degree or equivalent (BA, AB, BS)
18 Master’s degree (MA, MS MENG, MED, MSW, MBA)
20 Professional degree or Doctorate degree (MD, DDS, DVM, LLB, JD, PHD, EDD)
3.20 Programming
Most packages allow both interactive programming (where you enter commands one-by-one) and
batch programming (where you run a pre-written sequence of commands from a file). Interactive
programming can be useful for exploratory analysis, but eventually all work should be executed in
batch mode. This is the best way to control and document your work.
Batch programs are text files where each line executes a single command. For Stata, this file
needs to have the filename extension “.do”, and for Matlab “.m”, while for Gauss and R there are
no specific naming requirements.
To execute a program file, you type a command within the program.
Stata: do chapter3 executes the file chapter3.do
Gauss: run chapter3.prg executes the file chapter3.prg
Matlab: run chapter3 executes the file chapter3.m
R: source(“chapter3.r”) executes the file chapter3.r
When writing batch files, it is useful to include comments for documentation and readability.
We illustrate programming files for Stata, Gauss, R, and Matlab, which execute a portion of
the empirical illustrations from Sections 3.7 and 3.17.
Stata do File
R Program File
Exercises
Exercise 3.1 Let y be a random variable with $\mu = Ey$ and $\sigma^2 = \mathrm{var}(y)$. Define
$$g\left(y, \mu, \sigma^2\right) = \begin{pmatrix} y - \mu \\ (y - \mu)^2 - \sigma^2 \end{pmatrix}.$$
Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $\bar{g}_n(\hat{\mu}, \hat{\sigma}^2) = 0$ where $\bar{g}_n(m, s) = n^{-1}\sum_{i=1}^n g(y_i, m, s)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.
Exercise 3.2 Consider the OLS regression of the n × 1 vector y on the n × k matrix X. Consider
an alternative set of regressors Z = XC, where C is a k × k non-singular matrix. Thus, each
column of Z is a mixture of some of the columns of X. Compare the OLS estimates and residuals
from the regression of y on X to the OLS estimates from the regression of y on Z.
Exercise 3.4 Let ê be the OLS residual from a regression of y on X = [X 1 X 2 ]. Find X 02 ê.
Exercise 3.5 Let ê be the OLS residual from a regression of y on X. Find the OLS coefficient
from a regression of ê on X.
Exercise 3.6 Let ŷ = X(X 0 X)−1 X 0 y. Find the OLS coefficient from a regression of ŷ on X.
Exercise 3.11 Show that when X contains a constant, $\frac{1}{n}\sum_{i=1}^n \hat{y}_i = \bar{y}$.
Exercise 3.12 A dummy variable takes on only the values 0 and 1. It is used for categorical
data, such as an individual’s gender. Let d1 and d2 be vectors of 1’s and 0’s, with the i0 th element
of d1 equaling 1 and that of d2 equaling 0 if the person is a man, and the reverse if the person is a
woman. Suppose that there are n1 men and n2 women in the sample. Consider fitting the following
three equations by OLS
y = μ + d1 α1 + d2 α2 + e (3.46)
y = d1 α1 + d2 α2 + e (3.47)
y = μ + d1 φ + e (3.48)
Can all three equations (3.46), (3.47), and (3.48) be estimated by OLS? Explain if not.
(a) Compare regressions (3.47) and (3.48). Is one more general than the other? Explain the
relationship between the parameters in (3.47) and (3.48).
(c) Letting α = (α1 α2 )0 , write equation (3.47) as y = Xα+e. Consider the assumption E(xi ei ) =
0. Is there any content to this assumption in this setting?
$$y^* = y - d_1\bar{y}_1 - d_2\bar{y}_2$$
$$X^* = X - d_1\bar{x}_1' - d_2\bar{x}_2'$$
where $\bar{x}_1$ and $\bar{x}_2$ are the $k \times 1$ means of the regressors for men and women, respectively.
(c) Compare $\tilde{\beta}$ from the OLS regression
$$y^* = X^*\tilde{\beta} + \tilde{e}$$
with $\hat{\beta}$ from the OLS regression
$$y = d_1\hat{\alpha}_1 + d_2\hat{\alpha}_2 + X\hat{\beta} + \hat{e}.$$
Exercise 3.14 Let $\hat{\beta}_n = (X_n'X_n)^{-1}X_n'y_n$ denote the OLS estimate when $y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(y_{n+1}, x_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is
$$\hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + x_{n+1}'\left(X_n'X_n\right)^{-1}x_{n+1}}\left(X_n'X_n\right)^{-1}x_{n+1}\left(y_{n+1} - x_{n+1}'\hat{\beta}_n\right).$$
Exercise 3.15 Prove that R2 is the square of the sample correlation between y and ŷ.
and
$$y = X_1\hat{\beta}_1 + X_2\hat{\beta}_2 + \hat{e}.$$
Let $R_1^2$ and $R_2^2$ be the R-squared from the two regressions. Show that $R_2^2 \geq R_1^2$. Is there a case (explain) when there is equality $R_2^2 = R_1^2$?
Exercise 3.18 For which observations will $\hat{\beta}_{(-i)} = \hat{\beta}$?
Exercise 3.19 Use the data set from Section 3.19 and the sub-sample used for equation (3.43) (see Section 3.20 for data construction).
1. Estimate equation (3.43) and compute the equation R2 and sum of squared errors.
2. Re-estimate the slope on education using the residual regression approach. Regress log(Wage)
on experience and its square, regress education on experience and its square, and the residuals
on the residuals. Report the estimates from this final regression, along with the equation R2
and sum of squared errors. Does the slope coefficient equal the value in (3.43)? Explain.
Exercise 3.20 Estimate equation (3.43) as in part 1 of the previous question. Let êi be the
OLS residual, ŷi the predicted value from the regression, x1i be education and x2i be experience.
Numerically calculate the following:
(a) $\sum_{i=1}^n \hat{e}_i$
(b) $\sum_{i=1}^n x_{1i}\hat{e}_i$
(c) $\sum_{i=1}^n x_{2i}\hat{e}_i$
(d) $\sum_{i=1}^n x_{1i}^2\hat{e}_i$
(e) $\sum_{i=1}^n x_{2i}^2\hat{e}_i$
(f) $\sum_{i=1}^n \hat{y}_i\hat{e}_i$
(g) $\sum_{i=1}^n \hat{e}_i^2$
Are these calculations consistent with the theoretical properties of OLS? Explain.
1. Estimate a log wage regression for the subsample of white male Hispanics. In addition to
education, experience, and its square, include a set of binary variables for regions and marital
status. For regions, create dummy variables for Northeast, South and West so that
Midwest is the excluded group. For marital status, create variables for married, widowed or
divorced, and separated, so that single (never married) is the excluded group.
2. Repeat this estimation using a different econometric package. Compare your results. Do they
agree?
Chapter 4
Least Squares Regression
4.1 Introduction
In this chapter we investigate some finite-sample properties of least-squares applied to a random
sample in the linear regression model. In particular, we calculate the finite-sample mean and
covariance matrix and propose standard errors for the coefficient estimates.
$$y_i = \mu + e_i$$
$$E(e_i) = 0,$$
which is equivalent to the regression model with $k = 1$ and $x_i = 1$. In the intercept model, $\mu = E(y_i)$ is the mean of $y_i$. (See Exercise 2.15.) The least-squares estimator $\hat{\mu} = \bar{y}$ equals the sample mean, as shown in equation (3.7).
We now calculate the mean and variance of the estimator $\bar{y}$. Since the sample mean is a linear function of the observations, its expectation is simple to calculate:
$$E(\bar{y}) = E\left(\frac{1}{n}\sum_{i=1}^n y_i\right) = \frac{1}{n}\sum_{i=1}^n E(y_i) = \mu.$$
This shows that the expected value of least-squares estimator (the sample mean) equals the projec-
tion coefficient (the population mean). An estimator with the property that its expectation equals
the parameter it is estimating is called unbiased.
We next calculate the variance of the estimator $\bar{y}$. Making the substitution $y_i = \mu + e_i$ we find
$$\bar{y} - \mu = \frac{1}{n}\sum_{i=1}^n e_i.$$
Then
$$y_i = x_i'\beta + e_i \qquad (4.1)$$
$$E(e_i \mid x_i) = 0, \qquad (4.2)$$
with finite moments
$$Ey_i^2 < \infty, \qquad E\|x_i\|^2 < \infty,$$
and an invertible design matrix
$$Q_{xx} = E\left(x_i x_i'\right) > 0.$$
We will consider both the general case of heteroskedastic regression, where the conditional
variance ¡ ¢
E e2i | xi = σ 2 (xi ) = σi2
is unrestricted, and the specialized case of homoskedastic regression, where the conditional variance
is constant. In the latter case we add the following assumption.
is independent of xi .
The first equality states that the conditional expectation of yi given {x1 , ..., xn } only depends on
xi , since the observations are independent across i. The second equality is the assumption of a
linear conditional mean.
Using definition (3.9), the conditioning theorem, the linearity of expectations, (4.4), and prop-
erties of the matrix inverse,
$$E\left(\hat{\beta} \mid X\right) = E\left(\left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\left(\sum_{i=1}^{n} x_iy_i\right) \Bigm| X\right) = \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}E\left(\sum_{i=1}^{n} x_iy_i \Bigm| X\right) = \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\sum_{i=1}^{n} x_iE(y_i \mid X) = \left(\sum_{i=1}^{n} x_ix_i'\right)^{-1}\sum_{i=1}^{n} x_ix_i'\beta = \beta.$$
Now let’s show the same result using matrix notation. (4.4) implies
$$E(y \mid X) = \begin{pmatrix}\vdots\\E(y_i\mid X)\\\vdots\end{pmatrix} = \begin{pmatrix}\vdots\\x_i'\beta\\\vdots\end{pmatrix} = X\beta. \qquad (4.5)$$
Similarly,
$$E(e \mid X) = \begin{pmatrix}\vdots\\E(e_i\mid X)\\\vdots\end{pmatrix} = \begin{pmatrix}\vdots\\E(e_i\mid x_i)\\\vdots\end{pmatrix} = 0. \qquad (4.6)$$
Using definition (3.18), the conditioning theorem, the linearity of expectations, (4.5), and the
properties of the matrix inverse,
$$E\left(\hat{\beta} \mid X\right) = E\left((X'X)^{-1}X'y \mid X\right) = (X'X)^{-1}X'E(y \mid X) = (X'X)^{-1}X'X\beta = \beta.$$
At the risk of belaboring the derivation, another way to calculate the same result is as follows.
Insert $y = X\beta + e$ into the formula (3.18) for $\hat{\beta}$ to obtain
$$\hat{\beta} = (X'X)^{-1}X'(X\beta + e) = (X'X)^{-1}X'X\beta + (X'X)^{-1}X'e = \beta + (X'X)^{-1}X'e. \qquad (4.7)$$
Taking conditional expectations and applying (4.6),
$$E\left(\hat{\beta} \mid X\right) = \beta + (X'X)^{-1}X'E(e \mid X) = \beta, \qquad (4.8)$$
and by the law of iterated expectations,
$$E(\hat{\beta}) = \beta. \qquad (4.9)$$
Equation (4.9) says that the estimator $\hat{\beta}$ is unbiased for $\beta$, meaning that the distribution of
$\hat{\beta}$ is centered at $\beta$. Equation (4.8) says that the estimator is conditionally unbiased, which is a
stronger result. It says that $\hat{\beta}$ is unbiased for any realization of the regressor matrix $X$.
For any random vector $Z$ define the covariance matrix $\mathrm{var}(Z) = E\left((Z-EZ)(Z-EZ)'\right)$, and for any pair $(Z, X)$ define the conditional covariance matrix
$$\mathrm{var}(Z \mid X) = E\left((Z - E(Z\mid X))(Z - E(Z\mid X))' \mid X\right).$$
We define
$$V_{\hat{\beta}} \overset{def}{=} \mathrm{var}\left(\hat{\beta} \mid X\right),$$
the conditional covariance matrix of the regression coefficient estimates. We now derive its form.
The conditional covariance matrix of the $n \times 1$ regression error $e$ is the $n \times n$ matrix
$$D = E(ee' \mid X).$$
The $i$'th diagonal element of $D$ is $E(e_i^2 \mid X) = E(e_i^2 \mid x_i) = \sigma_i^2$, while for $i \neq j$ the off-diagonal elements are $E(e_ie_j \mid X) = E(e_i \mid x_i)E(e_j \mid x_j) = 0$, where the first equality uses independence of the observations (Assumption 1.5.1) and the second
is (4.2). Thus $D$ is a diagonal matrix with $i$'th diagonal element $\sigma_i^2$:
$$D = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = \begin{pmatrix}\sigma_1^2 & 0 & \cdots & 0\\ 0 & \sigma_2^2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \sigma_n^2\end{pmatrix}. \qquad (4.10)$$
In the special case of the linear homoskedastic regression model (4.3), $E(e_i^2 \mid x_i) = \sigma_i^2 = \sigma^2$ and $D$ simplifies to $D = I_n\sigma^2$. In general, using (4.7) and $\mathrm{var}(e \mid X) = D$, the conditional covariance matrix of the least-squares estimator is
$$V_{\hat{\beta}} = \mathrm{var}\left(\hat{\beta} \mid X\right) = (X'X)^{-1}\left(X'DX\right)(X'X)^{-1},$$
where $X'DX = \sum_{i=1}^{n} x_ix_i'\sigma_i^2$ is a weighted version of $X'X$.
In the special case of the linear homoskedastic regression model, $D = I_n\sigma^2$, so $X'DX = X'X\sigma^2$, and the variance matrix simplifies to
$$V_{\hat{\beta}} = (X'X)^{-1}\sigma^2.$$
Consider a linear estimator $\tilde{\beta} = A'y$ where $A = A(X)$ is an $n \times k$ function of $X$ satisfying $A'X = I_k$. Such an estimator is unbiased, and its conditional variance is $\mathrm{var}(\tilde{\beta} \mid X) = A'DA = A'A\sigma^2$, the last equality using the homoskedasticity assumption $D = I_n\sigma^2$. The "best" unbiased linear
estimator is obtained by finding the matrix $A_0$ satisfying $A_0'X = I_k$ such that $A_0'A_0$ is minimized
in the positive definite sense, in that for any other matrix $A$ satisfying $A'X = I_k$, $A'A - A_0'A_0$
is positive semi-definite.
The first part of the Gauss-Markov theorem is a limited efficiency justification for the least-
squares estimator. The justification is limited because the class of models is restricted to ho-
moskedastic linear regression and the class of potential estimators is restricted to linear unbiased
estimators. This latter restriction is particularly unsatisfactory as the theorem leaves open the
possibility that a non-linear or biased estimator could have lower mean squared error than the
least-squares estimator.
The second part of the theorem shows that in the (heteroskedastic) linear regression model,
within the class of linear unbiased estimators the best estimator is not least-squares but is (4.13).
This is called the Generalized Least Squares (GLS) estimator. The GLS estimator is infeasible
as the matrix D is unknown. This result does not suggest a practical alternative to least-squares.
We return to the issue of feasible implementation of GLS in Section 9.2.
We give a proof of the first part of the theorem below, and leave the proof of the second part
for Exercise 4.3.
Proof of Theorem 4.6.1.1. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The variance
of the least-squares estimator is $(X'X)^{-1}\sigma^2$ and that of $A'y$ is $A'A\sigma^2$. It is sufficient to show
that the difference $A'A - (X'X)^{-1}$ is positive semi-definite. Set $C = A - X(X'X)^{-1}$. Note that
$X'C = 0$. Then we calculate that
$$A'A - (X'X)^{-1} = \left(C + X(X'X)^{-1}\right)'\left(C + X(X'X)^{-1}\right) - (X'X)^{-1} = C'C + C'X(X'X)^{-1} + (X'X)^{-1}X'C + (X'X)^{-1}X'X(X'X)^{-1} - (X'X)^{-1} = C'C,$$
which is positive semi-definite, as required. $\blacksquare$
4.7 Residuals
What are some properties of the residuals $\hat{e}_i = y_i - x_i'\hat{\beta}$ and prediction errors $\tilde{e}_i = y_i - x_i'\hat{\beta}_{(-i)}$,
at least in the context of the linear regression model?
Recall from (3.26) that we can write the residuals in vector notation as
$$\hat{e} = Me$$
where $M = I_n - X(X'X)^{-1}X'$ is the orthogonal projection matrix. Using the properties of
conditional expectation,
$$E(\hat{e} \mid X) = E(Me \mid X) = ME(e \mid X) = 0$$
and
$$\mathrm{var}(\hat{e} \mid X) = \mathrm{var}(Me \mid X) = M\,\mathrm{var}(e \mid X)\,M = MDM, \qquad (4.14)$$
where $D$ is defined in (4.10).
We can simplify this expression under the assumption of conditional homoskedasticity $E(e_i^2 \mid x_i) = \sigma^2$, in which case (4.14) simplifies to
$$\mathrm{var}(\hat{e} \mid X) = M\sigma^2. \qquad (4.15)$$
In particular, for a single observation $i$, we can find the (conditional) variance of $\hat{e}_i$ by taking the
$i$'th diagonal element of (4.15). Since the $i$'th diagonal element of $M$ is $1 - h_{ii}$ as defined in (3.21),
we obtain
$$\mathrm{var}(\hat{e}_i \mid X) = E\left(\hat{e}_i^2 \mid X\right) = (1 - h_{ii})\sigma^2. \qquad (4.16)$$
As this variance is a function of hii and hence xi , the residuals êi are heteroskedastic even if the
errors ei are homoskedastic.
Similarly, recall from (3.40) that the prediction errors $\tilde{e}_i = (1-h_{ii})^{-1}\hat{e}_i$ can be written in
vector notation as $\tilde{e} = M^*\hat{e}$, where $M^*$ is a diagonal matrix with $i$'th diagonal element $(1-h_{ii})^{-1}$.
Thus $\tilde{e} = M^*Me$. We can calculate that
$$E(\tilde{e} \mid X) = M^*ME(e \mid X) = 0$$
and
$$\mathrm{var}(\tilde{e} \mid X) = M^*M\,\mathrm{var}(e \mid X)\,MM^* = M^*MDMM^*,$$
which simplifies under homoskedasticity to
$$\mathrm{var}(\tilde{e} \mid X) = M^*MMM^*\sigma^2 = M^*MM^*\sigma^2.$$
A residual with constant conditional variance can be obtained by rescaling. The standardized
residuals are
$$\bar{e}_i = (1 - h_{ii})^{-1/2}\hat{e}_i, \qquad (4.17)$$
and in vector notation
$$\bar{e} = (\bar{e}_1, \ldots, \bar{e}_n)' = M^{*1/2}Me.$$
From our above calculations, under homoskedasticity,
$$\mathrm{var}(\bar{e} \mid X) = M^{*1/2}MM^{*1/2}\sigma^2$$
and
$$\mathrm{var}(\bar{e}_i \mid X) = E\left(\bar{e}_i^2 \mid X\right) = \sigma^2, \qquad (4.18)$$
and thus these standardized residuals have the same bias and variance as the original errors when
the latter are homoskedastic.
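These three residual variants are easy to compute directly from the leverage values $h_{ii}$. The following is a minimal R sketch using simulated data; the design and coefficient values are arbitrary illustrations, not the text's data.

set.seed(1)
n <- 100
x <- cbind(1, rnorm(n), rnorm(n))       # regressor matrix with an intercept
y <- x %*% c(1, 2, -1) + rnorm(n)

xx   <- solve(t(x) %*% x)
beta <- xx %*% t(x) %*% y
ehat <- y - x %*% beta                  # least-squares residuals
h    <- rowSums((x %*% xx) * x)         # leverage values h_ii = x_i'(X'X)^{-1}x_i
etilde <- ehat / (1 - h)                # prediction errors (leave-one-out residuals)
ebar   <- ehat / sqrt(1 - h)            # standardized residuals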
In the linear regression model we can calculate the mean of σ̂ 2 . From (3.26), the properties of
projection matrices and the trace operator, observe that
$$\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\mathrm{tr}\left(e'Me\right) = \frac{1}{n}\mathrm{tr}\left(Mee'\right).$$
Then
$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(E\left(Mee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(ME\left(ee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(MD\right). \qquad (4.19)$$
Adding the assumption of conditional homoskedasticity $E(e_i^2 \mid x_i) = \sigma^2$, so that $D = I_n\sigma^2$, then
(4.19) simplifies to
$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\mathrm{tr}(M)\sigma^2 = \sigma^2\left(\frac{n-k}{n}\right),$$
the final equality by (3.24). This calculation shows that σ̂ 2 is biased towards zero. The order of
the bias depends on k/n, the ratio of the number of estimated coefficients to the sample size.
Another way to see this is to use (4.16). Note that
$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n} E\left(\hat{e}_i^2 \mid X\right) = \frac{1}{n}\sum_{i=1}^{n}(1-h_{ii})\sigma^2 = \sigma^2\left(\frac{n-k}{n}\right), \qquad (4.20)$$
again showing that $\hat{\sigma}^2$ is biased towards zero. Since the bias takes a scale form, a natural unbiased estimator rescales by $n/(n-k)$, giving $s^2 = (n-k)^{-1}\sum_{i=1}^{n}\hat{e}_i^2$. An alternative estimator uses the standardized residuals, $\bar{\sigma}^2 = n^{-1}\sum_{i=1}^{n}\bar{e}_i^2 = n^{-1}\sum_{i=1}^{n}(1-h_{ii})^{-1}\hat{e}_i^2$; since $E\left(\bar{e}_i^2 \mid X\right) = \sigma^2$ by (4.18), it follows
that $\bar{\sigma}^2$ is unbiased for $\sigma^2$ (in the homoskedastic linear regression model).
When k/n is small (typically, this occurs when n is large), the estimators σ̂ 2 , s2 and σ̄ 2 are
likely to be close. However, if not then s2 and σ̄ 2 are generally preferred to σ̂ 2 . Consequently it is
best to use one of the bias-corrected variance estimators in applications.
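A brief R sketch comparing the three estimators on simulated homoskedastic data (the sample size and design are arbitrary assumptions) illustrates that $\hat{\sigma}^2$ is shrunk by the factor $(n-k)/n$ relative to the bias-corrected versions.

set.seed(2)
n <- 50; k <- 3
x <- cbind(1, matrix(rnorm(n * (k - 1)), n))
y <- x %*% rep(1, k) + rnorm(n)          # true sigma^2 = 1

xx   <- solve(t(x) %*% x)
ehat <- y - x %*% (xx %*% t(x) %*% y)
h    <- rowSums((x %*% xx) * x)

sig2.hat <- sum(ehat^2) / n              # biased towards zero by the factor (n-k)/n
s2       <- sum(ehat^2) / (n - k)        # unbiased under homoskedasticity
sig2.bar <- mean(ehat^2 / (1 - h))       # also unbiased under homoskedasticity
c(sig2.hat, s2, sig2.bar)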
The out-of-sample mean-squared forecast error is
$$MSFE_n = E\,\tilde{e}_{n+1}^2,$$
where $\tilde{e}_{n+1}$ is the error made when forecasting an out-of-sample observation $y_{n+1}$ using the estimated coefficients. In the linear regression model, $\tilde{e}_{n+1} = e_{n+1} - x_{n+1}'(\hat{\beta} - \beta)$, so
$$MSFE_n = Ee_{n+1}^2 - 2E\left(e_{n+1}x_{n+1}'(\hat{\beta}-\beta)\right) + E\left(x_{n+1}'(\hat{\beta}-\beta)(\hat{\beta}-\beta)'x_{n+1}\right). \qquad (4.25)$$
The first term in (4.25) is $\sigma^2$. The second term in (4.25) is zero since $e_{n+1}x_{n+1}'$ is independent
of $\hat{\beta} - \beta$ and both are mean zero. The third term in (4.25) is
$$E\left(x_{n+1}'(\hat{\beta}-\beta)(\hat{\beta}-\beta)'x_{n+1}\right) = \mathrm{tr}\left(E\left(x_{n+1}x_{n+1}'\right)E\left((\hat{\beta}-\beta)(\hat{\beta}-\beta)'\right)\right) = \mathrm{tr}\left(E\left(x_{n+1}x_{n+1}'\right)E\,V_{\hat{\beta}}\right) = E\,\mathrm{tr}\left(x_{n+1}x_{n+1}'V_{\hat{\beta}}\right) = E\left(x_{n+1}'V_{\hat{\beta}}x_{n+1}\right), \qquad (4.26)$$
where we use the fact that $x_{n+1}$ is independent of $\hat{\beta}$ and the fact that $V_{\hat{\beta}} = E\left((\hat{\beta}-\beta)(\hat{\beta}-\beta)' \mid X\right)$.
Thus
$$MSFE_n = \sigma^2 + E\left(x_{n+1}'V_{\hat{\beta}}x_{n+1}\right).$$
A simple estimator for the MSFE is obtained by averaging the squared prediction errors (3.41)
$$\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\tilde{e}_i^2.$$
Indeed, we can calculate that
$$E\tilde{\sigma}^2 = E\tilde{e}_i^2 = E\left(e_i - x_i'\left(\hat{\beta}_{(-i)} - \beta\right)\right)^2 = \sigma^2 + E\left(x_i'\left(\hat{\beta}_{(-i)}-\beta\right)\left(\hat{\beta}_{(-i)}-\beta\right)'x_i\right).$$
This is the MSFE based on a sample of size n − 1, rather than size n. The difference arises because
the in-sample prediction errors ẽi for i ≤ n are calculated using an effective sample size of n−1, while
the out-of sample prediction error ẽn+1 is calculated from a sample with the full n observations.
Unless n is very small we should expect M SF En−1 (the MSFE based on n − 1 observations) to
be close to M SF En (the MSFE based on n observations). Thus σ̃ 2 is a reasonable estimator for
M SF En .
In summary,
$$E\tilde{\sigma}^2 = MSFE_{n-1}.$$

We next turn to estimation of the covariance matrix $V_{\hat{\beta}}$. Under conditional homoskedasticity it takes the form $V_{\hat{\beta}} = (X'X)^{-1}\sigma^2$,
which is known up to the unknown scale $\sigma^2$. In Section 4.8 we discussed three estimators of $\sigma^2$.
The most commonly used choice is $s^2$, leading to the classic covariance matrix estimator
$$\hat{V}_{\hat{\beta}}^0 = (X'X)^{-1}s^2. \qquad (4.27)$$
Since $s^2$ is conditionally unbiased for $\sigma^2$, it is simple to calculate that $\hat{V}_{\hat{\beta}}^0$ is conditionally
unbiased for $V_{\hat{\beta}}$ under the assumption of homoskedasticity:
$$E\left(\hat{V}_{\hat{\beta}}^0 \mid X\right) = (X'X)^{-1}E\left(s^2 \mid X\right) = (X'X)^{-1}\sigma^2 = V_{\hat{\beta}}.$$
This estimator was the dominant covariance matrix estimator in applied econometrics for many
years, and is still the default method in most regression packages.
If the estimator (4.27) is used, but the regression error is heteroskedastic, it is possible for
$\hat{V}_{\hat{\beta}}^0$ to be quite biased for the correct covariance matrix $V_{\hat{\beta}} = (X'X)^{-1}(X'DX)(X'X)^{-1}$. For
example, suppose $k = 1$ and $\sigma_i^2 = x_i^2$ with $Ex_i = 0$. The ratio of the true variance of the least-squares
estimator to the expectation of the variance estimator is
$$\frac{V_{\hat{\beta}}}{E\left(\hat{V}_{\hat{\beta}}^0 \mid X\right)} = \frac{\sum_{i=1}^{n} x_i^4}{\sigma^2\sum_{i=1}^{n} x_i^2} \simeq \frac{Ex_i^4}{\left(Ex_i^2\right)^2} = \kappa.$$
(Notice that we use the fact that $\sigma_i^2 = x_i^2$ implies $\sigma^2 = E\sigma_i^2 = Ex_i^2$.) The constant $\kappa$ is the
standardized fourth moment (or kurtosis) of the regressor $x_i$, and can be any number greater than
one. For example, if $x_i \sim N(0, \sigma^2)$ then $\kappa = 3$, so the true variance $V_{\hat{\beta}}$ is three times larger
than the expected homoskedastic estimator $\hat{V}_{\hat{\beta}}^0$. But $\kappa$ can be much larger. Suppose, for example,
that $x_i \sim \chi_1^2 - 1$. In this case $\kappa = 15$, so that the true variance $V_{\hat{\beta}}$ is fifteen times larger than
the expected homoskedastic estimator $\hat{V}_{\hat{\beta}}^0$. While this is an extreme and constructed example,
the point is that the classic covariance matrix estimator (4.27) may be quite biased when the
homoskedasticity assumption fails.
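A small simulation can make this bias concrete. The R sketch below is illustrative only (the sample size, replication count, and setup are arbitrary choices): it generates data with $\sigma_i^2 = x_i^2$ and $x_i \sim N(0,1)$, for which $\kappa = 3$, and compares the sampling variance of the slope with the average homoskedastic variance estimate.

set.seed(3)
n <- 200; reps <- 5000
b.hat <- v.homo <- numeric(reps)
for (r in 1:reps) {
  x <- rnorm(n)                    # regressor with kurtosis kappa = 3
  e <- x * rnorm(n)                # error with conditional variance sigma_i^2 = x_i^2
  y <- e                           # true slope beta = 0, no intercept
  b  <- sum(x * y) / sum(x^2)      # least-squares slope
  eh <- y - x * b
  b.hat[r]  <- b
  v.homo[r] <- (sum(eh^2) / (n - 1)) / sum(x^2)   # homoskedastic formula (4.27), k = 1
}
var(b.hat) / mean(v.homo)          # roughly kappa = 3: the classic estimator is biased downward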
To estimate $V_{\hat{\beta}} = (X'X)^{-1}(X'DX)(X'X)^{-1}$ we need an estimator of the middle matrix $X'DX = \sum_{i=1}^{n} x_ix_i'\sigma_i^2$. If the squared errors $e_i^2$ were observable, a natural (but infeasible) choice would be
$$\hat{V}_{\hat{\beta}}^{ideal} = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'e_i^2\right)(X'X)^{-1}.$$
Indeed,
$$E\left(\hat{V}_{\hat{\beta}}^{ideal} \mid X\right) = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'E\left(e_i^2 \mid X\right)\right)(X'X)^{-1} = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'\sigma_i^2\right)(X'X)^{-1} = (X'X)^{-1}\left(X'DX\right)(X'X)^{-1} = V_{\hat{\beta}},$$
verifying that $\hat{V}_{\hat{\beta}}^{ideal}$ is unbiased for $V_{\hat{\beta}}$.
Since the errors $e_i^2$ are unobserved, $\hat{V}_{\hat{\beta}}^{ideal}$ is not a feasible estimator. However, we can replace
the errors $e_i$ with the least-squares residuals $\hat{e}_i$. Making this substitution we obtain the estimator
$$\hat{V}_{\hat{\beta}}^W = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'\hat{e}_i^2\right)(X'X)^{-1}. \qquad (4.28)$$
We know, however, that $\hat{e}_i^2$ is biased towards zero. To estimate the variance $\sigma^2$ the unbiased
estimator $s^2$ scales the moment estimator $\hat{\sigma}^2$ by $n/(n-k)$. Making the same adjustment we obtain
the estimator
$$\hat{V}_{\hat{\beta}} = \left(\frac{n}{n-k}\right)(X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'\hat{e}_i^2\right)(X'X)^{-1}. \qquad (4.29)$$
While the scaling by n/(n − k) is ad hoc, it is recommended over the unscaled estimator (4.28).
Alternatively, we could use the prediction errors ẽi or the standardized residuals ēi , yielding the
estimators
$$\tilde{V}_{\hat{\beta}} = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'\tilde{e}_i^2\right)(X'X)^{-1} = (X'X)^{-1}\left(\sum_{i=1}^{n}(1-h_{ii})^{-2}x_ix_i'\hat{e}_i^2\right)(X'X)^{-1} \qquad (4.30)$$
and
$$\overline{V}_{\hat{\beta}} = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'\bar{e}_i^2\right)(X'X)^{-1} = (X'X)^{-1}\left(\sum_{i=1}^{n}(1-h_{ii})^{-1}x_ix_i'\hat{e}_i^2\right)(X'X)^{-1}. \qquad (4.31)$$
The four estimators $\hat{V}_{\hat{\beta}}^W$, $\hat{V}_{\hat{\beta}}$, $\tilde{V}_{\hat{\beta}}$, and $\overline{V}_{\hat{\beta}}$ are collectively called robust, heteroskedasticity-consistent, or heteroskedasticity-robust covariance matrix estimators. The estimator $\hat{V}_{\hat{\beta}}^W$ was
first developed by Eicker (1963) and introduced to econometrics by White (1980), and is sometimes
called the Eicker-White or White covariance matrix estimator. The scaled estimator $\hat{V}_{\hat{\beta}}$ is the
default robust covariance matrix estimator implemented in Stata. The estimator $\tilde{V}_{\hat{\beta}}$ was introduced
by Andrews (1991) based on the principle of leave-one-out cross-validation (and is implemented
using the vce(hc3) option in Stata). The estimator $\overline{V}_{\hat{\beta}}$ was introduced by Horn, Horn and Duncan
(1975) (and is implemented using the vce(hc2) option in Stata).
Since $(1-h_{ii})^{-2} > (1-h_{ii})^{-1} > 1$ it is straightforward to show that
$$\hat{V}_{\hat{\beta}}^W < \overline{V}_{\hat{\beta}} < \tilde{V}_{\hat{\beta}} \qquad (4.32)$$
(see Exercise 4.7). The inequality $A < B$ when applied to matrices means that the matrix $B - A$
is positive definite.
In general, the bias of the covariance matrix estimators is quite complicated, but it greatly
simplifies under the assumption of homoskedasticity (4.3). For example, using (4.16),
$$E\left(\hat{V}_{\hat{\beta}}^W \mid X\right) = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'E\left(\hat{e}_i^2 \mid X\right)\right)(X'X)^{-1} = (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'(1-h_{ii})\sigma^2\right)(X'X)^{-1} = (X'X)^{-1}\sigma^2 - (X'X)^{-1}\left(\sum_{i=1}^{n} x_ix_i'h_{ii}\right)(X'X)^{-1}\sigma^2 \leq (X'X)^{-1}\sigma^2 = V_{\hat{\beta}}.$$
This calculation shows that $\hat{V}_{\hat{\beta}}^W$ is biased towards zero.
Similarly, (again under homoskedasticity) we can calculate that $\tilde{V}_{\hat{\beta}}$ is biased away from zero,
specifically
$$E\left(\tilde{V}_{\hat{\beta}} \mid X\right) \geq (X'X)^{-1}\sigma^2, \qquad (4.33)$$
while the estimator $\overline{V}_{\hat{\beta}}$ is unbiased,
$$E\left(\overline{V}_{\hat{\beta}} \mid X\right) = (X'X)^{-1}\sigma^2. \qquad (4.34)$$
Halbert L. White
When $\beta$ is a vector with estimate $\hat{\beta}$ and covariance matrix estimate $\hat{V}_{\hat{\beta}}$, standard errors for
individual elements are the square roots of the diagonal elements of $\hat{V}_{\hat{\beta}}$. That is,
$$s(\hat{\beta}_j) = \sqrt{\hat{V}_{\hat{\beta}_j}} = \sqrt{\left[\hat{V}_{\hat{\beta}}\right]_{jj}}.$$
As we discussed in the previous section, there are multiple possible covariance matrix estimators,
so standard errors are not unique. It is therefore important to understand what formula and method
is used by an author when studying their work. It is also important to understand that a particular
standard error may be relevant under one set of model assumptions, but not under another set of
assumptions.
To illustrate, we return to the log wage regression (3.11) of Section 3.7. We calculate that
$s^2 = 0.160$. Therefore the homoskedastic covariance matrix estimate is
$$\hat{V}_{\hat{\beta}}^0 = \begin{pmatrix}5010 & 314\\ 314 & 20\end{pmatrix}^{-1}0.160 = \begin{pmatrix}0.002 & -0.031\\ -0.031 & 0.499\end{pmatrix}.$$
We also calculate that
$$\sum_{i=1}^{n}(1-h_{ii})^{-1}x_ix_i'\hat{e}_i^2 = \begin{pmatrix}763.26 & 48.51\\ 48.51 & 3.11\end{pmatrix}.$$
Therefore the Horn-Horn-Duncan covariance matrix estimate is
$$\overline{V}_{\hat{\beta}} = \begin{pmatrix}5010 & 314\\ 314 & 20\end{pmatrix}^{-1}\begin{pmatrix}763.26 & 48.51\\ 48.51 & 3.11\end{pmatrix}\begin{pmatrix}5010 & 314\\ 314 & 20\end{pmatrix}^{-1} = \begin{pmatrix}0.001 & -0.015\\ -0.015 & 0.243\end{pmatrix}. \qquad (4.35)$$
The standard errors are the square roots of the diagonal elements of these matrices. A conventional
format to write the estimated equation with standard errors is
$$\widehat{\log(Wage)} = \underset{(0.031)}{0.155}\;Education + \underset{(0.493)}{0.698}.$$
Alternatively, standard errors could be calculated using the other formulae. We report the
different standard errors in the following table.
Education Intercept
Homoskedastic (4.27) 0.045 0.707
White (4.28) 0.029 0.461
Scaled White (4.29) 0.030 0.486
Andrews (4.30) 0.033 0.527
Horn-Horn-Duncan (4.31) 0.031 0.493
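The matrix arithmetic behind these numbers can be checked directly from the reported cross-product matrices. The R sketch below is a verification exercise only; because the displayed inner matrix is rounded to two decimals, the robust standard errors it produces agree with the table only approximately.

xx   <- matrix(c(5010, 314, 314, 20), 2, 2)           # X'X as reported in the text
meat <- matrix(c(763.26, 48.51, 48.51, 3.11), 2, 2)   # reported sum of (1-h_ii)^{-1} x_i x_i' ehat_i^2
s2   <- 0.160

v0   <- solve(xx) * s2                     # homoskedastic formula (4.27)
vbar <- solve(xx) %*% meat %*% solve(xx)   # Horn-Horn-Duncan sandwich (4.31)
sqrt(diag(v0))    # 0.045, 0.707 (matches the table)
sqrt(diag(vbar))  # close to 0.03 and 0.5; differs slightly from the table because
                  # the inner matrix above is rounded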
The homoskedastic standard errors are noticeably different (larger, in this case) from the others,
but the four robust standard errors are quite close to one another.
4.13 Computation
We illustrate methods to compute standard errors for equation (3.12) extending the code of
Section 3.20.
In GAUSS (extending the Section 3.20 program, which defines y, x, e and leverage):

n=rows(y);
k=cols(x);
a=n/(n-k);
sig2=(e'e)/(n-k);
u1=x.*e;
u2=x.*(e./(1-leverage));
u3=x.*(e./sqrt(1-leverage));
xx=inv(x'x);
v0=xx*sig2;
v1=xx*(u1'u1)*xx;
v1a=a*xx*(u1'u1)*xx;
v2=xx*(u2'u2)*xx;
v3=xx*(u3'u3)*xx;
s0=sqrt(diag(v0));   @ Homoskedastic formula @
s1=sqrt(diag(v1));   @ White formula @
s1a=sqrt(diag(v1a)); @ Scaled White formula @
s2=sqrt(diag(v2));   @ Andrews formula @
s3=sqrt(diag(v3));   @ Horn-Horn-Duncan formula @
In R (extending the Section 3.20 program):

n <- nrow(y)
k <- ncol(x)
a <- n/(n-k)
sig2 <- as.numeric((t(e) %*% e)/(n-k))   # scalar, so it can multiply a matrix
u1 <- x*(e%*%matrix(1,1,k))
u2 <- x*((e/(1-leverage))%*%matrix(1,1,k))
u3 <- x*((e/sqrt(1-leverage))%*%matrix(1,1,k))
xx <- solve(t(x)%*%x)                    # must be computed before it is used below
v0 <- xx*sig2
v1 <- xx %*% (t(u1)%*%u1) %*% xx
v1a <- a * xx %*% (t(u1)%*%u1) %*% xx
v2 <- xx %*% (t(u2)%*%u2) %*% xx
v3 <- xx %*% (t(u3)%*%u3) %*% xx
s0 <- sqrt(diag(v0))    # Homoskedastic formula
s1 <- sqrt(diag(v1))    # White formula
s1a <- sqrt(diag(v1a))  # Scaled White formula
s2 <- sqrt(diag(v2))    # Andrews formula
s3 <- sqrt(diag(v3))    # Horn-Horn-Duncan formula
In MATLAB (extending the Section 3.20 program):

[n,k]=size(x);
a=n/(n-k);
sig2=(e'*e)/(n-k);
u1=x.*(e*ones(1,k));
u2=x.*((e./(1-leverage))*ones(1,k));
u3=x.*((e./sqrt(1-leverage))*ones(1,k));
xx=inv(x'*x);
v0=xx*sig2;
v1=xx*(u1'*u1)*xx;
v1a=a*xx*(u1'*u1)*xx;
v2=xx*(u2'*u2)*xx;
v3=xx*(u3'*u3)*xx;
s0=sqrt(diag(v0));   % Homoskedastic formula
s1=sqrt(diag(v1));   % White formula
s1a=sqrt(diag(v1a)); % Scaled White formula
s2=sqrt(diag(v2));   % Andrews formula
s3=sqrt(diag(v3));   % Horn-Horn-Duncan formula
A commonly reported measure of fit is the regression $R^2$,
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\hat{e}_i^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\hat{\sigma}^2}{\hat{\sigma}_y^2},$$
where $\hat{\sigma}_y^2 = n^{-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$. $R^2$ can be viewed as an estimator of the population parameter
$$\rho^2 = \frac{\mathrm{var}(x_i'\beta)}{\mathrm{var}(y_i)} = 1 - \frac{\sigma^2}{\sigma_y^2}.$$
However, $\hat{\sigma}^2$ and $\hat{\sigma}_y^2$ are biased estimators. Theil (1961) proposed replacing these by the unbiased
versions $s^2$ and $\tilde{\sigma}_y^2 = (n-1)^{-1}\sum_{i=1}^{n}(y_i - \bar{y})^2$, yielding what is known as R-bar-squared or
adjusted R-squared:
$$\overline{R}^2 = 1 - \frac{s^2}{\tilde{\sigma}_y^2} = 1 - \frac{(n-1)\sum_{i=1}^{n}\hat{e}_i^2}{(n-k)\sum_{i=1}^{n}(y_i - \bar{y})^2}.$$
While $\overline{R}^2$ is an improvement on $R^2$, a much better improvement is
$$\tilde{R}^2 = 1 - \frac{\sum_{i=1}^{n}\tilde{e}_i^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\tilde{\sigma}^2}{\hat{\sigma}_y^2}$$
where $\tilde{e}_i$ are the prediction errors (3.38) and $\tilde{\sigma}^2$ is the MSPE from (3.41). As described in Section
4.9, $\tilde{\sigma}^2$ is a good estimator of the out-of-sample mean-squared forecast error, so $\tilde{R}^2$ is a good
estimator of the percentage of the forecast variance which is explained by the regression forecast.
In this sense, $\tilde{R}^2$ is a good measure of fit.
One problem with $R^2$, which is partially corrected by $\overline{R}^2$ and fully corrected by $\tilde{R}^2$, is that $R^2$
necessarily increases when regressors are added to a regression model. This occurs because $R^2$ is a
negative function of the sum of squared residuals, which cannot increase when a regressor is added.
In contrast, $\overline{R}^2$ and $\tilde{R}^2$ are non-monotonic in the number of regressors. $\tilde{R}^2$ can even be negative,
which occurs when an estimated model predicts worse than a constant-only model.
In the statistical literature the MSPE $\tilde{\sigma}^2$ is known as the leave-one-out cross-validation
criterion, and is popular for model comparison and selection, especially in high-dimensional (nonparametric)
contexts. It is equivalent to use $\tilde{R}^2$ or $\tilde{\sigma}^2$ to compare and select models. Models with
high $\tilde{R}^2$ (or low $\tilde{\sigma}^2$) are better models in terms of expected out-of-sample squared error. In contrast,
$R^2$ cannot be used for model selection, as it necessarily increases when regressors are added to a
regression model. $\overline{R}^2$ is also an inappropriate choice for model selection (it tends to select models
with too many parameters), though a justification of this assertion requires a study of the theory
of model selection. Unfortunately, $\overline{R}^2$ is routinely used by some economists, possibly as a hold-over
from previous generations.
In summary, it is recommended to calculate and report $\tilde{R}^2$ and/or $\tilde{\sigma}^2$ in regression analysis,
and omit $R^2$ and $\overline{R}^2$.
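The three measures are straightforward to compute from the residuals and leverage values. The following R sketch uses simulated data; the design is an arbitrary illustration.

set.seed(4)
n <- 100
x <- cbind(1, rnorm(n), rnorm(n)); k <- ncol(x)
y <- x %*% c(1, 0.5, 0) + rnorm(n)

xx   <- solve(t(x) %*% x)
ehat <- y - x %*% (xx %*% t(x) %*% y)
h    <- rowSums((x %*% xx) * x)
etilde <- ehat / (1 - h)               # prediction errors

tss <- sum((y - mean(y))^2)
R2     <- 1 - sum(ehat^2) / tss
R2.bar <- 1 - (sum(ehat^2) / (n - k)) / (tss / (n - 1))
R2.til <- 1 - sum(etilde^2) / tss      # based on the leave-one-out MSPE
c(R2, R2.bar, R2.til)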
Henri Theil

Henri Theil (1924-2000) of Holland invented $\overline{R}^2$ and two-stage least squares,
both of which are routinely seen in applied econometrics. He also wrote an
early influential advanced textbook on econometrics (Theil, 1971).
male union member, married female¹, married male, formerly married female², formerly married
male, Hispanic, black, American Indian, Asian, and mixed race³. The available sample size is 46,943,
so the parameter estimates are quite precise; they are reported in Table 4.1. For standard errors we use
the unbiased Horn-Horn-Duncan formula.
Table 4.1 displays the parameter estimates in a standard tabular format. The table clearly
states the estimation method (OLS), the dependent variable (log(Wage)), and the regressors are
clearly labeled. Both parameter estimates and standard errors are reported for all coefficients. In
addition to the coefficient estimates, the table also reports the estimated error standard deviation
and the sample size. These are useful summary measures of fit which aid readers.
Table 4.1
OLS Estimates of Linear Equation for Log(Wage)
β̂ s(β̂)
Education 0.117 0.001
Experience 0.033 0.001
Experience2 /100 -0.056 0.002
Female -0.098 0.011
Female Union Member 0.023 0.020
Male Union Member 0.095 0.020
Married Female 0.016 0.010
Married Male 0.211 0.010
Formerly Married Female -0.006 0.012
Formerly Married Male 0.083 0.015
Hispanic -0.108 0.008
Black -0.096 0.008
American Indian -0.137 0.027
Asian -0.038 0.013
Mixed Race -0.041 0.021
Intercept 0.909 0.021
σ̂ 0.565
Sample Size 46,943
Note: Standard errors are heteroskedasticity-consistent (Horn-Horn-Duncan formula)
As a general rule, it is advisable to always report standard errors along with parameter estimates.
This allows readers to assess the precision of the parameter estimates, and as we will discuss in
later chapters, form confidence intervals and t-tests for individual coefficients if desired.
The results in Table 4.1 confirm our earlier findings that the return to a year of education is
approximately 12%, the return to experience is concave, that single women earn approximately
10% less than single men, and blacks earn about 10% less than whites. In addition, we see that
Hispanics earn about 11% less than whites, American Indians 14% less, and Asians and mixed-race
individuals about 4% less. We also see there are wage premiums for men who are members of a labor union
(about 10%), married (about 21%) or formerly married (about 8%), but no similar premiums are
apparent for women.
1
Defining “married” as marital code 1, 2, or 3.
2
Defining “formerly married” as marital code 4, 5, or 6.
3
Race code 6 or higher.
4.16 Multicollinearity
If $X'X$ is singular, then $(X'X)^{-1}$ and $\hat{\beta}$ are not defined. This situation is called strict
multicollinearity, as the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that
$X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically
related. For example, if $X$ includes both the logs of two prices and the log of the relative price,
$\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$, then $X'X$ will necessarily be singular. When this happens, the
applied researcher quickly discovers the error as the statistical software will be unable to construct
$(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric
practice.
The more relevant situation is near multicollinearity, which is often called “multicollinearity”
for brevity. This is the situation when the X 0 X matrix is near singular, when the columns of X are
close to linearly dependent. This definition is not precise, because we have not said what it means
for a matrix to be “near singular”. This is one difficulty with the definition and interpretation of
multicollinearity.
One potential complication of near singularity of matrices is that the numerical reliability of
the calculations may be reduced. In practice this is rarely an important concern, except when the
number of regressors is very large.
A more relevant implication of near multicollinearity is that individual coefficient estimates will
be imprecise. We can see this most simply in a homoskedastic linear regression model with two
regressors
$$y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,$$
and
$$\frac{1}{n}X'X = \begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}.$$
In this case
$$\mathrm{var}\left(\hat{\beta} \mid X\right) = \frac{\sigma^2}{n}\begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}^{-1} = \frac{\sigma^2}{n(1-\rho^2)}\begin{pmatrix}1 & -\rho\\ -\rho & 1\end{pmatrix}.$$
The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We
can see the effect of collinearity on precision by observing that the variance of a coefficient estimate,
$\sigma^2\left[n(1-\rho^2)\right]^{-1}$, approaches infinity as $\rho$ approaches 1. Thus the more "collinear" are the
regressors, the worse the precision of the individual coefficient estimates.
What is happening is that when the regressors are highly dependent, it is statistically difficult to
disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of the individual estimates
is reduced. The imprecision, however, will be reflected by large standard errors, so there is no
distortion in inference.
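A quick numerical check of the formula $\sigma^2[n(1-\rho^2)]^{-1}$ shows how rapidly the standard error grows as $\rho$ approaches 1; the R sketch below uses illustrative values of $n$ and $\sigma$.

# Theoretical standard error of beta1.hat as the regressor correlation rho increases
se.beta1 <- function(rho, n = 100, sigma = 1) {
  sigma / sqrt(n * (1 - rho^2))   # from var = sigma^2 / (n (1 - rho^2))
}
sapply(c(0, 0.5, 0.9, 0.99, 0.999), se.beta1)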
Some earlier textbooks overemphasized a concern about multicollinearity. A very amusing
parody of these texts appeared in Chapter 23.3 of Goldberger's A Course in Econometrics (1991),
which is reprinted below. To understand his basic point, you should notice how the estimation
variance $\sigma^2\left[n(1-\rho^2)\right]^{-1}$ depends equally and symmetrically on the correlation $\rho$ and the sample
size $n$.
Micronumerosity
Arthur S. Goldberger
A Course in Econometrics (1991), Chapter 23.3
1. Micronumerosity
The extreme case, “exact micronumerosity,” arises when n = 0, in which case
the sample estimate of μ is not unique. (Technically, there is a violation of
the rank condition n > 0 : the matrix 0 is singular.) The extreme case is
easy enough to recognize. “Near micronumerosity” is more subtle, and yet
very serious. It arises when the rank condition n > 0 is barely satisfied. Near
micronumerosity is very prevalent in empirical economics.
2. Consequences of micronumerosity
The consequences of micronumerosity are serious. Precision of estimation is
reduced. There are two aspects of this reduction: estimates of μ may have
large errors, and not only that, but Vȳ will be large.
Investigators will sometimes be led to accept the hypothesis μ = 0 because
ȳ/σ̂ȳ is small, even though the true situation may be not that μ = 0 but
simply that the sample data have not enabled us to pick μ up.
The estimate of μ will be very sensitive to sample data, and the addition of
a few more observations can sometimes produce drastic shifts in the sample
mean.
The true μ may be sufficiently large for the null hypothesis μ = 0 to be
rejected, even though Vȳ = σ 2 /n is large because of micronumerosity. But if
the true μ is small (although nonzero) the hypothesis μ = 0 may mistakenly
be accepted.
• $\dfrac{n\hat{\sigma}^2}{\sigma^2} = \dfrac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k}$

• $\dfrac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} \sim t_{n-k}$
These are the exact finite-sample distributions of the least-squares estimator and variance esti-
mators, and are the basis for traditional inference in linear regression.
While elegant, the difficulty in applying Theorem 4.17.1 is that the normality assumption is too
restrictive to be empirically plausible, and therefore inference based on Theorem 4.17.1 has no guarantee
of accuracy. We develop an alternative inference theory based on large sample (asymptotic)
approximations in the following chapter.
William Gosset
Exercises
Exercise 4.1 Explain the difference between $\frac{1}{n}\sum_{i=1}^{n} x_ix_i'$ and $E(x_ix_i')$.
Exercise 4.2 True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $E(e_i \mid x_i) = 0$, and $\hat{e}_i$ is the OLS residual
from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^{n} x_i^2\hat{e}_i = 0$.
Exercise 4.4 In the model $y = X\beta + e$ with $E(e \mid X) = 0$ and $\mathrm{var}(e \mid X) = \sigma^2\Omega$, where $\Omega$ is a known $n \times n$ matrix, consider the generalized least squares estimator
$$\tilde{\beta} = \left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}y.$$
The residual vector is $\hat{e} = y - X\tilde{\beta}$, and an estimate of $\sigma^2$ is
$$s^2 = \frac{1}{n-k}\hat{e}'\Omega^{-1}\hat{e}.$$

(a) Find $E\left(\tilde{\beta} \mid X\right)$.

(b) Find $\mathrm{var}\left(\tilde{\beta} \mid X\right)$.

(c) Prove that $\hat{e} = M_1e$, where $M_1 = I - X\left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}$.

(d) Prove that $M_1'\Omega^{-1}M_1 = \Omega^{-1} - \Omega^{-1}X\left(X'\Omega^{-1}X\right)^{-1}X'\Omega^{-1}$.

(e) Find $E\left(s^2 \mid X\right)$.
Exercise 4.5 Let $(y_i, x_i)$ be a random sample with $E(y \mid X) = X\beta$. Consider the Weighted
Least Squares (WLS) estimator of $\beta$,
$$\tilde{\beta} = \left(X'WX\right)^{-1}X'Wy,$$
where $W$ is a given $n \times n$ weight matrix.

(a) In which contexts would $\tilde{\beta}$ be a good estimator?

(b) Using your intuition, in which situations would you expect $\tilde{\beta}$ to perform better than OLS?
Exercise 4.8 Show (4.33) and (4.34) in the homoskedastic regression model.
Exercise 4.9 Let $\mu = E(y_i)$, $\sigma^2 = E(y_i - \mu)^2$ and $\mu_3 = E(y_i - \mu)^3$, and consider the sample mean
$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$. Find $E(\bar{y} - \mu)^3$ as a function of $\mu$, $\sigma^2$, $\mu_3$ and $n$.
1. Calculate standard errors using the homoskedasticity formula and using the four covariance
matrices from Section 4.11.
Exercise 4.12 Continue the empirical analysis in Exercise 3.21. Calculate standard errors using
the Horn-Horn-Duncan method. Repeat in your second programming language. Are they identical?
Chapter 5

An Introduction to Large Sample Asymptotics
5.1 Introduction
In Chapter 4 we derived the mean and variance of the least-squares estimator in the context of
the linear regression model, but this is not a complete description of the sampling distribution, nor
sufficient for inference (confidence intervals and hypothesis testing) on the unknown parameters.
Furthermore, the theory does not apply in the context of the linear projection model, which is more
relevant for empirical applications.
To illustrate the situation with an example, let $y_i$ and $x_i$ be drawn from the joint density
$$f(x, y) = \frac{1}{2\pi xy}\exp\left(-\frac{1}{2}(\log y - \log x)^2\right)\exp\left(-\frac{1}{2}(\log x)^2\right)$$
and let β̂ be the slope coefficient estimate from a least-squares regression of yi on xi and a constant.
Using simulation methods, the density function of β̂ was computed and plotted in Figure 5.1 for
sample sizes of n = 25, n = 100 and n = 800. The vertical line marks the true projection coefficient.
From the figure we can see that the density functions are dispersed and highly non-normal. As
the sample size increases the density becomes more concentrated about the population coefficient.
Is there a simple way to characterize the sampling distribution of β̂?
In principle the sampling distribution of β̂ is a function of the joint distribution of (yi , xi )
and the sample size n, but in practice this function is extremely complicated so it is not feasible to
analytically calculate the exact distribution of β̂ except in very special cases. Therefore we typically
rely on approximation methods.
The most widely used and versatile method is asymptotic theory, which approximates sampling
distributions by taking the limit of the finite sample distribution as the sample size n tends to
infinity. It is important to understand that this is an approximation technique, as the asymptotic
distributions are used to assess the finite sample distributions of our estimators in actual practical
samples. The primary tools of asymptotic theory are the weak law of large numbers (WLLN),
central limit theorem (CLT), and continuous mapping theorem (CMT). With these tools we can
approximate the sampling distributions of most econometric estimators.
In this chapter we provide a concise summary. It will be useful for most students to review this
material, even if most is familiar.
Asymptotic analysis is a method of approximation obtained by taking limits of sampling distributions as the sample size tends to positive infinity, written "as $n \to \infty$." It is
not meant to be interpreted literally, but rather as an approximating device.
The first building block for asymptotic analysis is the concept of a limit of a sequence.
Definition 5.2.1 A sequence $a_n$ has the limit $a$, written $a_n \to a$ as $n \to \infty$, or $\lim_{n\to\infty}a_n = a$, if for all $\delta > 0$ there is some $n_\delta < \infty$ such that for all $n \geq n_\delta$, $|a_n - a| \leq \delta$.

In words, $a_n$ has the limit $a$ if the sequence gets closer and closer to $a$ as $n$ gets larger. If a
sequence has a limit, that limit is unique (a sequence cannot have two distinct limits). If an has
the limit a, we also say that an converges to a as n → ∞.
Not all sequences have limits. For example, the sequence {1, 2, 1, 2, 1, 2, ...} does not have a
limit. It is therefore sometimes useful to have a more general definition of limits which always
exist, and these are the limit superior and limit inferior of sequence
Definition 5.2.2 $\liminf_{n\to\infty} a_n \overset{def}{=} \lim_{n\to\infty}\inf_{m\geq n} a_m$

Definition 5.2.3 $\limsup_{n\to\infty} a_n \overset{def}{=} \lim_{n\to\infty}\sup_{m\geq n} a_m$
The limit inferior and limit superior always exist, and are equal when the limit exists. In the
example given earlier, the limit inferior of $\{1, 2, 1, 2, 1, 2, \ldots\}$ is 1, and the limit superior is 2.
Definition 5.3.1 A random variable $z_n \in \mathbb{R}$ converges in probability to $z$ as $n \to \infty$, denoted $z_n \overset{p}{\longrightarrow} z$, if for all $\delta > 0$,
$$\lim_{n\to\infty}\Pr\left(|z_n - z| \leq \delta\right) = 1. \qquad (5.1)$$

The definition looks quite abstract, but it formalizes the concept of a sequence of random
variables concentrating about a point. The event {|zn − z| ≤ δ} occurs when zn is within δ of
the point z. Pr (|zn − z| ≤ δ) is the probability of this event — that zn is within δ of the point
z. Equation (5.1) states that this probability approaches 1 as the sample size n increases. The
definition of convergence in probability requires that this holds for any δ. So for any small interval
about z the distribution of zn concentrates within this interval for large n.
You may notice that the definition concerns the distribution of the random variables zn , not
their realizations. Furthermore, notice that the definition uses the concept of a conventional (deter-
ministic) limit, but the latter is applied to a sequence of probabilities, not directly to the random
variables zn or their realizations.
Two comments about the notation are worth mentioning. First, it is conventional to write the
p
convergence symbol as −→ where the “p” above the arrow indicates that the convergence is “in
probability”. You should try and adhere to this notation, and not simply write zn −→ z. Second,
it is important to include the phrase “as n → ∞” to be specific about how the limit is obtained.
A common mistake is to confuse convergence in probability with convergence in expectation:
they are related but distinct concepts. Neither (5.1) nor (5.2) implies the other.
To see the distinction it might be helpful to think through a stylized example. Consider a
discrete random variable $z_n$ which takes the value 0 with probability $1 - n^{-1}$ and the value $a_n \neq 0$
with probability $n^{-1}$, or
$$\Pr(z_n = 0) = 1 - \frac{1}{n}, \qquad \Pr(z_n = a_n) = \frac{1}{n}. \qquad (5.3)$$
In this example the probability distribution of zn concentrates at zero as n increases, regardless of
p
the sequence an . You can check that zn −→ 0 as n → ∞.
In this example we can also calculate that the expectation of zn is
$$Ez_n = \frac{a_n}{n}.$$
Despite the fact that zn converges in probability to zero, its expectation will not decrease to zero
unless an /n → 0. If an diverges to infinity at a rate equal to n (or faster) then Ezn will not converge
p
to zero. For example, if an = n, then Ezn = 1 for all n, even though zn −→ 0. This example might
seem a bit artificial, but the point is that the concepts of convergence in probability and convergence
in expectation are distinct, so it is important not to confuse one with the other.
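The example (5.3) with $a_n = n$ can be checked numerically; the short R sketch below computes the exact tail probability and expectation rather than simulating (the choice of $\delta$ and the grid of $n$ values are arbitrary illustrations).

# z_n = 0 with probability 1 - 1/n and z_n = a_n = n with probability 1/n.
# Pr(|z_n| > delta) = 1/n -> 0, so z_n converges in probability to 0,
# yet E(z_n) = n * (1/n) = 1 for every n.
prob.exceeds <- function(n, delta = 0.01) 1 / n   # exact tail probability (delta < n)
expectation  <- function(n) n * (1 / n)           # exact expectation
nn <- c(10, 1000, 1e5)
cbind(n = nn, pr = sapply(nn, prob.exceeds), Ez = sapply(nn, expectation))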
Another common source of confusion with the notation surrounding probability limits is that
p
the expression to the right of the arrow “ −→” must be free of dependence on the sample size n.
p
Thus expressions of the form “zn −→ cn ” are notationally meaningless and should not be used.
The integral is over the event $\{u^2 > \delta^2\}$, so that the inequality $1 \leq \frac{u^2}{\delta^2}$ holds throughout. Thus
$$\int_{\{u^2>\delta^2\}}dF_n(u) \leq \int_{\{u^2>\delta^2\}}\frac{u^2}{\delta^2}dF_n(u) \leq \int\frac{u^2}{\delta^2}dF_n(u) = \frac{E(z_n - Ez_n)^2}{\delta^2} = \frac{\mathrm{var}(z_n)}{\delta^2},$$
which establishes the desired inequality.
Applied to the sample mean $\bar{y}$, Chebyshev's inequality shows that for any $\delta > 0$
$$\Pr\left(|\bar{y} - E\bar{y}| > \delta\right) \leq \frac{\sigma^2}{n\delta^2}.$$
For fixed $\sigma^2$ and $\delta$, the bound on the right-hand side shrinks to zero as $n \to \infty$. Thus the probability
that $\bar{y}$ is within $\delta$ of $E\bar{y} = \mu$ approaches 1 as $n$ gets large, or
$$\lim_{n\to\infty}\Pr\left(|\bar{y} - \mu| \leq \delta\right) = 1.$$
Definition 5.4.1 An estimator $\hat{\theta}$ of a parameter $\theta$ is consistent if $\hat{\theta} \overset{p}{\longrightarrow} \theta$ as $n \to \infty$.
Consistency is a good property for an estimator to possess. It means that for any given data
distribution, there is a sample size n sufficiently large such that the estimator θ̂ will be arbitrarily
close to the true value θ with high probability. The theorem does not tell us, however, how large
this n has to be. Thus the theorem does not give practical guidance for empirical practice. Still,
it is a minimal property for an estimator to be considered a “good” estimator, and provides a
foundation for more useful approximations.
The convergence (5.4) is stronger than (5.1) because it computes the probability of a limit
rather than the limit of a probability. Almost sure convergence is stronger than convergence in
probability in the sense that $z_n \overset{a.s.}{\longrightarrow} z$ implies $z_n \overset{p}{\longrightarrow} z$.
In the example (5.3) of Section 5.3, the sequence zn converges in probability to zero for any
sequence an , but this is not sufficient for zn to converge almost surely. In order for zn to converge
to zero almost surely, it is necessary that an → 0.
In the random sampling context the sample mean can be shown to converge almost surely to
the population mean. This is called the strong law of large numbers.
The proof of the SLLN is technically quite advanced so is not presented here. For a proof see
Billingsley (1995, Section 22) or Ash (1972, Theorem 7.2.5).
The WLLN is sufficient for most purposes in econometrics, so we will not use the SLLN in this
text.
When working with random vectors $y$ it is convenient to measure their magnitude by their
Euclidean length or Euclidean norm
$$\|y\| = \left(y_1^2 + \cdots + y_m^2\right)^{1/2}.$$

Theorem 5.6.1 For $y \in \mathbb{R}^m$, $E\|y\| < \infty$ if and only if $E|y_j| < \infty$ for $j = 1, \ldots, m$.
V is often called a variance-covariance matrix. You can show that the elements of V are finite if
E kyk2 < ∞.
A random sample $\{y_1, \ldots, y_n\}$ consists of $n$ observations of independent and identically distributed
draws from the distribution of $y$. (Each draw is an $m$-vector.) The vector sample mean
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i = \begin{pmatrix}\bar{y}_1\\ \bar{y}_2\\ \vdots\\ \bar{y}_m\end{pmatrix}$$
is the vector of sample means of the individual variables.
When $z_n \overset{d}{\longrightarrow} z$, it is common to refer to $z$ as the asymptotic distribution or limit distribution of $z_n$.
When the limit distribution $z$ is degenerate (that is, $\Pr(z = c) = 1$ for some $c$) we can write
the convergence as $z_n \overset{d}{\longrightarrow} c$, which is equivalent to convergence in probability, $z_n \overset{p}{\longrightarrow} c$.
The typical path to establishing convergence in distribution is through the central limit theorem
(CLT), which states that a standardized sample average converges in distribution to a normal
random vector.
The standardized sum $z_n = \sqrt{n}\left(\bar{y}_n - \mu\right)$ has mean zero and variance $V$. What the CLT adds is
that the variable z n is also approximately normally distributed, and that the normal approximation
improves as n increases.
The CLT is one of the most powerful and mysterious results in statistical theory. It shows that
the simple process of averaging induces normality. The first version of the CLT (for the number
of heads resulting from many tosses of a fair coin) was established by the French mathematician
Abraham de Moivre in an article published in 1733. This was extended to cover an approximation
to the binomial distribution in 1812 by Pierre-Simon Laplace in his book Théorie Analytique des
Probabilités, and the most general statements are credited to articles by the Russian mathematician
Aleksandr Lyapunov (1901) and the Finnish mathematician Jarl Waldemar Lindeberg (1920, 1922).
The above statement is known as the classic (or Lindeberg-Lévy) CLT due to contributions by
Lindeberg (1920) and the French mathematician Paul Pierre Lévy.
A more general version which does not require the restriction to identical distributions was
provided by Lindeberg (1922).
Equation (5.5) is known as Lindeberg's condition. A standard method to verify (5.5) is via
Lyapunov's condition: for some $\delta > 0$,
$$\lim_{n\to\infty}\frac{1}{\nu_n^{2+\delta}}\sum_{i=1}^{n} E|y_i - \mu_i|^{2+\delta} = 0. \qquad (5.6)$$
It is easy to verify that (5.6) implies (5.5), and (5.6) is often easy to verify. For example, if
$\sup_i E|y_i - \mu_i|^3 \leq \kappa < \infty$ and $\inf_i \sigma_i^2 \geq c > 0$ then
$$\frac{1}{\nu_n^3}\sum_{i=1}^{n} E|y_i - \mu_i|^3 \leq \frac{n\kappa}{(nc)^{3/2}} \to 0$$
so (5.6) is satisfied.
Many parameters of interest can be written as moments of the form
$$\mu = E\,h(y)$$
for some function $h : \mathbb{R}^m \to \mathbb{R}^k$. For example, the second moment of $y$ is $Ey^2$, the $k$'th is $Ey^k$, the
moment generating function is $E\exp(ty)$, and the distribution function is $E\,1\{y \leq x\}$.
Estimating parameters of this form fits into our previous analysis by defining the random
variable $z = h(y)$, for then $\mu = Ez$ is just a simple moment of $z$. This suggests the moment
estimator
$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} z_i = \frac{1}{n}\sum_{i=1}^{n} h(y_i).$$
For example, the moment estimator of $Ey^k$ is $n^{-1}\sum_{i=1}^{n} y_i^k$, that of the moment generating function
is $n^{-1}\sum_{i=1}^{n}\exp(ty_i)$, and for the distribution function the estimator is $n^{-1}\sum_{i=1}^{n}1\{y_i \leq x\}$.
Since $\hat{\mu}$ is a sample average, and transformations of iid variables are also iid, the asymptotic
results of the previous sections immediately apply.
$$\sqrt{n}\left(\hat{\mu} - \mu\right) \overset{d}{\longrightarrow} N(0, V)$$
where $V = E\left((h(y) - \mu)(h(y) - \mu)'\right)$.
Theorems 5.8.1 and 5.8.2 show that the estimator $\hat{\mu}$ is consistent for $\mu$ and asymptotically
normally distributed, so long as the stated moment conditions hold.
A word of caution. Theorems 5.8.1 and 5.8.2 give the impression that it is possible to estimate
any moment of $y$. Technically this is the case so long as that moment is finite. What is hidden
by the notation, however, is that estimates of high-order moments can be quite imprecise. For
example, consider the sample 8th moment $\hat{\mu}_8 = \frac{1}{n}\sum_{i=1}^{n} y_i^8$, and suppose for simplicity that $y$ is
$N(0,1)$. Then we can calculate that $\mathrm{var}(\hat{\mu}_8) = n^{-1}\,2{,}016{,}000$,¹ which is huge, even for large $n$! In
general, higher-order moments are challenging to estimate because their variance depends upon
even higher moments which can be quite large in some cases.
¹ By the formula for the variance of a mean, $\mathrm{var}(\hat{\mu}_8) = n^{-1}\left(Ey^{16} - \left(Ey^8\right)^2\right)$. Since $y$ is $N(0,1)$, $Ey^{16} = 15!! = 2{,}027{,}025$ and $Ey^8 = 7!! = 105$, where $k!! = k(k-2)\cdots$ is the double factorial for odd $k$.
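The imprecision of high-order moment estimates is easy to verify by simulation. The R sketch below (replication count and sample size are arbitrary choices) compares the Monte Carlo variance of $\hat{\mu}_8$ with the theoretical value from the footnote.

# Monte Carlo check of var(mu8.hat) for y ~ N(0,1):
# theory gives (E y^16 - (E y^8)^2)/n = (2027025 - 105^2)/n = 2016000/n
set.seed(6)
n <- 1000; reps <- 20000
mu8.hat <- replicate(reps, mean(rnorm(n)^8))
c(simulated = var(mu8.hat), theoretical = 2016000 / n)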
$$\sigma^2 = E(w - Ew)^2 = Ew^2 - (Ew)^2.$$
$$sk = \frac{E(w - Ew)^3}{\left(E(w - Ew)^2\right)^{3/2}}.$$
Theorem 5.9.1 Continuous Mapping Theorem (CMT). If $z_n \overset{p}{\longrightarrow} c$ as $n \to \infty$ and $g(\cdot)$ is continuous at $c$, then $g(z_n) \overset{p}{\longrightarrow} g(c)$ as $n \to \infty$.
In our third example, $g$ defined in (5.9) is continuous for all $\mu$ such that $\mathrm{var}(w) = \mu_2 - \mu_1^2 > 0$,
which holds unless $w$ has a degenerate distribution. Thus if $E|w|^3 < \infty$ and $\mathrm{var}(w) > 0$ then, as
$n \to \infty$, $\widehat{sk} \overset{p}{\longrightarrow} sk$.
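As a small illustration of the plug-in skewness estimator and the CMT, the R sketch below (with an arbitrary choice of distribution and sample size) computes $\widehat{sk}$ as a smooth function of the first three sample moments.

# Plug-in skewness as a smooth function of sample moments; Exp(1) has skewness 2
set.seed(7)
w <- rexp(5000)
m1 <- mean(w); m2 <- mean(w^2); m3 <- mean(w^3)
sk.hat <- (m3 - 3 * m2 * m1 + 2 * m1^3) / (m2 - m1^2)^(3/2)
sk.hat   # close to 2, consistent with the continuous mapping theorem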
For a proof of Theorem 5.10.1 see Theorem 2.3 of van der Vaart (1998). It was first proved by
Mann and Wald (1943) and is therefore sometimes referred to as the Mann-Wald Theorem.
Theorem 5.10.1 allows the function $g$ to be discontinuous only if the probability of being at a
discontinuity point is zero. For example, the function $g(u) = u^{-1}$ is discontinuous at $u = 0$, but if
$z_n \overset{d}{\longrightarrow} z \sim N(0,1)$ then $\Pr(z = 0) = 0$, so $z_n^{-1} \overset{d}{\longrightarrow} z^{-1}$.
A special case of the Continuous Mapping Theorem is known as Slutsky’s Theorem.
Even though Slutsky’s Theorem is a special case of the CMT, it is a useful statement as it
focuses on the most common applications — addition, multiplication, and division.
Despite the fact that the plug-in estimator $\hat{\beta}$ is a function of $\hat{\mu}$ for which we have an asymptotic
distribution, Theorem 5.10.1 does not directly give us an asymptotic distribution for $\hat{\beta}$. This is
because $\hat{\beta} = g(\hat{\mu})$ is written as a function of $\hat{\mu}$, not of the standardized sequence $\sqrt{n}(\hat{\mu} - \mu)$.
We need an intermediate step: a first order Taylor series expansion. This step is so critical to
statistical theory that it has its own name, the Delta Method.
The Delta Method allows us to complete our derivation of the asymptotic distribution of the
estimator $\hat{\beta}$ of $\beta$.
By combining Theorems 5.8.2 and 5.10.3 we can find the asymptotic distribution of the plug-in
estimator $\hat{\beta}$.
$$\sqrt{n}\left(\hat{\beta} - \beta\right) \overset{d}{\longrightarrow} N\left(0, G'VG\right)$$
where $V = E\left((h(y) - \mu)(h(y) - \mu)'\right)$ and $G = G(\mu)$.
Let $x_n$ and $a_n$, $n = 1, 2, \ldots$, be non-random sequences. The notation
$$x_n = o(1)$$
(pronounced "small oh-one") means that $x_n \to 0$ as $n \to \infty$. The notation
$$x_n = o(a_n)$$
is equivalent to $a_n^{-1}x_n \to 0$ as $n \to \infty$. The notation
$$x_n = O(1)$$
(pronounced "big oh-one") means that $x_n$ is bounded uniformly in $n$: there exists an $M < \infty$ such
that $|x_n| \leq M$ for all $n$. The notation
$$x_n = O(a_n)$$
is equivalent to $a_n^{-1}x_n = O(1)$.
We now introduce similar concepts for sequences of random variables. Let $z_n$ and $a_n$, $n = 1, 2, \ldots$,
be sequences of random variables. (In most applications, $a_n$ is non-random.) The notation
$$z_n = o_p(1)$$
("small oh-P-one") means that $z_n \overset{p}{\longrightarrow} 0$ as $n \to \infty$. We also write
$$z_n = o_p(a_n)$$
if $a_n^{-1}z_n = o_p(1)$. For example, for any consistent estimator $\hat{\beta}$ for $\beta$ we can write
$$\hat{\beta} = \beta + o_p(1).$$
Similarly, the notation $z_n = O_p(1)$ ("big oh-P-one") means that $z_n$ is bounded in probability.
Precisely, for any $\varepsilon > 0$ there is a constant $M_\varepsilon < \infty$ such that
$$\limsup_{n\to\infty}\Pr\left(|z_n| > M_\varepsilon\right) \leq \varepsilon.$$
Furthermore, we write
$$z_n = O_p(a_n)$$
if $a_n^{-1}z_n = O_p(1)$.
Op (1) is weaker than op (1) in the sense that zn = op (1) implies zn = Op (1) but not the reverse.
However, if zn = Op (an ) then zn = op (bn ) for any bn such that an /bn → 0.
If a random vector converges in distribution $z_n \overset{d}{\longrightarrow} z$ (for example, if $z \sim N(0, V)$) then
$z_n = O_p(1)$. It follows that for estimators $\hat{\beta}$ which satisfy the convergence of Theorem 5.10.4
we can write
$$\hat{\beta} = \beta + O_p(n^{-1/2}).$$
In words, this statement says that the estimator $\hat{\beta}$ equals the true coefficient $\beta$ plus a random
component which is shrinking to zero at the rate $n^{-1/2}$.
Another useful observation is that a random sequence with a bounded moment is stochastically
bounded: if
$$E\|z_n\|^\delta = O(a_n)$$
then
$$z_n = O_p(a_n^{1/\delta}).$$
Similarly, $E\|z_n\|^\delta = o(a_n)$ implies $z_n = o_p(a_n^{1/\delta})$.
This can be shown using Markov's inequality (B.21). The assumptions imply that there is some
$M < \infty$ such that $E\|z_n\|^\delta \leq Ma_n$ for all $n$. For any $\varepsilon$ set $B = \left(\frac{M}{\varepsilon}\right)^{1/\delta}$. Then
$$\Pr\left(a_n^{-1/\delta}\|z_n\| > B\right) = \Pr\left(\|z_n\|^\delta > \frac{Ma_n}{\varepsilon}\right) \leq \frac{\varepsilon}{Ma_n}E\|z_n\|^\delta \leq \varepsilon$$
as required.
There are many simple rules for manipulating op (1) and Op (1) sequences which can be deduced
from the continuous mapping theorem or Slutsky’s Theorem. For example,
op (1) + op (1) = op (1)
op (1) + Op (1) = Op (1)
Op (1) + Op (1) = Op (1)
op (1)op (1) = op (1)
op (1)Op (1) = op (1)
Op (1)Op (1) = Op (1)
Consider the random variable
$$\max_{1\leq i\leq n}|y_i|.$$
This is the magnitude of the largest observation in the sample {y1 , ..., yn }. If the support of the
distribution of yi is unbounded, then as the sample size n increases, the largest observation will
also tend to increase. It turns out that there is a simple characterization.
Theorem 5.12.1 below shows that if $E|y|^r < \infty$ then $n^{-1/r}\max_{1\leq i\leq n}|y_i| \overset{p}{\longrightarrow} 0$ (5.12), while if $E\exp(ty) < \infty$ for all $t$ then $(\log n)^{-1}\max_{1\leq i\leq n}|y_i| \overset{p}{\longrightarrow} 0$ (5.13). We can write (5.12) as
$$\max_{1\leq i\leq n}|y_i| = o_p(n^{1/r}) \qquad (5.14)$$
and (5.13) as
$$\max_{1\leq i\leq n}|y_i| = o_p(\log n). \qquad (5.15)$$
Equation (5.12) says that if y has r finite moments, then the largest observation will diverge
at a rate slower than n1/r . As r increases this rate decreases. Equation (5.13) shows that if we
strengthen this to y having all finite moments and a finite moment generating function (for example,
if y is normally distributed) then the largest observation will diverge slower than log n. Thus the
higher the moments, the slower the rate of divergence.
To simplify the notation, we write (5.14) as yi = op (n1/r ) uniformly in 1 ≤ i ≤ n, and similarly
(5.15) as yi = op (log n), uniformly in 1 ≤ i ≤ n. It is important to understand when the Op or op
symbols are applied to subscript i random variables whether the convergence is pointwise in i, or
is uniform in i in the sense of (5.14)-(5.15).
Theorem 5.12.1 applies to random vectors. If $E\|y\|^r < \infty$ then
$$\max_{1\leq i\leq n}\|y_i\| = o_p(n^{1/r}).$$
Since each submodel $\eta$ is parametric we can calculate the efficiency bound for estimation of $\mu$
within this submodel. Specifically, given the density $f_\eta(y \mid \theta)$ its likelihood score is
$$S_\eta = \frac{\partial}{\partial\theta}\log f_\eta(y \mid \theta_0),$$
so the Cramer-Rao lower bound for estimation of $\theta$ is $\left(ES_\eta S_\eta'\right)^{-1}$. Defining $M_\eta = \frac{\partial}{\partial\theta}\mu_\eta(\theta_0)'$,
by Theorem B.11.5 the Cramer-Rao lower bound for estimation of $\mu$ within the submodel $\eta$ is
$$V_\eta = M_\eta'\left(ES_\eta S_\eta'\right)^{-1}M_\eta.$$
As $V_\eta$ is the efficiency bound for the submodel class $f_\eta(y \mid \theta)$, no estimator can have an
asymptotic variance smaller than $V_\eta$ for any density $f_\eta(y \mid \theta)$ in the submodel class, including the
true density $f$. This is true for all submodels $\eta$. Thus the asymptotic variance of any semiparametric
estimator cannot be smaller than $V_\eta$ for any conceivable submodel. Taking the supremum of the
Cramer-Rao lower bounds from all conceivable submodels we define
$$V = \sup_{\eta\in\aleph}V_\eta.$$
The asymptotic variance of any semiparametric estimator cannot be smaller than V , since it cannot
be smaller than any individual V η . We call V the semiparametric asymptotic variance bound
or semiparametric efficiency bound for estimation of μ, as it is a lower bound on the asymptotic
variance for any semiparametric estimator. If the asymptotic variance of a specific semiparametric
estimator equals the bound V we say that the estimator is semiparametrically efficient.
For many statistical problems it is quite challenging to calculate the semiparametric variance
bound. However, in some cases there is a simple method to find the solution. Suppose that
we can find a submodel η0 whose Cramer-Rao lower bound satisfies V η0 = V μ where V μ is
the asymptotic variance of a known semiparametric estimator. In this case, we can deduce that
V = V η0 = V μ . Otherwise there would exist another submodel η1 whose Cramer-Rao lower bound
satisfies V η0 < V η1 but this would imply V μ < V η1 which contradicts the Cramer-Rao Theorem.
We now find this submodel for the sample mean $\hat{\mu}$. Our goal is to find a parametric submodel
whose Cramer-Rao bound for $\mu$ is $V$. This can be done by creating a tilted version of the true
density. Consider the parametric submodel
$$f_\eta(y \mid \theta) = f(y)\left(1 + \theta'V^{-1}(y - \mu)\right), \qquad (5.17)$$
where $f(y)$ is the true density. It integrates to one since $E(y - \mu) = 0$, and for all $\theta$ close to zero $f_\eta(y \mid \theta) \geq 0$. Thus $f_\eta(y \mid \theta)$ is a valid density function. It is a parametric
submodel since $f_\eta(y \mid \theta_0) = f(y)$ when $\theta_0 = 0$. This parametric submodel has the mean
$$\mu(\theta) = \int yf_\eta(y \mid \theta)dy = \int yf(y)dy + \int f(y)y(y - \mu)'V^{-1}\theta\,dy = \mu + \theta.$$
Since
$$\frac{\partial}{\partial\theta}\log f_\eta(y \mid \theta) = \frac{\partial}{\partial\theta}\log\left(1 + \theta'V^{-1}(y - \mu)\right) = \frac{V^{-1}(y - \mu)}{1 + \theta'V^{-1}(y - \mu)},$$
it follows that the score function for $\theta$ is
$$S_\eta = \frac{\partial}{\partial\theta}\log f_\eta(y \mid \theta_0) = V^{-1}(y - \mu). \qquad (5.18)$$
By Theorem B.11.3 the Cramer-Rao lower bound for $\theta$ is
$$\left(E\left(S_\eta S_\eta'\right)\right)^{-1} = \left(V^{-1}E\left((y - \mu)(y - \mu)'\right)V^{-1}\right)^{-1} = V. \qquad (5.19)$$
The Cramer-Rao lower bound for μ(θ) = μ + θ is also V , and this equals the asymptotic variance
of the moment estimator μ
b . This was what we set out to show.
In summary, we have shown that in the submodel (5.17) the Cramer-Rao lower bound for
estimation of μ is V which equals the asymptotic variance of the sample mean. This establishes
the following result.
We call this result a proposition rather than a theorem as we have not attended to the regularity
conditions.
It is a simple matter to extend this result to the plug-in estimator $\hat{\beta} = g(\hat{\mu})$. We know from
Theorem 5.10.4 that if $E\|y\|^2 < \infty$ and $g(u)$ is continuously differentiable at $u = \mu$ then the plug-in
estimator has the asymptotic distribution $\sqrt{n}\left(\hat{\beta} - \beta\right) \overset{d}{\longrightarrow} N(0, G'VG)$. We therefore consider
the class of distributions
$$L_2(g) = \left\{F : E\|y\|^2 < \infty,\; g(u)\text{ is continuously differentiable at } u = Ey\right\}.$$
For example, if $\beta = \mu_1/\mu_2$ where $\mu_1 = Ey_1$ and $\mu_2 = Ey_2$, then $L_2(g) = \left\{F : Ey_1^2 < \infty,\, Ey_2^2 < \infty,\, Ey_2 \neq 0\right\}$.
For any submodel $\eta$ the Cramer-Rao lower bound for estimation of $\beta = g(\mu)$ is $G'V_\eta G$ by
Theorem B.11.5. For the submodel (5.17) this bound is $G'VG$, which equals the asymptotic variance
of $\hat{\beta}$ from Theorem 5.10.4. Thus $\hat{\beta}$ is semiparametrically efficient.
The result in Proposition 5.13.2 is quite general. Smooth functions of sample moments are
efficient estimators for their population counterparts. This is a very powerful result, as most
econometric estimators can be written (or approximated) as smooth functions of sample means.
Proof of Theorem 5.4.2: Without loss of generality, we can assume $E(y_i) = 0$ by recentering $y_i$
on its expectation.
We need to show that for all $\delta > 0$ and $\eta > 0$ there is some $N < \infty$ so that for all $n \geq N$,
$\Pr(|\bar{y}| > \delta) \leq \eta$. Fix $\delta$ and $\eta$. Set $\varepsilon = \delta\eta/3$. Pick $C < \infty$ large enough so that
$$E\left(|y_i|\,1\left(|y_i| > C\right)\right) \leq \varepsilon \qquad (5.20)$$
(where $1(\cdot)$ is the indicator function), which is possible since $E|y_i| < \infty$. Define the random variables
$$w_i = y_i 1\left(|y_i| \leq C\right) - E\left(y_i 1\left(|y_i| \leq C\right)\right), \qquad z_i = y_i 1\left(|y_i| > C\right) - E\left(y_i 1\left(|y_i| > C\right)\right),$$
so that
$$\bar{y} = \bar{w} + \bar{z}$$
and
$$E|\bar{y}| \leq E|\bar{w}| + E|\bar{z}|. \qquad (5.21)$$
We now show that the sum of the expectations on the right-hand side can be bounded below $3\varepsilon$.
First, by the Triangle Inequality (A.12) and the Expectation Inequality (B.15),
$$E|\bar{z}| \leq E|z_i| \leq 2E\left(|y_i|\,1\left(|y_i| > C\right)\right) \leq 2\varepsilon, \qquad (5.23)$$
where the final inequality is (5.20). Second, note that $|w_i| \leq 2C$, so $Ew_i^2 \leq 4C^2$ (5.24). Then by Jensen's Inequality (B.12), the fact that the $w_i$ are
iid and mean zero, and (5.24),
$$\left(E|\bar{w}|\right)^2 \leq E|\bar{w}|^2 = \frac{Ew_i^2}{n} = \frac{4C^2}{n} \leq \varepsilon^2, \qquad (5.25)$$
the final inequality holding for $n \geq 4C^2/\varepsilon^2 = 36C^2/\delta^2\eta^2$. Equations (5.21), (5.23) and (5.25)
together show that
$$E|\bar{y}| \leq 3\varepsilon, \qquad (5.26)$$
and hence by Markov's inequality (B.21), $\Pr(|\bar{y}| > \delta) \leq E|\bar{y}|/\delta \leq 3\varepsilon/\delta = \eta$, as desired. $\blacksquare$
For the reverse inequality, the Euclidean norm of a vector is larger than the length of any individual
component, so for any j, |yj | ≤ kyk . Thus, if E kyk < ∞, then E |yj | < ∞ for j = 1, ..., m. ¥
Proof of Theorem 5.7.1: The moment bound $E(y_i'y_i) < \infty$ is sufficient to guarantee that the
elements of $\mu$ and $V$ are well defined and finite. Without loss of generality, it is sufficient to
consider the case $\mu = 0$.
Our proof method is to calculate the characteristic function of $\sqrt{n}\bar{y}_n$ and show that it converges
pointwise to the characteristic function of $N(0, V)$. By Lévy's Continuity Theorem (see van der
Vaart (2008) Theorem 2.13) this is sufficient to establish that $\sqrt{n}\bar{y}_n$ converges in distribution to
$N(0, V)$.
For $\lambda \in \mathbb{R}^m$, let $C(\lambda) = E\exp\left(i\lambda'y_i\right)$ denote the characteristic function of $y_i$ and set $c(\lambda) = \log C(\lambda)$. Since $y_i$ has two finite moments the first and second derivatives of $C(\lambda)$ are continuous
in $\lambda$. They are
$$\frac{\partial}{\partial\lambda}C(\lambda) = iE\left(y_i\exp\left(i\lambda'y_i\right)\right)$$
$$\frac{\partial^2}{\partial\lambda\partial\lambda'}C(\lambda) = i^2E\left(y_iy_i'\exp\left(i\lambda'y_i\right)\right).$$
When evaluated at $\lambda = 0$,
$$C(0) = 1, \qquad \frac{\partial}{\partial\lambda}C(0) = iE(y_i) = 0, \qquad \frac{\partial^2}{\partial\lambda\partial\lambda'}C(0) = -E\left(y_iy_i'\right) = -V.$$
Furthermore,
$$c_\lambda(\lambda) = \frac{\partial}{\partial\lambda}c(\lambda) = C(\lambda)^{-1}\frac{\partial}{\partial\lambda}C(\lambda)$$
$$c_{\lambda\lambda}(\lambda) = \frac{\partial^2}{\partial\lambda\partial\lambda'}c(\lambda) = C(\lambda)^{-1}\frac{\partial^2}{\partial\lambda\partial\lambda'}C(\lambda) - C(\lambda)^{-2}\frac{\partial}{\partial\lambda}C(\lambda)\frac{\partial}{\partial\lambda'}C(\lambda),$$
so when evaluated at $\lambda = 0$,
$$c(0) = 0, \qquad c_\lambda(0) = 0, \qquad c_{\lambda\lambda}(0) = -V.$$
Proof of Theorem 5.9.1: Since $g$ is continuous at $c$, for all $\varepsilon > 0$ we can find a $\delta > 0$ such
that if $\|z_n - c\| < \delta$ then $\|g(z_n) - g(c)\| \leq \varepsilon$. Recall that $A \subset B$ implies $\Pr(A) \leq \Pr(B)$. Thus
$\Pr\left(\|g(z_n) - g(c)\| \leq \varepsilon\right) \geq \Pr\left(\|z_n - c\| < \delta\right) \to 1$ as $n \to \infty$ by the assumption that $z_n \overset{p}{\longrightarrow} c$.
Hence $g(z_n) \overset{p}{\longrightarrow} g(c)$ as $n \to \infty$. $\blacksquare$
Proof of Theorem 5.10.3: By a vector Taylor series expansion, for each element of $g$,
$$g_j(\theta_n) = g_j(\theta) + g_{j\theta}(\theta_{jn}^*)'(\theta_n - \theta), \qquad (5.27)$$
where $\theta_{jn}^*$ lies on the line segment between $\theta_n$ and $\theta$ and therefore converges in probability to $\theta$.
It follows that $a_{jn} = g_{j\theta}(\theta_{jn}^*) - g_{j\theta} \overset{p}{\longrightarrow} 0$. Stacking across elements of $g$, we find
$$\sqrt{n}\left(g(\theta_n) - g(\theta)\right) = (G + a_n)'\sqrt{n}(\theta_n - \theta) \overset{d}{\longrightarrow} G'\xi. \qquad (5.28)$$
The convergence is by Theorem 5.10.1, as $G + a_n \overset{d}{\longrightarrow} G$, $\sqrt{n}(\theta_n - \theta) \overset{d}{\longrightarrow} \xi$, and their product is
continuous. This establishes (5.10).
When $\xi \sim N(0, V)$, the right-hand side of (5.28) equals
$$G'\xi = G'N(0, V) = N\left(0, G'VG\right),$$
establishing (5.11). $\blacksquare$
Proof of Theorem 5.12.1: First consider (5.12). Take any $\delta > 0$. The event $\left\{\max_{1\leq i\leq n}|y_i| > \delta n^{1/r}\right\}$
means that at least one of the $|y_i|$ exceeds $\delta n^{1/r}$, which is the same as the event $\bigcup_{i=1}^{n}\left\{|y_i| > \delta n^{1/r}\right\}$,
or equivalently $\bigcup_{i=1}^{n}\left\{|y_i|^r > \delta^r n\right\}$. Since the probability of the union of events is smaller than the
sum of the probabilities,
$$\Pr\left(n^{-1/r}\max_{1\leq i\leq n}|y_i| > \delta\right) = \Pr\left(\bigcup_{i=1}^{n}\left\{|y_i|^r > \delta^r n\right\}\right) \leq \sum_{i=1}^{n}\Pr\left(|y_i|^r > n\delta^r\right) \leq \frac{1}{n\delta^r}\sum_{i=1}^{n}E\left(|y_i|^r 1\left(|y_i|^r > n\delta^r\right)\right) = \frac{1}{\delta^r}E\left(|y_i|^r 1\left(|y_i|^r > n\delta^r\right)\right),$$
where the second inequality is the strong form of Markov's inequality (Theorem B.22) and the final
equality is since the $y_i$ are iid. Since $E|y|^r < \infty$ this final expectation converges to zero as $n \to \infty$.
This is because
$$E|y_i|^r = \int |y|^r dF(y) < \infty$$
implies
$$E\left(|y_i|^r 1\left(|y_i|^r > c\right)\right) = \int_{|y|^r > c}|y|^r dF(y) \to 0 \qquad (5.29)$$
as $c \to \infty$. This establishes (5.12).
Second, consider (5.13). Take any $\delta > 0$ and set $t = 1/\delta$. By a similar calculation,
$$\Pr\left((\log n)^{-1}\max_{1\leq i\leq n}|y_i| > \delta\right) = \Pr\left(\bigcup_{i=1}^{n}\left\{\exp|ty_i| > \exp(t\delta\log n)\right\}\right) \leq \sum_{i=1}^{n}\Pr\left(\exp|ty_i| > n\right) \leq E\left(\exp|ty_i|\,1\left(\exp|ty_i| > n\right)\right),$$
where the second line uses $\exp(t\delta\log n) = \exp(\log n) = n$. The assumption $E\exp(ty) < \infty$ means
$E\left(\exp|ty|\,1\left(\exp|ty| > n\right)\right) \to 0$ as $n \to \infty$ by the same argument as in (5.29). This establishes
(5.13). $\blacksquare$
Exercises
Exercise 5.1 For the following sequences, find the liminf, limsup and limit (if it exists) as $n \to \infty$:

1. $a_n = 1/n$

2. $a_n = \sin\left(\dfrac{\pi n}{2}\right)$

3. $a_n = \dfrac{1}{n}\sin\left(\dfrac{\pi n}{2}\right)$
Exercise 5.2 A weighted sample mean takes the form $\bar{y}^* = \frac{1}{n}\sum_{i=1}^{n} w_iy_i$ for some non-negative
constants $w_i$ satisfying $\frac{1}{n}\sum_{i=1}^{n} w_i = 1$. Assume $y_i$ is iid.

2. Calculate $\mathrm{var}(\bar{y}^*)$.

3. Show that a sufficient condition for $\bar{y}^* \overset{p}{\longrightarrow} \mu$ is that $\frac{1}{n^2}\sum_{i=1}^{n} w_i^2 \longrightarrow 0$.

4. Show that a sufficient condition for the condition in part 3 is $\max_{i\leq n}w_i = o(n)$.
Exercise 5.3 Take a random variable Z such that EZ = 0 and var(Z) = 1. Use Chebyshev’s
inequality to find a δ such that Pr (|Z| > δ) ≤ 0.05. Contrast this with the exact δ which solves
Pr (|Z| > δ) = 0.05 when Z ∼ N (0, 1) . Comment on the difference.
Exercise 5.4 Find the moment estimator $\hat{\mu}_3$ of $\mu_3 = Ey_i^3$ and show that $\sqrt{n}\left(\hat{\mu}_3 - \mu_3\right) \overset{d}{\longrightarrow} N\left(0, v^2\right)$
for some $v^2$. Write $v^2$ as a function of the moments of $y_i$.
Exercise 5.5 Suppose z_n →p c as n → ∞. Show that z_n² →p c² as n → ∞ using the definition of convergence in probability, but without using the CMT.
Exercise 5.6 Suppose √n( μ̂ − μ ) →d N(0, v²) and set β = μ² and β̂ = μ̂².
1. Use the Delta Method to obtain an asymptotic distribution for √n( β̂ − β ).
2. Now suppose μ = 0. Describe what happens to the asymptotic distribution from the previous part.
3. Improve on the previous answer. Under the assumption μ = 0, find the asymptotic distribution for nβ̂ = nμ̂².
Chapter 6
Asymptotic Theory for Least Squares
6.1 Introduction
It turns out that the asymptotic theory of least-squares estimation applies equally to the projection model and the linear CEF model, and therefore the results in this chapter will be stated for the broader projection model described in Section 2.18. Recall that the model is
y_i = x_i′β + e_i
Some of the results of this section hold under random sampling (Assumption 1.5.1) and finite second moments (Assumption 2.18.1). We restate this condition here for clarity.
Assumption 6.1.1
1. The observations (y_i, x_i), i = 1, ..., n, are independent and identically distributed.
2. Ey² < ∞.
3. E‖x‖² < ∞.
4. Q_xx = E(xx′) is positive definite.
Third, the CMT (Theorem 5.9.1) allows us to combine these equations to show that β̂ converges in probability to β. Specifically, as n → ∞,
β̂ = Q̂_xx^{-1} Q̂_xy →p Q_xx^{-1} Q_xy = β. (6.3)
We have shown that β̂ →p β as n → ∞. In words, the OLS estimator converges in probability to
the projection coefficient vector β as the sample size n gets large.
To fully understand the application of the CMT we walk through it in detail. We can write
β̂ = g( Q̂_xx, Q̂_xy )
where g(A, b) = A^{-1}b is a function of A and b. The function g(A, b) is a continuous function of A and b at all values of the arguments such that A^{-1} exists. Assumption 2.18.1 implies that Q_xx^{-1} exists and thus g(A, b) is continuous at A = Q_xx. This justifies the application of the CMT in (6.3).
For a slightly different demonstration of (6.3), recall that (4.7) implies that
β̂ − β = Q̂_xx^{-1} Q̂_xe (6.4)
where
Q̂_xe = (1/n) Σ_{i=1}^n x_i e_i.
By the WLLN, Q̂_xe →p E( x_i e_i ) = 0, so
β̂ − β = Q̂_xx^{-1} Q̂_xe →p Q_xx^{-1} · 0 = 0,
which is the same as β̂ →p β.
Theorem 6.2.1 states that the OLS estimator β̂ converges in probability to β as n increases, and thus β̂ is consistent for β. In the stochastic order notation, Theorem 6.2.1 can be equivalently written as
β̂ = β + o_p(1). (6.6)
To illustrate the effect of sample size on the least-squares estimator consider the least-squares regression. We use the sample of 24,344 white men from the March 2009 CPS. Randomly sorting the observations, and sequentially estimating the model by least-squares, starting with the first 5 observations and continuing until the full sample is used, the sequence of estimates is displayed in Figure 6.1. You can see how the least-squares estimate changes with the sample size, but as the number of observations increases it settles down to the full-sample estimate β̂1 = 0.114.
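The behavior in Figure 6.1 is easy to reproduce by simulation. The sketch below is illustrative only: it uses simulated data (not the CPS sample) with an assumed true slope of 0.114, and re-estimates the least-squares coefficients on progressively larger subsamples.

```python
# Minimal illustration of Theorem 6.2.1: the OLS slope estimate settles down
# as the number of observations grows. Simulated data, not the CPS sample.
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([0.114, 1.0])        # assumed slope and intercept
n_full = 5000
x = np.column_stack([rng.normal(13, 3, n_full), np.ones(n_full)])
y = x @ beta_true + rng.normal(0, 0.5, n_full)

for n in (5, 25, 100, 500, 2500, 5000):
    b, *_ = np.linalg.lstsq(x[:n], y[:n], rcond=None)
    print(f"n = {n:5d}   slope estimate = {b[0]: .4f}")
```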
This shows that the normalized and centered estimator √n( β̂ − β ) is a function of the sample average (1/n) Σ_{i=1}^n x_i x_i′ and the normalized sample average (1/√n) Σ_{i=1}^n x_i e_i. Furthermore, the latter has mean zero so the central limit theorem (CLT, Theorem 5.7.1) applies.
Figure 6.1: OLS Estimation plotted against Number of Observations
The product x_i e_i is iid (since the observations are iid) and mean zero (since E( x_i e_i ) = 0). Define the k × k covariance matrix
Ω = E( x_i x_i′ e_i² ). (6.8)
We require the elements of Ω to be finite, written Ω < ∞. By the Expectation Inequality (B.15),
‖Ω‖ ≤ E‖ x_i x_i′ e_i² ‖ = E‖ x_i e_i ‖² = E( ‖x_i‖² e_i² ),
so Ω < ∞ holds if E‖x_i e_i‖² < ∞, or equivalently if E( ‖x_i‖² e_i² ) < ∞. Using ‖x_i e_i‖² = ‖x_i‖² e_i² and the Cauchy–Schwarz Inequality (B.17),
‖Ω‖ ≤ E‖ x_i x_i′ e_i² ‖ = E‖ x_i e_i ‖² = E( ‖x_i‖² e_i² ) ≤ ( E‖x_i‖⁴ )^{1/2} ( Ee_i⁴ )^{1/2}, (6.9)
which is finite if x_i and e_i have finite fourth moments. As e_i is a linear combination of y_i and x_i, it is sufficient that the observables have finite fourth moments (Theorem 2.18.1.6). We can then apply the CLT (Theorem 5.7.1)
and conclude that
(1/√n) Σ_{i=1}^n x_i e_i →d N(0, Ω) (6.11)
as n → ∞. Combining these results,
√n ( β̂ − β ) = Q̂_xx^{-1} ( (1/√n) Σ_{i=1}^n x_i e_i ) →d Q_xx^{-1} N(0, Ω) = N( 0, Q_xx^{-1} Ω Q_xx^{-1} )
as n → ∞, where the final equality follows from the property that linear combinations of normal vectors are also normal (Theorem B.9.1).
We have derived the asymptotic normal approximation to the distribution of the least-squares estimator.
where
V_β = Q_xx^{-1} Ω Q_xx^{-1}, (6.12)
Q_xx = E( x_i x_i′ ), and Ω = E( x_i x_i′ e_i² ). In particular,
β̂ = β + O_p( n^{-1/2} ), (6.13)
which is stronger than (6.6).
The matrix V_β = Q_xx^{-1} Ω Q_xx^{-1} is the variance of the asymptotic distribution of √n( β̂ − β ). Consequently, V_β is often referred to as the asymptotic covariance matrix of β̂. The expression V_β = Q_xx^{-1} Ω Q_xx^{-1} is called a sandwich form, as the matrix Ω is sandwiched between two copies of Q_xx^{-1}.
It is useful to compare the variance of the asymptotic distribution given in (6.12) and the
finite-sample conditional variance in the CEF model as given in (4.12):
V_β̂ = var( β̂ | X ) = (X′X)^{-1} (X′DX) (X′X)^{-1}. (6.14)
The expression V_β̂ is useful for practical inference (such as computation of standard errors and tests) since it is the variance of the estimator β̂, while V_β is useful for asymptotic theory as it is well defined in the limit as n goes to infinity. We will make use of both symbols and it will be advisable to adhere to this convention.
There is a special case where Ω and V_β simplify. We say that e_i is a Homoskedastic Projection Error when
cov( x_i x_i′, e_i² ) = 0. (6.15)
Figure 6.2: Density of Normalized OLS estimator with Double Pareto Error
Condition (6.15) holds in the homoskedastic linear regression model, but is somewhat broader. Under (6.15) the asymptotic variance formulae simplify as
Ω = E( x_i x_i′ ) E( e_i² ) = Q_xx σ² (6.16)
V_β = Q_xx^{-1} Ω Q_xx^{-1} = Q_xx^{-1} σ² ≡ V_β⁰. (6.17)
Figure 6.3: Density of Normalized OLS estimator with error process (6.18)
and u_i ∼ N(0, 1). We show the sampling distribution of √n( β̂ − β ) setting n = 100, for k = 1, 4, 6 and 8. As k increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 6.2 and 6.3 is that the N(0, 1) asymptotic approximation is never guaranteed to be accurate.
Thus if x1i and x2i are positively correlated (ρ > 0) then β̂1 and β̂2 are negatively correlated (and
vice-versa).
For illustration, Figure 6.4 displays the probability contours of the joint asymptotic distribution
of β̂1 − β1 and β̂2 − β2 when β1 = β2 = 0, σ1² = σ2² = σ² = 1, and ρ = 0.5. The coefficient estimates
are negatively correlated since the regressors are positively correlated. This means that if β̂1 is
unusually negative, it is likely that β̂2 is unusually positive, or conversely. It is also unlikely that
we will observe both β̂1 and β̂2 unusually large and of the same sign.
This finding that the correlation of the regressors is of opposite sign of the correlation of the coef-
ficient estimates is sensitive to the assumption of homoskedasticity. If the errors are heteroskedastic
then this relationship is not guaranteed.
This can be seen through a simple constructed example. Suppose that x_{1i} and x_{2i} only take the values {−1, +1}, symmetrically, with Pr(x_{1i} = x_{2i} = 1) = Pr(x_{1i} = x_{2i} = −1) = 3/8, and Pr(x_{1i} = 1, x_{2i} = −1) = Pr(x_{1i} = −1, x_{2i} = 1) = 1/8. You can check that the regressors are mean zero, unit variance and correlation 0.5, which is identical with the setting displayed in Figure 6.4.
Now suppose that the error is heteroskedastic. Specifically, suppose that E( e_i² | x_{1i} = x_{2i} ) = 5/4 and E( e_i² | x_{1i} ≠ x_{2i} ) = 1/4. You can check that E( e_i² ) = 1, E( x_{1i}² e_i² ) = E( x_{2i}² e_i² ) = 1 and E( x_{1i} x_{2i} e_i² ) = 7/8. Therefore
V_β = Q_xx^{-1} Ω Q_xx^{-1}
    = (16/9) [ 1  −1/2 ; −1/2  1 ] [ 1  7/8 ; 7/8  1 ] [ 1  −1/2 ; −1/2  1 ]
    = (2/3) [ 1  1/4 ; 1/4  1 ].
Thus the coefficient estimates β̂1 and β̂2 are positively correlated (their correlation is 1/4). The joint probability contours of their asymptotic distribution are displayed in Figure 6.5. We can see how the two estimates are positively associated.
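The matrix calculation in this example can be verified numerically. The following check (not part of the original text) builds Q_xx and Ω from the stated design and evaluates the sandwich form; it reproduces the correlation of 1/4.

```python
# Numerical check of the constructed heteroskedastic example.
import numpy as np

Qxx = np.array([[1.0, 0.5],
                [0.5, 1.0]])        # E(x x'): unit variances, correlation 1/2
Omega = np.array([[1.0, 7/8],
                  [7/8, 1.0]])      # E(x x' e^2) from the stated design
Qinv = np.linalg.inv(Qxx)
V = Qinv @ Omega @ Qinv             # sandwich form Qxx^{-1} Omega Qxx^{-1}
corr = V[0, 1] / np.sqrt(V[0, 0] * V[1, 1])
print(V)       # approximately [[2/3, 1/6], [1/6, 2/3]]
print(corr)    # 0.25: the coefficient estimates are positively correlated
```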
What we found through this example is that in the presence of heteroskedasticity there is no simple relationship between the correlation of the regressors and the correlation of the parameter estimates.
Figure 6.5: Contours of Joint Distribution of β̂1 and β̂2, heteroskedastic case
We can extend the above analysis to study the covariance between coefficient sub-vectors. For example, partitioning x_i′ = ( x_{1i}′, x_{2i}′ ) and β′ = ( β1′, β2′ ), we can write the general model as
y_i = x_{1i}′β1 + x_{2i}′β2 + e_i
and the coefficient estimates as β̂′ = ( β̂1′, β̂2′ ). Make the partitions
Q_xx = [ Q11  Q12 ; Q21  Q22 ],   Ω = [ Ω11  Ω12 ; Ω21  Ω22 ]. (6.19)
From (2.41),
Q_xx^{-1} = [ Q11·2^{-1}   −Q11·2^{-1} Q12 Q22^{-1} ; −Q22·1^{-1} Q21 Q11^{-1}   Q22·1^{-1} ]
where Q11·2 = Q11 − Q12 Q22^{-1} Q21 and Q22·1 = Q22 − Q21 Q11^{-1} Q12. Thus when the error is homoskedastic,
cov( β̂1, β̂2 ) = −σ² Q11·2^{-1} Q12 Q22^{-1}
where
V11 = Q11·2^{-1} ( Ω11 − Q12 Q22^{-1} Ω21 − Ω12 Q22^{-1} Q21 + Q12 Q22^{-1} Ω22 Q22^{-1} Q21 ) Q11·2^{-1} (6.21)
V21 = Q22·1^{-1} ( Ω21 − Q21 Q11^{-1} Ω11 − Ω22 Q22^{-1} Q21 + Q21 Q11^{-1} Ω12 Q22^{-1} Q21 ) Q11·2^{-1} (6.22)
V22 = Q22·1^{-1} ( Ω22 − Q21 Q11^{-1} Ω12 − Ω21 Q11^{-1} Q12 + Q21 Q11^{-1} Ω11 Q11^{-1} Q12 ) Q22·1^{-1} (6.23)
Thus the squared residual equals the squared error plus a deviation
ê_i² = e_i² − 2 e_i x_i′( β̂ − β ) + ( β̂ − β )′ x_i x_i′ ( β̂ − β ). (6.24)
So when we take the average of the squared residuals we obtain the average of the squared errors, plus two terms which are (hopefully) asymptotically negligible:
σ̂² = (1/n) Σ_{i=1}^n e_i² − 2 ( (1/n) Σ_{i=1}^n e_i x_i′ ) ( β̂ − β ) + ( β̂ − β )′ ( (1/n) Σ_{i=1}^n x_i x_i′ ) ( β̂ − β ). (6.25)
The WLLN shows that the first term converges in probability to σ² and that (1/n) Σ_{i=1}^n e_i x_i′ →p 0, and Theorem 6.2.1 shows that β̂ →p β. Hence (6.25) converges in probability to σ², as desired.
Finally, since n/(n − k) → 1 as n → ∞, it follows that
s² = ( n / (n − k) ) σ̂² →p σ².
Theorem 6.5.1 Under Assumption 6.1.1, σ̂² →p σ² and s² →p σ² as n → ∞.
The standard moment estimator of Q_xx is Q̂_xx defined in (6.1), and thus an estimator for Q_xx^{-1} is Q̂_xx^{-1}. Also, the standard estimator of σ² is the unbiased estimator s² defined in (4.21). Thus a natural plug-in estimator for V_β⁰ = Q_xx^{-1} σ² is V̂_β⁰ = Q̂_xx^{-1} s².
Consistency of V̂_β⁰ for V_β⁰ follows from consistency of the moment estimates Q̂_xx and s², and an application of the continuous mapping theorem. Specifically, Theorem 6.2.1 established that Q̂_xx →p Q_xx, and Theorem 6.5.1 established s² →p σ². The function V_β⁰ = Q_xx^{-1} σ² is a continuous function of Q_xx and σ² so long as Q_xx > 0, which holds true under Assumption 6.1.1.4. It follows by the CMT that
V̂_β⁰ = Q̂_xx^{-1} s² →p Q_xx^{-1} σ² = V_β⁰
so that V̂_β⁰ is consistent for V_β⁰, as desired.
Theorem 6.6.1 Under Assumption 6.1.1, V̂_β⁰ →p V_β⁰ as n → ∞.
It is instructive to notice that Theorem 6.6.1 does not require the assumption of homoskedasticity. That is, V̂_β⁰ is consistent for V_β⁰ regardless of whether the regression is homoskedastic or heteroskedastic. However, V_β⁰ = V_β = avar(β̂) only under homoskedasticity. Thus in the general case, V̂_β⁰ is consistent for a well-defined but non-useful object.
The first term is an average of the iid random variables x_i x_i′ e_i², and therefore by the WLLN converges in probability to its expectation, namely,
(1/n) Σ_{i=1}^n x_i x_i′ e_i² →p E( x_i x_i′ e_i² ) = Ω.
Technically, this requires that Ω has finite elements, which was shown in (6.10).
So to establish that Ω̂ is consistent for Ω it remains to show that
(1/n) Σ_{i=1}^n x_i x_i′ ( ê_i² − e_i² ) →p 0. (6.29)
There are multiple ways to do this. A reasonably straightforward yet slightly tedious derivation is to start by applying the Triangle Inequality (A.12):
‖ (1/n) Σ_{i=1}^n x_i x_i′ ( ê_i² − e_i² ) ‖ ≤ (1/n) Σ_{i=1}^n ‖ x_i x_i′ ( ê_i² − e_i² ) ‖ = (1/n) Σ_{i=1}^n ‖x_i‖² | ê_i² − e_i² |. (6.30)
Then recalling the expression for the squared residual (6.24), apply the Triangle Inequality and then the Schwarz Inequality (A.10) twice:
| ê_i² − e_i² | ≤ 2 | e_i x_i′( β̂ − β ) | + ( β̂ − β )′ x_i x_i′ ( β̂ − β )
             = 2 |e_i| | x_i′( β̂ − β ) | + | ( β̂ − β )′ x_i |²
             ≤ 2 |e_i| ‖x_i‖ ‖β̂ − β‖ + ‖x_i‖² ‖β̂ − β‖². (6.31)
Theorem 6.7.1 Under Assumption 6.1.2, as n → ∞, Ω̂ →p Ω and V̂_β^W →p V_β.
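In practice Ω̂ and the heteroskedasticity-robust covariance matrix estimator are computed directly from the OLS residuals. The sketch below is one possible implementation of the plug-in steps (function and variable names are illustrative, not from the text):

```python
# Heteroskedasticity-robust ("sandwich") covariance estimation:
#   Qxx_hat   = (1/n) sum x_i x_i'
#   Omega_hat = (1/n) sum x_i x_i' e_hat_i^2
#   V_hat     = Qxx_hat^{-1} Omega_hat Qxx_hat^{-1}
import numpy as np

def robust_covariance(y, X):
    n, k = X.shape
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # OLS coefficients
    e_hat = y - X @ beta_hat                          # residuals
    Qxx_hat = (X.T @ X) / n
    Omega_hat = (X.T * e_hat**2) @ X / n              # (1/n) sum x_i x_i' e_i^2
    Qinv = np.linalg.inv(Qxx_hat)
    V_beta = Qinv @ Omega_hat @ Qinv                  # asymptotic covariance estimate
    se = np.sqrt(np.diag(V_beta) / n)                 # standard errors for beta_hat
    return beta_hat, V_beta, se
```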
The alternative estimators Ṽ_β and V̄_β take the form (6.27) but with Ω̂ replaced by
Ω̃ = (1/n) Σ_{i=1}^n (1 − h_ii)^{-2} x_i x_i′ ê_i²
and
Ω̄ = (1/n) Σ_{i=1}^n (1 − h_ii)^{-1} x_i x_i′ ê_i²,
respectively. To show that these estimators are also consistent for V_β, given Ω̂ →p Ω, it is sufficient to show that the differences Ω̃ − Ω̂ and Ω̄ − Ω̂ converge in probability to zero as n → ∞.
The trick is to use the fact that the leverage values are asymptotically negligible:
h*_n = max_{1≤i≤n} h_ii = o_p(1). (6.33)
(See Theorem 6.22.1 in Section 6.22.) Then using the Triangle Inequality,
‖ Ω̄ − Ω̂ ‖ ≤ (1/n) Σ_{i=1}^n ‖ x_i x_i′ ‖ ê_i² | (1 − h_ii)^{-1} − 1 |
          ≤ ( (1/n) Σ_{i=1}^n ‖x_i‖² ê_i² ) | (1 − h*_n)^{-1} − 1 |.
The sum in parentheses can be shown to be O_p(1) under Assumption 6.1.2 by the same argument as in the proof of Theorem 6.7.1. (In fact, it can be shown to converge in probability to E( ‖x_i‖² e_i² ).) The term in absolute values is o_p(1) by (6.33). Thus the product is o_p(1), which means that Ω̄ = Ω̂ + o_p(1) →p Ω.
Similarly,
‖ Ω̃ − Ω̂ ‖ ≤ (1/n) Σ_{i=1}^n ‖ x_i x_i′ ‖ ê_i² | (1 − h_ii)^{-2} − 1 |
          ≤ ( (1/n) Σ_{i=1}^n ‖x_i‖² ê_i² ) | (1 − h*_n)^{-2} − 1 |
          = o_p(1).
Theorem 6.9.1 Under Assumption 6.1.2, as n → ∞, Ω̃ →p Ω, Ω̄ →p Ω, V̂_β →p V_β, Ṽ_β →p V_β, and V̄_β →p V_β.
Theorem 6.9.1 shows that the alternative covariance matrix estimators are also consistent for
the asymptotic covariance matrix.
We are often interested in parameters that are transformations of the coefficients. We can write the transformation as a function of the coefficients, e.g. θ = r(β) for some function r : R^k → R^q. The estimate of θ is
θ̂ = r( β̂ ).
By the continuous mapping theorem (Theorem 5.9.1) and the fact β̂ →p β we can deduce that θ̂ is consistent for θ.
Furthermore, if the transformation is sufficiently smooth, by the Delta Method (Theorem 5.10.3) we can show that θ̂ is asymptotically normal.
where
V_θ = R′ V_β R. (6.35)
When r(β) = R′β is linear in β, we can conformably partition β = ( β1′, β2′ )′ so that R′β = β1 for R = ( I, 0 )′. Then
V_θ = ( I  0 ) V_β ( I ; 0 ) = V11,
the upper-left sub-matrix V11 given in (6.21). In this case (6.34) states that
√n ( β̂1 − β1 ) →d N( 0, V11 ).
To illustrate the case of a nonlinear transformation, take the example θ = β_j / β_l for j ≠ l. Then
R = (∂/∂β) r(β) = ( 0  ⋯  1/β_l  ⋯  −β_j/β_l²  ⋯  0 )′ (6.37)
(with 1/β_l in the j-th entry and −β_j/β_l² in the l-th entry), so
V_θ = V_jj / β_l² + V_ll β_j² / β_l⁴ − 2 V_jl β_j / β_l³
where V_ab denotes the ab-th element of V_β.
For inference we need an estimate of the asymptotic variance matrix V θ = R0 Vβ R, and for
this it is typical to use a plug-in estimator. The natural estimator of R is the derivative evaluated
at the point estimates
R̂ = (∂/∂β) r( β̂ )′. (6.38)
The derivative in (6.38) may be calculated analytically or numerically. By analytically, we mean working out the formula for the derivative and replacing the unknowns by point estimates. For example, if θ = β_j / β_l, then (∂/∂β) r(β) is (6.37). However in some cases the function r(β) may be extremely complicated and a formula for the analytic derivative may not be easily available. In this case calculation by numerical differentiation may be preferable. Let δ_l = (0 ⋯ 1 ⋯ 0)′ be the unit vector with the “1” in the l-th place. Then the jl-th element of a numerical derivative R̂ is
R̂_jl = ( r_j( β̂ + δ_l ε ) − r_j( β̂ ) ) / ε
for some small ε.
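A minimal sketch of this numerical-derivative approach, combined with the plug-in variance formula of the next paragraph, is given below (the function names and the default step size ε are assumptions for illustration):

```python
# Numerical Jacobian R_hat with row l = d r(beta)/d beta_l (one-sided differences),
# then the delta-method variance estimate V_theta = R_hat' V_beta R_hat.
import numpy as np

def numerical_R(r, beta_hat, eps=1e-6):
    """k x q matrix R_hat = d r(beta_hat)'/d beta, by one-sided differences."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    r0 = np.atleast_1d(r(beta_hat))
    R = np.zeros((beta_hat.size, r0.size))
    for l in range(beta_hat.size):
        step = np.zeros(beta_hat.size)
        step[l] = eps                      # unit vector delta_l scaled by eps
        R[l, :] = (np.atleast_1d(r(beta_hat + step)) - r0) / eps
    return R

def delta_method_variance(r, beta_hat, V_beta, eps=1e-6):
    R = numerical_R(r, beta_hat, eps)
    return R.T @ V_beta @ R                # V_theta = R' V_beta R
```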
The estimate of V_θ is
V̂_θ = R̂′ V̂_β R̂. (6.39)
Alternatively, V̂_β⁰, Ṽ_β or V̄_β may be used in place of V̂_β. For example, the homoskedastic covariance matrix estimator is
V̂_θ⁰ = R̂′ V̂_β⁰ R̂ = R̂′ Q̂_xx^{-1} R̂ s². (6.40)
Given (6.38), (6.39) and (6.40) are simple to calculate using matrix operations.
As the primary justification for Vb θ is the asymptotic approximation (6.34), Vb θ is often called
an asymptotic covariance matrix estimator.
The estimator V̂_θ is consistent for V_θ under the conditions of Theorem 6.10.2 since V̂_β →p V_β by Theorem 6.7.1, and
R̂ = (∂/∂β) r( β̂ )′ →p (∂/∂β) r(β)′ = R
since β̂ →p β and the function (∂/∂β) r(β)′ is continuous.
Theorem 6.10.3 shows that V̂_θ is consistent for V_θ and thus may be used for asymptotic inference. In practice, we may set
V̂_θ̂ = R̂′ V̂_β̂ R̂ = n^{-1} R̂′ V̂_β R̂ (6.41)
When the justification is based on asymptotic theory we call s(β̂_j) or s(θ̂) an asymptotic standard error for β̂_j or θ̂. When reporting your results, it is good practice to report standard errors for each reported estimate, and this includes functions and transformations of your parameter estimates. This helps users of the work (including yourself) assess the estimation precision.
We illustrate using the log wage regression reported in (6.42) below. Consider three parameters of interest:
θ1 = 100 β1
(100 times the partial derivative of the conditional expectation of log wages with respect to education.)
θ2 = 100 β2 + 20 β3
(100 times the partial derivative of the conditional expectation of log wages with respect to experience, evaluated at experience = 10.)
θ3 = −50 β2 / β3
(The level of experience at which the partial derivative of the conditional expectation of log wages with respect to experience equals 0.)
\widehat{log(Wage)} = 0.118 education + 0.016 experience − 0.022 experience²/100 + 0.947
                      (0.008)           (0.006)            (0.012)                  (0.157)   (6.42)
The standard errors are the square roots of the diagonal elements of the Horn–Horn–Duncan covariance matrix estimate
V̂_β̂ = [  0.632   0.131  −0.143  −11.1 ;
         0.131   0.390  −0.731  −6.25 ;
        −0.143  −0.731   1.48    9.43 ;
       −11.1   −6.25    9.43    246   ] × 10^{-4}. (6.43)
We calculate that
θ̂1 = 100 β̂1 = 100 × 0.118 = 11.8
s(θ̂1) = √( 100² × 0.632 × 10^{-4} ) = 0.8
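The same plug-in calculation can be carried out for all three parameters of interest. The sketch below uses the rounded coefficients and the covariance matrix (6.43) reported above, so the last digits differ slightly from the unrounded values in the text (for example it returns θ̂2 ≈ 1.16 rather than 1.2, and θ̂3 ≈ 36 rather than 35):

```python
# Delta-method point estimates and standard errors for theta1, theta2, theta3,
# computed from the reported (rounded) coefficients and covariance estimate (6.43).
import numpy as np

beta = np.array([0.118, 0.016, -0.022, 0.947])
V = 1e-4 * np.array([[ 0.632,  0.131, -0.143, -11.1],
                     [ 0.131,  0.390, -0.731, -6.25],
                     [-0.143, -0.731,  1.48,   9.43],
                     [-11.1,  -6.25,   9.43,   246.]])

def report(theta, R):
    se = np.sqrt(R @ V @ R)                    # s(theta) = sqrt(R' V R)
    print(f"estimate = {theta: .2f}   s.e. = {se:.2f}")

report(100 * beta[0],                np.array([100., 0., 0., 0.]))    # theta1
report(100 * beta[1] + 20 * beta[2], np.array([0., 100., 20., 0.]))   # theta2
report(-50 * beta[1] / beta[2],
       np.array([0., -50 / beta[2], 50 * beta[1] / beta[2]**2, 0.]))  # theta3
```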
6.12 t statistic
Let θ = r(β) : R^k → R be any parameter of interest (for example, θ could be a single element of β), θ̂ its estimate and s(θ̂) its asymptotic standard error. Consider the statistic
t_n(θ) = ( θ̂ − θ ) / s(θ̂). (6.44)
Different writers have called (6.44) a t-statistic, a t-ratio, a z-statistic or a studentized statistic, sometimes using the different labels to distinguish between finite-sample and asymptotic inference. As the statistics themselves are always (6.44) we won't make such a distinction, and will simply refer to t_n(θ) as a t-statistic or a t-ratio. We also often suppress the parameter dependence, writing it as t_n. The t-statistic is a simple function of the estimate, its standard error, and the parameter.
By Theorems 6.10.2 and 6.10.3, √n( θ̂ − θ ) →d N(0, V_θ) and V̂_θ →p V_θ. Thus
t_n(θ) = ( θ̂ − θ ) / s(θ̂) = √n( θ̂ − θ ) / √V̂_θ →d N(0, V_θ) / √V_θ = Z ∼ N(0, 1).
The last equality is by the property that linear scales of normal distributions are normal.
Thus the asymptotic distribution of the t-ratio tn (θ) is the standard normal. Since this dis-
tribution does not depend on the parameters, we say that tn (θ) is asymptotically pivotal. In
special cases (such as the normal regression model, see Section 3.18), the statistic tn has an exact
t distribution, and is therefore exactly free of unknowns. In this case, we say that tn is exactly
pivotal. In general, however, pivotal statistics are unavailable and we must rely on asymptotically
pivotal statistics.
As we will see in the next section, it is also useful to consider the distribution of the absolute
d d
t-ratio |tn (θ)| . Since tn (θ) −→ Z, the continuous mapping theorem yields |tn (θ)| −→ |Z| . Letting
Φ(u) = Pr (Z ≤ u) denote the standard normal distribution function, we can calculate that the
distribution function of |Z| is
Pr( |Z| ≤ u ) = Pr( −u ≤ Z ≤ u )
             = Pr( Z ≤ u ) − Pr( Z < −u )
             = Φ(u) − Φ(−u)
             = 2Φ(u) − 1 ≡ Φ̄(u). (6.45)
Theorem 6.12.1 Under Assumptions 6.1.2 and 6.10.1, t_n(θ) →d Z ∼ N(0, 1) and |t_n(θ)| →d |Z|.
The asymptotic normality of Theorem 6.12.1 is used to justify confidence intervals and tests for
the parameters.
where c > 0 is a pre-specified constant. This confidence interval is symmetric about the point estimate θ̂, and its length is proportional to the standard error s(θ̂).
Equivalently, C_n is the set of parameter values for θ such that the t-statistic t_n(θ) is smaller (in absolute value) than c, that is
C_n = { θ : |t_n(θ)| ≤ c } = { θ : −c ≤ ( θ̂ − θ ) / s(θ̂) ≤ c }.
The coverage probability of this confidence interval is
Pr( θ ∈ C_n ) = Pr( |t_n(θ)| ≤ c ),
which is generally unknown. We can approximate the coverage probability by taking the asymptotic limit as n → ∞. Since |t_n(θ)| is asymptotically |Z| (Theorem 6.12.1), it follows that as n → ∞,
Pr( θ ∈ C_n ) → Pr( |Z| ≤ c ) = Φ̄(c)
where Φ̄(u) is given in (6.45). We call this the asymptotic coverage probability. Since the t-
ratio is asymptotically pivotal, the asymptotic coverage probability is independent of the parameter
θ, and is only a function of c.
As we mentioned before, an ideal confidence interval has a pre-specified probability coverage
1 − α, typically 90% or 95%. This means selecting the constant c so that
Φ̄(c) = 1 − α.
Effectively, this makes c a function of α, and can be backed out of a normal distribution table. For example, α = 0.05 (a 95% interval) implies c = 1.96 and α = 0.1 (a 90% interval) implies c = 1.645. Rounding 1.96 to 2, we obtain the most commonly used confidence interval in applied econometric practice:
C_n = [ θ̂ − 2 s(θ̂), θ̂ + 2 s(θ̂) ]. (6.47)
This is a useful rule of thumb. This asymptotic 95% confidence interval C_n is simple to compute
and can be roughly calculated from tables of coefficient estimates and standard errors. (Technically,
it is an asymptotic 95.4% interval, due to the substitution of 2.0 for 1.96, but this distinction is
overly precise.)
Confidence intervals are a simple yet effective tool to assess estimation uncertainty. When
reading a set of empirical results, look at the estimated coefficient estimates and the standard
errors. For a parameter of interest, compute the confidence interval Cn and consider the meaning
of the spread of the suggested values. If the range of values in the confidence interval are too wide
to learn about θ, then do not jump to a conclusion about θ based on the point estimate alone.
For illustration, consider the three examples presented in Section 6.11 based on the log wage regression for single Asian men.
Percentage return to education. A 95% asymptotic confidence interval is 11.8±1.96×0.8 = [10.2,
13.3].
Percentage return to experience for individuals with 12 years experience. A 90% asymptotic
confidence interval is 1.1 ± 1.645 × 0.4 = [0.5, 1.8].
Experience level which maximizes expected log wages. An 80% asymptotic confidence interval
is 35 ± 1.28 × 7 = [26, 44].
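These intervals are simple arithmetic on the reported estimates, standard errors, and normal critical values; a quick check (using the rounded values above, so endpoints may differ in the last digit):

```python
# Asymptotic confidence intervals: estimate +/- c * s.e., with c from the normal table.
for label, est, se, c in [("95% CI, return to education", 11.8, 0.8, 1.96),
                          ("90% CI, return to experience", 1.1, 0.4, 1.645),
                          ("80% CI, experience maximizing wages", 35.0, 7.0, 1.28)]:
    print(f"{label}: [{est - c*se:.1f}, {est + c*se:.1f}]")
```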
m(x) = E( y_i | x_i = x ) = x′β.
In some cases, we want to estimate m(x) at a particular point x. Notice that this is a (linear) function of β. Letting r(β) = x′β and θ = r(β), we see that m̂(x) = θ̂ = x′β̂ and R = x, so s(θ̂) = √( x′ V̂_β̂ x ). Thus an asymptotic 95% confidence interval for m(x) is
[ x′β̂ ± 1.96 √( x′ V̂_β̂ x ) ].
It is interesting to observe that if this is viewed as a function of x, the width of the confidence set is dependent on x.
To illustrate, we return to the log wage regression (3.11) of Section 3.7. The estimated regression
equation is
\widehat{log(Wage)} = x′β̂ = 0.155 x + 0.698
where x = education. The covariance matrix estimate from (4.35) is
V̂_β̂ = [  0.001  −0.015 ; −0.015   0.243 ].
Thus the 95% confidence interval for the regression takes the form
0.155 x + 0.698 ± 1.96 √( 0.001 x² − 0.030 x + 0.243 ).
The estimated regression and 95% intervals are shown in Figure 6.6. Notice that the confidence
bands take a hyperbolic shape. This means that the regression line is less precisely estimated for
very large and very small values of education.
Plots of the estimated regression line and confidence intervals are especially useful when the
regression includes nonlinear terms. To illustrate, consider the log wage regression (6.42) which
includes experience and its square, with covariance matrix (6.43). We are interested in plotting
the regression estimate and regression intervals as a function of experience. Since the regression
also includes education, to plot the estimates in a simple graph we need to fix education at a
specific value. We select education=12. This only affects the level of the estimated regression, since
education enters without an interaction. Define the points of evaluation
z(x) = ( 12, x, x²/100, 1 )′
where x = experience.
Thus the 95% regression interval for education=12, as a function of x = experience, is
z(x)′β̂ ± 1.96 √( z(x)′ V̂_β̂ z(x) ).
The estimated regression and 95% intervals are shown in Figure 6.7. The regression interval
widens greatly for small and large values of experience, indicating considerable uncertainty about
the effect of experience on mean wages for this population. The confidence bands take a more
complicated shape than in Figure 6.6 due to the nonlinear specification.
= σ²(x) + x′ V_β̂ x.
Assuming E( e²_{n+1} | x_{n+1} ) = σ², the natural estimate of this variance is σ̂² + x′ V̂_β̂ x, so a standard error for the forecast is ŝ(x) = √( σ̂² + x′ V̂_β̂ x ). Notice that this is different from the standard error for the conditional mean.
The conventional 95% forecast interval for y_{n+1} uses a normal approximation and sets
[ x′β̂ ± 2 ŝ(x) ].
It is difficult, however, to fully justify this choice. It would be correct if we have a normal approx-
imation to the ratio
( e_{n+1} − x′( β̂ − β ) ) / ŝ(x).
The difficulty is that the equation error en+1 is generally non-normal, and asymptotic theory cannot
be applied to a single observation. The only special exception is the case where en+1 has the exact
distribution N(0, σ2 ), which is generally invalid.
To get an accurate forecast interval, we need to estimate the conditional distribution of en+1
given xn+1 = x, which is a much more difficult task. Perhaps due to this difficulty, many applied
forecasters use the simple approximate interval [ x′β̂ ± 2 ŝ(x) ] despite the lack of a convincing justification.
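A sketch of this conventional forecast interval, x′β̂ ± 2ŝ(x) with ŝ(x) = (σ̂² + x′V̂_β̂ x)^{1/2}, is given below (illustrative names; as just discussed, the normal approximation behind the interval is difficult to justify):

```python
# Conventional (approximate) forecast interval for y_{n+1} given x_{n+1} = x.
import numpy as np

def forecast_interval(x, beta_hat, V_beta_hat, sigma2_hat):
    """x: point of evaluation; V_beta_hat: estimated variance of beta_hat."""
    point = x @ beta_hat
    s_x = np.sqrt(sigma2_hat + x @ V_beta_hat @ x)   # forecast standard error
    return point - 2 * s_x, point + 2 * s_x
```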
where V̂_θ = n V̂_θ̂. When q = 1, then W_n(θ) = t_n(θ)² is the square of the t-ratio. When q > 1,
Wn (θ) is typically called a Wald statistic. We are interested in its sampling distribution.
The asymptotic distribution of Wn (θ) is simple to derive given Theorem 6.10.2 and Theorem
6.10.3, which show that
√n( θ̂ − θ ) →d Z ∼ N( 0, V_θ )
and
V̂_θ →p V_θ.
It follows that
W_n(θ) = √n( θ̂ − θ )′ V̂_θ^{-1} √n( θ̂ − θ ) →d Z′ V_θ^{-1} Z, (6.49)
a quadratic in the normal random vector Z. Here we can appeal to a useful result from probability
theory. (See Theorem B.9.3 in the Appendix.)
The asymptotic distribution in (6.49) takes exactly this form. Note that V_θ > 0 since R is full rank under Assumption 6.10.1. It follows that W_n(θ) converges in distribution to a chi-square random variable.
Theorem 6.16.2 is used to justify multivariate confidence regions and multivariate hypothesis tests.
Under the additional assumption of conditional homoskedasticity, it has the same asymptotic
distribution as Wn (θ).
Theorem 6.17.1 Under Assumptions 6.1.2 and 6.10.1, and E( e_i² | x_i ) = σ², as n → ∞,
W_n⁰(θ) →d χ²_q.
V̂_θ̂ = [ 100  0  0  0 ; 0  100  20  0 ] V̂_β̂ [ 100  0 ; 0  100 ; 0  20 ; 0  0 ]
     = [ 0.632  0.103 ; 0.103  0.157 ]
with inverse
V̂_θ̂^{-1} = [ 1.77  −1.16 ; −1.16  7.13 ].
Thus the Wald statistic is
W_n(θ) = ( θ̂ − θ )′ V̂_θ̂^{-1} ( θ̂ − θ )
       = [ 11.8 − θ1 ; 1.2 − θ2 ]′ [ 1.77  −1.16 ; −1.16  7.13 ] [ 11.8 − θ1 ; 1.2 − θ2 ]
       = 1.77 (11.8 − θ1)² − 2.32 (11.8 − θ1)(1.2 − θ2) + 7.13 (1.2 − θ2)².
Figure 6.8: Confidence Region for Return to Experience and Return to Education
The 90% quantile of the χ22 distribution is 4.605 (we use the χ22 distribution as the dimension
of θ is two), so an asymptotic 90% confidence region for the two parameters is the interior of the
ellipse Wn (θ) = 4.605 which is displayed in Figure 6.8. Since the estimated correlation of the two
coefficient estimates is modest (about 0.3) the region is modestly elliptical.
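The Wald statistic and the confidence-ellipse check are straightforward to compute from the estimates above; a small sketch (the hypothesized value theta0 below is arbitrary, chosen only for illustration):

```python
# Wald statistic W_n(theta) = (theta_hat - theta)' V_theta_hat^{-1} (theta_hat - theta)
# and membership in the asymptotic 90% confidence region {theta : W_n(theta) <= 4.605}.
import numpy as np

theta_hat = np.array([11.8, 1.2])
V_theta_hat = np.array([[0.632, 0.103],
                        [0.103, 0.157]])
V_inv = np.linalg.inv(V_theta_hat)

def wald(theta):
    d = theta_hat - np.asarray(theta)
    return d @ V_inv @ d

theta0 = np.array([10.0, 1.0])                      # hypothetical test value
print(wald(theta0), wald(theta0) <= 4.605)          # is theta0 inside the 90% ellipse?
```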
Proposition 6.19.1 says that under the minimal conditions in which θ̂ is asymptotically normal, no semiparametric estimator can have a smaller asymptotic variance than θ̂.
To show that an estimator is semiparametrically efficient it is sufficient to show that it falls in the class covered by this Proposition. To show that the projection model falls in this class, we write β = Q_xx^{-1} Q_xy = g(μ) where μ = Ez_i and z_i = ( x_i x_i′, x_i y_i ). The class L2(g) equals the class of distributions
L4(β) = { F : Ey⁴ < ∞, E‖x‖⁴ < ∞, E( x_i x_i′ ) > 0 }.
which is a smooth function of the moments Q_yy, Q_yx and Q_xx. Similarly the estimator σ̂² equals
σ̂² = (1/n) Σ_{i=1}^n ê_i² = Q̂_yy − Q̂_yx Q̂_xx^{-1} Q̂_xy.
Since the variables y_i², y_i x_i′ and x_i x_i′ all have finite variances when F ∈ L4(β), the conditions of Proposition 6.19.1 are satisfied. We conclude:
= 1.
= x′( β + θ ),
using the homoskedasticity assumption ∫ ( y − x′β )² f1( y | x ) dy = σ². This means that in this
parametric submodel, the conditional mean is linear in x and the regression coefficient is β (θ) =
β + θ.
We now calculate the score for estimation of θ. Since
(∂/∂θ) log f( y, x | θ ) = (∂/∂θ) log( 1 + ( y − x′β )( x′θ )/σ² ) = ( x( y − x′β )/σ² ) / ( 1 + ( y − x′β )( x′θ )/σ² ),
the score is
s = (∂/∂θ) log f( y, x | θ0 ) = x e / σ².
The Cramer–Rao bound for estimation of θ (and therefore β(θ) as well) is
( E( s s′ ) )^{-1} = ( σ^{-4} E( (xe)(xe)′ ) )^{-1} = σ² Q_xx^{-1} = V_β⁰.
We have shown that there is a parametric submodel (6.51) whose Cramer-Rao bound for estimation
of β is identical to the asymptotic variance of the least-squares estimator, which therefore is the
semiparametric variance bound.
This result is similar to the Gauss-Markov theorem, in that it asserts the efficiency of the least-
squares estimator in the context of the homoskedastic regression model. The difference is that the
Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear
estimators, while Theorem 6.20.1 states that OLS has the smallest asymptotic variance among all
regular estimators. This is a much more powerful statement.
Since β̂ − β →p 0 it seems reasonable to guess that ê_i will be close to e_i if n is large.
We can bound the difference in (6.52) using the Schwarz inequality (A.10) to find
| ê_i − e_i | = | x_i′( β̂ − β ) | ≤ ‖x_i‖ ‖β̂ − β‖. (6.53)
To bound (6.53) we can use ‖β̂ − β‖ = O_p(n^{-1/2}) from Theorem 6.3.2, but we also need to bound the random variable ‖x_i‖. If the regressor is bounded, that is, ‖x_i‖ ≤ B < ∞, then |ê_i − e_i| ≤ B‖β̂ − β‖ = O_p(n^{-1/2}). However if the regressor does not have bounded support then
we have to be more careful.
The key is Theorem 5.12.1, which shows that E‖x_i‖^r < ∞ implies x_i = o_p( n^{1/r} ) uniformly in i, or
n^{-1/r} max_{1≤i≤n} ‖x_i‖ →p 0.
Applied to (6.53), this yields
| ê_i − e_i | ≤ ‖x_i‖ ‖β̂ − β‖ = o_p( n^{1/r} ) O_p( n^{-1/2} ) = o_p( n^{-1/2+1/r} ).
Theorem 6.21.1 Under Assumption 6.1.2 and E‖x_i‖^r < ∞, then uniformly in 1 ≤ i ≤ n,
ê_i = e_i + o_p( n^{-1/2+1/r} ). (6.54)
The rate of convergence in (6.54) depends on r. Assumption 6.1.2 requires r ≥ 4, so the rate of convergence is at least o_p(n^{-1/4}). As r increases, the rate improves. As a limiting case, from Theorem 5.12.1 we see that if E exp( t′x_i ) < ∞ for all ‖t‖ < ∞ then x_i = o_p(log n) uniformly in i, and thus ê_i = e_i + o_p( n^{-1/2} log n ).
We mentioned in Section 6.7 that there are multiple ways to prove the consistency of the covariance matrix estimator Ω̂. We now show that Theorem 6.21.1 provides one simple method to establish (6.32) and thus Theorem 6.7.1. Let q_n = max_{1≤i≤n} | ê_i − e_i | = o_p(n^{-1/4}). Since
ê_i² − e_i² = 2 e_i ( ê_i − e_i ) + ( ê_i − e_i )²,
then
‖ (1/n) Σ_{i=1}^n x_i x_i′ ( ê_i² − e_i² ) ‖ ≤ (1/n) Σ_{i=1}^n ‖ x_i x_i′ ‖ | ê_i² − e_i² |
  ≤ (2/n) Σ_{i=1}^n ‖x_i‖² |e_i| | ê_i − e_i | + (1/n) Σ_{i=1}^n ‖x_i‖² | ê_i − e_i |²
  ≤ (2/n) Σ_{i=1}^n ‖x_i‖² |e_i| q_n + (1/n) Σ_{i=1}^n ‖x_i‖² q_n²
  ≤ o_p( n^{-1/4} ).
For any r ≥ 2, h_ii = o_p(1) (uniformly in i ≤ n). Larger r implies a stronger rate of convergence; for example r = 4 implies h_ii = o_p( n^{-1/2} ).
Theorem 6.22.1 implies that under random sampling with finite variances and large samples, no individual observation should have a large leverage value. Consequently individual observations should not be influential, unless one of these conditions is violated.
Exercises
Exercise 6.1 Take the model yi = x01i β1 +x02i β2 +ei with Exi ei = 0. Suppose that β1 is estimated
by regressing yi on x1i only. Find the probability limit of this estimator. In general, is it consistent
for β1 ? If not, under what conditions is this estimator consistent for β1 ?
Exercise 6.2 Let y be n × 1, X be n × k (rank k). y = Xβ + e with E( x_i e_i ) = 0. Define the ridge regression estimator
β̂ = ( Σ_{i=1}^n x_i x_i′ + λ I_k )^{-1} ( Σ_{i=1}^n x_i y_i ) (6.56)
where λ > 0 is a fixed constant. Find the probability limit of β̂ as n → ∞. Is β̂ consistent for β?
Exercise 6.3 For the ridge regression estimator (6.56), set λ = cn where c > 0 is fixed as n → ∞. Find the probability limit of β̂ as n → ∞.
Exercise 6.4 Verify some of the calculations reported in Section 6.4. Specifically, suppose that x_{1i} and x_{2i} only take the values {−1, +1}, symmetrically, with Pr(x_{1i} = x_{2i} = 1) = Pr(x_{1i} = x_{2i} = −1) = 3/8, Pr(x_{1i} = 1, x_{2i} = −1) = Pr(x_{1i} = −1, x_{2i} = 1) = 1/8, E( e_i² | x_{1i} = x_{2i} ) = 5/4, and E( e_i² | x_{1i} ≠ x_{2i} ) = 1/4. Verify that
(a) Ex_{1i} = 0
(b) Ex_{1i}² = 1
(c) E( x_{1i} x_{2i} ) = 1/2
(d) E( e_i² ) = 1
(e) E( x_{1i}² e_i² ) = 1
(f) E( x_{1i} x_{2i} e_i² ) = 7/8.
Exercise 6.5 Show (6.20)-(6.23).
Exercise 6.6 Take the model
y_i = x_i′β + e_i
E( x_i e_i ) = 0
Ω = E( x_i x_i′ e_i² ).
Find the method of moments estimators ( β̂, Ω̂ ) for ( β, Ω ).
(a) In this model, are ( β̂, Ω̂ ) efficient estimators of ( β, Ω )?
Exercise 6.7 Of the variables (yi∗ , yi , xi ) only the pair (yi , xi ) are observed. In this case, we say
that yi∗ is a latent variable. Suppose
y_i* = x_i′β + e_i
E( x_i e_i ) = 0
y_i = y_i* + u_i
E( x_i u_i ) = 0
E( y_i* u_i ) = 0
yi = xi β + ei
E (ei | xi ) = 0
(a) Under the stated assumptions, are both estimators consistent for β?
Exercise 6.11 As in Exercise 3.21, use the CPS dataset and the subsample of white male Hispan-
ics. Estimate the regression
log(Wage) = β1 education + β2 experience + β3 experience²/100 + β4.
(b) Let θ be the ratio of the return to one year of education to the return to one year of experi-
ence. Write θ as a function of the regression coefficients and variables. Compute θb from the
estimated model.
(c) Write out the formula for the asymptotic standard error for θb as a function of the covariance
b Compute ŝ(θ)
matrix for β. b from the estimated model.
(d) Construct a 90% asymptotic confidence interval for θ from the estimated model.
(e) Compute the regression function at edu = 12 and experience = 20. Compute a 95% confidence interval for the regression function at this point.
(f) Consider an out-of-sample individual with 16 years of education and 5 years experience.
Construct an 80% forecast interval for their log wage and wage. [To obtain the forecast
interval for the wage, apply the exponential function to both endpoints.]
Chapter 7
Restricted Estimation
7.1 Introduction
In the linear projection model
yi = x0i β + ei
E (xi ei ) = 0
yi = x01i β1 + ei
E (xi ei ) = 0
At first glance this appears the same as the linear projection model, but there is one important
difference: the error e_i is uncorrelated with the entire regressor vector x_i′ = ( x_{1i}′, x_{2i}′ ), not just the included regressor x_{1i}.
In general, a set of q linear constraints on β takes the form
R0 β = c (7.1)
where R is k × q, rank(R) = q < k and c is q × 1. The assumption that R is full rank means that
the constraints are linearly independent (there are no redundant or contradictory constraints).
The constraint β2 = 0 discussed above is a special case of the constraint (7.1) with
R = [ 0 ; I ], (7.2)
where
SSE_n(β) = Σ_{i=1}^n ( y_i − x_i′β )² = y′y − 2 y′Xβ + β′X′Xβ. (7.4)
The estimator β̃_cls minimizes the sum of squared errors over all β such that β ∈ B_R, or equivalently such that the restriction (7.1) holds. We call β̃_cls the constrained least-squares (CLS) estimator.
We follow the convention of using a tilde “~” rather than a hat “^” to indicate that β̃_cls is a restricted estimator in contrast to the unrestricted least-squares estimator β̂, and write it as β̃_cls to be clear that the estimation method is CLS.
One method to find the solution to (7.3) uses the technique of Lagrange multipliers. The problem (7.3) is equivalent to the minimization of the Lagrangian
L( β, λ ) = (1/2) SSE_n(β) + λ′( R′β − c ) (7.5)
over ( β, λ ), where λ is a q × 1 vector of Lagrange multipliers. The first-order conditions for minimization of (7.5) are
(∂/∂β) L( β̃_cls, λ̃_cls ) = −X′y + X′X β̃_cls + R λ̃_cls = 0 (7.6)
and
(∂/∂λ) L( β̃_cls, λ̃_cls ) = R′β̃_cls − c = 0. (7.7)
Premultiplying (7.6) by R′(X′X)^{-1} we obtain
−R′β̂ + R′β̃_cls + R′(X′X)^{-1} R λ̃_cls = 0 (7.8)
where β̂ = (X′X)^{-1} X′y is the unrestricted least-squares estimator. Imposing R′β̃_cls − c = 0 from (7.7) and solving for λ̃_cls we find
λ̃_cls = [ R′(X′X)^{-1} R ]^{-1} ( R′β̂ − c ).
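Computationally, the first-order conditions (7.6)–(7.7) can also be solved directly as a single linear system in (β̃_cls, λ̃_cls). The sketch below is one possible implementation, not code from the text:

```python
# Constrained least squares by solving the stacked first-order conditions
#   [ X'X  R ] [ beta_tilde   ]   [ X'y ]
#   [ R'   0 ] [ lambda_tilde ] = [  c  ]
# which reproduces (7.6)-(7.7).
import numpy as np

def cls(y, X, R, c):
    k, q = R.shape
    A = np.block([[X.T @ X, R],
                  [R.T, np.zeros((q, q))]])
    b = np.concatenate([X.T @ y, c])
    sol = np.linalg.solve(A, b)
    return sol[:k], sol[k:]          # (beta_tilde, lambda_tilde)
```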
You can show (see Exercise 7.6) that in the homoskedastic linear regression model under (7.1),
E( s²_cls | X ) = σ² (7.12)
so that s²_cls is unbiased for σ².
β̃ = [ β̃1 ; 0 ]. (7.16)
It is not immediately obvious, but (7.9) and (7.16) are algebraically (and numerically) equivalent. To see this, the first component of (7.9) with (7.2) is
β̃1 = ( I  0 ) { β̂ − Q̂_xx^{-1} [ 0 ; I ] ( [ 0  I ] Q̂_xx^{-1} [ 0 ; I ] )^{-1} [ 0  I ] β̂ }
   = β̂1 + Q̂11·2^{-1} Q̂12 Q̂22^{-1} Q̂22·1 β̂2
   = Q̂11·2^{-1} ( Q̂1y − Q̂12 Q̂22^{-1} Q̂2y ) + Q̂11·2^{-1} Q̂12 Q̂22^{-1} Q̂22·1 Q̂22·1^{-1} ( Q̂2y − Q̂21 Q̂11^{-1} Q̂1y )
   = Q̂11·2^{-1} ( Q̂1y − Q̂12 Q̂22^{-1} Q̂21 Q̂11^{-1} Q̂1y )
   = Q̂11·2^{-1} ( Q̂11 − Q̂12 Q̂22^{-1} Q̂21 ) Q̂11^{-1} Q̂1y
   = Q̂11^{-1} Q̂1y
The CLS estimator is the special case when W_n = Q̂_xx, and we write this criterion function as
J_n⁰(β) = n ( β̂ − β )′ Q̂_xx ( β̂ − β ). (7.19)
To see the equality of CLS and minimum distance, rewrite the least-squares criterion as follows. Write the unconstrained least-squares fitted equation as y_i = x_i′β̂ + ê_i and substitute this equation into SSE_n(β) to obtain
SSE_n(β) = Σ_{i=1}^n ( y_i − x_i′β )²
         = Σ_{i=1}^n ( x_i′β̂ + ê_i − x_i′β )²
         = Σ_{i=1}^n ê_i² + ( β̂ − β )′ ( Σ_{i=1}^n x_i x_i′ ) ( β̂ − β )
         = n σ̂² + J_n⁰(β) (7.20)
where the third equality uses the fact that Σ_{i=1}^n x_i ê_i = 0, and the last line uses Σ_{i=1}^n x_i x_i′ = n Q̂_xx. The expression (7.20) only depends on β through J_n⁰(β). Thus minimization of SSE_n(β) and J_n⁰(β) are equivalent, and hence β̃_md = β̃_cls when W_n = Q̂_xx.
md cls
e
We can solve for βmd explicitly by the method of Lagrange multipliers. The Lagrangian is
1 ¡ ¢
L(β, λ) = Jn (β, W n ) + λ0 R0 β − c
2
which is minimized over (β, λ). The solution is
¡ ¢ ³ ´
e md = n R0 W −1 R −1 R0 β
λ b −c (7.21)
n
¡ ¢ ³ ´
β b − W −1 R R0 W −1 R −1 R0 β
e md = β b −c . (7.22)
n n
e md specializes to β
(See Exercise 7.7.) Comparing (7.22) with (7.10) we can see that β e cls when we
b
set W n = Qxx .
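Formula (7.22) translates directly into code. The sketch below is illustrative (the names are assumptions); setting W_n = Q̂_xx = X′X/n reproduces CLS, while W_n = V̂_β^{-1} gives the efficient minimum distance estimator discussed below.

```python
# Minimum distance estimator (7.22):
#   beta_md = beta_hat - W^{-1} R (R' W^{-1} R)^{-1} (R' beta_hat - c).
import numpy as np

def minimum_distance(beta_hat, W, R, c):
    WinvR = np.linalg.solve(W, R)                                   # W^{-1} R
    adjust = WinvR @ np.linalg.solve(R.T @ WinvR, R.T @ beta_hat - c)
    return beta_hat - adjust
```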
An obvious question is which weight matrix W n is best. We will address this question after we
derive the asymptotic distribution for a general weight matrix.
CHAPTER 7. RESTRICTED ESTIMATION 173
Assumption 7.5.2 W_n →p W > 0.
as n → ∞, where
V_β(W) = V_β − W^{-1} R ( R′ W^{-1} R )^{-1} R′ V_β − V_β R ( R′ W^{-1} R )^{-1} R′ W^{-1} + W^{-1} R ( R′ W^{-1} R )^{-1} R′ V_β R ( R′ W^{-1} R )^{-1} R′ W^{-1} (7.24)
and V_β = Q_xx^{-1} Ω Q_xx^{-1}.
where
V_cls = V_β − Q_xx^{-1} R ( R′ Q_xx^{-1} R )^{-1} R′ V_β − V_β R ( R′ Q_xx^{-1} R )^{-1} R′ Q_xx^{-1} + Q_xx^{-1} R ( R′ Q_xx^{-1} R )^{-1} R′ V_β R ( R′ Q_xx^{-1} R )^{-1} R′ Q_xx^{-1}.
is unknown this weight matrix cannot be used for a feasible estimator, but we can replace V_β^{-1} with a consistent estimate V̂_β^{-1} and the asymptotic distribution (and efficiency) are unchanged. We call the minimum distance estimator setting W_n = V̂_β^{-1} the efficient minimum distance estimator; it takes the form
β̃_emd = β̂ − V̂_β R ( R′ V̂_β R )^{-1} ( R′β̂ − c ). (7.25)
The asymptotic distribution of (7.25) can be deduced from Theorem 7.5.2. (See Exercises 7.11 and 7.12.)
as n → ∞, where
V_β* = V_β − V_β R ( R′ V_β R )^{-1} R′ V_β. (7.26)
Since
V_β* ≤ V_β, (7.27)
the estimator (7.25) has lower asymptotic variance than the unrestricted estimator. Furthermore, for any W,
V_β* ≤ V_β(W). (7.28)
Theorem 7.6.1 shows that the minimum distance estimator with the smallest asymptotic vari-
ance is (7.25). One implication is that the constrained least squares estimator is generally in-
efficient. The interesting exception is the case of conditional homoskedasticity, in which case the
optimal weight matrix is W = ( V_β⁰ )^{-1}, so in this case CLS is an efficient minimum distance estimator.
Otherwise when the error is conditionally heteroskedastic, there are asymptotic efficiency gains by
using minimum distance rather than least squares.
The fact that CLS is generally inefficient is counter-intuitive and requires some reflection to
understand. Standard intuition suggests to apply the same estimation method (least squares) to
the unconstrained and constrained models, and this is the most common empirical practice. But
Theorem 7.6.1 shows that this is not the efficient estimation method. Instead, the efficient minimum
distance estimator has a smaller asymptotic variance. Why? The reason is that the least-squares
estimator does not make use of the regressor x2i . It ignores the information E (x2i ei ) = 0. This
information is relevant when the error is heteroskedastic and the excluded regressors are correlated
with the included regressors.
Inequality (7.27) shows that the efficient minimum distance estimator β̃_emd has a smaller asymptotic variance than the unrestricted least squares estimator β̂. This means that estimation is more efficient by imposing correct restrictions when we use the minimum distance method.
y_i = x_{1i}′β1 + x_{2i}′β2 + e_i
with the exclusion restriction β2 = 0. We have introduced three estimators of β1. The first is unconstrained least-squares applied to (7.13), which can be written as
β̂1 = Q̂11·2^{-1} Q̂1y·2.
The second is the CLS estimator,
β̃1,cls = Q̂11^{-1} Q̂1y.
Its asymptotic variance can be deduced from Theorem 7.5.3, but it is simpler to apply the CLT directly to show that
avar( β̃1,cls ) = Q11^{-1} Ω11 Q11^{-1}. (7.29)
The third estimator of β1 is the efficient minimum distance estimator. Applying (7.25), it equals
β̃1,md = β̂1 − V̂12 V̂22^{-1} β̂2. (7.30)
In general, the three estimators are different, and they have different asymptotic variances.
It is quite instructive to compare the asymptotic variances of the CLS and unconstrained least-
squares estimators to assess whether or not the constrained estimator is necessarily more efficient
than the unconstrained estimator.
First, consider the case of conditional homoskedasticity. In this case the two covariance matrices
simplify to
b 1 ) = σ 2 Q−1
avar(β 11·2
and
e
avar(β 2 −1
1,cls ) = σ Q11 .
If Q12 = 0 (so x1i and x2i are orthogonal) then these two variance matrices equal and the two
estimators have equal asymptotic efficiency. Otherwise, since Q12 Q−1
22 Q21 ≥ 0, then Q11 ≥ Q11 −
−1
Q12 Q22 Q21 , and consequently
¡ ¢−1 2
Q−1 2 −1
11 σ ≤ Q11 − Q12 Q22 Q21 σ .
b )= 2
avar(β 1 (7.32)
3
e 1,cls ) = 1
avar(β (7.33)
7.9 Misspecification
e if the constraint (7.1) is incorrect?
What are the consequences for a constrained estimator β
To be specific, suppose that
R0 β = c∗
where c∗ is not necessarily equal to c.
This situation is a generalization of the analysis of “omitted variable bias” from Section 2.23,
where we found that the short regression (e.g. (7.15)) is estimating a different projection coefficient
than the long regression (e.g. (7.13)).
One mechanical answer is that we can use the formula (7.22) for the minimum distance estimator to find that
β̃_md →p β*_md = β − W^{-1} R ( R′ W^{-1} R )^{-1} ( c* − c ). (7.37)
The second term, W^{-1} R ( R′ W^{-1} R )^{-1} ( c* − c ), shows that imposing an incorrect constraint leads to inconsistency — an asymptotic bias. We can call the limiting value β*_md the minimum-distance projection coefficient or the pseudo-true value implied by the restriction.
However, we can say more.
For example, we can describe some characteristics of the approximating projections. The CLS estimator converges to the projection coefficient
β*_cls = argmin_{R′β = c} E( y_i − x_i′β )²,
the best linear predictor subject to the constraint (7.1). The minimum distance estimator converges to
β*_md = argmin_{R′β = c} ( β − β0 )′ W ( β − β0 )
where β0 is the true coefficient. That is, β*_md is the coefficient vector satisfying (7.1) closest to the true value in the weighted Euclidean norm. These calculations show that the constrained estimators are still reasonable in the sense that they produce good approximations to the true coefficient, conditional on being required to satisfy the constraint.
We can also show that β̃_md has an asymptotic normal distribution. The trick is to define the pseudo-true value
β*_n = β − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} ( c* − c ). (7.38)
(Note that (7.37) and (7.38) are different!) Then
√n ( β̃_md − β*_n ) = √n ( β̂ − β ) − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} √n ( R′β̂ − c* )
                  = ( I − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} R′ ) √n ( β̂ − β )
                  →d ( I − W^{-1} R ( R′ W^{-1} R )^{-1} R′ ) N( 0, V_β )
                  = N( 0, V_β(W) ). (7.39)
In particular,
√n ( β̃_emd − β*_n ) →d N( 0, V_β* ).
This means that even when the constraint (7.1) is misspecified, the conventional covariance matrix
estimator (7.35) and standard errors (7.36) are appropriate measures of the sampling variance,
though the distributions are centered at the pseudo-true values (or projections) β∗n rather than β.
The fact that the estimators are biased is an unavoidable consequence of misspecification.
An alternative approach to the asymptotic distribution theory under misspecification uses the
concept of local alternatives. It is a technical device which might seem a bit artificial, but it is a
powerful method to derive useful distributional approximations in a wide variety of contexts. The
idea is to index the true coefficient βn by n via the relationship
R′β_n = c + δ n^{-1/2}. (7.40)
Equation (7.40) specifies that β_n violates (7.1) and thus the constraint is misspecified. However, the constraint is “close” to correct, as the difference R′β_n − c = δ n^{-1/2} is “small” in the sense that it decreases with the sample size n. We call (7.40) local misspecification.
The asymptotic theory is then derived as n → ∞ under the sequence of probability distributions
with the coefficients βn . The way to think about this is that the true value of the parameter is
βn , and it is “close” to satisfying (7.1). The reason why the deviation is proportional to n−1/2 is
because this is the only choice under which the localizing parameter δ appears in the asymptotic
distribution but does not dominate it. The best way to see this is to work through the asymptotic
approximation.
Since β_n is the true coefficient value, then y_i = x_i′β_n + e_i and we have the standard representation for the unconstrained estimator, namely
√n ( β̂ − β_n ) = ( (1/n) Σ_{i=1}^n x_i x_i′ )^{-1} ( (1/√n) Σ_{i=1}^n x_i e_i ) →d N( 0, V_β ). (7.41)
There is no difference under fixed (classical) or local asymptotics, since the right-hand-side is
independent of the coefficient β n .
A difference arises for the constrained estimator. Using (7.40), c = R′β_n − δ n^{-1/2}, so
R′β̂ − c = R′( β̂ − β_n ) + δ n^{-1/2}
and
β̃_md = β̂ − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} ( R′β̂ − c )
     = β̂ − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} R′( β̂ − β_n ) − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} δ n^{-1/2}.
It follows that
√n ( β̃_md − β_n ) = ( I − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} R′ ) √n ( β̂ − β_n ) − W_n^{-1} R ( R′ W_n^{-1} R )^{-1} δ.
The first term is asymptotically normal (from (7.41)). The second term converges in probability to a constant. This is because the n^{-1/2} local scaling in (7.40) is exactly balanced by the √n scaling of the estimator. No alternative rate would have produced this result.
Consequently, we find that the asymptotic distribution equals
√n ( β̃_md − β_n ) →d N( 0, V_β ) − W^{-1} R ( R′ W^{-1} R )^{-1} δ = N( δ*, V_β(W) ) (7.42)
where
δ* = −W^{-1} R ( R′ W^{-1} R )^{-1} δ.
The asymptotic distribution (7.42) is an approximation of the sampling distribution of the
restricted estimator under misspecification. The distribution (7.42) contains an asymptotic bias
component δ ∗ . The approximation is not fundamentally different from (7.39) — they both have the
same asymptotic variances, and both reflect the bias due to misspecification. The difference is that
(7.39) puts the bias on the left-side of the convergence arrow, while (7.42) has the bias on the
right-side. There is no substantive difference between the two, but (7.42) is more convenient for
some purposes, such as the analysis of the power of tests, as we will explore in the next chapter.
β̃_md = argmin_{r(β) = 0} J_n(β) (7.45)
where SSE_n(β) and J_n(β) are defined in (7.4) and (7.17), respectively. The solutions minimize the Lagrangians
L( β, λ ) = (1/2) SSE_n(β) + λ′ r(β) (7.46)
or
L( β, λ ) = (1/2) J_n(β) + λ′ r(β) (7.47)
over ( β, λ ).
Computationally, there is in general no explicit expression for the solutions so they must be
found numerically. Algorithms to numerically solve (7.44) and (7.45) are known as constrained
optimization methods, and are available in programming languages including Matlab, Gauss and
R.
Assumption 7.10.1 r(β) = 0 with rank(R) = q, where R = (∂/∂β) r(β)′.
∂β
The asymptotic distribution is a simple generalization of the case of a linear constraint, but the
proof is more delicate.
Theorem 7.10.1 Under Assumptions 6.1.2, 7.10.1, and 7.5.2, for β̃ = β̃_md and β̃ = β̃_cls defined in (7.44) and (7.45),
√n ( β̃ − β ) →d N( 0, V_β(W) )
The asymptotic variance matrix for the efficient minimum distance estimator can be estimated by
V̂_β* = V̂_β − V̂_β R̂ ( R̂′ V̂_β R̂ )^{-1} R̂′ V̂_β
where
R̂ = (∂/∂β) r( β̃_md )′. (7.48)
Standard errors for the elements of β̃_md are the square roots of the diagonal elements of V̂*_β̃ = n^{-1} V̂_β*.
r(β) ≥ 0 (7.49)
β1 ≥ 0.
and
β̃_md = argmin_{r(β) ≥ 0} J_n(β). (7.51)
Except in special cases the constrained estimators do not have simple algebraic solutions. An important exception is when there is a single non-negativity constraint, e.g. β1 ≥ 0 with q = 1. In this case the constrained estimator can be found by a two-step approach. First compute the unconstrained estimator β̂. If β̂1 ≥ 0 then β̃ = β̂. Second, if β̂1 < 0 then impose β1 = 0 (eliminate the regressor X1) and re-estimate. This yields the constrained least-squares estimator. While this method works when there is a single non-negativity constraint, it does not immediately generalize to other contexts.
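For the single constraint β1 ≥ 0 the two-step rule is easy to code. A sketch (assuming the constrained coefficient is the first column of X; illustrative only):

```python
# Two-step constrained estimator for the single constraint beta_1 >= 0:
# use OLS if the unconstrained estimate satisfies the constraint,
# otherwise drop the first regressor and re-estimate with beta_1 set to 0.
import numpy as np

def ols(y, X):
    return np.linalg.solve(X.T @ X, X.T @ y)

def nonneg_first_coef(y, X):
    b = ols(y, X)
    if b[0] >= 0:
        return b
    b_rest = ols(y, X[:, 1:])
    return np.concatenate([[0.0], b_rest])
```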
The computational problems (7.50) and (7.51) are examples of quadratic programming
problems. Quick and easy computer algorithms are available in programming languages including
Matlab, Gauss and R.
Proof of Theorem 7.10.1. We show the result for the minimum distance estimator β̃ = β̃_md, as the proof for the constrained least-squares estimator is similar. For simplicity we assume that the
constrained estimator is consistent, β̃ →p β. This can be shown with more effort, but requires a
deeper treatment than appropriate for this textbook.
For each element r_j(β) of the q-vector r(β), by the mean value theorem there exists a β*_j on the line segment joining β̃ and β such that
r_j( β̃ ) = r_j( β ) + (∂/∂β) r_j( β*_j )′ ( β̃ − β ). (7.52)
where R̂ is defined in (7.48).
Premultiplying by R*_n′ W_n^{-1}, inverting, and using (7.53), we find
λ̃ = ( R*_n′ W_n^{-1} R̂ )^{-1} R*_n′ ( β̂ − β̃ ) = ( R*_n′ W_n^{-1} R̂ )^{-1} R*_n′ ( β̂ − β ).
Thus
β̃ − β = ( I − W_n^{-1} R̂ ( R*_n′ W_n^{-1} R̂ )^{-1} R*_n′ ) ( β̂ − β ). (7.54)
■
Exercises
Exercise 7.1 In the model y = X 1 β1 + X 2 β2 + e, show directly from definition (7.3) that the
CLS estimate of β = (β 1 , β2 ) subject to the constraint that β 2 = 0 is the OLS regression of y on
X 1.
Exercise 7.2 In the model y = X 1 β1 + X 2 β2 + e, show directly from definition (7.3) that the
CLS estimate of β = (β1 , β2 ), subject to the constraint that β1 = c (where c is some given vector)
is the OLS regression of y − X 1 c on X 2 .
Exercise 7.3 In the model y = X 1 β1 + X 2 β2 + e, with X 1 and X 2 each n × k, find the CLS
estimate of β = (β 1 , β2 ), subject to the constraint that β1 = −β2 .
(a) R′β̂ − c = R′ (X′X)^{-1} X′e
(b) β̃_cls − β = (X′X)^{-1} X′e − (X′X)^{-1} R ( R′(X′X)^{-1} R )^{-1} R′ (X′X)^{-1} X′e
(c) ẽ = ( I − P + A ) e for P = X (X′X)^{-1} X′ and some matrix A (find this matrix A).
Exercise 7.7 Verify (7.21) and (7.22), and that the minimum distance estimator β̃_md with W_n = Q̂_xx equals the CLS estimator.
Exercise 7.10 Prove Theorem 7.5.3. (Hint: Use that CLS is a special case of Theorem 7.5.2.)
Exercise 7.15 As in Exercise 6.11 and 3.21, use the CPS dataset and the subsample of white male
Hispanics.
log(Wage) = β1 education + β2 experience + β3 experience²/100 + β4 Married1 + β5 Married2 + β6 Married3 + β7 Widowed + β8 Divorced + β9 Separated + β10
where Married1, Married2, and Married3 are the first three marital status codes as listed in Section 3.19.
(b) Estimate the equation using constrained least-squares, imposing the constraints β4 = β7 and β8 = β9, and report the estimates and standard errors.
(c) Estimate the equation using efficient minimum distance, imposing the same constraints, and report the estimates and standard errors.
(d) Under what constraint on the coefficients is the wage equation non-decreasing in experience
for experience up to 50?
(e) Estimate the equation imposing β4 = β7 , β8 = β9 , and the inequality from part (d).
Chapter 8
Hypothesis Testing
8.1 Hypotheses
In Chapter 7 we discussed estimation subject to restrictions, including linear restrictions (7.1),
nonlinear restrictions (7.43), and inequality restrictions (7.49). In this chapter we discuss tests of
such restrictions.
Hypothesis tests attempt to assess whether there is evidence to contradict a proposed parametric
restriction. Let
θ = r(β)
be a q × 1 parameter of interest where r : Rk → Θ ⊂ Rq is some transformation. For example, θ
may be a single coefficient, e.g. θ = βj , the difference between two coefficients, e.g. θ = βj − β , or
the ratio of two coefficients, e.g. θ = βj /β .
A point hypothesis concerning θ is a proposed restriction such as
θ = θ0 (8.1)
relative to a critical value c. The hypothesis test then consists of the decision rule
1. Accept H0 if Tn ≤ c,
2. Reject H0 if Tn > c.
A test statistic Tn should be designed so that small values are likely when H0 is true and large
values are likely when H1 is true. There is a well developed statistical theory concerning the design
of optimal tests. We will not review that theory here, but instead refer the reader to Lehmann
and Romano (2005). In this chapter we will summarize the main approaches to the design of test
statistics.
The most commonly used test statistic is the absolute value of the t-statistic
where
t_n(θ) = ( θ̂ − θ ) / s(θ̂) (8.3)
is the t-statistic from (6.44), where θ̂ is a point estimate and s(θ̂) its standard error. t_n is an
appropriate statistic when testing hypotheses on individual coefficients or real-valued parameters
θ = h(β) and θ0 is the hypothesized value. Quite typically, θ0 = 0, as interest focuses on whether
or not a coefficient equals zero, but this is not the only possibility. For example, interest may focus
on whether an elasticity θ equals 1, in which case we may wish to test H0 : θ = 1.
The finite sample size of the test is defined as the supremum of (8.4) across all data distributions
which satisfy H0 . A primary goal of test construction is to limit the incidence of Type I error by
bounding the size of the test.
For the reasons discussed in Chapter 6, in typical econometric models the exact sampling
distributions of estimators and test statistics are unknown and hence we cannot explicitly calculate
(8.4). Instead, we typically rely on asymptotic approximations. We start by assuming that the test
statistic has an asymptotic distribution under H0 . That is, when H0 is true
T_n →d T (8.5)
We see that the asymptotic size of the test is a simple function of the asymptotic null distribution G
and the critical value c. For example, the asymptotic size of a test based on the absolute t-statistic
with critical value c is 2 (1 − Φ(c)).
In the dominant approach to hypothesis testing, the researcher pre-selects a significance level
α ∈ (0, 1) and then selects c so that the (asymptotic) size is no larger than α. When the asymptotic
null distribution G is pivotal, we can accomplish this by setting c equal to the (1 − α)th quantile of
the distribution G. (If the distribution G is not pivotal, more complicated methods must be used,
pointing out the great convenience of using asymptotically pivotal test statistics.) We call c the
asymptotic critical value because it has been selected from the asymptotic null distribution.
For example, since 2 (1 − Φ(1.96)) = 0.05, it follows that the 5% asymptotic critical value for the absolute
t-statistic is c = 1.96.
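As a small numerical sketch (assuming Python with scipy available, which is not part of the text), the asymptotic critical values discussed above can be obtained from the normal quantile function:

# Asymptotic critical values from the standard normal distribution.
from scipy.stats import norm

alpha = 0.05
c_two_sided = norm.ppf(1 - alpha / 2)   # solves 2*(1 - Phi(c)) = alpha; approximately 1.96
c_one_sided = norm.ppf(1 - alpha)       # solves 1 - Phi(c) = alpha; approximately 1.645
print(c_two_sided, c_one_sided)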
8.4 t tests
As we mentioned earlier, the most common test of the one-dimensional hypothesis
H0 : θ = θ0 (8.6)
is the absolute value of the t-statistic (8.3). We now formally state its asymptotic null distribution,
which is a simple application of Theorem 6.12.1.
The theorem shows that asymptotic critical values can be taken from the normal distribution
table.
The alternative hypothesis (8.7) is sometimes called a “two-sided” alternative. Sometimes we
are interested in testing for one-sided alternatives such as
H1 : θ > θ0 (8.8)
or
H1 : θ < θ0 . (8.9)
Tests of (8.6) against (8.8) or (8.9) are based on the signed t-statistic tn = tn (θ0 ). The hypothesis
(8.6) is rejected in favor of (8.8) if tn > c where c satisfies α = 1 − Φ(c). Negative values of tn are
not taken as evidence against H0 , as point estimates θb less than θ0 do not point to (8.8). Since the
critical values are taken from the single tail of the normal distribution, they are smaller than for
two-sided tests. Specifically, the asymptotic 5% critical value is c = 1.645. Thus, we reject (8.6) in
favor of (8.8) if tn > 1.645.
Conversely, tests of (8.6) against (8.9) reject H0 for negative t-statistics, e.g. if tn ≤ −c. For this
alternative large positive values of tn are not evidence against H0 . An asymptotic 5% test rejects
if tn < −1.645.
There seems to be an ambiguity. Should we use the two-sided critical value 1.96 or the one-
sided critical value 1.645? The answer is that we should use one-sided tests and critical values only
when the parameter space is known to satisfy a one-sided restriction such as θ ≥ θ0 . This is when
the test of (8.6) against (8.8) makes sense. If the restriction θ ≥ θ0 is not known a priori, then
imposing this restriction to test (8.6) against (8.8) does not make sense. Since linear regression
coefficients typically do not have a priori sign restrictions, we conclude that two-sided tests are
generally appropriate.
We call πn (θ) the power function; it is written as a function of θ to indicate its dependence on
the true value of the parameter θ.
In the dominant approach to hypothesis testing, the goal of test construction is to have high
power, subject to the constraint that the size of the test is lower than the pre-specified significance
level. Generally, the power of a test depends on the true value of the parameter θ, and for a well
behaved test the power is increasing both as θ moves away from the null hypothesis θ0 and as the
sample size n increases.
Given the two possible states of the world (H0 or H1 ) and the two possible decisions (Accept H0
or Reject H0 ), there are four possible pairings of states and decisions as is depicted in the following
chart.

                     H0 true               H1 true
  Accept H0      Correct decision       Type II error
  Reject H0       Type I error         Correct decision
Given a test statistic Tn , increasing the critical value c increases the acceptance region S0 while
decreasing the rejection region S1 . This decreases the likelihood of a Type I error (decreases the
size) but increases the likelihood of a Type II error (decreases the power). Thus the choice of c
involves a trade-off between size and the power. This is why the significance level α of the test
cannot be set arbitrarily small. (Otherwise the test will not have meaningful power.)
It is important to consider the power of a test when interpreting hypothesis tests, as an overly
narrow focus on size can lead to poor decisions. For example, it is trivial to design a test which
has perfect size yet has trivial power. Specifically, for any hypothesis we can use the following test:
Generate a random variable U ∼ U [0, 1] and reject H0 if U < α. This test has exact size of α. Yet
the test also has power precisely equal to α. When the power of a test equals the size, we say that
the test has trivial power. Nothing is learned from such a test.
greater than the 5% asymptotic critical value of 1.96. Therefore we reject the hypothesis that
union membership does not affect wages for men. In this case, we can say that union membership
is statistically significant for men. However, the absolute t-statistic for the coefficient on “Female
Union Member” is 0.022/0.020 = 1.10, which is less than 1.96 and therefore we do not reject the
hypothesis that union membership does not affect wages for women. In this case we find that
membership for women is not statistically significant.
When a test accepts a null hypothesis (when a test is not statistically significant), a common
misinterpretation is that this is evidence that the null hypothesis is true. This is incorrect. Failure
to reject is by itself not evidence. Without an analysis of power, we do not know the likelihood of
making a Type II error, and thus are uncertain. In our wage example, it would be a mistake to
write that “the regression finds that female union membership has no effect on wages”. This is an
incorrect and most unfortunate interpretation. The test has failed to reject the hypothesis that the
coefficient is zero, but that does not mean that the coefficient is actually zero.
When a test rejects a null hypothesis (when a test is statistically significant) it does not mean
that the null hypothesis is false (as we could be making a Type I error) but we know that this
event is unlikely. Thus it is appropriate to interpret the rejection as an evidential statement: The
null hypothesis appears to be incorrect. However, while we can conclude that the true value θ is
numerically different than the hypothesized value θ0 , the test alone does not tell us that the true
θ value is meaningfully different than θ0 . That is, the test alone does not tell us whether the deviation
of θ from θ0 is meaningful with respect to the interpretation of the coefficient. This is where an examination of
confidence intervals can be quite helpful.
8.7 P-Values
Continuing with the wage regression estimates reported in Table 4.1, consider another question:
Does marriage status affect wages? To test the hypothesis that marriage status has no effect on
wages, we examine the t-statistics for the coefficients on “Married Male” and “Married Female”
in Table 4.1, which are 0.180/0.008 = 22.5 and 0.016/0.008 = 2.0, respectively. Both exceed the
asymptotic 5% critical value of 1.96, so we reject the hypothesis for both men and women. But the
statistic for men is exceptionally high, and that for women is only slightly above the critical value.
Suppose in contrast that the t-statistic had been 1.9, which is less than the critical value. This would
lead to the decision “Accept H0 ” rather than “Reject H0 ”. Should we really be making a different
decision if the t-statistic is 1.9 rather than 2.0? The difference in values is small, shouldn’t the
difference in the decision be also small? Thinking through these examples it seems unsatisfactory
to simply report “Accept H0 ” or “Reject H0 ”. These two decisions do not summarize the evidence.
Instead, the magnitude of the statistic Tn suggests a “degree of evidence” against H0 . How can we
take this into account?
The answer is to report what is known as the asymptotic p-value
pn = 1 − G(Tn ).
Since the distribution function G is monotonically increasing, the p-value is a monotonically de-
creasing function of Tn and is an equivalent test statistic. Instead of rejecting H0 at the significance
level α if Tn > c, we can reject H0 if pn < α. Thus it is sufficient to report pn , and let the reader
decide.
It is instructive to interpret pn as the marginal significance level: the largest value of α for
which the test Tn “rejects” the null hypothesis. That is, pn = 0.11 means that Tn rejects H0 for all
significance levels greater than 0.11, but fails to reject H0 for significance levels less than 0.11.
Furthermore, the asymptotic p-value has a very convenient asymptotic null distribution. Since
Tn →d T under H0 , then pn = 1 − G(Tn ) →d 1 − G(T ), which has the distribution

Pr (1 − G(T ) ≤ u) = Pr (1 − u ≤ G(T ))
                   = 1 − Pr (T ≤ G⁻¹(1 − u))
                   = 1 − G (G⁻¹(1 − u))
                   = 1 − (1 − u)
                   = u,

which is the uniform distribution on [0, 1]. Thus pn →d U[0, 1]. This means that the “unusualness”
of pn is easier to interpret than the “unusualness” of Tn .
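As a concrete sketch (again assuming Python with scipy, and taking the absolute t-statistic with standard normal null distribution), the asymptotic p-value is computed as follows:

# Asymptotic p-value for the absolute t-statistic:
# G(u) = Pr(|Z| <= u) = 2*Phi(u) - 1, so p_n = 1 - G(T_n) = 2*(1 - Phi(|t_n|)).
from scipy.stats import norm

t_stat = 2.0
p_value = 2 * (1 - norm.cdf(abs(t_stat)))
print(p_value)   # approximately 0.046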
An important caveat is that the p-value pn should not be interpreted as the probability that
either hypothesis is true. For example, a common mis-interpretation is that pn is the probability
“that the null hypothesis is false.” This is incorrect. Rather, pn is a measure of the strength of
information against the null hypothesis.
Returning to our empirical example, for the test that the coefficient on “Married Male” is zero,
the p-value is 0.000. This means that it would be highly unlikely to observe a t-statistic as large
as 22.5 when the true value of the coefficient is zero, and thus we can reject that the true value is
zero. When presented with such evidence we can say that we “strongly reject” the null hypothesis,
that the test is “highly significant”, or that “the test rejects at any conventional critical value”.
In contrast, the p-value for the coefficient on “Married Female” is 0.046. In this context it is
typical to say that the test is “marginally significant”, meaning that the test statistic is close to
the asymptotic 5% critical value.
A related (but somewhat inferior) empirical practice is to append asterisks (*) to coefficient
estimates or test statistics to indicate the level of significance. A common practice is to append
a single asterisk (*) for an estimate or test statistic which exceeds the 10% critical value (i.e.,
is significant at the 10% level), a double asterisk (**) for a test which exceeds the 5%
critical value, or a triple asterisk (***) for a test which exceeds the 1% critical value.
Such a practice can be better than a table of raw test statistics as the asterisks permit a quick
interpretation of significance. On the other hand, asterisks are inferior to p-values, which are also
easy and quick to interpret. The goal is essentially the same; it seems wiser to report p-values
whenever possible and avoid the use of asterisks.
Our recommendation is that the best empirical practice is to compute and report the asymptotic
p-value pn rather than simply the test statistic Tn , the binary decision Accept/Reject, or appending
asterisks. The p-value is a simple statistic, easy to interpret, and contains more information than
the other choices.
We now summarize the main features of hypothesis testing.
1. In advance, select a significance level α.
2. Select a test statistic Tn with asymptotic null distribution Tn →d T under H0 .
3. Set the asymptotic critical value c so that 1 − G(c) = α, where G is the distribution function
of T.
4. Calculate the asymptotic p-value pn = 1 − G(Tn ).
5. Reject H0 if Tn > c, or equivalently pn < α.
6. Accept H0 if Tn ≤ c, or equivalently pn ≥ α.
7. Report pn to summarize the evidence concerning H0 versus H1 .
where V̂θ = R̂′ V̂β R̂ is an estimate of Vθ and R̂ = ∂r(β̂)′/∂β. Notice that we can write Wn
alternatively as

Wn = n (θ̂ − θ0 )′ V̂θ⁻¹ (θ̂ − θ0 )

using the asymptotic variance estimate V̂θ , or we can write it directly as a function of β̂ as

Wn = n (r(β̂) − θ0 )′ (R̂′ V̂β R̂)⁻¹ (r(β̂) − θ0 ).    (8.11)
Also, when r(β) = R′β is a linear function of β, then the Wald statistic simplifies to

Wn = n (R′β̂ − θ0 )′ (R′ V̂β R)⁻¹ (R′β̂ − θ0 ).
The Wald statistic Wn is a weighted Euclidean measure of the length of the vector θ b −θ0 . When
q = 1 then Wn = t2n , the square of the t-statistic, so hypothesis tests based on Wn and |tn | are
equivalent. The Wald statistic (8.10) is a generalization of the t-statistic to the case of multiple
restrictions.
As shown in Theorem 6.16.2, when β satisfies r(β) = θ0 then Wn →d χ²q , a chi-square random
variable with q degrees of freedom. Let Gq (u) denote the χ2q distribution function. For a given
significance level α, the asymptotic critical value c satisfies α = 1 − Gq (c) and can be found from
the chi-square distribution table. For example, the 5% critical values for q = 1, q = 2, and q = 3 are
3.84, 5.99, and 7.82, respectively. An asymptotic test rejects H0 in favor of H1 if Wn > c. As with
t-tests, it is conventional to describe a Wald test as “significant” if Wn exceeds the 5% asymptotic
critical value.
Theorem 8.9.1 Under Assumptions 6.1.2 and 6.10.1, and H0 : θ = θ0 , then Wn →d χ²q , and for c
satisfying α = 1 − Gq (c),

Pr (Wn > c | H0 ) −→ α.
Notice that the asymptotic distribution in Theorem 8.9.1 depends solely on q, the number of
restrictions being tested. It does not depend on k, the number of parameters estimated.
The asymptotic p-value for Wn is pn = 1 − Gq (Wn ). It is particularly useful to report p-values
instead of the Wald statistic. For example, if you write that a Wald test on eight restrictions
(q = 8) has the value Wn = 11.2, it is difficult for a reader to assess the magnitude of this statistic
without the time-consuming and cumbersome process of looking up the critical values from a table.
Instead, if you write that the p-value is pn = 0.19 (as is the case for Wn = 11.2 and q = 8) then it
is simple for a reader to interpret its magnitude as “insignificant”.
For example, consider the empirical results presented in Table 4.1. The hypothesis “Union
membership does not affect wages” is the joint restriction that both coefficients on “Male Union
Member” and “Female Union Member” are zero. We calculate the Wald statistic (8.10) for this
joint hypothesis and find Wn = 23.14 with a p-value of pn = 0.000. Thus we reject the hypothesis
in favor of the alternative that at least one of the coefficients is non-zero. This does not mean that
both coefficients are non-zero, just that one of the two is non-zero. Therefore examining the joint
Wald statistic and the individual t-statistics is useful for interpretation.
The Wald statistic is named after the statistician Abraham Wald, who showed that Wn has
optimal weighted average power in certain settings.
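A minimal computational sketch of the Wald statistic for a linear hypothesis R′β = θ0 (assuming Python with numpy and scipy; the inputs below are illustrative placeholders, not estimates from this chapter, and V_beta is taken to estimate the asymptotic variance of √n(β̂ − β)):

# Wald statistic for the linear hypothesis R'beta = theta0.
import numpy as np
from scipy.stats import chi2

def wald_test(beta_hat, V_beta, R, theta0, n):
    diff = R.T @ beta_hat - theta0                  # R'beta_hat - theta0
    V_theta = R.T @ V_beta @ R                      # R' V_beta R
    W = n * diff @ np.linalg.solve(V_theta, diff)   # Wald statistic
    p_value = 1 - chi2.cdf(W, df=len(theta0))       # asymptotic p-value
    return W, p_value

# Illustrative use: test that coefficients 2 and 3 are jointly zero in a k = 4 model.
beta_hat = np.array([1.0, 0.2, -0.1, 0.5])
V_beta = np.eye(4)
R = np.zeros((4, 2)); R[1, 0] = 1.0; R[2, 1] = 1.0
W, p = wald_test(beta_hat, V_beta, R, np.zeros(2), n=1000)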
We call (8.12) the homoskedastic Wald statistic as it is an appropriate test when the errors are
conditionally homoskedastic.
As for Wn , when q = 1 then Wn0 = t2n , the square of the t-statistic, where the latter is computed
with a homoskedastic standard error.
In the case of linear hypotheses H0 : R′β = θ0 the homoskedastic Wald statistic equals

W⁰n = (R′β̂ − θ0 )′ (R′ (X′X)⁻¹ R)⁻¹ (R′β̂ − θ0 ) / s².    (8.13)
Theorem 8.10.1 Under Assumptions 6.1.2 and 6.10.1, E (e²i | xi ) = σ², and H0 : θ = θ0 , then

W⁰n →d χ²q ,

and for c satisfying α = 1 − Gq (c),

Pr (W⁰n > c | H0 ) −→ α.
Criterion-based testing applies when we have a criterion function, say Jn (β) with β ∈ B,
which is minimized for estimation, and the goal is to test H0 : β ∈ B0 versus H1 : β ∉ B0
where B0 ⊂ B. Minimizing the criterion function over B and B0 we obtain the unrestricted and
restricted estimators

β̂ = argmin_{β∈B} Jn (β)
β̃ = argmin_{β∈B0} Jn (β).
Consider the class of linear hypotheses H0 : R′β = θ0 . In this case we know from (7.25) that
the efficient minimum distance estimator β̃emd subject to the constraint R′β = θ0 is

β̃emd = β̂ − V̂β R (R′ V̂β R)⁻¹ (R′β̂ − θ0 )

and the efficient minimum distance statistic equals

Jn∗ = n (β̂ − β̃emd )′ V̂β⁻¹ (β̂ − β̃emd )
    = n (R′β̂ − θ0 )′ (R′ V̂β R)⁻¹ (R′β̂ − θ0 )
    = Wn .    (8.17)
Testing using the minimum distance statistic Jn∗ is similar to testing using the Wald statistic
Wn . Critical values and p-values are computed using the χ2q distribution. H0 is rejected in favor of
H1 if Jn∗ exceeds the level α critical value. The asymptotic p-value is pn = 1 − Gq (Jn∗ ).
Notice that we have scaled the criterion by the unbiased variance estimator s2 from (4.21) for
reasons which will become clear momentarily.
Equation (7.20) showed that

SSEn (β) = n σ̂² + J⁰n (β)

and so the minimizers of SSEn (β) and J⁰n (β) are identical. Thus the constrained minimizer of
J⁰n (β) is constrained least-squares

β̃cls = argmin_{β∈B0} J⁰n (β) = argmin_{β∈B0} SSEn (β)    (8.18)

and therefore

J⁰n = J⁰n (β̃cls )/s²
    = n (β̂ − β̃cls )′ Q̂xx (β̂ − β̃cls ) / s².
This is the homoskedastic Wald statistic (8.13). Thus for testing linear hypotheses, homoskedastic
minimum distance and Wald statistics agree.
For nonlinear hypotheses they disagree, but have the same null asymptotic distribution.
CHAPTER 8. HYPOTHESIS TESTING 197
Theorem 8.13.1 Under Assumptions 6.1.2 and 6.10.1, E (e²i | xi ) = σ², and H0 : θ = θ0 , then
J⁰n →d χ²q .
8.14 F Tests
The F statistic for testing H0 : β ∈ B0 is

Fn = ( (SSEn (β̃cls ) − SSEn (β̂)) / q ) / ( SSEn (β̂) / (n − k) )    (8.20)

where

SSEn (β) = Σ_{i=1}^{n} (yi − x′i β)²

is the sum-of-squared errors. Since s² = SSEn (β̂)/(n − k), we can write this as

Fn = (SSEn (β̃cls ) − SSEn (β̂)) / (q s²)
which is a scaled difference of the sums of squared errors, and is thus a criterion-based statistic.
Using (7.20) we can also write the statistic as

Fn = J⁰n /q,

so the F statistic is identical to the homoskedastic minimum distance statistic divided by the
number of restrictions q.
Another useful way of writing (8.20) is

Fn = ((n − k)/q) · (σ̃² − σ̂²)/σ̂²    (8.21)

where

σ̂² = SSEn (β̂)/n = (1/n) Σ_{i=1}^{n} ê²i

is the residual variance estimate under H1 and

σ̃² = SSEn (β̃cls )/n = (1/n) Σ_{i=1}^{n} ẽ²i

is the residual variance estimate under H0.
When reporting an Fn statistic it is conventional to calculate critical values and p-values using
the F (q, n − k) distribution instead of the asymptotic χ2q /q distribution. This is a prudent small
sample adjustment, as the F distribution is exact when the errors are independent of the regressors
and normally distributed. However, when the degrees of freedom n − k are large then the difference
is negligible. More relevantly, if n − k is small enough to make a difference, probably we shouldn’t
be trusting the asymptotic approximation anyway!
An elegant feature of (8.20) and (8.21) is that they are directly computable from the standard
output from two simple OLS regressions, as the sum of squared errors (or regression variance) is
a typical printed output from statistical packages, and is often reported in applied tables. Thus
Fn can be calculated by hand from standard reported statistics even if you don’t have the original
data (or if you are sitting in a seminar and listening to a presentation!).
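As a small sketch of this hand calculation (assuming Python with scipy; the numbers below are made up for illustration):

# F statistic (8.20) computed from the reported sums of squared errors of the
# restricted and unrestricted regressions.
from scipy.stats import f

SSE_restricted = 125.0     # SSE_n(beta_tilde_cls)
SSE_unrestricted = 120.0   # SSE_n(beta_hat)
q, n, k = 3, 200, 10       # restrictions, sample size, regressors

F = ((SSE_restricted - SSE_unrestricted) / q) / (SSE_unrestricted / (n - k))
p_value = 1 - f.cdf(F, q, n - k)   # p-value from the F(q, n-k) distribution
print(F, p_value)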
If you are presented with an Fn statistic (or a Wald statistic, as you can just divide by q) but
don’t have access to critical values, a useful rule of thumb is to know that for large n, the 5%
asymptotic critical value is decreasing as q increases; it is approximately 2 for q = 7 and falls below 2 for larger q.
In many statistical packages, when an OLS regression is estimated an “F-statistic” is automatically
reported, even though no hypothesis test has been requested. This is the F statistic Fn where H0
restricts all coefficients except the intercept to be zero. This was a popular statistic in the early
days of econometric reporting, when sample sizes were very small and researchers wanted to know
if there was “any explanatory power” to their regression. This is rarely an issue today, as sample
sizes are typically sufficiently large that this F statistic is nearly always highly significant. While
there are special cases where this F statistic is useful, these cases are atypical. As a general rule,
there is no reason to report this F statistic.
The Fn statistic is named after the statistician Ronald Fisher, one of the founders of modern
statistical theory.
which is a monotonic function of σ̃ 2 /σ̂ 2 . Recall that the F statistic (8.21) is also a monotonic
function of σ̃ 2 /σ̂ 2 . Thus LRn and Fn are fundamentally the same statistic and have the same
information about H0 versus H1 .
CHAPTER 8. HYPOTHESIS TESTING 199
This shows that the two statistics (LRn and Fn ) will be numerically close. It also shows that the
F statistic and the homoskedastic Wald statistic for linear hypotheses can also be interpreted as
approximate likelihood ratio statistics under normality.
Take the model

yi = β + ei
ei ∼ N(0, σ²)

and consider the hypothesis

H0 (s) : β^s = 1

for any positive integer s. Letting r(β) = β^s, and noting R = sβ^(s−1), we find that the standard
Wald test for H0 (s) is

Wn (s) = n (β̂^s − 1)² / (σ̂² s² β̂^(2s−2)).

While the hypothesis β^s = 1 is unaffected by the choice of s, the statistic Wn (s) varies with s. This
is an unfortunate feature of the Wald statistic.
To demonstrate this effect, we have plotted in Figure 8.1 the Wald statistic Wn (s) as a function
of s, setting n/σ̂ 2 = 10. The increasing solid line is for the case β̂ = 0.8. The decreasing dashed
line is for the case β̂ = 1.6. It is easy to see that in each case there are values of s for which the
test statistic is significant relative to asymptotic critical values, while there are other values of s
for which the test statistic is insignificant. This is distressing since the choice of s is arbitrary and
irrelevant to the actual hypothesis.
Our first-order asymptotic theory is not useful to help pick s, as Wn (s) →d χ²₁ under H0 for any
s. This is a context where Monte Carlo simulation can be quite useful as a tool to study and
compare the exact distributions of statistical procedures in finite samples. The method uses random
simulation to create artificial datasets, to which we apply the statistical tools of interest. This
produces random draws from the statistic’s sampling distribution. Through repetition, features of
this distribution can be calculated.
In the present context of the Wald statistic, one feature of importance is the Type I error
of the test using the asymptotic 5% critical value 3.84 — the probability of a false rejection,
Pr (Wn (s) > 3.84 | β = 1) . Given the simplicity of the model, this probability depends only on
s, n, and σ 2 . In Table 8.1 we report the results of a Monte Carlo simulation where we vary these
three parameters. The value of s is varied from 1 to 10, n is varied among 20, 100 and 500, and σ
is varied among 1 and 3. The Table reports the simulation estimate of the Type I error probability
from 50,000 random samples. Each row of the table corresponds to a different value of s — and thus
corresponds to a particular choice of test statistic. The second through seventh columns contain the
Type I error probabilities for different combinations of n and σ. These probabilities are calculated
as the percentage of the 50,000 simulated Wald statistics Wn (s) which are larger than 3.84. The
null hypothesis β s = 1 is true, so these probabilities are Type I error.
To interpret the table, remember that the ideal Type I error probability is 5% (.05) with devia-
tions indicating distortion. Type I error rates between 3% and 8% are considered reasonable. Error
rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing
statistical procedures, we compare the rates row by row, looking for tests for which rejection rates
are close to 5% and rarely fall outside of the 3%-8% range. For this particular example the only
test which meets this criterion is the conventional Wn = Wn (1) test. Any other choice of s leads
to a test with unacceptable Type I error probabilities.
Table 8.1
Type I Error Probability of Asymptotic 5% Wn (s) Test
σ=1 σ=3
s n = 20 n = 100 n = 500 n = 20 n = 100 n = 500
1 .06 .05 .05 .07 .05 .05
2 .08 .06 .05 .15 .08 .06
3 .10 .06 .05 .21 .12 .07
4 .13 .07 .06 .25 .15 .08
5 .15 .08 .06 .28 .18 .10
6 .17 .09 .06 .30 .20 .11
7 .19 .10 .06 .31 .22 .13
8 .20 .12 .07 .33 .24 .14
9 .22 .13 .07 .34 .25 .15
10 .23 .14 .08 .35 .26 .16
Note: Rejection frequencies from 50,000 simulated random samples
In Table 8.1 you can also see the impact of variation in sample size. In each case, the Type I
error probability improves towards 5% as the sample size n increases. There is, however, no magic
choice of n for which all tests perform uniformly well. Test performance deteriorates as s increases,
which is not surprising given the dependence of Wn (s) on s as shown in Figure 8.1.
In this example it is not surprising that the choice s = 1 yields the best test statistic. Other
choices are arbitrary and would not be used in practice. While this is clear in this particular
example, in other examples natural choices are not always obvious and the best choices may in fact
appear counter-intuitive at first.
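A minimal sketch of the simulation behind Table 8.1 (assuming Python with numpy; the number of replications is left as a parameter and is smaller than the 50,000 used for the table):

# Estimate the Type I error of the asymptotic 5% test based on W_n(s) in the
# model y_i = beta + e_i with beta = 1, so that the null beta^s = 1 is true.
import numpy as np

def rejection_rate(s, n, sigma, B=10_000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(B):
        y = 1.0 + sigma * rng.standard_normal(n)
        beta_hat = y.mean()
        sigma2_hat = y.var()                 # (1/n) times the sum of squared residuals
        W = n * (beta_hat**s - 1) ** 2 / (sigma2_hat * s**2 * beta_hat ** (2 * s - 2))
        rejections += W > 3.84               # asymptotic 5% critical value
    return rejections / B

print(rejection_rate(s=4, n=20, sigma=3))    # roughly 0.25, as in Table 8.1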
This point can be illustrated through another example which is similar to one developed in
Gregory and Veall (1985). Take the model

yi = β0 + x1i β1 + x2i β2 + ei
E (xi ei ) = 0

and the hypothesis that the ratio β1 /β2 equals a known value θ0 , which can be written either as
H0 : β1 /β2 = θ0 or in the linear form

H0 : β1 − θ0 β2 = 0.

A t-statistic based on the ratio formulation is

t1n = (β̂1 /β̂2 − θ0 ) / (R̂1′ V̂β̂ R̂1 )^{1/2},    R̂1 = (0, 1/β̂2 , −β̂1 /β̂2² )′,

while a t-statistic based on the linear formulation is

t2n = (β̂1 − θ0 β̂2 ) / (R2′ V̂β̂ R2 )^{1/2},    R2 = (0, 1, −θ0 )′,

where V̂β̂ is the estimated covariance matrix of β̂.
To compare t1n and t2n we perform another simple Monte Carlo simulation. We let x1i and x2i
be mutually independent N(0, 1) variables, ei be an independent N(0, σ 2 ) draw with σ = 3, and
normalize β0 = 0 and β1 = 1. This leaves β2 as a free parameter, along with sample size n. We vary
β2 among .1, .25, .50, .75, and 1.0 and n among 100 and 500.
Table 8.2
Type I Error Probability of Asymptotic 5% t-tests
n = 100 n = 500
Pr (tn < −1.645) Pr (tn > 1.645) Pr (tn < −1.645) Pr (tn > 1.645)
β2 t1n t2n t1n t2n t1n t2n t1n t2n
.10 .47 .06 .00 .06 .28 .05 .00 .05
.25 .26 .06 .00 .06 .15 .05 .00 .05
.50 .15 .06 .00 .06 .10 .05 .00 .05
.75 .12 .06 .00 .06 .09 .05 .00 .05
1.00 .10 .06 .00 .06 .07 .05 .02 .05
The one-sided Type I error probabilities Pr (tn < −1.645) and Pr (tn > 1.645) are calculated
from 50,000 simulated samples. The results are presented in Table 8.2. Ideally, the entries in the
table should be 0.05. However, the rejection rates for the t1n statistic diverge greatly from this
value, especially for small values of β2 . The left tail probabilities Pr (t1n < −1.645) greatly exceed
5%, while the right tail probabilities Pr (t1n > 1.645) are close to zero in most cases. In contrast,
the rejection rates for the linear t2n statistic are invariant to the value of β2 , and are close to the
ideal 5% rate for both sample sizes. The implication of Table 8.2 is that the two t-ratios have
dramatically different sampling behavior.
The common message from both examples is that Wald statistics are sensitive to the algebraic
formulation of the null hypothesis.
A simple solution is to use the minimum distance statistic Jn , which equals Wn with s = 1 in
the first example, and |t2n | in the second example. The minimum distance statistic is invariant to
the algebraic formulation of the null hypothesis, so is immune to this problem. Whenever possible,
the Wald statistic should not be used to test nonlinear hypotheses.
Gn (u, F ) = Pr (Tn ≤ u | F ) .
While the asymptotic distribution of Tn might be known, the exact (finite sample) distribution Gn
is generally unknown.
Monte Carlo simulation uses numerical simulation to compute Gn (u, F ) for selected choices of F.
This is useful to investigate the performance of the statistic Tn in reasonable situations and sample
sizes. The basic idea is that for any given F, the distribution function Gn (u, F ) can be calculated
numerically through simulation. The name Monte Carlo derives from the famous Mediterranean
gambling resort where games of chance are played.
The method of Monte Carlo is quite simple to describe. The researcher chooses F (the dis-
tribution of the data) and the sample size n. A “true” value of θ is implied by this choice, or
equivalently the value θ is selected directly by the researcher which implies restrictions on F .
Then the following experiment is conducted by computer simulation:
1. n independent random pairs (yi∗ , x∗i ) , i = 1, ..., n, are drawn from the distribution F using
the computer’s random number generator.
2. The statistic Tn = Tn ((y1∗ , x∗1 ) , ..., (yn∗ , x∗n ) , θ) is calculated on this pseudo data.
For step 1, most computer packages have built-in procedures for generating U[0, 1] and N(0, 1)
random numbers, and from these most random variables can be constructed. (For example, a
chi-square can be generated by sums of squares of normals.)
For step 2, it is important that the statistic be evaluated at the “true” value of θ corresponding
to the choice of F.
The above experiment creates one random draw from the distribution Gn (u, F ). This is one
observation from an unknown distribution. Clearly, from one observation very little can be said.
So the researcher repeats the experiment B times, where B is a large number. Typically, we set
B = 1000 or B = 5000. We will discuss this choice later.
Notationally, let the bth experiment result in the draw Tnb , b = 1, ..., B. These results are stored.
After all B experiments have been calculated, these results constitute a random sample of size B
from the distribution of Gn (u, F ) = Pr (Tnb ≤ u) = Pr (Tn ≤ u | F ) .
From a random sample, we can estimate any feature of interest using (typically) a method of
moments estimator. We now describe some specific examples.
Suppose we are interested in the bias, mean-squared error (MSE), and/or variance of the dis-
tribution of θ̂ − θ. We then set Tn = θ̂ − θ, run the above experiment, and calculate

\widehat{Bias}(θ̂) = (1/B) Σ_{b=1}^{B} Tnb = (1/B) Σ_{b=1}^{B} (θ̂b − θ)

\widehat{MSE}(θ̂) = (1/B) Σ_{b=1}^{B} (Tnb )² = (1/B) Σ_{b=1}^{B} (θ̂b − θ)²

\widehat{var}(θ̂) = \widehat{MSE}(θ̂) − (\widehat{Bias}(θ̂))².

Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test.
We would then set Tn = |θ̂ − θ|/s(θ̂) and calculate

P̂ = (1/B) Σ_{b=1}^{B} 1 (Tnb ≥ 1.96),    (8.23)

the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.
Suppose we are interested in the 5% and 95% quantile of Tn = θ̂ or Tn = (θ̂ − θ)/s(θ̂). We
then compute the 5% and 95% sample quantiles of the sample {Tnb }. The α% sample quantile is a
number qα such that α% of the sample are less than qα . A simple way to compute sample quantiles
is to sort the sample {Tnb } from low to high. Then qα is the N ’th number in this ordered sequence,
where N = (B + 1)α. It is therefore convenient to pick B so that N is an integer. For example, if
we set B = 999, then the 5% sample quantile is the 50’th sorted value and the 95% sample quantile is
the 950’th sorted value.
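A compact sketch of this generic Monte Carlo loop (assuming Python with numpy; the data-generating process and sample sizes below are purely illustrative):

# Simulate B samples, compute theta_hat on each, then estimate bias, MSE, and
# sample quantiles by the method of moments, as described above.
import numpy as np

rng = np.random.default_rng(0)
n, B, theta = 100, 999, 1.0
draws = np.empty(B)
for b in range(B):
    x = rng.standard_normal(n)
    e = rng.standard_normal(n)
    y = theta * x + e
    theta_hat = (x @ y) / (x @ x)     # OLS in a one-regressor model
    draws[b] = theta_hat - theta      # T_n = theta_hat - theta

bias = draws.mean()
mse = (draws ** 2).mean()
var = mse - bias ** 2
sorted_draws = np.sort(draws)
q05, q95 = sorted_draws[49], sorted_draws[949]   # 50'th and 950'th sorted values (B = 999)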
The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical
procedure (estimator or test) in realistic settings. Generally, the performance will depend on n and
F. In many cases, an estimator or test may perform wonderfully for some values, and poorly for
others. It is therefore useful to conduct a variety of experiments, for a selection of choices of n and
F.
As discussed above, the researcher must select the number of experiments, B. Often this is
called the number of replications. Quite simply, a larger B results in more precise estimates of
the features of interest of Gn , but requires more computational time. In practice, therefore, the
choice of B is often guided by the computational demands of the statistical procedure. Since the
results of a Monte Carlo experiment are estimates computed from a random sample of size B, it
is straightforward to calculate standard errors for any quantity of interest. If the standard error is
too large to make a reliable inference, then B will have to be increased.
In particular, it is simple to make inferences about rejection probabilities from statistical tests,
such as the percentage estimate reported in (8.23). The random variable 1 (Tnb ≥ 1.96) is iid
Bernoulli, equalling 1 with probability p = E 1 (Tnb ≥ 1.96). The average (8.23) is therefore an
unbiased estimator of p with standard error s(p̂) = √(p(1 − p)/B). As p is unknown, this may be
approximated by replacing p with p̂ or with a hypothesized value. For example, if we are assessing
an asymptotic 5% test, then we can set s(p̂) = √((.05)(.95)/B) ≈ .22/√B. Hence, standard errors
for B = 100, 1000, and 5000, are, respectively, s(p̂) = .022, .007, and .003.
Most papers in econometric methods, and some empirical papers, include the results of Monte
Carlo simulations to illustrate the performance of their methods. When extending existing results,
it is good practice to start by replicating existing (published) results. This is not exactly possible
in the case of simulation results, as they are inherently random. For example suppose a paper
investigates a statistical test, and reports a simulated rejection probability of 0.07 based on a
simulation with B = 100 replications. Suppose you attempt to replicate this result, and find a
rejection probability of 0.03 (again using B = 100 simulation replications). Should you conclude
that you have failed in your attempt? Absolutely not! Under the hypothesis that both simulations
are identical, you have two independent estimates, p̂1 = 0.07 and p̂2 = 0.03, of a common probability
p. The asymptotic (as B → ∞) distribution of their difference is √B (p̂1 − p̂2 ) →d N(0, 2p(1 − p)), so
a standard error for p̂1 − p̂2 = 0.04 is ŝ = √(2p(1 − p)/B) ≈ 0.03, using the estimate p = (p̂1 + p̂2 )/2.
Since the t-ratio 0.04/0.03 = 1.3 is not statistically significant, it is incorrect to reject the null
hypothesis that the two simulations are identical. The difference between the results p̂1 = 0.07 and
p̂2 = 0.03 is consistent with random variation.
What should be done? The first mistake was to copy the previous paper’s choice of B = 100.
Instead, suppose you set B = 5000. Suppose you now obtain p̂2 = 0.04. Then p̂1 − p̂2 = 0.03 and
a standard error is ŝ = √(p(1 − p)(1/100 + 1/5000)) ≈ 0.02. Still we cannot reject the hypothesis
that the two simulations are equivalent.
that the two simulations are different. Even though the estimates (0.07 and 0.04) appear to be
quite different, the difficulty is that the original simulation used a very small number of replications
(B = 100) so the reported estimate is quite imprecise. In this case, it is appropriate to conclude
that your results “replicate” the previous study, as there is no statistical evidence to reject the
hypothesis that they are equivalent.
Most journals have policies requiring authors to make available their data sets and computer
programs required for empirical results. They do not have similar policies regarding simulations.
Nevertheless, it is good professional practice to make your simulations available. The best practice
is to post your simulation code on your webpage. This invites others to build on and use your results,
leading to possible collaboration, citation, and/or advancement.
That is, we can describe Cn as “The point estimate plus or minus 2 standard errors” or “The set of
parameter values not rejected by a two-sided t-test.” The second definition, known as “test statistic
inversion” is a general method for finding confidence intervals, and typically produces confidence
intervals with excellent properties.
Given a test statistic Tn (θ) and critical value c, the acceptance region “Accept if Tn (θ) ≤ c”
is identical to the confidence interval Cn = {θ : Tn (θ) ≤ c}. Since the regions are identical, the
probability of coverage Pr (θ ∈ Cn ) equals the probability of correct acceptance Pr (Accept|θ) which
is exactly 1 minus the Type I error probability. Thus inverting a test with good Type I error
probabilities yields a confidence interval with good coverage probabilities.
Now suppose that the parameter of interest θ = r(β) is a nonlinear function of the coefficient
vector β. In this case the standard confidence interval for θ is the set Cn as in (8.24) where θ̂ = r(β̂)
is the point estimate and s(θ̂) = (R̂′ V̂β̂ R̂)^{1/2} is the delta method standard error. This confidence
interval is inverting the t-test based on the nonlinear hypothesis r(β) = θ. The trouble is that in
Section 8.16 we learned that there is no unique t-statistic for tests of nonlinear hypotheses and that
the choice of parameterization matters greatly.
For example, if θ = β1 /β2 then the coverage probability of the standard interval (8.24) is 1
minus the probability of the Type I error, which as shown in Table 8.2 can be far from the nominal
5%.
In this example a good solution is the same as discussed in Section 8.16: to rewrite the
hypothesis as a linear restriction. The hypothesis θ = β1 /β2 is the same as θβ2 = β1 . The t-
statistic for this restriction is

tn (θ) = (β̂1 − β̂2 θ) / (R′ V̂β̂ R)^{1/2}

where

R = (1, −θ)′

and V̂β̂ is the covariance matrix for (β̂1 , β̂2 ). A 95% confidence interval for θ = β1 /β2 is the set of
values of θ such that |tn (θ)| ≤ 1.96. Since θ appears in both the numerator and denominator, tn (θ)
is a non-linear function of θ so the easiest method to find the confidence set is by grid search over
θ.
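A minimal sketch of this grid search (assuming Python with numpy; the estimates and covariance matrix below are made-up placeholders, not the values from Table 4.1):

# Confidence interval for theta = beta1/beta2 by inverting the linear t-statistic.
import numpy as np

beta_hat = np.array([1.2, 2.0])          # (beta1_hat, beta2_hat), illustrative
V = np.array([[0.04, 0.01],
              [0.01, 0.02]])             # covariance matrix of the two estimates, illustrative

def t_stat(theta):
    R = np.array([1.0, -theta])
    return (beta_hat[0] - beta_hat[1] * theta) / np.sqrt(R @ V @ R)

grid = np.linspace(-2.0, 4.0, 60_001)
accepted = grid[np.abs([t_stat(th) for th in grid]) <= 1.96]
ci = (accepted.min(), accepted.max())
print(ci)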
For example, in the wage equation
log(W age) = β1 Experience + β2 Experience2 /100 + · · ·
the highest expected wage occurs at Experience = −50β1 /β2 . From Table 4.1 we have the point
estimate θ̂ = 29.8 and we can calculate the standard error s(θ̂) = 0.022 for a 95% confidence interval
[29.8, 29.9]. However, if we instead invert the linear form of the test we can numerically find the
interval [29.1, 30.6] which is much larger. From the evidence presented in Section 8.16 we know the
first interval can be quite inaccurate and the second interval is greatly preferred.
For tests of the form “Reject H0 if Tn > c”, a sufficient condition for test consistency is that
the Tn diverges to positive infinity with probability one for all θ ∈ Θ1 .
Definition 8.19.2 Tn →p ∞ as n → ∞ if for all M < ∞, Pr (Tn ≤ M ) → 0 as n → ∞. Similarly,
Tn →p −∞ as n → ∞ if for all M < ∞, Pr (Tn ≥ −M ) → 0 as n → ∞.
In general, t-tests and Wald tests are consistent against fixed alternatives. Take a t-statistic for
a test of H0 : θ = θ0 ,

tn = (θ̂ − θ0 ) / s(θ̂)

where θ0 is a known value and s(θ̂) = √(n⁻¹ V̂θ ). Note that

tn = (θ̂ − θ)/s(θ̂) + √n (θ − θ0 )/√V̂θ .
The first term on the right-hand-side converges in distribution to N(0, 1). The second term on the
right-hand-side equals zero if θ = θ0 , converges in probability to +∞ if θ > θ0 , and converges
in probability to −∞ if θ < θ0 . Thus the two-sided t-test is consistent against H1 : θ 6= θ0 , and
one-sided t-tests are consistent against the alternatives for which they are designed.
Under H1 , θ̂ →p θ ≠ θ0 . Thus (θ̂ − θ0 )′ V̂θ⁻¹ (θ̂ − θ0 ) →p (θ − θ0 )′ Vθ⁻¹ (θ − θ0 ) > 0. Hence
under H1 , Wn →p ∞. Again, this implies that Wald tests are consistent tests.
where the scalar h is called a localizing parameter. We index βn and θn by sample size to
indicate their dependence on n. The way to think of (8.25) is that the true value of the parameters
are β n and θn . The parameter θn is close to the hypothesized value θ0 , with deviation n−1/2 h.
The specification (8.25) states that for any fixed h , θn approaches θ0 as n gets large. Thus θn
is “close” or “local” to θ0 . The concept of a localizing sequence (8.25) might seem odd at first as
in the actual world the sample size cannot mechanically affect the value of the parameter. Thus
(8.25) should not be interpreted literally. Instead, it should be interpreted as a technical device
which allows the asymptotic distribution of the test statistic to be continuous in the alternative
hypothesis.
To evaluate the asymptotic distribution of the test statistic we start by examining the scaled
estimate centered at the hypothesized value θ0 . Breaking it into a term centered at the true value
θn and a remainder we find
√n (θ̂ − θ0 ) = √n (θ̂ − θn ) + √n (θn − θ0 )
             = √n (θ̂ − θn ) + h

where the second equality is (8.25). The first term is asymptotically normal:

√n (θ̂ − θn ) →d √Vθ Z.

Thus

√n (θ̂ − θ0 ) →d √Vθ Z + h,
or N(h, Vθ ). This is a continuous asymptotic distribution, and depends continuously on the localizing
parameter h.
Applied to the t statistic we find

tn = (θ̂ − θ0 ) / s(θ̂)
   →d (√Vθ Z + h) / √Vθ
   ∼ Z + δ    (8.26)

where δ = h/√Vθ . This generalizes Theorem 8.4.1 (which assumes H0 is true) to allow for local
alternatives of the form (8.25).
Consider a t-test of H0 against the one-sided alternative H1 : θ > θ0 which rejects H0 for tn > cα
where Φ(cα ) = 1 − α. The asymptotic local power of this test is the limit (as the sample size
diverges) of the rejection probability under the local alternative (8.25). Since tn →d Z + δ, this limit
is πα (δ) = Pr (Z + δ > cα ) = Φ (δ − cα ), which we call the asymptotic local power function. In
Figure 8.2 we plot πα (δ) as a function of δ for tests of asymptotic size α = 0.10, 0.05, and 0.01.
The parameter δ can be related to the standard error of θ̂, since

δ = h/√Vθ ≈ n^{−1/2} h / s(θ̂) = (θn − θ0 )/s(θ̂).
Thus δ equals the deviation θn − θ0 expressed as multiples of the standard error s(θ̂). Thus as
we examine Figure 8.2, we can interpret the power function at δ = 1 (e.g. 26% for a 5% size
test) as the power when the parameter θn is one standard error above the hypothesized value. For
example, from Table 4.1 the standard error for the coefficient on “Married Female” is 0.008. Thus
in this example, δ = 1 corresponds to θn = 0.008 or a 0.8% wage premium for married females.
Our calculations show that the asymptotic power of a one-sided 5% test against this alternative is
about 26%.
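A short numerical check of these power calculations (assuming Python with scipy):

# Asymptotic local power of a one-sided test: pi_alpha(delta) = Phi(delta - c_alpha).
from scipy.stats import norm

for alpha in (0.10, 0.05, 0.01):
    c = norm.ppf(1 - alpha)
    print(alpha, round(norm.cdf(1.0 - c), 2))   # power at delta = 1: about .39, .26, .09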
The difference between power functions can be measured either vertically or horizontally. For
example, in Figure 8.2 there is a vertical dotted line at δ = 1, showing that the asymptotic local
power function πα (δ) equals 39% for α = 0.10, equals 26% for α = 0.05 and equals 9% for α = 0.01.
This is the difference in power across tests of differing size, holding fixed the parameter in the
alternative.
A horizontal comparison can also be illuminating. To illustrate, in Figure 8.2 there is a hori-
zontal dotted line at 50% power. 50% power is a useful benchmark, as it is the point where the
test has equal odds of rejection and acceptance. The dotted line crosses the three power curves at
δ = 1.29 (α = 0.10), δ = 1.65 (α = 0.05), and δ = 2.33 (α = 0.01). This means that the parameter
θ must be at least 1.65 standard errors above the hypothesized value for the one-sided test to have
50% (approximate) power. As discussed above, these values can be interpreted as the multiple of
the standard error needed for a coefficient to obtain power equal to 50%.
The ratio of these values (e.g. 1.65/1.29 = 1.28 for the asymptotic 5% versus 10% tests)
measures the relative parameter magnitude needed to achieve the same power. (Thus, for a 5% size
test to achieve 50% power, the parameter must be 28% larger than for a 10% size test.) Even more
interesting, the square of this ratio (e.g. (1.65/1.29)2 = 1.64) can be interpreted as the increase
in sample size needed to achieve the same power under fixed parameters. That is, to achieve 50%
power, a 5% size test needs 64% more observations than a 10% size test. This interpretation follows
from the following informal argument. By definition and (8.25), δ = h/√Vθ = √n (θn − θ0 )/√Vθ . Thus
holding θn − θ0 and Vθ fixed, we can see that δ² is proportional to n.
The analysis of a two-sided t test is similar. (8.26) implies that

Tn = |tn | = |θ̂ − θ0 | / s(θ̂) →d |Z + δ|
Pr (t(θ0 ) > cα ) −→ Φ (δ − cα ) .
The asymptotic distribution (8.28) is a quadratic in the normal random vector Zh , similar to that
found for the asymptotic null distribution of the Wald statistic. The important difference, however,
is that Zh has a mean of h. The distribution of the quadratic form (8.28) is a close relative of the
chi-square distribution.
The convergence (8.28) shows that under the local alternatives (8.27), Wn →d χ²q (λ). This
generalizes the null asymptotic distribution which obtains as the special case λ = 0. We can use this
result to obtain a continuous asymptotic approximation to the power function. For any significance
level α > 0 set the asymptotic critical value cα so that Pr (χ²q > cα ) = α. Then as n → ∞,

Pr (Wn > cα ) −→ Pr (χ²q (λ) > cα ) ≡ πα,q (λ).
The asymptotic local power function πα,q (λ) depends only on α, q, and λ.
The non-central chi-square distribution is a generalization of the chi-square, with χ2q (λ) special-
izing to χ2q when λ = 0. In the case q = 1, χ2q (λ) = |Z + δ|2 with λ = δ 2 , and thus Theorem 8.21.2
generalizes Theorem 8.20.1 from q = 1 to q ≥ 1.
Figure 8.3 plots π0.05,q (λ) (the power of asymptotic 5% tests) as a function of λ for q = 1, q = 2,
and q = 3. The power functions are monotonically increasing in λ and asymptote to one.
Figure 8.3 also shows the power loss for fixed non-centrality parameter λ as the dimensionality
of the test increases. The power curves shift to the right as q increases, resulting in a decrease
in power. This is illustrated by the dotted line at 50% power. The dotted line crosses the three
power curves at λ = 3.85 (q = 1), λ = 4.96 (q = 2), and λ = 5.77 (q = 3). The ratio of these λ
values corresponds to the relative sample sizes needed to obtain the same power. Thus increasing
the dimension of the test from q = 1 to q = 2 requires a 28% increase in sample size, or an increase
from q = 1 to q = 3 requires a 50% increase in sample size, to obtain a test with 50% power.
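A quick numerical version of this calculation (assuming Python with scipy, which provides the noncentral chi-square distribution as ncx2):

# Power of the asymptotic 5% Wald test under local alternatives:
# pi_{alpha,q}(lambda) = Pr(chi2_q(lambda) > c_alpha).
from scipy.stats import chi2, ncx2

alpha = 0.05
for q, lam in [(1, 3.85), (2, 4.96), (3, 5.77)]:
    c = chi2.ppf(1 - alpha, q)
    print(q, round(1 - ncx2.cdf(c, q, lam), 2))   # each roughly 0.50, matching the dotted line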
√n (β̂ − β̃emd ) = V̂β R (R′ V̂β R)⁻¹ R′ √n (β̂ − β)
                →d Vβ R (R′ Vβ R)⁻¹ R′ N(0, Vβ )
                = Vβ R Z

where Z ∼ N(0, (R′ Vβ R)⁻¹ ). Thus

Jn∗ = n (β̂ − β̃emd )′ V̂β⁻¹ (β̂ − β̃emd )
    →d Z′ R′ Vβ Vβ⁻¹ Vβ R Z
    = Z′ (R′ Vβ R) Z
    = χ²q .

■
Proof of Theorem 8.21.1. We show that the random variable Q = Z′h V⁻¹ Zh depends only on
q and λ.
First, let G be a square root of V so that V = GG′. Define Z∗h = G⁻¹ Zh ∼ N(h∗, Iq ) with
h∗ = G⁻¹ h. Note that λ = h′ V⁻¹ h = h∗′ h∗. Second, construct an orthogonal q × q matrix H whose
first column is h∗/√λ (taking any orthogonal H when λ = 0). Define Z∗∗h = H′ Z∗h ∼ N(H′h∗, Iq ),
whose mean H′h∗ has first element h∗′h∗/√λ = √λ and remaining elements zero. Then

Q = Z′h V⁻¹ Zh = Z∗′h Z∗h = Z∗∗′h Z∗∗h ,

the sum of squares of q independent normal random variables whose distribution depends only on
q and λ, as claimed. ■
Exercises
Exercise 8.1 Prove that if an additional regressor Xk+1 is added to X, Theil’s adjusted R̄²
increases if and only if |tk+1 | > 1, where tk+1 = β̂k+1 /s(β̂k+1 ) is the t-ratio for β̂k+1 and

s(β̂k+1 ) = (s² [(X′X)⁻¹ ]k+1,k+1 )^{1/2}.
Exercise 8.2 You have two independent samples (y1 , X1 ) and (y2 , X2 ) which satisfy y1 = X1 β1 +
e1 and y2 = X2 β2 + e2 , where E (x1i e1i ) = 0 and E (x2i e2i ) = 0, and both X1 and X2 have k
columns. Let β̂1 and β̂2 be the OLS estimates of β1 and β2 . For simplicity, you may assume that
both samples have the same number of observations n.

(a) Find the asymptotic distribution of √n ((β̂2 − β̂1 ) − (β2 − β1 )) as n → ∞.
Exercise 8.3 The data set invest.dat contains data on 565 U.S. firms extracted from Compustat
for the year 1987. The variables, in order, are
The flow variables are annual sums for 1987. The stock variables are beginning of year.
(a) Estimate a linear regression of Ii on the other variables. Calculate appropriate standard
errors.
(c) This regression is related to Tobin’s q theory of investment, which suggests that investment
should be predicted solely by Qi . Thus the coefficient on Qi should be positive and the others
should be zero. Test the joint hypothesis that the coefficients on Ci and Di are zero. Test the
hypothesis that the coefficient on Qi is zero. Are the results consistent with the predictions
of the theory?
(d) Now try a non-linear (quadratic) specification. Regress Ii on Qi , Ci , Di , Q2i , Ci2 , Di2 , Qi Ci ,
Qi Di , Ci Di . Test the joint hypothesis that the six interaction and quadratic coefficients are
zero.
Exercise 8.4 In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric
companies. (The problem is discussed in Example 8.3 of Greene, section 1.7 of Hayashi, and the
empirical exercise in Chapter 1 of Hayashi). The data file nerlov.dat contains his data. The
variables are described on page 77 of Hayashi. Nerlove was interested in estimating a cost function:
T C = f (Q, P L, P F, P K).
Report parameter estimates and standard errors. You should obtain the same OLS estimates
as in Hayashi’s equation (1.7.7), but your standard errors may differ.
(c) Estimate (8.29) by constrained least-squares imposing β3 +β4 +β5 = 1. Report your parameter
estimates and standard errors.
Chapter 9
Regression Extensions
The exponential link function is strictly positive, so this choice can be useful when it is desired to
constrain the mean to be strictly positive.
where
Λ(u) = (1 + exp(−u))−1 (9.1)
is the Logistic distribution function. Since the logistic link function lies in [0, 1], this choice can be
useful when the conditional mean is bounded between 0 and 1.
m (x, θ) = θ1 + θ2 exp(θ3 x)
m (x, θ) = θ1 + θ2 xθ3
with x > 0.
m (x, θ) = θ1 + θ2 x(θ3 )
where

x(λ) = (x^λ − 1)/λ   if λ > 0
     = log(x)        if λ = 0    (9.2)
and x > 0. The function (9.2) is called the Box-Cox Transformation and was introduced by Box
and Cox (1964). The function nests linearity (λ = 1) and logarithmic (λ = 0) transformations
continuously.
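A small sketch of this transformation (assuming Python with numpy):

# Box-Cox transformation (9.2): nests log(x) at lambda = 0 and, up to a shift
# and scale, the identity at lambda = 1.  Requires x > 0.
import numpy as np

def box_cox(x, lam):
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1) / lam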
m (x, θ) = θ1 + θ2 x + θ3 (x − θ4 ) 1 (x > θ4 )
where
∂
mθ (x, θ) = m (x, θ) .
∂θ
where xi (γ) is a function of xi and the unknown parameter γ. Examples include xi (γ) = xγi ,
xi (γ) = exp (γxi ) , and xi (γ) = xi 1 (g (xi ) > γ). The model is linear when β2 = 0, and this is
often a useful hypothesis (sub-model) to consider. Thus we want to test
H0 : β2 = 0.
Proof of Theorem 9.1.1 (Sketch). NLLS estimation falls in the class of optimization estimators.
For this theory, it is useful to denote the true value of the parameter θ as θ0 .
p
The first step is to show that θ̂ −→ θ0 . Proving that nonlinear estimators are consistent is more
challenging than for linear estimators. We sketch the main argument. The idea is that θ̂ minimizes
the sample criterion function Sn (θ), which (for any θ) converges in probability to the mean-squared
error function E (yi − m (xi , θ))2 . Thus it seems reasonable that the minimizer θ̂ will converge in
probability to θ0 , the minimizer of E (yi − m (xi , θ))2 . It turns out that to show this rigorously, we
need to show that Sn (θ) converges uniformly to its expectation E (yi − m (xi , θ))2 , which means
that the maximum discrepancy must converge in probability to zero, to exclude the possibility that
Sn (θ) is excessively wiggly in θ. Proving uniform convergence is technically challenging, but it
can be shown to hold broadly for relevant nonlinear regression models, especially if the regression
function m (xi , θ) is differentiable in θ. For a complete treatment of the theory of optimization
estimators see Newey and McFadden (1994).
Since θ̂ →p θ0 , θ̂ is close to θ0 for n large, so the minimization of Sn (θ) only needs to be
examined for θ close to θ0 . Let

y⁰i = ei + m′θi θ0 .

For θ close to the true value θ0 , by a first-order Taylor series approximation,

m (xi , θ) ≈ m (xi , θ0 ) + m′θi (θ − θ0 ).

Thus

yi − m (xi , θ) ≈ (ei + m (xi , θ0 )) − (m (xi , θ0 ) + m′θi (θ − θ0 ))
              = ei − m′θi (θ − θ0 )
              = y⁰i − m′θi θ.
Hence Sn (θ) ≈ Σ_{i=1}^{n} (y⁰i − m′θi θ)², and the right-hand-side is the SSE function for a linear
regression of y⁰i on mθi . Thus the NLLS estimator θ̂ has the same asymptotic distribution as the
(infeasible) OLS regression of y⁰i on mθi , which is that stated in the theorem.
yi = x0i β + ei
E (ei | xi ) = 0,
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show
that in this model the semiparametric efficiency bound is obtained by the Generalized Least
Squares (GLS) estimator (4.13) introduced in Section 4.6.1. The GLS estimator is sometimes
called the Aitken estimator. The GLS estimator (4.13) is infeasible since the matrix D is unknown.
A feasible GLS (FGLS) estimator replaces the unknown D with an estimate D̂ = diag{σ̂12 , ..., σ̂n2 }.
We now discuss this estimation problem.
Suppose that we model the conditional variance using the parametric form

σ²i = α0 + z′1i α1 = α′ zi ,

where z1i is some q × 1 function of xi , zi = (1, z′1i )′, and α = (α0 , α′1 )′. Typically, z1i are squares
(and perhaps levels) of some (or all) elements of xi . Often the functional form is kept simple for
parsimony.
Let ηi = e²i . Then

E (ηi | xi ) = α0 + z′1i α1

and we have the regression equation

ηi = α0 + z′1i α1 + ξi    (9.4)
E (ξi | xi ) = 0.
This regression error ξi is generally heteroskedastic and has the conditional variance

var (ξi | xi ) = var (e²i | xi )
              = E ((e²i − E (e²i | xi ))² | xi )
              = E (e⁴i | xi ) − (E (e²i | xi ))².
and

√n (α̂ − α) →d N (0, Vα )
φi ≡ η̂i − ηi
   = ê²i − e²i
   = −2 ei x′i (β̂ − β) + (β̂ − β)′ xi x′i (β̂ − β).
And then

(1/√n) Σ_{i=1}^{n} zi φi = (−2/n) Σ_{i=1}^{n} zi ei x′i √n (β̂ − β) + (1/n) Σ_{i=1}^{n} zi (β̂ − β)′ xi x′i (β̂ − β) √n
                        →p 0.
Let

α̃ = (Z′Z)⁻¹ Z′ η̂    (9.6)

be from OLS regression of η̂i on zi . Then

√n (α̃ − α) = √n (α̂ − α) + (n⁻¹ Z′Z)⁻¹ n^{−1/2} Z′φ
            →d N (0, Vα ).    (9.7)
Thus the fact that ηi is replaced with η̂i is asymptotically irrelevant. We call (9.6) the skedastic
regression, as it is estimating the conditional variance of the regression of yi on xi . We have shown
that α is consistently estimated by a simple procedure, and hence we can estimate σ²i = z′i α by

σ̃²i = z′i α̃.

Suppose that σ̃²i > 0 for all i. Then set

D̃ = diag{σ̃²1 , ..., σ̃²n }

and

β̃ = (X′ D̃⁻¹ X)⁻¹ X′ D̃⁻¹ y.
This is the feasible GLS, or FGLS, estimator of β. Since there is not a unique specification for
the conditional variance the FGLS estimator is not unique, and will depend on the model (and
estimation method) for the skedastic regression.
One typical problem with implementation of FGLS estimation is that in the linear specification
(9.4), there is no guarantee that σ̃i2 > 0 for all i. If σ̃i2 < 0 for some i, then the FGLS estimator
is not well defined. Furthermore, if σ̃i2 ≈ 0 for some i then the FGLS estimator will force the
regression equation through the point (yi , xi ), which is undesirable. This suggests that there is a
need to bound the estimated variances away from zero. A trimming rule takes the form
σ̄²i = max[σ̃²i , c σ̂²]
for some c > 0. For example, setting c = 1/4 means that the conditional variance function is
constrained to exceed one-fourth of the unconditional variance. As there is no clear method to
select c, this introduces a degree of arbitrariness. In this context it is useful to re-estimate the
model with several choices for the trimming parameter. If the estimates turn out to be sensitive to
its choice, the estimation method should probably be reconsidered.
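A compact sketch of this FGLS procedure (assuming Python with numpy; the choice of zi as the squared regressors and the trimming constant c = 1/4 are illustrative, not prescribed):

# Feasible GLS with a linear skedastic regression and variance trimming.
import numpy as np

def fgls(y, X, c=0.25):
    n, k = X.shape                                      # assumes column 0 of X is an intercept
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)        # step 1: OLS
    e2 = (y - X @ beta_ols) ** 2                        # squared residuals
    Z = np.column_stack([np.ones(n), X[:, 1:] ** 2])    # step 2: skedastic regressors z_i
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ e2)          # skedastic regression
    sigma2 = np.maximum(Z @ alpha, c * e2.mean())       # step 3: trimming rule
    w = 1.0 / sigma2                                    # step 4: weighted least squares
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))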
It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymp-
totically equivalent to GLS. As the proof is tricky, we just state the result without proof.
and thus

√n (β̃FGLS − β) →d N (0, Vβ ),

where

Vβ = (E (σi⁻² xi x′i ))⁻¹.
Examining the asymptotic distribution of Theorem 9.2.1, the natural estimator of the asymptotic
variance of β̃ is

Ṽ⁰β = ((1/n) Σ_{i=1}^{n} σ̃i⁻² xi x′i )⁻¹ = ((1/n) X′ D̃⁻¹ X)⁻¹,

which is consistent for Vβ as n → ∞. This estimator Ṽ⁰β is appropriate when the skedastic
regression (9.4) is correctly specified.
It may be the case that α′zi is only an approximation to the true conditional variance σ²i =
E(e²i | xi ). In this case we interpret α′zi as a linear projection of e²i on zi . β̃ should perhaps be
called a quasi-FGLS estimator of β. Its asymptotic variance is not that given in Theorem 9.2.1.
Instead,

Vβ = (E ((α′zi )⁻¹ xi x′i ))⁻¹ (E ((α′zi )⁻² σ²i xi x′i )) (E ((α′zi )⁻¹ xi x′i ))⁻¹.

Vβ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless σ²i = α′zi ,
Ṽ⁰β is inconsistent for Vβ .
An appropriate solution is to use a White-type estimator in place of Ṽ⁰β . This may be written as

Ṽβ = ((1/n) Σᵢ σ̃i⁻² xi x′i )⁻¹ ((1/n) Σᵢ σ̃i⁻⁴ ê²i xi x′i ) ((1/n) Σᵢ σ̃i⁻² xi x′i )⁻¹
   = ((1/n) X′ D̃⁻¹ X)⁻¹ ((1/n) X′ D̃⁻¹ D̂ D̃⁻¹ X) ((1/n) X′ D̃⁻¹ X)⁻¹

where D̂ = diag{ê²1 , ..., ê²n }. This estimator is robust to misspecification of the conditional variance,
and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not
exclusively estimate regression models by FGLS? This is a good question. There are three reasons.
First, FGLS estimation depends on specification and estimation of the skedastic regression.
Since the form of the skedastic regression is unknown, and it may be estimated with considerable
error, the estimated conditional variances may contain more noise than information about the true
conditional variances. In this case, FGLS can do worse than OLS in practice.
Second, individual estimated conditional variances may be negative, and this requires trimming
to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.
Third, and probably most importantly, OLS is a robust estimator of the parameter vector. It
is consistent not only in the regression model, but also under the assumptions of linear projection.
The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional
mean. If the equation of interest is a linear projection and not a conditional mean, then the OLS
and FGLS estimators will converge in probability to different limits as they will be estimating two
different projections. The FGLS probability limit will depend on the particular function selected for
the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger
assumption of a correct conditional mean, and the cost is a loss of robustness to misspecification.
The hypothesis of homoskedasticity can be written as the restriction
H_0 : α_1 = 0
in the regression (9.4). We may therefore test this hypothesis by estimating the skedastic regression (9.6) and constructing a Wald statistic. In the classic literature it is typical to impose the stronger assumption
that ei is independent of xi , in which case ξi is independent of xi and the asymptotic variance (9.5)
for α̃ simplifies to
V_α = (E(z_i z_i'))^{-1} E(ξ_i²). (9.9)
Hence the standard test of H0 is a classic F (or Wald) test for exclusion of all regressors from
the skedastic regression (9.6). The asymptotic distribution (9.7) and the asymptotic variance (9.9)
under independence show that this test has an asymptotic chi-square distribution.
Theorem 9.3.1 Under H0 and ei independent of xi , the Wald test of H0 is asymptotically χ2q .
Most tests for heteroskedasticity take this basic form. The main differences between popular
tests are which transformations of xi enter z i . Motivated by the form of the asymptotic variance
of the OLS estimator β̂, White (1980) proposed that the test for heteroskedasticity be based on setting z_i to equal all non-redundant elements of x_i, its squares, and all cross-products. Breusch and Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of z_i, and replaced E(ξ_i²) with 2σ⁴, which holds when e_i is N(0, σ²). If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.
It is important not to misuse tests for heteroskedasticity. They should not be used to determine whether to estimate a regression equation by OLS or FGLS, nor to determine whether classic or White standard errors should be reported. Hypothesis tests are not designed for these purposes. Rather, tests for heteroskedasticity should be used to answer the scientific question of whether or not the conditional variance is a function of the regressors. If this question is not of economic interest, then there is no value in conducting a test for heteroskedasticity.
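For illustration, here is a hedged sketch of White's test in Python. It uses the common n·R² (score) form of the statistic rather than the Wald form discussed above; the two are asymptotically equivalent under the null. The function assumes the first column of X is an intercept; names are illustrative.

import numpy as np
from scipy.stats import chi2

def white_test(y, X):
    """White (1980) test sketch: regress squared OLS residuals on the levels,
    squares and cross-products of the non-constant regressors and compute
    n*R^2, asymptotically chi-square with q degrees of freedom under the null
    of homoskedasticity (with independent errors)."""
    n = X.shape[0]
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e2 = (y - X @ beta) ** 2
    W = X[:, 1:]                                  # non-constant regressors
    cols = [np.ones(n)] + [W[:, j] for j in range(W.shape[1])]
    for j in range(W.shape[1]):
        for l in range(j, W.shape[1]):
            cols.append(W[:, j] * W[:, l])        # squares and cross-products
    Z = np.column_stack(cols)
    g = np.linalg.lstsq(Z, e2, rcond=None)[0]
    fit = Z @ g
    r2 = 1.0 - ((e2 - fit) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
    q = Z.shape[1] - 1
    stat = n * r2
    return stat, q, chi2.sf(stat, q)              # statistic, df, p-value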
y_i = x_i'β + e_i
by OLS, and form the Wald statistic W_n for γ = 0. It is easy (although somewhat tedious) to show that under the null hypothesis W_n →_d χ²_{m−1}. Thus the null is rejected at the α% level if W_n exceeds the upper α% tail critical value of the χ²_{m−1} distribution.
To implement the test, m must be selected in advance. Typically, small values such as m = 2,
3, or 4 seem to work best.
The RESET test appears to work well as a test of functional form against a wide range of
smooth alternatives. It is particularly powerful at detecting single-index models of the form
y_i = G(x_i'β) + e_i
where G(·) is a smooth “link” function. To see why this is the case, note that (9.10) may be written
as
y_i = x_i'β̃ + (x_i'β̂)² γ̃_1 + (x_i'β̂)³ γ̃_2 + ··· + (x_i'β̂)^m γ̃_{m−1} + ẽ_i
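A minimal sketch of the RESET test in Python, assuming the auxiliary regression adds powers 2 through m of the OLS fitted values and uses a classical (homoskedastic) Wald statistic; the function name and the default m = 3 are illustrative choices, not prescriptions from the text.

import numpy as np
from scipy.stats import chi2

def reset_test(y, X, m=3):
    """RESET sketch: augment the regression with yhat^2,...,yhat^m and test
    the added coefficients jointly; asymptotically chi-square(m-1) under the
    null of correct specification."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    yhat = X @ beta
    P = np.column_stack([yhat ** j for j in range(2, m + 1)])  # added regressors
    XA = np.column_stack([X, P])
    coef = np.linalg.solve(XA.T @ XA, XA.T @ y)
    resid = y - XA @ coef
    s2 = resid @ resid / (n - XA.shape[1])
    V = s2 * np.linalg.inv(XA.T @ XA)            # classical covariance matrix
    g = coef[k:]                                 # the gamma coefficients
    W = g @ np.linalg.solve(V[k:, k:], g)        # Wald statistic
    return W, chi2.sf(W, m - 1)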
• E (sgn (ei ) | xi ) = 0
be the average of absolute deviations. The least absolute deviations (LAD) estimator of β minimizes this function:
β̂ = argmin_β LAD_n(β).
Equivalently, β̂ satisfies the first-order condition
(1/n) Σ_{i=1}^n x_i sgn(y_i − x_i'β̂) = 0. (9.12)
where
V = (1/4) (E(x_i x_i' f(0 | x_i)))^{-1} (E x_i x_i') (E(x_i x_i' f(0 | x_i)))^{-1}
and f(e | x) is the conditional density of e_i given x_i = x.
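The following sketch computes the LAD estimator by direct numerical minimization of the criterion, starting from OLS. It is illustrative only; production implementations typically use linear programming or quantile-regression routines.

import numpy as np
from scipy.optimize import minimize

def lad(y, X):
    """Least absolute deviations sketch: minimize (1/n) sum_i |y_i - x_i'b|.

    Uses a derivative-free optimizer started at OLS, since the LAD criterion
    is not differentiable at the solution."""
    beta0 = np.linalg.solve(X.T @ X, X.T @ y)      # OLS starting value
    crit = lambda b: np.abs(y - X @ b).mean()      # LAD_n(beta)
    res = minimize(crit, beta0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    return res.x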
Proof of Theorem 9.5.1: Similar to NLLS, LAD is an optimization estimator. Let β_0 denote the true value of β.
The first step is to show that β̂ →_p β_0. The general nature of the proof is similar to that for the NLLS estimator, and is sketched here. For any fixed β, by the WLLN, LAD_n(β) →_p E|y_i − x_i'β|. Furthermore, it can be shown that this convergence is uniform in β. (Proving uniform convergence is more challenging than for the NLLS criterion since the LAD criterion is not differentiable in β.) It follows that β̂, the minimizer of LAD_n(β), converges in probability to β_0, the minimizer of E|y_i − x_i'β|.
Since sgn(a) = 1 − 2·1(a ≤ 0), (9.12) is equivalent to g_n(β̂) = 0, where g_n(β) = n^{-1} Σ_{i=1}^n g_i(β) and g_i(β) = x_i(1 − 2·1(y_i ≤ x_i'β)). Let g(β) = E g_i(β). We need three preliminary results. First, by the central limit theorem (Theorem 5.7.1),
√n (g_n(β_0) − g(β_0)) = n^{-1/2} Σ_{i=1}^n g_i(β_0) →_d N(0, E x_i x_i')
since E g_i(β_0) g_i(β_0)' = E x_i x_i'. Second, using the law of iterated expectations and the chain rule of differentiation,
∂/∂β' g(β) = ∂/∂β' E[x_i (1 − 2·1(y_i ≤ x_i'β))]
= −2 ∂/∂β' E[x_i E(1(e_i ≤ x_i'β − x_i'β_0) | x_i)]
= −2 ∂/∂β' E[x_i ∫_{−∞}^{x_i'β − x_i'β_0} f(e | x_i) de]
= −2 E[x_i x_i' f(x_i'β − x_i'β_0 | x_i)]
so
∂/∂β' g(β_0) = −2 E[x_i x_i' f(0 | x_i)].
Third, by a Taylor series expansion and the fact g(β_0) = 0,
g(β̂) ≈ (∂/∂β' g(β_0)) (β̂ − β_0).
Together,
√n (β̂ − β_0) ≈ (∂/∂β' g(β_0))^{-1} √n g(β̂)
= (−2 E[x_i x_i' f(0 | x_i)])^{-1} √n (g(β̂) − g_n(β̂))
≈ (1/2) (E[x_i x_i' f(0 | x_i)])^{-1} √n (g_n(β_0) − g(β_0))
→_d (1/2) (E[x_i x_i' f(0 | x_i)])^{-1} N(0, E x_i x_i')
= N(0, V).
The third line follows from an asymptotic empirical process argument and the fact that β̂ →_p β_0.
Qτ = inf {u : F (u) ≥ τ }
When F (u) is continuous and strictly monotonic, then F (Qτ ) = τ, so you can think of the quantile
as the inverse of the distribution function. The quantile Qτ is the value such that τ (percent) of
the mass of the distribution is less than Qτ . The median is the special case τ = .5.
The following alternative representation is useful. If the random variable U has τ'th quantile Q_τ, then
Q_τ = argmin_θ E ρ_τ(U − θ). (9.14)
y_i = x_i'β_τ + e_i
where e_i is the error defined to be the difference between y_i and its τ'th conditional quantile x_i'β_τ. By construction, the τ'th conditional quantile of e_i is zero; otherwise its properties are unspecified
without further restrictions.
Given the representation (9.14), the quantile regression estimator β̂_τ for β_τ solves the minimization problem
β̂_τ = argmin_β S_n^τ(β)
where
S_n^τ(β) = (1/n) Σ_{i=1}^n ρ_τ(y_i − x_i'β)
where
V_τ = τ(1 − τ) (E(x_i x_i' f(0 | x_i)))^{-1} (E x_i x_i') (E(x_i x_i' f(0 | x_i)))^{-1}.
In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error e_i is independent of x_i, then f(0 | x_i) = f(0), the unconditional density of e_i at 0, and we have the simplification
V_τ = (τ(1 − τ)/f(0)²) (E(x_i x_i'))^{-1}.
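A sketch of quantile regression by direct minimization of the check-function criterion, assuming the usual check function ρ_τ(u) = u(τ − 1{u < 0}); it generalizes the LAD sketch above (τ = 1/2). Names are illustrative, and dedicated linear-programming solvers are preferable in practice.

import numpy as np
from scipy.optimize import minimize

def quantile_regression(y, X, tau=0.5):
    """Quantile regression sketch: minimize
    S_n(beta) = (1/n) sum_i rho_tau(y_i - x_i'beta)
    with a derivative-free optimizer started at OLS."""
    def rho(u):
        return u * (tau - (u < 0))               # check function
    crit = lambda b: rho(y - X @ b).mean()
    beta0 = np.linalg.solve(X.T @ X, X.T @ y)
    res = minimize(crit, beta0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
    return res.x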
Exercises
Exercise 9.1 Suppose that y_i = g(x_i, θ) + e_i with E(e_i | x_i) = 0, θ̂ is the NLLS estimator, and V̂ is the estimate of var(θ̂). You are interested in the conditional mean function E(y_i | x_i = x) = g(x) at some x. Find an asymptotic 95% confidence interval for g(x).
Exercise 9.2 In Exercise 8.4, you estimated a cost function on a cross-section of electric companies.
The equation you estimated was
(a) Following Nerlove, add the variable (log Q_i)² to the regression. Do so. Assess the merits of this new specification using a hypothesis test. Do you agree with this modification?
(b) Now try a non-linear specification. Consider model (9.16) plus the extra term β6 zi , where
(c) Estimate the model by non-linear least squares. I recommend the concentration method:
Pick 10 (or more if you like) values of β7 in this range. For each value of β7 , calculate zi and
estimate the model by OLS. Record the sum of squared errors, and find the value of β7 for
which the sum of squared errors is minimized.
(d) Calculate standard errors for all the parameters (β1 , ..., β7 ).
Exercise 9.3 The data file cps78.dat contains 550 observations on 20 variables taken from the
May 1978 current population survey. Variables are listed in the file cps78.pdf. The goal of the
exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the
conditioning variables.
(a) Start by an OLS regression of LNWAGE on the other variables. Report coefficient estimates
and standard errors.
(b) Consider augmenting the model by squares and/or cross-products of the conditioning vari-
ables. Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may
re-estimate the model without these variables, if desired.
(d) Test whether the error variance is different for men and women. Interpret.
(e) Test whether the error variance is different for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general
heteroskedasticity and report the results.
(g) Using this model for the conditional variance, re-estimate the model from part (c) using
FGLS. Report the results.
(h) Do the OLS and FGLS estimates differ greatly? Note any interesting differences.
(i) Compare the estimated standard errors. Note any interesting differences.
Exercise 9.4 For any predictor g(xi ) for yi , the mean absolute error (MAE) is
E |yi − g(xi )| .
Show that the function g(x) which minimizes the MAE is the conditional median m (x) = med(yi |
xi ).
Chapter 10
The Bootstrap
Gn (u, F ) = Pr(Tn ≤ u | F )
We call G_n^*(u) = G_n(u, F_n) the bootstrap distribution, where F_n is an estimator of F. Bootstrap inference is based on G_n^*(u).
Let (yi∗ , x∗i ) denote random variables with the distribution Fn . A random sample from this dis-
tribution is called the bootstrap data. The statistic Tn∗ = Tn ((y1∗ , x∗1 ) , ..., (yn∗ , x∗n ) , Fn ) constructed
on this sample is a random variable with distribution G∗n . That is, Pr(Tn∗ ≤ u) = G∗n (u). We call
Tn∗ the bootstrap statistic. The distribution of Tn∗ is identical to that of Tn when the true CDF is
Fn rather than F.
The bootstrap distribution is itself random, as it depends on the sample through the estimator
Fn .
In the next sections we describe computation of the bootstrap distribution.
sample moment:
F_n(y, x) = (1/n) Σ_{i=1}^n 1(y_i ≤ y) 1(x_i ≤ x). (10.2)
Fn (y, x) is called the empirical distribution function (EDF). Fn is a nonparametric estimate of F.
Note that while F may be either discrete or continuous, Fn is by construction a step function.
The EDF is a consistent estimator of the CDF. To see this, note that for any (y, x), 1(y_i ≤ y)1(x_i ≤ x) is an iid random variable with expectation F(y, x). Thus by the WLLN (Theorem 5.4.2), F_n(y, x) →_p F(y, x). Furthermore, by the CLT (Theorem 5.7.1),
√n (F_n(y, x) − F(y, x)) →_d N(0, F(y, x)(1 − F(y, x))).
To see the effect of sample size on the EDF, in the Figure below, I have plotted the EDF and true CDF for random samples of size n = 25, 50, 100, and 500. The random draws are from the N(0, 1) distribution. For n = 25, the EDF is only a crude approximation to the CDF, but the approximation improves for larger n. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.
The EDF is a valid discrete probability distribution which puts probability mass 1/n at each
pair (yi , xi ), i = 1, ..., n. Notationally, it is helpful to think of a random pair (yi∗ , x∗i ) with the
distribution Fn . That is,
Pr(yi∗ ≤ y, x∗i ≤ x) = Fn (y, x).
We can easily calculate the moments of functions of (y_i^*, x_i^*):
E h(y_i^*, x_i^*) = ∫ h(y, x) dF_n(y, x)
= Σ_{i=1}^n h(y_i, x_i) Pr(y_i^* = y_i, x_i^* = x_i)
= (1/n) Σ_{i=1}^n h(y_i, x_i),
such a calculation is computationally infeasible. The popular alternative is to use simulation to ap-
proximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation,
with the following points of clarification:
• The sample size used for the simulation is the same as the original sample size n.
• The random vectors (yi∗ , x∗i ) are drawn randomly from the empirical distribution. This is
equivalent to sampling a pair (yi , xi ) randomly from the sample.
The bootstrap statistic Tn∗ = Tn ((y1∗ , x∗1 ) , ..., (yn∗ , x∗n ) , Fn ) is calculated for each bootstrap sam-
ple. This is repeated B times. B is known as the number of bootstrap replications. A theory
for the determination of the number of bootstrap replications B has been developed by Andrews
and Buchinsky (2000). It is desirable for B to be large, so long as the computational costs are
reasonable. B = 1000 typically suffices.
When the statistic T_n is a function of F, it is typically through dependence on a parameter. For example, the t-ratio (θ̂ − θ)/s(θ̂) depends on θ. As the bootstrap statistic replaces F with F_n, it similarly replaces θ with θ_n, the value of θ implied by F_n. Typically θ_n = θ̂, the parameter estimate. (When in doubt use θ̂.)
Sampling from the EDF is particularly easy. Since Fn is a discrete probability distribution
putting probability mass 1/n at each sample point, sampling from the EDF is equivalent to random
sampling a pair (yi , xi ) from the observed data with replacement. In consequence, a bootstrap
sample {(y1∗ , x∗1 ) , ..., (yn∗ , x∗n )} will necessarily have some ties and multiple values, which is generally
not a problem.
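A minimal sketch of this nonparametric (pairs) bootstrap in Python: resample observation indices with replacement, recompute the statistic on each bootstrap sample, and collect the B replications. The function stat and the variable names are placeholders.

import numpy as np

def bootstrap_statistics(y, X, stat, B=1000, seed=0):
    """Pairs bootstrap sketch: draw n observations with replacement B times
    and evaluate stat(y*, X*) on each bootstrap sample."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)       # indices sampled with replacement
        draws.append(stat(y[idx], X[idx]))
    return np.asarray(draws)

# Example: bootstrap standard error of a slope coefficient
# slope = lambda y, X: np.linalg.solve(X.T @ X, X.T @ y)[1]
# se_boot = bootstrap_statistics(y, X, slope).std(ddof=1)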
τ̂_n^* = θ̂^* − θ̂.
The bias-corrected estimator is
θ̃^* = θ̂ − τ̂_n^*
= θ̂ − (θ̂^* − θ̂)
= 2θ̂ − θ̂^*.
Note, in particular, that the bias-corrected estimator is not θ̂^*. Intuitively, the bootstrap performs the following experiment: suppose that θ̂ is the truth; then what is the average value of θ̂ calculated from such samples? The answer is θ̂^*. If this is lower than θ̂, this suggests that the estimator is downward-biased, so a bias-corrected estimator of θ should be larger than θ̂, and the best guess is the difference between θ̂ and θ̂^*. Similarly, if θ̂^* is higher than θ̂, then the estimator is upward-biased and the bias-corrected estimator should be lower than θ̂.
Let T_n = θ̂. The variance of θ̂ is
V_n = E(T_n − E T_n)².
A bootstrap standard error for θ̂ is the square root of the bootstrap estimate of variance, s^*(θ̂) = √(V̂_n^*).
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspect. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.
The interval C1 is a popular bootstrap confidence interval often used in empirical practice. This
is because it is easy to compute, simple to motivate, was popularized by Efron early in the history
of the bootstrap, and also has the feature that it is translation invariant. That is, if we define φ = f(θ) as the parameter of interest for a monotonically increasing function f, then the percentile method applied to this problem will produce the confidence interval [f(q_n^*(α/2)), f(q_n^*(1 − α/2))], which is a naturally good property.
However, as we show now, C1 is in a deep sense very poorly motivated.
It will be useful if we introduce an alternative definition of C1 . Let Tn (θ) = θ̂ − θ and let qn (α)
be the quantile function of its distribution. (These are the original quantiles, with θ subtracted.)
Then C1 can alternatively be written as
which generally is not 1 − α! There is one important exception: if θ̂ − θ_0 has a symmetric distribution about 0, then G_n(−u, F_0) = 1 − G_n(u, F_0), so
Pr(θ_0 ∈ C_1^0) = G_n(−q_n(α/2), F_0) − G_n(−q_n(1 − α/2), F_0)
= (1 − G_n(q_n(α/2), F_0)) − (1 − G_n(q_n(1 − α/2), F_0))
= (1 − α/2) − (1 − (1 − α/2))
= 1 − α
and this idealized confidence interval is accurate. Therefore, C_1^0 and C_1 are designed for the case that θ̂ has a symmetric distribution about θ_0.
When θ̂ does not have a symmetric distribution, C1 may perform quite poorly.
However, by the translation invariance argument presented above, it also follows that if there
exists some monotonically increasing transformation f (·) such that f (θ̂) is symmetrically distributed
about f (θ0 ), then the idealized percentile bootstrap method will be accurate.
Based on these arguments, many argue that the percentile interval should not be used unless
the sampling distribution is close to unbiased and symmetric.
The problems with the percentile method can be circumvented, at least in principle, by an
alternative method.
Let Tn (θ) = θ̂ − θ. Then
Notice that generally this is very different from the Efron interval C1 ! They coincide in the special
case that G∗n (u) is symmetric about θ̂, but otherwise they differ.
Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics T_n^* = θ̂^* − θ̂, which are centered at the sample estimate θ̂. These are sorted to yield the quantile estimates q̂_n^*(.025) and q̂_n^*(.975). The 95% confidence interval is then [θ̂ − q̂_n^*(.975), θ̂ − q̂_n^*(.025)].
This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not
widely used in practice.
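The following sketch computes both the Efron percentile interval and the alternative percentile interval just described from a vector of bootstrap replications; theta_star would come from a bootstrap loop such as the one sketched earlier, and all names are illustrative.

import numpy as np

def percentile_intervals(theta_hat, theta_star, alpha=0.05):
    """Return (i) the Efron percentile interval and (ii) the alternative
    percentile interval based on T_n* = theta* - theta_hat."""
    lo, hi = np.quantile(theta_star, [alpha / 2, 1 - alpha / 2])
    efron = (lo, hi)
    q_lo, q_hi = lo - theta_hat, hi - theta_hat          # quantiles of theta* - theta_hat
    alternative = (theta_hat - q_hi, theta_hat - q_lo)   # i.e. (2*theta_hat - hi, 2*theta_hat - lo)
    return efron, alternative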
Thus c = qn (α). Since this is unknown, a bootstrap test replaces qn (α) with the bootstrap estimate
qn∗ (α), and the test rejects if Tn (θ0 ) < qn∗ (α).
Similarly, if the alternative is H1 : θ > θ0 , the bootstrap test rejects if Tn (θ0 ) > qn∗ (1 − α).
Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics T_n^* = (θ̂^* − θ̂)/s(θ̂^*). Note, and this is important, that the bootstrap test statistic is centered at the estimate θ̂, and the standard error s(θ̂^*) is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles q̂_n^*(α) and/or q̂_n^*(1 − α).
Let T_n(θ) = (θ̂ − θ)/s(θ̂). Then taking the intersection of two one-sided intervals,
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability
that θ0 is below the left endpoint approximately equals the probability that θ0 is above the right
endpoint, each α/2.
Computationally, this is based on the critical values from the one-sided hypothesis tests, dis-
cussed above.
Note that
which is a symmetric distribution function. The ideal critical value c = qn (α) solves the equation
Gn (qn (α)) = 1 − α.
Computationally, q_n^*(α) is estimated from a bootstrap simulation by sorting the bootstrap t-statistics |T_n^*| = |θ̂^* − θ̂|/s(θ̂^*), and taking the upper α% quantile. The bootstrap test rejects if |T_n(θ_0)| > q_n^*(α).
Let
C4 = [θ̂ − s(θ̂)qn∗ (α), θ̂ + s(θ̂)qn∗ (α)],
where qn∗ (α) is the bootstrap critical value for a two-sided hypothesis test. C4 is called the symmetric
percentile-t interval. It is designed to work well since
Pr(θ_0 ∈ C_4) = Pr(θ̂ − s(θ̂)q_n^*(α) ≤ θ_0 ≤ θ̂ + s(θ̂)q_n^*(α))
= Pr(|T_n(θ_0)| < q_n^*(α))
≈ Pr(|T_n(θ_0)| < q_n(α))
= 1 − α.
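A sketch of the equal-tailed and symmetric percentile-t intervals, assuming arrays of bootstrap estimates and their bootstrap-sample standard errors are available (for example from a bootstrap loop that also computes a standard error on each resample). Names are illustrative.

import numpy as np

def percentile_t_intervals(theta_hat, se_hat, theta_star, se_star, alpha=0.05):
    """Equal-tailed and symmetric percentile-t intervals from bootstrap draws.
    The bootstrap t-statistics are centered at the sample estimate."""
    t_star = (theta_star - theta_hat) / se_star
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    equal_tailed = (theta_hat - se_hat * q_hi, theta_hat - se_hat * q_lo)
    q_abs = np.quantile(np.abs(t_star), 1 - alpha)
    symmetric = (theta_hat - se_hat * q_abs, theta_hat + se_hat * q_abs)
    return equal_tailed, symmetric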
or some other asymptotically chi-square statistic. Thus here Tn (θ) = Wn (θ). The ideal test rejects
if Wn ≥ qn (α), where qn (α) is the (1 − α)% quantile of the distribution of Wn . The bootstrap test
rejects if Wn ≥ qn∗ (α), where qn∗ (α) is the (1 − α)% quantile of the distribution of
W_n^* = n (θ̂^* − θ̂)' V̂_θ^{*−1} (θ̂^* − θ̂).
Computationally, the critical value q_n^*(α) is found as the quantile from simulated values of W_n^*. Note in the simulation that the Wald statistic is a quadratic form in (θ̂^* − θ̂), not (θ̂^* − θ_0). [This is a typical mistake made by practitioners.]
In some cases, such as when T_n is a t-ratio, then σ² = 1. In other cases σ² is unknown. Equivalently, writing T_n ∼ G_n(u, F), then for each u and F
lim_{n→∞} G_n(u, F) = Φ(u/σ),
or
G_n(u, F) = Φ(u/σ) + o(1). (10.4)
While (10.4) says that G_n converges to Φ(u/σ) as n → ∞, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size n. A better asymptotic approximation may be obtained through an asymptotic expansion.
The following notation will be helpful. Let an be a sequence.
[Side Note: When T_n = √n (X̄_n − μ)/σ, a standardized sample mean, then
g_1(u) = −(1/6) κ_3 (u² − 1) φ(u)
g_2(u) = −((1/24) κ_4 (u³ − 3u) + (1/72) κ_3² (u⁵ − 10u³ + 15u)) φ(u)
κ_3 = E(X − μ)³/σ³
κ_4 = E(X − μ)⁴/σ⁴ − 3,
the standardized skewness and excess kurtosis of the distribution of X. Note that when κ_3 = 0 and κ_4 = 0, then g_1 = 0 and g_2 = 0, so the second-order Edgeworth expansion corresponds to the normal distribution.]
Since F_n converges to F at rate √n, and g_1 is continuous with respect to F, the difference (g_1(u, F_n) − g_1(u, F_0)) converges to 0 at rate √n. Heuristically,
g_1(u, F_n) − g_1(u, F_0) ≈ (∂/∂F) g_1(u, F_0) (F_n − F_0) = O(n^{-1/2}).
The "derivative" (∂/∂F) g_1(u, F) is only heuristic, as F is a function. We conclude that
or
Pr (Tn∗ ≤ u) = Pr (Tn ≤ u) + O(n−1 ),
which is an improved rate of convergence over the asymptotic test (which converged at rate
O(n−1/2 )). This rate can be used to show that one-tailed bootstrap inference based on the t-
ratio achieves a so-called asymptotic refinement — the Type I error of the test converges at a faster
rate than an analogous asymptotic test.
Pr (|y| ≤ u) = Pr (−u ≤ y ≤ u)
= Pr (y ≤ u) − Pr (y ≤ −u)
= H(u) − H(−u).
Similarly, if Tn has exact distribution Gn (u, F ), then |Tn | has the distribution function
errors! This means that in contexts where standard errors are not available or are difficult to
calculate, the percentile bootstrap methods provide an attractive inference method.
The bad news is that the rate of convergence is disappointing. It is no better than the rate
obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available,
it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic
methods.
Based on these arguments, the theoretical literature (e.g. Hall, 1992, Horowitz, 2001) tends to
advocate the use of the percentile-t bootstrap methods rather than percentile methods.
A conditional distribution with these features will preserve the main important features of the data.
This can be achieved using a two-point distribution of the form
Pr(e_i^* = ((1 + √5)/2) ê_i) = (√5 − 1)/(2√5)
Pr(e_i^* = ((1 − √5)/2) ê_i) = (√5 + 1)/(2√5)
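A sketch of drawing wild-bootstrap errors from this two-point distribution; ehat is the vector of residuals and the names are illustrative.

import numpy as np

def wild_bootstrap_errors(ehat, B=1000, seed=0):
    """Wild bootstrap draws with the two-point distribution above.

    Each e*_i equals ehat_i * (1+sqrt(5))/2 with probability (sqrt(5)-1)/(2*sqrt(5)),
    and ehat_i * (1-sqrt(5))/2 otherwise, so that conditionally on the data
    E(e*_i) = 0 and E(e*_i^2) = ehat_i^2.  Returns a (B, n) array."""
    rng = np.random.default_rng(seed)
    s5 = np.sqrt(5.0)
    p = (s5 - 1) / (2 * s5)
    a, b = (1 + s5) / 2, (1 - s5) / 2
    u = rng.random((B, len(ehat)))
    scale = np.where(u < p, a, b)
    return scale * ehat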
Exercises
Exercise 10.1 Let Fn (x) denote the EDF of a random sample. Show that
√n (F_n(x) − F_0(x)) →_d N(0, F_0(x)(1 − F_0(x))).
Exercise 10.2 Take a random sample {y1 , ..., yn } with μ = Eyi and σ 2 = var (yi ) . Let the statistic
of interest be the sample mean Tn = y n . Find the population moments ETn and var (Tn ) . Let
{y1∗ , ..., yn∗ } be a random sample from the empirical distribution function and let Tn∗ = y ∗n be its
sample mean. Find the bootstrap moments ETn∗ and var (Tn∗ ) .
Exercise 10.3 Consider the following bootstrap procedure for a regression of yi on xi . Let β̂
denote the OLS estimator from the regression of y on X, and ê = y − X β̂ the OLS residuals.
(a) Draw a random vector (x^*, e^*) from the pair {(x_i, ê_i) : i = 1, ..., n}. That is, draw a random integer i′ from [1, 2, ..., n], and set x^* = x_{i′} and e^* = ê_{i′}. Set y^* = x^{*'}β̂ + e^*. Draw (with replacement) n such vectors, creating a random bootstrap data set (y^*, X^*).
(b) Regress y^* on X^*, yielding OLS estimates β̂^* and any other statistic of interest.
Show that this bootstrap procedure is (numerically) identical to the non-parametric boot-
strap.
Exercise 10.4 Consider the following bootstrap procedure. Using the non-parametric bootstrap,
generate bootstrap samples, calculate the estimate θ̂∗ on these samples and then calculate
Tn∗ = (θ̂∗ − θ̂)/s(θ̂),
where s(θ̂) is the standard error in the original data. Let qn∗ (.05) and qn∗ (.95) denote the 5% and
95% quantiles of T_n^*, and define the bootstrap confidence interval
C = [θ̂ − s(θ̂)q_n^*(.95), θ̂ − s(θ̂)q_n^*(.05)].
Show that C exactly equals the Alternative percentile interval (not the percentile-t interval).
Exercise 10.5 You want to test H0 : θ = 0 against H1 : θ > 0. The test for H0 is to reject if
Tn = θ̂/s(θ̂) > c where c is picked so that Type I error is α. You do this as follows. Using the non-
parametric bootstrap, you generate bootstrap samples, calculate the estimates θ̂∗ on these samples
and then calculate
Tn∗ = θ̂∗ /s(θ̂∗ ).
Let qn∗ (.95) denote the 95% quantile of Tn∗ . You replace c with qn∗ (.95), and thus reject H0 if
Tn = θ̂/s(θ̂) > qn∗ (.95). What is wrong with this procedure?
Exercise 10.6 Suppose that in an application, θ̂ = 1.2 and s(θ̂) = .2. Using the non-parametric
bootstrap, 1000 samples are generated from the bootstrap distribution, and θ̂∗ is calculated on each
sample. The θ̂∗ are sorted, and the 2.5% and 97.5% quantiles of the θ̂∗ are .75 and 1.3, respectively.
(a) Report the 95% Efron Percentile interval for θ.
(b) Report the 95% Alternative Percentile interval for θ.
(c) With the given information, can you report the 95% Percentile-t interval for θ?
Exercise 10.7 The datafile hprice1.dat contains data on house prices (sales), with variables
listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot
size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression
coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.
Chapter 11
NonParametric Regression
11.1 Introduction
When components of x are continuously distributed then the conditional expectation function
E (yi | xi = x) = m(x)
can take any nonlinear shape. Unless an economic model restricts the form of m(x) to a parametric
function, the CEF is inherently nonparametric, meaning that the function m(x) is an element
of an infinite dimensional class. In this situation, how can we estimate m(x)? What is a suitable
method, if we acknowledge that m(x) is nonparametric?
There are two main classes of nonparametric regression estimators: kernel estimators, and series
estimators. In this chapter we introduce kernel methods.
To get started, suppose that there is a single real-valued regressor xi . We consider the case of
vector-valued regressors later.
where
w_i(x) = 1(|x_i − x| ≤ h) / Σ_{j=1}^n 1(|x_j − x| ≤ h).
Notice that Σ_{i=1}^n w_i(x) = 1, so (11.2) is a weighted average of the y_i.
It is possible that for some values of x there are no values of x_i such that |x_i − x| ≤ h, which implies that Σ_{i=1}^n 1(|x_i − x| ≤ h) = 0. In this case the estimator (11.1) is undefined for those values of x.
To visualize, Figure 11.1 displays a scatter plot of 100 observations on a random pair (yi , xi )
generated by simulation1 . (The observations are displayed as the open circles.) The estimator
(11.1) of the CEF m(x) at x = 2 with h = 1/2 is the average of the yi for the observations
such that xi falls in the interval [1.5 ≤ xi ≤ 2.5]. (Our choice of h = 1/2 is somewhat arbitrary.
Selection of h will be discussed later.) The estimate is m̂(2) = 5.16 and is shown on Figure 11.1 by
the first solid square. We repeat the calculation (11.1) for x = 3, 4, 5, and 6, which is equivalent to
partitioning the support of xi into the regions [1.5, 2.5], [2.5, 3.5], [3.5, 4.5], [4.5, 5.5], and [5.5, 6.5].
These partitions are shown in Figure 11.1 by the vertical dotted lines, and the estimates (11.1) by
the solid squares.
These estimates m̂(x) can be viewed as estimates of the CEF m(x). Sometimes called a binned estimator, this is a step-function approximation to m(x) and is displayed in Figure 11.1 by the
estimator, this is a step-function approximation to m(x) and is displayed in Figure 11.1 by the
horizontal lines passing through the solid squares. This estimate roughly tracks the central tendency
of the scatter of the observations (yi , xi ). However, the huge jumps in the estimated step function
at the edges of the partitions are disconcerting, counter-intuitive, and clearly an artifact of the
discrete binning.
If we take another look at the estimation formula (11.1) there is no reason why we need to evaluate (11.1) only on a coarse grid. We can evaluate m̂(x) for any set of values of x. In particular, we can evaluate (11.1) on a fine grid of values of x and thereby obtain a smoother estimate of the CEF. This estimator with h = 1/2 is displayed in Figure 11.1 with the solid line. This is a generalization of the binned estimator and by construction passes through the solid squares.
The bandwidth h determines the degree of smoothing. Larger values of h increase the width
of the bins in Figure 11.1, thereby increasing the smoothness of the estimate m̂(x) as a function
of x. Smaller values of h decrease the width of the bins, resulting in less smooth conditional mean
estimates.
1 The distribution is x_i ∼ N(4, 1) and y_i | x_i ∼ N(m(x_i), 16) with m(x) = 10 log(x).
The uniform density k0 (u) is a special case of what is known as a kernel function.
Essentially, a kernel function is a probability density function which is bounded and symmetric
about zero. A generalization of (11.1) is obtained by replacing the uniform kernel with any other
kernel function: µ ¶
Pn xi − x
i=1 k yi
h
b
m(x) = µ ¶ . (11.4)
Pn xi − x
i=1 k
h
The estimator (11.4) also takes the form (11.2) with
w_i(x) = k((x_i − x)/h) / Σ_{j=1}^n k((x_j − x)/h).
The estimator (11.4) is known as the Nadaraya-Watson estimator, the kernel regression
estimator, or the local constant estimator.
The bandwidth h plays the same role in (11.4) as it does in (11.1). Namely, larger values of h will result in estimates m̂(x) which are smoother in x, and smaller values of h will result in estimates which are more erratic. It might be helpful to consider the two extreme cases h → 0 and h → ∞. As h → 0 we can see that m̂(x_i) → y_i (if the values of x_i are unique), so that m̂(x) is simply the scatter of y_i on x_i. In contrast, as h → ∞ then for all x, m̂(x) → ȳ, the sample mean, so that the nonparametric CEF estimate is a constant function. For intermediate values of h, m̂(x) will lie between these two extreme cases.
The uniform density is not a good kernel choice as it produces discontinuous CEF estimates.
To obtain a continuous CEF estimate m̂(x) it is necessary for the kernel k(u) to be continuous. The two most commonly used choices are the Epanechnikov kernel
k_1(u) = (3/4)(1 − u²) 1(|u| ≤ 1)
and the normal or Gaussian kernel
k_φ(u) = (1/√(2π)) exp(−u²/2).
For computation of the CEF estimate (11.4) the scale of the kernel is not important so long as the bandwidth is selected appropriately. That is, for any b > 0, k_b(u) = b^{-1} k(u/b) is a valid kernel function with the identical shape as k(u). Kernel regression with the kernel k(u) and bandwidth h is identical to kernel regression with the kernel k_b(u) and bandwidth h/b.
The estimate (11.4) using the Epanechnikov kernel and h = 1/2 is also displayed in Figure 11.1
with the dashed line. As you can see, this estimator appears to be much smoother than that using
the uniform kernel.
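A minimal sketch of the Nadaraya-Watson estimator (11.4) with the Epanechnikov kernel; x_eval is a grid of evaluation points and all names are illustrative.

import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel k1(u) = 0.75*(1 - u^2) on |u| <= 1."""
    return 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)

def nadaraya_watson(x_eval, x, y, h, kernel=epanechnikov):
    """Nadaraya-Watson estimator: at each evaluation point, a kernel-weighted
    average of the y_i.  Points with zero total kernel weight return NaN."""
    u = (x[None, :] - x_eval[:, None]) / h       # matrix of (x_i - x)/h values
    w = kernel(u)
    denom = w.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(denom > 0, (w * y[None, :]).sum(axis=1) / denom, np.nan)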
Two important constants associated with a kernel function k(u) are its variance σ_k² and roughness R_k, which are defined as
σ_k² = ∫_{−∞}^{∞} u² k(u) du (11.5)
R_k = ∫_{−∞}^{∞} k(u)² du. (11.6)
Some common kernels and their roughness and variance values are reported in Table 9.1.
This is a weighted regression of yi on an intercept only. Without the weights, this estimation
problem reduces to the sample mean. The NW estimator generalizes this to a local mean.
This interpretation suggests that we can construct alternative nonparametric estimators of the
CEF by alternative local approximations. Many such local approximations are possible. A popular
choice is the local linear (LL) approximation. Instead of approximating m(x) locally as a constant,
the local linear approximation approximates the CEF locally by a linear function, and estimates
this local approximation by locally weighted least squares.
Specifically, for each x we solve the following minimization problem:
{α̂(x), β̂(x)} = argmin_{α,β} Σ_{i=1}^n k((x_i − x)/h) (y_i − α − β(x_i − x))².
The local linear estimator of m(x) is the estimated intercept
m̂(x) = α̂(x)
and the local linear estimator of the regression derivative ∇m(x) is the estimated slope coefficient
∇m̂(x) = β̂(x).
where z_i(x) = (1, x_i − x)' and
k_i(x) = k((x_i − x)/h).
Then
(α̂(x), β̂(x))' = (Σ_{i=1}^n k_i(x) z_i(x) z_i(x)')^{-1} Σ_{i=1}^n k_i(x) z_i(x) y_i
= (Z'KZ)^{-1} Z'Ky
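A sketch of the local linear estimator: at each evaluation point it solves the kernel-weighted least squares problem above and returns the intercept (the CEF estimate) and the slope (the derivative estimate). It assumes enough observations receive positive kernel weight near each evaluation point; names are illustrative.

import numpy as np

def local_linear(x_eval, x, y, h):
    """Local linear estimator sketch with an Epanechnikov kernel."""
    k = lambda u: 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)
    m_hat, d_hat = [], []
    for x0 in x_eval:
        w = k((x - x0) / h)
        Z = np.column_stack([np.ones_like(x), x - x0])   # z_i(x) = (1, x_i - x)'
        A = Z.T @ (w[:, None] * Z)                       # Z'KZ
        b = Z.T @ (w * y)                                # Z'Ky
        coef = np.linalg.solve(A, b)
        m_hat.append(coef[0])                            # alpha_hat(x)
        d_hat.append(coef[1])                            # beta_hat(x)
    return np.array(m_hat), np.array(d_hat)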
ê_i = y_i − m̂(x_i).
As a general rule, but especially when the bandwidth h is small, it is hard to view ê_i as a good measure of the fit of the regression. As h → 0 then m̂(x_i) → y_i and therefore ê_i → 0. This clearly indicates overfitting as the true error is not zero. In general, since m̂(x_i) is a local average which includes y_i, the fitted value will be necessarily close to y_i and the residual ê_i small, and the degree of this overfitting increases as h decreases.
A standard solution is to measure the fit of the regression at x = x_i by re-estimating the model excluding the i'th observation. For Nadaraya-Watson regression, the leave-one-out estimator of m(x) excluding observation i is
m̃_{−i}(x) = [Σ_{j≠i} k((x_j − x)/h) y_j] / [Σ_{j≠i} k((x_j − x)/h)].
Notationally, the “−i” subscript is used to indicate that the i’th observation is omitted.
The leave-one-out predicted value for y_i at x = x_i equals
ỹ_i = m̃_{−i}(x_i) = [Σ_{j≠i} k((x_j − x_i)/h) y_j] / [Σ_{j≠i} k((x_j − x_i)/h)].
The leave-one-out residuals (or prediction errors) are the difference between the leave-one-out pre-
dicted values and the actual observation
ẽi = yi − ỹi .
Since ỹi is not a function of yi , there is no tendency for ỹi to overfit for small h. Consequently, ẽi
is a good measure of the fit of the estimated nonparametric regression.
Similarly, the leave-one-out local-linear residual is ẽ_i = y_i − α̃_i with
(α̃_i, β̃_i)' = (Σ_{j≠i} k_{ij} z_{ij} z_{ij}')^{-1} Σ_{j≠i} k_{ij} z_{ij} y_j,
where
z_{ij} = (1, x_j − x_i)'
and
k_{ij} = k((x_j − x_i)/h).
where fx (x) is the marginal density of xi . Notice that we have defined the IMSE as an integral with
respect to the density fx (x). Other weight functions could be used, but it turns out that this is a
convenient choice.
The IMSE is closely related to the MSFE of Section 4.9. Let (y_{n+1}, x_{n+1}) be out-of-sample observations (and thus independent of the sample) and consider predicting y_{n+1} given x_{n+1} and the nonparametric estimate m̂(x, h). The natural point estimate for y_{n+1} is m̂(x_{n+1}, h), which has mean-squared forecast error
MSFE_n(h) = E(y_{n+1} − m̂(x_{n+1}, h))²
= E(e_{n+1} + m(x_{n+1}) − m̂(x_{n+1}, h))²
= σ² + E(m(x_{n+1}) − m̂(x_{n+1}, h))²
= σ² + ∫ E(m̂(x, h) − m(x))² f_x(x) dx
where the final equality uses the fact that x_{n+1} is independent of m̂(x, h). We thus see that
MSFE_n(h) = σ² + IMSE_n(h).
Since σ² is a constant independent of the bandwidth h, MSFE_n(h) and IMSE_n(h) are equivalent measures of the fit of the nonparametric regression.
The optimal bandwidth h is the value which minimizes IM SEn (h) (or equivalently M SF En (h)).
While these functions are unknown, we learned in Theorem 4.9.1 that (at least in the case of linear
regression) M SF En can be estimated by the sample mean-squared prediction errors. It turns out
that this fact extends to nonparametric regression. The nonparametric leave-one-out residuals are
ẽ_i(h) = y_i − m̃_{−i}(x_i, h)
where we are being explicit about the dependence on the bandwidth h. The mean of the squared leave-one-out residuals is
CV(h) = (1/n) Σ_{i=1}^n ẽ_i(h)².
This function of h is known as the cross-validation criterion.
The cross-validation bandwidth ĥ is the value which minimizes CV(h):
ĥ = argmin_{h ≥ h̲} CV(h) (11.7)
for some h̲ > 0. The restriction h ≥ h̲ is imposed so that CV(h) is not evaluated over unreasonably small bandwidths.
There is not an explicit solution to the minimization problem (11.7), so it must be solved
numerically. A typical practical method is to create a grid of values for h, e.g. [h1 , h2 , ..., hJ ],
evaluate CV (hj ) for j = 1, ..., J, and set
ĥ = argmin_{h ∈ [h_1, h_2, ..., h_J]} CV(h).
Evaluation using a coarse grid is typically sufficient for practical application. Plots of CV (h) against
h are a useful diagnostic tool to verify that the minimum of CV (h) has been obtained.
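A sketch of grid-based cross-validation for the Nadaraya-Watson bandwidth, computing the leave-one-out criterion CV(h) by zeroing the diagonal of the kernel weight matrix; the grid and all names are illustrative.

import numpy as np

def cv_bandwidth(x, y, grid):
    """Leave-one-out cross-validation for the NW bandwidth over a grid."""
    k = lambda u: 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)   # Epanechnikov kernel
    cv = np.empty(len(grid))
    for j, h in enumerate(grid):
        w = k((x[None, :] - x[:, None]) / h)
        np.fill_diagonal(w, 0.0)                 # leave observation i out
        denom = w.sum(axis=1)
        with np.errstate(invalid="ignore", divide="ignore"):
            m_loo = np.where(denom > 0, (w @ y) / denom, np.nan)
        cv[j] = np.nanmean((y - m_loo) ** 2)     # observations with no neighbours are skipped
    return cv, grid[np.nanargmin(cv)]

# Example: grid = np.arange(0.2, 3.0, 0.01); cv, h_cv = cv_bandwidth(x, y, grid)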
We said above that the cross-validation criterion is an estimator of the MSFE. This claim is
based on the following result.
Theorem 11.6.1
Theorem 11.6.1 shows that CV (h) is an unbiased estimator of IM SEn−1 (h) + σ 2 . The first
term, IM SEn−1 (h), is the integrated MSE of the nonparametric estimator using a sample of size
n − 1. If n is large, IM SEn−1 (h) and IM SEn (h) will be nearly identical, so CV (h) is essentially
unbiased as an estimator of IM SEn (h) + σ 2 . Since the second term (σ 2 ) is unaffected by the
bandwidth h, it is irrelevant for the problem of selection of h. In this sense we can view CV (h)
as an estimator of the IMSE, and more importantly we can view the minimizer of CV (h) as an
estimate of the minimizer of IM SEn (h).
To illustrate, Figure 11.3 displays the cross-validation criteria CV (h) for the Nadaraya-Watson
and Local Linear estimators using the data from Figure 11.1, both using the Epanechnikov kernel.
The CV functions are computed on a grid with intervals 0.01. The CV-minimizing bandwidths are
h = 1.09 for the Nadaraya-Watson estimator and h = 1.59 for the local linear estimator. Figure
11.3 shows the minimizing bandwidths by the arrows. It is not surprising that the CV criteria
recommend a larger bandwidth for the LL estimator than for the NW estimator, as for a given bandwidth the LL estimator is less biased and so can accommodate more smoothing.
The CV criterion can also be used to select between different nonparametric estimators. The
CV-selected estimator is the one with the lowest minimized CV criterion. For example, in Figure
11.3, the NW estimator has a minimized CV criterion of 16.88, while the LL estimator has a
minimized CV criterion of 16.81. Since the LL estimator achieves a lower value of the CV criterion,
LL is the CV-selected estimator. The difference (0.07) is small, suggesting that the two estimators
are near equivalent in IMSE.
Figure 11.4 displays the fitted CEF estimates (NW and LL) using the bandwidths selected by
cross-validation. Also displayed is the true CEF m(x) = 10 ln(x). Notice that the nonparametric
Figure 11.3: Cross-Validation Criteria, Nadaraya-Watson Regression and Local Linear Regression
estimators with the CV-selected bandwidths (and especially the LL estimator) track the true CEF
quite well.
Proof of Theorem 11.6.1. Observe that m(x_i) − m̃_{−i}(x_i, h) is a function only of (x_1, ..., x_n) and (e_1, ..., e_n) excluding e_i, and is thus uncorrelated with e_i. Since ẽ_i(h) = m(x_i) − m̃_{−i}(x_i, h) + e_i, then
E(CV(h)) = E(ẽ_i(h)²)
= E(e_i²) + E(m̃_{−i}(x_i, h) − m(x_i))² + 2E((m̃_{−i}(x_i, h) − m(x_i)) e_i)
= σ² + E(m̃_{−i}(x_i, h) − m(x_i))². (11.9)
The second term is an expectation over the random variables x_i and m̃_{−i}(x, h), which are independent as the second is not a function of the i'th observation. Thus taking the conditional expectation given the sample excluding the i'th observation, this is the expectation over x_i only, which is the integral with respect to its density
E_{−i}(m̃_{−i}(x_i, h) − m(x_i))² = ∫ (m̃_{−i}(x, h) − m(x))² f_x(x) dx,
whose expectation is IMSE_{n−1}(h).
where σ_k² and R_k are defined in (11.5) and (11.6). For the Nadaraya-Watson estimator
B(x) = (1/2) m''(x) + f_x(x)^{-1} f_x'(x) m'(x)
and for the local linear estimator
B(x) = (1/2) m''(x).
There are several interesting features about the asymptotic distribution which are noticeably different than for parametric estimators. First, the estimator converges at the rate √(nh), not √n. Since h → 0, √(nh) diverges slower than √n; thus the nonparametric estimator converges more slowly than a parametric estimator. Second, the asymptotic distribution contains a non-negligible bias term h²σ_k²B(x). This term asymptotically disappears since h → 0. Third, the assumptions that nh → ∞ and h → 0 mean that the estimator is consistent for the CEF m(x).
The fact that the estimator converges at the rate √(nh) has led to the interpretation of nh as the "effective sample size". This is because the number of observations being used to construct m̂(x) is proportional to nh, not n as for a parametric estimator.
It is helpful to understand that the nonparametric estimator has a reduced convergence rate
because the object being estimated — m(x) — is nonparametric. This is harder than estimating a
finite dimensional parameter, and thus comes at a cost.
Unlike parametric estimation, the asymptotic distribution of the nonparametric estimator in-
cludes a term representing the bias of the estimator. The asymptotic distribution (11.10) shows
the form of this bias. Not only is it proportional to the squared bandwidth h2 (the degree of
smoothing), it is proportional to the function B(x) which depends on the slope and curvature of
the CEF m(x). Interestingly, when m(x) is constant then B(x) = 0 and the kernel estimator has no
asymptotic bias. The bias is essentially increasing in the curvature of the CEF function m(x). This
is because the local averaging smooths m(x), and the smoothing induces more bias when m(x) is
curved.
Theorem 11.7.1 shows that the asymptotic distributions of the NW and LL estimators are
similar, with the only difference arising in the bias function B(x). The bias term for the NW
estimator has an extra component which depends on the first derivative of the CEF m(x) while the
bias term of the LL estimator is invariant to the first derivative. The fact that the bias formula for
the LL estimator is simpler and is free of dependence on the first derivative of m(x) suggests that
the LL estimator will generally have smaller bias than the NW estimator (but this is not a precise
ranking). Since the asymptotic variances in the two distributions are the same, this means that the
LL estimator achieves a reduced bias without an effect on asymptotic variance. This analysis has
led to the general preference for the LL estimator over the NW estimator in the nonparametrics
literature.
One implication of Theorem 11.7.1 is that we can define the asymptotic MSE (AMSE) of m̂(x) as the squared bias plus the asymptotic variance
AMSE(m̂(x)) = (h²σ_k²B(x))² + R_k σ²(x)/(n h f_x(x)). (11.11)
which when solved for h yields h = n^{-1/5}. What this means is that for AMSE-efficient estimation of m(x), the optimal rate for the bandwidth is h ∝ n^{-1/5}.
This result means that the bandwidth should take the form h = c n^{-1/5}. The optimal constant c depends on the kernel k, the bias function B(x) and the marginal density f_x(x). A common mis-interpretation is to set h = n^{-1/5}, which is equivalent to setting c = 1 and is completely arbitrary. Instead, an empirical bandwidth selection rule such as cross-validation should be used in practice.
When h = c n^{-1/5} we can rewrite the asymptotic distribution (11.10) as
n^{2/5} (m̂(x) − m(x)) →_d N(c² σ_k² B(x), R_k σ²(x)/(c f_x(x))).
That is, the estimator is asymptotically normal without a bias component. Not having an asymptotic bias component is convenient for some theoretical manipulations, so many authors impose the undersmoothing condition h = o(n^{-1/5}) to ensure this situation. This convenience comes at a cost. First, the resulting estimator is inefficient, as its convergence rate O_p(n^{-(1-α)/2}) is slower than O_p(n^{-2/5}) since α > 1/5. Second, the distribution theory is an inherently misleading approximation as it misses a key ingredient of nonparametric estimation: the trade-off between bias and variance. The approximation (11.10) is superior precisely because it contains the asymptotic bias component, which is a realistic implication of nonparametric estimation. Undersmoothing assumptions should be avoided when possible.
Even if the conditional mean m(x) is parametrically specified, it is natural to view σ 2 (x) as inher-
ently nonparametric as economic models rarely specify the form of the conditional variance. Thus
it is quite appropriate to estimate σ 2 (x) nonparametrically.
We know that σ 2 (x) is the CEF of e2i given xi . Therefore if e2i were observed, σ 2 (x) could be
nonparametrically estimated using NW or LL regression. For example, the ideal NW estimator is
σ̄²(x) = Σ_{i=1}^n k_i(x) e_i² / Σ_{i=1}^n k_i(x).
Since the errors e_i are not observed, we need to replace them with an empirical residual, such as ê_i = y_i − m̂(x_i), where m̂(x) is the estimated CEF. (The latter could be a nonparametric estimator such as NW or LL, or even a parametric estimator.) Even better, use the leave-one-out prediction errors ẽ_i = y_i − m̂_{−i}(x_i), as these are not subject to overfitting.
With this substitution the NW estimator of the conditional variance is
σ̃²(x) = Σ_{i=1}^n k_i(x) ẽ_i² / Σ_{i=1}^n k_i(x). (11.13)
This estimator depends on a set of bandwidths h_1, ..., h_q, but there is no reason for the bandwidths to be the same as those used to estimate the conditional mean. Cross-validation can be used to select the bandwidths for estimation of σ̃²(x) separately from cross-validation for estimation of m̂(x).
There is one subtle difference between CEF and conditional variance estimation. The conditional variance is inherently non-negative, σ²(x) ≥ 0, and it is desirable for our estimator to satisfy this property. Interestingly, the NW estimator (11.13) is necessarily non-negative, since it is a smoothed average of the non-negative squared residuals, but the LL estimator is not guaranteed to be non-negative for all x. For this reason, the NW estimator is preferred for conditional variance estimation.
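A sketch of the two-step NW variance estimator (11.13): compute leave-one-out prediction errors from the mean regression with bandwidth h_mean, then smooth their squares with a separate bandwidth h_var. The names and the Epanechnikov kernel choice are illustrative, and the sketch assumes every observation has neighbours within the bandwidths.

import numpy as np

def nw_conditional_variance(x_eval, x, y, h_mean, h_var):
    """Two-step NW conditional variance sketch."""
    k = lambda u: 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)
    # leave-one-out prediction errors from the mean regression
    w = k((x[None, :] - x[:, None]) / h_mean)
    np.fill_diagonal(w, 0.0)
    e2 = (y - (w @ y) / w.sum(axis=1)) ** 2
    # smooth the squared prediction errors at the evaluation points
    v = k((x[None, :] - x_eval[:, None]) / h_var)
    return (v @ e2) / v.sum(axis=1)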
Fan and Yao (1998, Biometrika) derive the asymptotic distribution of the estimator (11.13). They obtain the surprising result that the asymptotic distribution of this two-step estimator is identical to that of the one-step idealized estimator σ̄²(x).
V̂(x) = R_k σ̂²(x) / f̂_x(x).
Theorem 11.10.1 Let m̂(x) denote either the Nadaraya-Watson or Local Linear estimator of m(x). If x is interior to the support of x_i and f_x(x) > 0, then as n → ∞ and h_j → 0 such that n|h| → ∞,
√(n|h|) (m̂(x) − m(x) − σ_k² Σ_{j=1}^d h_j² B_j(x)) →_d N(0, R_k^d σ²(x) / f_x(x))
where for the Nadaraya-Watson estimator
B_j(x) = (1/2) ∂²m(x)/∂x_j² + f_x(x)^{-1} (∂f_x(x)/∂x_j)(∂m(x)/∂x_j)
and for the Local Linear estimator
B_j(x) = (1/2) ∂²m(x)/∂x_j².
For notational simplicity consider the case that there is a single common bandwidth h. In this case the AMSE takes the form
AMSE(m̂(x)) ∼ h⁴ + 1/(n h^d).
That is, the squared bias is of order h⁴, the same as in the single regressor case, but the variance is of larger order (n h^d)^{-1}. Setting h to balance these two components requires setting h ∼ n^{-1/(4+d)}.
In all estimation problems an increase in the dimension decreases estimation precision. For example, in parametric estimation an increase in dimension typically increases the asymptotic variance. In nonparametric estimation an increase in the dimension typically decreases the convergence rate, which is a more fundamental decrease in precision. For example, in kernel regression the convergence rate O_p(n^{-2/(4+d)}) decreases as d increases. The reason is that the estimator m̂(x) is a local average of the y_i for observations such that x_i is close to x, and when there are multiple regressors the number of such observations is inherently smaller. This phenomenon, that the rate of convergence of nonparametric estimation decreases as the dimension increases, is called the curse of dimensionality.
Chapter 12
Series Estimation
= z_K(x)'β_K (12.1)
where zjK (x) are (nonlinear) functions of x, and are known as basis functions or basis function
transformations of x.
For real-valued x, a well-known linear series approximation is the p'th-order polynomial
m_K(x) = Σ_{j=0}^p x^j β_{jK}
where K = p + 1.
When x ∈ R^d is vector-valued, a p'th-order polynomial is
m_K(x) = Σ_{j_1=0}^p ··· Σ_{j_d=0}^p x_1^{j_1} ··· x_d^{j_d} β_{j_1,...,j_d,K}.
This includes all powers and cross-products, and the coefficient vector has dimension K = (p + 1)^d.
In general, a common method to create a series approximation for vector-valued x is to include all
non-redundant cross-products of the basis function transformations of the components of x.
12.2 Splines
Another common series approximation is a continuous piecewise polynomial function known
as a spline. While splines can be of any polynomial order (e.g. linear, quadratic, cubic, etc.),
a common choice is cubic. To impose smoothness it is common to constrain the spline function to have continuous derivatives up to one less than the order of the spline. Thus a quadratic spline is typically
constrained to have a continuous first derivative, and a cubic spline is typically constrained to have
a continuous first and second derivative.
There is more than one way to define a spline series expansion. All are based on the number of
knots — the join points between the polynomial segments.
To illustrate, a piecewise linear function with two segments and a knot at t is
m_K(x) = m_1(x) = β_00 + β_01(x − t)   for x < t
m_K(x) = m_2(x) = β_10 + β_11(x − t)   for x ≥ t.
(For convenience we have written the segment functions as polynomials in x − t.) The function m_K(x) equals the linear function m_1(x) for x < t and equals m_2(x) for x ≥ t. Its left limit at x = t is β_00 and its right limit is β_10, so it is continuous if (and only if) β_00 = β_10. Enforcing this constraint is equivalent to writing the function as
m_K(x) = β_0 + β_1(x − t) + β_2(x − t) 1(x ≥ t)
or, after transforming the coefficients, as
m_K(x) = β_0 + β_1 x + β_2(x − t) 1(x ≥ t).
Notice that this function has K = 3 coefficients, the same as a quadratic polynomial.
A piecewise quadratic function with one knot at t is
m_K(x) = m_1(x) = β_00 + β_01(x − t) + β_02(x − t)²   for x < t
m_K(x) = m_2(x) = β_10 + β_11(x − t) + β_12(x − t)²   for x ≥ t.
This function is continuous at x = t if β_00 = β_10, and has a continuous first derivative if β_01 = β_11. Imposing these constraints and rewriting, we obtain the function
m_K(x) = β_0 + β_1 x + β_2 x² + β_3 (x − t)² 1(x ≥ t).
Here, K = 4.
Furthermore, a piecewise cubic function with one knot and a continuous second derivative is
m_K(x) = β_0 + β_1 x + β_2 x² + β_3 x³ + β_4 (x − t)³ 1(x ≥ t)
which has K = 5.
The polynomial order p is selected to control the smoothness of the spline, as mK (x) has
continuous derivatives up to p − 1.
In general, a p'th-order spline with N knots at t_1, t_2, ..., t_N with t_1 < t_2 < ··· < t_N is
m_K(x) = Σ_{j=0}^p β_j x^j + Σ_{k=1}^N γ_k (x − t_k)^p 1(x ≥ t_k)
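A sketch of constructing this truncated-power spline basis as a design matrix for least squares; the knot locations and order are arguments and all names are illustrative.

import numpy as np

def spline_basis(x, knots, p=3):
    """Truncated-power spline basis of order p with the given knots:
    columns 1, x, ..., x^p, (x - t_1)_+^p, ..., (x - t_N)_+^p,
    so K = p + 1 + N columns."""
    cols = [x ** j for j in range(p + 1)]
    for t in knots:
        cols.append(np.where(x >= t, (x - t) ** p, 0.0))
    return np.column_stack(cols)

# Example: Z = spline_basis(x, knots=[0.25, 0.5, 0.75], p=3); then regress y on Z by OLS.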
where z_K = z_K(x_2) are the basis transformations of x_2 (typically polynomials or splines) and β_{2K} are coefficients. After transformation the regressors are x_K = (x_1', z_K')' and the coefficients are β_K = (β_1', β_{2K}')'.
Series methods are quite convenient for estimation of additively separable models, as we simply apply series expansions (polynomials or splines) separately for each component m_j(x_j). The advantage of additive separability is the reduction in dimensionality. While an unconstrained p'th order polynomial has (p + 1)^d coefficients, an additively separable polynomial model has only (p + 1)d coefficients. This can be a major reduction in the number of coefficients. The disadvantage of this simplification is that the interaction effects have been eliminated.
The decision to impose additive separability can be based on an economic model which suggests
the absence of interaction effects, or can be a model selection decision similar to the selection of
the number of series terms. We will discuss model selection methods below.
Thus the true unknown m(x) can be approximated arbitrarily well by selecting a suitable polynomial.
The result (12.2) can be strengthened. In particular, if the s'th derivative of m(x) is continuous then the uniform approximation error satisfies
sup_{x∈X} |m_K(x) − m(x)| = O(K^{-α}) (12.3)
as K → ∞ where α = s/d. This result is more useful than (12.2) because it gives a rate at which the approximation m_K(x) approaches m(x) as K increases.
Both (12.2) and (12.3) hold for spline approximations as well.
Intuitively, the number of derivatives s indexes the smoothness of the function m(x). (12.3)
says that the best rate at which a polynomial or spline approximates the CEF m(x) depends on
the underlying smoothness of m(x). The more smooth is m(x), the fewer series terms (polynomial
order or spline knots) are needed to obtain a good approximation.
To illustrate polynomial approximation, Figure 12.1 displays the CEF m(x) = x^{1/4}(1 − x)^{1/2} on x ∈ [0, 1]. In addition, the best approximations using polynomials of order K = 3, K = 4, and
K = 6 are displayed. You can see how the approximation with K = 3 is fairly crude, but improves
with K = 4 and especially K = 6. Approximations obtained with cubic splines are quite similar so
not displayed.
As a series approximation can be written as m_K(x) = z_K(x)'β_K as in (12.1), the coefficient of the best uniform approximation (12.3) is
β_K^* = argmin_{β_K} sup_{x∈X} |z_K(x)'β_K − m(x)|. (12.4)
m̂_K(x) = z_K(x)'β̂_K. (12.7)
As we learned in Chapter 2, the least-squares coefficient is estimating the best linear predictor of y_i given z_{Ki}. This is
β_K = (E(z_{Ki} z_{Ki}'))^{-1} E(z_{Ki} y_i).
Given this coefficient, the series approximation is z_K(x)'β_K with approximation error
r_K(x) = m(x) − z_K(x)'β_K.
The regression of y_i on z_{Ki} can then be written as
y_i = z_{Ki}'β_K + e_{Ki}
with e_{Ki} = e_i + r_{Ki}, or in matrix notation
y = Z_K β_K + r_K + e = Z_K β_K + e_K. (12.10)
We now impose some regularity conditions on the regression model to facilitate the theory. Define the K × K expected design matrix
Q_K = E(z_{Ki} z_{Ki}'),
let X denote the support of x_i, and define the largest normalized length of the regressor vector in the support of x_i,
ζ_K = sup_{x∈X} (z_K(x)' Q_K^{-1} z_K(x))^{1/2}. (12.11)
Assumption 12.7.1
Assumptions 12.7.1.1 through 12.7.1.3 concern properties of the regression model. Assumption
12.7.1.1 holds with α = s/d if X is compact and the s’th derivative of m(x) is continuous. Assump-
tion 12.7.1.2 allows for conditional heteroskedasticity, but requires the conditional variance to be
bounded. Assumption 12.7.1.3 excludes near-singular designs. Since estimates of the conditional
mean are unchanged if we replace z Ki with z ∗Ki = B K z Ki for any non-singular B K , Assumption
12.7.1.3 can be viewed as holding after transformation by an appropriate non-singular B K .
Assumption 12.7.1.4 concerns the choice of the number of series terms, which is under the control of the user. It specifies that K can increase with sample size, but at a controlled rate of growth. Since ζ_K = O(K) for polynomials and ζ_K = O(K^{1/2}) for splines, Assumption 12.7.1.4 is satisfied if K³/n → 0 for polynomials and K²/n → 0 for splines. This means that while the number of series terms K can increase with the sample size, K must increase at a much slower rate.
In Section 12.5 we introduced the best uniform approximation, and in this section we introduced the best linear predictor. What is the relationship? They may be similar in practice, but they are not the same and we should be careful to maintain the distinction. Note that from (12.5) we can write m(x_i) = z_{Ki}'β_K^* + r_{Ki}^*, where r_{Ki}^* = r_K^*(x_i) satisfies sup_i |r_{Ki}^*| = O(K^{-α}) from (12.6). Then the best linear predictor equals
β_K = (E(z_{Ki} z_{Ki}'))^{-1} E(z_{Ki} y_i)
= (E(z_{Ki} z_{Ki}'))^{-1} E(z_{Ki} m(x_i))
= (E(z_{Ki} z_{Ki}'))^{-1} E(z_{Ki}(z_{Ki}'β_K^* + r_{Ki}^*))
= β_K^* + (E(z_{Ki} z_{Ki}'))^{-1} E(z_{Ki} r_{Ki}^*).
and by (12.6)
E(r_{Ki}^{*2}) = ∫ r_K^*(x)² f_x(x) dx ≤ O(K^{-2α}). (12.14)
Then applying the Schwarz inequality to (12.12), Definition (12.11), (12.13) and (12.14), we find
|r_K(x) − r_K^*(x)| ≤ (z_K(x)'(E(z_{Ki} z_{Ki}'))^{-1} z_K(x))^{1/2} (E(r_{Ki}^* z_{Ki})'(E(z_{Ki} z_{Ki}'))^{-1} E(z_{Ki} r_{Ki}^*))^{1/2}
≤ O(ζ_K K^{-α}). (12.15)
The bound (12.16) is probably not the best possible, but it shows that the best linear predictor
satisfies a uniform approximation bound. Relative to (12.6), the rate is slower by the factor ζK .
The bound (12.16) is o(1) as K → ∞ if ζ_K K^{-α} → 0. A sufficient condition is that α > 1
(s > d) for polynomials and α > 1/2 (s > d/2) for splines, where d = dim(x) and s is the number
of continuous derivatives of m(x).
It is also useful to observe that since β_K is the best linear approximation to m(x_i) in mean-square (see Section 2.24), then
E(r_{Ki}²) = E(m(x_i) − z_{Ki}'β_K)²
≤ E(m(x_i) − z_{Ki}'β_K^*)²
≤ O(K^{-2α}) (12.17)
ê_{iK} = y_i − m̂_K(x_i).
The leave-one-out prediction errors are
ẽ_{iK} = y_i − m̂_{K,−i}(x_i) = y_i − z_{Ki}'β̂_{K,−i}
where β̂_{K,−i} is the least-squares coefficient with the i'th observation omitted. Using (3.38) we can also write
ẽ_{iK} = ê_{iK} (1 − h_{Kii})^{-1}
where h_{Kii} = z_{Ki}'(Z_K'Z_K)^{-1} z_{Ki}.
As for kernel regression, the prediction errors ẽiK are better estimates of the errors than the
fitted residuals êiK , as they do not have the tendency to “over-fit” when the number of series terms
is large.
To assess the fit of the nonparametric regression, the estimate of the mean-square prediction
error is
σ̃_K² = (1/n) Σ_{i=1}^n ẽ_{iK}² = (1/n) Σ_{i=1}^n ê_{iK}² (1 − h_{Kii})^{-2}
By selecting the series terms to minimize $CV(K)$, or equivalently maximize $\tilde R_K^2$, we have a data-dependent rule which is designed to produce estimates with low integrated mean-squared error (IMSE) and mean-squared forecast error (MSFE). As shown in Theorem 11.6.1, $CV(K)$ is an approximately unbiased estimate of the MSFE and IMSE, so finding the model which produces
the smallest value of CV (K) is a good indicator that the estimated model has small MSFE and
IMSE. The proof of the result is the same for all nonparametric estimators (series as well as kernels)
so does not need to be repeated here.
As a practical matter, an estimator corresponds to a set of regressors $z_{Ki}$, that is, a set of transformations of the original variables $x_i$. For each set of regressors, the regression is estimated and $CV(K)$ calculated, and the estimator is selected which has the smallest value of $CV(K)$. If there are $p$ ordered regressors, then there are $p$ possible estimators. Typically, this calculation is simple even if $p$ is large. However, if the $p$ regressors are unordered (and this is typical) then there are $2^p$ possible subsets of conceivable models. If $p$ is even moderately large, $2^p$ can be immensely large, so brute-force computation of all models may be computationally demanding.
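To illustrate the ordered-regressor case, here is a minimal sketch in Python of the selection rule, computing the mean-square prediction error $\tilde\sigma_K^2$ above through the shortcut $\tilde e_{iK} = \hat e_{iK}(1 - h_{Kii})^{-1}$, so that nothing needs to be re-estimated $n$ times. The power-series basis, function names, and the candidate grid are assumptions of this example, not part of the text.

import numpy as np

def sigma2_tilde(y, x, K):
    """Mean-square prediction error for a polynomial series regression with K terms."""
    Z = np.vander(x, N=K, increasing=True)        # z_Ki = (1, x_i, ..., x_i^{K-1})
    ZZ_inv = np.linalg.inv(Z.T @ Z)
    beta = ZZ_inv @ (Z.T @ y)
    e_hat = y - Z @ beta                          # fitted residuals
    h = np.einsum("ij,jk,ik->i", Z, ZZ_inv, Z)    # leverage values h_Kii
    e_tilde = e_hat / (1.0 - h)                   # leave-one-out prediction errors
    return np.mean(e_tilde ** 2)

# Select K over an ordered candidate set by minimizing the criterion:
# K_hat = min(range(2, 12), key=lambda K: sigma2_tilde(y, x, K))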
The proof of Theorem 12.10.1 is rather technical and deferred to Section 12.16.
The rate of convergence in (12.18) has two terms. The Op (K/n) term is due to estimation
variance. Note in contrast that the corresponding rate would be Op (1/n) in the parametric case.
The difference is that in the parametric case we assume that the number of regressors K is fixed as
n increases, while in the nonparametric case we allow the number of regressors K to be flexible. As K increases, the estimation variance increases. The $o_p\left(K^{-2\alpha}\right)$ term in (12.18) is due to the series
approximation error.
Using Theorem 12.10.1 we can establish the following convergence rate for the estimated re-
gression function.
Theorem 12.10.2 shows that the integrated squared difference between the fitted regression and
the true CEF converges in probability to zero if K → ∞ as n → ∞. The convergence results of
Theorem 12.10.2 show that the number of series terms K involves a trade-off similar to the role of
the bandwidth h in kernel regression. Larger K implies smaller approximation error but increased
estimation variance.
The optimal rate which minimizes the average squared error in (12.19) is $K = O\left(n^{1/(1+2\alpha)}\right)$, yielding an optimal rate of convergence in (12.19) of $O_p\left(n^{-2\alpha/(1+2\alpha)}\right)$. This rate depends on the unknown smoothness $\alpha$ of the true CEF (the number of derivatives $s$) and so does not directly suggest a practical rule for determining $K$. Still, the implication is that when the function being estimated is less smooth ($\alpha$ is small) then it is necessary to use a larger number of series terms $K$ to reduce the bias. In contrast, when the function is more smooth then it is better to use a smaller number of series terms $K$ to reduce the variance.
To establish (12.19), using (12.7) and (12.8) we can write
$$\hat m_K(x) - m(x) = z_K(x)'\left(\hat\beta_K - \beta_K\right) - r_K(x). \quad (12.20)$$
Since $e_{Ki}$ are projection errors, they satisfy $E(z_{Ki}e_{Ki}) = 0$ and thus $E(z_{Ki}r_{Ki}) = 0$. This means $\int z_K(x)r_K(x)f_x(x)\,dx = 0$. Also observe that $Q_K = \int z_K(x)z_K(x)'f_x(x)\,dx$ and $E r_{Ki}^2 = \int r_K(x)^2 f_x(x)\,dx$. Then
$$\int\left(\hat m_K(x) - m(x)\right)^2 f_x(x)\,dx = \left(\hat\beta_K - \beta_K\right)'Q_K\left(\hat\beta_K - \beta_K\right) + E r_{Ki}^2 \leq O_p\left(\frac{K}{n}\right) + O_p\left(K^{-2\alpha}\right).$$
Relative to Theorem 12.10.2, the error has been increased multiplicatively by ζK . This slower
convergence rate is a penalty for the stronger uniform convergence, though it is probably not
the best possible rate. Examining the bound in (12.21) notice that the first term is op (1) under
Assumption 12.7.1.4. The second term is op (1) if ζK K −α → 0, which requires that K → ∞ and
that α be sufficiently large. A sufficient condition is that s > d for polynomials and s > d/2 for
splines, where d = dim(x) and s is the number of continuous derivatives of m(x). Thus higher
dimensional x require a smoother CEF m(x) to ensure that the series estimate m b K (x) is uniformly
consistent.
The convergence (12.21) is straightforward to show using (12.18). Using (12.20), the Triangle
Inequality, the Schwarz inequality (A.10), Definition (12.11), (12.18) and (12.16),
$$\sup_{x\in\mathcal{X}}\left|\hat m_K(x) - m(x)\right| \leq \sup_{x\in\mathcal{X}}\left|z_K(x)'\left(\hat\beta_K - \beta_K\right)\right| + \sup_{x\in\mathcal{X}}|r_K(x)|$$
$$\leq \sup_{x\in\mathcal{X}}\left(z_K(x)'Q_K^{-1}z_K(x)\right)^{1/2}\left(\left(\hat\beta_K - \beta_K\right)'Q_K\left(\hat\beta_K - \beta_K\right)\right)^{1/2} + O\left(\zeta_K K^{-\alpha}\right)$$
$$\leq \zeta_K\left(O_p\left(\frac{K}{n}\right) + O_p\left(K^{-2\alpha}\right)\right)^{1/2} + O\left(\zeta_K K^{-\alpha}\right) = O_p\left(\sqrt{\frac{\zeta_K^2 K}{n}}\right) + O_p\left(\zeta_K K^{-\alpha}\right). \quad (12.22)$$
This is (12.21).
$$\hat\theta_K = a\left(\hat m_K\right) = a_K'\hat\beta_K$$
for some $K \times 1$ vector of constants $a_K \neq 0$. (The relationship $a\left(\hat m_K\right) = a_K'\hat\beta_K$ follows since $a$ is linear in $m$ and $\hat m_K$ is linear in $\hat\beta_K$.)
If K were fixed as n → ∞, then by standard asymptotic theory we would expect θ̂K to be
asymptotically normal with variance
$$v_K = a_K'Q_K^{-1}\Omega_K Q_K^{-1}a_K$$
where
$$\Omega_K = E\left(z_{Ki}z_{Ki}'e_{Ki}^2\right).$$
The standard justification, however, is not valid in the nonparametric case, in part because vK
may diverge as K → ∞, and in part due to the finite sample bias due to the approximation error.
Therefore a new theory is required. Interestingly, it turns out that in the nonparametric case θ̂K is
still asymptotically normal, and vK is still the appropriate variance for θ̂K . The proof is different
than the parametric case as the dimensions of the matrices are increasing with K, and we need to
be attentive to the estimator’s bias due to the series approximation.
Theorem 12.12.1 Under Assumption 12.7.1, if in addition $E\left(e_i^4\mid x_i\right) \leq \kappa_4 < \infty$, $E\left(e_i^2\mid x_i\right) \geq \sigma^2 > 0$, and $\zeta_K K^{-\alpha} = O(1)$, then as $n \to \infty$,
$$\frac{\sqrt{n}\left(\hat\theta_K - \theta + a\left(r_K\right)\right)}{v_K^{1/2}} \xrightarrow{d} N(0,1). \quad (12.23)$$
One useful message from Theorem 12.12.1 is that the classic variance formula vK for θ̂K still
applies for series regression. Indeed, we can estimate the asymptotic variance using the standard
White formula
$$\hat v_K = a_K'\hat Q_K^{-1}\hat\Omega_K\hat Q_K^{-1}a_K$$
$$\hat\Omega_K = \frac{1}{n}\sum_{i=1}^n z_{Ki}z_{Ki}'\hat e_{iK}^2$$
$$\hat Q_K = \frac{1}{n}\sum_{i=1}^n z_{Ki}z_{Ki}'.$$
Theorem 12.13.1 Under Assumption 12.7.1, if in addition $E\left(e_i^4\mid x_i\right) \leq \kappa_4 < \infty$, $E\left(e_i^2\mid x_i\right) \geq \sigma^2 > 0$, $a\left(r_K^*\right) \leq O\left(K^{-\alpha}\right)$, $nK^{-2\alpha} \to 0$, and $a_K'Q_K^{-1}a_K$ is bounded away from zero, then
$$\frac{\sqrt{n}\left(\hat\theta_K - \theta\right)}{v_K^{1/2}} \xrightarrow{d} N(0,1). \quad (12.24)$$
The condition a (rK∗ ) ≤ O (K −α ) states that the function of interest (for example, the regression
function, its derivative, or its integral) applied to the uniform approximation error converges to
zero as the number of terms K in the series approximation increases. If a (m) = m(x) then this
condition holds by (12.6).
The condition that a0K Q−1K aK is bounded away from zero is simply a technical requirement to
exclude degeneracy.
The critical condition is the assumption that $nK^{-2\alpha} \to 0$. This requires that $K \to \infty$ at a rate faster than $n^{1/2\alpha}$. This is a troubling condition. The optimal rate for estimation of $m(x)$ is $K = O\left(n^{1/(1+2\alpha)}\right)$. If we set $K = n^{1/(1+2\alpha)}$ by this rule then $nK^{-2\alpha} = n^{1/(1+2\alpha)} \to \infty$, not zero.
Thus this assumption is equivalent to assuming that K is much larger than optimal. The reason
why this trick works (that is, why the bias is negligible) is that by increasing K, the asymptotic
bias decreases and the asymptotic variance increases and thus the variance dominates. Because K
is larger than optimal, we typically say that m b K (x) is undersmoothed relative to the optimal series
estimator.
Many authors like to focus their asymptotic theory on the assumptions in Theorem 12.13.1, as
the distribution (12.24) appears cleaner. However, it is a poor use of asymptotic theory. There
are three problems with the assumption nK −2α → 0 and the approximation (12.24). First, it says
that if we intentionally pick K to be larger than optimal, we can increase the estimation variance
relative to the bias so the variance will dominate the bias. But why would we want to intentionally
use an estimator which is sub-optimal? Second, the assumption nK −2α → 0 does not eliminate the
asymptotic bias, it only makes it of lower order than the variance. So the approximation (12.24) is
technically valid, but the missing asymptotic bias term is just slightly smaller in asymptotic order,
and thus still relevant in finite samples. Third, the condition nK −2α → 0 is just an assumption, it
has nothing to do with actual empirical practice. Thus the difference between (12.23) and (12.24)
is in the assumptions, not in the actual reality or in the actual empirical practice. Eliminating a
nuisance (the asymptotic bias) through an assumption is a trick, not a substantive use of theory.
My strong view is that the result (12.23) is more informative than (12.24). It shows that the
asymptotic distribution is normal but has a non-trivial finite sample bias.
Theorem 12.14.1 Under Assumption 12.7.1, if in addition $E\left(e_i^4\mid x_i\right) \leq \kappa_4 < \infty$, $E\left(e_i^2\mid x_i\right) \geq \sigma^2 > 0$, and $\zeta_K K^{-\alpha} = O(1)$, then as $n \to \infty$,
$$\frac{\sqrt{n}\left(\hat m_K(x) - m(x) + r_K(x)\right)}{v_K(x)^{1/2}} \xrightarrow{d} N(0,1) \quad (12.25)$$
where
$$v_K(x) = z_K(x)'Q_K^{-1}\Omega_K Q_K^{-1}z_K(x).$$
There are two important features about the asymptotic distribution (12.25).
First, as mentioned in the previous section, it shows how to construct asymptotic standard errors for the CEF $m(x)$. These are
There are two important features about the asymptotic distribution (12.25).
First, as mentioned in the previous section, it shows how to construct asymptotic standard
errors for the CEF m(x). These are
$$\hat s(x) = \sqrt{\frac{1}{n}\, z_K(x)'\hat Q_K^{-1}\hat\Omega_K\hat Q_K^{-1}z_K(x)}.$$
Second, (12.25) shows that the estimator has the asymptotic bias component rK (x). This is
due to the fact that the finite order series is an approximation to the unknown CEF m(x), and this
results in finite sample bias.
The asymptotic distribution (12.26) shows that the bias term is negligible if K diverges fast
enough so that nK −2α → 0. As discussed in the previous section, this means that K is larger than
optimal.
The assumption that z K (x)0 Q−1K z K (x) is bounded away from zero is a technical condition to
exclude degenerate cases, and is automatically satisfied if z K (x) includes an intercept.
Plots of the CEF estimate m b K (x) can be accompanied by 95% confidence intervals m b K (x) ±
2ŝ(x). As we discussed in the chapter on kernel regression, this can be viewed as a confidence
interval for the pseudo-true CEF m∗K (x) = m(x) − rK (x), not for the true m(x). As for kernel
regression, the difference is the unavoidable consequence of nonparametric estimation.
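As an illustration of the band construction $\hat m_K(x) \pm 2\hat s(x)$, the following Python sketch evaluates the series estimate and the standard error formula above on a grid of points. The polynomial basis and the variable names are assumptions made for this example.

import numpy as np

def series_estimate_with_bands(y, x, K, x_grid):
    n = len(y)
    Z = np.vander(x, N=K, increasing=True)            # z_Ki at the data points
    Zg = np.vander(x_grid, N=K, increasing=True)      # z_K(x) on the evaluation grid
    Q_inv = np.linalg.inv(Z.T @ Z / n)                # estimate of Q_K^{-1}
    beta = np.linalg.solve(Z.T @ Z, Z.T @ y)
    e_hat = y - Z @ beta
    Omega = (Z * e_hat[:, None] ** 2).T @ Z / n       # (1/n) sum z z' e^2
    V = Q_inv @ Omega @ Q_inv
    m_hat = Zg @ beta
    s_hat = np.sqrt(np.einsum("ij,jk,ik->i", Zg, V, Zg) / n)   # pointwise s^(x)
    return m_hat, m_hat - 2 * s_hat, m_hat + 2 * s_hat

As noted above, the resulting band should be interpreted as covering the pseudo-true CEF $m_K^*(x)$ rather than $m(x)$ itself.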
We start with some convergence results for the sample design matrix
$$\hat Q_K = \frac{1}{n}Z_K'Z_K = \frac{1}{n}\sum_{i=1}^n z_{Ki}z_{Ki}'.$$
Under the normalization $Q_K = I_K$,
$$\left\|\hat Q_K - I_K\right\| \xrightarrow{p} 0 \quad (12.27)$$
and
$$\lambda_{\min}\left(\hat Q_K\right) \xrightarrow{p} 1. \quad (12.28)$$
Proof. Since
$$\left\|\hat Q_K - I_K\right\|^2 = \sum_{j=1}^K\sum_{\ell=1}^K\left(\frac{1}{n}\sum_{i=1}^n\left(z_{jKi}z_{\ell Ki} - Ez_{jKi}z_{\ell Ki}\right)\right)^2$$
then
$$E\left\|\hat Q_K - I_K\right\|^2 = \sum_{j=1}^K\sum_{\ell=1}^K\operatorname{var}\left(\frac{1}{n}\sum_{i=1}^n z_{jKi}z_{\ell Ki}\right) = n^{-1}\sum_{j=1}^K\sum_{\ell=1}^K\operatorname{var}\left(z_{jKi}z_{\ell Ki}\right) \leq n^{-1}\sum_{j=1}^K\sum_{\ell=1}^K E\left(z_{jKi}^2 z_{\ell Ki}^2\right) = n^{-1}E\left(z_{Ki}'z_{Ki}\right)^2. \quad (12.29)$$
Since $z_{Ki}'z_{Ki} \leq \zeta_K^2$ by definition (12.11) and using (A.1) we find
$$E\left(z_{Ki}'z_{Ki}\right) = \operatorname{tr}\left(Ez_{Ki}z_{Ki}'\right) = \operatorname{tr} I_K = K, \quad (12.30)$$
so that
$$E\left(z_{Ki}'z_{Ki}\right)^2 \leq \zeta_K^2 K \quad (12.31)$$
and hence (12.29) is $o(1)$ under Assumption 12.7.1.4. Theorem 5.11.1 shows that this implies (12.27).
Let $\lambda_1, \lambda_2, ..., \lambda_K$ be the eigenvalues of $\hat Q_K - I_K$, which are real as $\hat Q_K - I_K$ is symmetric. Then
$$\left|\lambda_{\min}\left(\hat Q_K\right) - 1\right| = \left|\lambda_{\min}\left(\hat Q_K - I_K\right)\right| \leq \left(\sum_{\ell=1}^K\lambda_\ell^2\right)^{1/2} = \left\|\hat Q_K - I_K\right\|$$
where the second equality is (A.8). This is $o_p(1)$ by (12.27), establishing (12.28). $\blacksquare$
Proof of Theorem 12.10.1. As above, assume that the regressors have been transformed so that
QK = I K .
From expression (12.10) we can substitute to find
$$\hat\beta_K - \beta_K = \left(Z_K'Z_K\right)^{-1}Z_K'e_K = \hat Q_K^{-1}\left(\frac{1}{n}Z_K'e_K\right). \quad (12.32)$$
Since $e_{Ki} = e_i + r_{Ki}$, and using Assumption 12.7.1.2 and (12.16), then
$$\sup_i E\left(e_{Ki}^2\mid x_i\right) = \sigma^2 + \sup_i r_{Ki}^2 \leq \sigma^2 + O\left(\zeta_K^2 K^{-2\alpha}\right). \quad (12.35)$$
As $e_{Ki}$ are projection errors, they satisfy $E(z_{Ki}e_{Ki}) = 0$. Since the observations are independent, using (12.30) and (12.35), then
$$n^{-2}E\left(e_K'Z_K Z_K'e_K\right) = n^{-2}E\left(\sum_{i=1}^n\sum_{j=1}^n e_{Ki}z_{Ki}'z_{Kj}e_{Kj}\right) = n^{-2}\sum_{i=1}^n E\left(z_{Ki}'z_{Ki}e_{Ki}^2\right) \leq n^{-1}E\left(z_{Ki}'z_{Ki}\right)\sup_i E\left(e_{Ki}^2\mid x_i\right)$$
$$\leq \sigma^2\frac{K}{n} + O\left(\frac{\zeta_K^2 K^{1-2\alpha}}{n}\right) = \sigma^2\frac{K}{n} + o\left(K^{-2\alpha}\right) \quad (12.36)$$
since $\zeta_K^2 K/n = o(1)$ by Assumption 12.7.1.4. Theorem 5.11.1 shows that this implies
$$n^{-2}e_K'Z_K Z_K'e_K = O_p\left(\frac{K}{n}\right) + o_p\left(K^{-2\alpha}\right). \quad (12.37)$$
Proof of Theorem 12.12.1. As above, assume that the regressors have been transformed so that $Q_K = I_K$.
Using $m(x) = z_K(x)'\beta_K + r_K(x)$ and linearity,
$$\theta = a(m) = a\left(z_K(x)'\beta_K\right) + a\left(r_K\right) = a_K'\beta_K + a\left(r_K\right)$$
and thus
$$\sqrt{\frac{n}{v_K}}\left(\hat\theta_K - \theta_K + a\left(r_K\right)\right) = \sqrt{\frac{n}{v_K}}\,a_K'\left(\hat\beta_K - \beta_K\right) = \sqrt{\frac{1}{nv_K}}\,a_K'\hat Q_K^{-1}Z_K'e_K$$
$$= \frac{1}{\sqrt{nv_K}}a_K'Z_K'e_K \quad (12.38)$$
$$\quad + \frac{1}{\sqrt{nv_K}}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'e \quad (12.39)$$
$$\quad + \frac{1}{\sqrt{nv_K}}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'r_K. \quad (12.40)$$
Observe that $a_K'z_{Ki}e_{Ki}$ are independent across $i$, mean zero, and have variance
$$E\left(a_K'z_{Ki}e_{Ki}\right)^2 = a_K'E\left(z_{Ki}z_{Ki}'e_{Ki}^2\right)a_K = v_K.$$
We will apply the Lindeberg CLT 5.7.2, for which it is sufficient to verify Lyapunov's condition (5.6):
$$\frac{1}{n^2 v_K^2}\sum_{i=1}^n E\left(a_K'z_{Ki}e_{Ki}\right)^4 = \frac{1}{nv_K^2}E\left(\left(a_K'z_{Ki}\right)^4 e_{Ki}^4\right) \to 0. \quad (12.42)$$
The assumption that $\zeta_K K^{-\alpha} = O(1)$ means $\zeta_K K^{-\alpha} \leq \kappa_1$ for some $\kappa_1 < \infty$. Then by the $c_r$ inequality and $E\left(e_i^4\mid x_i\right) \leq \kappa$,
$$\sup_i E\left(e_{Ki}^4\mid x_i\right) \leq 8\sup_i\left(E\left(e_i^4\mid x_i\right) + r_{Ki}^4\right) \leq 8\left(\kappa + \kappa_1\right). \quad (12.43)$$
Hence
$$\frac{1}{nv_K^2}E\left(\left(a_K'z_{Ki}\right)^4 e_{Ki}^4\right) \leq \frac{8\left(\kappa + \kappa_1\right)\zeta_K^2 K}{\sigma^4 n} = o(1)$$
under Assumption 12.7.1.4. This establishes Lyapunov's condition (12.42). Hence the Lindeberg CLT applies to (12.41) and we conclude
$$\frac{1}{\sqrt{nv_K}}a_K'Z_K'e_K \xrightarrow{d} N(0,1). \quad (12.46)$$
Second, take (12.39). Since $E(e\mid X) = 0$, then applying $E\left(e_i^2\mid x_i\right) \leq \bar\sigma^2$, the Schwarz and Norm Inequalities, (12.45), (12.34) and (12.27),
$$E\left(\left(\frac{1}{\sqrt{nv_K}}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'e\right)^2\,\Bigg|\,X\right) = \frac{1}{nv_K}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'E\left(ee'\mid X\right)Z_K\left(\hat Q_K^{-1} - I_K\right)a_K$$
$$\leq \frac{\bar\sigma^2}{v_K}a_K'\left(\hat Q_K^{-1} - I_K\right)\hat Q_K\left(\hat Q_K^{-1} - I_K\right)a_K = \frac{\bar\sigma^2}{v_K}a_K'\left(\hat Q_K - I_K\right)\hat Q_K^{-1}\left(\hat Q_K - I_K\right)a_K$$
$$\leq \frac{\bar\sigma^2 a_K'a_K}{v_K}\lambda_{\max}\left(\hat Q_K^{-1}\right)\left\|\hat Q_K - I_K\right\|^2 \leq \frac{\bar\sigma^2}{\sigma^2}\,o_p(1).$$
This establishes
$$\frac{1}{\sqrt{nv_K}}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'e \xrightarrow{p} 0. \quad (12.47)$$
Third, take (12.40). By the Cauchy-Schwarz inequality, (12.45), and the Quadratic Inequality,
$$\left(\frac{1}{\sqrt{nv_K}}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'r_K\right)^2 \leq \frac{a_K'a_K}{nv_K}r_K'Z_K\left(\hat Q_K^{-1} - I_K\right)\left(\hat Q_K^{-1} - I_K\right)Z_K'r_K \leq \frac{1}{\sigma^2}\lambda_{\max}\left(\hat Q_K^{-1} - I_K\right)^2\frac{1}{n}r_K'Z_K Z_K'r_K. \quad (12.48)$$
Observe that since the observations are independent, $Ez_{Ki}r_{Ki} = 0$, $z_{Ki}'z_{Ki} \leq \zeta_K^2$, and (12.17),
$$E\left(\frac{1}{n}r_K'Z_K Z_K'r_K\right) = E\left(\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n r_{Ki}z_{Ki}'z_{Kj}r_{Kj}\right) = E\left(\frac{1}{n}\sum_{i=1}^n z_{Ki}'z_{Ki}r_{Ki}^2\right) \leq \zeta_K^2 E\left(r_{Ki}^2\right) = O\left(\zeta_K^2 K^{-2\alpha}\right) = O(1)$$
since $\zeta_K K^{-\alpha} = O(1)$. Thus $\frac{1}{n}r_K'Z_K Z_K'r_K = O_p(1)$. This means that (12.48) is $o_p(1)$ since (12.28) implies
$$\lambda_{\max}\left(\hat Q_K^{-1} - I_K\right) = \lambda_{\max}\left(\hat Q_K^{-1}\right) - 1 = o_p(1). \quad (12.49)$$
Equivalently,
$$\frac{1}{\sqrt{nv_K}}a_K'\left(\hat Q_K^{-1} - I_K\right)Z_K'r_K \xrightarrow{p} 0. \quad (12.50)$$
so the conditions of Theorem 12.12.1 are satisfied. It is thus sufficient to show that
$$\sqrt{\frac{n}{v_K}}\,a\left(r_K\right) = o(1).$$
From (12.12),
$$r_K(x) = r_K^*(x) + z_K(x)'\gamma_K, \qquad \gamma_K = \left(E\left(z_{Ki}z_{Ki}'\right)\right)^{-1}E\left(z_{Ki}r_{Ki}^*\right),$$
and
$$n\gamma_K'\gamma_K = nE\left(r_{Ki}^*z_{Ki}'\right)\left(E\left(z_{Ki}z_{Ki}'\right)\right)^{-1}E\left(z_{Ki}r_{Ki}^*\right) \leq nO\left(K^{-2\alpha}\right) = o(1).$$
Chapter 13

Generalized Method of Moments

$$y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0,$$
where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Then we can write the model as
$$y_i = x_{1i}'\beta_1 + e_i, \qquad E(x_i e_i) = 0.$$
In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E(x_i e_i) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.
This is a special case of a more general class of moment condition models. Let $g(y, x, z, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \geq k$ such that
$$E g(y_i, x_i, z_i, \beta_0) = 0 \quad (13.1)$$
where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, z, \beta) = z\left(y - x_1'\beta_1\right)$. In econometrics, this class of models is called moment condition models. In the statistics literature, these are known as estimating equations.
As an important special case we will devote special attention to linear moment condition models, which can be written as
$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0,$$
where the dimensions of $x_i$ and $z_i$ are $k \times 1$ and $\ell \times 1$, with $\ell \geq k$. If $k = \ell$ the model is just identified, otherwise it is overidentified. The variables $x_i$ may be components and functions of $z_i$, but this is not required. This model falls in the class (13.1) by setting
$$g(y, x, z, \beta) = z\left(y - x'\beta\right).$$
The method of moments estimator for $\beta$ is defined as the parameter value which sets $\overline{g}_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $\overline{g}_n(\beta)$ "close" to zero.
For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$J_n(\beta) = n \cdot \overline{g}_n(\beta)'W_n\,\overline{g}_n(\beta).$$
This is a non-negative measure of the "length" of the vector $\overline{g}_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n \cdot \overline{g}_n(\beta)'\overline{g}_n(\beta) = n \cdot \left\|\overline{g}_n(\beta)\right\|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.
Proposition 13.2.1
$$\hat\beta_{GMM} = \left(\left(X'Z\right)W_n\left(Z'X\right)\right)^{-1}\left(X'Z\right)W_n\left(Z'y\right).$$
While the estimator depends on W n , the dependence is only up to scale, for if W n is replaced
b GMM does not change.
by cW n for some c > 0, β
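A minimal Python sketch of the formula in Proposition 13.2.1; the array shapes (y is n x 1, X is n x k, Z is n x ℓ, W is ℓ x ℓ) and the function name are assumptions of the example.

import numpy as np

def gmm_linear(y, X, Z, W):
    """beta_GMM = ((X'Z) W (Z'X))^{-1} (X'Z) W (Z'y)."""
    XZ = X.T @ Z
    return np.linalg.solve(XZ @ W @ XZ.T, XZ @ W @ (Z.T @ y))

Rescaling W by any c > 0 leaves the output unchanged, which is an easy numerical check of the scale-invariance just noted.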
In general, GMM estimators are asymptotically normal with “sandwich form” asymptotic vari-
ances.
The optimal weight matrix W 0 is one which minimizes V β . This turns out to be W 0 = Ω−1 .
The proof is left as an exercise. This yields the efficient GMM estimator:
$$\hat\beta = \left(X'Z\Omega^{-1}Z'X\right)^{-1}X'Z\Omega^{-1}Z'y.$$
$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat\beta$ the efficient GMM estimator, as it has the same asymptotic distribution.
By “efficient”, we mean that this estimator has the smallest asymptotic variance in the class
of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as
we are only considering alternative weight matrices W n . However, it turns out that the GMM
estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).
If it is known that E (g i (β)) = 0, and this is all that is known, this is a semi-parametric
problem, as the distribution of the data is unknown. Chamberlain showed that in this context,
no semiparametric estimator (one which is consistent globally for the class of models considered)
can have a smaller asymptotic variance than $\left(G'\Omega^{-1}G\right)^{-1}$ where $G = E\frac{\partial}{\partial\beta'}g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.
$$\hat g_i^* = \hat g_i - \overline{g}_n,$$
and define
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i^*\hat g_i^{*\prime}\right)^{-1} = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \overline{g}_n\overline{g}_n'\right)^{-1}. \quad (13.4)$$
Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.
A common alternative choice is to set
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i'\right)^{-1},$$
which uses the uncentered moment conditions. Since Eg i = 0, these two estimators are asymptot-
ically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has
shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under
the alternative hypothesis the moment conditions are violated, i.e. Egi 6= 0, so the uncentered
estimator will contain an undesirable bias term and the power of the test will be adversely affected.
A simple solution is to use the centered moment conditions to construct the weight matrix, as in
(13.4) above.
Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = \left(Z'Z\right)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residuals $\hat e_i = y_i - x_i'\hat\beta$. Then set $\hat g_i = z_i\hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$\hat\beta = \left(X'Z\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}Z'X\right)^{-1}X'Z\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}Z'y.$$
In most cases, when we say “GMM”, we actually mean “efficient GMM”. There is little point in
using an inefficient GMM estimator when the efficient estimator is easy to compute.
An estimator of the asymptotic variance of $\hat\beta$ can be seen from the above formula. Set
$$\hat V = n\left(X'Z\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}Z'X\right)^{-1}.$$
Asymptotic standard errors are given by the square roots of the diagonal elements of $\frac{1}{n}\hat V$.
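The two-step recipe and the variance formula can be written out directly. The following Python sketch (array names as in the earlier sketch, all of them assumptions) uses $W_n = (Z'Z)^{-1}$ in the first step and the centered weight matrix in the second step.

import numpy as np

def efficient_gmm(y, X, Z):
    n = len(y)
    XZ = X.T @ Z
    solve = lambda W: np.linalg.solve(XZ @ W @ XZ.T, XZ @ W @ (Z.T @ y))
    beta1 = solve(np.linalg.inv(Z.T @ Z))          # first step: W_n = (Z'Z)^{-1}
    e = y - X @ beta1                              # first-step residuals
    g = Z * e[:, None]                             # rows are g_i' = (z_i e_i)'
    gbar = g.mean(axis=0)
    S = g.T @ g - n * np.outer(gbar, gbar)         # g'g - n gbar gbar'
    beta2 = solve(np.linalg.inv(S / n))            # second step: centered weight matrix
    V = n * np.linalg.inv(XZ @ np.linalg.inv(S) @ XZ.T)
    se = np.sqrt(np.diag(V) / n)                   # standard errors from (1/n) V_hat
    return beta2, se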
There is an important alternative to the two-step GMM estimator just described. Instead, we
can let the weight matrix be considered as a function of β. The criterion function is then
à n !−1
0 1X ∗ ∗ 0
J(β) = n · g n (β) gi (β)g i (β) gn (β).
n
i=1
where
g ∗i (β) = g i (β) − gn (β)
The βb which minimizes this function is called the continuously-updated GMM estimator, and
was introduced by L. Hansen, Heaton and Yaron (1996).
The estimator appears to have some better properties than traditional GMM, but can be nu-
merically tricky to obtain in some cases. This is a current area of research in econometrics.
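A sketch of the continuously-updated estimator for the linear moment condition $g_i(\beta) = z_i(y_i - x_i'\beta)$, using a generic numerical optimizer from scipy; the starting value and the choice of the Nelder-Mead routine are illustrative assumptions, and no claim is made that this handles the numerical difficulties just mentioned.

import numpy as np
from scipy.optimize import minimize

def cue_gmm(y, X, Z, beta_start):
    n = len(y)
    def J(beta):
        g = Z * (y - X @ beta)[:, None]            # g_i(beta)
        gbar = g.mean(axis=0)
        gstar = g - gbar                           # centered moments g_i*
        W = np.linalg.inv(gstar.T @ gstar / n)     # weight matrix recomputed at each beta
        return n * gbar @ W @ gbar
    return minimize(J, beta_start, method="Nelder-Mead").x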
and
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \overline{g}_n\overline{g}_n'\right)^{-1},$$
where
$$\Omega = E\left(g_i g_i'\right)$$
and
$$G = E\frac{\partial}{\partial\beta'}g_i(\beta).$$
The variance of $\hat\beta$ may be estimated by
$$\hat V_\beta = \left(\hat G'\hat\Omega^{-1}\hat G\right)^{-1}$$
where
$$\hat\Omega = n^{-1}\sum_i \hat g_i^*\hat g_i^{*\prime}$$
and
$$\hat G = n^{-1}\sum_i \frac{\partial}{\partial\beta'}g_i(\hat\beta).$$
The general theory of GMM estimation and testing was exposited by L. Hansen (1982).
Eg(yi , xi , z i , β) = 0
holds. Thus the model — the overidentifying restrictions — are testable.
For example, take the linear model yi = β 01 x1i +β02 x2i +ei with E (x1i ei ) = 0 and E (x2i ei ) = 0.
It is possible that β2 = 0, so that the linear equation may be written as yi = β01 x1i + ei . However,
it is possible that β2 6= 0, and in this case it would be impossible to find a value of β 1 so that
both E (x1i (yi − x01i β1 )) = 0 and E (x2i (yi − x01i β1 )) = 0 hold simultaneously. In this sense an
exclusion restriction can be seen as an overidentifying restriction.
Note that $\overline{g}_n \xrightarrow{p} Eg_i$, and thus $\overline{g}_n$ can be used to assess whether or not the hypothesis that $Eg_i = 0$ is true. The criterion function at the parameter estimates is
$$J_n = n\,\overline{g}_n'W_n\overline{g}_n = n^2\,\overline{g}_n'\left(\hat g'\hat g - n\overline{g}_n\overline{g}_n'\right)^{-1}\overline{g}_n.$$
The proof of the theorem is left as an exercise. This result was established by Sargan (1958)
for a specialized case, and by L. Hansen (1982) for the general case.
The degrees of freedom of the asymptotic distribution are the number of overidentifying restric-
tions. If the statistic J exceeds the chi-square critical value, we can reject the model. Based on
this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM
overidentification test is a very useful by-product of the GMM methodology, and it is advisable to
report the statistic J whenever GMM is the estimation method.
When over-identified models are estimated by GMM, it is customary to report the J statistic
as a general test of model adequacy.
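Continuing the linear example, the statistic $J_n$ and an asymptotic p-value from the $\chi^2_{\ell-k}$ distribution can be computed as follows (a sketch; the scipy chi-square routine and the array names are assumptions of the example).

import numpy as np
from scipy.stats import chi2

def j_test(y, X, Z, beta):
    """J_n = n^2 gbar' (g'g - n gbar gbar')^{-1} gbar and its chi-square(l-k) p-value."""
    n, k = X.shape
    l = Z.shape[1]
    g = Z * (y - X @ beta)[:, None]
    gbar = g.mean(axis=0)
    J = n ** 2 * gbar @ np.linalg.inv(g.T @ g - n * np.outer(gbar, gbar)) @ gbar
    return J, chi2.sf(J, df=l - k)     # degrees of freedom = number of overidentifying restrictions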
$$H_0: h(\beta) = 0.$$
The two minimizing criterion functions are $J_n(\hat\beta)$ and $J_n(\tilde\beta)$. The GMM distance statistic is the difference
$$D_n = J_n(\tilde\beta) - J_n(\hat\beta).$$
Proposition 13.7.1 If the same weight matrix $W_n$ is used for both null and alternative,
1. $D \geq 0$
2. $D \xrightarrow{d} \chi_r^2$
If h is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence
suggests that the Dn statistic appears to have quite good sampling properties, and is the preferred
test statistic.
Newey and West (1987) suggested to use the same weight matrix W n for both null and alter-
native, as this ensures that Dn ≥ 0. This reasoning is not compelling, however, and some current
research suggests that this restriction is not necessary for good performance of the test.
This test shares the useful feature of LR tests in that it is a natural by-product of the compu-
tation of alternative models.
E (ei (β) | z i ) = 0
where ei (β) is some s × 1 function of the observation and the parameters. In many cases, s = 1.
The variable z i is often called an instrument.
It turns out that this conditional moment restriction is much more powerful, and restrictive,
than the unconditional moment restriction discussed above.
As discussed later in Chapter 15, the linear model yi = x0i β + ei with instruments z i falls into
this class under the assumption E (ei | z i ) = 0. In this case, ei (β) = yi − x0i β.
It is also helpful to realize that conventional regression models also fall into this class, except
that in this case xi = z i . For example, in linear regression, ei (β) = yi − x0i β, while in a nonlinear
regression model ei (β) = yi − g(xi , β). In a joint model of the conditional mean and variance
$$e_i(\beta, \gamma) = \begin{pmatrix} y_i - x_i'\beta \\ \left(y_i - x_i'\beta\right)^2 - f(x_i)'\gamma \end{pmatrix}.$$
Here s = 2.
Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any $\ell \times 1$ function $\phi(x_i, \beta)$, we can set $g_i(\beta) = \phi(x_i, \beta)e_i(\beta)$ which
satisfies Eg i (β) = 0 and hence defines a GMM estimator. The obvious problem is that the class of
functions φ is infinite. Which should be selected?
This is equivalent to the problem of selection of the best instruments. If xi ∈ R is a valid
instrument satisfying E (ei | xi ) = 0, then xi , x2i , x3i , ..., etc., are all valid instruments. Which
should be used?
One solution is to construct an infinite list of potent instruments, and then use the first k
instruments. How is k to be determined? This is an area of theory still under development. A
recent study of this problem is Donald and Newey (2001).
Another approach is to construct the optimal instrument. The form was uncovered by Cham-
berlain (1987). Take the case s = 1. Let
$$R_i = E\left(\frac{\partial}{\partial\beta}e_i(\beta)\mid z_i\right)$$
and
$$\sigma_i^2 = E\left(e_i(\beta)^2\mid z_i\right).$$
Then the "optimal instrument" is
$$A_i = -\sigma_i^{-2}R_i$$
so the optimal moment is
$$g_i(\beta) = A_i e_i(\beta).$$
Setting g i (β) to be this choice (which is k × 1, so is just-identified) yields the best GMM estimator
possible.
In practice, Ai is unknown, but its form does help us think about construction of optimal
instruments.
In the linear model $e_i(\beta) = y_i - x_i'\beta$, note that
$$R_i = -E(x_i\mid z_i)$$
and
$$\sigma_i^2 = E\left(e_i^2\mid z_i\right),$$
so
$$A_i = \sigma_i^{-2}E(x_i\mid z_i).$$
In the case of linear regression, xi = z i , so Ai = σi−2 z i . Hence efficient GMM is GLS, as we
discussed earlier in the course.
In the case of endogenous variables, note that the efficient instrument Ai involves the estimation
of the conditional mean of xi given z i . In other words, to get the best instrument for xi , we need the
best conditional mean model for xi given z i , not just an arbitrary linear projection. The efficient
instrument is also inversely proportional to the conditional variance of ei . This is the same as the
GLS estimator; namely that improved efficiency can be obtained if the observations are weighted
inversely to the conditional variance of the errors.
theoretical justification for percentile-t methods. Furthermore, the bootstrap applied J test will
yield the wrong answer.
The problem is that in the sample, $\hat\beta$ is the "true" value and yet $\overline{g}_n(\hat\beta) \neq 0$. Thus according to random variables $(y_i^*, z_i^*, x_i^*)$ drawn from the EDF $F_n$,
$$E\left(g_i\left(\hat\beta\right)\right) = \overline{g}_n(\hat\beta) \neq 0.$$
This means that (yi∗ , z ∗i , x∗i ) do not satisfy the same moment conditions as the population distrib-
ution.
A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap
sample (y ∗ , Z ∗ , X ∗ ), define the bootstrap GMM criterion
$$J_n^*(\beta) = n\cdot\left(\overline{g}_n^*(\beta) - \overline{g}_n(\hat\beta)\right)'W_n^*\left(\overline{g}_n^*(\beta) - \overline{g}_n(\hat\beta)\right)$$
where $\hat e = y - X\hat\beta$ are the in-sample residuals. The bootstrap J statistic is $J_n^*(\hat\beta^*)$.
Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities $\hat p_i$ described in Chapter 14. Since $\sum_{i=1}^n \hat p_i\,g_i\left(\hat\beta\right) = 0$, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustments are needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
Exercises
Exercise 13.1 Take the model
yi = x0i β + ei
E (xi ei ) = 0
e2i = z 0i γ + ηi
E (z i ηi ) = 0.
³ ´
Find the method of moments estimators β̂, γ̂ for (β, γ) .
Exercise 13.2 Take the model
$$y = X\beta + e, \qquad E(e\mid Z) = 0.$$
Assume $E\left(e_i^2\mid z_i\right) = \sigma^2$. Show that if $\hat\beta$ is estimated by GMM with weight matrix $W_n = \left(Z'Z\right)^{-1}$, then
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N\left(0, \sigma^2\left(Q'M^{-1}Q\right)^{-1}\right)$$
where $Q = E\left(z_i x_i'\right)$ and $M = E\left(z_i z_i'\right)$.
Exercise 13.3 Take the model $y_i = x_i'\beta + e_i$ with $E(z_i e_i) = 0$. Let $\hat e_i = y_i - x_i'\hat\beta$ where $\hat\beta$ is consistent for $\beta$ (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n z_i z_i'\hat e_i^2\right)^{-1}.$$
Show that $W_n \xrightarrow{p} \Omega^{-1}$ where $\Omega = E\left(z_i z_i' e_i^2\right)$.
Exercise 13.4 In the linear model estimated by GMM with general weight matrix $W$, the asymptotic variance of $\hat\beta_{GMM}$ is
$$V = \left(Q'WQ\right)^{-1}Q'W\Omega WQ\left(Q'WQ\right)^{-1}.$$
(a) Let $V_0$ be this matrix when $W = \Omega^{-1}$. Show that $V_0 = \left(Q'\Omega^{-1}Q\right)^{-1}$.
(b) We want to show that for any $W$, $V - V_0$ is positive semi-definite (for then $V_0$ is the smallest possible covariance matrix and $W = \Omega^{-1}$ is the efficient weight matrix). To do this, start by finding matrices $A$ and $B$ such that $V = A'\Omega A$ and $V_0 = B'\Omega B$.
$$y_i = m(x_i, \beta) + e_i, \qquad E(z_i e_i) = 0.$$
The GMM test statistic (the distance statistic) of the hypothesis $h(\beta) = 0$ is
$$D = J_n(\tilde\beta) = \min_{h(\beta)=0} J_n(\beta). \quad (13.6)$$
Jn = ng n (β) b −1 g n (β)
b 0Ω b
d
denote the test of overidentifying restrictions. Show that Jn −→ χ2−k as n → ∞ by demonstrating
each of the following:
b = Dn C 0 g n (β0 ) where
(c) C 0 g n (β)
µ ¶ µµ ¶ µ ¶¶−1 µ ¶
0 1 0 1 0 b −1 1 0 1 0 b −1 C 0−1
Dn = I − C ZX XZ Ω ZX XZ Ω
n n n n
1 0
g n (β 0 ) = Z e.
n
p −1
(d) Dn −→ I − R (R0 R) R0 where R = C 0 E (z i x0i )
d
(e) n1/2 C 0 g n (β0 ) −→ u ∼ N (0, I )
³ ´
d −1
(f) Jn −→ u0 I − R (R0 R) R0 u
³ ´
−1
(g) u0 I − R (R0 R) R0 u ∼ χ2−k .
−1
Hint: I − R (R0 R) R0 is a projection matrix.
Chapter 14
Empirical Likelihood
Since each observation is observed once in the sample, the log-likelihood function for this multino-
mial distribution is
$$\log L\left(p_1, ..., p_n\right) = \sum_{i=1}^n \log(p_i). \quad (14.2)$$
i=1
First let us consider a just-identified model. In this case the moment condition places no
additional restrictions on the multinomial distribution. The maximum likelihood estimators of
the probabilities (p1 , ..., pn ) are those which maximize the log-likelihood subject to the constraint
(14.1). This is equivalent to maximizing
$$\sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right)$$
$$E g_i(\beta_0) = 0$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$, and for simplicity we write $g_i(\beta) = g(y_i, z_i, x_i, \beta)$. The multinomial distribution which places probability $p_i$ at each observation $(y_i, x_i, z_i)$ will satisfy this condition if and only if
$$\sum_{i=1}^n p_i g_i(\beta) = 0. \quad (14.3)$$
The empirical likelihood estimator is the value of $\beta$ which maximizes the multinomial log-likelihood (14.2) subject to the restrictions (14.1) and (14.3).
where $\lambda$ and $\mu$ are Lagrange multipliers. The first-order conditions of $L$ with respect to $p_i$, $\mu$, and $\lambda$ are
$$\frac{1}{p_i} = \mu + n\lambda'g_i(\beta), \qquad \sum_{i=1}^n p_i = 1, \qquad \sum_{i=1}^n p_i g_i(\beta) = 0.$$
Multiplying the first equation by $p_i$, summing over $i$, and using the second and third equations, we find $\mu = n$ and
$$p_i = \frac{1}{n\left(1 + \lambda'g_i(\beta)\right)}.$$
Substituting into $L$ we find
$$R(\beta, \lambda) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda'g_i(\beta)\right). \quad (14.4)$$
This minimization problem is the dual of the constrained maximization problem. The solution
(when it exists) is well defined since R(β, λ) is a convex function of λ. The solution cannot be
obtained explicitly, but must be obtained numerically (see section 6.5). This yields the (profile)
empirical log-likelihood function for β.
The EL estimate β̂ is the value which maximizes R(β), or equivalently minimizes its negative
$$G = EG_i(\beta_0), \qquad \Omega = E\left(g_i(\beta_0)g_i(\beta_0)'\right),$$
and
$$V = \left(G'\Omega^{-1}G\right)^{-1} \quad (14.9)$$
$$V_\lambda = \Omega - G\left(G'\Omega^{-1}G\right)^{-1}G'. \quad (14.10)$$
For example, in the linear model, $G_i(\beta) = -z_i x_i'$, $G = -E\left(z_i x_i'\right)$, and $\Omega = E\left(z_i z_i' e_i^2\right)$.
The theorem shows that the asymptotic variance V β for β̂ is the same as for efficient GMM.
Thus the EL estimator is asymptotically efficient.
Chamberlain (1987) showed that V β is the semiparametric efficiency bound for β in the overi-
dentified moment condition model. This means that no consistent estimator for this class of models
can have a lower asymptotic variance than V β . Since the EL estimator achieves this bound, it is
an asymptotically efficient estimator for β.
Let $G_n = \frac{1}{n}\sum_{i=1}^n G_i(\beta_0)$, $g_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)$ and $\Omega_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)g_i(\beta_0)'$.
Expanding (14.12) around $\beta = \beta_0$ and $\lambda = \lambda_0 = 0$ yields
$$0 \simeq G_n'\left(\hat\lambda - \lambda_0\right). \quad (14.13)$$
Theorem 14.3.1 If $Eg_i(\beta_0) = 0$ then $LR_n \xrightarrow{d} \chi_{\ell-k}^2$.
The EL overidentification test is similar to the GMM overidentification test. They are asymp-
totically first-order equivalent, and have the same interpretation. The overidentification test is a
very useful by-product of EL estimation, and it is advisable to report the statistic LRn whenever
EL is the estimation method.
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n g_i\left(\hat\beta\right) \simeq \sqrt{n}\left(g_n + G_n\left(\hat\beta - \beta_0\right)\right) \simeq \left(I - G_n\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\right)\sqrt{n}\,g_n \simeq \Omega_n\sqrt{n}\,\hat\lambda.$$
14.4 Testing
Let the maintained model be
$$E g_i(\beta) = 0 \quad (14.18)$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$. By "maintained" we mean that the overidentifying restrictions contained in (14.18) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is
$$h(\beta) = 0,$$
where $h: \mathbb{R}^k \to \mathbb{R}^a$. The restricted EL estimator and likelihood are the values which solve
$$\tilde\beta = \operatorname*{argmax}_{h(\beta)=0} R(\beta).$$

Theorem 14.4.1 Under (14.18) and $H_0: h(\beta) = 0$, $LR_n \xrightarrow{d} \chi_a^2$.
http://www.ssc.wisc.edu/~bhansen/progs/elike.prc
Derivatives
The numerical calculations depend on derivatives of the dual likelihood function (14.4). Define
$$g_i^*(\beta, \lambda) = \frac{g_i(\beta)}{1 + \lambda'g_i(\beta)}$$
$$G_i^*(\beta, \lambda) = \frac{G_i(\beta)'\lambda}{1 + \lambda'g_i(\beta)}$$
$$R_\lambda = \frac{\partial}{\partial\lambda}R(\beta, \lambda) = -\sum_{i=1}^n g_i^*(\beta, \lambda)$$
$$R_\beta = \frac{\partial}{\partial\beta}R(\beta, \lambda) = -\sum_{i=1}^n G_i^*(\beta, \lambda).$$
Inner Loop
The so-called “inner loop” solves (14.5) for given β. The modified Newton method takes a
quadratic approximation to Rn (β, λ) yielding the iteration rule
where δ > 0 is a scalar steplength (to be discussed next). The starting value λ1 can be set to the
zero vector. The iteration (14.19) is continued until the gradient Rλ (β, λj ) is smaller than some
prespecified tolerance.
Efficient convergence requires a good choice of steplength $\delta$. One method uses the following quadratic approximation. Set $\delta_0 = 0$, $\delta_1 = 1/2$ and $\delta_2 = 1$. For $p = 0, 1, 2$, set $R_p$ equal to the criterion evaluated at the candidate steplength $\delta_p$. A quadratic function can be fit exactly through these three points. The value of $\delta$ which minimizes this quadratic is
$$\hat\delta = \frac{R_2 + 3R_0 - 4R_1}{4R_2 + 4R_0 - 8R_1},$$
yielding the steplength to be plugged into (14.19).
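A Python sketch of the inner loop. The gradient $R_\lambda$ is as displayed above; the Hessian used in the Newton step and the fixed steplength (in place of the quadratic search just described) are assumptions of this sketch, the Hessian being obtained by differentiating $R_\lambda$ once more.

import numpy as np

def el_inner_loop(g, delta=1.0, tol=1e-10, max_iter=100):
    """Given the n x l matrix g with rows g_i(beta), find lambda(beta) by Newton iteration.
    No safeguard is included for steps that make 1 + lambda'g_i non-positive."""
    n, l = g.shape
    lam = np.zeros(l)                                   # starting value: zero vector
    for _ in range(max_iter):
        d = 1.0 + g @ lam                               # 1 + lambda'g_i
        R_lam = -(g / d[:, None]).sum(axis=0)           # gradient R_lambda
        if np.max(np.abs(R_lam)) < tol:
            break
        R_ll = (g / d[:, None] ** 2).T @ g              # Hessian (assumed outer-product form)
        lam = lam - delta * np.linalg.solve(R_ll, R_lam)
    return lam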
Outer Loop
$$R_\beta = \frac{\partial}{\partial\beta}R(\beta) = \frac{\partial}{\partial\beta}R(\beta, \lambda) = R_\beta + \lambda_\beta'R_\lambda = R_\beta$$
$$\lambda_\beta = \frac{\partial}{\partial\beta'}\lambda(\beta) = -R_{\lambda\lambda}^{-1}R_{\lambda\beta},$$
the second equality following from the implicit function theorem applied to $R_\lambda(\beta, \lambda(\beta)) = 0$.
The Hessian for (14.6) is
$$R_{\beta\beta} = -\frac{\partial}{\partial\beta\partial\beta'}R(\beta) = -\frac{\partial}{\partial\beta'}\left[R_\beta(\beta, \lambda(\beta)) + \lambda_\beta'R_\lambda(\beta, \lambda(\beta))\right] = -\left(R_{\beta\beta}(\beta, \lambda(\beta)) + R_{\lambda\beta}'\lambda_\beta + \lambda_\beta'R_{\lambda\beta} + \lambda_\beta'R_{\lambda\lambda}\lambda_\beta\right) = R_{\lambda\beta}'R_{\lambda\lambda}^{-1}R_{\lambda\beta} - R_{\beta\beta}.$$
It is not guaranteed that $R_{\beta\beta} > 0$. If not, the eigenvalues of $R_{\beta\beta}$ should be adjusted so that all are positive. The Newton iteration rule is
$$\beta_{j+1} = \beta_j - \delta R_{\beta\beta}^{-1}R_\beta.$$
Chapter 15

Endogeneity

We say that there is endogeneity in the linear model $y_i = x_i'\beta + e_i$ if $\beta$ is the parameter of interest and $E(x_i e_i) \neq 0$. This cannot happen if $\beta$ is defined by linear projection, so requires a structural interpretation. The coefficient $\beta$ must have meaning separately from the definition of a conditional mean or linear projection.

Example: Measurement error in the regressor. Suppose that $(y_i, x_i^*)$ are joint random variables, $E(y_i\mid x_i^*) = x_i^{*\prime}\beta$ is linear, $\beta$ is the parameter of interest, and $x_i^*$ is not observed. Instead we observe $x_i = x_i^* + u_i$ where $u_i$ is a $k \times 1$ measurement error, independent of $y_i$ and $x_i^*$. Then
$$y_i = x_i^{*\prime}\beta + e_i = (x_i - u_i)'\beta + e_i = x_i'\beta + v_i$$
where
$$v_i = e_i - u_i'\beta.$$
The problem is that
$$E(x_i v_i) = E\left[\left(x_i^* + u_i\right)\left(e_i - u_i'\beta\right)\right] = -E\left(u_i u_i'\right)\beta \neq 0$$
so
$$\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{bmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{bmatrix}^{-1}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{bmatrix} \beta_2 & \beta_1 \\ 1 & -1 \end{bmatrix}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{pmatrix} \beta_2 e_{1i} + \beta_1 e_{2i} \\ e_{1i} - e_{2i} \end{pmatrix}.$$
The projection of $q_i$ on $p_i$ yields
$$q_i = \beta^* p_i + \varepsilon_i, \qquad E(p_i\varepsilon_i) = 0$$
where
$$\beta^* = \frac{E(p_i q_i)}{E\left(p_i^2\right)} = \frac{\beta_2 - \beta_1}{2}.$$
Hence if it is estimated by OLS, $\hat\beta \xrightarrow{p} \beta^*$, which does not equal either $\beta_1$ or $\beta_2$. This is called simultaneous equations bias.
In a typical set-up, some regressors in $x_i$ will be uncorrelated with $e_i$ (for example, at least the intercept). Thus we make the partition
$$x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \end{pmatrix}\ \begin{matrix} k_1 \\ k_2 \end{matrix} \quad (15.3)$$
where $E(x_{1i}e_i) = 0$ yet $E(x_{2i}e_i) \neq 0$. We call $x_{1i}$ exogenous and $x_{2i}$ endogenous. By the above definition, $x_{1i}$ is an instrumental variable for (15.1), so should be included in $z_i$. So we have the partition
$$z_i = \begin{pmatrix} x_{1i} \\ z_{2i} \end{pmatrix}\ \begin{matrix} k_1 \\ \ell_2 \end{matrix} \quad (15.4)$$
where $x_{1i} = z_{1i}$ are the included exogenous variables, and $z_{2i}$ are the excluded exogenous variables. That is, $z_{2i}$ are variables which could be included in the equation for $y_i$ (in the sense that they are uncorrelated with $e_i$) yet can be excluded, as they would have true zero coefficients in the equation.
The model is just-identified if $\ell = k$ (i.e., if $\ell_2 = k_2$) and over-identified if $\ell > k$ (i.e., if $\ell_2 > k_2$).
We have noted that any solution to the problem of endogeneity requires instruments. This does
not mean that valid instruments actually exist.
$$u_i = x_i - \Gamma'z_i$$
as the projection error. Then the reduced form linear relationship between $x_i$ and $z_i$ is
$$x_i = \Gamma'z_i + u_i. \quad (15.5)$$
$$X = Z\Gamma + U \quad (15.6)$$
where $U$ is $n \times k$.
By construction,
$$E\left(z_i u_i'\right) = 0,$$
so (15.5) is a projection and can be estimated by OLS:
$$x_i = \hat\Gamma'z_i + \hat u_i$$
or
$$X = Z\hat\Gamma + \hat U$$
where
$$\hat\Gamma = \left(Z'Z\right)^{-1}\left(Z'X\right).$$
Substituting (15.6) into (15.2), we find
$$y = \left(Z\Gamma + U\right)\beta + e = Z\lambda + v, \quad (15.7)$$
where
$$\lambda = \Gamma\beta \quad (15.8)$$
and
$$v = U\beta + e.$$
Observe that
$$E(z_i v_i) = E\left(z_i u_i'\right)\beta + E(z_i e_i) = 0.$$
Thus (15.7) is a projection equation and may be estimated by OLS. This is
$$y = Z\hat\lambda + \hat v, \qquad \hat\lambda = \left(Z'Z\right)^{-1}\left(Z'y\right).$$
The equation (15.7) is the reduced form for $y$. (15.6) and (15.7) together are the reduced form equations for the system
$$y = Z\lambda + v$$
$$X = Z\Gamma + U.$$
As we showed above, OLS yields the reduced-form estimates $\left(\hat\lambda, \hat\Gamma\right)$.
15.3 Identification
The structural parameter $\beta$ relates to $(\lambda, \Gamma)$ through (15.8). The parameter $\beta$ is identified, meaning that it can be recovered from the reduced form, if
$$\operatorname{rank}(\Gamma) = k. \quad (15.9)$$
Assume that (15.9) holds. If $\ell = k$, then $\beta = \Gamma^{-1}\lambda$. If $\ell > k$, then for any $W > 0$, $\beta = \left(\Gamma'W\Gamma\right)^{-1}\Gamma'W\lambda$.
If (15.9) is not satisfied, then $\beta$ cannot be recovered from $(\lambda, \Gamma)$. Note that a necessary (although not sufficient) condition for (15.9) is $\ell \geq k$.
Since $Z$ and $X$ have the common variables $X_1$, we can rewrite some of the expressions. Using (15.3) and (15.4) to make the matrix partitions $Z = [Z_1, Z_2]$ and $X = [Z_1, X_2]$, we can partition $\Gamma$ as
$$\Gamma = \begin{bmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{bmatrix} = \begin{bmatrix} I & \Gamma_{12} \\ 0 & \Gamma_{22} \end{bmatrix}.$$
(15.6) can be rewritten as
$$X_1 = Z_1$$
$$X_2 = Z_1\Gamma_{12} + Z_2\Gamma_{22} + U_2. \quad (15.10)$$
$\beta$ is identified if $\operatorname{rank}(\Gamma) = k$, which is true if and only if $\operatorname{rank}(\Gamma_{22}) = k_2$ (by the upper-diagonal structure of $\Gamma$). Thus the key to identification of the model rests on the $\ell_2 \times k_2$ matrix $\Gamma_{22}$ in (15.10).
15.4 Estimation
The model can be written as
$$y_i = x_i'\beta + e_i, \qquad E(z_i e_i) = 0,$$
or
$$E g_i(\beta) = 0, \qquad g_i(\beta) = z_i\left(y_i - x_i'\beta\right).$$
This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in those chapters directly apply. Recall that the GMM estimator, for given weight matrix $W_n$, is
$$\hat\beta = \left(X'ZW_nZ'X\right)^{-1}X'ZW_nZ'y.$$
In the just-identified case $\ell = k$ this simplifies to
$$\hat\beta = \left(Z'X\right)^{-1}Z'y$$
regardless of $W_n$. This estimator is often called the instrumental variables estimator (IV) of $\beta$, where $Z$ is used as an instrument for $X$. Observe that the weight matrix $W_n$ has disappeared: in the just-identified case, the weight matrix plays no role. This is also the method of moments estimator of $\beta$, and the EL estimator. Another interpretation stems from the fact that since $\beta = \Gamma^{-1}\lambda$, we can construct the Indirect Least Squares (ILS) estimator:
$$\hat\beta = \hat\Gamma^{-1}\hat\lambda = \left(\left(Z'Z\right)^{-1}\left(Z'X\right)\right)^{-1}\left(\left(Z'Z\right)^{-1}\left(Z'y\right)\right) = \left(Z'X\right)^{-1}\left(Z'Z\right)\left(Z'Z\right)^{-1}\left(Z'y\right) = \left(Z'X\right)^{-1}\left(Z'y\right).$$
In the over-identified case, the GMM estimator with weight matrix $W_n = \left(Z'Z\right)^{-1}$ is
$$\hat\beta = \left(X'Z\left(Z'Z\right)^{-1}Z'X\right)^{-1}X'Z\left(Z'Z\right)^{-1}Z'y.$$
This is called the two-stage least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.
It is useful to observe that writing
$$P = Z\left(Z'Z\right)^{-1}Z',$$
then
$$\hat X = PX = Z\hat\Gamma = \left[PX_1, PX_2\right] = \left[X_1, PX_2\right] = \left[X_1, \hat X_2\right],$$
since $X_1$ lies in the span of $Z$. Thus in the second stage, we regress $y$ on $X_1$ and $\hat X_2$. So only the endogenous variables $X_2$ are replaced by their fitted values:
$$\hat X_2 = Z_1\hat\Gamma_{12} + Z_2\hat\Gamma_{22}.$$
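A direct Python sketch of 2SLS, i.e. GMM with $W_n = (Z'Z)^{-1}$; here X stacks the included exogenous and endogenous regressors and Z the full instrument set, and the names are assumptions of the example.

import numpy as np

def tsls(y, X, Z):
    """beta_2SLS = (X'PX)^{-1} X'Py with P = Z (Z'Z)^{-1} Z'."""
    ZZ_inv = np.linalg.inv(Z.T @ Z)
    XPX = X.T @ Z @ ZZ_inv @ Z.T @ X
    XPy = X.T @ Z @ ZZ_inv @ Z.T @ y
    return np.linalg.solve(XPX, XPy)

Equivalently, one can regress the endogenous block of X on Z, form the fitted values, and run the second-stage regression of y on the exogenous regressors and those fitted values; the coefficient estimates coincide.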
$$y = X\beta + e \quad (15.11)$$
$$X = Z\Gamma + U \quad (15.12)$$
$$\xi = (e, U), \qquad E(\xi\mid Z) = 0, \qquad E\left(\xi'\xi\mid Z\right) = S$$
where
$$\alpha = \frac{\ell}{n}.$$
Using (15.12) and this result,
$$\frac{1}{n}E\left(X'Pe\right) = \frac{1}{n}E\left(\Gamma'Z'e\right) + \frac{1}{n}E\left(U'Pe\right) = \alpha s_{21},$$
and
$$\frac{1}{n}E\left(X'PX\right) = \Gamma'E\left(z_i z_i'\right)\Gamma + \Gamma'E\left(z_i u_i'\right) + E\left(u_i z_i'\right)\Gamma + \frac{1}{n}E\left(U'PU\right) = \Gamma'Q\Gamma + \alpha S_{22}.$$
Together,
$$E\left(\hat\beta_{2SLS} - \beta\right) \approx \left(E\left(\frac{1}{n}X'PX\right)\right)^{-1}E\left(\frac{1}{n}X'Pe\right) = \alpha\left(\Gamma'Q\Gamma + \alpha S_{22}\right)^{-1}s_{21}. \quad (15.14)$$
In general this is non-zero, except when s21 = 0 (when X is exogenous). It is also close to zero
when α = 0. Bekker (1994) pointed out that it also has the reverse implication — that when α = l/n
is large, the bias in the 2SLS estimator will be large. Indeed as α → 1, the expression in (15.14)
approaches that in (15.13), indicating that the bias in 2SLS approaches that of OLS as the number
of instruments increases.
Bekker (1994) showed further that under the alternative asymptotic approximation that α is
fixed as n → ∞ (so that the number of instruments goes to infinity proportionately with sample
size) then the expression in (15.14) is the probability limit of β̂2SLS − β
X 2 = Z 1 Γ12 + Z 2 Γ22 + U 2 .
The parameter β fails to be identified if Γ22 has deficient rank. The consequences of identification
failure for inference are quite severe.
Take the simplest case where k = l = 1 (so there is no Z 1 ). Then the model may be written as
yi = xi β + ei
xi = zi γ + ui
and Γ22 = γ = E (zi xi ) /Ezi2 . We see that β is identified if and only if γ 6= 0, which occurs
when E (xi zi ) 6= 0. Thus identification hinges on the existence of correlation between the excluded
exogenous variable and the included endogenous variable.
Suppose this condition fails, so $E(x_i z_i) = 0$. Then by the CLT
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i e_i \xrightarrow{d} N_1 \sim N\left(0, E\left(z_i^2 e_i^2\right)\right) \quad (15.15)$$
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i x_i = \frac{1}{\sqrt{n}}\sum_{i=1}^n z_i u_i \xrightarrow{d} N_2 \sim N\left(0, E\left(z_i^2 u_i^2\right)\right) \quad (15.16)$$
therefore
$$\hat\beta - \beta = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i e_i}{\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i x_i} \xrightarrow{d} \frac{N_1}{N_2} \sim \text{Cauchy},$$
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution
does not have a finite mean. This result carries over to more general settings, and was examined
by Phillips (1989) and Choi and Phillips (1992).
Suppose that identification does not completely fail, but is weak. This occurs when Γ22 is full
rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz
Γ22 = n−1/2 C,
where C is a full rank matrix. The n−1/2 is picked because it provides just the right balancing to
allow a rich distribution theory.
To see the consequences, once again take the simple case $k = \ell = 1$. Here, the instrument $z_i$ is weak for $x_i$ if
$$\gamma = n^{-1/2}c.$$
Then (15.15) is unaffected, but (15.16) instead takes the form
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n z_i x_i = \frac{1}{\sqrt{n}}\sum_{i=1}^n z_i^2\gamma + \frac{1}{\sqrt{n}}\sum_{i=1}^n z_i u_i = \frac{1}{n}\sum_{i=1}^n z_i^2 c + \frac{1}{\sqrt{n}}\sum_{i=1}^n z_i u_i \xrightarrow{d} Qc + N_2,$$
therefore
$$\hat\beta - \beta \xrightarrow{d} \frac{N_1}{Qc + N_2}.$$
As in the case of complete identification failure, we find that β̂ is inconsistent for β and the
asymptotic distribution of β̂ is non-normal. In addition, standard test statistics have non-standard
distributions, meaning that inferences about parameters of interest can be misleading.
The distribution theory for this model was developed by Staiger and Stock (1997) and extended
to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained
by Wang and Zivot (1998).
The bottom line is that it is highly desirable to avoid identification failure. Once again, the
equation to focus on is the reduced form
X 2 = Z 1 Γ12 + Z 2 Γ22 + U 2
Exercises
1. Consider the single equation model
yi = zi β + ei ,
where yi and zi are both real-valued (1 × 1). Let β̂ denote the IV estimator of β using as an
instrument a dummy variable di (takes only the values 0 and 1). Find a simple expression
for the IV estimator in this context.
yi = x0i β + ei
E (ei | xi ) = 0
¡ ¢
suppose σi2 = E e2i | xi is known. Show that the GLS estimator of β can be written as an
IV estimator using some instrument z i . (Find an expression for z i .)
4. The reduced form between the regressors xi and instruments z i takes the form
xi = Γ0 z i + ui
or
X = ZΓ + U
where xi is k × 1, z i is l × 1, X is n × k, Z is n × l, U is n × k, and Γ is l × k. The parameter
Γ is defined by the population moment condition
¡ ¢
E z i u0i = 0
−1
Show that the method of moments estimator for Γ is Γ̂ = (Z 0 Z) (Z 0 X) .
y = Xβ + e
X = ZΓ + U
with Γ l × k, l ≥ k, we claim that β is identified (can be recovered from the reduced form) if
rank(Γ) = k. Explain why this is true. That is, show that if rank(Γ) < k then β cannot be
identified.
yi = xi β + ei
E (ei | xi ) = 0.
E (z i (yi − xi β)) = 0.
7. Suppose that price and quantity are determined by the intersection of the linear demand and
supply curves
Demand : Q = a0 + a1 P + a2 Y + e1
Supply : Q = b0 + b1 P + b2 W + e2
where income (Y ) and wage (W ) are determined outside the market. In this model, are the
parameters identified?
8. The data file card.dat is taken from Card (1995). There are 2215 observations with 29
variables, listed in card.pdf. We want to estimate a wage equation
where Educ = Education (Years), Exper = Experience (Years), and South and Black are regional and racial dummy variables.
(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate
the model by 2SLS, using the instrument near4, a dummy indicating that the observation
lives near a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional
instruments: near2 (a dummy indicating that the observation lives near a 2-year college),
fatheduc (the education, in years, of the father) and motheduc (the education, in years,
of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as
the first-step to get the weight matrix, and then calculate the GMM estimator from this
weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the J statistic for overidentification.
(f) Discuss your findings.
Chapter 16

Univariate Time Series
A time series yt is a process observed in sequence over time, t = 1, ..., T . To indicate the
dependence on time, we adopt new notation, and use the subscript t to denote the individual
observation, and T to denote the number of observations.
Because of the sequential nature of time series, we expect that yt and yt−1 are not independent,
so classical assumptions are not valid.
We can separate time series into two categories: univariate (yt ∈ R is scalar); and multivariate
(yt ∈ Rm is vector-valued). The primary model for univariate time series is autoregressions (ARs).
The primary model for multivariate time series is vector autoregressions (VARs).
$$E(y_t) = \mu$$
is independent of $t$, and
$$\operatorname{cov}(y_t, y_{t-k}) = \gamma(k)$$
is independent of $t$ for all $k$.
The following two theorems are essential to the analysis of stationary time series. The proofs
are rather difficult, however.
Proof of Theorem 16.1.3. Part (1) is a direct consequence of the Ergodic Theorem. For Part (2), note that
$$\hat\gamma(k) = \frac{1}{T}\sum_{t=1}^T\left(y_t - \hat\mu\right)\left(y_{t-k} - \hat\mu\right) = \frac{1}{T}\sum_{t=1}^T y_t y_{t-k} - \frac{1}{T}\sum_{t=1}^T y_t\hat\mu - \frac{1}{T}\sum_{t=1}^T y_{t-k}\hat\mu + \hat\mu^2.$$
By Theorem 16.1.1 above, the sequence $y_t y_{t-k}$ is strictly stationary and ergodic, and it has a finite mean by the assumption that $Ey_t^2 < \infty$. Thus an application of the Ergodic Theorem yields
$$\frac{1}{T}\sum_{t=1}^T y_t y_{t-k} \xrightarrow{p} E(y_t y_{t-k}).$$
Thus
$$\hat\gamma(k) \xrightarrow{p} E(y_t y_{t-k}) - \mu^2 - \mu^2 + \mu^2 = E(y_t y_{t-k}) - \mu^2 = \gamma(k).$$
Part (3) follows by the continuous mapping theorem: $\hat\rho(k) = \hat\gamma(k)/\hat\gamma(0) \xrightarrow{p} \gamma(k)/\gamma(0) = \rho(k)$.
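The sample autocovariances and autocorrelations appearing in the theorem can be computed directly; in this Python sketch the sum runs over the available terms and is divided by T, and the function names are illustrative.

import numpy as np

def gamma_hat(y, k):
    """gamma_hat(k) = (1/T) sum (y_t - mu_hat)(y_{t-k} - mu_hat)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    return (d[k:] * d[:T - k]).sum() / T

def rho_hat(y, k):
    return gamma_hat(y, k) / gamma_hat(y, 0)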
16.2 Autoregressions
In time-series, the series {..., y1 , y2 , ..., yT , ...} are jointly random. We consider the conditional
expectation
E (yt | Ft−1 )
where Ft−1 = {yt−1 , yt−2 , ...} is the past history of the series.
An autoregressive (AR) model specifies that only a finite number of past lags matter:
A linear AR model (the most common type used in practice) specifies linearity:
Letting
et = yt − E (yt | Ft−1 ) ,
then we have the autoregressive model
Regression errors are naturally a MDS. Some time-series processes may be a MDS as a conse-
quence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that
either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing
models imply that asset returns should be the sum of a constant plus a MDS.
The MDS property for the regression error plays the same role in a time-series regression as
does the conditional mean-zero property for the regression error in a cross-section regression. In
fact, it is even more important in the time-series context, as it is difficult to derive distribution
theories without this property.
A useful property of a MDS is that et is uncorrelated with any function of the lagged information
Ft−1 . Thus for k > 0, E (yt−k et ) = 0.
Loosely speaking, this series converges if the sequence αk et−k gets small as k → ∞. This occurs
when |α| < 1.
Theorem 16.3.1 If and only if |α| < 1 then yt is strictly stationary and
ergodic.
where the λ1 , ..., λk are the complex roots of α(z), which satisfy α(λj ) = 0.
We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive
polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one.
Let |λ| denote the modulus of a complex number λ.
Theorem 16.5.1 The AR(k) is strictly stationary and ergodic if and only
if |λj | > 1 for all j.
One way of stating this is that “All roots lie outside the unit circle.”
If one of the roots equals 1, we say that α(L), and hence yt , “has a unit root”. This is a special
case of non-stationarity, and is of great interest in applied time series.
16.6 Estimation
Let
¡ ¢0
xt = 1 yt−1 yt−2 · · · yt−k
¡ ¢0
β= α0 α1 α2 · · · αk .
The vector xt is strictly stationary and ergodic, and by Theorem 16.1.1, so is xt x0t . Thus by the
Ergodic Theorem,
T
1X p ¡ ¢
xt x0t −→ E xt x0t = Q.
T
t=1
Combined with (16.1) and the continuous mapping theorem, we see that
à T
!−1 Ã T
!
b −β = 1X 1X p
β xt x0t xt et −→ Q−1 0 = 0.
T t=1 T t=1
where
Ω = E(xt x0t e2t ).
This is identical in form to the asymptotic distribution of OLS in cross-section regression. The
implication is that asymptotic inference is the same. In particular, the asymptotic covariance
matrix is estimated just as in the cross-section case.
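A minimal sketch of AR(k) estimation by OLS with the White covariance matrix estimate of the sandwich form above; the lag-matrix construction and function name are assumptions of the example.

import numpy as np

def ar_ols(y, k):
    """OLS for y_t = x_t'beta + e_t with x_t = (1, y_{t-1}, ..., y_{t-k})'."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    Y = y[k:]
    X = np.column_stack([np.ones(T - k)] + [y[k - j:T - j] for j in range(1, k + 1)])
    beta = np.linalg.solve(X.T @ X, X.T @ Y)
    e = Y - X @ beta
    XX_inv = np.linalg.inv(X.T @ X)
    Omega = (X * e[:, None] ** 2).T @ X            # sum x_t x_t' e_t^2
    V = XX_inv @ Omega @ XX_inv                    # heteroskedasticity-robust covariance
    return beta, np.sqrt(np.diag(V))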
3. Simulate iid draws e∗i from the empirical distribution of the residuals {ê1 , ..., êT }.
This construction imposes homoskedasticity on the errors e∗i , which may be different than the
properties of the actual ei . It also presumes that the AR(k) structure is the truth.
4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model-
misspecification.
5. The results may be sensitive to the block length, and the way that the data are partitioned
into blocks.
$$y_t = \mu_0 + \mu_1 t + S_t \quad (16.2)$$
$$S_t = \rho_1 S_{t-1} + \rho_2 S_{t-2} + \cdots + \rho_k S_{t-k} + e_t, \quad (16.3)$$
or
$$y_t = \alpha_0 + \alpha_1 t + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t. \quad (16.4)$$
There are two essentially equivalent ways to estimate the autoregressive parameters (α1 , ..., αk ).
• You can estimate (16.2)-(16.3) sequentially by OLS. That is, first estimate (16.2), get the
residual Ŝt , and then perform regression (16.3) replacing St with Ŝt . This procedure is some-
times called Detrending.
• Alternatively, you can estimate (16.4) directly by OLS in one step.

The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.
Seasonal Effects
• Include dummy variables for each season. This presumes that “seasonality” does not change
over the sample.
• Use “seasonally adjusted” data. The seasonal factor is typically estimated by a two-sided
weighted average of the data for that season in neighboring years. Thus the seasonally
adjusted data is a “filtered” series. This is a flexible approach which can extract a wide range
of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations
of the data.
• First apply a seasonal differencing operator. If s is the number of seasons (typically s = 4 or
s = 12),
∆s yt = yt − yt−s ,
or the season-to-season change. The series ∆s yt is clearly free of seasonality. But the long-run
trend is also eliminated, and perhaps this was of relevance.
$$y_t = \alpha_0 + \alpha_1 y_{t-1} + u_t \quad (16.5)$$
$$u_t = \theta u_{t-1} + e_t \quad (16.6)$$
with $e_t$ a MDS. The hypothesis of no omitted serial correlation is
$$H_0: \theta = 0 \qquad H_1: \theta \neq 0.$$
We want to test $H_0$ against $H_1$.
To combine (16.5) and (16.6), we take (16.5) and lag the equation once:
$$y_{t-1} = \alpha_0 + \alpha_1 y_{t-2} + u_{t-1}.$$
$$\alpha(L)y_t = \alpha_0 + e_t$$
where
$$\alpha(L) = 1 - \alpha_1 L - \cdots - \alpha_k L^k.$$
The process has a unit root when $\alpha(1) = 0$, or equivalently
$$\alpha_1 + \alpha_2 + \cdots + \alpha_k = 1.$$
In this case, $y_t$ is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.
A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
$$\Delta y_t = \alpha_0 + \rho_0 y_{t-1} + \rho_1\Delta y_{t-1} + \cdots + \rho_{k-1}\Delta y_{t-(k-1)} + e_t. \quad (16.7)$$
These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter $\rho_0$ summarizes the information about the unit root, since $\alpha(1) = -\rho_0$. To see this, observe that the lag polynomial for the $y_t$ computed from (16.7) is
$$(1 - L) - \rho_0 L - \rho_1\left(L - L^2\right) - \cdots - \rho_{k-1}\left(L^{k-1} - L^k\right).$$
But this must equal $\alpha(L)$, as the models are equivalent. Thus
$$\alpha(1) = (1 - 1) - \rho_0 - (1 - 1) - \cdots - (1 - 1) = -\rho_0.$$
H0 : ρ0 = 0.
H1 : ρ0 < 0.
which is an AR(k-1) in the first-difference ∆yt . Thus if yt has a (single) unit root, then ∆yt is a
stationary AR process. Because of this property, we say that if yt is non-stationary but ∆d yt is
stationary, then yt is “integrated of order d”, or I(d). Thus a time series with unit root is I(1).
Since $\rho_0$ is the parameter of a linear regression, the natural test statistic is the t-statistic for $H_0$ from OLS estimation of (16.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.
It would seem natural to assess the significance of the ADF statistic using the normal table. However, under $H_0$, $y_t$ is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results. Under $H_0$,
$$T\hat\rho_0 \xrightarrow{d} DF_\alpha$$
and
$$ADF = \frac{\hat\rho_0}{s(\hat\rho_0)} \xrightarrow{d} DF_t.$$
The limit distributions DFα and DFt are non-normal. They are skewed to the left, and have
negative means.
The first result states that ρ̂0 converges to its true value (of zero) at rate T, rather than the
conventional rate of T 1/2 . This is called a “super-consistent” rate of convergence.
The second result states that the t-statistic for ρ̂0 converges to a limit distribution which is
non-normal, but does not depend on the parameters ρ. This distribution has been extensively
tabulated, and may be used for testing the hypothesis H0 . Note: The standard error s(ρ̂0 ) is the
conventional (“homoskedastic”) standard error. But the theorem does not require an assumption
of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.
Since the alternative hypothesis is one-sided, the ADF test rejects H0 in favor of H1 when
ADF < c, where c is the critical value from the ADF table. If the test rejects H0 , this means that
the evidence points to yt being stationary. If the test does not reject H0 , a common conclusion is
that the data suggests that yt is non-stationary. This is not really a correct conclusion, however.
All we can say is that there is insufficient evidence to conclude whether the data are stationary or
not.
We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
∆yt = α0 + α1 t + ρ0 yt−1 + ρ1 ∆yt−1 + · · · + ρk−1 ∆yt−(k−1) + et . (16.8)
This is natural when the alternative hypothesis is that the series is stationary about a linear time
trend. If the series has a linear trend (e.g. GDP, Stock Prices), then the series itself is non-
stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste
of time to fit an AR model to the level of the series without a time trend, as the AR model cannot
conceivably describe this data. The natural solution is to include a time trend in the fitted OLS
equation. When conducting the ADF test, this means that it is computed as the t-ratio for ρ0 from
OLS estimation of (16.8).
If a time trend is included, the test procedure is the same, but different critical values are
required. The ADF test has a different distribution when the time trend has been included, and a
different table should be consulted.
Most texts include as well the critical values for the extreme polar case where the intercept has
been omitted from the model. These are included for completeness (from a pedagogical perspective)
but have no relevance for empirical practice where intercepts are always included.
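As a concrete illustration, the following minimal Python sketch (numpy only; the series y and the lag order k are hypothetical inputs) computes the ADF t-ratio for ρ0 from OLS estimation of the reparameterized regression with an intercept. The resulting statistic must be compared with the Dickey-Fuller table, not the normal table.

import numpy as np

def adf_tstat(y, k=4):
    # ADF regression: dy_t = a0 + rho0*y_{t-1} + sum_j rho_j*dy_{t-j} + e_t
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy) - (k - 1)              # usable observations
    Y = dy[k - 1:]                     # dependent variable dy_t
    X = [np.ones(T), y[k - 1:-1]]      # intercept and y_{t-1}
    for j in range(1, k):              # lagged differences dy_{t-j}
        X.append(dy[k - 1 - j:-j])
    X = np.column_stack(X)
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ (X.T @ Y)
    e = Y - X @ b
    s2 = (e @ e) / (T - X.shape[1])    # conventional homoskedastic variance
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return b[1] / se[1]                # t-ratio for rho0; compare with DF table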
Chapter 17
Multivariate Time Series
A multivariate time series y t is a vector process m × 1. Let Ft−1 = (y t−1 , y t−2 , ...) be all lagged
information at time t. The typical goal is to find the conditional expectation E (y t | Ft−1 ) . Note
that since y t is a vector, this conditional expectation is also a vector.
17.2 Estimation
Write the VAR system equation-by-equation as yjt = a′j xt + ejt , j = 1, ..., m, where xt = (1, y′t−1, . . . , y′t−k)′ collects the intercept and k lags of yt. The model implies the moment conditions
E (xt ejt) = 0, j = 1, ..., m,
either as a regression or as a linear projection.
The GMM estimator corresponding to these moment conditions is equation-by-equation OLS
âj = (X′X)−1X′yj .
Stacking the equations, Â = (X′X)−1X′Y , where
Y = ( y1 y2 · · · ym )
is the T × m matrix of the stacked y′t.
This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator,
and was originally derived by Zellner (1962).
Often we are only interested in a single equation out of a VAR system. This takes the form
yjt = a′j xt + ejt ,
and xt consists of lagged values of yjt and the other ylt′s. In this case, it is convenient to re-define the variables. Let yt = yjt, and zt be the other variables. Let et = ejt and β = aj. Then the single equation takes the form
yt = x′tβ + et , (17.1)
and
xt = ( 1, y t−1, · · · , y t−k, z′t−1, · · · , z′t−k )′ .
Consider the problem of testing for omitted serial correlation in equation (17.1). Suppose that et is an AR(1) with MDS innovations:
yt = x′tβ + et
et = θet−1 + ut (17.2)
E (ut | Ft−1) = 0.
Then the null and alternative are H0 : θ = 0 against H1 : θ ≠ 0.
Take the equation yt = x′tβ + et, and subtract off the equation once lagged multiplied by θ, to get
yt − θyt−1 = (x′tβ + et) − θ(x′t−1β + et−1)
= x′tβ − θx′t−1β + et − θet−1 ,
or
yt = θyt−1 + x′tβ + x′t−1γ + ut , (17.3)
which is a valid regression model.
So testing H0 versus H1 is equivalent to testing for the significance of adding (yt−1 , xt−1 ) to
the regression. This can be done by a Wald test. We see that an appropriate, general, and simple
way to test for omitted serial correlation is to test the significance of extra lagged values of the
dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which once was
very popular, and is still routinely reported by conventional regression packages. The DW test is
appropriate only when the regression yt = x′tβ + et is not dynamic (has no lagged values on the RHS), and et is iid N(0, σ²). Otherwise it is invalid.
Another interesting fact is that (17.2) is a special case of (17.3), under the restriction γ = −βθ.
This restriction, which is called a common factor restriction, may be tested if desired. If valid,
the model (17.2) may be estimated by iterated GLS. (A simple version of this estimator is called
Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected
empirically, direct estimation of (17.2) is uncommon in recent applications.
A data-dependent rule to pick the lag length k in a VAR is to minimize an information criterion such as
AIC(k) = log det( Ω̂(k) ) + 2p/T ,
Ω̂(k) = (1/T) Σ_{t=1}^{T} êt(k) êt(k)′ ,
where p is the number of parameters in the model, and êt(k) is the OLS residual vector from the model with k lags. The log determinant is the criterion from the multivariate normal likelihood.
Partition the multivariate series as (yt, zt) and define the information sets F1t = (yt, yt−1, yt−2, . . .) and F2t = (yt, zt, yt−1, zt−1, . . .). The information set F1t is generated only by the history of yt, and the information set F2t is generated by both yt and zt. The latter has more information.
We say that z t does not Granger-cause y t if
E (y t | F1,t−1 ) = E (y t | F2,t−1 ) .
That is, conditional on information in lagged y t , lagged z t does not help to forecast y t . If this
condition does not hold, then we say that z t Granger-causes y t .
The reason why we call this “Granger Causality” rather than “causality” is because this is not
a physical or structure definition of causality. If z t is some sort of forecast of the future, such as a
futures price, then z t may help to forecast y t even though it does not “cause” y t . This definition
of causality was developed by Granger (1969) and Sims (1972).
In a linear VAR, the equation for yt is
yt = α + ρ1 yt−1 + · · · + ρk yt−k + z′t−1γ1 + · · · + z′t−kγk + et .
In this specification, zt does not Granger-cause yt if and only if
H0 : γ1 = γ2 = · · · = γk = 0.
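A rough sketch of how this exclusion test can be computed is given below (Python with numpy; the series y, z and the lag length k are hypothetical, and the statistic is a standard residual-based Wald/LM form, approximately χ²_k under H0 and conditional homoskedasticity).

import numpy as np

def granger_wald(y, z, k=4):
    # Regress y_t on an intercept, lags of y, and lags of z;
    # test the joint exclusion of the lagged z's.
    y, z = np.asarray(y, float), np.asarray(z, float)
    T = len(y) - k
    Y = y[k:]
    lags_y = np.column_stack([y[k - j:-j] for j in range(1, k + 1)])
    lags_z = np.column_stack([z[k - j:-j] for j in range(1, k + 1)])
    X1 = np.column_stack([np.ones(T), lags_y])   # restricted regression
    X2 = np.column_stack([X1, lags_z])           # unrestricted regression
    e1 = Y - X1 @ np.linalg.lstsq(X1, Y, rcond=None)[0]
    e2 = Y - X2 @ np.linalg.lstsq(X2, Y, rcond=None)[0]
    # statistic approximately chi-squared with k degrees of freedom under H0
    return T * (e1 @ e1 - e2 @ e2) / (e2 @ e2)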
Clive W. J. Granger
Clive Granger (1934-2009) of England was one of the leading figures in time-
series econometrics, and co-winner in 2003 of the Nobel Memorial Prize in
Economic Sciences (along with Robert Engle). In addition to formalizing
the definition of causality known as Granger causality, he invented the con-
cept of cointegration, introduced spectral methods into econometrics, and
formalized methods for the combination of forecasts.
17.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger (1987). The m × 1 series yt is cointegrated if yt is I(1) yet there exists β, m × r, of full rank r, such that zt = β′yt is I(0). The r vectors in β are called the cointegrating vectors.
If the series yt is not cointegrated, then r = 0. If r = m, then yt is I(0). For 0 < r < m, yt is I(1) and cointegrated.
In some cases, it may be believed that β is known a priori. Often, β = (1, −1)′. For example, if yt is a pair of interest rates, then β = (1, −1)′ specifies that the spread (the difference in returns) is stationary. If yt = (log(C), log(I))′, then β = (1, −1)′ specifies that log(C/I) is stationary.
In other cases, β may not be known.
If yt is cointegrated with a single cointegrating vector (r = 1), then it turns out that β can be consistently estimated by an OLS regression of one component of yt on the others. Thus if yt = (y1t, y2t)′ and β = (β1, β2)′ with the normalization β1 = 1, then β̂2 = (y′2y2)−1y′2y1 →p β2. Furthermore this estimation is super-consistent: T(β̂2 − β2) →d Limit, as first shown by Stock (1987). This is not, in general, a good method to estimate β, but it is useful in the construction of alternative estimators and tests.
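The following few lines sketch this OLS step (numpy; the arrays y1 and y2 are hypothetical). The residual is the candidate stationary combination, which in applied work is typically examined with an ADF-type test; an intercept is usually added as well.

import numpy as np

def coint_beta2(y1, y2):
    # beta2_hat = (y2'y2)^{-1} y2'y1, as in the text (normalization beta1 = 1)
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    beta2 = (y2 @ y1) / (y2 @ y2)
    resid = y1 - beta2 * y2        # candidate stationary combination
    return beta2, resid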
We are often interested in testing the hypothesis of no cointegration:
H0 : r = 0
H1 : r > 0.
A VAR for yt can be written in lag-operator notation as
A(L)yt = et ,
A(L) = I − A1L − A2L² − · · · − AkL^k ,
or alternatively as
∆y t = Πy t−1 + D(L)∆y t−1 + et
where
Π = −A(1)
= −I + A1 + A2 + · · · + Ak .
Thus cointegration imposes a restriction upon the parameters of a VAR: Π has reduced rank r and can be written as Π = αβ′ with α an m × r matrix. The restricted model can be written as
∆yt = αβ′yt−1 + D(L)∆yt−1 + et .
Chapter 18
Limited Dependent Variables
A “limited dependent variable” y is one which takes a “limited” set of values. The most common cases are
• Binary: y ∈ {0, 1}
• Censored: y ∈ R+
The traditional approach to the estimation of limited dependent variable (LDV) models is
parametric maximum likelihood. A parametric model is constructed, allowing the construction of
the likelihood function. A more modern approach is semi-parametric, eliminating the dependence
on a parametric distributional assumption. We will discuss only the first (parametric) approach,
due to time constraints. They still constitute the majority of LDV applications. If, however, you
were to write a thesis involving LDV estimation, you would be advised to consider employing a
semi-parametric estimation approach.
For the parametric approach, estimation is by MLE. A major practical issue is construction of
the likelihood function.
For binary y ∈ {0, 1}, the goal is to model the conditional probability Pr (yi = 1 | xi). The linear probability model specifies
Pr (yi = 1 | xi) = x′iβ.
As Pr (yi = 1 | xi) = E (yi | xi), this yields the regression yi = x′iβ + ei, which can be estimated by OLS. However, the linear probability model does not impose the restriction that 0 ≤ Pr (yi = 1 | xi) ≤ 1. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.
The standard alternative is to use a function of the form
Pr (yi = 1 | xi) = F (x′iβ)
where F(·) is a known CDF, typically assumed to be symmetric about zero, so that F(u) = 1 − F(−u). The two standard choices for F are
• Logistic: F(u) = (1 + e^{−u})^{−1}.
• Normal: F(u) = Φ(u).
If F is logistic, we call this the logit model, and if F is normal, we call this the probit model.
This model is identical to the latent variable model
yi∗ = x′iβ + ei
ei ∼ F (·)
yi = 1 if yi∗ > 0, and yi = 0 otherwise.
For then
Pr (yi = 1 | xi) = Pr (yi∗ > 0 | xi)
= Pr (x′iβ + ei > 0 | xi)
= Pr (ei > −x′iβ | xi)
= 1 − F (−x′iβ)
= F (x′iβ) .
Estimation is by maximum likelihood. To construct the likelihood, we need the conditional
distribution of an individual observation. Recall that if y is Bernoulli, such that Pr(y = 1) = p and
Pr(y = 0) = 1 − p, then we can write the density of y as
f(y) = p^y (1 − p)^{1−y} , y = 0, 1.
In the binary choice model, yi is conditionally Bernoulli with Pr (yi = 1 | xi) = pi = F(x′iβ). Thus the conditional density is
f (yi | xi) = pi^{yi} (1 − pi)^{1−yi}
= F(x′iβ)^{yi} (1 − F(x′iβ))^{1−yi} .
Hence the log-likelihood function is
log L(β) = Σ_{i=1}^{n} log f (yi | xi)
= Σ_{i=1}^{n} log [ F(x′iβ)^{yi} (1 − F(x′iβ))^{1−yi} ]
= Σ_{i=1}^{n} [ yi log F(x′iβ) + (1 − yi) log(1 − F(x′iβ)) ]
= Σ_{yi=1} log F(x′iβ) + Σ_{yi=0} log(1 − F(x′iβ)) .
The MLE β̂ is the value of β which maximizes log L(β). Standard errors and test statistics are
computed by asymptotic approximations. Details of such calculations are left to more advanced
courses.
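For illustration only, here is a minimal sketch of maximizing this log-likelihood with the logistic choice of F, using scipy's general-purpose optimizer (the data arrays y and X are hypothetical).

import numpy as np
from scipy.optimize import minimize

def logit_mle(y, X):
    # Maximize sum_i [ y_i log F(x_i'b) + (1-y_i) log(1-F(x_i'b)) ],
    # with the logistic CDF F(u) = 1/(1+exp(-u)).
    y, X = np.asarray(y, float), np.asarray(X, float)

    def neg_loglik(b):
        u = X @ b
        log_F = -np.logaddexp(0.0, -u)     # log F(u) = -log(1+exp(-u))
        log_1mF = -u + log_F               # log(1-F(u)) = -u - log(1+exp(-u))
        return -(y @ log_F + (1.0 - y) @ log_1mF)

    res = minimize(neg_loglik, np.zeros(X.shape[1]), method="BFGS")
    return res.x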
If the dependent variable is a count, yi ∈ {0, 1, 2, ...}, a standard model is the Poisson regression
Pr (yi = k | xi) = exp(−λi) λi^k / k! , k = 0, 1, 2, . . . ,
λi = exp(x′iβ).
The conditional density is the Poisson with parameter λi. The functional form for λi has been picked to ensure that λi > 0.
The log-likelihood function is
log L(β) = Σ_{i=1}^{n} log f (yi | xi) = Σ_{i=1}^{n} ( − exp(x′iβ) + yi x′iβ − log(yi!) ) .
The MLE β̂ maximizes log L(β). Since the Poisson distribution has equal mean and variance, E(yi | xi) = var(yi | xi) = λi, the model imposes the restriction that the conditional mean and variance of yi are the same.
This may be considered restrictive. A generalization is the negative binomial.
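A comparable sketch for the Poisson regression log-likelihood is below (again with hypothetical y and X; the log(yi!) term is dropped since it does not involve β).

import numpy as np
from scipy.optimize import minimize

def poisson_mle(y, X):
    # Maximize sum_i [ -exp(x_i'b) + y_i x_i'b ] (log y_i! omitted).
    y, X = np.asarray(y, float), np.asarray(X, float)

    def neg_loglik(b):
        xb = X @ b
        return np.sum(np.exp(xb) - y * xb)

    res = minimize(neg_loglik, np.zeros(X.shape[1]), method="BFGS")
    return res.x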
The censored regression model specifies a latent variable
yi∗ = x′iβ + ei , ei ∼ iid N(0, σ²),
with the observed variable yi generated by the censoring equation (18.1), that is, yi = yi∗ if yi∗ ≥ 0 and yi = 0 otherwise. This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.
The naive approach to estimate β is to regress yi on xi. This does not work because regression estimates E(yi | xi), not E(yi∗ | xi) = x′iβ, and the latter is of interest. Thus OLS will be biased for the parameter of interest β.
[Note: it is still possible to estimate E(yi | xi) by LS techniques. The Tobit framework postulates that this is not inherently interesting, that the parameter β is defined by an alternative statistical structure.]
Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is
Pr (yi = 0 | xi) = Pr (yi∗ < 0 | xi) = Pr (ei < −x′iβ | xi) = Φ(−x′iβ/σ).
A related model is the sample selection model, which specifies
yi = x′iβ + e1i
Ti = 1 (z′iγ + e0i > 0)
where 1 (·) is the indicator function. The dependent variable yi is observed if (and only if) Ti = 1.
Else it is unobserved.
For example, yi could be a wage, which can be observed only if a person is employed. The
equation for Ti is an equation specifying the probability that the person is employed.
The model is often completed by specifying that the errors are jointly normal
(e0i, e1i)′ ∼ N ( 0, ( 1 ρ ; ρ σ² ) ) .
The Heckit two-step estimator first estimates γ̂ from a Probit of Ti on zi, and then estimates β by OLS of yi on xi and the inverse Mills ratio λ(z′iγ̂) = φ(z′iγ̂)/Φ(z′iγ̂), using the observations with Ti = 1. Some caveats:
• The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.
The Heckit estimator is frequently used to deal with problems of sample selection. However,
the estimator is built on the assumption of normality, and the estimator can be quite sensitive
to this assumption. Some modern econometric research is exploring how to relax the normality
assumption.
The estimator can also work quite poorly if λ(z′iγ̂) does not have much in-sample variation. This can happen if the Probit equation does not “explain” much about the selection choice. Another potential problem is that if zi = xi, then λ(z′iγ̂) can be highly collinear with xi, so the second step OLS estimator will not be able to precisely estimate β. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in zi which is not in xi. If this is valid, it will ensure that λ(z′iγ̂) is not collinear with xi, and hence improve the second stage estimator’s precision.
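A stylized version of the two-step calculation is sketched below (Python with numpy/scipy; all variable names are hypothetical, the Probit step is estimated by maximum likelihood, and λ(u) = φ(u)/Φ(u) is the inverse Mills ratio). As noted above, the reported second-step OLS standard errors would need to be corrected.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def heckit_two_step(y, X, T, Z):
    # Step 1: Probit of the selection indicator T on Z.
    def neg_probit_loglik(g):
        u = Z @ g
        return -np.sum(T * norm.logcdf(u) + (1 - T) * norm.logcdf(-u))
    g_hat = minimize(neg_probit_loglik, np.zeros(Z.shape[1]), method="BFGS").x

    # Step 2: OLS of y on (X, inverse Mills ratio) using the selected sample.
    lam = norm.pdf(Z @ g_hat) / norm.cdf(Z @ g_hat)
    sel = (T == 1)                       # y is only meaningful where T == 1
    W = np.column_stack([X[sel], lam[sel]])
    coef = np.linalg.lstsq(W, y[sel], rcond=None)[0]
    return coef[:-1], coef[-1], g_hat    # beta_hat, coefficient on lambda, gamma_hat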
Chapter 19
Panel Data
A panel is a set of observations on individuals, collected over time. An observation is the pair
{yit , xit }, where the i subscript denotes the individual, and the t subscript denotes time. A panel
may be balanced:
{yit , xit } : t = 1, ..., T ; i = 1, ..., n,
or unbalanced :
{yit , xit} : for i = 1, . . . , n, t = t_i , . . . , t̄_i .
The individual-effects model specifies
yit = x′itβ + ui + eit ,
where ui is an individual-specific (unobserved) effect. Pooled OLS of yit on xit is consistent if
E (xit ui) = 0. (19.1)
If this condition fails, then OLS is inconsistent. (19.1) fails if the individual-specific unobserved effect ui is correlated with the observed explanatory variables xit. This is often believed to be plausible if ui is an omitted variable.
If (19.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears a poor estimation choice.
Condition (19.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.
The fixed effects approach treats the individual effects as parameters. Define the dummy indicators dij = 1{i = j} and the vectors
di = (di1 , . . . , din)′ ,
an n × 1 dummy vector with a “1” in the i′th place, and
u = (u1 , . . . , un)′ .
Then the model can be written as yit = x′itβ + d′iu + eit, treating u as a regression coefficient. Observe that:
• If xit contains an intercept, it will be collinear with di, so the intercept is typically omitted from xit.
• Any regressor in xit which is constant over time for all individuals (e.g., their gender) will be collinear with di, so will have to be omitted.
• There are n + k regression parameters, which is quite large as typically n is very large.
Computationally, you do not want to actually implement conventional OLS estimation, as the
parameter space is too large. OLS estimation of β proceeds by the FWL theorem. Stacking the
observations together:
y = Xβ + Du + e,
then by the FWL theorem,
β̂ = ( X′(I − PD)X )−1 ( X′(I − PD)y )
= ( X∗′X∗ )−1 ( X∗′y∗ ) ,
where
y∗ = y − D(D′D)−1D′y
X∗ = X − D(D′D)−1D′X.
Since the regression of yit on di is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean ȳi, and the residual is the demeaned value
y∗it = yit − ȳi .
The fixed effects estimator β̂ is OLS of y∗it on x∗it, the dependent variable and regressors in deviation-from-mean form.
An alternative derivation starts from the model yit = x′itβ + ui + eit; take individual-specific means by averaging over the observations for the i′th individual:
(1/Ti) Σ_{t=t_i}^{t̄_i} yit = (1/Ti) Σ_{t=t_i}^{t̄_i} x′itβ + ui + (1/Ti) Σ_{t=t_i}^{t̄_i} eit ,
or
ȳi = x̄′iβ + ui + ēi .
Subtracting, we find
y∗it = x∗′itβ + e∗it ,
which is free of the individual effect ui.
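The within (demeaning) transformation is easy to implement directly; the sketch below (numpy; hypothetical y, X and an integer array ids identifying individuals) demeans within each individual and runs OLS on the transformed data.

import numpy as np

def within_estimator(y, X, ids):
    # Demean y and X within each individual, then OLS on the
    # deviation-from-mean data: the fixed effects estimator of beta.
    y, X = np.asarray(y, float), np.asarray(X, float)
    y_star, X_star = y.copy(), X.copy()
    for i in np.unique(ids):
        m = ids == i
        y_star[m] -= y[m].mean()
        X_star[m] -= X[m].mean(axis=0)
    return np.linalg.lstsq(X_star, y_star, rcond=None)[0]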
Chapter 20
Nonparametric Density Estimation
Let X be a random variable with continuous distribution F(x) and density f(x) = (d/dx)F(x). The goal is to estimate f(x) from a random sample (X1, . . . , Xn). Kernel density estimation is based on a kernel function K(u), a bounded symmetric density. Three common choices are the Gaussian
K(u) = (1/√(2π)) exp(−u²/2) ,
the Epanechnikov
K(u) = (3/4)(1 − u²) for |u| ≤ 1, and 0 for |u| > 1,
and the Biweight or Quartic
K(u) = (15/16)(1 − u²)² for |u| ≤ 1, and 0 for |u| > 1.
In practice, the choice between these three rarely makes a meaningful difference in the estimates.
The kernel functions are used to smooth the data. The amount of smoothing is controlled by
the bandwidth h > 0. Let
Kh(u) = (1/h) K(u/h)
be the kernel K rescaled by the bandwidth h. The kernel density estimator of f(x) is
f̂(x) = (1/n) Σ_{i=1}^{n} Kh(Xi − x) .
This estimator is the average of a set of weights. If a large number of the observations Xi are near
x, then the weights are relatively large and fˆ(x) is larger. Conversely, if only a few Xi are near x,
then the weights are small and fˆ(x) is small. The bandwidth h controls the meaning of “near”.
Interestingly, f̂(x) is a valid density. That is, f̂(x) ≥ 0 for all x, and
∫_{−∞}^{∞} f̂(x) dx = ∫_{−∞}^{∞} (1/n) Σ_{i=1}^{n} Kh(Xi − x) dx = (1/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} Kh(Xi − x) dx = (1/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} K(u) du = 1.
The first moment of the estimated density is
∫_{−∞}^{∞} x f̂(x) dx = (1/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} x Kh(Xi − x) dx = (1/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} (Xi + uh) K(u) du = (1/n) Σ_{i=1}^{n} Xi = X̄ ,
the sample mean of the Xi, where the second-to-last equality used the change-of-variables u = (Xi − x)/h which has Jacobian h.
The second moment of the estimated density is
∫_{−∞}^{∞} x² f̂(x) dx = (1/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} x² Kh(Xi − x) dx
= (1/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} (Xi + uh)² K(u) du
= (1/n) Σ_{i=1}^{n} Xi² + (2h/n) Σ_{i=1}^{n} Xi ∫_{−∞}^{∞} u K(u) du + (h²/n) Σ_{i=1}^{n} ∫_{−∞}^{∞} u² K(u) du
= (1/n) Σ_{i=1}^{n} Xi² + h² σ²_K ,
where the middle term vanishes since the symmetric kernel K has mean zero, and
σ²_K = ∫_{−∞}^{∞} u² K(u) du
is the variance of the kernel. It follows that the variance of the density f̂(x) is
∫_{−∞}^{∞} x² f̂(x) dx − ( ∫_{−∞}^{∞} x f̂(x) dx )² = (1/n) Σ_{i=1}^{n} Xi² + h² σ²_K − ( (1/n) Σ_{i=1}^{n} Xi )²
= σ̂² + h² σ²_K .
Thus the variance of the estimated density is inflated by the factor h² σ²_K relative to the sample moment.
The expected value of the estimator at x is
E f̂(x) = E Kh(X − x) = ∫_{−∞}^{∞} (1/h) K( (z − x)/h ) f(z) dz = ∫_{−∞}^{∞} K(u) f(x + hu) du .
The second equality uses the change-of-variables u = (z − x)/h. The last expression shows that the expected value is an average of f(z) locally about x.
This integral (typically) is not analytically solvable, so we approximate it using a second order
Taylor expansion of f (x + hu) in the argument hu about hu = 0, which is valid as h → 0. Thus
f(x + hu) ≃ f(x) + f′(x)hu + (1/2) f′′(x)h²u²
and therefore
E Kh(X − x) ≃ ∫_{−∞}^{∞} K(u) ( f(x) + f′(x)hu + (1/2) f′′(x)h²u² ) du
= f(x) ∫_{−∞}^{∞} K(u) du + f′(x)h ∫_{−∞}^{∞} K(u) u du + (1/2) f′′(x)h² ∫_{−∞}^{∞} K(u) u² du
= f(x) + (1/2) f′′(x)h² σ²_K .
The bias of f̂(x) is then
Bias(x) = E f̂(x) − f(x) = (1/n) Σ_{i=1}^{n} E Kh(Xi − x) − f(x) = (1/2) f′′(x)h² σ²_K .
We see that the bias of f̂(x) at x depends on the second derivative f′′(x). The sharper the derivative,
the greater the bias. Intuitively, the estimator fˆ(x) smooths data local to Xi = x, so is estimating
a smoothed version of f (x). The bias results from this smoothing, and is larger the greater the
curvature in f (x).
We now examine the variance of fˆ(x). Since it is an average of iid random variables, using
first-order Taylor approximations and the fact that n−1 is of smaller order than (nh)−1
var(x) = (1/n) var ( Kh(Xi − x) )
= (1/n) E Kh(Xi − x)² − (1/n) ( E Kh(Xi − x) )²
≃ (1/(nh²)) ∫_{−∞}^{∞} K( (z − x)/h )² f(z) dz − (1/n) f(x)²
= (1/(nh)) ∫_{−∞}^{∞} K(u)² f(x + hu) du
≃ ( f(x)/(nh) ) ∫_{−∞}^{∞} K(u)² du
= f(x) R(K) / (nh) ,
where R(K) = ∫_{−∞}^{∞} K(u)² du is called the roughness of K.
Together, the asymptotic mean-squared error (AMSE) for fixed x is the sum of the approximate
squared bias and approximate variance
AMSEh(x) = (1/4) f′′(x)² h⁴ σ⁴_K + f(x) R(K) / (nh) .
A global measure of precision is the asymptotic mean integrated squared error (AMISE)
AMISEh = ∫ AMSEh(x) dx = h⁴ σ⁴_K R(f′′) / 4 + R(K) / (nh) , (20.1)
where R(f′′) = ∫ (f′′(x))² dx is the roughness of f′′. Notice that the first term (the squared bias)
is increasing in h and the second term (the variance) is decreasing in nh. Thus for the AMISE to
decline with n, we need h → 0 but nh → ∞. That is, h must tend to zero, but at a slower rate
than n−1 .
Equation (20.1) is an asymptotic approximation to the MSE. We define the asymptotically
optimal bandwidth h0 as the value which minimizes this approximate MSE. That is,
h0 = argmin_h AMISEh .
The first-order condition is
(d/dh) AMISEh = h³ σ⁴_K R(f′′) − R(K)/(nh²) = 0 ,
yielding
h0 = ( R(K) / ( σ⁴_K R(f′′) ) )^{1/5} n^{−1/5} . (20.2)
This solution takes the form h0 = cn−1/5 where c is a function of K and f, but not of n. We
thus say that the optimal bandwidth is of order O(n−1/5 ). Note that this h declines to zero, but at
a very slow rate.
In practice, how should the bandwidth be selected? This is a difficult problem, and there is a
large and continuing literature on the subject. The asymptotically optimal choice given in (20.2)
depends on R(K), σ²_K, and R(f′′). The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.
Kernel          σ²_K = ∫ u²K(u)du     R(K) = ∫ K(u)²du
Gaussian        1                     1/(2√π)
Epanechnikov    1/5                   1/5
Biweight        1/7                   5/7
An obvious difficulty is that R(f′′) is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman’s Rule-of-Thumb. It uses formula (20.2) but replaces R(f′′) with σ̂^{−5} R(φ′′), where φ is the N(0, 1) distribution and σ̂² is an estimate of σ² = var(X). This choice for h gives an optimal rule when f(x) is normal, and gives a nearly optimal rule when f(x) is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb h can be quite inefficient. We can calculate that R(φ′′) = 3/(8√π). Together with the above table, we find the reference rules for the three kernel functions introduced earlier.
Gaussian Kernel: hrule = 1.06σ̂n−1/5
Epanechnikov Kernel: hrule = 2.34σ̂n−1/5
Biweight (Quartic) Kernel: hrule = 2.78σ̂n−1/5
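For illustration, a minimal Gaussian-kernel estimator with the 1.06 σ̂ n^{−1/5} rule-of-thumb bandwidth might look as follows (numpy; the data array and the evaluation points x are hypothetical).

import numpy as np

def kde_gaussian(data, x):
    data = np.asarray(data, float)
    n = len(data)
    h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)       # Silverman rule of thumb
    # f_hat(x) = (1/n) sum_i K_h(X_i - x), with K the standard normal density
    u = (data[:, None] - np.asarray(x, float)[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return K.mean(axis=0) / h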
Unless you delve more deeply into kernel estimation methods the rule-of-thumb bandwidth is
a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate
fˆ(x). There are other approaches, but implementation can be delicate. I now discuss some of these
choices. The plug-in approach is to estimate R(f 00 ) in a first step, and then plug this estimate into
the formula (20.2). This is more treacherous than may first appear, as the optimal h for estimation
of the roughness R(f 00 ) is quite different than the optimal h for estimation of f (x). However, there
are modern versions of this estimator that work well, in particular the iterative method of Sheather
and Jones (1991). Another popular choice for selection of h is cross-validation. This works by
constructing an estimate of the MISE using leave-one-out estimators. There are some desirable
properties of cross-validation bandwidths, but they are also known to converge very slowly to the
optimal values. They are also quite ill-behaved when the data has some discretization (as is common
in economics), in which case the cross-validation rule can sometimes select very small bandwidths
leading to dramatically undersmoothed estimates. Fortunately there are remedies; one is known as smoothed cross-validation, which is a close cousin of the bootstrap.
Appendix A
Matrix Algebra
A.1 Notation
A scalar a is a single number.
A vector a is a k × 1 list of numbers, typically arranged in a column. We write this as
a = ( a1 , a2 , . . . , ak )′ .
Equivalently, a vector a is an element of Euclidean k space, written a ∈ Rk.
A matrix A is a k × r rectangular array of numbers with typical element aij.
By convention aij refers to the element in the i′th row and j′th column of A. If r = 1 then A is a column vector. If k = 1 then A is a row vector. If r = k = 1, then A is a scalar.
A standard convention (which we will follow in this text whenever possible) is to denote scalars
by lower-case italics (a), vectors by lower-case bold italics (a), and matrices by upper-case bold
italics (A). Sometimes a matrix A is denoted by the symbol (aij ).
A matrix can be written as a set of column vectors or as a set of row vectors. That is,
A = ( a1 a2 · · · ar ) = ( α1 ; α2 ; . . . ; αk ) ,
where
ai = ( a1i , a2i , . . . , aki )′
are column vectors and
αj = ( aj1 aj2 · · · ajr )
are row vectors.
A + B = (aij + bij ) .
A+B =B+A
A + (B + C) = (A + B) + C.
Ac = cA = (aij c) .
If A is k × r and B is r × s, the matrix product AB is the k × s matrix whose (i, j)′th element is a′ibj, where a′i are the rows of A and bj the columns of B:
AB = [ a′1b1 a′1b2 · · · a′1bs ; a′2b1 a′2b2 · · · a′2bs ; . . . ; a′kb1 a′kb2 · · · a′kbs ] .
A (BC) = (AB) C
A (B + C) = AB + AC
An alternative way to write the matrix product is to use matrix partitions. For example,
AB = [ A11 A12 ; A21 A22 ] [ B11 B12 ; B21 B22 ]
= [ A11B11 + A12B21   A11B12 + A12B22 ; A21B11 + A22B21   A21B12 + A22B22 ] .
As another example,
AB = ( A1 A2 · · · Ar ) ( B1 ; B2 ; . . . ; Br )
= A1B1 + A2B2 + · · · + ArBr
= Σ_{j=1}^{r} AjBj .
A.4 Trace
The trace of a k × k square matrix A is the sum of its diagonal elements
tr (A) = Σ_{i=1}^{k} aii .
Some straightforward properties for square matrices A and B and real c are
tr (cA) = c tr (A)
¡ ¢
tr A0 = tr (A)
tr (A + B) = tr (A) + tr (B)
tr (I k ) = k.
Also, for k × r A and r × k B we have tr(AB) = tr(BA). Indeed,
tr (AB) = tr [ a′1b1 a′1b2 · · · a′1bk ; a′2b1 a′2b2 · · · a′2bk ; . . . ; a′kb1 a′kb2 · · · a′kbk ]
= Σ_{i=1}^{k} a′ibi
= Σ_{i=1}^{k} b′iai
= tr (BA) .
The rank of a k × r matrix A = ( a1 a2 · · · ar ), r ≤ k, is the number of linearly independent columns aj, and is written as rank(A). We say that A has full rank if rank(A) = r.
A square k × k matrix A is said to be nonsingular if it has full rank, i.e. rank(A) = k. This means that there is no k × 1 vector c ≠ 0 such that Ac = 0.
If a square k × k matrix A is nonsingular then there exists a unique k × k matrix A−1, called the inverse of A, which satisfies
AA−1 = A−1A = Ik .
For non-singular A and C, some useful properties include
AA−1 = A−1A = Ik
(A−1)′ = (A′)−1
(AC)−1 = C−1A−1
(A + C)−1 = A−1 (A−1 + C−1)−1 C−1
A−1 − (A + C)−1 = A−1 (A−1 + C−1)−1 A−1 .
Another useful result for non-singular A is known as the Woodbury matrix identity
(A + BCD)−1 = A−1 − A−1BC (C + CDA−1BC)−1 CDA−1 . (A.2)
In particular, for C = −1, B = b and D = b′ for vector b we find what is known as the Sherman–Morrison formula
(A − bb′)−1 = A−1 + (1 − b′A−1b)−1 A−1bb′A−1 . (A.3)
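These identities are easy to check numerically; the short sketch below (numpy, with randomly generated A and b, and b scaled so A − bb′ stays nonsingular) verifies the Sherman–Morrison formula (A.3).

import numpy as np

rng = np.random.default_rng(0)
k = 4
M = rng.standard_normal((k, k))
A = M @ M.T + k * np.eye(k)             # a symmetric positive-definite matrix
b = rng.standard_normal(k)
b = b / (2 * np.linalg.norm(b))         # keep A - bb' nonsingular

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A - np.outer(b, b))
rhs = Ainv + np.outer(Ainv @ b, Ainv @ b) / (1 - b @ Ainv @ b)
print(np.allclose(lhs, rhs))            # True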
For a partitioned non-singular matrix A = [ A11 A12 ; A21 A22 ], write the conformable blocks of its inverse as A^{11}, A^{12}, A^{21}, A^{22}, and define A11·2 = A11 − A12A22−1A21 and A22·1 = A22 − A21A11−1A12. Then
A^{11} = A11−1 + A11−1A12A22·1−1A21A11−1
A^{22} = A22−1 + A22−1A21A11·2−1A12A22−1
A^{12} = −A11−1A12A22·1−1
A^{21} = −A22−1A21A11·2−1 .
Even if a matrix A does not possess an inverse, we can still define the Moore-Penrose gen-
eralized inverse A− as the matrix which satisfies
AA− A = A
A− AA− = A−
AA− is symmetric
A− A is symmetric
For any matrix A, the Moore-Penrose generalized inverse A− exists and is unique.
For example, if
A = [ A11 0 ; 0 0 ]
and A11−1 exists, then
A− = [ A11−1 0 ; 0 0 ] .
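Numerically, a Moore-Penrose inverse is available through numpy's pinv; the short sketch below checks the four defining properties on a block example of this form.

import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
Am = np.linalg.pinv(A)                       # Moore-Penrose generalized inverse
print(np.allclose(A @ Am @ A, A))            # A A^- A = A
print(np.allclose(Am @ A @ Am, Am))          # A^- A A^- = A^-
print(np.allclose((A @ Am).T, A @ Am))       # A A^- symmetric
print(np.allclose((Am @ A).T, Am @ A))       # A^- A symmetric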
A.6 Determinant
The determinant is a measure of the volume of a square matrix.
While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let A = (aij) be a general k × k matrix. Let π = (j1, ..., jk) denote a permutation of (1, ..., k). There are k! such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order (1, ..., k)), and let επ = +1 if this count is even and επ = −1 if the count is odd. Then the determinant of A is defined as
det A = Σπ επ a1j1 a2j2 · · · akjk .
For example, if A is 2 × 2, then the two permutations of (1, 2) are (1, 2) and (2, 1), for which ε(1,2) = 1 and ε(2,1) = −1. Thus
det A = a11a22 − a12a21 .
A.7 Eigenvalues
The characteristic equation of a k × k square matrix A is
det (A − λI k ) = 0.
The left side is a polynomial of degree k in λ so it has exactly k roots, which are not necessarily
distinct and may be real or complex. They are called the latent roots or characteristic roots or
eigenvalues of A. If λi is an eigenvalue of A, then A − λi I k is singular so there exists a non-zero
vector hi such that
(A − λi I k ) hi = 0.
The vector hi is called a latent vector or characteristic vector or eigenvector of A corre-
sponding to λi .
We now state some useful properties. Let λi and hi , i = 1, ..., k denote the k eigenvalues and
eigenvectors of a square matrix A. Let Λ be a diagonal matrix with the characteristic roots in the
diagonal, and let H = [h1 · · · hk ].
• det(A) = Π_{i=1}^{k} λi
• tr(A) = Σ_{i=1}^{k} λi
• If A has distinct characteristic roots, there exists a nonsingular matrix P such that A =
P −1 ΛP and P AP −1 = Λ.
• If A is symmetric, then A = HΛH 0 and H 0 AH = Λ, and the characteristic roots are all
real. A = HΛH 0 is called the spectral decomposition of a matrix.
• When the eigenvalues of a k × k matrix A are real they are written in descending order λ1 ≥ λ2 ≥ · · · ≥ λk. We also write λmin(A) = λk = min_ℓ{λℓ} and λmax(A) = λ1 = max_ℓ{λℓ}.
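The spectral decomposition and the trace/determinant identities above are easy to check numerically; the sketch below (numpy, a randomly generated symmetric matrix) does so. Note that numpy's eigh returns the eigenvalues in ascending rather than descending order.

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B + B.T                                      # a symmetric matrix
lam, H = np.linalg.eigh(A)                       # eigenvalues and orthonormal H
print(np.allclose(H @ np.diag(lam) @ H.T, A))    # A = H Lambda H'
print(np.allclose(H.T @ H, np.eye(5)))           # H'H = I
print(np.isclose(lam.sum(), np.trace(A)))        # tr(A) = sum of eigenvalues
print(np.isclose(lam.prod(), np.linalg.det(A)))  # det(A) = product of eigenvalues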
Let x = (x1, . . . , xk)′ be k × 1 and g(x) : Rk → R. The vector derivatives are
(∂/∂x) g(x) = ( (∂/∂x1) g(x) , . . . , (∂/∂xk) g(x) )′
and
(∂/∂x′) g(x) = ( (∂/∂x1) g(x) · · · (∂/∂xk) g(x) ) .
Some properties are now summarized.
• (∂/∂x)(a′x) = (∂/∂x)(x′a) = a
• (∂/∂x′)(Ax) = A
• (∂/∂x)(x′Ax) = (A + A′)x
• (∂²/∂x∂x′)(x′Ax) = A + A′
If an m × m matrix A is symmetric with eigenvalues λℓ, ℓ = 1, . . . , m, then its Euclidean norm satisfies
‖A‖ = ( Σ_{ℓ=1}^{m} λℓ² )^{1/2} .
To see this, by the spectral decomposition A = HΛH′ with H′H = I and Λ = diag{λ1, ..., λm}, so
‖A‖ = ( tr( HΛH′HΛH′ ) )^{1/2} = ( tr(ΛΛ) )^{1/2} = ( Σ_{ℓ=1}^{m} λℓ² )^{1/2} , (A.8)
and in particular
‖aa′‖ = ‖a‖² . (A.9)
There are other matrix norms. Another norm of frequent use is the spectral norm
‖A‖S = ( λmax(A′A) )^{1/2} .
Trace Inequality. For any m × m matrices A and B such that A is symmetric and B is positive semi-definite (B ≥ 0),
tr(AB) ≤ λmax(A) tr(B) .
Proof of Schwarz Inequality: First, suppose that ‖b‖ = 0. Then b = 0 and both |a′b| = 0 and ‖a‖‖b‖ = 0, so the inequality is true. Second, suppose that ‖b‖ > 0 and define c = a − b(b′b)−1b′a. Since c is a vector, c′c ≥ 0. Thus
0 ≤ c′c = a′a − (a′b)² / (b′b) .
Rearranging, (a′b)² ≤ (a′a)(b′b); taking the square root of each side yields the result. ¥
Proof of Schwarz Matrix Inequality: Partition A = [a1 , ..., an ] and B = [b1 , ..., bn ]. Then
by partitioned matrix multiplication, the definition of the matrix Euclidean norm and the Schwarz
inequality
‖A′B‖ = ‖ ( a′ibj )_{i,j} ‖
≤ ‖ ( ‖ai‖‖bj‖ )_{i,j} ‖
= ( Σ_{i=1}^{n} Σ_{j=1}^{n} ‖ai‖² ‖bj‖² )^{1/2}
= ( Σ_{i=1}^{n} ‖ai‖² )^{1/2} ( Σ_{i=1}^{n} ‖bi‖² )^{1/2}
= ( Σ_{i=1}^{n} Σ_{j=1}^{m} a²ji )^{1/2} ( Σ_{i=1}^{n} Σ_{j=1}^{m} b²ji )^{1/2}
= ‖A‖ ‖B‖ . ¥
Proof of Triangle Inequality: Let a = vec(A) and b = vec(B). Then by the definition of the matrix norm and the Schwarz Inequality
‖A + B‖² = ‖a + b‖²
= a′a + 2a′b + b′b
≤ a′a + 2|a′b| + b′b
≤ ‖a‖² + 2‖a‖‖b‖ + ‖b‖²
= (‖a‖ + ‖b‖)²
= (‖A‖ + ‖B‖)² . ¥
Proof of Trace Inequality. By the spectral decomposition for symmetric matrices, A = HΛH′, where Λ has the eigenvalues λj of A on the diagonal and H is orthonormal. Define C = H′BH, which has non-negative diagonal elements Cjj since B is positive semi-definite. Then
tr(AB) = tr(ΛC) = Σ_{j=1}^{m} λj Cjj ≤ max_j λj Σ_{j=1}^{m} Cjj = λmax(A) tr(C) ,
where the inequality uses the fact that Cjj ≥ 0. But note that
tr(C) = tr(H′BH) = tr(HH′B) = tr(B) ,
since HH′ = Im, which yields tr(AB) ≤ λmax(A) tr(B) as stated. ¥
Proof of Quadratic Inequality: In the Trace Inequality set B = bb′ and note tr(AB) = b′Ab and tr(B) = b′b. ¥
The final equality holds since A ≥ 0 implies that λmax (AA) = λmax (A)2 . ¥
Proof of Jensen’s Inequality (A.17). By the definition of convexity, for any λ ∈ [0, 1],
g (λx1 + (1 − λ)x2) ≤ λ g(x1) + (1 − λ) g(x2) . (A.21)
This implies
g ( Σ_{j=1}^{m} aj xj ) = g ( a1 x1 + (1 − a1) Σ_{j=2}^{m} (aj/(1 − a1)) xj )
≤ a1 g(x1) + (1 − a1) g ( Σ_{j=2}^{m} bj xj ) ,
where bj = aj/(1 − a1) and Σ_{j=2}^{m} bj = 1. By another application of (A.21) this is bounded by
a1 g(x1) + (1 − a1) ( b2 g(x2) + (1 − b2) g( Σ_{j=3}^{m} cj xj ) ) = a1 g(x1) + a2 g(x2) + (1 − a1)(1 − b2) g( Σ_{j=3}^{m} cj xj ) ,
where cj = bj/(1 − b2). Continuing in this way establishes (A.17). ¥
Proof of Loève’s cr Inequality. For r ≥ 1 this is simply a rewriting of the finite form Jensen’s inequality (A.18) with g(u) = u^r. For r < 1, define bj = |aj| / ( Σ_{j=1}^{m} |aj| ). The facts that 0 ≤ bj ≤ 1 and r < 1 imply bj ≤ bj^r and thus
1 = Σ_{j=1}^{m} bj ≤ Σ_{j=1}^{m} bj^r ,
which implies
( Σ_{j=1}^{m} |aj| )^r ≤ Σ_{j=1}^{m} |aj|^r .
¥
Appendix B
Probability
B.1 Foundations
The set S of all possible outcomes of an experiment is called the sample space for the exper-
iment. Take the simple example of tossing a coin. There are two outcomes, heads and tails, so
we can write S = {H, T }. If two coins are tossed in sequence, we can write the four outcomes as
S = {HH, HT, T H, T T }.
An event A is any collection of possible outcomes of an experiment. An event is a subset of S,
including S itself and the null set ∅. Continuing the two coin example, one event is A = {HH, HT },
the event that the first coin is heads. We say that A and B are disjoint or mutually exclusive
if A ∩ B = ∅. For example, the sets {HH, HT } and {T H} are disjoint. Furthermore, if the sets
A1 , A2 , ... are pairwise disjoint and ∪∞
i=1 Ai = S, then the collection A1 , A2 , ... is called a partition
of S.
The following are elementary set operations:
Union: A ∪ B = {x : x ∈ A or x ∈ B}.
Intersection: A ∩ B = {x : x ∈ A and x ∈ B}.
Complement: Ac = {x : x ∈ / A}.
The following are useful properties of set operations.
Commutatitivity: A ∪ B = B ∪ A; A ∩ B = B ∩ A.
Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C.
Distributive Laws: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) ; A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) .
DeMorgan’s Laws: (A ∪ B)^c = A^c ∩ B^c ; (A ∩ B)^c = A^c ∪ B^c .
A probability function assigns probabilities (numbers between 0 and 1) to events A in S.
This is straightforward when S is countable; when S is uncountable we must be somewhat more
careful. A set B is called a sigma algebra (or Borel field) if ∅ ∈ B , A ∈ B implies Ac ∈ B, and
A1 , A2 , ... ∈ B implies ∪∞i=1 Ai ∈ B. A simple example is {∅, S} which is known as the trivial sigma
algebra. For any sample space S, let B be the smallest sigma algebra which contains all of the open
sets in S. When S is countable, B is simply the collection of all subsets of S, including ∅ and S.
When S is the real line, then B is the collection of all open and closed intervals. We call B the
sigma algebra associated with S. We only define probabilities for events contained in B.
We now can give the axiomatic definition of probability. Given S and B, a probability function Pr satisfies Pr(S) = 1, Pr(A) ≥ 0 for all A ∈ B, and if A1, A2, ... ∈ B are pairwise disjoint, then Pr( ∪_{i=1}^{∞} Ai ) = Σ_{i=1}^{∞} Pr(Ai).
Some important properties of the probability function include the following
• Pr (∅) = 0
• Pr(A) ≤ 1
• Pr (Ac ) = 1 − Pr(A)
• Pr (B ∩ Ac ) = Pr(B) − Pr(A ∩ B)
• Pr (A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B)
• If A ⊂ B then Pr(A) ≤ Pr(B)
• Bonferroni’s Inequality: Pr(A ∩ B) ≥ Pr(A) + Pr(B) − 1
• Boole’s Inequality: Pr (A ∪ B) ≤ Pr(A) + Pr(B)
For some elementary probability models, it is useful to have simple rules to count the number
of objects in a set. These counting rules are facilitated by using the binomial coefficients which are
defined for nonnegative integers n and r, n ≥ r, as
(n choose r) = n! / ( r! (n − r)! ) .
When counting the number of objects in a set, there are two important distinctions. Counting
may be with replacement or without replacement. Counting may be ordered or unordered.
For example, consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is
without replacement if you are not allowed to select the same number twice, and is with replacement
if this is allowed. Counting is ordered or not depending on whether the sequential order of the
numbers is relevant to winning the lottery. Depending on these two distinctions, we have four
expressions for the number of objects (possible arrangements) of size r from n objects.
              Without Replacement     With Replacement
Ordered       n!/(n − r)!             n^r
Unordered     (n choose r)            (n+r−1 choose r)
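For the lottery example, the four counts can be evaluated directly; a small Python check (standard math module) is

import math

n, r = 49, 6
print(math.factorial(n) // math.factorial(n - r))   # ordered, without replacement
print(n ** r)                                       # ordered, with replacement
print(math.comb(n, r))                              # unordered, without replacement
print(math.comb(n + r - 1, r))                      # unordered, with replacement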
For events A and B with Pr(B) > 0, the conditional probability of A given B is
Pr (A | B) = Pr(A ∩ B) / Pr(B) .
Rearranging, Pr(A ∩ B) = Pr(A | B) Pr(B), which is often quite useful. We can say that the occurrence of B has no information about the likelihood of event A when Pr(A | B) = Pr(A), in which case we find
Pr(A ∩ B) = Pr(A) Pr(B) . (B.1)
We say that the events A and B are statistically independent when (B.1) holds. Furthermore,
we say that the collection of events A1 , ..., Ak are mutually independent when for any subset
{Ai : i ∈ I},
Pr ( ∩_{i∈I} Ai ) = Π_{i∈I} Pr(Ai) .
Theorem 3 (Bayes’ Rule). For any set B and any partition A1 , A2 , ... of the sample space, then
for each i = 1, 2, ...
Pr (Ai | B) = Pr (B | Ai) Pr(Ai) / ( Σ_{j=1}^{∞} Pr (B | Aj) Pr(Aj) ) .
A random variable X is a function from the sample space into the real line. Its distribution is summarized by the cumulative distribution function (CDF)
F(x) = Pr (X ≤ x) . (B.2)
Sometimes we write this as FX(x) to denote that it is the CDF of X. A function F(x) is a CDF if and only if the following three properties hold: (1) lim_{x→−∞} F(x) = 0 and lim_{x→∞} F(x) = 1; (2) F(x) is nondecreasing in x; (3) F(x) is right-continuous.
We say that the random variable X is discrete if F(x) is a step function. In the latter case,
the range of X consists of a countable set of real numbers τ1 , ..., τr . The probability function for X
takes the form
Pr (X = τj ) = πj , j = 1, ..., r (B.3)
where 0 ≤ πj ≤ 1 and Σ_{j=1}^{r} πj = 1.
We say that the random variable X is continuous if F (x) is continuous in x. In this case Pr(X =
τ ) = 0 for all τ ∈ R so the representation (B.3) is unavailable. Instead, we represent the relative
probabilities by the probability density function (PDF)
f(x) = (d/dx) F(x)
so that
F(x) = ∫_{−∞}^{x} f(u) du
and
Pr (a ≤ X ≤ b) = ∫_{a}^{b} f(u) du .
These expressions only make sense if F (x) is differentiable. While there are examples of continuous
random variables which do not possess a PDF, these cases are unusual and are typically ignored.
A function f(x) is a PDF if and only if f(x) ≥ 0 for all x ∈ R and ∫_{−∞}^{∞} f(x) dx = 1.
B.3 Expectation
For any measurable real function g, we define the mean or expectation Eg(X) as follows. If X is discrete,
Eg(X) = Σ_{j=1}^{r} g(τj) πj ,
and if X is continuous
Eg(X) = ∫_{−∞}^{∞} g(x) f(x) dx .
The latter is well defined and finite if
∫_{−∞}^{∞} |g(x)| f(x) dx < ∞ . (B.4)
Bernoulli
Pr (X = x) = p^x (1 − p)^{1−x} ,  x = 0, 1;  0 ≤ p ≤ 1
EX = p
var(X) = p(1 − p)
Binomial
Pr (X = x) = (n choose x) p^x (1 − p)^{n−x} ,  x = 0, 1, ..., n;  0 ≤ p ≤ 1
EX = np
var(X) = np(1 − p)
Geometric
Negative Binomial
Pr (X = x) = ( Γ(r + x) / (x! Γ(r)) ) p^r (1 − p)^x ,  x = 0, 1, 2, ...;  0 ≤ p ≤ 1
EX = r(1 − p)/p
var(X) = r(1 − p)/p²
Poisson
Pr (X = x) = exp(−λ) λ^x / x! ,  x = 0, 1, 2, ...,  λ > 0
EX = λ
var(X) = λ
Beta
f(x) = ( Γ(α + β) / (Γ(α)Γ(β)) ) x^{α−1} (1 − x)^{β−1} ,  0 ≤ x ≤ 1;  α > 0, β > 0
μ = α/(α + β)
var(X) = αβ / ( (α + β + 1)(α + β)² )
Cauchy
f(x) = 1 / ( π(1 + x²) ) ,  −∞ < x < ∞
EX = ∞
var(X) = ∞
Exponential
f(x) = (1/θ) exp(−x/θ) ,  0 ≤ x < ∞;  θ > 0
EX = θ
var(X) = θ²
Logistic
f(x) = exp(−x) / (1 + exp(−x))² ,  −∞ < x < ∞
EX = 0
var(X) = π²/3
Lognormal
f(x) = ( 1 / (√(2π) σx) ) exp( −(log x − μ)² / (2σ²) ) ,  0 ≤ x < ∞;  σ > 0
EX = exp(μ + σ²/2)
var(X) = exp(2μ + 2σ²) − exp(2μ + σ²)
Pareto
f(x) = βα^β / x^{β+1} ,  α ≤ x < ∞;  α > 0, β > 0
EX = βα/(β − 1) ,  β > 1
var(X) = βα² / ( (β − 1)²(β − 2) ) ,  β > 2
Uniform
f(x) = 1/(b − a) ,  a ≤ x ≤ b
EX = (a + b)/2
var(X) = (b − a)²/12
Weibull
f(x) = (γ/β) x^{γ−1} exp( −x^γ/β ) ,  0 ≤ x < ∞;  γ > 0, β > 0
EX = β^{1/γ} Γ(1 + 1/γ)
var(X) = β^{2/γ} ( Γ(1 + 2/γ) − Γ(1 + 1/γ)² )
Gamma
f(x) = ( 1 / (Γ(α)θ^α) ) x^{α−1} exp(−x/θ) ,  0 ≤ x < ∞;  α > 0, θ > 0
EX = αθ
var(X) = αθ²
Chi-Square
f(x) = ( 1 / (Γ(r/2) 2^{r/2}) ) x^{r/2−1} exp(−x/2) ,  0 ≤ x < ∞;  r > 0
EX = r
var(X) = 2r
Normal
f(x) = ( 1 / (√(2π)σ) ) exp( −(x − μ)² / (2σ²) ) ,  −∞ < x < ∞;  −∞ < μ < ∞, σ² > 0
EX = μ
var(X) = σ²
Student t
f(x) = ( Γ((r + 1)/2) / (√(rπ) Γ(r/2)) ) ( 1 + x²/r )^{−(r+1)/2} ,  −∞ < x < ∞;  r > 0
EX = 0 if r > 1
var(X) = r/(r − 2) if r > 2
For a pair of random variables (X, Y), the joint distribution function is F(x, y) = Pr(X ≤ x, Y ≤ y), and if F is continuous the joint probability density is
f(x, y) = (∂²/∂x∂y) F(x, y) .
The marginal distribution of X is
FX(x) = Pr(X ≤ x) = lim_{y→∞} F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{∞} f(x, y) dy dx ,
so the marginal density of X is fX(x) = (d/dx) FX(x) = ∫_{−∞}^{∞} f(x, y) dy.
The random variables X and Y are defined to be independent if f (x, y) = fX (x)fY (y).
Furthermore, X and Y are independent if and only if there exist functions g(x) and h(y) such that
f (x, y) = g(x)h(y).
If X and Y are independent, then
E ( g(X)h(Y) ) = ∫∫ g(x)h(y) f(y, x) dy dx
= ∫∫ g(x)h(y) fY(y) fX(x) dy dx
= ∫ g(x) fX(x) dx ∫ h(y) fY(y) dy
= Eg(X) Eh(Y) . (B.5)
In particular, setting g(X) = X and h(Y) = Y, independence implies E(XY) = EX EY.
If X and Y are independent, the MGF of Z = X + Y is
MZ(λ) = E exp( λ(X + Y) )
= E ( exp(λX) exp(λY) )
= E exp(λX) E exp(λY)
= MX(λ) MY(λ). (B.6)
An implication of the Cauchy-Schwarz Inequality is that the correlation between X and Y satisfies
|ρXY| ≤ 1. (B.7)
For a k × 1 random vector X, the joint distribution and density functions are
F(x) = Pr(X ≤ x)
f(x) = (∂^k / ∂x1 · · · ∂xk) F(x) ,
where the symbol dx denotes dx1 · · · dxk. In particular, we have the k × 1 multivariate mean
μ = EX .
For a pair (Y, X) with joint density f(x, y), the conditional density of Y given X = x is defined as
fY|X (y | x) = f(x, y) / fX(x)
if fX (x) > 0. One way to derive this expression from the definition of conditional probability is
fY|X (y | x) = (∂/∂y) lim_{ε→0} Pr (Y ≤ y | x ≤ X ≤ x + ε)
= (∂/∂y) lim_{ε→0} Pr( {Y ≤ y} ∩ {x ≤ X ≤ x + ε} ) / Pr(x ≤ X ≤ x + ε)
= (∂/∂y) lim_{ε→0} ( F(x + ε, y) − F(x, y) ) / ( FX(x + ε) − FX(x) )
= (∂/∂y) lim_{ε→0} (∂/∂x) F(x + ε, y) / fX(x + ε)
= ( (∂²/∂x∂y) F(x, y) ) / fX(x)
= f(x, y) / fX(x) .
The conditional mean or conditional expectation is the function
m(x) = E (Y | X = x) = ∫_{−∞}^{∞} y fY|X (y | x) dy .
The conditional mean m(x) is a function, meaning that when X equals x, then the expected value of Y is m(x).
Similarly, we define the conditional variance of Y given X = x as
σ²(x) = var (Y | X = x) = E ( (Y − m(x))² | X = x ) = E (Y² | X = x) − m(x)² .
Evaluated at x = X, the conditional mean m(X) and conditional variance σ 2 (X) are random
variables, functions of X. We write this as E(Y | X) = m(X) and var (Y | X) = σ 2 (X). For
example, if E (Y | X = x) = α + β 0 x, then E (Y | X) = α + β 0 X, a transformation of X.
The following are important facts about conditional expectations.
Simple Law of Iterated Expectations:
E (E (Y | X)) = E(Y) (B.8)
Proof:
E (E (Y | X)) = E (m(X))
= ∫_{−∞}^{∞} m(x) fX(x) dx
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} y fY|X (y | x) fX(x) dy dx
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(y, x) dy dx
= E(Y).
Law of Iterated Expectations:
E (E (Y | X, Z) | X) = E (Y | X) (B.9)
B.8 Transformations
Suppose that X ∈ Rk with continuous distribution function FX (x) and density fX (x). Let
Y = g(X) where g(x) : Rk → Rk is one-to-one, differentiable, and invertible. Let h(y) denote the
inverse of g(x). The Jacobian is
J(y) = det ( (∂/∂y′) h(y) ) .
Consider the univariate case k = 1. If g(x) is an increasing function, then g(X) ≤ Y if and only
if X ≤ h(Y ), so the distribution function of Y is
FY (y) = Pr (g(X) ≤ y)
= Pr (X ≤ h(Y ))
= FX (h(Y )) .
Taking the derivative, the density of Y is
fY(y) = (d/dy) FY(y) = fX (h(y)) (d/dy) h(y) .
If g(x) is a decreasing function, then g(X) ≤ Y if and only if X ≥ h(Y ), so
FY (y) = Pr (g(X) ≤ y)
= 1 − Pr (X ≥ h(Y ))
= 1 − FX (h(Y ))
and the density of Y is
fY(y) = −fX (h(y)) (d/dy) h(y) .
We can write these two cases jointly as
fY (y) = fX (h(Y )) |J(y)| . (B.11)
This is known as the change-of-variables formula. This same formula (B.11) holds for k > 1, but
its justification requires deeper results from analysis.
As one example, take the case X ∼ U[0, 1] and Y = − log(X). Here, g(x) = − log(x) and h(y) = exp(−y), so the Jacobian is J(y) = − exp(−y). As the range of X is [0, 1], that for Y is [0, ∞).
Since fX (x) = 1 for 0 ≤ x ≤ 1 (B.11) shows that
fY (y) = exp(−y), 0 ≤ y ≤ ∞,
an exponential density.
A standard normal random variable X has density
f(x) = (1/√(2π)) exp( −x²/2 ) ,  −∞ < x < ∞.
It is conventional to write X ∼ N(0, 1), and to denote the standard normal density function by
φ(x) and its distribution function by Φ(x). The latter has no closed-form solution. The normal
density has all moments finite. Since it is symmetric about zero all odd moments are zero. By
iterated integration by parts, we can also show that EX 2 = 1 and EX 4 = 3. In fact, for any positive
integer m, EX 2m = (2m − 1)!! = (2m − 1) · (2m − 3) · · · 1. Thus EX 4 = 3, EX 6 = 15, EX 8 = 105,
and EX 10 = 945.
If Z is standard normal and X = μ + σZ, then using the change-of-variables formula, X has density
f(x) = ( 1/(√(2π)σ) ) exp( −(x − μ)²/(2σ²) ) ,  −∞ < x < ∞,
which is the univariate normal density. The mean and variance of the distribution are μ and σ², and it is conventional to write X ∼ N(μ, σ²).
For x ∈ Rk, the multivariate normal density is
f(x) = ( 1 / ( (2π)^{k/2} det(Σ)^{1/2} ) ) exp( −(x − μ)′Σ−1(x − μ)/2 ) ,  x ∈ Rk.
The mean and covariance matrix of the distribution are μ and Σ, and it is conventional to write X ∼ N(μ, Σ).
The MGF and CF of the multivariate normal are exp( λ′μ + λ′Σλ/2 ) and exp( iλ′μ − λ′Σλ/2 ), respectively.
If X ∈ Rk is multivariate normal and the elements of X are mutually uncorrelated, then Σ = diag{σ²j} is a diagonal matrix. In this case the density function can be written as
f(x) = ( 1 / ( (2π)^{k/2} σ1 · · · σk ) ) exp( −( (x1 − μ1)²/σ²1 + · · · + (xk − μk)²/σ²k ) / 2 )
= Π_{j=1}^{k} ( 1 / ( (2π)^{1/2} σj ) ) exp( −(xj − μj)²/(2σ²j) ) ,
which is the product of marginal univariate normal densities. This shows that if X is multivariate normal with uncorrelated elements, then they are mutually independent.
where μY = a + Bμ and ΣY = BΣB′, where we used the fact that det(BΣB′)^{1/2} = det(Σ)^{1/2} det(B). ¥
Proof of Theorem B.9.2. First, suppose a random variable Q is distributed chi-square with r
degrees of freedom. It has the MGF
E exp(tQ) = ∫_0^∞ ( 1 / ( Γ(r/2) 2^{r/2} ) ) x^{r/2−1} exp(tx) exp(−x/2) dx = (1 − 2t)^{−r/2} ,
where the second equality uses the fact that ∫_0^∞ y^{a−1} exp(−by) dy = b^{−a} Γ(a), which can be found by applying change-of-variables to the gamma function. Our goal is to calculate the MGF of Q = X′X and show that it equals (1 − 2t)^{−r/2}, which will establish that Q ∼ χ²_r.
Note that we can write Q = X′X = Σ_{j=1}^{r} Z²_j where the Zj are independent N(0, 1). The
distribution of each of the Z²_j is
Pr (Z²_j ≤ y) = 2 Pr (0 ≤ Zj ≤ √y)
= 2 ∫_0^{√y} (1/√(2π)) exp( −x²/2 ) dx
= ∫_0^y ( 1 / ( Γ(1/2) 2^{1/2} ) ) s^{−1/2} exp( −s/2 ) ds ,
using the change-of-variables s = x² and the fact Γ(1/2) = √π. Thus the density of Z²_j is
f1(x) = ( 1 / ( Γ(1/2) 2^{1/2} ) ) x^{−1/2} exp( −x/2 ) ,
which is the χ²_1 density, and by our above calculation it has the MGF E exp(tZ²_j) = (1 − 2t)^{−1/2}.
Since the Z²_j are mutually independent, (B.6) implies that the MGF of Q = Σ_{j=1}^{r} Z²_j is
[ (1 − 2t)^{−1/2} ]^r = (1 − 2t)^{−r/2}, which is the MGF of the χ²_r density as desired. ¥
Proof of Theorem B.9.3. The fact that A > 0 means that we can write A = CC′ where C is non-singular. Then A−1 = C−1′C−1 and
C−1Z ∼ N ( 0, C−1AC−1′ ) = N ( 0, C−1CC′C−1′ ) = N (0, Iq) .
Thus
Z′A−1Z = Z′C−1′C−1Z = (C−1Z)′(C−1Z) ∼ χ²_q . ¥
Proof of Theorem B.9.4. Using the simple law of iterated expectations, Tr has distribution function
F(x) = Pr ( Z/√(Q/r) ≤ x )
= E [ 1( Z ≤ x √(Q/r) ) ]
= E [ Pr ( Z ≤ x √(Q/r) | Q ) ]
= E Φ ( x √(Q/r) ) .
Thus its density is
f(x) = (d/dx) E Φ ( x √(Q/r) )
= E ( φ ( x √(Q/r) ) √(Q/r) )
= ∫_0^∞ ( (1/√(2π)) exp( −qx²/(2r) ) ) √(q/r) ( 1 / ( Γ(r/2) 2^{r/2} ) ) q^{r/2−1} exp(−q/2) dq
= ( Γ((r + 1)/2) / ( √(rπ) Γ(r/2) ) ) ( 1 + x²/r )^{−(r+1)/2} ,
which is that of the student t with r degrees of freedom. ¥
B.10 Inequalities
Jensen’s Inequality. If g(·) : Rm → R is convex, then for any random vector x for which E‖x‖ < ∞ and E|g(x)| < ∞,
g (E(x)) ≤ E (g(x)) . (B.12)
Conditional Jensen’s Inequality. If g(·) : Rm → R is convex, then for any random vectors (y, x) for which E‖y‖ < ∞ and E‖g(y)‖ < ∞,
g (E(y | x)) ≤ E (g(y) | x) . (B.13)
Conditional Expectation Inequality. For any r ≥ 1 such that E|y|^r < ∞,
E |E(y | x)|^r ≤ E|y|^r < ∞ . (B.14)
Expectation Inequality. For any random matrix Y for which E‖Y‖ < ∞,
‖E(Y)‖ ≤ E‖Y‖ . (B.15)
Hölder’s Inequality. If p > 1 and q > 1 and 1/p + 1/q = 1, then for any random m × n matrices X and Y,
E ‖X′Y‖ ≤ ( E‖X‖^p )^{1/p} ( E‖Y‖^q )^{1/q} . (B.16)
Markov’s Inequality (standard form). For any random vector x and non-negative function
g(x) ≥ 0,
Pr(g(x) > α) ≤ α−1 Eg(x). (B.21)
Markov’s Inequality (strong form). For any random vector x and non-negative function
g(x) ≥ 0,
Pr(g(x) > α) ≤ α−1 E (g (x) 1 (g(x) > α)) . (B.22)
Proof of Jensen’s Inequality (B.12). Since g(u) is convex, at any point u there is a nonempty
set of subderivatives (linear surfaces touching g(u) at u but lying below g(u) for all u). Let a + b′u be a subderivative of g(u) at u = Ex. Then for all u, g(u) ≥ a + b′u yet g(Ex) = a + b′Ex. Applying expectations, Eg(x) ≥ a + b′Ex = g(Ex), as stated. ¥
Proof of Conditional Jensen’s Inequality. The same as the proof of (B.12), but using condi-
tional expectations. The conditional expectations exist since E kyk < ∞ and E kg (y)k < ∞. ¥
Proof of Conditional Expectation Inequality. As the function |u|^r is convex for r ≥ 1, the Conditional Jensen’s inequality implies
|E(y | x)|^r ≤ E ( |y|^r | x ) .
Taking unconditional expectations and applying the law of iterated expectations yields E|E(y | x)|^r ≤ E|y|^r < ∞,
as required. ¥
Proof of Expectation Inequality. By the Triangle Inequality, for any λ ∈ [0, 1],
‖λU1 + (1 − λ)U2‖ ≤ λ‖U1‖ + (1 − λ)‖U2‖ ,
which shows that the matrix norm g(U) = ‖U‖ is convex. Applying Jensen’s Inequality (B.12) we find (B.15). ¥
Proof of Liapunov’s Inequality. The function g(u) = u^{p/r} is convex for u > 0 since p ≥ r. Set u = ‖X‖^r. By Jensen’s inequality, g(Eu) ≤ Eg(u), or
( E‖X‖^r )^{p/r} ≤ E ( ‖X‖^r )^{p/r} = E‖X‖^p .
Raising both sides to the power 1/p yields ( E‖X‖^r )^{1/r} ≤ ( E‖X‖^p )^{1/p} as claimed. ¥
Proof of Minkowski’s Inequality. Note that by rewriting, using the triangle inequality (A.12), and then Hölder’s Inequality to the two expectations
E‖X + Y‖^p = E ( ‖X + Y‖ ‖X + Y‖^{p−1} )
≤ E ( ‖X‖ ‖X + Y‖^{p−1} ) + E ( ‖Y‖ ‖X + Y‖^{p−1} )
≤ ( E‖X‖^p )^{1/p} ( E‖X + Y‖^{q(p−1)} )^{1/q} + ( E‖Y‖^p )^{1/p} ( E‖X + Y‖^{q(p−1)} )^{1/q}
= ( ( E‖X‖^p )^{1/p} + ( E‖Y‖^p )^{1/p} ) E ( ‖X + Y‖^p )^{(p−1)/p} ,
where the second equality picks q to satisfy 1/p + 1/q = 1, and the final equality uses this fact to make the substitution q = p/(p − 1) and then collects terms. Dividing both sides by E( ‖X + Y‖^p )^{(p−1)/p}, we obtain (B.19). ¥
Proof of Markov’s Inequality. Let F denote the distribution function of x. Then
α Pr (g(x) > α) = ∫_{g(u)>α} α dF(u) ≤ ∫_{g(u)>α} g(u) dF(u) = E ( g(x) 1(g(x) > α) ) ,
the inequality using the region of integration {g(u) > α}. This establishes the strong form (B.22). Since 1(g(x) > α) ≤ 1, the final expression is less than α^{−1} E(g(x)), establishing the standard form (B.21). ¥
Proof of Chebyshev’s Inequality. Define y = (x − Ex)² and note that Ey = var(x). The events {|x − Ex| > α} and {y > α²} are equal, so by an application of Markov’s inequality we find
Pr (|x − Ex| > α) = Pr (y > α²) ≤ α^{−2} E(y) = α^{−2} var(x) ,
as stated. ¥
The observations are a random sample (y1, . . . , yn) from a parametric density f(y | θ), θ ∈ Θ, so the joint density of the sample is the product Π_{i=1}^{n} f(yi | θ). The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of θ. The log-likelihood function is its natural logarithm
log L(θ) = Σ_{i=1}^{n} log f (yi | θ) .
The likelihood score is the derivative of the log-likelihood, evaluated at the true parameter value,
Si = (∂/∂θ) log f (yi | θ0) .
We also define the Hessian
H = −E (∂²/∂θ∂θ′) log f (yi | θ0) (B.24)
and the outer product matrix
Ω = E ( Si S′i ) . (B.25)
Theorem B.11.1
(∂/∂θ) E log f (y | θ) |_{θ=θ0} = 0 (B.26)
E Si = 0 (B.27)
and
H = Ω ≡ I (B.28)
The matrix I is called the information, and the equality (B.28) is called the information
matrix equality.
The maximum likelihood estimator (MLE) θ̂ is the parameter value which maximizes the likelihood (equivalently, which maximizes the log-likelihood). We can write this as
θ̂ = argmax_{θ∈Θ} log L(θ) . (B.29)
In some simple cases, we can find an explicit expression for θ̂ as a function of the data, but these
cases are rare. More typically, the MLE θ̂ must be found by numerical methods.
To understand why the MLE θ̂ is a natural estimator for the parameter θ observe that the
standardized log-likelihood is a sample average and an estimator of E log f (yi | θ):
(1/n) log L(θ) = (1/n) Σ_{i=1}^{n} log f (yi | θ) →p E log f (yi | θ) .
As the MLE θ̂ maximizes the left-hand-side, we can see that it is an estimator of the maximizer of
the right-hand-side. The first-order condition for the latter problem is
0 = (∂/∂θ) E log f (yi | θ) ,
which holds at θ = θ0 by (B.26). This suggests that θ̂ is an estimator of θ0. In fact, under conventional regularity conditions, θ̂ is consistent, θ̂ →p θ0 as n → ∞. Furthermore, we can derive its asymptotic distribution.
Theorem B.11.2 Under regularity conditions, √n (θ̂ − θ0) →d N(0, I^{−1}).
We omit the regularity conditions for Theorem B.11.2, but the result holds quite broadly for
models which are smooth functions of the parameters. Theorem B.11.2 gives the general form for
the asymptotic distribution of the MLE. A famous result shows that the asymptotic variance is the
smallest possible.
Theorem B.11.3 Cramer-Rao Lower Bound. If θ̃ is an unbiased regular estimator of θ, then var(θ̃) ≥ (nI)^{−1}.
The Cramer-Rao Theorem shows that the finite sample variance of an unbiased estimator is bounded below by (nI)^{−1}. This means that the asymptotic variance of the standardized estimator √n(θ̃ − θ0) is bounded below by I^{−1}. In other words, the best possible asymptotic variance among
all (regular) estimators is I −1 . An estimator is called asymptotically efficient if its asymptotic
variance equals this lower bound. Theorem B.11.2 shows that the MLE has this asymptotic variance,
and is thus asymptotically efficient.
Theorem B.11.4 gives a strong endorsement for the MLE in parametric models.
Finally, consider functions of parameters. If ψ = g(θ) then the MLE of ψ is ψ̂ = g(θ̂). This is because maximization (e.g. (B.29)) is unaffected by parameterization and transformation. Applying the Delta Method to Theorem B.11.2 we conclude that
√n (ψ̂ − ψ) ≃ G′ √n (θ̂ − θ) →d N(0, G′I^{−1}G) , (B.30)
where G = (∂/∂θ) g(θ0)′.
E Si = E (∂/∂θ) log f (y | θ0) = (∂/∂θ) E log f (y | θ) |_{θ=θ0} = 0.
Proof of Theorem B.11.2. Taking the first-order condition for maximization of log L(θ), and making a first-order Taylor series expansion,
0 = (∂/∂θ) log L(θ) |_{θ=θ̂}
= Σ_{i=1}^{n} (∂/∂θ) log f (yi | θ̂)
= Σ_{i=1}^{n} (∂/∂θ) log f (yi | θ0) + Σ_{i=1}^{n} (∂²/∂θ∂θ′) log f (yi | θn) (θ̂ − θ0) ,
where θn lies on a line segment joining θ̂ and θ0. (Technically, the specific value of θn varies by row in this expansion.) Rewriting this equation, we find
(θ̂ − θ0) = ( − Σ_{i=1}^{n} (∂²/∂θ∂θ′) log f (yi | θn) )^{−1} ( Σ_{i=1}^{n} Si ) ,
where Si are the likelihood scores. Since the score Si is mean-zero (B.27) with covariance matrix Ω (equation B.25), an application of the CLT yields
(1/√n) Σ_{i=1}^{n} Si →d N(0, Ω) .
The analysis of the sample Hessian is somewhat more complicated due to the presence of θn. Let H(θ) = −E (∂²/∂θ∂θ′) log f (yi, θ). If it is continuous in θ, then since θn →p θ0 it follows that H(θn) →p H and so
−(1/n) Σ_{i=1}^{n} (∂²/∂θ∂θ′) log f (yi, θn) = (1/n) Σ_{i=1}^{n} ( −(∂²/∂θ∂θ′) log f (yi, θn) − H(θn) ) + H(θn) →p H
by an application of a uniform WLLN. (By uniform, we mean that the WLLN holds uniformly over the parameter value. This requires the second derivative to be a smooth function of the parameter.) Together,
√n (θ̂ − θ0) →d H^{−1} N(0, Ω) = N(0, H^{−1}ΩH^{−1}) = N(0, I^{−1}) ,
the final equality using the information matrix equality (B.28). ¥
Proof of Theorem B.11.3 (sketch). Let S = Σ_{i=1}^{n} Si denote the sample score, which by Theorem B.11.1 has mean zero and variance nI. Write the estimator θ̃ = θ̃(Y) as a function of the data Y. Since θ̃ is unbiased for any θ,
θ = E θ̃ = ∫ θ̃(Y) f (Y, θ) dY .
Differentiating with respect to θ and applying the Cauchy-Schwarz inequality to cov(θ̃, S) yields var(θ̃) ≥ (nI)^{−1},
as stated. ¥
Appendix C
Numerical Optimization
Many econometric estimators can be written as the solution to an optimization problem
θ̂ = argmin_{θ∈Θ} Q(θ) , (C.1)
where the parameter is θ ∈ Θ ⊂ Rm and the criterion function is Q(θ) : Θ → R. For example
NLLS, GLS, MLE and GMM estimators take this form. In most cases, Q(θ) can be computed
for given θ, but θ̂ is not available in closed form. In this case, numerical methods are required to
obtain θ̂.
Gradient methods require the computation of the gradient of Q(θ),
g(θ) = (∂/∂θ) Q(θ) ,
and some require the Hessian
H(θ) = (∂²/∂θ∂θ′) Q(θ) .
If the functions g(θ) and H(θ) are not analytically available, they can be calculated numerically. Take the j′th element of g(θ). Let δj be the j′th unit vector (zeros everywhere except for a one in the j′th row). Then for ε small
gj(θ) ≃ ( Q(θ + δjε) − Q(θ) ) / ε .
Similarly,
gjk(θ) ≃ ( Q(θ + δjε + δkε) − Q(θ + δkε) − Q(θ + δjε) + Q(θ) ) / ε² .
In many cases, numerical derivatives can work well but can be computationally costly relative to
analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
Most gradient methods are a variant of Newton’s method which is based on a quadratic approximation. By a Taylor’s expansion for θ close to θ̂,
0 = g(θ̂) ≃ g(θ) + H(θ)(θ̂ − θ) ,
which implies
θ̂ = θ − H(θ)−1 g(θ).
This suggests the iteration rule
θ̂i+1 = θi − H(θi)−1 g(θi).
One problem with Newton’s method is that it will send the iterations in the wrong direction if H(θi) is not positive definite. One modification to prevent this possibility is quadratic hill-climbing which sets
θ̂i+1 = θi − ( H(θi) + αi Im )−1 g(θi) ,
where αi is set just above the smallest eigenvalue of H(θi) if H(θi) is not positive definite.
Another productive modification is to add a scalar steplength λi . In this case the iteration
rule takes the form
θi+1 = θi − Di g i λi (C.2)
where gi = g(θi ) and Di = H(θi )−1 for Newton’s method and Di = (H(θi ) + αi I m )−1 for
quadratic hill-climbing.
Allowing the steplength to be a free parameter allows for a line search, a one-dimensional
optimization. To pick λi write the criterion function as a function of λ
Q(λ) = Q(θi + Di g i λ)
a one-dimensional optimization problem. There are two common methods to perform a line search.
A quadratic approximation evaluates the first and second derivatives of Q(λ) with respect to
λ, and picks λi as the value minimizing this approximation. The half-step method considers the
sequence λ = 1, 1/2, 1/4, 1/8, ... . Each value in the sequence is considered and the criterion
Q(θi + Di g i λ) evaluated. If the criterion has improved over Q(θi ), use this value, otherwise move
to the next element in the sequence.
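A bare-bones sketch of iteration (C.2) with numerical derivatives and the half-step rule is given below (Python with numpy; the criterion Q, the starting value, and the tolerances are all hypothetical, and no claim of robustness is made).

import numpy as np

def numerical_gradient(Q, theta, eps=1e-6):
    g = np.zeros_like(theta, dtype=float)
    for j in range(len(theta)):
        d = np.zeros_like(theta, dtype=float)
        d[j] = eps
        g[j] = (Q(theta + d) - Q(theta)) / eps
    return g

def newton_half_step(Q, theta, tol=1e-8, max_iter=100):
    theta = np.asarray(theta, dtype=float)
    for _ in range(max_iter):
        g = numerical_gradient(Q, theta)
        # numerical Hessian obtained by differencing the gradient
        H = np.column_stack([
            (numerical_gradient(Q, theta + 1e-6 * np.eye(len(theta))[j]) - g) / 1e-6
            for j in range(len(theta))
        ])
        H = (H + H.T) / 2
        step = np.linalg.solve(H, g)       # Newton direction D_i g_i
        lam = 1.0
        while Q(theta - lam * step) >= Q(theta) and lam > 1e-8:
            lam /= 2                       # half-step line search
        theta_new = theta - lam * step
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta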
Newton’s method does not perform well if Q(θ) is irregular, and it can be quite computationally
costly if H(θ) is not analytically available. These problems have motivated alternative choices for
the weight matrix Di . These methods are called Quasi-Newton methods. Two popular methods
are due to Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS).
Let
∆g i = gi − g i−1
∆θi = θi − θi−1
For any of the gradient methods, the iterations continue until the sequence has converged in
some sense. This can be defined by examining whether |θi − θi−1 | , |Q (θi ) − Q (θi−1 )| or |g(θi )|
has become small.
[1] Abadir, Karim M. and Jan R. Magnus (2005): Matrix Algebra, Cambridge University Press.
[2] Aitken, A.C. (1935): “On least squares and linear combinations of observations,” Proceedings
of the Royal Statistical Society, 55, 42-48.
[3] Akaike, H. (1973): “Information theory and an extension of the maximum likelihood prin-
ciple.” In B. Petroc and F. Csake, eds., Second International Symposium on Information
Theory.
[4] Anderson, T.W. and H. Rubin (1949): “Estimation of the parameters of a single equation in
a complete system of stochastic equations,” The Annals of Mathematical Statistics, 20, 46-63.
[5] Andrews, Donald W. K. (1988): “Laws of large numbers for dependent non-identically dis-
tributed random variables,” Econometric Theory, 4, 458-467.
[6] Andrews, Donald W. K. (1991), “Asymptotic normality of series estimators for nonparametric
and semiparametric regression models,” Econometrica, 59, 307-345.
[7] Andrews, Donald W. K. (1993), “Tests for parameter instability and structural change with
unknown change point,” Econometrica, 61, 821-856.
[8] Andrews, Donald W. K. and Moshe Buchinsky: (2000): “A three-step method for choosing
the number of bootstrap replications,” Econometrica, 68, 23-51.
[9] Andrews, Donald W. K. and Werner Ploberger (1994): “Optimal tests when a nuisance
parameter is present only under the alternative,” Econometrica, 62, 1383-1414.
[10] Ash, Robert B. (1972): Real Analysis and Probability, Academic Press.
[12] Bekker, P.A. (1994): “Alternative approximations to the distributions of instrumental vari-
able estimators,” Econometrica, 62, 657-681.
[13] Billingsley, Patrick (1968): Convergence of Probability Measures. New York: Wiley.
[14] Billingsley, Patrick (1995): Probability and Measure, 3rd Edition, New York: Wiley.
[16] Box, George E. P. and David R. Cox (1964): “An analysis of transformations,” Journal of
the Royal Statistical Society, Series B, 26, 211-252.
[17] Breusch, T.S. and A.R. Pagan (1979): “The Lagrange multiplier test and its application to
model specification in econometrics,” Review of Economic Studies, 47, 239-253.
[18] Brown, B. W. and Whitney K. Newey (2002): “GMM, efficient bootstrapping, and improved
inference,” Journal of Business and Economic Statistics.
[19] Card, David (1995): “Using geographic variation in college proximity to estimate the return
to schooling,” in Aspects of Labor Market Behavior: Essays in Honour of John Vanderkamp,
L.N. Christofides, E.K. Grant, and R. Swidinsky, editors. Toronto: University of Toronto
Press.
[20] Carlstein, E. (1986): “The use of subseries methods for estimating the variance of a general
statistic from a stationary time series,” Annals of Statistics, 14, 1171-1179.
[21] Casella, George and Roger L. Berger (2002): Statistical Inference, 2nd Edition, Duxbury
Press.
[22] Chamberlain, Gary (1987): “Asymptotic efficiency in estimation with conditional moment
restrictions,” Journal of Econometrics, 34, 305-334.
[23] Choi, In and Peter C.B. Phillips (1992): “Asymptotic and finite sample distribution theory for
IV estimators and tests in partially identified structural equations,” Journal of Econometrics,
51, 113-150.
[24] Chow, G.C. (1960): “Tests of equality between sets of coefficients in two linear regressions,”
Econometrica, 28, 591-603.
[25] Cragg, John (1992): “Quasi-Aitken estimation for heteroskedasticity of unknown form,”
Journal of Econometrics, 54, 179-201.
[26] Davidson, James (1994): Stochastic Limit Theory: An Introduction for Econometricians.
Oxford: Oxford University Press.
[27] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge
University Press.
[28] Dickey, D.A. and W.A. Fuller (1979): “Distribution of the estimators for autoregressive time
series with a unit root,” Journal of the American Statistical Association, 74, 427-431.
[29] Donald Stephen G. and Whitney K. Newey (2001): “Choosing the number of instruments,”
Econometrica, 69, 1161-1191.
[30] Dufour, J.M. (1997): “Some impossibility theorems in econometrics with applications to
structural and dynamic models,” Econometrica, 65, 1365-1387.
[31] Efron, Bradley (1979): “Bootstrap methods: Another look at the jackknife,” Annals of Sta-
tistics, 7, 1-26.
[32] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society
for Industrial and Applied Mathematics.
[33] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York:
Chapman-Hall.
[34] Eicker, F. (1963): “Asymptotic normality and consistency of the least squares estimators for
families of linear regressions,” Annals of Mathematical Statistics, 34, 447-456.
[35] Engle, Robert F. and Clive W. J. Granger (1987): “Co-integration and error correction:
Representation, estimation and testing,” Econometrica, 55, 251-276.
[37] Frisch, Ragnar and F. Waugh (1933): “Partial time regressions as compared with individual
trends,” Econometrica, 1, 387-401.
[38] Gallant, A. Ronald and D.W. Nychka (1987): “Seminonparametric maximum likelihood es-
timation,” Econometrica, 55, 363-390.
[39] Gallant, A. Ronald and Halbert White (1988): A Unified Theory of Estimation and Inference
for Nonlinear Dynamic Models. New York: Basil Blackwell.
[40] Galton, Francis (1886): “Regression Towards Mediocrity in Hereditary Stature,” The Journal
of the Anthropological Institute of Great Britain and Ireland, 15, 246-263.
[44] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): “Global optimization of statistical functions
with simulated annealing,” Journal of Econometrics, 60, 65-99.
[45] Gosset, William S. (a.k.a. “Student”) (1908): “The probable error of a mean,” Biometrika,
6, 1-25.
[46] Gauss, K.F. (1809): “Theoria motus corporum coelestium,” in Werke, Vol. VII, 240-254.
[47] Granger, Clive W. J. (1969): “Investigating causal relations by econometric models and
cross-spectral methods,” Econometrica, 37, 424-438.
[48] Granger, Clive W. J. (1981): “Some properties of time series data and their use in econometric
specification,” Journal of Econometrics, 16, 121-130.
[49] Granger, Clive W. J. and Timo Teräsvirta (1993): Modelling Nonlinear Economic Relation-
ships, Oxford University Press, Oxford.
[50] Gregory, A. and M. Veall (1985): “On formulating Wald tests of nonlinear restrictions,”
Econometrica, 53, 1465-1468.
[52] Hall, A. R. (2000): “Covariance matrix estimation and the power of the overidentifying
restrictions test,” Econometrica, 68, 1517-1527.
[53] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.
[54] Hall, P. (1994): “Methodology and theory for the bootstrap,” Handbook of Econometrics,
Vol. IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.
[55] Hall, P. and J.L. Horowitz (1996): “Bootstrap critical values for tests based on Generalized-
Method-of-Moments estimation,” Econometrica, 64, 891-916.
[58] Hansen, Bruce E. (1992): “Efficient estimation and testing of cointegrating vectors in the
presence of deterministic trends,” Journal of Econometrics, 53, 87-121.
[59] Hansen, Bruce E. (1996): “Inference when a nuisance parameter is not identified under the
null hypothesis,” Econometrica, 64, 413-430.
[60] Hansen, Bruce E. (2006): “Edgeworth expansions for the Wald and GMM statistics for non-
linear restrictions,” Econometric Theory and Practice: Frontiers of Analysis and Applied
Research, edited by Dean Corbae, Steven N. Durlauf and Bruce E. Hansen. Cambridge Uni-
versity Press.
[61] Hansen, Lars Peter (1982): “Large sample properties of generalized method of moments
estimators,” Econometrica, 50, 1029-1054.
[62] Hansen, Lars Peter, John Heaton, and A. Yaron (1996): “Finite sample properties of some
alternative GMM estimators,” Journal of Business and Economic Statistics, 14, 262-280.
[63] Hausman, J.A. (1978): “Specification tests in econometrics,” Econometrica, 46, 1251-1271.
[64] Heckman, J. (1979): “Sample selection bias as a specification error,” Econometrica, 47, 153-
161.
[65] Horn, S.D., R.A. Horn, and D.B. Duncan. (1975) “Estimating heteroscedastic variances in
linear model,” Journal of the American Statistical Association, 70, 380-385.
[66] Horowitz, Joel (2001): “The Bootstrap,” Handbook of Econometrics, Vol. 5, J.J. Heckman
and E.E. Leamer, eds., Elsevier Science, 3159-3228.
[67] Imbens, G.W. (1997): “One step estimators for over-identified generalized method of moments
models,” Review of Economic Studies, 64, 359-383.
[68] Imbens, G.W., R.H. Spady and P. Johnson (1998): “Information theoretic approaches to
inference in moment condition models,” Econometrica, 66, 333-357.
[69] Jarque, C.M. and A.K. Bera (1980): “Efficient tests for normality, homoskedasticity and
serial independence of regression residuals,” Economics Letters, 6, 255-259.
[70] Johansen, S. (1988): “Statistical analysis of cointegrating vectors,” Journal of Economic
Dynamics and Control, 12, 231-254.
[71] Johansen, S. (1991): “Estimation and hypothesis testing of cointegration vectors in the pres-
ence of linear trend,” Econometrica, 59, 1551-1580.
[72] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Mod-
els, Oxford University Press.
[73] Johansen, S. and K. Juselius (1992): “Testing structural hypotheses in a multivariate cointe-
gration analysis of the PPP and the UIP for the UK,” Journal of Econometrics, 53, 211-244.
[74] Kitamura, Y. (2001): “Asymptotic optimality and empirical likelihood for testing moment
restrictions,” Econometrica, 69, 1661-1672.
[75] Kitamura, Y. and M. Stutzer (1997): “An information-theoretic alternative to generalized
method of moments,” Econometrica, 65, 861-874.
[76] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.
[77] Kunsch, H.R. (1989): “The jackknife and the bootstrap for general stationary observations,”
Annals of Statistics, 17, 1217-1241.
[78] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): “Testing the null hypoth-
esis of stationarity against the alternative of a unit root: How sure are we that economic time
series have a unit root?” Journal of Econometrics, 54, 159-178.
[79] Lafontaine, F. and K.J. White (1986): “Obtaining any Wald statistic you want,” Economics
Letters, 21, 35-40.
[80] Lehmann, E.L. and George Casella (1998): Theory of Point Estimation, 2nd Edition,
Springer.
[81] Lehmann, E.L. and Joseph P. Romano (2005): Testing Statistical Hypotheses, 3rd Edition,
Springer.
[82] Lindeberg, Jarl Waldemar, (1922): “Eine neue Herleitung des Exponentialgesetzes in der
Wahrscheinlichkeitsrechnung,” Mathematische Zeitschrift, 15, 211-225.
[84] Lovell, M.C. (1963): “Seasonal adjustment of economic time series,” Journal of the American
Statistical Association, 58, 993-1010.
[85] MacKinnon, James G. (1990): “Critical values for cointegration,” in Engle, R.F. and C.W.
Granger (eds.) Long-Run Economic Relationships: Readings in Cointegration, Oxford, Oxford
University Press.
[86] MacKinnon, James G. and Halbert White (1985): “Some heteroskedasticity-consistent covari-
ance matrix estimators with improved finite sample properties,” Journal of Econometrics, 29,
305-325.
[87] Magnus, J. R., and H. Neudecker (1988): Matrix Differential Calculus with Applications in
Statistics and Econometrics, New York: John Wiley and Sons.
[88] Mann, H.B. and A. Wald (1943). “On stochastic limit and order relationships,” The Annals
of Mathematical Statistics, 14, 217-226.
[89] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.
[90] Nelder, J. and R. Mead (1965): “A simplex method for function minimization,” Computer
Journal, 7, 308-313.
[91] Nerlove, Marc (1963): “Returns to Scale in Electricity Supply,” Chapter 7 of Measurement
in Economics (C. Christ, et al, eds.). Stanford: Stanford University Press, 167-198.
[92] Newey, Whitney K. (1990): “Semiparametric efficiency bounds,” Journal of Applied Econo-
metrics, 5, 99-135.
[93] Newey, Whitney K. (1997): “Convergence rates and asymptotic normality for series estima-
tors,” Journal of Econometrics, 79, 147-168.
[94] Newey, Whitney K. and Daniel L. McFadden (1994): “Large Sample Estimation and Hy-
pothesis Testing,” in Robert Engle and Daniel McFadden, (eds.) Handbook of Econometrics,
vol. IV, 2111-2245, North Holland: Amsterdam.
[95] Newey, Whitney K. and Kenneth D. West (1987): “Hypothesis testing with efficient method
of moments estimation,” International Economic Review, 28, 777-787.
[96] Owen, Art B. (1988): “Empirical likelihood ratio confidence intervals for a single functional,”
Biometrika, 75, 237-249.
[97] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.
[98] Park, Joon Y. and Peter C. B. Phillips (1988): “On the formulation of Wald tests of nonlinear
restrictions,” Econometrica, 56, 1065-1083.
[99] Phillips, Peter C.B. (1989): “Partially identified econometric models,” Econometric Theory,
5, 181-240.
[100] Phillips, Peter C.B. and Sam Ouliaris (1990): “Asymptotic properties of residual based tests
for cointegration,” Econometrica, 58, 165-193.
[101] Politis, D.N. and J.P. Romano (1996): “The stationary bootstrap,” Journal of the American
Statistical Association, 89, 1303-1313.
[102] Potscher, B.M. (1991): “Effects of model selection on inference,” Econometric Theory, 7,
163-185.
[103] Qin, J. and J. Lawless (1994): “Empirical likelihood and general estimating equations,” The
Annals of Statistics, 22, 300-325.
[104] Ramsey, J. B. (1969): “Tests for specification errors in classical linear least-squares regression
analysis,” Journal of the Royal Statistical Society, Series B, 31, 350-371.
[105] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.
[106] Runge, Carl (1901): “Über empirische Funktionen und die Interpolation zwischen äquidis-
tanten Ordinaten,” Zeitschrift für Mathematik und Physik, 46, 224-243.
[107] Said, S.E. and D.A. Dickey (1984): “Testing for unit roots in autoregressive-moving average
models of unknown order,” Biometrika, 71, 599-608.
[108] Secrist, Horace (1933): The Triumph of Mediocrity in Business. Evanston: Northwestern
University.
[109] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.
[110] Sargan, J.D. (1958): “The estimation of economic relationships using instrumental variables,”
Econometrica, 26, 393-415.
[112] Sheather, S.J. and M.C. Jones (1991): “A reliable data-based bandwidth selection method
for kernel density estimation,” Journal of the Royal Statistical Society, Series B, 53, 683-690.
[113] Shin, Y. (1994): “A residual-based test of the null of cointegration against the alternative of
no cointegration,” Econometric Theory, 10, 91-115.
[114] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chap-
man and Hall.
[115] Sims, C.A. (1972): “Money, income and causality,” American Economic Review, 62, 540-552.
[116] Sims, C.A. (1980): “Macroeconomics and reality,” Econometrica, 48, 1-48.
[117] Staiger, D. and James H. Stock (1997): “Instrumental variables regression with weak instru-
ments,” Econometrica, 65, 557-586.
[118] Stock, James H. (1987): “Asymptotic properties of least squares estimators of cointegrating
vectors,” Econometrica, 55, 1035-1056.
[119] Stock, James H. (1991): “Confidence intervals for the largest autoregressive root in U.S.
macroeconomic time series,” Journal of Monetary Economics, 28, 435-460.
[120] Stock, James H. and Jonathan H. Wright (2000): “GMM with weak identification,” Econo-
metrica, 68, 1055-1096.
[121] Stock, James H. and Mark W. Watson (2010): Introduction to Econometrics, 3rd edition,
Addison-Wesley.
[122] Stone, Marshall H. (1937): “Applications of the Theory of Boolean Rings to General Topol-
ogy,” Transactions of the American Mathematical Society, 41, 375-481.
[123] Stone, Marshall H. (1948): “The Generalized Weierstrass Approximation Theorem,” Mathe-
matics Magazine, 21, 167-184.
[124] Theil, Henri. (1953): “Repeated least squares applied to complete equation systems,” The
Hague, Central Planning Bureau, mimeo.
[125] Theil, Henri (1961): Economic Forecasts and Policy. Amsterdam: North Holland.
[127] Tobin, James (1958): “Estimation of relationships for limited dependent variables,” Econo-
metrica, 26, 24-36.
[128] Tripathi, Gautam (1999): “A matrix extension of the Cauchy-Schwarz inequality,” Economics
Letters, 63, 1-3.
[129] van der Vaart, A.W. (1998): Asymptotic Statistics, Cambridge University Press.
[130] Wald, A. (1943): “Tests of statistical hypotheses concerning several parameters when the
number of observations is large,” Transactions of the American Mathematical Society, 54,
426-482.
[131] Wang, J. and E. Zivot (1998): “Inference on structural parameters in instrumental variables
regression with weak instruments,” Econometrica, 66, 1389-1404.
[132] Weierstrass, K. (1885): “Über die analytische Darstellbarkeit sogenannter willkürlicher Func-
tionen einer reellen Veränderlichen,” Sitzungsberichte der Königlich Preußischen Akademie
der Wissenschaften zu Berlin, 1885.
[134] White, Halbert (1984): Asymptotic Theory for Econometricians, Academic Press.
[135] Wooldridge, Jeffrey M. (2010) Econometric Analysis of Cross Section and Panel Data, 2nd
edition, MIT Press.
[136] Wooldridge, Jeffrey M. (2009) Introductory Econometrics: A Modern Approach, 4th edition,
South-Western.
[137] Zellner, Arnold. (1962): “An efficient method of estimating seemingly unrelated regressions,
and tests for aggregation bias,” Journal of the American Statistical Association, 57, 348-368.
[138] Zhang, Fuzhen and Qingling Zhang (2006): “Eigenvalue inequalities for matrix product,”
IEEE Transactions on Automatic Control, 51, 1506-1509.