CH 02
Principles of
Econometrics
Fifth Edition
R. Carter Hill William E. Griffiths Guay C. Lim
Chapter Outline
1. An Economic Model
2. An Econometric Model
3. Estimating the Regression Parameters
4. Assessing the Least Squares Estimators
5. The Gauss–Markov Theorem
6. The Probability Distributions of the Least Squares Estimators
7. Estimating the Variance of the Error Term
8. Estimating Nonlinear Relationships
9. Regression with Indicator Variables
10. The Independent Variable
Let e (error term) = everything else affecting food expenditure other than income.
We assume the error term is uncorrelated with the explanatory variable: cov(x, e) = 0.
This says the population average value of the dependent variable for the ith observation is E(yi|xi) = β1 + β2 xi.
This also says that, given a change in x, Δx, the resulting change in E(y|x) is β2 Δx, holding all else constant.
We can say that a change in x leads to, or causes, a change in the expected (population average) value of y given x, E(y|x) = β1 + β2 x.
If this assumption is violated, so that var(e|x) is not the same constant for all observations, then the random errors are said to be heteroskedastic.
The assumption that the pairs (yt, xt) represent random iid draws from a probability distribution is not realistic for time-series data.
We cannot predict the random error at time t using any of the values of the explanatory variable.
With cross-sectional data, data collected at one point in time, there may be a lack
of statistical independence between random errors for individuals who are
spatially connected.
Within a larger sample of data, there may be clusters of observations with
correlated errors because of the spatial component.
(2.5) ŷi = b1 + b2 xi
(2.6) êi = yi − ŷi = yi − b1 − b2 xi
Copyright ©2018 John Wiley & Sons, Inc.
Graphs of the fitted regression line and least
squares residual equations
(2.7) b2 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
(2.8) b1 = ȳ − b2 x̄
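As a numeric sketch of these least squares formulas, the following computes b2 and b1 directly from the deviation-from-the-mean sums. The (x, y) values are made up for illustration, not taken from the textbook example:

```python
# Least squares slope and intercept from formulas (2.7) and (2.8).
# The data values below are illustrative, not from the textbook.
x = [3.0, 5.0, 7.0, 9.0, 11.0]
y = [10.0, 14.0, 19.0, 22.0, 27.0]

N = len(x)
x_bar = sum(x) / N
y_bar = sum(y) / N

# (2.7): b2 = sum of (xi - x_bar)(yi - y_bar) over sum of (xi - x_bar)^2
b2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum(
    (xi - x_bar) ** 2 for xi in x
)

# (2.8): b1 = y_bar - b2 * x_bar
b1 = y_bar - b2 * x_bar

print(b1, b2)  # close to 3.7 and 2.1 for this data
```

The same arithmetic underlies any least squares software; writing it out once makes clear that b2 depends only on deviations from the sample means.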
Any time you ask how much a change in one variable will affect another variable, regression analysis is a relevant tool.
Similarly, any time you wish to predict the value of one variable given the value of another, least squares regression is a natural starting point.
1. If the least squares estimators are random variables, then what are
their expected values, variances, covariances, and probability
distributions?
where the weights are
(2.11) wi = (xi − x̄) / Σ(xi − x̄)²
We can show that E(b2) = β2, so that the estimator is unbiased. We can find the expected value of b2 using the
fact that the expected value of a sum is the sum of the expected values:
(2.13) E(b2) = β2 + Σ wi E(ei) = β2
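Unbiasedness can be illustrated with a small Monte Carlo sketch: drawing new errors and re-estimating b2 many times gives an average estimate close to the true slope. All parameter values below (β1, β2, σ, the x design) are illustrative choices, not from the text:

```python
import random

# Monte Carlo sketch of unbiasedness: the average of many b2 estimates
# is close to the true slope beta2.  All values here are illustrative.
random.seed(1)
beta1, beta2, sigma = 1.0, 0.5, 2.0
x = [float(i) for i in range(1, 21)]          # fixed x values across samples
x_bar = sum(x) / len(x)
sxx = sum((xi - x_bar) ** 2 for xi in x)

estimates = []
for _ in range(5000):
    # draw new errors, build y, and re-estimate the slope each time
    y = [beta1 + beta2 * xi + random.gauss(0.0, sigma) for xi in x]
    y_bar = sum(y) / len(y)
    b2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sxx
    estimates.append(b2)

print(sum(estimates) / len(estimates))  # close to beta2 = 0.5
```

Each individual b2 varies from sample to sample; it is the average over repeated samples that centers on β2, which is exactly what E(b2) = β2 asserts.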
(2.14) var(b1) = σ² [ Σxi² / ( N Σ(xi − x̄)² ) ]
(2.15) var(b2) = σ² / Σ(xi − x̄)²
(2.16) cov(b1, b2) = σ² [ −x̄ / Σ(xi − x̄)² ]
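A numeric sketch of these variance and covariance formulas, using an assumed known error variance σ² and made-up x values (both illustrative, not from the text):

```python
# Variances and covariance of b1 and b2 from formulas (2.14)-(2.16),
# with an assumed known error variance sigma^2 (illustrative values).
sigma2 = 1.0
x = [3.0, 5.0, 7.0, 9.0, 11.0]

N = len(x)
x_bar = sum(x) / N
sxx = sum((xi - x_bar) ** 2 for xi in x)      # sum of squared deviations

var_b1 = sigma2 * sum(xi ** 2 for xi in x) / (N * sxx)    # (2.14)
var_b2 = sigma2 / sxx                                      # (2.15)
cov_b1_b2 = sigma2 * (-x_bar / sxx)                        # (2.16)

print(var_b1, var_b2, cov_b1_b2)  # cov has sign opposite to x_bar
```

Doubling the spread of the x values increases sxx and shrinks all three quantities, which is the precision point made in the list that follows.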
Major Points About the Variances and
Covariances of b1 and b2
1. The larger the variance term, the greater the uncertainty there is in the statistical model,
and the larger the variances and covariance of the least squares estimators.
2. The larger the sum of squares, Σ(xi − x̄)², the smaller the variances of the least squares
estimators and the more precisely we can estimate the unknown parameters.
3. The larger the sample size N, the smaller the variances and covariance of the least
squares estimators.
5. The larger the magnitude of the sample mean x̄, the larger the absolute magnitude of the
covariance, and the covariance has a sign opposite to that of x̄.
2.5 The Gauss–Markov Theorem
1. The estimators b1 and b2 are "best" when compared to similar estimators, that is, to other
linear and unbiased estimators of β1 and β2. They are the best linear unbiased estimators (BLUE).
2. The estimators b1 and b2 are best within their class because they have the minimum
variance. When comparing two linear and unbiased estimators, we always want to use
the one with the smaller variance, because that estimation rule gives us the higher
probability of obtaining an estimate that is close to the true parameter value.
3. In order for the Gauss–Markov theorem to hold, assumptions SR1–SR5 must be true. If
any of these assumptions are not true, then b1 and b2 are not the best linear unbiased
estimators of β1 and β2.
4. The Gauss–Markov theorem does not depend on the assumption of normality (assumption
SR6).
5. In the simple linear regression model, if we want to use a linear and unbiased estimator, then
we have to do no more searching. The estimators b1 and b2 are the ones to use. This explains
why we are studying these estimators and why they are so widely used in research, not only
in economics but in all social and physical sciences.
6. The Gauss–Markov theorem applies to the least squares estimators. It does not apply to the
least squares estimates from a single sample.
(2.18) σ̂² = Σ ei² / N
where the error terms are ei = yi − β1 − β2 xi
Since the true errors are unobservable, we replace them with the least squares residuals êi = yi − b1 − b2 xi and correct for degrees of freedom:
(2.20) σ̂² = Σ êi² / (N − 2)
Replacing σ² with σ̂² in (2.14)–(2.16) gives the estimated variances and covariance:
(2.21) var̂(b1) = σ̂² Σxi² / ( N Σ(xi − x̄)² ), var̂(b2) = σ̂² / Σ(xi − x̄)², côv(b1, b2) = −σ̂² x̄ / Σ(xi − x̄)²
The standard errors of b1 and b2 are the square roots of the estimated variances:
(2.22) se(b1) = √var̂(b1), se(b2) = √var̂(b2)
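A sketch of the degrees-of-freedom-corrected error variance estimator σ̂² = Σêi²/(N − 2) and the resulting standard error of b2, again with made-up data:

```python
# Sketch: estimate sigma^2 from the residuals with the N-2 correction
# and form the standard error of b2.  Data values are illustrative.
x = [3.0, 5.0, 7.0, 9.0, 11.0]
y = [10.0, 14.0, 19.0, 22.0, 27.0]

N = len(x)
x_bar, y_bar = sum(x) / N, sum(y) / N
sxx = sum((xi - x_bar) ** 2 for xi in x)
b2 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sxx
b1 = y_bar - b2 * x_bar

residuals = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
sigma2_hat = sum(e ** 2 for e in residuals) / (N - 2)   # sigma-hat^2
se_b2 = (sigma2_hat / sxx) ** 0.5                        # se(b2)

print(sigma2_hat, se_b2)
```

Dividing by N − 2 rather than N reflects the two parameters (b1 and b2) already estimated from the same data.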
In particular, if assumption SR6 holds, and the random error terms ei are normally distributed, then the least squares estimators b1 and b2 are normally distributed. The square root of var(b2), which
we might call the true standard deviation of b2, measures the sampling variation of the estimator b2.
The slope of the fitted quadratic regression is
(2.27) d PRICE / d SQFT = 2 α̂2 SQFT
Both its slope and elasticity change at each point and are the same sign as α̂2.
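The changing slope is easy to see numerically. In the sketch below, α̂2 is an illustrative value, not an estimate from the text:

```python
# Sketch of the changing slope in the quadratic model
# PRICE = alpha1 + alpha2 * SQFT^2: slope = 2 * alpha2_hat * SQFT.
# alpha2_hat is an illustrative value, not an estimate from the text.
alpha2_hat = 0.0184

def slope(sqft):
    # derivative of the fitted quadratic with respect to SQFT
    return 2.0 * alpha2_hat * sqft

print(slope(2000.0), slope(4000.0))  # the slope doubles when SQFT doubles
```

Unlike the linear model, where b2 is one number, here a marginal effect must be reported at a chosen SQFT value (often the sample mean).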
E(PRICE) = β1 + β2 if UTOWN = 1
E(PRICE) = β1 if UTOWN = 0
In the simple regression model, an indicator variable on the right-hand side allows us to estimate and compare the means of two populations.
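This can be checked directly: applying the least squares formulas to a 0/1 regressor reproduces the two group means. The prices below are made-up illustrative values:

```python
# Sketch: least squares with a 0/1 indicator regressor recovers the two
# group means: b1 = mean of y when d = 0, b1 + b2 = mean when d = 1.
# The data below are illustrative house prices (in $1000s).
d = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # e.g. a UTOWN-style indicator
y = [200.0, 210.0, 190.0, 260.0, 250.0, 270.0]

N = len(d)
d_bar, y_bar = sum(d) / N, sum(y) / N
b2 = sum((di - d_bar) * (yi - y_bar) for di, yi in zip(d, y)) / sum(
    (di - d_bar) ** 2 for di in d
)
b1 = y_bar - b2 * d_bar

print(b1, b1 + b2)  # the two group means: 200.0 and 260.0
```

So b2 is the estimated difference between the two population means, which is why this regression is equivalent to a two-sample comparison of means.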
Implication 1: E(ei)=0. The “average” of all factors omitted from the regression
model is zero.
The idea is to collect data pairs (yi, xi) in such a way that the ith pair is statistically independent of all other pairs.