
Lecture 2: MRA and inference

Dr. Yundan Gong


Econometrics (6YYD0017)
Test

1. Econometrics is the branch of economics that _____.


a. studies the behaviour of individual economic agents in
making economic decisions
b. develops and uses statistical methods for estimating
economic relationships
c. deals with the performance, structure, behaviour, and
decision-making of an economy as a whole
d. applies mathematical methods to represent economic
theories and solve economic problems
Test

2. The term ‘u’ in an econometric model is usually referred to as

the _____.

a. error term

b. parameter

c. hypothesis

d. dependent variable
Test

3. The parameters of an econometric model _____.

a. include all unobserved factors affecting the variable

being studied

b. describe the strength of the relationship between the

variable under study and the factors affecting it

c. refer to the explanatory variables included in the model


d. refer to the predictions that can be made using the

model
Test

4. _____ has a causal effect on _____.

a. Income; unemployment

b. Height; health

c. Income; consumption

d. Age; wage
Test

5. If a change in variable x causes a change in variable y, variable x is called the _____.

a. dependent variable
b. explained variable
c. explanatory variable
d. response variable
Test

6. In the equation [equation not reproduced], ___ is the _____.

a. dependent variable
b. independent variable
c. slope parameter
d. intercept parameter

7. And what is the estimated value of β0?

a.–d. [answer options not reproduced]
Test

8. What does the equation ŷ = β̂0 + β̂1x denote if the regression equation is y = β0 + β1x + u?

a. The explained sum of squares
b. The total sum of squares
c. The sample regression function
d. The population regression function
Test

9. If the total sum of squares (SST) in a regression equation is 81,


and the residual sum of squares (SSR) is 25, what is the explained
sum of squares (SSE)?
a. 64
b. 56
c. 32
d. 18
Test

10. If the residual sum of squares (SSR) in a regression analysis is


66 and the total sum of squares (SST) is equal to 90, what is the
value of the coefficient of determination?
a. 0.73
b. 0.55
c. 0.27
d. 1.2
Test

11. Which of the following is true of R²?

a. R² is also called the standard error of regression.
b. A low R² indicates that the Ordinary Least Squares line fits the data well.
c. R² usually decreases with an increase in the number of independent variables in a regression.
d. R² shows what percentage of the total variation in the dependent variable, Y, is explained by the explanatory variables.
Test

12. The value of R² always _____.

a. lies below 0
b. lies above 1
c. lies between 0 and 1
d. lies between 1 and 1.5
Today's agenda

Interpretation of OLS

The expected value of the OLS estimators

Efficiency of OLS: the Gauss-Markov Theorem

Discussion of the normality assumption


Reading List

Wooldridge, J., Introductory Econometrics: A Modern Approach, EMEA edition, 2014, South-Western, chapters 3, 4 & 7 (HB129WOO2014)

Hill, Griffiths and Lim, Principles of Econometrics, 4th ed., 2011, Wiley, chapters 5 & 6 (HB139HIL)


Goodness-of-Fit

“How well does the explanatory variable explain the dependent variable?”
Measures of Variation

Total sum of squares, SST = Σi (yi − ȳ)², represents the total variation in the dependent variable.
Explained sum of squares, SSE = Σi (ŷi − ȳ)², represents the variation explained by the regression.
Residual sum of squares, SSR = Σi ûi², represents the variation not explained by the regression.
R-squared

Decomposition of total variation:

SST = SSE + SSR (total variation = explained part + unexplained part)

Goodness-of-fit measure (R-squared):

R² = SSE/SST = 1 − SSR/SST

R-squared measures the fraction of the total variation that is explained by the regression.
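
A minimal numerical sketch of this decomposition (simulated data, not the lecture's examples; Python with numpy assumed):

```python
# SST = SSE + SSR, and R-squared is the explained share of total variation.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)            # linear model plus noise

b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # OLS slope
b0 = y.mean() - b1 * x.mean()                          # OLS intercept
y_hat = b0 + b1 * x
u_hat = y - y_hat

sst = np.sum((y - y.mean()) ** 2)        # total variation
sse = np.sum((y_hat - y.mean()) ** 2)    # explained variation
ssr = np.sum(u_hat ** 2)                 # unexplained variation

print(np.isclose(sst, sse + ssr))        # True
print(sse / sst, 1.0 - ssr / sst)        # both equal R-squared
```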
R-squared examples

CEO salary and return on equity: the regression explains only 1.3% of the total variation in salaries.

Voting outcomes and campaign expenditures: the regression explains 85.6% of the total variation in election outcomes.

Caution: a high R-squared does not necessarily mean that the regression has a causal interpretation!
Incorporating nonlinearities: the semi-logarithmic form

Regression of log wages on years of education:

log(wage) = β0 + β1 educ + u

The natural logarithm of the wage is the dependent variable. This changes the interpretation of the regression coefficient: 100·β1 is (approximately) the percentage change of the wage if years of education are increased by one year.
Fitted regression

The wage increases by 8.3% for every additional year of education (the return to another year of education). For example: the growth rate of the wage is 8.3% per year of education.
Incorporating nonlinearities: the log-log form

CEO salary and firm sales:

log(salary) = β0 + β1 log(sales) + u

Both CEO salary and the firm's sales enter in natural logarithms. This changes the interpretation of the regression coefficient: β1 is the percentage change of salary if sales increase by 1%. Logarithmic changes are always percentage changes.
Fitted regression

CEO salary and firm sales: in the fitted regression, +1% sales is associated with +0.257% salary.

The log-log form postulates a constant elasticity model, whereas the semi-log form assumes a semi-elasticity model.
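
A hedged sketch of both functional forms on simulated data, with the slopes set near the lecture's headline numbers (0.083 and 0.257) so the printed estimates can be read the same way; statsmodels is assumed:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Semi-log form: log(wage) = b0 + b1*educ + u  ->  roughly 8.3% per year of education
educ = rng.integers(8, 21, size=n).astype(float)
log_wage = 0.5 + 0.083 * educ + rng.normal(scale=0.4, size=n)
semilog = sm.OLS(log_wage, sm.add_constant(educ)).fit()
print(semilog.params[1])     # close to 0.083

# Log-log form: log(salary) = b0 + b1*log(sales) + u  ->  constant elasticity
log_sales = rng.normal(loc=9.0, scale=1.5, size=n)
log_salary = 4.0 + 0.257 * log_sales + rng.normal(scale=0.5, size=n)
loglog = sm.OLS(log_salary, sm.add_constant(log_sales)).fit()
print(loglog.params[1])      # close to 0.257: +1% sales -> +0.257% salary
```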
Definition of the multiple linear regression model

“Explains variable y in terms of variables x1, x2, …, xk.”

y = β0 + β1x1 + β2x2 + … + βkxk + u

β0 is the intercept and β1, …, βk are the slope parameters. y is the dependent variable (explained variable, response variable, …); x1, …, xk are the independent variables (explanatory variables, regressors, …); u is the error term (disturbance, unobservables, …).
Motivation for multiple regression

Motivation:

Incorporate more explanatory factors into the model.
Explicitly hold fixed other factors that would otherwise end up in the error term.
Allow for more flexible functional forms.

Example: Wage equation

wage = β0 + β1 educ + β2 exper + u

Here wage is the hourly wage, educ is years of education, exper is years of labor market experience, and u captures all other factors. Now β1 measures the effect of education explicitly holding experience fixed.
Example: Average test scores and per-student spending

avgscore = β0 + β1 expend + β2 avginc + u

Here avgscore is the average standardized test score of a school, expend is per-student spending at this school, avginc is the average family income of students at this school, and u captures other factors.

Per-student spending is likely to be correlated with average family income at a given high school because of school financing.
Omitting average family income from the regression would lead to a biased estimate of the effect of spending on average test scores.
In a simple regression model, the effect of per-student spending would partly include the effect of family income on test scores.
Example: Family income and family consumption

cons = β0 + β1 inc + β2 inc² + u

Here cons is family consumption, inc is family income, inc² is family income squared, and u captures other factors.

The model has two explanatory variables: income and income squared. Consumption is explained as a quadratic function of income. One has to be very careful when interpreting the coefficients: by how much does consumption increase if income is increased by one unit? The answer depends on how much income is already there, since

Δcons/Δinc ≈ β1 + 2β2 inc
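
A sketch of this point with simulated data and hypothetical coefficient values: the estimated marginal effect β̂1 + 2β̂2·inc changes with the income level:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 500
inc = rng.uniform(10, 100, size=n)
cons = 5 + 0.8 * inc - 0.002 * inc**2 + rng.normal(scale=2.0, size=n)

X = sm.add_constant(np.column_stack([inc, inc**2]))   # income and income squared
res = sm.OLS(cons, X).fit()
b1, b2 = res.params[1], res.params[2]

for level in (20, 50, 80):
    # estimated effect of one more unit of income at this income level
    print(level, b1 + 2 * b2 * level)
```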
Example: CEO salary, sales, and CEO tenure

log(salary) = β0 + β1 log(sales) + β2 ceoten + β3 ceoten² + u

i.e., the log of CEO salary is explained by the log of sales and a quadratic function of CEO tenure with the firm.

The model assumes a constant elasticity relationship between CEO salary and the sales of his or her firm, and a quadratic relationship between CEO salary and his or her tenure with the firm.
Meaning of “linear” regression

The model has to be linear in the parameters, not in the variables. A quadratic term such as income squared therefore keeps the model linear, whereas a specification like y = 1/(β0 + β1x) + u does not.
OLS estimation of the multiple regression model

Random sample: {(xi1, xi2, …, xik, yi): i = 1, …, n}

Regression residuals: ûi = yi − β̂0 − β̂1xi1 − … − β̂kxik

Minimize the sum of squared residuals: min Σi ûi² with respect to β̂0, β̂1, …, β̂k

The minimization will be carried out by computer.
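
A minimal sketch of what the computer actually solves (simulated data): OLS as the solution of the normal equations X'X b = X'y, which is what minimizing the sum of squared residuals boils down to:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant + 2 regressors
beta_true = np.array([1.0, 0.5, -2.0])
y = X @ beta_true + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares solution
u_hat = y - X @ beta_hat                          # residuals at the minimum

print(beta_hat)          # close to beta_true
print(u_hat @ u_hat)     # the minimized sum of squared residuals
```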
Interpretation of the multiple regression model

Δŷ = β̂j Δxj (holding all other independent variables and the error term constant)

The coefficient β̂j answers: by how much does the dependent variable change if the j-th independent variable is increased by one unit, holding all other independent variables and the error term constant? The multiple linear regression model manages to hold the values of the other explanatory variables fixed even if, in reality, they are correlated with the explanatory variable under consideration.

“Ceteris paribus” interpretation

It still has to be assumed that unobserved factors do not change if the explanatory variables are changed.
Example: Determinants of college GPA

colGPA is the grade point average at college, hsGPA is the high school grade point average, and ACT is the achievement test score.

Interpretation:

Holding ACT fixed, another point on the high school grade point average is associated with another .453 points of college grade point average.
Or: if we compare two students with the same ACT, but the hsGPA of student A is one point higher, we predict student A to have a colGPA that is .453 points higher than that of student B.
Holding the high school grade point average fixed, another 10 points on the ACT are associated with less than one point on the college GPA.
Properties of OLS on any sample of data

Fitted or predicted values: ŷi = β̂0 + β̂1xi1 + … + β̂kxik
Residuals: ûi = yi − ŷi

Algebraic properties of OLS regression:

(1) Deviations from the regression line sum up to zero: Σi ûi = 0
(2) The covariance between the residuals and each regressor is zero: Σi xij ûi = 0
(3) The sample averages of y and of the regressors lie on the regression line: ȳ = β̂0 + β̂1x̄1 + … + β̂kx̄k
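
A small check of these three properties on simulated data (statsmodels assumed); they hold mechanically, as a consequence of the OLS first-order conditions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 250
X = sm.add_constant(rng.normal(size=(n, 2)))
y = X @ np.array([1.0, 0.5, -1.5]) + rng.normal(size=n)

res = sm.OLS(y, X).fit()
u_hat = res.resid

print(np.isclose(u_hat.sum(), 0.0))              # (1) residuals sum to zero
print(np.allclose(X.T @ u_hat, 0.0))             # (2) zero covariance with regressors
print(np.isclose(X.mean(axis=0) @ res.params,    # (3) (x-bar, y-bar) lies on the line
                 y.mean()))
```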
Goodness-of-Fit

Decomposition of total variation: SST = SSE + SSR

R-squared: R² = SSE/SST = 1 − SSR/SST

Notice that R-squared never decreases (and usually increases) if another explanatory variable is added to the regression.

Alternative expression for R-squared:

R² = [Corr(yi, ŷi)]²

R-squared is equal to the squared correlation coefficient between the actual and the predicted values of the dependent variable.
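
A quick check of this alternative expression on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
X = sm.add_constant(rng.normal(size=(150, 2)))
y = X @ np.array([1.0, 0.7, -0.3]) + rng.normal(size=150)
res = sm.OLS(y, X).fit()

corr = np.corrcoef(y, res.fittedvalues)[0, 1]   # actual vs. fitted values
print(res.rsquared, corr ** 2)                  # the two numbers coincide
```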
Example: Explaining arrest records

The dependent variable is the number of times a man was arrested during 1986; the explanatory variables are the proportion of prior arrests that led to conviction, months spent in prison during 1986, and quarters employed during 1986.

Interpretation:

If the proportion of prior arrests that led to conviction increases by 0.5, the predicted fall in arrests is 7.5 arrests per 100 men.
If the months in prison increase from 0 to 12, the predicted fall in arrests is 0.408 arrests for a particular man.
If the quarters employed increase by 1, the predicted fall in arrests is 10.4 arrests per 100 men.
Example: Explaining arrest records (cont.)

An additional explanatory variable is added: the average sentence in prior convictions.

Interpretation:

R-squared increases only slightly.
The average prior sentence increases the number of arrests (?).
There is limited additional explanatory power, as R-squared increases by little.

General remark on R-squared

Even if R-squared is small (as in the given example), the regression may still provide good estimates of ceteris paribus effects.
Standard assumptions for the multiple regression model

Assumption MLR.1 (Linear in parameters)

y = β0 + β1x1 + … + βkxk + u

In the population, the relationship between y and the explanatory variables is linear.

Assumption MLR.2 (Random sampling)

{(xi1, …, xik, yi): i = 1, …, n}

The data are a random sample drawn from the population. Each data point therefore follows the population equation:

yi = β0 + β1xi1 + … + βkxik + ui
Standard assumptions for the multiple regression model (cont.)

Assumption MLR.3 (No perfect collinearity)

“In the sample (and therefore in the population), none of the independent variables is constant and there are no exact linear relationships among the independent variables.”

Remarks on MLR.3:

The assumption only rules out perfect collinearity/correlation between explanatory variables; imperfect correlation is allowed.
If an explanatory variable is a perfect linear combination of other explanatory variables, it is superfluous and may be eliminated.
Constant variables are also ruled out (they are collinear with the intercept).
Examples of perfect collinearity

Example: small sample. In a small sample, avginc may accidentally be an exact multiple of expend; it will then not be possible to disentangle their separate effects because there is exact covariation.

Example: relationships between regressors. Either shareA or shareB will have to be dropped from the regression because there is an exact linear relationship between them: shareA + shareB = 1.
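
A sketch of the shareA + shareB = 1 case with simulated data: the design matrix loses a rank, so X'X is singular and the separate coefficients are not identified:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
shareA = rng.uniform(size=n)
shareB = 1.0 - shareA                      # exact linear relationship
X = np.column_stack([np.ones(n), shareA, shareB])

print(np.linalg.matrix_rank(X))            # 2 rather than 3: perfect collinearity
# Dropping shareA, shareB, or the constant restores full column rank.
```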
Standard assumptions for the multiple regression model (cont.)

Assumption MLR.4 (Zero conditional mean)

E(u | x1, …, xk) = 0

The values of the explanatory variables must contain no information about the mean of the unobserved factors.

In a multiple regression model, the zero conditional mean assumption is much more likely to hold, because fewer things end up in the error.

Example: Average test scores. If avginc were not included in the regression, it would end up in the error term; it would then be hard to defend that expend is uncorrelated with the error.
Zero conditional mean

Discussion of the zero conditional mean assumption:

Explanatory variables that are correlated with the error term are called endogenous; endogeneity is a violation of assumption MLR.4.
Explanatory variables that are uncorrelated with the error term are called exogenous; MLR.4 holds if all explanatory variables are exogenous.
Exogeneity is the key assumption for a causal interpretation of the regression, and for unbiasedness of the OLS estimators.

Theorem 3.1 (Unbiasedness of OLS)

Under assumptions MLR.1–MLR.4: E(β̂j) = βj, j = 0, 1, …, k

Unbiasedness is an average property in repeated samples; in a given sample, the estimates may still be far away from the true values.
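
A Monte Carlo sketch of Theorem 3.1 on a simulated population satisfying MLR.1–MLR.4: across repeated samples the estimates average out to the true parameters, even though any single estimate can be far off:

```python
import numpy as np

rng = np.random.default_rng(5)
beta_true = np.array([1.0, 2.0])
estimates = []
for _ in range(2000):                       # repeated samples of size 100
    x = rng.normal(size=100)
    u = rng.normal(size=100)                # E(u|x) = 0 by construction
    y = beta_true[0] + beta_true[1] * x + u
    X = np.column_stack([np.ones(100), x])
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0])

print(np.mean(estimates, axis=0))           # close to (1.0, 2.0): unbiasedness
print(np.std(estimates, axis=0))            # individual estimates still vary
```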
Including irrelevant variables/Omitted variable bias

Including irrelevant variables in a regression model:

This causes no problem for unbiasedness, because the true coefficient of the irrelevant variable is zero in the population. However, including irrelevant variables may increase the sampling variance.

Omitting relevant variables: the simple case

True model (contains x1 and x2): y = β0 + β1x1 + β2x2 + u

Estimated model (x2 is omitted): ỹ = β̃0 + β̃1x1
Omitted variable bias

If x1 and x2 are correlated, assume a linear regression relationship between them:

x2 = δ0 + δ1x1 + v

If y is only regressed on x1, then β̃0 = β̂0 + β̂2δ̂0 will be the estimated intercept and β̃1 = β̂1 + β̂2δ̂1 will be the estimated slope on x1.

Conclusion: all estimated coefficients will be biased.
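
A simulation sketch of this relationship (hypothetical coefficients): the short-regression slope equals β̂1 + β̂2·δ̂1 exactly, sample by sample:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.5 + 0.8 * x1 + rng.normal(size=n)    # x1 and x2 are correlated
y = 1.0 + 1.5 * x1 + 2.0 * x2 + rng.normal(size=n)

long_reg = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
short_reg = sm.OLS(y, sm.add_constant(x1)).fit()
aux = sm.OLS(x2, sm.add_constant(x1)).fit()        # x2 = delta0 + delta1*x1 + v

b1, b2 = long_reg.params[1], long_reg.params[2]
d1 = aux.params[1]
print(short_reg.params[1], b1 + b2 * d1)           # identical: the bias relation
```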
Omitted variable bias

Example: Omitting ability in a wage equation

Both β2 (the effect of ability on wages) and δ1 (the slope of ability on education) will be positive, so the return to education will be overestimated: E(β̃1) = β1 + β2δ1 > β1. It will look as if people with many years of education earn very high wages, but this is partly due to the fact that people with more education are also more able on average.

When is there no omitted variable bias? If the omitted variable is irrelevant or uncorrelated with the included explanatory variable.
Omitted variable bias: more general cases

True model (contains x1, x2, and x3): y = β0 + β1x1 + β2x2 + β3x3 + u

Estimated model (x3 is omitted): ỹ = β̃0 + β̃1x1 + β̃2x2

No general statements are possible about the direction of the bias; the analysis is as in the simple case if one regressor is uncorrelated with the others.

Example: Omitting ability in a wage equation

If exper is approximately uncorrelated with educ and abil, then the direction of the omitted variable bias can be analyzed as in the simple two-variable case.
Standard assumptions for the multiple regression model (cont.)

Assumption MLR.5 (Homoskedasticity)

Var(u | x1, …, xk) = σ²

The values of the explanatory variables must contain no information about the variance of the unobserved factors.

Example: Wage equation. The assumption Var(u | educ, exper) = σ² may be hard to justify in many cases.

Shorthand notation: Var(u | x) = σ², where all explanatory variables are collected in the random vector x = (x1, …, xk).
Graphical illustration of homoskedasticity

[Figure omitted] The variability of the unobserved influences does not depend on the value of the explanatory variable.
An example of heteroskedasticity: wage and education

[Figure omitted] The variance of the unobserved determinants of wages increases with the level of education.
Theorem 3.2 (Sampling variances of the OLS slope estimators)

Under assumptions MLR.1–MLR.5:

Var(β̂j) = σ² / [SSTj (1 − Rj²)], j = 1, …, k

where σ² is the variance of the error term, SSTj = Σi (xij − x̄j)² is the total sample variation in explanatory variable xj, and Rj² is the R-squared from a regression of explanatory variable xj on all other independent variables (including a constant).
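
A numerical check of this formula on simulated data; res.scale is the σ̂² estimate from Theorem 3.3 below, plugged in for the unknown error variance:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)                 # imperfectly correlated
X = sm.add_constant(np.column_stack([x1, x2]))
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)
res = sm.OLS(y, X).fit()

sst_1 = np.sum((x1 - x1.mean()) ** 2)              # total sample variation in x1
r2_1 = sm.OLS(x1, sm.add_constant(x2)).fit().rsquared  # x1 on the other regressors
var_1 = res.scale / (sst_1 * (1.0 - r2_1))         # the Theorem 3.2 formula

print(var_1, res.bse[1] ** 2)                      # the two numbers agree
```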
Components of OLS variances

The error variance σ²:

A high error variance increases the sampling variance because there is more “noise” in the equation.
A large error variance therefore tends to make estimates imprecise.
The error variance does not decrease with the sample size.

The total sample variation in the explanatory variable, SSTj:

More sample variation leads to more precise estimates.
Total sample variation automatically increases with the sample size.
Increasing the sample size is thus a way to get more precise estimates.
Components of OLS variances (cont.)

Linear relationships among the independent variables:

Regress xj on all other independent variables (including a constant). The R-squared of this regression, Rj², will be higher the better xj can be linearly explained by the other independent variables.

The sampling variance of β̂j will be higher the better explanatory variable xj can be linearly explained by the other independent variables. The problem of almost linearly dependent explanatory variables is called multicollinearity (i.e., Rj² is close to 1 for some j).
An example of multicollinearity

The average standardized test score of a school is regressed on expenditures for teachers, expenditures for instructional materials, and other expenditures.

The different expenditure categories will be strongly correlated, because if a school has a lot of resources it will spend a lot on everything.
It will be hard to estimate the differential effects of different expenditure categories, because all expenditures are either high or low. For precise estimates of the differential effects, one would need information about situations where expenditure categories change differentially.
As a consequence, the sampling variance of the estimated effects will be large.
Discussion of the multicollinearity problem

In the above example, it would probably be better to lump all expenditure categories together, because their effects cannot be disentangled.

In other cases, dropping some independent variables may reduce multicollinearity (but this may lead to omitted variable bias).
Discussion of the multicollinearity problem (cont.)

Only the sampling variance of the variables involved in multicollinearity will be inflated; the estimates of other effects may be very precise.
Note that multicollinearity is not a violation of MLR.3 in the strict sense.
Multicollinearity may be detected through “variance inflation factors”:

VIFj = 1 / (1 − Rj²)

As an (arbitrary) rule of thumb, the variance inflation factor should not be larger than 10.
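
A sketch of the VIF computation with the helper statsmodels provides; the three expenditure variables are simulated to be strongly collinear (names are illustrative, not a real dataset):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(8)
n = 300
resources = rng.normal(size=n)                     # common resource level
teacher_exp = resources + 0.1 * rng.normal(size=n)
material_exp = resources + 0.1 * rng.normal(size=n)
other_exp = resources + 0.1 * rng.normal(size=n)

X = sm.add_constant(np.column_stack([teacher_exp, material_exp, other_exp]))
for j in range(1, X.shape[1]):                     # skip the constant
    print(variance_inflation_factor(X, j))        # far above the rule of thumb of 10
```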
Variances in misspecified models

The choice of whether to include a particular variable in a regression can be made by analyzing the tradeoff between bias and variance.

True population model: y = β0 + β1x1 + β2x2 + u

Estimated model 1: ŷ = β̂0 + β̂1x1 + β̂2x2

Estimated model 2: ỹ = β̃0 + β̃1x1

It might be the case that the likely omitted variable bias in the misspecified model 2 is overcompensated by its smaller variance.
Variances in misspecified models (cont.)

Conditional on x1 and x2, the variance of the slope estimator in model 2 is always smaller than that in model 1.

Case 1: β2 = 0 (x2 is an irrelevant regressor). Then model 2 has no bias and the smaller variance. Conclusion: do not include irrelevant regressors.

Case 2: β2 ≠ 0. Then model 2 is biased, and one must trade off bias and variance. Caution: the bias will not vanish even in large samples.
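
A Monte Carlo sketch of this tradeoff: the regressors are drawn once and held fixed, so the comparison is conditional on x1 and x2 as in the statement above:

```python
import numpy as np

rng = np.random.default_rng(11)
beta1, beta2 = 1.0, 0.8                        # beta2 != 0: case 2
x1 = rng.normal(size=100)
x2 = 0.5 * x1 + rng.normal(size=100)           # correlated with x1
X_long = np.column_stack([np.ones(100), x1, x2])
X_short = np.column_stack([np.ones(100), x1])

long_est, short_est = [], []
for _ in range(5000):                          # redraw only the error term
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=100)
    long_est.append(np.linalg.lstsq(X_long, y, rcond=None)[0][1])
    short_est.append(np.linalg.lstsq(X_short, y, rcond=None)[0][1])

print(np.mean(long_est), np.var(long_est))     # centered on beta1, larger variance
print(np.mean(short_est), np.var(short_est))   # biased, but smaller variance
```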
Estimating the error variance

An unbiased estimate of the error variance is obtained by dividing the sum of squared residuals by the number of observations minus the number of estimated regression coefficients. That difference, n − (k + 1), is also called the degrees of freedom. The n estimated squared residuals in the sum are not completely independent, but related through the k + 1 equations that define the first-order conditions of the minimization problem.

Theorem 3.3 (Unbiased estimator of the error variance)

σ̂² = SSR / (n − k − 1); under assumptions MLR.1–MLR.5, E(σ̂²) = σ².


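A small check of Theorem 3.3 against what statsmodels reports (simulated data; n = 200 observations, k = 3 slope parameters):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n, k = 200, 3
X = sm.add_constant(rng.normal(size=(n, k)))
y = X @ np.array([1.0, 0.5, -1.0, 2.0]) + rng.normal(size=n)
res = sm.OLS(y, X).fit()

sigma2_hat = res.ssr / (n - k - 1)       # SSR divided by the degrees of freedom
print(sigma2_hat, res.mse_resid)         # identical numbers
print(res.df_resid == n - k - 1)         # True
```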
Estimation of the sampling variances of the OLS estimators

The true sampling variance of the estimated β̂j is

Var(β̂j) = σ² / [SSTj (1 − Rj²)]

Plugging in σ̂² for the unknown σ² gives the estimated sampling variance

Var̂(β̂j) = σ̂² / [SSTj (1 − Rj²)]

Note that these formulas are only valid under assumptions MLR.1–MLR.5 (in particular, there has to be homoskedasticity).
Efficiency of OLS: the Gauss-Markov Theorem

Under assumptions MLR.1–MLR.5, OLS is unbiased. However, under these assumptions there may be many other estimators that are unbiased. Which one is the unbiased estimator with the smallest variance? In order to answer this question, one usually limits oneself to linear estimators, i.e., estimators linear in the dependent variable:

β̃j = Σi wij yi

where each weight wij may be an arbitrary function of the sample values of all the explanatory variables; the OLS estimator can be shown to be of this form.

Theorem 3.4 (Gauss-Markov Theorem): Under assumptions MLR.1–MLR.5, the OLS estimators are the best linear unbiased estimators (BLUE) of the regression coefficients.
Sampling distribution of the OLS estimators

Statistical inference in the regression model:

Hypothesis tests about population parameters
Construction of confidence intervals

Sampling distributions of the OLS estimators:

The OLS estimators are random variables.
We already know their expected values and their variances.
However, for hypothesis tests we need to know their distribution.
In order to derive their distribution we need additional assumptions.
Assumption about the distribution of the errors: normal distribution.
