
Chapter 2

The Simple Regression Model


Overview
• Definition of the simple regression model
• Deriving the ordinary least squares estimates
• Properties of OLS on any sample of data
• Units of measurement and functional form
• Expected values and variances of the OLS estimators
Definition of the simple regression model
• Objective:
  • For two variables y and x that represent some population, explain how y varies with changes in x
• Issues:
  • Other factors that affect y
  • Functional relationship between y and x
  • Ceteris paribus relationship between y and x
Definition of the simple regression model
• Simple linear regression model

    y = β0 + β1 x + u

• Dependent variable (or explained variable, regressand): y
• Independent variable (or explanatory variable, regressor): x
• Error term (or disturbance): u captures unobserved factors affecting y
• Intercept parameter (or constant term): β0
• Slope parameter: β1
  • ... captures the ceteris paribus effect of x on y: ∆y = β1 ∆x if ∆u = 0
• How can we learn about this effect given that we do not observe u?
• Example: a simple wage equation:

    wage = β0 + β1 educ + u
Definition of the simple regression model
• Zero conditional mean assumption:
  • We can learn about the ceteris paribus effect β1 from a random sample if we assume

    E(u|x) = E(u) = 0

  • With an intercept, E(u) = 0 is an innocuous normalization
  • Mean independence of u from x, E(u|x) = E(u), is the key assumption
  • In the wage example with u = abil, this requires e.g. E(abil|educ = 6) = E(abil|educ = 16)
• Population regression function
  • Under the zero conditional mean assumption,

    E(y|x) = β0 + β1 x
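As an illustration (not from the slides, using simulated data with assumed parameter values): when E(u|x) = 0 holds by construction, the average of y at each value of x traces out the population regression function β0 + β1 x.

```python
import numpy as np

# Simulated data in which E(u|x) = 0 holds by construction.
rng = np.random.default_rng(0)
b0, b1 = 2.0, 0.5                                   # assumed true parameters

x = rng.integers(0, 5, size=200_000).astype(float)  # x takes values 0..4
u = rng.normal(0.0, 1.0, size=x.size)               # u drawn independently of x
y = b0 + b1 * x + u

# Conditional sample mean of y at each value of x
cond_means = np.array([y[x == v].mean() for v in range(5)])
prf = b0 + b1 * np.arange(5)                        # E(y|x) = b0 + b1*x

# In a large sample the conditional means sit on the PRF
print(np.max(np.abs(cond_means - prf)))
```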
Deriving the ordinary least squares estimates
• We would like to estimate β0 and β1 using a random sample
  {(x_i, y_i): i = 1, …, n} from the population, with y_i = β0 + β1 x_i + u_i
Deriving the ordinary least squares estimates
• OLS estimation as a method of moments
  • The zero conditional mean assumption implies

    E(u) = 0 and E(xu) = 0

  • Using u = y − β0 − β1 x, this can be rewritten as

    E(y − β0 − β1 x) = 0 and E[x(y − β0 − β1 x)] = 0

  • The ordinary least squares (OLS) estimates β̂0 and β̂1 satisfy the sample equivalent of these two moment conditions
Deriving the ordinary least squares estimates
• Substituting β̂0 = ȳ − β̂1 x̄ into the second moment condition gives

    n⁻¹ Σ_i x_i (y_i − (ȳ − β̂1 x̄) − β̂1 x_i) = 0

• so that

    Σ_i x_i (y_i − ȳ) = β̂1 Σ_i x_i (x_i − x̄)

• Provided that Σ_i (x_i − x̄)² > 0, this gives

    β̂1 = Σ_i (x_i − x̄)(y_i − ȳ) / Σ_i (x_i − x̄)²
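The closed-form estimates derived above can be sketched in a few lines of Python (simulated data with assumed true parameters; the result is cross-checked against numpy's own least-squares fit):

```python
import numpy as np

# beta1_hat = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)
# beta0_hat = ybar - beta1_hat * xbar
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 500)   # true beta0 = 1, beta1 = 2

xbar, ybar = x.mean(), y.mean()
beta1_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
beta0_hat = ybar - beta1_hat * xbar

# Cross-check against numpy's least-squares polynomial fit (degree 1)
slope, intercept = np.polyfit(x, y, 1)
print(beta1_hat, slope)
```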
Deriving the ordinary least squares estimates
• OLS estimates as minimizers of the sum of squared residuals
• The estimates 𝛽መ0 and 𝛽መ1 minimize the sum of squared residuals
(SSR)

• The sample moment conditions

are the first order conditions for this minimization problem


• 𝑢ො 𝑖 is the residual for observation i
• 𝑦ො𝑖 is the fitted value for observation i
Deriving the ordinary least squares estimates
• OLS regression line or sample regression function

    ŷ = β̂0 + β̂1 x
Deriving the ordinary least squares estimates
• Example: CEO salary and return on equity
Deriving the ordinary least squares estimates
• Two more examples
  • Wage and education

    wage-hat = −0.90 + 0.54 educ

  • Voting outcome and campaign expenditures

Properties of OLS on any sample of data
• Example: CEO salary and return on equity
Properties of OLS on any sample of data
• Algebraic properties of OLS statistics
  • The sample moment conditions immediately imply
    • Σ_i û_i = 0: the sample average of the OLS residuals is zero
    • Σ_i x_i û_i = 0: the sample covariance between regressor and residuals is zero
    • ȳ = β̂0 + β̂1 x̄: the point (x̄, ȳ) is on the regression line
  • Moreover, defining
    • the explained sum of squares SSE = Σ_i (ŷ_i − ȳ)²
    • the total sum of squares SST = Σ_i (y_i − ȳ)²
    we have SST = SSE + SSR
Properties of OLS on any sample of data
• Goodness-of-fit
  • The R-squared (or coefficient of determination)

    R² = SSE/SST = 1 − SSR/SST

    measures the fraction of the sample variation in y explained by x
  • 0 ≤ R² ≤ 1
  • A low R-squared is common in cross-sectional economic analysis
  • A regression with a low R-squared may still estimate the ceteris paribus relationship well
• Examples
  • CEO salary and return on equity
  • Wage and education
  • Voting outcome and campaign expenditures
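The decomposition SST = SSE + SSR and the resulting R² can be checked numerically; a minimal sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 300)
y = 1.0 + 2.0 * x + rng.normal(0, 3, 300)   # assumed true model with noise

# OLS fit using the closed-form estimates
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x
u_hat = y - y_hat

SST = np.sum((y - y.mean()) ** 2)   # total sum of squares
SSE = np.sum((y_hat - y.mean()) ** 2)   # explained sum of squares
SSR = np.sum(u_hat ** 2)            # residual sum of squares

R2 = SSE / SST
print(np.isclose(SST, SSE + SSR), R2)
```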
Units of measurement and functional form
• Units of measurement
  • Consider the wage and education example
  • What happens if we rescale the wage (from hourly to daily)?
  • What happens if we rescale education (from years to months)?
• Functional form
  • We can transform y or x
  • For example, with

    log(wage) = β0 + β1 educ + u

    100·β1 (roughly) measures the percentage change in wage given one additional year of schooling
  • In a log-log model, β1 is an elasticity
  • The regression remains linear in the parameters
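The rescaling questions above can be answered numerically. In this hypothetical simulation, multiplying wage by 8 (hourly to daily, assuming an 8-hour day) multiplies both coefficients by 8, while measuring education in months divides the slope by 12 and leaves the intercept unchanged:

```python
import numpy as np

rng = np.random.default_rng(3)
educ = rng.uniform(8, 20, 400)                      # years of schooling (simulated)
wage = 1.0 + 0.5 * educ + rng.normal(0, 2, 400)     # hourly wage (simulated)

def ols(x, y):
    """Return (intercept, slope) from the closed-form OLS estimates."""
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    return y.mean() - b1 * x.mean(), b1

b0, b1 = ols(educ, wage)
b0_d, b1_d = ols(educ, 8 * wage)    # rescale y: hourly -> daily wage
b0_m, b1_m = ols(12 * educ, wage)   # rescale x: years -> months of schooling

print(b1_d / b1, b1 / b1_m)         # y-rescale scales slope by 8; x-rescale by 1/12
```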
Expected values and variances of the OLS estimators
• Statistical properties of OLS
  • Distribution of the OLS estimators β̂0 and β̂1 over repeated random samples from the population
• Assumptions
  • SLR1 (Linear in parameters): y = β0 + β1 x + u
  • SLR2 (Random sampling): We have a random sample {(x_i, y_i): i = 1, …, n} following SLR1's population model
  • SLR1 and SLR2 give, for each observation i,

    y_i = β0 + β1 x_i + u_i
Expected values and variances of the OLS estimators
• Assumptions
  • SLR1 (Linear in parameters): y = β0 + β1 x + u
  • SLR2 (Random sampling): We have a random sample {(x_i, y_i): i = 1, …, n} following SLR1's population model
  • SLR3 (Sample variation in the explanatory variable): The sample values x_1, …, x_n of the explanatory variable are not all the same
  • SLR4 (Zero conditional mean): E(u|x) = 0
• Theorem 2.1: Unbiasedness of OLS
  Under Assumptions SLR1–SLR4,

    E(β̂0) = β0 and E(β̂1) = β1
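A Monte Carlo sketch of Theorem 2.1 (simulated population with assumed parameter values): averaging β̂1 over many random samples lands very close to the true β1, illustrating unbiasedness over repeated samples.

```python
import numpy as np

rng = np.random.default_rng(4)
b0_true, b1_true = 1.0, 2.0     # assumed true population parameters
n, reps = 50, 5000              # sample size and number of replications

b1_hats = np.empty(reps)
for r in range(reps):
    # Draw a fresh random sample satisfying SLR1-SLR4
    x = rng.uniform(0, 10, n)
    y = b0_true + b1_true * x + rng.normal(0, 1, n)
    b1_hats[r] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

print(b1_hats.mean())   # close to b1_true
```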
Expected values and variances of the OLS estimators
• Example: Wage and education
  • Recall that the estimated return to an extra year of education was 0.54
  • How can we interpret this number under SLR1–SLR4?
  • Why may SLR4 fail, and how does this affect the interpretation?
  • Under SLR1–SLR4, how close can we expect the estimate 0.54 to be to β1?
• Additional assumption needed for deriving the variances of the OLS estimators
  • Denote the error variance var(u) by σ²
  • SLR5 (Homoskedasticity): var(u|x) = σ²
  • Note that SLR4 and SLR5 are equivalent to

    E(y|x) = β0 + β1 x and var(y|x) = σ²
Expected values and variances of the OLS estimators
• Theorem 2.2: Sampling variances of the OLS estimators
  Under Assumptions SLR1–SLR5 (and, implicitly, conditional on x),

    var(β̂1) = σ² / SST_x and var(β̂0) = σ² (n⁻¹ Σ_i x_i²) / SST_x,

  where SST_x = Σ_i (x_i − x̄)²
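A numerical check of the slope-variance formula, holding x fixed across replications as the conditional-on-x statement suggests (simulated data, assumed σ):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, sigma = 40, 20000, 1.5
x = rng.uniform(0, 10, n)               # regressor held fixed across replications
SSTx = np.sum((x - x.mean()) ** 2)

b1_hats = np.empty(reps)
for r in range(reps):
    # Only the errors are redrawn, so the variance is conditional on x
    y = 1.0 + 2.0 * x + rng.normal(0, sigma, n)
    b1_hats[r] = np.sum((x - x.mean()) * (y - y.mean())) / SSTx

print(b1_hats.var(), sigma**2 / SSTx)   # simulated vs. theoretical variance
```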
Expected values and variances of the OLS estimators
• Estimating the error variance
  • The error variance σ² can be estimated with

    σ̂² = SSR/(n − 2) = (n − 2)⁻¹ Σ_i û_i²

  • Substituting σ̂² for σ² in the appropriate expressions gives
    • the standard error of the regression, σ̂ = √σ̂²
    • an estimator of var(β̂1), σ̂²/SST_x
    • the standard error of β̂1, se(β̂1) = σ̂/√SST_x
• Theorem 2.3: Unbiased estimation of σ²
  Under Assumptions SLR1–SLR5,

    E(σ̂²) = σ²
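The estimators above in code (simulated data; the true σ is set to 1.5 so σ̂ can be compared against it):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.5, n)   # assumed true sigma = 1.5

# OLS fit and residuals
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
u_hat = y - b0 - b1 * x

SSTx = np.sum((x - x.mean()) ** 2)
sigma2_hat = np.sum(u_hat ** 2) / (n - 2)   # divide by n - 2, not n (Theorem 2.3)
se_b1 = np.sqrt(sigma2_hat) / np.sqrt(SSTx) # standard error of the slope

print(np.sqrt(sigma2_hat), se_b1)           # sigma_hat should be near 1.5
```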
Regression through the origin
• OLS without an intercept
  • Estimate the alternative regression line ỹ = β̃1 x
  • The OLS through-the-origin estimate β̃1 minimizes

    Σ_i (y_i − β̃1 x_i)²

  • which gives the moment (first order) condition

    Σ_i x_i (y_i − β̃1 x_i) = 0, so that β̃1 = Σ_i x_i y_i / Σ_i x_i²

  • Note that β̃1 = β̂1 if x̄ = 0 (and generally not otherwise)
  • Take care: β̃1 is biased if β0 ≠ 0
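A sketch comparing the two slope estimates on simulated data with a nonzero intercept, illustrating the bias warning above:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(1, 10, 300)
y = 3.0 + 2.0 * x + rng.normal(0, 1, 300)   # nonzero intercept (beta0 = 3)

b1_tilde = np.sum(x * y) / np.sum(x * x)    # OLS through the origin
b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# b1_hat recovers the true slope 2.0; b1_tilde absorbs part of the
# omitted intercept and is pulled away from it.
print(b1_tilde, b1_hat)
```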
