Ref. CH 3 Gujarati Book
Problem of Estimation
• Problem of Estimation:
– Ordinary Least Squares Method
– Method of Moment Estimation Procedure
– Maximum Likelihood Estimation Procedure
• Gauss-Markov Theorem
• Coefficient of Determination
Simple Linear Regression Model
Finds a linear relationship between:
- one independent variable X and
- one dependent variable Y
First, prepare a scatter plot to verify that the data show a linear trend.
Use alternative approaches if the relationship is not linear.
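As a quick illustration, here is a minimal Python sketch (using matplotlib; the X and Y values are the sample data tabulated below) of the scatter-plot check. The variable names are illustrative only.

```python
import matplotlib.pyplot as plt

# Sample data from the table later in this section
X = [80, 100, 120, 140, 160, 180, 200, 220, 240, 260]
Y = [70, 65, 90, 95, 110, 115, 120, 140, 155, 150]

# Scatter plot: if the points roughly follow a straight line,
# a simple linear regression model is a reasonable starting point.
plt.scatter(X, Y)
plt.xlabel("X (independent variable)")
plt.ylabel("Y (dependent variable)")
plt.title("Scatter plot to check for a linear trend")
plt.show()
```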
Simple Linear Regression Model: Estimation
Model: $Y_i = \beta_1 + \beta_2 X_i + u_i$ (PRF)
where
Y = dependent variable
X = independent variable
β1 = intercept/constant term
β2 = slope coefficient term
ui = error or random factor
Yi Xi
70 80
65 100
90 120
95 140
110 160
115 180
120 200
140 220
155 240
150 260
Simple Linear Regression Model: Estimation
Model (sample counterparts):
$Y_i = \hat\beta_1 + \hat\beta_2 X_i + \hat u_i$  --- SRF
$\hat Y_i = \hat\beta_1 + \hat\beta_2 X_i$  --- fitted line
$\hat u_i = Y_i - \hat\beta_1 - \hat\beta_2 X_i$
or
$\hat u_i = Y_i - \hat Y_i$
Yi     Xi     ûi        Ŷi
70     80     4.82      65.18
65     100    -10.36    75.36
90     120    4.45      85.55
95     140    -0.73     95.73
110    160    4.09      105.91
115    180    -1.09     116.09
120    200    -6.27     126.27
140    220    3.55      136.45
155    240    8.36      146.64
150    260    -6.82     156.82
Simple Linear Regression Model: Estimation
I. Using the Method of Ordinary Least Squares (OLS)
We estimate the intercept and slope by minimizing the vertical distances between the data points and the estimated sample regression function; that is, we minimize the sum of squared residuals:

$\min_{\hat\beta_1,\hat\beta_2} \sum_{i=1}^{n} \hat u_i^2 = \min_{\hat\beta_1,\hat\beta_2} \sum_{i=1}^{n} \left(Y_i - \hat\beta_1 - \hat\beta_2 X_i\right)^2 \equiv \min\, S(\hat\beta_1, \hat\beta_2)$

Setting the partial derivatives of $S(\hat\beta_1, \hat\beta_2)$ to zero gives the two normal equations:
$\sum Y_i = n\hat\beta_1 + \hat\beta_2 \sum X_i$
$\sum Y_i X_i = \hat\beta_1 \sum X_i + \hat\beta_2 \sum X_i^2$
Estimation of Slope and Intercept
Solving these two normal equations together gives the OLS coefficient estimates.

Slope:
$\hat\beta_2 = \dfrac{\sum_{i=1}^{n}(X_i - \bar X)(Y_i - \bar Y)}{\sum_{i=1}^{n}(X_i - \bar X)^2} = \dfrac{\sum x_i y_i}{\sum x_i^2}$
where $x_i = X_i - \bar X$ and $y_i = Y_i - \bar Y$ are deviations from the sample means.

Intercept:
$\hat\beta_1 = \bar Y - \hat\beta_2 \bar X$
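To make the formulas concrete, here is a minimal Python/numpy sketch applying them to the sample data tabulated above; it reproduces the fitted line reported later ($\hat Y \approx 24.455 + 0.509X$). The variable names are illustrative only.

```python
import numpy as np

# Sample data from the table above
X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)

# Deviations from the sample means: x_i = X_i - Xbar, y_i = Y_i - Ybar
x = X - X.mean()
y = Y - Y.mean()

# Slope: beta2_hat = sum(x_i * y_i) / sum(x_i^2)
beta2_hat = np.sum(x * y) / np.sum(x ** 2)

# Intercept: beta1_hat = Ybar - beta2_hat * Xbar
beta1_hat = Y.mean() - beta2_hat * X.mean()

print(beta1_hat, beta2_hat)   # approximately 24.455 and 0.509
```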
• Another way of establishing the OLS formulas is through the Method of Moments (MoM) approach, developed by Pearson (1894).
• The basic idea of this method is to equate certain sample characteristics, such as
the mean, to the corresponding population expected values.
• Method of moments estimation is based solely on the law of large numbers
The Method of Moments (MM) and GMM
1. Unconditional moment condition: $E(X - \mu) = 0$, i.e., the population mean satisfies $E(X) = \mu$.
2. Conditional moment condition: $E(u \mid X) = 0$.
Simple Linear Regression Model: Estimation (MoM)
• To derive the OLS estimates we need to realize that our main assumption of E(u|x) = E(u) = 0 also implies that Cov(x,u) = E(xu) = 0. This gives two population moment conditions:
• E(y – β1 – β2x) = 0
• E[x(y – β1 – β2x)] = 0
• We want to choose values of the parameters that will ensure that the
sample versions of our moment restrictions are true
• The sample versions of these moment conditions are:
$\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat\beta_1 - \hat\beta_2 X_i\right) = 0$
$\frac{1}{n}\sum_{i=1}^{n} X_i\left(Y_i - \hat\beta_1 - \hat\beta_2 X_i\right) = 0$
• These are exactly the OLS normal equations, so the Method of Moments estimates coincide with the OLS estimates.
Some further properties of the OLS estimates and the fitted SRF:
– the residuals have zero mean: $\sum \hat u_i = 0$
– the residuals are uncorrelated with the regressor: $\sum \hat u_i X_i = 0$
– $\operatorname{cov}(\hat\beta_1, \hat\beta_2) = -\bar X \operatorname{var}(\hat\beta_2)$
– in deviation form the SRF is $\hat y_i = \hat\beta_2 x_i$
– $R^2 = r^2$, the square of the sample correlation coefficient between X and Y
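Continuing the numpy sketch above (it reuses X, Y, beta1_hat and beta2_hat from that block), one can check numerically that the OLS residuals satisfy the two sample moment conditions:

```python
# Residuals from the fitted line (reusing X, Y, beta1_hat, beta2_hat from the sketch above)
u_hat = Y - (beta1_hat + beta2_hat * X)

# Both sample moment conditions hold up to floating-point error
print(np.sum(u_hat))       # ~ 0 : residuals sum to zero
print(np.sum(u_hat * X))   # ~ 0 : residuals are uncorrelated with X
```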
Assumptions of CLRM
3: Zero mean value of the disturbance $u_i$: $E(u_i \mid X_i) = 0$
Assumptions of CLRM
4: Homoscedasticity or equal
variance of ui
• Given the value of X, the
variance of, ui, the
disturbance term is the
same for all observations.
$\operatorname{var}(u_i \mid X_i) = E\left[u_i - E(u_i \mid X_i)\right]^2 = E(u_i^2) = \sigma^2$
5: No autocorrelation between the disturbances: $\operatorname{cov}(u_i, u_j \mid X_i, X_j) = 0$ for $i \ne j$.
[Figure: scatter plots of $\hat u_t$ against $\hat u_{t-1}$, illustrating patterns of positive, negative, and zero autocorrelation]
6: Zero covariance between $u_i$ and $X_i$:
$\operatorname{cov}(u_i, X_i) = E\left[\left(u_i - E(u_i \mid X_i)\right)\left(X_i - E(X_i)\right)\right]$
$= E\left[u_i\left(X_i - E(X_i)\right)\right]$  (uses Assumption 3)
$= E(u_i X_i) - E(u_i)E(X_i)$  (since $E(X_i)$ is nonstochastic)
$= E(u_i X_i)$  (since $E(u_i) = 0$)
$= 0$  (by assumption)
Assumptions of CLRM
7: The number of observations (n) must be greater than the number of parameters to be estimated (k); a sample that is too small relative to k suffers from micronumerosity.
8: Variability in X values
• Technically, Var(X) must be a finite positive number.
Sample Regression Function (SRF)
PRF (population) values used in this illustration: β1 = 17.00, β2 = 0.60, σ = 11.32, σ² = 128.42
Estimated model from the sample data above: $\hat Y_i = 24.455 + 0.509\, X_i$
Precision or S.E. of Least Squares Estimators
The greater the variation in X, the smaller the variance of $\hat\beta_2$ and the more precise the estimate of the slope.
Variance of $\hat\beta_2$:
$\operatorname{var}(\hat\beta_2) = \dfrac{\sigma^2}{\sum_{i=1}^{n}(X_i - \bar X)^2}$
[Figure: scatter of Y against X in which the variation of X is relatively small, so the slope estimate is very imprecise]
[Figure: scatter of Y against X in which the variation of X is large, so the slope estimate is much more precise]
Precision or S.E. of Least Squares Estimators
The error variance $\sigma^2$ is estimated by
$\hat\sigma^2 = \dfrac{\sum \hat u_i^2}{n - k}$
where $n - k$ is the number of degrees of freedom (k is the number of estimated parameters; here k = 2).
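Continuing the same sketch (reusing X, Y, x and u_hat from the earlier blocks), here is a minimal computation of $\hat\sigma^2$ and the standard errors; the variance expressions for $\hat\beta_1$ and $\hat\beta_2$ are the standard simple-regression formulas from Gujarati ch. 3, not written out explicitly in the slides above.

```python
# Degrees of freedom: n observations, k = 2 estimated parameters (intercept and slope)
n, k = len(Y), 2
sigma2_hat = np.sum(u_hat ** 2) / (n - k)          # ~ 42.2 for this sample

# Standard simple-regression variance formulas (Gujarati, ch. 3)
var_beta2 = sigma2_hat / np.sum(x ** 2)
var_beta1 = sigma2_hat * np.sum(X ** 2) / (n * np.sum(x ** 2))

se_beta1, se_beta2 = np.sqrt(var_beta1), np.sqrt(var_beta2)
print(sigma2_hat, se_beta1, se_beta2)
```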
The precision of the estimates therefore depends on:
1. The variance of $u_i$
[Figure: scatter of the sample around the true relationship, with the fitted line $\hat Y_i = \hat\beta_1 + \hat\beta_2 X_i$ and the residual, slope, and intercept labelled; the larger the variance of $u_i$, the less precise the estimates]
2. The variation in X
[Figure: variation in X in relation to the variation in u; a sample with little spread in X ("and this is your sample…") pins down the slope poorly, while greater spread in X gives a more precise slope estimate]
3. The number of observations
[Figure: the same true relationship sampled with more observations; a larger n yields more precise estimates]
Precision or S.E. of Least Squares Estimators
Covariance of the estimators:
$\operatorname{cov}(\hat\beta_1, \hat\beta_2) = -\bar X \operatorname{var}(\hat\beta_2)$

The slope estimator
$\hat\beta_2 = \dfrac{\sum_{i=1}^{N}(X_i - \bar X)(Y_i - \bar Y)}{\sum_{i=1}^{N}(X_i - \bar X)^2}$
may be rewritten as
$\hat\beta_2 = \dfrac{\sum_{i=1}^{N}(X_i - \bar X)\,Y_i}{\sum_{i=1}^{N}(X_i - \bar X)^2} = \sum_{i=1}^{N} w_i Y_i, \qquad \text{where } w_i = \dfrac{X_i - \bar X}{\sum_{i=1}^{N}(X_i - \bar X)^2}$
– $\hat\beta_2$ is therefore a linear function of the $Y_i$
– $\hat\beta_1$ can similarly be written as a linear function of the $Y_i$
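A short numerical check of this linearity property, continuing the earlier sketch (reusing x, X and Y): the weights $w_i$ depend only on the X values, and $\sum w_i Y_i$ reproduces the slope estimate.

```python
# The weights w_i depend only on the X values, not on Y
w = x / np.sum(x ** 2)

print(np.sum(w * Y))              # equals beta2_hat (~ 0.509): a linear combination of the Y_i
print(np.sum(w), np.sum(w * X))   # the weights satisfy sum(w) = 0 and sum(w * X) = 1
```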
Gauss-Markov Theorem
(1) Linear
– The estimator is a linear function of the observations $Y_i$ (as shown above, $\hat\beta_2 = \sum w_i Y_i$)
(2) Unbiased
– The expected value of the estimator equals the true underlying parameter:
$E(\hat\beta_2) = \beta_2$
(3) Efficiency (or minimum variance)
$\operatorname{var}(\hat\beta_2^{\,OLS}) \le \operatorname{var}(\tilde\beta_2)$
– Of all the linear, unbiased estimators $\tilde\beta_2$, the OLS estimator $\hat\beta_2^{\,OLS}$ has the smallest variance
Gauss-Markov Theorem
(4) Consistency: an estimator is called consistent if it converges stochastically to the true parameter value with probability approaching one as the sample size increases indefinitely. This implies
$\operatorname{plim}_{n \to \infty} \hat\beta_2 = \beta_2$
where n is the sample size, i.e.
$\Pr\{|\hat\beta_2 - \beta_2| < \varepsilon\} \to 1$ as $n \to \infty$, for any small $\varepsilon > 0$.
A sufficient condition is
(1) $E(\hat\beta_2) \to \beta_2$
(2) $\operatorname{var}(\hat\beta_2) \to 0$
as n goes to infinity.
Similarly, this generalises to $\hat\beta_1$.
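A small Monte Carlo sketch of unbiasedness and consistency. It assumes the PRF values quoted earlier (β1 = 17, β2 = 0.6, σ² = 128.42) as the "true" parameters and a hypothetical uniform design for X; across replications the average of $\hat\beta_2$ stays near 0.6 and its variance shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, sigma = 17.0, 0.6, np.sqrt(128.42)   # assumed "true" PRF values from the slides

def simulate_beta2(n, reps=5000):
    """Draw `reps` samples of size n from the assumed PRF and return the OLS slope estimates."""
    Xs = rng.uniform(80, 260, size=(reps, n))      # hypothetical design for X
    u = rng.normal(0.0, sigma, size=(reps, n))     # classical i.i.d. disturbances
    Ys = beta1 + beta2 * Xs + u
    xd = Xs - Xs.mean(axis=1, keepdims=True)       # deviations of X from its sample mean
    yd = Ys - Ys.mean(axis=1, keepdims=True)       # deviations of Y from its sample mean
    return (xd * yd).sum(axis=1) / (xd ** 2).sum(axis=1)

for n in (10, 50, 500):
    b2 = simulate_beta2(n)
    # Mean of the estimates stays near 0.6 (unbiasedness);
    # their variance shrinks toward 0 as n grows (consistency).
    print(n, round(b2.mean(), 3), round(b2.var(), 5))
```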
The Overall Goodness of Fit: R2
This measure helps to determine the goodness of fit, or how well the sample
regression line fits the data
Decomposition of Variance of Yi
$Y_i = \hat Y_i + \hat u_i$
or, in deviation form,
$y_i = \hat y_i + \hat u_i$
Squaring this equation and summing over the sample, we obtain
$\sum y_i^2 = \sum \hat y_i^2 + \sum \hat u_i^2 + 2\sum \hat y_i \hat u_i$
Since $\sum \hat y_i \hat u_i = 0$, this reduces to
$\sum y_i^2 = \sum \hat y_i^2 + \sum \hat u_i^2$
(TSS = ESS + RSS)
We define $r^2$ as
$r^2 = \dfrac{\sum (\hat Y_i - \bar Y)^2}{\sum (Y_i - \bar Y)^2} = \dfrac{ESS}{TSS}$
or, equivalently,
$r^2 = 1 - \dfrac{\sum \hat u_i^2}{\sum (Y_i - \bar Y)^2} = 1 - \dfrac{RSS}{TSS}$
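Continuing the numerical sketch (reusing X, Y, u_hat, beta1_hat and beta2_hat from the earlier blocks), the decomposition and $r^2$ for the sample data:

```python
# Fitted values and the three sums of squares
Y_hat = beta1_hat + beta2_hat * X
TSS = np.sum((Y - Y.mean()) ** 2)       # total sum of squares
ESS = np.sum((Y_hat - Y.mean()) ** 2)   # explained sum of squares
RSS = np.sum(u_hat ** 2)                # residual sum of squares

print(TSS, ESS + RSS)                   # the decomposition TSS = ESS + RSS
print(ESS / TSS, 1 - RSS / TSS)         # both expressions give r^2 (~ 0.96)
```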