
Basic Econometrics

Chapter 3:
TWO-VARIABLE
REGRESSION MODEL:
The problem of Estimation

Prof. Himayatullah, May 2004
3-1. The method of ordinary
least squares (OLS)
 Least-squares criterion:
 Minimize Σûᵢ² = Σ(Yᵢ − Ŷᵢ)²
= Σ(Yᵢ − β̂₁ − β̂₂Xᵢ)²  (3.1.2)
 Solving the normal equations
for β̂₁ and β̂₂ gives the least-squares
estimators [see (3.1.6), (3.1.7)]
 Numerical and statistical
properties of OLS are as follows:
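The closed-form estimators above can be sketched directly. A minimal illustration with invented data (the values of X and Y are made up for the example):

```python
# Closed-form OLS for the two-variable model Y = beta1 + beta2*X + u.
# Data are invented purely for illustration.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(X)

x_bar = sum(X) / n
y_bar = sum(Y) / n

# Deviations from the means: x_i = X_i - X_bar, y_i = Y_i - Y_bar
x = [Xi - x_bar for Xi in X]
y = [Yi - y_bar for Yi in Y]

# Least-squares estimators, i.e. the solutions of the normal equations:
# b2 = sum(x_i * y_i) / sum(x_i^2),  b1 = Y_bar - b2 * X_bar
b2 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
b1 = y_bar - b2 * x_bar

print(f"b2 = {b2:.2f}, b1 = {b1:.2f}")
```

For this data the slope comes out near 1.99 and the intercept near 0.05; the point is only that the two formulas are computable from observable quantities alone, as the slide notes.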
3-1. The method of ordinary least
squares (OLS)
 OLS estimators are expressed solely in terms of
observable quantities. They are point estimators
 The sample regression line passes through the
sample means of X and Y
 The mean value of the estimated Ŷ equals the
mean value of the actual Y: E(Y) = E(Ŷ)
 The mean value of the residuals ûᵢ is zero:
E(ûᵢ) = 0
 The ûᵢ are uncorrelated with the predicted Ŷᵢ and
with Xᵢ, that is: Σûᵢ Ŷᵢ = 0; Σûᵢ Xᵢ = 0
3-2. The assumptions underlying
the method of least squares
 Ass 1: Linear regression model
(linear in the parameters)
 Ass 2: X values are fixed in repeated
sampling
 Ass 3: Zero mean value of uᵢ: E(uᵢ|Xᵢ) = 0
 Ass 4: Homoscedasticity or equal
variance of uᵢ: Var(uᵢ|Xᵢ) = σ²
[vs. heteroscedasticity]
 Ass 5: No autocorrelation between the
disturbances: Cov(uᵢ, uⱼ|Xᵢ, Xⱼ) = 0
for i ≠ j [vs. correlation, + or −]
3-2. The assumptions underlying
the method of least squares
 Ass 6: Zero covariance between uᵢ and Xᵢ:
Cov(uᵢ, Xᵢ) = E(uᵢXᵢ) = 0
 Ass 7: The number of observations n must be
greater than the number of parameters
to be estimated
 Ass 8: Variability in X values: they must
not all be the same
 Ass 9: The regression model is correctly
specified
 Ass 10: There is no perfect multicollinearity
among the X variables
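Assumption 8 matters mechanically as well as statistically: if all X values are identical, the denominator Σxᵢ² in the slope formula is zero and β̂₂ is not defined. A minimal sketch (data invented for illustration):

```python
# With no variability in X, the OLS slope formula divides by zero.
X = [3.0, 3.0, 3.0, 3.0]
Y = [1.0, 2.0, 3.0, 4.0]
x_bar = sum(X) / len(X)
sum_x2 = sum((Xi - x_bar) ** 2 for Xi in X)  # = 0 here

try:
    b2 = sum((Xi - x_bar) * Yi for Xi, Yi in zip(X, Y)) / sum_x2
except ZeroDivisionError:
    b2 = None  # slope not identified without variation in X
```

The same failure appears as perfect multicollinearity (Assumption 10) in the multiple-regression case: a singular cross-product matrix instead of a literal division by zero.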
3-3. Precision or standard errors of
least-squares estimates
 In statistics the precision of an
estimate is measured by its standard
error (SE)
 var(β̂₂) = σ² / Σxᵢ²  (3.3.1)
 se(β̂₂) = √var(β̂₂)  (3.3.2)
 var(β̂₁) = σ² ΣXᵢ² / (n Σxᵢ²)  (3.3.3)
 se(β̂₁) = √var(β̂₁)  (3.3.4)
 σ̂² = Σûᵢ² / (n − 2)  (3.3.5)
 σ̂ = √σ̂² is the standard error of the
estimate
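Formulas (3.3.1) through (3.3.5) can be computed directly once the residuals are in hand. A sketch with invented data (variable names are illustrative):

```python
import math

# Standard errors of the OLS estimators, following (3.3.1)-(3.3.5).
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.0, 4.5, 5.5, 8.0, 10.0]
n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n

b2 = sum((Xi - x_bar) * (Yi - y_bar) for Xi, Yi in zip(X, Y)) \
     / sum((Xi - x_bar) ** 2 for Xi in X)
b1 = y_bar - b2 * x_bar
u_hat = [Yi - (b1 + b2 * Xi) for Xi, Yi in zip(X, Y)]

sum_x2 = sum((Xi - x_bar) ** 2 for Xi in X)  # sum of x_i^2 (deviation form)
sum_X2 = sum(Xi ** 2 for Xi in X)            # sum of X_i^2 (raw form)

sigma2_hat = sum(ui ** 2 for ui in u_hat) / (n - 2)    # (3.3.5)
var_b2 = sigma2_hat / sum_x2                           # (3.3.1)
var_b1 = sigma2_hat * sum_X2 / (n * sum_x2)            # (3.3.3)
se_b2, se_b1 = math.sqrt(var_b2), math.sqrt(var_b1)    # (3.3.2), (3.3.4)
```

Note the n − 2 divisor in σ̂²: two degrees of freedom are used up by estimating β̂₁ and β̂₂.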
3-3. Precision or standard errors of
least-squares estimates
 Features of the variances:
+ var(β̂₂) is proportional to σ² and
inversely proportional to Σxᵢ²
+ var(β̂₁) is proportional to σ² and
to ΣXᵢ², but inversely proportional to Σxᵢ²
and to the sample size n
+ cov(β̂₁, β̂₂) = −X̄ var(β̂₂) shows the
dependence between β̂₁ and β̂₂: when
X̄ > 0 they are negatively related
3-4. Properties of least-squares
estimators: The Gauss-Markov Theorem
 An OLS estimator is said to be BLUE if:
+ It is linear, that is, a linear function of a
random variable, such as the dependent
variable Y in the regression model
+ It is unbiased, that is, its average or expected
value, E(β̂₂), is equal to the true value β₂
+ It has minimum variance in the class of all
such linear unbiased estimators
An unbiased estimator with the least variance is
known as an efficient estimator
3-4. Properties of least-squares
estimators: The Gauss-Markov
Theorem
 Gauss-Markov Theorem:
Given the assumptions of the
classical linear regression model,
the least-squares estimators, in the
class of unbiased linear estimators,
have minimum variance; that is,
they are BLUE

3-5. The coefficient of determination
r²: A measure of "goodness of fit"
 Yᵢ = Ŷᵢ + ûᵢ, or
 Yᵢ − Ȳ = (Ŷᵢ − Ȳ) + ûᵢ, or
 yᵢ = ŷᵢ + ûᵢ  (note: the mean of Ŷ equals Ȳ)
Squaring both sides and summing gives
 Σyᵢ² = β̂₂² Σxᵢ² + Σûᵢ², or
 TSS = ESS + RSS

3-5. The coefficient of determination r²:
A measure of "goodness of fit"
 TSS = Σyᵢ² = total sum of squares
 ESS = Σŷᵢ² = β̂₂² Σxᵢ² =
explained sum of squares
 RSS = Σûᵢ² = residual sum of squares
Dividing TSS = ESS + RSS through by TSS:
 1 = ESS/TSS + RSS/TSS, or
 1 = r² + RSS/TSS, or r² = 1 − RSS/TSS
3-5. The coefficient of determination r²:
A measure of "goodness of fit"
 r² = ESS/TSS
is the coefficient of determination; it measures
the proportion or percentage of the total
variation in Y explained by the regression
model
 0 ≤ r² ≤ 1
 r = ±√r² is the sample correlation coefficient
 Some properties of r

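The decomposition TSS = ESS + RSS, and the two equivalent expressions for r², can be verified numerically. A sketch with invented data:

```python
# r^2 two ways: ESS/TSS and 1 - RSS/TSS; and the identity TSS = ESS + RSS.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.0, 4.5, 5.5, 8.0, 10.0]
n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n

b2 = sum((Xi - x_bar) * (Yi - y_bar) for Xi, Yi in zip(X, Y)) \
     / sum((Xi - x_bar) ** 2 for Xi in X)
b1 = y_bar - b2 * x_bar
Y_hat = [b1 + b2 * Xi for Xi in X]

TSS = sum((Yi - y_bar) ** 2 for Yi in Y)              # total sum of squares
ESS = sum((Yh - y_bar) ** 2 for Yh in Y_hat)          # explained sum of squares
RSS = sum((Yi - Yh) ** 2 for Yi, Yh in zip(Y, Y_hat)) # residual sum of squares

r2 = ESS / TSS  # identical (up to rounding) to 1 - RSS/TSS
```

The identity TSS = ESS + RSS holds only because the residuals are orthogonal to the fitted values; the cross term in the squared sum vanishes.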
3-5. The coefficient of determination r²:
A measure of "goodness of fit"
3-6. A numerical example (pages 80-83)
3-7. Illustrative examples (pages 83-85)
3-8. Coffee demand function
3-9. Monte Carlo experiments (page 85)
3-10. Summary and conclusions (pages 86-87)
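Section 3-9 refers to Monte Carlo experiments. A minimal sketch of the kind of experiment described there, checking the unbiasedness of the OLS slope (the true values β₁ = 2 and β₂ = 0.5 are invented for the simulation):

```python
import random

random.seed(0)
beta1, beta2 = 2.0, 0.5                # true parameters, chosen for the experiment
X = [float(i) for i in range(1, 11)]   # fixed in repeated sampling (Ass 2)
n, reps = len(X), 2000
x_bar = sum(X) / n
sum_x2 = sum((Xi - x_bar) ** 2 for Xi in X)

estimates = []
for _ in range(reps):
    # Draw a fresh sample of Y with standard-normal disturbances and re-estimate.
    Y = [beta1 + beta2 * Xi + random.gauss(0, 1) for Xi in X]
    y_bar = sum(Y) / n
    b2 = sum((Xi - x_bar) * (Yi - y_bar) for Xi, Yi in zip(X, Y)) / sum_x2
    estimates.append(b2)

mean_b2 = sum(estimates) / reps  # close to 0.5: the OLS slope is unbiased
```

Averaging β̂₂ over many repeated samples lands very near the true β₂, which is the unbiasedness half of the Gauss-Markov claim; the spread of the estimates around that mean illustrates var(β̂₂) = σ²/Σxᵢ².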

