ECN 318 Lecture Notes Weeks 3-4

INTRODUCTORY ECONOMETRICS 1
ECN 318
Simple (Two-Variable) Linear Regression Model
Week 3
THE METHOD OF ORDINARY LEAST SQUARES

 (Derivation of the OLS Estimates)

 Under certain assumptions, the method of least squares has some very attractive statistical properties that have made it one of the most powerful and popular methods of regression analysis.
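
A sketch of the standard derivation: the OLS estimates b1 and b2 are chosen to minimize the residual sum of squares

∑ui² = ∑(Yi − b1 − b2Xi)²

Setting the partial derivatives with respect to b1 and b2 equal to zero yields the normal equations

∑Yi = nb1 + b2∑Xi
∑XiYi = b1∑Xi + b2∑Xi²

Solving these simultaneously, and writing xi = Xi − X̄ and yi = Yi − Ȳ for deviations from the sample means, gives

b2 = ∑xiyi / ∑xi²
b1 = Ȳ − b2X̄
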
ASSUMPTIONS UNDERLYING THE METHOD OF LEAST SQUARES

 In regression analysis our objective is not only to obtain the estimates b1 and b2 but also to draw inferences about the true β1 and β2.

 We must not only specify the functional form of the model but also make certain assumptions about the manner in which the Yi are generated.

 The classical linear regression model (CLRM), which is the cornerstone of most econometric theory, rests on the assumptions listed below.
ASSUMPTIONS UNDERLYING THE METHOD OF LEAST SQUARES CONTD.

 Assumption 1: Linear regression model. The regression model is linear in the parameters:

Yi = β1 + β2Xi + µi
ASSUMPTIONS UNDERLYING THE
METHOD OF LEAST SQUARES CONTD.
 Assumption 2: X values are fixed in
repeated sampling. Values taken by the
regressor X are considered fixed in repeated
samples. More technically, X is assumed to
be non-stochastic.
ASSUMPTIONS UNDERLYING THE
METHOD OF LEAST SQUARES CONTD.
 Assumption 3: Zero mean value of
disturbance µi. Given the value of X, the
mean, or expected, value of the random
disturbance term µi is zero. Technically, the
conditional mean value of µi is zero.
Symbolically, we have

E(µi |Xi) = 0
ASSUMPTIONS UNDERLYING THE
METHOD OF LEAST SQUARES CONTD.
 Assumption 4: Homoscedasticity or
equal variance of µi. Given the value of X,
the variance of µi is the same for all
observations. That is, the conditional
variances of µi are identical. Symbolically, we
have

var(µi | Xi) = E[µi − E(µi | Xi)]²
             = E(µi² | Xi)    because of Assumption 3
             = σ²
ASSUMPTIONS UNDERLYING THE
METHOD OF LEAST SQUARES CONTD.
 Assumption 5: No Autocorrelation (Serial Correlation) between the disturbances. Given any two X values, Xi and Xj (i ≠ j), the correlation between any two µi and µj (where i and j are two different observations) is zero. Symbolically,

cov(µi, µj | Xi, Xj) = E{[µi − E(µi)] | Xi}{[µj − E(µj)] | Xj}
                     = E(µi | Xi) E(µj | Xj)
                     = 0    by Assumption 3
ASSUMPTIONS UNDERLYING THE
METHOD OF LEAST SQUARES CONTD.
 Assumption 6: Zero covariance between µi and Xi, or E(µiXi) = 0. That is,

cov(µi, Xi) = E[µi − E(µi)][Xi − E(Xi)]
            = E[µi(Xi − E(Xi))]        since E(µi) = 0
            = E(µiXi) − E(Xi)E(µi)     since E(Xi) is non-stochastic
            = E(µiXi)                  since E(µi) = 0
            = 0                        by assumption
ASSUMPTIONS UNDERLYING THE
METHOD OF LEAST SQUARES CONTD.
 Assumption 7: The number of observations n must be greater than the number of parameters to be estimated. Alternatively, the number of observations n must be greater than the number of explanatory variables.
ASSUMPTIONS UNDERLYING THE METHOD OF LEAST SQUARES CONTD.

 Assumption 8: Variability in X values. The X values in a given sample must not all be the same. Technically, var(X) must be a finite positive number.

 Assumption 9: The regression model is correctly specified. Alternatively, there is no specification bias or error in the model used in empirical analysis.

 Assumption 10: (For Multiple Regression Models) No perfect multicollinearity. There are no perfect linear relationships among the explanatory variables, as illustrated below.
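
For example (a hypothetical illustration), if one regressor is an exact multiple of another, say X3i = 2X2i for every observation i, the two variables are perfectly collinear and their separate effects on Y cannot be estimated.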
DESIRABLE PROPERTIES OF THE LEAST
SQUARES ESTIMATORS

The desirable properties of the Least Squares Estimators can be categorized into two groups. These are:

 Numerical properties

 Statistical properties
NUMERICAL PROPERTIES OF THE OLS

 Numerical properties are those that hold as a consequence of the use of ordinary least squares, regardless of how the data were generated.
NUMERICAL PROPERTIES OF THE
OLS CONTD.
 The OLS estimators are expressed solely in terms of the observable (i.e., sample) quantities X and Y. Therefore, they can be easily computed.

 They are point estimators; that is, given the sample, each estimator will provide only a single (point) value of the relevant population parameter.

 Once the OLS estimates are obtained from the sample data, the sample regression line can be easily obtained.
NUMERICAL PROPERTIES OF THE
OLS CONTD.
The regression line thus obtained has the following properties (illustrated numerically in the sketch below):

 It passes through the sample means of Y and X.

 The mean value of the estimated Yi is equal to the mean value of the actual Yi.

 The mean value of the residuals ui is zero.

 The residuals ui are uncorrelated with the predicted Yi.

 The residuals ui are uncorrelated with Xi; that is, ∑uiXi = 0.
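
A minimal numerical sketch of these properties in Python with NumPy; the data and variable names below are entirely hypothetical and serve only to illustrate the computations:

import numpy as np

# Hypothetical sample data (illustrative values only)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Deviations from the sample means
x = X - X.mean()
y = Y - Y.mean()

# OLS estimates, computed solely from observable sample quantities
b2 = (x * y).sum() / (x ** 2).sum()  # slope
b1 = Y.mean() - b2 * X.mean()        # intercept

# Fitted values and residuals
Y_hat = b1 + b2 * X
u_hat = Y - Y_hat

print("b1 =", round(b1, 4), " b2 =", round(b2, 4))
print("mean of residuals:", round(u_hat.mean(), 10))          # property: = 0
print("sum of u_hat * X:", round((u_hat * X).sum(), 10))      # property: = 0
print("sum of u_hat * Y_hat:", round((u_hat * Y_hat).sum(), 10))  # property: = 0
print("mean(Y_hat) == mean(Y):", np.isclose(Y_hat.mean(), Y.mean()))
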
STATISTICAL PROPERTIES OF THE LEAST
SQUARES ESTIMATORS

The statistical properties of the Least Squares Estimators are based on the assumptions of the CLRM and are enshrined in the famous Gauss–Markov theorem.

 This theorem rests on the best linear unbiasedness property of an estimator. An estimator, say the OLS estimator b2, is said to be a best linear unbiased estimator (BLUE) of β2 if the following hold:
STATISTICAL PROPERTIES OF THE
LEAST SQUARES ESTIMATORS CONTD.
 It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model (verified in the sketch after this list).

 It is unbiased, that is, its average or expected value, E(b2), is equal to the true value, β2.

 It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
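
For instance, the linearity property can be verified directly (a standard result, sketched here). Writing xi = Xi − X̄ (fixed under Assumption 2) and ki = xi / ∑xi², the OLS slope estimator satisfies

b2 = ∑xiyi / ∑xi² = ∑xiYi / ∑xi² = ∑kiYi    (since ∑xi = 0)

so b2 is a weighted sum, and hence a linear function, of the observations Yi.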
GAUSS–MARKOV THEOREM
 In the regression context it can be proved that the OLS estimators are BLUE. The Gauss–Markov theorem is stated as follows:

 Gauss–Markov Theorem: Given the assumptions of the classical linear regression model, the least-squares estimators, in the class of unbiased linear estimators, have minimum variance; that is, they are BLUE.
GAUSS–MARKOV THEOREM
 The Gauss–Markov theorem is remarkable in that it makes no assumptions about the probability distribution of the random variable µi, and therefore of Yi.

 The statistical properties of the Least Squares estimators are known as finite sample properties: these properties hold regardless of the sample size on which the estimators are based.
PRECISION OF THE LEAST SQUARES ESTIMATES.

 Least-squares estimates are a function of the sample data. But since the data are likely to change from sample to sample, the estimates will change also.

 Therefore, we need some measure of the "reliability" or precision of the estimators b1 and b2.

 In statistics, the precision of an estimate is measured by its standard error (se).
PRECISION OF THE LEAST SQUARES
ESTIMATES.

 (Derivation of the Standard Error of OLS Estimates)
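
For the two-variable model the standard results are as follows, with xi = Xi − X̄:

var(b2) = σ² / ∑xi²,    se(b2) = σ / √(∑xi²)

var(b1) = [∑Xi² / (n ∑xi²)] σ²,    se(b1) = √(∑Xi² / (n ∑xi²)) · σ

Since the true σ² is unknown, it is estimated by σ̂² = ∑ui² / (n − 2), where ∑ui² is the residual sum of squares and n − 2 is the number of degrees of freedom in the two-variable case.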
GOODNESS OF FIT

 A measure of goodness of fit indicates how "well" the fitted sample regression line describes the observed data.

 The coefficient of determination r² (two-variable case) or R² (multiple regression) is a summary measure that tells how well the sample regression line fits the data.

GOODNESS OF FIT

 (Derivation of the Coefficient of Determination)
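
A sketch of the standard decomposition: the total variation of the actual Y values about their sample mean splits into an explained part and a residual part,

∑(Yi − Ȳ)² = ∑(Ŷi − Ȳ)² + ∑ui²
TSS = ESS + RSS

and the coefficient of determination is defined as

r² = ESS / TSS = 1 − RSS / TSS = 1 − ∑ui² / ∑(Yi − Ȳ)²

so r² measures the proportion of the total variation in Y explained by the regression.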
GOODNESS OF FIT

Two properties of r² may be noted:

 1. It is a nonnegative quantity.
 2. Its limits are 0 ≤ r² ≤ 1.
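
At the extremes, r² = 1 means a perfect fit (every residual ui is zero), while r² = 0 means the regression explains none of the variation in Y, i.e., the fitted line is simply Ŷi = Ȳ.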
FORTHCOMING TEST

General Discussion on the forthcoming Test
