
Chapter III - Part III

This document outlines the classical normal linear regression model (CNLRM). The CNLRM assumes that the error terms ui follow a normal distribution with mean 0 and variance σ2. Under this assumption, the properties of the OLS estimators β̂1 and β̂2 can be derived, including that they are unbiased and normally distributed. Test statistics can then be constructed from these normal distributions to test hypotheses about the parameters.


Outline

1. Some basic ideas


2. The problem of estimation: OLS method
3. Classical Normal Linear Regression Model
(CNLRM)
4. Interval estimation and hypothesis testing
5. Extensions of the two variable linear regression
model

10/24/2017 Mai VU-FIE-FTU 2


3. Classical normal linear regression model
(CNLRM)
 Classical theory of statistical inference consists of two
branches, namely, estimation and hypothesis testing.
 We have thus far covered the topic of estimation of the
parameters of the (two variable) linear regression model.
Using the method of OLS we were able to estimate the
parameters β1, β2, and σ2.
 But estimation is half the battle. Hypothesis testing is the
other half. We would like to find out how close β̂1 is to the
true β1 or how close σ̂2 is to the true σ2.
 Therefore, we need to find out their probability
distributions, for without that knowledge we will not be
able to relate them to their true values.

3. Classical normal linear regression model (CNLRM)
3.1. The probability distribution of disturbance ui
3.2. The normality assumption for ui
3.3. Properties of OLS estimators under the normality
assumption



3.1. The probability distribution of disturbances ui
 To find out the probability distributions of the OLS
estimators, we proceed as follows. Specifically, consider β̂2:
β̂2 = Σ kiYi (4.1.1)
where ki = xi / Σxi2
 Since Yi = β1 + β2Xi + ui, we can write (4.1.1) as:
β̂2 = Σ ki(β1 + β2Xi + ui)
 Because ki, the betas, and Xi are all fixed, β̂2 is ultimately a
linear function of the random variable ui, which is random
by assumption. Therefore, the probability distribution of β̂2
(and also of β̂1) will depend on the assumption made about
the probability distribution of ui.
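As a quick numerical check (a sketch with invented numbers: beta1 = 2, beta2 = 0.5, sigma = 1, and X = 1, …, 20 are illustrative choices, not from the slides), the representation β̂2 = Σ kiYi can be verified against the familiar OLS slope formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: Yi = beta1 + beta2*Xi + ui, with ui ~ N(0, sigma^2)
beta1, beta2, sigma = 2.0, 0.5, 1.0   # hypothetical parameter values
X = np.arange(1.0, 21.0)              # fixed (nonstochastic) regressor
u = rng.normal(0.0, sigma, size=X.size)
Y = beta1 + beta2 * X + u

# ki = xi / sum(xi^2), with xi the deviations of Xi from their mean
x = X - X.mean()
k = x / np.sum(x**2)

# beta2_hat written as a linear function of the Yi (eq. 4.1.1)
beta2_hat = np.sum(k * Y)

# The usual OLS slope formula gives the same number
beta2_ols = np.sum(x * (Y - Y.mean())) / np.sum(x**2)

print(beta2_hat, beta2_ols)
```

The two expressions agree because Σki = 0, so the mean terms drop out of the slope formula.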

3.1. The probability distribution of disturbances ui


 For reasons to be explained shortly, in the regression
context it is usually assumed that the u’s follow the
normal distribution.
 Adding the normality assumption for ui to the
assumptions of the classical linear regression model
(CLRM), we obtain what is known as the classical
normal linear regression model (CNLRM).



3.2. The normality assumption for ui
 The classical normal linear regression model assumes that each
ui is distributed normally with:
 Mean: E(ui) = 0
 Variance: E[ui − E(ui)]2 = E(ui2) = σ2
 Covariance: Cov(ui, uj) = E{[ui − E(ui)][uj − E(uj)]} = E(uiuj) = 0, i ≠ j
 The assumptions given above can be more compactly stated as:
ui ~ N(0, σ2 )
 where the symbol ∼ means distributed as and N stands for the
normal distribution, the terms in the parentheses representing
the two parameters of the normal distribution, namely, the mean
and the variance.

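A quick simulation illustrates the three moment conditions (a sketch with an arbitrary sigma = 2, not a value from the slides): the sample mean should be near 0, the sample variance near σ2, and the covariance between distinct draws near 0.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0  # illustrative value, not from the slides

# Many independent draws of u ~ N(0, sigma^2)
u = rng.normal(0.0, sigma, size=100_000)

# Mean: E(ui) = 0
print(u.mean())
# Variance: E(ui^2) = sigma^2
print(u.var())
# Cov(ui, uj), i != j: pair each draw with its neighbour
print(np.cov(u[:-1], u[1:])[0, 1])
```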

Why the Normality Assumption?


1. ui represents the combined influence (on the
dependent variable) of a large number of
independent variables that are not explicitly
introduced in the regression model. We hope that
the influence of these omitted or neglected variables
is small and at best random. By the central limit
theorem, if there is a large number of independent
and identically distributed random variables, then,
with a few exceptions, the distribution of their sum
tends to a normal distribution as the number of such
variables increases indefinitely.



Why the Normality Assumption?
2. Even if the number of variables is not very large or if
these variables are not strictly independent, their sum
may still be normally distributed.
3. With the normality assumption, the probability
distributions of OLS estimators can be easily derived
because one property of the normal distribution is that
any linear function of normally distributed variables
is itself normally distributed. The OLS estimators β̂1 and
β̂2 are linear functions of ui. Therefore, if the ui are normally
distributed, so are β̂1 and β̂2, which makes our task of
hypothesis testing very straightforward.


Why the Normality Assumption?


4. The normal distribution is a comparatively simple
distribution involving only two parameters (mean and
variance); it is very well known and its theoretical
properties have been extensively studied in mathematical
statistics. Besides, many phenomena seem to follow the
normal distribution.
5. If we are dealing with a small, or finite, sample size, say
fewer than 100 observations, the normality assumption
plays a critical role. It not only helps us to
derive the exact probability distributions of OLS
estimators but also enables us to use the t, F, and χ2
statistical tests for regression models.



3.3. Properties of OLS estimators under the
normality assumption
With the assumption that ui follow the normal
distribution, the OLS estimators have the following
properties:
1. They are unbiased.
2. They have minimum variance. Combined with 1, this
means that they are minimum-variance unbiased, or
efficient estimators.
3. They are consistent; that is, as the sample size
increases indefinitely, the estimators converge to
their true population values.

3.3. Properties of OLS estimators under the
normality assumption
4. β̂1 (being a linear function of ui) is normally
distributed with:
 Mean: E(β̂1) = β1
 Variance: var(β̂1) = σ2_β̂1 = σ2 · ΣXi2 / (n · Σxi2)
→ β̂1 ~ N(β1, σ2_β̂1)



3.3. Properties of OLS estimators under the
normality assumption
 Then, by the properties of the normal distribution, the
variable Z, which is defined as:
Z = (β̂1 − β1) / σ_β̂1
 follows the standard normal distribution, that is, a
normal distribution with zero mean and unit (= 1)
variance, or Z ~ N(0, 1)

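The standardization can be illustrated by Monte Carlo (a sketch under assumed values beta1 = 2, beta2 = 0.5, sigma = 1, X = 1, …, 20; sigma is treated as known, so the statistic really is Z rather than t): re-estimating β1 over many samples and dividing each deviation by σ_β̂1 should yield draws that behave like N(0, 1).

```python
import numpy as np

rng = np.random.default_rng(2)

beta1, beta2, sigma = 2.0, 0.5, 1.0   # hypothetical parameter values
X = np.arange(1.0, 21.0)
x = X - X.mean()
n = X.size

# True standard deviation of beta1_hat: sigma * sqrt(sum(Xi^2)/(n*sum(xi^2)))
sd_b1 = sigma * np.sqrt(np.sum(X**2) / (n * np.sum(x**2)))

# Monte Carlo: re-estimate beta1 many times and standardize
Z = np.empty(20_000)
for r in range(Z.size):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, n)
    b2 = np.sum(x * Y) / np.sum(x**2)      # OLS slope
    b1 = Y.mean() - b2 * X.mean()          # OLS intercept
    Z[r] = (b1 - beta1) / sd_b1

# Z should have mean ~ 0 and variance ~ 1
print(Z.mean(), Z.var())
```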

3.3. Properties of OLS estimators under the
normality assumption
5. β̂2 (being a linear function of ui) is normally
distributed with:
 Mean: E(β̂2) = β2
 Variance: var(β̂2) = σ2_β̂2 = σ2 / Σxi2
→ β̂2 ~ N(β2, σ2_β̂2)
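The formula var(β̂2) = σ2/Σxi2 can be checked the same way (same hypothetical setup as before: beta1 = 2, beta2 = 0.5, sigma = 1, X = 1, …, 20). The empirical variance of β̂2 across many simulated samples should match the formula, and its empirical mean should match β2 (unbiasedness, property 1).

```python
import numpy as np

rng = np.random.default_rng(3)

beta1, beta2, sigma = 2.0, 0.5, 1.0   # hypothetical parameter values
X = np.arange(1.0, 21.0)
x = X - X.mean()

# Theoretical variance of beta2_hat: sigma^2 / sum(xi^2)
var_b2_theory = sigma**2 / np.sum(x**2)

# Empirical distribution of beta2_hat across many simulated samples
b2 = np.empty(20_000)
for r in range(b2.size):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, X.size)
    b2[r] = np.sum(x * Y) / np.sum(x**2)

print(b2.var(), var_b2_theory)   # should be close
print(b2.mean(), beta2)          # unbiasedness
```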


3.3. Properties of OLS estimators under the
normality assumption
 Then, by the properties of the normal distribution, the
variable Z, which is defined as:
Z = (β̂2 − β2) / σ_β̂2
 follows the standard normal distribution, that is, a
normal distribution with zero mean and unit (= 1)
variance, or Z ~ N(0, 1)


3.3. Properties of OLS estimators under the
normality assumption
[Figure: probability density functions of β̂1 and β̂2, centered at
E(β̂1) = β1 and E(β̂2) = β2, and of the standardized variables
Z = (β̂1 − β1)/σ_β̂1 and Z = (β̂2 − β2)/σ_β̂2, each centered at 0.
The vertical axes are labeled "probability density".]



3.3. Properties of OLS estimators under the
normality assumption
 (n − 2)(σ̂2/σ2) is distributed as the χ2 (chi-square)
distribution with (n − 2) df. This knowledge will help us to
draw inferences about the true σ2 from the estimate σ̂2.
 β̂1 and β̂2 are distributed independently of σ̂2.
 β̂1 and β̂2 have minimum variance in the entire class of
unbiased estimators, whether linear or not. This result, due
to Rao, is very powerful because, unlike the Gauss–Markov
theorem, it is not restricted to the class of linear estimators
only. Therefore, we can say that the least-squares
estimators are best unbiased estimators (BUE); that is,
they have minimum variance in the entire class of unbiased
estimators.

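The chi-square result can also be illustrated by simulation (same hypothetical setup: beta1 = 2, beta2 = 0.5, sigma = 1, X = 1, …, 20, so n − 2 = 18). A χ2 variable with n − 2 df has mean n − 2 and variance 2(n − 2), and the simulated statistic (n − 2)σ̂2/σ2 should match both moments.

```python
import numpy as np

rng = np.random.default_rng(4)

beta1, beta2, sigma = 2.0, 0.5, 1.0   # hypothetical parameter values
X = np.arange(1.0, 21.0)
x = X - X.mean()
n = X.size

# Simulate the scaled variance estimator (n-2)*sigma_hat^2/sigma^2
stat = np.empty(20_000)
for r in range(stat.size):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, n)
    b2 = np.sum(x * Y) / np.sum(x**2)
    b1 = Y.mean() - b2 * X.mean()
    resid = Y - b1 - b2 * X
    sigma2_hat = np.sum(resid**2) / (n - 2)   # unbiased estimator of sigma^2
    stat[r] = (n - 2) * sigma2_hat / sigma**2

# Compare with the chi-square(n-2) moments: mean n-2, variance 2(n-2)
print(stat.mean(), n - 2)
print(stat.var(), 2 * (n - 2))
```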
