
Chapter 4

Classical Linear Regression Model Assumptions and Diagnostics
Violation of the Assumptions of the CLRM

• Recall the assumptions we made about the CLRM disturbance terms:

1. E(ut) = 0

2. Var(ut) = σ2 < ∞

3. Cov(ui, uj) = 0 for i ≠ j

4. The X matrix is non-stochastic or fixed in repeated samples

5. ut ∼ N(0,σ2)



Investigating Violations of the
Assumptions of the CLRM

• We will now study these assumptions further, and in particular look at:
- How we test for violations
- Causes
- Consequences: in general we could encounter any combination of three problems:
  - the coefficient estimates are wrong
  - the associated standard errors are wrong
  - the distribution that we assumed for the test statistics will be inappropriate
- Solutions: either
  - we adjust the model so that the assumptions are no longer violated, or
  - we work around the problem by using alternative techniques which are still valid



Statistical Distributions for Diagnostic Tests

• Often, an F- and a χ²-version of the test are available.

• The F-test version involves estimating a restricted and an unrestricted version of a test regression and comparing the RSS.

• The χ²-version is sometimes called an “LM” test, and only has one degree of freedom parameter: the number of restrictions being tested, m.

• Asymptotically, the two tests are equivalent, since the χ² is a special case of the F-distribution:

χ²(m)/m → F(m, T − k) as T − k → ∞
• For small samples, the F-version is preferable.
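This convergence is easy to verify numerically. The sketch below (an illustration, not part of the slides; m = 4 is an arbitrary choice) compares the 5% critical value of m·F(m, T − k) with that of χ²(m) as T − k grows:

```python
# Illustrative check: the 5% critical value of m*F(m, T-k) approaches
# that of chi2(m) as the denominator degrees of freedom T-k grow.
from scipy import stats

m = 4                                        # number of restrictions (arbitrary)
for df2 in (10, 50, 500, 5000):              # increasing values of T - k
    f_crit = stats.f.ppf(0.95, m, df2)       # 5% critical value of F(m, T-k)
    print(f"T-k = {df2:5d}: m * F crit = {m * f_crit:.3f}")
print(f"chi2({m}) 5% crit = {stats.chi2.ppf(0.95, m):.3f}")
```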



Assumption 1: E(ut) = 0

• Assumption that the mean of the disturbances is zero.

• For all diagnostic tests, we cannot observe the disturbances, and so we perform the tests on the residuals.

• The mean of the residuals will always be zero provided that there is a
constant term in the regression.



Assumption 2: Var(ut) = σ2 < ∞

• We have so far assumed that the variance of the errors is constant, σ² - this is known as homoscedasticity.

• If the errors do not have a constant variance, we say that they are heteroscedastic, e.g. say we estimate a regression, calculate the residuals, ût, and plot them against a regressor, x2t.

[Figure: plot of the residuals ût against x2t, with the spread of the residuals increasing in x2t]



Detection of Heteroscedasticity: The GQ Test

• Graphical methods
• Formal tests: There are many of them: we will discuss Goldfeld-Quandt
test and White’s test

The Goldfeld-Quandt (GQ) test is carried out as follows.


1. Split the total sample of length T into two sub-samples of length T1 and T2.
The regression model is estimated on each sub-sample and the two
residual variances are calculated.
2. The null hypothesis is that the variances of the disturbances are equal:
H0: σ1² = σ2²



The GQ Test (Cont’d)

3. The test statistic, denoted GQ, is simply the ratio of the two residual variances, where the larger of the two variances must be placed in the numerator:

GQ = s1² / s2²
4. The test statistic is distributed as an F(T1-k, T2-k) under the null of
homoscedasticity.

5. A problem with the test is that the choice of where to split the sample is usually arbitrary and may crucially affect the outcome of the test.



Detection of Heteroscedasticity using White’s Test

• White’s general test for heteroscedasticity is one of the best approaches because it makes few assumptions about the form of the heteroscedasticity.
• The test is carried out as follows:
1. Assume that the regression we carried out is as follows:
yt = β1 + β2x2t + β3x3t + ut
and we want to test Var(ut) = σ². We estimate the model, obtaining the residuals, ût.
2. Then run the auxiliary regression:

ût² = α1 + α2x2t + α3x3t + α4x2t² + α5x3t² + α6x2tx3t + vt



Performing White’s Test for Heteroscedasticity

3. Obtain R² from the auxiliary regression and multiply it by the number of observations, T. It can be shown that

T·R² ∼ χ²(m)

where m is the number of regressors in the auxiliary regression excluding the constant term.

4. If the χ² test statistic from step 3 is greater than the corresponding value from the statistical table, then reject the null hypothesis that the disturbances are homoscedastic.
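A sketch of these steps for the two-regressor model above follows (array names are illustrative); statsmodels provides a canned equivalent in statsmodels.stats.diagnostic.het_white.

```python
# A sketch of White's test for y_t = b1 + b2*x2_t + b3*x3_t + u_t, assuming
# y, x2, x3 are 1-D NumPy arrays (illustrative names, not from the lecture).
import numpy as np
from scipy import stats

def white_test(y, x2, x3):
    T = len(y)
    X = np.column_stack([np.ones(T), x2, x3])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # step 1: OLS residuals
    u2 = (y - X @ beta) ** 2                           # squared residuals
    # Step 2: auxiliary regression on levels, squares and the cross-product
    Z = np.column_stack([np.ones(T), x2, x3, x2**2, x3**2, x2 * x3])
    g, *_ = np.linalg.lstsq(Z, u2, rcond=None)
    r2 = 1 - np.sum((u2 - Z @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    m = Z.shape[1] - 1                                 # regressors excl. constant
    stat = T * r2                                      # step 3: T*R^2 ~ chi2(m)
    return stat, 1 - stats.chi2.cdf(stat, m)           # step 4: p-value
```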



Consequences of Using OLS in the Presence of
Heteroscedasticity
• OLS estimation still gives unbiased coefficient estimates, but they are no
longer BLUE.

• This implies that if we still use OLS in the presence of heteroscedasticity, our standard errors could be inappropriate and hence any inferences we make could be misleading.

• Whether the standard errors calculated using the usual formulae are too big
or too small will depend upon the form of the heteroscedasticity.



How Do we Deal with Heteroscedasticity?

• If the form (i.e. the cause) of the heteroscedasticity is known, then we can
use an estimation method which takes this into account (called generalised
least squares, GLS).
• A simple illustration of GLS is as follows. Suppose that the error variance is related to another variable zt by

var(ut) = σ²zt²

• To remove the heteroscedasticity, divide the regression equation through by zt:

yt/zt = β1(1/zt) + β2(x2t/zt) + β3(x3t/zt) + vt

where vt = ut/zt is an error term.

• Now var(vt) = var(ut/zt) = var(ut)/zt² = σ²zt²/zt² = σ² for known zt.



Other Approaches to Dealing
with Heteroscedasticity

• So the disturbances from the new regression equation will be homoscedastic.

• Other solutions include:
1. Transforming the variables into logs or deflating by some other measure of “size”.
2. Using White’s heteroscedasticity-consistent standard error estimates.
The effect of using White’s correction is that in general the standard errors
for the slope coefficients are increased relative to the usual OLS standard
errors.
This makes us more “conservative” in hypothesis testing, so that we would
need more evidence against the null hypothesis before we would reject it.



Background –
The Concept of a Lagged Value

t         yt      yt-1    ∆yt
1989M09   0.8     -       -
1989M10   1.3     0.8     1.3 - 0.8 = 0.5
1989M11   -0.9    1.3     -0.9 - 1.3 = -2.2
1989M12   0.2     -0.9    0.2 - (-0.9) = 1.1
1990M01   -1.7    0.2     -1.7 - 0.2 = -1.9
1990M02   2.3     -1.7    2.3 - (-1.7) = 4.0
1990M03   0.1     2.3     0.1 - 2.3 = -2.2
1990M04   0.0     0.1     0.0 - 0.1 = -0.1
...       ...     ...     ...
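The lag and difference columns can each be built in one line with pandas; the sketch below is illustrative and simply reproduces the table above.

```python
# Building a lag y_{t-1} and a first difference Δy_t with pandas.
import pandas as pd

y = pd.Series([0.8, 1.3, -0.9, 0.2, -1.7, 2.3, 0.1, 0.0],
              index=pd.period_range("1989-09", periods=8, freq="M"))
table = pd.DataFrame({"y": y,
                      "y_lag1": y.shift(1),  # y_{t-1}
                      "dy": y.diff()})       # Δy_t = y_t - y_{t-1}
print(table)
```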



Autocorrelation

• We assumed of the CLRM’s errors that Cov(ui, uj) = 0 for i ≠ j. This is essentially the same as saying there is no pattern in the errors.

• Obviously we never have the actual u’s, so we use their sample counterpart, the residuals (the ût’s).

• If there are patterns in the residuals from a model, we say that they are
autocorrelated.

• Some stereotypical patterns we may find in the residuals are given on the
next 3 slides.



Positive Autocorrelation

[Figure: plot of ût against ût−1, and plot of ût over time showing slow, cyclical swings]

Positive autocorrelation is indicated by a cyclical residual plot over time.



Negative Autocorrelation

[Figure: plot of ût against ût−1, and plot of ût over time alternating in sign from one period to the next]

Negative autocorrelation is indicated by an alternating pattern, where the residuals cross the time axis more frequently than if they were distributed randomly.



No pattern in residuals –
No autocorrelation
[Figure: plot of ût against ût−1, and plot of ût over time, both showing a random scatter]

No pattern in the residuals at all: this is what we would like to see.



Detecting Autocorrelation:
The Durbin-Watson Test

• The Durbin-Watson (DW) test is a test for first order autocorrelation, i.e. it assumes that the relationship is between an error and the previous one:
ut = ρut−1 + vt   (1)
where vt ∼ N(0, σv²).
• The DW test statistic actually tests
H0: ρ = 0 and H1: ρ ≠ 0
• The test statistic is calculated by

DW = Σ_{t=2}^{T} (ût − ût−1)² / Σ_{t=2}^{T} ût²
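A direct translation of this formula is sketched below (the residual array name is illustrative). Note that some implementations, e.g. statsmodels’ durbin_watson, sum the denominator over all T observations rather than from t = 2; the difference is negligible in reasonable sample sizes.

```python
# DW statistic computed exactly as on the slide, from OLS residuals `resid`.
import numpy as np

def durbin_watson_stat(resid):
    diff = np.diff(resid)                           # û_t - û_{t-1}, t = 2..T
    return np.sum(diff**2) / np.sum(resid[1:]**2)   # denominator over t = 2..T
```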



The Durbin-Watson Test:
Critical Values

• We can also write
DW ≈ 2(1 − ρ̂)   (2)
where ρ̂ is the estimated correlation coefficient. Since ρ̂ is a correlation, it implies that −1 ≤ ρ̂ ≤ 1.

• Rearranging for DW from (2) would give 0 ≤ DW ≤ 4.

• If ρ = 0, DW = 2. So roughly speaking, do not reject the null


hypothesis if DW is near 2 → i.e. there is little evidence of
autocorrelation

• Unfortunately, DW has two critical values, an upper critical value (dU) and a lower critical value (dL), and there is also an intermediate region where the test is inconclusive: we can neither reject nor fail to reject H0.



The Durbin-Watson Test: Interpreting the Results

Conditions which Must be Fulfilled for DW to be a Valid Test:
1. Constant term in regression
2. Regressors are non-stochastic
3. No lags of dependent variable
Another Test for Autocorrelation:
The Breusch-Godfrey Test

• It is a more general test for rth order autocorrelation:

ut = ρ1ut−1 + ρ2ut−2 + ρ3ut−3 + ... + ρrut−r + vt,  vt ∼ N(0, σv²)

• The null and alternative hypotheses are:
H0: ρ1 = 0 and ρ2 = 0 and ... and ρr = 0
H1: ρ1 ≠ 0 or ρ2 ≠ 0 or ... or ρr ≠ 0
• The test is carried out as follows:
1. Estimate the linear regression using OLS and obtain the residuals, ût.
2. Regress ût on all of the regressors from stage 1 (the x’s) plus ût−1, ût−2, ..., ût−r. Obtain R² from this regression.
3. It can be shown that (T − r)R² ∼ χ²(r)
• If the test statistic exceeds the critical value from the statistical tables, reject
the null hypothesis of no autocorrelation.
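statsmodels implements exactly this procedure; a sketch with synthetic placeholder data follows (all names illustrative).

```python
# Breusch-Godfrey test of up to r-th order autocorrelation via statsmodels.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(1)
T, r = 200, 4                                   # sample size; order to test
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)          # placeholder data under H0
res = sm.OLS(y, sm.add_constant(x)).fit()

lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=r)
print(f"LM = {lm_stat:.3f} (p = {lm_pval:.3f})")
```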
Consequences of Ignoring Autocorrelation
if it is Present

• The coefficient estimates derived using OLS are still unbiased, but they are
inefficient, i.e. they are not BLUE, even in large sample sizes.

• Thus, if the standard error estimates are inappropriate, there exists the
possibility that we could make the wrong inferences.

• R² is likely to be inflated relative to its “correct” value for positively correlated residuals.



“Remedies” for Autocorrelation

• If the form of the autocorrelation is known, we could use a GLS procedure, i.e. an approach that allows for autocorrelated residuals, e.g. Cochrane-Orcutt.

• But such procedures that “correct” for autocorrelation require assumptions about the form of the autocorrelation.

• If these assumptions are invalid, the cure would be more dangerous than the disease! - see Hendry and Mizon (1978).

• However, it is unlikely to be the case that the form of the autocorrelation is known, and a more “modern” view is that residual autocorrelation presents an opportunity to modify the regression.



Dynamic Models

• All of the models we have considered so far have been static, e.g.
yt = β1 + β2x2t + ... + βkxkt + ut

• But we can easily extend this analysis to the case where the current value
of yt depends on previous values of y or one of the x’s, e.g.
yt = β1 + β2x2t + ... + βkxkt + γ1yt-1 + γ2x2t-1 + … + γkxkt-1+ ut

• We could extend the model even further by adding extra lags, e.g.
x2t-2 , yt-3 .



Why Might we Want/Need To Include Lags
in a Regression?

• Inertia of the dependent variable
• Over-reactions
• Measuring time series as overlapping moving averages

• However, other problems with the regression could cause the null hypothesis of no
autocorrelation to be rejected:
– Omission of relevant variables, which are themselves
autocorrelated.
– If we have committed a “misspecification” error by using
an inappropriate functional form.
– Autocorrelation resulting from unparameterised
seasonality.



Models in First Difference Form

• Another way to sometimes deal with the problem of autocorrelation is to switch to a model in first differences.

• Denote the first difference of yt, i.e. yt − yt−1, as ∆yt; similarly for the x-variables, ∆x2t = x2t − x2t−1, etc.

• The model would now be
∆yt = β1 + β2∆x2t + ... + βk∆xkt + ut

• Sometimes the change in y is purported to depend on previous values of y or xt as well as changes in x:
∆yt = β1 + β2∆x2t + β3x2t−1 + β4yt−1 + ut



The Long Run Static Equilibrium Solution

• One interesting property of a dynamic model is its long run or static equilibrium solution.
• “Equilibrium” implies that the variables have reached some steady state and are no longer changing, i.e. if y and x are in equilibrium, we can say
yt = yt+1 = ... = y and xt = xt+1 = ... = x
Consequently, ∆yt = yt − yt−1 = y − y = 0, etc.
• So the way to obtain a long run static solution is:
1. Remove all time subscripts from variables
2. Set error terms equal to their expected values, E(ut)=0
3. Remove first difference terms altogether
4. Gather terms in x together and gather terms in y together.
• These steps can be undertaken in any order



The Long Run Static Equilibrium Solution:
An Example

If our model is
∆yt = β1 + β2 ∆x2t + β3x2t-1 +β4yt-1 + ut

then the static solution would be given by:

0 = β1 + β3x2t−1 + β4yt−1

β4yt−1 = −β1 − β3x2t−1

y = −β1/β4 − (β3/β4)x2
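The same algebra can be checked symbolically; the short sketch below (illustrative, using sympy) solves the equilibrium condition for y.

```python
# Symbolic check of the long-run solution: set differences to zero, E(u_t) = 0,
# drop time subscripts, and solve 0 = b1 + b3*x2 + b4*y for y.
import sympy as sp

b1, b3, b4, x2, y = sp.symbols("beta1 beta3 beta4 x2 y")
solution = sp.solve(sp.Eq(0, b1 + b3 * x2 + b4 * y), y)[0]
print(solution)   # -(beta1 + beta3*x2)/beta4, i.e. -b1/b4 - (b3/b4)*x2
```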



Problems with Adding Lagged Regressors
to “Cure” Autocorrelation
• Inclusion of lagged values of the dependent variable violates the
assumption that the RHS variables are non-stochastic.

• What does an equation with a large number of lags actually mean?

• Note that if there is still autocorrelation in the residuals of a model including lags, then the OLS estimators will not even be consistent.



Multicollinearity

• This problem occurs when the explanatory variables are very highly correlated
with each other.

• Perfect multicollinearity: cannot estimate all the coefficients
- e.g. suppose x3t = 2x2t
and the model is yt = β1 + β2x2t + β3x3t + β4x4t + ut

• Problems if near multicollinearity is present but ignored:
- R² will be high but the individual coefficients will have high standard errors.
- The regression becomes very sensitive to small changes in the specification.
- Thus confidence intervals for the parameters will be very wide, and
significance tests might therefore give inappropriate conclusions.



Measuring Multicollinearity

• The easiest way to measure the extent of multicollinearity is simply to look at the matrix of correlations between the individual variables, e.g.:

Corr   x2     x3     x4
x2     -      0.2    0.8
x3     0.2    -      0.3
x4     0.8    0.3    -
• But another problem: if three or more variables are linearly related, e.g. x2t + x3t = x4t, the pairwise correlations may not reveal it.

• Note that high correlation between y and one of the x’s is not multicollinearity.
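A sketch of the correlation-matrix check with pandas follows; the data and column names are synthetic placeholders.

```python
# Pairwise correlation matrix of the regressors, as on the slide.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
x2 = rng.normal(size=100)
df = pd.DataFrame({"x2": x2,
                   "x3": rng.normal(size=100),
                   "x4": 0.8 * x2 + 0.2 * rng.normal(size=100)})  # near-collinear
print(df[["x2", "x3", "x4"]].corr().round(2))
```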



Solutions to the Problem of Multicollinearity

• “Traditional” approaches, such as ridge regression or principal components. But these usually bring more problems than they solve.

• Some econometricians argue that if the model is otherwise OK, just ignore it.

• The easiest ways to “cure” the problems are:
- drop one of the collinear variables
- transform the highly correlated variables into a ratio
- go out and collect more data e.g.
- a longer run of data
- switch to a higher frequency



Adopting the Wrong Functional Form

• We have previously assumed that the appropriate functional form is linear.
• This may not always be true.
• We can formally test this using Ramsey’s RESET test, which is a general test for mis-specification of functional form.

• Essentially the method works by adding higher order terms of the fitted values (e.g. ŷt², ŷt³, etc.) into an auxiliary regression. Regress ût on powers of the fitted values:

ût = β0 + β1ŷt² + β2ŷt³ + ... + βp−1ŷt^p + vt

Obtain R² from this regression. The test statistic is given by TR² and is distributed as χ²(p − 1).

• So if the value of the test statistic is greater than the χ²(p − 1) critical value, then reject the null hypothesis that the functional form was correct.
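A sketch of this LM form of the test follows (function and array names illustrative; p = 3 adds squared and cubed fitted values). Newer statsmodels versions also provide an F-test variant as linear_reset.

```python
# LM version of Ramsey's RESET as on the slide: regress û on powers of ŷ.
import numpy as np
import statsmodels.api as sm
from scipy import stats

def reset_test(y, X, p=3):
    res = sm.OLS(y, X).fit()
    yhat, u = res.fittedvalues, res.resid
    powers = np.column_stack([yhat**i for i in range(2, p + 1)])  # ŷ², ..., ŷ^p
    aux = sm.OLS(u, sm.add_constant(powers)).fit()
    stat = len(y) * aux.rsquared                 # T*R² ~ chi2(p-1) under H0
    return stat, 1 - stats.chi2.cdf(stat, p - 1)
```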



But what do we do if this is the case?

• The RESET test gives us no guide as to what a better specification might be.

• One possible cause of rejection of the test is if the true model is

yt = β1 + β2x2t + β3x2t² + β4x2t³ + ut

In this case the remedy is obvious.

• Another possibility is to transform the data into logarithms. This will linearise many previously multiplicative models into additive ones:

yt = A·xt^β·e^(ut)  ⇔  ln yt = α + β ln xt + ut



Testing the Normality Assumption

• Why did we need to assume normality for hypothesis testing?

Testing for Departures from Normality

• The Bera-Jarque normality test
• A normal distribution is not skewed and is defined to have a coefficient of kurtosis of 3.
• Since the kurtosis of the normal distribution is 3, its excess kurtosis (b2 − 3) is zero.
• Skewness and kurtosis are the (standardised) third and fourth moments of a distribution.



Normal versus Skewed Distributions

[Figure: two density plots, f(x) against x — a normal distribution (left) and a skewed distribution (right)]



Leptokurtic versus Normal Distribution

[Figure: a leptokurtic density plotted against a normal density over roughly −5.4 to 5.4; the leptokurtic distribution is more peaked at the mean and has fatter tails]


Testing for Normality

• Bera and Jarque formalise this by testing the residuals for normality by
testing whether the coefficient of skewness and the coefficient of excess
kurtosis are jointly zero.
• It can be proved that the coefficients of skewness and kurtosis can be expressed respectively as:

b1 = E[u³] / (σ²)^(3/2)  and  b2 = E[u⁴] / (σ²)²

• The Bera-Jarque test statistic is given by

W = T·[ b1²/6 + (b2 − 3)²/24 ] ∼ χ²(2)

• We estimate b1 and b2 using the residuals from the OLS regression, û.
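The statistic is easily computed from the residual moments; a sketch follows (the residual array name is illustrative). scipy.stats.jarque_bera offers the same test.

```python
# Bera-Jarque statistic from OLS residuals, following the slide's formulas.
import numpy as np
from scipy import stats

def bera_jarque(resid):
    T = len(resid)
    u = resid - resid.mean()               # residuals (mean-zero with a constant)
    s2 = np.mean(u**2)                     # variance estimate
    b1 = np.mean(u**3) / s2**1.5           # coefficient of skewness
    b2 = np.mean(u**4) / s2**2             # coefficient of kurtosis
    W = T * (b1**2 / 6 + (b2 - 3)**2 / 24)   # ~ chi2(2) under normality
    return W, 1 - stats.chi2.cdf(W, 2)
```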



What do we do if we find evidence of Non-Normality?

• It is not obvious what we should do!

• We could use a method which does not assume normality, but such methods can be difficult to apply, and what are their properties?

• It is often the case that one or two very extreme residuals cause us to reject the normality assumption.

• An alternative is to use dummy variables, e.g. say we estimate a monthly model of asset returns from 1980-1990, plot the residuals, and find a particularly large outlier for October 1987:



What do we do if we find evidence
of Non-Normality? (cont’d)

[Figure: plot of the residuals ût over time, with a particularly large outlier in October 1987]

• Create a new variable:

D87M10t = 1 during October 1987 and zero otherwise.

This effectively knocks out that observation. But we need a theoretical reason for adding dummy variables.
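A sketch of this fix with pandas and statsmodels follows; the data, index range and variable names are synthetic placeholders, with an outlier planted in October 1987.

```python
# Dummying out a single extreme observation (October 1987) in a returns model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

idx = pd.period_range("1980-01", "1990-12", freq="M")
rng = np.random.default_rng(3)
df = pd.DataFrame({"ret": rng.normal(size=len(idx)),
                   "mkt": rng.normal(size=len(idx))}, index=idx)
df.loc["1987-10", "ret"] -= 25.0                 # plant a crash-style outlier

df["D87M10"] = (df.index == pd.Period("1987-10", freq="M")).astype(float)
res = sm.OLS(df["ret"], sm.add_constant(df[["mkt", "D87M10"]])).fit()
# The dummy's coefficient absorbs the October 1987 observation entirely.
```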



Omission of an Important Variable or
Inclusion of an Irrelevant Variable

Omission of an Important Variable


• Consequence: The estimated coefficients on all the other variables will be
biased and inconsistent unless the excluded variable is uncorrelated with
all the included variables.
• Even if this condition is satisfied, the estimate of the coefficient on the
constant term will be biased.
• The standard errors will also be biased.

Inclusion of an Irrelevant Variable


• Coefficient estimates will still be consistent and unbiased, but the
estimators will be inefficient.



Parameter Stability Tests

• So far, we have estimated regressions such as yt = β1 + β2x2t + β3x3t + ut

• We have implicitly assumed that the parameters (β1, β2 and β3) are
constant for the entire sample period.

• We can test this implicit assumption using parameter stability tests. The
idea is essentially to split the data into sub-periods and then to estimate up
to three models, for each of the sub-parts and for all the data and then to
“compare” the RSS of the models.

• There are two types of test we can look at:
- Chow test (analysis of variance test)
- Predictive failure tests



The Chow Test

• The steps involved are:
1. Split the data into two sub-periods. Estimate the regression over the whole period and then for the two sub-periods separately (3 regressions). Obtain the RSS for each regression.
2. The restricted regression is now the regression for the whole period, while the “unrestricted regression” comes in two parts: one for each of the sub-samples. We can thus form an F-test, which is based on the difference between the RSS’s. The statistic is:

Test statistic = [RSS − (RSS1 + RSS2)] / (RSS1 + RSS2) × (T − 2k)/k



The Chow Test (cont’d)

where:
RSS = RSS for whole sample
RSS1 = RSS for sub-sample 1
RSS2 = RSS for sub-sample 2
T = number of observations
2k = number of regressors in the “unrestricted” regression (since it comes
in two parts)
k = number of regressors in (each part of the) “unrestricted” regression

3. Perform the test. If the value of the test statistic is greater than the
critical value from the F-distribution, which is an F(k, T-2k), then reject
the null hypothesis that the parameters are stable over time.
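A sketch of the statistic as a helper function follows (names illustrative); it returns the F statistic and its p-value under the F(k, T − 2k) null distribution.

```python
# Chow test from the whole-sample and sub-sample residual sums of squares.
from scipy import stats

def chow_test(rss, rss1, rss2, T, k):
    stat = (rss - (rss1 + rss2)) / (rss1 + rss2) * (T - 2 * k) / k
    pval = 1 - stats.f.cdf(stat, k, T - 2 * k)   # F(k, T-2k) under stability
    return stat, pval
```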



A Chow Test Example

• Consider the following regression for the CAPM β (again) for the
returns on Glaxo.

• Say that we are interested in estimating Beta for monthly data from
1981-1992. The model for each sub-period is

• 1981M1 - 1987M10
0.24 + 1.2RMt T = 82 RSS1 = 0.03555
• 1987M11 - 1992M12
0.68 + 1.53RMt T = 62 RSS2 = 0.00336
• 1981M1 - 1992M12
0.39 + 1.37RMt T = 144 RSS = 0.0434



A Chow Test Example - Results

• The null hypothesis is

H0: α1 = α2 and β1 = β2

• The unrestricted model is the model where this restriction is not imposed.

Test statistic = [0.0434 − (0.0355 + 0.00336)] / (0.0355 + 0.00336) × (144 − 4)/2 = 7.698

Compare with 5% F(2,140) = 3.06

• We reject H0 at the 5% level and say that we reject the restriction that the
coefficients are the same in the two periods.



The Predictive Failure Test

• A problem with the Chow test is that we need to have enough data to do the regression on both sub-samples, i.e. T1 >> k, T2 >> k.
• An alternative formulation is the predictive failure test.
• What we do with the predictive failure test is estimate the regression over a “long”
sub-period (i.e. most of the data) and then we predict values for the other period
and compare the two.
To calculate the test:
- Run the regression for the whole period (the restricted regression) and obtain the RSS.
- Run the regression for the “large” sub-period and obtain the RSS (called RSS1). Note we call the number of observations T1 (even though it may come second).

Test statistic = [(RSS − RSS1) / RSS1] × (T1 − k)/T2

where T2 = number of observations we are attempting to “predict”. The test statistic will follow an F(T2, T1 − k).
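As with the Chow test, the statistic is a one-liner once the two RSS values are in hand; a sketch follows (names illustrative).

```python
# Predictive failure test statistic and p-value.
from scipy import stats

def predictive_failure(rss, rss1, T1, T2, k):
    stat = (rss - rss1) / rss1 * (T1 - k) / T2
    return stat, 1 - stats.f.cdf(stat, T2, T1 - k)   # F(T2, T1-k) under H0
```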



Backwards versus Forwards Predictive Failure Tests

• There are 2 types of predictive failure tests:

- Forward predictive failure tests, where we keep the last few observations
back for forecast testing, e.g. we have observations for 1970Q1-1994Q4.
So estimate the model over 1970Q1-1993Q4 and forecast 1994Q1-1994Q4.

- Backward predictive failure tests, where we attempt to “back-cast” the first few observations, e.g. if we have data for 1970Q1-1994Q4, we estimate the model over 1971Q1-1994Q4 and backcast 1970Q1-1970Q4.



How do we decide the sub-parts to use?

• As a rule of thumb, we could use all or some of the following:


- Plot the dependent variable over time and split the data according to any obvious structural changes in the series, e.g.:

[Figure: plot of the value of a series, yt, against the sample period (observations 1-443), showing an obvious structural change]

- Split the data according to any known important historical events (e.g. a stock market crash, a new government being elected).
- Use all but the last few observations and do a predictive failure test on those.



A Strategy for Building Econometric Models

Our Objective:
• To build a statistically adequate empirical model which
- satisfies the assumptions of the CLRM
- is parsimonious
- has the appropriate theoretical interpretation
- has the right “shape” - i.e.
- all signs on coefficients are “correct”
- all sizes of coefficients are “correct”
- is capable of explaining the results of all competing models



2 Approaches to Building Econometric Models

• There are two popular philosophies of building econometric models: the “specific-to-general” and “general-to-specific” approaches.

• “Specific-to-general” was used almost universally until the mid-1980s, and involved starting with the simplest model and gradually adding to it.

• Little, if any, diagnostic testing was undertaken. But this meant that all inferences were potentially invalid.

• An alternative and more modern approach to model building is the “LSE” or Hendry “general-to-specific” methodology.

• The advantages of this approach are that it is statistically sensible and also
the theory on which the models are based usually has nothing to say about
the lag structure of a model.



The General-to-Specific Approach

• First step is to form a “large” model with lots of variables on the right hand
side
• This is known as a GUM (general unrestricted model)
• At this stage, we want to make sure that the model satisfies all of the
assumptions of the CLRM
• If the assumptions are violated, we need to take appropriate actions to remedy
this, e.g.
- taking logs
- adding lags
- dummy variables
• We need to do this before testing hypotheses
• Once we have a model which satisfies the assumptions, it could be very big
with lots of lags & independent variables



The General-to-Specific Approach:
Reparameterising the Model

• The next stage is to reparameterise the model by:
- knocking out very insignificant regressors
- combining coefficients which are insignificantly different from each other.

• At each stage, we need to check the assumptions are still OK.

• Hopefully at this stage, we have a statistically adequate empirical model which we can use for:
- testing underlying management theories
- forecasting future values of the dependent variable
- formulating policies, etc.



End
