
Chapter 6

Regression Analysis Under Linear Restrictions and Preliminary Test Estimation

One of the basic objectives in any statistical modeling is to find good estimators of the parameters. In the context of the multiple linear regression model y = Xβ + ε, the ordinary least squares estimator (OLSE) b = (X'X)^{-1} X'y is the best linear unbiased estimator of β. Several approaches have been attempted in the literature to improve upon the OLSE. One approach is the use of extraneous or prior information. In applied work, such prior information may be available about the regression coefficients. For example, in economics, constant returns to scale imply that the exponents in a Cobb-Douglas production function should sum to unity. In another example, the absence of money illusion on the part of consumers implies that the sum of the money income and price elasticities in a demand function should be zero. These types of constraints or prior information may be available from
(i) some theoretical considerations,
(ii) past experience of the experimenter,
(iii) empirical investigations, or
(iv) some extraneous sources, etc.

To utilize such information in improving the estimation of regression coefficients, it can be expressed in the
form of
(i) exact linear restrictions
(ii) stochastic linear restrictions
(iii) inequality restrictions.

We consider the use of prior information in the form of exact and stochastic linear restrictions in the model y = Xβ + ε, where y is an (n × 1) vector of observations on the study variable, X is an (n × k) matrix of observations on the explanatory variables X₁, X₂, ..., X_k, β is a (k × 1) vector of regression coefficients and ε is an (n × 1) vector of disturbance terms.

Econometrics | Chapter 6 | Linear Restrictions and Preliminary Test Estimation | Shalabh, IIT Kanpur
Exact linear restrictions:
Suppose the prior information binding the regression coefficients is available from some extraneous sources and can be expressed in the form of exact linear restrictions as

r = Rβ

where r is a (q × 1) vector and R is a (q × k) matrix with rank(R) = q (q < k). The elements in r and R are known.

Some examples of exact linear restrictions r = Rβ are as follows:

(i) If there are two restrictions with k = 6, say

β₂ = β₄
β₃ + 2β₄ + β₅ = 1,

then

r = | 0 |,   R = | 0  1  0  -1  0  0 |
    | 1 |        | 0  0  1   2  1  0 |.

(ii) If k = 3 and suppose β₂ = 3, then

r = (3),   R = (0  1  0).

(iii) If k = 3 and suppose β₁ : β₂ : β₃ :: ab : b : 1, then

    | 0 |        | 1  -a    0  |
r = | 0 |,   R = | 0   1   -b  |.
    | 0 |        | 1   0  -ab  |
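As a concrete check of example (i), the two restrictions with k = 6 can be encoded in matrix form and verified against any coefficient vector satisfying them. This is a small numpy sketch; the particular β below is just an illustrative vector chosen to obey both restrictions.

```python
import numpy as np

# Restrictions for k = 6: beta2 = beta4 and beta3 + 2*beta4 + beta5 = 1
R = np.array([[0, 1, 0, -1, 0, 0],
              [0, 0, 1,  2, 1, 0]], dtype=float)
r = np.array([0.0, 1.0])

# Illustrative beta: beta2 = beta4 = 0.3, and beta3 + 2*0.3 + beta5 = 1
# with beta3 = 0.1, beta5 = 0.3
beta = np.array([1.5, 0.3, 0.1, 0.3, 0.3, -2.0])

print(R @ beta)                    # equals r
print(np.allclose(R @ beta, r))    # True
```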

The ordinary least squares estimator b = (X'X)^{-1} X'y does not use this prior information, and in general it does not obey the restrictions in the sense that r ≠ Rb. So the issue is how to use the sample information and the prior information together to find an improved estimator of β.

Restricted least squares estimation
The restricted least squares estimation method enables the use of sample information and prior information simultaneously. In this method, we choose β such that the error sum of squares is minimized subject to the linear restrictions r = Rβ. This can be achieved using the Lagrangian multiplier technique. Define the Lagrangian function

S(β, λ) = (y - Xβ)'(y - Xβ) - 2λ'(Rβ - r)

where λ is a (q × 1) vector of Lagrangian multipliers.

Using the results that if a and b are vectors and A is a suitably defined matrix, then

∂(a'Aa)/∂a = (A + A')a
∂(a'b)/∂a = b,

we have

∂S(β, λ)/∂β = 2X'Xβ - 2X'y - 2R'λ = 0    (*)
∂S(β, λ)/∂λ = Rβ - r = 0.

Pre-multiplying equation (*) by R(X'X)^{-1}, we have

2Rβ - 2R(X'X)^{-1}X'y - 2R(X'X)^{-1}R'λ = 0

or Rβ - Rb - R(X'X)^{-1}R'λ = 0

⇒ λ = [R(X'X)^{-1}R']^{-1}(r - Rb),

using Rβ = r and the fact that R(X'X)^{-1}R' is positive definite (since rank(R) = q).

Substituting λ in equation (*), we get

X'Xβ = X'y + R'[R(X'X)^{-1}R']^{-1}(r - Rb).

Pre-multiplying by (X'X)^{-1} yields

β̂_R = (X'X)^{-1}X'y + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - Rb)
    = b + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - Rb).

This estimator is termed the restricted regression estimator of β.
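The closed-form restricted estimator is straightforward to implement. The following numpy sketch (simulated data with a hypothetical restriction β₁ + β₂ = 1) computes the OLSE b and β̂_R and confirms that the restricted estimator reproduces r = Rβ̂_R exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
beta_true = np.array([0.6, 0.4, 1.0])          # satisfies beta1 + beta2 = 1
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# One exact restriction r = R beta: beta1 + beta2 = 1
R = np.array([[1.0, 1.0, 0.0]])
r = np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                          # OLSE

# beta_R = b + (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (r - R b)
G = R @ XtX_inv @ R.T
beta_R = b + XtX_inv @ R.T @ np.linalg.solve(G, r - R @ b)

print(R @ beta_R)    # equals r up to floating point
```

Since β̂_R minimizes the same error sum of squares under an extra constraint, its residual sum of squares can never fall below that of b.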
Properties of restricted regression estimator
1. The restricted regression estimator β̂_R obeys the exact restrictions, i.e., r = Rβ̂_R. To verify this, consider

Rβ̂_R = R[b + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - Rb)]
     = Rb + (r - Rb)
     = r.
2. Unbiasedness
Under the restriction Rβ = r, the estimation error of β̂_R is

β̂_R - β = (b - β) + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rβ - Rb)
        = [I - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R](b - β)
        = D(b - β)

where

D = I - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R.

Thus

E(β̂_R - β) = D E(b - β) = 0,

implying that β̂_R is an unbiased estimator of β.

3. Covariance matrix
The covariance matrix of β̂_R is

V(β̂_R) = E[(β̂_R - β)(β̂_R - β)']
       = D E[(b - β)(b - β)'] D'
       = D V(b) D'
       = σ² D (X'X)^{-1} D'
       = σ²(X'X)^{-1} - σ²(X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1},

which can be obtained as follows:

Consider

D(X'X)^{-1} = (X'X)^{-1} - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1},

so that

D(X'X)^{-1}D' = [(X'X)^{-1} - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1}][I - R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1}]
 = (X'X)^{-1} - 2(X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1}
   + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}[R(X'X)^{-1}R'][R(X'X)^{-1}R']^{-1}R(X'X)^{-1}
 = (X'X)^{-1} - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}R(X'X)^{-1}.
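The simplification of D(X'X)^{-1}D' can also be checked numerically. This sketch builds D for an arbitrary design matrix and a hypothetical full-row-rank restriction matrix, and verifies the identity to machine precision, along with the fact that V(b) - V(β̂_R) is nonnegative definite.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, q = 40, 4, 2
X = rng.normal(size=(n, k))
R = rng.normal(size=(q, k))        # hypothetical restriction matrix, rank q

S_inv = np.linalg.inv(X.T @ X)     # (X'X)^{-1}
G_inv = np.linalg.inv(R @ S_inv @ R.T)
P = S_inv @ R.T @ G_inv @ R        # (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} R
D = np.eye(k) - P

lhs = D @ S_inv @ D.T
rhs = S_inv - S_inv @ R.T @ G_inv @ R @ S_inv
print(np.allclose(lhs, rhs))       # True

# The reduction term is nonnegative definite, so V(beta_R) <= V(b)
print(np.linalg.eigvalsh(S_inv - lhs).min() >= -1e-12)   # True
```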

Maximum likelihood estimation under exact restrictions:
Assuming ε ~ N(0, σ²I), the maximum likelihood estimators of β and σ² can also be derived so that they obey r = Rβ. The Lagrangian function based on the log-likelihood can be written as

ln L(β, σ², λ) = -(n/2) ln(2πσ²) - (y - Xβ)'(y - Xβ)/(2σ²) + 2λ'(Rβ - r)

where λ is a (q × 1) vector of Lagrangian multipliers. The normal equations are obtained by partially differentiating with respect to β, σ² and λ and equating to zero:

∂ln L(β, σ², λ)/∂β = (1/σ²)(X'y - X'Xβ) + 2R'λ = 0    (1)
∂ln L(β, σ², λ)/∂λ = 2(Rβ - r) = 0    (2)
∂ln L(β, σ², λ)/∂σ² = -n/(2σ²) + (y - Xβ)'(y - Xβ)/(2σ⁴) = 0.    (3)

Let β̃_R, σ̃²_R and λ̃ denote the maximum likelihood estimators of β, σ² and λ, respectively, obtained by solving equations (1), (2) and (3) as follows:

From equations (1) and (2), we get the optimal λ as

λ̃ = [R(X'X)^{-1}R']^{-1}(r - Rβ̃) / (2σ²).
Substituting λ̃ in equation (1) gives

β̃_R = β̃ + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - Rβ̃)

where β̃ = (X'X)^{-1}X'y is the maximum likelihood estimator of β without restrictions. From equation (3), we get

σ̃²_R = (y - Xβ̃_R)'(y - Xβ̃_R)/n.

The second-order conditions can be checked to confirm that the likelihood attains its maximum at β = β̃_R and σ² = σ̃²_R.

The restricted least squares and restricted maximum likelihood estimators of β are the same, whereas they differ for σ².

Test of hypothesis
It is important to test the hypothesis

H₀: r = Rβ
H₁: r ≠ Rβ

before using it in the estimation procedure.

The construction of the test statistic for this hypothesis is detailed in the module on the multiple linear regression model. The resulting test statistic is

F = [ (r - Rb)'[R(X'X)^{-1}R']^{-1}(r - Rb) / q ] / [ (y - Xb)'(y - Xb) / (n - k) ]

which follows an F-distribution with q and (n - k) degrees of freedom under H₀. The decision rule is to reject H₀ at the α level of significance whenever

F ≥ F_{1-α}(q, n - k).

Stochastic linear restrictions:
The exact linear restrictions assume that there is no randomness involved in the auxiliary or prior information. This assumption may not hold true in many practical situations, and some randomness may be present. The prior information in such cases can be formulated as

r = Rβ + V

where r is a (q × 1) vector, R is a (q × k) matrix and V is a (q × 1) vector of random errors. The elements in r and R are known. The term V reflects the randomness involved in the prior information r = Rβ. Assume

E(V) = 0,  E(VV') = Ψ,  E(εV') = 0,

where Ψ is a known (q × q) positive definite matrix and ε is the disturbance term in the multiple regression model y = Xβ + ε.
Note that E(r) = Rβ.

The possible reasons for such stochastic linear restrictions are as follows:
(i) Stochastic linear restrictions may arise from earlier estimates reported together with their standard errors. For example, in repetitive studies, surveys are conducted every year. Suppose the regression coefficient β₁ remains stable over several years, and its estimate is provided along with its standard error; say, its value remains stable around 0.5 with standard error 2. This information can be expressed as

r = β₁ + V₁,

where r = 0.5, E(V₁) = 0, E(V₁²) = 2² = 4.

Now Ψ can be formulated from this data. It is not necessary to have such information for all the regression coefficients; it may be available for only some of them.

(ii) Sometimes the restrictions are in the form of inequalities. Such restrictions may arise from theoretical considerations. For example, the value of a regression coefficient may lie between 3 and 5, i.e., 3 ≤ β₁ ≤ 5, say. In another example, consider the simple linear regression model

y = β₀ + β₁x + ε

where y denotes the consumption expenditure on food and x denotes the income. Then the marginal propensity (tendency) to consume is

dy/dx = β₁,

i.e., if the salary increases by one rupee, then one is expected to spend β₁ of that rupee on food and save (1 - β₁) of it. We may put a bound on β₁: one can spend neither all of the rupee nor none of it, so 0 ≤ β₁ ≤ 1. This is a natural restriction arising from theoretical considerations.

These bounds can be treated as p-sigma limits, say 2-sigma limits, or confidence limits. Treating the endpoints as μ ± 2σ limits gives

μ - 2σ = 0
μ + 2σ = 1
⇒ μ = 1/2,  σ = 1/4.

These values can be interpreted as

1/2 = β₁ + V₁,  E(V₁) = 0,  E(V₁²) = 1/16.
(iii) Sometimes the truthfulness of the exact linear restriction r = Rβ may be suspect, and accordingly an element of uncertainty can be introduced. For example, one may say that the restrictions hold only with 95% confidence. So some element of uncertainty prevails.
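The p-sigma device in case (ii) converts a known interval into a mean and variance for a stochastic restriction. A small helper (hypothetical name `bounds_to_restriction`) makes the arithmetic explicit; `p` is the number of sigmas, 2 in the example above.

```python
def bounds_to_restriction(lower, upper, p=2):
    """Treat [lower, upper] as mu -/+ p*sigma limits; return (mu, variance)."""
    mu = (lower + upper) / 2
    sigma = (upper - lower) / (2 * p)
    return mu, sigma ** 2

# Marginal propensity to consume: 0 <= beta1 <= 1 with 2-sigma limits
mu, var = bounds_to_restriction(0.0, 1.0, p=2)
print(mu, var)   # 0.5 0.0625, i.e. 1/2 and 1/16
```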

Pure and mixed regression estimation:
Consider the multiple regression model

y = Xβ + ε

with n observations and k explanatory variables X₁, X₂, ..., X_k. The ordinary least squares estimator of β is

b = (X'X)^{-1}X'y,

which is termed the pure estimator. The pure estimator b does not satisfy the restrictions r = Rβ + V. So the objective is to obtain an estimator of β by utilizing the stochastic restrictions such that the resulting estimator satisfies them as well. In order to avoid the conflict between prior information and sample information, we can combine them as follows:

Write

y = Xβ + ε,  E(ε) = 0, E(εε') = σ²Iₙ
r = Rβ + V,  E(V) = 0, E(VV') = Ψ, E(εV') = 0

jointly as

| y |   | X |       | ε |
| r | = | R | β  +  | V |

or a = Aβ + w,

where a = (y', r')', A = (X', R')', w = (ε', V')'.

Note that

E(w) = (E(ε)', E(V)')' = 0

and

Ω = E(ww') = | E(εε')  E(εV') |   | σ²Iₙ  0 |
             | E(Vε')  E(VV') | = | 0     Ψ |.

This shows that the disturbances w are non-spherical or heteroskedastic. So the application of generalized least squares estimation will yield a more efficient estimator than ordinary least squares estimation. Applying generalized least squares to the model

a = Aβ + w,  E(w) = 0, V(w) = Ω,

the generalized least squares estimator of β is given by

β̂_M = (A'Ω^{-1}A)^{-1} A'Ω^{-1}a.
The explicit form of this estimator is obtained as follows:

A'Ω^{-1}a = [ X'  R' ] | (1/σ²)Iₙ   0      | | y |
                       | 0          Ψ^{-1} | | r |
          = (1/σ²)X'y + R'Ψ^{-1}r

 1 
 I 0  X 
A '  A   X ' R ' 
1 2 n
  
1  R 
 0  
 12 X ' X  R '  1 R.

Thus
1
 1   1 
ˆM   2 X ' X  R ' 1 R   2 X ' y  R '  1r 
   

assuming  2 to be unknown. This is termed as mixed regression estimator.


1
If  2 is unknown, then  2 can be replaced by its estimator ˆ 2  s 2   y  Xb  '  y  Xb  and feasible
nk
mixed regression estimator of  is obtained as
1
1  1 
ˆ f   2 X ' X  R '  1 R   2 X ' y  R '  1r  .
s  s 
This is also termed as estimated or operationalized generalized least squares estimator.
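The mixed (Theil-Goldberger) estimator is a one-line formula once the pieces are assembled. This sketch (simulated data, a hypothetical stochastic restriction on β₁ + β₂) computes the feasible version with s², and checks the two limiting cases: as the prior variance Ψ shrinks toward zero the estimator is forced toward satisfying the restriction exactly, while a very diffuse prior returns essentially the OLSE.

```python
import numpy as np

def mixed_estimator(X, y, R, r, Psi, sigma2):
    """beta_M = [(1/s2) X'X + R' Psi^{-1} R]^{-1} [(1/s2) X'y + R' Psi^{-1} r]."""
    Psi_inv = np.linalg.inv(Psi)
    M = X.T @ X / sigma2 + R.T @ Psi_inv @ R
    v = X.T @ y / sigma2 + R.T @ Psi_inv @ r
    return np.linalg.solve(M, v)

rng = np.random.default_rng(3)
n, k = 40, 3
X = rng.normal(size=(n, k))
y = X @ np.array([0.7, 0.3, 1.0]) + rng.normal(scale=0.5, size=n)

R = np.array([[1.0, 1.0, 0.0]])    # hypothetical prior: beta1 + beta2 around 1
r = np.array([1.0])

b = np.linalg.lstsq(X, y, rcond=None)[0]
s2 = np.sum((y - X @ b) ** 2) / (n - k)

tight = mixed_estimator(X, y, R, r, Psi=np.array([[1e-10]]), sigma2=s2)
loose = mixed_estimator(X, y, R, r, Psi=np.array([[1e10]]), sigma2=s2)

print(R @ tight)                       # near r: restriction almost exact
print(np.allclose(loose, b, atol=1e-5))   # True: diffuse prior gives OLSE
```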

Properties of mixed regression estimator:

(i) Unbiasedness:
The estimation error of β̂_M is

β̂_M - β = (A'Ω^{-1}A)^{-1}A'Ω^{-1}a - β
        = (A'Ω^{-1}A)^{-1}A'Ω^{-1}(Aβ + w) - β
        = (A'Ω^{-1}A)^{-1}A'Ω^{-1}w.

Thus

E(β̂_M - β) = (A'Ω^{-1}A)^{-1}A'Ω^{-1}E(w) = 0.

So the mixed regression estimator provides an unbiased estimator of β. Note that the pure regression estimator b = (X'X)^{-1}X'y is also an unbiased estimator of β.
(ii) Covariance matrix
The covariance matrix of β̂_M is

V(β̂_M) = E[(β̂_M - β)(β̂_M - β)']
       = (A'Ω^{-1}A)^{-1}A'Ω^{-1} E(ww') Ω^{-1}A (A'Ω^{-1}A)^{-1}
       = (A'Ω^{-1}A)^{-1}
       = [ (1/σ²)X'X + R'Ψ^{-1}R ]^{-1}.

(iii) The estimator β̂_M satisfies the stochastic linear restrictions in the mean sense:

r = Rβ̂_M + V
E(r) = R E(β̂_M) + E(V) = Rβ + 0 = Rβ.

(iv) Comparison with OLSE

We first state a result that is used below to establish the dominance of β̂_M over b.

Result: For positive definite matrices A₁ and A₂, the difference (A₁ - A₂) is positive definite if (A₂^{-1} - A₁^{-1}) is positive definite.

Let

A₁ = V(b) = σ²(X'X)^{-1}
A₂ = V(β̂_M) = [ (1/σ²)X'X + R'Ψ^{-1}R ]^{-1};

then

A₂^{-1} - A₁^{-1} = (1/σ²)X'X + R'Ψ^{-1}R - (1/σ²)X'X = R'Ψ^{-1}R,

which is a nonnegative definite matrix (positive definite when rank(R) = k). This implies that

A₁ - A₂ = V(b) - V(β̂_M)

is nonnegative definite. Thus β̂_M is at least as efficient as b under the criterion of covariance matrices, or Loewner ordering, provided σ² is known.
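The Loewner-ordering claim can be illustrated numerically: the difference V(b) - V(β̂_M) should have no negative eigenvalues. A sketch with an arbitrary design matrix and a hypothetical Ψ:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, q = 30, 4, 2
X = rng.normal(size=(n, k))
R = rng.normal(size=(q, k))        # hypothetical restriction matrix
Psi = np.eye(q) * 0.5              # hypothetical prior covariance
sigma2 = 1.0

V_b = sigma2 * np.linalg.inv(X.T @ X)                                  # V(b)
V_m = np.linalg.inv(X.T @ X / sigma2 + R.T @ np.linalg.inv(Psi) @ R)   # V(beta_M)

eigs = np.linalg.eigvalsh(V_b - V_m)
print(eigs.min() >= -1e-12)        # True: difference is nonnegative definite
```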

Testing of hypothesis:
Given the prior information specified by the stochastic restrictions r = Rβ + V, we want to test whether the sample information and the prior information are compatible. With σ² known, the compatibility is tested by the χ²-statistic

χ² = (r - Rb)' [ σ²R(X'X)^{-1}R' + Ψ ]^{-1} (r - Rb),

where b = (X'X)^{-1}X'y. Under H₀: E(r) = Rβ, this statistic follows a χ²-distribution with q degrees of freedom. If Ψ = 0, the distribution is degenerate and r becomes a fixed quantity.

For the feasible version of the mixed regression estimator

β̂_f = [ (1/s²)X'X + R'Ψ^{-1}R ]^{-1} [ (1/s²)X'y + R'Ψ^{-1}r ],

the optimal properties of the mixed regression estimator, such as linearity, unbiasedness and/or minimum variance, do not remain valid. So there can be situations where the incorporation of prior information leads to a loss in efficiency. This is not a favourable situation, and under such circumstances the pure regression estimator is better to use. In order to know whether the use of prior information will lead to a better estimator or not, the null hypothesis H₀: E(r) = Rβ can be tested.

For testing the null hypothesis

H₀: E(r) = Rβ

when σ² is unknown, we use the F-statistic

F = (1/q) (r - Rb)' [ s²R(X'X)^{-1}R' + Ψ ]^{-1} (r - Rb)

where s² = (y - Xb)'(y - Xb)/(n - k), and F follows an F-distribution with q and (n - k) degrees of freedom under H₀.
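With σ² known, the compatibility statistic is a quadratic form in r - Rb. The Monte Carlo sketch below (hypothetical design, β, and Ψ) checks that its average over repeated compatible samples is close to q, as a χ²_q variable requires.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, q = 30, 3, 2
X = rng.normal(size=(n, k))
beta = np.array([1.0, -0.5, 0.2])
R = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
Psi = np.diag([0.3, 0.7])
sigma2 = 1.0

XtX_inv = np.linalg.inv(X.T @ X)
Cov = sigma2 * R @ XtX_inv @ R.T + Psi   # covariance of (r - Rb) under H0
Cov_inv = np.linalg.inv(Cov)

stats = []
for _ in range(2000):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    b = XtX_inv @ X.T @ y
    r = R @ beta + rng.multivariate_normal(np.zeros(q), Psi)   # compatible prior
    d = r - R @ b
    stats.append(d @ Cov_inv @ d)

print(np.mean(stats))   # close to q = 2
```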

Inequality Restrictions
Sometimes the restrictions on the regression parameters, or equivalently the prior information about them, are available in the form of inequalities, for example

1 ≤ β₁ ≤ 2,  5 ≤ β₃ ≤ 6,  2 ≤ β₁ + 2β₂ ≤ 5,  etc.

Suppose such information is expressible in the form Rβ ≤ r. We want to estimate the regression coefficient β in the model y = Xβ + ε subject to the constraints Rβ ≤ r.

One can minimize (y - Xβ)'(y - Xβ) subject to Rβ ≤ r to obtain an estimator of β. This can be formulated as a quadratic programming problem and solved numerically using an appropriate algorithm, e.g., a simplex-type algorithm. The advantage of this procedure is that a solution β̂ is found that fulfils the constraints. The disadvantage is that the statistical properties of the estimates are not easily determined, and no general conclusions about superiority can be made.

Another option to obtain an estimator of β subject to inequality constraints is to convert the inequality constraints into stochastic linear restrictions, e.g., via p-sigma limits, and use the framework of mixed regression estimation.

Minimax estimation can also be used to obtain an estimator of β under inequality constraints. Minimax estimation is based on the idea that the quadratic risk function for the estimate β̂ is minimized not over the entire parameter space but only over an area that is restricted by the prior knowledge or restrictions.

If all the restrictions define a convex area, this area can be enclosed in an ellipsoid of the form

B(β) = { β : β'Tβ ≤ k }

with the origin as center point, or in

B(β, β₀) = { β : (β - β₀)'T(β - β₀) ≤ k }

with center point vector β₀, where k > 0 is a given constant and T is a known (p × p) matrix which is assumed to be positive definite. Here B defines a concentration ellipsoid.

First, we consider an example to understand how the inequality constraints are framed. Suppose it is known a priori that

aᵢ ≤ βᵢ ≤ bᵢ  (i = 1, 2, ..., p)

where the aᵢ and bᵢ are known and may include aᵢ = -∞ and bᵢ = ∞. These restrictions can be written as

| βᵢ - (aᵢ + bᵢ)/2 | / [ (bᵢ - aᵢ)/2 ] ≤ 1,  i = 1, 2, ..., p.

Now we want to construct a concentration ellipsoid (β - β₀)'T(β - β₀) ≤ 1 which encloses the cuboid and fulfils the following conditions:

(i) The ellipsoid and the cuboid have the same center point, β₀ = ½(a₁ + b₁, ..., a_p + b_p)'.
(ii) The axes of the ellipsoid are parallel to the coordinate axes, that is, T = diag(t₁, ..., t_p).
(iii) The corner points of the cuboid are on the surface of the ellipsoid, which means

Σᵢ₌₁ᵖ ((aᵢ - bᵢ)/2)² tᵢ = 1.

(iv) The ellipsoid has minimal volume:

V = c_p Πᵢ₌₁ᵖ tᵢ^{-1/2},

with c_p a constant depending on the dimension p.

We now include the linear restriction (iii) by means of a Lagrangian multiplier λ and solve (using c_p^{-2}V² = Πᵢ₌₁ᵖ tᵢ^{-1})

min_{tᵢ} { Πᵢ₌₁ᵖ tᵢ^{-1} + λ [ Σᵢ₌₁ᵖ ((aᵢ - bᵢ)/2)² tᵢ - 1 ] }.

The normal equations are then obtained as

∂V/∂tⱼ = -tⱼ^{-2} Π_{i≠j} tᵢ^{-1} + λ ((aⱼ - bⱼ)/2)² = 0

and

∂V/∂λ = Σⱼ ((aⱼ - bⱼ)/2)² tⱼ - 1 = 0.

From ∂V/∂tⱼ = 0, we get

λ = (2/(aⱼ - bⱼ))² tⱼ^{-2} Π_{i≠j} tᵢ^{-1}  (for all j = 1, 2, ..., p),

and for any two i, j we obtain

tᵢ ((aᵢ - bᵢ)/2)² = tⱼ ((aⱼ - bⱼ)/2)².

Summation over i according to ∂V/∂λ = 0 then gives

Σᵢ₌₁ᵖ ((aᵢ - bᵢ)/2)² tᵢ = p tⱼ ((aⱼ - bⱼ)/2)² = 1.

This leads to the required diagonal elements

tⱼ = 4 / [ p (aⱼ - bⱼ)² ]  (j = 1, 2, ..., p).

Hence, the optimal ellipsoid (β - β₀)'T(β - β₀) ≤ 1, which contains the cuboid, has the center point vector

β₀ = ½ (a₁ + b₁, ..., a_p + b_p)'

and the matrix

T = (4/p) diag( (b₁ - a₁)^{-2}, ..., (b_p - a_p)^{-2} ),

which is positive definite for finite limits aᵢ < bᵢ.

Interpretation: The ellipsoid has a larger volume than the cuboid. Hence, the transition to an ellipsoid as a priori information represents a weakening of the prior information but comes with easier mathematical handling.
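The construction is easy to verify numerically: with tⱼ = 4/(p(bⱼ - aⱼ)²), every corner of the cuboid lies exactly on the ellipsoid surface. A numpy sketch with hypothetical bounds:

```python
import numpy as np
from itertools import product

a = np.array([0.0, -1.0, 2.0])     # hypothetical lower bounds
b = np.array([1.0,  3.0, 5.0])     # hypothetical upper bounds
p = len(a)

beta0 = (a + b) / 2                # common center of cuboid and ellipsoid
t = 4.0 / (p * (b - a) ** 2)       # diagonal elements of T

# Each corner contributes t_i * ((b_i - a_i)/2)^2 = 1/p per coordinate,
# so the quadratic form equals 1 at all 2^p corners.
for corner in product(*zip(a, b)):
    x = np.array(corner)
    val = np.sum(t * (x - beta0) ** 2)
    print(round(val, 12))          # 1.0 for every corner
```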

Example: (Two real regressors) The center-point equation of the ellipsoid is (see Figure)

x²/a² + y²/b² = 1,

or

(x, y) | 1/a²   0   | | x |
       | 0     1/b² | | y | = 1,

with T = diag(1/a², 1/b²) = diag(t₁, t₂), and the enclosed area is F = πab = π t₁^{-1/2} t₂^{-1/2}.

The Minimax Principle:

Consider the quadratic risk R(β̂, β, A) = tr[ A E(β̂ - β)(β̂ - β)' ] and a class {β̂} of estimators. Let B(β) ⊂ ℝᵖ be a convex region of a priori restrictions for β. The criterion of the minimax estimator leads to the following.

Definition: An estimator b* ∈ {β̂} is called a minimax estimator of β if

min_{β̂} sup_{β ∈ B} R(β̂, β, A) = sup_{β ∈ B} R(b*, β, A).

An explicit solution can be achieved if the weight matrix is of the form A = aa' of rank 1. Using the abbreviation D* = (S + k^{-1}σ²T), where S = X'X, we have the following result:

Result: In the model y = Xβ + ε, ε ~ N(0, σ²I), with the restriction β'Tβ ≤ k, T > 0, and the risk function R(β̂, β, a), the linear minimax estimator is of the following form:

b* = (X'X + k^{-1}σ²T)^{-1} X'y = D*^{-1} X'y

with the bias vector and covariance matrix

Bias(b*, β) = -k^{-1}σ² D*^{-1} T β,
V(b*) = σ² D*^{-1} S D*^{-1},

and the minimax risk is

sup_{β'Tβ ≤ k} R(b*, β, a) = σ² a' D*^{-1} a.

Econometrics | Chapter 6 | Linear Restrictions and Preliminary Test Estimation | Shalabh, IIT Kanpur
16
Result: If the restrictions are (β - β₀)'T(β - β₀) ≤ k with center point β₀ ≠ 0, the linear minimax estimator is of the following form:

b*(β₀) = β₀ + D*^{-1} X'(y - Xβ₀)

with bias vector and covariance matrix

Bias(b*(β₀), β) = -k^{-1}σ² D*^{-1} T (β - β₀),
V(b*(β₀)) = V(b*),

and the minimax risk is

sup_{(β-β₀)'T(β-β₀) ≤ k} R(b*(β₀), β, a) = σ² a' D*^{-1} a.

Interpretation: A change of the center point of the a priori ellipsoid has an influence only on the estimator itself and its bias. The minimax estimator is not operational because σ² is unknown. The smaller the value of k, the stricter is the a priori restriction for fixed T. Analogously, the larger the value of k, the smaller is the influence of β'Tβ ≤ k on the minimax estimator. For the borderline case, we have

B(β) = { β : β'Tβ ≤ k } → ℝᵖ as k → ∞

and

lim_{k→∞} b* = b = (X'X)^{-1}X'y.
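The borderline behaviour lim_{k→∞} b* = b is easy to see numerically: as k grows, the shrinkage term k^{-1}σ²T in D* vanishes and b* approaches the OLSE. A sketch with a hypothetical T = I and simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

S = X.T @ X
T = np.eye(p)            # hypothetical ellipsoid matrix
sigma2 = 1.0

def minimax_b(k):
    """b* = (S + k^{-1} sigma^2 T)^{-1} X'y."""
    return np.linalg.solve(S + sigma2 / k * T, X.T @ y)

b_ols = np.linalg.solve(S, X.T @ y)
for k in (1.0, 100.0, 1e8):
    print(np.linalg.norm(minimax_b(k) - b_ols))   # shrinks toward 0 as k grows
```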
Comparison of b* and b:
Minimax risk: Since the OLS estimator is unbiased, its minimax risk is

sup_{β'Tβ ≤ k} R(b, β, a) = σ² a' S^{-1} a.

The linear minimax estimator b* has a smaller minimax risk than the OLS estimator:

R(b, β, a) - sup_{β'Tβ ≤ k} R(b*, β, a) = σ² a' ( S^{-1} - (S + k^{-1}σ²T)^{-1} ) a ≥ 0,

since S^{-1} - (S + k^{-1}σ²T)^{-1} ≥ 0.

Considering the superiority in terms of MSE matrices, we get

M(b*, β) = V(b*) + Bias(b*, β) Bias(b*, β)'
        = σ² D*^{-1} ( S + k^{-2}σ² T ββ' T ) D*^{-1}.

Hence, b* is superior to b under the criterion of Loewner ordering when

Δ(b, b*) = V(b) - M(b*, β) = σ² D*^{-1} [ D* S^{-1} D* - S - k^{-2}σ² T ββ' T ] D*^{-1} ≥ 0,

which is possible if and only if

B = D* S^{-1} D* - S - k^{-2}σ² T ββ' T
  = k^{-2}σ² T [ σ² S^{-1} + 2k T^{-1} - ββ' ] T ≥ 0.

This is equivalent to

σ² S^{-1} + 2k T^{-1} - ββ' ≥ 0,

i.e., β' ( σ² S^{-1} + 2k T^{-1} )^{-1} β ≤ 1.

Since

β' ( σ² S^{-1} + 2k T^{-1} )^{-1} β ≤ β' ( 2k T^{-1} )^{-1} β = β'Tβ / (2k) ≤ k/(2k) = 1/2 < 1

for all β with β'Tβ ≤ k, the condition always holds, and b* is MSE-superior to b over the whole restriction region.

Preliminary Test Estimation:

The statistical modeling of data is usually done assuming that the model is correctly specified and that the correct estimators are used for estimation and for drawing statistical inferences from a sample of data. Sometimes prior information or constraints are available from outside the sample as non-sample information. The incorporation and use of such prior information along with the sample information lead to more efficient estimators, provided it is correct. So the suitability of the estimator depends on the correctness of the prior information. One possible statistical approach to checking the correctness of the prior information is through the framework of tests of hypotheses. For example, if prior information is available in the form of exact linear restrictions r = Rβ, there are two possibilities: either it is correct or it is incorrect. If the information is correct, so that r = Rβ holds in the model y = Xβ + ε, then the restricted regression estimator (RRE)

β̂_R = b + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - Rb)

of β is used, which is more efficient than the OLSE b = (X'X)^{-1}X'y of β. Moreover, the RRE satisfies the restrictions, i.e., Rβ̂_R = r. On the other hand, when the information is incorrect, i.e., r ≠ Rβ, then the OLSE is better than the RRE. The truthfulness of the prior information, i.e., whether r = Rβ or r ≠ Rβ, is examined by testing the null hypothesis H₀: Rβ = r using the F-statistic.

 If H₀ is accepted at the α level of significance, then we conclude that Rβ = r, and in such a situation the RRE is better than the OLSE.

 On the other hand, if H₀ is rejected at the α level of significance, then we conclude that Rβ ≠ r, and the OLSE is better than the RRE under such situations.

So when the exact content of the true sampling model is unknown, the statistical model to be used is determined by a preliminary test of hypothesis using the available sample data. Such procedures are completed in two stages and are based on a test of hypothesis which provides a rule for choosing between the estimator based on the sample data alone and the estimator that is consistent with the hypothesis. This requires a test of the compatibility of the OLSE (or maximum likelihood estimator), based on sample information only, with the RRE based on the linear hypothesis. Then one can make a choice of estimator depending upon the outcome: consequently, one chooses either the OLSE or the RRE. Note that under normality of the random errors, the equivalent choice is made between the maximum likelihood estimator of β and the restricted maximum likelihood estimator of β, which have the same forms as the OLSE and RRE, respectively. So essentially a pre-test of the hypothesis H₀: Rβ = r is carried out, and based on its outcome a suitable estimator is chosen. This is called the pre-test procedure, which generates the pre-test estimator that, in turn, provides a rule to choose between the restricted and unrestricted estimators.

One can also understand the philosophy behind preliminary test estimation as follows. Consider the problem of an investigator who has a single data set and wants to estimate the parameters of a linear model that are known to lie in a high-dimensional parametric space Ω₁. However, prior information about the parameters is available which suggests that the relationship may be characterized by a lower-dimensional parametric space Ω₂ ⊂ Ω₁. Under such uncertainty, if the parametric space Ω₁ is estimated by OLSE, the result from the over-specified model will be unbiased but will have a larger variance. Alternatively, the parametric space Ω₂ may incorrectly specify the statistical model, and if estimated by OLSE, the estimator will be biased. The bias may or may not outweigh the reduction in variance. If such uncertainty is represented in the form of a general linear hypothesis, this leads to pre-test estimators.

Let us consider the conventional pre-test estimator under the model y = Xβ + ε with the usual assumptions and the general linear hypothesis H₀: Rβ = r, which can be tested using the F-statistic. The null hypothesis H₀ is rejected at the α level of significance when

u ≡ F_calculated ≥ F_α(q, n - k) ≡ c

where the critical value c is determined for the given level α of the test by

∫_c^∞ dF_{q, n-k} = P( F_{q, n-k} ≥ c ) = α.

 If H₀ is true, meaning that the prior information is correct, then the RRE β̂_R = b + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - Rb) is used to estimate β.

 If H₀ is false, meaning that the prior information is incorrect, then the OLSE b = (X'X)^{-1}X'y is used to estimate β.

Thus the estimator to be used depends on the preliminary test of significance and is of the form

β̂_PT = β̂_R  if u < c
      = b    if u ≥ c.

This estimator ˆPT called a preliminary test or pre-test estimator of  . Alternatively,

ˆPT  ˆR .I  0,c   u   b.Ic ,   u 


 ˆR .I  0,c   u   1  I  0,c   u   .b

 b  (b  ˆR ).I  0,c   u 


 b   b  r  .I  0,c   u 

where the indicator functions are defined as


1 when 0uc
I  0,c   u   
0 otherwise
1 when u  c
Ic ,   u   
0 otherwise.
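The two-stage rule can be written directly in code. The following sketch (simulated data, hypothetical restriction β₁ = β₂; `pretest_estimator` is a name chosen here for illustration) computes u, compares it with a user-supplied critical value c, and returns β̂_R or b accordingly; passing c = 0 or a very large c reproduces the two limiting cases α = 1 and α → 0 discussed next.

```python
import numpy as np

def pretest_estimator(X, y, R, r, c):
    """Return (estimate, u): beta_R if u < c, otherwise the OLSE b."""
    n, k = X.shape
    q = R.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    G = R @ XtX_inv @ R.T
    beta_R = b + XtX_inv @ R.T @ np.linalg.solve(G, r - R @ b)
    d = r - R @ b
    u = (d @ np.linalg.solve(G, d) / q) / (np.sum((y - X @ b) ** 2) / (n - k))
    return (beta_R if u < c else b), u

rng = np.random.default_rng(7)
n, k = 40, 3
X = rng.normal(size=(n, k))
y = X @ np.array([0.5, 0.5, 1.0]) + rng.normal(size=n)
R = np.array([[1.0, -1.0, 0.0]])   # hypothetical H0: beta1 = beta2
r = np.array([0.0])

est_alpha0, u = pretest_estimator(X, y, R, r, c=np.inf)   # alpha -> 0: always RRE
est_alpha1, _ = pretest_estimator(X, y, R, r, c=0.0)      # alpha = 1: always OLSE

print(np.allclose(R @ est_alpha0, r))   # True: the restricted estimator was used
```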

 If   0 , then ˆPT  ˆR .I0,   u   b.I     u   ˆR .

 If   1 then ˆPT  ˆR .I  0  u   b.I0,   u   b .

Note that   0 and   1 indicate that the probability of type 1 error (i.e., rejecting H 0 when it is true) is 0

and 1 respectively. So the entire area under the sampling distribution is the area of acceptance or the area of
Econometrics | Chapter 6 | Linear Restrictions and Preliminary Test Estimation | Shalabh, IIT Kanpur
20
rejection of null hypothesis. Thus the choice of  has a crucial role to play in determining the sampling
performance of the pre-test estimators. Therefore in a repeated sampling context, the data, the linear
hypothesis, and the level of significance all determine the combination of the two estimators that are chosen
on the average. The level of significance has an impact on the outcome of pretest estimator in the sense of
determining the proportion of the time each estimator is used and in determining the sampling performance
of pretest estimator.

We use the following result to derive the bias and risk of pretest estimator:

Result 1: If the $K \times 1$ random vector $Z/\sigma$ is distributed as a multivariate normal random vector with mean $\theta/\sigma$ and covariance matrix $I_K$, and is independent of $\chi^2_{(n-K)}$, then
$$E\left[ Z\; I_{(0,c)}\!\left( \frac{(n-K)\, Z'Z}{K \sigma^2 \chi^2_{(n-K)}} \right) \right] = \theta\, P\left[ \frac{\chi^2_{(K+2,\, \lambda)}}{\chi^2_{(n-K)}} \le c^* \right],$$
where $\lambda = \theta'\theta / 2\sigma^2$ and $c^* = cK/(n-K)$.

Result 2: If the $K \times 1$ random vector $Z/\sigma$ is distributed as a multivariate normal random vector with mean $\theta/\sigma$ and covariance matrix $I_K$, and is independent of $\chi^2_{(n-K)}$, then
$$E\left[ \frac{Z'Z}{\sigma^2}\; I_{(0,c)}\!\left( \frac{(n-K)\, Z'Z}{K \sigma^2 \chi^2_{(n-K)}} \right) \right] = K\, P\left[ \frac{\chi^2_{(K+2,\, \lambda)}}{\chi^2_{(n-K)}} \le \frac{cK}{n-K} \right] + \frac{\theta'\theta}{\sigma^2}\, P\left[ \frac{\chi^2_{(K+4,\, \lambda)}}{\chi^2_{(n-K)}} \le \frac{cK}{n-K} \right],$$
where $\lambda = \theta'\theta / 2\sigma^2$ and $c^* = cK/(n-K)$.

Using these results, we can find the bias and risk of $\hat{\beta}_{PT}$ as follows. The expressions below are stated for the canonical case of $p$ restrictions $H_0: \beta = r$ (i.e., $R = I_p$), for which $\hat{\beta}_R = r$ and $b - \hat{\beta}_R = b - r$.

Bias:
$$\begin{aligned}
E\left(\hat{\beta}_{PT}\right) &= E(b) - E\left[ I_{(0,c)}(u)\, (b - r) \right] \\
&= \beta + \delta\, P\left[ \frac{\chi^2_{(p+2,\, \delta'\delta/2\sigma^2)}}{\chi^2_{(n-p)}} \le \frac{cp}{n-p} \right] \\
&= \beta + \delta\, P\left[ F_{(p+2,\, n-p,\, \delta'\delta/2\sigma^2)} \le \frac{cp}{p+2} \right],
\end{aligned}$$
where $\delta = r - R\beta$, $\chi^2_{(p+2,\, \delta'\delta/2\sigma^2)}$ denotes the non-central $\chi^2$ distribution with noncentrality parameter $\delta'\delta/2\sigma^2$, and $F_{(p+2,\, n-p,\, \delta'\delta/2\sigma^2)}$ denotes the non-central $F$ distribution with noncentrality parameter $\delta'\delta/2\sigma^2$.

Thus, if $\delta = 0$, the pretest estimator is unbiased. Note that the size of the bias is governed by the probability that a random variable with a non-central $F$-distribution falls below a constant determined by the level of the test, the number of hypotheses, and the degree of hypothesis error $\delta$. Since this probability is always less than or equal to one, $\left|\text{bias}\left(\hat{\beta}_{PT}\right)\right| \le \left|\text{bias}\left(\hat{\beta}_R\right)\right|$.
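A small simulation can confirm this behaviour for the hypothesis $H_0: \beta = 0$ (all values below — design matrix, $\beta$, critical value — are illustrative assumptions, not from the notes): the pretest estimator's bias is the restricted estimator's bias shrunk by an acceptance probability.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 40, 3, 20_000
X = rng.normal(size=(n, p))
beta = np.array([0.15, 0.10, 0.05])      # H0: beta = 0 is mildly false
r = np.zeros(p)                          # restricted estimator is beta_R = r = 0
c = 2.85                                 # roughly the 5% critical value of F(3, 37)

XtX = X.T @ X
H = np.linalg.inv(XtX) @ X.T             # maps each sample y to the OLSE b
E = rng.normal(size=(reps, n))
Y = X @ beta + E                         # reps simulated samples (row broadcast)
B = Y @ H.T                              # OLSE in each sample
resid = Y - B @ X.T
s2 = np.sum(resid**2, axis=1) / (n - p)
u = np.sum((B @ XtX) * B, axis=1) / (p * s2)   # F statistic for H0: beta = 0
accept = u <= c
PT = np.where(accept[:, None], r, B)     # pretest estimator in each sample

bias_pt = PT.mean(axis=0) - beta         # estimated bias of the pretest estimator
bias_r = r - beta                        # exact bias of beta_R = 0
```

In this design the pretest bias is a nonzero fraction of the restricted estimator's bias, so it is smaller in magnitude than the restricted bias but does not vanish.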

Risk:
The risk of the pretest estimator is obtained as
$$\begin{aligned}
\rho\left(\beta, \hat{\beta}_{PT}\right) &= E\left[ \left(\hat{\beta}_{PT} - \beta\right)'\left(\hat{\beta}_{PT} - \beta\right) \right] \\
&= E\left[ \left\{ b - \beta - I_{(0,c)}(u)(b - r) \right\}'\left\{ b - \beta - I_{(0,c)}(u)(b - r) \right\} \right] \\
&= E\left[ (b - \beta)'(b - \beta) \right] - 2\, E\left[ I_{(0,c)}(u)\, (b - r)'(b - \beta) \right] + E\left[ I_{(0,c)}(u)\, (b - r)'(b - r) \right] \\
&= \sigma^2 p - \sigma^2 p\, P\left[ \frac{\chi^2_{(p+2,\, \delta'\delta/2\sigma^2)}}{\chi^2_{(n-p)}} \le \frac{cp}{n-p} \right] + \delta'\delta \left\{ 2\, P\left[ \frac{\chi^2_{(p+2,\, \delta'\delta/2\sigma^2)}}{\chi^2_{(n-p)}} \le \frac{cp}{n-p} \right] - P\left[ \frac{\chi^2_{(p+4,\, \delta'\delta/2\sigma^2)}}{\chi^2_{(n-p)}} \le \frac{cp}{n-p} \right] \right\},
\end{aligned}$$
or, compactly,
$$\rho\left(\beta, \hat{\beta}_{PT}\right) = \sigma^2 p - \sigma^2 p\, l(2) + \delta'\delta\left[ 2\, l(2) - l(4) \right],$$
where
$$l(2) = P\left[ \frac{\chi^2_{(p+2,\, \delta'\delta/2\sigma^2)}}{\chi^2_{(n-p)}} \le \frac{cp}{n-p} \right], \qquad
l(4) = P\left[ \frac{\chi^2_{(p+4,\, \delta'\delta/2\sigma^2)}}{\chi^2_{(n-p)}} \le \frac{cp}{n-p} \right], \qquad
0 \le l(4) \le l(2) \le 1.$$

The risk function implies the following results:

1. If the restrictions are correct and $\delta = 0$, the risk of the pretest estimator is $\sigma^2 p\,[1 - l(2)]$, where $0 \le 1 - l(2) \le 1$ for $0 < c < \infty$. Therefore, the pretest estimator has risk less than that of the least squares estimator at the origin $\delta = 0$, and the decrease in risk depends on the level of significance $\alpha$ and, correspondingly, on the critical value $c$ of the test.
2. As the hypothesis error $\delta = r - R\beta$, and thus $\delta'\delta/2\sigma^2$, increases and approaches infinity, $l(\cdot)$ and $\delta'\delta\, l(\cdot)$ approach zero. The risk of the pretest estimator therefore approaches $\sigma^2 p$, the risk of the unrestricted least squares estimator.
3. As the hypothesis error grows, the risk of the pretest estimator increases, attains a maximum after crossing the risk of the least squares estimator, and then monotonically decreases to approach $\sigma^2 p$, the risk of the OLSE.
4. The pretest estimator risk function, defined on the $\delta'\delta/2\sigma^2$ parameter space, crosses the risk function of the least squares estimator within the bounds $p/4 \le \delta'\delta/2\sigma^2 \le p/2$.

The sampling characteristics of the preliminary test estimator are summarized in Figure 1.
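The shape of this risk function can be traced numerically from the compact expression, using Monte Carlo estimates of $l(2)$ and $l(4)$. The sketch below is illustrative ($n$, $p$, $c$, and the $\delta'\delta$ values are arbitrary choices; NumPy's noncentrality argument equals $\delta'\delta/\sigma^2$, i.e., $2\lambda$).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma, c = 40, 4, 1.0, 2.6
N = 300_000
thr = c * p / (n - p)                     # threshold cp/(n-p)
ols_risk = sigma**2 * p                   # risk of the unrestricted OLSE

def l_fn(shift, dd):
    """Monte Carlo estimate of l(shift): the probability that
    chi2_(p+shift, dd/2sigma^2) / chi2_(n-p) is at most cp/(n-p).
    NumPy's nonc argument equals delta'delta/sigma^2 (= 2*lambda)."""
    num = rng.noncentral_chisquare(p + shift, dd / sigma**2, size=N)
    den = rng.chisquare(n - p, size=N)
    return np.mean(num / den <= thr)

def pretest_risk(dd):
    l2, l4 = l_fn(2, dd), l_fn(4, dd)
    return sigma**2 * p - sigma**2 * p * l2 + dd * (2 * l2 - l4)

# evaluate at a small, a moderate, and a large hypothesis error delta'delta
risks = {dd: pretest_risk(dd) for dd in (0.0, 6.0, 200.0)}
```

With these settings the risk starts below $\sigma^2 p$ at $\delta = 0$, rises above it for moderate hypothesis error, and returns to $\sigma^2 p$ as $\delta'\delta \to \infty$, matching the qualitative description of the risk function above.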

From these results, we see that the pretest estimator does well relative to the OLSE if the hypothesis is correctly specified. However, in the $\delta'\delta/2\sigma^2$ space representing the range of hypothesis errors, the pretest estimator is inferior to the least squares estimator over an infinite range of the parameter space. In Figures 1 and 2, there is a range of the parameter space in which the pretest estimator has risk that is inferior to (greater than) that of both the unrestricted and restricted least squares estimators. No one estimator depicted in Figure 1 dominates the other competitors. In addition, in applied problems the hypothesis errors, and thus the correct location in the specification-error parameter space, are seldom known. Consequently, the choice of estimator is unresolved.

The Optimal Level of Significance


The form of the pretest estimator involves, for evaluation purposes, the probabilities $l(2)$ and $l(4)$ that ratios of random variables are less than a constant that depends on the critical value of the test or, equivalently, on the level of statistical significance $\alpha$. Thus, as $\alpha \to 0$, the probabilities $l(\cdot) \to 1$, and the risk of the pretest estimator approaches that of the restricted regression estimator $\hat{\beta}_R$. In contrast, as $\alpha \to 1$, $l(\cdot)$ approaches zero and the risk of the pretest estimator approaches that of the least squares estimator $b$. The choice of $\alpha$, which has a crucial impact on the performance of the pretest estimator, is portrayed in Figure 3.

Since the investigator is usually unsure of the degree of hypothesis specification error, and thus is unsure of the appropriate point in the $\delta$ space for evaluating the risk, the best of all worlds would be a rule that mixes the unrestricted and restricted estimators so as to minimize risk regardless of the relevant specification error $\delta'\delta/2\sigma^2$. Thus the minimum risk function traced out by the cross-hatched area in Figure 2 is the relevant benchmark. Unfortunately, the risk of the pretest estimator, regardless of the choice of $\alpha$, is always equal to or greater than this minimum risk function over some range of the parameter space. Given this result, one criterion that has been proposed for choosing the level $\alpha$ is to select the critical value $c$ that minimizes the maximum regret of not being on the minimum risk function, reflected by the boundary of the shaded area. Another criterion that has been proposed is to minimize the average regret over the whole $\delta'\delta/2\sigma^2$ space. These criteria lead to different conclusions or rules for the choice, and the question concerning the optimal level of the test is still open. One thing is obvious: the conventional choices of $\alpha = 0.05$ and $\alpha = 0.01$ may have rather severe statistical consequences.
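The minimax-regret criterion can be sketched numerically. Everything below is an illustrative assumption (the grids, $n$, $p$, and the use of the $R = I_p$ case, where the restricted risk is simply $\delta'\delta$): for each candidate $c$ we compute the pretest risk over a grid of $\delta'\delta$, measure regret against the envelope $\min(\delta'\delta,\ \sigma^2 p)$, and keep the $c$ whose worst-case regret is smallest.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma, N = 40, 4, 1.0, 50_000
dd_grid = np.arange(0.0, 30.1, 2.0)           # grid of hypothesis errors delta'delta
c_grid = [0.5, 1.0, 1.5, 2.0, 2.6, 3.5, 5.0]  # candidate critical values

def pretest_risk(dd, c):
    # compact risk sigma^2 p - sigma^2 p l(2) + delta'delta [2 l(2) - l(4)],
    # with l(.) estimated by Monte Carlo (NumPy nonc = delta'delta/sigma^2 = 2 lambda)
    thr = c * p / (n - p)
    den = rng.chisquare(n - p, size=N)
    l2 = np.mean(rng.noncentral_chisquare(p + 2, dd / sigma**2, size=N) / den <= thr)
    l4 = np.mean(rng.noncentral_chisquare(p + 4, dd / sigma**2, size=N) / den <= thr)
    return sigma**2 * p - sigma**2 * p * l2 + dd * (2 * l2 - l4)

# envelope: the better of the restricted risk (delta'delta for R = I_p)
# and the OLSE risk (sigma^2 p) at each point of the parameter space
envelope = np.minimum(dd_grid, sigma**2 * p)
max_regret = {c: max(pretest_risk(dd, c) - e for dd, e in zip(dd_grid, envelope))
              for c in c_grid}
c_best = min(max_regret, key=max_regret.get)
```

No candidate $c$ drives the maximum regret to zero: every pretest rule is beaten by the envelope somewhere in the parameter space, which is exactly why the optimal-level question remains open.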
