
www.rsgclasses.com
Rahul Sir (SRCC Graduate, DSE Alumni) - 9810148860

RSGCLASSES

ECONOMICS (H) SEM-4

INTRODUCTORY ECONOMETRICS

BY RAHUL SIR
(SRCC GRADUATE , DSE ALUMNI)
R25


INDEX

1. Simple Linear Regression
2. Multiple Linear Regression
3. Functional Form of Regression
4. Dummy Variable
5. Multicollinearity
6. Heteroscedasticity
7. Autocorrelation
8. Model Selection Criteria

NOTE: If you find any mistake in the questions, please contact Rahul Sir at 9810148860. Thank you.


Chapter- 1
SIMPLE LINEAR REGRESSION ANALYSIS

OBJECTIVE TYPE QUESTION

Choose the Best alternative for each question


1. Regression analysis is concerned with estimating
a. The mean value of the dependent variable
b. The mean value of the explanatory variable
c. The mean value of the correlation coefficient
d. The mean value of the fixed variable

2. The locus of the conditional means of Y for the fixed values of X is the
a. Conditional expectation function
b. Intercept line
c. Population regression line
d. Linear regression line

3. E(Y|Xi) = f(Xi) is referred to as


a. Conditional expectation function
b. Intercept line
c. Population regression line
d. Linear regression line

4. Linear regression model is

a. Linear in explanatory variables but may not be linear in parameters
b. Nonlinear in parameters and must be linear in variables
c. Linear in parameters and must be linear in variables
d. Linear in parameters and may or may not be linear in variables

5. In Yi = β1+β2Xi +ui, ui can take values that are


a. Only positive
b. Only negative
c. Only zero
d. Positive, negative or zero

6. In Yi = E (Y|Xi) + ui the deterministic component is given by


a. yi
b. E (Y|Xi)
c. ui
d. E(Y|Xi) + ui

7. The sample Regression line is at best an approximation of the population


regression. The statement
a. Is always true


b. Is always false
c. May sometimes be true sometimes false
d. Nonsense statement

8. Yi = β1+β2Xi +ui represents


a. Sample regression function
b. Population regression function
c. Nonlinear regression function
d. Estimate of regression function

9. Yi = β̂1 +β̂2Xi+ ûi ,represents


a. Sample regression function
b. Population regression function.
c. Nonlinear regression function
d. Estimate of regression function

10. In Yi = β̂1 + β̂2Xi + ûi, β̂1 and β̂2 represent


a. Fixed component
b. Residual component
c. Estimates
d. Estimators

11. In Yi = β̂1 + β̂2Xi + ûi , ûi represent.


a. Fixed component
b. Residual component estimated
c. Estimates
d. Estimators

12. In the sample regression function, the observed Yi can be expressed as Yi = Ŷi + β̂2Xi + ûi. The statement is
a. True
b. False
c. Depend on 𝛽̂ 2
d. Depends on 𝑌̂i

13. In Yi = β̂1 + β̂2Xi + ûi , ’ûi gives the difference between


a. The actual and estimated Y values
b. The actual and estimated X values
c. The actual and estimated beta values
d. The actual and estimated u values

14. Under the least square procedure, larger the 𝑢̂i (in absolute terms), the larger the
a. Standard error
b. Regression error
c. Squared sum of residuals
d. Difference between true parameter and estimated parameter


15. The method of least squares provides unique estimates of β̂1 and β̂2 that give the smallest possible value of
a. ûi
b. Σûi
c. Σ|ûi|
d. Σûi²

16. The least square estimators are


a. Period estimators
b. Point estimators
c. Population estimators
d. Popular estimators

17. The mean value of the estimated (𝑌̂) is


a. Equal to the mean value of actual Y .
b. Not equal to mean value of actual Y (𝑌̂).
c. Equal to the mean value of actual X(𝑋).
d. Not Equal to mean value of actual X(𝑋).

18. The mean value of ui conditional upon the given Xi is


a. Positive values
b. Negative values
c. Equal to zero
d. Any of the above

19. In classical linear regression model, Xi and ui are


a. Positively correlated
b. Negatively correlated
c. Highly correlated
d. Not correlated

20. Homoscedastic refers to the error terms having


a. Zero mean
b. Positive variance
c. Constant variance
d. Positive mean

21. One of the assumptions of CLRM is that the values of the explanatory variable X must
a. All be positive
b. Not all be the same
c. All be negative
d. Average to zero

22. In statistics standard error measures the


a. Precision of an estimate
b. Correlation between Y and X


c. Specification error of the model


d. Autocorrelation in the regression model

23. In a two variable linear regression model the slope coefficient measures
a. The mean value of Y
b. The change in Y which the model predicts for a unit change in X
c. The change in which the model predicts for a unit change in Y
d. The value of Y for any given value of X

24. The fitted regression of equation is given by 𝑌̂𝑖 = 12 + 0.5 Xi What is the value of
the residual at the point X=50, Y=70 ?
a. 57
b. -57
c. 0
d. 33

25. What is the number of degrees of freedom for a simple bivariate linear regression
with 100 observations?
a. 100
b. 97
c. 98
d. 2

26. Given the assumptions of the CLRM, the least squares estimates possess some
optimum properties given by Gauss-Markov theorem. Which of these statements
is NOT part of the theorem
a. The estimators of 𝛽̂ 2 is a linear function of a random variable
b. The average value of the estimator 𝛽̂ 2 is equal to zero
c. The estimator 𝛽̂ 2has minimum variance
d. The estimator 𝛽̂ 2 is unbiased estimator

27. Coefficient of correlation


a. Lies between -1 and +1
b. Is always equal to zero
c. Is a measure of nonlinear dependence of two variables
d. Implies causation in a relationship

28. For coefficient of determination r2 for a regression model


a. r2= Y
b. 0 <r2 <1
c. r2 <1
d. r2 = 0
29. When the estimated slope coefficient in the simple regression model is zero, then
a. r2 = 1
b. 0 <r2 <1
c. r2 = 1
d. r2 = 0


30. Zero correlation does not necessarily imply independence between the two
variables. The statement is
a. False
b. True
c. Depends on the mean value of X and Y
d. None

31. The r2 measures the percentage of the total variation in


a. Y explained by betas
b. Y explained by ûi
c. Y explained by the regression model
d. None

32. When Ŷi = Yi for each i in a regression model, then the value of r² would be
a. r2 = 1
b. 0 <r2 <1
c. r2 = 1
d. r2 = 0

TRUE FALSE
State whether the following statements are true, false, or uncertain. Give your reasons. Be precise.
i) The stochastic error term 𝑢𝑖 and the residual term 𝑒𝑖 mean the same thing.

ii) The PRF gives the value of the dependent variable corresponding to each value of the
independent variable.

iii) A linear regression model is a model that is linear in the variables.

iv) In the linear regression model the explanatory variable is the cause and the
dependent variable is the effect.

v) The conditional and unconditional mean of a random variable are the same thing.

vi) In practice, the two- variable regression model is useless because the behavior of a
dependent variable can never be explained by a single explanatory variable.

vii) The sum of the deviation of a random variable from its mean value is always equal to
zero.

viii) OLS is an estimating procedure that minimizes the sum of the errors squared,∑𝑒𝑖 2


ix) The coefficient of correlation, r, has the same sign as the slope coefficient b2.

x) r² is the ratio of TSS/ESS.

xi) In simple regression model 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋𝑖 + 𝑢𝑖 , the OLS estimator 𝛽̂1 and 𝛽̂2 each
follow normal distribution only if 𝑢𝑖 follows normal distribution.[Eco(h)2019]

xii) If the estimate of slope coefficient in a bivariate regression is zero, the measure of
coefficient of determination is also zero. [Eco(h)2019]

xiii) If you choose a higher level of significance, a regression coefficient is more likely
to be significant. [Eco(h)2013]
xiv) In the regression model Yi = B1 + B2Xi + ui, suppose we obtain a 95% confidence interval for B2 as (0.1934, 1.8499). We can say the probability is 95% that this interval includes B2. [Eco(h)2014]

xv) In a two-variable PRF, if the slope coefficient 𝛽2 is zero; the intercept 𝛽1 is estimated
by the sample mean. [Eco(h)2015]

xvi) All Actual 𝑌𝑖 cannot lie above the sample linear regression line. [Eco(h)2017]

xvii) Consider a simple regression model estimated using OLS. It is known that the
Explained Sum of Squares is 75% higher than the Residual Sum of Squares. This
implies that more than 75% of the total variation in the dependent variable is
explained by the variation in the explanatory variable. [Eco(h)2023]

xviii) In a simple regression model estimated using OLS, the residuals (ei) are such that
𝑒̅ = 0 and 𝑒̅ 2 = 0. [Eco(h)2023]

xix) The OLS estimate of slope coefficient of regressing Y on X is same as that of


regressing X on Y. [Eco(h)2023]

xx) In a linear regression ln Y = β1 + β2Xi + ui, the measure of goodness of fit R² was estimated as 0.70. The p-value of the slope coefficient is 0.578. The coefficient is statistically significant since X explains 70% of variation in Y. [Eco(h)2023]

xxi) If X and Y are related to each other by the equation: Y = 2 + 0.5 X, the correlation
coefficient between them is 0.5 [Eco(h)2023]


xxii) In linear regression models, 𝑟 2 value is invariant to changes in the unit of


measurement, as it is dimensionless. [Eco(h)2022]

xxiii) The correlation coefficient between 𝑈 = 3 X + 2 and 𝑉 = −4𝑌 + 5 is the same as


between X and Y. [Eco(h)2022]

Proofs

1. Prove that Ȳ = b1 + b2X̄

2. Prove that Σ 𝑒𝑖 =0

3. Prove that Σ𝑒𝑖 𝑥𝑖 =0 where ei is the residual term and 𝑥𝑖 is the deviation
of Xi from mean.

4. Prove that ΣŶi ui = 0, where ui is the residual term and Ŷi is the estimated value of Yi.

5. Prove that residual term is uncorrelated with independent variable.

6. Prove that residual term is uncorrelated with the predicted value.

7. Prove that the mean of the predicted values of Yi is always equal to the actual mean, i.e., mean(Ŷi) = Ȳ.

8. Prove that Σ𝑥𝑖 𝑦𝑖 = Σ𝑋𝑖 𝑦𝑖 = Σ𝑥𝑖 𝑌𝑖

9. Prove that the least square estimator b2 is linear, unbiased and consistent.

10. In CLRM, shows that OLS estimator for the slope coefficient is linear and unbiased.

11. Show that the OLS, estimators have the property of being linear and unbiased.

12. Prove that the least square estimators have the minimum variance amongst the
class of estimators.

13. Prove that the OLS estimators are best linear Unbiased Estimators (BLUE).

14. Derive the numerical properties of the OLS estimators and the regression line.

15. Show that Cov(β̂1, β̂2) = −X̄ Var(β̂2).


16. Show that r² = (Σyiŷi)² / [(Σyi²)(Σŷi²)].

17. Prove that Σûi = nȲ − nβ̂1 − nβ̂2X̄.

18. If we have two regression model Y on X and X on Y then show that product of two
regression slope coefficients of X on Y and Y on X is coefficient of determination.

19. If the estimate of the slope coefficient in a bivariate regression is zero, prove that the coefficient of determination is also zero.
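The residual properties listed in proofs 2 to 7 can also be checked numerically. The sketch below uses simulated data (an assumed data generating process, not taken from any question here) and plain numpy; every printed quantity should be zero up to floating-point error.

```python
# Minimal numerical check of the OLS residual properties in the proofs above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=50)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=50)   # hypothetical DGP, for illustration only

x, y = X - X.mean(), Y - Y.mean()               # deviations from means
b2 = (x * y).sum() / (x ** 2).sum()             # OLS slope
b1 = Y.mean() - b2 * X.mean()                   # OLS intercept
Y_hat = b1 + b2 * X
e = Y - Y_hat                                   # residuals

print(round(e.sum(), 10))                       # ~0  (proof 2)
print(round((e * x).sum(), 10))                 # ~0  (proofs 3 and 5)
print(round((e * Y_hat).sum(), 10))             # ~0  (proofs 4 and 6)
print(round(Y_hat.mean() - Y.mean(), 10))       # ~0  (proof 7)
```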

LINEAR IN PARAMETER ,LINEAR IN VARIABLE

1. State whether the following models are linear regression models:

(a) Yi = β1 + β2(1/Xi) + ui          (b) Yi = β1 + β2 ln(Xi) + ui
(c) ln Yi = β1 + β2Xi + ui           (d) Yi = e^β1 + β2 ln(Xi) + ui
(e) Yi = β1 − β2³Xi + ui

Ans: (a) LIP, (b) LIP, (c) LIP, (d) LIP, (e) LIV
2. Determine whether the following models are linear in the parameters, or the variables, or both. Which of these models are linear regression models?

(i) ln Yi = β1 + β2 ln(Xi) + ui          (ii) Yi = β1 + (1/β2)Xi + ui
(iii) Yi = β1 + β2²Xi + ui               (iv) ln Yi = β1 − β2(1/Xi) + ui
(v) Yi = e^(β1 + β2Xi + ui)              (vi) Yi = β1 − β2³Xi + ui

Ans. (i) LIP (ii) LIV (iii) LIV (iv) LIP (v) Neither (vi) LIV

3. Determine whether the following models are linear in parameters or variables or both. Which of these models are linear regression models?

(a) Yi = β1 + β2(1/Xi) + ui          (b) Yi = β1 + β2 ln(Xi) + ui
(c) ln Yi = β1 + β2Xi + ui           (d) ln Yi = ln β1 + β2 ln Xi + ui

Calculate the elasticity for all the cases.

Ans: (a) LIP (b) LIP (c) LIP (d) none


ESTIMATION OF REGRESSION LINE , HYPOTHESIS TESTING & CONFIDENCE


INTERVAL OF REGRESSION COFFICIENT , ANOVA TABLE

1. You are given the following data on X and Y.


X 1 2 3 4 5

Y 3 5 7 14 11

(i)Obtain the estimated regression equation using ordinary least squares when Y
is regressed on X with in an intercept term.
(ii) Prepare the ANOVA table for this data.
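A minimal sketch (not the official solution) of how the OLS estimates and the ANOVA decomposition for the data in Question 1 can be computed, assuming only numpy is available.

```python
import numpy as np

X = np.array([1, 2, 3, 4, 5], dtype=float)
Y = np.array([3, 5, 7, 14, 11], dtype=float)
n = len(Y)

x, y = X - X.mean(), Y - Y.mean()
b2 = (x * y).sum() / (x ** 2).sum()        # slope
b1 = Y.mean() - b2 * X.mean()              # intercept
e = Y - (b1 + b2 * X)                      # residuals

TSS = (y ** 2).sum()
ESS = b2 ** 2 * (x ** 2).sum()             # explained sum of squares
RSS = (e ** 2).sum()                       # residual sum of squares
F = (ESS / 1) / (RSS / (n - 2))            # ANOVA F statistic, df = (1, n - 2)

print(b1, b2)          # 0.5, 2.5
print(TSS, ESS, RSS)   # 80.0, 62.5, 17.5
print(F)               # ~10.71
```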

2. From the following hypothetical data on weekly family consumption expenditure


(Y) and weekly Family income (X) fit a two variable linear regression model
Y = β1 + β2 Xi +ui

Yi 70 65 90 95 110 115 120 140 155 160

Xi 80 100 120 140 160 180 200 220 240 260

Also find standard errors of 𝛽̂1 , 𝛽̂2 are coefficients of determination

3. Fit the linear regression Y = β1 + β2 Xi for the following data:


X -4 -3 -2 -1 0 1 2 3 4

Y 10 20 30 40 50 60 70 80 90

Also find the variance and standard variance errors of intercept and slope
coefficients.

4. Given below is the data for 10 years from the economic survey of india:
Year Private Final Consumption Expenditure GDP
(PFCE) (in Rs. ‘0000 cr.)
(in Rs. ‘0000cr.)

1985-86 43 54

1986-87 43 55

1987-88 45 56

1988-89 48 62

1989-90 51 67


1990-91 53 69

1991-92 54 70

1992-93 55 74

1993-94 57 78

1994-95 61 86

We take PFCE as dependent variable Y and GDP independent variable X.


Find:
(i) Marginal propensity to consume,
(ii) ESS
(iii) RSS
(iv) Coefficient of determination,
(v) Test the null hypothesis Ho: MPC ≥0.5 at 5% level of significance.
(vi) Construct the ANOVA table for the above Data and find the F–statistic.

5. You have the following data based on 50 observations:


X̄ = 4,  Ȳ = 6.2,  Σxi² = 800,  Σxiyi = 490
Where 𝑥 and 𝑦 are in deviations
(i) Estimate the linear regression of Y on X,
(ii) Interpret the slope coefficient,
(iii) If ∑ 𝑦𝑖2 = 396.2 construct the ANOVA table and calculate R2.

6. For a simple linear regression model , Y i =B1 + B2Xi + ui the following data are
given for 22 observations:
𝑋̅ = 10 𝑌̅ = 20 ∑𝑛𝑖=1(𝑋𝑖 − 𝑋̅ )2 = 60 ∑𝑛𝑖=1(𝑌𝑖 − 𝑌̅ )2 = 100

∑𝑛𝑖=1(𝑋𝑖 − 𝑋̅ )(𝑌𝑖 − 𝑌̅) = 30


(i) Compute the least squares estimates of the slope and intercept parameters.
(ii) Prepare an ANOVA table for the above results
(iii) Test the hypothesis that B2 = 1 at 5% level of significance. How would your
testing procedure change if you were given the true value of the error
variance?

7. For a sample of 10 observations of the following results are obtained:


𝝨X=1700 , 𝝨Y=1110 , 𝝨XY=205500 , 𝝨X²=322000 , 𝝨Y²=132100

(i) Find the regression coefficients and regression line.


(ii) Test whether regression coefficients are statistically significant at 5% level of
significance.
(iii) Calculate and interpret coefficient determination.

8. Given the following summary results for 6 pairs of observations on the dependent
variable Y and the independent variable X, calculate the 95% confidence interval for


the true regression coefficient β1.

ΣXi = 90;  ΣYi = 10.5;  ΣXi² = 1694;  ΣYi² = 20.29;  ΣXiYi = 181.1

9. Using the following data:


n = 10, ΣYi = 5070, Σ Xi = 5,60,000 , ΣYiXi = 30,55,50,000, ΣX2i = 47,60,00,00,000,
ΣY2i = 26,07,100

(i) Fit the linear regression Yi = β1 + β 2Xi ,


(ii) Find S.E.( 𝛽̂1 ) and S.E.( 𝛽̂2 ),
(iii) Find 95% confidence intervals of slope and intercept coefficients,
(iv) Test the significance of slope coefficients at 5%.

10. You have the following information:

∑ 𝑋 = 1680, ∑ 𝑌 = 1110, ∑ 𝑋𝑌 = 204200, ∑ 𝑋 2 = 315400, ∑ 𝑌 2 = 133300, 𝑛 = 10.


Assume all assumptions of CLRM are fulfilled. Obtain.

i. 𝛽̂1 and 𝛽̂2


ii. Establish 95% interval for the population slope coefficient 𝛽1
iii. 𝑅2 [Eco(h) 2019]

11. For the regression model answer the questions that follow:
𝑌̂𝑖 = 432.4138 + 0.0013𝑋𝑖
𝑠𝑒 (16.9061)(0.000245)
𝑛 = 10, 𝑟 2 = 0.7849
𝑌𝑖 → 𝑚𝑎𝑟𝑘𝑠 𝑜𝑏𝑡𝑎𝑖𝑛𝑒𝑑
𝑋𝑖 → 𝐹𝑎𝑚𝑖𝑙𝑦 𝐼𝑛𝑐𝑜𝑚𝑒

(i) Interpret the regression model.


(ii) Find 95% confidence intervals of slope and intercept coefficients,
(iii) Test the significance of slope coefficient at 5%.

12. Consider the following regression:

Ŷi = −66.1058 + 0.0650Xi
se = (10.7509)  (        )
t  = (        )  (18.73)
n = 20, r² = 0.946

Fill in the missing numbers. Would you reject the hypothesis that true B 2 is zero at α
= 0.05? Tell whether you are using a one tailed or two tailed test and why?
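A small sketch of how the blanks in Question 12 above can be filled in, using the identity t = coefficient / standard error (so a missing se is coefficient/t, and a missing t is coefficient/se); scipy is assumed for the critical value.

```python
from scipy.stats import t

b1, se_b1 = -66.1058, 10.7509
b2, t_b2 = 0.0650, 18.73
n = 20

print("t for intercept:", round(b1 / se_b1, 3))
print("se for slope   :", round(b2 / t_b2, 5))
print("5% two-tail critical t, df = 18:", round(t.ppf(0.975, n - 2), 3))
```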

13. Given the following regression between retails sales of passenger cars (Si) and real
disposable income (Xi)
Ŝi = 5807 + 3.24Xi
se = (1.634)
R² = 0.22, n = 30


(i) Interpret the regression coefficients of Xi.


(ii) Establish a 95% confidence interval for coefficients of X i.
(iii) Compute t-value under zero null hypothesis and test at 5% level of significance.
Which t-test would you use one tail or two tail and why?

14. A regression was run between per capita savings (S) and per capita income (Y), and the following results were obtained:
Ŝi = 450.03+ 0.67Yi
SE (151.105) (0.011) n=20
(i) What is the economic interpretation of regression coefficients?
(ii) What do you think about the sign of constant term? What can be the possible reason
behind
it?
(iii) Say something about goodness of fit. Also carry out, ‘t’ test for slope coefficient at
1%.
(iv) Re-estimate the above model with income measured per 100 rupees. What do you think would be the impact on the slope and intercept?
(v) Prepare 99% confidence intervals.

15. A regression was run between personal consumption expenditure (Y) and gross domestic product (X), all measured in billions of dollars, for the years 1982 to 1996, and the following results were obtained:
Ŷi = −184.0780 + 0.7064Xi
se = (46.2619) (0.007827)
R² = 0.22
(i) What is the economic interpretation of regression coefficient?
(ii) What is MPC?
(iii) Interpret r2.
(iv) Prepare 95% confidence intervals of regression coefficient.
(v) Test the significance of β1 and β2 writing the hypothesis.

16. The rational expectations hypothesis claims that expectations are unbiased, i.e., the average predicted value is equal to the actual value of the variable under
investigation. A researcher wished to see the validity of this claim with reference to
the interest rates on 3 months US treasury bills for 30 quarterly observations. The
results of the regression of actual interest (ri) on the predicted interest rates (r*i)
were as follows:
r̂i = 0.0240 + 0.9400 r*i
se (0.86) (0.14)
Carry out the tests to see the validity of the rational expectation hypothesis (choose
α=5%). Assume all basic assumption of the classical linear regression model are
satisfied.


MEAN FORECASTING
1. Using cross- sectional data on total sales and profits for 27 German companies in
1995, the following model is estimated:
Profit= B1+ B2 Salesi + ui
Where
Profits: Total profits in millions of dollars
Sales: Total sales in billions of dollars
The regression results are given below:
Estimates of Coefficients Standard errors

Constant 83.5753 118.131

Sales 18.4338 4.4463

r2=0.4074
(a) Construct a 95% confidence interval for the slope coefficient. What can you say about
its statistical significance?
(b) Prove that in a simple regression model with an intercept, the F statistic for goodness
of fit of the model is equal to the square of the t statistic for a two sided t test on the
slope coefficient. Verify this statement for the regression results given in these
questions.
(c) Find the forecasted mean profits if annual sales are 25 billion dollars. Explain the
concept of a confidence band for true mean profits.

2. The following regression equation was estimated for 10 observations on X and Y:
Ŷi = 24 − 0.5Xi,   mean = 170,   Σxi² = 33,   σ² = 42
Establish a 95% confidence interval for E (Y/X=100)

3. Based on the data collected on a particular Monday for 13 B.A (H) Economics,
second year students we want to estimate the following population regression
Equation: 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋𝑖 + 𝑢𝑖
Where:
𝑌𝑖 : Travelling time (in hrs) for the ith student from her home to college.
𝑋𝑖 : distance from home to college for ith student in km.

The sample gave the following values:


∑ 𝑋𝑖 = 195, ∑ 𝑌𝑖 = 26, ∑ 𝑋𝑖2 = 3050, ∑ 𝑌𝑖2 = 53 , ∑ 𝑋𝑖 𝑌𝑖 = 400

Using the above data and assuming that all the CLRM assumptions are satisfied, construct a 95% confidence interval for the predicted mean travelling time when the distance between the college and a student's house is 11 km.
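A sketch of the mean-forecast interval for Question 3 above, computed from the raw sums given there; it uses Var(Ŷ0) = σ̂²[1/n + (X0 − X̄)²/Σxi²] and a t critical value with n − 2 degrees of freedom (scipy is assumed for the latter).

```python
import numpy as np
from scipy.stats import t

n, X0 = 13, 11.0
sX, sY, sXX, sYY, sXY = 195.0, 26.0, 3050.0, 53.0, 400.0   # sums given in the question

Xbar, Ybar = sX / n, sY / n
Sxx = sXX - n * Xbar ** 2                 # sum of squared deviations of X
Sxy = sXY - n * Xbar * Ybar
Syy = sYY - n * Ybar ** 2

b2 = Sxy / Sxx
b1 = Ybar - b2 * Xbar
sigma2 = (Syy - b2 ** 2 * Sxx) / (n - 2)  # estimate of the error variance

Y0 = b1 + b2 * X0                                        # point forecast of E(Y | X0)
se_mean = np.sqrt(sigma2 * (1 / n + (X0 - Xbar) ** 2 / Sxx))
tc = t.ppf(0.975, df=n - 2)
print(Y0 - tc * se_mean, Y0 + tc * se_mean)              # 95% interval for mean travel time
```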


Relation between F and 𝒕𝟐

1. Given the following regression results (t statistics are reported in parentheses)
Ŷi = 16,899 − 2978.5Xi
t = (8.51) (−4.72)       R² = 0.6149
Use the relationship between R², F and t to find out the underlying sample size.

2. Given the following regression results (t statistics are reported in parentheses)
Ŷi = 4.3863 + 1.08132Xi
t = (4.42) (13.99)       R² = 0.938
Use the relationship between R², F and t to find out the underlying sample size.
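In a two-variable regression F = t² for the slope and F = R²(n − 2)/(1 − R²), so the implied sample size is n = 2 + t²(1 − R²)/R². A minimal sketch applying this to the two regressions above:

```python
# Back out the sample size from the slope t statistic and R-squared
# of a simple (two-variable) regression.
def implied_n(t_slope, r2):
    return 2 + t_slope ** 2 * (1 - r2) / r2

print(round(implied_n(-4.72, 0.6149)))   # question 1
print(round(implied_n(13.99, 0.938)))    # question 2
```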

JARQUE BERA TEST

1. Explain the steps involved in the Jarque-Bera test for testing the validity of the
normality assumption in an empirical exercise. Perform the test for a JB test statistic
value equal to 0.8153 at 5% level of significance.

2. A researcher computes the Jarque-Bera statistic for a large sample as 7.378. Does it provide evidence in favour of normality of the error term? Use 5% level of significance.

3. Test the normality of residuals using the following data:


Skewness 1.50555
Kurtosis 6.432967
No. of observation 379

4. Information was collected on daily changes in the rupee (distribution A) and daily returns on the Nifty (distribution B) for six months (150 days), and the following are the summarized results:
Distribution A Distribution B
Mean 39.29 10.53
Standard Deviation 8.17 8.024
Skewness 0.38 1.78
Kurtosis 2.61 6.24
Determine which of the above distributions is normally distributed, clearly specifying the test used.
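A minimal sketch of the Jarque-Bera computation used throughout this subsection: JB = n[S²/6 + (K − 3)²/24], compared with a chi-square critical value with 2 degrees of freedom (scipy is assumed). The numbers shown are those of Question 4; the same helper applies to Questions 2 and 3.

```python
from scipy.stats import chi2

def jarque_bera(n, skew, kurt):
    """JB statistic from the sample size, skewness and (raw) kurtosis."""
    return n * (skew ** 2 / 6 + (kurt - 3) ** 2 / 24)

crit = chi2.ppf(0.95, df=2)                 # 5% critical value, ~5.99

# Question 4: daily rupee changes (A) vs Nifty returns (B), n = 150 each
for name, s, k in [("A", 0.38, 2.61), ("B", 1.78, 6.24)]:
    jb = jarque_bera(150, s, k)
    print(name, round(jb, 2), "reject normality" if jb > crit else "do not reject")
```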

EXAM STYLE QUESTION

1. Consider the following regression 𝑌𝑖 = 𝛽1 𝑋𝑖 + 𝑢𝑖 where 𝛽̂1 is the OLS estimator of 𝛽1 .


i) Find the value of 𝛽̂1


ii) Find V(𝛽̂1 )


iii) Verify that 𝛽̂1 is unbiased.

2. For the model 𝑌𝑖 =𝛽1 + 𝑢𝑖 , given that all the CLRM Assumption are satisfied , use
OLS to find the estimator of 𝛽1 . Show that this estimator can be decomposed into the
true value plus a linear combination of the disturbance term in the sample. Also
demonstrate that this estimator is an unbiased estimator of 𝛽1 .[ Eco(h) 2015]

3. Suppose that you are considering opening a restaurant at a location where average traffic volume is 1000 cars per day. To help you decide whether to open the restaurant
or not, you collect data on daily sales (in thousands of rupees) and average traffic
volume (in hundreds of cars per day) for a random sample of 22 restaurants. You set
up your model as:

Salesi = B1 + B2 AVtraffici + ui

You know that ∑ 𝑋𝑖 𝑌𝑖 = 17170, ∑ 𝑋𝑖2 = 13055, 𝑌̅ = 32, 𝑋̅ = 22.5

i. Obtain the ordinary least square estimator of the slope, coefficient and interpret
it
ii. Estimate the average sales for your potential restaurant location.
iii. Will the values of the coefficient of determination change if you want to change
the unit of sales from thousands of rupees, leaving units of traffic volume
unchanged?
Explain your answer. [Eco(h) 2014]

4. Using data on sales of cameras (SALES) and its price (PRICE in thousands of rupees)
for 17 brands, the effect of price on sales is given by:

SALESt = 𝛼 + 𝛽 PRICEi + ui

This is estimated using the OLS method. The results obtained are as follows (t-ratios are mentioned within parentheses). Assume all assumptions for the classical linear regression model hold good.

ŜALESi = 112.85 − 2.375 PRICEi

a) Interpret the slope coefficient


b) Construct 95% confidence interval for the slope coefficient.
c) Interpret R2. [Eco(h) 2016]

5. Let the population regression function be:

𝑦𝑖 = 𝐵1 + 𝐵2𝑥𝑖 + 𝜇𝑖


Where 𝑦𝑖 and 𝑥𝑖 are deviations from their respective mean values.

i) What will be the estimated value of 𝐵1? Why?


ii) Derive the estimate of B2 and show that it is identical to the one obtained from a
regression of Y on X. Explain why it is so.
iii) How would you test the hypothesis that the error term in a two variable simple
regression model is normally distributed?
iv) Derive an expression for the 95% confidence intervals for the mean prediction
for the two variable simple linear regression model. [Eco(h) 2020]

6. Following regression output is based on a sample of 30 farms where Y = output of


rice per acre in tonnes and X = quantity of manure applied per acre in kgs.
𝑌̂𝑖 = 384.105 + 3.67𝑋𝑖
𝑠𝑒 = (151.54) (1.00)
𝑅𝑆𝑆 = 6776
Construct a 95% confidence interval for mean output when 8kg of manure is applied
given that the sample average of manure applied per acre is 5kgs. [Eco(h) 2022]

7. How do you test for normality of error terms in the PRF using the Jarque-Bera test? What happens to the least squares estimates if the errors are not normally distributed? What are the consequences for the Gauss-Markov theorem? [Eco(h) 2021]

8. Consider the following formulations of the two variable PRF:

Model I- 𝑌𝑖 =𝛽1 + 𝛽2 𝑋𝑖 +𝑢𝑖


Model II- Yi = α1 + α2(Xi − X̄) + ui

a. Find the estimators of 𝛽1 & 𝛼1 . Are they identical ? Are their variances identical ?
b. Find the estimators of 𝛽2 & 𝛼2 . Are they identical ? Are their variances identical ?
c. What is the advantage , if any , of the model II over model I ?

9. Suppose that the regression model Yi = B1 + B2Xi + ui is estimated using the least squares method as Ŷi = b1 + b2Xi. If Z is related to Y through the equation Zi = 2 + 5Yi, and another regression Zi = A1 + A2Xi + ui is estimated using the method of least squares as Ẑi = a1 + a2Xi.
(i) Are the slope coefficients of the two estimated regression equations the same, i.e.,
is 𝑎2 = 𝑏2 ?
(ii) How will the t statistics of 𝑎2 be related to the t statistics of 𝑏2 ?[Eco(h) 2018]

10. Based on a sample of size 20, the following regression line was estimated using the
least-squares method,

𝑌̂𝑖 = 5 + 3𝑋𝑖


In addition, X̄ = 2, Σ(Xi − X̄)² = 20, and the standard error of regression was estimated to be equal to 1.

Construct a 95% confidence interval estimate of the true population mean of 𝑌 for 𝑋0 =
15. Do you expect the confidence interval to be wider if a similar interval is estimated for
𝑋0 = 2? Explain your answer. [Eco(h) 2018]

11. For an OLS estimated regression equation, Ŷi = b1 + b2Xi, the sum of the product of the residuals and the mean deviations of the explanatory variable is zero? TRUE OR FALSE [Eco(h)2024]

12. For an OLS estimated regression equation, 𝑌̂𝑖 = 𝑏1 + 𝑏2 𝑋𝑖 , if we multiply both Y and
X by 1000 and re-estimate two variable regression model, the intercept coefficient
will increase by 1000 times ? TRUE OR FALSE [Eco(h)2024]

13. Consider the following regression function on sales revenue for a particular firm for
last 10 months, as estimated by OLS:

𝑌̂𝑡 = 51.5.8 + 0.054𝑋𝑡

se = (4.03) (0.04)

σ̂u² = 1.05

𝑋̅ = 25 ∑ 𝑋𝑡2 = 6500

Y is the monthly sales revenue in billions of dollars

X is the monthly expenditure on advertising in millions of dollars.

Find the predicted mean sales revenue if the advertising expenditure for the firm in the
next month is 30 million dollars. Also, find the 99% confidence interval for the true
predicted mean of sales revenue corresponding to 30 million dollars advertising
expenditure. [Eco(h)2024]

Answers to Objective Questions

1) a  2) c  3) a  4) d  5) d  6) b  7) a  8) b  9) a  10) d  11) b  12) b

13) a  14) c  15) d  16) b  17) a  18) c  19) d  20) c  21) b  22) a  23) b

24) d  25) c  26) b  27) a  28) b  29) d  30) b  31) c  32) c


CHAPTER-2
Multiple Linear Regression

OBJECTIVE TYPE QUESTION


Choose the Best alternative for each question
1. The simplest possible multiple regression model is a
a. One variable model
b. Two variable model
c. Three variable model
d. Multi-variable model

2. Multiple linear regression models


a. Are linear in parameter and linear in variables
b. Are linear in parameter and may not be linear in variables
c. May not be linear in parameter but are linear in variables
d. May not be linear in parameter and variables

3. In Yi = β1X1i + β2X2i + β3X3i + ui, where X1i = 1 for all i. This is an example of


a. Three variable model
b. X variable model
c. Four variable model
d. Three beta model

4. In Yi = 𝛽 1 + 𝛽 2X2i +𝛽 3X3i+ui, the partial regression coefficients are given by


a. 𝛽 2 and 𝛽 2
b. 𝛽 2 and 𝛽 3
c. 𝛽 1 and 𝛽 3
d. 𝛽 1 and 𝑢i

5. In classical linear regression model, Var(ui) =σ2 refers to the assumption of


a. Zero mean value of disturbance term
b. Homoscedasticity
c. No autocorrelation
d. No multicollinearity

6. In the classical linear regression model, the condition that λ2X2i + λ3X3i = 0 only when λ2 = λ3 = 0 refers to the assumption of
a. Zero mean value of disturbance term
b. Homoscedasticity
c. No autocorrelation
d. No multicollinearity

7. In classical linear regression model, Cov (u i, uj)=0, i ≠ j refers to the assumption


of
a. Zero mean value of disturbance term
b. Homoscedasticity


c. No autocorrelation
d. No multicollinearity

8. The assumption of no perfect multicollinearity means that


a. There should be no correlation among the regressors
b. There should be no linear relationship among the regressors
c. There should be no nonlinear relationship among the rergressors
d. There should be no relationship among the regressors

9. Given Yi = 𝛽 1X1i + 𝛽 2X2i +𝛽 3X3i+ui, state which of the following statement is true
a. 𝛽 2 measures the change in the mean value of Y per unit change in X2 ,holding
the value of X3 constant
b. 𝛽 3 gives the net effect of a unit change in X3, on the mean value of Y, net of
any effect that X2 may have on mean Y
c. Both a and b are true
d. Neither a nor b is true

10. The measure of proportion or percentage of a variation in Y explained by the


explanatory variables (X2, X3, …) jointly is given by
a. r2
b. R2
c. R
d. None

11. Multiple coefficient of determination measures the


a. Goodness of fit of multiple regression model
b. Homoscedasticity of multiple regression model
c. Heteroscedasticity of multiple regression model
d. Multicollinearity of multiple regression model

12. When R2 = 1; 𝑅̅2 would be equal to


a. 0
b. +1
c. -1
d. Less than 1

13. 𝑅̅2 can take values


a. Between 0 and 1
b. Between -1 and 1
c. Between -1 and 0
d. Less than equal to +1

14. The value of R̄² is always less than R². This statement is


a. Incorrect
b. Correct
c. Depends of k value
d. Depends on n value

15. In comparing two models on the basis of goodness of fit


a. The sample size must be the same


b. The dependent variable must be the same
c. The independent variables must be the same
d. Both a and b above

16. Quadratic function is represented by

a. Yi = β0 + β1Xi² + ui
b. Yi = β0 + β1Xi + β2Xi² + ui
c. Yi = β0 + β1Xi + β2Xi² + β3Xi³ + ui
d. Yi = β0 + β1Xi³ + ui

17. Given the regression model Yi=β1+β2X2i +β3X3i+ui,, how would you state the null
hypothesis to test that X2 has no influence on Y with X3 held constant.
a. H0: β1 = 0
b. H0: β2 = 0
c. H0: β3 = 0
d. H0: β2 = 0 given β3 = 0

18. In hypothesis testing using t statistics, when the computed t value is found to
exceed the critical t value at the chosen level of significance, then
a. We reject the null hypothesis
b. We do not reject the null hypothesis
c. It depends on alternate hypothesis
d. It depends on F value

19. A hypothesis such as H0: β2 = β3 = 0, can be tested using


a. t-test
b. Chi-square test
c. ANOVA test
d. F-test

20. In the regression model Yi = β1 + β2X2i + β3X3i + ui, when testing the overall significance of the model using the F-test, the degrees of freedom used are (k−1), (n−k), where k is equal to
a. 2
b. 1
c. 3
d. Sample size

21. When R2 for a regression model is equal to zero, the F value is equal to
a. Infinity
b. High positive value
c. Low positive value
d. Zero

22. In the multiple regression model, the adjusted R2


a. Cannot be negative
b. Will never be greater than the regression R2
c. Equals to square of correlation coefficient r
d. Cannot decrease when an additional explanatory variable is added


TRUE/ FALSE
State whether the following statements are True or False. Give reasons for your answer:

1. Two or more models cannot be compared on the basis of R²?


2. In Multiple linear regression analysis degree of freedom corresponding to total sum
of square is n-1?
3. If coefficient of correlation is -1 then residual sum of square is negative ?
4. In the regression model Yi = B1 + B2X2i + B3X3i + ui, if all values of X3 are identical, then the variance of the ordinary least squares estimators of the slope coefficients is not defined?
5. The value of 𝑅̅2 is always greater than 𝑅2 .
6. 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝜖𝑖 is estimated as 𝑌̂𝑖 = 𝛽̂1 + 𝛽̂2 𝑋2𝑖 + 𝛽̂3 𝑋3𝑖 using OLS. Here
𝑋2 and 𝛽1 are random variables and 𝛽̂3 is unknown.
7. If the regression model Yi = B1 + B2X2i + B3X3i + ui is estimated using the method of ordinary least squares, the sum of the estimated residuals (ei) is zero.
8. An increase in the number of explanatory variables in a multiple regression model
will necessarily increase adjusted R squared.
9. An addition of a variable in a regression model with 30 observations and 4 variables,
would always lead to a rise in R2 and adjusted R2, given that the additional variable is
statistically significantly different from zero at a = 20%.
10. In a multiple regression model 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖 , testing a joint
restriction 𝐻0 : 𝛽2 = 𝛽3 = 0 is same as testing for 𝐻0 : 𝛽2 = 0 and 𝐻0 : 𝛽3 = 0.
11. For a three variable regression model, TSS is always equal to the sum of ESS and RSS

PROOFS

1. Show that arithmetic mean of residual 𝑒𝑖 is always equal to zero.


2. Show that 𝑒𝑖 would be uncorrelated with estimated Y values.
3. Show that ei would be uncorrelated with xi values, where xi = Xi − X̄.
4. In Multiple linear regression analysis mean value of dependent variable is always
equal to mean predicted value of dependent variable.

Practical Question

1. You are given the following data:

Y 1 3 8

X2 1 2 3

X3 2 1 -3


Obtain the estimated regression equation using ordinary least squares if Y is regressed
on X2 and X3 with an intercept term.

2. An econometric analyst is estimating the following production function from annual


data on a firm in India:
Q = β0 + β1 L + β 2 K
Where L = Rupees of Labor
K = Rupees of Capital
The analyst knows that the firm always budget Rs. 12 Lakhs a year of labor and
capital together. The other relevant data are provided:
ΣX2i² = 14588,  ΣX3i² = 2725,  ΣYi² = 47921,  ΣX2iYi = 7454,
ΣX3iYi = 4554,  ΣX2iX3i = 4796,  X̄2 = 5802.25,  X̄3 = 18,  Ȳ = 67,
N = 14

Can you estimate the regression coefficients in this model? Explain your answers.
3. The following results were obtained from a sample of 12 firms on their output (Y),
labour input (X2) and capital input (X3), measured in arbitrary units:
ΣY = 753       ΣY² = 48,139      ΣYX2 = 40,830
ΣX2 = 643      ΣX2² = 34,843     ΣYX3 = 6,796
ΣX3 = 106      ΣX3² = 976
Find the regression equation:
𝑌 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖
4. The following table contains the sales price of 5 holiday cottages in Ushered, Denmark, together with the age and the livable area of each cottage.

Price (in$) Age (in Years) Area (in m2)

Yi X2i X3i

745 36 66

895 37 68

442 47 64

440 32 53

1598 10 101

Suppose it is thought that the price obtained for a cottage depends primarily on the age and livable area. A possible model for the data might be the linear regression model
Yi = β1 + β2X2i + β3X3i + ui


where the random errors ui are independent, normally distributed random variables
with the zero mean and constant variances. Fit the model and obtain the parameters
and their respective standard errors.

5. You are given the following data based on a simple regression estimated for the
relationship between price (X2) and quantity of oranges sold (Y) in a super market
and also on the amount spent on advertising the product (X 3), for 12 consecutive
days.
Ȳ = 100,  X̄2 = 70,  X̄3 = 6.7,  Σx2i² = 2250,  Σyix2i = −3550,
Σyix3i = 125.25,  Σx2ix3i = −54,  Σyi² = 6300,  Σx3i² = 4.857

(ii) Test the statistical significance of each estimated regression coefficient using α =
5%

6. You are given the following data based on 15 observations:


Ȳ = 367.693,  X̄2 = 402.760,  X̄3 = 8.0,  Σyi² = 66,042.269,
Σx2i² = 84,855.096,  Σx3i² = 280,  Σyix2i = 74,778.346,
Σyix3i = 4,250.9,  Σx2ix3i = 4,796.0

(i) Estimate the three multiple regression coefficients and their standards
error .
(ii) Obtain R2 and
(iii) Test the statistical significance of each estimated regression coefficient
using α = 5%
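A sketch of the deviation-form formulas that Questions 5 and 6 rely on for the two-regressor model, applied to the sums given in Question 6 (an illustration, not the official solution).

```python
# b2 = (Sx2y*Sx3x3 - Sx3y*Sx2x3) / D, b3 symmetrically, with D = Sx2x2*Sx3x3 - Sx2x3^2;
# b1 = Ybar - b2*X2bar - b3*X3bar. Values are the sums stated in Question 6.
Sx2x2, Sx3x3, Sx2x3 = 84855.096, 280.0, 4796.0
Sx2y, Sx3y, Syy = 74778.346, 4250.9, 66042.269
Ybar, X2bar, X3bar, n = 367.693, 402.760, 8.0, 15

D = Sx2x2 * Sx3x3 - Sx2x3 ** 2            # common denominator
b2 = (Sx2y * Sx3x3 - Sx3y * Sx2x3) / D
b3 = (Sx3y * Sx2x2 - Sx2y * Sx2x3) / D
b1 = Ybar - b2 * X2bar - b3 * X3bar

ESS = b2 * Sx2y + b3 * Sx3y               # explained sum of squares
R2 = ESS / Syy
sigma2 = (Syy - ESS) / (n - 3)            # error variance estimate
se_b2 = (sigma2 * Sx3x3 / D) ** 0.5
se_b3 = (sigma2 * Sx2x2 / D) ** 0.5
print(b1, b2, b3, R2, se_b2, se_b3)
```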

7. Consider the following estimated regression equation:

Ŷi = −1336.049 + 12.7413X2i + 85.7640X3i
se = (175.2725) (0.9123) (8.8019)
t = (−7.6226) (13.9653) (9.7437)
R² = 0.8906,  F = 118.0585,  n = 32

Where, Y = Auction price of antique clock


X2 = Age of clock
X3 = Number of bidders

(i) Interpret all the three coefficients of the equation.


(ii) What do you understand by the concept of standard error of an estimate?
How would you calculate it?
(iii) Test the whether the age of clock has any significant contribution in
explaining the variation in auction price of antique clock.
(iv) Would you say that this regression equation is a good fit for the data? Explain the basis of your answer.


(v) Test the overall significance of this equation, i.e., test the joint hypothesis that X2 and X3 are insignificant in explaining the variation in Y.
(vi) What is the relationship between F and R2? Establish this for the regression
results presented above.
8. Consider the following regression for an imaginary country, say Utopia, for a period of 15 years. Variables are: IMP = imports, GNP = Gross National Product and CPI = Consumer Price Index.
ÎMPt = −108.20 + 0.045GNP2t − 0.931CPI3t
t = (3.45) (1.23) (1.844)
R² = 0.9894
(i) Test whether, individually, the partial slope coefficients for GNP and CPI are
statistically significant at the 5% level of significance.
(ii) Test whether GNP and CPI jointly have any statistical significance in explaining variations in imports. Carry out this test at 5% level of significance.

9. Consider the following model relating the gain in salary due to an MBA degree to a
number of its determinants.
𝑆𝐿𝑅𝑌𝐺𝐴𝐼𝑁𝑡 = 𝐵1 + 𝐵2 𝑇𝑈𝐼𝑇𝐼𝑂𝑁𝑡 + 𝐵3 𝑍1𝑡 + 𝐵4 𝑍2𝑡 + 𝐵5 𝑍3𝑡 + 𝑢𝑡

Where,
SLRYGAIN = Post salary MBA minus pre MBA salary, in thousands of dollars.
TUITION = annual tuition cost, in thousands of dollars.
Z1 = MBA skills in being in analysts, graded by recruiters.
Z2 = MBA skills in being team players, graded by recruiters.
Z3 = Curriculum evaluation by MBA’s.
Using data for top 25 business schools, the coefficients were estimated as follows,
standard errors in parenthesis.

B^1 60.899 (2.513)


B^2 0.314 (0.750)
B^3 -0.3948 (2.756)
B^4 -2.016 (2.165)
B^5 -5.325 (3.773)

(i) Carry out individual two tail tests at 10% level of significance for the slope
coefficients.
(ii) Test the model for overall significance at the 10% level if R2 = 0.461 was
obtained for the model.
10. For the multiple regression model for Y = mental impairment, X 1 = life events, and
X2 = SES.
E(Y) = α + β1X1 + β2X2
The following table contains the required results:
             Coeff.      Std. Error      t


(Constant) 28.230 2.174 12.984


LIFE .103 .032 3.177
SES -.097 .029 -3.351
n = 40, R2 = 0.9542
(i) Interpret the regression model.
(ii) Test the significance of partial slope coefficients.
(iii) Construct the 95% confidence interval for partial slope of coefficients
(iv) Construct the ANOVA Table and Test whether the model is significant.

11. The grades points average (GPA) of a random sample of 427 students in a college
were regressed on verbal SAT scores (VSAT) and mathematics SAT scores (MSAT) and the following regression model was estimated (standard errors are reported in parentheses):

ĜPAi = 0.423 + 0.398VSATi + 0.001MSATi
SE (0.220) (0.061) (0.00029)

(i) The analyst found the unadjusted R2 = 0.22 and concluded that the VSAT and
MSAT scores are not good predictors of GPA. Do you agree with him? Write
down all the steps to test his claim and check it at 5% level of significance.
(ii) Suppose a student’s VSAT and MSAT scores increased by 100 points each.
How much increase in GPA can be expected?
(iii) As a result of the college policy if all the GPA scores were increased by 10%
what impact would it have on the regression coefficients and coefficient of
determination R2.

12. Using time series data for 1979 to 2009 for a certain economy, the following model of
demand for money was estimated:

𝑀𝐷𝑖 = 𝐵1 + 𝐵2 𝑌𝑖 , +𝐵3 𝐼𝑁𝑇𝑅𝐴𝑇𝐸𝑖 + 𝑢𝑖

Where

MD = Quantity of money demanded, measured in billions of rupees.

Y = National income, measured in billions of rupees

INTRATE = Interest rate in percent on 3 month treasury bills.

The table below has estimates of the coefficients and their standard errors

Variable Estimate of coefficients Standard errors

CONSTANT 0.003 0.009

Y 0.530 0.112


INTRATE -0.0261 0.101

a) Interpret the slope coefficients.


b) Test the overall significance of the model, at 5% level of significance, if coefficient
of determination reported for the model is 0.519.

13. A relationship was established between demand for housing (H), Gross National Product (GNP), and the interest rate (INT) prevailing in the economy. The following results were obtained:
Ĥ = 678.89 + 0.905GNP − 169.65INT
t = (1.80) (3.64) (−3.87)
R² = 0.432,  adj. R² = 0.375,  df = 20
(i) Calculate the F value from the data?
(ii) What conclusion do you draw from the F-value?

14. Consider the following simple regression model

Price = 𝛽0 + 𝛽1 assess + 𝑢

Where, Price is the housing price

Assess is the assessment of housing price.

The estimated equation is

P̂rice = −14.47 + 0.976 Assess

se = (16.27) (0.049)

𝑛 = 88, 𝑆𝑆𝑅 = 165644.51, 𝑟 2 = 0.820

i. How will you test the constraints 𝛽1 = 1 and 𝛽0 = 0 in the above regression if you
are given the SSR in the restricted model as 209448.99? Conduct the necessary
test(s) at 1% level of significance and give your conclusion?
ii. Suppose now that the estimated model is
Price = β0 + β1 Assess + β2 Lotsize + β3 Sqrft + β4 Bdrms + u
Where
Lotsize = the size of the lot
Sqrft = the square footage
Bdrms = the number of bedrooms
The R² from estimating this model using the same 88 houses is 0.829. Test at 1% level of significance that all partial slope coefficients are equal to zero.
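A sketch of the restricted-versus-unrestricted F tests asked for in Question 14, using the SSR form for part (i) and the R² form for part (ii); it assumes two restrictions in part (i) and treats the three added regressors as the restrictions in part (ii). scipy is assumed for the critical values.

```python
from scipy.stats import f

def F_from_ssr(ssr_r, ssr_ur, q, df_ur):
    # F test from restricted and unrestricted sums of squared residuals
    return ((ssr_r - ssr_ur) / q) / (ssr_ur / df_ur)

def F_from_r2(r2_ur, r2_r, q, df_ur):
    # equivalent F test expressed in terms of the two R-squared values
    return ((r2_ur - r2_r) / q) / ((1 - r2_ur) / df_ur)

# (i) two restrictions (B0 = 0, B1 = 1), n = 88, unrestricted df = 88 - 2
F1 = F_from_ssr(209448.99, 165644.51, q=2, df_ur=86)
# (ii) three added regressors jointly zero (assumed), n = 88, unrestricted df = 88 - 5
F2 = F_from_r2(0.829, 0.820, q=3, df_ur=83)
print(round(F1, 2), round(f.ppf(0.99, 2, 86), 2))
print(round(F2, 2), round(f.ppf(0.99, 3, 83), 2))
```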


15. Based on the data for 1965 – IQ to 1983 – IVQ (n = 76), the following results were
obtained in the regression model to explain the personal consumption expenditure:
Ŷi = −10.96 + 0.93X2i − 2.09X3i
t = (−3.33) (249.06) (−3.09)       R² = 0.996
where, Y = PCE in billion rupees
X2 = the disposable income in billion rupees
X3 = the prime rate (%) charged by banks
(a) What is the marginal propensity to consume (MPC), i.e., the additional consumption expenditure per unit of additional disposable income?
(b) Is the MPC, statistically different from 1? Show the appropriate testing
procedure.
(c) What is the rationale for inclusion of the prime rate variable in the model? A priori,
would you expect a negative sign for this variable?
(d) Is b3 statistically different from zero?
(e) Test the hypothesis that R2 = zero?
(f) Compute the standard error for each coefficient.

16. The monthly salary (WAGE, in hundreds of rupees), age (AGE, in years), number of years of experience (EXP, in years), and number of years of education (EDU) were obtained for 49 persons in a certain office. The estimated regression of wage on the characteristics of a person was obtained as follows (with t statistics in parentheses):
Wage = 632.244 + 142.510EDU + 43.225 EXP - 1.913 AGE
(1.493) (4.008) (3.022) (- 0.22)
(i) The value of adjusted R2, = 0.277. Using this information, test the model for
overall significance.
(ii) Test the coefficient of EDU and EXP for statistical significance at 1% level and
coefficients for age at 10% level.

17. Using quarterly data for 10 years (n= 40) for the U.S. economy, the following model
of demand for new cars were estimated:
NUMCARSi = B1 +B2 PRICEi + B3 INCOMEi + B4 INTRATEi +ui
Where
NUMCARS: Number of new car sales per thousand people
PRICE: New car price index
INCOME: Per capita real disposal income (in dollars)
The table below gives estimates of the coefficients and their standard errors:

ESTIMATES OF COFF STD ERROR

CONSTANT -7.4534 13.5782

PRICE -.0714 .0032


INCOME .0032 .0017

INTRATE -.1537 .0491

(i) A priori, what are the expected signs of the partial slope coefficients? Are
the results in accordance with these expectations?
(ii) Interpret the various slope coefficients and test whether they are
individually statistically different from zero. Use 10% level of significance.
(iii) The adjusted R squared reported for this model is 0.758. Test the Model
for overall goodness of fit at 5% level of significance.
18. A multiple regression analysis between yearly income (Y, in $1,000s), college grade point average (X1), age of the individuals (X2), and the gender of the individual (X3, zero representing male) was performed on a sample of 10 people, and the following results were obtained.

Coefficients Standard errors


Constant 4.0928 1.4400
X1 10.0230 1.6512
X2 0.1020 0.1225
X3 -4.4811. 1.4400

Analysis of variance
Source of Variation    Degrees of Freedom    Sum of Squares    Mean Square
Regression                                   360.59
Error                                        23.91

(i) Write the regression equation for the above.


(ii) Interpret the meaning of the coefficients of X 3.
(iii) Compute the coefficient of determination.
(iv) Is the coefficient of X1 significant? Use α = 0.05
(v) Is the coefficient of X2 significant? Use α = 0.05.
(vi) Is the coefficient of X3 significant? Use α = 0.05.
(vii) Complete the ANOVA table
(viii) Perform an F test and determine whether or not the model is significant.

19. A three variable regression gave the following results:


Source of Variation Sum of squares d.f. Mean sum of squares
Due to regression (ESS) 65,965 - -
Due to residual (RSS) - - -
Total (TSS) 66,042 14
(i) What is the sample size?
(ii) What is the value of RSS?


(iii) What are the d.f. of ESS and RSS?


(iv) What is R2 and adj. R2?
(v) Test the hypothesis that X2 and X3 have zero influence on Y. Which test do you
use and why?
(vi) From the preceding information can you determine the individual
contribution of X2 and X3 toward Y?
(vii) Recast the ANOVA table in terms of R2

20. Child Mortality Rate (CMR) for 25 countries was regressed on Female Literacy Rate
(FLR) and per capita GDP (PCG). The following results were obtained:

ĈMRi = 263.64 − 0.0056PCGi − 2.2316FLRi

se = (11.59) (0.0019) (0.2099)


R2 = 0.7077, ADJ.R2 = 0.6981

(i) Interpret the regression results.


(ii) Are the coefficients of regression significant independently and jointly?
(iii) If by adding another explanatory variable R2 increases to 0.77. Will this imply
that this inclusion is justifiable?

21. You are given the following regression models, compute adjusted R 2 for each of the
model and hence decide which of these a better fit is:

Model    Dependent Variable    Intercept Term    Age    No. of Bidders    R²

A Auction Price 1328.094 - - 0.00

B Auction Price -191.6662 10.4856 - 0.5325

C Auction Price 807.9501 - 54.5724 0.1549

D Auction Price -1336.049 12.7413 85.7640 0.8905

n = 32 for each model. Also compare model B and D using method of restricted least
squares.
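A small sketch of the adjusted R² comparison in Question 21, using adj R² = 1 − (1 − R²)(n − 1)/(n − k) with n = 32; the k values below (1, 2, 2 and 3 estimated coefficients) are assumptions read off the table, not stated in the question.

```python
def adj_r2(r2, n, k):
    # adjusted R-squared for a model with k estimated coefficients
    return 1 - (1 - r2) * (n - 1) / (n - k)

models = {"A": (0.00, 1), "B": (0.5325, 2), "C": (0.1549, 2), "D": (0.8905, 3)}
for name, (r2, k) in models.items():
    print(name, round(adj_r2(r2, 32, k), 4))
```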
22. Based on a sample of 38 countries the following regression was obtained:
Ŷi = 414.4583 + 0.0523X1i − 50.0476X2i
se = (266.4583) (0.0018) (9.9581)
t = (1.1538) (28.2742) (−5.0257)
R² = 0.916,  Adj. R² = 0.9594,  F = 439.22


Where, Y = expenditure on education (billions of Rupees)


X1 = GDP (billions of Rupees)
X2 = Population (billions of people)
(i) Test whether the partial slope coefficients of GDP and population are
individually statistically significant at 5% level of significance.
(ii) Test whether jointly both GDP and population significantly explain variation
in the dependent variable. Use α = 5%.
(iii) Now if we impose the restriction that slope coefficient on population is Zero.
We obtain the following regression:

Yi = 386.482 + 0.0732X1i
se = (268.421) (0.0049)
t = (1.4398) (14.9397)
R2 = 0.8978, ADJR2 = 0.8823 F = 436.81

Test whether this restriction is statistically valid. If the dependent variables do not have the same form, which alternative test would you use?

23. How are the regression coefficients, TSS, RSS, ESS and the coefficient of determination affected by a change of origin and a change of scale?
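A quick numerical illustration of Question 23 on simulated data (an assumed data generating process): rescaling Y scales the coefficients and the sums of squares, shifting the origin of Y moves only the intercept, and R² is unchanged in both cases.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, 40)
Y = 5 + 2 * X + rng.normal(0, 2, 40)

def fit(y, x):
    # simple OLS with intercept; returns b1, b2, TSS, RSS and R^2
    xd, yd = x - x.mean(), y - y.mean()
    b2 = (xd * yd).sum() / (xd ** 2).sum()
    b1 = y.mean() - b2 * x.mean()
    e = y - b1 - b2 * x
    tss = (yd ** 2).sum()
    rss = (e ** 2).sum()
    return b1, b2, tss, rss, 1 - rss / tss

print(fit(Y, X))             # original units
print(fit(1000 * Y, X))      # change of scale in Y: coefficients scale; R^2 unchanged
print(fit(Y + 100, X))       # change of origin in Y: only the intercept shifts
```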

EXAM STYLE QUESTIONS


1. Let X2 be the hours spent on mathematics coaching during a week. Let X3 be the time spent on other subjects and Y be the scores obtained in the mathematics final exam. The following summations for 23 students were obtained as below:
X̄2 = 10,  X̄3 = 5,  Ȳ = 12,  n = 23
Σx2i² = 12,  Σx2ix3i = 8,  Σx3i² = 12,  Σx2iyi = 10,  Σx3iyi = 8,  Σyi² = 10

x2, x3 and y are variables measured in deviation form.

i) Estimate the following regression coefficient Yi = β1+β2X2i+β3X3i+ ui


ii) Estimate the standard errors of the slope coefficients
iii) Obtain 𝑅2 of the regression.
iv) Interpret the slope coefficients and comment on their statistical significance .
[Eco(h) 2022]

2. The estimated equation for sales of TV is given below:

Ŝales = 118.91 − 7.908 Price + 1.863 Advert

(se)      (6.35)    (1.096)    (0.953)        R² = 0.448,  n = 30


Where Price of TV measured in Rs.

Sales is sale revenue and Advert is advertising expenditure. Both Sales and Advert are
measured in terms of thousands of rupees.

i. Is the slope coefficient of price statistically different from 1? Test at 𝛼 = 2%.


ii. Calculate the elasticity of sales revenue with respect to price if average sales
revenue is 300 and average price is 100?
iii. How would you test that an increase in advertising expenditure will bring an
increase in sales revenue that is sufficient to cover the increased advertising
expenditure? Clearly state the Null and alternative hypothesis. Test at 𝛼 =5%.
iv. Estimate the sales revenue for a price of Rs. 6 and an advertising expenditure of
Rs. 1200. [Eco(h)2022]

3. Consider the following data on hourly wage rates (Y), Labour productivity (𝑋1 ) and
literacy rate (𝑋2 ) in a country ABV:

𝑌 90 72 54 42 30 12

𝑋1 3 5 6 8 12 14

𝑋2 16 10 7 4 3 2

i. Calculate the estimators of the regression 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖


ii. Test the hypothesis 𝛽2 = 0 against the alternative 𝛽2 > 0 at 5% level of significance.
iii. Calculate R2 and 𝑅̅2 and comment on them.
iv. Construct an ANOVA table and check for the significance of the regression at 5%
level of significance.
v. Do you think that Cov (u, x) will be non-zero in the model which has low R2?
Explain. [Eco(h)2021]

4. Using time series data for 1979 to 2009 for a certain economy, the following model
of demand for money was estimated.

𝑀𝐷𝑖 = 𝐵1 + 𝐵2 𝑌𝑖 + 𝐵3 𝐼𝑁𝑇𝑅𝐴𝑇𝐸𝑖 + 𝑢𝑖

Where

MD = Quantity of money demanded, measured in billions of rupees.

Y = National income, measured in billions of rupees

INTRATE = Interest rate in percent on 3 month treasury bills

The table below has estimates of the coefficients and their standard errors


Variable Estimates of coefficients Standard errors

CONSTANT 0.003 0.009

Y 0.530 0.112

INTRATE -0.0261 0.101

a) Interpret the slope coefficients.


b) Test the overall significance of the model. at 5% level of significance, if coefficient
of determination reported for the model is 0.519. [Eco(h) 2016]

Answers to Objective Questions

1) c 2) b. 3) a. 4) b. 5) b. 6) d 7) c. 8). b. 9) c. 10) b. 11) a. 12) b.

13) d. 14) b. 15) d. 16) b. 17) b. 18)a. 19) d. 20) c. 21) d. 22) b


CHAPTER-3

FUNCTIONAL FORM OF REGRESSION MODEL


OBJECTIVE TYPE QUESTION
Choose the Best alternative for each question
1. For a regression through the origin, the intercept is equal to
a. 1
b. 2
c. 0
d. -1

2. If in Yi = β1 + β2Xi + ui, both Y and X are standardized variables, the intercept term will be
a. Positve
b. Negative
c. Between -1 and +1
d. Equal to zero

3. In double log regression model, the regression slope gives


a. The relative change in Y for an absolute change in X
b. The percentage change in Y for a given percentage change in X
c. The absolute change in Y for a percent change in Y
d. By how many units Y changes for a unit change in X

4. In Log-Lin regression model, the slope coefficient gives


a. The relative change in Y for an absolute change in X
b. The percentage change in Y for a given percentage change in X
c. The absolute change in Y for a percent change in Y
d. By how many units Y changes for a unit change in X

5. In Lin-Log regression model, the slope coefficient gives


a. The relative change in Y for an absolute change in X
b. The percentage change in Y for a given percentage change in X
c. The absolute change in Y for a percent change in X
d. By how many units Y changes for a unit change in X

6. In double log model, elasticity of Y with respect to X is given by


a. β2
b. β2(X/Y)
c. β2X
d. β2(1/Y)

7. In Log-Lin model, elasticity of Y with respect to X is given by


a. β2
b. β2(X/Y)
c. β2X


d. β2(1/Y)

8. In Lin-Log model, elasticity of Y with respect to X is given by


a. β2
b. β2(X/Y)
c. β2X
d. β2(1/Y)

9. In linear model, elasticity of Y with respect to X is given by


a. β2
b. β2(X/Y)
c. β2X
d. β2(1/Y)

10. When comparing r2 of two regression models, the models should have the same
a. X variables
b. Y variables
c. Error term
d. Beta coefficients

TRUE/FALSE

1. In regression through origin models, the conventionally computed R 2 may not be


meaningful?
2. In a double log model ln Yi = A + B ln Xi + vi, the slope coefficients are different from
elasticity coefficients?
3. In log-linear regression models, the magnitude of the estimated slope coefficients is
invariant to the units in which the explanatory variables are measured, unlike linear
models?
4. Log-Log Model is also called growth rate model?
5. The Cobb-Douglas production function is a log-lin model?
6. In a double log model, the elasticity of the function is constant, while in a linear regression
model the slope of the function is constant?
7. Log-lin models and Lin- log models are comparable on the basis of 𝑅2 ?
8. In a double log model, the regression line passes through the mean of X and the mean of Y?
9. Under the regression through origin model, V(Ŷ0) = X0² σ²/ΣXi² ?

Practical Questions

Double Log Models

1. The OLS Regression based on the log-linear data gave the following results:


ln Ŷt = 4.8877 + 0.1258 ln Xt
se = (0.1573) (0.0148)
t = (31.0740) (8.5095)
p = (1.25 x 10^-9) (2.79 x 10^-5)    r2 = 0.9005, n = 10

Where Y= Math’s Score


X = Family Income
(i) Interpret the intercept and slope term.
(ii) Interpret the coefficient of determination.
(iii) Test the significance of regression coefficients.
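A regression like this can be replicated with any OLS routine by taking logs of both series first; the slope is then read directly as the elasticity of Y with respect to X. The Python sketch below uses statsmodels with made-up numbers (the arrays are placeholders, not the score and income data behind this question).

```python
# Minimal sketch: estimating a double-log (constant elasticity) model.
# The data arrays are illustrative placeholders, not the data used in this question.
import numpy as np
import statsmodels.api as sm

X = np.array([12, 16, 20, 25, 31, 38, 44, 50, 57, 63], dtype=float)            # e.g. family income
Y = np.array([410, 440, 470, 500, 540, 570, 600, 620, 650, 670], dtype=float)  # e.g. test score

lnX = sm.add_constant(np.log(X))      # regressors: intercept and ln(X)
model = sm.OLS(np.log(Y), lnX).fit()  # regress ln(Y) on ln(X)

print(model.params)    # slope = elasticity of Y with respect to X
print(model.tvalues)   # t ratios for significance tests
print(model.rsquared)  # r-squared of the log-log fit
```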

2. Based on 11 annual observations the following results were obtained:


Model A:
Ŷt = 2.6911 - 0.4795 Xt
se = (0.1216) (0.1140)    r2 = 0.6628

Model B:
ln Ŷt = 0.7774 – 0.2530 ln Xt
se = (0.0152) (0.0494)    r2 = 0.7448
Where Y= cups of coffee consumed per person per day
X= the price of coffee in rupees per cup.

(a) Interpret the slope coefficient in two models.


(b) You are told that Y̅ =2.43 and X̅= 1.11. At these mean values, estimate the price
elasticity for the model A.
(c) What is the price elasticity for the model B?
(d) From the estimated elasticities, can you say that the demand for coffee is price
inelastic?
(e) How would you interpret the intercept in the model B?
(f) Since r2 of Model B is larger than that of model A, Model B is preferable to
Model A. Comment on this statement.

3. Using 21 annual observations, the following equation for demand for a good was
estimated using OLS:
ln Ŷt = 1.71 – 0.35 ln X1t + 0.47 ln X2t        R2 = 0.876, adjusted R2 = 0.843
se = (0.059) (0.083) (0.083)

Where,
Y = No of units demanded
X1 = Price of goods ( Rs. Per unit)
X2 = Consumer’s income
(i) Test at α =5% whether the good has unit income elasticity against the
alternative that the demand for the good is income inelastic.


(ii) Test the overall significance of the regression.

4. For the data for 46 states in USA for 1992following regression result was obtained:
ln Ĉ = 4.30 + 1.34 ln P + 0.17 ln Y
se = (0.91) (0.32) (0.20)    adjusted R2 = 0.27

Where C = cigarette consumption packs per year


P= Real price per pack
Y= Real disposable income per capita
(i) What is the elasticity of demand for cigarettes with respect to price and
income? Are they statistically significant? If not, why not?
(ii) How would we obtain R2 from Ṝ2 given above? Then test for overall
significance of regression.

5. You are given the following Cobb Douglas Production function:


ln Ŷi = -1.65 + 0.34 ln Li + 0.85 ln Ki
t = (-2.73) (1.83) (9.06)    R2 = 0.995    n = 22

(i) Interpret the partial regression coefficients


(ii) Find the returns to scale.
(iii) Test for the significance of the partial regression coefficients. Will you use a one
tail or a two tail test?
(iv) What can you say about the overall significance of the regression model?

6. From the following regression function:


ln Ŷi = 1.5195 + 0.9972 ln X2i – 0.3315 ln X3i
se = (0.903) (0.0191) (0.0243)    R2 = 0.994    n = 23

Where Y = final demand


X2 = Real GDP
X3= Real energy price
(i) Interpret the partial regression coefficient
(ii) Test for the significance of partial regression coefficient. Will you use a one
tail or two tail test?
(iii) What can you say about the overall significance of the regression model?
(iv) Compute the value of adj. R2 for the above model.

7. Consider the Cobb-Douglas production function in its logarithmic form as follows:

ln Yi = B1 + B2 ln Li + B3 ln Ki + ui
where, Y = Output
L = Labor input
K = Capital input


Suppose the following production function is estimated


ln (Y/L)i = B1 + B3 ln (K/L)i + vi

(i) What restriction has been imposed on the Cobb-Douglas production function
to obtain this estimated production function?
(ii) How will you test the validity of this restriction?
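The usual way to check such a restriction (here constant returns to scale, B2 + B3 = 1) is the restricted-versus-unrestricted F test. The Python sketch below shows the computation with placeholder values for the residual sums of squares; m denotes the number of restrictions and none of the numbers come from this question.

```python
# Minimal sketch: F test of a linear restriction such as constant returns to scale.
# All numbers below are placeholders, not results quoted in the text.
from scipy import stats

rss_r, rss_ur = 1.10, 0.85   # residual sums of squares from restricted and unrestricted fits
n, k, m = 30, 3, 1           # observations, parameters in the unrestricted model, restrictions

f_stat = ((rss_r - rss_ur) / m) / (rss_ur / (n - k))
p_value = 1 - stats.f.cdf(f_stat, m, n - k)
print(f_stat, p_value)       # reject the restriction if p falls below the chosen level
```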

Semi Log Models


1. From the data based on population of USA (millions of people) for the years 1975 to
2007 the following regression model was obtained:
ln Ŷt = 5.3593 + 0.0107 t
t = (3321.13) (129.779)    R2 = 0.9982

Where Y = population of USA (millions of people)


t = time period (in year)
(i) Interpret the Intercept term
(ii) Interpret the slope term.
(iii) Find the instantaneous growth rate as well as the compound growth rate.
(iv) Test the significance of regression coefficients.
(v) Interpret r2.
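As a reminder, in a log-lin trend model the instantaneous growth rate is the slope itself, while the compound rate is e^b2 - 1. A quick Python check of the arithmetic, using the slope 0.0107 reported above:

```python
# Minimal sketch: growth rates implied by a log-lin trend regression ln(Y) = b1 + b2*t.
import math

b2 = 0.0107                            # estimated slope from the regression above
instantaneous = b2 * 100               # instantaneous growth rate, in percent per period
compound = (math.exp(b2) - 1) * 100    # compound growth rate, in percent per period
print(instantaneous, compound)
```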

2. Consider the following equation:


ln(Sal)̂i = -5.10 + 0.100 EDUi + 0.110 EXPi
se = (0.025) (0.050)
R2 = 0.48,    n = 28
Where In (Sal)i = log of salary of ith worker
EDi = Years of education of ith worker
EXPi = Years of experience of ith worker

(i) Interpret the equation. Make appropriate hypothesis for signs of coefficient
and test your hypothesis.
(ii) What are the elasticity of salary with respect to education and experience?
(iii) If we run a linear regression instead of log-linear regression then how would
the interpretation change?
3. To determine how expenditure on service (Y) behaves if total personal expenditure
(X) rises by a certain percentage, the following regression model was obtained:
Ŷt = -12564.8 + 1844.22 ln Xt
se = (916.351) (114.32)    r2 = 0.881    n = 20

(i) Interpret the intercept term


(ii) Intercept the slope term
(iii) Test the significance of regression coefficient.
(iv) Interpret r2.


4. Consider the following regression for cross sectional data for 55 rural households in
India. The regressand in this equation is expenditure on food and the regressor is
total expenditure (a proxy for income).
FEXP̂i = -1283.912 + 257.27 ln (TEXP)i
t = (-4.3848)* (5.6625)*    r2 = 0.3769
Note: *denotes an extremely small p-value.

(i) What is the interpretation of coefficient of In (TEXP)?


(ii) Would you say that Engel’s Law is validated for this sample? Explain.

5. Consider the following population regression function


ln (Div)t = β1 + β2 ln (PRFT)t + β3 Time + ut
Here, Div. = Corporate Dividends Paid
PRFT = Corporate Profits
In = Natural Logarithms
The estimated sample regression results for an economy for 244 quarterly
observation are presented below:

Coeff. Standard Errors t-statistic Prob-value


Intercept 0.4357 0.1921 2.2674 0.0243
In (PRFT) 0.4245 0.0777 5.4614 0.0000
Time 0.0126 0.0014 8.93 0.0000
R2 = 0.9914,    adj. R2 = 0.9913

Sum of Squared Residuals = 4.2657, F – Statistic = 13930.73


SE of Regression = 0.133 Prob (F-statistic) = 0.0000
Durbin – Watson Statistic = 0.0201

(i) What are the economic interpretations of β2^and β3^?


(ii) On what counts would a researcher be satisfied with these results at a first
glance? Verify your conjectures using formal tests. For tables take the
closest value of n.

RECIPROCAL MODEL
1. Based on annual percentage change in wage rates, Y, and the unemployment rate, X,
for the United Kingdom for the period 1950-1966 the following results were obtained:
Ŷi = -1.4282 + 8.02743 (1/Xi)
se = (2.0675) (2.8478)    r2 = 0.3849
(i) What is the interpretation of 8.02743?
(ii) Test the hypothesis that the estimated slope coefficient is not different from
zero. Which test will you use?
(iii) How would you use the F test to test the preceding hypothesis.


(iv) Given that Y = 4.8 percent and X = 10.5 percent, what is the rate of change of
Y at these mean values?
(v) What is the elasticity of Y with respect to X at these mean values.
(vi) How would you test the hypothesis, is that true r 2 =0?

2. The percentage change in the index of hourly earnings (Y) and the civilian
unemployment rate (X) for the United States for the year 1958 to 1969 gives the
following regression model:
Ŷi = -0.2594 + 20.5880 (1/Xi)
t = (-0.2572) (4.3996)    r2 = 0.6594
(i) What is the wage floor?
(ii) Interpret the slope term.
(iii) Test the significance of regression coefficients.
(iv) Interpret r2.
(v) The linear model for the same data is
Ŷi = 8.0147 – 0.7883 Xi
t = (6.4625) (-3.2605) r2 = 0.5153
(a) Is the positive slope in the reciprocal model analogous to the negative slope in
the linear model?
(b) Compare the slope terms of two models.
(c) Compare r2 for two models.

Polynomial Regression
1. The following regression considers the relationship between lung cancer and
smoking for 43 states in India:
Yi = β1 + β2Xi + β3X2i + ui
Where, Y = number of deaths from lung cancer.
X = number of cigarettes smoked.
Results are as follows:
Predictor Coeff. Std. error t p
Constant -6.910 6.193 -1.12 0.271
X 1.5765 0.4560 3.46 0.001
X2 -0.019 0.008 -2.35 0.024
R2 = 0.564, ADJ. R2 = 0.543
F P
Residual sum of squares 311.69 26.56 0.00
Sum of squares regression 403.89
(i) Interpret the above regression
(ii) Test the individual significance of regression coefficients. Which test do
you and why? (Use α = 5%)
(iii) Construct an ANOVA table for the problem and test for the overall
significance of the model. (Use α =5%)


2.The OLS regression results based on the Cost (Y) and Output (X) are as follows:
Ŷi = 141.7667 + 63.4776 Xi – 12.9615 Xi² + 0.9396 Xi³
se = (6.3753) (4.7786) (0.9857) (0.0591)
R2 = 0.9983,    n = 10
(i) Does this model represent the cost function; explain by testing the
coefficient in the model.
(ii) Test the significance of the regression coefficient.
(iii) Construct an ANOVA table for the problem and test for the overall
significance of the model. (Use α =5%)
(iv) Find the average and marginal cost curves.

Regression Through Origin

1. Based on monthly data from January 1978 to December 1987, the following
regression results were obtained:
Model 1 : Ŷt = 0.00681 + 0.758 Xt        r2 = 0.4406
t = (0.262) (2.80)
p = (0.798) (0.0186)
Model 2 : Ŷt = 0.762 Xt        r2 = 0.4368
t = (2.954)
p = (0.0131)
Where, Y = monthly rate of return on Texaco common stick in %.
X = monthly market rate of return in %
(i) What is the difference between two regression models?
(ii) Would you retain the intercept term in model 1? Why or why not?
(iii) How would you interpret the slope term in the two models?

2. The following two models are based on the returns on a future fund (Y) and the
term on the market portfolio(X) for the period 1971-1980:
Model A:
Ŷi = 1.2797 + 1.0691 Xi
se = (7.6886) (0.2383) r2 = 0.7115
Model B:
Ŷi = 1.0899 Xi
se = (0.1916) raw r2 = 0.7825

(i) Test the significance of the intercept term in model A. Does this justify
model B?
(ii) If the intercept term is absent, the slope term can be estimated with far
greater precision. Explain with the help of the above models.
(iii) Can we compare the r2 of two models?


3. Consider the regression through the origin model:


Yi = B2Xi + ui
a) Write the normal equation and use it to derive the ordinary least square
estimator b2 of B2?
b) Show that b2 is a linear and unbiased estimator of B2.
c) Explain why the sum of the estimated residual, Σei need not be zero in this
regression model.
4. Consider the following regression:
Yi = β2Xi +ei
(i) How would you go about estimating the unknowns?
(ii) Will ∑ei = 0 for this model? Why or why not?
(iii) Will ∑eiXi = 0 for this model? Explain.
(iv) Will ? Explain.
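For reference, the normal equation for the no-intercept model and the estimator it yields can be written out as follows (a standard derivation; note that only ΣXiei = 0 is imposed, so Σei need not be zero):

```latex
% OLS for the regression-through-origin model  Y_i = B_2 X_i + u_i
\begin{aligned}
\min_{b_2}\sum e_i^{2} &= \sum\left(Y_i - b_2 X_i\right)^{2}\\
\frac{d\sum e_i^{2}}{d b_2} = -2\sum X_i\left(Y_i - b_2 X_i\right) &= 0
\;\Longrightarrow\; b_2=\frac{\sum X_i Y_i}{\sum X_i^{2}},\qquad
\operatorname{var}(b_2)=\frac{\sigma^{2}}{\sum X_i^{2}}
\end{aligned}
% Only \sum X_i e_i = 0 is imposed by the single normal equation, so \sum e_i need not be zero.
```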

EXAM STYLE QUESTIONS

1. The relationship between infant mortality rate (IMR) and the expenditure on
immunization programmes for children (IMMUN) in lakhs of rupees for 63 districts
of India is postulated by the following two alternate models :

Model A: IMRt = 𝛼1 + 𝛼2 IMMUNi + ui

Model B: IMRt = 𝛽1 + 𝛽2 IMMUNi + 𝛽3 IMMUNi2 + vi

The R2 for Model A and Model B are obtained as 0.6152 and 0.8254 respectively. Use a
suitable test at 5% significance level to decide which model would you prefer-restricted
or unrestricted. State the null and alternate hypothesis clearly.

2. Consider the following Cobb Douglas production function estimated for Taiwan for
the period: 1965-1974.

ln GDP̂t = -1.6624 + 0.3397 ln Lt + 0.8460 ln Kt

t= (-2.725) (1.8296) (9.0626)

R2 = 0.9951

RSSUR = 0.0136

where GDPt = GDP at time t,

Lt = labour at time t,

Kt = capital at time t

In = natural logarithms:


i. Interpret the coefficients of the regression and comment on their individual


significance.
ii. Comment on the returns to scale experienced by the Taiwanese economy.
iii. By imposing the restriction of constant returns to scale, the following regression
was obtained:
ln (GDP/L)t = −0.4947 + 1.0153 ln (K/L)t
t = (-4.0612). (28.1056)

𝑅2 = 0.977 𝑅𝑆𝑆𝑅 = 0.0166

Interpret the above regression.


Use a test statistic to see whether the economy is characterized by constant
return to scale. [Eco(h) 2015 ]

3. Consider the following model of monthly rents paid on rental units in industrial hub
cities of an economy:

ln (rent) = B0 + B1 ln (pop) + B2 ln (avinc) + B3 (socind) + u

where

rent = average monthly rent paid in rupees

pop = city population

avinc = average city income in rupees

socind = index of social infrastructure

i. How will you test the hypothesis that city population and social infrastructure
have no significant joint effects on monthly rents? Explain the steps involved in
the test with reference to the above model.
ii. Suppose b1 is estimated as 0.066. What is wrong with the statement: "A 10%
increase in population is associated with a 6.6% increase in monthly rent".
[Eco(H) 2014]
4. Find the slope and elasticity of Y with respect to X for the following functional forms:
a) In Y = B1 – B2 (1/X)
b) Y = B1 + B2 In X. [Eco(h) 2013]

5. Consider the following models:

Model 1: ln Yi* = α1 + α2 ln Xi* + ui*

Model 2: ln Yi = β1 + β2 ln Xi + ui

Where Yi* = w1 Yi and Xi* = w2 Xi, the w's being constants.


i. Establish the relationships between the two sets of regression coeffcients and
their standard errors.
ii. Is the R2 different between the two models? [Eco(h) 2019]

6. Suppose the CLRM applies to 𝑌𝑖 = 𝛽2 𝑋𝑖 + 𝜀𝑖 .


i) Find the slope coefficient in the regression of Y on X.
ii) Suppose now we run the regression of X on Y, 𝑋𝑖 = 𝛾2 𝑌𝑖 + v𝑖 . Is the slope coefficient
of the regression of X on Y the inverse of the slope of the regression of Y on X? [Eco(h) 2019]

7. Two models for Engel expenditure function are estimated.


Model 1 : 𝑌𝑡 = 1087.930 + 0.077𝑋𝑡
t = (25.58) (21.64) R2 = 0.350 F = 468.645
Model 2 : 𝑌𝑡 = 4005.077 + 0.3381/𝑋𝑡
t = (19.259) (-20.816) R2 = 0.333 F = 433.310
where Yt = expenditure on food in rupees, Xt = total expenditure in rupees
i. Interpret all coefficient value of the two models.
ii. Are the sign of the coefficients in the two models contradictory?
iii. Can we compare the results of the two models?
iv. Diagrammatically show the sample regression function in the above model.
[Eco(h) 2019]

8. Data is available on per unit cost (Y in Rs) of a manufacturing firm over a 20-year
period, and index of its output (X). Following results were obtained:

𝑌̂𝑡 = 10.522 - 0.175 𝑋𝑡 + 0.000895 𝑋𝑡2

t = (14.3) (-9.7) (7.8)

R2 = 0.978 TSS= 5700

i. Interpret the signs of the two slope coefficients in the above regression.
ii. At what level of output will the average cost function be minimum?
iii. Compute adjusted R². Is adjusted R² always less than R²? Justify your answer.
iv. Test the hypothesis that the variance of per unit cost (σ²) over this 20-year period equals 20,
against the alternative that it is not equal to 20. Use 5% level of significance.
v. Would your answer remain the same if a 95% confidence interval is constructed
to test the same hypothesis? Construct the interval and justify your answer.

[Eco(h) 2023]
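Part (iv) of the question above is a chi-square test on a population variance. The general computation is sketched below in Python; the sample variance used here is a placeholder, since the actual sample variance of per unit cost is not reported in the text.

```python
# Minimal sketch: chi-square test for a population variance.
# s_squared is a placeholder; sigma0_squared is the hypothesized variance under H0.
from scipy import stats

n, s_squared, sigma0_squared = 20, 25.0, 20.0
chi2_stat = (n - 1) * s_squared / sigma0_squared
crit_lo = stats.chi2.ppf(0.025, n - 1)
crit_hi = stats.chi2.ppf(0.975, n - 1)
print(chi2_stat, (crit_lo, crit_hi))   # reject H0 if the statistic falls outside the interval
```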
9. Consider the model

𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖

Where,

𝑌𝑖 is the long term consumption measured in Rs thousands


𝑋2𝑖 is the income measured in Rs thousands

𝑋3𝑖 is the age measured in years

a) How will the estimated intercept and slope coefficients change if the unit of
measurement of income is changed to Rs lakhs.
b) Suppose the researcher thinks that usually consumption increases with income
but at a decreasing rate and consumption increases with age. How would he
modify the model to see whether the data supports his hypothesis?
c) Suppose the researcher wants to assess the relative importance of age and
income on long term consumption, what model should he estimate? Explain.
[Eco(h) 2021]

10. Following is the demand schedule for commodity 𝑥


𝐷𝑥 = 𝑓(𝑃𝑥 , 𝑃𝑦 , 𝑌)
Where 𝐷𝑥 is the demand for commodity 𝑥, 𝑃𝑥 is its price, 𝑃𝑦 is the price of related
commodity y andY is the income of the consumer. How do you measure the elasticity
of demand with respect to own price and the price of related commodity y if you use (i)
a double log model, (ii) a linear model. [Eco(h) 2017]

11. Consider the Cobb-Douglas production function: [Eco. (H) III Sem. 2017(ER)]
Qt = e^α Kt^β Lt^γ e^u
Where, 𝑄 denotes output, K denotes capital input and L denotes labour input and e =
2.71828.
(a) Formulate a model that can be used to estimate the parameters a, 𝛽 and 𝛾 using
ordinary least squares.
(b) Show that this model implies a constant partial elasticity of output with respect to
labour but a variable marginal effect of labour on output. [Eco(h) 2020]

12. The following regression model was estimated using annual time-series data for the
period 1990-2012 for a certain country:

̂ 𝑡 = 𝑏1 + 𝑏2 𝐼𝑛𝑋2𝑡 + 𝑏3 𝐼𝑛𝑋3𝑡
𝐼𝑛𝑌

Where Yt = demand for cheese (in kg.)

𝑋2 = disposable income (in Rs. '000)

𝑋3 = price of cheese (in Rs. per kg.)

The results are summarized in the following table:

Coefficient Standard error


Intercept 2.03 0.116

𝑋2 0.45 0.025

𝑋3 -0.377 0.063

i) Interpret the partial slope coefficients. [Eco(h) 2017]


ii) If the calculated F statistic for the estimated model is 492.513, what is its R 2?

13. Consider the following regression model:

ln Y = B0 + B1 ln (X1) + B2 ln (X2) + B3 ln (X3) + B4 ln (X4) + ui

Where

𝑌 = per capita consumption of potatoes in kg.

𝑋1 = per capita income in Rs. '000.

𝑋2 = price of potatoes in Rs. per kg.

𝑋3 = price of cauliflower in Rs. per kg.

𝑋4 = price of cabbage in Rs. per kg.

(i) How will you test the joint hypothesis that potato consumption is not affected
by the prices of cabbage and cauliflower? Explain the steps involved in the test
with reference to the above model.
(ii) If the estimated value of b1 is 200, it means "a 1% increase in income is
associated with a 200% increase in per capita consumption of potatoes;
everything else kept constant." Is the above interpretation correct ? Explain.

14. Based on the data on GNP and money supply for the period 1965-2006 for India,
the following regression results were obtained by regressing GNP (in billions of
Rupees) on money supply (in billions of Rupees) for alternate models :

Model Intercept Slope Coefficient 𝑅2


Coefficient

Log-linear 0.8726 0.7839 0.927

(11.40) (108.93)

Log-lin 6.2392 0.0002 0.852

(75.85) (12.07)

Lin-log 14299 2383.4 0.879


(14.45) (16.84)

Linear 603.28 0.3718 0.921

(7.04) (55.58)

Where the figures in parentheses are t-ratios

(i) For each model. Interpret the slope coefficient.


(ii) For each model, estimate the elasticity of GNP with respect to money supply
(sample means of the GNP and money supply are 5113.65 and 9347.53
respectively.
(iii) Are all 𝑅2 values comparable? If not, which ones are? [Eco(h) 2015]

15. Using annual time-series data for the company 'Pure Juice' for the period 2000 - 2016,
the following equation was obtained :

ln Ŷt = 1.2028 + 0.0214 t

𝑆𝑒 = (0.0233) (0.0025)

Where 𝑌𝑡 = revenue of the company in crores at time 𝑡 and 𝐼𝑛 indicates natural log.

(i) Interpret the estimated coefficients.


(ii) Explain how the annual compound growth rate in revenues of the company
during the period can be obtained?
(iii) Using the estimated model, how can the forecast revenue for the year 2017 be
obtained? [Eco(h) 2018]
16. The sales manager of a company believes that the district sales (𝑆𝑡 ) of motor vehicles
has been growing according to the model 𝑆𝑡 = 𝑆0 (1 + 𝑔)𝑡 , where 𝑡 is the time. Average
sales is 50 units and average time is 4 years. He obtains the following OLS regression
results:

ln Ŝt = 3.6889 + 0.583 t

(i) What is the estimate of the instantaneous and compound growth rate?
(ii) What is the estimate of 𝑆0 ?
(iii) What will be the elasticity of sales with respect to time?
(iv) Suppose the researcher modifies the above equation and estimates the
following regression: 𝑆̂𝑡 = 5.6731 + 2.7530𝑡 Interpret the model.
(v) Compute elasticity of sales with respect to time for the model in part iv. Compare
your results with the answer obtained in part iii. [Eco(h) 2021]
17. Consider the following functional form :

Y = B1 + B2 X + B3 (1/X)

(i) Derive the expression for the marginal effect of Y with respect to X.


(ii) Derive the expression for elasticity of Y with respect to X and express it in terms
of X only.
(iii) Assume without loss of generality. 𝐵1 = 0 and 𝐵2 > 0, 𝐵3 > 0. For what
value of X will this function attain a minima? Draw a rough sketch for the function
[Eco(h) 2017]
18. In order to test whether the developing economies are catching up with the advanced
economies or not, a researcher regressed the growth rate of GDP of a country on its
relative per capita GDP for 119 developing countries. The relative per capita GDP of a
country is measured as a ratio of the country's per capita GDP to the GDP per capita
of USA. The regression results were obtained as under (standard errors are reported
in parentheses):

Ĝ = 0.013 + 0.062 Pi - 0.061 Pi²

s.e. = (0.04) (0.02) (0.033)

R2 = 0.053, adjusted R2 = 0.036

Where, G is the growth rate of GDP (in %)

And, P is the relative per capita GDP (in %)

(i) Interpret the above regression results.


(ii) Find the marginal effect of P on G. [Eco(h) 2013]

19. In each of the following cases suggest a suitable functional form to explain the
relationship between dependent variable and the explanatory variable. Also justify
your choice and interpret the coefficients in each case.
(i) Cobb Douglas production function
(ii) Rate of growth of population in an economy
(iii) Total cost function of a firm
(iv) Engel Expenditure Function
(v) Phillips Curve
(vi) Average salary earned by the employee conditional upon the gender of the
employee. [Eco(h) 2020]

20. Suppose a third-degree polynomial regression was fitted to a cost-output model for
18 firms and the following results were obtained:

Cost̂i = 141.7667 + 0.208 Qi − 0.086 Qi² + 0.0054 Qi³

se = (6.37) (4.78) (2.98) (6.98)

Qi is the output of the i-th firm.


i. If cost curves are to have U-shaped average and marginal cost curves, then what
are your a priori expectations about the intercept and slope estimators? Check if
these a priori expectations are satisfied for the given model.
ii. Interpret the estimated slope coefficient of Q' and test its significance at 10% level
of significance. [ECO(H)2024]

21. For a particular cafeteria, the following equation was estimated using yearly data on
cups of chocolate flavoured cold coffee sold:

log (Q̂t) = 1.534 + 0.250 log(Pt) − 0.025 log(Pt**)

𝑠𝑒 = (0.2001)(0.240)(0.016)

𝑅2 = 0.804 𝑇𝑆𝑆 = 4803

Where

𝑄𝑡 = cups of chocolate flavoured cold coffee sold

𝑃𝑡 = price (in rupees) per cup of chocolate flavoured cold coffee

𝑃𝑡∗∗ = price (in rupees) per cup of vanilla flavoured cold coffee

t = 1991-2023

Note that all cups in the cafeteria are of same size.

i. What can you say about the own-price and cross-price elasticity of demand for
chocolate flavoured coffee? Test the hypothesis that the own price elasticity of
demand of chocolate flavoured coffee is unitary elastic at 10% level of significance.
ii. Draw the ANOVA table for the regression equation. Also calculate.
[ECO(H)2024]

Answer of objective Questions


1) c 2) d 3) b 4) a. 5) c. 6) a 7) c. 8)d. 9) b. 10) b


CHAPTER-4
DUMMY VARIABLE
OBJECTIVE TYPE QUESTION
Choose the Best alternative for each question
1. Dummy variables classify the data into
a. Inclusive categories
b. Mutually exclusive categories
c. Qualitative categories
d. Quantitative categories

2. If a qualitative variable has 'm' categories, we can introduce


a. Only ‘m-1’dummy variables
b. Only ‘m’dummy variables
c. Only ‘m+1’dummy variables
d. Only ‘m*2’dummy variables

3. We are trying to estimate the differentials in average annual salary of professors


for three categories in India—those employed at a fully government aided college
(D1i), those employed at partially government aided colleges(D 2i) and those
employed at private college(D3i) Which of the following is NOT a correct functional
form?
a. Yi=β0+β1D1i +β2D2i+Ui
b. Yi=β1D1i +β2D2i+β3D3i +Ui
c. Yi=β0+β1D1i +β2D2i+β3D3i +Ui
d. LnYi=β0+β1D1i +β2D2i+Ui

4. For question (3) above, given Yi=β1+β2D2i +β3D3i+ui, β1 represents the mean
annual salary of professors working in
a. Fully government aided colleges
b. Partially government aided colleges
c. Private colleges
d. All three colleges

5. For question (3) above, mean annual salary of professors working in fully
government aided colleges is given by
a. β1
b. β1 + β2
c. β1 + β3
d. β2 + β3

6. In trying to test whether females earn less than their male counterparts, the following
model was estimated: Yi = β1 + β2Di, where Y = average earnings per day in Rs., D =
1 for females and 0 otherwise. β2 here refers to the
a. Average earnings of male
b. Average earnings of female
c. Differential intercept coefficient for male earnings
d. Differential intercept coefficient for female earnings


7. ANCOVA models include regressors that are


a. Only quantitative variables
b. Only qualitative variables
c. Only categorical variables
d. Both qualitative and quantitative variables

8. ANOVA models include


a. Only quantitative variables
b. Only qualitative variables
c. Only categorical variables
d. Both qualitative and quantitative variables

9. The process of removing the seasonal component from a time series sample data
is known as
a. Seasonalization
b. Seasonality
c. Deseasonalizstion
d. Seasonal trend testing

TRUE/FALSE

a. Dummy variables are also called stochastic variables?


b. Dummy variable trap situation arises, when there is high multicollinearity
between the explanatory dummy variable ?
c. ANCOVA model is the extension of ANOVA model?
d. Slope coefficient and differential intercept coefficient are same?
e. In a regression model with intercept, number of dummies for each
qualitative variable must be one less than the number of categories of
that variable

Practical Questions

ANOVA models With One Qualitative Variable Having Two Categories


1. Regressing food expenditure on the gender dummy variable, we obtain the
following results:

Ŷi = 3176.833 - 503.1667 Di
se = (233.0446) (329.5749)
r2 = 0.1890, n = 12

Where
Yi = Food expenditure (in Rs.)
Di = 1 for female
0 for male
(i) Find the average food expenditure of males and females.
(ii) Is there a significant difference in the average food expenditure of males and
females.


(iii) What is the benchmark category.
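Regressions of this kind are easy to replicate: code the qualitative variable as a 0/1 dummy and run OLS as usual; the intercept then estimates the mean of the base category and the dummy coefficient the difference between the two group means. A minimal Python sketch follows (the expenditure figures are invented placeholders, not the data behind this question).

```python
# Minimal sketch: ANOVA-type regression of expenditure on a 0/1 gender dummy.
# The numbers are invented placeholders, not the data used in this question.
import numpy as np
import statsmodels.api as sm

food = np.array([3100, 2900, 3300, 2600, 2750, 2500, 3400, 2650, 3200, 2700], dtype=float)
female = np.array([0, 1, 0, 1, 1, 1, 0, 1, 0, 1], dtype=float)   # D = 1 for female, 0 for male

X = sm.add_constant(female)
res = sm.OLS(food, X).fit()

print(res.params)        # intercept = mean expenditure of males; slope = female-male difference
print(res.tvalues[1])    # t ratio on the dummy: is the difference statistically significant?
```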

2. Consider the following model:


Yt = β1 + β2Dt + ui
Where Dt = 0 for first 20 observations and 1 for next 30 observations
Var (ui) = 300
(a) How would you interpret β1 and β2?
(b) What are the mean values of 2 groups?
(c) Find the Cov(𝛽̂ 1 , 𝛽̂ 2)

ANOVA Models With One Qualitative Variable Having more than Two Categories.
1. The data on average salary (in dollars) of public school teachers in 50 states and the
District Of Columbia for the year 1985 was available. These 51 areas are classified
into three geographical regions: (1) Northeast and North Central (21 states in all)
(2) South (17 states in all), and (3) West (13 states in all). The following regressions
model was obtained from the given data:
Ŷi = 26,158.62 - 1734.473 D2i - 3264.615 D3i
se = (1128.523) (1435.953) (1499.615)
t = (23.1759) (-1.2078) (-2.1776)
(0.0000)* (0.2330)* (0.0349)* R2 = 0.0901

Where, * indicates the p values.


Yi = (average) salary of public school teacher in state i
D2i = 1 if the state is in the Northeast or North Central
= 0 otherwise (i.e., in other regions of the country)
D3i = 1 if the state is in South.
= 0 otherwise (i.e., in other regions of the country)
Find:
(i) Mean salary of public school teachers in the northeast and North Central.
(ii) Mean salary of public school teachers in the South.
(iii) Mean salary of public school teachers in the West.
(iv) The benchmark category.
(v) Is the mean salary of teachers statistically different from each other?

ANOVA Models with Two Qualitative Variables

1. From a sample of 528 persons in May 1985, the following regression results were
obtained:
Ŷi = 8.8148 + 1.0997 D2i – 1.6729 D3i
se = (0.4015) (0.4642) (0.4854)
t = (21.9528) (2.3688) (-3.4462)
(0.0000)* (0.0182)* (0.0006)*

Where Y = hourly wage ($)


D2 = married status, 1 = married, 0 = otherwise
D3 = region of residence; 1 = South, 0 = otherwise
And * denotes the p values.
Find:


(i) The benchmark category.


(ii) Interpret the regression model.
(iii) Test the significance of the regression coefficients individually.

Regression with a Mixture of Quantitative and Qualitative Regressors: The ANCOVA


Models

1. The following regression results were obtained for 22 individuals, (standard error in
parenthesis)
Ŷi = 1506.244 – 228.9868 Di + 0.0589 Xi
(188.0096) (107.0582) (0.0061)
R2 = 0.9284

Where,
Y = expenditure on food ($)
Di = Gender dummy variable = 1 for female
= 0 for male
Xi = after tax income ($)
(i) Holding after tax income constant, what is the difference between mean food
expenditure of males and females at the 5% level of significance? Is the
difference statistically significant? How can you say so?
(ii) What is the marginal propensity of food consumption holding gender
difference constant?
(iii) Write and draw the regression equation for males and females separately.

2. The following regression was estimated using data from a sample of 15 houses
(standard errors are given in brackets) :

Ŷi = 200.091 + 16.186 Xi + 3.853 Di

se = (4.354) (2.578) (1.241)

Yi = assessed value of a house (in lakhs)

Xi = size of the house (in hundreds of square feet)

Di = 0 for house i, if it does not face a park = 1 for house i, if it faces a park.

i. Interpret the estimated coefficient of Di.


ii. Test whether the presence of a park in front of the house increases the assessed
value of the house, using the p-value approach and a 5% level of significance.

3. A person holding two or more jobs, one primary and one or more secondary, is
known as moonlighter. Based on a sample of 318 moonlighters, the following
regression is obtained, with standard errors in parenthesis:
Ŵm = 37.07 + 0.403 W – 90.06 race + 75.51 urban + 47.33 hisch + 113.6 region + 2.26 age
se (0.06) (24.47) (21.6) (23.42) (27.62) (0.94)
Where,


Wm = moonlighting wage
W = primary wage
Age = age in years
Race = 0, if white, 1 if non – white,
Urban = 0 if non urban, 1 if urban
Region = 0 if non west, 1 if west
Hisch = 0 if non graduate, 1 if high school graduate
Derive the wage equations for the following type of moonlighters
(i) White, non urban, western resident and high school graduate.
(ii) Non white, urban, non western resident and non high school graduate.
(iii) White, non urban, non western resident, and high school graduate.

4. You are given the following estimated double log model for cigarette consumption in
Turkey.
The results are based on 29 observations, for the period 1960 – 1988. The variables are
described
as follows:
InQ = Logarithm of cigarette consumption per adult (dependent variable)
InY = Logarithm pf per capita GNP in 1968 prices (in Turkish Liras)
InP = Logarithm of real price of cigarettes (in Turkish Liras per kg)
D82 = 1 for 1982 onward 0 before that
D86 = 1 for 1986 onward 0 before that

log Q̂ = -4.997 + 21.793(D82) – 28.29(D86) + 0.732(ln Y) + 2.602(D82)(ln Y)
+ 3.928(D86)(ln Y) – 0.371(ln P) + 0.288(D82)(ln P)
R2 = 0.921

(i) What is the numerical value of the elasticity of demand for cigarettes with
respect to income for the period 1969 – 81? For the period 1986 – 88?
(ii) What is the numerical value of the elasticity of demand for cigarettes with
respect to price for the period 1982 – 85?

5. Take the following model

Y = 1000 + 25X1 + 10X2 - 30X3 + 15X4

Where,

Y = annual sales dollars generated by an auto parts counter person,

X4 = years of experience.

X1, X2 and X3 are the dummy variables representing the education level. Base case is
primary school. X1 for high school, X2 for higher secondary and X3 for graduate school.

i. If a salesperson has a graduate degree, how much will sales change according to
this model compared to a person with a primary education?
ii. How much in sales will a counter person with 10 years of experience and a high
school educate generate?


iii. Why do we need three dummy variables to use education level" in this regression
equation?
Interaction Dummies
1. (i) You are told that monthly wages. W (in rupees) earned by a person depends on his
age
A (in years). Write an appropriate model to study the effect of age on monthly wages.
(ii) Suppose it has been found that wages also depend on
 Area of residence (Urban/ nonurban)
 Level of education (Post graduate/ graduate)
Modify your model in part (i) above to include these qualitative variables.
(iii) Will your answer change if you are told that a person’s area of residence also
determines his level of education? What will be the regression equation for
urban post graduates?
2. Using data for 526 individuals the following model of wage determination was estimated:
LOG (W)I = B0 + B1D1 +B2EDUi + B3(D*EDU)i + ui
Where,
W = Daily wages in rupees
D = Dummy variable for gender, D = 1 for females and 0 for males
EDU = years of education
D*EDU = Interactive dummy

The table below gives estimated regression coefficients and their standard errors:
Estimates of Coefficients Standard errors

CONSTANT 0.3890 0.1190

D -0.2270 0.1680

EDU 0.0820 0.0080

D*EDU -0.0056 0.0131

(a) Write the regression equations relating LOG (W) to EDU for males and females
separately.
(b) The returns to education are measured by the percentage increase in wages due to
an extra year of education, for males and females.
(c) Is the difference between returns to education for males and females statistically
significant at 5% level of significance?

3. To study the rate of growth of population in an economy over the period 1970 – 1992 the
Following models were estimated:
Model I:
ln pop̂t = 4.73 + 0.024 t
t = (781.25) (54.71)
Model II:
ln pop̂t = 4.77 + 0.015 t - 0.075 Dt + 0.011 (Dt·t)
t = (2477.92) (34.01) (-17.03) (25.54)


where,
pop = population in millions
t = trend variable
Dt = 1 for 1970 – 1980, 0 otherwise (for 1980 – 1992)
(i) In model I, what is the rate of growth of population over the sample period.
Differentiate between instantaneous and compound rate of growth.
(ii) Are the population growth rates statistically different pre and post 1980?
(iii) If they are different, then what are growth rates for 1970 – 79 and 1980 – 92?

4. Consider the following model :

Yi = B0 + B1 Xi + B2 D2i + B3 D3i + ui

Where, Y: annual earnings of MBA graduates

X: Years of service

D2 = 1 if Harvard MBA

0 otherwise

D3 = 1 if Wharton MBA

0 otherwise

i. What are the expected signs of the various coefficients?


ii. How would you interpret B2 and B3?
iii. If B2 > B3, what conclusions would you draw?
Now suppose the following model is used:

Yi = B0 + B1 Xi + B2 D2i + B3 D3i + B4 (D2i Xi) + B5 (D3i Xi) + ui

What is the interpretation of B4 and B5. If both of these are statistically significant then
which model will you use and why?

Chow Test
1. For the data of savings and income for the US economy the following model is being
estimated:
Savings = β1 + β2 (Income)
We have the following regression results:
For the time period 1970 – 85
RSS = 1785.032 df = 10
For the time period 1985 – 95
RSS = 10,005.22 df = 12
For the time period: 1970 – 95
RSS = 23,248.30 df = ? (find out)


Has the saving income relationship changed pre 1985 as compared to post 1985? Use
Chow test to find out (Given critical F value for given dof at 1% level of significance =
7.72)
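The mechanics of the Chow test can be wrapped in a small helper function. The Python sketch below is generic (the function and argument names are mine); the example call simply plugs in the RSS and degrees-of-freedom figures quoted in the question above.

```python
# Minimal sketch: Chow test for structural change from reported residual sums of squares.
def chow_f(rss_pooled, rss_1, rss_2, df_1, df_2, k):
    """F statistic for H0: no structural break; k = parameters per sub-period regression."""
    rss_ur = rss_1 + rss_2                 # unrestricted RSS from the two sub-period fits
    df_ur = df_1 + df_2                    # its degrees of freedom
    return ((rss_pooled - rss_ur) / k) / (rss_ur / df_ur)

# Example call using the figures quoted in the question above.
f = chow_f(rss_pooled=23248.30, rss_1=1785.032, rss_2=10005.22, df_1=10, df_2=12, k=2)
print(round(f, 2))   # compare with the tabulated F(k, df_1 + df_2) critical value
```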

Dummy Variables as Alternative to Chow Test

1. Consider the regression result


UN t = 2.7494 + 1.15 Dt – 1.5294 Vt – 0.8511 (DtVt)
t = (26.896) (3.628) (-12.55) (-1.9819) R2 = 0.9128
where,
UN = unemployment rate (%)
V = job vacancy rate (%)
D = 1 for period beginning in 1966 – IV
= 0 for otherwise
The time period starts 1958 – IV till 1971 – II. Time measured in quarters.
(i) Comment upon the statistical significance of the model.
(ii) Interpret the dummy coefficients interactive dummy term.
(iii) Derive equations for the two periods. (Prior 1966 – IV and Post).

2. Suppose we have the following relationship between savings and income form 1970 –
1995.
Ŷi = 1.0161 + 152.4786 Di + 0.0803 Xi − 0.0655 (DiXi)
se = (20.1648) (33.0824) (0.0144) (0.0159)
R2 = 0.8819

Where, Y = savings; X = Income;


D = 1 for observations in 19825 – 1995
= 0 for otherwise (1970 – 1981)
(i) Interpret the above regression.
(ii) Derive the regression equations for the two periods.

3. The following regression was obtained for the Indian savings – income data for
the period 1970 – 1995:
Ŷi = 1.0161 + 152.4786 Di + 0.0803 Xi − 0.0655 (DiXi)
t = (0.0504) (4.6090) (5.5413) (-4.0963)
R2 = 0.8819

Where, Y = savings; X = Income;


D = 1 for observations in 1982 – 1995
= 0 otherwise (1970 – 1981)
(i) Comment on the statistical significance of the above regression. How would
you interpret the dummy coefficient?
(ii) Derive the regressions for two periods, i.e., 1970 - 1981 and 1982 – 1995.
(iii) What are the advantages of the dummy variable technique over the Chow
Test.


EXAM STYLE QUESTION

1. A researcher wants to find out what are the factors which determine the number of
installs (I) of an application (app) from a famous app store. Size in Mbs (S), Reviews
in 000s (Re), Ratings (0 to 5) (Ra), Price in 'Rs (P). She ran the following regressions:

log I = 1.329 + 0.2356S + 0.4320 log(Ra) - 0.2678P + 1.928 log(Re)

Se = (0.63) (0.242) (1.29) (0.001) (0.156)

R2= 0.734

df = 156

i. Interpret the regression above.


ii. Test for statistical significance of Price in the model. Depending on the result do
you suggest that price is a significant factor affecting app installation?
iii. Suppose the regression is re-estimated where number of installs (I) varies only
with respect to price (P). Average I in sample is 5 and average P is Rs 8.9. Following
regression was estimated:
Î = 52.351 + 3.139 (1/P)

se = (37.39) (0.0187)

df = 156. R2 = 0.806

How would you interpret this model? Explain the shape of the curve.

iv. What would be the slope and elasticity of number of installs with reference to the
equation given in above?
v. How would the equation in (iii) change if we suggest that number of app
installations varies with respect to the kind of cellular phone used by the
customer, that is android or ios phones? [Eco(h) 2021]

2. A regression equation includes a quantitative dependent variable (Y = wages), a


quantitative independent variable (X = years of experience) and two qualitative
variables; Gender and Education Level with two categories each; Male & Female: and
Graduate & Not a Graduate. Assume that the qualitative variables do not interact with
each other.
i) Using intercept dummy variables, write the wage regression model if the impact
of years of experience, gender and education level is to be analyzed on wages (use
Female Graduate as the reference category). Write the estimated equation for
Male Graduate Category.
ii) How could answer in Part (i) be changed if the Education level has three categories
instead namely; Graduate, Post Graduate and Ph.D.
iii) Base on part (ii), write the wage equation for the two specific categories.


(a)Female with Ph. D. (b) Male Post Graduates.

iv) How would the model in part (ii) be modified if the objective is to examine
whether the marginal effect of experience is gender specific?
v) How would be the regression in part (i) be modified if qualitative variable interact
with each other ? [Eco(h) 2022]

3. The purpose of this empirical exercise was to analyze the impact of takeovers on CEO
compensation. The model of interest was:

𝐶𝑜𝑚𝑝𝑖 = 𝐵1 + 𝐵2 𝑆𝑀𝑃𝑖 + 𝐵3 𝐷1 +𝐵4 𝐷1 𝑆𝑀𝑃𝑖

Where:

Comp = CEO's compensation in hundreds of rupees

SMP = index of firm's stock market performance

D = Dummy variable defined as 1 if the firm acquires another firm, 0 otherwise.

The model was estimated from data on 34 firms. The results are summarized in the
following table:

Coefficient Standard Error

Intercept 964.5202 69.1662

SMP 1868.567 288.0425

D 996.8745 111.9876

D*SMP 5157.474 545.9090

i. Using the regression results, interpret the coefficients of Di and Di*SMPi.


ii. Test the hypothesis that compensation's relation with stock market performance
remains the same irrespective of take-overs made by the firm. [Eco(h) 2017 ]

4. The following model was estimated for United States from 1958 to 1977 :

Ŷt = 10.078 − 10.337 Dt − 17.549 (1/Xt) + 38.173 Dt (1/Xt)

se = (1.4204) (1.6859) (8.3373) (9.399)

R2= 0.8787


Where, Y = year-to-year percentage change in the index of hourly earnings

X = percent unemployment rate

D = 1 for 1958-1969

= 0 if otherwise

i. Show the Phillips curve for two periods separately.


ii. Are differential intercept and slope coefficients statistically significant? What does
this suggest?
iii. Interpret the regression. [Eco(h) 2019]

5. Consider the following regression results:

Sleep̂ = 3840.83 - 0.163 totwork - 11.71 educ - 8.70 age + 0.128 age² + 87.75 D

Se = (235.11) (0.018) (5.86) (11.21) (0.134) (34.33)

N = 706,    R2 = 0.123,    adjusted R2 = 0.117

where sleep is total minutes per week spent sleeping.

tot work = total weekly minutes spent working.

educ is education measured in years and age is age of the individual in years.

D is gender dummy and D = 1 if male, 0 otherwise.

i. Is there any evidence that men sleep more than women? How strong is the
evidence?
ii. Interpreting the coefficients of the age and age squared variables explain what
does the researcher have in mind about the relation between sleep and age.
iii. Is there a statistically significant trade-off between working and sleeping? How
would the regression model have to be modified if there is reason to believe that
this trade off might be gender specific? [Eco(h) 2020 ]

6. Data was collected on 344 corporate executives to find out the effect of MBA degree
and work experience on their salary. The following model was estimated :

Yi = 2.3501 + 3.6306 D1i - 2.6354 D2i + 0.8527 Xi + 1.634 (D1*X)i

t = (1.263) (2.1805) (- 3.457) (7.605) (2.98)

R2 = 0.8968

Y: Annual Income in Lakhs of Rupees


D1 and D2 are MBA and gender dummies respectively

X: Work experience in years

D1 = 1 if one has MBA degree

= 0 otherwise

D2 = 1 for a female executive

= 0 for a male executive

i. Write the regression equations for female MBA executives and male MBA
executives separately.
ii. Find the mean income level for the reference category and interpret it.
iii. Test the statistical significance of differential intercept coefficient between female
MBA executives and Male MBA executives at 5% level of significance.
iv. Interpret the coefficient of D1 * X1.
v. Now suppose out of this sample of 344 executives, 48 are female MBA executives
and 156 are male MBA executives. To find out the relation between income earned
and work experience, we run three regressions and the results obtained are as
follows:
Regression A: 156 male MBA executives, RSSA = 3.701
Regression B: for 48 female MBA executives, RSSB = 4.803
Pooled Regression: with 204 (156male + 48female) executives, RSS = 9.7602

Using the above data. do the Chow test at 10% level of significance to check whether there
is significant improvement in doing a pooled regression as compared to other two
subsample regressions. [Eco(h) 2021]

7. Demographic data from 126 countries is obtained for the year 2017. It is hypothesized
that life expectancy (Y) is dependent on number of under five deaths (X2), polio
immunization coverage (D), Per capita Govt. Exp. on Health Care (X3) (in Rs crores),
Per Capita GNI (in Rs crores) (X4) and Average number of years of Schooling (X). Polio
immunization coverage = 1 if yes and 0 otherwise.

Following regressions were estimated:

MODEL 1:

𝑌̂𝑡 = 0.903 − 0.561𝑋2𝑖 + 2.008𝑋3𝑖 + 0.553𝑋4𝑖 + 0.778𝑋5𝑖 + 3.638𝐷

𝑠𝑒 = (1.280) (0.405) (0.765) (0.712) (0.491)

𝑅2 = 0.787 𝑅𝑆𝑆 = 1339.8

MODEL 2:


𝑌̂𝑡 = 1.379 + 0.594𝑋3𝑖 + 2.139𝐷

𝑠𝑒 = (0.406)(0.465)

𝑅2 = 0.677 𝑅𝑆𝑆 = 1567.28

i. Is this time series or cross-sectional data?


ii. Show model 2 is a restricted version of model 1 and what is the restriction?
iii. Test for the statistical significance of the restriction at 5% level.
iv. Construct a 95% confidence interval for true per capita government health
expenditure in model Il and check whether it is statistically
significant.[Eco(h)2021]

8. A real estate Company used housing sales data to estimate the effect that the
pandemic lockdown had on demand for sub-urban real estate

̂ 𝑡 = 1.83 + 0.08𝐷𝑡 − 0.91𝐼𝑛𝑋𝑡 + 0.55(𝐷𝑡 𝐼𝑛𝑋𝑡 )


𝐼𝑛𝑌

Where Y= Share of sub-urban housing deals during a month, X= price per square metre
of sub-urban real estate, t = time,

Dt = 1, if it is a lockdown month = 0, if t is not a lockdown month

All estimates are statistically significant at 5% level of significance.

i. Write the regression functions for lockdown months and non- lockdown months.
ii. How would you test the hypothesis that lockdown had no impact on price-
elasticity for sub-urban housing?
iii. Rewrite the regression result if Dummy assignment is switched as below:
Dt=0, if t is a lockdown month

= 1 , if t is not a lockdown month

iv. Another investigator believes that the relationship between the two variables X
and Y is given by Yt = 𝛽1 + 𝛽2 𝑋𝑡 + 𝜀𝑡 . Given a sample of n observations, the
investigator estimates 𝛽2 by calculating it as the average value of Y divided by the
average value of X. Discuss the properties of this estimator. What difference would
it make if it could be assumed that 𝛽1 = 0?
v. What will be the consequence for the Gauss Markov theorem if there are errors in
measuring Y? [Eco(h) 2023]

9. Regression results for Morena savings-income data are presented for the period
1970-1995,

𝑌̂𝑡 = 1.0161 + 152.4786𝐷𝑡 + 0.0803𝑋𝑡 − 0.0655(𝐷𝑡 𝑋𝑡 )


𝑡 = (0.0504) (4.6090) (5.5413) (−4.0963)

𝑅2 = 0.8819

Where

𝑌𝑡 = savings

𝑋𝑡 =income

𝐷𝑡 = 1 for observations in 1982-1995

= 0 otherwise

i. Interpret the regression results and obtain the regressions or the two time
periods, that is, 1970-1981 and 1982-1995
ii. What do you infer by the statistical significance of the differential intercept and
the differential slope coefficients? [Eco(h) 2014]

10. i) In the regression model ln Yi = B1 + B2Di + ui, where D is a dummy regressor, prove
that the relative change in Y when the dummy changes from 0 to 1 can be obtained as:

(𝑒 𝑏2 − 1 )

where e is the base of natural logarithm and b, is the ordinary least squares estimator of
the slope coefficient.

(ii) Suppose you have quarterly data on air-conditioner sales. Explain how you can obtain
average sales of air-conditioners for the four quarters separately using the method of
dummy variables. [Eco(h) 2013]

11. Using data for 120 individuals, the following model of wage determination was
estimated:

WAGEi = β1 + β2 IQ2i + β3 PGRAD3i + ui

where

WAGE: Hourly wages, in Rupees

IQ: Intelligent Quotient, measured on a scale of 70-130

PGRAD: Dummy variable = 1, if the individual is a postgraduate

= 0, if the individual is a undergraduate

The regression results were reported as follows, (standard errors in parentheses):


WAGEi = 224.8488 + 5.0766 IQ2i + 498.0493 PGRAD3i

(se) = (66.6424) (0.6624) (20.0768)

R2 = 0.4540

(a) Write the estimated regression equation for postgraduates and undergraduates
separately.

(b) Test the statistical significance of dummy variable at 5% level of significance. What
conclusion can you draw from this test?

(c) If PGRAD was defined to take values (0, 2) instead of (0, 1) will the estimated value of
B3 and its standard error change? What about its statistical significance?[Eco(h) 2016]

12. Suppose that earnings of individuals are dependent on whether they are skilled
workers and their work experience over the years. 6
(i) Define dummy variables to capture whether workers are skilled or not. Take
workers being unskilled as the reference category.
(ii) Develop a model which is linear in parameters that shows earnings of an
individual as a function of work experience and whether they are skilled. Interpret
your model.
(iii) Now assume that there is an interaction between skill of the workers and their
work experience. How would the model in (ii) change. Interpret the new model.
[Eco(h) 2019]

13. Using data for 110 schools the following model of monthly expenditure incurred by a
school was estimated .

log(Y)̂i = 138.8 − 1.571 Di + 0.8.8 Xi − 0.0944 Di Xi

𝑡 = (2.864)(−1.06)(13.466)(−2.04)

Yi = annual expenditure incurred by a school, in thousands of rupees
Xi = number of students in the school
Di = 1 if government school, 0 if private school
0 𝑖𝑓 𝑝𝑟𝑖𝑣𝑎𝑡𝑒 𝑠𝑐ℎ𝑜𝑜𝑙

i. Write the regression equation for government school and private school
separately. Also, interpret the slope coefficient of two categories.
ii. Check the statistical significance of differential intercept coefficient and slope
drifter at 10% level of significance. Based on the conclusion of these tests, draw
the population regression lines for the government and private schools.
iii. Test the hypothesis that the population error term is normally distributed at 1%
level of significance when it is given that for the residuals, skewness is 0.28 and
the value of kurtosis is 2. Also, in classical linear regression model, what is the


rationale for the assumption that population error term follows normal
distribution?
[Eco(h) 2024]

Answer of objective Questions

1) b 2) a. 3) c. 4) c. 5) b 6). d 7) d. 8) c. 9) c.


CHAPTER -5

MULTICOLLINEARITY
OBJECTIVE TYPE QUESTION

1. One of the assumptions of CLRM is that the number of observations in the sample
must be greater than the number of
a) Regressors
b) Regressands
c) Dependent variable
d) Dependent and independent variables
2. Perfect multicollinearity between variables X 1 , X2 and X3 can be expressed using
constants 𝜆1 , 𝜆2 and 𝜆3 such that
a) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 = 0, where 𝜆1 , 𝜆2 and 𝜆3 are all equal to zero
simultaneously
b) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 + 𝑣 = 0 where 𝑣 is the stochastic term and 𝜆1 , 𝜆2 and
𝜆3 are not all equal to zero simultaneously.
c) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 = 0; where 𝜆1 , 𝜆2 and 𝜆3 are not equal to zero
simultaneously.
d) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 + 𝑣 = 0 where 𝑣 is the stochastic term and 𝜆1 , 𝜆2 and
𝜆3 are all equal to zero simultaneously.
3. In a regression model 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖, the F-test is seen to be statistically
significant at less than 5 percent level of significance but the coefficients 𝛽1 and 𝛽2
are seen to be statistically insignificant. This means that the
a) Two coefficients are highly correlated
b) Two variables are highly correlated
c) Two variables are perfectly correlated
d) Two variables are not correlated
4. If for a set of explanatory variables 𝑋2 , and 𝑋3 , the coefficients of correlation is
equal to 1, this means that between 𝑋2 and 𝑋3 there exists
a) No collinearity
b) Low level of collinearity
c) Perfect collinearity
d) Very high collinearity
5. If there exists high multicollinearity, then the regression coefficients are
a) Determinate
b) Indeterminate
c) Infinite values
d) Small negative value
6. If multicollinearity is perfect in a regression model then the regression coefficients
of the explanatory variables are
a) Determinate
b) Indeterminate


c) Infinite values
d) Small negative value
7. If multicollinearity is perfect in a regression model the standard errors of the
regression coefficients are
a) Determinate
b) Indeterminate
c) Infinite values
d) Small negative value
8. The coefficients of explanatory variables in a regression model with less than
perfect multicollinearity cannot be estimated with great precision and accuracy.
This statement is
a) Always true
b) Always false
c) Sometimes true
d) Nonsense statement
9. In a regression model with multicollinarity being very high, the estimators
a) Are unbiased
b) Are consistent
c) Standard errors are correctly estimated
d) All of the above
10. Multicollinearity is essentially a
a) Sample phenomenon
b) Population phenomenon
c) Both a and b
d) Either a or b
11. Which of the following statements is NOT TRUE about a regression model in the
presence of multicollinearity
a) t ratio of coefficients tends to be statistically insignificant
b) R2 is high
c) OLS estimators are not BLUE
d) OLS estimators are sensitive to small changes in the data
12. Which of these is NOT a symptom of multicollinearity in a regression model
a) High R2 with few significant t ratios for coefficients
b) High pair-wise correlations among regressors
c) High R2 and all partial correlation among regressors
d) VIF of a variable is below 10
13. A sure way of removing multicollinearity from the model is to
a) Work with panel data
b) Drop variables that cause multicollinearity in the first place
c) Transform the variables by first differencing them
d) Obtaining additional sample data
14. Assumption of 'No multicollinearity' means the correlation between the regressand
and regressor is
a) High
b) Low


c) Zero
d) Any of the above
15. An example of a perfect collinear relationship is a quadratic or cubic function. This
statement is
a) True
b) False
c) Depends on the functional form
d) Depends on economic theory
16. Multicollinearity is limited to
a) Cross-section data
b) Time series data
c) Pooled data
d) All of the above
17. Multicollinearity does not hurt is the objective of the estimation is
a) Forecasting only
b) Prediction only
c) Getting reliable estimation of parameters
d) Prediction or forecasting
18. As a remedy to multicollinearity, doing this may lead to specification bias
a) Transforming the variables
b) Adding new data
c) Dropping one of the collinear variables
d) First differencing the successive values of the variable
19. F test in most cases will reject the hypothesis that the partial slope coefficients are
simultaneously equal to zero. This happens when
a) Multicollinearity is present
b) Multicollinearity is absent
c) Multicollinearity may be present OR may not be present
d) Depends on the F-value

TRUE/FALSE
1. Despite perfect multicollinearity , OLS estimators are BLUE .

2. In case of high multicollinearity it is not possible to assess the individual significance
of one or more partial regression coefficients.
3. If an auxiliary regression shows that a particular Ri² is high, there is definite
evidence of high collinearity.
4. High pairwise correlation does not imply high multicollinearity .
5. Multicollinearity is harmless if objective of analysis is prediction only .
6. Ceteris paribus , the higher the VIF is , the larger the variance of OLS estimators .
7. In the regression of Y on X2 & X3 , Suppose there is a little variability in the values of
X3. This would increase V(β̂3) and if all X3 are same then V(β̂3) is infinite .
8. A regression model with a high R2 may not be judged to be good if one or more
coefficients have the wrong sign.
9. Consider the model: Yi = B1 + B2 X2i + B3 X3i + ui. Given that X2i = 10 + 5X3i, we can


uniquely estimate all the parameters of the model.

10. Consider the model: Yi = B1 + B2Xi + B3Xi² + B4Xi³ + ui. Since Xi² and Xi³ are
functions of Xi, there is perfect multicollinearity.

11. Do you think that the following model suffers from multicollinearity?
lnYi = B1 + B2 lnXi + B3 lnXi² + ui


12. Multicollinearity always implies that correlation between explanatory variables is
lying between -1 and +1 .
13. Very high multicollinearity always implies that estimators are not BLUE ?
14. It can easily happen that F statistic is significant while t statistics of explanatory
variables are not significant

Practical Questions
1. In the regression model

Yi = A1 + A2 X2i + A3 X3i + Ui

Suppose X3i = 10 + 3X2i

Show that we cannot uniquely estimate the original parameters A1, A2 and A3.
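Hint (a sketch of the substitution argument, using only the symbols given in the question): substituting X3i = 10 + 3X2i into the model gives
Yi = A1 + A2X2i + A3(10 + 3X2i) + ui = (A1 + 10A3) + (A2 + 3A3)X2i + ui
OLS can therefore estimate only the two composite parameters (A1 + 10A3) and (A2 + 3A3); the three original parameters A1, A2 and A3 cannot be recovered uniquely from them.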

2. Let Y be the output. X2 be unskilled labour and X3 be skilled labour in the following
relationship:

Yi = B1 +B2 X2i +B 3 X3i + B4 X4i +ui

Where X4i = X2i + X3i

Can the parameters of the model be uniquely estimated by ordinary least squares?
Explain.

3. Consider the set of hypothetical data given in the following table:

Y -10 -8 -6 0 2 4

X2 1 2 3 4 5 6

X3 1 3 5 7 9 11

Suppose you want to fit the following model:

Yi = B1 + B2 X2i + B3 X3i + ui

(i) Explain, without solving, why you cannot estimate the three unknown
parameters of the model.


(ii) Are any linear functions of these parameters estimable? Show the necessary
derivations.

4. Let Y be output. X2 be unskilled labour and X3 be skilled labour in the following


relationship :
Yi = B1 + B2 X2i + B3 X3i + B4 (X2i + X3i) + B5 X2i² + B6 X3i² + ui

What parameters are estimated by ordinary least squares? Explain

5. In a regression of consumption expenditure Ci on disposable income Yi and wealth
Wi, the following results were obtained:
Ĉi = 24.7747 + 0.9415Yi – 0.0424Wi
t = (3.6690) (1.0442) (-0.5261)
R2 = 0.9635 degrees of freedom = 17
How can you use these results to detect the presence of multicollinearity ? Suggests
any two methods to reduce the severity of the problem?

6. Consider the regression model :

Yi = Bi + B2 X2i + B3 X3i + Ui

In order to check for presence of multicollinearity. the auxiliary regression is run and the
results are as follows :

𝑋̂2i = 2.456 + 0.7952X3i

(se) = (0.56)(1598) R2 = 0.9

i. Compute variance inflation factor (VIF). Do you find evidence of multicollinearity?


ii. Would multicollinearity necessarily result in high standard errors of the OLS
estimators in the above model?
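Note: part (i) only needs the auxiliary R², since VIF = 1/(1 - R²). A minimal Python sketch is given below; the data, variable names and seed are illustrative placeholders, not taken from this question.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X3 = np.random.normal(size=50)                       # placeholder regressor
X2 = 2.4 + 0.8 * X3 + np.random.normal(scale=0.1, size=50)

aux = sm.OLS(X2, sm.add_constant(X3)).fit()          # auxiliary regression of X2 on X3
vif = 1.0 / (1.0 - aux.rsquared)                     # VIF = 1/(1 - R^2); R^2 = 0.9 gives VIF = 10
print(round(aux.rsquared, 3), round(vif, 2))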

7. The consumption expenditure of families (C) is regressed upon the income of families
(I) and the wealth of families (W). All variables are measured in Rupees. The following
regression results were obtained for a sample of 10 families.
Variable Coefficient t Statistics
Income 0.94 1.14
Wealth - 0.04 -0.52
Constant 24.77 6.75
df = 7, R² = 0.96

(i) Based on intuition, what signs would you expect for the partial slope
coefficients? Do the observed signs agree with your intuition?
(ii) Every t statistic is insignificant but F statistic is significant. Verify this
statement at 10% level of significance. What are the reasons for this
paradoxical statement?
(iii) Do you expect the estimated coefficients to be unbiased and efficient?
8. Consider the following model relating the gain in salary due to an MBA degree to a
number of its determinants.


SLRYGAIN = B1 + B2 TUITIONt + B3Z1t + B4Z2t + B5Z3t + ut


where,
SLRYGAIN = Post salary MBA minus pre MBA salary, in thousands of dollars.
TUITION = annual tuition costs, in thousands of dollars
Z1 = MBA skills in being analysts, graded by recruiters.
Z2 = MBA skills in being team players, graded by recruiters.
Z3 = Curriculum evaluation by MBA’s.
Using data for top 25 business schools, the coefficients were estimated as follows,
standard errors in parenthesis.
B1 60.899 (2.513)
B2 0.314 (0.750)
B3 -0.3948 (2.756)
B4 - 2.016 (3.773)
B5 -5.325 (3.773)
(i) Carry out individual two tail tests at 10% level of significance for the slope
coefficients.
(ii) Test the model for overall significance at the 10%level if R2 = 0.461 was
obtained for the model.
(iii) Is there a conflict between your conclusions in (i) and (ii)? If yes can you
suggest a possible explanation?

9. Consider the following regression for an imaginary country, say Utopia, for a period
of 15 years. Variables are: IMP = imports, GNP = Gross National Product and CPI =
Consumer Price Index
Regression 1 :
̂ t= -108.20 + 0.045GNP2t + 0.931CPI3t
IMP
t = (3.45) (1.232) (1.844)
R2 = 0.9894
(i) Test whether, individually, the partial slope coefficients for GNP and CPI are
statistically significant at the 5% level of significance.
(ii) Test whether GNP and CPI jointly have any statistical significance in
explaining variations in imports. Carry out this test at 5% level of significance.
(iii) Comment on the results obtained in (i) and (ii) above. Do you suspect any
problem?
(iv) Do you expect that OLS coefficients have retained their BLUE property? If no,
explain why. If yes explain why you would still be worried about their quality.
(v) Using a transformation of variables, real imports are regressed on real
income.
Regression 2:
IMP̂t / CPIt = -1.39 + 0.202 (GNPt / CPIt)

t = (-5.46) (12.22) R2 = 0.9142


Would you say that the problem identified above has now been somewhat
solved?

10. In a study of the production function of a firm for the period 1991 to 2011, the
following two regression models were obtained :


Model I
logQ̂ = -5.04 + 0.887logK + 0.893logH

se =(1.40) (0.087) (0.137) R2= 0.878

Model II
logQ̂ = -8.57 + 0.0272t + 0.460logK + 1.285logH

se = (2.99) (0.0204) (0.333) (0.324) R2 = 0.889

where,

Q is the index of production at constant factor cost,

K is the gross capital stock H is the hours worked, and

t is time trend

i) Interpret both the regressions.


ii) In model I, verify that each partial slope coefficient is statistically significant at the
5% level.
iii) In model Il, verify that the coefficients of t and logK are individually insignificant
at the 5% level.
iv) What is the probable reason of insignificance of logK in model II?
v) If you were told that the correlation coefficient between t and logK is 0.98, what
conclusion would you draw?
vi) Even if t and logK are individually insignificant in Model II, would you accept or
reject the hypothesis that in Model Il all partial slopes are simultaneously zero?

11. From the annual data for the US manufacturing sector, the following regression results were obtained:
logŶ = 2.81 - 0.53logK + 0.91logL + 0.047t
se = (1.38) (0.34) (0.14) (0.021)

𝑅2 = 0.97, 𝐹 = 189.8

Where Y= index of real output, K= index of real capital input, L= index of


real labour input, t= time or trend.

Using the same data, he also obtained the following regression:
log(Ŷ/L) = -0.11 + 0.11log(K/L) + 0.006t
se = (0.03) (0.15) (0.006)

𝑅2 = 0.65, 𝐹 = 19.5


a) Is there multicollinearity in the regression (1)? How do you know?

b) In regression (1), what is the priori sign of log (k)? Do the results conform to
this expectation? Why or why not?

c) How would you justify the functional form of regression (1)?

d) Interpret regression (1)? What is the role of trend variable in this regression?

e) What is the logic behind estimating the regression (2)?

f) If there was multicollinearity in regression (1), has that been reduced by


regression (2)? How do you know?

g) What restriction has been imposed in regression (2)?

h) Are the 𝑅2 values of the two regressions comparable? Why or why not? How
would you make them comparable, if they are not comparable in the existing
form?

EXAM STYLE QUESTION

1. Consider the three-variable model, Yi = B1 + B2 X2i + B3 X3i + B4 X4i + ui. Let b2 be the
OLS estimator of the slope coefficient B2.
i. Derive the variance of b2, i.e., var(b2), in terms of the Variance Inflation Factor (VIF).
ii. When X2 is regressed on X3 and X4, the R²2 obtained from this auxiliary regression is
0.9217. Does it necessarily imply high variance of b2? Explain. [Eco(h) 2018]

2. Consider the following regression results:
Sleep̂ = 9840.83 - 0.163totwork - 11.71educ - 8.70age + 0.128age² + 87.75D

𝑆𝑒 = (235.11)(0.018)(5.86) (11.21) (0.134) (34.33)

𝑁 = 706, 𝑅2 = 0.123 ̅𝑅̅̅̅2 = 0.117

where sleep is total minutes per week spent sleeping,

totwork = total weekly minutes spent working,

educ is education measured in years and age is age of the individual in years.

D is gender dummy and D = 1 if male. 0 otherwise.


i. Is there any evidence that men sleep more than women? How strong is the
evidence?
ii. Interpreting the coefficients of the age and age squared variables explain what
does the researcher have in mind about the relation between sleep and age
iii. Is there a statistically significant trade-off between working and sleeping? How
would the regression model have to be modified if there is reason to believe that
this trade off might be gender specific?
iv. Do you suspect multicollinearity in the model? Explain your answer.[Eco(h) 2020]

3. In the regression of consumption expenditure (Ci) on disposable income (Yi) and


wealth (Wi). The following results were obtained (the paranthesis contain the t-
ratios)

𝐶̂𝑖 = 24.7747 + 0.9415Yi - 0.0424Wi

(t) (3.669) (1.0442) (-0.5261) R2 =0.9635, df=17

a) Is there any evidence of presence of Multicollinearity? Why or Why not?


b) Give any three ways in which the problem of Multicollinearity can be
remedied.[Eco(h) 2017]

4. Consider the following regression results for 45 countries for the year 2011-2012.
(the t-ratios are given in brackets):
FDÎt = 21.045 + 0.0545 GDP + 1.864 GOV_INDEX

t = (1.232) (0.744) (1.005) R2 = 0.9667

where. FDI = Foreign Direct Investment (billion dollars)

GDP = Gross Domestic Product (billion dollars)

GOV_INDEX = Governance Index (a higher value indicates better on governance)

i. Is there evidence of multicollinearity ? Explain your answer.


ii. Discuss any two methods that can be used to deal with the issue of
multicollinearity ? [Eco(h) 2018]

5. Quarterly data on country XYZ was collected for the period 2005-2019 to estimate the
relation between Foreign Direct Investment (FDI), Trade Openness (TO). Gross
Domestic Product (GDP) and Exchange Rate (E). TO is defined as the ratio of export
plus imports to GDP and t = trend. The following regression was estimated:
FDÎt = -0.58 + 0.012Et - 0.025TOt + 0.006GDPt + 0.34t

Se = (0.097) (0.013) (0.004) (0.015) (0.09)

R2 = 0.904, d = 1.45


i. Interpret the estimated slope coefficients. Do you suspect some problem with the
above regression?
ii. What is the nature of the problem? How do you know? Explain its consequences?
FDÎt/GDPt = β0 + β1(Et/GDPt) + β2(TOt/GDPt) + β3(t/GDPt) + ut
Will this transformation solve the problem in (ii) above? How? Can you compare
R² of this model with the model above? [Eco(h) 2022]

6. Suppose demand for Brazilian coffee in country Rico is a function of the real price of
Brazilian coffee (Pbc), real price of tea (Pt) and real disposable income (Y d) in Rico.
Suppose the following results were obtained by running the implied regression:
Coffeê = 9.1 + 7.8Pbc + 2.4Pt + 0.0035Yd

𝑡 = (0.5) (2.0) (3.5)

𝑅̅2 = 0.60 𝑁 = 25

i. Interpret the slope coefficients. Are the signs in accordance with economic theory?

ii. Do you think that the equation suffers from some problem? What could be the
nature of the problem?
iii. What are, in general, the consequences of the problem, if any, detected in part (ii)?
iv. Suppose the researcher drops Pbc and runs the following regression:
Coffeê = 9.3 + 2.6Pt + 0.0036Yd

𝑡 = (2.6) (4.0)

𝑅̅2 = 0.61 𝑁 = 25

Has the researcher made the correct decision in dropping 𝑃𝑏𝑐 from the equation? Explain.

v. Do you think that Brazilian coffee in Rico is price inelastic? Why/Why not?
[Eco(h) 2023]

7. The following function is known as the transcendental production function (TPF), a
generalization of the well-known Cobb-Douglas production function:
Y = B1 L^B2 K^B3 e^(B4 L + B5 K)

(a) Perform a suitable logarithmic transformation so that the function is estimable


using ordinary least squares.
(b) For the logarithmic TPF to reduce to the Cobb-Douglas production function expressed
in logarithmic form, what must be the restriction on the values of B2 and B3, and what
must be the restriction on the values of B4 and B5? Outline the steps for testing the
validity of the restriction on B4 and B5 in choosing between the TPF and
Cobb-Douglas models in logarithmic form. [Eco(h) 2013]

8. In order to test whether the developing economies are catching up with the advanced
economies or not. a researcher regressed the growth rate of GDP of a country on its
relative per capita GDP for 119 developing countries. The relative per capita GDP of a
country is measured as a ratio of the country's per capita GDP to the GDP per capita
of USA. The regression results were obtained as under (standard errors are reported
in parentheses) :

𝐺̂ = 0.013 + 0.062𝑃𝑖 − 0.061𝑃𝑖2

𝑠𝑒 = (0.013) (0.02) (0.033)

𝑅2 = 0.053, adjusted 𝑅2 = 0.036

Where, G is the growth rate of GDP (in %)

And, P is the relative per capita GDP (in %)

i. Interpret the above regression results.
ii. Find the marginal effect of P on G.
iii. If a researcher wishes to estimate the above relationship in logarithmic form and
estimates the following relationship :
InGi = B1 + B2 In Pi + B3 In 𝑃𝑖2 + ui
Do you think he will be able to estimate the model? Give reasons for your answer

[Eco(h) 2013]

9. Let Y be the Gross National Product, 𝑋2 be the Exports, 𝑋3 be the Imports and 𝑋4 be
the Net Exports in the following relationship:

𝑌𝑖 = 𝐵1 + 𝐵2 𝑋2𝑖 + 𝐵3 𝑋3𝑖 + 𝐵4 𝑋4𝑖 + 𝑈𝑖

Which assumption of the Classical Linear Regression Model is violated here? If you estimate
this equation by ordinary least squares, then which of the parameters can be estimated?
Explain. [Eco(h) 2024]

10. Using data collected from 1990 to 2022, for a particular country, the following
equation was estimated using OLS .

𝑌̂𝑡 = 4.47 + 0.34 log 𝑋2𝑖 − 1.22 log 𝑋3𝑖

𝑡 = (4.28) (5.31) (−0.98)


𝑅2 = 0.9987

Y = Child Mortality Rate

𝑋2 = Female Literacy Rate

𝑋3 = Per Capita GNP

i. Are the regression results in conformity with your a priori expectations. If not,
what can be the reason behind it? Explain any three remedies to overcome this
problem.
ii. Now suppose the researcher regressed female literacy rate on per capita GNP and
in this second regression, the R² is 0.5284. Do you think that the reason that you
suggested in (i) is significantly present in the model, at 5% level of significance? If
your answer is yes, then do you think that the presence of this is necessarily bad?
[Eco(h) 2024]

Answer of objective Questions


1) a 2) c. 3) b. 4) c. 5) a. 6) b 7) c. 8) a. 9) d. 10) c. 11) a. 12) c.

13) d. 14) b. 15) d. 16) a. 17) d. 18) d. 19) c.


CHAPTER-6
Heteroscedasticity

OBJECTIVE TYPE QUESTION


Choose the Best alternative for each question
1. Heteroscedasticity means that
a. All X variables cannot be assumed to be homogeneous
b. The Variance of the error term is not constant
c. The observed units have no relation
d. The X and Y are not correlated

2. Heteroscedasticity is more likely a problem of


a. Cross-section data
b. Time series data
c. Pooled data
d. All of the above

3. The coefficient estimated in the presence of heteroscedasticity are NOT


a. Unbiased estimators
b. Consistent estimators
c. Efficient estimators
d. Linear estimators

4. Estimating the regression model in the presence of heteroscedasticity using this


method leads to BLUE estimators
a. OLS
b. GLS
c. MLE
d. Two-stage regression estimation

5. In the regression model Yi = 𝛽 1 X0i + 𝛽 2X1i +ui, if 𝛽 1 is the intercept coefficient


then the values that X0i can take are
a. All ones
b. All zeros
c. Any value
d. Any positive value

6. Under the Park test, ln ûi² = ln σ² + β ln Xi + vi is the suggested regression model.
Here if we find 𝛽 to be statistically significantly different from zero, this means
that
a. Homoscedasticity assumption is satisfied
b. Homoscedasticity assumption is not satisfied
c. We need further testing


d. Xi has impact on Yi

7. According to Goldfeld and Quandt the problem with Park test is that the
a. Error term is heteroscedastic
b. Expected value of vi is nonzero
c. vi is serially correlated
d. Model is nonlinear in parameter

8. The heteroscedasticity test that is sensitive to the normality assumption of the error
term is
a. Goldfeld-Quandt test
b. Breuseh-Pagan-Godfrey test
c. White general heteroscedasticity test
d. Spearman’s rank correlation test

9. The following remedial measure for heteroscedasticity is used when σi² is known
for a regression model
a. Koenker-Bassett method
b. Weighted least square method
c. OLS method
d. White’s procedure

10. Which of the following is NOT considered the assumption about the pattern of
heteroscedasticity

a. The error variance is proportional to Xi


b. The error variance is proportional to Yi
c. The error variance is proportional to Xi²
d. The error variance is proportional to square of the mean value of Y

11. Even if heteroscedasticity is suspected and detected, it is not easy to correct the
problem. This statement is
a. True
b. False
c. Sometimes true
d. Depends on test statistics used

TRUE AND FALSE

State whether the following statements are true or false. Briefly justify your answer.

a. In the presence of heteroscedasticity OLS estimators are biased as well as


inefficient.
b.If heteroscedasticity is present, the conventional t and F tests are invalid.


c. In the presence of heteroscedasticity the usual OLS method always


overestimates the standard errors of estimators.
d.If residuals estimated from an OLS regression exhibit a systematic pattern, it
means heteroscedasticity is present in the data.
e. There is no general test for heteroscedasticity that is free of any assumption
about which variable the error term is correlated with.
f. If a regression model is mis-specified, the OLS residuals will show a distinct
pattern.
g. If a regressor that has non constant variance is omitted from a model, the OLS
residuals will be heteroscedastic.
h. If a pattern is observed on plotting residuals against time, it shows presence of
heteroscedasticity.

Theory Questions
1. Suppose heteroscedasticity is present in a regression model and the ordinary least
squares procedure is applied to estimate the parameters of the model. What are
the consequences for the properties of the estimators and the hypothesis testing
procedures?

2. Consider the following model:


yi = β1+ β2X2i + β3X3i + ui
Suppose it is revealed to us that this regression suffers from heteroscedasticity.
How can we transform the model so that there is homoscedasticity if:

(i) Error variances are known,

(ii) Error variances are unknown.
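One possible sketch of the weighted least squares idea (not the only acceptable answer): when the σi are known, every term of the equation is divided by σi before running OLS; when they are unknown, a pattern such as E(ui²) = σ²X2i² is assumed and the equation is divided by X2i. The Python fragment below illustrates the mechanics; the data, variable names and seed are made-up placeholders.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
n = 100
x2 = np.random.uniform(1, 10, n)
x3 = np.random.uniform(1, 10, n)
u = np.random.normal(scale=x2)                       # error s.d. assumed proportional to x2
y = 1.0 + 2.0 * x2 + 0.5 * x3 + u

X = sm.add_constant(np.column_stack([x2, x3]))
# WLS weights are the reciprocals of the error variances: here Var(u_i) = sigma^2 * x2_i^2
wls = sm.WLS(y, X, weights=1.0 / x2**2).fit()
print(wls.params)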

3. For a cross sectional data on 20 countries, consider the following regression


model:
IMR̂i = b1 + b2GNP2i + b3EDU3i + ui
Where, IMR = Infant Mortality rate.
GNP = Per Capita Gross National Product
EDU = Enrolment in Primary Education
The data set has 5 lower middle income countries, 5 upper middle, 5 low income
and 5 high income countries (according to the classification used by the world
bank)
(i) For such a diverse sample, do you have any priori reason to suspect that error
variance might violate an important assumption of classical linear regression
model? Explain.
(ii) Suppose you conduct a formal test to verify your conjecture. State the null
hypothesis carefully. You are given the following auxiliary regression result using
the residuals obtained from the above regression:
ûi² = -15.76 + 0.3810GNPi – 4.5641EDUi + 0.00000(GNPi)² + 0.1328(EDUi)² –
0.005(GNPi)(EDUi)
R2 = 0.30
What is your conclusion?


4. Consider the following model relating profits to sales of a number of firms.


Pt = B1 + B2St + B3Dt+ ui
Where,
Pt = Annual profits
St = Annual sales
Dt = 1, if firm is in manufacturing industry
0, otherwise
(i) State the auxiliary equation so that you can carry out White’s test for
heteroscedasticity.
(ii) State the null and alternative hypothesis, the test statistic, its distribution and
degree of freedom, given n observations. Write down the 5% critical value for
the test and describe the decision rule.
(iii) Explain the consequences on interpretation of regression results based on
ordinary least squares.

5. In a two variable population regression function Y i = B1 + B2Xi + ui suppose the


error variance has the following structure: E(ui²) = σ²Xi⁴. How will you transform
the model to achieve homoscedastic error variance?
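Hint (one possible route): since Var(ui) = σ²Xi⁴, dividing the whole equation by Xi² gives
Yi/Xi² = B1(1/Xi²) + B2(1/Xi) + ui/Xi²
and Var(ui/Xi²) = σ²Xi⁴/Xi⁴ = σ², which is constant, so the transformed model is homoscedastic.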

6. Describe the Park’s test for detecting heteroscedasticity.
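For reference, the two steps of the Park test can be sketched in Python as follows; y and x are placeholder series and the simulated numbers are purely illustrative.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
x = np.random.uniform(1, 10, 60)                     # placeholder regressor
y = 2 + 3 * x + np.random.normal(scale=x)            # placeholder dependent variable

step1 = sm.OLS(y, sm.add_constant(x)).fit()          # Step 1: OLS of y on x, keep residuals
e = step1.resid

# Step 2: regress ln(e^2) on ln(x); a statistically significant slope suggests heteroscedasticity
park = sm.OLS(np.log(e**2), sm.add_constant(np.log(x))).fit()
print(park.params, park.tvalues)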

Practical Questions

1. Based upon the data on research and development (R&D) expenditure, sales, and
profits for 18 industry groupings in the United States, all figures in millions of dollars,
the following model is fitted. Since the cross-sectional data used for this
model are quite heterogeneous, in a regression of R&D on sales (or profits)
heteroscedasticity is likely. The regression results were as follows:
R&D̂i = 192.9931 + 0.0319 Salesi

𝑠𝑒 = (533.9317)(0.0083) 𝑟 2 = 0.4183

To see if the above model suffers from heteroscedasticity we obtained the residuals 𝑒𝑖 ,
squared them and fitted the following models to conduct formal tests.

i. |𝑒𝑖2 | = −974,469.1 + 86.2321 𝑆𝑎𝑙𝑒𝑠𝑖


𝑠𝑒 = (48,02,343) (40.3625) 𝑟 2 = 0.2219
Use the Park test to determine, if the model suffers from the problem of
heteroscedasticity.
ii. |𝑒𝑖 | = 578.5710 + 0.0119 𝑆𝑎𝑙𝑒𝑠𝑖
𝑠𝑒 = (678.6950)(0.0057) 𝑟 2 = 0.2140


Use the Glejser test to determine, if the model suffers from the problem of
heteroscedasticity.
iii. 𝑒𝑖2 = 62,19,665 + 229.3508𝑆𝑎𝑙𝑒𝑠𝑖 − 0.000537𝑆𝑎𝑙𝑒𝑠𝑖2
𝑠𝑒 = (64,59,809)(126.2197)(0.0004) 𝑟 2 = 0.2895
Use the White's test to determine, if the model suffers from the problem of
heteroscedasticity.

2. In a regression of housing expenditure in Rupees (Y i) on annual incomes of families


in Rupees (Xi) for a sample of 27 families and the following results were obtained:
Variable Coefficients Standard error
X 0.121 0.009
Constant 3.803 4.570
n = 27 R = 0.776
2

On plotting the residual against X i, it was found that the variance of the residuals
increased with Xi
(i) What problem does this indicate? Name any one test for its detection.

(ii) What are the consequences of this problem for OLS estimators?

(iii) Which type of dataset is more likely to be characterized by this problem?

(iv) Explain the estimation process of Weighted Least Squares with known error
variances in this context.

3. A regression of salaries of 222 professors from seven universities in the U.S. on their
years of experience since they completed their Ph.D. was performed.

(a) The graph of squared residuals (û²) against the fitted values of the dependent
variable, salary, together with the least squares fit, is shown below. What does
the graph show? Is there evidence of heteroscedasticity?
(b) The test statistic for White's test for this regression was reported as
19.7. State the null and alternative hypotheses and the test statistic for carrying
out this test. Is the null hypothesis rejected at 5% level of significance?


4. Consider the regression model that postulates relationship between monthly demand
for burgers (Y) and monthly household income (HH_INC, in rupees).

𝑌𝑖 = 𝐴 + 𝐵 𝐻𝐻_𝐼𝑁𝐶𝑖 + 𝑢𝑖 ,

The regression was run for a cross section of 41 observations. Suspecting heteroscedasticity,
the White's test for heteroscedasticity was chosen and the following results were obtained:

𝑒𝑖2 = −6219 + 229.35𝐻𝐻_𝐼𝑁𝐶𝑖 + 0.000544𝐻𝐻_𝐼𝑁𝐶𝑖2

𝑅2 = 0.1148

Test for heteroscedasticity at 5% level of significance. State the null and alternative
hypothesis clearly.

5. In a regression of average wages (W,$) on the number of employees (N) for a


random sample of 30 firms, the following regression results were obtained:
𝑊̂ = 7.5 + 0.009N
t = n.a. (16.10) r2 = 0.90
Ŵ/N = 0.008 + 7.8(1/N)
t = (14.43) (76.58) r2 = 0.99

(a) How do you interpret the two regressions?

(b) What is the reason for transforming Eq (1) into Eq. (2).

(c) What is the author assuming in going from Eq. (1) to (2) ?

(d) Has the author successfully removed the problem which Eq. (1) is suffering
from.

(e) Can you relate the slopes and intercepts of the two models?

(f) Can you compare the R2 values of the two models? Why or Why not?

6. For pedagogic purposes Hanushek and Jackson estimate the following model:

𝐶̂𝑡 = 26.19 + 0.6248 𝐺𝑁𝑃𝑡 − 0.4398𝐷𝑡 𝑅2 = 0.999


(2.73) (0.0060) (0.0736)
Ĉt/GNPt = 25.92(1/GNPt) + 0.6246 - 0.4315(Dt/GNPt)

(2.22) (0.0068) (0.0597) 𝑅2 = 0.875

a) What assumption is made by the authors about the nature of the


heteroscedasticity? Can you justify it?


b) Compare the results of the two regressions. Has the transformation of the
original model improved the results, that is, reduced the estimated standard
errors? Why or why not?

c) Can you compare the two 𝑅2 values? Why or why not?

7. You are given the following data:

𝑅𝑆𝑆1 𝑏𝑎𝑠𝑒𝑑 𝑜𝑛 𝑡ℎ𝑒 1𝑠𝑡 30 𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑡𝑖𝑜𝑛𝑠 = 55, 𝑑𝑓 = 25


𝑅𝑆𝑆2 𝑏𝑎𝑠𝑒𝑑 𝑜𝑛 𝑡ℎ𝑒 𝑙𝑎𝑠𝑡 30 𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑡𝑖𝑜𝑛𝑠 = 140, 𝑑𝑓 = 25

Carry out the Goldfeld-Quandt test of heteroscedasticity at the 5% level of significance.
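Worked hint: the test statistic is λ = (RSS2/df2)/(RSS1/df1) = (140/25)/(55/25) = 140/55 ≈ 2.55, to be compared with the 5% critical value of F with (25, 25) degrees of freedom (roughly 1.96; check the F table). Since λ exceeds the critical value, the hypothesis of homoscedasticity is rejected.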

8. The regression results from the model,

Yi = B1 + B2 Xt + ut

are obtained for a cross-section of 30 households, where Y is consumption expenditures


(in Rs. thousands) and X is income (in Rs. thousands). In order to check for the presence
of heteroscedasticity, the observations are arranged in the increasing order of the
magnitude of X. The regression is run separately for the first 11 (Group 1) and last 11
observations (Group 2). The regression results for these two subgroups are reported as
follows (standard errors are reported in the parentheses) :

Group 1: 𝑌̂𝑖 = 1.0533 + 0.876𝑋𝑖

(se) = (0.616) (0.038)

R2 = 0.9851. RSS1 =0.475 x 105

Group 2: 𝑌̂𝑖 = 3.279 + 0.835𝑋𝑖

(se) = (3.443) (0.096)

R2 = 0.9585 RSS2 = 3.154 x 105

i. Perform Goldfeld-Quandt test at 5% level of significance. state the null and


alternate hypotheses clearly. Do you find evidence of heteroscedasticity.
ii. List the underlying assumptions related to the disturbance term made in the
above test.


9. A researcher estimated the following regression :

𝐼𝑛 (𝑆𝑎𝑙𝑎𝑟𝑦)𝑖 = 53.809 + 0.0438 𝑌𝑒𝑎𝑟𝑠𝑖 − 6.237𝑌𝑒𝑎𝑟𝑠𝑖2 + 𝑒𝑖

𝑡 (92.1) (9.088) (-5.19) 𝑅2 = 0.789, 𝑛 = 219

Where

In : natural logarithm

Salary : Salary of an individual (Rs. '000)

Years : Years of experience

When the White's general test for heteroscedasticity at 5% level of significance was
conducted, the R2 obtained from the regression of 𝑒𝑖2 on a constant, Years, Years2 and
Years3 was 0.45. Is the researcher correct in concluding that Years and Years 2 are
individually significant variables in the salary regression? Why? Why not?

EXAM STYLE QUESTION

1. Using data on compensation per employee in thousands of dollars (COMP) and


average productivity in thousands of dollars (PROD) for a cross section of 50 firms for
the year 1958, the following regression results were obtained (t ratios in
parentheses):
COMP̂i = 1992.35 + 0.233PRODi

t= (2.1275) (2.333)

R2 = 0.5891

Since the cross sectional data included heterogeneous units, heteroscedasticity was likely
to be present. The Park test was performed and the following results of auxiliary
regression were obtained:
ln êi² = 35.817 - 2.8099 PRODi

𝑡 = (0.934) (-0.667) 𝑅2 = 0.0595

(i) Use the result of auxiliary regression to check if the model indeed suffers from
heteroscedasticity, perform the test at 5% level of significance.
(ii) What could be the possible remedies of heteroscedasticity?[Eco(H) 2019]

2. The Home Ministry of a country wants to test if petty crimes (minor thefts) are higher
in states where poverty rates are high. They obtain data on several variables and ran
the following cross-section regression for 35 states in the country.


𝐶̂𝑖 = 6.275 + 0.1147𝑃𝑅𝑖 − 0.0712𝐿𝑅𝑖 + 0.0862𝑆𝐷𝑃𝑖

𝑆𝑒 = (3.125)(0.02713)(0.0361)(0.03834)
𝑛 = 35 𝑅2 = 0.6876

where C = Crimes per lakh of population

PR = Poverty Rates

LR = Literacy Rates

SDP = State Domestic Product

The regression equation given in part (a) is modified as follows :

𝐶̂𝑖 = 23.83 − 0.0089𝐿𝑅𝑖

This equation was estimated using 50 cross-sectional observations on states, by ordinary
least squares (OLS). To check for heteroscedasticity related to LR, separate regressions
were run for the 17 states with the lowest LR and the 17 states with the highest LR. The
sum of squared residuals for the low-LR states was 270. The sum of squared residuals for
the high-LR states was 90.

i. Compute unbiased estimates of the variance of the error term in the two
subsamples.
ii. Conduct the Goldfeld-Quandt test at 5% level of significance.
iii. Regardless of your conclusion for part (ii), suppose you believe that
heteroscedasticity is indeed present and that the variance of the error term is
inversely proportional to state LR : Var (∈𝑖 ) = Y/LRi, where Y = an unknown
constant. Explain how you would transform the data to satisfy the classical
assumptions. [Eco(h) 2022]

3. In the model Yi = β2Xi + μi, Var(μi) = σ²Xi².
i. Show that Var(β̂2) = σ²ΣXi⁴ / (ΣXi²)².
ii. How would you use the Breusch-Pagan-Godfrey test to check for the violation of
homoscedasticity?
iii. How would you transform the model to correct for heteroscedasticity? What
assumptions are being made here in the process? [Eco(h) 2021]

4. Let the population regression function be :

𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖

How will you transform the model to obtain homoscedastic errors under each of the
following cases, assuming other CLRM assumptions for 𝑢𝑖 hold:

i. 𝑢𝑖 = 𝜀𝑖 (𝑋2𝑖 )1/2


ii. 𝑢𝑖 = 𝜀𝑖 𝑍𝑖 (where 𝑍𝑖 is a non-stochastic variable which does not belong to this


model)
iii. 𝐸 (𝑢𝑖2 ) = 𝜎 2 /𝑋3𝑖
It is given that εi ~ N (mean = 0, variance = σ²). [Eco(h) 2015]

5. Let the population regression function be:

𝑌𝑖 = 𝛽1 + 𝛽2 𝐷1𝑖 + 𝛽3 𝐷2𝑖 + 𝛽4 𝑋𝑖 + 𝛽5 (𝐷1 ∗ 𝑋)𝑖 + 𝜇𝑖

Y : Annual Income in Lakhs of Rupees

D1 and D2 are MBA and gender dummies respectively

X : Work experience in years

D1 = 1 if one has MBA degree

= 0 otherwise

D2 = 1 for a female executive

= 0 for a male executive

Suppose that E(𝜇/X, D1, D2) = 0 and V(𝜇/X, D1, D2) = 𝜎 2 𝑋 2 . Transform the original
equation to obtain homoscedastic error term.

6. Based on data on value added in manufacturing, MANU, and gross domestic product
for 28 countries in 2010, all measured in millions of US dollars. The following
regression results were reported (standard errors in parentheses):
MANÛi = 604 + 0.194GDPi

𝑠𝑒 = (533.93) (0.013)

Since the cross sectional data were based on heterogeneous units, heteroscedasticity was
likely to be present. White's test was performed using ordinary least squares residuals,
ei of the above regression and the following results were obtained :

𝑒̂𝑡2 = −62196 + 229.3508𝐺𝐷𝑃𝑖 − 0.000537𝐺𝐷𝑃𝑖2

𝑅2 = 0.5891

i. Use the R2 value reported in the auxiliary regression to test if the model indeed
suffers from heteroscedasticity. Perform the test at 5% level of significance.
ii. In the light of your answer in part (i) what can you say about the regression results
reported above? [Eco(h) 2013]


7. A researcher finds evidence of heteroscedasticity in the regression model.

Yi = A + BXi + ui

How will you modify the original regression in order to deal with the problem of
heteroscedasticity in each of the following cases, if the error variance takes the form:

a) 𝐸 (𝑢𝑖2 ) = 𝜎 2 𝑋𝑖2
b) 𝐸 (𝑢𝑖2 ) = 𝜎 2 𝑋𝑖3
c) E(ui²) = σ²Xi^(1/3) [Eco(h) 2017]
8. A researcher finds evidence of heteroscedasticity in the regression model.

Yi = B1 + B2 Xi + ui

The function is estimated using OLS and the residuals, ei, are found to be heteroscedastic.
Transform the above model by applying the weighted least squares (WLS) method to
obtain homoscedastic errors under each of the following. Do the transformed regressions
in each case have an intercept term?

i. ui = εi·Xi where εi ~ N(0, σ²)

ii. E(ui²) = σ²√Xi

9. A researcher postulates that the car density (number of cars per thousand
population), Y, in a city depends on the bus density (number of buses per thousand
population), X. He runs the regression model. Yi = B1 + B2 Xi + ui for a cross-section
of 128 cities in India and finds evidence of heteroscedasticity.
i. How would the model be re-estimated if it is assumed that error variance is
proportional to the reciprocal of Xi, that is E(ui²) = σ²/Xi? Show that the transformed
error term is homoscedastic.
ii. Can we compare R2 of the original model and the transformed model? Explain your
answer. [Eco(h) 2018]

10. A researcher obtained the following results for determining the relation between
school dropout rates of a district (% of class V students who drop out of school) in
India and district's per capita income, district's expenditure on education and a
dummy variable D_partyABC =1 if political party ABC was in power, 0 otherwise. 215
districts were included in this study.

Model # | Intercept | Per Capita Income (X2) | District's expenditure on education (X3) | D_partyABC (X4) | R2 | TSS
1 | 1.422 (.876) | -0.231 (.058) | -0.379 (.14) | 0.002 (.00001) | 0.9452 |
2 | 0.442 (.561) | -0.115 (.045) | | | | 0.8952

P values are reported in the parentheses

i. What are a priori expected sign of the coefficient of district's expenditure on


education and why? What is the p value of this coefficient in model#1?
ii. An opposition party XYZ claims that wherever party ABC comes to power, school
drop- out rates increase. Is this a valid claim?
iii. Test the hypothesis Ho: B3=0 & B4=0?
iv. Calculate R² for model #2. Will this be greater than the adjusted R² for model #1 and why?
v. To test for heteroscedasticity, the researcher conducted a Glejser test for model
#1 and obtained the p value to be 0.04. What can you conclude about the absence
of heteroscedasticity? [Eco(h) 2023]

11. The amount of loan (Li in lakhs) that is sanctioned by a bank to an applicant is
regressed on Gender Dummy for Male (D_Male = 1 if male, 0 otherwise), Credit Score
(Ci; higher values indicate good credit history), Income (Inci; in lakh Rupees) and
education level (Edi; in years) of the applicant for a sample of 45 applicants.

In Li = 4.999 - 0.0038D_Malei + 0.043Ci + 1.062 In Inci + 0.998Edi R2=0.6541

i. What are the likely consequences on the results of the Gauss Markov theorem if it
is found that income and education have a high correlation coefficient of 0.88?
ii. Interpret the coefficient of D_Male.
iii. Test for overall goodness of fit of this regression.
iv. The value of the test statistic of the White's General test was found to be 9.69. What
is the distribution of this test statistic? What are the null and alternative
hypotheses of this test? What can you conclude about the presence of
heteroscedasticity based on the above information given squares and cross
products of explanatory variables were included in the auxiliary regression?
v. What could be the possible remedy of the problem if heteroscedasticity is indeed
present? Assume that error variances are unknown. [Eco(h) 2023]

12. In a regression of housing expenditure in rupees (𝑌𝑡 ) on annual income of families in


rupees (𝑋𝑡 ) for a sample of 27 families, the following results were obtained:

Variable Coefficients Standard Error

X 0.121 0.009

Constant 3.803 4.570


N=27

𝑅2 = 0.776

On plotting the residuals against X, it was found that the variance of the residuals
increased with X,

i. What problem does this indicate? Name any one test for its detection and explain
the steps to conduct this test.
ii. What are the consequences of this problem for OLS estimators? Which type of
dataset is more likely to be characterized by this problem?
iii. Explain the estimation process of Weighted Least Squares with unknown error
variances in this context where error variance is proportional to 𝑋 2 . Also, mention
which is the intercept and which is the slope parameter in the transformed model and how
do we get back to the original model.

Answer of objective Questions


1) b 2) a. 3) c. 4) b. 5) a. 6). b 7) a. 8). b. 9) b. 10) b. 11) a.


CHAPTER-7
Autocorrelation
OBJECTIVE TYPE QUESTION
Choose the Best alternative for each question
1. When error terms across time series data are intercorrelated, it is known as
a. Cross correlation
b. Cross autocorrelation
c. Spatial autocorrelation
d. Serial autocorrelation

2. The regression coefficients estimated in the presence of autocorrelation in the
sample data are NOT
a. Unbiased estimators
b. Consistent estimators
c. Efficient estimators
d. Linear estimators

3. Estimating the coefficients of a regression model in the presence of autocorrelation


leads to this test being NOT valid
a. t-test
b. F-test
c. Chi-square test
d. All of the above

4. If in our regression model, one of the explanatory variables included is the lagged
value of the dependent variable, then the model is referred to as
a. Best fit model
b. Dynamic model
c. Autoregressive model
d. First-difference form

5. Regression of Ui on itself lagged one period is referred to as


a. AR(1) model
b. AR(2) model
c. Coefficient of auto-covariance model
d. White noise model

6. In the regression model ut = ρut-1 + εt, -1 < ρ < +1, ρ is the


a. Coefficient of autocorrelation
b. First-order coefficient of autocorrelation
c. Coefficient of autocorrelation at lag 1
d. All of the above


7. Estimating the regression model in the presence of autocorrelation using this


method leads to BLUE estimators:
a. OLS
b. GLS
c. MLE
d. Two-stage regression estimation

8. The regression model does not include the lagged value(s) of the dependent
variable as one of the explanatory variables. This is an assumption underlying on
of the following tests of autocorrelation:
a. Durbin-Watson d test
b. Runs test
c. Breusch-Godfrey test
d. Graphical method

9. The d-statistics value is limited to


a. 0 to 2
b. 2 to 4
c. 0 to 4
d. 4 ± 2

10. If the Durbin-Watson d-test statistic is found to be equal to 0, this means that first-
order autocorrelation is
a. Perfectly positive
b. Perfectly negative
c. Zero
d. Imperfect negative correlation

TRUE AND FALSE

State whether the following statements are true or false. Briefly justify your answer.

(a) When autocorrelation is present, OLS estimators are biased as well as


inefficient.
(b) The Durbin Watson test assumes that the variance of the error term u i is
homoscedastic.
(c) The first differences transformation to eliminate autocorrelation assumes
that the coefficient of autocorrelation 𝜌is -1.
(d) The R2 values of two models, one involving regression in the first difference
form and another in the level form, are not directly comparable
(e) A significant Durbin-Watson d does not necessarily mean there is
autocorrelation of the first order.
(f) In the presence of autocorrelation, the conventionally computed variances
and standard errors of forecast values are inefficient.
(g) The exclusion of an important variable from a regression model may give a
significant d value.


(h) In the regression of the first difference of Y on the first differences of X, if there
is a constant term and a linear trend term, it means in the original model there
is linear as well as a quadratic trend term.
(i) For the two-variable regression model, Yt = B1 + B2Xt + ut, if the OLS residuals
(et) are plotted against time (t) and a distinct pattern is observed, then it is an
indication of heteroscedasticity.

THEORY QUESTIONS

1. What do you understand by the term autocorrelation? If the coefficient of


autocorrelation, 𝜌, is not known, how can it be estimated from each of the following?
i. Durbin-Watson d-statistic
ii. OLS residuals.

2. If ρ is known to be 0.8, discuss how the problem of autocorrelation can be remedied


using Generalized Least Squares (GLS) for the following two-variable regression
model,

Yt = B1 + B 2 Xt + ut

Where the disturbance term. u1 follows AR(1) scheme, that is.

ut = 𝜌ut-1 + vt
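Hint (sketch of the generalized difference transformation): lag the equation one period, multiply by ρ = 0.8 and subtract:
(Yt - 0.8Yt-1) = B1(1 - 0.8) + B2(Xt - 0.8Xt-1) + (ut - 0.8ut-1)
Since ut - 0.8ut-1 = vt satisfies the classical assumptions, OLS applied to the transformed variables Yt* = Yt - 0.8Yt-1 and Xt* = Xt - 0.8Xt-1 gives BLUE estimators (the first observation is lost unless the Prais-Winsten adjustment is used).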

3. In the two-variable regression model, Yt = B1 + B2Xt + ut, discuss how the problem
of autocorrelation can be remedied using the First Difference Method (ρ = 1) if the
disturbance term ut follows the AR(1) scheme, that is, ut = ρut-1 + vt.
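Hint: setting ρ = 1 in the generalized difference form gives the first-difference equation ΔYt = B2ΔXt + vt, where ΔYt = Yt - Yt-1 and ΔXt = Xt - Xt-1; the intercept drops out, so the transformed regression is run through the origin.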

Practical Questions
1. Given a sample of 50 observations and 4 explanatory variables, what can you say
about autocorrelation if
a) d=1.05, b) d=1.05, c) d=2.50, d) d=3.97
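Reminder of the decision rule needed here (dL and dU are read from the Durbin-Watson table for n = 50 and k' = 4 regressors): reject H0 of no positive autocorrelation if d < dL, do not reject if d > dU, and the test is inconclusive if dL ≤ d ≤ dU; for negative autocorrelation apply the same rule to 4 - d. If d has to be computed from raw residuals, a minimal Python helper (illustrative only) is:

import numpy as np

def durbin_watson_d(e):
    # d = sum of (e_t - e_{t-1})^2 over t = 2..n, divided by sum of e_t^2 over t = 1..n
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)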

2. Durbin-Watson d-statistic for a regression model is computed as 2.317. There are 5


regressors (excluding the intercept) in this model estimated for 45 observations.
Test for presence of autocorrelation at 5% level.
3. In studying the movement in labour's share in value added in the metal industry for
an economy, based on annual data for 1980-2000, the following linear trend model
was considered

𝑌𝑡 = 𝐵1 + 𝐵2 𝑡 + 𝑢𝑡

Where Y= Labour's share in value added


t = time

The following regression results were obtained, t-ratios in paranthesis :

𝑌̂𝑡 = 0.4529 − 0.0041𝑡

(𝑡)(2.535) (−3.9608)

𝑅2 = 0.5284, Durbin Watson’s d-statistic =0.8252

(a) Use the Durbin Watson d-statistic test to check if there is autocorrelation in the
model. Give the null and alternate hypothesis clearly.
(b) Give any three reasons that can cause autocorrelation.

4. Let the population regression function be as follows. where errors follow AR(1)
process:

𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝜇𝑡

𝜇𝑡 = 𝜌𝜇𝑡−1 + 𝜀𝑡

OLS is used to estimate the function using time-series data for 10 consecutive time
periods.

(i) If errors follow AR(1) how would it affect the least squares estimation?
(ii) The residuals for the 10 consecutive time periods are as follows
Time Period: 1 2 3 4 5 6 7 8 9 10
Residuals: -5 -4 -3 -2 -1 +1 +2 +3 +4 +5

Plot the residuals with respect to time. What conclusion can you draw about the pattern
of the residuals over time?

a. Compute the Durbin-Watson d-statistic and interpret it.


b. What are the underlying assumptions of the d statistic? What alternative tests can
be used if these assumptions are not met?
c. Now suppose that in the regression given above errors are assumed to follow
higher order autoregressive process. It is also given that the auxiliary regression
of estimated residuals on original X and lagged values of estimated residuals gives
an R² of 0.7498. Obtain an appropriate test statistic to test for serial correlation.
Outline the steps of the test clearly.
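A possible sketch of the Breusch-Godfrey mechanics in Python (statsmodels ships acorr_breusch_godfrey; the series y and x below are placeholders, not the data of this question):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

np.random.seed(0)
x = np.random.normal(size=40)                        # placeholder regressor
y = 1 + 2 * x + np.random.normal(size=40)            # placeholder dependent variable

fit = sm.OLS(y, sm.add_constant(x)).fit()
# The LM statistic is based on the R^2 of the auxiliary regression of the residuals on the
# original regressors and p lagged residuals; it is chi-square(p) under H0 of no serial correlation
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(fit, nlags=2)
print(lm_stat, lm_pval)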


5. A researcher estimated the demand function for money for an economy for 100
quarters using quarterly data for the period Q1: 1985-1986 to Q2: 2010-2011. The
regression results are as follows (standard errors are mentioned in the brackets and
In indicates natural log) :
̂𝑡 = 2.6027 − 0.4024𝐼𝑛𝑅𝑡 + 0.59𝐼𝑛𝑌𝑡
𝐼𝑛𝑀
(𝑠𝑒) = (1.24)(0.36)(0.36)
𝑅2 = 9.2, 𝐷𝑢𝑟𝑏𝑖𝑛 𝑊𝑎𝑡𝑠𝑜𝑛 𝑑 − 𝑠𝑡𝑎𝑡𝑖𝑠𝑡𝑖𝑐 = 1.755
Where Mt = real cash balances
Rt = long-term interest rate
Yt = aggregate real national income
Use Durbin-Watson d test to check for the presence of first order autocorrelation
at 5% level of significance.

6. In a study of the determination of prices of final output at factor cost in the UK, the
following results were obtained on the basis of the data:

PFt^ = 2.033 + 0.273Wt − 0.521Xt + 0.256Mt + 0.028Mt−1 + 0.121PFt−1


se = (0.992) (0.127) (0.099) (0.024) (0.039) (0.119)

𝑅2 = 0.984, d=2.54

Where PF= prices of final output at factor cost, W= wages and salaries per employee,
X= gross domestic product per person employed, M= import prices, Mt−1 = import
prices lagged 1 year, PFt−1 = prices of final output at factor cost lagged 1 year.

“Since for 18 observations and 5 explanatory variables, the 5% lower & upper d values
are 0.71 and 2.06, the estimated d value of 2.54 indicates that there is no positive
autocorrelation. Comment.

7. Suppose that you estimate the following regression:


∆ ln 𝑜𝑢𝑡𝑝𝑢𝑡 = 𝛽1 + 𝛽2 ∆lnLt + 𝛽3 ∆lnK t + ui

Where Y is output and L is labour input, and K is capital input and∆is the first
difference operator. How would you interpret 𝛽1 in this model? Could it be
regarded as a estimate of technological change? Justify your answer.

8. (i) To study the effect of the unemployment rate (u) on the index of job vacancies (VACi)
in the U.S.A. for 24 observations, the following results were obtained:
In VACi = 7.3084 – 1.5375Inui
t= (5.8250) (-21.612)
r = 0.9550,
2 d = 0.9108
Is there a problem of autocorrelation indicated in the results? Choose α = 5%.

(ii) Outline the method of estimation that will produce BLUE estimators in the
presence of AR(1) autocorrelation.


9. The following production function was estimated by an economist (standard


errors are reported in parentheses)
logQi = 3.39 + 1.45 logL + 0.384 log K
Se (0.23) (0.08) (0.04)
R2 = 0.9948 d = 0.88, n = 39,

(i) Test for the presence of autocorrelation using Durbin Watson test at 5% level
of significance. State your hypotheses clearly.

(ii) What are the limitations of Durbin Watson method?

10. (i)From the given data on the indexes of real compensation per hour (Y) and output
per hour (X) in the business sector of the U.S. economy for the period 1959 to 1998,
the base of the indexes being 1992 = 100. We obtain the following regression model.

Yt = 29.5192 + 0.7136Xt
se = (1.9423) (0.0241)
t = (15.1977) (29.6066)
r2 = 0.9584, d = 0.1229
Using Durbin Watson d test, check does the model suffers from autocorrelation.

(ii) By changing the functional form we obtain the following model:


̂ 𝑡 = 1.5239
InY + 0.6716In Xt
se = (0.0762) (0.0175)
t = (19.9945) (38.2892)
r2 = 0.9747 d = 0.1542
Does by changing the functional form the model becomes free from
autocorrelation. Comment.

(iii) Since the data underlying regression in part(i) is time series data, it is quite
possible that both wages and productivity exhibit trends. If that is the case,
then we need to include the time or trend, t, variable in the model to see the
relationship between wages and productivity net of the trends in the two
variables.
To test this, we include the trend variable in regression given in part(i) and
obtained the following results

̂𝑡
Y = 1.4752+ 1.3057Xt -0.9032t
se = (13.18) (0.2765) (0.4203)
t = (0.1119). (4.7230) (-2.1490)
R2 = 0.9631 d = 0.2046

Has the problem of autocorrelation been resolved? If not, can we say that the model suffers
from pure autocorrelation?


11. For the Phillips curves for United States from 1958 to 1969 the following regression
was obtained:
Ŷt = -0.2594 + 20.5880(1/Xt)

t = (-0.2572) (4.3996)
R2 = 0.6594, d = 0.6394

(i) Interpret the regression. Is there any evidence of first order autocorrelation in
the residuals?

(ii) If there is autocorrelation, estimate the coefficient of first order autocorrelation.

12. Consider the following population regression function


In (Div)t = β1 + β2 In (PRFT)t + β3 Time + ui

Here, DIV = Corporate Dividends Paid


PRFT = Corporate Profit
In = Natural Logarithms
The estimated sample regression results for an economy for 244 quarterly observations
are presented below:
Coeff. Standard t-statistic Prob-value
Errors
Intercept 0.4357 0.1921 2.2674 0.0243
In(PRFT) 0.4245 0.0777 5.4614 0.0000
Time 0.0126 0.0014 8.93 0.0000
R² = 0.9914, adj. R² = 0.9913,

Sum of Regression = 0.133 F-statistic = 13930.73


SE of Regression = 0.133 Prob(F-statistic) = 0.0000
Durbin –Watson – statistic = 0.0201

(i)What are the economic interpretation of β2 and β3?

(ii) On what counts would a researcher be satisfied with these results at a first
glance? Verify your conjectures using formal tests. For tables take the closest
value of n.

(iii) Is there anything in these results that the researcher needs to worry about?
Verify using formal test (s).

13. Consider the following demand for energy model for India for 1945 to 1995:
lnŶt = 1.5495 – 0.9972 lnX2t – 0.3315 lnX3t + 0.5284 lnYt-1
se = (0.0903) (0.0191) (0.0243) (0.024)
R² = 0.994, d = 1.8

Does the model suffer from first order autocorrelation? Describe the test statistic you use
and why?

14. Consider the following regression results on a model of demand for competitive
imports based on U.K. quarterly data covering 1980(Q1) to 1996(Q4).
lnM̂t = -5.5443 + 0.8105 lnGDPt - 0.0113 lnPt + 0.6178 lnMt-1

Se (.024)

R2 = 0.9897, Durbin Watson, d = 1.8125

Where:

Mt = aggregate imports in units of domestic currency at constant prices

GDPt = gross domestic product at constant prices

Pt = an index of relative price of imports expressed in domestic currency. Apply the


Durbin's h-test to detect the presence of first order autocorrelation. Based on your
results, comment on the regression results reported above.

15. Based on 147 quarterly observations, an aggregate consumption function is


estimated wherein aggregate consumption expenditure C1, is regressed on
disposable income YDt, and one period lagged dependent variable.
The estimated least square equation is as follows (standard errors in parentheses):

Ĉ𝑡 = 1.88 + 0.086YDt + 0.911Ct-1


(-4.49) (0.028) (0.0304)
DW = 1.569, R2 = 0.999

Which test should be used to test the presence of AR(1) error process in this model?
Describe the test and perform this test at 5% level of significance.
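Reminder: with the lagged dependent variable Ct-1 on the right-hand side, the ordinary Durbin-Watson d test is not valid; Durbin's h test is used instead, with
h = ρ̂ √[n / (1 - n·var(b))], where ρ̂ ≈ 1 - d/2,
n is the sample size and var(b) is the estimated variance of the coefficient of Ct-1. Under H0 of no AR(1) errors, h is approximately N(0, 1), so |h| > 1.96 rejects H0 at the 5% level.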

16. Complete the following table:


Sample size | Number of explanatory variables | Durbin-Watson d | Evidence of autocorrelation
25 2 0.83 Yes
30 5 1.24 —
50 8 1.98 —


60 6 3.72 —
200 20 1.61 —

17. For the model 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑡 + 𝛽3 𝑌𝑡−1 + 𝑢𝑡 , an auxiliary regression found:

𝑢̂𝑡 = 2.3 + 1.539𝑋2𝑡 + 1.32𝑢̂𝑡−1 + 0.892𝑢̂𝑡−2

𝑠𝑒 = (0.99)(0.089)(0.051)(0.0058)

𝑅2 = 0.567, 𝑛 = 34

Use the Breusch-Godfrey test to check for the presence of AR(1) scheme of
autocorrelation at 1% level of significance.

18. The following model of consumption is estimated for an economy for the years 1947-
2000 :

In Ct = B1 + B2 InPDIt + B3 INTt + ut

where C= real consumption expenditures in billions of dollars

PDI = real disposable personal income in billions of dollars

INT = real interest rate

and In indicates natural log.

The OLS residuals (et) are then regressed on InPDI, INT, and et-1 as follows:

et = A1 + A2 InPDlt + A3 INTt + A4 et-1 + vt

The above regression is reported to have R² = 0.0983. Perform the Breusch-Godfrey test to


check for the presence of autocorrelation at 5% level of significance.


EXAM STYLE QUESTIONS

1. Consider the following model of Indian imports estimated using data for 40 years for
the period 1945-1985. (Standard errors are given in parentheses)
lnŶt = 1.5495 + 0.9972 lnX2t - 0.3315 lnX3t + 0.5284 lnYt-1

𝑠𝑒 = (0.0903)(0.0191)(0.0243)(0.024)

R2 = 0.994, d = 1.8

Where,

Y = imports, X2 = GDP, X3 = CPI

i. Does the model suffer from first order autocorrelation? Which test statistic do you
use and why?
ii. Outline the steps of the test used. Compute the test statistic and test the
hypotheses that the preceding regression does not suffer first order
autocorrelation.
iii. If the general model is given as Yi = B1 + B2 X2i + B3 X3i + ui where errors follow the
AR(1) scheme, that is ut = ρut-1 + δt, where δt is a white noise error term, then
how would you transform the model to correct for the problem of autocorrelation?
[ Eco(h) 2014]
2. Consider the following model :

Ct = 𝛽 1+ 𝛽 2 GNPt + 𝛽 3 GNPt-1 + 𝛽 4 (GNPt – GNPt-1) + ut

where GNPt = GNP at time t,

Ct = aggregate private consumption expenditure in year t.

GNPt-1 = Gross National Product at time (t - 1)

(GNPt – GNPt-1 ) = change in the GNP between time t and time (1 - 1).

i. Assuming you have the data to estimate the preceding model, would it be possible
to estimate all the coefficients of this model? If not. what coefficients can be
estimated? Do you suspect a problem in the regression?
ii. Suppose that the GNP, explanatory variable was absent from the model. Would
your answer to (i) be the same?
iii. What is a possible remedy to the problem detected in (i) above?
iv. Now suppose the model is given as Ct = β1 + β2GNPt + β3Ct-1 + ut and the errors
are assumed to be autocorrelated. How would you test for serial correlation in the
model? Discuss the underlying assumptions of the test if any?


v. Suppose the equation given in (iv) above is transformed and estimated as: C t
/GNPt = 𝛽 1 (1/GNPt) + 𝛽2 + 𝛽3(Ct-1 /GNPt) +ut /GNPt. What could be the possible
reason for the transformation? How would you test for such a problem?

3. What do you understand by the term Autocorrelation? Consider the regression model.
Yt = B1 + B2Xt +ut. How can the problem of autocorrelation be remedied if 𝜌 is
assumed to be 1 ( 𝜌 = 1) and it is assumed that the error term follows the AR (1)
scheme. that is.

ut = 𝜌ut-1 + et, −1 ≤ 𝜌 ≤ 1

4. Quarterly data on country XYZ was collected for the period 2005-2019 to estimate
the relation between Foreign Direct Investment (FDI), Trade Openness (TO). Gross
Domestic Product (GDP) and Exchange Rate (E). TO is defined as the ratio of export
plus imports to GDP and t = trend. The following regression was estimated:
FDÎt = -0.58 + 0.012Et - 0.025TOt + 0.006GDPt + 0.34t

𝑠𝑒 = (0.097)(0.013)(0.004)(0.015)(0.09)

𝑅2 = 0.904, 𝑑 = 1.45

i. Interpret the estimated slope coefficients. Do you suspect some problem with the
above regression?
ii. What is the nature of the problem? How do you know? Explain its consequences?
FDIt/GDPt = β0 + β1(Et/GDPt) + β2(TOt/GDPt) + β3(t/GDPt) + ut
Will this transformation solve the problem in (ii) above? How? Can you compare
R² of this model with the model above?
iii. Suppose now the regression is estimated as given below
̂𝑡 = −0.74 − 0.042𝑇𝑂𝑡 + 0.41𝑡
𝐹𝐷𝐼
𝑠𝑒 = (0.057)(0.019)(0.364)
𝑅2 = 0.896, 𝑑 = 1.34
Test whether the regression specified above suffers from first order
autocorrelation? Which test will you use and why? (Use a = 5%)
iv. If the errors obtained from regression specified in (iii) above follows higher order
autoregressive process then how would you test for serial correlation? Give the
steps of the test in detail.
v. With reference to the regression specified in part (iii). What will be the remedy
for the problem of autocorrelation if it is detected? Explain.[Eco(h) 2022]
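Part (iv) refers to a Lagrange-multiplier test of the Breusch-Godfrey type. A minimal Python sketch of the auxiliary regression it is built on (resid and X are hypothetical placeholders; X is the regressor matrix of the original model including the constant):

import numpy as np

def breusch_godfrey_lm(resid, X, p):
    # Auxiliary regression: e_t on X_t and e_(t-1), ..., e_(t-p);
    # LM = (n - p) * R^2 is compared with chi-square with p degrees of freedom under H0.
    e = np.asarray(resid, dtype=float)
    X = np.asarray(X, dtype=float)
    n = len(e)
    lags = np.column_stack([e[p - j - 1: n - j - 1] for j in range(p)])
    Z = np.column_stack([X[p:], lags])
    y = e[p:]
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid_aux = y - Z @ b
    r2 = 1.0 - np.sum(resid_aux ** 2) / np.sum((y - y.mean()) ** 2)
    return (n - p) * r2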

5. The following regression was estimated using quarterly data for 10 years:

N̂Ct = −7.453 − 0.0714Pt + 0.00315Yt − 0.1537it

se = (13.58) (0.0347) (0.0017) (0.04919)

R̄2 = 0.758, ESS = 23.5104, RSS = 14.1867, d = 2.04

Where NC = new car sales per 1000 population

P = new car price index

Y = per capita real disposable income in Rs.

i = interest rate

i. Interpret the above regression and comment on the expected and estimated signs
of the coefficients. Also comment on the individual significance of the coefficients.
ii. Construct an ANOVA table and comment on the joint significance of the regression.
iii. Suppose you wish to test the restriction 𝛽3 = 𝛽4 for the above regression. Explain
the two methods that you can use to carry out this test.
iv. Do you suspect autocorrelation in the model? If yes, how would you test for it?
[Eco(h) 2020]

6. A researcher estimated the demand function for money for an economy for 101
quarters using quarterly data for the period Q1: 1986-1987 to Q2: 2011-2012. The
regression results are as follows (standard errors are mentioned in the brackets and
ln indicates natural log):

ln M̂t = 2.6027 − 0.424 ln Rt + 0.59 ln Yt + 0.524 ln Mt−1

se = (1.24) (0.36) (0.34) (0.02)

R2 = 0.9165

Durbin-Watson d-statistic = 0.650

Mt = real cash balances

Rt = long-term interest rate

Yt = aggregate real national income. [Eco(h) 2018]

i. Use Durbin's h-test to check for the presence of first order autocorrelation at 1%
level of significance.
ii. Can we use Durbin-Watson d-statistic test for the above regression ? Give reasons.
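A minimal Python sketch of the h computation, using only the figures reported above (an illustration of the formula, not a substitute for stating the hypotheses and decision rule):

import numpy as np

def durbin_h(d, n, var_lag_coef):
    # h = rho_hat * sqrt(n / (1 - n*var(coefficient of the lagged dependent variable))),
    # with rho_hat approximated by 1 - d/2; under H0 of no AR(1), h ~ N(0, 1).
    # The statistic is not defined when n * var_lag_coef >= 1.
    rho_hat = 1.0 - d / 2.0
    return rho_hat * np.sqrt(n / (1.0 - n * var_lag_coef))

h = durbin_h(d=0.650, n=101, var_lag_coef=0.02 ** 2)
print(round(h, 2))   # about 6.9, well above the 1% one-tailed normal critical value of 2.33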

7. An NGO has performed a regression analysis to determine whether divorce rates


affect suicide rates (Si) in a country. The NGO used data for 40 countries for the year
2010 and obtained the following results using OLS

Ŝi = 22.33 − 0.0237 HDI + 532.45 ln GDP per capita + 0.0056 Divorce Rates

(0.0034) (.019) (0.15) (.05)


Where Si is the number of suicides per million population in a country in the year 2019

HDI is the Human development index ranging from 0 to 1.00

GDP per capita is Gross domestic product per capita (in $)

Divorce Rates is number of divorces per million population in a country in the year 2019

i. Why did the NGO not use only divorce rate as an explanatory variable? What
would be the properties of OLS estimator of the coefficient of divorce rate in such
a regression?
ii. Given GDP has an exact relation with HDI where HDI = (GDP per capita × Literacy
Rates × Life Expectancy)1/3, will perfect multicollinearity be a problem in the above
regression?
iii. Interpret the coefficients of ln GDP per capita and Divorce rates.
iv. Suppose the NGO only examines the impact of divorce rates on suicide rates and runs
the following regression: Si = 𝛽1 + 𝛽2 Divorce Ratesi + 𝜀i. Show that the OLS estimator of 𝛽2 is an
efficient estimator.
v. The NGO also ran a time series regression for one specific country for a period of
35 years and obtained the following results.
Ŝt = 10.433 − 0.047 HDIt + 343.45 ln GDP per capitat + 0.0002 Divorce Ratest
Durbin Watson d=2.03
What can be inferred about the presence of AR(1) from the results?[Eco(h) 2023]

8. For estimating the Phillips curve for the United States from 1958 to 1969 the
following regression was obtained:

Ŷt = −0.2594 + 20.5880 (1/Xt)

𝑡 = (−0.2572)(4.3996)

𝑅2 = 0.6594, 𝑑 = 0.6394

Y = Rate of change of money wages in percent

X = Unemployment rate in percent

i. Interpret the intercept estimate. Is there any evidence of first order


autocorrelation in the residuals at 5% level of significance?
ii. Outline the method of estimation that will produce BLUE estimators in the
presence of first order autocorrelation.[Eco(h) 2024]
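One common answer to part (ii) is feasible GLS via the iterative Cochrane-Orcutt procedure (Prais-Winsten additionally transforms the first observation). A minimal Python sketch, under the assumption that X already contains a column of ones (all names here are generic placeholders):

import numpy as np

def cochrane_orcutt(y, X, tol=1e-6, max_iter=50):
    # 1. OLS on the original data (rho = 0 in the first pass),
    # 2. estimate rho from the resulting residuals,
    # 3. quasi-difference the data with that rho and re-estimate,
    # repeating until rho converges.  X must include the column of ones.
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    rho = 0.0
    for _ in range(max_iter):
        y_star = y[1:] - rho * y[:-1]
        X_star = X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        e = y - X @ beta
        rho_new = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return beta, rho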

Answers to Objective Questions

1) d  2) c  3) d  4) c  5) a  6) d  7) b  8) a  9) c  10) a


CHAPTER-8
Model Selection Criteria

Theory Questions

1. Suppose the true model is

Yi = 𝛽1 + 𝛽2Xi + 𝛽3Xi² + ui
but you estimate
𝑌𝑖 = 𝛼2 𝑋𝑖 + 𝑣𝑖
If you use observations of Y at X = -3, -2, -1, 0, 1, 2, 3, and estimate the "incorrect" model,
what bias will result in these estimates?

2. For a given model, Yi = 𝛽1 + 𝛽2X2i + 𝛽3X3i + ui, prove that if we omit X3 (a


relevant variable) then OLS estimator of 𝛽2 is not unbiased. What does the direction
of the bias depend upon?
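For reference, a compact statement of the omitted-variable bias result that this question (and several exercises below) turn on, written in LaTeX notation with lower-case letters denoting deviations from sample means:

\hat{\alpha}_2 = \frac{\sum x_{2i} y_i}{\sum x_{2i}^2}, \qquad
E(\hat{\alpha}_2) = \beta_2 + \beta_3 \, b_{32}, \qquad
b_{32} = \frac{\sum x_{2i} x_{3i}}{\sum x_{2i}^2}

Here α̂2 is the slope from the underfitted regression of Y on X2 alone and b32 is the slope of the auxiliary regression of the omitted X3 on the included X2. The bias term β3·b32 vanishes only if β3 = 0 or if X2 and X3 are uncorrelated, and its direction is given by the sign of β3·b32.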

Practical questions
1. Consider the data in following Table:
Y X2 X3
1 1 2
3 2 1
8 3 -3
Based on these data, estimate the following regressions:
Yi = α1 + α2X2i + u1i
Yi = λ1 + λ3X3i + u2i
Yi = β1 + β2X2i + β3X3i + u3i
Note: Estimate only the coefficients and not the standard errors.
(i) Is α2 = β2? Why or why not?
(ii) Is λ3 = β3? Why or why not?
What important conclusion do you draw from this exercise?
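A minimal Python check of the three requested regressions on the data in the table (numpy's least-squares routine is used here purely to recover the coefficients):

import numpy as np

Y = np.array([1.0, 3.0, 8.0])
X2 = np.array([1.0, 2.0, 3.0])
X3 = np.array([2.0, 1.0, -3.0])
ones = np.ones_like(Y)

# Yi = a1 + a2*X2i
a, *_ = np.linalg.lstsq(np.column_stack([ones, X2]), Y, rcond=None)
# Yi = l1 + l3*X3i
l, *_ = np.linalg.lstsq(np.column_stack([ones, X3]), Y, rcond=None)
# Yi = b1 + b2*X2i + b3*X3i  (three observations, three parameters: an exact fit)
b, *_ = np.linalg.lstsq(np.column_stack([ones, X2, X3]), Y, rcond=None)

print("alpha2 =", round(a[1], 4), "  lambda3 =", round(l[1], 4))
print("beta2  =", round(b[1], 4), "  beta3   =", round(b[2], 4))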

2. The correct regression model is given as under:

ĈMi = 263.6416 – 0.0056 PGNPi – 2.2316 FLRi
se = (11.5932) (0.0019) (0.2099)
r2 = 0.7077
ĈMi = 157.4244 – 0.0114 PGNPi
se = (9.8455) (0.0032)
r2 = 0.1662
(i) Interpret and compare the slope terms in two models.
(ii) Interpret and compare the intercept terms in two models.


(iii) If FLR is regressed upon the PGNP the following results are obtained:
F̂LRi = 47.5971 + 0.00256 PGNPi
se = (3.5553) (0.0011)
r2 = 0.0721
Explain the net effect and gross effect of PGNP on CM.
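For reference, the decomposition that part (iii) is driving at (a sketch using the omitted-variable result stated in the theory section): the gross effect of PGNP on CM equals the direct (net) effect plus the indirect effect running through FLR, i.e. −0.0056 + (−2.2316)(0.00256) ≈ −0.0113, which matches, up to rounding, the slope −0.0114 of the bivariate regression of CM on PGNP.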

3. Suppose we estimate an equation for demand for food in India for the period 1922 –
41:
QD = demand for food
PD = food prices
Y = income
𝑄̂D = 92.05 – 0.142PD + 0.236Y
se (5.84) (0.067) (0.031)
R2 = 0.9832
(i) Comment on the above regression. Now if we omit the income variable we
get the following regression:
Qd = 89.97 + 0.107PD
se (11.85) (0.118)
(ii) Comment on the new regression with omitted variable. Do you suspect any
problem?
(iii) If the answer to (ii) above is yes, then what is the nature of the problem?
(iv) What are the consequences of such a problem?

4. Using quarterly for 10 years (n = 40) for the U.S. economy, the following model of
demand for new cars was estimated:
NUMCARSi = B1 + B2PRICEi + B3INCOMEi + B4 INTRATEi + ui
Where
NUMCARS: Number of new car sales per thousand people
Price: New car price index
INCOME: Per capita real disposable income (in$)
INTRATE: Interest rate (in percent)
The table below gives estimates of the coefficients and their standard errors:

Estimate of Coefficient Standard errors

CONSTANT -7.4534 13.5782

PRICE -0.0714 0.0032

INCOME 0.0032 0.0017

INTRATE -0.1537 0.0491

(i) A priori, what are the expected signs of the partial slope coefficients? Are the
results in accordance with these expectations?
(ii) Interpret the various slope coefficients and test whether they are individually
statistically different from zero. Use 10% level of significance.


(iii) The adjusted R squared reported for this model is 0.758. Test the model for
overall goodness of fit at 5% level of significance.
(iv) Suppose unemployment rate is an important determinant of demand for new
cars but is not included in the above regression model. What are the
consequences of omitting this variable?

5. The monthly salary (WAGE, in hundreds of rupees), age (AGE, in years), number of
years of experience (EXP, in years) and number of years of education (EDU) were
obtained for 49 persons in a certain office. The estimated regression of WAGE on the
characteristics of a person was obtained as follows (with t statistics in parentheses):

ŴAGE = 632.244 + 142.510 EDU + 43.225 EXP – 1.913 AGE
(1.493) (4.008) (3.022) (-0.22)
(i) The value of adjusted R2 is R̄2 = 0.277. Using this information, test the model
for overall significance.
(ii) Test the coefficients of EDU and EXP for statistical significance at 1% level
and coefficients for AGE at 10% level.
(iii) Can you rationalize the negative sign for AGE? If someone suggests that AGE
be eliminated, will you follow the suggestion?

6. Suppose the true model is :


Model A : 𝑌̂𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖

The regression results for the model for n = 45 are given below. The figures in
parantheses denote the standard errors.

Model A: 𝑌̂𝑖 = 263.6416 − 0.0112𝑋2𝑖 − 4.4632𝑋3𝑖 .

𝑆𝑒 = (9.5932)(0.0027)(0.2099)

𝑅2 = 0.7897

When 𝑋3𝑖 is regressed on 𝑋2𝑖 , the results obtained are as follows :

𝑋̂3𝑖 = 47.5971 + 0.00512𝑋2𝑖

𝑆𝑒 (0.553)(0.0011) 𝑅2 = 0.0721

If a researcher underfits the model by omitting 𝑋3𝑖 and runs Model B :

𝑌̂𝑖 = 𝛼1 + 𝛼2 𝑋2𝑖 + 𝑣𝑖

What will be the value of the coefficient of X2i in Model B?
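A sketch of the calculation this question points to, again using the omitted-variable result: the coefficient of X2i in the underfitted Model B is a2 = (−0.0112) + (−4.4632)(0.00512) ≈ −0.0341, that is, the direct effect of X2i plus the indirect effect operating through the omitted X3i.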


EXAM STYLE QUESTIONS

1. The following are the regression results for a Cobb-Douglas production function
estimated for Taiwan for the period 1958-1972:

ln Q̂t = 7.8439 + 0.7148 ln Lt + 1.1135 ln Kt

𝑡 = (−0.2011)(4.46642)(3.7214)

Where:

𝑄𝑡 = real gross product, in billion of rupees

𝐿𝑡 = labour input

𝐾𝑡 = capital Input

The slope coefficient in the regression of ln Kt on ln Lt is 0.4875

Suppose the researcher estimates the following mis-specified equation in which capital
input is omitted:

ln Qt = A1 + A2 ln Lt + ut

i. Find the numerical value of E(𝑎2 ) using the information given in the equation,
where 𝑎2 is the OLS estimator of 𝐴2 . Is it biased upward or downward?
ii. What will be the other consequences of estimating this mis-specified equation?
[Eco(h) 2013]

2. A researcher wanted to study the relation between demand for a commodity, 𝒬, in


relation to its price, P, and disposable income, Y, based on 30 observations. The
following regression result is obtained (figures in parentheses are the standard
errors):

Model 1: Q̂i = 92.05 − 0.142Pi + 0.236Yi

(𝑠𝑒) = (5.84)(0.067)(0.031)

Estimate of the error variance: 𝜎̂² = 1.952

However, if income, a relevant and important variable, is omitted from the above model,
then the following regression result is obtained:


Model 2: Q̂i = 89.97 + 0.107Pi

(𝑠𝑒) = (11.85)(0.118)

Estimate of the error variance: 𝜎̂² = 8.058

a) In the context of a specification error committed in Model 2. Explain the concept


of omitted variable bias.
b) From the given regression results. obtain an estimate of slope coefficient in the
regression of omitted variable Y on the included variable P.
c) Compare the consequences of including an irrelevant variable in the model and
excluding a relevant variable from the model. [Eco(h) 2016]

3. The Home ministry of a country wants to test if petty crimes (minor thefts) are higher
in states where poverty rates are high. They obtain data on several variables and ran
the following cross section regression for 35 states in the country.

𝐶̂𝑖 = 6.275 + 0.1147𝑃𝑅𝑖 − 0.0712𝐿𝑅𝑖 + 0.0862𝑆𝐷𝑃𝑖

𝑠𝑒 = (3.125)(0.02713)(0.0361)(0.03834)

𝑛 = 35 𝑅2 = 0.6876

where C = Crimes per lakh of population

PR = Poverty Rates

LR = Literacy Rates

SDP = State Domestic Product

i. A priori what signs are expected for the explanatory variables? Explain your
answers.
ii. Test for overall goodness of fit of the regression (Use a = 5%)
iii. Another model was used and the following results were obtained:
ln Ĉi = 2.142 + 0.01186 ln PRi − 0.548 ln LRi + 0.0921 ln SDPi
se = (1.102) (0.0673) (0.0259) (0.0921)
2
𝑛 = 35 𝑅 = 0.7923
Interpret the coefficient of ln SDP
iv. How will you conduct MacKinnon-White-Davidson (MWD) test to select which
model is better? Write all the steps clearly. [Eco(h) 2022]
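A minimal sketch of the MWD steps in Python (statsmodels is assumed to be available; y and X are hypothetical placeholders for the crime data, with y strictly positive and X the matrix of PR, LR and SDP):

import numpy as np
import statsmodels.api as sm

def mwd_test(y, X):
    # Step 1: linear model;  Step 2: log-log model
    lin = sm.OLS(y, sm.add_constant(X)).fit()
    log = sm.OLS(np.log(y), sm.add_constant(np.log(X))).fit()
    # Step 3: Z1 = ln(fitted Y from the linear model) - fitted lnY from the log model
    z1 = np.log(lin.fittedvalues) - log.fittedvalues        # assumes fitted Y > 0
    # Step 4: add Z1 to the linear model; a significant t on Z1 rejects the linear form
    lin_aug = sm.OLS(y, np.column_stack([sm.add_constant(X), z1])).fit()
    # Step 5: Z2 = antilog of fitted lnY minus fitted Y; add it to the log model;
    # a significant t on Z2 rejects the log-log form
    z2 = np.exp(log.fittedvalues) - lin.fittedvalues
    log_aug = sm.OLS(np.log(y), np.column_stack([sm.add_constant(np.log(X)), z2])).fit()
    return lin_aug.tvalues[-1], log_aug.tvalues[-1]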

4. An individual is hired to determine the best location for the next branch of a famous
family restaurant chain 'Foodies'. The individual decides to build a regression model
to explain the gross sales volume at each of the restaurants in the chain as a function
of various descriptions of the location of that branch. He considers the following
regression (original):

Ŷi = 102,192 − 9075Ni + 0.3547Pi + 1.288Ii

se = (2053) (0.0727) (0.543)

n = 22, R2 = 0.579, RSS = 384.27

Where, Y = gross sales volume, N = the number of competitive restaurants nearby, P =


the population nearby, and I = the average household income nearby.

i. Interpret the slope coefficients of the regression and 𝑅2


ii. Suppose we add another variable A to the regression above where A = address of
the restaurant. Consider the modified regression below:
Ŷi = 98,125 − 8975Ni + 0.3607Pi + 1.301Ii + 58.07Ai
se = (2053) (0.0727) (0.543) (95.21)
n = 22, R2 = 0.0695
Do you think adding a new variable A has improved the fit of the equation?
Why/why not?
iii. Do you suspect a problem in Part (ii) above? What is the problem and what could
be the consequences of the problem? How will you correct for the problem?
iv. How do you conduct Ramsey RESET test to check for the likelihood of specification
error in the model?
v. Suppose that the average household income (I) is not measured correctly. What
are the consequences of this on the properties of the OLS estimators.
[Eco(h) 2022]
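For part (iv), a minimal Python sketch of the RESET mechanics (statsmodels assumed; y and X are placeholders, with X already containing the constant):

import numpy as np
import statsmodels.api as sm

def ramsey_reset(y, X, powers=(2, 3)):
    # Step 1: estimate the original model and keep its fitted values.
    base = sm.OLS(y, X).fit()
    # Step 2: re-estimate with powers of the fitted values added as extra regressors.
    extra = np.column_stack([base.fittedvalues ** p for p in powers])
    aug = sm.OLS(y, np.column_stack([X, extra])).fit()
    # Step 3: F-test the joint significance of the added terms,
    # F = [(RSS_old - RSS_new)/m] / [RSS_new/(n - k_new)].
    m = len(powers)
    f_stat = ((base.ssr - aug.ssr) / m) / (aug.ssr / aug.df_resid)
    return f_stat, m, int(aug.df_resid)    # compare with F(m, n - k_new)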

APPENDIX
Statistical Tables

Table A.1 Cumulative Binomial Probabilities   B(x; n, p) = Σ (y = 0 to x) b(y; n, p)
a. n = 5

0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99

0 .951 .774 .590 .328 .237 .168 .078 .031 .010 .002 .001 .000 .000 .000 .000
1 .999 .977 .919 .737 .633 .528 .337 .188 .087 .031 .016 .007 .000 .000 .000
x 2 1.000 .999 .991 .942 .896 .837 .683 .500 .317 .163 .104 .058 .009 .001 .000
3 1.000 1.000 1.000 .993 .984 .969 .913 .812 .663 .472 .367 .263 .081 .023 .001
4 1.000 1.000 1.000 1.000 .999 .998 .990 .969 .922 .832 .763 .672 .410 .226 .049

b. n = 10

0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99

0 .904 .599 .349 .107 .056 .028 .006 .001 .000 .000 .000 .000 .000 .000 .000
1 .996 .914 .736 .376 .244 .149 .046 .011 .002 .000 .000 .000 .000 .000 .000
2 1.000 .988 .930 .678 .526 .383 .167 .055 .012 .002 .000 .000 .000 .000 .000
3 1.000 .999 .987 .879 .776 .650 .382 .172 .055 .011 .004 .001 .000 .000 .000
4 1.000 1.000 .998 .967 .922 .850 .633 .377 .166 .047 .020 .006 .000 .000 .000
x
5 1.000 1.000 1.000 .994 .980 .953 .834 .623 .367 .150 .078 .033 .002 .000 .000
6 1.000 1.000 1.000 .999 .996 .989 .945 .828 .618 .350 .224 .121 .013 .001 .000
7 1.000 1.000 1.000 1.000 1.000 .998 .988 .945 .833 .617 .474 .322 .070 .012 .000
8 1.000 1.000 1.000 1.000 1.000 1.000 .998 .989 .954 .851 .756 .624 .264 .086 .004
9 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .994 .972 .944 .893 .651 .401 .096

c. n = 15

0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99

0 .860 .463 .206 .035 .013 .005 .000 .000 .000 .000 .000 .000 .000 .000 .000
1 .990 .829 .549 .167 .080 .035 .005 .000 .000 .000 .000 .000 .000 .000 .000
2 1.000 .964 .816 .398 .236 .127 .027 .004 .000 .000 .000 .000 .000 .000 .000
3 1.000 .995 .944 .648 .461 .297 .091 .018 .002 .000 .000 .000 .000 .000 .000
4 1.000 .999 .987 .836 .686 .515 .217 .059 .009 .001 .000 .000 .000 .000 .000
5 1.000 1.000 .998 .939 .852 .722 .403 .151 .034 .004 .001 .000 .000 .000 .000
6 1.000 1.000 1.000 .982 .943 .869 .610 .304 .095 .015 .004 .001 .000 .000 .000
x 7 1.000 1.000 1.000 .996 .983 .950 .787 .500 .213 .050 .017 .004 .000 .000 .000
8 1.000 1.000 1.000 .999 .996 .985 .905 .696 .390 .131 .057 .018 .000 .000 .000
9 1.000 1.000 1.000 1.000 .999 .996 .966 .849 .597 .278 .148 .061 .002 .000 .000
10 1.000 1.000 1.000 1.000 1.000 .999 .991 .941 .783 .485 .314 .164 .013 .001 .000
11 1.000 1.000 1.000 1.000 1.000 1.000 .998 .982 .909 .703 .539 .352 .056 .005 .000
12 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .996 .973 .873 .764 .602 .184 .036 .000
13 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .995 .965 .920 .833 .451 .171 .010
14 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .995 .987 .965 .794 .537 .140

(continued )


Table A.1 Cumulative Binomial Probabilities (cont.)   B(x; n, p) = Σ (y = 0 to x) b(y; n, p)
d. n = 20

0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99

0 .818 .358 .122 .012 .003 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000
1 .983 .736 .392 .069 .024 .008 .001 .000 .000 .000 .000 .000 .000 .000 .000
2 .999 .925 .677 .206 .091 .035 .004 .000 .000 .000 .000 .000 .000 .000 .000
3 1.000 .984 .867 .411 .225 .107 .016 .001 .000 .000 .000 .000 .000 .000 .000
4 1.000 .997 .957 .630 .415 .238 .051 .006 .000 .000 .000 .000 .000 .000 .000
5 1.000 1.000 .989 .804 .617 .416 .126 .021 .002 .000 .000 .000 .000 .000 .000
6 1.000 1.000 .998 .913 .786 .608 .250 .058 .006 .000 .000 .000 .000 .000 .000
7 1.000 1.000 1.000 .968 .898 .772 .416 .132 .021 .001 .000 .000 .000 .000 .000
8 1.000 1.000 1.000 .990 .959 .887 .596 .252 .057 .005 .001 .000 .000 .000 .000
9 1.000 1.000 1.000 .997 .986 .952 .755 .412 .128 .017 .004 .001 .000 .000 .000
x
10 1.000 1.000 1.000 .999 .996 .983 .872 .588 .245 .048 .014 .003 .000 .000 .000
11 1.000 1.000 1.000 1.000 .999 .995 .943 .748 .404 .113 .041 .010 .000 .000 .000
12 1.000 1.000 1.000 1.000 1.000 .999 .979 .868 .584 .228 .102 .032 .000 .000 .000
13 1.000 1.000 1.000 1.000 1.000 1.000 .994 .942 .750 .392 .214 .087 .002 .000 .000
14 1.000 1.000 1.000 1.000 1.000 1.000 .998 .979 .874 .584 .383 .196 .011 .000 .000
15 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .994 .949 .762 .585 .370 .043 .003 .000
16 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .984 .893 .775 .589 .133 .016 .000
17 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .996 .965 .909 .794 .323 .075 .001
18 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .992 .976 .931 .608 .264 .017
19 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .997 .988 .878 .642 .182

(continued)


Table A.1 Cumulative Binomial Probabilities (cont.)   B(x; n, p) = Σ (y = 0 to x) b(y; n, p)
e. n = 25

0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99

0 .778 .277 .072 .004 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
1 .974 .642 .271 .027 .007 .002 .000 .000 .000 .000 .000 .000 .000 .000 .000
2 .998 .873 .537 .098 .032 .009 .000 .000 .000 .000 .000 .000 .000 .000 .000
3 1.000 .966 .764 .234 .096 .033 .002 .000 .000 .000 .000 .000 .000 .000 .000
4 1.000 .993 .902 .421 .214 .090 .009 .000 .000 .000 .000 .000 .000 .000 .000
5 1.000 .999 .967 .617 .378 .193 .029 .002 .000 .000 .000 .000 .000 .000 .000
6 1.000 1.000 .991 .780 .561 .341 .074 .007 .000 .000 .000 .000 .000 .000 .000
7 1.000 1.000 .998 .891 .727 .512 .154 .022 .001 .000 .000 .000 .000 .000 .000
8 1.000 1.000 1.000 .953 .851 .677 .274 .054 .004 .000 .000 .000 .000 .000 .000
9 1.000 1.000 1.000 .983 .929 .811 .425 .115 .013 .000 .000 .000 .000 .000 .000
10 1.000 1.000 1.000 .994 .970 .902 .586 .212 .034 .002 .000 .000 .000 .000 .000
11 1.000 1.000 1.000 .998 .980 .956 .732 .345 .078 .006 .001 .000 .000 .000 .000
x 12 1.000 1.000 1.000 1.000 .997 .983 .846 .500 .154 .017 .003 .000 .000 .000 .000
13 1.000 1.000 1.000 1.000 .999 .994 .922 .655 .268 .044 .020 .002 .000 .000 .000
14 1.000 1.000 1.000 1.000 1.000 .998 .966 .788 .414 .098 .030 .006 .000 .000 .000
15 1.000 1.000 1.000 1.000 1.000 1.000 .987 .885 .575 .189 .071 .017 .000 .000 .000
16 1.000 1.000 1.000 1.000 1.000 1.000 .996 .946 .726 .323 .149 .047 .000 .000 .000
17 1.000 1.000 1.000 1.000 1.000 1.000 .999 .978 .846 .488 .273 .109 .002 .000 .000
18 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .993 .926 .659 .439 .220 .009 .000 .000
19 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .971 .807 .622 .383 .033 .001 .000
20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .991 .910 .786 .579 .098 .007 .000
21 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .967 .904 .766 .236 .034 .000
22 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .991 .968 .902 .463 .127 .002
23 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .993 .973 .729 .358 .026
24 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .996 .928 .723 .222

Table A.2 Cumulative Poisson Probabilities   F(x; μ) = Σ (y = 0 to x) e^(−μ) μ^y / y!

.1 .2 .3 .4 .5 .6 .7 .8 .9 1.0

0 .905 .819 .741 .670 .607 .549 .497 .449 .407 .368
1 .995 .982 .963 .938 .910 .878 .844 .809 .772 .736
2 1.000 .999 .996 .992 .986 .977 .966 .953 .937 .920
x 3 1.000 1.000 .999 .998 .997 .994 .991 .987 .981
4 1.000 1.000 1.000 .999 .999 .998 .996
5 1.000 1.000 1.000 .999
6 1.000

(continued)


Table A.2 Cumulative Poisson Probabilities (cont.)   F(x; μ) = Σ (y = 0 to x) e^(−μ) μ^y / y!

2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 15.0 20.0

0 .135 .050 .018 .007 .002 .001 .000 .000 .000 .000 .000
1 .406 .199 .092 .040 .017 .007 .003 .001 .000 .000 .000
2 .677 .423 .238 .125 .062 .030 .014 .006 .003 .000 .000
3 .857 .647 .433 .265 .151 .082 .042 .021 .010 .000 .000
4 .947 .815 .629 .440 .285 .173 .100 .055 .029 .001 .000
5 .983 .916 .785 .616 .446 .301 .191 .116 .067 .003 .000
6 .995 .966 .889 .762 .606 .450 .313 .207 .130 .008 .000
7 .999 .988 .949 .867 .744 .599 .453 .324 .220 .018 .001
8 1.000 .996 .979 .932 .847 .729 .593 .456 .333 .037 .002
9 .999 .992 .968 .916 .830 .717 .587 .458 .070 .005
10 1.000 .997 .986 .957 .901 .816 .706 .583 .118 .011
11 .999 .995 .980 .947 .888 .803 .697 .185 .021
12 1.000 .998 .991 .973 .936 .876 .792 .268 .039
13 .999 .996 .987 .966 .926 .864 .363 .066
14 1.000 .999 .994 .983 .959 .917 .466 .105
15 .999 .998 .992 .978 .951 .568 .157
16 1.000 .999 .996 .989 .973 .664 .221
17 1.000 .998 .995 .986 .749 .297
18 .999 .998 .993 .819 .381
x
19 1.000 .999 .997 .875 .470
20 1.000 .998 .917 .559
21 .999 .947 .644
22 1.000 .967 .721
23 .981 .787
24 .989 .843
25 .994 .888
26 .997 .922
27 .998 .948
28 .999 .966
29 1.000 .978
30 .987
31 .992
32 .995
33 .997
34 .999
35 .999
36 1.000


Table A.3 Standard Normal Curve Areas   Φ(z) = P(Z ≤ z)
[Figure: standard normal density curve; the shaded area to the left of z equals Φ(z)]

z .00 .01 .02 .03 .04 .05 .06 .07 .08 .09

!3.4 .0003 .0003 .0003 .0003 .0003 .0003 .0003 .0003 .0003 .0002
!3.3 .0005 .0005 .0005 .0004 .0004 .0004 .0004 .0004 .0004 .0003
!3.2 .0007 .0007 .0006 .0006 .0006 .0006 .0006 .0005 .0005 .0005
!3.1 .0010 .0009 .0009 .0009 .0008 .0008 .0008 .0008 .0007 .0007
!3.0 .0013 .0013 .0013 .0012 .0012 .0011 .0011 .0011 .0010 .0010
!2.9 .0019 .0018 .0017 .0017 .0016 .0016 .0015 .0015 .0014 .0014
!2.8 .0026 .0025 .0024 .0023 .0023 .0022 .0021 .0021 .0020 .0019
!2.7 .0035 .0034 .0033 .0032 .0031 .0030 .0029 .0028 .0027 .0026
!2.6 .0047 .0045 .0044 .0043 .0041 .0040 .0039 .0038 .0037 .0036
!2.5 .0062 .0060 .0059 .0057 .0055 .0054 .0052 .0051 .0049 .0048
!2.4 .0082 .0080 .0078 .0075 .0073 .0071 .0069 .0068 .0066 .0064
!2.3 .0107 .0104 .0102 .0099 .0096 .0094 .0091 .0089 .0087 .0084
!2.2 .0139 .0136 .0132 .0129 .0125 .0122 .0119 .0116 .0113 .0110
!2.1 .0179 .0174 .0170 .0166 .0162 .0158 .0154 .0150 .0146 .0143
!2.0 .0228 .0222 .0217 .0212 .0207 .0202 .0197 .0192 .0188 .0183
!1.9 .0287 .0281 .0274 .0268 .0262 .0256 .0250 .0244 .0239 .0233
!1.8 .0359 .0352 .0344 .0336 .0329 .0322 .0314 .0307 .0301 .0294
!1.7 .0446 .0436 .0427 .0418 .0409 .0401 .0392 .0384 .0375 .0367
!1.6 .0548 .0537 .0526 .0516 .0505 .0495 .0485 .0475 .0465 .0455
!1.5 .0668 .0655 .0643 .0630 .0618 .0606 .0594 .0582 .0571 .0559
!1.4 .0808 .0793 .0778 .0764 .0749 .0735 .0722 .0708 .0694 .0681
!1.3 .0968 .0951 .0934 .0918 .0901 .0885 .0869 .0853 .0838 .0823
!1.2 .1151 .1131 .1112 .1093 .1075 .1056 .1038 .1020 .1003 .0985
!1.1 .1357 .1335 .1314 .1292 .1271 .1251 .1230 .1210 .1190 .1170
!1.0 .1587 .1562 .1539 .1515 .1492 .1469 .1446 .1423 .1401 .1379
!0.9 .1841 .1814 .1788 .1762 .1736 .1711 .1685 .1660 .1635 .1611
!0.8 .2119 .2090 .2061 .2033 .2005 .1977 .1949 .1922 .1894 .1867
!0.7 .2420 .2389 .2358 .2327 .2296 .2266 .2236 .2206 .2177 .2148
!0.6 .2743 .2709 .2676 .2643 .2611 .2578 .2546 .2514 .2483 .2451
!0.5 .3085 .3050 .3015 .2981 .2946 .2912 .2877 .2843 .2810 .2776
!0.4 .3446 .3409 .3372 .3336 .3300 .3264 .3228 .3192 .3156 .3121
!0.3 .3821 .3783 .3745 .3707 .3669 .3632 .3594 .3557 .3520 .3482
!0.2 .4207 .4168 .4129 .4090 .4052 .4013 .3974 .3936 .3897 .3859
!0.1 .4602 .4562 .4522 .4483 .4443 .4404 .4364 .4325 .4286 .4247
!0.0 .5000 .4960 .4920 .4880 .4840 .4801 .4761 .4721 .4681 .4641
(continued)


Table A.3 Standard Normal Curve Areas (cont.)   Φ(z) = P(Z ≤ z)

z .00 .01 .02 .03 .04 .05 .06 .07 .08 .09

0.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
0.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
0.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
0.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
0.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
0.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
0.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
0.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
0.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
0.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9278 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998


Table A.5 Critical Values for t Distributions   (entries are tα,ν, the point with upper-tail area α under the t density curve with ν df)
[Figure: tν density curve; shaded area = α to the right of tα,ν]

ν \ α .10 .05 .025 .01 .005 .001 .0005

1 3.078 6.314 12.706 31.821 63.657 318.31 636.62


2 1.886 2.920 4.303 6.965 9.925 22.326 31.598
3 1.638 2.353 3.182 4.541 5.841 10.213 12.924
4 1.533 2.132 2.776 3.747 4.604 7.173 8.610
5 1.476 2.015 2.571 3.365 4.032 5.893 6.869
6 1.440 1.943 2.447 3.143 3.707 5.208 5.959
7 1.415 1.895 2.365 2.998 3.499 4.785 5.408
8 1.397 1.860 2.306 2.896 3.355 4.501 5.041
9 1.383 1.833 2.262 2.821 3.250 4.297 4.781
10 1.372 1.812 2.228 2.764 3.169 4.144 4.587
11 1.363 1.796 2.201 2.718 3.106 4.025 4.437
12 1.356 1.782 2.179 2.681 3.055 3.930 4.318
13 1.350 1.771 2.160 2.650 3.012 3.852 4.221
14 1.345 1.761 2.145 2.624 2.977 3.787 4.140
15 1.341 1.753 2.131 2.602 2.947 3.733 4.073
16 1.337 1.746 2.120 2.583 2.921 3.686 4.015
17 1.333 1.740 2.110 2.567 2.898 3.646 3.965
18 1.330 1.734 2.101 2.552 2.878 3.610 3.922
19 1.328 1.729 2.093 2.539 2.861 3.579 3.883
20 1.325 1.725 2.086 2.528 2.845 3.552 3.850
21 1.323 1.721 2.080 2.518 2.831 3.527 3.819
22 1.321 1.717 2.074 2.508 2.819 3.505 3.792
23 1.319 1.714 2.069 2.500 2.807 3.485 3.767
24 1.318 1.711 2.064 2.492 2.797 3.467 3.745
25 1.316 1.708 2.060 2.485 2.787 3.450 3.725
26 1.315 1.706 2.056 2.479 2.779 3.435 3.707
27 1.314 1.703 2.052 2.473 2.771 3.421 3.690
28 1.313 1.701 2.048 2.467 2.763 3.408 3.674
29 1.311 1.699 2.045 2.462 2.756 3.396 3.659
30 1.310 1.697 2.042 2.457 2.750 3.385 3.646
32 1.309 1.694 2.037 2.449 2.738 3.365 3.622
34 1.307 1.691 2.032 2.441 2.728 3.348 3.601
36 1.306 1.688 2.028 2.434 2.719 3.333 3.582
38 1.304 1.686 2.024 2.429 2.712 3.319 3.566
40 1.303 1.684 2.021 2.423 2.704 3.307 3.551
50 1.299 1.676 2.009 2.403 2.678 3.262 3.496
60 1.296 1.671 2.000 2.390 2.660 3.232 3.460
120 1.289 1.658 1.980 2.358 2.617 3.160 3.373
∞ 1.282 1.645 1.960 2.326 2.576 3.090 3.291

Table A.6 Tolerance Critical Values for Normal Population Distributions (left-hand column gives the sample size n)

Two-sided Intervals One-sided Intervals


Confidence Level 95% 99% 95% 99%
% of Population Captured 90% 95% 99% 90% 95% 99% 90% 95% 99% 90% 95% 99%

2 32.019 37.674 48.430 160.193 188.491 242.300 20.581 26.260 37.094 103.029 131.426 185.617
3 8.380 9.916 12.861 18.930 22.401 29.055 6.156 7.656 10.553 13.995 17.370 23.896
4 5.369 6.370 8.299 9.398 11.150 14.527 4.162 5.144 7.042 7.380 9.083 12.387
5 4.275 5.079 6.634 6.612 7.855 10.260 3.407 4.203 5.741 5.362 6.578 8.939
6 3.712 4.414 5.775 5.337 6.345 8.301 3.006 3.708 5.062 4.411 5.406 7.335
7 3.369 4.007 5.248 4.613 5.488 7.187 2.756 3.400 4.642 3.859 4.728 6.412
8 3.136 3.732 4.891 4.147 4.936 6.468 2.582 3.187 4.354 3.497 4.285 5.812
9 2.967 3.532 4.631 3.822 4.550 5.966 2.454 3.031 4.143 3.241 3.972 5.389
10 2.839 3.379 4.433 3.582 4.265 5.594 2.355 2.911 3.981 3.048 3.738 5.074
11 2.737 3.259 4.277 3.397 4.045 5.308 2.275 2.815 3.852 2.898 3.556 4.829
12 2.655 3.162 4.150 3.250 3.870 5.079 2.210 2.736 3.747 2.777 3.410 4.633
13 2.587 3.081 4.044 3.130 3.727 4.893 2.155 2.671 3.659 2.677 3.290 4.472
14 2.529 3.012 3.955 3.029 3.608 4.737 2.109 2.615 3.585 2.593 3.189 4.337
15 2.480 2.954 3.878 2.945 3.507 4.605 2.068 2.566 3.520 2.522 3.102 4.222
16 2.437 2.903 3.812 2.872 3.421 4.492 2.033 2.524 3.464 2.460 3.028 4.123
17 2.400 2.858 3.754 2.808 3.345 4.393 2.002 2.486 3.414 2.405 2.963 4.037
18 2.366 2.819 3.702 2.753 3.279 4.307 1.974 2.453 3.370 2.357 2.905 3.960

19 2.337 2.784 3.656 2.703 3.221 4.230 1.949 2.423 3.331 2.314 2.854 3.892
20 2.310 2.752 3.615 2.659 3.168 4.161 1.926 2.396 3.295 2.276 2.808 3.832
25 2.208 2.631 3.457 2.494 2.972 3.904 1.838 2.292 3.158 2.129 2.633 3.601
30 2.140 2.549 3.350 2.385 2.841 3.733 1.777 2.220 3.064 2.030 2.516 3.447
35 2.090 2.490 3.272 2.306 2.748 3.611 1.732 2.167 2.995 1.957 2.430 3.334
40 2.052 2.445 3.213 2.247 2.677 3.518 1.697 2.126 2.941 1.902 2.364 3.249
45 2.021 2.408 3.165 2.200 2.621 3.444 1.669 2.092 2.898 1.857 2.312 3.180
50 1.996 2.379 3.126 2.162 2.576 3.385 1.646 2.065 2.863 1.821 2.269 3.125
60 1.958 2.333 3.066 2.103 2.506 3.293 1.609 2.022 2.807 1.764 2.202 3.038
70 1.929 2.299 3.021 2.060 2.454 3.225 1.581 1.990 2.765 1.722 2.153 2.974
80 1.907 2.272 2.986 2.026 2.414 3.173 1.559 1.965 2.733 1.688 2.114 2.924
90 1.889 2.251 2.958 1.999 2.382 3.130 1.542 1.944 2.706 1.661 2.082 2.883
100 1.874 2.233 2.934 1.977 2.355 3.096 1.527 1.927 2.684 1.639 2.056 2.850
150 1.825 2.175 2.859 1.905 2.270 2.983 1.478 1.870 2.611 1.566 1.971 2.741
200 1.798 2.143 2.816 1.865 2.222 2.921 1.450 1.837 2.570 1.524 1.923 2.679
250 1.780 2.121 2.788 1.839 2.191 2.880 1.431 1.815 2.542 1.496 1.891 2.638
300 1.767 2.106 2.767 1.820 2.169 2.850 1.417 1.800 2.522 1.476 1.868 2.608
∞ 1.645 1.960 2.576 1.645 1.960 2.576 1.282 1.645 2.326 1.282 1.645 2.326


Table A.7 Critical Values for Chi-Squared Distributions   (entries are χ²α,ν, the point with upper-tail area α under the χ² density curve with ν df)
[Figure: χ²ν density curve; shaded area = α to the right of χ²α,ν]

ν \ α .995 .99 .975 .95 .90 .10 .05 .025 .01 .005

1 0.000 0.000 0.001 0.004 0.016 2.706 3.843 5.025 6.637 7.882
2 0.010 0.020 0.051 0.103 0.211 4.605 5.992 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.344 12.837
4 0.207 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.832 15.085 16.748
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.440 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.012 18.474 20.276
8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.534 20.090 21.954
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.022 21.665 23.587
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.724 26.755
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.041 19.812 22.362 24.735 27.687 29.817
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.600 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.577 32.799
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.407 7.564 8.682 10.085 24.769 27.587 30.190 33.408 35.716
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.843 7.632 8.906 10.117 11.651 27.203 30.143 32.852 36.190 38.580
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.033 8.897 10.283 11.591 13.240 29.615 32.670 35.478 38.930 41.399
22 8.643 9.542 10.982 12.338 14.042 30.813 33.924 36.781 40.289 42.796
23 9.260 10.195 11.688 13.090 14.848 32.007 35.172 38.075 41.637 44.179
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.558
25 10.519 11.523 13.120 14.611 16.473 34.381 37.652 40.646 44.313 46.925
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.807 12.878 14.573 16.151 18.114 36.741 40.113 43.194 46.962 49.642
28 12.461 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.993
29 13.120 14.256 16.147 17.708 19.768 39.087 42.557 45.772 49.586 52.333
30 13.787 14.954 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
31 14.457 15.655 17.538 19.280 21.433 41.422 44.985 48.231 52.190 55.000
32 15.134 16.362 18.291 20.072 22.271 42.585 46.194 49.480 53.486 56.328
33 15.814 17.073 19.046 20.866 23.110 43.745 47.400 50.724 54.774 57.646
34 16.501 17.789 19.806 21.664 23.952 44.903 48.602 51.966 56.061 58.964
35 17.191 18.508 20.569 22.465 24.796 46.059 49.802 53.203 57.340 60.272
36 17.887 19.233 21.336 23.269 25.643 47.212 50.998 54.437 58.619 61.581
37 18.584 19.960 22.105 24.075 26.492 48.363 52.192 55.667 59.891 62.880
38 19.289 20.691 22.878 24.884 27.343 49.513 53.384 56.896 61.162 64.181
39 19.994 21.425 23.654 25.695 28.196 50.660 54.572 58.119 62.426 65.473
40 20.706 22.164 24.433 26.509 29.050 51.805 55.758 59.342 63.691 66.766

For ν > 40, χ²α,ν ≈ ν ( 1 − 2/(9ν) + zα √(2/(9ν)) )³

Table A.8 t Curve Tail Areas

[Figure: t curve; each table entry is the area under the curve to the right of t]

t \ ν 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

0.0 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500
0.1 .468 .465 .463 .463 .462. .462 .462 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461
0.2 .437 .430 .427 .426 .425 .424 .424 .423 .423 .423 .423 .422 .422 .422 .422 .422 .422 .422
0.3 .407 .396 .392 .390 .388 .387 .386 .386 .386 .385 .385 .385 .384 .384 .384 .384 .384 .384
0.4 .379 .364 .358 .355 .353 .352 .351 .350 .349 .349 .348 .348 .348 .347 .347 .347 .347 .347
0.5 .352 .333 .326 .322 .319 .317 .316 .315 .315 .314 .313 .313 .313 .312 .312 .312 .312 .312
0.6 .328 .305 .295 .290 .287 .285 .284 .283 .282 .281 .280 .280 .279 .279 .279 .278 .278 .278
0.7 .306 .278 .267 .261 .258 .255 .253 .252 .251 .250 .249 .249 .248 .247 .247 .247 .247 .246
0.8 .285 .254 .241 .234 .230 .227 .225 .223 .222 .221 .220 .220 .219 .218 .218 .218 .217 .217
0.9 .267 .232 .217 .210 .205 .201 .199 .197 .196 .195 .194 .193 .192 .191 .191 .191 .190 .190
1.0 .250 .211 .196 .187 .182 .178 .175 .173 .172 .170 .169 .169 .168 .167 .167 .166 .166 .165
1.1 .235 .193 .176 .167 .162 .157 .154 .152 .150 .149 .147 .146 .146 .144 .144 .144 .143 .143
1.2 .221 .177 .158 .148 .142 .138 .135 .132 .130 .129 .128 .127 .126 .124 .124 .124 .123 .123
1.3 .209 .162 .142 .132 .125 .121 .117 .115 .113 .111 .110 .109 .108 .107 .107 .106 .105 .105
1.4 .197 .148 .128 .117 .110 .106 .102 .100 .098 .096 .095 .093 .092 .091 .091 .090 .090 .089
1.5 .187 .136 .115 .104 .097 .092 .089 .086 .084 .082 .081 .080 .079 .077 .077 .077 .076 .075
1.6 .178 .125 .104 .092 .085 .080 .077 .074 .072 .070 .069 .068 .067 .065 .065 .065 .064 .064
1.7 .169 .116 .094 .082 .075 .070 .065 .064 .062 .060 .059 .057 .056 .055 .055 .054 .054 .053
1.8 .161 .107 .085 .073 .066 .061 .057 .055 .053 .051 .050 .049 .048 .046 .046 .045 .045 .044
1.9 .154 .099 .077 .065 .058 .053 .050 .047 .045 .043 .042 .041 .040 .038 .038 .038 .037 .037
2.0 .148 .092 .070 .058 .051 .046 .043 .040 .038 .037 .035 .034 .033 .032 .032 .031 .031 .030
2.1 .141 .085 .063 .052 .045 .040 .037 .034 .033 .031 .030 .029 .028 .027 .027 .026 .025 .025
2.2 .136 .079 .058 .046 .040 .035 .032 .029 .028 .026 .025 .024 .023 .022 .022 .021 .021 .021
2.3 .131 .074 .052 .041 .035 .031 .027 .025 .023 .022 .021 .020 .019 .018 .018 .018 .017 .017
2.4 .126 .069 .048 .037 .031 .027 .024 .022 .020 .019 .018 .017 .016 .015 .015 .014 .014 .014
2.5 .121 .065 .044 .033 .027 .023 .020 .018 .017 .016 .015 .014 .013 .012 .012 .012 .011 .011
2.6 .117 .061 .040 .030 .024 .020 .018 .016 .014 .013 .012 .012 .011 .010 .010 .010 .009 .009
2.7 .113 .057 .037 .027 .021 .018 .015 .014 .012 .011 .010 .010 .009 .008 .008 .008 .008 .007
2.8 .109 .054 .034 .024 .019 .016 .013 .012 .010 .009 .009 .008 .008 .007 .007 .006 .006 .006
2.9 .106 .051 .031 .022 .017 .014 .011 .010 .009 .008 .007 .007 .006 .005 .005 .005 .005 .005
3.0 .102 .048 .029 .020 .015 .012 .010 .009 .007 .007 .006 .006 .005 .004 .004 .004 .004 .004
3.1 .099 .045 .027 .018 .013 .011 .009 .007 .006 .006 .005 .005 .004 .004 .004 .003 .003 .003
3.2 .096 .043 .025 .016 .012 .009 .008 .006 .005 .005 .004 .004 .003 .003 .003 .003 .003 .002
3.3 .094 .040 .023 .015 .011 .008 .007 .005 .005 .004 .004 .003 .003 .002 .002 .002 .002 .002
3.4 .091 .038 .021 .014 .010 .007 .006 .005 .004 .003 .003 .003 .002 .002 .002 .002 .002 .002
3.5 .089 .036 .020 .012 .009 .006 .005 .004 .003 .003 .002 .002 .002 .002 .002 .001 .001 .001
3.6 .086 .035 .018 .011 .008 .006 .004 .004 .003 .002 .002 .002 .002 .001 .001 .001 .001 .001
3.7 .084 .033 .017 .010 .007 .005 .004 .003 .002 .002 .002 .002 .001 .001 .001 .001 .001 .001
3.8 .082 .031 .016 .010 .006 .004 .003 .003 .002 .002 .001 .001 .001 .001 .001 .001 .001 .001
3.9 .080 .030 .015 .009 .006 .004 .003 .002 .002 .001 .001 .001 .001 .001 .001 .001 .001 .001
4.0 .078 .029 .014 .008 .005 .004 .003 .002 .002 .001 .001 .001 .001 .001 .001 .001 .000 .000
(continued)


Table A.8 t Curve Tail Areas (cont.)

[Figure: t curve; each table entry is the area under the curve to the right of t]

t \ ν 19 20 21 22 23 24 25 26 27 28 29 30 35 40 60 120 ∞ (= z)

0.0 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500
0.1 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .460 .460 .460 .460 .460
0.2 .422 .422 .422 .422 .422 .422 .422 .422 .421 .421 .421 .421 .421 .421 .421 .421 .421
0.3 .384 .384 .384 .383 .383 .383 .383 .383 .383 .383 .383 .383 .383 .383 .383 .382 .382
0.4 .347 .347 .347 .347 .346 .346 .346 .346 .346 .346 .346 .346 .346 .346 .345 .345 .345
0.5 .311 .311 .311 .311 .311 .311 .311 .311 .311 .310 .310 .310 .310 .310 .309 .309 .309
0.6 .278 .278 .278 .277 .277 .277 .277 .277 .277 .277 .277 .277 .276 .276 .275 .275 .274
0.7 .246 .246 .246 .246 .245 .245 .245 .245 .245 .245 .245 .245 .244 .244 .243 .243 .242
0.8 .217 .217 .216 .216 .216 .216 .216 .215 .215 .215 .215 .215 .215 .214 .213 .213 .212
0.9 .190 .189 .189 .189 .189 .189 .188 .188 .188 .188 .188 .188 .187 .187 .186 .185 .184
1.0 .165 .165 .164 .164 .164 .164 .163 .163 .163 .163 .163 .163 .162 .162 .161 .160 .159
1.1 .143 .142 .142 .142 .141 .141 .141 .141 .141 .140 .140 .140 .139 .139 .138 .137 .136
1.2 .122 .122 .122 .121 .121 .121 .121 .120 .120 .120 .120 .120 .119 .119 .117 .116 .115
1.3 .105 .104 .104 .104 .103 .103 .103 .103 .102 .102 .102 .102 .101 .101 .099 .098 .097
1.4 .089 .089 .088 .088 .087 .087 .087 .087 .086 .086 .086 .086 .085 .085 .083 .082 .081
1.5 .075 .075 .074 .074 .074 .073 .073 .073 .073 .072 .072 .072 .071 .071 .069 .068 .067
1.6 .063 .063 .062 .062 .062 .061 .061 .061 .061 .060 .060 .060 .059 .059 .057 .056 .055
1.7 .053 .052 .052 .052 .051 .051 .051 .051 .050 .050 .050 .050 .049 .048 .047 .046 .045
1.8 .044 .043 .043 .043 .042 .042 .042 .042 .042 .041 .041 .041 .040 .040 .038 .037 .036
1.9 .036 .036 .036 .035 .035 .035 .035 .034 .034 .034 .034 .034 .033 .032 .031 .030 .029
2.0 .030 .030 .029 .029 .029 .028 .028 .028 .028 .028 .027 .027 .027 .026 .025 .024 .023
2.1 .025 .024 .024 .024 .023 .023 .023 .023 .023 .022 .022 .022 .022 .021 .020 .019 .018
2.2 .020 .020 .020 .019 .019 .019 .019 .018 .018 .018 .018 .018 .017 .017 .016 .015 .014
2.3 .016 .016 .016 .016 .015 .015 .015 .015 .015 .015 .014 .014 .014 .013 .012 .012 .011
2.4 .013 .013 .013 .013 .012 .012 .012 .012 .012 .012 .012 .011 .011 .011 .010 .009 .008
2.5 .011 .011 .010 .010 .010 .010 .010 .010 .009 .009 .009 .009 .009 .008 .008 .007 .006
2.6 .009 .009 .008 .008 .008 .008 .008 .008 .007 .007 .007 .007 .007 .007 .006 .005 .005
2.7 .007 .007 .007 .007 .006 .006 .006 .006 .006 .006 .006 .006 .005 .005 .004 .004 .003
2.8 .006 .006 .005 .005 .005 .005 .005 .005 .005 .005 .005 .004 .004 .004 .003 .003 .003
2.9 .005 .004 .004 .004 .004 .004 .004 .004 .004 .004 .004 .003 .003 .003 .003 .002 .002
3.0 .004 .004 .003 .003 .003 .003 .003 .003 .003 .003 .003 .003 .002 .002 .002 .002 .001
3.1 .003 .003 .003 .003 .003 .002 .002 .002 .002 .002 .002 .002 .002 .002 .001 .001 .001
3.2 .002 .002 .002 .002 .002 .002 .002 .002 .002 .002 .002 .002 .001 .001 .001 .001 .001
3.3 .002 .002 .002 .002 .002 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000
3.4 .002 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000
3.5 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000 .000
3.6 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000 .000 .000 .000
3.7 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000
3.8 .001 .001 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
3.9 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
4.0 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000


Table A.9 Critical Values for F Distributions

ν1 = numerator df

α \ ν1 1 2 3 4 5 6 7 8 9

.100 39.86 49.50 53.59 55.83 57.24 58.20 58.91 59.44 59.86
.050 161.45 199.50 215.71 224.58 230.16 233.99 236.77 238.88 240.54
1
.010 4052.20 4999.50 5403.40 5624.60 5763.60 5859.00 5928.40 5981.10 6022.50
.001 405,284 500,000 540,379 562,500 576,405 585,937 592,873 598,144 602,284
.100 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38
.050 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38
2
.010 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39
.001 998.50 999.00 999.17 999.25 999.30 999.33 999.36 999.37 999.39
.100 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24
.050 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81
3
.010 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35
.001 167.03 148.50 141.11 137.10 134.58 132.85 131.58 130.62 129.86
.100 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94
.050 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00
4
.010 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66
.001 74.14 61.25 56.18 53.44 51.71 50.53 49.66 49.00 48.47
.100 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32
.050 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77
5
.010 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16
.001 47.18 37.12 33.20 31.09 29.75 28.83 28.16 27.65 27.24
.100 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96
(ν2 = denominator df)

.050 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10
6
.010 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98
.001 35.51 27.00 23.70 21.92 20.80 20.03 19.46 19.03 18.69
.100 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72
.050 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68
7
.010 12.25 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72
.001 29.25 21.69 18.77 17.20 16.21 15.52 15.02 14.63 14.33
.100 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56
.050 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39
8
.010 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91
.001 25.41 18.49 15.83 14.39 13.48 12.86 12.40 12.05 11.77
.100 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44
.050 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18
9
.010 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35
.001 22.86 16.39 13.90 12.56 11.71 11.13 10.70 10.37 10.11
.100 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35
.050 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02
10
.010 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94
.001 21.04 14.91 12.55 11.28 10.48 9.93 9.52 9.20 8.96
.100 3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27
.050 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90
11
.010 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63
.001 19.69 13.81 11.56 10.35 9.58 9.05 8.66 8.35 8.12
.100 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21
.050 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80
12
.010 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39
.001 18.64 12.97 10.80 9.63 8.89 8.38 8.00 7.71 7.48
(continued)

Table A.9 Critical Values for F Distributions (cont.)

ν1 = numerator df

10 12 15 20 25 30 40 50 60 120 1000

60.19 60.71 61.22 61.74 62.05 62.26 62.53 62.69 62.79 63.06 63.30
241.88 243.91 245.95 248.01 249.26 250.10 251.14 251.77 252.20 253.25 254.19
6055.80 6106.30 6157.30 6208.70 6239.80 6260.60 6286.80 6302.50 6313.00 6339.40 6362.70
605,621 610,668 615,764 620,908 624,017 626,099 628,712 630,285 631,337 633,972 636,301
9.39 9.41 9.42 9.44 9.45 9.46 9.47 9.47 9.47 9.48 9.49
19.40 19.41 19.43 19.45 19.46 19.46 19.47 19.48 19.48 19.49 19.49
99.40 99.42 99.43 99.45 99.46 99.47 99.47 99.48 99.48 99.49 99.50
999.40 999.42 999.43 999.45 999.46 999.47 999.47 999.48 999.48 999.49 999.50
5.23 5.22 5.20 5.18 5.17 5.17 5.16 5.15 5.15 5.14 5.13
8.79 8.74 8.70 8.66 8.63 8.62 8.59 8.58 8.57 8.55 8.53
27.23 27.05 26.87 26.69 26.58 26.50 26.41 26.35 26.32 26.22 26.14
129.25 128.32 127.37 126.42 125.84 125.45 124.96 124.66 124.47 123.97 123.53
3.92 3.90 3.87 3.84 3.83 3.82 3.80 3.80 3.79 3.78 3.76
5.96 5.91 5.86 5.80 5.77 5.75 5.72 5.70 5.69 5.66 5.63
14.55 14.37 14.20 14.02 13.91 13.84 13.75 13.69 13.65 13.56 13.47
48.05 47.41 46.76 46.10 45.70 45.43 45.09 44.88 44.75 44.40 44.09
3.30 3.27 3.24 3.21 3.19 3.17 3.16 3.15 3.14 3.12 3.11
4.74 4.68 4.62 4.56 4.52 4.50 4.46 4.44 4.43 4.40 4.37
10.05 9.89 9.72 9.55 9.45 9.38 9.29 9.24 9.20 9.11 9.03
26.92 26.42 25.91 25.39 25.08 24.87 24.60 24.44 24.33 24.06 23.82
2.94 2.90 2.87 2.84 2.81 2.80 2.78 2.77 2.76 2.74 2.72
4.06 4.00 3.94 3.87 3.83 3.81 3.77 3.75 3.74 3.70 3.67
7.87 7.72 7.56 7.40 7.30 7.23 7.14 7.09 7.06 6.97 6.89
18.41 17.99 17.56 17.12 16.85 16.67 16.44 16.31 16.21 15.98 15.77
2.70 2.67 2.63 2.59 2.57 2.56 2.54 2.52 2.51 2.49 2.47
3.64 3.57 3.51 3.44 3.40 3.38 3.34 3.32 3.30 3.27 3.23
6.62 6.47 6.31 6.16 6.06 5.99 5.91 5.86 5.82 5.74 5.66
14.08 13.71 13.32 12.93 12.69 12.53 12.33 12.20 12.12 11.91 11.72
2.54 2.50 2.46 2.42 2.40 2.38 2.36 2.35 2.34 2.32 2.30
3.35 3.28 3.22 3.15 3.11 3.08 3.04 3.02 3.01 2.97 2.93
5.81 5.67 5.52 5.36 5.26 5.20 5.12 5.07 5.03 4.95 4.87
11.54 11.19 10.84 10.48 10.26 10.11 9.92 9.80 9.73 9.53 9.36
2.42 2.38 2.34 2.30 2.27 2.25 2.23 2.22 2.21 2.18 2.16
3.14 3.07 3.01 2.94 2.89 2.86 2.83 2.80 2.79 2.75 2.71
5.26 5.11 4.96 4.81 4.71 4.65 4.57 4.52 4.48 4.40 4.32
9.89 9.57 9.24 8.90 8.69 8.55 8.37 8.26 8.19 8.00 7.84
2.32 2.28 2.24 2.20 2.17 2.16 2.13 2.12 2.11 2.08 2.06
2.98 2.91 2.85 2.77 2.73 2.70 2.66 2.64 2.62 2.58 2.54
4.85 4.71 4.56 4.41 4.31 4.25 4.17 4.12 4.08 4.00 3.92
8.75 8.45 8.13 7.80 7.60 7.47 7.30 7.19 7.12 6.94 6.78
2.25 2.21 2.17 2.12 2.10 2.08 2.05 2.04 2.03 2.00 1.98
2.85 2.79 2.72 2.65 2.60 2.57 2.53 2.51 2.49 2.45 2.41
4.54 4.40 4.25 4.10 4.01 3.94 3.86 3.81 3.78 3.69 3.61
7.92 7.63 7.32 7.01 6.81 6.68 6.52 6.42 6.35 6.18 6.02
2.19 2.15 2.10 2.06 2.03 2.01 1.99 1.97 1.96 1.93 1.91
2.75 2.69 2.62 2.54 2.50 2.47 2.43 2.40 2.38 2.34 2.30
4.30 4.16 4.01 3.86 3.76 3.70 3.62 3.57 3.54 3.45 3.37
7.29 7.00 6.71 6.40 6.22 6.09 5.93 5.83 5.76 5.59 5.44
(continued)

Table A.9 Critical Values for F Distributions (cont.)

ν1 = numerator df

α \ ν1 1 2 3 4 5 6 7 8 9

.100 3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16
.050 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71
13
.010 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19
.001 17.82 12.31 10.21 9.07 8.35 7.86 7.49 7.21 6.98
.100 3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12
.050 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65
14
.010 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03
.001 17.14 11.78 9.73 8.62 7.92 7.44 7.08 6.80 6.58
.100 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09
.050 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59
15
.010 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89
.001 16.59 11.34 9.34 8.25 7.57 7.09 6.74 6.47 6.26
.100 3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06
.050 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54
16
.010 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78
.001 16.12 10.97 9.01 7.94 7.27 6.80 6.46 6.19 5.98
.100 3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03
.050 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49
17
.010 8.40 6.11 5.19 4.67 4.34 4.10 3.93 3.79 3.68
.001 15.72 10.66 8.73 7.68 7.02 6.56 6.22 5.96 5.75
.100 3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00
(ν2 = denominator df)

.050 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46
18
.010 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60
.001 15.38 10.39 8.49 7.46 6.81 6.35 6.02 5.76 5.56
.100 2.99 2.61 2.40 2.27 2.18 2.11 2.06 2.02 1.98
.050 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42
19
.010 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52
.001 15.08 10.16 8.28 7.27 6.62 6.18 5.85 5.59 5.39
.100 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96
.050 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39
20
.010 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46
.001 14.82 9.95 8.10 7.10 6.46 6.02 5.69 5.44 5.24
.100 2.96 2.57 2.36 2.23 2.14 2.08 2.02 1.98 1.95
.050 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37
21
.010 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40
.001 14.59 9.77 7.94 6.95 6.32 5.88 5.56 5.31 5.11
.100 2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93
.050 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34
22
.010 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35
.001 14.38 9.61 7.80 6.81 6.19 5.76 5.44 5.19 4.99
.100 2.94 2.55 2.34 2.21 2.11 2.05 1.99 1.95 1.92
.050 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32
23
.010 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30
.001 14.20 9.47 7.67 6.70 6.08 5.65 5.33 5.09 4.89
.100 2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91
.050 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30
24
.010 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26
.001 14.03 9.34 7.55 6.59 5.98 5.55 5.23 4.99 4.80
(continued)

Table A.9 Critical Values for F Distributions (cont.)

ν₁ = numerator df

10 12 15 20 25 30 40 50 60 120 1000

2.14 2.10 2.05 2.01 1.98 1.96 1.93 1.92 1.90 1.88 1.85
2.67 2.60 2.53 2.46 2.41 2.38 2.34 2.31 2.30 2.25 2.21
4.10 3.96 3.82 3.66 3.57 3.51 3.43 3.38 3.34 3.25 3.18
6.80 6.52 6.23 5.93 5.75 5.63 5.47 5.37 5.30 5.14 4.99
2.10 2.05 2.01 1.96 1.93 1.91 1.89 1.87 1.86 1.83 1.80
2.60 2.53 2.46 2.39 2.34 2.31 2.27 2.24 2.22 2.18 2.14
3.94 3.80 3.66 3.51 3.41 3.35 3.27 3.22 3.18 3.09 3.02
6.40 6.13 5.85 5.56 5.38 5.25 5.10 5.00 4.94 4.77 4.62
2.06 2.02 1.97 1.92 1.89 1.87 1.85 1.83 1.82 1.79 1.76
2.54 2.48 2.40 2.33 2.28 2.25 2.20 2.18 2.16 2.11 2.07
3.80 3.67 3.52 3.37 3.28 3.21 3.13 3.08 3.05 2.96 2.88
6.08 5.81 5.54 5.25 5.07 4.95 4.80 4.70 4.64 4.47 4.33
2.03 1.99 1.94 1.89 1.86 1.84 1.81 1.79 1.78 1.75 1.72
2.49 2.42 2.35 2.28 2.23 2.19 2.15 2.12 2.11 2.06 2.02
3.69 3.55 3.41 3.26 3.16 3.10 3.02 2.97 2.93 2.84 2.76
5.81 5.55 5.27 4.99 4.82 4.70 4.54 4.45 4.39 4.23 4.08
2.00 1.96 1.91 1.86 1.83 1.81 1.78 1.76 1.75 1.72 1.69
2.45 2.38 2.31 2.23 2.18 2.15 2.10 2.08 2.06 2.01 1.97
3.59 3.46 3.31 3.16 3.07 3.00 2.92 2.87 2.83 2.75 2.66
5.58 5.32 5.05 4.78 4.60 4.48 4.33 4.24 4.18 4.02 3.87
1.98 1.93 1.89 1.84 1.80 1.78 1.75 1.74 1.72 1.69 1.66
2.41 2.34 2.27 2.19 2.14 2.11 2.06 2.04 2.02 1.97 1.92
3.51 3.37 3.23 3.08 2.98 2.92 2.84 2.78 2.75 2.66 2.58
5.39 5.13 4.87 4.59 4.42 4.30 4.15 4.06 4.00 3.84 3.69
1.96 1.91 1.86 1.81 1.78 1.76 1.73 1.71 1.70 1.67 1.64
2.38 2.31 2.23 2.16 2.11 2.07 2.03 2.00 1.98 1.93 1.88
3.43 3.30 3.15 3.00 2.91 2.84 2.76 2.71 2.67 2.58 2.50
5.22 4.97 4.70 4.43 4.26 4.14 3.99 3.90 3.84 3.68 3.53
1.94 1.89 1.84 1.79 1.76 1.74 1.71 1.69 1.68 1.64 1.61
2.35 2.28 2.20 2.12 2.07 2.04 1.99 1.97 1.95 1.90 1.85
3.37 3.23 3.09 2.94 2.84 2.78 2.69 2.64 2.61 2.52 2.43
5.08 4.82 4.56 4.29 4.12 4.00 3.86 3.77 3.70 3.54 3.40
1.92 1.87 1.83 1.78 1.74 1.72 1.69 1.67 1.66 1.62 1.59
2.32 2.25 2.18 2.10 2.05 2.01 1.96 1.94 1.92 1.87 1.82
3.31 3.17 3.03 2.88 2.79 2.72 2.64 2.58 2.55 2.46 2.37
4.95 4.70 4.44 4.17 4.00 3.88 3.74 3.64 3.58 3.42 3.28
1.90 1.86 1.81 1.76 1.73 1.70 1.67 1.65 1.64 1.60 1.57
2.30 2.23 2.15 2.07 2.02 1.98 1.94 1.91 1.89 1.84 1.79
3.26 3.12 2.98 2.83 2.73 2.67 2.58 2.53 2.50 2.40 2.32
4.83 4.58 4.33 4.06 3.89 3.78 3.63 3.54 3.48 3.32 3.17
1.89 1.84 1.80 1.74 1.71 1.69 1.66 1.64 1.62 1.59 1.55
2.27 2.20 2.13 2.05 2.00 1.96 1.91 1.88 1.86 1.81 1.76
3.21 3.07 2.93 2.78 2.69 2.62 2.54 2.48 2.45 2.35 2.27
4.73 4.48 4.23 3.96 3.79 3.68 3.53 3.44 3.38 3.22 3.08
1.88 1.83 1.78 1.73 1.70 1.67 1.64 1.62 1.61 1.57 1.54
2.25 2.18 2.11 2.03 1.97 1.94 1.89 1.86 1.84 1.79 1.74
3.17 3.03 2.89 2.74 2.64 2.58 2.49 2.44 2.40 2.31 2.22
4.64 4.39 4.14 3.87 3.71 3.59 3.45 3.36 3.29 3.14 2.99
Table A.9 Critical Values for F Distributions (cont.)

ν₁ = numerator df

α        1      2      3      4      5      6      7      8      9

.100 2.92 2.53 2.32 2.18 2.09 2.02 1.97 1.93 1.89
.050 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28
25
.010 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22
.001 13.88 9.22 7.45 6.49 5.89 5.46 5.15 4.91 4.71
.100 2.91 2.52 2.31 2.17 2.08 2.01 1.96 1.92 1.88
.050 4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27
26
.010 7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18
.001 13.74 9.12 7.36 6.41 5.80 5.38 5.07 4.83 4.64
.100 2.90 2.51 2.30 2.17 2.07 2.00 1.95 1.91 1.87
.050 4.21 3.35 2.96 2.73 2.57 2.46 2.37 2.31 2.25
27
.010 7.68 5.49 4.60 4.11 3.78 3.56 3.39 3.26 3.15
.001 13.61 9.02 7.27 6.33 5.73 5.31 5.00 4.76 4.57
.100 2.89 2.50 2.29 2.16 2.06 2.00 1.94 1.90 1.87
.050 4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24
28
.010 7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12
.001 13.50 8.93 7.19 6.25 5.66 5.24 4.93 4.69 4.50
.100 2.89 2.50 2.28 2.15 2.06 1.99 1.93 1.89 1.86
.050 4.18 3.33 2.93 2.70 2.55 2.43 2.35 2.28 2.22
29
.010 7.60 5.42 4.54 4.04 3.73 3.50 3.33 3.20 3.09
.001 13.39 8.85 7.12 6.19 5.59 5.18 4.87 4.64 4.45
.100 2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85
ν₂ = denominator df

.050 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21
30
.010 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07
.001 13.29 8.77 7.05 6.12 5.53 5.12 4.82 4.58 4.39
.100 2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79
.050 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12
40
.010 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89
.001 12.61 8.25 6.59 5.70 5.13 4.73 4.44 4.21 4.02
.100 2.81 2.41 2.20 2.06 1.97 1.90 1.84 1.80 1.76
.050 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07
50
.010 7.17 5.06 4.20 3.72 3.41 3.19 3.02 2.89 2.78
.001 12.22 7.96 6.34 5.46 4.90 4.51 4.22 4.00 3.82
.100 2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74
.050 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04
60
.010 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72
.001 11.97 7.77 6.17 5.31 4.76 4.37 4.09 3.86 3.69
.100 2.76 2.36 2.14 2.00 1.91 1.83 1.78 1.73 1.69
.050 3.94 3.09 2.70 2.46 2.31 2.19 2.10 2.03 1.97
100
.010 6.90 4.82 3.98 3.51 3.21 2.99 2.82 2.69 2.59
.001 11.50 7.41 5.86 5.02 4.48 4.11 3.83 3.61 3.44
.100 2.73 2.33 2.11 1.97 1.88 1.80 1.75 1.70 1.66
.050 3.89 3.04 2.65 2.42 2.26 2.14 2.06 1.98 1.93
200
.010 6.76 4.71 3.88 3.41 3.11 2.89 2.73 2.60 2.50
.001 11.15 7.15 5.63 4.81 4.29 3.92 3.65 3.43 3.26
.100 2.71 2.31 2.09 1.95 1.85 1.78 1.72 1.68 1.64
.050 3.85 3.00 2.61 2.38 2.22 2.11 2.02 1.95 1.89
1000
.010 6.66 4.63 3.80 3.34 3.04 2.82 2.66 2.53 2.43
.001 10.89 6.96 5.46 4.65 4.14 3.78 3.51 3.30 3.13
Table A.9 Critical Values for F Distributions (cont.)

ν₁ = numerator df

10 12 15 20 25 30 40 50 60 120 1000

1.87 1.82 1.77 1.72 1.68 1.66 1.63 1.61 1.59 1.56 1.52
2.24 2.16 2.09 2.01 1.96 1.92 1.87 1.84 1.82 1.77 1.72
3.13 2.99 2.85 2.70 2.60 2.54 2.45 2.40 2.36 2.27 2.18
4.56 4.31 4.06 3.79 3.63 3.52 3.37 3.28 3.22 3.06 2.91
1.86 1.81 1.76 1.71 1.67 1.65 1.61 1.59 1.58 1.54 1.51
2.22 2.15 2.07 1.99 1.94 1.90 1.85 1.82 1.80 1.75 1.70
3.09 2.96 2.81 2.66 2.57 2.50 2.42 2.36 2.33 2.23 2.14
4.48 4.24 3.99 3.72 3.56 3.44 3.30 3.21 3.15 2.99 2.84
1.85 1.80 1.75 1.70 1.66 1.64 1.60 1.58 1.57 1.53 1.50
2.20 2.13 2.06 1.97 1.92 1.88 1.84 1.81 1.79 1.73 1.68
3.06 2.93 2.78 2.63 2.54 2.47 2.38 2.33 2.29 2.20 2.11
4.41 4.17 3.92 3.66 3.49 3.38 3.23 3.14 3.08 2.92 2.78
1.84 1.79 1.74 1.69 1.65 1.63 1.59 1.57 1.56 1.52 1.48
2.19 2.12 2.04 1.96 1.91 1.87 1.82 1.79 1.77 1.71 1.66
3.03 2.90 2.75 2.60 2.51 2.44 2.35 2.30 2.26 2.17 2.08
4.35 4.11 3.86 3.60 3.43 3.32 3.18 3.09 3.02 2.86 2.72
1.83 1.78 1.73 1.68 1.64 1.62 1.58 1.56 1.55 1.51 1.47
2.18 2.10 2.03 1.94 1.89 1.85 1.81 1.77 1.75 1.70 1.65
3.00 2.87 2.73 2.57 2.48 2.41 2.33 2.27 2.23 2.14 2.05
4.29 4.05 3.80 3.54 3.38 3.27 3.12 3.03 2.97 2.81 2.66
1.82 1.77 1.72 1.67 1.63 1.61 1.57 1.55 1.54 1.50 1.46
2.16 2.09 2.01 1.93 1.88 1.84 1.79 1.76 1.74 1.68 1.63
2.98 2.84 2.70 2.55 2.45 2.39 2.30 2.25 2.21 2.11 2.02
4.24 4.00 3.75 3.49 3.33 3.22 3.07 2.98 2.92 2.76 2.61
1.76 1.71 1.66 1.61 1.57 1.54 1.51 1.48 1.47 1.42 1.38
2.08 2.00 1.92 1.84 1.78 1.74 1.69 1.66 1.64 1.58 1.52
2.80 2.66 2.52 2.37 2.27 2.20 2.11 2.06 2.02 1.92 1.82
3.87 3.64 3.40 3.14 2.98 2.87 2.73 2.64 2.57 2.41 2.25
1.73 1.68 1.63 1.57 1.53 1.50 1.46 1.44 1.42 1.38 1.33
2.03 1.95 1.87 1.78 1.73 1.69 1.63 1.60 1.58 1.51 1.45
2.70 2.56 2.42 2.27 2.17 2.10 2.01 1.95 1.91 1.80 1.70
3.67 3.44 3.20 2.95 2.79 2.68 2.53 2.44 2.38 2.21 2.05
1.71 1.66 1.60 1.54 1.50 1.48 1.44 1.41 1.40 1.35 1.30
1.99 1.92 1.84 1.75 1.69 1.65 1.59 1.56 1.53 1.47 1.40
2.63 2.50 2.35 2.20 2.10 2.03 1.94 1.88 1.84 1.73 1.62
3.54 3.32 3.08 2.83 2.67 2.55 2.41 2.32 2.25 2.08 1.92
1.66 1.61 1.56 1.49 1.45 1.42 1.38 1.35 1.34 1.28 1.22
1.93 1.85 1.77 1.68 1.62 1.57 1.52 1.48 1.45 1.38 1.30
2.50 2.37 2.22 2.07 1.97 1.89 1.80 1.74 1.69 1.57 1.45
3.30 3.07 2.84 2.59 2.43 2.32 2.17 2.08 2.01 1.83 1.64
1.63 1.58 1.52 1.46 1.41 1.38 1.34 1.31 1.29 1.23 1.16
1.88 1.80 1.72 1.62 1.56 1.52 1.46 1.41 1.39 1.30 1.21
2.41 2.27 2.13 1.97 1.87 1.79 1.69 1.63 1.58 1.45 1.30
3.12 2.90 2.67 2.42 2.26 2.15 2.00 1.90 1.83 1.64 1.43
1.61 1.55 1.49 1.43 1.38 1.35 1.30 1.27 1.25 1.18 1.08
1.84 1.76 1.68 1.58 1.52 1.47 1.41 1.36 1.33 1.24 1.11
2.34 2.20 2.06 1.90 1.79 1.72 1.61 1.54 1.50 1.35 1.16
2.99 2.77 2.54 2.30 2.14 2.02 1.87 1.77 1.69 1.49 1.22
Critical Values for the Durbin-Watson Statistic (d)
Level of Significance α = .05
k=1 k=2 k=3 k=4 k=5
n
dL dU dL dU dL dU dL dU dL dU

6 0.61 1.40
7 0.70 1.36 0.47 1.90
8 0.76 1.33 0.56 1.78 0.37 2.29
9 0.82 1.32 0.63 1.70 0.46 2.13 0.30 2.59
10 0.88 1.32 0.70 1.64 0.53 2.02 0.38 2.41 0.24 2.82
11 0.93 1.32 0.76 1.60 0.60 1.93 0.44 2.28 0.32 2.65
12 0.97 1.33 0.81 1.58 0.66 1.86 0.51 2.18 0.38 2.51
13 1.01 1.34 0.86 1.56 0.72 1.82 0.57 2.09 0.45 2.39
14 1.05 1.35 0.91 1.55 0.77 1.78 0.63 2.03 0.51 2.30
15 1.08 1.36 0.95 1.54 0.82 1.75 0.69 1.97 0.56 2.21
16 1.10 1.37 0.98 1.54 0.86 1.73 0.74 1.93 0.62 2.15
17 1.13 1.38 1.02 1.54 0.90 1.71 0.78 1.90 0.67 2.10
18 1.16 1.39 1.05 1.53 0.93 1.69 0.82 1.87 0.71 2.06
19 1.18 1.40 1.08 1.53 0.97 1.68 0.86 1.85 0.75 2.02
20 1.20 1.41 1.10 1.54 1.00 1.68 0.90 1.83 0.79 1.99
21 1.22 1.42 1.13 1.54 1.03 1.67 0.93 1.81 0.83 1.96
22 1.24 1.43 1.15 1.54 1.05 1.66 0.96 1.80 0.86 1.94
23 1.26 1.44 1.17 1.54 1.08 1.66 0.99 1.79 0.90 1.92
24 1.27 1.45 1.19 1.55 1.10 1.66 1.01 1.78 0.93 1.90
25 1.29 1.45 1.21 1.55 1.12 1.66 1.04 1.77 0.95 1.89
26 1.30 1.46 1.22 1.55 1.14 1.65 1.06 1.76 0.98 1.88
27 1.32 1.47 1.24 1.56 1.16 1.65 1.08 1.76 1.01 1.86
28 1.33 1.48 1.26 1.56 1.18 1.65 1.10 1.75 1.03 1.85
29 1.34 1.48 1.27 1.56 1.20 1.65 1.12 1.74 1.05 1.84
30 1.35 1.49 1.28 1.57 1.21 1.65 1.14 1.74 1.07 1.83

31 1.36 1.50 1.30 1.57 1.23 1.65 1.16 1.74 1.09 1.83
32 1.37 1.50 1.31 1.57 1.24 1.65 1.18 1.73 1.11 1.82
33 1.38 1.51 1.32 1.58 1.26 1.65 1.19 1.73 1.13 1.81
34 1.39 1.51 1.33 1.58 1.27 1.65 1.21 1.73 1.15 1.81
35 1.40 1.52 1.34 1.58 1.28 1.65 1.22 1.73 1.16 1.80
36 1.41 1.52 1.35 1.59 1.29 1.65 1.24 1.73 1.18 1.80
37 1.42 1.53 1.36 1.59 1.31 1.66 1.25 1.72 1.19 1.80
38 1.43 1.54 1.37 1.59 1.32 1.66 1.26 1.72 1.21 1.79
39 1.43 1.54 1.38 1.60 1.33 1.66 1.27 1.72 1.22 1.79
40 1.44 1.54 1.39 1.60 1.34 1.66 1.29 1.72 1.23 1.79
45 1.48 1.57 1.43 1.62 1.38 1.67 1.34 1.72 1.29 1.78
50 1.50 1.59 1.46 1.63 1.42 1.67 1.38 1.72 1.34 1.77
55 1.53 1.60 1.49 1.64 1.45 1.68 1.41 1.72 1.38 1.77
60 1.55 1.62 1.51 1.65 1.48 1.69 1.44 1.73 1.41 1.77
65 1.57 1.63 1.54 1.66 1.50 1.70 1.47 1.73 1.44 1.77
70 1.58 1.64 1.55 1.67 1.52 1.70 1.49 1.74 1.46 1.77
75 1.60 1.65 1.57 1.68 1.54 1.71 1.51 1.74 1.49 1.77
80 1.61 1.66 1.59 1.69 1.56 1.72 1.53 1.74 1.51 1.77
85 1.62 1.67 1.60 1.70 1.57 1.72 1.55 1.75 1.52 1.77
90 1.63 1.68 1.61 1.70 1.59 1.73 1.57 1.75 1.54 1.78
95 1.64 1.69 1.62 1.71 1.60 1.73 1.58 1.75 1.56 1.78
100 1.65 1.69 1.63 1.72 1.61 1.74 1.59 1.76 1.57 1.78
150 1.72 1.75 1.71 1.76 1.69 1.77 1.68 1.79 1.66 1.80
200 1.76 1.78 1.75 1.79 1.74 1.80 1.73 1.81 1.72 1.82
Where n = number of observations and k = number of independent variables
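To use these bounds: for a test of positive first-order autocorrelation at α = .05, reject H0 (no autocorrelation) if the computed d is less than dL, do not reject if d is greater than dU, and treat the test as inconclusive if d lies between dL and dU; for negative autocorrelation the same rule is applied to 4 − d. For example, with n = 30 observations and k = 2 regressors the table gives dL = 1.28 and dU = 1.57, so a computed d of 1.10 would signal positive autocorrelation while d of 1.80 would not.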

Critical Values for the Durbin-Watson Statistic (d)


Level of Significance α = .05
k=6 k=7 k=8 k=9 k=10
n
dL dU dL dU dL dU dL dU dL dU

11 0.20 3.01
12 0.27 2.83 0.17 3.15
13 0.33 2.70 0.23 2.99 0.15 3.27
14 0.39 2.57 0.29 2.85 0.20 3.11 0.13 3.36
15 0.45 2.47 0.34 2.73 0.25 2.98 0.18 3.22 0.11 3.44
16 0.50 2.39 0.40 2.62 0.30 2.86 0.22 3.09 0.16 3.30
17 0.55 2.32 0.45 2.54 0.36 2.76 0.27 2.98 0.20 3.18
18 0.60 2.26 0.50 2.47 0.41 2.67 0.32 2.87 0.24 3.07
19 0.65 2.21 0.55 2.40 0.46 2.59 0.37 2.78 0.29 2.97
20 0.69 2.16 0.60 2.34 0.50 2.52 0.42 2.70 0.34 2.89
21 0.73 2.12 0.64 2.30 0.55 2.46 0.46 2.63 0.38 2.81
22 0.77 2.09 0.68 2.25 0.59 2.41 0.51 2.57 0.42 2.73
23 0.80 2.06 0.72 2.21 0.63 2.36 0.55 2.51 0.47 2.67
24 0.84 2.04 0.75 2.17 0.67 2.32 0.58 2.46 0.51 2.61
25 0.87 2.01 0.78 2.14 0.70 2.28 0.62 2.42 0.54 2.56
26 0.90 1.99 0.82 2.12 0.74 2.24 0.66 2.38 0.58 2.51
27 0.93 1.97 0.85 2.09 0.77 2.22 0.69 2.34 0.62 2.47
28 0.95 1.96 0.87 2.07 0.80 2.19 0.72 2.31 0.65 2.43
29 0.98 1.94 0.90 2.05 0.83 2.16 0.75 2.28 0.68 2.40
30 1.00 1.93 0.93 2.03 0.85 2.14 0.78 2.25 0.71 2.36
31 1.02 1.92 0.95 2.02 0.88 2.12 0.81 2.23 0.74 2.33
32 1.04 1.91 0.97 2.00 0.90 2.10 0.84 2.20 0.77 2.31
33 1.06 1.90 0.99 1.99 0.93 2.09 0.86 2.18 0.80 2.28
34 1.08 1.89 1.02 1.98 0.95 2.07 0.89 2.16 0.82 2.26
35 1.10 1.88 1.03 1.97 0.97 2.05 0.91 2.14 0.85 2.24
36 1.11 1.88 1.05 1.96 0.99 2.04 0.93 2.13 0.87 2.22
37 1.13 1.87 1.07 1.95 1.01 2.03 0.95 2.11 0.89 2.20
38 1.15 1.86 1.09 1.94 1.03 2.02 0.97 2.10 0.91 2.18
39 1.16 1.86 1.10 1.93 1.05 2.01 0.99 2.09 0.93 2.16
40 1.18 1.85 1.12 1.92 1.06 2.00 1.01 2.07 0.95 2.15

45 1.24 1.84 1.19 1.90 1.14 1.96 1.09 2.02 1.04 2.09
50 1.29 1.82 1.25 1.88 1.20 1.93 1.16 1.99 1.11 2.04
55 1.33 1.81 1.29 1.86 1.25 1.91 1.21 1.96 1.17 2.01
60 1.37 1.81 1.34 1.85 1.30 1.89 1.26 1.94 1.22 1.98
65 1.40 1.81 1.37 1.84 1.34 1.88 1.30 1.92 1.27 1.96
70 1.43 1.80 1.40 1.84 1.37 1.87 1.34 1.91 1.31 1.95
75 1.46 1.80 1.43 1.83 1.40 1.87 1.37 1.90 1.34 1.94
80 1.48 1.80 1.45 1.83 1.43 1.86 1.40 1.89 1.37 1.93
85 1.50 1.80 1.47 1.83 1.45 1.86 1.42 1.89 1.40 1.92
90 1.52 1.80 1.49 1.83 1.47 1.85 1.45 1.88 1.42 1.91
95 1.54 1.80 1.51 1.83 1.49 1.85 1.46 1.88 1.44 1.90
100 1.55 1.80 1.53 1.83 1.50 1.85 1.48 1.87 1.46 1.90
150 1.65 1.82 1.64 1.83 1.62 1.85 1.60 1.86 1.59 1.88
200 1.71 1.83 1.70 1.84 1.69 1.85 1.68 1.86 1.67 1.87
Where n = number of observations and k = number of independent variables
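In practice the d statistic is computed from the OLS residuals, d = Σ(e_t − e_{t−1})² / Σ e_t², and then compared with the dL and dU bounds above. Below is a minimal sketch (assuming Python with numpy and statsmodels installed, and using simulated data rather than any data from this document):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    # Hypothetical data: n = 30 observations, k = 1 regressor
    rng = np.random.default_rng(42)
    x = rng.normal(size=30)
    y = 2.0 + 0.5 * x + rng.normal(size=30)

    X = sm.add_constant(x)                  # add the intercept column
    residuals = sm.OLS(y, X).fit().resid    # OLS residuals e_t
    d = durbin_watson(residuals)            # d = sum((e_t - e_{t-1})^2) / sum(e_t^2)

    # Compare with the tabulated bounds for n = 30, k = 1: dL = 1.35, dU = 1.49
    print(round(d, 2))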

Source: http://www.ekonomija.cg.yu/dokumenta/ekonometrija/Durbin-Watson%20statistika.htm