Introductory Econometrics Sem 4
RSGCLASSES
INTRODUCTORY ECONOMETRICS
BY RAHUL SIR
(SRCC GRADUATE , DSE ALUMNI)
INDEX
Chapter No.    Chapter Name
1. Simple Linear Regression
5. Multicollinearity
6. Heteroscedasticity
7. Autocorrelation
Chapter- 1
SIMPLE LINEAR REGRESSION ANALYSIS
2. The locus of the conditional means of Y for the fixed values of X is the
a. Conditional expectation function
b. Intercept line
c. Population regression line
d. Linear regression line
b. Is always false
c. May sometimes be true sometimes false
d. Nonsense statement
12. In the sample regression function, the observed Yi can be expressed as Yi = β̂1 + β̂2Xi + ûi. The statement is
a. True
b. False
c. Depends on β̂2
d. Depends on Ŷi
14. Under the least squares procedure, the larger the ûi (in absolute terms), the larger the
a. Standard error
b. Regression error
c. Squared sum of residuals
d. Difference between true parameter and estimated parameter
15. The method of least squares provides us with unique estimates of β̂1 and β̂2 that give the smallest possible value of
a. 𝑢̂i
b. 𝑢̂i
c. 𝑢̂i
d. ∑ûi²
21. One of the assumptions of the CLRM is that the values of the explanatory variable X must
a. All be positive
b. Not all be the same
c. All be negative
d. Average to zero
23. In a two variable linear regression model the slope coefficient measures
a. The mean value of Y
b. The change in Y which the model predicts for a unit change in X
c. The change in X which the model predicts for a unit change in Y
d. The value of Y for any given value of X
24. The fitted regression equation is given by Ŷi = 12 + 0.5Xi. What is the value of the residual at the point X = 50, Y = 70?
a. 57
b. -57
c. 0
d. 33
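A quick numerical check of the residual arithmetic in Q24 (a minimal Python sketch; all numbers come from the question itself):

    # Residual at (X = 50, Y = 70) for the fitted line Y-hat = 12 + 0.5*X
    b1, b2 = 12, 0.5
    X, Y = 50, 70
    Y_hat = b1 + b2 * X       # fitted value: 12 + 0.5*50 = 37
    residual = Y - Y_hat      # observed minus fitted: 70 - 37 = 33
    print(Y_hat, residual)    # 37.0 33.0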
25. What is the number of degrees of freedom for a simple bivariate linear regression
with 100 observations?
a. 100
b. 97
c. 98
d. 2
26. Given the assumptions of the CLRM, the least squares estimates possess some optimum properties given by the Gauss-Markov theorem. Which of these statements is NOT part of the theorem?
a. The estimator β̂2 is a linear function of a random variable
b. The average value of the estimator β̂2 is equal to zero
c. The estimator β̂2 has minimum variance
d. The estimator β̂2 is an unbiased estimator
30. Zero correlation does not necessarily imply independence between the two
variables. The statement is
a. False
b. True
c. Depends on the mean value of X and Y
d. None
TRUE/FALSE
State whether the following statements are true, false, or uncertain. Give your reasons. Be precise.
i) The stochastic error term 𝑢𝑖 and the residual term 𝑒𝑖 mean the same thing.
ii) The PRF gives the value of the dependent variable corresponding to each value of the
independent variable.
iv) In the linear regression model the explanatory variable is the cause and the
dependent variable is the effect.
v) The conditional and unconditional mean of a random variable are the same thing.
vi) In practice, the two- variable regression model is useless because the behavior of a
dependent variable can never be explained by a single explanatory variable.
vii) The sum of the deviation of a random variable from its mean value is always equal to
zero.
viii) OLS is an estimating procedure that minimizes the sum of the errors squared, ∑ei².
ix) The coefficient of correlation, r, has the same sign as the slope coefficient b2.
xi) In simple regression model 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋𝑖 + 𝑢𝑖 , the OLS estimator 𝛽̂1 and 𝛽̂2 each
follow normal distribution only if 𝑢𝑖 follows normal distribution.[Eco(h)2019]
xii) If the estimate of slope coefficient in a bivariate regression is zero, the measure of
coefficient of determination is also zero. [Eco(h)2019]
xiii) If you choose a higher level of significance, a regression coefficient is more likely
to be significant. [Eco(h)2013]
xiv) In the regression model Yi = B1 + B2Xi + ui, suppose we obtain a 95% confidence interval for B2 as (0.1934, 1.8499). We can say the probability is 95% that this interval includes B2. [Eco(h)2014]
xv) In a two-variable PRF, if the slope coefficient 𝛽2 is zero; the intercept 𝛽1 is estimated
by the sample mean. [Eco(h)2015]
xvi) All Actual 𝑌𝑖 cannot lie above the sample linear regression line. [Eco(h)2017]
xvii) Consider a simple regression model estimated using OLS. It is known that the
Explained Sum of Squares is 75% higher than the Residual Sum of Squares. This
implies that more than 75% of the total variation in the dependent variable is
explained by the variation in the explanatory variable. [Eco(h)2023]
xviii) In a simple regression model estimated using OLS, the residuals (ei) are such that
𝑒̅ = 0 and 𝑒̅ 2 = 0. [Eco(h)2023]
xxi) If X and Y are related to each other by the equation: Y = 2 + 0.5 X, the correlation
coefficient between them is 0.5 [Eco(h)2023]
Proofs
1. Prove that Ȳ = b1 + b2X̄
2. Prove that Σ 𝑒𝑖 =0
3. Prove that Σ𝑒𝑖 𝑥𝑖 =0 where ei is the residual term and 𝑥𝑖 is the deviation
of Xi from mean.
7. Prove that the mean of the predicted values of Yi is always equal to the actual mean, i.e., the mean of Ŷi equals Ȳ.
9. Prove that the least squares estimator b2 is linear, unbiased and consistent.
10. In the CLRM, show that the OLS estimator for the slope coefficient is linear and unbiased.
11. Show that the OLS estimators have the property of being linear and unbiased.
12. Prove that the least squares estimators have the minimum variance amongst the class of linear unbiased estimators.
13. Prove that the OLS estimators are Best Linear Unbiased Estimators (BLUE).
14. Derive the numerical properties of the OLS estimators and the regression line.
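The numerical properties asked for in Proofs 2, 3, 7 and 14 can be checked on any small data set. A minimal Python sketch (the data values below are made up purely for illustration):

    import numpy as np

    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical data
    Y = np.array([2.0, 3.0, 7.0, 8.0, 9.0])

    # OLS estimates for Y = b1 + b2*X + e
    b2 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean())**2)
    b1 = Y.mean() - b2 * X.mean()
    Y_hat = b1 + b2 * X
    e = Y - Y_hat

    print(np.isclose(e.sum(), 0))                     # sum of residuals is zero
    print(np.isclose(np.sum(e * (X - X.mean())), 0))  # residuals uncorrelated with x (deviations)
    print(np.isclose(Y_hat.mean(), Y.mean()))         # mean of fitted Y equals mean of actual Y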
16. Show that: r² = (∑ yi ŷi)² / [(∑ yi²)(∑ ŷi²)]
18. If we have two regression models, Y on X and X on Y, show that the product of the two regression slope coefficients (of X on Y and of Y on X) is the coefficient of determination.
19. If the estimate of the slope coefficient in a bivariate regression is zero, the measure of the coefficient of determination is also zero.
Ans: (a) LIP, (b) LIP, (c) LIP, (d) LIP (e) LIV
2. Determine whether the following models are linear in the parameters, or the
variables, or both. Which of these models are linear regression models?
(i) ln Yi = β1 + β2 ln Xi + ui        (ii) Yi = β1 + (1/β2)Xi + ui
(iii) Yi = β1 + β2²Xi + ui            (iv) ln Yi = β1 - β2(1/Xi) + ui
(v) Yi = e^(β1 + β2Xi + ui)           (vi) Yi = β1 - β2³Xi + ui
Ans. (i) LIP (ii) LIV (iii) LIV (iv) LIP (v) Neither (vi) LIV
Y 3 5 7 14 11
(i) Obtain the estimated regression equation using ordinary least squares when Y is regressed on X with an intercept term.
(ii) Prepare the ANOVA table for this data.
Y 10 20 30 40 50 60 70 80 90
Also find the variances and standard errors of the intercept and slope coefficients.
4. Given below is the data for 10 years from the Economic Survey of India:
Year       PFCE (in Rs. '0000 cr.)       GDP (in Rs. '0000 cr.)
1985-86 43 54
1986-87 43 55
1987-88 45 56
1988-89 48 62
1989-90 51 67
1990-91 53 69
1991-92 54 70
1992-93 55 74
1993-94 57 78
1994-95 61 86
6. For a simple linear regression model, Yi = B1 + B2Xi + ui, the following data are given for 22 observations:
X̄ = 10, Ȳ = 20, ∑(Xi - X̄)² = 60, ∑(Yi - Ȳ)² = 100
8. Given the following summary results for 6 pairs of observations on the dependent
variable Y and the independent variable X, calculate the 95% confidence interval for
11. For the regression model answer the questions that follow:
𝑌̂𝑖 = 432.4138 + 0.0013𝑋𝑖
𝑠𝑒 (16.9061)(0.000245)
𝑛 = 10, 𝑟 2 = 0.7849
𝑌𝑖 → 𝑚𝑎𝑟𝑘𝑠 𝑜𝑏𝑡𝑎𝑖𝑛𝑒𝑑
𝑋𝑖 → 𝐹𝑎𝑚𝑖𝑙𝑦 𝐼𝑛𝑐𝑜𝑚𝑒
Fill in the missing numbers. Would you reject the hypothesis that the true B2 is zero at α = 0.05? State whether you are using a one-tailed or a two-tailed test and why.
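For Q11 the significance test on B2 needs only the reported coefficient and its standard error. A hedged sketch of the arithmetic (two-tailed t test with n - 2 = 8 degrees of freedom; the critical value is taken from standard t tables):

    b2, se_b2 = 0.0013, 0.000245
    t_stat = (b2 - 0) / se_b2            # about 5.31
    t_crit = 2.306                       # two-tailed 5% critical value, 8 df (t tables)
    print(t_stat, abs(t_stat) > t_crit)  # True: reject H0 that B2 = 0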
13. Given the following regression between retail sales of passenger cars (Si) and real disposable income (Xi):
Ŝi = 5807 + 3.24Xi
SE = (1.634)
R² = 0.22, n = 30
14. A regression was run between per capita savings (S) and per capita income (Y) and the following results were obtained:
Ŝi = 450.03+ 0.67Yi
SE (151.105) (0.011) n=20
(i) What is the economic interpretation of the regression coefficients?
(ii) What do you think about the sign of the constant term? What can be the possible reason behind it?
(iii) Say something about the goodness of fit. Also carry out a t-test for the slope coefficient at 1%.
(iv) Reformulate the above model with income stated per 100 rupees. What do you think would be the impact on the slope and intercept?
(v) Prepare 99% confidence intervals.
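A sketch of the interval arithmetic for part (v): the confidence interval is b2 ± t_crit × se(b2) with n - 2 = 18 degrees of freedom (the 1% two-tailed critical value 2.878 is from standard t tables):

    b2, se_b2, t_crit = 0.67, 0.011, 2.878     # 99% two-tailed, 18 df
    lower, upper = b2 - t_crit * se_b2, b2 + t_crit * se_b2
    print(round(lower, 4), round(upper, 4))    # roughly (0.6383, 0.7017)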
15. A regression was run between personal consumption expenditure (Y) and gross domestic product (X), all measured in billions of dollars, for the years 1982 to 1996 and the following results were obtained:
Ŷi = -184.0780 + 0.7064Xi
Se = (46.2619) (0.007827)
R2 = 0.22
(i) What is the economic interpretation of regression coefficient?
(ii) What is MPC?
(iii) Interpret r2.
(iv) Prepare 95% confidence intervals of regression coefficient.
(v) Test the significance of β1 and β2 writing the hypothesis.
16. The rational expectations hypothesis claims that expectations are unbiased, i.e., the average predicted value is equal to the actual value of the variable under investigation. A researcher wished to test the validity of this claim with reference to the interest rates on 3-month US Treasury bills for 30 quarterly observations. The results of the regression of the actual interest rate (ri) on the predicted interest rate (r*i) were as follows:
r̂i = 0.0240 + 0.9400 r*i
se (0.86) (0.14)
Carry out the tests to check the validity of the rational expectations hypothesis (choose α = 5%). Assume all basic assumptions of the classical linear regression model are satisfied.
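The unbiasedness claim in Q16 amounts to H0: B1 = 0 and B2 = 1. A minimal sketch of the two individual t tests using the reported estimates (df = n - 2 = 28; critical value from t tables):

    b1, se_b1 = 0.0240, 0.86
    b2, se_b2 = 0.9400, 0.14
    t_b1 = (b1 - 0) / se_b1     # about 0.03: cannot reject B1 = 0
    t_b2 = (b2 - 1) / se_b2     # about -0.43: cannot reject B2 = 1
    t_crit = 2.048              # two-tailed 5% critical value, 28 df
    print(abs(t_b1) > t_crit, abs(t_b2) > t_crit)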
MEAN FORECASTING
1. Using cross- sectional data on total sales and profits for 27 German companies in
1995, the following model is estimated:
Profit= B1+ B2 Salesi + ui
Where
Profits: Total profits in millions of dollars
Sales: Total sales in billions of dollars
The regression results are given below:
Estimates of Coefficients Standard errors
r2=0.4074
(a) Construct a 95% confidence interval for the slope coefficient. What can you say about
its statistical significance?
(b) Prove that in a simple regression model with an intercept, the F statistic for goodness
of fit of the model is equal to the square of the t statistic for a two sided t test on the
slope coefficient. Verify this statement for the regression results given in these
questions.
(c) Find the forecasted mean profits if annual sales are 25 billion dollars. Explain the
concept of a confidence band for true mean profits.
3. Based on the data collected on a particular Monday for 13 B.A (H) Economics,
second year students we want to estimate the following population regression
Equation: 𝑌𝑖 = 𝛽1 + 𝛽2 𝑋𝑖 + 𝑢𝑖
Where:
𝑌𝑖 : Travelling time (in hrs) for the ith student from her home to college.
𝑋𝑖 : distance from home to college for ith student in km.
Using the above data and assuming that all the CLRM assumptions are satisfied, construct a 95% confidence interval for the predicted mean travelling time when the distance between the college and a student's house is 11 km.
1. Explain the steps involved in the Jarque-Bera test for testing the validity of the
normality assumption in an empirical exercise. Perform the test for a JB test statistic
value equal to 0.8153 at 5% level of significance.
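A minimal sketch of the Jarque-Bera mechanics: JB = n[S²/6 + (K - 3)²/24] is compared with the χ² critical value with 2 degrees of freedom (5.99 at 5%); with JB = 0.8153 the normality assumption is not rejected. The skewness/kurtosis inputs below are purely hypothetical:

    def jarque_bera(n, skew, kurt):
        """JB statistic from sample size, skewness and kurtosis."""
        return n * (skew**2 / 6 + (kurt - 3)**2 / 24)

    jb = jarque_bera(n=30, skew=0.2, kurt=2.5)   # hypothetical inputs
    chi2_crit = 5.99                             # chi-square(2) critical value, 5% level
    print(jb, jb > chi2_crit)                    # reject normality only if True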
2. For the model Yi = β1 + ui, given that all the CLRM assumptions are satisfied, use
OLS to find the estimator of 𝛽1 . Show that this estimator can be decomposed into the
true value plus a linear combination of the disturbance term in the sample. Also
demonstrate that this estimator is an unbiased estimator of 𝛽1 .[ Eco(h) 2015]
3. Suppose that you are considering opening a restaurant at a location where the average traffic volume is 1000 cars per day. To help you decide whether to open the restaurant
or not, you collect data on daily sales (in thousands of rupees) and average traffic
volume (in hundreds of cars per day) for a random sample of 22 restaurants. You set
up your model as:
Salesi = B1 + B2 AVtraffici + ui
i. Obtain the ordinary least squares estimator of the slope coefficient and interpret it.
ii. Estimate the average sales for your potential restaurant location.
iii. Will the values of the coefficient of determination change if you want to change
the unit of sales from thousands of rupees, leaving units of traffic volume
unchanged?
Explain your answer. [Eco(h) 2014]
4. Using data on sales of cameras (SALES) and its price (PRICE in thousands of rupees)
for 17 brands, the effect of price on sales is given by:
SALESi = α + β PRICEi + ui
This is estimated using the OLS method. The results obtained are as follows (t-ratios are mentioned within parentheses). Assume all assumptions of the classical linear regression model hold good.
𝑦𝑖 = 𝐵1 + 𝐵2𝑥𝑖 + 𝜇𝑖
7. How do you test for normality of the error terms in the PRF using the Jarque-Bera test? What happens to the least squares estimates if the errors are not normally distributed? What are the consequences for the Gauss-Markov theorem? [Eco(h) 2021]
a. Find the estimators of 𝛽1 & 𝛼1 . Are they identical ? Are their variances identical ?
b. Find the estimators of 𝛽2 & 𝛼2 . Are they identical ? Are their variances identical ?
c. What is the advantage , if any , of the model II over model I ?
10. Based on a sample of size 20, the following regression line was estimated using the
least-squares method,
𝑌̂𝑖 = 5 + 3𝑋𝑖
Construct a 95% confidence interval estimate of the true population mean of 𝑌 for 𝑋0 =
15. Do you expect the confidence interval to be wider if a similar interval is estimated for
𝑋0 = 2? Explain your answer. [Eco(h) 2018]
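For Q10, the width of the mean-forecast interval depends on how far X0 is from X̄: var(Ŷ0) = σ̂²[1/n + (X0 - X̄)²/∑xi²], so the band fans out as X0 moves away from the mean of X. A sketch with hypothetical values of σ̂², X̄ and ∑xi² (none of these are given in the question):

    import math

    def se_mean_forecast(sigma2_hat, n, x0, x_bar, sum_x_dev_sq):
        """Standard error of the estimated mean of Y at X = x0."""
        return math.sqrt(sigma2_hat * (1/n + (x0 - x_bar)**2 / sum_x_dev_sq))

    # Hypothetical inputs, purely to show how the interval widens away from X-bar
    for x0 in (15, 2):
        print(x0, se_mean_forecast(sigma2_hat=4.0, n=20, x0=x0, x_bar=10.0, sum_x_dev_sq=300.0))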
11. For an OLS estimated regression equation, Ŷi = b1 + b2Xi, the sum of the product of the residuals and the mean deviation of the explanatory variable is zero. TRUE OR FALSE? [Eco(h) 2024]
12. For an OLS estimated regression equation, Ŷi = b1 + b2Xi, if we multiply both Y and X by 1000 and re-estimate the two-variable regression model, the intercept coefficient will increase by 1000 times. TRUE OR FALSE? [Eco(h) 2024]
13. Consider the following regression function on sales revenue for a particular firm for
last 10 months, as estimated by OLS:
se = (4.03) (0.04)
σ̂²u = 1.05
X̄ = 25, ∑Xt² = 6500
Find the predicted mean sales revenue if the advertising expenditure for the firm in the
next month is 30 million dollars. Also, find the 99% confidence interval for the true
predicted mean of sales revenue corresponding to 30 million dollars advertising
expenditure. [Eco(h)2024]
13) a. 14) c. 15) d. 16) c. 17) a. 18)c. 19) d. 20) c. 21) b. 22) a 23) b
CHAPTER-2
Multiple Linear Regression
6. In the classical linear regression model, λ2X2i + λ3X3i = 0 only when λ2 = λ3 = 0 refers to the assumption of
a. Zero mean value of disturbance term
b. Homoscedasticity
c. No autocorrelation
d. No multicollinearity
c. No autocorrelation
d. No multicollinearity
9. Given Yi = β1X1i + β2X2i + β3X3i + ui, state which of the following statements is true
a. 𝛽 2 measures the change in the mean value of Y per unit change in X2 ,holding
the value of X3 constant
b. 𝛽 3 gives the net effect of a unit change in X3, on the mean value of Y, net of
any effect that X2 may have on mean Y
c. Both a and b are true
d. Neither a nor b is true
17. Given the regression model Yi = β1 + β2X2i + β3X3i + ui, how would you state the null hypothesis to test that X2 has no influence on Y with X3 held constant?
a. H0: β1 = 0
b. H0: β2 = 0
c. H0: β3 = 0
d. H0: β2 = 0 given β3 = 0
18. In hypothesis testing using t statistics, when the computed t value is found to
exceed the critical t value at the chosen level of significance, then
a. We reject the null hypothesis
b. We do not reject the null hypothesis
c. It depends on alternate hypothesis
d. It depends on F value
21. When R2 for a regression model is equal to zero, the F value is equal to
a. Infinity
b. High positive value
c. Low positive value
d. Zero
TRUE/FALSE
State whether the following statements are True or False. Give reasons for your answer:
PROOFS
Practical Question
Y 1 3 8
X2 1 2 3
X3 2 1 -3
Obtain the estimated regression equation using ordinary least squares if Y is regressed
on X2 and X3 with an intercept term.
Can you estimate the regression coefficients in this model? Explain your answers.
3. The following results were obtained from a sample of 12 firms on their output (Y),
labour input (X2) and capital input (X3), measured in arbitrary units:
∑Y = 753, ∑Y² = 48,139, ∑YX2 = 40,830
∑X2 = 643, ∑X2² = 34,843, ∑YX3 = 6,796
∑X3 = 106, ∑X3² = 976
Find the regression equation:
𝑌 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖
4. The following table contains the sales price of 5 holiday cottages in Ushered, Denmark, together with the age and the livable area of each cottage.
Yi X2i X3i
745 36 66
895 37 68
442 47 64
440 32 53
1598 10 101
Suppose it is thought that the price obtained for a cottage depends primarily on the age and livable area. A possible model for the data might be the linear regression model
Yi = β1+β2X2i+β3X3i+ ui
where the random errors ui are independent, normally distributed random variables with zero mean and constant variance. Fit the model and obtain the parameters and their respective standard errors.
5. You are given the following data based on a regression of the quantity of oranges sold (Y) in a supermarket on their price (X2) and the amount spent on advertising the product (X3), for 12 consecutive days.
Ȳ = 100, X̄2 = 70, X̄3 = 6.7
∑x2i² = 2250, ∑yix2i = -3550
∑yix3i = 125.25, ∑x2ix3i = -54
∑yi² = 6300, ∑x3i² = 4.857
(lower-case letters denote deviations from sample means)
(i) Estimate the three multiple regression coefficients and their standard errors.
(ii) Obtain R².
(iii) Test the statistical significance of each estimated regression coefficient using α = 5%.

Ŷi = 1336.049 + 12.7413X2i + 85.7640X3i
se = (175.2725) (0.9123) (8.8019)
t = (-7.6226) (13.9653) (9.7437)
R² = 0.8906, F = 118.0585, n = 32
(v) Test the overall significance of this equation, i.e., test the joint hypothesis that X2 and X3 are insignificant in explaining the variation in Y.
(vi) What is the relationship between F and R2? Establish this for the regression
results presented above.
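For part (vi): in a model with k parameters, F = [R²/(k - 1)] / [(1 - R²)/(n - k)]. A quick Python check against the printed output (R² = 0.8906, n = 32, k = 3), which reproduces the reported F of roughly 118:

    R2, n, k = 0.8906, 32, 3
    F = (R2 / (k - 1)) / ((1 - R2) / (n - k))
    print(F)   # about 118.0, matching the reported F = 118.0585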
8. Consider the following regression for an imaginary country, say Utopia, for a period of 15 years. The variables are: IMP = imports, GNP = Gross National Product and CPI = Consumer Price Index.
IMP̂t = -108.20 + 0.045 GNP2t - 0.931 CPI3t
t = (3.45) (1.23) (1.844)
R² = 0.9894
(i) Test whether, individually, the partial slope coefficients for GNP and CPI are
statistically significant at the 5% level of significance.
(ii) Test whether GNP and CPI jointly have any statistical significance in explaining variations in imports. Carry out this test at 5% level of significance.
9. Consider the following model relating the gain in salary due to an MBA degree to a
number of its determinants.
𝑆𝐿𝑅𝑌𝐺𝐴𝐼𝑁𝑡 = 𝐵1 + 𝐵2 𝑇𝑈𝐼𝑇𝐼𝑂𝑁𝑡 + 𝐵3 𝑍1𝑡 + 𝐵4 𝑍2𝑡 + 𝐵5 𝑍3𝑡 + 𝑢𝑡
Where,
SLRYGAIN = post-MBA salary minus pre-MBA salary, in thousands of dollars.
TUITION = annual tuition cost, in thousands of dollars.
Z1 = MBA students' skills as analysts, graded by recruiters.
Z2 = MBA students' skills as team players, graded by recruiters.
Z3 = curriculum evaluation by MBAs.
Using data for top 25 business schools, the coefficients were estimated as follows,
standard errors in parenthesis.
(i) Carry out individual two tail tests at 10% level of significance for the slope
coefficients.
(ii) Test the model for overall significance at the 10% level if R2 = 0.461 was
obtained for the model.
10. For the multiple regression model for Y = mental impairment, X 1 = life events, and
X2 = SES.
E(Y) = α + β1X1 + β2X2
Following table contains the required results:
Coff. Std.Error t
11. The grade point average (GPA) of a random sample of 427 students in a college was regressed on verbal SAT scores (VSAT) and mathematics SAT scores (MSAT) and the following regression model was estimated (standard errors are reported in parentheses):
GP̂Ai = 0.423 + 0.398VSATi + 0.001MSATi
SE (0.220) (0.061) (0.00029)
(i) The analyst found the unadjusted R2 = 0.22 and concluded that the VSAT and
MSAT scores are not good predictors of GPA. Do you agree with him? Write
down all the steps to test his claim and check it at 5% level of significance.
(ii) Suppose a student’s VSAT and MSAT scores increased by 100 points each.
How much increase in GPA can be expected?
(iii) As a result of the college policy if all the GPA scores were increased by 10%
what impact would it have on the regression coefficients and coefficient of
determination R2.
12. Using time series data for 1979 to 2009 for a certain economy, the following model of
demand for money was estimated:
Where
The table below has estimates of the coefficients and their standard errors
Y 0.530 0.112
13. A relationship was established between demand for housing (H), Gross National Product (GNP), and the interest rate (INT) prevailing in the economy. The following results were obtained:
Ĥ = 678.89 + 0.905GNP - 169.65INT
t = (1.80) (3.64) (-3.87)
R² = 0.432, R̄² = 0.375, df = 20
(i) Calculate the F value from the data?
(ii) What conclusion do you draw from the F-value?
Price = 𝛽0 + 𝛽1 assess + 𝑢
𝑡 = (16.27) (0.049)
i. How will you test the constraints 𝛽1 = 1 and 𝛽0 = 0 in the above regression if you
are given the SSR in the restricted model as 209448.99? Conduct the necessary
test(s) at 1% level of significance and give your conclusion?
ii. Suppose now that the estimated model is
Price = β0 + β1 Assess + β2 Lotsize + β3 Sqrft + β4 Bdrms + u
Where
Lotsize = the size of the lot
Sqrft = the square footage
Bdrms = the number of bedrooms
The R² from estimating this model using the same 88 houses is 0.829. Test at 1% level of significance that all partial slope coefficients are equal to zero.
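Part (i) uses the restricted/unrestricted form of the F statistic, F = [(RSS_R - RSS_UR)/m] / [RSS_UR/(n - k)]; part (ii) is an overall-significance test that can be computed straight from R². A sketch of part (ii) with the figures given (R² = 0.829, n = 88, k = 5 parameters):

    R2, n, k = 0.829, 88, 5
    F = (R2 / (k - 1)) / ((1 - R2) / (n - k))   # H0: all partial slopes are zero
    print(F)   # about 100.6; compare with the F(4, 83) critical value at 1%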
15. Based on the data for 1965-IQ to 1983-IVQ (n = 76), the following results were obtained in the regression model to explain personal consumption expenditure:
Ŷi = -10.96 + 0.93X2i - 2.09X3i
t = (-3.33) (249.06) (-3.09)   R² = 0.996
where, Y = PCE in billion rupees
X2 = the disposable income in billion rupees
X3 = the prime rate (%) charged by banks
(a) What is the marginal propensity to consume (MPC), i.e., the additional consumption expenditure from an additional unit of disposable income?
(b) Is the MPC statistically different from 1? Show the appropriate testing procedure.
(c) What is the rationale for the inclusion of the prime rate variable in the model? A priori, would you expect a negative sign for this variable?
(d) Is b3 statistically different from zero?
(e) Test the hypothesis that R2 = zero?
(f) Compute the standard error for each coefficient.
16. The monthly salary (WAGE, in hundreds of rupees), age (AGE, in years), number of years of experience (EXP, in years) and number of years of education (EDU) were obtained for 49 persons in a certain office. The estimated regression of wage on the characteristics of a person was obtained as follows (with t-statistics in parentheses):
Wage = 632.244 + 142.510EDU + 43.225 EXP - 1.913 AGE
(1.493) (4.008) (3.022) (- 0.22)
(i) The value of adjusted R² is 0.277. Using this information, test the model for overall significance.
(ii) Test the coefficients of EDU and EXP for statistical significance at the 1% level and the coefficient of AGE at the 10% level.
17. Using quarterly data for 10 years (n = 40) for the U.S. economy, the following model of demand for new cars was estimated:
NUMCARSi = B1 +B2 PRICEi + B3 INCOMEi + B4 INTRATEi +ui
Where
NUMCARS: Number of new car sales per thousand people
PRICE: New car price index
INCOME: Per capita real disposal income (in dollars)
The table below gives estimates of the coefficients and their standard errors:
(i) A priori, what are the expected signs of the partial slope coefficients? Are
the results in accordance with these expectations?
(ii) Interpret the various slope coefficients and test whether they are
individually statistically different from zero. Use 10% level of significance.
(iii) The adjusted R squared reported for this model is 0.758. Test the Model
for overall goodness of fit at 5% level of significance.
18. A multiple regression analysis between yearly income (Y in $1,000s), college grade point average (X1), age of the individuals (X2), and the gender of the individual (X3, zero representing male) was performed on a sample of 10 people, and the following results were obtained.
Analysis of Variance
Source of Variation    Degrees of Freedom    Sum of Squares    Mean Square
Regression                                   360.59
Error                                        23.91
20. Child Mortality Rate (CMR) for 25 countries was regressed on Female Literacy Rate (FLR) and per capita GDP (PCG). The following results were obtained:
CM̂Ri = 263.64 - 0.0056PCGi - 2.2316FLRi
21. You are given the following regression models. Compute adjusted R² for each model and hence decide which of these is a better fit:
n = 32 for each model. Also compare models B and D using the method of restricted least squares.
22. Based on a sample of 38 countries the following regression was obtained:
Ŷi = 414.4583 + 0.0523X1i - 50.0476X2i
se = (266.4583) (0.0018) (9.9581)
t = (1.1538) (28.2742) (-5.0257)
R² = 0.916, Adj R² = 0.9594, F = 439.22
Yi = 386.482 + 0.0732X1i
se = (268.421) (0.0049)
t = (1.4398) (14.9397)
R² = 0.8978, Adj R² = 0.8823, F = 436.81
23. How are the regression coefficients, TSS, RSS, ESS and the coefficient of determination affected in the case of a change of origin and a change of scale?
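Q23 can be checked numerically: multiplying Y by a constant c scales the coefficients by c and TSS/RSS/ESS by c², but leaves R² unchanged; adding a constant to X shifts only the intercept. A minimal Python sketch with made-up data:

    import numpy as np

    def ols_stats(X, Y):
        b2 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean())**2)
        b1 = Y.mean() - b2 * X.mean()
        e = Y - (b1 + b2 * X)
        tss = np.sum((Y - Y.mean())**2)
        rss = np.sum(e**2)
        return b1, b2, tss, rss, 1 - rss / tss

    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # hypothetical data
    Y = np.array([3.0, 5.0, 4.0, 8.0, 9.0])

    print(ols_stats(X, Y))          # original units
    print(ols_stats(X, 100 * Y))    # scale change in Y: coefficients x100, R-squared unchanged
    print(ols_stats(X + 10, Y))     # origin change in X: only the intercept changes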
Sales is sale revenue and Advert is advertising expenditure. Both Sales and Advert are
measured in terms of thousands of rupees.
3. Consider the following data on hourly wage rates (Y), Labour productivity (𝑋1 ) and
literacy rate (𝑋2 ) in a country ABV:
𝑌 90 72 54 42 30 12
𝑋1 3 5 6 8 12 14
𝑋2 16 10 7 4 3 2
4. Using time series data for 1979 to 2009 for a certain economy, the following model
of demand for money was estimated.
𝑀𝐷𝑖 = 𝐵1 + 𝐵2 𝑌𝑖 + 𝐵3 𝐼𝑁𝑇𝑅𝐴𝑇𝐸𝑖 + 𝑢𝑖
Where
The table below has estimates of the coefficients and their standard errors
Y 0.530 0.112
13) d. 14) b. 15) d. 16) b. 17) b. 18)a. 19) d. 20) c. 21) d. 22) b
CHAPTER-3
d. β2(1/Y)
10. When comparing r2 of two regression models, the models should have the same
a. X variables
b. Y variables
c. Error term
d. Beta coefficients
TRUE/FALSE
Practical Questions
1. The OLS Regression based on the log-linear data gave the following results:
Model B:
ln Ŷt = 0.7774 - 0.2530 ln Xt
se = (0.0152) (0.0494)   r² = 0.7448
Where Y= cups of coffee consumed per person per day
X= the price of coffee in rupees per cup.
3. Using 21 annual observations, the following equation for demand for a good was estimated using OLS:
ln Ŷt = 1.71 - 0.35 ln X1t + 0.47 ln X2t    R² = 0.876, R̄² = 0.843
se = (0.059) (0.083) (0.083)
Where,
Y = No of units demanded
X1 = Price of goods ( Rs. Per unit)
X2 = Consumer’s income
(i) Test at α =5% whether the good has unit income elasticity against the
alternative that the demand for the good is income inelastic.
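In the log-log model of Q3 the coefficient on ln X2 is the income elasticity directly, so the test in (i) is H0: β = 1 against H1: β < 1 (income inelastic), a one-tailed t test with n - k = 18 degrees of freedom. A sketch of the arithmetic (critical value from t tables):

    b, se_b = 0.47, 0.083
    t_stat = (b - 1) / se_b           # about -6.4
    t_crit = -1.734                   # one-tailed 5% critical value, 18 df
    print(t_stat, t_stat < t_crit)    # True: reject unit elasticity in favour of inelastic demand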
4. For the data for 46 states in the USA for 1992, the following regression result was obtained:
ln Ĉ = 4.30 + 1.34 ln P + 0.17 ln Y
se = (0.91) (0.32) (0.20)   R̄² = 0.27

ln Yi = B1 + B2 ln Li + B3 ln Ki + ui
where, Y = Output
L = Labor input
K = Capital input
(i) What restriction has been imposed on the Cobb-Douglas production function
to obtain this estimated production function?
(ii) How will you test the validity of this restriction?
(i) Interpret the equation. Make appropriate hypothesis for signs of coefficient
and test your hypothesis.
(ii) What are the elasticity of salary with respect to education and experience?
(iii) If we run a linear regression instead of log-linear regression then how would
the interpretation change?
3. To determine how expenditure on services (Y) behaves if total personal expenditure (X) rises by a certain percentage, the following regression model was obtained:
Ŷt = -12564.8 + 1844.22 ln Xt
se = (916.351) (114.32)   r² = 0.881, n = 20
4. Consider the following regression for cross-sectional data for 55 rural households in India. The regressand in this equation is expenditure on food and the regressor is total expenditure (a proxy for income):
FEX̂Pt = -1283.912 + 257.27 ln(TEXP)
t = (-4.3848)* (5.6625)*   r² = 0.3769
Note: *denotes an extremely small p-value.
RECIPROCAL MODEL
1. Based on the annual percentage change in wage rates (Y) and the unemployment rate (X) for the United Kingdom for the period 1950-1966, the following results were obtained:
Ŷi = -1.4282 + 8.02743 (1/Xi)
se = (2.0675) (2.8478)   r² = 0.3849
(i) What is the interpretation of 8.02743?
(ii) Test the hypothesis that the estimated slope coefficient is not different from
zero. Which test will you use?
(iii) How would you use the F test to test the preceding hypothesis.
(iv) Given that Ȳ = 4.8 percent and X̄ = 10.5 percent, what is the rate of change of Y at these mean values?
(v) What is the elasticity of Y with respect to X at these mean values.
(vi) How would you test the hypothesis that the true r² = 0?
2. The percentage change in the index of hourly earnings (Y) and the civilian unemployment rate (X) for the United States for the years 1958 to 1969 give the following regression model:
Ŷi = -0.2594 + 20.5880 (1/Xi)
t = (-0.2572) (4.3996)   r² = 0.6594
(i) What is the wage floor?
(ii) Interpret the slope term.
(iii) Test the significance of regression coefficients.
(iv) Interpret r2.
(v) The linear model for the same data is
Ŷi = 8.0147 - 0.7883Xi
t = (6.4625) (-3.2605)   r² = 0.5153
(a) Is the positive slope in the reciprocal model analogous to the negative slope in the linear model?
(b) Compare the slope terms of two models.
(c) Compare r2 for two models.
Polynomial Regression
1. The following regression considers the relationship between lung cancer and
smoking for 43 states in India:
Yi = β1 + β2Xi + β3X2i + ui
Where, Y = number of deaths from lung cancer.
X = number of cigarettes smoked.
Results are as follows:
Predictor Coeff. Std. error t p
Constant -6.910 6.193 -1.12 0.271
X 1.5765 0.4560 3.46 0.001
X2 -0.019 0.008 -2.35 0.024
R2 = 0.564, ADJ. R2 = 0.543
Regression sum of squares = 403.89
Residual sum of squares = 311.69
F = 26.56, p = 0.00
(i) Interpret the above regression
(ii) Test the individual significance of the regression coefficients. Which test do you use and why? (Use α = 5%)
(iii) Construct an ANOVA table for the problem and test for the overall
significance of the model. (Use α =5%)
2. The OLS regression results based on the Cost (Y) and Output (X) are as follows:
Ŷi = 141.7667 + 63.4776Xi - 12.9615Xi² + 0.9396Xi³
se = (6.3753) (4.7786) (0.9857) (0.0591)
R² = 0.9983, n = 10
(i) Does this model represent the cost function; explain by testing the
coefficient in the model.
(ii) Test the significance of the regression coefficient.
(iii) Construct an ANOVA table for the problem and test for the overall
significance of the model. (Use α =5%)
(iv) Find the average and marginal cost curves.
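For part (iv): with a cubic total cost function Ŷ = b1 + b2X + b3X² + b4X³, marginal cost is the derivative MC = b2 + 2b3X + 3b4X² and average cost is AC = Ŷ/X. A short sketch that evaluates both at a few output levels using the printed coefficients:

    b1, b2, b3, b4 = 141.7667, 63.4776, -12.9615, 0.9396

    def total_cost(x):
        return b1 + b2*x + b3*x**2 + b4*x**3

    def marginal_cost(x):
        return b2 + 2*b3*x + 3*b4*x**2

    for x in (1, 5, 9):
        print(x, round(total_cost(x) / x, 2), round(marginal_cost(x), 2))  # AC and MC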
1. Based on monthly data from January 1978 to December 1987, the following
regression results were obtained:
Model 1: Ŷt = 0.00681 + 0.758Xt    r² = 0.4406
t = (0.262) (2.80)
p = (0.798) (0.0186)
Model 2: Ŷt = 0.762Xt    r² = 0.4368
t = (2.954)
p = (0.0131)
Where, Y = monthly rate of return on Texaco common stock in %.
X = monthly market rate of return in %
(i) What is the difference between two regression models?
(ii) Would you retain the intercept term in model 1? Why or why not?
(iii) How would you interpret the slope term in the two models?
2. The following two models are based on the returns on a future fund (Y) and the return on the market portfolio (X) for the period 1971-1980:
Model A: Ŷi = 1.2797 + 1.0691Xi
se = (7.6886) (0.2383)   r² = 0.7115
Model B: Ŷi = 1.0899Xi
se = (0.1916)   raw r² = 0.7825
(i) Test the significance of the intercept term in Model A. Does this justify Model B?
(ii) If the intercept term is absent, the slope term can be estimated with far greater precision. Explain with the help of the above models.
(iii) Can we compare the r2 of two models?
1. The relationship between infant mortality rate (IMR) and the expenditure on
immunization programmes for children (IMMUN) in lakhs of rupees for 63 districts
of India is postulated by the following two alternate models :
The R² for Model A and Model B are obtained as 0.6152 and 0.8254 respectively. Use a suitable test at 5% significance level to decide which model you would prefer: restricted or unrestricted. State the null and alternative hypotheses clearly.
2. Consider the following Cobb Douglas production function estimated for Taiwan for
the period: 1965-1974.
R2 = 0.9951
RSSUR = 0.0136
Lt = labour at time t,
Kt = capital at time t
In = natural logarithms:
3. Consider the following model of monthly rents paid on rental units in industrial hub
cities of an economy:
where
i. How will you test the hypothesis that city population and social infrastructure
have no significant joint effects on monthly rents? Explain the steps involved in
the test with reference to the above model.
ii. Suppose b1 is estimated as 0.066. What is wrong with the statement: "A 10% increase in population is associated with a 6.6% increase in monthly rent"? [Eco(H) 2014]
4. Find the slope and elasticity of Y with respect to X for the following functional forms:
a) ln Y = B1 - B2(1/X)
b) Y = B1 + B2 ln X. [Eco(h) 2013]
i. Establish the relationships between the two sets of regression coeffcients and
their standard errors.
ii. Is the R2 different between the two models? [Eco(h) 2019]
8. Data is available on per unit cost (Y in Rs) of a manufacturing firm over a 20-year
period, and index of its output (X). Following results were obtained:
i. Interpret the signs of the two slope coefficients in the above regression.
ii. At what level of output will the average cost function be minimum?
iii. Compute adjusted R². Is adjusted R² always less than R²? Justify your answer.
iv. Test that the variance of per unit cost (σ²) over this 20-year period equals 20, against the alternative that it is not equal to 20. Use 5% level of significance.
v. Would your answer remain the same if a 95% confidence interval is constructed
to test the same hypothesis? Construct the interval and justify your answer.
[Eco(h) 2023]
9. Consider the model
𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖
Where,
a) How will the estimated intercept and slope coefficients change if the unit of
measurement of income is changed to Rs lakhs.
b) Suppose the researcher thinks that usually consumption increases with income
but at a decreasing rate and consumption increases with age. How would he
modify the model to see whether the data supports his hypothesis?
c) Suppose the researcher wants to assess the relative importance of age and
income on long term consumption, what model should he estimate? Explain.
[Eco(h) 2021]
11. Consider the Cobb-Douglas production function: [Eco. (H) III Sem. 2017(ER)]
Qt = e^α Kt^β Lt^γ e^(ut)
Where, 𝑄 denotes output, K denotes capital input and L denotes labour input and e =
2.71828.
(a) Formulate a model that can be used to estimate the parameters a, 𝛽 and 𝛾 using
ordinary least squares.
(b) Show that this model implies a constant partial elasticity of output with respect to
labour but a variable marginal effect of labour on output. [Eco(h) 2020]
12. The following regression model was estimated using annual time-series data for the period 1990-2012 for a certain country:
ln Ŷt = b1 + b2 ln X2t + b3 ln X3t
𝑋2 0.45 0.025
𝑋3 -0.377 0.063
Where
(i) How will you test the joint hypothesis that potato consumption is not affected
by the prices of cabbage and cauliflower? Explain the steps involved in the test
with reference to the above model.
(ii) If the estimated value of b, is 200, it means "a 1% increase in income is
associated with a 200% increase in per capita consumption of potatoes;
everything else kept constant." Is the above interpretation correct ? Explain.
14. Based on the data on GNP and money supply for the period 1965-2006 for India, the following regression results were obtained by regressing GNP (in billions of rupees) on money supply (in billions of rupees) for alternate models:
(11.40) (108.93)
(75.85) (12.07)
(14.45) (16.84)
(7.04) (55.58)
15. Using annual time-series data for the company 'Pure Juice' for the period 2000-2016, the following equation was obtained:
ln Ŷt = 1.2028 + 0.0214t
se = (0.0233) (0.0025)
Where Yt = revenue of the company in crores at time t and ln indicates natural log.
ln Ŝt = 3.6889 + 0.583t
(i) What is the estimate of the instantaneous and compound growth rate?
(ii) What is the estimate of 𝑆0 ?
(iii) What will be the elasticity of sales with respect to time?
(iv) Suppose the researcher modifies the above equation and estimates the
following regression: 𝑆̂𝑡 = 5.6731 + 2.7530𝑡 Interpret the model.
(v) Compute elasticity of sales with respect to time for the model in part iv. Compare
your results with the answer obtained in part iii. [Eco(h) 2021]
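For Q15(i), the slope in a log-lin trend regression is the instantaneous (continuous) growth rate, while the compound growth rate is e^b2 - 1. A sketch using the printed estimate b2 = 0.0214:

    import math

    b2 = 0.0214
    instantaneous = b2 * 100               # about 2.14% per period
    compound = (math.exp(b2) - 1) * 100    # about 2.16% per period
    print(instantaneous, compound)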
17. Consider the following functional form :
Y = B1 + B2X + B3(1/X)
(i) Derive the expression for the marginal effect of Y with respect to X.
(ii) Derive the expression for elasticity of Y with respect to X and express it in terms
of X only.
(iii) Assume, without loss of generality, B1 = 0 and B2 > 0, B3 > 0. For what value of X will this function attain a minimum? Draw a rough sketch of the function. [Eco(h) 2017]
18. In order to test whether the developing economies are catching up with the advanced
economies or not, a researcher regressed the growth rate of GDP of a country on its
relative per capita GDP for 119 developing countries. The relative per capita GDP of a
country is measured as a ratio of the country's per capita GDP to the GDP per capita
of USA. The regression results were obtained as under (standard errors are reported
in parentheses):
19. In each of the following cases suggest a suitable functional form to explain the
relationship between dependent variable and the explanatory variable. Also justify
your choice and interpret the coefficients in each case.
(i) Cobb Douglas production function
(ii) Rate of growth of population in an economy
(iii) Total cost function of a firm
(iv) Engel Expenditure Function
(v) Phillips Curve
(vi) Average salary earned by the employee conditional upon the gender of the
employee. [Eco(h) 2020]
20. Suppose a third-degree polynomial regression was fitted to a cost-output model for
18 firms and the following results were obtained:
𝑠𝑒 = (6.37)(4.78)(2.98)(6.98)
i. If cost curves are to have U-shaped average and marginal cost curves, then what
are your a priori expectations about the intercept and slope estimators? Check if
these a priori expectations are satisfied for the given model.
ii. Interpret the estimated slope coefficient of Q' and test its significance at 10% level
of significance. [ECO(H)2024]
21. For a particular cafeteria, the following equation (a double-log model) was estimated using yearly data on cups of chocolate flavoured cold coffee sold:
log(Q̂t) = 1.534 + 0.250 log(Pt) - 0.025 log(Pt**)
se = (0.2001) (0.240) (0.016)
Where
𝑃𝑡∗∗ = price (in rupees) per cup of vanilla flavoured cold coffee
t = 1991-2023
i. What can you say about the own-price and cross-price elasticity of demand for
chocolate flavoured coffee? Test the hypothesis that the own price elasticity of
demand of chocolate flavoured coffee is unitary elastic at 10% level of significance.
ii. Draw the ANOVA table for the regression equation. Also calculate.
[ECO(H)2024]
CHAPTER-4
DUMMY VARIABLE
OBJECTIVE TYPE QUESTION
Choose the Best alternative for each question
1. Dummy variables classify the data into
a. Inclusive categories
b. Mutually exclusive categories
c. Qualitative categories
d. Quantitative categories
4. For question (3) above, given Yi=β1+β2D2i +β3D3i+ui, β1 represents the mean
annual salary of professors working in
a. Fully government aided colleges
b. Partially government aided colleges
c. Private colleges
d. All three colleges
5. For question (3) above, mean annual salary of professors working in fully
government aided colleges is given by
a. β1
b. β1 + β2
c. β1 + β3
d. β2 + β3
6. In trying to test whether females earn less than their male counterparts, a researcher estimated the following model: Yi = β1 + β2Di, where Y = average earnings per day in Rs., D = 1 for females and 0 otherwise. β2 here refers to the
a. Average earnings of male
b. Average earnings of female
c. Differential intercept coefficient for male earnings
d. Differential intercept coefficient for female earnings
9. The process of removing the seasonal component from a time series is known as
a. Seasonalization
b. Seasonality
c. Deseasonalization
d. Seasonal trend testing
TRUE/FALSE
Practical Questions
Ŷi = 3176.833 - 503.1667Di
se = (233.0446) (329.5749)
r² = 0.1890, n = 12
Where
Yi = Food expenditure (in Rs.)
Di = 1 for female
0 for male
(i) Find the average food expenditure of males and females.
(ii) Is there a significant difference in the average food expenditure of males and females?
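A sketch of the arithmetic behind (i) and (ii): with a single intercept dummy, the male mean is b1 and the female mean is b1 + b2; the test in (ii) is a t test on the differential intercept (two-tailed critical value for n - 2 = 10 df taken from t tables):

    b1, b2, se_b2 = 3176.833, -503.1667, 329.5749
    mean_male = b1                 # D = 0
    mean_female = b1 + b2          # D = 1
    t_stat = b2 / se_b2            # about -1.53
    t_crit = 2.228                 # two-tailed 5% critical value, 10 df
    print(mean_male, mean_female, t_stat, abs(t_stat) > t_crit)  # difference not significant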
ANOVA Models With One Qualitative Variable Having more than Two Categories.
1. The data on average salary (in dollars) of public school teachers in 50 states and the
District Of Columbia for the year 1985 was available. These 51 areas are classified
into three geographical regions: (1) Northeast and North Central (21 states in all)
(2) South (17 states in all), and (3) West (13 states in all). The following regressions
model was obtained from the given data:
Ŷi = 26,158.62 - 1734.473D2i - 3264.615D3i
se = (1128.523) (1435.953) (1499.615)
t = (23.1759) (-1.2078) (-2.1776)
p = (0.0000)* (0.2330)* (0.0349)*   R² = 0.0901
1. From a sample of 528 persons in May 1985, the following regression results were obtained:
Ŷi = 8.8148 + 1.0997D2i - 1.6729D3i
se = (0.4015) (0.4642) (0.4854)
t = (21.9528) (2.3688) (-3.4462)
p = (0.0000)* (0.0182)* (0.0006)*
1. The following regression results were obtained for 22 individuals (standard errors in parentheses):
Ŷi = 1506.244 - 228.9868Di + 0.0589Xi
se = (188.0096) (107.0582) (0.0061)
R² = 0.9284
Where,
Y = expenditure on food ($)
Di = Gender dummy variable = 1 for female
= 0 for male
Xi = after tax income ($)
(i) Holding after tax income constant, what is the difference between mean food
expenditure of males and females at the 5% level of significance? Is the
difference statistically significant? How can you say so?
(ii) What is the marginal propensity of food consumption holding gender
difference constant?
(iii) Write and draw the regression equation for males and females separately.
2. The following regression was estimated using data from a sample of 15 houses
(standard errors are given in brackets) :
Di = 0 for house i, if it does not face a park = 1 for house i, if it faces a park.
3. A person holding two or more jobs, one primary and one or more secondary, is
known as a moonlighter. Based on a sample of 318 moonlighters, the following regression is obtained, with standard errors in parentheses:
Ŵm = 37.07 + 0.403W - 90.06race + 75.51urban + 47.33hisch + 113.6region + 2.26age
se = (0.06) (24.47) (21.6) (23.42) (27.62) (0.94)
Where,
Wm = moonlighting wage
W = primary wage
Age = age in years
Race = 0, if white, 1 if non – white,
Urban = 0 if non urban, 1 if urban
Region = 0 if non west, 1 if west
Hisch = 0 if non graduate, 1 if high school graduate
Derive the wage equations for the following type of moonlighters
(i) White, non urban, western resident and high school graduate.
(ii) Non white, urban, non western resident and non high school graduate.
(iii) White, non urban, non western resident, and high school graduate.
4. You are given the following estimated double log model for cigarette consumption in
Turkey.
The results are based on 29 observations, for the period 1960 – 1988. The variables are
described
as follows:
lnQ = logarithm of cigarette consumption per adult (dependent variable)
lnY = logarithm of per capita GNP in 1968 prices (in Turkish Liras)
lnP = logarithm of real price of cigarettes (in Turkish Liras per kg)
D82 = 1 for 1982 onward 0 before that
D86 = 1 for 1986 onward 0 before that
(i) What is the numerical value of the elasticity of demand for cigarettes with
respect to income for the period 1969 – 81? For the period 1986 – 88?
(ii) What is the numerical value of the elasticity of demand for cigarettes with
respect to price for the period 1982 – 85?
Where,
X4 = years of experience.
X1, X2 and X3 are the dummy variables representing the education level. Base case is
primary school. X1 for high school, X2 for higher secondary and X3 for graduate school.
i. If a salesperson has a graduate degree, how much will sales change according to
this model compared to a person with a primary education?
ii. How much in sales will a counter person with 10 years of experience and a high school education generate?
iii. Why do we need three dummy variables to use 'education level' in this regression equation?
Interaction Dummies
1. (i) You are told that the monthly wage W (in rupees) earned by a person depends on his age A (in years). Write an appropriate model to study the effect of age on monthly wages.
(ii) Suppose it has been found that wages also depend on
Area of residence (Urban/ nonurban)
Level of education (Post graduate/ graduate)
Modify your model in part (i) above to include these qualitative variables.
(iii) Will your answer change if you are told that a person’s area of residence also
determines his level of education? What will be the regression equation for
urban post graduates?
2. Using data for 526 individuals the following model of wage determination was estimated:
LOG (W)I = B0 + B1D1 +B2EDUi + B3(D*EDU)i + ui
Where,
W = Daily wages in rupees
D = Dummy variable for gender, D = 1 for females and 0 for males
EDU = years of education
D*EDU = Interactive dummy
The table below gives estimated regression coefficients and their standard errors:
Estimates of Coefficients Standard errors
D -0.2270 0.1680
(a) Write the regression equations relating LOG (W) to EDU for males and females
separately.
(b) The returns to education are measured by the percentage increase in wages due to an extra year of education. Find the returns to education for males and females.
(c) Is the difference between returns to education for males and females statistically
significant at 5% level of significance?
3. To study the rate of growth of population in an economy over the period 1970 – 1992 the
Following models were estimated:
Model I:
ln p̂opt = 4.73 + 0.024t
t = (781.25) (54.71)
Model II:
ln p̂opt = 4.77 + 0.015t - 0.075Dt + 0.011(Dt·t)
t = (2477.92) (34.01) (-17.03) (25.54)
where,
pop = population in millions
t = trend variable
Dt = 1 for 1970 – 1980, 0 otherwise (for 1980 – 1992)
(i) In model I, what is the rate of growth of population over the sample period.
Differentiate between instantaneous and compound rate of growth.
(ii) Are the population growth rates statistically different pre and post 1980?
(iii) If they are different, then what are growth rates for 1970 – 79 and 1980 – 92?
Yi = B0 + B1 Xi + B2 D2i + B3 D3i + ui
X: Years of service
D2 = 1 if Harvard MBA
0 otherwise
D3 = 1 if Wharton MBA
0 otherwise
What is the interpretation of B4 and B5. If both of these are statistically significant then
which model will you use and why?
Chow Test
1. For the data of savings and income for the US economy the following model is being
estimated:
Savings = β1 + β2 (Income)
We have the following regression results:
For the time period 1970 – 85
RSS = 1785.032 df = 10
For the time period 1985 – 95
RSS = 10,005.22 df = 12
For the time period: 1970 – 95
RSS = 23,248.30 df = ? (find out)
Has the saving income relationship changed pre 1985 as compared to post 1985? Use
Chow test to find out (Given critical F value for given dof at 1% level of significance =
7.72)
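A sketch of the Chow test arithmetic for Q1, using the RSS figures given above (k = 2 parameters per regression; the unrestricted degrees of freedom are the two sub-sample dfs combined):

    rss_pooled = 23248.30
    rss_1, df_1 = 1785.032, 10
    rss_2, df_2 = 10005.22, 12
    k = 2

    rss_ur = rss_1 + rss_2
    F = ((rss_pooled - rss_ur) / k) / (rss_ur / (df_1 + df_2))
    print(F)   # about 10.69 > 7.72, so the savings-income relationship differs across the two periods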
2. Suppose we have the following relationship between savings and income from 1970-1995:
Ŷi = 1.0161 + 152.4786Di + 0.0803Xi + 0.0655(DiXi)
se = (20.1648) (33.0824) (0.0144) (0.0159)
R² = 0.8819
1. A researcher wants to find out what are the factors which determine the number of
installs (I) of an application (app) from a famous app store. Size in Mbs (S), Reviews
in 000s (Re), Ratings (0 to 5) (Ra), Price in 'Rs (P). She ran the following regressions:
R2= 0.734
df = 156
se = (37.39) (0.0187)
df = 156. R2 = 0.806
How would you interpret this model? Explain the shape of the curve.
iv. What would be the slope and elasticity of number of installs with reference to the
equation given in above?
v. How would the equation in (iii) change if we suggest that number of app
installations varies with respect to the kind of cellular phone used by the
customer, that is android or ios phones? [Eco(h) 2021]
iv) How would the model in part (ii) be modified if the objective is to examine
whether the marginal effect of experience is gender specific?
v) How would the regression in part (i) be modified if the qualitative variables interact with each other? [Eco(h) 2022]
3. The purpose of this empirical exercise was to analyze the impact of takeovers on CEO
compensation. The model of interest was:
Where:
The model was estimated from data on 34 firms. The results are summarized in the
following table:
D 996.8745 111.9876
4. The following model was estimated for United States from 1958 to 1977 :
Ŷt = 10.078 - 10.337Dt - 17.549(1/Xt) + 38.173Dt(1/Xt)
R2= 0.8787
D = 1 for 1958-1969
= 0 if otherwise
N = 706, R² = 0.123, R̄² = 0.117
educ is education measured in years and age is age of the individual in years.
i. Is there any evidence that men sleep more than women? How strong is the
evidence?
ii. Interpreting the coefficients of the age and age squared variables explain what
does the researcher have in mind about the relation between sleep and age.
iii. Is there a statistically significant trade-off between working and sleeping? How
would the regression model have to be modified if there is reason to believe that
this trade off might be gender specific? [Eco(h) 2020 ]
6. Data was collected on 344 corporate executives to find out the effect of MBA degree
and work experience on their salary. The following model was estimated :
R2 = 0.8968
= 0 otherwise
i. Write the regression equations for female MBA executives and male MBA
executives separately.
ii. Find the mean income level for the reference category and interpret it.
iii. Test the statistical significance of differential intercept coefficient between female
MBA executives and Male MBA executives at 5% level of significance.
iv. Interpret the coefficient of D1 * X1.
v. Now suppose out of this sample of 344 executives, 48 are female MBA executives
and 156 are male MBA executives. To find out the relation between income earned
and work experience, we run three regressions and the results obtained are as
follows:
Regression A: 156 male MBA executives, RSSA = 3.701
Regression B: for 48 female MBA executives, RSSB = 4.803
Pooled Regression: with 204 (156male + 48female) executives, RSS = 9.7602
Using the above data, do the Chow test at 10% level of significance to check whether there is a significant improvement in doing a pooled regression as compared to the two subsample regressions. [Eco(h) 2021]
7. Demographic data from 126 countries is obtained for the year 2017. It is hypothesized
that life expectancy (Y) is dependent on number of under five deaths (X2), polio
immunization coverage (D), Per capita Govt. Exp. on Health Care (X3) (in Rs crores),
Per Capita GNI (in Rs crores) (X4) and Average number of years of Schooling (X). Polio
immunization coverage = 1 if yes and 0 otherwise.
MODEL 1:
MODEL 2:
𝑠𝑒 = (0.406)(0.465)
8. A real estate Company used housing sales data to estimate the effect that the
pandemic lockdown had on demand for sub-urban real estate
Where Y= Share of sub-urban housing deals during a month, X= price per square metre
of sub-urban real estate, t = time,
i. Write the regression functions for lockdown months and non- lockdown months.
ii. How would you test the hypothesis that lockdown had no impact on price-
elasticity for sub-urban housing?
iii. Rewrite the regression result if Dummy assignment is switched as below:
Dt=0, if t is a lockdown month
iv. Another investigator believes that the relationship between the two variables X
and Y is given by Yt = 𝛽1 + 𝛽2 𝑋𝑡 + 𝜀𝑡 . Given a sample of n observations, the
investigator estimates 𝛽2 by calculating it as the average value of Y divided by the
average value of X. Discuss the properties of this estimator. What difference would
it make if it could be assumed that 𝛽1 = 0?
v. What will be the consequence for the Gauss Markov theorem if there are errors in
measuring Y? [Eco(h) 2023]
9. Regression results for Morena savings-income data are presented for the period
1970-1995,
𝑅2 = 0.8819
Where
𝑌𝑡 = savings
𝑋𝑡 =income
= 0 otherwise
i. Interpret the regression results and obtain the regressions or the two time
periods, that is, 1970-1981 and 1982-1995
ii. What do you infer by the statistical significance of the differential intercept and
the differential slope coefficients? [Eco(h) 2014]
10. (i) In the regression model ln Yi = B1 + B2Di + ui, where D is a dummy regressor, prove
that the relative change in Y when the dummy changes from 0 to 1 can be obtained as
(e^b2 − 1),
where e is the base of the natural logarithm and b2 is the ordinary least squares estimator of
the slope coefficient.
(ii) Suppose you have quarterly data on air-conditioner sales. Explain how you can obtain
average sales of air-conditioners for the four quarters separately using the method of
dummy variables. [Eco(h) 2013]
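A small hedged sketch of both parts, using made-up numbers (the 0.08 coefficient and the quarterly sales series are purely illustrative, not from the question):

import numpy as np

# (i) semilog dummy model: relative change in Y when D goes from 0 to 1 is e^b2 - 1
b2 = 0.08                                   # hypothetical OLS estimate
print(np.exp(b2) - 1)                       # about 0.083, i.e. roughly an 8.3% change

# (ii) regressing sales on four quarterly dummies (and no intercept) returns the
# average sales for each quarter directly
sales = np.array([10., 14., 9., 12., 11., 15., 8., 13.])   # two years, Q1..Q4
quarter = np.tile(np.arange(4), 2)
D = (quarter[:, None] == np.arange(4)).astype(float)        # one dummy column per quarter
coef, *_ = np.linalg.lstsq(D, sales, rcond=None)
print(coef)                                 # equals the quarterly means of sales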
11. Using data for 120 individuals, the following model of wage determination was
estimated:
where
R2 = 0.4540
(a) Write the estimated regression equation for postgraduates and undergraduates
separately.
(b) Test the statistical significance of dummy variable at 5% level of significance. What
conclusion can you draw from this test?
(c) If PGRAD was defined to take values (0, 2) instead of (0, 1), will the estimated value of
B3 and its standard error change? What about its statistical significance? [Eco(h) 2016]
12. Suppose that earnings of individuals are dependent on whether they are skilled
workers and their work experience over the years.
(i) Define dummy variables to capture whether workers are skilled or not. Take
workers being unskilled as the reference category.
(ii) Develop a model which is linear in parameters that shows earnings of an
individual as a function of work experience and whether they are skilled. Interpret
your model.
(iii) Now assume that there is an interaction between skill of the workers and their
work experience. How would the model in (ii) change? Interpret the new model.
[Eco(h) 2019]
13. Using data for 110 schools the following model of monthly expenditure incurred by a
school was estimated .
𝑡 = (2.864)(−1.06)(13.466)(−2.04)
i. Write the regression equation for government school and private school
separately. Also, interpret the slope coefficient of two categories.
ii. Check the statistical significance of differential intercept coefficient and slope
drifter at 10% level of significance. Based on the conclusion of these tests, draw
the population regression lines for the government and private schools.
iii. Test the hypothesis that the population error term is normally distributed at 1%
level of significance when it is given that for the residuals, skewness is 0.28 and
the value of kurtosis is 2. Also, in classical linear regression model, what is the
rationale for the assumption that population error term follows normal
distribution?
[Eco(h) 2024]
1) b  2) a  3) c  4) c  5) b  6) d  7) d  8) c  9) c
CHAPTER -5
MULTICOLLINEARITY
OBJECTIVE TYPE QUESTION
1. One of the assumptions of CLRM is that the number of observations in the sample
must be greater than the number of
a) Regressors
b) Regressands
c) Dependent variable
d) Dependent and independent variables
2. Perfect multicollinearity between variables X1, X2 and X3 can be expressed using
constants λ1, λ2 and λ3 such that
a) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 = 0, where 𝜆1 , 𝜆2 and 𝜆3 are all equal to zero
simultaneously
b) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 + 𝑣 = 0 where 𝑣 is the stochastic term and 𝜆1 , 𝜆2 and
𝜆3 are not all equal to zero simultaneously.
c) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 = 0; where 𝜆1 , 𝜆2 and 𝜆3 are not equal to zero
simultaneously.
d) 𝜆1 𝑋1 + 𝜆2 𝑋2 + 𝜆3 𝑋3 + 𝑣 = 0 where 𝑣 is the stochastic term and 𝜆1 , 𝜆2 and
𝜆3 are all equal to zero simultaneously.
3. In a regression model Yi = β1 + β2X2i + β3X3i + ui, the F-test is statistically
significant at less than 5 percent level of significance but the coefficients β2 and β3
are seen to be statistically insignificant. This means that the
a) Two coefficients are highly correlated
b) Two variables are highly correlated
c) Two variables are perfectly correlated
d) Two variables are not correlated
4. If, for a set of explanatory variables X2 and X3, the coefficient of correlation is
equal to 1, this means that between X2 and X3 there exists
a) No collinearity
b) Low level of collinearity
c) Perfect collinearity
d) Very high collinearity
5. If there exists high multicollinearity, then the regression coefficients are
a) Determinate
b) Indeterminate
c) Infinite values
d) Small negative value
6. If multicollinearity is perfect in a regression model then the regression coefficients
of the explanatory variables are
a) Determinate
b) Indeterminate
c) Infinite values
d) Small negative value
7. If multicollinearity is perfect in a regression model the standard errors of the
regression coefficients are
a) Determinate
b) Indeterminate
c) Infinite values
d) Small negative value
8. The coefficients of explanatory variables in a regression model with less than
perfect multicollinearity cannot be estimated with great precision and accuracy.
This statement is
a) Always true
b) Always false
c) Sometimes true
d) Nonsense statement
9. In a regression model with multicollinearity being very high, the estimators
a) Are unbiased
b) Are consistent
c) Standard errors are correctly estimated
d) All of the above
10. Multicollinearity is essentially a
a) Sample phenomenon
b) Population phenomenon
c) Both a and b
d) Either a or b
11. Which of the following statements is NOT TRUE about a regression model in the
presence of multicollinearity
a) t ratio of coefficients tends to be statistically insignificant
b) R2 is high
c) OLS estimators are not BLUE
d) OLS estimators are sensitive to small changes in the data
12. Which of these is NOT a symptom of multicollinearity in a regression model
a) High R2 with few significant t ratios for coefficients
b) High pair-wise correlations among regressors
c) High R2 and all partial correlation among regressors
d) VIF of a variable is below 10
13. A sure way of removing multicollinearity from the model is to
a) Work with panel data
b) Drop variables that cause multicollinearity in the first place
c) Transform the variables by first differencing them
d) Obtaining additional sample data
14. Assumption of 'No multicollinearity' means the correlation between the regressand
and regressor is
a) High
b) Low
c) Zero
d) Any of the above
15. An example of a perfect collinear relationship is a quadratic or cubic function. This
statement is
a) True
b) False
c) Depends on the functional form
d) Depends on economic theory
16. Multicollinearity is limited to
a) Cross-section data
b) Time series data
c) Pooled data
d) All of the above
17. Multicollinearity does not hurt if the objective of the estimation is
a) Forecasting only
b) Prediction only
c) Getting reliable estimation of parameters
d) Prediction or forecasting
18. As a remedy to multicollinearity, doing this may lead to specification bias
a) Transforming the variables
b) Adding new data
c) Dropping one of the collinear variables
d) First differencing the successive values of the variable
19. F test in most cases will reject the hypothesis that the partial slope coefficients are
simultaneously equal to zero. This happens when
a) Multicollinearity is present
b) Multicollinearity is absent
c) Multicollinearity may be present OR may not be present
d) Depends on the F-value
TRUE/FALSE
1. Despite perfect multicollinearity , OLS estimators are BLUE .
10. Consider the model: Yi = B1 + B2Xi + B3Xi² + B4Xi³ + ui. Since Xi² and Xi³ are
functions of Xi, there is perfect multicollinearity.
Practical Questions
1. In the regression model
Yi = A1 + A2 X2i + A3 X3i + Ui
Show that we cannot uniquely estimate the original parameters A1, A2 and A3.
2. Let Y be the output. X2 be unskilled labour and X3 be skilled labour in the following
relationship:
Can the parameters of the model be uniquely estimated by ordinary least squares?
Explain.
Y -10 -8 -6 0 2 4
X2 1 2 3 4 5 6
X3 1 3 5 7 9 11
Yi = B1 + B2 X2i + B3 X3i + ui
(i) Explain, without solving, why you cannot estimate the three unknown
parameters of the model.
(ii) Are any linear functions of these parameters estimable? Show the necessary
derivations.
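A minimal sketch, assuming the data in the table above are entered exactly as given, of why the three parameters cannot be estimated: X3 is an exact linear function of X2 (X3 = 2X2 − 1), so the regressor matrix is rank-deficient.

import numpy as np

X2 = np.array([1, 2, 3, 4, 5, 6], dtype=float)
X3 = np.array([1, 3, 5, 7, 9, 11], dtype=float)
X = np.column_stack([np.ones(6), X2, X3])      # [intercept, X2, X3]

print(np.allclose(X3, 2 * X2 - 1))             # True: exact linear dependence
print(np.linalg.matrix_rank(X))                # 2, not 3: B1, B2, B3 are not separately estimable
# only linear combinations such as (B2 + 2*B3) and (B1 - B3) can be estimated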
Yi = B1 + B2X2i + B3X3i + ui
In order to check for the presence of multicollinearity, the auxiliary regression is run and the
results are as follows:
7. The consumption expenditure of families (C) is regressed upon the income of families
(I) and the wealth of families (W). All variables are measured in Rupees. The following
regression results were obtained for a sample of 10 families.
Variable Coefficient t Statistics
Income 0.94 1.14
Wealth - 0.04 -0.52
Constant 24.77 6.75
df = 7, R² = 0.96
(i) Based on intuition, what signs would you expect for the partial slope
coefficients? Do the observed signs agree with your intuition?
(ii) Every t statistic is insignificant but F statistic is significant. Verify this
statement at 10% level of significance. What are the reasons for this
paradoxical statement?
(iii) Do you expect the estimated coefficients to be unbiased and efficient?
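A small numerical sketch of the check asked for in part (ii), assuming n = 10 families, two regressors and df = 7 as stated in the question; critical values are looked up with scipy.

import scipy.stats as st

n, k, r2, df = 10, 2, 0.96, 7
f_stat = (r2 / k) / ((1 - r2) / df)     # overall F computed from R-squared
f_crit = st.f.ppf(0.90, k, df)          # 10% level of significance
t_crit = st.t.ppf(0.95, df)             # two-tailed 10% critical t value

print(f_stat, f_crit)   # 84.0 vs about 3.26: the regressors are jointly highly significant
print(t_crit)           # about 1.89: each reported |t| (1.14 and 0.52) falls below it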
8. Consider the following model relating the gain in salary due to an MBA degree to a
number of its determinants.
9. Consider the following regression for an imaginary country, say Utopia, for a period
of 15 years. Variables are: IMP = imports, GNP = Gross National Product and CPI =
Consumer Price Index
Regression 1:
ÎMPt = −108.20 + 0.045 GNPt + 0.931 CPIt
t = (3.45) (1.232) (1.844)
R² = 0.9894
(i) Test whether, individually, the partial slope coefficients for GNP and CPI are
statistically significant at the 5% level of significance.
(ii) Test whether GNP and CPI jointly have any statistical significance in
explaining variations in imports. Carry out this test at 5% level of significance.
(iii) Comment on the results obtained in (i) and (ii) above. Do you suspect any
problem?
(iv) Do you expect that OLS coefficients have retained their BLUE property? If no,
explain why. If yes explain why you would still be worried about their quality.
(v) Using a transformation of variables, real imports are regressed on real
income.
Regression 2:
ÎMPt/CPIt = −1.39 + 0.202 (GNPt/CPIt)
10. In a study of the production function of a firm for the period 1991 to 2011, the
following two regression models were obtained :
Model I
Model II
where,
t is time trend
11. From the annual data for the US manufacturing sector, the following results were
obtained:
𝑅2 = 0.97, 𝐹 = 189.8
𝑅2 = 0.65, 𝐹 = 19.5
b) In regression (1), what is the a priori sign of log(k)? Do the results conform to
this expectation? Why or why not?
d) Interpret regression (1). What is the role of the trend variable in this regression?
h) Are the 𝑅2 values of the two regressions comparable? Why or why not? How
would you make them comparable, if they are not comparable in the existing
form?
1. Consider the model Yi = B1 + B2X2i + B3X3i + B4X4i + ui. Let b2 be the
OLS estimator of the slope coefficient B2.
i. Derive the variance of b2, i.e., var(b2), in terms of the Variance Inflation Factor (VIF).
ii. When X2 is regressed on X3 and X4, the R₂² obtained from this auxiliary regression is
0.9217. Does it necessarily imply a high variance of b2? Explain. [Eco(h) 2018]
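A minimal sketch of the VIF arithmetic in part (ii). It uses the standard relation var(b2) = σ² / [Σx2i² (1 − R₂²)]; σ² and Σx2i² are not given in the question, so only the inflation factor itself is computed.

r2_aux = 0.9217                  # auxiliary R-squared from the question
vif = 1.0 / (1.0 - r2_aux)
print(vif)                       # about 12.77: var(b2) is inflated roughly 12.8 times
# whether var(b2) is actually "high" still depends on sigma^2 and the variation in X2,
# which is the point the question is driving at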
educ is education measured in years and age is age of the individual in years.
i. Is there any evidence that men sleep more than women? How strong is the
evidence?
ii. Interpreting the coefficients of the age and age-squared variables, explain what
the researcher has in mind about the relation between sleep and age.
iii. Is there a statistically significant trade-off between working and sleeping? How
would the regression model have to be modified if there is reason to believe that
this trade off might be gender specific?
iv. Do you suspect multicollinearity in the model? Explain your answer.[Eco(h) 2020]
4. Consider the following regression results for 45 countries for the year 2011-2012.
(the t-ratios are given in brackets):
5. Quarterly data on country XYZ was collected for the period 2005-2019 to estimate the
relation between Foreign Direct Investment (FDI), Trade Openness (TO), Gross
Domestic Product (GDP) and Exchange Rate (E). TO is defined as the ratio of exports
plus imports to GDP and t = trend. The following regression was estimated:
R2 = 0.904, d = 1.45
i. Interpret the estimated slope coefficients. Do you suspect some problem with the
above regression?
ii. What is the nature of the problem? How do you know? Explain its consequences?
FDIt/GDPt = β0 + β1 (Et/GDPt) + β2 (TOt/GDPt) + β3 (t/GDPt) + ut
Will this transformation solve the problem in (ii) above? How? Can you compare the
R² of this model with the model above? [Eco(h) 2022]
6. Suppose demand for Brazilian coffee in country Rico is a function of the real price of
Brazilian coffee (Pbc), real price of tea (Pt) and real disposable income (Yd) in Rico.
Suppose following results were obtained by running the implied regression:
𝑅̅2 = 0.60 𝑁 = 25
i. Interpret the slope coefficients. Are the signs in accordance with economic theory?
ii. Do you think that the equation suffers from some problem? What could be the
nature of the problem?
iii. What are, in general, the consequences of the problem, if any, detected in part (ii)?
iv. Suppose the researcher drops Pbc and runs the following regression:
Ĉoffee = 9.3 + 2.6 Pt + 0.0036 Yd
𝑡 = (2.6) (4.0)
𝑅̅2 = 0.61 𝑁 = 25
Has the researcher made the correct decision in dropping 𝑃𝑏𝑐 from the equation? Explain.
v. Do you think that Brazilian coffee in Rico is price inelastic? Why/why not?
[Eco(h) 2023]
8. In order to test whether the developing economies are catching up with the advanced
economies or not, a researcher regressed the growth rate of GDP of a country on its
relative per capita GDP for 119 developing countries. The relative per capita GDP of a
country is measured as a ratio of the country's per capita GDP to the GDP per capita
of USA. The regression results were obtained as under (standard errors are reported
in parentheses) :
i. Interpret the above regression results.
ii. Find the marginal effect of P on G.
iii. If a researcher wishes to estimate the above relationship in logarithmic form and
estimates the following relationship :
ln Gi = B1 + B2 ln Pi + B3 ln Pi² + ui
Do you think he will be able to estimate the model? Give reasons for your answer
[Eco(h) 2013]
9. Let Y be the Gross National Product, 𝑋2 be the Exports, 𝑋3 be the Imports and 𝑋4 be
the Net Exports in the following relationship:
Which assumption of the Classical Linear Regression Model is violated here? If you estimate
this equation by ordinary least squares, then which of the parameters can be estimated?
Explain. [Eco(h) 2024]
10. Using data collected from 1990 to 2022, for a particular country, the following
equation was estimated using OLS .
𝑅2 = 0.9987
i. Are the regression results in conformity with your a priori expectations? If not,
what can be the reason behind it? Explain any three remedies to overcome this
problem.
ii. Now suppose the researcher regressed female literacy rate on per capita GNP and
in this second regression, the R² is 0.5284. Do you think that the reason that you
suggested in (i) is significantly present in the model, at 5% level of significance? If
your answer is yes, then do you think that the presence of this is necessarily bad?
[Eco(h) 2024]
CHAPTER-6
Heteroscedasticity
6. Under the Park test, ln ûi² = ln σ² + β ln Xi + vi is the suggested regression model.
Here, if we find β to be statistically significantly different from zero, this means
that
a. Homoscedasticity assumption is satisfied
b. Homoscedasticity assumption is not satisfied
c. We need further testing
d. Xi has impact on Yi
7. According to Goldfeld and Quandt the problem with Park test is that the
a. Error term is heteroscedastic
b. Expected value of vi is nonzero
c. vi is serially correlated
d. Model is nonlinear in parameter
9. The following remedial measure for heteroscedasticity is used when σi² is known
for a regression model
a. Koenker-Bassett method
b. Weighted least square method
c. OLS method
d. White’s procedure
10. Which of the following is NOT considered the assumption about the pattern of
heteroscedasticity
11. Even if heteroscedasticity is suspected and detected, it is not easy to correct the
problem. This statement is
a. True
b. False
c. Sometimes true
d. Depends on test statistics used
State whether the following statements are true or false. Briefly justify your answer.
Theory Questions
1. Suppose heteroscedasticity is present in a regression model and the ordinary least
squares procedure is applied to estimate the parameters of the model. What are
the consequences for the properties of the estimators and the hypothesis testing
procedures?
Practical Questions
1. Based upon the data on research and development (R&D) expenditure, sales, and
profits for 18 industry groupings in the United States, all figures in millions of dollars,
the following model is fitted. Since the cross-sectional data used for this
model are quite heterogeneous, in a regression of R&D on sales (or profits),
heteroscedasticity is likely. The regression results were as follows:
𝑠𝑒 = (533.9317)(0.0083) 𝑟 2 = 0.4183
To see if the above model suffers from heteroscedasticity we obtained the residuals 𝑒𝑖 ,
squared them and fitted the following models to conduct formal tests.
Use the Glejser test to determine, if the model suffers from the problem of
heteroscedasticity.
iii. 𝑒𝑖2 = 62,19,665 + 229.3508𝑆𝑎𝑙𝑒𝑠𝑖 − 0.000537𝑆𝑎𝑙𝑒𝑠𝑖2
𝑠𝑒 = (64,59,809)(126.2197)(0.0004) 𝑟 2 = 0.2895
Use the White's test to determine, if the model suffers from the problem of
heteroscedasticity.
On plotting the residuals against Xi, it was found that the variance of the residuals
increased with Xi
(i) What problem does this indicate? Name any one test for its detection.
(ii) What are the consequences of this problem for OLS estimators?
(iv) Explain the estimation process of Weighted Least Squares with known error
variances in this context.
3. A regression of salaries of 222 professors from seven universities in the U.S. on their
years of experience since they completed their Ph.D. was performed.
(a) The graph of squared residuals (û²) against the fitted values of the dependent
variable (salary), with the least squares fit, is shown below. What does the
graph show? Is there evidence of heteroscedasticity?
(b) The test statistic for White’s test for this regression was reported as
19.7.State the null and alternative hypothesis and the test statistic for carrying
out this test. Is the null hypothesis rejected at 5% level of significance?
4. Consider the regression model that postulates relationship between monthly demand
for burgers (Y) and monthly household income (HH_INC, in rupees).
𝑌𝑖 = 𝐴 + 𝐵 𝐻𝐻_𝐼𝑁𝐶𝑖 + 𝑢𝑖 ,
The regression was run for a cross section of 41 observations. Suspecting
heteroscedasticity, White's test for heteroscedasticity was chosen and the following results were obtained:
𝑅2 = 0.1148
Test for heteroscedasticity at 5% level of significance. State the null and alternative
hypothesis clearly.
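A minimal sketch of the White-test computation for this question. The auxiliary regression is assumed to contain HH_INC and HH_INC² (the model has a single regressor), giving 2 degrees of freedom; that is an assumption, since the auxiliary regression itself is not reproduced here.

import scipy.stats as st

n, r2_aux = 41, 0.1148
df_aux = 2                       # assumed: HH_INC and HH_INC^2 in the auxiliary regression
white_stat = n * r2_aux          # n * R^2 ~ chi-square(df_aux) under homoscedasticity
crit = st.chi2.ppf(0.95, df_aux)
print(white_stat, crit)          # about 4.71 vs 5.99: fail to reject homoscedasticity at 5%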
Ŵ/N = 0.008 + 7.8 (1/N)
t = (14.43) (76.58), r² = 0.99
(b) What is the reason for transforming Eq (1) into Eq. (2).
(c) What is the author assuming in going from Eq. (1) to (2) ?
(d) Has the author successfully removed the problem which Eq. (1) is suffering
from.
(e) Can you relate the slopes and intercepts of the two models?
(f) Can you compare the R2 values of the two models? Why or Why not?
6. For pedagogic purposes Hanushek and Jackson estimate the following model:
b) Compare the results of the two regressions. Has the transformation of the
original model improved the results, that is, reduced the estimated standard
errors? Why or why not?
Yi = B1 + B2 Xt + ut
Where
In : natural logarithm
When White's general test for heteroscedasticity at 5% level of significance was
conducted, the R² obtained from the regression of ei² on a constant, Years, Years² and
Years³ was 0.45. Is the researcher correct in concluding that Years and Years² are
individually significant variables in the salary regression? Why or why not?
ĈOMPi = 1992.35 + 0.233 PRODi
t = (2.1275) (2.333)
R² = 0.5891
Since the cross sectional data included heterogeneous units, heteroscedasticity was likely
to be present. The Park test was performed and the following results of auxiliary
regression were obtained :
ln êi² = 35.817 − 2.8099 PRODi
(i) Use the result of auxiliary regression to check if the model indeed suffers from
heteroscedasticity, perform the test at 5% level of significance.
(ii) What could be the possible remedies of heteroscedasticity?[Eco(H) 2019]
2. The Home Ministry of a country wants to test if petty crimes (minor thefts) are higher
in states where poverty rates are high. They obtain data on several variables and ran
the following cross-section regression for 35 states in the country.
𝑆𝑒 = (3.125)(0.02713)(0.0361)(0.03834)
𝑛 = 35 𝑅2 = 0.6876
PR = Poverty Rates
LR = Literacy Rates
This equation was estimated using 50 cross-sectional observations on states, using
ordinary least squares (OLS). To check for heteroscedasticity related to LR, separate regressions
were run for the 17 states with the lowest LR and the 17 states with the highest LR. The
sum of squared residuals for the low LR states was 270. The sum of squared residuals for
the high-LR states was 90.
i. Compute unbiased estimates of the variance of the error term in the two
subsamples.
ii. Conduct the Goldfeld-Quandt test at 5% level of significance.
iii. Regardless of your conclusion for part (ii), suppose you believe that
heteroscedasticity is indeed present and that the variance of the error term is
inversely proportional to state LR: Var(εi) = γ/LRi, where γ is an unknown
constant. Explain how you would transform the data to satisfy the classical
assumptions. [Eco(h) 2022]
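A minimal sketch of the Goldfeld-Quandt arithmetic for parts (i) and (ii), assuming each subsample regression keeps all 4 parameters of the model shown above (the question does not state this explicitly).

import scipy.stats as st

rss_low, rss_high = 270.0, 90.0     # RSS for the low-LR and high-LR subsamples
n_sub, k = 17, 4                    # assumed: 17 observations and 4 parameters per subsample
df = n_sub - k

s2_low, s2_high = rss_low / df, rss_high / df   # part (i): unbiased error-variance estimates
gq_stat = s2_low / s2_high                       # larger estimated variance in the numerator
crit = st.f.ppf(0.95, df, df)
print(s2_low, s2_high)   # about 20.8 and 6.9
print(gq_stat, crit)     # 3.0 vs about 2.58: reject homoscedasticity at 5%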
𝑌𝑖 = 𝛽1 + 𝛽2 𝑋2𝑖 + 𝛽3 𝑋3𝑖 + 𝑢𝑖
How will you transform the model to obtain homoscedastic errors under each of the
following cases, assuming other CLRM assumptions for 𝑢𝑖 hold:
i. ui = εi (X2i)^(1/2)
= 0 otherwise
Suppose that E(u | X, D1, D2) = 0 and Var(u | X, D1, D2) = σ²X². Transform the original
equation to obtain a homoscedastic error term.
6. Based on data on value added in manufacturing, MANU, and gross domestic product
for 28 countries in 2010, all measured in millions of US dollars, the following
regression results were reported (standard errors in parentheses):
M̂ANUi = 604 + 0.194 GDPi
𝑠𝑒 = (533.93) (0.013)
Since the cross sectional data were based on heterogeneous units, heteroscedasticity was
likely to be present. White's test was performed using ordinary least squares residuals,
ei of the above regression and the following results were obtained :
𝑅2 = 0.5891
i. Use the R2 value reported in the auxiliary regression to test if the model indeed
suffers from heteroscedasticity. Perform the test at 5% level of significance.
ii. In the light of your answer in part (i) what can you say about the regression results
reported above? [Eco(h) 2013]
Yi = A + BXi + ui
How will you modify the original regression in order to deal with the problem of
heteroscedasticity in each of the following cases, if error variance follows the
a) E(ui²) = σ²Xi²
b) E(ui²) = σ²Xi³
c) E(ui²) = σ²Xi^(1/3) [Eco(h) 2017]
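A small sketch of the weighted-least-squares logic behind these cases: divide the whole equation by the square root of the assumed variance function so that the transformed error is homoscedastic. The data are simulated purely for illustration, and only case (a), E(ui²) = σ²Xi², is shown.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(1.0, 10.0, n)
u = rng.normal(0.0, 1.0, n) * X            # case (a): sd of the error proportional to X
Y = 2.0 + 0.5 * X + u

# WLS for var(u_i) = sigma^2 * X_i^2: divide every term by X_i
Ys, inv_X = Y / X, 1.0 / X
Z = np.column_stack([inv_X, np.ones(n)])    # transformed model: Y/X = B1*(1/X) + B2 + u/X
b, *_ = np.linalg.lstsq(Z, Ys, rcond=None)
print(b)                                    # b[0] ~ 2.0 (original intercept), b[1] ~ 0.5 (original slope)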
8. A researcher finds evidence of heteroscedasticity in the regression model.
Yi = B1 + B2 Xi + ui
The function is estimated using OLS and the residuals, ei, are found to be heteroscedastic
Transform the above model by applying the weighted least squares (WLS) method to
obtain homoscedastic errors under each of the following cases. Do the transformed
regressions in each case have an intercept term?
9. A researcher postulates that the car density (number of cars per thousand
population), Y, in a city depends on the bus density (number of buses per thousand
population), X. He runs the regression model. Yi = B1 + B2 Xi + ui for a cross-section
of 128 cities in India and finds evidence of heteroscedasticity.
i. How would the model be re-estimated if it is assumed that the error variance is
proportional to the reciprocal of Xi, that is, E(ui²) = σ²/Xi? Show that the transformed
error term is homoscedastic.
ii. Can we compare R2 of the original model and the transformed model? Explain your
answer. [Eco(h) 2018]
10. A researcher obtained the following results for determining the relation between
school dropout rates of a district (% of class V students who drop out of school) in
India and district's per capita income, district's expenditure on education and a
dummy variable D_partyABC =1 if political party ABC was in power, 0 otherwise. 215
districts were included in this study.
(.561) (.045)
11. The amount of loan (Li, in lakhs) that is sanctioned by a bank to an applicant is
regressed on a Gender Dummy for Male (D_Male = 1 if male, 0 otherwise), Credit Score
(Ci; higher values indicate good credit history), Income (Inc; in lakh Rupees) and
education level (Ed; in years) of the applicant for a sample of 45 applicants.
i. What are the likely consequences on the results of the Gauss Markov theorem if it
is found that income and education have a high correlation coefficient of 0.88?
ii. Interpret the coefficient of D_Male.
iii. Test for overall goodness of fit of this regression.
iv. The value of the test statistic of the White's General test was found to be 9.69. What
is the distribution of this test statistic? What are the null and alternative
hypotheses of this test? What can you conclude about the presence of
heteroscedasticity based on the above information given squares and cross
products of explanatory variables were included in the auxiliary regression?
v. What could be the possible remedy of the problem if heteroscedasticity is indeed
present? Assume that error variances are unknown. [Eco(h) 2023]
X 0.121 0.009
N=27
𝑅2 = 0.776
On plotting the residuals against X, it was found that the variance of the residuals
increased with X.
i. What problem does this indicate? Name any one test for its detection and explain
the steps to conduct this test.
ii. What are the consequences of this problem for OLS estimators? Which type of
dataset is more likely to be characterized by this problem?
iii. Explain the estimation process of Weighted Least Squares with unknown error
variances in this context where the error variance is proportional to X². Also, mention
which are the intercept and slope parameters in the transformed model and how
we get back to the original model.
CHAPTER-7
Autocorrelation
OBJECTIVE TYPE QUESTION
Choose the Best alternative for each question
1. When error terms across time series data are intercorrelated, it is known as
a. Cross correlation
b. Cross autocorrelation
c. Spatial autocorrelation
d. Serial autocorrelation
4. If in our regression model, one of the explanatory variables included is the lagged
value of the dependent variable, then the model is referred to as
a. Best fit model
b. Dynamic model
c. Autoregressive model
d. First-difference form
8. The regression model does not include the lagged value(s) of the dependent
variable as one of the explanatory variables. This is an assumption underlying one
of the following tests of autocorrelation:
a. Durbin-Watson d test
b. Runs test
c. Breusch-Godfrey test
d. Graphical method
10. If the Durbin-Watson d-test statistic is found to be equal to 0, this means that first-
order autocorrelation is
a. Perfectly positive
b. Perfectly negative
c. Zero
d. Imperfect negative correlation
State whether the following statements are true or false. Briefly justify your answer.
(h) In the regression of the first difference of Y on the first differences of X, if there
is a constant term and a linear trend term, it means in the original model there
is linear as well as a quadratic trend term.
(i) For the two-variable regression model Yt = B1 + B2Xt + ut, if the OLS residuals
(et) are plotted against time (t) and a distinct pattern is observed, then it is an
indication of heteroscedasticity.
THEORY QUESTIONS
Yt = B1 + B 2 Xt + ut
ut = 𝜌ut-1 + vt
3. In the two-variable regression model, Yt = B1 + B2Xt + ut, discuss how the problem
of autocorrelation can be remedied using the First Difference Method (ρ = 1) if the
disturbance term ut follows the AR(1) scheme, that is, ut = ρut−1 + vt.
Practical Questions
1. Given a sample of 50 observations and 4 explanatory variables, what can you say
about autocorrelation if
a) d=1.05, b) d=1.05, c) d=2.50, d) d=3.97
𝑌𝑡 = 𝐵1 + 𝐵2 𝑡 + 𝑢𝑡
t = time
t = (2.535) (−3.9608)
(a) Use the Durbin Watson d-statistic test to check if there is autocorrelation in the
model. Give the null and alternate hypothesis clearly.
(b) Give any three reasons that can cause autocorrelation.
4. Let the population regression function be as follows. where errors follow AR(1)
process:
𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝜇𝑡
𝜇𝑡 = 𝜌𝜇𝑡−1 + 𝜀𝑡
OLS is used to estimate the function using time-series data for 10 consecutive time
periods.
(i) If errors follow AR(1) how would it affect the least squares estimation?
(ii) The residuals for the 10 consecutive time periods are as follows
Time 1 2 3 4 5 6 7 8 9 10
Period
Residuals -5 -4 -3 -2 -1 +1 +2 +3 +4 +5
Plot the residuals with respect to time. What conclusion can you draw about the pattern
of the residuals over time?
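A minimal sketch that computes the Durbin-Watson d statistic directly from the ten residuals listed in part (ii); the steadily rising pattern and the very low d both point to positive first-order autocorrelation.

import numpy as np

e = np.array([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5], dtype=float)   # residuals from the table
d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
print(d)   # about 0.11, far below 2: strong evidence of positive autocorrelation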
5. A researcher estimated the demand function for money for an economy for 100
quarters using quarterly data for the period Q1: 1985-1986 to Q2: 2010-2011. The
regression results are as follows (standard errors are mentioned in the brackets and
In indicates natural log) :
ln M̂t = 2.6027 − 0.4024 ln Rt + 0.59 ln Yt
(se) = (1.24) (0.36) (0.36)
R² = 0.92, Durbin-Watson d-statistic = 1.755
Where Mt = real cash balances
Rt = long-term interest rate
Yt = aggregate real national income
Use Durbin-Watson d test to check for the presence of first order autocorrelation
at 5% level of significance.
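A short sketch of the d-test decision for this question. The 5% bounds for n = 100 and k' = 2 regressors are taken as roughly dL ≈ 1.63 and dU ≈ 1.72; these should be confirmed from the Durbin-Watson table, so treat them as assumptions.

d = 1.755
dL, dU = 1.63, 1.72          # approximate 5% table values for n = 100, k' = 2 (assumption)
if d < dL:
    verdict = "reject H0: evidence of positive first-order autocorrelation"
elif d < dU:
    verdict = "inconclusive zone"
else:
    verdict = "do not reject H0: no evidence of positive first-order autocorrelation"
print(verdict)               # d = 1.755 > dU, so no evidence of positive autocorrelation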
6. In a study of the determination of prices of final output at factor cost in the UK, the
following results were obtained on the basis of the data:
𝑅2 = 0.984, d=2.54
Where PF= prices of final output at factor cost, W= wages and salaries per employee,
X= gross domestic product per person employed, M= import prices, Mt−1 = import
prices lagged 1 year, PFt−1 = prices of final output at factor cost lagged 1 year.
“Since for 18 observations and 5 explanatory variables, the 5% lower & upper d values
are 0.71 and 2.06, the estimated d value of 2.54 indicates that there is no positive
autocorrelation.” Comment.
7. Where Y is output, L is labour input, K is capital input and Δ is the first
difference operator. How would you interpret β1 in this model? Could it be
regarded as an estimate of technological change? Justify your answer.
8. (i) To study the effect of the unemployment rate (u) on the index of vacancies (VAC)
in U.S.A. for 24 observations, the following results were obtained:
ln VACi = 7.3084 − 1.5375 ln ui
t = (5.8250) (−21.612)
r² = 0.9550, d = 0.9108
Is there a problem of autocorrelation indicated in the results? Choose α = 5%.
(ii) Outline the method of estimation that will produce BLUE estimators in the
presence of AR(1) autocorrelation.
(i) Test for the presence of autocorrelation using Durbin Watson test at 5% level
of significance. State your hypotheses clearly.
10. (i)From the given data on the indexes of real compensation per hour (Y) and output
per hour (X) in the business sector of the U.S. economy for the period 1959 to 1998,
the base of the indexes being 1992 = 100. We obtain the following regression model.
Yt = 29.5192 + 0.7136Xt
se = (1.9423) (0.0241)
t = (15.1977) (29.6066)
r2 = 0.9584, d = 0.1229
Using the Durbin-Watson d test, check whether the model suffers from autocorrelation.
(iii) Since the data underlying regression in part(i) is time series data, it is quite
possible that both wages and productivity exhibit trends. If that is the case,
then we need to include the time or trend, t, variable in the model to see the
relationship between wages and productivity net of the trends in the two
variables.
To test this, we include the trend variable in regression given in part(i) and
obtained the following results
Ŷt = 1.4752 + 1.3057Xt − 0.9032t
se = (13.18) (0.2765) (0.4203)
t = (0.1119). (4.7230) (-2.1490)
R2 = 0.9631 d = 0.2046
Has the problem of autocorrelation been resolved? If not, can we say that the model suffers
from pure autocorrelation?
11. For the Phillips curves for United States from 1958 to 1969 the following regression
was obtained:
Ŷt = −0.2594 + 20.5880 (1/Xt)
t = (-0.2572) (4.3996)
R2 = 0.6594, d = 0.6394
(i) Interpret the regression. Is there any evidence of first order autocorrelation in
the residuals?
(ii) On what counts would a researcher be satisfied with these results at a first
glance? Verify your conjectures using formal tests. For tables take the closest
value of n.
(iii) Is there anything in these results that the researcher needs to worry about?
Verify using formal test (s).
13. Consider the following demand for energy model for India for 1945 to 1995:
Does the model suffer from first order autocorrelation? Describe the test statistic you use
and why?
14. Consider the following regression results on a model of demand for competitive
imports based on U.K. quarterly data covering 1980(Q1) to 1996(Q4).
Se (.024)
Where:
Which test should be used to test the presence of AR(1) error process in this model?
Describe the test and perform this test at 5% level of significance.
n      k      d       Inference
60     6      3.72    —
200    20     1.61    —
𝑠𝑒 = (0.99)(0.089)(0.051)(0.0058)
𝑅2 = 0.567, 𝑛 = 34
Use the Breusch-Godfrey test to check for the presence of AR(1) scheme of
autocorrelation at 1% level of significance.
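A minimal sketch of the Breusch-Godfrey mechanics for a question of this type. The auxiliary R² below is a placeholder (the question's auxiliary regression output is not reproduced here), so only the structure of the test is shown.

import scipy.stats as st

n, p = 34, 1                 # sample size from the question; AR(1) means p = 1 lagged residual
r2_aux = 0.20                # placeholder: R^2 from regressing e_t on the regressors and e_{t-1}
bg_stat = (n - p) * r2_aux   # (n - p) * R^2 ~ chi-square(p) under H0: no autocorrelation
crit = st.chi2.ppf(0.99, p)  # 1% level of significance
print(bg_stat, crit)         # reject H0 only if the statistic exceeds about 6.63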
18. The following model of consumption is estimated for an economy for the years 1947-
2000 :
ln Ct = B1 + B2 ln PDIt + B3 INTt + ut
The OLS residuals (et) are then regressed on ln PDI, INT, and et−1 as follows:
1. Consider the following model of Indian imports estimated using data for 40 years for
the period 1945-1985. (Standard errors are given in parentheses)
𝑠𝑒 = (0.0903)(0.0191)(0.0243)(0.024)
R2 = 0.994, d = 1.8
Where,
i. Does the model suffer from first order autocorrelation? Which test statistic do you
use and why?
ii. Outline the steps of the test used. Compute the test statistic and test the
hypotheses that the preceding regression does not suffer first order
autocorrelation.
iii. If the general model is given as Yt = B1 + B2X2t + B3X3t + ut, where the errors follow the
AR(1) scheme, that is, ut = ρut−1 + δt, where δt is a white noise error term, then
how would you transform the model to correct for the problem of autocorrelation?
[ Eco(h) 2014]
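A small sketch of the generalized (quasi) difference transformation asked for in part (iii), on simulated data; in practice ρ has to be estimated (for example via ρ̂ ≈ 1 − d/2), so the value used here is illustrative only.

import numpy as np

rng = np.random.default_rng(1)
T, rho = 200, 0.7
X2, X3 = rng.normal(size=T), rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):                        # AR(1) errors
    u[t] = rho * u[t - 1] + rng.normal()
Y = 1.0 + 2.0 * X2 + 0.5 * X3 + u

# quasi-difference every variable: Z*_t = Z_t - rho * Z_{t-1}
Ys  = Y[1:]  - rho * Y[:-1]
X2s = X2[1:] - rho * X2[:-1]
X3s = X3[1:] - rho * X3[:-1]
Z = np.column_stack([np.full(T - 1, 1 - rho), X2s, X3s])   # the intercept column becomes (1 - rho)
b, *_ = np.linalg.lstsq(Z, Ys, rcond=None)
print(b)   # approximately [1.0, 2.0, 0.5]: the original parameters, now with serially uncorrelated errors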
2. Consider the following model :
(GNPt – GNPt-1) = change in the GNP between time t and time (t − 1).
i. Assuming you have the data to estimate the preceding model, would it be possible
to estimate all the coefficients of this model? If not, what coefficients can be
estimated? Do you suspect a problem in the regression?
ii. Suppose that the GNP, explanatory variable was absent from the model. Would
your answer to (i) be the same?
iii. What is a possible remedy to the problem detected in (i) above?
iv. Now suppose the model is given as Ct = β1 + β2GNPt + β3Ct-1 + ut and the errors
are assumed to be autocorrelated. How would you test for serial correlation in the
model? Discuss the underlying assumptions of the test if any?
v. Suppose the equation given in (iv) above is transformed and estimated as:
Ct/GNPt = β1(1/GNPt) + β2 + β3(Ct-1/GNPt) + ut/GNPt. What could be the possible
reason for the transformation? How would you test for such a problem?
3. What do you understand by the term autocorrelation? Consider the regression model
Yt = B1 + B2Xt + ut. How can the problem of autocorrelation be remedied if ρ is
assumed to be 1 (ρ = 1) and it is assumed that the error term follows the AR(1)
scheme, that is,
ut = ρut-1 + et, −1 ≤ ρ ≤ 1
4. Quarterly data on country XYZ was collected for the period 2005-2019 to estimate
the relation between Foreign Direct Investment (FDI), Trade Openness (TO), Gross
Domestic Product (GDP) and Exchange Rate (E). TO is defined as the ratio of exports
plus imports to GDP and t = trend. The following regression was estimated:
𝑠𝑒 = (0.097)(0.013)(0.004)(0.015)(0.09)
𝑅2 = 0.904, 𝑑 = 1.45
i. Interpret the estimated slope coefficients. Do you suspect some problem with the
above regression?
ii. What is the nature of the problem? How do you know? Explain its consequences?
FDIt/GDPt = β0 + β1 (Et/GDPt) + β2 (TOt/GDPt) + β3 (t/GDPt) + ut
Will this transformation solve the problem in (ii) above? How? Can you compare
the R² of this model with the model above?
iii. Suppose now the regression is estimated as given below
̂𝑡 = −0.74 − 0.042𝑇𝑂𝑡 + 0.41𝑡
𝐹𝐷𝐼
𝑠𝑒 = (0.057)(0.019)(0.364)
𝑅2 = 0.896, 𝑑 = 1.34
Test whether the regression specified above suffers from first order
autocorrelation? Which test will you use and why? (Use a = 5%)
iv. If the errors obtained from regression specified in (iii) above follows higher order
autoregressive process then how would you test for serial correlation? Give the
steps of the test in detail.
v. With reference to the regression specified in part (iii). What will be the remedy
for the problem of autocorrelation if it is detected? Explain.[Eco(h) 2022]
5. The following regression was estimated using quarterly data for 10 years
𝑆𝑒 = (13.58)(0.0347)(0.0017)(0.04919)
i = interest rate
i. Interpret the above regression and comment on the expected and estimated signs
of the coefficients. Also comment on the individual significance of the coefficients.
ii. Construct an ANOVA table and comment on the joint significance of the regression.
iii. Suppose you wish to test the restriction 𝛽3 = 𝛽4 for the above regression. Explain
the two methods that you can use to carry out this test.
iv. Do you suspect autocorrelation in the model? If yes, how would you test for it?
[Eco(h) 2020]
6. A researcher estimated the demand function for money for an economy for 101
quarters using quarterly data for the period Q1: 1986-1987 to Q2: 2011-2012. The
regression results are as follows (standard errors are mentioned in the brackets and
ln indicates natural log):
𝑠𝑒 = (1.24)(0.36)(0.34)(0.02)
R2 = 0.9165
i. Use Durbin's h-test to check for the presence of first order autocorrelation at 1%
level of significance.
ii. Can we use Durbin-Watson d-statistic test for the above regression ? Give reasons.
Where Si is the number of suicides per million population in a country in the year 2019
Divorce Rates is number of divorces per million population in a country in the year 2019
i. Why did not the NGO use only divorce rate as an explanatory variable? What
would be the properties of OLS estimator of the coefficient of divorce rate in such
a regression?
ii. Given GDP has an exact relation with HDI, where HDI = (GDP per capita × Literacy
Rates × Life Expectancy)³, will perfect multicollinearity be a problem in the above
regression?
iii. Interpret the coefficients of In GDP per capita and Divorce rates:
iv. Suppose NGO only examines the impact of divorce rates on suicide rates and run
the following regression: Si = β1 + β2 Divorce Ratesi + εi. Show that β2 (the OLS estimator) is an
efficient estimator.
v. The NGO also ran a time series regression for one specific country for a period of
35 years and obtained the following results.
St = 10.433 − .047 HDIt + 343.45 ln GDP per capitat + .0002 Divorce Ratest
Durbin Watson d=2.03
What can be inferred about the presence of AR(1) from the results?[Eco(h) 2023]
8. For estimating the Phillips curves for the United States from 1958 to 1969 the
following regression was obtained :
Ŷt = −0.2594 + 20.5880 (1/Xt)
𝑡 = (−0.2572)(4.3996)
𝑅2 = 0.6594, 𝑑 = 0.6394
1) d  2) c  3) d  4) c  5) a  6) d  7) b  8) a  9) c  10) a
CHAPTER-8
Model Selection Criteria
Theory Questions
Practical questions
1. Consider the data in following Table:
Y X2 X3
1 1 2
3 2 1
8 3 -3
Based on these data, estimate the following regressions:
Yi = α1 + α2X2i + u1i
Yi = λ1 + λ3X3i + u2i
Yi = β1 + β2X2i + β3X3i + u3i
Note: Estimate only the coefficients and not the standard errors.
(i) Is α2 = β2? Why or why not?
(ii) Is λ3 = β3? Why or why not?
What important conclusion do you draw from this exercise?
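A minimal sketch that runs the three requested regressions on the small data table above with numpy, purely to make the comparisons in (i) and (ii) concrete.

import numpy as np

Y  = np.array([1., 3., 8.])
X2 = np.array([1., 2., 3.])
X3 = np.array([2., 1., -3.])
one = np.ones(3)

a, *_ = np.linalg.lstsq(np.column_stack([one, X2]), Y, rcond=None)        # Y on X2
l, *_ = np.linalg.lstsq(np.column_stack([one, X3]), Y, rcond=None)        # Y on X3
b, *_ = np.linalg.lstsq(np.column_stack([one, X2, X3]), Y, rcond=None)    # Y on X2 and X3

print(a)   # alpha2 = 3.5
print(l)   # lambda3 is about -1.36
print(b)   # beta2 = 1.0, beta3 = -1.0: simple and partial slopes differ because X2 and X3 are correlated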
(iii) If FLR is regressed upon PGNP, the following results are obtained:
F̂LRi = 47.5971 − 0.00256 PGNPi
se = (3.5553) (0.0011)
r² = 0.0721
Explain the net effect and gross effect of PGNP on CM.
3. Suppose we estimate an equation for demand for food in India for the period 1922 –
41:
QD = demand for food
PD = food prices
Y = income
𝑄̂D = 92.05 – 0.142PD + 0.236Y
se (5.84) (0.067) (0.031)
R2 = 0.9832
(i) Comment on the above regression. Now if we omit the income variable we
get the following regression:
Qd = 89.97 + 0.107PD
se (11.85) (0.118)
(ii) Comment on the new regression with omitted variable. Do you suspect any
problem?
(iii) If the answer to (ii) above is yes, then what is the nature of the problem?
(iv) What are the consequences of such a problem?
4. Using quarterly data for 10 years (n = 40) for the U.S. economy, the following model of
demand for new cars was estimated:
NUMCARSi = B1 + B2PRICEi + B3INCOMEi + B4 INTRATEi + ui
Where
NUMCARS: Number of new car sales per thousand people
Price: New car price index
INCOME: Per capita real disposable income (in$)
INTRATE: Interest rate (in percent)
The table below gives estimates of the coefficients and their standard errors:
(i) A priori, what are the expected signs of the partial slope coefficients? Are the
results in accordance with these expectations?
(ii) Interpret the various slope coefficients and test whether they are individually
statistically different from zero. Use 10% level of significance.
(iii) The adjusted R squared reported for this model is 0.758. Test the model for
overall goodness of fit at 5% level of significance.
(iv) Suppose unemployment rate is an important determinant of demand for new
cars but is not included in the above regression model. What are the
consequences of omitting this variable?
5. The monthly salary (Wage, in hundreds of rupees), age (AGE, in years), number of
years of experience (EXP, in years), number of years of education (EDU) were
obtained for 49 persons in a certain office. The estimated regression of Wage on the
characteristics of a person were obtained as follows (with t statistics in parenthesis)
The regression results for the model for n = 45 are given below. The figures in
parentheses denote the standard errors.
𝑆𝑒 = (9.5932)(0.0027)(0.2099)
𝑅2 = 0.7897
𝑆𝑒 (0.553)(0.0011) 𝑅2 = 0.0721
𝑌̂𝑖 = 𝛼1 + 𝛼2 𝑋2𝑖 + 𝑣𝑖
1. The following are the regression results for Cobb-Douglas production function
estimated for Taiwan for the period 1958-1972 :
𝑡 = (−0.2011)(4.46642)(3.7214)
Where:
𝐿𝑡 = labour input
𝐾𝑡 = capital Input
Suppose the researcher estimates the following mis-specified equation in which capital
input is omitted:
ln Qt = A1 + A2 ln Lt + ut
i. Find the numerical value of E(𝑎2 ) using the information given in the equation,
where 𝑎2 is the OLS estimator of 𝐴2 . Is it biased upward or downward?
ii. What will be the other consequences of estimating this mis-specified equation?
[Eco(h) 2013]
(𝑠𝑒) = (5.84)(0.067)(0.031)
However, if income, a relevant and important variable, is omitted from the above model,
then the following regression result is obtained:
(𝑠𝑒) = (11.85)(0.118)
3. The Home ministry of a country wants to test if petty crimes (minor thefts) are higher
in states where poverty rates are high. They obtain data on several variables and ran
the following cross section regression for 35 states in the country.
𝑠𝑒 = (3.125)(0.02713)(0.0361)(0.03834)
𝑛 = 35 𝑅2 = 0.6876
PR = Poverty Rates
LR = Literacy Rates
i. A priori what signs are expected for the explanatory variables? Explain your
answers.
ii. Test for overall goodness of fit of the regression (Use a = 5%)
iii. Another model was used and following results were obtained:
ln Ĉi = 2.142 + 0.01186 ln PRi − 0.548 ln LRi + 0.0921 ln SDPi
Se = (1.102) (0.0673) (0.0259) (0.0921)
n = 35, R² = 0.7923
Interpret the coefficient of In SDP
iv. How will you conduct MacKinnon-White-Davidson (MWD) test to select which
model is better? Write all the steps clearly. [Eco(h) 2022]
4. An individual is hired to determine the best location for the next branch of a famous
family restaurant chain 'Foodies'. The individual decides to build a regression model
to explain the gross sales volume at each of the restaurants in the chain as a function
of various descriptions of the location of that branch. He considers the following
regression (original):
Se = (2053) (0.0727) (0.543)
Appendix Tables
Table A.1 Cumulative Binomial Probabilities: B(x; n, p) = Σ (y = 0 to x) b(y; n, p)
a. n = 5
0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99
0 .951 .774 .590 .328 .237 .168 .078 .031 .010 .002 .001 .000 .000 .000 .000
1 .999 .977 .919 .737 .633 .528 .337 .188 .087 .031 .016 .007 .000 .000 .000
x 2 1.000 .999 .991 .942 .896 .837 .683 .500 .317 .163 .104 .058 .009 .001 .000
3 1.000 1.000 1.000 .993 .984 .969 .913 .812 .663 .472 .367 .263 .081 .023 .001
4 1.000 1.000 1.000 1.000 .999 .998 .990 .969 .922 .832 .763 .672 .410 .226 .049
b. n = 10
0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99
0 .904 .599 .349 .107 .056 .028 .006 .001 .000 .000 .000 .000 .000 .000 .000
1 .996 .914 .736 .376 .244 .149 .046 .011 .002 .000 .000 .000 .000 .000 .000
2 1.000 .988 .930 .678 .526 .383 .167 .055 .012 .002 .000 .000 .000 .000 .000
3 1.000 .999 .987 .879 .776 .650 .382 .172 .055 .011 .004 .001 .000 .000 .000
4 1.000 1.000 .998 .967 .922 .850 .633 .377 .166 .047 .020 .006 .000 .000 .000
5 1.000 1.000 1.000 .994 .980 .953 .834 .623 .367 .150 .078 .033 .002 .000 .000
6 1.000 1.000 1.000 .999 .996 .989 .945 .828 .618 .350 .224 .121 .013 .001 .000
7 1.000 1.000 1.000 1.000 1.000 .998 .988 .945 .833 .617 .474 .322 .070 .012 .000
8 1.000 1.000 1.000 1.000 1.000 1.000 .998 .989 .954 .851 .756 .624 .264 .086 .004
9 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .994 .972 .944 .893 .651 .401 .096
c. n = 15
0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99
0 .860 .463 .206 .035 .013 .005 .000 .000 .000 .000 .000 .000 .000 .000 .000
1 .990 .829 .549 .167 .080 .035 .005 .000 .000 .000 .000 .000 .000 .000 .000
2 1.000 .964 .816 .398 .236 .127 .027 .004 .000 .000 .000 .000 .000 .000 .000
3 1.000 .995 .944 .648 .461 .297 .091 .018 .002 .000 .000 .000 .000 .000 .000
4 1.000 .999 .987 .836 .686 .515 .217 .059 .009 .001 .000 .000 .000 .000 .000
5 1.000 1.000 .998 .939 .852 .722 .403 .151 .034 .004 .001 .000 .000 .000 .000
6 1.000 1.000 1.000 .982 .943 .869 .610 .304 .095 .015 .004 .001 .000 .000 .000
x 7 1.000 1.000 1.000 .996 .983 .950 .787 .500 .213 .050 .017 .004 .000 .000 .000
8 1.000 1.000 1.000 .999 .996 .985 .905 .696 .390 .131 .057 .018 .000 .000 .000
9 1.000 1.000 1.000 1.000 .999 .996 .966 .849 .597 .278 .148 .061 .002 .000 .000
10 1.000 1.000 1.000 1.000 1.000 .999 .991 .941 .783 .485 .314 .164 .013 .001 .000
11 1.000 1.000 1.000 1.000 1.000 1.000 .998 .982 .909 .703 .539 .352 .056 .005 .000
12 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .996 .973 .873 .764 .602 .184 .036 .000
13 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .995 .965 .920 .833 .451 .171 .010
14 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .995 .987 .965 .794 .537 .140
(continued )
Table A.1 Cumulative Binomial Probabilities (cont.): B(x; n, p) = Σ (y = 0 to x) b(y; n, p)
d. n = 20
0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99
0 .818 .358 .122 .012 .003 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000
1 .983 .736 .392 .069 .024 .008 .001 .000 .000 .000 .000 .000 .000 .000 .000
2 .999 .925 .677 .206 .091 .035 .004 .000 .000 .000 .000 .000 .000 .000 .000
3 1.000 .984 .867 .411 .225 .107 .016 .001 .000 .000 .000 .000 .000 .000 .000
4 1.000 .997 .957 .630 .415 .238 .051 .006 .000 .000 .000 .000 .000 .000 .000
5 1.000 1.000 .989 .804 .617 .416 .126 .021 .002 .000 .000 .000 .000 .000 .000
6 1.000 1.000 .998 .913 .786 .608 .250 .058 .006 .000 .000 .000 .000 .000 .000
7 1.000 1.000 1.000 .968 .898 .772 .416 .132 .021 .001 .000 .000 .000 .000 .000
8 1.000 1.000 1.000 .990 .959 .887 .596 .252 .057 .005 .001 .000 .000 .000 .000
9 1.000 1.000 1.000 .997 .986 .952 .755 .412 .128 .017 .004 .001 .000 .000 .000
10 1.000 1.000 1.000 .999 .996 .983 .872 .588 .245 .048 .014 .003 .000 .000 .000
11 1.000 1.000 1.000 1.000 .999 .995 .943 .748 .404 .113 .041 .010 .000 .000 .000
12 1.000 1.000 1.000 1.000 1.000 .999 .979 .868 .584 .228 .102 .032 .000 .000 .000
13 1.000 1.000 1.000 1.000 1.000 1.000 .994 .942 .750 .392 .214 .087 .002 .000 .000
14 1.000 1.000 1.000 1.000 1.000 1.000 .998 .979 .874 .584 .383 .196 .011 .000 .000
15 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .994 .949 .762 .585 .370 .043 .003 .000
16 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .984 .893 .775 .589 .133 .016 .000
17 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .996 .965 .909 .794 .323 .075 .001
18 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .992 .976 .931 .608 .264 .017
19 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .997 .988 .878 .642 .182
(continued)
Table A.1 Cumulative Binomial Probabilities (cont.): B(x; n, p) = Σ (y = 0 to x) b(y; n, p)
e. n = 25
0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99
0 .778 .277 .072 .004 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
1 .974 .642 .271 .027 .007 .002 .000 .000 .000 .000 .000 .000 .000 .000 .000
2 .998 .873 .537 .098 .032 .009 .000 .000 .000 .000 .000 .000 .000 .000 .000
3 1.000 .966 .764 .234 .096 .033 .002 .000 .000 .000 .000 .000 .000 .000 .000
4 1.000 .993 .902 .421 .214 .090 .009 .000 .000 .000 .000 .000 .000 .000 .000
5 1.000 .999 .967 .617 .378 .193 .029 .002 .000 .000 .000 .000 .000 .000 .000
6 1.000 1.000 .991 .780 .561 .341 .074 .007 .000 .000 .000 .000 .000 .000 .000
7 1.000 1.000 .998 .891 .727 .512 .154 .022 .001 .000 .000 .000 .000 .000 .000
8 1.000 1.000 1.000 .953 .851 .677 .274 .054 .004 .000 .000 .000 .000 .000 .000
9 1.000 1.000 1.000 .983 .929 .811 .425 .115 .013 .000 .000 .000 .000 .000 .000
10 1.000 1.000 1.000 .994 .970 .902 .586 .212 .034 .002 .000 .000 .000 .000 .000
11 1.000 1.000 1.000 .998 .980 .956 .732 .345 .078 .006 .001 .000 .000 .000 .000
x 12 1.000 1.000 1.000 1.000 .997 .983 .846 .500 .154 .017 .003 .000 .000 .000 .000
13 1.000 1.000 1.000 1.000 .999 .994 .922 .655 .268 .044 .020 .002 .000 .000 .000
14 1.000 1.000 1.000 1.000 1.000 .998 .966 .788 .414 .098 .030 .006 .000 .000 .000
15 1.000 1.000 1.000 1.000 1.000 1.000 .987 .885 .575 .189 .071 .017 .000 .000 .000
16 1.000 1.000 1.000 1.000 1.000 1.000 .996 .946 .726 .323 .149 .047 .000 .000 .000
17 1.000 1.000 1.000 1.000 1.000 1.000 .999 .978 .846 .488 .273 .109 .002 .000 .000
18 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .993 .926 .659 .439 .220 .009 .000 .000
19 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .971 .807 .622 .383 .033 .001 .000
20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .991 .910 .786 .579 .098 .007 .000
21 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .967 .904 .766 .236 .034 .000
22 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .991 .968 .902 .463 .127 .002
23 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .993 .973 .729 .358 .026
24 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .996 .928 .723 .222
Table A.2 Cumulative Poisson Probabilities: F(x; μ) = Σ (y = 0 to x) e^(−μ) μ^y / y!
.1 .2 .3 .4 .5 .6 .7 .8 .9 1.0
0 .905 .819 .741 .670 .607 .549 .497 .449 .407 .368
1 .995 .982 .963 .938 .910 .878 .844 .809 .772 .736
2 1.000 .999 .996 .992 .986 .977 .966 .953 .937 .920
x 3 1.000 1.000 .999 .998 .997 .994 .991 .987 .981
4 1.000 1.000 1.000 .999 .999 .998 .996
5 1.000 1.000 1.000 .999
6 1.000
(continued)
Table A.2 Cumulative Poisson Probabilities (cont.): F(x; μ) = Σ (y = 0 to x) e^(−μ) μ^y / y!
2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 15.0 20.0
0 .135 .050 .018 .007 .002 .001 .000 .000 .000 .000 .000
1 .406 .199 .092 .040 .017 .007 .003 .001 .000 .000 .000
2 .677 .423 .238 .125 .062 .030 .014 .006 .003 .000 .000
3 .857 .647 .433 .265 .151 .082 .042 .021 .010 .000 .000
4 .947 .815 .629 .440 .285 .173 .100 .055 .029 .001 .000
5 .983 .916 .785 .616 .446 .301 .191 .116 .067 .003 .000
6 .995 .966 .889 .762 .606 .450 .313 .207 .130 .008 .000
7 .999 .988 .949 .867 .744 .599 .453 .324 .220 .018 .001
8 1.000 .996 .979 .932 .847 .729 .593 .456 .333 .037 .002
9 .999 .992 .968 .916 .830 .717 .587 .458 .070 .005
10 1.000 .997 .986 .957 .901 .816 .706 .583 .118 .011
11 .999 .995 .980 .947 .888 .803 .697 .185 .021
12 1.000 .998 .991 .973 .936 .876 .792 .268 .039
13 .999 .996 .987 .966 .926 .864 .363 .066
14 1.000 .999 .994 .983 .959 .917 .466 .105
15 .999 .998 .992 .978 .951 .568 .157
16 1.000 .999 .996 .989 .973 .664 .221
17 1.000 .998 .995 .986 .749 .297
18 .999 .998 .993 .819 .381
19 1.000 .999 .997 .875 .470
20 1.000 .998 .917 .559
21 .999 .947 .644
22 1.000 .967 .721
23 .981 .787
24 .989 .843
25 .994 .888
26 .997 .922
27 .998 .948
28 .999 .966
29 1.000 .978
30 .987
31 .992
32 .995
33 .997
34 .999
35 .999
36 1.000
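The Poisson entries follow the same cumulative form and can be checked the same way (again a small sketch assuming scipy):

    # Check a few Table A.2 entries: F(x; mu) = P(X <= x) for X ~ Poisson(mu)
    from scipy.stats import poisson

    print(round(poisson.cdf(3, 1.0), 3))    # .981
    print(round(poisson.cdf(8, 5.0), 3))    # .932
    print(round(poisson.cdf(20, 20.0), 3))  # .559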
Standard Normal Curve Areas: Φ(z) = P(Z ≤ z)
z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
-3.4 .0003 .0003 .0003 .0003 .0003 .0003 .0003 .0003 .0003 .0002
-3.3 .0005 .0005 .0005 .0004 .0004 .0004 .0004 .0004 .0004 .0003
-3.2 .0007 .0007 .0006 .0006 .0006 .0006 .0006 .0005 .0005 .0005
-3.1 .0010 .0009 .0009 .0009 .0008 .0008 .0008 .0008 .0007 .0007
-3.0 .0013 .0013 .0013 .0012 .0012 .0011 .0011 .0011 .0010 .0010
-2.9 .0019 .0018 .0017 .0017 .0016 .0016 .0015 .0015 .0014 .0014
-2.8 .0026 .0025 .0024 .0023 .0023 .0022 .0021 .0021 .0020 .0019
-2.7 .0035 .0034 .0033 .0032 .0031 .0030 .0029 .0028 .0027 .0026
-2.6 .0047 .0045 .0044 .0043 .0041 .0040 .0039 .0038 .0037 .0036
-2.5 .0062 .0060 .0059 .0057 .0055 .0054 .0052 .0051 .0049 .0048
-2.4 .0082 .0080 .0078 .0075 .0073 .0071 .0069 .0068 .0066 .0064
-2.3 .0107 .0104 .0102 .0099 .0096 .0094 .0091 .0089 .0087 .0084
-2.2 .0139 .0136 .0132 .0129 .0125 .0122 .0119 .0116 .0113 .0110
-2.1 .0179 .0174 .0170 .0166 .0162 .0158 .0154 .0150 .0146 .0143
-2.0 .0228 .0222 .0217 .0212 .0207 .0202 .0197 .0192 .0188 .0183
-1.9 .0287 .0281 .0274 .0268 .0262 .0256 .0250 .0244 .0239 .0233
-1.8 .0359 .0352 .0344 .0336 .0329 .0322 .0314 .0307 .0301 .0294
-1.7 .0446 .0436 .0427 .0418 .0409 .0401 .0392 .0384 .0375 .0367
-1.6 .0548 .0537 .0526 .0516 .0505 .0495 .0485 .0475 .0465 .0455
-1.5 .0668 .0655 .0643 .0630 .0618 .0606 .0594 .0582 .0571 .0559
-1.4 .0808 .0793 .0778 .0764 .0749 .0735 .0722 .0708 .0694 .0681
-1.3 .0968 .0951 .0934 .0918 .0901 .0885 .0869 .0853 .0838 .0823
-1.2 .1151 .1131 .1112 .1093 .1075 .1056 .1038 .1020 .1003 .0985
-1.1 .1357 .1335 .1314 .1292 .1271 .1251 .1230 .1210 .1190 .1170
-1.0 .1587 .1562 .1539 .1515 .1492 .1469 .1446 .1423 .1401 .1379
-0.9 .1841 .1814 .1788 .1762 .1736 .1711 .1685 .1660 .1635 .1611
-0.8 .2119 .2090 .2061 .2033 .2005 .1977 .1949 .1922 .1894 .1867
-0.7 .2420 .2389 .2358 .2327 .2296 .2266 .2236 .2206 .2177 .2148
-0.6 .2743 .2709 .2676 .2643 .2611 .2578 .2546 .2514 .2483 .2451
-0.5 .3085 .3050 .3015 .2981 .2946 .2912 .2877 .2843 .2810 .2776
-0.4 .3446 .3409 .3372 .3336 .3300 .3264 .3228 .3192 .3156 .3121
-0.3 .3821 .3783 .3745 .3707 .3669 .3632 .3594 .3557 .3520 .3482
-0.2 .4207 .4168 .4129 .4090 .4052 .4013 .3974 .3936 .3897 .3859
-0.1 .4602 .4562 .4522 .4483 .4443 .4404 .4364 .4325 .4286 .4247
-0.0 .5000 .4960 .4920 .4880 .4840 .4801 .4761 .4721 .4681 .4641
Standard Normal Curve Areas (cont.): Φ(z) = P(Z ≤ z)
z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
0.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
0.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
0.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
0.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
0.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
0.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
0.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
0.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
0.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9278 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998
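The table is read as Φ(z) = P(Z ≤ z); software gives the same areas directly. A short sketch (assuming scipy):

    # Standard normal areas and percentiles
    from scipy.stats import norm

    print(round(norm.cdf(-1.96), 4))   # .0250, matches the z = -1.96 entry
    print(round(norm.cdf(1.96), 4))    # .9750
    print(round(norm.ppf(0.975), 2))   # 1.96, the z value leaving .025 in the upper tail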
[Figure: t density curve, shaded upper-tail area α to the right of t_{α,ν}]
Table A.6 Tolerance Critical Values for Normal Population Distributions
(Columns: two-sided intervals first, then one-sided intervals; within each, confidence level 95% then 99%; within each confidence level, the factors k for capturing 90%, 95% and 99% of the population. Rows are the sample size n.)
2 32.019 37.674 48.430 160.193 188.491 242.300 20.581 26.260 37.094 103.029 131.426 185.617
3 8.380 9.916 12.861 18.930 22.401 29.055 6.156 7.656 10.553 13.995 17.370 23.896
4 5.369 6.370 8.299 9.398 11.150 14.527 4.162 5.144 7.042 7.380 9.083 12.387
5 4.275 5.079 6.634 6.612 7.855 10.260 3.407 4.203 5.741 5.362 6.578 8.939
6 3.712 4.414 5.775 5.337 6.345 8.301 3.006 3.708 5.062 4.411 5.406 7.335
7 3.369 4.007 5.248 4.613 5.488 7.187 2.756 3.400 4.642 3.859 4.728 6.412
8 3.136 3.732 4.891 4.147 4.936 6.468 2.582 3.187 4.354 3.497 4.285 5.812
9 2.967 3.532 4.631 3.822 4.550 5.966 2.454 3.031 4.143 3.241 3.972 5.389
10 2.839 3.379 4.433 3.582 4.265 5.594 2.355 2.911 3.981 3.048 3.738 5.074
11 2.737 3.259 4.277 3.397 4.045 5.308 2.275 2.815 3.852 2.898 3.556 4.829
12 2.655 3.162 4.150 3.250 3.870 5.079 2.210 2.736 3.747 2.777 3.410 4.633
13 2.587 3.081 4.044 3.130 3.727 4.893 2.155 2.671 3.659 2.677 3.290 4.472
14 2.529 3.012 3.955 3.029 3.608 4.737 2.109 2.615 3.585 2.593 3.189 4.337
15 2.480 2.954 3.878 2.945 3.507 4.605 2.068 2.566 3.520 2.522 3.102 4.222
16 2.437 2.903 3.812 2.872 3.421 4.492 2.033 2.524 3.464 2.460 3.028 4.123
17 2.400 2.858 3.754 2.808 3.345 4.393 2.002 2.486 3.414 2.405 2.963 4.037
18 2.366 2.819 3.702 2.753 3.279 4.307 1.974 2.453 3.370 2.357 2.905 3.960
19 2.337 2.784 3.656 2.703 3.221 4.230 1.949 2.423 3.331 2.314 2.854 3.892
20 2.310 2.752 3.615 2.659 3.168 4.161 1.926 2.396 3.295 2.276 2.808 3.832
25 2.208 2.631 3.457 2.494 2.972 3.904 1.838 2.292 3.158 2.129 2.633 3.601
30 2.140 2.549 3.350 2.385 2.841 3.733 1.777 2.220 3.064 2.030 2.516 3.447
35 2.090 2.490 3.272 2.306 2.748 3.611 1.732 2.167 2.995 1.957 2.430 3.334
40 2.052 2.445 3.213 2.247 2.677 3.518 1.697 2.126 2.941 1.902 2.364 3.249
45 2.021 2.408 3.165 2.200 2.621 3.444 1.669 2.092 2.898 1.857 2.312 3.180
50 1.996 2.379 3.126 2.162 2.576 3.385 1.646 2.065 2.863 1.821 2.269 3.125
60 1.958 2.333 3.066 2.103 2.506 3.293 1.609 2.022 2.807 1.764 2.202 3.038
70 1.929 2.299 3.021 2.060 2.454 3.225 1.581 1.990 2.765 1.722 2.153 2.974
80 1.907 2.272 2.986 2.026 2.414 3.173 1.559 1.965 2.733 1.688 2.114 2.924
90 1.889 2.251 2.958 1.999 2.382 3.130 1.542 1.944 2.706 1.661 2.082 2.883
100 1.874 2.233 2.934 1.977 2.355 3.096 1.527 1.927 2.684 1.639 2.056 2.850
150 1.825 2.175 2.859 1.905 2.270 2.983 1.478 1.870 2.611 1.566 1.971 2.741
200 1.798 2.143 2.816 1.865 2.222 2.921 1.450 1.837 2.570 1.524 1.923 2.679
250 1.780 2.121 2.788 1.839 2.191 2.880 1.431 1.815 2.542 1.496 1.891 2.638
300 1.767 2.106 2.767 1.820 2.169 2.850 1.417 1.800 2.522 1.476 1.868 2.608
∞ 1.645 1.960 2.576 1.645 1.960 2.576 1.282 1.645 2.326 1.282 1.645 2.326
Table A.7 Critical Values for Chi-Squared Distributions
[Figure: χ² density curve, shaded upper-tail area = α to the right of χ²_{α,ν}]
ν \ α    .995   .99   .975   .95   .90   .10   .05   .025   .01   .005
1 0.000 0.000 0.001 0.004 0.016 2.706 3.843 5.025 6.637 7.882
2 0.010 0.020 0.051 0.103 0.211 4.605 5.992 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.344 12.837
4 0.207 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.832 15.085 16.748
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.440 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.012 18.474 20.276
8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.534 20.090 21.954
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.022 21.665 23.587
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.724 26.755
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.041 19.812 22.362 24.735 27.687 29.817
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.600 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.577 32.799
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.407 7.564 8.682 10.085 24.769 27.587 30.190 33.408 35.716
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.843 7.632 8.906 10.117 11.651 27.203 30.143 32.852 36.190 38.580
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.033 8.897 10.283 11.591 13.240 29.615 32.670 35.478 38.930 41.399
22 8.643 9.542 10.982 12.338 14.042 30.813 33.924 36.781 40.289 42.796
23 9.260 10.195 11.688 13.090 14.848 32.007 35.172 38.075 41.637 44.179
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.558
25 10.519 11.523 13.120 14.611 16.473 34.381 37.652 40.646 44.313 46.925
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.807 12.878 14.573 16.151 18.114 36.741 40.113 43.194 46.962 49.642
28 12.461 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.993
29 13.120 14.256 16.147 17.708 19.768 39.087 42.557 45.772 49.586 52.333
30 13.787 14.954 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
31 14.457 15.655 17.538 19.280 21.433 41.422 44.985 48.231 52.190 55.000
32 15.134 16.362 18.291 20.072 22.271 42.585 46.194 49.480 53.486 56.328
33 15.814 17.073 19.046 20.866 23.110 43.745 47.400 50.724 54.774 57.646
34 16.501 17.789 19.806 21.664 23.952 44.903 48.602 51.966 56.061 58.964
35 17.191 18.508 20.569 22.465 24.796 46.059 49.802 53.203 57.340 60.272
36 17.887 19.233 21.336 23.269 25.643 47.212 50.998 54.437 58.619 61.581
37 18.584 19.960 22.105 24.075 26.492 48.363 52.192 55.667 59.891 62.880
38 19.289 20.691 22.878 24.884 27.343 49.513 53.384 56.896 61.162 64.181
39 19.994 21.425 23.654 25.695 28.196 50.660 54.572 58.119 62.426 65.473
40 20.706 22.164 24.433 26.509 29.050 51.805 55.758 59.342 63.691 66.766
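Each entry is the value χ²_{α,ν} that leaves area α in the upper tail of the chi-squared distribution with ν degrees of freedom. With scipy, chi2.ppf(1 − α, ν) reproduces the table (a quick check):

    # Chi-squared critical values
    from scipy.stats import chi2

    print(round(chi2.ppf(0.95, 10), 3))   # about 18.307  (alpha = .05, df = 10)
    print(round(chi2.ppf(0.90, 20), 3))   # about 28.412  (alpha = .10, df = 20)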
t Curve Tail Areas (area under the t curve to the right of t, for ν degrees of freedom)
t \ ν    1    2    3    4    5    6    7    8    9    10    11    12    13    14    15    16    17    18
0.0 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500
0.1 .468 .465 .463 .463 .462 .462 .462 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461
0.2 .437 .430 .427 .426 .425 .424 .424 .423 .423 .423 .423 .422 .422 .422 .422 .422 .422 .422
0.3 .407 .396 .392 .390 .388 .387 .386 .386 .386 .385 .385 .385 .384 .384 .384 .384 .384 .384
0.4 .379 .364 .358 .355 .353 .352 .351 .350 .349 .349 .348 .348 .348 .347 .347 .347 .347 .347
0.5 .352 .333 .326 .322 .319 .317 .316 .315 .315 .314 .313 .313 .313 .312 .312 .312 .312 .312
0.6 .328 .305 .295 .290 .287 .285 .284 .283 .282 .281 .280 .280 .279 .279 .279 .278 .278 .278
0.7 .306 .278 .267 .261 .258 .255 .253 .252 .251 .250 .249 .249 .248 .247 .247 .247 .247 .246
0.8 .285 .254 .241 .234 .230 .227 .225 .223 .222 .221 .220 .220 .219 .218 .218 .218 .217 .217
0.9 .267 .232 .217 .210 .205 .201 .199 .197 .196 .195 .194 .193 .192 .191 .191 .191 .190 .190
1.0 .250 .211 .196 .187 .182 .178 .175 .173 .172 .170 .169 .169 .168 .167 .167 .166 .166 .165
1.1 .235 .193 .176 .167 .162 .157 .154 .152 .150 .149 .147 .146 .146 .144 .144 .144 .143 .143
1.2 .221 .177 .158 .148 .142 .138 .135 .132 .130 .129 .128 .127 .126 .124 .124 .124 .123 .123
1.3 .209 .162 .142 .132 .125 .121 .117 .115 .113 .111 .110 .109 .108 .107 .107 .106 .105 .105
1.4 .197 .148 .128 .117 .110 .106 .102 .100 .098 .096 .095 .093 .092 .091 .091 .090 .090 .089
1.5 .187 .136 .115 .104 .097 .092 .089 .086 .084 .082 .081 .080 .079 .077 .077 .077 .076 .075
1.6 .178 .125 .104 .092 .085 .080 .077 .074 .072 .070 .069 .068 .067 .065 .065 .065 .064 .064
1.7 .169 .116 .094 .082 .075 .070 .065 .064 .062 .060 .059 .057 .056 .055 .055 .054 .054 .053
1.8 .161 .107 .085 .073 .066 .061 .057 .055 .053 .051 .050 .049 .048 .046 .046 .045 .045 .044
1.9 .154 .099 .077 .065 .058 .053 .050 .047 .045 .043 .042 .041 .040 .038 .038 .038 .037 .037
2.0 .148 .092 .070 .058 .051 .046 .043 .040 .038 .037 .035 .034 .033 .032 .032 .031 .031 .030
2.1 .141 .085 .063 .052 .045 .040 .037 .034 .033 .031 .030 .029 .028 .027 .027 .026 .025 .025
2.2 .136 .079 .058 .046 .040 .035 .032 .029 .028 .026 .025 .024 .023 .022 .022 .021 .021 .021
2.3 .131 .074 .052 .041 .035 .031 .027 .025 .023 .022 .021 .020 .019 .018 .018 .018 .017 .017
2.4 .126 .069 .048 .037 .031 .027 .024 .022 .020 .019 .018 .017 .016 .015 .015 .014 .014 .014
2.5 .121 .065 .044 .033 .027 .023 .020 .018 .017 .016 .015 .014 .013 .012 .012 .012 .011 .011
2.6 .117 .061 .040 .030 .024 .020 .018 .016 .014 .013 .012 .012 .011 .010 .010 .010 .009 .009
2.7 .113 .057 .037 .027 .021 .018 .015 .014 .012 .011 .010 .010 .009 .008 .008 .008 .008 .007
2.8 .109 .054 .034 .024 .019 .016 .013 .012 .010 .009 .009 .008 .008 .007 .007 .006 .006 .006
2.9 .106 .051 .031 .022 .017 .014 .011 .010 .009 .008 .007 .007 .006 .005 .005 .005 .005 .005
3.0 .102 .048 .029 .020 .015 .012 .010 .009 .007 .007 .006 .006 .005 .004 .004 .004 .004 .004
3.1 .099 .045 .027 .018 .013 .011 .009 .007 .006 .006 .005 .005 .004 .004 .004 .003 .003 .003
3.2 .096 .043 .025 .016 .012 .009 .008 .006 .005 .005 .004 .004 .003 .003 .003 .003 .003 .002
3.3 .094 .040 .023 .015 .011 .008 .007 .005 .005 .004 .004 .003 .003 .002 .002 .002 .002 .002
3.4 .091 .038 .021 .014 .010 .007 .006 .005 .004 .003 .003 .003 .002 .002 .002 .002 .002 .002
3.5 .089 .036 .020 .012 .009 .006 .005 .004 .003 .003 .002 .002 .002 .002 .002 .001 .001 .001
3.6 .086 .035 .018 .011 .008 .006 .004 .004 .003 .002 .002 .002 .002 .001 .001 .001 .001 .001
3.7 .084 .033 .017 .010 .007 .005 .004 .003 .002 .002 .002 .002 .001 .001 .001 .001 .001 .001
3.8 .082 .031 .016 .010 .006 .004 .003 .003 .002 .002 .001 .001 .001 .001 .001 .001 .001 .001
3.9 .080 .030 .015 .009 .006 .004 .003 .002 .002 .001 .001 .001 .001 .001 .001 .001 .001 .001
4.0 .078 .029 .014 .008 .005 .004 .003 .002 .002 .001 .001 .001 .001 .001 .001 .001 .000 .000
t Curve Tail Areas (cont.)
t \ ν    19   20   21   22   23   24   25   26   27   28   29   30   35   40   60   120   ∞ (= z)
0.0 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500 .500
0.1 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .461 .460 .460 .460 .460 .460
0.2 .422 .422 .422 .422 .422 .422 .422 .422 .421 .421 .421 .421 .421 .421 .421 .421 .421
0.3 .384 .384 .384 .383 .383 .383 .383 .383 .383 .383 .383 .383 .383 .383 .383 .382 .382
0.4 .347 .347 .347 .347 .346 .346 .346 .346 .346 .346 .346 .346 .346 .346 .345 .345 .345
0.5 .311 .311 .311 .311 .311 .311 .311 .311 .311 .310 .310 .310 .310 .310 .309 .309 .309
0.6 .278 .278 .278 .277 .277 .277 .277 .277 .277 .277 .277 .277 .276 .276 .275 .275 .274
0.7 .246 .246 .246 .246 .245 .245 .245 .245 .245 .245 .245 .245 .244 .244 .243 .243 .242
0.8 .217 .217 .216 .216 .216 .216 .216 .215 .215 .215 .215 .215 .215 .214 .213 .213 .212
0.9 .190 .189 .189 .189 .189 .189 .188 .188 .188 .188 .188 .188 .187 .187 .186 .185 .184
1.0 .165 .165 .164 .164 .164 .164 .163 .163 .163 .163 .163 .163 .162 .162 .161 .160 .159
1.1 .143 .142 .142 .142 .141 .141 .141 .141 .141 .140 .140 .140 .139 .139 .138 .137 .136
1.2 .122 .122 .122 .121 .121 .121 .121 .120 .120 .120 .120 .120 .119 .119 .117 .116 .115
1.3 .105 .104 .104 .104 .103 .103 .103 .103 .102 .102 .102 .102 .101 .101 .099 .098 .097
1.4 .089 .089 .088 .088 .087 .087 .087 .087 .086 .086 .086 .086 .085 .085 .083 .082 .081
1.5 .075 .075 .074 .074 .074 .073 .073 .073 .073 .072 .072 .072 .071 .071 .069 .068 .067
1.6 .063 .063 .062 .062 .062 .061 .061 .061 .061 .060 .060 .060 .059 .059 .057 .056 .055
1.7 .053 .052 .052 .052 .051 .051 .051 .051 .050 .050 .050 .050 .049 .048 .047 .046 .045
1.8 .044 .043 .043 .043 .042 .042 .042 .042 .042 .041 .041 .041 .040 .040 .038 .037 .036
1.9 .036 .036 .036 .035 .035 .035 .035 .034 .034 .034 .034 .034 .033 .032 .031 .030 .029
2.0 .030 .030 .029 .029 .029 .028 .028 .028 .028 .028 .027 .027 .027 .026 .025 .024 .023
2.1 .025 .024 .024 .024 .023 .023 .023 .023 .023 .022 .022 .022 .022 .021 .020 .019 .018
2.2 .020 .020 .020 .019 .019 .019 .019 .018 .018 .018 .018 .018 .017 .017 .016 .015 .014
2.3 .016 .016 .016 .016 .015 .015 .015 .015 .015 .015 .014 .014 .014 .013 .012 .012 .011
2.4 .013 .013 .013 .013 .012 .012 .012 .012 .012 .012 .012 .011 .011 .011 .010 .009 .008
2.5 .011 .011 .010 .010 .010 .010 .010 .010 .009 .009 .009 .009 .009 .008 .008 .007 .006
2.6 .009 .009 .008 .008 .008 .008 .008 .008 .007 .007 .007 .007 .007 .007 .006 .005 .005
2.7 .007 .007 .007 .007 .006 .006 .006 .006 .006 .006 .006 .006 .005 .005 .004 .004 .003
2.8 .006 .006 .005 .005 .005 .005 .005 .005 .005 .005 .005 .004 .004 .004 .003 .003 .003
2.9 .005 .004 .004 .004 .004 .004 .004 .004 .004 .004 .004 .003 .003 .003 .003 .002 .002
3.0 .004 .004 .003 .003 .003 .003 .003 .003 .003 .003 .003 .003 .002 .002 .002 .002 .001
3.1 .003 .003 .003 .003 .003 .002 .002 .002 .002 .002 .002 .002 .002 .002 .001 .001 .001
3.2 .002 .002 .002 .002 .002 .002 .002 .002 .002 .002 .002 .002 .001 .001 .001 .001 .001
3.3 .002 .002 .002 .002 .002 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000
3.4 .002 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000
3.5 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000 .000
3.6 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000 .000 .000 .000
3.7 .001 .001 .001 .001 .001 .001 .001 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000
3.8 .001 .001 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
3.9 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
4.0 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000 .000
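These entries give the area to the right of t under the t curve with ν degrees of freedom; the ν = ∞ column coincides with the standard normal. In scipy this is the survival function (a small check):

    # Upper-tail areas P(T > t) for t distributions
    from scipy.stats import t as t_dist

    print(round(t_dist.sf(2.0, 10), 3))    # .037  (t = 2.0, nu = 10)
    print(round(t_dist.sf(1.7, 120), 3))   # .046  (t = 1.7, nu = 120)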
Critical Values for F Distributions: F_{α, ν1, ν2} cuts off upper-tail area α (ν1 = numerator df, ν2 = denominator df)
(Each block of four rows gives α = .100, .050, .010, .001; the single number printed between the .050 and .010 rows of a block is ν2 for that block.)
α \ ν1      1       2       3       4       5       6       7       8       9
.100 39.86 49.50 53.59 55.83 57.24 58.20 58.91 59.44 59.86
.050 161.45 199.50 215.71 224.58 230.16 233.99 236.77 238.88 240.54
1
.010 4052.20 4999.50 5403.40 5624.60 5763.60 5859.00 5928.40 5981.10 6022.50
.001 405,284 500,000 540,379 562,500 576,405 585,937 592,873 598,144 602,284
.100 8.53 9.00 9.16 9.24 9.29 9.33 9.35 9.37 9.38
.050 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38
2
.010 98.50 99.00 99.17 99.25 99.30 99.33 99.36 99.37 99.39
.001 998.50 999.00 999.17 999.25 999.30 999.33 999.36 999.37 999.39
.100 5.54 5.46 5.39 5.34 5.31 5.28 5.27 5.25 5.24
.050 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81
3
.010 34.12 30.82 29.46 28.71 28.24 27.91 27.67 27.49 27.35
.001 167.03 148.50 141.11 137.10 134.58 132.85 131.58 130.62 129.86
.100 4.54 4.32 4.19 4.11 4.05 4.01 3.98 3.95 3.94
.050 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00
4
.010 21.20 18.00 16.69 15.98 15.52 15.21 14.98 14.80 14.66
.001 74.14 61.25 56.18 53.44 51.71 50.53 49.66 49.00 48.47
.100 4.06 3.78 3.62 3.52 3.45 3.40 3.37 3.34 3.32
.050 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77
5
.010 16.26 13.27 12.06 11.39 10.97 10.67 10.46 10.29 10.16
.001 47.18 37.12 33.20 31.09 29.75 28.83 28.16 27.65 27.24
.100 3.78 3.46 3.29 3.18 3.11 3.05 3.01 2.98 2.96
.050 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10
6
.010 13.75 10.92 9.78 9.15 8.75 8.47 8.26 8.10 7.98
.001 35.51 27.00 23.70 21.92 20.80 20.03 19.46 19.03 18.69
.100 3.59 3.26 3.07 2.96 2.88 2.83 2.78 2.75 2.72
.050 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68
7
.010 12.25 9.55 8.45 7.85 7.46 7.19 6.99 6.84 6.72
.001 29.25 21.69 18.77 17.20 16.21 15.52 15.02 14.63 14.33
.100 3.46 3.11 2.92 2.81 2.73 2.67 2.62 2.59 2.56
.050 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39
8
.010 11.26 8.65 7.59 7.01 6.63 6.37 6.18 6.03 5.91
.001 25.41 18.49 15.83 14.39 13.48 12.86 12.40 12.05 11.77
.100 3.36 3.01 2.81 2.69 2.61 2.55 2.51 2.47 2.44
.050 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18
9
.010 10.56 8.02 6.99 6.42 6.06 5.80 5.61 5.47 5.35
.001 22.86 16.39 13.90 12.56 11.71 11.13 10.70 10.37 10.11
.100 3.29 2.92 2.73 2.61 2.52 2.46 2.41 2.38 2.35
.050 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02
10
.010 10.04 7.56 6.55 5.99 5.64 5.39 5.20 5.06 4.94
.001 21.04 14.91 12.55 11.28 10.48 9.93 9.52 9.20 8.96
.100 3.23 2.86 2.66 2.54 2.45 2.39 2.34 2.30 2.27
.050 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90
11
.010 9.65 7.21 6.22 5.67 5.32 5.07 4.89 4.74 4.63
.001 19.69 13.81 11.56 10.35 9.58 9.05 8.66 8.35 8.12
.100 3.18 2.81 2.61 2.48 2.39 2.33 2.28 2.24 2.21
.050 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80
12
.010 9.33 6.93 5.95 5.41 5.06 4.82 4.64 4.50 4.39
.001 18.64 12.97 10.80 9.63 8.89 8.38 8.00 7.71 7.48
Critical Values for F Distributions (cont.): ν1 = numerator df
(Rows: for each ν2 = 1, 2, ..., 12 in order, four rows giving α = .100, .050, .010, .001.)
ν1:      10     12     15     20     25     30     40     50     60     120     1000
60.19 60.71 61.22 61.74 62.05 62.26 62.53 62.69 62.79 63.06 63.30
241.88 243.91 245.95 248.01 249.26 250.10 251.14 251.77 252.20 253.25 254.19
6055.80 6106.30 6157.30 6208.70 6239.80 6260.60 6286.80 6302.50 6313.00 6339.40 6362.70
605,621 610,668 615,764 620,908 624,017 626,099 628,712 630,285 631,337 633,972 636,301
9.39 9.41 9.42 9.44 9.45 9.46 9.47 9.47 9.47 9.48 9.49
19.40 19.41 19.43 19.45 19.46 19.46 19.47 19.48 19.48 19.49 19.49
99.40 99.42 99.43 99.45 99.46 99.47 99.47 99.48 99.48 99.49 99.50
999.40 999.42 999.43 999.45 999.46 999.47 999.47 999.48 999.48 999.49 999.50
5.23 5.22 5.20 5.18 5.17 5.17 5.16 5.15 5.15 5.14 5.13
8.79 8.74 8.70 8.66 8.63 8.62 8.59 8.58 8.57 8.55 8.53
27.23 27.05 26.87 26.69 26.58 26.50 26.41 26.35 26.32 26.22 26.14
129.25 128.32 127.37 126.42 125.84 125.45 124.96 124.66 124.47 123.97 123.53
3.92 3.90 3.87 3.84 3.83 3.82 3.80 3.80 3.79 3.78 3.76
5.96 5.91 5.86 5.80 5.77 5.75 5.72 5.70 5.69 5.66 5.63
14.55 14.37 14.20 14.02 13.91 13.84 13.75 13.69 13.65 13.56 13.47
48.05 47.41 46.76 46.10 45.70 45.43 45.09 44.88 44.75 44.40 44.09
3.30 3.27 3.24 3.21 3.19 3.17 3.16 3.15 3.14 3.12 3.11
4.74 4.68 4.62 4.56 4.52 4.50 4.46 4.44 4.43 4.40 4.37
10.05 9.89 9.72 9.55 9.45 9.38 9.29 9.24 9.20 9.11 9.03
26.92 26.42 25.91 25.39 25.08 24.87 24.60 24.44 24.33 24.06 23.82
2.94 2.90 2.87 2.84 2.81 2.80 2.78 2.77 2.76 2.74 2.72
4.06 4.00 3.94 3.87 3.83 3.81 3.77 3.75 3.74 3.70 3.67
7.87 7.72 7.56 7.40 7.30 7.23 7.14 7.09 7.06 6.97 6.89
18.41 17.99 17.56 17.12 16.85 16.67 16.44 16.31 16.21 15.98 15.77
2.70 2.67 2.63 2.59 2.57 2.56 2.54 2.52 2.51 2.49 2.47
3.64 3.57 3.51 3.44 3.40 3.38 3.34 3.32 3.30 3.27 3.23
6.62 6.47 6.31 6.16 6.06 5.99 5.91 5.86 5.82 5.74 5.66
14.08 13.71 13.32 12.93 12.69 12.53 12.33 12.20 12.12 11.91 11.72
2.54 2.50 2.46 2.42 2.40 2.38 2.36 2.35 2.34 2.32 2.30
3.35 3.28 3.22 3.15 3.11 3.08 3.04 3.02 3.01 2.97 2.93
5.81 5.67 5.52 5.36 5.26 5.20 5.12 5.07 5.03 4.95 4.87
11.54 11.19 10.84 10.48 10.26 10.11 9.92 9.80 9.73 9.53 9.36
2.42 2.38 2.34 2.30 2.27 2.25 2.23 2.22 2.21 2.18 2.16
3.14 3.07 3.01 2.94 2.89 2.86 2.83 2.80 2.79 2.75 2.71
5.26 5.11 4.96 4.81 4.71 4.65 4.57 4.52 4.48 4.40 4.32
9.89 9.57 9.24 8.90 8.69 8.55 8.37 8.26 8.19 8.00 7.84
2.32 2.28 2.24 2.20 2.17 2.16 2.13 2.12 2.11 2.08 2.06
2.98 2.91 2.85 2.77 2.73 2.70 2.66 2.64 2.62 2.58 2.54
4.85 4.71 4.56 4.41 4.31 4.25 4.17 4.12 4.08 4.00 3.92
8.75 8.45 8.13 7.80 7.60 7.47 7.30 7.19 7.12 6.94 6.78
2.25 2.21 2.17 2.12 2.10 2.08 2.05 2.04 2.03 2.00 1.98
2.85 2.79 2.72 2.65 2.60 2.57 2.53 2.51 2.49 2.45 2.41
4.54 4.40 4.25 4.10 4.01 3.94 3.86 3.81 3.78 3.69 3.61
7.92 7.63 7.32 7.01 6.81 6.68 6.52 6.42 6.35 6.18 6.02
2.19 2.15 2.10 2.06 2.03 2.01 1.99 1.97 1.96 1.93 1.91
2.75 2.69 2.62 2.54 2.50 2.47 2.43 2.40 2.38 2.34 2.30
4.30 4.16 4.01 3.86 3.76 3.70 3.62 3.57 3.54 3.45 3.37
7.29 7.00 6.71 6.40 6.22 6.09 5.93 5.83 5.76 5.59 5.44
Critical Values for F Distributions (cont.): ν1 = numerator df
(Blocks continue for ν2 = 13 to 24; rows in each block are α = .100, .050, .010, .001, with ν2 printed between the .050 and .010 rows.)
α \ ν1      1       2       3       4       5       6       7       8       9
.100 3.14 2.76 2.56 2.43 2.35 2.28 2.23 2.20 2.16
.050 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71
13
.010 9.07 6.70 5.74 5.21 4.86 4.62 4.44 4.30 4.19
.001 17.82 12.31 10.21 9.07 8.35 7.86 7.49 7.21 6.98
.100 3.10 2.73 2.52 2.39 2.31 2.24 2.19 2.15 2.12
.050 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65
14
.010 8.86 6.51 5.56 5.04 4.69 4.46 4.28 4.14 4.03
.001 17.14 11.78 9.73 8.62 7.92 7.44 7.08 6.80 6.58
.100 3.07 2.70 2.49 2.36 2.27 2.21 2.16 2.12 2.09
.050 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59
15
.010 8.68 6.36 5.42 4.89 4.56 4.32 4.14 4.00 3.89
.001 16.59 11.34 9.34 8.25 7.57 7.09 6.74 6.47 6.26
.100 3.05 2.67 2.46 2.33 2.24 2.18 2.13 2.09 2.06
.050 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54
16
.010 8.53 6.23 5.29 4.77 4.44 4.20 4.03 3.89 3.78
.001 16.12 10.97 9.01 7.94 7.27 6.80 6.46 6.19 5.98
.100 3.03 2.64 2.44 2.31 2.22 2.15 2.10 2.06 2.03
.050 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49
17
.010 8.40 6.11 5.19 4.67 4.34 4.10 3.93 3.79 3.68
.001 15.72 10.66 8.73 7.68 7.02 6.56 6.22 5.96 5.75
.100 3.01 2.62 2.42 2.29 2.20 2.13 2.08 2.04 2.00
.050 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46
18
.010 8.29 6.01 5.09 4.58 4.25 4.01 3.84 3.71 3.60
.001 15.38 10.39 8.49 7.46 6.81 6.35 6.02 5.76 5.56
.100 2.99 2.61 2.40 2.27 2.18 2.11 2.06 2.02 1.98
.050 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42
19
.010 8.18 5.93 5.01 4.50 4.17 3.94 3.77 3.63 3.52
.001 15.08 10.16 8.28 7.27 6.62 6.18 5.85 5.59 5.39
.100 2.97 2.59 2.38 2.25 2.16 2.09 2.04 2.00 1.96
.050 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39
20
.010 8.10 5.85 4.94 4.43 4.10 3.87 3.70 3.56 3.46
.001 14.82 9.95 8.10 7.10 6.46 6.02 5.69 5.44 5.24
.100 2.96 2.57 2.36 2.23 2.14 2.08 2.02 1.98 1.95
.050 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37
21
.010 8.02 5.78 4.87 4.37 4.04 3.81 3.64 3.51 3.40
.001 14.59 9.77 7.94 6.95 6.32 5.88 5.56 5.31 5.11
.100 2.95 2.56 2.35 2.22 2.13 2.06 2.01 1.97 1.93
.050 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34
22
.010 7.95 5.72 4.82 4.31 3.99 3.76 3.59 3.45 3.35
.001 14.38 9.61 7.80 6.81 6.19 5.76 5.44 5.19 4.99
.100 2.94 2.55 2.34 2.21 2.11 2.05 1.99 1.95 1.92
.050 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32
23
.010 7.88 5.66 4.76 4.26 3.94 3.71 3.54 3.41 3.30
.001 14.20 9.47 7.67 6.70 6.08 5.65 5.33 5.09 4.89
.100 2.93 2.54 2.33 2.19 2.10 2.04 1.98 1.94 1.91
.050 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30
24
.010 7.82 5.61 4.72 4.22 3.90 3.67 3.50 3.36 3.26
.001 14.03 9.34 7.55 6.59 5.98 5.55 5.23 4.99 4.80
Critical Values for F Distributions (cont.): ν1 = numerator df
(Rows: for each ν2 = 13, 14, ..., 24 in order, four rows giving α = .100, .050, .010, .001.)
ν1:      10     12     15     20     25     30     40     50     60     120     1000
2.14 2.10 2.05 2.01 1.98 1.96 1.93 1.92 1.90 1.88 1.85
2.67 2.60 2.53 2.46 2.41 2.38 2.34 2.31 2.30 2.25 2.21
4.10 3.96 3.82 3.66 3.57 3.51 3.43 3.38 3.34 3.25 3.18
6.80 6.52 6.23 5.93 5.75 5.63 5.47 5.37 5.30 5.14 4.99
2.10 2.05 2.01 1.96 1.93 1.91 1.89 1.87 1.86 1.83 1.80
2.60 2.53 2.46 2.39 2.34 2.31 2.27 2.24 2.22 2.18 2.14
3.94 3.80 3.66 3.51 3.41 3.35 3.27 3.22 3.18 3.09 3.02
6.40 6.13 5.85 5.56 5.38 5.25 5.10 5.00 4.94 4.77 4.62
2.06 2.02 1.97 1.92 1.89 1.87 1.85 1.83 1.82 1.79 1.76
2.54 2.48 2.40 2.33 2.28 2.25 2.20 2.18 2.16 2.11 2.07
3.80 3.67 3.52 3.37 3.28 3.21 3.13 3.08 3.05 2.96 2.88
6.08 5.81 5.54 5.25 5.07 4.95 4.80 4.70 4.64 4.47 4.33
2.03 1.99 1.94 1.89 1.86 1.84 1.81 1.79 1.78 1.75 1.72
2.49 2.42 2.35 2.28 2.23 2.19 2.15 2.12 2.11 2.06 2.02
3.69 3.55 3.41 3.26 3.16 3.10 3.02 2.97 2.93 2.84 2.76
5.81 5.55 5.27 4.99 4.82 4.70 4.54 4.45 4.39 4.23 4.08
2.00 1.96 1.91 1.86 1.83 1.81 1.78 1.76 1.75 1.72 1.69
2.45 2.38 2.31 2.23 2.18 2.15 2.10 2.08 2.06 2.01 1.97
3.59 3.46 3.31 3.16 3.07 3.00 2.92 2.87 2.83 2.75 2.66
5.58 5.32 5.05 4.78 4.60 4.48 4.33 4.24 4.18 4.02 3.87
1.98 1.93 1.89 1.84 1.80 1.78 1.75 1.74 1.72 1.69 1.66
2.41 2.34 2.27 2.19 2.14 2.11 2.06 2.04 2.02 1.97 1.92
3.51 3.37 3.23 3.08 2.98 2.92 2.84 2.78 2.75 2.66 2.58
5.39 5.13 4.87 4.59 4.42 4.30 4.15 4.06 4.00 3.84 3.69
1.96 1.91 1.86 1.81 1.78 1.76 1.73 1.71 1.70 1.67 1.64
2.38 2.31 2.23 2.16 2.11 2.07 2.03 2.00 1.98 1.93 1.88
3.43 3.30 3.15 3.00 2.91 2.84 2.76 2.71 2.67 2.58 2.50
5.22 4.97 4.70 4.43 4.26 4.14 3.99 3.90 3.84 3.68 3.53
1.94 1.89 1.84 1.79 1.76 1.74 1.71 1.69 1.68 1.64 1.61
2.35 2.28 2.20 2.12 2.07 2.04 1.99 1.97 1.95 1.90 1.85
3.37 3.23 3.09 2.94 2.84 2.78 2.69 2.64 2.61 2.52 2.43
5.08 4.82 4.56 4.29 4.12 4.00 3.86 3.77 3.70 3.54 3.40
1.92 1.87 1.83 1.78 1.74 1.72 1.69 1.67 1.66 1.62 1.59
2.32 2.25 2.18 2.10 2.05 2.01 1.96 1.94 1.92 1.87 1.82
3.31 3.17 3.03 2.88 2.79 2.72 2.64 2.58 2.55 2.46 2.37
4.95 4.70 4.44 4.17 4.00 3.88 3.74 3.64 3.58 3.42 3.28
1.90 1.86 1.81 1.76 1.73 1.70 1.67 1.65 1.64 1.60 1.57
2.30 2.23 2.15 2.07 2.02 1.98 1.94 1.91 1.89 1.84 1.79
3.26 3.12 2.98 2.83 2.73 2.67 2.58 2.53 2.50 2.40 2.32
4.83 4.58 4.33 4.06 3.89 3.78 3.63 3.54 3.48 3.32 3.17
1.89 1.84 1.80 1.74 1.71 1.69 1.66 1.64 1.62 1.59 1.55
2.27 2.20 2.13 2.05 2.00 1.96 1.91 1.88 1.86 1.81 1.76
3.21 3.07 2.93 2.78 2.69 2.62 2.54 2.48 2.45 2.35 2.27
4.73 4.48 4.23 3.96 3.79 3.68 3.53 3.44 3.38 3.22 3.08
1.88 1.83 1.78 1.73 1.70 1.67 1.64 1.62 1.61 1.57 1.54
2.25 2.18 2.11 2.03 1.97 1.94 1.89 1.86 1.84 1.79 1.74
3.17 3.03 2.89 2.74 2.64 2.58 2.49 2.44 2.40 2.31 2.22
4.64 4.39 4.14 3.87 3.71 3.59 3.45 3.36 3.29 3.14 2.99
Critical Values for F Distributions (cont.): ν1 = numerator df
(Blocks for ν2 = 25, 26, 27, 28, 29, 30, 40, 50, 60, 100, 200, 1000; rows in each block are α = .100, .050, .010, .001, with ν2 printed between the .050 and .010 rows.)
α \ ν1      1       2       3       4       5       6       7       8       9
.100 2.92 2.53 2.32 2.18 2.09 2.02 1.97 1.93 1.89
.050 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28
25
.010 7.77 5.57 4.68 4.18 3.85 3.63 3.46 3.32 3.22
.001 13.88 9.22 7.45 6.49 5.89 5.46 5.15 4.91 4.71
.100 2.91 2.52 2.31 2.17 2.08 2.01 1.96 1.92 1.88
.050 4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27
26
.010 7.72 5.53 4.64 4.14 3.82 3.59 3.42 3.29 3.18
.001 13.74 9.12 7.36 6.41 5.80 5.38 5.07 4.83 4.64
.100 2.90 2.51 2.30 2.17 2.07 2.00 1.95 1.91 1.87
.050 4.21 3.35 2.96 2.73 2.57 2.46 2.37 2.31 2.25
27
.010 7.68 5.49 4.60 4.11 3.78 3.56 3.39 3.26 3.15
.001 13.61 9.02 7.27 6.33 5.73 5.31 5.00 4.76 4.57
.100 2.89 2.50 2.29 2.16 2.06 2.00 1.94 1.90 1.87
.050 4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24
28
.010 7.64 5.45 4.57 4.07 3.75 3.53 3.36 3.23 3.12
.001 13.50 8.93 7.19 6.25 5.66 5.24 4.93 4.69 4.50
.100 2.89 2.50 2.28 2.15 2.06 1.99 1.93 1.89 1.86
.050 4.18 3.33 2.93 2.70 2.55 2.43 2.35 2.28 2.22
29
.010 7.60 5.42 4.54 4.04 3.73 3.50 3.33 3.20 3.09
.001 13.39 8.85 7.12 6.19 5.59 5.18 4.87 4.64 4.45
.100 2.88 2.49 2.28 2.14 2.05 1.98 1.93 1.88 1.85
.050 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21
30
.010 7.56 5.39 4.51 4.02 3.70 3.47 3.30 3.17 3.07
.001 13.29 8.77 7.05 6.12 5.53 5.12 4.82 4.58 4.39
.100 2.84 2.44 2.23 2.09 2.00 1.93 1.87 1.83 1.79
.050 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12
40
.010 7.31 5.18 4.31 3.83 3.51 3.29 3.12 2.99 2.89
.001 12.61 8.25 6.59 5.70 5.13 4.73 4.44 4.21 4.02
.100 2.81 2.41 2.20 2.06 1.97 1.90 1.84 1.80 1.76
.050 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07
50
.010 7.17 5.06 4.20 3.72 3.41 3.19 3.02 2.89 2.78
.001 12.22 7.96 6.34 5.46 4.90 4.51 4.22 4.00 3.82
.100 2.79 2.39 2.18 2.04 1.95 1.87 1.82 1.77 1.74
.050 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04
60
.010 7.08 4.98 4.13 3.65 3.34 3.12 2.95 2.82 2.72
.001 11.97 7.77 6.17 5.31 4.76 4.37 4.09 3.86 3.69
.100 2.76 2.36 2.14 2.00 1.91 1.83 1.78 1.73 1.69
.050 3.94 3.09 2.70 2.46 2.31 2.19 2.10 2.03 1.97
100
.010 6.90 4.82 3.98 3.51 3.21 2.99 2.82 2.69 2.59
.001 11.50 7.41 5.86 5.02 4.48 4.11 3.83 3.61 3.44
.100 2.73 2.33 2.11 1.97 1.88 1.80 1.75 1.70 1.66
.050 3.89 3.04 2.65 2.42 2.26 2.14 2.06 1.98 1.93
200
.010 6.76 4.71 3.88 3.41 3.11 2.89 2.73 2.60 2.50
.001 11.15 7.15 5.63 4.81 4.29 3.92 3.65 3.43 3.26
.100 2.71 2.31 2.09 1.95 1.85 1.78 1.72 1.68 1.64
.050 3.85 3.00 2.61 2.38 2.22 2.11 2.02 1.95 1.89
1000
.010 6.66 4.63 3.80 3.34 3.04 2.82 2.66 2.53 2.43
.001 10.89 6.96 5.46 4.65 4.14 3.78 3.51 3.30 3.13
Critical Values for F Distributions (cont.): ν1 = numerator df
(Rows: for each ν2 = 25, 26, 27, 28, 29, 30, 40, 50, 60, 100, 200, 1000 in order, four rows giving α = .100, .050, .010, .001.)
ν1:      10     12     15     20     25     30     40     50     60     120     1000
1.87 1.82 1.77 1.72 1.68 1.66 1.63 1.61 1.59 1.56 1.52
2.24 2.16 2.09 2.01 1.96 1.92 1.87 1.84 1.82 1.77 1.72
3.13 2.99 2.85 2.70 2.60 2.54 2.45 2.40 2.36 2.27 2.18
4.56 4.31 4.06 3.79 3.63 3.52 3.37 3.28 3.22 3.06 2.91
1.86 1.81 1.76 1.71 1.67 1.65 1.61 1.59 1.58 1.54 1.51
2.22 2.15 2.07 1.99 1.94 1.90 1.85 1.82 1.80 1.75 1.70
3.09 2.96 2.81 2.66 2.57 2.50 2.42 2.36 2.33 2.23 2.14
4.48 4.24 3.99 3.72 3.56 3.44 3.30 3.21 3.15 2.99 2.84
1.85 1.80 1.75 1.70 1.66 1.64 1.60 1.58 1.57 1.53 1.50
2.20 2.13 2.06 1.97 1.92 1.88 1.84 1.81 1.79 1.73 1.68
3.06 2.93 2.78 2.63 2.54 2.47 2.38 2.33 2.29 2.20 2.11
4.41 4.17 3.92 3.66 3.49 3.38 3.23 3.14 3.08 2.92 2.78
1.84 1.79 1.74 1.69 1.65 1.63 1.59 1.57 1.56 1.52 1.48
2.19 2.12 2.04 1.96 1.91 1.87 1.82 1.79 1.77 1.71 1.66
3.03 2.90 2.75 2.60 2.51 2.44 2.35 2.30 2.26 2.17 2.08
4.35 4.11 3.86 3.60 3.43 3.32 3.18 3.09 3.02 2.86 2.72
1.83 1.78 1.73 1.68 1.64 1.62 1.58 1.56 1.55 1.51 1.47
2.18 2.10 2.03 1.94 1.89 1.85 1.81 1.77 1.75 1.70 1.65
3.00 2.87 2.73 2.57 2.48 2.41 2.33 2.27 2.23 2.14 2.05
4.29 4.05 3.80 3.54 3.38 3.27 3.12 3.03 2.97 2.81 2.66
1.82 1.77 1.72 1.67 1.63 1.61 1.57 1.55 1.54 1.50 1.46
2.16 2.09 2.01 1.93 1.88 1.84 1.79 1.76 1.74 1.68 1.63
2.98 2.84 2.70 2.55 2.45 2.39 2.30 2.25 2.21 2.11 2.02
4.24 4.00 3.75 3.49 3.33 3.22 3.07 2.98 2.92 2.76 2.61
1.76 1.71 1.66 1.61 1.57 1.54 1.51 1.48 1.47 1.42 1.38
2.08 2.00 1.92 1.84 1.78 1.74 1.69 1.66 1.64 1.58 1.52
2.80 2.66 2.52 2.37 2.27 2.20 2.11 2.06 2.02 1.92 1.82
3.87 3.64 3.40 3.14 2.98 2.87 2.73 2.64 2.57 2.41 2.25
1.73 1.68 1.63 1.57 1.53 1.50 1.46 1.44 1.42 1.38 1.33
2.03 1.95 1.87 1.78 1.73 1.69 1.63 1.60 1.58 1.51 1.45
2.70 2.56 2.42 2.27 2.17 2.10 2.01 1.95 1.91 1.80 1.70
3.67 3.44 3.20 2.95 2.79 2.68 2.53 2.44 2.38 2.21 2.05
1.71 1.66 1.60 1.54 1.50 1.48 1.44 1.41 1.40 1.35 1.30
1.99 1.92 1.84 1.75 1.69 1.65 1.59 1.56 1.53 1.47 1.40
2.63 2.50 2.35 2.20 2.10 2.03 1.94 1.88 1.84 1.73 1.62
3.54 3.32 3.08 2.83 2.67 2.55 2.41 2.32 2.25 2.08 1.92
1.66 1.61 1.56 1.49 1.45 1.42 1.38 1.35 1.34 1.28 1.22
1.93 1.85 1.77 1.68 1.62 1.57 1.52 1.48 1.45 1.38 1.30
2.50 2.37 2.22 2.07 1.97 1.89 1.80 1.74 1.69 1.57 1.45
3.30 3.07 2.84 2.59 2.43 2.32 2.17 2.08 2.01 1.83 1.64
1.63 1.58 1.52 1.46 1.41 1.38 1.34 1.31 1.29 1.23 1.16
1.88 1.80 1.72 1.62 1.56 1.52 1.46 1.41 1.39 1.30 1.21
2.41 2.27 2.13 1.97 1.87 1.79 1.69 1.63 1.58 1.45 1.30
3.12 2.90 2.67 2.42 2.26 2.15 2.00 1.90 1.83 1.64 1.43
1.61 1.55 1.49 1.43 1.38 1.35 1.30 1.27 1.25 1.18 1.08
1.84 1.76 1.68 1.58 1.52 1.47 1.41 1.36 1.33 1.24 1.11
2.34 2.20 2.06 1.90 1.79 1.72 1.61 1.54 1.50 1.35 1.16
2.99 2.77 2.54 2.30 2.14 2.02 1.87 1.77 1.69 1.49 1.22
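Each entry is the critical value F_{α, ν1, ν2} with upper-tail area α for numerator df ν1 and denominator df ν2. With scipy, f.ppf(1 − α, ν1, ν2) gives the same values (a quick check):

    # F critical values
    from scipy.stats import f

    print(round(f.ppf(0.95, 4, 10), 2))   # 3.48  (alpha = .05, nu1 = 4, nu2 = 10)
    print(round(f.ppf(0.99, 2, 30), 2))   # 5.39  (alpha = .01, nu1 = 2, nu2 = 30)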
Critical Values for the Durbin-Watson Statistic (d)
Level of Significance α = .05
        k=1          k=2          k=3          k=4          k=5
n       dL    dU     dL    dU     dL    dU     dL    dU     dL    dU
6 0.61 1.40
7 0.70 1.36 0.47 1.90
8 0.76 1.33 0.56 1.78 0.37 2.29
9 0.82 1.32 0.63 1.70 0.46 2.13 0.30 2.59
10 0.88 1.32 0.70 1.64 0.53 2.02 0.38 2.41 0.24 2.82
11 0.93 1.32 0.66 1.60 0.60 1.93 0.44 2.28 0.32 2.65
12 0.97 1.33 0.81 1.58 0.66 1.86 0.51 2.18 0.38 2.51
13 1.01 1.34 0.86 1.56 0.72 1.82 0.57 2.09 0.45 2.39
14 1.05 1.35 0.91 1.55 0.77 1.78 0.63 2.03 0.51 2.30
15 1.08 1.36 0.95 1.54 0.82 1.75 0.69 1.97 0.56 2.21
16 1.10 1.37 0.98 1.54 0.86 1.73 0.74 1.93 0.62 2.15
17 1.13 1.38 1.02 1.54 0.90 1.71 0.78 1.90 0.67 2.10
18 1.16 1.39 1.05 1.53 0.93 1.69 0.82 1.87 0.71 2.06
19 1.18 1.40 1.08 1.53 0.97 1.68 0.86 1.85 0.75 2.02
20 1.20 1.41 1.10 1.54 1.00 1.68 0.90 1.83 0.79 1.99
21 1.22 1.42 1.13 1.54 1.03 1.67 0.93 1.81 0.83 1.96
22 1.24 1.43 1.15 1.54 1.05 1.66 0.96 1.80 0.86 1.94
23 1.26 1.44 1.17 1.54 1.08 1.66 0.99 1.79 0.90 1.92
24 1.27 1.45 1.19 1.55 1.10 1.66 1.01 1.78 0.93 1.90
25 1.29 1.45 1.21 1.55 1.12 1.66 1.04 1.77 0.95 1.89
26 1.30 1.46 1.22 1.55 1.14 1.65 1.06 1.76 0.98 1.88
27 1.32 1.47 1.24 1.56 1.16 1.65 1.08 1.76 1.01 1.86
28 1.33 1.48 1.26 1.56 1.18 1.65 1.10 1.75 1.03 1.85
29 1.34 1.48 1.27 1.56 1.20 1.65 1.12 1.74 1.05 1.84
30 1.35 1.49 1.28 1.57 1.21 1.65 1.14 1.74 1.07 1.83
31 1.36 1.50 1.30 1.57 1.23 1.65 1.16 1.74 1.09 1.83
32 1.37 1.50 1.31 1.57 1.24 1.65 1.18 1.73 1.11 1.82
33 1.38 1.51 1.32 1.58 1.26 1.65 1.19 1.73 1.13 1.81
34 1.39 1.51 1.33 1.58 1.27 1.65 1.21 1.73 1.15 1.81
35 1.40 1.52 1.34 1.58 1.28 1.65 1.22 1.73 1.16 1.80
36 1.41 1.52 1.35 1.59 1.29 1.65 1.24 1.73 1.18 1.80
37 1.42 1.53 1.36 1.59 1.31 1.66 1.25 1.72 1.19 1.80
38 1.43 1.54 1.37 1.59 1.32 1.66 1.26 1.72 1.21 1.79
39 1.43 1.54 1.38 1.60 1.33 1.66 1.27 1.72 1.22 1.79
40 1.44 1.54 1.39 1.60 1.34 1.66 1.29 1.72 1.23 1.79
45 1.48 1.57 1.43 1.62 1.38 1.67 1.34 1.72 1.29 1.78
50 1.50 1.59 1.46 1.63 1.42 1.67 1.38 1.72 1.34 1.77
55 1.53 1.60 1.49 1.64 1.45 1.68 1.41 1.72 1.38 1.77
60 1.55 1.62 1.51 1.65 1.48 1.69 1.44 1.73 1.41 1.77
65 1.57 1.63 1.54 1.66 1.50 1.70 1.47 1.73 1.44 1.77
70 1.58 1.64 1.55 1.67 1.52 1.70 1.49 1.74 1.46 1.77
75 1.60 1.65 1.57 1.68 1.54 1.71 1.51 1.74 1.49 1.77
80 1.61 1.66 1.59 1.69 1.56 1.72 1.53 1.74 1.51 1.77
85 1.62 1.67 1.60 1.70 1.57 1.72 1.55 1.75 1.52 1.77
90 1.63 1.68 1.61 1.70 1.59 1.73 1.57 1.75 1.54 1.78
95 1.64 1.69 1.62 1.71 1.60 1.73 1.58 1.75 1.56 1.78
100 1.65 1.69 1.63 1.72 1.61 1.74 1.59 1.76 1.57 1.78
150 1.72 1.75 1.71 1.76 1.69 1.77 1.68 1.79 1.66 1.80
200 1.76 1.78 1.75 1.79 1.74 1.80 1.73 1.81 1.72 1.82
Where n = number of observations and k = number of independent variables
Critical Values for the Durbin-Watson Statistic (d), Level of Significance α = .05 (cont.)
        k=6          k=7          k=8          k=9          k=10
n       dL    dU     dL    dU     dL    dU     dL    dU     dL    dU
11 0.20 3.01
12 0.27 2.83 0.17 3.15
13 0.33 2.70 0.23 2.99 0.15 3.27
14 0.39 2.57 0.29 2.85 0.20 3.11 0.13 3.36
15 0.45 2.47 0.34 2.73 0.25 2.98 0.18 3.22 0.11 3.44
16 0.50 2.39 0.40 2.62 0.30 2.86 0.22 3.09 0.16 3.30
17 0.55 2.32 0.45 2.54 0.36 2.76 0.27 2.98 0.20 3.18
18 0.60 2.26 0.50 2.47 0.41 2.67 0.32 2.87 0.24 3.07
19 0.65 2.21 0.55 2.40 0.46 2.59 0.37 2.78 0.29 2.97
20 0.69 2.16 0.60 2.34 0.50 2.52 0.42 2.70 0.34 2.89
21 0.73 2.12 0.64 2.30 0.55 2.46 0.46 2.63 0.38 2.81
22 0.77 2.09 0.68 2.25 0.59 2.41 0.51 2.57 0.42 2.73
23 0.80 2.06 0.72 2.21 0.63 2.36 0.55 2.51 0.47 2.67
24 0.84 2.04 0.75 2.17 0.67 2.32 0.58 2.46 0.51 2.61
25 0.87 2.01 0.78 2.14 0.70 2.28 0.62 2.42 0.54 2.56
26 0.90 1.99 0.82 2.12 0.74 2.24 0.66 2.38 0.58 2.51
27 0.93 1.97 0.85 2.09 0.77 2.22 0.69 2.34 0.62 2.47
28 0.95 1.96 0.87 2.07 0.80 2.19 0.72 2.31 0.65 2.43
29 0.98 1.94 0.90 2.05 0.83 2.16 0.75 2.28 0.68 2.40
30 1.00 1.93 0.93 2.03 0.85 2.14 0.78 2.25 0.71 2.36
31 1.02 1.92 0.95 2.02 0.88 2.12 0.81 2.23 0.74 2.33
32 1.04 1.91 0.97 2.00 0.90 2.10 0.84 2.20 0.77 2.31
33 1.06 1.90 0.99 1.99 0.93 2.09 0.86 2.18 0.80 2.28
34 1.08 1.89 1.02 1.98 0.95 2.07 0.89 2.16 0.82 2.26
35 1.10 1.88 1.03 1.97 0.97 2.05 0.91 2.14 0.85 2.24
36 1.11 1.88 1.05 1.96 0.99 2.04 0.93 2.13 0.87 2.22
37 1.13 1.87 1.07 1.95 1.01 2.03 0.95 2.11 0.89 2.20
38 1.15 1.86 1.09 1.94 1.03 2.02 0.97 2.10 0.91 2.18
39 1.16 1.86 1.10 1.93 1.05 2.01 0.99 2.09 0.93 2.16
40 1.18 1.85 1.12 1.92 1.06 2.00 1.01 2.07 0.95 2.15
45 1.24 1.84 1.19 1.90 1.14 1.96 1.09 2.02 1.04 2.09
50 1.29 1.82 1.25 1.88 1.20 1.93 1.16 1.99 1.11 2.04
55 1.33 1.81 1.29 1.86 1.25 1.91 1.21 1.96 1.17 2.01
60 1.37 1.81 1.34 1.85 1.30 1.89 1.26 1.94 1.22 1.98
65 1.40 1.81 1.37 1.84 1.34 1.88 1.30 1.92 1.27 1.96
70 1.43 1.80 1.40 1.84 1.37 1.87 1.34 1.91 1.31 1.95
75 1.46 1.80 1.43 1.83 1.40 1.87 1.37 1.90 1.34 1.94
80 1.48 1.80 1.45 1.83 1.43 1.86 1.40 1.89 1.37 1.93
85 1.50 1.80 1.47 1.83 1.45 1.86 1.42 1.89 1.40 1.92
90 1.52 1.80 1.49 1.83 1.47 1.85 1.45 1.88 1.42 1.91
95 1.54 1.80 1.51 1.83 1.49 1.85 1.46 1.88 1.44 1.90
100 1.55 1.80 1.53 1.83 1.50 1.85 1.48 1.87 1.46 1.90
150 1.65 1.82 1.64 1.83 1.62 1.85 1.60 1.86 1.59 1.88
200 1.71 1.83 1.70 1.84 1.69 1.85 1.68 1.86 1.67 1.87
Where n = number of observations and k = number of independent variables
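In applied work the Durbin-Watson statistic d = Σ_t (e_t − e_{t−1})² / Σ_t e_t² is computed from the OLS residuals and then compared with dL and dU from these tables for the given n, k and α = .05. A minimal numpy sketch with hypothetical residuals:

    # Durbin-Watson statistic from a vector of OLS residuals (hypothetical values)
    import numpy as np

    e = np.array([0.5, 0.3, -0.2, -0.4, 0.1, 0.6, -0.3, -0.1, 0.2, 0.4,
                  -0.5, 0.0, 0.3, -0.2, 0.1])            # n = 15 residuals
    d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    print(round(d, 2))
    # For n = 15 and k = 1, the table above gives dL = 1.08 and dU = 1.36:
    # d < dL -> evidence of positive autocorrelation; d > dU -> do not reject H0; otherwise inconclusive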