
FIN 3618 Financial econometrics

– Final Exam –

To be answered individually.

Instructions and notes.

1. All answers should be written on your solution sheet.

2. Report numerical answers with three digits of precision after the decimal point.

3. It is not necessary to show work or provide justifications for TRUE/FALSE questions and
multiple choice questions.

4. See appendix A for critical value tables and B for equations.

5. No variable definitions will be provided for the equations in the appendix.

6. If you think a question may be interpreted in multiple ways, write down your interpretation and answer accordingly.

Total exam points: 100

Part A: Working with data. (20 points)

The table below contains observations on the following time series variables:

• cpit : the level of the consumer price index.

• nasdaqt : the level of the NASDAQ stock index.

• simple inflationt : the simple inflation rate in percent.

• simple returnt : the simple return to the NASDAQ stock index in percent.

• log inflationt : the log inflation rate.

• log returnt : the log return to the NASDAQ stock index.

All rates and returns in this task are stated in percent. The index levels are end-of-year values. For instance, cpi1976 gives the level of the consumer price index at the end of 1976.

The returns and rates are computed over the year given in the subscript. For instance, cpi1979 and cpi1980 are used to calculate simple inflation1980, which is the rate of inflation in year 1980.

year cpit nasdaqt simple inflationt simple returnt log inflationt log returnt
1976 56.933 97.880 - - - -
1977 60.617 105.050 6.470 7.325 6.269 7.069
1978 65.242 117.980 7.630 12.308 7.353 11.608
1979 72.583 151.140 11.253 28.106 10.664 24.769
1980 82.383 202.340 13.502 33.876 12.665 29.174
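
For reference, the sketch below shows one way the rate and return columns above could be reproduced in R from the index levels; the vector names are chosen purely for illustration.

    # End-of-year index levels for 1976-1980, taken from the table above
    cpi    <- c(56.933, 60.617, 65.242, 72.583, 82.383)
    nasdaq <- c(97.880, 105.050, 117.980, 151.140, 202.340)

    # Simple rates in percent: 100 * (p_t / p_{t-1} - 1)
    simple_inflation <- 100 * (cpi[-1] / cpi[-length(cpi)] - 1)
    simple_return    <- 100 * (nasdaq[-1] / nasdaq[-length(nasdaq)] - 1)

    # Log rates in percent: 100 * ln(p_t / p_{t-1})
    log_inflation <- 100 * diff(log(cpi))
    log_return    <- 100 * diff(log(nasdaq))

    # Rows correspond to the years 1977-1980
    round(cbind(simple_inflation, simple_return, log_inflation, log_return), 3)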

1. Which of the following statements are TRUE? You may select multiple statements.

(a) The simple return to the NASDAQ index between the end of 1976 and the end of
1978 was 19.633%.
(b) The real log return to the NASDAQ index in 1980 was 20.374%.
(c) The log increase in the consumer price index between the end of 1978 and the end
of 1980 was 23.328%.
(d) The consumer price index level in 1978 can be computed as follows:

cpi1978 = (1 + simple inflation1978 /100) ∗ cpi1977 (1)

(e) In R, the log inflation rate can be computed as log(diff(data$cpi)), where data
is a data frame object and cpi is the cpit variable from the table above.
(f) Statements (a)-(e) are false.

2. Which of the following statements are TRUE? You may select multiple statements.

(a) If we change the base year for the CPI, the simple inflation rate will change.
(b) If we change the base year for the NASDAQ index, the simple returns to the NASDAQ index will change.
(c) If we change the base year for the CPI, the real simple return to the NASDAQ index
will change.
(d) Statements (a)-(c) are false.

3. Which of the following statements are TRUE? You may select multiple statements.

(a) The (percent) log return to the NASDAQ between the end of the year in 1976 and
the end of the year in 1980 was 72.621%.
(b) The (percent) log return to the NASDAQ between the end of the year in 1978 and
the end of the year 1980 was 53.943%.
(c) The (percent) log real return to the NASDAQ in 1980 was 15.387%.
(d) The (percent) log real return to the NASDAQ in 1980 was 16.509%.
(e) Statements (a)-(d) are false.

4. Compute the log real return in percent in 1979. Report your result below with three digits
of precision after the decimal point.

5. Compute the simple return to the NASDAQ index between the end of 1976 and the end of
1978. Report your result below in percent with three digits of precision after the decimal
point.
Part B: CAPM and linear regression. (30 points)
You estimate the CAPM regression in Equation (2) with OLS and 30 observations.

yt = α + βxt + ut (2)

You recover the parameter estimates, the standard errors, the residual sum of squares (RSS),
and the total sum of squares (TSS), which are summarized in the table below.

Regression Results
(standard errors in parentheses)

α̂      0.012
       (0.005)
β̂      1.310
       (0.275)
RSS    371.32
TSS    531.47

1. In the standard CAPM regression, what is the dependent variable, yt , and what is the
independent variable, xt ?

2. Calculate R2 .

3. Calculate the test statistic for the following hypothesis test: H0 : α = 0 versus H1 : α ≠ 0.

4. What does α̂ correspond to in the CAPM and what does a positive value indicate?

5. You want to test whether the beta is statistically significantly different from one. Assume
you are using a two-sided test and a 5% level of significance. Complete the tasks below.

(a) Write down expressions for the null and alternative hypotheses.

(b) Calculate the test statistic for the null hypothesis.

(c) Use the t-table in the appendix to identify the critical values for a two-sided test
with a level of significance of 5%. Report them below.

(d) Report whether you reject the null or not.

(e) Calculate the adjusted-R2 using the assumptions from this problem and the value
you computed for R2 earlier in the task.

(f) Write down the function in R that you would use to perform OLS. You do not need
to include information about the regression in this problem.
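
As a rough illustration of how the quantities asked for in this part follow from the reported output, the R lines below work from the RSS, TSS, and standard errors in the results table; the data frame and variable names in the commented lm() call are hypothetical.

    # Estimating a CAPM-style regression (hypothetical data frame `capm`
    # with excess-return columns y and x):
    #   fit <- lm(y ~ x, data = capm)

    T_obs <- 30                  # number of observations
    k     <- 2                   # estimated parameters (intercept and slope)
    RSS   <- 371.32
    TSS   <- 531.47

    R2     <- 1 - RSS / TSS                              # coefficient of determination
    adj_R2 <- 1 - (1 - R2) * (T_obs - 1) / (T_obs - k)   # adjusted R-squared

    # A t-ratio for H0: theta = theta0 is (estimate - theta0) / SE(estimate),
    # e.g. for the slope tested against one:
    t_beta_one <- (1.310 - 1) / 0.275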
Part C: Diagnostic tests. (30 points)
For the questions in this task, assume that you are working with an unrestricted econometric
model of the following form:
yt = β1 + β2 x2,t + β3 x3,t + β4 x4,t + ut (3)
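
A minimal sketch, assuming a hypothetical data frame df with columns y, x2, x3, and x4, of how a restricted and an unrestricted version of Equation (3) could be compared with an F-test in R:

    unrestricted <- lm(y ~ x2 + x3 + x4, data = df)
    restricted   <- lm(y ~ 1, data = df)    # example restriction: beta2 = beta3 = beta4 = 0

    URSS  <- sum(resid(unrestricted)^2)
    RRSS  <- sum(resid(restricted)^2)
    T_obs <- nrow(df)
    k     <- 4                              # parameters in the unrestricted model
    m     <- 3                              # restrictions imposed in this example

    F_stat <- ((RRSS - URSS) / m) / (URSS / (T_obs - k))
    # anova(restricted, unrestricted) reports the same F-test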

1. Calculate the test statistic for an F-test with RRSS = 83, URSS = 57, and four restrictions. Assume there are 30 observations.
2. Which of the following statements are TRUE? You may select multiple statements.

(a) The null hypothesis for the default form of the F-test is the following: H0 : β2 = 0 and β3 = 0 and β4 = 0.
(b) The alternative hypothesis for the default form of the F-test is the following: H1 : β2 ≠ 0 and β3 ≠ 0 and β4 ≠ 0.
(c) It is possible to test the restriction β2 + β3 = 1 with an F-test.
(d) It is possible to test the restriction β2 = 1 using either a t-test or an F-test.
(e) If RRSS > URSS for an F-test, then we will always reject the null hypothesis, even if the difference between RRSS and URSS is small.
(f) Statements (a)-(e) are false.

3. Which of the following statements are TRUE? You may select multiple statements.

(a) When testing whether β2 = 0, we could use the following restricted model: yt =
β1 + β3 x3,t + β4 x4,t + ut .
(b) Imposing the restriction that β2 = 0 could reduce the residual sum of squares (RSS)
below the RSS for the unrestricted model in Equation (3).
(c) We can impose the restriction that β3 = 1 by re-defining the dependent variable in
the restricted model as yt − x3,t and removing the β3 x3,t term from the model.
(d) Statements (a)-(c) are false.

4. You are concerned that the parameters in the model are not stable, so you decide to
perform a break test. You split the sample into two subsamples, dividing it at the point
in the time series where you suspect a break. The first subsample consists of the first 25
observations and the second consists of the last 5 observations in the sample. Given the
small size of the second subsample, you decide to perform a predictive failure test. Use
this information to answer the following questions.

(a) Should you perform a backward or forward predictive failure test? Why?
(b) You find that the residual sum of squares for the restricted regression is 300 and the
residual sum of squares for the larger subsample regression is 125. Calculate the test
statistic for the predictive failure test. Recall that the model is given in Equation
(3).
(c) What distribution is used to compute the critical values for a predictive failure test?
Part D: True or false questions. (20 points)
Indicate whether each statement is TRUE or FALSE. You do not need to provide a justification.

1. [TRUE/FALSE]. The covariance of two independent random variables may sometimes be negative.
2. [TRUE/FALSE]. If the covariance of two random variables is positive, then the corre-
lation between those two variables must also be positive.
3. [TRUE/FALSE]. The sample variance of a variable x can be calculated as Σ(xi − x̄)² / (N − 1), where i indexes the observation and x̄ is the mean.
4. [TRUE/FALSE]. A random variable, x, has an excess kurtosis of 1.7. It is not possible
that x could be symmetric.
5. [TRUE/FALSE]. The Jarque-Bera Test is used to evaluate the normality of regression
residuals.
6. [TRUE/FALSE]. The simple inflation rate in period t can be computed as follows: πt = (CPIt / CPIt−1) − 1, where CPIt is the consumer price index level at time t.

7. [TRUE/FALSE]. Investors who follow the mean-variance principle will prefer returns
that have either a higher mean or a higher variance.
8. [TRUE/FALSE]. The Treynor Ratio measures excess return per unit of systematic risk.
9. [TRUE/FALSE]. The beta of the market portfolio is equal to 1.
10. [TRUE/FALSE]. In the context of a univariate linear regression, the ratio of the slope
parameter, β, to its variance can be used as a test statistic for the following hypothesis
test: H0 : β = 0 versus H1 : β ≠ 0.
11. [TRUE/FALSE]. In hypothesis testing, we draw inferences about the sample parameters, rather than the population parameters.
12. [TRUE/FALSE]. The inclusion of an irrelevant regressor in a linear regression model
will lead to biased and inconsistent parameter estimates.
13. [TRUE/FALSE]. Multicollinearity in a linear regression will reduce efficiency, but will
not bias parameter estimates.
14. [TRUE/FALSE]. The Chow Test is used to evaluate whether there is heteroscedasticity
in the disturbance term.
15. [TRUE/FALSE]. In a regression, correlation between the independent variable(s) and
the disturbance term will cause the parameter estimates to be inconsistent.
16. [TRUE/FALSE]. Principal component analysis can be used to construct a new set of
regressors that do not suffer from multicollinearity.
17. [TRUE/FALSE]. In the context of a linear regression, non-normality in the residuals
may cause the parameter estimates to be inconsistent.
18. [TRUE/FALSE]. If we find that the residuals are non-normal, one possible way to fix
this is to include dummy variables in the regression in periods where there are unusually
large residuals.
19. [TRUE/FALSE]. We can correct for autocorrelation in the residuals by using White’s
standard errors.

20. [TRUE/FALSE]. The squared regression residuals will always sum to zero as long as
there is a constant term in the regression.

A Critical values for Student’s t-distribution

A.1 t-table
Rows: degrees of freedom. Columns: level of significance (upper-tail probability).
0.4 0.25 0.15 0.1 0.05 0.025 0.01 0.005 0.001 0.0005
1 0.3249 1.0000 1.9626 3.0777 6.3138 12.7062 31.8205 63.6567 318.3087 636.6189
2 0.2887 0.8165 1.3862 1.8856 2.9200 4.3027 6.9646 9.9248 22.3271 31.5991
3 0.2767 0.7649 1.2498 1.6377 2.3534 3.1824 4.5407 5.8409 10.2145 12.9240
4 0.2707 0.7407 1.1896 1.5332 2.1318 2.7764 3.7469 4.6041 7.1732 8.6103
5 0.2672 0.7267 1.1558 1.4759 2.0150 2.5706 3.3649 4.0321 5.8934 6.8688
6 0.2648 0.7176 1.1342 1.4398 1.9432 2.4469 3.1427 3.7074 5.2076 5.9588
7 0.2632 0.7111 1.1192 1.4149 1.8946 2.3646 2.9980 3.4995 4.7853 5.4079
8 0.2619 0.7064 1.1081 1.3968 1.8595 2.3060 2.8965 3.3554 4.5008 5.0413
9 0.2610 0.7027 1.0997 1.3830 1.8331 2.2622 2.8214 3.2498 4.2968 4.7809
10 0.2602 0.6998 1.0931 1.3722 1.8125 2.2281 2.7638 3.1693 4.1437 4.5869
11 0.2596 0.6974 1.0877 1.3634 1.7959 2.2010 2.7181 3.1058 4.0247 4.4370
12 0.2590 0.6955 1.0832 1.3562 1.7823 2.1788 2.6810 3.0545 3.9296 4.3178
13 0.2586 0.6938 1.0795 1.3502 1.7709 2.1604 2.6503 3.0123 3.8520 4.2208
14 0.2582 0.6924 1.0763 1.3450 1.7613 2.1448 2.6245 2.9768 3.7874 4.1405
15 0.2579 0.6912 1.0735 1.3406 1.7531 2.1314 2.6025 2.9467 3.7328 4.0728
16 0.2576 0.6901 1.0711 1.3368 1.7459 2.1199 2.5835 2.9208 3.6862 4.0150
17 0.2573 0.6892 1.0690 1.3334 1.7396 2.1098 2.5669 2.8982 3.6458 3.9651
18 0.2571 0.6884 1.0672 1.3304 1.7341 2.1009 2.5524 2.8784 3.6105 3.9216

19 0.2569 0.6876 1.0655 1.3277 1.7291 2.0930 2.5395 2.8609 3.5794 3.8834
20 0.2567 0.6870 1.0640 1.3253 1.7247 2.0860 2.5280 2.8453 3.5518 3.8495
21 0.2566 0.6864 1.0627 1.3232 1.7207 2.0796 2.5176 2.8314 3.5272 3.8193
22 0.2564 0.6858 1.0614 1.3212 1.7171 2.0739 2.5083 2.8188 3.5050 3.7921
23 0.2563 0.6853 1.0603 1.3195 1.7139 2.0687 2.4999 2.8073 3.4850 3.7676
24 0.2562 0.6848 1.0593 1.3178 1.7109 2.0639 2.4922 2.7969 3.4668 3.7454
25 0.2561 0.6844 1.0584 1.3163 1.7081 2.0595 2.4851 2.7874 3.4502 3.7251
26 0.2560 0.6840 1.0575 1.3150 1.7056 2.0555 2.4786 2.7787 3.4350 3.7066
27 0.2559 0.6837 1.0567 1.3137 1.7033 2.0518 2.4727 2.7707 3.4210 3.6896
28 0.2558 0.6834 1.0560 1.3125 1.7011 2.0484 2.4671 2.7633 3.4082 3.6739
29 0.2557 0.6830 1.0553 1.3114 1.6991 2.0452 2.4620 2.7564 3.3962 3.6594
30 0.2556 0.6828 1.0547 1.3104 1.6973 2.0423 2.4573 2.7500 3.3852 3.6460
35 0.2553 0.6816 1.0520 1.3062 1.6896 2.0301 2.4377 2.7238 3.3400 3.5911
40 0.2550 0.6807 1.0500 1.3031 1.6839 2.0211 2.4233 2.7045 3.3069 3.5510
45 0.2549 0.6800 1.0485 1.3006 1.6794 2.0141 2.4121 2.6896 3.2815 3.5203
50 0.2547 0.6794 1.0473 1.2987 1.6759 2.0086 2.4033 2.6778 3.2614 3.4960
60 0.2545 0.6786 1.0455 1.2958 1.6706 2.0003 2.3901 2.6603 3.2317 3.4602
70 0.2543 0.6780 1.0442 1.2938 1.6669 1.9944 2.3808 2.6479 3.2108 3.4350
80 0.2542 0.6776 1.0432 1.2922 1.6641 1.9901 2.3739 2.6387 3.1953 3.4163
90 0.2541 0.6772 1.0424 1.2910 1.6620 1.9867 2.3685 2.6316 3.1833 3.4019
100 0.2540 0.6770 1.0418 1.2901 1.6602 1.9840 2.3642 2.6259 3.1737 3.3905
120 0.2539 0.6765 1.0409 1.2886 1.6577 1.9799 2.3578 2.6174 3.1595 3.3735
150 0.2538 0.6761 1.0400 1.2872 1.6551 1.9759 2.3515 2.6090 3.1455 3.3566
200 0.2537 0.6757 1.0391 1.2858 1.6525 1.9719 2.3451 2.6006 3.1315 3.3398
300 0.2536 0.6753 1.0382 1.2844 1.6499 1.9679 2.3388 2.5923 3.1176 3.3233
∞ 0.2533 0.6745 1.0364 1.2816 1.6449 1.9600 2.3263 2.5758 3.0902 3.2905
A.2 Upper 5% critical values for the F-distribution
Rows: denominator degrees of freedom (T − k). Columns: numerator degrees of freedom (m).
1 2 3 4 5 6 7 8 9 10 12 15 20 24 30 40 60 120 ∞
1 161 200 216 225 230 234 237 239 241 242 244 246 248 249 250 251 252 253 254
2 18.5 19.0 19.2 19.2 19.3 19.3 19.4 19.4 19.4 19.4 19.4 19.4 19.4 19.5 19.5 19.5 19.5 19.5 19.5
3 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66 8.64 8.62 8.59 8.57 8.55 8.53
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80 5.77 5.75 5.72 5.69 5.66 5.63
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56 4.53 4.50 4.46 4.43 4.40 4.37
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87 3.84 3.81 3.77 3.74 3.70 3.67
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44 3.41 3.38 3.34 3.30 3.27 3.23
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15 3.12 3.08 3.04 3.01 2.97 2.93
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94 2.90 2.86 2.83 2.79 2.75 2.71
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85 2.77 2.74 2.70 2.66 2.62 2.58 2.54
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72 2.65 2.61 2.57 2.53 2.49 2.45 2.40
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54 2.51 2.47 2.43 2.38 2.34 2.30
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53 2.46 2.42 2.38 2.34 2.30 2.25 2.21
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46 2.39 2.35 2.31 2.27 2.22 2.18 2.13
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33 2.29 2.25 2.20 2.16 2.11 2.07
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.35 2.28 2.24 2.19 2.15 2.11 2.06 2.01
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.31 2.23 2.19 2.15 2.10 2.06 2.01 1.96
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.27 2.19 2.15 2.11 2.06 2.02 1.97 1.92
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.23 2.16 2.11 2.07 2.03 1.98 1.93 1.88
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12 2.08 2.04 1.99 1.95 1.90 1.84
21 4.32 3.47 3.07 2.84 2.68 2.57 2.49 2.42 2.37 2.32 2.25 2.18 2.10 2.05 2.01 1.96 1.92 1.87 1.81
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.23 2.15 2.07 2.03 1.98 1.94 1.89 1.84 1.78
23 4.28 3.42 3.03 2.80 2.64 2.53 2.44 2.37 2.32 2.27 2.20 2.13 2.05 2.01 1.96 1.91 1.86 1.81 1.76
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.11 2.03 1.98 1.94 1.89 1.84 1.79 1.73
25 4.24 3.39 2.99 2.76 2.60 2.49 2.40 2.34 2.28 2.24 2.16 2.09 2.01 1.96 1.92 1.87 1.82 1.77 1.71
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93 1.89 1.84 1.79 1.74 1.68 1.62
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84 1.79 1.74 1.69 1.64 1.58 1.51
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75 1.70 1.65 1.59 1.53 1.47 1.39
120 3.92 3.07 2.68 2.45 2.29 2.18 2.09 2.02 1.96 1.91 1.83 1.75 1.66 1.61 1.55 1.50 1.43 1.35 1.25
∞ 3.84 3.00 2.60 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.67 1.57 1.52 1.46 1.39 1.32 1.22 1.00
B Equations and Definitions
Note that you must be able to understand the equations below without any additional context.
You are assumed to have enough familiarity with each concept that the equations do not require
variable definitions.

B.1 Linear Algebra


• Matrix Addition: A + B = B + A

• Matrix Multiplication: AB ≠ BA in general

• Identity Matrix: AI = A

• Matrix Inverse: AA−1 = A−1 A = I

B.2 Natural Logs and Log Returns


• ln(x · y) = ln(x) + ln(y)

• ln(x/y) = ln(x) − ln(y)

• ln(y^c) = c · ln(y)

• Log return (percent): 100 ∗ ln(pt /pt−1 )

B.3 Probability and Statistics


• Skewness: Σ(yi − ȳ)³ / ((N − 1)σ³)

• Kurtosis: Σ(yi − ȳ)⁴ / ((N − 1)σ⁴)

• P(a ≤ X ≤ b) = ∫_a^b f(s) ds

B.4 CAPM
• Treynor ratio: (E[rp] − rf) / βP

• Jensen's alpha: αJ = (E[rp] − rf) − βP (E[rM] − rf)

B.5 Univariate Linear Regression


• β̂ = (Σt xt yt − T x̄ ȳ) / (Σt xt² − T x̄²)

• α̂ = ȳ − β̂ x̄

• s = √( Σt ût² / (T − 2) )

• SE(α̂) = s · √( Σt xt² / (T (Σt xt² − T x̄²)) )

• SE(β̂) = s · √( 1 / (Σt xt² − T x̄²) )

B.6 Multiple Linear Regression


• β̂ = (X′X)−1 X′y

• Variance-covariance matrix: s2 (X′X)−1

B.7 Diagnostic Tests


• White's test: T R²

• Breusch-Godfrey: (T − r) R²

• Jarque-Bera: T [ b1² / 6 + (b2 − 3)² / 24 ]

• Chow test: [ (RSS − (RSS1 + RSS2)) / (RSS1 + RSS2) ] × (T − 2k) / k

• Predictive failure test: [ (RSS − RSS1) / RSS1 ] × (T1 − k) / T2
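
As one example of how such a statistic is obtained in practice, the lines below sketch White's test statistic, T·R², from an auxiliary regression of squared residuals; the fitted model fit, the data frame df, and the regressor names are hypothetical.

    u2  <- resid(fit)^2
    aux <- lm(u2 ~ x2 + x3 + I(x2^2) + I(x3^2) + I(x2 * x3), data = df)
    white_stat <- length(u2) * summary(aux)$r.squared   # T * R^2, compared with a chi-squared critical value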
