04 Time-Series Analysis


Time-Series Analysis Test ID: 7440367

Question #1 of 106 Question ID: 461854

The table below shows the autocorrelations of the lagged residuals for the first differences of the natural logarithm of quarterly
motorcycle sales that were fit to the AR(1) model: (ln salest − ln salest-1) = b0 + b1(ln salest-1 − ln salest-2) + εt. The critical
t-statistic at 5% significance is 2.0, which means that there is significant autocorrelation for the lag-4 residual, indicating the
presence of seasonality. Assuming the time series is covariance stationary, which of the following models is most likely to
CORRECT for this apparent seasonality?

Lagged Autocorrelations of First Differences in the Log of Motorcycle Sales

Lag Autocorrelation Standard Error t-Statistic

1 −0.0738 0.1667 −0.44271

2 −0.1047 0.1667 −0.62807

3 −0.0252 0.1667 −0.15117

4 0.5528 0.1667 3.31614

ᅞ A) (ln salest − ln salest-4) = b0 + b1(ln salest-1 − ln salest-2) + εt.

ᅞ B) ln salest = b0 + b1(ln salest-1) − b2(ln salest-4) + εt.
ᅚ C) (ln salest − ln salest-1) = b0 + b1(ln salest-1 − ln salest-2) + b2(ln salest-4 − ln salest-5) + εt.

Explanation

Seasonality is taken into account in an autoregressive model by adding a seasonal lag variable that corresponds to the seasonality. In the
case of a first-differenced quarterly time series, the seasonal lag variable is the first difference for the fourth time period. Recognizing that
the model is fit to the first differences of the natural logarithm of the time series, the seasonal adjustment variable is (ln salest-4 − ln salest-5).
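
For readers who want to see how such a seasonal lag is added in practice, the sketch below fits the corrected model on a made-up quarterly sales series using plain NumPy least squares (the data and variable names are illustrative, not from the question).

```python
import numpy as np

# Hypothetical quarterly sales levels (illustrative data only)
sales = np.array([112, 125, 133, 180, 118, 130, 141, 192,
                  124, 138, 149, 205, 131, 146, 158, 218], dtype=float)

d = np.diff(np.log(sales))  # first differences of ln(sales)

# Dependent variable d_t with regressors: constant, lag-1 difference, seasonal lag-4 difference
y = d[4:]
X = np.column_stack([np.ones(len(y)), d[3:-1], d[:-4]])

# Ordinary least squares estimates of b0, b1 (lag 1) and b2 (seasonal lag 4)
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0, b1, b2:", np.round(b, 4))
```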

Questions #2-7 of 106

Diem Le is analyzing the financial statements of McDowell Manufacturing. He has modeled the time series of McDowell's gross
margin over the last 15 years. The output is shown below. Assume 5% significance level for all statistical tests.

Autoregressive Model
Gross Margin - McDowell Manufacturing
Quarterly Data: 1st Quarter 1985 to 4th Quarter 2000
Regression Statistics

R-squared 0.767

Standard error of forecast 0.049

Observations 64

Durbin-Watson 1.923 (not statistically significant)

Coefficient Standard Error t-statistic

Constant 0.155 0.052 ?????

Lag 1 0.240 0.031 ?????

Lag 4 0.168 0.038 ?????

Autocorrelation of Residuals

Lag Autocorrelation Standard Error t-statistic

1 0.015 0.129 ?????

2 -0.101 0.129 ?????

3 -0.007 0.129 ?????

4 0.095 0.129 ?????

Partial List of Recent Observations

Quarter Observation

4th Quarter 2002 0.250

1st Quarter 2003 0.260

2nd Quarter 2003 0.220

3rd Quarter 2003 0.200

4th Quarter 2003 0.240

Abbreviated Table of the Student's t-distribution (One-Tailed Probabilities)

df p = 0.10 p = 0.05 p = 0.025 p = 0.01 p = 0.005

50 1.299 1.676 2.009 2.403 2.678

60 1.296 1.671 2.000 2.390 2.660

70 1.294 1.667 1.994 2.381 2.648

Question #2 of 106 Question ID: 461796

This model is best described as:


ᅚ A) an AR(1) model with a seasonal lag.
ᅞ B) an ARMA(2) model.
ᅞ C) an MA(2) model.

Explanation

This is an autoregressive AR(1) model with a seasonal lag. Remember that an AR model regresses a dependent variable
against one or more lagged values of itself. (Study Session 3, LOS 13.o)

Question #3 of 106 Question ID: 461797

Which of the following can Le conclude from the regression? The time series process:

ᅚ A) includes a seasonality factor and has significant explanatory power.

ᅞ B) does not include a seasonality factor and has significant explanatory power.
ᅞ C) does not include a seasonality factor and has insignificant explanatory power.

Explanation

The gross margin in the current quarter is related to the gross margin four quarters (one year) earlier. To determine whether
there is a seasonality factor, we need to test the coefficient on lag 4. The t-statistic for the coefficients is calculated as the
coefficient divided by the standard error with 61 degrees of freedom (64 observations less three coefficient estimates). The
critical t-value for a significance level of 5% is about 2.000 (from the table). The computed t-statistic for lag 4 is 0.168/0.038 =
4.421. This is greater than the critical value at even alpha = 0.005, so it is statistically significant. This suggests an annual
seasonal factor.

The process has significant explanatory power since both slope coefficients are significant and the coefficient of determination
is 0.767. (Study Session 3, LOS 13.l)
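
As a quick arithmetic check, the coefficient t-statistics implied by the exhibit can be recomputed as the coefficient divided by its standard error (values copied from the regression output above; 2.000 is the approximate 5% two-tailed critical value for about 61 degrees of freedom).

```python
coefficients = {"Constant": (0.155, 0.052),
                "Lag 1":    (0.240, 0.031),
                "Lag 4":    (0.168, 0.038)}

critical_t = 2.000  # approximate 5% two-tailed critical value

for name, (estimate, std_error) in coefficients.items():
    t_stat = estimate / std_error
    print(f"{name}: t = {t_stat:.3f}, significant = {abs(t_stat) > critical_t}")
```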

Question #4 of 106 Question ID: 461798

Le can conclude that the model is:

ᅞ A) not properly specified because there is evidence of autocorrelation in the


residuals and the Durbin-Watson statistic is not significant.

ᅞ B) properly specified because the Durbin-Watson statistic is not significant.


ᅚ C) properly specified because there is no evidence of autocorrelation in the residuals.

Explanation

The Durbin-Watson test is not an appropriate test statistic in an AR model, so we cannot use it to test for autocorrelation in the
residuals. However, we can test whether each of the four lagged residuals autocorrelations is statistically significant. The t-test
to accomplish this is equal to the autocorrelation divided by the standard error with 61 degrees of freedom (64 observations
less 3 coefficient estimates). The critical t-value for a significance level of 5% is about 2.000 from the table. The appropriate t-
statistics are:

Lag 1 = 0.015/0.129 = 0.116


Lag 2 = -0.101/0.129 = -0.783
Lag 3 = -0.007/0.129 = -0.054
Lag 4 = 0.095/0.129 = 0.736
None of these are statistically significant, so we can conclude that there is no evidence of autocorrelation in the residuals, and
therefore the AR model is properly specified. (Study Session 3, LOS 13.d)

Question #5 of 106 Question ID: 461799

What is the 95% confidence interval for the gross margin in the first quarter of 2004?

ᅚ A) 0.158 to 0.354.
ᅞ B) 0.168 to 0.240.
ᅞ C) 0.197 to 0.305.

Explanation

The forecast for the following quarter is 0.155 + 0.240(0.240) + 0.168(0.260) = 0.256. Since the standard error is 0.049 and
the corresponding t-statistic is 2, we can be 95% confident that the gross margin will be within 0.256 - 2 × (0.049) and 0.256 +
2 × (0.049) or 0.158 to 0.354. (Study Session 3, LOS 11.h)
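
The point forecast and interval can be reproduced directly from the quoted numbers; a minimal sketch:

```python
b0, b1, b4 = 0.155, 0.240, 0.168
lag1, lag4 = 0.240, 0.260      # 4Q2003 and 1Q2003 gross margins

forecast = b0 + b1 * lag1 + b4 * lag4      # about 0.256
half_width = 2 * 0.049                     # critical t of 2 times the standard error of forecast
print(round(forecast, 3), round(forecast - half_width, 3), round(forecast + half_width, 3))
# 0.256 0.158 0.354
```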

Question #6 of 106 Question ID: 461800

With respect to heteroskedasticity in the model, we can definitively say:

ᅚ A) nothing.
ᅞ B) heteroskedasticity is not a problem because the DW statistic is not significant.
ᅞ C) an ARCH process exists because the autocorrelation coefficients of the residuals have
different signs.

Explanation

None of the information in the problem provides information concerning heteroskedasticity. Note that heteroskedasticity occurs
when the variance of the error terms is not constant. When heteroskedasticity is present in a time series, the residuals appear
to come from different distributions (model seems to fit better in some time periods than others). (Study Session 3, LOS 12.k)

Question #7 of 106 Question ID: 461801

Using the provided information, the forecast for the 2nd quarter of 2004 is:

ᅚ A) 0.253.
ᅞ B) 0.250.
ᅞ C) 0.192.

Explanation

To get the 2nd quarter forecast, we use the one period forecast for the 1st quarter of 2004, which is 0.155 + 0.240(0.240) +
0.168(0.260) = 0.256. The 4th lag for the 2nd quarter is 0.22. Thus the forecast for the 2nd quarter is 0.155 + 0.240(0.256) +
0.168(0.220) = 0.253. (Study Session 3, LOS 12.e)
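
The chain-rule step can be written out explicitly; the sketch below feeds the one-step forecast back into the model as the lag-1 input for the two-step forecast.

```python
b0, b1, b4 = 0.155, 0.240, 0.168

q1_2004 = b0 + b1 * 0.240 + b4 * 0.260     # one-step forecast, about 0.256
q2_2004 = b0 + b1 * q1_2004 + b4 * 0.220   # two-step forecast, about 0.253
print(round(q1_2004, 3), round(q2_2004, 3))
```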

Question #8 of 106 Question ID: 461864

Alexis Popov, CFA, has estimated the following specification: xt = b0 + b1 × xt-1 + et. Which of the following would most likely
lead Popov to want to change the model's specification?

ᅞ A) Correlation(et, et-1) is not significantly different from zero.


ᅚ B) Correlation(et, et-2) is significantly different from zero.
ᅞ C) b0 < 0.

Explanation

If correlation(et, et-2) is not zero, then the model suffers from 2nd order serial correlation. Popov may wish to try an AR(2)
model. Both of the other conditions are acceptable in an AR(1) model.

Question #9 of 106 Question ID: 461807

An analyst wants to model quarterly sales data using an autoregressive model. She has found that an AR(1) model with a
seasonal lag has significant slope coefficients. She also finds that when a second and third seasonal lag are added to the
model, all slope coefficients are significant too. Based on this, the best model to use would most likely be an:

ᅚ A) AR(1) model with 3 seasonal lags.


ᅞ B) AR(1) model with no seasonal lags.
ᅞ C) ARCH(1).

Explanation

She has found that all the slope coefficients are significant in the model xt = b0 + b1xt-1 + b2xt-4 + et. She then finds that all the
slope coefficients are significant in the model xt = b0 + b1xt-1 + b2xt-2 + b3xt-3 + b4xt-4 + et. Thus, the final model should be used
rather than any other model that uses a subset of the regressors.

Question #10 of 106 Question ID: 461826

Which of the following statements regarding time series analysis is least accurate?

ᅚ A) An autoregressive model with two lags is equivalent to a moving-average


model with two lags.
ᅞ B) If a time series is a random walk, first differencing will result in covariance stationarity.
ᅞ C) We cannot use an AR(1) model on a time series that consists of a random walk.

Explanation

An autoregression model regresses a dependent variable against one or more lagged values of itself whereas a moving
average is an average of successive observations in a time series. A moving average model can have lagged terms but these
are lagged values of the residual.

Question #11 of 106 Question ID: 461842

An AR(1) autoregressive time series model:


ᅚ A) can be used to test for a unit root, which exists if the slope coefficient equals
one.
ᅞ B) cannot be used to test for a unit root.
ᅞ C) can be used to test for a unit root, which exists if the slope coefficient is less than one.

Explanation

If you estimate the following model xt = b0 + b1 × xt-1 + et and get b1 = 1, then the process has a unit root and is nonstationary.

Question #12 of 106 Question ID: 461822

The primary concern when deciding upon a time series sample period is which of the following factors?

ᅚ A) Current underlying economic and market conditions.


ᅞ B) The total number of observations.
ᅞ C) The length of the sample time period.

Explanation

There will always be a tradeoff between the increased statistical reliability of a longer time period and the increased stability of
estimated regression coefficients with shorter time periods. Therefore, the underlying economic environment should be the
deciding factor when selecting a time series sample period.

Question #13 of 106 Question ID: 461784

Rhonda Wilson, CFA, is analyzing sales data for the TUV Corp, a current equity holding in her portfolio. She observes that
sales for TUV Corp. have grown at a steadily increasing rate over the past ten years due to the successful introduction of
some new products. Wilson anticipates that TUV will continue this pattern of success. Which of the following models is most
appropriate in her analysis of sales for TUV Corp.?

ᅚ A) A log-linear trend model, because the data series exhibits a predictable,


exponential growth trend.
ᅞ B) A linear trend model, because the data series is equally distributed above and below
the line and the mean is constant.
ᅞ C) A log-linear trend model, because the data series can be graphed using a straight,
upward-sloping line.

Explanation

The log-linear trend model is the preferred method for a data series that exhibits a trend or for which the residuals are
predictable. In this example, sales grew at an exponential, or increasing rate, rather than a steady rate.

Question #14 of 106 Question ID: 461817

Suppose that the time series designated as Y is mean reverting. If Yt+1 = 0.2 + 0.6 Yt, the best prediction of Yt+1 is:
ᅞ A) 0.8.
ᅞ B) 0.3.
ᅚ C) 0.5.

Explanation

The prediction is Yt+1 = b0 / (1-b1) = 0.2 / (1-0.6) = 0.5

Question #15 of 106 Question ID: 461819

Which of the following statements regarding an out-of-sample forecast is least accurate?

ᅞ A) There is more error associated with out-of-sample forecasts, as compared to in-


sample forecasts.
ᅞ B) Out-of-sample forecasts are of more importance than in-sample forecasts to the
analyst using an estimated time-series model.
ᅚ C) Forecasting is not possible for autoregressive models with more than two lags.

Explanation

Forecasts in autoregressive models are made using the chain-rule, such that the earlier forecasts are made first. Each later
forecast depends on these earlier forecasts.

Question #16 of 106 Question ID: 461828

Given an AR(1) process represented by xt+1 = b0 + b1×xt + et, the process would not be a random walk if:

ᅞ A) E(et)=0.
ᅚ B) the long run mean is b0 + b1.
ᅞ C) b1 = 1.

Explanation

For a random walk, the long-run mean is undefined. The slope coefficient is one, b1=1, and that is what makes the long-run
mean undefined: mean = b0/(1-b1).

Question #17 of 106 Question ID: 461805

Consider the estimated model xt = −6.0 + 1.1xt-1 + 0.3xt-2 + εt that is estimated over 50 periods. The value of the time
series for the 49th observation is 20 and the value of the time series for the 50th observation is 22. What is the forecast for the
52nd observation?

ᅚ A) 27.22.

ᅞ B) 24.2.
ᅞ C) 42.
Explanation

Using the chain-rule of forecasting,


Forecasted x51 = −6.0 + 1.1(22) + 0.3(20) = 24.2.
Forecasted x52 = −6.0 + 1.1(24.2) + 0.3(22) = 27.22.
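
The same chain-rule arithmetic for this AR(2) model, for anyone who wants to verify it:

```python
b0, b1, b2 = -6.0, 1.1, 0.3
x49, x50 = 20, 22

x51 = b0 + b1 * x50 + b2 * x49   # 24.2
x52 = b0 + b1 * x51 + b2 * x50   # 27.22
print(round(x51, 2), round(x52, 2))
```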

Questions #18-23 of 106

Housing industry analyst Elaine Smith has been assigned the task of forecasting housing foreclosures. Specifically, Smith is
asked to forecast the percentage of outstanding mortgages that will be foreclosed upon in the coming quarter. Smith decides
to employ multiple linear regression and time series analysis.

Besides constructing a forecast for the foreclosure percentage, Smith wants to address the following two questions:

Research Question 1: Is the foreclosure percentage significantly affected by short-term interest rates?

Research Question 2: Is the foreclosure percentage significantly affected by government intervention policies?

Smith contends that adjustable rate mortgages often are used by higher risk borrowers and that their homes are at higher risk
of foreclosure. Therefore, Smith decides to use short-term interest rates as one of the independent variables to test Research
Question 1.

To measure the effects of government intervention in Research Question 2, Smith uses a dummy variable that equals 1
whenever the Federal government intervened with a fiscal policy stimulus package that exceeded 2% of the annual Gross
Domestic Product. Smith sets the dummy variable equal to 1 for four quarters starting with the quarter in which the policy is
enacted and extending through the following 3 quarters. Otherwise, the dummy variable equals zero.

Smith uses quarterly data over the past 5 years to derive her regression. Smith's regression equation is provided in Exhibit 1:

Exhibit 1: Foreclosure Share Regression Equation

foreclosure share = b0 + b1(ΔINT) + b2(STIM) + b3(CRISIS) + ε


where:
Foreclosure share = the percentage of all outstanding mortgages foreclosed upon during the quarter
ΔINT = the quarterly change in the 1-year Treasury bill rate (e.g., ΔINT = 2 for a two percentage point increase in interest rates)
STIM = 1 for quarters in which a Federal fiscal stimulus package was in place
CRISIS = 1 for quarters in which the median house price is one standard deviation below its 5-year moving average

The results of Smith's regression are provided in Exhibit 2:

Exhibit 2: Foreclosure Share Regression Results


Variable Coefficient t-statistic
Intercept 3.00 2.40

ΔINT 1.00 2.22

STIM -2.50 -2.10

CRISIS 4.00 2.35

The ANOVA results from Smith's regression are provided in Exhibit 3:

Exhibit 3: Foreclosure Share Regression Equation ANOVA Table


Source        Degrees of Freedom    Sum of Squares    Mean Sum of Squares
Regression    3                     15                5.0000
Error         16                    5                 0.3125
Total         19                    20

Smith expresses the following concerns about the test statistics derived in her regression:

Concern 1: If my regression errors exhibit conditional heteroskedasticity, my t-statistics will be underestimated.

Concern 2: If my independent variables are correlated with each other, my F-statistic will be overestimated.

Before completing her analysis, Smith runs a regression of the changes in foreclosure share on its lagged value. The following
regression results and autocorrelations were derived using quarterly data over the past 5 years (Exhibits 4 and 5,
respectively):

Exhibit 4. Lagged Regression Results

Δ foreclosure sharet = 0.05 + 0.25(Δ foreclosure sharet-1)

Exhibit 5. Autocorrelation Analysis


Lag Autocorrelation t-statistic
1 0.05 0.22

2 -0.35 -1.53

3 0.25 1.09

4 0.10 0.44

Exhibit 6 provides critical values for the Student's t-Distribution

Exhibit 6: Critical Values for Student's t-Distribution


Area in Both Tails Combined

Degrees of Freedom 20% 10% 5% 1%

16 1.337 1.746 2.120 2.921

17 1.333 1.740 2.110 2.898

18 1.330 1.734 2.101 2.878


19 1.328 1.729 2.093 2.861

20 1.325 1.725 2.086 2.845

Question #18 of 106 Question ID: 479729

Using a 1% significance level, which of the following is closest to the lower bound of the lower confidence interval of the ΔINT
slope coefficient?

ᅞ A) −0.045
ᅞ B) −0.296
ᅚ C) −0.316

Explanation

The appropriate confidence interval associated with a 1% significance level is the 99% confidence level, which equals;

slope coefficient ± critical t-statistic (1% significance level) × coefficient standard error

The standard error is not explicitly provided in this question, but it can be derived from the formula for the t-statistic: t-statistic = coefficient / standard error.

From Exhibit 2, the ΔINT slope coefficient estimate equals 1.00, and its t-statistic equals 2.22. Therefore, solving for the standard error: standard error = 1.00 / 2.22 ≈ 0.450.

The critical value for the 1% significance level is found down the 1% column in the t-tables provided in Exhibit 6. The
appropriate degrees of freedom for the confidence interval equals n − k − 1 = 20 − 3 − 1 = 16 (k is the number of slope
estimates = 3). Therefore, the critical value for the 99% confidence interval (or 1% significance level) equals 2.921.

So, the 99% confidence interval for the ΔINT slope coefficient is:

1.00 ± 2.921(0.450): lower bound equals 1 − 1.316 and upper bound 1 + 1.316

or (−0.316, 2.316).

(Study Session 3, LOS 11.e)
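
A minimal sketch of the same derivation (backing the standard error out of the reported t-statistic, then forming the 99% interval):

```python
coefficient = 1.00
t_stat = 2.22
critical_t = 2.921   # 1% level, 16 degrees of freedom (Exhibit 6)

std_error = coefficient / t_stat              # about 0.450
lower = coefficient - critical_t * std_error  # about -0.316
upper = coefficient + critical_t * std_error  # about  2.316
print(round(std_error, 3), round(lower, 3), round(upper, 3))
```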

Question #19 of 106 Question ID: 479730

Based on her regression results in Exhibit 2, using a 5% level of significance, Smith should conclude that:

ᅞ A) both stimulus packages and housing crises have significant effects on


foreclosure percentages.
ᅞ B) stimulus packages have significant effects on foreclosure percentages, but housing
crises do not have significant effects on foreclosure percentages.
ᅚ C) stimulus packages do not have significant effects on foreclosure percentages, but
housing crises do have significant effects on foreclosure percentages.

Explanation
The appropriate test statistic for tests of significance on individual slope coefficient estimates is the t-statistic, which is provided
in Exhibit 2 for each regression coefficient estimate. The reported t-statistic equals -2.10 for the STIM slope estimate and
equals 2.35 for the CRISIS slope estimate. The critical t-statistic for the 5% significance level equals 2.12 (16 degrees of
freedom, 5% level of significance).

Therefore, the slope estimate for STIM is not statistically significant (the reported t-statistic, -2.10, is not large enough). In
contrast, the slope estimate for CRISIS is statistically significant (the reported t-statistic, 2.35, exceeds the 5% significance
level critical value). (Study Session 3, LOS 10.a)

Question #20 of 106 Question ID: 479731

The standard error of estimate for Smith's regression is closest to:

ᅞ A) 0.53
ᅚ B) 0.56
ᅞ C) 0.16

Explanation

The Standard Error of the Estimate (SEE) equals the standard deviation of the regression residuals: SEE = √[SSE / (n − k − 1)] = √(5 / 16) = √0.3125 ≈ 0.56. A low SEE implies a high R2. (Study Session 3, LOS 10.h)
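
The calculation from the ANOVA table, as a short check:

```python
import math

sse, df_error = 5, 16            # error sum of squares and n - k - 1 from Exhibit 3
see = math.sqrt(sse / df_error)  # square root of the mean squared error
print(round(see, 2))             # 0.56
```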

Question #21 of 106 Question ID: 479732

Is Smith correct or incorrect regarding Concerns 1 and 2?

ᅞ A) Correct on both Concerns.


ᅚ B) Incorrect on both Concerns.

ᅞ C) Only correct on one concern and incorrect on the other.

Explanation

Smith's Concern 1 is incorrect. Heteroskedasticity is a violation of a regression assumption, and refers to regression error
variance that is not constant over all observations in the regression. Conditional heteroskedasticity is a case in which the error
variance is related to the magnitudes of the independent variables (the error variance is "conditional" on the independent
variables). The consequence of conditional heteroskedasticity is that the standard errors will be too low, which, in turn, causes
the t-statistics to be too high. Smith's Concern 2 also is not correct. Multicollinearity refers to independent variables that are
correlated with each other. Multicollinearity causes standard errors for the regression coefficients to be too high, which, in turn,
causes the t-statistics to be too low. However, contrary to Smith's concern, multicollinearity has no effect on the F-statistic.
(Study Session 3, LOS 11.k)

Question #22 of 106 Question ID: 479733

The most recent change in foreclosure share was +1 percent. Smith decides to base her analysis on the data and methods
provided in Exhibits 4 and 5, and determines that the two-step ahead forecast for the change in foreclosure share (in percent)
is 0.125, and that the mean reverting value for the change in foreclosure share (in percent) is 0.071. Is Smith correct?
ᅚ A) Smith is correct on the two-step ahead forecast for change in foreclosure share
only.
ᅞ B) Smith is correct on both the forecast and the mean reverting level.

ᅞ C) Smith is correct on the mean-reverting level for forecast of change in foreclosure


share only.

Explanation

Forecasts are derived by substituting the appropriate value for the period t-1 lagged value into the estimated equation:

ΔForeclosure Sharet = 0.05 + 0.25(1.00) = 0.30

So, the one-step ahead forecast equals 0.30%. The two-step ahead (%) forecast is derived by substituting 0.30 into the equation:

ΔForeclosure Sharet+1 = 0.05 + 0.25(0.30) = 0.125

Therefore, the two-step ahead forecast equals 0.125%, so Smith is correct on the forecast. The mean-reverting level, however, is b0 / (1 − b1) = 0.05 / (1 − 0.25) ≈ 0.067, not 0.071, so Smith is incorrect on the mean-reverting value.

(Study Session 3, LOS 11.d)
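
Both numbers in the answer can be checked in a few lines (the +1 percent change is the question's stated starting value):

```python
b0, b1 = 0.05, 0.25
latest_change = 1.00                     # most recent change in foreclosure share, percent

one_step = b0 + b1 * latest_change       # 0.30
two_step = b0 + b1 * one_step            # 0.125
mean_reverting = b0 / (1 - b1)           # about 0.067, not 0.071
print(round(one_step, 2), round(two_step, 3), round(mean_reverting, 3))
```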

Question #23 of 106 Question ID: 479734

Assume for this question that Smith finds that the foreclosure share series has a unit root. Under these conditions, she can
most reliably regress foreclosure share against the change in interest rates (ΔINT) if:

ᅚ A) ΔINT has unit root and is cointegrated with foreclosure share.


ᅞ B) ΔINT has unit root and is not cointegrated with foreclosure share.
ᅞ C) ΔINT does not have unit root.

Explanation

The error terms in the regressions for choices A, B, and C will be nonstationary. Therefore, some of the regression
assumptions will be violated and the regression results are unreliable. If, however, both series are nonstationary (which will
happen if each has unit root), but cointegrated, then the error term will be covariance stationary and the regression results are
reliable. (Study Session 3, LOS 11.n)

Question #24 of 106 Question ID: 461823

The main reason why financial and economic time series intrinsically exhibit some form of nonstationarity is that:

ᅞ A) most financial and economic time series have a natural tendency to revert toward their means.
ᅚ B) most financial and economic relationships are dynamic and the estimated regression
coefficients can vary greatly between periods.

ᅞ C) serial correlation, a contributing factor to nonstationarity, is always present to a certain degree in most financial and economic time series.
Explanation

Because all financial and economic relationships are dynamic, regression coefficients can vary widely from period to period. Therefore, financial and economic time series will always exhibit some amount of instability or nonstationarity.

Question #25 of 106 Question ID: 461806

Consider the estimated model xt = -6.0 + 1.1 xt-1 + 0.3 xt-2 + εt that is estimated over 50 periods. The value of the time series
for the 49th observation is 20 and the value of the time series for the 50th observation is 22. What is the forecast for the 51st
observation?

ᅞ A) 23.
ᅞ B) 30.2.
ᅚ C) 24.2.

Explanation

Forecasted x51 = -6.0 + 1.1 (22) + 0.3 (20) = 24.2.

Question #26 of 106 Question ID: 461802

The model xt = b0 + b1 xt-1 + b2 xt-2 + b3 xt-3 + b4 xt-4 + εt is:

ᅞ A) an autoregressive conditional heteroskedastic model, ARCH.


ᅚ B) an autoregressive model, AR(4).
ᅞ C) a moving average model, MA(4).

Explanation

This is an autoregressive model (i.e., lagged dependent variable as independent variables) of order p=4 (that is, 4 lags).

Question #27 of 106 Question ID: 461856

Suppose you estimate the following model of residuals from an autoregressive model:
ε²t = 0.25 + 0.6ε²t-1 + μt, where ε denotes the estimated (fitted) residuals

If the residual at time t is 0.9, the forecasted variance for time t+1 is:

ᅞ A) 0.850.
ᅚ B) 0.736.
ᅞ C) 0.790.

Explanation

The variance at t + 1 is 0.25 + [0.60 × (0.9)²] = 0.25 + 0.486 = 0.736. See also, ARCH models.
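
The ARCH(1) variance forecast is just the fitted equation evaluated at the squared residual; a minimal check:

```python
a0, a1 = 0.25, 0.60
residual_t = 0.9

forecast_variance = a0 + a1 * residual_t ** 2   # 0.25 + 0.6 * 0.81 = 0.736
print(round(forecast_variance, 3))
```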
Question #28 of 106 Question ID: 461858

Suppose you estimate the following model of residuals from an autoregressive model:
ε²t = 0.4 + 0.80ε²t-1 + μt, where ε denotes the estimated (fitted) residuals

If the residual at time t is 2.0, the forecasted variance for time t+1 is:

ᅞ A) 3.2.
ᅚ B) 3.6.
ᅞ C) 2.0.

Explanation

The variance at t + 1 is 0.4 + [0.80 × (2.0)²] = 0.4 + 3.2 = 3.6.

Question #29 of 106 Question ID: 461855

The data below yields the following AR(1) specification: xt = 0.9 − 0.55xt-1 + εt, and the indicated fitted values and residuals.

Time    xt    fitted values    residuals

1 1 - -

2 -1 0.35 -1.35

3 2 1.45 0.55

4 -1 -0.2 -0.8

5 0 1.45 -1.45

6 2 0.9 1.1

7 0 -0.2 0.2

8 1 0.9 0.1

9 2 0.35 1.65

The following sets of data are ordered from earliest to latest. To test for ARCH, the researcher should regress:

ᅞ A) (1, 4, 1, 0, 4, 0, 1, 4) on (1, 1, 4, 1, 0, 4, 0, 1)
ᅚ B) (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01) on (0.3025, 0.64, 2.1025, 1.21, 0.04,
0.01, 2.7225).
ᅞ C) (-1.35, 0.55, -0.8, -1.45, 1.1, 0.2, 0.1, 1.65) on (0.35, 1.45, -0.2, 1.45, 0.9, -0.2, 0.9,
0.35)

Explanation

The test for ARCH is based on a regression of the squared residuals on their lagged values. The squared residuals are
(1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225). So, (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01) is regressed on
(0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225). If the coefficient a1 in the regression ε²t = a0 + a1ε²t-1 + μt is statistically different from zero, the time series exhibits ARCH(1).
Question #30 of 106 Question ID: 461815

The regression results from fitting an AR(1) to a monthly time series are presented below. What is the mean-reverting level for
the model?

Model: ΔExpt = b0 + b1ΔExpt-1 + εt

Coefficients Standard Error t-Statistic p-value

Intercept 1.3304 0.0089 112.2849 < 0.0001

Lag-1 0.1817 0.0061 30.0125 < 0.0001

ᅚ A) 1.6258.

ᅞ B) 0.6151.

ᅞ C) 7.3220.

Explanation

The mean-reverting level is b0 / (1 − b1) = 1.3304 / (1 − 0.1817) = 1.6258.
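
The mean-reverting level calculation, for reference:

```python
b0, b1 = 1.3304, 0.1817
mean_reverting_level = b0 / (1 - b1)
print(round(mean_reverting_level, 4))   # 1.6258
```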

Questions #31-36 of 106

Yolanda Seerveld is an analyst studying the growth of sales of a new restaurant chain called Very Vegan. The increase in the
public's awareness of healthful eating habits has had a very positive effect on Very Vegan's business. Seerveld has gathered
quarterly data for the restaurant's sales for the past three years. Over the twelve periods, sales grew from $17.2 million in the
first quarter to $106.3 million in the last quarter. Because Very Vegan has experienced growth of more than 500% over the
three years, Seerveld suspects an exponential growth model may be more appropriate than a simple linear trend model.
However, she begins by estimating the simple linear trend model:

(sales)t = α + β × (Trend)t + εt
Where the Trend is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.
Regression Statistics

Multiple R 0.952640

R2 0.907523

Adjusted R2 0.898275

Standard Error 8.135514

Observations 12

1st order autocorrelation coefficient of the residuals: −0.075

ANOVA

df SS

Regression 1 6495.203
Residual 10 661.8659

Total 11 7157.069

Coefficients Standard Error

Intercept 10.0015 5.0071

Trend 6.7400 0.6803

The analyst then estimates the following model:

(natural logarithm of sales)t = α + β × (Trend)t + εt


Regression Statistics

Multiple R 0.952028
R2 0.906357

Adjusted R2 0.896992

Standard Error 0.166686

Observations 12
1st order autocorrelation coefficient of
the residuals: −0.348

ANOVA

df SS

Regression 1 2.6892

Residual 10 0.2778

Total 11 2.9670

Coefficients Standard Error

Intercept 2.9803 0.1026

Trend 0.1371 0.0140

Seerveld compares the results based upon the output statistics and conducts two-tailed tests at a 5% level of significance. One
concern is the possible problem of autocorrelation, and Seerveld makes an assessment based upon the first-order
autocorrelation coefficient of the residuals that is listed in each set of output. Another concern is the stationarity of the data.
Finally, the analyst composes a forecast based on each equation for the quarter following the end of the sample.

Question #31 of 106 Question ID: 485703

Are either of the slope coefficients statistically significant?

ᅞ A) The simple trend regression is not, but the log-linear trend regression is.
ᅞ B) The simple trend regression is, but not the log-linear trend regression.
ᅚ C) Yes, both are significant.

Explanation
The respective t-statistics are 6.7400 / 0.6803 = 9.9074 and 0.1371 / 0.0140 = 9.7929. For 10 degrees of freedom, the critical
t-value for a two-tailed test at a 5% level of significance is 2.228, so both slope coefficients are statistically significant. (Study
Session 3, LOS 11.a)

Question #32 of 106 Question ID: 485704

Based upon the output, which equation explains the cause for variation of Very Vegan's sales over the sample period?

ᅞ A) Both the simple linear trend and the log-linear trend have equal explanatory
power.
ᅚ B) The cause cannot be determined using the given information.
ᅞ C) The simple linear trend.

Explanation

To actually determine the explanatory power for sales itself, fitted values for the log-linear trend would have to be determined
and then compared to the original data. The given information does not allow for such a comparison. (Study Session 3, LOS
11.b)

Question #33 of 106 Question ID: 485705

With respect to the possible problems of autocorrelation and nonstationarity, using the log-linear transformation appears to
have:

ᅞ A) improved the results for nonstationarity but not autocorrelation.


ᅞ B) improved the results for autocorrelation but not nonstationarity.
ᅚ C) not improved the results for either possible problems.

Explanation

The fact that there is a significant trend for both equations indicates that the data is not stationary in either case. As for
autocorrelation, the analyst really cannot test it using the Durbin-Watson test because there are fewer than 15 observations,
which is the lower limit of the DW table. Looking at the first-order autocorrelation coefficient, however, we see that it increased
(in absolute value terms) for the log-linear equation. If anything, therefore, the problem became more severe. (Study Session
3, LOS 11.b)

Question #34 of 106 Question ID: 485706

The primary limitation of both models is that:

ᅚ A) each uses only one explanatory variable.


ᅞ B) regression is not appropriate for estimating the relationship.
ᅞ C) the results are difficult to interpret.

Explanation

The main problem with a trend model is that it uses only one variable so the underlying dynamics are really not adequately
addressed. A strength of the models is that the results are easy to interpret. The levels of many economic variables such as
the sales of a firm, prices, and gross domestic product (GDP) have a significant time trend, and a regression is an appropriate
tool for measuring that trend. (Study Session 3, LOS 11.b)
Question #35 of 106 Question ID: 485707

Using the simple linear trend model, the forecast of sales for Very Vegan for the first out-of-sample period is:

ᅚ A) $97.6 million.
ᅞ B) $123.0 million.
ᅞ C) $113.0 million.

Explanation

The forecast is 10.0015 + (13 × 6.7400) = 97.62. (Study Session 3, LOS 11.a)

Question #36 of 106 Question ID: 485708

Using the log-linear trend model, the forecast of sales for Very Vegan for the first out-of-sample period is:

ᅚ A) $117.0 million.
ᅞ B) $109.4 million.
ᅞ C) $121.2 million.

Explanation

The forecast is e^(2.9803 + 13 × 0.1371) = e^4.7626 ≈ $117.0 million. (Study Session 3, LOS 11.a)
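
Both trend forecasts (this question and the previous one) can be reproduced as follows; note that the log-linear forecast exponentiates the entire fitted value.

```python
import math

t = 13  # first out-of-sample quarter

linear_forecast = 10.0015 + 6.7400 * t               # about 97.6 ($ millions)
log_linear_forecast = math.exp(2.9803 + 0.1371 * t)  # about 117 ($ millions)
print(round(linear_forecast, 2), round(log_linear_forecast, 2))
```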

Question #37 of 106 Question ID: 461863

Alexis Popov, CFA, wants to estimate how sales have grown from one quarter to the next on average. The most direct way for
Popov to estimate this would be:

ᅞ A) an AR(1) model with a seasonal lag.


ᅞ B) an AR(1) model.
ᅚ C) a linear trend model.

Explanation

If the goal is to simply estimate the dollar change from one period to the next, the most direct way is to estimate xt = b0 + b1 ×
(Trend) + et, where Trend is simply 1, 2, 3, ....T. The model predicts a change by the value b1 from one period to the next.

Question #38 of 106 Question ID: 461818

Frank Batchelder and Miriam Yenkin are analysts for Bishop Econometrics. Batchelder and Yenkin are discussing the models
they use to forecast changes in China's GDP and how they can compare the forecasting accuracy of each model. Batchelder
states, "The root mean squared error (RMSE) criterion is typically used to evaluate the in-sample forecast accuracy of
autoregressive models." Yenkin replies, "If we use the RMSE criterion, the model with the largest RMSE is the one we should
judge as the most accurate."
With regard to their statements about using the RMSE criterion:

ᅞ A) Batchelder is correct; Yenkin is incorrect.


ᅞ B) Batchelder is incorrect; Yenkin is correct.
ᅚ C) Batchelder is incorrect; Yenkin is incorrect.

Explanation

The root mean squared error (RMSE) criterion is used to compare the accuracy of autoregressive models in forecasting out-
of-sample values (not in-sample values). Batchelder is incorrect. Out-of-sample forecast accuracy is important because the
future is always out of sample, and therefore out-of-sample performance of a model is critical for evaluating real world
performance.

Yenkin is also incorrect. The RMSE criterion takes the square root of the average squared errors from each model. The model
with the smallest RMSE is judged the most accurate.

Question #39 of 106 Question ID: 461861

Consider the following estimated model:


(Salest − Salest-1) = 100 − 1.5(Salest-1 − Salest-2) + 1.2(Salest-4 − Salest-5)    t = 1, 2, ..., T

and Sales for the periods 1999.1 through 2000.2:

t Period Sales
T 2000.2 $1,000
T-1 2000.1 $900
T-2 1999.4 $1,200
T-3 1999.3 $1,400
T-4 1999.2 $1,000
T-5 1999.1 $800

The forecasted Sales amount for 2000.3 is closest to:

ᅞ A) $1,730.
ᅞ B) $730.
ᅚ C) $1,430.

Explanation

Change in sales = $100 - 1.5 ($1,000-900) + 1.2 ($1,400-1,000)


Change in sales = $100 - 150 + 480 =$430
Sales = $1,000 + 430 = $1,430
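
The forecast mechanics, laid out step by step (sales values copied from the table above):

```python
b0, b1, b2 = 100, -1.5, 1.2
sales = {"T": 1000, "T-1": 900, "T-2": 1200, "T-3": 1400, "T-4": 1000, "T-5": 800}

# For period T+1, the lag-1 difference is Sales(T) - Sales(T-1)
# and the seasonal lag-4 difference is Sales(T-3) - Sales(T-4).
change = b0 + b1 * (sales["T"] - sales["T-1"]) + b2 * (sales["T-3"] - sales["T-4"])
forecast = sales["T"] + change   # 1000 + 430 = 1430
print(change, forecast)
```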

Question #40 of 106 Question ID: 477240

Which of the following is NOT a requirement for a series to be covariance stationary? The:

ᅞ A) covariance of the time series with itself (lead or lag) must be constant.
ᅞ B) expected value of the time series is constant over time.
ᅚ C) time series must have a positive trend.

Explanation

For a time series to be covariance stationary: 1) the series must have an expected value that is constant and finite in all
periods, 2) the series must have a variance that is constant and finite in all periods, and 3) the covariance of the time series
with itself for a fixed number of periods in the past or future must be constant and finite in all periods.

Question #41 of 106 Question ID: 461851

Barry Phillips, CFA, is analyzing quarterly data. He has estimated an AR(1) relationship (xt = b0 + b1 × xt-1 + et) and wants to
test for seasonality. To do this he would want to see if which of the following statistics is significantly different from zero?

ᅞ A) Correlation(et, et-1).
ᅞ B) Correlation(et, et-5).
ᅚ C) Correlation(et, et-4).

Explanation

Although seasonality can make the other correlations significant, the focus should be on correlation(et, et-4) because the 4th lag
is the value that corresponds to the same season as the predicted variable in the analysis of quarterly data.

Questions #42-47 of 106

Jason Cranwell, CFA, has hypothesised that sales of luxury cars have grown at a constant rate over the past 15 years.

Question #42 of 106 Question ID: 461786

Which of the following models is most appropriate for modelling these data?

ᅚ A) ln(LuxCarSalest) = b0 + b1(t) + et

ᅞ B) LuxCarSalest = b0 + b1LuxCarSales(t-1) + et
ᅞ C) LuxCarSales = b0 + b1(t) + et

Explanation

Whenever the rate of change is constant over time, the appropriate model is a log-linear trend model. B is an autoregressive model and C is a linear trend model.

(Study Session 3, LOS 13.b)

Question #43 of 106 Question ID: 461787

After discussing the above matter with a colleague, Cranwell finally decides to use an autoregressive model of order one i.e.
AR(1) for the above data. Below is a summary of the findings of the model:

b0 0.4563
b1 0.6874

Standard error 0.3745

R-squared 0.7548

Durbin Watson 1.23

F 12.63

Observations 180

Calculate the mean reverting level of the series:

ᅞ A) 1.66
ᅞ B) 1.26
ᅚ C) 1.46

Explanation

The formula for the mean reverting level is b0/(1-b1) = 0.4563/(1-0.6874)=1.46

(Study Session 3, LOS 13.f)

Question #44 of 106 Question ID: 461788

Cranwell is aware that the Dickey-Fuller test can be used to discover whether a model has a unit root. He is also aware that the test would use a revised set of critical t-values. What would it mean to Cranwell to reject the null of the Dickey-Fuller test (H0: g = 0)?

ᅚ A) There is no unit root


ᅞ B) There is a unit root but the model can be used if covariance-stationary

ᅞ C) There is a unit root and the model cannot be used in its current form

Explanation

The null hypothesis of g = 0 actually means that b1 - 1 = 0 , meaning that b1 = 1. Since we have rejected the null, we can
conclude that the model has no unit root.

(Study Session 3, LOS 13.j)

Question #45 of 106 Question ID: 461789

Cranwell would also like to test for serial correlation in his AR(1) model. To do this, Cranwell should:

ᅞ A) determine if the series has a finite and constant covariance between leading
and lagged terms of itself.
ᅚ B) use a t-test on the residual autocorrelations over several lags.

ᅞ C) use the provided Durbin Watson statistic and compare it to a critical value.

Explanation

To test for serial correlation in an AR model, test for the significance of residual autocorrelations over different lags. The goal
is for all t-statistics to lack statistical significance. The Durbin-Watson test is used with trend models; it is not appropriate for
testing for serial correlation of the error terms in an autoregressive model. Constant and finite unconditional variance is not an
indicator of serial correlation but rather is one of the requirements of covariance stationarity.

(Study Session 3, LOS 13.e)

Question #46 of 106 Question ID: 461790

When using the root mean squared error (RMSE) criterion to evaluate the predictive power of the model, which of the following
is the most appropriate statement ?

ᅚ A) Use the model with the lowest RMSE calculated using the out-of-sample data.
ᅞ B) Use the model with the highest RMSE calculated using the in-sample data.
ᅞ C) Use the model with the lowest RMSE calculated using the in-sample data.

Explanation

RMSE is a measure of error hence the lower the better. It should be calculated on the out-of-sample data i.e. the data not
directly used in the development of the model. This measure thus indicates the predictive power of our model.

(Study Session 3, LOS 13.g)

Question #47 of 106 Question ID: 461791

If Cranwell suspects that seasonality may be present in his AR model, he would most correctly:

ᅞ A) use the Durbin Watson statistic.


ᅞ B) test for the significance of the slope coefficients.
ᅚ C) examine the t-statistics of the residual lag autocorrelations.

Explanation

Seasonality in monthly and quarterly data is apparent in the high (statistically significant) t-statistics of the residual lag
autocorrelations for Lag 12 and Lag 4 respectively. To correct for that, the analyst should incorporate the appropriate lag in
his/her AR model.

(Study Session 3, LOS 13.l)

Question #48 of 106 Question ID: 461782

Dianne Hart, CFA, is considering the purchase of an equity position in Book World, Inc, a leading seller of books in the United
States. Hart has obtained monthly sales data for the past seven years, and has plotted the data points on a graph. Which of
the following statements regarding Hart's analysis of the data time series of Book World's sales is most accurate? Hart should
utilize a:

ᅞ A) linear model to analyze the data because the mean appears to be constant.
ᅞ B) mean-reverting model to analyze the data because the time series pattern is
covariance stationary.
ᅚ C) log-linear model to analyze the data because it is likely to exhibit a compound growth
trend.
Explanation

A log-linear model is more appropriate when analyzing data that is growing at a compound rate. Sales are a classic example
of a type of data series that normally exhibits compound growth.

Question #49 of 106 Question ID: 461810

The regression results from fitting an AR(1) model to the first-differences in enrollment growth rates at a large university includes a Durbin-
Watson statistic of 1.58. The number of quarterly observations in the time series is 60. At 5% significance, the critical values for the
Durbin-Watson statistic are dl = 1.55 and du = 1.62. Which of the following is the most accurate interpretation of the DW statistic for the
model?

ᅚ A) The Durbin-Watson statistic cannot be used with AR(1) models.

ᅞ B) Since DW > dl, the null hypothesis of no serial correlation is rejected.

ᅞ C) Since dl < DW < du, the results of the DW test are inconclusive.

Explanation

The Durbin-Watson statistic is not useful when testing for serial correlation in an autoregressive model where one of the independent
variables is a lagged value of the dependent variable. The existence of serial correlation in an AR model is determined by examining the
autocorrelations of the residuals.

Question #50 of 106 Question ID: 461816

David Brice, CFA, has used an AR(1) model to forecast the next period's interest rate to be 0.08. The AR(1) has a positive
slope coefficient. If the interest rate is a mean reverting process with an unconditional mean, a.k.a., mean reverting level,
equal to 0.09, then which of the following could be his forecast for two periods ahead?

ᅚ A) 0.081.
ᅞ B) 0.113.

ᅞ C) 0.072.

Explanation

As Brice makes more distant forecasts, each forecast will be closer to the unconditional mean. So, the two period forecast
would be between 0.08 and 0.09, and 0.081 is the only possible answer.

Question #51 of 106 Question ID: 461804

Troy Dillard, CFA, has estimated the following equation using quarterly data: xt = 93 - 0.5× xt-1 + 0.1× xt-4 + et. Given the data
in the table below, what is Dillard's best estimate of the first quarter of 2007?

Time Value

2005: I 62

2005: II 62
2005: III 66

2005: IV 66

2006: I 72

2006: II 70

2006: III 64

2006: IV 66

ᅞ A) 66.40.
ᅞ B) 66.60.
ᅚ C) 67.20.

Explanation

To get the answer, Dillard will use the data for 2006: IV and 2006: I, xt-1 = 66 and xt-4 = 72 respectively:
E[x2007:I] = 93- 0.5× xt-1 + 0.1× xt-4
E[x2007:I] = 93- 0.5× 66 + 0.1× 72
E[x2007:I] = 67.20

Question #52 of 106 Question ID: 461792

To qualify as a covariance stationary process, which of the following does not have to be true?

ᅞ A) E[xt] = E[xt+1].
ᅚ B) Covariance(xt, xt-1) = Covariance(xt, xt-2).
ᅞ C) Covariance(xt, xt-2) = Covariance(xt, xt+2).

Explanation

If a series is covariance stationary then the unconditional mean is constant across periods. The unconditional mean or
expected value is the same from period to period: E[xt] = E[xt+1]. The covariance between any two observations equal distance
apart will be equal, e.g., the t and t-2 observations with the t and t+2 observations. The one relationship that does not have to
be true is the covariance between the t and t-1 observations equaling that of the t and t-2 observations.

Question #53 of 106 Question ID: 461808

The model xt = b0 + b1xt-1 + b2xt-2 + εt is:

ᅞ A) a moving average model, MA(2).


ᅞ B) an autoregressive conditional heteroskedastic model, ARCH.
ᅚ C) an autoregressive model, AR(2).

Explanation

This is an autoregressive model (i.e., lagged dependent variable as independent variables) of order p = 2 (that is, 2 lags).
Question #54 of 106 Question ID: 461759

David Wellington, CFA, has estimated the following log-linear trend model: LN(xt) = b0 + b1t + εt. Using six years of quarterly
observations, 2001:I to 2006:IV, Wellington gets the following estimated equation: LN(xt) = 1.4 + 0.02t. The first out-of-sample
forecast of xt for 2007:I is closest to:

ᅞ A) 4.14.
ᅚ B) 6.69.
ᅞ C) 1.88.

Explanation

Wellington's out-of-sample forecast of LN(xt) is 1.9 = 1.4 + 0.02 × 25, and e^1.9 = 6.69. (Six years of quarterly observations, at 4
per year, takes us up to t = 24. The first time period after that is t = 25.)

Questions #55-60 of 106

Clara Holmes, CFA, is attempting to model the importation of an herbal tea into the United States which last year was $ 54
million. She gathers 24 years of annual data, which is in millions of inflation-adjusted dollars.

She computes the following equation:

(Tea Imports)t = 3.8836 + 0.9288 × (Tea Imports)t − 1 + et


t-statistics (0.9328) (9.0025)
R2 = 0.7942

Adj. R2 = 0.7844
SE = 3.0892
N = 23

Holmes and her colleague, John Briars, CFA, discuss the implication of the model and how they might improve it. Holmes is
fairly satisfied with the results because, as she says "the model explains 78.44 percent of the variation in the dependent
variable." Briars says the model actually explains more than that.

Briars asks about the Durbin-Watson statistic. Holmes said that she did not compute it, so Briars reruns the model and
computes its value to be 2.1073. Briars says "now we know serial correlation is not a problem." Holmes counters by saying
"rerunning the model and computing the Durbin-Watson statistic was unnecessary because serial correlation is never a
problem in this type of time-series model."

Briars and Holmes decide to ask their company's statistician about the consequences of serial correlation. Based on what
Briars and Holmes tell the statistician, the statistician informs them that serial correlation will only affect the standard errors
and the coefficients are still unbiased. The statistician suggests that they employ the Hansen method, which corrects the
standard errors for both serial correlation and heteroskedasticity.

Given the information from the statistician, Briars and Holmes decide to use the estimated coefficients to make some
inferences. Holmes says the results do not look good for the future of tea imports because the coefficient on (Tea Import)t − 1
is less than one. This means the process is mean reverting. Using the coefficients in the output, says Holmes, "we know that
whenever tea imports are higher than 41.810, the next year they will tend to fall. Whenever the tea imports are less than
41.810, then they will tend to rise in the following year." Briars agrees with the general assertion that the results suggest that
imports will not grow in the long run and tend to revert to a long-run mean, but he says the actual long-run mean is 54.545.
Briars then computes the forecast of imports three years into the future.

Question #55 of 106 Question ID: 485690

With respect to the statements made by Holmes and Briars concerning serial correlation and the importance of the Durbin-
Watson statistic:

ᅞ A) Holmes was correct and Briars was incorrect.


ᅚ B) they were both incorrect.
ᅞ C) Briars was correct and Holmes was incorrect.

Explanation

Briars was incorrect because the DW statistic is not appropriate for testing serial correlation in an autoregressive model of this
sort. Holmes was incorrect because serial correlation can certainly be a problem in such a model. They need to analyze the
residuals and compute autocorrelation coefficients of the residuals to better determine if serial correlation is a problem. (Study
Session 3, LOS 10.k)

Question #56 of 106 Question ID: 485691

With respect to the statement that the company's statistician made concerning the consequences of serial correlation,
assuming the company's statistician is competent, we would most likely deduce that Holmes and Briars did not tell the
statistician:

ᅚ A) the model's specification.


ᅞ B) the value of the Durbin-Watson statistic.
ᅞ C) the sample size.

Explanation

Serial correlation will bias the standard errors. It can also bias the coefficient estimates in an autoregressive model of this type.
Thus, Briars and Holmes probably did not tell the statistician the model is an AR(1) specification. (Study Session 3, LOS 10.m)

Question #57 of 106 Question ID: 485692

The statistician's statement concerning the benefits of the Hansen method is:

ᅚ A) correct, because the Hansen method adjusts for problems associated with both
serial correlation and heteroskedasticity.
ᅞ B) not correct, because the Hansen method only adjusts for problems associated with
serial correlation but not heteroskedasticity.
ᅞ C) not correct, because the Hansen method only adjusts for problems associated with
heteroskedasticity but not serial correlation.

Explanation

The statistician is correct because the Hansen method adjusts for problems associated with both serial correlation and
heteroskedasticity. (Study Session 3, LOS 10.k)
Question #58 of 106 Question ID: 485693

Using the model's results, Briar's forecast for three years into the future is:

ᅞ A) $47.151 million.
ᅞ B) $54.543 million.
ᅚ C) $54.108 million.

Explanation

Briars' forecasts for the next three years would be:

year one: 3.8836 + 0.9288 × 54 = 54.0388


year two: 3.8836 + 0.9288 × (54.0388) = 54.0748
year three: 3.8836 + 0.9288 × (54.0748) = 54.1083
(Study Session 3, LOS 11.a)
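
The three iterated forecasts can be generated with a short loop (starting from last year's $54 million of imports):

```python
b0, b1 = 3.8836, 0.9288
level = 54.0   # last year's tea imports, $ millions

# prints 54.0388, 54.0748, 54.1083
for year in range(1, 4):
    level = b0 + b1 * level
    print(year, round(level, 4))
```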

Question #59 of 106 Question ID: 485694

With respect to the comments of Holmes and Briars concerning the mean reversion of the import data, the long-run mean
value that:

ᅞ A) Briars computes is not correct, and his conclusion is probably not accurate.
ᅞ B) Briars computes is not correct, but his conclusion is probably accurate.
ᅚ C) Briars computes is correct.

Explanation

Briars has computed a value that would be correct if the results of the model were reliable. The long-run mean would be
3.8836 / (1 − 0.9288)= 54.5450. (Study Session 3, LOS 11.a)

Question #60 of 106 Question ID: 485695

Given the nature of their analysis, the most likely potential problem that Briars and Holmes need to investigate is:

ᅞ A) multicollinearity.
ᅞ B) unit root.
ᅚ C) autocorrelation.

Explanation

Multicollinearity cannot be a problem because there is only one independent variable. For a time series AR model,
autocorrelation is a bigger worry. The model may have been misspecified leading to statistically significant autocorrelations.
Unit root does not seem to be a problem given the value of b1<1. (Study Session 3, LOS 11.e)

Question #61 of 106 Question ID: 461852

The table below shows the autocorrelations of the lagged residuals for quarterly theater ticket sales that were estimated using
the AR(1) model: ln(salest) = b0 + b1(ln salest-1) + et. Assuming the critical t-statistic at 5% significance is 2.0, which of the
following is the most likely conclusion about the appropriateness of the model? The time series:

Lagged Autocorrelations of the Log of Quarterly Theater Ticket Sales

Lag Autocorrelation Standard Error t-Statistic

1 −0.0738 0.1667 −0.44271

2 −0.1047 0.1667 −0.62807

3 −0.0252 0.1667 −0.15117

4 0.5528 0.1667 3.31614

ᅞ A) contains ARCH (1) errors.

ᅚ B) contains seasonality.

ᅞ C) would be more appropriately described with an MA(4) model.

Explanation

The time series contains seasonality as indicated by the strong and significant autocorrelation of the lag-4 residual.

Question #62 of 106 Question ID: 461860

Consider the following estimated model:


(Salest − Salest-1) = 30 + 1.25(Salest-1 − Salest-2) + 1.1(Salest-4 − Salest-5)    t = 1, 2, ..., T

and Sales for the periods 1999.1 through 2000.2:

t Period Sales
T 2000.2 $2,000
T-1 2000.1 $1,800
T-2 1999.4 $1,500
T-3 1999.3 $1,400
T-4 1999.2 $1,900
T-5 1999.1 $1,700

The forecasted Sales amount for 2000.3 is closest to:

ᅞ A) $2,625.
ᅚ B) $1,730.
ᅞ C) $2,270.

Explanation

Note that since we are forecasting 2000.3, the numbering of the "t" column has changed.
Change in sales = $30 + 1.25 ($2,000-1,800) + 1.1 ($1,400-1,900)
Change in sales = $30 + 250 - 550 = -$270
Sales = $2,000 - 270 = $1,730

Question #63 of 106 Question ID: 461853

Which of the following statements regarding seasonality is least accurate?

ᅞ A) A time series that is first differenced can be adjusted for seasonality by


incorporating the first-differenced value for the previous year's corresponding
period.
ᅚ B) The presence of seasonality makes it impossible to forecast using a time-series
model.

ᅞ C) Not correcting for seasonality when, in fact, seasonality exists in the time series results
in a violation of an assumption of linear regression.

Explanation

Forecasting with a time-series model that includes a seasonal component is no different from any other forecasting; seasonality does not make forecasting impossible.

Question #64 of 106 Question ID: 461831

Barry Phillips, CFA, has the following time series observations from earliest to latest: (5, 6, 5, 7, 6, 6, 8, 8, 9, 11). Phillips
transforms the series so that he will estimate an autoregressive process on the following data (1, -1, 2, -1, 0, 2, 0, 1, 2). The
transformation Phillips employed is called:

ᅚ A) first differencing.
ᅞ B) moving average.
ᅞ C) beta drift.

Explanation

Phillips first differenced the data: 1 = 6 − 5, −1 = 5 − 6, ..., 1 = 9 − 8, 2 = 11 − 9.
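
First differencing is a one-liner in NumPy; a quick check on Phillips's series:

```python
import numpy as np

x = np.array([5, 6, 5, 7, 6, 6, 8, 8, 9, 11])
print(np.diff(x))   # [ 1 -1  2 -1  0  2  0  1  2]
```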

Questions #65-70 of 106

Winston Collier, CFA, has been asked by his supervisor to develop a model for predicting the warranty expense incurred by
Premier Snowplow Manufacturing Company in servicing its plows. Three years ago, major design changes were made on
newly manufactured plows in an effort to reduce warranty expense. Premier warrants its snowplows for 4 years or 18,000
miles, whichever comes first. Warranty expense is higher in winter months, but some of Premier's customers defer
maintenance issues that are not essential to keeping the machines functioning to spring or summer seasons. The data that
Collier will analyze is in the following table (in $ millions):

Quarter    Warranty Expense    Change in Warranty Expense (yt)    Lagged Change in Warranty Expense (yt-1)    Seasonal Lagged Change in Warranty Expense (yt-4)

2002.1 103

2002.2 52 -51

2002.3 32 -20 -51

2002.4 68 +36 -20

2003.1 91 +23 +36

2003.2 44 -47 +23 -51

2003.3 30 -14 -47 -20

2003.4 60 +30 -14 +36

2004.1 77 +17 +30 +23

2004.2 38 -39 +17 -47

2004.3 29 -9 -39 -14

2004.4 53 +24 -9 +30

Winston submits the following results to his supervisor. The first is the estimation of a trend model for the period 2002:1 to
2004:4. The model is below. The standard errors are in parentheses.

(Warranty expense)t = 74.1 - 2.7* t + et


R-squared = 16.2%
(14.37) (1.97)

Winston also submits the following results for an autoregressive model on the differences in the expense over the period
2004:2 to 2004:4. The model is below where "y" represents the change in expense as defined in the table above. The
standard errors are in parentheses.

yt = -0.7 - 0.07* yt-1 + 0.83* yt-4 + et
     (0.643) (0.0222)  (0.0186)

R-squared = 99.98%

After receiving the output, Collier's supervisor asks him to compute moving averages of the sales data.

Question #65 of 106 Question ID: 461834

Collier's supervisors would probably not want to use the results from the trend model for all of the following reasons EXCEPT:

ᅞ A) it does not give insights into the underlying dynamics of the movement of the
dependent variable.
ᅞ B) the slope coefficient is not significant.
ᅚ C) the model is a linear trend model and log-linear models are always superior.

Explanation
Linear trend models are not always inferior to log-linear models. To determine which specification is better would require more
analysis such as a graph of the data over time. As for the other possible answers, Collier can see that the slope coefficient is
not significant because the t-statistic is 1.37=2.7/1.97. Also, regressing a variable on a simple time trend only describes the
movement over time, and does not address the underlying dynamics of the dependent variable. (Study Session 3, LOS 13.a)

Question #66 of 106 Question ID: 461835

For this question only, assume that Winston also ran an AR(1) model with the following results:

yt = −0.9 − 0.23* yt−1 + et
     (0.823) (0.0222)

R-squared = 78.3%

The mean reverting level of this model is closest to:

ᅞ A) 0.77.
ᅞ B) 1.16.
ᅚ C) −0.73.

Explanation

The mean-reverting level is x = b0/(1 − b1) = −0.9/[1 − (−0.23)] = −0.73

(Study Session 3, LOS 13.f)
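A one-line check of this calculation (an illustrative sketch only):

# Mean-reverting level of an AR(1) model: x* = b0 / (1 - b1)
b0, b1 = -0.9, -0.23
print(round(b0 / (1 - b1), 2))   # -0.73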

Question #67 of 106 Question ID: 461836

Based upon the output provided by Collier to his supervisor and without any further calculations, in a comparison of the two
equations' explanatory power of warranty expense it can be concluded that:

ᅞ A) the autoregressive model on the first differenced data has more explanatory
power for warranty expense.
ᅚ B) the information provided is not sufficient to determine which equation has greater
explanatory power.
ᅞ C) the two equations are equally useful in explaining warranty expense.

Explanation

Although the R-squared values would suggest that the autoregressive model has more explanatory power, there are a few
problems. First, the models have different sample periods and different numbers of explanatory variables. Second, the actual
input data is different. To assess the explanatory power of warranty expense, as opposed to the first differenced values, we
must transform the fitted values of the first-differenced data back to the original level data to assess the explanatory power for
the warranty expense. (Study Session 3, LOS 12.h)

Question #68 of 106 Question ID: 472474

Based on the autoregressive model, expected warranty expense in the first quarter of 2005 will be closest to:

ᅚ A) $65 million.
ᅞ B) $60 million.
ᅞ C) $51 million.

Explanation

Substituting the 1-period lagged change from 2004.4 and the 4-period (seasonal) lagged change from 2004.1 into the model, the predicted change in warranty expense is:

11.73 = −0.7 − (0.07 × 24) + (0.83 × 17)

The expected warranty expense is 53 + 11.73 = $64.73 million. (Study Session 3, LOS 13.d)
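To verify the arithmetic, a minimal Python sketch (not part of the original solution) using the lagged changes from the table:

# Estimated model on first differences: y_t = -0.7 - 0.07*y_{t-1} + 0.83*y_{t-4}
y_lag1 = 24   # change in warranty expense, 2004.4
y_lag4 = 17   # change in warranty expense, 2004.1 (seasonal lag)
change_2005_1 = -0.7 - 0.07 * y_lag1 + 0.83 * y_lag4
expense_2005_1 = 53 + change_2005_1    # add the forecast change to the 2004.4 level
print(round(change_2005_1, 2), round(expense_2005_1, 2))   # 11.73, 64.73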

Question #69 of 106 Question ID: 461838

Based upon the results, is there a seasonality component in the data?

ᅚ A) Yes, because the coefficient on yt-4 is large compared to its standard error.
ᅞ B) Yes, because the coefficient on yt is small compared to its standard error.
ᅞ C) No, because the slope coefficients in the autoregressive model have opposite signs.

Explanation

The coefficient on the 4th lag tests the seasonality component. Its t-ratio is 0.83/0.0186 = 44.6, which would be significant even using Chebyshev's inequality. Neither of the other answers is correct or relates to the seasonality of the data. (Study Session 3, LOS
13.l)

Question #70 of 106 Question ID: 461839

Collier most likely chose to use first-differenced data in the autoregressive model:

ᅚ A) in order to avoid problems associated with unit roots.


ᅞ B) to increase the explanatory power.
ᅞ C) because the time trend was significant.

Explanation

Time series with unit roots are very common in economic and financial models, and unit roots cause problems in assessing
the model. Fortunately, a time series with a unit root may be transformed to achieve covariance stationarity using the first-
differencing process. Although the explanatory power of the model was high (but note the small sample size), a model using
first-differenced data often has less explanatory power. The time trend was not significant, so that was not a possible answer.
(Study Session 3, LOS 13.k)

Question #71 of 106 Question ID: 461825

David Brice, CFA, has tried to use an AR(1) model to predict a given exchange rate. Brice has concluded the exchange rate
follows a random walk without a drift. The current value of the exchange rate is 2.2. Under these conditions, which of the
following would be least likely?

ᅞ A) The process is not covariance stationary.


ᅚ B) The residuals of the forecasting model are autocorrelated.

ᅞ C) The forecast for next period is 2.2.


Explanation

The one-period forecast of a random walk model without drift is E(xt+1) = E(xt + et ) = xt + 0, so the forecast is simply xt = 2.2.
A random walk is not covariance stationary because its variance grows with time. However, the error term et = xt - xt-1 is not
autocorrelated, which makes autocorrelated residuals the least likely outcome.

Question #72 of 106 Question ID: 461827

A time series x that is a random walk with a drift is best described as:

ᅞ A) xt = b 0 + b 1 xt − 1.
ᅞ B) xt = xt − 1 + εt.
ᅚ C) xt = b0 + b1xt − 1 + εt.

Explanation

The best forecast of a random walk for period t is the value of the series at (t − 1). If the random walk has a drift component,
this drift (the intercept b0) is added to the previous period's value of the time series to produce the forecast. In the model
xt = b0 + b1xt − 1 + εt, a random walk with drift implies b1 = 1 and b0 ≠ 0.

Question #73 of 106 Question ID: 461859

One choice a researcher can use to test for nonstationarity is to use a:

ᅚ A) Dickey-Fuller test, which uses a modified t-statistic.


ᅞ B) Dickey-Fuller test, which uses a modified χ2 statistic.
ᅞ C) Breusch-Pagan test, which uses a modified t-statistic.

Explanation

The Dickey-Fuller test estimates the equation (xt - xt-1) = b0 + (b1 - 1) * xt-1 + et and tests if H0: (b1 - 1) = 0. Using a modified t-
test, if it is found that (b1-1) is not significantly different from zero, then it is concluded that b1 must be equal to 1.0 and the
series has a unit root.
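The mechanics can be sketched with ordinary least squares on simulated data (an illustration only; in practice the computed statistic is compared with Dickey-Fuller critical values rather than standard t-distribution values):

import numpy as np

np.random.seed(0)
x = np.cumsum(np.random.normal(size=200))          # simulated random walk (has a unit root)

dx = np.diff(x)                                    # x_t - x_{t-1}
x_lag = x[:-1]                                     # x_{t-1}
X = np.column_stack([np.ones_like(x_lag), x_lag])  # regressors: constant and x_{t-1}

# Estimate (x_t - x_{t-1}) = b0 + g * x_{t-1} + e_t, where g = (b1 - 1)
coef, *_ = np.linalg.lstsq(X, dx, rcond=None)
resid = dx - X @ coef
s2 = resid @ resid / (len(dx) - 2)                 # residual variance
se_g = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])  # standard error of g
t_stat = coef[1] / se_g                            # test H0: g = 0, i.e., b1 = 1
print(round(coef[1], 4), round(t_stat, 2))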

Question #74 of 106 Question ID: 461793

Which of the following statements regarding covariance stationarity is CORRECT?

ᅞ A) A time series may be both covariance stationary and heteroskedastic.


ᅞ B) A time series that is covariance stationary may have residuals whose mean changes
over time.
ᅚ C) The estimation results of an AR model involving a time series that is not covariance
stationary are meaningless.

Explanation

Covariance stationarity requires that the expected value and the variance of the time series be constant over time.
Question #75 of 106 Question ID: 461824

Which of the following statements regarding the instability of time-series models is most accurate? Models estimated with:

ᅞ A) a greater number of independent variables are usually more stable than those
with a smaller number.
ᅞ B) longer time series are usually more stable than those with shorter time series.
ᅚ C) shorter time series are usually more stable than those with longer time series.

Explanation

Those models with a shorter time series are usually more stable because there is less opportunity for variance in the
estimated regression coefficients between the different time periods.

Question #76 of 106 Question ID: 461850

Which of the following is a seasonally adjusted model?

ᅚ A) (Salest - Sales t-1)= b 0 + b 1 (Sales t-1 - Sales t-2) + b 2 (Sales t-4 - Sales t-5) + εt.
ᅞ B) Salest = b0 + b1 Sales t-1 + b2 Sales t-2 + εt.
ᅞ C) Salest = b1 Sales t-1+ εt.

Explanation

This model is a seasonal AR with first differencing.

Question #77 of 106 Question ID: 461857

Which of the following is least likely a consequence of a model containing ARCH(1) errors? The:

ᅞ A) regression parameters will be incorrect.

ᅚ B) model's specification can be corrected by adding an additional lag variable.

ᅞ C) variance of the errors can be predicted.

Explanation

The presence of autoregressive conditional heteroskedasticity (ARCH) indicates that the variance of the error terms is not constant. This
is a violation of the regression assumptions upon which time series models are based. The addition of another lag variable to a model is
not a means for correcting for ARCH (1) errors.

Question #78 of 106 Question ID: 461829

A time series that has a unit root can be transformed into a time series without a unit root through:

ᅚ A) first differencing.
ᅞ B) calculating moving average of the residuals.
ᅞ C) mean reversion.

Explanation

First differencing a series that has a unit root creates a time series that does not have a unit root.

Question #79 of 106 Question ID: 461830

Barry Phillips, CFA, has estimated an AR(1) relationship (xt = b0 + b1 × xt-1 + et) and got the following result: xt+1 = 0.5 + 1.0xt +
et. Phillips should:

ᅞ A) not first difference the data because b 1 − b 0 = 1.0 − 0.5 = 0.5 < 1.

ᅚ B) first difference the data because b1 = 1.


ᅞ C) not first difference the data because b0 = 0.5 < 1.

Explanation

The condition b1 = 1 means that the series has a unit root and is not stationary. The correct way to transform the data in such
an instance is to first difference the data.

Question #80 of 106 Question ID: 461783

Trend models can be useful tools in the evaluation of a time series of data. However, there are limitations to their usage.
Trend models are not appropriate when which of the following violations of the linear regression assumptions is present?

ᅞ A) Model misspecification.
ᅚ B) Serial correlation.

ᅞ C) Heteroskedasticity.

Explanation

One of the primary assumptions of linear regression is that the residual terms are not correlated with each other. If serial
correlation, also called autocorrelation, is present, then trend models are not an appropriate analysis tool.

Question #81 of 106 Question ID: 461760

Modeling the trend in a time series of a variable that grows at a constant rate with continuous compounding is best done with:

ᅚ A) a log-linear transformation of the time series.


ᅞ B) a moving average model.

ᅞ C) simple linear regression.

Explanation

The log-linear transformation of a series that grows at a constant rate with continuous compounding (exponential growth) will
cause the transformed series to be linear.

Question #82 of 106 Question ID: 461812

An analyst modeled the time series of annual earnings per share in the specialty department store industry as an AR(3)
process. Upon examination of the residuals from this model, she found that there is a significant autocorrelation for the
residuals of this model. This indicates that she needs to:

ᅞ A) alter the model to an ARCH model.


ᅞ B) switch models to a moving average model.
ᅚ C) revise the model to include at least another lag of the dependent variable.

Explanation

She should estimate an AR(4) model, and then re-examine the autocorrelations of the residuals.

Question #83 of 106 Question ID: 461809

The procedure for determining the structure of an autoregressive model is:

ᅚ A) estimate an autoregressive model (e.g., an AR(1) model), calculate the


autocorrelations for the model's residuals, test whether the autocorrelations
are different from zero, and revise the model if there are significant
autocorrelations.
ᅞ B) test autocorrelations of the residuals for a simple trend model, and specify the number
of significant lags.

ᅞ C) estimate an autoregressive model (for example, an AR(1) model), calculate the


autocorrelations for the model's residuals, test whether the autocorrelations are
different from zero, and add an AR lag for each significant autocorrelation.

Explanation

The procedure is iterative: continually test for autocorrelations in the residuals and stop adding lags when the autocorrelations
of the residuals are eliminated. Even if several of the residuals exhibit autocorrelation, the lags should be added one at a time.

Question #84 of 106 Question ID: 461758

In the time series model: yt=b0 + b1 t + εt, t=1,2,...,T, the:

ᅞ A) disturbance term is mean-reverting.


ᅚ B) change in the dependent variable per time period is b1.
ᅞ C) disturbance terms are autocorrelated.

Explanation

The slope is the change in the dependent variable per unit of time. The intercept is the estimate of the value of the dependent
variable before the time series begins. The disturbance term should be independent and identically distributed. There is no
reason to expect the disturbance term to be mean-reverting, and if the residuals are autocorrelated, the research should
correct for that problem.

Question #85 of 106 Question ID: 461821

Consider the estimated AR(2) model, xt = 2.5 + 3.0 xt-1 + 1.5 xt-2 + εt t=1,2,...50. Making a prediction for values of x for 1 ≤ t ≤
50 is referred to as:

ᅞ A) requires more information to answer the question.


ᅚ B) an in-sample forecast.
ᅞ C) an out-of-sample forecast.

Explanation

An in-sample (a.k.a. within-sample) forecast is made within the bounds of the data used to estimate the model. An out-of-
sample forecast is for values of the independent variable that are outside of those used to estimate the model.

Question #86 of 106 Question ID: 461862

Alexis Popov, CFA, is analyzing monthly data. Popov has estimated the model xt = b0 + b1 × xt-1 + b2 × xt-2 + et. The researcher
finds that the residuals have a significant ARCH process. The best solution to this is to:

ᅞ A) re-estimate the model using a seasonal lag.


ᅚ B) re-estimate the model with generalized least squares.

ᅞ C) re-estimate the model using only an AR(1) specification.

Explanation

If the residuals have an ARCH process, then the correct remedy is generalized least squares which will allow Popov to better
interpret the results.

Question #87 of 106 Question ID: 461832

Suppose that the following time-series model is found to have a unit root:

Sales t = b0 + b1 Sales t-1+ εt

What is the specification of the model if first differences are used?

ᅚ A) (Salest - Salest-1)= b 0 + b 1 (Sales t-1 - Sales t-2) + εt.


ᅞ B) Salest = b1 Sales t-1+ εt.
ᅞ C) Salest = b0 + b1 Sales t-1 + b2 Sales t-2 + εt.

Explanation
Estimation with first differences requires calculating the change in the variable from period to period.

Question #88 of 106 Question ID: 461820

William Zox, an analyst for Opal Mountain Capital Management, uses two different models to forecast changes in the inflation
rate in the United Kingdom. Both models were constructed using U.K. inflation data from 1988-2002. In order to compare the
forecasting accuracy of the models, Zox collected actual U.K. inflation data from 2004-2005, and compared the actual data to
what each model predicted. The first model is an AR(1) model that was found to have an average squared error of 10.429
over the 12 month period. The second model is an AR(2) model that was found to have an average squared error of 11.642
over the 12 month period. Zox then computed the root mean squared error for each model to use as a basis of comparison.
Based on the results of his analysis, which model should Zox conclude is the most accurate?

ᅞ A) Model 1 because it has an RMSE of 5.21.


ᅚ B) Model 1 because it has an RMSE of 3.23.
ᅞ C) Model 2 because it has an RMSE of 3.41.

Explanation

The root mean squared error (RMSE) criterion is used to compare the accuracy of autoregressive models in forecasting out-
of-sample values. To determine which model will more accurately forecast future values, we calculate the square root of the
mean squared error. The model with the smallest RMSE is the preferred model. The RMSE for Model 1 is √10.429 = 3.23,
while the RMSE for Model 2 is √11.642 = 3.41. Since Model 1 has the lowest RMSE, that is the one Zox should conclude is the
most accurate.
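A small Python check of the RMSE comparison (illustrative only):

from math import sqrt

mse_model_1 = 10.429   # average squared forecast error, AR(1) model
mse_model_2 = 11.642   # average squared forecast error, AR(2) model
print(round(sqrt(mse_model_1), 2), round(sqrt(mse_model_2), 2))   # 3.23, 3.41
# The model with the smaller RMSE (Model 1) is judged more accurate out of sample.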

Questions #89-94 of 106

Bill Johnson, CFA, has prepared data concerning revenues from sales of winter clothing made by Polar Corporation. This data
is presented (in $ millions) in the following table:
Quarter    Sales    Change in Sales (Y)    Lagged Change in Sales (Yt-1)    Seasonal Lagged Change in Sales (Yt-4)
2013.1 182
2013.2 74 −108
2013.3 78 4 −108
2013.4 242 164 4
2014.1 194 −48 164
2014.2 79 −115 −48 −108
2014.3 90 11 −115 4
2014.4 260 170 11 w

Question #89 of 106 Question ID: 461844

The preceding table will be used by Johnson to forecast values using:

ᅞ A) a log-linear trend model with a seasonal lag.


ᅞ B) a serially correlated model with a seasonal lag.
ᅚ C) an autoregressive model with a seasonal lag.

Explanation

Johnson will use the table to forecast values using an autoregressive model for periods in succession since each successive
forecast relies on the forecast for the preceding period. The seasonal lag is introduced to account for seasonal variations in
the observed data.

(LOS 13.a,l)

Question #90 of 106 Question ID: 461845

The value that Johnson should enter in the table in place of "w" is:

ᅚ A) 164.
ᅞ B) −115.
ᅞ C) −48.

Explanation

The seasonal lagged change in sales shows the change in sales from the period 4 quarters before the current period. Sales in
the year 2013 quarter 4 increased $164 million over the prior period.

(LOS 13.l)

Question #91 of 106 Question ID: 461846

Imagine that Johnson prepares a change-in-sales regression analysis model with seasonality, which includes the following:

Coefficients

Intercept −6.032

Lag 1 0.017

Lag 4 0.983

Based on the model, expected sales in the first quarter of 2015 will be closest to:

ᅞ A) 155.
ᅚ B) 210.
ᅞ C) 190.

Explanation

Substituting the 1-period lagged data from 2014.4 and the 4-period lagged data from 2014.1 into the model formula, change in
sales is predicted to be −6.032 + (0.017 × 170) + (0.983 × −48) = −50.326. Expected sales are 260 + (−50.326) = 209.674.

(LOS 13.l)
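To verify the arithmetic, a minimal Python sketch (not part of the original solution):

# Estimated model on changes in sales with a seasonal (four-quarter) lag:
# change_t = -6.032 + 0.017*change_{t-1} + 0.983*change_{t-4}
lag1 = 170    # change in sales, 2014.4
lag4 = -48    # change in sales, 2014.1
change_2015_1 = -6.032 + 0.017 * lag1 + 0.983 * lag4
sales_2015_1 = 260 + change_2015_1    # add the forecast change to 2014.4 sales
print(round(change_2015_1, 3), round(sales_2015_1, 1))   # -50.326, 209.7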

Question #92 of 106 Question ID: 461847

Johnson's model was most likely designed to incorporate a correction for:


ᅞ A) heteroskedasticity of model residuals.
ᅚ B) nonstationarity in time series data.
ᅞ C) cointegration in the time series.

Explanation

Johnson's model transforms raw sales data by first differencing it and then modeling change in sales. This is most likely an
adjustment to make the data stationary for use in an AR model.

(LOS 13.k)

Question #93 of 106 Question ID: 461848

To test for covariance-stationarity in the data, Johnson would most likely use a:

ᅚ A) Dickey-Fuller test.
ᅞ B) Durbin-Watson test.
ᅞ C) t-test.

Explanation

The Dickey-Fuller test for unit roots could be used to test whether the data are covariance nonstationary. The Durbin-Watson
test is used for detecting serial correlation in the residuals of trend models but cannot be used in AR models. A t-test is used to
test for residual autocorrelation in AR models.

(LOS 13.k)

Question #94 of 106 Question ID: 461849

The presence of conditional heteroskedasticity of residuals in Johnson's model would most likely lead to:

ᅞ A) invalid standard errors of regression coefficients, but statistical tests will still
be valid.

ᅞ B) invalid estimates of regression coefficients, but the standard errors will still be valid.

ᅚ C) invalid standard errors of regression coefficients and invalid statistical tests.

Explanation

The presence of conditional heteroskedasticity may lead to incorrect estimates of the standard errors of the regression
coefficients and hence to invalid tests of significance of the coefficients.

(LOS 13.j)

Question #95 of 106 Question ID: 461814

Which of the following statements regarding a mean reverting time series is least accurate?

ᅞ A) If the current value of the time series is above the mean reverting level, the
prediction is that the time series will decrease.
ᅞ B) If the time-series variable is x, then xt = b0 + b1xt-1.
ᅚ C) If the current value of the time series is above the mean reverting level, the prediction
is that the time series will increase.

Explanation

If the current value of the time series is above the mean reverting level, the prediction is that the time series will decrease; if
the current value of the time series is below the mean reverting level, the prediction is that the time series will increase.

Questions #96-101 of 106

Albert Morris, CFA, is evaluating the results of an estimation of the number of wireless phone minutes used on a quarterly
basis within the territory of Car-tel International, Inc. Some of the information is presented below (in billions of minutes):

Wireless Phone Minutes (WPM)t = b0 + b1 WPMt-1 + εt

ANOVA Degrees of Freedom Sum of Squares Mean Square

Regression 1 7,212.641 7,212.641

Error 26 3,102.410 119.324

Total 27 10,315.051

Coefficients Coefficient Standard Error of the Coefficient

Intercept -8.0237 2.9023

WPM t-1 1.0926 0.0673

The variance of the residuals from one time period within the time series is not dependent on the variance of the residuals in
another time period.

Morris also models the monthly revenue of Car-tel using data over 96 monthly observations. The model is shown below:

Sales (CAD$ millions) = b0 + b1 Salest−1 + εt

Coefficients Coefficient Standard Error of the Coefficient


Intercept 43.2 12.32
Salest−1 0.8867 0.4122

Question #96 of 106 Question ID: 485697

The value for WPM this period is 544 billion. Using the results of the model, the forecast Wireless Phone Minutes three periods
in the future is:

ᅞ A) 586.35.
ᅞ B) 691.30.

ᅚ C) 683.18.

Explanation

The one-period forecast is −8.023 + (1.0926 × 544) = 586.35.


The two-period forecast is then −8.023 + (1.0926 × 586.35) = 632.62.

Finally, the three-period forecast is −8.023 + (1.0926 × 632.62) = 683.18.

(LOS 11.a)
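The chained forecast can be reproduced with a short Python sketch (illustrative only; the solution above makes the same substitutions by hand):

# One-step AR(1) forecasts substituted forward for three periods:
# WPM_t = -8.0237 + 1.0926 * WPM_{t-1}
b0, b1 = -8.0237, 1.0926
wpm = 544.0
for _ in range(3):
    wpm = b0 + b1 * wpm
    print(round(wpm, 2))   # 586.35, then 632.62, then 683.18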

Question #97 of 106 Question ID: 461770

The R-squared for the WPM model is closest to:

ᅚ A) 70%.
ᅞ B) 97%.
ᅞ C) 33%.

Explanation

R-squared = SSR/SST = 7,212.641/10,315.051 = 70%.
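A quick check (sketch only):

# R-squared from the ANOVA table: regression sum of squares over total sum of squares
ssr, sst = 7212.641, 10315.051
print(round(ssr / sst, 3))   # 0.699, i.e., roughly 70%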

Question #98 of 106 Question ID: 485698

The WPM model was specified as a(n):

ᅞ A) Moving Average (MA) Model.


ᅞ B) Autoregressive (AR) Model with a seasonal lag.
ᅚ C) Autoregressive (AR) Model.

Explanation

The model is specified as an AR Model, but there is no seasonal lag. No moving averages are employed in the estimation of
the model.

(LOS 11.a, l)

Question #99 of 106 Question ID: 485699

Based upon the information provided, Morris would most likely get more meaningful statistical results by:

ᅞ A) doing nothing. No information provided suggests that any of these will


improve the specification.
ᅚ B) first differencing the data.
ᅞ C) adding more lags to the model.

Explanation

Since the slope coefficient is greater than one, the process may not be covariance stationary (we would have to test this to be
definitive). A common technique to correct for this is to first difference the variable to perform the following regression:
Δ(WPM)t = b0 + b1 Δ(WPM)t-1 + εt.

(LOS 11.j)

Question #100 of 106 Question ID: 485700


The mean reverting level of monthly sales is closest to:

ᅞ A) 43.2 million.
ᅚ B) 381.29 million.
ᅞ C) 8.83 million.

Explanation

The mean-reverting level of an AR(1) model is b0/(1 − b1) = 43.2/(1 − 0.8867) = 43.2/0.1133 ≈ 381.29 million.

(LOS 11.f)

Question #101 of 106 Question ID: 485701

Morris concludes that the current price of Car-tel stock is consistent with a single-stage constant growth model (with g = 3%).
Based on this information, the sales model is most likely:

ᅚ A) Incorrectly specified and taking the natural log of the data would be an
appropriate remedy.

ᅞ B) Correctly specified.
ᅞ C) Incorrectly specified and first differencing the data would be an appropriate remedy.

Explanation

If a constant growth rate is an appropriate model for Car-tel, its dividends (as well as earnings and revenues) will grow at a
constant rate, meaning the level series grows exponentially. In such a case, the series should be transformed by taking its
natural log. First differencing would remove the trending component of a covariance nonstationary time series but would not be
appropriate for transforming an exponentially growing time series.

(LOS 11.b)

Question #102 of 106 Question ID: 461813

A monthly time series of changes in maintenance expenses (ΔExp) for an equipment rental company was fit to an AR(1)
model over 100 months. The results of the regression and the first twelve lagged residual autocorrelations are shown in the
tables below. Based on the information in these tables, does the model appear to be appropriately specified? (Assume a 5%
level of significance.)

Regression Results for Maintenance Expense Changes

Model: ΔExpt = b0 + b1ΔExpt-1 + et

Coefficients Standard Error t-Statistic p-value

Intercept 1.3304 0.0089 112.2849 < 0.0001

Lag-1 0.1817 0.0061 30.0125 < 0.0001

Lagged Residual Autocorrelations for Maintenance Expense Changes

Lag Autocorrelation t-Statistic Lag Autocorrelation t-Statistic


1 −0.239 −2.39 7 −0.018 −0.18

2 −0.278 −2.78 8 −0.033 −0.33

3 −0.045 −0.45 9 0.261 2.61

4 −0.033 −0.33 10 −0.060 −0.60

5 −0.180 −1.80 11 0.212 2.12

6 −0.110 −1.10 12 0.022 0.22

ᅚ A) No, because several of the residual autocorrelations are significant.

ᅞ B) Yes, because most of the residual autocorrelations are negative.

ᅞ C) Yes, because the intercept and the lag coefficient are significant.

Explanation

At a 5% level of significance, the critical t-value is 1.98. Since the absolute values of several of the residual autocorrelations' t-statistics
exceed 1.98, it can be concluded that significant serial correlation exists and the model should be respecified. The next logical step is to
estimate an AR(2) model, then test the associated residuals for autocorrelation. If no serial correlation is detected, seasonality and ARCH
behavior should be tested.
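The screening step described above can be sketched in a few lines of Python (an illustration, not part of the original answer):

# Residual autocorrelation t-statistics from the table; 5% critical value is about 1.98
autocorr_t = {1: -2.39, 2: -2.78, 3: -0.45, 4: -0.33, 5: -1.80, 6: -1.10,
              7: -0.18, 8: -0.33, 9: 2.61, 10: -0.60, 11: 2.12, 12: 0.22}
significant_lags = [lag for lag, t in autocorr_t.items() if abs(t) > 1.98]
print(significant_lags)   # [1, 2, 9, 11] -> the AR(1) specification is inadequate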

Question #103 of 106 Question ID: 461840

Which of the following statements regarding unit roots in a time series is least accurate?

ᅞ A) A time series with a unit root is not covariance stationary.


ᅞ B) A time series that is a random walk has a unit root.
ᅚ C) Even if a time series has a unit root, the predictions from the estimated model are
valid.

Explanation

The presence of a unit root means that the least squares regression procedure that we have been using to estimate an AR(1)
model cannot be used without transforming the data first.

A time series with a unit root will follow a random walk process. Since a time series that follows a random walk is not
covariance stationary, modeling such a time series in an AR model can lead to incorrect statistical conclusions, and decisions
made on the basis of these conclusions may be wrong. Unit roots are most likely to occur in time series that trend over time or
have a seasonal element.

Question #104 of 106 Question ID: 472472

Troy Dillard, CFA, has estimated the following equation using semiannual data: xt = 44 + 0.1× xt-1 - 0.25× xt-2 - 0.15× xt-3 + et.
Given the data in the table below, what is Dillard's best forecast of the second half of 2007?
Time Value

2003: I 31

2003: II 31

2004: I 33

2004: II 33

2005: I 36

2005: II 35

2006: I 32

2006: II 33

ᅞ A) 34.05.
ᅚ B) 34.36.
ᅞ C) 33.74.

Explanation

To get the answer, Dillard must first make the forecast for 2007:I
E[x2007:I]= 44 + 0.1 × xt-1 - 0.25 × xt-2 - 0.15 × xt-3
E[x2007:I] = 44 + 0.1× 33 - 0.25× 32 - 0.15× 35
E[x2007:I] = 34.05

Then, use this forecast in the equation for the first lag:
E[x2007:II] = 44 + 0.1× 34.05 - 0.25× 33 - 0.15× 32
E[x2007:II] = 34.36
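To reproduce the two-step forecast (a sketch only, not part of the original solution):

# Estimated model: x_t = 44 + 0.1*x_{t-1} - 0.25*x_{t-2} - 0.15*x_{t-3}
history = [31, 31, 33, 33, 36, 35, 32, 33]   # 2003:I through 2006:II

def forecast_next(series):
    # one-step-ahead forecast from the three most recent observations
    return 44 + 0.1 * series[-1] - 0.25 * series[-2] - 0.15 * series[-3]

x_2007_1 = forecast_next(history)                # forecast for 2007:I
x_2007_2 = forecast_next(history + [x_2007_1])   # uses the 2007:I forecast as the first lag
print(x_2007_1, x_2007_2)                        # roughly 34.05 and 34.36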

Question #105 of 106 Question ID: 461811

The table below includes the first eight residual autocorrelations from fitting the first differenced time series of the absenteeism
rates (ABS) at a manufacturing firm with the model ΔABSt = b0 + b1ΔABSt-1 + εt. Based on the results in the table, which of the
following statements most accurately describes the appropriateness of the specification of the model, ΔABSt = b0 + b1ΔABSt-1
+ εt?

Lagged Autocorrelations of the Residuals of the First Differences in Absenteeism Rates

Lag Autocorrelation Standard Error t-Statistic

1 −0.0738 0.1667 −0.44271

2 −0.1047 0.1667 −0.62807

3 −0.0252 0.1667 −0.15117

4 −0.0157 0.1667 −0.09418

5 −0.1262 0.1667 −0.75705

6 0.0768 0.1667 0.46071

7 0.0038 0.1667 0.02280


8 −0.0188 0.1667 −0.11278

ᅞ A) The negative values for the autocorrelations indicate that the model does not fit the time
series.

ᅚ B) The low values for the t-statistics indicate that the model fits the time series.

ᅞ C) The Durbin-Watson statistic is needed to determine the presence of significant correlation of


the residuals.

Explanation

The t-statistics are all very small, indicating that none of the autocorrelations are significantly different than zero. Based on these results,
the model appears to be appropriately specified. The error terms, however, should still be checked for heteroskedasticity.

Question #106 of 106 Question ID: 461841

Marvin Greene is interested in modeling the sales of the retail industry. He collected data on aggregate sales and found the
following:
Salest = 0.345 + 1.0 Salest-1

The standard error of the slope coefficient is 0.15, and the number of observations is 60. Given a level of significance of 5%,
which of the following can we NOT conclude about this model?

ᅞ A) The slope on lagged sales is not significantly different from one.


ᅚ B) The model is covariance stationary.
ᅞ C) The model has a unit root.

Explanation

The test of whether the slope is different from one indicates failure to reject the null H0: b1 = 1 (t-critical with df = 58 is
approximately 2.000, t-calculated = (1.0 - 1.0)/0.15 = 0.0). This is a 2-tailed test and we cannot reject the null since 0.0 is not
greater than 2.000. Because the coefficient on Salest-1 is statistically indistinguishable from 1.0, the series has a unit root and
is therefore nonstationary. Any time series that has a unit root is not covariance stationary, which can be corrected through the
first-differencing process.
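A short sketch of the hypothesis test (illustrative only):

# Test H0: b1 = 1 for the slope on lagged sales
b1_hat, se_b1 = 1.0, 0.15
t_calc = (b1_hat - 1.0) / se_b1
print(t_calc)   # 0.0, far below the critical value of roughly 2.000 (df = 58)
# Failing to reject H0 means the series has a unit root and is not covariance stationary.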
