Lecture 05

Time-Series Analysis

• As financial analysts, we often use time-series data to make investment decisions.

• A time series is a set of observations on a variable’s outcomes in different time periods: the quarterly sales for a particular company during the past five years, for example, or the daily returns on a traded security.

• In this lecture, we explore the two chief uses of time-series models:

• to explain the past and

• to predict the future of a time series.

• We also discuss how to estimate time-series models.

• The following two examples illustrate the kinds of questions we might want to ask about time series.

• Suppose it is the beginning of 2020 and we are managing a US-based investment portfolio that includes Swiss stocks.

• Because the value of this portfolio would decrease if the Swiss franc depreciates with respect to the dollar, and vice versa, holding all else constant, we are considering whether to hedge the portfolio’s exposure to changes in the value of the franc.

• To help us in making this decision, we decide to model the time series of the franc/dollar exchange rate.

• Exhibit 1 shows monthly data on the franc/dollar exchange rate. The data are monthly averages of daily exchange rates. Has the exchange rate been more stable since 1987 than it was in previous years? Has the exchange rate shown a long-term trend? How can we best use past exchange rates to predict future exchange rates?
• As another example, suppose it is the beginning of 2020.

• We cover retail stores for a sell-side firm and want to predict retail sales for the coming year.

• The data are not seasonally adjusted, hence the spikes around
the holiday season at the turn of each year. Because the
reported sales in the stores’ financial statements are not
seasonally adjusted, we model seasonally unadjusted retail
sales.

• How can we model the trend in retail sales? How can we adjust
for the extreme seasonality reflected in the peaks and troughs
occurring at regular intervals? How can we best use past retail
sales to predict future retail sales?
• Some fundamental questions arise in time-series analysis:

• How do we model trends?

• How do we predict the future value of a time series based on its past values?

• How do we model seasonality?

• How do we choose among time-series models?

• How do we model changes in the variance of time series over time?
LINEAR TREND MODELS

• Estimating a trend in a time series and using that trend to predict future values of the time series is the simplest method of forecasting.

• For example, we saw in Exhibit 2 that monthly US retail sales show a long-term pattern of upward movement: a trend.

• In this section, we examine two types of trends, linear trends and log-linear trends, and discuss how to choose between them.
Linear Trend Models
• Equation 1 gives the linear trend model, yt = b0 + b1t + εt, where t = 1, 2, …, T. The trend line, b0 + b1t, predicts the value of the time series at time t (t takes on a value of 1 in the first period of the sample and increases by 1 in each subsequent period).

• Because the coefficient b1 is the slope of the trend line, we refer to b1 as the trend coefficient.

• We can estimate the two coefficients, b0 and b1, using ordinary least squares, denoting the estimated coefficients as b̂0 and b̂1.

• Recall that ordinary least squares is an estimation method based on the criterion of minimizing the sum of a regression’s squared residuals.
• Now we demonstrate how to use these estimates to predict the value of the time series in a particular period.

• Recall that t takes on a value of 1 in Period 1. Therefore, the predicted or fitted value of yt in Period 1 is ŷ1 = b̂0 + b̂1(1).

• Similarly, in a subsequent period (say, the sixth period) the fitted value is ŷ6 = b̂0 + b̂1(6).

• Now suppose that we want to predict the value of the time series for a period outside the sample, say period T + 1. The predicted value of yt for period T + 1 is ŷT+1 = b̂0 + b̂1(T + 1).

• For example, if b̂0 is 5.1 and b̂1 is 2, then at t = 5 the predicted value of y5 is 15.1 and at t = 6 the predicted value of y6 is 17.1. Note that each consecutive observation in this time series increases by b̂1 = 2, irrespective of the level of the series in the previous period.
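• As a minimal sketch, the fitted-trend arithmetic above can be reproduced with ordinary least squares in Python; the series y below is hypothetical toy data chosen so the estimates land near b̂0 = 5.1 and b̂1 = 2.

```python
# Sketch: fit y_t = b0 + b1*t by OLS and forecast period T + 1.
# `y` is hypothetical toy data, not from the lecture's exhibits.
import numpy as np
import statsmodels.api as sm

y = np.array([7.1, 9.0, 11.2, 13.1, 15.1, 17.0])  # toy series
t = np.arange(1, len(y) + 1)                      # t = 1, 2, ..., T

fit = sm.OLS(y, sm.add_constant(t)).fit()
b0_hat, b1_hat = fit.params

T = len(y)
y_hat_next = b0_hat + b1_hat * (T + 1)  # out-of-sample forecast for T + 1
print(f"b0_hat={b0_hat:.2f}, b1_hat={b1_hat:.2f}, forecast={y_hat_next:.2f}")
```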
LOG-LINEAR TREND MODELS
• Sometimes a linear trend does not correctly model the growth
of a time series.

• In those cases, we often find that fitting a linear trend to a time series leads to persistent rather than uncorrelated errors.

• If the residuals from a linear trend model are persistent, then we need to employ an alternative model satisfying the conditions of linear regression.

• For financial time series, an important alternative to a linear trend is a log-linear trend. Log-linear trends work well in fitting time series that have exponential growth.
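• A sketch of fitting a log-linear trend, assuming we simply regress the natural log of the series on time; the exponentially growing series is simulated for illustration.

```python
# Sketch: fit ln(y_t) = b0 + b1*t by OLS for an exponentially growing
# series. The data below are simulated, purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(1, 41)
y = 100.0 * np.exp(0.05 * t) * np.exp(rng.normal(0.0, 0.02, t.size))

fit = sm.OLS(np.log(y), sm.add_constant(t)).fit()
b0_hat, b1_hat = fit.params  # b1_hat ~ 0.05, the growth rate per period

# Forecast in levels for period T + 1 (ignoring any variance correction)
y_hat_next = np.exp(b0_hat + b1_hat * (t[-1] + 1))
```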
TREND MODELS AND TESTING FOR CORRELATED ERRORS

• Both the linear trend model and the log-linear trend model are single-variable
regression models.

• If they are to be correctly specified, the regression model assumptions must be satisfied.

• In particular, the regression error for one period must be uncorrelated with the
regression error for all other periods.

• In Example 2 in the previous section, we could infer an obvious violation of that assumption from a visual inspection of a plot of residuals (Exhibit 9). The log-linear trend model of Example 3 appeared to fit the data much better, but we still need to confirm that the uncorrelated errors assumption is satisfied.

• To address that question formally, we must carry out a Durbin–Watson test on the residuals from the fitted trend model.
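• One possible implementation of that check, assuming the trend model has already been fit; the series is simulated, and the Durbin–Watson statistic comes from statsmodels.

```python
# Sketch: Durbin-Watson statistic on residuals from a fitted trend model.
# The trending series here is simulated for illustration.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
t = np.arange(1, 101)
y = 2.0 + 0.5 * t + rng.normal(0.0, 1.0, t.size)  # linear trend + noise

resid = sm.OLS(y, sm.add_constant(t)).fit().resid
dw = durbin_watson(resid)  # DW = sum((e_t - e_{t-1})^2) / sum(e_t^2)
# DW near 2 suggests uncorrelated errors; values well below 2 suggest
# positive serial correlation, i.e., the trend model is misspecified.
```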


AR TIME-SERIES MODELS AND COVARIANCE-STATIONARY SERIES
• A key feature of the log-linear model’s depiction of time series, and a key
feature of time series in general, is that current-period values are related to
previous-period values.

• For example, Starbucks’ sales for the current period are related to its sales in
the previous period.

• An autoregressive model (AR), a time series regressed on its own past values, represents this relationship effectively.

• When we use this model, we can drop the normal notation of y as the dependent variable and x as the independent variable because we no longer have that distinction to make.

• Here we simply use xt. For example, Equation 4 shows a first-order autoregression, AR(1), for the variable xt: xt = b0 + b1xt-1 + εt.
• Note that the independent variable (xt-1) in the AR equation is a random variable.

• This fact may seem like a mathematical subtlety, but it is not.

• If we use ordinary least squares to estimate this equation when we have a randomly distributed independent variable that is a lagged value of the dependent variable, our statistical inference may be invalid.

• To make a valid statistical inference, we must make a key assumption in time-series analysis: We must assume that the time series we are modeling is covariance stationary.
• What does it mean for a time series to be covariance
stationary?

• The basic idea is that a time series is covariance stationary if its properties, such as mean and variance, do not change over time.
• A covariance stationary series must satisfy three principal
requirements. First, the expected value of the time series must
be constant and finite in all periods.

• Second, the variance of the time series must be constant and finite in all periods.

• Third, the covariance of the time series with itself for a fixed
number of periods in the past or future must be constant and
finite in all periods.
DETECTING SERIALLY CORRELATED ERRORS IN AN AR MODEL
• We can estimate an autoregressive model using ordinary least squares if the
time series is covariance stationary and the errors are uncorrelated.

• Unfortunately, our previous test for serial correlation, the Durbin–Watson statistic, is invalid when the independent variables include past values of the dependent variable.

• Therefore, for most time-series models, we cannot use the Durbin–Watson statistic.

• Fortunately, we can use other tests to determine whether the errors in a time-
series model are serially correlated.

• One such test reveals whether the autocorrelations of the error term are significantly different from 0. This test is a t-test involving the residual autocorrelations.
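• A sketch of that t-test on a simulated series, assuming the usual approximation that each residual autocorrelation has a standard error of 1/√T.

```python
# Sketch: fit an AR(1) by OLS, then t-test the residual autocorrelations.
# Simulated data; SE of each autocorrelation is approximated by 1/sqrt(T).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = np.zeros(200)
for i in range(1, 200):  # simulate x_t = 1 + 0.6 x_{t-1} + e_t
    x[i] = 1.0 + 0.6 * x[i - 1] + rng.normal()

fit = sm.OLS(x[1:], sm.add_constant(x[:-1])).fit()
e = fit.resid - fit.resid.mean()
T, denom = len(e), np.sum(e ** 2)

for k in (1, 2, 3, 4):
    rho_k = np.sum(e[k:] * e[:-k]) / denom  # residual autocorrelation, lag k
    t_stat = rho_k * np.sqrt(T)             # t = rho_k / (1 / sqrt(T))
    print(f"lag {k}: rho = {rho_k:+.3f}, t = {t_stat:+.2f}")
# |t| > ~2 at any lag suggests serially correlated errors.
```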
MEAN REVERSION
• We say that a time series shows mean reversion if it tends to fall when its
level is above its mean and rise when its level is below its mean.

• Much like the temperature in a room controlled by a thermostat, a mean-reverting time series tends to return to its long-term mean.

• How can we determine the value that the time series tends toward?

• If a time series is currently at its mean-reverting level, then the model predicts
that the value of the time series will be the same in the next period.

• At its mean-reverting level, we have the relationship xt+1 = xt.

• For an AR(1) model (xt+1 = b0 + b1xt), the equality xt+1 = xt implies the level xt = b0 + b1xt, so the mean-reverting level is given by xt = b0/(1 - b1).
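• A trivial sketch of the computation, with hypothetical coefficient estimates:

```python
# Sketch: the mean-reverting level implied by AR(1) estimates.
# The coefficient values are hypothetical, purely for illustration.
b0_hat, b1_hat = 1.2, 0.7
mean_reverting_level = b0_hat / (1.0 - b1_hat)  # = 4.0
# If x_t is above 4.0 the model predicts a fall; below 4.0, a rise.
```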
COMPARING FORECAST MODEL PERFORMANCE
• One way to compare the forecast performance of two models
is to compare the variance of the forecast errors that the two
models make.

• The model with the smaller forecast error variance will be the more accurate model, and it will also have the smaller standard error of the time-series regression. (This standard error usually is reported directly in the output for the time-series regression.)

• In comparing forecast accuracy among models, we must distinguish between in-sample forecast errors and out-of-sample forecast errors.

• In-sample forecast errors are the residuals from a fitted time-series model.
• Often, we want to compare the forecasting accuracy of
different models after the sample period for which they were
estimated.

• We wish to compare the out-of-sample forecast accuracy of the models.

• Out-of-sample forecast accuracy is important because the future is always out of sample.

• Although professional forecasters distinguish between out-of-sample and in-sample forecasting performance, many articles that analysts read contain only in-sample forecast evaluations.

• Analysts should be aware that out-of-sample performance is critical for evaluating a forecasting model’s real-world usefulness.
• Typically, we compare the out-of-sample forecasting
performance of forecasting models by comparing their root
mean squared error (RMSE), which is the square root of the
average squared error.

• The model with the smallest RMSE is judged the most accurate. The following example illustrates the computation and use of RMSE in comparing forecasting models.
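• A minimal sketch of the RMSE comparison, using hypothetical actual values and forecasts from two models:

```python
# Sketch: comparing two models' out-of-sample forecasts by RMSE.
# `actual` and the forecast arrays are hypothetical numbers.
import numpy as np

def rmse(actual, forecast):
    err = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return np.sqrt(np.mean(err ** 2))

actual     = [2.1, 1.8, 2.4, 2.0]
forecast_a = [2.0, 1.9, 2.2, 2.1]
forecast_b = [2.4, 1.5, 2.7, 1.7]

# The model with the smaller RMSE is judged more accurate out of sample.
print(rmse(actual, forecast_a), rmse(actual, forecast_b))
```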
INSTABILITY OF REGRESSION COEFFICIENTS
• One of the important issues an analyst faces in modeling a time series is the
sample period to use.

• The estimates of regression coefficients of the time-series model can change substantially across different sample periods used for estimating the model.

• Often, the regression coefficient estimates of a time-series model estimated using an earlier sample period can be quite different from those of a model estimated using a later sample period.

• Similarly, the estimates can be different between models estimated using relatively shorter and longer sample periods.

• Further, the choice of model for a particular time series can also depend on the
sample period.

• For example, an AR(1) model may be appropriate for the sales of a company in one particular sample period, but an AR(2) model may be necessary for an earlier or later sample period.
• Unfortunately, there is usually no clear-cut basis in economic
or financial theory for determining whether to use data from a
longer or shorter sample period to estimate a time-series
model.

• We can get some guidance, however, if we remember that our models are valid only for covariance-stationary time series.

• For example, we should not combine data from a period when exchange rates were fixed with data from a period when exchange rates were floating.

• The exchange rates in these two periods would not likely have
the same variance because exchange rates are usually much
more volatile under a floating-rate regime than when rates are
fixed.
RANDOM WALKS

• A random walk is one of the most widely studied time-series models for financial data.

• A random walk is a time series in which the value of the series in one period is the value of the series in the previous period plus an unpredictable random error: xt = xt-1 + εt.
• Note two important points.

• First, this equation is a special case of an AR(1) model with b0 = 0 and b1 = 1.

• Second, the expected value of εt is zero.

• Therefore, the best forecast of xt that can be made in period t - 1 is xt-1. In fact, in this model, xt-1 is the best forecast of x in every period after t - 1.
• Random walks are quite common in financial time series.

• For example, many studies have tested whether currency exchange rates follow a random walk, and many have found that they do.

• Consistent with the second point made in the previous slide, some studies have found that sophisticated exchange rate forecasting models cannot outperform forecasts made using the random walk model and that the best forecast of the future exchange rate is the current exchange rate.
• Unfortunately, we cannot use the regression methods we have
discussed so far to estimate an AR(1) model on a time series
that is actually a random walk.

• To see why this is so, we must determine why a random walk has no finite mean-reverting level or finite variance.

• Recall that if xt is at its mean-reverting level, then xt = b0 + b1xt, or xt = b0/(1 - b1).

• In a random walk, however, b0 = 0 and b1 = 1, so b0/(1 - b1) = 0/0.

• Therefore, a random walk has an undefined mean-reverting level.
• A random walk is not a covariance-stationary time series.

• What is the practical implication of these issues?

• We cannot use standard regression analysis on a time series that is a random walk.

• We can, however, attempt to convert the data to a covariance-stationary time series if we suspect that the time series is a random walk.

• In statistical terms, we can difference it.


• We difference a time series by creating a new time series—say,
yt—that in each period is equal to the difference between xt
and xt-1.

• This transformation is called first-differencing because it subtracts the value of the time series in the first prior period from the current value of the time series.

• Sometimes the first difference of xt is written as Δxt = xt - xt-1.

• If xt is a random walk, the first-differenced variable, yt, is covariance stationary.

• We can model the first-differenced series using linear regression.

• Of course, modeling the first-differenced series with an AR(1) model does not help us predict the future, because b0 = 0 and b1 = 0.

• We simply conclude that the original time series is, in fact, a random walk.
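• A sketch illustrating both points on simulated data: first-differencing a random walk and fitting an AR(1) to the differences, whose estimated coefficients should both be near zero.

```python
# Sketch: first-difference a simulated random walk, then fit an AR(1)
# to the differenced series; both estimates should be close to zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(0.0, 1.0, 500))  # random walk: x_t = x_{t-1} + e_t
y = np.diff(x)                            # y_t = x_t - x_{t-1}

fit = sm.OLS(y[1:], sm.add_constant(y[:-1])).fit()
b0_hat, b1_hat = fit.params               # both ~ 0 for a true random walk
```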
• To this point, we have discussed only simple random walks—
that is, random walks without drift.

• In a random walk without drift, the best predictor of the time series in the next period is its current value.

• A random walk with drift, however, should increase or decrease by a constant amount in each period.

• The equation describing a random walk with drift is a special case of the AR(1) model with b0 ≠ 0 and b1 = 1: xt = b0 + xt-1 + εt.
THE UNIT ROOT TEST OF NONSTATIONARITY
• In this section, we examine how to use random walk concepts to determine whether a time series is covariance stationary.

• This approach focuses on the slope coefficient in the random-walk-with-drift case of an AR(1) model, in contrast with the traditional autocorrelation approach, which we discuss first.
• The examination of the autocorrelations of a time series at various lags is a
well-known prescription for inferring whether or not a time series is stationary.

• Typically, for a stationary time series, either autocorrelations at all lags are
statistically indistinguishable from zero or the autocorrelations drop off rapidly
to zero as the number of lags becomes large.

• Conversely, the autocorrelations of a nonstationary time series do not exhibit those characteristics.

• However, this approach is less definite than a currently more popular test for nonstationarity: the unit root test.

• If a time series comes from an AR(1) model, then to be covariance stationary, the absolute value of
the lag coefficient, b1, must be less than 1.0.

• We could not rely on the statistical results of an AR(1) model if the absolute value of the lag
coefficient were greater than or equal to 1.0 because the time series would not be covariance
stationary.

• If the lag coefficient is equal to 1.0, the time series has a unit root:

• It is a random walk and is not covariance stationary (note that when b1 is greater than 1 in
absolute value, we say that there is an “explosive root”).
• Dickey and Fuller (1979) developed a regression-based unit
root test based on a transformed version of the AR(1) model xt
= b0 + b1xt-1 + εt.

• Subtracting xt-1 from both sides of the AR(1) model produces xt - xt-1 = b0 + (b1 - 1)xt-1 + εt, or Δxt = b0 + g1xt-1 + εt, where g1 = b1 - 1. The null hypothesis of the test is H0: g1 = 0; that is, the series has a unit root.
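• A sketch of the test on a simulated random walk; statsmodels’ adfuller implements an augmented version of the Dickey–Fuller test, so treat this as illustrative rather than the exact 1979 procedure.

```python
# Sketch: unit root test via statsmodels' `adfuller` (an augmented
# version of the Dickey-Fuller test) on a simulated random walk.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(0.0, 1.0, 500))  # has a unit root by construction

adf_stat, p_value = adfuller(x)[:2]
# H0: g1 = 0 (unit root). A large p-value means we cannot reject H0,
# so we would first-difference the series before modeling it.
print(adf_stat, p_value)
```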


SEASONALITY IN TIME-SERIES MODELS

• One common complication is significant seasonality, a case in which the series shows regular patterns of movement within the year.

• At first glance, seasonality might appear to rule out using autoregressive time-series models.

• After all, autocorrelations will differ by season.

• This problem can often be solved, however, by using seasonal lags in an autoregressive model.
• A seasonal lag is usually the value of the time series one year before the current period, included as an extra term in an autoregressive model. Suppose, for example, that we model a particular quarterly time series using an AR(1) model, xt = b0 + b1xt-1 + εt.

• If the time series had significant seasonality, this model would not be correctly specified.

• The seasonality would be easy to detect because the seasonal autocorrelation (in the case of quarterly data, the fourth autocorrelation) of the error term would differ significantly from 0.

• Suppose this quarterly model has significant seasonality. In this case, we might include a seasonal lag in the autoregressive model and estimate xt = b0 + b1xt-1 + b2xt-4 + εt.
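• A sketch of estimating that seasonal-lag model by OLS on a stand-in quarterly series; with real retail sales data, b̂2 would be expected to be significant.

```python
# Sketch: AR(1) plus a seasonal (fourth) lag for quarterly data,
# x_t = b0 + b1*x_{t-1} + b2*x_{t-4} + e_t. Data are simulated
# stand-ins, so neither lag will actually be significant here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(10.0, 1.0, 120)  # stand-in quarterly series

y    = x[4:]                    # x_t,     t = 5, ..., T
lag1 = x[3:-1]                  # x_{t-1}
lag4 = x[:-4]                   # x_{t-4}, the seasonal lag

fit = sm.OLS(y, sm.add_constant(np.column_stack([lag1, lag4]))).fit()
b0_hat, b1_hat, b2_hat = fit.params
# With genuinely seasonal data, a significant b2_hat would confirm
# the seasonality captured by the fourth lag.
```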
ARCH MODELS
• Up to now, we have ignored any issues of heteroskedasticity in
time-series models and have assumed homoskedasticity.

• Heteroskedasticity is the dependence of the error term variance on the independent variable; homoskedasticity is the independence of the error term variance from the independent variable.

• We have assumed that the error term’s variance is constant and does not depend on the value of the time series itself or on the size of previous errors.

• At times, however, this assumption is violated and the variance of the error term is not constant. In such a situation, the standard errors of the regression coefficients will be incorrect, and hypothesis tests based on them will be invalid.
• In work responsible in part for his shared 2003 Nobel Prize in
Economics, Robert F. Engle in 1982 first suggested a way of testing
whether the variance of the error in a particular time-series model in
one period depends on the variance of the error in previous periods.

• He called this type of heteroskedasticity “autoregressive conditional heteroskedasticity” (ARCH).
• As an example, consider the ARCH(1) model, in which the variance of the error term in period t depends on the size of the squared error in the previous period.
• Engle showed that we can test whether a time series is
ARCH(1) by regressing the squared residuals from a previously
estimated time-series model (AR, MA, or ARMA) on a constant
and one lag of the squared residuals.

• We can estimate the linear regression equation ε̂t² = a0 + a1ε̂t-1² + ut, where ut is an error term. If the estimate of a1 is statistically significantly different from zero, we conclude that the time series is ARCH(1).
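• A sketch of Engle’s test, assuming resid holds the residuals of a previously estimated model; here they are simulated white noise, so â1 should be insignificant.

```python
# Sketch: Engle's ARCH(1) test, regressing squared residuals on their
# own first lag. `resid` here is simulated noise standing in for the
# residuals of a previously estimated time-series model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
resid = rng.normal(0.0, 1.0, 300)  # stand-in model residuals
e2 = resid ** 2

fit = sm.OLS(e2[1:], sm.add_constant(e2[:-1])).fit()
a0_hat, a1_hat = fit.params
# If a1_hat differs significantly from zero (see fit.tvalues), the errors
# exhibit ARCH(1): this period's error variance depends on last period's.
```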


REGRESSIONS WITH MORE THAN ONE TIME SERIES

• Up to now, we have discussed time-series models only for one time series.

• Although in the lectures on multiple regression we used linear regression to analyze the relationship among different time series, in those lectures we completely ignored unit roots.

• A time series that contains a unit root is not covariance stationary.

• If any time series in a linear regression contains a unit root, ordinary least squares estimates of the regression and its test statistics may be invalid, as summarized in the table below.
Outcomes of unit roots and validity of regression estimates

Case  Unit root in IV  Unit root in DV  Estimates valid?
C1    No               No               Yes
C2    Yes              No               No
C3    No               Yes              No
C4    Yes              Yes              No (cointegration absent)
C5    Yes              Yes              Yes (cointegration present)
Cointegration Testing (Engle and Granger)
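• A sketch of an Engle–Granger-style test using statsmodels’ coint, which regresses one series on the other and applies a unit root test to the residuals; the two cointegrated series are simulated.

```python
# Sketch: an Engle-Granger-style cointegration test with statsmodels'
# `coint`. Both series below are simulated and share a random-walk
# component, so they are cointegrated by construction.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(7)
common = np.cumsum(rng.normal(0.0, 1.0, 500))  # shared random walk
y1 = common + rng.normal(0.0, 0.5, 500)
y2 = 2.0 * common + rng.normal(0.0, 0.5, 500)

t_stat, p_value, _ = coint(y1, y2)
# H0: no cointegration. A small p-value suggests y1 and y2 are
# cointegrated (the C5 case in the table above), so regression
# estimates relating them can be valid.
print(t_stat, p_value)
```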
