Artificial Neural Networks in Time Series Forecasting: A Comparative Analysis

Héctor Allende, Claudio Moraga and Rodrigo Salas

This research was supported in part by Research Grant BMBF RCH99/023 (Germany), in part by a Research Fellowship of the German Academic Exchange Service (DAAD), and by Research Grant DGIP-UTFSM 240022.
Abstract
Artificial neural networks (ANN) have received a great deal of attention in many fields of
engineering and science. Inspired by the study of brain architecture, ANN represent a class of
nonlinear models capable of learning from data. ANN have been applied in many areas where
statistical methods are traditionally employed. They have been used in pattern recognition, classi-
fication, prediction and process control. The purpose of this paper is to discuss ANN and compare
them to nonlinear time series models. We begin by exploring recent developments in time series forecasting, with particular emphasis on the use of nonlinear models. Thereafter we include a review of recent results on ANN. The relevance of ANN models to statistical methods is examined using time series prediction problems. Finally we construct asymptotic prediction
intervals for ANN and show how to use prediction intervals to choose the number of nodes in the
ANN.
Keywords: Artificial neural networks; nonlinear time series models; prediction intervals; model specification; asymptotic properties.
1 Introduction
Artificial neural networks (ANN) have received a great deal of attention in recent years. They are being used in the areas of prediction and classification, areas where regression and other related statistical techniques have traditionally been used [CT94].
Forecasting in time series is a common problem. Using a statistical approach, Box and Jenkins [BJR94] developed the autoregressive integrated moving average (ARIMA) methodology for fitting a class of linear time series models. Statisticians have addressed the restriction to linearity in the Box-Jenkins approach in a number of ways. Robust versions of various ARIMA models have been developed. In addition, a large amount of literature on inherently nonlinear time series models is available. The stochastic approach to nonlinear time series outlined by [Ton90] not only fits non-linear models to time series data, but also provides measures of uncertainty in the estimated model parameters as well as in the forecasts generated by these models. It is the stochastic approach that enables this specification of uncertainty in parameter estimates and forecasts.
More recently, ANN have been studied as an alternative to these nonlinear model-driven ap-
proaches. Because of their characteristics, ANN belong to the data-driven approach, i.e. the analysis
depends on the available data, with little a priori rationalization about relationships between variables
and about the models. The process of constructing the relationships between the input and output
variables is addressed by certain general-purpose 'learning' algorithms [Fin99]. Some drawbacks to the practical use of ANN are the possibly long time consumed in the modeling process and the large amount of data required by present ANN technology. Speed-ups are being achieved thanks to the impressive progress in the clock rates of present processors. The demand on the number of observations, however, remains a hard open problem. One cause of both problems is the lack of a definite generic methodology that could be used to design a small structure. Most of the present methodologies use networks with a large number of parameters ("weights"). This means lengthy computations to set their values and a requirement for many observations. Unfortunately, in practice, a model's parameters must be estimated quickly and only a small amount of data is available. Moreover, part of the available data should be kept for the validation and performance-evaluation procedures.
This report reviews recent developments in one important class of nonlinear time series models, the ANN (model-free systems), and describes a methodology for the construction of prediction intervals which facilitates the assessment of forecasts.
In the next section we provide a very brief review of linear and nonlinear ARMA models and of optimal prediction. Section 3 contains an overview of ANN terminology and describes a methodology for neural model identification. The multilayer feedforward ANN described there can be viewed as a means of fitting highly nonlinear regression and time series prediction problems. In Section 4 we use the results of [HD97] to construct confidence intervals and prediction intervals in nonlinear time series.
The ARMA models require a stationary time series in order to be useful for forecasting [BO93]. The condition for a series to be weakly stationary is that, for all $t$,
$$E[x_t] = \mu; \qquad V[x_t] = \sigma^2; \qquad COV[x_t, x_{t+k}] = \gamma_k \qquad (2)$$
Diagnostic checking of the overall ARMA model is done on the residuals. Several tests have been proposed; among them the most popular seems to be the so-called portmanteau test proposed by [LB78] and its robust version by [AG96]. These tests are based on a suitably scaled sum of squared autocorrelations of the estimated residuals.
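As a computational illustration (not part of the original paper), the portmanteau statistic of [LB78] can be sketched in a few lines; the usual form $Q = n(n+2)\sum_{k=1}^{K} r_k^2/(n-k)$ is assumed, with $r_k$ the lag-$k$ autocorrelation of the estimated residuals.

```python
# Minimal sketch of the Ljung-Box portmanteau statistic (illustration only).
import numpy as np

def ljung_box_statistic(residuals, max_lag=20):
    """Q = n(n+2) * sum_k r_k^2 / (n-k) on a 1-D array of model residuals."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = e.size
    denom = np.sum(e ** 2)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = np.sum(e[k:] * e[:-k]) / denom   # lag-k autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q                      # compare with a chi-squared quantile

# Example: white-noise residuals should give a small Q
rng = np.random.default_rng(0)
print(ljung_box_statistic(rng.standard_normal(200), max_lag=10))
```

Under the null hypothesis of adequate fit, $Q$ is compared with a chi-squared quantile whose degrees of freedom are reduced by the number of fitted ARMA parameters.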
Many types of non-linear models have been proposed in the literature; see for example bilinear models [Rao81], classification and regression trees [BFOS84], threshold autoregressive models [Ton90] and projection pursuit regression [Fri91]. The rewards from using non-linear models can occasionally be substantial. On the debit side, however, it is generally more difficult to compute forecasts more than one step ahead [LG94].
Another important class of non-linear models is that of non-linear ARMA models proposed by [CM94]. Natural generalizations of the linear ARMA models to the non-linear case are the non-linear ARMA (NARMA) models
$$x_t = h\left( x_{t-1}, x_{t-2}, \ldots, x_{t-p}, \varepsilon_{t-1}, \ldots, \varepsilon_{t-q} \right) + \varepsilon_t \qquad (3)$$
where the $\varepsilon_{t-j}$ are specified in terms of present and past $x_u$'s. The predictor (7) has mean-square error $\sigma^2$.
Since we have only a finite observation record, we cannot compute (6) and (7). It seems reasonable to approximate the conditional mean predictor (7) by the recursive algorithm
$$\hat{x}_t = h\left( x_{t-1}, x_{t-2}, \ldots, x_{t-p}, \hat{\varepsilon}_{t-1}, \ldots, \hat{\varepsilon}_{t-q} \right) \qquad (8)$$
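The following sketch (our own illustration, not the authors' code) shows how the recursion (8) can be run over a finite record: the residual estimates $\hat{\varepsilon}_t$ are produced as the predictor is swept through the data. The function `h` below is a hypothetical stand-in for a fitted NARMA function.

```python
# Recursive one-step NARMA prediction: residual estimates are built up as we go.
import numpy as np

def recursive_one_step(x, h, p, q):
    """Return one-step predictions x_hat[t] and residual estimates eps_hat[t]."""
    n = len(x)
    x_hat = np.zeros(n)
    eps_hat = np.zeros(n)
    for t in range(max(p, q), n):
        lags = x[t - p:t][::-1]              # x_{t-1}, ..., x_{t-p}
        past_eps = eps_hat[t - q:t][::-1]    # eps_hat_{t-1}, ..., eps_hat_{t-q}
        x_hat[t] = h(lags, past_eps)
        eps_hat[t] = x[t] - x_hat[t]         # update the residual estimate
    return x_hat, eps_hat

# Toy usage with an assumed (hypothetical) fitted h
h = lambda lags, eps: 0.5 * lags[0] + 0.2 * eps[0]
x = np.random.default_rng(1).standard_normal(100).cumsum() * 0.1
x_hat, eps_hat = recursive_one_step(x, h, p=2, q=1)
```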
Artificial Neural Networks (ANN) provide statistics with tractable multivariate non-linear methods to be studied further. On the other hand, statistical science provides one of the crucial tools for constructing the theoretical foundations of neuro-computing. From a statistical perspective ANN are interesting because of their use in various kinds of problems, for example prediction and classification. ANN have been used for a wide variety of applications where statistical methods are traditionally employed. They have been used in classification problems such as identifying underwater sonar contacts and predicting heart problems in patients [Bax90]. In time series applications they have been used in predicting stock market performance [Hut94]. ANN are currently the preferred tool for predicting protein secondary structure [FR90].
Statisticians would normally solve these problems with classical statistical models such as discriminant analysis, logistic regression, multiple regression, time series models such as ARIMA, and other forecasting methods.
It is therefore time to recognize ANN as a potential tool for data analysis. Several authors have
done comparison studies between statistical methods and ANN (see e.g. [WY92] and [Ste96]).
These works tend to focus on performance comparisons and use specific problems as examples. The
ANN trained by error backpropagation are examples of nonparametric regression estimators. In this report we present the relations between nonparametric inference and ANN, and we use the statistical viewpoint to highlight the strengths and weaknesses of neural models. There are a number of good introductory articles on ANN, usually found in various trade journals. For instance, [Lip87] provides
an excellent overview of ANN for the signal processing community. There have also been papers
relating ANN and statistical methods [Rip93] and [Sar94]. One of the best for a general overview
for statisticians is [CT94].
Figure 1: A multilayer feedforward ANN for approximating the unknown function $\varphi(x)$.
An ANN consists of elementary processing elements (neurons), organized in layers (see Figure 1). The layers between the input and the output layers are called "hidden". The number of input units is determined by the application. The architecture or topology $A_\lambda$ of a network refers to the topological arrangement of the network connections. A class of neural models is specified by
$$S_\lambda = \left\{\, g_\lambda(x, w) : x \in \mathbb{R}^m,\ w \in W \subseteq \mathbb{R}^p \,\right\} \qquad (13)$$
where $g_\lambda(x, w)$ is a non-linear function of $x$ with $w$ being its parameter vector, $\lambda$ is the number of hidden neurons, and $p$ is the number of free parameters determined by $A_\lambda$, i.e., $p = \rho(A_\lambda)$.
A class (or family) of neural models is a set of ANN models which share the same architecture and whose individual members are continuously parameterized by the vector $w = (w_1, w_2, \ldots, w_p)^T$. The elements of this vector are usually referred to as weights. For a single-hidden-layer architecture, the number of hidden units $\lambda$ indexes the different classes of ANN models $S_\lambda$, since it is an unambiguous descriptor of the dimensionality $p$ of the parameter vector, $p = (m+2)\lambda + 1$.
Given the sample of observations, the task of neural learning is to construct an estimator $g(x, w)$ of the unknown function $\varphi(x)$,
$$g_\lambda(x, w) = \gamma_2\!\left( \sum_{j=1}^{\lambda} w^{(2)}_j\, \gamma_1\!\left( \sum_{i=1}^{m} w^{(1)}_{ij} x_i + w^{(1)}_{m+1,j} \right) + w^{(2)}_{\lambda+1} \right) \qquad (14)$$
where $w = (w_1, w_2, \ldots, w_p)^T$ is a parameter vector to be estimated, the $\gamma$'s are linear or non-linear base functions, and $\lambda$ is a control parameter (the number of hidden units). An important factor in the specification of neural models is the choice of the base function $\gamma$. Otherwise known as 'activation' or 'squashing' functions, these can be any non-linear functions as long as they are continuous, bounded and differentiable. Typically $\gamma_1$ is a sigmoid or the hyperbolic tangent, and $\gamma_2$ is a linear function.
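A minimal numpy sketch of the forward pass (14) may help fix the notation; it is an illustration only, with $\gamma_1$ the logistic sigmoid and $\gamma_2$ the identity, and all variable names are ours.

```python
# Forward pass of the single-hidden-layer model g_lambda(x, w) in (14).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def g_lambda(x, W1, b1, w2, b2):
    """x  : input vector of length m
    W1 : (m, lam) input-to-hidden weights  w^(1)_{ij}
    b1 : (lam,)   hidden biases            w^(1)_{m+1,j}
    w2 : (lam,)   hidden-to-output weights w^(2)_j
    b2 : scalar   output bias              w^(2)_{lam+1}
    """
    hidden = sigmoid(x @ W1 + b1)     # gamma_1 applied unit-wise
    return hidden @ w2 + b2           # gamma_2 is linear (identity)

# Example with m = 3 inputs and lambda = 4 hidden units: p = (m+2)*lam + 1 = 21
rng = np.random.default_rng(0)
m, lam = 3, 4
out = g_lambda(rng.standard_normal(m),
               rng.standard_normal((m, lam)), rng.standard_normal(lam),
               rng.standard_normal(lam), rng.standard_normal())
```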
The estimated parameter $\hat{w}$ is obtained by iteratively minimizing a cost functional $L_n(w)$, i.e.
$$\hat{w} = \arg\min \left\{ L_n(w) : w \in W \right\}, \quad W \subseteq \mathbb{R}^p \qquad (15)$$
where $L_n(w)$ is, for example, the ordinary least squares function, i.e.
$$L_n(w) = \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - g_\lambda(x_i, w) \right)^2 \qquad (16)$$
The loss function in equation (16) gives us a measure of the accuracy with which an estimator $A_\lambda$ fits the observed data, but it does not account for the estimator's (model) complexity. Given a sufficiently large number of free parameters, $p = \rho(A_\lambda)$, a neural estimator $A_\lambda$ can fit the data with arbitrary accuracy. Thus, from the perspective of selecting between candidate models, expression (16) is an inadequate measure. The usual approach to the selection is the so-called discrimination approach, where the models are evaluated using a fitness criterion which penalizes the in-sample performance of the model as the complexity of the functional form increases and the degrees of freedom for error become fewer. Such criteria, commonly used in the context of regression analysis, are the R-squared adjusted for degrees of freedom, Mallows' $C_p$ criterion, Akaike's AIC criterion, etc.
The basic requirement of any method is convergence of the training algorithm to a locally unique minimum. By introducing the requirement that, for any particular architecture $A_\lambda$, the network has to be trained to convergence, we perform a restricted search in the function space. This is because from each class $S_\lambda$ we select only one member, with its parameters estimated from equation (15). In this setting the actual training (estimation) algorithm used is of no consequence (provided that it satisfies the convergence requirement). The first step is to estimate the parameters $w$ of the model by iteratively minimizing the empirical loss $L_n(w)$ (see (16)). This stage must not be confused with model selection, which in this framework employs a different fitness criterion for selecting between fitted models. The second step is to compute the error Hessian $\hat{A}_n$ (see Appendix A). This is used to facilitate tests of convergence. The third step is to perform a test for convergence and uniqueness, basically by examining whether $\hat{A}_n$ has negative eigenvalues. The fourth step is to estimate the prediction risk $P_\lambda = E[L(\hat{w}_n)]$, which adjusts the empirical loss for complexity. The fifth step is to select a model by employing the minimum prediction risk principle, which expresses the trade-off between the generalization ability of the network and its complexity. It has to be noted, however, that since the search is restricted, the selected network is the best among the alternatives considered, and it does not necessarily represent a global optimum. The final step involves testing the adequacy of the selected model. Satisfying those tests is a necessary but not sufficient condition for model adequacy. Failure to satisfy those tests indicates that either more hidden units are needed or some relevant variables were omitted.
The task of "neural learning" is to construct an estimator $g(x, w; D_n) \equiv \hat{f}(x)$ of $f(x)$, where $w = (w_1, \ldots, w_p)^T$ is a set of free parameters (known as "connection weights" in sub-section 3.1), and $D_n$ is a finite set of observations. Since no a priori assumptions are made regarding the functional form of $f(x)$, the neural model $g(x, w)$ is a non-parametric estimator of the conditional expectation $E[y \mid x]$, as opposed to a parametric estimator where the functional form is assumed a priori, as, for example, in a linear model.
or possibly involving a mixture of chaos and randomness, $x_t = h(x_{t-1}, x_{t-2}, \ldots, x_{t-p}) + \varepsilon_t$, in which $h$ is an unknown smooth function and $\varepsilon_t$ denotes noise. As in Section 2.1, we assume that $E[\varepsilon_t \mid x_{t-1}, x_{t-2}, \ldots] = 0$ and that $\varepsilon_t$ has finite variance $\sigma^2$. Under these conditions the MSE-optimal predictor of $x_t$, given $x_{t-1}, x_{t-2}, \ldots, x_{t-p}$, is as shown in equation (12).
Feedforward ANN were proposed as a NAR model for time series prediction by [CM94]. A feedforward ANN provides a nonlinear approximation $\hat{h}$ to $h$, given by
$$\hat{x}_t = \hat{h}(x_{t-1}, x_{t-2}, \ldots, x_{t-p}) = \sum_{j=1}^{\lambda} w^{(2)}_j\, \gamma_1\!\left( \sum_{i=1}^{p} w^{(1)}_{ij} x_{t-i} + w^{(1)}_{p+1,j} \right) \qquad (19)$$
where the function $\gamma_1$ is a smooth, bounded, monotonic function. Equation (19) is similar to equation (14), with $\gamma_1$ a sigmoid, $\gamma_2$ the identity function, and no bias at the output node.
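For the NAR predictor (19) the inputs are simply the $p$ most recent observations; the short sketch below (again our own illustration, reusing the sigmoid defined earlier) makes this explicit.

```python
# One-step NAR forecast of (19): lagged values in, no bias at the output node.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nar_one_step(x_past, W1, b1, w2):
    """x_past = [x_{t-1}, ..., x_{t-p}]."""
    return sigmoid(np.asarray(x_past) @ W1 + b1) @ w2

# Forecast x_t from the p = 3 most recent values of a series x
rng = np.random.default_rng(2)
p, lam = 3, 5
x = rng.standard_normal(50)
x_hat_t = nar_one_step(x[-1:-p-1:-1], rng.standard_normal((p, lam)),
                       rng.standard_normal(lam), rng.standard_normal(lam))
```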
The parameters $w^{(2)}_j$ and $w^{(1)}_{ij}$ are estimated from a training set, yielding an estimate $\hat{h}$ of $h$. Estimates are obtained by minimizing the sum of squared residuals, similarly to (15). This is done, for example, by the gradient descent procedure known as "Backpropagation", by Super Self-Adapting Backpropagation, or by second-order learning methods (see [Fin99]).
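A bare-bones version of such a gradient-descent ("backpropagation") loop for the network of (14), written against the squared-error loss (16), might look as follows; it is a sketch under our own choices of learning rate and initialization, not the training code used by the authors.

```python
# Plain batch gradient descent for a sigmoid-hidden / linear-output network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, y, lam, lr=0.05, epochs=2000, seed=0):
    """X: (n, m) inputs, y: (n,) targets, lam: number of hidden units."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W1 = rng.normal(scale=0.5, size=(m, lam)); b1 = np.zeros(lam)
    w2 = rng.normal(scale=0.5, size=lam);      b2 = 0.0
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                   # (n, lam) hidden activations
        y_hat = H @ w2 + b2                        # linear output
        e = (y_hat - y) / n                        # dL/dy_hat for the 1/(2n) loss
        grad_w2 = H.T @ e                          # backpropagate through the linear layer
        grad_b2 = e.sum()
        delta1 = np.outer(e, w2) * H * (1.0 - H)   # through the sigmoid layer
        grad_W1 = X.T @ delta1
        grad_b1 = delta1.sum(axis=0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
    return W1, b1, w2, b2

# Toy usage: learn a lagged mapping on a short synthetic series (p = 3 lags)
rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
X = np.column_stack([x[2:-1], x[1:-2], x[:-3]]); y = x[3:]
params = train_backprop(X, y, lam=4)
```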
where the random component $\varepsilon$ has a normal distribution with mean zero and variance $\sigma^2$. The function $g(x, w)$ is a non-linear function such as in (13). The network is trained on the dataset $D_n = \{(x_i, y_i)\}_{i=1}^{n}$; that is, these data are used to predict the future output at a new input $x_{n+1}$ by $\hat{y}_{n+1} = g(x_{n+1}, \hat{w})$. We assume that for every $1 \le i \le n+1$, (14) and (20) are satisfied, that is, $y_i = g(x_i, w) + \varepsilon_i$, where $y_{n+1}$ is the unobservable random variable that is the target of prediction. Further, we assume that the $x_i$'s are independent of the $\varepsilon_i$'s and that the $(x_i, \varepsilon_i)$, $1 \le i \le n+1$, are independent and identically distributed (i.i.d.). Our aim in this section is to construct prediction intervals for $y_{n+1}$ and confidence intervals for $g(x_{n+1}, w)$, the conditional expectation of $y_{n+1}$ given $x_{n+1}$.
To discuss the identifiability (or rather the unidentifiability) of the parameters, we first discuss two concepts (as in [Sus92]). We say that an ANN (with a fixed set of parameters) is "redundant" if there exists another ANN with fewer neurons that represents exactly the same relationship function $g(\cdot, w)$. A formal definition is the reducibility of $w$, which can be found in [Sus92].
Definition 4.1: For $\gamma_1$ chosen as a symmetric sigmoidal function and $\gamma_2$ a linear function, $w$ is called "reducible" if one of the following three cases holds: (a) $w^{(2)}_j = 0$ for some $j \in \{1, \ldots, \lambda\}$; (b) $(w^{(1)}_{1j}, \ldots, w^{(1)}_{mj}) = \mathbf{0}$ for some $j \in \{1, \ldots, \lambda\}$; or (c) $(w^{(1)}_{1j}, \ldots, w^{(1)}_{m+1,j}) = \pm (w^{(1)}_{1l}, \ldots, w^{(1)}_{m+1,l})$ for some $j \neq l$, where $\mathbf{0}$ denotes the zero vector of the appropriate size.
If $w$ is reducible and $\gamma_1$ is a sigmoidal function, then the corresponding ANN relative to (20) is redundant [Sus92]. On the other hand, an irreducible $w$ may not always lead to a nonredundant ANN, although there is a sufficient condition on $\gamma_1$: [Sus92] proved that if the class of functions $\{\gamma_1(bx + b_0) : b \neq 0\} \cup \{\gamma_1 \equiv 1\}$ is linearly independent, then the irreducibility of $w$ implies that the corresponding ANN is nonredundant.
Note that, in general, every ANN is unidentifiable. However, [HD97] showed that ANN with certain activation functions leave the distribution of $y$ invariant up to a certain family $\tau$ of transformations of $w$. That is, if there exists another $\tilde{w}$ such that $g(\cdot, \tilde{w}) = g(\cdot, w)$, then there is a transformation generated by $\tau$ that transforms $\tilde{w}$ into $w$. Further, under the assumption that $\gamma_1$ is continuously differentiable, the matrix
$$\Sigma = E\left[ \nabla_w g(x, w)\, \nabla_w g(x, w)^T \right] \qquad (21)$$
is non-singular.
In this section we construct confidence intervals and prediction intervals based on an ANN and show these to be asymptotically valid using the results of [HD97].
where $\sigma^2$ is a scale parameter that can be estimated consistently by an estimator $\hat{\sigma}^2$ and $V(w_0^{(1)})$ is a square matrix (see [Fin99]). Let $\hat{w}^0$ be an arbitrary estimator that takes values from the set $\{\hat{w}^{(i)}\}$, where $\hat{w}^{(i)} = T_i(\hat{w}^{(1)})$. This is to say that for every $n$ and every dataset, there exists an $i$ such that $\hat{w}^0 = \hat{w}^{(i)}$. A real-valued function $l(w)$ is said to be invariant with respect to all the transformations $T_i$ if
$$l(w) = l(T_i w) \quad \text{for every } i \qquad (26)$$
One can show that the asymptotic variance of an invariant statistic is also invariant, as stated in the following result (see [HD97]). Assume that $l(w)$ is differentiable and invariant. Then, as $n \to \infty$, $\sqrt{n}\,\left( l(\hat{w}^0) - l(w_0) \right)$ converges to a normal distribution with mean zero and variance $\nu^2(w_0)$, which is invariant with respect to the transformations $T_i$. The variance can be estimated by
$$\hat{\sigma}^2\, \nabla l(\hat{w}^0)^T\, V(\hat{w}^0)\, \nabla l(\hat{w}^0) \qquad (27)$$
Therefore, if $\nabla l(w)$ and $V(w)$ are both continuous in $w$, then (27) is a consistent estimator of the asymptotic variance of $\sqrt{n}\,\left( l(\hat{w}^0) - l(w_0) \right)$.
Returning to the neural network problem, we now apply the last results to model (20) and we
assume that the true value w0 of w is irreducible. We may make this assumption without loss of
generality, because otherwise the neural network is redundant and we may drop one neuron without
changing the input-output relationship at all. We may continue this process until a nonredundant network is obtained. This corresponds to an irreducible $w_0$.
Assume that W , the parameter space, is a compact set. Furthermore, make the following assump-
tions:
i) The $\varepsilon_i$ are i.i.d. with mean zero and variance $\sigma^2$, and the $\varepsilon_i$'s are statistically independent of the $x_i$'s.
ii) The $x_i$ are i.i.d. samples from an unknown distribution $F(x)$ whose support is $\mathbb{R}^m$.
iii) $\gamma_1$ in (14) is a symmetric sigmoidal function with a continuous second-order derivative. Let $\dot{\gamma}_1$ denote its derivative. Furthermore, the class of functions $\{\dot{\gamma}_1(bx + b_0) : b \neq 0\} \cup \{\gamma_1(bx + b_0) : b \neq 0\} \cup \{x\,\dot{\gamma}_1(bx + b_0) : b \neq 0\} \cup \{\gamma_1 \equiv 1\}$ is linearly independent.
iv) $\gamma_2$ in (14) is a linear function.
v) $w_0$ is an interior point of $W$.
Let $\hat{w}$ be a global minimizer of $\sum_{i=1}^{n} \left( y_i - g(x_i, w) \right)^2$, which exists by the compactness of $W$ and the continuity of $g$. Then
$$g(x_{n+1}, \hat{w}) \pm t_{1-\alpha/2;\, n-(m+2)\lambda-1}\; \hat{\sigma}\, \sqrt{S(\hat{w})} \qquad (28)$$
is a confidence interval for $g(x_{n+1}, w)$ with asymptotic coverage probability $1-\alpha$. Here $t_{1-\alpha/2;\, n-(m+2)\lambda-1}$ denotes the $1-\alpha/2$ quantile of a Student-$t$ distribution with $n-(m+2)\lambda-1$ degrees of freedom,
$$\hat{\sigma}^2 = \frac{1}{n-(m+2)\lambda-1} \sum_{i=1}^{n} \left( y_i - g(x_i, \hat{w}) \right)^2 \qquad (29)$$
and
$$S(\hat{w}) = \frac{1}{n} \left[ \nabla_w g(x_{n+1}, w)\big|_{w=\hat{w}} \right]^T \hat{\Sigma}(\hat{w})^{-1} \left[ \nabla_w g(x_{n+1}, w)\big|_{w=\hat{w}} \right] \qquad (30)$$
where
$$\hat{\Sigma}(\hat{w}) = \frac{1}{n} \sum_{i=1}^{n} \left[ \nabla_w g(x_i, w)\, \nabla_w g(x_i, w)^T \right]\Big|_{w=\hat{w}} \qquad (31)$$
Then
$$g(x_{n+1}, \hat{w}) \pm t_{1-\alpha/2;\, n-(m+2)\lambda-1}\; \hat{\sigma}\, \sqrt{1 + S(\hat{w})} \qquad (32)$$
is an asymptotic prediction interval for $y_{n+1}$; that is,
$$\Pr\left\{\, y_{n+1} \in I_w \,\right\} \longrightarrow 1 - \alpha \qquad (33)$$
where $I_w$ denotes the interval (32).
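A sketch of how (28)-(32) can be evaluated in practice is given below. It assumes the prediction interval has the usual $g(x_{n+1},\hat w) \pm t\,\hat\sigma\sqrt{1+S(\hat w)}$ form of [HD97], obtains the required gradients of the network by finite differences, and uses scipy only for the $t$ quantile; all function and variable names are ours.

```python
# Asymptotic prediction interval for a fitted network g(x, w) with flat parameters w.
import numpy as np
from scipy.stats import t as student_t

def numerical_grad(f, w, eps=1e-6):
    """Finite-difference gradient of a scalar function f at parameter vector w."""
    g = np.zeros_like(w)
    for k in range(w.size):
        d = np.zeros_like(w); d[k] = eps
        g[k] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

def prediction_interval(g, w_hat, X, y, x_new, m, lam, alpha=0.05):
    n, p = len(y), (m + 2) * lam + 1
    resid = y - np.array([g(xi, w_hat) for xi in X])
    sigma2_hat = np.sum(resid ** 2) / (n - p)                        # (29)
    grads = np.array([numerical_grad(lambda w: g(xi, w), w_hat) for xi in X])
    Sigma_hat = grads.T @ grads / n                                  # (31)
    d_new = numerical_grad(lambda w: g(x_new, w), w_hat)
    S_hat = d_new @ np.linalg.solve(Sigma_hat, d_new) / n            # (30)
    half = (student_t.ppf(1 - alpha / 2, n - p)
            * np.sqrt(sigma2_hat) * np.sqrt(1 + S_hat))              # (32)
    center = g(x_new, w_hat)
    return center - half, center + half
```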
A practical problem that occurs in many applications of ANN’s is how to choose the network
structure. When restricted to feedforward networks with only one hidden layer, this problem be-
comes how to choose the number of hidden neurons. One possible approach, which can be called
the ”prediction interval approach”, is to choose the number of nodes so that the prediction interval
has coverage probability close to the nominal level (e. g. 95% or 90%) and has the shortest expected
length. Because both quantities are unknown, they should be estimated. The delete-one jackknife for the coverage probability could be used. Specifically, this involves deleting a pair $(x_i, y_i)$ and using the rest of the data, together with $x_i$, to construct a prediction interval for $y_i$. By letting $i$ vary, we have
n intervals. The coverage probability can then be estimated by counting the proportion of times the
intervals cover yi . One could also calculate the average length of the n intervals and use it to estimate
the expected length. Another possible approach, which can be called the ”prediction error approach”,
is to choose the number of nodes to minimize the jackknife estimate of prediction error.
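The delete-one jackknife described above can be organized as a small generic routine; the sketch below is our own, and `fit_interval` is a hypothetical callable that refits the model without the deleted pair and returns a prediction interval for the left-out point.

```python
# Delete-one jackknife estimate of coverage probability and average interval length.
import numpy as np

def jackknife_coverage(X, y, fit_interval):
    """fit_interval(X_train, y_train, x_left_out) -> (lower, upper)."""
    n = len(y)
    hits, lengths = 0, []
    for i in range(n):
        keep = np.arange(n) != i                        # delete the i-th pair (x_i, y_i)
        lo, up = fit_interval(X[keep], y[keep], X[i])   # interval built without (x_i, y_i)
        hits += (lo <= y[i] <= up)
        lengths.append(up - lo)
    return hits / n, float(np.mean(lengths))

# Toy demo with a naive "mean +/- 2 sd" interval standing in for the ANN interval
rng = np.random.default_rng(4)
X_demo, y_demo = rng.standard_normal((40, 2)), rng.standard_normal(40)
naive = lambda Xtr, ytr, xnew: (ytr.mean() - 2 * ytr.std(), ytr.mean() + 2 * ytr.std())
print(jackknife_coverage(X_demo, y_demo, naive))
```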
5 Application to Data
In this section the ANN will be applied in two examples: the first is the well-known 'airline' data, and then we deal with the 'RESEX' data. Both time series are monthly observations; they have been analyzed by many scientists and serve as a baseline for comparing different models.
The results reported in this paper were computed by separating each dataset into two subsets: the first $n$ monthly observations, corresponding to times $1$ to $T$ and called the samples or training set, were used to fit the model, and the last 12 observations, corresponding to times $T+1$ to $T+12$ and called the test set, were used to evaluate the forecasts. The data used to fit the model are also used for the training of the neural network; these data were re-scaled to the interval $[-1, 1]$. The NN used to model the data and then to forecast is a feedforward network with one hidden layer and a bias in the hidden and output layers. The number of neurons $m$ in the input layer is the same as the number of lags needed; these neurons do not perform any processing, they just distribute the input values to the hidden layer, serving as a sensing layer. In the hidden layer, different numbers of neurons are used in order to choose the best architecture; the activation function used is the sigmoidal function $z(w) = 1/(1 + e^{-w})$. One neuron is used in the output layer, corresponding to the forecast, and it uses a linear activation function in order to obtain values on the real line. The forecasts were obtained using the available data (samples) and then producing one-step forecasts, bringing in the recently observed data one value at a time. The model parameters were not re-estimated at each step when computing the forecasts.
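A brief sketch of this evaluation scheme (our own illustration; `predict` stands in for any fitted one-step forecaster) is:

```python
# Rescale to [-1, 1], then produce rolling one-step forecasts over the 12 test points.
import numpy as np

def to_unit_interval(x):
    """Linear map of a series onto [-1, 1]; lo and hi allow the inverse map."""
    lo, hi = x.min(), x.max()
    return 2 * (x - lo) / (hi - lo) - 1, lo, hi

def rolling_one_step(x_scaled, predict, p, T):
    """predict(lags) -> one-step forecast; lags = [x_{t-1}, ..., x_{t-p}].
    Training data occupy x_scaled[:T]; each observed test value is fed in as it
    becomes available, without re-estimating the model parameters."""
    forecasts = []
    for t in range(T, T + 12):
        lags = x_scaled[t - p:t][::-1]   # most recent p observed values
        forecasts.append(predict(lags))
    return np.array(forecasts)
```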
The following quantities are computed for each candidate model:
- $S = \sum_i e_i^2$, the sum of squared residuals up to time $T$, where $e_i = y_i - g(x_i, w)$ are the residuals, $g(x_i, w)$ is the output of the NN and $y_i$ is the target (training set).
- The estimate of the residual standard deviation: $\hat{\sigma} = \sqrt{S / (n_{effective} - p)}$, where $p = (m+2)\lambda + 1$ is the number of parameters.
- The Akaike information criterion (AIC): $AIC = n_{effective} \ln(S / n_{effective}) + 2p$.
- The Bayesian information criterion (BIC): $BIC = n_{effective} \ln(S / n_{effective}) + p + p \ln(n_{effective})$ (a computational sketch of these criteria follows the list).
- $S_{pre}$, the sum of squares of the one-step-ahead forecast errors on the test set.
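Assuming the AIC/BIC forms reconstructed above, the criteria can be computed as follows (an illustrative sketch, not the authors' code):

```python
# Model selection quantities from in-sample residuals.
import numpy as np

def selection_criteria(resid, n_eff, m, lam):
    p = (m + 2) * lam + 1                      # number of network parameters
    S = np.sum(np.asarray(resid) ** 2)         # sum of squared residuals
    sigma_hat = np.sqrt(S / (n_eff - p))       # residual standard deviation
    aic = n_eff * np.log(S / n_eff) + 2 * p
    bic = n_eff * np.log(S / n_eff) + p + p * np.log(n_eff)
    return {"S": S, "sigma_hat": sigma_hat, "AIC": aic, "BIC": bic}
```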
To choose the architecture of the model that best fits the data, one can use the residual sum of squares, $S$, but the larger the model is made (more neurons), the smaller $S$ and the residual standard deviation become, and the more complicated the model gets. Instead, BIC and AIC are used as minimization criteria for choosing a 'best' model from candidate models having different numbers of parameters. In both criteria, the first term measures the fit and the rest is a penalty term to prevent overfitting, where BIC penalizes the extra parameters more severely than AIC does. Overfitting of the model is not wanted, because it produces very poor forecasts, giving another reason to choose AIC and BIC over $S$ to select the best model. The lower the value obtained by these criteria, the better the model.
To identify the different classes of neural models, as expressed by equation (13), the notation NN$(j_1, \ldots, j_k; \lambda)$ was used, which denotes a neural network with inputs at lags $j_1, \ldots, j_k$ and with $\lambda$ neurons in the hidden layer.
The airline data were modeled by a special type of seasonal autoregressive integrated moving average (ARIMA) model, of order $(0,1,1)\times(0,1,1)_{12}$ as described in Section 2.1, which has the form $(1-B^{12})(1-B)x_t = (1-\Theta_0 B^{12})(1-\theta B)a_t$. After some operations the following equation is obtained: $x_t = x_{t-1} + x_{t-12} - x_{t-13} + a_t - \theta a_{t-1} - \Theta_0 a_{t-12} + \theta\Theta_0 a_{t-13}$, taking care to use the appropriate transformation to make the seasonality additive; in this case the natural logarithm of the data is taken.
We will use an ANN to fit and forecast the airline data; because of the non-linearity of the NN models, this allows us to deal with the multiplicative seasonality.
Table 1: Results obtained for the NN model chosen for the airline data.
After selecting the model, it is used to forecast the rest of the data (test data) using one-step-ahead forecasts. The results are shown in Table 2 and represented in Figure 2. By using equations (29) to (32), the asymptotic prediction interval is calculated for each one-step forecast. The prediction intervals computed for $\alpha = 0.05$ and $\alpha = 0.10$ are shown in Figure 3, and the values obtained are shown in Table 2.
Figure 2: (left) Airline data with its NN model and prediction. (right) RESEX data with its NN model and prediction.
Figure 3: Asymptotic prediction interval for the airline data with $\alpha = 0.05$ (left) and $\alpha = 0.10$ (right).
Figure 4: Asymptotic prediction interval with $\alpha = 0.05$ (left) and $\alpha = 0.10$ (right) for the RESEX data.
month (November) in which residence extensions could be requested free of charge. Most of the
orders were filled during December, with the remainder being filled in January.
Brubacher (1974) identified the stationary series as an ARIMA $(2,0,0)\times(0,1,0)_{12}$ model, i.e., the RESEX data are represented by an AR(2) model after differencing. As described in Section 2.1, it has the form $(1 - \phi_1 B - \phi_2 B^2)(1 - B^{12})x_t = a_t$, and after some operations, $x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + x_{t-12} - \phi_1 x_{t-13} - \phi_2 x_{t-14} + a_t$.
Table 3: Results obtained for the NN model chosen for the RESEX data.
After selecting the model, it is used to forecast the rest of the data (test data) using one-step-ahead forecasts. The results are shown in Table 4 and represented in Figure 2. By using equations (29) to (32), the asymptotic prediction interval is calculated for each one-step forecast. The results are shown in Table 4 and in Figure 4 for $\alpha = 0.05$ and $\alpha = 0.10$. Both graphs in Figure 4 show prediction intervals that are large, because of the huge outliers present, giving a poor forecast with large error and variance. The NN model at least tried to follow the trend of the data in the outlier region.
6 Conclusions
The premise of this paper is that the learning methods in ANN are sophisticated statistical procedures, and that tools developed for the study of statistical procedures generally not only yield useful insights into the properties of specific learning procedures but also suggest valuable improvements in, alternatives to, and generalizations of existing learning procedures.
Particularly applicable are asymptotic analytical methods that describe the behavior of statistics
when the size n of the training set is large. At present, there is no easy answer to the question of how
large ”n” must be for the approximators described earlier to be ”good”.
The advantage of the ANN technique presented in this paper is that it provides a methodology for model-free approximation; i.e., the weight vector estimation is independent of any model. It liberates us from the procedures of model-based selection and from assumptions on the sample data. While non-linear system methods are still in a state of development, we can conclude that the ANN approach offers a competitive and robust method for system analysis, forecasting and control. The ANN presented here is a superior technique for modeling nonlinear time series compared with other nonlinear models such as bilinear models, threshold autoregressive models and regression trees. Moreover, the connections between forecasting, data compression and neurocomputing shown in this report seem very interesting for time series analysis.
To decide which architecture to use to model a given time series, first, it is possible to try traditional methods, using a simple autocorrelation function to find the kind of time series we are dealing with and, indeed, the lags to be used as inputs to the NN. Second, to select the number of hidden neurons, we start with one and then increase it until the performance evaluated by AIC and BIC becomes worse. Then we train the network on the first part of the data and finally use the last part to forecast. Asymptotic prediction intervals are computed for each one-step forecast, to show the limits within which the data are expected to move.
The study of the stochastic convergence properties (consistency, limiting distribution) of any pro-
posed new learning procedure is strongly recommended, in order to determine what it is that the ANN
eventually learns and under what specific conditions. Derivation of the limiting distribution will gen-
erally reveal the statistical efficiency of the new procedure relative to existing procedures and may
suggest modifications capable of improving statistical efficiency. Furthermore, the availability of the
limiting distribution makes possible valid statistical inferences. Such inferences can be of great value
in the search for optimal network architectures in particular applications. A wealth of applicable
theory is already available in the statistics, engineering, and system identification and optimization
theory literatures.
It is also evident that the field of statistics has much to gain from neurocomputing techniques. Analyzing neural network learning procedures poses a host of interesting theoretical and practical challenges for statistical methods; all is not cut and dried. Most important, however, neural network models provide a novel, elegant and rich class of mathematical statistical tools for data analysis.
In spite of the robust forecast performance of ANN, some problems remain to be solved. For example: (i) How many input nodes are required for a seasonal time series? (ii) How should outlier data be treated? (iii) How can the problem of overfitting be avoided? (iv) How can a $(1-\alpha)$ confidence interval for the forecast be found? (v) How should missing data be treated?
In general the following conclusions and guidelines can be stated concerning the use of statistical
methods and ANN:
1. If the functional form linking inputs and output is unknown, known only to be extremely complex, or of no interest to the investigator, an analysis using ANN may be best. The availability of large training datasets and powerful computing facilities are requirements for this approach.
2. If the underlying physics of the data-generating process are to be incorporated into the analysis, a statistical approach may be best. Generally, fewer parameters need to be estimated and the training datasets can be substantially smaller. Also, if measures of uncertainty are desired, either in parameter estimates or in forecasts, a statistical analysis is mandatory. If the models fit to the data are to be used to delve into the underlying mechanisms, and if measures of uncertainty are sought, a statistical approach can give more insight. In this sense, statistics provides more value added to a data analysis; it will probably require a higher level of effort to ascertain the best-fitting model, but errors in predictions, errors in parameter estimates, and assessments of model adequacy are available in a statistical analysis. In addition to providing measures of parameter and prediction uncertainty, statistical models inherently possess more structure than ANN do, which are often regarded as "black boxes". This structure is manifested as the specification of a random component in statistical models. As such, statistical methods have a more limited range of application. If a nonlinear relationship exists between inputs and outputs, then data of this complexity may best be modeled by an ANN. A summary of these considerations can be found in Table 7 (see Appendix C).
A Appendix: Asymptotic distribution of ŵn
Under certain mild regularity assumptions, it can be shown [HD97] that the asymptotic distribution of the standardized quantity $\sqrt{n}\,(\hat{w}_n - w_0)$ is zero-mean multivariate normal with covariance matrix $C = A^{-1} B A^{-1}$, where $\hat{w}_n$ is the estimated and $w_0$ the true parameter vector, and
$$A = E\left[ \nabla\nabla r(z, w_0) \right] \quad \text{and} \quad B = E\left[ \nabla r(z, w_0)\, \nabla r(z, w_0)^T \right]$$
The matrices $A$ and $B$ are non-singular, with $\nabla$ and $\nabla\nabla$ denoting the $p \times 1$ gradient and $p \times p$ Hessian operators with respect to $w$ ($p$ is the number of network parameters). However, since the true parameters $w_0$ are not known, the weakly consistent estimator $\hat{C} = \hat{A}_n^{-1} \hat{B}_n \hat{A}_n^{-1}$ of the covariance matrix $C$ has to be used instead, where
$$\hat{A}_n = n^{-1} \sum_{i=1}^{n} \nabla\nabla r(z_i, \hat{w}_n) \qquad (34)$$
$$\hat{B}_n = n^{-1} \sum_{i=1}^{n} \nabla r(z_i, \hat{w}_n)\, \nabla r(z_i, \hat{w}_n)^T \qquad (35)$$
$$r(z_i, \hat{w}_n) = \tfrac{1}{2}\left( y_i - g(x_i; \hat{w}_n) \right)^2 \qquad (36)$$
This has no effect on the asymptotic distribution of the network’s parameters, although larger n
will be needed to obtain an approximation as good as if C itself were available. The single most
important assumption made is that ŵn is a locally unique solution, i.e. none of its parameters can
be expressed in terms of the others, or equivalently, the network is not overparameterized. This is
reflected in the natural requirement that matrices A and B are non-singular.
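The sandwich estimator $\hat{C} = \hat{A}_n^{-1}\hat{B}_n\hat{A}_n^{-1}$ of (34)-(36) can be approximated numerically as sketched below; the finite-difference derivatives are our own implementation choice, not part of the paper.

```python
# Sandwich covariance estimate C_hat = A_n^{-1} B_n A_n^{-1} for a fitted network.
import numpy as np

def sandwich_covariance(g, w_hat, X, y, eps=1e-5):
    """g(x, w): fitted network with flat parameter vector w; returns C_hat (p x p)."""
    p = w_hat.size
    def r_i(w, xi, yi):                        # per-observation loss (36)
        return 0.5 * (yi - g(xi, w)) ** 2
    def grad(f, w):
        gvec = np.zeros(p)
        for k in range(p):
            d = np.zeros(p); d[k] = eps
            gvec[k] = (f(w + d) - f(w - d)) / (2 * eps)
        return gvec
    A = np.zeros((p, p)); B = np.zeros((p, p)); n = len(y)
    for xi, yi in zip(X, y):
        gr = grad(lambda w: r_i(w, xi, yi), w_hat)
        B += np.outer(gr, gr) / n                               # (35)
        for k in range(p):                                      # Hessian column by column
            d = np.zeros(p); d[k] = eps
            A[:, k] += (grad(lambda w: r_i(w, xi, yi), w_hat + d)
                        - grad(lambda w: r_i(w, xi, yi), w_hat - d)) / (2 * eps * n)  # (34)
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv                                    # C_hat
```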
The fact that $\sqrt{n}\,(\hat{w}_n - w_0) \sim N(0, C)$ asymptotically can be used to robustly estimate the standard error of any complex function of $\hat{w}_n$, i.e. $\theta = \rho(\hat{w}_n)$, without the need for an analytic derivation. By stochastically sampling from the distribution of $\hat{w}_n$, we can inexpensively create a sufficiently large number $r$ of parameter vectors $\hat{w}_n^{(s)}$, where $s = 1, 2, \ldots, r$, and then compute the estimate $\hat{\sigma}_A$ of the standard error as follows:
$$\hat{\sigma}_A = \left[ (r-1)^{-1} \sum_{s=1}^{r} \left( \hat{\theta}^{(s)} - \hat{\theta}^{0} \right)^2 \right]^{1/2} \qquad (37)$$
where
$$\hat{\theta}^{0} = r^{-1} \sum_{s=1}^{r} \hat{\theta}^{(s)} = r^{-1} \sum_{s=1}^{r} \rho\!\left( \hat{w}_n^{(s)} \right) \qquad (38)$$
The scheme is independent of the functional $\rho$ and is much less computationally demanding than, for example, the bootstrap, since the estimate $\hat{w}_n$ has to be obtained only once (see [RZ99]).
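A possible realization of the sampling scheme (37)-(38), drawing parameter vectors from the estimated asymptotic normal distribution, is sketched below (our own illustration):

```python
# Standard error of theta = rho(w_hat) by sampling from N(w_hat, C_hat / n).
import numpy as np

def sampled_std_error(rho, w_hat, C_hat, n, r=1000, seed=0):
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(w_hat, C_hat / n, size=r)  # w_n^(s), s = 1..r
    theta = np.array([rho(w) for w in draws])                  # theta_hat^(s)
    theta0 = theta.mean()                                      # (38)
    return np.sqrt(np.sum((theta - theta0) ** 2) / (r - 1))    # (37)

# Example: standard error of the first parameter itself (rho = projection)
# sigma = sampled_std_error(lambda w: w[0], w_hat, C_hat, n=len(y))
```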
B Appendix: Time Series Data
SERIES G: International Airline Passengers: monthly totals (Thousands of passengers).
JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
1949 112 118 132 129 121 135 148 148 136 119 104 118
1950 115 126 141 135 125 149 170 170 158 133 114 140
1951 145 150 178 163 172 178 199 199 184 162 146 166
1952 171 180 193 181 183 218 230 242 209 191 172 194
1953 196 196 236 235 229 243 264 272 237 211 180 201
1954 204 188 235 227 234 264 302 293 259 229 203 229
1955 242 233 267 269 270 315 364 347 312 274 237 278
1956 284 277 317 313 318 374 413 405 355 306 271 306
1957 315 301 356 348 355 422 465 467 404 347 305 336
1958 340 318 362 348 363 435 491 505 404 359 310 337
1959 360 342 406 396 420 472 548 559 463 407 362 405
1960 417 391 419 461 472 535 622 606 508 461 390 432
Table 5: Series G
JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
1966 10165 9279 10930 15876 16485 14075 14168 14535 15367 13396 12606 12932
1967 10545 10120 11877 14752 16932 14123 14777 14943 16573 15548 15838 14159
1968 12689 11791 12771 16952 21854 17028 16988 18797 18026 18045 16518 14425
1969 13335 12395 15450 19092 22301 18260 19427 18974 20180 18395 15596 14778
1970 13453 13086 14340 19714 20796 18183 17981 17706 20923 18380 17343 15416
1971 12465 12442 15448 21402 25437 20814 22066 21528 24418 20853 20673 18746
1972 15637 16074 18422 27326 32883 24309 24998 25996 27583 22068 75344 47365
1973 18115 15184 19832 27597 34256
Table 6: RESEX
C Appendix: Comparison between Statistical Methods (SM) and Artificial Neural Networks (ANN)

Table 7: Comparison between statistical methods (SM) and artificial neural networks (ANN).

General characteristics. SM: randomness and variability; structured models; single or few outputs. ANN: complex, nonlinear input/output relationships; multiple outputs.

Data required. SM: relatively small training datasets; may require probability distributions. ANN: massive training datasets needed to estimate the weights.

Model specification. SM: physical-law models; linear discrimination. ANN: no process knowledge required; nonlinear discrimination.

Goodness-of-fit criterion. SM: many possibilities; best fit can be tested. ANN: few possibilities (least squares); no best-fit test.

Parameter estimation and computer time. SM: relatively few parameters; iterative training for nonlinear models, otherwise noniterative. ANN: relatively many parameters (weights); iterative training; severe demands on computer time.

Outputs. SM: calculated uncertainties for parameter estimates and predicted values; residual diagnostics can provide physical insight. ANN: response surfaces (splines) can be multivariate vectors; no uncertainty computations; minimal diagnostics.

Computer power required. SM: low. ANN: high; parallel processing possible.

Trends. SM: evolutionary techniques not yet used. ANN: evolutionary design possible.
References
[AG96] H. Allende and J. Galbiati. Robust test in time series model. Journal of Interamerican Statistical
Institute, 1(48):35–79, 1996.
[AH92] H. Allende and S. Heiler. Recursive generalized m-estimates for autoregressive moving average
models. Journal of Time Series Analysis, (13):1–18, 1992.
[Bax90] W. G. Baxt. Use of an artificial neural network for data analysis in clinical decision making: The diagnosis of acute coronary occlusion. Neural Computation, (2):480–489, 1990.
[BD91] P.J. Brockwell and R.A. Davis. Time Series: Theory and Methods. Springer-Verlag, 1991.
[BFOS84] L. Breiman, J. Friedman, R. Olshen, and C.J. Stone. Classification and Regression Trees. Wadsworth, Belmont, CA, 1984.
[BJR94] G. E. P. Box, G.M. Jenkins, and G.C. Reinsel. Time Series Analysis, Forecasting and Control. Prentice Hall, Englewood Cliffs, 3rd edition, 1994.
[BO93] B.L. Bowerman and R.T. O'Connell. Forecasting and Time Series: An Applied Approach. Duxbury Press, 3rd edition, 1993.
[CM94] J.T. Connor and R.D. Martin. Recurrent neural networks and robust time series prediction. IEEE Transactions on Neural Networks, 2(5):240–253, 1994.
[CT94] B. Cheng and D.M. Titterington. Neural networks: review from a statistical perspective. Statistical
Science, (1):2–54, 1994.
[Fin99] T. L. Fine. Feedforward Neural Network Methodology. Springer, 1999.
[FR90] B. Flury and H. Riedwyl. Multivariate Statistics: A Practical Approach. Chapman and Hall, 1990.
[Fri91] J.H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, (19):1–141, 1991.
[HD97] J. T. G. Hwang and A. A. Ding. Prediction intervals for artificial neural networks. Journal of the American Statistical Association, 92(438):748–757, 1997.
[Hut94] J.M. Hutchinson. A Radial Basis Function Approach to Financial Time Series Analysis. PhD thesis, Massachusetts Institute of Technology, 1994.
[LB78] G. M. Ljung and G. E. P. Box. On a measure of lack of fit in time series models. Biometrika, 65:297–303, 1978.
[LG94] J.L. Lin and C.W. Granger. Forecasting from non-linear models in practice. Int. Journal of Forecast-
ing, 13:1–9, 1994.
[Lip87] R. P. Lippmann. An introduction to computing with neural nets. IEEE ASSP Magazine, pages 4–22,
1987.
[Rao81] T. Subba Rao. On the theory of bilinear models. J. Roy. Statist. Soc. B, (43):244–255, 1981.
[Rip93] B. D. Ripley. Statistical aspects of neural networks. In Networks and Chaos: Statistical and Probabilistic Aspects, O. E. Barndorff-Nielsen, J. L. Jensen, and W. S. Kendall (eds.), Chapman and Hall, 1993.
[RZ99] A.-P. N. Refenes and A. D. Zapranis. Neural model identification, variable selection and model adequacy. Journal of Forecasting, 18:299–322, 1999.
[Sar94] W. S. Sarle. Neural networks and statistical methods. In Proc. of the 19th Annual SAS Users Group International Conference, 1994.
[Ste96] H. S. Stern. Neural networks in applied statistics. Technometrics, 38(3):205–214, 1996.
[Sus92] H. J. Sussmann. Uniqueness of the weights for minimal feedforward nets with a given input-output map. Neural Networks, (5):589–593, 1992.
[Ton90] H. Tong. Non-linear Time Series. Oxford University Press, 1990.
[WY92] F.Y. Wu and K. K. Yen. Application of neural network in regression analysis. In Proc. of the 14th Annual Conference on Computers and Industrial Engineering, 1992.