
Forecasting by exponential smoothing, the Box and Jenkins procedure and spectral analysis. A simulation study

F. Cole (*)

1. INTRODUCTION

In this introduction, we shall try to shed light on some basic ideas that lurk behind the title, and on the purpose of this study. We shall indicate the main conclusions, details of which will be given in a later chapter.

1.1. Forecasting

When speaking about forecasting, one has to make a distinction between naive and causal prediction. The latter is performed by using a model composed of one or more equations, every such equation relating a dependent variable to one or more explanatory variables. The former only takes time into account as an explanatory variable, be it explicitly or implicitly by relating the series to one or more of its lagged versions.
The assumption behind this is in fact stationarity: one presumes the dependent variable to behave in the future as it did in the past. Defining this reasoning more precisely, the dependent variable is in fact correlated with hidden variables that are in their turn correlated with time. The assumption therefore is that the correlation of these unknown variables with time will stay the same. As a consequence, naive forecasts will nearly always miss turning-points. The advantage of naive models, despite the fact that they give no information concerning the underlying structure, is that they are, relatively speaking, computationally much easier and faster to perform.
One could however expect causal predictions to outperform naive forecasts. We write deliberately "could expect", since building a causal model is a difficult and longwinded task, behind which lurk many risks. One has to overcome problems such as the choice of the functional form, specification of the variables and often complicated estimation procedures. When this matter is carried to a successful conclusion, the even more difficult task of prediction remains to be solved. Indeed, one is usually obliged to use forecasts of exogenous variables that are based on subjective grounds. It therefore could well be that the resulting causal forecasts are worse than the naive predictions. However, one should be careful not to overlook the advantages of structural analysis. Indeed, knowledge of the impact of the decision or explanatory variables on the dependent variables can sometimes be of more importance than making a good or easy forecast. Naive models are not so much a substitute for causal models, but imply a totally different attitude towards analysing a time series.

1.2. The purpose of this study - simulation

As the title indicates, the purpose of this study is to compare forecasts made by exponential smoothing, the Box and Jenkins procedure and by spectral analysis. Bhansali [1] has made a similar study comparing spectral analysis with a technique called the "Regression-Akaike" method. His main conclusion is that the former performs quite well, especially when predicting more than one step ahead.
There are, in our opinion, however, two drawbacks to his paper. First of all, he simulates samples of size 1,000. Theoretically, in order to prove asymptotic properties this is necessary, but conclusions concerning prediction drawn from such a study are of little importance to economists, since they will never be confronted with time series of that length. It is then very much the question whether the spectral method will do equally well using short series, since it is a well known fact that spectral analysis needs quite large samples.
Secondly, the underlying generating processes are of simple form. Indeed, the processes used are of the autoregressive (AR) and moving average (MA) type. Therefore he can make use of a regression technique called the "Regression-Akaike Method", which is computationally straightforward. If however, as we did, a slight complication is introduced, such as autocorrelation of the disturbance term in an AR-scheme, a mixture of both previously mentioned types results. These so called mixed autoregressive moving average (ARMA) processes however, according to Box and Jenkins [3], are very often encountered in reality, and do not lend themselves to such a simple estimation procedure. The Box and Jenkins procedure, which is applicable to this kind of models, does in fact involve subjective reasoning at some stages.
This already explains our intention to use the Box and Jenkins and the spectral method. Both methods are statistically refined, but involve a great deal of computation. It could therefore be argued

(*) F. Cole, Assistant in the Department of Applied Economics, Katholieke Universiteit Leuven, Dekenstraat 2, 3000 Leuven, Belgium.

Journal of Computational and Applied Mathematics, volume 6, no 1, 1980. 53


whether the game is worth the candle. That is why we have planned to compare these two procedures with exponential smoothing, which is, we believe, the most simple and commonly used method for forecasting. If indeed it should turn out to be only slightly worse, then one might question those more complicated methods.
A word has to be said about the method used for comparing these procedures, namely simulation. Before resorting to simulation, one has to be certain that the problem cannot be solved analytically, since simulation is expensive and involves a great deal of work. It can be shown [3, p. 107] that the constant model underlying first order exponential smoothing is equivalent to an integrated moving average of first order [IMA (1,1)], if the disturbance terms are uncorrelated. Therefore exponential smoothing will do worse than or equally well as the Box and Jenkins procedure, according to whether the generating process is or is not an IMA (1,1). In the case under study however, the disturbance terms are correlated. This being the case, it is impossible to draw conclusions from analytical arguments.
As for the Box and Jenkins procedure, compared to spectral analysis, it can be shown [1, 2] that the asymptotic mean squared error of the former is, in general, smaller than the asymptotic mean squared error of the latter, if the coefficients of the ARMA process are exactly known. In practice however, these are rarely known, and are generally estimated from the data. The above reasoning implies that for the general case no analytical comparison is possible, such that one is obliged to resort to simulation.

1.3. Main conclusions

Before going into the study in more detail, we would like to state the main conclusions. Although we were quite critical with regard to Bhansali's [1] results, in the particular ARMA (1,1) model we tested, having a small sample size and a more complicated generating process, they seem to remain valid. Although we cannot generalize to other models than the ones tested, in the light of Bhansali's [1] study we are inclined to presume that the results hold for other simple processes. However, only further research can confirm this assumption.
Indeed, we found one of the two tested spectral methods to be the best method to put forward. It was superior or at least equivalent to the other methods. Only for larger forecasting periods (greater than four) did the Box and Jenkins procedure prove to be superior. Exponential smoothing is clearly inferior to the other methods, since for all tested forecasting periods at least one method proves to be better or at least equivalent.

2. THE EXPERIMENT AND THE FORECASTING RESULTS

2.1. Preliminary remarks

a. Since there is contradiction in the literature concerning certain definitions, in order to avoid all misunderstanding, we shall define some concepts as they are used in this study.
Let {y_t, t = ..., -1, 0, 1, ...} be a stochastic process with discrete time parameter. Assuming that this process has been observed over the period T = {1, 2, ..., t}, the following definitions can be set forward.

Forecast error (FE) = ŷ_{t+r} - y_{t+r}; ŷ_{t+r} being the point forecast for period t+r made at time t, y_{t+r} being the observed value of the stochastic process.

Expected squared error (ESE) = E(ŷ_{t+r} - y_{t+r})².

Mean squared error (MSE) = (1/n) Σ_{j=1}^{n} (ŷ_{t+r,j} - y_{t+r,j})².

The MSE is the simulated value for the ESE. We therefore simulate the process n times over a period (1, 2, ..., t+r) and take the average of the values (ŷ_{t+r} - y_{t+r})² over the n realisations.

b. (Covariance) stationary time series
In what follows we shall consider a generating process which is (covariance) stationary. The stochastic process {x_t, t ∈ T} is called covariance stationary if, for m = 1, 2, E(x_t) exists and

E(x_{t_1} x_{t_2} ... x_{t_m}) = E(x_{t_1+r} ... x_{t_m+r})

for all t_1, ..., t_m, r ∈ T.
One could argue whether stationarity of the data is a reasonable assumption, since most economic time series are not stationary. However, as Box and Jenkins state [3, chapter 4], many, if not most, series are nonstationary but homogeneous, which means that they are non-stationary only in level and/or slope, but that they can be transformed to stationary series by simple differencing. Even non-stationarity of the variance can sometimes be accounted for by taking a logarithmic transformation. Anyway, leaving the question undecided, none of the three methods applies to series that cannot be transformed into stationary time series by one of the two previously mentioned methods. Therefore we can state that non-stationarity is not a relevant factor in the comparison between the different methods. This being the case, we have not built non-stationarity into the generating process.

c. We will not go into the theoretical aspects of the various forecasting techniques. The interested reader is referred to the bibliography at the end of this paper. The most relevant references are [1], [2], [3], [5], [10] and [29].
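As an illustration of the definitions above (our own sketch, not part of the original study; the helper name `mse` and the white-noise toy example are assumptions):

```python
import random

# Sketch of the definitions in section 2.1: the MSE over n simulated
# replicates is the simulated value of the ESE.

def mse(forecasts, realised):
    """MSE = (1/n) * sum_j (yhat_{t+r,j} - y_{t+r,j})^2 over n replicates."""
    n = len(forecasts)
    return sum((f - y) ** 2 for f, y in zip(forecasts, realised)) / n

# Toy check on a white-noise process y_t ~ N(0, 1) with the naive forecast
# yhat = 0 (the process mean): the MSE should then estimate ESE = 1.
random.seed(1)
realised = [random.gauss(0.0, 1.0) for _ in range(5000)]
forecasts = [0.0] * 5000
print(mse(forecasts, realised))   # close to 1.0
```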



2.2. The results

2.2.1. THE EXPERIMENT

We have applied the three techniques to data generated by the following underlying process:

x_t - 0.5 x_{t-1} = u_t + 0.6 u_{t-1}

We thus generated a realisation {x_t, t = 1, ..., 80} of the time series {x_t, t = ..., -1, 0, 1, ...}. In order to be able to draw statistically meaningful conclusions, one has to have a sample containing several such realisations. We generated 50 replicates, each replicate being different by using a different stream of stochastic variates u_t. From each replicate we used 75 observations to predict x_76 up to x_80.

Some characteristics
1. The u_t are uncorrelated normal stochastic variates with expected value equal to zero, and variance equal to one [4; 20, p. 90].
2. Since, because of stationarity, E(x_t) = E(x_{t-1}) and E(u_t) = 0, E(x_t) = 0. For every series we have put the starting value equal to its expected value: x_0 = 0.
3. The variance σ_x² = E[x_t - E(x_t)]², which after some computation gives σ_x² = 2.61.

2.2.2. EXPONENTIAL SMOOTHING

We applied the constant model x_t = a + v_t to the data.

σ_v² = σ_x² = 2.61 ;  d = E(x_t x_{t-1}) / σ_x² = 0.73

It can be shown [27], assuming a first order autoregressive scheme for the disturbance term, that the expected squared error for a forecast r periods ahead is:

ESE = E[ŷ_{t+r} - y_{t+r}]² = σ_v² [ 1 + a/(2-a) + 2d(1-a)a / ((2-a)(1-d+da)) - 2d^r a / (1-d+da) ]   (1)

d being the autocorrelation coefficient of the disturbance terms. In the case of correlated disturbance terms, it is actually possible to find a value of a that minimizes the ESE for given d and r. Using expression (1), with d = 0.73, we found optimal a's of approximately 0.83, 0.55, 0.1, 0.1 and 0.1 for respectively r = 1, 2, 3, 4 and 5. Substituting these a's in (1), we find

ESE = 1.40  (r = 1)
      2.32  (r = 2)
      2.62  (r = 3)
      2.84  (r = 4)
      2.95  (r = 5)

Since the u_t are normally distributed, we assume x̂_{t+r}, the forecast for x_{t+r} made at time t, to be normally distributed. Then

Σ_{j=1}^{50} (x̂_{t+r,j} - x_{t+r,j})² / ESE

is χ²-distributed with 49 degrees of freedom, so that

P{ (ESE/50) χ²_{α/2, 49} < (1/50) Σ_{j=1}^{50} (x̂_{t+r,j} - x_{t+r,j})² < (ESE/50) χ²_{1-α/2, 49} } = 1 - α.

In other words, with a probability of 95 %, in order for the model to be appropriate the MSE should fall within the regions:

r = 1   0.89 < MSE < 1.98
r = 2   1.48 < MSE < 3.29
r = 3   1.70 < MSE < 3.76
r = 4   1.81 < MSE < 3.99
r = 5   1.88 < MSE < 4.18

2.2.3. THE BOX AND JENKINS PROCEDURE

It can be shown [3] that the ESE for a forecast r periods ahead is:

ESE(r) = (1 + c_1² + ... + c_{r-1}²) σ_u²

which gives for the consecutive forecasting periods:

ESE = 1     (r = 1)
      2.21  (r = 2)
      2.51  (r = 3)
      2.59  (r = 4)
      2.61  (r = 5)

Again we can find the 95 % confidence interval on the MSE:

r = 1   0.64 < MSE < 1.42
r = 2   1.41 < MSE < 3.14
r = 3   1.61 < MSE < 3.56
r = 4   1.66 < MSE < 3.68
r = 5   1.67 < MSE < 3.71

After running through the three stages of the Box and Jenkins procedure, we finally computed the MSE, which can be found in table 1.

2.2.4. SPECTRAL ANALYSIS

It can be shown [1, 2] that the distribution of the forecast errors of the spectral prediction method is only known asymptotically. Therefore, working with a finite sample, we cannot set up confidence intervals for the MSE. Applying the forecasting formulas of Bhansali [1, 2], we obtained the MSE for both spectral methods (see table 1).
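The constants quoted in sections 2.2.1-2.2.3 can be checked numerically. A minimal sketch (our own, not the paper's code), using the standard ARMA(1,1) moment formulas and psi-weights with σ_u² = 1:

```python
# Check of the constants quoted in the text for the process
# x_t - 0.5 x_{t-1} = u_t + 0.6 u_{t-1}, sigma_u^2 = 1.
phi, theta = 0.5, 0.6

# Variance and first autocorrelation of an ARMA(1,1):
#   sigma_x^2 = (1 + theta^2 + 2*phi*theta) / (1 - phi^2)
#   d = rho_1 = (phi * sigma_x^2 + theta) / sigma_x^2
var_x = (1 + theta**2 + 2 * phi * theta) / (1 - phi**2)
d = (phi * var_x + theta) / var_x
print(round(var_x, 2), round(d, 2))   # 2.61 0.73, as in the text

# ESE(r) = (1 + c_1^2 + ... + c_{r-1}^2) * sigma_u^2, the c_j being the
# psi-weights of the ARMA(1,1): c_j = phi^(j-1) * (phi + theta).
for r in range(1, 6):
    ese = 1 + sum((phi**(j - 1) * (phi + theta))**2 for j in range(1, r))
    print(r, round(ese, 2))   # 1, 2.21, 2.51, 2.59, 2.61
```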



Given the mean squared errors for the consecutive forecasting periods of the three considered methods, we are now ready to make a comparison between them.

2.3. Comparison of the results

2.3.1. INTRODUCTION

We give the results for the applied methods.

Table 1. MSE for the various forecasting techniques

        Smoothing   Box and Jenkins   Spectral 1   Spectral 2
r = 1   1.96        1.59              0.92         1.55
r = 2   3.21        2.06              1.01         0.17
r = 3   3.55        2.28              4.73         2.17
r = 4   3.25        2.82              2.49         0.32
r = 5   3.24        2.82              2.34         4.23

Clearly we could make a comparison between the results in the above table. Since the MSE's are random variables however, the question arises whether the observed differences between the methods are significant or due to pure chance. To answer this question, we shall have to apply certain statistical techniques. Some problems arise.

1. For the unbiased forecasting methods (exponential smoothing and the Box and Jenkins procedure) the mean squared error is equal to the estimated variance of the forecast errors. This would lead us to believe that the F-statistic n s_1² / n s_2² is appropriate for comparing these methods. However, since both methods apply a kind of weighted average to past observations to derive the forecasts, it is very likely that the MSE's will be correlated. Therefore

MSE_1 / MSE_2 = χ_1² / χ_2²

is a ratio of two correlated χ²-distributions, the distribution of which is not known. As a consequence we have to look for another method of comparison. The Pitman-test [22; 15, pp. 454-463] is appropriate for setting up confidence intervals for the distribution of the ratio of the variances of two related normal distributions.

2. For comparing the spectral technique with other methods, another problem arises. Indeed, since the forecast errors using the spectral method are biased, the MSE is no longer an estimate of the variance, but of the variance plus the bias squared. Indeed:

E(x̂_{t+r} - x_{t+r})² = σ² + [E(x̂_{t+r}) - x_{t+r}]²

Since the MSE is an estimate of E(x̂_{t+r} - x_{t+r})², it is not an estimate of the variance. Therefore the Pitman-test cannot be applied. In this case we shall use the sign test [25].

2.3.2. COMPARISON

In order not to burden the text with unnecessary comparisons, we shall proceed in the following way. First we shall compare the Box and Jenkins procedure with exponential smoothing using the Pitman-test. If the Box and Jenkins procedure proves to be, as suspected, superior or at least as good, we drop the smoothing technique from further competition. Secondly, we do the same with the two spectral methods, using the sign test. Finally we compare the left-over techniques.

2.3.2.1. Exponential smoothing versus Box and Jenkins

The Pitman-test provides us with confidence limits on σ_1²/σ_2², where the σ_i² are the variances of two correlated normal distributions. The confidence limits for σ²_SMOOTH / σ²_B&J are given in table 2 for α = 5 %.

Table 2. Confidence limits for σ²_SMOOTH / σ²_B&J (α = 5 %)

        LOWER LIMIT   UPPER LIMIT
r = 1   0.86          1.77
r = 2   1.10          2.20
r = 3   1.17          2.05
r = 4   0.92          1.44
r = 5   0.93          1.44

As one can see, only for r = 2, 3 is σ_1²/σ_2² significantly different from 1. For these cases the Box and Jenkins procedure is superior. But even for r = 4, 5 the lower limit is seen to be close to 1, implying that the Box and Jenkins method is almost better. If no general ranking can be made, one thing we can certainly state is that the B&J procedure is superior or at least equivalent to smoothing. This justifies our dropping of the exponential smoothing technique in the search for the best forecasting method.

2.3.2.2. Spectral method 1 versus spectral method 2

As explained above, we can no longer apply the Pitman-test for this comparison. Instead we used the sign test [25]. This test compares two methods by counting the number of times one method was closer to the true value than the other. Under the hypothesis that the methods are equivalent, with 5 % significance and n = 50, the confidence limits for the number of times that method 2 forecasts closer to the true value than method 1 lie within the region [18, 32].
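The [18, 32] acceptance region quoted above can be reproduced with a large-sample sign test. A sketch (our own illustration, after Siegel [25]; the helper names `sign_test_limits` and `compare` are hypothetical, and we use the normal approximation to the binomial):

```python
from math import sqrt

# Sign test sketch: under the hypothesis of equivalent methods, the number
# of "method 2 closer" cases in n replicates is Binomial(n, 1/2).

def sign_test_limits(n, z=1.96):
    """Two-sided 5 % acceptance region, normal approximation to Binomial(n, 1/2)."""
    centre, sd = n / 2, sqrt(n) / 2
    return round(centre - z * sd), round(centre + z * sd)

print(sign_test_limits(50))   # (18, 32): the region quoted in the text

def compare(errors1, errors2):
    """Count replicates where method 2's forecast error is strictly smaller."""
    wins = sum(abs(e2) < abs(e1) for e1, e2 in zip(errors1, errors2))
    lo, hi = sign_test_limits(len(errors1))
    if wins > hi:
        return wins, "better"
    if wins < lo:
        return wins, "worse"
    return wins, "equivalent"
```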



The actual comparison is given in table 3.

Table 3. Number of times spectral method 2 was closer than method 1

Comparison method 2/method 1
r = 1   14   worse
r = 2   38   better
r = 3   29   equivalent
r = 4   28   equivalent
r = 5   10   worse

One must bear in mind that the sign test does not account for the magnitude of the difference between the two methods, only for its sign. Therefore we must weigh, somewhat subjectively, the information contained in both tables 1 and 3.

2.3.2.3. Spectral methods versus Box and Jenkins

Here again we are obliged to resort to the sign test. The results are given in tables 4 and 5.

Table 4. Number of times spectral method 1 was closer than B&J

Comparison spectral 1/B&J
r = 1   26   equivalent
r = 2   34   better
r = 3   16   worse
r = 4   37   better
r = 5   17   worse

Table 5. Number of times spectral method 2 was closer than B&J

Comparison spectral 2/B&J
r = 1   19   equivalent
r = 2   43   better
r = 3   18   equivalent
r = 4   42   better
r = 5   14   worse

2.3.2.4. Summary

Summarizing the information contained in tables 1, 3, 4 and 5, we would be inclined to set up the following table.

Table 6. Summary of results

r   Best                      Second best
1   -                         -
2   spectral 2                spectral 1
3   spectral 2 / B&J          -
4   spectral 2 / spectral 1   -
5   B&J / smoothing           -

2.4. Conclusions and final remarks

2.4.1. CONCLUSIONS

1) The exponential smoothing technique is clearly inferior to the other methods, since for all forecasting periods at least one method proves to be better or at least equivalent. This possibly results from the fact that we assumed, as is commonly done, a first order autoregressive model for the disturbance term, whereas in the Box and Jenkins procedure we do not restrict ourselves to such a simple model. However, to our knowledge, more complex models would make the use of exponential smoothing impossible in practice.

2) For small forecasting periods the second spectral method would seem to us the best method to put forward. It is superior or at least equivalent to the other methods. A drawback however is the fact that the forecast errors are biased. This bias will most probably decrease the larger the sample at hand, since Bhansali [2] proved the forecast errors to be asymptotically unbiased. Another possible improvement due to a larger sample could be the fact that M, the number of lags for which we estimate the covariance function, would be larger. As a consequence N = M/4, the number of spectral estimates and forecasting coefficients, would be greater too. Bhansali [2] proved that the asymptotic variance of the forecast errors decreases for N increasing, as long as N ≤ M/4. Therefore it seems reasonable that the MSE would decrease too. We therefore suggest spectral method 2 to be an excellent forecasting technique for time series with daily data, since a large sample is then already obtained with one year of observations.

3) For larger forecasting periods (r > 4), we would stick to the Box and Jenkins procedure.

2.4.2. FINAL REMARKS

1) One must bear in mind that the conclusions drawn are obtained from a simulation study. Therefore these conclusions hold only for the model studied. It is not possible to make extensions to allow for other generating processes. This doesn't mean that a simulation study has no value. Indeed, it was mentioned that simple ARMA-processes may represent many economic time series. Therefore the conclusions drawn are not so specific after all. But one might investigate the influence on the results of, for example, other values of σ_u², larger ARMA-schemes with greater lags, or another starting value. All these are parameters that the experimenter has under his control, and they might influence the results.

2) Another parameter that has to be supplied is the number of replicates. For some simulation experiments stopping rules exist [20]. In the study at hand the number of replicates only



influences the degree of accuracy of the mean squared error. Indeed, the mean squared error (for the unbiased cases) is (ESE/50) χ²_49 distributed, the variance of which is solely dependent on the number of degrees of freedom, and hence on the number of replicates.
By a desired level of accuracy of β %, we mean that (1 - α) %, e.g. 95 %, of the sampled MSE's fall within the interval

[E(MSE) - (β/100) E(MSE), E(MSE) + (β/100) E(MSE)]

From the desired level of accuracy, one can derive the number of replicates needed. For 50 replicates, β = 20 %.
Taking into account the expensiveness of computer time, we have felt that the increase in the level of accuracy, and hence the simplification obtained for comparing the methods, would not offset the increase in costs and time. Indeed, if the level of precision were, say, 80 %, the figures in table 1 would be subject to little or no variation. Hence, the figures would be suitable for a direct comparison, without running any further statistical tests. Running the statistical tests made our comparison more longwinded and the conclusions less firm, but we saved costs of computer time. As always, a balance must be reached between additional costs and additional revenue.

ACKNOWLEDGMENT

The author is much obliged to Prof. R. Vandenborre and Mr. M. Geeraerts for their comments and constructive criticism.

BIBLIOGRAPHY

1. BHANSALI R. : "A Monte Carlo comparison of the regression method and the spectral methods of prediction", Journal of the American Statistical Association, 1973, 68, p. 621-625.
2. BHANSALI R. : "Asymptotic properties of the Wiener-Kolmogorov predictor I", Journal of the Royal Statistical Society, Series B, 1974, 36, p. 61-72.
3. BOX G. E. P. & JENKINS G. : Time series analysis, forecasting and control, Holden-Day, San Francisco, 1970, 553 p.
4. BOX G. E. P. & MULLER M. E. : "A note on the generation of random normal deviates", Annals of Mathematical Statistics, 1958, 29 (2), p. 610-611.
5. BROWN R. G. : Smoothing, forecasting and prediction, Prentice Hall, London, 1962, 467 p.
6. COLE F. : "Theoretische aspekten van de spectraalanalyse van economische tijdreeksen", (unpublished) licentiate thesis, University of Antwerp, 1974, 61 p.
7. DHRYMES Ph. : Econometrics. Statistical foundations and applications, Harper & Row, New York, 1970, 591 p.
8. FISHMAN G. S. : Spectral methods in econometrics, Harvard University Press, Cambridge, Massachusetts, 1969, 210 p.
9. GRANGER C. W. J. : "The typical spectral shape of an economic variable", Econometrica, 1966, 34 (1), p. 150-161.
10. GRANGER C. W. J. & HATANAKA M. : Spectral analysis of economic time series, Princeton University Press, Princeton, New Jersey, 1964, 299 p.
11. HEUTS R. : "Uitbreidingen in exponential smoothing modellen", course, Katholieke Universiteit Tilburg, 1974, 120 p.
12. HEUTS R. : "Aanvullende opmerkingen bij het boek van Charles Nelson : Applied time series analysis for managerial forecasting", course, Katholieke Universiteit Tilburg, 1975, 58 p.
13. JENKINS G. & WATTS D. G. : Spectral analysis and its applications, Holden-Day, San Francisco, 1968.
14. JOHNSTON J. : Econometric methods, McGraw-Hill, New York, 1972, 437 p.
15. KENDALL M. G. & STUART A. : The advanced theory of statistics, Griffin, London, 1966, 3, p. 454-463.
16. KLEYNEN J. P. : "Design and analysis of simulation : practical statistical techniques", Reeks ter Discussie, Katholieke Universiteit Tilburg, 1975, 28 p.
17. MOOD A. M. : "On the asymptotic efficiency of certain non-parametric two sample tests", Annals of Mathematical Statistics, 1954, 25 (3), p. 514-522.
18. MORTIER : Mechanica, course, Rijksuniversiteit Gent, Standaard Wetenschappelijke Uitgeverij, Antwerpen, 1967, p. 74-75.
19. MONRO D. M. : "Complex discrete fast Fourier transform", Applied Statistics, 1975, 24 (1), p. 153-160.
20. NAYLOR Th. H., BALINTFY J. K., BURDICK D. S. & CHU K. : Computer simulation techniques, J. Wiley & Sons, New York, 1966.
21. NELSON Ch. : Applied time series analysis for managerial forecasting, Holden-Day, San Francisco, 1973, 231 p.
22. PITMAN E. J. G. : "A note on normal correlation", Biometrika, 1939.
23. PLASMANS J. : "Complementen van wiskundige statistiek", course, University of Antwerp, 1973, 190 p.
24. PROTTER M. H. & MORREY C. B. : Modern mathematical analysis, Addison-Wesley, Reading, Massachusetts, 1966, p. 435-478.
25. SIEGEL S. : Nonparametric statistics for the behavioural sciences, McGraw-Hill, New York, 1956, 311 p.
26. TIGELAAR H. H. : "Spectraalanalyse en stochastische lineaire differentievergelijkingen", Reeks ter Discussie, Katholieke Universiteit Tilburg, 1975, 41 p.
27. VANDENBORRE R. : "Forecasting", course, Katholieke Universiteit Leuven, 1974.
28. VERHELST M. : "Simulation theory and applications", course, Katholieke Universiteit Leuven, 1974.
29. KOOPMANS L. H. : Spectral analysis of time series, Academic Press, New York, 1974, 366 p.
30. van der GENUGTEN B. B. : "Statistics", course, Katholieke Universiteit Tilburg, 1976.

