Article

A Long-Memory Model for Multiple Cycles with an Application to the US Stock Market

by Guglielmo Maria Caporale 1,* and Luis Alberiko Gil-Alana 2,3
1 Department of Economics and Finance, Brunel University of London, London UB8 3PH, UK
2 Department of Economics, Faculty of Economics, University of Navarra, 31009 Pamplona, Spain
3 Facultad de Derecho, Empresa y Gobierno, Universidad Francisco de Vitoria, 28223 Madrid, Spain
* Author to whom correspondence should be addressed.
Submission received: 29 October 2024 / Revised: 4 November 2024 / Accepted: 6 November 2024 / Published: 7 November 2024

Abstract:
This paper proposes a long-memory model that includes multiple cycles in addition to the long-run component. Specifically, instead of a single pole or singularity in the spectrum, it allows for multiple poles and, thus, different cycles with different degrees of persistence. It also incorporates non-linear deterministic structures in the form of Chebyshev polynomials in time. Simulations are carried out to analyze the finite sample properties of the proposed test, which is shown to perform well in the case of a relatively large sample with at least 1000 observations. The model is then applied to weekly data on the S&P 500 from 1 January 1970 to 26 October 2023 as an illustration. The estimation results based on the first differenced logged values (i.e., the returns) point to the existence of three cyclical structures in the series, with lengths of approximately one month, one year, and four years, respectively, and to orders of integration in the range (0, 0.20), which implies stationary long memory in all cases.
MSC:
91G15
JEL Classification:
C22; C15

1. Introduction

It is common in economics and finance, as well as in other disciplines, to decompose a time series into trend, seasonal, and cyclical components [1,2]. For this purpose, various statistical tools have been developed over the years and, most recently, machine learning and other big data techniques have also been used [3]. This paper proposes a new time series model incorporating all these components, which is a special case of the very general and flexible testing framework developed by Robinson (1994) [4]. More specifically, the deterministic part of the model includes a constant, a linear time trend [5,6], and either a cyclical structure or seasonal dummy variables [7]; it also allows for non-linearities in the form of neural networks [8], Fourier functions [9], or Chebyshev polynomials in time, as in [10]. In the stochastic part, the spectral density function is allowed to have multiple poles or singularities and, thus, multiple integer or fractional roots of arbitrary order anywhere on the unit circle in the complex plane [11]. These are related to the previously mentioned components. In particular, the trend component is associated with the long-run or zero frequency, while the others (i.e., seasonal and cyclical) correspond to non-zero frequencies. Simulations are carried out to analyze the finite sample properties of this model, which is then applied to weekly data on the S&P 500 from 1 January 1970 to 26 October 2023 as an illustration.
The purpose of the present paper is to propose a new time series model that incorporates both linear and non-linear structures of a deterministic and stochastic nature in a unified framework, allowing for fractional degrees of differencing at the zero frequency, as is standard in the (long memory) literature, as well as at non-zero (cyclical) frequencies. It is therefore very general, since it encompasses not only classical unit [12,13] or fractional [14,15] roots but also the multiple cyclical structures that might underlie many time series in economics and other fields. The remainder of the paper is structured as follows: Section 2 describes the model; Section 3 presents the test statistic; Section 4 reports Monte Carlo evidence concerning the finite sample performance of the proposed test; Section 5 provides information about the data and discusses the empirical application; and Section 6 offers some concluding remarks.

2. The Econometric Model

Let y(t), t = 1, 2, …, T, be the observed time series. We consider the following model:
y_t = f(z_t; \psi) + x_t, \qquad t = 1, 2, \ldots,
where f can be a linear or a non-linear function of z(t), which is a (k × 1) vector of observable deterministic variables, and ψ is a (k × 1) vector of unknown parameters to be estimated. Thus, if f is linear, it can include, for instance, an intercept and a linear time trend of the form advocated by Bhargava (1986) [5], Schmidt and Phillips (1992) [6], and others in the context of unit roots, i.e.,
f(z_t; \psi) = \alpha + \beta t,
and, if it is non-linear, it can include, for example, Chebyshev polynomials in time of the form:
f(z_t; \psi) = \sum_{i=0}^{m} \theta_i P_{i,T}(t),
where m is the order of the Chebyshev polynomial expansion in time, with P_{i,T}(t) defined as:
P_{0,T}(t) = 1, \qquad P_{i,T}(t) = \sqrt{2}\,\cos\bigl(i\pi(t - 0.5)/T\bigr), \quad i = 1, 2, \ldots,
as described in [16,17]. Bierens and Martins (2010) propose the use of such polynomials in the case of time-varying cointegrating parameters [18]. There are several advantages to using them. First, their orthogonality avoids the problem of near collinearity in the regressor matrix, which arises with standard time polynomials. Second, according to Bierens (1997) [19] and Tomasevic et al. (2009) [20], they can approximate highly non-linear trends with rather low degree polynomials. Finally, they can approximate structural breaks in a much smoother way than the classical structural change models.
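As a purely illustrative aid (not part of the original paper), the following Python sketch constructs the Chebyshev polynomials in time defined above and checks their orthogonality over t = 1, …, T; the function name and the example trend are our own assumptions.

import numpy as np

def chebyshev_time_polynomials(T: int, m: int) -> np.ndarray:
    """T x (m+1) matrix with columns P_{0,T}(t), ..., P_{m,T}(t) for t = 1, ..., T."""
    t = np.arange(1, T + 1)
    P = np.empty((T, m + 1))
    P[:, 0] = 1.0                                    # P_{0,T}(t) = 1
    for i in range(1, m + 1):
        P[:, i] = np.sqrt(2.0) * np.cos(i * np.pi * (t - 0.5) / T)
    return P

T, m = 200, 4
P = chebyshev_time_polynomials(T, m)

# Orthogonality: P'P / T is the identity matrix, so, unlike ordinary time
# polynomials, the regressor matrix does not suffer from near collinearity.
print(np.round(P.T @ P / T, 8))

# Fitting a smooth non-linear trend with a low-degree polynomial (illustrative data).
rng = np.random.default_rng(0)
y = np.log1p(np.linspace(0.0, 10.0, T)) + 0.1 * rng.standard_normal(T)
theta, *_ = np.linalg.lstsq(P, y, rcond=None)
print("estimated theta:", np.round(theta, 3))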
As for the stochastic part of the model, x(t) is specified as:
\prod_{j=1}^{m} \bigl(1 - 2\cos w_j^r\, L + L^2\bigr)^{d_j} x_t = u_t, \qquad t = 1, 2, \ldots,
where w_j^r = 2πr/T, with r = T/j a real scalar value; L is the lag operator (i.e., Lx(t) = x(t − 1)); d_j is a real value corresponding to the order of integration of the j-th cycle, at whose frequency the spectrum explodes (i.e., goes to infinity); m stands for the number of cyclical structures; and u(t) is a short-memory process integrated of order 0, or I(0). Such a process is defined as a covariance stationary one with a spectral density function that is positive and finite across all frequencies in the spectrum. Thus, it could be a white noise process, but it could also display weak autocorrelation, as in a stationary and invertible Auto Regressive Moving Average (ARMA) model. In the present study, in order to avoid overparameterization, we follow the exponential spectral approach of Bloomfield (1973) to model u(t) [21]. This is a non-parametric framework that is implicitly defined in terms of its spectral density function:
f(\lambda; \tau) = \frac{\sigma^2}{2\pi} \exp\Bigl[\, 2 \sum_{i=1}^{n} \tau_i \cos(\lambda i) \Bigr],
where σ2 is the variance of the error term, and n denotes the number of short-run dynamic terms. Its logged form approximates autoregressive processes fairly well. Bloomfield (1973) showed that for a stationary and invertible ARMA (p, q) process of the form [21]:
u(t) = \sum_{r=1}^{p} \phi_r\, u(t - r) + \varepsilon(t) + \sum_{s=1}^{q} \theta_s\, \varepsilon(t - s),
where ε(t) is a white noise process, the spectral density function is given by:
f(\lambda; \tau) = \frac{\sigma^2}{2\pi} \left| \frac{1 + \sum_{s=1}^{q} \theta_s e^{i\lambda s}}{1 - \sum_{r=1}^{p} \phi_r e^{i\lambda r}} \right|^{2}.
According to Bloomfield (1973) [21], the log of the above expression can be well approximated by Equation (6) when p and q are small, and thus it does not require the estimation of as many parameters as an ARMA model. In addition, Bloomfield’s (1973) [21] model is stationary for all values of its parameters (see Gil-Alana, 2004 [22]).
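As an illustration of this approximation (ours, with arbitrary parameter values), the sketch below compares the exponential-spectrum model with the exact spectral density of an AR(1) process; for an AR(1) with coefficient φ, the cosine expansion of the log-spectrum gives τ_i = φ^i / i, so a small number n of terms already provides a close approximation.

import numpy as np

phi, sigma2, n = 0.5, 1.0, 3               # illustrative AR(1) coefficient and truncation order
lam = np.linspace(0.01, np.pi, 500)        # frequencies in (0, pi]

# Exact AR(1) spectral density: f(lambda) = (sigma2 / (2 pi)) |1 - phi e^{i lambda}|^{-2}.
f_ar1 = sigma2 / (2 * np.pi) / np.abs(1.0 - phi * np.exp(1j * lam)) ** 2

# Bloomfield (1973) exponential model with tau_i = phi**i / i (the exact cosine
# expansion of the AR(1) log-spectrum, truncated after n terms).
tau = [phi ** i / i for i in range(1, n + 1)]
f_bloom = sigma2 / (2 * np.pi) * np.exp(2 * sum(t * np.cos(i * lam)
                                                for i, t in enumerate(tau, start=1)))

# With only n = 3 parameters the two log-spectra are already very close.
print("max |log f_AR1 - log f_Bloomfield|:", np.max(np.abs(np.log(f_ar1) - np.log(f_bloom))))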
In the empirical application carried out below, we assume that p = 4 and that w_1^r = 0, so that the first component corresponds to the long run or zero frequency. In this case, the factor (1 − 2 cos w_j^r L + L²)^{d_j} becomes (1 − 2L + L²)^{d_1}, which can be expressed as (1 − L)^{2d_1}, with the pole or singularity in the spectrum going to infinity at the zero frequency [23,24,25]. For the other two cyclical structures, we choose the frequencies on the basis of the values of the periodogram, which is an estimator of the spectral density function.
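The following sketch (ours, with illustrative values) shows how a single stationary cyclical I(d) factor can be simulated from a truncated version of its MA(∞) representation, whose coefficients are Gegenbauer polynomials, and checks numerically that, when the frequency is zero, the filter collapses to (1 − L)^{−2d}, as noted above; function names and truncation choices are our own.

import numpy as np

def gegenbauer_coeffs(d: float, w: float, n: int) -> np.ndarray:
    """Coefficients C_k in (1 - 2 cos(w) L + L^2)^(-d) = sum_k C_k L^k,
    from the standard Gegenbauer recurrence."""
    eta = np.cos(w)
    c = np.zeros(n)
    c[0] = 1.0
    if n > 1:
        c[1] = 2.0 * d * eta
    for k in range(2, n):
        c[k] = (2.0 * eta * (k + d - 1.0) * c[k - 1] - (k + 2.0 * d - 2.0) * c[k - 2]) / k
    return c

rng = np.random.default_rng(0)
T, d, cycle_length = 1000, 0.25, 52        # illustrative: stationary cycle of about 52 periods
w = 2.0 * np.pi / cycle_length             # frequency of the pole in the spectrum
c = gegenbauer_coeffs(d, w, T)

# x_t = sum_k C_k u_{t-k}: truncated MA(inf) representation driven by white noise.
u = rng.standard_normal(2 * T)
x = np.array([c @ u[t - T + 1:t + 1][::-1] for t in range(T, 2 * T)])

# Sanity check: with w = 0 the coefficients equal those of (1 - L)^(-2d),
# i.e., psi_0 = 1 and psi_k = psi_{k-1} (k - 1 + 2d) / k.
k = np.arange(1, 10)
psi = np.cumprod(np.concatenate(([1.0], (k - 1 + 2 * d) / k)))
print(np.allclose(gegenbauer_coeffs(d, 0.0, 10), psi))     # True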

3. The Test Statistic

The test statistic can be easily derived from Robinson (1994) [4] by extending the function f in Equation (1) to the non-linear case and by specifying the errors to follow Bloomfield’s (1973) model [21].
Specifically, we test the null hypothesis:
H_0\!: \; d = d_o,
where d_o is an m-dimensional vector of real numbers, with each element corresponding to the order of integration at a given frequency. Under this null hypothesis, the residuals in (1) and (5) are
r_t = \prod_{j=1}^{m} \bigl(1 - 2\cos w_j^r\, L + L^2\bigr)^{d_{jo}} \hat{x}_t,
where \hat{x}_t denotes the residuals of the linear or non-linear model in (1), and the periodogram of r_t is computed as:
P(\lambda_j) = \Bigl| (2\pi T)^{-1/2} \sum_{t=1}^{T} r_t\, e^{i\lambda_j t} \Bigr|^{2}.
The test statistic takes the form:
NLROB = \frac{T}{\hat{\sigma}^{4}}\, \hat{a}'\, \hat{A}^{-1} \hat{a},
where T is the sample size, and
\hat{a} = \frac{-2\pi}{T} \sum_j{}^{*}\, \psi(\lambda_j)\, g_u(\lambda_j; \hat{\tau})^{-1} P(\lambda_j),
\hat{\sigma}^2 = \sigma^2(\hat{\tau}) = \frac{2\pi}{T} \sum_{j=1}^{T-1} g_u(\lambda_j; \hat{\tau})^{-1} P(\lambda_j),
\hat{A} = \frac{2}{T} \Bigl[ \sum_j{}^{*} \psi(\lambda_j)\psi(\lambda_j)' - \sum_j{}^{*} \psi(\lambda_j)\hat{\xi}(\lambda_j)' \Bigl( \sum_j{}^{*} \hat{\xi}(\lambda_j)\hat{\xi}(\lambda_j)' \Bigr)^{-1} \sum_j{}^{*} \hat{\xi}(\lambda_j)\psi(\lambda_j)' \Bigr];
\psi(\lambda_j) = \log \Bigl| 2 \sin \frac{\lambda_j}{2} \Bigr|, \qquad \hat{\xi}(\lambda_j) = \frac{\partial}{\partial \tau} \log g_u(\lambda_j; \hat{\tau}),
where λ_j = 2πj/T, and the asterisk indicates that the sums are taken over all frequencies that are bounded in the spectrum, i.e., excluding those at which it explodes (goes to infinity). Also, \hat{\tau} is defined as \arg\min_{\tau \in T^*} \sigma^2(\tau), where T^* is a subset of the R^q Euclidean space.
Extending the conditions in Robinson (1994) [4] to the non-linear structure in (1), which satisfies Condition (*) in that paper, it can easily be shown that
NLROB \;\xrightarrow{d}\; \chi^2_m \qquad \text{as } T \to \infty,
where T is the sample size and “→d” stands for convergence in distribution. Thus, unlike other procedures, this is a classical large-sample testing situation. Moreover, the test is the most efficient one in the Pitman sense against local departures from the null: if it is implemented against such local departures, the limit distribution is a non-central χ²_m(v) with non-centrality parameter v, which is optimal under Gaussianity of the error term. Gaussianity itself is not necessary for the implementation of the procedure; only a moment condition of order 2 is required.
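To make the computation concrete, the sketch below (our own simplified version, not the authors’ code) implements the statistic in the simplest special case: m = 1 with the order of integration at the zero frequency, no deterministic terms, and white-noise u(t), so that g_u ≡ 1 and the ξ̂ terms drop out; the full NLROB statistic additionally handles cyclical frequencies, non-linear deterministic terms, and Bloomfield disturbances.

import numpy as np

def frac_diff(y: np.ndarray, d: float) -> np.ndarray:
    """Apply the truncated expansion of (1 - L)^d: pi_0 = 1, pi_k = pi_{k-1} (k - 1 - d) / k."""
    T = len(y)
    k = np.arange(1, T)
    pi = np.concatenate(([1.0], np.cumprod((k - 1 - d) / k)))
    return np.array([pi[:t + 1] @ y[t::-1] for t in range(T)])

def robinson_chi2(y: np.ndarray, d0: float) -> float:
    """Chi-squared(1) statistic for H0: d = d0 at the zero frequency, white-noise errors."""
    r = frac_diff(y, d0)                              # residuals under the null
    T = len(r)
    lam = 2 * np.pi * np.arange(1, T) / T
    # Periodogram P(lambda_j) = |(2 pi T)^(-1/2) sum_t r_t e^{i lambda_j t}|^2.
    P = np.abs(np.fft.fft(r)[1:]) ** 2 / (2 * np.pi * T)
    psi = np.log(np.abs(2 * np.sin(lam / 2)))
    a_hat = (-2 * np.pi / T) * np.sum(psi * P)
    sigma2 = (2 * np.pi / T) * np.sum(P)
    A_hat = (2 / T) * np.sum(psi ** 2)
    return T * a_hat ** 2 / (sigma2 ** 2 * A_hat)     # asymptotically chi^2 with 1 d.o.f.

# Illustration: for a pure random walk, H0: d = 1 should typically not be rejected
# (the 5% critical value of a chi-squared distribution with 1 d.o.f. is about 3.84).
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(2000))
print(robinson_chi2(y, 1.0))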

4. Finite Sample Properties

This section reports on the finite sample performance of the test described above. Specifically, we carry out Monte Carlo simulations to analyze the size and the power of the test against various alternatives. A similar experiment was conducted by Gil-Alana (2001) [26]; however, in that case, m = 1, allowing for only a single cyclical structure. By contrast, in the present study, we assume that m = 3 and that the Data Generating Process (DGP) is characterized by the following orders of integration: d1 = 0.75, d2 = 0.50, and d3 = 0.25; this implies that the first cyclical structure is highly persistent and non-stationary, the second one is on the borderline between the stationary and non-stationary cases, and the third one is stationary. For the length of the cycles, we impose j = 10, 100, and 250 periods with different sample sizes. These values are arbitrary, although the results were found to be robust to the choice of other values. For the alternative hypotheses, we consider values of 0.25, 0.50, and 0.75 for each of the three orders of integration. For j, we choose the same values as in the true model; the size of the test is therefore the rejection frequency reported in the tables for d = (0.75, 0.50, 0.25).
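A possible way of generating series from this DGP (a sketch of ours, not the authors’ code) is to expand each factor (1 − 2 cos w_j L + L²)^{−d_j} into its Gegenbauer coefficients, convolve the three expansions, and apply the resulting truncated MA representation to Gaussian white noise; the truncation scheme and function names below are illustrative assumptions.

import numpy as np
from scipy.special import eval_gegenbauer

def cyclical_ma_coeffs(d: float, periods_per_cycle: float, n: int) -> np.ndarray:
    """MA coefficients of (1 - 2 cos(w) L + L^2)^(-d), i.e., Gegenbauer polynomials
    C_k^(d)(cos w), truncated after n terms, with w = 2 pi / periods_per_cycle."""
    w = 2.0 * np.pi / periods_per_cycle
    return eval_gegenbauer(np.arange(n), d, np.cos(w))

def simulate_dgp(T: int, d: tuple, cycles: tuple, rng) -> np.ndarray:
    """Simulate the three-factor DGP with orders d and cycle lengths `cycles` (in periods)."""
    coeffs = np.array([1.0])
    for dj, cj in zip(d, cycles):
        coeffs = np.convolve(coeffs, cyclical_ma_coeffs(dj, cj, T))[:T]
    u = rng.standard_normal(T)
    # Truncated (type II style) expansion: pre-sample innovations are set to zero.
    return np.array([coeffs[:t + 1] @ u[t::-1] for t in range(T)])

rng = np.random.default_rng(123)
x = simulate_dgp(T=1000, d=(0.75, 0.50, 0.25), cycles=(10, 100, 250), rng=rng)
print(x[:5])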
Table 1 reports the results for T = 1000. It can be seen that the size of the test is too large, with a rejection frequency of 0.119 under the null for a nominal size of 0.050, while the rejection probabilities under the alternatives are high (above 0.700) in all cases. Table 2 shows that with a bigger sample size, i.e., T = 2000, the empirical size becomes closer to the nominal one (0.094), and the rejection probabilities under the alternatives are now higher than 0.800 in all cases, reaching 1 in 7 out of the 27 cases. Finally, Table 3 displays the results for T = 3000; in this case, the empirical size is 0.066, the rejection probabilities are equal to 1 in 12 cases, and they are higher than 0.900 in all cases.

5. An Empirical Application

We use weekly data on the S&P 500 closing prices from 1 January 1970 to 26 October 2023. The source of the data is Yahoo Finance (https://finance.yahoo.com/quote/%5EGSPC/history/, accessed on 28 October 2024). Figure 1 displays the original data and their log transformation in the upper panels, and the corresponding periodograms in the lower ones. Both periodograms exhibit a very large value at the smallest (zero) frequency, which might indicate the need for (fractional or integer) differencing of the data. Note that short memory (d = 0) implies that the spectral density is positive and bounded at the zero frequency, so one would expect a small value at this frequency in the periodogram of such a series.
Figure 2 displays the plots of the first differenced data. Much higher volatility is observed in the second half of the sample when using the original data, but not in the case of the logged ones. Therefore, we use the latter series for the empirical application below. The periodograms display some peaks at non-zero frequencies, which may suggest the existence of multiple cyclical structures in the data.
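A simple way of locating such peaks (a sketch of ours; the actual data download is omitted and replaced by a placeholder series, so the numbers printed are purely illustrative) is to compute the periodogram of the returns at the Fourier frequencies λ_j = 2πj/T and rank the discrete frequencies j by the size of P(λ_j):

import numpy as np

def top_periodogram_frequencies(x: np.ndarray, n_top: int = 5, j_max=None):
    """Discrete frequencies j with the largest periodogram values
    P(lambda_j) = |(2 pi T)^(-1/2) sum_t x_t e^{i lambda_j t}|^2, lambda_j = 2 pi j / T."""
    T = len(x)
    P = np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * T)
    j = np.arange(1, T // 2)                    # positive frequencies below the Nyquist frequency
    if j_max is not None:
        j = j[:j_max]
    top = j[np.argsort(P[j])[::-1][:n_top]]
    return [(int(jj), T / jj, P[jj]) for jj in top]   # (j, periods per cycle, periodogram value)

rng = np.random.default_rng(1)
returns = 0.02 * rng.standard_normal(2800)      # placeholder for the actual weekly log-returns
for j, length, value in top_periodogram_frequencies(returns, n_top=5):
    print(f"j = {j:4d}   cycle length = {length:7.2f} weeks   periodogram = {value:.6f}")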
Initially, we set m = 5 and obtained j1 = 1, as one would have expected in view of the periodograms displayed in Figure 1. The estimated value of d1 is then 1, which is consistent with the results of standard unit root tests [12,13,27,28,29]. We also allowed for fractional integration by estimating the equation (1 − L)^d x(t) = u(t) instead of imposing (4). The results support the unit root null hypothesis, with estimates of d equal to 0.981 and 0.992 for the original and log-transformed data, respectively, and corresponding 95% confidence intervals of (0.972, 1.009) and (0.983, 1.017). Next, we focus on the first differenced log values, in this case setting m = 4 and finding an order of integration not significantly different from zero for one of the components; finally, we set m = 3.
Note that standard unit roots are a special case of (5) with m = 1 and w = 0, so that the model becomes:
(1 - 2L + L^2)^{d_1} x_t = u_t, \qquad t = 1, 2, \ldots,
and, denoting d = 2d_1, this corresponds to the standard I(d) model at the long-run or zero frequency, with a unit root if d = 1:
(1 - L)^{d} x_t = u_t, \qquad t = 1, 2, \ldots.
To allow for some degree of generality, we assume that x(t) represents the errors in a regression model with an intercept and a time trend, i.e.,
y_t = \alpha + \beta t + x_t,
and make, in turn, two alternative assumptions about u(t): that it is a white noise process, and that it follows the exponential spectral model of Bloomfield (1973) [21].
We start with the linear case. Table 4 displays the estimates of d along with their associated 95% confidence bands, under the three standard specifications with: (i) no deterministic terms (α = β = 0 in (17)); (ii) an intercept only (β = 0); and (iii) a constant and a linear time trend. Our preferred model is selected on the basis of the statistical significance of the estimated coefficients (shown in bold in the table). We report the results for both the original and log-transformed data. The coefficient on the linear trend is significant in three out of the four cases (the exception is represented by the original data with Bloomfield disturbances), and although d is smaller than 1 in all four cases, the unit root null hypothesis cannot be rejected for any of them.
Table 5 concerns the non-linear model. It can be seen that the non-linear terms are significant in some cases (especially with white noise errors) and that, again, the estimates of d are within the unit root interval, which implies that first differencing is required under both the linear and the non-linear specifications.
As mentioned before, we estimate the model given by Equations (1) and (4) for the return series, initially setting m = 4. In this case, the order of integration of one of the cyclical structures is found to be equal to zero, so we then set m = 3, selecting the frequencies on the basis of the largest periodogram values (Table 6 and Table 7). The results are very similar for the linear (Table 8) and non-linear (Table 9) models; in both cases, the orders of integration of all three components are significant. Applying Box–Ljung tests to the estimated residuals of the models presented in the paper, we find no evidence of remaining autocorrelation in the data.

6. Conclusions

This paper proposes a long-range dependence framework allowing for multiple cycles, which is then applied to analyze the behavior of the weekly S&P 500. Specifically, instead of a single pole or singularity in the spectrum, as in standard models, our model allows for multiple poles, resulting in different cycles with different degrees of persistence. It also incorporates non-linear deterministic structures in the form of Chebyshev polynomials in time. Monte Carlo simulations show that in finite samples the proposed test behaves well if the sample size is relatively large, namely if the number of observations is at least 1000.
The empirical application using weekly data on the S&P 500 provides evidence of a large value in the periodogram at zero frequency. Unit and fractional root tests also suggest the need to take first differences. The estimation results based on the first differenced logged values (i.e., the returns) point to the existence of three cyclical structures in the series with lengths of approximately one month, one year, and four years, respectively, and to orders of integration in the range (0, 0.20), which implies stationary long memory in all cases.
Future research could develop the framework presented in this paper in several ways. For instance, the number of structures m could be endogenized; estimation methods such as those proposed by Giraitis and Leipus (1995), Woodward et al. (1998), Ferrara and Guegan (2001), and Sadek and Khotanzad (2004) could be extended to allow for non-linear trends, and breaks in the data could also be modeled [30,31,32,33]. Moreover, these methods could also be used to examine the stochastic behavior of a wide range of macro variables, such as GDP, inflation, or unemployment.

Author Contributions

Conceptualization, G.M.C. and L.A.G.-A.; data curation, L.A.G.-A.; writing—original draft, L.A.G.-A.; writing—review and editing, G.M.C.; supervision, G.M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the MINEIC-AEI-FEDER ECO2017-85503-R project from ‘Ministerio de Economía, Industria y Competitividad’ (MINEIC), ‘Agencia Estatal de Investigación’ (AEI) Spain and ‘Fondo Europeo de Desarrollo Regional’ (FEDER), and also from internal projects of the Universidad Francisco de Vitoria.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

Luis A. Gil-Alana gratefully acknowledges financial support from the MINEIC-AEI-FEDER ECO2017-85503-R project from ‘Ministerio de Economía, Industria y Competitividad’ (MINEIC), ‘Agencia Estatal de Investigación’ (AEI) Spain and ‘Fondo Europeo de Desarrollo Regional’ (FEDER), and also from internal projects of the Universidad Francisco de Vitoria. Comments from the Editor and two anonymous reviewers are gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nelson, C.R. Trend/Cycle Decomposition. In The New Palgrave Dictionary of Economics; Palgrave Macmillan: London, UK, 2008.
2. Persons, W.M. Indices of business conditions. Rev. Econ. Stat. 1919, 1, 5–107.
3. Almeida, A.; Brás, S.; Sargento, S.; Pinto, F.C. Time series big data: A survey on data stream frameworks, analysis and algorithms. J. Big Data 2023, 10, 83.
4. Robinson, P.M. Efficient tests of nonstationary hypotheses. J. Am. Stat. Assoc. 1994, 89, 1420–1437.
5. Bhargava, A. On the theory of testing for unit roots in observed time series. Rev. Econ. Stud. 1986, 52, 369–384.
6. Schmidt, P.; Phillips, P.C.B. LM tests for a unit root in the presence of deterministic trends. Oxf. Bull. Econ. Stat. 1992, 54, 257–287.
7. Conde-Ruiz, J.I.; García, M.; Puch, L.A.; Ruiz, J. Calendar effects in daily aggregate employment creation and destruction in Spain. SERIEs 2019, 10, 25–63.
8. Yaya, O.; Ogbonna, A.; Furuoka, F.; Gil-Alana, L.A. A new unit root analysis for testing hysteresis in unemployment. Oxf. Bull. Econ. Stat. 2021, 83, 960–981.
9. Gil-Alana, L.A.; Yaya, O. Testing fractional unit roots with non-linear smooth break approximations using Fourier functions. J. Appl. Stat. 2021, 48, 2542–2559.
10. Cuestas, J.C.; Gil-Alana, L.A. A non-linear approach with long range dependence based on Chebyshev polynomials. Stud. Nonlinear Dyn. Econom. 2016, 20, 57–70.
11. Koumané, E.F.; Hili, O. A new time domain estimation of k-factors GARMA processes. Comptes Rendus Math. 2012, 350, 925–928.
12. Dickey, D.A.; Fuller, W.A. Distributions of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc. 1979, 74, 427–431.
13. Elliott, G.; Rothenberg, T.J.; Stock, J.H. Efficient tests for an autoregressive unit root. Econometrica 1996, 64, 813–836.
14. Geweke, J.; Porter-Hudak, S. The estimation and application of long memory time series models. J. Time Ser. Anal. 1983, 4, 221–238.
15. Sowell, F. Maximum likelihood estimation of stationary univariate fractionally integrated time series models. J. Econom. 1992, 53, 165–188.
16. Hamming, R.W. Numerical Methods for Scientists and Engineers; McGraw-Hill: New York, NY, USA, 1973.
17. Smyth, G.K. Polynomial Approximation; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 1998.
18. Bierens, H.J.; Martins, L.F. Time varying cointegration. Econom. Theory 2010, 26, 1453–1490.
19. Bierens, H.J. Testing the unit root with drift hypothesis against trend stationarity with an application to the US price level and interest rate. J. Econom. 1997, 81, 29–64.
20. Tomasevic, N.; Tomasevic, M.; Stanivuk, T. Regression analysis and approximation by means of Chebyshev polynomials. Informatologia 2009, 42, 166–172.
21. Bloomfield, P. An exponential model for the spectrum of a scalar time series. Biometrika 1973, 60, 217–226.
22. Gil-Alana, L.A. The use of the Bloomfield model as an approximation to ARMA processes in the context of fractional integration. Math. Comput. Model. 2004, 39, 429–436.
23. Granger, C.W.J. Long memory relationships and the aggregation of dynamic models. J. Econom. 1980, 14, 227–238.
24. Granger, C.W.J.; Joyeux, R. An introduction to long memory time series models and fractional differencing. J. Time Ser. Anal. 1980, 1, 15–29.
25. Hosking, J.R.M. Fractional differencing. Biometrika 1981, 68, 165–176.
26. Gil-Alana, L.A. Testing stochastic cycles in macroeconomic time series. J. Time Ser. Anal. 2001, 22, 411–430.
27. Kwiatkowski, D.; Phillips, P.C.B.; Schmidt, P.; Shin, Y. Testing the null hypothesis of stationarity against the alternative of a unit root. J. Econom. 1992, 54, 159–178.
28. Ng, S.; Perron, P. Lag length selection and the construction of unit root tests with good size and power. Econometrica 2001, 69, 1519–1554.
29. Phillips, P.C.B.; Perron, P. Testing for a unit root in time series regression. Biometrika 1988, 75, 335–346.
30. Ferrara, L.; Guegan, D. Forecasting with k-factor Gegenbauer processes: Theory and applications. J. Forecast. 2001, 20, 581–601.
31. Giraitis, L.; Leipus, R. A generalized fractionally differencing approach in long-memory modelling. Lith. Math. J. 1995, 35, 65–81.
32. Sadek, N.; Khotanzad, A. K-factor Gegenbauer ARMA process for network traffic simulation. Comput. Commun. 2004, 2, 963–968.
33. Woodward, W.A.; Cheng, Q.C.; Gray, H.L. A k-factor GARMA long-memory model. J. Time Ser. Anal. 1998, 19, 485–504.
Figure 1. Data in levels and periodograms.
Figure 2. Data in first differences and periodograms.
Table 1. Rejection frequencies of the test for a sample size T = 1000.

d1     d2     d3     Rejection Freq.
0.25   0.25   0.25   0.889
0.25   0.25   0.50   0.947
0.25   0.25   0.75   1.000
0.25   0.50   0.25   0.799
0.25   0.50   0.50   0.845
0.25   0.50   0.75   0.945
0.25   0.75   0.25   0.904
0.25   0.75   0.50   0.967
0.25   0.75   0.75   1.000
0.50   0.25   0.25   0.866
0.50   0.25   0.50   0.923
0.50   0.25   0.75   0.988
0.50   0.50   0.25   0.777
0.50   0.50   0.50   0.814
0.50   0.50   0.75   0.908
0.50   0.75   0.25   0.890
0.50   0.75   0.50   0.923
0.50   0.75   0.75   0.998
0.75   0.25   0.25   0.815
0.75   0.25   0.50   0.901
0.75   0.25   0.75   0.945
0.75   0.50   0.25   0.119
0.75   0.50   0.50   0.807
0.75   0.50   0.75   0.833
0.75   0.75   0.25   0.812
0.75   0.75   0.50   0.865
0.75   0.75   0.75   0.939
The entry for d = (0.75, 0.50, 0.25), i.e., the true values of the DGP, corresponds to the size of the test. Nominal size: 5%.
Table 2. Rejection frequencies of the test for a sample size T = 2000.

d1     d2     d3     Rejection Freq.
0.25   0.25   0.25   0.911
0.25   0.25   0.50   1.000
0.25   0.25   0.75   1.000
0.25   0.50   0.25   0.801
0.25   0.50   0.50   0.906
0.25   0.50   0.75   1.000
0.25   0.75   0.25   0.998
0.25   0.75   0.50   1.000
0.25   0.75   0.75   1.000
0.50   0.25   0.25   0.899
0.50   0.25   0.50   0.978
0.50   0.25   0.75   1.000
0.50   0.50   0.25   0.839
0.50   0.50   0.50   0.848
0.50   0.50   0.75   0.922
0.50   0.75   0.25   0.955
0.50   0.75   0.50   0.988
0.50   0.75   0.75   1.000
0.75   0.25   0.25   0.890
0.75   0.25   0.50   0.977
0.75   0.25   0.75   0.994
0.75   0.50   0.25   0.094
0.75   0.50   0.50   0.883
0.75   0.50   0.75   0.847
0.75   0.75   0.25   0.890
0.75   0.75   0.50   0.914
0.75   0.75   0.75   0.978
The entry for d = (0.75, 0.50, 0.25), i.e., the true values of the DGP, corresponds to the size of the test. Nominal size: 5%.
Table 3. Rejection frequencies of the test for a sample size T = 3000.

d1     d2     d3     Rejection Freq.
0.25   0.25   0.25   0.989
0.25   0.25   0.50   1.000
0.25   0.25   0.75   1.000
0.25   0.50   0.25   0.991
0.25   0.50   0.50   0.911
0.25   0.50   0.75   1.000
0.25   0.75   0.25   1.000
0.25   0.75   0.50   1.000
0.25   0.75   0.75   1.000
0.50   0.25   0.25   0.939
0.50   0.25   0.50   1.000
0.50   0.25   0.75   1.000
0.50   0.50   0.25   0.965
0.50   0.50   0.50   0.934
0.50   0.50   0.75   0.980
0.50   0.75   0.25   0.999
0.50   0.75   0.50   1.000
0.50   0.75   0.75   1.000
0.75   0.25   0.25   0.956
0.75   0.25   0.50   1.000
0.75   0.25   0.75   0.999
0.75   0.50   0.25   0.066
0.75   0.50   0.50   0.909
0.75   0.50   0.75   0.917
0.75   0.75   0.25   0.943
0.75   0.75   0.50   0.993
0.75   0.75   0.75   1.000
The entry for d = (0.75, 0.50, 0.25), i.e., the true values of the DGP, corresponds to the size of the test. Nominal size: 5%.
Table 4. Estimates of d in the linear model given by Equation (1).

(i) Results Based on White Noise Errors
                 No Terms             With an Intercept    With a Time Trend
Original         0.97 (0.94, 1.00)    0.97 (0.94, 1.00)    0.97 (0.94, 1.00)
Logged values    0.99 (0.96, 1.02)    0.98 (0.96, 1.01)    0.98 (0.96, 1.01)

(ii) Results Based on Autocorrelated (Bloomfield) Errors
                 No Terms             With an Intercept    With a Time Trend
Original         0.95 (0.92, 0.99)    0.95 (0.92, 1.00)    0.95 (0.92, 1.00)
Logged values    0.97 (0.93, 1.02)    0.99 (0.95, 1.04)    0.99 (0.95, 1.04)

The values in parentheses are the 95% confidence intervals. Those in bold correspond to the models selected on the basis of the statistical significance of the deterministic terms.
Table 5. Estimates of d in the non-linear model given by Equation (2).

(i) Results Based on White Noise Errors
           d                   θ1                θ2                 θ3                θ4
Original   0.97 (0.93, 1.01)   1811.38 (1.92)    −1381.76 (−2.44)   484.63 (1.67)     −335.25 (−1.72)
Logged     1.01 (0.98, 1.04)   5.400 (6.81)      −0.058 (−0.12)     −0.575 (−2.39)    1.167 (7.30)

(ii) Results Based on Autocorrelated (Bloomfield) Errors
           d                   θ1                θ2                 θ3                θ4
Original   0.96 (0.94, 1.02)   1043.07 (1.59)    −702.52 (−1.93)    307.28 (1.48)     −185.24 (−1.32)
Logged     1.00 (0.97, 1.04)   0.915 (6.81)      −3.429 (−4.32)     1.096 (2.77)      0.047 (0.17)

The values in parentheses in column 2 are the 95% confidence intervals. Those in bold correspond to the significant Chebyshev polynomials.
Table 6. Frequencies with the highest values at the periodograms, with j = 1, …, 1000.

        (1 − L) Data                             (1 − L) Log Data
j      T/j      Value at Periodogram     j      T/j      Value at Periodogram
794    3.53     1448.09                  871    3.22     0.000642
998    2.81     1082.75                  607    4.62     0.000520
274    10.24    1076.58                  242    11.60    0.000493
170    16.51    1013.94                  920    3.05     0.000458
814    3.45     990.76                   679    4.13     0.000454
608    4.61     916.17                   170    16.51    0.000639
j indicates the discrete frequency in the periodogram, T/j indicates the number of periods per cycle, and the third and sixth columns report the corresponding value of the periodogram.
Table 7. Frequencies with the highest values at the periodograms, with j = 1, …, 120.

        (1 − L) Data                             (1 − L) Log Data
j      T/j       Value at Periodogram    j      T/j       Value at Periodogram
75     37.44     575.65                  110    25.52     0.000248
15     187.20    485.48                  16     175.50    0.000247
107    26.25     469.23                  50     56.16     0.000239
110    25.52     465.70                  70     40.11     0.000226
j indicates the discrete frequency in the periodogram, T/j indicates the number of periods per cycle, and the third and sixth columns report the corresponding value of the periodogram.
Table 8. Frequencies and orders of integration in a linear model.

              j1           j2            j3            d1                    d2                    d3
White noise   602 (4.66)   240 (11.70)   14 (200.57)   0.09 (0.02, 1.17)     0.06 (0.01, 0.09)     0.13 (0.11, 0.14)
Bloomfield    601 (4.67)   241 (11.65)   14 (200.57)   0.06 (−0.01, 1.17)    0.07 (0.02, 0.10)     0.05 (−0.02, 0.10)

The values in parentheses in columns 2, 3, and 4 are the number of periods per cycle. Those in parentheses in columns 5, 6, and 7 are the 95% confidence bands for the orders of integration at the respective frequencies.
Table 9. Frequencies and orders of integration in a non-linear model.

              j1           j2            j3            d1                    d2                    d3
White noise   600 (4.69)   236 (11.89)   13 (216.00)   0.07 (−0.01, 0.14)    0.04 (−0.05, 0.08)    0.12 (0.05, 0.16)
Bloomfield    609 (4.61)   238 (11.78)   14 (200.57)   0.05 (−0.02, 0.19)    0.03 (−0.03, 0.07)    0.11 (0.04, 0.17)

The values in parentheses in columns 2, 3, and 4 are the number of periods per cycle. Those in parentheses in columns 5, 6, and 7 are the 95% confidence bands for the orders of integration at the respective frequencies.
