Time Series Analysis with Long Memory in View

About this ebook

Provides a simple exposition of the basic time series material, and insights into underlying technical aspects and methods of proof 

Long memory time series are characterized by a strong dependence between distant events. This book introduces readers to the theory and foundations of univariate time series analysis with a focus on long memory and fractional integration, which are embedded into the general framework. It presents the general theory of time series, including some issues that are not treated in other books on time series, such as ergodicity, persistence versus memory, asymptotic properties of the periodogram, and Whittle estimation.  Further chapters address the general functional central limit theory, parametric and semiparametric estimation of the long memory parameter, and locally optimal tests.

Intuitive and easy to read, Time Series Analysis with Long Memory in View offers chapters that cover: Stationary Processes; Moving Averages and Linear Processes; Frequency Domain Analysis; Differencing and Integration; Fractionally Integrated Processes; Sample Means; Parametric Estimators; Semiparametric Estimators; and Testing. It also discusses further topics. This book: 

  • Offers beginning-of-chapter examples as well as end-of-chapter technical arguments and proofs
  • Contains many new results on long memory processes that have not appeared in previous textbooks
  • Takes a basic mathematics (calculus) approach to the topic of time series analysis with long memory
  • Contains 25 illustrative figures as well as lists of notations and acronyms

Time Series Analysis with Long Memory in View is an ideal text for first-year PhD students, researchers, and practitioners in statistics, econometrics, and any application area that uses time series over a long period. It would also benefit undergraduates and practitioners in those areas who require a rigorous introduction to time series analysis.

Language: English
Publisher: Wiley
Release date: September 7, 2018
ISBN: 9781119470427

    Time Series Analysis with Long Memory in View - Uwe Hassler

    Dedication

    PROMETHEUS

    "Then I invented arithmetic for them,

    the most ingenious acquired skill,

    and joining letters to write down words,

    so they could store all things in Memory,

    the working mother of the Muses' arts."

    AESCHYLUS, Prometheus Bound

    Quoted from the translation by Ian Johnston, Richer Resources Publications, 2012

    List of Figures

    Figure 1.1 Annual minimal water levels of the Nile river.

    Figure 1.2 Monthly opinion poll in England, 1960–1996.

    Figure 1.3 Monthly US inflation, 1966–2008.

    Figure 1.4 Daily realized volatility, 1993–2007.

    Figure 1.5 Monthly unemployment rate, 1972–2008.

    Figure 2.1 Processes with (top to bottom) (a) moderate persistence, (b) antipersistence, and (c) strong persistence.

    Figure 3.1 White noise and differences thereof.

    Figure 3.2 Antipersistent process.

    Figure 3.3 Antipersistent process.

    Figure 3.4 Long memory under antipersistence.

    Figure 3.5 Long memory under moderate persistence.

    Figure 3.6 Long memory under strong persistence.

    Figure 3.7 Long memory process under strong persistence.

    Figure 4.1 AR(1) and EXP(1) processes.

    Figure 6.1 Simulated series.

    Figure 6.2 Simulated series and their impulse responses.

    Figure 6.3 FIN(d) processes in the frequency and time domain.

    Figure 6.4 FIMA processes.

    Figure 6.5 Spectra of ARFI processes.

    Figure 7.1.

    Figure 7.2 Autocorrelation bias.

    Figure 7.3 Autocorrelation bias.

    Figure 7.4 Quantiles and averages for FIN(d) processes.

    Preface

    Scope of the Book

    Since the book by Box and Jenkins (1970), autoregressive moving average (ARMA) models integrated of order d are a standard tool for time series analysis, where typically d = 0 or d = 1. The integrated ARMA (ARIMA) model of order d means that a time series has to be differenced d times in order to obtain a stationary and invertible ARMA representation. The papers by Granger and Joyeux (1980) and Hosking (1981) extended the ARIMA model with integer d to the so‐called fractionally integrated model, where d takes on noninteger values, often restricted to 0 < d < 1. In particular, the case of 0 < d < 1/2 corresponds to a stationary model with long memory, where the latter means that the autocorrelations die out so slowly that they are not absolutely summable. For 1/2 ≤ d < 1, the fractionally integrated model bridges the gap from stationarity to the so‐called unit root behavior (d = 1), where past shocks have a permanent effect on the present, and values of d > 1 allow for even more extreme persistence.
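
    As a minimal illustration of fractional differencing, the weights of the filter (1 − L)^d follow the binomial recursion π_0 = 1, π_j = π_{j−1}(j − 1 − d)/j. The following Python sketch, which is an illustration of this standard expansion rather than code from the book (the function names are mine), computes the weights and applies the truncated filter; for d = 1 it reproduces ordinary first differences.

        import numpy as np

        def frac_diff_weights(d: float, n: int) -> np.ndarray:
            """First n coefficients pi_j of the binomial expansion
            (1 - L)^d = sum_{j>=0} pi_j L^j, via pi_j = pi_{j-1} * (j - 1 - d) / j."""
            pi = np.empty(n)
            pi[0] = 1.0
            for j in range(1, n):
                pi[j] = pi[j - 1] * (j - 1 - d) / j
            return pi

        def frac_diff(x: np.ndarray, d: float) -> np.ndarray:
            """Truncated fractional difference of x: y_t = sum_{j=0}^{t} pi_j x_{t-j}."""
            pi = frac_diff_weights(d, len(x))
            return np.array([pi[: t + 1] @ x[t::-1] for t in range(len(x))])

        # Sanity check: d = 1 reproduces ordinary first differences (after the start value).
        x = np.cumsum(np.random.default_rng(0).standard_normal(100))
        print(np.allclose(frac_diff(x, 1.0)[1:], np.diff(x)))  # True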

    This book grew out of lecture notes from which I taught PhD courses on time series analysis and in particular on time series with long memory. Long memory and fractional integration have become key concepts in time series analysis over the last decades. For instance, the updated edition of Box and Jenkins (1970), i.e. Box et al. (2015), contains a section on long memory and fractional integration, and so do Kirchgässner et al. (2013), Pesaran (2015), and Palma (2016). Also, earlier textbooks like Brockwell and Davis (1991, Section 13.2) and Fuller (1996, Section 2.11) include short sections on this topic. In contrast to these books on general time series analysis containing only short digressions into the realm of long memory, there are nowadays specialized monographs dedicated to this topic exclusively, most recently by Giraitis et al. (2012) and Beran et al. (2013); see also the earlier books by Beran (1994) and Palma (2007). The approach of the present book differs from both routes, from the general interest track and from the specialized long memory track. I rather attempt to introduce the theory of univariate time series analysis, and the foundations thereof, in such a way that long memory and fractional integration arise as a special case, naturally embedded into the general theory. This is reflected by the title: Time Series Analysis with Long Memory in View. This view is largely directed by the author's research agenda in this field over the last 25 years.

    Acknowledgment

    Twenty‐five years ago I wrote my doctoral thesis on time series with long memory under the supervision of Professor Wolters at the Freie Universität Berlin. Jürgen Wolters passed away in November 2015. I take this opportunity to commemorate his enthusiasm, generosity, and open‐mindedness from which I profited so much not only during my doctoral studies but also later on as his coauthor. Since my thesis, I have written a sequence of papers on long memory. I am indebted to many anonymous referees for writing, in many cases, critical and constructive reports on my papers before publication. Most papers were written with coauthors. I thank them for sharing their knowledge and endurance with me. To all of them I owe insights that influenced my research agenda and hence this book. In particular, I wish to mention Matei Demetrescu and Mehdi Hosseinkouchack with whom the collaboration was especially fruitful. The intense discussions we had on a daily basis when they held postdoc positions at Goethe University Frankfurt shaped my view not only on how to address long memory but also on time series analysis in general. Christoph Hanck, Paulo Rodrigues, and Verena Werkmann have proofread an earlier draft of this book, and their many comments and corrections are gratefully acknowledged. Finally, I am grateful to the Volkswagen Stiftung for financing a year of sabbatical leave in 2014/2015 with an opus magnum grant; without this support it would not have been possible to write this book.

    October 2017

    Uwe Hassler

    List of Notation

    ℂ: set of complex numbers
    ℕ: set of natural numbers
    ℕ₀: set of natural numbers including 0
    ℝ: set of real numbers
    ℤ: set of integers
    ⌊x⌋: largest integer smaller than or equal to x
    ln x: natural logarithm of x
    I_n: identity matrix of dimension n
    γ: Euler's constant
    P: probability
    E: expectation operator
    Var: variance operator
    Cov: covariance operator
    γ(h): autocovariance at lag h
    ρ(h): autocorrelation at lag h
    ω²: long‐run variance
    L: lag operator
    Δ: difference operator
    Exp(λ): exponential distribution with parameter λ
    N(μ, σ²): normal distribution with mean μ and variance σ²
    χ²(n): chi‐square distribution with n degrees of freedom
    →: convergence
    → a.s.: almost sure convergence
    → m.s.: convergence in mean square
    → p: convergence in probability
    → d: convergence in distribution
    ⇒: weak convergence
    ≈: approximately equal
    :=: is defined to equal
    ⟹: implies
    ⟺: equivalence
    a_T ~ b_T: the ratio converges to 1
    x_t ~ I(d): x_t is integrated of order d
    Z ~ N(0, 1): Z follows a standard normal distribution

    Acronyms

    AIC Akaike information criterion
    AR autoregressive
    ARCH autoregressive conditional heteroskedasticity
    ARFIMA autoregressive fractionally integrated moving average
    ARMA autoregressive moving average
    BIC Bayesian information criterion
    CIR cumulated impulse response
    CLT central limit theorem
    CMT continuous mapping theorem
    CSS conditional sum of squares
    DCT dominated convergence theorem
    DFT discrete Fourier transform
    EXP exponential model
    fBm fractional Brownian motion
    FCLT functional central limit theorem
    FEXP fractional EXP
    FIN fractionally integrated noise
    GARCH generalized autoregressive conditional heteroskedasticity
    LLN law of large numbers
    LM Lagrange multiplier
    MA moving average
    MAC memory and autocorrelation consistent
    MDS martingale difference sequence
    ML maximum likelihood
    MSE mean squared error
    OLS ordinary least squares
    WLLN weak law of large numbers

    1

    Introduction

    1.1 Empirical Examples

    Figure 1.1 displays 663 annual observations of minimal water levels of the Nile river. This historical data is from Beran (1994, Sect. 12.2) and ranges from the year 622 until 1284. The second panel contains the sample autocorrelations. The maximum value is not particularly large, but the autocorrelogram dies out only very slowly, with autocorrelations at long lags still being significantly positive. Such a slowly declining autocorrelogram is characteristic of what we will define as long memory or strong persistence. It reflects that the series exhibits very persistent behavior in that we observe very long cyclical movements or (reversing) trends. Note, e.g. that from the year 737 until 805, there are only three data points above the sample average (=11.48), i.e. there are seven decades of data below the average. Then the series moves above the average for a couple of years, only to swing down below the sample mean for another 20 years from the year 826 on. Similarly, there is a long upward trend from 1060 on until about 1125, followed again by a long‐lasting decline. Such irregular cycles or trends due to long‐range dependence, or persistence, were first discovered and discussed by Hurst, a British engineer who worked as a hydrologist on the Nile river; see in particular Hurst (1951). Mandelbrot and Wallis (1968) coined the term Joseph effect for such a feature; see also Mandelbrot (1969). This alludes to the biblical seven years of great abundance followed by seven years of famine, only that the cycles in Figure 1.1 do not have a period of seven years, not even a constant period.


    Figure 1.1 Annual minimal water levels of the Nile river.
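
    To see quantitatively what "dying out very slowly" means, one can compare the hyperbolic autocorrelation decay of fractionally integrated noise with the geometric decay of a short memory process matched at the first lag. The sketch below uses Hosking's (1981) closed form ρ(h) = Γ(h + d)Γ(1 − d)/(Γ(h − d + 1)Γ(d)) for FIN(d); it is an illustration under the assumed value d = 0.45, not a computation from the Nile data.

        from math import exp, lgamma

        def fin_acf(d: float, h: int) -> float:
            """Autocorrelation at lag h of fractionally integrated noise FIN(d),
            0 < d < 1/2: rho(h) = Gamma(h+d)Gamma(1-d) / (Gamma(h-d+1)Gamma(d))."""
            if h == 0:
                return 1.0
            return exp(lgamma(h + d) - lgamma(h - d + 1) + lgamma(1 - d) - lgamma(d))

        d = 0.45
        phi = fin_acf(d, 1)  # AR(1) coefficient matched to the same first autocorrelation
        for h in (1, 10, 50, 200):
            # hyperbolic decay rho(h) ~ c * h^(2d-1) vs. geometric decay phi^h
            print(h, round(fin_acf(d, h), 3), round(phi ** h, 6))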

    Long memory in the sense of strong temporal dependence, as it is obvious in Figure 1.1, has been reported in many fields of science. Hipel and McLeod (1994, Section 11.5) detected long memory in hydrological or meteorological series such as annual average rainfall, temperature, and again river flow data; see also Montanari (2003) for a survey. A further technical area beyond geophysics with long memory time series is data network traffic in computing; see Willinger et al. (2003).

    The second data set that we look into is from political science. Let x_t denote the poll data on partisanship, i.e. the voting intention measured by monthly opinion polls in England. More precisely, x_t is the percentage of people supporting the Labour Party. The sample ranges from September 1960 until October 1996 and has been analyzed by Byers et al. (1997). ¹ Figure 1.2 contains the logit transformation of this poll data,

        y_t = ln( x_t / (100 − x_t) ),

    such that −∞ < y_t < ∞ for 0 < x_t < 100; here ln stands for the natural logarithm. We observe long‐lasting upswings followed by downswings, amounting to a pseudocyclical pattern or reversing trends. This is well reflected and quantified by the sample autocorrelations in the lower panel, which decrease only quite slowly. Independently of Byers et al. (1997), Box‐Steffensmeier and Smith (1996) detected long memory in US opinion poll data on partisanship. Long memory in political popularity has been confirmed in a sequence of papers; see Byers et al. (2000, 2007), and Dolado et al. (2003); see also Byers et al. (2002) for theoretical underpinning of long memory in political popularity. Further evidence on long memory in political science has been presented by Box‐Steffensmeier and Tomlinson (2000); see also the special issue of Electoral Studies edited by Lebo and Clarke (2000).
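
    As a small sketch of the transformation just described (with hypothetical numbers, assuming the poll series is measured in percent as reconstructed above):

        import numpy as np

        def logit(p: np.ndarray) -> np.ndarray:
            """Map poll percentages in (0, 100) to the whole real line."""
            return np.log(p / (100.0 - p))

        def inv_logit(y: np.ndarray) -> np.ndarray:
            """Back-transform a logit series to percentages."""
            return 100.0 * np.exp(y) / (1.0 + np.exp(y))

        p = np.array([30.0, 45.0, 52.0])           # hypothetical Labour shares in percent
        print(logit(p))                            # unbounded values
        print(np.allclose(inv_logit(logit(p)), p)) # True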


    Figure 1.2 Monthly opinion poll in England, 1960–1996.

    Since Granger and Joyeux (1980), the fractionally integrated autoregressive moving average (ARMA) model has gained increasing popularity in economics. The empirical example in Granger and Joyeux (1980) was the monthly US index of consumer food prices. Granger (1980) had shown theoretically how the aggregation of a large number of individual series may result in an index that is fractionally integrated, which provided theoretical grounds for long memory as modeled by fractional integration in price indices. A more systematic analysis by Geweke and Porter‐Hudak (1983) revealed long memory in different US price indices. These early papers triggered empirical research on long memory in inflation rates, with independent work by Delgado and Robinson (1994) for Spain and by Hassler and Wolters (1995) and Baillie et al. (1996) for international evidence. Since then, abundant evidence in favor of long memory in inflation rates has been offered; see, e.g. Franses and Ooms (1997), Baum et al. (1999), Franses et al. (1999), Hsu (2005), Kumar and Okimoto (2007), Martins and Rodrigues (2014), and Hassler and Meller (2014), where the more recent research focused on breaks in persistence, i.e. in the order of fractional integration. For an early survey article on further applications in economics, see Baillie (1996).

    Figure 1.3 gives a flavor of the memory in US inflation. The seasonally adjusted and demeaned data from January 1966 until June 2008 has been analyzed by Hassler and Meller (2014). The autocorrelations fall off slowly from their maximum. Again, this slowly declining autocorrelogram mirrors the reversing trends in inflation, although Hassler and Meller (2014) suggested that the persistence may be superimposed by additional features like time‐varying variance.


    Figure 1.3 Monthly US inflation, 1966–2008.

    The fourth empirical example is from the field of finance. Figure 1.4 displays daily observations from January 4, 1993, until May 31, 2007. This sample of 3630 days consists of the logarithm of realized volatility of International Business Machines Corporation (IBM) returns computed from underlying five‐minute data; see Hassler et al. (2016) for details. Although the dynamics of the series are partly masked by extreme observations, one may clearly distinguish periods of weeks where the data tends to increase, followed by long time spans of decrease. The high degree of persistence becomes more obvious when looking at the sample autocorrelogram. Starting from its value at the first lag, the decline is extremely slow, with autocorrelations at long lags still being well above 0.2. Long memory in realized volatility has sometimes been considered a stylized fact since the papers by Andersen et al. (2001, 2003). Such a view is supported by the special issue of Econometric Reviews edited by Maasoumi and McAleer (2008).


    Figure 1.4 Daily realized volatility, 1993–2007.

    Finally, with the last example we return to economics. Figure 1.5 shows 435 monthly observations from 1972 until 2008. The series is the logarithm of seasonally adjusted US unemployment rates (number of unemployed persons as a percentage of the civilian labor force); see Hassler and Wolters (2009) for details. The sample average of log‐unemployment is 1.7926; compare the straight line in the upper panel of Figure 1.5. Here, the trending behavior is so strong that the sample average is crossed only eight times over the period of 35 years. The deviations from the average are very pronounced and very long relative to the sample size. In that sense the series from Figure 1.5 seems to be the most persistent of all five examples considered in this introduction. This is also expressed by the sample autocorrelogram, which begins virtually at one and dies out very slowly. What is more, the autocorrelations decline almost linearly in the lag, which is indicative of an I(1) process, or of an I(d) process with even d > 1; see Hassler (1997, Corollary 3) and Section 7.5. Hence, the log‐unemployment data seems to be the most persistent, or most strongly trending, among our empirical examples.


    Figure 1.5 Monthly unemployment rate, 1972–2008.
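
    The near-linear decline of sample autocorrelations under such strong persistence is easy to reproduce in simulation. The sketch below (my illustration, not the unemployment data) computes the sample autocorrelogram of a pure random walk, i.e. an I(1) process, of the same length as the series in Figure 1.5.

        import numpy as np

        def sample_acf(x: np.ndarray, max_lag: int) -> np.ndarray:
            """Sample autocorrelations r(1), ..., r(max_lag) of the series x."""
            x = x - x.mean()
            c0 = (x @ x) / len(x)
            return np.array([(x[h:] @ x[:-h]) / len(x) / c0
                             for h in range(1, max_lag + 1)])

        rng = np.random.default_rng(1)
        rw = np.cumsum(rng.standard_normal(435))  # random walk with T = 435
        r = sample_acf(rw, 60)
        print(r[0], r[29], r[59])  # begins near 1 and falls off roughly linearly in h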

    1.2 Overview

    There are two natural approaches to long memory modeling by fractional integration. The first one takes the nonstationary I(1) model as starting point, i.e. processes integrated of order 1. Such processes are often labeled as unit root processes in econometrics, where they play a major role within the cointegration framework; see, for instance, Johansen (1995), Lütkepohl (2005), or Pesaran (2015). The extension from the I(1) model to the more general I(d) model might be considered a nearby approach from an econometric point of view. The second approach starts off with the classical stationary time series model, where the moving average coefficients from the Wold decomposition are assumed to be absolutely summable and to sum to a value different from 0. For this model, which may be called integrated of order 0, I(0) (see Chapter 6), it holds true that the scaled sample average converges with the square root of the sample size to a nondegenerate normal distribution. This model, underlying the major body of time series books from Anderson (1971) through Brockwell and Davis (1991) and Hamilton (1994) to Fuller (1996), may be generalized to the stationary I(d) process for −1/2 < d < 1/2. The latter can be further extended to the region of nonstationarity (d ≥ 1/2). Here, we follow this second route, starting with the I(0) case. More precisely, the outline of the book is as follows.

    A definition of stationarity of stochastic processes is given in the next chapter. Moreover, Chapter 2 contains a discussion of ergodicity that corrects expositions found in some books (see Example 2.2). Next, we show that a familiar sufficient condition for ergodicity in the mean (defined in Definition 2.3) is also necessary; see Proposition 2.2. Then we distinguish between (short and long) memory (Definition 2.4) and different degrees of persistence on statistical grounds: Short memory is separated from long memory by characterizing under what circumstances the variance of the sample average is of order 1/T, where T denotes the sample size; see Proposition 2.3. Persistence is defined (Definition 2.5) to characterize the absence or presence and strength of a trend component in a process; see also Eq. (4.22).
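
    The order-1/T benchmark for short memory is easily checked by Monte Carlo. The following sketch (an illustration under an assumed AR(1) data-generating process, not taken from the book) estimates the variance of the sample average for increasing T; the product T times that variance stabilizes, as one expects under short memory.

        import numpy as np

        rng = np.random.default_rng(2)

        def ar1(T: int, phi: float = 0.5) -> np.ndarray:
            """Simulate a stationary AR(1): x_t = phi*x_{t-1} + e_t, e_t ~ N(0,1)."""
            x = np.empty(T)
            x[0] = rng.standard_normal() / np.sqrt(1.0 - phi ** 2)  # stationary start
            for t in range(1, T):
                x[t] = phi * x[t - 1] + rng.standard_normal()
            return x

        def var_of_mean(T: int, reps: int = 1000) -> float:
            """Monte Carlo estimate of the variance of the sample average."""
            return float(np.var([ar1(T).mean() for _ in range(reps)]))

        for T in (50, 200, 800):
            print(T, round(T * var_of_mean(T), 2))  # roughly constant: Var(mean) = O(1/T)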

    Chapter 3 focuses on moving average processes of infinite order, sometimes called linear processes. This is motivated by Wold's theorem in Section 3.2. We thus have a unified framework to embed the classical I(0) process of moderate persistence as well as processes with antipersistence or strong persistence, which may or may not display long memory at the same time. The discussion of memory vs. persistence is picked up again in Section 3.3. The discussion of Examples 3.2 through 3.5 shows that the series from Figures 1.1 to 1.5 display both long memory and strong persistence, which motivates the model of fractional integration in Chapter 6. Before leaving Chapter 3, we provide some interesting results on the summability of the coefficients of the classical ARMA process (Proposition 3.5), established with a sequence of technical lemmata.

    Chapter 4 introduces the frequency domain, where much of the analysis of long memory takes place. The frequency domain is not only useful for data analysis, but it also allows for a deeper theoretical study. For instance, the classical concept of invertibility can be recast following Bloomfield (1985) and Bondon and Palma (2007) in a way (Proposition 4.6) that extends the region of invertibility of fractionally integrated processes; see Proposition 6.2. Next, we introduce the so‐called exponential model formulated in the frequency domain. This exponential model is typically not treated in time series books, although it is particularly convenient in the context of long memory as modeled by fractional integration. Similarly, time series books typically do not deal with so‐called Whittle estimation, which is a frequency domain approximation to maximum likelihood that we present in Section 4.6, thus laying the foundation for memory estimation in Chapters 8 and 9.
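
    As a taste of frequency domain memory estimation, here is a minimal log-periodogram regression in the spirit of Geweke and Porter‐Hudak (1983): since the spectrum of an I(d) process behaves like λ^(−2d) near the origin, regressing the log periodogram on −2 log λ_j over the first m Fourier frequencies yields a slope estimate of d. This is my own sketch, with the common rule-of-thumb bandwidth m = √T; it is not the book's exposition of Whittle estimation.

        import numpy as np

        def gph_estimate(x: np.ndarray, m: int) -> float:
            """Log-periodogram regression estimate of the memory parameter d,
            using the first m Fourier frequencies lambda_j = 2*pi*j/T."""
            T = len(x)
            j = np.arange(1, m + 1)
            lam = 2.0 * np.pi * j / T
            dft = np.fft.fft(x - x.mean())
            I = np.abs(dft[1 : m + 1]) ** 2 / (2.0 * np.pi * T)   # periodogram
            X = np.column_stack([np.ones(m), -2.0 * np.log(lam)])  # intercept and slope
            beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
            return float(beta[1])  # slope estimates d

        rng = np.random.default_rng(3)
        x = rng.standard_normal(1000)  # white noise has d = 0
        print(round(gph_estimate(x, m=int(len(x) ** 0.5)), 2))  # close to 0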

    Chapter 5 opens the route to fractional integration. It is a short chapter on the fractional difference and integration operator, respectively. We provide four technical lemmata that will be used repeatedly in subsequent chapters. Chapter 6 defines the stationary fractionally integrated process (of type I), building on a precise definition of I(0) processes; see Assumption 6.2. Conditions for (different degrees of) persistence follow under minimal restrictions from Lemma 5.4, while Proposition 6.1 translates this into the frequency domain. Corollary 6.1 and Proposition 6.3 reflect the persistence as (short or long) memory in the time domain. After a discussion of parametric fractionally integrated models in Section 6.2, two different types of nonstationarity are discussed in Section 6.3: First, type II fractionally integrated processes are only asymptotically stationary if d < 1/2. Second, the case d ≥ 1/2 covers nonstationarity for both type I and type II processes. Proposition 6.6 shows that classical parametric models imply frequency domain assumptions often entertained in the literature. For the rest of the book, we assume the fractionally integrated models as introduced in Chapter 6.
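
    A type II process is easy to simulate directly from the truncated expansion of (1 − L)^(−d). The sketch below is an illustration under standard normal innovations (the function name is mine); it uses the recursion ψ_0 = 1, ψ_j = ψ_{j−1}(j − 1 + d)/j for the moving average weights.

        import numpy as np

        def simulate_fi_type2(T: int, d: float, rng: np.random.Generator) -> np.ndarray:
            """Type II fractionally integrated noise: x_t = sum_{j=0}^{t} psi_j e_{t-j},
            where the psi_j expand (1 - L)^(-d) and the filter is truncated at time 0."""
            psi = np.empty(T)
            psi[0] = 1.0
            for j in range(1, T):
                psi[j] = psi[j - 1] * (j - 1 + d) / j
            e = rng.standard_normal(T)
            return np.array([psi[: t + 1] @ e[t::-1] for t in range(T)])

        rng = np.random.default_rng(4)
        x = simulate_fi_type2(500, 0.4, rng)   # stationary region: d < 1/2
        y = simulate_fi_type2(500, 0.8, rng)   # nonstationary region: d >= 1/2
        print(x.std(), y.std())  # the d = 0.8 path wanders far more widely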

    Chapter 7 sets off with what seems to be the most general central limit theorem currently available for moving average processes. It is applied to the sample average of fractionally integrated processes, closing in particular a gap in the literature; see Corollary 7.1. Section 7.3 extends the central limit theorem to a functional central limit theory, where fractional Brownian motions show up in the limit. Two seemingly different representations of type II fractional Brownian motion are shown to be identical in Lemma 7.2. Finally, this chapter contains in Section 7.5 an exposition of the behavior of the sample autocorrelations under fractional integration.

    The eighth chapter is dedicated to the estimation of all parameters except for the mean, assuming a fully parametric model of fractional
