By Rainer Dahlhaus and Liudas Giraitis
1. INTRODUCTION
estimate $a_{j,t}$ and $\sigma_t$ for some fixed $t$. This is done by using the tapered Yule–Walker equations (2.2) (see later) on a segment of length $N = N(T)$ with $N = o(T)$ around $t$. One goal of the paper is to determine the optimal length of the segment $N$ and the optimal data taper as a function of the local variation of the coefficients $a_{j,t}$ and $\sigma_t$.
The technical difficulties require asymptotic considerations. As in non-parametric regression we therefore rescale the parameters $a_{j,t}$ and $\sigma_t$ in an adequate way in order to obtain meaningful asymptotic results (cf. Dahlhaus, 1996b). Therefore, we consider the system of difference equations
$$\sum_{j=0}^{p} a_j\!\left(\frac{t}{T}\right) X_{t-j,T} = \sigma\!\left(\frac{t}{T}\right)\varepsilon_t, \qquad 1 \le t \le T, \qquad (1.1)$$
where $a_0(u) \equiv 1$, $\{\varepsilon_t\}$ are i.i.d. variables with zero mean and variance 1 and $\sigma(\cdot)$, $a_j(\cdot)$, $j = 1, \ldots, p$, are (smooth) real functions on $[0, 1]$. A unique solution of (1.1) which is causal exists under certain regularity conditions (see (2.4) later) if $\sigma(\cdot)$ and $a_j(\cdot)$, $j = 1, \ldots, p$, are extended by $a_j(u) = a_j(0)$ and $\sigma(u) = \sigma(0)$ for $u < 0$. This means that the process $X_{t,T}$ for $t \le 0$ and in particular the starting values $X_{-p+1,T}, \ldots, X_{0,T}$ are assumed to be stationary. $\varepsilon_{t,T} := \sigma(t/T)\varepsilon_t$ are the one-step-ahead forecast errors. We denote by $t$ a time point in the interval $[1, T]$ while $u$ is a time point in the rescaled interval $[0, 1]$, i.e. $u = t/T$.
The properties of tapered Yule–Walker estimates are studied in this framework in Section 2. The optimal segment length $N$ and the optimal data taper are derived in Theorem 2.2.
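To make the setting concrete, here is a minimal Python sketch that simulates a time-varying AR(1) process according to (1.1); the coefficient and scale functions $a_1(u) = -0.8\cos(\pi u)$ and $\sigma(u) = 1 + 0.5u$ are illustrative choices, not taken from the paper, and the zero starting value is a crude stand-in for the stationary start assumed above.

```python
import numpy as np

def simulate_tvar1(T, a1, sigma, seed=0):
    """Simulate X_{1,T}, ..., X_{T,T} from (1.1) with p = 1, i.e.
    X_{t,T} = -a_1(t/T) X_{t-1,T} + sigma(t/T) eps_t, eps_t i.i.d. N(0, 1)."""
    rng = np.random.default_rng(seed)
    X = np.empty(T)
    x_prev = 0.0  # crude stand-in for the stationary starting value X_{0,T}
    for t in range(1, T + 1):
        u = t / T  # rescaled time in [0, 1]
        x_prev = -a1(u) * x_prev + sigma(u) * rng.standard_normal()
        X[t - 1] = x_prev
    return X

X = simulate_tvar1(T=2000,
                   a1=lambda u: -0.8 * np.cos(np.pi * u),
                   sigma=lambda u: 1.0 + 0.5 * u)
```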
A time-varying AR process as in (1.1) is a special case of a locally stationary process as introduced by Dahlhaus (1996a, 1996b, 1997); see the definition below. In Section 3 we study as a generalization the non-parametric estimation of arbitrary time-varying parameters of locally stationary processes. Furthermore, the fitted model may be misspecified. We assume that the observed data are generated by an arbitrary locally stationary process with spectral density $f(u, \lambda)$ and we fit locally at time $t$ a (stationary) finite-dimensional model with spectral density $f_{\theta(t/T)}(\lambda)$. The goal then is to estimate the parameter function $\theta(u)$, $u \in [0, 1]$ (for the AR model (1.1) we have $\theta(u) = (\sigma^2(u), a_1(u), \ldots, a_p(u))'$).
We discuss the asymptotic properties of the (local) estimator $\hat\theta(u)$ of the parameter $\theta(u)$ defined as the minimizer of the Kullback–Leibler distance. As in Section 2 we determine the optimal segment length and the optimal data taper by minimizing the mean squared error (quadratic risk) $E\|\hat\theta(u) - \theta(u)\|^2$ and the integrated squared error $\int E\|\hat\theta(u) - \theta(u)\|^2\,du$, where $\|\cdot\|$ denotes the Euclidean norm.
In Section 4 we investigate the asymptotic properties of the integrated
periodogram. The results are used in Sections 2 and 3.
Processes with an evolutionary spectral representation were introduced and
investigated by Priestley (1965, 1988). The present approach of local
stationarity may be regarded as a setting which allows for a meaningful
ASSUMPTIONS 2.1
where $\{\varepsilon_j\}$ is an i.i.d. sequence and the real weights $\psi_{j,t,T}$ are such that $\sum_{j\ge 0}|\psi_{j,t,T}| < \infty$ uniformly in $t$ and $T$ (see Miller, 1969; Hallin, 1978, 1984; Mélard, 1985).
Furthermore, let

$$b(u) = -(a_1(u),\, 2a_2(u),\, \ldots,\, p\,a_p(u))'$$

and

$$\begin{cases}\displaystyle \sum_{j=0}^{p/2-1}\{a_j(u) - a_{p-j}(u)\}\,c_j, & p \text{ even,}\\[2ex] \displaystyle \sum_{j=0}^{(p-1)/2}\{a_{j-1}(u) - a_{p-j}(u)\}\,d_j, & p \text{ odd.}\end{cases}$$
$$\le C(\|\hat R_{N,u} - R_u\|_{sp} + \|\hat r_u - r_u\|)$$

since by Lemma 4.2 (later) $\|\hat a_{N,u}\| \le 2^p$. (For this property Assumption 2.1(iii) is needed.) Furthermore,

$$\hat R_{N,u}(\hat a_{N,u} - a_u) = -(\hat R_{N,u} a_u + \hat r_u) = -(\hat R_{N,u} - R_u)a_u - (\hat r_u - r_u)$$

and

$$\hat R_{N,u}(\hat a_{N,u} - a_u) = R_u(\hat a_{N,u} - a_u) + (\hat R_{N,u} - R_u)(\hat a_{N,u} - a_u).$$

This gives

$$\hat a_{N,u} - a_u = -R_u^{-1}\hat m_{N,u} + R_u^{-1}(\hat R_{N,u} - R_u)\,R_u^{-1}\hat m_{N,u} + r_N^{ar} \qquad (2.11)$$

where $\hat m_{N,u} := (\hat R_{N,u} - R_u)\,a_u + \hat r_u - r_u$ and

$$r_N^{ar} = R_u^{-1}(\hat R_{N,u} - R_u)\,R_u^{-1}(\hat R_{N,u} - R_u)(\hat a_{N,u} - a_u).$$
The first term of the bias in Theorem 2.1 is due to non-stationarity (note that it disappears if $R_u$ and $r_u$ are constant over time) while the second term is the bias of the tapered Yule–Walker estimate in the stationary case (cf. Zhang, 1992). The covariance is the same as the covariance of the Yule–Walker estimate in the stationary case. If the goal is a small bias we may now balance the two bias terms. In the following theorem we minimize the mean squared error with respect to the segment length $N$ and the taper function $h$.
Let $\|\hat a_{N,u} - a_u\|^2_{\Sigma_u} := (\hat a_{N,u} - a_u)'\,\Sigma_u\,(\hat a_{N,u} - a_u)$. In view of the asymptotic variance of $\hat a_{N,u}$ it makes sense to consider this norm with $\Sigma_u = \{1/\sigma^2(u)\}R_u$. Alternatively, one may choose $\Sigma_u$ as the identity matrix. Furthermore, let

$$D_1(u) := \left(R_u\,\frac{d^2}{du^2}a_u + 2\,\frac{d^2}{du^2}r_u\right)'\,R_u^{-1}\,\Sigma_u\,R_u^{-1}\left(R_u\,\frac{d^2}{du^2}a_u + 2\,\frac{d^2}{du^2}r_u\right)$$

$$D_2(u) := \sigma^2(u)\,\mathrm{tr}(\Sigma_u R_u^{-1})$$

$$C(h) := \frac{v(h)}{d^2(h)} = \int_0^1 h^4(x)\,dx\,\left\{\int_0^1 h^2(x)\left(x - \frac{1}{2}\right)^2 dx\right\}^{-2}$$

$$c(h) := v(h)\,d(h)^{1/2}.$$

We have

$$E\|\hat a_{N,u} - a_u\|^2_{\Sigma_u} = \frac{N^4}{T^4}\,\frac{d^2(h)}{4}\,D_1(u) + \frac{1}{N}\,v(h)\,D_2(u) + o\!\left(\frac{N^4}{T^4} + \frac{1}{N}\right). \qquad (2.13)$$
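As a quick numerical illustration of the trade-off in (2.13), the sketch below minimizes the two leading terms over $N$ for plug-in values of $D_1(u)$ and $D_2(u)$ (hypothetical values, purely for illustration) and compares the grid minimizer with the closed form that balances the two terms; the constants $d(h) = 1/20$ and $v(h) = 6/5$ are those of the optimal taper of Theorem 2.2 below.

```python
import numpy as np

def mse_leading_terms(N, T, d_h, v_h, D1, D2):
    # the two leading terms of (2.13)
    return (N**4 / T**4) * d_h**2 / 4.0 * D1 + v_h / N * D2

T, d_h, v_h, D1, D2 = 100_000, 1.0 / 20.0, 6.0 / 5.0, 2.0, 1.0
N_grid = np.arange(10, T // 2, dtype=float)
N_star = N_grid[np.argmin(mse_leading_terms(N_grid, T, d_h, v_h, D1, D2))]
N_closed = (v_h * D2 / (d_h**2 * D1))**0.2 * T**0.8  # balances bias^2 and variance
print(N_star, N_closed)  # both of order T^{4/5}
```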
THEOREM 2.2. The mean squared error $E\|\hat a_{N,u} - a_u\|^2_{\Sigma_u}$ is minimal for

$$h(x) = h_{\mathrm{opt}}(x) = \{6x(1-x)\}^{1/2}, \qquad 0 \le x \le 1, \qquad (2.14)$$

and

$$h(x) = h(1-x). \qquad (2.18)$$
Suppose

$$h_\tau(x) = \frac{1}{\tau^{1/2}}\,h\!\left(\frac{x - 1/2}{\tau} + \frac{1}{2}\right)$$

(if $h^2$ is regarded as a probability density centred around $1/2$, then $h_\tau^2$ is the density with scale multiplied by $\tau$, also centred around $1/2$). We have $\int_R h_\tau^2(x)\,dx = 1$, $d(h_\tau) = \tau^2 d(h)$ and $v(h_\tau) = \tau^{-1}v(h)$ (with the integrals in the definition of $d$ and $v$ extended to the real line). As a consequence we obtain $N_{h_\tau} = N_h/\tau$ and $f(N_{h_\tau}, h_\tau) = f(N_h, h)$, i.e. we can restrict ourselves to $h$ with
$$\int_R h^2(x)\left(x - \frac{1}{2}\right)^2 dx = \int_R h_{\mathrm{opt}}^2(x)\left(x - \frac{1}{2}\right)^2 dx \qquad (2.19)$$

where $h_{\mathrm{opt}}$ is as in (2.14) with $h_{\mathrm{opt}}(x) = 0$ for $x \notin [0, 1]$. Therefore, we have to show that
We obtain

$$\int_0^1 \varepsilon(x)\,h_{\mathrm{opt}}^2(x)\,dx = -6\int_{R\setminus[0,1]}\varepsilon(x)\,x(x-1)\,dx \ge 0$$
By the same argument we obtain the following theorem for the global risk on the interval $I_T := [N/(2T), 1 - N/(2T)]$.
THEOREM 2.3. The minimum of the integrated squared risk $\int_{I_T} E\|\hat a_{N,u} - a_u\|^2_{\Sigma_u}\,du$ is achieved by the taper $h_{\mathrm{opt}}$ as given in (2.14) and the global bandwidth

$$N = N_{\mathrm{gl}} = C(h_{\mathrm{opt}})^{1/5}\left\{\frac{\int_{I_T} D_2(u)\,du}{\int_{I_T} D_1(u)\,du}\right\}^{1/5} T^{4/5}.$$
In this case

$$T^{4/5}\int_{I_T} E\|\hat a_{N,u} - a_u\|^2_{\Sigma_u}\,du = \frac{5}{4}\,c(h_{\mathrm{opt}})^{4/5}\left\{\int_{I_T} D_1(u)\,du\right\}^{1/5}\left\{\int_{I_T} D_2(u)\,du\right\}^{4/5} + o(1).$$
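The global bandwidth of Theorem 2.3 is easy to evaluate once the two integrals are estimated; here is a small sketch (the plug-in values of the integrals are hypothetical placeholders, not estimates from data):

```python
# d(h_opt) = 1/20 and v(h_opt) = 6/5 follow from h_opt^2(x) = 6x(1 - x)
d_opt, v_opt = 1.0 / 20.0, 6.0 / 5.0
C_opt = v_opt / d_opt**2                  # C(h) = v(h) / d(h)^2
T, int_D1, int_D2 = 100_000, 200.0, 1.0   # hypothetical plug-in values
N_gl = (C_opt * int_D2 / int_D1)**0.2 * T**0.8
print(round(N_gl))
```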
squared error decreases. The results are similar to kernel estimation in non-parametric regression or to kernel density estimation (cf. Silverman, 1986). $N/T$ is the bandwidth while $h^2(x)$ corresponds to the kernel (cf. Remark 2.6 below). In fact $h_{\mathrm{opt}}^2(x) = 6x(1-x)$ is a transformation of the Epanechnikov kernel. If we define the efficiency of the taper $h$ as

$$\mathrm{eff}(h) = \left\{\frac{c(h_{\mathrm{opt}})}{c(h)}\right\}^{4/5} \quad (\le 1),$$

which compares it with the optimal taper, then for large $N$ the mean squared error (2.16) will be the same using $N$ observations and the taper $h$ as if we use $\mathrm{eff}(h)N$ observations and the taper $h_{\mathrm{opt}}$ (cf. Silverman, 1990).
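The taper constants, and hence $\mathrm{eff}(h)$, are straightforward to evaluate numerically. In the sketch below $d(h)$ and $v(h)$ are computed with the normalizations implied by the display for $C(h)$ above, and the cosine bell $h(x) = \sin(\pi x)$ is an illustrative competitor, not a taper discussed in the paper.

```python
import numpy as np

def taper_constants(h, n=200_000):
    """Return d(h), v(h) and c(h) = v(h) d(h)^{1/2} by midpoint-rule integration."""
    x = (np.arange(n) + 0.5) / n
    h2 = h(x)**2
    d = np.mean(h2 * (x - 0.5)**2) / np.mean(h2)
    v = np.mean(h(x)**4) / np.mean(h2)**2
    return d, v, v * np.sqrt(d)

h_opt = lambda x: np.sqrt(6.0 * x * (1.0 - x))
h_cos = lambda x: np.sin(np.pi * x)
c_opt = taper_constants(h_opt)[2]
for name, h in [("h_opt", h_opt), ("cosine", h_cos)]:
    print(name, "eff =", (c_opt / taper_constants(h)[2])**0.8)
# h_opt gives eff = 1; the cosine bell comes out close to but below 1
```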
REMARK 2.4. We omit the discussion of the properties of the scale parameter estimate $\hat\sigma^2(u)$ at this point. They can be obtained as a special case of the results of Section 3.
REMARK 2.5. It should be pointed out that the asymptotic properties of the bias and the quadratic risk for least squares and Yule–Walker estimates of AR coefficients were obtained by Shaman and Stine (1988) and Lewis and Reinsel (1988) only under an additional ergodicity type assumption on the sample covariance, $E\|\hat R_{N,u}^{-1} - R_u^{-1}\|^8_{sp} < \infty$. By our approach this assumption can be omitted for Yule–Walker estimates in both the stationary and the locally stationary case.
where the transfer function $A^\circ_{t,T}(\lambda) = \overline{A^\circ_{t,T}(-\lambda)}$ is such that

$$A^\circ_{t,T}(\lambda) = A\!\left(\frac{t}{T}, \lambda\right) + O(T^{-1})$$

uniformly in $1 \le t \le T$ and $|\lambda| \le \pi$ for some function $A(u, \lambda)\colon [0, 1] \times [-\pi, \pi] \to \mathbb C$ with $A(u, \lambda) = \overline{A(u, -\lambda)}$. The functions $A(u, \lambda)$ and $\mu(u)$ are assumed to be smooth functions (in this paper we assume $\mu(u) \equiv 0$ and differentiability of $A(u, \lambda)$; compare Assumption 3.1(ii)). The stochastic process $\xi(\lambda)$ is assumed to have bounded spectral densities $h_k$, $k \ge 2$ ($h_1 = 0$, $h_2 \equiv 1$), defined by the cumulants of the random measure $d\xi(\lambda)$,

$$\mathrm{cum}\{d\xi(\lambda_1), \ldots, d\xi(\lambda_k)\} = \delta\!\left(\sum_{j=1}^k \lambda_j\right) h_k(\lambda_1, \ldots, \lambda_{k-1})\,d\lambda_1 \cdots d\lambda_k \qquad (3.2)$$

where $\delta(\cdot)$ is the Dirac function periodically extended to $\mathbb R$ with period $2\pi$.
The function
(see Dahlhaus, 1996a, Theorem 2.3). Additional examples are given by Dahlhaus
(1997).
Again we denote by $t$ a time point in the interval $[1, T]$ while $u$ is a time point in the rescaled interval $[0, 1]$, i.e. $u = t/T$.
Suppose now that we have observations $X_{1,T}, \ldots, X_{T,T}$ of a locally stationary process with mean zero and spectral density $f(u, \lambda)$ and we fit a locally stationary model with spectral density $f_{\theta(u)}(\lambda)$. $\theta(u)$ is estimated by the minimizer $\hat\theta_N(u)$ of the local Whittle likelihood

$$\hat L_N(\theta, u) := \frac{1}{4\pi}\int_{-\pi}^{\pi}\left\{\log f_\theta(\lambda) + \frac{I_N(u, \lambda)}{f_\theta(\lambda)}\right\}d\lambda, \qquad \theta \in \Theta, \qquad (3.4)$$
with a local version of the periodogram

$$I_N(u, \lambda) := \frac{1}{2\pi H_N}\left|\sum_{s=0}^{N-1} h\!\left(\frac{s}{N}\right)\exp(-i\lambda s)\,X_{[uT]-N/2+s+1,T}\right|^2 \qquad (3.5)$$

where $h\colon [0, 1] \to \mathbb R$ is a data taper, $H_N := \sum_{j=0}^{N-1} h^2(j/N) \sim N\int_0^1 h^2(x)\,dx$ is the normalizing factor and $[a]$ denotes the integer part of a real number $a$. We assume that the tapering function $h(x)$, $0 \le x \le 1$, has bounded second derivative and is symmetric:

$$h(x) = h(1-x), \qquad 0 \le x \le 1. \qquad (3.6)$$
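A direct implementation sketch of the local periodogram (3.5); the white-noise example at the end is only a sanity check (its spectral density is $1/2\pi$), and the integer arithmetic for the segment is one concrete reading of the index convention.

```python
import numpy as np

def local_periodogram(X, u, N, h, lam):
    """Tapered local periodogram I_N(u, lambda) on the segment around [uT]."""
    T, s = len(X), np.arange(N)
    taper = h(s / N)
    start = int(u * T) - N // 2            # X_{[uT]-N/2+s+1,T}, s = 0, ..., N-1
    seg = X[start:start + N]
    H_N = np.sum(taper**2)                 # normalizing factor
    dft = np.sum(taper * np.exp(-1j * lam * s) * seg)
    return np.abs(dft)**2 / (2 * np.pi * H_N)

h_opt = lambda x: np.sqrt(6.0 * x * (1.0 - x))
X = np.random.default_rng(1).standard_normal(2000)
print(local_periodogram(X, u=0.5, N=256, h=h_opt, lam=0.7))  # near 1/(2 pi)
```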
Note that the above estimate can also be interpreted as a local fit of a stationary model with spectral density $f_{\theta(u)}(\lambda)$. More generally, all results below also hold if a locally stationary model with spectral density $f_{\theta(u)}(u, \lambda)$ is fitted. However, we restrict ourselves to the slightly simpler case $f_{\theta(u)}(\lambda)$.

If the model is correct, i.e. if $f(u, \lambda) = f_{\theta_0(u)}(\lambda)$, then $\hat\theta_N(u)$ will be an estimate of $\theta_0(u)$. If the model is misspecified and $f(u, \lambda)$ is not of the above form then $\hat\theta_N(u)$ usually converges to $\theta_0(u)$ where $\theta_0(u)$ is the minimizer of

$$L(\theta, u) := \frac{1}{4\pi}\int_{-\pi}^{\pi}\left\{\log f_\theta(\lambda) + \frac{f(u, \lambda)}{f_\theta(\lambda)}\right\}d\lambda, \qquad \theta \in \Theta,\ 0 \le u \le 1. \qquad (3.7)$$
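For a concrete instance of (3.4), the sketch below fits a local AR(1) model with $f_\theta(\lambda) = \sigma^2/(2\pi|1 + a\,\mathrm e^{-i\lambda}|^2)$, $\theta = (\sigma^2, a)$, by discretizing the integral and profiling out $\sigma^2$; the frequency grid and the crude grid search over $a$ are implementation choices, not part of the paper.

```python
import numpy as np

def local_whittle_ar1(X, u, N, h, n_freq=256):
    """Minimize a discretized version of (3.4) over an AR(1) spectral family."""
    T, s = len(X), np.arange(N)
    taper = h(s / N)
    seg = X[int(u * T) - N // 2:][:N] * taper
    lam = np.pi * (2 * np.arange(n_freq) + 1) / (2 * n_freq)  # grid on (0, pi)
    dft = np.exp(-1j * np.outer(lam, s)) @ seg
    I = np.abs(dft)**2 / (2 * np.pi * np.sum(taper**2))       # I_N(u, lam)
    best = (np.inf, None, None)
    for a in np.linspace(-0.95, 0.95, 191):
        g = 1.0 / (2 * np.pi * np.abs(1 + a * np.exp(-1j * lam))**2)  # f with s2 = 1
        s2 = np.mean(I / g)                                   # sigma^2 profiled out
        val = np.mean(np.log(s2 * g) + I / (s2 * g))
        if val < best[0]:
            best = (val, s2, a)
    return best[1], best[2]                                   # (sigma^2_hat, a_hat)

h_opt = lambda x: np.sqrt(6.0 * x * (1.0 - x))
X = np.random.default_rng(2).standard_normal(4000)
print(local_whittle_ar1(X, u=0.5, N=512, h=h_opt))            # a_hat near 0 here
```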
Note that

$$d(f_\theta, f) := \frac{1}{4\pi}\int_0^1\int_{-\pi}^{\pi}\left\{\log\frac{f_{\theta(u)}(\lambda)}{f(u, \lambda)} + \frac{f(u, \lambda)}{f_{\theta(u)}(\lambda)} - 1\right\}d\lambda\,du$$
$$B_N(\theta, u) := \frac{T^2}{N^2}\,\{E\nabla\hat L_N(\theta, u) - \nabla L(\theta, u)\} = \frac{T^2}{N^2}\,\frac{1}{4\pi}\int_{-\pi}^{\pi}\nabla f_\theta^{-1}(\lambda)\,\{EI_N(u, \lambda) - f(u, \lambda)\}\,d\lambda$$

where $\nabla^2 L_{\theta_0,u} := \nabla^2 L\{\theta_0(u), u\}$. In this section we use the following assumptions.
ASSUMPTIONS 3.1
$$\frac{\partial^3}{\partial\theta_{i_1}\,\partial\theta_{i_2}\,\partial\theta_{i_3}}\,f_\theta^{-1}(u, \lambda), \qquad \frac{\partial^3}{\partial\theta_{i_1}\,\partial\theta_{i_2}\,\partial\theta_{i_3}}\,f_\theta(u, \lambda) \qquad\text{and}\qquad \frac{\partial^2}{\partial\lambda^2}\,\frac{\partial}{\partial\theta_{i_1}}\,f_\theta^{-1}(u, \lambda)$$

are bounded for $1 \le i_1, i_2, i_3 \le p$ uniformly in $(\theta, u, \lambda) \in \bar\Theta \times [0, 1] \times [-\pi, \pi]$, where $\Theta$ is an open convex subset of $\mathbb R^p$ and $\bar\Theta$ denotes the closure of $\Theta$.

(iv) $\sup_{0\le u\le 1,\,\theta\in\Theta}\|\nabla^2 L(\theta, u)^{-1}\|_{sp} < \infty$, where $\|\cdot\|_{sp}$ denotes the spectral norm of the matrix.
where

$$\nabla\hat L_{N,\theta_0}(u) = \frac{1}{4\pi}\int_{-\pi}^{\pi}\nabla f_{\theta_0(u)}^{-1}(\lambda)\,\{I_N(u, \lambda) - f(u, \lambda)\}\,d\lambda = \frac{N^2}{T^2}\,B_N\{\theta_0(u), u\} + \frac{1}{N^{1/2}}\,S_N\{\theta_0(u), u\} + r_N \qquad (3.12)$$

and that the remainder $r_N$ is of the order

$$E^{1/2}(\|r_N\|^2) \le C\,d_N^2$$

uniformly in $u$ and $N$ with

$$d_N = \frac{N^2}{T^2} + \frac{1}{N^{1/2}}.$$
By the mean value theorem

$$\nabla\hat L_{N,\hat\theta(u)} - \nabla\hat L_{N,\theta_0} = \nabla^2\hat L_{N,\bar\theta}\,\{\hat\theta_N(u) - \theta_0(u)\} \qquad (3.13)$$
and

$$E\sup_{\theta:\,\|\theta - \theta_0\| \le \|\hat\theta - \theta_0\|}\|V_{N,\theta}\|_{sp}^{2l} \le C\,d_N^{2l}. \qquad (3.15)$$
and

$$\|\hat\theta_N(u) - \theta_0(u)\| \le C\,\sup_{\theta\in\Theta}\|(\nabla^2 L_\theta)^{-1}\|_{sp}\,\sup_{\theta\in\Theta,\,\|\theta\|\le D_1}\|\nabla M_N(\theta)\|$$

where $D_1 = D + \|\theta_0\|$. Now, from Corollary 4.2 (later) it follows easily that for $l \ge 1$

$$E\sup_{\theta\in\Theta,\,\|\theta\|\le D_1}\|\nabla M_N(\theta)\|^{2l} \le C\,d_N^{2l} \qquad (3.18)$$

$$\le C\sup_{\|\theta\|\le D_1}\|\nabla^2 M_N(\theta)\|_{sp}\,\|\hat\theta_N(u) - \theta_0(u)\| \le C\,\|\hat\theta_N(u) - \theta_0(u)\|. \qquad (3.19)$$

Thus, (3.15) and therefore also Theorem 3.1 have been proved. □
Further let
PROOF. The convergences (3.22) and (3.23) follow from Corollary 4.1 and
Theorem 4.2.
As in Section 2 we now calculate the mean squared error for $\hat\theta_N(u)$ and minimize it with respect to $N$ and $h$. Theorems 3.1 and 3.2 imply

$$E\|\hat\theta_N(u) - \theta_0(u)\|^2_{\Sigma_u} = \frac{1}{4}\,\frac{N^4}{T^4}\,d^2(h)\,D_1(u) + \frac{1}{N}\,v(h)\,D_2(u) + o\!\left(\frac{N^4}{T^4} + \frac{1}{N}\right) \qquad (3.24)$$

where

$$D_1(u) := b\{\theta_0(u), u\}'\,(\nabla^2 L_{\theta_0,u})^{-1}\,\Sigma_u\,(\nabla^2 L_{\theta_0,u})^{-1}\,b\{\theta_0(u), u\}$$

and

$$D_2(u) := \mathrm{tr}\{\Sigma_u\,(\nabla^2 L_{\theta_0,u})^{-1}\,W(u)\,(\nabla^2 L_{\theta_0,u})^{-1}\}.$$
Now we have the analogous result to Theorem 2.2.

THEOREM 3.3. The mean squared error $E\|\hat\theta(u) - \theta_0(u)\|^2_{\Sigma_u}$ is minimal if

Theorem 3.3 follows in the same way as Theorem 2.2. Relation (3.25) is a consequence of Theorems 3.1 and 3.2.
REMARK. The above results also give the mean squared error for the variance estimate $\hat\sigma^2_{N,u}$ of Section 2. Unfortunately, Assumptions 3.1(iii) and 3.1(iv) are not fulfilled in this case. However, one may derive this expression directly by similar arguments to those in the proof of Theorem 2.1.
where

$$I_N^Y(\lambda) := \frac{1}{2\pi H_N}\left|\sum_{s=0}^{N-1} h\!\left(\frac{s}{N}\right)\exp(-i\lambda s)\,Y_{[uT]-N/2+s+1}\right|^2$$

is the periodogram on the segment $[uT] - N/2 + 1, \ldots, [uT] + N/2$ of the stationary process
where

$$\hat d_N(x_1, x_2) = \frac{1}{2\pi H_N}\int_{-\pi}^{\pi}\phi(\lambda)\,i_N(\lambda, x_1)\,i_N(-\lambda, x_2)\,d\lambda$$

and

$$i_N(\lambda, x) = \sum_{s=0}^{N-1}\exp\{-is(\lambda - x)\}\,\exp\!\left\{i\left([uT] - \frac{N}{2} + 1\right)x\right\}h\!\left(\frac{s}{N}\right)A^\circ_{[uT]-N/2+s+1,T}(x). \qquad (4.5)$$
We now decompose $\hat d_N(x_1, x_2)$ into a main term and some remainders which have to be estimated. Put

$$i_N^{(l)}(\lambda, x) := \sum_{s=0}^{N-1}\exp\{-is(\lambda - x)\}\,\exp\!\left\{i\left([uT] - \frac{N}{2} + 1\right)x\right\}h\!\left(\frac{s}{N}\right)A_s^{(l)}(x), \qquad l = 1, 2, 3,$$

where

(4.6)
Therefore, (4.1) can be written as

$$J_N(\phi) = \sum_{j,l=1}^{3}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\hat d_N^{(j,l)}(x_1, x_2)\,\xi(dx_1)\,\xi(dx_2) =: \sum_{j,l=1}^{3} J_N^{(j,l)}(\phi). \qquad (4.7)$$
ð
Direct veri®cation gives that J (1,1) Y
N (ö) ÿð ö(ë)I N (ë) dë. Thus (4.2) follows, if
( j, l) ( j, l )
J N (ö) ÿ EJ N (ö) op (N ÿ1=2 ) for j l . 2. This will be derived at the end
of this proof.
To prove (4.4) we first consider the $k$th-order cumulants of $J_N(\phi)$. Let $c\colon [-\pi, \pi]^2 \to \mathbb C$ and

$$q(c) = \int_{[-\pi,\pi]^2} c(x_1, x_2)\,\{\xi(dx_1)\,\xi(dx_2) - E[\xi(dx_1)\,\xi(dx_2)]\}.$$
$$|i_N^{(2)}(\lambda, x)| \le C\,\frac{N}{T}\,L_N(x - \lambda), \qquad |i_N^{(3)}(\lambda, x)| \le C\,\frac{N}{T} \qquad (4.12)$$

uniformly in $u$, $x$, $\lambda$, where $L_N(x)$ is the periodic extension of

$$L_N(x) := \begin{cases} N, & |x| \le 1/N, \\ 1/|x|, & 1/N \le |x| \le \pi. \end{cases} \qquad (4.13)$$
Moreover, it is easy to verify that, for any $0 < \varepsilon < 1/2$,

$$\int_{-\pi}^{\pi} L_N(x + y)\,L_N(y)\,dy \le C(\varepsilon)\,N^{\varepsilon}\,L_N^{1-\varepsilon}(x) \qquad (4.14)$$

$$\int_{-\pi}^{\pi}|N^{\varepsilon}L_N^{1-\varepsilon}(y)|^2\,dy \le C(\varepsilon)\,N^2 \qquad (4.15)$$

uniformly in $|x| \le \pi$ and $N \ge 1$, where the constant $C(\varepsilon)$ does not depend on $N$.
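These bounds are easy to probe numerically; the following sketch evaluates both sides of (4.14) and (4.15) on a grid (a finite check, of course, not a proof).

```python
import numpy as np

def L_N(x, N):
    """The kernel (4.13), extended periodically with period 2 pi."""
    x = np.abs(((np.asarray(x, dtype=float) + np.pi) % (2 * np.pi)) - np.pi)
    return np.where(x <= 1.0 / N, float(N), 1.0 / np.maximum(x, 1.0 / N))

N, eps = 512, 0.25
y = np.linspace(-np.pi, np.pi, 100_001)
dy = y[1] - y[0]
ratio_414 = max(np.sum(L_N(x + y, N) * L_N(y, N)) * dy
                / (N**eps * L_N(x, N)**(1 - eps))
                for x in np.linspace(0.0, np.pi, 40))
print("(4.14) LHS/RHS:", ratio_414)   # stays bounded as N grows
print("(4.15) LHS/N^2:", np.sum((N**eps * L_N(y, N)**(1 - eps))**2) * dy / N**2)
```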
This implies

$$\|\hat d_N^{(1,1)}\|^2_{L^2((-\pi,\pi]^2)} \le C\,\|\phi\|_\infty^2\,\frac{1}{N} \qquad (4.16)$$

and

$$\|\hat d_N^{(j,l)}\|^2_{L^2((-\pi,\pi]^2)} \le C\,\|\phi\|_\infty^2\,\frac{N}{T^2} \qquad (4.17)$$

for $j + l > 2$. We therefore obtain for $k$ even
$$E\bigl|N^{1/2}\{J_N(\phi) - EJ_N(\phi)\}\bigr|^k = N^{k/2}\sum_{j_1,\ldots,j_k=1}^{3}\,\sum_{l_1,\ldots,l_k=1}^{3} E\prod_{i=1}^{k} q\bigl(\hat d_N^{(j_i,l_i)}\bigr) \le C\,\|\phi\|_\infty^k$$

and

$$E\bigl|N^{1/2}\{J_N^{(j,l)}(\phi) - EJ_N^{(j,l)}(\phi)\}\bigr|^2 \le C\,\|\phi\|_\infty^2\,\frac{N^2}{T^2}$$
which finally proves (4.2), and for $j + l > 2$

$$EJ_N^{(j,l)}(\phi) = \frac{1}{2\pi H_N}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\phi(\lambda)\,i_N^{(j)}(\lambda, -x)\,i_N^{(l)}(-\lambda, x)\,d\lambda\,dx = O\!\left(\frac{N}{T}\right)$$
which, together with (4.2), gives (4.3). □
PROOF. The result follows from (4.2) since Theorem 2 of Dahlhaus (1983) implies that

$$\left[N^{1/2}\int_{-\pi}^{\pi}\phi_j(\lambda)\,\{I_N^Y(\lambda) - EI_N^Y(\lambda)\}\,d\lambda\right]_{j=1,\ldots,p} \Rightarrow \{v(h)\}^{1/2}\,\{\xi(\phi_j)\}_{j=1,\ldots,p}. \qquad \square$$
Keeping in mind relation (2.9) it is therefore heuristically clear from (4.3) that $\hat a_{N,u}$ as given in (2.2) has the same asymptotic distribution as in the stationary case, which leads to (2.6).
We now derive the bias of $J_N(\phi)$. Let

$$J(\phi) := \int_{-\pi}^{\pi}\phi(\lambda)\,f(u, \lambda)\,d\lambda \qquad (4.19)$$

$$=: q_N^{(1)} + O(\|\phi'\|_\infty\,r_N)$$
uniformly in $\phi$. Clearly, (4.21) and (4.22) imply (4.20). First we shall prove (4.21). By definition (4.5) of $i_N(\cdot, \cdot)$,

$$q_N^{(1)} = H_N^{-1}\int_{-\pi}^{\pi}\phi(\mu)\sum_{s=0}^{N-1} h^2\!\left(\frac{s}{N}\right) f\!\left(\frac{[uT] - N/2 + s + 1}{T}, \mu\right)d\mu + O\!\left(\frac{1}{T}\,\|\phi\|_\infty\right).$$
$$f(u + v, \mu) = f(u, \mu) + v\,\frac{\partial}{\partial u}f(u, \mu) + \frac{v^2}{2}\,\frac{\partial^2}{\partial u^2}f(u, \mu) + O(v^3).$$

Furthermore, for $h$ such that $h(x) = h(1 - x)$, $0 \le x \le 1$, the following assertions are valid:
Consequently,

$$q_N^{(1)} = J(\phi) + \frac{1}{2}\,\frac{N^2}{T^2}\,d(h)\int_{-\pi}^{\pi}\phi(\mu)\,\frac{\partial^2}{\partial u^2}f(u, \mu)\,d\mu + o\!\left(\frac{N^2}{T^2}\,\|\phi\|_\infty\right)$$

uniformly in $\phi$ and $u$.
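The way $d(h)$ enters this bias can be checked directly: the $h^2$-weighted average of a smooth function over the segment reproduces its value plus $\frac{1}{2}(N/T)^2 d(h)$ times its second derivative. In the sketch below $g(u) = \sin(2\pi u)$ is an arbitrary stand-in for $f(\cdot, \mu)$, and $d(h_{\mathrm{opt}}) = 1/20$.

```python
import numpy as np

T, N, u = 100_000, 5_000, 0.3
g = lambda x: np.sin(2 * np.pi * x)
gpp = lambda x: -(2 * np.pi)**2 * np.sin(2 * np.pi * x)  # second derivative g''

s = np.arange(N)
w = 6.0 * (s / N) * (1.0 - s / N)    # h_opt(s/N)^2
w /= w.sum()                          # plays the role of h^2(s/N) / H_N
t = int(u * T) - N // 2 + s + 1       # the segment around [uT]
avg = np.sum(w * g(t / T))

d_h = 1.0 / 20.0                      # d(h_opt)
print(avg - g(u), 0.5 * (N / T)**2 * d_h * gpp(u))  # the two are close
```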
Now we turn to the proof of the assertion (4.22). In view of (4.11)–(4.12),

$$i_N(\lambda, \mu) = A(u, \mu)\exp\!\left\{i\left([uT] - \frac{N}{2} + 1\right)\mu\right\}\sum_{s=0}^{N-1}\exp\{-is(\lambda - \mu)\}\,h\!\left(\frac{s}{N}\right) + O\!\left(\frac{N}{T}\,L_N(\lambda - \mu)\right).$$

Furthermore, by Lemma A.7 of Dahlhaus (1997),

$$\left|\sum_{s=0}^{N-1}\exp(-is\mu)\,h\!\left(\frac{s}{N}\right)\right| \le C\,\frac{L_N^2(\mu)}{N}.$$
Hence

$$r_N^2 \le C\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left\{\frac{L_N^4(\lambda - \mu)}{N^3} + \frac{N^2}{T^2}\,|(\lambda - \mu)\bmod 2\pi|^2\,L_N^2(\lambda - \mu)\right\}d\lambda\,d\mu \le C\left(\frac{1}{N^2} + \frac{N}{T^2}\right) = o\!\left(\frac{N^2}{T^2}\right)$$

by the assumption $T/N^2 + N/T \to 0$, where $C$ does not depend on $\phi$ and $N$. This finishes the proof of assertion (4.22). □
We remark that Theorem 4.2 together with (4.2) leads to the same approximation as in (4.3), with $O(N^2/T^2)$ instead of $O(N/T)$, but under stronger assumptions.
To prove the next result we need the following lemma, which can be established in the same way as Theorem 19 of Ibragimov and Has'minskii (1981).
satisfy the inequalities $|z_j| > 1$ (see Brockwell and Davis, 1987, Problem 8.3; also Whittle, 1963). Therefore

$$\|\hat a\|^2 \le \frac{1}{2\pi}\int_{-\pi}^{\pi}\prod_{j=1}^{p}|1 - z_j^{-1}\exp(i\lambda)|^2\,d\lambda \le 4^p \qquad\text{almost surely.}$$
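This bound is easy to check numerically: the squared Euclidean norm of the coefficient vector of $\prod_j(1 - z_j^{-1}x)$ with $|z_j| > 1$ never exceeds $4^p$, by Parseval and $|1 - z_j^{-1}\mathrm e^{i\lambda}| \le 2$. A small sketch with random real roots:

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(3)
p = 6
for _ in range(5):
    # real roots z_j with |z_j| > 1 (a causal polynomial)
    z = rng.choice([-1.0, 1.0], p) * (1.0 + rng.uniform(0.01, 3.0, p))
    a = P.polyfromroots(z) * np.prod(-1.0 / z)  # coefficients of prod_j (1 - x/z_j)
    assert abs(a[0] - 1.0) < 1e-9               # constant coefficient equals 1
    print(np.sum(a**2) <= 4.0**p, float(np.sum(a**2)))
```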
ACKNOWLEDGEMENTS
The authors are very grateful to the referees whose comments led to a substantial
improvement of the paper.
This research was supported by the Deutsche Forschungsgemeinschaft and the
Alexander von Humboldt Foundation.
REFERENCES
BRILLINGER, D. R. (1981) Time Series: Data Analysis and Theory. San Francisco, CA: Holden Day.
BROCKWELL, P. J. and DAVIS, R. A. (1987) Time Series: Theory and Methods. New York: Springer.
DAHLHAUS, R. (1983) Spectral analysis with tapered data. J. Time Ser. Anal. 4, 163–75.
—— (1988) Small sample effects in time series analysis: a new asymptotic theory and a new estimate. Ann. Stat. 16, 804–41.
—— (1996a) On the Kullback–Leibler information divergence of locally stationary processes. Stochastic Process. Appl. 62, 139–68.
—— (1996b) Asymptotic statistical inference for nonstationary processes with evolutionary spectra. In Athens Conference on Applied Probability and Time Series, Vol. II (eds P. M. Robinson and M. Rosenblatt), Lecture Notes in Statistics 115. New York: Springer, pp. 145–59.
—— (1997) Fitting time series models to nonstationary processes. Ann. Stat. 25, 1–37.
——, NEUMANN, M. and VON SACHS, R. (1997) Nonlinear wavelet estimation of time-varying autoregressive processes. Bernoulli, to be published.
DAVIES, R. (1973) Asymptotic inference in stationary Gaussian time-series. Adv. Appl. Probab. 5, 469–97.
HALLIN, M. (1978) Mixed autoregressive moving average multivariate processes with time dependent coefficients. J. Multivariate Anal. 8, 567–72.
—— (1984) Spectral factorization of nonstationary moving average processes. Ann. Stat. 12, 172–92.
HUSSAIN, M. Y. and SUBBA RAO, T. (1976) The estimation of autoregressive, moving average and mixed autoregressive moving average systems with time-dependent parameters of non-stationary time series. Int. J. Control 23, 647–56.
IBRAGIMOV, I. A. and HAS'MINSKII, R. Z. (1981) Statistical Estimation. Asymptotic Theory. New York: Springer.
KÜNSCH, H. R. (1995) A note on causal solutions for locally stationary AR processes. Preprint, ETH Zürich.
LEWIS, R. A. and REINSEL, G. C. (1988) Prediction error of multivariate time series with misspecified models. J. Time Ser. Anal. 9, 43–57.
MÉLARD, G. (1985) An example of the evolutionary spectrum theory. J. Time Ser. Anal. 6, 81–90.
MILLER, K. S. (1969) Nonstationary autoregressive processes. IEEE Trans. Inform. Theory IT-15, 315–16.
PRIESTLEY, M. B. (1965) Evolutionary spectra and non-stationary processes. J. R. Stat. Soc. B 27, 204–29.
—— (1981) Spectral Analysis and Time Series, Vol. 1. London: Academic.
—— (1988) Non-linear and Non-stationary Time Series Analysis. London: Academic.