
Time Series Analysis

Homework 3 (Solution)

This claim is discussed in the theory class. Here, we review it.

0. Claim: Let $\{Z_t\} \sim \text{i.i.d.}(0, \sigma^2)$. A linear process $\{X_t, t \in \mathbb{Z}\}$ with representation
$$X_t = \sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}, \qquad \text{where } \sum_{j=-\infty}^{\infty} |\psi_j| < \infty,$$
is stationary with mean 0, autocovariance function (ACVF)
$$\gamma_X(h) = \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h}, \tag{1}$$
and
$$f_X(\lambda) = \frac{\sigma^2}{2\pi} \left|\psi(e^{-i\lambda})\right|^2, \tag{2}$$
where $\psi(z) = \sum_{j=-\infty}^{\infty} \psi_j z^j$, and
$$f_X(\lambda) = \frac{1}{2\pi} \sum_{h=-\infty}^{\infty} e^{-ih\lambda} \gamma_X(h), \qquad -\pi \le \lambda \le \pi, \tag{3}$$
is known as the spectral density.

Proof of the claim:


• To show that it is a stationary process with mean 0, firstly we show that $E[X_t] = 0$:
$$E[X_t] = E\left[\sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}\right] = \sum_{j=-\infty}^{\infty} \psi_j E[Z_{t-j}] = 0, \qquad \text{since } \{Z_t\} \sim \text{i.i.d.}(0, \sigma^2),$$
where the interchange of expectation and sum is justified by the absolute summability $\sum_{j} |\psi_j| < \infty$. Thus $\mu_X = 0$.
• Secondly, we compute the ACVF:
$$\begin{aligned}
\gamma_X(h) &= \operatorname{Cov}(X_{t+h}, X_t) && \text{definition of ACVF} \\
&= E[(X_{t+h} - \mu_X)(X_t - \mu_X)] && \text{definition of Cov} \\
&= E[X_{t+h} X_t] && \text{since } \mu_X = 0 \\
&= E\left[\left(\sum_{j=-\infty}^{\infty} \psi_j Z_{t+h-j}\right)\left(\sum_{k=-\infty}^{\infty} \psi_k Z_{t-k}\right)\right] && \text{definition of } X_t \\
&= \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k E[Z_{t+h-j} Z_{t-k}] && \text{reordering} \\
&= \sum_{j'=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_{h+j'} \psi_k E[Z_{t-j'} Z_{t-k}] && \text{set } h - j = -j' \text{ so } j = h + j' \\
&= \sum_{j'=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_{h+j'} \psi_k \, \delta_{k,j'} \sigma^2 && \text{since } \{Z_t\} \sim \text{i.i.d.}(0, \sigma^2) \\
&= \sigma^2 \sum_{j'=-\infty}^{\infty} \psi_{h+j'} \psi_{j'}, && \text{Kronecker delta}
\end{aligned}$$
which is the same as (1). Since $\mu_X = 0$ and the ACVF does not depend on $t$, $\{X_t\}$ is stationary.
• Finally, we compute the spectral density. From the definition of the spectral density in (3), we have:
$$\begin{aligned}
f_X(\lambda) &= \frac{1}{2\pi} \sum_{h=-\infty}^{\infty} e^{-ih\lambda} \gamma_X(h) \\
&= \frac{\sigma^2}{2\pi} \sum_{h=-\infty}^{\infty} e^{-ih\lambda} \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h} && \text{from (1)} \\
&= \frac{\sigma^2}{2\pi} \sum_{h=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} e^{ij\lambda} e^{-ij\lambda} e^{-ih\lambda} \psi_j \psi_{j+h} && \text{multiplying by } 1 = e^{ij\lambda - ij\lambda} \\
&= \frac{\sigma^2}{2\pi} \sum_{h=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} \left(e^{ij\lambda} \psi_j\right) \left(e^{-i(h+j)\lambda} \psi_{j+h}\right) && \text{since } e^a e^b = e^{a+b} \\
&= \frac{\sigma^2}{2\pi} \left(\sum_{j=-\infty}^{\infty} e^{ij\lambda} \psi_j\right) \left(\sum_{k=-\infty}^{\infty} e^{-ik\lambda} \psi_k\right) && j + h \to k \\
&= \frac{\sigma^2}{2\pi} \, \psi(e^{i\lambda}) \, \psi(e^{-i\lambda}) && \text{definition of } \psi(z) \\
&= \frac{\sigma^2}{2\pi} \left|\psi(e^{-i\lambda})\right|^2, && \text{since the } \psi_j \text{ are real, } \psi(e^{i\lambda}) = \overline{\psi(e^{-i\lambda})}.
\end{aligned}$$
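As a quick numerical sanity check (not part of the original solution; the MA(2) coefficients and sample size below are arbitrary choices), the following Python sketch simulates a finite-order linear process and compares its sample ACVF with formula (1):

```python
import numpy as np

# A minimal sketch: verify (1) for an MA(2) process X_t = Z_t + 0.5 Z_{t-1} - 0.3 Z_{t-2}.
# The coefficients, sigma^2, and sample size are arbitrary illustrative choices.
rng = np.random.default_rng(0)
psi = np.array([1.0, 0.5, -0.3])   # psi_0, psi_1, psi_2; all other psi_j = 0
sigma2 = 1.0
n = 200_000

Z = rng.normal(0.0, np.sqrt(sigma2), size=n + len(psi) - 1)
X = np.convolve(Z, psi, mode="valid")  # X_t = sum_j psi_j Z_{t-j}

for h in range(3):
    # Theoretical ACVF from (1): gamma(h) = sigma^2 * sum_j psi_j psi_{j+h}
    gamma_theory = sigma2 * np.sum(psi[: len(psi) - h] * psi[h:])
    # Sample ACVF (the process has mean 0 by construction)
    gamma_hat = np.mean(X[: n - h] * X[h:])
    print(f"h={h}: theory={gamma_theory:.4f}, sample={gamma_hat:.4f}")
```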

1. Let $\{X_t\}$ be an ARMA(1,1) process,
$$X_t - \phi X_{t-1} = Z_t + \theta Z_{t-1}, \qquad \text{or} \qquad \phi(B) X_t = \theta(B) Z_t,$$
where $\phi(z) = 1 - \phi z$ and $\theta(z) = 1 + \theta z$. Let $|\phi| < 1$ and $|\theta| < 1$ so that $X_t$ is causal and invertible. Then, we have
$$X_t = \psi(B) Z_t, \qquad \text{where } \psi(z) = \frac{1 + \theta z}{1 - \phi z}.$$

a) Compute $\gamma_X(h)$.
b) Let the function $f$ be given by
$$f(\lambda) = \frac{1}{2\pi} \sum_{h=-\infty}^{\infty} e^{-ih\lambda} \gamma_X(h), \qquad -\pi \le \lambda \le \pi.$$
Find $f(\lambda)$ for the ARMA(1,1) process in terms of $\theta$, $\phi$, and $\lambda$.


Solution
P∞
• Part a) Firstly, we express Xt as a linear process with representation Xt = j=−∞ ψj Zt−j so that we
can use (1) and (2). To do this, we start by ψ(z):
1 + θz
ψ(z) = definition of ψ(z) given by the problem
1 − ϕz
 
X∞
= (1 + θz)  ϕj z j  Geometric series since |ϕ| < 1
j=0

X ∞
X
= ϕj z j +θ ϕj z j+1 reordering
j=0 j=0
| {z } | {z }
1+ ∞ set j ′ =j+1
P j j
j=1 ϕ z
∞ ∞ ∞ ∞

−1 j ′
X X X X
=1+ ϕj z j + θ ϕj z =1+ ϕj z j + θ ϕj−1 z j change variable two times j ′ = j + 1
j=1 j ′ =1 j=1 j=1

X
z j ϕj + θϕj−1 factoring z j

=1+
j=1

X
=1+ z j (ϕ + θ) ϕj−1 factoring ϕj−1
j=1

X
=1+ (ϕ + θ) ϕj−1 z j .
j=1
P∞
We have shown that ψ(z) = 1 + j=1 (ϕ + θ) ϕj−1 z j . Applying this fact to the equation Xt = ψ(B)Zt
given by the problem, we conclude that

X
Xt = ψj Zt−j ,
j=1

where

j−1
(ϕ + θ) ϕ , j ≥ 1

ψj = 1, j=0 . (4)

0, otherwise

P∞
Recalling that |ϕ| < 1 and |θ| < 1, it can be shown that for ψj in (4), j=−∞ |ψj | < ∞, and hence we
can use (1) and (2) proved in the claim. By inserting (4) in to (1), we find that for h = 0

$$\begin{aligned}
\gamma_X(0) &= \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_j = \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j^2 \\
&= \sigma^2 \left(1 + \sum_{j=1}^{\infty} \left((\phi + \theta)\, \phi^{j-1}\right)^2\right) && \text{insert (4)} \\
&= \sigma^2 \left(1 + (\phi + \theta)^2 \sum_{j=1}^{\infty} \phi^{2j-2}\right) \\
&= \sigma^2 \left(1 + (\phi + \theta)^2 \sum_{j'=0}^{\infty} \phi^{2j'}\right) && j - 1 = j' \\
&= \sigma^2 \left(1 + \frac{(\phi + \theta)^2}{1 - \phi^2}\right). && \text{geometric series with common ratio } \phi^2,\ |\phi^2| < 1
\end{aligned}$$
Similarly, for $h > 0$, again by inserting (4) into (1), we find that
$$\begin{aligned}
\gamma_X(h) &= \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h} = \sigma^2 \left(\underbrace{(\phi + \theta)\phi^{h-1}}_{\text{for } j=0 \text{ and } h>0} + \sum_{j=1}^{\infty} \underbrace{(\phi + \theta)\, \phi^{j-1}}_{\psi_j} \underbrace{(\phi + \theta)\, \phi^{j+h-1}}_{\psi_{j+h}}\right) \\
&= \sigma^2 \left((\phi + \theta)\phi^{h-1} + (\phi + \theta)^2 \sum_{j=1}^{\infty} \phi^{2j+h-2}\right) \\
&= \sigma^2 \left((\phi + \theta)\phi^{h-1} + (\phi + \theta)^2 \phi^h \sum_{j'=0}^{\infty} \phi^{2j'}\right) && j - 1 = j' \\
&= \sigma^2 \left((\phi + \theta)\phi^{h-1} + \frac{(\phi + \theta)^2 \phi^h}{1 - \phi^2}\right) && \text{geometric series with common ratio } \phi^2 \\
&= \sigma^2 \phi^{h-1} \left((\phi + \theta) + \frac{(\phi + \theta)^2 \phi}{1 - \phi^2}\right).
\end{aligned}$$
For $h < 0$, $\gamma_X(h) = \gamma_X(-h)$ by symmetry of the ACVF.

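As an optional numerical check (not part of the original solution; parameter values and sample size are arbitrary choices), the sketch below simulates an ARMA(1,1) path and compares the sample ACVF with the closed form above:

```python
import numpy as np

# A minimal sketch checking the ARMA(1,1) ACVF formula by simulation.
# phi, theta, sigma2 and n are arbitrary illustrative choices.
rng = np.random.default_rng(1)
phi, theta, sigma2, n = 0.6, 0.4, 1.0, 500_000

Z = rng.normal(0.0, np.sqrt(sigma2), size=n)
X = np.zeros(n)
for t in range(1, n):
    # X_t = phi * X_{t-1} + Z_t + theta * Z_{t-1}  (ignoring the start-up transient)
    X[t] = phi * X[t - 1] + Z[t] + theta * Z[t - 1]

def gamma_theory(h):
    # Closed form derived above, for h >= 0
    if h == 0:
        return sigma2 * (1 + (phi + theta) ** 2 / (1 - phi ** 2))
    return sigma2 * phi ** (h - 1) * ((phi + theta) + (phi + theta) ** 2 * phi / (1 - phi ** 2))

for h in range(4):
    gamma_hat = np.mean(X[: n - h] * X[h:])
    print(f"h={h}: theory={gamma_theory(h):.4f}, sample={gamma_hat:.4f}")
```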
• For part b), we use (2). In view of (2), firstly we need to compute $\left|\psi(e^{-i\lambda})\right|^2$ for the ARMA(1,1) process. As mentioned in the problem, $\psi(z) = (1 + \theta z)/(1 - \phi z)$, so for $-\pi \le \lambda \le \pi$
$$\begin{aligned}
f_X(\lambda) = \frac{\sigma^2}{2\pi} \left|\psi(e^{-i\lambda})\right|^2 &= \frac{\sigma^2}{2\pi} \left|\frac{1 + \theta e^{-i\lambda}}{1 - \phi e^{-i\lambda}}\right|^2 = \frac{\sigma^2}{2\pi} \frac{\left(1 + \theta e^{-i\lambda}\right)\left(1 + \theta e^{-i\lambda}\right)^*}{\left(1 - \phi e^{-i\lambda}\right)\left(1 - \phi e^{-i\lambda}\right)^*} \\
&= \frac{\sigma^2}{2\pi} \cdot \frac{1 + \theta^2 + \theta e^{-i\lambda} + \theta e^{i\lambda}}{1 + \phi^2 - \phi e^{i\lambda} - \phi e^{-i\lambda}} && |e^{ia}|^2 = e^{ia} e^{-ia} = 1 \\
&= \frac{\sigma^2}{2\pi} \cdot \frac{1 + \theta^2 + 2\theta \cos(\lambda)}{1 + \phi^2 - 2\phi \cos(\lambda)}. && \text{Euler's formula } e^{ix} = \cos(x) + i \sin(x)
\end{aligned}$$
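A similar optional sketch (not part of the original solution; the frequencies inspected and averaging scheme are arbitrary choices) compares this closed-form density with an averaged periodogram of simulated paths:

```python
import numpy as np

# A minimal sketch comparing the analytic ARMA(1,1) spectral density with a
# crude periodogram averaged over independent simulated paths.
rng = np.random.default_rng(2)
phi, theta, sigma2, n, reps = 0.6, 0.4, 1.0, 1024, 200

lam = 2 * np.pi * np.fft.rfftfreq(n)          # frequencies in [0, pi]
f_theory = (sigma2 / (2 * np.pi)) * (1 + theta**2 + 2 * theta * np.cos(lam)) / (
    1 + phi**2 - 2 * phi * np.cos(lam)
)

periodograms = np.zeros_like(lam)
for _ in range(reps):
    Z = rng.normal(0.0, np.sqrt(sigma2), size=n)
    X = np.zeros(n)
    for t in range(1, n):
        X[t] = phi * X[t - 1] + Z[t] + theta * Z[t - 1]
    # Periodogram: I(lambda) = |DFT(X)|^2 / (2 pi n)
    periodograms += np.abs(np.fft.rfft(X)) ** 2 / (2 * np.pi * n)
periodograms /= reps

for k in [8, 64, 256]:  # a few interior frequencies, avoiding lambda = 0
    print(f"lambda={lam[k]:.3f}: theory={f_theory[k]:.4f}, periodogram avg={periodograms[k]:.4f}")
```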

2. Let $x_1, \cdots, x_n \in \mathbb{R}$ be data points, $\mu \in \mathbb{R}$, and let
$$\hat{\gamma}(h) = \frac{1}{n} \sum_{t=1}^{n-h} (x_{t+h} - \mu)(x_t - \mu) \qquad \text{for } h \in [0, n).$$
We can extend $\hat{\gamma}$ by setting $\hat{\gamma}(h) = 0$ for $h \ge n$ and $\hat{\gamma}(h) = \hat{\gamma}(-h)$ for $h \in \mathbb{Z}$ with $h < 0$. Show that $\hat{\gamma}$ is a positive definite function on $\mathbb{Z}$.

Solution
To show that $\hat{\gamma}$ is a positive definite function, we need to show that for any positive integer $n$ and any real numbers $a_1, \cdots, a_n$, we have $\sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \hat{\gamma}(t_i - t_j) \ge 0$, where $t_1, \cdots, t_n$ are any $n$ integers. Now, let $a = [a_1, \cdots, a_n]^T$ be a column vector in $\mathbb{R}^n$ and let $a^T$ denote its transpose. Next, to cover the values $\hat{\gamma}(t_i - t_j)$, we consider the samples of $\hat{\gamma}(h)$ for $h \in (-n, n)$, i.e., we define
$$\hat{\Gamma} = \begin{pmatrix} \hat{\gamma}(0) & \hat{\gamma}(-1) & \cdots & \hat{\gamma}(1-n) \\ \hat{\gamma}(1) & \hat{\gamma}(0) & \cdots & \hat{\gamma}(2-n) \\ \vdots & & \ddots & \vdots \\ \hat{\gamma}(n-1) & \hat{\gamma}(n-2) & \cdots & \hat{\gamma}(0) \end{pmatrix}.$$
Thus, to show that $\hat{\gamma}$ is a positive definite function, it suffices to show $a^T \hat{\Gamma} a \ge 0$. Applying the fact that $\hat{\gamma}(h) = \hat{\gamma}(-h)$, $\hat{\Gamma}$ simplifies to
$$\hat{\Gamma} = \begin{pmatrix} \hat{\gamma}(0) & \hat{\gamma}(1) & \cdots & \hat{\gamma}(n-1) \\ \hat{\gamma}(1) & \hat{\gamma}(0) & \cdots & \hat{\gamma}(n-2) \\ \vdots & & \ddots & \vdots \\ \hat{\gamma}(n-1) & \hat{\gamma}(n-2) & \cdots & \hat{\gamma}(0) \end{pmatrix}.$$
Next, we define $y_i = x_i - \mu$. We can rewrite $\hat{\Gamma} = \frac{1}{n} M M^T$, where $M$ is an $n \times 2n$ matrix defined as
$$M = \begin{pmatrix} 0 & \cdots & 0 & 0 & y_1 & y_2 & \cdots & y_n \\ 0 & \cdots & 0 & y_1 & y_2 & \cdots & y_n & 0 \\ \vdots & & & & & & & \vdots \\ 0 & y_1 & y_2 & \cdots & y_n & 0 & \cdots & 0 \end{pmatrix}.$$
So far, we have shown that there is a matrix $M$ such that $\hat{\Gamma} = \frac{1}{n} M M^T$. It remains to show that $a^T \hat{\Gamma} a \ge 0$, which follows from
$$a^T \hat{\Gamma} a = \frac{1}{n} a^T M M^T a = \frac{1}{n} \left(M^T a\right)^T \left(M^T a\right) = \frac{1}{n} \left\| M^T a \right\|^2 \ge 0.$$
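A small numerical illustration of the factorization $\hat{\Gamma} = \frac{1}{n} M M^T$ (a sketch with arbitrary data values; not part of the original solution):

```python
import numpy as np

# A minimal sketch verifying Gamma_hat = (1/n) M M^T for arbitrary data.
rng = np.random.default_rng(3)
n = 5
x = rng.normal(size=n)
mu = 0.2          # arbitrary centering value
y = x - mu

def gamma_hat(h):
    h = abs(h)
    return y[: n - h] @ y[h:] / n if h < n else 0.0

# Gamma_hat built directly from the definition
Gamma = np.array([[gamma_hat(i - j) for j in range(n)] for i in range(n)])

# M: n x 2n matrix of shifted copies of y (row i places y starting at column n - i)
M = np.zeros((n, 2 * n))
for i in range(n):
    M[i, n - i : 2 * n - i] = y

print(np.allclose(Gamma, M @ M.T / n))  # expect True
```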

3. Let $X = (X_t)_{t \in \mathbb{N}}$ be an i.i.d. noise with mean zero and $E X_t^2 = 1$ for $t \in \mathbb{N}$. Define, for $n \in \mathbb{N}$ and $h \ge 0$,
$$\hat{X}_n = \frac{1}{n} \sum_{t=1}^{n} X_t, \qquad \hat{\Gamma}_{h,n} = \frac{1}{n} \sum_{t=1}^{n-h} (X_{t+h} - \hat{X}_n)(X_t - \hat{X}_n).$$

a) Show that $\hat{X}_n \to 0$ almost surely as $n \to \infty$ and $E[\hat{X}_n^2] = 1/n$.
b) Argue that as $n \to \infty$, $\sqrt{n}\, \hat{X}_n \xrightarrow{d} Z$, where $Z$ is a standard normal random variable $N(0,1)$.
c) Show that for $h \ge 0$,
$$\hat{\Gamma}_{h,n} = \frac{1}{n} \sum_{t=1}^{n-h} X_{t+h} X_t + \Theta_{h,n},$$
where the error term satisfies $E|\Theta_{h,n}| \le \frac{C(h+1)}{n}$ for some constant $C > 0$.
d) Deduce that for any fixed $h \ge 0$, it holds almost surely and in expectation that
$$\hat{\Gamma}_{h,n} \to E[X_{t+h} X_t] = \gamma_X(h) \quad \text{as } n \to \infty.$$
e) Assume that $E X_1^4 < \infty$. Argue that as $n \to \infty$, $\sqrt{n}\, (\hat{\Gamma}_{h,n} - \gamma_X(h)) \xrightarrow{d} Z$, where $Z$ is a normal random variable $N(0, \sigma^2)$.

Solution
• Part a) Since $E[X_t] = 0$ for all $t \in \mathbb{N}$, we have $E[\hat{X}_n] = \frac{1}{n} \sum_{t=1}^{n} E[X_t] = 0$. Additionally, by the strong law of large numbers (SLLN), we have $\hat{X}_n \to E[X_t]$ almost surely as $n \to \infty$; thus $\hat{X}_n \to 0$. Next, recalling that $X_t$ is an i.i.d. noise, we find that
$$E[\hat{X}_n^2] = \frac{1}{n^2} \sum_{t=1}^{n} \sum_{s=1}^{n} E[X_s X_t] = \frac{1}{n^2} \sum_{t=1}^{n} E[X_t^2] = \frac{1}{n^2} \sum_{t=1}^{n} 1 = \frac{1}{n}.$$
• Part b) By the central limit theorem, $\frac{\hat{X}_n - \mu_X}{\sigma_X / \sqrt{n}}$ converges in distribution to the standard normal distribution. Applying the fact that in our problem $\mu_X = 0$ and $\sigma_X = 1$, we conclude that $\sqrt{n}\, \hat{X}_n \xrightarrow{d} N(0, 1)$.
• Parts c) and d) Expanding the product, it can be verified that $\frac{1}{n} \sum_{t=1}^{n-h} (X_{t+h} - \hat{X}_n)(X_t - \hat{X}_n) = \frac{1}{n} \sum_{t=1}^{n-h} X_{t+h} X_t + \Theta_{h,n}$, where
$$\Theta_{h,n} = -\frac{1}{n} \hat{X}_n \sum_{t=1}^{n-h} (X_{t+h} + X_t) + \frac{n-h}{n} \hat{X}_n^2,$$
and hence $E|\Theta_{h,n}| \le C(h+1)/n$. For the expectation,
$$\begin{aligned}
E\left[\hat{\Gamma}_{h,n}\right] &= E\left[\frac{1}{n} \sum_{t=1}^{n-h} (X_{t+h} - \hat{X}_n)(X_t - \hat{X}_n)\right] \\
&= \frac{1}{n} \sum_{t=1}^{n-h} E\left[(X_{t+h} - \hat{X}_n)(X_t - \hat{X}_n)\right] \\
&= \frac{1}{n} \sum_{t=1}^{n-h} \left\{ E[X_{t+h} X_t] - E[X_{t+h} \hat{X}_n] - E[X_t \hat{X}_n] + E[\hat{X}_n^2] \right\} \\
&= \frac{n-h}{n} \left( \gamma_X(h) - \frac{1}{n} \right), && \text{since } E[X_t \hat{X}_n] = E[X_{t+h} \hat{X}_n] = E[\hat{X}_n^2] = \frac{1}{n},
\end{aligned}$$
so $E[\hat{\Gamma}_{h,n}] \to \gamma_X(h)$. Next, applying the SLLN to $\frac{1}{n} \sum_{t=1}^{n-h} X_{t+h} X_t$ (for fixed $h$, splitting the $h$-dependent terms into $h+1$ i.i.d. subsequences), noting that $\Theta_{h,n} \to 0$ almost surely by part a), and using Slutsky's theorem, we conclude $\hat{\Gamma}_{h,n} \to \gamma_X(h)$ almost surely.
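A simulation sketch of parts a) and d) (not part of the original solution; the noise distribution and sample sizes are arbitrary choices, and for i.i.d. standard noise $\gamma_X(0) = 1$ and $\gamma_X(h) = 0$ for $h \ge 1$):

```python
import numpy as np

# A minimal sketch: for i.i.d. noise, Gamma_hat(h,n) should approach
# gamma_X(h) = 1{h=0} as n grows.
rng = np.random.default_rng(4)

def gamma_hat(X, h):
    n = len(X)
    Xbar = X.mean()
    return np.sum((X[h:] - Xbar) * (X[: n - h] - Xbar)) / n

for n in [100, 10_000, 1_000_000]:
    X = rng.normal(size=n)  # i.i.d. noise, mean 0, variance 1
    print(f"n={n}: Gamma_hat(0)={gamma_hat(X, 0):.4f}, Gamma_hat(1)={gamma_hat(X, 1):.4f}")
# Expected: Gamma_hat(0) -> 1 and Gamma_hat(1) -> 0
```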

4. Let $\Gamma$ be the covariance matrix of the mean-zero vector $(X_1, \cdots, X_n)$.

a) Show that $\Gamma$ is invertible if $\det(\Gamma) > 0$.
b) Show that $\Gamma$ is invertible if and only if $(X_1, \cdots, X_n)$ are linearly independent.
c) If $\det(\Gamma) = 0$, what is $\hat{X}_n = P_{[1,n-1]} X_n$?

Solution
• Part a) A square matrix is invertible if and only if its determinant is nonzero. Since $\det(\Gamma) > 0$ implies $\det(\Gamma) \ne 0$, $\Gamma$ is invertible, i.e., there exists a matrix $A$ with $A\Gamma = I$, where $I$ is the identity matrix; consistently, $\det(A\Gamma) = \det(A)\det(\Gamma) = \det(I) = 1$ requires $\det(\Gamma) \ne 0$.
• Part b)

5. Let $W$ be an i.i.d. noise. Let $X$ be a causal (stationary) solution of the ARMA equation
$$X_t = \phi_1 X_{t-1} + \cdots + \phi_p X_{t-p} + W_t + \theta_1 W_{t-1} + \cdots + \theta_q W_{t-q}, \qquad t \in \mathbb{Z}.$$

a) Compute the conditional expectation $E[X_t \mid X_u, u < t]$.
b) Define $\vec{X}_t = (X_t, \cdots, X_{t-p+1})$ for $t \in \mathbb{Z}$. Show that the multivariate process $(\vec{X}_t)_{t \in \mathbb{Z}}$ is a Markov process.
Solution
6. A stationary time series $\{X_t\}$ is called strictly linear if it has the representation
$$X_t = \mu + \sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}, \qquad \{Z_t\} \sim \text{i.i.d.}(0, \sigma^2).$$
Show that if $\{X_t\}$ is a strictly linear time series where $\sum_{j=-\infty}^{\infty} \psi_j \ne 0$ and $\sum_{j=-\infty}^{\infty} |\psi_j| < \infty$, then $\bar{X}_n = (X_1 + \cdots + X_n)/n$ is asymptotically normal with mean $\mu$ and variance
$$\frac{1}{n} \sum_{h=-\infty}^{\infty} \gamma(h) = \frac{\sigma^2}{n} \left( \sum_{j=-\infty}^{\infty} \psi_j \right)^2.$$
Solution
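As an empirical illustration of the claim only (a simulation sketch for an MA(1) special case; the coefficients, mean, and sample sizes are arbitrary choices), one can check the stated asymptotic mean and variance:

```python
import numpy as np

# A simulation sketch of the asymptotic normality claim for an MA(1) process
# X_t = mu + Z_t + psi_1 Z_{t-1}.
rng = np.random.default_rng(5)
mu, psi1, sigma2, n, reps = 2.0, 0.7, 1.0, 2_000, 5_000

# Claimed variance of X_bar_n: (sigma^2 / n) * (sum_j psi_j)^2
var_theory = sigma2 / n * (1 + psi1) ** 2

means = np.empty(reps)
for r in range(reps):
    Z = rng.normal(0.0, np.sqrt(sigma2), size=n + 1)
    X = mu + Z[1:] + psi1 * Z[:-1]
    means[r] = X.mean()

print(f"empirical mean of X_bar: {means.mean():.4f} (expect {mu})")
print(f"empirical var of X_bar:  {means.var():.6f} (expect {var_theory:.6f})")
```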
7. Let $X$ be a time series and define the cost function
$$L(a) = E\left[\left(X_{n+h} - \sum_{i=0}^{n} a_i X_i\right)^2\right], \qquad a \in \mathbb{R}^{n+1}.$$

a) Show that if $\gamma_X$ is positive definite, then $L$ is a strictly convex function and deduce that it has a unique minimum.
b) Compute the value $a$ where the minimum is attained and give an alternative proof of Proposition 4.1 in your lecture notes.
Solution
8. Python:
a) Load the dataset AirPassengers.csv from Kaggle into a pandas DataFrame. This dataset contains the monthly number of passengers of an airline company from 1949 to 1960.
b) Plot the time series and observe any trend, seasonality, or other patterns.
c) To check stationarity in practice, we can rely on two different techniques: rolling statistics and the augmented Dickey-Fuller stationarity test. In this exercise, we practice rolling statistics. Rolling statistics is a simple technique where we plot the average and spread of a time series over a short window and see whether they stay consistent or change drastically. This helps us visually determine whether the time series is stable. Test the stationarity of the time series using rolling statistics.
d) As discussed in Exercise 2 of HW1, differencing is a method used to transform a non-stationary time series into a stationary one. In Python, differencing can be performed on the data using the diff() method, followed by dropping the missing values using the dropna() method. If the time series in part c) is non-stationary, perform differencing by applying these two methods repeatedly until the resulting series becomes stationary.
e) Estimate the mean and autocovariance of the stationary time series using the sample mean and autocovariance functions.
f) Plot the autocovariance function and observe any decay or other patterns.

A Python sketch covering parts a) through f) is given below.
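One possible sketch of the workflow; the CSV column name "Month" is an assumption about the Kaggle file and may need adjusting, and the 12-month window and 40 lags are arbitrary choices:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# a) Load the data. Column names vary between Kaggle copies; here we assume
# a date column "Month" and take the first remaining column as the counts.
df = pd.read_csv("AirPassengers.csv", parse_dates=["Month"], index_col="Month")
series = df.iloc[:, 0]

# b) Plot the raw series: expect an upward trend and yearly seasonality.
series.plot(title="Monthly airline passengers, 1949-1960")
plt.show()

# c) Rolling statistics over a 12-month window: if the rolling mean and
# standard deviation drift over time, the series is not stationary.
series.rolling(12).mean().plot(label="rolling mean")
series.rolling(12).std().plot(label="rolling std")
plt.legend()
plt.show()

# d) Difference (repeating if needed) until the rolling statistics look flat.
diffed = series.diff().dropna()

# e) Sample mean and sample autocovariance of the differenced series.
x = diffed.to_numpy()
n = len(x)
mean_hat = x.mean()

def acvf_hat(h):
    # gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (x_{t+h} - x_bar)(x_t - x_bar)
    return np.sum((x[h:] - mean_hat) * (x[: n - h] - mean_hat)) / n

lags = np.arange(40)
gamma = np.array([acvf_hat(h) for h in lags])
print("sample mean:", mean_hat)

# f) Plot the sample ACVF and look for decay or periodic patterns.
plt.stem(lags, gamma)
plt.xlabel("lag h")
plt.ylabel("sample ACVF")
plt.title("Sample autocovariance of the differenced series")
plt.show()
```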
