Homework 3 (Solution)
This claim is discussed in the theory class. Here, we review it.
0. Claim: Let {Zt} ∼ i.i.d.(0, σ²). A linear process {Xt, t ∈ Z} with representation

    Xt = Σ_{j=−∞}^{∞} ψj Z_{t−j},

where Σ_{j=−∞}^{∞} |ψj| < ∞, is stationary with mean 0, autocovariance function (ACVF)

    γX(h) = σ² Σ_{j=−∞}^{∞} ψj ψ_{j+h},    (1)

and spectral density

    fX(λ) = (σ²/2π) |ψ(e^{−iλ})|²,    (2)

where ψ(z) = Σ_{j=−∞}^{∞} ψj z^j and the spectral density is defined by

    fX(λ) = (1/2π) Σ_{h=−∞}^{∞} e^{−ihλ} γX(h),    −π ≤ λ ≤ π.    (3)
• Firstly, we compute the mean. Since E[Zt] = 0 and Σ_{j} |ψj| < ∞,

    E[Xt] = Σ_{j=−∞}^{∞} ψj E[Z_{t−j}] = 0.

Thus µX = 0.
• Secondly, we compute the ACVF. Because the Zt are uncorrelated with variance σ², E[Z_{t+h−k} Z_{t−j}] = σ² if k = j + h and 0 otherwise, so

    Cov(X_{t+h}, Xt) = E[X_{t+h} Xt] = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} ψk ψj E[Z_{t+h−k} Z_{t−j}] = σ² Σ_{j=−∞}^{∞} ψj ψ_{j+h},

which is the same as (1). Since µX = 0 and the ACVF does not change with time, Xt is stationary.
• Finally, we compute the spectral density. From the definition of the spectral density in (3), we have:

    fX(λ) = (1/2π) Σ_{h=−∞}^{∞} e^{−ihλ} γX(h)
          = (σ²/2π) Σ_{h=−∞}^{∞} e^{−ihλ} Σ_{j=−∞}^{∞} ψj ψ_{j+h}                     from (1)
          = (σ²/2π) Σ_{h=−∞}^{∞} Σ_{j=−∞}^{∞} e^{−ihλ} ψj ψ_{j+h}                     reordering
          = (σ²/2π) Σ_{h=−∞}^{∞} Σ_{j=−∞}^{∞} e^{ijλ} e^{−ijλ} e^{−ihλ} ψj ψ_{j+h}    multiplying by 1 = e⁰ = e^{ijλ−ijλ}
          = (σ²/2π) Σ_{h=−∞}^{∞} Σ_{j=−∞}^{∞} e^{ijλ} ψj e^{−i(h+j)λ} ψ_{j+h}         since e^a e^b = e^{a+b}
          = (σ²/2π) Σ_{j=−∞}^{∞} e^{ijλ} ψj Σ_{k=−∞}^{∞} e^{−ikλ} ψk                  substituting k = j + h
          = (σ²/2π) ψ(e^{iλ}) ψ(e^{−iλ})                                              definition of ψ(z): ψ(z) = Σ_{j=−∞}^{∞} ψj z^j
          = (σ²/2π) |ψ(e^{−iλ})|²,

which is (2).
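As a small numerical check of the claim (not part of the solution; the finitely supported weights ψ0 = 1, ψ1 = 0.4, ψ2 = −0.3 and σ² = 1 below are arbitrary illustrative choices), one can verify that computing γX from (1) and then fX from (3) reproduces (2):

import numpy as np

sigma2 = 1.0
psi = {0: 1.0, 1: 0.4, 2: -0.3}            # illustrative finitely supported psi-weights

# ACVF from (1): gamma_X(h) = sigma^2 * sum_j psi_j psi_{j+h}
def gamma(h):
    return sigma2 * sum(psi.get(j, 0.0) * psi.get(j + h, 0.0) for j in psi)

lam = np.linspace(-np.pi, np.pi, 7)        # a few frequencies in [-pi, pi]

# Spectral density from (3) (the sum over h is finite here) and directly from (2)
f_from_3 = sum(np.exp(-1j * h * lam) * gamma(h) for h in range(-2, 3)).real / (2 * np.pi)
f_from_2 = sigma2 / (2 * np.pi) * np.abs(sum(c * np.exp(-1j * j * lam) for j, c in psi.items())) ** 2

print(np.max(np.abs(f_from_3 - f_from_2)))  # agreement up to floating-point error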
1. Consider the ARMA(1,1) process given by ϕ(B)Xt = θ(B)Zt, where ϕ(z) = 1 − ϕz and θ(z) = 1 + θz. Let |ϕ| < 1 and |θ| < 1 so that Xt is causal and invertible. Then we have

    Xt = ψ(B)Zt,    where ψ(z) = (1 + θz)/(1 − ϕz).
a) Compute γX (h).
b) Let the function f be given by

    f(λ) = (1/2π) Σ_{h=−∞}^{∞} e^{−ihλ} γX(h),    −π ≤ λ ≤ π.

Compute f.
Solution
• For part a), we first need the coefficients ψj. Expanding ψ(z) = (1 + θz)/(1 − ϕz) = (1 + θz)(1 + ϕz + ϕ²z² + ···) gives ψ0 = 1 and ψj = ϕ^j + θϕ^{j−1} = (ϕ + θ)ϕ^{j−1} for j ≥ 1, i.e.,

    ψj = (ϕ + θ)ϕ^{j−1},  j ≥ 1;    ψj = 1,  j = 0;    ψj = 0,  otherwise.    (4)

Recalling that |ϕ| < 1 and |θ| < 1, it can be shown that for ψj in (4), Σ_{j=−∞}^{∞} |ψj| < ∞, and hence we can use (1) and (2) proved in the claim. By inserting (4) into (1), we find that for h = 0
    γX(0) = σ² Σ_{j=−∞}^{∞} ψj ψj = σ² Σ_{j=−∞}^{∞} ψj²
          = σ² ( 1 + Σ_{j=1}^{∞} ((ϕ + θ)ϕ^{j−1})² )                 insert (4)
          = σ² ( 1 + (ϕ + θ)² Σ_{j=1}^{∞} ϕ^{2j−2} )
          = σ² ( 1 + (ϕ + θ)² Σ_{j′=0}^{∞} ϕ^{2j′} )                 substituting j′ = j − 1
          = σ² ( 1 + (ϕ + θ)²/(1 − ϕ²) ).                            geometric series with common ratio ϕ², |ϕ²| < 1
Similarly, for h > 0 (so that h ≠ 0), again by inserting (4) into (1), we find that

    γX(h) = σ² Σ_{j=−∞}^{∞} ψj ψ_{j+h}
          = σ² ( (ϕ + θ)ϕ^{h−1} + Σ_{j=1}^{∞} (ϕ + θ)ϕ^{j−1} (ϕ + θ)ϕ^{j+h−1} )       j = 0 term (h > 0) plus the terms with j ≥ 1
          = σ² ( (ϕ + θ)ϕ^{h−1} + (ϕ + θ)² Σ_{j=1}^{∞} ϕ^{2j+h−2} )
          = σ² ( (ϕ + θ)ϕ^{h−1} + (ϕ + θ)² ϕ^h Σ_{j=1}^{∞} ϕ^{2j−2} )
          = σ² ( (ϕ + θ)ϕ^{h−1} + (ϕ + θ)² ϕ^h Σ_{j′=0}^{∞} ϕ^{2j′} )                 substituting j′ = j − 1
          = σ² ( (ϕ + θ)ϕ^{h−1} + (ϕ + θ)² ϕ^h/(1 − ϕ²) )                             geometric series with common ratio ϕ², |ϕ²| < 1
          = σ² ϕ^{h−1} ( (ϕ + θ) + (ϕ + θ)² ϕ/(1 − ϕ²) ).

For h < 0, γX(h) = γX(−h) by symmetry of the ACVF. (A quick numerical check of parts a) and b) is sketched after part b).)
• For part b), we use (2). In view of (2), we first need to compute |ψ(e^{−iλ})|² for the ARMA(1,1) process. As mentioned in the problem, ψ(z) = (1 + θz)/(1 − ϕz), so for −π ≤ λ ≤ π

    fX(λ) = (σ²/2π) |ψ(e^{−iλ})|² = (σ²/2π) |(1 + θe^{−iλ})/(1 − ϕe^{−iλ})|²
          = (σ²/2π) (1 + θe^{−iλ})(1 + θe^{−iλ})* / [ (1 − ϕe^{−iλ})(1 − ϕe^{−iλ})* ]
          = (σ²/2π) (1 + θ² + θe^{−iλ} + θe^{iλ}) / (1 + ϕ² − ϕe^{iλ} − ϕe^{−iλ})     since |e^{ia}|² = e^{ia} e^{−ia} = 1
          = (σ²/2π) (1 + θ² + 2θ cos(λ)) / (1 + ϕ² − 2ϕ cos(λ)).                      Euler's formula e^{ix} = cos(x) + i sin(x)
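As a quick numerical sanity check of parts a) and b) (a sketch only, not part of the graded solution; the values ϕ = 0.5, θ = 0.3, σ² = 1 and the truncation level J are arbitrary choices), one can compare the closed forms above with direct evaluation of (1) and (2) using the ψj in (4):

import numpy as np

phi, theta, sigma2 = 0.5, 0.3, 1.0   # illustrative values with |phi| < 1, |theta| < 1
J = 500                              # truncation level for the absolutely summable psi-weights

# psi_0 = 1 and psi_j = (phi + theta) * phi**(j - 1) for j >= 1, as in (4)
psi = np.concatenate(([1.0], (phi + theta) * phi ** np.arange(J)))

# Part a): closed-form ACVF versus the truncated sum in (1)
def gamma_closed(h):
    if h == 0:
        return sigma2 * (1 + (phi + theta) ** 2 / (1 - phi ** 2))
    return sigma2 * phi ** (abs(h) - 1) * ((phi + theta) + (phi + theta) ** 2 * phi / (1 - phi ** 2))

def gamma_sum(h):
    h = abs(h)
    return sigma2 * np.sum(psi[: len(psi) - h] * psi[h:])

print([round(gamma_closed(h) - gamma_sum(h), 12) for h in range(4)])   # all ~0

# Part b): closed-form spectral density versus (sigma^2 / 2 pi) |psi(e^{-i lambda})|^2 from (2)
lam = np.linspace(-np.pi, np.pi, 9)
z = np.exp(-1j * lam)
f_closed = sigma2 / (2 * np.pi) * (1 + theta**2 + 2 * theta * np.cos(lam)) / (1 + phi**2 - 2 * phi * np.cos(lam))
f_direct = sigma2 / (2 * np.pi) * np.abs((1 + theta * z) / (1 - phi * z)) ** 2
print(np.max(np.abs(f_closed - f_direct)))                             # agreement up to floating-point error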
2. We can extend γ̂ by setting γ̂(h) = 0 for h ≥ n and γ̂(h) = γ̂(−h) for h ∈ Z with h < 0. Show that γ̂ is a positive definite function on Z.
Solution
To show that γ̂ is a positive definite function, we need to show that for any positive integer n and any real numbers a1, …, an, we have Σ_{i=1}^{n} Σ_{j=1}^{n} ai aj γ̂(ti − tj) ≥ 0, where t1, …, tn are any n integers. Now, let a = [a1, …, an]^T be a column vector in R^n and let a^T denote its transpose. Next, to obtain the values γ̂(ti − tj), we consider the samples of γ̂(h) for h ∈ (−n, n), i.e., we define

    Γ̂ = [ γ̂(0)       γ̂(−1)      ···   γ̂(1 − n) ]
        [ γ̂(1)       γ̂(0)       ···   γ̂(2 − n) ]
        [  ⋮           ⋮                  ⋮      ]
        [ γ̂(n − 1)   γ̂(n − 2)   ···   γ̂(0)     ].
Thus, to show that γ̂ is a positive definite function, it suffices to show a^T Γ̂ a ≥ 0. Applying the fact that γ̂(h) = γ̂(−h), Γ̂ simplifies to

    Γ̂ = [ γ̂(0)       γ̂(1)       ···   γ̂(n − 1) ]
        [ γ̂(1)       γ̂(0)       ···   γ̂(n − 2) ]
        [  ⋮           ⋮                  ⋮      ]
        [ γ̂(n − 1)   γ̂(n − 2)   ···   γ̂(0)     ].
Next, we define yi = xi − µ. We can rewrite Γ̂ = (1/n) M M^T, where M is the n × 2n matrix defined as

    M = [ 0    ···   0    0     y1   y2   ···   yn ]
        [ 0    ···   0    y1    y2   ···   yn   0  ]
        [ ⋮                                     ⋮  ]
        [ 0    y1    y2   ···   yn   0    ···   0  ].
So far, we have shown that there is a matrix M such that Γ̂ = (1/n) M M^T. It remains to show that a^T Γ̂ a ≥ 0, which follows from

    a^T Γ̂ a = (1/n) a^T M M^T a = (1/n) (a^T M)(M^T a) = (1/n) (M^T a)^T (M^T a) = (1/n) ‖M^T a‖² ≥ 0.
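As a numerical illustration of this argument (a sketch only; the sample size n = 6, the Gaussian data and the use of the sample mean in place of µ are arbitrary choices), one can check that Γ̂ = (1/n) M M^T and that the quadratic form is nonnegative:

import numpy as np

rng = np.random.default_rng(0)
n = 6
x = rng.normal(size=n)
y = x - x.mean()                    # y_i = x_i - mu, with the sample mean standing in for mu

# Sample ACVF gamma_hat(h) = (1/n) sum_{t=1}^{n-|h|} y_{t+|h|} y_t, extended symmetrically
def gamma_hat(h):
    h = abs(h)
    return 0.0 if h >= n else np.sum(y[h:] * y[: n - h]) / n

# The matrix Gamma_hat and the shifted matrix M from the proof
Gamma = np.array([[gamma_hat(i - j) for j in range(n)] for i in range(n)])
M = np.zeros((n, 2 * n))
for i in range(n):
    M[i, n - i : 2 * n - i] = y     # row i: the vector y shifted i positions to the left

print(np.allclose(Gamma, M @ M.T / n))                   # True: Gamma_hat = (1/n) M M^T
a = rng.normal(size=n)
print(a @ Gamma @ a, np.linalg.norm(M.T @ a) ** 2 / n)   # equal and nonnegative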
3. Let X = (Xt)_{t∈N} be an i.i.d. noise with mean zero and E[Xt²] = 1 for t ∈ N. Define for t ∈ N and h ≥ 0,

    X̂n = (1/n) Σ_{t=1}^{n} Xt,    Γ̂_{h,n} = (1/n) Σ_{t=1}^{n−h} (X_{t+h} − X̂n)(Xt − X̂n).
Solution
• Part a) Since E[Xt] = 0 for all t ∈ N, we have E[X̂n] = (1/n) Σ_{t=1}^{n} E[Xt] = 0. Additionally, by the strong law of large numbers (SLLN), X̂n → E[Xt] almost surely as n → ∞; thus X̂n → 0. Next, recalling that Xt is an i.i.d. noise, we find that

    E[X̂n²] = (1/n²) Σ_{t=1}^{n} Σ_{s=1}^{n} E[Xs Xt] = (1/n²) Σ_{t=1}^{n} E[Xt²] = (1/n²) Σ_{t=1}^{n} 1 = 1/n.
• Part b) By the central limit theorem, (X̂n − µX)/(σX/√n) converges in distribution to the standard normal distribution. Applying the fact that in our problem µX = 0 and σX = 1, we conclude that √n X̂n →_d N(0, 1).
• Parts c) and d) It can be verified that

    (1/n) Σ_{t=1}^{n−h} (X_{t+h} − X̂n)(Xt − X̂n) = (1/n) Σ_{t=1}^{n−h} X_{t+h} Xt + Θ_{h,n},   where
    Θ_{h,n} = −(1/n) X̂n Σ_{t=1}^{n−h} (X_{t+h} + Xt) + ((n − h)/n) X̂n².

Taking expectations,

    E[Γ̂_{h,n}] = E[ (1/n) Σ_{t=1}^{n−h} (X_{t+h} − X̂n)(Xt − X̂n) ]
               = (1/n) Σ_{t=1}^{n−h} { E[X_{t+h}] E[Xt] − E[X_{t+h}] E[X̂n] − E[Xt] E[X̂n] + E[X̂n²] }       X̂n ⊥ Xt
               = (1/n) Σ_{t=1}^{n−h} { µX² − µX² − µX² + (1/n) Σ_{|k|<n} (1 − |k|/n) γX(k) + µX² }
               = ((n − h)/n) · (1/n) Σ_{|k|<n} (1 − |k|/n) γX(k),

where the last factor is Var[X̂n]. Next, using the LLN and Slutsky's theorem, we conclude that Γ̂_{h,n} → γX(h).
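A short simulation (a sketch only; the Gaussian noise, n = 2000 and the 500 repetitions are arbitrary choices) illustrates parts a)–d): E[X̂n²] ≈ 1/n, √n X̂n is approximately N(0, 1), and Γ̂_{h,n} approaches γX(h), which equals 1 for h = 0 and 0 for h ≥ 1:

import numpy as np

rng = np.random.default_rng(1)
n, reps = 2000, 500

# Parts a), b): Monte Carlo check for the sample mean of i.i.d. noise with E[X_t] = 0, E[X_t^2] = 1
Xbar = np.array([rng.normal(size=n).mean() for _ in range(reps)])
print(np.mean(Xbar ** 2), 1 / n)          # E[Xbar_n^2] is close to 1/n
print(np.var(np.sqrt(n) * Xbar))          # variance of sqrt(n) Xbar_n is close to 1

# Parts c), d): sample ACVF of a single long i.i.d. sample
X = rng.normal(size=n)
Xb = X.mean()
def Gamma_hat(h):
    return np.sum((X[h:] - Xb) * (X[: n - h] - Xb)) / n

print([round(Gamma_hat(h), 3) for h in range(4)])   # approximately [1, 0, 0, 0] = gamma_X(h)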
Solution
• Part a) Let A be a square matrix such that AΓ = I, where I is the identity matrix. Then we have det(AΓ) = det(A) det(Γ) = det(I) = 1, so det(Γ) ≠ 0 (equivalently, Γ is invertible). Since Γ is a covariance matrix, it is positive semidefinite and det(Γ) ≥ 0; hence det(Γ) > 0.
• Part b)
5. Let W be an i.i.d. noise. Let X be a causal (stationary) solution of the ARMA equation
Solution
7. Let X be a time series and define the cost function

    L(a) = E[ ( X_{n+h} − Σ_{i=0}^{n} ai Xi )² ],    a ∈ R^{n+1}.
a) Show that if γX is positive definite, then L is a strictly convex function and deduce that it has a
unique minimum.
b) Compute the value of a at which the minimum is attained, and give an alternative proof of Proposition 4.1 in your lecture notes.
Solution
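As a numerical illustration of the minimization in part b) (a sketch only, not the requested proof; the AR(1)-type autocovariance γX(h) = 0.7^{|h|} and the sizes n = 5, h = 1 are arbitrary choices), the minimizer can be obtained by solving the normal equations Σ_j aj γX(i − j) = γX(n + h − i), i = 0, …, n:

import numpy as np

# Illustrative stationary ACVF; any positive definite gamma_X works
gamma = lambda h: 0.7 ** np.abs(h)

n, h = 5, 1                                   # predict X_{n+h} from X_0, ..., X_n
idx = np.arange(n + 1)
Gamma = gamma(idx[:, None] - idx[None, :])    # (n+1) x (n+1) matrix with entries gamma_X(i - j)
g = gamma(n + h - idx)                        # right-hand side gamma_X(n + h - i)

a_star = np.linalg.solve(Gamma, g)            # unique minimizer of L when Gamma is positive definite
print(a_star)

# Sanity check: L(a) = gamma_X(0) - 2 a.g + a^T Gamma a is smallest at a_star
L = lambda a: gamma(0) - 2 * a @ g + a @ Gamma @ a
print(L(a_star), L(a_star + 0.1 * np.ones(n + 1)))   # a perturbed a gives a larger cost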
8. Python:
a) Load the dataset AirPassengers.csv from Kaggle into a pandas dataframe. This dataset contains
the monthly number of passengers of an airline company from 1949 to 1960.
b) Plot the time series and observe any trend, seasonality, or other patterns.
c) In order to check stationarity in practice, we can rely on two different techniques for identifying time series stationarity: rolling statistics and the augmented Dickey-Fuller stationarity test. In this exercise, we practice rolling statistics. Rolling statistics is a simple technique where we plot the average and spread of a time series over a short rolling window and see if they stay consistent or change drastically. This helps us visually determine whether the time series is stable or not. Test the stationarity of the time series using rolling statistics.
d) As discussed in Exercise 2 of HW1, differencing is a method used to transform a non-stationary
time series into a stationary one. In Python, differencing can be performed on the data using the diff()
method, followed by dropping the missing values using the dropna() method. If the time series in part
c) is non-stationary, perform differencing by applying these two methods repeatedly until the resulting
series becomes stationary.
e) Estimate the mean and autocovariance of the stationary time series using the sample mean and
autocovariance functions.
f) Plot the autocovariance function and observe any decay or other patterns.
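A possible end-to-end sketch for parts a)–f) follows (it assumes AirPassengers.csv sits in the working directory with columns 'Month' and '#Passengers', as in the common Kaggle version, and that pandas, matplotlib and statsmodels are installed; the augmented Dickey-Fuller p-value mentioned in part c) is used here only as a convenient stopping rule for the differencing loop in part d)):

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller, acovf

# a) load the data
df = pd.read_csv('AirPassengers.csv', parse_dates=['Month'], index_col='Month')
series = df['#Passengers']

# b) plot the raw series (clear upward trend and yearly seasonality)
series.plot(title='Monthly airline passengers, 1949-1960')
plt.show()

# c) rolling statistics: a 12-month rolling mean and standard deviation
def rolling_plot(x, window=12, title=''):
    x.plot(label='series')
    x.rolling(window).mean().plot(label='rolling mean')
    x.rolling(window).std().plot(label='rolling std')
    plt.legend(); plt.title(title); plt.show()

rolling_plot(series, title='Rolling statistics of the raw series')

# d) difference repeatedly until the series looks stationary
diffed = series.copy()
while adfuller(diffed)[1] > 0.05:            # p-value above 5%: not yet stationary
    diffed = diffed.diff().dropna()
rolling_plot(diffed, title='Rolling statistics after differencing')

# e) sample mean and sample autocovariance of the (approximately) stationary series
print('sample mean:', diffed.mean())
gamma_hat = acovf(diffed, nlag=24, fft=False)
print('sample ACVF, lags 0-5:', gamma_hat[:6])

# f) plot the sample autocovariance function and look for decay
plt.stem(range(len(gamma_hat)), gamma_hat)
plt.xlabel('lag h'); plt.ylabel('sample ACVF'); plt.title('Sample autocovariance')
plt.show()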