Exercises - Predictors
Exercise 1
Consider the following process:
$$ y(t) = \frac{1}{4} \cdot y(t-1) + e(t) + 2 \cdot e(t-1), \qquad e(t) \sim WN\left(0, \frac{1}{4}\right) $$
1. Derive the expression for the 1-step optimal predictor from the available data.
2. What is the value of the 1-step prediction error variance?
3. Derive the expression for the 2-step optimal predictor from the available data.
4. What is the value of the 2-step prediction error variance?
5. Given the following observations:
$$ y(1) = 1, \quad y(2) = \frac{1}{2}, \quad y(3) = -\frac{1}{2}, \quad y(4) = 0, \quad y(5) = -\frac{1}{2} $$
compute $\hat{y}(6|5)$ and $\hat{y}(7|5)$.
1) Derive the expression for the 1-step optimal predictor from the available data.
The process is an ARMA(1, 1) with the following operatorial representation:
$$ y(t) = \frac{1 + 2 \cdot z^{-1}}{1 - \frac{1}{4} \cdot z^{-1}} \cdot e(t) = W(z) \cdot e(t) $$
The pole of $W(z)$ is $z = \frac{1}{4}$ and it is inside the unit circle, so the digital filter $W(z)$ is asymptotically stable. Moreover $e(t)$ is a WSS process. Thus $y(t)$ is a WSS process.
In order to find the predictor, first of all, we have to check if $y(t)$ is in canonical representation:
1. Same degree: ✓
2. Coprime: ✓
3. Monic: ✓
4. Roots inside the unit circle: ✗ (the zero of the numerator, $z = -2$, is outside!)
In order to replace the zero outside the unit circle, we apply the all-pass filter:
$$ T(z) = a \cdot \frac{z + \frac{1}{a}}{z + a} = 2 \cdot \frac{z + \frac{1}{2}}{z + 2} $$
So the new dynamic filter is:
$$ \hat{W}(z) = \frac{1 + \frac{1}{2} \cdot z^{-1}}{1 - \frac{1}{4} \cdot z^{-1}} $$
driven by the new white noise $\eta(t) = 2 \cdot e(t)$, whose mean and variance are given by:
$$ m_\eta = E[\eta(t)] = 2 \cdot E[e(t)] = 2 \cdot m_e = 0 $$
$$ \lambda^2_\eta = E\left[(\eta(t) - m_\eta)^2\right] = E\left[\eta(t)^2\right] = 4 \cdot E\left[e(t)^2\right] = 4 \cdot \lambda^2_e = 1 $$
So:
$$ \eta(t) \sim WN(0, 1) $$
We can conclude that the canonical representation of the process is:
$$ y(t) = \frac{1 + \frac{1}{2} \cdot z^{-1}}{1 - \frac{1}{4} \cdot z^{-1}} \cdot \eta(t), \qquad \eta(t) \sim WN(0, 1) $$
Let's compute one step of the polynomial long division between $C(z) = 1 + \frac{1}{2} \cdot z^{-1}$ and $A(z) = 1 - \frac{1}{4} \cdot z^{-1}$:
$$
\begin{array}{r|l}
1 + \frac{1}{2} \cdot z^{-1} & 1 - \frac{1}{4} \cdot z^{-1} \\
-1 + \frac{1}{4} \cdot z^{-1} & 1 \\
\hline
\phantom{-1} + \frac{3}{4} \cdot z^{-1} &
\end{array}
$$
so that $Q_1(z) = 1$ and $R_1(z) = \frac{3}{4} \cdot z^{-1}$.
Thus we have:
$$ y(t) = \left[ Q_1(z) + \frac{R_1(z)}{A(z)} \right] \cdot \eta(t) = Q_1(z) \cdot \eta(t) + \frac{R_1(z)}{A(z)} \cdot \eta(t) = \eta(t) + \frac{\frac{3}{4} \cdot z^{-1}}{1 - \frac{1}{4} \cdot z^{-1}} \cdot \eta(t) $$
While the first term is unpredictable with the information available at time $t-1$, the second term is completely predictable, since it depends only on $\eta(t-1), \eta(t-2), \dots$ Thus the 1-step optimal predictor from the noise is:
$$ \hat{y}(t|t-1) = \frac{\frac{3}{4} \cdot z^{-1}}{1 - \frac{1}{4} \cdot z^{-1}} \cdot \eta(t) = \frac{R_1(z)}{A(z)} \cdot \eta(t) $$
To obtain the predictor from the available data, recall that the noise can be reconstructed from the data through the whitening filter: from
$$ y(t) = \frac{C(z)}{A(z)} \cdot e(t) \iff A(z) \cdot y(t) = C(z) \cdot e(t) $$
we get
$$ e(t) = \frac{A(z)}{C(z)} \cdot y(t) = \tilde{W}(z) \cdot y(t) $$
In general, performing $r$ steps of the long division of $C(z)$ by $A(z)$:
$$ y(t) = \frac{C(z)}{A(z)} \cdot e(t) = \underbrace{\left[ \frac{R_r(z)}{A(z)} + Q_r(z) \right]}_{r \text{ steps of long division}} \cdot e(t) = \underbrace{\frac{R_r(z)}{A(z)} \cdot e(t)}_{r\text{-step predictor}} + \underbrace{Q_r(z) \cdot e(t)}_{r\text{-step prediction error}} $$
so that
$$ \hat{y}(t|t-r) = \frac{R_r(z)}{A(z)} \cdot e(t) = \frac{R_r(z)}{A(z)} \cdot \underbrace{\frac{A(z)}{C(z)} \cdot y(t)}_{\text{whitening filter}} = \frac{R_r(z)}{C(z)} \cdot y(t) = \frac{\tilde{R}_r(z)}{C(z)} \cdot y(t-r) $$
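As a side illustration, the $r$ steps of the long division can also be carried out numerically. Below is a minimal Python sketch (the function name and the coefficient ordering, ascending powers of $z^{-1}$, are arbitrary choices):

```python
# Minimal sketch of r steps of the long division C(z)/A(z), with coefficients
# listed in ascending powers of z^-1 (A is assumed monic, A[0] = 1).

def long_division(C, A, r):
    rem = list(C) + [0.0] * (r + len(A))   # working copy of the dividend, padded with zeros
    Q = []
    for i in range(r):
        q = rem[i]                          # next quotient coefficient (since A[0] = 1)
        Q.append(q)
        for j, a in enumerate(A):           # subtract q * z^-i * A(z) from the remainder
            rem[i + j] -= q * a
    return Q, rem                           # quotient and remainder coefficients

# Exercise 1: C(z) = 1 + 1/2 z^-1, A(z) = 1 - 1/4 z^-1
print(long_division([1.0, 0.5], [1.0, -0.25], 1))   # Q1 = 1,            R1 = 3/4 z^-1
print(long_division([1.0, 0.5], [1.0, -0.25], 2))   # Q2 = 1 + 3/4 z^-1, R2 = 3/16 z^-2
```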
Thus the 1-step optimal predictor from the available data is:
$$ \hat{y}(t|t-1) = \frac{R_1(z)}{A(z)} \cdot \underbrace{\frac{A(z)}{C(z)}}_{\text{whitening filter}} \cdot y(t) = \frac{\frac{3}{4} \cdot z^{-1}}{1 - \frac{1}{4} \cdot z^{-1}} \cdot \frac{1 - \frac{1}{4} \cdot z^{-1}}{1 + \frac{1}{2} \cdot z^{-1}} \cdot y(t) = \frac{\frac{3}{4} \cdot z^{-1}}{1 + \frac{1}{2} \cdot z^{-1}} \cdot y(t) $$
Writing the predictor as a difference equation:
$$ \left(1 + \frac{1}{2} \cdot z^{-1}\right) \cdot \hat{y}(t|t-1) = \frac{3}{4} \cdot z^{-1} \cdot y(t) $$
$$ \hat{y}(t|t-1) + \frac{1}{2} \cdot \hat{y}(t-1|t-2) = \frac{3}{4} \cdot y(t-1) $$
$$ \hat{y}(t|t-1) = -\frac{1}{2} \cdot \hat{y}(t-1|t-2) + \frac{3}{4} \cdot y(t-1) $$
Notice that, thanks to stationarity, shifting the recursion one step forward gives the equivalent form:
$$ \hat{y}(t+1|t) = -\frac{1}{2} \cdot \hat{y}(t|t-1) + \frac{3}{4} \cdot y(t) $$
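Anticipating question 5, the recursion can be run numerically on the given data. A minimal Python sketch follows; the initialization $\hat{y}(1|0) = 0$ (the process mean) is an assumption, since no earlier data are available:

```python
# 1-step predictor recursion y_hat(t|t-1) = -1/2 * y_hat(t-1|t-2) + 3/4 * y(t-1),
# run on the data of question 5; y_hat(1|0) = 0 (the process mean) is assumed.

y = {1: 1.0, 2: 0.5, 3: -0.5, 4: 0.0, 5: -0.5}   # given observations
y_hat = {1: 0.0}                                  # assumed initialization
for t in range(2, 7):
    y_hat[t] = -0.5 * y_hat[t - 1] + 0.75 * y[t - 1]
print(y_hat[6])                                   # y_hat(6|5)
```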
A simpler way to find the 1-step predictor of an ARMA process is given by the theory:
$$ \hat{y}(t|t-1) = \frac{C(z) - A(z)}{C(z)} \cdot y(t) = \frac{\left(1 + \frac{1}{2} \cdot z^{-1}\right) - \left(1 - \frac{1}{4} \cdot z^{-1}\right)}{1 + \frac{1}{2} \cdot z^{-1}} \cdot y(t) = \frac{\frac{3}{4} \cdot z^{-1}}{1 + \frac{1}{2} \cdot z^{-1}} \cdot y(t) $$
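As a numerical sanity check of the predictor (a minimal Python sketch; the simulation length and the seed are arbitrary choices): according to the decomposition above, the 1-step prediction error is $\eta(t)$ itself, so its sample variance should approach $\lambda^2_\eta = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
e = rng.normal(0.0, np.sqrt(0.25), N)        # e(t) ~ WN(0, 1/4)

y = np.zeros(N)                              # simulate y(t) = 1/4 y(t-1) + e(t) + 2 e(t-1)
for t in range(1, N):
    y[t] = 0.25 * y[t - 1] + e[t] + 2.0 * e[t - 1]

y_hat = np.zeros(N)                          # 1-step predictor recursion derived above
for t in range(1, N):
    y_hat[t] = -0.5 * y_hat[t - 1] + 0.75 * y[t - 1]

eps = y[1000:] - y_hat[1000:]                # discard the initial transient
print(np.var(eps))                           # should be close to lambda_eta^2 = 1
```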
3) Derive the expression for the 2-step optimal predictor from the available data.
We already have the process in its canonical representation. Let's compute 2 steps of the polynomial long division:
$$
\begin{array}{r|l}
1 + \frac{1}{2} \cdot z^{-1} & 1 - \frac{1}{4} \cdot z^{-1} \\
-1 + \frac{1}{4} \cdot z^{-1} & 1 + \frac{3}{4} \cdot z^{-1} \\
\hline
\phantom{-1} + \frac{3}{4} \cdot z^{-1} & \\
\phantom{-1} - \frac{3}{4} \cdot z^{-1} + \frac{3}{16} \cdot z^{-2} & \\
\hline
\phantom{-1 - \frac{3}{4} \cdot z^{-1}} + \frac{3}{16} \cdot z^{-2} &
\end{array}
$$
so that $Q_2(z) = 1 + \frac{3}{4} \cdot z^{-1}$ and $R_2(z) = \frac{3}{16} \cdot z^{-2}$.
Notice that:
$$ \hat{y}(t|t-2) = \frac{R_2(z)}{A(z)} \cdot \frac{A(z)}{C(z)} \cdot y(t) = \frac{\tilde{R}_2(z)}{A(z)} \cdot \frac{A(z)}{C(z)} \cdot y(t-2) = \frac{\tilde{R}_2(z)}{C(z)} \cdot y(t-2) = \frac{\frac{3}{16}}{1 + \frac{1}{2} \cdot z^{-1}} \cdot y(t-2) $$
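In the time domain this reads $\hat{y}(t|t-2) = -\frac{1}{2} \cdot \hat{y}(t-1|t-3) + \frac{3}{16} \cdot y(t-2)$. Anticipating question 5 again, a minimal Python sketch running this recursion on the given data (the initialization $\hat{y}(2|0) = 0$ is an assumption):

```python
# 2-step predictor recursion y_hat(t|t-2) = -1/2 * y_hat(t-1|t-3) + 3/16 * y(t-2),
# run on the data of question 5; y_hat(2|0) = 0 is assumed.

y = {1: 1.0, 2: 0.5, 3: -0.5, 4: 0.0, 5: -0.5}   # given observations
y_hat2 = {2: 0.0}                                 # assumed initialization
for t in range(3, 8):
    y_hat2[t] = -0.5 * y_hat2[t - 1] + 3.0 / 16.0 * y[t - 2]
print(y_hat2[7])                                  # y_hat(7|5)
```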
The variance of the prediction error tends to the variance of the process, since the r-step predictor tends
to the process mean for r → ∞ (the best future prediction when the future is far away is the mean value
of the process):
$$ E\left[\varepsilon(t)^2\right] = E\left[\left(y(t) - \hat{y}(t|t-r)\right)^2\right] \xrightarrow[r \to \infty]{} E\left[\left(y(t) - m_y\right)^2\right] = \gamma_y(0) $$
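To see this limit numerically (a minimal Python sketch): under the decomposition above, $Q_r(z)$ collects the first $r$ impulse-response coefficients of $C(z)/A(z)$, so the $r$-step error variance is $\lambda^2_\eta$ times the sum of their squares, which converges to $\gamma_y(0)$:

```python
import numpy as np

# Impulse response of the canonical filter (1 + 1/2 z^-1) / (1 - 1/4 z^-1):
# w_0 = 1, w_i = (3/4) * (1/4)^(i-1) for i >= 1.
n = 50
w = np.zeros(n)
w[0] = 1.0
w[1] = 0.75
for i in range(2, n):
    w[i] = 0.25 * w[i - 1]

lam2_eta = 1.0
for r in (1, 2, 5, 20):
    print(r, lam2_eta * np.sum(w[:r] ** 2))        # r-step prediction error variance
print("gamma_y(0) =", lam2_eta * np.sum(w ** 2))   # process variance, ~1.6
```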
Exercise 2
Consider the following process:
$$ y(t) = \frac{1}{2} \cdot y(t-1) + e(t) - 2 \cdot e(t-1), \qquad e(t) \sim WN(0, 2) $$
1. Is the process WSS?
2. Compute the predictors ŷ (t|t − r) for r = 1, 2
3. Compute the 1- and 2-step prediction error variances.
1) Is the process WSS?
Let's put the process into its operatorial representation:
$$ \left(1 - \frac{1}{2} \cdot z^{-1}\right) \cdot y(t) = \left(1 - 2 \cdot z^{-1}\right) \cdot e(t) $$
So:
$$ \frac{y(t)}{e(t)} = W(z) = \frac{1 - 2 \cdot z^{-1}}{1 - \frac{1}{2} \cdot z^{-1}} = \frac{z - 2}{z - \frac{1}{2}} $$
The pole ($z = \frac{1}{2}$) is inside the unit circle, so the digital filter is asymptotically stable. Moreover, $e(t)$ is a WSS process. We can conclude that $y(t)$ is a WSS process too.
In order to find the predictor, first of all, we have to check if $y(t)$ is in canonical representation:
1. Same degree: ✓
2. Coprime: ✓
3. Monic: ✓
4. Roots inside the unit circle: ✗ (the zero of the numerator, $z = 2$, is outside!)
Observation: Notice that the pole is the reciprocal of the zero. The transfer function $W(z)$, apart from the gain, has the structure of an all-pass filter:
$$ T(z) = a \cdot \frac{z + \frac{1}{a}}{z + a} = -\frac{1}{2} \cdot \frac{z - 2}{z - \frac{1}{2}} $$
The process equation is then an all-pass filter with gain equal to $-2$:
$$ y(t) = W(z) \cdot e(t) = -2 \cdot T(z) \cdot e(t) $$
Consider the white noise η (t), derived from the original noise e (t):
$$ \eta(t) = -2 \cdot e(t) \implies e(t) = -\frac{1}{2} \cdot \eta(t) $$
Its mean and variance are given by:
$$ m_\eta = E[\eta(t)] = -2 \cdot E[e(t)] = -2 \cdot m_e = 0 $$
$$ \lambda^2_\eta = E\left[(\eta(t) - m_\eta)^2\right] = E\left[\eta(t)^2\right] = 4 \cdot E\left[e(t)^2\right] = 4 \cdot \lambda^2_e = 8 $$
So:
$$ \eta(t) \sim WN(0, 8) $$
If we replace e (t) in the process function, we have that:
$$ y(t) = \frac{1}{2} \cdot y(t-1) + e(t) - 2 \cdot e(t-1), \qquad e(t) \sim WN(0, 2) $$
$$ y(t) = \frac{1}{2} \cdot y(t-1) - \frac{1}{2} \cdot \eta(t) - 2 \cdot \left(-\frac{1}{2}\right) \cdot \eta(t-1) = \frac{1}{2} \cdot y(t-1) - \frac{1}{2} \cdot \eta(t) + \eta(t-1), \qquad \eta(t) \sim WN(0, 8) $$
The operatorial representation becomes:
$$ \frac{y(t)}{\eta(t)} = W_1(z) = -\frac{1}{2} \cdot \frac{1 - 2 \cdot z^{-1}}{1 - \frac{1}{2} \cdot z^{-1}} = -\frac{1}{2} \cdot \frac{z - 2}{z - \frac{1}{2}} = T(z) $$
Observation: It is an all-pass filter!
2) Compute the predictors ŷ (t|t − r) for r = 1, 2.
Since the process $y(t)$ is the steady-state output of an all-pass filter fed by the white noise $\eta(t)$, we can conclude that $y(t)$ has the same spectrum as the white noise $\eta(t)$. But a white noise is totally unpredictable. So the optimal $r$-step predictor is the trivial predictor, that is the expected value of the process $y(t)$, which is the expected value of the noise $\eta(t)$:
$$ \hat{y}(t|t-r) = E[y(t)] = E[\eta(t)] = 0, \qquad \forall r $$
3) Compute the 1- and 2-step prediction error variances.
Thus, the $r$-step prediction error variance is simply the variance of the process, i.e. the variance of the white noise $\eta(t)$:
$$ E\left[\varepsilon(t)^2\right] = E\left[\left(y(t) - \hat{y}(t|t-r)\right)^2\right] = E\left[y(t)^2\right] = E\left[\eta(t)^2\right] = \lambda^2_\eta = 8, \qquad \forall r $$
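As a numerical check (a minimal Python sketch; simulation length and seed are arbitrary choices): the simulated process should behave like a white noise of variance 8, i.e. its sample autocovariance should be close to 8 at lag 0 and close to 0 at the other lags:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
e = rng.normal(0.0, np.sqrt(2.0), N)          # e(t) ~ WN(0, 2)

y = np.zeros(N)                               # simulate y(t) = 1/2 y(t-1) + e(t) - 2 e(t-1)
for t in range(1, N):
    y[t] = 0.5 * y[t - 1] + e[t] - 2.0 * e[t - 1]

y = y[1000:] - np.mean(y[1000:])              # discard the transient, remove the sample mean
for tau in range(4):
    gamma = np.mean(y[: len(y) - tau] * y[tau:])   # sample autocovariance at lag tau
    print(tau, gamma)                         # ~8 at lag 0, ~0 at lags >= 1
```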
Exercise 3
Consider the following WSS ARMAX process (derived from an exercise in the previous lecture):
$$ y(t) = \frac{z^{-2}}{1 - 1.3 \cdot z^{-1} + 0.4 \cdot z^{-2}} \cdot u(t) + \frac{1 - 0.55 \cdot z^{-1}}{1 - 1.3 \cdot z^{-1} + 0.4 \cdot z^{-2}} \cdot e(t), \qquad e(t) \sim WN(0, 16) $$
Thus:
$$ A(z) = 1 - 1.3 \cdot z^{-1} + 0.4 \cdot z^{-2}, \qquad B(z) = 1, \qquad C(z) = 1 - 0.55 \cdot z^{-1}, \qquad k = 2 $$
so that the model is in the standard ARMAX form:
$$ y(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{C(z)}{A(z)} \cdot e(t) $$
Let's check that the process is in canonical representation:
1. Same degree: ✓
2. Coprime: ✓
3. Monic: ✓
4. Roots inside the unit circle: ✓
We conclude that the process is in its canonical representation. The details of the canonical representation can be looked up in the previous lecture.
Let's compute 2 steps of the polynomial long division between $C(z)$ and $A(z)$:
$$
\begin{array}{r|l}
1 - 0.55 \cdot z^{-1} & 1 - 1.3 \cdot z^{-1} + 0.4 \cdot z^{-2} \\
-1 + 1.3 \cdot z^{-1} - 0.4 \cdot z^{-2} & 1 + 0.75 \cdot z^{-1} \\
\hline
\phantom{-1} + 0.75 \cdot z^{-1} - 0.4 \cdot z^{-2} & \\
\phantom{-1} - 0.75 \cdot z^{-1} + 0.975 \cdot z^{-2} - 0.3 \cdot z^{-3} & \\
\hline
\phantom{-1 - 0.75 \cdot z^{-1}} + 0.575 \cdot z^{-2} - 0.3 \cdot z^{-3} &
\end{array}
$$
where
$$ A(z) = 1 - 1.3 \cdot z^{-1} + 0.4 \cdot z^{-2}, \qquad B(z) = 1, \qquad C(z) = 1 - 0.55 \cdot z^{-1}, \qquad k = 2 $$
$$ Q_2(z) = 1 + 0.75 \cdot z^{-1} $$
$$ R_2(z) = 0.575 \cdot z^{-2} - 0.3 \cdot z^{-3} = z^{-2} \cdot \left(0.575 - 0.3 \cdot z^{-1}\right) $$
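A quick numerical check of the division (a minimal Python sketch): multiplying $Q_2(z)$ by $A(z)$ and adding $R_2(z)$ must give back $C(z)$:

```python
import numpy as np

A  = np.array([1.0, -1.3, 0.4])        # coefficients in ascending powers of z^-1
C  = np.array([1.0, -0.55])
Q2 = np.array([1.0, 0.75])
R2 = np.array([0.0, 0.0, 0.575, -0.3])

check = np.convolve(Q2, A) + R2        # C(z) must equal Q2(z)*A(z) + R2(z)
print(check)                           # [ 1.   -0.55  0.    0.  ]  ->  C(z) = 1 - 0.55 z^-1
```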
Notice that:
$$ y(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{C(z)}{A(z)} \cdot e(t) $$
$$ A(z) \cdot y(t) = B(z) \cdot u(t-k) + C(z) \cdot e(t) $$
$$ C(z) \cdot e(t) = A(z) \cdot y(t) - B(z) \cdot u(t-k) $$
Performing $r$ steps of the long division of $C(z)$ by $A(z)$:
$$ y(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{C(z)}{A(z)} \cdot e(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \underbrace{\left[ \frac{R_r(z)}{A(z)} + Q_r(z) \right]}_{r \text{ steps of long division}} \cdot e(t) $$
$$ y(t) = \underbrace{\frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{A(z)} \cdot e(t)}_{r\text{-step predictor}} + \underbrace{Q_r(z) \cdot e(t)}_{r\text{-step prediction error}} $$
Thus:
$$ \hat{y}(t|t-r) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{A(z)} \cdot e(t) $$
and, replacing $e(t)$ through the whitening relation above:
$$
\begin{aligned}
\hat{y}(t|t-r) &= \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{A(z)} \cdot \underbrace{\left[ \frac{A(z)}{C(z)} \cdot y(t) - \frac{B(z)}{C(z)} \cdot u(t-k) \right]}_{\text{whitening filter}} \\
&= \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t) - \frac{R_r(z)}{A(z)} \cdot \frac{B(z)}{C(z)} \cdot u(t-k) \\
&= \frac{B(z) \cdot \left(C(z) - R_r(z)\right)}{A(z) \cdot C(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t) \\
&= \frac{B(z) \cdot Q_r(z) \cdot A(z)}{A(z) \cdot C(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t) \\
&= \frac{B(z) \cdot Q_r(z)}{C(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t) \\
&= \frac{B(z) \cdot Q_r(z)}{C(z)} \cdot u(t-k) + \frac{\tilde{R}_r(z)}{C(z)} \cdot y(t-r), \qquad \forall k \ge r
\end{aligned}
$$
where the substitution $C(z) - R_r(z) = Q_r(z) \cdot A(z)$ follows directly from the long division:
$$ \frac{C(z)}{A(z)} = \frac{R_r(z)}{A(z)} + Q_r(z) \implies C(z) = R_r(z) + Q_r(z) \cdot A(z) \implies C(z) - R_r(z) = Q_r(z) \cdot A(z) $$
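Putting the pieces together for Exercise 3 ($r = k = 2$), the final formula gives $C(z) \cdot \hat{y}(t|t-2) = B(z) \cdot Q_2(z) \cdot u(t-2) + \tilde{R}_2(z) \cdot y(t-2)$. A minimal Python sketch of the resulting recursion follows; the input/output data and the zero initialization are hypothetical, only to show how it runs:

```python
import numpy as np

# Time-domain recursion of the 2-step ARMAX predictor of Exercise 3:
# y_hat(t|t-2) = 0.55*y_hat(t-1|t-3) + u(t-2) + 0.75*u(t-3) + 0.575*y(t-2) - 0.3*y(t-3)

def predictor_2step(u, y):
    y_hat = np.zeros(len(y))                       # zero initialization is assumed
    for t in range(3, len(y)):
        y_hat[t] = (0.55 * y_hat[t - 1]
                    + u[t - 2] + 0.75 * u[t - 3]
                    + 0.575 * y[t - 2] - 0.3 * y[t - 3])
    return y_hat

# Hypothetical input/output data, only to show how the recursion is run
u = np.ones(10)
y = np.linspace(0.0, 1.0, 10)
print(predictor_2step(u, y))
```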