
Predictors

Exercise 1
Consider the following process:
 
$$y(t) = \frac{1}{4} \cdot y(t-1) + e(t) + 2 \cdot e(t-1), \qquad e(t) \sim WN\left(0, \tfrac{1}{4}\right)$$

1. Derive the expression for the 1-step optimal predictor from the available data.
2. What is the value of the 1-step prediction error variance?
3. Derive the expression for the 2-step optimal predictor from the available data.
4. What is the value of the 2-step prediction error variance?
5. Given the following observations:
$$y(1) = 1, \quad y(2) = \tfrac{1}{2}, \quad y(3) = -\tfrac{1}{2}, \quad y(4) = 0, \quad y(5) = -\tfrac{1}{2}$$
compute $\hat{y}(6|5)$ and $\hat{y}(7|5)$.

1) Derive the expression for the 1-step optimal predictor from the available data.
The process is an ARMA(1, 1) with the following operatorial representation:

$$y(t) = W(z) \cdot e(t) = \frac{C(z)}{A(z)} \cdot e(t) = \frac{1 + 2\,z^{-1}}{1 - \frac{1}{4}\,z^{-1}} \cdot e(t) = \frac{z + 2}{z - \frac{1}{4}} \cdot e(t)$$

The pole of $W(z)$ is $z = \frac{1}{4}$ and it is inside the unit circle, so the digital filter $W(z)$ is asymptotically stable. Moreover, $e(t)$ is a WSS process. Thus $y(t)$ is a WSS process.

In order to find the predictor, first of all we have to check whether $y(t)$ is in canonical representation:

1. Same degree: ✓
2. Coprime: ✓
3. Monic: ✓
4. Roots inside the unit circle: ✗ (the zeroes, i.e. the roots of the numerator, are outside!)


In order to replace the zero outside the unit circle, we apply the all-pass filter:

$$T(z) = a \cdot \frac{z + \frac{1}{a}}{z + a} = 2 \cdot \frac{z + \frac{1}{2}}{z + 2}$$
So the new dynamic filter is:

$$W_1(z) = W(z) \cdot T(z) = \frac{z + 2}{z - \frac{1}{4}} \cdot 2 \cdot \frac{z + \frac{1}{2}}{z + 2} = 2 \cdot \frac{z + \frac{1}{2}}{z - \frac{1}{4}}$$
Here the polynomials are not monic. Thus, we define:

$$\eta(t) = 2 \cdot e(t) \;\Longrightarrow\; e(t) = \frac{1}{2} \cdot \eta(t)$$

With the following statistical properties:

$$m_\eta = E[\eta(t)] = 2 \cdot E[e(t)] = 2 \cdot m_e = 0$$

$$\lambda^2_\eta = E\big[(\eta(t) - m_\eta)^2\big] = E\big[\eta(t)^2\big] = 4 \cdot E\big[e(t)^2\big] = 4 \cdot \lambda^2_e = 1$$

So:

$$\eta(t) \sim WN(0, 1)$$
We can conclude that:

$$y(t) = W_1(z) \cdot e(t) = 2 \cdot \frac{z + \frac{1}{2}}{z - \frac{1}{4}} \cdot e(t) = \frac{z + \frac{1}{2}}{z - \frac{1}{4}} \cdot \eta(t) = \frac{1 + \frac{1}{2}\,z^{-1}}{1 - \frac{1}{4}\,z^{-1}} \cdot \eta(t) = \frac{C(z)}{A(z)} \cdot \eta(t)$$
This is the canonical representation of the process y (t).

Now we compute 1 step of the polynomial long division:

$$\begin{array}{r|l}
1 + \frac{1}{2}\,z^{-1} & 1 - \frac{1}{4}\,z^{-1} \\
-1 + \frac{1}{4}\,z^{-1} & 1 \\ \hline
\frac{3}{4}\,z^{-1} &
\end{array}$$


where $C(z) = 1 + \frac{1}{2}\,z^{-1}$, $A(z) = 1 - \frac{1}{4}\,z^{-1}$, $Q_1(z) = 1$ and $R_1(z) = \frac{3}{4}\,z^{-1}$.
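This one-step division generalizes directly to $r$ steps. Here is a minimal sketch (ours, not part of the original handout) of the division routine, using exact fractions; the function name `long_division` and the coefficient-list convention are assumptions:

```python
from fractions import Fraction

def long_division(c, a, r):
    """r steps of long division of C(z) by A(z) in powers of z^-1.

    c, a: coefficient lists [x0, x1, ...] meaning x0 + x1*z^-1 + ...
    Returns (q, rem) such that C(z) = Q(z)*A(z) + R(z), with R(z)
    starting at power z^-r.
    """
    rem = [Fraction(x) for x in c] + [Fraction(0)] * (r + len(a))
    q = []
    for i in range(r):
        coef = rem[i]               # leading coefficient at power z^-i
        q.append(coef)
        for j, aj in enumerate(a):  # subtract coef * z^-i * A(z)
            rem[i + j] -= coef * Fraction(aj)
    return q, rem

C = [1, Fraction(1, 2)]             # C(z) = 1 + 1/2 z^-1
A = [1, Fraction(-1, 4)]            # A(z) = 1 - 1/4 z^-1
q, rem = long_division(C, A, 1)
print(q)    # [Fraction(1, 1)]                -> Q1(z) = 1
print(rem)  # 3/4 at index 1, zeros elsewhere -> R1(z) = (3/4) z^-1
```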

Thus we have:

 
$$y(t) = \left( Q_1(z) + \frac{R_1(z)}{A(z)} \right) \cdot \eta(t) = Q_1(z) \cdot \eta(t) + \frac{R_1(z)}{A(z)} \cdot \eta(t) = \eta(t) + \frac{\frac{3}{4}\,z^{-1}}{1 - \frac{1}{4}\,z^{-1}} \cdot \eta(t)$$

While the first term is unpredictable with the information available at time $t-1$, the second term is totally predictable since it depends on $\eta(t-1)$. Thus the 1-step optimal predictor from the noise is:

$$\hat{y}(t|t-1) = \frac{\frac{3}{4}\,z^{-1}}{1 - \frac{1}{4}\,z^{-1}} \cdot \eta(t) = \frac{R_1(z)}{A(z)} \cdot \eta(t)$$

Whitening filter from an ARMA model:

$$y(t) = \frac{C(z)}{A(z)} \cdot e(t) \;\Longrightarrow\; A(z) \cdot y(t) = C(z) \cdot e(t)$$

The whitening filter is then:

$$e(t) = \frac{A(z)}{C(z)} \cdot y(t) = \tilde{W}(z) \cdot y(t)$$

In this particular example we have that:

$$y(t) = \frac{C(z)}{A(z)} \cdot \eta(t) \;\Longrightarrow\; \eta(t) = \frac{A(z)}{C(z)} \cdot y(t)$$


Optimal predictor from an ARMA model:


Performing $r$ steps of long division on $C(z)/A(z)$ splits the process into a predictable and an unpredictable part:

$$y(t) = \frac{C(z)}{A(z)} \cdot e(t) = \underbrace{\left( \frac{R_r(z)}{A(z)} + Q_r(z) \right)}_{r \text{ steps of long division}} \cdot e(t) = \underbrace{\frac{R_r(z)}{A(z)} \cdot e(t)}_{r\text{-step predictor}} + \underbrace{Q_r(z) \cdot e(t)}_{r\text{-step prediction error}}$$

The predictor from the noise is:

$$\hat{y}(t|t-r) = \frac{R_r(z)}{A(z)} \cdot e(t)$$

The prediction error is:

$$\varepsilon(t) = Q_r(z) \cdot e(t)$$

The predictor from the available data, using the whitening filter, is:

$$\hat{y}(t|t-r) = \frac{R_r(z)}{A(z)} \cdot \underbrace{\frac{A(z)}{C(z)} \cdot y(t)}_{\text{whitening filter}} = \frac{R_r(z)}{C(z)} \cdot y(t) = \frac{\tilde{R}_r(z)}{C(z)} \cdot y(t-r)$$

Thus the 1-step optimal predictor from the available data is:

$$\hat{y}(t|t-1) = \frac{R_1(z)}{A(z)} \cdot \underbrace{\frac{A(z)}{C(z)}}_{\text{whitening filter}} \cdot y(t) = \frac{\frac{3}{4}\,z^{-1}}{1 - \frac{1}{4}\,z^{-1}} \cdot \frac{1 - \frac{1}{4}\,z^{-1}}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t) = \frac{\frac{3}{4}\,z^{-1}}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t)$$

with the recursive time-domain representation:


$$\hat{y}(t|t-1) = \frac{\frac{3}{4}\,z^{-1}}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t)$$

$$\left(1 + \frac{1}{2}\,z^{-1}\right) \cdot \hat{y}(t|t-1) = \frac{3}{4}\,z^{-1} \cdot y(t)$$

$$\hat{y}(t|t-1) + \frac{1}{2} \cdot \hat{y}(t-1|t-2) = \frac{3}{4} \cdot y(t-1)$$

$$\hat{y}(t|t-1) = -\frac{1}{2} \cdot \hat{y}(t-1|t-2) + \frac{3}{4} \cdot y(t-1)$$

Notice that, thanks to stationarity, this is equal to the predictor shifted one step forward:

$$\hat{y}(t+1|t) = -\frac{1}{2} \cdot \hat{y}(t|t-1) + \frac{3}{4} \cdot y(t)$$
A simpler way to find the 1-step predictor of an ARMA process is given by the theory.

1-step optimal predictor from an ARMA model:


$$\hat{y}(t|t-1) = \frac{C(z) - A(z)}{C(z)} \cdot y(t)$$

In this particular example we have:

$$\hat{y}(t|t-1) = \frac{C(z) - A(z)}{C(z)} \cdot y(t) = \frac{\left(1 + \frac{1}{2}\,z^{-1}\right) - \left(1 - \frac{1}{4}\,z^{-1}\right)}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t) = \frac{\frac{3}{4}\,z^{-1}}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t)$$

which gives the previously computed predictor.

2) What is the value of the 1-step prediction error variance?


The 1-step prediction error variance is simply given by the unpredictable part:
$$E\big[\varepsilon(t)^2\big] = E\big[(y(t) - \hat{y}(t|t-1))^2\big] = E\big[(Q_1(z) \cdot \eta(t))^2\big] = E\big[\eta(t)^2\big] = \lambda^2_\eta = 1$$

3) Derive the expression for the 2-step optimal predictor from the available data.
We already have the process in its canonical representation. Let's compute 2 steps of the polynomial long division:


$$\begin{array}{r|l}
1 + \frac{1}{2}\,z^{-1} & 1 - \frac{1}{4}\,z^{-1} \\
-1 + \frac{1}{4}\,z^{-1} & 1 + \frac{3}{4}\,z^{-1} \\ \hline
\frac{3}{4}\,z^{-1} & \\
-\frac{3}{4}\,z^{-1} + \frac{3}{16}\,z^{-2} & \\ \hline
\frac{3}{16}\,z^{-2} &
\end{array}$$

where $C(z) = 1 + \frac{1}{2}\,z^{-1}$, $A(z) = 1 - \frac{1}{4}\,z^{-1}$, $Q_2(z) = 1 + \frac{3}{4}\,z^{-1}$ and $R_2(z) = \frac{3}{16}\,z^{-2}$.

Notice that:

$$R_2(z) = z^{-r} \cdot \tilde{R}_2(z)$$

Since, in this case, $r = 2$, we have:

$$R_2(z) = z^{-2} \cdot \tilde{R}_2(z) \;\Longrightarrow\; \tilde{R}_2(z) = \frac{3}{16}$$
The 2-step optimal predictor is then given by:

$$\hat{y}(t|t-2) = \frac{R_2(z)}{A(z)} \cdot \frac{A(z)}{C(z)} \cdot y(t) = \frac{\tilde{R}_2(z)}{C(z)} \cdot y(t-2) = \frac{\frac{3}{16}}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t-2)$$

with the recursive time-domain representation:


$$\hat{y}(t|t-2) = \frac{\frac{3}{16}}{1 + \frac{1}{2}\,z^{-1}} \cdot y(t-2)$$

$$\left(1 + \frac{1}{2}\,z^{-1}\right) \cdot \hat{y}(t|t-2) = \frac{3}{16} \cdot y(t-2)$$

$$\hat{y}(t|t-2) + \frac{1}{2} \cdot \hat{y}(t-1|t-3) = \frac{3}{16} \cdot y(t-2)$$

$$\hat{y}(t|t-2) = -\frac{1}{2} \cdot \hat{y}(t-1|t-3) + \frac{3}{16} \cdot y(t-2)$$


4) What is the value of the 2-step prediction error variance?


The 2-step prediction error variance is simply given by the unpredictable part:
$$E\big[\varepsilon(t)^2\big] = E\big[(y(t) - \hat{y}(t|t-2))^2\big] = E\big[(Q_2(z) \cdot \eta(t))^2\big] = E\left[\left(\eta(t) + \tfrac{3}{4}\,\eta(t-1)\right)^2\right]$$

$$= E\big[\eta(t)^2\big] + \tfrac{9}{16} \cdot E\big[\eta(t-1)^2\big] + \tfrac{3}{2} \cdot \underbrace{E[\eta(t) \cdot \eta(t-1)]}_{=\,0 \text{ since } \eta \sim WN} = \lambda^2_\eta + \tfrac{9}{16} \cdot \lambda^2_\eta = 1 + \tfrac{9}{16} = \tfrac{25}{16}$$
Observe the variances:
• Process variance (not explicitly computed):
$$\gamma_y(0) = \frac{8}{5} = 1.6$$

• 1-step prediction error variance:
$$E\big[\varepsilon(t)^2\big] = 1$$

• 2-step prediction error variance:
$$E\big[\varepsilon(t)^2\big] = \frac{25}{16} \approx 1.56$$

The variance of the prediction error tends to the variance of the process, since the r-step predictor tends to the process mean for $r \to \infty$ (the best prediction of the far future is the mean value of the process):

$$E\big[\varepsilon(t)^2\big] = E\big[(y(t) - \hat{y}(t|t-r))^2\big] \xrightarrow[r \to \infty]{} E\big[(y(t) - m_y)^2\big] = \gamma_y(0)$$
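A small numerical sketch (ours) of this limit, using the standard fact that the r-step error variance equals $\lambda^2_\eta$ times the sum of the first $r$ squared impulse-response coefficients of $C(z)/A(z)$:

```python
# Impulse response of W1(z) = (1 + 1/2 z^-1) / (1 - 1/4 z^-1):
# h_0 = 1, h_1 = 1/4 h_0 + 1/2, then h_i = 1/4 h_{i-1} for i >= 2.
h = [1.0]
for i in range(1, 30):
    h.append(0.25 * h[i - 1] + (0.5 if i == 1 else 0.0))

for r in (1, 2, 5, 30):
    var_eps = sum(hi**2 for hi in h[:r])   # lambda_eta^2 = 1
    print(r, var_eps)   # r=1 -> 1.0, r=2 -> 1.5625 = 25/16, large r -> 1.6
```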

5) Compute $\hat{y}(6|5)$ and $\hat{y}(7|5)$.

Using the previous predictors, we can compute $\hat{y}(6|5)$ and $\hat{y}(7|5)$. For the computation of $\hat{y}(6|5)$ we make use of the 1-step predictor:

$$\hat{y}(t|t-1) = -\frac{1}{2} \cdot \hat{y}(t-1|t-2) + \frac{3}{4} \cdot y(t-1)$$

while for the computation of $\hat{y}(7|5)$ we adopt the 2-step predictor:

$$\hat{y}(t|t-2) = -\frac{1}{2} \cdot \hat{y}(t-1|t-3) + \frac{3}{16} \cdot y(t-2)$$

We will also use the given observations:

$$y(1) = 1, \quad y(2) = \tfrac{1}{2}, \quad y(3) = -\tfrac{1}{2}, \quad y(4) = 0, \quad y(5) = -\tfrac{1}{2}$$
• $\hat{y}(6|5)$


$$\hat{y}(1|0) = E[y(t)] = 0 \quad \text{(initialization)}$$
$$\hat{y}(2|1) = -\tfrac{1}{2} \cdot \hat{y}(1|0) + \tfrac{3}{4} \cdot y(1) = -\tfrac{1}{2} \cdot 0 + \tfrac{3}{4} \cdot 1 = \tfrac{3}{4}$$
$$\hat{y}(3|2) = -\tfrac{1}{2} \cdot \hat{y}(2|1) + \tfrac{3}{4} \cdot y(2) = -\tfrac{1}{2} \cdot \tfrac{3}{4} + \tfrac{3}{4} \cdot \tfrac{1}{2} = 0$$
$$\hat{y}(4|3) = -\tfrac{1}{2} \cdot \hat{y}(3|2) + \tfrac{3}{4} \cdot y(3) = -\tfrac{1}{2} \cdot 0 - \tfrac{3}{4} \cdot \tfrac{1}{2} = -\tfrac{3}{8}$$
$$\hat{y}(5|4) = -\tfrac{1}{2} \cdot \hat{y}(4|3) + \tfrac{3}{4} \cdot y(4) = \tfrac{1}{2} \cdot \tfrac{3}{8} + \tfrac{3}{4} \cdot 0 = \tfrac{3}{16}$$
$$\hat{y}(6|5) = -\tfrac{1}{2} \cdot \hat{y}(5|4) + \tfrac{3}{4} \cdot y(5) = -\tfrac{1}{2} \cdot \tfrac{3}{16} - \tfrac{3}{4} \cdot \tfrac{1}{2} = -\tfrac{15}{32}$$

The effect of the initialization rapidly vanishes.
• $\hat{y}(7|5)$

$$\hat{y}(2|0) = E[y(t)] = 0 \quad \text{(initialization)}$$
$$\hat{y}(3|1) = -\tfrac{1}{2} \cdot \hat{y}(2|0) + \tfrac{3}{16} \cdot y(1) = -\tfrac{1}{2} \cdot 0 + \tfrac{3}{16} \cdot 1 = \tfrac{3}{16}$$
$$\hat{y}(4|2) = -\tfrac{1}{2} \cdot \hat{y}(3|1) + \tfrac{3}{16} \cdot y(2) = -\tfrac{1}{2} \cdot \tfrac{3}{16} + \tfrac{3}{16} \cdot \tfrac{1}{2} = 0$$
$$\hat{y}(5|3) = -\tfrac{1}{2} \cdot \hat{y}(4|2) + \tfrac{3}{16} \cdot y(3) = -\tfrac{1}{2} \cdot 0 - \tfrac{3}{16} \cdot \tfrac{1}{2} = -\tfrac{3}{32}$$
$$\hat{y}(6|4) = -\tfrac{1}{2} \cdot \hat{y}(5|3) + \tfrac{3}{16} \cdot y(4) = \tfrac{1}{2} \cdot \tfrac{3}{32} + \tfrac{3}{16} \cdot 0 = \tfrac{3}{64}$$
$$\hat{y}(7|5) = -\tfrac{1}{2} \cdot \hat{y}(6|4) + \tfrac{3}{16} \cdot y(5) = -\tfrac{1}{2} \cdot \tfrac{3}{64} - \tfrac{3}{16} \cdot \tfrac{1}{2} = -\tfrac{15}{128}$$

The effect of the initialization rapidly vanishes.
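Both tables can be reproduced in a few lines (a sketch, ours, not from the handout):

```python
y = {1: 1.0, 2: 0.5, 3: -0.5, 4: 0.0, 5: -0.5}   # given observations

# 1-step: yhat(t|t-1) = -1/2 yhat(t-1|t-2) + 3/4 y(t-1)
p1 = 0.0                          # yhat(1|0) = 0 (initialization)
for t in range(2, 7):
    p1 = -0.5 * p1 + 0.75 * y[t - 1]
print(p1)    # yhat(6|5) = -15/32 = -0.46875

# 2-step: yhat(t|t-2) = -1/2 yhat(t-1|t-3) + 3/16 y(t-2)
p2 = 0.0                          # yhat(2|0) = 0 (initialization)
for t in range(3, 8):
    p2 = -0.5 * p2 + 3 / 16 * y[t - 2]
print(p2)    # yhat(7|5) = -15/128 = -0.1171875
```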


Exercise 2
Consider the following process:
$$y(t) = \frac{1}{2} \cdot y(t-1) + e(t) - 2 \cdot e(t-1), \qquad e(t) \sim WN(0, 2)$$
1. Is the process WSS?
2. Compute the predictors ŷ (t|t − r) for r = 1, 2
3. Compute the 1- and 2-step prediction error variances.
1) Is the process WSS?
Let's put the process into its operatorial representation:
 
$$\left(1 - \frac{1}{2}\,z^{-1}\right) \cdot y(t) = \left(1 - 2\,z^{-1}\right) \cdot e(t)$$

So:

$$\frac{y(t)}{e(t)} = W(z) = \frac{1 - 2\,z^{-1}}{1 - \frac{1}{2}\,z^{-1}} = \frac{z - 2}{z - \frac{1}{2}}$$

The pole ($z = \frac{1}{2}$) is inside the unit circle, so the digital filter is asymptotically stable. Moreover, $e(t)$ is a WSS process. We can conclude that $y(t)$ is a WSS process too.

In order to find the predictor, first of all we have to check whether $y(t)$ is in canonical representation:

1. Same degree: ✓
2. Coprime: ✓
3. Monic: ✓
4. Roots inside the unit circle: ✗ (the zeroes, i.e. the roots of the numerator, are outside!)
Observation: Notice that the pole is the reciprocal of the zero. The transfer function $W(z)$, except for the gain, has the structure of an all-pass filter:

$$T(z) = a \cdot \frac{z + \frac{1}{a}}{z + a} = -\frac{1}{2} \cdot \frac{z - 2}{z - \frac{1}{2}}$$

The process equation is then an all-pass filter with gain equal to $-2$:

$$y(t) = W(z) \cdot e(t) = -2 \cdot \underbrace{\left( -\frac{1}{2} \cdot \frac{z - 2}{z - \frac{1}{2}} \right)}_{\text{all-pass filter}} \cdot e(t) = -2 \cdot T(z) \cdot e(t)$$

where $T(z)$ is an all-pass filter with gain 1.


Consider the white noise η (t), derived from the original noise e (t):
$$\eta(t) = -2 \cdot e(t) \;\Longrightarrow\; e(t) = -\frac{1}{2} \cdot \eta(t)$$

Its mean and variance are given by:

$$m_\eta = E[\eta(t)] = -2 \cdot E[e(t)] = -2 \cdot m_e = 0$$

$$\lambda^2_\eta = E\big[(\eta(t) - m_\eta)^2\big] = E\big[\eta(t)^2\big] = 4 \cdot E\big[e(t)^2\big] = 4 \cdot \lambda^2_e = 8$$

So:

$$\eta(t) \sim WN(0, 8)$$
If we replace $e(t)$ in the process equation, we have that:

$$y(t) = \frac{1}{2} \cdot y(t-1) + e(t) - 2 \cdot e(t-1) = \frac{1}{2} \cdot y(t-1) + \left(-\frac{1}{2}\right) \cdot \eta(t) - 2 \cdot \left(-\frac{1}{2}\right) \cdot \eta(t-1)$$

$$= \frac{1}{2} \cdot y(t-1) - \frac{1}{2} \cdot \eta(t) + \eta(t-1), \qquad \eta(t) \sim WN(0, 8)$$
The operatorial representation becomes:

$$\frac{y(t)}{\eta(t)} = W_1(z) = -\frac{1}{2} \cdot \frac{1 - 2\,z^{-1}}{1 - \frac{1}{2}\,z^{-1}} = -\frac{1}{2} \cdot \frac{z - 2}{z - \frac{1}{2}} = T(z)$$

Observation: it is an all-pass filter!
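A quick numerical confirmation (ours, not in the original) that $T(z)$ has unit magnitude on the unit circle:

```python
import numpy as np

w = np.linspace(0, np.pi, 5)       # sample frequencies in [0, pi]
z = np.exp(1j * w)                 # points on the unit circle
T = -0.5 * (z - 2) / (z - 0.5)
print(np.abs(T))   # all ones: the spectrum of y is flat, like white noise
```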
2) Compute the predictors ŷ (t|t − r) for r = 1, 2.
Since the process $y(t)$ is the steady-state output of an all-pass filter fed by the white noise $\eta(t)$, we can conclude that $y(t)$ has the same spectrum as the white noise $\eta(t)$. But white noise is totally unpredictable. So the optimal r-step predictor is the trivial predictor, i.e. the expected value of the process $y(t)$, which equals the expected value of the noise $\eta(t)$:

$$\hat{y}(t|t-r) = E[y(t)] = E[\eta(t)] = 0, \qquad \forall r$$
3) Compute the 1- and 2-step prediction error variances.

Thus, the r-step prediction error variance is simply the variance of the process, i.e. the variance of the white noise $\eta(t)$:

$$E\big[\varepsilon(t)^2\big] = E\big[(y(t) - \hat{y}(t|t-r))^2\big] = E\big[y(t)^2\big] = E\big[\eta(t)^2\big] = \lambda^2_\eta = 8, \qquad \forall r$$


Exercise 3
Consider the following WSS ARMAX process (derived from an exercise in the previous lecture):

$$y(t) = \frac{z^{-2}}{1 - 1.3\,z^{-1} + 0.4\,z^{-2}} \cdot u(t) + \frac{1 - 0.55\,z^{-1}}{1 - 1.3\,z^{-1} + 0.4\,z^{-2}} \cdot e(t), \qquad e(t) \sim WN(0, 16)$$

Thus:

$$A(z) = 1 - 1.3\,z^{-1} + 0.4\,z^{-2}, \qquad B(z) = 1, \qquad C(z) = 1 - 0.55\,z^{-1}, \qquad k = 2$$

$$y(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{C(z)}{A(z)} \cdot e(t)$$

1. Compute the predictor $\hat{y}(t|t-r)$ for $r = 2$
2. Compute the associated prediction error variance.
1) Compute the predictor ŷ (t|t − r) for r = 2.
In order to find the predictor, first of all we have to check whether $y(t)$ is in canonical representation:

1. Same degree: ✓
2. Coprime: ✓
3. Monic: ✓
4. Roots inside the unit circle: ✓

We conclude that the process is in its canonical representation. The details of the canonical representation can be looked up in the previous lecture.

Let's compute 2 steps of the polynomial long division between C (z) and A (z):

$$\begin{array}{r|l}
1 - 0.55\,z^{-1} & 1 - 1.3\,z^{-1} + 0.4\,z^{-2} \\
-1 + 1.3\,z^{-1} - 0.4\,z^{-2} & 1 + 0.75\,z^{-1} \\ \hline
0.75\,z^{-1} - 0.4\,z^{-2} & \\
-0.75\,z^{-1} + 0.975\,z^{-2} - 0.3\,z^{-3} & \\ \hline
0.575\,z^{-2} - 0.3\,z^{-3} &
\end{array}$$


where

$$A(z) = 1 - 1.3\,z^{-1} + 0.4\,z^{-2}, \qquad B(z) = 1, \qquad C(z) = 1 - 0.55\,z^{-1}, \qquad k = 2$$

$$Q_2(z) = 1 + 0.75\,z^{-1}, \qquad R_2(z) = 0.575\,z^{-2} - 0.3\,z^{-3} = z^{-2} \cdot \left(0.575 - 0.3\,z^{-1}\right)$$

Notice that:

$$R_2(z) = z^{-r} \cdot \tilde{R}_2(z)$$

Since, in this case, $r = 2$, we have:

$$R_2(z) = z^{-2} \cdot \tilde{R}_2(z) \;\Longrightarrow\; \tilde{R}_2(z) = 0.575 - 0.3\,z^{-1}$$
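As before, a quick numerical check (ours, not from the handout) of the identity $C(z) = Q_2(z) \cdot A(z) + R_2(z)$:

```python
import numpy as np

# Coefficients in increasing powers of z^-1
A = np.array([1, -1.3, 0.4])
C = np.array([1, -0.55])
Q2 = np.array([1, 0.75])
R2 = np.array([0, 0, 0.575, -0.3])

lhs = np.convolve(Q2, A)                           # Q2(z) * A(z), degree 3
print(np.allclose(lhs + R2, np.pad(C, (0, 2))))    # True: C = Q2*A + R2
```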

Whitening filter from an ARMAX model:

$$y(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{C(z)}{A(z)} \cdot e(t)$$

$$A(z) \cdot y(t) = B(z) \cdot u(t-k) + C(z) \cdot e(t)$$

$$C(z) \cdot e(t) = A(z) \cdot y(t) - B(z) \cdot u(t-k)$$

The whitening filter is then:

$$e(t) = \frac{A(z) \cdot y(t) - B(z) \cdot u(t-k)}{C(z)} = \frac{A(z)}{C(z)} \cdot y(t) - \frac{B(z)}{C(z)} \cdot u(t-k)$$


Optimal predictor from an ARMAX model:


Performing $r$ steps of long division on $C(z)/A(z)$ splits the process into a predictable and an unpredictable part:

$$y(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{C(z)}{A(z)} \cdot e(t) = \frac{B(z)}{A(z)} \cdot u(t-k) + \underbrace{\left( \frac{R_r(z)}{A(z)} + Q_r(z) \right)}_{r \text{ steps of long division}} \cdot e(t)$$

$$y(t) = \underbrace{\frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{A(z)} \cdot e(t)}_{r\text{-step predictor}} + \underbrace{Q_r(z) \cdot e(t)}_{r\text{-step prediction error}}$$

The predictor from the noise is:

$$\hat{y}(t|t-r) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{A(z)} \cdot e(t)$$

The prediction error is:

$$\varepsilon(t) = Q_r(z) \cdot e(t)$$

The predictor from the available data, using the whitening filter, is:

$$\hat{y}(t|t-r) = \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{A(z)} \cdot \underbrace{\left( \frac{A(z)}{C(z)} \cdot y(t) - \frac{B(z)}{C(z)} \cdot u(t-k) \right)}_{\text{whitening filter}}$$

$$= \frac{B(z)}{A(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t) - \frac{R_r(z)}{A(z)} \cdot \frac{B(z)}{C(z)} \cdot u(t-k)$$

$$= \frac{B(z) \cdot (C(z) - R_r(z))}{A(z) \cdot C(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t)$$

$$= \frac{B(z) \cdot Q_r(z) \cdot A(z)}{A(z) \cdot C(z)} \cdot u(t-k) + \frac{R_r(z)}{C(z)} \cdot y(t)$$

$$= \frac{B(z) \cdot Q_r(z)}{C(z)} \cdot u(t-k) + \frac{\tilde{R}_r(z)}{C(z)} \cdot y(t-r), \qquad \forall k \geq r$$

Using the result from the long division:

$$\frac{C(z)}{A(z)} = \frac{R_r(z)}{A(z)} + Q_r(z) \;\Longrightarrow\; C(z) = R_r(z) + Q_r(z) \cdot A(z) \;\Longrightarrow\; C(z) - R_r(z) = Q_r(z) \cdot A(z)$$

The 2-step optimal predictor is then given by:


$$\hat{y}(t|t-2) = \frac{B(z) \cdot Q_2(z)}{C(z)} \cdot u(t-2) + \frac{\tilde{R}_2(z)}{C(z)} \cdot y(t-2) = \frac{1 + 0.75\,z^{-1}}{1 - 0.55\,z^{-1}} \cdot u(t-2) + \frac{0.575 - 0.3\,z^{-1}}{1 - 0.55\,z^{-1}} \cdot y(t-2)$$
with the recursive time-domain representation:
$$\left(1 - 0.55\,z^{-1}\right) \cdot \hat{y}(t|t-2) = \left(0.575 - 0.3\,z^{-1}\right) \cdot y(t-2) + \left(1 + 0.75\,z^{-1}\right) \cdot u(t-2)$$

$$\hat{y}(t|t-2) - 0.55 \cdot \hat{y}(t-1|t-3) = 0.575 \cdot y(t-2) - 0.3 \cdot y(t-3) + u(t-2) + 0.75 \cdot u(t-3)$$

$$\hat{y}(t|t-2) = 0.55 \cdot \hat{y}(t-1|t-3) + 0.575 \cdot y(t-2) - 0.3 \cdot y(t-3) + u(t-2) + 0.75 \cdot u(t-3)$$

2) Compute the associated prediction error variance.


The 2-step prediction error variance can be simply computed as the variance of the unpredictable part of the
process, which is:
$$E\big[\varepsilon(t)^2\big] = E\big[(y(t) - \hat{y}(t|t-2))^2\big] = E\big[(Q_2(z) \cdot e(t))^2\big] = E\big[(e(t) + 0.75 \cdot e(t-1))^2\big]$$

$$= E\big[e(t)^2\big] + 0.75^2 \cdot E\big[e(t-1)^2\big] + 1.5 \cdot \underbrace{E[e(t) \cdot e(t-1)]}_{=\,0 \text{ since } e \sim WN} = \lambda^2_e + 0.75^2 \cdot \lambda^2_e = 16 + 0.75^2 \cdot 16 = 25$$
