
EE2S31 Signal Processing – Stochastic Processes

Lecture 6: Filtering stochastic processes – Suppl. 1, 2

Alle-Jan van der Veen

25 May 2022
Ch.13.9 Ergodicity
Estimating expected value: ensemble average

How can we estimate E[X (n)] if we don’t know the PDF?

Ensemble average: µ̂(n) = (1/I) Σ_{i=1}^{I} x(n, s_i)
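
As a numerical sketch of this estimator (assuming numpy; the toy process with mean 2, the seed, and the sizes I and N are illustrative, not from the lecture):

    import numpy as np

    rng = np.random.default_rng(0)
    I, N = 1000, 100                  # I realizations, N time samples each

    # Toy WSS process: x(n, s_i) = 2 + white Gaussian noise, one row per realization
    x = 2 + rng.standard_normal((I, N))

    # Ensemble average: for each time n, average across the I realizations
    mu_hat = x.mean(axis=0)           # each entry estimates E[X(n)], here 2
    print(mu_hat[:5])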
We will need many independent observations!
If WSS process: E[X (n)] is the same for all n. Can we use that?

6. filtering stochastic processes 2 / 32


Ergodicity

If the process is ergodic, we can also average over time using a single
realization (in this case x(n, s2 )):

µ̂ = (1/N) Σ_{n=1}^{N} x(n, s_i)

Definition: for an ergodic process, the time average X̄ and the ensemble
average E[X] are the same.
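
A sketch of the corresponding time average (same toy process as above; for an ergodic process it converges to the same value):

    import numpy as np

    rng = np.random.default_rng(0)
    x = 2 + rng.standard_normal(100_000)   # one long realization of the toy process

    # Time average over a single realization; valid because this process is ergodic
    mu_bar = x.mean()
    print(mu_bar)                          # close to the ensemble mean of 2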

6. filtering stochastic processes 3 / 32


Ergodicity
Definition:
For a stationary random process X (t), define the time averages of a
sample function x(t) as
X̄(T) = (1/2T) ∫_{-T}^{T} x(t) dt

X²(T) = (1/2T) ∫_{-T}^{T} x²(t) dt        (time-averaged power)

These can be measured from a single available observation.


By definition, for an ergodic process

lim_{T→∞} X̄(T) = µX

WSS is not sufficient! The autocovariance CX(τ) must go to zero quickly
enough, so that time samples are sufficiently independent.
6. filtering stochastic processes 4 / 32
Ergodicity
Not all WSS processes are ergodic!
Process: Xn = A, with random amplitude A, uniform in [0, 1].

µX = E[A] = 0.5
CX[k] = E[Xn Xn+k] − E[Xn]E[Xn+k] = var[A] = 1/12

Every realization is constant in time, so the time average of a single
realization equals its drawn value of A, not µX = 0.5. Note also that
CX[k] does not decay to zero.
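
A short simulation makes the failure concrete (a sketch, assuming numpy; sizes and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    I, N = 5, 1000

    A = rng.uniform(0, 1, size=I)       # one random amplitude per realization
    X = np.tile(A[:, None], (1, N))     # X_n = A: each realization is constant

    print(X.mean(axis=1))   # time averages: equal to the drawn A values, spread over [0, 1]
    print(X.mean(axis=0))   # ensemble averages per n: approach 0.5 only as I grows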

6. filtering stochastic processes 5 / 32


Ergodicity
Theorem 13.13 Let X (t) be stationary, with expected value µX and
autocovariance CX (τ ).
If ∫_{-∞}^{∞} |CX(τ)| dτ < ∞, then the sequence X̄(T), X̄(2T), · · · is an
unbiased, consistent sequence of estimators of µX.

It suffices that CX(0) < ∞ (finite variance) and CX(τ) = 0 for |τ| > τ0.

Proof

Unbiased:

E[X̄(T)] = E[ (1/2T) ∫_{-T}^{T} X(t) dt ] = (1/2T) ∫_{-T}^{T} E[X(t)] dt
         = (1/2T) ∫_{-T}^{T} µX dt = µX

6. filtering stochastic processes 6 / 32


Ergodicity
Proof (continued)

Consistent: it is sufficient to show that lim_{T→∞} var[X̄(T)] = 0:

var[X̄(T)] = E[ ( (1/2T) ∫_{-T}^{T} (X(t) − µX) dt )² ]
           = (1/(2T)²) E[ ∫_{-T}^{T} (X(t) − µX) dt · ∫_{-T}^{T} (X(t') − µX) dt' ]
           = (1/(2T)²) ∫_{-T}^{T} ∫_{-T}^{T} E[(X(t) − µX)(X(t') − µX)] dt' dt
           = (1/(2T)²) ∫_{-T}^{T} ∫_{-T}^{T} CX(t − t') dt' dt

where the inner integral is bounded by some constant K.

6. filtering stochastic processes 7 / 32


Ergodicity
Proof (continued)

Note that

∫_{-T}^{T} CX(t − t') dt' ≤ ∫_{-∞}^{∞} |CX(τ)| dτ < ∞

so that there is a constant K such that

var[X̄(T)] ≤ (1/(2T)²) ∫_{-T}^{T} K dt = K/(2T)

Thus lim_{T→∞} var[X̄(T)] ≤ lim_{T→∞} K/(2T) = 0.

6. filtering stochastic processes 8 / 32


Similar for the Autocorrelation Function (1)

Ensemble average: R̂X[k] = (1/I) Σ_{i=1}^{I} x(n, s_i) x(n + k, s_i)

Because the process is WSS, the value of n is not important.

6. filtering stochastic processes 9 / 32


Similar for the Autocorrelation Function (2)

Using time averages, the autocorrelation function can be estimated from a
single observation as

R̄X[k] = (1/N) Σ_{n=1}^{N} x(n, s_i) x(n + k, s_i)

6. filtering stochastic processes 10 / 32


Similar for the Autocorrelation Function (3)
The basic estimator form for time averages

R̄X[k] = (1/N) Σ_{n=1}^{N} x(n, s_i) x(n + k, s_i)

uses 2N − 1 data samples to estimate N lags of RX[k].

Example for k = 0, 1, 2 and N = 3:

R̄X[0] = (1/3) {x(1)² + x(2)² + x(3)²}
R̄X[1] = (1/3) {x(1)x(2) + x(2)x(3) + x(3)x(4)}
R̄X[2] = (1/3) {x(1)x(3) + x(2)x(4) + x(3)x(5)}

Also set R̄X[−1] = R̄X[1], R̄X[−2] = R̄X[2].
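
The same sums can be checked mechanically (a sketch, assuming numpy; the sample values x(1)..x(5) are arbitrary):

    import numpy as np

    x = np.array([1.0, -0.5, 2.0, 0.3, -1.2])   # x(1)..x(5), arbitrary values
    N = 3                                       # products averaged per lag

    # Each lag k averages N products, so 2N - 1 = 5 samples are used in total
    R_bar = [np.dot(x[:N], x[k:k + N]) / N for k in range(3)]
    print(R_bar)                                # R̄_X[0], R̄_X[1], R̄_X[2]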

6. filtering stochastic processes 11 / 32


Similar for the Autocorrelation Function (4)
Modified estimator (using N samples to estimate N correlation lags):
R̂X[k] = (1/N) Σ_{n=1}^{N−k} x(n, s_i) x(n + k, s_i)

For k = 0, 1, 2 and N = 3:

R̂X[0] = (1/3) {x(1)² + x(2)² + x(3)²}
R̂X[1] = (1/3) {x(1)x(2) + x(2)x(3)}
R̂X[2] = (1/3) {x(1)x(3)}

This estimator is biased: E[R̂X[k]] = ((N − k)/N) RX[k].

Unbiased version: R̃X[k] = (1/(N − k)) Σ_{n=1}^{N−k} x(n, s_i) x(n + k, s_i)
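
Both estimators in code (a sketch, assuming numpy; white noise is used so the true RX[k] = δ[k] is known):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(10_000)      # white: true R_X[k] = delta[k]

    def acorr(x, k, unbiased=False):
        """Estimate R_X[k] from a single realization."""
        s = np.dot(x[:len(x) - k], x[k:])
        return s / (len(x) - k) if unbiased else s / len(x)

    print([acorr(x, k) for k in range(3)])                 # E[.] = (N-k)/N * R_X[k]
    print([acorr(x, k, unbiased=True) for k in range(3)])  # E[.] = R_X[k]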
6. filtering stochastic processes 12 / 32
Suppl. 1, 2: Linear filtering of stochastic processes

Signals are often represented as sample functions of WSS processes:

• Use the PDF/PMF to describe the amplitude characteristics
• Use the autocorrelation to describe the time/spatially varying nature of
  the signals.

6. filtering stochastic processes 13 / 32


Linear filtering stochastic processes

If the input is a sample function x(t) of a random process X(t), we get

y(t) = ∫_{-∞}^{∞} h(u) x(t − u) du = h(t) ∗ x(t)

and therefore we write

Y(t) = ∫_{-∞}^{∞} h(u) X(t − u) du = h(t) ∗ X(t)

6. filtering stochastic processes 14 / 32


Expected value of the output

In general:

E[Y(t)] = E[ ∫_{-∞}^{∞} h(u) X(t − u) du ] = ∫_{-∞}^{∞} h(u) E[X(t − u)] du
        = h(t) ∗ E[X(t)]

If X(t) is WSS, then E[X(t)] = µX is constant:

E[Y(t)] = µX ∫_{-∞}^{∞} h(u) du

6. filtering stochastic processes 15 / 32


Crosscorrelation (WSS input)

Next, we look at the autocorrelation of Y(t), and the crosscorrelation of
X(t) with Y(t). It is convenient to first compute the crosscorrelation:

RXY(τ) = E[X(t) Y(t + τ)]
       = E[ X(t) ∫_{-∞}^{∞} h(v) X(t + τ − v) dv ]
       = ∫_{-∞}^{∞} h(v) E[X(t) X(t + τ − v)] dv
       = ∫_{-∞}^{∞} h(v) RX(τ − v) dv = h(τ) ∗ RX(τ)

6. filtering stochastic processes 16 / 32


Autocorrelation (WSS input)

RXY(τ) = h(τ) ∗ RX(τ)

The autocorrelation of the output is then

RY(τ) = E[Y(t) Y(t + τ)]
      = E[ ∫_{-∞}^{∞} h(u) X(t − u) du · ∫_{-∞}^{∞} h(v) X(t + τ − v) dv ]
      = ∫_{-∞}^{∞} h(u) ∫_{-∞}^{∞} h(v) E[X(t − u) X(t + τ − v)] dv du
      = ∫_{-∞}^{∞} h(u) ∫_{-∞}^{∞} h(v) RX(τ − v + u) dv du
      = ∫_{-∞}^{∞} h(u) RXY(τ + u) du = h(−τ) ∗ RXY(τ)
      = h(−τ) ∗ h(τ) ∗ RX(τ)

6. filtering stochastic processes 17 / 32


Autocorrelation (WSS input)

Hence, if X (t) is WSS, then Y (t) is also WSS: E[Y (t)] is independent
of time, and RY (t, τ ) only depends on the shift τ .

Since also RXY (t, τ ) only depends on τ , we conclude that X (t) and
Y (t) are jointly WSS.

6. filtering stochastic processes 18 / 32


Output distribution

What can we say about the PDF (or PMF) of the output?

In general this is difficult!


Exception: a Gaussian stochastic process.

6. filtering stochastic processes 19 / 32


Output distribution

If the input X (t) is a stationary Gaussian stochastic process, and


the filter is LTI with impulse response h(t),
then the output is also stationary Gaussian, with expected value and
autocorrelation as specified before.

“Handwaving proof”: Remember that a linear transformation of


jointly Gaussian RVs gives jointly Gaussian RVs.

6. filtering stochastic processes 20 / 32


Summarizing

WSS input gives WSS output

Statistical descriptions of X(t):         Statistical descriptions of Y(t):
• mean µX                                 • mean µY = µX ∫ h(t) dt
• autocorrelation RX(τ)                   • RY(τ) = h(−τ) ∗ h(τ) ∗ RX(τ)

WSS Gaussian input gives WSS Gaussian output
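
Both relations are easy to verify in discrete time (a sketch, assuming numpy; the FIR filter and the input statistics are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    h = np.array([0.5, 0.3, 0.2])              # arbitrary FIR impulse response
    x = 10 + rng.standard_normal(200_000)      # WSS input with mu_X = 10, unit variance

    y = np.convolve(x, h, mode='valid')        # y_n = sum_j h_j x_{n-j}

    print(y.mean(), 10 * h.sum())              # mu_Y vs mu_X * sum_j h_j
    print(y.var(), (h**2).sum())               # output variance: sum_j h_j^2 for white input noise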


6. filtering stochastic processes 21 / 32
Example 1

Let X(t) be WSS with E[X(t)] = 10. Apply a linear filter with impulse response

h(t) = e^{t/0.2}   for 0 ≤ t ≤ 0.1 sec,   and 0 otherwise.

Determine E[Y(t)].

E[Y(t)] = E[X(t)] ∫_{-∞}^{∞} h(t) dt = 10 ∫_{0}^{0.1} e^{t/0.2} dt = 2(e^{0.5} − 1)

6. filtering stochastic processes 22 / 32


Example 2

Given the moving-average filter h(t) = 1/T for 0 ≤ t ≤ T (and 0 otherwise,
as used in the solution below), and the white Gaussian noise process W(t)
with RW(τ) = η0 δ(τ).

Find:
• E[Y(t)]
• Crosscorrelation RWY(τ)
• Autocorrelation RY(τ)

("White" means zero mean, iid.)

6. filtering stochastic processes 23 / 32


Example 2

E[W(t)] = 0 (white Gaussian noise process). So

E[Y(t)] = ∫_{0}^{T} (1/T) E[W(t)] dt = 0.

Crosscorrelation of input W(t) with output Y(t):

RWY(τ) = ∫_{-∞}^{∞} h(u) RW(τ − u) du = (η0/T) ∫_{0}^{T} δ(τ − u) du
       = η0/T   for 0 ≤ τ ≤ T,   and 0 otherwise.

6. filtering stochastic processes 24 / 32


Example 2
RY(τ) = ∫_{-∞}^{∞} h(v) RWY(τ + v) dv = (1/T) ∫_{0}^{T} RWY(τ + v) dv

First write RWY(τ + v) as a function of v:

RWY(τ + v) = η0/T   for 0 ≤ τ + v ≤ T, i.e., −τ ≤ v ≤ T − τ,   and 0 otherwise.

The integration boundaries now depend on τ. Hence, we get two cases:

0 ≤ τ ≤ T:    RY(τ) = (1/T) ∫_{0}^{T−τ} (η0/T) dv = η0 (T − τ)/T²
−T ≤ τ ≤ 0:   RY(τ) = (1/T) ∫_{−τ}^{T} (η0/T) dv = η0 (T + τ)/T²

Altogether:   RY(τ) = η0 (T − |τ|)/T²   for |τ| ≤ T,   and 0 otherwise.
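
Since RW(τ) = η0 δ(τ), this is just RY(τ) = η0 (h ⋆ h)(τ), which can be checked deterministically (a sketch, assuming numpy; the values of T and η0 are arbitrary):

    import numpy as np

    T, eta0, dt = 1.0, 2.0, 1e-3
    h = np.full(int(T / dt), 1.0 / T)          # h(t) = 1/T on [0, T]

    # R_Y(tau) = eta0 * integral h(v) h(v + tau) dv, computed on a grid
    R_Y = eta0 * np.correlate(h, h, mode='full') * dt
    tau = (np.arange(len(R_Y)) - (len(h) - 1)) * dt

    k = np.argmin(np.abs(tau - 0.5))           # check at tau = 0.5
    print(R_Y[k], eta0 * (T - 0.5) / T**2)     # both approx 1.0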
6. filtering stochastic processes 25 / 32
Example 3

Here the input X(t) has RX(τ) = 4 + 3δ(τ), and h(t) = 3e^{−t} u(t).

RY(τ) = h(τ) ∗ h(−τ) ∗ RX(τ) = g(τ) ∗ RX(τ)

g(τ) = h(τ) ∗ h(−τ) = ∫_{-∞}^{∞} 3e^{−t} u(t) · 3e^{−t+τ} u(t − τ) dt
     = 9e^{τ} ∫_{τ}^{∞} e^{−2t} dt = (9/2) e^{−τ}   if τ ≥ 0
     = 9e^{τ} ∫_{0}^{∞} e^{−2t} dt = (9/2) e^{τ}    if τ < 0

6. filtering stochastic processes 26 / 32


Example 3

 
RY(τ) = g(τ) ∗ RX(τ) = [ (9/2) e^{−τ} u(τ) + (9/2) e^{τ} u(−τ) ] ∗ (4 + 3δ(τ))

      = ∫_{−∞}^{+∞} (9/2) (e^{−t} u(t) + e^{t} u(−t)) (4 + 3δ(τ − t)) dt

      = (36/2) ∫_{0}^{∞} e^{−t} dt + (36/2) ∫_{−∞}^{0} e^{t} dt
        + (27/2) e^{−τ} u(τ) + (27/2) e^{τ} u(−τ)

      = 36 + (27/2) e^{−|τ|}
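
A numeric sanity check of this convolution (a sketch, assuming numpy; the delta in RX is approximated by one grid sample of height 3/Δ):

    import numpy as np

    dt = 1e-3
    t = np.arange(-10, 10, dt)
    g = 4.5 * np.exp(-np.abs(t))               # g(tau) = (9/2) e^{-|tau|}

    R_X = np.full(len(t), 4.0)                 # R_X(tau) = 4 + 3 delta(tau)
    R_X[len(t) // 2] += 3.0 / dt               # discrete stand-in for 3 delta(tau)

    R_Y = dt * np.convolve(g, R_X, mode='same')
    print(R_Y[len(t) // 2], 36 + 13.5)         # at tau = 0: both approx 49.5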
6. filtering stochastic processes 27 / 32
Sampling and filtering of random processes
Let X (t) be a continuous WSS process with E[X (t)] = µX and RX (τ ).
Sample with period Ts : Xn = X (nTs ). Then
Xn is also WSS with E[Xn ] = µX and RX [k] = RX (kTs ), because
E[Xn ] = E[X (nTs )] = µX
RX [k] = E[Xn Xn+k ] = E[X (nTs )X ([n + k]Ts )] = RX (kTs ).
Filtering of discrete-time random sequences: Yn = hn ∗ Xn = Σ_j hj Xn−j

E[Yn] = E[Xn] Σ_j hj

RXY[k] = E[Xn Yn+k] = Σ_j hj RX[k − j] = hk ∗ RX[k]

RY[k] = E[Yn Yn+k] = Σ_i Σ_j hi hj RX[k + i − j] = h−k ∗ RXY[k]

(the inner sum over j equals RXY[k + i])

6. filtering stochastic processes 28 / 32
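
A discrete-time check of RXY[k] = hk ∗ RX[k] (a sketch, assuming numpy; the input is white, so RX[k] = δ[k] and RXY[k] should reproduce hk):

    import numpy as np

    rng = np.random.default_rng(4)
    h = np.array([1.0, 0.6, 0.25])             # arbitrary FIR coefficients
    x = rng.standard_normal(500_000)           # white input: R_X[k] = delta[k]

    y = np.convolve(x, h)[:len(x)]             # Y_n = sum_j h_j X_{n-j}

    # Empirical crosscorrelation R_XY[k] = E[X_n Y_{n+k}] for k = 0, 1, 2
    M = len(x) - 3
    R_XY = [np.dot(x[:M], y[k:M + k]) / M for k in range(3)]
    print(R_XY)                                # approx [1.0, 0.6, 0.25] = h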

Example
Let Yn be a sampled version of stochastic process Y(t). Y(t) has
autocorrelation function

RY(τ) = 10^{−9} (10^{−3} − |τ|)   for |τ| ≤ 10^{−3},   and 0 otherwise.

What is the autocorrelation function of the sampled process Yn if
Fs = 10^4 samples/sec?

RY[k] = RY(k/Fs) = 10^{−9} (10^{−3} − |k/Fs|)     for |k/Fs| ≤ 10^{−3}
      = 10^{−9} (10^{−3} − |k| · 10^{−4})         for |k| · 10^{−4} ≤ 10^{−3}
      = 10^{−6} (1 − 0.1|k|)                      for |k| ≤ 10,   and 0 otherwise.

6. filtering stochastic processes 29 / 32



Problem 2.7 (modified notation)
Consider Xn = a Xn−1 + Vn, where Vn is iid with E[Vn] = 0 and RV[k] = σ² δ[k].
Find RX [k].

RVX[k] = E[Vn−k Xn] = E[Vn−k (a Xn−1 + Vn)]
       = a RVX[k − 1] + σ² δ[k]

⇒ RVX[k] = σ² a^k   for k ≥ 0,   and 0 for k < 0.

RXV[k] = RVX[−k]

RX[k] = E[Xn−k Xn] = E[Xn−k (a Xn−1 + Vn)]
      = a RX[k − 1] + RXV[k]

6. filtering stochastic processes 30 / 32


Problem 2.7 (cont’d)
We saw until now:

RX[k] = a RX[k − 1] + RXV[k],   where RXV[k] = σ² a^{−k} for k ≤ 0, and 0 for k > 0.

k > 0:   RX[k] = a RX[k − 1] = · · · = a^k RX[0]

k = 0:   RX[0] = a RX[−1] + σ² = a RX[1] + σ² = a² RX[0] + σ²

         ⇒ RX[0] = σ²/(1 − a²) =: σX²

It follows, for k ≥ 0: RX[k] = a^k σX²; and for k < 0: RX[k] = RX[−k] = a^{−k} σX².

⇒ RX[k] = a^{|k|} σ²/(1 − a²)
6. filtering stochastic processes 31 / 32


To do for this lecture:

Work through a selection of exercises from the Supplement:
1.1, 1.3, 2.1, 2.3, 2.5, 2.7
(Unfortunately, the supplement has far fewer exercises.)

Next lecture, we’ll do Supplement Sections 5 and 6.

6. filtering stochastic processes 32 / 32
